---
abstract: 'In atomic and molecular phase measurements using laser-induced fluorescence detection, optical cycling can enhance the effective photon detection efficiency and hence improve sensitivity. We show that detecting many photons per atom or molecule, while necessary, is not a sufficient condition to approach the quantum projection limit for detection of the phase in a two-level system. In particular, detecting the maximum number of photons from an imperfectly closed optical cycle reduces the signal-to-noise ratio (SNR) by a factor of $\sqrt{2}$, compared to the ideal case in which leakage from the optical cycle is sufficiently small. We derive a general result for the SNR in a system in terms of the photon detection efficiency, probability for leakage out of the optical cycle per scattered photon, and the product of the average photon scattering rate and total scattering time per atom or molecule.'
author:
- Zack Lasner
- 'D. DeMille'
bibliography:
- 'cyclingPaperBibFinal.bib'
title: 'Statistical sensitivity of phase measurements via laser-induced fluorescence with optical cycling detection'
---

Atoms and molecules are powerful platforms to probe phenomena at quantum-projection-limited precision. In many atomic and molecular experiments, a quantum state is read out by laser-induced fluorescence (LIF), in which population is driven to a short-lived state and the resulting fluorescence photons are detected. Due to geometric constraints on optical collection and technological limitations of photodetectors, the majority of emitted photons are typically undetected, reducing the experimental signal. Optical cycling transitions can be exploited to overcome these limitations, by scattering many photons per particle. In the limit that many photons from each particle are detected, the signal-to-noise ratio (SNR) may be limited by the quantum projection (QP) noise (often referred to as atom or molecule shot noise). 
LIF detection with photon cycling is commonly used in ultra-precise atomic clock [@Wynands2005; @Zelevinsky2008] and atom interferometer [@Cronin2009] experiments to approach the QP limit. Molecules possess additional features, beyond those in atoms, that make them favorable probes of fundamental symmetry violation [@ACMECollaboration2014; @Collaboration2018; @Hudson2011; @Devlin2015; @Hunter2012; @Kozyryev2017] and fundamental constant variation [@Borkowski2018; @Beloy2011; @DeMille2008; @Zelevinsky2008; @Shelkovnikov2008; @Kozyryev2018], as well as promising platforms for quantum information and simulation [@DeMille2002; @Liu2018; @Micheli2006; @Sundar2018; @Wall2015]. Many molecular experiments that have been proposed, or which are now being actively pursued, will rely on optical cycling to enhance measurement sensitivity while using LIF detection [@Collaboration2018; @Hunter2012; @Kozyryev2018; @Kozyryev2017; @ACMECollaboration2014; @Devlin2015]. Due to the absence of selection rules governing vibrational decays, fully closed molecular optical cycling transitions cannot be obtained: each photon emission is associated with a non-zero probability of decaying to a “dark” state that is no longer driven to an excited state by any lasers. However, for some molecules many photons can be scattered using a single excitation laser, and up to $\sim10^{6}$ photons have been scattered using multiple repumping lasers to return population from vibrationally excited states into the optical cycle [@DiRosa2004; @Shuman2009]. This has enabled, for example, laser cooling and magneto-optical trapping of molecules [@Shuman2010; @Barry2014; @Hummon2013; @Collopy2018; @Zhelyazkova2014; @Truppe2017; @Chae2017; @Anderegg2017]. Furthermore, some precision measurements rely on atoms in which no simply closed optical cycle exists [@Regan2002; @Parker2015]; our discussion here will be equally applicable to such species. 
These considerations motivate a careful study of LIF detection for precision measurement under the constraint of imperfectly closed optical cycling. Some consequences of loss during the cycling process have been considered in [@Rocco2014]. However, the effect of the statistical nature of the cycling process on the optimal noise performance has not been previously explored. In particular, the number of photons scattered before a particle (an atom or molecule) decays to an unaddressed dark state, and therefore ceases to fluoresce, is governed by a statistical distribution rather than a fixed finite number. We show that due to the width of this distribution, a naive cycling scheme reduces the SNR to below the QP limit. In particular, we find that in addition to the intuitive requirement that many photons from every particle are detected, to approach the QP limit it is also necessary that the probability of each particle exiting the cycling transition (via decay to a dark state outside the cycle) is negligible during detection. If this second condition is not satisfied, so that each particle scatters enough photons that it is very likely to have been optically pumped into a dark state, then the SNR is decreased by a factor of $\sqrt{2}$ below the QP limit. Consider an ensemble of $N$ particles in an effective two-level system, in a state of the form $$|\psi\rangle=(e^{-i\phi}|\uparrow\rangle+e^{i\phi}|\downarrow\rangle)/\sqrt{2}.$$ The relative phase $\phi$ is the quantity of interest in this discussion. It can be measured, for example, by projecting the wavefunction onto an orthonormal basis $\{|X\rangle\propto|\uparrow\rangle+|\downarrow\rangle,\,|Y\rangle\propto|\uparrow\rangle-|\downarrow\rangle\}$ such that $|\langle X|\psi\rangle|^{2}=\cos^{2}(\phi)$ and $|\langle Y|\psi\rangle|^{2}=\sin^{2}(\phi)$. 
In the LIF technique, this can be achieved by driving state-selective transitions, each addressing either $|X\rangle$ or $|Y\rangle$, through an excited state that subsequently decays to a ground state and emits a fluorescence photon. This light is detected, and the resulting total signals, $S_{X}$ and $S_{Y}$, are associated with each state. (This protocol is equivalent to the more standard Ramsey method, in which each spin is reoriented for detection by a spin-flip pulse and the population of spin-up and spin-down particles is measured [@Ramsey1950].) The measured value of the phase, $\tilde{\phi}$, is computed from the observed values of $S_{X}$ and $S_{Y}$. In the absence of optical cycling, the statistical uncertainty of the phase measurement is $\sigma_{\tilde{\phi}}=\frac{1}{2\sqrt{N\epsilon}}$, where $\epsilon$ is the photon detection efficiency and $0<\epsilon\leq1$. Note that $N\epsilon$ is the average number of detected photons; hence, this result is often referred to as the “photon shot noise limit.” In the ideal case of $\epsilon=1$, the QP limit (a.k.a. the atom or molecule shot noise limit) $\sigma_{\tilde{\phi}}=\frac{1}{2\sqrt{N}}$ is obtained. This scaling is derived as a limiting case of our general treatment below, where the effects of optical cycling are also considered. We suppose that the phase is projected onto the $\{|X\rangle,\,|Y\rangle\}$ basis independently for each particle. Repeated over the ensemble of particles, the total number of particles $N_{X}$ projected along $|X\rangle$ is drawn from a binomial distribution, $N_{X}\sim B(N,\,\cos^{2}\phi)$, where $x\sim f(\alpha_{1},\cdots,\alpha_{k})$ denotes that the random variable $x$ is drawn from the probability distribution $f$ parametrized by $\alpha_{1},\cdots,\alpha_{k}$, and $B(\nu,\,\rho)$ is the binomial distribution for the total number of successes in a sequence of $\nu$ independent trials that each have a probability $\rho$ of success. 
Therefore, $\overline{N_{X}}=N\,\cos^{2}\phi$ and $\sigma_{N_{X}}^{2}=N\,\cos^{2}\phi\sin^{2}\phi$, where $\bar{x}$ is the expectation value of a random variable $x$ and $\sigma_{x}$ is its standard deviation over many repetitions of an experiment. We define the number of photons scattered from the $i$-th particle to be $n_{i}$, where a “photon scatter” denotes laser excitation followed by emission of one spontaneous decay photon, and define $\overline{n_{i}}=\bar{n}$ (the average number of photons scattered per particle) and $\sigma_{n_{i}}=\sigma_{n}$. Note that these quantities are assumed to be the same for all particles (i.e., independent of $i$). The probability of detecting any given photon (including both imperfect optical collection and detector quantum efficiency) is $\epsilon$, such that each photon is randomly either detected or not detected. We define $d_{ij}$ to be a binary variable indexing whether the $j$-th photon scattered from the $i$-th particle is detected. Therefore, $d_{ij}\sim B(1,\,\epsilon)$, and it follows that $\overline{d_{ij}}=\epsilon$ and $\sigma_{d_{ij}}^{2}=\epsilon(1-\epsilon)$. We define the signal of the measurement of a particular quadrature $|X\rangle$ or $|Y\rangle$ from the ensemble, when projecting onto that quadrature, to be the total number of photons detected. For example, the signal $S_{X}$ from particles projected along $|X\rangle$ is $$S_{X}=\sum_{i=1}^{N_{X}}\sum_{j=1}^{n_{i}}d_{ij}.\label{eq:Sx definition}$$ Explicitly, among $N$ total particles, $N_{X}$ are projected by the excitation light onto the $|X\rangle$ state and the rest are projected onto $|Y\rangle$. The $i$-th particle projected onto $|X\rangle$ scatters a total of $n_{i}$ photons, and we count each photon that is detected (in which case $d_{ij}=1$). The right-hand side of Eq. \[eq:Sx definition\] depends on $\phi$ implicitly through $N_{X}$, and we use this dependence to compute $\tilde{\phi}$, the measured value of $\phi$. 
Because $N_{X},\,n_{i},$ and $d_{ij}$ are all statistical quantities, the extracted value $\tilde{\phi}$ has a statistical uncertainty. The QP limit is achieved when the only contribution to uncertainty arises from $N_{X}$ due to projection onto the $\{|X\rangle,|Y\rangle\}$ basis. We can compute $\overline{S_{X}}$ by repeated application of Wald’s lemma [@Bruss1991; @Wald2013], $\overline{\sum_{i=1}^{m}x_{i}}=\bar{m}\bar{x}$. This results in $$\overline{S_{X}}=N\cos^{2}\phi\,\bar{n}\epsilon.\label{eq:ESx}$$ That is, the expected signal from projecting onto the $|X\rangle$ state is (as could be anticipated) simply the product of the average number of particles in $|X\rangle$, $N\cos^{2}\phi$, the number of photons scattered per particle, $\bar{n}$, and the probability of detecting each photon, $\epsilon$. We compute the variance in $S_{X}$ by repeated use of the law of total variance [@Blitzstein], $\sigma_{a}^{2}=\overline{\sigma_{a|b}^{2}}+\sigma_{\overline{a|b}}^{2}$, where $\overline{a|b}$ denotes the mean of $a$ conditional on a fixed value of $b$ and, analogously, $\sigma_{a|b}^{2}$ denotes the variance of $a$ conditional on a fixed value of $b$. This gives $$\sigma_{S_{X}}^{2}=N\cos^{2}\phi\,\bar{n}\epsilon^{2}\left(\frac{1}{\epsilon}+\frac{\sigma_{n}^{2}}{\bar{n}}-1+\bar{n}\sin^{2}\phi\right).$$ The results for $S_{Y}$ are identical, with the substitution $\cos^{2}\phi\leftrightarrow\sin^{2}\phi$. Many atomic clocks [@Weyers2001; @Jefferts2002; @Kurosu2004; @Levi2004; @Szymaniec2005a] and some molecular precision measurement experiments [@ACMECollaboration2014; @Devlin2015] measure both $S_{X}$ and $S_{Y}$, while others detect only a single state [@Collaboration2018; @Hudson2011; @Regan2002; @Parker2015]. In what follows, we assume that both states are probed. The case of detecting only one state, with some means of normalizing for variations in $N\bar{n}\epsilon$, can be worked out using similar considerations. 
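As a check on these moment formulas, the detection process can be simulated directly. The sketch below uses illustrative parameters of our own choosing, with a Poisson-distributed photon number per particle (so that $\sigma_{n}^{2}/\bar{n}=1$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (our own, not from the paper): N particles at
# phase phi, Poisson(lam) photons per particle (so nbar = lam and
# sigma_n**2 = lam), per-photon detection efficiency eps.
N, phi, lam, eps, trials = 1000, 0.7, 5.0, 0.3, 20_000
c2, s2 = np.cos(phi)**2, np.sin(phi)**2

S_X = np.empty(trials)
for t in range(trials):
    N_X = rng.binomial(N, c2)               # quantum projection onto |X>
    n_total = rng.poisson(lam, N_X).sum()   # photons scattered by those particles
    S_X[t] = rng.binomial(n_total, eps)     # each photon detected w.p. eps

sn2_over_nbar = 1.0                         # sigma_n**2 / nbar = 1 for Poisson
mean_theory = N * c2 * lam * eps            # Eq. [eq:ESx]
var_theory = N * c2 * lam * eps**2 * (1/eps + sn2_over_nbar - 1 + lam*s2)
print(S_X.mean() / mean_theory, S_X.var() / var_theory)   # both ~1
```

Since each photon is detected independently with probability $\epsilon$, a single binomial draw over the pooled photon count is statistically equivalent to drawing each $d_{ij}$ separately.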
In the regime $\phi=\pm\frac{\pi}{4}+\delta\phi$, where $\delta\phi\ll1$, sensitivity to small changes in phase, $\delta\phi$, is maximized. In this case, we define the measured phase deviation $\delta\tilde{\phi}$ by $\tilde{\phi}=\pm\frac{\pi}{4}+\delta\tilde{\phi}$. This is related to measured quantities via the asymmetry $\mathcal{A}=\frac{S_{X}-S_{Y}}{S_{X}+S_{Y}}=\mp\sin(2\delta\tilde{\phi})\approx\mp2\delta\tilde{\phi}$. When $N\gg1$, the average value of $\tilde{\phi}$ computed in this way is equal to the phase $\phi$ of the two-level system. The uncertainty in the asymmetry, $\sigma_{\mathcal{A}}\approx\frac{1}{N\bar{n}\epsilon}\sqrt{\sigma_{S_{X}}^{2}+\sigma_{S_{Y}}^{2}-2\sigma_{S_{X},S_{Y}}^{2}}$, can be computed to leading order in $\delta\phi$ from $\sigma_{S_{X}}$, $\sigma_{S_{Y}}$, and the covariance $\sigma_{S_{X},S_{Y}}^{2}=\overline{S_{X}S_{Y}}-\overline{S_{X}}\,\overline{S_{Y}}$ using standard error propagation [@Bevington1969]. We relate $\sigma_{\mathcal{A}}$ to the uncertainty in the measured phase by $\sigma_{\mathcal{A}}=2\sigma_{\tilde{\phi}}$. This relationship defines the statistical uncertainty in $\tilde{\phi}$, the measured value of $\phi$, for the protocol described here. The covariance, $\sigma_{S_{X},S_{Y}}^{2}=-\frac{N}{4}\bar{n}^{2}\epsilon^{2}$, can be calculated directly using the same methods already described. This result can be understood as follows: the photon scattering and detection processes for particles projected onto $|X\rangle$ and $|Y\rangle$ are independent, so the covariance between signals $S_{X}$ and $S_{Y}$ only arises from quantum projection. In the simplest case of perfectly efficient, noise-free detection and photon scattering, e.g., $\epsilon=1$, $\bar{n}=1$, and $\sigma_{n}=0$, the quantum projection noise leads to signal variances $\sigma_{S_{X}}^{2}=\sigma_{S_{Y}}^{2}=\frac{N}{4}$. 
The covariance is negative because a larger number of particles projected onto $|X\rangle$ is associated with a smaller number of particles projected onto $|Y\rangle$. The additional factor of $\bar{n}^{2}\epsilon^{2}$ for the general case accounts for the fact that both signals $S_{X}$ and $S_{Y}$ are scaled by $\bar{n}\epsilon$ when $\bar{n}$ photons are scattered per particle and a proportion $\epsilon$ of those photons are detected on average. The uncertainty in the measured phase, computed using the procedure just described, has the form $\sigma_{\tilde{\phi}}=\frac{1}{2\sqrt{N}}\sqrt{F}$, where we have defined the “excess noise factor” $F$ given in this phase regime by $$F=1+\frac{1}{\bar{n}}\left(\frac{1}{\epsilon}-1\right)+\frac{\sigma_{n}^{2}}{\bar{n}^{2}}.$$ It is instructive to evaluate this expression in some simple limiting cases. For example, consider the case when exactly one photon is scattered per particle so that $\bar{n}=1$ and $\sigma_{n}=0$. (This is typical for experiments with molecules, where optical excitation essentially always leads to decay into a dark state.) In this case, $F=\frac{1}{\epsilon}$ and the uncertainty in the phase measurement is $\sigma_{\tilde{\phi}}=\frac{1}{2\sqrt{N\epsilon}}$, as stated previously. Alternatively, as $\bar{n}\rightarrow\infty$, $F\rightarrow1+\left(\frac{\sigma_{n}}{\bar{n}}\right)^{2}$. This is in exact analogy with the excess noise of a photodetector whose average gain is $\bar{n}$ and whose variance in gain is $\sigma_{n}^{2}$ [@Knoll2010]. By inspection, the ideal result of $F\rightarrow1$ can be achieved only if $\frac{\sigma_{n}}{\bar{n}}\rightarrow0$, and either $\epsilon\rightarrow1$ or $\bar{n}\rightarrow\infty$. We now compute $\bar{n}$ and $\sigma_{n}^{2}$ for a realistic optical cycling process. We define the branching fraction to dark states, which are lost from the optical cycle, to be $b_{\ell}$. 
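Before specializing to the cycling distribution, the relation $\sigma_{\tilde{\phi}}=\frac{1}{2\sqrt{N}}\sqrt{F}$ can be verified end-to-end by Monte Carlo at $\phi=\pi/4$. This is a sketch with illustrative parameters of our own (Poisson photon number, so $\sigma_{n}^{2}=\bar{n}$):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (our own): Poisson(lam) photons per particle,
# detection efficiency eps, phase at the point of maximum sensitivity.
N, lam, eps, trials = 1000, 5.0, 0.3, 50_000

# phi = pi/4, so each particle is projected onto |X> with probability 1/2.
# A sum of independent Poisson variables is Poisson, and independent
# per-photon detection thins the pooled count binomially.
N_X = rng.binomial(N, 0.5, trials)
S_X = rng.binomial(rng.poisson(lam * N_X), eps)
S_Y = rng.binomial(rng.poisson(lam * (N - N_X)), eps)

A = (S_X - S_Y) / (S_X + S_Y)   # asymmetry; A = -sin(2*dphi) at phi = +pi/4
dphi = -np.arcsin(A) / 2        # measured phase deviation, true value 0

F = 1 + (1/lam) * (1/eps - 1) + lam / lam**2   # sigma_n**2 = lam for Poisson
print(dphi.std() * 2 * np.sqrt(N) / np.sqrt(F))   # ~1
```

The anticorrelation between $S_{X}$ and $S_{Y}$ enters through the shared draw of $N_{X}$, so the covariance term is included automatically.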
We assume that each particle interacts with the excitation laser light for a time $T$, during which the scattering rate of a particle in the optical cycle is $r$. Therefore, an average of $rT$ photons would be scattered in the absence of decay to dark states, i.e. when $b_{\ell}=0$. (All of our results hold for a time-dependent scattering rate $r(t)$, with the substitution $rT\rightarrow\int r(t)dt$.) Note that in the limit $rT\rightarrow\infty$, $1/b_{\ell}$ photons are scattered per particle on average. Recall that the number of photons scattered from the $i$-th particle, when projected to a given state, is $n_{i}$. We define the probability that a particle emits exactly $n_{i}$ photons to be $P(n_{i};\,rT,b_{\ell})$. This probability distribution can be computed by first ignoring the decay to dark states. For the case where $b_{\ell}=0$, the number of photons emitted in time $T$ follows a Poisson distribution with average number of scattered photons $rT$. For the more general case where $b_{\ell}>0$, we assign a binary label to each photon depending on whether it is associated with a decay to a dark state. Each decay is characterized by a Bernoulli process, and we use the conventional labels of “successful” (corresponding to decay to an optical cycling state) and “unsuccessful” (corresponding to decay to a dark state) for each outcome. Then $P(n_{i};\,rT,b_{\ell})$ is the probability that there are exactly $n_{i}$ events in the Poisson process, all of which are successful, or there are at least $n_{i}$ events such that the first $n_{i}-1$ are successful and the $n_{i}$-th is unsuccessful. (For concreteness, we have assumed that unsuccessful decays, i.e., those that populate dark states, emit photons with the same detection probability as all successful decays. The opposite case, in which decays to dark states are always undetected, can be worked out with the same approach and leads to similar conclusions.) 
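This verbal construction of $P(n_{i};\,rT,b_{\ell})$ translates directly into code; the following sketch (function name is ours) builds the distribution and checks that it is normalized:

```python
from math import exp, factorial

def photon_number_pmf(n, rT, b_l):
    """P(n; rT, b_l): either exactly n Poisson(rT) events occur and all are
    'successful' (the particle stays in the cycle), or at least n occur with
    the first n-1 successful and the n-th a decay to a dark state (which, by
    the convention in the text, also emits a detectable photon)."""
    poisson = lambda k: exp(-rT) * rT**k / factorial(k)
    if n == 0:
        return poisson(0)     # no scattering events at all
    p_at_least_n = 1 - sum(poisson(k) for k in range(n))
    return (poisson(n) * (1 - b_l)**n
            + (1 - b_l)**(n - 1) * b_l * p_at_least_n)

rT, b_l = 20.0, 0.05          # illustrative values (our own)
probs = [photon_number_pmf(n, rT, b_l) for n in range(120)]
print(sum(probs))                              # ~1: the pmf is normalized
print(sum(n * p for n, p in enumerate(probs))) # mean, cf. (1-exp(-b_l*rT))/b_l
```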
Direct calculation gives $$\bar{n}=\frac{1-e^{-b_{\ell}rT}}{b_{\ell}}{\rm \,and}$$ $$\sigma_{n}^{2}=\frac{1-b_{\ell}+e^{-b_{\ell}rT}b_{\ell}(2b_{\ell}rT-2rT+1)-e^{-2b_{\ell}rT}}{b_{\ell}^{2}}.$$ Therefore, $$F=1+\frac{1}{1-e^{-b_{\ell}rT}}\left(\frac{b_{\ell}}{\epsilon}+\frac{1-2b_{\ell}+2b_{\ell}e^{-b_{\ell}rT}(1-rT(1-b_{\ell}))-e^{-2b_{\ell}rT}}{1-e^{-b_{\ell}rT}}\right).\label{eq:sigmaPhi}$$ The behavior of the SNR (proportional to $1/\sqrt{F}$) arising from Eq. \[eq:sigmaPhi\] is illustrated in Fig. \[fig:snr\]. ![$1/\sqrt{F}$, the SNR resulting from Eq. \[eq:sigmaPhi\], normalized to the ideal case of the QP limit ($F=1$). This plot assumes $\epsilon=0.1$. When few photons per particle can be detected, i.e., when $\epsilon/b_{\ell}\ll1$ (far left of plot), cycling to very deep completion ($b_{\ell}rT\gg1$) does not significantly affect the SNR. Even when one photon per particle can be detected on average, i.e., when $\epsilon/b_{\ell}=1$ (dashed red line), the SNR never exceeds roughly half its ideal value. By further closing the optical cycle, i.e. such that $\epsilon/b_{\ell}\gg1$ (right of dashed red line), the SNR can be improved to near the optimal value given by the QP limit. However, to reach this optimal regime, the number of photons that would be scattered in the absence of dark states, $rT$, must be small compared to the average number that can be scattered before a particle exits the optical cycle, $1/b_{\ell}$. For example, with $1/b_{\ell}=1,000$ (green dashed line) and $rT=100$ so that $b_{\ell}rT=0.1$ (lower circle), the SNR is more than 30% larger than in the case when $rT=10,000$ and $b_{\ell}rT=10$ (upper circle). \[fig:snr\]](snr_plot_3.pdf){width="8cm"} To understand the implications of this result, we consider several special cases, summarized in Table \[tab:special-cases\]. 
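These closed forms can be checked by simulating the cycling process directly: the photon number per particle is $\min(G,K)$, where $K\sim{\rm Poisson}(rT)$ counts the scattering opportunities and $G$, geometric with parameter $b_{\ell}$, is the index of the first decay to a dark state. A sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_photon_counts(rT, b_l, particles=500_000):
    """Photons scattered per particle: Poisson(rT) scattering events,
    truncated at the first decay to a dark state (the dark decay itself
    still emits a photon, as assumed in the text)."""
    K = rng.poisson(rT, particles)      # events if the cycle never leaked
    G = rng.geometric(b_l, particles)   # index of the first dark-state decay
    return np.minimum(K, G)

rT, b_l = 100.0, 1e-3                   # lower-circle parameters of Fig. [fig:snr]
n = simulate_photon_counts(rT, b_l)

nbar = (1 - np.exp(-b_l*rT)) / b_l
var = (1 - b_l + np.exp(-b_l*rT)*b_l*(2*b_l*rT - 2*rT + 1)
       - np.exp(-2*b_l*rT)) / b_l**2
print(n.mean() / nbar, n.var() / var)   # both ~1
```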
We first consider the simple case when cycling is allowed to proceed until all particles decay to dark states, i.e., $b_{\ell}rT\rightarrow\infty$. We refer to this as the case of “cycling to completion.” In this case, for the generically applicable regime $\epsilon\leq\frac{1}{2}$ we find $F\geq2$, even as the transition becomes perfectly closed ($b_{\ell}\rightarrow0$). We can understand this result intuitively as follows. As the optical cycling proceeds, the number of particles that will still be in the optical cycle after each photon scatter is proportional to the number of particles that are currently in the optical cycle, $\frac{dP}{dn_{i}}\propto P$. Hence, we expect $P(n_{i};\,rT\rightarrow\infty,b_{\ell})\propto e^{-\alpha n_{i}}$ for some characteristic constant $\alpha$. In fact, one can show that for $rT\rightarrow\infty$, this result holds with $\alpha\approx b_{\ell}$. The width $\sigma_{n}$ of this exponential distribution is given by the mean $\bar{n}$; that is, $\sigma_{n}\approx\bar{n}$. Therefore, we should expect that cycling to completion reduces the SNR by a factor of $\sqrt{F}=\sqrt{1+(\sigma_{n}/\bar{n})^{2}}\rightarrow\sqrt{2}$ compared to the ideal case of $F=1$, which requires $\frac{\sigma_{n}}{\bar{n}}=0$. Surprisingly, this reduction in SNR can be partially recovered for an imperfectly closed optical cycle, by choosing a finite cycling time, $rT<\infty$, to minimize $\sigma_{\tilde{\phi}}$. The best limiting case, as found from Eq. \[eq:sigmaPhi\], preserves the condition that many photons are detected per particle, $rT\epsilon\gg1$, but additionally requires that the probability of decaying to a dark state remains small, $rTb_{\ell}\ll1$. In this case, photon emission is approximately a Poisson process for which $\left(\frac{\sigma_{n}}{\bar{n}}\right)^{2}\approx\frac{1}{rT}\ll1$, and the excess noise factor, $F$, does not have a significant contribution from the variation in scattered photon number. 
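The trade-off between detecting many photons ($rT\epsilon\gg1$) and avoiding optical pumping into dark states ($rTb_{\ell}\ll1$) can be explored numerically from Eq. \[eq:sigmaPhi\]; a sketch using a simple grid search (parameter values match Fig. \[fig:snr\]):

```python
import numpy as np

def excess_noise_factor(rT, b_l, eps):
    """Excess noise factor F of Eq. [eq:sigmaPhi]."""
    q = 1 - np.exp(-b_l * rT)
    num = (1 - 2*b_l + 2*b_l*np.exp(-b_l*rT)*(1 - rT*(1 - b_l))
           - np.exp(-2*b_l*rT))
    return 1 + (b_l/eps + num/q) / q

b_l, eps = 1e-3, 0.1                   # 1/b_l = 1000, eps = 0.1 as in Fig. [fig:snr]
rT_grid = np.geomspace(1.0, 1e5, 2000)
F_grid = excess_noise_factor(rT_grid, b_l, eps)
rT_opt, F_opt = rT_grid[F_grid.argmin()], F_grid.min()

# Cycling to completion (b_l*rT >> 1) gives F -> 2 + b_l*(1/eps - 2) ~ 2;
# the finite-time optimum does substantially better.
print(rT_opt, F_opt, excess_noise_factor(1e7, b_l, eps))
```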
The optimal value of $rT$ for a finite proportion of decays to dark states, $b_{\ell}$, and detection efficiency, $\epsilon$, lies in the intermediate regime and can be computed numerically. A special case of “cycling to completion,” which must be considered separately, occurs when every particle scatters exactly one photon, corresponding to parameter values $b_{\ell}=1$ and $rT\gg1$ so that $\bar{n}=1$ and $\sigma_{n}=0$. As we have already seen, in this case there is no contribution to the excess noise arising from variation in the scattered photon number, and hence the SNR is limited only by photon shot noise: $F=\frac{1}{\epsilon}$. In atomic physics experiments with essentially completely closed optical cycles, $b_{\ell}\approx0$, the limit $b_{\ell}rT\rightarrow\infty$ is not obtained even for very long cycling times where $rT\gg1$. Instead, in this case $b_{\ell}rT\rightarrow0$ and hence $F\rightarrow1+\frac{1}{rT\epsilon}$, which approaches unity as the probability to detect a photon from each particle becomes large, $rT\epsilon\gg1$. Therefore, the reduction in the SNR associated with the distribution of scattered photons does not occur in this limit of a completely closed optical cycle.

      Condition                       Sub-condition                    $F$
---- ------------------------------- -------------------------------- --------------------------------------------------------------------
1a   $b_{\ell}rT\rightarrow\infty$                                    $2+b_{\ell}(\frac{1}{\epsilon}-2)$
1b   $b_{\ell}rT\rightarrow\infty$   $\epsilon\leq0.5$                $\geq2$
2a   $b_{\ell}rT\rightarrow0$                                         $1+\frac{1}{rT\epsilon}+\frac{1}{2}b_{\ell}(\frac{1}{\epsilon}-2)$
2b   $b_{\ell}rT\rightarrow0$        $\epsilon rT\rightarrow\infty$   $1$
3a   $b_{\ell}\rightarrow1$                                           $\frac{1}{\epsilon}\frac{1}{1-e^{-rT}}$
3b   $b_{\ell}\rightarrow1$          $rT\rightarrow\infty$            $\frac{1}{\epsilon}$

: The excess noise factor $F$ in some special cases. (1a) All particles are lost to dark states during cycling. 
(1b) With all particles lost and realistic detection efficiency, $\epsilon\leq0.5$, $F\geq2$. (2a) No particles are lost to dark states. (2b) No particles are lost, but many photons per particle are detected. The QP limit is reached. (3a) Up to one photon can be scattered per particle. (3b) Exactly one photon is scattered per particle and the photon shot noise limit is reached.\[tab:special-cases\] We have also considered how the additional noise due to optical cycling combines with other noise sources in the detection process. For example, consider intrinsic noise in the photodetector itself. Commonly, a photodetector (such as a photomultiplier or avalanche photodiode) has average intrinsic gain $\bar{G}$ and variance in the gain $\sigma_{G}^{2}$, with resulting excess noise factor $f=1+\frac{\sigma_{G}^{2}}{\bar{G}^{2}}$. Including this imperfection in the model considered here leaves Eq. \[eq:sigmaPhi\] unchanged up to the substitution $\epsilon\rightarrow\epsilon/f$. Similar derivations can be performed assuming a statistical distribution of $N$ or $\phi$ to obtain qualitatively similar but more cumbersome results. In conclusion, we have shown that a quantum phase measurement, with detection via laser-induced fluorescence using optical cycling on an open transition, incurs a reduction in the SNR by a factor of $\sqrt{2}$ compared to the QP limit when the optical cycle is driven to completion. This effect arises from the broad distribution of the number of scattered photons in this case. This reduction of the SNR does not occur for typical atomic systems, where decay out of the optical cycle and into dark states is negligible over the timescale of the measurement. An expression for the SNR has been derived for the general case, in which the cycling time is finite and the probability of decay to dark states is non-zero. 
For a given decay rate to dark states, an optimal combination of cycling rate and time can be computed numerically to obtain a SNR that most closely approaches the QP limit. This ideal limit can be obtained only when the photon cycling proceeds long enough for many photons from each atom or molecule to be detected, but not long enough for most atoms or molecules to exit the optical cycle by decaying to an unaddressed dark state. This work was supported by the NSF.
---
abstract: 'In a classical optimal stopping problem the aim is to maximize the expected value of a functional of a diffusion evaluated at a stopping time. This note considers optimal stopping problems beyond this paradigm. We study problems in which the value associated to a stopping rule depends on the law of the stopped process. If this value is quasi-convex on the space of attainable laws then it is a well-known result that it is sufficient to restrict attention to the class of threshold strategies. However, if the objective function is not quasi-convex, this may not be the case. We show that, nonetheless, it is sufficient to restrict attention to mixtures of threshold strategies.'
author:
- 'Vicky Henderson'
- 'David Hobson'
- 'Matthew Zeng'
title: 'Optimal Stopping and the Sufficiency of Randomized Threshold Strategies[^1]'
---

Introduction and main results
=============================

Let $Y=(Y_t)_{t \geq 0}$ be a time-homogeneous, continuous strong-Markov process. Let ${\mathcal T}$ be the set of all stopping times, and let ${\mathcal T}_T$ be the set of all (one- and two-sided) threshold stopping times, i.e., stopping rules based on the first crossing of upper or lower thresholds. Let $V=V(\tau)$ be the value associated with a stopping rule $\tau$. Consider the optimal stopping problem associated with $V$, i.e., the problem of finding $$\label{eq:osp} V_*( {\mathcal S}) = \sup_{\tau \in {\mathcal S}} V(\tau)$$ where ${\mathcal S}$ is some set of stopping times (for example ${\mathcal S}= {\mathcal T}$ or ${\mathcal S}= {\mathcal T}_T$), and especially the problem of finding an optimizer for \[eq:osp\]. We say that $V=V(\tau)$ is law invariant if, whenever $\sigma,\tau$ are stopping times, ${\mathcal L}(Y_\sigma)= {\mathcal L}(Y_\tau)$ implies that $V(\sigma)=V(\tau)$, where ${\mathcal L}(Z)$ is the law of $Z$. It follows that $V(\tau)=H({\mathcal L}(Y_\tau))$ for some map $H$. 
The following result is well-known, but we include it as a contrast to our result on the sufficiency of randomized threshold rules. Suppose $H$ is quasi-convex and lower semi-continuous. Then $V_*({\mathcal T}_T) = V_*({\mathcal T})$. In the setting of Theorem \[thm:main2\], in solving the optimal stopping problem over the set of all stopping times it is sufficient to restrict attention to threshold rules. \[cor:A\] As the canonical example, consider expected utility, whence $V(\tau) = {\mathbb E}[u( Y_\tau) ]$, for a continuous, increasing function $u$. Then $V$ is law invariant. Indeed $V(\tau)= H({\mathcal L}(Y_\tau))$ where $H(\zeta) = \int u(z) \zeta(dz)$. $H$ is quasi-convex and lower semi-continuous. In this example it is well known that there is an optimal stopping rule which is of threshold form; see, for example, Dayanik and Karatzas [@DayanikKaratzas:03]. The fact that quasi-convexity means that there is no benefit from following randomized strategies is well understood in the economics literature; see Machina [@Machina:85], Camerer and Ho [@CamererHo:94], Wakker [@Wakker:10] and He et al [@HeHuOblojZhou:17]. Recently there has been a surge of interest in problems which, whilst they have the law invariance property, do not satisfy the quasi-convex criterion. Two examples are optimal stopping under prospect theory (Xu and Zhou [@XuZhou:13]), and optimal stopping under cautious stochastic choice (Henderson et al [@HendersonHobsonZeng:17]). Introduce the set ${\mathcal T}_R$ of mixed or randomized threshold rules. Suppose law invariance holds for $V$, but not quasi-convexity for $H$. Then $V_*({\mathcal T}_T) \leq V_*({\mathcal T}_R) = V_*({\mathcal T})$. We will show by example that the first inequality may be strict. 
In the setting of Theorem \[thm:main1\], in solving the optimal stopping problem over the set of all stopping rules it is sufficient to restrict attention to randomized threshold rules, but it may not be sufficient to restrict attention to (pure) threshold rules. \[cor:B\] It should be noted that we do not include discounting in our analysis since a problem involving discounting does not satisfy the law invariance property. Nonetheless, as is well known, the conclusion of Corollary \[cor:A\] remains true for the problem of maximizing discounted expected utility of the stopped process $V(\tau) = {\mathbb E}[ e^{- \beta \tau} u(Y_\tau)]$. However, in problems which go beyond the expected utility paradigm, there are often modelling issues which militate against the inclusion of discounting. For this reason, historically the literature has concentrated on problems with no discounting. Finding the optimal stopping rule is often already challenging in these models. The significance of Corollary \[cor:B\] is as follows. In many classical models optimal stopping behavior involves stopping on first exit from an interval. If decision makers are observed to stop at levels which have already been visited by the process, then this behavior is inconsistent with the classical optimal stopping model. However, our result implies that the converse is not true: if decision makers are observed to stop only when the process is reaching new maxima or minima, then it does not necessarily mean that they are maximizers of expected payoffs. Instead the decision criteria may be more complicated, and they may be utilizing a randomized threshold rule.

Problem specification and the problem in natural scale
======================================================

We work on a filtered probability space $(\Omega, {\mathcal F}, {\mathbb F}= \{ {\mathcal F}_t \}_{t \geq 0} , {\mathbb P})$. 
Let $Y= (Y_t)_{t \geq 0}$ be an $({\mathbb F}, {\mathbb P})$-stochastic process on this probability space with state space $I$ which is an interval. Let $\bar{I}$ be the closure of $I$. We suppose that $Y$ is a regular, time-homogeneous diffusion with initial value $Y_0=y$ such that $y$ lies in the interior of $I$. Let ${\mathcal T}$ be the class of all stopping times $\tau$ such that $\lim_{t \uparrow \infty} Y_{t \wedge \tau}$ exists (almost surely). We introduce two subclasses of stopping times:

- ${\mathcal T}_T$, the subclass of (pure) threshold stopping times;
- ${\mathcal T}_R$, the subclass of randomized threshold stopping times.

Note that ${\mathcal T}_T \subset {\mathcal T}_R \subset {\mathcal T}$. The set of pure threshold stopping times includes stopping immediately and can be written as $${\mathcal T}_T = {\mathcal T}\cap \left(\cup_{\beta \leq y \leq \gamma; \; \beta, \gamma \in \bar{I}^Y} \{ \tau_{\beta,\gamma} \} \right), \label{eq:TTdef}$$ where $\tau_{a,b} = \inf_{u \geq 0} \{ u: Y_u \notin (a,b) \}$. Note that if $a = y$ or $b=y$ then $\tau_{a,b}=0$ almost surely, and that if $\sigma=\tau$ almost surely then we have $V(\sigma)=V(\tau)$. Hence we may suppose that $\tau \equiv 0$, the strategy of stopping immediately, lies in ${\mathcal T}_T$. In order to be able to define a sufficiently rich class of randomized stopping times we need to assume that ${\mathbb F}$ is larger than the filtration generated by $Y$. ${\mathcal F}_0$ is sufficiently rich as to include a continuous random variable, and the stochastic process $Y$ is independent of this random variable. \[ass:filtration\] It follows from the assumption that for any probability measure $\zeta$ on ${\mathcal D}= ([-\infty,y] \cap \bar{I}) \times ([y,\infty]\cap \bar{I})$ there exists an ${\mathcal F}_0$-measurable random variable $\Theta = \Theta_\zeta = (A_\zeta, B_\zeta)$ such that $(A_\zeta, B_\zeta)$ has law $\zeta$. 
For a set $\Gamma$ let ${\mathcal P}(\Gamma)$ be the set of probability measures on $\Gamma$. Then for any $\zeta \in {\mathcal P}({\mathcal D})$ we can define the randomized stopping time $\tau_\zeta$ as the first time $Y$ leaves a random interval, where the interval is chosen at time 0 with law $\zeta$. Then $\tau_\zeta = \tau_{A_\zeta, B_\zeta} = \inf \{ u : Y_u \notin (A_\zeta,B_\zeta)\}$. The set of randomized threshold rules ${\mathcal T}_R$ is given by $${\mathcal T}_R = {\mathcal T}\cap \left( \{ \tau_\zeta : \zeta \in {\mathcal P}({\mathcal D}) \} \right). \label{eq:TRdef}$$ Our analysis is focussed on problems in which the value associated with a stopping rule depends only on the law of the stopped process. Let $Q({\mathcal S})= \{ \mu : \mu = {\mathcal L}(Y_\tau) , \tau \in {\mathcal S}\}$. \[ass:lip\] $V$ is law invariant, i.e. $V(\tau) = H({\mathcal L}(Y_\tau))$ for some function $H : Q({\mathcal T}) \mapsto {\mathbb R}$. Given that the value associated with a stopping rule is law invariant, one natural approach to finding the optimal stopping time is to try to characterize $Q({\mathcal S})$. Often, the best way to do this is via a change of scale. Let $s$ be a strictly increasing function such that $X = s(Y)$ is a local martingale. (Such a function $s$, called a scale function, exists under very mild conditions on $Y$; see, for example, Rogers and Williams [@RogersWilliams:00]. For example, if $Y$ solves the SDE $dY_t = \sigma(Y_t) dB_t + \xi(Y_t) dt$ for smooth functions $\sigma$ and $\xi$ with $\sigma > 0$ then $s=s(z)$ is a solution to $\frac{1}{2} \sigma(z)^2 s'' + \xi(z) s' = 0$. Note that if $s$ is a scale function then so is any affine transformation of $s$, and so we may choose any convenient normalization for $s$.) Let $I^X = s(I)$ and let $\bar{I}^X$ be the closure of $I^X$. Then $X$ is a regular, time-homogeneous local-martingale diffusion on $I^X$ with initial value $x=s(y)$.
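As a concrete illustration (our own numerical sketch, not part of the original argument): for Brownian motion with constant drift $\xi$ and volatility $\sigma$, the ODE above gives the scale function $s(z)=e^{-2\xi z/\sigma^2}$, and the exit probabilities of $Y$ are linear in $s$, namely ${\mathbb P}(Y \mbox{ exits } (a,b) \mbox{ at } b) = (s(y)-s(a))/(s(b)-s(a))$. A short Monte Carlo check:

```python
import math
import random

def scale(z, xi, sigma):
    # s(z) = exp(-2*xi*z/sigma^2) solves (1/2)*sigma^2*s'' + xi*s' = 0,
    # the scale-function ODE for dY = sigma dB + xi dt.
    return math.exp(-2.0 * xi * z / sigma ** 2)

def exit_up_prob(y, a, b, xi, sigma, n_paths=5000, dt=2e-3, seed=1):
    """Monte Carlo estimate of P(Y exits (a, b) at b), via an Euler scheme."""
    rng = random.Random(seed)
    step_sd = sigma * math.sqrt(dt)
    hits = 0
    for _ in range(n_paths):
        pos = y
        while a < pos < b:
            pos += xi * dt + step_sd * rng.gauss(0.0, 1.0)
        hits += pos >= b
    return hits / n_paths

y, a, b, xi, sigma = 0.0, -1.0, 1.0, 0.3, 1.0
theory = (scale(y, xi, sigma) - scale(a, xi, sigma)) / \
         (scale(b, xi, sigma) - scale(a, xi, sigma))
estimate = exit_up_prob(y, a, b, xi, sigma)
# theory and estimate agree up to Monte Carlo and discretization error
```

The agreement is up to Monte Carlo noise and the $O(\sqrt{dt})$ overshoot bias of the Euler scheme; the function names are ours.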
Set $Q^X({\mathcal S}) = \{ \nu : \nu = {\mathcal L}(X_\tau) , \tau \in {\mathcal S}\}$. Then if ${\mathcal L}(X_\tau) = \nu$ we have ${\mathcal L}(Y_\tau) = \nu \sharp s$ where $(\nu \sharp s)(D) = \nu(s(D))$. It follows that $\nu \in Q^X({\mathcal S})$ if and only if $\nu \sharp s \in Q({\mathcal S})$ and hence $$\label{eq:QQX} Q({\mathcal S}) = \{ \nu \sharp s ; \nu \in Q^X({\mathcal S}) \}.$$ Thus, if we can characterize $Q^X({\mathcal S})$ then we can also characterize $Q({\mathcal S})$. Moreover, defining $H^X : Q^X({\mathcal T}) \mapsto {\mathbb R}$ by $H^X(\nu) = H(\nu \sharp s)$ we have $V_*({\mathcal S}) = \sup_{\mu \in Q({\mathcal S})} H(\mu) = \sup_{\nu \in Q^X({\mathcal S})} H^X(\nu)$. The problem of optimizing over stopping laws for the problem with $Y$ becomes a problem of optimizing over the possible laws of the stopped process $X$ in natural scale. Note that $\tau_{a,b} = \inf_{u \geq 0} \{ u : Y_u \notin (a,b) \} = \inf_{u \geq 0} \{ u : X_u \notin (s(a),s(b)) \} =: \tau^X_{s(a),s(b)}$. Hence ${\mathcal T}_T$ has the alternative representation $${\mathcal T}_T = {\mathcal T}\cap \left( \cup_{\beta \leq x \leq \gamma; \; \beta, \gamma \in \bar{I}^X} \{ \tau^X_{\beta,\gamma} \} \right) ,$$ and the set of threshold stopping times for $Y$ is the set of threshold stopping times for $X$. Similarly, ${\mathcal T}_R$ can be rewritten as ${\mathcal T}_R = {\mathcal T}\cap (\{ \tau^X_\eta : \eta \in {\mathcal P}({\mathcal D}^X) \})$ where ${\mathcal D}^X = ([-\infty,x] \cap \bar{I}^X) \times ([x,\infty] \cap \bar{I}^X)$ and $$\tau^X_\eta = \inf_{u \geq 0} \{u : X_u \notin (A_\eta, B_\eta) \} , \quad \mbox{where $(A_\eta, B_\eta)$ has law $\eta$} .$$ Characterizing the possible laws of the stopped process in natural scale ======================================================================== If $X=s(Y)$ is in natural scale then the state space of $X$ is an interval $I^X = s(I)$ and $X_0 = x := s(y)$. There are four cases: 1. $I^X$ is bounded; 2.
$I^X$ is unbounded above but bounded below; 3. $I^X$ is bounded above but unbounded below; 4. $I^X$ is unbounded above and below. The third case can be reduced to the second by reflection. The first case is broadly similar to the second, and typically the proofs are similar but simpler. The final case is degenerate and will be treated separately. In the main text we will mainly present arguments for the second case (with the other cases covered in an appendix), but results will be stated in a form which applies in all cases. Henceforth, in the main text we suppose $I^X$ is bounded below, but unbounded above. Without loss of generality we may assume $I^X=(0,\infty)$ or $[0,\infty)$. Then $X$ is a non-negative local martingale and hence a super-martingale. Moreover, $\lim_{t \rightarrow \infty} X_t$ exists. Hence ${\mathcal T}$ includes stopping rules which take infinite values, and on $\{\tau = \infty \}$ we set $X_\tau = \lim_{t \rightarrow \infty} X_t=0$. In this case ${\mathcal T}$ is the set of all stopping times and the intersection with ${\mathcal T}$ in the definitions of ${\mathcal T}_T$ and ${\mathcal T}_R$ is not necessary. By Fatou’s lemma and the super-martingale property $${\mathbb E}[X_\tau] = {\mathbb E}[ \lim_{t \rightarrow \infty} X_{t \wedge \tau}] \leq \liminf_{t \rightarrow \infty} {\mathbb E}[X_{t \wedge \tau}] \leq x .$$ In particular, if we set ${\mathcal P}_{\leq x} = \{ \nu \in {\mathcal P}([0,\infty)) : \int z \nu(dz) \leq x \}$ then $Q^X({\mathcal T}) \subseteq {\mathcal P}_{\leq x}$. $Q^X({\mathcal T}) = Q^X({\mathcal T}_R)$. \[lem:Q=\] Here we prove the lemma in the case where $I^X$ is bounded below but unbounded above. We show that $Q^X({\mathcal T}) = Q^X({\mathcal T}_R)={\mathcal P}_{\leq x}$. Given $\nu \in {\mathcal P}_{\leq x}$ the aim is to find a stopping time $\tau \in {\mathcal T}_R$ such that ${\mathcal L}(X_\tau)= \nu$.
The task of finding general stopping times with ${\mathcal L}(X_\tau) = \xi$ for given $\xi \in {\mathcal P}(\overline{I}^X)$ is known as the Skorokhod embedding problem (Skorokhod [@Skorokhod:65]). In fact we use an extension of an embedding due to Hall [@Hall:85]; see also Durrett [@Durrett:91]. The extension relates to the fact that we allow for target laws which have a different mean from the initial value of $X$, whereas the Hall embedding assumes $\int z \nu(dz) = x$. The Hall embedding, and the extension we give, are mixtures of threshold strategies. Suppose $\nu$ is an element of ${\mathcal P}_{\leq x}$ (and $\nu$ is not a point mass at $x$). The case of $\nu = \delta_x$ corresponds to the (threshold) stopping time $\tau=0$. Let $G$ be the (right-continuous) quantile function of $\nu$. We have $x \geq \int z \nu(dz) = \int_{(0,1)}G(u) du$. In particular, unless $\lim_{u \uparrow 1}G(u) \leq x$ there exists a unique solution $v^* \in [0,1)$ to $\int_v^1 [G(w) - x] dw = 0$. Let $z^* = G(v^*)\leq x$. If $\lim_{u \uparrow 1}G(u) \leq x$ then set $v^*=1$ and $z^* = \lim_{u \uparrow 1}G(u)$. Let $\nu_0$ be the measure of size $v^*$ such that $\nu_0([0,z)) = v^* \wedge \nu([0,z))$. Then $\nu_0$ has support contained in $[0,z^*]$. Let $\nu_1$ be the measure of size $1-v^*$ such that $\nu_1([0,z)) = (\nu([0,z))-v^*)^+$. Then $\nu_1$ has support in $[z^*,\infty)$ and barycentre $x$. Moreover $\nu = \nu_0 + \nu_1$. Define $c = \int_x^\infty (y-x) \nu(dy)$. By construction, $c = \int_x^\infty (y-x) \nu_1(dy)$, and from the fact that $\nu_1$ has barycentre $x$ we have $\int_{z^*}^\infty (y-x) \nu_1(dy)=0$ and hence $$c = \int_{z^*}^x (x-y) \nu_1(dy).
\label{eq:c}$$ Let $\eta \in {\mathcal P}([0,x] \times(x,\infty])$ be given by $$\eta(da,db) = \nu_0(da) I_{ \{ 0 \leq a \leq z^* \} } I_{ \{ b = \infty \} } + \nu_1(da) \nu_1(db) \frac{(b-a)}{c} I_{ \{ z^* \leq a \leq x < b < \infty \} } .$$ Note first that $\eta$ is a probability measure: $$\begin{aligned} \lefteqn{ \int_{0 \leq a \leq x} \int_{x < b \leq \infty} \eta(da,db) } \\ & = & v^* + \int_{z^* \leq a \leq x} \nu_1(da) \int_{x < b < \infty} \frac{b-x}{c} \nu_1(db) + \int_{z^* \leq a \leq x} \frac{x-a}{c} \nu_1(da) \int_{x < b < \infty} \nu_1(db) \\ & = & v^* + \int_{z^* \leq a \leq x} \nu_1(da) + \int_{x < b < \infty} \nu_1(db) = v^* + \nu_1([z^*,\infty)) = 1\end{aligned}$$ where we use the definition of $c$ and equation \[eq:c\] in going from the second line to the third. It remains to show that ${\mathcal L}(X_{\tau^X_\eta}) = \nu$. Let $f$ be a bounded test function. Then, using the fact that if $b=\infty$ then $X_{\tau^X_{a,\infty}}=a$, together with the definition of $c$ and equation \[eq:c\] for the penultimate line, $$\begin{aligned} {\mathbb E}[f( X_{\tau^X_\eta})] & = & \int \int \eta(da,db) {\mathbb E}[f(X_{\tau^X_{a,b}})] \\ & = & \int \nu_0 (da) f(a) + \int_{z^* \leq a \leq x} \int_{x < b < \infty} \nu_1(da) \nu_1(db) \frac{b-a}{c} \left[ f(a)\frac{(b-x)}{b-a} + f(b) \frac{(x-a)}{b-a} \right] \\ & = & \int \nu_0 (da) f(a) + \int_{z^* \leq a \leq x} \nu_1(da) f(a) \int_{x < b<\infty} \nu_1(db) \frac{(b-x)}{c} \\ && \hspace{30mm} + \int_{z^* \leq a \leq x} \frac{(x-a)}{c} \nu_1(da) \int_{x < b<\infty} f(b) \nu_1(db) \\ & = & \int_{0 \leq z \leq z^*} f(z) \nu_0(dz) + \int_{z^* \leq z \leq x} f(z) \nu_1(dz)+ \int_{x < z} f(z) \nu_1(dz) \\ & = & \int f(z) \nu(dz).\end{aligned}$$ Hence ${\mathcal L}(X_{\tau^X_\eta}) = \nu$ as required. Let $\chi_{a,b} = \frac{b-x}{b-a} \delta_a + \frac{x-a}{b-a} \delta_b$. Then $\chi_{a,b}$ is the law of $X_{\tau^X_{a,b}}$. Moreover, ${\mathcal L}(X_{\tau^X_{a, \infty}}) = \delta_a$.
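The splitting $\nu = \nu_0 + \nu_1$ and the randomization $\eta$ above can be implemented and checked exactly for atomic target laws. The sketch below is our own (the function names are not from the paper); it uses rational arithmetic and verifies that the resulting mixture of threshold rules reproduces $\nu$:

```python
from fractions import Fraction as F

INF = float("inf")  # stands in for b = infinity, i.e. the one-sided rule tau_{a,inf}

def hall_embedding(nu, x):
    """Extended Hall construction for an atomic law nu (dict atom -> mass, mean <= x).

    Splits nu = nu0 + nu1, where nu1 is the upper-quantile part with barycentre x,
    then returns eta, a probability law on threshold pairs (a, b)."""
    nu0, nu1 = dict(nu), {}
    if max(nu) > x:
        S = F(0)  # running value of the integral of (z - x) over the part taken into nu1
        for z in sorted(nu, reverse=True):
            p = nu[z]
            # take the whole atom into nu1 while the running integral stays >= 0,
            # otherwise split the atom at the level z* = z
            q = p if S + p * (z - x) >= 0 else -S / (z - x)
            nu1[z] = q
            nu0[z] = p - q
            if q < p:
                break
            S += p * (z - x)
        nu0 = {z: m for z, m in nu0.items() if m > 0}
        nu1 = {z: m for z, m in nu1.items() if m > 0}
    c = sum(m * (z - x) for z, m in nu1.items() if z > x)
    eta = {(a, INF): m for a, m in nu0.items()}        # nu0: one-sided rules
    for a, pa in nu1.items():                          # nu1: two-sided rules
        if a <= x:
            for b, pb in nu1.items():
                if b > x:
                    eta[(a, b)] = pa * pb * (b - a) / c
    return eta

def stopped_law(eta, x):
    """Law of X stopped by the mixture eta of exit rules, via the weights of chi_{a,b}."""
    law = {}
    for (a, b), p in eta.items():
        if b == INF:
            law[a] = law.get(a, F(0)) + p
        else:
            law[a] = law.get(a, F(0)) + p * (b - x) / (b - a)
            law[b] = law.get(b, F(0)) + p * (x - a) / (b - a)
    return {z: m for z, m in law.items() if m > 0}

x = F(1)
nu = {F(2, 5): F(1, 2), F(7, 5): F(1, 2)}  # mean 9/10 <= x
eta = hall_embedding(nu, x)
law = stopped_law(eta, x)                  # recovers nu exactly
```

For this $\nu$ the construction gives $v^*=1/6$, $z^*=2/5$, and $\eta$ mixes the one-sided rule $\tau_{2/5,\infty}$ (mass $1/6$) with the exit rule $\tau_{2/5,7/5}$ (mass $5/6$).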
Then, $$Q^X({\mathcal T}_T) = \left( \cup_{0 \leq a \leq x} \{ \delta_a \} \right) \cup \left( \cup_{0 \leq a < x <b <\infty} \{ \chi_{a,b} \} \right).$$ Sufficiency of mixed threshold rules ==================================== Our main result is that in a large class of problems it is sufficient to search over the class of mixed threshold rules. \[thm:main1\] Suppose $Y$ is a regular, time-homogeneous diffusion. Suppose the law invariance property holds (Assumption \[ass:lip\]) and that the filtration is sufficiently rich (Assumption \[ass:filtration\]). Then $V_*({\mathcal T}) = V_*({\mathcal T}_R)$. Since $Q^X({\mathcal T}) = Q^X({\mathcal T}_R)$ (Lemma \[lem:Q=\]) we have $Q({\mathcal T}) = Q({\mathcal T}_R)$. Then $$V_*({\mathcal T}) = \sup_{\mu \in Q({\mathcal T})} H(\mu) = \sup_{\mu \in Q({\mathcal T}_R)} H(\mu) = V_*({\mathcal T}_R).$$ Note that it is not our claim that every optimal stopping rule is a mixed threshold rule. Typically, at least in the case where $V_*({\mathcal T}_T) < V_*({\mathcal T})$, there will be other optimal stopping rules which are not of threshold type. Examples -------- ### Rank dependent utility and optimal stopping Let $Z$ be a non-negative random variable. Let $v:[0,\infty) \mapsto [0,\infty)$ be an increasing, differentiable function with $v(0)=0$. Then the expected value of $v(Z)$ can be expressed as ${\mathbb E}[v(Z)] = \int_0^\infty v'(z) \bar{F}_Z(z) dz$, where $\bar{F}_Z(z) = {\mathbb P}(Z > z)$. Under rank-dependent utility (Quiggin [@Quiggin:82]) or probability weighting (Tversky and Kahneman [@TverskyKahneman:96]) the prospect value ${\mathcal E}_v(Z)$ of $Z$ is $${\mathcal E}_v(Z) = \int_0^\infty v'(z) w(\bar{F}_Z(z)) dz$$ where $w :[0,1] \mapsto [0,1]$ is an increasing, differentiable probability weighting function.
Writing $G_Z=F_Z^{-1}$ for the quantile function of $Z$, then after a change of variable and integration by parts we have (see Xu and Zhou [@XuZhou:13 Lemma 3.1]) the alternative representation $${\mathcal E}_v(Z) = \int_0^1 w'(1-u) v({G}_Z(u)) du.$$ Now let $Y=(Y_t)_{t \geq 0}$ be a non-negative diffusion and consider the problem of maximizing over stopping times the prospect value of the stopped process $Y$, i.e. of finding $$\label{eq:rdu} \sup_{\tau \in {\mathcal T}} {\mathcal E}_v(Y_\tau).$$ Clearly the prospect value depends on the stopping time only through the law of the stopped process. Hence it is sufficient to characterize the optimal target distribution, for example via its quantile function. Xu and Zhou [@XuZhou:13] solve for the optimal quantile function in several cases. One relevant case is the following: Suppose $Y$ is in natural scale and has state space $[0,\infty)$ and initial value $y$. Suppose $v$ and $w$ are concave. Suppose there exists $\lambda^* \in (0,\infty)$ which solves $$\int_0^1 (v')^{-1} \left( \frac{\lambda^*}{w'(1-u)} \right) du = y .$$ Then the quantile function of the optimal stopping distribution is $G^*(u) = (v')^{-1} \left( \frac{\lambda^*}{w'(1-u)} \right)$. \[prop:pt\] Xu and Zhou [@XuZhou:13] point out that although there is a unique optimal prospect there are infinitely many stopping rules which attain this prospect. They advocate the use of the stopping rule based on the Azéma-Yor stopping time [@AzemaYor:79], in which case the stopping rule has a drawdown feature, and involves stopping the first time the process falls below some function of its running maximum. Our main result says that there is also a randomized threshold rule which is optimal. ### Cautious stochastic choice Given a process $Y$ and a utility function $u$ the certainty equivalent associated with a stopping time $\tau$ is ${\mathcal C}_u(\tau) = u^{-1}({\mathbb E}[u(Y_\tau)])$.
The idea in cautious stochastic choice (Cerreia-Vioglio et al. [@CerreiaVioglio:15]) is that agents use multiple utility functions and evaluate an outcome in a robust manner as the least favorable of the individual certainty equivalents. If the set of utility functions is $\{ u_\alpha \}_{\alpha \in {\mathcal A}}$, and if we write ${\mathcal C}_\alpha$ as shorthand for ${\mathcal C}_{u_\alpha}$, then the CSC value of a stopping rule is $$CSC(\tau) = \inf_{\alpha \in {\mathcal A}} {\mathcal C}_\alpha(\tau) = \inf_{\alpha \in {\mathcal A}} u_\alpha^{-1} ({\mathbb E}[u_\alpha(Y_\tau)]) , \label{eq:csc}$$ and an optimal stopping rule is one which maximizes the CSC value. Clearly the CSC value of a stopping rule depends only on the law of $Y_\tau$. Moreover, suppose ${\mathcal A}= \{\alpha, \beta \}$ and suppose $u_{\alpha}$ and $u_{\beta}$ are strictly increasing and continuous with strictly increasing and continuous inverses. Suppose further that there exist $\tau_1$ and $\tau_2$ and $\tilde{y}$ such that $u_\alpha^{-1}({\mathbb E}[u_\alpha(Y_{\tau_1})]) > \tilde{y} > u_\beta^{-1}({\mathbb E}[u_\beta(Y_{\tau_1})])$ and $u_\alpha^{-1}({\mathbb E}[u_\alpha(Y_{\tau_2})]) < \tilde{y} < u_\beta^{-1}({\mathbb E}[u_\beta(Y_{\tau_2})])$. Let $\tau^\theta$ be a mixture of $\tau_1$ and $\tau_2$, defined such that if $Z$ is an ${\mathcal F}_0$-measurable random variable taking values in $\{1,2\}$ with ${\mathbb P}(Z=1)=\theta$ then $\tau^\theta = \tau_Z$. Then for ${\gamma \in {\mathcal A}}$, ${\mathcal C}_{\gamma}(\tau^\theta) = u_\gamma^{-1} (\theta {\mathbb E}[u_\gamma(Y_{\tau_1})] + (1-\theta) {\mathbb E}[u_\gamma(Y_{\tau_2})])$ is a continuous function of $\theta$. Moreover, ${\mathcal C}_{\alpha}(\tau^\theta)$ is strictly increasing in $\theta$ and ${\mathcal C}_{\beta}(\tau^\theta)$ is strictly decreasing.
By our assumptions it follows that the best choice $\theta^*$ of $\theta$ is such that ${\mathcal C}_{\alpha}(\tau^{\theta^*}) = {\mathcal C}_{\beta}(\tau^{\theta^*})$; then $\theta^* \in (0,1)$ and $CSC(\tau^{\theta^*}) > \max \{ CSC(\tau_1), CSC(\tau_2)\}$. In particular, the value associated with a stopping rule is not quasi-convex. By the analysis of this section, in searching for an optimal stopping rule it is sufficient to restrict attention to randomized threshold rules, but we cannot expect in general that there is a pure threshold rule which is optimal. For a deeper study of optimal stopping in the context of cautious stochastic choice see Henderson et al. [@HendersonHobsonZeng:17]. Sufficient conditions for the optimality of pure threshold rules ================================================================ In this section we argue that if the value associated with a stopping rule is law invariant, and if $H$ is quasi-convex and lower semi-continuous, then pure threshold rules are optimal. Recall that $H$ is quasi-convex if $H(\lambda \mu_1 + (1-\lambda) \mu_2) \leq \max \{ H(\mu_1), H(\mu_2) \}$ for $\lambda \in(0,1)$. It follows by induction that if $\mu = \sum_{i=1}^N \lambda_i \mu_i$ where $\lambda_i \geq 0$, $\sum_{i=1}^N \lambda_i=1$ and $\mu_i \in Q({\mathcal T})$ then $$\label{eq:qc} H(\mu) \leq \max_{1 \leq i \leq N} H(\mu_i) \leq \sup_{\tilde{\mu} \in Q({\mathcal T})} H(\tilde{\mu}) .$$ Recall also that if $H$ is lower semi-continuous and $\mu_n \Rightarrow \mu$ then $H(\mu) \leq \liminf_n H(\mu_n)$. In fact we do not require $H(\mu) \leq \liminf_n H(\mu_n)$, but rather the weaker condition $H(\mu) \leq \limsup_n H(\mu_n)$. Suppose $\nu \in Q^X({\mathcal T})$ consists of finitely many atoms. Then there exists $\eta \in {\mathcal P}({\mathcal D}^X)$ such that $\eta$ consists of finitely many atoms and ${\mathcal L}(X_{\tau^X_\eta})=\nu$.
\[lem:atoms\] It follows from the construction in the proof of Lemma \[lem:Q=\] that if $\nu$ is purely atomic then so is $\eta$. Let $\nu$ be an element of $Q^X({\mathcal T})$. Then there exist $(\eta_n)_{n \geq 1}$ such that $\eta_n$ has finite support for each $n$ and such that ${\mathcal L}(X_{\tau^X_{\eta_n}}) \Rightarrow \nu$. \[lem:approx\] Since $\nu \in Q^X({\mathcal T})=Q^X({\mathcal T}_R)$ there exists $\eta$ such that ${\mathcal L}(X_{\tau^X_\eta}) = \nu$. Let $(\eta_n)_{n \geq 1}$ be a sequence of measures with finite support such that $\eta_n \Rightarrow \eta$. Then for $f:[0,\infty) \mapsto {\mathbb R}$ a bounded continuous test function define $\tilde{f}:[0,x] \times [x,\infty] \mapsto {\mathbb R}$ by $\tilde{f}(a,b) = f(a) \frac{b-x}{b-a} + f(b)\frac{x-a}{b-a}$ for $a<b<\infty$, with $\tilde{f}(x,x)=f(x)$ and $\tilde{f}(a,\infty)=f(a)$. Then, since $\tilde{f}$ is bounded and continuous, $${\mathbb E}[f(X_{\tau^X_{\eta_n}})] = \int \int \eta_n(da, db) \tilde{f}(a,b) \rightarrow \int \int \eta(da, db) \tilde{f}(a,b) = {\mathbb E}[f(X_{\tau^X_{\eta}})]$$ and it follows that $\nu_n := {\mathcal L}(X_{\tau^X_{\eta_n}}) \Rightarrow \nu$. \[thm:main2\] Suppose $Y$ is a regular, time-homogeneous diffusion. Suppose the law invariance property holds (Assumption \[ass:lip\]). Suppose that $H$ is quasi-convex and lower semi-continuous. Then $V_*({\mathcal T}) = V_*({\mathcal T}_T)$. Clearly $V_*({\mathcal T}) \geq V_*({\mathcal T}_T)$. For any $\mu_n$ with finite support we can define $\nu_n = \mu_n \sharp s^{-1}$. Then we can find a measure $\eta_n$ with finite support such that ${\mathcal L}(X_{\tau^X_{\eta_n}}) = \nu_n$.
Moreover $\nu_n$ can be decomposed as a convex combination $$\nu_n = \sum_{i=1}^N \gamma_i \chi_{a_i,b_i} + \sum_{j=1}^M \lambda_j \delta_{a_j}.$$ Then, since $H$ is quasi-convex, $$\begin{aligned} H(\mu_n) & \leq & \left( \max_{1 \leq i \leq N} H(\chi_{a_i,b_i} \sharp s^{-1}) \right) \vee \left( \max_{1 \leq j \leq M} H(\delta_{s^{-1}(a_j)}) \right) \\ & \leq & \left( \sup_{0 \leq a \leq x \leq b<\infty} H(\chi_{a,b} \sharp s^{-1}) \right) \vee \left( \sup_{0 \leq a \leq x} H(\delta_{s^{-1}(a)}) \right) = V_*({\mathcal T}_T).\end{aligned}$$ Then, for $\tau \in {\mathcal T}$, if $\mu = {\mathcal L}(Y_\tau)$ and if $\mu_n \Rightarrow \mu$ $$V(\tau) = H(\mu) \leq \limsup H(\mu_n) \leq V_*({\mathcal T}_T).$$ Hence $V_*({\mathcal T}) \leq V_*({\mathcal T}_T)$. Discussion ========== In classical optimal stopping problems involving maximizing expected utility the optimal strategy is a threshold rule and involves stopping the first time that the process leaves an interval. However, in more general settings the optimal strategy may be more sophisticated. In some settings, for example those involving regret (Loomes and Sugden [@LoomesSugden:82]), the optimal stopping rule may depend on some functional of the path (for example the maximum price to date). But, as argued here, for a large class of problems the payoff depends only on the distribution of the stopped process, and then there are many optimal stopping rules, some of which take the form of randomized threshold rules. In this article we have utilized (an extended version of) the Hall solution of the Skorokhod embedding problem (Hall [@Hall:85]) to give our randomized threshold rule, but there are other solutions of the Skorokhod embedding problem which can also be viewed as mixed threshold rules, including the original solution of Skorokhod [@Skorokhod:65] and the solution of Hirsch et al. [@HirschProfettaRoynetteYor:11].
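As a numerical aside (ours, not the authors'): every rule considered here is built from the exit laws $\chi_{a,b}$, and the defining weights can be recovered by simulating a simple symmetric random walk, which is a martingale already in natural scale:

```python
import random

def exit_top_prob(x, a, b, n_paths=20000, seed=0):
    """Fraction of simple symmetric random walks started at x that exit (a, b) at b."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        pos = x
        while a < pos < b:
            pos += 1 if rng.random() < 0.5 else -1
        hits += pos == b
    return hits / n_paths

# chi_{a,b} puts mass (x - a)/(b - a) on the upper threshold b;
# for x = 2, a = 0, b = 5 that weight is 0.4, and the estimate
# should agree to within Monte Carlo error.
p_hat = exit_top_prob(2, 0, 5)
```

The check is just the optional stopping theorem for the walk: the exit law of a martingale from $(a,b)$ is the unique two-point law on $\{a,b\}$ with mean $x$.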
The idea that if the objective is expressed in terms of a function which is not quasi-convex then agents may want to use randomized strategies is well appreciated in static settings. In a dynamic setting He et al. [@HeHuOblojZhou:17] argue that in a binomial-tree, probability-weighted model of a casino (Barberis [@Barberis:14]) gamblers may prefer path-dependent strategies over strategies which are defined via a partition of the set of nodes into those at which the gambler stops and those at which he continues. (See also Ebert and Strack [@EbertStrack:16] and Henderson et al. [@HendersonHobsonTse:17] for discussion of a related optimal stopping problem with probability weighting based on a diffusion process.) He et al. [@HeHuOblojZhou:17] argue further that the path-dependent strategy can be replaced by a randomized strategy under which the decision about whether to stop at a node depends not on the path history but rather on the realization of an independent uniform random variable. This preference for randomization mirrors our result, but takes a different form. In our perpetual problem the agent chooses a randomized pair of levels and then follows a threshold strategy based on these levels. In He et al. [@HeHuOblojZhou:17] a zero-one decision about whether to stop at a node is replaced by a probability of continuing, and the stopping rules which arise are not randomized threshold rules. Many optimal stopping models in the economics literature predict that the agent will stop on first exit from an interval, which necessarily involves stopping either at the current maximum or the current minimum. If instead, observed behavior includes stopping at levels which are not equal to one of the running extrema of the process then this is evidence against the model.
(Strack and Viefers [@StrackViefers:17] present experimental evidence from a laboratory game that players do not follow threshold strategies; instead players visit the same price three times on average before stopping.) But, our results imply that the converse is not true. Even if agents only ever take a decision to sell at a time when the process is at a new maximum or new minimum, this does not necessarily mean that agents are following a pure threshold rule. They could have any target distribution, as for example in Proposition \[prop:pt\], but be realizing this target distribution via a randomized threshold rule. Azéma J. and M. Yor, 1979, Une solution simple au problème de Skorokhod, [*Sem. de Prob. XIII*]{}, 90-115. Barberis N., 2012, A Model of Casino Gambling, [*Management Science*]{}, 58, 35-51. Camerer, C. F., and T. Ho, 1994, Violations of the betweenness axiom and nonlinearity in probabilities, [*Journal of Risk and Uncertainty*]{}, 8, 167-196. Cerreia-Vioglio, S., D. Dillenberger, P. Ortoleva, and G. Riella, 2017, Deliberately Stochastic, Working paper, Columbia University. Dayanik S. and I. Karatzas, 2003, On the optimal stopping problem for one-dimensional diffusions, [*Stoc. Proc. & Appl.*]{}, 107, 2, 173-212. Durrett R., 1991, [*Probability: Theory and Examples*]{}, Wadsworth, Pacific Grove, California. Ebert S. and P. Strack, 2015, Until the Bitter End: On Prospect Theory in a Dynamic Context, [*American Economic Review*]{}, 105(4), 1618-1633. Hall W.J., 1968, On the Skorokhod embedding theorem, [*Technical Report 33*]{}, Stanford University, Department of Statistics. Henderson V., D. Hobson and M. Zeng, 2017, Cautious Stochastic Choice, Optimal Stopping and Deliberate Randomization, Working paper, University of Warwick. Henderson V., D. Hobson and A.S.L. Tse, 2017, Randomized Strategies and Prospect Theory in a Dynamic Context, [*Journal of Economic Theory*]{}, 168, 287-300. He X., S. Hu, J. Obloj and X.Y.
Zhou, 2017, Path dependent and randomized strategies in Barberis’ Casino Gambling model, [*Operations Research*]{}, 65, 1, 97-103. Hirsch F., C. Profetta, B. Roynette and M. Yor, 2011, Constructing self-similar martingales via two Skorokhod embeddings, [*Sem. de Prob. XLIII*]{}, 451-503, LNM 2006, Springer-Verlag, Berlin. Loomes G. and R. Sugden, 1982, Regret theory: An alternative theory of rational choice under uncertainty, [*Economic Journal*]{}, 92, 805-824. Machina M., 1985, Stochastic Choice Functions Generated from Deterministic Preferences over Lotteries, [*Economic Journal*]{}, 95, 379, 575-594. Quiggin J., 1982, A Theory of Anticipated Utility, [*Journal of Economic Behaviour and Organisation*]{}, 3, 323-343. Rogers L.C.G. and D. Williams, 2000, [*Diffusions, Markov Processes and Martingales: Itô Calculus*]{}, Wiley, Chichester. Rogozin B.A., 1966, On the distribution of functionals related to boundary problems for processes with independent increments, [*Th. Prob. Appl.*]{}, 11, 580-591. Skorokhod A.V., 1965, [*Studies in the theory of random processes*]{}, Addison-Wesley, Reading, Mass. Strack P. and P. Viefers, 2017, Too Proud to Stop: Regret in Dynamic Decisions, [*SSRN Working paper, id2465840*]{}. Tversky, A. and D. Kahneman, 1992, Advances in Prospect Theory: Cumulative Representation of Uncertainty, [*Journal of Risk and Uncertainty*]{}, 5, 297-323. Wakker P., 2010, Prospect Theory for Risk and Ambiguity, Cambridge University Press. Xu Z.Q. and X.Y. Zhou, 2013, Optimal stopping under probability distortion, [*Ann. Appl. Prob.*]{}, 23, 1, 251-282. Extension to other state spaces for the process in natural scale ================================================================ The range of $X$ is unbounded below but bounded above ----------------------------------------------------- In this case we may assume that ${I}^X = (-\infty,0)$ or $(-\infty, 0]$.
The analysis goes through almost unchanged except that now $X$ is a convergent sub-martingale and $Q^X({\mathcal T}) = Q^X({\mathcal T}_R) = {\mathcal P}_{\geq x}$ where ${\mathcal P}_{\geq x} = \{ \nu \in {\mathcal P}((-\infty,0]) : \int z \nu(dz) \geq x \}$. The range of $X$ is bounded --------------------------- Suppose $X$ is bounded. In this case $Q^X({\mathcal T}) = Q^X({\mathcal T}_R) = {\mathcal P}_{=x}$ where ${\mathcal P}_{=x} = \{ \nu \in {\mathcal P}(\bar{I}^X) : \int z \nu(dz) = x \}$. To see this note that $X$ is a uniformly integrable martingale and not just a super-martingale. Therefore we must have ${\mathbb E}[X_\tau] = \lim {\mathbb E}[X_{\tau \wedge t}] = x$ and hence $Q^X({\mathcal T}) \subseteq {\mathcal P}_{=x}$. Conversely, by the same argument as in Lemma \[lem:Q=\], but this time with $v^*=0$ and $\nu_1 \equiv \nu$, we deduce that for any $\nu \in {\mathcal P}_{=x}$ there exists a randomization $\eta$ such that ${\mathcal L}(X_{\tau^X_\eta}) = \nu$. It follows that $Q^X({\mathcal T}) = Q^X({\mathcal T}_R) = {\mathcal P}_{=x}$. The proofs of Lemma \[lem:atoms\], Lemma \[lem:approx\] and Theorem \[thm:main1\] go through unchanged. The range of $X$ is ${\mathbb R}$ --------------------------------- Now suppose $I^X$ is unbounded above and below. By the Rogozin trichotomy (Rogozin [@Rogozin:66]) $-\infty = \liminf_t X_t < x < \limsup_t X_t = \infty$ and $\lim_{t \uparrow \infty} X_t$ does not exist. In this case we must restrict ${\mathcal T}$ to the set of stopping times with ${\mathbb P}(\tau < \infty) = 1$. In the main text we set ${\mathcal T}_T = {\mathcal T}\cap \left( \cup_{\beta \leq y \leq \gamma, \beta, \gamma \in \bar{I}^Y} \{ \tau_{\beta,\gamma} \} \right)$ but we could equivalently write ${\mathcal T}_T = \cup_{(\beta,\gamma) \in {\mathcal D}_0} \{ \tau_{\beta,\gamma} \}$, where ${\mathcal D}_0 = \left( ([-\infty, y] \cap \bar{I}^Y) \times ([y,\infty] \cap \bar{I}^Y) \right) \setminus \{ (s^{-1}(-\infty), s^{-1}(\infty)) \}$.
We have to exclude the threshold rule $\tau_{s^{-1}(-\infty), s^{-1}(\infty)}$ since $\tau_{s^{-1}(-\infty), s^{-1}(\infty)} = \infty$ almost surely and $Y_\infty$ is not defined. In terms of threshold rules $\tau^X_{a,b}$ for $X$ we allow $a = -\infty$ or $b = \infty$ but not both. Then ${\mathcal T}_T = \{ \tau^X_{\beta,\gamma} : (\beta,\gamma) \in {\mathcal D}^X_0 \}$ where ${\mathcal D}^X_0 = {\mathcal D}^X \setminus \{(-\infty,\infty)\} = \left( [-\infty,x] \times [x,\infty] \right) \setminus \{(-\infty,\infty) \}$. In the definition of randomized threshold rules we can write ${\mathcal T}_R = \{ \tau_\zeta : \zeta \in {\mathcal P}({\mathcal D}_0) \}$ where ${\mathcal D}_0$ is as above, and similarly ${\mathcal T}_R = \{ \tau^X_\eta : \eta \in {\mathcal P}({\mathcal D}^X_0) \}$. When $I^X={\mathbb R}$ we claim that we have $Q^X({\mathcal T}) = Q^X({\mathcal T}_R) = {\mathcal P}({\mathbb R})$. Since stopping times are finite almost surely we must have $Q^X({\mathcal T}) \subseteq {\mathcal P}({\mathbb R})$ so it is sufficient to show that for any $\nu \in {\mathcal P}({\mathbb R})$ we have $\nu \in Q^X({\mathcal T}_R)$. Given $\nu \in {\mathcal P}({\mathbb R})$ let $A_\nu$ be an ${\mathcal F}_0$-measurable random variable with law $\nu$ and set $\tau = \inf \{u: X_u = A_\nu \}$. Then ${\mathcal L}(X_\tau) = {\mathcal L}(A_\nu) = \nu$. The proofs of Lemma \[lem:atoms\], Lemma \[lem:approx\] and Theorem \[thm:main1\] go through unchanged. Other results ------------- A proof of Proposition \[prop:pt\] is given in Xu and Zhou [@XuZhou:13 Theorem 5.1], but since it is short, elegant and pertinent to our main results we include it here. From the characterization of $Q({\mathcal T})$ we have that a quantile function must satisfy $\int_0^1 G(u) du \leq y$. By construction $G^*$ has this property, and since $v'$ and $w'$ are decreasing, $G^*$ is increasing. Hence $G^*$ has the properties required of a quantile function of a distribution which can be obtained by stopping $Y$.
On the other hand, for any non-negative function $G$ with $\int_0^1 G(u) du \leq y$, $$\begin{aligned} \int_0^1 w'(1-u) v(G(u)) du & = & \int_0^1 [w'(1-u) v(G(u)) - \lambda^* G(u)] du + \lambda^* \int_0^1 G(u) du \\ &\leq & \int_0^1 \sup_{g>0} [w'(1-u) v(g) - \lambda^* g] du + \lambda^* y \\ & = & \int_0^1 [w'(1-u) v(G^*(u)) - \lambda^* G^*(u)] du + \lambda^* y = \int_0^1 w'(1-u) v(G^*(u)) du .\end{aligned}$$ [^1]: University of Warwick, Coventry, CV4 7AL. UK. Email: vicky.henderson@warwick.ac.uk, d.hobson@warwick.ac.uk, m.zeng@warwick.ac.uk. We would like to thank participants at the 10th Oxford-Princeton workshop (May 25-26, 2017) for helpful comments. Matthew Zeng is supported by a [Chancellor’s International Scholarship]{} at the University of Warwick.
--- author: - 'E. O. Zavarygin$^{1,2}$[^1] and A. V. Ivanchik$^{1,2}$[^2]' date: 'Received 05 December, 2014' title: | Variation of the baryon-to-photon ratio\ due to decay of dark matter particles --- INTRODUCTION ============ In the last decade, cosmology has passed into the category of precision sciences. Many cosmological parameters are currently determined with a high precision that occasionally reaches fractions of a percent (Ade et al. 2014). One of such parameters is the baryon-to-photon ratio $\eta \equiv n_{\rm b}/n_{\gamma}$, where $n_{\rm b}$ and $n_{\gamma}$ are the baryon and photon number densities in the Universe, respectively. In the standard cosmological model, the present value of $\eta$ is assumed to have been formed upon completion of electron-positron annihilation several seconds after the Big Bang and has not changed up to now. The value of $n_{\gamma}$ associated with the cosmic microwave background (CMB) photons is defined by the well-known relation $$n_{\gamma}=\frac{2\zeta(3)}{\pi^2}\left( \frac{kT}{\hbar c}\right)^3=410.73\left(\frac{T}{2.7255\,\text{K}}\right)^3\text{cm}^{-3},$$ where $\zeta(x)$ is the Riemann zeta function, $k$ is the Boltzmann constant, $\hbar$ is the Planck constant, $c$ is the speed of light, and $T$ is the CMB temperature at the corresponding epoch. The CMB temperature is currently determined with a high accuracy and is $T_0 = 2.7255(6)\,$K at the present epoch (Fixsen 2009); for other epochs, it is expressed by the relation $T=T_0(1 + z)$, where $z$ is the cosmological redshift at the corresponding epoch. Thus, given $n_{\gamma}$, a relation between the parameter $\eta$ and $\Omega_{\rm b}$, the relative baryon density in the Universe, can be obtained (Steigman 2006): $$\eta = 273.9\times10^{-10}\Omega_{\rm b}h^2,$$ where $h = 0.673(12)$ is the dimensionless Hubble parameter at the present epoch (Ade et al. 2014). 
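The two relations above are easy to check numerically. The sketch below is our own (CODATA constant values; the $\Omega_{\rm b}h^2$ value is illustrative, chosen near the Planck figure), and it reproduces $n_{\gamma} \approx 410.7\,\text{cm}^{-3}$ and $\eta \approx 6\times10^{-10}$:

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
ZETA3 = 1.2020569031595943  # Riemann zeta(3)

def n_gamma(T):
    """CMB photon number density (cm^-3) at temperature T (kelvin)."""
    n_per_m3 = (2 * ZETA3 / math.pi ** 2) * (K_B * T / (HBAR * C)) ** 3
    return n_per_m3 * 1e-6  # convert m^-3 to cm^-3

def eta_from_omega_b(omega_b_h2):
    """Baryon-to-photon ratio via eta = 273.9e-10 * Omega_b h^2."""
    return 273.9e-10 * omega_b_h2

n0 = n_gamma(2.7255)            # about 410.7 photons per cm^3 today
eta = eta_from_omega_b(0.0221)  # about 6.05e-10 for a Planck-like Omega_b h^2
```

The first number matches the $410.73\,\text{cm}^{-3}$ quoted above; the second matches the CMB-anisotropy value of $\eta$ discussed below.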
According to present views, the baryon density, which is the density of ordinary matter (atoms, molecules, planets and stars, interstellar and intergalactic gases), does not exceed 5% of the entire matter filling the Universe, while 95% of the density in the Universe is composed of unknown forms of matter/energy that manifest themselves (for the time being) gravitationally (see, e.g., Gorbunov and Rubakov 2008). At present, observations allow $\Omega_{\rm b}$ to be independently estimated for four cosmological epochs:\ (i) the epoch of Big Bang nucleosynthesis ($z_{\rm BBN}\sim10^9$; see, e.g., Steigman et al. 2007);\ (ii) the epoch of primordial recombination ($z_{\rm PR}\simeq1100$; see, e.g., Ade et al. 2014);\ (iii) the epoch associated with the Ly$\alpha$ forest ($z\sim2\div3$; i.e., $\sim$10 Gyr ago; see, e.g., Rauch 1998; Hui et al. 2002);\ (iv) the present epoch ($z = 0$; see, e.g., Fukugita and Peebles 2004). For the processes at the epochs of Big Bang nucleosynthesis and primordial recombination, $\eta$ is one of the key parameters determining their physics. For these epochs, the methods of estimating $\eta$, (i) comparing the observational data on the relative abundances of the primordial light elements (D, $^4$He, $^7$Li) with the predictions of the Big Bang nucleosynthesis theory and (ii) analyzing the CMB anisotropy, give the most accurate estimates of $\eta$ to date that coincide, within the observational error limits: $\eta_{\rm BBN} = (6.0 \pm 0.4) \times 10^{-10}$ (Steigman 2007) and $\eta_{\rm CMB} = (6.05 \pm 0.07) \times 10^{-10}$ (Ade et al. 2014). This argues for the correctness of the adopted model of the Universe and for the validity of the standard physics used in theoretical calculations. 
However, it should be noted that at present, as the accuracy of observations increases, some discrepancy between the results of observations and the abundances of the primordial elements predicted in the Big Bang nucleosynthesis theory has become evident. The “lithium problem” is well known (see, e.g., Cyburt et al. 2008); the situation with helium and deuterium is also not entirely satisfactory (for a detailed discussion of these problems, see Ivanchik et al. 2015). These inconsistencies can be related both to the systematic and statistical errors of experiments and to the manifestations of new physics (physics beyond the standard model). The determination of $\Omega_{\rm b}$ and the corresponding $\eta$ at epochs (iii) and (iv) has a considerably lower accuracy. The value of $\eta$ measured for the epoch associated with the Ly$\alpha$ forest agrees in order of magnitude with $\eta_{\rm BBN}$ and $\eta_{\rm CMB}$ but is also strongly model-dependent (e.g., Hui et al. 2002). The measured $\Omega_{\rm b}$ and $\eta$ at the present epoch are at best half those predicted by Big Bang nucleosynthesis calculations and CMB anisotropy analysis. The so-called problem of missing baryons (see, e.g., Nicastro et al. 2008) is associated with this. It is hoped that further observations and new experiments will allow $\Omega_{\rm b}$ for different cosmological epochs and the corresponding $\eta$ to be determined with a higher accuracy. In turn, this can become a powerful tool for investigating the physics beyond the standard model, where the values of $\eta$ for different cosmological epochs can be different. Constraints on the deviation of $\eta$ make it possible to discriminate among theoretical models that admit such a change. In this paper, we discuss the possibility of a change in $\eta$ on cosmological time scales attributable to the decays of dark matter particles. For example, supersymmetric particles (see, e.g., Jungman et al. 1996; Bertone et al. 
2004; and references therein) can act as such particles; some of them can decay into the lightest stable supersymmetric particles and standard model particles (baryons, leptons, photons, etc.; see, e.g., Cirelli et al. 2011): $${\rm X} \rightarrow \chi + ... \begin{cases} {\gamma + \gamma +...} \\ {\rm p + \bar{p} +...}, \end{cases}$$ where X and $\chi$ are unstable and stable dark matter particles, respectively. This can lead to a change in $\eta$. The currently available observational data suggest that the dark matter density in the Universe is approximately a factor of 5 larger than the baryon density: $\Omega_{\rm CDM}\simeq 5\Omega_{\rm b}$, i.e., the relation between the number density of dark matter particles and the number densities of baryons and photons in the Universe is $n_{\rm CDM}\simeq 5(m_{\rm b}/m_{\rm CDM})n_{\rm b} = 5(m_{\rm b}/m_{\rm CDM})n_{\gamma}\eta$. Assuming that the changes in the number densities of various types of particles in the decay reactions of dark matter particles are related as $\Delta n_{\rm CDM} \sim \Delta n_{\rm b}$ and $\Delta n_{\rm CDM} \sim \Delta n_{\gamma}$, it is easy to see that the parameter $\eta$ is most sensitive precisely to the change in baryon number density. In the decays of dark matter particles with masses $m_{\rm CDM} \sim 10$GeV$-$1TeV, the change in $\eta$ as a result of the change in baryon number density could reach $\Delta\eta/\eta \sim 0.01 - 1$ [^3]. The change in photon number density and the change in $\eta$ attributable to it will be approximately a billion times smaller. Therefore, in our paper we focused our attention on the possibility of a change in $\eta$ due to the decays of dark matter particles with the formation of a baryon component. 
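The order-of-magnitude range quoted above can be reproduced with a few lines of arithmetic. The sketch below is our illustration (not from the paper): it assumes each decay ${\rm X} \rightarrow \chi\,{\rm p}\,\bar{\rm p}$ adds two (anti)baryons, that a fraction $\alpha$ of the dark matter decays, and that annihilation is neglected, so $\Delta(n_{\rm b}+n_{\bar{\rm b}}) \simeq 2\,\alpha B_h n_{\rm CDM} \simeq 10\,\alpha B_h (m_{\rm p}/m_{\rm CDM})\,n_{\rm b}$:

```python
M_P = 0.938  # proton mass, GeV

def delta_eta_over_eta(m_cdm_gev, alpha=1.0, b_h=1.0):
    """Crude upper estimate of the relative change in eta if each decay
    X -> chi + p + pbar adds two (anti)baryons, ignoring annihilation.
    Uses n_CDM ~ 5 (m_p / m_CDM) n_b."""
    return 2.0 * alpha * b_h * 5.0 * M_P / m_cdm_gev

print(delta_eta_over_eta(10.0))    # ~0.9   (m_CDM = 10 GeV)
print(delta_eta_over_eta(1000.0))  # ~0.009 (m_CDM = 1 TeV)
```

For masses between 10 GeV and 1 TeV this spans $\Delta\eta/\eta \sim 0.01 - 1$, consistent with the estimate in the text.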
Despite the negligible contribution to the change in $\eta$ from the photon component, a comparison of the predicted gamma-ray background (dark matter particle decay products) with the observed isotropic gamma-ray background in the Universe can serve as an additional source of constraints on the decay models of dark matter particles. The photons produced by such processes are high-energy ones. The observational data on the isotropic gamma-ray background constrain their possible number in the Universe, which, in turn, narrows the range of admissible parameters of dark matter particles and determines the maximum possible number of baryons produced in such decays, together with the corresponding change in the baryon-to-photon ratio. Thus, the observational data on the gamma-ray background, along with the cosmological experiments described above, serve as a source of constraints on the decay models of dark matter particles and on the possible change in $\eta$. We note in advance that at present the constraints from isotropic gamma-ray background observations are more severe than those following from cosmological experiments. Depending on the lifetime of dark matter particles, a statistically significant change in $\eta$ can occur at different cosmological epochs. We consider lifetimes $\tau$ in the following range: $t_{\rm BBN}\ll \tau \lesssim t_0$, where $t_{\rm BBN}\simeq3\,$min is the age of the Universe at the end of the epoch of Big Bang nucleosynthesis and $t_0 \simeq 13.8\,$Gyr is the present age of the Universe (Ade et al. 2014). The decays of dark matter particles with short lifetimes ($\tau \lesssim t_{\rm BBN}$) can significantly change the chemical composition of the Universe (see, e.g., Jedamzik 2004; Kawasaki et al. 2005). 
The available observational data on the abundances of the primordial light elements (D, $^4$He, $^7$Li) agree well with the predictions of Big Bang nucleosynthesis calculations, which, in turn, limits the possibility of such a change. For long lifetimes exceeding the present age of the Universe ($\tau > t_0$), the change in $\eta$ at the above four cosmological epochs will be so small that it is unlikely to be detectable without a significant improvement in observational capabilities. THE BARYON-TO-PHOTON RATIO IN MODELS WITH PARTICLE DECAY {#model} ======================================================== A large class of models with decaying dark matter particles suggests the existence of a lightest stable particle that we will designate as $\chi$. An unstable dark matter particle, which we will designate as X, will decay with time into a $\chi$-particle and standard model particles. Among such reactions there can be reactions of the type $\rm X \rightarrow \rm{\chi \, p \, \bar{p}}$, whose influence on $\eta$ is investigated in this paper[^4]. The fraction of the decay channels of X-particles whose products are hadrons (in our case, protons and antiprotons) in the total number of decay channels is characterized by the hadronic branching ratio $B_h$, which equals $B_h = 1$ in our case. The currently available observational data argue for the absence (or a negligible amount) of relic antimatter (baryon-asymmetric Universe). For this reason, the parameter $\eta$ in the standard cosmological model is defined as the ratio of the baryon number density to the photon number density. 
Since in our model the decays of X-particles will lead to the production of protons and antiprotons, we will define the parameter $\eta$ as the ratio of the sum of the baryon and antibaryon number densities to the photon number density in the Universe: $$\begin{aligned} \eta(z) = \frac{n_{\rm b}(z) + n_{\rm \bar{b}}(z)}{n_{\gamma}(z)}&&\\ \nonumber = \frac{n_{\rm b}^{\rm BBN}(z) + \Delta n_{\rm p}(z) + \Delta n_{\rm \bar{p}}(z)}{n_{\gamma}^{\rm BBN}(z)} &=& \eta_{\rm BBN} + \Delta \eta (z), \label{eta_definition}\end{aligned}$$ where $n_{\rm b}^{\rm BBN}$ and $n^{\rm BBN}_{\gamma}$ are the baryon and photon number densities corresponding to $\eta_{\rm BBN} = n^{\rm BBN}_{\rm b} /n^{\rm BBN}_{\gamma}$; $\Delta n_{\rm p}(z)$ and $\Delta n_{\rm \bar{p}}(z)$ are the number densities of X-particle decay products: protons and antiprotons, respectively (in the model under consideration, $\Delta n_{\rm p}(z) = \Delta n_{\rm \bar{p}}(z)$, i.e., the generated baryonic charge is $\Delta B = 0$). It is this value of (2) that would be measured when determining the speed of sound of the baryon-photon plasma at the epoch of CMB anisotropy formation in the case of proton and antiproton generation in accordance with the formula (see, e.g., Gorbunov and Rubakov 2010) $$u_s^2=\frac{\delta p}{\delta \rho}=\frac{c^2}{3(1+3\rho_{\rm B\bar{\rm B}}/4\rho_{\gamma})},$$ where $\rho_{\rm B\bar{\rm B}}=\rho_{\rm B}+\rho_{\bar{\rm B}}$ is the sum of the baryon and antibaryon densities in the Universe. In the standard cosmological model, this quantity coincides with the baryon density of the Universe $\rho_{\rm B}$. Thus, the baryon-to-photon ratio determined when analyzing the CMB anisotropy is also the ratio of the sum of the baryon and antibaryon number densities to the photon number density and has the following form in the presence of X-particle decay products: $$\eta_{\rm CMB} = \left. 
\frac{n_{\rm b}(z) + n_{\rm \bar{b}}(z)}{n_{\gamma}(z)}\right|_{z=z_{\rm PR}} = \eta_{\rm BBN} + \Delta \eta (z_{\rm PR}),$$ Note that for very early decays the antiprotons being produced have time to annihilate with protons, and $\eta$ again returns to its initial value $\eta = \eta_{\rm BBN}$. The decays of X-particles with long lifetimes will occur in an already fairly expanded Universe; consequently, the antiprotons being produced may not have time to annihilate. Thus, at later epochs $\eta$ can differ from $\eta_{\rm BBN}$ and $\eta_{\rm CMB}$. However, during the formation of large-scale structure, when halos form in which the matter density considerably exceeds the average, an excess of antiprotons would lead to enhanced gamma-ray emission from them. INFLUENCE OF THE DECAY OF DARK MATTER PARTICLES ON THE CHANGE IN [$\eta$]{} {#Data} =========================================================================== The evolution of the number densities of X-particles, $\chi$-particles, protons, and antiprotons in the Universe is described by the system of kinetic equations $$\begin{aligned} \label{NLSP_decay} \frac{dn_{\rm X}}{dt} &+ 3Hn_{\rm X} \;\,=\; -\Gamma n_{\rm X},\\ \label{LSP_ann} \frac{dn_{\chi}}{dt} &+ 3Hn_{\chi} \;\;=\; \Gamma n_{\rm X},\\ \label{proton_ann} \frac{dn_{\rm p,\bar{p}}}{dt} &+ 3Hn_{\rm p,\bar{p}} =\; - \langle \sigma v\rangle^{\text{ann}}_{\rm p\bar{p}}n_{\rm p} n_{\rm \bar{p}} + B_h \Gamma n_{\rm X},\end{aligned}$$ where Eq. 
(7) consists of two equations describing the evolution of the proton and antiproton number densities, $n_{\rm p}$ and $n_{\rm \bar{p}}$, respectively; $n_{\rm X}$ and $n_{\chi}$ are the number densities of X- and $\chi$-particles, respectively; $H = \dot{a}/a$ is the Hubble parameter; $a(t)$ is the scale factor; $\Gamma = 1/\tau$ is the decay rate of X-particles; $\langle \sigma v\rangle^{\text{ann}}_{\rm p\bar{p}}$ is the product of the relative velocity $v$ and proton-antiproton annihilation cross section $\sigma_{\rm ann}$ averaged over the momentum with a distribution function. In a wide energy range (10MeV $\lesssim T_{\rm \bar{p}} \lesssim$ 10 GeV), this quantity may be considered a constant, $\langle \sigma v\rangle^{\text{ann}}_{\rm p\bar{p}} = 10^{-15}\,$cm$^3$s$^{-1}$ (see, e.g., Stecker 1967; Weniger et al. 2013). The parameters of the standard cosmological model presented in Table 1 are used to solve Eqs. (5)–(7).

Table 1. Parameters of the standard cosmological model.

  Parameter               Value                          Reference
  ----------------------- ------------------------------ ---------
  $\Omega_{\text{R}}$     $5.46\times 10^{-5}$           1
  $\Omega_{\text{CDM}}$   0.265                          2
  $\Omega_{\text{b}}$     0.05                           2
  $\Omega_{\Lambda}$      0.685                          2
  $H_0$                   67.3 km s$^{-1}$ Mpc$^{-1}$    2
  $t_0$                   13.8 Gyr                       2

Apart from the decays of dark matter particles, we investigated the processes of their annihilation. We showed that the influence on the change in $\eta$ of the annihilation of dark matter particles, $\chi\bar{\chi} \rightarrow \rm p \bar{p}$, with an annihilation cross section $\langle \sigma v\rangle^{\text{ann}}_{\chi\bar{\chi}} = 10^{-26}\,$cm$^3$s$^{-1}$ (see, e.g., Jungman et al. 1996) can be neglected on all the time scales of interest. This implies the absence of the terms responsible for the annihilation of X- and $\chi$-particles in Eq. (7). 
The change in $\eta$ attributable to the annihilation of dark matter particles with masses 10GeV–1TeV alone is negligible even at the epoch of Big Bang nucleosynthesis (at which the contribution from the annihilation is maximal): $|\Delta\eta/\eta_{\rm BBN}|<10^{-13}\div10^{-11}$ (the upper limit corresponds to a lower $\chi$-particle mass). To determine the initial conditions for Eqs. (5) and (6), we introduce a parameter $\alpha$ defining the fraction (by the number of particles) of unstable dark matter particles in the entire dark matter at the epoch of Big Bang nucleosynthesis. For the range of lifetimes $t_{\rm BBN} \ll \tau \lesssim t_0$ we consider, the entire dark matter at the present epoch will be composed of stable $\chi$-particles some of which ($\alpha$) were produced by the decays of X-particles and some ($1-\alpha$) are the relic ones, i.e., the $\chi$-particle mass determines the initial conditions for the X-particles as well. The availability of reliable data on the parameter $\eta$ at the epoch of Big Bang nucleosynthesis allows $\eta_{\rm BBN}$ to be used to determine the initial condition for Eq. (7). 
Thus, when solving the system of equations (5)–(7), we use the following initial conditions: $$z^0=z_{\rm BBN}=10^9, \quad t^0=\frac{1}{2H(z_{\rm BBN})},$$ $$\quad n^0_{\rm p}=\eta_{\rm BBN} n_{\gamma}(z_{\rm BBN}), \quad n^0_{\bar{\text{p}}}=0,$$ $$n^0_{\chi}=(1-\alpha)\frac{\Omega_{\rm CDM}\rho_{\rm c}}{m_{\chi}c^2}, \quad n^0_{\rm X}=\alpha\frac{\Omega_{\rm CDM}\rho_{\rm c}}{m_{\chi}c^2},$$ Let us write the system of equations (5)–(7) in a comoving volume that changes with time as $\sim a^3$, i.e., $\sim(1 + z)^{-3}$: $$\begin{aligned} \label{NLSP_decay_Y} \frac{dY_{\rm X}}{dt} & = -\Gamma Y_{\rm X},\\ \label{LSP_ann_Y} \frac{dY_{\chi}}{dt} & = \Gamma Y_{\rm X},\\ \label{proton_ann_Y} \frac{dY_{\rm p,\bar{p}}}{dt} & = - \langle \sigma v\rangle^{\rm ann}_{\rm p\bar{p}}Y_{\rm p} Y_{\rm \bar{p}}(1+z)^3 + B_h \Gamma Y_{\rm X},\end{aligned}$$ where $Y_{i}= n_{i}/(1+z)^3$ is the number density of the ith type of particles in the comoving volume. In such a form, Eqs. (9) and (10) have obvious analytical solutions that describe the evolution of the number densities of X- and $\chi$-particles in the comoving volume: $$Y_{\rm X}(t) = Y_{\rm X}^0 e^{-t/\tau}, \label{NLSP_solution}$$ $$Y_{\chi}(t) = Y_{\chi}^0 + Y_{\rm X}^0(1-e^{-t/\tau}),$$ where $Y_{\rm X}^0=n^0_{\rm X}/(1+z^0)^3$ and $Y_{\chi}^0=n_{\chi}^0/(1+z^0)^3$ are the initial number densities of X- and $\chi$-particles in the comoving volume. Substituting solution (12), $\Gamma = 1/\tau$, and $B_h = 1$ into Eq. (11), we obtain the final system of equations describing the evolution of the proton and antiproton number densities in the model under consideration: $$\frac{dY_{\rm p,\bar{p}}}{dt} = - \langle \sigma v\rangle^{\rm ann}_{\rm p\bar{p}}Y_{\rm p} Y_{\rm \bar{p}}(1+z)^3 + \frac{Y_{\rm X}^0}{\tau}e^{-t/\tau}. 
\label{prot_antiprot_eq}$$ The corresponding change in the baryon-to-photon ratio, $$\frac{\Delta\eta(z)}{\eta_{\rm BBN}}=\frac{\eta(z) - \eta_{\rm BBN}}{\eta_{\rm BBN}},$$ determined from the solution of the system of equations (14) for $m_{\chi} = 10\,$GeV, $\alpha = 0.5$, and various $\tau$ is presented in Fig. 1a. Note that the parameters $\alpha$ and $m_{\chi}$ enter into the system of equations (14) in the form of a ratio. Therefore, the result presented in Fig. 1 also corresponds to the case of larger masses of dark matter particles provided that $\alpha/m_{\chi}$ is conserved. ![image](fig1.eps){width="75.00000%"} We see that the change in the baryon-to-photon ratio in the model under consideration for lifetimes $\tau \gtrsim 10^{12}\,$s can reach $\Delta\eta(z)/\eta_{\rm BBN} \sim 0.01-1$, which is a potentially observable value. We also see that the number densities of the protons and antiprotons in the comoving volume produced by late decays ($\tau > 10^{13}\,$s) in an already fairly expanded Universe freeze in such a way that $\eta$ can differ significantly from $\eta_{\rm BBN}$ and $\eta_{\rm CMB}$ by the present epoch. Note, however, that in the decays $\rm X \rightarrow \chi p\bar{p}$ with the conservation of baryonic charge (i.e., $\Delta n_{\rm p}(t) = \Delta n_{\rm \bar{p}}(t)$), $\Delta \eta/\eta_{\rm BBN} \sim 1$ at the present epoch would imply almost equal numbers of protons and antiprotons in the Universe, while our Universe is significantly asymmetric in baryonic charge. The existence of such a number of antiprotons in the Universe would also give rise to an excess of the gamma-ray background from the annihilation of protons with antiprotons (see the next section).\ Figure 1b presents the dependence $\Delta \eta(\tau)/\eta_{\rm BBN}$ of the change in $\eta$ at the epoch of primordial recombination (the epoch for which the parameter $\eta$ has been measured most precisely to date) on the lifetime of X-particles $\tau$ for various $\alpha$. 
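The analytic solutions (12)–(13) for the comoving densities conserve the total comoving number of dark matter particles, which is easy to verify numerically. The sketch below (our illustration, not part of the paper; arbitrary units for the densities) encodes both solutions:

```python
import math

def Y_X(t, Y_X0, tau):
    """Eq. (12): comoving number density of unstable X-particles."""
    return Y_X0 * math.exp(-t / tau)

def Y_chi(t, Y_chi0, Y_X0, tau):
    """Eq. (13): comoving number density of stable chi-particles."""
    return Y_chi0 + Y_X0 * (1.0 - math.exp(-t / tau))

# Sanity check: Y_X(t) + Y_chi(t) = Y_X^0 + Y_chi^0 at all times,
# since every decayed X-particle produces exactly one chi-particle.
Y_X0, Y_chi0, tau = 1.0, 1.0, 1e15  # tau in seconds, densities arbitrary
for t in (0.0, 1e14, 1e15, 1e16):
    total = Y_X(t, Y_X0, tau) + Y_chi(t, Y_chi0, Y_X0, tau)
    assert abs(total - (Y_X0 + Y_chi0)) < 1e-12
```

When annihilation is negligible, the same exponential source term drives Eq. (14), so the comoving (anti)proton density approaches $Y^0_{\rm X}$ as $t \gg \tau$, which is the freeze-out behavior seen in Fig. 1.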
We see that the fraction of the change in $\eta$ at this epoch can reach $\Delta\eta/\eta_{\rm BBN} \sim 0.01-0.1$, which is also a potentially observable value. Figure 1c presents the dependence $\Delta\eta(\tau)/\eta_{\rm BBN}$ referring to the present epoch ($t_0 \simeq 13.8\,$Gyr). We see that the decay of X-particles in the model under consideration leads to a significant change in the present baryon density for $\tau > 10^{13}\,$s. However, the accuracy of its determination at an epoch $z \sim 2-3$ and at the present epoch is still considerably lower than that for the epochs of Big Bang nucleosynthesis and primordial recombination.\ The results obtained should not come into conflict with other observational data:\ (1) The decays with a predominance of hadronic channels at early epochs $\tau \ll t_{\rm PR}$ can significantly change the chemical composition of the Universe (see, e.g., Jedamzik 2004; Kawasaki et al. 2005). The available observational data on the abundances of the primordial light elements (D, $^4$He, $^7$Li) agree well with the predictions of Big Bang nucleosynthesis calculations, which, in turn, limits the possibility of such a change.\ (2) The decays with $\tau \sim t_{\rm PR}$ can distort the CMB spectrum and affect the angular CMB anisotropy (see, e.g., Chen and Kamionkowski 2004; Chluba and Sunyaev, 2012). Comparison with observational data also allows the possible models to be constrained severely.\ (3) The hadronic decays with $\tau \gtrsim t_{\rm PR}$ can give rise to an excess gamma-ray background from the annihilation of produced antiprotons with background protons and directly from the decays of X-particles (see the next section).\ In our case, we used data on the isotropic gamma-ray background to obtain constraints on the decays of particles with $t_{\rm PR} \lesssim \tau \lesssim t_0$, because the maximal change in the baryon-to-photon ratio is expected for such lifetimes of X-particles (see Fig. 1). 
As we will see, at present these constraints are more significant than those that can be given by present-day cosmological experiments. CONSTRAINT ON THE POSSIBLE CHANGE IN [$\eta$]{} ASSOCIATED WITH THE OBSERVATION OF AN ISOTROPIC GAMMA-RAY BACKGROUND {#gamma_flux} ==================================================================================================================== As was shown by Cirelli et al. (2011), apart from protons and antiprotons, photons and leptons will also be present among the end decay products of dark matter particles, with their fraction considerably exceeding the fraction of baryons even in the case of $B_h = 1$ (i.e., when the decays completely run via hadronic channels). The reason is that apart from protons and antiprotons, mesons are produced in the hadronization process, which contribute to the photon and lepton components. In addition, the appearance of an antiproton fraction in the Universe will be accompanied by the formation of an additional gamma-ray background from the annihilation of proton-antiproton pairs. The main gamma-ray background as a result of such a process will arise from the decay of the $\pi^0$ meson produced by the proton-antiproton annihilation (Stecker 1967; Steigman 1976). Both these processes, which can be represented schematically as $$\begin{aligned} {\rm X} \rightarrow \chi + ... \begin{cases} {\gamma + \gamma +...} \\ {\rm p + \bar{p}} \rightarrow \begin{cases} \pi^0 \;\, \rightarrow & \gamma + \gamma\\ \pi^{\pm} \, \rightarrow & \mu^{\pm} + \nu_{\mu}(\tilde{\nu_{\mu}}), \end{cases} \end{cases} \\ \nonumber \mu^{\pm}\rightarrow e^{\pm} + \nu_{e}(\tilde{\nu_{e}}) + \nu_{\mu}(\tilde{\nu_{\mu}}),\end{aligned}$$ will contribute to the isotropic gamma-ray background in the Universe. ![image](fig2.eps){width="60.00000%"} We calculate the corresponding gamma-ray background by taking into account its propagation over cosmological distances. 
Note that photons of different energies at different cosmological epochs interact differently with the medium in which they propagate (see, e.g., Zdziarski and Svensson 1989; Chen and Kamionkowski 2004). More specifically, there is a transparency window: the photons with energies $E_{\gamma} < 10\,$GeV emitted at epochs $0 < z \lesssim 1000$ propagate almost without absorption and reach us in the form of an isotropic gamma-ray background. The formation of such a gamma-ray background is expected from the decays of X-particles with lifetimes $t_{\rm PR} \lesssim \tau \lesssim t_0$. ![image](fig3.eps){width="90.00000%"}

Table 2. Maximum admissible fraction $\alpha_{\rm max}$ of unstable dark matter particles, the corresponding maximum change in the baryon-to-photon ratio, and the redshift $z^{*}$ at which the change in $\eta$ is maximal.

  $\tau$, s   $\alpha_{\rm max}$ ($m_{\chi}=10\,$GeV)   $\alpha_{\rm max}$ ($m_{\chi}=100\,$GeV)   $\alpha_{\rm max}$ ($m_{\chi}=1000\,$GeV)   $(\Delta\eta/\eta_{\rm BBN})_{\rm max}$   $z^{*}$
  ----------- ----------------------------------------- ------------------------------------------ ------------------------------------------- ----------------------------------------- --------
  $10^{14}$   $5\times10^{-6}$                          $5\times10^{-5}$                           $5\times10^{-4}$                            $2.3\times10^{-6}$                        120
  $10^{15}$   $5\times10^{-7}$                          $5\times10^{-6}$                           $5\times10^{-5}$                            $4.2\times10^{-7}$                        18
  $10^{16}$   $10^{-7}$                                 $10^{-6}$                                  $10^{-5}$                                   $10^{-7}$                                 2.2
  $10^{17}$   $10^{-8}$                                 $10^{-7}$                                  $10^{-6}$                                   $10^{-8}$                                 0

The general formula describing the intensity of the isotropic gamma-ray background $I_{\gamma}(E_{\gamma})$ (keV$\cdot$cm$^{-2}$s$^{-1}$sr$^{-1}$keV$^{-1}$) from various processes is (see, e.g., Peacock 2010) $$\begin{aligned} &I_{\gamma}(E_{\gamma}) = E_{\gamma}\frac{d\Phi_{\gamma}}{d\Omega dE_{\gamma}}& \\ \nonumber &= \frac{c}{4\pi}\int\limits_0^{1000} dz \frac{\epsilon_{\gamma}([1+z]E_{\gamma},z)}{H(z)(1+z)^4}e^{-\tau(E_{\gamma},z)},& \label{extragalactic gamma flux general}\end{aligned}$$ where $\Phi_{\gamma}$ is the gamma-ray photon flux per unit time through a unit area, $\tau(E_{\gamma}, z)$ is the optical depth describing the absorption of a photon emitted at epoch $z$ with energy $E_{\gamma}(1 + z)$, and $\epsilon_{\gamma}$ is the volume emissivity, which in our case is the sum of two terms, 
$$\epsilon_{\gamma}(E_{\gamma},z) = \epsilon^{\rm X}_{\gamma}(E_{\gamma},z) + \epsilon^{\rm p\bar{p}}_{\gamma}(E_{\gamma},z), \label{volume emissivity}$$ describing the two contributions to the gamma-ray background mentioned above. The first term $\epsilon^{\rm X}_{\gamma}$ is related to the photons that are the X-particle decay products; the second term $\epsilon^{\rm p\bar{p}}_{\gamma}$ is related to the photons that are the proton-antiproton annihilation products. These terms are described by the expressions $$\begin{aligned} \epsilon^{\rm X}_{\gamma}(E_{\gamma},z) = E_{\gamma} \Gamma n_{\rm X}(z) \frac{dN_{\gamma}}{dE_{\gamma}}\\\nonumber = E_{\gamma} \Gamma Y_{\rm X}(z) (1+z)^3\frac{dN_{\gamma}}{dE_{\gamma}} , \label{volume emissivity X}\end{aligned}$$ $$\begin{aligned} \epsilon^{\rm p\bar{p}}_{\gamma}(E_{\gamma},z) = E_{\gamma} \langle \sigma v\rangle^{\rm ann}_{\rm p\bar{p}} n_{\rm p}(z) n_{\rm \bar{p}}(z)\frac{dN_{\gamma}}{dE_{\gamma}}\\\nonumber = E_{\gamma} \langle \sigma v\rangle^{\rm\ ann}_{\rm p\bar{p}} Y_{\rm p}(z) Y_{\rm \bar{p}}(z)(1+z)^6 \frac{dN_{\gamma}}{dE_{\gamma}}, \label{volume emissivity pp}\end{aligned}$$ where $dN_{\gamma}/dE_{\gamma}$ is the spectrum of the photons (phot$\cdot$keV$^{-1}$) emitted in one event of X-particle decay (in Eq. (19)) and proton-antiproton annihilation (in Eq. (20)). In our calculations, we use the spectrum $dN_{\gamma}/dE_{\gamma}$ of the photons that are the X-particle decay products calculated in the PYTHIA package. The numerical code for computing the spectra of the dark matter particle decay and annihilation products was taken from the site[^5]; the details of using it can be found in Cirelli et al. (2011). We use the data for $m_{\rm X} - m_{\chi} \sim 10\,$GeV from the entire range of energy release accessible in the numerical code in such reactions, 10GeV$-$200TeV, to determine an upper bound on the possible change in $\eta$. The optical depths in (17) were also taken from this site. 
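The line-of-sight integral of Eq. (17) is straightforward to evaluate numerically once the volume emissivity is specified. The sketch below is purely illustrative (not from the paper): it neglects absorption ($e^{-\tau} \rightarrow 1$, the transparency window), uses the parameters of Table 1, and the emissivity `eps_toy` is a made-up placeholder, whereas the actual calculation uses PYTHIA spectra and tabulated optical depths:

```python
import math

# Cosmological parameters from Table 1 (Omega_M = Omega_CDM + Omega_b)
H0 = 67.3e5 / 3.086e24            # 67.3 km/s/Mpc converted to s^-1
OM_R, OM_M, OM_L = 5.46e-5, 0.315, 0.685
C_CM = 2.998e10                   # speed of light, cm/s

def hubble(z):
    """H(z) for the flat LambdaCDM model of Table 1, in s^-1."""
    return H0 * math.sqrt(OM_R * (1 + z)**4 + OM_M * (1 + z)**3 + OM_L)

def intensity(E, emissivity, zmax=1000.0, n=5000):
    """Trapezoidal evaluation of Eq. (17) with the absorption factor
    set to 1; `emissivity(E, z)` is a user-supplied volume emissivity."""
    dz = zmax / n
    total = 0.0
    for i in range(n + 1):
        z = i * dz
        w = 0.5 if i in (0, n) else 1.0   # trapezoidal weights
        total += w * emissivity((1 + z) * E, z) / (hubble(z) * (1 + z)**4)
    return C_CM / (4 * math.pi) * total * dz

# Toy emissivity (hypothetical): power-law sources active only at z < 10.
def eps_toy(E, z):
    return (1 + z)**3 / E if z < 10 else 0.0

print(intensity(100.0, eps_toy))
```

The result carries the units of the supplied emissivity times cm; the point of the sketch is only the structure of the redshift integral (the $(1+z)E$ blueshift of the argument and the $H(z)(1+z)^4$ dilution factor).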
The spectrum $dN_{\gamma}/dE_{\gamma}$ of the photons that are the proton-antiproton annihilation products was taken from Backenstoss et al. (1983). ![image](fig4.eps){width="90.00000%"} For comparison, Fig. 2 presents the gamma-ray background $d\Phi_{\gamma}/d\Omega dE_{\gamma}$ (phot$\cdot$cm$^{-2}$s$^{-1}$sr$^{-1}$keV$^{-1}$) attributable to the contribution from each of the two terms in (18). The observational data on the isotropic gamma-ray background (10keV$-$1GeV) taken from Sreekumar et al. (1998), Bloemen et al. (1999), Gruber et al. (1999), and Ajello et al. (2008) are also presented in the figure. We see that the gamma-ray background directly from the decays of X-particles allows stringent constraints to be placed on the decay processes. Figure 3 shows the total gamma-ray background $d\Phi_{\gamma}/d\Omega dE_{\gamma}$ with the inclusion of both terms in (18) for X-particle lifetimes $t_{\rm PR} \lesssim \tau \lesssim t_0$ and various values of the parameter $\alpha$. The gamma-ray background allowed by the currently available observational data corresponds to $\alpha_{\rm max}$, which characterizes the maximum admissible fraction of unstable X-particles with the corresponding lifetime. The values of $\alpha_{\rm max}$ for lifetimes $t_{\rm PR} \lesssim \tau \lesssim t_0$ are presented in Table 2. Figure 4 presents the fraction of the change in the baryon-to-photon ratio corresponding to $\alpha_{\rm max}$ for various X-particle lifetimes (for comparison, Fig. 4 also presents this change for $\alpha = 1$). We see that this change may reach $\Delta \eta(z)/\eta_{\rm BBN} \lesssim 10^{-5}$. The present-day observational accuracy is $\Delta\eta/\eta \sim 10^{-2} - 10^{-1}$. Note that the corresponding number of antiprotons in the Universe at the present epoch, related to $\Delta \eta$ via the relation $$\frac{n_{\rm \bar{p}}}{n_{\rm p}}\simeq \left. 
\frac{1}{2}\frac{\Delta\eta}{\eta_{\rm BBN}}\right|_{z=0}$$ is consistent with the observational data on antiprotons in cosmic rays (see, e.g., Adriani et al. 2010). Since the parameters $\alpha$ and $m_{\chi}$ enter into the system of equations (14) in the form of a ratio, the result obtained can be easily generalized to the case of larger masses of dark matter particles. For $\chi$-particles with masses $m_{\chi} = 10$, 100, and 1000 GeV, the derived parameter $\alpha_{\rm max}$ and the corresponding maximum change $\Delta\eta/\eta_{\rm BBN}$ in the baryon-to-photon ratio are listed in Table 2. The table also gives the cosmological redshift $z^{*}$ corresponding to the maximum change in $\eta$. CONCLUSIONS {#CONCLUSIONS} =========== We investigated the influence of the baryonic decay channels of dark matter particles $\rm X \rightarrow \chi p\bar{p}$ on the change in the baryon-to-photon ratio at different cosmological epochs. We showed that the present dark matter density $\Omega_{\rm CDM} \simeq 0.26$ is sufficient for the decay reactions of dark matter particles with masses 10GeV$-$1TeV to change the baryon-to-photon ratio up to $\Delta\eta(z)/\eta_{\rm BBN}\sim0.01-1$ (Fig. 1). However, such a change in $\eta$ would lead to an excess of the gamma-ray background from the annihilation of proton-antiproton pairs, the decay products of dark matter particles, and from the gamma-ray photons produced directly in the decays of dark matter particles. We used the observational data on the isotropic gamma-ray background to constrain the decay models of dark matter particles leading to a maximum effect of change in $\eta$: we determined the maximum admissible fraction of unstable dark matter particles with lifetimes $t_{\rm PR} \lesssim \tau \lesssim t_0$ and the change in $\eta$ related to them. The maximum possible change in the baryon-to-photon ratio attributable to such decays is $\Delta\eta(z)/\eta_{\rm BBN}\lesssim 10^{-5}$ (Fig. 4). 
Despite the fact that at present the data on the gamma-ray background constrain most severely the decay models of dark matter particles with the emission of baryons, the situation can change in future, with increasing accuracy of existing cosmological experiments and the appearance of new ones. The detection of a change in the baryon-to-photon ratio in such experiments at a level of $\lesssim10^{-5}$ will serve as evidence for the existence of decaying dark matter particles, while its detailed study will be a powerful tool for studying their properties. In contrast, the constancy of the baryon-to-photon ratio will serve as a new source of constraints on the range of admissible parameters of dark matter particles. ACKNOWLEDGMENTS {#ACKNOWLEDGMENTS .unnumbered} =============== We thank the referees for their valuable remarks. This work has been supported by the Russian Science Foundation (grant No 14-12-00955). [99]{} P.A.R. Ade, N. Aghanim, C. Armitage-Caplan, et al., Astron. Astrophys. **571**, 66 (2014). O. Adriani, G.C. Barbarino, G.A. Bazilevskaya, et al., Phys. Rev. Lett. **105**, 121101 (2010). M. Ajello, J. Greiner, G. Sato, et al., Astrophys. J. **689**, 666 (2008). G. Backenstoss, M. Hasinoff, P. Pavlopoulos, et al., Nucl. Phys. B **228**, 424 (1983). G. Bertone, D. Hooper and J. Silk, Phys. Rep. **405**, 279 (2004). H. Bloemen, W. Hermsen, S.C. Kappadath. et al., Astro. Lett. and Communications **39**, 213 (1999). X. Chen and M. Kamionkowski, Phys. Rev. D **70**, 043502 (2004). J. Chluba and R.A. Sunyaev, MNRAS **419**, 1294 (2012). M. Cirelli, G. Corcella, A. Hektor, et al., J. Cosmol. and Astropart. Phys. **3**, 51 (2011). R.H. Cyburt, B.D Fields and K.A Olive, J. Cosmol. and Astropart. Phys. **11**, 12 (2008). D.J. Fixsen, Astrophys. J. **707**, 916 (2009). M. Fukugita and P.J.E. Peebles, Astrophys. J. **616**, 643 (2004). D.S. Gorbunov and V.A. 
Rubakov, Introduction to the Early Universe: Hot Big Bang Theory (LKI, Moscow, 2008; World Scientific, Singapore, 2011). D.S. Gorbunov and V.A. Rubakov, Introduction to the Theory of the Early Universe: Cosmological Perturbations and Inflationary Theory (LKI, Moscow, 2010; World Scientific, Singapore, 2011). D.E. Gruber, J.L. Matteson, L.E. Peterson, et al., Astrophys. J. **520**, 124 (1999). L. Hui, Z. Haiman, M. Zaldarriaga, et al., Astrophys. J. **564**, 525 (2002). A.V. Ivanchik, S.A. Balashev, D.A. Varshalovich, et al., Astron. Rep. **92**, No. 2 (2015). K. Jedamzik, Phys. Rev. D **70**, 063524 (2004). G. Jungman, M. Kamionkowski and K. Griest, Phys. Rep. **267**, 195 (1996). M. Kawasaki, K. Kohri and T. Moroi, Phys. Rev. D **71**, 083502 (2005). F. Nicastro, S. Mathur and M. Elvis, Science **319**, 55 (2008). J.A. Peacock, Cosmological Physics, 9th ed. (Cambridge University Press, 2010). M. Rauch, Ann. Rev. **36**, 267 (1998). P. Sreekumar, D.L. Bertsch, B.L. Dingus, et al., Astrophys. J. **494**, 523 (1998). F. Stecker, SAO Special Report No. 261 (1967). G. Steigman, Ann. Rev. **14**, 339 (1976). G. Steigman, JCAP **10**, 16 (2006). G. Steigman, Ann. Rev. Nucl. Part. Sci. **57**, 463 (2007). C. Weniger, P.D. Serpico, F. Iocco, et al., Phys. Rev. D **87**, 123008 (2013). A.A. Zdziarski and R. Svensson, Astrophys. J. **344**, 551 (1989). [^1]: E-mail: e.zavarygin@gmail.com [^2]: E-mail: iav@astro.ioffe.ru [^3]: Here and below, out of all baryons, we restrict ourselves to protons. This assumption is valid for obtaining estimates, because the bulk of the baryon density in the Universe is contained in the hydrogen nuclei, while heavier baryons (for example, D, He, etc.) are generated with a considerably lower probability. [^4]: Since we consider cosmological time scales, all of the neutrons and antineutrons that are also produced in such decays transform into protons and antiprotons. [^5]: http://www.marcocirelli.net/PPPC4DMID.html
--- abstract: 'In a two-flavor color superconductor, the $SU(3)_c$ gauge symmetry is spontaneously broken by diquark condensation. The Nambu-Goldstone excitations of the diquark condensate mix with the gluons associated with the broken generators of the original gauge group. It is shown how one can decouple these modes with a particular choice of ’t Hooft gauge. We then explicitly compute the spectral density for transverse and longitudinal gluons of adjoint color 8. The Nambu-Goldstone excitations give rise to a singularity in the real part of the longitudinal gluon self-energy. This leads to a vanishing gluon spectral density for energies and momenta located on the dispersion branch of the Nambu-Goldstone excitations.' address: - | Institut für Theoretische Physik, Johann Wolfgang Goethe-Universität\ Robert-Mayer-Str. 8–10, D-60054 Frankfurt/Main, Germany\ E-mail: drischke@th.physik.uni-frankfurt.de - | School of Physics and Astronomy, University of Minnesota\ 116 Church Street S.E., Minneapolis, MN 55455, U.S.A.\ E-mail: shovkovy@physics.umn.edu author: - 'Dirk H. Rischke' - 'Igor A. Shovkovy[^1]' title: 'Longitudinal gluons and Nambu-Goldstone bosons in a two-flavor color superconductor' --- Introduction ============ Cold, dense quark matter is a color superconductor [@bailinlove]. For two massless quark flavors (say, up and down), Cooper pairs with total spin zero condense in the color-antitriplet, flavor-singlet channel. In this so-called two-flavor color superconductor, the $SU(3)_c$ gauge symmetry is spontaneously broken to $SU(2)_c$ [@arw]. If we choose to orient the (anti-) color charge of the Cooper pair along the (anti-) blue direction in color space, only red and green quarks form Cooper pairs, while blue quarks remain unpaired. Then, the three generators $T_1,\, T_2,$ and $T_3$ of the original $SU(3)_c$ gauge group form the generators of the residual $SU(2)_c$ symmetry. The remaining five generators $T_4, \ldots, T_8$ are broken. 
(More precisely, the last broken generator is a combination of $T_8$ and the generator ${\bf 1}$ of the global $U(1)$ symmetry of baryon number conservation, for details see Ref. [@sw2] and below). According to Goldstone’s theorem, this pattern of symmetry breaking gives rise to five massless bosons, the so-called Nambu-Goldstone bosons, corresponding to the five broken generators of $SU(3)_c$. Physically, these massless bosons correspond to fluctuations of the order parameter, in our case the diquark condensate, in directions in color-flavor space where the effective potential is flat. For gauge theories (where the local gauge symmetry cannot truly be spontaneously broken), these bosons are “eaten” by the gauge bosons corresponding to the broken generators of the original gauge group, [*i.e.*]{}, in our case the gluons with adjoint colors $a= 4, \ldots, 8$. They give rise to a longitudinal degree of freedom for these gauge bosons. The appearance of a longitudinal degree of freedom is commonly a sign that the gauge boson becomes massive. In a dense (or hot) medium, however, even [*without*]{} spontaneous breaking of the gauge symmetry the gauge bosons already have a longitudinal degree of freedom, the so-called [*plasmon*]{} mode [@LeBellac]. Its appearance is related to the presence of gapless charged quasiparticles. Both transverse and longitudinal modes exhibit a mass gap, [*i.e.*]{}, the gluon energy $p_0 \rightarrow m_g > 0$ for momenta $p \rightarrow 0$. In quark matter with $N_f$ massless quark flavors at zero temperature $T=0$, the gluon mass parameter (squared) is [@LeBellac] $$\label{gluonmass} m_g^2 = \frac{N_f}{6\, \pi^2} \, g^2 \, \mu^2\,\, ,$$ where $g$ is the QCD coupling constant and $\mu$ is the quark chemical potential. It is [*a priori*]{} unclear how the Nambu-Goldstone bosons interact with these longitudinal gluon modes. 
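For numerical orientation, Eq. (\[gluonmass\]) is easily evaluated. The following sketch uses purely illustrative values, not taken from this work ($N_f=2$, coupling $g=3.5$, quark chemical potential $\mu=500$ MeV):

```python
import math

def gluon_mass(N_f, g, mu):
    """m_g from m_g^2 = N_f/(6 pi^2) g^2 mu^2; mu in MeV, result in MeV."""
    return math.sqrt(N_f / (6.0 * math.pi**2)) * g * mu

# illustrative (assumed) parameter values
m_g = gluon_mass(2, 3.5, 500.0)
print(f"m_g = {m_g:.0f} MeV")   # -> m_g = 322 MeV
```

The scale $m_g \sim g\mu$ of a few hundred MeV sets the mass gap of both transverse and longitudinal gluon modes discussed above.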
In particular, it is of interest to know whether coupling terms between these modes exist and, if so, whether these terms can be eliminated by a suitable choice of (’t Hooft) gauge. The aim of the present work is to address these questions. We shall show that the answer to both questions is “yes”. We shall then demonstrate, by focusing on the gluon of adjoint color 8, how the Nambu-Goldstone mode affects the spectral density of the longitudinal gluon. Our work is partially based on and motivated by previous studies of gluons in a two-flavor color superconductor [@carterdiakonov; @dhr2f; @dhrselfenergy]. The gluon self-energy and the resulting spectral properties have been discussed in Ref. [@dhrselfenergy]. In that paper, however, the fluctuations of the diquark condensate were neglected. Consequently, the longitudinal degrees of freedom of the gluons corresponding to the broken generators of $SU(3)_c$ were not treated correctly. The gluon polarization tensor was no longer explicitly transverse (a transverse polarization tensor $\Pi^{\mu\nu}$ obeys $P_\mu \, \Pi^{\mu \nu} = \Pi^{\mu \nu}\, P_\nu = 0$), and it did not satisfy the Slavnov-Taylor identity. As a consequence, the plasmon mode exhibited a peculiar behavior in the low-momentum limit, which cannot be physical (cf. Fig. 5 (a) of Ref. [@dhrselfenergy]). It was already realized in Ref. [@dhrselfenergy] that the reason for this unphysical behavior is that the mixing of the gluon with the excitations of the condensate was neglected. It was moreover suggested in Ref. [@dhrselfenergy] that proper inclusion of this mixing would amend the shortcomings of the previous analysis. Here we follow this suggestion and thus correct the results of Ref. [@dhrselfenergy] with respect to the longitudinal gluon. Note that in Ref. 
[@carterdiakonov] fluctuations of the color-superconducting condensate were taken into account in the calculation of the gluon polarization tensor. As a consequence, the latter is explicitly transverse. However, the analysis was done in the vacuum, at $\mu=0$, not at (asymptotically) large chemical potential. The outline of the present work is as follows. In Section \[II\] we derive the transverse and longitudinal gluon propagators including fluctuations of the diquark condensate. In Section \[III\] we use the resulting expressions to compute the spectral density for the gluon of adjoint color 8. Section \[IV\] concludes this work with a summary of our results. Our units are $\hbar=c=k_B=1$. The metric tensor is $g^{\mu \nu}= {\rm diag}\,(+,-,-,-)$. We denote 4-vectors in energy-momentum space by capital letters, $K^{\mu} = (k_0,{\bf k})$. Absolute magnitudes of 3-vectors are denoted as $k \equiv |{\bf k}|$, and the unit vector in the direction of ${\bf k}$ is $\hat{\bf k} \equiv {\bf k}/k$. Derivation of the propagator for transverse and longitudinal gluons {#II} =================================================================== In this section, we derive the gluon propagator taking into account the fluctuations of the diquark condensate. A short version of this derivation can be found in Appendix C of Ref. [@msw] \[see also the original Ref. [@gusyshov]\]. Nevertheless, for the sake of clarity and in order to make our presentation self-contained, we present this derivation once more in greater detail and in the notation of Ref. [@dhrselfenergy]. As this part is rather technical, the reader less interested in the details of the derivation may skip directly to our main result, Eqs. (\[transverse\]), (\[longitudinal\]), and (\[hatPi00aa\]). 
We start with the grand partition function of QCD, \[Z\] $$\label{ZQCD} {\cal Z} = \int {\cal D} A \; e^{ S_A } \;{\cal Z}_q[A]\,\, ,$$ where $${\cal Z}_q[A] = \int {\cal D} \bar{\psi} \, {\cal D} \psi\, \exp \left[ \int_x \bar{\psi} \left( i \gamma^\mu \partial_\mu + \mu \gamma_0 + g \gamma^\mu A_\mu^a T_a \right) \psi \right] \label{Zquarks}$$ is the grand partition function for massless quarks in the presence of a gluon field $A^\mu_a$. In Eq. (\[Z\]), the space-time integration is defined as $\int_x \equiv \int_0^{1/T} d\tau \int_V d^3{\bf x}\,$, where $V$ is the volume of the system, $\gamma^\mu$ are the Dirac matrices, and $T_a= \lambda_a/2$ are the generators of $SU(N_c)$. For QCD, $N_c = 3$, and $\lambda_a$ are the Gell-Mann matrices. The quark fields $\psi$ are $4 N_c N_f$-component spinors, [*i.e.*]{}, they carry Dirac indices $\alpha = 1, \ldots,4$, fundamental color indices $i=1,\ldots,N_c$, and flavor indices $f=1,\ldots,N_f$. The action for the gauge fields consists of three parts, $$\label{L_A} S_A = S_{F^2} + S_{\rm gf} + S_{\rm FPG}\,\, ,$$ where $$S_{F^2} = - \frac{1}{4} \int_x F^{\mu \nu}_a \, F_{\mu \nu}^a$$ is the gauge field part; here, $F_{\mu\nu}^a = \partial_\mu A_\nu^a - \partial_\nu A_\mu^a + g f^{abc} A_\mu^b A_\nu^c$ is the field strength tensor. The part corresponding to gauge fixing, $S_{\rm gf}$, and to Faddeev-Popov ghosts, $S_{\rm FPG}$, will be discussed later. For fermions at finite chemical potential it is advantageous to introduce the charge-conjugate degrees of freedom explicitly. This restores the symmetry of the theory under $\mu \rightarrow - \mu$. Therefore, in Ref. [@dhr2f], a kind of replica method was applied, in which one first artificially increases the number of quark species, and then replaces half of these species of quark fields by charge-conjugate quark fields. 
More precisely, first replace the quark partition function ${\cal Z}_q[A]$ by ${\cal Z}_M[A] \equiv \left\{ {\cal Z}_q[A] \right\}^M$, $M$ being some large integer number. (Sending $M \rightarrow 1$ at the end of the calculation reproduces the original partition function.) Then, take $M$ to be an even integer number, and replace the quark fields by charge-conjugate quark fields in $M/2$ of the factors ${\cal Z}_q[A]$ in ${\cal Z}_M[A]$. This results in $${\cal Z}_M[A] = \int \prod_{r=1}^{M/2} {\cal D} \bar{\Psi}_r \, {\cal D} \Psi_r \; \exp \left\{ \sum_{r=1}^{M/2} \left[ \int_{x,y} \bar{\Psi}_r(x) \,{\cal G}_0^{-1} (x,y)\, \Psi_r(y) + \int_x g\, \bar{\Psi}_r(x) \, A_\mu^a(x)\,\hat{\Gamma}^\mu_a\, \Psi_r(x) \right] \right\} \,\, . \label{Zquarks2}$$ Here, $r$ labels the quark species and $\Psi_r$, $\bar{\Psi}_r$ are $8 N_c N_f$-component Nambu-Gor’kov spinors, $$\Psi_r \equiv \left( \begin{array}{c} \psi_r \\ \psi_{C r} \end{array} \right) \,\,\, , \,\,\,\, \bar{\Psi}_r \equiv ( \bar{\psi}_r \, , \, \bar{\psi}_{C r} )\,\, ,$$ where $\psi_{C r} \equiv C \bar{\psi}_r^T$ is the charge conjugate spinor and $C=i \gamma^2 \gamma_0$ is the charge conjugation matrix. The inverse of the $8 N_c N_f \times 8 N_c N_f$-dimensional Nambu-Gor’kov propagator for non-interacting quarks is defined as $$\label{S0-1} {\cal G}_0^{-1} \equiv \left( \begin{array}{cc} [G_0^+]^{-1} & 0 \\ 0 & [G_0^-]^{-1} \end{array} \right)\,\, ,$$ where $$\label{G0pm-1} [G_0^\pm]^{-1}(x,y) \equiv -i \left( i \gamma_\mu \partial^\mu_x \pm \mu \gamma_0 \right) \delta^{(4)}(x-y)$$ is the inverse propagator for non-interacting quarks (upper sign) or charge conjugate quarks (lower sign), respectively. 
The Nambu-Gor’kov matrix vertex describing the interaction between quarks and gauge fields is defined as follows: $$\label{Gamma} \hat{\Gamma}^\mu_a \equiv \left( \begin{array}{cc} \Gamma^\mu_a & 0 \\ 0 & \bar{\Gamma}^\mu_a \end{array} \right)\,\, ,$$ where $\Gamma^\mu_a \equiv \gamma^\mu T_a$ and $\bar{\Gamma}^\mu_a \equiv C (\gamma^\mu)^T C^{-1} T_a^T \equiv -\gamma^\mu T_a^T$. Following Ref. [@bailinlove] we now add the term $\int_{x,y} \bar{\psi}_{C r}(x)\, \Delta^+(x,y) \, \psi_r(y)$ and the corresponding charge-conjugate term $\int_{x,y} \bar{\psi}_r(x)\, \Delta^-(x,y) \, \psi_{C r}(y)$, where $\Delta^- \equiv \gamma_0 \, (\Delta^+)^\dagger \, \gamma_0$, to the argument of the exponent in Eq. (\[Zquarks2\]). This defines the quark (replica) partition function in the presence of the gluon field $A^\mu_a$ [*and*]{} the diquark source fields $\Delta^+$, $\Delta^-$: $${\cal Z}_M[A,\Delta^+,\Delta^-] \equiv \int \prod_{r=1}^{M/2} {\cal D} \bar{\Psi}_r \, {\cal D} \Psi_r \; \exp \left\{ \sum_{r=1}^{M/2} \left[ \int_{x,y} \bar{\Psi}_r(x) \,{\cal G}^{-1} (x,y)\, \Psi_r(y) + \int_x g\, \bar{\Psi}_r(x) \, A_\mu^a(x)\,\hat{\Gamma}^\mu_a\, \Psi_r(x) \right] \right\} \,\, , \label{Zquarks3}$$ where $$\label{G-1} {\cal G}^{-1} \equiv \left( \begin{array}{cc} [G^+_0]^{-1} & \Delta^- \\ \Delta^+ & [G^-_0]^{-1} \end{array} \right)$$ is the inverse quasiparticle propagator. Inserting the partition function (\[Zquarks3\]) into Eq. (\[ZQCD\]), the (replica) QCD partition function is then computed in the presence of the (external) diquark source terms $\Delta^\pm(x,y)$, ${\cal Z} \rightarrow {\cal Z}[\Delta^+,\Delta^-]$. In principle, this is not the physically relevant quantity, from which one derives thermodynamic properties of the color superconductor. 
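The identity $\bar{\Gamma}^\mu_a \equiv C (\gamma^\mu)^T C^{-1} T_a^T \equiv -\gamma^\mu T_a^T$ rests on the relation $C (\gamma^\mu)^T C^{-1} = -\gamma^\mu$ for $C = i\gamma^2\gamma_0$. A quick numerical check, here in the Dirac representation (an assumed but standard choice; the relation itself is representation independent):

```python
import numpy as np

# Dirac-representation gamma matrices built from Pauli matrices
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]

C = 1j * gammas[2] @ gammas[0]        # charge-conjugation matrix C = i gamma^2 gamma_0
Cinv = np.linalg.inv(C)

# C (gamma^mu)^T C^{-1} = -gamma^mu for all mu = 0,...,3
for gam in gammas:
    assert np.allclose(C @ gam.T @ Cinv, -gam)
print("C (gamma^mu)^T C^{-1} = -gamma^mu verified")
```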
The diquark condensate is not an external field, but assumes a nonzero value because of an intrinsic property of the system, namely the attractive gluon interaction in the color-antitriplet channel, which destabilizes the Fermi surface. The proper functional from which one derives thermodynamic functions is obtained by a Legendre transformation of $\ln {\cal Z} [\Delta^+, \Delta^-]$, in which the functional dependence on the diquark source term is replaced by that on the corresponding canonically conjugate variable, the diquark condensate. The Legendre-transformed functional is the effective action for the diquark condensate. If the latter is [*constant*]{}, the effective action is, up to a factor of $V/T$, identical to the effective potential. The effective potential is simply a function of the diquark condensate. Its explicit form for large-density QCD was derived in Ref.  [@eff-pot]. The value of this function at its maximum determines the pressure. The maximum is determined by a Dyson-Schwinger equation for the diquark condensate, which is identical to the standard gap equation for the color-superconducting gap. It has been solved in the mean-field approximation in Refs.  [@rdpdhr2; @schaferwilczek; @miransky]. In the mean-field approximation [@rdpdhrscalar], $$\label{mfa} \Delta^+(x,y) \sim \langle \, \psi_{C r}(x) \, \bar{\psi}_r(y)\, \rangle \,\,\,\, , \,\,\,\,\, \Delta^-(x,y) \sim \langle \, \psi_r(x) \, \bar{\psi}_{C r}(y) \, \rangle \,\, .$$ In this work, we are interested in the gluon propagator, and the derivation of the pressure via a Legendre transformation of $\ln {\cal Z}[\Delta^+,\Delta^-]$ is of no concern to us. In the following, we shall therefore continue to consider the partition function in the presence of (external) diquark source terms $\Delta^\pm$. The diquark source terms in the quark (replica) partition function (\[Zquarks3\]) could in principle be chosen differently for each quark species. 
This could be made explicit by giving $\Delta^\pm$ a subscript $r$, $\Delta^\pm \rightarrow \Delta^\pm_r$. However, as we take the limit $M \rightarrow 1$ at the end, it is not necessary to do so, as only $\Delta^\pm_1 \equiv \Delta^\pm$ will survive anyway. In other words, we use the [*same*]{} diquark sources for [*all*]{} quark species. The next step is to explicitly investigate the fluctuations of the diquark condensate around its expectation value. These fluctuations correspond physically to the Nambu-Goldstone excitations (loosely termed “mesons” in the following) in a color superconductor. As mentioned in the introduction, there are five such mesons in a two-flavor color superconductor, corresponding to the generators of $SU(3)_c$ which are broken in the color-superconducting phase. If the condensate is chosen to point in the (anti-) blue direction in fundamental color space, the broken generators are $T_4, \ldots, T_7$ of the original $SU(3)_c$ group and the particular combination $B \equiv ({\bf 1} + \sqrt{3} T_8)/3$ of generators of the global $U(1)_B$ and local $SU(3)_c$ symmetry [@sw2]. The effective action for the diquark condensate and, consequently, for the meson fields as fluctuations of the diquark condensate, is derived via a Legendre transformation of $\ln {\cal Z}[\Delta^+,\Delta^-]$. In this work, we are concerned with the properties of the gluons and thus refrain from computing this effective action explicitly. Consequently, instead of considering the physical meson fields, we consider the variables in ${\cal Z}[\Delta^+,\Delta^-]$, which correspond to these fields. These are the fluctuations of the diquark source terms $\Delta^\pm$. 
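The color charges assigned by the broken generator $B \equiv ({\bf 1} + \sqrt{3} T_8)/3$ introduced above can be made explicit: red and green quarks carry $B$-charge $1/2$ while the blue quark is neutral, so the red-green diquark condensate carries $B$-charge 1 and $B$ is indeed broken. A minimal numerical sketch using the Gell-Mann matrix $\lambda_8$ (color basis assumed ordered as red, green, blue):

```python
import numpy as np

lam8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)   # Gell-Mann lambda_8
T8 = lam8 / 2.0

B = (np.eye(3) + np.sqrt(3.0) * T8) / 3.0

# red and green quarks carry B-charge 1/2, the blue quark is neutral,
# so a red-green Cooper pair carries total B-charge 1
assert np.allclose(np.diag(B), [0.5, 0.5, 0.0])
print("B-charges (r, g, b):", np.diag(B))
```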
We choose these fluctuations to be complex phase factors multiplying the magnitude of the source terms, \[DeltaPhi\] $$\begin{aligned} \Delta^+(x,y) & = & {\cal V}^* (x)\, \Phi^+(x,y) \, {\cal V}^\dagger(y) \,\, , \\ \Delta^-(x,y) & = & {\cal V}(x) \, \Phi^-(x,y) \, {\cal V}^T(y) \,\, ,\end{aligned}$$ where $$\label{phase} {\cal V}(x) \equiv \exp \left[ i \left( \sum_{a=4}^7 \varphi_a(x) T_a + \frac{1}{\sqrt{3}}\, \varphi_8(x) B \right) \right]\,\,.$$ The extra factor $1/\sqrt{3}$ in front of $\varphi_8$ as compared to the treatment in Ref. [@msw] is chosen to simplify the notation in the following. Although the fields $\varphi_a$ are not the meson fields themselves, but external fields which, after a Legendre transformation of $\ln {\cal Z}[\Delta^+,\Delta^-]$, are replaced by the meson fields, we nevertheless (and somewhat imprecisely) refer to them as meson fields in the following. After having explicitly introduced the fluctuations of the diquark source terms in terms of phase factors, the functions $\Phi^\pm$ are only allowed to fluctuate in magnitude. For the sake of completeness, let us mention that one could again have introduced different fields $\varphi_{a r}$ for each replica $r$, but this is not really necessary, as we shall take the limit $M \rightarrow 1$ at the end of the calculation anyway. It is advantageous to also subject the quark fields $\psi_r$ to a nonlinear transformation, introducing new fields $\chi_r$ via $$\label{chi} \psi_r = {\cal V}\, \chi_r \,\,\,\, , \,\,\,\,\, \bar{\psi}_r = \bar{\chi}_r\, {\cal V}^\dagger\,\, .$$ Since the meson fields are real-valued and the generators $T_4, \ldots, T_7$ and $B$ are hermitian, the (matrix-valued) operator ${\cal V}$ is unitary, ${\cal V}^{-1} = {\cal V}^\dagger$. Therefore, the measure of the Grassmann integration over quark fields in Eq. (\[Zquarks3\]) remains unchanged. From Eq. 
(\[chi\]), the charge-conjugate fields transform as $$\psi_{C r} = {\cal V}^* \, \chi_{C r} \,\,\,\, , \,\,\,\,\, \bar{\psi}_{C r} = \bar{\chi}_{C r} \, {\cal V}^T\,\, .$$ The advantage of transforming the quark fields is that this preserves the simple structure of the terms coupling the quark fields to the diquark sources, $$\bar{\psi}_{C r}(x)\, \Delta^+(x,y) \, \psi_r(y) \equiv \bar{\chi}_{C r}(x)\, \Phi^+(x,y) \, \chi_r(y) \,\,\,\, , \,\,\,\,\, \bar{\psi}_r(x)\, \Delta^-(x,y) \, \psi_{C r}(y) \equiv \bar{\chi}_r(x)\, \Phi^-(x,y) \, \chi_{C r}(y) \,\, .$$ In the mean-field approximation, the diquark source terms are proportional to $$\label{mfa2} \Phi^+(x,y) \sim \langle \, \chi_{C r}(x) \, \bar{\chi}_r(y)\, \rangle \,\,\,\, , \,\,\,\,\, \Phi^-(x,y) \sim \langle \, \chi_r(x) \, \bar{\chi}_{C r}(y) \, \rangle\,\, .$$ The transformation (\[chi\]) has the following effect on the kinetic terms of the quarks and the term coupling quarks to gluons: $$\begin{aligned} \bar{\psi}_r \, \left( i \, \gamma^\mu \partial_\mu + \mu \, \gamma_0 + g \, \gamma_\mu \, A^\mu_a T_a \right)\, \psi_r & = & \bar{\chi}_r \, \left( i\, \gamma^\mu \partial_\mu + \mu \, \gamma_0 + \gamma_\mu \, \omega^\mu \right) \, \chi_r \,\, , \\ \bar{\psi}_{C r} \, \left( i \, \gamma^\mu \partial_\mu - \mu \, \gamma_0 - g \, \gamma_\mu \, A^\mu_a T_a^T \right)\, \psi_{C r} & = & \bar{\chi}_{C r} \, \left( i\, \gamma^\mu \partial_\mu - \mu \, \gamma_0 + \gamma_\mu \, \omega^\mu_C \right) \, \chi_{C r} \,\, ,\end{aligned}$$ where \[Maurer1\] $$\omega^\mu \equiv {\cal V}^\dagger \, \left( i \, \partial^\mu + g\, A^\mu_a T_a \right) \, {\cal V}$$ is the $N_c N_f \times N_c N_f$-dimensional Maurer-Cartan one-form introduced in Ref. [@sannino] and $$\omega^\mu_C \equiv {\cal V}^T \, \left( i \, \partial^\mu - g\, A^\mu_a T_a^T \right) \, {\cal V}^*$$ is its charge-conjugate version. Note that the partial derivative acts only on the phase factors ${\cal V}$ and ${\cal V}^*$ on the right. 
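The structure of the Maurer-Cartan one-form can be checked numerically in a simple special case. The sketch below takes a single generator, ${\cal V} = \exp[i \varphi_4(x) T_4]$ with an assumed profile $\varphi_4(x)$, and verifies that ${\cal V}^\dagger \, i\partial\, {\cal V} = -(\partial \varphi_4)\, T_4$, i.e., the $A^\mu_a = 0$ part of the one-form (exact here, since only one generator is involved):

```python
import numpy as np

# Gell-Mann lambda_4; lambda_4^3 = lambda_4, so the exponential has the
# closed form exp(i a lambda_4) = 1 + i sin(a) lambda_4 + (cos(a)-1) lambda_4^2
lam4 = np.zeros((3, 3), dtype=complex)
lam4[0, 2] = lam4[2, 0] = 1.0
T4 = lam4 / 2.0

phi = lambda x: 0.3 * x + 0.1 * x**2       # assumed meson-field profile phi_4(x)

def V(x):                                  # V(x) = exp[i phi(x) T_4]
    a = phi(x) / 2.0
    return np.eye(3) + 1j * np.sin(a) * lam4 + (np.cos(a) - 1.0) * (lam4 @ lam4)

x, h = 0.7, 1e-6
dV = (V(x + h) - V(x - h)) / (2.0 * h)     # finite-difference derivative of V
omega = V(x).conj().T @ (1j * dV)          # V^dagger (i d/dx) V, the A = 0 part

dphi = 0.3 + 0.2 * x                       # exact derivative of phi_4
assert np.allclose(omega, -dphi * T4, atol=1e-6)
print("omega = -(d phi_4) T_4 verified")
```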
Introducing the Nambu-Gor’kov spinors $$X_r \equiv \left( \begin{array}{c} \chi_r \\ \chi_{C r} \end{array} \right) \,\,\, , \,\,\,\, \bar{X}_r \equiv ( \bar{\chi}_r \, , \, \bar{\chi}_{C r} )$$ and the $2 N_c N_f \times 2 N_c N_f$-dimensional Maurer-Cartan one-form $$\label{Maurer2} \Omega^\mu(x,y) \equiv -i \, \left( \begin{array}{cc} \omega^\mu(x) & 0 \\ 0 & \omega_C^\mu(x) \end{array} \right)\, \delta^{(4)}(x-y) \,\, ,$$ the quark (replica) partition function becomes $${\cal Z}_M[\Omega,\Phi^+,\Phi^-] \equiv \int \prod_{r=1}^{M/2} {\cal D} \bar{X}_r \, {\cal D} X_r \; \exp \left\{ \sum_{r=1}^{M/2} \int_{x,y} \bar{X}_r(x) \,\left [\, {\cal S}^{-1} (x,y) + \gamma_\mu \Omega^\mu(x,y) \, \right] \, X_r(y) \right\} \,\, , \label{Zquarks4}$$ where $${\cal S}^{-1} \equiv \left( \begin{array}{cc} [G^+_0]^{-1} & \Phi^- \\ \Phi^+ & [G^-_0]^{-1} \end{array} \right)\,\, .$$ We are interested in the properties of the gluons, and thus may integrate out the fermion fields. This integration can be performed analytically, with the result $${\cal Z}_M[\Omega,\Phi^+,\Phi^-] \equiv \left[ \,{\rm det} \left( {\cal S}^{-1} + \gamma_\mu \Omega^\mu \right) \, \right]^{M/2} \,\, . \label{Zquarks5}$$ The determinant is to be taken over Nambu-Gor’kov, color, flavor, spin, and space-time indices. Finally, letting $M \rightarrow 1$, we obtain the QCD partition function (in the presence of meson, $\varphi_a$, and diquark, $\Phi^\pm$, source fields) $$\label{ZQCD2} {\cal Z}[\varphi,\Phi^+, \Phi^-] = \int {\cal D} A \; \exp\left[ S_A + \frac{1}{2} \, {\rm Tr} \ln \left({\cal S}^{-1} + \gamma_\mu \Omega^\mu \right) \, \right]\,\, .$$ Remembering that $\Omega^\mu$ is linear in $A^\mu_a$, cf. Eq. 
(\[Maurer2\]) with (\[Maurer1\]), in order to derive the gluon propagator it is sufficient to expand the logarithm to second order in $\Omega^\mu$, $$\begin{aligned} \frac{1}{2}\, {\rm Tr} \ln \left({\cal S}^{-1} + \gamma_\mu \Omega^\mu \right) \, & \simeq & \frac{1}{2}\,{\rm Tr} \ln {\cal S}^{-1} + \frac{1}{2}\,{\rm Tr} \left( {\cal S}\, \gamma_\mu \Omega^\mu \right) - \frac{1}{4} {\rm Tr} \left( {\cal S} \, \gamma_\mu \Omega^\mu \, {\cal S} \, \gamma_\nu \Omega^\nu \right) \nonumber \\ & \equiv & S_0[\Phi^+,\Phi^-] + S_1[\Omega,\Phi^+,\Phi^-] + S_2[\Omega,\Phi^+,\Phi^-]\,\,, \label{expandlog}\end{aligned}$$ with obvious definitions for the $S_i$. The quasiparticle propagator is $${\cal S} \equiv \left( \begin{array}{cc} G^+ & \Xi^- \\ \Xi^+ & G^- \end{array} \right)\,\,,$$ with $$G^\pm = \left\{ [G_0^\pm]^{-1} - \Sigma^\pm \right\}^{-1} \,\,\,\, , \,\,\,\,\, \Sigma^\pm = \Phi^\mp \, G_0^\mp \, \Phi^\pm \,\,\,\, ,\,\,\,\,\, \Xi^\pm = - G_0^\mp \, \Phi^\pm \, G^\pm\,\, .$$ To make further progress, we now expand $\omega^\mu$ and $\omega_C^\mu $ to linear order in the meson fields, \[linearomega\] $$\begin{aligned} \omega^\mu & \simeq & g \, A^\mu_a \, T_a - \sum_{a=4}^7 \left( \partial^\mu \varphi_a \right)\, T_a - \frac{1}{\sqrt{3}}\, \left(\partial^\mu \varphi_8\right)\, B\,\, , \\ \omega_C^\mu & \simeq & - g \, A^\mu_a \, T_a^T + \sum_{a=4}^7 \left( \partial^\mu \varphi_a \right) \, T_a^T + \frac{1}{\sqrt{3}}\, \left( \partial^\mu \varphi_8\right) \, B^T\,\, .\end{aligned}$$ The term $S_1$ in Eq. (\[expandlog\]) is simply a tadpole source term for the gluon fields. This term does not affect the gluon propagator, and thus can be ignored in the following. The quadratic term $S_2$ represents the contribution of a fermion loop to the gluon self-energy. 
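The expansion (\[expandlog\]) is the standard second-order expansion of the ${\rm Tr}\,\ln$. Stripped of the Nambu-Gor’kov structure, it amounts to ${\rm Tr}\ln({\cal S}^{-1}+X) \approx {\rm Tr}\ln {\cal S}^{-1} + {\rm Tr}({\cal S}X) - \frac{1}{2}{\rm Tr}({\cal S}X{\cal S}X)$ for a small perturbation $X$, which can be checked on random matrices (the overall factor $1/2$ of Eq. (\[expandlog\]) is omitted in this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
Sinv = A @ A.T + n * np.eye(n)          # well-conditioned stand-in for S^{-1}
S = np.linalg.inv(Sinv)
X = 1e-3 * rng.standard_normal((n, n))  # small stand-in for gamma.Omega

def trln(M):
    # Tr ln M = ln det M; slogdet avoids overflow (sign is +1 here)
    sign, logdet = np.linalg.slogdet(M)
    return logdet

exact = trln(Sinv + X)
approx = trln(Sinv) + np.trace(S @ X) - 0.5 * np.trace(S @ X @ S @ X)
assert abs(exact - approx) < 1e-8       # remainder is O(X^3)
print("second-order Tr ln expansion verified")
```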
Its computation proceeds by first taking the trace over Nambu-Gor’kov space, $$\begin{aligned} S_2 & = & -\frac{1}{4} \int_{x,y} {\rm Tr}_{c,f,s} \left[ G^+(x,y) \, \gamma_\mu \omega^\mu(y)\, G^+(y,x) \, \gamma_\nu \omega^\nu(x) + G^-(x,y) \, \gamma_\mu \omega_C^\mu(y)\, G^-(y,x) \, \gamma_\nu \omega_C^\nu(x) \right. \nonumber \\ & & \hspace*{2.1cm} + \left. \Xi^+(x,y) \, \gamma_\mu \omega^\mu(y)\, \Xi^-(y,x) \, \gamma_\nu \omega_C^\nu(x) + \Xi^-(x,y) \, \gamma_\mu \omega_C^\mu(y)\, \Xi^+(y,x) \, \gamma_\nu \omega^\nu(x) \right] \,\,. \label{S2}\end{aligned}$$ The remaining trace runs only over color, flavor, and spin indices. Using translational invariance, the propagators and fields are now Fourier-transformed as $$\begin{aligned} G^\pm (x,y) & = & \frac{T}{V} \sum_K e^{-i K \cdot (x-y)} \, G^\pm(K)\,\, ,\\ \Xi^\pm (x,y) & = & \frac{T}{V} \sum_K e^{-i K \cdot (x-y)} \, \Xi^\pm(K)\,\, ,\\ \omega^\mu (x) & = & \sum_P e^{-i P \cdot x} \, \omega^\mu(P)\,\, ,\\ \omega_C^\mu (x) & = & \sum_P e^{-i P \cdot x} \, \omega_C^\mu(P)\,\, .\end{aligned}$$ Inserting this into Eq. (\[S2\]), we arrive at Eq. (C16) of Ref. [@msw], which in our notation reads $$\begin{aligned} S_2 & = & -\frac{1}{4} \sum_{K,P} {\rm Tr}_{c,f,s} \left[ G^+(K) \, \gamma_\mu \omega^\mu(P)\, G^+(K-P) \, \gamma_\nu \omega^\nu(-P) + G^-(K) \, \gamma_\mu \omega_C^\mu(P)\, G^-(K-P) \, \gamma_\nu \omega_C^\nu(-P) \right. \nonumber \\ & & \hspace*{1.1cm} + \left. \Xi^+(K) \, \gamma_\mu \omega^\mu(P)\, \Xi^-(K-P) \, \gamma_\nu \omega_C^\nu(-P) + \Xi^-(K) \, \gamma_\mu \omega_C^\mu(P)\, \Xi^+(K-P) \, \gamma_\nu \omega^\nu(-P) \right] \,\,. \end{aligned}$$ The remainder of the calculation is straightforward, but somewhat tedious. First, insert the (Fourier-transform of the) linearized version (\[linearomega\]) for the fields $\omega^\mu$ and $\omega_C^\mu$. This produces a plethora of terms which are second order in the gluon and meson fields, with coefficients that are traces over color, flavor, and spin. 
Next, perform the color and flavor traces in these coefficients. It turns out that some of them are identically zero, preventing the occurrence of terms which mix gluons of adjoint colors 1, 2, and 3 (the unbroken $SU(2)_c$ subgroup) among themselves and with the other gluon and meson fields. Furthermore, there are no terms mixing the meson fields $\varphi_a,\, a=4, \ldots 7,$ with $\varphi_8$. There are mixed terms between gluons and mesons with adjoint color indices $4, \ldots, 7$, and between the gluon field $A_8^\mu$ and the meson field $\varphi_8$. Some of the mixed terms (those which mix gluons and mesons of adjoint colors 4 and 5, as well as 6 and 7) can be eliminated via a unitary transformation analogous to the one employed in Ref. [@dhr2f], Eq. (80). Introducing the tensors $$\begin{aligned} \Pi^{\mu \nu}_{11} (P) & \equiv & \Pi^{\mu \nu}_{22} (P) \equiv \Pi^{\mu \nu}_{33} (P) = \frac{g^2}{2} \, \frac{T}{V} \sum_K {\rm Tr}_{s} \left[ \gamma^\mu \, G^+ (K) \, \gamma^\nu \, G^+(K-P) + \gamma^\mu \, G^- (K) \, \gamma^\nu \, G^-(K-P) \right. \nonumber \\ & & \left. \hspace*{5.1cm} +\, \gamma^\mu \, \Xi^- (K) \, \gamma^\nu \, \Xi^+(K-P) + \gamma^\mu \, \Xi^+ (K) \, \gamma^\nu \, \Xi^-(K-P) \right] \,\,, \label{Pi11}\end{aligned}$$ cf.  Eq. (78a) of Ref. [@dhr2f], $$\begin{aligned} \Pi^{\mu \nu}_{44} (P) & \equiv & \Pi^{\mu \nu}_{66} (P) = \frac{g^2}{2} \, \frac{T}{V} \sum_K {\rm Tr}_{s} \left[ \gamma^\mu \, G_0^+ (K) \, \gamma^\nu \, G^+(K-P) + \gamma^\mu \, G^- (K) \, \gamma^\nu \, G_0^-(K-P) \right]\,\, , \label{Pi44diag} \end{aligned}$$ cf. Eq. (83a) of Ref. [@dhr2f], $$\begin{aligned} \Pi^{\mu \nu}_{55} (P) & \equiv & \Pi^{\mu \nu}_{77} (P) = \frac{g^2}{2} \, \frac{T}{V} \sum_K {\rm Tr}_{s} \left[ \gamma^\mu \, G^+ (K) \, \gamma^\nu \, G_0^+(K-P) + \gamma^\mu \, G_0^- (K) \, \gamma^\nu \, G^-(K-P) \right]\,\, . \label{Pi55diag}\end{aligned}$$ cf. Eq. (83b) of Ref. 
[@dhr2f], as well as $$\begin{aligned} \Pi^{\mu \nu}_{88} (P) & = & \frac{2}{3} \, \Pi_0^{\mu \nu}(P) + \frac{1}{3} \, \tilde{\Pi}^{\mu \nu} (P) \,\, , \label{Pi88}\\ \tilde{\Pi}^{\mu \nu} (P) & = & \frac{g^2}{2} \, \frac{T}{V} \sum_K {\rm Tr}_{s} \left[ \gamma^\mu \, G^+ (K) \, \gamma^\nu \, G^+(K-P) + \gamma^\mu \, G^- (K) \, \gamma^\nu \, G^-(K-P) \right. \nonumber \\ & & \left. \hspace*{1.8cm} -\, \gamma^\mu \, \Xi^- (K) \, \gamma^\nu \, \Xi^+(K-P) - \gamma^\mu \, \Xi^+ (K) \, \gamma^\nu \, \Xi^-(K-P) \right] \,\,, \label{Pitilde}\end{aligned}$$ cf. Eq. (78c) of Ref. [@dhr2f], where $\Pi_0^{\mu \nu}$ is the gluon self-energy in a dense, but normal-conducting system, $$\Pi_0^{\mu \nu} (P) = \frac{g^2}{2}\, \frac{T}{V} \sum_K {\rm Tr}_{s} \left[\gamma^\mu \, G_0^+ (K)\, \gamma^\nu \, G_0^+(K-P) + \gamma^\mu \,G_0^-(K)\,\gamma^\nu \,G_0^-(K-P) \right]\,\, , \label{Pi0}$$ cf. Eq. (27b) of Ref. [@dhr2f], the final result can be written in the compact form (cf. Eq. (C19) of Ref. [@msw]) $$\label{S2final} S_2 = - \frac{1}{2} \, \frac{V}{T} \, \sum_P \sum_{a=1}^8 \left[A_\mu^a(-P) - \frac{i}{g}\, P_\mu\, \varphi^a(-P)\right] \, \Pi^{\mu \nu}_{aa}(P) \, \left[A_\nu^a(P) + \frac{i}{g}\, P_\nu\, \varphi^a(P)\right] \,\, .$$ In deriving Eq. (\[S2final\]), we have made use of the transversality of the polarization tensor in the normal-conducting phase, $\Pi_0^{\mu \nu}(P) \, P_\nu = P_\mu \, \Pi_0^{\mu \nu}(P)=0$. Note that the tensors $\Pi^{\mu \nu}_{aa}$ for $a= 1, \, 2,$ and 3 are also transverse, but those for $a=4,\ldots,8$ are not. This can be seen explicitly from the expressions given in Ref.  [@dhrselfenergy]. The compact notation of Eq. (\[S2final\]) is made possible by the fact that $\varphi^a \equiv 0$ for $a = 1,2,3$, and because we introduced the extra factor $1/\sqrt{3}$ in Eq. (\[phase\]) as compared to Ref. [@msw]. To make further progress, it is advantageous to tensor-decompose $\Pi^{\mu \nu}_{aa}$. 
Various ways to do this are possible [@msw]; here we follow the notation of Ref. [@LeBellac]. First, define a projector onto the subspace parallel to $P^\mu$, $$\label{E} {\rm E}^{\mu \nu} = \frac{P^\mu \, P^\nu}{P^2}\,\, .$$ Then choose a vector orthogonal to $P^\mu$, for instance $$N^\mu \equiv \left( \frac{p_0\, p^2}{P^2}, \frac{p_0^2\, {\bf p}}{P^2} \right) \equiv \left(g^{\mu \nu} - {\rm E}^{\mu \nu}\right)\, f_\nu\,\, ,$$ with $f^\mu = (0,{\bf p})$. Note that $N^2 = -p_0^2\,p^2/P^2$. Now define the projectors $$\label{BCA} {\rm B}^{\mu \nu} = \frac{N^\mu\, N^\nu}{N^2}\,\,\,\, , \,\,\,\,\, {\rm C}^{\mu \nu} = N^\mu \, P^\nu + P^\mu\, N^\nu \,\,\,\, , \,\,\,\,\, {\rm A}^{\mu \nu} = g^{\mu \nu} - {\rm B}^{\mu \nu} - {\rm E}^{\mu \nu} \,\, .$$ Using the explicit form of $N^\mu$, one convinces oneself that the tensor ${\rm A}^{\mu \nu}$ projects onto the spatially transverse subspace orthogonal to $P^\mu$, $${\rm A}^{00} = {\rm A}^{0i}=0\,\,\,\, , \,\,\,\,\, {\rm A}^{ij} = - \left(\delta^{ij} - \hat{p}^i \, \hat{p}^j \right)\,\, .$$ (Reference [@LeBellac] also uses the notation $P_T^{\mu \nu}$ for ${\rm A}^{\mu \nu}$.) Consequently, the tensor ${\rm B}^{\mu \nu}$ projects onto the spatially longitudinal subspace orthogonal to $P^\mu$, $${\rm B}^{00} = - \frac{p^2}{P^2} \,\,\,\, , \,\,\,\,\, {\rm B}^{0i} = - \frac{p_0\, p^i}{P^2}\,\,\,\, ,\,\,\,\, {\rm B}^{ij} = - \frac{p_0^2}{P^2}\,\hat{p}^i\,\hat{p}^j\,\, .$$ (Reference [@LeBellac] also employs the notation $P_L^{\mu \nu}$ for ${\rm B}^{\mu \nu}$.) 
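The algebraic properties of the projectors (\[E\]) and (\[BCA\]) quoted above are easily confirmed numerically for a sample momentum $P^\mu$ (the numerical values below are arbitrary, with ${\bf p}$ taken along $z$):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])            # metric, signature (+,-,-,-)
p0, p = 1.5, 0.8                                # arbitrary sample values
P = np.array([p0, 0.0, 0.0, p])                 # P^mu
P2 = P @ g @ P                                  # P^2

E = np.outer(P, P) / P2                         # E^{mu nu} = P^mu P^nu / P^2
N = np.array([p0 * p**2, 0.0, 0.0, p0**2 * p]) / P2   # N^mu
N2 = N @ g @ N
B = np.outer(N, N) / N2                         # B^{mu nu}
A = g - B - E                                   # A^{mu nu}

Pl = g @ P                                      # P_mu (index lowered)
assert np.isclose(N @ g @ P, 0.0)               # N orthogonal to P
assert np.isclose(N2, -p0**2 * p**2 / P2)       # N^2 = -p0^2 p^2 / P^2
assert np.allclose(Pl @ A, 0.0)                 # A is transverse to P
assert np.allclose(Pl @ B, 0.0)                 # B is transverse to P
assert np.allclose(A + B + E, g)                # completeness
assert np.allclose(A[1:3, 1:3], -np.eye(2))     # A^{ij} = -(delta^{ij} - phat^i phat^j)
assert np.isclose(B[0, 0], -p**2 / P2)          # B^{00} = -p^2 / P^2
print("projector algebra verified")
```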
With these tensors, the gluon self-energy can be written in the form $$\label{tensor} \Pi^{\mu \nu}_{aa}(P) = \Pi^{\rm a}_{aa}(P) \, {\rm A}^{\mu \nu} + \Pi^{\rm b}_{aa}(P) \, {\rm B}^{\mu \nu} + \Pi^{\rm c}_{aa}(P)\, {\rm C}^{\mu \nu} + \Pi^{\rm e}_{aa}(P)\, {\rm E}^{\mu \nu}\,\, .$$ The polarization functions $\Pi^{\rm a}_{aa},\, \Pi^{\rm b}_{aa}, \, \Pi^{\rm c}_{aa},$ and $\Pi^{\rm e}_{aa}$ can be computed by projecting the tensor $\Pi^{\mu \nu}_{aa}$ onto the respective subspaces of the projectors (\[E\]) and (\[BCA\]). Introducing the abbreviations $$\Pi^t_{aa}(P) \equiv \frac{1}{2} \, \left( \delta^{ij} - \hat{p}^i\, \hat{p}^j \right) \, \Pi^{ij}_{aa}(P) \,\,\,\, ,\,\,\,\,\, \Pi^\ell_{aa}(P) \equiv \hat{p}_i \, \Pi^{ij}_{aa}(P)\, \hat{p}_j \,\, ,$$ these functions read \[Pifunctions\] $$\begin{aligned} \Pi^{\rm a}_{aa}(P) & = & \frac{1}{2}\, \Pi^{\mu \nu}_{aa}(P)\, {\rm A}_{\mu \nu} = - \Pi^t_{aa}(P) \,\, , \label{Pia} \\ \Pi^{\rm b}_{aa}(P) & = & \Pi^{\mu \nu}_{aa}(P)\, {\rm B}_{\mu \nu} = - \frac{p^2}{P^2} \, \left[ \Pi^{00}_{aa}(P) + 2\, \frac{p_0}{p}\, \Pi^{0i}_{aa}(P)\,\hat{p}_i + \frac{p_0^2}{p^2}\, \Pi^\ell_{aa}(P) \right] \,\, , \\ \Pi^{\rm c}_{aa}(P) & = & \frac{1}{2\, N^2 \, P^2}\, \Pi^{\mu \nu}_{aa}(P)\, {\rm C}_{\mu \nu} = -\frac{1}{P^2}\, \left[ \Pi^{00}_{aa}(P) + \frac{p_0^2+p^2}{p_0\,p}\, \Pi^{0i}_{aa}(P)\,\hat{p}_i + \Pi^\ell_{aa}(P) \right] \,\, , \\ \Pi^{\rm e}_{aa}(P) & = & \Pi^{\mu \nu}_{aa}(P)\, {\rm E}_{\mu \nu} = \frac{1}{P^2}\, \left[ p_0^2 \, \Pi^{00}_{aa}(P) + 2\,p_0\,p \, \Pi^{0i}_{aa}(P) \, \hat{p}_i + p^2 \, \Pi^\ell_{aa}(P) \right] \,\, .\end{aligned}$$ For the explicitly transverse tensor $\Pi^{\mu \nu}_{11}$, one has $\Pi^{\rm c}_{11} = \Pi^{\rm e}_{11} \equiv 0$. The same holds for the HDL polarization tensor $\Pi_0^{\mu \nu}$. For the other gluon colors $a=4, \ldots, 8$, the functions $\Pi^{\rm c}_{aa}$ and $\Pi^{\rm e}_{aa}$ do not vanish. 
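To see that these projections indeed extract the four coefficient functions, one can assemble a tensor with the structure of Eq. (\[tensor\]) from arbitrary coefficients and recover them numerically. A minimal sketch (our own check, assuming NumPy; normalization conventions as in Eqs. (\[E\]) and (\[BCA\])):

```python
import numpy as np

# Our own check: build Pi^{mu nu} = a*A + b*B + c*C + e*E with arbitrary
# coefficients and recover (a, b, c, e) via the projections quoted above.
g = np.diag([1.0, -1.0, -1.0, -1.0])
p0, p = 0.3, 1.0
P = np.array([p0, 0.0, 0.0, p]); P2 = p0**2 - p**2
E = np.outer(P, P) / P2
N = np.array([p0 * p**2 / P2, 0.0, 0.0, p0**2 * p / P2])
N2 = N @ g @ N
B = np.outer(N, N) / N2
A = g - B - E
C = np.outer(N, P) + np.outer(P, N)    # C^{mu nu} = N^mu P^nu + P^mu N^nu

a, b, c, e = 1.7, -0.4, 0.9, 2.2       # arbitrary coefficient functions
Pi = a * A + b * B + c * C + e * E     # contravariant Pi^{mu nu}

low = lambda X: g @ X @ g              # lower both indices
contract = lambda X, Y: np.sum(X * low(Y))

Pi_a = 0.5 * contract(Pi, A)
Pi_b = contract(Pi, B)
Pi_c = contract(Pi, C) / (2.0 * N2 * P2)
Pi_e = contract(Pi, E)
assert np.allclose([Pi_a, Pi_b, Pi_c, Pi_e], [a, b, c, e])

# Pi^a = -Pi^t, with Pi^t built from the spatial components, cf. Eq. (Pia):
phat = np.array([0.0, 0.0, 1.0])
Pi_t = 0.5 * np.sum((np.eye(3) - np.outer(phat, phat)) * Pi[1:, 1:])
assert abs(Pi_t + Pi_a) < 1e-12
```

The last two lines also confirm the relation $\Pi^{\rm a}_{aa} = -\Pi^t_{aa}$ between the covariant projection and the spatially transverse trace.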
Note that the dimensions of $\Pi^{\rm a}_{aa},\, \Pi^{\rm b}_{aa},$ and $\Pi^{\rm e}_{aa}$ are $[{\rm MeV}^2]$, while $\Pi^{\rm c}_{aa}$ is dimensionless. Now let us define the functions $$\label{functions} A_{\perp\, \mu}^a(P) = {{\rm A}_\mu}^\nu\, A^a_\nu(P) \,\,\,\, , \,\,\,\,\, A_\parallel^a(P) = \frac{ P^\mu \, A^a_\mu(P)}{P^2} \,\,\,\, , \,\,\,\,\, A_N^a(P) = \frac{ N^\mu \, A^a_\mu(P)}{N^2}\,\, .$$ Note that $A_\parallel^a(-P) = - P^\mu \, A^a_\mu(-P)/P^2$, and $A_N^a(-P) = - N^\mu \, A^a_\mu(-P)/N^2$, since $N^\mu$ is odd under $P \rightarrow -P$. The fields $A_\parallel^a(P)$ and $A_N^a(P)$ are dimensionless. With the tensor decomposition (\[tensor\]) and the functions (\[functions\]), Eq. (\[S2final\]) becomes $$\begin{aligned} S_2 & = & -\frac{1}{2}\, \frac{V}{T} \sum_P \sum_{a=1}^8 \left\{ \frac{}{} A_{\perp\, \mu}^a(-P)\, \Pi^{\rm a}_{aa}(P)\, {\rm A}^{\mu \nu}\, A_{\perp\,\nu}^a(P) - A_N^a(-P) \, \Pi^{\rm b}_{aa}(P)\, N^2 \, A_N^a(P) \right. \nonumber \\ & & - \left[A_\parallel^a(-P) + \frac{i}{g}\, \varphi^a(-P)\right]\, \Pi^{\rm c}_{aa}(P) \,N^2 P^2 \, A_N^a(P) - A_N^a(-P) \, \Pi^{\rm c}_{aa}(P) \,N^2 P^2 \ \left[A_\parallel^a(P) + \frac{i}{g}\, \varphi^a(P)\right] \nonumber \\ & & - \left. \left[A_\parallel^a(-P) + \frac{i}{g}\, \varphi^a(-P)\right]\, \Pi^{\rm e}_{aa}(P) \, P^2 \, \left[A_\parallel^a(P) + \frac{i}{g}\, \varphi^a(P)\right] \right\}\,\,. \label{decompose}\end{aligned}$$ In any spontaneously broken gauge theory, the excitations of the condensate mix with the gauge fields corresponding to the broken generators of the underlying gauge group. The mixing occurs in the components orthogonal to the spatially transverse degrees of freedom, [*i.e.*]{}, for the spatially longitudinal fields, $A_N^a$, and the fields parallel to $P^\mu$, $A_\parallel^a$. For the two-flavor color superconductor, these components mix with the meson fields for gluon colors $4, \ldots, 8$. The mixing is particularly evident in Eq. (\[decompose\]). 
The terms mixing mesons and gauge fields can be eliminated by a suitable choice of gauge. The gauge to accomplish this goal is the ’t Hooft gauge. The “unmixing” procedure of mesons and gauge fields consists of two steps. First, we eliminate the terms in Eq. (\[decompose\]) which mix $A_N^a$ and $A_\parallel^a$. This is achieved by substituting $$\label{sub} \hat{A}_\parallel^a(P) = A_\parallel^a(P) + \frac{\Pi^{\rm c}_{aa}(P)\, N^2}{\Pi^{\rm e}_{aa}(P)} \, A_N^a(P)\,\, .$$ (We do not perform this substitution for $a=1,2,3$; for these gluon colors $\Pi^{\rm c}_{aa}\equiv 0$, such that there are no terms in Eq. (\[decompose\]) which mix $A_\parallel^a$ and $A_N^a$). This shift of the gauge field component $A_\parallel^a$ is completely innocuous for the following reasons. First, the Jacobian $\partial(\hat{A}_\parallel, A_N)/\partial(A_\parallel, A_N)$ is unity, so the measure of the functional integral over gauge fields is not affected. Second, the only other term in the gauge field action, which is quadratic in the gauge fields and thus relevant for the derivation of the gluon propagator, is the free field action $$\label{S0} S_{F^2}^{(0)} \equiv - \frac{1}{2} \, \frac{V}{T} \sum_P \sum_{a=1}^8 A_\mu^a(-P) \, \left(P^2\, g^{\mu \nu} - P^{\mu}\, P^{\nu} \right) \, A_\nu^a(P) \equiv - \frac{1}{2} \, \frac{V}{T} \sum_P \sum_{a=1}^8 A_\mu^a(-P) \, P^2 \, \left({\rm A}^{\mu \nu} + {\rm B}^{\mu \nu} \right) \, A_\nu^a(P)\,\, ,$$ and it does not contain the parallel components $A_\parallel^a(P)$. It is therefore also not affected by the shift of variables (\[sub\]). After renaming $\hat{A}_\parallel^a \rightarrow A_\parallel^a$, the final result for $S_2$ reads: $$\begin{aligned} S_2 & = & -\frac{1}{2}\, \frac{V}{T} \sum_P \sum_{a=1}^8 \left\{ \frac{}{} A_{\perp\, \mu}^a(-P)\, \Pi^{\rm a}_{aa}(P)\, {\rm A}^{\mu \nu}\, A_{\perp\,\nu}^a(P) - A_N^a(-P) \,\hat{\Pi}^{\rm b}_{aa}(P) \, N^2 \, A_N^a(P) \right. \nonumber \\ & & \hspace*{1.85cm} - \left. 
\left[A_\parallel^a(-P) + \frac{i}{g}\, \varphi^a(-P)\right]\, \Pi^{\rm e}_{aa}(P) \, P^2 \, \left[A_\parallel^a(P) + \frac{i}{g}\, \varphi^a(P)\right] \right\}\,\,, \label{S2finalunmix}\end{aligned}$$ where we introduced $$\label{hatPib} \hat{\Pi}^{\rm b}_{aa}(P) \equiv \Pi^{\rm b}_{aa}(P) - \frac{\left[\Pi^{\rm c}_{aa}(P)\right]^2 N^2 P^2}{\Pi^{\rm e}_{aa}(P)} \,\, .$$ The ’t Hooft gauge fixing term is now chosen to eliminate the mixing between $A_\parallel^a$ and $\varphi^a$: $$S_{\rm gf} = \frac{1}{2 \, \lambda} \,\frac{V}{T} \sum_P \sum_{a=1}^8 \left[ P^2\, A_\parallel^a(-P) - \lambda\, \frac{i}{g} \, \Pi^{\rm e}_{aa}(P)\, \varphi^a(-P) \right] \, \left[ P^2\, A_\parallel^a(P) - \lambda\, \frac{i}{g}\, \Pi^{\rm e}_{aa}(P)\, \varphi^a(P) \right] \,\, . \label{L_gf}$$ This gauge condition is non-local in coordinate space, which seems peculiar, but poses no problem in momentum space. Note that $P^2\, A_\parallel^a(P) \equiv P^\mu \, A_\mu^a(P)$. Therefore, in various limits the choice of gauge (\[L\_gf\]) corresponds to covariant gauge, $$S_{\rm cg} = \frac{1}{2 \, \lambda} \,\frac{V}{T} \sum_P \sum_{a=1}^8 A_\mu^a(-P)\, P^\mu\, P^\nu \, A_\nu^a(P) \,\, . \label{L_cg}$$ The first limit we consider is $T, \mu \rightarrow 0$, [*i.e.*]{} the vacuum. Then, $\Pi^{\rm e}_{aa} \equiv 0$, and Eq. (\[L\_gf\]) becomes (\[L\_cg\]). The second case is the limit of large 4-momenta, $P \rightarrow \infty$. As shown in Ref. [@dhrselfenergy], in this region of phase space the effects from a color-superconducting condensate on the gluon polarization tensor are negligible. In other words, the gluon polarization tensor approaches the HDL limit. The physical reason is that gluons with large momenta do not see quark Cooper pairs as composite objects, but resolve the individual color charges inside the pair. 
Consequently, $\Pi^{\rm e}_{aa}(P) \, P^2 \rightarrow P_\mu \, \Pi^{\mu \nu}_0(P)\, P_\nu \equiv 0$ for $P \rightarrow \infty$ and, for large $P$, the individual terms in the sum over $P$ in Eqs. (\[L\_gf\]) and (\[L\_cg\]) agree. Finally, for gluon colors $a=1,2,3$, $\Pi^{\rm e}_{aa} \equiv 0$, since the self-energy $\Pi^{\mu \nu}_{11}$ is transverse. Thus, for $a=1,2,3$ the terms in Eqs. (\[L\_gf\]) and (\[L\_cg\]) are identical. The decoupling of mesons and gluon degrees of freedom becomes obvious once we add (\[L\_gf\]) to (\[S2finalunmix\]) and (\[S0\]), $$\begin{aligned} S_{F^2}^{(0)} + S_2 + S_{\rm gf} & = & -\frac{1}{2} \, \frac{V}{T} \sum_P \sum_{a=1}^8 \left\{ \frac{}{} A_{\perp\, \mu}^a(-P)\, \left[ P^2 + \Pi^{\rm a}_{aa}(P) \right]\, {\rm A}^{\mu \nu}\, A_{\perp\,\nu}^a(P) \right. \nonumber \\ & & \hspace*{2.1cm} - \; A_N^a(-P) \, \left[ P^2 + \hat{\Pi}^{\rm b}_{aa}(P) \right] \, N^2 \, A_N^a(P) \nonumber \\ & & \hspace*{2.1cm} - \; A_\parallel^a(-P) \, \left[ \frac{1}{\lambda}\, P^2 + \Pi^{\rm e}_{aa}(P) \right]\, P^2 \, A_\parallel^a(P) \nonumber \\ & & \hspace*{2.1cm} + \left. \frac{\lambda}{g^2}\, \varphi^a(-P)\, \left[ \frac{1}{\lambda}\, P^2 + \Pi^{\rm e}_{aa}(P) \right] \, \Pi^{\rm e}_{aa}(P)\, \varphi^a(P) \right\} \,\, . \label{SgfS2}\end{aligned}$$ Consequently, the inverse gluon propagator is $${\Delta^{-1}}^{\mu \nu}_{aa}(P) = \left[ P^2 + \Pi^{\rm a}_{aa}(P) \right]\, {\rm A}^{\mu \nu} + \left[ P^2 + \hat{\Pi}^{\rm b}_{aa}(P) \right] \, {\rm B}^{\mu \nu} + \left[ \frac{1}{\lambda}\, P^2 + \Pi^{\rm e}_{aa}(P) \right] \, {\rm E}^{\mu \nu}\,\, .$$ Inverting this as discussed in Ref. 
[@LeBellac], one obtains the gluon propagator for gluons of color $a$, $$\label{glueprop} \Delta^{\mu \nu}_{aa}(P) = \frac{1}{P^2 + \Pi^{\rm a}_{aa}(P)}\, {\rm A}^{\mu \nu} + \frac{1}{P^2 + \hat{\Pi}^{\rm b}_{aa}(P)}\, {\rm B}^{\mu \nu} + \frac{\lambda}{P^2 + \lambda\, \Pi^{\rm e}_{aa}(P)} \, {\rm E}^{\mu \nu}\,\, .$$ For any $\lambda \neq 0$, the gluon propagator contains unphysical contributions parallel to $P^\mu$, which have to be cancelled by the corresponding Faddeev-Popov ghosts when computing physical observables. Only for $\lambda = 0$ do these contributions vanish, and the gluon propagator is then explicitly transverse, [*i.e.*]{}, $P_\mu\, \Delta^{\mu \nu}_{aa}(P) = \Delta^{\mu \nu}_{aa}(P)\,P_\nu = 0$. Also, in this case the ghost propagator is independent of the chemical potential $\mu$. The contribution of Faddeev-Popov ghosts to the gluon polarization tensor is then $\sim g^2\,T^2$ and thus negligible at $T=0$. We shall therefore focus on this particular choice for the gauge parameter in the following. Note that for $\lambda = 0$, the inverse meson field propagator is $$\label{NGbosons} D^{-1}_{aa}(P) \equiv \Pi^{\rm e}_{aa}(P)\, P^2 = P_\mu \, \Pi^{\mu \nu}_{aa}(P)\, P_\nu \,\, ,$$ and the dispersion relation for the mesons follows from the condition $D^{-1}_{aa}(P)=0$, as demonstrated in Ref. [@zarembo] for a three-flavor color superconductor in the color-flavor-locked phase. The gluon propagator for transverse and longitudinal modes can now be read off Eq. (\[glueprop\]) as coefficients of the corresponding tensors ${\rm A}^{\mu \nu}$ (the projector onto the spatially transverse subspace orthogonal to $P^\mu$) and ${\rm B}^{\mu \nu}$ (the projector onto the spatially longitudinal subspace orthogonal to $P^\mu$). For the transverse modes one has [@LeBellac] $$\label{transverse} \Delta^t_{aa}(P) \equiv \frac{1}{P^2 + \Pi^{\rm a}_{aa}(P)} = \frac{1}{P^2 - \Pi^t_{aa}(P)}\,\, ,$$ where we used Eq. (\[Pia\]). 
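As a quick numerical check (our own, not from the original; NumPy assumed), one can verify that Eq. (\[glueprop\]) really inverts ${\Delta^{-1}}^{\mu \nu}_{aa}$, i.e., that the two contract, with one metric in between, to $g^{\mu \nu}$, for arbitrary sample values of the polarization functions and the gauge parameter:

```python
import numpy as np

# Verify Delta^{-1} . g . Delta = g for the tensor-decomposed propagator.
g = np.diag([1.0, -1.0, -1.0, -1.0])
p0, p = 0.3, 1.0
P = np.array([p0, 0.0, 0.0, p]); P2 = p0**2 - p**2
E = np.outer(P, P) / P2
N = np.array([p0 * p**2 / P2, 0.0, 0.0, p0**2 * p / P2])
B = np.outer(N, N) / (N @ g @ N)
A = g - B - E

# Arbitrary sample self-energies and gauge parameter:
pi_a, pi_b_hat, pi_e, lam = 0.6, -1.3, 0.8, 0.5

Dinv = (P2 + pi_a) * A + (P2 + pi_b_hat) * B + (P2 / lam + pi_e) * E
Delta = A / (P2 + pi_a) + B / (P2 + pi_b_hat) + lam * E / (P2 + lam * pi_e)

assert np.allclose(Dinv @ g @ Delta, g, atol=1e-12)
```

The ${\rm E}$ part works because $(P^2/\lambda + \Pi^{\rm e})\,\lambda/(P^2 + \lambda\,\Pi^{\rm e}) = 1$, while the mutual orthogonality of ${\rm A}$, ${\rm B}$, and ${\rm E}$ removes all cross terms.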
Multiplying the coefficient of ${\rm B}^{\mu \nu}$ in Eq. (\[glueprop\]) with the standard factor $-P^2/p^2$ [@LeBellac], one obtains for the longitudinal modes $$\label{longitudinal} \hat{\Delta}^{00}_{aa}(P) \equiv - \frac{P^2}{p^2} \, \frac{1}{P^2 + \hat{\Pi}^{\rm b}_{aa}(P)} = - \frac{1}{p^2 - \hat{\Pi}^{00}_{aa}(P)}\,\, ,$$ where the longitudinal gluon self-energy $$\label{hatPi00aa} \hat{\Pi}^{00}_{aa}(P) \equiv p^2 \, \frac{\Pi^{00}_{aa}(P)\,\Pi^\ell_{aa}(P) - \left[ \Pi^{0i}_{aa}(P) \, \hat{p}_i \right]^2 }{ p_0^2 \, \Pi^{00}_{aa}(P) + 2\, p_0\,p\, \Pi^{0i}_{aa}(P) \, \hat{p}_i + p^2 \, \Pi^\ell_{aa}(P) }$$ follows from the definition of $\hat{\Pi}^{\rm b}_{aa}$, Eq. (\[hatPib\]), and the relations (\[Pifunctions\]). The longitudinal gluon propagator $\hat{\Delta}^{00}_{aa}$ [*must not be confused*]{} with the $00$-component of $\Delta^{\mu \nu}_{aa}$. We deliberately use this (slightly ambiguous) notation to facilitate the comparison of our new and correct results with those of Ref. [@dhrselfenergy], which were partially incorrect. The results of that paper were derived in Coulomb gauge, where the $00$-component of the propagator is indeed [*identical*]{} to the longitudinal propagator (\[longitudinal\]). We were not able to find a ’t Hooft gauge that converged to the Coulomb gauge in the various limits discussed above, and consequently had to base our discussion on the covariant gauge (\[L\_cg\]) as the limiting case of Eq. (\[L\_gf\]). 
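Equation (\[hatPi00aa\]) can be checked against Eq. (\[hatPib\]) directly: inserting the projections (\[Pifunctions\]) and $N^2 P^2 = -p_0^2\, p^2$ into $\hat{\Pi}^{\rm b}_{aa}$ yields $\hat{\Pi}^{00}_{aa} = -(p^2/P^2)\, \hat{\Pi}^{\rm b}_{aa}$, which makes the two forms in Eq. (\[longitudinal\]) manifestly equal. A numerical sketch of this algebra (our own check, with arbitrary sample components of the polarization tensor):

```python
# Our own consistency check of Eq. (hatPi00aa) against Eq. (hatPib):
# build Pi^b, Pi^c, Pi^e from sample components Pi^00, Pi^0i phat_i, Pi^l
# via Eqs. (Pifunctions) and compare the two longitudinal propagator forms.
p0, p = 0.7, 1.3
P2 = p0**2 - p**2
N2P2 = -p0**2 * p**2                       # N^2 P^2

pi00, pi0i, pil = 0.9, -0.4, 1.6           # arbitrary sample components

pi_b = -(p**2 / P2) * (pi00 + 2.0 * (p0 / p) * pi0i + (p0**2 / p**2) * pil)
pi_c = -(1.0 / P2) * (pi00 + ((p0**2 + p**2) / (p0 * p)) * pi0i + pil)
pi_e = (1.0 / P2) * (p0**2 * pi00 + 2.0 * p0 * p * pi0i + p**2 * pil)

pi_b_hat = pi_b - pi_c**2 * N2P2 / pi_e    # Eq. (hatPib)

pi00_hat = p**2 * (pi00 * pil - pi0i**2) / (
    p0**2 * pi00 + 2.0 * p0 * p * pi0i + p**2 * pil)   # Eq. (hatPi00aa)

# hat{Pi}^00 = -(p^2/P^2) hat{Pi}^b, so both forms in Eq. (longitudinal) agree:
assert abs(pi00_hat + (p**2 / P2) * pi_b_hat) < 1e-12
assert abs(-(P2 / p**2) / (P2 + pi_b_hat) + 1.0 / (p**2 - pi00_hat)) < 1e-12
```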
To “unmix” them we have used a form of ’t Hooft gauge which smoothly converges to covariant gauge in the vacuum, as well as for large gluon momenta, and when the gluon polarization tensor is explicitly transverse. Finally, choosing the gauge fixing parameter $\lambda=0$ we derived the gluon propagator for transverse, Eq. (\[transverse\]), and longitudinal modes, Eq. (\[longitudinal\]) with (\[hatPi00aa\]). Spectral properties of the eighth gluon {#III} ======================================= In this section, we explicitly compute the spectral properties of the eighth gluon. We shall confirm the results of Ref. [@dhrselfenergy] for the transverse mode and amend those for the longitudinal mode, which have not been correctly computed in Ref. [@dhrselfenergy]. In particular, we shall show that the plasmon dispersion relation now has the correct behavior $p_0 \rightarrow m_g$ as $p \rightarrow 0$. Furthermore, the longitudinal spectral density vanishes for gluon energies and momenta located on the dispersion branch of the Nambu-Goldstone bosons, [*i.e.*]{}, for energies and momenta given by the roots of Eq. (\[NGbosons\]). For the eighth gluon, this condition can be written in the form $P_\mu\, \tilde{\Pi}^{\mu \nu}(P)\, P_\nu = 0$ [@gusyshov; @zarembo], since the HDL self-energy is transverse, $P_\mu\, \Pi^{\mu \nu}_0(P)\, P_\nu \equiv 0$. Polarization tensor ------------------- We first compute the polarization tensor for the transverse and longitudinal components of the eighth gluon. 
To this end, it is convenient to rewrite the longitudinal gluon self-energy (\[hatPi00aa\]) in the form $$\begin{aligned} \label{Pi0088} \hat{\Pi}^{00}_{88}(P) & \equiv & \frac{2}{3} \, \Pi_0^{00}(P) + \frac{1}{3}\, \hat{\Pi}^{00}(P) \,\, , \\ \hat{\Pi}^{00}(P) & \equiv & p^2 \, \frac{\tilde{\Pi}^{00}(P)\,\tilde{\Pi}^\ell(P) - \left[ \tilde{\Pi}^{0i}(P) \, \hat{p}_i \right]^2 }{ p_0^2 \, \tilde{\Pi}^{00}(P) + 2\, p_0\,p\, \tilde{\Pi}^{0i}(P) \, \hat{p}_i + p^2 \, \tilde{\Pi}^\ell(P) } \,\, , \label{hatPi00}\end{aligned}$$ with $\tilde{\Pi}^\ell (P) \equiv \hat{p}_i \, \tilde{\Pi}^{ij}(P)\, \hat{p}_j$. Let us now explicitly compute the polarization functions. As in Ref. [@dhrselfenergy] we take $T=0$, and we shall use the identity $$\label{ident} \frac{1}{x+i \eta} \equiv {\cal P}\, \frac{1}{x} - i \pi \, \delta(x)\,\, ,$$ where ${\cal P}$ stands for the principal value prescription, in order to decompose the polarization tensor into real and imaginary parts. The imaginary parts can then be straightforwardly computed, while the real parts are computed from the dispersion integral $$\label{dispint} {\rm Re} \, \Pi(p_0,{\bf p}) \equiv \frac{1}{\pi} \, {\cal P} \int_{- \infty}^{\infty} d \omega\, \frac{{\rm Im}\, \Pi(\omega,{\bf p})}{ \omega - p_0} + C\,\, ,$$ where $C$ is a (subtraction) constant. If ${\rm Im}\, \Pi(\omega, {\bf p})$ is an odd function of $\omega$, ${\rm Im}\, \Pi(-\omega, {\bf p}) = - {\rm Im}\, \Pi(\omega, {\bf p})$, Eq. (\[dispint\]) becomes Eq. (39) of Ref. 
[@dhrselfenergy], $$\label{odd} {\rm Re} \, \Pi(p_0,{\bf p}) \equiv \frac{1}{\pi} \, {\cal P} \int_0^{\infty} d \omega\, {\rm Im}\, \Pi_{\rm odd}(\omega,{\bf p}) \, \left(\frac{1}{\omega+p_0} + \frac{1}{\omega - p_0} \right) + C\,\, ,$$ and if it is an even function of $\omega$, ${\rm Im}\, \Pi(-\omega, {\bf p}) = {\rm Im}\, \Pi(\omega, {\bf p})$, we have instead $$\label{even} {\rm Re} \, \Pi(p_0,{\bf p}) \equiv \frac{1}{\pi} \, {\cal P} \int_0^{\infty} d \omega\, {\rm Im}\, \Pi_{\rm even}(\omega,{\bf p}) \, \left(\frac{1}{\omega-p_0} - \frac{1}{\omega + p_0} \right) + C\,\, .$$ Since the polarization tensor for the transverse gluon modes, $\Pi^t_{88} \equiv \frac{2}{3}\, \Pi_0^t + \frac{1}{3}\, \tilde{\Pi}^t$, has already been computed in Ref. [@dhrselfenergy], we just cite the results. The imaginary part of the transverse HDL polarization function reads (cf. Eq. (22b) of Ref. [@dhrselfenergy]) $${\rm Im}\, \Pi^t_0(P) = - \pi\, \frac{3}{4}\, m_g^2 \, \frac{p_0}{p}\, \left(1- \frac{p_0^2}{p^2} \right) \, \theta(p-p_0)\,\, .$$ The corresponding real part is computed from Eq. (\[odd\]), with the result (cf. Eqs. (40b) and (41) of Ref. [@dhrselfenergy]) $${\rm Re}\, \Pi^t_0(P) = \frac{3}{2}\, m_g^2\, \left[ \frac{p_0^2}{p^2} + \left( 1- \frac{p_0^2}{p^2} \right) \, \frac{p_0}{2\, p} \, \ln \left| \frac{p_0 + p}{p_0 - p} \right| \, \right]\,\, .$$ We have used the fact that the value of the subtraction constant is $C^t_0=m_g^2$, which can be derived from comparing a direct calculation of ${\rm Re}\, \Pi^t_0$ using Eq. (19b) of Ref. [@dhrselfenergy] with the above computation via the dispersion formula (\[odd\]). The imaginary part of the tensor $\tilde{\Pi}^t$ is given by (cf. Eq. (36) of Ref. 
[@dhrselfenergy]) $$\begin{aligned} \lefteqn{{\rm Im}\, \tilde{\Pi}^t(P) = - \pi\, \frac{3}{4}\, m_g^2 \, \theta(p_0 - 2\, \phi)\, \frac{p_0}{p} \left\{ \frac{}{} \theta(E_p - p_0) \, \left[ \left( 1 - \frac{p_0^2}{p^2}\, (1+s^2) \right) \, {\bf E}(t) - s^2 \,\left( 1- 2\, \frac{p_0^2}{p^2} \right) \, {\bf K}(t) \right] \right.} \nonumber \\ & + & \left. \theta(p_0 - E_p) \left[ \left( 1 - \frac{p_0^2}{p^2}\, (1+s^2) \right) \, E(\alpha,t) - \left( 1- \frac{p_0^2}{p^2} \right)\, \frac{p}{p_0}\, \sqrt{1 - \frac{4\, \phi^2}{p_0^2 - p^2}} - s^2 \,\left( 1- 2\, \frac{p_0^2}{p^2} \right)\, F(\alpha,t) \right] \right\}\,\, ,\end{aligned}$$ where $\phi$ is the value of the color-superconducting gap, $E_p = \sqrt{p^2 + 4 \phi^2}$, $t = \sqrt{1-4\phi^2/p_0^2}$, $s^2 = 1 - t^2$, $\alpha = \arcsin [p/(t p_0)]$, and $F(\alpha,t)$, $E(\alpha,t)$ are elliptic integrals of the first and second kind, while ${\bf K}(t) \equiv F( \pi/2, t)$ and ${\bf E}(t) \equiv E( \pi/2,t)$ are the corresponding complete elliptic integrals. The real part is again computed from Eq. (\[odd\]). The integral has to be done numerically, see Appendix A of Ref.  [@dhrselfenergy] for details. The subtraction constant is, for reasons discussed at length in Ref. [@dhrselfenergy], identical to the one in the HDL limit, $C^t \equiv C^t_0 = m_g^2$. Finally, taking the linear combination $\Pi^t_{88} \equiv \frac{2}{3}\, \Pi_0^t + \frac{1}{3}\, \tilde{\Pi}^t$ completes the calculation of the transverse polarization function $\Pi^t_{88}$. In order to compute the polarization function for the longitudinal gluon, $\hat{\Pi}^{00}_{88}$, we have to know the functions $\Pi_0^{00}(P)$, $\tilde{\Pi}^{00}(P)$, $\tilde{\Pi}^{0i}(P)\, \hat{p}_i$, and $\tilde{\Pi}^\ell(P)$. The first two functions, $\Pi_0^{00}(P)$ and $\tilde{\Pi}^{00}(P)$ have also been computed in Ref. [@dhrselfenergy]. The imaginary part of the longitudinal HDL polarization function is (cf. Eq. (22a) of Ref. 
[@dhrselfenergy]) $${\rm Im}\, \Pi_0^{00}(P) = - \pi\, \frac{3}{2}\, m_g^2 \, \frac{p_0}{p} \, \theta(p-p_0)\,\, .$$ The real part is computed from Eq. (\[odd\]), with the result (cf. Eqs. (40a) and (41) of Ref. [@dhrselfenergy]) $${\rm Re}\, \Pi^{00}_0(P) = - 3\, m_g^2\, \left( 1- \frac{p_0}{2\, p} \, \ln \left| \frac{p_0 + p}{p_0 - p} \right| \, \right)\,\, . \label{RePi000}$$ Here, the subtraction constant is $C^{00}_0 = 0$. The imaginary part of the function $\tilde{\Pi}^{00}$ is (cf. Eq. (35) of Ref. [@dhrselfenergy]) $${\rm Im}\, \tilde{\Pi}^{00}(P) = - \pi\, \frac{3}{2}\, m_g^2 \, \theta(p_0 - 2\, \phi)\, \frac{p_0}{p} \left\{ \frac{}{} \theta(E_p - p_0) \, {\bf E}(t) + \theta(p_0 - E_p) \left[ E(\alpha,t) - \frac{p}{p_0}\, \sqrt{1 - \frac{4\, \phi^2}{p_0^2 - p^2}} \right] \right\}\,\, .$$ The real part is computed from Eq. (\[odd\]), with the subtraction constant $C^{00} \equiv C^{00}_0 = 0$. Again, the integral has to be done numerically. It remains to compute the functions $\tilde{\Pi}^{0i}(P)\, \hat{p}_i$ and $\tilde{\Pi}^\ell(P)$. First, one performs the spin traces in Eq. (\[Pitilde\]) to obtain Eqs. (102b) and (102c) of Ref. [@dhr2f]. 
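As an aside, the dispersion formula (\[odd\]) is easy to validate numerically in the HDL limit, where both sides are known in closed form. The sketch below (our own check, assuming NumPy/SciPy) reproduces ${\rm Re}\,\Pi_0^{00}$ for $p_0 > p$, where the integrand has no pole on $[0,p]$ and no principal-value treatment is needed; for $p_0 < p$ one would instead use a principal-value routine such as `scipy.integrate.quad` with `weight='cauchy'`:

```python
import numpy as np
from scipy.integrate import quad

# Our own numerical check of the dispersion formula (odd) in the HDL limit.
m_g, p0, p = 1.0, 2.0, 1.0      # p0 > p: integrand regular on [0, p]

def im_pi00(w):
    # Im Pi_0^00(w, p) = -pi * (3/2) * m_g^2 * (w/p) * theta(p - w)
    return -np.pi * 1.5 * m_g**2 * (w / p) if w < p else 0.0

integrand = lambda w: im_pi00(w) * (1.0 / (w + p0) + 1.0 / (w - p0)) / np.pi
re_dispersive, _ = quad(integrand, 0.0, p)   # subtraction constant C_0^00 = 0

# Closed form quoted above:
re_closed = -3.0 * m_g**2 * (1.0 - (p0 / (2.0 * p))
                             * np.log(abs((p0 + p) / (p0 - p))))
assert abs(re_dispersive - re_closed) < 1e-6
```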
Then, taking $T=0$, \[Pi0iPil\] $$\begin{aligned} \tilde{\Pi}^{0i}(P)\, \hat{p}_i & = & \frac{g^2}{2} \int \frac{d^3 {\bf k}}{(2 \pi)^3} \, \sum_{e_1,e_2= \pm} \left( e_1\, \hat{\bf k}_1 \cdot {\bf p} + e_2\, \hat{\bf k}_2 \cdot {\bf p} \right)\, \left( \frac{\xi_2}{2 \epsilon_2} - \frac{\xi_1}{2 \epsilon_1} \right) \nonumber \\ & & \hspace*{2.6cm} \times \, \left( \frac{1}{p_0 + \epsilon_1 + \epsilon_2 + i \eta} + \frac{1}{p_0 - \epsilon_1 - \epsilon_2+ i \eta} \right) \,\, , \\ \tilde{\Pi}^\ell(P) & = & - \frac{g^2}{2} \int \frac{d^3 {\bf k}}{(2 \pi)^3} \, \sum_{e_1,e_2= \pm} \left[ \left(1- e_1e_2\, \hat{\bf k}_1 \cdot {\bf k}_2 \right) + 2\, e_1 e_2\, \hat{\bf k}_1 \cdot {\bf p}\; \hat{\bf k}_2 \cdot {\bf p} \right]\, \frac{\epsilon_1 \epsilon_2 - \xi_1 \xi_2 - \phi_1 \phi_2}{2 \, \epsilon_1 \epsilon_2} \nonumber \\ & & \hspace*{2.6cm} \times \, \left( \frac{1}{p_0 + \epsilon_1 + \epsilon_2 + i \eta} - \frac{1}{p_0 - \epsilon_1 - \epsilon_2+ i \eta} \right) \,\, , \end{aligned}$$ where ${\bf k}_{1,2} = {\bf k} \pm {\bf p}/2$, $\phi_i \equiv \phi^{e_i}_{{\bf k}_i}$ is the gap function for quasiparticles ($e_i = +1$) or quasi-antiparticles ($e_i = -1$) with momentum ${\bf k}_i$, $\xi_i \equiv e_ik_i - \mu$, and $\epsilon_i \equiv \sqrt{\xi_i^2 + \phi_i^2}$. One now repeats the steps discussed in detail in Section II.A of Ref. [@dhrselfenergy] to obtain (for $p_0 \geq 0$) \[Pi0iPil2\] $$\begin{aligned} {\rm Im}\, \tilde{\Pi}^{0i}(P)\, \hat{p}_i & = & \pi\, \frac{3}{2}\, m_g^2 \, \theta(p_0 - 2\, \phi)\, \frac{p_0^2}{p^2} \left\{ \frac{}{} \theta(E_p - p_0) \, \left[ {\bf E}(t) - s^2 \, {\bf K}(t) \right] \right. \nonumber \\ & & \hspace*{3.2cm} + \left. 
\theta(p_0 - E_p) \left[ E(\alpha,t) - \frac{p}{p_0}\, \sqrt{1 - \frac{4\, \phi^2}{p_0^2 - p^2}} - s^2 \, F(\alpha,t) \right] \right\}\,\, , \\ {\rm Im}\, \tilde{\Pi}^\ell(P) & = & - \pi\, \frac{3}{2}\, m_g^2 \, \theta(p_0 - 2\, \phi)\, \frac{p_0^3}{p^3} \left\{ \frac{}{} \theta(E_p - p_0) \, \left[ (1+s^2)\, {\bf E}(t) - 2\, s^2 \, {\bf K}(t) \right] \right. \nonumber \\ & & \hspace*{2.5cm} + \left. \theta(p_0 - E_p) \left[ (1+s^2)\, E(\alpha,t) - \frac{p}{p_0}\, \sqrt{1 - \frac{4\, \phi^2}{p_0^2 - p^2}} - 2\, s^2 \, F(\alpha,t) \right] \right\}\,\, .\end{aligned}$$ One observes that in the limit $\phi \rightarrow 0$, the functions (\[Pi0iPil2\]) approach the HDL result $$\begin{aligned} {\rm Im}\, \Pi_0^{0i}(P)\, \hat{p}_i & = & \pi\, \frac{3}{2}\, m_g^2 \, \frac{p_0^2}{p^2} \, \theta(p-p_0)\,\, , \\ {\rm Im}\, \Pi_0^\ell(P) & = & - \pi\, \frac{3}{2}\, m_g^2 \, \frac{p_0^3}{p^3} \, \theta(p-p_0)\,\,.\end{aligned}$$ Applying Eq. (\[ident\]) to Eqs. (\[Pi0iPil\]) we immediately see that the imaginary part of $\tilde{\Pi}^{0i}(P)\, \hat{p}_i$ is [*even*]{}, while that of $\tilde{\Pi}^\ell(P)$ is [*odd*]{}. Thus, in order to compute the real part of $\tilde{\Pi}^{0i}(P)\, \hat{p}_i$, we have to use Eq. (\[even\]), while the real part of $\tilde{\Pi}^\ell(P)$ has to be computed from Eq. (\[odd\]). When implementing the numerical procedure discussed in Appendix A of Ref. [@dhrselfenergy] for the integral in Eq. (\[even\]), one has to modify Eq. (A1) of Ref. [@dhrselfenergy] appropriately. Finally, one has to determine the values of the subtraction constants $C^{0i}$ and $C^\ell$. We again use the fact that $C^{0i} \equiv C^{0i}_0$ and $C^\ell \equiv C^\ell_0$, where the index “0” refers to the HDL limit. The corresponding constants are determined by first computing ${\rm Re}\, \Pi_0^{0i}(P)\, \hat{p}_i$ and ${\rm Re}\, \Pi_0^\ell(P)$ from the dispersion formulas (\[odd\]) and (\[even\]). 
The result of this calculation is then compared to that of a direct computation using, for instance, the result (\[RePi000\]) for ${\rm Re}\, \Pi_0^{00}(P)$ and then inferring ${\rm Re}\, \Pi_0^{0i}(P) \, \hat{p}_i$ and ${\rm Re}\, \Pi_0^\ell(P)$ from the transversality of $\Pi_0^{\mu \nu}$. The result is $C^{0i} \equiv C^{0i}_0 = 0$ and $C^{\ell}\equiv C^\ell_0 = m_g^2$. At this point, we have determined all functions entering the transverse and longitudinal polarization functions for the eighth gluon. In Fig. \[fig1\] we show the imaginary parts and in Fig. \[fig2\] the real parts, for a fixed gluon momentum $p= 4\, \phi$, as a function of gluon energy $p_0$ (in units of $2\, \phi$). The units for the imaginary parts are $-3 \, m_g^2/2$, and for the real parts $+ 3\, m_g^2/2$. For comparison, in parts (a) and (g) of these figures, we show the results from Ref. [@dhrselfenergy] for the longitudinal and transverse polarization function of the gluon with adjoint color 1. In parts (d), (e), and (f) the functions $\tilde{\Pi}^{00}$, $-\tilde{\Pi}^{0i}\, \hat{p}_i$, and $\tilde{\Pi}^\ell$ are shown. According to Eq. (\[hatPi00\]) these are required to determine $\hat{\Pi}^{00}$, shown in part (b). Using Eq. (\[Pi0088\]), this result is then combined with the HDL polarization function $\Pi^{00}_0$ to compute $\hat{\Pi}^{00}_{88}$, shown in part (c). Finally, the transverse polarization function for gluons of color 8 is shown in part (i). This function is given by the linear combination $\Pi^t_{88} = \frac{2}{3}\, \Pi^t_0 + \frac{1}{3}\, \tilde{\Pi}^t$ of the transverse HDL polarization function $\Pi^t_0$ and the function $\tilde{\Pi}^t$, both of which are shown in part (h). In all figures, the results for the two-flavor color superconductor are drawn as solid lines, while the dotted lines correspond to those in a normal conductor, $\phi \rightarrow 0$ (the HDL limit). Note that parts (a), (d), (g), (h), and (i) of Figs. 
\[fig1\] and \[fig2\] agree with parts (a), (b), (d), (e), and (f) of Figs. 2 and 3 of Ref. [@dhrselfenergy]. The new results are parts (e) and (f) of Figs. \[fig1\] and \[fig2\], which are used to determine the functions in parts (b) and (c), the latter showing the correct longitudinal polarization function for the eighth gluon. In Ref. [@dhrselfenergy], this function was not computed correctly, as the effect from the fluctuations of the condensate on the polarization tensor of the gluons was not taken into account. The singularity around a gluon energy somewhat smaller than $p_0 = 2\, \phi$ visible in Figs. \[fig2\] (b) and (c) seems peculiar. It turns out that it arises due to a zero in the denominator of $\hat{\Pi}^{00}$ in Eq. (\[hatPi00\]), [*i.e.*]{}, when $P_\mu\, \tilde{\Pi}^{\mu \nu}(P)\, P_\nu = 0$. As discussed above, this condition defines the dispersion branch of the Nambu-Goldstone excitations [@zarembo]. Therefore, the singularity is tied to the existence of the Nambu-Goldstone excitations of the diquark condensate. Spectral densities ------------------ Let us now determine the spectral densities for longitudinal and transverse modes, defined by (cf. Eq. (45) of Ref. [@dhrselfenergy]) $$\rho^{00}_{88}(p_0, {\bf p}) \equiv \frac{1}{\pi}\, {\rm Im}\, \hat{\Delta}^{00}_{88} (p_0 + i \eta, {\bf p}) \,\,\,\, , \,\,\,\,\, \rho^t_{88}(p_0, {\bf p}) \equiv \frac{1}{\pi}\, {\rm Im}\, \Delta^t_{88} (p_0 + i \eta, {\bf p})\,\, .$$ The longitudinal and transverse spectral densities for gluons of color 8 are shown in Figs. \[fig3\] (c) and (d), for fixed gluon momentum $p = m_g/2$ and $m_g = 8\, \phi$. For comparison, the corresponding spectral densities for gluons of color 1 are shown in parts (a) and (b). Parts (a), (b), and (d) are identical to those of Fig. 6 of Ref. [@dhrselfenergy], part (c) is new and replaces Fig. 6 (c) of Ref. [@dhrselfenergy]. One observes a peak in the spectral density around $p_0 = m_g$. 
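To illustrate where such peaks come from, one can locate the plasmon branch by root-finding. The sketch below (our own illustration, assuming NumPy/SciPy, and using the closed-form HDL limit $\phi \to 0$ as a stand-in for $\Pi^t_{88}$, whose real part is only available numerically) solves $p_0^2 - p^2 - {\rm Re}\,\Pi^t_0 = 0$ and shows the transverse branch approaching $p_0 = m_g$ as $p \to 0$:

```python
import numpy as np
from scipy.optimize import brentq

# Our own illustration (not the paper's numerics): transverse plasmon
# branch in the HDL limit, which shares the behavior p0 -> m_g as p -> 0.
m_g = 1.0

def re_pi_t0(p0, p):
    # Re Pi_0^t(p0, p), quoted earlier in closed form
    x = p0 / p
    return 1.5 * m_g**2 * (x**2 + (1.0 - x**2) * 0.5 * x
                           * np.log(abs((x + 1.0) / (x - 1.0))))

def transverse_root(p):
    # root of p0^2 - p^2 - Re Pi^t(p0, p) = 0, bracketed above the light cone
    return brentq(lambda p0: p0**2 - p**2 - re_pi_t0(p0, p), 1.0, 2.0)

assert 1.0 < transverse_root(0.1) < 1.05          # close to m_g already
assert abs(transverse_root(0.01) - m_g) < abs(transverse_root(0.1) - m_g)
```

For small momenta the root follows the familiar HDL expansion $p_0^2 \simeq m_g^2 + \tfrac{6}{5}\,p^2$, so the branch tends to $p_0 = m_g$ from above.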
This peak corresponds to the ordinary longitudinal gluon mode (the plasmon) present in a dense (or hot) medium. Note that the longitudinal spectral density for gluons of color 8 vanishes at an energy somewhat smaller than $p_0 = m_g/4$. The reason is the singularity of the real part of the gluon self-energy seen in Figs. \[fig2\] (b) and (c). The location of this point is where $P_\mu \, \tilde{\Pi}^{\mu \nu}(P)\, P_\nu =0$, [*i.e.*]{}, on the dispersion branch of the Nambu-Goldstone excitations. Finally, we show in Fig. \[fig4\] the dispersion relations for all excitations, defined by the roots of \[disprel\] $$p^2 - {\rm Re}\, \hat{\Pi}^{00}_{88}(p_0, {\bf p}) = 0$$ for longitudinal gluons (cf. Eq. (47a) of Ref. [@dhrselfenergy]), and by the roots of $$p_0^2 - p^2 - {\rm Re}\, \Pi^t_{88}(p_0, {\bf p}) = 0$$ for transverse gluons (cf. Eq. (47b) of Ref. [@dhrselfenergy]). Let us mention that not all excitations found via Eqs. (\[disprel\]) correspond to truly stable quasiparticles, [*i.e.*]{}, the imaginary parts of the self-energies do not always vanish along the dispersion curves. Nevertheless, in that case Eqs. (\[disprel\]) can still be used to identify peaks in the spectral densities, which correspond to [*unstable*]{} modes (which decay with a rate proportional to the width of the peak). As long as the width of the peak (the decay rate of the quasiparticles) is small compared to its height, it makes sense to refer to these modes as quasiparticles. Fig. \[fig4\] corresponds to Fig. 5 of Ref. [@dhrselfenergy]. In fact, part (b) is identical in both figures. Fig. \[fig4\] (a) differs from Fig. 5 (a) of Ref. [@dhrselfenergy], reflecting our new and correct results for the longitudinal gluon self-energy. In Fig. 5 (a) of Ref. [@dhrselfenergy], the dispersion curve for the longitudinal gluon of color 8 was seen to diverge for small gluon momenta. In Ref. 
[@dhrselfenergy] it was argued that this behavior was due to neglecting the mesonic fluctuations of the diquark condensate. Indeed, properly accounting for these modes, we obtain a reasonable dispersion curve, approaching $p_0 = m_g$ as the momentum goes to zero. In Fig. \[fig4\] (a) we also show the dispersion branch for the Nambu-Goldstone excitations (dash-dotted). This is strictly speaking not given by a root of Eq. (\[disprel\]), but by the singularity of the real part of the longitudinal gluon self-energy. However, because this singularity involves a change of sign, a normal root-finding algorithm applied to Eq. (\[disprel\]) will also locate this singularity. As expected [@zarembo], the dispersion branch is linear, $$p_0 \simeq \frac{1}{\sqrt{3}}\, p \,\, ,$$ for small gluon momenta, and approaches the value $p_0 = 2\, \phi$ for $p \rightarrow \infty$. Conclusions {#IV} =========== In cold, dense quark matter with $N_f=2$ massless quark flavors, condensation of quark Cooper pairs spontaneously breaks the $SU(3)_c$ gauge symmetry to $SU(2)_c$. This results in five Nambu-Goldstone excitations which mix with some of the components of the gluon fields corresponding to the broken generators. We have shown how to decouple them by a particular choice of ’t Hooft gauge. The unphysical degrees of freedom in the gluon propagator can be eliminated by fixing the ’t Hooft gauge parameter $\lambda = 0$. In this way, we derived the propagator for transverse and longitudinal gluon modes in a two-flavor color superconductor accounting for the effect of the Nambu-Goldstone excitations. We then proceeded to explicitly compute the spectral properties of transverse and longitudinal gluons of adjoint color 8. The spectral density of the longitudinal mode now exhibits a well-behaved plasmon branch with the correct low-momentum limit $p_0 \rightarrow m_g$. 
Moreover, the spectral density vanishes for gluon energies and momenta corresponding to the dispersion relation for Nambu-Goldstone excitations. We have thus amended and corrected previous results presented in Ref. [@dhrselfenergy]. Our results pose one final question: using the correct expression for the longitudinal self-energy of adjoint colors $4,\ldots,8$, do the values of the Debye masses derived in Ref. [@dhr2f] change? The answer is “no”. In the limit $p_0 = 0,\, p \rightarrow 0$, application of Eqs. (120), (124), and (129) of Ref. [@dhr2f] to Eq. (\[hatPi00aa\]) yields $\hat{\Pi}^{00}_{aa}(0) \equiv \Pi^{00}_{aa}(0)$, and the results of Ref. [@dhr2f] for the Debye masses remain valid. Acknowledgments {#acknowledgments .unnumbered} =============== We thank G. Carter, D. Diakonov, and R.D. Pisarski for discussions. We thank R.D. Pisarski in particular for a critical reading of the manuscript and for the suggestion to use ’t Hooft gauge to decouple meson and gluon modes. D.H.R. thanks the Nuclear Theory groups at BNL and Columbia University for their hospitality during a visit where part of this work was done. He also gratefully acknowledges continuing access to the computing facilities of Columbia University’s Nuclear Theory group. I.A.S. would like to thank the members of the Institut für Theoretische Physik at the Johann Wolfgang Goethe-Universität for their hospitality, where part of this work was done. The work of I.A.S. was supported by the U.S. Department of Energy Grant No. DE-FG02-87ER40328. [99]{} D. Bailin and A. Love, Phys. Rept. [**107**]{}, 325 (1984). M. Alford, K. Rajagopal, and F. Wilczek, Phys. Lett. B [**422**]{}, 247 (1998); R. Rapp, T. Schäfer, E.V. Shuryak, and M. Velkovsky, Phys. Rev. Lett. [**81**]{}, 53 (1998). T. Schäfer and F. Wilczek, Phys. Rev. D [**60**]{}, 074014 (1999). M. Le Bellac, [*Thermal Field Theory*]{} (Cambridge, Cambridge University Press, 1996). G.W. Carter and D. Diakonov, Nucl. Phys. B [**582**]{}, 571 (2000). 
D.H. Rischke, Phys. Rev. D [**62**]{}, 034007 (2000). D.H. Rischke, Phys. Rev. D [**64**]{}, 094003 (2001). V.A. Miransky, I.A. Shovkovy, and L.C.R. Wijewardhana, Phys. Rev. D [**64**]{}, 096002 (2001). V.P. Gusynin and I.A. Shovkovy, Nucl. Phys. A [**700**]{}, 577 (2002). V.A. Miransky, I.A. Shovkovy, and L.C.R. Wijewardhana, Phys. Lett. B [**468**]{}, 270 (1999). R.D. Pisarski and D.H. Rischke, Phys. Rev. D [**61**]{}, 074017 (2000). T. Schäfer and F. Wilczek, Phys. Rev. D [**60**]{}, 114033 (1999). D.K. Hong, V.A. Miransky, I.A. Shovkovy, and L.C.R. Wijewardhana, Phys. Rev. D [**61**]{}, 056001 (2000) \[Erratum-ibid. D [**62**]{}, 059903 (2000)\]. R.D. Pisarski and D.H. Rischke, Phys. Rev. D [**60**]{}, 094013 (1999). R. Casalbuoni, Z. Duan, and F. Sannino, Phys. Rev. D [**62**]{}, 094004 (2000). K. Zarembo, Phys. Rev. D [**62**]{}, 054003 (2000). [^1]: On leave of absence from Bogolyubov Institute for Theoretical Physics, 252143 Kiev, Ukraine.
--- abstract: 'The Lie group method provides an efficient tool to solve nonlinear partial differential equations. This paper suggests a fractional partner for fractional partial differential equations. A space-time fractional diffusion equation is used as an example to illustrate the effectiveness of the Lie group method.' author: - | Guo-cheng Wu[^1]\ Modern Textile Institute, Donghua University, 1882 Yan’an Xilu Road,\ [Shanghai 200051, China]{}\ \[6pt\] Received 20 May 2010; accepted 13 July 2010 title: A Fractional Lie Group Method For Anomalous Diffusion Equations --- \[theorem\][**[Definition]{}**]{} Lie group method; Anomalous diffusion equation; Fractional characteristic method Introduction ============ In the last three decades, researchers have found fractional differential equations (FDEs) useful in various fields: rheology, quantitative biology, electrochemistry, scattering theory, diffusion, transport theory, probability potential theory and elasticity \[1\]; for details, see the monographs of Kilbas et al. \[2\], Kiryakova \[3\], Lakshmikantham and Vatsala \[4\], Miller and Ross \[5\], and Podlubny \[6\]. On the other hand, finding accurate and efficient methods for solving FDEs has been an active research undertaking. Since Sophus Lie’s work on group analysis more than 100 years ago, Lie group theory has become more and more pervasive in its influence on other mathematical disciplines \[7, 8\]. A question then naturally arises: is there a fractional Lie group method for fractional differential equations? Up to now, only a few works can be found in the literature. For example, Buckwar and Luchko derived scaling transformations \[9\] for the fractional diffusion equation in the Riemann-Liouville sense $$\frac{{\partial ^\alpha u(x,t)}}{{\partial t^\alpha }} = D\frac{{\partial ^2 \mathop u\limits^{} (x,t)}}{{\partial x^2 }},\;\;0 < \alpha ,\;0 < x{\rm{,}}\;0 < t,\;0 < D. \label{eq1} %(1)$$ Gazizov et al. 
found symmetry properties of fractional diffusion equations with the Caputo derivative \[10\] $$\frac{{\partial ^\alpha u(x,t)}}{{\partial t^\alpha }} = k\frac{{\partial (\mathop {k(u)u_x }\limits^{} (x,t))}}{{\partial x}},\;\;0 < \alpha ,\;0 < x{\rm{,}}\;0 < t,\;0 < k. \label{eq2} %(2)$$ Djordjevic and Atanackovic \[11\] obtained some similarity solutions for the time-fractional heat diffusion equation $$\frac{{\partial ^\alpha T(x,t)}}{{\partial t^\alpha }} = k\frac{{\partial^{2} (T(x,t))}}{{\partial x^{2}}},\;\;0 < \alpha ,\;0 < x{\rm{,}}\;0 < t. \label{eq3} %(3)$$ In this study, we investigate the anomalous diffusion equation \[12\] $$\frac{{\partial ^\alpha u(x,t)}}{{\partial t^\alpha }} = \frac{{\partial ^{2\beta } u(x,t)}}{{\partial x^{2\beta } }},0 < \alpha ,\;\beta \le 1,\;0 < x{\rm{,}}\;0 < t, \label{eq4} %(4)$$ with a fractional Lie group method, and derive its classification of solutions. Here the fractional derivative is in the modified Riemann-Liouville sense \[13\] and $\frac{{\partial ^{2\beta } u(x,t)}}{{\partial x^{2\beta } }}$ is defined by $\frac{{\partial ^\beta }}{{\partial x^\beta }}(\frac{{\partial ^\beta u(x,t)}}{{\partial x^\beta }}).$ Characteristic Method for Fractional Differential Equations =========================================================== Throughout this paper, we adopt the fractional derivative in the modified Riemann-Liouville sense \[13\]. Firstly, we introduce some properties of the fractional calculus that we will use in this study. \(I) Integration with respect to $(dx)^\alpha$ (Lemma **2.1** of \[14\]) $$_0 I_x^\alpha f(x) = \frac{1}{{\Gamma (\alpha )}}\int_0^x (x - \xi )^{\alpha - 1} f(\xi )d\xi = \frac{1}{{\Gamma (\alpha + 1)}}\int_0^x f(\xi )(d\xi )^\alpha ,0 < \alpha \le 1. \label{eq5} %(5)$$ \(II) Some other useful formulas $$f([x(t)])^{(\alpha )} = \frac{{df}}{{dx}}x^{(\alpha )} (t), ~\\ {}_0D_x^\alpha x^\beta = \frac{{\Gamma (1 + \beta )}}{{\Gamma (1 +\beta - \alpha )}}x^{\beta - \alpha } . 
\\ \label{eq6} %(6)$$ The properties of Jumarie’s derivative were summarized in \[13\]. The extension of Jumarie’s fractional derivative and integral to a variational approach in several variables was carried out by Almeida et al. \[15\]. A fractional variational iteration method has also been proposed for fractional differential equations \[16\]. It is well known that the method of characteristics has played a very important role in mathematical physics. Previously, the method of characteristics was used to solve initial value problems for general first-order equations. With the modified Riemann-Liouville derivative, Jumarie gave a Lagrange characteristic method \[17\]. We present a generalized fractional method of characteristics and use it to solve linear fractional partial differential equations. Consider the following first-order equation, $$a(x,t)\frac{{\partial u(x,t)}}{{\partial x}} + b(x,t)\frac{{\partial u(x,t)}}{{\partial t}} = c(x,t). \label{eq7} %(7)$$ The goal of the method of characteristics is to change coordinates from $(x,\;t)$ to a new coordinate system $(x_0 ,\;s)$ in which the PDE becomes an ordinary differential equation along certain curves in the $x - t$ plane. These curves are called the characteristic curves. More generally, we consider extending this method to linear space-time fractional differential equations $$a(x,t)\frac{{\partial ^{^\beta } u(x,t)}}{{\partial x^\beta }} + b(x,t)\frac{{\partial ^\alpha u(x,t)}}{{\partial t^\alpha }} = c(x,t),0 < \alpha ,\beta \le 1. \label{eq8} %(8)$$ With the fractional Taylor’s series in two variables \[13\] $$du = \frac{{\partial ^{^\beta } u(x,t)}}{{\Gamma (1 + \beta )\partial x^\beta }}(dx)^{^\beta } + \frac{{\partial ^\alpha u(x,t)}}{{\Gamma (1 + \alpha )\partial t^\alpha }}(dt)^\alpha ,\;\;0 < \alpha ,\;\beta \le 1. 
\label{eq9} %(9)$$ Similarly, we derive the generalized characteristic curves $$\frac{{du}}{{ds}} = c(x,t),$$ $$\frac{{(dx)^{^\beta } }}{{\Gamma (1 + \beta )ds}} = a(x,t),\label{eq10} %(10)$$ $$\frac{{(dt)^\alpha }}{{\Gamma (1 + \alpha )ds}} = b(x,t). \\ \\$$ Eqs. (10)-(12) reduce to Jumarie’s result in \[17\] when $\alpha = \beta $. As an example, we consider the fractional equation $$\frac{{x^\beta }}{{\Gamma (1 + \beta )}}\frac{{\partial ^\beta u(x,t)}}{{\partial x^\beta }} + \frac{{2t^\alpha }}{{\Gamma (1 + \alpha )}}\frac{{\partial ^\alpha u(x,t)}}{{\partial t^\alpha }} = 0,\;\;0 < \alpha ,\;\beta \le 1. \label{eq11} %(11)$$ We then have the fractional scaling transformation $$u = u(\frac{{x^{^{2\beta } } }}{{\Gamma ^2 (1 + \beta )}}/\frac{{2t^\alpha }}{{\Gamma (1 + \alpha )}}). \label{eq12} %(12)$$ Note that when $\alpha = \beta = 1{\rm{,}}$ as is well known, $\frac{{x^{^2 } }}{{2t}}$ is one invariant of the linear differential equation $$x\frac{{\partial u(x,t)}}{{\partial x}} + 2t\frac{{\partial u(x,t)}}{{\partial t}} = 0. \label{eq13} %(13)$$ Lie Group method for Fractional diffusion equation ================================================== With the proposed fractional method of characteristics, we can now consider a fractional Lie group method for the fractional diffusion equation, which generalizes the classical diffusion equation to super-diffusive flow processes. Such equations arise in continuous-time random walks, the modeling of anomalous diffusive and sub-diffusive systems, and the unification of diffusion and wave propagation phenomena \[18 - 23\]. 
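As a quick sanity check on the classical limit quoted above, the invariance of $u = f(\frac{x^2}{2t})$ under Eq. (13) can be verified symbolically. The following sympy sketch is illustrative only; the profile $f$ is left as an arbitrary differentiable function:

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
f = sp.Function('f')  # arbitrary differentiable profile

# Classical limit alpha = beta = 1: u = f(x**2/(2t)) should satisfy
# x*u_x + 2*t*u_t = 0, i.e. x**2/(2t) is an invariant of Eq. (13).
u = f(x**2 / (2 * t))
residual = x * sp.diff(u, x) + 2 * t * sp.diff(u, t)

print(sp.simplify(residual))  # -> 0
```

The two chain-rule terms carry coefficients $x^2/t$ and $-x^2/t$, so they cancel identically, confirming the invariant.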
We assume the one-parameter Lie group of transformations in $(x,\;t,\;u)$ given by $$\begin{array}{l} \frac{{\tilde x^{^\beta } }}{{\Gamma (1 + \beta )}} = \frac{{x^{^\beta } }}{{\Gamma (1 + \beta )}} + \varepsilon \xi (x,t,u) + O(\varepsilon ^2 ), \\ \frac{{\tilde t^{^\alpha } }}{{\Gamma (1 + \alpha )}} = \frac{{t^\alpha }}{{\Gamma (1 + \alpha )}} + \varepsilon \tau (x,t,u) + O(\varepsilon ^2 ),\\ \tilde u = u + \varepsilon \phi (x,t,u) + O(\varepsilon ^2 {\rm{),}} \\ \end{array} \label{eq14} %(14)$$ where $\varepsilon $ is the group parameter. We start from the set of fractional vector fields, instead of the integer-order ones used in \[9 - 11\], $$V = \xi (x,t,u)D^\beta _x + \tau (x,t,u)D^\alpha _t + \phi (x,t,u)D_u . \label{eq15} %(15)$$ The fractional second order prolongation $Pr^{(2\beta )} V$ of the infinitesimal generators can be represented as $$Pr^{(2\beta )} V = V + \phi ^{[t]} \frac{\partial }{{\partial (D_t ^\alpha u)}} + \phi ^{[x]} \frac{\partial }{{\partial (D_x ^\beta u)}} + \phi ^{[tt]} \frac{\partial }{{\partial (D_t ^{2\alpha } u)}} + \phi ^{[xx]} \frac{\partial }{{\partial (D_x ^{2\beta } u)}} + \phi ^{[xt]} \frac{\partial }{{\partial (D_x ^\beta D_t ^\alpha u)}}. \label{eq16} %(16)$$ As a result, we can have $$Pr^{(2\beta )} V(\Delta [u]) = 0, \label{eq17} %(17)$$ where $\Delta [u] = \frac{{\partial ^\alpha u(x,t)}}{{\partial t^\alpha }} - \frac{{\partial ^{2\beta } u(x,t)}}{{\partial x^{2\beta } }}.$ Eq. (19) can be rewritten in the form $$\left. {(\phi ^{[t]} - \phi ^{[xx]} )} \right|_{\Delta [u] = 0} = 0. 
\label{eq18} %(18)$$ The generalized prolongation vector fields are defined as $$\phi ^{[t]} = D_t ^\alpha \phi - (D_t ^\alpha \xi )D_x ^\beta u - (D_t ^\alpha \tau )D_t ^\alpha u,$$ $$\phi ^{[x]} = D_x ^\beta \phi - (D_x ^\beta \xi )D_x ^\beta u - (D_x ^\beta \tau )D_t ^\alpha u, \label{eq19} %(19)$$ $$\phi ^{[xx]} = D_x ^{2\beta } \phi - 2(D_x ^\beta \xi )D_x ^{2\beta } u - (D_x ^{2\beta } \xi )D_x ^\beta u - 2(D_x ^\beta \tau )D_x ^\beta D_t ^\alpha u - (D_x ^{2\beta } \tau )D_t ^\alpha u.$$ Substituting Eqs. (21)-(23) into Eq. (20) and setting the coefficients to zero, we obtain a set of linear fractional equations, from which we can derive $$\begin{array}{l} \xi (x,t,u) = c_1 + c_4 \frac{{x^\beta }}{{\Gamma (1 + \beta )}} + 2c_5 \frac{{t^\alpha }}{{\Gamma (1 + \alpha )}} + 4c_6 \frac{{x^\beta t^\alpha }}{{\Gamma (1 + \beta )\Gamma (1 + \alpha )}}, \\ \tau (x,t,u) = c_2 + 2c_4 \frac{{t^\alpha }}{{\Gamma (1 + \alpha )}} + 4c_6 \frac{{t^{2\alpha } }}{{\Gamma (1 + 2\alpha )}}, \\ \phi (x,t,u) = (c_3 - c_5 \frac{{x^\beta }}{{\Gamma (1 + \beta )}} - 2c_6 \frac{{t^\alpha }}{{\Gamma (1 + \alpha )}} - c_6 \frac{{x^{2\beta } }}{{\Gamma (1 + 2\beta )}})u + a(x,t), \\ \end{array}$$ where $c_i \;(i = 1,\ldots,6)$ are real constants and the function $a(x,t)$ satisfies $$\frac{{\partial ^\alpha \mathop a\limits^{} (x,t)}}{{\partial t^\alpha }} = \frac{{\partial ^{2\beta } \mathop a\limits^{} (x,t)}}{{\partial x^{2\beta } }},\;\;0 < \alpha \le 1,\;0 < \beta \le 1.$$ It is easy to check that the vector fields $\{ V_1 ,V_2 ,V_3 ,V_4 ,V_5 ,V_6 \} $ are closed under the Lie bracket. 
Thus, a basis for the Lie algebra is $\{ V_1 ,V_2 ,V_3 ,V_4 ,V_5 ,V_6 \}, $ which contains the four-dimensional sub-algebra $\{ V_1 ,V_2 ,V_3 ,V_4 \} $ $$\begin{array}{l} v_1 = \frac{{\partial ^\beta }}{{\partial x^\beta }},\;\;v_2 = \frac{{\partial ^\alpha }}{{\partial t^\alpha }},\;\;v_3 = \frac{\partial }{{\partial u}},\;\;v_4 = \frac{{x^\beta }}{{\Gamma (1 + \beta )}}\frac{{\partial ^\beta }}{{\partial x^\beta }} + \frac{{2t^\alpha }}{{\Gamma (1 + \alpha )}}\frac{{\partial ^\alpha }}{{\partial t^\alpha }}, \\ v_5 = \frac{{2t^\alpha }}{{\Gamma (1 + \alpha )}}\frac{{\partial ^\beta }}{{\partial x^\beta }} - \frac{{ux^\beta }}{{\Gamma (1 + \beta )}}\frac{\partial }{{\partial u}},\;\; \\ v_6 = \frac{{4t^\alpha }}{{\Gamma (1 + \alpha )}}\frac{{x^\beta }}{{\Gamma (1 + \beta )}}\frac{{\partial ^\beta }}{{\partial x^\beta }} + \frac{{4t^{2\alpha } }}{{\Gamma (1 + 2\alpha )}}\frac{{\partial ^\alpha }}{{\partial t^\alpha }} - (\frac{{x^{2\beta } }}{{\Gamma (1 + 2\beta )}} + \frac{{2t^\alpha }}{{\Gamma (1 + \alpha )}})u\frac{\partial }{{\partial u}}{\rm{,}} \\ \end{array}$$ and one infinite-dimensional sub-algebra $$v_7 = a(x,t)\frac{\partial }{{\partial u}}.$$ Assume $u = f(\frac{{x^\beta }}{{\Gamma (1 + \beta )}},\;\frac{{t^\alpha }}{{\Gamma (1 + \alpha )}})$ is an exact solution of Eq. (4). 
Then with the proposed fractional method of characteristics, solving the above symmetry equations, we can derive $$\begin{array}{l} u^{{\rm{(1)}}} = f(\frac{{x^\beta }}{{\Gamma (1 + \beta )}} - \varepsilon ,\frac{{t^\alpha }}{{\Gamma (1 + \alpha )}}){\rm{,}} \\ u^{{\rm{(2)}}} = f(\frac{{x^\beta }}{{\Gamma (1 + \beta )}},\frac{{t^\alpha }}{{\Gamma (1 + \alpha )}} - \varepsilon ){\rm{,}} \\ u^{{\rm{(3)}}} = e^\varepsilon f(\frac{{x^\beta }}{{\Gamma (1 + \beta )}},\;\frac{{t^\alpha }}{{\Gamma (1 + \alpha )}}){\rm{,}} \\ u^{{\rm{(4)}}} = f(\frac{{x^\beta }}{{\Gamma (1 + \beta )}}e^{-\varepsilon} ,\frac{{t^\alpha }}{{\Gamma (1 + \alpha )}}e^{ - 2\varepsilon } {\rm{),}} \\ u^{{\rm{(5)}}} = e^{\frac{{t^\alpha \varepsilon ^2 }}{{\Gamma (1 + \alpha )}} - \frac{{x^\beta \varepsilon }}{{\Gamma (1 + \beta )}}} f(\frac{{x^\beta }}{{\Gamma (1 + \beta )}} - 2\varepsilon \frac{{t^\alpha }}{{\Gamma (1 + \alpha )}},\frac{{t^\alpha }}{{\Gamma (1 + \alpha )}}){\rm{,}} \\ u^{{\rm{(6)}}} = \frac{1}{{\sqrt {1 + 4\varepsilon \frac{{t^\alpha }}{{\Gamma (1 + \alpha )}}} }}e^{\frac{{ - x^{2\beta } \varepsilon \Gamma (1 + \alpha )}}{{\Gamma (1 + 2\beta )\Gamma (1 + \alpha ) + 4\varepsilon t^\alpha \Gamma (1 + 2\beta )}}} \\ \times f(\frac{{\Gamma (1 + \alpha )x^\beta }}{{\Gamma (1 + \beta )\Gamma (1 + \alpha ) + 4\varepsilon \Gamma (1 + \alpha )x^\beta }},\frac{{t^\alpha }}{{\Gamma (1 + \beta ) + 4\varepsilon \Gamma (1 + \alpha )t^\alpha }}){\rm{,}} \\ \\ u^{{\rm{(7)}}} = f(\frac{{x^\beta }}{{\Gamma (1 + \beta )}},\frac{{t^\alpha }}{{\Gamma (1 + \alpha )}}) + \varepsilon a(x,t){\rm{,}} \\ \end{array}$$ which give the classification of solutions of Eq. (4). Take the solution $u^{{\rm{(5)}}} $ as an example, $$u^{{\rm{(5)}}}= e^{\frac{{t^\alpha \varepsilon ^2 }}{{\Gamma (1 + \alpha )}} - \frac{{x^{\beta} \varepsilon }}{{\Gamma (1 + \beta )}}} f(\frac{{x^\beta }}{{\Gamma (1 + \beta )}} - 2\varepsilon \frac{{t^\alpha }}{{\Gamma (1 + \alpha )}},~\frac{{t^\alpha }}{{\Gamma (1 + \alpha )}}). 
\label{eq20} %(20)$$ Assume $f(\frac{{x^\beta }}{{\Gamma (1 + \beta )}} - 2\varepsilon \frac{{t^\alpha }}{{\Gamma (1 + \alpha )}},~\frac{{t^\alpha }}{{\Gamma (1 + \alpha )}}) = c,~$which can be set as the initial value of Eq. (4). Now we can check that $u_1 ^{{\rm{(5)}}} = ce^{\frac{{t^\alpha \varepsilon ^2 }}{{\Gamma (1 + \alpha )}} - \frac{{x^\beta \varepsilon }}{{\Gamma (1 + \beta )}}} $ is one of the exact solutions. If we make $f(\frac{{x^\beta }}{{\Gamma (1 + \beta )}},\frac{{t^\alpha }}{{\Gamma (1 + \alpha )}}) = u_1 ^{{\rm{(5)}}} = ce^{\frac{{t^\alpha \varepsilon ^2 }}{{\Gamma (1 + \alpha )}} - \frac{{x^\beta \varepsilon }}{{\Gamma (1 + \beta )}}}$, we can derive a new iteration solution $u_2 ^{{\rm{(5)}}} $. As a result, by similar manipulations, we can obtain $u_{3} ^{{\rm{(5)}}}$…$u_n ^{{\rm{(5)}}} $, which are new exact solutions of Eq. (4). Conclusions =========== Fractional differential equations have attracted considerable attention due to their various applications in real physical problems. However, there is no systematic method to derive their exact solutions; that problem is partly solved in this paper. Another question may arise: can the Lie group method be extended to fractional differential equations of fractional order 0 $\thicksim$ 2? We will discuss such work in the future. [99]{} K.B. Oldham, J. Spanier, The Fractional Calculus, Academic Press, New York (1999). A.A. Kilbas, H.M. Srivastava, J.J. Trujillo, Theory and Applications of Fractional Differential Equations, Elsevier Science B.V, Amsterdam, 2006. V. Kiryakova, Generalized Fractional Calculus and Applications, Longman Scientific & Technical, Harlow, 1994, copublished in the United States with John Wiley & Sons, Inc., New York. V. Lakshmikantham, A.S. Vatsala, Basic theory of fractional differential equations, Nonlinear Anal. 69 (2008) 2677-2682. K.S. Miller, B. Ross, An Introduction to the Fractional Calculus and Differential Equations, John Wiley, New York, 1993. I. 
Podlubny, Fractional Differential Equations, Academic Press, San Diego, 1999. P.J. Olver, Applications of Lie Groups to Differential Equations, second ed., GTM 107, Springer, Berlin, 1993. G.W. Bluman, S.C. Anco, Symmetry and integration methods for differential equations, Appl. Math. Sci. Vol. 154, Springer, New York, 2002. E. Buckwar, Y. Luchko, Invariance of a partial differential equation of fractional order under the Lie group of scaling transformations, J. Math. Anal. Appl. 227 (1998) 81-97. R.K. Gazizov, A.A. Kasatkin, S.Y. Lukashchuk, Symmetry properties of fractional diffusion equations, Phys. Scr. (2009) 014016. V.D. Djordjevic, T.M. Atanackovic, Similarity solutions to nonlinear heat conduction and Burgers/Korteweg-deVries fractional equations, J. Comput. Appl. Math. 222 (2008) 701-714. H.G. Sun, W. Chen, H. Sheng, Y.Q. Chen, On mean square displacement behaviors of anomalous diffusions with variable and random orders, Phys. Lett. A 374 (2010) 906-910. G. Jumarie, Modified Riemann-Liouville derivative and fractional Taylor series of non-differentiable functions: further results, Comput. Math. Appl. 51 (2006) 1367-1376. G. Jumarie, Laplace’s transform of fractional order via the Mittag-Leffler function and modified Riemann-Liouville derivative, 22 (2009) 1659-1664. R. Almeida, A.B. Malinowska, D.F.M. Torres, A fractional calculus of variations for multiple integrals with application to vibrating string, J. Math. Phys. 51 (2010) 033503. G.C. Wu, E.W.M. Lee, Fractional variational iteration method and its application, Phys. Lett. A 374 (2010) 2506-2509. G. Jumarie, Lagrange characteristic method for solving a class of nonlinear partial differential equations of fractional order, Appl. Math. Lett. 19 (2006) 873-880. F. Mainardi, Fractional relaxation-oscillation and fractional diffusion-wave phenomena, Chaos. Soliton. Fract. 7 (1996) 1461-1477. F. Mainardi, The fundamental solutions for the fractional diffusion-wave equation, Appl. Math. Lett. 9 (1996) 23-28. 
O.P. Agrawal, Solution for a fractional diffusion-wave equation defined in a bounded domain, Nonlinear Dynam. 29 (2002) 145-155. K. Al-Khaled, S. Momani, An approximate solution for a fractional diffusion-wave equation using the decomposition method, Appl. Math. Comput. 165 (2005) 473-483. N. Ozdemir, D. Karadeniz, Fractional diffusion-wave problem in cylindrical coordinates, Phys. Lett. A 372 (2008) 5968-5972. S. Das, Analytical solution of a fractional diffusion equation by variational iteration method, Comput. Math. Appl. 57 (2009) 483-487. [^1]: Corresponding author, E-mail: wuguocheng2002@yahoo.com.cn. (G.C. Wu)
--- abstract: 'The off-axis location of the Advanced Camera for Surveys (ACS) is the chief (but not sole) cause of strong geometric distortion in all detectors: the Wide Field Camera (WFC), High Resolution Camera (HRC), and Solar Blind Camera (SBC). Dithered observations of rich star cluster fields are used to calibrate the distortion. We describe the observations obtained, the algorithms used to perform the calibrations and the accuracy achieved.' author: - 'G.R. Meurer$^1$, D. Lindler$^2$, J.P. Blakeslee$^1$, C. Cox$^3$, A.R. Martel$^1$, H.D. Tran$^1$, R.J. Bouwens$^4$, H.C. Ford$^1$, M. Clampin$^3$, G.F. Hartig$^3$, M. Sirianni$^1$, & G. de Marchi$^3$' title: Calibration of Geometric Distortion in the ACS Detectors --- Introduction ============ Images from the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) suffer from strong geometric distortion: the square pixels of its detectors project to trapezoids of varying area across the field of view. The tilted focal surface with respect to the chief ray is the primary source of distortion of all three ACS detectors. In addition, the HST Optical Telescope Assembly induces distortion, as do the ACS M2 and IM2 mirrors (which are designed to remove HST’s spherical aberration). The SBC’s optics include a photo-cathode and micro-channel plate which also induce distortion. Here we describe our method of calibrating the geometric distortion using dithered observations of star clusters. The distortion solutions we derived are given in the IDC tables delivered in Nov 2002, and currently implemented in the STScI CALACS pipeline. This paper is a more up-to-date summary of our results than that presented at the workshop. An expanded description of our procedures is given by Meurer et al. (2002). Method ====== [**Observations**]{}. 
The ACS SMOV geometric distortion campaign consisted of two HST observing programs: 9028, which targeted the core of 47 Tucanae (NGC 104) with the WFC and HRC, and 9027, which consisted of SBC observations of NGC 6681. Observations from programs 9011, 9018, 9019, 9024 and 9443 were used as additional sources of data, to check the results, and to constrain the absolute pointing of the telescope. The CCD exposures of 47 Tucanae were designed to give good detections of stars on the main sequence turn-off at $m_B = 17.5$ in each frame. This allows for a high density of stars with relatively short exposures. The F475W filter (Sloan g’) was used for the CCD observations so as to minimize the number of saturated red giant branch stars in the field. For the HRC two 60s exposures were taken at each pointing, while for the WFC, which has a larger time overhead, only one such exposure was obtained per pointing. Simulated images made prior to launch, as well as archival WFPC2 images from Gilliland et al. (2000), were used to check that crowding would not be an issue. For calibrating the distortion in the SBC we used exposures of NGC 6681 (300s - 450s), which was chosen for its relatively high density of UV emitters (hot horizontal branch stars). The pointing center was dithered around each star field. For the WFC and HRC pointings, the dither pattern was designed so that the offsets between all pairs of images adequately, and non-redundantly, sample all spatial scales from about 5 pixels to 3/4 the detector size. For the SBC pointings, a more regular pattern of offsets was used, augmented by a series of $\sim 5$ pixel offsets. [**Distortion model**]{}. 
The heart of the distortion model relates pixel position ($x,y$) to sky position using a polynomial transformation (Hack & Cox, 2000) given by: $$x_c = \sum_{m=0}^{k}\sum_{n=0}^{m} a_{m,n}(x - x_r)^n (y - y_r)^{m-n}\, , \hspace{0.5cm} y_c = \sum_{m=0}^{k}\sum_{n=0}^{m} b_{m,n}(x - x_r)^n (y - y_r)^{m-n}$$ Here $k$ is the order of the fit, $x_r,y_r$ is the reference pixel, taken to be the center of each detector, or WFC chip, and $x_c,y_c$ are undistorted image coordinates. The coefficients to the fits, $a_{m,n}$ and $b_{m,n}$, are free parameters. For the WFC, an offset is applied to get the two CCD chips on the same coordinate system: $$X' = x_c + \Delta{x}{\rm (chip\#)}\, , \hspace{0.5cm} Y' = y_c + \Delta{y}{\rm (chip\#)}.$$ $\Delta{x}{\rm (chip\#)},\Delta{y}{\rm (chip\#)}$ are 0,0 for WFC’s chip 1 (as indicated by the FITS CCDCHIP keyword) and correspond to the separation between chips 1 and 2 for chip 2. The chip 2 offsets are free parameters in our fit. $X',Y'$ correspond to tangential plane positions in arcseconds which we tie to the HST $V2, V3$ coordinate system. Next the positions are corrected for velocity aberration: $X = \gamma X'$, $Y = \gamma Y'$, where $$\gamma = \frac{1 + {\bf u} \cdot {\bf v} / c}{1 - (v/c)^2}.$$ Here [**u**]{} is the unit vector towards the target and [**v**]{} is the velocity vector of the telescope (heliocentric plus orbital). Neglect of the velocity aberration correction can result in misalignments on order of a pixel for WFC images taken six months apart for targets near the ecliptic. Finally, we must transform all frames to the same coordinate grid on the sky $X_{\rm sky}, Y_{\rm sky}$: $$X_{\rm sky} = \cos \Delta \theta_i X - \sin \Delta \theta_i Y + \Delta X_i\, , \hspace{0.5cm} Y_{\rm sky} = \sin \Delta \theta_i X + \cos \Delta \theta_i Y + \Delta Y_i$$ where the free parameters $\Delta X_i, \Delta Y_i, \Delta \theta_i$ are the position and rotation offsets of frame $i$. [**Calibration algorithm**]{}. 
We use the positions of stars observed multiple times in the dithered star fields to iteratively solve for the free parameters in the distortion solution: fit coefficients $a_{m,n}, b_{m,n}$; chip 2 offsets $\Delta x{\rm (chip\, 2)}, \Delta y{\rm (chip\, 2)}$ (WFC only); frame offsets $\Delta X_i, \Delta Y_i, \Delta \theta_i$; and tangential plane position $X_{\rm sky}, Y_{\rm sky}$ of each star used in the fit. The stars are selected by finding local maxima above a selected threshold. The centroid in a $7 \times 7$ box about the local maximum is compared to Gaussian fits to the $x, y$ profiles; if the two estimates of position differ by more than 0.25 pixels, the measurement is rejected as likely being affected by a cosmic ray hit or crowding. Further details of the fit algorithm can be found in Meurer et al. (2002). [**Low order terms**]{}. Originally only SMOV images taken with a single roll angle were used to define the distortion solutions. The solution using only these data is degenerate in the zeroth (absolute pointing) and linear terms (scale, skewness). So we used the largest commanded offsets with a given guide star pair to set the linear terms. However, comparison of corrected coordinates to astrometric positions showed that residual skewness in the solution remained. Hence, as of November 2002, the IDC tables for WFC and SBC are based on data from multiple roll angles. The overall plate scale is set by the largest commanded offset. For the HRC, the linear scale is set by matching HRC and WFC coordinates, since the same field was used in the SMOV observations. The zeroth order terms (position of the ACS apertures in the HST $V2,V3$ frame) were determined from observations of an astrometric field. 
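The polynomial mapping of Eqs. (1)-(3) is straightforward to express in code. Below is a minimal numpy sketch; the function names, the nested coefficient layout, and the example coefficient values are our own illustrative choices, not the calibrated IDC coefficients:

```python
import numpy as np

def distort_correct(x, y, a, b, x_r, y_r):
    """Order-k polynomial distortion model:
    x_c = sum_{m=0..k} sum_{n=0..m} a[m][n] (x-x_r)^n (y-y_r)^(m-n),
    and similarly for y_c with coefficients b[m][n]."""
    dx, dy = x - x_r, y - y_r
    k = len(a) - 1
    x_c = sum(a[m][n] * dx**n * dy**(m - n)
              for m in range(k + 1) for n in range(m + 1))
    y_c = sum(b[m][n] * dx**n * dy**(m - n)
              for m in range(k + 1) for n in range(m + 1))
    return x_c, y_c

def aberration_factor(u_hat, v, c=299792.458):
    """Velocity aberration gamma = (1 + u.v/c) / (1 - (v/c)^2),
    with u_hat the unit vector to the target and v in km/s."""
    u_hat, v = np.asarray(u_hat, float), np.asarray(v, float)
    return (1.0 + np.dot(u_hat, v) / c) / (1.0 - np.dot(v, v) / c**2)

# Identity linear model (k = 1): x_c = x - x_r, y_c = y - y_r
a = [[0.0], [0.0, 1.0]]   # a[1][1] multiplies (x - x_r)
b = [[0.0], [1.0, 0.0]]   # b[1][0] multiplies (y - y_r)
print(distort_correct(5.0, 7.0, a, b, 2.0, 3.0))  # -> (3.0, 4.0)
```

In the calibration described above the coefficients themselves are the free parameters of the fit, so in practice a routine like this sits inside the iterative solver rather than being applied with fixed values.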
Results =======

  Camera   chip   pixel size   Filter   Pointings    $N$      rms(x)       rms(y)      Notes
                  \[arcsec\]                                  \[pixels\]   \[pixels\]
  -------- ------ ------------ -------- ----------- -------- ------------ ------------ -------
  WFC      1      0.05         F475W    25           142289   0.042        0.045
  WFC      2      0.05         F475W    25           103453   0.035        0.037
  WFC      1      0.05         F775W    10            31652   0.050        0.056       2
  WFC      2      0.05         F775W    10            33834   0.041        0.048       2
  HRC             0.025        F475W    20            77433   0.027        0.026
  HRC             0.025        F775W    13            31515   0.026        0.043       3
  HRC             0.025        F220W    12            14715   0.112        0.108       3
  SBC             0.03         F125LP   34             1561   0.109        0.094
  -------- ------ ------------ -------- ----------- -------- ------------ ------------ -------

  : Summary of fit results[]{data-label="t:res"}

The distortion in all ACS detectors is highly non-linear as illustrated in Fig. \[f:nonlin\]. We find that a quartic fit ($k=4$) is adequate for characterizing the distortion to an accuracy much better than our requirement of 0.2 pixels over the entire field of view. Table \[t:res\] summarizes the rms of the fits to the various datasets. The WFC and HRC fits were all to F475W data as noted above. To check the wavelength dependence of the distortion we used data obtained with F775W (WFC and HRC) and F220W (HRC) from programs 9018 and 9019. We held the coefficients fixed and only fit the offsets in order to check whether a single distortion solution is sufficient for each detector. Table \[t:res\] shows that there is a marginal increase in the rms for the red data of the WFC, little or no increase in the fit rms for the red HRC data, but a significant increase in the rms using the UV data. An examination of the HRC F220W images reveals the most likely cause: the stellar PSF is elongated by $\sim 0.1''$. A similar elongation can also be seen in SBC PSFs. We attribute this to aberration in the optics of either the ACS M1 or M2 mirrors or the HST OTA (Hartig et al. 2002). The aberration amounts to $\sim 0.1$ waves at 1600Å, but is negligible at optical wavelengths, hence it is not apparent in optical HRC images. 
While it was expected that the same distortion solution would be applicable to all filters except the polarizers, recent work (by Tom Brown, STScI, and our team) has shown that at least one other optical filter (F814W) induces a significant plate scale change (factor of $\sim 4 \times 10^{-5}$). In the long term, the IDC tables will be selected by filter in the STScI CALACS pipeline. While a quartic solution is adequate for most purposes, binned residual maps (Fig. \[f:resid\]) show that there are significant coherent residuals in the WFC and HRC solutions. These have amplitudes up to $\sim 0.1$ pixels. The small-scale geometric distortion is the subject of the Anderson & King contribution to this proceedings. Hack, W., & Cox, C. 2000, ISR ACS 2000-11, STScI. Hartig, G. et al. 2002, in “Future EUV and UV Visible Space Astrophysics Missions and Instrumentation”, eds. J.C. Blades &  O.H. Siegmund, Proc. SPIE, Vol. 4854, in press \[4854-30\]. Gilliland, R.L. et al. 2000, ApJ, 545, L47. Meurer, G.R. et al. 2002, in “Future EUV and UV Visible Space Astrophysics Missions and Instrumentation”, eds. J.C. Blades &  O.H. Siegmund, Proc. SPIE, Vol. 4854, in press \[4854-30\].
--- abstract: 'We examine the 3-10 keV EPIC spectra of Mrk 205 and Mrk 509 to investigate their Fe K features. The most significant feature in the spectra of both objects is an emission line at 6.4 keV. The spectra can be adequately modelled with a power law and a relatively narrow ($\sigma < 0.2$ keV) Fe K${\alpha}$ emission line. Better fits are obtained when an additional Gaussian emission line, relativistic accretion-disk line, or Compton reflection from cold material, is added to the spectral model. We obtain similar goodness of fit for any of these three models, but the model including Compton reflection from cold material offers the simplest, physically self-consistent solution, because it only requires one reprocessing region. Thus the Fe K spectral features in Mrk 205 and Mrk 509 do not present strong evidence for reprocessing in the inner, relativistic parts of accretion disks.' author: - | M.J. Page$^{1}$, S.W. Davis$^{2}$, N.J. Salvi$^{1}$\ $^{1}$Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey, RH5 6NT, UK\ $^{2}$Department of Physics, University of California, Santa Barbara, CA 93106, USA\ nocite: - '[@tanaka95]' - '[@nandra97]' - '[@reynolds97]' - '[@reeves01a]' - '[@blustin02]' - '[@reeves01b]' - '[@antonucci93]' - '[@pounds01]' - '[@reeves01b]' - '[@pounds01]' - '[@reeves01b]' - '[@pounds01]' - '[@pounds01]' - '[@dickey90]' - '[@reeves01b]' - '[@fabian89]' - '[@magdziarz95]' - '[@reeves01b]' - '[@anders89]' - '[@antonucci93]' - '[@matt91]' - '[@george91]' - '[@shields95]' - '[@leahy93]' - '[@yaqoob01]' - '[@davidson79]' - '[@goad98]' - '[@wandel99]' - '[@george91]' - '[@pounds01]' - '[@pounds01]' - '[@reeves01b]' title: The origin of the Fe K features in Markarian 205 and Markarian 509 --- accretion, accretion disks – black hole physics – galaxies: Seyfert – Introduction {#sec:introduction} ============ X-ray observations probe the central regions of AGN. 
In the standard paradigm, this corresponds to the inner parts of an accretion disk around a supermassive black hole. Above the disk, a hot corona Compton upscatters optical-EUV photons to X-ray energies; some of this X-ray radiation is reprocessed in the surrounding material including the disk, giving rise to prominent Fe K$\alpha$ lines. The broad, distorted velocity profile of Fe K$\alpha$ emission, suggesting an accretion disk around a supermassive black hole, was first observed with [[*ASCA*]{}]{} in MCG –6-30-15 (Tanaka [et al. ]{} 1995). This profile, with a sharp blue wing and a broad red tail, is a remarkable probe of the strong gravity regime. Studies of other AGN with [[*ASCA*]{}]{} suggested that broad, low-ionisation Fe K$\alpha$ emission is common in AGN (Nandra [et al. ]{} 1997, Reynolds [et al. ]{} 1997), but it was not until the launch of [[*XMM-Newton*]{}]{} that the diversity of Fe line profiles could really be investigated. With the large increase in collecting area afforded by [[*XMM-Newton*]{}]{}, it was soon noticed that some luminous AGN showed different Fe line profiles to that of MCG –6-30-15 (e.g. Reeves [et al. ]{} 2001a, Blustin [et al. ]{} 2002). One particularly interesting example, the luminous Seyfert 1 galaxy Mrk 205, appeared to have a narrow, neutral Fe K$\alpha$ line at 6.4 keV, accompanied by a broad line from He-like Fe (Reeves [et al. ]{} 2001b). Reeves [et al. ]{} argued that the ionised Fe line originates in the inner parts of an accretion disk, while the neutral Fe K$\alpha$ line originates in a molecular torus, hypothesised to lie outside the broad line regions in AGN unification schemes (Antonucci 1993). A later observation of another Seyfert galaxy, Mrk 509, showed a very similar pair of narrow-neutral and broad-ionised Fe K$\alpha$ emission lines (Pounds [et al. ]{} 2001), demonstrating that this configuration of Fe K$\alpha$ profiles is not an isolated phenomenon in Mrk 205.
In this paper we revisit the Fe K features in Mrk 205 and Mrk 509 seen by [[*XMM-Newton*]{}]{}. By coadding the spectra from the three EPIC instruments we are able to maximise the signal to noise per bin while properly sampling the EPIC spectral resolution around Fe K. The paper is laid out as follows. In Section \[sec:observation\] we describe the observations and data reduction, and we describe the spectral fitting in Section \[sec:results\]. The results are discussed in Section \[sec:discussion\] and we present our conclusions in Section \[sec:conclusions\]. The Appendix contains a description of the method employed to coadd spectra from the different EPIC instruments. Observations and data reduction {#sec:observation} =============================== Mrk 205 was observed with [[*XMM-Newton*]{}]{} on the 7th May 2000, and these data were presented by Reeves [et al. ]{} (2001b). Several exposures were taken in both MOS and PN cameras, in full frame and large-window modes. Spectra of the source were extracted from circular regions of radius $\sim 50''$ and background spectra were obtained from nearby source-free regions. All valid event patterns (singles, doubles, and triples) were selected in MOS, and only single events in PN. The spectra were combined using the procedure outlined in Appendix A. Mrk 509 has been observed twice with [[*XMM-Newton*]{}]{}. The first observation took place on 25th November 2000 and the data were presented by Pounds [et al. ]{} (2001). The second observation was performed on the 20th April 2001. In both observations, MOS and PN cameras were operated in small window mode. Source spectra were taken from a circular region of $40'' - 50''$ radius, and background spectra were obtained from nearby regions free from bright sources. Single events were selected in MOS, and single and double events were selected in PN. 
The spectra were combined using the procedure outlined in the Appendix to produce one spectrum for each observation and one spectrum which is a combination of the two.

  --------- ------------------ ---------- ------------------
  Object    Date               Exposure   Count rate
                               (ks)       (count s$^{-1}$)
  Mrk 205   7 May 2000         49.0       4.9
  Mrk 509   25 November 2000   23.4       26.7
  Mrk 509   20 April 2001      30.6       38.3
  --------- ------------------ ---------- ------------------

  : [[*XMM-Newton*]{}]{} observations of Mrk 205 and Mrk 509[]{data-label="tab:observations"}

Results {#sec:results}
=======

The spectral fitting was performed with [XSPEC]{}. Only the rest-frame 3-10 keV energy range was used in the spectral fitting because we are primarily interested in the Fe K features. The broad emission lines reported by Reeves [et al. ]{} (2001b) and Pounds [et al. ]{} (2001) are only significant between 5 and 8 keV (see Fig. 4 of Pounds [et al. ]{} 2001), so the 3-10 keV energy range allows a good measurement of the continuum on either side of Fe K, while excluding the noisier and less-well calibrated data at higher energies and the complex spectrum found at lower energies. We included the small effect of absorption from the Galaxy as a component in all our spectral modelling (${N_{H}}= 2.9 \times 10^{20}$ cm$^{-2}$ towards Mrk 205 and ${N_{H}}= 4.1 \times 10^{20}$ cm$^{-2}$ towards Mrk 509, Dickey & Lockman 1990). The results of the spectral fits are given in Table \[tab:results\].

Mrk 205
-------

We began by fitting a power law model. The counts spectrum is shown in Fig. \[fig:mrk205\_allspec\], along with the power law model convolved with the instrument response. Like Reeves [et al. ]{} (2001b) we find this model is a poor fit, and there are significant residuals around 6.4 keV. We therefore added a Gaussian emission line, and obtained an acceptable fit, with a resolved line consistent with 6.4 keV neutral Fe K$\alpha$.
Although the fit was acceptable, residuals remained at $\sim 7$ keV, and so we tried further fits including one of three additional model components that might plausibly account for these residuals and improve the fit: a Gaussian emission line at $E>6.5$ keV, a relativistically-broadened accretion disk line (Fabian [et al. ]{} 1989; the “diskline” model in [XSPEC]{}), and Compton reflection from cold material (“pexrav” in [XSPEC]{}; Magdziarz and Zdziarski 1995). In the diskline model the energy of the line was constrained to lie between 6.7 and 6.9 keV, corresponding to He-like or H-like Fe (as proposed by Reeves [et al. ]{} 2001b). In the reflection model the inclination was fixed at 45$^{\circ}$, and we assumed Solar elemental abundances (Anders and Grevesse 1989). The addition of any of these three components resulted in a similar goodness of fit. The energies of the second Gaussian line and diskline components are found to be consistent with Fe XXVI, and in the case of the diskline the best fit was found for a line produced many $R_{G}$ from the black hole, giving it a narrow profile, similar to that of the Gaussian line. We have also tried more complex models, including two emission lines as well as reflection: the lowest value of $\chi^{2}/\nu=138/135$ was obtained when the second emission line is a Gaussian. However, according to the F-test, this is only a 1$\sigma$ improvement over the model including a single emission line and Compton reflection.

Mrk 509 first observation
-------------------------

As for Mrk 205, we began with a power law model and obtained a completely unacceptable fit with large residuals around 6.4 keV (see Fig. \[fig:mrk509\_1\_allspec\]). Addition of a Gaussian line at $\sim 6.4$ keV resulted in a much better, but still poor fit (rejected with $> 99.9\%$ confidence).
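The model comparisons in this section are quantified with the F-test for nested $\chi^{2}$ fits. As a minimal sketch (using illustrative, made-up $\chi^{2}/\nu$ values rather than our full fit results), the F statistic is the improvement in $\chi^{2}$ per additional free parameter, divided by the reduced $\chi^{2}$ of the more complex model:

```python
def f_statistic(chi2_simple, nu_simple, chi2_complex, nu_complex):
    """F statistic for comparing nested models fitted by chi-squared
    minimisation: the improvement in chi-squared per extra free parameter,
    divided by the reduced chi-squared of the more complex model."""
    delta_chi2 = chi2_simple - chi2_complex
    delta_nu = nu_simple - nu_complex  # number of additional free parameters
    return (delta_chi2 / delta_nu) / (chi2_complex / nu_complex)

# Illustrative (made-up) values: a simpler model with chi2/nu = 145/137
# against a more complex one with chi2/nu = 138/135 (two extra parameters).
F = f_statistic(145.0, 137, 138.0, 135)
print(round(F, 2))  # 3.42
```

The resulting F value is then compared against the F distribution for the appropriate numbers of degrees of freedom to obtain a significance.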
Adding a further Gaussian, a diskline or a reflection component improved the goodness of fit, which is marginally better when the third model component is a diskline rather than a reflection component or a second Gaussian line. However, whichever additional model component is included, the model is still unacceptable at $>99\%$ confidence. We have also tested a more complex model, including a second emission line as well as reflection, and obtain a best $\chi^{2}/\nu=182/138$ for a diskline; this is only marginally better (2$\sigma$ according to the F-test) than the fit with reflection but without the second emission line. Strong residuals at 8.3 and 9.8 keV together contribute $\sim 30$ to the $\chi^{2}$, and are therefore largely responsible for the poor model fit. We have no credible explanation for these residuals other than as statistical fluctuations.

Mrk 509 second observation
--------------------------

A power law model provided a poor fit to the spectrum (see Fig. \[fig:mrk509\_2\_allspec\]), and once again we found that the addition of a $\sim 6.4$ keV emission line resulted in a significantly improved (and quite acceptable) $\chi^{2}/\nu$=124/144. The addition of an extra Gaussian, diskline or reflection component each improved the fit further, resulting in a very good fit. However, the addition of a second emission line to the reflection model results in no further improvement in $\chi^{2}/\nu$.

Mrk 509 both observations combined
----------------------------------

The overall spectral shape of Mrk 509 is extremely similar in the two observations, and so to improve signal to noise we have coadded the data from both observations. A model containing only a power law is convincingly rejected, but a power law and a single Gaussian provide an acceptable fit to the data.
Adding an additional Gaussian, diskline or reflection component makes a significant improvement to the $\chi^{2}/\nu$ (at $>99.9\%$ significance according to the F-test), resulting in a good fit to the data. Finally, we have tried more complex models combining reflection with a second Gaussian or accretion-disc emission line. In this case the $\chi^{2}/\nu$ is poorer than for the model including reflection without a second emission line.

Discussion {#sec:discussion}
==========

The most significant spectral feature in all the 3-10 keV EPIC spectra of Mrk 205 and Mrk 509 is low-ionisation Fe K$\alpha$ emission at 6.4 keV. In both sources, improved fits are obtained when an additional emission component is included, peaking at slightly higher energy; Gaussian or relativistic emission lines or cold reflection are all plausible forms for this additional spectral component. For Mrk 509, the spectral modelling requires that the 6.4 keV line is broad, FWHM $> 5000$ km s$^{-1}$, unless this additional component is included. More complex models, combining a second emission line with cold reflection, do not produce significantly better fits for any of the spectra than models with cold reflection but without a second emission line. The 6.4 keV emission is a signature of reprocessing by cold material, but where does this reprocessing take place? Both galaxies show negligible intrinsic absorption in soft X–rays, and hence the material responsible for the 6.4 keV emission must lie outside the line of sight to the continuum source. Possible locations include the accretion disk, the molecular torus favoured by AGN unification schemes (Antonucci 1993), and the (optical) broad line clouds. The molecular torus and the accretion disk represent Compton thick targets, and reprocessing at these locations will result in a Compton reflection component with an edge at 7.1 keV as well as Fe K$\alpha$ line emission (Matt, Perola & Piro 1991, George and Fabian 1991).
However, the broad line clouds are expected to be Compton thin (Shields, Ferland & Peterson 1995) and therefore the Fe K$\alpha$ line emission will not be accompanied by significant Compton scattered continuum. Simulations by Leahy and Creighton (1993, see also Yaqoob [et al. ]{} 2001) show that if the broad-line clouds have column densities of $10^{23}$ cm$^{-2}$, they would need to cover 50% of the sky, as seen by the continuum source, to produce the Fe K$\alpha$ line of 50 eV equivalent width in Mrk 509. To produce the 100 eV equivalent width line seen in Mrk 205, the broad line clouds would need to surround $\sim 100\%$ of the central source. These covering fractions are much higher than the typical broad line region covering fractions deduced from the ultraviolet (10% – 25%, Davidson & Netzer 1979, Goad & Koratkar 1998), suggesting that the broad line regions are probably not responsible for the majority of the Fe K$\alpha$ photons. Furthermore in Mrk 509, even if the broad line region does produce the Fe K$\alpha$ line, something else must contribute an additional broad spectral component at slightly higher energy, or else the velocity width of the Fe K$\alpha$ line is inconsistent with the width of the optical lines, which have FWHM of only 2270 km s$^{-1}$ (Wandel, Peterson & Malkan 1999). On the other hand, if the Fe K$\alpha$ line originates in Compton thick material, the equivalent width of the line suggests that this material intercepts $\sim$ 30 – 60 per cent of the emitted radiation in Mrk 205, and about half as much in Mrk 509 (George & Fabian 1991). Therefore in both AGN the strength of the Fe K$\alpha$ line alone suggests that a significant Compton reflection component should be present. When reflection is included in the fit we find that $ > 55\%$ of the radiation is intercepted by the reflector in Mrk 205, and $ > 40\%$ in Mrk 509, slightly higher than inferred from the Fe K$\alpha$ line but not greatly so. 
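The covering-fraction argument above is a linear scaling of equivalent width with column density and sky covering. A minimal sketch of the arithmetic, using only the Leahy & Creighton calibration quoted in the text (a 50 eV line from clouds with $N_{H}=10^{23}$ cm$^{-2}$ covering 50% of the sky):

```python
# Calibration quoted from Leahy & Creighton (1993): clouds with
# N_H = 1e23 cm^-2 covering 50% of the sky yield a ~50 eV Fe K-alpha line.
EW_REF_EV = 50.0
NH_REF = 1.0e23
COVER_REF = 0.5

def covering_fraction(ew_ev, nh=1.0e23):
    """Sky covering fraction of Compton-thin clouds needed to produce an
    Fe K-alpha line of the given equivalent width, assuming the equivalent
    width scales linearly with both column density and covering fraction."""
    return COVER_REF * (ew_ev / EW_REF_EV) * (NH_REF / nh)

print(covering_fraction(50.0))   # Mrk 509: ~50% covering
print(covering_fraction(100.0))  # Mrk 205: ~100% covering
```

Both values far exceed the 10–25% covering fractions deduced from the ultraviolet, which is the basis of the argument against a broad-line-region origin.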
Thus a Compton thick reprocessor that intercepts a significant fraction of the primary X-rays can account for all the Fe K features in Mrk 205 or Mrk 509. This reprocessor could be a distant molecular torus, or the accretion disk itself. However, the velocity width of the Fe K$\alpha$ line implies that even if the reprocessor is the accretion disk, little of the reprocessed emission comes from the inner, relativistic, parts of the disk. So what evidence do we have for emission from highly ionized Fe, potentially in an accretion disk? For Mrk 205, the fits with a second emission line are as good as those including a reflection component. However, the second emission line is narrow when it is modelled as a Gaussian, and similarly the best fit accretion disk line is one in which the emission is dominated by material in the outer disk, resulting in a narrow line. Hence even if the spectrum of Mrk 205 [*does*]{} include Fe XXV or Fe XXVI, there is no evidence for relativistic broadening. For Mrk 509, the best-fit second emission line is broad, whether it is modelled as a Gaussian or as a disk line, and the steep emissivity index implies that most of the emission would come from the inner part of the disk. The disk would have an inclination of between 20 and 40 degrees, in agreement with the findings of Pounds [et al. ]{} (2001). However, the $\chi^{2}/\nu$ for the fits including an accretion disk line are only slightly better than the fits including Compton reflection for the first observation of Mrk 509, and are slightly poorer for the second observation; the combined spectrum is as well fit by either model. Hence although the features in the spectrum of Mrk 509 can be fit with a model including both a distant, cold, Compton-thin reprocessor and relativistically broadened emission from highly ionised Fe in an accretion disk, they can be fit equally well by reprocessing and Compton reflection from distant, cold material.
Thus in both Mrk 205 and Mrk 509, we find that the Fe K features can be explained by a single phase Compton thick cold reflector. While the presence of reflection from the highly ionized, high velocity, inner parts of an accretion disk is not ruled out by these data, it is not unambiguously detected.

Conclusions {#sec:conclusions}
===========

We have analysed the 3-10 keV [[*XMM-Newton*]{}]{} EPIC spectra of Mrk 205 and Mrk 509 to investigate the Fe K features in these objects. Acceptable fits can be obtained for models containing nothing more than a power law and an emission line at 6.4 keV, consistent with cold Fe K$\alpha$. However, better fits are obtained when an additional spectral component is included in the model, either Compton reflection from cold material or an emission line from ionised Fe; the goodness of fit is similar whichever component is added. In Mrk 205, there is no evidence for relativistic broadening of any emission line, but in Mrk 509 the best fit parameters for an ionised Fe emission line suggest that it might originate in the inner regions of an accretion disk. However, illumination of distant, cold material provides a simpler, self-consistent explanation of the spectral features than models including reflection from highly ionized, relativistic material. Therefore, contrary to Pounds [et al. ]{} (2001) and Reeves [et al. ]{} (2001b), we do not find strong evidence in either object for reprocessing in the highly ionised inner parts of an accretion disk.

Acknowledgments
===============

Based on observations obtained with [[*XMM-Newton*]{}]{}, an ESA science mission with instruments and contributions directly funded by ESA Member states and the USA (NASA).

References
==========

Anders E., Grevesse N., 1989, Geochimica et Cosmochimica Acta, 53, 197

Antonucci R., 1993, Annu. Rev. Astron. Astrophys., 31, 473

Blustin A.J., Branduardi-Raymont G., Behar E., Kaastra J.S., Kahn S.M., Page M.J., Sako M., Steenbrugge K.C., 2002, A&A, in press

Davidson K., Netzer H., 1979, Rev. Mod. Phys., 51, 715

Dickey J.M., Lockman F.J., 1990, Annu. Rev. Astron. Astrophys., 28, 215

Fabian A.C., Rees M.J., Stella L., White N.E., 1989, MNRAS, 238, 729

George I.M., Fabian A.C., 1991, MNRAS, 249, 352

Goad K., Koratkar A., 1998, ApJ, 495, 718

Leahy D.A., Creighton J., 1993, MNRAS, 263, 314

Magdziarz P., Zdziarski A.A., 1995, MNRAS, 273, 837

Matt G., Perola G.C., Piro L., 1991, A&A, 247, 25

Nandra K., George I.M., Mushotzky R.F., Turner T.J., Yaqoob T., 1997, ApJ, 477, 602

Pounds K., Reeves J., O’Brien P., Page K., Turner M., Nayakshin S., 2001, ApJ, 559, 181

Reeves J.N., Turner M.J.L., Bennie P.J., Pounds K.A., Short A., O’Brien P.T., Boller Th., Kuster M., Tiengo A., 2001, A&A, 365, L116

Reeves J.N., Turner M.J.L., Pounds K.A., O’Brien P.T., Boller Th., Ferrando P., Kendziorra E., Vercellone S., 2001, A&A, 365, L134

Reynolds C.S., 1997, MNRAS, 286, 513

Shields J.C., Ferland G.J., Peterson B.M., 1995, ApJ, 441, 507

Tanaka Y., Nandra K., Fabian A.C., Inoue H., Otani C., Dotani T., Hayashida K., Iwasawa K., Kii T., Kunieda H., Makino F., Matsuoka M., 1995, Nature, 375, 659

Wandel A., Peterson B.M., Malkan M.A., 1999, ApJ, 526, 579

Yaqoob T., George I.M., Nandra K., Turner T.J., Serlemitsos P.J., Mushotzky R.F., 2001, ApJ, 546, 759

Method to combine spectra from EPIC MOS and EPIC pn cameras {#sec:appendix}
===========================================================

The following method was designed with pulse height spectra and response matrices in standard ‘OGIP’[^1] format in mind. We use the term ‘response matrix’ to refer to the product of the effective area and the energy$\to$channel redistribution matrix. We wish to combine a number of individual source+background pulse-height spectra to a single pulse-height spectrum.
We label each individual spectrum with the index $s=1,2,...N_{spec}$, and we use the index $i=1,2,...N_{chan}(s)$ to denote the channels of each spectrum. Each channel is assigned a ‘nominal’ energy range of $ENOM_{min}(i)<E<ENOM_{max}(i)$. The number of photons in each channel $i$ of spectrum $s$ is $C(s,i)$. In general, each source+background spectrum $C(s,i)$ has a corresponding background spectrum $B(s,i)$, with a scaling factor $F(s)$ relating the geometric area and/or exposure times of the two spectra. Each of the original spectra has a corresponding response matrix, whose elements contain the effective area for a given channel and a given energy range. The element of the response matrix for spectrum $s$ corresponding to a particular channel $i$ and a particular energy range $E_{min}(j)<E<E_{max}(j)$ (where $j=1,2,...N_{range}(s)$) is denoted $R(s,i,j)$. Throughout we use capitalised indices when referring to the summed spectrum and response matrix; the index $s$ is omitted when referring to a combined spectrum or response matrix. We define the fractional overlap of channel $i$ of spectrum $s$ with channel $I$ of the combined spectrum to be
$$f(s,i,I) = \begin{cases}
\dfrac{\min[ENOM_{max}(s,i),ENOM_{max}(I)] - \max[ENOM_{min}(s,i), ENOM_{min}(I)]}{ENOM_{max}(s,i)-ENOM_{min}(s,i)} & {\rm if}\ ENOM_{min}(s,i) < ENOM_{max}(I)\ {\rm and}\ ENOM_{max}(s,i) > ENOM_{min}(I)\\[1ex]
0 & {\rm otherwise}
\end{cases}$$
The combined spectrum is constructed by summing all the counts from all the spectra in the nominal energy range of each channel. So,
$$C(I) = \sum_{s=1}^{N_{spec}} \sum_{i=1}^{N_{chan}(s)} f(s,i,I)\, C(s,i)$$
If the nominal energy ranges of the channels of the different spectra do not coincide with those of the combined spectrum, some randomization of photons will be required to ensure that the channels of the output spectrum contain integer numbers of counts.
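The fractional-overlap bookkeeping above is straightforward to implement. The following Python sketch (our illustration, not part of the original software; it omits the randomization step needed to keep integer counts) combines toy counts spectra onto a common output grid:

```python
def frac_overlap(lo_i, hi_i, lo_I, hi_I):
    """f(s,i,I): fraction of input channel [lo_i, hi_i] that falls inside
    output channel [lo_I, hi_I], relative to the input channel width."""
    if lo_i < hi_I and hi_i > lo_I:
        return (min(hi_i, hi_I) - max(lo_i, lo_I)) / (hi_i - lo_i)
    return 0.0

def combine_counts(spectra, out_edges):
    """Sum counts from several spectra onto a common channel grid.
    Each spectrum is (edges, counts); edges has one more entry than counts."""
    out = [0.0] * (len(out_edges) - 1)
    for edges, counts in spectra:
        for i, c in enumerate(counts):
            for I in range(len(out)):
                out[I] += frac_overlap(edges[i], edges[i + 1],
                                       out_edges[I], out_edges[I + 1]) * c
    return out

# Hypothetical toy grids: two single-channel spectra mapped onto two
# output channels covering 0-1 and 1-2 (arbitrary energy units).
spec_a = ([0.0, 2.0], [10.0])   # 10 counts spread over 0-2
spec_b = ([1.0, 2.0], [4.0])    # 4 counts in 1-2
print(combine_counts([spec_a, spec_b], [0.0, 1.0, 2.0]))  # [5.0, 9.0]
```

Here the 10 counts of the wide channel are shared equally between the two output channels, and the 4 counts of the narrow channel land entirely in the second.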
We combine the individual background spectra into a single background spectrum with a single scaling factor $F$. The following scheme produces a background photon spectrum with the signal to noise ratio propagated from the individual background spectra. $$B(I) = \frac{1}{F} \sum_{s=1}^{N_{spec}} F(s) \sum_{i=1}^{N_{chan}(s)} f(s,i,I)\, B(s,i)$$ where $$F=\frac { \sum_{s=1}^{N_{spec}} F^{2}(s) \sum_{i=1}^{N_{chan}(s)} B(s,i) } { \sum_{s=1}^{N_{spec}} F(s) \sum_{i=1}^{N_{chan}(s)} B(s,i) }$$ The response matrices are easily combined without any complicated weighting, provided that the pulse height spectra are all realisations of a single spectrum (e.g. if they are from observations of the same source at the same time but by different instruments). If the individual pulse height spectra are realisations of [*intrinsically different*]{} spectra, then the response matrix combination described here will not generally be appropriate. However if the spectra differ only in intensity then a simple scaling factor can be used to weight the contributions of the different response matrices to the final spectrum. For each energy range of the combined response matrix, the elements corresponding to each channel are combined in the same way as the original spectra. This is more complicated if the energy ranges differ between the response matrices. In this case we use a weighted average response for a given energy range.
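Assuming for simplicity that the background spectra already share a common channel grid (so the overlap factors are trivial), the background combination and its single scaling factor $F$ can be sketched as:

```python
def combine_backgrounds(backgrounds):
    """Combine background spectra B(s,i), defined on a common channel grid,
    into one background spectrum with a single scaling factor F.
    `backgrounds` is a list of (F_s, counts) pairs.
    F is a weighted mean of the F(s), weighted by F(s) * total counts."""
    num = sum(F_s ** 2 * sum(counts) for F_s, counts in backgrounds)
    den = sum(F_s * sum(counts) for F_s, counts in backgrounds)
    F = num / den
    n_chan = len(backgrounds[0][1])
    combined = [sum(F_s * counts[i] for F_s, counts in backgrounds) / F
                for i in range(n_chan)]
    return F, combined

# Hypothetical example: two backgrounds with scaling factors 0.1 and 0.2.
F, B = combine_backgrounds([(0.1, [60.0, 40.0]), (0.2, [30.0, 20.0])])
print(round(F, 6))     # 0.15
print(round(B[0], 6))  # 80.0
```

By construction, $F \cdot B(I)$ reproduces the total scaled background $\sum_s F(s) B(s,I)$ in each channel.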
We define the fractional overlap of energy range $j$ of spectrum $s$ with the energy range $J$ of the output response matrix as follows:
$$g(s,j,J) = \begin{cases}
\dfrac{\min[E_{max}(s,j),E_{max}(J)]-\max[E_{min}(s,j), E_{min}(J)]}{E_{max}(s,j)-E_{min}(s,j)} & {\rm if}\ E_{min}(s,j) < E_{max}(J)\ {\rm and}\ E_{max}(s,j) > E_{min}(J)\\[1ex]
0 & {\rm otherwise}
\end{cases}$$
The combined response matrix is then constructed according to:
$$R(I,J) = \sum_{s=1}^{N_{spec}} \sum_{i=1}^{N_{chan}(s)} f(s,i,I)\, \frac{ \sum_{j=1}^{N_{range}(s)} g(s,j,J)\, R(s,i,j) }{ \sum_{j=1}^{N_{range}(s)} g(s,j,J) }$$

[^1]: http://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip\_92\_007/ogip\_92\_007.html
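The energy-range weighting can be sketched in the same style. This toy function (our illustration) performs the inner, $g$-weighted sum over $j$ for a single response-matrix row, i.e. for one fixed channel:

```python
def overlap_frac(lo_a, hi_a, lo_b, hi_b):
    """g(s,j,J): fraction of [lo_a, hi_a] overlapping [lo_b, hi_b],
    relative to the width of [lo_a, hi_a]."""
    if lo_a < hi_b and hi_a > lo_b:
        return (min(hi_a, hi_b) - max(lo_a, lo_b)) / (hi_a - lo_a)
    return 0.0

def combine_energy_ranges(row, en_edges, out_en_edges):
    """Regrid one response-matrix row (fixed channel) from its native energy
    grid onto the output energy grid, as a g-weighted average per output bin."""
    out = []
    for J in range(len(out_en_edges) - 1):
        wsum = rsum = 0.0
        for j, r in enumerate(row):
            g = overlap_frac(en_edges[j], en_edges[j + 1],
                             out_en_edges[J], out_en_edges[J + 1])
            wsum += g
            rsum += g * r
        out.append(rsum / wsum if wsum > 0 else 0.0)
    return out

# Hypothetical: two 0.5-wide native energy bins averaged into one 1.0-wide
# output bin; both contribute with full weight, giving the mean response.
print(combine_energy_ranges([2.0, 4.0], [0.0, 0.5, 1.0], [0.0, 1.0]))  # [3.0]
```

The full $R(I,J)$ combination then applies the channel-overlap factors $f(s,i,I)$ on top of this, exactly as in the counts-spectrum case.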
Introduction {#sec:introduction}
============

Among all four-dimensional quantum field theories lies a unique example singled out for its remarkable symmetry and mathematical structure as well as its key role in the AdS/CFT correspondence. This is maximally supersymmetric ($\mathcal{N}\!=\!4$) Yang-Mills theory (SYM) in the planar limit [@ArkaniHamed:2008gz]. It has been the subject of great interest over recent years, and the source of many remarkable discoveries that may extend to much more general quantum field theories. These features include a connection to Grassmannian geometry [@ArkaniHamed:2009dn; @ArkaniHamed:2009sx; @ArkaniHamed:2009dg; @ArkaniHamed:2012nw; @ArkaniHamed:book], extra simplicity for planar theories’ loop integrands [@ArkaniHamed:2010gh; @Bourjaily:2011hi; @Bourjaily:2015jna], the existence of all-loop recursion relations [@ArkaniHamed:2010kv], and the existence of unanticipated symmetries [@Drummond:2007cf; @Drummond:2008vq; @Brandhuber:2008pf; @Drummond:2009fd] and related dualities between observables in the theory [@Alday:2007hr; @Drummond:2007aua; @Brandhuber:2007yx; @Alday:2010zy; @Eden:2010zz; @Mason:2010yk; @CaronHuot:2010ek; @Eden:2011yp; @Adamo:2011dq; @Eden:2011ku]. Of these, the duality between scattering amplitudes and correlation functions will play a fundamental role throughout this work. Much of this progress has been fueled through concrete theoretical data: heroic efforts of computation are made to determine observables (with more states, and at higher orders of perturbation); and this data leads to the discovery of new patterns and structures that allow these efforts to be extended even further. This virtuous cycle—even when applied merely to the ‘simplest’ quantum field theory—has taught us a great deal about the structure of field theory in general, and represents an extremely fruitful way to improve our ability to make predictions for experiments.
In this paper, we greatly extend the reach of this theoretical data by computing a particular observable in this simple theory to [*ten*]{} loops—mere months after eight loops was first determined. This is made possible through the use of powerful new [*graphical*]{} rules described in this work. The observable in question is the four-point correlation function among scalars—the simplest operator that receives quantum corrections in planar SYM. This correlation function is closely related to the four-particle scattering amplitude, as reviewed below. But the information contained in this single function is vastly more general: it contains information about all scattering amplitudes in the theory—including those involving more external states (at lower loop-orders). As such, our determination of the four-point correlator at ten loops immediately provides information about the five-point amplitude at nine loops, the six-point amplitude at eight loops, etc. [@Ambrosio:2013pba]. Before we review this correspondence and describe the rules used to obtain the ten loop correlator, it is worth taking a moment to reflect on the history of our knowledge about it. Considered as an amplitude, it has been the subject of much interest for a long time. The tree-level amplitude was first expressed in supersymmetric form by Nair in . It was computed using unitarity to two loops in 1997 [@Bern:1997nh] (see also [@Anastasiou:2003kj]), to three loops in 2005 [@Bern:2005iz], to five loops in 2007—first at four loops [@Bern:2006ew], and five quickly thereafter [@Bern:2007ct]—and to six loops around 2009 [@Bern:2012di] (although published later). The extension to seven loops required significant new technology. This came from the discovery of the soft-collinear bootstrap in 2011 [@Bourjaily:2011hi]. 
Although not known at the time, the soft-collinear bootstrap method (as described in ) would have failed beyond seven loops; but luckily, the missing ingredient would be supplied by the duality between amplitudes and correlation functions discovered in [@Eden:2010zz; @Alday:2010zy] and elaborated in [@Eden:2010ce; @Eden:2011yp; @Eden:2011ku; @Adamo:2011dq]. The determination of the four-point correlator in planar SYM followed a somewhat less linear trajectory. One and two loops were obtained soon after (and motivated by) the AdS/CFT correspondence between 1998 and 2000 [@GonzalezRey:1998tk; @Eden:1998hh; @Eden:1999kh; @Eden:2000mv; @Bianchi:2000hn]. But despite a great deal of effort by a number of groups, the three loop result had to wait over 10 years until 2011—at which time the four, five, and six loop results were found in quick succession [@Eden:2011we; @Eden:2012tu; @Ambrosio:2013pba; @Drummond:2013nda]; seven loops was reached in 2013 [@Ambrosio:2013pba]. The breakthrough for the correlator, enabling this rapid development, was the discovery of a hidden symmetry [@Eden:2011we; @Eden:2012tu]. On the amplitude side, the extension of the above methods to eight loops also required the exploitation of this symmetry via the duality between amplitudes and correlators. This hidden symmetry (reviewed below) greatly simplifies the work required to extend the soft-collinear bootstrap, making it possible to determine the eight loop functions in 2015 [@Bourjaily:2015bpz]. While the eight loop amplitude and correlator were determined (the ‘hard way’), using just the soft-collinear bootstrap and hidden symmetry, we had already started exploring alternative methods to find these functions, which seemed quite promising. These were mentioned in the conclusions of —the details of which we describe in this note.
This new approach, based not on algebraic relations but graphical ones, has allowed for a watershed of new theoretical data similar to that of 2007: within a few short months, we were able to fully determine both the nine and ten loop correlation functions. The reason for this great advance—the (computational) advantages of graphical rules—will be discussed at the end of this introduction. Our work is organized as follows. In we review the representation of amplitudes and correlation functions, and the duality between them. This will include a summary of the notation and conventions used throughout this paper, and also a description of the way that the terms involved are represented both algebraically and graphically. We elaborate on how the plane embedding of the terms that contribute to the correlator (viewed as graphs) allow for the direct extraction of amplitudes at corresponding (and lower) loop-orders—including amplitudes involving more than four external states—in . The three graphical rules sufficient to fix all possible contributions (at least through ten loops) are described in . We will refer to these as the triangle, square, and pentagon rules. The triangle and the square rules relate terms at different loop orders, while the pentagon rule relates terms at a given loop-order. While the square rule is merely the graphical manifestation of the so-called ‘rung’ rule [@Bern:1997nh; @Eden:2012tu] (generalized by the hidden symmetry of the correlator), the triangle and pentagon rules are new. We provide illustrations of each and proofs of their validity in . These rules have varying levels of strength. While the square rule is well-known to be insufficient to determine the amplitude or correlator at all orders (and the same is true for the pentagon rule), we expect that the combination of the square and triangle rules [does]{} prove sufficient—but only after their consequences at higher loop-orders are also taken into account.
(For example, the pentagon rule was not required for us to determine the nine loop correlator—but the constraints that follow from the square and triangle rules at ten loops were necessary.) In we describe the varying strengths of each of these rules, and summarize the expressions found for the correlation function and amplitude through ten loops in . The explicit expressions for the ten loop correlator and amplitude have been made available at <http://goo.gl/JH0yEc>. Details on how this data can be obtained and the functionality provided (as part of a bare-bones [Mathematica]{} package) are described in . Before we begin, however, it seems appropriate to first describe what accounts for the advance—from eight to ten loops—in such a short interval of time. This turns out to be entirely a consequence of the computational power of working with graphical objects over algebraic expressions. The superiority of a graphical framework may not be manifest to all readers, and so it is worth describing why this is the case—and why a direct extension of the soft-collinear bootstrap beyond eight loops (implemented algebraically) does not seem within the reach of existing resources. #### Why [Graphical]{} Rules?  \ It is worth taking a moment to describe the incredible advantages of [*graphical*]{} methods over analytic or algebraic ones. The integrands of planar amplitudes or correlators can only meaningfully be defined if the labels of the internal loop momenta are fully symmetrized. Only then do they become well-defined, rational functions. But this means that, considered as algebraic functions, even [*evaluation*]{} of an integrand requires summing over all the permuted relabelings of the loop momenta (not to mention any cyclic or dihedral symmetrization of the legs that is also required). 
Thus, any analysis that makes use of evaluation will be rendered computationally intractable beyond some loop-order by the simple factorial growth in the time required by symmetrized evaluation. This is the case for the soft-collinear bootstrap as implemented in . At eight loops, the system of equations required to find the coefficients is a relatively straightforward problem in linear algebra; and solving this system of equations is well within the limits of a typical laptop computer. However, [*setting up*]{} this linear algebra problem requires the evaluation of many terms—each at a sufficient number of points in loop-momentum space. And even with considerable ingenuity (and access to dozens of CPUs), these evaluations required more than two weeks to complete. Extending this method to nine loops would cost an additional factor of 9 from the combinatorics, and also a factor of 15 from the growth in the number of unknowns. This seems well beyond the reach of present-day computational resources. However, when the terms involved in the representation of an amplitude or correlator are considered more abstractly as [*graphs*]{}, the symmetrization required by evaluation becomes irrelevant: relabeling the vertices of a graph clearly leaves the [*graph*]{} unchanged. And it turns out that graphs can be compared with remarkable efficiency. Indeed, [Mathematica]{} has built-in (and impressive) functionality for checking if two graphs are isomorphic (providing all isomorphisms that may exist). This means that relations among terms, when expressed as identities among graphs, can be implemented well beyond the limits faced for any method requiring evaluation. We do not yet know how the soft-collinear bootstrap can be translated into a graphical rule. And this prevents its extension beyond eight loops—at least at any time in the near future. 
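To make the contrast concrete, here is a minimal Python sketch (our own illustration, not tied to any code used in this work) of label-independent graph comparison by brute-force canonicalization. The canonical form is computed once per graph, after which comparisons are trivial; a production implementation would of course use a proper graph-isomorphism algorithm rather than a factorial scan, but even this toy version shows why no symmetrized [*evaluation*]{} over relabelings is ever needed:

```python
from itertools import permutations

def canonical_form(edges, n):
    """Label-independent canonical form of an n-vertex graph: the
    lexicographically smallest sorted edge list over all relabelings.
    (Brute force -- adequate for the tiny graphs used here.)"""
    best = None
    for perm in permutations(range(n)):
        relabeled = tuple(sorted(tuple(sorted((perm[a], perm[b])))
                                 for a, b in edges))
        if best is None or relabeled < best:
            best = relabeled
    return best

# Two relabelings of the same 5-cycle...
g1 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
g2 = [(0, 2), (2, 4), (4, 1), (1, 3), (3, 0)]
# ...and a genuinely different graph (a 4-cycle with a pendant edge).
g3 = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4)]

print(canonical_form(g1, 5) == canonical_form(g2, 5))  # True: same graph
print(canonical_form(g1, 5) == canonical_form(g3, 5))  # False
```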
However, the graphical rules we describe here prove sufficient to uniquely fix the amplitude and correlator through at least ten loops, and reproduce the eight loop answer in minutes rather than weeks. The extension of these ideas—perhaps amended by a broader set of analogous rules—to higher loops seems plausible using existing computational resources. Details of what challenges we expect in going to higher orders will be described in the conclusions. Review of Amplitude/Correlator Duality {#sec:review_of_duality} ====================================== Let us briefly review the functional forms of the four-particle amplitude and correlator in planar maximally supersymmetric ($\mathcal{N}\!=\!4$) Yang-Mills theory (SYM), the duality that exists between these observables, and how each can be represented analytically as well as graphically at each loop-order. This will serve as a casual review for readers already familiar with the subject; but for those less familiar, we will take care to be explicit about the (many, often implicit) conventions. The most fundamental objects of interest in any conformal field theory are gauge-invariant operators and their correlation functions. Perhaps the simplest operator in planar SYM is $\mathcal{O}(x)\!\equiv\!\mathrm{Tr}({\varphi}(x)^2)$, where ${\varphi}$ is one of the six scalars of the theory and the trace is taken over gauge group indices (in the adjoint representation). This is a very special operator: it is related by (dual) superconformal symmetry to both the stress-energy tensor and the on-shell Lagrangian, is dual to supergravity states on AdS$_5$, is protected from renormalization, and is annihilated by half of the supercharges of the theory. Moreover, its two- and three-point correlation functions are protected from perturbative corrections. 
The four-point correlator involving $\mathcal{O}(x)$, [$$\mathcal{G}_4(x_1,x_2,x_3,x_4)\equiv\langle\mathcal{O}(x_1)\overline{\mathcal{O}}(x_2)\mathcal{O}(x_3)\overline{\mathcal{O}}(x_4)\rangle,\label{definition_of_correlator}\vspace{-0.5pt}$$]{} is therefore the first non-trivial observable of interest in the theory. This correlator, computed perturbatively in loop-order and divided by the tree-level correlator is related to the four-particle amplitude (also divided by the tree) in a simple way [@Alday:2010zy; @Eden:2010zz]: [$$\lim_{\substack{\text{4-point}\\\text{light-like}}}\left(\frac{\mathcal{G}_4^{}(x_1,x_2,x_3,x_4)}{\mathcal{G}_4^{(0)}\!(x_1,x_2,x_3,x_4)}\right)=\mathcal{A}^{}_4(x_1,x_2,x_3,x_4)^2,\label{correlator_amplitude_relation}\vspace{-0.5pt}$$]{} where the amplitude is represented in dual-momentum coordinates, , and the light-like limit corresponds to taking the four (otherwise generic) points $x_a\!\in\!\mathbb{R}^{3,1}$ to be light-like separated: defining $x_{ab}\!\equiv\!x_b{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}x_a$, this corresponds to the limit where ${x_{1\hspace{0.5pt}2}^2}\!=\!{x_{2\hspace{0.5pt}3}^2}\!=\!{x_{3\hspace{0.5pt}4}^2}\!=\!{x_{1\hspace{0.5pt}4}^2}\!=\!0$. Importantly, while the correlator is generally finite upon integration, the limit taken in (\[correlator\_amplitude\_relation\]) is divergent; however, the correspondence exists at the level of the loop [*integrand*]{}—both of which can be uniquely defined in any (planar) quantum field theory upon symmetrization in (dual) loop-momentum space. As a loop integrand, both sides of the identity (\[correlator\_amplitude\_relation\]) are rational functions in $(4{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}\ell)$ points in $x$-space—to be integrated over the $\ell$ additional points, which we will (suggestively) denote as $x_{4+1},\ldots,x_{4+\ell}$. 
While the external points $x_1,\ldots,x_4$ would seem to stand on rather different footing relative to the loop momenta, it was noticed in that this distinction disappears completely if one considers instead the function (appropriate for the component of the supercorrelator in (\[definition\_of\_correlator\])), [$$\mathcal{F}^{(\ell)}(x_1,\ldots,x_4,x_5,\ldots,x_{4+\ell})\equiv\frac{1}{2}\left(\frac{G_4^{(\ell)}(x_1,x_2,x_3,x_4)}{G_4^{(0)}\!(x_1,x_2,x_3,x_4)}\right)/\xi^{(4)},\label{definition_of_f}\vspace{-0.5pt}$$]{} where $\xi^{(4)}$ is defined to be ${x_{1\hspace{0.5pt}2}^2}{x_{2\hspace{0.5pt}3}^2}{x_{3\hspace{0.5pt}4}^2}{x_{1\hspace{0.5pt}4}^2}({x_{1\hspace{0.5pt}3}^2}{x_{2\hspace{0.5pt}4}^2})^2$. As the attentive reader may infer, we will later have occasion to generalize this—yielding $\xi^{(4)}$ as a particular instance of, [$$\xi^{(n)}\equiv\prod_{a=1}^n{x_{a\hspace{0.5pt}a+1}^2}{x_{a\hspace{0.5pt}a+2}^2},\label{definition_of_general_xi}\vspace{-0.5pt}$$]{} where cyclic ordering on $n$ points $x_a$ is understood (as well as the symmetry ). With this slight modification, it was discovered in that the function $\mathcal{F}^{(\ell)}$ is fully [*permutation invariant in its arguments*]{}. This hidden symmetry is quite remarkable, and is responsible for a dramatic simplification in the representation of both the amplitude and the correlator. Because of the close connection between $\mathcal{F}^{(\ell)}$ and the correlation function defined via (\[definition\_of\_f\]), we will frequently refer to $\mathcal{F}^{(\ell)}$ as ‘the $\ell$ loop correlation function’ throughout the rest of this work; we hope this slight abuse of language will not cause the reader any confusion. 
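The factor content of $\xi^{(n)}$ in (\[definition\_of\_general\_xi\]) is easy to tabulate mechanically. The following Python sketch (purely our own illustration) lists the index pairs $(a,b)$ of the factors ${x_{a\hspace{0.5pt}b}^2}$, with indices cyclic mod $n$; for $n\!=\!4$ it reproduces the stated form ${x_{1\hspace{0.5pt}2}^2}{x_{2\hspace{0.5pt}3}^2}{x_{3\hspace{0.5pt}4}^2}{x_{1\hspace{0.5pt}4}^2}({x_{1\hspace{0.5pt}3}^2}{x_{2\hspace{0.5pt}4}^2})^2$, the squared factors appearing with multiplicity two:

```python
from collections import Counter

def xi_factors(n):
    """Multiset of index pairs (a,b) for the factors x_ab^2 in xi^(n):
    for each a = 1..n, the factors x_{a,a+1}^2 and x_{a,a+2}^2,
    with indices understood cyclically mod n (1-based labels)."""
    pairs = []
    for a in range(1, n + 1):
        for step in (1, 2):
            b = (a - 1 + step) % n + 1
            pairs.append(tuple(sorted((a, b))))
    return Counter(pairs)

# For n = 4: x_12^2 x_23^2 x_34^2 x_14^2 appear once; x_13^2, x_24^2 twice.
print(xi_factors(4))
```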
$f$-Graphs: Their Analytic and Graphical Representations {#subsec:fgraphs_and_conventions} -------------------------------------------------------- Considering the full symmetry of $\mathcal{F}^{(\ell)}$ among its $(4{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}\ell)$ arguments, we are led to think of the possible contributions more as graphs than algebraic expressions. Conformality requires that any such contribution must be weight $\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}4$ in each of its arguments; locality ensures that only factors of the form ${x_{a\hspace{0.5pt}b}^2}$ can appear in the denominator; analyticity requires that there are at most single poles in these factors (for the amplitude; for the correlator, this follows from an analysis of OPE limits); and finally, planarity informs us that these factors must form a plane graph. The denominator of any possible contribution, therefore, can be encoded as a plane graph with edges $a\!\leftrightarrow\!b$ for each factor ${x_{a\hspace{0.5pt}b}^2}$. (Because ${x_{a\hspace{0.5pt}b}^2}\!\!=\!{x_{b\hspace{0.5pt}a}^2}$, these graphs are naturally [*undirected*]{}.) We are therefore interested in plane graphs involving $(4{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}\ell)$ points, with valency at least 4 at each vertex. Excess conformal weight from vertices with higher valency can be absorbed by factors in the numerator. Conveniently, it is not hard to enumerate all such plane graphs—one can use the program [CaGe]{} [@CaGe], for example. Decorating each of these plane graphs with all inequivalent numerators capable of rendering the net conformal weight of every vertex to be $\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}4$ results in the space of so-called ‘$f$-graphs’. The enumeration of the possible $f$-graph contributions that result from this exercise (through eleven loop-order) is given in . Also in the Table, we have listed the number of (graph-inequivalent) planar, (dual-)conformally invariant (‘DCI’) integrands that exist. 
(The way in which these contributions to the four-particle amplitude are obtainable from each $f$-graph is described below.) $\hspace{1.5pt}\begin{array}{|@{$\,$}c@{$\,$}|@{$\,$}r@{$\,$}|@{$\,$}r@{$\,$}|@{$\,$}r@{$\,$}|@{$\,$}r@{$\,$}|}\multicolumn{1}{@{$\,$}c@{$\,$}}{\begin{array}{@{}l@{}}\\[-4pt]\text{$\ell\,$}\end{array}}&\multicolumn{1}{@{$\,$}c@{$\,$}}{\!\begin{array}{@{}c@{}}\text{number of}\\[-4pt]\text{plane graphs}\end{array}}\,&\multicolumn{1}{@{$\,$}c@{$\,$}}{\begin{array}{@{}c@{}}\text{number of graphs}\\[-4pt]\text{admitting decoration}\end{array}}\,&\multicolumn{1}{@{$\,$}c@{$\,$}}{\begin{array}{@{}c@{}}\text{number of decorated}\\[-4pt]\text{plane graphs ($f$-graphs)}\end{array}}\,&\multicolumn{1}{@{$\,$}c@{$\,$}}{\begin{array}{@{}c@{}}\text{number of planar}\\[-4pt]\text{DCI integrands}\end{array}}\,\\[-0pt]\hline1&0&0&0&1\\\hline2&1&1&1&1\\[-0pt]\hline3&1&1&1&2\\\hline4&4&3&3&8\\\hline5&14&7&7&34\\\hline6&69&31&36&284\\\hline7&446&164&220&3,\!239\\\hline8&3,\!763&1,\!432&2,\!709&52,\!033\\\hline9&34,\!662&13,\!972&43,\!017&1,\!025,\!970\\\hline10&342,\!832&153,\!252&900,\!145&24,\!081,\!425\\\hline11&3,\!483,\!075&1,\!727,\!655&22,\!097,\!035&651,\!278,\!237\\\hline\end{array}$ (To be clear, the Table counts the number of [*plane*]{} graphs—that is, graphs with a fixed plane embedding. The distinction here is only relevant for graphs that are not 3-vertex connected—which are the only planar graphs that admit multiple plane embeddings. We have found that no such graphs contribute to the amplitude or correlator through ten loops—and we strongly expect their absence can be proven. However, because the graphical rules we describe are sensitive to the plane embedding, we have been careful about this distinction in our analysis—without presumptions on their irrelevance.) 
When representing an $f$-graph graphically, we use solid lines to represent every factor in the denominator, and dashed lines (with multiplicity) to indicate the factors that appear in the numerator. For example, the possible $f$-graphs through four loops are as follows: [$$\begin{array}{rc@{$\;\;\;\;\;$}rc@{$\;\;\;\;\;$}rc}\\[-40pt]f^{(1)}_1\equiv&{\text{\makebox[75pt][c]{$\hspace{-150pt}\displaystyle{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{one_loop_f_graph_1}}}\hspace{-150pt}$}}}&f^{(2)}_1\equiv&{\text{\makebox[75pt][c]{$\hspace{-150pt}\displaystyle{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{two_loop_f_graph_1}}}\hspace{-150pt}$}}}&f^{(3)}_1\equiv&{\text{\makebox[75pt][c]{$\hspace{-150pt}\displaystyle{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{three_loop_f_graph_1}}}\hspace{-150pt}$}}}\\[-26pt]f^{(4)}_1\equiv&{\text{\makebox[75pt][c]{$\hspace{-150pt}\displaystyle{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{four_loop_f_graph_1}}}\hspace{-150pt}$}}}&f^{(4)}_2\equiv&{\text{\makebox[75pt][c]{$\hspace{-150pt}\displaystyle{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{four_loop_f_graph_2}}}\hspace{-150pt}$}}}&f^{(4)}_3\equiv&{\text{\makebox[75pt][c]{$\hspace{-150pt}\displaystyle{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{four_loop_f_graph_3}}}\hspace{-150pt}$}}}\\[-35pt]\end{array}\vspace{20pt}\label{one_through_four_loop_f_graphs}\vspace{-0.5pt}$$]{} In terms of these, the loop-level correlators $\mathcal{F}^{(\ell)}$ would be expanded according to: [$$\mathcal{F}^{(1)}=f^{(1)}_1,\quad \mathcal{F}^{(2)}=f^{(2)}_1,\quad \mathcal{F}^{(3)}=f^{(3)}_1,\quad \mathcal{F}^{(4)}=f^{(4)}_1+f^{(4)}_2-f^{(4)}_3.\label{correlators_through_four_loops}\vspace{-0.5pt}$$]{} (Notice that $f^{(1)}_1$ in (\[one\_through\_four\_loop\_f\_graphs\]) is not planar; this is the only exception to the rule; however, it does lead to planar contributions to $\mathcal{G}^{(1)}_4$ and $\mathcal{A}_4^{(1)}$ after multiplication by $\xi^{(4)}$.) 
In general, we can always express the $\ell$ loop correlator $\mathcal{F}^{(\ell)}$ in terms of the $f^{(\ell)}_i$ according to, [$$\mathcal{F}^{(\ell)}\equiv\sum_{i}c^{\ell}_i\,f^{(\ell)}_i\,,\label{general_correlator_expansion}\vspace{-5pt}\vspace{-0.5pt}$$]{} where the coefficients $c_i^{\ell}$ (indexed by the complete set of $f$-graphs at $\ell$ loops) are rational numbers—to be determined using principles such as those described below. At eleven loops, for example, there will be $22,\!097,\!035$ coefficients $c_i^{11}$ that must be determined (see ). Analytically, these graphs correspond to the product of factors ${x_{a\hspace{0.5pt}b}^2}$ in the denominator for each solid line in the figure, and factors ${x_{a\hspace{0.5pt}b}^2}$ in the numerator for each dashed line in the figure. This requires, of course, a choice of the labels for the vertices of the graph. For example, [$$\hspace{-85.5pt}{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{four_loop_f_graph_2_with_labels}}}\hspace{-8.5pt}\equiv\!\!\frac{{x_{1\hspace{0.5pt}6}^2}{x_{3\hspace{0.5pt}7}^2}}{{x_{1\hspace{0.5pt}2}^2}{x_{1\hspace{0.5pt}3}^2}{x_{1\hspace{0.5pt}4}^2}{x_{1\hspace{0.5pt}5}^2}{x_{1\hspace{0.5pt}7}^2}{x_{2\hspace{0.5pt}3}^2}{x_{2\hspace{0.5pt}7}^2}{x_{2\hspace{0.5pt}8}^2}{x_{3\hspace{0.5pt}4}^2}{x_{3\hspace{0.5pt}6}^2}{x_{3\hspace{0.5pt}8}^2}{x_{4\hspace{0.5pt}5}^2}{x_{4\hspace{0.5pt}6}^2}{x_{5\hspace{0.5pt}6}^2}{x_{5\hspace{0.5pt}7}^2}{x_{6\hspace{0.5pt}7}^2}{x_{6\hspace{0.5pt}8}^2}{x_{7\hspace{0.5pt}8}^2}}\!.\hspace{-50pt}\vspace{-2pt}\vspace{-0.5pt}$$]{} But any other choice of labels would have corresponded to the same graph, and so we must sum over all the (distinct) relabelings of the function. Of the $8!$ such relabelings, many leave the corresponding function unchanged—resulting (for this example) in 8 copies of each function. 
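The conformal-weight condition is easy to automate, and the labeled example above provides a concrete check: reading off the denominator and numerator edges from the expression, every vertex must carry denominator degree minus numerator degree equal to 4 (i.e. net weight $\!-4$). A minimal Python sketch (purely illustrative):

```python
from collections import Counter

# Denominator and numerator edges of the labeled four loop f-graph above.
denominator = [(1,2),(1,3),(1,4),(1,5),(1,7),(2,3),(2,7),(2,8),(3,4),(3,6),
               (3,8),(4,5),(4,6),(5,6),(5,7),(6,7),(6,8),(7,8)]
numerator = [(1,6),(3,7)]

def net_weights(denominator, numerator, n):
    """Each denominator factor x_ab^2 contributes one unit of conformal
    weight to vertices a and b; each numerator factor cancels one unit.
    An f-graph must have (den. degree - num. degree) = 4 at every vertex."""
    deg = Counter()
    for a, b in denominator:
        deg[a] += 1; deg[b] += 1
    for a, b in numerator:
        deg[a] -= 1; deg[b] -= 1
    return {v: deg[v] for v in range(1, n + 1)}

print(net_weights(denominator, numerator, 8))  # every vertex: 4
```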
Thus, had we chosen to naïvely sum over all permutations of labels, we would over-count each graph, requiring division by a compensatory ‘symmetry factor’ of 8 in the analytic expression contributing to the amplitude or correlation function. (This symmetry factor is easily computed as the size of the automorphism group of the graph.) However, we prefer not to include such symmetry factors in our expressions, which is why we write the coefficient of this graph in (\[correlators\_through\_four\_loops\]) as ‘$\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}\!$1’ rather than ‘$\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}\!$1/8’. And so, to be perhaps overly explicit, we should be clear that this will always be our convention. Contributions to the amplitude or correlator, when converted from graphs to analytic expressions, should be symmetrized and summed; but we will always (implicitly) consider the summation to include only the [*distinct*]{} terms that result from symmetrization. Hence, no (compensatory) symmetry factors will appear in our coefficients. Had we instead used the convention where $f$-graphs’ analytic expressions should be generated by summing over [*all*]{} terms generated by $\mathfrak{S}_{4+\ell}$, the coefficients of the four loop correlator, for example, would have been $\{\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}1/8,\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}1/24,\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}1/16\}$ instead of $\{\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}1,\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}1,\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}1\}$ as written in (\[correlators\_through\_four\_loops\]). 
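The symmetry factors just discussed are simple to compute by brute force for small graphs: the factor is the number of relabelings that map the (denominator) edge set to itself, i.e. the size of the automorphism group. As an illustrative Python sketch we use the complete graph on five vertices (the denominator of the one loop $f$-graph), for which every one of the $5!=120$ relabelings is an automorphism:

```python
from itertools import combinations, permutations

def symmetry_factor(edges, n):
    """Size of the automorphism group of an n-vertex graph: the number
    of vertex relabelings that leave the edge set unchanged."""
    edge_set = {tuple(sorted(e)) for e in edges}
    count = 0
    for perm in permutations(range(n)):
        if {tuple(sorted((perm[a], perm[b]))) for a, b in edges} == edge_set:
            count += 1
    return count

# Complete graph on 5 vertices: all relabelings preserve the edge set.
k5 = list(combinations(range(5), 2))
print(symmetry_factor(k5, 5))  # 120
```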
Four-Particle Amplitude Extraction via Light-Like Limits Along Faces {#subsec:amplitude_extraction} -------------------------------------------------------------------- When the correlation function $\mathcal{F}^{(\ell)}$ is expanded in terms of plane graphs, it is very simple to extract the $\ell$ loop scattering amplitude through the relation (\[correlator\_amplitude\_relation\]). To be clear, upon expanding the square of the amplitude in powers of the coupling (and dividing by the tree amplitude), we find that: [$$\lim_{\substack{\text{4-point}\\\text{light-like}}}\!\!\Big(\xi^{(4)}\mathcal{F}^{(\ell)}\Big)=\frac{1}{2}\big((\mathcal{A}_4)^2\big)^{(\ell)}=\left(\mathcal{A}_{4}^{(\ell)}+\mathcal{A}_4^{(\ell-1)}\mathcal{A}_4^{(1)}+\mathcal{A}_{4}^{(\ell-2)}\mathcal{A}_4^{(2)}+\ldots\right).\label{f_to_4pt_amp_map_with_series_expansion}\vspace{-0.5pt}$$]{} Before we describe how each term in this expansion can be extracted from the contributions to $\mathcal{F}^{(\ell)}$, let us first discuss which terms survive the light-like limit. Recall from equation (\[definition\_of\_general\_xi\]) that $\xi^{(4)}$ is proportional to ${x_{1\hspace{0.5pt}2}^2}{x_{2\hspace{0.5pt}3}^2}{x_{3\hspace{0.5pt}4}^2}{x_{1\hspace{0.5pt}4}^2}$—each factor of which vanishes in the light-like limit. Because $\xi^{(4)}$ identifies four specific points $x_a$, while $\mathcal{F}^{(\ell)}$ is a permutation-invariant sum of terms, it is clear that these four points can be arbitrarily chosen among the $(4{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}\ell)$ vertices of any $f$-graph; and thus the light-like limit will be non-vanishing iff the graph contains an edge connecting each of the pairs of vertices: $1\!\leftrightarrow\!2$, $2\!\leftrightarrow\!3$, $3\!\leftrightarrow\!4$, $1\!\leftrightarrow\!4$. Thus, terms that survive the light-like limit are those corresponding to a 4-cycle of the (denominator of the) graph. 
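The combinatorial content of this condition is straightforward to encode: a term survives the light-like limit precisely when its denominator graph contains a 4-cycle on which the external points can be placed. A small Python sketch (again purely illustrative) enumerating the 4-cycles of an arbitrary edge list:

```python
from itertools import combinations

def four_cycles(edges):
    """All 4-cycles of a graph: vertex quadruples ordered so that
    consecutive vertices (cyclically) are joined by edges.  A term
    survives the light-like limit iff such a cycle exists on which
    to place the four external points."""
    edge_set = {tuple(sorted(e)) for e in edges}
    def has(a, b):
        return tuple(sorted((a, b))) in edge_set
    cycles = set()
    vertices = sorted({v for e in edges for v in e})
    for a, b, c, d in combinations(vertices, 4):
        # the three inequivalent cyclic orderings of four chosen vertices
        for w, x, y, z in [(a, b, c, d), (a, b, d, c), (a, c, b, d)]:
            if has(w, x) and has(x, y) and has(y, z) and has(z, w):
                cycles.add((w, x, y, z))
    return cycles

# In the complete graph on 5 vertices, every 4-subset yields 3 cycles:
k5 = list(combinations(range(1, 6), 2))
print(len(four_cycles(k5)))  # 15
```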
Any $n$-cycle of a plane graph divides it into an ‘interior’ and ‘exterior’ according to the plane embedding (viewed on a sphere). And this partition exactly corresponds to that required by the products of amplitudes appearing in (\[f\_to\_4pt\_amp\_map\_with\_series\_expansion\]). We can illustrate this partitioning with the following example of a ten loop $f$-graph (ignoring any factors that appear in the numerator): [$${\raisebox{-54.75pt}{\ \includegraphics[scale=1]{ten_loop_cycles_1}}}\qquad{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{ten_loop_cycles_2}}}\qquad{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{ten_loop_cycles_3}}}\label{example_cycles}\vspace{-0.5pt}$$]{} These three 4-cycles would lead to contributions to $\mathcal{A}_4^{(10)}$, $\mathcal{A}_4^{(9)}\mathcal{A}_4^{(1)}$, and $\mathcal{A}_4^{(5)}\mathcal{A}_4^{(5)}$, respectively. Notice that we have colored the vertices in each of the examples above according to how they are partitioned by the cycle indicated. The fact that the $\ell$ loop correlator $\mathcal{F}^{(\ell)}$ contains within it complete information about lower loops will prove extremely useful to us in the next section. For example, the square (or ‘rung’) rule follows immediately from the requirement that the $\mathcal{A}_4^{(\ell-1)}\mathcal{A}_4^{(1)}$ term in the expansion (\[f\_to\_4pt\_amp\_map\_with\_series\_expansion\]) is correctly reproduced from the representation of $\mathcal{F}^{(\ell)}$ in terms of $f$-graphs. The leading term in (\[f\_to\_4pt\_amp\_map\_with\_series\_expansion\]) is arguably the most interesting. As illustrated above, these contributions arise from any 4-cycle of an $f$-graph encompassing no internal vertices. Such cycles correspond to [*faces*]{} of the graph—either a single square face, or two triangular faces which share an edge. This leads to a direct projection from $f$-graphs into planar ‘amplitude’ graphs that are manifestly dual conformally invariant (‘DCI’). 
Interestingly, the graphs that result from taking the light-like limit along each face of the graph can appear surprisingly different. Consider for example the following five loop $f$-graph, which has four non-isomorphic faces, resulting in four rather different DCI integrands: [$${\text{\makebox[0pt][c]{$\hspace{-150pt}\displaystyle\hspace{-225pt}{\text{\makebox[0pt][r]{${\raisebox{-54.75pt}{\ \includegraphics[scale=1]{five_loop_f_graph_with_faces}}}$}}}{\text{\makebox[0pt][l]{$\hspace{-15pt}{\raisebox{-2.25pt}{\scalebox{1.75}{$\Rightarrow$}}}\!\left\{\!\rule{0pt}{40pt}\right.\hspace{-12.5pt}{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{five_loop_planar_projection_2_v2}}}\hspace{-10pt}{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{five_loop_planar_projection_3_v2}}}\hspace{-12.5pt}{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{five_loop_planar_projection_4_v2}}}\hspace{-12.5pt}{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{five_loop_planar_projection_1_v2}}}\hspace{-5pt}\left.\rule{0pt}{40pt}\right\}$}}}\hspace{-150pt}$}}}\label{five_loop_planar_projections_example}\vspace{-20pt}\vspace{-0.5pt}$$]{} Here, we have drawn these graphs in both momentum space and dual-momentum space—with black lines indicating ordinary Feynman propagators (which may be more familiar to many readers), and grey lines indicating the dual graphs (more directly related to the $f$-graph). We have not drawn any dashed lines to indicate factors of $s\!\equiv\!{x_{1\hspace{0.5pt}3}^2}$ or $t\!\equiv\!{x_{2\hspace{0.5pt}4}^2}$ in numerators that would be uniquely fixed by dual conformal invariance. Notice that one of the faces—the orange one—corresponds to the ‘outer’ four-cycle of the graph as drawn; also, the external points of each planar integrand have been colored according to the face involved. 
As one further illustration of this correspondence, consider the following seven loop $f$-graph, which similarly leads to four inequivalent DCI integrands (drawn in momentum space): [$${\text{\makebox[0pt][c]{$\hspace{-150pt}\displaystyle\hspace{-235pt}{\text{\makebox[0pt][r]{${\raisebox{-34.75pt}{\ \includegraphics[scale=1]{seven_loop_f_graph_with_faces}}}$}}}{\text{\makebox[0pt][l]{$\hspace{-0pt}{\raisebox{-2.25pt}{\scalebox{1.75}{$\Rightarrow$}}}\left\{\rule{0pt}{40pt}\right.\hspace{-10pt}{\raisebox{-34.75pt}{\ \includegraphics[scale=1]{seven_loop_planar_projection_3}}}\hspace{-5pt}{\raisebox{-34.75pt}{\ \includegraphics[scale=1]{seven_loop_planar_projection_1}}}\hspace{-2.5pt}{\raisebox{-34.75pt}{\ \includegraphics[scale=1]{seven_loop_planar_projection_2}}}\hspace{-10.pt}{\raisebox{-34.75pt}{\ \includegraphics[scale=1]{seven_loop_planar_projection_4}}}\hspace{-10pt}\left.\rule{0pt}{40pt}\right\}$}}}\hspace{-150pt}$}}}\label{seven_loop_planar_projections_example}\vspace{-0pt}\vspace{-0.5pt}$$]{} Before moving on, it is worth a brief aside to mention that these projected contributions are to be symmetrized according to the same convention discussed above for $f$-graphs—namely, when considered as analytic expressions, only distinct terms are to be summed. This follows directly from our convention for $f$-graphs and the light-like limit, without any relative symmetry factors required between the coefficients of $f$-graphs and the coefficients of each distinct DCI integrand obtained by taking the light-like limit. 
Higher-Point Amplitude Extraction from the Correlator {#subsec:higher_point_amplitude_extraction} ----------------------------------------------------- Remarkably enough, although the correlation function $\mathcal{F}^{(\ell)}$ was defined to be closely related to the (actual) four-point correlation function $\mathcal{G}^{(\ell)}_4$ in planar SYM, which accounts for its relation to the four-particle scattering amplitude $\mathcal{A}_4^{(\ell)}$, it turns out that interesting combinations of [*all*]{} higher-point amplitudes can also be obtained from it [@Eden:2011yp; @Eden:2011ku; @Ambrosio:2013pba]. Perhaps this should not be too surprising, as $\mathcal{F}^{(\ell)}$ is a symmetrical function on $(4{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}\ell)$ points $x_a$; but it is an incredibly powerful observation: it implies that $\mathcal{F}^{(\infty)}$ contains information about [*all*]{} scattering amplitudes in planar SYM! The way in which higher-point, lower-loop amplitudes are encoded in the function $\mathcal{F}^{(\ell)}$ is a consequence of the fully supersymmetric amplitude/correlator duality [@Eden:2010zz; @Alday:2010zy; @Eden:2010ce; @Eden:2011yp; @Adamo:2011dq; @Eden:2011ku] which was unpacked in : [$$\lim_{\substack{\text{n-point}\\\text{light-like}}}\!\!\Big(\xi^{(n)}\mathcal{F}^{(\ell)}\Big)=\frac{1}{2}\sum_{k=0}^{n-4}\mathcal{A}_n^{k}\,\mathcal{A}_n^{n-4-k}/(\mathcal{A}_n^{n-4,(0)}).\label{f_to_npt_amp_map}\vspace{-5pt}\vspace{-0.5pt}$$]{} Here, we have used the notation $\mathcal{A}_n^{k,(\ell)}$ to represent the $\ell$-loop $n$-particle N$^k$MHV amplitude divided by the $n$ particle MHV tree-amplitude. We should point out that division in (\[f\_to\_npt\_amp\_map\]) by the N$^{n-4}$MHV ($\overline{\text{MHV}}$) tree-amplitude is required to absorb the Grassmann $\eta$ weights—resulting in a purely bosonic sum of terms from which all amplitudes can be extracted. 
It is worth mentioning that while for four particles the $\ell$ loop amplitude can be directly extracted from $\mathcal{F}^{(\ell)}$, and for five points one can also extract the full amplitude, for higher-point amplitudes it is not yet clear if or how one can obtain full information about amplitudes from the combination on the left-hand side of . Elaboration of how this works in detail is beyond the scope of our present work, but because the case of $n\!=\!5$ will play an important role in motivating (and proving) the ‘pentagon rule’ described in the next section, it is worth illustrating at least this case in some detail. #### The Pentagonal Light-Like Limit:  \ In addition to being the simplest example of how higher-point amplitudes can be extracted from $\mathcal{F}^{(\ell)}$ via (\[f\_to\_npt\_amp\_map\]), the case of five particles will prove quite useful to us in our discussion of the pentagon rule described in the next section. Therefore, let us briefly summarize how this works in practice. In the case of five particles, the right-hand side of (\[f\_to\_npt\_amp\_map\]) is simply the product of the MHV and $\overline{\text{MHV}}$ amplitudes—divided by the $\overline{\text{MHV}}$ tree-amplitude (with division by $\mathcal{A}_5^{0,(0)}$ left implicit, as always). 
Conventionally defining and , and expanding each in powers of the coupling, the relation (\[f\_to\_npt\_amp\_map\]) becomes more symmetrically expressed as: [$$\lim_{\substack{\text{5-point}\\\text{light-like}}}\!\!\Big(\xi^{(5)}\mathcal{F}^{(\ell+1)}\Big)=\sum_{m=0}^{\ell}\mathcal{M}_5^{(m)}\overline{\mathcal{M}}_5^{(\ell-m)}.\label{f_to_5pt_amp_map}\vspace{-0pt}\vspace{-0.5pt}$$]{} Moreover, because the parity-even contributions to the loop integrands $\mathcal{M}_5^{(\ell)}$ and $\overline{\mathcal{M}}_5^{(\ell)}$ are the same, it is further convenient to define: [$$\mathcal{M}_{\text{even}}^{(\ell)}\equiv\frac{1}{2}\left(\mathcal{M}_5^{(\ell)}+\overline{\mathcal{M}}_5^{(\ell)}\right)\quad\text{and}\quad\mathcal{M}_{\text{odd}}^{(\ell)}\equiv\frac{1}{2}\left(\mathcal{M}_5^{(\ell)}-\overline{\mathcal{M}}_5^{(\ell)}\right).\label{5pt_even_and_odd_definitions}\vspace{-0.5pt}$$]{} Because any integrand constructed out of factors ${x_{a\hspace{0.5pt}b}^2}$ will be manifestly parity-even, it is not entirely obvious how the parity-odd contributions to loop integrands should be represented. Arguably, the most natural way to represent parity-odd contributions is in terms of a six-dimensional formulation of dual momentum space (essentially the Klein quadric) which was first introduced in this context in following the introduction of momentum twistors in . Each point $x_a$ is represented by a (six-component) bi-twistor $X_a$. 
The (dual) conformal group $SO(2,4)$ acts linearly on this six-component object and so it is natural to define a fully antisymmetric epsilon-tensor, $\epsilon_{abcdef}\!\equiv\!\det\{X_a,\ldots,X_f\}$, in which the parity-odd part of the $\ell$ loop integrand can be represented [@Ambrosio:2013pba]: [$$\mathcal{M}_{\text{odd}}\equiv i\epsilon_{12345\ell}\,\widehat{\mathcal{M}}_{\text{odd}},\label{definition_of_epsilon_prefactors_for_odd_integrands}\vspace{-0.5pt}$$]{} where $\widehat{\mathcal{M}}_{\text{odd}}$ is a parity-even function, directly expressible in terms of factors ${x_{a\hspace{0.5pt}b}^2}$. Putting everything together, the expansion (\[f\_to\_5pt\_amp\_map\]) becomes: [$$\hspace{-75pt}\lim_{\substack{\text{5-point}\\\text{light-like}}}\!\!\Big(\xi^{(5)}\mathcal{F}^{(\ell+1)}\Big)=\sum_{m=0}^{\ell}\left(\mathcal{M}_{\text{even}}^{(m)}\mathcal{M}_{\text{even}}^{(\ell-m)}+\epsilon_{123456}\epsilon_{12345(m+6)}\widehat{\mathcal{M}}_{\text{odd}}^{(m)}\widehat{\mathcal{M}}_{\text{odd}}^{(\ell-m)}\right).\label{f_to_5pt_amp_map2}\hspace{-40pt}\vspace{-5pt}\vspace{-0.5pt}$$]{} The pentagon rule we derive in the next section amounts to the equality between two different ways to extract the $\ell$-loop 5-particle integrand from $\mathcal{F}^{(\ell+2)}$, by identifying, as part of the contribution, the one loop integrand. As such, it is worthwhile to at least quote these contributions. They are as follows: [$$\mathcal{M}_{\text{even}}^{(1)}\equiv{\raisebox{-34.75pt}{\ \includegraphics[scale=1]{five_point_one_loop_even}}}\qquad\text{and}\qquad\mathcal{M}_{\text{odd}}^{(1)}\equiv{\raisebox{-34.75pt}{\ \includegraphics[scale=1]{five_point_one_loop_odd}}}\label{five_point_one_loop_terms}\vspace{-0.5pt}$$]{} where the circled vertex in the right-hand figure indicates the last argument of the epsilon-tensor. 
When converted into analytic expressions, these correspond to: [$${\text{\makebox[0pt][c]{$\hspace{-150pt}\displaystyle{\raisebox{-34.75pt}{\ \includegraphics[scale=1]{five_point_one_loop_even}}}\equiv\frac{{x_{1\hspace{0.5pt}3}^2}{x_{2\hspace{0.5pt}4}^2}}{{x_{1\hspace{0.5pt}6}^2}{x_{2\hspace{0.5pt}6}^2}{x_{3\hspace{0.5pt}6}^2}{x_{4\hspace{0.5pt}6}^2}}+\text{cyclic},\quad{\raisebox{-34.75pt}{\ \includegraphics[scale=1]{five_point_one_loop_odd}}}\equiv\frac{i\epsilon_{123456}}{{x_{1\hspace{0.5pt}6}^2}{x_{2\hspace{0.5pt}6}^2}{x_{3\hspace{0.5pt}6}^2}{x_{4\hspace{0.5pt}6}^2}{x_{5\hspace{0.5pt}6}^2}}\hspace{-150pt}$}}},\label{five_point_one_loop_terms_analytic}\nonumber\vspace{-0.5pt}$$]{} where the cyclic sum of terms involves only the 5 external vertices. (Graphical) Rules For Bootstrapping Amplitudes {#sec:graphical_bootstraps} ============================================== As described above, the correlator $\mathcal{F}^{(\ell)}$ can be expanded into a basis of $\ell$ loop $f$-graphs according to (\[general\_correlator\_expansion\]). The challenge, then, is to determine the coefficients $c_i^{\ell}$. We take for granted that the one loop four-particle amplitude integrand may be represented in dual momentum coordinates as: [$$\mathcal{A}_4^{(1)}\equiv{\raisebox{-34.75pt}{\ \includegraphics[scale=1]{four_point_one_loop_amplitude}}}\equiv\frac{{x_{1\hspace{0.5pt}3}^2}{x_{2\hspace{0.5pt}4}^2}}{{x_{1\hspace{0.5pt}5}^2}{x_{2\hspace{0.5pt}5}^2}{x_{3\hspace{0.5pt}5}^2}{x_{4\hspace{0.5pt}5}^2}},\label{four_point_one_loop_integrand_in_x_space}\vspace{-5pt}\vspace{-0.5pt}$$]{} with which we expect most readers will be familiar. This formula in fact [*defines*]{} the one loop $f$-graph $f^{(1)}_1$—as there does not exist any planar graph involving five points each having valency at least 4. 
As such, it is defined so as to ensure that equation (\[f\_to\_4pt\_amp\_map\_with\_series\_expansion\]) holds: [$$f^{(1)}_1\equiv\mathcal{A}_4^{(1)}/\xi^{(4)}\equiv\!\!{\text{\makebox[90pt][c]{$\hspace{-150pt}\displaystyle{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{one_loop_f_graph_1}}}\hspace{-150pt}$}}}\equiv\frac{1}{{x_{1\hspace{0.5pt}2}^2}{x_{1\hspace{0.5pt}3}^2}{x_{1\hspace{0.5pt}4}^2}{x_{1\hspace{0.5pt}5}^2}{x_{2\hspace{0.5pt}3}^2}{x_{2\hspace{0.5pt}4}^2}{x_{2\hspace{0.5pt}5}^2}{x_{3\hspace{0.5pt}4}^2}{x_{3\hspace{0.5pt}5}^2}{x_{4\hspace{0.5pt}5}^2}}.\label{definition_of_f1}\vspace{-24pt}\vspace{-0.5pt}$$]{} This effectively defines $\mathcal{F}^{(1)}\!\equiv\!f_1^{(1)}$, with a coefficient $c_1^{1}\!\equiv\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}1$. Given this seed, we will see that consistency among the products of lower-loop amplitudes in (\[f\_to\_4pt\_amp\_map\_with\_series\_expansion\])—as well as those involving more particles (\[f\_to\_npt\_amp\_map\])—will be strong enough to uniquely determine the coefficients of all $f$-graphs in the expansion for $\mathcal{F}^{(\ell)}$ in terms of lower loop-orders. In this section we describe how this can be done in practice through three simple, graphical rules that allow us to ‘bootstrap’ all necessary coefficients through at least ten loops. To be clear, the rules we describe are merely three among many that follow from the self-consistency of equations (\[f\_to\_4pt\_amp\_map\_with\_series\_expansion\]) and (\[f\_to\_npt\_amp\_map\]); they are not obviously the strongest or most effective of such rules; but they are [*necessary*]{} conditions of any representation of the correlator, and we have found them to be [*sufficient*]{} to uniquely fix the expansion of $\mathcal{F}^{(\ell)}$ into $f$-graphs, (\[general\_correlator\_expansion\]), through at least ten loops. 
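The seed relation (\[definition\_of\_f1\]) is pure factor bookkeeping, and one can check mechanically which factors $\xi^{(4)}$ must supply for it to hold as written. The following sketch (in Python; purely illustrative, and not part of any published code) represents each factor ${x_{a\hspace{0.5pt}b}^2}$ as an unordered pair of points and computes the factor content of the ratio $\mathcal{A}_4^{(1)}/f^{(1)}_1$:

```python
from collections import Counter
from itertools import combinations

def pair(a, b):
    """The factor x_ab^2, represented as an unordered pair of points."""
    return frozenset((a, b))

# A_4^(1) = x13^2 x24^2 / (x15^2 x25^2 x35^2 x45^2):
num_A = Counter([pair(1, 3), pair(2, 4)])
den_A = Counter(pair(a, 5) for a in (1, 2, 3, 4))

# f_1^(1) = 1 / (product of x_ab^2 over all 10 pairs of the 5 points):
den_f = Counter(pair(a, b) for a, b in combinations(range(1, 6), 2))
assert sum(den_f.values()) == 10

# xi^(4) = A_4^(1) / f_1^(1): the factor multiset the seed relation requires.
xi = den_f.copy()
xi.update(num_A)     # numerator factors of A_4^(1)
xi.subtract(den_A)   # denominator factors of A_4^(1) cancel against den_f
xi = +xi             # drop factors of multiplicity zero

# xi^(4) must carry x13^2 and x24^2 twice, and each factor connecting
# consecutive external points once:
assert xi == Counter({pair(1, 3): 2, pair(2, 4): 2, pair(1, 2): 1,
                      pair(2, 3): 1, pair(3, 4): 1, pair(1, 4): 1})
```

Any candidate definition of $\xi^{(4)}$ must reproduce exactly this multiset of factors.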
Let us briefly describe each of these three rules in qualitative terms, before giving more detail (and derivations) in the following subsections. We refer to these as the ‘triangle rule’, the ‘square rule’, and the ‘pentagon rule’. Despite the natural ordering suggested by their names, it is perhaps best to start with the square rule—which is simply a generalization of what has long been called the ‘rung’ rule [@Bern:1997nh]. #### The Square (or ‘Rung’) Rule:  \ The square rule is arguably the most powerful of the three rules, and provides the simplest constraints—directly fixing the coefficients of certain $f$-graphs at $\ell$ loops to be equal to the coefficients of $f$-graphs at $(\ell{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}1)$ loops. Roughly speaking, the square rule follows from the requirement that whenever an $f$-graph [*has*]{} a contribution to $\mathcal{A}_4^{(\ell-1)}\mathcal{A}_4^{(1)}$, this contribution must be correct. It simply reflects the translation of what has long been known as the ‘rung’ rule [@Bern:1997nh] into the language of the correlator and $f$-graphs [@Eden:2012tu]; however, this translation proves much more powerful than the original, as described in more detail below. As will be seen in the table below, for example, the square rule fixes $\sim\!95\%$ of all $f$-graph coefficients at eleven loops—the only coefficients not fixed by the square rule are those of $f$-graphs which do not contribute any terms to $\mathcal{A}_4^{(\ell-1)}\mathcal{A}_4^{(1)}$.  \ #### The Triangle Rule:  \ Simply put, the triangle rule states that shrinking triangular faces at $\ell$ loops is equivalent to shrinking edges at $(\ell{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}1)$ loops. By this we mean simply identifying the three vertices of any triangular face of an $f$-graph at $\ell$ loops and identifying two vertices connected by an edge of an $f$-graph at $(\ell{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}1)$ loops, respectively. 
The result of either operation is never an $f$-graph (as it will not have correct conformal weights, and will often involve vertices connected by more than one edge), but this does not prevent us from implementing the rule graphically. Typically, there are many fewer inequivalent graphs involving shrunken faces/edges than there are $f$-graphs, and so the triangle rule generally results in relations involving many $f$-graph coefficients; this makes its equations comparatively harder to solve. As described in more detail below, the triangle rule follows from the Euclidean short distance [@Eden:2012tu; @Eden:2012fe] limit of correlation functions. We will prove this in the following subsection, and describe more fully its strength in fixing coefficients in . But it is worth mentioning here that when combined with the square rule, the triangle rule is sufficient to fix $\mathcal{F}^{(\ell)}$ completely through seven loops; and the implications of the triangle rule applied at [*ten*]{} loops are sufficient to fix $\mathcal{F}^{(\ell)}$ through [*nine*]{} loops (although the triangle and square rules alone, when imposed at nine loops, would not suffice).  \ #### The Pentagon Rule:  \ The pentagon rule is the five-particle analog of the square rule—following from the requirement that the $\mathcal{M}^{(\ell-1)}\mathcal{M}^{(1)}$ terms in the expansion (\[f\_to\_5pt\_amp\_map\]) are correct. Unlike the square rule, however, it does not require knowledge of lower-loop five-particle amplitudes; rather, it simply requires that the odd contributions to the amplitude are consistent. We will describe in detail how the pentagon rule is derived below, and give examples of how it fixes coefficients. One important aspect of the pentagon rule, however, is that it relates coefficients at a [*fixed loop-order*]{}. 
Indeed, as an algebraic constraint, the pentagon rule always becomes the requirement that the sum of some subset of coefficients $c_i^\ell$ is zero (without any relative factors ever required).\ Before we describe and derive each of these three rules in detail, it is worth mentioning that they lead to mutually overlapping and individually [*over-constrained*]{} relations on the coefficients of $f$-graphs. As such, the fact that any solution exists to these equations—whether from each individual rule or in combination—strongly implies the correctness of our rules (and the correctness of their implementation in our code). And of course, the results we find are consistent with all known results through eight loops, which have been found using a diversity of other methods. The Square (or ‘Rung’) Rule: Removing One Loop Squares {#subsec:square_rule} ------------------------------------------------------ Recall from that, upon taking the 4-point light-like limit, an $f$-graph contributes a term to $\mathcal{A}_4^{(\ell-1)}\mathcal{A}_4^{(1)}$ in the expansion (\[f\_to\_4pt\_amp\_map\_with\_series\_expansion\]) if (and only if) there exists a 4-cycle that encloses a single vertex. See, for example, the second illustration given in (\[example\_cycles\]). Because of planarity, the enclosed vertex must have valency exactly 4, and so any such cycle must form a face with the topology: [$${\raisebox{-34.75pt}{\ \includegraphics[scale=1]{square_rule_face_topology}}}\label{square_rule_face_toplogy}\vspace{-7pt}\vspace{-0.5pt}$$]{} Whenever an $f$-graph has such a face, it will contribute a term of the form $\mathcal{A}_4^{(\ell-1)}\mathcal{A}_4^{(1)}$ in the light-like limit. If we define the operator $\mathcal{S}(\mathcal{F})$ to be the projection onto such contributions, then the rung rule states that $\mathcal{S}(\mathcal{F}^{(\ell)})/\mathcal{A}_4^{(1)}\!\!=\!\mathcal{A}_4^{(\ell-1)}$. 
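Operationally, finding where $\mathcal{S}$ can act is a purely combinatorial test: one looks for a vertex of valency 4 whose four neighbors form a 4-cycle. A minimal sketch in Python (the octahedron below is merely a convenient planar test graph in which every vertex has this property; it is not one of the $f$-graphs discussed here):

```python
from itertools import combinations

def square_rule_vertices(edges):
    """Vertices of valency 4 whose four neighbors form a 4-cycle: candidates
    for the enclosed vertex of the one loop square face."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    hits = []
    for v, nbrs in adj.items():
        # A 2-regular simple graph on 4 vertices is necessarily a 4-cycle,
        # so it suffices that each neighbor is adjacent to exactly 2 others.
        if len(nbrs) == 4 and all(len(adj[n] & nbrs) == 2 for n in nbrs):
            hits.append(v)
    return sorted(hits)

# Octahedron: K6 minus a perfect matching. Every vertex qualifies.
matching = [{1, 5}, {2, 6}, {3, 4}]
octa = [e for e in combinations(range(1, 7), 2) if set(e) not in matching]
assert square_rule_vertices(octa) == [1, 2, 3, 4, 5, 6]

# In K5 every vertex has valency 4, but its neighbors form a K4 (each
# adjacent to 3 of the others), so no such face is found:
assert square_rule_vertices(list(combinations(range(1, 6), 2))) == []
```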
Graphically, division of (\[square\_rule\_face\_toplogy\]) by the graph for $\mathcal{A}_4^{(1)}$ in (\[four\_point\_one\_loop\_integrand\_in\_x\_space\]) would correspond to the graphical replacement: [$$\hspace{-120pt}{\raisebox{-34.75pt}{\ \includegraphics[scale=1]{square_rule_face_topology}}}{\raisebox{-2.25pt}{\scalebox{1.75}{$\Rightarrow$}}}\left({\raisebox{-34.75pt}{\ \includegraphics[scale=1]{square_rule_face_topology}}}{\raisebox{-2.25pt}{\scalebox{1.75}{$\times$}}}{\raisebox{-34.75pt}{\ \includegraphics[scale=1]{inverse_one_loop_graph}}}\right){\raisebox{-2.25pt}{\scalebox{1.75}{$=$}}}{\raisebox{-34.75pt}{\ \includegraphics[scale=1]{image_of_square_after_division}}}\hspace{-100pt}\label{graphical_square_rule}\vspace{-5pt}\vspace{-0.5pt}$$]{} (Here, we have illustrated division by the graph for $\mathcal{A}_4^{(1)}$—shown in (\[four\_point\_one\_loop\_integrand\_in\_x\_space\])—as multiplication by its inverse.) Importantly, the image on the right hand side of (\[graphical\_square\_rule\]) resulting from this operation is not always planar! For it to be planar, there must exist a numerator factor connecting any two of the vertices of the square face—to cancel against one or both of the ‘new’ factors in the denominator appearing in (\[graphical\_square\_rule\]). When the image is non-planar, however, the graph [*cannot*]{} contribute to $\mathcal{A}_4^{(\ell-1)}$,[^1] and thus the coefficient of such an $f$-graph must vanish. 
For example, consider the following six loop $f$-graph, which has a face with the topology (\[square\_rule\_face\_toplogy\]), and so its contribution to $\mathcal{F}^{(6)}$ would be constrained by the square rule: [$${\raisebox{-54.75pt}{\ \includegraphics[scale=1]{six_loop_vanishing_by_square_rule_example}}}\label{six_loop_vanishing_by_square_rule_example}\vspace{-15pt}\vspace{-0.5pt}$$]{} In this case, because there are no numerator factors (indicated by dashed lines) connecting the vertices of the highlighted 4-cycle, its image under (\[graphical\_square\_rule\]) would be non-planar, and hence this term cannot appear in $\mathcal{A}_4^{(5)}$. Therefore, the coefficient of this $f$-graph must be zero. (In fact, this reasoning accounts for 8 of the 10 vanishing coefficients that first appear at six loops.) As discussed in , this immediately implies that there are no possible contributions with ‘$k\!=\!4$’ divergences. More typically, however, there is at least one numerator factor in the $\ell$ loop $f$-graph that connects vertices of the one loop square face (\[square\_rule\_face\_toplogy\]) in order to cancel one or both of the new denominator factors in (\[graphical\_square\_rule\]). When this is the case, the image is an $(\ell{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}1)$ loop $f$-graph, and the square rule states that the two coefficients are identical. 
For example, the five loop $f$-graph shown in (\[five\_loop\_planar\_projections\_example\]) is fixed by the square rule to have the same coefficient as $f_3^{(4)}$ shown in (\[one\_through\_four\_loop\_f\_graphs\]): [$${\raisebox{-34.75pt}{\ \includegraphics[scale=1]{five_loop_square_rule_example_1}}}\,\,{\raisebox{-2.25pt}{\scalebox{1.75}{$\Rightarrow$}}}{\raisebox{-34.75pt}{\ \includegraphics[scale=1]{five_loop_square_rule_example_2}}}\label{five_loop_square_rule_example}\vspace{-7pt}\vspace{-0.5pt}$$]{} In summary, the square rule fixes the coefficient of any $f$-graph that has a face with the topology (\[square\_rule\_face\_toplogy\]) directly in terms of lower-loop coefficients. And this turns out to constrain the vast majority of possible contributions, as summarized in the table below. And it is worth emphasizing that the square rule described here is in fact substantially stronger than what has been traditionally called the ‘rung’ rule [@Bern:1997nh] for two reasons: first, the square rule unifies collections of planar DCI contributions to amplitudes according to the hidden symmetry of the correlator—allowing us to fix coefficients of even the ‘non-rung-rule’ integrands such as those appearing in (\[five\_loop\_planar\_projections\_example\]); second, the square rule allows us to infer the vanishing of certain coefficients due to the non-existence of lower-loop graphs (due to non-planarity). 
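The ‘percent fixed’ row of the table below is determined by the two count rows; as a quick arithmetic check:

```python
# Counts quoted from the table, for l = 3, ..., 11:
totals  = [1, 3, 7, 36, 220, 2709, 43017, 900145, 22097035]
unfixed = [0, 1, 1, 5, 22, 293, 2900, 52475, 1017869]
percent = [round(100 * (t - u) / t) for t, u in zip(totals, unfixed)]
assert percent == [100, 67, 86, 86, 90, 89, 93, 94, 95]
```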
$${\text{\makebox[0pt][c]{$\hspace{-150pt}\displaystyle{\hspace{-150pt}$}}}\begin{array}{|r|r|r|r|r|r|r|r|r|r|}\cline{2-10}\multicolumn{1}{r}{\ell\!=}&\multicolumn{1}{|c|}{3}&\multicolumn{1}{c|}{4}&\multicolumn{1}{c|}{5}&\multicolumn{1}{c|}{6}&\multicolumn{1}{c|}{7}&\multicolumn{1}{c|}{8}&\multicolumn{1}{c|}{9}&\multicolumn{1}{c|}{10}&\multicolumn{1}{c|}{11}\\\hline\text{number of $f$-graph coefficients:}&1&3&7&36&220&2,\!709&43,\!017&900,\!145&22,\!097,\!035\\\hline\text{number unfixed by square rule:}&0&1&1&5&22&293&2,\!900&52,\!475&1,\!017,\!869\\\hline\hline\text{percent fixed by square rule (\%):}&100&67&86&86&90&89&93&94&95\\\hline\end{array}}$$ The Triangle Rule: Collapsing Triangles and Edges {#subsec:triangle_shrink} ------------------------------------------------- The triangle rule relates the coefficients of $f$-graphs at $\ell$ loops to those at $(\ell{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}1)$ loops. Simply stated, collapsing triangles (to points) at $\ell$ loops is equivalent to collapsing edges of graphs at $(\ell{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}1)$ loops. More specifically, we can define an operation $\mathcal{T}$ that projects all $f$-graphs onto their triangular faces (identifying the points of each face), and another operation $\mathcal{E}$ that collapses all edges of $f$-graphs (identifying points). 
Algebraically, the triangle rule corresponds to, [$$\mathcal{T}(\mathcal{F}^{(\ell)})=2\,\mathcal{E}(\mathcal{F}^{(\ell-1)}).\label{algebraic_but_figurative_triangle_rule}\vspace{-6pt}\vspace{-0.5pt}$$]{} Under either operation, the result is some non-conformal (and generally multi-) graph with fewer vertices, with each image coming from possibly many $f$-graphs; thus, (\[algebraic\_but\_figurative\_triangle\_rule\]) gives a linear relation between the $\ell$ loop coefficients of $\mathcal{F}^{(\ell)}$—those that project under $\mathcal{T}$ to the same image—and the $(\ell{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}1)$ loop coefficients of $\mathcal{F}^{(\ell-1)}$. (It often happens that an image of $\mathcal{F}^{(\ell)}$ under $\mathcal{T}$ is not found among the images of $\mathcal{F}^{(\ell-1)}$ under $\mathcal{E}$; in this case, the right-hand side of (\[algebraic\_but\_figurative\_triangle\_rule\]) will be zero.) One small subtlety that is worth mentioning is that we must be careful about symmetry factors—as the automorphism group of the pre-image may not align with that of the image. To be clear, $\mathcal{T}$ acts on [*each*]{} triangular face of a graph (not necessarily inequivalent), and $\mathcal{E}$ acts on [*each*]{} edge of a graph (again, not necessarily inequivalent); each term in the image is then summed with a factor equal to the ratio of the symmetry factor of the image to that of the pre-image. In both cases, this amounts to including a symmetry factor that compensates for the difference between the symmetries of an ordinary $f$-graph and the symmetries of $f$-graphs with a [*decorated*]{} triangle or edge. Let us illustrate this with an example from the seven loop correlation function. The image of $\mathcal{F}^{(7)}$ under $\mathcal{T}$ includes $433$ graph-inequivalent images—each resulting in one identity among the coefficients $c_i^{7}$ and $c_i^6$. 
One of these inequivalent images results in the identity: $$\begin{aligned} {\text{\makebox[0pt][c]{$\hspace{-150pt}\displaystyle\mathcal{T}\hspace{-4pt}\left(\rule{0pt}{40pt}\right.\hspace{-5pt}c^{7}_1\!\!{\text{\makebox[70pt][c]{$\hspace{-150pt}\displaystyle{\raisebox{-37.25pt}{\ \includegraphics[scale=1]{seven_loop_triangle_rule_1}}}\hspace{-150pt}$}}}\!+\!c^7_2\!\!{\text{\makebox[70pt][c]{$\hspace{-150pt}\displaystyle{\raisebox{-37.25pt}{\ \includegraphics[scale=1]{seven_loop_triangle_rule_2}}}\hspace{-150pt}$}}}\!+\!c^7_3{\text{\makebox[70pt][c]{$\hspace{-150pt}\displaystyle{\raisebox{-37.25pt}{\ \includegraphics[scale=1]{seven_loop_triangle_rule_3}}}\hspace{-150pt}$}}}\,\,+\!\ldots\hspace{-5pt}\left.\rule{0pt}{40pt}\right)\hspace{-4pt}=2\,\mathcal{E}\hspace{-4pt}\left(\rule{0pt}{40pt}\right.\hspace{-6pt}c^6_1\!\!{\text{\makebox[70pt][c]{$\hspace{-150pt}\displaystyle{\raisebox{-37.25pt}{\ \includegraphics[scale=1]{six_loop_triangle_rule_1}}}\hspace{-150pt}$}}}\!+\!\ldots\hspace{-5pt}\left.\rule{0pt}{40pt}\right)\hspace{-150pt}$}}}\nonumber\\[-32pt]{\text{\makebox[0pt][c]{$\hspace{-150pt}\displaystyle\hspace{-15pt}{\raisebox{-2.25pt}{\scalebox{1.75}{$\Rightarrow$}}}\left(c_1^7+2\,c_2^7+c_3^7\right){\text{\makebox[70pt][c]{$\hspace{-150pt}\displaystyle{\raisebox{-37.25pt}{\ \includegraphics[scale=1]{seven_loop_triangle_image}}}\hspace{-150pt}$}}}\hspace{-10pt}=2\,c_1^6{\text{\makebox[70pt][c]{$\hspace{-150pt}\displaystyle{\raisebox{-37.25pt}{\ \includegraphics[scale=1]{seven_loop_triangle_image}}}\hspace{-150pt}$}}}\hspace{-7.5pt}{\raisebox{-2.25pt}{\scalebox{1.75}{$\Rightarrow$}}}\,c_1^7+2\,c_2^7+c_3^7=2\,c_1^6.\hspace{-150pt}$}}}\label{triangle_rule_example}\\[-30pt]\nonumber\end{aligned}$$ While not visually manifest, it is not hard to check that shrinking each highlighted triangle/edge in the first line of (\[triangle\_rule\_example\]) results in graphs isomorphic to the one shown in the second line. 
And indeed, the coefficients of the six and seven loop correlators (obtained independently) satisfy this identity: $\{c_1^7,c^7_2,c^7_3,c^6_1\}\!=\!\{\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}1,\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}1,\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}1,\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}1\}$. (The coefficient of 2 appearing in front of $c_2^7$ results from the fact that the symmetry factor of the initial graph is 1, while its image under $\mathcal{T}$ has a symmetry factor of 2.) ### Proof and Origins of the Triangle Rule {#subsubsec:proof_of_triangle_rule .unnumbered} The triangle rule arises from a reformulation of the Euclidean short distance limit of correlation functions discussed in . In the Euclidean short distance limit $x_2\!\rightarrow\!x_1$, the operator product expansion dictates that the leading divergence of the logarithm of the correlation function is proportional to the one loop divergence. More precisely, [$$\lim_{x_2\rightarrow x_1}\!\log\!\Big(1+\sum_{\ell\geq1}a^\ell\,F^{(\ell)}\Big)=\gamma(a)\!\lim_{x_2\rightarrow x_1}\!F^{(1)}+\ldots,\label{konishi_log_relation}\vspace{-7pt}\vspace{-0.5pt}$$]{} where ‘$a$’ refers to the coupling, $F$ is defined by, [$$F^{(\ell)}\equiv3\,\frac{\mathcal{G}^{(\ell)}_4(x_1,x_2,x_3,x_4)}{\mathcal{G}^{(0)}_4(x_1,x_2,x_3,x_4)}\,,\label{definition_of_F}\vspace{-5pt}\vspace{-0.5pt}$$]{} and where the dots in (\[konishi\_log\_relation\]) refer to subleading terms in this limit. 
The proportionality constant $\gamma(a)$ here is the anomalous dimension of the Konishi operator, and the factor 3 in (\[definition\_of\_F\]) also has a physical origin—ultimately arising from the tree-level three-point function of two stress-energy multiplets and the Konishi multiplet.[^2] The important point for us from (\[konishi\_log\_relation\]) is that the logarithm of the correlator has the same divergence as the one loop correlator, whereas the correlator itself at $\ell$ loops diverges as the $\ell^{\text{th}}$ power of the one loop correlator $\lim_{x_2\rightarrow x_1}\!\big(\mathcal{G}_4^{(\ell)}\big)\!\sim\!\log^\ell\!\left({x_{1\hspace{0.5pt}2}^2}\right)$. At the integrand level this divergence arises from loop integration variables approaching $x_2\!=\!x_1$. The only way for a loop integral of this form—with symmetrized integration variables—to be reduced to a single log divergence is if the integrand had reduced divergence in the simultaneous limit $x_5, x_2\!\rightarrow\!x_1$, where we recall that $x_5$ is one of the loop integration variables.[^3] More precisely then, defining the relevant perturbative logarithm of the correlation function as ${g}^{(\ell)}$: [$$\sum_{\ell\geq1}a^\ell g^{(\ell)}\equiv\log\!\Big(1+\sum_{\ell\geq1}a^\ell\,F^{(\ell)}\Big),\label{definition_of_g}\vspace{-2pt}\vspace{-0.5pt}$$]{} then at the integrand-level (\[konishi\_log\_relation\]) implies:[^4] [$$\lim_{x_5,x_2\rightarrow x_1}\left(\frac{g^{(\ell)}(x_1, \dots, x_{4+\ell})}{g^{(1)}(x_1,\dots,x_{5})}\right)=0,\qquad\ell\!>\!1\,.\label{eq:3b}\vspace{-3pt}\vspace{-0.5pt}$$]{} This equation gives a clean integrand-level consequence of the reduced divergence; however, it is phrased in terms of the logarithm of the integrand rather than the integrand itself, and this does not translate directly into a graphical rule. 
However, notice the relation between the $\log$-expansion $g$ and the correlator $F$, [$$g^{(\ell)} = F^{(\ell)} - \frac 1\ell g^{(1)}(x_{5}) F^{(\ell-1)} - \sum_{m=2}^{\ell-1} \frac{m}{\ell} g^{(m)}(x_{5}) F^{(\ell-m)}\, .\label{logarithm_expansion}\vspace{-3pt}\vspace{-0.5pt}$$]{} This formula can be read at the level of the integrand: we write the dependence on the loop variable $x_{5}$ explicitly, while the dependence on all other loop variables is completely symmetrized.[^5] From equation (\[logarithm\_expansion\]), it is straightforward to see (using an induction argument) that (\[eq:3b\]) is equivalent to [$$\lim_{x_2,x_{5} \rightarrow x_1} \frac{F^{(\ell)}(x_1,\dots,x_{4+\ell})}{g^{(1)}(x_1,x_2,x_3,x_4,x_{5})}= \frac1\ell\,\lim_{x_2\rightarrow x_1} F^{(\ell-1)}(x_1, \dots,{\widehat}x_5,\dots, x_{4+\ell})\,,\label{eq:6}\vspace{-3pt}\vspace{-0.5pt}$$]{} where the variable $x_5$ is missing from the right-hand side. This is now a direct rewriting of the reduced divergence at the level of integrands and as a relation for the loop level correlator (rather than the more complicated logarithm). Note that everything in the discussion of this section so far can be transferred straightforwardly onto the soft/collinear divergence constraint; and indeed, a rephrasing of the soft/collinear constraint similar to (\[eq:6\]) was conjectured in , with the relevant limit being $x_5$ approaching the line joining $x_1$ and $x_2$, $\lim_{x_{5}\rightarrow [x_1,x_2]}$. 
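Stripped of the $x_5$-labelling, that is, at the level of coefficient arithmetic alone, (\[logarithm\_expansion\]) is simply the recursion generating the coefficients of a formal power-series logarithm. This scalar shadow is easy to sanity-check numerically (a purely illustrative check, which does not capture the labelled, integrand-level statement):

```python
from fractions import Fraction as Fr

def log_coeffs(F):
    """g_l = F_l - (1/l) sum_{m=1}^{l-1} m g_m F_{l-m}: the scalar shadow
    of (logarithm_expansion). F[l] is the coefficient of a^l, with F[0] = 0."""
    g = {}
    for l in range(1, len(F)):
        g[l] = F[l] - sum(Fr(m, l) * g[m] * F[l - m] for m in range(1, l))
    return g

def exp_coeffs(g, n):
    """Independent recursion for E = exp(g): l E_l = sum_k k g_k E_{l-k}."""
    E = {0: Fr(1)}
    for l in range(1, n + 1):
        E[l] = sum(Fr(k, l) * g.get(k, 0) * E[l - k] for k in range(1, l + 1))
    return E

# Round trip exp(log(1 + F)) = 1 + F on arbitrary rational data:
F = [Fr(0), Fr(3), Fr(-5, 2), Fr(7, 3), Fr(1, 4)]
g = log_coeffs(F)
E = exp_coeffs(g, 4)
assert all(E[l] == F[l] for l in range(1, 5))
```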
Now inputting the one loop correlator, $\lim_{x_2,x_{5}\rightarrow x_1}g^{(1)}(x_1,\dots,x_{5})= 6/({x_{1\hspace{0.5pt}5}^2}{x_{2\hspace{0.5pt}5}^2})$, and rewriting this in terms of $\mathcal{F}^{(\ell)}$, (\[eq:6\]) becomes simply [$$\lim_{x_2,x_{5}\rightarrow x_1}({x_{1\hspace{0.5pt}2}^2} {x_{1\hspace{0.5pt}5}^2}{x_{2\hspace{0.5pt}5}^2})\times {{\mathcal F}^{(\ell)}(x_1, \dots, x_{4+\ell})}= 6\lim_{x_2\rightarrow x_1} ({x_{1\hspace{0.5pt}2}^2})\times {\mathcal F}^{(\ell-1)}(x_1, \dots, x_{3+\ell})\, .\label{eq:7}\vspace{-3pt}\vspace{-0.5pt}$$]{} The final step in this rephrasing of the coincidence limit is to view (\[eq:7\]) graphically. Clearly the limit on the left-hand side will only be non-zero if the corresponding term in the labelled $f$-graph contains the triangle with vertices $x_1, x_2, x_5$. The limit then deletes this triangle and shrinks it to a point. On the right-hand side, we similarly choose terms in the labelled $f$-graph containing the edge $x_1\!\!\leftrightarrow\!x_2$, delete this edge and then shrink to a point. The equation has to hold graphically and we no longer need to consider explicit labels. Simply shrink all inequivalent (up to automorphisms) triangles of the linear sum of graphs on the left-hand side and equate it to the result of shrinking all inequivalent (again, up to automorphisms) edges of the linear sum of graphs on the right-hand side. The different (non-isomorphic) shrunk graphs are independent, and thus for each shrunk graph we obtain an equation relating $\ell$ loop coefficients to $(\ell{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}1)$ loop coefficients. There are six different labelings of the triangle and two different labelings of the edge which all reduce to the same expression in this limit; thus, the factor of 6 in the algebraic expression (\[eq:7\]) becomes the factor of 2 in the equivalent graphical version (\[algebraic\_but\_figurative\_triangle\_rule\]). 
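Both shrink operations act mechanically on edge (multi)sets, and the symmetry factors involved are just automorphism counts. A small Python sketch (using the complete graph $K_5$ as a convenient test case; purely illustrative, and not one of the $f$-graphs above):

```python
from collections import Counter
from itertools import combinations, permutations

def shrink(edges, cluster):
    """Delete all edges internal to `cluster`, then identify its vertices to
    a single new vertex 't'; the result is generally a multigraph (a Counter)."""
    cl = set(cluster)
    out = Counter()
    for a, b in edges:
        if a in cl and b in cl:
            continue  # internal edge: deleted by the shrink
        out[frozenset(('t' if a in cl else a, 't' if b in cl else b))] += 1
    return out

def symmetry_factor(vertices, edges):
    """|Aut| of a simple graph, by brute force (fine for small examples)."""
    vs = list(vertices)
    es = {frozenset(e) for e in edges}
    count = 0
    for p in permutations(vs):
        relab = dict(zip(vs, p))
        if all(frozenset((relab[a], relab[b])) in es for a, b in edges):
            count += 1
    return count

k5 = list(combinations(range(1, 6), 2))
assert symmetry_factor(range(1, 6), k5) == 120   # Aut(K5) = S5

# Shrinking the triangle {1,2,3} of K5: triple edges t-4 and t-5 appear.
image = shrink(k5, {1, 2, 3})
assert image == Counter({frozenset(('t', 4)): 3, frozenset(('t', 5)): 3,
                         frozenset((4, 5)): 1})

# The identity (triangle_rule_example), with the quoted coefficients
# {c_1^7, c_2^7, c_3^7, c_1^6} = {+1, +1, -1, +1}:
assert (+1) + 2 * (+1) + (-1) == 2 * (+1)
```

The edge operation $\mathcal{E}$ is the same function applied to a 2-element cluster.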
The Pentagon Rule: Equivalence of One Loop Pentagons {#subsec:pentagon_rule} ---------------------------------------------------- Let us now describe the pentagon rule. It is perhaps the hardest to describe (and derive), but it ultimately turns out to imply much simpler relations among coefficients than the triangle rule. In particular, the pentagon rule will always imply that the sum of some subset of coefficients $\{c_i^\ell\}$ vanishes—with no relative factors between terms in the sum. Let us first describe operationally how these identities are found graphically, and then describe how this rule can be deduced from considerations of 5-point light-like limits according to (\[f\_to\_5pt\_amp\_map2\]). Graphically, each pentagon rule identity involves a relation between $f$-graphs involving the following topologies: [$${\raisebox{-34.75pt}{\ \includegraphics[scale=1]{pentagon_rule_seed}}}\;{\raisebox{-2.25pt}{\scalebox{1.75}{$\Rightarrow$}}}\left\{\!\!{\raisebox{-34.75pt}{\ \includegraphics[scale=1]{pentagon_rule_images}}}\right\}\label{topologies_of_the_pentagon_rule}\vspace{-0.5pt}$$]{} Each pentagon rule identity involves an $f$-graph with a face with the topology on the left-hand side of the figure above, (\[topologies\_of\_the\_pentagon\_rule\]). This sub-graph is easily identified as having the structure of $\mathcal{M}_{\text{even}}^{(1)}$—see equation (\[five\_point\_one\_loop\_terms\]). (This is merely suggestive: we will soon see that it is the role these graphs play in $\mathcal{M}_{\text{odd}}^{(1)}$ that is critical.) 
Importantly, these $f$-graphs may involve any number of numerators of the form ${x_{{\color{hred}a}\hspace{0.5pt}{\color{hblue}b}}^2}$—including some that are ‘implicit’: any points ${\color{hblue}x_b}$ separated from ${\color{hred}x_a}$ by a face (not connected by an edge), because for such points ${\color{hblue}x_b}$, multiplication by ${x_{{\color{hred}a}\hspace{0.5pt}{\color{hblue}b}}^2}/{x_{{\color{hred}a}\hspace{0.5pt}{\color{hblue}b}}^2}$ would not affect planarity of the factors in the denominator. The graphs on the right-hand side of (\[topologies\_of\_the\_pentagon\_rule\]), then, are the collection of those obtained from that on the left-hand side by multiplication by a simple cross-ratio: [$$f_i^{(\ell)}({\color{hred}x_{a}},{\color{hblue}x}_{{\color{hblue}b}},x_c,x_d)\mapsto f_{i'}^{(\ell)}\equiv f_i^{(\ell)}\frac{{x_{{\color{hred}a}\hspace{0.5pt}d}^2}{x_{{\color{hblue}b}\hspace{0.5pt}c}^2}}{{x_{{\color{hred}a}\hspace{0.5pt}{\color{hblue}b}}^2}{x_{c\hspace{0.5pt}d}^2}}.\label{cross_ratio_relation_for_pentagon_rule}\vspace{-0.5pt}$$]{} There is one final restriction that must be mentioned. The generators of pentagon rule identities—$f$-graphs including subgraphs with the topology shown on the left-hand side of (\[topologies\_of\_the\_pentagon\_rule\])—must not involve any numerators connecting points on the pentagon [*other than*]{} between ${\color{hred}x_a}$ and $x_d$ (arbitrary powers of ${x_{{\color{hred}a}\hspace{0.5pt}d}^2}$ are allowed). While the requirements for the graphs that participate in pentagon rule identities may seem stringent, each is important—as we will see when we describe the rule’s proof. But the identities that result are very powerful: they always take the form that the sum of the coefficients of the graphs involved (both the initial graph, and all its images in (\[topologies\_of\_the\_pentagon\_rule\])) must vanish. Let us illustrate these relations with a concrete example from seven loops. 
Below, we have drawn an $f$-graph on the left, highlighting in blue the three points $\{{\color{hblue}x_b}\}$ that satisfy the requirements described above; and on the right we have drawn the three $f$-graphs related to the initial graph according to (\[cross\_ratio\_relation\_for\_pentagon\_rule\]): [$$\hspace{-234pt}{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{seven_loop_pentagon_rule_example_seed}}}{\raisebox{-2.25pt}{\scalebox{1.75}{$\Rightarrow$}}}\!\!\left\{\rule{0pt}{47.5pt}\right.\!\!\hspace{-7.5pt}{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{seven_loop_pentagon_rule_example_images_1}}},\hspace{-5pt}{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{seven_loop_pentagon_rule_example_images_2}}},\hspace{-5pt}{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{seven_loop_pentagon_rule_example_images_3}}}\hspace{-7.5pt}\left.\rule{0pt}{47.5pt}\right\}\hspace{-200pt}\label{seven_loop_pentagon_rule_example}\vspace{-6pt}\vspace{-0.5pt}$$]{} Notice that two of the three points ${\color{hblue}x_b}$ are ‘implicit’ in the manner described above. Labeling the coefficients of the $f$-graphs in (\[seven\_loop\_pentagon\_rule\_example\]) from left to right, the pentagon rule implies that the sum of these four coefficients must vanish. And indeed, the coefficients of these terms in the seven loop correlator turn out to satisfy this identity. As usual, there are no symmetry factors to consider; but it is important that only [*distinct*]{} images are included in the set on the right-hand side of (\[topologies\_of\_the\_pentagon\_rule\]). As will be discussed in , the pentagon rule is strong enough to fix all coefficients but one not already fixed by the square rule through seven loops. 
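On factor multisets, the image map (\[cross\_ratio\_relation\_for\_pentagon\_rule\]), including the possibility of ‘implicit’ numerators, is simple bookkeeping; a minimal sketch with toy labels (not the actual seven loop graphs above):

```python
from collections import Counter

def pair(a, b):
    """The factor x_ab^2, as an unordered pair of points."""
    return frozenset((a, b))

def cross_ratio_image(num, den, a, b, c, d):
    """Multiply a graph, given as (numerator, denominator) factor multisets,
    by x_ad^2 x_bc^2 / (x_ab^2 x_cd^2), cancelling against existing factors."""
    num, den = Counter(num), Counter(den)
    for p in (pair(a, d), pair(b, c)):      # new numerator factors
        if den[p] > 0:
            den[p] -= 1                     # cancel an existing propagator
        else:
            num[p] += 1                     # otherwise: an explicit numerator
    for p in (pair(a, b), pair(c, d)):      # new denominator factors
        if num[p] > 0:
            num[p] -= 1
        else:
            den[p] += 1
    return +num, +den

# Toy example: a 'graph' whose denominator already contains x_ad^2 and x_bc^2,
# so both new numerator factors cancel, and two new propagators appear:
num1, den1 = cross_ratio_image(Counter(), Counter([pair(1, 4), pair(2, 3)]),
                               a=1, b=2, c=3, d=4)
assert num1 == Counter()
assert den1 == Counter([pair(1, 2), pair(3, 4)])
```

Whether each new factor cancels or remains explicit is exactly the distinction between ordinary and ‘implicit’ numerator points described above.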
### Proof of the Pentagon Rule {#subsubsec:proof_of_pentagon_rule .unnumbered} The pentagon rule (\[topologies\_of\_the\_pentagon\_rule\]) arises from examining the 5-point light-like limit of the correlator and its relation to the five-particle amplitude (just as the square rule arises from the 4-point light-like limit and its relation to the four-particle amplitude explained in ). As described in , in the pentagonal light-like limit the correlator is directly related to the five-particle amplitude as in (\[f\_to\_5pt\_amp\_map2\]). In particular, let us focus on the terms involving one loop amplitudes in (\[f\_to\_5pt\_amp\_map2\]): ${\mathcal F}^{(\ell+1)}$ contains the terms, [$$\hspace{-75pt}\frac{1}{\xi^{(5)}}\left(\mathcal{M}_{\text{even}}^{(1)}\mathcal{M}_{\text{even}}^{(\ell-1)}+\epsilon_{123456}\epsilon_{123457}\widehat{\mathcal{M}}_{\text{odd}}^{(1)}\widehat{\mathcal{M}}_{\text{odd}}^{(\ell-1)}\right)\,.\label{eq:24}\hspace{-40pt}\vspace{-5pt}\vspace{-0.5pt}$$]{} Indeed, any term in the correlator which graphically has a plane embedding with the topology of a 5-cycle whose ‘inside’ contains a single vertex and whose ‘outside’ contains $\ell{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}1$ vertices has to arise from the above terms [@Ambrosio:2013pba]. 
Inserting the one loop expressions and the algebraic identity (valid only in the pentagonal light-like limit), [$$\begin{split}&\hspace{-20pt}\phantom{=\,}\frac{\epsilon_{123456}\,\epsilon_{123457}}{{x_{1\hspace{0.5pt}2}^2}{x_{2\hspace{0.5pt}3}^2}{x_{3\hspace{0.5pt}4}^2}{x_{4\hspace{0.5pt}5}^2}{x_{1\hspace{0.5pt}5}^2}}\\&\hspace{-20pt}=2\,{x_{6\hspace{0.5pt}7}^2}+\left[\frac{{x_{1\hspace{0.5pt}6}^2}{x_{2\hspace{0.5pt}7}^2}{x_{3\hspace{0.5pt}5}^2}+{x_{1\hspace{0.5pt}7}^2}{x_{2\hspace{0.5pt}6}^2}{x_{3\hspace{0.5pt}5}^2}}{{x_{1\hspace{0.5pt}3}^2}{x_{2\hspace{0.5pt}5}^2}}-\frac{{x_{1\hspace{0.5pt}7}^2}{x_{3\hspace{0.5pt}6}^2}+{x_{1\hspace{0.5pt}6}^2}{x_{3\hspace{0.5pt}7}^2}}{{x_{1\hspace{0.5pt}3}^2}}-\frac{{x_{1\hspace{0.5pt}6}^2}{x_{1\hspace{0.5pt}7}^2}{x_{2\hspace{0.5pt}4}^2}{x_{3\hspace{0.5pt}5}^2}}{{x_{1\hspace{0.5pt}3}^2}{x_{1\hspace{0.5pt}4}^2}{x_{2\hspace{0.5pt}5}^2}}+\text{cyclic}\right]\,,\hspace{-24pt}\end{split}\label{eq:19}\vspace{-0pt}\vspace{-0.5pt}$$]{} (\[eq:24\]) then becomes the following contribution to ${\mathcal F}^{(\ell+1)}$: 
[$$\begin{split}&\hspace{-30pt}\frac{1}{{x_{1\hspace{0.5pt}2}^2}{x_{2\hspace{0.5pt}3}^2}{x_{3\hspace{0.5pt}4}^2}{x_{4\hspace{0.5pt}5}^2}{x_{1\hspace{0.5pt}5}^2}}\Bigg(2\,\frac{{x_{6\hspace{0.5pt}7}^2}}{{x_{1\hspace{0.5pt}6}^2}{x_{2\hspace{0.5pt}6}^2}{x_{3\hspace{0.5pt}6}^2}{x_{4\hspace{0.5pt}6}^2}{x_{5\hspace{0.5pt}6}^2}}{\widehat}{\mathcal{M}}^{(\ell-1)}_{\text{odd}}+\Bigg\{\frac{1}{{x_{1\hspace{0.5pt}6}^2}{x_{2\hspace{0.5pt}6}^2}{x_{3\hspace{0.5pt}6}^2}{x_{4\hspace{0.5pt}6}^2}}\Bigg[\frac{1}{{x_{1\hspace{0.5pt}4}^2}{x_{2\hspace{0.5pt}5}^2}{x_{3\hspace{0.5pt}5}^2}}\mathcal{M}_{\text{even}}^{(\ell-1)}\hspace{-40pt}\\&\hspace{-30pt}+\Bigg(\frac{{x_{1\hspace{0.5pt}7}^2}{x_{2\hspace{0.5pt}4}^2}}{{x_{1\hspace{0.5pt}4}^2}{x_{2\hspace{0.5pt}5}^2}}+\frac{{x_{4\hspace{0.5pt}7}^2}{x_{1\hspace{0.5pt}3}^2}}{{x_{1\hspace{0.5pt}4}^2}{x_{3\hspace{0.5pt}5}^2}}-\frac{{x_{3\hspace{0.5pt}7}^2}}{{x_{3\hspace{0.5pt}5}^2}}-\frac{{x_{2\hspace{0.5pt}7}^2}}{{x_{2\hspace{0.5pt}5}^2}}-\frac{{x_{5\hspace{0.5pt}7}^2}{x_{1\hspace{0.5pt}3}^2}{x_{2\hspace{0.5pt}4}^2}}{{x_{2\hspace{0.5pt}5}^2}{x_{3\hspace{0.5pt}5}^2}{x_{1\hspace{0.5pt}4}^2}}\Bigg){\widehat}{\mathcal{M}}_{\text{odd}}^{(\ell-1)}\Bigg]+\text{cyclic}\Bigg\}\Bigg)\,.\hspace{-40pt}\end{split}\label{eq:12}\vspace{-5pt}\vspace{-0.5pt}$$]{} We wish to now consider all terms in ${\mathcal F}^{(\ell+1)}$ containing the structure occurring in the pentagon rule, namely a ‘pentawheel’ with a spoke missing, [$${\raisebox{-34.75pt}{\ \includegraphics[scale=1]{pentagon_proof_fig_1}}}\label{eq:23}\vspace{-5pt}\vspace{-0.5pt}$$]{} with numerators (if present at all within this subgraph) allowed [*only*]{} between the vertex with the missing spoke and the marked point (as shown). A term in ${\mathcal F}^{(\ell+1)}$ containing this subgraph inevitably contributes to the pentagonal light-like limit and by its topology it has to arise from the ${\mathcal M}^{(1)} \times {\mathcal M}^{(\ell-1)}$ terms, [*i.e.*]{} somewhere in . 
We now proceed to investigate all seven terms in  to show that this structure of interest can only arise from the fifth and sixth terms. We start with the second term of [$$\frac{1}{{x_{1\hspace{0.5pt}2}^2}{x_{2\hspace{0.5pt}3}^2}{x_{3\hspace{0.5pt}4}^2}{x_{4\hspace{0.5pt}5}^2}{x_{5\hspace{0.5pt}1}^2}}\,\frac{1}{{x_{1\hspace{0.5pt}6}^2}{x_{2\hspace{0.5pt}6}^2}{x_{3\hspace{0.5pt}6}^2}{x_{4\hspace{0.5pt}6}^2}}\,\frac{1}{{x_{1\hspace{0.5pt}4}^2}{x_{2\hspace{0.5pt}5}^2}{x_{3\hspace{0.5pt}5}^2}}{\mathcal M}_{\text{even}}^{(\ell-1)}\,,\label{eq:3}\vspace{-5pt}\vspace{-0.5pt}$$]{} arising from the even part of the amplitude, which is the most subtle one. Graphically, this term can be displayed as: [$${\raisebox{-34.75pt}{\ \includegraphics[scale=1]{pentagon_proof_fig_2}}}\label{pentagon_proof_figure_2}\vspace{-5pt}\vspace{-0.5pt}$$]{} In order for this to yield the structure in a planar , the amplitude ${\mathcal M}_{\text{even}}^{(\ell-1)}$ must either contain a numerator ${x_{1\hspace{0.5pt}4}^2}$ (to cancel the corresponding propagator above) or alternatively it must contain the numerator terms ${x_{2\hspace{0.5pt}5}^2}$ and ${x_{3\hspace{0.5pt}5}^2}$ in order to allow the edge ${x_{1\hspace{0.5pt}4}^2}$ to be drawn outside the pentagon without any edge crossing. Analyzing these different possibilities one concludes that this requires all three numerators ${x_{1\hspace{0.5pt}4}^2}{x_{2\hspace{0.5pt}5}^2}{x_{3\hspace{0.5pt}5}^2}$ to be present in a term of ${\mathcal M}_{\text{even}}^{(\ell-1)}$. 
Now using the amplitude/correlator duality again in a different way, note that such a contribution to ${\mathcal M}_{\text{even}}^{(\ell-1)}$ must also contribute to the lower-loop correlator ${\mathcal F}^{(\ell)}$ through (\[f\_to\_5pt\_amp\_map2\]) [$$\lim_{\substack{\text{5-point}\\\text{light-like}}}\!\!\left(\xi^{(5)}\mathcal{F}^{(\ell)}\right)={\mathcal M}_\text{even}^{(\ell-1)} + \ldots\,.\label{eq:28}\vspace{-5pt}\vspace{-0.5pt}$$]{} So a term in ${\mathcal M}_{\text{even}}^{(\ell-1)}$ with numerators ${x_{1\hspace{0.5pt}4}^2}{x_{2\hspace{0.5pt}5}^2}{x_{3\hspace{0.5pt}5}^2}$ contributes a term with the topology, [$${\raisebox{-34.75pt}{\ \includegraphics[scale=1]{pentagon_proof_fig_3}}}\label{pentagon_proof_figure_3}\vspace{-5pt}\vspace{-0.5pt}$$]{} (Here the numerators ${x_{1\hspace{0.5pt}4}^2}{x_{2\hspace{0.5pt}5}^2}{x_{3\hspace{0.5pt}5}^2}$ cancel three of the denominators of $1/\xi^{(5)}$, but they leave the pentagon and two further edges attached to the pentagon as shown.) We see that this term can never be planar (this term in ${\mathcal M}_{\text{even}}^{(\ell-1)}$ has to be attached to all five external legs by conformal invariance, so one cannot pull one of the offending edges outside the pentagon) [*unless*]{} there is a further numerator term, either ${x_{2\hspace{0.5pt}4}^2}$ or ${x_{1\hspace{0.5pt}3}^2}$, to cancel one of these edges. But in this case, inserting this back into , we obtain the required structure  but with this further numerator—which is of the type explicitly disallowed by our rule. Having ruled out the second term, we consider the other terms of . The first term can clearly never give a pentawheel with a spoke missing.
The contribution of the third term of  has the diagrammatic form: [$${\raisebox{-34.75pt}{\ \includegraphics[scale=1]{pentagon_proof_fig_4}}}\label{pentagon_proof_figure_4}\vspace{-2.5pt}\vspace{-0.5pt}$$]{} and so could potentially give a contribution of the form of a pentawheel with a spoke missing if ${\widehat}{\mathcal M}_\text{odd}^{(\ell-1)}$ has a numerator ${x_{1\hspace{0.5pt}4}^2}$ to cancel the corresponding edge. However, in any case, such a term would also contain the numerator ${x_{2\hspace{0.5pt}4}^2}$, which we disallow in . The fourth and last terms are similarly ruled out as a source for the structure in question. So we conclude that the fifth and sixth terms are the only ones which can yield the structure we focus on in the pentagon rule. Given this important fact, we are now in a position to understand the origin of the pentagon rule. Every occurrence of the structure  arises from the fifth or sixth terms in , namely from ${x_{3\hspace{0.5pt}7}^2}/{x_{3\hspace{0.5pt}5}^2}\times {\widehat}{\mathcal M}_{\text{odd}}^{(\ell-1)}$ (where $x_3$ is the marked point of the pentagon). But we also know [@Ambrosio:2013pba] that ${\widehat}{\mathcal M}_{\text{odd}}^{(\ell-1)}$ is in direct one-to-one correspondence with pentawheel structures of $f^{(\ell+1)}$ (the first term in ). Thus there is a direct link between the pentawheel structures and the structure  and this link appears with a sign due to the sign difference between the first and fifth/sixth terms in . To get from the first term of  to the fifth term, one multiplies by ${x_{3\hspace{0.5pt}7}^2}{x_{5\hspace{0.5pt}6}^2}/({x_{3\hspace{0.5pt}5}^2}{x_{6\hspace{0.5pt}7}^2})$—that is, deleting the two edges, ${x_{3\hspace{0.5pt}7}^2}$ and ${x_{5\hspace{0.5pt}6}^2}$, and deleting the two numerator lines ${x_{6\hspace{0.5pt}7}^2},{x_{3\hspace{0.5pt}5}^2}$. This is precisely the operation involved in the five-point rule described in more detail above (see (\[cross\_ratio\_relation\_for\_pentagon\_rule\])).
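The bookkeeping behind this "delete two edges, delete two numerator lines" operation can be made concrete by tracking the explicit $x_{ij}^2$ factors. The following sketch (in Python rather than the [Mathematica]{} used for our data files; the exponent-dictionary representation is purely illustrative and not taken from the paper's code) multiplies the explicit factors of the first term of  by $x_{37}^2 x_{56}^2/(x_{35}^2 x_{67}^2)$ and recovers exactly the factors appearing in the fifth term:

```python
# Represent a term's explicit x_ij^2 content as {edge: exponent}:
# positive exponents are numerator factors, negative are propagators.
# (Illustrative representation only; not the paper's actual code.)

def edge(i, j):
    return (min(i, j), max(i, j))

def multiply(term, numerators, denominators):
    """Multiply a term by prod(x_e^2, e in numerators) / prod(x_e^2, e in denominators)."""
    out = dict(term)
    for e in numerators:
        out[edge(*e)] = out.get(edge(*e), 0) + 1
    for e in denominators:
        out[edge(*e)] = out.get(edge(*e), 0) - 1
    return {e: c for e, c in out.items() if c != 0}  # drop cancelled factors

# Explicit factors of the first term: x_67^2 / (x_16^2 x_26^2 x_36^2 x_46^2 x_56^2).
first = {edge(6, 7): 1, **{edge(i, 6): -1 for i in range(1, 6)}}

# Multiply by x_37^2 x_56^2 / (x_35^2 x_67^2), as in the pentagon rule:
result = multiply(first, numerators=[(3, 7), (5, 6)], denominators=[(3, 5), (6, 7)])

# Factors of the fifth term: x_37^2 / (x_35^2 x_16^2 x_26^2 x_36^2 x_46^2).
fifth = {edge(3, 7): 1, edge(3, 5): -1, **{edge(i, 6): -1 for i in range(1, 5)}}
assert result == fifth
```

Up to the overall factor of 2 and the relative sign noted above, this is the map between the pentawheel term and the fifth term of .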
Bootstrapping Amplitudes/Correlators to Many Loops {#sec:results} ================================================== In this section, we survey the relative strengths of the three rules described in the previous section, and then describe some of the more noteworthy aspects of the forms found for the correlator through ten loops. Before we begin, however, it is worth emphasizing that the three rules we have used are only three among many which follow from the way in which lower loop (and higher point) amplitudes are encoded in the correlator $\mathcal{F}^{(\ell)}$ via equations (\[f\_to\_4pt\_amp\_map\_with\_series\_expansion\]) and (\[f\_to\_npt\_amp\_map\]). The triangle, square, and pentagon rules merely represent those we implemented first, and which proved sufficient through ten loops. And finally, it is worth mentioning that we expect the soft-collinear bootstrap criterion to continue to prove sufficient to fix all coefficients at all loops, even if using this tool has proven computationally out of reach beyond eight loops. (If it were to be translated into a purely graphical rule, it may prove extraordinarily powerful.)  \ #### The Square Rule:  \ As described in the previous section, the square rule is undoubtedly the most powerful of the three, and results in the simplest possible relations between coefficients—namely, that certain $\ell$ loop coefficients are identical to particular $(\ell{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}1)$ loop coefficients. As illustrated in , the square rule is strong enough to fix $\sim\!95$% of the $22,\!097,\!035$ $f$-graph coefficients at eleven loops. The role of the triangle and pentagon rules, therefore, can be seen as tools to fix the coefficients not already fixed by the square rule.  \ #### The Triangle Rule:  \ Similar to the square rule, the triangle rule is strong enough to fix all coefficients through three loops, but leaves one free coefficient at four loops.
Conveniently, the relations required by the triangle rule are not the same as those of the square rule, and so the combination of the two fixes everything. In fact, the square and triangle rules together immediately fix all correlation functions through seven loops, and all but 22 of the $2,\!709$ eight loop coefficients. (This fact was known when the eight loop correlator was found in , which is why we alluded to these new rules in the conclusions of that Letter.) Interestingly, applying the triangle and square rules to nine loops fixes all but 3 of the $43,\!017$ [*new coefficients*]{}, including 20 of those not already fixed at eight loops. (To be clear, this means that, without any further input, there would be a total of $3{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}2$ unfixed coefficients at nine loops.) Motivated by this, we implemented the triangle and square rules at ten loops, and found that these rules sufficed to determine the eight and nine loop correlators uniquely. At ten loops, we found the complete system of equations following from the two rules to fix all but $1,\!570$ of the coefficients of the $900,\!145$ $f$-graphs. These facts are summarized in . Notice that the number of unknowns quoted in that table for $\ell$ loops is the number of undetermined coefficients, given the lower loop correlator. If the coefficients at lower loops were not assumed, then there would be $5$ unknowns at nine loops rather than 3; but the number quoted for ten loops would be the same—because all lower loop coefficients are fixed by the ten loop relations.
$${\text{\makebox[0pt][c]{$\hspace{-150pt}\displaystyle\begin{array}{|r|r|r|r|r|r|r|r|r|r|}\cline{2-10}\multicolumn{1}{r}{\ell\!=}&\multicolumn{1}{|c|}{2}&\multicolumn{1}{c|}{3}&\multicolumn{1}{c|}{4}&\multicolumn{1}{c|}{5}&\multicolumn{1}{c|}{6}&\multicolumn{1}{c|}{7}&\multicolumn{1}{c|}{8}&\multicolumn{1}{c|}{9}&\multicolumn{1}{c|}{10}\\\hline\text{number of $f$-graph coefficients:}&\,1\,&\,1\,&\,3\,&\,7\,&\,36\,&\,220\,&\,2,\!709\,&\,43,\!017\,&\,900,\!145\,\\\hline\text{unknowns remaining after square rule:}&\,\,0\,&\,\,0\,&\,\,1\,&1\,&5\,&22\,&293\,&2,\!900\,&52,\!475\,\\\hline\text{unknowns after square \& triangle rules:}&\,\,0\,&\,\,0\,&\,\,0\,&\,\,0\,&\,\,0\,&\,\,0\,&22\,&3\,&1,\!570\,\\\hline\end{array}\hspace{-150pt}$}}}$$  \ #### The Pentagon Rule:  \ The pentagon rule is not quite as strong as the others, but the relations implied are much simpler to implement. In fact, there are no instances of $f$-graphs for which the pentagon rule applies until four loops, when it implies a single linear relation among the three coefficients. This relation, when combined with the square rule fixes the four loop correlator, and the same is true for five loops. However at six loops, the two rules combined leave 1 (of the $36$) $f$-graph coefficients undetermined. The reason for this is simple: there exists an $f$-graph at six loops which neither contributes to $\mathcal{A}^{(5)}_4\mathcal{A}^{(1)}_4$ nor to $\mathcal{M}_5^{(4)}\overline{\mathcal{M}}_5^{(1)}$. This is easily seen by inspection of the $f$-graph in question: [$${\raisebox{-54.75pt}{\ \includegraphics[scale=1]{six_loop_prism_graph}}}\label{six_loop_prism_graph}\vspace{-5pt}\vspace{-0.5pt}$$]{} We will have more to say about this graph and its coefficient below. There is one graph at seven loops related to (\[six\_loop\_prism\_graph\]) by the square rule that is also left undetermined, but all other coefficients (219 of the 220) are fixed by the combination of the square and pentagon rules. 
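The counts in the table above lend themselves to a quick mechanical check. A minimal sketch (with the table's numbers hard-coded; nothing here is recomputed from first principles) of the fraction of coefficients the square rule alone fixes at each loop order:

```python
# Sanity check of the table above: fraction of f-graph coefficients
# fixed by the square rule alone at each loop order.
# (All numbers are hard-coded from the table; none are derived here.)

f_graphs = {2: 1, 3: 1, 4: 3, 5: 7, 6: 36, 7: 220, 8: 2709, 9: 43017, 10: 900145}
after_square = {2: 0, 3: 0, 4: 1, 5: 1, 6: 5, 7: 22, 8: 293, 9: 2900, 10: 52475}

for ell in sorted(f_graphs):
    total, left = f_graphs[ell], after_square[ell]
    fixed = 100 * (total - left) / total
    print(f"ell = {ell:2d}: {left:6d} of {total:7d} unknowns remain ({fixed:.1f}% fixed)")
```

At ten loops the square rule alone fixes roughly 94% of all coefficients, consistent with the $\sim\!95$% quoted above for eleven loops.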
The number of coefficients fixed by the square and pentagon rules through nine loops is summarized in . As before, only the number of [*new*]{} coefficients is quoted—assuming that the lower loop coefficients are known. $${\text{\makebox[0pt][c]{$\hspace{-150pt}\displaystyle\begin{array}{|r|r|r|r|r|r|r|r|r|}\cline{2-9}\multicolumn{1}{r}{\ell\!=}&\multicolumn{1}{|c|}{2}&\multicolumn{1}{c|}{3}&\multicolumn{1}{c|}{4}&\multicolumn{1}{c|}{5}&\multicolumn{1}{c|}{6}&\multicolumn{1}{c|}{7}&\multicolumn{1}{c|}{8}&\multicolumn{1}{c|}{9}\\\hline\text{number of $f$-graph coefficients:}&\,1\,&\,1\,&\,3\,&\,7\,&\,36\,&\,220\,&\,2,\!709\,&\,43,\!017\,\\\hline\text{unknowns remaining after square rule:}&\,\,0\,&\,\,0\,&\,\,1\,&\,\,1\,&\,\,5\,&22\,&293\,&2,\!900\,\\\hline\text{unknowns after square \& pentagon rules:}&\,\,0\,&\,\,0\,&\,\,0\,&\,\,0\,&\,\,\,1\,&\,\,\,0\,&\,\,\,17\,&\,\,\,64\,\\\hline\end{array}\hspace{-150pt}$}}}$$ Aspects of Correlators and Amplitudes at High Loop-Orders {#subsec:statistical_tour} --------------------------------------------------------- While no two of the three rules alone prove sufficient to determine the ten loop correlation function, the three in combination fix all coefficients uniquely—without any outside information about lower loops. As such, the reproduction of the eight (and lower) loop functions found in can be viewed as an independent check on the code being employed. Moreover, because the three rules each impose mutually overlapping (and individually over-constrained) constraints on the coefficients, the existence of any solution is a source of considerable confidence in our results. One striking aspect of the correlation function exposed only at high loop-order is that the (increasingly vast) majority of coefficients are zero: while all possible $f$-graphs contribute through five loops, only 26 of the 36 graphs at six loops do; by ten loops, $85\%$ of the coefficients vanish.
(At eleven loops, [*at least*]{} $19,\!388,\!448$ coefficients vanish ($88\%$) due to the square rule alone.) This pattern is illustrated in , where we count all contributions—both for $f$-graphs and for planar DCI integrands. The two principal novelties discovered for the eight loop correlator [@Bourjaily:2015bpz] also persist to higher loops. Specifically, we refer to the fact that there are contributions to the amplitude that are finite (upon integration) even on-shell, and contributions to the correlator that are (individually) divergent even off-shell. The meaning of the finite integrals remains unclear (although they would have prevented the use of the soft-collinear bootstrap without grouping terms according to $f$-graphs); but the existence of divergent contributions imposes an important constraint on the result: because the correlator is strictly finite off-shell, all such divergences must cancel in combination. (Moreover, these contributions impose an interesting technical obstruction to evaluation, as they cannot be easily regulated in four dimensions—such as by going to the Higgs branch of the theory [@Alday:2009zm].)
$$\hspace{1.5pt}\begin{array}{|@{$\,$}c@{$\,$}|@{$\,$}r@{$\,$}|@{$\,$}r@{$\,$}|@{$\,\,$}r@{$\,\,$}|@{$\;\;\;\;\;\;\;$}|@{$\,$}r@{$\,$}|@{$\,$}r@{$\,$}|@{$\,\,$}r@{$\,\,$}|}\multicolumn{1}{@{$\,$}c@{$\,$}}{\begin{array}{@{}l@{}}\\[-4pt]\text{$\ell\,$}\end{array}}&\multicolumn{1}{@{$\,$}c@{$\,$}}{\!\begin{array}{@{}c@{}}\text{number of}\\[-4pt]\text{$f$-graphs}\end{array}}\,&\multicolumn{1}{@{$\,$}c@{$\,$}}{\begin{array}{@{}c@{}}\text{no.\ of $f$-graph}\\[-4pt]\text{contributions}\end{array}}\,\,&\multicolumn{1}{@{$\,$}c@{$\,$}}{\begin{array}{@{}c@{$\,\,\,\,\;\;\;\;\;$}}\text{}\\[-4pt]\text{\!\!(\%)}\end{array}}&\multicolumn{1}{@{$\,$}c@{$\,$}}{\begin{array}{@{}c@{}}\text{number of}\\[-4pt]\text{DCI integrands}\end{array}}\,&\multicolumn{1}{@{$\,$}c@{$\,$}}{\begin{array}{@{}c@{}}\text{no.\ of integrand}\\[-4pt]\text{contributions}\end{array}}\,&\multicolumn{1}{@{$\,$}c@{$\,$}}{\begin{array}{@{}c@{}}\text{}\\[-4pt]\text{(\%)}\end{array}}\\[-0pt]\hline1&1&1&100&1&1&100\\\hline2&1&1&100&1&1&100\\\hline3&1&1&100&2&2&100\\\hline4&3&3&100&8&8&100\\\hline5&7&7&100&34&34&100\\\hline6&36&26&72&284&229&81\\\hline7&220&127&58&3,\!239&1,\!873&58\\\hline8&2,\!709&1,\!060&39&52,\!033&19,\!949&38\\\hline9&43,\!017&10,\!525&24&1,\!025,\!970&247,\!856&24\\\hline10&900,\!145&136,\!433&15&24,\!081,\!425&3,\!586,\!145&15\\\hline\end{array}\vspace{-16pt}$$ At eight loops there are exactly 4 $f$-graphs which lead to finite DCI integrands, and all 4 have non-vanishing coefficients. At nine loops there are 45, of which 33 contribute; at ten loops there are $1,\!287$, of which $570$ contribute. For the individually divergent contributions, their number and complexity grow considerably beyond eight loops. The first appearance of such divergences happened at eight loops—with terms that had a so-called ‘$k\!=\!5$’ divergence (see [@Bourjaily:2015bpz] for details). Of the 662 $f$-graphs with a $k\!=\!5$ divergence at eight loops, only 60 contributed. 
At nine loops there are $15,\!781$, of which $961$ contribute; at ten loops, there are $424,\!348$, of which $21,\!322$ contribute. Notice that terms with these divergences grow proportionally in number—and even start to have the feel of being ubiquitous asymptotically. We have not enumerated all the divergent contributions for $k\!>\!5$, but essentially all categories of such divergences exist and contribute to the correlator. (For example, there are $971$ contributions at ten loops with (the simplest category of) a $k\!=\!7$ divergence.) While the coefficients of $f$-graphs are encouragingly simple at low loop-orders, the variety of possible coefficients seems to grow considerably at higher orders. The distribution of these coefficients is given in . While all coefficients through five loops were $\pm\!1$, those at higher loops include many novelties. (Of course, the increasing dominance of zeros among the coefficients is still rather encouraging.) Interestingly, it is clear from this table that new coefficients (up to signs) only appear at even loop-orders. The first term with coefficient $\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}1$ occurs at four loops, and the first appearance of $\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}2$ at six loops. At eight loops, we saw the first instances of $\pm\frac{1}{2}$, $\pm\frac{3}{2}$, and also $\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}5$. And there are many novel coefficients that first appear at ten loops.
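The percentage columns of the contributions table above, and the contributing fractions for the finite and divergent classes just described, are simple ratios; a quick sketch recomputing them (all counts hard-coded from the text and table; nothing is derived here):

```python
# Recompute the percentage columns of the contributions table, and the
# contributing fractions for finite and k = 5 divergent f-graphs.
# (All counts are hard-coded from the text; none are computed ab initio.)

contributions = {  # ell: (f-graphs, contributing, DCI integrands, contributing)
    6:  (36,     26,     284,      229),
    7:  (220,    127,    3239,     1873),
    8:  (2709,   1060,   52033,    19949),
    9:  (43017,  10525,  1025970,  247856),
    10: (900145, 136433, 24081425, 3586145),
}
for ell, (fg, fg_c, dci, dci_c) in contributions.items():
    print(f"ell = {ell:2d}: f-graphs {round(100 * fg_c / fg)}%, "
          f"DCI integrands {round(100 * dci_c / dci)}%")

finite = {8: (4, 4), 9: (45, 33), 10: (1287, 570)}          # (total, contributing)
k5_divergent = {8: (662, 60), 9: (15781, 961), 10: (424348, 21322)}
for label, data in (("finite", finite), ("k=5 divergent", k5_divergent)):
    for ell, (total, contributing) in data.items():
        print(f"{label:>13s}, ell = {ell}: {contributing}/{total} contribute "
              f"({100 * contributing / total:.1f}%)")
```

The rounded $f$-graph and integrand percentages agree at every loop order, as in the table; and the $k\!=\!5$ divergent class contributes at a decreasing rate (9.1%, 6.1%, 5.0%) even as its absolute numbers balloon.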
$$\hspace{-1.2pt}\begin{array}{|c|r|r|r|r|r|r|r|r|r|r|r|r|r|r|}\multicolumn{1}{c}{}&\multicolumn{14}{c}{\text{number of $f$-graphs at $\ell$ loops having coefficient:}}\\\cline{2-15}\multicolumn{1}{c}{\ell}&\multicolumn{1}{|c|}{\pm1\phantom{}}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{\pm2}&\multicolumn{1}{c|}{\pm1/2}&\multicolumn{1}{c|}{\pm3/2}&\multicolumn{1}{c|}{\pm5}&\multicolumn{1}{c|}{\pm1/4}&\multicolumn{1}{c|}{\pm3/4}&\multicolumn{1}{c|}{\pm5/4}&\multicolumn{1}{c|}{+7/4}&\multicolumn{1}{c|}{\pm9/4}&\multicolumn{1}{c|}{\pm5/2}&\multicolumn{1}{c|}{+4}&\multicolumn{1}{c|}{+14}\\\hline1&1&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}\\\hline2&1&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}\\\hline3&1&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}\\\hline4&3&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}\\\hline5&7&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}\\\hline6&25&10&1&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}\\\hline7&126&93&1&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}\\\hline8&906&1,
\!649&9&141&3&1&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}\\\hline9&7,\!919&32,\!492&54&2,\!529&22&1&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}&{\color{dim}0}\\\hline10&78,\!949&763,\!712&490&50,\!633&329&9&5,\!431&559&18&5&4&4&1&1\\\hline\end{array}$$ While most of the ‘new’ coefficients occur with sufficient multiplicity to require further consideration (more than warranted here), there is at least one class of contributions which seems predictably novel. Consider the following six, eight, and ten loop $f$-graphs: [$${\raisebox{-54.75pt}{\ \includegraphics[scale=1]{six_loop_prism_graph}}}\quad{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{eight_loop_prism_graph}}}\quad{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{ten_loop_prism_graph}}}\label{prism_graph_figures}\vspace{-10pt}\vspace{-0.5pt}$$]{} These graphs all have the topology of a $(\ell/2{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}2)$-gon anti-prism, and all represent contributions with unique (and always exceptional) coefficients. In particular, these graphs contribute to the correlator with coefficients $\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}2$, $\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}5$ and $\!\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}14$, respectively. (Notice also that the four loop $f$-graph $f_3^{(4)}$ shown in (\[one\_through\_four\_loop\_f\_graphs\]) is an anti-prism of this type—and is the first term having contribution $\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}1$—as is the only two loop $f$-graph (the octahedron), which also follows this pattern.) 
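Two quick checks on the coefficient data above (a sketch with all values hard-coded from the table and text): first, that each row of the distribution table sums to the total number of $f$-graphs at that loop order; second—anticipating the conjecture discussed below—that the anti-prism coefficients just quoted ($+1$, $-1$, $+2$, $-5$, $+14$ at $\ell=2,4,6,8,10$) match signed Catalan numbers, $(-1)^{\ell/2+1}C_{\ell/2-1}$, where the sign convention is our inference from the quoted values:

```python
from math import comb

# Row sums of the coefficient-distribution table (counts hard-coded from
# the table above) must equal the total number of f-graphs per loop order.
totals = {6: 36, 7: 220, 8: 2709, 9: 43017, 10: 900145}
rows = {
    6:  [25, 10, 1],
    7:  [126, 93, 1],
    8:  [906, 1649, 9, 141, 3, 1],
    9:  [7919, 32492, 54, 2529, 22, 1],
    10: [78949, 763712, 490, 50633, 329, 9, 5431, 559, 18, 5, 4, 4, 1, 1],
}
for ell, counts in rows.items():
    assert sum(counts) == totals[ell]

# Anti-prism coefficients as signed Catalan numbers (sign inferred from the text):
def catalan(n):
    return comb(2 * n, n) // (n + 1)  # C_n = binom(2n, n)/(n + 1)

def antiprism_coefficient(ell):
    n = ell // 2
    return (-1) ** (n + 1) * catalan(n - 1)

assert [antiprism_coefficient(ell) for ell in (2, 4, 6, 8, 10)] == [1, -1, 2, -5, 14]
print("twelve loop anti-prism coefficient, if the pattern holds:",
      antiprism_coefficient(12))  # -> -42
```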
Each of the $f$-graphs in (\[prism\_graph\_figures\]) contributes a unique DCI integrand to the $\ell$ loop amplitude, [$${\raisebox{-54.75pt}{\ \includegraphics[scale=1]{six_loop_coeff_2_dci_int}}}\qquad{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{eight_loop_coeff_5_dci_int}}}\qquad{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{ten_loop_coeff_14_dci_int}}}\label{prism_dci_integrands}\vspace{-12pt}\vspace{-0.5pt}$$]{} with each drawn in momentum space as a Feynman graph for the sake of intuition. From these, a clear pattern emerges—leading us to make a rather speculative guess for the coefficients of these terms. It seems plausible that the coefficients of anti-prism graphs are given by the Catalan numbers—which would predict that the coefficient of the octagonal anti-prism $f$-graph at twelve loops, for example, will be $\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,-\,$}}}42$. Testing this conjecture—let alone proving it—however, must await further work. The only other term that contributes at ten loops with a unique coefficient is the following, which has coefficient $\!{\raisebox{0.75pt}{\scalebox{0.75}{$\,+\,$}}}4$: [$${\raisebox{-54.75pt}{\ \includegraphics[scale=1]{ten_loop_coeff_4_graph}}}\;\;{\raisebox{-2.25pt}{\scalebox{1.75}{$\supset$}}}{\raisebox{-54.75pt}{\ \includegraphics[scale=1]{ten_loop_coeff_4_dci_int}}}\label{ten_loop_coefficient_4_graph},\ldots\vspace{-10pt}\vspace{-0.5pt}$$]{} We hope that the explicit form of the correlation functions provided at <http://goo.gl/JH0yEc> (see ) will supply sufficient data for other researchers to find new patterns within the structure of coefficients. Conclusions and Future Directions {#sec:conclusions} ================================= In this work, we have described a small set of simple, graphical rules which prove to be extremely efficient in fixing the possible contributions to the $\ell$ loop four-point correlation function in planar maximally supersymmetric $(\mathcal{N}\!=\!4)$ Yang-Mills theory (SYM).
And we have described the form that results when these rules are used to fix the correlation function through ten loops. While clearly this is merely the simplest non-trivial observable in (arguably) the simplest four-dimensional quantum field theory, it exemplifies many of the features (and possible tools) we expect will be applicable to more general quantum field theories. And even within the limited scope of planar SYM, this single function contains important information about higher-point amplitudes. It is important to reiterate that the rules we have described are merely necessary conditions—and not obviously sufficient to all orders. But these three rules are merely three among many that follow from the consistency of the amplitude/correlator duality. Even without extension beyond ten loops, it would be worthwhile (and very interesting) to explore the strengths of the various natural generalizations of the rules we have described. Another important open direction would be to explore the systematic extraction of higher-point (lower loop) amplitudes from the four-point correlator. This has proven exceptionally direct and straightforward for five-point amplitudes, but further work should be done to better understand the systematics (and potential difficulties) of this procedure for higher multiplicity. (Even six-particle amplitude extraction remains largely unexplored.) Finally, it is natural to wonder how far this programme can be extended beyond ten loops. Although the use of graphical rules essentially eliminates the challenges of setting up the linear algebra problem to be solved, solving the system of equations that result (with millions of unknowns) rapidly becomes rather non-trivial.
However, such problems of linear algebra (involving (very) large systems of equations) arise in many areas of physics and computer science, and there is reason to expect that they may be surmounted through the use of programmes such as that described in (an impressive implementation of Laporta’s algorithm). At present, it is unclear where the next computational bottleneck will be, but it is worth pushing these tools as far as they can go—certainly to eleven loops, and possibly even twelve. Acknowledgements {#acknowledgements .unnumbered} ================ The authors gratefully acknowledge helpful discussions with Zvi Bern, Simon Caron-Huot, JJ Carrasco, Dmitry Chicherin, Burkhard Eden, Gregory Korchemsky, Emery Sokatchev, and Marcus Spradlin. This work was supported in part by the Harvard Society of Fellows, a grant from the Harvard Milton Fund, by the Danish National Research Foundation (DNRF91), and by a MOBILEX research grant from the Danish Council for Independent Research (JLB); by an STFC studentship (VVT); and by an STFC Consolidated Grant ST/L000407/1 and the Marie Curie network GATIS (gatis.desy.eu) of the European Union’s Seventh Framework Programme FP7/2007-2013 under REA Grant Agreement No. 317089 (PH). PH would also like to acknowledge the hospitality of Laboratoire d’Annecy-le-Vieux de Physique Théorique, UMR 5108, where this work was completed. Obtaining and Using the Explicit Results in [Mathematica]{} {#appendix:mathematica_and_explicit_results} =========================================================== Our full results, including all contributions to the amplitude and correlator $\mathcal{F}^{(\ell)}$ through ten loops, have been made available at the site <http://goo.gl/JH0yEc>. These can be obtained by downloading the compressed file multiloop\_data.zip, or by downloading each data file individually (which are encoded somewhat esoterically).
These files include a [Mathematica]{} package, consolidated\_multiloop\_data.m, and a notebook multiloop\_demo.nb. The demonstration notebook illustrates the principal data defined in the package, and examples of how these functions are represented. Also included in the package are several general-purpose functions that may be useful to the reader—for example, functions that compute symmetry factors and check if two functions are isomorphic (as graphs). Principal among the data included in this package are the list of all $f$-graphs at $\ell$ loops with non-vanishing coefficients for $\ell\!=\!1,\ldots,10$, and the corresponding coefficients. Also included is a list of all $\ell$ loop DCI integrands obtained from each $f$-graph in the light-like limit. Importantly, we have only included terms with non-vanishing coefficients—in order to reduce the file size of the data. The complete list of $f$-graphs at each loop order can be obtained by contacting the authors. [10]{} N. Arkani-Hamed, F. Cachazo, and J. Kaplan, “[What is the Simplest Quantum Field Theory?]{},” [[*JHEP*]{} [**1009**]{} (2010) 016](http://dx.doi.org/10.1007/JHEP09(2010)016), [[ arXiv:0808.1446 \[hep-th\]]{}](http://arxiv.org/abs/0808.1446). N. Arkani-Hamed, F. Cachazo, C. Cheung, and J. Kaplan, “[A Duality For The $S$-Matrix]{},” [[*JHEP*]{} [**1003**]{} (2010) 020](http://dx.doi.org/10.1007/JHEP03(2010)020), [[ arXiv:0907.5418 \[hep-th\]]{}](http://arxiv.org/abs/0907.5418). N. Arkani-Hamed, J. Bourjaily, F. Cachazo, and J. Trnka, “[Local Spacetime Physics from the Grassmannian]{},” [[*JHEP*]{} [**1101**]{} (2011) 108](http://dx.doi.org/10.1007/JHEP01(2011)108), [[ arXiv:0912.3249 \[hep-th\]]{}](http://arxiv.org/abs/0912.3249). N. Arkani-Hamed, J. Bourjaily, F. Cachazo, and J.
[^1]: There is an exception to this conclusion when $\ell\!=\!2$—because $f_1^{(1)}$ is not itself planar. [^2]: See for details. There, the double coincidence limit was taken $x_2\!\rightarrow\!x_1$, $x_4\!\rightarrow\!x_3$, but due to conformal invariance this is in fact equivalent to the single coincidence limit we consider.
[^3]: The weaker requirement that the integrand only had a reduced divergence in the limit where two integration variables both approach $x_1\!=\!x_2$ would result in a divergence of at most $\log^2$, etc. [^4]: In this section we are using the same notation for both integrated functions and integrands. [^5]: Note that although not manifest, the loop variable $x_{5}$ also appears completely symmetrically in the above formula. For example, consider terms of the form $F^{(1)}F^{(\ell-1)}$. One such term arises from the second term in (\[logarithm\_expansion\]), giving $1/\ell \times F^{(1)}(x_{5}) F^{(\ell-1)}$. Other such terms arise from the sum with $m\!=\!\ell-1$, giving $(\ell-1)/\ell \times F^{(\ell-1)}(x_{5}) F^{(1)}$. We see that the integration variable appears with weight 1 in $F^{(1)}$ and weight $\ell-1$ in $F^{(\ell-1)}$—[*i.e.*]{} completely symmetrically.
--- author: - 'N.N. Avdeev [^1]' bibliography: - '../common/notmy.bib' - '../common/my.bib' title: ' On diameter bounds for planar integral point sets in semi-general position [^2] ' --- #### Abstract. A point set $M$ in the Euclidean plane is called a planar integral point set if all the distances between the elements of $M$ are integers, and $M$ is not situated on a straight line. A planar integral point set is said to be in semi-general position if it does not contain collinear triples. The existing lower bound for the minimum diameter of planar integral point sets is linear. We prove a new lower bound for the minimum diameter of planar integral point sets in semi-general position that is better than linear. Introduction ============ An *integral point set* in a plane is a point set $M$ such that all the usual (Euclidean) distances between the points of $M$ are integers and $M$ is not situated on a straight line. Every integral point set consists of a finite number of points [@anning1945integral; @erdos1945integral]; thus, we denote the set of all planar integral point sets of $n$ points by $\mathfrak{M}(2,n)$ (using the notation of [@our-vmmsh-2018]) and define the diameter of $M\in\mathfrak{M}(2,n)$ in the following natural way: $$\operatorname{diam} M = \max_{A,B\in M} |AB| ,$$ where $|AB|$ denotes the Euclidean distance. The symbol $\# M$ will be used for the cardinality of $M$, that is, the number of points in $M$. Since every integral point set can obviously be dilated to a set of larger diameter, the minimal possible diameters of sets of given cardinality are of primary interest.
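A quick script (added here for illustration; not part of the paper) that checks the defining property on a small example built from the 3-4-5 right triangle:

```python
import math
from itertools import combinations

def is_integral_point_set(points):
    """True iff all pairwise distances are integers and the points
    are not all situated on one straight line."""
    for (x1, y1), (x2, y2) in combinations(points, 2):
        d = math.hypot(x2 - x1, y2 - y1)
        if abs(d - round(d)) > 1e-9:
            return False
    # collinearity test: some point must lie off the line through the first two
    (x1, y1), (x2, y2) = points[0], points[1]
    return any((x2 - x1) * (y - y1) != (y2 - y1) * (x - x1)
               for x, y in points[2:])

# Pairwise distances 3, 4, 5 -- an integral point set of diameter 5:
print(is_integral_point_set([(0, 0), (3, 0), (0, 4)]))  # True
```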
To be precise, the following function was introduced [@kurz2008bounds; @kurz2008minimum]: $$d(2,n) = \min_{M\in\mathfrak{M}(2,n)} \operatorname{diam} M .$$ It turns out to be very easy to construct a planar integral point set of $n$ points with $n-1$ collinear ones and one point off the line (so-called *facher* sets); the same holds for 2 points off the line (we refer the reader to [@antonov2008maximal], where some such sets are called *crabs*) and even for 4 points off the line [@huff1948diophantine]. For $9\leq n\leq 122$, the minimal possible diameter is achieved by a facher set [@kurz2008bounds]. A set $M\in\mathfrak{M}(2,n)$ is said to be in *semi-general position* if no three points of $M$ are collinear. The set of all planar integral point sets in semi-general position is denoted by $\overline{\mathfrak{M}}(2,n)$. Constructions of integral point sets in semi-general position of arbitrary cardinality are known [@harborth1993upper]; such sets are situated on a circle. There is also a sophisticated construction of a circular integral point set of arbitrary cardinality that gives the possible numbers of odd integral distances between points in the plane [@piepmeyer1996maximum]. A set $M\in\overline{\mathfrak{M}}(2,n)$ is said to be in *general position* if no four points of $M$ are concyclic. The set of all planar integral point sets in general position is denoted by $\dot{\mathfrak{M}}(2,n)$. It remains unknown whether there are integral point sets in general position of arbitrary cardinality; however, some sets $M\in \dot{\mathfrak{M}}(2,7)$ are known [@kreisel2008heptagon; @kurz2013constructing].
The inequality $$d(2,n) \leq \overline{d}(2,n) \leq \dot{d}(2,n) ,$$ where $ \overline{d}(2,n) = \min_{M\in\overline{\mathfrak{M}}(2,n)} \operatorname{diam} M $ and $ \dot{d}(2,n) = \min_{M\in\dot{\mathfrak{M}}(2,n)} \operatorname{diam} M $, is obvious; however, a more interesting relation holds: $$c_1 n \leq d(2,n) \leq \overline{d}(2,n) \leq n^{c_2 \log \log n} .$$ The upper bound is presented in [@harborth1993upper]. The lower bound was first established in [@solymosi2003note]; the largest known value of $c_1$ is $5/11$ for $n\geq 4$ [@my-pps-linear-bound-2019]. There are some bounds on the minimal diameter of planar integral point sets in certain special positions. Assuming that the planar integral point set contains many collinear points, the following result holds.  [@kurz2008minimum Theorem 4] For $\delta > 0$, $\varepsilon > 0$, and $P\in\mathfrak{M}(2,n)$ with at least $n^\delta$ collinear points, there exists an $n_0 (\varepsilon)$ such that for all $n \geq n_0 (\varepsilon)$ we have $$\operatorname{diam} P \geq n^{\frac{\delta}{4 \log 2(1+\varepsilon)}\log \log n} .$$ For diameter bounds for circular sets, we refer the reader to [@bat2018number]. Particular cases of planar integral point sets are also discussed in [@brass2006research §5.11], [@guy2013unsolved §D20], [@our-pmm-2018], [@our-ped-2018]. For generalizations to higher dimensions and the corresponding bounds, see [@kurz2005characteristic; @nozaki2013lower]. In the present paper we give a special bound for planar integral point sets in semi-general position. The condition of semi-general position is essential to the given proof. Preliminary results =================== In this section, we give some lemmas which will be used in the proof. [@solymosi2003note Observation 1] If a triangle $T$ has integer side-lengths $a \leq b \leq c$, then its minimal height $m$ is at least $\left(a - \frac{1}{4}\right)^{1/2}$.
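Observation 1 above can be sanity-checked numerically (a check added here, not from the paper) by computing the minimal height via Heron's formula for all small non-degenerate integer triangles:

```python
import math
from itertools import combinations_with_replacement

def min_height(a, b, c):
    """Minimal height of a triangle with sides a <= b <= c, i.e. the
    height dropped onto the longest side c, via Heron's area formula."""
    s = (a + b + c) / 2.0
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))
    return 2.0 * area / c

# Check the bound  m >= sqrt(a - 1/4)  for all non-degenerate integer
# triangles with sides up to 30 (tuples come out sorted a <= b <= c).
for a, b, c in combinations_with_replacement(range(1, 31), 3):
    if a + b > c:  # strict triangle inequality: non-degenerate
        assert min_height(a, b, c) >= math.sqrt(a - 0.25) - 1e-9

# The unit equilateral triangle attains equality:
print(min_height(1, 1, 1), math.sqrt(0.75))
```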
The part of a plane between two parallel straight lines at distance $\rho$ from each other is called a strip of width $\rho$. [@smurov1998stripcoverings] If a triangle $T$ with minimal height $\rho$ is situated in a strip, then the width of the strip is at least $\rho$. \[cor:solymosi\_strip\] If a triangle $T$ with integer side-lengths $a \leq b \leq c$ is situated in a strip, then the width of the strip is at least $\left(a - \frac{1}{4}\right)^{1/2}$. [@our-vmmsh-2018 Lemma 4]; [@my-pps-linear-bound-2019 Lemma 2.4] \[lem:square\_container\] Let $M\in\mathfrak{M}(2,n)$, $\operatorname{diam} M = d$. Then $M$ is situated in a square of side length $d$. [@my-pps-linear-bound-2019 Definition 2.5] A *cross* for points $M_1$ and $M_2$, denoted by $cr(M_1,M_2)$, is the union of two straight lines: the line through $M_1$ and $M_2$, and the perpendicular bisector of the line segment $M_1 M_2$. [@my-pps-linear-bound-2019 Theorem 3.10] \[lem:no\_distance\_one\] Every set $M\in\mathfrak{M}(2,n)$ such that $|M_1 M_2|=1$ for some $M_1,M_2 \in M$ consists of $n-1$ points, including $M_1$ and $M_2$, on a straight line, and one point off the line, on the perpendicular bisector of the line segment $M_1 M_2$. \[lem:count\_of\_points\_on\_hyperbolas\] Let $\{M_1, M_2, M_3, M_4\} \subset M\in\overline{\mathfrak{M}}(2,n)$ (the points $M_2$ and $M_3$ may coincide; the other points may not), $n\geq 4$. Then $\# M \leq 4 \cdot |M_1 M_2| \cdot |M_3 M_4|$. Lemma \[lem:count\_of\_points\_on\_hyperbolas\] is one of the variations of [@erdos1945integral].
Each point $N\in M$ satisfies one of the following conditions: (a) $N$ belongs to $cr(M_1,M_2)$ — at most 4 points in total; (b) $N$ belongs to $cr(M_3,M_4)$ — at most 4 points in total; (c) $N$ belongs to the intersection of one of $|M_1 M_2| - 1$ hyperbolas with one of $|M_3 M_4| - 1$ hyperbolas — at most $4 (|M_1 M_2| - 1)(|M_3 M_4| - 1)$ points in total. Due to Lemma \[lem:no\_distance\_one\] we have $|M_1 M_2| \geq 2$ and $|M_3 M_4| \geq 2$. Since $$\begin{gathered} 4 (|M_1 M_2| - 1)(|M_3 M_4| - 1) + 4 + 4 = 4 ( (|M_1 M_2| - 1)(|M_3 M_4| - 1) + 2) = \\= 4 ( |M_1 M_2| \cdot |M_3 M_4| + 1 - |M_1 M_2| - |M_3 M_4| + 2) = \\= 4 ( |M_1 M_2| \cdot |M_3 M_4| + (1 - |M_1 M_2|) + (2 - |M_3 M_4|)) \leq 4 |M_1 M_2| \cdot |M_3 M_4| , \end{gathered}$$ we are done. The main result =============== \[thm:main\_result\] For every integer $n \geq 3$ we have $$\overline{d}(2,n) \geq (n/5)^{5/4} .$$ For $n = 3$ we have $\overline{d}(2,3) = 1$. Consider $M\in\overline{\mathfrak{M}}(2,n)$, $n \geq 4$, $\operatorname{diam} M = p$. Let us choose points $M_1, M_2, M_3, M_4 \in M$ (the points $M_2$ and $M_3$ may coincide; the other points may not) such that $$\min_{A, B \in M} |AB| = |M_1 M_2| ,$$ $$\min_{A, B \in M \setminus \{M_1\}} |AB| = |M_3 M_4| = m .$$ For $m \leq p^{2/5}$, Lemma \[lem:count\_of\_points\_on\_hyperbolas\] yields that $$n \leq 4 \cdot |M_1 M_2| \cdot |M_3 M_4| \leq 4 p^{4/5} ,$$ or, equivalently, $$\label{eq:hyperbolas_5_4} p \geq (n/4) ^ {5/4} > (n/5) ^ {5/4} .$$ So, let us consider $m > p^{2/5}$. Then for any $A,B \in M\setminus\{M_1\}$ the inequality $|AB| > p^{2/5}$ holds. Due to Corollary \[cor:solymosi\_strip\], no three points of $M\setminus\{M_1\}$ are located in a strip of width $p^{1/5} / 2$. Lemma \[lem:square\_container\] yields that $M$ is situated in a square with side length $p$. Let us partition this square into $q$ strips, $2p^{4/5} \leq q < 2p^{4/5} + 1$, each of width at most $p^{1/5} / 2$.
Every strip contains at most two points of $M\setminus\{M_1\}$, thus $$\label{eq:strips_4_5} n \leq 2(2p^{4/5} + 1) + 1 = 4p^{4/5}+3 \leq 5 p^{4/5} .$$ The latter inequality holds because $\overline{d}(2,n) \geq 4$ for $n\geq 4$  [@kurz2008minimum] and $4^{4/5}>3$. From the inequality  one can easily derive that $$\label{eq:strips_5_4} p \geq (n/5) ^ {5/4} .$$ The following result is known: [@solymosi2003note Corollary 1] For $H \in \overline {\mathfrak{M}}(2,n)$, the minimum distance in $H$ is at least $n^{1/3}$. Applying the same technique, one can easily derive that $$n \leq 3 \frac{\operatorname{diam} H }{n^{1/6}} ,$$ which leads to the bound $$\overline{d}(2,n) \geq c_3 n^{7/6} ,$$ which is weaker than the one from Theorem \[thm:main\_result\]. Conclusion ========== The presented bound is the first special lower bound for sets in semi-general position. Therefore, we have not attempted to make the constant in Theorem \[thm:main\_result\] as large as possible, in order to keep the ideas of the proof clear and understandable. A more thorough study may enlarge the constant in the future. However, the upper and lower bounds are still not tight. Acknowledgements ================ The author thanks Prof. E.M. Semenov for proofreading and valuable advice. [^1]: nickkolok@mail.ru, avdeev@math.vsu.ru [^2]: This work was carried out at Voronezh State University and supported by the Russian Science Foundation grant 19-11-00197.
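As a purely illustrative comparison (added here; the constant $c_1 = 5/11$ is the one quoted in the introduction), one can locate the point where the $(n/5)^{5/4}$ bound of the theorem overtakes the linear bound:

```python
def power_bound(n):
    # the bound of the theorem above: (n/5)^(5/4)
    return (n / 5) ** 1.25

def linear_bound(n):
    # the linear lower bound c_1 * n with c_1 = 5/11
    return 5 * n / 11

# The 5/4-power bound exceeds the linear one from a moderate n onward:
n_star = next(n for n in range(4, 10**6) if power_bound(n) > linear_bound(n))
print(n_star)  # 134
```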
UDHEP-10-92\ IUHET-228\ [**NEUTRINO MIXING DUE TO A\ VIOLATION OF THE EQUIVALENCE PRINCIPLE**]{}\ \ Physics Department\ Indiana University, Bloomington, IN 47405\ and\ \ Department of Physics and Astronomy\ University of Delaware, Newark, DE 19716\ \ Massless neutrinos will mix if their couplings to gravity are flavor dependent, i.e., violate the principle of equivalence. Because the gravitational interaction grows with neutrino energy, the solar neutrino problem and the recent atmospheric neutrino data may be simultaneously explained by violations at the level of $10^{-14}$ to $10^{-17}$ or smaller. This possibility is severely constrained by present accelerator neutrino experiments and will be preeminently tested in proposed long baseline accelerator neutrino experiments. Several years ago, Gasperini noted that if the gravitational couplings of neutrinos are flavor dependent, mixing will take place when neutrinos propagate through a gravitational field [@G]. Similar ideas were proposed independently by Halprin and Leung [@HL]. Consequently, experiments designed to search for neutrino mixing also probe the validity of the equivalence principle. In this Letter, we analyze the implications of present neutrino mixing experiments for the equivalence principle. We consider the effects on neutrinos when they propagate under the influence of a weak, static gravitational field. For simplicity, we shall assume two neutrino flavors and neglect any neutrino masses. Ignoring effects which involve a spin flip, the flavor evolution of a relativistic neutrino is quite simple (see [@HL; @HLP] for more rigorous derivations). In the rest frame of a massive object, a neutrino has the effective interaction energy $$H = - 2 | \phi (r)| E (1 + f) \label{H}$$ where E is the neutrino energy, and $\phi (r) = - | \phi(r) |$ is the Newtonian gravitational potential of the object. 
$f$ is a small, traceless, $2 \times 2$ matrix which parametrizes the possibility of gravity coupling to neutrinos with a strength different from the universal coupling, i.e., violations of the equivalence principle. $f$ will be diagonal in some basis which we denote as the gravitational interaction basis (G-basis). In that basis, $\delta \equiv f_{22} - f_{11}$ then provides a measure of the degree of violation of the equivalence principle. In general, as occurs for neutrino masses, the flavor basis or the weak interaction basis (W-basis) will not coincide with the G-basis. If we denote the neutrino fields in the G-basis by $\nu_G = (\nu_1, \nu_2)$ and neutrinos in the W-basis by $\nu_W = (\nu_e, \nu_\mu)$, $\nu_G$ and $\nu_W$ are related by a unitary transformation, $U^\dagger$: $$\left( \begin{array}{c} \nu_1 \\ \nu_2 \end{array} \right) = \left[ \begin{array}{cc} \cos \Theta_G & - \sin \Theta_G \\ \sin \Theta_G & \ \ \cos \Theta_G \end{array} \right] \left( \begin{array}{c} \nu_e \\ \nu_\mu \end{array} \right), \label{mix}$$ where $\Theta_G$ is the mixing angle. Consequently, when a massless neutrino propagates through a gravitational field, flavor mixing will occur. The idea of using degenerate particles to study possible violations of the equivalence principle is not new. Similar effects have been considered in the neutral kaon system [@kaon] for over 30 years. Note, however, that a violation of the equivalence principle in the kaon system requires that gravity couple differently to particles and antiparticles, a violation of CPT symmetry. This requirement is not necessary for neutrinos. Here, gravity couples slightly differently to different fermion generations. Using Eq. (\[H\]), we may write down the flavor evolution equation for relativistic neutrinos propagating through a gravitational field (with no matter present).
In the W-basis, it reads $$i \frac{d}{dt} \left( \begin{array}{c} \nu_e \\ \nu_\mu \end{array} \right) = E |\phi(r)| \delta \ \ U \left[ \begin{array}{cc} -1 & 0 \\ 0 & 1 \end{array} \right] U^\dagger \ \ \left( \begin{array}{c} \nu_e \\ \nu_\mu \end{array} \right), \label{osc}$$ where we have neglected the irrelevant term in the Hamiltonian which leads to an unobservable phase. For constant $\phi$, the survival probability for a $\nu_e$ after propagating a distance $L$ is $$P(\nu_e \rightarrow \nu_e) = 1 - \sin^2(2 \Theta_G) \sin^2 \left[ {\pi L \over \lambda } \right] , \label{P}$$ where $$\lambda = 6.2 \ {\rm km} \left( {10^{-20} \over | \phi | \delta } \right) \left( {10 \ {\rm GeV} \over E} \right) \label{lambda}$$ is the oscillation wavelength. Eq. (\[P\]) is quite similar to that for vacuum oscillations due to neutrino masses (see e.g. [@KP]). However, note the linear dependence of the oscillation phase on the neutrino energy. For a mass, the phase depends on $1/E$. Thus these two sources of mixing can be easily distinguished by searching for neutrino mixing at different energies. When the neutrino propagates through matter, the mixing can be dramatically enhanced. A resonance occurs when $$\sqrt{2}\, G_F N_e = 2 E |\phi| \delta \cos ( 2 \Theta_G ) ,$$ where $G_F$ is Fermi’s constant and $N_e$ is the electron density. This effect is completely analogous to the well-studied situation in which the mixing is due to neutrino masses ([@MSW]; for a review, see [@KP]). The survival probabilities for gravitationally induced mixing in a matter background can be obtained from those for masses by the transformation $${m_2^2-m_1^2 \over 4 E} \rightarrow {E | \phi| \delta} , \label{convert}$$ if $\phi$ is a constant. The local potential, $\phi$, enters through the phase of the oscillations, Eq. (\[P\]). It vanishes far from all sources of gravity, so that the results of special relativity are recovered.
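Eqs. (\[P\]) and (\[lambda\]) are easy to evaluate numerically; the following sketch (added here, with illustrative parameter values) reproduces the quoted wavelength normalization, the full conversion after half a wavelength at maximal mixing, and the fact that for 10 MeV solar neutrinos with $|\phi|\delta = 2\times 10^{-25}$ half a wavelength is close to one astronomical unit:

```python
import math

def wavelength_km(phi_delta, E_GeV):
    # Eq. (lambda): 6.2 km at |phi|*delta = 1e-20 and E = 10 GeV,
    # scaling as 1/(|phi|*delta) and 1/E
    return 6.2 * (1e-20 / phi_delta) * (10.0 / E_GeV)

def survival_prob(theta_G, L_km, phi_delta, E_GeV):
    # Eq. (P): nu_e survival probability for a constant potential
    lam = wavelength_km(phi_delta, E_GeV)
    return 1.0 - math.sin(2.0 * theta_G) ** 2 \
               * math.sin(math.pi * L_km / lam) ** 2

# Maximal mixing after half an oscillation wavelength: full conversion.
lam = wavelength_km(1e-20, 10.0)
print(survival_prob(math.pi / 4, lam / 2.0, 1e-20, 10.0))  # ~0.0

# Half a wavelength for E = 10 MeV and |phi|*delta = 2e-25, in units of
# one astronomical unit (1.496e8 km): close to 1, i.e. the Sun-Earth
# distance, as exploited in the long-wavelength solar solution.
print(wavelength_km(2e-25, 0.010) / 2.0 / 1.496e8)  # ~1.04
```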
However, the local value of $\phi$ is uncertain, because the estimates tend to increase as one looks at matter distributions at larger and larger scales. The potential at the Earth due to the Sun is $1\times 10^{-8}$, that due to the Virgo cluster of galaxies is about $1\times 10^{-6}$, while that due to our supercluster [@kaon] has recently been estimated to be about $3 \times 10^{-5}$ (which is larger than $\phi$ [*in*]{} the Sun due to the Sun). In what follows, we will quote values for the combined, dimensionless parameter $|\phi| \delta$. We now consider what the current data on neutrino mixing imply for the parameters $|\phi| \delta$ and $\Theta_G$. The searches for mixing can be divided into three broad categories: laboratory experiments, atmospheric neutrino observations and solar neutrino observations. The latter two have shown some evidence for neutrino mixing and will be considered first. Solar neutrinos have been observed in four experiments [@solar]; their results are summarized in Table 1. The observations are all well below the predictions of the standard solar model [@BU]–an indication of mixing. There are two mechanisms by which neutrino mixing can give large reductions in the solar neutrino flux: long-wavelength oscillations or resonant conversion. We shall consider these mechanisms separately. If the distance between the Sun and the Earth is half of an oscillation wavelength and the mixing angle is large, then Eq. (\[P\]) predicts a large reduction in the flux. This occurs for 10 MeV neutrinos when $|\phi|\delta = 2\times 10^{-25}$. Then the high energy neutrinos will be depleted but the lower energy neutrinos will be completely unaffected. However, the present data indicate mixing for the low energy solar neutrinos as well; see Table 1. A careful $\chi^2$ analysis finds that there is no long-wavelength, two flavor explanation of the data–it is disfavored at the 3 standard deviation level.
This is in contrast to the normal case of mixing induced by mass differences, for which the data are well described by vacuum oscillations of two flavors (see e.g. [@APP]). The difference is due to the differing energy dependence of the two types of mixing. Next generation solar neutrino observations [@future] will further constrain this possibility by measuring the solar neutrino energy dependence (SNO or Super-Kamiokande) and by searching for seasonal variations (BOREXINO). Resonant conversion as the neutrinos propagate through the interior matter of the Sun can also lead to large reductions in the flux. Figure 1 shows the favored regions from a $\chi^2$ fit to the flux reduction values in Table 1 (the Kamiokande-II energy bins are included explicitly and their overall systematic error is correctly accounted for [@solar]). An analytical expression was used to describe the resonant conversion survival probability [@KP; @HLP], based only on the Sun’s gravitational potential. The average gravitational potential in the Sun, $|\overline{\phi}| = 4 \times 10^{-6}$, was used to normalize Fig. 1. If a constant potential from larger scales dominates over this, then the allowed regions in Figure 1 are slightly elongated about the mean $|\overline {\phi}| \delta$ by approximately a factor of 3. Next generation solar neutrino observations will test these regions [@HLP] by measuring the solar neutrino energy dependence (SNO or Super-Kamiokande), by looking for day-night variations, and by performing a neutral current measurement of the solar flux (SNO). Two experiments [@atmos] recently found that there is a relative depletion of $\nu_\mu$ to $\nu_e$ in the flux of low energy atmospheric neutrinos. This depletion may be the result of flavor mixing. The energy of the atmospheric neutrinos is typically about $0.5$ GeV. The propagation length now varies from 20 to 10,000 kilometers.
This corresponds to values of $| \phi | \delta$ from about $6 \times 10^{-20}$ to about $10^{-22}$. Although the data can be explained by either $\nu_\mu-\nu_e$ mixing or $\nu_\mu-\nu_\tau$ mixing, and so do not necessarily probe the same parameters as solar neutrinos, it is encouraging that a range of $|\phi|\delta$ can account for the solar neutrino data and the atmospheric neutrino data simultaneously [@HLP]. This may signal a possible breakdown of the principle of equivalence. Numerous laboratory experiments have been performed searching for neutrino mixing. They have not found definitive evidence for mixing, so these experiments eliminate ranges of the gravitationally induced mixing parameters. The most stringent laboratory limits come from “appearance” experiments with the largest values of $E \times L$. These occur in experiments using the beams of neutrinos produced by accelerators [@limits], where the neutrino energies are tens of GeV and propagation lengths are as long as a kilometer. Because most mixing experiments only analyze the data in terms of ${L \over E}$, the relevant quantity for neutrino masses, the results are not exactly transferable to gravitationally induced mixing. From the published descriptions of the experiments, we have [*estimated*]{} the average value of $E \times L$ to derive the limits shown in the top part of Fig. (2). Our estimates are probably accurate up to factors of 3 in $ | \phi | \delta $. Most of the large mixing angle region which solves the solar neutrino problem is eliminated by the current laboratory bounds. But the lower part of this solution, and the small mixing angle region, are still allowed by the present accelerator data. Only relatively small improvements in the current bounds are needed to constrain these regions. New accelerator neutrino experiments, with baselines of hundreds or thousands of kilometers, are under active consideration at the present time. In the lower part of Fig. 
(2) are shown estimates of the accessible parameter region achievable by two of the proposed experiments [@lbane], FNAL to Soudan 2 and FNAL to DUMAND. At these long distances matter effects are becoming important [@ELAN], as is apparent in the difference between $\nu_\mu$ and $ \overline{\nu}_\mu$ regions. The energy distribution of the neutrino event rate is taken from calculations for a short baseline experiment using the FNAL main injector. Also, similar neutrino energies, intensities and propagation lengths are available to planned next generation atmospheric neutrino detectors (such as DUMAND, AMANDA, etc.). They may be able to probe parameter regions similar to those shown for the accelerator experiments. Thus there are many planned and proposed experiments which can extend the tests of the equivalence principle to values of $|\phi| \delta$ far below the present accelerator limits. In conclusion, the degeneracy of neutrinos enables tests of the equivalence principle which are far more sensitive than those using “normal” matter [@normal]. The present solar neutrino data suggest a violation of the equivalence principle at the level of $ 3 \times 10^{-22} < |\phi|\delta < 2 \times 10^{-19}$. The atmospheric neutrino data also suggest a possible breakdown of the equivalence principle at this same level. This possibility can be independently checked by long-baseline accelerator neutrino experiments, which can reach down to $|\phi|\delta \approx 10^{-24}$. The violation of the equivalence principle is introduced on purely phenomenological grounds. Such a violation would indicate a breakdown in general relativity or the existence of additional long range tensor interactions. We thank M. Butler, S. Nozawa, R. Malaney and A. Boothroyd for communicating preliminary results from their analysis. CNL wishes to thank Fermilab for their hospitality and the University of Delaware Research Foundation for partial support. 
JP thanks Brookhaven and the Queen’s University Summer Institute for their hospitality. This work was supported in part by the U.S. Department of Energy under Grants No. DE-FG02-84ER40163 and DE-FG02-91ER40661. [99]{} M. Gasperini, Phys. Rev. D [**38**]{}, 2635 (1988); Phys. Rev. D [**39**]{}, 3606 (1989). A. Halprin and C. N. Leung, Phys. Rev. Lett. [**67**]{}, 1833 (1991); Nucl. Phys. B (Proc. Suppl.) [**28A**]{}, 139 (1992). A. Halprin, C. N. Leung, and J. Pantaleone, in preparation. R. J. Hughes, Phys. Rev. D [**46**]{}, R2283 (1992); I. R. Kenyon, Phys. Lett. B [**237**]{}, 274 (1990); M.L. Good, Phys. Rev. 121, 311 (1961). T.K. Kuo and J. Pantaleone, Rev. Mod. Phys. 61, 937 (1989). L. Wolfenstein, Phys. Rev. D [**17**]{}, 2369 (1978); Phys. Rev. D [**20**]{}, 2634 (1979). S. P. Mikheyev and A. Yu Smirnov, Yad. Fiz. [**42**]{}, 1441 (1985) \[Sov. J. Nucl. Phys. [**42**]{}, 913 (1985)\]; Nuovo Cim. [**C9**]{}, 17 (1986). K. S. Hirata [*et al.*]{}, Phys. Lett. B [**280**]{}, 146 (1992). D. Casper [*et al.*]{}, Phys. Rev. Lett. [**66**]{}, 2561 (1991). E. W. Beier [*et al.*]{}, Phys. Lett. B [**283**]{}, 446 (1992). R. Davis, D.S. Harmer and K.C. Hoffman, Phys. Rev. Lett. 20, 1205 (1968); R. Davis, in Proc. of the 21st Int. Cosmic Ray Conf., ed. by R.J. Protheroe (University of Adelaide Press) 143 (1990). K. Hirata et al., Phys. Rev. Lett. 65, 1297 (1990); 1301 (1990); Phys. Rev. D44, 2241 (1991). A.I. Abazov et al. (SAGE), Phys. Rev. Lett. 67, 3332 (1991). A. Gavrin, talk presented at Dallas Conference. P. Anselmann et al. (GALLEX), Phys. Lett. B285, 376 (1992); 390 (1992). J.N. Bahcall and R.K. Ulrich, Rev. Mod. Phys. [**60**]{}, 297 (1988); J.N. Bahcall and M.H. Pinsonneault, Rev. Mod. Phys., Oct. (1992). A. Acker, S. Pakvasa, and J. Pantaleone, Phys. Rev. D [**43**]{}, R1754 (1991). V. Barger, R.J.N. Phillips and K. Whisnant, University of Wisconsin preprint MAD/PH/708. H.H. Chen, Phys. Rev. Lett. 55, 1534 (1985). G.T. 
Ewan et al., Sudbury Neutrino Observatory (SNO) Proposal, October (1987). Y. Totsuka (Super-Kamiokande), Tokyo Univ. preprint ICRR-227-90-20. C. Arpesella, et al., BOREXINO at Gran Sasso, August (1991). N.J. Baker et al., Phys. Rev. Lett. 47, 1577 (1981). N. Ushida et al., Phys. Rev. Lett. 57, 2897 (1986); Ahrens et al., Phys. Rev. D31, 2732 (1985). V.J. Stenger (DUMAND), Proceedings of the Workshop on Long Baseline Neutrino Oscillations, Fermilab (1991) p. 317; E.A. Peterson (Soudan 2) ibid. p. 243. J. Pantaleone, Phys. Lett. B292, 201 (1992); B246, 245 (1990); R.H. Bernstein and S.J. Parke, Phys. Rev. D44, 2069 (1991). V.B. Braginsky and V.I. Panov, Zh. Eksp. Teor. Fiz. [**61**]{}, 873 (1971) \[Sov. Phys. JETP, [**34**]{}, 463 (1972)\]. More recent experiments have not improved this limit, see, e.g., B.R. Heckel [*et al.*]{}, Phys. Rev. Lett. [**63**]{}, 2705 (1989). M.J. Longo, Phys. Rev. Lett. [**60**]{}, 173 (1988). S. Pakvasa, W.A. Simmons and T.J. Weiler, Phys. Rev. D39, 1761 (1989). Table 1. Results of the solar neutrino experiments [@solar]. The flux is given as a fraction of the standard solar model [@BU] prediction.\ Experiment Process E$_{threshold}$ Expt./SSM --------------- --------------------------------------------- ----------------- ------------------------------------ Davis et al. $\nu_e + ^{37}$Cl$ \rightarrow e + ^{37}$Ar 0.81 MeV 0.27 $\pm$ 0.04 Kamiokande-II $\nu + e \rightarrow \nu + e $ 7.5 MeV 0.46 $\pm$ 0.05 $\pm$ 0.06 SAGE $\nu_e + ^{71}$Ga$ \rightarrow e + ^{71}$Ge 0.24 MeV 0.44 $^{+0.13}_{-0.18}$ $\pm$ 0.11 GALLEX $\nu_e + ^{71}$Ga$ \rightarrow e + ^{71}$Ge 0.24 MeV 0.63 $\pm$ 0.14 $\pm$ 0.06 [**FIGURE CAPTION**]{}\ [**Fig. 1.**]{} $\chi^2$ plot showing regions of $|\overline {\phi}| \delta$ versus $\sin^2 2\Theta_G$ allowed by the solar neutrino data in Table 1 at 90% (solid lines) and 99% (dotted lines) confidence level, assuming two flavors and $\delta > 0$.\ [**Fig. 
2.**]{} Upper contours: Regions of $|\phi| \delta$ versus $ \sin^2 2\Theta_G $ that are excluded by accelerator neutrino data [@limits]. Lower contours: Regions probed by proposed [@lbane] long-baseline experiments, assuming sensitivity to 10% $\nu_\mu$ ($\overline{\nu_\mu}$) disappearance. The outer dashed (inner dash dot) curve is for a $\nu_\mu$ ($\overline{\nu}_\mu$) beam from FNAL $\rightarrow$ DUMAND. The inner dashed (dot) curve is for a $\nu_\mu$ ($\overline{\nu}_\mu$) beam from FNAL $\rightarrow$ Soudan 2.
--- abstract: 'We establish the gravity/fluid correspondence in the nonminimally coupled scalar-tensor theory of gravity. Imposing Petrov-like boundary conditions over the gravitational field, we find that, for a certain class of background metrics, the boundary fluctuations obey the standard Navier-Stokes equation for an incompressible fluid without any external force term in the leading order approximation under the near horizon expansion. That is to say, the scalar field fluctuations do not contribute in the leading order approximation, regardless of what kind of boundary condition we impose on the scalar field.' author: - | Bin Wu and Liu Zhao\ School of Physics, Nankai University, Tianjin 300071, China\ [*email*]{}: <binfen.wu@gmail.com> and <lzhao@nankai.edu.cn> title: 'Holographic fluid from nonminimally coupled scalar-tensor theory of gravity' --- Introduction ============ The AdS/CFT correspondence [@Ma] is a successful idea which makes a connection between quantum field theory on the boundary and gravity theory in the bulk. It has been studied extensively for nearly two decades and has led to important applications in certain condensed matter problems such as superconductivity [@SC]. In the long wavelength limit, the dual theory on the boundary reduces to a hydrodynamic system [@PPS; @Ba2], and the transport coefficients of the dual relativistic fluid were calculated in [@Haack:2008cp]. This is known as the gravity/fluid correspondence. In analogy to the AdS/CFT duality, the dual fluid usually lives on the AdS boundary at asymptotic infinity [@Eling:2009sj; @Ba3; @Ashok]. However, the choice of boundary at asymptotic infinity is not absolutely necessary [@Bredberg:2010ky; @Strominger2]. Refs. [@Cai2; @Ling:2013kua] attempted to place the boundary at a finite cutoff in asymptotically AdS spacetime to get the dual fluid. An algorithm was presented in [@Compere:2011dx] for systematically reconstructing the perturbative bulk metric to arbitrary order. 
For spatially flat spacetimes, this method has been generalized to a wide class of theories, including topological gravitational Chern-Simons theory [@Cai:2012mg], Einstein-Maxwell gravity [@Niu:2011gu], Einstein-dilaton gravity [@Cai] and higher curvature gravity [@Eling; @Zou:2013ix]. For spatially curved spacetimes, imposing Petrov-like boundary conditions on a timelike cutoff hypersurface is a good way to realize boundary fluid equations [@Strominger; @Wu:2013kqa; @Cai:2013uye], provided the background spacetime is non-rotating. In [@Bin], the present authors investigated the fluid dual of Einstein gravity with a perfect fluid source using the Petrov-like boundary condition. In most of the previously known example cases, the dual fluid equation contains an external force term whenever the bulk theory involves a matter source [@Bin; @Ba; @Ling; @Bai:2012ci]. In this paper, we proceed to study the fluid dual of a nonminimally coupled scalar-tensor theory of gravity. We find that the dual fluid equation arising from near horizon fluctuations around a certain class of static background configurations in this theory does not contain the external force term, because the contribution from the scalar fluctuations is of higher order in the near horizon expansion and hence does not enter the leading order approximation. Nonminimally coupled scalar-tensor theory ========================================= We begin by introducing the nonminimally coupled scalar-tensor theory of gravity in $(n+2)$ dimensions. The action is written as $$\begin{aligned} I[g,\phi ]=\int \mathrm{d}^{n+2}x\sqrt{-g}\left[ \frac{1}{2}(R-2\Lambda) -\frac{1}{2}(\nabla \phi )^{2}-\frac{1}{2}\xi R\phi^2 - V(\phi)\right] \;,\end{aligned}$$ where $\xi$ is a coupling constant. When $\xi = \frac{n}{4(n+1)}$, the theory becomes a conformally coupled scalar-tensor theory of gravity. We will not choose any specific value for $\xi$ in this paper because the construction works for any $\xi$. We set $8\pi G=1$ for convenience. 
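As a quick arithmetic check (ours, not part of the paper), the quoted conformal value $\xi = \frac{n}{4(n+1)}$ agrees with the general-$d$ conformal coupling $\xi = \frac{d-2}{4(d-1)}$ under $d = n+2$, and reduces to the familiar $\xi = 1/6$ in four dimensions ($n=2$):

```python
# Check that n/(4(n+1)) equals (d-2)/(4(d-1)) with d = n+2, and that
# n = 2 (four spacetime dimensions) gives the familiar value 1/6.
from fractions import Fraction

def xi_conformal(n):
    return Fraction(n, 4 * (n + 1))

def xi_conformal_d(d):
    return Fraction(d - 2, 4 * (d - 1))

assert all(xi_conformal(n) == xi_conformal_d(n + 2) for n in range(1, 20))
assert xi_conformal(2) == Fraction(1, 6)
```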
The equations of motion that follow from the action read $$\begin{aligned} &G_{\mu\nu} + g_{\mu\nu}\Lambda = T_{\mu\nu}, \label{eq1}\\ &\nabla_{\mu}\nabla^{\mu} \phi-\xi R\phi - \frac{d V}{d \phi}=0, \label{eq2}\end{aligned}$$ where $$\begin{aligned} T_{\mu\nu}&= \nabla_{\mu}\phi\nabla_{\nu}\phi - \frac{1}{2}g_{\mu\nu}(\nabla \phi)^2 + \xi [g_{\mu\nu} \Box -\nabla_{\mu}\nabla_{\nu} + G_{\mu\nu}]\phi^2 -g_{\mu\nu}V(\phi).\end{aligned}$$ In what follows, it is better to reformulate eq.(\[eq1\]) in the form $$\begin{aligned} G_{\mu\nu} &= \tilde{T}_{\mu \nu}, \label{eq1prime}\end{aligned}$$ in which we have introduced $$\begin{aligned} &\tilde{T}_{\mu \nu} = \frac{\nabla_{\mu}\phi\nabla_{\nu}\phi - \frac{1}{2} g_{\mu\nu}(\nabla \phi)^2 + \xi [g_{\mu\nu} \Box -\nabla_{\mu}\nabla_{\nu}]\phi^2 -g_{\mu\nu}(\Lambda + V(\phi))} {(1 - \xi \phi^2)}\;. \label{tp}\end{aligned}$$ To realize fluid dual of the above theory, we will consider fluctuations around metrics of the form $$\begin{aligned} \mathrm{d} s^2 = -f(r)\mathrm{d}t^2 + \frac{\mathrm{d}r^2}{f(r)} + r^2\mathrm{d} \Omega_k^2\;,\end{aligned}$$ where $\mathrm{d}\Omega_k^2$ is the line element of an $n$-dimensional maximally symmetric Einstein space (with coordinates $x^i$), whose normalized constant sectional curvature is $k=0,\pm1$. Exact solutions of this form are not yet explicitly known in arbitrary dimensions. However, a number of example cases indicate that solutions of the above form indeed exist in some concrete dimensions [@MTZ; @Wei; @Nadalini], and, in this work, we do not need to make use of the explicit solution. Thus the spacetime dimension $d=n+2$, the metric function $f(r)$ and the scalar potential $V(\phi)$ are all kept unspecified. 
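Before passing to Eddington-Finkelstein form, here is a quick numerical spot-check (ours, with arbitrary sample values) that the coordinate change $u = t + \int \mathrm{d}r/f(r)$, i.e. $\mathrm{d}u = \mathrm{d}t + \mathrm{d}r/f$, turns the $(t,r)$ block $-f\,\mathrm{d}t^2 + \mathrm{d}r^2/f$ of the static metric into $-f\,\mathrm{d}u^2 + 2\,\mathrm{d}u\,\mathrm{d}r$:

```python
# Treat dt and dr as formal (numerical) increments and verify that
# -f*du**2 + 2*du*dr equals -f*dt**2 + dr**2/f when du = dt + dr/f.
def ef_equals_static(f_val, dt, dr, tol=1e-12):
    du = dt + dr / f_val
    ef_block = -f_val * du**2 + 2 * du * dr
    static_block = -f_val * dt**2 + dr**2 / f_val
    return abs(ef_block - static_block) < tol

# A few arbitrary sample values of f(r_c), dt and dr:
assert ef_equals_static(0.37, 1.3, -0.8)
assert ef_equals_static(1.7, -0.4, 2.2)
```

Since the identity is polynomial in $\mathrm{d}t$ and $\mathrm{d}r$, agreement at a handful of generic values confirms the algebra.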
In Eddington-Finkelstein (EF) coordinates, the metric can be expressed as $$\begin{aligned} \mathrm{d} s^2 = g_{\mu\nu}\mathrm{d}x^\mu \mathrm{d}x^\nu = -f(r) \mathrm{d}u^2 + 2 \mathrm{d}u \mathrm{d}r + r^2\mathrm{d} \Omega_k^2\;, \label{metric2}\end{aligned}$$ where $u$ is the light-like EF coordinate. In the following, whenever $g_{\mu\nu}$ appears, it is meant to be given by (\[metric2\]) in the coordinates $x^\mu=(u,r,x^i)$. Hypersurface projection and boundary condition ============================================== To construct the fluid dual of the above system, we need to introduce an appropriate hypersurface and make projections of some geometric objects onto the hypersurface. We also need to introduce an appropriate boundary condition on the projection hypersurface. The formulation is basically parallel to the previous works such as [@Strominger; @Bin]. Consider the timelike hypersurface $\Sigma_c$ defined via $r-r_c=0$ with constant $r_c$. The induced metric $h_{\mu\nu}$ on the hypersurface is related to the bulk metric $g_{\mu\nu}$ via $$\begin{aligned} & h_{\mu\nu}=g_{\mu\nu}-n_{\mu}n_{\nu}, \label{indmet}\end{aligned}$$ where $n_{\mu}$ is the unit normal vector of $\Sigma_c$. For the line element (\[metric2\]) $$\begin{aligned} n_{\mu}=(0,\frac{1}{\sqrt{f(r)}},0,\cdots,0), \qquad n^{\mu}=(\frac{1}{\sqrt{f(r)}}, \sqrt{f(r)}, 0, \cdots, 0).\end{aligned}$$ It is natural to introduce $x^a =(u, x^i)$ as an intrinsic coordinate system on the hypersurface. Note that we have adopted two indexing systems. Greek indices represent bulk tensors, while Latin indices represent tensors on the hypersurface. In terms of the coordinates $x^a$, it is convenient to think of the induced metric $h_{\mu\nu}$ on the hypersurface as a metric tensor $h_{ab}$ defined on the hypersurface: one just needs to remove the row $h_{\mu r}$ and the column $h_{r\nu}$ (which are both identically zero) from $h_{\mu\nu}$. 
So, in the following, we will not distinguish $h_{\mu\nu}$ from $h_{ab}$. We will sometimes encounter objects with mixed indices such as $h_{\mu a}$. Such objects should of course be understood as components of a bulk tensor. The line element corresponding to $h_{ab}$ reads $$\begin{aligned} \mathrm{d} s_{n+1}^2 &= -f(r_c) \mathrm{d}u^2+r_{c}^{2}\mathrm{d} \Omega_k^2 \nonumber \\ &=-(\mathrm{d} x^{0})^2+r_{c}^{2}\mathrm{d} \Omega_k^2 \nonumber \\ &=-\frac{1}{\lambda^2}\mathrm{d}\tau^2 +r_{c}^{2}\mathrm{d} \Omega_k^2, \label{ndlelm}\end{aligned}$$ where we have introduced two rescaled temporal coordinates $x^0$ and $\tau$, which are related to $u$ via $\tau = \lambda x^0 = \lambda \sqrt{f(r_c)}\,u$. The rescaling parameter $\lambda$ is introduced so that when $\lambda\rightarrow 0$, the theory becomes non-relativistic. It will become clear in the next section that $\lambda\rightarrow 0$ also signifies the near horizon limit. The hypersurface projections of eq.(\[eq1prime\]) can be decomposed into longitudinal and normal projections. These two classes of projections are also known as momentum and Hamiltonian constraints in the case of pure general relativity. The results of the projections read $$\begin{aligned} & D_a(K^{a}{ }_{b}-h^{a}{ }_{b}K)=\tilde{T}_{\mu\nu}n^{\mu}h^{\nu}{ }_{b}, \label{mo}\\ & \hat{R}+K^{ab}K_{ab}-K^2 = - 2\tilde{T}_{\mu\nu} n^{\mu} n^{\nu}, \label{ha}\end{aligned}$$ where $K_{ab}$ is the extrinsic curvature. 
The boundary condition to be imposed on the hypersurface is the Petrov-like condition $$\begin{aligned} & C_{(l)i(l)j}=l^{\mu}m_{i}{}^{\nu}l^{\sigma}m_{j}{}^{\rho}C_{\mu\nu\sigma\rho} =0, \label{bdry}\end{aligned}$$ where $C_{\mu\nu\sigma\rho}$ is the Weyl curvature tensor, and $l^\mu$, $m_i{}^{\mu}$ together with $k^\mu$ form a set of Newman-Penrose basis vector fields which obey $$\begin{aligned} & l^2=k^2=0,\,(k,l)=1,\,(l,m_{i})=(k,m_{i})=0,\,(m_{i},m_{j}) =\delta^{i}{ }_{j}.\end{aligned}$$ The boundary degrees of freedom for the gravitational field are totally encoded in the Brown-York tensor defined in [@B-Y], $$\begin{aligned} t_{ab}=h_{ab}K-K_{ab}, \label{B-Y}\end{aligned}$$ which has $\frac{1}{2}(n+1)(n+2)$ independent components. The Petrov-like conditions impose $\frac{1}{2}n(n+1)-1$ constraints over such degrees of freedom, where the $-1$ accounts for the tracelessness of the Weyl tensor. So, there remain only $n+2$ degrees of freedom, which can be interpreted as the density, pressure and velocity components of the boundary fluid; these must obey the Hamiltonian and momentum constraints (\[mo\]) and (\[ha\]) given above. These constraint equations can be viewed as the equation of state and the evolution equation of the boundary fluid. Inserting the relation (\[B-Y\]) into (\[bdry\]) and making use of (\[mo\]) and (\[ha\]), the boundary condition becomes $$\begin{aligned} 0&=\frac{2}{\lambda^2}t^{\tau}{ }_{i}t^{\tau}{ }_{j} +\frac{t^2}{n^2}h_{ij}-\frac{t} {n}t^{\tau}{ }_{\tau}h_{ij}+t^{\tau}{ }_{\tau}t_{ij}\nonumber\\ &\quad+2\lambda\partial_{\tau} \left(\frac{t}{n}h_{ij}-t_{ij}\right)-\frac{2}{\lambda} D_{{(}i}t^{\tau}{ }_{j{)}}-t_{ik}t^{k} { }_{j} - \hat{R}_{ij} \nonumber\\ &\quad -\frac{1}{n}(\tilde{T}_{\nu\rho}n^{\nu}n^{\rho} +\tilde{T}+\tilde{T}_{00}-2\tilde{T}_{\rho 0} n^{\rho})h_{ij}+\tilde{T}_{ij} , \label{bcex}\end{aligned}$$ where $t = t^a{}_a$ is the trace of the Brown-York tensor. 
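The counting just quoted can be verified mechanically (a trivial check of ours, using only the numbers stated above):

```python
# A symmetric (n+1)x(n+1) Brown-York tensor has (n+1)(n+2)/2 independent
# components; the Petrov-like condition removes n(n+1)/2 - 1 of them.
def remaining_dof(n):
    brown_york = (n + 1) * (n + 2) // 2
    petrov_constraints = n * (n + 1) // 2 - 1
    return brown_york - petrov_constraints

# The remainder is always n + 2: one density, one pressure, n velocities.
assert all(remaining_dof(n) == n + 2 for n in range(1, 12))
```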
The calculation that leads to (\[bcex\]) is quite lengthy and is basically identical to what we have done in [@Bin], so we refer the readers to our previous work for details. To proceed, we will need the explicit form of the tensor $\tilde{T}_{\mu\nu}$. Using (\[tp\]), the last line in (\[bcex\]) can be rewritten as $$\begin{aligned} & -\frac{1}{n}(\tilde{T}_{\nu\rho}n^{\nu}n^{\rho} + \tilde{T}+ \tilde{T}_{00}-2\tilde{T}_{\rho 0} n^{\rho})h_{ij}+\tilde{T}_{ij} \nonumber \\ & = -\frac{1}{n(1-\xi\phi^2)} \left[ (1+\frac{n}{2})f\phi^{\prime 2} - (f+n)\xi \Box\phi^2 - 2\Lambda + 4\Lambda\xi\phi^2 + nV(\phi) \right]h_{ij}. \label{tpcp}\end{aligned}$$ Fluctuations around the background and order estimations ======================================================== Having now described the field equation and the boundary condition, we turn to look at the perturbative fluctuations around the background (\[metric2\]) and make order estimations for all relevant quantities. Since the boundary degrees of freedom from bulk gravity are all encoded in the Brown-York tensor, it is reasonable to start with calculations of the Brown-York tensor in the background spacetime and then make a perturbative expansion around the background values. As in the previous works, we take the expansion parameter to be identical to the scaling parameter $\lambda$ that appeared in (\[ndlelm\]), so that the perturbative limit $\lambda\to 0$ is simultaneously the non-relativistic limit. The perturbed Brown-York tensor reads $$\begin{aligned} t^{a}{ }_{b}=\sum^{\infty}_{n=0} \lambda^{n}(t^{a}{}_{b})^{(n)},\end{aligned}$$ where $$\begin{aligned} &(t^{\tau}{}_{\tau})^{(0)} =\frac{n\sqrt{f}}{r}, \\ &(t^{\tau}{ }_{i})^{(0)}=0, \\ &(t^{i}{}_{j})^{(0)}=\left(\frac{1}{2\sqrt{f}}\partial_{r}f +\frac{(n-1)\sqrt{f}}{r}\right)\delta^{i}{ }_{j}\end{aligned}$$ are the background values. 
Taking the trace, we also have $$\begin{aligned} &t^{(0)}=\frac{n}{2\sqrt{f}}\partial_{r}{f}+ \frac{n^2 \sqrt{f}}{r}.\end{aligned}$$ Meanwhile, we also take a near horizon limit, assuming that there exists an event horizon at the largest zero $r=r_h$ of the smooth function $f(r)$ and that the hypersurface $\Sigma_c$ is very close to the horizon, with $r_c-r_{h}= \alpha^2\lambda^2$, where $\alpha$ is a constant introduced to balance the dimensionality; its value will be fixed later. The function $f(r_c)$ and related quantities can then be expanded near the horizon, e.g. $$\begin{aligned} f(r_c)&= f^{\prime}(r_{h})(r_c-r_h)+ \frac{1}{2}f^{\prime \prime}(r_h) (r_c-r_h)^2 +\cdots \sim \mathcal{O}(\lambda^2).\end{aligned}$$ Naturally, the scalar field on the hypersurface also gets a perturbative expansion, $$\begin{aligned} \phi = \sum^{\infty}_{n=0} \lambda^n \phi^{(n)}\;,\end{aligned}$$ where $\phi^{(0)}$ corresponds to the original (unperturbed) background, and for static backgrounds of the form (\[metric2\]), $\phi^{(0)}$ must be independent of the coordinates $(\tau, x^i)$. We can further make the near horizon expansion $$\begin{aligned} \phi^{(0)}(r_c) &= \phi^{(0)}(r_h) + \phi^{(0)\prime}(r_h) (r_c - r_h) + \cdots \sim \mathcal{O}(\lambda^0).\end{aligned}$$ It is reasonable to assume that for all $n>0$, $\phi^{(n)}$ are functions of $ (\tau, x^i)$ only and independent of $r_c$, because otherwise we can simply expand these $r_c$-dependent functions near the horizon and absorb the higher order terms by a redefinition of $\phi^{(n)}$. 
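The scaling $f(r_c)\sim\mathcal{O}(\lambda^2)$ can be illustrated numerically (our sketch; the profile $f(r)=1-r_h/r$ is only an example, since the paper keeps $f$ generic):

```python
# With r_c - r_h = alpha^2 * lambda^2, the ratio f(r_c)/lambda^2 should
# tend to alpha^2 * f'(r_h) as lambda -> 0.
r_h, alpha = 2.0, 1.5
f = lambda r: 1.0 - r_h / r     # example profile with a horizon at r = r_h
fprime_h = 1.0 / r_h            # f'(r) = r_h/r^2 evaluated at r = r_h

ratios = []
for lam in (1e-1, 1e-2, 1e-3):
    r_c = r_h + alpha**2 * lam**2
    ratios.append(f(r_c) / lam**2)

# ratios approaches alpha**2 * fprime_h = 1.125 as lam shrinks
```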
Since $\phi^{(0)}$ is $(\tau,x^i)$-independent and $\phi^{(n)}$ are $r_c$-independent, we get $$\begin{aligned} \partial_\tau \phi &=\partial_\tau(\phi^{(0)} + \lambda\phi^{(1)} + \cdots) \sim \mathcal{O}(\lambda^1), \\ \partial_i \phi &=\partial_i(\phi^{(0)} + \lambda\phi^{(1)} + \cdots) \sim \mathcal{O}(\lambda^1), \\ \partial_r \phi &=\partial_r (\phi^{(0)} + \lambda\phi^{(1)} + \cdots) \sim \mathcal{O}(\lambda^0).\end{aligned}$$ The index $\mu$ in $\partial_\mu\phi$ can be raised using the inverse of the bulk metric $g_{\mu\nu}$. Since we concentrate only on the perturbations on the hypersurface, we need to work out the behaviors of the bulk metric around $\Sigma_c$. The components of the bulk metric around $\Sigma_c$ in $(\tau,r,x^i)$ coordinates can be written as $$\begin{aligned} g_{\tau\tau}|_{\Sigma_c} = -\frac{1}{\lambda^2}, \qquad g_{\tau r}|_{\Sigma_c}= \frac{1}{\lambda\sqrt{f(r_c)}}, \qquad g_{ij}|_{\Sigma_c}=r_c^{2}\mathrm{d} \Omega_k^2.\end{aligned}$$ Therefore, $$\begin{aligned} g^{r\tau}|_{\Sigma_c}= \lambda\sqrt{f(r_c)}\sim \mathcal{O}(\lambda^2), \quad g^{rr}|_{\Sigma_c}=f(r_c)\sim \mathcal{O}(\lambda^2), \quad g^{ij}|_{\Sigma_c}=\frac{1}{g_{ij}|_{\Sigma_c}}\sim \mathcal{O}(\lambda^0),\end{aligned}$$ and hence $$\begin{aligned} & \partial^\tau \phi|_{\Sigma_c} = g^{\tau\nu}|_{\Sigma_c}\partial_\nu \phi|_{\Sigma_c} = g^{r\tau}|_{\Sigma_c} \partial_r \phi|_{\Sigma_c} \sim \mathcal{O}(\lambda^2), \qquad \partial^i \phi|_{\Sigma_c} = g^{ij}|_{\Sigma_c}\partial_j \phi|_{\Sigma_c} \sim \mathcal{O}(\lambda^1), \\ & \partial^r \phi|_{\Sigma_c} = g^{r\nu}{}_{|_{\Sigma_c}}\partial_\nu \phi = g^{r\tau}|_{\Sigma_c} \partial_\tau \phi|_{\Sigma_c} + g^{rr}|_{\Sigma_c} \partial_r \phi|_{\Sigma_c} \sim \mathcal{O}(\lambda^2).\end{aligned}$$ With the aid of the above analysis, the Petrov-like boundary condition (\[bcex\]) can be expanded in a power series in $\lambda$, and at the lowest nontrivial order $\mathcal{O}(\lambda^0)$, we get $$\begin{aligned} 
\frac{\sqrt{f'_h}}{\alpha}t^{i}{ }_{j}^{(1)} = 2t^{\tau}{ }_{k}^{(1)}t^{\tau}{ }_{j}^{(1)}h^{ik(0)} - 2h^{ik(0)}D_{(j}t^{\tau}{ }_{k)}^{(1)} + \frac{\sqrt{f'_h}}{n \alpha} t^{(1)} \delta^{i}{ }_{j} - \hat{R}^{i}{ }_{j} - C_h \delta^i{}_j, \label{d}\end{aligned}$$ where $h_{ij}^{(0)}=r_h^2 \mathrm{d}\Omega_k^2$ and $C_h$ is a constant with value $$\begin{aligned} C_h= \frac{\left(-2\Lambda + \frac{n f'_h}{r_h} + 4\Lambda\xi\left(\phi^{(0)}_h\right)^2 + nV(\phi^{(0)}_h) - n\xi f'_h \phi^{(0)}_h\phi^{(0)'}_h \right)}{n\left(1-\xi\left(\phi^{(0)}_h\right)^2\right)},\end{aligned}$$ wherein $f'_h$ and $\phi^{(0)'}_h$ represent the derivatives of $f(r)$ and $\phi^{(0)}(r)$ evaluated at $r_h$. Our aim is to reduce the momentum constraint (\[mo\]) to the hydrodynamic equation on the hypersurface. For this purpose, we also need to make order estimations for the right hand side (RHS) of (\[mo\]). Since $g_{\mu\nu} n^{\mu}h^{\nu}{}_b =0$, we have $$\begin{aligned} \tilde{T}_{\mu \nu}n^{\mu}h^{\nu}{}_b &= \frac{1}{1-\xi\phi^2} (\partial_{\mu} \phi \partial_{\nu} \phi - \xi \nabla_\mu\nabla_\nu \phi^2 ) n^{\mu}h^{\nu}{}_b \nonumber \\ & = \frac{1}{1-\xi\phi^2}((1-2\xi)n^{\mu} \partial_{\mu} \phi \partial_{b} \phi - 2\xi\phi n^{\mu} \nabla_\mu\nabla_b \phi). \label{RHS}\end{aligned}$$ We see that many terms in $\tilde{T}_{\mu \nu}$ drop out after the hypersurface projection. This makes the order estimation a lot easier. To estimate the order of (\[RHS\]), let us first look at the $\tau$ component. 
Since $$\begin{aligned} n^{\mu} \partial_{\mu} \phi \partial_{\tau} \phi & = n_r \partial^r \phi \partial_{\tau} \phi = \frac{1}{\sqrt{f}} \partial^{r} \phi \partial_{\tau} \phi \sim \mathcal{O}(\lambda^2) , \\ n^{\mu} \nabla_\mu \nabla_\tau \phi &= n^{\mu} (\partial_\mu\partial_\tau \phi - \Gamma^\nu{}_{\mu\tau}\partial_\nu \phi) =\lambda \partial_\tau \partial_\tau \lambda \phi^{(1)} \sim \mathcal{O}(\lambda^2),\end{aligned}$$ where we have used the fact that $\Gamma^\nu{}_{\mu\tau}=0$ and that $\phi^{(1)}$ is $r$-independent, we see that $\tilde{T}_{\mu \nu}n^{\mu} h^{\nu}{}_\tau$ is of order $\mathcal{O}(\lambda^2)$. Similarly, since $$\begin{aligned} n^{\mu} \partial_{\mu} \phi \partial_i \phi & = n_r \partial^r \phi \partial_i \phi = \frac{1}{\sqrt{f}} \partial^{r} \phi \partial_i\phi \sim \mathcal{O}(\lambda^{2}), \\ n^{\mu} \nabla_\mu \nabla_i \phi &= n^{\mu} (\partial_\mu \partial_i \phi - \Gamma^\nu{}_{\mu i} \partial_\nu \phi) = \lambda \partial_\tau \partial_i \lambda \phi^{(1)} - n^{r} \Gamma^j{}_{r j} \partial_i \lambda \phi^{(1)} \\ & = \lambda^2 \partial_\tau \partial_i \phi^{(1)} - \frac{\lambda \sqrt{f}}{r} \partial_i \phi^{(1)} \sim \mathcal{O}(\lambda^2),\end{aligned}$$ we find that $\tilde{T}_{\mu \nu}n^{\mu} h^{\nu}{}_i$ is also of order $\mathcal{O}(\lambda^2)$. Putting these results together, we conclude that the RHS of (\[mo\]) is a quantity of order $\mathcal{O}(\lambda^2)$ in the near horizon expansion. Fluid dynamics on hypersurface ============================== In terms of the Brown-York stress tensor, the momentum constraint (\[mo\]) can be rewritten as $$\begin{aligned} D_{a}t^{a}{ }_{b}= -\tilde{T}_{\mu \nu} n^{\mu}h^{\nu}{}_b. \label{divnf}\end{aligned}$$ We have shown in the last section that the RHS of the above equation is $\mathcal{O}(\lambda^2)$ in the near horizon expansion. It remains to consider the near horizon expansion of the left hand side (LHS). To begin with, let us look at the temporal component. 
We have $$\begin{aligned} D_{a}t^{a}{ }_{\tau}&=D_{\tau} t^{\tau}{ }_{\tau}+D_{i} t^{i}{ }_{\tau} \nonumber \\ &= D_{\tau} t^{\tau}{ }_{\tau} - \frac{1}{\lambda^2}D_{i}(t^{\tau}{ }_{j}h^{ij}). \label{taucomp}\end{aligned}$$ The near horizon expansion of each term behaves as $$\begin{aligned} & D_{\tau} t^{\tau}{ }_{\tau} = D_{\tau} t^{\tau}{ }_{\tau}^{(1)}+\cdots \sim \mathcal{O}(\lambda^{1}), \nonumber \\ & \frac{1}{\lambda^2}D_{i}(t^{\tau}{ }_{j}h^{ij}) = \frac{1}{\lambda} h^{ij(0)}D_{i} t^{\tau}{ }_{j}^{(1)}+\cdots \sim \mathcal{O} (\lambda^{-1}).\end{aligned}$$ So, the leading order term of (\[taucomp\]) is $\frac{1}{\lambda} h^{ij(0)}D_{i}t^{\tau}{ }_{j}^{(1)}$ at $\mathcal{O}(\lambda^{-1})$. Since the RHS of (\[divnf\]) is of order $\mathcal{O}(\lambda^{2})$, we get the following identity at the order $\mathcal{O}(\lambda^{-1})$: $$\begin{aligned} D_{i}t^{\tau i(1)}=0. \label{divfree}\end{aligned}$$ Next we consider the spatial components of the momentum constraint. The LHS reads $$\begin{aligned} & D_{a}t^{a}{ }_{i}=D_{\tau}t^{\tau}{ }_{i}+D_{j}t^{j}{ }_{i}.\end{aligned}$$ Inserting (\[d\]) into the above equation and noticing that the constant $C_h$ has no contribution after taking the derivative, we get the following result at order $\mathcal{O}(\lambda^1)$, $$\begin{aligned} & D_{\tau}t^{\tau}{ }_{i}^{(1)} + \frac{\alpha}{\sqrt{f'_h}} \left(2t^{\tau}{ }_{i}^{(1)} D^{k}t^{\tau}{}_{k}^{(1)}+2t^{\tau j (1)}D_{j}t^{\tau}{ }_{i}^{(1)}-D^{k} (D_{i}t^{\tau}{ }_{k}^{(1)}+D_{k}t^{\tau}{ }_{i}^{(1)}) -D_{j}\hat {R}^{j}{ }_{i}\right) + \frac{1}{n}D_{k}t^{(1)} \nonumber \\ & = \partial_{\tau}t^{\tau}{ }_{i}^{(1)} +\frac{\alpha}{\sqrt{f'_h}}\left(2t^{\tau j(1)}D_{j}t^{\tau}{ }_{i}^{(1)} -D_{k}D^{k}t^{\tau}{ }_{i}^{(1)}-\hat{R}^{k}{ }_{i}t^{\tau}{ }_{k}^{(1)}\right) +\frac{1}{n}D_{k}t^{(1)}. 
\label{bdry2}\end{aligned}$$ Once again, since the RHS of eq.(\[divnf\]) is $\mathcal{O}(\lambda^2)$ in the near horizon expansion, we get the following nontrivial equation in the leading order $\mathcal{O}(\lambda^1)$: $$\begin{aligned} \partial_{\tau}t^{\tau}{ }_{i}^{(1)} +\frac{\alpha}{\sqrt{f'_h}}\left(2t^{\tau j(1)}D_{j}t^{\tau}{ }_{i}^{(1)} -D_{k}D^{k}t^{\tau}{ }_{i}^{(1)}-\hat{R}^{k}{ }_{i}t^{\tau}{ }_{k}^{(1)}\right) +\frac{1}{n}D_{k}t^{(1)} = 0. \label{hy}\end{aligned}$$ Though it seems surprising, the scalar field indeed makes no contribution in the leading order, regardless of what kind of boundary condition we impose on it. Therefore, in the leading order approximation, one need not consider the scalar field as an independent degree of freedom. Now using the so-called holographic dictionary $$\begin{aligned} t^{\tau}{ }_{i}^{(1)}=\frac{v_{i}}{2}, \quad \frac{t^{(1)}}{n}=\frac{p}{2}, \label{notatio}\end{aligned}$$ where $v_i$ and $p$ are respectively the velocity and pressure of the dual fluid on the hypersurface, eq.(\[hy\]) becomes the standard Navier-Stokes equation on the curved hypersurface, i.e. $$\begin{aligned} \partial_{\tau}v_{i}+D_{k}p+ 2v^{j}D_{j}v_{i}-D_{k} D^{k}v_{i} - \hat{R}^{m}{ }_{i}v_{m} = 0,\end{aligned}$$ where we have taken $\alpha = \sqrt{f'_h}$ as part of our convention. Meanwhile, eq.(\[divfree\]) becomes $$\begin{aligned} D_i v^i =0,\end{aligned}$$ which can easily be identified as the incompressibility condition for the dual fluid. Concluding remarks ================== Imposing Petrov-like boundary conditions on a near horizon hypersurface, we have been able to establish a fluid dual for the nonminimally coupled scalar-tensor theory of gravity. The resulting Navier-Stokes equation does not contain an external force term, as opposed to most of the previously known example cases. 
The absence of an external force term is due to the fact that the fluctuations of the scalar field do not contribute in the lowest nontrivial order in the near horizon expansion. We note that the only previously known case in which the force term is missing from the fluid dual of gravity with a matter source is [@Ling]. The works presented in [@Ling] and in the present paper naturally raise the following question: why is the force term missing in the fluid dual of some theories of gravity with matter sources? Currently we do not have an answer at hand, but the question deserves further attention. [99]{} M. Maldacena, “The Large N limit of superconformal field theories and supergravity,” [*Advances in Theoretical and Mathematical Physics*]{} (1998) [**2**]{}: 231-252. S. A. Hartnoll, C. P. Herzog and G. T. Horowitz, “Building a Holographic Superconductor,” Phys. Rev. Lett.  [**101**]{}, 031601 (2008) \[\]. R. Baier, P. Romatschke, D. T. Son, A. O. Starinets and M. A. Stephanov, “Relativistic viscous hydrodynamics, conformal invariance, and holography,” JHEP [**0804**]{}, 100 (2008) \[\]. S. Bhattacharyya, V. E. Hubeny, S. Minwalla and M. Rangamani, “Nonlinear Fluid Dynamics from Gravity,” JHEP [**0802**]{}, 045 (2008) \[\]. M. Haack and A. Yarom, “Nonlinear viscous hydrodynamics in various dimensions using AdS/CFT,” JHEP [**0810**]{}, 063 (2008) \[\]. C. Eling and Y. Oz, “Relativistic CFT Hydrodynamics from the Membrane Paradigm,” JHEP [**1002**]{}, 069 (2010) \[\]. S. Bhattacharyya, S. Minwalla and S. R. Wadia, “The Incompressible Non-Relativistic Navier-Stokes Equation from Gravity,” JHEP [**0908**]{}, 059 (2009) \[\]. T. Ashok, “Forced Fluid Dynamics from Gravity in Arbitrary Dimensions,” \[\]. I. Bredberg, C. Keeler, V. Lysov and A. Strominger, “Wilsonian Approach to Fluid/Gravity Duality,” JHEP [**1103**]{}, 141 (2011) \[\]. I. Bredberg, C. Keeler, V. Lysov and A. 
Strominger, “From Navier-Stokes To Einstein,” JHEP [**1207**]{}, 146 (2012) \[\]. R. -G. Cai, L. Li and Y. -L. Zhang, “Non-Relativistic Fluid Dual to Asymptotically AdS Gravity at Finite Cutoff Surface,” JHEP [**1107**]{}, 027 (2011) \[\]. Y. Ling, C. Niu, Y. Tian, X. -N. Wu and W. Zhang, “A note on the Petrov-like boundary condition at finite cutoff surface in Gravity/Fluid duality,” \[\]. G. Compere, P. McFadden, K. Skenderis and M. Taylor, “The Holographic fluid dual to vacuum Einstein gravity,” JHEP [**1107**]{}, 050 (2011) \[\]. R. -G. Cai, T. -J. Li, Y. -H. Qi and Y. -L. Zhang, “Incompressible Navier-Stokes Equations from Einstein Gravity with Chern-Simons Term,” Phys. Rev. D [**86**]{}, 086008 (2012) \[\]. C. Niu, Y. Tian, X. -N. Wu and Y. Ling, “Incompressible Navier-Stokes Equation from Einstein-Maxwell and Gauss-Bonnet-Maxwell Theories,” Phys. Lett. B [**711**]{}, 411 (2012) \[\]. R. -G. Cai, L. Li, Z. -Y. Nie and Y. -L. Zhang, “Holographic Forced Fluid Dynamics in Non-relativistic Limit,” Nucl. Phys. B [**864**]{}, 260 (2012) \[\]. G. Chirco, C. Eling and S. Liberati, “Higher Curvature Gravity and the Holographic fluid dual to flat spacetime,” JHEP [**1108**]{}, 009 (2011) \[\]. D. -C. Zou, S. -J. Zhang and B. Wang, “Holographic charged fluid dual to third order Lovelock gravity,” Phys. Rev. D [**87**]{}, no. 8, 084032 (2013) \[\]. V. Lysov and A. Strominger, “From Petrov-Einstein to Navier-Stokes,” \[\]. X. Wu, Y. Ling, Y. Tian and C. Zhang, “Fluid/Gravity Correspondence for General Non-rotating Black Holes,” Class. Quant. Grav.  [**30**]{} (2013) 145012 \[\]. R. -G. Cai, L. Li, Q. Yang and Y. -L. Zhang, “Petrov type $I$ Condition and Dual Fluid Dynamics,” JHEP [**1304**]{}, 118 (2013) \[\]. B. Wu and L. Zhao, “Gravity-mediated holography in fluid dynamics,” Nucl. Phys. B [**874**]{}, 177 (2013) \[\]. S. Bhattacharyya, R. Loganayagam, S. Minwalla, S. Nampuri, S. P. Trivedi and S. R. 
Wadia, “Forced Fluid Dynamics from Gravity,” JHEP [**0902**]{}, 018 (2009) \[\]. C. -Y. Zhang, Y. Ling, C. Niu, Y. Tian and X. -N. Wu, “Magnetohydrodynamics from gravity,” Phys. Rev. D [**86**]{}, 084043 (2012) \[\]. X. Bai, Y. -P. Hu, B. -H. Lee and Y. -L. Zhang, “Holographic Charged Fluid with Anomalous Current at Finite Cutoff Surface in Einstein-Maxwell Gravity,” JHEP [**1211**]{}, 054 (2012) \[\]. C. Martinez, R. Troncoso and J. Zanelli, “De Sitter black hole with a conformally coupled scalar field in four-dimensions,” Phys. Rev. D [**67**]{}, 024008 (2003) \[\]. W. Xu and L. Zhao, “Charged black hole with a scalar hair in (2+1) dimensions,” Phys. Rev. D [**87**]{}, 124008 (2013) \[\]. M. Nadalini, L. Vanzo and S. Zerbini, “Thermodynamical properties of hairy black holes in n spacetime dimensions,” Phys. Rev. D [**77**]{}, 024047 (2008) \[\]. J. D. Brown and J. W. York, “Quasilocal energy and conserved charges derived from the gravitational action,” Phys. Rev. D [**47**]{}, 1407 (1993) \[\].
--- abstract: 'X-rays trace accretion onto compact objects in binaries with low mass companions at rates ranging up to near Eddington. Accretion at high rates onto neutron stars goes through cycles with time-scales of days to months. At lower average rates the sources are recurrent transients; after months to years of quiescence, during a few weeks some part of a disk dumps onto the neutron star. Quasiperiodic oscillations near 1 kHz in the persistent X-ray flux attest to circular motion close to the surface of the neutron star. The neutron stars are probably inside their innermost stable circular orbits and the x-ray oscillations reflect the structure of that region. The long term variations show us the phenomena for a range of accretion rates. For black hole compact objects in the binary, the disk flow tends to be in the transient regime. Again, at high rates of flow from the disk to the black hole there are quasiperiodic oscillations in the frequency range expected for the innermost part of an accretion disk. There are differences between the neutron star and black hole systems, such as two oscillation frequencies versus one. For both types of compact object there are strong oscillations below 100 Hz. Interpretations differ on the role of the nature of the compact object.' address: | Laboratory for High Energy Astrophysics\ NASA/GSFC Greenbelt, MD 20771 author: - 'Jean H. Swank' title: 'X-Ray Observations of Low-Mass X-Ray Binaries: Accretion Instabilities on Long and Short Time-Scales' --- \#1[[A&A,]{} [\#1]{}]{} \#1[[Acta Astr.,]{} [\#1]{}]{} \#1[[A&AS,]{} [\#1]{}]{} \#1[[ARA&A,]{} [\#1]{}]{} \#1[[AJ,]{} [\#1]{}]{} \#1[[ApJ,]{} [\#1]{}]{} \#1[[ApJS,]{} [\#1]{}]{} \#1[[MNRAS,]{} [\#1]{}]{} \#1[[Nature,]{} [\#1]{}]{} \#1[[PASJ,]{} [\#1]{}]{} Introduction {#introduction .unnumbered} ============ Low-mass X-ray binaries (LMXB) are the binaries of a low-mass “normal” star and a compact star. The compact star could be a white dwarf, a neutron star, or a black hole. 
The Rossi X-Ray Timing Explorer ([*RXTE*]{}) has been observing since the beginning of 1996 and has obtained qualitatively new information about the neutron star and black hole systems. In this paper I review the new results briefly in the context of what we know about these sources. The brightest, Sco X–1, was one of the first non-solar X-ray sources detected, but only with [*RXTE*]{} have sensitive measurements with high time resolution been made that could detect dynamical time-scales in the region of strong gravity. [*RXTE*]{} also has a sky monitor with a time-scale of hours that keeps track of the long term instabilities and enables in depth observations targeted to particular states of the sources. The LMXB have a galactic bulge or Galactic Population II distribution. The mass donor generally fills its Roche lobe, is less than a solar mass, and is optically faint, in contrast to the early type companions of pulsars like Cen X–3 or the black hole candidate Cyg X–1. In many cases the optical emission is dominated by emission from the accretion disk, and that is dominated by reprocessing of the X-ray flux from the compact object [@vPM95]. The known orbital periods of these binaries range from 16 days (Cir X–1) to 11 minutes (4U 1820–30). The very short period systems ($< 1$ hr) are expected to have degenerate dwarf mass donors and probably the mass transfer is being driven by gravitational radiation. The different properties of the sources indicate several populations. The longer period systems with more massive companions are probably slightly evolved from the main sequence. There are about 50 persistent neutron star LMXB [@vP95]. Distances can be estimated in a variety of ways. The hydrogen column density indicated by the X-ray spectrum should include a minimum amount due to the interstellar medium. Many of the sources emit X-ray bursts associated with thermonuclear flashes that reach the hydrogen or helium Eddington limits. 
In some cases the optical source provides clues. The resulting luminosity distribution appears to range from several times the Eddington limit for a neutron star down below the luminosity of about $10^{35}$ ergs s$^{-1}$, corresponding to $\approx 10^{-11}$ $\msun$ yr$^{-1}$ [@CS97]. The lower limit has come from instrument sensitivity, but it may also reflect the luminosity below which the accretion flow is not steady, so that the source must be a transient. “X-Ray Novae” that are among the brightest X-ray sources for a month to a year are sufficiently frequent that they were seen in rocket flights in the beginning of X-ray astronomy. The X-ray missions that monitored parts of the sky during the last three decades found that on average there are 1–2 very bright transient sources each year (e.g. [@CSL97]) with durations of a month to a year. In 5 years of [*RXTE*]{} operations, we know of 20 transient neutron star sources and an equal number of transient black hole sources. If they have a 20 yr recurrence time we have seen only a quarter of them, and if we have only been watching a third of the region in the sky, the 20 observed sources imply that more than 240 sources exist. In reality there is a distribution of recurrence times, some as short as months, others longer than 50 years, if optical records are good. On the basis of such estimates, the number of potential black hole transients is estimated to be on the order of thousands [@TS96]. The separation of sources into persistent and transient sources is a very gross simplification. One of the discoveries of recent missions, and especially of the All Sky Monitor (ASM) [@Bradt00], has been that the persistent sources have cycles of variations with time-scales ranging from many months to days. If the transient outbursts originate in accretion instabilities, perhaps these variations are related. In the next section I show some of the kinds of behavior being observed.
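The population estimate above follows from a simple scaling; a minimal numerical sketch, using only the round numbers quoted in the text, makes the arithmetic explicit:

```python
# Order-of-magnitude population estimate for transient LMXB, following
# the reasoning in the text; the inputs are the round numbers quoted
# there, not a rigorous census.
observed = 20             # transients of one class seen by RXTE
t_monitor = 5.0           # years of RXTE monitoring
recurrence = 20.0         # assumed recurrence time, years
sky_fraction = 1.0 / 3.0  # fraction of the relevant sky region watched

# Only a fraction t_monitor/recurrence of the population is expected
# to have recurred during the monitoring window.
duty = t_monitor / recurrence

total = observed / (duty * sky_fraction)
print(round(total))  # -> 240
```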
At radii close to the compact objects the dynamical time-scale gets shorter, till it is the milliseconds of the neutron star or black hole. RXTE’s large area detectors detect oscillations on these time-scales which must reflect the dynamics at the innermost stable circular orbit (ISCO) of these neutron stars and black holes. The neutron stars of this sample are expected to have magnetic dipole moments and surface fields about $10^8 - 10^9$ gauss. Of course the neutron stars have a surface such that matter falling from the accretion disk to the neutron star crashes into the surface and generates X-ray emission. In the case of the black holes matter could fall through the event horizon and disappear with no further emission of energy. Thus the X-rays produced and the dynamics that dominates in the two cases (neutron star versus black hole) could be different. However, a number of similarities appear in the signals we receive. Long Time-scale Variabilities {#long-time-scale-variabilities .unnumbered} ============================= High Accretion Rate - Persistent Sources {#high-accretion-rate---persistent-sources .unnumbered} ---------------------------------------- Among the persistent LMXB there are characteristic variations on time-scales of months in some sources and days in others[@Bradt00]. Quasiperiodic modulations were pointed out at 37 days for Sco X–1 (IAUC 6524), 24.7 days for GX 13+1 (IAUC 6508), 77.1 days for Cyg X–2 (IAUC 6452), 37 days for X 2127+119 in M15 (IAUC 6632). The obviously important, but not strictly periodic modulations in 4U 1820–30 and 4U 1705–44 at time-scales of 100–200 days are shown in Figure 1. For Sco X–1, the changes in activity level occur in a day and the activity time-scale is hours. The hardness is often correlated with the rate, although this measure does not bring out more subtle spectral changes. 
These time-scales are less regular than the 34 day cycle time of Her X-1, and similar modulations in LMC X-4 and SMC X-1, which are thought to be due to the precession of a tilted accretion disk. The latter sources are high magnetic field pulsars in which the disk is larger than in the LMXB, and is truncated by the magnetosphere at a radius as large as $10^8$ cm. The LMXB spectral changes are also different from those of the pulsars. In the LMXB case the changes are thought to be real changes in the accretion onto the neutron star, at least the production of X-rays, rather than a change in an obscuration of the X-rays that we see. The spectral changes are captured in the color-color diagrams that give rise to the names “Z” and “Atoll” for subsets of the LMXB. These were identified with EXOSAT observations by Hasinger and van der Klis [@HvdK89]. Characteristics of the bursts from 4U 1636–53 depended on the place of the persistent flux in the atoll color-color diagram [@vdK90]. This implied that the real mass accretion rate was correlated with the position on the diagram (although other possibilities such as the distribution of accreted material on the surface of the neutron star may play a role). That the position in the diagram is not uniquely correlated with the flux is as yet not understood. Transient atoll sources like Aql X–1 and 4U 1608–52 go around the atoll diagram during the progress of the outburst. Low Average Accretion Rate - Transients {#low-average-accretion-rate---transients .unnumbered} --------------------------------------- There are only a few persistent LMXB in which the compact object is a black hole. Black hole binaries are for some reason more likely to be transients. Perhaps the binaries harboring them are not being driven to have as much mass exchange, so that it happens that these systems are in the range of mass flow through the disk that makes them transient. There are also neutron star transients with low average mass exchange rates.
Figure 2 shows on the left two neutron star transients, the well-known atoll burster Aql X–1 and the pulsar GRO J1744–28, which had two outbursts a year apart, but has otherwise not been seen. On the right are two black hole candidates, 4U 1630–47, which recurs approximately every two years, and XTE J1550–564, which, like GRO J1744–28, had a dramatic outburst, with a weaker recurrence after a year’s hiatus. Black hole candidates can get brighter than the transient bursters, consistent with the Eddington limit for more massive compact objects, and they probably go through more different spectral and timing “states”, but there are also similarities in the kinds of behavior that are exhibited. From both BeppoSAX and RXTE results it is clear that there is a population of systems which have transient episodes, but which are an order of magnitude less luminous at peak. BeppoSAX has seen bursts from a number of sources for which the persistent flux is below its sensitivity limit. RXTE has seen a dozen sources which may not rise above $10^{36}$ ergs s$^{-1}$ during transient episodes. Several of these are believed to be neutron stars because Type I (cooling) bursts were observed. They include the source SAX J1808.4-3658, unique to date in that it both pulses (2.5 msec) and has Type I bursts. Some sources have spectral and timing properties consistent with black hole candidates which go into the black hole “low hard” state, with strong white noise variability below 10 Hz and hard spectra. One of these was V4641 Sgr, which went into a much brighter outburst, with a radio jet, before disappearing.
Instabilities Close to the Compact Object {#instabilities-close-to-the-compact-object .unnumbered} ========================================= Kilohertz Oscillations for Neutron Stars - near the ISCO {#kilohertz-oscillations-for-neutron-stars---near-the-isco .unnumbered} -------------------------------------------------------- More than 22 LMXB have now exhibited a signal at kilohertz frequencies in the power spectra of the X-ray flux (See [@vdK00]). Figure 3 (thanks to T. Strohmayer) shows results for samples of data from an atoll and a Z source. Usually this signal consists of two peaks with amplitudes of 1–15%. They indicate quasi-periodic oscillations with coherence (mean frequency/frequency width) as high as 100. The centroid frequencies are not constant for a source, but vary. Over a few hours the frequency is correlated with the X-ray flux, increasing with the flux. Flux variations of a factor of two are correlated with changes of frequency between approximately 500 Hz and 1000 Hz [@SSZ98]. The highest reported is 1330 Hz, from 4U 0614+09. Considering that for a circular orbit at the Kepler radius $r_K$, the observed frequency is $(2183/M_1) (r_{ISCO}/r_K)^{3/2}$ Hz, where $r_{ISCO} = 6GM/c^2$ is the innermost stable circular orbit for a spherical mass $M = M_1 \msun$ with radius smaller than $r_{ISCO}$, neutron stars of masses $M_1$ = 1.6–2.0 would have Kepler frequencies at the ISCO of just such maximum frequencies as are observed. While the luminosities of the sources exhibiting these QPO range from $10^{36}$ ergs s$^{-1}$ to above $10^{38}$ ergs s$^{-1}$, the maximum values of the upper frequency range only between 820 Hz and 1330 Hz. This suggests [@Zhang98; @Kaaret99] that it represents a characteristic of the neutron stars fairly independently of the accretion rate. The ISCO and the neutron star radius are candidates. For lower fluxes, the frequencies, at least locally in the light curve, decline, as if the Kepler orbit were further out.
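The coefficient in the frequency formula above can be checked directly from $r_{ISCO}=6GM/c^2$. A minimal sketch, assuming a non-rotating (Schwarzschild) star and standard values of the constants; the small difference from the quoted 2183 Hz comes from the constants used:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

def f_isco(m1):
    """Orbital (Kepler) frequency in Hz at r = 6GM/c^2 for a
    non-rotating mass of m1 solar masses."""
    M = m1 * M_SUN
    r = 6.0 * G * M / c**2
    return math.sqrt(G * M / r**3) / (2.0 * math.pi)

print(round(f_isco(1.0)))  # ~2198 Hz per solar mass
# If the highest reported QPO, 1330 Hz, is a Kepler frequency at the
# ISCO, the implied mass is ~1.65 Msun, inside the 1.6-2.0 range
# quoted in the text.
print(round(f_isco(1.0) / 1330.0, 2))
```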
Which is more likely, that the inner radius is then at the ISCO or at the radius of the neutron star? In the latter case the neutron star is outside the innermost stable circular orbit. Understanding the boundary requires consideration of the radiation pressure, the magnetic fields, and the optical depth of the inner disk. For sources with flux near the Eddington limit, the optical depth of the material near the surface should be much larger than the optical depth of the material accreting at rates 100 times less. If the inner disk is at the ISCO and the neutron stars are fairly compact, this plausibly does not matter. If the inner disk is at the surface of a large neutron star, it seems hard to explain the similarity of appearance between luminous Z sources and fainter atoll sources. There are in fact differences in the appearance of the QPOs; one is that the amplitude of the QPOs is larger for the atoll sources than for the Z sources. So the situation is not completely clear. If a disk is truncated at an inner radius which moves in toward the neutron star as the mass flow through the disk increases and a QPO is generated near this inner edge, the frequency would be likely to increase with the luminosity. The frequency would not be able to increase beyond the value corresponding to the minimum orbit in which the disk could persist. Miller, Lamb, and Psaltis [@MLP98] argued that if radiation drag was responsible for the termination of the disk, optical depth effects would lead to the sonic point radius moving in as the accretion rate increases. There would be a highest frequency corresponding to the minimum possible sonic point radius. In the cycles of 4U 1820–30 the frequency approached a maximum which it maintained as the flux increased further, before the feature became too broad to detect. This kind of behavior would arise from a sonic point explanation.
From Figure 4, it can be seen that if the equation of state (EOS) of the nuclear matter at the center of a neutron star is very stiff, near the L equation of state, for $1.4-2 \msun$ neutron stars the radius of the star is close to its own ISCO; whether it is inside or outside depends sensitively on the mass. If the equation of state is softer, closer to the FPS EOS, interpretation of the maximum frequencies observed as a Kepler frequency [*at the surface*]{} would imply a mass significantly less than the $1.4 \msun$ with which many neutron stars are probably formed. In either case, moderately stiff EOS and maximum frequency at the ISCO, or stiff equation of state and maximum frequency either at the ISCO or the surface, the frequency would be from near the ISCO, if not just outside it. Accurate considerations require the rotation rate of the neutron star to be taken into account. A characteristic of the twin kilohertz peaks is that when the frequency changes, the two frequencies move together, with the difference approximately constant, at least until near the maximum frequencies (and luminosities) for which they are observed in a given source. This suggests a beat frequency, and the relation between the difference frequency and the frequencies seen during bursts (See Strohmayer, this volume) suggests the neutron star spin as the origin of the beats. Miller, Lamb and Psaltis [@MLP98] explored how the two frequencies could be generated, and Lamb and Miller refined the model in agreement with the 5% changes in the frequency separation that are observed [@LMiller00]. However, this varying separation between the two QPO also suggested identification as the radial epicyclic frequency of a particle moving in an eccentric orbit in the field of the neutron star.
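The comparison of stellar radius with $r_{ISCO}$ discussed above is easy to quantify. A sketch for a non-rotating star; the L and FPS EOS radii in the comments are typical literature values, quoted here only for orientation and not taken from the text:

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

def r_isco_km(m1):
    """ISCO radius, 6GM/c^2, in km for a non-rotating mass of m1 solar masses."""
    return 6.0 * G * m1 * M_SUN / c**2 / 1.0e3

print(round(r_isco_km(1.4), 1))  # ~12.4 km
print(round(r_isco_km(2.0), 1))  # ~17.7 km
# A very stiff (L-like) EOS gives stellar radii of roughly 14-15 km,
# close to these values, so whether the star sits inside or outside
# its own ISCO flips within the 1.4-2 Msun range; a softer (FPS-like)
# radius of ~11 km lies inside the ISCO for these masses.
```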
The lower of the two frequencies is then identified, not with a beat frequency, but with the precession of the periastron [@Stella99], although efforts to fit the predictions of this model in terms of particle dynamics produce implausibly large eccentricities, neutron star masses and spins [@MarkovicL00]. Psaltis and Norman proposed that similar frequencies could be resonant in a hydrodynamic disk [@PN99]. In these models, at least in their current forms, the difference between the two QPO peaks is not related to the spin, but to something like the radial epicyclic frequency. A quite different class of models comprises those in which the disk has a boundary layer with the neutron star and the plasma is excited by the magnetic field of the neutron star [@TOK99]. The magnetic pole makes a small angle with the neutron star rotation axis. In this case the lower kilohertz QPO frequency is the Kepler frequency, while both the upper frequency and the low frequency oscillation (corresponding to the Horizontal Branch Oscillations in Z sources) are related to oscillations of plasma interacting with the rotating magnetic field. Hectohertz oscillations for Black Holes {#hectohertz-oscillations-for-black-holes .unnumbered} --------------------------------------- Although accreting neutron stars and black holes should have important differences, they both presumably have an accretion disk with an inner radius, when the mass flow is high enough. Possible signals from the ISCO of black holes were discussed when accretion onto black holes was first considered [@Suny73], and anticipation of [*RXTE*]{} inspired detailed calculations [@NW93]. The [*RXTE*]{} PCA has detected QPO in 5 black hole candidates at frequencies that are suitable to be signals from the ISCO of black holes in the range of $5 - 30 \msun$. They have been observed only in selected observations and are generally of lower amplitude (a few %) than the neutron star kilohertz QPO.
For GRS 1915+105, the frequency has always been 67 Hz [@Morgan97]. For GRO J1655–40, Remillard identified 300 Hz [@1655R99]. For XTE J1550–564, at different times it has been between 185 and 205 Hz [@1550R99]. For XTE J1859+226, a broad signal at 200 Hz is observed in the bright phases near the peak of the outburst [@Cui00]. For 4U 1630–47 as well, which has had 3 outbursts during the [*RXTE*]{} era, Remillard has reported 185 Hz. The black hole candidates have appeared to differ from the neutron stars in having one QPO rather than two. An obvious question is whether the second QPO is associated with the presence of a neutron star with a surface and a rotating magnetic dipole. Recent work by Strohmayer [@Stroh01] casts doubt on this distinction. There were other black hole candidates observed with [*RXTE*]{} which did not exhibit high frequency oscillations, and the properties of the high frequency signal are not very well defined. Interpretation in terms of Kepler frequency at the ISCO, non-radial g-mode oscillations in the relativistic region of the accretion disk, and Lense-Thirring precession have been discussed. GRO J1655–40 is very interesting because the radial velocities of absorption lines of the secondary have given a rather precise measurement of the mass. (The best estimates are so far $5.5-7.9 \msun$ [@Shahbaz99].) In this case the mass is well known and the black hole’s angular momentum can be the goal. The 300 Hz frequency is high enough that for a g-mode the black hole would have near maximal angular momentum, but if it represents a Kepler velocity, a Schwarzschild black hole would still be possible[@Wagoner98]. The question has been asked whether the microquasars GRS 1915+105 and GRO J1655–40 have powerful radio jets associated with outbursts because they have fast rotation [@Mirabel99].
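As a consistency check on the Kepler interpretation, the Schwarzschild ISCO frequency can be evaluated over the measured mass range of GRO J1655–40. A sketch with approximate constants; it deliberately ignores black hole spin, which would raise the ISCO frequency:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

def f_isco(m1):
    """ISCO orbital frequency in Hz for a non-rotating mass of m1 solar masses."""
    M = m1 * M_SUN
    r = 6.0 * G * M / c**2
    return math.sqrt(G * M / r**3) / (2.0 * math.pi)

# GRO J1655-40 mass range from radial velocities, 5.5-7.9 Msun:
print(round(f_isco(5.5)), round(f_isco(7.9)))  # ~400 Hz down to ~278 Hz
# The observed 300 Hz QPO falls inside this band, so a Kepler frequency
# at the ISCO of a non-rotating (Schwarzschild) black hole is viable.
```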
Decahertz Oscillations for Neutron Stars and Black Holes {#decahertz-oscillations-for-neutron-stars-and-black-holes .unnumbered} -------------------------------------------------------- In the Z source LMXBs the first QPOs discovered were the Horizontal Branch Oscillations (HBO), first seen by EXOSAT and then by Ginga. They occur in the range 15–50 Hz, have amplitudes as high as 30%, increase in frequency with the luminosity, and have strong harmonic structure. With [*RXTE*]{} observations the atoll LMXB have also been seen to have these signals, although often the coherence is less and there are other signals (See [@Wijnands00]). These QPO tend to be near in frequency to the break frequency of band-limited white noise at low frequencies. The black hole transients had already exhibited very similar features in Nova Muscae and GX 339–4 in the range 1–15 Hz. They have very similar properties to the HBO. [*RXTE*]{} PCA observations have found these QPO in the power spectra of most black hole candidates [@Sw01]. Different origins have been discussed for the neutron star and black hole QPOs, but their similarity is noted. Figure 5 shows examples from a Z source and a black hole candidate (See [@Focke96; @Dieters00]). The HBO were originally ascribed to a magnetic beat frequency model, assuming the Kepler frequency and the spin were both not seen. Stella and Vietri identified them with the Lense-Thirring precession (See [@Stella99]). They appear to have the correct quadratic relation to the high frequency kilohertz QPO. But the observed frequencies were too high relative to the predictions, by as much as a factor of about four. Assigning them to twice the nodal frequency, a reasonable possibility for the X-ray modulation, relieves the problem in some cases, but still leaves a factor of two in many. Psaltis argues that a magnitude discrepancy of a factor of two can be accommodated in situations where there is actually complex hydrodynamic flow rather than single particle orbits [@Psaltis01].
In the case of the black holes, the energy spectra seem to distinguish contributions of an optically thick disk and non-thermal, that is “power-law”, emission, attributed to scattering of low energy photons off more energetic electrons. This division of components is not observationally so clear in the neutron star LMXB. (There are many plausible reasons for this: a lower central mass and smaller inner disk, X-rays generated on infall to the surface, and a possible spinning magnetic dipole.) For the black hole transients, this low frequency QPO is clearly a modulation of the power-law photons. However, there appear to be a variety of correlations with the disk behavior, so that the two components are clearly coupled. In the case of the neutron stars Psaltis, Belloni, and van der Klis [@PBvdK99] have noted that the HBO and the lower kilohertz oscillation are correlated over a broad range of frequency (1–1000 Hz). Wijnands and van der Klis [@WvdK99] showed that [*both*]{} the noise break and the low frequency QPO are correlated in the same way for certain neutron stars and black hole candidates. Psaltis [*et al.*]{} went on to point out that if some broad peaks in the power spectra of some black holes were taken to correspond to the lower kilohertz frequency in the neutron star sources, these points also fell approximately on the same relation. While the degree to which this relation is meaningful can be questioned, given the scatter in the points, selection effects, and the distinction of more than one branch of behavior, recent work suggests that in some way three characteristic frequencies of the disk in a strong gravitational field are significant, where these correspond to Kepler motion, precession of the periastron, and nodal precession. There remain difficulties, however, with specific assignments.
It has often been noted that different interpretations implied weaker features in the power spectrum, for example modulation of frequencies by the Lense-Thirring precession [@MarkovicL00] or excitation of higher modes in the case of g-modes [@Wagoner98]. In the case of the neutron stars, adding together large amounts of data to build up the statistical signal, Sco X-1 did not show sidebands [@Mendez00], but Jonker et al. [@Jonker00] found evidence of sidebands displaced by about 60 Hz from the lower kilohertz frequency in three sources. The frequency separation is not the same as the low frequency QPO in those sources, although it is in the same range, and Psaltis argues it is close enough that second order effects can be responsible for the difference. It is not clear yet whether the sidebands imply a modulation of the amplitude or whether they represent a beat phenomenon and are one-sided. Conclusions {#conclusions .unnumbered} =========== While it has not yet been possible to fit all the properties of LMXB neatly into a model, it is hard to imagine alternatives for some important results. One of these is that, in accordance with the theory of General Relativity, there is an innermost stable orbit, such that quasistatic disk flow does not persist inside it. Nuclear matter at high densities does not have so stiff an equation of state that the neutron star extends beyond the ISCO. Instead the results suggest the neutron star lies inside the ISCO for its mass. The accretion flows for both neutron stars and black holes have resonances which, from the observations, are apparently successfully coupled to the X-ray flux. QPO are observed with high coherence. They can already be compared to assignments of various frequencies, but they do not match exactly with the identifications that have been made. However, before it is possible to use them as diagnostics of gravity, it is necessary to sort out further the physics of the situations.
Extending the measurements to signals an order of magnitude fainter taxes even the abilities of [*RXTE*]{}. Continued observations are pushing the limits lower by reducing statistical errors, but must deal with intrinsic source variability on longer time-scales. Observations are also being sought of especially diagnostic combinations of flux and other properties. van Paradijs, J., and McClintock, J. E., “Optical and Ultraviolet Observations of X-Ray Binaries”, in [*X-Ray Binaries*]{}, edited by W. H. G. Lewin, J. van Paradijs, and E. P. J. van den Heuvel, Cambridge Univ. Press, New York, 1995, pp. 58-125. van Paradijs, J., “A Catalogue of X-Ray Binaries”, in [*X-Ray Binaries*]{}, edited by W. H. G. Lewin, J. van Paradijs, and E. P. J. van den Heuvel, Cambridge Univ. Press, New York, 1995, pp. 536-577. Christian, D. J., and Swank, J. H., [*ApJS*]{}, [**109**]{}, 177-224 (1997). Chen, W., Shrader, C. R., and Livio, M., [*ApJ*]{}, [**491**]{}, 312-338 (1997). Tanaka, Y., and Shibazaki, N., [*Annu. Rev. A&A*]{}, [**34**]{}, 607-644 (1996). Bradt, H., Levine, A. M., Remillard, R. E., Smith, D. A., “Transient X–Ray Sources Observed with the RXTE All-Sky Monitor after 3.5 Years”, in [*Multifrequency Behaviour of High Energy Cosmic Sources: III*]{}, edited by F. Giovannelli and Lola Sabau-Graziati, [*Mem SAIt*]{}, [**71**]{}, (2000), in press. Hasinger, G. and van der Klis, M., [*A&A*]{}, [**225**]{}, 79-96 (1989). van der Klis, M., Hasinger, G., Damen, E., Penninx, W., van Paradijs, J., and Lewin, W. H. G., [*ApJ*]{}, [**360**]{}, L19-L22 (1990). van der Klis, M., [*Annu. Rev. A&A*]{}, [**38**]{}, 717-760 (2000). Strohmayer, T. E., Swank, J. H., and Zhang, W., “The periods discovered by [*RXTE*]{} in thermonuclear flash bursts”, in [*The Active X-Ray Sky*]{}, edited by L. Scarsi, H. Bradt, P. Giommi, and F. Fiore, Elsevier, New York, 1998, pp. 129-134. Zhang, W., Smale, A. P., Strohmayer, T. E., and Swank, J. H., [*ApJ*]{}, [**500**]{}, L171-L174 (1998).
Kaaret, P., Piraino, S., Bloser, P. F., Ford, E. C., Grindlay, J. E., Santangelo, A., Smale, A. P., and Zhang, W., [*ApJ*]{}, [**520**]{}, L37-L40 (1999). Miller, M. C., Lamb, F. K., and Psaltis, D., [*ApJ*]{}, [**508**]{}, 791-830 (1998). Lamb, F. K., and Miller, M. C., [*ApJ*]{}, submitted (2000) (astro-ph/0007460). Stella, L., Vietri, M., and Morsink, S. M., [*ApJ*]{}, [**524**]{}, L63-L66 (1999). Markovic, D., and Lamb, F. K., [*MNRAS*]{}, submitted (2000) (astro-ph/0009169). Psaltis, D., and Norman, C., [*ApJ*]{}, submitted (1999) (astro-ph/0001391). Titarchuk, L., Osherovich, V., and Kuznetsov, S., [*ApJ*]{}, [**525**]{}, L129-L132 (1999). Sunyaev, R., [*Sov. Astronom. AJ*]{}, [**16**]{}, 941-946 (1973). Nowak, M. A., and Wagoner, R. V., [*ApJ*]{}, [**418**]{}, 187-201 (1993). Morgan, E. H., Remillard, R. E., and Greiner, J., [*ApJ*]{}, [**482**]{}, 993-1009 (1997). Remillard, R. E., Morgan, E. H., McClintock, J. E., Bailyn, C. D., and Orosz, J. A., [*ApJ*]{}, [**522**]{}, 397-412 (1999). Remillard, R. E., McClintock, J. E., Sobczak, G. J., Bailyn, C. D., Orosz, J. A., Morgan, E. H., and Levine, A. M., [*ApJ*]{}, [**517**]{}, L127-L130 (1999). Cui, W., Shrader, C. R., Haswell, C. A., and Hynes, R. I., [*ApJ*]{}, [**535**]{}, L123-L127 (2000). Strohmayer, T. E., [*ApJ*]{}, submitted (2001). Shahbaz, T., van der Hooft, F., Casares, J., Charles, P. A., and van Paradijs, J., [*MNRAS*]{}, [**306**]{}, 89-94 (1999). Wagoner, R., [*Phys. Rep.*]{}, [**311**]{}, 259-269 (1998) (astro-ph/9805028). Mirabel, I. F., and Rodriguez, L. F., [*Ann. Rev. A&A*]{}, [**37**]{}, 409-443 (1999). Wijnands, R., [*Adv. Space Res.*]{}, submitted (2000) (astro-ph/0002074). Swank, J. H., “Disk Corona Oscillations”, in [*The Third Microquasar Workshop*]{}, editors A. Castrado and J. Greiner, in press (2000) (astro-ph/0011494). Focke, W., [*ApJ*]{}, [**470**]{}, L127-L130 (1996). Dieters, S. [*et al.*]{}, [*ApJ*]{}, [**538**]{}, 307-314 (2000).
Psaltis, D., [*ApJ*]{}, submitted (2000) (astro-ph/0101118). Psaltis, D., Belloni, T., and van der Klis, M., [*ApJ*]{}, [**520**]{}, 262-270 (1999). Wijnands, R., and van der Klis, M., [*ApJ*]{}, [**514**]{}, 939-944 (1999). Mendez, M., and van der Klis, M., [*MNRAS*]{}, [**318**]{}, 938-942 (2000). Jonker, P. G., Mendez, M., and van der Klis, M., [*ApJ*]{}, [**540**]{}, L29-L32 (2000).
--- abstract: | Photoinduced IR absorption was measured in undoped (LaMn)$_{1-\delta }$O$% _{3} $ and (NdMn)$_{1-\delta }$O$_{3}$. We observe broadening and a $\sim $44% increase of the midinfrared anti-Jahn-Teller polaron peak energy when La$^{3+}$ is replaced with smaller Nd$^{3+}$. The absence of any concurent large frequency shifts of the observed PI phonon bleaching peaks and the Brillouin-zone-center internal perovskite phonon modes measured by Raman and infrared spectroscopy indicate that the polaron peak energy shift is mainly a consequence of an increase of the electron phonon coupling constant with decreasing ionic radius $\left\langle r_{A}\right\rangle $ on the perovskite A site. This indicates that the dynamical lattice effects strongly contribute to the electronic band narrowing with decreasing $\left\langle r_{A}\right\rangle $ in doped giant magnetoresistance manganites. address: - '$^{1}$Jozef Stefan Institute, P.O.Box 3000, 1001 Ljubljana, Slovenia' - | $^{2}$University of Ljubljana, Faculty of Mathematics and Physics,\ Jadranska 19, 1000 Ljubljana, Slovenia author: - 'T. Mertelj$^{1,2}$, M. Hrovat$^{1}$, D. Kuščer$^{1}$ and D. 
Mihailovic$^{1}$' title: 'Direct measurement of polaron binding energy in AMnO$_{3}$ as a function of the A site ionic size by photoinduced IR absorption' --- The physical properties of manganites with the chemical formula (Re$_{1-x}$Ae$_{x}$)MnO$_{3}$ (Re and Ae are trivalent rare-earth and divalent alkaline-earth ions respectively) in which giant magnetoresistance (GMR) is observed[@SearleWang69; @KustersSingelton89; @HelmoltWecker93] show remarkable changes when the average ionic radius $\left\langle r_{A}\right\rangle $ on the perovskite A site is varied.[@ImadaFujimori98; @HwangCheong95] In the region of doping $x$, where GMR is observed, this is reflected in a decrease of the Curie temperature $T_{C}$ and increase of the size of magnetoresistance with decreasing $\left\langle r_{A}\right\rangle $.[@HwangCheong95] The decrease of $T_{C}$ has been attributed to a decrease of the hopping matrix element between neighbouring Mn sites $t$ as a result of changes of Mn-O-Mn bond angles with $% \left\langle r_{A}\right\rangle $.[@HwangCheong95] Traditionally GMR has been explained in the double exchange picture[@Zener51] framework, where the hopping matrix element is one of the key parameters influencing directly the Curie temperature. However it has been shown experimentally[@ZhaoConder96; @ZhaoKeller98; @LoucaEgami97] and theoretically[@MillisShraiman96] that also dynamic lattice effects including Jahn-Teller (JT) polaron formation are crucial ingredients for the explanation of GMR in manganites[@MillisShraiman96]. In this picture $% T_{C}$ also strongly depends on the electron-phonon (EP) coupling in addition to the hopping matrix element $t$ and any change in the EP coupling as function of $\left\langle r_{A}\right\rangle $ contributes to changes of $% T_{C}$ and other physical properties. 
Experimentally, an increase of the EP coupling with decreasing $\left\langle r_{A}\right\rangle $ is suggested by the shift of the $1$-eV polaronic peak in the optical conductivity of manganites to higher energy with decreasing $\left\langle r_{A}\right\rangle $[@MachidaMoritomo98; @QuijadaCerne98]. Unfortunately, the position of the 1-eV peak does not depend on the polaron binding energy alone[@QuijadaCerne98], and the magnitude of the shift cannot be directly linked to a change of the EP coupling constant $g$. Recently we observed a polaronic photoinduced (PI) absorption peak in antiferromagnetic (LaMn)$_{1-\delta }$O$_{3}$ (LMO).[@MerteljKuscer00] In this case the peak position is directly linked to the anti-Jahn-Teller polaron[@AllenPerebeinos99] binding energy and enables us to [*measure directly*]{} the change of the electron-phonon coupling with $\left\langle r_{A}\right\rangle $ in undoped GMR manganites. Here we present PI absorption measurements in (NdMn)$_{1-\delta }$O$_{3}$ (NMO) with $\delta \approx 0$. We observe a $\sim$44% increase of the small polaron energy when La$^{3+}$ is replaced by smaller Nd$^{3+}$. The absence of any concurrent large frequency shifts of the observed PI phonon bleaching peaks and the Brillouin-zone-center internal perovskite phonon modes measured by Raman and infrared (IR) spectroscopy indicates that the polaron energy increase with decreasing $\left\langle r_{A}\right\rangle $ is mainly a consequence of an increase of the electron-phonon coupling constant. The method of preparation and characterization of the ceramic sample with nominal composition (LaMn)$_{1-\delta }$O$_{3}$ has been published elsewhere[@MerteljKuscer00; @HolcKuscer97]. The sample with nominal composition (NdMn)$_{1-\delta }$O$_{3}$ was prepared in a similar manner, with an equal final treatment at 900$^{\circ}$C for 300 min in Ar flow[@HuangSantoro97] to decrease the cation deficiency. 
The X-ray diffraction patterns of both samples, taken before the Ar treatment in the 2$\Theta$ range 20$^{\circ}$-70$^{\circ}$, showed that both samples are single phase. The samples showed no sign of a ferromagnetic transition in AC susceptibility measurements, and we concluded that $\delta $ is sufficiently small that both are antiferromagnetic (AFM) and insulating below their respective Néel temperatures[@ImadaFujimori98; @HuangSantoro97; @UrushibaraMoritomo95]. PI spectra were measured at 25 K in samples dispersed in KBr pellets. CW Ar$^{+}$-ion-laser light with 514.5 nm wavelength ($h\nu =2.41$ eV) and an optical intensity of $\sim$500 mW/cm$^{2}$ was used for photoexcitation. Details of the PI-transmittance spectra measurements were published elsewhere.[@MerteljKuscer00] Thermal difference (TD) spectra[@MerteljKuscer00] were also measured at the same temperature to eliminate possible laser heating effects. Raman spectra were measured at room temperature in a standard backscattering configuration from ceramic powders using CW Kr$^{+}$-ion-laser light at 647.1 nm. The scattered light was analysed with a SPEX triple spectrometer and detected with a Princeton Instruments CCD array. The incident laser intensity was kept below $\sim$400 W/cm$^{2}$ to avoid laser annealing.[@IlievAbrashev98] The low-temperature ($T=25$ K) PI transmittance $(\frac{\Delta {\cal T}_{PI}}{{\cal T}})$ spectra of both samples are shown in Fig. 1. In both samples a strong, broad PI midinfrared (MIR) absorption (negative PI transmittance) centered at $\thicksim$5000 cm$^{-1}$ ($\thicksim 0.62$ eV) in LMO and at $\thicksim 7500$ cm$^{-1}$ ($\thicksim 0.93$ eV) in NMO is observed. In the frequency range of the phonon bands (inset of Fig. 1) we observe PI phonon bleaching in the range of the 585-cm$^{-1}$ (576 cm$^{-1}$ in NMO) IR phonon band and a slight PI absorption below $\thicksim$580 cm$^{-1}$. 
The PI phonon bleaching in NMO is similar to that in LMO[@MerteljKuscer00], but shifted to higher frequency by $\thicksim$20 cm$^{-1}$; it consists of two peaks at 620 and 690 cm$^{-1}$ with a dip in between at 660 cm$^{-1}$. As in LMO, these two PI transmission peaks are reproducible among different runs, while the structure of the PI absorption below $\thicksim$580 cm$^{-1}$ is not, and presumably arises from increasing instrumental noise at the lower end of the spectral range. Despite the noise, a slight PI absorption below $\thicksim$580 cm$^{-1}$ can be deduced from the PI spectra. The Raman spectra shown in Fig. 2b are consistent with published data.[@IlievAbrashev98] In the 100-900-cm$^{-1}$ frequency range, 5 phonon peaks are observed in LMO and 6 phonon peaks in NMO. The frequencies and assignments of the phonon peaks are shown in Table I. The only mode that shifts substantially is the $A_{g}$ mode that corresponds to the out-of-phase rotation of the MnO$_{3}$ octahedra.[@IlievAbrashev98] The mode frequency increases by 17% to 329 cm$^{-1}$ in NMO.[@ymno] The two high-frequency modes that are expected to be related to the collective JT distortion[@LiarokapisLeventouri99] shift by less than 5 cm$^{-1}$, which is less than 1%. Similarly, the frequencies of the IR modes shown in Table II do not shift by more than 6% when La$^{3+}$ is replaced with Nd$^{3+}$. A fit of the absorption due to small-polaron hopping given by Emin [@Emin93] to the data is shown in Fig. 1 for both samples, assuming that $-\frac{\Delta {\cal T}_{PI}}{{\cal T}}$ is proportional to the absorption coefficient[@KimHeeger87]: $$-\frac{\Delta {\cal T}_{PI}}{{\cal T}}\varpropto \alpha \varpropto \frac{1}{\hbar \omega }\exp \left(-\frac{(2E_{pol}-\hbar \omega )^{2}}{4E_{pol}\hbar \omega _{ph}}\right), \label{eqemin}$$ where $\alpha $ is the absorption coefficient, $E_{pol}$ is the polaron binding energy, $\omega $ is the incoming photon frequency, and $\omega _{ph}$ is the polaron phonon frequency. 
The theoretical prediction fits the data well, with small polaron binding energies $E_{pol}=0.34$ eV and $E_{pol}=0.49$ eV in the LMO and NMO samples, respectively. The polaron binding energies $E_{pol}$ and polaron phonon frequencies $\omega _{ph}$ obtained from the fit are summarized in Table III. The polaron phonon frequencies $\omega _{ph}$ obtained from the fit are 310 and 330 cm$^{-1}$ in LMO and NMO, respectively, and are small compared to the frequencies of the JT-related Raman modes at $\thicksim$490 cm$^{-1}$. This discrepancy is not surprising, since the width of the peak in (\[eqemin\]), from which $\omega _{ph}$ is determined, includes a prefactor which strongly depends on the details of the phonon cloud in the small polaron and equals 4 only in the 1D Holstein model, a molecular model with a single dispersionless phonon. Taking into account the prefactor as well as the dispersion and multiplicity of the phonon branches, $\omega _{ph}$ can be viewed as an [*effective polaron phonon frequency*]{} of the different-wavevector phonons in the phonon cloud. The shape of the observed PI absorption peak indicates that the EP coupling is strong and $t>\omega _{ph}$, since otherwise one does not expect to observe a symmetric and structureless peak in the optical conductivity.[@AlexandrovKabanov94] In this case the polaron binding energy is proportional to[@KabanovMashtakov93] $$E_{pol}\varpropto 2g^{2}\omega _{ph}+\frac{zt^{2}}{2g^{2}\omega _{ph}}. \label{epol}$$ Here $g$ is the dimensionless EP coupling constant, $t$ is the bare hopping matrix element, and $z$ is the number of nearest neighbours. In this formula $g$ and $\omega _{ph}$ should again be viewed as effective quantities corresponding to the combination of different-wavevector phonons in the polaron phonon cloud. 
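The peak position implied by lineshape (\[eqemin\]) can be checked numerically. The short sketch below (the function names are ours, and the conversion 1 cm$^{-1}\approx1.2398\times10^{-4}$ eV is the standard one, assumed here) evaluates the lineshape for the fitted parameters and compares the numerical maximum with the analytic stationary point $\hbar\omega=E_{pol}+\sqrt{E_{pol}^{2}-2E_{pol}\hbar\omega_{ph}}$ of (\[eqemin\]):

```python
import numpy as np

EV_PER_CM = 1.239842e-4  # 1 cm^-1 in eV (assumed standard conversion)

def emin_lineshape(hw, e_pol, hw_ph):
    """Small-polaron absorption, Eq. (eqemin); hw is the photon energy in eV."""
    return np.exp(-(2.0 * e_pol - hw) ** 2 / (4.0 * e_pol * hw_ph)) / hw

def peak_energy(e_pol, hw_ph):
    """MIR maximum of the lineshape: stationary point of Eq. (eqemin)."""
    return e_pol + np.sqrt(e_pol**2 - 2.0 * e_pol * hw_ph)

# Fit parameters of Table III; cm^-1 values converted to eV
for label, e_pol, w_ph_cm in [("LMO", 0.34, 310.0), ("NMO", 0.49, 330.0)]:
    hw_ph = w_ph_cm * EV_PER_CM
    hw = np.linspace(0.05, 2.0, 20000)  # scan the MIR window, in eV
    numeric = hw[np.argmax(emin_lineshape(hw, e_pol, hw_ph))]
    print(f"{label}: peak at {numeric:.2f} eV (analytic {peak_energy(e_pol, hw_ph):.2f} eV)")
```

Both estimates land close to the observed MIR peak positions of $\thicksim 0.62$ eV (LMO) and $\thicksim 0.93$ eV (NMO), consistent with the fit.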
From formula (\[epol\]) it is evident that $E_{pol}$ is very sensitive to changes of the EP coupling constant $g$ and of $\omega _{ph}$, but depends on the bare hopping matrix element $t$ only at second order, since in the strong-coupling limit $2g^{2}\omega _{ph}\gg t$. In our experiment one can see from the Raman and IR spectra that, apart from the $A_{g}$-mode frequency corresponding to the out-of-phase rotation of the MnO$_{3}$ octahedra, all the observed Brillouin-zone-center phonon frequencies shift by no more than a few percent when La$^{3+}$ is replaced by Nd$^{3+}$. This would at first sight suggest a link to the $A_{g}$ octahedral rotation mode, especially because the effective polaron phonon frequencies $\omega _{ph}$ obtained from the fit are very close to the observed mode frequencies. Hardening of this phonon mode by 17% and the $\sim$44% increase of the observed small polaron energy would in this case, according to (\[epol\]), imply a $\sim$14% increase of the EP coupling constant $g$. However, as stated above, the effective polaron phonon frequencies obtained from the fit are extremely inaccurate due to the crudeness of the Holstein model and, since the small-polaron phonon cloud, due to its localised nature, includes mainly large-wavevector phonons, the effective $\omega _{ph}$ depends only weakly on the frequency of the zone-center phonons. In addition, the observed PI phonon bleaching peaks, which are expected to be directly related to the nonzero-wavevector phonons forming the polaron phonon cloud, harden by a mere $\sim$3%. We therefore suggest that it is [*very unlikely*]{} that the frequency shift of the $A_{g}$ mode, which corresponds to the out-of-phase rotation of the MnO$_{3}$ octahedra, is directly related to the observed small polaron binding energy shift. 
Instead we attribute the $\sim$[*44% increase of the polaron binding energy to a*]{} $\sim$[*20% increase of the EP coupling constant*]{} $g$ when La$^{3+}$ is replaced by Nd$^{3+}$, and not to a change of $\omega _{ph}$ and/or the bare hopping matrix element $t$. This is supported by the small shift of the observed PI phonon bleaching peaks and by the negligible shift of the relevant Brillouin-zone-center phonon modes, especially the Raman-active high-frequency ones, which have been shown to be related to the collective JT distortion[@LiarokapisLeventouri99]. The influence of a decrease of the bare hopping matrix element $t$ with decreasing $\left\langle r_{A}\right\rangle $ on the polaron binding energy can be neglected due to its second-order nature in (\[epol\]); moreover, a decrease of $t$ with decreasing $\left\langle r_{A}\right\rangle $ in (\[epol\]) would lead to a decrease of $E_{pol}$, which is the [*opposite*]{} of what is experimentally observed. In conclusion, we observe a $\sim$44% increase of the anti-Jahn-Teller polaron binding energy when La$^{3+}$ is replaced by smaller Nd$^{3+}$ in undoped GMR manganites. The absence of any concurrent large frequency shifts of the observed PI phonon bleaching peaks and the Brillouin-zone-center perovskite internal phonon modes indicates that the increase of the polaron binding energy is a consequence of increasing electron-phonon coupling strength with decreasing ionic radius on the perovskite A site. This result can be safely extrapolated to doped manganites, as indicated by the shift of the 1-eV polaronic peak with decreasing $\left\langle r_{A}\right\rangle $ in the optical conductivity of GMR manganites[@MachidaMoritomo98; @QuijadaCerne98] and by the increasing isotope effect of $T_{C}$ with decreasing $\left\langle r_{A}\right\rangle $[@ZhaoKeller98]. 
The decrease of the effective bandwidth, resulting in the decrease of the Curie temperature $T_{C}$ and the increase of the size of the magnetoresistance with decreasing $\left\langle r_{A}\right\rangle $, is suggested to be due not just to the direct influence of the Mn-O-Mn bond angles on the bare hopping matrix element $t$,[@HwangCheong95] but also to the increasing polaronic band narrowing caused by the increasing electron-phonon coupling. [**Acknowledgments**]{} We would like to thank V.V. Kabanov for fruitful discussions. C.W. Searle, S.T. Wang, [*Can. J. Phys.*]{} [**47**]{}, (1969) 2703. R.M. Kusters, J. Singelton, D.A. Ken, R. McGreevy, W. Hayes, [*Physica*]{} [**B155**]{} (1989) 362. R. von Helmolt, J. Wecker, B. Holzapfel, L. Shultz, K. Samwer, [*Phys. Rev. Lett.*]{} [**71**]{} (1993) 2331. for a review see: M. Imada, A. Fujimori, Y. Tokura, [*Rev. Mod. Phys.*]{} [**70**]{}, (1998) 1039. H.Y. Hwang, S.W. Cheong, P.G. Radaelli, M. Marezio, B. Batlogg, [*Phys. Rev. Lett.*]{} [**75**]{} (1995) 914. C. Zener, [*Phys. Rev.*]{} [**82**]{}, (1951) 403. G-M. Zhao, K. Conder, H. Keller, K.A. Müller, [*Nature*]{} [**381**]{}, (1996) 676. G-M. Zhao, H. Keller, R.L. Greene, K.A. Müller, to appear in [*Physics of Manganites*]{}, Editors: T.A. Kaplan and S.D. Mahanti (Plenum Publishing Corporation, 1998). D. Louca, T. Egami, [*Phys. Rev.*]{} [**B56**]{} (1997) R8475. A.J. Millis, B.I. Shraiman, R. Mueller, [*Phys. Rev. Lett.*]{} [**77**]{}, (1996) 175. A. Machida, Y. Moritomo, A. Nakamura, [*Phys. Rev.*]{} [**B58**]{} (1998) 4281. M. Quijada, J. Černe, J.R. Simpson, H.D. Drew, K.H. Ahn, A.J. Millis, R. Shreekala, R. Ramesh, M. Rajeswari, T. Venkatesan, [*Phys. Rev.*]{} [**B58**]{} (1998) 16093. T. Mertelj, D. Kuščer, M. Kosec and D. Mihailovic, [*Phys. Rev.*]{} [**BXX**]{}, (2000) xxxx. P. B. Allen, V. Perebeinos, [*Phys. Rev.*]{} [**B60**]{} (1999) 10747. J. Holc, D. Kuščer, M. Horvat, S. Bernik, D. Kolar, [*Solid State Ionics*]{} [**95**]{} (1997) 259. Q. Huang, A. 
Santoro, J.W. Lynn, R.W. Erwin, J.A. Borchers, J.L. Peng, R.L. Greene, [*Phys. Rev.*]{} [**B55**]{} (1997) 14087. A. Urushibara, Y. Moritomo, T. Arima, A. Asamitsu, G. Kido, Y. Tokura, [*Phys. Rev.*]{} [**B51**]{} (1995) 14103. J.B.A.A. Elemans, B. Van Laar, K.R. Van Der Veen, B.O. Loopstra, [*J. Solid State Chem.*]{} [**3**]{} (1971) 238. M.N. Iliev, M.V. Abrashev, H.-G. Lee, V.N. Popov, Y.Y. Sun, C. Thomsen, R.L. Meng, C.W. Chu, [*Phys. Rev.*]{} [**B57**]{} (1998) 2872. This mode shifts to 396 cm$^{-1}$ in YMnO$_{3}$[@IlievAbrashev98], consistent with the even smaller ionic radius of Y. E. Liarokapis, Th. Leventouri, D. Lampakis, D. Palles, J.J. Neumeier, D.H. Goodwin, [*Phys. Rev.*]{} [**B60**]{}, (1999) 12758. Y.H. Kim, A.J. Heeger, L. Acedo, G. Stucky, F. Wudl, [*Phys. Rev.*]{} [**B36**]{} (1987) 7252. V.V. Kabanov, O.Yu. Mashtakov, [*Phys. Rev.*]{} [**B47**]{} (1993) 6060. A.S. Alexandrov, V.V. Kabanov, D.K. Ray, [*Physica*]{} [**C244**]{} (1994) 247. K. Miyano, T. Tanaka, Y. Tomioka, Y. Tokura, [*Phys. Rev. Lett.*]{} [**78**]{} (1997) 4257. D.E. Cox, P.G. Radaelli, M. Marezio, S-W. Cheong, [*Phys. Rev.*]{} [**B57**]{} (1998) 3305. Y.G. Zhao, J.J. Li, R. Shreekala, H.D. Drew, C.L. Chen, W.L. Cao, C.H. Lee, M. Rajeswari, S.B. Ogale, R. Ramesh, G. Baskaran, T. Venkatesan, [*Phys. Rev. Lett.*]{} [**81**]{} (1998) 1310. T. Arima, Y. Tokura, J.B. Torrance, [*Phys. Rev.*]{} [**B48**]{} (1993) 17006. J.M. De Teresa, K. Dorr, K.H. Müller, L. Shultz, [*Phys. Rev.*]{} [**B58**]{} (1998) R5928. M.I. Klinger, Phys. Lett. [**7**]{}, 102 (1963). H.G. Reik, Sol. Stat. Comm. [**1**]{}, 67 (1963); H.G. Reik, D. Heese, [*J. Phys. Chem. Solids*]{} [**28**]{}, (1967) 581. D. Emin, [*Phys. Rev.*]{} [**B48**]{} (1993) 13691. R.D. Shannon, [*Acta Cryst.*]{} A [**32**]{}, (1976) 751. 
  Mode assignment[@IlievAbrashev98]          (LaMn)$_{1-\delta }$O$_{3}$ (cm$^{-1}$)   (NdMn)$_{1-\delta }$O$_{3}$ (cm$^{-1}$)   Shift %
  ----------------------------------------- ----------------------------------------- ----------------------------------------- ---------
  $B_{2g}$ octahedra in-phase stretching     609                                       604                                       -0.8
  $A_g$ octahedra out-of-phase bending       486                                       489                                       -0.6
  $B_{2g}$ O1 along $z$                      310                                       -
  $A_g$ octahedra out-of-phase rotation      282                                       329                                       17
  laser annealing induced                    233                                       -
  probably laser annealing induced           -                                         248
  probably laser annealing induced           -                                         234
  $A_g$ Nd along $x$                         -                                         143

  : The Raman phonon frequencies.

  (LaMn)$_{1-\delta }$O$_{3}$ (cm$^{-1}$)   (NdMn)$_{1-\delta }$O$_{3}$ (cm$^{-1}$)   Shift %   Comment
  ----------------------------------------- ----------------------------------------- --------- ----------
  636                                       640                                       0.6       shoulder
  584                                       577                                       -1.2
  510                                       521                                       2.2
  460                                       482                                       4.8
  432                                       458                                       6.0
  418                                       434                                       3.8
  376                                       392                                       4.2

  : The IR phonon band frequencies.

  sample                        $r_{A}$ ($\AA$)   $E_{pol}$ (eV)   $\omega _{ph}$ (cm$^{-1}$)
  ----------------------------- ----------------- ---------------- ----------------------------
  (LaMn)$_{1-\delta }$O$_{3}$   1.216             $0.34$           $310$
  (NdMn)$_{1-\delta }$O$_{3}$   1.163             $0.49$           $330$

  : The small polaron binding energy $E_{pol}$ and the effective polaron phonon frequency $\omega _{ph}$ as obtained from the fit of the absorption due to a small polaron given by Emin[@Emin93]. The perovskite A-site ionic radii for 9-fold coordination[@Shannon76] ($r_{A}$) are also given for comparison.

Figure Captions
===============

Figure 1. Photoinduced absorption spectra of (LaMn)$_{1-\delta }$O$_{3}$ (thick solid line) and (NdMn)$_{1-\delta }$O$_{3}$ (dashed line). The thin lines represent the fit of equation (\[eqemin\]) to the data. The inset shows the photoinduced absorption spectra in the region of the phonon bands. The structure of the PI absorption below $\thicksim$580 cm$^{-1}$ is not reproducible among different runs, and presumably arises from increasing instrumental noise at the lower end of the spectral range.

Figure 2. Infrared (a) and Raman (b) phonon spectra of (LaMn)$_{1-\delta }$O$_{3}$ (solid line) and (NdMn)$_{1-\delta }$O$_{3}$ (dashed line). The Raman spectrum of (NdMn)$_{1-\delta }$O$_{3}$ is offset vertically for clarity.
--- abstract: 'The Wang–Landau (WL) algorithm has been widely used for simulations in many areas of physics. Our analysis of the WL algorithm explains its properties and shows that the difference of the largest eigenvalue of the transition matrix in the energy space from unity can be used to control the accuracy of estimating the density of states. Analytic expressions for the matrix elements are given in the case of the one-dimensional Ising model. The proposed method is further confirmed by numerical results for the one-dimensional and two-dimensional Ising models and also the two-dimensional Potts model.' author: - 'L. Yu. Barash$^{1,2,3}$' - 'M. A. Fadeeva$^{2,3}$' - 'L. N. Shchur$^{1,2,3}$' title: 'Control of accuracy in the Wang–Landau algorithm' --- Introduction ============ The Wang–Landau (WL) algorithm [@Wang-Landau; @Wang-Landau-PRE] has been shown to be a very powerful tool for directly determining the density of states (DOS) and is also quite widely applicable. It overcomes some difficulties existing in other Monte Carlo algorithms (such as critical slowing down) and allows calculating thermodynamic observables, including free energy, over a wide temperature range in a single simulation. A number of papers investigated statistical errors of the DOS estimation, and it was found in [@Yan2003] that errors reach an asymptotic value beyond which additional calculations fail to improve the accuracy of the results. Yet it was established in [@Zhou2005; @Lee2006] that the statistical error scales as the square root of the logarithm of the modification factor, if the factor is kept constant. It follows from the results in [@Yan2003] that there is a systematic error of DOS estimation by the WL algorithm [^1]. It was also confirmed in the case of the two-dimensional Ising model that the deviation of the DOS obtained with the WL algorithm from the exact DOS does not tend to zero [@1overt; @1overt-a]. 
Several improvements of the behavior of the modification factor in the algorithm, which were shown to overcome the problem of systematic error in selected applications, have been suggested [@1overt; @1overt-a; @SAMC; @SAMC2; @Eisenbach2011]. There are about fifteen hundred papers that apply the WL algorithm and its improvements to particular problems (e.g., to the statistics of polymers [@Binder09; @Ivanov2016] and to diluted systems [@Malakis04; @Fytas2013], among many others). In this paper, we address the question of the accuracy of the DOS estimation. We report a method for obtaining information on both the convergence of simulations and the accuracy of the DOS estimation. We numerically apply our algorithm to the one-dimensional and the two-dimensional Ising models, where the exact DOS is known [@Beale], and to the two-dimensional 8-state Potts model, which undergoes a first-order phase transition. We also present analytic expressions for the transition matrix in the energy space for the one-dimensional Ising model. Our approach is based on introducing the transition matrix in the energy space (TMES), whose elements show the frequency of transitions between energy levels during the WL random walk in the energy space. Its elements are influenced by both the random process of choosing a new configurational state and the WL probability of accepting the new state. We consider a chain of random updates (e.g., flips of randomly chosen spins for the Ising model) of a system configuration. Each of the updates is accepted with unit probability. This random walk in the configurational space is a Markov chain. Its invariant distribution is uniform, i.e., the probabilities of all states of the physical system are equal to each other. For any pair $\Omega_A$ and $\Omega_B$ of configurations, the probability of an update from $\Omega_A$ to $\Omega_B$ is equal to the probability of an update from $\Omega_B$ to $\Omega_A$. 
Hence, the detailed balance condition is satisfied. Therefore, $$g(E_k)P(E_k,E_m)=g(E_m)P(E_m,E_k), \label{simplebalance}$$ where $g(E)$ is the true DOS and $P(E_k,E_m)$ is the probability of one step of the random walk to move from a configuration with the energy $E_k$ to any configuration with the energy $E_m$. We introduce the notation $$T(E_k,E_m)=\min\left(1,\frac{g(E_k)}{g(E_m)}\right)P(E_k,E_m), \label{Texpr}$$ which represents the nondiagonal elements of the TMES of the WL random walk on the true DOS. Relation (\[simplebalance\]) can be rewritten as $T(E_k,E_m)=T(E_m,E_k)$. Therefore, the TMES of the WL random walk on the true DOS is a symmetric matrix. Because the matrix is both symmetric and right stochastic, it is also left stochastic. This means that all energy levels are visited at equal rates. In simulations with a reasonable modification of the WL algorithm, the systematic error of determining the DOS can be made arbitrarily small. In this case, we find that the computed TMES approaches a stochastic matrix as the computed DOS approaches the true value. There are several interesting conclusions. First, this explains the criterion of histogram flatness, which is one of the main features of the original WL algorithm [@Wang-Landau]. Because the histogram elements are equal to sums of columns in the TMES, histogram flatness is related to the closeness of the TMES to a stochastic matrix. Second, it gives a criterion for the proximity of the simulated DOS to the true value. We introduce the difference of the largest eigenvalue of the calculated TMES from unity as a control parameter. We show that this parameter is closely connected with the deviation of the DOS from the true value. We confirm numerically that the deviation of the DOS from the true value decays in time in the same manner as our parameter decays. We are not aware of any other method for determining the accuracy of a WL simulation without knowing the exact value of the DOS. 
The paper is organized as follows. In Sec. \[AlgSec\] we describe the variants of the WL algorithm. In Sec. \[TMESSec\] we introduce the TMES and, in particular, we describe the behavior of the TMES for the one-dimensional Ising model. In Sec. \[DiscussionSec\] we present our main results and discussion, including discussion of properties of the TMES, description of the method and numerical results for the one-dimensional and two-dimensional Ising models and for the two-dimensional Potts model. The algorithms {#AlgSec} ============== Directly estimating the DOS with the WL algorithm allows calculating the free energy as the logarithm of the partition function $$Z=\sum_{k=1}^{N_E}g(E_k)e^{-E_k/k_BT}, \label{partition-function}$$ where $g(E_k)$ is the number of states (density of states) with the energy $E_k$, $N_E$ is the number of energy levels, $k_B$ is the Boltzmann constant, and $T$ is the temperature. The main idea of the WL algorithm is to organize a random walk in the energy space. We take a configuration of the system with the energy $E_k$, randomly choose an update to a new configuration with the energy $E_m$, and accept this configuration with the WL probability $\min\left(1,\tilde g(E_k)/\tilde g(E_m)\right)$, where $\tilde g(E)$ is the DOS approximation. The approximation is obtained recursively by multiplying $\tilde g(E_m)$ by a factor $f$ at each step of the random walk in the energy space [^2]. Each time that the auxiliary histogram $H(E)$ becomes sufficiently flat, the parameter $f$ is modified by taking the square root, $f:=\sqrt{f}$. Each histogram value $H(E_m)$ contains the number of moves to the energy level $E_m$. The histogram is filled with zeros after each modification of the refinement parameter $f$. 
It is convenient to work with the logarithms of the values $S(E_k):=\ln\tilde g(E_k)$ and $F:=\ln f$ (to fit the large numbers into double precision variables) and to replace the multiplication $\tilde g(E_m):=f\cdot\tilde g(E_m)$ with the addition $S(E_m):=S(E_m)+F$. At the end of the simulation, the algorithm provides only a relative DOS. Either the total number of states or the number of ground states can be used to determine the normalized DOS. It is natural to ask the following three questions: 1. Which condition for the flatness check is optimal? 2. How does the histogram flatness influence the convergence of the DOS estimation? 3. Is the choice of the square root rule to modify the parameter $f$ optimal? A practical answer to question Q1 was given in the original algorithm [@Wang-Landau]: keep the flatness within the accuracy of about 20%. Choosing an accuracy between 1% and 20% is sometimes useful [@Wust2012] but can result in a substantial increase of the simulation time [@Wang-Landau-PRE]. An answer to question Q3 was obtained in two independent works [@1overt] and [@SAMC], which introduced modifications of the WL algorithm, the `WL-1/t` algorithm and the stochastic approximation Monte Carlo (`SAMC`) algorithm, respectively. There are two phases of the `WL-1/t` algorithm [@1overt]. The first phase is similar to the WL algorithm except that every test of the histogram flatness is replaced with a simpler check: Is $H(E)\ne0$ for all $E$? The algorithm enters its second phase if $F\le N_E/t$, where $t$ is the simulation time measured as the number of attempted spin flips. For $t>t_s$, the histogram is no longer checked and $F$ is updated as $F=N_E/t$ at each step. Here $t_s$ is the simulation time when the `WL-1/t` algorithm enters the second phase. Both modified WL algorithms exhibit the same long-range behavior of the refinement parameter $F$ proportional to $1/t$ for long simulation times [@SAMC; @SAMC2]. 
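The two phases of the `WL-1/t` algorithm described above can be sketched as follows for the one-dimensional Ising chain treated later in the text. This is only a minimal illustration (the function name and the small values of $L$ and the run length are ours, not the production parameters of our simulations); it works with $S(E)=\ln\tilde g(E)$ and $F=\ln f$ and normalizes the result by the two ground states:

```python
import numpy as np
from math import comb, log

def wl_1t_ising1d(L=12, t_max=500_000, seed=1):
    """WL-1/t sketch for the 1D Ising chain with periodic boundary conditions.

    Returns the normalized log-DOS S(E_k) = ln g(E_k) over the
    N_E = L/2 + 1 levels E_k = -L + 4k, fixing g(E_0) = 2.
    """
    rng = np.random.default_rng(seed)
    n_e = L // 2 + 1
    S = np.zeros(n_e)             # S(E) = ln of the running DOS estimate
    H = np.zeros(n_e, dtype=int)  # auxiliary histogram
    F = 1.0                       # F = ln f, the refinement parameter
    spins = np.ones(L, dtype=int)
    k = 0                         # level index; all spins up -> ground state
    phase2 = False
    for t in range(1, t_max + 1):
        i = int(rng.integers(L))
        # energy change of a single spin flip: dE is -4, 0 or +4
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % L])
        km = k + dE // 4
        # WL acceptance probability min(1, g~(E_k)/g~(E_m)), in log space
        if np.log(rng.random()) < S[k] - S[km]:
            spins[i] = -spins[i]
            k = km
        S[k] += F
        H[k] += 1
        if phase2:
            F = n_e / t            # second phase: F = N_E / t at each step
        elif H.min() > 0:          # "all energies visited" check of WL-1/t
            F /= 2.0               # F := F/2 is f := sqrt(f) in log space
            H[:] = 0
            if F <= n_e / t:       # switch to the second phase
                phase2 = True
    return S - S[0] + log(2.0)     # normalize by the 2 ground states

S = wl_1t_ising1d()
exact = np.array([log(2 * comb(12, 2 * k)) for k in range(7)])
print(np.round(S - exact, 3))  # deviations from the exact ln g(E)
```

The printed deviations from the exact $\ln g(E_k)=\ln(2C_L^{2k})$ shrink as the run length grows, in line with the convergence arguments discussed below.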
This is natural due to the following convergence conditions: $\sum_{t=1}^\infty F(t)=\infty$ and $\sum_{t=1}^\infty F(t)^\zeta <\infty$ for some $\zeta\in (1,2)$ [@SAMC; @SAMC2]. The `SAMC` algorithm has an additional parameter $t_0$, which is the simulation time when the algorithm enters its second phase. Obtaining the appropriate value of $t_0$ can be quite cumbersome because the rule of thumb for choosing $t_0$ given in [@SAMC] is violated even by the $128\times128$ Ising model [@Janke2017]. The `WL-1/t` algorithm and its further improvements [@Zhou2008; @Swetnam2011; @Wust2009] seem to perform more reliably. Here, we use the `WL-1/t` algorithm, although the main results are qualitatively independent of this choice. Transition matrix in the energy space {#TMESSec} ===================================== We calculate the TMES for the WL random walk as follows. The elements of the TMES $\tilde T(E_k,E_m)$ are the probabilities for the WL random walk to move from a configuration with the energy $E_k$ to a configuration with the energy $E_m$. For simplicity, we consider the case of the Ising model with periodic boundary conditions and the energy $E=-\sum_{<i,j>}\sigma_i\sigma_j$, where the sum ranges over pairs of neighboring spins and $\sigma_i=\pm1$. The number of energy levels accessible to the WL random walk is $N_E=L/2+1$ for $d=1$ and $N_E=L^2-1$ for $d=2$, where the even integer $L$ is the linear size of the hypercubic lattice and $d$ is the lattice dimension. A WL random move cannot increase or decrease the energy of the configuration by more than $d$ energy levels, and every column and every row of the TMES therefore contains no more than $1{+}2d$ nonzero elements. The nondiagonal elements of $\tilde T(E_k,E_m)$ can be represented as $$\tilde T(E_k,E_m)= \min\left(1,\frac{\tilde g(E_k)}{\tilde g(E_m)}\right)P(E_k,E_m), \label{TTexpr}$$ where $k\ne m$. 
In general, the structure of the probability $P(E_k,E_m)$ depends on both the system dimension and the local lattice properties and is rather complicated. In the case of the one-dimensional Ising chain of $L$ spins with periodic boundary conditions, the probability to change energy from $E_k$ to $E_m$ in a WL random move is $$T(E_k,E_m)=\min\left(1,\frac{g(E_k)}{g(E_m)}\right) \sum_{i=0}^{2k}\frac{N_iQ_i^{E_k\to E_m}}{g(E_k)}, \label{el-mat-1d}$$ where $k\ne m$. Here $k$ is the number of pairs of domain walls in the configuration, which determines the energy level $E_k=-\sum_{j=1}^{L}\sigma_j\sigma_{j+1}=-L+4k$, $N_i(k,L)$ is the number of configurations where $i$ domains consist of only one spin and $2k{-}i$ domains consist of more than one spin, and $Q_i^{E_k\to E_m}(L)$ is the probability that a single spin flip moves the system to the energy $E_m$ from such configurations. Occupations of the energy levels of the chain are expressed in terms of binomial coefficients as $g(E_k)=2C_L^{2k}$ because there are exactly $C_L^{2k}$ ways to arrange the $2k$ domain walls. Therefore, partition function (\[partition-function\]) is $$Z_L=2\sum_{k=0}^{L/2} C_L^{2k} e^{(L-4k)/(k_BT)}. \label{Z_L}$$ The detailed analytic expressions for $N_i$ and $Q_i$ are presented in Appendix \[NiQiSec\]. It follows that $$T(E_k,E_{k+1})=T(E_{k+1},E_k)=\frac{C^{2k}_{L-2}}{\max\left(C^{2k}_L,C^{2k+2}_L\right)}. \label{tmatrix1d}$$ Equation (\[tmatrix1d\]) can be understood as follows. The probability of the system to change energy from $E_k$ to $E_{k+1}$ due to a spin flip is equal to the probability that there are no domain walls adjacent to the flipped spin. Therefore, $P(E_k,E_{k+1})=C^{2k}_{L-2}/C^{2k}_L$. Similarly, $P(E_{k+1},E_k)=C^{2k}_{L-2}/C^{2k+2}_L$. We hence obtain (\[tmatrix1d\]). 
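Equations (\[Z_L\]) and (\[tmatrix1d\]) are straightforward to verify numerically. The sketch below (the helper names are ours) builds the exact DOS $g(E_k)=2C_L^{2k}$, checks Eq. (\[Z_L\]) against the standard transfer-matrix closed form $Z_L=(2\cosh K)^L+(2\sinh K)^L$ with $K=1/(k_BT)$ (a textbook result, not derived in the text), and confirms that the TMES built from Eq. (\[tmatrix1d\]) on the true DOS is symmetric and doubly stochastic:

```python
from math import comb, cosh, sinh, exp
import numpy as np

def dos_1d(L):
    """Exact DOS of the 1D Ising chain: g(E_k) = 2 C(L, 2k), E_k = -L + 4k."""
    return [2 * comb(L, 2 * k) for k in range(L // 2 + 1)]

def partition_function(L, K):
    """Eq. (Z_L) with K = 1/(k_B T)."""
    return sum(g * exp((L - 4 * k) * K) for k, g in enumerate(dos_1d(L)))

def tmes_true_dos(L):
    """TMES of the WL walk on the true DOS; off-diagonals from Eq. (tmatrix1d)."""
    n = L // 2 + 1
    T = np.zeros((n, n))
    for k in range(n - 1):
        T[k, k + 1] = T[k + 1, k] = comb(L - 2, 2 * k) / max(
            comb(L, 2 * k), comb(L, 2 * k + 2))
    T += np.diag(1.0 - T.sum(axis=1))  # diagonal fixed by row stochasticity
    return T

L, K = 16, 0.7
Z_closed = (2 * cosh(K)) ** L + (2 * sinh(K)) ** L  # transfer-matrix result
print(partition_function(L, K) / Z_closed)  # -> 1.0 up to rounding
T = tmes_true_dos(L)
print(np.allclose(T.sum(axis=0), 1.0), np.allclose(T, T.T))
```

The double stochasticity seen here is exactly the property exploited below: on the true DOS, all energy levels are visited at equal rates.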
Results and discussion {#DiscussionSec} ====================== TMES and the accuracy of the DOS estimation ------------------------------------------- The convergence of the `WL-1/t` algorithm follows from the arguments presented in [@Zhou2008]. Therefore, there is a final stage of each simulation, where the normalized DOS remains almost the same and is close to the limiting one. We note that the condition that $F(t)$ is much smaller than one in itself does not guarantee that the algorithm is already in its final stage, because it follows from $\sum_{t=1}^\infty F(t)=\infty$ that a substantial cumulative change of the DOS due to a long simulation time is possible. At the same time, a large value of $F(t)$, resulting in a rapid increase of the calculated DOS, does not guarantee a rapid increase of the normalized DOS. The normalized DOS remains almost the same during a long simulation time of the final stage. Therefore, the rate of increase of the logarithm of the nonnormalized DOS is nearly the same for all energies. The behavior of the algorithm is close to a Markov chain in the final stage, and the TMES remains almost the same. The invariant distribution of the Markov chain has the property that all energy levels are almost equiprobable, while different configurations having the same energy may have different probabilities. Therefore, the TMES is close to a stochastic matrix in the final simulation stage. The following proposition also holds: if the TMES is close to a stochastic matrix, then the obtained normalized DOS is close to the true DOS (see details in Appendix \[ConvergenceSec\]). The first phase of the `WL-1/t` algorithm aims to obtain the first crude approximation for the DOS, while the aim of the second phase (in which the factor $F$ is updated as $F(t)=N_E/t$ at each step) is to converge to the true DOS. 
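The two-phase scheme just described can be sketched for a short periodic chain as follows (an illustrative toy implementation with arbitrary parameter choices, not the authors' code; the simpler "all energies visited" test of the `1/t` modification replaces the flatness check):

```python
import random
from math import comb, exp, log

random.seed(1)
L = 8                            # short periodic Ising chain
N_E = L // 2 + 1                 # number of energy levels for d = 1

def level(s):
    # level index k, with E_k = -L + 4k
    return (L - sum(s[i] * s[(i + 1) % L] for i in range(L))) // 4

S = [0.0] * N_E                  # running estimate of S(E) = ln g(E)
H = [0] * N_E                    # visits since the last update of F
s = [random.choice((-1, 1)) for _ in range(L)]
k = level(s)
F, phase2 = 1.0, False
for t in range(1, 200001):
    i = random.randrange(L)
    s[i] = -s[i]                 # propose a single spin flip
    m = level(s)
    dS = S[k] - S[m]
    if dS >= 0 or random.random() < exp(dS):
        k = m                    # accept the move
    else:
        s[i] = -s[i]             # reject: the step still counts at E_k
    S[k] += F
    H[k] += 1
    if phase2:
        F = N_E / t              # second phase: F(t) = N_E / t
    elif all(H):                 # first phase: "all energies visited" test
        F /= 2
        H = [0] * N_E
        if F <= N_E / t:
            phase2 = True

# compare the normalized log-DOS with the exact S(E_k) = ln(2 C(L, 2k)),
# anchoring the normalization at the ground level
S_exact = [log(2 * comb(L, 2 * j)) for j in range(N_E)]
shift = S[0] - S_exact[0]
Delta = sum(abs((S[j] - shift - S_exact[j]) / S_exact[j])
            for j in range(N_E)) / N_E
assert Delta < 0.25
```

Note that a rejected move still updates $S$ and $H$ at the current level $E_k$, in line with the counting convention adopted for the TMES.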
Both the histogram flatness test in the original WL algorithm and the test whether all energies have been visited in the `WL-1/t` modification are quickly passed in the final stage of the calculation because all energies are almost equally probable. A much longer simulation time is required to satisfy these tests in the early calculation stage, when the probabilities of energy levels differ substantially. The control parameter --------------------- The largest eigenvalue of any stochastic matrix is equal to one, and we therefore propose to use the difference of the largest eigenvalue of the TMES from unity computed during the final stage of the WL simulation as a criterion for the proximity of the DOS to the true value. We estimate the elements of the TMES in simulations as follows. The auxiliary matrix $U(E_k,E_m)$ is initially filled with zeros. The element $U(E_k,E_m)$ is increased by unity after every WL move from a configuration with the energy $E_k$ to a configuration with the energy $E_m$. During the simulations, we compute the normalized matrix $\tilde{T}(E_k,E_m)=U(E_k,E_m)/\tilde{H}$, where $\tilde{H}=\sum_{k,m}U(E_k,E_m)/N_E$. The obtained matrix $\tilde{T}$ approaches the stochastic matrix $T$ in the final stage of calculation. The difference of the largest eigenvalue $\lambda_1$ of $\tilde{T}$ from unity gives the control parameter $\delta=\left|1-\lambda_1\right|$. There are many algorithms for computing the largest eigenvalue of a matrix, and almost all are suitable for calculating $\delta$. We used the power method, also known as power iteration or Von Mises iteration [@poweriteration]. The algorithm does not compute a matrix decomposition, so it is quite efficient for large sparse matrices. It is terminated when a desired accuracy of the eigenvector approximation is achieved; the eigenvalue estimate is then found by applying the Rayleigh quotient to the resulting eigenvector. 
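The procedure can be illustrated on the exact TMES of the one-dimensional chain, Eq. (\[tmatrix1d\]), for which the matrix is stochastic, its largest eigenvalue is exactly one, and hence $\delta$ should vanish (a dense toy version; a production code would use a sparse representation as discussed above):

```python
from math import comb

L = 8
N_E = L // 2 + 1

# exact TMES of the WL walk on the true DOS: tridiagonal, symmetric, stochastic
T = [[0.0] * N_E for _ in range(N_E)]
for k in range(N_E - 1):
    T[k][k + 1] = T[k + 1][k] = comb(L - 2, 2 * k) / max(comb(L, 2 * k),
                                                         comb(L, 2 * k + 2))
for k in range(N_E):
    T[k][k] = 1.0 - sum(T[k][m] for m in range(N_E) if m != k)

# power iteration; the eigenvalue is recovered with a Rayleigh quotient
v = [float(i + 1) for i in range(N_E)]
for _ in range(5000):
    w = [sum(T[i][j] * v[j] for j in range(N_E)) for i in range(N_E)]
    scale = max(abs(x) for x in w)
    v = [x / scale for x in w]
w = [sum(T[i][j] * v[j] for j in range(N_E)) for i in range(N_E)]
lam1 = sum(vi * wi for vi, wi in zip(v, w)) / sum(vi * vi for vi in v)
delta = abs(1.0 - lam1)
assert delta < 1e-6
```

In a real simulation the same iteration is applied to the estimated matrix $\tilde T$, and $\delta$ is then small but nonzero, decaying as the calculation converges.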
The method can be used if $\lambda_1$ is the eigenvalue of largest absolute value and $|\lambda_1/\lambda_2| \ne 1$, where $\lambda_1,\dots,\lambda_n$ is the list of the matrix eigenvalues ordered so that $|\lambda_1|\ge|\lambda_2|\ge|\lambda_3|\ge ...\ge|\lambda_n|$. The absolute value of any eigenvalue of any stochastic matrix is less than or equal to unity; therefore, the power method is applicable for estimating $\delta$ in the final stage of the `WL-1/t` algorithm. It is known that $|\lambda^{(k)}-\lambda_1|=O(|\lambda_2/\lambda_1|^{2k})$, where $\lambda^{(k)}$ is the approximation for $\lambda_1$ obtained after $k$ iterations [@Mehl], so the error asymptotically decreases by a factor of $|\lambda_1/\lambda_2|^2$ at each iteration. The TMES is typically a sparse matrix, and its storage usually requires only $O(N_E)$ memory. Matrix-vector multiplications are performed very efficiently if the matrix is sparse, so each iteration of the power method requires only $O(N_E)$ operations in this case. Software libraries such as ViennaCL [@ViennaCL] contain implementations of the power method for sparse matrices. The power method may require many iterations if $|\lambda_1/\lambda_2|\approx 1$. However, we note that the eigenvalue needs to be calculated only occasionally. For example, in our simulations, we calculate $\delta$ only once for each integer $n$, where $n\le100\log t<n+1$. Such a simulation applies the power method only several thousand times during a `WL-1/t` calculation with $10^{13}$ spin flips, so the computing time used for the eigenvalue calculation is negligible. The histogram flatness {#flatnessSec} ---------------------- We can calculate the normalized histogram ${\cal H}(E_m)=H(E_m)/\sum_{m}H(E_m)$ as ${\cal H}(E_m)=N_E^{-1}\sum_{k}\tilde{T}(E_k,E_m)$, because the normalization of $\tilde{T}$ fixes its total sum to $N_E$. Hence, the histogram flatness condition is equivalent to the property that the matrix $\tilde{T}$ is close to stochastic.
Thus, at the final simulation stage of the `WL-1/t` algorithm, the histogram flatness is closely connected with the proximity of the estimate to the true DOS. For the original WL algorithm, there is no guarantee that the rate of increase of the logarithm of the nonnormalized DOS is the same for all energies in the final stage of the calculation: the parameter modification rule $F:=F/2$ results in a rapid decay of $F$, and the algorithm then stops changing its estimate simply because the value of $F$ becomes negligible. The histogram flatness check is performed with a finite accuracy, such as several percent, which results in a finite accuracy of the calculated DOS. The choice of a high accuracy in the flatness criterion can result in slow convergence and a very long simulation time [@Wang-Landau-PRE]. Normalizing the DOS ------------------- Normalizing the DOS only at the end of the simulation was suggested in the original papers [@Wang-Landau; @1overt; @SAMC]. We note that this can limit the accuracy of the estimated DOS. For example, consider the one-dimensional Ising model with $L=512$, where the transition to the second phase of the `WL-1/t` algorithm occurs at $t\sim t_s=2\cdot10^{10}$, with $S(E,t_s)\sim10^7$. After only several hours of the calculation, we have $t=5\cdot10^{11}$ and $F=N_E/t=5\cdot10^{-10}$. The operation $S(E):=S(E)+F$ is then beyond the capabilities of double-precision floating-point variables because there is already a $17$ orders of magnitude difference between $S(E)$ and $F$. Hence, the operation is in fact not performed and the DOS is not updated after that. We therefore recommend normalizing the calculated DOS more frequently during the simulation. For the simulation corresponding to Fig. \[Fig12\], the calculated DOS is normalized every time the values of $\delta$ and $\Delta$ are calculated.
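The absorption effect is easy to reproduce in double precision (the separate accumulator shown at the end is only one possible workaround; frequent renormalization, as recommended above, achieves the same goal by keeping $S$ small):

```python
S, F = 1.0e7, 5.0e-10            # the L = 512 example quoted in the text
assert S + F == S                # the update S := S + F silently does nothing

# one possible workaround: accumulate the small increments separately and
# fold them into S only once the running sum is large enough
acc = 0.0
for _ in range(10**6):
    acc += F
S2 = S + acc
assert S2 > S
```

The failure occurs because the spacing between representable doubles near $10^7$ is about $2\cdot10^{-9}$, larger than twice the increment $F$, so the sum rounds back to $S$.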
![image](1d128.eps){width="0.49\linewidth"} ![image](2d16.eps){width="0.49\linewidth"} Behavior of the control parameter for the WL-1/t algorithm ---------------------------------------------------------- The parameter $$\Delta=\frac{1}{N_E}\sum_E \left|\frac{\tilde{S}(E,t)-S_{\text{exact}}(E)}{S_{\text{exact}}(E)}\right| \label{Delta}$$ estimates the deviation of the computed log-DOS $\tilde{S}(E,t)=\ln \tilde g(E)$ from the exact one, $S_{\text{exact}}(E)=\ln g(E)$. Figure \[Fig12\] shows the behavior of $\overline{\Delta}$ and $\overline{\delta}$ as a function of the simulation time $t$. The overline means that the data were obtained by averaging over $M$ independent runs of the algorithm to reduce statistical noise, where $M=60$ in Fig. \[Fig12\]. We note that $\tilde{S}(E,t)$ in Eq. (\[Delta\]) corresponds to the normalized DOS. Here, we use the normalization $\tilde{S}(E,t)=S(E,t)-\Delta S$, where $\Delta S=S(E_j,t)-S_{\text{exact}}(E_j)$ and $j$ is chosen such that $S(E_j)=\max_jS(E_j)$. Both the abovementioned normalization to the total number of states and the normalization to the number of ground states turn out to give values of $\Delta$ close to those presented in Fig. \[Fig12\]. The vertical dashed line marks the average value of $t_s$. Figure \[Fig12\] demonstrates the monotonic power-law decrease of both parameters $\delta$ and $\Delta$ during the second phase of the `WL-1/t` algorithm. We use a logarithmic scale on both axes. A stable power-law decay of the parameter $\delta$ reveals the convergence of $\tilde T$ to a stochastic matrix and can be used as a criterion for the convergence of the simulated DOS to the exact DOS. The fluctuations of the parameters $\delta$ and $\Delta$ are shown in Fig. \[ErrFig\] for the simulations described in Fig. \[Fig12\]. Figure \[ErrFig\] shows $\sigma(\overline{\delta})/\overline{\delta}$ and $\sigma(\overline{\Delta})/\overline{\Delta}$ as functions of $t$. The relative standard deviations were obtained using 60 independent runs of the algorithm.
Therefore, the values in Fig. \[ErrFig\] represent the relative magnitudes of the error bars in Fig. \[Fig12\]. It follows from Fig. \[ErrFig\] that $\sigma(\delta)=\sqrt{M}\sigma(\overline{\delta})$ and $\sigma(\Delta)=\sqrt{M}\sigma(\overline{\Delta})$ are of the order of $\overline{\delta}$ and $\overline{\Delta}$, respectively. ![image](sigma1d128.eps){width="0.49\linewidth"} ![image](sigma2d16.eps){width="0.49\linewidth"} ![image](sigma1d128inset.eps){width="0.2\linewidth"} ![image](sigma2d16inset.eps){width="0.2\linewidth"} ![ Dependence of $\overline{\delta}$ (solid line) and $\overline{\tilde\Delta}$ (dotted line) on the Monte Carlo time $t$ for the `WL-1/t` algorithm applied to the two-dimensional Potts model with $q{=}8$ spin states and with periodic boundary conditions. The lattice size is $L=32$ and $M=40$. Here, $\tilde\Delta=1/N_E\cdot\sum_E \left|(\overline{S}(E,t)-S_0(E))/S_0(E)\right|$, where $S_0(E)=\langle S(E,t=2.6\cdot10^{12})\rangle$. The vertical dashed line marks the average value of $t_s$. []{data-label="Potts"}](potts32.eps){width="0.99\linewidth"} The condition $\delta(t_2)\ll\delta(t_1)$ observed during the second algorithm phase should result in satisfying the condition $\Delta(t_2)\ll\Delta(t_1)$, which allows approximating the value of $\Delta(t_1)$ as the deviation between the DOS computed at $t=t_1$ and $t=t_2$. This allows estimating the simulation accuracy in the case where the DOS of the simulated system is not known exactly. In Fig. \[Potts\], as an example of such a case, we present the results of simulating the two-dimensional Potts model with $q{=}8$ spin states. The dependence of the parameters $\delta$ and $\tilde\Delta$ on $t$ is qualitatively similar to that calculated for the Ising model (Fig. \[Fig12\]).
Because we do not have an analytic expression for the DOS in this case, we calculate the deviation of $\tilde g(E)$ using the expression $\tilde\Delta=1/N_E\cdot\sum_E \left|(\tilde{S}(E,t)-S_0(E))/S_0(E)\right|$ and taking $S_0(E)=\tilde{S}(E,t_f)$ for a large value of $t_f$ ($t_f=2.6\cdot10^{12}$ in Fig. \[Potts\]). The control parameter $\delta$ can thus be used to estimate the accuracy of the obtained DOS. Very similar results to those shown in Fig. \[Fig12\] were obtained for various values of the lattice size. The calculations were performed with $L$ up to 1024 for the one-dimensional Ising model and up to 64 for the two-dimensional Ising model. Figures \[ising1d\] and \[ising2d\] show $\overline{\delta}(t)$ and $\overline{\Delta}(t)$ for several different values of the Ising model lattice size $L$, where $M=40$. Figures \[Fig12\], \[ising1d\] and \[ising2d\] also demonstrate different values of $t_s$, which grows with the system size. Behavior of the control parameter for the original WL algorithm --------------------------------------------------------------- Figure \[FigClassicalWL\] shows $\overline{\delta}(t)$ and $\overline{\Delta}(t)$ for the original WL algorithm described in [@Wang-Landau]. The algorithm was applied to the one-dimensional and two-dimensional Ising models with $L=32$. The data in the left panel were obtained by applying the WL algorithm to the one-dimensional Ising model and averaging over 40 independent runs. The right panel corresponds to a single run of the WL algorithm applied to the two-dimensional Ising model. In both cases, $\Delta$ and $\delta$ saturate for the original WL algorithm (see also Sec. \[flatnessSec\]). Using the control parameter $\delta$ thus confirms the systematic error of the original WL algorithm previously reported in \[, , , 24\]. Conclusion {#ConclusionSec} ========== We have analyzed properties of the algorithms and of the TMES. The TMES of the WL random walk on the true DOS is stochastic and symmetric.
We present analytic expressions for the TMES in the case of the one-dimensional Ising model. We build on the `WL-1/t` modification of the original algorithm [@1overt] and propose a method for examining the convergence of simulations to the true DOS and for controlling the accuracy of the DOS calculation. The monotonic power-law decrease of the control parameter $\delta$ during the second phase of the algorithm reveals the convergence of the algorithm, and the values of the control parameter can be used to estimate the accuracy of the DOS calculations. This approach can be generalized to systems with an initially unknown discrete spectrum, where the general procedure can be applied for the dynamic change of the TMES. It would be interesting to check its applicability to systems with a continuous energy spectrum. This work is supported by the grant 14-21-00158 from the Russian Science Foundation. ![image](1d256.eps){width="0.49\linewidth"} ![image](1d1024.eps){width="0.49\linewidth"} ![image](2d32.eps){width="0.49\linewidth"} ![image](2d64.eps){width="0.49\linewidth"} ![image](1d32classical.eps){width="0.48\linewidth"} ![image](2d32classical.eps){width="0.50\linewidth"} Convergence of the WL-1/t algorithm to the true DOS {#ConvergenceSec} =================================================== We have shown that the TMES $T$ of the WL random walk on the true DOS is stochastic, and also that the TMES $\tilde T$ is close to a stochastic matrix in the final stage of the `WL-1/t` algorithm. Here we demonstrate that the obtained normalized DOS is close to the true DOS if the TMES $\tilde T$ is a stochastic matrix. It follows from (\[TTexpr\]) that $$\frac{\tilde T(E_k,E_m)}{\tilde T(E_m,E_k)} =\frac{\tilde g(E_k)}{\tilde g(E_m)} \frac{P(E_k,E_m)}{P(E_m,E_k)},$$ where $\tilde g(E)$ is the obtained normalized DOS.
Using (\[simplebalance\]), we hence obtain $$\frac{\tilde T(E_k,E_m)}{\tilde T(E_m,E_k)} =\frac{\eta_m}{\eta_k}, \label{teq}$$ where $\eta_i=g(E_i)/\tilde g(E_i)$ and $g(E)$ is the true DOS. It follows from (\[teq\]) and the stochasticity of $\tilde T$ that $$\eta_m=\eta_m\sum_k\tilde T(E_m,E_k)=\sum_k\tilde T(E_k,E_m)\eta_k. \label{EqEta}$$ Because the TMES is a stochastic matrix, the rates of visiting all energy levels are equal to each other. The values of $\tilde g(E)$ therefore remain almost the same, and the behavior of the algorithm is close to a Markov chain. Moreover, the invariant distribution of the Markov chain has the property that all energy levels are equiprobable. It follows from (\[EqEta\]) that the values $\eta_i/\sum_k\eta_k$ represent the invariant distribution of the Markov chain. Therefore, $\eta_i$ is independent of $i$, and the obtained normalized DOS is hence close to the true DOS. Expressions for $N_i$ and $Q_i$. {#NiQiSec} ================================ We have the relations $$\begin{aligned} N_i&=&\frac{L}{k}C_{2k}^iC_{L-2k-1}^{2k-i-1},\;i=0,1,\dots,2k-1,\nonumber\\ N_{2k}&=&2\delta_{L,2k},\label{N_i}\\ Q_i^{E_k\to E_{k-1}}&=&\frac{i}{L},\quad Q_i^{E_k\to E_k}=\frac{4k-2i}{L},\nonumber\\ Q_i^{E_k\to E_{k+1}}&=&\frac{L-4k+i}{L},\label{Q_i}\end{aligned}$$ where $\delta_{L,2k}$ is the Kronecker delta. Expression (\[N\_i\]) is derived as follows. We consider the circular chain of $L{-}2k$ spins. We place the first domain wall in front of the first spin. We add another $2k{-}i{-}1$ domain walls in the remaining space between the spins; there are $C_{L-2k-1}^{2k-i-1}$ ways to do this. Therefore, we have $L{-}2k$ spins and $2k{-}i$ domain walls, where the first spin of the first domain is the first spin of the chain. We then add one more spin in every domain. We also add $i$ domains consisting of only one spin. There are exactly $C_{2k}^i$ ways to choose $i$ domains among the $2k$ domains. 
Each of these choices unambiguously defines how to add $i$ domains, each consisting of only one spin, to the available $2k{-}i$ domains of the chain. We have thus calculated the number of configurations of the circular chain of $L$ spins containing $2k$ domains such that $i$ domains consist of only one spin, $2k{-}i$ domains consist of more than one spin, and there is a domain wall in front of the first spin. This number is $M_i=2C_{2k}^iC_{L-2k-1}^{2k-i-1}$. When $2k$ domain walls are placed among the $L$ spins, the probability that there is a domain wall in front of the first spin is equal to $p=2k/L$. Hence, $N_i=M_i/p$, i.e., we have obtained Eq. (\[N\_i\]). The justification of Eqs. (\[Q\_i\]) is as follows. We have $2k$ domains, where $i$ domains consist of only one spin and $2k{-}i$ domains consist of more than one spin. To remove a pair of domains with just a single spin flip, we must choose one of the $i$ spins from the domains consisting of only one spin. Therefore, $Q_i^{E_k\to E_{k-1}}=i/L$. To add a pair of domains with just a single spin flip, we must choose a spin that is not a boundary spin of a domain. There are $L{-}4k{+}i$ spins satisfying this condition because there are $2k$ spins located to the right of a domain wall, $2k$ spins located to the left of a domain wall, and $i$ spins that have a domain wall on both their right and their left. Therefore, $Q_i^{E_k\to E_{k+1}}=(L-4k+i)/L$. Finally, $Q_i^{E_k\to E_k}=1-Q_i^{E_k\to E_{k-1}}-Q_i^{E_k\to E_{k+1}}=(4k-2i)/L$. [99]{} F. Wang, D. P. Landau, Phys. Rev. Lett. [**86**]{}, 2050 (2001). F. Wang, D. P. Landau, Phys. Rev. E [**64**]{}, 056101 (2001). Q. Yan, J. J. de Pablo, Phys. Rev. Lett. [**90**]{}, 035701 (2003). C. Zhou, R. N. Bhatt, Phys. Rev. E [**72**]{}, 025701 (2005). H.W. Lee, Y. Okabe, and D.P. Landau, Comp. Phys. Comm. [**175**]{}, 36 (2006). R. E. Belardinelli and V. D. Pereyra, Phys. Rev. E [**75**]{}, 046701 (2007). R. E. Belardinelli and V. D. Pereyra, J. Chem.
Phys. [**127**]{}, 184105 (2007). F. Liang, C. Liu, and R. J. Carroll, J. Am. Stat. Assoc. [**102**]{}, 305 (2007). F. Liang, J. Stat. Phys. [**122**]{}, 511 (2006). G. Brown, Kh. Odbadrakh, D. M. Nicholson, M. Eisenbach, Phys. Rev. E [**84**]{}, 065702(R) (2011). M.P. Taylor, W. Paul, and K. Binder, J. Chem. Phys. [**131**]{}, 114907 (2009). S.V. Zablotskiy, V.A. Ivanov, and W. Paul, Phys. Rev. E [**93**]{}, 063303 (2016). A. Malakis, A. Peratzakis, and N. G. Fytas, Phys. Rev. E [**70**]{}, 066128 (2004). N. G. Fytas and P.E. Theodorakis, Eur. Phys. J. B [**86**]{}, 30 (2013). P. D. Beale, Phys. Rev. Lett. [**76**]{}, 78 (1996). T. Wüst, D. P. Landau, J. Chem. Phys. [**137**]{}, 064903 (2012). S. Schneider, M. Mueller, W. Janke, Comp. Phys. Comm. [**216**]{}, 1 (2017). C. Zhou, J. Su, Phys. Rev. E [**78**]{}, 046705 (2008). A. D. Swetnam, M. P. Allen, J. Comput. Chem. [**32**]{}, 816 (2011). T. Wüst, D. P. Landau, Phys. Rev. Lett. [**102**]{}, 178101 (2009). R. von Mises and H. Pollaczek-Geiringer, Praktische Verfahren der Gleichungsauflösung, ZAMM - Zeitschrift für Angewandte Mathematik und Mechanik [**9**]{}, 152 (1929). S. Börm, C. Mehl, Numerical Methods for Eigenvalue Problems, Walter De Gruyter, Berlin/Boston, 2012. K. Rupp, Ph. Tillet, F. Rudolf, J. Weinbub, A. Morhammer, SIAM J. Sci. Comp. [**38**]{}, S412 (2016). [^1]: In fact, in the very early presentation of the algorithm in the Rahman Prize Lecture in 2002, David Landau already mentioned the systematic error in the DOS estimation. [^2]: If the new configuration is not accepted, then the configuration is left unchanged, and the step is counted as the move to the energy $E_k$, i.e., $\tilde g(E_k)$ is multiplied by the factor $f$.
--- abstract: 'Focusing on $ROSAT$ results for clusters in the $\sim 20-600$ Myr range, I first summarize our current understanding of the X–ray activity – rotation – age relationship. Then, the problem of the Hyades K and M dwarf binaries is addressed: [*1.*]{} most K and M–type binaries in wide systems are X–ray brighter than single stars; [*2.*]{} binaries seem to fit into the same activity – rotation relationship as single stars. Points [*1.*]{} and [*2.*]{} suggest that the distributions of rotations of single and binary stars should also show a dichotomy, but the few available rotational data do not support the existence of such a dichotomy. Rotational periods for a larger sample of binary and single stars should be acquired before any conclusion is drawn. Finally, I discuss whether the activity–age dependence is unique, as commonly thought. Whereas the comparison of Praesepe to the Hyades might imply that this is not the case, the X–ray activity of a sample of Hyades–aged field stars instead supports the common thinking.' author: - Sofia Randich title: Coronal activity among open cluster stars --- Introduction ============ As an introductory remark, it is useful to recall that X–ray emission from solar–type and lower mass stars is thought to originate from a hot corona heated and confined by magnetic fields that are generated through a dynamo process. It is therefore expected on theoretical grounds that the level of X–ray emission, or coronal activity, should depend on at least the properties of the convective zone, on stellar rotation and, through the rotation–age dependence, on stellar age. X–ray surveys of stellar clusters offer a powerful tool to empirically prove and quantitatively constrain the dependence of coronal activity on these parameters and, possibly, on additional ones, thus providing feedback to the theory.
$ROSAT$ PSPC and HRI observations have provided X–ray images for about 30 open clusters in the age range between $\sim 20 - 600$ Myr (see Table 1 in Jeffries 1999, for the most updated list). Our understanding of coronal properties of solar–type and low mass stars in clusters is now considerably deeper than a decade ago, but, at the same time, new puzzles have been raised by $ROSAT$ results. The main results and questions emerged from $ROSAT$ observations of clusters have been discussed in several reviews in the last few years. The age – rotation – activity paradigm (or ARAP) has been discussed at length by Caillault (1996), Randich (1997), and Jeffries (1999). Other issues, such as time variability (Caillault 1996; Stern 1999; Jeffries 1999), insights from spectra (Caillault 1996), supersaturation (Randich 1998), and observational limits and analysis techniques (Micela 1996) were also addressed. I refer to those papers for a detailed discussion of the above topics. In the present paper I will first present a summary of the general picture of the ARAP that we gathered from $ROSAT$ data; second, I will address an issue that was only marginally discussed in previous reviews, namely binaries and their influence on cluster X–ray luminosity distribution functions (XLDFs). Finally, I will focus on the exceptions to the ARAP and on the controversial question whether the X–ray properties of a cluster at a given age can be considered as representative of all clusters at that age. Within this context, I will compare cluster stars with field stars. The following sources of X–ray data were used: [*Pleiades*]{}: Stauffer et al. (1994), Micela et al. (1996), Micela et al. (1999a); [*IC 2602*]{}: Randich et al. (1995); [*IC 2391*]{}: Patten & Simon (1996); [*Alpha Persei*]{}: Prosser et al. (1996); [*Hyades*]{}: Stern et al. (1995), Pye et al. (1994); [*IC 4665*]{}: Giampapa et al. (1998); [*NGC 2547*]{}: Jeffries & Tolley (1998); [*NGC 2516*]{}: Jeffries et al.
(1997); [*Blanco 1*]{}: Micela et al. (1999b); [*NGC 6475*]{}: Prosser et al. (1995), James & Jeffries (1997); [*Coma Berenices*]{}: Randich et al. (1996b); [*Praesepe*]{}: Randich & Schmitt (1995). A consistent picture: the ARAP ============================== The main results evidenced by $ROSAT$ observations of open clusters can be summarized as follows: - If we exclude “outliers” or exceptions which I will discuss in Sect. 4, the average level of X–ray activity decays with age. Whereas this was already well established from [*Einstein*]{} observations of the Hyades and the Pleiades (e.g., Micela et al. 1990), the larger number of clusters observed by $ROSAT$ and the finer age sampling have allowed deriving a more detailed activity vs. age relationship. The decay timescales appear to be different for different masses (the lower the mass the longer the timescale) and the L$_{\rm X}$ vs. age functional dependence is not simply described by the Skumanich power law (Skumanich 1972); - In all clusters the maximum X–ray luminosity (L$_{\rm X}$) decreases towards later spectral–types; at a given spectral–type, a significant scatter in L$_{\rm X}$ is observed; as a consequence, whereas the median L$_{\rm X}$ decreases with age, the XLDFs for clusters of different ages are not “parallel” one to another and some overlap is present. This means that X–ray activity cannot be unambiguously used as an age diagnostic; - The X–ray activity level does depend on rotation only up to a rotation threshold above which X–ray emission saturates; for stars rotating faster than this threshold the ratio of the X–ray luminosity over bolometric luminosity, L$_{\rm X}$/L$_{\rm bol}$, is about constant and equal to 10$^{-3}$. Note that a definitive explanation for saturation has not yet been offered. $ROSAT$ observations of clusters are complemented by determinations of rotational velocities and/or periods in a variety of clusters.
Very briefly, it is now well established that stars arrive on the Zero Age Main Sequence (ZAMS) with a large spread in their rotation rates and then they slow down with mass–dependent timescales (e.g., Barnes 1999; Bouvier 1997 and references therein). The use of the so–called Rossby diagram allows incorporating the above points into a unique picture. Noyes et al. (1984) were the first to show that the use of the Rossby number ($R_0$), the ratio of the rotational period P over the convective turnover time $\tau_c$, which somehow allows formalizing the dependence of activity on the properties of the convection zone, improved the rotation–chromospheric activity relationship for field stars. Randich et al. (1996a) and Patten & Simon (1996) showed this to hold also for the X–ray activity of cluster stars. Taking advantage of the new available periods for several clusters, I produced an updated version of the diagram which I show in Figure 1. X–ray data for field stars were taken from Schmitt (1997) and Hünsch et al. (1998, 1999); periods were taken from Hempelmann et al. (1996); I retrieved periods for most of the clusters from the Open Cluster Database [^1], complementing the ones for the Pleiades with the new measurements of Krishnamurthi et al. (1998) and adding periods for IC 2602 from Barnes et al. (1999). I derived Rossby numbers using the semi–empirical formulation for $\tau_c$ given by Noyes et al. (1984). I refer to the paper of Pizzolato et al. (1999) for a discussion of how different ways of estimating $\tau_c$ may affect the $\log \rm L_{\rm X}/\rm L_{\rm bol}$ vs. $\log R_0$ relationship. Various features can be noted in the diagram: first, saturation of X–ray activity is evident: it occurs at $\log R_0 \sim -0.8$. The points with a lower Rossby number cluster around $\log$ L$_{\rm X}$/L$_{\rm bol}=-3$ (but note the supersaturation at very low $\log R_0$ –see Randich 1998). 
Since the diagram includes stars from F down to M spectral–type, the uniformity of the threshold Rossby number below which X–ray emission is saturated implies that the rotation threshold depends on mass. In other words, if $\log (R_0)_{\rm thr}=(\log$ P/$\tau_c$)$_{\rm thr} =const\sim -0.8$, then, P$_{\rm thr} \propto \tau_c$; since $\tau_c$ increases with decreasing mass (the convective envelope becomes deeper), the lower the mass, the longer is the threshold period (e.g., Stauffer et al. 1997a). Second, all cluster and field stars fit into a unique relationship. This on one hand means that field and cluster stars behave in a similar way as far as the rotation – convection – activity relation is concerned; whereas this is qualitatively expected –why should field and cluster stars behave differently?– it is good to empirically confirm the expectations. On the other hand, the fact that stars in all clusters lie on the same curve, irrespective of age and mass, implies that the activity–age dependence is most likely an activity–\[rotation–convection\]–age dependence. Incidentally, whereas a certain amount of scatter around the relation is present, as well as a few outliers, I believe, in agreement with Jeffries (1999), that most of the scatter is likely due to errors and non–uniformity in L$_{\rm X}$ measurements and to some variability in X–ray luminosities. Third, the linear regression curve has a slope equal to $-2.1(\pm 0.09)$ which, at a given spectral–type (i.e., roughly constant $\tau_c$ and stellar radius) is the same functional L$_{\rm X}$ vs. rotational velocity dependence found by Pallavicini et al. (1981) for field stars. In summary, the Rossby diagram can be looked at as an evolutionary diagram. Stars arrive on the ZAMS characterized by a range of rotation rates; therefore they occupy different regions of the Rossby diagram, with a significant fraction of them lying on the saturated part.
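The ingredients quoted in this section (the Noyes et al. 1984 $\tau_c$ calibration, the saturation threshold $\log R_0 \sim -0.8$, and the fitted slope $-2.1$) can be combined into a schematic sketch. The coefficients of the $\tau_c$ fit are reproduced here as commonly quoted in the literature, and forcing continuity at the knee is my own simplification, so treat the numbers as illustrative rather than as the exact calibration used for Fig. 1:

```python
from math import log10

def log_tau_c(b_minus_v):
    # Noyes et al. (1984) semi-empirical fit, tau_c in days, x = 1 - (B-V);
    # coefficients quoted from the literature, verify before serious use
    x = 1.0 - b_minus_v
    if x > 0:
        return 1.362 - 0.166 * x + 0.025 * x**2 - 5.323 * x**3
    return 1.362 - 0.14 * x**3

LOG_R0_THR = -0.8   # saturation threshold in log(Rossby number)
SLOPE = -2.1        # slope of the non-saturated branch (regression of Fig. 1)

def log_activity(period_days, b_minus_v):
    """log(L_X/L_bol) from the schematic two-branch relation."""
    log_r0 = log10(period_days) - log_tau_c(b_minus_v)
    if log_r0 <= LOG_R0_THR:
        return -3.0                              # saturated plateau
    return -3.0 + SLOPE * (log_r0 - LOG_R0_THR)  # continuity at the knee assumed

def p_threshold(b_minus_v):
    # P_thr is proportional to tau_c: log(P_thr / tau_c) = -0.8
    return 10 ** (log_tau_c(b_minus_v) + LOG_R0_THR)

# a slowly rotating solar-type star (B-V ~ 0.65, P ~ 26 d) is far from saturation,
# while the threshold period grows toward later types (larger B-V)
assert log_activity(26.0, 0.65) < -5.0
assert p_threshold(1.4) > p_threshold(0.65)
```

With these numbers a solar-type star has $\tau_c \sim 12$ d and a saturation period of roughly two days, while a mid-K/M dwarf saturates at a period about twice as long, in line with the mass dependence discussed above.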
The maximum luminosity at a given spectral–type is bounded by the saturation condition, which explains why it decreases towards late spectral–types; non–saturated stars cause the spread in L$_{\rm X}$, whilst saturated stars, in principle, do not contribute to it. As the clusters age, the stars spin down and move towards the right of the Rossby diagram. Their L$_{\rm X}$ values remain virtually unchanged until they de-saturate and, once they no longer lie on the saturation plateau, they become progressively less active as they continue to spin down. The fraction of saturated stars in a cluster decreases until, as is the case for the Hyades solar–type stars, all the stars are non–saturated. As a consequence, the mean and median luminosities decay. Since, as we consider later spectral–types, both the spin–down timescales and the saturation threshold period are longer, K and M dwarfs move towards the right of the diagram at a slower rate than solar–type stars (in other words, they remain saturated longer); accordingly, the timescales for the decay of X–ray activity of K and M dwarfs are also longer than for solar–type stars. Binaries ======== How do binary stars fit into the scenario outlined in the previous section? In principle, there should be no difference between single stars and wide binaries, which, therefore, should follow a “normal” X–ray activity – rotation – age evolution. On the contrary, as is well known, binaries in close, tidally locked systems are rapid rotators even at rather old ages and therefore are expected to show high levels of X–ray emission and to contribute to the high luminosity tail of a cluster XLDF. In young clusters like the Pleiades, virtually no difference is observed between the X–ray activity level of single and binary stars (e.g., Stauffer et al. 1994); this is indeed not surprising since most of the Pleiades single stars are still rapid rotators because of their young age.
The situation is different in the older Hyades: the X–ray brightest stars in the cluster are well-known binaries. Most surprisingly, however, not only are tidally locked BY Dra binary systems found to be more active than single stars, but a high level of X–ray emission is also observed among several wide binaries. The influence of binary systems on the XLDFs of the Hyades has been discussed by Pye et al. (1994), Stern et al. (1995), and Stern and Stauffer (1996). All these studies pointed out that the XLDFs of late–A, F, and G–type binaries are very similar to those of single stars. On the contrary, the XLDFs of binary and single K and M dwarfs show a dichotomy, with the bulk of the binary population being considerably more X–ray active than single stars (see Fig. 10 in Stern et al. 1995 and Fig. 2b of Pye et al. 1994). Pye et al. estimated that the probability that the XLDFs of binary and single K–type stars are drawn from the same parent population is lower than 0.4%. Since most of the K–type binaries are in wide systems with orbital periods of the order of a year or longer, enforced rotation could not be the reason for the high activity level. Pye et al. also showed that the higher luminosities of binary K dwarfs could not simply be due to the summed luminosities of single components. The questions then arise [*a)*]{} whether the rotation–activity relationship for binaries is similar to that of single stars and, [*b)*]{} if this is the case, why do binaries in long period systems maintain high rotation and activity. Hyades binaries with known rotational periods are plotted as crossed triangles in the Rossby diagram shown in Fig. 1; they clearly follow the same $\log R_0$ vs. L$_{\rm X}$/L$_{\rm bol}$ relation as single Hyades stars, with only one binary lying above the locus of the other stars (the star is VB 50, B$-$V$=0.59$ –i.e., it is not a K/M–type binary). The answer to question [*a)*]{} seems therefore to be “yes". Figure 2 is a revised version of Fig. 
11 of Stern et al. (1995); in the figure I plot the logarithm of X–ray luminosity as a function of the orbital period (P$_{\rm orb}$) for Hyades binaries with B$-$V $\geq 0.8$. Orbital periods come from various sources in the literature and were retrieved from the Open Cluster Database. The figure indeed confirms that most wide binaries have a higher L$_{\rm X}$ than the median luminosity of single stars. Stars with P$_{\rm orb} \leq 10$ days are synchronous, as expected (e.g., Zahn & Bouchet 1989), and they nicely follow an L$_{\rm X}$ vs. P$_{\rm orb}=\rm P_{\rm rot.}$ relationship (in agreement with the trend seen in Fig. 1). The stars with longer orbital periods do not follow such a relationship, but are scattered throughout the diagram. Only three of them have available rotational periods, but for these three stars an L$_{\rm X}$ vs. P$_{\rm rot}$ relationship may also hold, with the most active one being the most rapid rotator. In other words, both Figs. 1 and 2 suggest that rotation is the reason for the high activity level of both short–period and long–period binaries and that even binaries in wide systems may maintain a rather high rotation (at least higher than single stars). As possible explanations for this, Pye et al. (1994) and Stern et al. (1995) proposed either the higher initial angular momentum available in binary systems or a different PMS rotational evolution; more specifically, the reasonable hypothesis could be made that binaries disrupt their circumstellar disks earlier than single stars, thus removing a source of rotational braking. If this is the case, as stressed by Stern & Stauffer (1996), the rotational velocity distributions of single and binary K and M dwarfs should also show a dichotomy. Contrary to this expectation, Stauffer et al. (1997b), based on v$\sin i$ measurements, found that the components of SB2 binaries in the Hyades are, on average, slow rotators. 
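As a sanity check on how slow these rotators actually are, the equatorial velocity implied by a given rotation period follows from $v = 2\pi R/P$; the K-dwarf radius of $0.75\,R_\odot$ below is my own illustrative assumption:

```python
import math

R_SUN_KM = 6.957e5   # solar radius in km
DAY_S = 86400.0      # seconds per day

def v_eq(radius_rsun, period_days):
    """Equatorial rotation velocity in km/s: v = 2*pi*R / P."""
    return 2.0 * math.pi * radius_rsun * R_SUN_KM / (period_days * DAY_S)

# A synchronized binary with P ~ 10 days and an assumed radius of ~0.75 R_sun:
v = v_eq(0.75, 10.0)   # roughly 4 km/s
```

Velocities of this order sit at or below typical v$\sin i$ detection limits, which is why rotational periods, rather than v$\sin i$ measurements alone, are needed to settle the issue.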
In summary, we are left with the contradictory pieces of evidence that [**1.**]{} the same L$_{\rm X}$ vs. period or $R_0$ relationship holds for binaries and single stars; [**2.**]{} wide K and M dwarf binaries may exist with rather short rotational periods and high activity levels; [**3.**]{} the v$\sin i$ distributions of the sample of K and M–type binaries and single stars studied by Stauffer et al. (1997b) do not show any evident dichotomy. I think two possible reasons for this inconsistency can be proposed: first, neither the sample of wide binaries with known orbital and rotational periods nor the sample of Stauffer et al. (1997b) is large enough or, more importantly, complete. Second, rotational periods of $\sim 10$ days correspond, for stars with B$-$V $\sim 0.9$ (see Fig. 2), to velocities of the order of 4 km/s, lower than the v$\sin i=6$ km/s detection limit of Stauffer et al. (1997b); this suggests that the dichotomy between single and binary K and M dwarfs may show up only among slow rotators. Rotational periods for a large sample of both binary and single stars are clearly required to further investigate this issue. Problems with the ARAP ====================== As discussed in Sect. 2, most of the $ROSAT$ results for open clusters can be explained within the ARAP scenario. Whereas the Rossby diagram shown in Fig. 1 reveals no major deviations from the activity–rotation relationship, exceptions to the age–activity relationship have instead been found. I focus here on solar–type stars, but I mention that problems also exist for lower mass stars. In Figure 3 I plot the median L$_{\rm X}$ vs. age for G–type stars (0.59 $\leq$ B$-$V$_0 \leq 0.82$) in various clusters. The vertical bars denote the luminosity range between the 25th and 75th percentiles of the XLDFs. Field stars are also included in the plot. Their age was taken from Ng & Bertelli (1998) or Edvardsson et al. (1993): all but one are older than the Hyades. 
The open triangle indicates the median luminosity of a sample of nine field stars with an age similar to the Hyades; I selected these stars using lithium measurements from Pasquini et al. (1994), under the plausible assumption that Li in this color range and up to the Hyades age is a reliable age indicator. Three lines denoting power laws with indices $\alpha= -0.5$ (Skumanich law), $-1, -2$ are also shown in the diagram. The figure illustrates the general trend of decreasing X–ray emission with increasing age, the fact that the decay cannot be simply described by a power law, and the overlap between XLDFs of different clusters (i.e., the most active Hyades stars can be as active as stars in the Pleiades). Not all the clusters, however, fit into the mean trend: Praesepe appears to be the most discrepant cluster in the diagram. It has about the same age as the Hyades and Coma, but as the figure shows, the bulk of its population of solar–type stars is considerably X–ray fainter than in the other two clusters (Randich & Schmitt 1995). Barrado y Navascués et al. (1998) demonstrated that such a result is not due to contamination by non–members in the Praesepe sample. In addition, according to Mermilliod (1997), the distributions of rotational velocities in the Hyades and Praesepe are rather similar, although v$\sin i$ or periods are not currently published and thus it is not possible to check on a star-to-star basis whether Praesepe stars follow the same activity – rotation relationship as the stars in other clusters. In any case, this discrepancy casts doubt on the assumption that the X–ray properties of a cluster at a given age can be considered as representative of all clusters at that age. Totten et al. (1999) and Franciosini et al. 
(1999) analyzed a $ROSAT$ HRI image of NGC 6633, a cluster of about the same age as the Hyades and Praesepe: both studies found that NGC 6633 seems to be more Praesepe–like than Hyades–like, supporting the conclusion that the age–activity relation is not unique (but deeper X–ray observations of NGC 6633 are needed to confirm that NGC 6633 is really less active than the Hyades). On the contrary, as Fig. 3 shows, the median X–ray luminosity of a random sample of field stars at $\sim$ 600 Myr exactly matches the Hyades median (and the spread around the median is very small), supporting the opposite conclusion that the Hyades are indeed the standard at 600 Myr. A solution to this puzzle (at least as far as the Hyades/Praesepe dichotomy is concerned) is possibly offered by the results of Holland et al. (1999) who suggest that Praesepe could result from two merged clusters, with the brightest X–ray sources being found almost exclusively in the main cluster. Other (minor) inconsistencies are visible in Fig. 3; whereas it is understood why all clusters up to Alpha Persei have about the same median luminosity (there is no substantial spin–down up to that age), a tight age–activity relationship does not appear to hold between $\sim$100 and 250 Myr. This, again, would imply that the age–activity relationship is not unique and that other parameters (metallicity? e.g., Jeffries et al. 1997) besides rotation and age influence the level of X–ray activity. However, several sources of uncertainty should be removed before such a conclusion can be regarded as definitive. 
Namely: [*i)*]{} the X–ray data used to compute XLDFs and the median luminosities come from different surveys, with different sensitivities, and have been analyzed in different ways (I just used the published X–ray luminosities); [*ii)*]{} some of the cluster samples are X–ray selected, and thus biased toward X–ray bright stars; [*iii)*]{} some of the cluster samples (e.g., Blanco 1) may be contaminated by non–members; [*iv)*]{} the clusters shown in the figure are not on the same age scale; whereas ages for the Pleiades, Alpha Persei, and IC 2391 come from the most recent determinations through the lithium boundary method, the ages for the other clusters are the more traditional ones derived through color–magnitude diagram fitting. Note, for example, that the age of NGC 2547 could indeed be larger (see Jeffries et al. 1999). Finally, the X–ray activity–age relation for stars older than the Hyades is defined by field stars only, which are scattered throughout the diagram. The figure may suggest that the decay between the Hyades and, e.g., the Sun is more rapid than $t^{-1/2}$, but, very obviously, X–ray surveys deep enough to reach main sequence solar–type stars in clusters older than the Hyades are needed. Conclusions =========== $ROSAT$ observations of clusters have increased our confidence in the ARAP but, at the same time, have led to results that apparently cannot be fully explained by it. Before the conclusion is drawn that exceptions to the ARAP really exist, additional X–ray and optical observations should be carried out. The need for X–ray surveys of clusters older than the Hyades, or for deeper observations of clusters that have already been observed by $ROSAT$, is unquestionable. At the same time, X–ray spectra of cluster stars will allow us to infer their coronal properties and follow their evolution with age, or will possibly provide us with a key to the understanding of saturation and supersaturation. 
I refer to Jeffries (1999) for a detailed list of the issues that the capabilities of XMM and Chandra will allow us to address. I would like to stress here that complementary optical data (i.e., additional determinations of periods, rotational and radial velocities, deep imaging, etc.) are also needed in order to address these issues in detail and, possibly, find a solution to the puzzles discussed in the previous sections. I am grateful to Giusi Micela and Roberto Pallavicini for their careful reading of the manuscript and useful comments. I thank Rob Jeffries for anticipating his results on the age of NGC 2547. This work has made extensive use of the [simbad]{} database maintained by the Centre de Données Astronomiques de Strasbourg.

Barnes, S.A. 1999, these Proceedings
Barnes, S.A., et al. 1999, ApJ, 516, 263
Bouvier, J. 1997, Mem. SaIt, 68, 881
Barrado y Navascués, D., Stauffer, J.R., and Randich, S. 1998, ApJ, 506, 347
Caillault, J.-P. 1996, The Ninth Cambridge Workshop on Cool Stars, Stellar Systems and the Sun, R. Pallavicini and A.K. Dupree (eds), ASP Conference Series 109, p. 325
Edvardsson, B., et al. 1993, A&A, 275, 101
Franciosini, E., Randich, S., and Pallavicini, R. 1999, these Proceedings
Giampapa, M.S., Prosser, C.F., and Fleming, T.A. 1998, ApJ, 501, 624
Holland, K., et al. 1999, these Proceedings
Hempelmann, A., Schmitt, J.H.M.M., and Stepien, K. 1996, A&A, 305, 284
Hünsch, M., Schmitt, J.H.M.M., and Voges, W. 1998, A&AS, 132, 155
Hünsch, M., et al. 1999, A&AS, 135, 319
James, D.J., and Jeffries, R.D. 1997, MNRAS, 292, 252
Jeffries, R. 1999, in Solar and Stellar Activity: Similarities and Differences, C.J. Butler and J.G. Doyle (eds), ASP Conference Series 158, p. 75
Jeffries, R.D., and Tolley, A.J. 1998, MNRAS, 300, 331
Jeffries, R.D., Thurston, M.R., and Pye, J.P. 1997, MNRAS, 287, 501
Jeffries, R.D., et al. 1999, these Proceedings
Krishnamurthi, A., et al. 1998, ApJ, 493, 914
Mermilliod, J.-C. 1997, Mem. SaIt, 68, 859
Micela, G. 1996, The Ninth Cambridge Workshop on Cool Stars, Stellar Systems and the Sun, R. Pallavicini and A.K. Dupree (eds), ASP Conference Series 109, p. 347
Micela, G., et al. 1990, ApJ, 348, 557
Micela, G., et al. 1996, ApJS, 102, 75
Micela, G., et al. 1999a, A&A, 341, 751
Micela, G., et al. 1999b, A&A, 344, 83
Ng, Y.K., and Bertelli, G. 1998, A&A, 329, 943
Noyes, R.W., et al. 1984, ApJ, 279, 763
Patten, B.M., and Simon, T. 1996, ApJS, 106, 489
Pallavicini, R., et al. 1981, ApJ, 248, 279
Pasquini, L., Liu, Q., and Pallavicini, R. 1994, A&A, 287, 191
Pizzolato, N., et al. 1999, these Proceedings
Prosser, C.F., et al. 1995, AJ, 110, 1229
Prosser, C.F., et al. 1996, AJ, 112, 1570
Pye, J.P., et al. 1994, MNRAS, 266, 798
Randich, S. 1997, Mem. SaIt, 68, 971
Randich, S. 1998, The Tenth Cambridge Workshop on Cool Stars, Stellar Systems and the Sun, R.A. Donahue and J.A. Bookbinder (eds), ASP Conference Series 154, p. 501
Randich, S., and Schmitt, J.H.M.M. 1995, A&A, 298, 115
Randich, S., et al. 1995, A&A, 300, 134
Randich, S., et al. 1996a, A&A, 305, 785
Randich, S., Schmitt, J.H.M.M., and Prosser, C.F. 1996b, A&A, 313, 815
Schmitt, J.H.M.M. 1997, A&A, 318, 215
Skumanich, A. 1972, ApJ, 171, 565
Stauffer, J.R., et al. 1994, ApJS, 91, 625
Stauffer, J.R., et al. 1997a, ApJ, 479, 776
Stauffer, J.R., et al. 1997b, ApJ, 475, 604
Stern, R.A. 1999, in Solar and Stellar Activity: Similarities and Differences, C.J. Butler and J.G. Doyle (eds), ASP Conference Series 158, p. 47
Stern, R.A., and Stauffer, J.R. 1996, The Ninth Cambridge Workshop on Cool Stars, Stellar Systems and the Sun, R. Pallavicini and A.K. Dupree (eds), ASP Conference Series 109, p. 387
Stern, R.A., Schmitt, J.H.M.M., and Kahabka, P.T. 1995, ApJ, 448, 683
Totten, E.J., et al. 1999, these Proceedings
Zahn, J.-P., and Bouchet, L. 1989, A&A, 223, 112

[^1]: The Open Cluster Database, provided by C.F. Prosser (deceased) and J.R. Stauffer, may currently be accessed at http://cfa-ftp.harvard.edu/stauffer/, or by anonymous ftp to cfa-ftp.harvard.edu, cd /pub/stauffer/clusters.
--- abstract: 'We consider configurations consisting of a gravitating nonlinear spinor field $\psi$, with a nonlinearity of the type $\lambda\left(\bar\psi\psi\right)^2$, minimally coupled to Maxwell and Proca fields through the coupling constants $Q_M$ \[U(1) electric charge\] and $Q_P$, respectively. In order to ensure spherical symmetry of the configurations, we use two spin-$1/2$ fields having opposite spins. By means of numerical computations, we find families of equilibrium configurations with a positive Arnowitt-Deser-Misner (ADM) mass described by regular zero-node asymptotically flat solutions for static Maxwell and Proca fields and for stationary spinor fields. For the case of the Maxwell field, it is shown that, with increasing charge $Q_M$, the masses of the objects increase and diverge as the charge tends to a critical value. For negative values of the coupling constant $\lambda$, we demonstrate that, by choosing physically reasonable values of this constant, it is possible to obtain configurations with masses comparable to the Chandrasekhar mass and with effective radii of the order of kilometers. It enables us to speak of an astrophysical interpretation of such systems, regarding them as charged Dirac stars. In turn, for the system with the Proca field, it is shown that the mass of the configurations also grows with increasing both $|\lambda|$ and the coupling constant $Q_P$. Although in this case the numerical calculations do not allow us to make a definite conclusion about the possibility of obtaining masses comparable to the Chandrasekhar mass for physically reasonable values of $\lambda$, one may expect that such masses can be obtained for certain values of free parameters of the system under consideration.' 
author: - Vladimir Dzhunushaliev - Vladimir Folomeev title: Dirac star in the presence of Maxwell and Proca fields --- Introduction ============ The bulk of the literature is devoted to treating compact gravitating configurations consisting of various fundamental fields. The most popular line of investigation focuses on studying boson stars – objects supported by scalar (spin-0) fields. Being in their own gravitational field, such fields can create configurations whose physical characteristics lie in a very wide range, from those which are typical for atoms up to the parameters comparable with characteristics of galaxies [@Schunck:2003kk; @Liebling:2012fv]. On the other hand, it is not impossible that there may exist gravitating objects supported by fundamental fields with nonzero spin. In particular, they may be massive vector (spin-1) fields described by the Proca equation [@Lawrie2002]. Being the generalization of Maxwell’s theory, Proca theory permits one both to take into account various effects related to the possible presence of the rest mass of a photon [@Tu:2005ge] and to describe the massive $Z^0$ and $W^\pm$ particles in the Standard Model of particle physics [@Lawrie2002]. Such fields are also discussed in the literature as applied to dark matter physics [@Pospelov:2008jd] and when considering compact strongly gravitating spherically symmetric starlike configurations [@Brito:2015pxa; @Herdeiro:2017fhv]. In turn, when the source of gravitation is spinor (spin-$1/2$) fields, the corresponding configurations are described by the Einstein-Dirac equations. These can be spherically symmetric systems consisting of both linear [@Finster:1998ws; @Herdeiro:2017fhv] and nonlinear spinor fields [@Krechet:2014nda; @Adanhounme:2012cm; @Dzhunushaliev:2018jhj]. Nonlinear spinor fields are also used in considering cylindrically symmetric solutions [@Bronnikov:2004uu], wormhole solutions [@Bronnikov:2009na], and various cosmological problems (see Refs. 
[@Ribas:2010zj; @Ribas:2016ulz; @Saha:2016cbu] and references therein). The aforementioned localized self-gravitating configurations with spinor fields are prevented from collapsing under their own gravitational fields due to the Heisenberg uncertainty principle. If one adds to such systems an electric field, the presence of extra repulsive forces related to such a field results in new effects which can considerably alter the characteristics of the systems [@Finster:1998ux]. Consistent with this, the purpose of the present paper is to study the influence that the presence of massless (Maxwell) or massive (Proca) vector fields has on the properties of gravitating configurations consisting of a spinor field $\psi$ with a nonlinearity of the type $\lambda\left(\bar\psi\psi\right)^2$. Since the spin of a fermion has an intrinsic orientation in space, a system consisting of a single spinor particle cannot be spherically symmetric. For this reason, we take two fermions having opposite spins, i.e., consider two spinor fields, and this enables us to have spherically symmetric objects. In order to get configurations with masses of the order of the Chandrasekhar mass, we study in detail the limiting systems obtained in the case of large negative values of the dimensionless coupling constant $\bar \lambda$. Notice here that in the present paper we deal with a system consisting of a [*classical*]{} spinor field. Following Ref. [@ArmendarizPicon:2003qk], by the latter we mean a set of four complex-valued spacetime functions that transform according to the spinor representation of the Lorentz group. But it is evident that realistic spin-$\frac{1}{2}$ particles must be described by [*quantum*]{} spinor fields. It is usually believed that there exists no classical limit for quantum spinor fields. 
However, classical spinors can be regarded as arising from some effective description of more complex quantum systems (for possible justifications of the existence of classical spinors, see Ref. [@ArmendarizPicon:2003qk]). The paper is organized as follows. In Sec. \[prob\_statem\], we present the general-relativistic equations for the systems under consideration. These equations are solved numerically in Sec. \[num\_sol\] for the Maxwell field (Sec. \[Maxw\_field\]) and for the Proca field (Sec. \[Proca\_field\]) in two limiting cases when the coupling constant $\bar \lambda=0$ (linear spinor fields) and when $|\bar\lambda| \gg 1$. Finally, in Sec. \[concl\], we summarize and discuss the results obtained. Statement of the problem and general equations {#prob_statem} ============================================== We consider compact gravitating configurations consisting of a spinor field minimally coupled to Maxwell/Proca fields. The modeling is carried out within the framework of Einstein’s general relativity. The corresponding total action for such a system can be represented in the form \[the metric signature is $(+,-,-,-)$\] $$\label{action_gen} S_{\text{tot}} = - \frac{c^3}{16\pi G}\int d^4 x \sqrt{-g} R +S_{\text{sp}}+S_{\text{v}},$$ where $G$ is the Newtonian gravitational constant; $R$ is the scalar curvature; and $S_{\text{sp}}$ and $S_{\text{v}}$ denote the actions of spinor and vector fields, respectively. The action $S_{\text{sp}}$ is obtained from the Lagrangian for the spinor field $\psi$ of the mass $\mu$, $$L_{\text{sp}} = \frac{i \hbar c}{2} \left( \bar \psi \gamma^\mu \psi_{; \mu} - \bar \psi_{; \mu} \gamma^\mu \psi \right) - \mu c^2 \bar \psi \psi - F(S), \label{lagr_sp}$$ where the semicolon denotes the covariant derivative defined as $ \psi_{; \mu} = [\partial_{ \mu} +1/8\, \omega_{a b \mu}\left( \gamma^a \gamma^b- \gamma^b \gamma^a\right)+i Q_{M,P}/(\hbar c) A_\mu]\psi $. 
Here $\gamma^a$ are the Dirac matrices in the standard representation in flat space \[see, e.g., Ref. [@Lawrie2002], Eq. (7.27)\]. In turn, the Dirac matrices in curved space, $\gamma^\mu = e_a^{\phantom{a} \mu} \gamma^a$, are obtained using the tetrad $ e_a^{\phantom{a} \mu}$, and $\omega_{a b \mu}$ is the spin connection \[for its definition, see Ref. [@Lawrie2002], Eq. (7.135)\]. The term $i Q_{M,P}/(\hbar c) A_\mu\psi$ describes the interaction between the spinor and Maxwell/Proca fields. The coupling constant $Q_{M}$ plays the role of a U(1) charge in Maxwell theory, and $Q_P$ is the coupling constant in Proca theory. This Lagrangian contains an arbitrary nonlinear term $F(S)$, where the invariant $S$ can depend on $ \left( \bar\psi \psi \right), \left( \bar\psi \gamma^\mu \psi \right) \left( \bar\psi \gamma_\mu \psi \right)$, or $\left( \bar\psi \gamma^5 \gamma^\mu \psi \right) \left( \bar\psi \gamma^5 \gamma_\mu \psi \right)$. The action for the vector fields $S_{\text{v}}$ appearing in is obtained from the Lagrangian $$L_{\text{v}} = -\frac{1}{4} F_{\mu\nu}F^{\mu\nu}+\frac{1}{2}\left(\frac{m_P c}{\hbar}\right)^2 A_\mu A^\mu,$$ where $F_{\mu\nu} = \partial_{ \mu} A_\nu - \partial_\nu A_\mu$ is the tensor of a massive spin-1 field of the Proca mass $m_P$. In the case of $m_P=0$ we return to Maxwell’s electrodynamics. 
Varying the action with respect to the metric, to the spinor field, and to the vector potential $A_\mu$, we derive the Einstein, Dirac, and Proca/Maxwell equations in curved spacetime: $$\begin{aligned} R_{\mu}^\nu - \frac{1}{2} \delta_{\mu }^\nu R &=& \frac{8\pi G}{c^4} T_{\mu }^\nu, \label{feqs-10} \\ i \hbar \gamma^\mu \psi_{;\mu} - \mu c \psi - \frac{1}{c}\frac{\partial F}{\partial\bar\psi}&=& 0, \label{feqs-20}\\ i \hbar \bar\psi_{;\mu} \gamma^\mu + \mu c \bar\psi + \frac{1}{c}\frac{\partial F}{\partial\psi}&=& 0, \label{feqs-21}\\ \frac{1}{\sqrt{-g}}\frac{\partial}{\partial x^\nu}\left(\sqrt{-g}F^{\mu\nu}\right)&=&-Q_{M,P}\bar\psi\gamma^\mu\psi +\left(\frac{m_P c}{\hbar}\right)^2 A^\mu. \label{feqs-22}\end{aligned}$$ The right-hand side of Eq.  contains the energy-momentum tensor $T_{\mu}^\nu$, which can be represented (already in a symmetric form) as $$\begin{aligned} \label{EM} \begin{split} T_{\mu}^\nu =&\frac{i\hbar c }{4}g^{\nu\rho}\left[\bar\psi \gamma_{\mu} \psi_{;\rho}+\bar\psi\gamma_\rho\psi_{;\mu} -\bar\psi_{;\mu}\gamma_{\rho }\psi-\bar\psi_{;\rho}\gamma_\mu\psi \right]-\delta_\mu^\nu L_{\text{sp}} \\ &-F^{\nu\rho}F_{\mu\rho}+\frac{1}{4}\delta_\mu^\nu F_{\alpha\beta}F^{\alpha\beta}+ \left(\frac{m_P c}{\hbar}\right)^2\left(A_\mu A^\nu-\frac{1}{2}\delta_\mu^\nu A_\rho A^\rho\right). \end{split}\end{aligned}$$ Next, taking into account the Dirac equations and , the Lagrangian becomes $$L_{\text{sp}} = - F(S) + \frac{1}{2} \left( \bar\psi\frac{\partial F}{\partial\bar\psi} + \frac{\partial F}{\partial\psi}\psi \right).$$ For our purpose, we choose the nonlinear term in a simple power-law form, $F(S) = - k(k+1)^{-1}\lambda\left(\bar\psi\psi\right)^{k+1}, $ where $k$ and $\lambda$ are some free parameters. In what follows we set $k=1$ to give $$F(S) = - \frac{\lambda}{2} \left(\bar\psi\psi\right)^2. \label{nonlin_term_2}$$ (Regarding the physical meaning of the constant $\lambda$ appearing here, see below.) 
In the absence of gravitation, classical spinor fields with this type of nonlinearity have been considered, for instance, in Refs. [@Finkelstein:1951zz; @Finkelstein:1956; @Soler:1970xp], where it has been shown that the corresponding nonlinear Dirac equation has regular finite energy solutions in a flat spacetime. In turn, soliton-type solutions of the nonlinear Dirac equation in a curved background have been studied in Ref. [@Mielke:2017nwt] (see also references therein). Since here we consider only spherically symmetric configurations, it is convenient to choose the spacetime metric in the form $$ds^2 = N(r) \sigma^2(r) (dx^0)^2 - \frac{dr^2}{N(r)} - r^2 \left( d \theta^2 + \sin^2 \theta d \varphi^2 \right), \label{metric}$$ where $N(r)=1-2 G m(r)/(c^2 r)$, and the function $m(r)$ corresponds to the current mass of the configuration enclosed by a sphere with circumferential radius $r$; $x^0=c t$ is the time coordinate. In order to describe the spinor field, one must choose the corresponding ansatz for $\psi$ compatible with the spherically symmetric line element . Here, we use a stationary ansatz, which can be taken in the following form (see, e.g., Refs. [@Soler:1970xp; @Li:1982gf; @Li:1985gf; @Herdeiro:2017fhv]): $$\psi^T =2\, e^{-i \frac{E t}{\hbar}} \begin{Bmatrix} \begin{pmatrix} 0 \\ - g \\ \end{pmatrix}, \begin{pmatrix} g \\ 0 \\ \end{pmatrix}, \begin{pmatrix} i f \sin \theta e^{- i \varphi} \\ - i f \cos \theta \\ \end{pmatrix}, \begin{pmatrix} - i f \cos \theta \\ - i f \sin \theta e^{i \varphi} \\ \end{pmatrix} \end{Bmatrix}, \label{spinor}$$ where $E/\hbar$ is the spinor frequency and $f(r)$ and $g(r)$ are two real functions. This ansatz ensures that the spacetime of the system under consideration remains static. Here, each row describes a spin-$\frac{1}{2}$ fermion, and these two fermions have the same masses $\mu$ and opposite spins. 
Thus the ansatz describes two Dirac fields whose energy-momentum tensors are not spherically symmetric, but their sum gives a spherically symmetric energy-momentum tensor. (Regarding the relationships between the [*Ansatz*]{} and [*Ansätze*]{} used in the literature, see Ref. [@Dzhunushaliev:2018jhj].) For the Maxwell and Proca fields, we take the [*Ansatz*]{} $A_\mu=\{\phi(r),0,0,0\}$. In the case of Maxwell’s electrodynamics this corresponds to the presence of the radial electric field $E_r=-\phi^\prime(r)$. Then, substituting the [*Ansatz*]{} and the metric into the field equations , , and , one can obtain the following set of equations: $$\begin{aligned} &&\bar f^\prime + \left[ \frac{N^\prime}{4 N} + \frac{\sigma^\prime}{2\sigma}+\frac{1}{x}\left(1+\frac{1}{\sqrt{N}}\right) \right] \bar f + \left[ \frac{1}{\sqrt{N}} +8\bar \lambda\,\frac{\bar f^2 - \bar g^2}{\sqrt{N}}- \frac{1}{\sigma N} \left(\bar E-\bar Q_{M,P} \bar\phi\right) \right]\bar g= 0, \label{fieldeqs-1_dmls}\\ &&\bar g^\prime + \left[ \frac{N^\prime}{4 N} + \frac{\sigma^\prime}{2\sigma} + \frac{1}{x}\left(1 - \frac{1}{\sqrt{N}}\right) \right]\bar g + \left[ \frac{1}{\sqrt{N}} + 8\bar \lambda\,\frac{\bar f^2 - \bar g^2}{\sqrt{N}}+\frac{1}{\sigma N} \left(\bar E-\bar Q_{M,P} \bar\phi\right) \right]\bar f= 0, \label{fieldeqs-2_dmls}\\ &&\bar m^\prime=8 x^2\left[ \frac{\bar f^2+\bar g^2}{\sigma\sqrt{N}}\left(\bar E-\bar Q_{M,P} \bar\phi\right)+4\bar \lambda\left(\bar f^2-\bar g^2\right)^2 +\frac{1}{16}\frac{\bar\phi^{\prime 2}}{\sigma^2}+\frac{\alpha^2}{16}\frac{\bar\phi^{2}}{N\sigma^2} \right], \label{fieldeqs-3_dmls}\\ &&\frac{\sigma^\prime}{\sigma} =\frac{8 x}{\sqrt{N}}\left[ \frac{\bar f^2+\bar g^2}{\sigma N}\left(\bar E-\bar Q_{M,P} \bar\phi\right)+ \bar g \bar f^\prime-\bar f \bar g^\prime+\frac{\alpha^2}{8}\frac{\bar\phi^{2}}{N^{3/2}\sigma^2} \right], \label{fieldeqs-4_dmls}\\ &&\bar \phi^{\prime\prime}+\left(\frac{2}{x}-\frac{\sigma^\prime}{\sigma}\right)\bar\phi^\prime=-8\bar Q_{M,P} 
\frac{\sigma}{\sqrt{N}}\left(\bar f^2+\bar g^2\right) +\alpha^2\frac{\bar\phi}{N}, \label{fieldeqs-5_dmls}\end{aligned}$$ where the prime denotes differentiation with respect to the radial coordinate. Here, Eqs.  and are the ($^0_0$) and $[(^0_0)~-~(^1_1)]$ components of the Einstein equations, respectively. The above equations are written in terms of the following dimensionless variables and parameters: $$\begin{aligned} \begin{split} \label{dmls_var} x = & r/\lambda_c, \quad \bar E = \frac{E}{\mu c^2}, \quad \bar f, \bar g = \sqrt{4\pi}\lambda_c^{3/2}\frac{\mu}{M_\text{Pl}} f, g,\quad \bar m = \frac{\mu}{M_\text{Pl}^2} m, \quad \bar \phi = \frac{\sqrt{4\pi G}}{c^2}\phi, \\ \bar \lambda = & \frac{1}{4\pi \lambda_c^3\mu c^2} \left( M_\text{Pl}/\mu\right)^2\lambda, \quad \bar Q_{M,P} = \frac{1}{\sqrt{4\pi G}}\frac{Q_{M,P}}{\mu}, \quad \alpha = \frac{m_P}{\mu}, \end{split}\end{aligned}$$ where $M_\text{Pl}$ is the Planck mass and $\lambda_c=\hbar/\mu c$ is the constant having the dimensions of length (since we consider a classical theory, $\lambda_c$ need not be associated with the Compton length); the metric function $N=1-2\bar m/x$. Notice here that, using the Dirac equations and , one can eliminate the derivatives of $\bar f$ and $\bar g$ from the right-hand side of Eq. . We conclude this section with the expression for the effective radial pressure $p_r=-T_1^1$ \[the $\left(_1^1\right)$ component of the energy-momentum tensor \]. Using the Dirac equations and , it can be represented in the following dimensionless form: $$\bar p_r\equiv\frac{p_r}{\gamma} = 8\left[ \frac{1}{\sigma\sqrt{N}} \left(\bar E-\bar Q_{M,P} \bar \phi \right) \left(\bar f^2 + \bar g^2\right)+\left(\bar f^2 - \bar g^2\right)-2\frac{\bar f\bar g}{x}+4\bar \lambda\left(\bar f^2 - \bar g^2\right)^2 \right]-\frac{1}{2}\frac{\bar\phi^{\prime 2}}{\sigma^2}+\frac{\alpha^2}{2}\frac{\bar\phi^{2}}{N\sigma^2},$$ where $\gamma=c^2 M_{\text{Pl}}^2/\left(4\pi \mu\lambda_c^3\right)$. 
This expression permits us to see the physical meaning of the coupling constant $\bar\lambda$: the case of $\bar\lambda > 0$ corresponds to the attraction and the case of $\bar\lambda < 0$ to the repulsion. Numerical results {#num_sol} ================= In performing numerical integration of Eqs. -, we start from the center of the configuration where some values of the spinor field, $\bar g_c$, of the scalar potential, $\bar \phi_c$, and of the metric function, $\sigma_c$, are given. The boundary conditions in the vicinity of the center are taken in the form $$\bar g\approx \bar g_c + \frac{1}{2}\bar g_2 x^2, \quad \bar f\approx \bar f_1 x, \quad \sigma\approx \sigma_c+\frac{1}{2}\sigma_2 x^2, \quad \bar m\approx \frac{1}{6}\bar m_3 x^3, \quad \bar\phi \approx \bar \phi_c + \frac{1}{2}\bar \phi_2 x^2. \label{bound_cond}$$ Expressions for the expansion coefficients $\bar f_1, \bar m_3, \sigma_2, \bar g_2, \bar \phi_2$ can be found from Eqs. -. In turn, the expansion coefficients $\sigma_c$, $\bar g_c$, $\bar \phi_c$, and also the parameter $\bar E$, are arbitrary. Their values are chosen so as to obtain regular and asymptotically flat solutions when the functions $N(x\to \infty),\sigma(x\to \infty) \to 1$, and $\bar\phi(x\to \infty) \to 0$. In this case, the asymptotic value of the function $\bar m(x\to \infty) \equiv \bar m_\infty$ will correspond to the Arnowitt-Deser-Misner (ADM) mass of the configurations under consideration. Notice here that in the present paper we consider only configurations described by zero-node solutions. Since the spinor fields decrease exponentially with distance as $\bar g, \bar f \sim e^{-\sqrt{1-\bar E^2}\,x}$ \[see Eq.  below\], numerical calculations are performed up to some boundary point $x_b$ where the functions $\bar g, \bar f $, and their derivatives go to zero (let us refer to such solutions as interior ones). 
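The eigenvalue search just described is a shooting problem: one integrates outward from the central expansion and tunes the free parameters until the solution decays rather than diverges. A minimal illustration of the same logic on a toy equation, $u'' = (x^2 - 2E)\,u$ (whose zero-node decaying solution requires $E = 1/2$), rather than the full Einstein-Dirac-Maxwell system:

```python
def shoot(E, x_max=6.0, n=6000):
    """Integrate u'' = (x**2 - 2*E)*u outward with a zero-node start u(0)=1, u'(0)=0.

    Returns u(x_max): it diverges to +inf if E is below the eigenvalue and
    crosses zero (diverging to -inf) if E is above it.
    """
    h = x_max / n
    u, du, x = 1.0, 0.0, 0.0
    for _ in range(n):
        # midpoint (RK2) step for the first-order system (u, u')
        k1u, k1d = du, (x * x - 2.0 * E) * u
        um = u + 0.5 * h * k1u
        dm = du + 0.5 * h * k1d
        xm = x + 0.5 * h
        k2u, k2d = dm, (xm * xm - 2.0 * E) * um
        u, du, x = u + h * k2u, du + h * k2d, x + h
    return u

# Bisect on the sign of u(x_max) to pin down the eigenvalue.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid) > 0.0:
        lo = mid   # still diverging upward: E too small
    else:
        hi = mid   # solution overshot through zero: E too large
E0 = 0.5 * (lo + hi)   # converges to 1/2 for this toy problem
```

The same idea, with more shooting parameters, underlies the simultaneous tuning of $\bar g_c$, $\bar\phi_c$, $\sigma_c$, and $\bar E$ in the actual computations.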
The location of the boundary point is determined both by the central values $\bar g_c, \bar \phi_c, \sigma_c$ and by the magnitudes of the parameters $\bar Q_{M,P}, \bar \lambda$, and $\alpha$. The interior solutions are matched with the exterior ones for $\bar \phi$, for the mass function $\bar m$, and (in the case of the Proca field) for the metric function $\sigma$ on the boundary $x=x_b$ (see below). The case of the Maxwell field {#Maxw_field} ----------------------------- ### Linear spinor fields {#num_sol_Max_linear} Consider first the case of linear spinor fields, i.e., the problem with $\bar \lambda =0$ [@Finster:1998ux]. In this case, depending on the values of $\bar g_c$ and $\bar Q_M$, the value of $x_b$ is of the order of several hundred for $\bar g_c \approx 0$, and it decreases down to $x_b\sim 10$ with increasing $\bar g_c$. As $\bar g_c$ increases, the mass of the configurations also increases, reaching a maximum whose magnitude depends on the value of $\bar Q_M$. When $\bar Q_M=0$, the maximum mass is $M^{\text{max}}_{\bar Q_M=0}\approx 0.709 M_\text{Pl}^2/\mu$ [@Herdeiro:2017fhv]. The inclusion of the electric field increases the maximum mass, as is illustrated in Fig. \[fig\_mass\_gc\_Q\_Maxw\]. ![Maxwell field: the dimensionless total mass $\bar m_{\infty}$ as a function of $\bar g_c$ for the systems with $\bar \lambda=0$ and $\bar Q_M=0, \,0.5,\, 0.7,\, 0.9, 0.99$. The bold dots show the positions of maxima of the mass. []{data-label="fig_mass_gc_Q_Maxw"}](mass_gc_Q_Maxw.eps){width=".5\linewidth"} The presence of the long-range electric field means that, in addition to the interior solutions for the spinor and electric fields, there is also an exterior solution for the electric field. To obtain it, we employ Eqs.  and , setting $\bar f, \bar g=0$ in them. Also, since for $x=x_b$ the metric function $\sigma=\text{const.}$ \[see Eq. \], we normalize its value to 1 at this point; i.e., we set $\sigma(x_b)=1$.
\[This can always be done since Eqs. - are invariant under the replacements $\bar \phi \to a \bar\phi, \sigma \to a \sigma, \bar E \to a \bar E$, where $a$ is an arbitrary constant.\] As a result, we have the following equations valid for $x>x_b$: $$\bar \phi^{\prime\prime}+\frac{2}{x}\bar \phi=0, \quad \bar m^\prime=\frac{x^2}{2}\bar \phi^{\prime 2}.$$ As boundary conditions for these equations, we take the corresponding values of the functions $\bar m(x_b)$, $\bar \phi(x_b)$, and $\bar \phi^\prime(x_b)$ on the boundary $x=x_b$. Then the exterior solutions are $$\bar \phi=C_2+\frac{C_1}{x}, \quad \bar m=\bar m_\infty-\frac{1}{2}\frac{C_1^2}{x}, \label{asymp_sol}$$ where the integration constants are $C_1=-\bar \phi^\prime(x_b) x_b^2, C_2=\bar \phi(x_b)+\bar \phi^\prime(x_b) x_b$. In view of the gauge invariance of the Maxwellian electromagnetic field, one can always add to the scalar potential $\bar \phi$ an arbitrary constant $\bar\phi_\infty$ which ensures that $\bar \phi(\infty)=0$ (this in turn assumes that the constant $C_2$ is zero). For Eqs. -, this gauge transformation looks like $\bar E-\bar Q_M \bar\phi\equiv \left(\bar E-\bar Q_M \bar\phi_\infty \right)-\bar Q_M\left(\bar\phi-\bar\phi_\infty \right)$. Then the only free parameter is $\bar E$, and the solution of the problem reduces to determining such eigenvalues of $\bar E$ for which regular monotonically damped solutions do exist. In this case the contribution of the external electric field to the total mass $\bar m_\infty$ is given by the term $C_1^2/(2 x_b)$, and the external metric $ N=1-2 \bar m_\infty/x+C_1^2/x^2 $ corresponds to the Reissner-Nordström metric. The total ADM mass of the system \[given by the constant $\bar m_\infty$ from Eq. \] is shown in Fig. \[fig\_mass\_gc\_Q\_Maxw\] as a function of $\bar g_c$. 
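The matching at $x=x_b$ described above amounts to fixing the two integration constants of the exterior Coulomb-type solution from interior boundary values. A minimal numerical sketch, with made-up boundary values standing in for an actual interior solution:

```python
# Matching the interior solution to the exterior solution
# phi_ext(x) = C2 + C1/x of Eq. (asymp_sol):
#   C1 = -phi'(x_b) * x_b^2,   C2 = phi(x_b) + phi'(x_b) * x_b.
# The boundary values below are illustrative placeholders, not an actual
# solution of the field equations.

x_b = 10.0
phi_b, dphi_b = 0.05, -0.004      # mock interior phi(x_b), phi'(x_b)

C1 = -dphi_b * x_b**2             # exterior "charge" coefficient
C2 = phi_b + dphi_b * x_b         # additive constant, removable by a gauge shift

phi_ext  = lambda x: C2 + C1/x
dphi_ext = lambda x: -C1/x**2

# Continuity of phi and phi' at the boundary:
assert abs(phi_ext(x_b) - phi_b) < 1e-12
assert abs(dphi_ext(x_b) - dphi_b) < 1e-12

# With the gauge choice phi(inf) = 0, C2 is absorbed into the eigenvalue:
# E - Q*phi = (E - Q*C2) - Q*(phi - C2).
print(C1, C2)
```

The exterior metric then picks up the Reissner-Nordström term $C_1^2/x^2$, exactly as stated in the text.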
In plotting these dependencies, we have kept track of the sign of the binding energy (BE), defined as the difference between the energy of $N_f$ free particles, ${\cal E}_f=N_f \mu c^2$, and the total energy of the system, ${\cal E}_t=M c^2$; i.e., $\text{BE}={\cal E}_f-{\cal E}_t$. Here, the total particle number $N_f$ (the Noether charge) is calculated using the timelike component of the 4-current $j^\alpha=\sqrt{-g}\bar \psi \gamma^\alpha \psi$ as $ N_f=\int j^t d^3 x, $ where in our case $j^t = N^{-1/2}r^2 \sin{\theta} \left(\psi^\dag \psi\right)$. In the dimensionless variables , we then have $$N_f=8\left(\frac{M_\text{Pl}}{\mu}\right)^2\int_0^\infty \frac{\bar f^2+\bar g^2}{\sqrt{N}}x^2 dx. \label{part_num}$$ A necessary condition for energy stability is the positivity of the binding energy. Therefore, since configurations with a negative BE are certainly unstable, the graphs in Fig. \[fig\_mass\_gc\_Q\_Maxw\] are plotted only up to the value of $\bar g_c$ at which the BE vanishes (the rightmost points of the curves). It is seen from Fig. \[fig\_mass\_gc\_Q\_Maxw\] that the inclusion of the electric field does not change the qualitative behavior of the mass versus central density curve. With increasing charge, the location of the maximum moves towards smaller values of $\bar g_c$, while the mass increases. The positions of the maxima of the mass $M^{\text{max}}$ are joined by a separate curve. Its behavior indicates that, as the charge approaches the critical value $\bar Q_M \to \bar Q_{\text{crit}}=1$, the quantity $\bar g_c \to 0$ and the mass diverges. The asymptotic behavior of this curve for large $\bar Q_M$ can be approximated by the following expression: $M^{\text{max}}\approx 0.58 \left(\bar Q_{\text{crit}}-\bar Q_M\right)^{-1/2} M_\text{Pl}^2/\mu$.
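Since ${\cal E}_f$ and ${\cal E}_t$ share the factor $(M_\text{Pl}^2/\mu)c^2$, the sign of the BE in dimensionless form reduces to comparing $8\int_0^\infty (\bar f^2+\bar g^2)x^2/\sqrt{N}\,dx$ with $\bar m_\infty$. A sketch of this check with mock profiles (an actual interior solution would supply $\bar g$, $N$, and $\bar m_\infty$):

```python
# Sign of the binding energy in dimensionless variables:
#   BE > 0  <=>  8 * Int[(f^2 + g^2) x^2 / sqrt(N)] dx  >  m_inf,
# because E_f = N_f mu c^2 and E_t = M c^2 share the factor (M_Pl^2/mu) c^2.
# The profiles below are illustrative mock-ups, not field-equation solutions.
import math

def trapezoid(fvals, h):
    """Composite trapezoidal rule on a uniform grid."""
    return h*(sum(fvals) - 0.5*(fvals[0] + fvals[-1]))

h, x_max = 1e-3, 20.0
xs = [i*h for i in range(int(x_max/h) + 1)]

g  = lambda x: 0.05*math.exp(-0.5*x)        # mock spinor profile (f neglected)
N  = lambda x: 1.0 - 0.02*x*math.exp(-x)    # mock metric function, N -> 1
m_inf = 0.03                                # mock dimensionless ADM mass

I = trapezoid([g(x)**2 * x**2 / math.sqrt(N(x)) for x in xs], h)
rest_energy = 8.0*I              # dimensionless counterpart of N_f mu c^2
binding_energy = rest_energy - m_inf
print(binding_energy > 0)        # → True for these mock values
```

In the paper this comparison determines the cutoff points of the curves in Fig. \[fig\_mass\_gc\_Q\_Maxw\].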
When $\bar Q_M > \bar Q_{\text{crit}}$, the behavior of the solutions alters drastically: there are no longer monotonically damped solutions, only solutions describing asymptotically damped oscillations. Such solutions correspond to systems whose current mass $\bar m(x)$ diverges as $x\to \infty$, so that the spacetime is not asymptotically flat. The reason for this change in the behavior of the solutions can be understood intuitively by considering the nonrelativistic limit. Namely, according to Newton’s and Coulomb’s laws, the force between two charged, massive point particles is $$F=-\frac{G \mu^2}{r^2}+\frac{1}{4\pi}\frac{Q_M^2}{r^2}=\frac{G\mu^2}{\lambda_c^2}\frac{\bar Q_M^2-1}{x^2}.$$ Correspondingly, when $\bar Q_M<1$, the gravitational attraction exceeds the electrostatic repulsion, making gravitationally bound systems possible. In turn, when $\bar Q_M>1$, the repulsive forces dominate and the system is destroyed. The behavior of the curves shown in Fig. \[fig\_mass\_gc\_Q\_Maxw\] is similar to that of the corresponding mass–central density dependencies for boson stars supported by complex scalar fields (see, e.g., Refs. [@Colpi:1986ye; @Gleiser:1988ih; @Herdeiro:2017fhv]). In the case of boson stars, the stability analysis against linear perturbations indicates that the first peak in the mass corresponds to the point separating stable and unstable configurations [@Gleiser:1988ih; @Jetzer:1989us]. One might expect a similar situation for the Dirac stars under consideration, but this question requires a separate study. ### Limiting configurations for $|\bar \lambda| \gg 1$ {#lim_conf_M} The main purpose of the present paper is to study the effects of the inclusion of Maxwell and Proca vector fields in systems consisting of spinor fields with a nonlinearity of the type .
In the absence of vector fields, it was shown in our recent paper [@Dzhunushaliev:2018jhj] that in the limiting case of large negative $\bar \lambda$ with $|\bar \lambda| \gg 1$ it is possible to obtain solutions describing configurations with masses comparable to the Chandrasekhar mass. In this paper, we extend that problem by including the Maxwell and Proca fields minimally coupled to the spinor fields. It was demonstrated in Ref. [@Dzhunushaliev:2018jhj] that, with increasing $|\bar \lambda|$, the maximum total mass of the configurations increases as $ M^{\text{max}}\approx \beta \sqrt{|\bar \lambda|}M_\text{Pl}^2/\mu$. The numerical calculations indicate that the inclusion of the charge $\bar Q_M$ does not lead to qualitative changes: in the limit $|\bar \lambda| \gg 1$ the above dependence remains the same, with the numerical value of the coefficient $\beta$ determined by the magnitude of the charge. ![Maxwell field: distributions of the matter fields for configurations with the same $\bar g_{c}$, for $\bar \lambda=0$ and $\bar \lambda=-100$ (exact and approximate solutions). []{data-label="fig_field_distr"}](field_distr.eps){width="1\linewidth"} ![Maxwell field: the dimensionless total mass $\bar m_{*\infty}$ as a function of $\bar g_{c*}$ for the limiting configurations described by Eqs. -. The numbers near the curves correspond to the values of the charge $\bar Q_M$. The graphs are plotted only for the values of $\bar g_{c*}$ for which the binding energy is positive. The bold dots show the positions of maxima of the mass.
[]{data-label="fig_mass_gc_approx_Maxw"}](mass_gc_approx_Maxw.eps){width="1\linewidth"} In order to obtain the dependence $\beta(\bar Q_M)$, let us consider an approximate solution to the set of equations - in the limit $|\bar \lambda| \gg 1$. To do this, as in the case of the uncharged systems of Ref. [@Dzhunushaliev:2018jhj], one can introduce the following alternative nondimensionalization, motivated by the scale invariance of these equations: $\bar g_*, \bar f_*=|\bar \lambda|^{1/2}\bar g, \bar f, \bar m_*=|\bar \lambda|^{-1/2}\bar m$, and $x_*=|\bar \lambda|^{-1/2}x$. Using these new variables and taking into account the results of the numerical calculations, according to which the leading term in Eq.  is the third term $(\ldots)\bar g$ and $\bar f$ is much smaller than $\bar g$, this equation yields $$\label{g_approx} \bar g_* = \sqrt{-\frac{1}{8} \left[1 - \frac{1}{\sigma\sqrt{N}}\left(\bar E-\bar Q_M\bar\phi\right)\right]}.$$ Substituting this expression into Eqs. -, one obtains (again in the approximation $\bar f \ll \bar g$) $$\begin{aligned} \frac{d \bar m_*}{d x_*} &=& 8 x_*^2 \Big\{ \left[\frac{1}{\sigma\sqrt{N}} \left(\bar E-\bar Q_M\bar\phi\right)- 4\bar g_*^2 \right]\bar g_*^2+\frac{1}{16}\frac{\bar\phi^{\prime^2}}{\sigma^2} \Big\}, \label{fieldeqs-3_dmls_approx}\\ \frac{d \sigma}{d x_*}& =& \frac{8 x_*}{N^{3/2}}\left(\bar E-\bar Q_M\bar\phi\right)\bar g_*^2, \label{fieldeqs-4_dmls_approx}\\ \frac{d^2\bar \phi}{d x_*^2}+\left(\frac{2}{x_*}-\frac{1}{\sigma}\frac{d\sigma}{d x_*}\right)\frac{d\bar\phi}{d x_*}&=&-8\bar Q_M \frac{\sigma}{\sqrt{N}}\bar g_*^2, \label{fieldeqs-5_dmls_approx}\end{aligned}$$ where now $N=1-2\bar m_*/x_*$. As $|\bar \lambda|$ increases, the accuracy of Eqs. - improves. This is illustrated in Fig. \[fig\_field\_distr\], where the results of calculations for configurations with the same $\bar g_{c}$ and for $\bar \lambda=0$ and $-100$ are shown.
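Equation  makes explicit where the limiting spinor profile has support: $\bar g_*$ is real only where $(\bar E-\bar Q_M\bar\phi) > \sigma\sqrt{N}$, i.e., where the bracket is negative. A tiny numerical illustration with made-up (not solved-for) values of the metric functions:

```python
# Support condition of the limiting profile, Eq. (g_approx):
#   g*_bar = sqrt( -(1/8) * [1 - (E_bar - Q_bar*phi_bar)/(sigma*sqrt(N))] )
# is real only where (E_bar - Q_bar*phi_bar) > sigma*sqrt(N).
# The values below are illustrative placeholders, not a field-equation solution.
import math

sigma, N = 0.9, 0.95                 # mock metric functions at some radius
E_bar, Q_bar, phi_bar = 0.98, 0.5, 0.1

bracket = 1.0 - (E_bar - Q_bar*phi_bar)/(sigma*math.sqrt(N))
assert bracket < 0                   # spinor field supported at this radius
g_star = math.sqrt(-bracket/8.0)
print(round(g_star, 4))              # → 0.0867 for these mock values
```

Outside the star, where $\sigma\sqrt{N}$ exceeds $(\bar E-\bar Q_M\bar\phi)$, the bracket changes sign and the limiting profile vanishes.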
From a comparison of the exact and approximate solutions, one can see their good agreement for the case of $\bar \lambda=-100$, except for the behavior at large radii. As $|\bar \lambda|\to \infty$, this region becomes less important and, accordingly, the mass of the configurations is well described by the asymptotic formula (see also the discussion of this question in Ref. [@Dzhunushaliev:2018jhj]). Since $\bar \lambda$ does not appear explicitly in Eqs. -, one can use these limiting equations to determine the rescaled total mass $\bar m_{*\infty}\equiv\bar m_*(x\to\infty) = M_{*}/\left(|\bar \lambda|^{1/2}M_\text{Pl}^2/\mu\right)$ as a function of the central density of the spinor field $\bar g_{c*}$. The corresponding results of a numerical solution to Eqs. - are given in Fig. \[fig\_mass\_gc\_approx\_Maxw\]. The effect of the inclusion of the charge is similar to the case with $\bar \lambda=0$ from Sec. \[num\_sol\_Max\_linear\]: the maximum values of the mass grow with increasing $\bar Q_M$, and the locations of the maxima move towards smaller values of $\bar g_{c*}$. It follows from the results of solving the approximate set of equations - (see also Fig. \[fig\_mass\_gc\_approx\_Maxw\]) that the dependence of the maximum mass on the charge is $$\label{M_max_approx_with_charge} M_{*}^{\text{max}} \approx \beta(\bar Q_M) \sqrt{|\bar \lambda|}\frac{ M_\text{Pl}^2}{\mu},$$ where the numerical values of the coefficient $ \beta(\bar Q_M)$ are given in Table \[tab1\]. This coefficient is well approximated by the formula $\beta \approx 0.38 /\sqrt{\bar Q_{\text{crit}}-\bar Q_M}$, which determines the divergence of the maximum mass in the limit $\bar Q_M\to\bar Q_{\text{crit}}=1$. This divergence is also illustrated in Fig. \[fig\_mass\_gc\_approx\_Maxw\] by the curve joining the maxima of the mass.
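The quality of the fit $\beta \approx 0.38/\sqrt{\bar Q_{\text{crit}}-\bar Q_M}$ can be checked directly against the computed values listed in Table \[tab1\]:

```python
# Direct check of the interpolation beta(Q_M) ≈ 0.38/sqrt(Q_crit - Q_M),
# Q_crit = 1, against the numerically computed values of Table 1.
import math

table1 = {0.0: 0.41, 0.5: 0.50, 0.7: 0.64, 0.9: 1.16,
          0.925: 1.35, 0.95: 1.66, 0.975: 2.38, 0.99: 3.79}

fit = lambda Q: 0.38/math.sqrt(1.0 - Q)

rel_err = {Q: abs(fit(Q) - beta)/beta for Q, beta in table1.items()}
print(round(max(rel_err.values()), 3))  # → 0.084, worst at intermediate charges
print(round(rel_err[0.99], 4))          # the fit improves as Q_M -> Q_crit
```

The fit reproduces the table to better than 10% everywhere and to well under 1% near the critical charge, consistent with its role of capturing the divergence at $\bar Q_M\to 1$.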
  $\bar Q_M$    0      0.5    0.7    0.9    0.925   0.95   0.975   0.99
  ------------ ------ ------ ------ ------ ------- ------ ------- ------
  $\beta$       0.41   0.50   0.64   1.16   1.35    1.66   2.38    3.79

  : The calculated values of the coefficient $\beta$ from as a function of the charge $\bar Q_M$. []{data-label="tab1"}

Notice that the numerical computations indicate that the above approximate solutions describe well only systems located near the maxima of the mass; the deviations from the exact solutions become stronger the further one moves from the maximum. ### Effective radius {#eff_rad_Maxw} In modeling ordinary stars (for example, neutron stars), it is usually assumed that they have a surface on which the pressure of matter vanishes. For field configurations (for instance, boson stars) supported by exponentially damped fields, no such surface exists. Therefore, for such configurations one uses an effective radius, which can be introduced in several different ways [@Schunck:2003kk]. Since the configurations considered in this section contain a long-range Maxwell field, the definition of the effective radius differs from that used for configurations consisting of exponentially damped fields. Here we follow Ref. [@Jetzer:1989av] (where charged boson stars with a Maxwell field are considered) and introduce the following expression for the effective radius: $$R=\frac{1}{N_f}\int r j^t d^3 x=\frac{\lambda_c}{N_f}\int_0^\infty \frac{\bar f^2+\bar g^2}{\sqrt{N}}x^3 dx, \label{eff_radius}$$ where the particle number $N_f$ is taken from (without the numerical coefficient before the integral). As in the case of the charged boson stars of Ref. [@Jetzer:1989av], this expression yields a finite result, in contrast to the case when the effective radius is defined in terms of the mass integral, as is done for uncharged configurations (for details see Refs. [@Schunck:2003kk; @Jetzer:1989av]).
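Both effective-radius definitions used in this paper (Eq.  here, and the mass-integral ratio used later for the Proca case) are ratios of radial moments of a density. As a sanity check of such a quadrature, here is a mock density with a known analytic answer; the profiles of an actual solution would replace it:

```python
# Effective radius as a ratio of radial moments, R = Int[x^3 w] / Int[x^2 w]
# (the structure of Eq. (eff_radius), with w standing for the density
# (f^2 + g^2)/sqrt(N)); here w is a mock Gaussian whose moment ratio is
# known analytically to be 2/sqrt(pi).
import math

def trapezoid(fvals, h):
    """Composite trapezoidal rule on a uniform grid."""
    return h*(sum(fvals) - 0.5*(fvals[0] + fvals[-1]))

w = lambda x: math.exp(-x*x)        # mock radial density
h, x_max = 1e-3, 12.0
xs = [i*h for i in range(int(x_max/h) + 1)]

num = trapezoid([w(x)*x**3 for x in xs], h)
den = trapezoid([w(x)*x**2 for x in xs], h)
R = num/den
print(round(R, 4))  # → 1.1284, i.e. 2/sqrt(pi), the analytic moment ratio
```

For the charged configurations the $j^t$-weighted moments converge because the spinor density, not the long-range field, carries the weight.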
  $\bar Q_M$      0      0.5    0.7    0.9    0.925   0.95   0.975   0.99
  -------------- ------ ------ ------ ------ ------- ------ ------- -------
  $\gamma_{0}$    2.88   3.39   3.89   5.68   …       7.90   …       17.10
  $\gamma_{l}$    1.08   1.29   1.37   2.37   2.69    3.25   4.51    7.01

  : The calculated values of the coefficients $\gamma_{0}$ and $\gamma_{l}$ from and as a function of the charge $\bar Q_M$. []{data-label="tab2"}

Using the expression , we have obtained the following dependencies of the effective radius on $\bar Q_M$ for the configurations with maximum masses considered in Secs. \[num\_sol\_Max\_linear\] and \[lim\_conf\_M\]: $$\begin{aligned} && R^{\text{max}}=\lambda_c \gamma_{0}(\bar Q_M) \quad \quad\quad\text{for} \quad \bar\lambda=0, \label{R_eff_lambda_0}\\ && R_*^{\text{max}}\approx \lambda_c \gamma_{l}(\bar Q_M)\sqrt{|\bar\lambda|} \quad \text{for} \quad |\bar\lambda|\gg 1. \label{R_eff_lambda_large}\end{aligned}$$ The numerical values of the coefficients $\gamma_{0}$ and $\gamma_{l}$ appearing here are given in Table \[tab2\]. Asymptotically, as $\bar Q_M\to \bar Q_{\text{crit}}=1$, these coefficients are approximated by the expressions $\gamma_{0} \approx 1.7 /\sqrt{\bar Q_{\text{crit}}-\bar Q_M}$ and $\gamma_{l} \approx 0.73 /\sqrt{\bar Q_{\text{crit}}-\bar Q_M}$, which determine the divergences of the radii as the charge tends to the critical value. The case of the Proca field {#Proca_field} --------------------------- For the Proca field, Eqs. - are solved with $\alpha\neq 0$ and with boundary conditions assigned at the center in the form of .
In doing so, solutions obtained near the center are smoothly matched with asymptotic solutions obtained as $x\to \infty$, which are $$\bar f\approx \bar f_\infty e^{-\sqrt{1-\bar E^2}\,x}+\ldots, \quad \bar g\approx \bar g_\infty e^{-\sqrt{1-\bar E^2}\,x}+\ldots, \quad \bar \phi\approx \bar \phi_\infty \frac{e^{-\alpha x}}{x}+\ldots, \quad \sigma\approx 1+\ldots, \quad \bar m\approx \bar m_\infty+\ldots, \label{asymp_Proca}$$ where $\bar f_\infty, \bar g_\infty, \bar \phi_\infty, \bar m_\infty$ are integration constants, and, as in the case of the Maxwell field, the constant $\bar m_\infty$ plays the role of the total ADM mass of the configurations under consideration. ### Linear spinor fields {#lin_spinor_Proca} ![Proca field: typical distributions of the matter fields for $\alpha=0.1$ and $\alpha=1$ and for the fixed central value $\bar g_c=0.02$. []{data-label="fig_field_distr_Proca"}](field_distr_Proca){width=".98\linewidth"} ![Proca field: the dimensionless total mass of the configurations $\bar m_\infty$ as a function of $\bar g_c$ for the systems with $\bar \lambda=0$ and $\bar Q_P=0, \,0.5,\, 0.7,\, 0.9, 0.99, 1.15$. The bold dots show the positions of maxima of the mass. []{data-label="fig_M_Q_Proca"}](mass_gc_Q_Proca.eps){width=".98\linewidth"} As in the case of the massless vector field of Sec. \[Maxw\_field\], here we seek zero-node regular asymptotically flat solutions describing configurations with finite energies and total masses. In doing so, in contrast to the case of the Maxwell field, besides the eigenvalue $\bar E$, it is also necessary to find an eigenvalue of $\bar \phi_c$ ensuring exponential damping of the Proca field at infinity. Since the Proca field is massive, it decays exponentially fast, with a rate determined by the magnitude of the parameter $\alpha$ \[see Eq. \].
When $\alpha\to 0$, we return to the exterior Coulomb-type solution . In turn, when $\alpha$ is sufficiently large, the field $\bar \phi$ is concentrated inside the radius $x_b$. The typical distributions of the matter fields for the two values $\alpha=0.1$ and $\alpha=1$ and for the fixed central value $\bar g_c=0.02$ are given in Fig. \[fig\_field\_distr\_Proca\]. One can see that, with increasing $\alpha$, the field $\bar \phi$ is increasingly concentrated inside the radius where the spinor fields are nonvanishing, and the central value $\bar \phi_c$ correspondingly decreases. By varying $\bar Q_P$, we have obtained the dependencies of the mass of the configurations on the value of $\bar g_c$ shown in Fig. \[fig\_M\_Q\_Proca\]. As in the case of the Maxwell field, in plotting these curves we have kept track of the sign of the binding energy using the expression . Correspondingly, the dependencies in Fig. \[fig\_M\_Q\_Proca\] are given only for positive values of the BE. It is seen from the results obtained that, with increasing $\bar Q_P$, the maximum mass of the systems under consideration increases, shifting towards smaller values of $\bar g_c$. In turn, from the behavior of the curve joining the maxima of the mass, one sees the growth of the mass with increasing $\bar Q_P$. However, in contrast to the case of the Maxwell field, there is apparently no [*finite*]{} value of the coupling constant $\bar Q_P$ for which the mass tends to infinity; instead, it increases monotonically as $\bar Q_P\to \infty$ (regarding this issue, see also the next paragraph). ### The case of $|\bar \lambda| \gg 1$ ![Proca field: the maximum masses of the configurations [*vs.*]{} $|\bar \lambda|$ for $\alpha=0.1$ and for different values of $\bar Q_P$. []{data-label="fig_mass_lambda_Proca"}](fit_Proca.eps){width="1\linewidth"} ![Proca field: the maximum masses of the configurations [*vs.*]{} the coupling constant $\bar Q_P$ for different values of $|\bar \lambda|$. The rightmost points of the curves correspond to the values of $\bar Q_P$ up to which we were able (technically) to obtain solutions. []{data-label="fig_mass_Q_Proca"}](mass_Q_Proca.eps){width="1\linewidth"} Consider now the case of negative $\bar \lambda$ with $|\bar \lambda| \gg 1$. Numerical computations indicate that, as for the problem of Sec. \[lim\_conf\_M\], in this case the spinor field $\bar f$ is also much smaller than $\bar g$. Then the leading term in Eq.  is again the third term $(\ldots)\bar g$, and this equation yields (in the approximation $\bar f \ll \bar g$) $$\label{g_approx_Proca} \bar g^2 \approx \frac{1}{8\bar \lambda} \left[1 - \frac{1}{\sigma\sqrt{N}}\left(\bar E-\bar Q_P\bar\phi\right)\right].$$ Unfortunately, owing to the presence of the massive vector field, in this case the field equations are not scale invariant, and it is therefore impossible to introduce the mass and the radius rescaled through $|\bar \lambda|$, as is done in Sec. \[lim\_conf\_M\]. Nevertheless, one can use the approximate expression , insertion of which into Eqs. - yields an approximate set of equations for the metric functions and the Proca field. Solving this set of equations numerically, one can show that, as in the case of the Maxwell field, the approximate solution  agrees well with the exact solution, except in the region at large radii (cf. Fig. \[fig\_field\_distr\]). This permits us to use the approximate set of equations thus derived to describe configurations with the Proca field. In this case there is only one eigenparameter ($\bar E$ or $\bar \phi_c$) whose value can be found by using the shooting method; this is technically much easier, since obtaining a solution to the exact equations would require adjusting both of the aforementioned parameters.
Numerically solving Eqs. - in the approximation , we have plotted in Fig. \[fig\_mass\_lambda\_Proca\] the dependencies of the maximum masses of the configurations on $|\bar\lambda|$ for the single value of the parameter $\alpha=0.1$ and for different $\bar Q_P$. In this figure, the bottom curve corresponds to the interpolation formula $ M_{*}^{\text{max}}\approx 0.41 \sqrt{|\bar \lambda|}M_{\text{Pl}}^2/\mu$ describing the dependence of the mass on $|\bar \lambda|$ for the systems without a vector field ($\bar Q_P=0$). For the systems with the vector field ($\bar Q_P\neq0$), the masses are no longer proportional to $\sqrt{|\bar \lambda|}$; this is demonstrated by the top interpolation curve, plotted for the case of $\bar Q_P=4.5$. Hence we see that, in contrast to the case of the Maxwell field, where for any value of the coupling constant $\bar Q_M$ the mass is proportional to $\sqrt{|\bar \lambda|}$ \[see Eq. \], in the case of the Proca field this is not so. Evidently, this is due to the scale noninvariance of the Proca field. It is seen from Fig. \[fig\_mass\_lambda\_Proca\] that for any fixed value of $\bar \lambda$ the maximum mass of the configurations grows with increasing $\bar Q_P$. This is illustrated in more detail in Fig. \[fig\_mass\_Q\_Proca\], which shows the dependencies of the mass on $\bar Q_P$ for different $\bar \lambda$. From an analysis of the behavior of these curves, one would expect that asymptotically (for $\bar Q_P\gg 1$) they will converge to a single curve, and the mass in turn will diverge as $\bar Q_P\to \infty$. However, this question requires more detailed study, including a consideration of the influence of the magnitude of the parameter $\alpha$ on these dependencies. ### Effective radius {#effective-radius} Since the spinor and Proca fields under consideration decrease exponentially fast with distance \[see Eq. \], to define the effective radius, we may employ one of the approaches applied for boson stars.
Namely, we choose the following definition, introduced via the mass integral (see, e.g., Ref. [@Schunck:2003kk]): $$R = \frac{\int_0^\infty T^0_0 r^3 dr }{\int_0^\infty T^0_0 r^2 dr} = \lambda_c \frac{\int_0^\infty \bar T^0_0 x^3 dx}{\int_0^\infty \bar T^0_0 x^2 dx } ,$$ where the energy density $T^0_0$ is taken from Eq.  or, in dimensionless form, from the right-hand side of Eq. . Using this expression, we have plotted in Fig. \[fig\_mass\_R\_Proca\] the dependencies of the effective radii of the configurations with maximum masses on the coupling constant $\bar Q_P$ for different $\bar \lambda$. One sees from this figure that, as in the case of the masses (see Fig. \[fig\_mass\_Q\_Proca\]), there is a monotonic growth of the radii with increasing $\bar Q_P$. Asymptotically (for $\bar Q_P\gg 1$), one might expect that all the curves will converge to a single curve, which will determine the divergence of the radii as $\bar Q_P\to \infty$. ![Proca field: the effective radii of the configurations with maximum masses [*vs.*]{} the coupling constant $\bar Q_P$ for different values of $|\bar \lambda|$. []{data-label="fig_mass_R_Proca"}](R_Q_Proca.eps){width=".5\linewidth"} Conclusions and discussion {#concl} ========================== We have studied compact strongly gravitating configurations consisting of nonlinear spinor fields minimally coupled to vector (Maxwell and Proca) fields. To ensure spherical symmetry of the system, we have used two spinor fields with opposite spins, which enabled us to obtain a diagonal energy-momentum tensor. Accordingly, we have found families of equilibrium configurations described by regular zero-node asymptotically flat solutions for the static vector fields and for the stationary spinor fields, oscillating with a frequency $E/\hbar$.
It was shown that for all values of $E$ and of the coupling constants $\lambda, Q_{M,P}$ which we have considered, these solutions describe configurations possessing a positive ADM mass. This enables one to apply such solutions in modeling compact gravitating objects (Dirac stars). The main purpose of the paper was to examine the influence of the presence of the Maxwell and Proca fields on the physical characteristics of the Dirac stars. As in the case of the Dirac stars supported only by nonlinear spinor fields [@Dzhunushaliev:2018jhj], our goal here was to explore the possibility of obtaining objects with masses of the order of the Chandrasekhar mass. For this purpose, we have studied in detail the cases with large negative values of the dimensionless coupling constant $\bar \lambda$ and with different values of the dimensionless coupling constants $\bar Q_{M,P}$. The results obtained can be summarized as follows: 1. For the case of the Maxwell field, we have considered all physically admissible values of the charge $0 \leq \bar Q_M < 1$. In this case the families of equilibrium configurations can be parametrized by two dimensionless quantities: the coupling constant $\bar \lambda =\lambda M_\text{Pl}^2 c/4\pi \hbar^3$ and the charge $\bar Q_M$. Due to the scale invariance of the system, in the limit $|\bar \lambda|\gg 1$, it is possible to get approximate equations which do not contain $\bar \lambda$ explicitly. Consistent with the dimensions of $[\lambda]=\text{erg cm}^3$, one can assume that its characteristic value is $\lambda \sim \tilde \lambda \,\mu c^2 \lambda_c^3$, where the dimensionless quantity $\tilde \lambda \sim 1$. Then the dependence of the maximum mass of the systems in question on $|\bar \lambda|$ and $\bar Q_M$ \[see Eq. 
\] can be represented as $$M_*^{\text{max}}\approx \beta(\bar Q_M) \sqrt{|\bar \lambda|}\frac{M_\text{Pl}^2}{\mu} \approx 0.46\, \beta(\bar Q_M)\sqrt{|\tilde \lambda|}M_{\odot}\left(\frac{\text{GeV}}{\mu}\right)^2,$$ where the numerical values of the coefficient $\beta(\bar Q_M)$ are taken from Table \[tab1\], and they are well approximated by the formula $\beta \approx 0.38 /\sqrt{\bar Q_{\text{crit}}-\bar Q_M}$ which determines the divergence of the maximum mass in the limit $\bar Q_M\to\bar Q_{\text{crit}}=1$. The above mass $M_*^{\text{max}}$ is comparable to the Chandrasekhar mass for the typical mass of a fermion $\mu\sim 1~\text{GeV}$. In this respect the behavior of the dependence of the maximum mass of the charged Dirac stars considered here on the coupling constant $\bar \lambda$ and on the charge $\bar Q_M$ is similar to that of charged boson stars of Ref. [@Jetzer:1989av]. In turn, the dependence of the effective radii of the limiting configurations with maximum masses on $|\bar \lambda|$ and $\bar Q_M$ \[see Eq. \] can be represented as $$R_*^{\text{max}}\approx \lambda_c \gamma_{l}(\bar Q_M)\sqrt{|\bar\lambda|}\approx 0.68\, \gamma_{l}(\bar Q_M)\sqrt{|\tilde \lambda|}\left(\frac{\text{GeV}}{\mu}\right)^2 \,\, \text{km},$$ where the numerical values of the coefficient $\gamma_{l}(\bar Q_M)$ are taken from Table \[tab2\], and in the limit $\bar Q_M\to\bar Q_{\text{crit}}=1$ they are approximated by the formula $\gamma_{l} \approx 0.73 /\sqrt{\bar Q_{\text{crit}}-\bar Q_M}$ which determines the divergence of the radii when the charge tends to the critical value. For the typical mass of a fermion $\mu\sim 1~\text{GeV}$, the above expression gives the radii of the order of kilometers. In combination with the masses of the order of the Chandrasekhar mass (see above), this corresponds to characteristics typical for neutron stars. 2. 
In the case of the Proca field, besides the coupling constants $\bar \lambda$ and $\bar Q_P$, the system involves one more free parameter, $\alpha$, equal to the ratio of the Proca mass to the mass of the spinor field. In the limit $\alpha\to 0$, we recover the results of item 1. When $\alpha\neq 0$, the vector field is no longer long-range; it decreases exponentially fast with distance according to the asymptotic law given by Eq. . Due to the absence of scale invariance, in this case it is no longer possible to eliminate $\bar \lambda$ from the approximate equations valid for $|\bar \lambda|\gg 1$, and therefore the only possibility is to obtain solutions for particular values of $\bar \lambda$. Numerical calculations indicate that in the case of the Proca field the maximum mass ceases to scale as $\sqrt{|\bar \lambda|}$, and, with increasing $|\bar \lambda|$, it grows at a rate determined in general by the values of $\bar Q_P$ and $\alpha$ (see Fig. \[fig\_mass\_lambda\_Proca\]). In turn, consistent with the numerical results obtained, the system appears not to involve a finite value of the coupling constant $\bar Q_P$ for which the total mass and radius of the configurations would diverge. With increasing $\bar Q_P$, a monotonic increase in the maximum masses and in the corresponding effective radii takes place (see Figs. \[fig\_mass\_Q\_Proca\] and \[fig\_mass\_R\_Proca\]), and one might naively expect that they will diverge only as $\bar Q_P\to \infty$. But this question requires further investigation. We can conclude from the above results that the physical characteristics of the configurations under investigation are largely determined by the values of the coupling constants $\lambda, Q_{M,P}$ and by the ratio of the masses $\alpha$.
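The conversion behind the prefactor 0.46 in item 1 follows from the paper's definitions: with $\bar\lambda = \tilde\lambda\,(M_\text{Pl}/\mu)^2/4\pi$, the combination $\sqrt{|\bar\lambda|}\,M_\text{Pl}^2/\mu$ becomes $\sqrt{\tilde\lambda/4\pi}\,M_\text{Pl}^3/\mu^2$. A quick numerical check of the coefficient, with standard values of the constants:

```python
# Numerical check of the prefactor in M_max ≈ 0.46 beta sqrt(lambda_tilde)
# M_sun (GeV/mu)^2: with lambda_bar = lambda_tilde (M_Pl/mu)^2 / (4 pi),
# sqrt(lambda_bar) M_Pl^2/mu = sqrt(lambda_tilde/(4 pi)) * M_Pl^3/mu^2.
import math

M_Pl  = 2.176e-8        # Planck mass, kg
mu    = 1.783e-27       # 1 GeV/c^2, kg
M_sun = 1.989e30        # solar mass, kg

prefactor = (M_Pl**3 / mu**2) / math.sqrt(4.0*math.pi) / M_sun
print(round(prefactor, 2))  # → 0.46, the coefficient quoted in the text
```

The $M_\text{Pl}^3/\mu^2$ scaling is the same combination that sets the Chandrasekhar mass, which is why $\mu\sim 1$ GeV yields masses of that order.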
Since in the present paper we have studied only a finite set of values of the above parameters for the system with the Proca field, it would be of interest to extend the range of these parameters by considering, in particular, large values of $\alpha$ and $Q_{P}$.

In conclusion, let us briefly address the question of stability of the systems under investigation. As seen from the above results, all the configurations considered here can be parametrized by the central value of the spinor field $g_c$. The total mass is then a function of this parameter, and for any values of the coupling constants $\lambda,Q_{M,P}$ and of the parameter $\alpha$, there exists a first peak in the mass (a local maximum). By analogy with models of neutron and boson stars, one may naively expect that a transition through this local maximum leads to instability against perturbations that compress the entire star as a whole. However, this question requires special consideration, for instance by analyzing the stability of the configurations studied here against linear perturbations, as is done for boson stars [@Gleiser:1988ih; @Jetzer:1989us], or by using catastrophe theory [@Kusmartsev:1990cr].

Acknowledgments {#acknowledgments .unnumbered}
===============

The authors gratefully acknowledge support provided by Grant No. BR05236494 in Fundamental Research in Natural Sciences by the Ministry of Education and Science of the Republic of Kazakhstan. We are grateful to the Research Group Linkage Programme of the Alexander von Humboldt Foundation for the support of this research.

[99]{}

F. E. Schunck and E. W. Mielke, Classical Quantum Gravity [**20**]{}, R301 (2003).

S. L. Liebling and C. Palenzuela, Living Rev. Relativity [**15**]{}, 6 (2012); [**20**]{}, 5 (2017).

I. Lawrie, [*A Unified Grand Tour of Theoretical Physics*]{} (Institute of Physics Publishing, Bristol, 2002).

L. C. Tu, J. Luo, and G. T. Gillies, Rep. Prog. Phys. [**68**]{}, 77 (2005).

M. Pospelov and A. Ritz, Phys. Lett. B [**671**]{}, 391 (2009).

R. Brito, V. Cardoso, C. A. R. Herdeiro, and E. Radu, Phys. Lett. B [**752**]{}, 291 (2016).

C. A. R. Herdeiro, A. M. Pombo, and E. Radu, Phys. Lett. B [**773**]{}, 654 (2017).

F. Finster, J. Smoller, and S. T. Yau, Phys. Rev. D [**59**]{}, 104020 (1999).

V. G. Krechet and I. V. Sinilshchikova, Russ. Phys. J. [**57**]{}, 870 (2014).

V. Adanhounme, A. Adomou, F. P. Codo, and M. N. Hounkonnou, J. Mod. Phys. [**3**]{}, 935 (2012).

V. Dzhunushaliev and V. Folomeev, Phys. Rev. D [**99**]{}, 084030 (2019).

K. A. Bronnikov, E. N. Chudaeva, and G. N. Shikin, Gen. Relativ. Gravit. [**36**]{}, 1537 (2004).

K. A. Bronnikov and J. P. S. Lemos, Phys. Rev. D [**79**]{}, 104019 (2009).

M. O. Ribas, F. P. Devecchi, and G. M. Kremer, Europhys. Lett. [**93**]{}, 19002 (2011).

M. O. Ribas, F. P. Devecchi, and G. M. Kremer, Mod. Phys. Lett. A [**31**]{}, 1650039 (2016).

B. Saha, Eur. Phys. J. Plus [**131**]{}, 242 (2016).

F. Finster, J. Smoller, and S. T. Yau, Phys. Lett. A [**259**]{}, 431 (1999).

C. Armendariz-Picon and P. B. Greene, Gen. Relativ. Gravit. [**35**]{}, 1637 (2003).

R. Finkelstein, R. LeLevier, and M. Ruderman, Phys. Rev. [**83**]{}, 326 (1951).

R. Finkelstein, C. Fronsdal, and P. Kaus, Phys. Rev. [**103**]{}, 1571 (1956).

M. Soler, Phys. Rev. D [**1**]{}, 2766 (1970).

E. Mielke, [*Geometrodynamics of Gauge Fields: On the Geometry of Yang-Mills and Gravitational Gauge Theories*]{} (Springer International Publishing, Switzerland, 2017).

X. z. Li, K. l. Wang, and J. z. Zhang, Nuovo Cimento A [**75**]{}, 87 (1983).

K. L. Wang and J. Z. Zhang, Nuovo Cimento A [**86**]{}, 32 (1985).

M. Colpi, S. L. Shapiro, and I. Wasserman, Phys. Rev. Lett. [**57**]{}, 2485 (1986).

M. Gleiser and R. Watkins, Nucl. Phys. [**B319**]{}, 733 (1989).

P. Jetzer, Phys. Lett. B [**231**]{}, 433 (1989).

P. Jetzer and J. J. van der Bij, Phys. Lett. B [**227**]{}, 341 (1989).

F. V. Kusmartsev, E. W. Mielke, and F. E. Schunck, Phys. Rev. D [**43**]{}, 3895 (1991).
--- abstract: 'These notes discuss, in a style intended for physicists, how to average data and fit it to some functional form. I try to make clear what is being calculated, what assumptions are being made, and to give a derivation of results rather than just quote them. The aim is to put a lot of useful pedagogical material together in a convenient place. This manuscript is a substantial enlargement of lecture notes I prepared for the Bad Honnef School on “Efficient Algorithms in Computational Physics”, September 10–14, 2012.' author: - Peter Young bibliography: - 'refs.bib' title: '[Everything you wanted to know about Data Analysis and Fitting but were afraid to ask]{}' --- Introduction ============ These notes describe how to average and fit numerical data that you have obtained, presumably by some simulation. Typically you will generate a set of values $x_i,\, y_i, \cdots,\, i = 1, \cdots N$, where $N$ is the number of measurements. The first thing you will want to do is to estimate various average values, and determine *error bars* on those estimates. As we shall see, this is straightforward if one wants to compute a single average, e.g. $\langle x \rangle$, but not quite so easy for more complicated averages such as fluctuations in a quantity, $\langle x^2 \rangle - \langle x \rangle^2$, or combinations of measured values such as $\langle y \rangle / \langle x \rangle^2$. Averaging of data will be discussed in Sec. \[sec:averages\]. Having obtained several good data points with error bars, you might want to fit this data to some model. Techniques for fitting data will be described in the second part of these notes in Sec. \[sec:fit\]. I find that the books on these topics usually fall into one of two camps. At one extreme, the books for physicists don’t discuss all that is needed and rarely *prove* the results that they quote. 
At the other extreme, the books for mathematicians presumably prove everything but are written in a style of lemmas, proofs, $\epsilon$’s and $\delta$’s, and unfamiliar notation, which is intimidating to physicists. One exception, which finds a good middle ground, is Numerical Recipes [@press:92], and the discussion of fitting given here is certainly influenced by Chap. 15 of that book. In these notes I aim to be fairly complete and also to derive the results I use, while the style is that of a physicist writing for physicists. I also include scripts in python, perl, and gnuplot to perform certain tasks in data analysis and fitting. For these reasons, these notes are perhaps rather lengthy. Nonetheless, I hope that they will provide a useful reference. Averages and error bars {#sec:averages} ======================= Basic Analysis {#sec:basic} -------------- Suppose we have a set of data from a simulation, $x_i, \, (i = 1, \cdots, N)$, which we shall refer to as a *sample* of data. This data will have some random noise so the $x_i$ are not all equal. Rather they are governed by a distribution $P(x)$, *which we don’t know*. The distribution is normalized, $$\int_{-\infty}^\infty P(x) \, d x = 1,$$ and is usefully characterized by its moments, where the $n$-th moment is defined by $$\langle x^n \rangle = \int_{-\infty}^\infty x^n\, P(x) \, d x\, .$$ We will denote the average *over the exact distribution* by angular brackets. Of particular interest are the first and second moments from which one forms the mean $ \mu$ and variance $\sigma^2$, by $$\begin{aligned} \mu &\equiv \langle x \rangle \label{xavexact} \\ \sigma^2 &\equiv \langle \, \left(x - \langle x\rangle\right)^2 \, \rangle = \langle x^2 \rangle - \langle x \rangle^2 \, . \label{sigma}\end{aligned}$$ The term “standard deviation” is used for $\sigma$, the square root of the variance. 
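For a *known* normalized distribution, the moments can of course be evaluated numerically. The following is a minimal sketch (not one of the scripts mentioned above; the grid and distribution are my own choices) for the unit Gaussian, where $\langle x \rangle = 0$ and $\langle x^2 \rangle - \langle x \rangle^2 = 1$:

```python
import numpy as np

# Moments of a known, normalized P(x) evaluated on a fine grid; here a
# unit Gaussian. Grid limits are chosen wide enough that the neglected
# tails are negligible.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
P = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

norm = (P * dx).sum()                    # normalization integral, ~ 1
mu = (x * P * dx).sum()                  # first moment <x>, ~ 0
sigma2 = (x**2 * P * dx).sum() - mu**2   # variance <x^2> - <x>^2, ~ 1
```

In an actual simulation we do not have $P(x)$; the rest of this section is about estimating $\mu$ and $\sigma^2$ from the sampled data instead.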
In this section we will estimate the mean $\langle x \rangle$, and the uncertainty in our estimate, from the $N$ data points $x_i$. The determination of more complicated averages and resulting error bars will be discussed in Sec. \[sec:advanced\]. In order to obtain error bars we need to assume that the data are uncorrelated with each other. This is a crucial assumption, without which it is very difficult to proceed. However, it is not always clear if the data points are truly independent of each other; some correlations may be present but not immediately obvious. Here, we take the usual approach of assuming that even if there are some correlations, they are sufficiently weak so as not to significantly perturb the results of the analysis. In Monte Carlo simulations, measurements which differ by a sufficiently large number of Monte Carlo sweeps will be uncorrelated. More precisely, the difference in sweep numbers should be greater than a “relaxation time”. This is exploited in the “binning” method in which the data used in the analysis is not the individual measurements, but rather an average over measurements during a range of Monte Carlo sweeps, called a “bin”. If the bin size is greater than the relaxation time, results from adjacent bins will be (almost) uncorrelated. A pedagogical treatment of binning has been given by Ambegaokar and Troyer [@ambegaokar:09]. Alternatively, one can do independent Monte Carlo runs, re-equilibrating each time, and use, as individual data in the analysis, the average from each run. The information *from the data* is usefully encoded in two parameters, the sample mean $\overline{x}$ and the sample standard deviation $s$ which are defined by[^1] $$\begin{aligned} \overline{x} & = {1 \over N} \sum_{i=1}^N x_i \, , \label{meanfromdata} \\ s^2 & = {1 \over N - 1} \sum_{i=1}^N \left( x_i - \overline{x}\right)^2 \, . \label{sigmafromdata} \end{aligned}$$ In statistics, notation is often confusing but crucial to understand. 
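The two sample quantities just defined can be computed in a few lines; the snippet below is a minimal sketch (the function name is my own):

```python
import numpy as np

def sample_stats(x):
    """Sample mean, Eq. (meanfromdata), and sample standard deviation s
    with the 1/(N-1) normalization of Eq. (sigmafromdata)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    xbar = x.sum() / N
    s = np.sqrt(((x - xbar) ** 2).sum() / (N - 1))
    return xbar, s
```

These agree with `np.mean(x)` and `np.std(x, ddof=1)`; note that numpy's default, `ddof=0`, divides by $N$ rather than $N-1$.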
Here, an average indicated by an over-bar, $\overline{\cdots}$, is an average over the *sample of $N$ data points*. This is to be distinguished from an exact average over the distribution $\langle \cdots \rangle$, as in Eqs. (\[xavexact\]) and (\[sigma\]). The latter is, however, just a theoretical construct since we *don’t know* the distribution $P(x)$, only the set of $N$ data points $x_i$ which have been sampled from it. Next we derive two simple results which will be useful later: 1. The mean of the sum of $N$ independent variables *with the same distribution* is $N$ times the mean of a single variable, and 2. The variance of the sum of $N$ independent variables *with the same distribution* is $N$ times the variance of a single variable. The result for the mean is obvious since, defining $X = \sum_{i=1}^N x_i$, $$\langle X \rangle = \sum_{i=1}^N \langle x_i \rangle = N \langle x_i \rangle \ \boxed{ = N \mu\, .} \label{X}$$ The result for the standard deviation needs a little more work: $$\begin{aligned} \sigma_X^2 & \equiv \langle X^2 \rangle - \langle X \rangle^2 \\ &= \sum_{i,j=1}^N \left( \langle x_i x_j\rangle - \langle x_i \rangle \langle x_j \rangle \right) \label{1} \\ & = \sum_{i=1}^N \left( \langle x_i^2 \rangle - \langle x_i \rangle^2 \right) \label{2} \\ & = N \left(\langle x^2 \rangle - \langle x \rangle^2 \right) \\ & \boxed{ = N \sigma^2 \, .} \label{dXsq}\end{aligned}$$ To get from Eq. (\[1\]) to Eq. (\[2\]) we note that, for $i \ne j$, $\langle x_i x_j\rangle = \langle x_i \rangle \langle x_j\rangle$ since $x_i$ and $x_j$ are assumed to be statistically independent. (This is where the statistical independence of the data is needed.) If the means and standard deviations are not all the same, then the above results generalize to $$\begin{aligned} \langle X \rangle &= \sum_{i=1}^N \mu_i \, , \\ \langle \sigma_X^2 \rangle &= \sum_{i=1}^N \sigma_i^2 \, .\end{aligned}$$ Now we describe an important thought experiment. 
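Before moving on, the two boxed results above can be verified with a quick simulation; this is a sketch (the uniform distribution, seed, and sample sizes are my own choices):

```python
import numpy as np

# Check <X> = N*mu and sigma_X^2 = N*sigma^2 for X a sum of N i.i.d.
# variables; the uniform distribution on [0,1] has mu = 1/2, sigma^2 = 1/12.
rng = np.random.default_rng(0)
N, reps = 50, 100_000
X = rng.random((reps, N)).sum(axis=1)

mean_X = X.mean()    # close to N/2 = 25
var_X = X.var()      # close to N/12 ~ 4.17
```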
Let’s *suppose* that we could repeat the set of $N$ measurements *very many* times, each time obtaining a value of the sample average $\overline{x}$. From these results we could construct a distribution, $\widetilde{P}(\overline{x})$, for the sample average as shown in Fig. \[Fig:distofmean\]. If we do enough repetitions we are effectively averaging over the exact distribution. Hence the average of the sample mean, $\overline{x}$, over very many repetitions of the data, is given by $$\langle \overline{x} \rangle = {1 \over N} \sum_{i=1}^N \langle x_i \rangle = \langle x \rangle \equiv \mu \, , \label{xav}$$ i.e. it is the exact average over the distribution of $x$, as one would intuitively expect, see Fig. \[Fig:distofmean\]. Eq.  also follows from Eq.  by noting that $\overline{x} = X/N$. ![ The distribution of results for the sample mean $\overline{x}$ obtained by repeating the measurements of the $N$ data points $x_i$ many times. The average of this distribution is $\mu$, the exact average value of $x$. The mean, $\overline{x}$, obtained from one sample of data typically differs from $\mu$ by an amount of order $\sigma_{\overline{x}}$, the standard deviation of the distribution $\widetilde{P}(\overline{x})$. []{data-label="Fig:distofmean"}](distofmean.eps){width="9.5cm"} In fact, though, we have only the *one* set of data, so we cannot determine $\mu$ exactly. However, Eq. (\[xav\]) shows that $$\boxed{ \mbox{the best estimate of\ } \mu \mbox{ is } \overline{x},} \label{xbarest}$$ i.e. the sample mean, since averaging the sample mean over many repetitions of the $N$ data points gives the true mean of the distribution, $\mu$. An estimate like this, which gives the exact result if averaged over many repetitions of the experiment, is said to be *unbiased*. We would also like an estimate of the uncertainty, or “error bar”, in our estimate of $\overline{x}$ for the exact average $\mu$. 
We take $ \sigma_{\overline{x}}$, the standard deviation in $\overline{x}$ (obtained if one did many repetitions of the $N$ measurements), to be the uncertainty, or error bar, in $\overline{x}$. The reason is that $ \sigma_{\overline{x}}$ is the width of the distribution $\widetilde{P}(\overline{x})$, shown in Fig. \[Fig:distofmean\], so a *single* estimate $\overline{x}$ typically differs from the exact result $\mu$ by an amount of this order. The variance $\sigma_{\overline{x}}^2$ is given by $$\sigma_{\overline{x}}^2 \equiv \langle \overline{x}^2 \rangle - \langle \overline{x} \rangle^2 = {\sigma^2 \over N}\, , \label{dxsq}$$ which follows from Eq.  since $\overline{x} =X / N$. The problem with Eq. (\[dxsq\]) is that **we don’t know $\sigma^2$** since it is a function of the exact distribution $P(x)$. We do, however, know the *sample* variance $s^2$, see Eq. (\[sigmafromdata\]), and the average of this over many repetitions of the $N$ data points, is equal to $\sigma^2$ since $$\begin{aligned} \langle s^2 \rangle & = {1 \over N-1} \sum_{i=1}^N \langle x_i^2 \rangle - {1 \over N(N-1)} \sum_{i=1}^N \sum_{j=1}^N \langle x_i x_j \rangle \label{3} \\ & ={N \over N-1}\langle x^2 \rangle - {1 \over N(N-1)} \left[ N(N-1) \langle x \rangle^2 + N \langle x^2\rangle \right] \label{4}\\ & = \left[\langle x^2 \rangle - \langle x \rangle^2 \right] \\ & = \sigma^2 \, . \label{5}\end{aligned}$$ To get from Eq. (\[3\]) to Eq. (\[4\]), we have separated the terms with $i=j$ in the last term of Eq. (\[3\]) from those with $i \ne j$, and used the fact that each of the $x_i$ is chosen from the same distribution and is statistically independent of the others. It follows from Eq. (\[5\]) that $$\boxed{ \mbox{the best estimate of\ } \sigma^2 \mbox{ is } s^2 \, ,} \label{sigmasamp}$$ since averaging $s^2$ over many repetitions of $N$ data points gives $\sigma^2$. The estimate for $\sigma^2$ in Eq. (\[sigmasamp\]) is therefore unbiased. Combining Eqs. 
(\[dxsq\]) and (\[sigmasamp\]) gives $$\boxed{ \mbox{the best estimate of\ } \sigma_{\overline{x}}^2 \mbox{ is } {s^2 \over N}\, . } \label{errorbar}$$ We have now obtained, using only information from the data, that the mean is given by $$\boxed{ \mu = \overline{x}\; \pm \; \sigma_{\overline{x}} \, ,}$$ where $$\boxed{ \sigma_{\overline{x}} = {s \over \sqrt{N}} \, , } \label{finalans}$$ which we can write explicitly in terms of the data points as $$\boxed{ \sigma_{\overline{x}} = \left[ {1 \over N(N-1)} \, \sum_{i=1}^N (x_i - \overline{x})^2 \right]^{1/2} \, .} \label{finalans2}$$ Remember that $\overline{x}$ and $s$ are the mean and standard deviation of the (one set) of data that is available to us, see Eqs. (\[meanfromdata\]) and (\[sigmafromdata\]). As an example, suppose $N=5$ and the data points are $$x_i = 10, 11, 12, 13, 14,$$ (not very random looking data it must be admitted!). Then, from Eq. (\[meanfromdata\]) we have $\overline{x} = 12$, and from Eq. (\[sigmafromdata\]) $$s^2 = {1 \over 4} \, \left[(-2)^2 + (-1)^2 + 0^2 + 1^2 + 2^2\right] = {5 \over 2} .$$ Hence, from Eq. (\[finalans\]), $$\sigma_{\overline{x}} = {1 \over \sqrt{5}}\, \sqrt{5 \over 2} = {1\over \sqrt{2}},$$ so $$\mu = \overline{x} \pm \sigma_{\overline{x}} = 12 \pm {1\over \sqrt{2}}.$$ How does the error bar decrease with the number of statistically independent data points $N$? Equation (\[5\]) states that the expectation value of $s^2$ is equal to $\sigma^2$ and hence, from Eq. (\[finalans\]), we see that the error bar decreases like $1/\sqrt{N}$. Hence, to reduce the error bar by a factor of 10 one needs 100 times as much data. This is discouraging, but is a fact of life when dealing with random noise. For Eq. (\[finalans\]) to be really useful we need to know the probability that the true answer $\mu$ lies more than $\sigma_{\overline{x}}$ away from our estimate $\overline{x}$. 
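The worked example above can be reproduced in a few lines:

```python
import numpy as np

# The five data points of the example: xbar = 12, s^2 = 5/2, and
# error bar sigma_xbar = s/sqrt(N) = 1/sqrt(2) ~ 0.707.
x = np.array([10.0, 11.0, 12.0, 13.0, 14.0])
N = len(x)
xbar = x.mean()
s2 = ((x - xbar) ** 2).sum() / (N - 1)   # sample variance, Eq. (sigmafromdata)
sigma_xbar = np.sqrt(s2 / N)             # error bar, Eq. (finalans)
```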
Fortunately, for large $N$, the central limit theorem, derived in Appendix \[sec:clt\], tells us (for distributions where the first two moments are finite) that the distribution of $\overline{x}$ is a Gaussian. For this distribution we know that the probability of finding a result more than one standard deviation away from the mean is 32%, more than two standard deviations is $4.5\%$ and more than three standard deviations is $0.3\%$. Hence we expect that most of the time $\overline{x}$ will be within $\sigma_{\overline{x}}$ of the correct result $\mu$, and only occasionally will be more than two times $\sigma_{\overline{x}}$ from it. Even if $N$ is not very large, so there are some deviations from the Gaussian form, the above numbers are often a reasonable guide. However, as emphasized in appendix \[sec:clt\], distributions which occur in nature typically have much more weight in the tails than a Gaussian. As a result, the weight in the tails of the distribution *of the sum* can also be much larger than for a Gaussian even for quite large values of $N$, see Fig. \[Fig:converge\_to\_clt\]. It follows that the probability of an “outlier” can be much higher than that predicted for a Gaussian distribution, as anyone who has invested in the stock market knows well! Advanced Analysis {#sec:advanced} ----------------- In Sec. \[sec:basic\] we learned how to estimate a simple average, such as $\mu_x \equiv \langle x \rangle$, plus the error bar in that quantity, from a set of data $x_i$. Trivially this method also applies to a *linear* combination of different averages, $\mu_x, \mu_y, \cdots$ etc. However, we often need more complicated, *non-linear* functions of averages. One example is the fluctuations in a quantity, i.e. $\langle x^2 \rangle - \langle x \rangle^2$. Another example is a dimensionless combination of moments, which gives information about the *shape* of a distribution independent of its overall scale. 
Such quantities are very popular in finite-size scaling (FSS) analyses since the FSS form is simpler than for quantities with dimension. A popular example, first proposed by Binder, is $\langle x^4 \rangle / \langle x^2 \rangle^2$, which is known as the “kurtosis” (frequently a factor of 3 is subtracted to make it zero for a Gaussian). Hence, in this section we consider how to determine *non-linear functions* of averages of one or more variables, $f(\mu_y, \mu_z, \cdots)$, where $$\mu_y \equiv \langle y \rangle \, ,$$ etc. For example, the two quantities mentioned in the previous paragraph correspond to $$f(\mu_y, \mu_z) = \mu_y - \mu_z^2 \, ,$$ with $y=x^2$ and $z = x$ and $$f(\mu_y, \mu_z) = {\mu_y \over \mu_z^2} \, ,$$ with $y = x^4$ and $z = x^2$. The natural estimate of $f(\mu_y, \mu_z)$ from the sample data is clearly $f(\overline{y}, \overline{z} )$. However, it will take some more thought to estimate the error bar in this quantity. The traditional way of doing this is called “error propagation”, described in Sec. \[sec:traditional\] below. However, it is now more common to use either “jackknife” or “bootstrap” procedures, described in Secs. \[sec:jack\] and \[sec:boot\]. At the price of some additional computation, which is no difficulty when done on a modern computer (though it would have been tedious in the old days when statistics calculations were done by hand), these methods automate the calculation of the error bar. Furthermore, the estimate of $f(\mu_y, \mu_z)$ turns out to have some *bias* if $f$ is a non-linear function. Usually this is a small effect because it is of order $1/N$, see for example Eq.  below, whereas the statistical error is of order $1/\sqrt{N}$. Since $N$ is usually large, the bias is generally much less than the statistical error and so can generally be neglected. In any case, the jackknife and bootstrap methods also enable one to eliminate the leading ($\sim 1/N$) contribution to the bias in an automatic fashion. 
### Traditional method {#sec:traditional} First we will discuss the traditional method, known as error propagation, to compute the error bar and bias. We expand $f(\overline{y}, \overline{z})$ about $f(\mu_y, \mu_z)$ up to second order in the deviations: $$f(\overline{y}, \overline{z}) = f(\mu_y, \mu_z) + (\partial_{\mu_y}f)\, \delta_{\overline{y}} + (\partial_{\mu_z}f)\, \delta_{\overline{z}} + {1\over 2}\, (\partial^2_{\mu_y\mu_y}f)\, \delta_{\overline{y}}^2 + (\partial^2_{\mu_y\mu_z}f)\, \delta_{\overline{y}} \delta_{\overline{z}} + {1\over 2}\, (\partial^2_{\mu_z\mu_z}f)\, \delta_{\overline{z}}^2 + \cdots \, , \label{expand}$$ where $$\delta_{\overline{y}} = \overline{y} - \mu_y ,$$ etc. The terms of first order in the $\delta's$ in Eq.  give the leading contribution to the error, but would average to zero if the procedure were to be repeated many times. However, the terms of second order do not average to zero and so give the leading contribution to the bias. We now estimate that bias. Averaging Eq.  over many repetitions, and noting that $$\langle \delta_{\overline{y}}^2 \rangle = \langle \overline{y}^2 \rangle - \langle \overline{y} \rangle^2 \equiv \sigma_{\overline{y}}^2 , \quad \langle \delta_{\overline{z}}^2 \rangle = \langle \overline{z}^2 \rangle - \langle \overline{z} \rangle^2 \equiv \sigma_{\overline{z}}^2 , \quad \langle \delta_{\overline{y}} \delta_{\overline{z}} \rangle = \langle \overline{y}\, \overline{z} \rangle - \langle \overline{y} \rangle \langle \overline{z} \rangle \equiv \sigma_{\overline{y}\,\overline{z}}^2 ,$$ we get $$\langle f(\overline{y}, \overline{z})\rangle - f(\mu_y, \mu_z) = {1\over 2}\, (\partial^2_{\mu_y\mu_y}f)\, \sigma_{\overline{y}}^2 + (\partial^2_{\mu_y\mu_z}f)\, \sigma_{\overline{y}\,\overline{z}}^2 + {1\over 2}\, (\partial^2_{\mu_z\mu_z}f)\, \sigma_{\overline{z}}^2 \, . \label{df}$$ As shown in Eq.  
our estimate of $\sigma_{\overline{y}}^2$ is $N^{-1}$ times the sample variance (which we now call $s_{yy}^2$), and similarly for $\sigma_{\overline{z}}^2$. In the same way, our estimate of $\sigma_{\overline{y}\,\overline{z}}^2$ is $N^{-1}$ times the sample *covariance* of $y$ and $z$, defined by $$s_{y z}^2 = {1 \over N-1}\, \sum_{i=1}^N \left(y_i - \overline{y}\right)\, \left(z_i - \overline{z}\right) \, .$$ Hence, from Eq. , we have $$f(\mu_y, \mu_z) = \langle f(\overline{y}, \overline{z})\rangle - {1\over N}\, \left[{1\over 2}\, (\partial^2_{\mu_y\mu_y}f)\, s_{y y}^2 + (\partial^2_{\mu_y\mu_z}f)\, s_{y z}^2 + {1\over 2}\, (\partial^2_{\mu_z\mu_z}f)\, s_{z z}^2 \right]\, , \label{bias2}$$ where the leading contribution to the bias is given by the $1/N$ term. Note that the bias term is “self-averaging”, i.e. the fluctuations in it are small relative to the average (by a factor of $1/\sqrt{N}$) when averaging over many repetitions of the data. It follows from Eq.  that if one wants to eliminate the leading contribution to the bias one should $$\boxed{ \mbox{estimate } f(\mu_y,\mu_z)\ \mbox{ from } f(\overline{y}, \overline{z}) - {1\over N}\, \left[{1\over 2}\, (\partial^2_{\mu_y\mu_y}f)\, s_{y y}^2 + (\partial^2_{\mu_y\mu_z}f)\, s_{y z}^2 + {1\over 2}\, (\partial^2_{\mu_z\mu_z}f)\, s_{z z}^2 \right].} \label{bias}$$ As claimed earlier, the bias correction is of order $1/N$. Note that it vanishes if $f$ is a linear function, as shown in Sec. \[sec:basic\]. The generalization to functions of more than two averages, $f(\mu_y, \mu_z, \mu_w, \cdots)$, is obvious. Next we discuss the leading *error* in using $f(\overline{y}, \overline{z})$ as an estimate for $f(\mu_y, \mu_z)$. This comes from the terms linear in the $\delta$’s in Eq. . 
Just including these terms we have $$\begin{aligned} \langle f(\overline{y}, \overline{z}) \rangle &= f(\mu_y, \mu_z) \, , \\ \langle\, f^2(\overline{y}, \overline{z})\, \rangle &= f^2(\mu_y, \mu_z) + (\partial_{\mu_y}f)^2 \, \langle \delta_{\overline{y}}^2 \rangle + 2(\partial_{\mu_y}f)\, (\partial_{\mu_z}f) \, \langle \delta_{\overline{y}} \delta_{\overline{z}} \rangle + (\partial_{\mu_z}f)^2 \, \langle \delta_{\overline{z}}^2 \rangle \, .\end{aligned}$$ Hence $$\begin{aligned} \sigma_f^2 &\equiv \langle\, f^2(\overline{y}, \overline{z})\, \rangle - \langle f(\overline{y}, \overline{z}) \rangle^2 \nonumber \\ &= (\partial_{\mu_y}f)^2 \, \langle \delta_{\overline{y}}^2 \rangle + 2(\partial_{\mu_y}f)\, (\partial_{\mu_z}f) \, \langle \delta_{\overline{y}} \delta_{\overline{z}} \rangle + (\partial_{\mu_z}f)^2 \, \langle \delta_{\overline{z}}^2 \rangle \, .\end{aligned}$$ As above, we use $s_{y y}^2 / N$ as an estimate of $\langle \delta_{\overline{y}}^2 \rangle$ and similarly for the other terms. Hence $$\boxed{ \mbox{the best estimate of } \sigma_f^2 \mbox{ is } {1 \over N}\, \left[ (\partial_{\mu_y}f)^2 \, s_{y y}^2 + 2(\partial_{\mu_y}f)\, (\partial_{\mu_z}f) \, s_{y z}^2 + (\partial_{\mu_z}f)^2 \, s_{z z}^2 \right] \, .} \label{sigma_f}$$ This estimate is unbiased to leading order in $N$. Note that we need to keep track not only of fluctuations in $y$ and $z$, characterized by their variances $s_{y y}^2$ and $s_{z z}^2$, but also cross correlations between $y$ and $z$, characterized by their covariance $s_{y z}^2$. Hence, still to leading order in $N$, we get $$\boxed{f(\mu_y, \mu_z) = f(\overline{y}, \overline{z}) \pm \sigma_f\, ,}$$ where we estimate the error bar $\sigma_f$ from Eq.  which shows that it is of order $1/\sqrt{N}$. Again, the generalization to functions of more than two averages is obvious. Note that in the simple case studied in Sec. \[sec:basic\] where there is only one set of variables $x_i$ and $f =\mu_x$, Eq. 
tells us that there is no bias, which is correct, and Eq.  gives an expression for the error bar which agrees with Eq. . In Eqs.  and we need to keep track of how errors in the individual quantities like $\overline{y}$ propagate to the estimate of the function $f$. This requires inputting by hand the various partial derivatives into the analysis program, and keeping track of all the variances and covariances. In the next two sections we see how *resampling* the data automatically takes account of error propagation without needing to input the partial derivatives and keep track of variances and covariances. These approaches, known as jackknife and bootstrap, provide a *fully automatic* method of determining error bars and bias. ### Jackknife {#sec:jack} We define the $i$-th jackknife estimate, $y^J_i\, (i = 1,2, \cdots, N)$, to be the average over all data in the sample *except the point* $i$, i.e. $$y^J_i \equiv {1 \over N-1}\, \sum_{j \ne i} y_j \, .$$ We also define corresponding jackknife estimates of the function $f$ (again for concreteness we will assume that $f$ is a function of just 2 averages but the generalization will be obvious): $$f^J_i \equiv f(y^J_i, z^J_i) \, . \label{fJi}$$ In other words, we use the jackknife values, $y^J_i, z^J_i$, rather than the sample means, $\overline{y}, \overline{z}$, as the arguments of $f$. For example, a jackknife estimate of the Binder ratio $\langle x^4 \rangle / \langle x^2 \rangle^2$ is $$f^J_i = {(N-1)^{-1} \sum_{j \ne i} x_j^4 \over \left[(N-1)^{-1} \sum_{j \ne i} x_j^2\right]^2 } \, .$$ The overall jackknife estimate of $f(\mu_y, \mu_z)$ is then the average over the $N$ jackknife estimates $f_i^J$: $$\boxed{ \overline{f^J} \equiv {1 \over N} \sum_{i=1}^N f_i^J \, .} \label{fJ}$$ It is straightforward to show that if $f$ is a linear function of $\mu_y$ and $\mu_z$ then $\overline{f^J} = f(\overline{y},\overline{z})$, i.e. the jackknife and standard averages are identical. 
However, when $f$ is not a linear function, so there is bias, there *is* a difference, and we will now show that the resampling carried out in the jackknife method can be used to determine bias and error bars in an automated way. We proceed as for the derivation of Eq. , which we now write as $$f(\mu_y, \mu_z) = \langle f(\overline{y},\overline{z}) \rangle - {A \over N} - {B\over N^2} - \cdots \, ,$$ where $A$ is the term in rectangular brackets in Eq. , and we have added the next order correction. The jackknife data sets have $N-1$ points with the same distribution as the $N$ points in the actual distribution, and so the bias in the jackknife average will be of the same form, with the same values of $A$ and $B$, but with $N$ replaced by $N-1$, i.e. $$f(\mu_y, \mu_z) = \langle \overline{f^J} \rangle - {A \over N-1} - {B \over (N-1)^2} - \cdots \, .$$ We can therefore eliminate the leading contribution to the bias by forming an appropriate linear combination of $f(\overline{y},\overline{z})$ and $\overline{f^J}$, namely $$f(\mu_y, \mu_z) = N \langle f(\overline{y},\overline{z}) \rangle - (N-1) \langle \overline{f^J} \rangle + O\left({1\over N^2}\right) \, . $$ It follows that, to eliminate the leading bias without computing partial derivatives, one should $$\boxed{ \mbox{estimate } f(\mu_y, \mu_z) \mbox{ from } N f(\overline{y},\overline{z}) - (N-1) \overline{f^J} \, . } \label{bias_elim}$$ The bias is then of order $1/N^2$. However, as mentioned earlier, bias is usually not a big problem because, even without eliminating the leading contribution, the bias is of order $1/N$ whereas the statistical error is of order $1/\sqrt{N}$ which is much bigger if $N$ is large. In most cases, therefore, $N$ is sufficiently large that one can use *either* the usual average $f(\overline{y}, \overline{z})$, or the jackknife average $\overline{f^J}$ in Eq. , to estimate $f(\mu_y, \mu_z)$, since the difference between them will be much smaller than the statistical error. 
In other words, elimination of the leading bias using Eq.  is usually not necessary. Next we show that the jackknife method gives error bars, which agree with Eq.  but without the need to explicitly keep track of the partial derivatives and the variances and covariances. We define the variance of the jackknife averages by $$\sigma^2_{f^J} \equiv \overline{\left(f^J\right)^2} - \left( \overline{f^J} \right)^2 \, , \label{sigmafJ}$$ where $$\overline{\left(f^J\right)^2} = {1 \over N} \sum_{i=1}^N \left(f_i^J\right)^2 \, .$$ Using Eqs.  and , we expand $\overline{f^J}$ away from the exact result $f(\mu_y, \mu_z)$. Just including the leading contribution gives $$\begin{aligned} \overline{f^J} - f(\mu_y, \mu_z) &= {1 \over N} \sum_{i=1}^N \left[ (\partial_{\mu_y} f)\, (y_i^J - \mu_y) + (\partial_{\mu_z} f)\, (z_i^J - \mu_z) \right] \nonumber \\ &= {1 \over N(N-1)} \sum_{i=1}^N \left[ (\partial_{\mu_y} f)\, \left\{N(\overline{y} - \mu_y) - (y_i - \mu_y) \right\} + (\partial_{\mu_z} f)\, \left\{N(\overline{z} - \mu_z) - (z_i - \mu_z) \right\} \right] \nonumber \\ &= (\partial_{\mu_y} f)\, (\overline{y} - \mu_y) + (\partial_{\mu_z} f)\, (\overline{z} - \mu_z) \, . \label{fJ-f}\end{aligned}$$ Similarly we find $$\begin{aligned} \overline{\left(f^J\right)^2 } &= {1 \over N} \sum_{i=1}^N \left[ f(\mu_y, \mu_z) + (\partial_{\mu_y} f)\, (y_i^J - \mu_y) + (\partial_{\mu_z} f)\, (z_i^J - \mu_z) \right]^2 \nonumber \\ &= f^2(\mu_y, \mu_z) + 2 f(\mu_y, \mu_z) \, \left[ (\partial_{\mu_y} f)\, (\overline{y} - \mu_y) + (\partial_{\mu_z} f)\, (\overline{z} - \mu_z) \right] \nonumber \\ &\quad + (\partial_{\mu_y} f)^2\, \left[(\overline{y} - \mu_y)^2 + {s_{yy}^2 \over N(N-1)}\right] + (\partial_{\mu_z} f)^2\, \left[(\overline{z} - \mu_z)^2 + {s_{zz}^2 \over N(N-1)}\right] \nonumber \\ &\qquad + 2(\partial_{\mu_y} f)(\partial_{\mu_z} f)\,\left[(\overline{y} - \mu_y) (\overline{z} - \mu_z) + {s_{yz}^2 \over N(N-1)}\right] \, .\end{aligned}$$ Hence, from Eqs.  
and , the variance in the jackknife estimates is given by $$\sigma^2_{f^J} = {1 \over N(N-1)} \, \left[ (\partial_{\mu_y} f)^2\, s_{yy}^2 + (\partial_{\mu_z} f)^2\, s_{zz}^2 + 2(\partial_{\mu_y} f)(\partial_{\mu_z} f)\, s_{yz}^2\right] \, ,$$ which is just $1/(N-1)$ times $\sigma_f^2$, the estimate of the square of the error bar in $f(\overline{y}, \overline{z})$ given in Eq. . Hence $$\boxed{ \mbox{the jackknife estimate for } \sigma_f \mbox{ is } \sqrt{N-1} \, \sigma_{f^J}\, .} \label{error_jack}$$ Note that this is directly obtained from the jackknife estimates without having to put in the partial derivatives by hand. Note too that the $\sqrt{N-1}$ factor is in the *numerator* whereas the factor of $\sqrt{N}$ in Eq.  is in the *denominator*. Intuitively the reason for this difference is that the jackknife estimates are very close since they would all be equal except that each one omits just one data point. If $N$ is very large, roundoff errors could become significant from having to subtract large, almost equal, numbers to get the error bar from the jackknife method. It is then advisable to group the $N$ data points into $N_\text{group}$ groups (or “bins”) of data and take, as individual data points in the jackknife analysis, the average of the data in each group. The above results clearly go through with $N$ replaced by $N_\text{group}$. To summarize this subsection, to estimate $f(\mu_y, \mu_z)$ one can use either $f(\overline{y}, \overline{z})$ or the jackknife average $\overline{f^J}$ in Eq. . The error bar in this estimate, $\sigma_f$, is related to the standard deviation in the jackknife estimates $\sigma_{f^J}$ by Eq. . 
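The agreement between the jackknife error bar and error propagation can be checked numerically. The sketch below (function names are my own; the Binder ratio serves as the non-linear function $f$) computes the error bar both ways:

```python
import numpy as np

def binder(x):
    """Binder-type ratio <x^4>/<x^2>^2 evaluated on a data sample."""
    return (x**4).mean() / (x**2).mean()**2

def error_propagation(x):
    """Error bar on binder(x) from Eq. (sigma_f), with the partial
    derivatives df/dmu_y = 1/mu_z^2 and df/dmu_z = -2 mu_y/mu_z^3
    entered by hand (y = x^4, z = x^2)."""
    N = len(x)
    y, z = x**4, x**2
    ybar, zbar = y.mean(), z.mean()
    fy, fz = 1.0 / zbar**2, -2.0 * ybar / zbar**3
    syy, szz = y.var(ddof=1), z.var(ddof=1)
    syz = np.cov(y, z)[0, 1]     # sample covariance, 1/(N-1) normalization
    return np.sqrt((fy**2 * syy + 2 * fy * fz * syz + fz**2 * szz) / N)

def jackknife_error(x, f):
    """Jackknife error bar sqrt(N-1)*sigma_{fJ} of Eq. (error_jack);
    no partial derivatives needed."""
    N = len(x)
    fJ = np.array([f(np.delete(x, i)) for i in range(N)])
    return np.sqrt(N - 1) * fJ.std()   # sigma_{fJ} uses the 1/N normalization
```

For a sample of Gaussian data the two error bars agree up to corrections of relative order $1/N$; the bias-eliminated estimate of Eq. (bias\_elim) would correspondingly be `N * binder(x) - (N - 1) * fJ.mean()`.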
### Bootstrap {#sec:boot} The bootstrap, like the jackknife, is a resampling of the $N$ data points. Whereas the jackknife considers $N$ new data sets, each containing all the original data points minus one, the bootstrap uses ${{N_{\rm boot}}}$ data sets each containing $N$ points obtained by random (Monte Carlo) sampling of the original set of $N$ points. During the Monte Carlo sampling, the probability that a data point is picked is $1/N$ irrespective of whether it has been picked before. (In the statistics literature this is called picking from a set “with replacement”.) Hence a given data point $x_i$ will, *on average*, appear once in each Monte Carlo-generated data set, but may appear not at all, or twice, and so on. The probability that $x_i$ appears $n_i$ times is close to a Poisson distribution with mean unity. However, it is not exactly Poissonian because of the constraint in Eq. (\[constraint\]) below. It turns out that we shall need to include the deviation from the Poisson distribution even for large $N$. We shall use the term “bootstrap” data sets to denote the Monte Carlo-generated data sets. More precisely, let us suppose that the number of times $x_i$ appears in a bootstrap data set is $n_i$. Since each bootstrap dataset contains exactly $N$ data points, we have the constraint $$\sum_{i=1}^N n_i = N \, . \label{constraint}$$ Consider one of the $N$ variables $x_i$. Each time we generate an element in a bootstrap dataset the probability that it is $x_i$ is $1/N$, which we will denote by $p$. From standard probability theory, the probability that $x_i$ occurs $n_i$ times is given by a binomial distribution $$P(n_i) = {N! \over n_i!
\, (N - n_i)!} \, p^{n_i} (1-p)^{N -n_i} \, .$$ The mean and standard deviation of a binomial distribution are given by $$\begin{aligned} [ n_i ]{_{_{\rm MC}}}& = N p = 1 \, , \label{nimc} \\ {[ n_i^2 ]{_{_{\rm MC}}}} - [n_i]{_{_{\rm MC}}}^2 & = N p (1 - p) = 1 - {1 \over N} \, , \label{epsi_epsi}\end{aligned}$$ where $[ \dots ]{_{_{\rm MC}}}$ denotes an exact average over bootstrap samples (for a fixed original data set $x_i$). For $N \to\infty$, the binomial distribution goes over to a Poisson distribution, for which the factor of $1/N$ in Eq. (\[epsi\_epsi\]) does not appear. We assume that ${{N_{\rm boot}}}$ is sufficiently large that the bootstrap average we carry out reproduces this result with sufficient accuracy. Later, we will discuss what values for ${{N_{\rm boot}}}$ are sufficient in practice. Because of the constraint in Eq. (\[constraint\]), $n_i $ and $n_j$ (with $i \ne j$) are not independent and we find, by squaring Eq.  and using Eqs.  and , that $$[ n_i n_j ]{_{_{\rm MC}}}- [ n_i ]{_{_{\rm MC}}}[ n_j ]{_{_{\rm MC}}}= - {1 \over N} \qquad (i \ne j)\, . \label{epsi_epsj}$$ First of all we just consider the simple average $\mu_x \equiv \langle x \rangle$, for which, of course, the standard methods in Sec. \[sec:basic\] suffice, so bootstrap is not necessary. However, this will show how to get averages and error bars in a simple case, which we will then generalize to non-linear functions of averages. We denote the average of $x$ for a given bootstrap data set by ${x^B}_\alpha$, where $\alpha$ runs from 1 to ${{N_{\rm boot}}}$, [[*i.e.*]{}]{}$${x^B}_\alpha = {1 \over N} \sum_{i=1}^N n_i^\alpha x_i \, .$$ We then compute the bootstrap average of the mean of $x$ and the bootstrap variance in the mean, by averaging over all the bootstrap data sets. We assume that ${{N_{\rm boot}}}$ is large enough for the bootstrap average to be exact, so we can use Eqs. (\[epsi\_epsi\]) and (\[epsi\_epsj\]). 
The result is $$\begin{aligned} \label{xb} \overline{{x^B}} \equiv {1 \over {{N_{\rm boot}}}} \sum_{\alpha=1}^{{N_{\rm boot}}}{x^B}_\alpha & = & {1\over N} \sum_{i=1}^N [n_i]{_{_{\rm MC}}}x_i = {1\over N} \sum_{i=1}^N x_i = \overline{x} \\ \sigma^2_{{x^B}} \equiv \overline{\left({x^B}\right)^2} - \left(\overline{{x^B}}\right)^2 & = & {1\over N^2} \left(1 - {1\over N}\right) \sum_i x_i^2 - {1 \over N^3} \sum_{i \ne j} x_i x_j \, , \label{sigmab}\end{aligned}$$ where $$\overline{\left({x^B}\right)^2} \equiv {1 \over {{N_{\rm boot}}}} \sum_{\alpha=1}^{{N_{\rm boot}}}\left[ \left({x^B}_\alpha\right)^2\right]{_{_{\rm MC}}}\, .$$ We now average Eqs. (\[xb\]) and (\[sigmab\]) over many repetitions of the original data set $x_i$. Averaging Eq. (\[xb\]) gives $$\langle \overline{{x^B}} \rangle = \langle \overline{x} \rangle = \langle x \rangle \equiv \mu_x \, .$$ This shows that the bootstrap average $\,\overline{{x^B}}\, $ is an unbiased estimate of the exact average $\mu_x$. Averaging Eq. (\[sigmab\]) gives $$\left\langle \sigma^2_{{x^B}} \right\rangle = {N-1 \over N^2} \sigma^2 = {N-1 \over N} \sigma^2_{\overline{x}} \, ,$$ where we used Eq. (\[dxsq\]) to get the last expression. Since $\sigma_{\overline{x}}$ is the uncertainty in the sample mean, we see that $$\boxed{\mbox{the bootstrap estimate of }\sigma_{\overline{x}} \mbox{ is } \sqrt{N \over N-1}\, \sigma_{{x^B}} \, .} \label{sigmaxb}$$ Remember that $\sigma_{{x^B}}$ is the standard deviation of the bootstrap data sets. Usually $N$ is sufficiently large that the square root in Eq. (\[sigmaxb\]) can be replaced by unity. As for the jackknife, these results can be generalized to finding the error bar in some possibly non-linear function, $f(\mu_y, \mu_z)$, rather than for $\mu_x$. 
One computes the bootstrap estimates for $f(\mu_y, \mu_z)$, which are $${f^B}_\alpha = f({y^B}_\alpha, {z^B}_\alpha) \, .$$ In other words, we use the bootstrap values, ${y^B}_\alpha, {z^B}_\alpha$, rather than the sample means, $\overline{y}, \overline{z}$, as the arguments of $f$. The final bootstrap estimate for $f(\mu_y, \mu_z)$ is the average of these, [[*i.e.*]{}]{}$$\boxed{ \overline{{f^B}} = {1 \over {{N_{\rm boot}}}} \sum_{\alpha=1}^{{N_{\rm boot}}}{f^B}_\alpha \, .} \label{fb}$$ Following the same methods as in the jackknife section, one obtains the error bar, $\sigma_f$, in $f(\mu_y, \mu_z)$. The result is $$\boxed{\mbox{the bootstrap estimate for } \sigma_f \mbox{ is } \sqrt{N \over N-1} \,\, \sigma_{{f^B}}}, \label{sigmafb}$$ where $$\boxed{ \sigma^2_{{f^B}} = \overline{\left({f^B}\right)^2} - \left(\overline{{f^B}}\right)^2 \, ,} $$ is the variance of the bootstrap estimates. Here $$\overline{\left({f^B}\right)^2} \equiv {1 \over {{N_{\rm boot}}}} \sum_{\alpha=1}^{{N_{\rm boot}}}\left({f^B}_\alpha\right)^2 \, .$$ Usually $N$ is large enough that the factor of $\sqrt{N/(N-1)}$ in Eq.  can be replaced by unity. Equation (\[sigmafb\]) corresponds to the result Eq. (\[sigmaxb\]) which we derived for the special case of $f = \mu_x$. Again, following the same path as in the jackknife section, it is straightforward to show that the bias of the estimates in Eqs. (\[fb\]) and (\[sigmafb\]) is of order $1/N$ and so vanishes for $N\to\infty$. However, if $N$ is not too large it may be useful to eliminate the leading contribution to the bias in the mean, as we did for jackknife in Eq. (\[bias\_elim\]). The result is that one should $$\boxed{\mbox{estimate } f(\mu_y, \mu_z) \mbox{ from } 2 f(\overline{y}, \overline{z}) - \overline{{f^B}} \, .} \label{improved_boot}$$ The bias in Eq. (\[improved\_boot\]) is of order $1/N^2$, whereas $f(\overline{y}, \overline{z})$ and $\overline{{f^B}}$ each have a bias of order $1/N$.
However, it is not normally necessary to eliminate the bias since, if $N$ is large, the bias is much smaller than the statistical error. I have not systematically studied the values of ${{N_{\rm boot}}}$ that are needed in practice to get accurate estimates for the error. It seems that ${{N_{\rm boot}}}$ in the range 100 to 500 is typically chosen, and this seems to be adequate irrespective of how large $N$ is. To summarize this subsection, to estimate $f(\mu_y, \mu_z)$ one can either use $f(\overline{y}, \overline{z})$, or the bootstrap average in Eq. , and the error bar in this estimate, $\sigma_f$, is related to the standard deviation in the bootstrap estimates by Eq. . ### Jackknife or Bootstrap? {#sec:jorb} The jackknife approach involves less calculation than bootstrap, and is fine for estimating combinations of moments of the measured quantities. Furthermore, identical results are obtained each time jackknife is run on the same set of data, which is not the case for bootstrap. However, the range of the jackknife estimates is very much smaller, by a factor of $\sqrt{N}$ for large $N$, than the scatter of averages which would be obtained from individual data sets, see Eq. . By contrast, for bootstrap, $\sigma_{{f^B}}$, which measures the deviation of the bootstrap estimates ${f^B}_\alpha$ from the result for the single actual data set $f(\overline{y}, \overline{z})$, *is equal to* $\sigma_f$, the deviation of the average of a single data set from the exact result $f(\mu_y,\mu_z)$ (if we replace the factor of $N/(N-1)$ by unity, see Eq. ). This is the main strength of the bootstrap approach; it samples the full range of the distribution of sample averages. Hence, if you want to generate data which covers the full range then you should use bootstrap. This is useful in fitting, see, for example, Sec. \[sec:resample\]. However, if you just want to generate error bars on combinations of moments quickly and easily, then use jackknife.
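The bootstrap procedure described above can be sketched in Python in much the same way as the jackknife. This is again a sketch under our own assumptions: the function name `bootstrap_error`, the fixed random seed, and the illustrative choice $f = \mu_y/\mu_z$ are not from these notes.

```python
import numpy as np

def bootstrap_error(y, z, f, n_boot=500, seed=0):
    """Bootstrap estimate and error bar for f(mean(y), mean(z)).

    Each bootstrap data set draws N indices with replacement; the error
    bar is sqrt(N/(N-1)) times the std dev of the bootstrap estimates.
    """
    rng = np.random.default_rng(seed)
    N = len(y)
    fB = np.empty(n_boot)
    for a in range(n_boot):
        idx = rng.integers(0, N, size=N)  # sample N points with replacement
        fB[a] = f(y[idx].mean(), z[idx].mean())
    sigma_f = np.sqrt(N / (N - 1)) * fB.std()
    return fB.mean(), sigma_f
```

Unlike the jackknife, rerunning this with a different seed gives slightly different error bars, the scatter shrinking as ${N_{\rm boot}}$ grows.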
Fitting data to a model {#sec:fit} ======================= A good reference for the material in this section is Chapter 15 of Numerical Recipes [@press:92]. Frequently we are given a set of data points $(x_i, y_i), i = 1, 2, \cdots, N$, with corresponding error bars, $\sigma_i$, to which we would like to fit a smooth function $f(x)$. The function could be a straight line (the simplest case), a higher order polynomial, or a more complicated function. The fitting function will depend on $M$ “fitting parameters”, $a_\alpha$, and we would like the “best” fit obtained by adjusting these parameters. We emphasize that a fitting procedure should not only 1. \[give\_params\] give the values of the fit parameters, but also 2. \[give\_errors\] provide error estimates on those parameters, and 3. \[gof\] provide a measure of how good the fit is. If the result of part \[gof\] is that the fit is very poor, the results of parts \[give\_params\] and \[give\_errors\] are probably meaningless. The definition of “best” is not unique. However, the most useful choice, and the one nearly always taken, is “least squares”, in which one minimizes the sum of the squares of the difference between the observed $y$-value, $y_i$, and the fitting function evaluated at $x_i$, weighted appropriately by the error bars since if some points have smaller error bars than others the fit should be closer to those points. The quantity to be minimized, called “chi-squared”,[^2] and written mathematically as $\chi^2$, is therefore $$\boxed{ \chi^2 = \sum_{i=1}^N \left( \, {y_i - f(x_i) \over \sigma_i } \, \right)^2. } \label{chisq}$$ Often we assume that the distribution of the errors is Gaussian, since, according to the central limit theorem discussed in Appendix \[sec:clt\], the sum of $N$ independent random variables has a Gaussian distribution (under fairly general conditions) if $N$ is large.
However, distributions which occur in nature usually have more weight in the “tails” than a Gaussian, and as a result, even for moderately large values of $N$, the probability of an “outlier” might be much bigger than expected from a Gaussian, see Fig. \[Fig:converge\_to\_clt\]. If the errors *are* distributed with a Gaussian distribution, and if $f(x)$ has the *exact* values of the fit parameters, then $\chi^2$ in Eq.  is a sum of squares of $N$ random variables with a Gaussian distribution with mean zero and standard deviation unity. However, when we have minimized the value of $\chi^2$ with respect to the $M$ fitting parameters $a_\alpha$ the terms are not all independent. It turns out, see Appendix \[sec:NDF\], that, at least for a linear model (which we define below), the distribution of $\chi^2$ at the minimum is that of the sum of the squares of $N-M$ (not $N$) Gaussian random variables with zero mean and standard deviation unity[^3]. We call $N-M$ the “number of degrees of freedom” ($N_\text{DOF}$). The $\chi^2$ distribution is discussed in Appendix \[sec:Q\]. The formula for it is Eq. . The simplest problems are where the fitting function is a *linear function of the parameters*. We shall call this a *linear model*. Examples are a straight line ($M=2$), $$y = a_0 + a_1 x \, , \label{sl}$$ and an $m$-th order polynomial ($M=m+1$), $$y = a_0 + a_1 x + a_2 x^2 + \cdots + a_m x^m = \sum_{\alpha=0}^m a_\alpha x^\alpha \, , \label{poly}$$ where the parameters to be adjusted are the $a_\alpha$. (Note that we are *not* stating here that $y$ has to be a linear function of $x$, only of the fit parameters $a_\alpha$.) An example where the fitting function depends *non*-linearly on the parameters is $$y = a_0 x^{a_1} + a_2 \, .$$ Linear models are fairly simple because, as we shall see, the parameters are determined by *linear* equations, which, in general, have a unique solution that can be found by straightforward methods.
However, for fitting functions which are non-linear functions of the parameters, the resulting equations are *non-linear* which may have many solutions or none at all, and so are much less straightforward to solve. We shall discuss fitting to both linear and non-linear models in these notes. Sometimes a non-linear model can be transformed into a linear model by a change of variables. For example, if we want to fit to $$y = a_0 x^{a_1} \, ,$$ which has a non-linear dependence on $a_1$, taking logs gives $$\ln y = \ln a_0 + a_1 \ln x \, ,$$ which is a *linear* function of the parameters $a'_0 = \ln a_0$ and $a_1$. Fitting a straight line to a log-log plot is a very common procedure in science and engineering. However, it should be noted that transforming the data does not exactly take Gaussian errors into Gaussian errors, though the difference will be small if the errors are “sufficiently small”. For the above log transformation this means $\sigma_i / y_i \ll 1$, i.e. the *relative* error is much less than unity. Fitting to a straight line -------------------------- To see how least squares fitting works, consider the simplest case of a straight line fit, Eq. (\[sl\]), for which we have to minimize $$\chi^2(a_0, a_1) = \sum_{i=1}^N \left({\, y_i - a_0 - a_1 x_i\, \over \sigma_i} \right)^2 \, , \label{chisq_sline}$$ with respect to $a_0$ and $a_1$. Differentiating $\chi^2$ with respect to these parameters and setting the results to zero gives \[sline\] $$\begin{aligned} a_0\, \sum_{i=1}^N {1 \over \sigma_i^2} + a_1\, \sum_{i=1}^N {x_i\over\sigma_i^2} &= \sum_{i=1}^N {y_i\over \sigma_i^2} , \label{da0}\\ a_0\, \sum_{i=1}^N {x_i\over \sigma_i^2} +a_1\, \sum_{i=1}^N {x_i^2\over \sigma_i^2} &= \sum_{i=1}^N {x_i y_i \over \sigma_i^2} . 
\label{da1}\end{aligned}$$ We write this as $$\begin{aligned} U_{00} \, a_0 + U_{01} \, a_1 &= v_0 , \\ U_{10} \, a_0 + U_{11} \, a_1 &= v_1 ,\end{aligned}$$ \[lssl\] where $$\begin{aligned} &\boxed{U_{\alpha\beta} = \sum_{i=1}^N {x_i^{\alpha + \beta}\over \sigma_i^2}, } \quad \mbox{and} \label{Uab} \\ &\boxed{v_\alpha = \sum_{i=1}^N{ y_i\, x_i^\alpha \over \sigma_i^2 \, }. } \label{v}\end{aligned}$$ The matrix notation, while an overkill here, will be convenient later when we do a general polynomial fit. Note that $U_{10} = U_{01}$. (More generally, later on, $U$ will be a symmetric matrix). Equations (\[lssl\]) are two linear equations in two unknowns. These can be solved by eliminating one variable, which immediately gives an equation for the second one. The solution can also be determined from $$\boxed{ a_\alpha = \sum_{\beta=0}^m \left(U^{-1}\right)_{\alpha\beta} \, v_\beta , } \label{soln}$$ (where we have temporarily generalized to a polynomial of order $m$). For the straight-line fit, the inverse of the $2\times 2$ matrix $U$ is given, according to standard rules, by $$U^{-1} = {1 \over \Delta} \, \begin{pmatrix} U_{11} & -U_{01} \\ -U_{01} & U_{00} \end{pmatrix} \label{Uinv}$$ where $$\boxed{ \Delta = U_{00} U_{11} - U_{01}^2 ,} \label{Delta}$$ and we have noted that $U$ is symmetric so $U_{01} = U_{10}$. The solution for $a_0$ and $a_1$ is therefore given by $$\begin{aligned} &\boxed{a_0 = {U_{11}\, v_0 - U_{01}\, v_1 \over \Delta}, } \\ &\boxed{a_1 = {-U_{01}\, v_0 + U_{00}\, v_1 \over \Delta}. } \end{aligned}$$ \[soln\_sl\] We see that it is straightforward to determine the slope, $a_1$, and the intercept, $a_0$, of the fit from Eqs. (\[Uab\]), (\[v\]), (\[Delta\]) and (\[soln\_sl\]) using the $N$ data points $(x_i,y_i)$, and their error bars $\sigma_i$. 
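The straight-line recipe above translates directly into code. The following is a minimal sketch (the function name `straight_line_fit` is ours, and NumPy is assumed): it builds the matrix $U$ and vector $v$ of Eqs. (\[Uab\]) and (\[v\]) and solves $U a = v$ for the fit parameters.

```python
import numpy as np

def straight_line_fit(x, y, sigma):
    """Weighted least-squares fit of y = a0 + a1*x.

    Builds U and v from the data and error bars, then solves the
    two linear equations U a = v for a = (a0, a1).
    """
    w = 1.0 / sigma**2  # weights 1/sigma_i^2
    U = np.array([[np.sum(w),     np.sum(w * x)],
                  [np.sum(w * x), np.sum(w * x**2)]])
    v = np.array([np.sum(w * y), np.sum(w * x * y)])
    return np.linalg.solve(U, v)  # (a0, a1)
```

Using `np.linalg.solve` avoids forming $U^{-1}$ explicitly; the inverse is, however, needed for the error bars, as discussed below.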
Fitting to a polynomial ----------------------- Frequently we need to fit to a higher order polynomial than a straight line, in which case we minimize $$\chi^2(a_0,a_1,\cdots,a_m) = \sum_{i=1}^N \left({y_i - \sum_{\alpha=0}^m a_\alpha x_i^\alpha \over \sigma_i} \right)^2 \label{chisq_poly}$$ with respect to the $(m+1)$ parameters $a_\alpha$. Setting to zero the derivatives of $\chi^2$ with respect to the $a_\alpha$ gives $$\boxed{ \sum_{\beta=0}^m U_{\alpha\beta}\, a_\beta = v_\alpha ,} \label{lspoly}$$ where $U_{\alpha\beta}$ and $v_\alpha$ have been defined in Eqs. (\[Uab\]) and (\[v\]). Eq. (\[lspoly\]) represents $M = m+1$ *linear* equations, one for each value of $\alpha$. Their solution is again given by Eq. (\[soln\]), i.e. it is expressed in terms of the inverse matrix $U^{-1}$. Error Bars {#sec:error_bars} ---------- In addition to the best fit values of the parameters we also need to determine the error bars in those values. Interestingly, this information is *also* contained in the matrix $U^{-1}$. First of all, we explain the significance of error bars in fit parameters. We assume that the data is described by a model with a particular set of parameters $\vec{a}^\text{true}$ which, unfortunately, we don’t know. If we were, somehow, to have many real data sets each one would give a different set of fit parameters $\vec{a}^{(i)}, i = 0, 1, 2, \cdots$, because of noise in the data, *clustered about the true set* $\vec{a}^\text{true}$. Projecting on to a single fit parameter, $a_1$ say, there will be a distribution of values $P(a_1)$ centered on $a_1^\text{true}$ with standard deviation $\sigma_1$, see the top part of Fig. \[Fig:distofa1\]. Typically the value of $a_1$ obtained from our *one actual data set*, $a_1^{(0)}$, will lie within about $\sigma_1$ of $a_1^\text{true}$. Hence we define the error bar to be $\sigma_1$. ![ The top figure shows the distribution of one of the fit parameters $a_1$ if one could obtain many real data sets.
The distribution has standard deviation $\sigma_1$ about the true value $a_1^\text{true}$ and is Gaussian if the noise on the data is Gaussian. In fact, however, we have only one actual data set, which has fit parameter $a_1^{(0)}$, and this typically lies within about $\sigma_1$ of $a_1^\text{true}$. Hence we cannot determine $\sigma_1$ directly, because we have only the one value, $a_1^{(0)}$. However, we can generate many *simulated* data sets from the one actual set and hence determine the distribution of the resulting fit parameter $a_1^S$, which is shown in the lower figure. This distribution is centered about the value from the actual data, $a_1^{(0)}$, and has standard deviation, $\sigma_1^S$. The important point is that if one assumes a linear model then one can show that $\boxed{\sigma_1^S = \sigma_1 ,}$ see text. Even if the model is non linear, one usually assumes that the difference in the standard deviations is sufficiently small that one can still equate the true error bar with the standard deviation from the simulated data sets. We emphasize that $\sigma_1^S$ *can* be determined from the one available data set, and this is assumed to equal $\sigma_1$. Furthermore, as shown in Appendices \[sec:proof\] and \[sec:proof2\], if the noise on the data is Gaussian (and the model is linear) both the distributions in this figure are also Gaussian. []{data-label="Fig:distofa1"}](distofa1.eps){width="7.5cm"} Unfortunately, we can’t determine the error bar this way because we have only one actual data set, which we denote here by $y_i^{(0)}$ to distinguish it from other data sets that we will introduce. Our actual data set gives one set of fit parameters, which we call $\vec{a}^{(0)}$. Suppose, however, we were to generate many *simulated* data sets from the one which is available to us, by generating random values (possibly with a Gaussian distribution though this won’t be necessary yet) centered at the $y_i$ with standard deviation $\sigma_i$.
Fitting each simulated dataset would give different values for $\vec{a}$, *clustered now about* $\vec{a}^{(0)}$, see the bottom part of Fig. . We now come to an important, but rarely discussed, point: > We assume that the standard deviation of the fit parameters of these simulated data sets about $\vec{a}^{(0)}$, which we will be able to calculate from the single set of data available to us, is equal to the standard deviation of the fit parameters of real data sets $\vec{a}$ about $\vec{a}^\text{true}$. The latter is what we *really* want to know (since it is our estimate of the error bar on $\vec{a}^\text{true}$) but can’t determine directly. See Fig. \[Fig:distofa1\] for an illustration. In fact we show in the text below that this assumption is correct for a linear model (and for a non-linear model if the range of parameter values is small enough that it can be represented by an effective linear model). Even if the model is non linear, one usually assumes that the two standard deviations are sufficiently close that the difference is not important. Furthermore, we show in Appendices \[sec:proof\] and \[sec:proof2\] that if the noise on the data is Gaussian (and the model is linear), the two distributions in Fig.  are also both Gaussian. Hence, as stated above, to derive the error bars in the fit parameters we take simulated values of the data points, $y_i^S$, which vary by some amount $\delta y_i^S$ about $y_i^{(0)}$, i.e. $\delta y_i^S = y_i^S - y_i^{(0)}$, with a standard deviation given by the error bar $\sigma_i$. 
The fit parameters of this simulated data set, $\vec{a}^S$, then deviate from $\vec{a}^{(0)}$ by an amount $\delta \vec{a}^S$ where $$\delta a_\alpha^S = \sum_{i=1}^N {\partial a_\alpha \over \partial y_i}\, \delta y_i^S\, .$$ Averaging over fluctuations in the $y_i^S$ we get the variance of $a_\alpha^S$ to be $$\left(\sigma_\alpha^S\right)^2 \equiv \langle \left(\delta a_\alpha^S\right)^2 \rangle = \sum_{i=1}^N \sigma_i^2 \, \left( {\partial a_\alpha \over \partial y_i} \right)^2 \, , \label{sigma_alpha}$$ since $\langle \left(\delta y_i^S\right)^2 \rangle = \sigma_i^2$, and the data points $y_i$ are statistically independent. Writing Eq.  explicitly in terms of the data values, $$a_\alpha = \sum_\beta \left(U^{-1}\right)_{\alpha\beta} \sum_{i=1}^N { y_i\, x_i^\beta \over \sigma_i^2 \, } \, ,$$ and noting that $U$ is independent of the $y_i$, we get $${\partial a_\alpha \over \partial y_i} = \sum_\beta \left(U^{-1}\right)_{\alpha\beta} {x_i^\beta \over \sigma_i^2} \, .$$ Substituting into Eq.  gives $$\left(\sigma_\alpha^S \right)^2 = \sum_{\beta, \gamma} \left(U^{-1}\right)_{\alpha\beta} \left(U^{-1}\right)_{\alpha\gamma} \left[ \sum_{i=1}^N {x_i^{\beta + \gamma} \over \sigma_i^2} \right] \, .$$ The term in rectangular brackets is just $U_{\beta\gamma}$, and so, noting that $U$ is given by Eq.  and is symmetric, the last equation reduces to $$\left(\sigma_\alpha^S \right)^2 = \left(U^{-1}\right)_{\alpha\alpha} \, . \label{error_params_s}$$ Recall that $\sigma_\alpha^S$ is the standard deviation of the fitted parameter values about $\vec{a}^{(0)}$ when constructing simulated data sets from the one set of data that is available to us. However, the error bar is defined to be the standard deviation that the fitted parameter values would have relative to $a_\alpha^\text{true}$ if we could average over many actual data sets.
To determine this quantity we simply repeat the above calculation with $\delta y_i = y_i - y_i^\text{true}$ in which $y_i$ is the value of the $i$-th data point in one of the actual data sets. The result is identical to Eq. , namely $$\boxed{ \sigma_\alpha^2 = \left(U^{-1}\right)_{\alpha\alpha} \, ,} \label{error_params}$$ in which $U$ is the *same* in Eq.  as in Eq.  because $U$ is a constant, for a linear model, independent of the $y_i$ or the fit parameters $a_\alpha$. Hence $\sigma_\alpha$ in Eq.  is the error bar in $a_\alpha$. In addition to error bars, we also need a parameter to describe the quality of the fit. A useful quantity is the probability that, given the fit, the data could have occurred with a $\chi^2$ greater than or equal to the value found. This is generally denoted by $Q$ and is given by Eq.  assuming the data have Gaussian noise. Note that the effect of *non-Gaussian* statistics is to increase the probability of outliers, so fits with a fairly small value of $Q$, say around $0.01$, may be considered acceptable. However, fits with a *very* small value of $Q$ should not be trusted and the values of the fit parameters are probably meaningless in these cases. ![An example of a straight-line fit to a set of data with error bars.[]{data-label="fig:slinefit"}](fitdata3.eps){width="10cm"} For the case of a straight line fit, the inverse of $U$ is given explicitly in Eq. (\[Uinv\]). Using this information, and the values of $(x_i, y_i, \sigma_i)$ for the data in Fig. \[fig:slinefit\], the fit parameters (assuming a straight line fit) are $$\begin{aligned} a_0 &= 0.84 \pm 0.32 , \\ a_1 &= 2.05 \pm 0.11 ,\end{aligned}$$ in which the error bars on the fit parameters $a_0$ and $a_1$, which are denoted by $\sigma_0$ and $\sigma_1$, are determined from Eq. (\[error\_params\]). The data was generated by starting with $y = 1 + 2x$ and then adding some noise with zero mean.
Hence the fit should be consistent with $y = 1 +2x$ within the error bars, and it is. The value of $\chi^2$ is 7.44 so $\chi^2/N_\text{DOF} = 7.44 / 9 = 0.866$ and the quality of fit parameter, given by Eq. , is $Q = 0.592$ which is good. We call $U^{-1}$ the “*covariance matrix*”. Its off-diagonal elements are also useful since they contain information about correlations between the fitted parameters. More precisely, one can show, following the lines of the above derivation of $\sigma_\alpha^2$, that the correlation of fit parameters $\alpha$ and $\beta$, known mathematically as their “covariance”, is given by the appropriate off-diagonal element of the covariance matrix, $$\text{Cov}(\alpha, \beta) \equiv \langle \delta a_\alpha \, \delta a_\beta \rangle = \left(U^{-1}\right)_{\alpha\beta} \, . \label{Covab}$$ The correlation coefficient, $r_{\alpha\beta}$, which is a dimensionless measure of the correlation between $\delta a_\alpha$ and $\delta a_\beta$ lying between $-1$ and 1, is given by $$r_{\alpha\beta} = {\text{Cov}(\alpha, \beta) \over \sigma_\alpha \sigma_\beta} \, . \label{rab}$$ A good fitting program should output the correlation coefficients as well as the fit parameters, their error bars, the value of $\chi^2/N_\text{DOF}$, and the goodness of fit parameter $Q$. For a linear model, $\chi^2$ is a quadratic function of the fit parameters and so the elements of the “*curvature matrix*”[^4], $(1/2)\, \partial^2 \chi^2 / \partial {a_\alpha}\partial {a_\beta}$ are constants, independent of the values of the fit parameters. In fact, we see from Eqs.  and that $${1\over 2}\, { \partial^2 \chi^2 \over \partial {a_\alpha} \partial {a_\beta}} = U_{\alpha \beta} \, , \label{curv}$$ so *the curvature matrix is equal to $U$*, given by Eq.  for a polynomial fit. 
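The error bars, correlation coefficient, and goodness of fit can be extracted from the curvature matrix $U$ as follows. This is a sketch under our assumptions: the function name `fit_diagnostics` is ours, and SciPy is assumed, whose `gammaincc` is the regularized upper incomplete gamma function giving $Q = Q(N_\text{DOF}/2, \chi^2/2)$.

```python
import numpy as np
from scipy.special import gammaincc  # regularized upper incomplete gamma

def fit_diagnostics(U, chi2, N, M):
    """Error bars, correlation coefficient, and goodness of fit for a
    linear least-squares fit with curvature matrix U, given the minimum
    chi^2, N data points, and M fit parameters."""
    C = np.linalg.inv(U)                      # covariance matrix
    sigma = np.sqrt(np.diag(C))               # error bars on the parameters
    r = C[0, 1] / (sigma[0] * sigma[1])       # correlation coefficient r_01
    Q = gammaincc(0.5 * (N - M), 0.5 * chi2)  # P(chi^2 >= value found)
    return sigma, r, Q
```

For the straight-line example above ($\chi^2 = 7.44$ with $N_\text{DOF} = 9$) the last line reproduces $Q \approx 0.59$.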
If we fit to a *general* linear model, writing $$f(x) = \sum_{\alpha=1}^M a_\alpha \, X_\alpha(x) , \label{general_lin}$$ where $X_1(x), X_2(x), \cdots, X_M(x)$ are fixed functions of $x$ called basis functions, the curvature matrix is given by $$\boxed{ U_{\alpha\beta} = \sum_{i=1}^N {X_\alpha(x_i)\, X_\beta(x_i) \over \sigma_i^2} \, .} \label{Uab_general}$$ Similarly, the quantities $v_\alpha$ in Eq.  become $$\boxed{ v_\alpha = \sum_{i=1}^N {y_i\, X_\alpha(x_i) \over \sigma_i^2} \, ,} \label{v_general}$$ for a general set of basis functions, and the best fit parameters are given by the solution of the $M$ linear equations $$\boxed{ \sum_{\beta=1}^M U_{\alpha\beta}\, a_\beta = v_\alpha \, , } \label{lin_eq}$$ for $\alpha= 1, 2, \cdots, M$. Note that for a linear model the curvature matrix $U$ is a constant, independent of the fit parameters. However, $U$ is not constant for a non-linear model. Fitting to a non-linear model {#sec:nlmodel} ----------------------------- As for linear models, one minimizes $\chi^2$ in Eq. . The difference is that the resulting equations are non-linear so there might be many solutions or none at all. Techniques for solving the coupled non-linear equations invariably require specifying an initial value for the variables $a_\alpha$. The most common method for fitting to non-linear models is the Levenberg-Marquardt (LM) method, see e.g. Numerical Recipes [@press:92]. Implementing the Numerical Recipes code for LM is a little complicated because it requires the user to provide a routine for the derivatives of $\chi^2$ with respect to the fit parameters as well as for $\chi^2$ itself, and to check for convergence. Alternatively, one can use the fitting routines in the `scipy` package of `python` or use `gnuplot`. But see the comments in Appendix \[sec:ase\] about getting the error bars in the parameters correct. This applies when fitting to linear as well as non-linear models.
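As an illustration of a non-linear fit, one can fit the model $y = a_0 x^{a_1} + a_2$ mentioned earlier using `scipy.optimize.curve_fit`. This is a sketch, not code from these notes: the synthetic data, true parameters, and random seed are our own choices. Passing `absolute_sigma=True` tells `curve_fit` to treat the supplied $\sigma_i$ as true error bars rather than rescaling them, which is relevant to the caveat about error bars mentioned above.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a0, a1, a2):
    """The non-linear model y = a0 * x**a1 + a2 from the text."""
    return a0 * x**a1 + a2

# hypothetical synthetic data: true parameters (2.0, 1.5, 0.5), noise 0.1
rng = np.random.default_rng(1)
x = np.linspace(1.0, 5.0, 20)
sigma = 0.1 * np.ones_like(x)
y = model(x, 2.0, 1.5, 0.5) + rng.normal(0.0, sigma)

# LM-type fit starting from an initial guess p0
popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0, 0.0],
                       sigma=sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))  # error bars from the covariance matrix
```

Note that the initial guess `p0` matters here in a way it does not for a linear model: a poor starting point can land the routine in a different local minimum of $\chi^2$, or fail to converge at all.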
Gnuplot and scipy scripts for fitting to a non-linear model are given in Appendix \[sec:scripts\]. One difference from fitting to a linear model is that the curvature matrix, defined by the LHS of Eq. , is not constant but is a function of the fit parameters. Hence it is no longer true that the standard deviations of the two distributions in Fig. \[Fig:distofa1\] are equal. However, it is still generally assumed that the difference is small enough to be unimportant and hence that the covariance matrix, which is now defined to be the inverse of the curvature matrix *at the minimum of $\chi^2$*, still gives information about error bars on the fit parameters. This is discussed more in the next two subsections, in which we point out, however, that a more detailed analysis is needed if the model is non-linear and the spread of fitted parameters is sufficiently large that it cannot be represented by an effective linear model, i.e. $\chi^2$ is not well fitted by a parabola over the needed range of parameter values. As a reminder: - The *curvature matrix* is defined in general by the LHS of Eq. , which, for a linear model, is equivalent to Eq.  (Eq.  for a polynomial fit.) - The *covariance matrix* is the inverse of the curvature matrix at the minimum of $\chi^2$ (the last remark being only needed for a non-linear model). Its diagonal elements give error bars on the fit parameters according to Eq.  (but see the caveat in the previous paragraph for non-linear models) and its off-diagonal elements give correlations between fit parameters according to Eqs.  and . Confidence limits {#sec:conf_limits} ----------------- In the last two subsections we showed that the diagonal elements of the covariance matrix give an error bar on the fit parameters. In this section we extend the notion of error bar to embrace the concept of a “confidence limit”.
There is a theorem [@press:92] which states that, for a linear model, if we take simulated data sets assuming Gaussian noise in the data about the actual data points, and compute the fit parameters $\vec{a}^{S(i)}, i = 1, 2, \cdots$ for each data set, then the probability distribution of the $\vec{a}^S$ is given by the multi-variable Gaussian distribution $$\boxed{ P(\vec{a}^S) \propto \exp\left(-{1 \over 2} \, \sum_{\alpha, \beta} \delta a_\alpha^S\, U_{\alpha\beta}\, \delta a_\beta^S \right) \, ,} \label{theorem}$$ where $\delta \vec{a}^S \equiv \vec{a}^{S(i)} - \vec{a}^{(0)}$ and $U$, given by Eq. , is the curvature matrix which can also be defined in terms of the second derivative of $\chi^2$ according to Eq. . A proof of this result is given in Appendix \[sec:proof\]. It applies for a linear model with Gaussian noise, and also for a non-linear model if the uncertainties in the parameters do not extend outside a region where an effective linear model could be used. (In the latter case one still needs a non-linear routine to *find* the best parameters). Note that for a non-linear model, $U$ is not a constant and is the curvature *at the minimum* of $\chi^2$. From Eq.  the change in $\chi^2$ as the parameters are varied away from the minimum is given by $$\Delta \chi^2 \equiv \chi^2(\vec{a}^{S(i)}) - \chi^2(\vec{a}^{(0)}) = \sum_{\alpha, \beta} \delta a_\alpha^S\, U_{\alpha\beta}\, \delta a_\beta^S \, , \label{Dchisq}$$ in which the $\chi^2$ are all evaluated from the single (actual) data set $y_i^{(0)}$. Equation can therefore be written as $$P(\vec{a}^S) \propto \exp\left(-{1 \over 2} \Delta \chi^2 \right) \, . \label{P_dalpha}$$ We remind the reader that we have assumed the noise in the data is Gaussian and that either the model is linear or, if non-linear, the uncertainties in the parameters do not extend outside a region where an effective linear model could be used. 
Hence the probability of a particular deviation, $\delta \vec{a}^S$, of the fit parameters in a simulated data set away from the parameters in the *actual* data set, depends on how much this change increases $\chi^2$ (evaluated from the actual data set) away from the minimum. In general a “confidence limit” is the range of fit parameter values such that $\Delta \chi^2$ is less than some specified value. The simplest case, and the only one we discuss here, is the variation of *one* variable at a time, though multi-variate confidence limits can also be defined, see Numerical Recipes [@press:92]. We therefore consider the change in $\chi^2$ when one variable, $a_1^S$ say, is held at a specified value, and all the others $(\beta = 2, 3,\cdots, M)$ are varied in order to minimize $\chi^2$. Minimizing $\Delta \chi^2$ in Eq.  with respect to $a_\beta^S$ gives $$\sum_{\gamma=1}^M U_{\beta\gamma}\, \delta a_\gamma^S = 0 , \qquad (\beta = 2, 3, \cdots,M) \, .$$ The corresponding sum for $\beta = 1$, namely $\sum_{\gamma=1}^M U_{1\gamma}\, \delta a_\gamma^S$, is not zero because $\delta a_1$ is fixed. It will be some number, $c$ say. Hence we can write $$\sum_{\gamma=1}^M U_{\alpha\gamma}\, \delta a_\gamma^S = c_\alpha, \qquad (\alpha = 1, 2, \cdots,M) \, ,$$ where $c_1 = c$ and $c_\beta = 0\, (\beta \ne 1)$. The solution is $$\delta a_\alpha^S = \sum_{\beta=1}^M \left(U^{-1}\right)_{\alpha\beta} c_\beta \, . \label{aalpha}$$ For $\alpha = 1$ this gives $$c = \delta a_1^S / \left(U^{-1}\right)_{11} \, . \label{c}$$ Substituting Eq.  into Eq. , and using Eq.  we find that $\Delta \chi^2 $ is related to $\left(\delta a_1^S\right)^2$ by $$\Delta \chi^2 = {(\delta a_1^S)^2 \over \left(U^{-1}\right)_{11} } . \label{Dchi2}$$ (Curiously, the coefficient of $(\delta a_1)^2$ is one over the $11$ element of the inverse of $U$, rather than $U_{11}$ which is how it appears in Eq.  in which the $\beta \ne 1$ parameters are free rather than adjusted to minimize $\chi^2$.) From Eq.  
we finally get $$P(a_1^S) \propto \exp\left(-{1 \over 2} \, {(\delta a_1^S)^2 \over\sigma_1^2}\right) \, , \label{Pa1S}$$ where $$\sigma_1^2 = \left(U^{-1}\right)_{11} \, .$$ As shown in Appendices \[sec:proof\] and \[sec:proof2\], Eqs. ,   and also apply, under the same conditions (linear model and Gaussian noise on the data) to the probability for $\delta a_1 \equiv a_1^\text{true} - a_1^{(0)} $, where we remind the reader that $a_1^{(0)}$ is the fit parameter obtained from the actual data, and $a_1^\text{true}$ is the exact value. In other words the probability of the true value is given by $$\boxed{ P(\vec{a}^\text{true}) \propto \exp\left(-{1 \over 2} \Delta \chi^2 \right) \, ,} \label{P_dalphatrue}$$ where $$\Delta \chi^2 \equiv \chi^2(\vec{a}^\text{true}) - \chi^2(\vec{a}^{(0)}) \, ,$$ in which we remind the reader that both values of $\chi^2$ are evaluated from the single set of data available to us, $y_i^{(0)}$. Projecting onto a single parameter, as above, gives $$\boxed{ P(a_1^\text{true}) \propto \exp\left(-{1\over 2}\, {(\delta a_1)^2 \over \sigma_1^2}\right) \, , } \label{Pa1}$$ so $\langle \left(\delta a_1\right)^2 \rangle = \sigma_1^2 = \left(U^{-1}\right)_{11}$, in agreement with what we found earlier in Eq. . We emphasize that Eqs.  and  assume Gaussian noise on the data points, and either that the model is linear or, if non-linear, that the range of uncertainty in the parameters is small enough that a description in terms of an effective linear model is satisfactory. However we have done more than recover our earlier result, Eq. , by more complicated means, since we have gained *additional* information. From the properties of a Gaussian distribution we now know that, from Eq. , the probability that $a_\alpha$ lies within one standard deviation $\sigma_\alpha$ of the value which minimizes $\chi^2$ is 68%, the probability of its being within two standard deviations is 95.5%, and so on. Furthermore, from Eq. 
, we see that > *if a single fit parameter is one standard deviation away from its value at the minimum of $\chi^2$ (the other fit parameters being varied to minimize $\chi^2$), then $\Delta \chi^2 = 1$.* This last sentence, and the corresponding equations Eqs.  and , are not valid for a non-linear model if the uncertainties of the parameters extend outside the range where an effective linear model can be used. In this situation, to get confidence limits, it is necessary to do a bootstrap resampling of the data, as discussed in the next subsection. ![[**Left:**]{} The change in $\chi^2$ as a fit parameter $a_1$ is varied away from the value that minimizes $\chi^2$ for a *linear* model. The shape is a parabola for which $\Delta \chi^2=1$ when $\delta a = \pm \sigma_1$, where $\sigma_1$ is the error bar.\ [**Right:**]{} The solid curve is a sketch of the change in $\chi^2$ for a *non-linear* model. The curve is no longer a parabola and is even non-symmetric. The dashed curve is a parabola which fits the solid curve at the minimum. The fitting program only has information about the *local* behavior at the minimum and so gives an error range $\pm \sigma_1$ where the value of the parabola is $1$. However, the parameter $a_1$ is clearly more tightly constrained on the plus side than on the minus side, and a better way to determine the error range is to look *globally* and locate the values of $\delta a_1$ where $\Delta \chi^2 = 1$. This gives an error bar $\sigma_1^+$ on the plus side, and a different error bar, $\sigma_1^-$, on the minus side, both of which are different from $\sigma_1$.[]{data-label="fig:chi2"}](chi2_linear.eps "fig:"){width="8cm"} ![[**Left:**]{} The change in $\chi^2$ as a fit parameter $a_1$ is varied away from the value that minimizes $\chi^2$ for a *linear* model. 
The shape is a parabola for which $\Delta \chi^2=1$ when $\delta a = \pm \sigma_1$, where $\sigma_1$ is the error bar.\ [**Right:**]{} The solid curve is a sketch of the change in $\chi^2$ for a *non-linear* model. The curve is no longer a parabola and is even non-symmetric. The dashed curve is a parabola which fits the solid curve at the minimum. The fitting program only has information about the *local* behavior at the minimum and so gives an error range $\pm \sigma_1$ where the value of the parabola is $1$. However, the parameter $a_1$ is clearly more tightly constrained on the plus side than on the minus side, and a better way to determine the error range is to look *globally* and locate the values of $\delta a_1$ where $\Delta \chi^2 = 1$. This gives an error bar $\sigma_1^+$ on the plus side, and a different error bar, $\sigma_1^-$, on the minus side, both of which are different from $\sigma_1$.[]{data-label="fig:chi2"}](chi2_nonlinear.eps "fig:"){width="8cm"} However, if one is not able to resample the data we argue that it is better to take the range where $\Delta \chi^2 \le 1$ as an error bar for each parameter rather than the error bar determined from the curvature of $\chi^2$ at the minimum, see Fig. \[fig:chi2\]. The left hand plot is for a linear model, for which the curve of $\Delta \chi^2$ against $\delta a_1$ is exactly a parabola, and the right hand plot is a sketch for a non-linear model, for which it is not a parabola though it has a quadratic variation about the minimum shown by the dashed curve. For the linear case, the values of $\delta a_1$ where $\Delta \chi^2 = 1$ are the *same* as the values $\pm \sigma_1$, where $\sigma_1$ is the standard error bar obtained from the *local* curvature in the vicinity of the minimum. However, for the non-linear case, the values of $\delta a_1$ where $\Delta \chi^2 = 1$ are *different* from $\pm \sigma_1$, and indeed the values on the positive and negative sides, $\sigma_1^+$ and $\sigma_1^-$, are not equal. 
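The procedure sketched in the figure, locating the values of $\delta a_1$ where $\Delta \chi^2 = 1$ on either side of the minimum, can be implemented as follows for a made-up one-parameter non-linear model (with a single parameter there are no other parameters to minimize over):

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

# Made-up one-parameter non-linear model
def f(x, a):
    return np.exp(a * x)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 10)
sigma = 0.3
y = f(x, 1.0) + rng.normal(0.0, sigma, x.size)

def chisq(a):
    return np.sum(((y - f(x, a)) / sigma) ** 2)

popt, pcov = curve_fit(f, x, y, p0=[1.0],
                       sigma=sigma * np.ones_like(x), absolute_sigma=True)
a_min, sig_curv = popt[0], np.sqrt(pcov[0, 0])  # local-curvature error bar
chisq_min = chisq(a_min)

# Locate Delta chi^2 = 1 on either side of the minimum
g = lambda a: chisq(a) - chisq_min - 1.0
sig_minus = a_min - brentq(g, a_min - 5 * sig_curv, a_min)
sig_plus = brentq(g, a_min, a_min + 5 * sig_curv) - a_min
print(sig_minus, sig_curv, sig_plus)  # asymmetric for a non-linear model
```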
For the data in Fig. \[fig:chi2\], it is clear that the value of $a_1$ is more tightly constrained on the positive side than the negative side, and so it is better to give the error bars as $+\sigma_1^+$ and $-\sigma_1^-$, obtained from the range where $\Delta \chi^2 \le 1$, rather than the symmetric range $\pm \sigma_1$. However, if possible, in these circumstances error bars and a confidence limit should actually be obtained from a bootstrap resampling of the data as discussed in the next section. Confidence limits by resampling the data {#sec:resample} ---------------------------------------- More work is involved if one wants to get error bars and a confidence interval in the case where the model is non-linear and the range of parameter uncertainty extends outside the region where an effective linear model is adequate. Even for a linear model, we cannot convert $\Delta \chi^2$ into a confidence limit with a specific probability if the noise is non-Gaussian. To proceed in these cases, one can bootstrap the individual data points as follows. Each data point $(x_i, y_i)$ has error bar $\sigma_i$, which comes from averaging over $N$ measurements, say. Generating bootstrap datasets by Monte Carlo sampling the $N$ measurements, as discussed in Sec. \[sec:boot\], the distribution of the mean of each bootstrap dataset has a standard deviation equal to the estimate of standard deviation on the mean of the actual data set, see Eq.  (replacing the factor of $\sqrt{N/(N-1)}$ by unity, which is valid since $N$ is large in practice). Hence, if we generate $N_\text{boot}$ bootstrap data sets, and fit each one, the scatter of the fitted parameter values will be a measure of the uncertainty in the values from the *actual* dataset. Forming a histogram of the values of a single parameter we can obtain a confidence interval within which 68%, say, of the bootstrap datasets lie (16% missing on either side) and interpret this range as a 68% confidence limit for the actual parameter value. 
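A minimal sketch of this procedure follows, with the common simplification (used when the underlying $N$ raw measurements behind each point are not available) of resampling the $(x_i, y_i)$ pairs themselves with replacement; the model and data are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up non-linear model and data
def f(x, a0, a1):
    return a0 * np.exp(-a1 * x)

rng = np.random.default_rng(3)
x = np.linspace(0.0, 4.0, 25)
sigma = 0.1
y = f(x, 2.0, 0.7) + rng.normal(0.0, sigma, x.size)

# Fit each bootstrap data set, formed here by resampling the points
# with replacement
n_boot = 1000
boot_a1 = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, x.size, x.size)
    popt, _ = curve_fit(f, x[idx], y[idx], p0=[2.0, 0.7])
    boot_a1[b] = popt[1]

# 68% confidence interval: 16% of the bootstrap values lie on either side
lo, hi = np.percentile(boot_a1, [16, 84])
print(lo, hi)
```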
The justification for this interpretation has been discussed in the statistics literature, see e.g. the references in Ref. [@press:92], but I’m not able to go into the details here. Note that this bootstrap approach could also be applied usefully for a *linear* model if the noise is not Gaussian. Unfortunately, use of the bootstrap procedure to get error bars in fits to non-linear models does not yet seem to be a standard procedure in the statistical physics community. Another possibility for a non-linear model, if one is confident that the noise is close to Gaussian, is to generate *simulated* data sets, assuming Gaussian noise on the $y_i$ values with standard deviation given by the error bars $\sigma_i$. Each simulated dataset is fitted and the distribution of fitted parameters is determined. This corresponds to the analytical approach in Appendix \[sec:proof\] but without the assumption that the model can be represented by an effective linear one over the needed parameter range. A tale of two probabilities. When can one rule out a fit? {#sec:lin_or_quad} --------------------------------------------------------- If the noise on the data is Gaussian, which we will assume throughout this subsection, we have, so far, considered two different probabilities. Firstly, as discussed in Appendix \[sec:Q\], the value of $\chi^2$ is typically in the range $N_\text{DOF} \pm \sqrt{2 N_\text{DOF}}$. The quality of fit parameter $Q$ is the probability that, *given the fit*, the data could have this value of $\chi^2$ or greater, and is given mathematically by Eq. . It varies from unity when $\chi^2 \ll N_\text{DOF} - \sqrt{2 N_\text{DOF}}$ to zero when $\chi^2 \gg N_\text{DOF} + \sqrt{2 N_\text{DOF}}$. We emphasize that $Q$ is the probability of the data *given the fit*. Secondly, in the context of error bars and confidence limits, we have discussed, in Eqs.  and , the probability that a fit parameter, $a_1$ say, takes a certain value relative to the optimal one. 
Equation  becomes very small when $\Delta \chi^2$ varies by much more than unity. Note that Eqs.  and  refer to the relative probability of fits given the data. ![[**Left:**]{} A straight-line fit to a data set. The value of $Q$ is reasonable. However, one notices that the data is systematically above the fit for small $x$ and for large $x$ while it is below the fit for intermediate $x$. This is unlikely to happen by random chance. This remark is made more precise in the right figure.\ [**Right:**]{} A parabolic fit to the same data set. The value of $Q$ is larger than for the straight-line fit, but the main result is that the coefficient of the quadratic term is 5 $\sigma$ away from zero, showing that the straight-line fit in the left figure is much less likely than the parabolic fit. []{data-label="fig:lin_or_quad"}](lin_or_quad.eps "fig:"){width="8cm"} ![[**Left:**]{} A straight-line fit to a data set. The value of $Q$ is reasonable. However, one notices that the data is systematically above the fit for small $x$ and for large $x$ while it is below the fit for intermediate $x$. This is unlikely to happen by random chance. This remark is made more precise in the right figure.\ [**Right:**]{} A parabolic fit to the same data set. The value of $Q$ is larger than for the straight-line fit, but the main result is that the coefficient of the quadratic term is 5 $\sigma$ away from zero, showing that the straight-line fit in the left figure is much less likely than the parabolic fit. []{data-label="fig:lin_or_quad"}](lin_or_quad2.eps "fig:"){width="8cm"} At first, it seems curious that the probability $Q$ remains significantly greater than zero if $\chi^2$ changes by an amount of order $\sqrt{N_\text{DOF}}$, whereas if a fit parameter is changed by an amount such that $\chi^2$ changes by an amount of order $\sqrt{N_\text{DOF}}$, the probability of this value becomes extremely small, of order $\exp(-\text{const.}\,\sqrt{N_\text{DOF}})$, in this limit, see Eqs. . 
While there is no mathematical inconsistency, since the two probabilities refer to different situations (one is the probability of the data given the fit and the other is the relative probability of two fits given the data), it is useful to understand this difference intuitively. We take, as an example, a problem where we want to know whether the data can be modeled by a straight line, or whether a quadratic term needs to be included as well. A set of data is shown in Fig. \[fig:lin\_or\_quad\]. Looking at the left figure one sees that the data more or less agrees with the straight-line fit. However, one also sees systematic trends: the data is too high for small $x$ and for high $x$, and too low for intermediate $x$. Chi-squared just sums up the contributions from each data point and is insensitive to any systematic trend in the deviation of the data from the fit. Hence the value of $\chi^2$, in itself, does not tell us that this data is unlikely to be represented by a straight line. It is only when we add another parameter in the fit which corresponds to those correlations, that we realize the straight-line model is relatively very unlikely. In this case, the extra parameter is the coefficient of $x^2$, and the resulting parabolic fit is shown in the right figure. The qualitative comments in the last paragraph are made more precise by the parameters of the fits. The straight-line fit gives $a_0 = 0.59 \pm 0.26, a_1 = 2.003 \pm 0.022$ with $Q = 0.124$, whereas the parabolic fit gives $a_0 = 2.04 \pm 0.40, a_1 = 1.588 \pm 0.090, a_2 = 0.0203 \pm 0.0042 $ with $Q = 0.924$. The actual parameters used to generate the data are $a_0 = 2, a_1 = 1.6, a_2 = 0.02$, and there is Gaussian noise with standard deviation equal to $0.8$. Although the quality of fit factor for the straight-line fit is reasonable, the quadratic fit strongly excludes having the fit parameter $a_2$ equal to zero, since zero is five standard deviations away from the best value. 
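The comparison can be reproduced numerically. The $x$ values below are made up (the text does not give them), so the fitted numbers differ from those quoted, but the conclusion is the same: the coefficient of the quadratic term is many standard deviations from zero, so the straight-line model is strongly disfavored.

```python
import numpy as np
from scipy.special import gammaincc

# Data generated from the quadratic model quoted in the text;
# the x values are made up for illustration
rng = np.random.default_rng(4)
x = np.arange(1.0, 31.0)
sigma = 0.8
y = 2.0 + 1.6 * x + 0.02 * x**2 + rng.normal(0.0, sigma, x.size)

def poly_fit(order):
    X = np.vstack([x**k for k in range(order + 1)])
    U = (X / sigma**2) @ X.T                   # curvature matrix
    a = np.linalg.solve(U, (X / sigma**2) @ y)
    errs = np.sqrt(np.diag(np.linalg.inv(U)))  # error bars on parameters
    chisq = np.sum(((y - a @ X) / sigma) ** 2)
    ndof = x.size - (order + 1)
    Q = gammaincc(ndof / 2.0, chisq / 2.0)     # goodness-of-fit parameter
    return a, errs, Q

a_lin, e_lin, Q_lin = poly_fit(1)
a_quad, e_quad, Q_quad = poly_fit(2)
print(Q_lin, Q_quad)
print(a_quad[2] / e_quad[2])   # significance of the quadratic term
```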
For a Gaussian distribution, the probability of a five-sigma deviation or greater is $\text{erfc}(5/\sqrt{2}) \simeq 6 \times 10^{-7}$. The difference in $\chi^2$ for the quadratic fit, between the best fit and the fit forcing $a_2 = 0$, is $(0.0203 / 0.0042)^2 \simeq 23$ according to Eqs.  and . We conclude that, in this case, the straight-line model is unlikely to be correct. The moral of this tale is that a reasonable value of $Q$ does not, in itself, ensure that you have the right model. Another model might be very much more probable. Central Limit Theorem {#sec:clt} ===================== In this appendix we give a proof of the central limit theorem. We assume a distribution that falls off sufficiently fast at $\pm \infty$ that the mean and variance are finite. This *excludes*, for example, the Lorentzian distribution: $$P_{\rm Lor} = {1 \over \pi} {1 \over 1+x^2} \, .$$ A common distribution which *does* have a finite mean and variance is the Gaussian distribution, $$P_{\rm Gauss} = {1 \over \sqrt{2 \pi}\, \sigma} \exp\left[-{(x-\mu)^2 \over 2 \sigma^2}\right] \, . \label{Gauss}$$ Using standard results for Gaussian integrals you should be able to show that the distribution is normalized and that the mean and standard deviation are $\mu$ and $\sigma$ respectively. Consider a distribution, *not necessarily Gaussian*, with a finite mean and variance. We pick $N$ random numbers $x_i$ from such a distribution and form the sum $$X = \sum_{i=1}^N x_i.$$ Note, we are assuming that all the random numbers have the *same* distribution. The determination of the distribution of $X$, which we call $P_N(X)$, uses the Fourier transform of $P(x)$, called the “characteristic function” in the context of probability theory. This is defined by $$Q(k) = \int_{-\infty}^\infty P(x) e^{i k x} \, d x \, . 
$$ Expanding out the exponential we can write $Q(k)$ in terms of the moments of $P(x)$ $$Q(k) = 1 + i k\langle x \rangle + {(i k)^2 \over 2!} \langle x^2 \rangle + {(i k)^3 \over 3!} \langle x^3 \rangle + \cdots \, .$$ It will be convenient in what follows to write $Q(k)$ as an exponential, i.e. $$\begin{aligned} Q(k) & = & \exp \left[ \ln \left(1 + i k\langle x \rangle + {(i k)^2 \over 2!} \langle x^2 \rangle + {(i k)^3 \over 3!} \langle x^3 \rangle + \cdots \right) \right] \nonumber \\ & = & \boxed{ \exp\left[ i k \mu -{k^2 \sigma^2 \over 2!} + {c_3(i k)^3 \over 3!} + {c_4 (i k)^4 \over 4!} + \cdots \right] \, ,} \label{cumulant}\end{aligned}$$ where $c_3$ involves third and lower moments, $c_4$ involves fourth and lower moments, and so on. The $c_n$ are called *cumulant* averages. For the important case of a Gaussian, the Fourier transform is obtained by “completing the square”. The result is that the Fourier transform of a Gaussian is also a Gaussian, namely, $$\boxed{ Q_{\rm Gauss}(k) = \exp\left[ i k \mu -{k^2 \sigma^2 \over 2} \right] \, ,} \label{Qgauss}$$ showing that the higher order cumulants, $c_3, c_4$, etc. in Eq. (\[cumulant\]) *all vanish* for a Gaussian. 
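The vanishing of the higher cumulants for a Gaussian can be checked numerically: `scipy.stats.kstat` computes the $k$-statistics, which are unbiased estimators of the cumulants $c_n$ (up to $n = 4$). For contrast, an exponential distribution, whose cumulants are $c_n = (n-1)!$, is also shown; the parameter values below are made up for illustration.

```python
import numpy as np
from scipy.stats import kstat

rng = np.random.default_rng(5)
n = 10**6

# k-statistics: unbiased estimators of the cumulants c_1 ... c_4
gauss = rng.normal(2.0, 1.5, n)    # c_1 = 2, c_2 = 2.25, c_3 = c_4 = 0
expo = rng.exponential(1.0, n)     # c_n = (n-1)!: 1, 1, 2, 6

print([kstat(gauss, k) for k in (1, 2, 3, 4)])
print([kstat(expo, k) for k in (1, 2, 3, 4)])
```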
The distribution $P_N(X)$ can be expressed as $$P_N(X) = \int_{-\infty}^\infty P(x_1) d x_1 \, \int_{-\infty}^\infty P(x_2) d x_2 \, \cdots \int_{-\infty}^\infty P(x_N) d x_N \, \delta (X - \sum_{i=1}^N x_i) \, .$$ We evaluate this by using the integral representation of the delta function $$\delta(x) = {1 \over 2 \pi} \int_{-\infty}^\infty e^{i k x} \, d k \, ,$$ so $$\begin{aligned} P_N(X) &= \int_{-\infty}^\infty {d k \over 2 \pi} \int_{-\infty}^\infty P(x_1) d x_1 \, \int_{-\infty}^\infty P(x_2) d x_2 \, \cdots \int_{-\infty}^\infty P(x_N) d x_N \, \exp[i k (x_1 + x_2 + \cdots x_N - X)] \\ &= \int_{-\infty}^\infty {d k \over 2 \pi} Q(k)^N e^{-i k X} \, , \label{inv_FT}\end{aligned}$$ showing that the Fourier transform of $P_N(X)$, which we call $Q_N(k)$, is given by $$\boxed{Q_N(k) = Q(k)^N \, . } \label{fourier_N}$$ Consequently $$Q_N(k) = \exp\left[ i k N\mu -{N k^2 \sigma^2 \over 2} + {N c_3(i k)^3 \over 3!} + {N c_4 (i k)^4 \over 4!} + \cdots \right] \, . \label{cumulant_N}$$ Comparing with Eq. (\[cumulant\]) we see that > the mean of the distribution of the sum of $N$ independent and identically distributed random variables (the coefficient of $i k$ in the exponential) is $N$ times the mean of the distribution of one variable, and the variance of the distribution of the sum (the coefficient of $-k^2/2!$) is $N$ times the variance of the distribution of one variable. These are general statements applicable for *any* $N$ and have already been derived in Sec. \[sec:basic\]. However, if $N$ is *large* we can now go further. The distribution $P_N(X)$ is the inverse transform of $Q_N(k)$, see Eq. , so $$P_N(X) = {1 \over 2\pi} \int_{-\infty}^\infty \exp\left[ -i k X'-{N k^2 \sigma^2 \over 2!} + N{c_3(i k)^3 \over 3!} + {N c_4 (i k)^4 \over 4!} + \cdots \right] \, d k \, , \label{invtrans}$$ where $$X' = X - N \mu \, . \label{x'}$$ Looking at the $-N k^2 / 2$ term in the exponential in Eq. 
(\[invtrans\]), we see that the integrand is significant for $ k < k^\star$, where $N \sigma^2 (k^\star)^2 = 1$, and negligibly small for $k \gg k^\star$. However, for $0 < k < k^\star$ the higher order terms in Eq. (\[invtrans\]) (i.e. those of order $k^3, k^4$ etc.) are very small since $N (k^\star)^3 \sim N^{-1/2}, N (k^\star)^4 \sim N^{-1}$ and so on. Hence the terms of higher order than $k^2$ in Eq. (\[invtrans\]) do not contribute for large $N$ and so $$\lim_{N \to \infty} P_N(X) = {1 \over 2\pi} \int_{-\infty}^\infty \exp\left[ -i k X'-{N k^2 \sigma^2 \over 2} \right] \, d k \, . \label{invtransG}$$ In other words, for large $N$ the distribution is the Fourier transform of a Gaussian, which, as we know, is also a Gaussian. Completing the square in Eq. (\[invtransG\]) gives $$\begin{aligned} \lim_{N \to \infty} P_N(X) & = & {1 \over 2\pi} \int_{-\infty}^\infty \exp\left[-{N \sigma^2 \over 2 } \left(k - {i X' \over N \sigma^2}\right)^2 \right] \, d k \ \exp\left[ -{(X')^2 \over 2 N \sigma^2} \right] \nonumber \\ & = & \boxed{ {1 \over \sqrt{2 \pi N} \, \sigma} \exp\left[ -{(X-N\mu)^2 \over 2 N \sigma^2} \right] \, ,} \label{clt}\end{aligned}$$ where, in the last line, we used Eq. (\[x’\]). This is a Gaussian with mean $N \mu$ and variance $N \sigma^2$. Equation (\[clt\]) is the *central limit theorem* in statistics. It tells us that, > for $N \to\infty$, the distribution of the sum of $N$ independent and identically distributed random variables is a *Gaussian* whose mean is $N$ times the mean, $\mu$, of the distribution of one variable, and whose variance is $N$ times the variance of the distribution of one variable, $\sigma^2$, *independent of the form of the distribution of one variable*, $P(x)$, provided only that $\mu$ and $\sigma$ are finite. The central limit theorem is of such generality that it is extremely important. It is the reason why the Gaussian distribution has such a preeminent place in the theory of statistics. 
Note that if the distribution of the individual $x_i$ is Gaussian, then the distribution of the sum of $N$ variables is *always* Gaussian, even for $N$ small. This follows from Eq.  and the fact that the Fourier transform of a Gaussian is a Gaussian. In practice, distributions that we meet in nature have a much broader tail than that of the Gaussian distribution, which falls off very fast at large $|x-\mu|/\sigma$. As a result, even if the distribution of the sum approximates a Gaussian well in the central region for only modest values of $N$, it might take a much larger value of $N$ to beat down the weight in the tail to the value of the Gaussian. Hence, even for moderate values of $N$, the probability of a deviation greater than $\sigma$ can be significantly larger than that of the Gaussian distribution, which is 32%. This caution will be important in Sec. \[sec:fit\] when we discuss the quality of fits. ![ Figure showing the approach to the central limit theorem for the distribution in Eq. , which has mean, $\mu$, equal to 0, and standard deviation, $\sigma$, equal to 1. The horizontal axis is the sum of $N$ random variables divided by $\sqrt{N}$ which, for all $N$, has zero mean and standard deviation unity. For large $N$ the distribution approaches a Gaussian. However, convergence is non-uniform, and is extremely slow in the tails. []{data-label="Fig:converge_to_clt"}](dist_long_all.eps){width="11cm"} We will illustrate the slow convergence of the distribution of the sum to a Gaussian in Fig. \[Fig:converge\_to\_clt\], in which the distribution of the individual variables $x_i$ is $$P(x) = {3 \over 2}\, {1 \over (1 + |x|)^4} \, . \label{dist_long}$$ This has mean 0 and standard deviation 1, but moments higher than the second do not exist because the integrals diverge. For large $N$ the distribution approaches a Gaussian, as expected, but convergence is very slow in the tails. 
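The distribution in Eq.  can be sampled by inverting its CDF, which for $x \ge 0$ is $F(x) = 1 - \tfrac{1}{2}(1+x)^{-3}$. The sketch below checks that $\mu = 0$ and $\sigma = 1$, and illustrates the slow convergence of the tail weight of the scaled sum toward the Gaussian value:

```python
import numpy as np

rng = np.random.default_rng(6)

def sample(size):
    # Inverse-CDF sampling for P(x) = (3/2) / (1 + |x|)^4;
    # for x >= 0 the CDF is F(x) = 1 - (1/2)(1 + x)^(-3)
    u = rng.uniform(0.0, 1.0, size)
    mag = (2.0 * (1.0 - np.maximum(u, 1.0 - u))) ** (-1.0 / 3.0) - 1.0
    return np.where(u >= 0.5, mag, -mag)

xs = sample(10**6)
print(xs.mean(), xs.std())        # ~ 0 and ~ 1

# Tail weight of the sum of N variables, scaled to unit variance:
# the bulk looks Gaussian quickly, but the tails converge slowly
for N in (1, 10, 100):
    X = sample((10**5, N)).sum(axis=1) / np.sqrt(N)
    print(N, np.mean(np.abs(X) > 3.0))   # Gaussian value ~ 0.0027
```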
The number of degrees of freedom {#sec:NDF} ================================ Consider, for simplicity, a straight line fit, so we have to determine the values of $a_0$ and $a_1$ which minimize Eq. . The $N$ terms in Eq.  are not statistically independent at the minimum because the values of $a_0$ and $a_1$, given by Eq. , depend on the data points $(x_i, y_i, \sigma_i)$. Consider the “residuals” defined by $$\epsilon_i = {y_i - a_0 - a_1 x_i \over \sigma_i} \, .$$ If the model were exact and we used the exact values of the parameters $a_0$ and $a_1$, the $\epsilon_i$ would be independent and each have a Gaussian distribution with zero mean and standard deviation unity. However, choosing the *best-fit* values of $a_0$ and $a_1$ *from the data* according to Eq.  implies that $$\begin{aligned} \sum_{i=1}^N {1\over \sigma_i}\, \epsilon_i &= 0\, ,\\ \sum_{i=1}^N {x_i \over \sigma_i}\, \epsilon_i &= 0\, ,\end{aligned}$$ which are two *linear constraints* on the $\epsilon_i$. This means that we only need to specify $N-2$ of them to know them all. In the $N$ dimensional space of the $\epsilon_i$ we have eliminated two directions, so there can be no Gaussian fluctuations along them. However the other $N-2$ dimensions are unchanged, and will have the same Gaussian fluctuations as before. Thus $\chi^2$ has the distribution of a sum of squares of $N-2$ Gaussian random variables. We can intuitively understand why there are $N-2$ degrees of freedom rather than $N$ by considering the case of $N=2$. The fit goes perfectly through the two points so one has $\chi^2=0$ exactly. This implies that there are zero degrees of freedom since, on average, each degree of freedom adds 1 to $\chi^2$. 
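This counting can be checked numerically: fitting a straight line to many simulated data sets with unit Gaussian noise, the distribution of $\chi^2$ has mean $N - 2$ and variance $2(N - 2)$. The model parameters below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
N, n_sets = 10, 20000
x = np.linspace(0.0, 1.0, N)
X = np.vstack([np.ones(N), x])     # straight-line basis, sigma_i = 1
U = X @ X.T                        # curvature matrix

chisq = np.empty(n_sets)
for s in range(n_sets):
    y = 0.5 + 2.0 * x + rng.normal(0.0, 1.0, N)  # exact model + unit noise
    a = np.linalg.solve(U, X @ y)                # best-fit a_0, a_1
    chisq[s] = np.sum((y - a @ X) ** 2)

print(chisq.mean(), chisq.var())   # ~ N - 2 = 8 and 2(N - 2) = 16
```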
Clearly this argument can be generalized to any fitting function which depends *linearly* on $M$ fitting parameters, with the result that $\chi^2$ has the distribution of a sum of squares of $N_\text{DOF} = N-M$ Gaussian random variables, in which the quantity $N_\text{DOF}$ is called the “number of degrees of freedom”. Even if the fitting function depends non-linearly on the parameters, this last result is often taken as a reasonable approximation. The chi-squared distribution and the goodness of fit parameter $\textbf{Q}$ {#sec:Q} =========================================================================== The $\chi^2$ distribution for $m$ degrees of freedom is the distribution of the sum of the squares of $m$ independent random variables, each with a Gaussian distribution with zero mean and standard deviation unity. To determine this we write the distribution of the $m$ variables $x_i$ as $$P(x_1, x_2, \cdots, x_m)\, dx_1 dx_2 \cdots dx_m = {1 \over (2 \pi)^{m/2}} \, e^{-x_1^2/2} \, e^{-x_2^2/2} \cdots e^{-x_m^2/2} \, dx_1 dx_2 \cdots dx_m \, .$$ Converting to polar coordinates, and integrating over directions, we find the distribution of the radial variable to be $$\widetilde{P}(r) \, dr = {S_m \over (2 \pi)^{m/2}} \, r^{m-1}\, e^{-r^2/2} \, dr \, , \label{Pr}$$ where $S_m$ is the surface area of a unit $m$-dimensional sphere. To determine $S_m$ we integrate Eq.  over $r$, noting that $\widetilde{P}(r) $ is normalized, which gives $$S_m = {2 \pi^{m/2} \over \Gamma(m/2)} \, , \label{Sm}$$ where $\Gamma(x)$ is the Euler gamma function defined by $$\Gamma(x) = \int_0^\infty t^{x-1} \, e^{-t}\, dt \, . \label{gamma}$$ From Eqs.  and we have $$\widetilde{P}(r) = {1 \over 2^{m/2-1} \Gamma(m/2)} \, r^{m-1} e^{-r^2/2} \, .$$ This is the distribution of $r$ but we want the distribution of $\chi^2 \equiv \sum_i x_i^2 = r^2$. To avoid confusion of notation we write $X$ for $\chi^2$, and define the $\chi^2$ distribution for $m$ variables as $P^{(m)}(X)$. 
We have $P^{(m)}(X) \, dX = \widetilde{P}(r) \, dr$ so the $\chi^2$ distribution for $m$ degrees of freedom is $$\begin{aligned} P^{(m)}(X) &= {\widetilde{P}(r) \over dX / dr} \nonumber \\ &\boxed{ = {1 \over 2^{m/2} \Gamma(m/2)} \, X^{(m/2)-1}\, e^{-X/2} \qquad (X > 0) \, .} \label{chisq-dist}\end{aligned}$$ The $\chi^2$ distribution is zero for $X < 0$. Using Eq.  and the property of the gamma function that $\Gamma(n+1) = n \Gamma(n)$ one can show that $$\begin{aligned} \int_0^\infty P^{(m)}(X)\, d X &= 1 \, , \\ \langle X \rangle \equiv \int_0^\infty X\, P^{(m)}(X)\, d X &= m \, , \label{mean} \\ \langle X^2 \rangle \equiv \int_0^\infty X^2\, P^{(m)}(X)\, d X &= m^2 + 2m \, , \quad \mbox{so }\\ \langle X^2 \rangle - \langle X \rangle^2 &= 2 m \, \label{var} .\end{aligned}$$ From Eqs.  and we see that typically $\chi^2$ lies in the range $m - \sqrt{2 m}$ to $m + \sqrt{2m}$. For large $m$ the distribution approaches a Gaussian according to the central limit theorem discussed in Appendix \[sec:clt\]. Typically one focuses on the value of $\chi^2$ per degree of freedom since this should be around unity independent of $m$. The goodness of fit parameter is the probability that the specified value of $\chi^2$, or greater, could occur by random chance. From Eq.  it is given by $$\begin{aligned} Q &= {1 \over 2^{m/2} \Gamma(m/2)} \, \int_{\chi^2}^\infty\, X^{(m/2)-1}\, e^{-X/2} \, d X\, , \\ & \boxed{ = {1 \over \Gamma(m/2)} \, \int_{\chi^2/2}^\infty\, y^{(m/2)-1}\, e^{-y} \, dy\, , } \label{Q_expression}\end{aligned}$$ which is known as an incomplete gamma function. Code to generate the incomplete gamma function is given in Numerical Recipes [@press:92]. There is also a built-in function to generate the goodness of fit parameter in the `scipy` package of `python` and in the graphics program `gnuplot`, see the scripts in Appendix \[sec:scripts\]. Note that $Q=1$ for $\chi^2 = 0$ and $Q\to 0$ for $\chi^2 \to\infty$. 
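The incomplete gamma function in Eq.  is available in `scipy` as the regularized function `scipy.special.gammaincc`, so $Q$ is a one-liner; equivalently, one can use the survival function of `scipy.stats.chi2`:

```python
import numpy as np
from scipy.special import gammaincc
from scipy.stats import chi2

def Q(chisq, m):
    # Probability of a chi^2 this large, or larger, by random chance
    return gammaincc(m / 2.0, chisq / 2.0)

m = 100
print(Q(m - np.sqrt(2.0 * m), m))   # below the typical range: Q near 1
print(Q(float(m), m))               # chi^2 = N_DOF: Q about 1/2
print(Q(m + np.sqrt(2.0 * m), m))   # above the typical range: Q small
print(chi2.sf(m, m))                # same number from scipy.stats
```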
Remember that $m$ is the number of degrees of freedom, written as $N_\text{DOF}$ elsewhere in these notes. Asymptotic standard error and how to get correct error bars from gnuplot {#sec:ase} ======================================================================== Sometimes one does not have error bars on the data. Nonetheless, one can still use $\chi^2$ fitting to get an *estimate* of those errors (assuming that they are all equal) and thereby also get an error bar on the fit parameters. The latter is called the “asymptotic standard error”. Assuming the same error bar $\sigma_\text{ass}$ for all points, we determine $\sigma_\text{ass}$ from the requirement that $\chi^2$ per degree of freedom is precisely one, i.e. its mean value according to Eq. . This gives $$1 = {\chi^2 \over N_\text{DOF}} = {1 \over N_\text{DOF}} \, \sum_{i=1}^N \left( \, {y_i - f(x_i) \over \sigma_\text{ass} } \, \right)^2 \, ,$$ or, equivalently, $$\boxed{ \sigma_\text{ass}^2 = {1 \over N_\text{DOF}} \, \sum_{i=1}^N \left(y_i - f(x_i)\right)^2 \, .} \label{sigma_ass}$$ The error bars on the fit parameters are then obtained from Eq. , with the elements of $U$ given by Eq.  in which $\sigma_i$ is replaced by $\sigma_\text{ass}$. Equivalently, one can set the $\sigma_i$ to unity in determining $U$ from Eq. , and estimate the error on the fit parameters from $$\qquad\qquad\qquad\boxed{ \sigma^2_\alpha = \left(U\right)^{-1}_{\alpha\alpha} \, \sigma^2_\text{ass}\, ,} \quad\text{(asymptotic standard error)}. \label{assterr}$$ A simple example of the use of the asymptotic standard error in a situation where we don’t know the error on the data points, is fitting to a constant, i.e. *determining the average of a set of data*, which we already discussed in detail in Sec. \[sec:averages\]. 
In this case we have $$U_{00} = N, \qquad v_0 = \sum_{i=1}^N y_i ,$$ so the only fit parameter is $$a_0 = {v_0 \over U_{00}} = {1\over N} \, \sum_{i=1}^N y_i = \overline{y} ,$$ which gives, naturally enough, the average of the data points, $\overline{y}$. The number of degrees of freedom is $N-1$, since there is one fit parameter, so $$\sigma_\text{ass}^2 = {1 \over N - 1} \, \sum_{i=1}^N \left(y_i - \overline{y}\right)^2 \, ,$$ and hence the square of the error on $a_0$ is given, from Eq. , by $$\sigma^2_0 = {1 \over U_{00}}\, \sigma^2_\text{ass} = {1 \over N ( N - 1)} \, \sum_{i=1}^N \left(y_i - \overline{y}\right)^2 \, ,$$ which is precisely the expression for the error in the mean of a set of data given in Eq. . I now mention that a popular plotting program, `gnuplot`, which also does fits, presents error bars on the fit parameters incorrectly if there are error bars on the data. Whether or not there are error bars on the points, `gnuplot` presents the “asymptotic standard error” on the fit parameters. `Gnuplot` calculates the elements of $U$ correctly from Eq.  including the error bars, but then apparently also determines an “assumed error” from an expression like Eq.  but including the error bars, i.e. $$\sigma_\text{ass}^2 = {1 \over N_\text{DOF}} \, \sum_{i=1}^N \left(\ {y_i - f(x_i) \over \sigma_i}\ \right)^2 \ = \ {\chi^2 \over N_\text{DOF}}, \qquad \text{(\texttt{gnuplot})}\, .$$ Hence `gnuplot`’s $\sigma^2_\text{ass}$ is just the chi-squared per degree of freedom. The error bar (squared) quoted by `gnuplot` is $ \left(U\right)^{-1}_{\alpha\alpha} \, \sigma^2_\text{ass}$, as in Eq. . However, this is wrong since the error bars on the data points have *already* been included in calculating the elements of $U$, so the squared error on the fit parameter $\alpha$ should be $\left(U\right)^{-1}_{\alpha\alpha}$. 
Hence,

> to get correct error bars on fit parameters from `gnuplot` when there are error bars on the points, you have to divide `gnuplot`’s asymptotic standard errors by the square root of the chi-squared per degree of freedom (which gnuplot calls `FIT_STDFIT` and, fortunately, computes correctly).

I have checked this statement by comparing with results from Numerical Recipes routines, and also, for straight-line fits, with my own implementation of the formulae. It is curious that I found no hits on this topic when googling the internet. Has no one else come across this problem? Correction of `gnuplot` error bars is implemented in the `gnuplot` scripts in Appendix \[sec:scripts\]. The need to correct `gnuplot`’s error bars applies to linear as well as non-linear models. I recently learned that error bars on fit parameters given by the routine `curve_fit` of `python` also have to be corrected in the same way. This is shown in two of the python scripts in Appendix \[sec:scripts\]. Curiously, a different python fitting routine, `leastsq`, gives the error bars correctly.

The distribution of fitted parameters determined from simulated datasets {#sec:proof}
========================================================================

In this section we derive the equation for the distribution of fitted parameters determined from simulated datasets, Eq. , assuming an arbitrary linear model, see Eq. . Projecting on to a single fitting parameter, as above, this corresponds to the lower figure in Fig. \[Fig:distofa1\]. We have *one* set of $y$-values, $y_i^{(0)}$, for which the fit parameters are $\vec{a}^{(0)}$. We then generate an *ensemble* of simulated data sets, $y_i^S$, assuming the data has Gaussian noise with standard deviation $\sigma_i$ centered on the actual data values $y_i^{(0)}$. We ask for the probability that the fit to one of the simulated data sets has parameters $\vec{a}^S$.
This probability distribution is given by $$P(\vec{a}^S) = \prod_{i=1}^N \left\{ {1 \over \sqrt{2 \pi} \sigma_i} \, \int_{-\infty}^\infty \, d y_i^S\, \exp\left[-{\left(y_i^S - y_i^{(0)}\right)^2 \over 2 \sigma_i^2}\right]\, \right\} \, \prod_{\alpha=1}^M \delta \left( \sum_\beta U_{\alpha\beta} a_\beta^S - v_\alpha^S\right) \, \det U \, , \label{dist_of_a}$$ where the factor in curly brackets is (an integral over) the probability distribution of the data points $y_i^S$, and the delta functions project out those sets of data points which have a particular set of fitted parameters, see Eq. . The factor of $\det U$ is a Jacobian to normalize the distribution. Using the integral representation of the delta function, and writing explicitly the expression for $v_\alpha$ from Eq. , one has $$\begin{aligned} P(\vec{a}^S) =& \prod_{i=1}^N \left\{ {1 \over \sqrt{2 \pi} \sigma_i} \, \int_{-\infty}^\infty \, d y_i^S\, \exp\left[-{\left(y_i^S - y_i^{(0)} \right)^2 \over 2 \sigma_i^2}\right]\, \right\} \times \qquad\qquad\qquad \\ & \ \prod_{\alpha=1}^M \left( {1 \over 2 \pi} \, \int_{-\infty}^\infty d k_\alpha \exp\left[i k_\alpha\left( \sum_\beta U_{\alpha\beta} a_\beta^S - \sum_{i=1}^N {y_i^S\, X_\alpha(x_i) \over \sigma_i^2} \right)\right] \right) \, \det U \, .\end{aligned}$$ We carry out the $y$ integrals by “completing the square”, $$\begin{aligned} P(\vec{a}^S) = \prod_{\alpha=1}^M \left( {1 \over 2 \pi} \, \int_{-\infty}^\infty d k_\alpha \right) \, \prod_{i=1}^N \left\{ {1 \over \sqrt{2 \pi} \sigma_i} \, \int_{-\infty}^\infty \, d y_i^S\, \exp\left[-{\left(y_i^S - y_i^{(0)} + i \vec{k}\cdot \vec{X}(x_i) \right)^2 \over 2 \sigma_i^2}\right] \right\} \times \\ \exp\left[ -{1 \over 2 \sigma_i^2} \, \left(\, \left(\vec{k}\cdot\vec{X}(i)\right)^2 +2 i \left(\vec{k} \cdot \vec{X}(x_i)\right) \, y_i^{(0)} \right) \right] \times \exp\left[i \sum_{\alpha,\beta} k_\alpha\, U_{\alpha\beta}\, a_\beta^S\right] \, \det U \, .\end{aligned}$$ Doing the $y^S$-integrals, the factors in 
curly brackets are equal to unity. Using Eqs.  and and the fact that the $U_{\alpha\beta}$ are independent of the $y_i^S$, we then get $$P(\vec{a}^S) = \prod_{\alpha=1}^M \left( {1 \over 2 \pi} \, \int_{-\infty}^\infty d k_\alpha \right) \, \exp\left[ -{1 \over 2} \sum_{\alpha,\beta} k_\alpha\, U_{\alpha\beta}\, k_\beta + i \sum_{\alpha,\beta} k_\alpha\, \delta v_\alpha^S \right] \, \det U \, ,$$ where $$\delta v_\beta^S \equiv v_\beta^S - v^{(0)}_\beta \, ,$$ with $$v_\alpha^{(0)} = \sum_{i=1}^N {y_i^{(0)} \, X_\alpha(x_i) \over \sigma_i^2} \, .$$ We do the $k$-integrals by working in the basis in which $U$ is diagonal. The result is $$P(\vec{a^S}) = {\left( \det U \right)^{1/2} \over (2\pi)^{m/2}} \, \exp\left[-{1 \over 2}\, \sum_{\alpha,\beta} \delta v_\alpha^S \left(U^{-1}\right)_{\alpha\beta} \delta v_\beta^S \right] \, .$$ Using Eq.  and the fact that $U$ is symmetric we get our final result $$\boxed{ P(\vec{a^S}) = {\left( \det U \right)^{1/2} \over (2\pi)^{m/2}} \, \exp\left[-{1 \over 2}\, \sum_{\alpha,\beta} \delta a_\alpha^S\, U_{\alpha\beta}\, \delta a_\beta^S \right]} \, , \label{P_of_a}$$ which is Eq. , including the normalization constant in front of the exponential. The distribution of fitted parameters from repeated sets of measurements {#sec:proof2} ======================================================================== In this section we derive the equation for the distribution of fitted parameters determined in the hypothetical situation that one has many actual data sets. Projecting on to a single fitted parameter, this corresponds to the upper figure in Fig. \[Fig:distofa1\]. The exact value of the data is $y_i^\text{true} = \vec{a}^\text{true} \cdot \vec{X}(x_i)$, and the distribution of the $y_i$ in an actual data set, which differs from $y_i^\text{true}$ because of noise, has a distribution, assumed Gaussian here, centered on $y_i^\text{true}$ with standard deviation $\sigma_i$. 
Fitting each of these real data sets, the probability distribution for the fitted parameters is given by $$P(\vec{a}) = \prod_{i=1}^N \left\{ {1 \over \sqrt{2 \pi} \sigma_i} \, \int_{-\infty}^\infty \, d y_i\, \exp\left[-{\left(y_i - \vec{a}^\text{true} \cdot \vec{X}(x_i)\right)^2 \over 2 \sigma_i^2}\right]\, \right\} \, \prod_{\alpha=1}^M \delta \left( \sum_\beta U_{\alpha\beta} a_\beta - v_\alpha\right) \, \det U \, , \label{dist_of_a2}$$ see Eq.  for an explanation of the various factors. Proceeding as in Appendix \[sec:proof\] we have $$\begin{aligned} P(\vec{a}) = \prod_{i=1}^N \left\{ {1 \over \sqrt{2 \pi} \sigma_i} \, \int_{-\infty}^\infty \, d y_i\, \exp\left[-{\left(y_i - \vec{a}^\text{true} \cdot \vec{X}(x_i)\right)^2 \over 2 \sigma_i^2}\right]\, \right\} \times \qquad\qquad\qquad \\ \prod_{\alpha=1}^M \left( {1 \over 2 \pi} \, \int_{-\infty}^\infty d k_\alpha \exp\left[i k_\alpha\left( \sum_\beta U_{\alpha\beta} a_\beta - \sum_{i=1}^N {y_i\, X_\alpha(x_i) \over \sigma_i^2} \right)\right] \right) \, \det U \, ,\end{aligned}$$ and doing the $y$- integrals by completing the square gives $$\begin{aligned} P(\vec{a})&= \prod_{\alpha=1}^M \left( {1 \over 2 \pi} \, \int_{-\infty}^\infty d k_\alpha \right) \times \\ &\exp\left[ -{1 \over 2 \sigma_i^2} \, \left(\, \left(\vec{k}\cdot\vec{X}(i)\right)^2 +2 i \left(\vec{k} \cdot \vec{X}(x_i)\right) \, \left(\vec{a}^\text{true} \cdot \vec{X}(x_i)\right) \right) \right] \times \exp\left[i \sum_{\alpha,\beta} k_\alpha\, U_{\alpha\beta}\, a_\beta\right] \, \det U \, .\end{aligned}$$ Using Eq.  we then get $$P(\vec{a}) = \prod_{\alpha=1}^M \left( {1 \over 2 \pi} \, \int_{-\infty}^\infty d k_\alpha \right) \, \exp\left[ -{1 \over 2} \sum_{\alpha,\beta} k_\alpha\, U_{\alpha\beta}\, k_\beta + i \sum_{\alpha,\beta} k_\alpha\, U_{\alpha\beta}\, \delta a_\beta\right] \, \det U \, ,$$ where $$\delta a_\beta \equiv a_\beta - a^\text{true}_\beta \, .$$ The $k$-integrals are done by working in the basis in which $U$ is diagonal. 
The result is $$\boxed{ P(\vec{a}) = {\left( \det U \right)^{1/2} \over (2\pi)^{m/2}} \, \exp\left[-{1 \over 2}\, \sum_{\alpha,\beta} \delta a_\alpha\, U_{\alpha\beta}\, \delta a_\beta \right]} \, . \label{P_of_a2}$$ In other words, the distribution of the fitted parameters obtained from many sets of actual data, about the *true* value $\vec{a}^\text{true}$, is a Gaussian. Since we are assuming a linear model, the matrix of coefficients $U_{\alpha\beta}$ is a constant, and so the distribution in Eq.  is the *same* as in Eq. . Hence

> For a linear model with Gaussian noise, the distribution of fitted parameters, obtained from simulated data sets, relative to the *value from the one actual data set*, is the same as the distribution of parameters from many actual data sets relative to *the true value*, see Fig. \[Fig:distofa1\].

This result is also valid for a non-linear model if the range of parameter values needed is sufficiently small that the model can be represented by an effective linear one. It is usually assumed to be a reasonable approximation even if this condition is not fulfilled.

Scripts for some data analysis and fitting tasks {#sec:scripts}
================================================

In this appendix I give sample scripts using perl, python and gnuplot for some basic data analysis and fitting tasks. I include output from the scripts when acting on certain datasets which are available on the web. Note that “`this_file_name`” refers to the name of the script being displayed (whatever you choose to call it).

Scripts for a jackknife analysis
--------------------------------

The script reads in values of $x$ on successive lines of the input file and computes $\langle x^4\rangle / \langle x^2\rangle^2$, including an error bar computed using the jackknife method.
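For orientation, the jackknife estimator implemented in the perl and python scripts below can also be written compactly in modern Python 3. This condensed version is mine, not part of the original scripts; the function name and the sample data are for illustration only:

```python
import math

def jackknife_g(xs):
    """Jackknife estimate and error bar for g = <x^4> / <x^2>^2."""
    n = len(xs)
    x2 = [x * x for x in xs]
    x4 = [v * v for v in x2]
    x2_tot, x4_tot = sum(x2), sum(x4)
    # i-th jackknife estimate: averages computed with the i-th point left out
    g_jack = [((x4_tot - x4[i]) / (n - 1)) / ((x2_tot - x2[i]) / (n - 1)) ** 2
              for i in range(n)]
    g_av = sum(g_jack) / n
    # jackknife error bar: sqrt((n-1) * variance of the jackknife estimates)
    g_err = math.sqrt((n - 1) * abs(sum(g * g for g in g_jack) / n - g_av ** 2))
    return g_av, g_err
```

For constant data, e.g. `jackknife_g([2.0, 2.0, 2.0])`, it returns `(1.0, 0.0)`, as it should.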
### Perl

```perl
#!/usr/bin/perl
#
# Usage: "this_file_name data_file"
# (make the script executable; otherwise you have to preface the command with "perl")
#
$n = 0; $x2_tot = 0; $x4_tot = 0;
#
# read in the data
#
while(<>)    # Note this very convenient perl command which reads each line of
             # each input file in the command line
{
    @line = split;
    $x2[$n] = $line[0]**2;
    $x4[$n] = $x2[$n]**2;
    $x2_tot += $x2[$n];
    $x4_tot += $x4[$n];
    $n++;
}
#
# Do the jackknife estimates
#
for ($i = 0; $i < $n; $i++)
{
    $x2_jack[$i] = ($x2_tot - $x2[$i]) / ($n - 1);
    $x4_jack[$i] = ($x4_tot - $x4[$i]) / ($n - 1);
}
$x2_av = $x2_tot / $n;    # Do the overall averages
$x4_av = $x4_tot / $n;
$g_av = $x4_av / $x2_av**2;
$g_jack_av = 0; $g_jack_err = 0;    # Do the final jackknife estimate
for ($i = 0; $i < $n; $i++)
{
    $dg = $x4_jack[$i] / $x2_jack[$i]**2;
    $g_jack_av += $dg;
    $g_jack_err += $dg**2;
}
$g_jack_av /= $n;
$g_jack_err /= $n;
$g_jack_err = sqrt(($n - 1) * abs($g_jack_err - $g_jack_av**2));
printf " Overall average is %8.4f\n", $g_av;
printf " Jackknife average is %8.4f +/- %6.4f \n", $g_jack_av, $g_jack_err;
```

Executing this file on the data in `http://physics.ucsc.edu/~peter/bad-honnef/data.HW2` gives

```
Overall average is 1.8215
Jackknife average is 1.8215 +/- 0.0368
```

### Python

```python
#
# Program written by Matt Wittmann
#
# Usage: "python this_file_name data_file"
#
import fileinput
from math import *

x2 = []; x2_tot = 0.
x4 = []; x4_tot = 0.
for line in fileinput.input():    # read in each line in each input file.
                                  # similar to perl's while(<>)
    line = line.split()
    x2_i = float(line[0])**2
    x4_i = x2_i**2
    x2.append(x2_i)               # put x2_i as the i-th element in an array x2
    x4.append(x4_i)
    x2_tot += x2_i
    x4_tot += x4_i
n = len(x2)                       # the number of lines read in
#
# Do the jackknife estimates
#
x2_jack = []
x4_jack = []
for i in xrange(n):
    x2_jack.append((x2_tot - x2[i]) / (n - 1))
    x4_jack.append((x4_tot - x4[i]) / (n - 1))
x2_av = x2_tot / n                # do the overall averages
x4_av = x4_tot / n
g_av = x4_av / x2_av**2
g_jack_av = 0.; g_jack_err = 0.
for i in xrange(n):               # do the final jackknife averages
    dg = x4_jack[i] / x2_jack[i]**2
    g_jack_av += dg
    g_jack_err += dg**2
g_jack_av /= n
g_jack_err /= n
g_jack_err = sqrt((n - 1) * abs(g_jack_err - g_jack_av**2))
print " Overall average is %8.4f" % g_av
print " Jackknife average is %8.4f +/- %6.4f" % (g_jack_av, g_jack_err)
```

The output is the same as for the perl script.

Scripts for a straight-line fit
-------------------------------

### Perl, writing out the formulae by hand

```perl
#!/usr/bin/perl
#
# Usage: "this_file_name data_file"
# (make the script executable; otherwise preface the command with "perl")
#
# Does a straight line fit to data in "data_file" each line of which contains
# data for one point, x_i, y_i, sigma_i
#
$n = 0;
while(<>)    # read in the lines of data
{
    @line = split;    # split the line to get x_i, y_i, sigma_i
    $x[$n] = $line[0];
    $y[$n] = $line[1];
    $err[$n] = $line[2];
    $err2 = $err[$n]**2;    # compute the necessary sums over the data
    $s += 1 / $err2;
    $sumx += $x[$n] / $err2 ;
    $sumy += $y[$n] / $err2 ;
    $sumxx += $x[$n]*$x[$n] / $err2 ;
    $sumxy += $x[$n]*$y[$n] / $err2 ;
    $n++;
}
$delta = $s * $sumxx - $sumx * $sumx ;    # compute the slope and intercept
$c = ($sumy * $sumxx - $sumx * $sumxy) / $delta ;
$m = ($s * $sumxy - $sumx * $sumy) / $delta ;
$errm = sqrt($s / $delta) ;
$errc = sqrt($sumxx / $delta) ;
printf ("slope = %10.4f +/- %7.4f \n", $m, $errm);    # print the results
printf ("intercept = %10.4f +/- %7.4f \n\n", $c, $errc);
$NDF = $n - 2;    # the no. of degrees of freedom is n - no. of fit params
$chisq = 0;       # compute the chi-squared
for ($i = 0; $i < $n; $i++)
{
    $chisq += (($y[$i] - $m*$x[$i] - $c)/$err[$i])**2;
}
$chisq /= $NDF;
printf ("chi squared / NDF = %7.4lf \n", $chisq);
```

Acting with this script on the data in `http://physics.ucsc.edu/~peter/bad-honnef/data.HW3` gives

```
slope = 5.0022 +/- 0.0024
intercept = 0.9046 +/- 0.2839
chi squared / NDF = 1.0400
```

### Python, writing out the formulae by hand

```python
#
# Program written by Matt Wittmann
#
# Usage: "python this_file_name data_file"
#
# Does a straight-line fit to data in "data_file", each line of which contains
# the data for one point, x_i, y_i, sigma_i
#
import fileinput
from math import *

x = []
y = []
err = []
s = sumx = sumy = sumxx = sumxy = 0.
for line in fileinput.input():    # read in the data, one line at a time
    line = line.split()           # split the line
    x_i = float(line[0]); x.append(x_i)
    y_i = float(line[1]); y.append(y_i)
    err_i = float(line[2]); err.append(err_i)
    err2 = err_i**2
    s += 1 / err2                 # do the necessary sums over data points
    sumx += x_i / err2
    sumy += y_i / err2
    sumxx += x_i*x_i / err2
    sumxy += x_i*y_i / err2
n = len(x)                        # n is the number of data points
delta = s * sumxx - sumx * sumx   # compute the slope and intercept
c = (sumy * sumxx - sumx * sumxy) / delta
m = (s * sumxy - sumx * sumy) / delta
errm = sqrt(s / delta)
errc = sqrt(sumxx / delta)
print "slope = %10.4f +/- %7.4f " % (m, errm)
print "intercept = %10.4f +/- %7.4f \n" % (c, errc)
NDF = n - 2                       # the number of degrees of freedom is n - 2
chisq = 0.
for i in xrange(n):               # compute chi-squared
    chisq += ((y[i] - m*x[i] - c)/err[i])**2
chisq /= NDF
print "chi squared / NDF = %7.4lf " % chisq
```

The results are identical to those from the perl script.

### Python, using a built-in routine from scipy

```python
#
# Python program written by Matt Wittmann
#
# Usage: "python this_file_name data_file"
#
# Does a straight-line fit to data in "data_file", each line of which contains
# the data for one point, x_i, y_i, sigma_i.
#
# Uses the built-in routine "curve_fit" in the scipy package. Note that this
# requires the error bars to be corrected, as with gnuplot
#
from pylab import *
from scipy.optimize import curve_fit

fname = sys.argv[1] if len(sys.argv) > 1 else 'data.txt'
x, y, err = np.loadtxt(fname, unpack=True)    # read in the data
n = len(x)
p0 = [5., 0.1]                    # initial values of parameters
f = lambda x, c, m: c + m*x       # define the function to be fitted
                                  # note python's lambda notation
p, covm = curve_fit(f, x, y, p0, err)    # do the fit
c, m = p
chisq = sum(((f(x, c, m) - y)/err)**2)   # compute the chi-squared
chisq /= n - 2                    # divide by no. of DOF
errc, errm = sqrt(diag(covm)/chisq)      # correct the error bars
print "slope = %10.4f +/- %7.4f " % (m, errm)
print "intercept = %10.4f +/- %7.4f \n" % (c, errc)
print "chi squared / NDF = %7.4lf " % chisq
```

The results are identical to those from the above scripts.

### Gnuplot

```gnuplot
#
# Gnuplot script to plot points, do a straight-line fit, and display the
# points, fit, fit parameters, error bars, chi-squared per degree of freedom,
# and goodness of fit parameter on the plot.
#
# Usage: "gnuplot this_file_name"
#
# The data is assumed to be a file "data.HW3", each line containing
# information for one point (x_i, y_i, sigma_i). The script produces a
# postscript file, called here "HW3b.eps".
#
set size 1.0, 0.6
set terminal postscript portrait enhanced font 'Helvetica,16'
set output "HW3b.eps"
set fit errorvariables            # needed to be able to print error bars
f(x) = a + b * x                  # the fitting function
fit f(x) "data.HW3" using 1:2:3 via a, b    # do the fit
set xlabel "x"
set ylabel "y"
ndf = FIT_NDF                     # Number of degrees of freedom
chisq = FIT_STDFIT**2 * ndf       # chi-squared
Q = 1 - igamma(0.5 * ndf, 0.5 * chisq)    # the quality of fit parameter Q
#
# Below note how the error bars are (a) corrected by dividing by
# FIT_STDFIT, and (b) are displayed on the plot, in addition to the fit
# parameters, neatly formatted using sprintf.
#
set label sprintf("a = %7.4f +/- %7.4f", a, a_err/FIT_STDFIT) at 100, 400
set label sprintf("b = %7.4f +/- %7.4f", b, b_err/FIT_STDFIT) at 100, 330
set label sprintf("{/Symbol c}^2 = %6.2f", chisq) at 100, 270
set label sprintf("{/Symbol c}^2/NDF = %6.4f", FIT_STDFIT**2) at 100, 200
set label sprintf("Q = %9.2e", Q) at 100, 130
#
# Plot the data and fit
#
plot "data.HW3" using 1:2:3 every 5 with errorbars notitle pt 6 lc rgb "red" lw 2, \
f(x) notitle lc rgb "blue" lw 4 lt 1
```

The plot below shows the result of acting with this gnuplot script on the data in `http://physics.ucsc.edu/~peter/bad-honnef/data.HW3`. The results agree with those of the other scripts.

![image](HW3b.eps){width="11cm"}

Scripts for a fit to a non-linear model
---------------------------------------

We read in lines of data each of which contains three entries $x_i, y_i$ and $\sigma_i$. These are fitted to the form $$y = T_c + A / x^\omega \, ,$$ to determine the best values of $T_c, A$ and $\omega$.

### Python

```python
#
# Python program written by Matt Wittmann
#
# Usage: "python this_file_name data_file"
#
# Does a fit to the non-linear model
#
#   y = Tc + A / x**w
#
# to the data in "data_file", each line of which contains the data for one
# point, x_i, y_i, sigma_i.
#
# Uses the built-in routine "curve_fit" in the scipy package. Note that this
# requires the error bars to be corrected, as with gnuplot
#
from pylab import *
from scipy.optimize import curve_fit
from scipy.stats import chi2

fname = sys.argv[1] if len(sys.argv) > 1 else 'data.txt'
x, y, err = np.loadtxt(fname, unpack=True)    # read in the data
n = len(x)                        # the number of data points
p0 = [-0.25, 0.2, 2.8]            # initial values of parameters
f = lambda x, Tc, w, A: Tc + A/x**w    # define the function to be fitted
                                       # note python's lambda notation
p, covm = curve_fit(f, x, y, p0, err)  # do the fit
Tc, w, A = p
chisq = sum(((f(x, Tc, w, A) - y)/err)**2)    # compute the chi-squared
ndf = n - len(p)                  # no. of degrees of freedom
Q = 1. - chi2.cdf(chisq, ndf)     # compute the quality of fit parameter Q
chisq = chisq / ndf               # compute chi-squared per DOF
Tcerr, werr, Aerr = sqrt(diag(covm)/chisq)    # correct the error bars
print 'Tc = %10.4f +/- %7.4f' % (Tc, Tcerr)
print 'A = %10.4f +/- %7.4f' % (A, Aerr)
print 'w = %10.4f +/- %7.4f' % (w, werr)
print 'chi squared / NDF = %7.4lf' % chisq
print 'Q = %10.4f' % Q
```

When applied to the data in `http://physics.ucsc.edu/~peter/bad-honnef/data.HW4` the output is

```
Tc = -0.2570 +/- 1.4775
A = 2.7878 +/- 0.8250
w = 0.2060 +/- 0.3508
chi squared / NDF = 0.2541
Q = 0.9073
```

### Gnuplot

```gnuplot
#
# Gnuplot script to plot points, do a fit to a non-linear model
#
#   y = Tc + A / x**w
#
# with respect to Tc, A and w, and display the points, fit, fit parameters,
# error bars, chi-squared per degree of freedom, and goodness of fit parameter
# on the plot.
#
# Here the data is assumed to be a file "data.HW4", each line containing
# information for one point (x_i, y_i, sigma_i). The script produces a
# postscript file, called here "HW4a.eps".
#
set size 1.0, 0.6
set terminal postscript portrait enhanced
set output "HW4a.eps"
set fit errorvariables            # needed to be able to print error bars
f(x) = Tc + A / x**w              # the fitting function
set xlabel "1/x^{/Symbol w}"
set ylabel "y"
set label "y = T_c + A / x^{/Symbol w}" at 0.1, 0.7
Tc = 0.3                          # need to specify initial values
A = 1
w = 0.2
fit f(x) "data.HW4" using 1:2:3 via Tc, A, w    # do the fit
set xrange [0.07:0.38]
g(x) = Tc + A * x
h(x) = 0 + 0 * x
ndf = FIT_NDF                     # Number of degrees of freedom
chisq = FIT_STDFIT**2 * ndf       # chi-squared
Q = 1 - igamma(0.5 * ndf, 0.5 * chisq)    # the quality of fit parameter Q
#
# Below note how the error bars are (a) corrected by dividing by
# FIT_STDFIT, and (b) are displayed on the plot, in addition to the fit
# parameters, neatly formatted using sprintf.
#
set label sprintf("T_c = %5.3f +/- %5.3f", Tc, Tc_err/FIT_STDFIT) at 0.25, 0.33
set label sprintf("{/Symbol w} = %5.3f +/- %5.3f", w, w_err/FIT_STDFIT) at 0.25, 0.27
set label sprintf("A = %5.2f +/- %5.2f", A, A_err/FIT_STDFIT) at 0.25, 0.21
set label sprintf("{/Symbol c}^2 = %5.2f", chisq) at 0.25, 0.15
set label sprintf("{/Symbol c}^2/NDF = %5.2f", FIT_STDFIT**2) at 0.25, 0.09
set label sprintf("Q = %5.2f", Q) at 0.25, 0.03
#
# Plot the data and the fit
#
plot "data.HW4" using (1/$1**w):2:3 with errorbars notitle lc rgb "red" lw 3 pt 8 ps 1.5, \
g(x) notitle lc rgb "blue" lw 3 lt 2 , \
h(x) notitle lt 3 lw 4
```

The plot below shows the result of acting with this gnuplot script on the data at `http://physics.ucsc.edu/~peter/bad-honnef/data.HW4`. The results agree with those of the python script above.

![image](HW4a.eps){width="11cm"}

I’m grateful to Alexander Hartmann for inviting me to give a lecture at the Bad Honnef School on “Efficient Algorithms in Computational Physics”, which provided the motivation to write up these notes, and also for a helpful comment on the need to resample the data to get error bars from fits to non-linear models.
I would also like to thank Matt Wittmann for helpful discussions about fitting and data analysis using `python` and for permission to include his python codes. I am also grateful to Wittmann and Christoph Norrenbrock for helpful comments on an earlier version of the manuscript. [^1]: The factor of $N-1$ rather than $N$ in the expression for the sample variance in Eq. (\[sigmafromdata\]) needs a couple of comments. Firstly, the final answer for the error bar on the mean, Eq.  below, will be independent of how this intermediate quantity is defined. Secondly, the $N$ terms in Eq. (\[sigmafromdata\]) are not all independent since $\overline{x}$, which is itself given by the $x_i$, is subtracted. Rather, as will be discussed more in the section on fitting, Sec. \[sec:fit\], there are really only $N-1$ independent variables (called the “number of degrees of freedom” in the fitting context) and so dividing by $N-1$ rather than $N$ has a rational basis. However, this is not essential and many authors divide by $N$ in their definition of the sample variance. [^2]: $\chi^2$ should be thought of as a single variable rather than the square of something called $\chi$. This notation is standard. [^3]: Although this result is only valid if the fitting model is linear in the parameters, it is usually taken to be a reasonable approximation for non-linear models as well. [^4]: It is conventional to include the factor of $1/2$.
Precise knowledge of the spin susceptibility $\chi({\bf q}, \omega)$ of the cuprates is essential for understanding their unusual normal state properties. The imaginary part, $\chi^{\prime \prime}({\bf q},\omega)$ can be probed either by inelastic neutron scattering (INS) [@Kei; @Bou; @Hay; @Mook97], or in the low frequency limit by NMR measurements of the spin-lattice relaxation rate $1/T_1$ [@T1]. In contrast, one knows little about the real part of the susceptibility, $\chi^\prime({\bf q})$, since information can, so far, only be extracted from the NMR observation of the Gaussian component of the transverse relaxation time, $T_{\rm 2G}$, of planar Cu [@PS91; @Curro97]. In particular, the analysis of INS and NMR experiments has not yet led to a consensus on the shape of $\chi({\bf q},\omega)$ in momentum space and the temperature ($T$) dependence of the antiferromagnetic correlation length, $\xi$. In this communication we present new insight into this issue based on experiments by Bobroff [*et al.*]{} [@BAY97]. Our principal conclusions are that $\xi$ in YBa$_2$Cu$_3$O$_{6+\delta}$ is $T$-dependent and that the Lorentzian form of $\chi^\prime(\bf q)$ provides a completely consistent description of the data, whereas the Gaussian form can be ruled out. Bobroff [*et al.*]{} [@BAY97] recently presented a novel approach to the measurement of $\chi^\prime({\bf q})$ using Ni impurities in YBa$_2$(Cu$_{1-x}$Ni$_x$)$_3$O$_{6+\delta}$. These impurities induce a spin polarization at the planar Cu sites via $\chi^\prime({\bf q})$. The hyperfine coupling between Cu and O induces a spatially varying polarization and an additional broadening $$\Delta \nu_{\rm imp} = \Delta \nu-\Delta \nu_0=\alpha f(\xi)/T \label{dnu}$$ of the planar $^{17}$O NMR, where $\Delta \nu$ and $\Delta \nu_0$ are the total and $x=0$ line width, respectively. In Eq. 
\[dnu\] $\alpha$ is the overall amplitude of $\chi^\prime({\bf q})$ and $f(\xi)$ characterizes the dependence of $\Delta \nu$ on $\xi$ ($\alpha=4\pi \chi^*$ in the notation of Refs. [@BAY97] and [@Morr97]). Finally, the factor $1/T$ is caused by the Curie behavior of the Ni impurities in YBa$_2$Cu$_3$O$_{6+\delta}$ [@Mah94; @Men94] with effective moment $p_{\rm eff}\approx 1.9 \mu_{\rm B}$ ($1.59 \mu_{\rm B}$) for $\delta=0.6$ ($\delta=1$). Bobroff [*et al.*]{} found that $T \Delta \nu(T)$ depends strongly on temperature and on the Ni concentration $x$ in the sample. Furthermore, they observed a much stronger broadening in the underdoped, $\delta=0.6$, sample than in the overdoped one with $\delta=1$. Performing numerical simulations of the NMR line shape by assuming a Gaussian form for $\chi^\prime({\bf q})$, they found that $f(\xi)$ is basically constant for all physically reasonable values of $\xi$. Combining these results with $T_{\rm 2G}$ data by Takigawa [@Tak94], they concluded that $\xi$ is $T$-independent for the underdoped samples. On the other hand, in every scenario of cuprate superconductors in which the anomalous low-energy behavior is driven by spin fluctuations, one would expect the correlation length $\xi$ to be $T$-dependent (for recent reviews see [@Pines; @Scal]). Thus their result has important implications for the mechanism of superconductivity. We recently pointed out [@Morr97] that our simulations using a Lorentzian form of $\chi^\prime({\bf q})$ yield a different result and are actually compatible with a $T$-dependent $\xi$. Before going into the details of our calculations, it is important to notice that the fact that $\xi$ must be $T$-dependent can be deduced, even without a detailed model, directly from the experimental data of Bobroff [*et al.*]{} [@BAY97] for $\Delta \nu(T)$ and of Takigawa [@Tak94] for $T_{\rm 2G}$.
To show this, we need to recognize that we can always express $T_{\rm 2G}$ as a product of $\alpha$ and a function of $\xi$, namely $$T_{\rm 2G}^{-1} = \alpha g(\xi) \ . \label{t2g}$$ We can then eliminate $\alpha$ by forming the product $$T \Delta \nu_{\rm imp} T_{\rm 2G} = { f(\xi) \over g(\xi)} \label{product}$$ which depends solely on $\xi$. In Fig. \[prod\], we plot the product $T_{\rm 2G} T \Delta \nu_{\rm imp}$ as a function of $T$ [@T1corr]. We see that this product is strongly $T$-dependent, dropping by more than a factor of 2 between $100 \, K$ and $200 \, K$. Therefore $\xi$ must have a substantial $T$-dependence. To gain more quantitative insight into the $T$-dependence of $T \Delta \nu(T)$ of Ref. [@BAY97], we must go into details. In the following we present a theoretical analysis of the $^{17}$O line shape using a method first applied by Bobroff [*et al.*]{} to simulate their experimental data. To simulate the $^{17}$O line shape numerically, we distribute Ni impurities with concentration $\frac{3}{2}x$ at random positions ${\bf r}_j$ on a two-dimensional $(100 \times 100)$ lattice [@com2]. We consider the Ni impurities as foreign atoms embedded in the pure material, which is characterized by a non-local spin susceptibility $\chi'({\bf q})$. In the following, ${\bf s}_i$ characterizes the spin dynamics of the pure material and ${\bf S}_i$ the difference at the Ni site brought about by the Ni. These Ni spins polarize the spins ${\bf s}_j$ of the itinerant strongly correlated electrons. To calculate the induced moments we need to know how the Ni impurities couple to these spins. Without discussing the microscopic origin of the effective Ni spin ${\bf S}_j$, we assume that it obeys a Curie law, and that the coupling to the spin ${\bf s}_j$ occurs via an on-site interaction described by $${ \cal H}_{int} = -J \sum_{j} {\bf s}_j \cdot {\bf S}_j \ . 
\label{Hint}$$ The coupling constant $J$ is an unknown parameter of the theory and will be estimated below. Furthermore, we will assume like Bobroff [*et al.*]{} that the Ni impurities do [*not*]{} change the magnetic correlation length or the magnitude of the spin susceptibility. For the NMR experiments we consider an external magnetic field $B_0$ along the $z$-direction. The Ni spins have a non-zero average value obeying $\langle S^z_j \rangle = C_{\rm Curie} B_0/T $ with Curie constant $C_{\rm Curie}=p_{\rm eff}/(2\sqrt{3} k_{\rm B})$ [@Mah94; @Men94]. Adopting a mean field picture, the induced polarization for the electron spins at the Cu sites ${\bf r}_i$ is given by $$\langle s^z_i \rangle = \frac{J}{(g\mu_{\rm B})^2} \sum_j \chi'({\bf r}_i-{\bf r}_j) \langle S^z_j \rangle\, . \label{Sind}$$ Here, $\chi'({\bf r})$ is the real space Fourier transform of $\chi'({\bf q})$. In the following we consider two different forms of the spin susceptibility [@mp92]. For the commensurate case, there is only one peak, whereas in the incommensurate case, one has to sum over four peaks. The Gaussian form of $\chi'({\bf q})$ is given by $$\chi_{\rm G}'({\bf q})=\alpha \xi^2 \exp\left(-({\bf q}-{\bf Q})^2 \xi^2\right) \label{chig}$$ and the Lorentzian form by $$\chi_{\rm L}'({\bf q})=\alpha \xi^2/(1+({\bf q}-{\bf Q})^2 \xi^2) \ . \label{chil}$$ Since the question whether there exist incommensurate peaks in YBa$_2$Cu$_3$O$_{6+\delta}$ has not been settled yet [@Mook97], we will consider below both cases, a commensurate wavevector ${\bf Q}=(\pm \pi, \pm \pi)$, and an incommensurate one with ${\bf Q}=\delta_{\rm i} (\pm \pi, \pm \pi)$. 
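Before transforming to real space, the difference between the two momentum-space forms can be made concrete with a quick numerical check (a sketch of ours along a one-dimensional cut through the peak; the function names are not from the paper). Both forms share the peak value $\alpha\xi^2$ at ${\bf q}={\bf Q}$, but the Lorentzian tail falls off only algebraically:

```python
import math

def chi_gauss(dq, alpha, xi):
    # Gaussian form: chi'(q) = alpha * xi^2 * exp(-(q-Q)^2 xi^2), dq = |q - Q|
    return alpha * xi**2 * math.exp(-(dq * xi) ** 2)

def chi_lorentz(dq, alpha, xi):
    # Lorentzian form: chi'(q) = alpha * xi^2 / (1 + (q-Q)^2 xi^2)
    return alpha * xi**2 / (1.0 + (dq * xi) ** 2)

alpha, xi = 1.0, 4.0
peak = alpha * xi**2
# at the peak the two forms coincide; two inverse correlation lengths away,
# the Gaussian has dropped to e^-4 (about 2%) of the peak while the
# Lorentzian still retains 1/5 of it
ratio_g = chi_gauss(2 / xi, alpha, xi) / peak      # ≈ 0.018
ratio_l = chi_lorentz(2 / xi, alpha, xi) / peak    # = 0.2
```

It is this heavy Lorentzian tail, surviving far from ${\bf Q}$, that ultimately produces the qualitatively different dependence of the line width on $\xi$ discussed below.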
The calculation of the real space Fourier transform finally yields $$\begin{aligned} \chi_{\rm G}'({\bf r}) &=&\frac{\alpha}{4\pi} F({\bf Q}) \exp \Big( - { {\bf r}^2 \over 4 \xi^2} \Big) \, ,\nonumber \\ \chi_{\rm L}'({\bf r}) &=& \frac{\alpha}{4\pi}F({\bf Q}) K_0 \Big( { r \over \xi } \Big)\, ,\end{aligned}$$ where $K_0$ is the modified Bessel function, and $ F({\bf Q}) = \cos(Q_x r_x)\cos(Q_y r_y) \ . $ Having determined the Ni-induced Cu spin polarization $\langle s^z_i \rangle$, it is straightforward to investigate the $^{17}$O NMR lineshape, determined by the coupling of the $I=\frac{5}{2}$ nuclear spins $^{17}{\bf I}_l$ to the Cu electron spins ${\bf s}_i$ with spatially varying mean value $\langle s^z_i \rangle$. The hyperfine Hamiltonian is $${\cal H}_{hf} = \hbar^2 \gamma_n \gamma_e \sum_{l,i} C_{i,l} \, {\bf s}_i\cdot ^{17}{\bf I}_l \, , \label{HMR}$$ where $\gamma_n,\gamma_e$ are the gyromagnetic ratios of the $^{17}$O nucleus and the electron, respectively. The hyperfine coupling constant $C_{i,l}$ is dominated by a nearest-neighbor hyperfine coupling $C\approx 3.3 {\rm T}/\mu_B$ [@Zha96]. However, it was recently argued that a next-nearest-neighbor hyperfine coupling $C^\prime\approx 0.25C$ is relevant for the explanation of the spin-lattice relaxation rate [@Zha96] in La$_{2-x}$Sr$_x$CuO$_4$. We will therefore also consider its effects on the $^{17}$O NMR line. Using a mean field description of this hyperfine coupling by replacing ${\bf s}_i$ by $\langle s^z_i \rangle$ of Eq. \[Sind\], we finally obtain for the shift of the resonance at a given $^{17}$O site ${\bf r}_l$ $$\begin{aligned} \nu_l &=& \frac{A}{T} \sum_{i,j} C_{i,l} \chi'({\bf r}_{i}-{\bf r}_{j})\, . \label{ox_shift}\end{aligned}$$ Here, the sum over $i$ runs over the Cu spin sites coupled to the $^{17}$O nuclear spin, whereas the $j$-summation goes over all Ni sites.
Furthermore, the constant prefactor $A$ is given by $ \frac{5}{2}\gamma_n \gamma_e J \hbar C_{Curie} B_0 /( g \mu_B)^2$. Note that $\nu_l$, as given in Eq. \[ox\_shift\], is the shift of the $^{17}$O resonance with respect to the case without Ni impurities. To obtain the $^{17}$O NMR line shape, we create a histogram $I_o(\nu)=\sum_l\delta(\nu-\nu_l)$ counting the number of nuclei with shift $\nu$. Since we want to compare the resulting distribution with the experimental data, where the line has a finite width even in the absence of impurities, we convolve $I_o(\nu)$ with a Gaussian distribution $\exp\left(-\nu^2/(2\sigma^2)\right)/\sqrt{2 \pi \sigma^2}$, yielding the lineshape $I(\nu)$. By comparison with the experiments of Ref. [@BAY97] we expect that $\Delta \nu_0=\sqrt{2 \log 2} \sigma $ should be of the order of the high-temperature (i.e. $\xi < 1$) Ni-impurity-induced linewidth. In the following calculation we therefore choose $\sigma=20$ kHz for both the Lorentzian and Gaussian $\chi'({\bf q})$. Finally, we define $\Delta \nu$ as the half width of the peak at half maximum. In Fig. \[shape\] we present the lineshape of the $^{17}$O NMR signal, calculated with the Lorentzian form $\chi_{\rm L}'({\bf q})$ for two different values of $\xi$. We clearly observe that the line becomes broader as we increase $\xi$. From a comparison of Eq.(\[ox\_shift\]) with the experimentally measured broadening we can extract the value of the interaction $J$ in Eq.(\[Hint\]). For $C'=0$ and $\xi(200{\rm K})=4$ ($\xi(200{\rm K})=3$) we obtain $J \approx 25 \ {\rm meV}$ ($43 \ {\rm meV}$). These values carry some uncertainty, but they enable us to estimate the effects of a Cu-spin-mediated Ni-Ni (RKKY-type) interaction. 
Within a self-consistent mean-field treatment of this interaction we find that the Ni-Ni coupling changes $\Delta \nu$ by only a few percent, consistent with the fact that no significant deviation from a Curie law was found in susceptibility measurements [@Mah94; @Men94]. In Fig. \[comp\] we present a comparison of $\Delta \nu(\xi)$ for the Gaussian $\chi_{\rm G}'({\bf q})$ (open diamonds) and the Lorentzian $\chi_{\rm L}'({\bf q})$ (filled squares). Here we chose a Ni concentration of $x=2\%$, $C^\prime=0$, and ${\bf Q}= \delta_{\rm i} (\pm \pi, \pm \pi)$ to be incommensurate with $\delta_{\rm i}=0.94$ [@Mook97]. We also compute $\Delta \nu $ for the commensurate case, and find that in general $\Delta \nu$ decreases. However, since the incommensurability, $1-\delta_{\rm i}$, in YBa$_2$Cu$_3$O$_{6.6}$, if present at all, is rather small, differences are negligible for $\xi < 8$. In Fig. \[comp\], we clearly see that the effect of $\chi_{\rm G}'({\bf q})$ and $\chi_{\rm L}'({\bf q})$ on the behavior of the line width is [*qualitatively*]{} different. In agreement with the results of Bobroff [*et al.*]{}, we find, using $\chi_{\rm G}'({\bf q})$, that $\Delta \nu$ is basically independent of $\xi$ for all physically reasonable values $2<\xi < 5$. The Lorentzian form $\chi_{\rm L}'({\bf q})$, however, yields a much stronger increase in $\Delta \nu$ between $\xi=2$ and $\xi=5$ than the Gaussian. This result immediately implies that a temperature-dependent $\xi$ is clearly compatible with the experimental results by Bobroff [*et al.*]{}. Furthermore, we find that the function $f(\xi)$ of Eq. \[dnu\] behaves like $f(\xi) \sim \xi^{3/2} $ in the Lorentzian case and $f(\xi) \sim const.$ in the Gaussian case. This qualitatively different behavior of $ \Delta \nu(\xi)$ for $\chi_{\rm G}'({\bf q})$ and $\chi_{\rm L}'({\bf q})$ makes this experiment extremely sensitive to details of the momentum dependence of $\chi'({\bf q})$. 
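The lineshape construction described above — a histogram of site shifts, convolved with a Gaussian, from which the half width at half maximum is extracted — can be mimicked with a toy model. In the sketch below the susceptibility is replaced by a simple staggered Gaussian $(-1)^{x+y}\exp(-r^2/4\xi^2)$ with unit prefactors, on a small lattice with randomly placed impurities; only the qualitative broadening mechanism is retained, and none of the numerical parameters are taken from the actual calculation:

```python
import math, random

def lineshape(shifts, sigma, nu_grid):
    # histogram of shifts nu_l convolved with a normalized Gaussian of width sigma
    norm = 1.0 / math.sqrt(2.0 * math.pi * sigma * sigma)
    return [sum(norm * math.exp(-(nu - s) ** 2 / (2.0 * sigma ** 2)) for s in shifts)
            for nu in nu_grid]

def hwhm(nu_grid, I):
    # half width of the peak at half maximum, scanning to the right of the maximum
    imax = max(range(len(I)), key=I.__getitem__)
    half = I[imax] / 2.0
    for j in range(imax, len(I)):
        if I[j] <= half:
            return nu_grid[j] - nu_grid[imax]
    raise ValueError("grid too narrow")

# toy model: random Ni impurities, staggered Gaussian chi'(r), unit couplings
random.seed(0)
xi, L, n_ni = 4.0, 40, 12
impurities = [(random.randrange(L), random.randrange(L)) for _ in range(n_ni)]

def shift(x, y):
    return sum((-1) ** ((x - a) + (y - b))
               * math.exp(-((x - a) ** 2 + (y - b) ** 2) / (4.0 * xi ** 2))
               for a, b in impurities)

shifts = [shift(x, y) for x in range(L) for y in range(L)]

# sanity check: a single unshifted site gives HWHM = sqrt(2 ln 2) * sigma
grid = [0.002 * i - 3.0 for i in range(3001)]
I_single = lineshape([0.0], 0.5, grid)
```

For a single unshifted site the extracted half width reduces to $\sqrt{2\log 2}\,\sigma$, the quantity $\Delta\nu_0$ quoted above; the spread of the impurity-induced shifts is what broadens the line beyond this value.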
Next we discuss the $\xi$ dependence of $\Delta \nu$ for different values of the Ni concentration $x$. We present our results for a Ni concentration of $x=0.5 \%, 2 \%$ and $4 \%$ and for $C'=0.25C$ in Fig. \[conc\]. In agreement with the experimental results we find that $\Delta \nu$ for a given $\xi$ increases with $x$. We believe that the results in Figs. \[comp\] and \[conc\] also provide an explanation for the different behavior of $\Delta \nu$ in the underdoped (YBa$_2$Cu$_3$O$_{6.6}$) and overdoped (YBa$_2$Cu$_3$O$_{7}$) samples. Bobroff [*et al.*]{} found that for the overdoped sample the variation of $\Delta \nu$ with $T$ is much weaker than for the underdoped sample. As far as $\chi'({\bf q})$ is concerned, the main difference between these two regimes consists in the value of $\xi$, namely $\xi=1$–$2$ for the overdoped and $\xi=2$–$4$ for the underdoped sample. We see from Figs. \[comp\] and \[conc\] that the $\xi$ variation of $\Delta \nu$ for the overdoped sample is much weaker than for the underdoped one, in agreement with the experimental results. Finally, we can use our numerical results to investigate in more detail the consequences of the $T$-dependence of $T \Delta \nu_{\rm imp} T_{\rm 2G}$ shown in Fig. \[prod\]. Using $g(\xi)\propto \xi$ [@Curro97; @Pines] and the above results for $f(\xi)$, it follows from Eq. \[product\] for the Gaussian case ($f(\xi)\sim const.$) that $T \Delta \nu_{\rm imp} T_{\rm 2G} \propto \xi^{-1}$, i.e. $\xi$ has to increase with increasing $T$. This result seems to be unphysical and thus strongly suggests that the Gaussian form $\chi_{\rm G}'({\bf q})$ is inappropriate for the description of the spin susceptibility. On the other hand, for the Lorentzian case, $ f(\xi) \sim \xi^{3/2}$ and it follows that $T \Delta \nu_{\rm imp} T_{\rm 2G} \propto \xi^{1/2}$, i.e. $\xi$ decreases as $T$ increases, as we would expect. One can also solve Eqs.(\[dnu\]) and (\[t2g\]) to obtain $\alpha$ as a function of $T$. 
However, the resulting error bars are quite large: within them the conclusion that $\alpha$ is independent of $T$ is acceptable, but a weak $T$ dependence cannot be excluded. It is important to contrast our findings with the observations of INS experiments. In YBa$_2$Cu$_3$O$_{6+\delta}$, INS observes a $T$-independent broad peak around $(\pi,\pi)$, above $T_c$, resembling a Gaussian form of $\chi^{\prime \prime}({\bf q}, \omega)$ [@Kei; @Bou]. However, strong indications for incommensurate peaks with Lorentzian-like shape in YBa$_2$Cu$_3$O$_{6.6}$ [@Mook97] suggest that the broad structure around $(\pi,\pi)$ is only a superposition of incommensurate peaks. Its width is therefore dominated by the largely $T$-independent incommensurability rather than by $\xi^{-1}$. This is consistent with the recent analysis by Pines [@Pines] that the overall magnitude of $\chi''({\bf q},\omega)$ in YBa$_2$Cu$_3$O$_{6+\delta}$, as obtained from NMR experiments, necessitates a considerable improvement of the experimental resolution of INS experiments to resolve the incommensurate peaks in the normal state. In conclusion, we deduce from the analysis of the $^{17}$O NMR data by Bobroff [*et al.*]{} and the $T_{\rm 2G}$ data by Takigawa that the correlation length $\xi$ must have a substantial temperature dependence. A detailed analysis shows that the Gaussian form $\chi_{\rm G}'({\bf q})$ of the spin susceptibility can be excluded as an appropriate description of the spin dynamics in the doped cuprates. A better description is provided by a Lorentzian-type form $\chi_{\rm L}'({\bf q})$, which is fully compatible with the experimental data and a temperature-dependent $\xi$. Though the resolution of the experiment does not yet allow us to determine the precise $T$-dependence of $\xi$, our analysis shows that $\xi$ decreases considerably with increasing temperature. This work has been supported by STCS under NSF Grant No. DMR91-20000, the U.S. 
DOE Division of Materials Research under Grant No. DEFG02-91ER45439 (C.P.S., R.S.) and the Deutsche Forschungsgemeinschaft (J.S.). We would like to thank H. Alloul, J. Bobroff, A. Chubukov, D. Pines and M. Takigawa for valuable discussions. [99]{} H. F. Fong, B. Keimer, D. Reznik, D. L. Milius, I. A. Aksay, Phys. Rev. B [**54**]{}, 6708 (1996). P. Bourges, L. P. Regnault, Y. Sidis, J. Bossy, P. Burlet, C. Vettier, J. Y. Henry, M. Couach, J. Low Temp. Phys. [**105**]{}, 337 (1996). G. Aeppli, T. E. Mason, S. M. Hayden, H. A. Mook, J. Kulda, Science [**278**]{}, 1432 (1997). J. M. Tranquada, P. M. Gehring, G. Shirane, S. Shamoto, and M. Sato, Phys. Rev. B [**46**]{}, 5561 (1992); P. Dai, H. A. Mook, and F. Dogan, preprint, cond-mat/9707112. C. P. Slichter, in [*Strongly Correlated Electron Systems*]{}, ed. K. S. Bedell [*et al.*]{} (Addison-Wesley, Reading, MA, 1994). C.H. Pennington and C.P. Slichter, Phys. Rev. Lett. [**66**]{}, 381 (1991). N. Curro, T. Imai, C. P. Slichter, and B. Dabrowski, Phys. Rev. B [**56**]{}, 877 (1997). J. Bobroff, H. Alloul, Y. Yoshinari, A. Keren, P. Mendels, N. Blanchard, G. Collin, J. F. Marucco, Phys. Rev. Lett. [**79**]{}, 2117 (1997). A.V. Mahajan, H. Alloul, G. Collin, J. F. Marucco, Phys. Rev. Lett. [**72**]{}, 3100 (1994). D.K. Morr, J. Schmalian, R. Stern, and C.P. Slichter, preprint cond-mat/9710257. P. Mendels, H. Alloul, G. Collin, N. Blanchard, J. F. Marucco, J. Bobroff, Physica C [**235-240**]{}, 1595 (1994). D. Pines, Z. Phys. B [**103**]{}, 129 (1997); Proc. of the NATO ASI on [*The Gap Symmetry and Fluctuations in High-T$_c$ Superconductors*]{}, J. Bok and G. Deutscher, eds., Plenum Pub. (1998), and references therein. D. J. Scalapino, Phys. Rep. [**250**]{}, 329 (1995). M. Takigawa, Phys. Rev. B [**49**]{}, 4158 (1994). We thank M. Takigawa for reanalyzing his original $T_{2{\rm G}}$ data in Ref.[@Tak94] using the T$_1$ corrections by Curro [*et al.*]{} [@Curro97]. 
The concentration of Ni impurities in the CuO$_2$ planes is given by $\frac{3}{2}x$, since predominantly Cu sites in the planes are substituted by Ni. A. J. Millis and H. Monien, Phys. Rev. B [**45**]{}, 3059 (1992). Y. Zha, V. Barzykin and D. Pines, Phys. Rev. B [**54**]{}, 7561 (1996).
--- abstract: | Let $S$ be a K3 surface with primitive curve class $\beta$. We solve the relative Gromov-Witten theory of $S \times {\mathbb{P}}^1$ in classes $(\beta,1)$ and $(\beta,2)$. The generating series are quasi-Jacobi forms and are equal to a corresponding series of genus $0$ Gromov-Witten invariants on the Hilbert scheme of points of $S$. This proves a special case of a conjecture of Pandharipande and the author. The new geometric input of the paper is a genus bound for hyperelliptic curves on K3 surfaces proven by Ciliberto and Knutsen. By exploiting various formal properties we find that a key generating series is determined by its very first few coefficients. Let $E$ be an elliptic curve. As a corollary of our computations we prove that Gromov-Witten invariants of $S \times E$ in classes $(\beta,1)$ and $(\beta,2)$ are coefficients of the reciprocal of the Igusa cusp form. We also calculate several linear Hodge integrals on the moduli space of stable maps to a K3 surface and the Gromov-Witten invariants of an abelian threefold in classes of type $(1,1,d)$. address: 'MIT, Department of Mathematics' author: - Georg Oberdieck title: | Gromov-Witten theory of $\text{K3} \times {\mathbb{P}}^1$\ and quasi-Jacobi forms --- Introduction ============ Overview -------- Let $S$ be a nonsingular projective $K3$ surface, let ${\mathbb{P}}^1$ be the projective line, and let $0,1, \infty \in {\mathbb{P}}^1$ be distinct points. Consider the relative geometry $$\label{dfgsdg} ( S \times {\mathbb{P}}^1 ) \ / \ \{ S_0, S_1, S_\infty \}$$ where $S_z$ denotes the fiber over the point $z \in {\mathbb{P}}^1$. 
For every $\beta \in H_2(S,{{\mathbb{Z}}})$ and integer $d \geq 0$, the pair $(\beta,d)$ determines a class in $H_2(S \times {\mathbb{P}}^1,{{\mathbb{Z}}})$ by $$(\beta,d) = \iota_{S \ast}(\beta) + \iota_{{\mathbb{P}}^1 \ast}(d [{\mathbb{P}}^1])$$ where $\iota_{S}$ and $\iota_{{\mathbb{P}}^1}$ are inclusions of fibers of the projection to ${\mathbb{P}}^1$ and $S$ respectively. Let $\beta_h \in {\mathop{\rm Pic}\nolimits}(S) \subset H_2(S,{{\mathbb{Z}}})$ be a *primitive* non-zero curve class satisfying $$\langle \beta_h, \beta_h \rangle = 2h-2$$ with respect to the intersection pairing on $S$. In [@HilbK3; @K3xE] the following predictions for the relative Gromov-Witten theory of the relative geometry in classes $(\beta_h,d)$ were made: 1. The theory is related by an exact correspondence to the three-point genus $0$ Gromov-Witten theory of the Hilbert schemes of points of $S$. 2. For all fixed relative conditions, the generating series of Gromov-Witten invariants (summed over the genus and the classes $\beta_h$) is a quasi-Jacobi form[^1]. 3. The theory is governed by an explicit Fock space formalism. The Jacobi form property of the generating series (part (ii)) is especially striking since it implies various strong identities and constraints on the curve counting invariants. In the case of the Hilbert scheme of points an explanation for these symmetries has been found in the invariance of Gromov-Witten invariants under the monodromies of $\operatorname{Hilb}^d(S)$ in the moduli space of hyperkähler manifolds. For $S \times {\mathbb{P}}^1$ the geometric origin of the Jacobi form property is less clear. Nevertheless, a first hint can be found in the following fact proven by Ciliberto and Knutsen: \[thm\_CK\] Let $\beta$ be a primitive curve class on a K3 surface $S$ such that every curve in $S$ of class $\beta$ is irreducible and reduced. 
Then the arithmetic genus $g = p_a(C)$ of every irreducible curve $C \subset S \times {\mathbb{P}}^1$ in class $(\beta,d)$ with $d>1$ satisfies $$h \geq g + \alpha \big(g - (d-1)(\alpha+1) \big) \label{CK_equation}$$ where $\langle \beta, \beta \rangle = 2h-2$ and $\alpha = \floor{\frac{g}{2d-2}}$. An elementary check shows that this inequality implies (and, if $d=2$, is equivalent to) the bound $$(g+d-1)^2 \leq 4 h (d-1) + (d-1)^2 \,.$$ On the other hand, the coefficient $c(h,r)$ in the Fourier expansion $\sum_{h, r} c(h,r) q^h y^r$ of a weak Jacobi form of index $d-1$ is non-zero only if $$r^2 \leq 4 h (d-1) + (d-1)^2 \,.$$ We find the genus bound by Ciliberto-Knutsen to match the coefficient bound for weak Jacobi forms under the index shift[^2] $r=1-g-d$. The appearance of Jacobi forms in the Gromov-Witten theory of $S \times {\mathbb{P}}^1$ is partly reflected in the fact that $d$-gonal curves on generic K3 surfaces have many singularities. One may ask whether this constraint can be used to determine Gromov-Witten invariants of $S \times {\mathbb{P}}^1$. The main technical result of the paper shows this is possible in the case $d=2$: for a key choice of incidence condition, the Gromov-Witten invariants of $S \times {\mathbb{P}}^1$ in class $(\beta_h, 2)$ are completely determined by formal properties, the constraint above, and a few calculations in low genus. By standard techniques this leads to a full evaluation of the relative Gromov-Witten theory of $S \times {\mathbb{P}}^1$ in classes $(\beta_h,1)$ and $(\beta_h,2)$ in terms of quasi-Jacobi forms. Relative Gromov-Witten theory of $\text{K3} \times {\mathbb{P}}^1$ {#Section:Relative_Gromov_Witten_theory_of_P1K3} ------------------------------------------------------------------ ### Definition Let $z_1, \dots, z_k$ be distinct points on ${\mathbb{P}}^1$, and consider the relative geometry $$( S \times {\mathbb{P}}^1 )\ / \ \{ S_{z_1} , \ldots , S_{z_k} \}\,. 
\label{123}$$ Let $(\beta, d) \in H_2(S \times {\mathbb{P}}^1, {{\mathbb{Z}}})$ be a curve class, and let $\vec{\mu}^{(1)}, \dots, \vec{\mu}^{(k)}$ be ordered partitions of size $d$ with positive parts. The moduli space $$\mathbf{M}^{\bullet}_{g,n, (\beta,d), \mathbf{\mu}} = {{\overline M}}^{\bullet}_{g,n}\big( (S \times {\mathbb{P}}^1) / \{ S_{z_1}, \dots, S_{z_k} \}, (\beta,d), (\vec{\mu}^{(1)},\ldots, \vec{\mu}^{(k)}) \big)$$ parametrizes possibly disconnected[^3] $n$-pointed relative stable maps of genus $g$ and class $(\beta,d)$ with ordered ramification profile $\vec{\mu}^{(i)}$ along the divisors $S_{z_i}$ respectively. The relative evaluation maps $${\mathop{\rm ev}\nolimits}_j^{(i)} \colon \mathbf{M}_{g,n, (\beta,d), \mathbf{\mu}}^{\bullet} \, \to\, S_{z_i} \equiv S \,, \quad j=1,\dots, l(\mu_i), \quad i=1,\dots, k$$ send a relative stable map to the $j$-th intersection point with the divisor $S_{z_i}$. We let ${\mathop{\rm ev}\nolimits}_1, \ldots, {\mathop{\rm ev}\nolimits}_n$ denote the evaluation maps of the non-relative marked points. Relative Gromov-Witten invariants are defined using *unordered* relative conditions. Let $\gamma_1, \dots, \gamma_{24}$ be a fixed basis of $H^{\ast}(S, {{\mathbb{Q}}})$. A cohomology weighted partition $\nu$ is a multiset[^4] of pairs $$\Big\{ (\nu_1, \gamma_{s_1}) , \ldots, (\nu_{l(\nu)}, \gamma_{s_{l(\nu)}}) \Big\}$$ where $\sum_i \nu_i$ is an unordered partition of size $|\nu|$. The automorphism group $\operatorname{Aut}(\nu)$ consists of the permutation symmetries of $\nu$. Consider unordered cohomology weighted partitions $$\mu^{(1)}, \dots, \mu^{(k)} \,.$$ For every $i \in \{ 1, \ldots, k \}$ let $( \mu^{(i)}_{j} \,,\, \gamma^{(i)}_{s_j} )_{j=1, \dots, l(\mu_i)}$ be a choice of ordering of $\mu^{(i)}$, and let $\vec{\mu}^{(i)} = (\mu^{(i)}_{j})$ be the underlying ordered partition. 
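As an aside, the relation noted earlier between the genus bound of Theorem \[thm\_CK\] and the coefficient bound for weak Jacobi forms can be verified mechanically. The snippet below is an illustrative check in integer arithmetic (not part of any argument): for $d=2$ the two bounds agree for every genus, and in general the Ciliberto-Knutsen bound is at least as strong.

```python
def ck_bound(g, d):
    # smallest h allowed by the Ciliberto-Knutsen inequality, alpha = floor(g/(2d-2))
    a = g // (2 * d - 2)
    return g + a * (g - (d - 1) * (a + 1))

def jacobi_bound(g, d):
    # smallest integer h with (g + d - 1)^2 <= 4 h (d - 1) + (d - 1)^2
    num = (g + d - 1) ** 2 - (d - 1) ** 2
    return -(-num // (4 * (d - 1)))  # ceiling division

# for d = 2 the two bounds agree for every genus ...
assert all(ck_bound(g, 2) == jacobi_bound(g, 2) for g in range(200))
# ... and in general Ciliberto-Knutsen implies the Jacobi-form coefficient bound
assert all(ck_bound(g, d) >= jacobi_bound(g, d) for d in range(2, 8) for g in range(200))
```

The agreement reflects that both bounds are ceilings of the same quadratic optimum in $\alpha$; the choice $\alpha = \lfloor g/(2d-2) \rfloor$ is always within $1/2$ of the real maximizer.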
We define the reduced Gromov-Witten invariants of $(S \times {\mathbb{P}}^1) / \{ S_{z_i} \}$ with relative conditions $\mu^{(1)}, \dots, \mu^{(k)}$ by integration over the reduced virtual class[^5] of the moduli space $\mathbf{M}^{\bullet}_{g,n, (\beta,d), \mathbf{\mu}}$: $$\begin{gathered} \Big\langle \, \mu^{(1)}, \dots, \mu^{(k)} \, \Big| \, \prod_{i=1}^{n} \tau_{\ell_i}(\alpha_i) \Big\rangle^{S \times {\mathbb{P}}^1/ \{ z_1, \dots, z_k \}, \bullet}_{g, (\beta,d)} \\ = \frac{1}{\prod_i | \operatorname{Aut}(\mu^{(i)}) |} \cdot \int_{ [ \mathbf{M}_{g,n, (\beta,d), \mathbf{\mu}}^{\bullet} ]^{\text{red}} } \prod_{i=1}^{n} \psi_i^{\ell_i} {\mathop{\rm ev}\nolimits}_i^{\ast}(\alpha_i) \cup \prod_{i=1}^{k} \prod_{j=1}^{l(\mu^{(i)})} {\mathop{\rm ev}\nolimits}^{(i) \ast}_j( \gamma^{(i)}_{s_j} ) \,,\end{gathered}$$ where $\alpha_1, \dots, \alpha_n \in H^{\ast}(S \times {\mathbb{P}}^1, {{\mathbb{Q}}})$ are cohomology classes and $\psi_i$ is the cotangent line class at the $i$th non-relative marked point. Since all cohomology of $S$ is even, the integral is independent of the choice of ordering of $\mu_i$. The automorphism factors correct for the choice of an ordering. ### Evaluations Let $S$ be a non-singular projective K3 surface with elliptic[^6] fibration $\pi$ and section $s$, $$\pi : S \to {\mathbb{P}}^1, \quad s : {\mathbb{P}}^1 \to S \,, \quad \pi \circ s = \operatorname{id}_{{\mathbb{P}}^1} .$$ The class of a fiber of $\pi$ and the image of $s$ are denoted $$F, B \, \in\, {\mathop{\rm Pic}\nolimits}(S) \subset H_2(S,{{\mathbb{Z}}})$$ respectively. Consider the primitive curve classes $$\beta_h = B + hF, \quad h \geq 0$$ of self-intersection $\langle \beta_h, \beta_h \rangle = 2h-2$. Let also ${{\mathsf{p}}}\in H^4(S, {{\mathbb{Z}}})$ be the class of a point, and let ${\mathbf{1}}\in H^0(S,{{\mathbb{Z}}})$ be the unit. 
Consider Gromov-Witten invariants of $S \times {\mathbb{P}}^1 / \{ 0,1, \infty \}$ with relative conditions $$\label{Rel_Conditions} \begin{aligned} \mu_{m,n} & = \{ (1,{{\mathsf{p}}})^m (1, F)^n \} \\ \nu_{m,n} & = \{ (1, {\mathbf{1}})^m (1,F)^{n} \} \\ D(F) & = \{ (1,F) (1, {\mathbf{1}})^{m+n-1} \} \,, \end{aligned}$$ over the points $0,1, \infty$ respectively: $$\mathsf{N}_{g,h}(m,n) = {\big\langle}\mu_{m,n} , \nu_{m,n}, D(F) {\big\rangle}^{S \times {\mathbb{P}}^1/\{ 0,1,\infty \}, \bullet}_{g, (\beta_h,m+n)}. \label{13531523}$$ By deformation invariance the left hand side depends on $g,h,m,n$ alone. The relative condition $D(F)$ over $\infty$ is included to fix the automorphism of ${\mathbb{P}}^1$ on the target, but otherwise plays no important role. The first result of the paper is the complete evaluation of $\mathsf{N}_{g,h}(m,n)$, for which we require several definitions: Let $u$ and $q$ be formal variables. For $k \geq 1$ let $$C_{2k}(q) = - \frac{B_{2k}}{2k (2k)!} + \frac{2}{(2k)!} \sum_{n \geq 1} \sum_{d | n} d^{2k-1} \, q^n \,, \label{Eisenstein_Series}$$ denote the classical Eisenstein series, where $B_{2k}$ are the Bernoulli numbers. We define the Jacobi theta function $$\label{Thetafunction} \Theta(u,q) = u \exp\left( \sum_{k \geq 1} (-1)^{k-1} C_{2k}(q) u^{2k} \right)$$ and the Weierstrass elliptic function $$\wp(u,q) = - \frac{1}{u^2} - \sum_{k \geq 2} (-1)^k (2k-1) 2k C_{2k}(q) u^{2k-2} \,.$$ We will also require the slightly unusual but important function $$\mathbf{G}(u,q) = - \Theta(u,q)^2 \big( \wp(u,q) + 2 C_2(q) \big) \,.$$ Finally, define the modular discriminant $$\Delta(q) = q \prod_{m \geq 1} (1-q^m)^{24} \,.$$ \[mainthm\_1b\] For all $m \geq 0$ and $n > 0$, $$\sum_{g \in {{\mathbb{Z}}}} \sum_{h = 0}^{\infty} \mathsf{N}_{g,h}(m,n) u^{2(g + m + n - 1)} q^{h-1} = \frac{1}{m! (n!)^2} \frac{ \mathbf{G}(u,q)^{m} \Theta(u,q)^{2n} }{ \Theta(u,q)^2 \Delta(q)}.$$ If $n=0$ all the invariants $\mathsf{N}_{g,h}(m,n)$ vanish. 
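The $q$-expansions entering these definitions are straightforward to generate. The sketch below (exact rational arithmetic, stdlib only; illustrative, not part of the paper) produces the Eisenstein series $C_2$ and the modular discriminant $\Delta$, whose first coefficients can be checked against the Ramanujan tau values $1, -24, 252, -1472, \dots$:

```python
from fractions import Fraction

def sigma(k, n):
    # divisor power sum sigma_k(n)
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def C2(N):
    # C_2(q) = -1/24 + sum_{n >= 1} sigma_1(n) q^n, truncated at q^N
    return [Fraction(-1, 24)] + [Fraction(sigma(1, n)) for n in range(1, N + 1)]

def delta(N):
    # Delta(q) = q prod_{m >= 1} (1 - q^m)^24, truncated at q^N
    poly = [0] * (N + 1)
    poly[0] = 1
    for m in range(1, N + 1):
        for _ in range(24):
            for i in range(N, m - 1, -1):  # multiply by (1 - q^m) in place
                poly[i] -= poly[i - m]
    return [0] + poly[:N]  # the final factor q shifts everything by one power
```

For example, `delta(6)` returns `[0, 1, -24, 252, -1472, 4830, -6048]`, the coefficients of $q^0$ through $q^6$.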
The left hand side of Theorem \[mainthm\_1b\] is a generating series of relative Gromov-Witten invariants of $S \times {\mathbb{P}}^1$ in degree $d=m+n$ over ${\mathbb{P}}^1$. The right hand side is a (holomorphic) quasi-Jacobi form of index $d-1$. Theorem \[mainthm\_1b\] provides an example of the conjectured quasi-Jacobi form property of generating series of Gromov-Witten invariants of $S \times {\mathbb{P}}^1$ in all degrees $d$. The proof of Theorem \[mainthm\_1b\] proceeds in two steps: first, the series are computed in degree $d=2$ over ${\mathbb{P}}^1$ using induction over the genus via the result of Ciliberto and Knutsen, see Section \[Section:Genus\_induction\] for details. Then, the higher degree case follows by a degeneration and localization argument. The reduction of higher degree to degree $2$ invariants in the second step works only for very limited choices of relative insertions and is one of the reasons for restricting to the case considered. Closed evaluations in higher degree with more general insertions require new methods. Hilb/GW correspondence ---------------------- Let $$\operatorname{Hilb}^d(S)$$ be the Hilbert scheme of $d$ points of the K3 surface $S$. For all $\alpha \in H^{\ast}(S ; {{\mathbb{Q}}})$ and $i > 0$ let $${{\mathfrak{p}}}_{-i}(\alpha) : H^{\ast}(\operatorname{Hilb}^d(S);{{\mathbb{Q}}}) \to H^{\ast}(\operatorname{Hilb}^{d+i}(S);{{\mathbb{Q}}}), \ \gamma \mapsto {{\mathfrak{p}}}_{-i}(\alpha) \gamma$$ be the Nakajima creation operator obtained by adding length $i$ punctual subschemes incident to a cycle Poincare dual to $\alpha$. 
The cohomology of $\operatorname{Hilb}^d(S)$ is completely described by the cohomology of $S$ via the action of the operators ${{\mathfrak{p}}}_{-i}(\alpha)$ on the vacuum vector $${{v_{\varnothing}}}\in H^{\ast}(\operatorname{Hilb}^0(S);{{\mathbb{Q}}}) \equiv {{\mathbb{Q}}}.$$ To every cohomology weighted partition $\mu = \{ (\mu_i, \gamma_{s_i}) \}$ of size $d$ we associate the class $$| \mu \rangle = \frac{1}{{{\mathfrak{z}}}(\mu)} \prod_{i} {{\mathfrak{p}}}_{-i}(\gamma_{s_i}) {{v_{\varnothing}}}$$ in $H^{\ast}(\operatorname{Hilb}^d(S), {{\mathbb{Q}}})$, where ${{\mathfrak{z}}}(\mu) = |\operatorname{Aut}(\mu)| \prod_{i}\mu_i$. Let $\beta \in H_2(S,{{\mathbb{Z}}})$ be a non-zero curve class on $S$. The associated curve class on $\operatorname{Hilb}^d(S)$, defined as the Poincare dual to $${{\mathfrak{p}}}_{-1}(\beta) {{\mathfrak{p}}}_{-1}({{\mathsf{p}}})^{d-1} {{v_{\varnothing}}}\,,$$ is denoted by $\beta$ as well. We will also require $A \in H_2(\operatorname{Hilb}^d(S),{{\mathbb{Z}}})$, the class of an exceptional curve Poincare dual to $${{\mathfrak{p}}}_{-2}({{\mathsf{p}}}) {{\mathfrak{p}}}_{-1}({{\mathsf{p}}})^{d-2} {{v_{\varnothing}}}\,.$$ Let $\lambda_1, \dots, \lambda_r$ be cohomology weighted partitions and let $$\label{HilbK3invariants} \big\langle \lambda_1, \dots, \lambda_n \big\rangle^{\operatorname{Hilb}^d(S)}_{0,\beta + kA} = \int_{[ {{\overline M}}_{0,n}(\operatorname{Hilb}^d(S),\beta + kA) ]^{\text{red}}} \prod_{i=1}^{n} {\mathop{\rm ev}\nolimits}_i^{\ast}(\lambda_i)$$ be the reduced genus $0$ Gromov-Witten invariants of $\operatorname{Hilb}^d(S)$ in class $\beta + k A$ [@HilbK3]. The following GW/Hilb correspondence was conjectured in [@K3xE]. 
\[asdasd\] \[GW/Hilb\_correspondence\] For primitive $\beta$, $$\begin{gathered} \label{eqn_correspondence} (-1)^d \sum_{k \in {{\mathbb{Z}}}} \big\langle \mu, \nu, \rho \big\rangle^{\operatorname{Hilb}^d(S)}_{0,\beta + kA} y^k \\ \ = \ (-iu)^{l(\mu) + l(\nu) + l(\rho) - d} \sum_{g \geq 0} {\big\langle}\mu , \nu , \rho {\big\rangle}^{S \times {\mathbb{P}}^1 / \{ 0,1,\infty \}}_{g, (\beta,d)} u^{2g-2}\end{gathered}$$ under the variable change $y = - e^{iu}$. The Gromov-Witten invariants of $\operatorname{Hilb}^d(S)$ which correspond to the invariants $\mathsf{N}_{g,h}(m,n)$ were calculated in [@HilbK3]. The result exactly matches the evaluation of Theorem \[mainthm\_1b\] under the correspondence of Conjecture \[GW/Hilb\_correspondence\]. Hence Theorem \[mainthm\_1b\] gives an example of Conjecture \[GW/Hilb\_correspondence\] in every degree $d$. For low degree we have the following result: \[GWHilb\_thm\] \[thm\_GWHilb\_correspondence\] Conjecture \[asdasd\] holds if $d=1$ or $d=2$. Let $(z,\tau) \in {{\mathbb{C}}}\times {{\mathbb{H}}}$. The ring ${\mathop{\rm QJac}\nolimits}$ of quasi-Jacobi forms is the linear subspace $${\mathop{\rm QJac}\nolimits}\subset {{\mathbb{Q}}}[ \Theta(z,\tau), C_2(\tau), C_4(\tau), \wp(z,\tau), \wp^{\bullet}(z,\tau), J_1(z,\tau)]$$ of functions which are holomorphic at $z=0$; here $\Theta$ is the Jacobi theta function, $\wp$ is the Weierstrass elliptic function, $\wp^{\bullet}$ is its derivative with respect to $z$, and $J_1$ is the logarithmic derivative of $\Theta$ with respect to $z$, see [@HilbK3 Appendix B]. The space ${\mathop{\rm QJac}\nolimits}$ is naturally graded by index $m$ and weight $k$, $${\mathop{\rm QJac}\nolimits}= \bigoplus_{m \geq 0} \bigoplus_{k \geq -2m} {\mathop{\rm QJac}\nolimits}_{k,m},$$ with finite-dimensional summands ${\mathop{\rm QJac}\nolimits}_{k,m}$. We identify a quasi Jacobi form $\psi(z, \tau)$ with its power series expansions in the variables $q = e^{2 \pi i \tau}$ and $u = 2 \pi z$. 
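For concreteness, the $u$-expansion of the generator $\Theta$ can be produced directly from its definition: at $q=0$ the Eisenstein series reduce to their constant terms $-B_{2k}/(2k\,(2k)!)$, and one recovers $\Theta(u,0) = 2\sin(u/2) = u - u^3/24 + u^5/1920 - \dots$, a useful sanity check. A short sketch in exact arithmetic (illustrative only, truncated at $u^5$):

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    # Bernoulli numbers via the standard recurrence (convention B_1 = -1/2)
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        B[n] = -sum(Fraction(comb(n + 1, j)) * B[j] for j in range(n)) / (n + 1)
    return B[m]

def C_const(k):
    # constant term of the Eisenstein series C_{2k}: -B_{2k} / (2k (2k)!)
    fact = 1
    for i in range(2, 2 * k + 1):
        fact *= i
    return -bernoulli(2 * k) / (2 * k * fact)

# Theta(u, 0) = u * exp(C_2(0) u^2 - C_4(0) u^4 + ...), truncated at u^5
c2, c4 = C_const(1), C_const(2)
u3_coeff = c2                    # exp(...) = 1 + c2 u^2 + (c2^2/2 - c4) u^4 + ...
u5_coeff = c2 * c2 / 2 - c4
```

Here $C_2(0)=-1/24$ and $C_4(0)=1/2880$, so the coefficients $-1/24$ and $1/1920$ match the Taylor expansion of $2\sin(u/2)$, as expected.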
In [@HilbK3] the invariants of $\operatorname{Hilb}^2(S)$ have been completely determined in the primitive case. In particular, combining [@HilbK3 Theorem 3] and Theorem \[GWHilb\_thm\] we have the following. Let $\mu, \nu, \rho$ be cohomology weighted partitions of size $2$. Then, under the variable change $u = 2 \pi z$ and $q = e^{2 \pi i \tau}$, we have $$(-iu)^{l(\mu) + l(\nu) + l(\rho) - d} \sum_{g, h} {\big\langle}\mu, \nu, \rho {\big\rangle}^{S \times {\mathbb{P}}^1 / \{ 0,1,\infty \}, \bullet}_{g, (\beta_h,2)} u^{2g-2} q^{h-1} \ = \ \frac{\psi(z,\tau)}{\Delta(q)}$$ for a quasi-Jacobi form $\psi(z,\tau)$ of index $1$ and weight $\leq 8$. The product $\text{K3} \times E$ {#Section_Introduction_K3xE} -------------------------------- Let $S$ be a nonsingular projective $K3$ surface, and let $E$ be a nonsingular elliptic curve. The 3-fold $X=S \times E$ has trivial canonical bundle, and hence is Calabi-Yau. Let $\beta \in H_2(S,\mathbb{Z})$ be an effective curve class and let $d\geq 0$ be an integer. The pair $(\beta,d)$ determines a class in $H_2(X,\mathbb{Z})$ by $$(\beta,d)= \iota_{S*}(\beta)+ \iota_{E*}(d[E])$$ where $\iota_S$ and $\iota_E$ are inclusions of fibers of the projections to $E$ and $S$ respectively. The moduli space ${{\overline M}}^{\bullet}_{g}(X, (\beta,d))$ of disconnected genus $g$ stable maps in class $(\beta,d)$ carries a reduced virtual class $$[ {{\overline M}}^{\bullet}_{g}(X, (\beta,d)) ]^{\text{red}}$$ of dimension $1$. The group $E$ acts on the moduli space by translation and the dimension of the reduced class corresponds to the $1$-dimensional orbits under this action. We define a count of curves in $X$ by imposing an incidence condition which selects one point in each $E$-orbit. Concretely, let $\omega \in H^2(E, {{\mathbb{Z}}})$ be the class of a point and let $\beta^{\vee} \in H^2(S, {{\mathbb{Q}}})$ be a class satisfying $\langle \beta, \beta^{\vee} \rangle = 1$. 
We define $$\label{15343} \mathsf{N}_{g,(\beta,d)}^X = \int_{[ {{\overline M}}^{\bullet}_{g,1}(X, (\beta,d)) ]^{\text{red}}} {\mathop{\rm ev}\nolimits}_1^{\ast}( \pi_1^{\ast}(\beta^{\vee}) \cup \pi_2^{\ast}(\omega) ) \,.$$ A complete evaluation of $\mathsf{N}_{g,(\beta,d)}^X$ was conjectured in [@K3xE], matching the physical predictions of [@KKV]. We consider here the case of primitive $\beta$. For primitive $\beta_h$ the integral depends only on the norm $\langle \beta_h, \beta_h \rangle = 2h-2$. We write $$\mathsf{N}_{g,h,d}^X = \mathsf{N}_{g,(\beta_h,d)}^X \,.$$ \[fjgdf\] Let $\tilde{q}$ be a formal variable. Then $$\label{chi10} \sum_{d = 0}^{\infty} \sum_{h = 0}^{\infty} \sum_{g \in {{\mathbb{Z}}}} \mathsf{N}_{g,h,d}^X u^{2g-2} q^{h-1} \tilde{q}^{d-1} = \frac{1}{\chi_{10}(u,q,\tilde{q})}$$ where $\chi_{10}(u,q, \tilde{q})$ is the Igusa cusp form in the notation of [@K3xE]. Conjecture \[fjgdf\] contains several known cases. In curve class $(\beta, 0)$ the invariant $\mathsf{N}^X_{g, h,d}$ reduces to the Katz-Klemm-Vafa formula proven in [@MPT]. The case $(\beta_{0}, d)$ for $d \geq 0$ reduces to the product of an $\mathcal{A}_1$-resolution with an elliptic curve, computed in [@M]. The cases $(\beta_0, d)$ and $(\beta_1, d)$ have recently been obtained by J. Bryan [@Bryan-K3xE]. Here we show the cases $(\beta_h, 1)$ and $(\beta_h,2)$ of Conjecture \[fjgdf\]: \[Theorem\_K3xE\] For $d=1$ and $d=2$ we have $$\sum_{h \geq 0} \sum_{g \in {{\mathbb{Z}}}} \mathsf{N}_{g,h,d}^X u^{2g-2} q^{h-1} = \left[ \frac{1}{\chi_{10}(u,q,\tilde{q})} \right]_{\tilde{q}^{d-1}}$$ where $\left[ \ \cdot \ \right]_{\tilde{q}^k}$ denotes the coefficient of $\tilde{q}^k$. Abelian threefolds ------------------ Consider a complex abelian variety $A$ of dimension $3$, and let $\beta \in H_2(A, {{\mathbb{Z}}})$ be a curve class of type $$(d_1, d_2, d_3), \quad d_1, d_2 > 0,\ d_3 \geq 0 \,,$$ where the type is obtained from the standard divisor theory of the dual abelian variety $A^{\vee}$. 
Since $d_1, d_2 > 0$, the action of $A$ on the moduli space ${{\overline M}}_{g}(A,\beta)$ by translation has finite stabilizers and the stack quotient $${{\overline M}}_{g}(A,\beta) / A$$ is Deligne-Mumford. A $3$-reduced virtual class $[ {{\overline M}}_{g}(A, \beta) / A ]^{\text{3-red}}$ of dimension $0$ has been defined in [@BOPY] and gives rise to Gromov-Witten invariants $$\mathsf{N}^A_{g,(d_1, d_2, d_3)} = \int_{[ {{\overline M}}_{g}(A, \beta)/A ]^{\text{3-red}} } 1 \label{aaa}$$ counting genus $g$ curves in $A$ of class $\beta$ *up to translation*. In genus $3$, the counts $\mathsf{N}^A_{3,(d_1, d_2, d_3)}$ reduce to a lattice count in abelian groups [@Debarre; @Gottsche; @LS]. A full formula for $\mathsf{N}^A_{g,(d_1, d_2, d_3)}$ in case $d_1 = 1$ was recently conjectured in [@BOPY] based on new calculations of the Euler characteristic of the Hilbert scheme of curves in $A$. The following result verifies this conjecture in case $d_1 = d_2 = 1$. \[abelianthm\] $$\sum_{d = 0}^{\infty} \sum_{g=2}^{\infty} \mathsf{N}^A_{g,(1,1,d)} u^{2g-2} q^d \, = \, \Theta(u,q)^2$$ An interesting question is to explore the enumerative significance of Theorem \[abelianthm\]. Define BPS numbers $\mathsf{n}_{g, (1, d, d')}$ by the expansion $$\sum_{g} \mathsf{n}_{g, (1,d,d')} (2 \sin(u/2))^{2g-2} = \sum_{g \geq 0} \mathsf{N}^A_{g,(1,d,d')} u^{2g-2} \,.$$ Then it is natural to ask: If $A$ is a generic abelian threefold carrying a curve class $\beta$ of type $(1,d,d')$, do there exist only finitely many isolated curves of genus $g$ in class $\beta$ up to translation? Is every such curve non-singular? If both questions can be answered affirmatively, the BPS numbers $\mathsf{n}_{g, (1,d,d')}$ are enumerative. Plan of the paper ----------------- In Section \[Section\_The\_bracket\_notation\] we review a bracket notation for Gromov-Witten invariants. 
In Section \[Section:First\_vanishing\] we establish a basic evaluation of Gromov-Witten invariants of $S \times {\mathbb{P}}^1$ in classes $(\beta_h,1)$, leading to a proof of Theorem \[abelianthm\] on abelian threefolds. In Section \[Section\_Formal\_series\_of\_quasimodular\_forms\] we prove a uniqueness statement for formal series of quasi-modular forms. Section \[Section:Genus\_induction\] is the heart of the paper: here we apply the result of Ciliberto and Knutsen on hyperelliptic curves in K3 surfaces to calculate a key generating series of Gromov-Witten invariants of $S$. In Section \[Section:Relative\_Invariants\_of\_P1K3\] we apply standard techniques to solve for the relative Gromov-Witten theory of $S \times {\mathbb{P}}^1$ in degrees $d=1$ and $d=2$. As a result we obtain the GW/Hilb correspondence (Theorem \[GWHilb\_thm\]) and Theorem \[Theorem\_K3xE\] on the Gromov-Witten theory of $S \times E$. Acknowledgements ---------------- I would like to thank Davesh Maulik for interesting discussions and technical assistance, and Jim Bryan, Tudor Padurariu, Rahul Pandharipande, Aaron Pixton, Martin Raum, Junliang Shen, and Qizheng Yin for discussions about counting curves in K3 geometries. I am also very grateful to the referees for a careful reading of the manuscript and their comments. A great intellectual debt is owed to the paper [@MPT] by Maulik, Pandharipande and Thomas, where many of the techniques used here were developed. The bracket notation {#Section_The_bracket_notation} ==================== Let $X$ be a smooth projective variety and let $\beta \in H_2(X,{{\mathbb{Z}}})$ be a curve class.
We will denote connected Gromov-Witten invariants of $X$ by the bracket notation $$\label{def_gw} \Big\langle \, \alpha \, ; \, \tau_{k_1}(\gamma_1) \cdots \tau_{k_n}(\gamma_n) \Big\rangle^X_{g, \beta} \ = \ \int_{[ {{\overline M}}_{g,n}(X ,\beta) ]^{\text{vir}}} \alpha \cup \prod_{i=1}^{n} {\mathop{\rm ev}\nolimits}_i^{\ast}(\gamma_i) \psi_i^{k_i},$$ where - ${{\overline M}}_{g,n}(X,\beta)$ is the moduli space of connected $n$-marked stable maps of genus $g$ and class $\beta$, - $\gamma_1, \ldots, \gamma_n \in H^{\ast}(X)$ are cohomology classes, - $\alpha$ is a cohomology class on ${{\overline M}}_{g,n}(X,\beta)$, usually taken to be the pullback of a *tautological* class [@FP13] under the forgetful map ${{\overline M}}_{g,n}(X,\beta) \to {{\overline M}}_{g,n}$ to the moduli space of curves. If the obstruction sheaf on ${{\overline M}}_{g,n}(X, \beta)$ admits a trivial quotient obtained from a holomorphic $2$-form on $X$, the integral in is assumed to be over the *reduced* virtual class. For abelian threefolds we will use the $3$-reduced virtual class [@BOPY]. The parallel definition of for disconnected invariants is denoted by attaching the superscript $\bullet$ to the bracket and the moduli spaces. Let ${{\mathbb{E}}}\to {{\overline M}}_{g,n}(X, \beta)$ (resp. ${{\mathbb{E}}}\to {{\overline M}}_{g,n}^{\bullet}(X, \beta)$) be the Hodge bundles with fiber $H^0(C, \omega_C)$ over the moduli point $[f : C \to X]$. The total Chern class of the dual of ${{\mathbb{E}}}$, $${{\mathbb{E}}}^{\vee}(1) = c({{\mathbb{E}}}^{\vee}) = 1 - \lambda_1 + \ldots + (-1)^g \lambda_g,$$ is often used for the insertion $\alpha$. We extend the bracket by multilinearity in the insertions. Since for dimension reasons only finitely many terms contribute, the formal expansion $$\frac{\gamma}{1 - \psi_i} = \sum_{k=0}^{\infty} \tau_k(\gamma), \quad \gamma \in H^{\ast}(X)$$ is well-defined.
Assume that $X$ admits a fibration $$\pi : X \to {\mathbb{P}}^1$$ and let $X_0, X_\infty$ be the fibers of $\pi$ over the points $0, \infty \in {\mathbb{P}}^1$. We will use the standard bracket notation $$\Big\langle \ \mu\ \Big| \ \alpha\, \prod_{i} \tau_{k_i}(\gamma_i) \ \Big| \ \nu \ \Big\rangle^{X}_{g, \beta} = \int_{ [\overline{M}_{g,n}( X /\{ X_0, X_\infty\}, \beta)_{\mu, \nu}]^{\text{vir}} } \alpha\, \cup\, \prod_{i} \psi_i^{k_i} {\mathop{\rm ev}\nolimits}_i^{\ast}(\gamma_i)$$ for the Gromov-Witten invariants of $X$ relative to the fibers $X_0$ and $X_{\infty}$. The integral is over the moduli space of stable maps $$\overline{M}_{g,n}( X /\{ X_0, X_{\infty} \},\, \beta)$$ relative to the fibers over $0,\infty \in {\mathbb{P}}^1$ in class $\beta$. Here, $\mu$ and $\nu$ are unordered cohomology weighted partitions, weighted by cohomology classes on $X_0$ and $X_{\infty}$ respectively[^7]. The integrand contains the cohomology class $\alpha$ and the descendents. Again, we use a *reduced* virtual class whenever possible. We will form generating series of the absolute and relative invariants above. Throughout we will use the following conventions: In K3 geometries we assign to a primitive class $\beta_h$ of norm $\langle \beta_h , \beta_h \rangle = 2h-2$ the variable $q^{h-1}$. The $d$-fold multiple of the fundamental class of an elliptic curve (in a trivial elliptic fibration) will correspond to $q^d$.
For absolute invariants the genus $g$ Gromov-Witten invariant in class $\beta$ will be weighted by the variable $$u^{2g - 2 + \int_\beta c_1(X)}.$$ For relative invariants with relative conditions specified by cohomology weighted partitions $\mu_1, \dots, \mu_k$ we will use $$u^{2g-2+\int_\beta c_1(X) + \sum_{i=1}^{k} \left( \ell(\mu_i) - |\mu_i| \right)} \,.$$ For example, in the case of the elliptically fibered K3 surface $S$ with curve classes $\beta_h = B + hF$ we will use $$\begin{gathered} \label{Generating_Series_K3_Surfaces} \Big\langle \alpha ; \tau_{k_1}(\gamma_1) \cdots \tau_{k_n}(\gamma_n) \Big\rangle^{S} = \sum_{g \geq 0} \sum_{h \geq 0} \big\langle \alpha ; \tau_{k_1}(\gamma_1) \cdots \tau_{k_n}(\gamma_n) \big\rangle^S_{g, \beta_h} u^{2g-2} q^{h-1} \,.\end{gathered}$$ Calculations in degree $1$ {#Section:First_vanishing} ========================== Overview -------- We evaluate a special Gromov-Witten invariant on $\text{K3} \times {\mathbb{P}}^1$ in class $(\beta_h,1)$. By the Katz-Klemm-Vafa formula this leads to a proof of Theorem \[abelianthm\]. Evaluation {#Subsection_Proof_of_Theorem_1} ---------- Let $S$ be a K3 surface, let $\beta_h \in H_2(S, {{\mathbb{Z}}})$ be a primitive curve class satisfying $\langle \beta_h, \beta_h \rangle =2h-2$ and let $$F \in H^2(S,{{\mathbb{Z}}})$$ be a class satisfying $F \cdot \beta_h = 1$ and $F \cdot F = 0$. Let $\omega \in H^2({\mathbb{P}}^1)$ be the class of a point, and let $$F \boxtimes \omega = \pi_1^{\ast}(F) \cup \pi_2^{\ast}(\omega) \in H^4(S \times {\mathbb{P}}^1)$$ where $\pi_i$ is the projection from $S \times {\mathbb{P}}^1$ to the $i$th factor. Consider the connected Gromov-Witten invariant $${\big\langle}\tau_0(F \boxtimes \omega)^3 {\big\rangle}_{g,(\beta_h, 1)}^{S \times {\mathbb{P}}^1} = \int_{[ {{\overline M}}_{g,3}(S \times {\mathbb{P}}^1, (\beta_h, 1)) ]^{\text{red}}} \prod_{i=1}^{3} {\mathop{\rm ev}\nolimits}_i^{\ast}(F \boxtimes \omega) \,.
\label{rwerw}$$ \[vanishing1\] For every $h \geq 0$, we have $${\big\langle}\tau_0(F \boxtimes \omega)^3 {\big\rangle}_{g,(\beta_h, 1)}^{S \times {\mathbb{P}}^1} = \begin{cases} \left[ \frac{1}{\Delta(q)} \right]_{q^{h-1}} & \text{ if } g = 0 \\ 0 & \text{ if } g > 0 \,, \end{cases}$$ where $[\ \cdot\ ]_{q^{n}}$ denotes extracting the $n$-th coefficient. We may take $S$ to be generic and $\beta_h$ to be irreducible. Let $F_i$, $i=1,2,3$ be generic distinct smooth submanifolds of class $F$ which intersect all rational curves in class $\beta_h$ transversely in a single point. Let also $x_1, x_2, x_3$ be distinct points in ${\mathbb{P}}^1$. The products $$F_i \times x_i \, \subset \, S \times {\mathbb{P}}^1, \quad i=1,2,3$$ have class $F \boxtimes \omega$. Consider an algebraic curve $C \subset S \times {\mathbb{P}}^1$ in class $(\beta_h, 1)$ incident to $F_i \times x_i$ for all $i$. Since $F_i \cap F_j = \varnothing$ for $i \neq j$, the curve $C$ is irreducible and reduced. Because the projection $C \to {\mathbb{P}}^1$ is of degree $1$, the curve $C$ is non-singular. Since irreducible rational curves on K3 surfaces are rigid, the only deformations of $C$ in $S \times {\mathbb{P}}^1$ are by translations by automorphisms of ${\mathbb{P}}^1$. The incidence conditions $F_i \times x_i$ then select precisely one member of each translation class. We find that the curves in class $(\beta_h,1)$ incident to all $F_i \times x_i$ are in $1$-to-$1$ correspondence with the rational curves on $S$ in class $\beta_h$. By the Yau-Zaslow formula proven in [@BL; @Bea99; @Chen] there are precisely $$\left[ \frac{1}{\Delta(q)} \right]_{q^{h-1}}$$ such curves. It remains to calculate their contribution to the invariant defined above. By arguments parallel to the proof of [@K3xE Proposition 5] the generating series of these invariants over all $g$ is related to the generating series of reduced stable pair invariants of $S \times {\mathbb{P}}^1$ in class $(\beta_h, 1)$ with incidence conditions $F_i \times x_i, i=1,2,3$.
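As a consistency check, the Yau-Zaslow counts appearing in this argument are easy to generate: with the standard normalization $\Delta(q) = q \prod_{n \geq 1} (1-q^n)^{24}$, one expands the product and inverts the resulting power series. A minimal sketch in Python (the function names are ours, chosen for illustration):

```python
def delta_over_q(N):
    # q-expansion of Delta(q)/q = prod_{n>=1} (1 - q^n)^24, truncated mod q^N
    f = [0] * N
    f[0] = 1
    for n in range(1, N):
        for _ in range(24):
            # multiply in place by (1 - q^n)
            for i in range(N - 1, n - 1, -1):
                f[i] -= f[i - n]
    return f

def yau_zaslow(N):
    # n_h = [1/Delta(q)]_{q^{h-1}} for h = 0, ..., N-1,
    # i.e. the coefficients of the inverse of the series Delta(q)/q
    d = delta_over_q(N)
    inv = [0] * N
    inv[0] = 1
    for k in range(1, N):
        inv[k] = -sum(d[j] * inv[k - j] for j in range(1, k + 1))
    return inv

print(yau_zaslow(5))  # [1, 24, 324, 3200, 25650]
```

The values $1, 24, 324, 3200, \ldots$ reproduce the counts of rational curves in the classes $\beta_h$ for $h = 0, 1, 2, 3$.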
The contribution of the isolated curve $C \subset S \times {\mathbb{P}}^1$ to the stable pair invariant is obtained from a direct modification of the calculation in [@PT1 Section 4.2] to the reduced setting [@MPT]. Translating back to Gromov-Witten theory we find that each curve $C$ contributes $1$ in genus $0$ and $0$ in all higher genera. This concludes the proof. Relative theory of ${\mathbb{P}}^1 \times E$ {#Section_Relative_Theory_of_P1E} -------------------------------------------- Let $E$ be an elliptic curve and consider the curve class $$(1,d) = \iota_{{\mathbb{P}}^1 \ast}( [ {\mathbb{P}}^1 ] ) + \iota_{E \ast}( d[E] ) \in H_2({\mathbb{P}}^1 \times E, {{\mathbb{Z}}})$$ where $\iota_{{\mathbb{P}}^1}, \iota_{E}$ are the inclusions of fibers of the projections to the second and first factor, respectively. We will use the generating series of relative invariants of ${\mathbb{P}}^1 \times E$, $$\begin{gathered} \label{gen_series_p1e} \Big\langle \ \mu \ \Big| \ \alpha\, \prod_{i} \tau_{a_i}(\gamma_i) \ \Big| \ \nu \ \Big\rangle^{{\mathbb{P}}^1 \times E} =\ \sum_{g \geq 0} \sum_{d \geq 0} \Big\langle \ \mu \ \Big| \ \alpha\, \prod_{i} \tau_{a_i}(\gamma_i) \ \Big| \ \nu \ \Big\rangle^{{\mathbb{P}}^1 \times E}_{g, (1,d)} \, u^{2g}q^d \,.\end{gathered}$$ Since the class $(1,d)$ is of degree $1$ over ${\mathbb{P}}^1$, the relative insertions $\mu$ and $\nu$ are cohomology classes on the fibers: $$\mu \in H^{\ast}(0\times E) \quad \text{ and } \quad \nu \in H^{\ast} (\infty \times E) \,.$$ Similar definitions apply also to the case of a single relative divisor. \[P1xE\_Lemma\] (a) The series vanishes unless $$\deg_{{{\mathbb{R}}}}(\mu) + \deg_{{{\mathbb{R}}}}(\nu) \leq 2 \,,$$ where $\deg_{{{\mathbb{R}}}}(\gamma)$ denotes the real degree of $\gamma$. (b)
We have ${\big\langle}\, \omega\, |\, {{{{\mathbb{E}}}^{\vee}(1)}}{\big\rangle}^{{\mathbb{P}}^1 \times E} = {\big\langle}\, {\mathbf{1}}\, |\, {{{{\mathbb{E}}}^{\vee}(1)}}\tau_0({{\mathsf{p}}}) {\big\rangle}^{{\mathbb{P}}^1 \times E} = 1$. (c) Let $D = q \frac{d}{dq}$. Then, $${\big\langle}\, \omega\, |\, {{{{\mathbb{E}}}^{\vee}(1)}}\tau_0({{\mathsf{p}}}) \, {\big\rangle}^{{\mathbb{P}}^1 \times E} = \frac{D \Theta(u,q)}{\Theta(u,q)} \,.$$ \(a) follows since a curve $C \subset {\mathbb{P}}^1 \times E$ in class $(1,d)$ is of the form $$({\mathbb{P}}^1 \times e)\, +\, D$$ where $e \in E$ is a fixed point and $D$ is a union of fibers of the projection ${\mathbb{P}}^1 \times E \to {\mathbb{P}}^1$. Hence for every relative stable map $f$ to ${\mathbb{P}}^1 \times E / \{ 0, \infty \}$ the intersection points over $0$ and over $\infty$ agree, which implies the claim (for example choose cycles representing $\mu$ and $\nu$). Part (b) is [@MPT Lemma 24] and part (c) follows from [@MPT Lemma 26]. Fiber integrals --------------- Let $S$ be the elliptically fibered K3 surface with curve class $\beta_h = B+ hF$ where $B,F$ are the section and fiber class respectively. Recall also the generating series conventions of Section \[Section\_The\_bracket\_notation\]. \[Proposition\_Thm1m=0\] $\displaystyle \Big\langle\, {{\mathbb{E}}}^{\vee}(1) \ \prod_{i=1}^{n} \frac{F}{1-\psi_i} \Big\rangle^{S} = \frac{1}{u^{2n}} \frac{\Theta(u,q)^{2n}}{\Theta(u,q)^2 \Delta(q)} $ By Proposition \[vanishing1\] we have $$\sum_{g \geq 0} \sum_{h \geq 0} {\big\langle}\tau_0(F \boxtimes \omega)^3 {\big\rangle}_{g,(\beta_h,1)}^{S \times {\mathbb{P}}^1} u^{2g} q^{h-1} = \frac{1}{\Delta(q)} \,. \label{gen1}$$ The factor ${\mathbb{P}}^1$ admits an action of ${{\mathbb{C}}}^{\ast}$ which lifts to the moduli space ${{\overline M}}_{g,n}(S \times {\mathbb{P}}^1, (\beta_h,1))$.
Applying the virtual localization formula [@GP] and using the divisor axiom yields $$\label{gen2} {\big\langle}\tau_0(F \boxtimes \omega)^3 {\big\rangle}_{g,(\beta_h,1)}^{S \times {\mathbb{P}}^1} = \Big\langle\, {{\mathbb{E}}}^{\vee}(1) \ \frac{F}{1 - \psi_1} \Big\rangle^S_{g, \beta_h} \,.$$ This proves the claim for $n=1$. For the general case, we degenerate $S$ to the normal cone of a fiber $E$ of the elliptic fibration $S \to {\mathbb{P}}^1$, $$\label{degeneration} S\ \leadsto\ S \, \cup \, ( {\mathbb{P}}^1 \times E )$$ specializing the fiber class $F$ to the ${\mathbb{P}}^1 \times E$ component. The degeneration formula [@Junli1; @Junli2], see also [@MPT Section 6] and [@BOPY Section 3.4] for the modifications in the reduced case, yields $$\label{4242} \Big\langle {{\mathbb{E}}}^{\vee}(1) \, \prod_{i=1}^{n} \frac{F}{1-\psi_i} \Big\rangle^S = \Big\langle\, {{\mathbb{E}}}^{\vee}(1)\ \Big| \ 1 \ \Big\rangle^S \Big\langle\ \omega\ \Big|\ {{\mathbb{E}}}^{\vee}(1) \prod_{i=1}^{n} \frac{F}{1 - \psi_i}\, \Big\rangle^{{\mathbb{P}}^1 \times E}$$ We analyze both terms on the right hand side. By a further degeneration of $S$ (using Lemma \[P1xE\_Lemma\]) and then using the Katz-Klemm-Vafa formula [@MPT] we get $$\label{KKV} \Big\langle\ {{\mathbb{E}}}^{\vee}(1)\ \Big| \ 1 \ \Big\rangle^S = \Big\langle\, {{\mathbb{E}}}^{\vee}(1) \, \Big\rangle^S = \frac{1}{\Theta(u,q)^2 \Delta(q)} \,.$$ For the second term, we degenerate the base ${\mathbb{P}}^1$ to obtain a chain of $n+1$ surfaces isomorphic to ${\mathbb{P}}^1 \times E$. The first $n$ of these each receive a single insertion $F$ weighted by psi classes. Using Lemma \[P1xE\_Lemma\] we obtain $$\label{4343} \Big\langle\ \omega\ \Big|\ {{\mathbb{E}}}^{\vee}(1) \prod_{i=1}^{n} \frac{F}{1 - \psi_i}\, \Big\rangle^{{\mathbb{P}}^1 \times E} = \left( \Big\langle\ \omega\ \Big|\ {{\mathbb{E}}}^{\vee}(1)\, \frac{F}{1 - \psi} \Big|\ 1\ \Big\rangle^{{\mathbb{P}}^1 \times E} \right)^n \,.$$ In case $n=1$ the left hand side is known by the previous evaluations, and we can solve for the factor on the right hand side.
The result is $$\label{assd} \Big\langle\ \omega\ \Big|\ {{\mathbb{E}}}^{\vee}(1) \, \frac{F}{1 - \psi}\ \Big|\ 1\ \Big\rangle^{{\mathbb{P}}^1 \times E} = \frac{\Theta(u,q)^2}{u^2} \,.$$ Inserting this evaluation and the Katz-Klemm-Vafa formula back into the degeneration formula completes the proof. The abelian threefold --------------------- Recall the bracket notation for the generating series of relative invariants of ${\mathbb{P}}^1 \times E$ in class $(1,d)$. We will need the following result. \[riogkegfr\] For ${{\mathsf{p}}}\in H^4({\mathbb{P}}^1 \times E, {{\mathbb{Z}}})$ the point class, $$\Big\langle \ 1\ \Big| \ {{\mathbb{E}}}^{\vee}(1) \ \frac{{{\mathsf{p}}}}{1-\psi} \ \Big\rangle^{{\mathbb{P}}^1 \times E} = \frac{\Theta(u,q)^2}{u^2}$$ The translation action of the elliptic curve on ${\mathbb{P}}^1 \times E$ yields basic vanishing relations on the Gromov-Witten theory of ${\mathbb{P}}^1 \times E$, see [@BOPY Section 3.3] for the parallel case of abelian surfaces and also [@OP3]. A straightforward application here yields $$\Big\langle \ 1\ \Big| \ {{\mathbb{E}}}^{\vee}(1) \ \frac{{{\mathsf{p}}}}{1-\psi} \Big\rangle^{{\mathbb{P}}^1 \times E} = \Big\langle\ \omega\ \Big|\ {{\mathbb{E}}}^{\vee}(1) \ \frac{F}{1 - \psi_1} \, \Big\rangle^{{\mathbb{P}}^1 \times E} \,,$$ where $F$ is the fiber over a point in ${\mathbb{P}}^1$. The claim then follows from the evaluation obtained in the previous subsection. By deformation invariance we may consider the special geometry $$A = E_1 \times E_2 \times E_3 \,,$$ where $E_i$ are elliptic curves, and the curve classes $$(1,1,d) = \iota_{E_1, \ast}([E_1]) + \iota_{E_2 \ast}([E_2]) + \iota_{E_3 \ast}( d [E_3] )\ \in H_2(A, {{\mathbb{Z}}})$$ where $\iota_{E_i} : E_i \hookrightarrow A$ is the inclusion of a fiber of the map forgetting the $i$th factor. For $i \in \{ 1, 2, 3 \}$ let $$H_i \in H^2(A)$$ be the pullback of the point class from the $i$-th factor of $A$.
By [@BOPY Lemma 18], $$\mathsf{N}^{A}_{g, (1,1,d)} = \frac{1}{2} \Big\langle \tau_0( {{\mathsf{p}}}) \tau_0(H_1 H_2) \Big\rangle^{A,\, 3\text{-red}}_{g, (1,1,d)}$$ where the right hand side is the absolute $3$-reduced Gromov-Witten invariant of $A$ with insertions the point class ${{\mathsf{p}}}\in H^6(A, {{\mathbb{Z}}})$ and $H_1 H_2$. We degenerate the factor $E_1$ to a nodal rational curve and resolve. Applying the degeneration formula modified to the reduced case[^8] we obtain $$\big\langle \tau_0( {{\mathsf{p}}}) \tau_0(H_1 H_2) \big\rangle^{A,\text{3-red}}_{g, \beta} = \Big\langle\ 1\ \Big| \ \tau_0( {{\mathsf{p}}}) \tau_0(H_1 H_2)\ \Big|\ 1\ \Big\rangle^{{\mathbb{P}}^1 \times E_2 \times E_3, \text{red}}_{g-1, (1,1,d)} \,,$$ where the right hand side is the $1$-reduced invariant of ${\mathbb{P}}^1 \times E_2 \times E_3$ relative to the fibers over $0$ and $\infty$, and $H_i$ is the pullback of the point class from the $i$-th factor. By a degeneration of the base ${\mathbb{P}}^1$ to a chain of three ${\mathbb{P}}^1$’s and specializing all insertions to the middle factor, we obtain $$\Big\langle\ 1\ \Big| \ \tau_0( {{\mathsf{p}}}) \tau_0(H_1 H_2)\ \Big|\ 1\ \Big\rangle^{{\mathbb{P}}^1 \times E_2 \times E_3, \text{red}}_{g-1, (1,1,d)} \ = \ \Big\langle \tau_0( {{\mathsf{p}}}) \tau_0(H_1 H_2) \Big\rangle^{{\mathbb{P}}^1 \times E_2 \times E_3, \text{red}}_{g-1, (1,1,d)} \,, \label{12345}$$ to which we apply the localization formula to get $$\label{midd} \Big\langle\ {{\mathbb{E}}}^{\vee}(1)\, \frac{{{\mathsf{p}}}}{1-\psi_1}\, \Big\rangle^{E_2 \times E_3, \text{red}}_{g-1, (1,d)} + \Big\langle\ {{\mathbb{E}}}^{\vee}(1)\, \frac{H_2}{1-\psi_1} \, \tau_0({{\mathsf{p}}}) \Big\rangle_{g-1,(1,d)}^{E_2 \times E_3, \text{red}} \,.$$ Let $E = E_3$. We calculate both terms of this sum by the degeneration formula for $$E_2 \times E \ \leadsto \ (E_2 \times E) \cup ({\mathbb{P}}^1 \times E) \,,$$ where the point class ${{\mathsf{p}}}$ and the class $H_2$ are specialized to the ${\mathbb{P}}^1 \times E$ component.
In both cases we will use the evaluation $\langle\, {{\mathbb{E}}}^{\vee}(1)\, | \, \omega \, \rangle^{E_2 \times E_3}_{g, (1,d)} = \delta_{g,1} \delta_{d,0}$ proven in [@BOPY Lemma 8]. The result for the first term is $$\Big\langle\ {{\mathbb{E}}}^{\vee}(1)\, \frac{{{\mathsf{p}}}}{1-\psi_1}\, \Big\rangle^{E_2 \times E_3, \text{red}}_{g-1, (1,d)} = \Big\langle\ 1 \ \Big|\ {{\mathbb{E}}}^{\vee}(1) \frac{{{\mathsf{p}}}}{1-\psi_1} \Big\rangle^{{\mathbb{P}}^1 \times E}_{g-2, (1,d)} \,,$$ and similarly the second term yields $$\begin{aligned} \Big\langle\ {{\mathbb{E}}}^{\vee}(1)\, \frac{H_2}{1-\psi_1} \, \tau_0({{\mathsf{p}}}) \Big\rangle_{g-1,(1,d)}^{E_2 \times E_3, \text{red}} & = \Big\langle\ 1 \ \Big|\ {{\mathbb{E}}}^{\vee}(1) \frac{F}{1-\psi_1} \tau_0({{\mathsf{p}}})\, \Big\rangle^{{\mathbb{P}}^1 \times E}_{g-2, (1,d)} \\ & = \Big\langle\ 1 \ \Big|\ {{\mathbb{E}}}^{\vee}(1) \frac{F}{1-\psi_1} \ \Big|\ \omega \ \Big\rangle^{{\mathbb{P}}^1 \times E}_{g-2, (1,d)}\end{aligned}$$ where $F$ is the class of a fiber over a point in ${\mathbb{P}}^1$, and in the second step we used a further degeneration of the base ${\mathbb{P}}^1$ and Lemma \[P1xE\_Lemma\]. Using Lemma \[riogkegfr\] together with the evaluation of $\big\langle\, \omega\, \big|\, {{\mathbb{E}}}^{\vee}(1)\, \frac{F}{1-\psi}\, \big|\, 1\, \big\rangle^{{\mathbb{P}}^1 \times E}$ obtained earlier, the claim now follows by summing up. Formal series of quasi-modular forms {#Section_Formal_series_of_quasimodular_forms} ==================================== Quasi-modular forms ------------------- The ring of *quasi-modular forms* is the free polynomial algebra $${\mathrm{QMod}}= {{\mathbb{C}}}[ C_2, C_4, C_6 ] \,,$$ where $C_{2k}$ are the Eisenstein series. The natural weight grading $${\mathrm{QMod}}= \bigoplus_{m \geq 0} {\mathrm{QMod}}_m$$ is defined by assigning $C_{2k}$ weight $2k$. For a quasi-modular form $f(q) = \sum_n a_n q^n$, let $$\nu(f) = \mathrm{inf} \{ \, n \, |\, a_n \neq 0\, \}$$ be the order of vanishing of $f$ at $q=0$. If $f$ is a modular form, i.e.
$f \in {{\mathbb{C}}}[C_4, C_6]$, and $f$ is non-zero of weight $m$, then $$\nu(f) < \dim {\mathrm{Mod}}_m, \quad \text{ hence} \quad \nu(f) \leq \frac{1}{12} m \,,$$ where ${\mathrm{Mod}}_m$ is the space of weight $m$ modular forms. Similarly, one may ask if $\nu(f) < \dim {\mathrm{QMod}}_m$ also holds for every non-zero quasi-modular form of weight $m$, see [@KK] for a discussion. For us the following weaker bound proven by Saradha suffices: \[Lemma:QMod\_vanishing\] Let $f$ be a non-zero quasi-modular form of weight $2k$. Then $$\nu(f) \leq \frac{1}{6} k (k+1) \,.$$ The proof in [@Saradha Lemma 3] also yields the stronger result stated here, as has been observed in [@BP]. Formal series ------------- Let $u$ be a formal variable, and consider a power series $$\mathsf{F}(u,q) = \sum_{m \geq 0} f_m(q) u^m$$ in $u$ with coefficients $f_m(q) \in {\mathrm{QMod}}$. Let $\big[ f_m(q) \big]_{q^n}$ denote the coefficient of $q^n$ in $f_m(q)$, and let $$\mathsf{F}_n(u) = \big[ \mathsf{F}(u,q) \big]_{q^n} = \sum_{m \geq 0} \big[ f_m(q) \big]_{q^n} u^m$$ be the series of $n$-th coefficients. \[formel\_prop\] Let $\sigma$ be an even integer, and let $$\mathsf{F}(u,q) = \sum_{m \geq 0} f_m(q) u^m$$ be a formal power series in $u$ satisfying the following conditions: (a) $f_m(q) \in {\mathrm{QMod}}_{m+\sigma}$ for every $m$, (b) $\mathsf{F}_n(u)$ is the Laurent expansion of a rational function in $y$ under the variable change $y = -e^{iu}$, $$\mathsf{F}_n(u) = \sum_{r} c(n,r) y^r \,,$$ (c) $c(n,r) = 0$ unless $r^2 \leq 4 n + 1$, (d) $f_m(q) = 0$ for all $m \leq B(\sigma)$ where $$B(\sigma) = 2 \floor{ \sigma + 1 + \sqrt{2 \sigma^2 + 3 \sigma + 4} } \,.$$ Then $\mathsf{F}(u,q) = 0$. Assume $\mathsf{F}$ is non-zero. Since $\sigma$ is even and all quasi-modular forms have even weight, we have $f_m = 0$ unless $m$ is even. Hence there exists an integer $b$ such that $f_m(q) = 0$ for all $m \leq 2b$, but $f_{2b+2}(q) \neq 0$. Necessarily, $2 b \geq B(\sigma)$.
*Claim.* $\mathsf{F}_n(u) = 0$ for $n < \frac{1}{4} b (b+2)$. *Proof of Claim.* By properties (b) and (c) above, we may write $$\mathsf{F}_n(u) = \sum_{m \geq 0} a_m u^{2m} = \sum_{\ell = - \ell_{\text{max}}}^{\ell_{\text{max}}} c_\ell y^{\ell}$$ for coefficients $a_{m}, c_{\ell} \in {{\mathbb{C}}}$ where $\ell_{\text{max}} = \floor{\sqrt{4n+1}}$. Since $f_m = 0$ for all odd $m$, we find $\mathsf{F}_n(-u) = \mathsf{F}_n(u)$, which yields the symmetry $c_{\ell} = c_{-\ell}$. In particular, we may also write $$\mathsf{F}_n(u) = \sum_{\ell = 0}^{\ell_{\text{max}}} b_{\ell} r^{2 \ell}$$ where $$r = y^{\frac{1}{2}} + y^{-\frac{1}{2}} = - 2 \sin\left( \frac{u}{2} \right) = - u + \frac{1}{24} u^3 + \ldots \,.$$ Since $r = - u + O(u^3)$ we obtain an invertible and upper-triangular relation between the coefficients $\{ a_{\ell} \}_{\ell \geq 0}$ and $\{ b_{\ell} \}_{\ell \geq 0}$. In particular, $a_{\ell} = 0$ for $\ell = 0, \dots, b$ implies $b_{\ell} = 0$ for $\ell = 0, \dots, b$. Since moreover $n < \frac{1}{4} b (b+2)$ implies $\ell_{\text{max}} \leq b$ we find $b_{\ell} = 0$ for all $\ell$ and hence $\mathsf{F}_n = 0$ as claimed. We conclude the proof of Proposition \[formel\_prop\]. By the claim the order of vanishing of $f_{2b+2}(q)$ at $q=0$ is at least $\frac{1}{4} b (b+2)$, $$\frac{1}{4} b (b+2) \leq \nu(f_{2b+2}) \,.$$ But by Lemma \[Lemma:QMod\_vanishing\] and the non-vanishing of $f_{2b+2}$, $$\nu(f_{2b+2}) \leq \frac{1}{6} (b+\sigma/2+1) (b+\sigma/2+2) \,, \label{use_of_vanishing_bound}$$ which is impossible since $2 b \geq B(\sigma)$. A crucial ingredient in the proof of Proposition \[formel\_prop\] was the vanishing Lemma \[Lemma:QMod\_vanishing\] employed in the last inequality above. If we could prove $$\label{strong_bound} \nu(f) < \dim {\mathrm{QMod}}_{m}$$ for all non-zero quasi-modular forms of weight $m$, we could sharpen the bound in (d). While we cannot prove this stronger bound for all $m$, we have verified it for all $m \leq 250$.
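This finite verification can be set up mechanically: the bound $\nu(f) < \dim {\mathrm{QMod}}_m$ holds for every non-zero $f$ of weight $m$ if and only if the square matrix formed by the first $\dim {\mathrm{QMod}}_m$ $q$-coefficients of the monomial basis $\{ C_2^a C_4^b C_6^c : 2a + 4b + 6c = m \}$ is nonsingular. The following sketch in Python illustrates the check for small $m$; the helper names are ours, and we take the Eisenstein series in the normalizations $C_2 = 1 - 24\sum_n \sigma_1(n) q^n$, $C_4 = 1 + 240 \sum_n \sigma_3(n) q^n$, $C_6 = 1 - 504 \sum_n \sigma_5(n) q^n$.

```python
from fractions import Fraction

N = 40  # q-adic precision, sufficient for the weights tested below

def divisor_sigma(n, k):
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def eisenstein(k, c):
    # 1 + c * sum_{n>=1} sigma_{k-1}(n) q^n, truncated mod q^N
    return [Fraction(1)] + [Fraction(c * divisor_sigma(n, k - 1)) for n in range(1, N)]

C2, C4, C6 = eisenstein(2, -24), eisenstein(4, 240), eisenstein(6, -504)

def mul(f, g):
    # product of two truncated q-series
    h = [Fraction(0)] * N
    for i in range(N):
        if f[i]:
            for j in range(N - i):
                h[i + j] += f[i] * g[j]
    return h

def qmod_basis(m):
    # all monomials C2^a C4^b C6^c of weight m = 2a + 4b + 6c
    basis = []
    for a in range(m // 2 + 1):
        for b in range((m - 2 * a) // 4 + 1):
            if (m - 2 * a - 4 * b) % 6 == 0:
                mono = [Fraction(1)] + [Fraction(0)] * (N - 1)
                for f, e in ((C2, a), (C4, b), (C6, (m - 2 * a - 4 * b) // 6)):
                    for _ in range(e):
                        mono = mul(mono, f)
                basis.append(mono)
    return basis

def strong_bound_holds(m):
    # nu(f) < dim QMod_m for every non-zero f of weight m iff the
    # d x d matrix of leading q-coefficients has full rank d
    basis = qmod_basis(m)
    d = len(basis)
    rows = [f[:d] for f in basis]
    rank = 0
    for col in range(d):
        piv = next((r for r in range(rank, d) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(d):
            if r != rank and rows[r][col]:
                t = rows[r][col] / rows[rank][col]
                rows[r] = [x - t * y for x, y in zip(rows[r], rows[rank])]
        rank += 1
    return rank == d

assert all(strong_bound_holds(m) for m in range(2, 21, 2))
```

For $m = 12$ the basis has the expected $\dim {\mathrm{QMod}}_{12} = 7$ elements; pushing the check to $m = 250$ only requires raising the precision $N$.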
This leads to the following partial strengthening of Proposition \[formel\_prop\]. \[formel\_prop\_strenthening\] Assume $\sigma \leq 42$. Then Proposition \[formel\_prop\] holds with property (d) replaced by (d′) $f_m(q) = 0$ for $m \leq B'(\sigma)$, where $B'(\sigma)$ is $$2 \cdot \mathrm{min} \Big\{ \widetilde{b} \in {{\mathbb{Z}}}\ \Big|\ \frac{1}{4} b (b+2) > \dim {\mathrm{QMod}}_{\sigma + 2 b + 2} - 1 \text{ for all } b \geq \widetilde{b} \Big\} \,.$$ This follows by an argument identical to the proof of Proposition \[formel\_prop\] except for the following steps: If $b \leq 103$, then $2b+2+\sigma \leq 250$ and we use the stronger bound in place of the Saradha bound. This leads to a contradiction by definition of $B'(\sigma)$. If $b > 103$, then by assumption $f_m = 0$ for all $m \leq 208$. In particular, property (d) of Proposition \[formel\_prop\] holds and we can apply Proposition \[formel\_prop\]. **Remarks.** (a) Since $$\dim {\mathrm{QMod}}_{2 \ell} = \frac{1}{12} \left( \ell^2 + 6 \ell + 12 \right) - c(\ell)$$ where $|c(\ell)| < 1$, the inequality $ b (b+2) / 4 > \dim {\mathrm{QMod}}_{\sigma + 2 b + 2}$ holds for all $b$ sufficiently large. In particular, $B'(\sigma)$ defined above is well-defined and finite. The first values are given in the following table: $\sigma$ $<-2$ $-2$ $0$ $2$ $4$ $6$ $8$ $10$ $12$ $14$ $16$ -------------- ----------- ------ ----- ------ ------ ------ ------ ------ ------ ------ ------ $B'(\sigma)$ $-\infty$ $2$ $6$ $10$ $12$ $14$ $18$ $20$ $24$ $26$ $28$ In particular, for $\sigma < -2$ property (d’) of Lemma \[formel\_prop\_strenthening\] is always satisfied. \(b) We may obtain from Proposition \[formel\_prop\] a similar statement for odd $\sigma$ by integrating $\mathsf{F}$ formally with respect to $u$. \(c) The coefficient bound $r^2 \leq 4n + 1$ in Proposition \[formel\_prop\] (c) is the index $1$ case of the Fourier coefficient bound for weak Jacobi forms [@EZ].
Surprisingly, the proof of Proposition \[formel\_prop\] fails for higher index since these coefficient constraints become weaker, while the growth of $\dim {\mathrm{QMod}}_{2 \ell}$ remains the same. The analog of $B'(\sigma)$ is no longer well-defined. \(d) In applications below, the coefficient of $u^{2g + 2}$ in $\mathsf{F}(u,q)$ is a series of genus $g$ Gromov-Witten invariants of K3 surfaces. For low $\sigma$, checking the vanishing of these coefficients in the range $2g + 2 \leq B(\sigma)$ is feasible. \(e) Proposition \[formel\_prop\] was motivated by the proof of the Kudla modularity conjecture using formal series of Jacobi forms [@BR]. Genus induction {#Section:Genus_induction} =============== Overview -------- Let $S$ be an elliptic K3 surface with section, let $B$ and $F$ be the section and fiber class respectively, set $\beta_h = B + h F$ where $h \geq 0$, and let ${{\mathsf{p}}}\in H^4(S,{{\mathbb{Z}}})$ be the class of a point. Recall the generating series notation for the surface $S$. In this section we will prove the following evaluation: \[mainthm\_1\] For all $m,n \geq 0$, $$\Big\langle\, {{\mathbb{E}}}^{\vee}(1) \ \prod_{i=1}^{m} \frac{{{\mathsf{p}}}}{1-\psi_i} \prod_{i=m+1}^{m+n} \frac{F}{1-\psi_i} \Big\rangle^{S} = \frac{ ( \mathbf{G}(u,q) - 1 )^{m} \Theta(u,q)^{2n} }{ u^{2m+2n} \Theta(u,q)^2 \Delta(q)} .$$ Formal series {#Subsection_proof_of_Theorem_1_Case2} ------------- Theorem \[mainthm\_1\] will follow from the following evaluation and a degeneration argument.
\[thm\_middle\] $\displaystyle \Big\langle\, {{\mathbb{E}}}^{\vee}(1) \frac{{{\mathsf{p}}}}{1 - \psi_1} \frac{F}{1 - \psi_2}\, \Big\rangle^{S} = \frac{1}{u^4} \, \frac{\mathbf{G}(u,q) - 1}{\Delta(q)} $ Let $\omega \in H^2({\mathbb{P}}^1)$ be the class of a point, and for $\gamma \in H^{\ast}(S)$ let $$\gamma \boxtimes \omega = \pi_1^{\ast}(\gamma) \cup \pi_2^{\ast}(\omega) \in H^{\ast}(S \times {\mathbb{P}}^1)$$ where $\pi_i$ is the projection of $S \times {\mathbb{P}}^1$ to the $i$th factor. Define the formal series $$\label{Series_F} \mathsf{F}(u,q) = \Delta(q) \cdot \sum_{g\in {{\mathbb{Z}}}} \sum_{h \geq 0} \big\langle \tau_0({{\mathsf{p}}}\boxtimes \omega) \tau_0(F \boxtimes \omega)^3 \big\rangle^{S \times {\mathbb{P}}^1, \bullet}_{g, (\beta_h,2)} u^{2g+2} q^{h-1}$$ where the bracket on the right hand side denotes disconnected absolute Gromov-Witten invariants of $S \times {\mathbb{P}}^1$. \[wer\] With $D = q \frac{d}{dq}$, $$\mathsf{F}(u,q) = u^4 \Delta(q) \Big\langle\, {{\mathbb{E}}}^{\vee}(1) \, \frac{{{\mathsf{p}}}}{1 - \psi_1} \, \frac{F}{1 - \psi_2}\, \Big\rangle^{S} + 1 + \Theta(u,q) \cdot D \Theta(u,q)$$ By Proposition \[vanishing1\] the contribution from disconnected curves to $$\label{bbb} \sum_{g\in {{\mathbb{Z}}}} \sum_{h \geq 0} \big\langle \tau_0({{\mathsf{p}}}\boxtimes \omega) \tau_0(F \boxtimes \omega)^3 \big\rangle^{S \times {\mathbb{P}}^1, \bullet}_{g, (\beta_h,2)} u^{2g+2} q^{h-1}$$ is $\frac{1}{\Delta(q)}$.[^9] For the contribution from connected curves we apply the localization formula, specializing ${{\mathsf{p}}}\boxtimes \omega$ and one $F \boxtimes \omega$ insertion to the fiber over $\infty$, and the other insertions to the fiber over $0 \in {\mathbb{P}}^1$.
We find that the series equals $$\frac{1}{\Delta(q)} + u^4 \Big\langle {{\mathbb{E}}}^{\vee}(1) \frac{{{\mathsf{p}}}}{1 - \psi_1} \frac{F}{1 - \psi_2} \Big\rangle^{S} + u^4 \Big\langle {{\mathbb{E}}}^{\vee}(1) \frac{F}{1 - \psi_1} \frac{F}{1 - \psi_2} \tau_0({{\mathsf{p}}}) \Big\rangle^{S}.$$ We evaluate the third term by degenerating $S$ to a union of $S$ with four bubbles of ${\mathbb{P}}^1 \times E$, $$S \leadsto S \cup ({\mathbb{P}}^1 \times E) \cup \ldots \cup ({\mathbb{P}}^1 \times E)$$ where the first three copies of ${\mathbb{P}}^1 \times E$ receive a single insertion each. By the evaluations obtained above and Lemma \[P1xE\_Lemma\], $$\begin{gathered} u^4 \Big\langle\, {{\mathbb{E}}}^{\vee}(1) \cdot \frac{F}{1 - \psi_1} \cdot \frac{F}{1 - \psi_2}\, \tau_0({{\mathsf{p}}}) \Big\rangle^{S} = u^4 {\big\langle}{{\mathbb{E}}}^{\vee}(1) \big| 1 {\big\rangle}^S \\ \cdot \Big( {\big\langle}\omega \big| \, {{{{\mathbb{E}}}^{\vee}(1)}}\, F / (1-\psi) \, \big| 1 {\big\rangle}^{{\mathbb{P}}^1 \times E} \Big)^2 \cdot {\big\langle}\omega \big| {{{{\mathbb{E}}}^{\vee}(1)}}\tau_0({{\mathsf{p}}}) \big| 1 {\big\rangle}^{{\mathbb{P}}^1 \times E} \cdot {\big\langle}\omega \big| {{{{\mathbb{E}}}^{\vee}(1)}}{\big\rangle}^{{\mathbb{P}}^1 \times E} \\ = \frac{\Theta(u,q) D \Theta(u,q)}{\Delta(q)} \,. \qedhere\end{gathered}$$ \[rgrgreg\] The series $\mathsf{F}(u,q)$ satisfies properties (a), (b), (c) of Proposition \[formel\_prop\] with $\sigma=0$. **Property (a).** For $m \in {{\mathbb{Z}}}$ let $$f_m(q) = \big[ \, \mathsf{F}(u,q) \, \big]_{u^m}$$ be the coefficient of $u^m$ in $\mathsf{F}(u,q)$. For odd $m$, $f_m(q)$ vanishes. For even $m$ we have by Lemma \[wer\] $$f_m(q) = \Delta(q) \sum_{h \geq 0} \Big\langle {{\mathbb{E}}}^{\vee}(1) \, \frac{{{\mathsf{p}}}}{1 - \psi_1} \, \frac{F}{1 - \psi_2} \Big\rangle^{S}_{g, \beta_h} q^{h-1} + \delta_{m0} + \big[ \Theta \cdot D \Theta \big]_{u^m}$$ where $m = 2g+2$.
By the refinement [@BOPY Theorem 9] of the quasi-modularity result proven in [@MPT], the first term on the right hand side is a quasi-modular form of weight $2g+2$. By direct verification the last two terms are also quasi-modular of weight $m$. Hence $$f_m(q) \in {\mathrm{QMod}}_m \,.$$ This verifies property (a). **Property (b).** By an argument parallel to the proof of [@K3xE Proposition 5] the GW/Pairs correspondence [@PaPix1; @PaPix2] holds for absolute disconnected Gromov-Witten invariants of $S \times {\mathbb{P}}^1$ in class $(\beta_h,d)$. In particular, the coefficient of every $q^{h-1}$ in the series defining $\mathsf{F}(u,q)$ is the Laurent expansion of a rational function in $y$ under the variable transformation $y = - e^{iu}$. This implies the claim for $\mathsf{F}(u,q)$. **Property (c).** For each $h \geq 0$ consider the Laurent expansion $$\sum_{g} u^{2g+2} \big\langle \tau_0({{\mathsf{p}}}\boxtimes \omega) \tau_0(F \boxtimes \omega)^3 \big\rangle^{S \times {\mathbb{P}}^1, \bullet}_{g, (\beta_h,2)} = \sum_{r \in {{\mathbb{Z}}}} c(h,r) y^r \label{u-expansion}$$ of the rational function in $y = - e^{iu}$. By the GW/Pairs correspondence[^10] we have $$c(h,r) = \big\langle \tau_0({{\mathsf{p}}}\boxtimes \omega) \tau_0(F \boxtimes \omega)^3 \big\rangle^{S \times {\mathbb{P}}^1, \text{Pairs}}_{(\beta_h,2), r+2} \,,$$ where the right hand side is the reduced stable pairs invariant of $S \times {\mathbb{P}}^1$ in class $(\beta_h,2)$ with Euler characteristic $r+2$. We will prove the vanishing of $c(h,r)$ for $r^2 > 4h+1$ in three steps. **Step 1.** $c(h,r) = 0$ for $r < -\sqrt{4 h + 1}$. By deformation invariance we may assume $\beta_h$ is irreducible. Let $F_i, i=1,2,3$ be generic disjoint smooth submanifolds in class $F$, let $x_1, x_2,x_3 \in {\mathbb{P}}^1$ be distinct points, and let $P \in S \times {\mathbb{P}}^1$ be a generic point.
Let $$\mathsf{P}(\beta_h, n)$$ denote the moduli space of stable pairs in $S \times {\mathbb{P}}^1$ of class $(\beta_h,2)$, Euler characteristic $n$, and whose underlying support curve is incident to $F_i \times x_i$ for $i \in \{ 1,2,3 \}$ and to the point $P$. We claim $\mathsf{P}(\beta_h, n)$ is empty if $n < 2 -\sqrt{4h+1}$. Indeed, let $[ {{\mathcal O}}_X \to {{\mathcal F}}] \in \mathsf{P}(\beta_h, n)$ with underlying support curve $C$. If $C$ is disconnected and incident to $P$ and $x_i \times F_i, i=1,2,3$, then $C$ is a disjoint union of two copies of ${\mathbb{P}}^1$. Hence, $$n = \chi({{\mathcal F}}) \geq \chi({{\mathcal O}}_C) \geq 2 \,.$$ If $C$ is connected, the incidence conditions imply that $C$ is irreducible and reduced. Then by Theorem \[thm\_CK\] the arithmetic genus $g = g_a(C) = 1 - \chi({{\mathcal O}}_C)$ satisfies $$h \geq g + \alpha( g - \alpha - 1 )$$ where $\alpha = \floor{g/2}$, which implies $n = \chi({{\mathcal F}}) \geq \chi({{\mathcal O}}_C) \geq 2 - \sqrt{4h+1}$. Since $n = r+2$, Step 1 is complete. **Step 2.** There exist an integer $N \geq 0$ and $n_{g,h} \in {{\mathbb{Q}}}$ such that $$\sum_r c(h,r) y^r = \sum_{g=-N}^N n_{g,h} (y^{1/2} + y^{-1/2})^{2g+2} \,.$$ *Proof.* By Lemma \[wer\] and the expansion of $\Theta(u,q)$ in $y = -e^{iu}$ it is enough to show that for all $h$ $$\sum_{g \geq 0} \Big\langle\, {{\mathbb{E}}}^{\vee}(1) \, \frac{{{\mathsf{p}}}}{1 - \psi_1} \, \frac{F}{1 - \psi_2}\, \Big\rangle^{S}_{g,\beta_h} u^{2g+2} = \sum_{g=-N}^{N} n_{g,h} (y^{\frac{1}{2}} + y^{-\frac{1}{2}})^{2g+2} \label{301}$$ for some $N$ and some $n_{g,h}$ under the variable change $y = -e^{iu}$. For this, we will relate the left hand side to the Gromov-Witten invariants of $S \times E$, where $E$ is an elliptic curve.
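As a numerical consistency check of the inequality in Step 1 (purely illustrative, in Python): with $\alpha = \floor{g/2}$, the genus bound $h \geq g + \alpha(g-\alpha-1)$ forces $4h+1 \geq (g+1)^2$, which is exactly the statement $\chi({{\mathcal O}}_C) = 1-g \geq 2 - \sqrt{4h+1}$.

```python
import math

def min_h(g):
    # smallest h allowed by the genus bound h >= g + alpha*(g - alpha - 1),
    # with alpha = floor(g/2)
    alpha = g // 2
    return g + alpha * (g - alpha - 1)

for g in range(500):
    h = min_h(g)
    # the bound forces 4h+1 >= (g+1)^2 ...
    assert 4 * h + 1 >= (g + 1) ** 2
    # ... equivalently chi(O_C) = 1 - g >= 2 - sqrt(4h+1)
    assert 1 - g >= 2 - math.sqrt(4 * h + 1) - 1e-9
```

For even $g = 2a$ the bound is sharp: $4h+1 = (2a+1)^2$, so the estimate cannot be improved in this form.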
By degenerating two ${\mathbb{P}}^1 \times E$-bubbles off from $S$, and by the Katz-Klemm-Vafa formula and , we have $$u^4 \Big\langle\, {{\mathbb{E}}}^{\vee}(1) \, \frac{{{\mathsf{p}}}}{1 - \psi_1} \, \frac{F}{1 - \psi_2}\, \Big\rangle^{S} = \frac{u^2}{\Delta(q)} \Big\langle\, \omega \, \Big| \, {{\mathbb{E}}}^{\vee}(1) \, \frac{{{\mathsf{p}}}}{1 - \psi_1} \Big\rangle^{{\mathbb{P}}^1 \times E} \,.$$ On the other side, let $\omega \in H^2(E,{{\mathbb{Z}}})$ be the class of a point and let $$\big\langle\tau_0(F \boxtimes \omega) \big\rangle^{S \times E, \bullet} = \sum_h \sum_{g} {\big\langle}\tau_0(F \boxtimes \omega) {\big\rangle}^{S \times E, \bullet}_{g, (\beta_h, 1)} u^{2g-2} q^{h-1}$$ be the generating series of disconnected Gromov-Witten invariants of $S \times E$. By degenerating $E$ to a nodal curve and resolving we have $$\label{302} \big\langle\tau_0(F \boxtimes \omega) \big\rangle^{S \times E, \bullet} = \sum_{\gamma} \big\langle \gamma, \gamma^{\vee} \Big| \tau_0(F \boxtimes \omega) \big\rangle^{S \times {\mathbb{P}}^1 / \{ 0, \infty \}, \bullet}$$ where $\gamma$ runs over a basis of $H^{\ast}(S, {{\mathbb{Q}}})$ with $\gamma^{\vee}$ the dual basis, and we have written $\gamma$ for the weighted partition $(1,\gamma)$. Degenerating the base ${\mathbb{P}}^1$ to two copies of ${\mathbb{P}}^1$ with the non-relative point specializing to one, and the relative marked points specializing to the other, the right hand side of is $$\label{300} \sum_{\gamma} \big\langle \gamma, \gamma^{\vee}, F \big\rangle^{S \times {\mathbb{P}}^1 / \{ 0, 1, \infty\} , \bullet} + 24 \big\langle \, {{\mathsf{p}}}\, \Big|\, \tau_0(F \boxtimes \omega) \big\rangle^{S \times {\mathbb{P}}^1, \bullet} \,.$$ By arguments parallel[^11] to the proof of Proposition \[vanishing1\], only genus $0$ curves contribute to the first term in . 
Hence, $$\sum_{\gamma} \big\langle \gamma, \gamma^{\vee}, F \big\rangle^{S \times {\mathbb{P}}^1 / \{ 0, 1, \infty\} , \bullet} = g(q)$$ for some power series $g(q)$ independent of $u$. By using the Katz-Klemm-Vafa formula for the disconnected part, and the localization formula for the connected part, the second term of is $$\frac{24}{\Theta(u,q)^2 \Delta(q)} + 24 u^2 \Big\langle {{\mathbb{E}}}^{\vee}(1) \frac{{{\mathsf{p}}}}{1-\psi_1} \Big\rangle^S$$ which by a further degeneration $S \leadsto S \cup ({\mathbb{P}}^1 \times E)$ is $$\frac{24}{\Theta(u,q)^2 \Delta(q)} \left( 1 + u^2 \Big\langle\, \omega \, \Big| \, {{\mathbb{E}}}^{\vee}(1) \, \frac{{{\mathsf{p}}}}{1 - \psi_1} \Big\rangle^{{\mathbb{P}}^1 \times E} \right) \,.$$ Combining everything, we have therefore $$\begin{gathered} u^2 \Big\langle\, \omega \, \Big| \, {{\mathbb{E}}}^{\vee}(1) \, \frac{{{\mathsf{p}}}}{1 - \psi_1} \Big\rangle^{{\mathbb{P}}^1 \times E} = \frac{\Theta(u,q)^2 \Delta(q)}{24} \left( \Big\langle\tau_0(F \boxtimes \omega) \Big\rangle^{S \times E, \bullet} - g(q) \right) - 1 \,. \end{gathered}$$ By the GW/Pairs correspondence for $S \times E$ in primitive classes, see [@K3xE Proposition 5], the series $\langle\tau_0(F \boxtimes \omega) \rangle^{S \times E, \bullet}$ equals a series of reduced stable pair invariants for $X$ under $y = -e^{iu}$. By [@ReducedSP Thm. 1] these can be evaluated by the Behrend function weighted Euler characteristic of the quotient of the moduli space of stable pairs by the translation action by the elliptic curve. The result now follows from [@ReducedSP Thm. 2] or alternatively [@PTBPS Section 4, Appendix] (since the classes $(\beta_h, 1)$ are reduced in the sense of [@PTBPS]). **Step 3.** $c(h,r) = 0$ for $r > \sqrt{4 h + 1}$.
*Proof.* For every $h$, consider the rational function $$f(y) = \sum_r c(h,r) y^r = \sum_{g=-N}^N n_{g,h} (y^{1/2} + y^{-1/2})^{2g+2} \,.$$ Substituting $y = -e^{iu}$ and taking the Laurent expansion around $u=0$, we obtain the equality of formal Laurent series $$f(-e^{iu}) = \sum_{g} u^{2g+2} \big\langle \tau_0({{\mathsf{p}}}\boxtimes \omega) \tau_0(F \boxtimes \omega)^3 \big\rangle^{S \times {\mathbb{P}}^1, \bullet}_{g, (\beta_h,2)} \,.$$ By considering a generic K3 surface and a generic choice of cycles representing the incidence conditions, a direct check shows $$\big\langle \tau_0({{\mathsf{p}}}\boxtimes \omega) \tau_0(F \boxtimes \omega)^3 \big\rangle^{S \times {\mathbb{P}}^1, \bullet}_{g, (\beta_h,2)} = 0$$ for $g \leq -2$. Hence, $f(-e^{iu})$ is a power series in $u$: $$f(-e^{iu}) = a_0 + a_2 u^2 + a_4 u^4 + \ \ldots , \quad \quad a_i \in {{\mathbb{C}}}\,.$$ Since $$(y^{1/2} + y^{-1/2})^{2g+2} = u^{2g+2} + O(u^{2g+4})$$ this shows $n_{g,h} = 0$ for $g \leq -2$. Hence, $f(y)$ is a finite Laurent *polynomial* in $y$, $$f(y) = \sum_{r=-M}^{M} c(h,r) y^r = \sum_{g=-1}^{N} n_{g,h} (y^{1/2} + y^{-1/2})^{2g+2} \,.$$ Since $f$ is symmetric under $y \mapsto y^{-1}$, we conclude $$c(h,r) = c(h,-r) \,.$$ The claim of Step 3 now follows from Step 1 above. The proof of Property (c) for $\mathsf{F}(u,q)$ is now complete. Proof of Theorem \[thm\_middle\] {#Subsection_Proof_Thm_middle} -------------------------------- Let $\mathsf{F}(u,q)$ be the formal series defined in . By Lemma \[wer\] it is enough to show $$\mathsf{F}(u,q) = \mathbf{G}(u,q) + \Theta(u,q) \cdot D \Theta(u,q) \,. \label{utut}$$ By Proposition \[rgrgreg\] the left hand side satisfies the properties (a)-(c) of Proposition \[formel\_prop\].
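Two elementary facts used in Step 3 can be checked directly: the Laurent coefficients of $(y^{1/2}+y^{-1/2})^{2g+2}$ are binomial, hence symmetric under $y \mapsto y^{-1}$; and since $y + y^{-1} = -2\cos u$ for $y = -e^{iu}$, one has $(y^{1/2}+y^{-1/2})^{2g+2} = (2 - 2\cos u)^{g+1} = u^{2g+2} + O(u^{2g+4})$. A quick confirmation (Python, illustrative only):

```python
import math

def coeffs(g):
    # Laurent coefficients of (y^{1/2} + y^{-1/2})^{2g+2}
    #   = sum_k binom(2g+2, k) y^{k - (g+1)}
    n = 2 * g + 2
    return {k - (g + 1): math.comb(n, k) for k in range(n + 1)}

# symmetry c(r) = c(-r)
for g in range(6):
    c = coeffs(g)
    assert all(c[r] == c[-r] for r in c)

# leading order u^{2g+2} under y = -exp(iu)
u = 1e-3
for g in range(4):
    ratio = (2 - 2 * math.cos(u)) ** (g + 1) / u ** (2 * g + 2)
    assert abs(ratio - 1) < 1e-3
```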
Since we may rewrite $$\mathbf{G}(u,q) = \frac{1}{12} \varphi_{0,1}(z,\tau) - 2 C_2(q) \varphi_{-2,1}(z,\tau)$$ where $q = e^{2 \pi i \tau}, u = 2 \pi z$ and $\varphi_{0,1}, \varphi_{-2,1}$ are the weak Jacobi forms of index $1$ defined in [@EZ Section 9], properties (a)-(c) of Proposition \[formel\_prop\] hold for $\mathbf{G}$, and similarly for $\Theta \cdot D \Theta$.[^12] Hence by Proposition \[formel\_prop\] resp. Lemma \[formel\_prop\_strenthening\], we need to check only the coefficients of $u^m$ where $m \leq 6$, or equivalently, since $m=2g+2$, the genera $0 \leq g \leq 2$. For this, we may reduce by Lemma to Gromov-Witten invariants of a K3 surface with only fiber and point insertions. These can be computed for fixed genus by a degeneration argument, see [@MPT] or Appendix \[Appendix\_K3\]. Proof of Theorem \[mainthm\_1\] {#Subsection_Proof_of_mainthm} ------------------------------- Consider the degeneration of $S$ to the union of $S$ with $m+n+1$ bubbles of ${\mathbb{P}}^1 \times E$, $$S \leadsto S \cup \underbrace{({\mathbb{P}}^1 \times E) \cup \ldots \cup ({\mathbb{P}}^1 \times E)}_{m+n+1} \,.$$ Applying the degeneration formula to $$\Big\langle {{\mathbb{E}}}^{\vee}(1)\ \prod_{i=1}^{m} \frac{{{\mathsf{p}}}}{1-\psi_i} \prod_{i=1}^{n} \frac{F}{1-\psi_i} \Big\rangle^{S}$$ with the first $m+n$ copies of ${\mathbb{P}}^1 \times E$ receiving a single insertion each, yields by Lemma \[P1xE\_Lemma\] $$\begin{gathered} \label{pppp} \Big\langle\ {{\mathbb{E}}}^{\vee}(1)\ \Big| \ 1 \ \Big\rangle^{S} \left( \Big\langle\ \omega \ \Big| \ {{\mathbb{E}}}^{\vee}(1) \frac{F}{1-\psi_1} \ \Big| \ 1 \ \Big\rangle^{{\mathbb{P}}^1 \times E} \right)^{n} \\ \cdot \left( \Big\langle\ \omega \ \Big| \ {{\mathbb{E}}}^{\vee}(1) \frac{{{\mathsf{p}}}}{1-\psi_1} \ \Big| \ 1 \ \Big\rangle^{{\mathbb{P}}^1 \times E} \right)^{m} \,.\end{gathered}$$ The first term on the right is the Katz-Klemm-Vafa formula , and the second term is determined by .
By solving for the third term in the case $m=n=1$ using the result of Theorem \[thm\_middle\] we find $$\Big\langle\ \omega \ \Big| \ {{\mathbb{E}}}^{\vee}(1) \frac{{{\mathsf{p}}}}{1-\psi_1} \ \Big| \ 1 \ \Big\rangle^{{\mathbb{P}}^1 \times E} = \frac{\mathbf{G}(u,q) - 1}{u^2} \,.$$ Inserting everything back into completes the proof. Further invariants ------------------ In principle, the formal method used above can also be applied to evaluate other Gromov-Witten invariants of $S \times {\mathbb{P}}^1$ in classes $(\beta_h,2)$. For example consider the relative invariants $$\label{ffffg} \Big\langle (1,F)^2 , D(F) , (1,F)^2 \Big\rangle^{S \times {\mathbb{P}}^1, \bullet}_{g,(\beta_h,2)},$$ which, under the GW/Hilb correspondence, count rational curves in $\operatorname{Hilb}^2(S)$ incident to $2$ fibers of a Lagrangian fibration $\operatorname{Hilb}^2(S) \to {\mathbb{P}}^2$ [@HilbK3]. The appropriate generating series associated to satisfies almost all conditions needed for Proposition \[formel\_prop\]. (Showing property (c) for $r > \sqrt{4n+1}$ requires a BPS expansion parallel to the one used in Step 2 of the proof of Proposition \[rgrgreg\], for which we do not have a full argument at the moment.) The modular weight $\sigma$ takes the lowest possible value, namely $\sigma = -2$. Therefore we expect to be determined by formal properties and the evaluation in genus $0$ alone (which is the Yau-Zaslow formula). Similarly, the space of quasi-Jacobi forms of index $1$ and weight $-2$ has dimension $1$, and is spanned by $$\varphi_{-2,1}(z,\tau) = \Theta(u,q)^2 \,.$$ By comparison, and without any further calculation, we find that $\frac{\Theta(u,q)^2}{\Delta(q)}$ is the generating series for . By localization and degeneration the Gromov-Witten invariants of $S \times {\mathbb{P}}^1$ reduce to linear Hodge integrals on the K3 surface.
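The prefactor $1/\Delta(q)$ appearing throughout is the inverse of the modular discriminant $\Delta(q) = q \prod_{n \geq 1} (1-q^n)^{24}$, whose $q$-expansion has the Ramanujan tau numbers as coefficients. The expansion is easy to generate; the following snippet (Python, purely illustrative) computes the first coefficients:

```python
def delta_coefficients(N):
    # q-expansion of Delta(q) = q * prod_{n>=1} (1 - q^n)^24, up to q^N;
    # coeffs[k] is the coefficient of q^k
    coeffs = [0] * (N + 1)
    coeffs[1] = 1  # the leading factor q
    for n in range(1, N + 1):
        for _ in range(24):  # multiply by (1 - q^n) twenty-four times
            for k in range(N, n - 1, -1):
                coeffs[k] -= coeffs[k - n]
    return coeffs

# Ramanujan tau values tau(1), ..., tau(5)
assert delta_coefficients(5)[1:] == [1, -24, 252, -1472, 4830]
```

In particular $1/\Delta(q) = q^{-1} + 24 + O(q)$, consistent with the $q^{h-1}$ normalization of the generating series above.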
The reasoning above provides some explanation of the ubiquity of Jacobi forms and particularly of $\Theta(u,q)$ in the enumerative geometry of K3 surfaces. Relative invariants of $\text{K3} \times {\mathbb{P}}^1$ {#Section:Relative_Invariants_of_P1K3} ======================================================== Overview -------- The main objective of this section is to prove the GW/Hilb correspondence in degree $2$ (Theorem \[GWHilb\_thm\]). In [@HilbK3] the full genus $0$ three point theory of $\operatorname{Hilb}^2(S)$ for primitive classes has been determined by first calculating five basic cases, and then repeatedly applying the WDVV equations to solve for all other invariants. Here we follow a similar approach. In Section \[Section\_Relations\] we first show the WDVV equations for genus $0$ invariants on $\operatorname{Hilb}^d(S)$ are compatible with a corresponding set of equations for the relative invariants of $S \times {\mathbb{P}}^1$ obtained from the degeneration formula. In Section \[Section\_Proof\_Of\_MainThm\_1b\], independently of the rest, we prove Theorem \[mainthm\_1b\]. In Section \[Section\_Special\_cases\_in\_degree\_2\] we use a combination of standard methods and Theorem \[mainthm\_1\] to calculate the same five basic series as in the $\operatorname{Hilb}^d(S)$ case. Since these series match those on the $\operatorname{Hilb}^d(S)$ side, this completes the proof. Finally, in Section \[Section\_Proof\_of\_SxE\_Theorem\] we prove Theorem \[Theorem\_K3xE\]. Throughout the section we will repeatedly use the localization and the degeneration formula, see for example [@GP; @GV; @FPM] and [@Junli1; @Junli2; @MP]. Relations {#Section_Relations} --------- Let $S$ be a K3 surface. Let $\{ \gamma_i \}_i$ be a fixed basis of $H^{\ast}(S)$.
We identify a partition $\mu = \{ (\mu_j, \gamma_{i_j}) \}$ weighted by the basis $\{ \gamma_i \}$ with the class $$\frac{1}{{{\mathfrak{z}}}(\mu)} \prod_j {{\mathfrak{p}}}_{-\mu_j}(\gamma_{i_j}) v_{\varnothing} \in H^{\ast}(\operatorname{Hilb}^{|\mu|}(S))$$ on the Hilbert scheme, where ${{\mathfrak{z}}}(\mu) = |\operatorname{Aut}(\mu)| \prod_{i}\mu_i$. Let also $\deg(\mu)$ denote the complex cohomological degree of $\mu$ in $\operatorname{Hilb}^d(S)$, $$\mu \in H^{2 \deg(\mu)}(\operatorname{Hilb}^d(S)) \,.$$ Since $\{ \gamma_i \}$-weighted partitions of size $d$ form a basis for the cohomology of $\operatorname{Hilb}^d(S)$, the cup product $\mu \cup \nu$ of cohomology weighted partitions $\mu, \nu$ can be uniquely expressed as a formal linear combination of weighted partitions: $$\mu \cup \nu = \sum_{\lambda} c_{\mu \nu}^{\lambda} \lambda$$ where the sum runs over all weighted partitions of size $|\mu|$ and $c_{\mu \nu}^{\lambda} \in {{\mathbb{Q}}}$ are coefficients. When $\mu$ or $\nu$ is a divisor class on $\operatorname{Hilb}^d(S)$, explicit formulas for $\mu \cup \nu$ are surveyed in [@Lehn]. Let $\mu, \nu, \rho$ be cohomology weighted partitions of size $d$, and let $\beta \in H_2(S,{{\mathbb{Z}}})$ be a curve class. We will require the modified bracket $$\big\langle \, \mu ,\nu , \rho \, \big\rangle^{S \times {\mathbb{P}}^1, \star}_{\beta} = (-iu)^{l(\mu) + l(\nu) + l(\rho) - d} \sum_{g \in {{\mathbb{Z}}}} \big\langle \, \mu ,\nu , \rho \, \big\rangle^{S \times {\mathbb{P}}^1,\bullet}_{g, (\beta,d)} u^{2g-2} \,,$$ where the bracket on the right hand side denotes disconnected Gromov-Witten invariants of $S \times {\mathbb{P}}^1 / \{ 0,1,\infty\}$ with relative insertions $\mu,\nu,\rho$. Since the degree $d$ is determined by the partition, it is omitted from the notation on the left hand side.
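The combinatorial factor ${{\mathfrak{z}}}(\mu) = |\operatorname{Aut}(\mu)| \prod_i \mu_i$ is straightforward to compute: $\operatorname{Aut}(\mu)$ permutes identical weighted pairs $(\mu_j, \gamma_{i_j})$. A small helper (Python, illustrative; the string labels stand in for the cohomology classes $\gamma_i$):

```python
from collections import Counter
from math import factorial, prod

def z(mu):
    # z(mu) = |Aut(mu)| * prod_j mu_j for a cohomology-weighted partition,
    # given as a list of (part, label) pairs; Aut permutes identical pairs
    aut = prod(factorial(m) for m in Counter(mu).values())
    return aut * prod(part for part, _ in mu)

# {(2, F), (1, F), (1, F)}: |Aut| = 2! from the two copies of (1, F),
# product of the parts is 2, hence z = 4
assert z([(2, "F"), (1, "F"), (1, "F")]) == 4
# {(1, p), (1, F)}: all pairs distinct, so z = 1
assert z([(1, "p"), (1, "F")]) == 1
```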
When the entries $\mu, \nu, \rho$ are formal linear combinations of cohomology weighted partitions, the bracket $\langle \mu, \nu, \rho \rangle^{S \times {\mathbb{P}}^1, \star}$ is defined by multilinearity. \[Proposition\_WDVV\_analog\] Let $\lambda_1, \dots, \lambda_4$ be cohomology weighted partitions of size $d$ weighted by the fixed basis $\{ \gamma_i \}$, such that $\sum_i \deg(\lambda_i) = 2d+1$. Then $$\begin{gathered} {\big\langle}\lambda_1, \lambda_2, \lambda_3 \cup \lambda_4 {\big\rangle}_{\beta}^{S \times {\mathbb{P}}^1, \star} + {\big\langle}\lambda_1 \cup \lambda_2, \lambda_3, \lambda_4 {\big\rangle}_{\beta}^{S \times {\mathbb{P}}^1, \star} \\ = {\big\langle}\lambda_1, \lambda_4, \lambda_2 \cup \lambda_3 {\big\rangle}_{\beta}^{S \times {\mathbb{P}}^1, \star} + {\big\langle}\lambda_1 \cup \lambda_4, \lambda_2, \lambda_3 {\big\rangle}_{\beta}^{S \times {\mathbb{P}}^1, \star} \,.\end{gathered}$$ Consider Gromov-Witten invariants of $S \times {\mathbb{P}}^1$ relative to fibers over the points $0,1, \infty, t \in {\mathbb{P}}^1$, $$\label{4ptseries} (-iu)^{-2d + \sum_i l(\lambda_i)} \sum_g {\big\langle}\lambda_1, \lambda_2, \lambda_3, \lambda_4 {\big\rangle}_{g,(\beta,d)}^{(S \times {\mathbb{P}}^1) / \{ 0, 1, \infty, t\}, \bullet} u^{2g-2} \,.$$ Consider the degeneration of $S \times {\mathbb{P}}^1$ obtained by degenerating the base ${\mathbb{P}}^1$ to a union of two copies of ${\mathbb{P}}^1$, $$S \times {\mathbb{P}}^1 \leadsto (S \times {\mathbb{P}}^1) \cup (S \times {\mathbb{P}}^1).$$ We assume the fibers over $0,1$ specialize to the first component and the fibers over $t, \infty$ specialize to the second component, respectively. We will apply the degeneration formula to . Since the reduced class breaks into a product of a reduced class and an ordinary virtual class, we must have either $\beta_1 = 0$ or $\beta_2 = 0$ in the splitting $\beta = \beta_1 + \beta_2$ of the curve class.
The result of the degeneration formula is $$\begin{gathered} \label{4ptseries_broken} \sum_{g_1, g_2} \sum_{\beta=\beta_1 + \beta_2} \sum_{\eta} \langle \lambda_1, \lambda_2, \eta {\big\rangle}^{S \times {\mathbb{P}}^1, \bullet}_{g_1, \beta_1} \\ \cdot {{\mathfrak{z}}}(\eta) {\big\langle}\eta^{\vee}, \lambda_3, \lambda_4 {\big\rangle}_{g_2,\beta_2}^{S \times {\mathbb{P}}^1, \bullet} (-iu)^{-2d + \sum_i l(\lambda_i)} u^{2(g_1 + g_2 + l(\eta) - 1)-2}\end{gathered}$$ where $g_1, g_2$ run over all integers, we have $\beta_1 = 0$ or $\beta_2 = 0$, $\eta$ runs over all $\{ \gamma_i \}$-weighted partitions of size $d$, and $\eta^{\vee}$ is the dual partition[^13]. Above, we also have used the genus glueing relation $$g = g_1 + g_2 + l(\eta) - 1 \,,$$ and have followed the notation (explained in Section \[Section\_The\_bracket\_notation\]) that we use a reduced class whenever the K3 factor of the curve class is non-zero, and the usual virtual class otherwise. Consider the basis of $H^{\ast}(\operatorname{Hilb}^d(S))$ defined by the set of all $\{ \gamma_i \}$-weighted partitions $\eta$ of size $d$. The corresponding dual basis with respect to the intersection pairing on $H^{\ast}(\operatorname{Hilb}^d(S))$ is $\{ (-1)^{d + l(\eta)} {{\mathfrak{z}}}(\eta) \eta^{\vee} \}$. Hence for every $\alpha \in H^{\ast}(\operatorname{Hilb}^d(S))$, $$\alpha = \sum_{\eta} (-1)^{d + l(\eta)} {{\mathfrak{z}}}(\eta) \langle \alpha, \eta^{\vee} \rangle \eta \,.$$ We will also require the following evaluation of (non-reduced) relative invariants of $S \times {\mathbb{P}}^1$ in class $(0,d)$, $$\label{non-red-cor} (-iu)^{-d + \sum_i l(\lambda_i)} \sum_g \langle \lambda_1, \lambda_2, \lambda_3 \rangle^{S \times {\mathbb{P}}^1, \bullet}_{g, (0,d)} u^{2g-2} = (-1)^d \int_{\operatorname{Hilb}^d(S)} \lambda_1 \cup \lambda_2 \cup \lambda_3 \,.$$ for all weighted partitions $\lambda_1, \lambda_2, \lambda_3$. Equality follows directly from the corresponding local case, see [@OPHilb Section 4.3]. 
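The powers of $u$ in the degeneration formula above can be verified mechanically: the exponent $(-2d + \sum_i l(\lambda_i)) + 2g - 2$ of the four-point series splits, via the genus gluing relation $g = g_1 + g_2 + l(\eta) - 1$, into the sum of the exponents $(l(\lambda_a) + l(\lambda_b) + l(\eta) - d) + 2g_i - 2$ carried by the two $\star$-brackets (up to the prefactors of $-i$). A brute-force check over random data (Python, illustrative):

```python
import random

def lhs_exp(d, ls, g):
    # exponent of u in the four-point series: (-2d + sum_i l(lambda_i)) + 2g - 2
    return -2 * d + sum(ls) + 2 * g - 2

def star_exp(d, l1, l2, le, g1):
    # exponent contributed by one star-bracket <lam_a, lam_b, eta>^star
    return (l1 + l2 + le - d) + 2 * g1 - 2

random.seed(0)
for _ in range(1000):
    d = random.randint(1, 10)
    ls = [random.randint(1, d) for _ in range(4)]  # lengths l(lambda_i)
    le = random.randint(1, d)                      # length l(eta)
    g1, g2 = random.randint(-5, 5), random.randint(-5, 5)
    g = g1 + g2 + le - 1                           # genus gluing relation
    assert lhs_exp(d, ls, g) == (star_exp(d, ls[0], ls[1], le, g1)
                                 + star_exp(d, ls[2], ls[3], le, g2))
```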
Putting everything together, and are hence equal to the left hand side of Proposition \[Proposition\_WDVV\_analog\], namely $${\big\langle}\lambda_1, \lambda_2, \lambda_3 \cup \lambda_4 {\big\rangle}^{S \times {\mathbb{P}}^1, \star}_{\beta} + {\big\langle}\lambda_1 \cup \lambda_2, \lambda_3, \lambda_4 {\big\rangle}^{S \times {\mathbb{P}}^1, \star}_{\beta} \,.$$ By a parallel argument (with $0,t$ specializing to the first, and $1,\infty$ specializing to the second component) we find to equal the right hand side of Proposition \[Proposition\_WDVV\_analog\] as well, which completes the proof. From Proposition \[Proposition\_WDVV\_analog\] and [@HilbK3 Appendix A] we obtain the following. Under the correspondence of Conjecture \[GW/Hilb\_correspondence\], the reduced WDVV equation on $\operatorname{Hilb}^d(S)$ corresponds to the degeneration relations of Proposition \[Proposition\_WDVV\_analog\]. For weighted partitions $\mu, \nu, \rho$ of size $d$, let $$\big\langle \, \mu ,\nu , \rho \, \big\rangle^{S \times {\mathbb{P}}^1,\bullet}_{\beta} = u^{l(\mu) + l(\nu) + l(\rho) - d} \sum_{g \in {{\mathbb{Z}}}} \langle \mu , \nu , \rho {\big\rangle}^{S \times {\mathbb{P}}^1/ \{ 0,1,\infty \}, \bullet}_{g, (\beta,d)} u^{2g - 2}$$ \[Proposition\_take\_out\_div\] For all $\gamma, \gamma' \in H^2(S, {{\mathbb{Q}}})$ and all weighted partitions $\mu, \nu$ of size $d$, $$\begin{aligned} \langle \beta, \gamma' \rangle \cdot \big\langle \, \mu ,\nu , D(\gamma) \, \big\rangle_{\beta}^{S \times {\mathbb{P}}^1,\bullet} & = \langle \beta, \gamma \rangle \cdot \big\langle \, \mu ,\nu , D(\gamma') \, \big\rangle_{\beta}^{S \times {\mathbb{P}}^1,\bullet} \\ \langle \beta, \gamma \rangle \cdot \big\langle \, \mu ,\nu , (2, {\mathbf{1}})(1, {\mathbf{1}})^{d-2} \, \big\rangle_{\beta}^{S \times {\mathbb{P}}^1,\bullet} & = \frac{d}{du} \, \big\langle \, \mu ,\nu , D(\gamma) \, \big\rangle_{\beta}^{S \times {\mathbb{P}}^1,\bullet}\end{aligned}$$ By a rubber calculus argument, see for example [@M Prop. 4.3] or [@MP].
Elliptic K3 surfaces -------------------- In the remainder of Section \[Section:Relative\_Invariants\_of\_P1K3\] let $S$ be an elliptically fibered K3 surface with section, let $B$ and $F$ be the section and fiber class respectively, and let $\beta_h = B +hF$ for all $h \geq 0$. For $H^{\ast}(S)$-weighted partitions $\mu, \nu, \rho$ of size $d$, we set $$\begin{gathered} \label{rel_gen_ser} \big\langle \, \mu ,\nu , \rho \, \big\rangle^{S \times {\mathbb{P}}^1,\bullet} = \ u^{l(\mu) + l(\nu) + l(\rho) - d} \sum_{g \in {{\mathbb{Z}}}} \sum_{h \geq 0} \langle \mu , \nu , \rho {\big\rangle}^{S \times {\mathbb{P}}^1/ \{ 0,1,\infty \}, \bullet}_{g, (\beta_h,d)} u^{2g - 2} q^{h-1}\end{gathered}$$ for the generating series of disconnected Gromov-Witten invariants of $S \times {\mathbb{P}}^1 / \{ 0,1, \infty\}$, and the same except without $\bullet$ for connected invariants. Proof of Theorem \[mainthm\_1b\] {#Section_Proof_Of_MainThm_1b} -------------------------------- Let $\mu_{m,n}, \nu_{m,n}$ be the weighted partitions defined in . For the proof we will drop the subscript $n$ and simply write $\mu_m = \mu_{m,n}$, etc. Let also $\rho_m = \{ (1,F) (1, {\mathbf{1}})^{m+n-1} \}$. Let $n>0$ first. 
By a degeneration argument the *connected* invariants satisfy $$\big\langle \mu_m , \nu_m, \rho_m \big\rangle^{S \times {\mathbb{P}}^1/ \{ 0,1,\infty \}}_{g,(m+n,\beta_h)} = \frac{1}{n!} \big\langle \mu_m \big| \tau_{0}(F \boxtimes \omega)^{n+1} \big\rangle^{S \times {\mathbb{P}}^1/ \{ 0 \} }_{g,(m+n,\beta_h)}.$$ Applying the localization formula and Theorem \[mainthm\_1\] yields $$\sum_{g, h} \big\langle \mu_m , \nu_m, \rho_m \big\rangle^{S \times {\mathbb{P}}^1/ \{ 0,1,\infty \}}_{g,(m+n,\beta_h)} q^{h-1} u^{2(m+n) + 2g-2} =\frac{1}{n!^2 m!} \frac{(\mathbf{G}-1)^{m} \Theta^{2n}}{\Theta^2 \Delta}\,.$$ To obtain the disconnected invariants, let $$f : C \to S \times {\mathbb{P}}^1$$ be a possibly disconnected relative stable map incident to (cycles representing) $\mu_m, \nu_m, \rho_m$ over $0,1, \infty$ respectively. There is a single connected component $C_0$ of $C$ such that the restriction $f|C_0$ maps in class $(n+k,\beta_h)$ for some $k \geq 0$ and is incident to $\mu_k, \nu_k, \rho_k$. By the incidence conditions, the restriction of $f$ to every other component is an isomorphism onto a rational line ${\mathbb{P}}^1 \times P$ where $P$ is one of the remaining incidence points. In total, with careful consideration of the orderings, we therefore find $$\begin{gathered} \big\langle \mu_m , \nu_m, \rho_m \big\rangle^{S \times {\mathbb{P}}^1/\{ 0,1,\infty\}, \bullet}_{g,(m+n,\beta_h)}\\ = \frac{1}{(n! m!)^2} \sum_{k=0}^{m} \binom{m}{k} \binom{m}{k} (m-k)! \Big( (n! k!)^2 \big\langle \mu_k, \nu_k, \rho_k \big\rangle^{S \times {\mathbb{P}}^1/ \{ 0,1,\infty \}}_{g+(m-k),(k+n,\beta_h)} \Big) \,.\end{gathered}$$ The first part of Theorem \[mainthm\_1b\] follows now by summing up. In case $n=0$ we will use the relative condition $\mu_m = \{ (1, x_1) , \ldots , (1,x_m) \}$ for some generic points $x_1, \ldots, x_m \in S$. 
Consider an irreducible curve $\Sigma \subset S \times {\mathbb{P}}^1$ of degree $k$ over ${\mathbb{P}}^1$ which is incident to $k$ of the points $\{ x_i \}$ over $0$. Since $\operatorname{Hilb}^k(S)$ is not uniruled for every $k>0$, the map ${\mathbb{P}}^1 \to \operatorname{Hilb}^k(S)$ corresponding to $\Sigma$ must be constant, and hence $k=1$ and $\Sigma = x_{i} \times {\mathbb{P}}^1$. Let $f : C \to S \times {\mathbb{P}}^1$ be a possibly disconnected relative stable map incident to $\mu_m, \nu_m, \rho_m$ over $0, 1, \infty$ respectively. By the previous discussion, the image $f(C)$ must contain the curves $x_i \times {\mathbb{P}}^1$ for all $i$ and hence meets the divisor $S_{\infty}$ in the points $x_1, \ldots, x_m$. But if $\rho_m$ is represented by the cycle $\{ (1,F_{0}), (1,S)^{m-1} \}$ for some fiber $F_0$ disjoint from $\{ x_i \}$, this implies that $f$ is not incident to $\rho_m$, in contradiction to the assumption. Hence the moduli space is empty and the invariant vanishes. Special cases in degree $2$ {#Section_Special_cases_in_degree_2} --------------------------- We will require a total of five special cases of relative invariants of $S \times {\mathbb{P}}^1$ in degree $2$ over ${\mathbb{P}}^1$. The first two cases are provided by Theorem \[mainthm\_1b\] with $(m,n) = (1,1)$ and $(0,2)$. \[L0\] $\displaystyle \big\langle (1,F)^2, D(F) , (1,{{\mathsf{p}}})(1, {\mathbf{1}}) \big\rangle^{S \times {\mathbb{P}}^1, \bullet} = \frac{1}{2} \frac{\Theta(u,q) \cdot D \Theta(u,q)}{\Delta(q)} $ Only maps from connected curves contribute to the invariants here, hence it is enough to consider connected invariants.
By a degeneration argument we have $$\big\langle (1,F)^2, D(F), (1,{{\mathsf{p}}})(1, {\mathbf{1}}) \big\rangle^{S \times {\mathbb{P}}^1} = \big\langle (1,F)^2 \big| \tau_0({{\mathsf{p}}}\boxtimes \omega) \tau_0(F \boxtimes \omega) \big\rangle^{S \times {\mathbb{P}}^1},$$ which by the localization formula and the divisor axiom is $$\frac{u^4}{2} \Big\langle \, {{\mathbb{E}}}^{\vee}(1) \, \tau_0({{\mathsf{p}}}) \, \frac{F}{1-\psi_2} \, \frac{F}{1-\psi_3} \, \Big\rangle^S$$ Degeneration of $S$ to the normal cone of an elliptic fiber $E$ yields $$\frac{u^4}{2} \big\langle {{\mathbb{E}}}^{\vee}(1) \big| 1 \big\rangle^S \Big( {\big\langle}\, \omega \, \big| \frac{F}{1-\psi} \, \big| \, 1 \, {\big\rangle}^{{\mathbb{P}}^1 \times E} \Big)^2 {\big\langle}\omega \big| \tau_0({{\mathsf{p}}}) {{\mathbb{E}}}^{\vee}(1) \big| 1 {\big\rangle}^{{\mathbb{P}}^1 \times E}$$ Using the Katz-Klemm-Vafa formula , and Lemma \[P1xE\_Lemma\] for the first, second and third term respectively, the claim follows. \[L1\] $\displaystyle \big\langle (2,{{\mathsf{p}}}), D(F), D(F) \, \big\rangle^{S \times {\mathbb{P}}^1, \bullet} = \frac{1}{4} \frac{ \frac{\partial}{\partial u} \mathbf{G}(u,q) }{\Delta(q)} $ Let $\alpha \in H^2(S, {{\mathbb{Q}}})$ be a class satisfying $$\langle \alpha, \alpha \rangle = 1 \quad \text{and} \quad \langle \alpha, F \rangle = \langle \alpha, W \rangle = 0 \,.$$ Then apply Proposition \[Proposition\_take\_out\_div\] twice with $(\lambda_1, \ldots , \lambda_4)$ equal to $$\big( (2, \alpha), D(F), D(F), D(\alpha) \big) \ \text{and} \ \big( (1,F)(1, \alpha), D(F), (2,1), D(\alpha) \big)$$ respectively, and use Proposition \[Proposition\_take\_out\_div\] and Theorem \[mainthm\_1b\]. For the last case we will require the following Hodge integrals. 
\[Hodge\_Evaluations\] $$\begin{aligned} \Big\langle {{{{\mathbb{E}}}^{\vee}(1)}}\frac{{\mathbf{1}}}{1-\psi_1} \tau_0({{\mathsf{p}}}) \Big\rangle^S & = \frac{1}{u^2} \frac{ \mathbf{G}(u,q) - 1}{\Theta^2 \Delta} \tag{i} \\ \Big\langle {{{{\mathbb{E}}}^{\vee}(1)}}\frac{{\mathbf{1}}}{1-\psi_1} \Big\rangle^S & = \frac{-2}{u^2 \Delta(q)} \tag{ii}\end{aligned}$$ $$\begin{gathered} \tag{iii} \ \ \ \ {\Big\langle}{{\mathbb{E}}}^{\vee}(1) \frac{{{\mathsf{p}}}}{1-\psi_1} \frac{{\mathbf{1}}}{1-\psi_2} \tau_0({{\mathsf{p}}}) {\Big\rangle}^{S} \\ = \Big\langle {{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1 - \psi_1} \cdot \frac{{\mathbf{1}}}{1-\psi_2} \Big\rangle^{S} \cdot \frac{D \Theta}{\Theta} + \frac{( \mathbf{G}-1 )^2}{u^4 \Theta^2 \Delta} + 2 \frac{(\mathbf{G}-1)}{u^4 \Delta} \cdot \frac{D \Theta}{\Theta} .\end{gathered}$$ [**(i)**]{} Consider the connected invariant $${\big\langle}\tau_0(F \boxtimes \omega) \tau_0({{\mathsf{p}}}\boxtimes \omega) {\big\rangle}^{S \times {\mathbb{P}}^1}_{g, (\beta_h, 1)} \,. \label{402b}$$ Applying the localization formula to with $\tau_0(F \boxtimes \omega)$ specializing to the fiber over $0$, and $\tau_0({{\mathsf{p}}}\boxtimes \omega)$ specializing to the fiber over $\infty$, yields ${\big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1-\psi_1} {\big\rangle}^S_{g, \beta_h} + {\big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{F}{1-\psi_1} \tau_0({{\mathsf{p}}}) {\big\rangle}^{S}_{g, \beta_h}$. Specializing both insertions to the fiber over $\infty \in {\mathbb{P}}^1$ yields ${\big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{\mathbf{1}}}{1-\psi_1} \tau_0(F) \tau_0({{\mathsf{p}}}) {\big\rangle}^{S}_{g, \beta_h}$. Since the result in both computations is the same, the claim now follows by the divisor equation and Theorem \[mainthm\_1\]. 
[**(ii)**]{} Applying the localization formula to ${\big\langle}\tau_0(F \boxtimes \omega)^3 {\big\rangle}^{S \times {\mathbb{P}}^1}_{g, (\beta_h,1)}$ where all insertions specialize to the fiber over $0$ yields ${\big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{\mathbf{1}}}{1 - \psi_1} \tau_0(F)^3 {\big\rangle}^S_{g, \beta_h}$. The claim now follows from Proposition \[vanishing1\], Theorem \[mainthm\_1\] and the divisor axiom. [**(iii)**]{} Consider the degeneration $$\label{degeneration_22} S \leadsto S \cup ({\mathbb{P}}^1 \times E) \cup ({\mathbb{P}}^1 \times E) \,.$$ We apply the degeneration formula to the invariants ${\big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{\mathbf{1}}}{1-\psi_1} \tau_0({{\mathsf{p}}}) {\big\rangle}^S$ where we specialize $\tau_0({{\mathsf{p}}})$ to the first copy of ${\mathbb{P}}^1 \times E$. Using Lemma \[P1xE\_Lemma\] the result is $$\begin{aligned} \label{rtergerg} & \Big\langle {{{{\mathbb{E}}}^{\vee}(1)}}\frac{{\mathbf{1}}}{1-\psi_1} \tau_0({{\mathsf{p}}}) \Big\rangle^S \\ = \ \ & {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{\mathbf{1}}}{1-\psi_1} \Big| 1 {\Big\rangle}^S {\Big\langle}\omega \Big| {{{{\mathbb{E}}}^{\vee}(1)}}\tau_0({{\mathsf{p}}}) \Big| 1 {\Big\rangle}^{{\mathbb{P}}^1 \times E} \\ + \ & {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\Big| 1 {\Big\rangle}^S {\Big\langle}\omega \Big| {{{{\mathbb{E}}}^{\vee}(1)}}\frac{{\mathbf{1}}}{1 - \psi_1} \tau_0({{\mathsf{p}}}) \Big| 1 {\Big\rangle}^{{\mathbb{P}}^1 \times E} \\ + \ & {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\Big| 1 {\Big\rangle}^S {\Big\langle}\omega \Big| {{{{\mathbb{E}}}^{\vee}(1)}}\tau_0({{\mathsf{p}}}) \Big| 1 {\Big\rangle}^{{\mathbb{P}}^1 \times E} {\Big\langle}\omega \Big| {{{{\mathbb{E}}}^{\vee}(1)}}\frac{{\mathbf{1}}}{1-\psi_1} {\Big\rangle}^{{\mathbb{P}}^1 \times E} \,. 
\end{aligned}$$ By (ii) and using the degeneration $S \leadsto S \cup ({\mathbb{P}}^1 \times E)$ we have $$\begin{gathered} \frac{-2}{u^2 \Delta(q)} = \Big\langle {{{{\mathbb{E}}}^{\vee}(1)}}\frac{{\mathbf{1}}}{1-\psi_1} \Big\rangle^S \\ = {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{\mathbf{1}}}{1-\psi_1} \Big| 1 {\Big\rangle}^S + {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\Big| 1 {\Big\rangle}^S {\Big\langle}\omega \Big| {{{{\mathbb{E}}}^{\vee}(1)}}\frac{{\mathbf{1}}}{1-\psi_1} {\Big\rangle}^{{\mathbb{P}}^1 \times E}\end{gathered}$$ Inserting this into , using (i), the Katz-Klemm-Vafa formula , and Lemma \[P1xE\_Lemma\], we obtain $$\label{400} {\Big\langle}\omega \Big| {{{{\mathbb{E}}}^{\vee}(1)}}\frac{{\mathbf{1}}}{1 - \psi_1} \tau_0({{\mathsf{p}}}) \Big| 1 {\Big\rangle}^{{\mathbb{P}}^1 \times E} = \frac{1}{u^2} \big( \mathbf{G} - 1 + 2 \Theta \cdot D \Theta \big).$$ We apply the degeneration formula for to ${\big\langle}{{\mathbb{E}}}^{\vee}(1) \frac{{{\mathsf{p}}}}{1-\psi_1} \frac{{\mathbf{1}}}{1-\psi_2} \tau_0({{\mathsf{p}}}) {\big\rangle}^S$. We specialize the marked point carrying the $\tau_0({{\mathsf{p}}})$ insertion to the first copy of ${\mathbb{P}}^1 \times E$, and the marked point with insertion ${{\mathsf{p}}}/ (1-\psi_1)$ to $S$. 
The result is $$\begin{aligned} & {\Big\langle}{{\mathbb{E}}}^{\vee}(1) \frac{{{\mathsf{p}}}}{1-\psi_1} \frac{{\mathbf{1}}}{1-\psi_2} \tau_0({{\mathsf{p}}}) {\Big\rangle}^{S} \\ =\ \ & {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1-\psi_1} \frac{{\mathbf{1}}}{1-\psi_2} \Big| 1 {\Big\rangle}^S {\Big\langle}\omega \Big| {{{{\mathbb{E}}}^{\vee}(1)}}\tau_0({{\mathsf{p}}}) \Big| 1 {\Big\rangle}^{{\mathbb{P}}^1 \times E} \\ + \ & {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1-\psi_1} \Big| 1 {\Big\rangle}^S {\Big\langle}\omega \Big| {{{{\mathbb{E}}}^{\vee}(1)}}\frac{{\mathbf{1}}}{1 - \psi_1} \tau_0({{\mathsf{p}}}) \Big| 1 {\Big\rangle}^{{\mathbb{P}}^1 \times E} \\ + \ & {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1-\psi_1} \Big| 1 {\Big\rangle}^S {\Big\langle}\omega \Big| {{{{\mathbb{E}}}^{\vee}(1)}}\tau_0({{\mathsf{p}}}) \Big| 1 {\Big\rangle}^{{\mathbb{P}}^1 \times E} {\Big\langle}\omega \Big| {{{{\mathbb{E}}}^{\vee}(1)}}\frac{{\mathbf{1}}}{1-\psi_1} {\Big\rangle}^{{\mathbb{P}}^1 \times E} .\end{aligned}$$ which by a similar argument as before, and with and Lemma \[P1xE\_Lemma\] is $$\Big\langle {{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1 - \psi_1} \cdot \frac{{\mathbf{1}}}{1-\psi_2} \Big\rangle^{S} \cdot \frac{D \Theta}{\Theta} + \frac{( \mathbf{G}-1 )^2}{u^4 \Theta^2 \Delta} + 2 \frac{(\mathbf{G}-1)}{u^4 \Delta} \cdot \frac{D \Theta}{\Theta} . 
\qedhere$$ \[GDFDG\] The series $$\begin{gathered} -4 u^4 {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1-2 \psi_1} {\Big\rangle}^S + \frac{1}{2} u^6 {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1-\psi_1} \frac{{{\mathsf{p}}}}{1 - \psi_2} {\Big\rangle}^S \\ + u^4 {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1 - \psi_1} \frac{{\mathbf{1}}}{1 - \psi_2} {\Big\rangle}^S + u^4 {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1 - \psi_1} {\Big\rangle}^S .\end{gathered}$$ is equal to $\big(-2 (\mathbf{G}-1) + \Theta \cdot D \Theta \big)\frac{1}{\Delta}$. Consider the connected invariant $$\label{401} {\Big\langle}\tau_0({{\mathsf{p}}}\boxtimes \omega) \tau_0(F \boxtimes \omega)^3 {\Big\rangle}^{S \times {\mathbb{P}}^1}.$$ We apply the localization formula to , with exactly two of the four insertions specializing to the fiber over $0 \in {\mathbb{P}}^1$. The result is $$u^4 {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1-\psi_1} \frac{F}{1-\psi_2} {\Big\rangle}^S + u^4 {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{F}{1-\psi_1} \frac{F}{1-\psi_2} \tau_0({{\mathsf{p}}}) {\Big\rangle}^S \,,$$ which, by a degeneration argument, Theorem \[mainthm\_1\] and Lemma \[P1xE\_Lemma\], is equal to $\big( (\mathbf{G}-1) + \Theta \cdot D \Theta \big)/\Delta$. We apply the localization formula a second time to , this time specializing the insertion $\tau_0({{\mathsf{p}}}\boxtimes \omega)$ to the fiber over $\infty$, and all insertions $\tau_0(F \boxtimes \omega)$ to the fiber over $0$. 
The result is $$\begin{gathered} -4 u^4 {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1-2 \psi_1} {\Big\rangle}^S + u^4 {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1 - \psi_1} \frac{{\mathbf{1}}}{1 - \psi_2} \tau_0(F)^3 {\Big\rangle}^S \\ + \frac{1}{2} u^6 {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1-\psi_1} \frac{{{\mathsf{p}}}}{1 - \psi_2} {\Big\rangle}^S + u^4 {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1 - \psi_1} {\Big\rangle}^S .\end{gathered}$$ The claim now follows by applying the divisor axiom to the second term and using Theorem \[mainthm\_1\]. We determine the fifth special case. \[L2\] $\displaystyle \big\langle D({{\mathsf{p}}}), D(F), D({{\mathsf{p}}}) \big\rangle^{S \times {\mathbb{P}}^1, \bullet} = \frac{ \left( D \Theta(u,q) \right)^2 }{\Delta(q)} $ Only connected curves contribute to the integral. The degeneration formula yields $$\begin{gathered} {\big\langle}\tau_0({{\mathsf{p}}}\boxtimes \omega)^2 \tau_0(F \boxtimes \omega) {\big\rangle}^{S \times {\mathbb{P}}^1} = \big\langle\, D({{\mathsf{p}}}), \, D(F), \, D({{\mathsf{p}}}) \, \big\rangle^{S \times {\mathbb{P}}^1} \\ + 2 {\big\langle}(1, {{\mathsf{p}}})^2 \big| \tau_0(F \boxtimes \omega) {\big\rangle}^{S \times {\mathbb{P}}^1} + 2 {\big\langle}(1, {{\mathsf{p}}})(1, F) \big| \tau_0(F \boxtimes \omega) {\big\rangle}^{S \times {\mathbb{P}}^1} \,.\end{gathered}$$ The last two terms of the right hand side are computed directly using the localization formula and Theorem \[mainthm\_1\]: $$\begin{aligned} {\big\langle}(1, {{\mathsf{p}}})^2 \big| \tau_0(F \boxtimes \omega) {\big\rangle}^{S \times {\mathbb{P}}^1} & = \frac{1}{2} \frac{(\mathbf{G}-1)^2}{\Theta^2 \Delta} \\ {\big\langle}(1, {{\mathsf{p}}})(1, F) \big| \tau_0({{\mathsf{p}}}\boxtimes \omega) {\big\rangle}^{S \times {\mathbb{P}}^1} & = \frac{(\mathbf{G}-1)}{\Delta} \frac{ D \Theta}{\Theta}\end{aligned}$$ Hence it remains to prove
$${\big\langle}\tau_0({{\mathsf{p}}}\boxtimes \omega)^2 \tau_0(F \boxtimes \omega) {\big\rangle}^{S \times {\mathbb{P}}^1} = \frac{ ( D \Theta )^2 }{\Delta} + 2 \frac{(\mathbf{G}-1)}{\Delta} \frac{ D \Theta}{\Theta} + \frac{(\mathbf{G}-1)^2}{\Theta^2 \Delta} \,.$$ We apply the localization formula to the left hand side, specializing exactly one of the $\tau_0({{\mathsf{p}}}\boxtimes \omega)$ insertions to the fiber over $0$, and the other insertions to the fiber over $\infty$. Five fixed loci contribute. The result is $$\label{GERGE} \begin{aligned} - \ & 4 u^4 {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1-2 \psi_1} \tau_0({{\mathsf{p}}}) {\Big\rangle}^{S} \\ + \ & u^4 {\Big\langle}{{\mathbb{E}}}^{\vee}(1) \frac{{{\mathsf{p}}}}{1-\psi_1} \frac{{\mathbf{1}}}{1-\psi_2} \tau_0({{\mathsf{p}}}) \tau_0(F) {\Big\rangle}^{S} \\ + \ & \frac{1}{2} u^6 {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1-\psi_1} \frac{{{\mathsf{p}}}}{1-\psi_2} \tau_0({{\mathsf{p}}}) {\Big\rangle}^{S} \\ + \ & u^4 {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1-\psi_1} \tau_0({{\mathsf{p}}}) {\Big\rangle}^S \\ + \ & u^4 {\Big\langle}{{{{\mathbb{E}}}^{\vee}(1)}}\frac{{{\mathsf{p}}}}{1-\psi_1} \frac{F}{1-\psi_2} \tau_0({{\mathsf{p}}}) {\Big\rangle}^{S} \,. \end{aligned}$$ By the divisor equation and Lemma \[Hodge\_Evaluations\](iii), we may remove the $\tau_0(F)$ and $\tau_0({{\mathsf{p}}})$ insertion from the second term. Since only fiber and point classes appear in the other terms of , the $\tau_0({{\mathsf{p}}})$ insertion can be degenerated off to a copy of ${\mathbb{P}}^1 \times E$, where it is evaluated by Lemma \[P1xE\_Lemma\]. The remaining first four terms then exactly yield the evaluation of Lemma \[GDFDG\]. Applying Theorem \[mainthm\_1\] for the last term, a direct calculation shows the claim. Proof of Theorem \[thm\_GWHilb\_correspondence\] ------------------------------------------------ We consider the case $d=2$. 
The invariants $$\begin{gathered} \label{special_case_invs} \big\langle\, (1,F)^2, \, D(F) , \, (1,F)^2 \, \big\rangle^{S \times {\mathbb{P}}^1, \bullet} \\ \big\langle\, (1,{{\mathsf{p}}})(1,F), \, D(F) , \, D(F) \, \big\rangle^{S \times {\mathbb{P}}^1, \bullet} \\ \big\langle\, (1,F)^2, \, D(F) , \, D({{\mathsf{p}}}) \, \big\rangle^{S \times {\mathbb{P}}^1, \bullet} \\ \big\langle\, (2,{{\mathsf{p}}}), \, D(F), \, D(F) \, \big\rangle^{S \times {\mathbb{P}}^1, \bullet} \\ \big\langle\, D({{\mathsf{p}}}), \, D(F), \, D({{\mathsf{p}}}) \, \big\rangle^{S \times {\mathbb{P}}^1, \bullet} \end{gathered}$$ were computed in Theorem \[mainthm\_1b\] and Lemmas \[L0\], \[L1\] and \[L2\]. By comparison with the results of [@HilbK3], the GW/Hilb correspondence (Conjecture \[GW/Hilb\_correspondence\]) holds in the case of the invariants . Under the GW/Hilb correspondence the WDVV equations on the Hilbert scheme side correspond to the relations of Proposition \[Proposition\_WDVV\_analog\]. Similarly, the divisor axiom on the Hilbert scheme side corresponds to Proposition \[Proposition\_take\_out\_div\] above. A direct check shows that all degree $2$ relative invariants $$\big\langle \lambda_1, \lambda_2, \lambda_3 {\big\rangle}^{S \times {\mathbb{P}}^1, \bullet}_{g, (\beta_h,2)}$$ can be reduced to the invariants using the relations of Propositions \[Proposition\_WDVV\_analog\] and \[Proposition\_take\_out\_div\]. Since, under the correspondence , both the genus $0$ invariants of $\operatorname{Hilb}^2(S)$ and the relative invariants of $S \times {\mathbb{P}}^1$ in degree $2$ are governed by the same set of non-degenerate equations and initial values, they are equal. We consider $d=1$. The invariants of $S \times {\mathbb{P}}^1$ in class $(\beta_h,1)$ with relative insertions $(1,F)$, $(1,F)$, $(1,F)$ are determined by Proposition \[vanishing1\] via a degeneration argument. The result matches the corresponding series on the Hilbert scheme $\operatorname{Hilb}^1(S) = S$.
The remaining invariants in degree $1$ are determined by Proposition \[Proposition\_take\_out\_div\]. Hence the result follows by the same argument as above. The product $S \times E$ {#Section_Proof_of_SxE_Theorem} ------------------------ Let $d \geq 0$ be an integer, and let $$\mathsf{N}_{g,h,d}^{S \times E} = {\big\langle}\tau_0(F \boxtimes \omega) {\big\rangle}^{S \times E}_{g, (\beta_h,d)}$$ be the absolute reduced Gromov-Witten invariants of the product $S \times E$, where, as usual, we work with the elliptically fibered K3 surface $S$ with section class $B$, fiber class $F$ and curve class $\beta_h = B + hF$. Degenerating the elliptic curve $E$ to a nodal curve and resolving, and degenerating off the $\tau_0(F \boxtimes \omega)$ insertion, we obtain $$\begin{gathered} \label{500} \sum_{g,h} \mathsf{N}_{g,h,d}^{S \times E} u^{2g-2} q^{h-1} = \sum_{\eta} {{\mathfrak{z}}}(\eta) {\big\langle}\eta, \eta^{\vee}, D(F) {\big\rangle}^{S \times {\mathbb{P}}^1, \bullet} \\ + \chi(\operatorname{Hilb}^d(S)) \sum_{g,h} d! {\big\langle}(1,{{\mathsf{p}}})^d \big| \tau_0(F \boxtimes \omega) {\big\rangle}^{S \times {\mathbb{P}}^1} u^{2g-2+2d} q^{h-1}\end{gathered}$$ where $\eta$ runs over the set ${{\mathcal P}}(d)$ of cohomology weighted partitions of size $d$ weighted by a fixed basis $\{ \gamma_i \}$, $\eta^{\vee}$ is the dual partition of $\eta$, and $\chi(\operatorname{Hilb}^d(S))$ is the topological Euler characteristic of $\operatorname{Hilb}^d(S)$. The second term on the right hand side of can be computed by localization and Theorem \[mainthm\_1\].
We obtain $$\begin{gathered} \label{501} \sum_{g,h} \mathsf{N}_{g,h,d}^{S \times E} u^{2g-2} q^{h-1} = \sum_{\eta} {{\mathfrak{z}}}(\eta) {\big\langle}\eta, \eta^{\vee}, D(F) {\big\rangle}^{S \times {\mathbb{P}}^1, \bullet} + \frac{\chi(\operatorname{Hilb}^d(S)) \mathbf{G}(u,q)^d}{\Theta(u,q)^2 \Delta(q)} .\end{gathered}$$ Under the GW/Hilb correspondence (Conjecture \[GW/Hilb\_correspondence\]) and by a degeneration argument, $\sum_{\eta} {{\mathfrak{z}}}(\eta) {\big\langle}\eta, \eta^{\vee}, D(F) {\big\rangle}^{S \times {\mathbb{P}}^1, \bullet}$ equals $$\label{123999} \mathcal{H}_d(y,q) = \sum_{h \geq 0} \sum_{k \in {{\mathbb{Z}}}} q^{h-1} y^k \int_{[ {{\overline M}}_{(E,0)}(\operatorname{Hilb}^d(S), \beta_h + kA) ]^{\text{red}}} {\mathop{\rm ev}\nolimits}_0^{\ast}(F)$$ under the variable change $y = -e^{iu}$, where we follow the notation of [@K3xE]. Since the GW/Hilb correspondence has been proven for $d=1$ and $d=2$ above, the claim now follows from the Katz-Klemm-Vafa formula [@MPT] for $d=1$, and Proposition $2$ of [@HilbK3] for $d=2$. Alternatively, in case $d=1$ and $d=2$ the right hand side of can be directly evaluated on $S \times {\mathbb{P}}^1$ by reduction to the invariants . We analyze further. By [@ReducedSP Theorem 2] we have the expansion $$\sum_{g} \mathsf{N}_{g,h,d}^{S \times E} u^{2g-2} q^{h-1} = \sum_{g=0}^{N} \mathsf{n}_{g, h, d} (y^{1/2} + y^{-1/2})^{2g-2}$$ where $y = -e^{iu}$ and $\mathsf{n}_{g, h, d} \in {{\mathbb{Z}}}$. 
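The Euler characteristics $\chi(\operatorname{Hilb}^d(S))$ entering these formulas are, by Göttsche's formula, the coefficients of $q^{d-1}$ in $1/\Delta(q)$, i.e. the numbers of $24$-colored partitions of $d$. They can be computed directly from the product formula $\Delta(q) = q \prod_{m \geq 1} (1-q^m)^{24}$; a short computational sketch (illustrative only, not part of the argument):

```python
# Compute p_24(n) = [1/Delta(q)]_{q^{n-1}}, the coefficient of q^n in
# prod_{m>=1} (1-q^m)^{-24}: the number of partitions of n into parts
# of 24 possible colors. By Goettsche's formula this equals
# chi(Hilb^n(S)) for a K3 surface S.

def p24_coefficients(N):
    """Return [p_24(0), ..., p_24(N)] by multiplying in one geometric
    factor (1 - q^m)^{-1} at a time, 24 times for each part size m."""
    coeffs = [0] * (N + 1)
    coeffs[0] = 1
    for m in range(1, N + 1):
        for _ in range(24):
            # multiplying by (1 - q^m)^{-1} is a strided cumulative sum
            for n in range(m, N + 1):
                coeffs[n] += coeffs[n - m]
    return coeffs

print(p24_coefficients(3))  # -> [1, 24, 324, 3200]
```

The first values $1, 24, 324, 3200$ recover $\chi(S) = 24$ and $\chi(\operatorname{Hilb}^2(S)) = 324$.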
A calculation of the (disconnected) genus $0$ Gromov-Witten invariants of $S \times E$ using the product formula yields $$\mathsf{n}_{0, h,d} = p_{24}(h) p_{24}(d),$$ where we let $$p_{24}(n) = \Big[ \frac{1}{\Delta(q)} \Big]_{q^{n-1}} = \chi(\operatorname{Hilb}^n(S)).$$ On the other hand, the coefficient of $u^{-2} q^{h-1}$ in the second term on the right hand side of is $$\chi(\operatorname{Hilb}^d(S)) \Big[ \frac{\mathbf{G}(u,q)^d}{\Theta(u,q)^2 \Delta(q)} \Big]_{u^{-2} q^{h-1}} = p_{24}(d) \cdot p_{24}(h).$$ This shows the following. For every $d \geq 0$ we have $$\sum_{g,h} \mathsf{N}_{g,h,d}^{S \times E} u^{2g-2} q^{h-1} = \mathcal{F}_d(u,q) + \chi(\operatorname{Hilb}^d(S)) \frac{\mathbf{G}(u,q)^d}{\Theta(u,q)^2 \Delta(q)} \,,$$ where, under the variable change $y = -e^{iu}$, $$\mathcal{F}_d(u,q) = \sum_{g=1}^{m} \mathsf{n}'_{g, h, d} (y^{1/2} + y^{-1/2})^{2g-2},$$ with $\mathsf{n}'_{g, h, d} \in {{\mathbb{Z}}}$. In particular $\mathcal{F}_d(u,q)$ is an entire holomorphic function of $u \in {{\mathbb{C}}}$. Hence we have proven the natural splitting of the invariants of $S \times E$ into a finite holomorphic part $\mathcal{F}_d$ (conjecturally equal to the Hilbert scheme invariants $\mathcal{H}_d$) and the polar part (a correction term), see the discussion of Conjecture A in [@K3xE]. Gromov-Witten invariants of K3 surfaces {#Appendix_K3} ======================================= Overview -------- Let $S$ be an elliptic K3 surface with section, let $B$ and $F$ be the section and fiber class respectively, set $\beta_h = B + h F$ where $h \geq 0$, and let ${{\mathsf{p}}}\in H^4(S,{{\mathbb{Z}}})$ be the class of a point. Recall the generating series notation for the surface $S$. In this section we will explain how the invariants $${\Big\langle}{{\mathbb{E}}}^{\vee}(1) \prod_{i} \tau_{k_i}({{\mathsf{p}}}) \prod_j \tau_{\ell_j}(F) {\Big\rangle}_g^S \label{51451451}$$ can be obtained from the Gromov-Witten theory of elliptic curves.
While the method we present yields an effective algorithm for the computation of for every genus $g$, it seems difficult to obtain closed formulas in this way. Computation ----------- By degenerating $S$ to a union of $S$ with $m+n+1$-copies of ${\mathbb{P}}^1 \times E$ with each of the first $m+n$ copies receiving a marked point, and using for the first and for the last term, we have $${\Big\langle}{{\mathbb{E}}}^{\vee}(1) \prod_{i} \tau_{k_i}({{\mathsf{p}}}) \prod_j \tau_{\ell_j}(F) {\Big\rangle}^S = \frac{1}{\Theta(u,q)^2 \Delta(q)} \prod_i A_{k_i}(u,q) \prod_j B_{\ell_j}(u,q)$$ where for all $k \geq 0$ we let $$\begin{aligned} {2} A_{k}(u,q) & = \sum_{g \geq k+1} (-1)^{g-k-1} A_{k,g}(q) u^{2g}, & \quad \quad A_{k,g}(q) & = {\big\langle}\omega \big| \lambda_{g-k-1} \tau_{k}({{\mathsf{p}}}) \big| 1 {\big\rangle}^{{\mathbb{P}}^1 \times E}_{g} \\ B_{k}(u,q) & = \sum_{g \geq k} (-1)^{g-k} B_{k,g}(q) u^{2g}, & \quad \quad B_{k,g}(q) & = {\big\langle}\omega \big| \lambda_{g-k} \tau_{k}(F) \big| 1 {\big\rangle}^{{\mathbb{P}}^1 \times E}_{g}.\end{aligned}$$ By a further degeneration and Lemma \[P1xE\_Lemma\] we have $$A_{k,g}(q) = {\big\langle}\omega \big| \lambda_{g-k-1} \tau_{k}({{\mathsf{p}}}) {\big\rangle}^{{\mathbb{P}}^1 \times E}_{g}$$ to which we apply the localization formula. 
This yields $$A_{k,g}(q) = \sum_{\substack{ i,j,\ell \geq 0 \\ 2i+j \leq g+\ell-1 \\ \ell \leq g-k-1 }} (-1)^{i+j+\ell} P(i,\ell) \cdot {\big\langle}\tau_{g-2i-j+\ell-1}(\omega) \tau_k(\omega) \lambda_j \lambda_{g-k-1-\ell} {\big\rangle}^E_{g-i}$$ where the invariants of a nonsingular elliptic curve $E$ are denoted by $${\big\langle}\alpha \, \tau_{k_1}(\gamma_1) \cdots \tau_{k_n}(\gamma_n) {\big\rangle}^E_g = \sum_{d \geq 0} {\big\langle}\alpha \, \tau_{k_1}(\gamma_1) \cdots \tau_{k_n}(\gamma_n) {\big\rangle}^E_{g, d[E]} q^d$$ and we set $P(0,0) = 1$, $P(g,\ell) = \langle \omega | \lambda_\ell \Psi_{\infty}^{g-\ell-1} | 1 \rangle^{{\mathbb{P}}^1 \times E, \sim}_{g}$ for all $g \geq \ell+1$, and $P(g,\ell) = 0$ otherwise. By the methods of [@MP] one proves $$\sum_{g,k} P(g,k) u^{2g} w^k = \exp\Big( \sum_{r \geq 1} C_{2r}(q) u^{2r} w^{r-1} \Big) \,,$$ where $C_{2r}(q)$ are the Eisenstein series . Similarly, $$B_{k,g}(q) = P(g,g-k) + \sum_{\substack{ i,j,\ell \geq 0 \\ 2i+j \leq g+\ell-1 \\ \ell \leq g-k }} (-1)^{i+j+\ell} P(i,\ell) \cdot {\big\langle}\tau_{g-2i-j+\ell-1}(\omega) \tau_k(1) \lambda_j \lambda_{g-k-\ell} {\big\rangle}^E_{g-i} \,.$$ This reduces the computation of to the evaluation of Gromov-Witten invariants of an elliptic curve, which were completely determined in [@OP3] and can be computed conveniently in the program [@GWall]. We list the examples which are used in Section \[Subsection\_Proof\_Thm\_middle\]. $$\begin{aligned} \langle \tau_0({{\mathsf{p}}}) \rangle^{S}_{g=1} & = \frac{1}{\Delta} \big( -2 C_{2}^{2} + 10 C_{4} \big) \\ \langle \tau_1({{\mathsf{p}}}) \rangle^{S}_{g=2} & = \frac{1}{\Delta} \big( -\frac{8}{3} C_{2}^{3} + 16 C_{2} C_{4} - 7 C_{6} \big) \\ \langle \tau_0({{\mathsf{p}}}) \lambda_1 \rangle^{S}_{g=2} & = \frac{1}{\Delta} \big( -4 C_{2}^{3} + 12 C_{2} C_{4} + 21 C_{6} \big) \\ \langle \tau_0({{\mathsf{p}}}) \tau_1(F) \rangle^{S}_{g=2} & = \frac{1}{\Delta} \cdot 2 C_2 \cdot \big( -2 C_{2}^{2} + 10 C_{4} \big). \end{aligned}$$ [99]{} A.
Beauville, [*Counting rational curves on $K3$ surfaces*]{}, Duke Math. J. [**97**]{} (1999), no. 1, 99–108. V. Bosser and F. Pellarin, [*On certain families of Drinfeld quasi-modular forms*]{}, J. Number Theory [**129**]{} (2009), no. 12, 2952–2990. J. Bryan, [*GWall: A Maple program for the Gromov-Witten theory of curves*]{}, [www.math.ubc.ca/\~jbryan/gwall.html](www.math.ubc.ca/~jbryan/gwall.html) . J. Bryan, [*The Donaldson-Thomas theory of $K3 \times E$ via the topological vertex*]{}, [arXiv:1504.02920](http://arxiv.org/abs/1504.02920). J. Bryan and N. C. Leung, [*The enumerative geometry of $K3$ surfaces and modular forms*]{}, J. Amer. Math. Soc. [**13**]{} (2000), no. 2, 371–410. J. Bryan, G. Oberdieck, R. Pandharipande, and Q. Yin, *Curve counting on abelian surfaces and threefolds*, Algebr. Geom., to appear, [arXiv:1506.00841](https://arxiv.org/abs/1506.00841). J. H. Bruinier and M. Westerholt-Raum, [*Kudla’s modularity conjecture and formal Fourier-Jacobi series*]{}, Forum Math. Pi [**3**]{} (2015), e7, 30 pp. [arXiv:1409.4996](http://arxiv.org/abs/1409.4996). X. Chen, *A simple proof that rational curves on [$K3$]{} are nodal*, Math. Ann. **324** (2002), no. 1, 71–104. C. Ciliberto, A. L. Knutsen, [*On k-gonal loci in Severi varieties on general K3 surfaces and rational curves on hyperkähler manifolds*]{}, J. Math. Pures Appl. (9) [**101**]{} (2014), no. 4, 473–494. O. Debarre, [*Complex tori and abelian varieties*]{}, SMF/AMS Texts and Monographs, [**11**]{}, Amer. Math. Soc., Providence, RI; Soc. Math. France, Paris, 2005, x+109 pp. M. Eichler and D. Zagier, [*The theory of Jacobi forms*]{}, Progress in Mathematics, [**55**]{}, Birkhäuser Boston, Inc., Boston, MA, 1985, v+148 pp. C. Faber and R. Pandharipande, [*Relative maps and tautological classes*]{}, J. Eur. Math. Soc. (JEMS) [**7**]{} (2005), no. 1, 13–49. C. Faber and R. Pandharipande, [*Tautological and non-tautological cohomology of the moduli space of curves*]{}, in [*Handbook of moduli*]{}, Vol. I, 293–330, Adv. Lect. Math. (ALM), [**24**]{}, Int. Press, Somerville, MA, 2013. L.
Göttsche, [*The Betti numbers of the Hilbert scheme of points on a smooth projective surface*]{}, Math. Ann. [**286**]{} (1990), no. 1-3, 193–207. T. Graber and R. Pandharipande, [*Localization of virtual classes*]{}, Invent. Math. [**135**]{} (1999), no. 2, 487–518. T. Graber and R. Vakil, [*Relative virtual localization and vanishing of tautological classes on moduli spaces of curves*]{}, Duke Math. J. [**130**]{} (2005), no. 1, 1–37. J. Li, [*Stable morphisms to singular schemes and relative stable morphisms*]{}, J. Differential Geom. [**57**]{} (2001), no. 3, 509–578. J. Li, [*A degeneration formula for Gromov-Witten invariants*]{}, J. Differential Geom. [**60**]{} (2002), no. 2, 199–293. S. Katz, A. Klemm, and C. Vafa, [*M-theory, topological strings, and spinning black holes*]{}, Adv. Theor. Math. Phys. [**3**]{} (1999), 1445–1537. H. Lange and E. Sernesi, [*Curves of genus $g$ on an abelian variety of dimension $g$*]{}, Indag. Math. (N.S.) [**13**]{} (2002), no. 4, 523–535. M. Kaneko and M. Koike, [*On extremal quasimodular forms*]{}, Kyushu J. Math. [**60**]{} (2006), no. 2, 457–470. M. Kool and R. Thomas, [*Reduced classes and curve counting on surfaces I: theory*]{}, Algebr. Geom. [**1**]{} (2014), no. 3, 334–383. M. Lehn, *Lectures on [H]{}ilbert schemes*, in *Algebraic structures and moduli spaces*, volume 38 of CRM Proc. Lecture Notes, 1–30, Amer. Math. Soc., Providence, RI, 2004. A. Libgober, [*Elliptic genera, real algebraic varieties and quasi-Jacobi forms*]{}, Topology of stratified spaces, 95–120, Math. Sci. Res. Inst. Publ., 58, Cambridge Univ. Press, Cambridge, 2011. D. Maulik, [*Gromov-Witten theory of $A_n$-resolutions*]{}, Geom. and Top. [**13**]{} (2009), 1729–1773. D. Maulik and R. Pandharipande, [*A topological view of Gromov-Witten theory*]{}, Topology [**45**]{} (2006), no. 5, 887–918. D. Maulik, R. Pandharipande, and R. Thomas, [*Curves on $K3$ surfaces and modular forms*]{}, J. of Topology [**3**]{} (2010), 937–996. G. 
Oberdieck, [*Gromov–Witten invariants of the Hilbert scheme of points of a $K3$ surface*]{}, Geom. Top., to appear, [arXiv:1406.1139](http://arxiv.org/abs/1406.1139). G. Oberdieck, [*On reduced stable pair invariants*]{}, Math. Z., to appear, [arXiv:1605.04631](https://arxiv.org/abs/1605.04631). G. Oberdieck and A. Pixton, [*Gromov–Witten theory of elliptic fibrations: Jacobi forms and holomorphic anomaly equations*]{}, [arXiv:1709.01481](https://arxiv.org/abs/1709.01481). G. Oberdieck and R. Pandharipande, [*Curve counting on $K3\times E$, the Igusa cusp form $\chi_{10}$, and descendent integration*]{}, in K3 surfaces and their moduli, C. Faber, G. Farkas, and G. van der Geer, eds., Birkhauser Prog. in Math. 315 (2016), 245–278. A. Okounkov and R. Pandharipande, [*Virasoro constraints for target curves*]{}, Invent. Math. [**163**]{} (2006), no. 1, 47–108. A. Okounkov and R. Pandharipande, [*Quantum cohomology of the Hilbert scheme of points in the plane*]{}, Invent. Math. [**179**]{} (2010), no. 3, 523–557. R. Pandharipande and A. Pixton, [*Gromov-Witten/Pairs descendent correspondence for toric 3-folds*]{}, Geom. and Top. (to appear). R. Pandharipande and A. Pixton, [*Gromov-Witten/Pairs correspondence for the quintic 3-fold*]{}, J. Amer. Math. Soc. [**30**]{} (2017), no. 2, 389–449. R. Pandharipande and R. P. Thomas, [*Curve counting via stable pairs in the derived category*]{}, Invent. Math. [**178**]{} (2009), no. 2, 407–447. R. Pandharipande and R. P. Thomas, [*Stable pairs and BPS invariants*]{}, J. Amer. Math. Soc. [**23**]{} (2010), no. 1, 267–297. N. Saradha, [*Transcendence measure for $\eta/\omega$*]{}, Acta Arith. [**92**]{} (2000), no. 1, 11–25. [^1]: Jacobi forms are two-parameter generalizations of classical modular forms. A quasi-Jacobi form is the holomorphic part of an almost-holomorphic Jacobi form, see [@Lib] for the definition and [@RES Sec.1] for an introduction. In this paper we will use the explicit presentation of the quasi-Jacobi form algebra presented in [@HilbK3 Appendix B].
[^2]: The shift $r = 1-g-d$ is related to a similar shift in the GW/Pairs correspondence [@PaPix1; @PaPix2]. [^3]: The moduli space in the disconnected case is always denoted by a $\bullet$ here. [^4]: The same as a set but with possible repetitions. [^5]: The pullback of the symplectic form from the K3 surface yields a trivial quotient of the standard perfect-obstruction theory on the moduli space. The reduction by this quotient defines the reduced virtual class, see [@KT] for a modern treatment of this process. [^6]: We work here with an elliptically fibered K3 surface to obtain a uniform presentation of our results. By deformation invariance, our results imply parallel statements for any non-singular projective K3 surface with primitive curve class $\beta$. [^7]: We follow the convention of Section \[Section:Relative\_Gromov\_Witten\_theory\_of\_P1K3\] or equivalently of [@MP]. [^8]: This is parallel to the breaking of the reduced virtual class in the K3 case when degenerating to two rational elliptic surfaces, see [@MPT Section 4]. [^9]: If the curve is disconnected it must have precisely two components of degree $1$ over ${\mathbb{P}}^1$ each. Moreover, one component carries the insertion ${{\mathsf{p}}}\otimes \omega$ and contributes $1$, the other carries all the insertions $F \boxtimes \omega$ and contributes $\Delta(q)^{-1}$. [^10]: See Property (b). [^11]: We may also use Proposition \[Proposition\_take\_out\_div\] below to reduce to Proposition \[vanishing1\]. [^12]: For example, see [@EZ page 105] for the crucial coefficient bound. [^13]: If $\eta = \{ (\eta_i, \gamma_{s_i}) \}$, then $\eta^{\vee} = \{ (\eta_i, \gamma_{s_i}^{\vee}) \}$ where $\{ \gamma_i^{\vee} \}$ is the basis dual to $\{ \gamma_i \}$ with respect to the intersection pairing on $H^{\ast}(S,{{\mathbb{Q}}})$.
--- bibliography: - 'paper.bib' --- > [*Several new methods have recently been proposed for performing valid inference after model selection. An older method is sample splitting: use part of the data for model selection and the rest for inference. In this paper we revisit sample splitting combined with the bootstrap (or the Normal approximation). We show that this leads to a simple, assumption-free approach to inference and we establish results on the accuracy of the method. In fact, we find new bounds on the accuracy of the bootstrap and the Normal approximation for general nonlinear parameters with increasing dimension which we then use to assess the accuracy of regression inference. We define new parameters that measure variable importance and that can be inferred with greater accuracy than the usual regression coefficients. Finally, we elucidate an inference-prediction trade-off: splitting increases the accuracy and robustness of inference but can decrease the accuracy of the predictions.*]{} “Investigators who use \[regression\] are not paying adequate attention to the connection - if any - between the models and the phenomena they are studying. ... By the time the models are deployed, the scientific position is nearly hopeless. Reliance on models in such cases is Panglossian ...” —David Freedman Introduction ============ We consider the problem of carrying out assumption-free statistical inference after model selection for high-dimensional linear regression. This is now a large topic and a variety of approaches have been considered under different settings – an overview of a subset of these can be found in [@dezeure2015high]. We defer a detailed discussion of the literature and list of references until Section \[sec:related\]. In this paper, we will use linear models but we do not assume that the true regression function is linear. We show the following: 1.
Inference based on sample splitting followed by the bootstrap (or Normal approximation) gives assumption-free, robust confidence intervals under very weak assumptions. No other known method gives the same inferential guarantees. 2. The usual regression parameters are not the best choice of parameter to estimate in the weak assumption case. We propose new parameters, called LOCO (Leave-Out-COvariates) parameters, that are interpretable, general and can be estimated accurately. 3. There is a trade-off between prediction accuracy and inferential accuracy. 4. We provide new bounds on the accuracy of the Normal approximation and the bootstrap to the distribution of the projection parameter (the best linear predictor) when the dimension increases and the model is wrong. We need these bounds since we will use Normal approximations or the bootstrap after choosing the model. In fact, we provide new general bounds on Normal approximations for nonlinear parameters with increasing dimension. This gives new insights on the accuracy of inference in high-dimensional situations. In particular, the accuracy of the Normal approximation for the standard regression parameters is very poor while the approximation is very good for LOCO parameters. 5. The accuracy of the bootstrap can be improved by using an alternative version that we call the image bootstrap. However, this version is computationally expensive. The image bootstrap is discussed in the appendix. 6. We show that the law of the projection parameter cannot be consistently estimated without sample splitting. We want to emphasize that we do not claim that the LOCO parameter is optimal in any sense. We just aim to show that there exist alternatives to the usual parameters that, when the linear model is not true, (i) are more interpretable and (ii) can be inferred more accurately. 
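Point 1 can be illustrated with a minimal sketch (the screening rule, sample sizes, and all function names below are our own illustrative choices, not a prescription from the paper): the first half of the data is used only to select a model, and percentile bootstrap intervals for the projection parameter of the selected model are then computed from the second half alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_top_k(X, y, k):
    """Model selection on the first half of the data: keep the k covariates
    most correlated (in absolute value) with the response."""
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.sort(np.argsort(corr)[-k:])

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def split_and_bootstrap_ci(X, y, k=3, B=500, alpha=0.1):
    """Select on D_1, then form percentile bootstrap intervals for the
    projection parameter of the selected model using D_2 only."""
    half = len(y) // 2
    S = select_top_k(X[:half], y[:half], k)       # model selection on D_1
    X2, y2 = X[half:][:, S], y[half:]             # inference data D_2
    beta_hat = ols(X2, y2)                        # plug-in estimate of beta_S
    boot = np.empty((B, k))
    for b in range(B):
        idx = rng.integers(0, len(y2), len(y2))   # resample D_2 with replacement
        boot[b] = ols(X2[idx], y2[idx])
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return S, beta_hat, lo, hi

# toy data: the true regression function is nonlinear, so the linear model
# is deliberately misspecified and the target is the projection parameter
X = rng.normal(size=(400, 10))
y = X[:, 0] + np.sin(X[:, 1]) + 0.5 * rng.normal(size=400)
S, beta_hat, lo, hi = split_and_bootstrap_ci(X, y)
print("selected:", S)
```

Because selection and inference use disjoint halves, the second-half bootstrap treats the selected set as fixed, which is exactly what makes the resulting intervals valid without assumptions on the selection rule.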
### Problem Setup and Four (Random) Parameters that Measure Variable Importance {#problem-setup-and-four-random-parameters-that-measure-variable-importance .unnumbered} We consider a distribution-free regression framework, where the random pair $Z = (X,Y) \in \mathbb{R}^d \times \mathbb{R} $ of $d$-dimensional covariates and response variable has an unknown distribution $P$ belonging to a large non-parametric class $\mathcal{Q}_n$ of probability distributions on $\mathbb{R}^{d+1}$. We make no assumptions on the regression function $x \in \mathbb{R}^d \mapsto \mu(x) = \mathbb{E}\left[ Y | X = x \right]$ describing the relationship between the vector of covariates and the expected value of the response variable. In particular, we do not require it to be linear. We observe $\mathcal{D}_n = (Z_1,\ldots, Z_n)$, an i.i.d. sample of size $n$ from some $P \in \mathcal{Q}_n$, where $Z_i = (X_i,Y_i)$, for $i = 1,\ldots,n$. We apply to the data a procedure $w_n$, which returns both a subset of the coordinates and an estimator of the regression function over the selected coordinates. Formally, $$\mathcal{D}_n \mapsto w_n(\mathcal{D}_n) = \left(\widehat{S}, \widehat{\mu}_{\widehat{S}}\right),$$ where $\widehat{S}$, the selected model, is a random, nonempty subset of $\{1,\ldots,d\}$ and $\widehat{\mu}_{\widehat{S}}$ is an estimator of the regression function $x \in \mathbb{R}^d \mapsto \mathbb{E}\left[ Y | X_{\widehat{S}} = x_{\widehat{S}} \right]$ restricted to the selected covariates $\widehat{S}$, where for $x \in \mathbb{R}^d$, $x_{{\widehat{S}}} = (x_j, j \in {\widehat{S}})$ and $(X,Y) \sim P$, independent of $\mathcal{D}_n$. The model selection and estimation steps comprising the procedure $w_n$ need not be related to each other, and can each be accomplished by any appropriate method. 
The only assumption we impose on $w_n$ is that the size of the selected model be under our control; that is, $ 0 < |\widehat{S}| \leq k $, for a pre-defined positive integer $k \leq d$ where $k$ and $d$ can both increase with sample size. For example, $\widehat{S}$ may be defined as the set of $k$ covariates with the highest linear correlations with the response and $\hat{\mu}_{\widehat{S}}$ may be any non-parametric estimator of the regression function over the coordinates in $\widehat{S}$ with bounded range. Although our framework allows for arbitrary estimators of the regression function, we will be focusing on linear estimators: $\widehat{\mu}_{\widehat{S}}(x) = \widehat{\beta}_{\widehat{S}}^\top x_{\widehat{S}} $, where $\widehat{\beta}_{\widehat{S}}$ is any estimator of the linear regression coefficients for the selected variables – such as ordinary least squares on the variables in $\widehat{S}$. In particular, $\widehat{\beta}_{\widehat{S}}$ may arise from fitting a sparse linear model, such as the lasso or stepwise-forward regression, in which case estimation of the regression parameters and model selection can be accomplished simultaneously with one procedure. It is important to emphasize that, since we impose minimal assumptions on the class $\mathcal{Q}_n$ of data generating distributions and allow for arbitrary model selection and estimation procedures $w_n$, we will not assume anything about the quality of the output returned by the procedure $w_n$. In particular, the selected model $\widehat{S}$ need not be a good approximation of any optimal model, however optimality may be defined. Similarly, $\hat{\mu}_{\widehat{S}}$ may not be a consistent estimator of the regression function restricted to $\widehat{S}$. Instead, our concern is to provide statistical guarantees for various criteria of significance for the selected model $\widehat{S}$, uniformly over the choice of $w_n$ and over all the distributions $P \in \mathcal{Q}_n$.
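As one concrete toy instance of such a procedure $w_n$, the following sketch (names and details are illustrative assumptions, not the paper's method) implements stepwise-forward regression, returning a selected set $\widehat{S}$ of size $k$ together with the OLS coefficients on $\widehat{S}$:

```python
import numpy as np

def forward_stepwise(X, y, k):
    """A toy w_n: greedily add, k times, the covariate whose inclusion
    most reduces the residual sum of squares, then return the selected
    set S_hat together with the OLS coefficients on S_hat."""
    n, d = X.shape
    S = []
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in range(d):
            if j in S:
                continue
            Xs = X[:, S + [j]]
            beta = np.linalg.lstsq(Xs, y, rcond=None)[0]
            rss = float(np.sum((y - Xs @ beta) ** 2))
            if rss < best_rss:
                best_j, best_rss = j, rss
        S.append(best_j)
    S = sorted(S)
    beta_hat = np.linalg.lstsq(X[:, S], y, rcond=None)[0]
    return S, beta_hat

# toy data: only covariates 0 and 3 carry signal
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))
y = 2.0 * X[:, 0] - X[:, 3] + 0.3 * rng.normal(size=300)
S, beta_hat = forward_stepwise(X, y, k=2)
print("selected:", S)
```

Nothing in what follows requires this particular $w_n$ to succeed; the guarantees below hold even if the greedy search selects uninformative covariates.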
We will accomplish this goal by producing confidence sets for four [*random*]{} parameters in $\mathbb{R}^{ \widehat{S}}$, each providing a different assessment of the level of statistical significance of the variables in $\widehat{S}$ from a purely [*predictive*]{} standpoint. All of the random parameters under consideration are functions of the data generating distribution $P$, of the sample $\mathcal{D}_n$ and, therefore, of its size $n$ and, importantly, of the model selection and estimation procedure $w_n$. Below, $(X,Y)$ denotes a draw from $P$, independent of the sample $\mathcal{D}_n$. Thus the distribution of $(X,Y)$ is the same as their conditional distribution given $\mathcal{D}_n$. - [**The projection parameter $\beta_{\widehat{S}}$.**]{} The linear projection parameter $\beta_{\widehat{S}}$ is defined to be the vector of coefficients of the best linear predictor of $Y$ using $X_{\widehat{S}}$: $$\beta_{\widehat{S}} = \operatorname*{argmin}_{\beta \in \mathbb{R}^{\widehat{S}}} \mathbb{E}_{X,Y} \left[ (Y- \beta^\top X_{\widehat{S}})^2 \right],$$ where $\mathbb{E}_{(X,Y)}$ denotes the expectation with respect to the distribution of $(X,Y)$. The terminology projection parameter refers to the fact that $X_{\widehat{S}}^\top \beta_{{\widehat{S}}}$ is the projection of $Y$ onto the linear space of all random variables that can be obtained as linear functions of $X_{{\widehat{S}}}$. For a thorough discussion and an analysis of the properties of such parameters see [@buja2015models]. More generally, quantities of this type are also studied in [@lee2016exact; @taylor2014exact; @berk2013valid; @wasserman2014]. Note that the projection parameter is well-defined even though the true regression function $\mu$ is not linear.
Indeed, it is immediate that $$\label{eq::projection-parameter} \beta_{{\widehat{S}}} = \Sigma_{{\widehat{S}}}^{-1}\alpha_{{\widehat{S}}}$$ where $\alpha_{{\widehat{S}}} = (\alpha_{{\widehat{S}}}(j):\ j\in {\widehat{S}})$, $\alpha_{{\widehat{S}}}(j) = \mathbb{E}[Y X_{{\widehat{S}}}(j)]$ and $\Sigma_{{\widehat{S}}} = \mathbb{E}[X_{{\widehat{S}}} X_{{\widehat{S}}}^\top]$. We remark that the projection parameter depends only on the selected model ${\widehat{S}}$, and not on any estimate $\hat{\mu}_{{\widehat{S}}}$ of the regression function on the coordinates in ${\widehat{S}}$ that may be implemented in $w_n$. - [**The LOCO parameters $\gamma_{{\widehat{S}}}$ and $\phi_{{\widehat{S}}}$.**]{} Often, statisticians are interested in $\beta_{{\widehat{S}}}$ as a measure of the importance of the selected covariates. But, of course, there are other ways to measure variable importance. We now define two such parameters, which we refer to as [*Leave Out COvariate Inference – or LOCO – parameters*]{}; they were originally defined in [@lei2016distribution] and are similar to the variable importance measures used in random forests. The first LOCO parameter is $\gamma_{{\widehat{S}}} = (\gamma_{{\widehat{S}}}(j):\ j\in {\widehat{S}})$, where $$\label{eq:gamma.j} \gamma_{{\widehat{S}}}(j) = \mathbb{E}_{X,Y}\Biggl[|Y-\hat\beta_{{\widehat{S}}(j)}^\top X_{{\widehat{S}}(j)}|- |Y-\hat\beta_{{\widehat{S}}}^\top X_{{\widehat{S}}}| \Big| \mathcal{D}_n \Biggr].$$ In the above expression, $\hat\beta_{{\widehat{S}}}$ is [*any*]{} estimator of the projection parameter $\beta_{{\widehat{S}}}$, and ${\widehat{S}}(j)$ and $\hat\beta_{{\widehat{S}}(j)}$ are obtained by re-running the model selection and estimation procedure after removing the $j^{\mathrm{th}}$ covariate from the data $\mathcal{D}_n$. To be clear, for each $j \in {\widehat{S}}$, ${\widehat{S}}(j)$ is a subset of size $k$ of $\{1,\ldots,d\} \setminus \{j\}$. 
Notice that the selected model can be different when covariate $X(j)$ is held out from the data, so that the intersection between ${\widehat{S}}(j)$ and ${\widehat{S}}$ can be much smaller than $k-1$. The interpretation of $\gamma_{{\widehat{S}}}(j)$ is simple: it is the increase in prediction error incurred by not having access to $X(j)$ (in both the model selection and estimation steps). Of course, it is possible to extend the definition of this parameter by leaving out several variables from ${\widehat{S}}$ at once without additional conceptual difficulties.\ The parameter $\gamma_{{\widehat{S}}}$ has several advantages over the projection parameter $\beta_{{\widehat{S}}}$: it is more interpretable, since it refers directly to prediction error, and, as we shall see, the accuracy of the Normal approximation and of the bootstrap is much higher. Indeed, we believe that the widespread focus on $\beta_{{\widehat{S}}}$ is mainly due to the fact that statisticians are used to thinking in terms of cases where the linear model is assumed to be correct.\ The second type of LOCO parameter that we consider is the median LOCO parameter $\phi_{{\widehat{S}}} = (\phi_{{\widehat{S}}}(j):\ j\in {{\widehat{S}}})$ with $$\label{eq:median.LOCO} \phi_{{\widehat{S}}}(j) = {\rm median}\Biggl[|Y-\hat\beta_{{\widehat{S}}(j)}^\top X_{{\widehat{S}}(j)}|- |Y-\hat\beta_{{\widehat{S}}}^\top X_{{\widehat{S}}}|\,\Biggr],$$ where the median is over the conditional distribution of $(X,Y)$ given $\mathcal{D}_n$. Though one may simply regard $\phi_{{\widehat{S}}}$ as a robust version of $\gamma_{{\widehat{S}}}$, we find that inference for $\phi_{{\widehat{S}}}$ remains valid under weaker assumptions than the ones needed for $\gamma_{{\widehat{S}}}$. Of course, as with $\gamma_{{\widehat{S}}}$, we may leave out multiple covariates at the same time. - [**The prediction parameter $\rho_{{\widehat{S}}}$**]{}. 
It is also of interest to obtain an omnibus parameter that measures how well the selected model will predict future observations. To this end, we define the future predictive error as $$\rho_{{\widehat{S}}} = \mathbb{E}_{X,Y}\Bigl[| Y - \hat\beta_{{\widehat{S}}}^\top X_{{\widehat{S}}}|\, \Bigr],$$ where $\widehat{\beta}_{{\widehat{S}}}$ is any estimator of the projection parameter $\beta_{{\widehat{S}}}$. [**Remarks.**]{} 1. The LOCO and prediction parameters do not require linear estimators. For example, we can define $$\gamma_{{\widehat{S}}}(j) = \mathbb{E}_{X,Y}\Biggl[ |Y-\hat\mu_{{\widehat{S}}(j)}(X_{{{\widehat{S}}}(j)})| - |Y-\hat\mu_{{\widehat{S}}}(X_{{\widehat{S}}})|\ \Biggr], \quad j \in {\widehat{S}},$$ where $\hat\mu_{{\widehat{S}}}$ is any regression estimator restricted to the coordinates in ${\widehat{S}}$ and $\hat\mu_{{\widehat{S}}(j)}$ is the estimator obtained after performing a new model selection process and then refitting without covariate $j \in {\widehat{S}}$. Similarly, we could have $$\rho_{{\widehat{S}}} = \mathbb{E}_{X,Y}\Bigl[| Y - \hat{\mu}_{{\widehat{S}}}(X_{{\widehat{S}}})|\, \Bigr],$$ for an arbitrary estimator $\hat{\mu}_{{\widehat{S}}}$. For simplicity we will focus on linear estimators, although our results about the LOCO and prediction parameters hold even in this more general setting. 2. It is worth reiterating that the projection and LOCO parameters are only defined over the coordinates in ${\widehat{S}}$, the set of variables chosen in the model selection phase. If a variable is not selected, then the corresponding parameter is set to be identically zero and is not the target of any inference. There is another version of the projection parameter, defined as follows. For the moment, suppose that $d < n$ and that there is no model selection. 
Let $\beta_n = (\mathbb{X}^\top \mathbb{X})^{-1}\mathbb{X}^\top \mu_n$ where $\mathbb{X}$ is the $n\times d$ design matrix, whose rows are the $n$ covariate vectors $X_1,\ldots, X_n$, and $\mu_n = (\mu_n(1),\ldots, \mu_n(n))^\top$, with $\mu_n(i) = \mathbb{E}[Y_i | X_1,\ldots, X_n]$. This is just the conditional mean of the least squares estimator given $X_1,\ldots, X_n$. We call this the [*conditional projection parameter*]{}. The meaning of this parameter when the linear model is false is not clear. It is a data dependent parameter, even in the absence of model selection. [@buja2015models] have devoted a whole paper to this issue. Quoting from their paper: > [*When fitted models are approximations, conditioning on the regressor is no longer permitted ... Two effects occur: (1) parameters become dependent on the regressor distribution; (2) the sampling variability of the parameter estimates no longer derives from the conditional distribution of the response alone. Additional sampling variability arises when the nonlinearity conspires with the randomness of the regressors to generate a $1/\sqrt{n}$ contribution to the standard errors.*]{} Moreover, it is not possible to estimate the distribution of the conditional projection parameter estimate in the distribution free framework. To see this, note that the least squares estimator can be written as $\hat\beta(j) = \sum_{i=1}^n w_i Y_i$ for weights $w_i$ that depend on the design matrix. Then $\sqrt{n}(\hat\beta(j) - \beta(j)) = \sum_{i=1}^n w_i \epsilon_i$ where $\epsilon_i = Y_i - \mu_n(i)$. Thus, for each $j \in \{1,\ldots,d\}$ we have that $\sqrt{n}(\hat\beta(j) - \beta(j))$ is approximately $N(0,\tau^2)$, where $\tau^2 = \sum_i w_i^2 \sigma_i^2$, with $\sigma_i^2 = {\rm Var}(\epsilon_i | X_1,\ldots, X_n)$. The problem is that there is no consistent estimator of $\tau^2$ under the nonparametric models we are considering. 
Even if we assume that $\sigma_i^2$ is constant (an assumption we avoid in this paper), we still have that $\tau^2 =\sigma^2 \sum_i w_i^2$, which cannot be consistently estimated without assuming that the linear model is correct. Again, we refer the reader to [@buja2015models] for more discussion. In contrast, the projection parameter $\beta = \Sigma^{-1}\alpha$ is a fixed functional of the data generating distribution $P$ and is estimable. For these reasons, we focus in this paper on the projection parameter rather than the conditional projection parameter. Goals and Assumptions {#goals-and-assumptions .unnumbered} --------------------- Our main goal is to provide statistical guarantees for each of the four random parameters of variable significance introduced above, under our distribution free framework. For notational convenience, in this section we let $\theta_{{\widehat{S}}}$ be any of the parameters of interest: $\beta_{{\widehat{S}}}$, $\gamma_{{\widehat{S}}}$, $\phi_{{\widehat{S}}}$ or $\rho_{{\widehat{S}}}$. We will rely on sample splitting: assuming for notational convenience that the sample size is $2n$, we randomly split the data $\mathcal{D}_{2n}$ into two halves, $\mathcal{D}_{1,n}$ and $\mathcal{D}_{2,n}$. Next, we run the model selection and estimation procedure $w_{n}$ on $\mathcal{D}_{1,n}$, obtaining both ${\widehat{S}}$ and $\hat{\mu}_{{\widehat{S}}}$ (as remarked above, if we are concerned with the projection parameters, then we will only need ${\widehat{S}}$). 
We then use the second half of the sample $\mathcal{D}_{2,n}$ to construct an estimator $\hat \theta_{{\widehat{S}}}$ and a confidence hyper-rectangle $\hat{C}_{{\widehat{S}}}$ for $\theta_{{\widehat{S}}}$ satisfying the following properties: $$\begin{aligned} {\rm Concentration}: \phantom{xxxxxxxxx}&\ \displaystyle \limsup_{n \rightarrow \infty} \sup_{w_{n}\in {\cal W}_{n}} \sup_{P\in {\cal Q}_n} \mathbb{P}(||\hat\theta_{{\widehat{S}}}-\theta_{{\widehat{S}}}||_\infty > r_n) \to 0 \label{eq:concentration}\\ \vspace{.11pt} \nonumber\\ {\rm Coverage\ validity\ (honesty)}: &\ \displaystyle\liminf_{n\to\infty}\inf_{w_{n}\in {\cal W}_{n}} \inf_{P\in {\cal Q}_{n}} \mathbb{P}(\theta_{{\widehat{S}}}\in \hat{C}_{{\widehat{S}}})\geq 1-\alpha \label{eq::honest}\\ \vspace{.11pt} \nonumber \\ {\rm Accuracy}: \phantom{xxxxxxxx}&\ \displaystyle \limsup_{n \rightarrow \infty} \sup_{w_{n}\in {\cal W}_{n}} \sup_{P\in {\cal Q}_n}\mathbb{P}(\nu(\hat{C}_{{\widehat{S}}})> \epsilon_n)\to 0 \label{eq::accuracy}\end{aligned}$$ where $\alpha \in (0,1)$ is a pre-specified level of significance, $\mathcal{W}_n$ is the set of all model selection and estimation procedures on samples of size $n$, $r_n$ and $\epsilon_n$ both vanish as $n \rightarrow \infty$, and $\nu$ measures the size of the set (the length of the sides of the rectangle); recall that $k = |{\widehat{S}}|$ is non-random. The probability statements above take into account both the randomness in the sample $\mathcal{D}_{n}$ and the randomness associated with splitting it into halves. [**Remark.**]{} The property that the coverage of $\hat{C}_{{\widehat{S}}}$ is guaranteed uniformly over the entire class $\mathcal{Q}_n$ is known as (asymptotic) honesty [@li1989honest]. Note that the confidence intervals are for random parameters (based on half the data), but the uniform coverage, accuracy and concentration guarantees hold marginally. 
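The splitting scheme just described can be sketched in a few lines; the screening rule standing in for $w_n$, the function name, and the simulated data below are our own illustrative choices, not the paper's.

```python
import numpy as np

def split_and_estimate(X, y, k, rng):
    """Sketch of the sample-splitting scheme: select the model on D_{1,n}
    and estimate the projection parameter on the held-out half D_{2,n}."""
    n2 = X.shape[0]
    idx = rng.permutation(n2)
    I1, I2 = idx[: n2 // 2], idx[n2 // 2:]        # indices of D_{1,n} and D_{2,n}
    # model selection on D_{1,n} (here: top-k correlation screening)
    corr = np.abs([np.corrcoef(X[I1, j], y[I1])[0, 1] for j in range(X.shape[1])])
    S_hat = np.sort(np.argsort(corr)[-k:])
    # least squares on D_{2,n}, restricted to S_hat: estimates beta_{S_hat}
    beta_hat, *_ = np.linalg.lstsq(X[np.ix_(I2, S_hat)], y[I2], rcond=None)
    return S_hat, beta_hat

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 8))
y = X[:, 2] + 0.5 * X[:, 5] + rng.normal(size=400)
S_hat, beta_hat = split_and_estimate(X, y, k=2, rng=rng)
```

Because the two halves are independent, the second-half estimator behaves, conditionally on $\widehat{S}$, like an ordinary estimator of a fixed parameter.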
The statistical guarantees listed above ensure that both $\hat{\theta}_{{\widehat{S}}}$ and $\hat{C}_{{\widehat{S}}}$ are [*robust*]{} with respect to the choice of $w_n$. We seek validity over all model selection and estimation rules because, in realistic data analysis, the procedure $w_n$ can be very complex. In particular, the choice of model can involve plotting, outlier removal, transformations, choosing among various competing models, etc. Thus, unless we have validity over all $w_n$, there will be room for unconscious biases to enter. Note that sample splitting is key in yielding uniform coverage and robustness. The confidence sets we construct will be hyper-rectangles. The reason for this choice is twofold. First, once we have a rectangular confidence set for a vector parameter, we immediately have simultaneous confidence intervals for the components of the vector. Second, recent results on high dimensional normal approximation of normalized sums by [@cherno1; @cherno2] have shown that central limit theorems for hyper-rectangles have only a logarithmic dependence on the dimension. Depending on the target parameter, the class $\mathcal{Q}_n$ of data generating distributions on $\mathbb{R}^{d+1}$ for the pair $(X,Y)$ will be different. We will provide details on each such case separately. However, it is worth noting that inference for the projection parameters calls for a far more restricted class of distributions than the other parameters. In particular, we find it necessary to impose uniform bounds on the largest and smallest eigenvalues of the covariance matrices of all $k$-dimensional marginals of the $d$ covariates, as well as bounds on the higher moments of $X$ and on the mixed moments of $X$ and $Y$. We will further assume, in most cases, that the distribution of the pair $(X,Y)$ is supported in $[-A,A]^{d+1}$, for some fixed $A>0$. Such compactness assumptions are stronger than necessary but allow us to keep the statements of the results and their proofs simpler. 
In particular, they may be replaced with appropriate tail or moment bounds without much changing our analysis and results. Although we have formulated the guarantees of honest validity, accuracy and concentration in asymptotic terms, all of our results are in fact obtained as finite sample bounds. This allows us to derive consistency rates in $n$ with all the relevant quantities, such as the dimension $d$, the size of the selected model $k$, and the variance and eigenvalue bounds needed for the projection parameters, accounted for in the constants (with the exception of $A$, which we keep fixed). As a result, our results remain valid, and are in fact most interesting, when all these quantities are allowed to change with $n$. Related Work {#sec:related} ------------ The problem of inference after model selection has received much attention lately. Much of the work falls broadly into three categories: inference uniformly over selection procedures, inference with regard to a particular debiased or desparsified model, and inference conditional on model selection. A summary of some of the various methods is in Table \[table::compare\]. We discuss these approaches in more detail in Section \[section::comments\]. The uniform approach includes POSI [@berk2013valid], which constructs valid inferential procedures regardless of the model selection procedure by maximizing over all possible model selections. This method assumes Normality and a fixed, known variance, and is computationally expensive. These ideas are built upon by later work [@bachoc2; @bachoc], which extends them to other parameters of interest and allows for heteroskedasticity, non-normality, and model misspecification. Most other approaches focus on a particular model selection procedure and conduct inference for selections made by that procedure. 
This includes the literature on debiased or desparsified regularized models, for example [@buhlmann2013statistical], [@zhang2014confidence], [@javanmard2014confidence], [@peter.sarah.2015], [@test], [@zhang2017simultaneous], [@vandegeer2014asymptotically], [@nickl2013confidence]. This work constructs confidence intervals for parameters in high dimensional regression. These can be used for the selected model if a Bonferroni correction is applied. However, these methods tend to assume that the linear model is correct, as well as a number of other assumptions on the design matrix and the distribution of the errors. A separate literature on selective inference has focused on inference with respect to the selected model, conditional on the event of that model’s selection. This began with [@lockhart2014significance], but was developed more fully in [@lee2016exact], [@carving], and [@taylor2014exact]. Further works in this area include [@tibshirani2015uniform], [@randomization], [@loftus2015selective], [@bootstrap.john], [@tibshirani2016exact], [@loco.john]. In the simplest version, the distribution of $\sqrt{n}(\hat\beta(j) - \beta(j))$ conditional on the selected model has a truncated Gaussian distribution, if the errors are Normal and the covariates are fixed. The cdf of the truncated Gaussian is used as a pivot to obtain tests and confidence intervals. This approach requires Normality and a fixed, known variance. While the approach has broadened in later work, the methods still tend to assume a fixed design and a known, parametric structure for the outcome. There have been several additional approaches to this problem that do not fall into any of these broad categories. 
While this is a larger literature than can be addressed completely here, it includes early work on model selection [@hurvich1990impact] and model averaging interpretations [@hjort2003frequentist]; the impossibility results of [@leeb2008can], [@buja2015models] on random $X$ and model misspecification; methods based on resampling or sample splitting [@CL:11; @CL:13; @Efron:14; @wasserman2009high; @meinshausen2009pvalues]; stability selection [@meinshausen2010stability; @shah2013variable]; the conformal inference approach of [@lei2016distribution]; goodness-of-fit tests of [@shah2018goodness]; moment-constraint-based uniform confidence sets [@andrews2009hybrid]; [@meinshausen2015group] on inference about groups of variables under general designs; [@belloni2011inference] in the instrumental variable setting; [@belloni2015uniform] on post-selection inference for $Z$-estimators; and the knockoffs approach of [@barber2015controlling] and later [@candes2016panning]. Although they are not directed at linear models, [@wager2014confidence] and [@JMLR:v17:14-168] address similar problems for random forests.

  Method             Parameter      Assumptions   Accuracy                                Computation   Robust
  ------------------ -------------- ------------- --------------------------------------- ------------- --------
  Debiasing          True $\beta$   Very Strong   $1/\sqrt{n}$                            Easy          No
  Conditional        Projection     Strong        Not known                               Easy          No
  Uniform            Projection     Strong        $\sqrt{k/n}$                            NP hard       Yes
  Sample Splitting   Projection     Weak          $\sqrt{k^{5/2}\log k\sqrt{\log n}/n}$   Easy          Yes
  Sample Splitting   LOCO           None          $\sqrt{\log (kn)/n}$                    Easy          Yes

  : *Different inferential methods. ‘Accuracy’ refers to the size of the sides of the confidence set. ‘Robust’ refers to robustness to model assumptions. The term ‘Very Strong’ means that the linear model is assumed to be correct and that there are incoherence assumptions on the design matrix. ‘Strong’ means constant variance and Normality are assumed. 
‘Weak’ means only iid and an invertible covariance matrix (for the selected variables). ‘None’ means only iid, or iid plus a moment assumption.*[]{data-label="table::compare"} [**Sample Splitting.**]{} The oldest method for inference after model selection is sample splitting: half the data ${\cal D}_1$ are used for model fitting and the other half ${\cal D}_2$ are used for inference.[^1] Thus $S = w_{n}({\cal D}_1)$. The earliest references for sample splitting that we know of are [@Barnard], [@cox1975note], [@faraway1995data], [@hartigan1969using], page 13 of [@miller2002subset], [@moran1973dividing], page 37 of [@mosteller1977data] and [@picard1990data]. To quote Barnard: “ ... the simple idea of splitting a sample in two and then developing the hypothesis on the basis of one part and testing it on the remainder may perhaps be said to be one of the most seriously neglected ideas in statistics ...” To the best of our knowledge, only two methods achieve asymptotically honest coverage: sample splitting and uniform inference. Uniform inference is based on estimating the distribution of the parameter estimates over all possible model selections. In general, this is infeasible. But we compare sample splitting and uniform inference in a restricted model in Section \[section::splitornot\]. Outline ------- In Section \[section::splitting\] we introduce the basic sample splitting strategies. In Section \[section::splitornot\] we compare sample splitting to non-splitting strategies. Section \[section::comments\] contains some comments on other methods. In Section \[section::simulation\] we report some numerical examples. In Section \[section::berry\] we establish a Berry-Esseen bound for regression with possibly increasing dimension and no assumption of linearity on the regression function. Section \[section::conclusion\] contains concluding remarks. Extra results, proofs, and a discussion of another version of the bootstrap are relegated to the Appendices. 
Notation -------- Let $Z=(X,Y)\sim P$ where $Y\in\mathbb{R}$ and $X\in \mathbb{R}^d$. We write $X = (X(1),\ldots, X(d))$ to denote the components of the vector $X$. Define $\Sigma = \mathbb{E}[X X^\top]$ and $\alpha = (\alpha(1),\ldots,\alpha(d))$ where $\alpha(j) = \mathbb{E}[Y X(j)]$. Let $\sigma = {\rm vec}(\Sigma)$ and $\psi \equiv \psi(P) = (\sigma,\alpha)$. The regression function is $\mu(x) = \mathbb{E}[Y|X=x]$. We use $\nu$ to denote Lebesgue measure. We write $a_n \preceq b_n$ to mean that there exists a constant $C>0$ such that $a_n \leq C b_n$ for all large $n$. For a non-empty subset $S\subset \{1,\ldots, d\}$ of the covariates, $X_S$ or $X(S)$ denotes the corresponding elements of $X$: $(X(j):\ j\in S)$. Similarly, $\Sigma_S = \mathbb{E}[X_S X_S^\top]$ and $\alpha_S = \mathbb{E}[Y X_S]$. We write $\Omega = \Sigma^{-1}$ and $\omega = {\rm vec}(\Omega)$, where ${\rm vec}$ is the operator that stacks a matrix into one large vector. Also, ${\rm vech}$ is the half-vectorization operator: for any symmetric $k\times k$ matrix $A$, $\mathrm{vech}(A)$ is the column vector of dimension $k(k+1)/2$ obtained by stacking the elements of $A$ on and below the diagonal. $A\otimes B$ denotes the Kronecker product of matrices. The commutation matrix $K_{m,n}$ is the $mn \times mn$ matrix defined by $K_{m,n} {\rm vec}(A) = {\rm vec}(A^\top)$ for any $m\times n$ matrix $A$. Main Results {#section::splitting} ============ We now describe how to construct estimators of the random parameters defined earlier. Recall that we rely on data splitting: we randomly split the $2n$ data points into two halves ${\cal D}_{1,n}$ and ${\cal D}_{2,n}$. Then, for a given choice of the model selection and estimation rule $w_n$, we use ${\cal D}_{1,n}$ to select a non-empty set of variables ${\widehat{S}}\subset \{ 1,\ldots,d\}$ where $k =|{\widehat{S}}| < n$. 
For the LOCO and prediction parameters, based on $\mathcal{D}_{1,n}$, we also compute $\widehat{\beta}_{{\widehat{S}}}$, any estimator of the projection parameters restricted to ${\widehat{S}}$. In addition, for each $j \in {\widehat{S}}$, we further compute, still using $\mathcal{D}_{1,n}$ and the rule $w_n$, $\widehat{\beta}_{{\widehat{S}}(j)}$, the estimator of the projection parameters over the set $\widehat{S}(j)$. Also, for $l=1,2$, we denote by $\mathcal{I}_{l,n}$ the random subset of $\{1,\ldots, 2n\}$ containing the indices of the data points in $\mathcal{D}_{l,n}$. Projection Parameters {#sec:projection} --------------------- In this section we will derive various statistical guarantees for the projection parameters, defined in . We will first define the class of data generating distributions on $\mathbb{R}^{d+1}$ for which our results hold. In the definition below, $S$ denotes a non-empty subset of $\{1,\ldots,d\}$ and $W_S = ({\rm vech}(X_S X_S^\top), X_SY)$. \[def:Pdagger\] Let ${\cal P}_n^{\mathrm{OLS}} $ be the set of all probability distributions $P$ on $\mathbb{R}^{d+1}$ with zero mean, a Lebesgue density and such that, for some positive quantities $A, a, u, U , v$ and $\overline{v}$, 1. the support of $P$ is contained in $[-A,A]^{d+1}$; 2. $\min_{ \{ S \colon |S| \leq k \} } \lambda_{\rm min}(\Sigma_S) \geq u$ and $\max_{ \{ S \colon |S| \leq k\} } \lambda_{\rm max}(\Sigma_S) \leq U$, where $\Sigma_S = \mathbb{E}_P[X_S X_S^\top]$; 3. $\min_{ \{S \colon |S| \leq k \} } \lambda_{\rm min}({\rm Var}_P(W_S))\geq v$ and $\max_{ \{S \colon |S| \leq k\} } \lambda_{\rm max}({\rm Var}_P(W_S))\leq \overline{v}$. 4. $\min\{ U, \overline{v} \} \geq \eta$, for a fixed $\eta>0$. The first, compactness, assumption can easily be replaced by assuming instead that $Y$ and $X$ are sub-Gaussian, without any technical difficulty. We make this boundedness assumption to simplify our results. 
The bound on the smallest eigenvalue of $\Sigma_S$, uniformly over all subsets $S$, is natural: the projection parameter is only well defined provided that $\Sigma_S$ is invertible for all $S$, and the closer $\Sigma_S$ is to being singular, the higher the uncertainty. The uniform condition on the largest eigenvalue of $\Sigma_S$ in part 2. is used to obtain sharper bounds than the ones stemming from the crude bound $U \leq A k$ implied by the assumption of a compact support (see e.g. below). The quantities $v$ and $\overline{v}$ in part 3. are akin to fourth moment conditions. In particular, one can always take $\overline{v} \leq A^2 k^2$ in the very worst case. Finally, the assumption of zero mean is imposed out of convenience and to simplify our derivations, so that we need not be concerned with an intercept term. As remarked above, in all of our results we have kept track of the dependence on the constants $a, u, U , v$ and $\overline{v}$, so that we may in fact allow all these quantities to change with $n$ (but we do treat $A$ as fixed and have therefore incorporated it into the constants). Finally, the assumption that $U$ and $\overline{v}$ are bounded away from zero is extremely mild. In particular, the parameter $\eta$ is kept fixed and its value affects the constants in Theorems \[thm:beta.accuracy2\], \[thm::big-theorem\] and \[theorem::beta.boot\] through the matrix Bernstein inequality (see ). [**Remark.**]{} Although our assumptions imply that the individual coordinates of $X$ are sub-Gaussian, we do not require $X$ itself to be a sub-Gaussian vector, in the usual sense that, for each $d$-dimensional unit vector $\theta$, the random variable $\theta^\top X$ is sub-Gaussian with variance parameter independent of $\theta$ and $d$. 
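As an illustration of condition 2., the following sketch computes the bounds $u$ and $U$ by brute force over all subsets of size at most $k$. The function name and the equicorrelated example are ours; for an equicorrelated matrix with correlation $\rho$, any principal submatrix of size $s$ has eigenvalues $1-\rho$ (with multiplicity $s-1$) and $1+(s-1)\rho$, which makes the output easy to check by hand.

```python
import numpy as np
from itertools import combinations

def eigen_bounds(Sigma, k):
    """Smallest and largest eigenvalues of Sigma_S over all non-empty
    subsets S with |S| <= k (brute force; exponential in k)."""
    d = Sigma.shape[0]
    u, U = np.inf, -np.inf
    for size in range(1, k + 1):
        for S in combinations(range(d), size):
            eig = np.linalg.eigvalsh(Sigma[np.ix_(S, S)])  # ascending order
            u, U = min(u, eig[0]), max(U, eig[-1])
    return u, U

# equicorrelated covariance with rho = 0.5 in dimension d = 6
rho, d = 0.5, 6
Sigma = (1 - rho) * np.eye(d) + rho * np.ones((d, d))
u, U = eigen_bounds(Sigma, k=3)   # u = 1 - rho = 0.5, U = 1 + 2*rho = 2.0
```

In practice these quantities are assumptions about $P$ rather than something to compute, but the brute-force check makes the role of $k$ in conditions 2. and 3. concrete.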
Recall that the projection parameters defined in are $$\label{eq:betahat} \beta_{{\widehat{S}}} = \Sigma_{{\widehat{S}}}^{-1}\alpha_{{\widehat{S}}},$$ where ${\widehat{S}}$ is the model selected based on $\mathcal{D}_{1,n}$ (of size no larger than $k$) and $$\label{eq:sigma.alpha} \alpha_{{\widehat{S}}} = \mathbb{E}[Y X({\widehat{S}})] \quad \text{and} \quad \Sigma_{{\widehat{S}}} = \mathbb{E}[X({\widehat{S}}) X({\widehat{S}})^\top].$$ We will be studying the ordinary least squares estimator $\hat{\beta}_{{\widehat{S}}}$ of $\beta_{{\widehat{S}}}$ computed using the sub-sample $\mathcal{D}_{2,n}$ and restricted to the coordinates ${\widehat{S}}$. That is, $$\label{eq:least.squares} \hat{\beta}_{{\widehat{S}}} = \widehat{\Sigma}_{{\widehat{S}}}^{-1} \widehat{\alpha}_{{\widehat{S}}}$$ where, for any non-empty subset $S$ of $\{1,\ldots,d\}$, $$\label{eq:alpha.beta.hat} \widehat{\alpha}_{S} = \frac{1}{n} \sum_{i \in \mathcal{I}_{2,n} } Y_i X_i(S) \quad \text{and} \quad \widehat{\Sigma}_{S} = \frac{1}{n} \sum_{i \in \mathcal{I}_{2,n}} X_i(S) X_i(S)^\top.$$ Since each $P \in \mathcal{P}_n^{\mathrm{OLS}}$ has a Lebesgue density, $\hat{\Sigma}_{{\widehat{S}}}$ is invertible almost surely as long as $n \geq k \geq |{\widehat{S}}|$. Notice that $\hat{\beta}_{{\widehat{S}}}$ is not an unbiased estimator of $\beta_{{\widehat{S}}}$, conditionally or unconditionally on $\mathcal{D}_{2,n}$. In order to relate $\hat{\beta}_{{\widehat{S}}}$ to $\beta_{{\widehat{S}}}$, it will first be convenient to condition on ${\widehat{S}}$ and thus regard $\beta_{{\widehat{S}}}$ as a $k$-dimensional deterministic vector of parameters (recall that, for simplicity, we assume that $|{\widehat{S}}| \leq k$), which depends on some unknown $P \in \mathcal{P}_n^{\mathrm{OLS}}$. Then, $\hat{\beta}_{{\widehat{S}}}$ is an estimator of a fixed parameter $\beta_{{\widehat{S}}} = \beta_{{\widehat{S}}}(P)$ computed using an i.i.d. 
sample $\mathcal{D}_{2,n}$ from the same distribution $P \in \mathcal{P}_n^{\mathrm{OLS}}$. Since all our bounds depend on ${\widehat{S}}$ only through its size $k$, those bounds will hold also unconditionally. For each $P \in \mathcal{P}_n^{\mathrm{OLS}}$, we can represent the parameters $\Sigma_{{\widehat{S}}} = \Sigma_{{\widehat{S}}}(P)$ and $\alpha_{{\widehat{S}}} = \alpha_{{\widehat{S}}}(P)$ in in vectorized form as $$\label{eq:psi.beta} \psi = \psi_{{\widehat{S}}} = \psi({\widehat{S}},P)= \left[ \begin{array}{c} \mathrm{vech}(\Sigma_{{\widehat{S}}})\\ \alpha_{{\widehat{S}}}\\ \end{array} \right] \in \mathbb{R}^{b},$$ where $b = \frac{ k^2 + 3k}{2} $. Similarly, based on the sub-sample $\mathcal{D}_{2,n}$ we define the $n$ random vectors $$W_i = \left[ \begin{array}{c} \mathrm{vech}(X_i({\widehat{S}}) X_i({\widehat{S}})^\top)\\ Y_i \cdot X_i({\widehat{S}}) \\ \end{array} \right] \in \mathbb{R}^b, \quad i \in \mathcal{I}_{2,n},$$ and their average $$\label{eq:hat.psi.beta} \hat{\psi} = \hat{\psi}_{{\widehat{S}}} = \frac{1}{n} \sum_{i \in \mathcal{I}_{2,n}} W_i.$$ It is immediate that $\mathbb{E}_P[\hat{\psi}] = \psi$, uniformly over all $P \in \mathcal{P}_n^{\mathrm{OLS}}$. We express both the projection parameter $\beta_{{\widehat{S}}}$ and the least squares estimator $\hat{\beta}_{{\widehat{S}}}$ as non-linear functions of $\psi$ and $\hat{\psi}$, respectively, in the following way. Let $g \colon \mathbb{R}^b \rightarrow \mathbb{R}^k$ be given by $$\label{eq:g.beta} x = \left[ \begin{array}{c} x_1\\ x_2\\ \end{array} \right] \mapsto \left( \mathrm{math}(x_1) \right)^{-1} x_2,$$ where $x_1$ and $x_2$ correspond to the first $k(k+1)/2$ and the last $k$ coordinates of $x$, respectively, and $\mathrm{math}$ is the inverse mapping of $\mathrm{vech}$, i.e. $\mathrm{math}(x) = A$ if and only if $\mathrm{vech}(A) = x$. 
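A minimal numerical sketch of the maps $\mathrm{vech}$, $\mathrm{math}$ and $g$ follows (the implementation is ours; it fixes one particular ordering of the on- and below-diagonal entries, and any fixed ordering paired with its inverse works equally well). Applying $g$ to the vectorized empirical moments recovers the ordinary least squares estimator exactly.

```python
import numpy as np

def vech(A):
    """One fixed ordering of the on- and below-diagonal entries of A."""
    return A[np.tril_indices(A.shape[0])]

def math(x, k):
    """Inverse of vech: rebuild the symmetric k x k matrix from its vech.
    (Note: the name shadows the stdlib `math` module; kept to match the paper.)"""
    A = np.zeros((k, k))
    A[np.tril_indices(k)] = x
    return A + np.tril(A, -1).T

def g(psi, k):
    """The map g: psi = (vech(Sigma_S), alpha_S) -> Sigma_S^{-1} alpha_S."""
    b1 = k * (k + 1) // 2
    return np.linalg.solve(math(psi[:b1], k), psi[b1:])

rng = np.random.default_rng(2)
Xs = rng.normal(size=(500, 3))                  # X restricted to S_hat, k = 3
y = Xs @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=500)
Sigma_hat = Xs.T @ Xs / 500                     # empirical second moments
alpha_hat = Xs.T @ y / 500
psi_hat = np.concatenate([vech(Sigma_hat), alpha_hat])
beta_hat = g(psi_hat, k=3)                      # equals the OLS estimator
```

Since $g(\hat\psi) = (\tfrac{1}{n}X^\top X)^{-1}(\tfrac{1}{n}X^\top y)$, the normalizing factors cancel and `beta_hat` coincides with ordinary least squares on the selected coordinates.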
Notice that $g$ is well-defined over the convex set $$\left\{ \left[ \begin{array}{c} \mathrm{vech}(\Sigma)\\ x \end{array} \right] \colon \Sigma \in \mathcal{C}^+_{k}, x \in \mathbb{R}^k \right\}$$ where $\mathcal{C}^+_k$ is the cone of positive definite matrices of dimension $k$. It follows from our assumptions that, for each $P \in \mathcal{P}_n^{\mathrm{OLS}}$, $\psi$ is in the domain of $g$ and, as long as $n \geq k$, so is $\hat{\psi}$, almost surely. Thus, we may write $$\beta_{{\widehat{S}}} = g(\psi_{{\widehat{S}}}) \quad \text{and} \quad \hat{\beta}_{{\widehat{S}}} = g(\hat{\psi}_{{\widehat{S}}}).$$ This formulation of $\beta_{{\widehat{S}}}$ and $\hat{\beta}_{{\widehat{S}}}$ is convenient because, by expanding each coordinate of $g(\hat{\psi})$ separately through a first-order Taylor series expansion around $\psi$, it allows us to re-write $\hat{\beta}_{{\widehat{S}}} - \beta_{{\widehat{S}}}$ as a linear transformation of $\hat{\psi} - \psi$ given by the Jacobian of $g$ at $\psi$, plus a stochastic remainder term. Since $\hat{\psi} - \psi$ is an average, this approximation is simpler to analyze than the original quantity $\hat{\beta}_{{\widehat{S}}} - \beta_{{\widehat{S}}}$ and, provided that the remainder term of the Taylor expansion is small, also sufficiently accurate. This program is carried out in detail and in greater generality in Section \[section::berry\], where we derive finite sample Berry-Esseen bounds for non-linear statistics of sums of independent random vectors. The results in this section are direct, albeit non-trivial, applications of those bounds. ### Concentration of $\hat{\beta}_{{\widehat{S}}}$ {#concentration-of-hatbeta_widehats .unnumbered} We begin by deriving high probability concentration bounds for $\hat{\beta}_{{\widehat{S}}}$ around $\beta_{{\widehat{S}}}$. 
When there is no model selection nor sample splitting – so that ${\widehat{S}}$ is deterministic and equal to $\{1,\ldots,d\}$ – our results yield consistency rates for the ordinary least squares estimator of the projection parameters, under increasing dimensions and a misspecified model. An analogous result was established in [@hsu14], where the approximation error $\mu(x) - x^\top \beta$ is accounted for explicitly. \[thm:beta.accuracy2\] Let $$B_n = \frac{ k}{u^2} \sqrt{ U \frac{ \log k + \log n}{n}}$$ and assume that $\max\{ B_n, u B_n \} \rightarrow 0$ as $n \rightarrow \infty$. Then, there exists a constant $C>0$, dependent on $A$ and $\eta$ only, such that, for all $n$ large enough, $$\label{eq::beta2} \sup_{w_n \in \mathcal{W}_n} \sup_{P \in \mathcal{P}_n^{\mathrm{OLS}}} \|\hat\beta_{{\widehat{S}}} - \beta_{{\widehat{S}}} \| \leq C B_n,$$ with probability at least $1 - \frac{2}{n}$. [**Remarks.**]{} 1. It is worth recalling that, in the result above as well as in all the results of the paper, the probability is with respect to the joint distribution of the entire sample and of the splitting process. 2. For simplicity, we have phrased the bound in in an asymptotic manner. The result can be trivially turned into a finite sample statement by appropriately adjusting the value of the constant $C$, depending on how rapidly $\max\{ B_n, u B_n \}$ vanishes. 3. The proof of the above theorem relies mainly on an inequality for matrix norms and the vector and matrix Bernstein concentration inequalities (see below). 4. Theorems \[thm:beta.accuracy\] and \[thm:beta.accuracy2\] can be easily generalized to cover the case in which the model selection and the computation of the projection parameters are performed on the entire dataset and not on separate, independent splits. 
In this situation, it is necessary to obtain a high probability bound for the quantity $$\max_{S} \| \beta_S - \hat{\beta}_S \|$$ where the maximum is over all non-empty subsets of $\{1,\ldots,d\}$ of size at most $k$ and $\hat{\beta}_S = \hat{\Sigma}_{S}^{-1}\hat{\alpha}_{S}$ (see Equation \[eq:alpha.beta.hat\]). Since there are fewer than $ \left( \frac{e d}{k} \right)^k $ such subsets, an additional union bound argument in each application of the matrix and vector Bernstein inequalities (see Lemma \[lem:operator\]) within the proofs of both Theorems \[thm:beta.accuracy\] and \[thm:beta.accuracy2\] will give the desired result. The rates so obtained will then be worse than the ones from Theorems \[thm:beta.accuracy\] and \[thm:beta.accuracy2\] which, because of the sample splitting, do not require a union bound. In particular, the scaling of $k$ with respect to $n$ will be worse by a factor of $k \log \frac{d}{k}$. This immediately gives a rate of consistency for the projection parameter under arbitrary model selection rules without relying on sample splitting. We omit the details. ### Confidence sets for the projection parameters: Normal Approximations {#confidence-sets-for-the-projection-parameters-normal-approximations .unnumbered} We will now derive confidence sets for the projection parameters using a high-dimensional Normal approximation to $\hat{\beta}_{{\widehat{S}}}$. The construction of such confidence sets entails approximating the dominant linear term in the Taylor series expansion of $\hat{\beta}_{{\widehat{S}}} - \beta_{{\widehat{S}}}$ by a centered Gaussian vector in $\mathbb{R}^{{\widehat{S}}}$ with the same covariance matrix $\Gamma_{{\widehat{S}}}$ (see (\[eq:Gamma\]) in ). The coverage properties of the resulting confidence sets depend crucially on the ability to estimate this covariance. 
For that purpose, we use a plug-in estimator, given by $$\label{eq::Ga} \hat\Gamma_{{\widehat{S}}} = \hat{G}_{{\widehat{S}}}\hat V_{{\widehat{S}}} \hat{G}_{{\widehat{S}}}^\top$$ where $\hat V_{{\widehat{S}}} = \frac{1}{n}\sum_{i=1}^n [ (W_i - \hat\psi) (W_i - \hat\psi)^\top]$ is the $b \times b$ empirical covariance matrix of the $W_i$’s and the $k \times b$ matrix $\hat{G}_{{\widehat{S}}}$ is the Jacobian of the mapping $g$, given explicitly below in , evaluated at $\hat{\psi}$. The first confidence set for the projection parameter based on the Normal approximation that we propose is an $L_\infty$ ball of appropriate radius centered at $\hat{\beta}_{{\widehat{S}}}$: $$\label{eq::beta.conf-rectangle} \hat{C}_{{\widehat{S}}} = \Bigl\{ \beta \in \mathbb{R}^k:\ ||\beta-\hat\beta_{{\widehat{S}}}||_\infty \leq \frac{\hat{t}_\alpha}{\sqrt{n}}\Bigr\},$$ where $\hat{t}_\alpha$ is a random radius (dependent on $\mathcal{D}_{2,n}$) such that $$\label{eq:t.akpha} \mathbb{P}\left( \| \hat{\Gamma}^{1/2}_{\hat{S}} Q \|_\infty \leq \hat{t}_\alpha \right) = 1 - \alpha,$$ with $Q$ a random vector having the $k$-dimensional standard Gaussian distribution and independent of the data. In addition to the $L_\infty$ ball given in , we also construct a confidence set for $\beta_{{\widehat{S}}}$ in the form of a hyper-rectangle, with sides of different lengths in order to account for different variances of the covariates. This can be done using the set $$\label{eq:beta.hyper:CI} \tilde C_{{\widehat{S}}} = \bigotimes_{j\in {\widehat{S}}} \tilde{C}(j),$$ where $$\tilde{C}(j) = \left[ \hat\beta_{{\widehat{S}}}(j) - z_{\alpha/(2k)} \sqrt{\frac{ \hat\Gamma_{{\widehat{S}}}(j,j)}{n}}, \hat\beta_{{\widehat{S}}}(j) + z_{\alpha/(2k)} \sqrt{\frac{ \hat\Gamma_{{\widehat{S}}}(j,j)}{n}}\right],$$ with $\hat\Gamma_{{\widehat{S}}}$ given by (\[eq::Ga\]) and $z_{\alpha/(2k)}$ the $1 - \alpha/(2k)$ quantile of a standard Normal variate. 
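The radius $\hat{t}_\alpha$ has no closed form, but it can be approximated by Monte Carlo: simulate many draws of $\|\hat{\Gamma}^{1/2}_{\widehat{S}} Q\|_\infty$ and take the empirical $1-\alpha$ quantile. A minimal sketch in Python with `numpy` (assumed available), using a hypothetical plug-in covariance and estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
k, n, alpha = 3, 400, 0.1

# Hypothetical plug-in covariance Gamma-hat for the selected coordinates.
A = rng.normal(size=(k, k))
Gamma_hat = A @ A.T + k * np.eye(k)  # positive definite by construction

# Monte Carlo approximation of t-hat_alpha: the 1 - alpha quantile of
# ||Gamma_hat^{1/2} Q||_inf, with Q standard Gaussian and independent of the data.
L = np.linalg.cholesky(Gamma_hat)
Q = rng.normal(size=(100_000, k))
t_hat = np.quantile(np.max(np.abs(Q @ L.T), axis=1), 1 - alpha)

# L_inf-ball confidence set centered at a (hypothetical) estimate beta-hat.
beta_hat = np.array([0.5, -1.2, 2.0])
half_width = t_hat / np.sqrt(n)
lower, upper = beta_hat - half_width, beta_hat + half_width
```

Here `Q @ L.T` produces draws with covariance $\hat{\Gamma}_{{\widehat{S}}}$, since $LL^\top = \hat{\Gamma}_{{\widehat{S}}}$.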
Notice that we use a Bonferroni correction to guarantee a nominal coverage of $1-\alpha$. \[thm::big-theorem\] Let $\hat{C}_{{\widehat{S}}}$ and $\tilde{C}_{{\widehat{S}}}$ be the confidence sets defined in and , respectively. Let $$\label{eq:un} u_n = u -K_{2,n},$$ where $$K_{2,n} = C A \sqrt{ k U \frac{\log k + \log n}{n} },$$ with $C = C(\eta)>0$ the universal constant in . Assume, in addition, that $n$ is large enough so that $ u_n $ is positive. Then, for a $C >0$ dependent on $A$ only, $$\label{eq:big-theorem.Linfty} \inf_{w_n \in \mathcal{W}_n} \inf_{P\in {\cal P}_n^{\mathrm{OLS}}}\mathbb{P}(\beta_{{\widehat{S}}} \in \hat{C}_{{\widehat{S}}}) \geq 1-\alpha - C \Big(\Delta_{n,1} + \Delta_{n,2}+\Delta_{n,3} \Big)$$ and $$\label{eq:big-theorem.hyper} \inf_{w_n \in \mathcal{W}_n} \inf_{P\in {\cal P}_n^{\mathrm{OLS}}}\mathbb{P} (\beta_{{\widehat{S}}} \in \tilde{C}_{{\widehat{S}}}) \geq 1-\alpha - C\Big(\Delta_{n,1} + \Delta_{n,2}+\tilde{\Delta}_{n,3} \Big),$$ where $$\Delta_{n,1} = \frac{1}{\sqrt{v}}\left( \frac{ \overline{v}^2 k^2 (\log kn)^7}{n}\right)^{1/6} , \quad \Delta_{n,2} = \frac{ U }{ \sqrt{v}} \sqrt{ \frac{k^4 \overline{v} \log^2n \log k}{n\,u_n^6} },$$ $$\Delta_{n,3} = \left( \frac{ U^2 }{ v }\right)^{1/3} \left( \overline{v}^2 \frac{k^{5}}{u_n^{6} u^4} \frac{ \log n}{n} \log ^4 k\right)^{1/6} \quad \text{and} \quad \tilde{\Delta}_{n,3} = \min \left\{ \Delta_{n,3}, \frac{U^2}{v} \overline{v} \frac{ k^{5/2}}{u_n^3 u^2} \frac{ \log n}{n} \log k \right\}.$$ A few remarks are in order. 
The coverage probability is affected by three factors: the term $\Delta_{n,1}$, which bounds the approximation error stemming from the high dimensional Berry-Esseen theorem (see ); the term $\Delta_{n,2}$, which is a high probability bound on the size of the remainder term in the Taylor series expansion of $\hat{\beta}_{{\widehat{S}}}$ around $\beta_{{\widehat{S}}}$ and can therefore be thought of as the price for the non-linearity of the projection parameter; and the terms $\Delta_{n,3}$ and $\tilde{\Delta}_{n,3}$, which are due to the fact that the covariance of the estimator is unknown and also needs to be estimated, leading to another source of error (the bootstrap procedure, described below, implicitly estimates this covariance). In terms of the dependence of $k$ on $n$, all other things being equal, the covariance term $\Delta_{n,3}$ exhibits the worst rate, as it constrains $k$ to be of smaller order than $n^{1/5}$ in order to guarantee asymptotic coverage of $\hat{C}_{{\widehat{S}}}$. This same term also contains the worst dependence on $u$, the uniform bound on the smallest eigenvalue of all covariance matrices of the form $\Sigma_S$, for $S \subset \{1,\ldots,d\}$ with $0 < |S| \leq k$. Thus, the dependence of the rates on the dimension and on the minimal eigenvalue is overall quite poor. While this is, to an extent, unavoidable, we do not know whether our upper bounds are sharp. The reasons for replacing $u$ by the smaller term $u_n$ given in are somewhat technical, but are explained in the proof of the theorem. Assuming a scaling in $n$ that guarantees that the error terms $\Delta_{n,1}$, $\Delta_{n,2}$ and $\Delta_{n,3}$ are vanishing, this modification is inconsequential and does not affect the rates. The coverage rates obtained for the LOCO and prediction parameters below in are significantly faster than the ones for the projection parameters, and hold under fewer restrictions on the class of data generating distributions. 
We regard this as another reason to prefer the LOCO parameters. Interestingly, the covariance error term $\tilde{\Delta}_{n,3}$ for the confidence set $\tilde{C}_{{\widehat{S}}}$ is no worse than the corresponding term for the set $\hat{C}_{{\widehat{S}}}$, suggesting that using hyper-rectangles instead of hyper-cubes may be a better choice. The quantity $\overline{v}$ can be of order $k^2$ in the worst case, further inflating the terms $\Delta_{n,3}$ and $\tilde{\Delta}_{n,3}$. As a function of the sample size, there is a term of order $n^{-1/6}$ in $\Delta_{n,1}$ and $\Delta_{n,3}$. The exponent $1/6$ comes from the Berry-Esseen bound in Section 3. [@cherno2] conjecture that this rate is optimal for high-dimensional central limit theorems. Their conjecture is based on the lower bound result in [@bentkus1985lower]. If their conjecture is true, then this is the best rate that can be hoped for in general. The rates are slower than the rate obtained in the central limit theorem given in [@portnoy1987central] for robust regression estimators. A reason for this discrepancy is that [@portnoy1987central] assumes, among other things, that the linear model is correct. In this case, the least squares estimator is conditionally unbiased. Without the assumption of model correctness there is a substantial bias. If we assume that the covariates are independent then the situation gets dramatically better. For example, the term $\Delta_{n,2}$ is then $O(1/\sqrt{n})$. But the goal of this paper is to avoid adding such assumptions. We now consider the accuracy of the confidence set given by the hyper-rectangle $\tilde{C}_{{\widehat{S}}}$ from Equation by deriving an upper bound on the maximal side length $\max_{j \in {\widehat{S}}} |\tilde{C}(j)|$. Similar rates can be obtained for the lengths of the sides of the hyper-cube confidence set $\hat{C}_{{\widehat{S}}}$ given in . 
\[cor:accuracy.beta\] With probability at least $ 1- \frac{2}{n}$, the maximal length of the sides of the hyper-rectangle $\tilde{C}_{{\widehat{S}}}$ is bounded by $$C \sqrt{ \frac{\log k}{n} \left( \frac{k^{5/2}}{u_n^3 u^2} \overline{v} \sqrt{ \frac{\log n}{n}} + \frac{k }{u^4} \overline{v}\right) },$$ for a constant $C>0$ depending on $A$ only, uniformly over all $P \in \mathcal{P}_n^{\mathrm{OLS}}$. ### Confidence sets for the projection parameters: The Bootstrap {#confidence-sets-for-the-projection-parameters-the-bootstrap .unnumbered} The confidence set in , based on the Normal approximation, requires the evaluation of both the matrix $\hat{\Gamma}_{{\widehat{S}}}$ and the quantile $\hat{t}_\alpha$ in , which may be computationally inconvenient. Similarly, the hyper-rectangle requires computing the diagonal entries of $\hat{\Gamma}_{{\widehat{S}}}$. Below we show that the paired bootstrap can be deployed to construct analogous confidence sets, centered at $\hat{\beta}_{{\widehat{S}}}$, without knowledge of $\hat{\Gamma}_{{\widehat{S}}}$. Throughout, by the bootstrap distribution we mean the empirical probability measure associated to the sub-sample $\mathcal{D}_{2,n}$, conditionally on $\mathcal{D}_{1,n}$ and the outcome of the sample splitting procedure. We let $\hat{\beta}^*_{{\widehat{S}}}$ denote the estimator of the projection parameters $\beta_{{\widehat{S}}}$ of the form and arising from an i.i.d. sample of size $n$ drawn from the bootstrap distribution. It is important to point out that $\hat{\beta}^*_{{\widehat{S}}}$ is well-defined only provided that the bootstrap realization of the covariates $(X_1^*,\ldots,X_n^*)$ is such that the corresponding $k$-dimensional empirical covariance matrix $$\frac{1}{n} \sum_{i \in \mathcal{I}_{2,n}} X_i^*({\widehat{S}}) (X_i^*({\widehat{S}}))^\top$$ is invertible. 
Since the data distribution is assumed to have a $d$-dimensional Lebesgue density, this occurs almost surely with respect to the distribution of the full sample $\mathcal{D}_n$ if the bootstrap sample contains more than $k$ distinct values. Thus, the bootstrap guarantees given below only hold on this event. Luckily, this is a matter of little consequence, since under our assumptions the probability that this event does not occur is exponentially small in $n$ (see below). For a given $\alpha \in (0,1)$, let $\hat{t}^*_\alpha$ be the smallest positive number such that $$\mathbb{P}\left( \sqrt{n} \| \hat{\beta}^*_{{\widehat{S}}} - \hat{\beta}_{{\widehat{S}}}\| \leq \hat{t}^*_\alpha \Big| \mathcal{D}_{2,n} \right) \geq 1 - \alpha.$$ Next, let $(\tilde{t}^*_j, j \in {\widehat{S}})$ be such that $$\mathbb{P}\left( \sqrt{n} | \hat{\beta}^*_{{\widehat{S}}}(j) - \hat{\beta}_{{\widehat{S}}} (j) | \leq \tilde{t}^*_j, \forall j \Big| \mathcal{D}_{2,n} \right) \geq 1 - \alpha.$$ By the union bound, each $\tilde{t}^*_j$ can be chosen to be the largest positive number such that $$\mathbb{P}\left( \sqrt{n} | \hat{\beta}^*_{{\widehat{S}}}(j) - \hat{\beta}_{{\widehat{S}}} (j) | > \tilde{t}^*_j \Big| \mathcal{D}_{2,n} \right) \leq \frac{\alpha}{k}.$$ Consider the following two bootstrap confidence sets: $$\label{eq:ci.boot.beta} \hat{C}^*_{{\widehat{S}}} = \left\{ \beta \in \mathbb{R}^{{\widehat{S}}} \colon \| \beta - \hat{\beta}_{{\widehat{S}}} \|_\infty \leq \frac{ \hat{t}^*_{\alpha}}{\sqrt{n}} \right\} \quad \text{and} \quad \tilde{C}^*_{{\widehat{S}}} = \left\{ \beta \in \mathbb{R}^{{\widehat{S}}} \colon | \beta(j) - \hat{\beta}_{{\widehat{S}}}(j) | \leq \frac{ \tilde{t}^*_{j}}{\sqrt{n}}, \forall j \in {\widehat{S}}\right\}.$$ It is immediate to see that $\hat{C}^*_{{\widehat{S}}}$ and $\tilde{C}^*_{{\widehat{S}}}$ are just the bootstrap equivalents of the confidence sets of and , respectively. 
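The paired bootstrap for $\hat{C}^*_{{\widehat{S}}}$ can be sketched in a few lines: resample $(X_i, Y_i)$ pairs, recompute the estimator on each resample, and take the empirical quantile of $\sqrt{n}\,\|\hat{\beta}^*_{{\widehat{S}}} - \hat{\beta}_{{\widehat{S}}}\|_\infty$. A minimal illustration in Python with `numpy` (assumed available), with simulated second-split data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, alpha, B = 300, 3, 0.1, 500

# Hypothetical second-split data restricted to the selected coordinates S-hat.
X = rng.normal(size=(n, k))
Y = X @ np.array([1.0, -0.5, 0.0]) + rng.normal(size=n)

def ols(Xs, Ys):
    """Projection-parameter estimate Sigma-hat^{-1} alpha-hat on a (sub)sample."""
    m = len(Ys)
    return np.linalg.solve(Xs.T @ Xs / m, Xs.T @ Ys / m)

beta_hat = ols(X, Y)

# Paired bootstrap: resample (X_i, Y_i) pairs together, recompute the estimator,
# and record sqrt(n) * ||beta*-hat - beta-hat||_inf.
stats = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    stats[b] = np.sqrt(n) * np.max(np.abs(ols(X[idx], Y[idx]) - beta_hat))

t_star = np.quantile(stats, 1 - alpha)  # bootstrap quantile t*-hat_alpha
lower = beta_hat - t_star / np.sqrt(n)
upper = beta_hat + t_star / np.sqrt(n)
```

In practice one would check that each resampled design matrix is non-singular; with continuous covariates and $n \gg k$ this holds with overwhelming probability, as discussed above.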
\[theorem::beta.boot\] Let $$v_n = v - K_{1,n}, \quad \overline{v}_n = \overline{v} + K_{1,n}, \quad u_n = u - K_{2,n} \quad \text{and} \quad U_n = U + K_{2,n},$$ where $$K_{1,n} = C A^2 \sqrt{ b \overline{v}\frac{\log b + \log n}{n} } \quad \text{and} \quad K_{2,n} = C A \sqrt{ k U \frac{\log k + \log n}{n} },$$ with $C = C(\eta)>0$ the constant in . Assume that $n$ is large enough so that $v_n = v - K_{1,n}$ and $u_n = u -K_{2,n}$ are both positive. Then, for a constant $C = C(A)>0$, $$ \inf_{w_n \in \mathcal{W}_n} \inf_{P\in {\cal P}^{\mathrm{OLS}}_n}\mathbb{P}(\beta_{{\widehat{S}}} \in C^*_{{\widehat{S}}}) \geq 1-\alpha - C\left(\Delta^*_{n,1} + \Delta^*_{n,2} + \Delta_{n,3} \right),$$ where $C^*_{{\widehat{S}}}$ is either one of the bootstrap confidence sets in , $$\Delta^*_{n,1} = \frac{1}{\sqrt{v_n}}\left( \frac{ k^2 \overline{v}_n^2 (\log kn)^7}{n}\right)^{1/6} , \quad \Delta^*_{n,2} = \frac{ U_n }{ \sqrt{v_n}} \sqrt{ \frac{k^4 \overline{v}_n \log^2n \log k}{n\,u_n^6}}$$ and $\Delta_{n,3}$ is as in . [**Remark.**]{} The term $\Delta_{n,3}$ remains unchanged from the Normal approximation case since it arises from the Gaussian comparison step, which does not depend on the bootstrap distribution. [**Remark.**]{} It is important that we use the pairs bootstrap — where each pair $Z_i=(X_i,Y_i)$, $i \in \mathcal{I}_{2,n}$, is treated as one observation — rather than a residual based bootstrap. In fact, the validity of the residual bootstrap requires the underlying regression function to be linear, which we do not assume. See [@buja2015models] for more discussion on this point. In both cases, the Berry-Esseen theorem for simple convex sets (polyhedra with a limited number of faces) with increasing dimension due to [@cherno1; @cherno2] justifies the method. In the case of $\beta_{{\widehat{S}}}$ we also need a Taylor approximation followed by an application of the Gaussian anti-concentration result from the same reference. 
The coverage rates from are of course no better than the ones obtained in , and are consistent with the results of [@el2015can], who found that, even when the linear model is correct, the bootstrap does poorly when $k$ increases. The coverage accuracy can also be improved by changing the bootstrap procedure; see Section \[section::improving\]. [**Remark.**]{} Our results concern the bootstrap distribution and assume the ability to determine the quantities $\hat{t}^*_\alpha$ and $(\tilde{t}^*_j, j \in {\widehat{S}})$ in Equation . Of course, these can be approximated to an arbitrary level of precision by drawing a large enough number $B$ of bootstrap samples and then computing the appropriate empirical quantiles from those samples. This will result in an additional approximation error, which can be easily quantified using the DKW inequality (and, for the set $\tilde{C}^*_{{\widehat{S}}}$, also the union bound) and which is, for large $B$, negligible compared to the size of the error bounds obtained above. For simplicity, we do not provide these details. Similar considerations apply to all subsequent bootstrap results. ### The Sparse Case {#the-sparse-case .unnumbered} Now we briefly discuss the case of sparse fitting, where $k = O(1)$ so that the size of the selected model is not allowed to increase with $n$. In this case, things simplify considerably. The standard central limit theorem shows that $$\sqrt{n}(\hat\beta - \beta)\rightsquigarrow N(0,\Gamma)$$ where $\Gamma = \Sigma^{-1} \mathbb{E}[(Y-\beta^\top X)^2 X X^\top] \Sigma^{-1}$. Furthermore, $\Gamma$ can be consistently estimated by the sandwich estimator $\hat\Gamma = \hat\Sigma^{-1} A \hat\Sigma^{-1}$ where $A = n^{-1}\mathbb{X}^\top R \mathbb{X}$, $\mathbb{X}_{ij} = X_i(j)$, and $R$ is the $n\times n$ diagonal matrix with $R_{ii} = (Y_i - X_i^\top \hat\beta)^2$. By Slutsky’s theorem, valid asymptotic confidence sets can be based on the Normal distribution with $\hat\Gamma$ in place of $\Gamma$ ([@buja2015models]). 
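The sandwich construction above is mechanical to compute. A minimal sketch in Python with `numpy` (assumed available), on simulated data with heavy-tailed errors; note that $R$ need never be formed explicitly, since $\mathbb{X}^\top R \mathbb{X}$ is a weighted Gram matrix with weights equal to the squared residuals:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 400, 3

X = rng.normal(size=(n, k))
Y = X @ np.array([2.0, 0.0, -1.0]) + rng.standard_t(df=5, size=n)

Sigma_hat = X.T @ X / n
beta_hat = np.linalg.solve(Sigma_hat, X.T @ Y / n)

# Sandwich estimator: Gamma-hat = Sigma-hat^{-1} (n^{-1} X^T R X) Sigma-hat^{-1},
# with R the diagonal matrix of squared residuals (applied as row weights).
resid2 = (Y - X @ beta_hat) ** 2
A = (X * resid2[:, None]).T @ X / n
Sigma_inv = np.linalg.inv(Sigma_hat)
Gamma_hat = Sigma_inv @ A @ Sigma_inv

# Per-coordinate 95% Wald intervals based on sqrt(n)(beta-hat - beta) ~ N(0, Gamma).
se = np.sqrt(np.diag(Gamma_hat) / n)
ci = np.stack([beta_hat - 1.96 * se, beta_hat + 1.96 * se], axis=1)
```
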
However, if $k$ is non-trivial relative to $n$, then fixed-$k$ asymptotics may be misleading. In this case, the results of the previous section may be more appropriate. In particular, replacing $\Gamma$ with an estimate then has a non-trivial effect on the coverage accuracy. Furthermore, the accuracy depends on $1/u$ where $u = \lambda_{\rm min}(\Sigma)$. But when we apply the results after sample splitting (as is our goal), we need to define $u$ as $u = \min_{|S|\leq k} \lambda_{\rm min}(\Sigma_S)$. As $d$ increases, $u$ can get smaller and smaller even with fixed $k$. Hence, the usual fixed-$k$ asymptotics may be misleading. [**Remark:**]{} We only ever report inferences for the selected parameters. The bootstrap provides uniform coverage over all parameters in ${\widehat{S}}$. There is no need for a Bonferroni correction, because the bootstrap is applied to $||\hat\beta_{{\widehat{S}}}^* - \hat\beta_{{\widehat{S}}}||_\infty$. However, we also show that univariate Normal approximations together with Bonferroni adjustments lead to valid hyper-rectangular regions; see Theorem \[thm::bonf\]. LOCO Parameters {#sec:loco.parameters} --------------- Now we turn to the LOCO parameter $\gamma_{{\widehat{S}}} \in \mathbb{R}^{{\widehat{S}}}$, where ${\widehat{S}}$ is the model selected on the first half of the data. Recall that the $j^{\mathrm{th}}$ coordinate of this parameter is $$\gamma_{{\widehat{S}}}(j) = \mathbb{E}_{X,Y}\Biggl[|Y-\hat\beta_{{\widehat{S}}(j)}^\top X_{{\widehat{S}}(j)}|- |Y-\hat\beta_{{\widehat{S}}}^\top X_{{\widehat{S}}}| \Big| \mathcal{D}_{1,n} \Biggr],$$ where $\hat\beta_{{\widehat{S}}} \in \mathbb{R}^{{\widehat{S}}}$ is any estimator of $\beta_{{\widehat{S}}}$, and $\hat\beta_{{\widehat{S}}(j)}$ is obtained by re-computing the same estimator on the set of covariates ${\widehat{S}}(j)$ resulting from re-running the same model selection procedure after removing covariate $X_j$. 
The model selections ${\widehat{S}}$ and ${\widehat{S}}(j)$ and the estimators $\hat{\beta}_{{\widehat{S}}}$ and $\hat{\beta}_{{\widehat{S}}(j)}$ are all computed using half of the sample, $\mathcal{D}_{1,n}$. In order to derive confidence sets for $\gamma_{{\widehat{S}}}$ we will assume that the data generating distribution belongs to the class ${\cal P}_n^{\mathrm{LOCO}}$ of all distributions on $\mathbb{R}^{d+1}$ supported on $[-A,A]^{d+1}$, for some fixed constant $A>0$. Clearly the class $\mathcal{P}_n^{\mathrm{LOCO}}$ is significantly larger than the class $\mathcal{P}_n^{\mathrm{OLS}}$ considered for the projection parameters. A natural unbiased estimator of $\gamma_{{\widehat{S}}}$ – conditionally on $\mathcal{D}_{1,n}$ – is $$\hat{\gamma}_{{\widehat{S}}} = \frac{1}{n} \sum_{i \in \mathcal{I}_{2,n}} \delta_i,$$ with $(\delta_i,i \in \mathcal{I}_{2,n})$ independent and identically distributed random vectors in $\mathbb{R}^{{\widehat{S}}}$ such that, for any $i \in \mathcal{I}_{2,n}$ and $j \in {\widehat{S}}$, $$\label{eq:delta.i} \delta_i(j) = \Big| Y_i-\hat\beta_{{\widehat{S}}(j)}^\top X_i({\widehat{S}}(j)) \Big|- \Big|Y_i-\hat\beta_{{\widehat{S}}}^\top X_{i}({\widehat{S}}) \Big|,$$ where, for any $S \subset \{1,\ldots,d\}$, $X_i(S)$ denotes the subvector of $X_i$ obtained by considering only the coordinates in $S$. To derive a CLT for $\hat\gamma_{{\widehat{S}}}$ we face two technical problems. First, we require some control on the minimal variance of the coordinates of the $\delta_i$’s. Since we allow for increasing $k$ and we impose minimal assumptions on the class of data generating distributions, it is possible that any one variable might have a tiny influence on the predictions. As a result, we cannot rule out the possibility that the variance of some coordinate of the $\delta_i$’s vanishes. In this case the rate of convergence in high-dimensional central limit theorems would be negatively impacted, in ways that are difficult to assess. 
To prevent this issue we simply redefine $\gamma_{{\widehat{S}}}$ by adding a small amount of noise with non-vanishing variance. Secondly, we also need an upper bound on the third moments of the coordinates of the $\delta_i$’s. In order to keep the presentation simple, we will truncate the estimator of the regression function by hard-thresholding so that it has bounded range $[-\tau,\tau]$ for a given $\tau>0$. Since both $Y$ and the coordinates of $X$ are uniformly bounded in absolute value by $A$, this assumption is reasonable. Thus, we re-define the vector of LOCO parameters $\gamma_{{\widehat{S}}}$ so that its $j^{\mathrm{th}}$ coordinate is $$\label{eq:new.gamma} \gamma_{{\widehat{S}}}(j) = \mathbb{E}_{X,Y, \xi(j)}\Biggl[ \left|Y- t_{\tau}\left( \hat\beta_{{\widehat{S}}(j)}^\top X_{{\widehat{S}}(j)} \right) \right|- \left| Y-t_{\tau}\left( \hat\beta_{{\widehat{S}}}^\top X_{{\widehat{S}}} \right) \right| + \epsilon \xi(j) \Biggr],$$ where $\epsilon > 0$ is a pre-specified small number, $\xi = (\xi(j), j \in {\widehat{S}})$ is a random vector comprised of independent $\mathrm{Uniform}(-1,1)$ variables, independent of the data, and $t_{\tau}$ is the hard-threshold function: for any $x \in \mathbb{R}$, $t_{\tau}(x)$ is $x$ if $|x| \leq \tau$ and $\mathrm{sign}(x) \tau$ otherwise. 
Accordingly, we re-define the estimator $\hat{\gamma}_{{\widehat{S}}}$ of this modified LOCO parameter as $$\label{eq:new.delta} \hat{\gamma}_{{\widehat{S}}} = \frac{1}{n} \sum_{i \in \mathcal{I}_{2,n}} \delta_i,$$ where the $\delta_i$’s are random vectors in $\mathbb{R}^{{\widehat{S}}}$ such that the $j^{\mathrm{th}}$ coordinate of $\delta_i$ is $$\Big| Y_i- t_{\tau}\left( \hat\beta_{{\widehat{S}}(j)}^\top X_i({\widehat{S}}(j)) \right) \Big|- \Big| Y_i - t_{\tau} \left( \hat\beta_{{\widehat{S}}}^\top X_{i}({\widehat{S}}) \right)\Big| + \epsilon \xi_i(j), \quad j \in {\widehat{S}}.$$ [**Remark.**]{} Introducing additional noise has the effect of making the inference conservative: the confidence intervals will be slightly wider. For small $\epsilon$ and any non-trivial value of $\gamma_{{\widehat{S}}}(j)$ this will presumably have a negligible effect. For our proofs, adding some additional noise and thresholding the regression function are advantageous because the first choice guarantees that the empirical covariance matrix of the $\delta_i$’s is non-singular, and the second implies that the coordinates of $\hat{\gamma}_{{\widehat{S}}}$ are bounded. It is possible to let $\epsilon\to 0$ and $\tau \rightarrow \infty$ as $n\to\infty$ at the expense of slower concentration and Berry-Esseen rates. For simplicity, we take $\epsilon$ and $\tau$ to be fixed but we will keep explicit track of these quantities in the constants. Since each coordinate of $\hat{\gamma}_{{\widehat{S}}}$ is an average of random variables that are bounded in absolute value by $2(A+\tau) + \epsilon$, and $\mathbb{E}\left[ \hat{\gamma}_{{\widehat{S}}} | \mathcal{D}_{1,n}\right] = \gamma_{{\widehat{S}}}$, a standard bound for the maxima of $k$ bounded (and, therefore, sub-Gaussian) random variables yields the following concentration result. As usual, the probability is with respect to the randomness in the full sample and in the splitting. 
$$\sup_{w_n \in \mathcal{W}_n} \sup_{P \in \mathcal{P}_n^{\mathrm{LOCO}}} \mathbb{P}\left( \| \hat{\gamma}_{{\widehat{S}}} - \gamma_{{\widehat{S}}}\|_\infty \leq \left( 2(A+\tau) + \epsilon \right) \sqrt{ 2 \frac{\log k + \log n}{n} } \right) \geq 1 - \frac{1}{n}.$$ The bound on $\| \hat{\gamma}_{{\widehat{S}}} - \gamma_{{\widehat{S}}}\|_\infty $ holds with probability at least $1 - \frac{1}{n}$ conditionally on $\mathcal{D}_{1,n}$ and the outcome of data splitting, and uniformly over the choice of the procedure $w_n$ and of the distribution $P$. Thus, the uniform validity of the bound holds also unconditionally. We now construct confidence sets for $\gamma_{{\widehat{S}}}$. Just like we did with the projection parameters, we consider two types of methods: one based on Normal approximations and the other on the bootstrap. ### Normal Approximation {#normal-approximation .unnumbered} Obtaining high-dimensional Berry-Esseen bounds for $\hat\gamma_{{\widehat{S}}}$ is nearly straightforward since, conditionally on $\mathcal{D}_{1,n}$ and the splitting, $\hat\gamma_{{\widehat{S}}}$ is just a vector of averages of bounded and independent variables with non-vanishing variances. Thus, there is no need for a Taylor approximation and we can apply directly the results in [@cherno2]. In addition, we find that the accuracy of the confidence sets for this LOCO parameter is higher than for the projection parameters. Similarly to what we did in , we derive two approximate confidence sets: one is an $L_\infty$ ball and the other is a hyper-rectangle whose $j^{\mathrm{th}}$ side length is proportional to the standard deviation of the $j^{\mathrm{th}}$ coordinate of $\hat{\gamma}_{{\widehat{S}}}$. Both sets are centered at $\hat{\gamma}_{{\widehat{S}}}$. 
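The modified LOCO ingredients above – hard-thresholded predictions and the small uniform noise – are easy to compute. A minimal sketch for a single coordinate $j$ in Python with `numpy` (assumed available); the data, the first-split coefficients, and the selected sets are simulated placeholders:

```python
import numpy as np

rng = np.random.default_rng(4)
n, A_bound, tau, eps = 500, 1.0, 2.0, 0.05

def t_tau(x, tau):
    """Hard-threshold function: clip values to the range [-tau, tau]."""
    return np.clip(x, -tau, tau)

# Hypothetical second-split data and coefficients fitted on the first split,
# with and without covariate j (here S-hat = {0, 1} and S-hat(j) = {0}).
X = rng.uniform(-A_bound, A_bound, size=(n, 2))
Y = rng.uniform(-A_bound, A_bound, size=n)
beta_full = np.array([0.8, -0.4])   # fit on D_{1,n} using both covariates
beta_drop = np.array([0.7])         # refit after removing covariate j = 1

# delta_i(j): change in absolute prediction error from dropping covariate j,
# with clipped predictions plus a small independent Uniform(-1, 1) noise.
xi = rng.uniform(-1.0, 1.0, size=n)
delta = (np.abs(Y - t_tau(X[:, :1] @ beta_drop, tau))
         - np.abs(Y - t_tau(X @ beta_full, tau))
         + eps * xi)

gamma_hat_j = delta.mean()  # estimate of the LOCO parameter gamma(j)

# Each delta_i(j) is bounded by 2(A + tau) + eps, as used in the concentration bound.
assert np.all(np.abs(delta) <= 2 * (A_bound + tau) + eps)
```
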
Below, we let $\alpha \in (0,1)$ be fixed and let $$\label{eq:Sigma.loco} \hat \Sigma_{{\widehat{S}}} = \frac{1}{n} \sum_{i \in \mathcal{I}_{2,n}} \left( \delta_i - \hat{\gamma}_{{\widehat{S}}} \right)\left( \delta_i - \hat{\gamma}_{{\widehat{S}}} \right)^\top$$ be the empirical covariance matrix of the $\delta_i$’s. The first confidence set is the $L_\infty$ ball $$\label{eq::gamma.conf-rectangle} \hat{D}_{{\widehat{S}}} = \Big\{ \gamma \in \mathbb{R}^k \colon \|\gamma - \hat{\gamma}_{{\widehat{S}}} \|_\infty \leq \frac{\hat{t}_\alpha}{\sqrt{n}} \Big\},$$ where $\hat{t}_\alpha$ is such that $$\mathbb{P}\left( \| Z_n \|_\infty \leq \hat{t}_\alpha \right) = 1 - \alpha,$$ with $Z_n \sim N(0,\hat{\Sigma}_{{\widehat{S}}})$. The second confidence set we construct is instead the hyper-rectangle $$\label{eq:gamma.hyper:CI} \tilde{D}_{{\widehat{S}}} = \bigotimes_{j \in {\widehat{S}}} \tilde{D}(j),$$ where, for any $j \in {\widehat{S}}$, $\tilde{D}(j) = \left[ \hat{\gamma}_{{\widehat{S}}}(j) -\hat{t}_{j,\alpha}, \hat{\gamma}_{{\widehat{S}}}(j) +\hat{t}_{j,\alpha} \right]$, with $ \hat{t}_{j,\alpha} = z_{\alpha/(2k)} \sqrt{ \frac{\hat\Sigma_{{\widehat{S}}}(j,j)}{n} }.$ The above confidence sets have the same form as the confidence sets for the projection parameters . The key difference is that for the projection parameters we use the estimated covariance of the linear approximation to $\hat{\beta}_{{\widehat{S}}}$, while for the LOCO parameter $\hat{\gamma}_{{\widehat{S}}}$ we rely on the empirical covariance , which is a much simpler estimator to compute. In the next result we derive coverage rates for both confidence sets. 
\[thm::CLT2\] There exists a universal constant $C > 0$ such that $$\label{eq:loco.coverage1} \inf_{w_n \in \mathcal{W}_n} \inf_{P \in \mathcal{P}_n^{\mathrm{LOCO}}} \mathbb{P} \left( \gamma_{{\widehat{S}}} \in \widehat{D}_{{\widehat{S}}} \right) \geq 1 - \alpha - C \left( \mathrm{E}_{1,n} + \mathrm{E}_{2,n} \right)- \frac{1}{n},$$ and $$\label{eq:loco.coverage2} \inf_{w_n \in \mathcal{W}_n} \inf_{P \in \mathcal{P}_n^{\mathrm{LOCO}}} \mathbb{P} \left( \gamma_{{\widehat{S}}} \in \tilde{D}_{{\widehat{S}}} \right) \geq 1 - \alpha - C \left( \mathrm{E}_{1,n} + \tilde{\mathrm{E}}_{2,n} \right) - \frac{1}{n},$$ where $$\begin{aligned} \label{eq:E1n} \mathrm{E}_{1,n} &= \frac{2(A+\tau) + \epsilon }{\epsilon} \left(\frac{ (\log n k)^7}{n}\right)^{1/6},\\ \label{eq:E2n} \mathrm{E}_{2,n} & = \frac{N_n^{1/3} (2 \log 2k)^{2/3}}{\epsilon^{2/3}},\\ \label{eq:tildeE2.n} \tilde{\mathrm{E}}_{2,n} &= \min \left\{ \mathrm{E}_{2,n},\frac{ N_n z_{\alpha/(2k)}}{\epsilon^2} \left(\sqrt{ 2 + \log(2k ) } + 2 \right) \right\}\end{aligned}$$ and $$\label{eq:Nn} N_n = \left( 2(A+\tau) + \epsilon \right)^2 \sqrt{ \frac{4\log k + 2 \log n}{n} }.$$ [**Remark.**]{} The term $\mathrm{E}_{1,n}$ quantifies the error in applying the high-dimensional normal approximation to $\hat{\gamma}_{{\widehat{S}}} - \gamma_{{\widehat{S}}}$, given in [@cherno2]. The second error term $\mathrm{E}_{2,n}$ is due to the fact that $\Sigma_{{\widehat{S}}}$ is unknown and has to be estimated using the empirical covariance matrix $\widehat{\Sigma}_{{\widehat{S}}}$. To establish $\mathrm{E}_{2,n}$ we use the Gaussian comparison Theorem \[thm:comparisons\]. We point out that the dependence on $\epsilon$ displayed in the term $\mathrm{E}_{2,n}$ above does not follow directly from Theorem 2.1 in [@cherno2]. It can be obtained by tracking constants and using Nazarov’s inequality in the proof of that result. See the proof for details. 
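The hyper-rectangle $\tilde{D}_{{\widehat{S}}}$ is particularly simple to compute: it only requires the diagonal of the empirical covariance of the $\delta_i$'s and a Bonferroni-adjusted Normal quantile. A sketch in Python (`numpy` and the stdlib `statistics.NormalDist` assumed; the $\delta_i$'s are simulated placeholders):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(5)
n, k, alpha = 400, 3, 0.1

# Hypothetical delta_i vectors (one coordinate per selected covariate).
delta = rng.uniform(-1, 1, size=(n, k))
gamma_hat = delta.mean(axis=0)

# Empirical covariance of the delta_i's: the simple plug-in used for LOCO.
centered = delta - gamma_hat
Sigma_hat = centered.T @ centered / n

# Bonferroni hyper-rectangle: per-coordinate Normal intervals at level alpha / k.
z = NormalDist().inv_cdf(1 - alpha / (2 * k))
half = z * np.sqrt(np.diag(Sigma_hat) / n)
rect = np.stack([gamma_hat - half, gamma_hat + half], axis=1)
```
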
The accuracy of the confidence set can be easily established to be of order $O \left( \sqrt{ \frac{\log k}{n}} \right)$, a fact made precise in the following result. \[cor:accuracy.LOCO\] With probability at least $ 1- \frac{1}{n}$, the maximal length of the sides of the hyper-rectangle $\tilde{D}_{{\widehat{S}}}$ is bounded by $$C \left(2(A + \tau) + \epsilon \right) \sqrt{ \frac{\log k}{n} \left( 1 + \frac{(4\log k + 2 \log n)^{1/2}}{n^{1/2}}\right)},$$ for a universal constant $C>0$, uniformly over all $P \in \mathcal{P}_n^{\mathrm{LOCO}}$. ### The Bootstrap {#the-bootstrap .unnumbered} We now demonstrate the coverage of the paired bootstrap version of the confidence sets for $\gamma_{{\widehat{S}}}$ given above in . The bootstrap distribution is the empirical measure associated to the $n$ triplets $\left\{ (X_i,Y_i,\xi_i), i \in \mathcal{I}_{2,n} \right\}$, conditionally on $\mathcal{D}_{1,n}$. Let $\hat{\gamma}^*_{{\widehat{S}}}$ denote the estimator of the LOCO parameters of the form computed from an i.i.d. sample of size $n$ drawn from the bootstrap distribution. Notice that $\mathbb{E}\left[ \hat{\gamma}^*_{{\widehat{S}}} \Big| (X_i,Y_i,\xi_i), i \in \mathcal{I}_{2,n} \right] = \hat{\gamma}_{{\widehat{S}}}.$ 
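Since $\hat{\gamma}_{{\widehat{S}}}$ is a plain average of the $\delta_i$'s, bootstrapping the triplets amounts to resampling the $\delta_i$ vectors. The per-coordinate quantiles used for the rectangular bootstrap set can then be approximated by Monte Carlo; a sketch in Python with `numpy` (assumed available; the $\delta_i$'s are simulated placeholders):

```python
import numpy as np

rng = np.random.default_rng(6)
n, k, alpha, B = 300, 3, 0.1, 500

delta = rng.uniform(-1, 1, size=(n, k))  # hypothetical delta_i vectors
gamma_hat = delta.mean(axis=0)

# Resampling the delta_i's reproduces the paired bootstrap of the triplets
# (X_i, Y_i, xi_i) for this estimator, since gamma-hat is their average.
stats = np.empty((B, k))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    stats[b] = np.sqrt(n) * np.abs(delta[idx].mean(axis=0) - gamma_hat)

# Per-coordinate quantiles at level alpha / k (union bound across coordinates).
t_tilde = np.quantile(stats, 1 - alpha / k, axis=0)
rect = np.stack([gamma_hat - t_tilde / np.sqrt(n),
                 gamma_hat + t_tilde / np.sqrt(n)], axis=1)
```
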
For a given $\alpha \in (0,1)$, let $\hat{t}^*_\alpha$ be the smallest positive number such that $$\mathbb{P}\left( \sqrt{n} \| \hat{\gamma}^*_{{\widehat{S}}} - \hat{\gamma}_{{\widehat{S}}}\| \leq \hat{t}^*_\alpha \Big| (X_i,Y_i,\xi_i), i \in \mathcal{I}_{2,n} \right) \geq 1 - \alpha.$$ Next, let $(\tilde{t}^*_j, j \in {\widehat{S}})$ be such that $$\mathbb{P}\left( \sqrt{n} | \hat{\gamma}^*_{{\widehat{S}}}(j) - \hat{\gamma}_{{\widehat{S}}} (j) | \leq \tilde{t}^*_j, \forall j \Big| (X_i,Y_i,\xi_i), i \in \mathcal{I}_{2,n} \right) \geq 1 - \alpha.$$ In particular, using the union bound, each $\tilde{t}^*_j$ can be chosen to be the largest positive number such that $$\mathbb{P}\left( \sqrt{n} | \hat{\gamma}^*_{{\widehat{S}}}(j) - \hat{\gamma}_{{\widehat{S}}} (j) | > \tilde{t}^*_j \Big| (X_i,Y_i,\xi_i), i \in \mathcal{I}_{2,n} \right) \leq \frac{\alpha}{k}.$$ Consider the following two bootstrap confidence sets: $$\label{eq:ci.boot.loco} \hat{D}^*_{{\widehat{S}}} = \left\{ \gamma \in \mathbb{R}^{{\widehat{S}}} \colon \| \gamma - \hat{\gamma}_{{\widehat{S}}} \|_\infty \leq \frac{ \hat{t}^*_{\alpha}}{\sqrt{n}} \right\} \quad \text{and} \quad \tilde{D}^*_{{\widehat{S}}} = \left\{ \gamma \in \mathbb{R}^{{\widehat{S}}} \colon | \gamma(j) - \hat{\gamma}_{{\widehat{S}}}(j) | \leq \frac{ \tilde{t}^*_{j}}{\sqrt{n}}, \forall j \in {\widehat{S}}\right\}.$$ \[thm:boot.loco\] Using the same notation as in , assume that $n$ is large enough so that $\epsilon_n = \sqrt{ \epsilon^2 - N_n }$ is positive. 
Then there exists a universal constant $C>0$ such that the coverage of both confidence sets in is at least $$1 - \alpha - C\left( \mathrm{E}^*_{1,n} + \mathrm{E}_{2,n} + \frac{1}{n} \right),$$ where $$\mathrm{E}^*_{1,n} = \frac{2(A+\tau) + \epsilon_n }{\epsilon_n} \left(\frac{ (\log n k)^7}{n}\right)^{1/6}.$$ Median LOCO parameters ---------------------- For the median LOCO parameters $(\phi_{{\widehat{S}}}(j), j \in {\widehat{S}})$ given in , finite sample inference is relatively straightforward using standard confidence intervals for the median based on order statistics. In detail, for each $j \in {\widehat{S}}$ and $i \in \mathcal{I}_{2,n}$, recall the definition of $\delta_i(j)$ in and let $\delta_{(1)}(j) \leq \ldots \leq \delta_{(n)}(j)$ be the corresponding order statistics. We will not impose any restrictions on the data generating distribution. In particular, for each $j \in {\widehat{S}}$, the median of $\delta_i(j)$ need not be unique. Consider the interval $$E_j = [ \delta_{(l)}(j), \delta_{(u)}(j)]$$ where $$\label{eq:lu} l = \Big\lceil \frac{n}{2} - \sqrt{\frac{n}{2} \log\left( \frac{2k}{\alpha}\right)} \Big\rceil \quad \text{and} \quad u = \Big\lfloor \frac{n}{2} + \sqrt{\frac{n}{2} \log\left( \frac{2k}{\alpha}\right)} \Big\rfloor,$$ and construct the hyper-cube $$\hat{E}_{{\widehat{S}}} = \bigotimes_{j \in {\widehat{S}}} E_j.$$ Then, a standard result about confidence sets for medians along with the union bound implies that $\hat{E}_{{\widehat{S}}}$ is a $1-\alpha$ confidence set for the median LOCO parameters, uniformly over $\mathcal{P}_n$. For every $n$, $$\inf_{w_n \in \mathcal{W}_n} \inf_{P\in {\cal P}_{n}}\mathbb{P}(\phi_{{\widehat{S}}} \in \hat{E}_{{\widehat{S}}}) \geq 1-\alpha.$$ [**Remark.**]{} Of course, if the median of $\delta_i(j)$ is not unique, the length of the corresponding confidence interval does not shrink as $n$ increases. 
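The order-statistic construction above requires only sorting and the indices $l$ and $u$ from . A minimal sketch in Python with `numpy` (assumed available; the $\delta_i(j)$'s are simulated placeholders, and the 1-based indices in the text are shifted for 0-based arrays):

```python
import numpy as np

rng = np.random.default_rng(7)
n, k, alpha = 400, 3, 0.1

delta = rng.normal(size=(n, k))  # hypothetical delta_i(j) values

# Order-statistic indices l and u for a simultaneous 1 - alpha guarantee
# across the k coordinates (union bound built into the 2k / alpha factor).
m = np.sqrt((n / 2) * np.log(2 * k / alpha))
l = int(np.ceil(n / 2 - m))
u = int(np.floor(n / 2 + m))

# Interval [delta_(l), delta_(u)] for each coordinate j.
ordered = np.sort(delta, axis=0)
intervals = np.stack([ordered[l - 1], ordered[u - 1]], axis=1)
```
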
But if the median is unique for each $j \in {\widehat{S}}$, and under additional smoothness conditions, the maximal side length of the confidence rectangle $\hat{E}_{{\widehat{S}}}$ is of order $O \left( \sqrt{\frac{\log k + \log n}{n}} \right)$, with high probability. \[thm::median\] Suppose that there exist positive numbers $M$ and $\eta$ such that, for each $j \in {\widehat{S}}$, the cumulative distribution function of each $\delta_i(j)$ is differentiable with derivative no smaller than $M$ at all points at a distance no larger than $\eta$ from its (unique) median. Then, for all $n$ for which $$\frac{1}{n} + \sqrt{\frac{1}{2n}\log\left(\frac{2k}{\alpha}\right)} + \sqrt{ \frac{ \log 2kn}{2n} } \leq \eta M,$$ the sides of $\hat{E}_{{\widehat{S}}}$ have length uniformly bounded by $$\frac{2}{M} \left( \frac{1}{n} + \sqrt{\frac{1}{2n}\log\left(\frac{2k}{\alpha}\right)} + \sqrt{ \frac{ \log 2kn}{2n} } \right),$$ with probability at least $1 - \frac{1}{n}$. Future Prediction Error ----------------------- To construct a confidence interval for the future prediction error parameter $\rho_{{\widehat{S}}}$ consider the set $$\hat{F}_{{\widehat{S}}} = \Bigl[\hat\rho_{{\widehat{S}}} - z_{\alpha/2} s/\sqrt{n},\ \hat\rho_{{\widehat{S}}} + z_{\alpha/2} s/\sqrt{n}\Bigr]$$ where $z_{\alpha/2}$ is the upper $\alpha/2$ quantile of a standard normal distribution, $$\hat\rho_{{\widehat{S}}} = \frac{1}{n}\sum_{i\in {\cal I}_{2,n}} A_i, \quad s^2 = \frac{1}{n}\sum_{i\in {\cal I}_{2,n}}(A_i - \hat\rho_{{\widehat{S}}})^2, \quad \text{and} \quad A_i = |Y_i - \hat\beta_{{\widehat{S}}}^\top X_{i}({\widehat{S}})|, \forall i \in \mathcal{I}_{2,n}.$$ For any $P$, let $\sigma^2_n = \sigma^2_n(P) = \mathrm{Var}_P(A_1)$ and $\mu_{3,n}= \mu_{3,n}(P) = \mathbb{E}_P \left[ |A_1 - \mathbb{E}_P[A_1]|^3 \right]$.
Then, by the one-dimensional Berry-Esseen theorem: $$\inf_{w_n \in \mathcal{W}_n} \mathbb{P}(\rho_{{\widehat{S}}} \in \hat{F}_{{\widehat{S}}}) \geq 1-\alpha - O \left( \frac{ \mu_{3,n}}{\sigma_n \sqrt{n}} \right).$$ In order to obtain uniform coverage accuracy guarantees, we may rely on a modification of the target parameter that we implemented for the LOCO parameters in and redefine the prediction parameter to be $$\rho_{{\widehat{S}}} = \mathbb{E} \left[ |Y - t_\tau( \hat\beta_{{\widehat{S}}}^\top X({\widehat{S}})) | + \epsilon \xi \right] ,$$ where $t_{\tau}$ is the hard-threshold function (for any $x \in \mathbb{R}$, $t_{\tau}(x)$ is $x$ if $|x| \leq \tau$ and $\mathrm{sign}(x) \tau$ otherwise) and $\xi$ is independent noise uniformly distributed on $[-1,1]$. Above, the positive parameters $\tau$ and $\epsilon$ are chosen to ensure that the variance of the $A_i$’s does not vanish and that their third moment does not explode as $n$ grows. With this modification, we can ensure that $\sigma^2_n \geq \epsilon^2$ and $\mu_{3,n} \leq \left( A + \tau + \epsilon \right)^3$ uniformly in $n$ and also $s \leq 4 (A + \tau + \epsilon)^2$, almost surely. Of course, we may let $\tau$ and $\epsilon$ change with $n$ in a controlled manner. But for fixed choices of $\tau$ and $\epsilon$ we obtain the following parametric rate for $\rho_{{\widehat{S}}}$, which holds for all possible data generating distributions: $$\inf_{w_n \in \mathcal{W}_n} \mathbb{P}(\rho_{{\widehat{S}}} \in \hat{F}_{{\widehat{S}}}) \geq 1-\alpha - C \left( \frac{1}{\sqrt{n}} \right),$$ for a constant $C$ dependent only on $A$, $\tau$ and $\epsilon$. Furthermore, the length of the confidence interval is parametric, of order $\frac{1}{\sqrt{n}}$.
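The unmodified interval $\hat{F}_{{\widehat{S}}}$ is straightforward to compute from the held-out half of the data. A hedged sketch (function and variable names are ours) is:

```python
import numpy as np
from statistics import NormalDist

def prediction_interval(X2, Y2, beta_hat, alpha=0.05):
    """Sketch of the normal interval F_S for the future prediction
    error rho_S, with A_i = |Y_i - beta_hat' X_i(S)| computed over
    the inference half of the data."""
    A = np.abs(Y2 - X2 @ beta_hat)
    n = A.size
    rho_hat = A.mean()
    s = A.std()                               # matches s^2 with 1/n scaling
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    half = z * s / np.sqrt(n)
    return rho_hat - half, rho_hat + half
```

Here `X2` holds only the selected columns $X_i({\widehat{S}})$, so the interval uses no information from the selection half.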
Prediction/Accuracy Tradeoff: Comparing Splitting to Uniform Inference {#section::splitornot} ====================================================================== There is a price to pay for sample splitting: the selected model may be less accurate because only part of the data are used to select the model. Thus, splitting creates gains in accuracy and robustness for inference but with a possible loss of prediction accuracy. We call this the [*inference-prediction tradeoff*]{}. In this section we study this phenomenon by comparing splitting with uniform inference (defined below). We use uniform inference for the comparison since it is the only other method we know of that achieves (\[eq::honest\]). We study this tradeoff in a simple model where it is feasible to compare splitting with uniform inference. We will focus on the [*many means problem*]{} which is similar to regression with a balanced, orthogonal design. The data are $Y_1,\ldots, Y_{2n} \sim P$ where $Y_i\in\mathbb{R}^D$. Let $\beta = (\beta(1),\ldots, \beta(D))$ where $\beta(j) = \mathbb{E}[Y_i(j)]$. In this section, the model ${\cal P}_{n}$ is the set of probability distributions on $\mathbb{R}^D$ such that $\max_j \mathbb{E}|Y(j)|^3 < C$ and $\min_j {\rm Var}(Y(j)) > c$ for some positive $C$ and $c$, which do not change with $n$ or $D$ (these assumptions could of course be easily relaxed). Below, we will only track the dependence on $D$ and $n$ and will use the notation $\preceq$ to denote inequality up to constants. To mimic forward stepwise regression — where we would choose a covariate to maximize correlation with the outcome — we consider choosing $j$ to maximize the mean. Specifically, we take $$\label{eq::J} {\widehat{S}}\equiv w(Y_1,\ldots, Y_{2n}) =\operatorname*{argmax}_j \overline{Y}(j)$$ where $\overline{Y}(j) = (1/2n)\sum_{i=1}^{2n} Y_i(j)$. Our goal is to infer the random parameter $\beta_{{\widehat{S}}}$. The number of models is $D$.
In forward stepwise regression with $k$ steps and $d$ covariates, the number of models is $D = d^k$. So the reader is invited to think of $D$ as being very large. We will compare splitting versus non-splitting with respect to three goals: estimation, inference and prediction accuracy. [**Splitting:**]{} In this case we let ${\cal D}_{1,n} = \{i: \ 1 \leq i \leq n\}$ and ${\cal D}_{2,n} = \{i: \ n+1 \leq i \leq 2n\}$. Then $$\label{eq::J1} {\widehat{S}}\equiv w(Y_1,\ldots, Y_n) =\operatorname*{argmax}_j \overline{Y}(j)$$ where $\overline{Y}(j) = (1/n)\sum_{i=1}^n Y_i(j)$. The point estimate and confidence interval for the random parameter $\beta_{{\widehat{S}}}$ are $$\hat\beta_{{\widehat{S}}} = \frac{1}{n}\sum_{i=n+1}^{2n} Y_i({\widehat{S}})$$ and $$\hat{C}_{{\widehat{S}}}= [\hat\beta_{{\widehat{S}}} - s z_{\alpha/2}/\sqrt{n},\ \hat\beta_{{\widehat{S}}} + s z_{\alpha/2}/\sqrt{n}]$$ where $s^2 = n^{-1}\sum_{i=n+1}^{2n} (Y_i({\widehat{S}}) - \hat\beta_{{\widehat{S}}})^2$. [**Uniform Inference (Non-Splitting).**]{} By “non-splitting” we mean that the selection rule and estimator are invariant under permutations of the data. In particular, we consider uniform inference, which is defined as follows. Let $\hat\beta(s) = (2n)^{-1}\sum_i Y_i(s)$ be the average over all the observations. Let $\hat{S} = \operatorname*{argmax}_s \hat\beta(s)$. Our point estimate is $\hat{\beta}_{{\widehat{S}}} \equiv \hat\beta(\hat{S})$. Now define $$F_{n}(t) = \mathbb{P}(\sup_s \sqrt{2n}|\hat\beta(s)-\beta(s)| \leq t).$$ We can consistently estimate $F_{n}$ by the bootstrap: $$\hat F_{n}(t) = \mathbb{P}( \sup_s \sqrt{2n}\left|\hat\beta^*(s)-\hat \beta(s)\right| \leq t\,| Y_1,\ldots, Y_{2n}).$$ A valid confidence set for $\beta$ is $R= \{ \beta:\ ||\beta - \hat\beta||_\infty \leq t/\sqrt{2n}\}$ where $t=\hat F_{n}^{-1}(1-\alpha)$. Because this is uniform over all possible models (that is, over all $s$), it also defines a valid confidence interval for a randomly selected coordinate.
In particular, we can define $$\hat{C}_{{\widehat{S}}}= [\hat\beta_{\hat{S}} - t/\sqrt{2n},\ \hat\beta_{\hat{S}} + t/\sqrt{2n}]$$ Both confidence intervals satisfy (\[eq::honest\]). We now compare $\hat\beta_{{\widehat{S}}}$ and $\hat{C}_{{\widehat{S}}}$ for both the splitting and non-splitting procedures. The reader should keep in mind that, in general, $\hat{S}$ might be different between the two procedures, and hence $\beta_{{\widehat{S}}}$ may be different. The two procedures might be estimating different parameters. We discuss that issue shortly. [**Estimation.**]{} First we consider estimation accuracy. \[lemma::est-accuracy\] For the splitting estimator: $$\sup_{P\in {\cal P}_{n}}\mathbb{E}|\hat\beta_{{\widehat{S}}}-\beta_{{\widehat{S}}}| \preceq n^{-1/2}.$$ For non-splitting we have $$\label{eq::lower1} \inf_{\hat\beta}\sup_{P\in {\cal P}_{n}} \mathbb{E}|\hat\beta_{{\widehat{S}}}-\beta_{{\widehat{S}}}| \succeq \sqrt{\frac{\log D}{n}}.$$ The above is stated for the particular selection rule ${\widehat{S}}= \operatorname*{argmax}_s \hat{\beta}_s$, but the splitting-based result holds for general selection rules $w\in\mathcal{W}_n$, so that for splitting $$\sup_{w\in {\cal W}_n}\sup_{P\in {\cal P}_{n}}\mathbb{E}|\hat\beta_{{\widehat{S}}}-\beta_{{\widehat{S}}}| \preceq n^{-1/2}$$ and for non-splitting $$\label{eq::lower2} \inf_{\hat\beta}\sup_{w\in {\cal W}_{2n}}\sup_{P\in {\cal P}_{n}} \mathbb{E}|\hat\beta_{{\widehat{S}}}-\beta_{{\widehat{S}}}| \succeq \sqrt{\frac{\log D}{n}}.$$ Thus, the splitting estimator converges at a $n^{-1/2}$ rate. Non-splitting estimators have a slow rate, even with the added assumption of Normality. (Of course, the splitting estimator and non-splitting estimator may in fact be estimating different randomly chosen parameters. We address this issue when we discuss prediction accuracy.) [**Inference.**]{} Now we turn to inference. 
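To make the comparison concrete, both procedures can be sketched in a few lines for the many-means model. The code below is an illustration under our own naming, not the authors' implementation; the bootstrap size and seeds are arbitrary.

```python
import numpy as np
from statistics import NormalDist

def split_interval(Y, alpha=0.05):
    """Splitting: select S_hat on the first n rows, then build a
    normal interval for beta(S_hat) from the held-out n rows."""
    two_n, D = Y.shape
    n = two_n // 2
    S = int(np.argmax(Y[:n].mean(axis=0)))         # selection on D_1
    held = Y[n:, S]
    b = held.mean()
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    half = z * held.std() / np.sqrt(n)
    return S, b - half, b + half

def uniform_interval(Y, alpha=0.05, B=1000, seed=0):
    """Non-splitting: bootstrap the sup over all D coordinates of
    sqrt(2n)|beta*(s) - beta_hat(s)|, then read off the interval for
    the selected coordinate."""
    rng = np.random.default_rng(seed)
    two_n, D = Y.shape
    bhat = Y.mean(axis=0)
    S = int(np.argmax(bhat))
    sup = np.empty(B)
    for i in range(B):
        idx = rng.integers(0, two_n, size=two_n)
        sup[i] = np.sqrt(two_n) * np.max(np.abs(Y[idx].mean(axis=0) - bhat))
    t = np.quantile(sup, 1.0 - alpha)
    half = t / np.sqrt(two_n)
    return S, bhat[S] - half, bhat[S] + half
```

As the theory predicts, the uniform interval's half-width grows like $\sqrt{\log D / n}$ while the splitting interval's stays at the $n^{-1/2}$ rate, so for large $D$ the uniform interval is noticeably wider.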
For splitting, we use the usual Normal interval $\hat{C}_{{\widehat{S}}} = [\hat\beta_{{\widehat{S}}}-z_\alpha s/\sqrt{n},\ \hat\beta_{{\widehat{S}}}+z_\alpha s/\sqrt{n}]$ where $s^2$ is the sample variance from ${\cal D}_{2,n}$. We then have, as a direct application of the one-dimensional Berry-Esseen theorem, that: Let $\hat{C}_{{\widehat{S}}}$ be the splitting-based confidence set. Then, $$\label{eq:lem12a} \inf_{P\in {\cal P}_{n}}\mathbb{P}(\beta_{\hat{S}}\in \hat{C}_{{\widehat{S}}}) = 1-\alpha - \frac{c}{\sqrt{n}}$$ for some $c$. Also, $$\label{eq:lem12b} \sup_{P\in {\cal P}_{n}}\mathbb{E}[\nu(\hat{C}_{{\widehat{S}}})] \preceq n^{-1/2}$$ where $\nu$ is Lebesgue measure. More generally, $$\inf_{w\in\mathcal{W}_n}\inf_{P\in {\cal P}_{n}}\mathbb{P}(\beta_{\hat{S}}\in \hat{C}_{{\widehat{S}}}) = 1-\alpha - \frac{c}{\sqrt{n}}$$ for some $c$, and $$\sup_{w\in\mathcal{W}_n}\sup_{P\in {\cal P}_{n}}\mathbb{E}[\nu(\hat{C}_{{\widehat{S}}})] \preceq n^{-1/2}$$ Let $\hat{C}_{{\widehat{S}}}$ be the uniform confidence set. Then, $$\inf_{P\in {\cal P}_{n}} \mathbb{P}(\beta_{\hat{S}}\in \hat{C}_{{\widehat{S}}}) = 1-\alpha - \left(\frac{ c (\log D)^7 }{n}\right)^{1/6}$$ for some $c$. Also, $$\sup_{P\in {\cal P}_{2n}}\mathbb{E}[\nu(\hat{C}_{{\widehat{S}}})] \succeq \sqrt{\frac{\log D}{n}}.$$ The proof is a straightforward application of results in [@cherno1; @cherno2]. We thus see that the splitting method has better coverage and narrower intervals, although we remind the reader that the two methods may be estimating different parameters. [**Can We Estimate the Law of $\hat\beta(\hat{S})$?**]{} An alternative non-splitting method to uniform inference is to estimate the law $F_{2n}$ of $\sqrt{2n}(\hat\beta_{{\widehat{S}}} - \beta_{{\widehat{S}}})$. But we show that the law of $\sqrt{2n}(\hat\beta_{{\widehat{S}}}-\beta_{{\widehat{S}}})$ cannot be consistently estimated even if we assume that the data are Normally distributed and even if $D$ is fixed (not growing with $n$). 
This was shown for fixed population parameters in [@leeb2008can]. We adapt their proof to the random parameter case in the following lemma. \[lemma::contiguity\] Suppose that $Y_1,\ldots,Y_{2n} \sim N(\beta,I)$. Let $\psi_n(\beta) = \mathbb{P}(\sqrt{2n}(\hat\beta_{{\widehat{S}}} - \beta_{{\widehat{S}}})\leq t)$. There is no uniformly consistent estimator of $\psi_n(\beta)$. [**Prediction Accuracy.**]{} Now we discuss prediction accuracy which is where splitting pays a price. The idea is to identify a population quantity $\theta$ that model selection is implicitly targeting and compare splitting versus non-splitting in terms of how well they estimate $\theta$. The purpose of model selection in regression is to choose a model with low prediction error. So, in regression, we might take $\theta$ to be the prediction risk of the best linear model with $k$ terms. In our many-means model, a natural analog of this is the parameter $\theta = \max_j \beta(j)$. We have the following lower bound, which applies over all estimators both splitting and non-splitting. For the purposes of this lemma, we use Normality. Of course, the lower bound is even larger if we drop Normality. \[lemma::many-means-bound\] Let $Y_1,\ldots, Y_n \sim P$ where $P=N(\beta,I)$, $Y_i\in\mathbb{R}^D$, and $\beta \in \mathbb{R}^D$. Let $\theta = \max_j \beta(j)$. Then $$\inf_{\hat\theta}\sup_{\beta}E[ (\hat\theta - \theta)^2] \geq \frac{2\log D}{n}.$$ To understand the implications of this result, let us write $$\hat\beta(S) - \theta = \underbrace{\hat\beta(S) - \beta(S)}_{L_1} + \underbrace{\beta(S) - \theta}_{L_2}.$$ The first term, $L_1$, is the focus of most research on post-selection inference. We have seen it is small for splitting and large for non-splitting. The second term takes into account the variability due to model selection which is often ignored. 
Because $L_1$ is of order $n^{-1/2}$ for splitting, and because the sum is of order $\sqrt{\log D/n}$, it follows that splitting must, at least in some cases, pay a price by having $L_2$ large. In regression, this would correspond to the fact that, in some cases, splitting leads to models with lower predictive accuracy. Of course, these are just lower bounds. To get more insight, we consider a numerical example. Figure (\[fig::price\]) shows a plot of the risk of $\hat\beta(\hat{S})=\overline{Y}(\hat{S})$ based on $2n$ observations (non-splitting) and $n$ observations (splitting). In this example we see that indeed, the splitting estimator suffers a larger risk. In this example, $D=1,000$, $n=50$, and $\beta = (a,0,\ldots, 0)$. The horizontal axis is $a$, which is the gap between the largest and second largest mean. ![*Horizontal axis: the gap $\beta_{(1)} - \beta_{(2)}$. Blue line: risk of splitting estimator. Black line: risk of non-splitting estimator.*[]{data-label="fig::price"}](PriceOfSplitting) To summarize: splitting gives more precise estimates and coverage for the selected parameter than non-splitting (uniform) inference. But the two approaches can be estimating different parameters. This manifests itself by the fact that splitting can lead to less precise estimates of the population parameter $\theta$. In the regression setting, this would correspond to the fact that splitting the data can lead to selecting models with poorer prediction accuracy. Comments on Non-Splitting Methods {#section::comments} ================================= There are several methods for constructing confidence intervals in high-dimensional regression. Some approaches are based on debiasing the lasso estimator [e.g., @zhang2014confidence; @vandegeer2014asymptotically; @javanmard2014confidence; @nickl2013confidence; see Section \[sec:related\]].
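The numerical example can be reproduced in miniature by Monte Carlo. The sketch below is ours (with smaller $D$ and fewer repetitions than the figure, for speed); it estimates the risk at a single gap value $a$:

```python
import numpy as np

def mc_risk(split, n=50, D=200, a=3.0, reps=300, seed=0):
    """Monte Carlo sketch of the inference-prediction tradeoff in the
    many-means model with beta = (a, 0, ..., 0): mean squared error of
    Ybar(S_hat) as an estimate of theta = max_j beta(j) = a.
    Non-splitting selects and estimates on all 2n points; splitting
    selects on the first n and estimates on the remaining n."""
    rng = np.random.default_rng(seed)
    errs = np.empty(reps)
    for r in range(reps):
        Y = rng.normal(size=(2 * n, D))
        Y[:, 0] += a                                  # the one large mean
        if split:
            S = int(np.argmax(Y[:n].mean(axis=0)))    # select on D_1
            est = Y[n:, S].mean()                     # estimate on D_2
        else:
            S = int(np.argmax(Y.mean(axis=0)))        # select on all data
            est = Y[:, S].mean()                      # reuse all data
        errs[r] = (est - a) ** 2
    return float(errs.mean())
```

With a well-separated maximum the non-splitting risk is about $1/(2n)$ versus $1/n$ for splitting, matching the right-hand side of the figure; as the gap $a$ shrinks, selection errors inflate both risks.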
These approaches tend to require the linear model to be correct, as well as assumptions on the design, and tend to target the true $\beta$, which is well-defined in this setting. Some partial exceptions exist: [@peter.sarah.2015] relaxes the requirement of a correctly-specified linear model, while [@meinshausen2015group] removes the design assumptions. In general, these debiasing approaches do not provide uniform, assumption-free guarantees. [@lockhart2014significance; @lee2016exact; @taylor2014exact] do not require the linear model to be correct nor do they require design conditions. However, their results only hold for parametric models. Their method works by inverting a pivot. In fact, inverting a pivot is, in principle, a very general approach. We could even use inversion in the nonparametric framework as follows. For any $P\in {\cal P}$ and any $j$ define $t(j,P)$ by $$\mathbb{P}( \sqrt{n}|\hat\beta_S(j) - \beta_S(j)| > t(j,P)) = \alpha.$$ Note that, in principle, $t(j,P)$ is known. For example, we could find $t(j,P)$ by simulation. Now let $A = \{P\in {\cal P}:\ \sqrt{n}|\hat\beta_S(j) - \beta_S(j)| < t(j,P)\}$. Then $\mathbb{P}(P\in A)\geq 1-\alpha$ for all $P\in {\cal P}$. Write $\beta_j(S) = f(P,Z_1,\ldots, Z_n)$. Let $C= \{f(P,Z_1,\ldots, Z_n):\ P\in A\}$. It follows that $\mathbb{P}(\beta_j(S)\in C) \geq 1-\alpha$ for all $P\in {\cal P}$. Furthermore, we could also choose $t(j,P)$ to satisfy $\mathbb{P}( \sqrt{n}|\hat\beta_S(j) - \beta_S(j)| > t(j,P)|E_n) = \alpha$ for any event $E_n$, which would give conditional confidence intervals if desired. There are two problems with this approach. First, the confidence sets would be huge. Second, it is not computationally feasible to find $t(j,P)$ for every $P\in {\cal P}$.
The crucial and very clever observation in [@lee2016exact] is that if we restrict to a parametric model (typically they assume a Normal model with known, constant variance) then, by choosing $E_n$ carefully, the conditional distribution reduces, by sufficiency, to a simple one-parameter family. Thus we only need to find $t$ for this one-parameter family, which is feasible. Unfortunately, the method does not provide confidence guarantees of the form (\[eq::honest\]), which is the goal of this paper. [@berk2013valid] is closest to providing the kind of guarantees we have considered here. But as we discussed in the previous section, it does not seem to be extendable to the assumption-free framework. None of these comments is meant as a criticism of the aforementioned methods. Rather, we just want to clarify that these methods are not comparable to our results because we require uniformity over ${\cal P}$. Also, except for the method of [@berk2013valid], none of the other methods provide any guarantees over unknown selection rules. Numerical Examples {#section::simulation} ================== In this section we briefly consider a few illustrative examples. In a companion paper, we provide detailed simulations comparing all of the recent methods that have been proposed for inference after model selection. It would take too much space, and go beyond the scope of the current paper, to include these comparisons here. We focus on linear models, and in particular on inference for the projected parameter $\beta_{{\widehat{S}}}$ and the LOCO parameter $\gamma_{{\widehat{S}}}$ of and , respectively. The data are drawn from three distributions: Setting A : *Linear and sparse with Gaussian noise.* A linear model with $\beta_j\sim U[0,1]$ for $j=1,\dots,5$ and $\beta_j=0$ otherwise. Setting B : *Additive and sparse with $t$-distributed noise.* An additive model with a cubic and a quadratic term, as well as three linear terms, and $t_5$-distributed additive noise.
Setting C : *Non-linear, non-sparse, $t$-distributed noise.* The variables from Setting B are rotated randomly to yield a dense model. In Settings A and B, $n=100$ (before splitting); in Setting C, $n=200$. In all Settings $p=50$ and the noise variance is 0.5. The linear model $\hat{\beta}_{{\widehat{S}}}$ is selected on $\mathcal{D}_1$ by the lasso with $\lambda$ chosen using 10-fold cross-validation. For $\gamma_{{\widehat{S}}}(j)$, $\hat{\beta}_{{\widehat{S}}}(j)$ is estimated by reapplying the same selection procedure to $\mathcal{D}_1$ with the $j^{\mathrm{th}}$ variable removed. Confidence intervals are constructed using the pairs bootstrap procedure of Section 2 with $\alpha=0.05$. ![*Typical confidence intervals for the projection parameter (left) and the LOCO parameter (right) for Settings A, B, and C. Blue indicates the true parameter value, and green indicates the point estimate from $\mathcal{D}_2$. Note that the parameters are successfully covered even when the underlying signal is non-linear ($X_1$ in Setting B) or dense (Setting C).*[]{data-label="fig::confint"}](linear_beta.pdf "fig:"){width="0.5\linewidth"} ![](linear_gamma.pdf "fig:"){width="0.5\linewidth"} ![](additive_beta.pdf "fig:"){width="0.5\linewidth"} ![](additive_gamma.pdf "fig:"){width="0.5\linewidth"} ![](nonlinear_beta.pdf "fig:"){width="0.5\linewidth"} ![](nonlinear_gamma.pdf "fig:"){width="0.5\linewidth"} Figure \[fig::confint\] shows typical confidence intervals for the projection parameter, $\beta_{{\widehat{S}}}$, and the LOCO parameter, $\gamma_{{\widehat{S}}}$, for one realization of each Setting. Notice that confidence intervals are only constructed for $j\in {\widehat{S}}$. The non-linear term is successfully covered in Setting B, even though the linear model is wrong.
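A toy version of this pipeline is sketched below. Since the cross-validated lasso is not essential to the inferential step, marginal-correlation screening is substituted as a hedged stand-in for the selection rule, and all names are illustrative rather than taken from the paper's code.

```python
import numpy as np

def split_and_infer(X, Y, k=5, alpha=0.05, B=200, seed=0):
    """Hedged sketch of the simulation pipeline: model selection on
    D_1, then pairs-bootstrap simultaneous intervals for the projection
    parameter on D_2.  Top-k marginal-correlation screening stands in
    for the cross-validated lasso used in the text."""
    rng = np.random.default_rng(seed)
    m = X.shape[0] // 2
    X1, Y1 = X[:m], Y[:m]                      # D_1: selection half
    X2, Y2 = X[m:], Y[m:]                      # D_2: inference half
    score = np.abs(X1.T @ (Y1 - Y1.mean()))
    S = np.sort(np.argsort(score)[-k:])        # selected columns S_hat
    n = X2.shape[0]
    beta_hat, *_ = np.linalg.lstsq(X2[:, S], Y2, rcond=None)
    sup = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)       # pairs bootstrap on D_2
        bb, *_ = np.linalg.lstsq(X2[idx][:, S], Y2[idx], rcond=None)
        sup[b] = np.sqrt(n) * np.max(np.abs(bb - beta_hat))
    half = np.quantile(sup, 1.0 - alpha) / np.sqrt(n)
    return S, beta_hat - half, beta_hat + half
```

Because selection uses only $\mathcal{D}_1$, the bootstrap on $\mathcal{D}_2$ can treat ${\widehat{S}}$ as fixed, which is exactly what makes the splitting intervals valid without conditions on the selection rule.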
![*Joint coverage probability of the intervals for $\beta_{{\widehat{S}}}$ and $\gamma_{{\widehat{S}}}$ in Setting B, as sample size $n$ varies with $p=50$ held fixed. The coverage for $\gamma_{{\widehat{S}}}$ is accurate even at low sample sizes, while the coverage for $\beta_{{\widehat{S}}}$ converges more slowly.* []{data-label="fig::coverage"}](coverage_plot.pdf) Figure \[fig::coverage\] shows the coverage probability for Setting B as a function of $n$, holding $p=50$ fixed. The coverage for the LOCO parameter, $\gamma_{{\widehat{S}}}$, is accurate even at low sample sizes. The coverage for $\beta_{{\widehat{S}}}$ is low (0.8-0.9) for small sample sizes, but converges to the correct coverage as the sample size grows. This suggests that $\gamma_{{\widehat{S}}}$ is an easier parameter to estimate and conduct inference on. Berry-Esseen Bounds for Nonlinear Parameters With Increasing Dimension {#section::berry} ====================================================================== The results in this paper depend on a Berry-Esseen bound for regression with possibly increasing dimension. In this section, there is no model selection or splitting. We set $d=k$ and $S = \{1,\ldots, k\}$ where $k < n$ and $k$ can increase with $n$. Later, these results will be applied after model selection and sample splitting. Existing Berry-Esseen results for nonlinear parameters are given in [@pinelis2009berry; @shao2016stein; @chen2007normal; @anastasiou2014bounds; @anastasiou2015new; @anastasiou2016multivariate]. Our results are in the same spirit but we keep careful track of the effect of dimension and the eigenvalues of $\Sigma$, while leveraging results from [@cherno1; @cherno2] on high dimensional central limit theorems for simple convex sets. We derive a general result on the accuracy of the Normal approximation over hyper-rectangles for nonlinear parameters.
We make use of three findings from [@cherno2; @chernozhukov2015comparison] and [@nazarov1807maximal]: the Gaussian anti-concentration theorem, the high-dimensional central limit theorem for sparsely convex sets, and the Gaussian comparison theorem, reported in the appendix as Theorems \[thm:anti.concentration\], \[thm:high.dim.clt\] and \[thm:comparisons\], respectively. In fact, in the appendix we re-state these results in a slightly different form than they appear in the original papers. We do this because we need to keep track of certain constants that affect our results. Let $W_1,\ldots, W_n$ be an independent sample from a distribution $P$ on $\mathbb{R}^b$ belonging to the class ${\cal P}_n$ of probability distributions supported on a subset of $[-A,A]^b$, for some fixed $A>0$ and such that $$v = \inf_{ P \in \mathcal{P}_n} \lambda_{\min}(V(P)) \quad \text{and} \quad \overline{v} = \sup_{ P \in \mathcal{P}_n} \lambda_{\max}(V(P)) \geq 1,$$ where $V(P) = \mathbb{E}_P[ (W_i-\psi)(W_i-\psi)^\top]$. We allow the class $\mathcal{P}_n$ to change with $n$, so that $b$, $v$ and $\overline{v}$ – but not $A$ – are to be regarded as functions of $n$, although we do not express such dependence in our notation for ease of readability. Notice that, in the worst case, $\overline{v}$ can be of order $b$. [**Remark.**]{} The assumption that $\overline{v} \geq 1$ is made for convenience and is used in the proof of in the Appendix. Our results remain valid if we assume that $\overline{v}$ is bounded away from $0$ uniformly in $n$, i.e. that $\overline{v} \geq \eta$ for some $\eta > 0$ and all $n$. The term $\eta$ would then appear as another quantity affecting the bounds. We have not kept track of this additional dependence.
Let $g = (g_1,\ldots,g_s)^\top \colon \mathbb{R}^b \rightarrow \mathbb{R}^s$ be a twice-continuously differentiable vector-valued function defined over an open, convex subset $\mathcal{S}_n$ of $[-A,A]^b$ such that, for all $P \in \mathcal{P}_n$, $\psi = \psi(P) = \mathbb{E}[W_1] \in \mathcal{S}_n$. Let $\widehat{\psi} = \widehat{\psi}(P) = \frac{1}{n} \sum_{i=1}^n W_i$ and assume that $\widehat{\psi} \in \mathcal{S}_n$ almost surely, for all $P \in \mathcal{P}_n$. Finally, set $\theta = g(\psi)$ and $\widehat{\theta} = g(\widehat{\psi})$. For any point $\psi \in \mathcal{S}_n$ and $j\in \{ 1,\ldots,s\}$, we will write $G_j(\psi) \in \mathbb{R}^b$ and $H_j(\psi)\in \mathbb{R}^{b \times b}$ for the gradient and Hessian of $g_j$ at $\psi$, respectively. We will set $ G(\psi)$ to be the $s\times b$ Jacobian matrix whose $j^{\rm th}$ row is $G^\top_j(\psi)$. [**Remark.**]{} The assumption that $\hat{\psi}$ belongs to $\mathcal{S}_n$ almost surely can be relaxed to hold on an event of high probability, resulting in an additional error term in all our bounds. To derive a high-dimensional Berry-Esseen bound on $g(\psi) - g(\hat{\psi})$ we will study its first order Taylor approximation. Towards that end, we will require uniform control over the size of the gradient and Hessian of $g$. Thus we set $$\label{eq:H.and.B} B = \sup_{P \in \mathcal{P}_n }\max_{j=1,\ldots,s} ||G_j(\psi(P))|| \quad \text{and} \quad \overline{H} = \sup_{ \psi\in \mathcal{S}_n }\max_{j=1,\ldots,s} \|H_j(\psi)\|_{\mathrm{op}}$$ where $\|H_j(\psi)\|_{\mathrm{op}}$ is the operator norm. [**Remark.**]{} The quantity $\overline{H}$ can be defined differently, as a function of $\mathcal{P}_n$ and not $\mathcal{S}_n$. In fact, all that is required of $\overline{H}$ is that it satisfy the almost everywhere bound $$\label{eq:H.2} \max_j \int_0^1 \left\| H_j \left( t\psi(P) + (1-t)\hat{\psi}(P) \right) \right\|_{\mathrm{op}} dt \leq \overline{H},$$ for each $P \in \mathcal{P}_n$ (see below).
This allows us to establish a uniform bound on the magnitude of the remainder term in the Taylor series expansion of $g(\hat{\psi})$ around $g(\psi)$, as detailed in the proof of below. Of course, we may relax the requirement that holds almost everywhere to the requirement that it holds on an event of high probability. This is indeed the strategy we use in applying the present results to the projection parameters in . The covariance matrix of the linear approximation of $g(\psi) - g(\hat{\psi})$, which, for any $P \in \mathcal{P}_n$, is given by $$\label{eq:Gamma} \Gamma = \Gamma(\psi(P),P)=G(\psi(P)) V(P) G(\psi(P))^\top,$$ plays a crucial role in our analysis. In particular, our results will depend on the smallest variance of the linear approximation to $g(\psi) - g(\hat{\psi})$: $$\label{eq:sigma} \underline{\sigma}^2 = \inf_{ P \in \mathcal{P}_n}\min_{j =1,\ldots,s} G^\top_j(\psi(P)) V(P) G_j(\psi(P)).$$ With these definitions in place we are now ready to prove the following high-dimensional Berry-Esseen bound. \[theorem::deltamethod\] Assume that $W_1,\ldots, W_n$ is an i.i.d. sample from some $P \in {\cal P}_n$ and let $Z_n \sim N(0,\Gamma)$. Then, there exists a $C>0$, dependent on $A$ only, such that $$\sup_{P\in {\cal P}_n} \sup_{t > 0} \Bigl|\mathbb{P}( \sqrt{n}||\hat\theta - \theta||_\infty \leq t) -\mathbb{P}( ||Z_n||_\infty \leq t)\Bigr| \leq C \Big( \Delta_{n,1} + \Delta_{n,2} \Big),$$ where $$\begin{aligned} \label{eq::Delta} \Delta_{n,1} &= \frac{1}{\sqrt{v}} \left( \frac{ \overline{v}^2 b (\log 2bn)^7}{n} \right)^{1/6} \\ \Delta_{n,2} &= \frac{1}{\underline{\sigma}}\sqrt{\frac{ b \overline{v} \overline{H}^2 (\log n)^2 \log b}{n}}.\end{aligned}$$ [**Remarks.**]{} The assumption that the support of $P$ is compact is made for simplicity, and can be modified by assuming that the coordinates of the vectors $W_i$ have sub-exponential behavior. Notice also that the coordinates of the $W_i$’s need not be independent.
The proof of resembles the classic proof of the asymptotic normality of non-linear functions of averages by the delta method. First, we carry out a coordinate-wise Taylor expansion of $\widehat{\theta}$ around $\theta$. We then utilize a high-dimensional Berry-Esseen theorem for polyhedral sets established in [@cherno2] (see below for details) to derive a Gaussian approximation to the linear part in the expansion, resulting in the error term $\Delta_{n,1}$. Finally, we bound the remainder term due to the non-linearity of the function $g$ with basic concentration arguments paired with the Gaussian anti-concentration bound due to [@nazarov1807maximal] (see in the Appendix), thus obtaining the second error term $\Delta_{n,2}$. Throughout, we keep track of the dependence on $v$ and $\underline{\sigma}$ in order to obtain rates with a leading constant dependent only on $A$ (assumed fixed) but not on any other term that may vary with $k$ or $b$. ### Asymptotically honest confidence sets: Normal approximation approach {#sec:berry.normal .unnumbered} We now show how to use the high-dimensional central limit theorem to construct asymptotically honest confidence sets for $\theta$. We will first obtain a consistent estimator of the covariance matrix $\Gamma=G(\psi) V(P) G(\psi)^\top$ of the linear approximation to $\hat{\theta} - \theta$. In conventional fixed-dimension asymptotics, we would appeal to Slutsky’s theorem and ignore the effect of replacing $\Gamma$ with a consistent estimate. But in computing Berry-Esseen bounds with increasing dimension we may not discard the effect of estimating $\Gamma$. As we will see below, this extra step will bring an additional error term that must be accounted for. We will estimate $\Gamma$ with the plug-in estimator $$\label{eq:hat.gamma.berry} \hat\Gamma = G(\hat\psi) \hat V G(\hat\psi)^\top,$$ where $\hat{V} = \frac{1}{n} \sum_{i=1}^n W_i W_i^\top - \hat{\psi} \hat{\psi}^\top$ is the empirical covariance matrix.
Below, we bound the element-wise difference between $\Gamma$ and $\hat{\Gamma}$. Although this is in general a fairly weak notion of consistency in covariance matrix estimation, it is all that is needed to apply the Gaussian comparison theorem \[thm:comparisons\], which will allow us to extend the Berry-Esseen bound established in to the case when $\Gamma$ is estimated. \[lemma::upsilon\] Let $$\label{eq:aleph} \aleph_n = \max \Big\{ \overline{H} B \overline{v} \sqrt{ b\frac{ \log n}{n}}, B^2 \sqrt{ b \overline{v} \frac{\log b + \log n }{n} }\Big\}.$$ There exists a $C > 0$ dependent on $A$ only such that $$\label{eq:upsilon} \sup_{P\in {\cal P}_n} \mathbb{P}\left(\max_{j,l} \left| \hat\Gamma(j,l)-\Gamma(j,l)\right| \geq C \,\aleph_n\right) \leq \frac{2}{n}.$$ Now we construct the confidence set. Let $Q=(Q(1),\ldots, Q(s))$ be i.i.d. standard Normal variables, independent of the data. Let $\hat Z = \hat\Gamma^{1/2} Q$ and define $\hat{t}_\alpha$ by $$\label{eq:hat.t.alpha.berry} \mathbb{P}( ||\hat Z||_\infty > \hat{t}_\alpha \,|\, \hat\Gamma)=\alpha.$$ Finally, let $$\label{eq::conf-rectangle} \hat{C}_n = \Bigl\{ \theta \in \mathbb{R}^s:\ ||\theta-\hat\theta||_\infty \leq \frac{\hat{t}_\alpha}{\sqrt{n}}\Bigr\}.$$ \[thm::coverage\] There exists a $C>0$, dependent only on $A$, such that $$\inf_{P\in {\cal P}_n}\mathbb{P}(\theta\in \hat{C}_n) \geq 1-\alpha - C \left( \Delta_{n,1} + \Delta_{n,2} + \Delta_{n,3} + \frac{1}{n} \right),$$ where $$\label{eq::this-is-upsilon} \Delta_{n,3}= \frac{\aleph_n^{1/3} (2 \log 2s)^{2/3}}{\underline{\sigma}^{2/3}}.$$ [**Remark**]{}. The additional term $\Delta_{n,3}$ in the previous theorem is due to the uncertainty in estimating $\Gamma$, and can be established by using the comparison inequality for Gaussian vectors of [@chernozhukov2015comparison], keeping track of the dependence on $\underline{\sigma}^2$; see below.
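The quantile $\hat t_\alpha$ has no closed form, but since $\hat Z = \hat\Gamma^{1/2} Q$ is easy to simulate, a Monte Carlo approximation is straightforward. A sketch (the symmetric square root and the number of draws are implementation choices, not prescribed by the text):

```python
import numpy as np

def sup_norm_quantile(Gamma_hat, alpha, n_mc=200_000, seed=0):
    """Monte Carlo approximation of t_alpha, defined by
    P(||Z||_inf > t_alpha) = alpha for Z = Gamma_hat^{1/2} Q, Q standard normal."""
    rng = np.random.default_rng(seed)
    w, U = np.linalg.eigh(Gamma_hat)
    root = (U * np.sqrt(np.clip(w, 0.0, None))) @ U.T  # symmetric square root
    Z = rng.standard_normal((n_mc, Gamma_hat.shape[0])) @ root.T
    return np.quantile(np.abs(Z).max(axis=1), 1.0 - alpha)

def sup_norm_ball(theta_hat, Gamma_hat, n, alpha):
    """Center and half-width of {theta : ||theta - theta_hat||_inf <= t_alpha / sqrt(n)}."""
    return theta_hat, sup_norm_quantile(Gamma_hat, alpha) / np.sqrt(n)
```

For $\hat\Gamma = I_2$ and $\alpha = 0.05$ the exact value solves $(2\Phi(t)-1)^2 = 0.95$, i.e. $t \approx 2.24$, which the simulation recovers.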
In addition to $L_\infty$ balls, we can also construct our confidence set to be a hyper-rectangle, with side lengths proportional to the standard errors of the projection parameters. That is, we define $$\label{eq:hyper:CI} \tilde C_n = \bigotimes_{j=1}^{s} C(j),$$ where $$C(j) = \left[ \hat\theta(j) - z_{\alpha/(2s)} \sqrt{ \frac{\hat\Gamma(j,j)}{n} }, \hat\theta(j) + z_{\alpha/(2s)} \sqrt{ \frac{\hat\Gamma(j,j)}{n} }\right],$$ with $\hat\Gamma$ given by (\[eq:hat.gamma.berry\]) and $z_{\alpha/(2s)}$ the $1 - \alpha/(2s)$ quantile of a standard normal variate. Notice that we use a Bonferroni correction to guarantee a nominal coverage of $1-\alpha$. Also, note that $z_{\alpha/(2s)} = O(\sqrt{\log s})$, for each fixed $\alpha$. The coverage rate for this alternative confidence set is derived in the next result. \[thm::bonf\] Let $$\label{eq.Delta3.tilde} \tilde{\Delta}_{n,3} = \min\left\{\Delta_{n,3}, \frac{ \aleph_n z_{\alpha/(2s)}}{\underline{\sigma}^2 } \left(\sqrt{ 2 + \log(2s ) } + 2 \right) \right\}.$$ There exists a $C>0$, dependent only on $A$, such that $$\inf_{P \in \mathcal{P}_n} \mathbb{P}(\theta \in \tilde C_n) \geq (1-\alpha) - C \Big( \Delta_{n,1} + \Delta_{n,2} + \tilde \Delta_{n,3} + \frac{1}{n} \Big).$$ ### Asymptotically honest confidence sets: the bootstrap approach {#asymptotically-honest-confidence-sets-the-bootstrap-approach .unnumbered} To construct the confidence set , one has to compute the estimator $\hat{\Gamma}$ and the quantile $\hat{t}_\alpha$ in , which may be computationally inconvenient. Similarly, the hyper-rectangle requires computing the diagonal entries of $\hat{\Gamma}$. Below we rely on the bootstrap to construct analogous confidence sets, centered at $\hat{\theta}$, which do not require knowledge of $\hat{\Gamma}$. We let $\hat{\psi}^*$ denote the sample average of an i.i.d. sample of size $n$ from the bootstrap distribution, which is the empirical measure associated with the sample $(W_1,\ldots,W_n)$.
We also let $\hat{\theta}^* = g(\hat{\psi}^*)$. For a fixed $\alpha \in (0,1)$, let $\hat{t}^*_\alpha$ be the smallest positive number such that $$\mathbb{P}\left( \sqrt{n} \| \hat{\theta}^* - \hat{\theta}\|_\infty \leq \hat{t}^*_\alpha \Big| (W_1,\ldots,W_n) \right) \geq 1 - \alpha,$$ and let $(\tilde{t}^*_j, j =1,\ldots,s)$ be such that $$\mathbb{P}\left( \sqrt{n} | \hat{\theta}^*(j) - \hat{\theta} (j)| \leq \tilde{t}^*_j, \forall j \Big| (W_1,\ldots,W_n) \right) \geq 1 - \alpha.$$ By the union bound, each $\tilde{t}^*_j$ can be chosen to be the smallest positive number such that $$\mathbb{P}\left( \sqrt{n} | \hat{\theta}^*(j) - \hat{\theta} (j)| > \tilde{t}^*_j \Big| (W_1,\ldots,W_n) \right) \leq \frac{\alpha}{s}.$$ Consider the following two bootstrap confidence sets: $$\label{eq:ci.boot.theta} \hat{C}^*_{n} = \left\{ \theta \in \mathbb{R}^{s} \colon \| \theta - \hat{\theta} \|_\infty \leq \frac{ \hat{t}^*_{\alpha}}{\sqrt{n}} \right\} \quad \text{and} \quad \tilde{C}^*_{n} = \left\{ \theta \in \mathbb{R}^{s} \colon | \theta(j) - \hat{\theta}(j) | \leq \frac{ \tilde{t}^*_{j}}{\sqrt{n}}, \forall j \right\}$$ \[theorem::boot\] Assume the same conditions of Theorem \[theorem::deltamethod\] and that $\hat{\psi}$ and $\hat{\psi}^*$ belong to $\mathcal{S}_n$ almost surely. Suppose that $n$ is large enough so that the quantities $\underline{\sigma}^2_n = \underline{\sigma}^2 - C \aleph_n$ and $v_n = v - C \daleth_n$ are positive, where $C$ is the larger of the two constants in and in and $$\daleth_n = \sqrt{ b \overline{v} \frac{ \log b + \log n }{n} }.$$ Also set $\overline{v}_n = \overline{v} + C \daleth_n$.
Then, for a constant $C$ depending only on $A$, $$\label{eq::boot-cov} \inf_{P\in {\cal P}_n}\mathbb{P}(\theta\in \hat{C}^*_n) \geq 1-\alpha - C\left(\Delta^*_{n,1} + \Delta^*_{n,2} + \Delta_{n,3} + \frac{1}{n}\right),$$ where $$\Delta^*_{n,1} = \frac{1}{\sqrt{v_n}} \left( \frac{ \overline{v}_n b (\log 2bn)^7}{n} \right)^{1/6} ,\quad \Delta^*_{n,2} = \frac{1}{\underline{\sigma}_n}\sqrt{\frac{ b \overline{v}_n \overline{H}^2 (\log n)^2 \log b}{n}},$$ and $\Delta_{n,3}$ is given in (\[eq::this-is-upsilon\]). Similarly, $$\label{eq::boot-cov.bonf} \inf_{P\in {\cal P}_n}\mathbb{P}(\theta\in \tilde{C}^*_n) \geq 1-\alpha - C\left(\Delta^*_{n,1} + \Delta^*_{n,2} + \Delta_{n,3} + \frac{1}{n}\right).$$ [**Remark.**]{} The assumption that $\hat{\psi}$ and $\hat{\psi}^*$ are in $\mathcal{S}_n$ almost surely can be relaxed to a high probability statement without any issue, resulting in an additional bound on the probability of the complementary event. [**Remark.**]{} The proof of the theorem involves enlarging the class of distributions $\mathcal{P}_n$ to a bigger collection $\mathcal{P}^*_n$ that is guaranteed to include the bootstrap distribution (almost surely or with high probability). The resulting coverage error terms are larger than the ones obtained in using Normal approximations precisely because $\mathcal{P}_n^*$ is a larger class. In the above result we simply increase the rates arising from so that they hold for $\mathcal{P}^*_n$ without actually recomputing the quantities $B$, $\overline{H}$ and $\underline{\sigma}^2$ in and over the new class $\mathcal{P}^*_n$. Of course, better rates may be established should sharper bounds on those quantities be available. [**Remark.**]{} The error term $\Delta_{n,3}$ remains the same as in and because it quantifies an error term, related to the Gaussian comparison , which does not depend on the bootstrap distribution. 
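A minimal sketch of the bootstrap quantiles behind both sets (the functional $g$ and the number of resamples are placeholders supplied by the user, not values prescribed by the text):

```python
import numpy as np

def bootstrap_quantiles(W, g, alpha, n_boot=2000, seed=0):
    """Bootstrap quantiles: t*_alpha for the sup-norm set, and the
    per-coordinate quantiles tilde-t*_j with tail level alpha/s each."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    theta_hat = g(W.mean(axis=0))
    s = theta_hat.shape[0]
    devs = np.empty((n_boot, s))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)                  # resample with replacement
        devs[b] = np.sqrt(n) * np.abs(g(W[idx].mean(axis=0)) - theta_hat)
    t_sup = np.quantile(devs.max(axis=1), 1.0 - alpha)
    t_coord = np.quantile(devs, 1.0 - alpha / s, axis=0)  # one quantile per coordinate
    return t_sup, t_coord
```

Dividing the returned quantiles by $\sqrt{n}$ gives the half-widths of $\hat C^*_n$ and $\tilde C^*_n$ around $\hat\theta$.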
Conclusions {#section::conclusion} =========== In this paper we have taken a modern look at inference based on sample splitting. We have also investigated the accuracy of Normal and bootstrap approximations and we have suggested new parameters for regression. Despite the fact that sample splitting is an old idea, there remain many open questions. For example, in this paper, we focused on a single split of the data. One could split the data many times and somehow combine the confidence sets. However, for each split we are essentially estimating a different (random) parameter, so currently it is not clear how to combine this information. The bounds on coverage accuracy, which are of interest beyond sample splitting, are upper bounds. An important open question is to find lower bounds. Also, it is an open question whether we can improve the bootstrap rates. For example, the remainder term in the Taylor approximation of $\sqrt{n}(\hat\beta(j) - \beta(j))$ is $$\frac{1}{2\sqrt{n}}\int_0^1 \delta^\top H_j((1-t)\psi + t \hat\psi) \delta\, dt$$ where $\delta=\sqrt{n}(\hat\psi - \psi)$. By approximating this quadratic term it might be possible to correct the bootstrap distribution. [@pouzo2015bootstrap] has results for bootstrapping quadratic forms that could be useful here. In Section \[section::improving\] we saw that a modified bootstrap, which we call the image bootstrap, has very good coverage accuracy even in high dimensions. Future work is needed to compute the resulting confidence set efficiently. Finally, we remind the reader that we have taken an assumption-free perspective. If there are reasons to believe in some parametric model then of course the distribution-free, sample splitting approach used in this paper will be sub-optimal.
Acknowledgements {#acknowledgements .unnumbered} ================ The authors are grateful to the AE and the reviewers for comments that led to substantial improvements on the paper and the discovery of a mistake in the original version of the manuscript. We also thank Lukas Steinberger, Peter Bühlmann and Iosif Pinelis for helpful suggestions and Ryan Tibshirani for comments on early drafts. Appendix 1: Improving the Coverage Accuracy of the Bootstrap for the Projection Parameters {#section::improving} ========================================================================================== Throughout, we treat $S$ as a fixed, non-empty subset of $\{1,\ldots,d\}$ of size $k$ and assume an i.i.d. sample $(Z_1,\ldots,Z_n)$, where $Z_i = (X_i,Y_i)$ for all $i$, drawn from a distribution in $\mathcal{P}_n^{\mathrm{OLS}}$. The coverage accuracy for LOCO and prediction parameters is much higher than for the projection parameters, and the inferences for $\beta_S$ are less accurate if $k$ is allowed to increase with $n$. Of course, one way to ensure accurate inferences is simply to focus on $\gamma_S$ or $\phi_S$ instead of $\beta_S$. Here we discuss some other approaches to ensure coverage accuracy. If we use ridge regression instead of least squares, the gradient and Hessian with respect to $\beta$ are bounded and the error terms are very small. However, this could degrade prediction accuracy. This leads to a tradeoff between inferential accuracy and prediction accuracy. Investigating this tradeoff will be left to future work. Some authors have suggested the estimator $\hat\beta_S = \tilde\Sigma_S^{-1} \hat\alpha_S$, where $\tilde\Sigma_S$ is a block diagonal estimator of $\Sigma$. If we restrict the block size to be bounded above by a constant, then we get back the accuracy of the sparse regime. Again there is a tradeoff between inferential accuracy and prediction accuracy. The accuracy of the bootstrap can be increased by using the [*image bootstrap*]{}, as we now describe.
First we apply the bootstrap to get a confidence set for $\psi_S$. Let $$H_n = \Biggl\{ \psi_S:\ ||\psi_S - \hat\psi_S||_\infty \leq \frac{t^*_\alpha}{\sqrt{n}} \Biggr\}$$ where $t^*_\alpha$ is the bootstrap quantile defined by $\hat F^*(t^*_\alpha) = 1-\alpha$ and $$\hat F^*(t) = \mathbb{P}(\sqrt{n}||\hat\psi^*_S - \hat\psi_S||_\infty \leq t\,|\, Z_1,\ldots, Z_n).$$ Since $\psi_S$ is just a vector of moments, it follows from Theorem K.1 of [@cherno1] and the Gaussian anti-concentration () that, for a constant $C$ depending on $A$ only, $$\label{eq:image} \sup_{P\in {\cal P}^{\mathrm{OLS}}_n}|\mathbb{P}(\psi \in H_n) - (1-\alpha)| \leq \frac{C}{a_n}\left( \frac{(\log k)^7}{n}\right)^{1/6}.$$ In the above display $a_n = \sqrt{a - C \sqrt{ \frac{\log k }{n}}}$, which is positive for $n$ large enough, and $$a \leq \inf_{P \in \mathcal{P}_n^{\mathrm{OLS}}} \min_{j \in \{1,\ldots,d\}} {\rm Var}_P(W_i(j)).$$ Notice that $a$ is positive since $a \geq v$, where $v$ is given in the definition \[def:Pdagger\] of $\mathcal{P}_n^{\mathrm{OLS}}$. However, $a$ can be significantly larger than $v$. The term $C \sqrt{\frac{\log k}{n}}$ appearing in the definition of $a_n$ is just a high probability bound on the maximal element-wise difference between $V$ and $\hat{V}$, valid for each $ P \in \mathcal{P}_n^{\mathrm{OLS}}$. Next, recall that $\beta_S = g(\psi_S)$. Now define $$C_n = \Biggl\{ g(\psi) :\ \psi \in H_n \Biggr\}.$$ We call $C_n$ the [*image bootstrap confidence set*]{} as it is just the nonlinear function $g$ applied to the confidence set $H_n$. Then, by , $$\inf_{P\in {\cal P}^{\mathrm{OLS}}_n}\mathbb{P}(\beta_S \in C_n) \geq 1-\alpha - \frac{C}{a_n}\left( \frac{(\log k)^7}{n}\right)^{1/6}.$$ In particular, the implied confidence set for $\beta_S(j)$ is $$C_j = \Biggl[\inf_{\psi \in H_n}g_j(\psi),\ \sup_{\psi \in H_n}g_j(\psi)\Biggr].$$ Remarkably, in the coverage accuracy of the image bootstrap the dimension $k$ enters only logarithmically.
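A rough numerical rendering of the image set: sample points from the box $H_n$, push them through a coordinate $g_j$, and take the minimum and maximum. Uniform sampling of the box is just one crude choice; the function below is illustrative, not a construction from the text:

```python
import numpy as np

def image_interval(g_j, psi_hat, t_star, n, n_samples=5000, seed=0):
    """Approximate [inf, sup] of g_j over the sup-norm ball
    H_n = {psi : ||psi - psi_hat||_inf <= t_star / sqrt(n)} by
    evaluating g_j at uniform random points of the box."""
    rng = np.random.default_rng(seed)
    r = t_star / np.sqrt(n)
    psis = psi_hat + rng.uniform(-r, r, size=(n_samples, psi_hat.size))
    vals = np.array([g_j(p) for p in psis])
    return vals.min(), vals.max()
```

For a monotone coordinate (e.g. $g_j(\psi) = \psi(1)^2$ on a box away from zero) the sampled interval sits just inside the exact image of the box, with the gap shrinking as the number of samples grows.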
This is in stark contrast with the coverage accuracy guarantees for the projection parameters from , which depend polynomially on $k$ and on the other eigenvalue parameters. The image bootstrap is usually avoided because it generally leads to conservative confidence sets. Below we derive bounds on the accuracy of the image bootstrap. \[thm:beta.accuracy\] Let $u_n$ be as in and assume that $k \geq u_n^2$. Then, for each $P \in \mathcal{P}_n^{\mathrm{OLS}}$, with probability at least $1 - \frac{1}{n}$, the diameter of the image bootstrap confidence set $C_n$ is bounded by $$C \frac{k^{3/2}}{u_n^2}\sqrt{ \frac{\log k + \log n}{n}},$$ where $C>0$ depends on $A$ only. [**Remark.**]{} The assumption that $k \geq u_n^2$ is not necessary and can be relaxed, resulting in a slightly more general bound. Assuming non-vanishing $u$, the diameter tends uniformly to $0$ if $k (\log k)^{1/3} = o(n^{1/3})$. Interestingly, this is the same condition required in [@portnoy1987central] although the setting is quite different. Currently, we do not have a computationally efficient method to find the supremum and infimum. A crude approximation is given by taking a random sample $\psi_1,\ldots, \psi_N$ from $H_n$ and taking $$a(j) \approx \min_{i \leq N} g_j(\psi_i),\ \ \ b(j) \approx \max_{i \leq N} g_j(\psi_i).$$ [**Proof of .**]{} We will establish the claims by bounding the quantity $\left\| \hat\beta_{S} - \beta_{S} \right\| $ uniformly over all $\beta_S = g(\psi_S)$ with $\psi_S \in H_n$.\ Our proof relies on a first order Taylor series expansion of $g$ and on the uniform bound on the norm of the gradient of each $g_j$ given in. Recall that, by conditioning on $\mathcal{D}_{1,n}$, we can regard $S$ and $\beta_{S}$ as fixed.
Then, letting $G(x)$ be the $|S| \times b$-dimensional Jacobian of $g$ at $x$ and using the mean value theorem, we have that $$\begin{aligned} \left\| \hat\beta_{S} - \beta_{S} \right\| & = \left\| \left( \int_0^1 G\bigl( (1-t)\psi_{S} + t \hat \psi_{S} \bigr) dt \right) (\hat\psi_{S} - \psi_{S}) \right\|\\ & \leq \left\| \int_0^1 G\bigl( (1-t)\psi_{S} + t \hat \psi_{S}\bigr) dt \right\|_{\mathrm{op}} \left\| \hat{\psi}_{S} - \psi_{S} \right\|.\end{aligned}$$ To further bound the previous expression we use the fact, established in the proof of , that $\| \hat{\psi}_{S} - \psi_{S} \| \leq C k \sqrt{ \frac{ \log n + \log k}{n} }$ with probability at least $1 - 1/n$, where $C$ depends on $A$, for each $P \in \mathcal{P}_n^{\mathrm{OLS}}$. Next, $$\left\| \int_0^1 G\bigl( (1-t)\psi_{S} + t \hat \psi_{S} \bigr) dt \right\|_{\mathrm{op}} \leq \sup_{t \in (0,1)} \left\| G\bigl( (1-t)\psi_{S} + t \hat \psi_{S} \bigr) \right\|_{\mathrm{op}} \leq \sup_{t \in (0,1)} \max_{j \in S} \left\| G_j\bigl( (1-t)\psi_{S} + t \hat \psi_{S} \bigr) \right\|$$ where $G_j(\psi)$ is the $j^{\mathrm{th}}$ row of $G(\psi)$, which is the gradient of $g_j$ at $\psi$. Above, the first inequality relies on the convexity of the operator norm and the second inequality uses the fact that the operator norm of a matrix is bounded by the maximal Euclidean norm of the rows. For each $P \in \mathcal{P}_n^{\mathrm{OLS}}$ and each $t \in (0,1)$ and $j \in S$, the bound in yields that, for a $C>0$ depending on $A$ only, $$\left\| G_j\bigl( (1-t)\psi_{S} + t \hat \psi_{S} \bigr) \right\| \leq C \left( \frac{\sqrt{k}}{\hat{u}_t ^2} + \frac{1}{\hat{u}_t} \right),$$ where $\hat{u}_t = \lambda_{\min}\bigl( (1 - t) \Sigma_{S} + t \hat{\Sigma}_{S} \bigr) \geq (1 - t) \lambda_{\min}(\Sigma_{S}) + t \lambda_{\min}(\hat{\Sigma}_{S})$.
By in and Weyl’s theorem, and using the fact that $ u > u_n$, on an event with probability at least $1 - \frac{1}{n}$, $$\left\| G_j\bigl( (1-t)\psi_{S} + t \hat \psi_{S} \bigr) \right\| \leq C \left( \frac{\sqrt{k}}{u_n ^2} + \frac{1}{u_n} \right) \leq C \frac{\sqrt{k}}{u_n^2},$$ where in the last inequality we assume $n$ large enough so that $k \geq u_n^2$. The previous bound does not depend on $t$, $j$ or $P$. The result now follows. $\Box$ Appendix 2: Proofs of the results in ===================================== In all the proofs of the results from , we will condition on the outcome of the sample splitting step, resulting in the random equipartition $\mathcal{I}_{1,n}$ and $\mathcal{I}_{2,n}$ of $\{1,\ldots,2n\}$, and on $\mathcal{D}_{1,n}$. Thus, we can treat the outcome of the model selection and estimation procedure $w_n$ on $\mathcal{D}_{1,n}$ as fixed. As a result, we regard ${\widehat{S}}$ as a deterministic, non-empty subset of $\{1,\ldots,d\}$ of size $k < d$ and the projection parameter $\beta_{{\widehat{S}}}$ as a fixed vector of length $k$. Similarly, for the LOCO parameter $\gamma_{{\widehat{S}}}$, the quantities $\widehat{\beta}_{{\widehat{S}}}$ and $\widehat{\beta}_{{\widehat{S}}(j)}$, for $j \in {\widehat{S}}$, which depend on $\mathcal{D}_{1,n}$, also become fixed. Due to the independence of $\mathcal{D}_{1,n}$ and $\mathcal{D}_{2,n}$, all the probabilistic statements made in the proofs therefore refer to the randomness in $\mathcal{D}_{2,n}$ only. Since all our bounds will depend on $\mathcal{D}_{1,n}$ through the cardinality of ${\widehat{S}}$, which is fixed at $k$, the same bounds will hold uniformly over all possible values taken on by $\mathcal{D}_{1,n}$ and $\mathcal{I}_{1,n}$ and all possible outcomes of all model selection and estimation procedures $w_n \in \mathcal{W}_n$ run on $\mathcal{D}_{1,n}$.
In particular, the bounds are valid unconditionally with respect to the joint distribution of the entire sample and of the splitting outcome. Also, in the proofs $C$ denotes a positive constant that may depend on $A$ only but not on any other variable, and whose value may change from line to line. [**Proof of .**]{} As usual, we condition on $\mathcal{D}_{1,n}$ and thus treat ${\widehat{S}}$ as a fixed subset of $\{1,\ldots,d\}$ of size $k$. Recalling the definitions of $\hat{\beta}_{{\widehat{S}}}$ and $\beta_{{\widehat{S}}}$ given in and , respectively, and dropping the dependence on ${\widehat{S}}$ in the notation for convenience, we have that $$\begin{aligned} \| \hat{\beta}_{{\widehat{S}}} - \beta_{{\widehat{S}}}\| & = \left\| \left( \hat{\Sigma}^{-1} - \Sigma^{-1} \right) \hat{\alpha} + \Sigma^{-1}\left( \hat{\alpha} - \alpha \right) \right\|\\ & \leq \frac{1}{u} \| \hat{\alpha} - \alpha\| + \left\| \hat{\Sigma}^{-1} - \Sigma^{-1} \right\|_{\mathrm{op}} \|\hat{\alpha} \|\\ & = T_1 + T_2.\end{aligned}$$ By the vector Bernstein inequality , $$\| \hat{\alpha} - \alpha \| \leq C A \sqrt{ \frac{k \log n}{n} },$$ with probability at least $1 - \frac{1}{n}$ and for some universal constant $C$ (independent of $A$). Since the smallest eigenvalue of $\Sigma$ is bounded from below by $u$, we have that $$T_1 \leq C \frac{1}{u} \sqrt{ \frac{k \log n}{n}}.$$ To bound $\left\| \hat{\Sigma}^{-1} - \Sigma^{-1} \right\|_{\mathrm{op}}$ in the term $T_2$ we write $\hat{\Sigma} = \Sigma + E$ and assume for the moment that $\|E\|_{\mathrm{op}} \| \Sigma^{-1}\|_{\mathrm{op}} < 1 $ (which of course implies that $\| E \Sigma^{-1} \|_{\mathrm{op}} < 1$).
Since $E$ is symmetric, we have, by formula 5.8.2 in [@Horn:2012:MA:2422911], that $$\left\| \hat{\Sigma}^{-1} - \Sigma^{-1} \right\|_{\mathrm{op}} = \left\| (\Sigma + E)^{-1} - \Sigma^{-1} \right\|_{\mathrm{op}} \leq \| \Sigma^{-1}\|_{\mathrm{op}} \frac{\| E \Sigma^{-1}\|_{\mathrm{op}} } { 1 - \| E \Sigma^{-1}\|_{\mathrm{op}} },$$ which in turn is upper bounded by $$\|\Sigma^{-1}\|^2_{\mathrm{op}} \frac{\| \hat{\Sigma} - \Sigma \|_{\mathrm{op}} } { 1 - \|\hat{\Sigma} - \Sigma \|_{\mathrm{op}} \| \Sigma^{-1}\|_{\mathrm{op}} }.$$ The matrix Bernstein inequality along with the assumption that $U \geq \eta > 0$ yield that, for a positive $C$ (which depends on $\eta$), $$\|\hat{\Sigma} - \Sigma \|_{\mathrm{op}} \leq C A \sqrt{ k U \frac{ \log k + \log n}{n}},$$ with probability at least $1 - \frac{1}{n}$. Using the fact that $\| \Sigma^{-1} \|_{\mathrm{op}} \leq \frac{1}{u}$ and the assumed asymptotic scaling on $B_n$ we see that $\| \Sigma^{-1} E \|_{\mathrm{op}} \leq 1/2$ for all $n$ large enough. Thus, for all such $n$, we obtain that, with probability at least $ 1- \frac{1}{n}$, $$T_2 \leq 2 C A \frac{ k}{u^2} \sqrt{ U \frac{ \log k + \log n}{n}},$$ since $\| \hat{\alpha} \| \leq A \sqrt{k}$ almost surely. Thus we have shown that holds, with probability at least $1 - \frac{2}{n}$ and for all $n$ large enough. This bound holds uniformly over all $P \in \mathcal{P}_n^{\mathrm{OLS}}$. $\Box$ [**Proof of .**]{} In what follows, any terms of order $\frac{1}{n}$ are absorbed into terms of asymptotically larger order. As remarked at the beginning of this section, we first condition on $\mathcal{D}_{1,n}$ and the outcome of the sample splitting, so that ${\widehat{S}}$ is regarded as a fixed non-empty subset $S$ of $\{1,\ldots,d\}$ of size at most $k$. The bounds and are established using and from , where we may take the function $g$ as in , $s = k$, $b = \frac{k^2 + 3k}{2} $, $\psi = \psi_{{\widehat{S}}}$ and $\hat{\psi} = \hat{\psi}_{{\widehat{S}}}$ as in and , respectively.
As already noted, $\psi$ is always in the domain of $g$ and, as long as $n \geq d$, so is $\hat{\psi}$, almost surely. A main technical difficulty in applying the results of is to obtain good approximations for the quantities $\underline{\sigma}, \overline{H}$ and $B$. This can be accomplished using the bounds provided in below, which rely on matrix calculus. Even so, the claims in the theorem do not simply follow by plugging those bounds in Equations , and from . Indeed, close inspection of the proof of (which is needed by both Theorems \[thm::coverage\] and \[thm::bonf\]) shows that the quantity $\overline{H}$, defined in , is used there only once, but critically, to obtain the almost everywhere bound in Equation . Adapted to the present setting, such a bound would be of the form $$\max_{j \in S} \int_{0}^1 \left\| H_j\left((1-t) \psi_S(P) + t \hat{\psi}_S(P) \right) \right\|_{\mathrm{op}} dt \leq \overline{H},$$ almost everywhere, for each $S$ and $P \in \mathcal{P}_n^{\mathrm{OLS}}$, where $\psi_S = \psi_S(P)$ and $\hat{\psi}_S = \hat{\psi}_S(P)$ are given in and , respectively. Unfortunately, the above inequality cannot be expected to hold almost everywhere, as it did in . Instead we will derive a high probability bound. In detail, using the second inequality in below we obtain that, for any $t \in [0,1]$, $S$, $j \in S$ and $P \in \mathcal{P}_n^{\mathrm{OLS}}$, $$\left\| H_j\left((1-t) \psi_S(P) + t \hat{\psi}_S(P) \right) \right\|_{\mathrm{op}} \leq C \frac{k}{\hat{u}_t^3}$$ where $\hat{u}_t = \lambda_{\min}( (1-t) \Sigma_S + t \hat{\Sigma}_S) \geq (1-t) \lambda_{\min}(\Sigma_S) + t \lambda_{\min}(\hat{\Sigma}_S)$ and the constant $C$ is the same as in (the dependence of $\Sigma_S$ and $\hat{\Sigma}_S$ on $P$ is implicit in our notation). Notice that, unlike in the proof of , the above bound is random.
By assumption, $\lambda_{\min}(\Sigma_S) \geq u$ and, by in and Weyl’s theorem, $\lambda_{\min}(\hat{\Sigma}_S) \geq u_n$ with probability at least $1 - \frac{1}{n}$ for each $P \in \mathcal{P}_n^{\mathrm{OLS}}$. Since $u_n \leq u$, we conclude that, for each $S$, $j \in S$ and $P \in \mathcal{P}_n^{\mathrm{OLS}}$, $$\max_{j \in S} \int_0^1 \left\| H_j\left((1-t) \psi_S(P) + t \hat{\psi}_S(P) \right) \right\|_{\mathrm{op}} dt \leq C \frac{k}{u_n^3},$$ on an event of probability at least $1 - \frac{1}{n}$. The same arguments apply to the bound in the proof of , yielding that the term $\aleph_n$, given in , can be bounded, on an event of probability at least $1 - \frac{1}{n}$ and using again , by $$\label{eq:new.aleph} C \frac{k^{5/2}}{u_n^3 u^2} \overline{v} \sqrt{ \frac{ \log n}{n}},$$ for each $P \in \mathcal{P}_n^{\mathrm{OLS}}$ and some $C>0$ dependent on $A$ only. (In light of the bounds derived next in , the dominant term in the bound on $\aleph_n$ given in is $ \overline{H} B \overline{v} \sqrt{ b\frac{ \log n}{n}}$, from which follows. We omit the details.) Thus, for each $P \in \mathcal{P}_n^{\mathrm{OLS}}$, we may now apply Theorems \[thm::coverage\] and \[thm::bonf\] on an event with probability no smaller than $ 1- \frac{1}{n}$, whereby the term $\overline{H}$ is replaced by $C \frac{k}{u^3_n}$ and the terms $B$ and $\underline{\sigma}$ are bounded as in . \[lemma::horrible\] For any $j \in {\widehat{S}}$, let $\beta_{{\widehat{S}}}(j) = e_j^\top \beta_{{\widehat{S}}} = g_j(\psi)$ where $e_j$ is the $j^{\mathrm{th}}$ standard unit vector. Write $\alpha = \alpha_{{\widehat{S}}}$ and $\Omega = \Sigma^{-1}_{{\widehat{S}}}$ and assume that $k \geq u^2$.
The gradient and Hessian of $g_j$ are given by $$\label{eq:Gj} G^\top_j = e^\top_j \Big( \left[ - \left( \alpha^\top \otimes I_k \right) (\Omega \otimes \Omega) \;\;\;\;\; \Omega\right] \Big) D_h$$ and $$\label{eq:Hj} H_j = D_h^\top A_j D_h,$$ respectively, where $$A_j = \frac{1}{2}\left( (I_b \otimes e^\top_j) H + H^\top (I_b \otimes e_j) \right),$$ and $$H = \left[ \begin{array}{c} - \Big( ( \Omega \otimes \Omega) \otimes I_k \Big) \Big[0_{k^3 \times k^2} \;\;\;\;\; ( I_k \otimes \mathrm{vec}(I_k)) \Big] + \Big( I_{k^2} \otimes (\alpha^\top \otimes I_k) \Big) G \Big[ (\Omega \otimes \Omega) \;\;\;\;\; 0_{k^2 \times k}\Big]\\ \;\\ \Big[ - (\Omega \otimes \Omega) \;\;\;\;\; 0_{k^2 \times k} \Big] \end{array} \right],$$ and $D_h$ is the modified duplication matrix defined by $D_h \psi_h = \psi$, with $\psi_h$ the vector consisting of the subset of $\psi$ not including the entries that correspond to the above-diagonal entries of $\Sigma$. Then, $$\label{eq::B-and-lambda} B= \sup_{P \in \mathcal{P}_n^{\mathrm{OLS}} } \max_j \|G_j(\psi(P)) \| \leq C \frac{ \sqrt{k} }{u^2},\ \ \ \overline{H}=\max_j \sup_{P \in \mathcal{P}_n^{\mathrm{OLS}}} \| H_j(\psi(P))\|_{\mathrm{op}} \leq C \frac{k}{u^3},$$ and $$\label{eq:sigmamin} \underline{\sigma} = \inf_{P \in \mathcal{P}^{\mathrm{OLS}}_n} \min_j \sqrt{ G_j V G_j^\top} \geq \frac{ \sqrt{v } }{ U },$$ where $C>0$ depends on $A$ only. [**Remark.**]{} The assumption that $k \geq u^2$ is not actually needed but this is the most common case and it simplifies the expressions a bit.
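The gradient formula can be sanity-checked, in a tiny example, against the differential $d\beta = \Omega\, d\alpha - \Omega\, (d\Sigma)\, \Omega\, \alpha$ of $g(\Sigma,\alpha) = \Sigma^{-1}\alpha$, the standard matrix-calculus identity that the expression for $G_j$ packages (the numbers below are arbitrary):

```python
import numpy as np

# Finite-difference check, in a k = 2 toy case, of
# d beta = Sigma^{-1} d alpha - Sigma^{-1} (dSigma) Sigma^{-1} alpha
# for the map g(Sigma, alpha) = Sigma^{-1} alpha.
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
alpha = np.array([0.5, -1.0])
dS = 1e-6 * np.array([[0.7, 0.2], [0.2, -0.4]])  # small symmetric perturbation
da = 1e-6 * np.array([0.1, 0.9])

beta = np.linalg.solve(Sigma, alpha)
beta_pert = np.linalg.solve(Sigma + dS, alpha + da)
linearized = np.linalg.solve(Sigma, da) - np.linalg.solve(Sigma, dS @ beta)
err = np.abs(beta_pert - beta - linearized).max()  # second order, O(1e-12) here
```

The residual `err` is quadratic in the perturbation size, confirming that the linear term is exact to first order.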
[**Proof of .**]{} The maximal length of the sides of $\tilde{C}_n$ is $$2 \max_{j \in {\widehat{S}}} z_{\alpha/(2k)} \sqrt{\frac{ \hat\Gamma_{{\widehat{S}}}(j,j)}{n}} \leq 2 \max_{j \in {\widehat{S}}} z_{\alpha/(2k)} \sqrt{\frac{ \Gamma_{{\widehat{S}}}(j,j) + \left| \hat\Gamma(j,j)-\Gamma(j,j)\right|}{n}}.$$ By and Equation , the event that $$\max_{ j,l \in {\widehat{S}}} \left| \hat\Gamma(j,l)-\Gamma(j,l)\right| \leq C \frac{k^{3/2}}{u_n^3 u^2} \overline{v} \sqrt{ \frac{k^2 \log n}{n}}$$ holds with probability at least $1 - \frac{2}{n}$ and for each $P \in \mathcal{P}_n^{\mathrm{OLS}}$, where $C > 0$ depends on $A$ only. Next, letting $G = G(\psi_{{\widehat{S}}})$ and $V = V_{{\widehat{S}}}$, we have that, for each $j \in {\widehat{S}}$ and $P \in \mathcal{P}_n^{\mathrm{OLS}}$, $$\Gamma_{{\widehat{S}}}(j,j) = G_j V G_j^\top \leq \|G_j\|^2 \lambda_{\max}(V) \leq B^2 \overline{v} \leq C \frac{k }{u^4} \overline{v}$$ where $G_j$ denotes the $j^{\mathrm{th}}$ row of $G$ and, as usual, $C>0$ depends on $A$ only. The second inequality in the last display follows from property 3 in and the definition of $B$ in , while the third inequality uses the first bound in Equation . The result follows from combining the previous bounds and the fact that $z_{\alpha/(2k)} = O \left( \sqrt{ \log k} \right)$. $\Box$ [**Proof of .**]{} We condition on $\mathcal{D}_{1,n}$ and the outcome of the sample splitting. The claimed results follow almost directly from , with a few additional technicalities. The first difficulty is that the least squares estimator is not always well-defined under the bootstrap measure, which is the probability distribution of $n$ uniform draws with replacement from $\mathcal{D}_{2,n}$. In fact, any draw consisting of fewer than $d$ distinct elements of $\mathcal{D}_{2,n}$ will yield an empirical covariance matrix that is rank deficient and therefore not invertible.
On the other hand, because the distribution of $\mathcal{D}_{2,n}$ has a Lebesgue density by assumption, any set of $d$ or more points from $\mathcal{D}_{2,n}$ will be in general position and therefore will yield a unique set of least squares coefficients. To deal with this complication we will simply apply on the event that the bootstrap sample contains $d$ or more distinct elements of $\mathcal{D}_{2,n}$, whose complementary event, given the assumed scaling of $d$ and $n$, has probability exponentially small in $n$, as shown next. \[eq:lem.occupancy\] For $d \leq n/2$, the probability that sampling with replacement $n$ out of $n$ distinct objects will result in a set with fewer than $d$ distinct elements is no larger than $$\label{eq:occupancy} \exp \left\{ - \frac{n (1/2 - e^{-1})^2}{2} \right\}.$$ [**Remark.**]{} The condition that $d \leq n/2$ can be replaced by the condition that $d \leq c n$, for any $ c \in (0, 1 - e^{-1})$. Thus, we will restrict attention to the event that the bootstrap sample contains $d$ or more distinct elements of $\mathcal{D}_{2,n}$. This will result in an extra term that is of smaller order than any of the other terms and therefore can be discarded by choosing a larger value of the leading constant. At this point, the proof of the theorem is nearly identical to the proof of except for the way the term $A_3$ is handled.
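The occupancy bound in the lemma is deliberately crude; a quick simulation (parameters arbitrary) shows the fraction of distinct elements concentrating near $1 - e^{-1} \approx 0.632$, comfortably above the $1/2$ threshold used in the bound:

```python
import math
import numpy as np

def mean_frac_distinct(n, reps=300, seed=0):
    """Average fraction of distinct elements among n draws with replacement
    from n objects; the expectation 1 - (1 - 1/n)^n tends to 1 - 1/e."""
    rng = np.random.default_rng(seed)
    fracs = [np.unique(rng.integers(0, n, size=n)).size / n for _ in range(reps)]
    return float(np.mean(fracs))
```

Any $d \leq cn$ with $c < 1 - e^{-1}$ therefore lies well below the typical number of distinct bootstrap draws.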
The assumption that $n$ be large enough so that $v_n$ and $u_n$ are both positive implies, by and Weyl’s theorem, that, for each $P \in \mathcal{P}_{n}^{\mathrm{OLS}}$ and with probability at least $ 1- \frac{2}{n}$ with respect to the distribution of $\mathcal{D}_{2,n}$, the bootstrap distribution belongs to the class $\mathcal{P}_n^*$ of probability distributions for the pair $(X,Y)$ that satisfy the properties of the probability distributions in the class $\mathcal{P}_n^{\mathrm{OLS}}$ with two differences: (1) the quantities $U$, $u$, $v$ and $\overline{v}$ are replaced by $U_n$, $u_n$, $v_n$ and $\overline{v}_n$, respectively, and (2) the distributions in $\mathcal{P}^*_n$ need not have a Lebesgue density. Nonetheless, since the Lebesgue density assumption is only used to guarantee that the empirical covariance matrix is invertible, a fact that is also true for the bootstrap distribution under the event that the bootstrap sample consists of $d$ or more distinct elements of $\mathcal{D}_{2,n}$, the bound on the term $A_3$ established in holds for the larger class $\mathcal{P}_n^*$ as well. Next, can be used to bound the quantities $\underline{\sigma}$ and $B$ for the class $\mathcal{P}_n^*$. As for the bound on $\overline{H}$, we proceed as in the proof of and conclude that, for each non-empty subset $S$ of $\{1,\ldots,d\}$ and $P \in \mathcal{P}_n^*$, $$\max_{j \in S} \int_{0}^1 \left\| H_j\left((1-t) \psi_S(P) + t \hat{\psi}_S(P) \right) \right\|_{\mathrm{op}} dt \leq C \frac{k}{u_n^3}$$ on an event of probability at least $1 - \frac{1}{n}$, where $C$ is the constant appearing in the second bound in . Thus, we may take $C \frac{k}{u_n^3}$ in lieu of $\overline{H}$ and then apply (noting that the high probability bound in the last display holds for each $P \in \mathcal{P}_n^*$ separately).
[**Proof of .**]{} As remarked at the beginning of this appendix, throughout the proof all probabilistic statements will be made conditionally on the outcome of the splitting and on $\mathcal{D}_{1,n}$. Thus, in particular, ${\widehat{S}}$ is to be regarded as a fixed subset of $\{1,\ldots,d\}$ of size $k$. Let $Z_n\sim N(0,\hat \Sigma_{{\widehat{S}}})$, with $ \hat{\Sigma}_{{\widehat{S}}}$ given in \[eq:Sigma.loco\]. Notice that $\hat{\Sigma}_{{\widehat{S}}}$ is almost surely positive definite, a consequence of adding extra noise in the definition of $\gamma_{{\widehat{S}}}$ and $\hat{\gamma}_{{\widehat{S}}}$. Then, using Theorem 2.1 in [@cherno2], there exists a universal constant $C > 0$ such that $$\label{eq::secondx} \sup_{ t = (t_j, j \in {\widehat{S}}) \in \mathbb{R}^{{\widehat{S}}}_{+}} \Bigl| \mathbb{P}( \sqrt{n}|\hat\gamma_{{\widehat{S}}}(j) - \gamma_{{\widehat{S}}}(j) | \leq t_j, \forall j \in {\widehat{S}}) - \mathbb{P}(|Z_n(j)| \leq t_j, \forall j \in {\widehat{S}})\Bigr| \leq C \mathrm{E}_{1,n},$$ where $\mathrm{E}_{1,n}$ is given in . By restricting the supremum in the above display to all $t \in \mathbb{R}^{{\widehat{S}}}_+$ with identical coordinates, we also obtain that $$\label{eq::firstx} \sup_{t > 0} \Bigl| \mathbb{P}(\sqrt{n}||\hat\gamma_{{\widehat{S}}} - \gamma_{{\widehat{S}}}||_\infty \leq t) - \mathbb{P}\left(||Z_n||_\infty \leq t \right)\Bigr| \leq C \mathrm{E}_{1,n}.$$ In order to show and , we will use the same arguments used in the proofs of and . We first define $\mathcal{E}_n$ to be the event that $$\label{eq:loco.aleph} \max_{i,j} \left| \widehat{\Sigma}_{{\widehat{S}}}(i,j) - \Sigma_{{\widehat{S}}}(i,j) \right| \leq N_n,$$ where $N_n$ is as in . Each entry of $\widehat{\Sigma}_{{\widehat{S}}} - \Sigma_{{\widehat{S}}}$ is bounded in absolute value by $\left( 2(A+\tau) + \epsilon \right)^2$, and is therefore sub-Gaussian with parameter $\left( 2(A+\tau) + \epsilon \right)^4$.
Using a standard derivation for bounding the maximum of sub-Gaussian random variables we obtain that $\mathbb{P}(\mathcal{E}_n^c) \leq \frac{1}{n} $. The bound follows from the same arguments as in the proof of : combine the Gaussian comparison Theorem \[thm:comparisons\] with and notice that $\epsilon/\sqrt{3}$ is a lower bound on the standard deviation of the individual coordinates of the $\delta_i$’s. In particular, the Gaussian comparison theorem yields the additional error term $C \mathrm{E}_{2,n} + \frac{1}{n}$ given in , for some universal positive constant $C$. Similarly, can be established along the lines of the proof of , starting from the bound . In this case we pick up an additional error term $C \tilde{\mathrm{E}}_{2,n} + \frac{1}{n}$ of a different form, shown in , where $C>0$ is a different universal constant. Since none of the bounds we have derived depends on $\mathcal{D}_{1,n}$, the outcome of the splitting, or $w_n$, the same bounds hold for the joint probabilities, uniformly over the model selection and estimation procedures. The above arguments hold for each $P \in \mathcal{P}_n^{\mathrm{LOCO}}$. $\Box$ [**Proof of .**]{} Following the proof of , for each $P \in \mathcal{P}_n^{\mathrm{LOCO}}$ and on the event $\mathcal{E}_n$ given in (which has probability at least $1- \frac{1}{n}$), we have that $$\begin{aligned} 2 \max_{j \in {\widehat{S}}} z_{\alpha/(2k)} \sqrt{\frac{ \hat\Sigma_{{\widehat{S}}}(j,j)}{n}} & \leq 2 \max_{j \in {\widehat{S}}} z_{\alpha/(2k)} \sqrt{\frac{ \Sigma_{{\widehat{S}}}(j,j) + \left| \hat\Sigma_{{\widehat{S}}}(j,j)-\Sigma_{{\widehat{S}}}(j,j)\right|}{n}}\\ & \leq 2 z_{\alpha/(2k)} \sqrt{ \frac{ (2(A + \tau) + \epsilon)^2 + N_n }{n}}. \end{aligned}$$ The claimed bound follows from the definition of $N_n$ as in . $\Box$ [**Proof of .**]{} All the probabilistic statements that follow are to be understood conditionally on the outcome of the sample splitting and on $\mathcal{D}_{1,n}$.
Thus, $\mathcal{I}_{1,n}$, ${\widehat{S}}$, $\hat{\beta}_{{\widehat{S}}}$ and, for each $j \in {\widehat{S}}$, $\hat{\beta}_{{\widehat{S}}(j)}$ are to be regarded as fixed, and the only randomness is with respect to the joint marginal distribution of $\mathcal{D}_{2,n}$ and $(\xi_i, i \in \mathcal{I}_{2,n})$, and two auxiliary independent standard Gaussian vectors in $\mathbb{R}^{{\widehat{S}}}$, $Z_1$ and $Z_2$, independent of everything else. Let $\hat{\gamma}^*_{{\widehat{S}}} \in \mathbb{R}^{{\widehat{S}}}$ denote the vector of LOCO parameters arising from the bootstrap distribution corresponding to the empirical measure associated with the $n$ triplets $\left\{ (X_i,Y_i,\xi_i), i \in \mathcal{I}_{2,n} \right\}$. Next, $$\mathbb{P}\left( \sqrt{n} \| \gamma_{{\widehat{S}}} - \hat{\gamma}_{{\widehat{S}}} \|_\infty \leq \hat{t}^*_{\alpha} \right) \geq \mathbb{P}\Big( \sqrt{n} \| \hat{\gamma}^*_{{\widehat{S}}} - \hat{\gamma}_{{\widehat{S}}} \|_\infty \leq \hat{t}^*_{\alpha} \left | (X_i,Y_i,\xi_i), i \in \mathcal{I}_{2,n} \right. \Big) -( A_1 + A_2 + A_3),$$ where $$\begin{aligned} A_1 & = \sup_{t >0} \left| \mathbb{P}\left( \sqrt{n} \| \hat{\gamma}_{{\widehat{S}}} - \gamma_{{\widehat{S}}} \|_\infty \leq t \right) - \mathbb{P}( \| Z \|_\infty \leq t ) \right|,\\ A_2 & = \sup_{t>0} \left| \mathbb{P}( \| Z \|_\infty \leq t ) - \mathbb{P}( \| \hat{Z} \|_\infty \leq t ) \right|,\\ \text{and} & \\ A_3 & = \sup_{t >0} \Big| \mathbb{P}\Big( \sqrt{n} \| \hat{\gamma}^*_{{\widehat{S}}} - \hat{\gamma}_{{\widehat{S}}} \|_\infty \leq t \left | (X_i,Y_i,\xi_i), i \in \mathcal{I}_{2,n} \right. \Big) - \mathbb{P}( \| \hat{Z} \|_\infty \leq t ) \Big|,\end{aligned}$$ with $Z = \Sigma_{{\widehat{S}}}^{1/2} Z_1$ and $\hat{Z} = \widehat{\Sigma}_{{\widehat{S}}}^{1/2} Z_2$.
Then, $A_1 \leq C \mathrm{E}_{1,n}$ by and $A_2 \leq C \mathbb{E}_{2,n} + \frac{1}{n} $, by applying the Gaussian comparison Theorem \[thm:comparisons\] on the event $\mathcal{E}_n$ that holds, where $\mathbb{P}(\mathcal{E}_n^c) \leq \frac{1}{n}$ as argued in the proof of . Finally, the bound on $A_3$ follows from applying Theorem 2.1 in [@cherno2] to the bootstrap measure, conditionally on $(X_i,Y_i,\xi_i), i \in \mathcal{I}_{2,n}$, as was done in the proof of . In this case, we need to restrict to the event $\mathcal{E}_n$ to ensure that the minimal variance for the bootstrap measure is bounded away from zero. To that end, it will be enough to take $n$ large enough so that $\epsilon_n $ is positive and to replace $\epsilon$ with $\epsilon_n$. The price for this extra step is a factor of $\frac{1}{n}$, which upper bounds $\mathbb{P}(\mathcal{E}_n^c)$. Putting all the pieces together we arrive at the bound $$A_3 \leq C \mathrm{E}^*_{1,n} + \frac{1}{n}.$$ Finally, notice that $\mathrm{E}_{1,n} \leq \mathrm{E}^*_{1,n}$ since $\epsilon_n \leq \epsilon$. The same arguments apply to the other bootstrap confidence set $\tilde{C}^*_\alpha$, producing the same bound. We omit the proof for brevity but refer the reader to the proof of for details. All the bounds obtained so far hold conditionally on the outcome of the sample splitting and on $\mathcal{D}_{1,n}$, but are not functions of those random variables. Thus, the same bounds hold also unconditionally, for each $P \in \mathcal{P}_n^{\mathrm{LOCO}}$. $\Box$ Let $F_{n,j}$ denote the empirical cumulative distribution function of $\{ \delta_i(j), i \in \mathcal{I}_{2,n}\}$ and $F_j$ the true cumulative distribution function of $\delta_i(j)$. Thus, setting $\beta_l = l/n$ and $\beta_u = u/n$, we see that $\delta_{(l)}(j) = F_{n,j}^{-1}(\beta_l)$ and $\delta_{(u)}(j) = F_{n,j}^{-1}(\beta_u)$ and, furthermore, that $F_{n,j}(F_{n,j}^{-1}(\beta_l)) = \beta_l$ and $F_{n,j}(F_{n,j}^{-1}(\beta_u)) = \beta_u$.
In particular notice that $\beta_l$ is smaller than $ \frac{1}{2} - \sqrt{\frac{1}{2n}\log\left(\frac{2k}{\alpha}\right)}$ by at most $1/n$ and, similarly, $\beta_u$ is larger than $ \frac{1}{2} + \sqrt{\frac{1}{2n}\log\left(\frac{2k}{\alpha}\right)}$ by at most $1/n$. By assumption, the median $\mu_j = F_j^{-1}(1/2)$ of $\delta_i(j)$ is unique and the derivative of $F_j$ is larger than $M$ at all points within a distance of $\eta$ from $\mu_j$. Thus, by the mean value theorem, we must have that, for all $x \in \mathbb{R}$ such that $| x -\mu_j | < \eta$, $$M |x - \mu_j| \leq | F_j(x) -F_j(\mu_j)|.$$ As a result, if $$\label{eq:M.inverse} | F_j(x) - F_j(\mu_j) | \leq M \eta,$$ it is the case that $|x- \mu_j| \leq \eta$, and therefore, that $| x- \mu_j| \leq \frac{|F_j(x) - F_j(\mu_j)|}{M}$. By the DKW inequality and the union bound, with probability at least $1-1/n$, $$\label{eq:dkw.median} \max_{j \in {\widehat{S}}} \|F_{n,j} - F_j \|_\infty \leq \sqrt{\frac{ \log 2kn}{2n} }.$$ Thus, for any $j \in {\widehat{S}}$, $$\left| F_{n,j}( \delta_{(u)}(j)) - F_j( \delta_{(u)}(j))\right| \leq \sqrt{\frac{ \log 2kn}{2n} }.$$ Since $$F_{n,j}( \delta_{(u)}(j)) = \beta_u \leq 1/2 + \frac{1}{n} + \sqrt{\frac{1}{2n}\log\left(\frac{2k}{\alpha}\right)} = F_j(\mu_j) + \frac{1}{n} + \sqrt{\frac{1}{2n}\log\left(\frac{2k}{\alpha}\right)},$$ using , we conclude that, on the event and provided that $ \frac{1}{n} + \sqrt{\frac{1}{2n}\log\left(\frac{2k}{\alpha}\right)} + \sqrt{ \frac{ \log 2kn}{2n} } \leq \eta M$, $$| \mu_j - \delta_{(u)}(j) | \leq \frac{1}{M} \left( \frac{1}{n} + \sqrt{\frac{1}{2n}\log\left(\frac{2k}{\alpha}\right)} + \sqrt{ \frac{ \log 2kn}{2n} } \right).$$ Similarly, under the same conditions, $$| \mu_j - \delta_{(l)}(j) | \leq \frac{1}{M} \left( \frac{1}{n} + \sqrt{\frac{1}{2n}\log\left(\frac{2k}{\alpha}\right)} + \sqrt{ \frac{ \log 2kn}{2n} } \right).$$ The claim now follows by combining the last two displays.
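As a sanity check on this order-statistic construction, the following minimal sketch (plain Python; `median_ci` and `coverage` are our names, not from the text) builds the interval $[\delta_{(l)}(j), \delta_{(u)}(j)]$ with $l/n$ and $u/n$ placed at $1/2 \mp \sqrt{\log(2k/\alpha)/(2n)}$, as above, for a single coordinate ($k=1$):

```python
import math
import random

def median_ci(samples, alpha=0.05, k=1):
    """Order-statistic confidence interval for the median.

    Uses the l-th and u-th order statistics with l/n and u/n at
    1/2 -/+ sqrt(log(2k/alpha)/(2n)), mirroring beta_l and beta_u
    in the text; k is the number of coordinates covered
    simultaneously (k = 1 for a single median).
    """
    n = len(samples)
    s = sorted(samples)
    h = math.sqrt(math.log(2 * k / alpha) / (2 * n))
    lo = max(math.floor(n * (0.5 - h)), 0)
    hi = min(math.ceil(n * (0.5 + h)), n - 1)
    return s[lo], s[hi]

def coverage(n=400, trials=200, alpha=0.05, seed=0):
    # Fraction of trials in which the interval contains the true
    # median (0 for standard normal data).
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        lo, hi = median_ci([rng.gauss(0, 1) for _ in range(n)], alpha)
        hits += (lo <= 0.0 <= hi)
    return hits / trials
```

For Gaussian data the empirical coverage comfortably exceeds $1-\alpha$, reflecting the conservative DKW-based widening of the quantile indices.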
Notice that the result holds uniformly over all $j \in {\widehat{S}}$ and all distributions satisfying the conditions of the theorem. $\Box$ Appendix 3: Proof of the results in ==================================== [**Proof of Lemma \[lemma::est-accuracy\].**]{} The upper bounds are obvious. The lower bound (\[eq::lower1\]) is from Section 4 in [@sackrowitz1986evaluating]. We now show (\[eq::lower2\]). Let $\hat\beta =g(Y)$ be any estimator where $Y=(Y_1,\ldots, Y_n)$. Given any $Y$ and any $w(Y)$, $\hat\beta$ provides an estimate of $\beta(J)$ where $J= w(Y)$. Let $w_j$ be such that $w_j(Y)=j$. Then define $\hat\beta = ( g(Y,w_1(Y)),\ldots, g(Y,w_D(Y)))$. Let $w_0(Y) = \operatorname*{argmax}_j |\beta(j)-\hat\beta(j)|$. Then $\mathbb{E}[|\hat \beta(J) - \beta(J)|]= \mathbb{E}[||\hat \beta - \beta||_\infty]$. Let $P_0$ be multivariate Normal with mean $(0,\ldots, 0)$ and identity covariance. For $j=1,\ldots, D$ let $P_j$ be multivariate Normal with mean $\mu_j=(0,\ldots,0,a,0,\ldots, 0)$, where $a = \sqrt{ \log D/(16n)}$ appears in the $j^{\mathrm{th}}$ coordinate, and identity covariance. Then $$\begin{aligned} \inf_{\hat\beta}\sup_{w\in {\cal W}_n}\sup_{P\in {\cal P}_n}\mathbb{E}[|\hat \beta(J) - \beta(J)|] &\geq \inf_{\hat\beta}\sup_{P\in M}\mathbb{E}[|\hat \beta(J) - \beta(J)|]\\ &= \inf_{\hat\beta}\sup_{P\in M}\mathbb{E}[||\hat \beta - \beta||_\infty]\end{aligned}$$ where $J= w_0(Y)$ and $M = \{P_0,P_1,\ldots,P_D\}$. It is easy to see that $${\rm KL}(P_0,P_j) \leq \frac{\log D}{16 n}$$ where KL denotes the Kullback-Leibler divergence. Also, $||\mu_j - \mu_k||_\infty \geq a/2$ for each pair. By Theorem 2.5 of [@tsybakov2009introduction], $$\inf_{\hat\beta}\sup_{P\in M}\mathbb{E}[||\hat \beta - \beta ||_\infty] \geq \frac{a}{2}$$ which completes the proof. $\Box$ [**Proof of Lemma \[lemma::contiguity\].**]{} We use a contiguity argument like that in [@leeb2008can]. Let $Z_1,\ldots, Z_D \sim N(0,1)$. Note that $\hat\beta(j) \stackrel{d}{=} \beta(j)+ Z_j/\sqrt{n}$.
Then $$\begin{aligned} \psi_n(\beta) &= \mathbb{P}(\sqrt{n}(\hat\beta(S) - \beta(S))\leq t) = \sum_j \mathbb{P}(\sqrt{n}(\hat\beta(j) - \beta(j))\leq t,\ \hat\beta(j) > \max_{s\neq j}\hat\beta_s)\\ &= \sum_j \mathbb{P}(\max_{s\neq j}\{Z_s + \sqrt{n}(\beta(s)-\beta(j))\} < Z_j < t) = \sum_j \Phi(A_j)\end{aligned}$$ where $\Phi$ is the $D$-dimensional standard Gaussian measure and $$A_j = \Bigl\{ \max_{s\neq j}\{Z_s + \sqrt{n}(\beta(s)-\beta(j))\} < Z_j < t \Bigr\}.$$ Consider the case where $\beta = (0,\ldots, 0)$. Then $$\psi_n(0)= D\, \Phi(\max_{s\neq 1}Z_s < Z_1 < t) \equiv b(0).$$ Next consider $\beta_n = (a/\sqrt{n},0,0,\ldots, 0)$ where $a>0$ is any fixed constant. Then $$\begin{aligned} \psi_n(\beta_n) &= \Phi( (\max_{s\neq 1}Z_s )-a < Z_1 < t)\\ &\ \ \ \ \ + \sum_{j=2}^D \Phi(\max\{Z_1+a,Z_2,\ldots, Z_{j-1},Z_{j+1},\ldots, Z_D\} < Z_j < t)\\ &\equiv b(a).\end{aligned}$$ Suppose that $\hat\psi_n$ is a consistent estimator of $\psi_n$. Then, under $P_0$, $\hat\psi_n \stackrel{P}{\to} b(0)$. Let $P_n = N(\beta_n,I)$ and $P_0 = N(0,I)$. It is easy to see that $P_0^n(A_n)\to 0$ implies that $P_n^n(A_n)\to 0$ so that $P_n$ and $P_0$ are contiguous. So, by Le Cam’s first lemma [see, e.g. @green.book], under $P_n$, we also have that $\hat\psi_n \stackrel{P}{\to} b(0)$. But $b(0)\neq b(a)$, which contradicts the assumed consistency of $\hat\psi_n$. $\Box$ [**Proof of Lemma \[lemma::many-means-bound\].**]{} Let $P_0 = N(\mu_0, \frac{1}{n}I_D)$, where $\mu_0 = 0$, and for $j=1,\ldots,D$ let $P_j = N(\mu_j, \frac{1}{n}I_D)$, where $\mu_j$ is the $D$-dimensional vector with $0$ entries except along the $j^{\mathrm{th}}$ coordinate, which takes the value $\sqrt{c \frac{\log D}{n}}$, where $0 < c < 1$. Consider the mixture $\overline{P} = \frac{1}{D} \sum_{j=1}^D P_j$. Then, letting $\theta_j$ and $\theta_0$ be the largest coordinates of $\mu_j$ and $\mu_0$ respectively, we have that $| \theta_j - \theta_0 |^2 = \frac{c \log D}{n}$ for all $j$.
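For the reader's convenience, the algebra invoked in the next step can be sketched as follows (a sketch under the stated Gaussian model; it uses the standard exponential identity for products of Gaussian likelihood ratios with common covariance $\sigma^2 I_D$, here $\sigma^2 = 1/n$):

```latex
% For P_j = N(\mu_j, \sigma^2 I_D) with common \sigma^2 = 1/n:
%   \int \frac{dP_j \, dP_k}{dP_0}
%     = \exp\left( \frac{\langle \mu_j, \mu_k \rangle}{\sigma^2} \right)
%     = \exp\left( n \langle \mu_j, \mu_k \rangle \right),
% and here \langle \mu_j, \mu_k \rangle = \frac{c \log D}{n} \mathbf{1}\{ j = k \}.  Hence
\chi^2\bigl( \overline{P}, P_0 \bigr)
  = \frac{1}{D^2} \sum_{j,k=1}^{D} e^{\, n \langle \mu_j, \mu_k \rangle} - 1
  = \frac{D(D-1) + D \, e^{\, c \log D}}{D^2} - 1
  = \frac{1}{D} e^{\, c \log D} - \frac{1}{D}.
```

Since $\frac{1}{D} e^{c \log D} = D^{c-1}$, the right-hand side tends to $0$ for $c < 1$, which is exactly what the argument below requires.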
Next, some algebra yields that the $\chi^2$ distance between $P_0$ and the mixture $\overline{P} = \frac{1}{D} \sum_{j=1}^D P_j$ is $\frac{1}{D} e^{ c \log D} - \frac{1}{D}$, which vanishes as $D$ tends to $\infty$. Since this is also an upper bound on the squared total variation distance between $P_0$ and $\overline{P}$, the result follows from an application of Le Cam’s lemma [see, e.g. @tsybakov2009introduction]. $\Box$ Appendix 4: Proof of the results in ==================================== [**Proof of Theorem \[theorem::deltamethod\].**]{} For ease of readability, we will write $G_j$ and $G$ instead of $G_j(\psi)$ and $G(\psi)$, respectively. Throughout the proof, $C$ will indicate a positive number whose value may change from line to line and which depends on $A$ only, but on none of the remaining variables. For each $j \in \{1,\ldots,s\}$, we use a second-order Taylor expansion of $\widehat{\theta}_j$ to obtain that $$\hat\theta_j = \theta_j + G_j^\top(\hat\psi - \psi) + \frac{1}{2n}\delta^\top \Lambda_j \delta, \quad \forall j \in \{1, \ldots s\}$$ where $\delta = \sqrt{n}(\hat\psi - \psi)$ and $\Lambda_j = \int_0^1 H_j( (1-t)\psi + t \hat\psi) dt \in \mathbb{R}^{b \times b}$. Hence, $$\label{eq::taylor} \sqrt{n}(\hat\theta - \theta) = \sqrt{n}(\hat\nu - \nu) + R$$ where $\nu = G\psi$, $\hat\nu = G \hat\psi$ and $R$ is a random vector in $\mathbb{R}^s$ whose $j^{\mathrm{th}}$ coordinate is $$R_j = \frac{1}{2\sqrt{n}} \delta^\top \left[ \int_0^1 H_j( (1-t)\psi + t \hat\psi) dt \right] \delta.$$ By Lemma \[lem:hyper\] below, there exists a constant $C>0$, depending on $A$ only, such that $$\label{eq::CLT} \sup_{P\in {\cal P}_n} \sup_t \Bigl|\mathbb{P}(\sqrt{n}||\hat\nu - \nu||_\infty \leq t) - \mathbb{P}(||Z_n||_\infty \leq t)\Bigr| \leq C \frac{1}{\sqrt{v}} \left( \frac{ \overline{v}^2 b (\log 2bn)^7}{n} \right)^{1/6},$$ where $Z_n \sim N_s(0,\Gamma)$. Now we bound the effect of the remainder $R$ in (\[eq::taylor\]).
First, by assumption (see Equation \[eq:H.and.B\]), we have that, almost everywhere, $$\label{eq:bound.H} \sup_{u \in [0,1]} \| H_j( (1-u)\psi + u \hat\psi) \|_{\mathrm{op}} \leq \overline{H},$$ from which it follows that $$\| R \|_\infty \leq \frac{\overline{H} ||\delta||^2}{2\sqrt{n}},$$ with the inequality holding uniformly in $\mathcal{P}_n$. Next, consider the event $\mathcal{E}_n = \Bigl\{ \frac{\overline{H} ||\delta||^2}{2\sqrt{n}} < \epsilon_n\Bigr\}$ where $$\label{eq:epsilon} \epsilon_n = C \sqrt{\frac{b \overline{v} \overline{H}^2 (\log n)^2}{n}},$$ for a sufficiently large, positive constant $C$ to be specified later. Thus, since $\delta = \sqrt{n} (\hat{\psi}- \psi)$, we have that $$\begin{aligned} \nonumber \mathbb{P}(\mathcal{E}_n^c) &= \mathbb{P}\left( \frac{\overline{H} ||\delta||^2}{2\sqrt{n}} > \epsilon_n\right)\\ \nonumber & = \mathbb{P}\left( ||\hat{\psi} - \psi || > \sqrt{ \frac{2 \epsilon_n}{ \sqrt{n} \overline{H}}}\right)\\ \nonumber & = \mathbb{P}\left( ||\hat{\psi} - \psi || > C \sqrt{ \overline{v} b \frac{\log n}{n} } \right)\\ \label{eq:Ac} & \leq \frac{1}{n}, \end{aligned}$$ where in the third identity we have used the definition of $\epsilon_n$ in and the final inequality follows from the vector Bernstein inequality and by taking the constant $C$ in appropriately large. In fact, the bound on the probability of the event $\mathcal{E}_n^c$ holds uniformly over all $P \in \mathcal{P}_n$.
Next, for any $t > 0$ and uniformly in $P \in \mathcal{P}_n$, $$\begin{aligned} \mathbb{P}( \sqrt{n}||\hat\theta - \theta||_\infty \leq t) &= \mathbb{P}( \sqrt{n}||\hat\theta - \theta||_\infty \leq t,\ \mathcal{E}_n) + \mathbb{P}( \sqrt{n}||\hat\theta - \theta||_\infty \leq t,\ \mathcal{E}_n^c) \nonumber \\ & \leq \mathbb{P}( \sqrt{n}||\hat\nu - \nu||_\infty \leq t+\epsilon_n) + \mathbb{P}(\mathcal{E}_n^c) \nonumber \\ & = \mathbb{P}( ||Z_n||_\infty \leq t+\epsilon_n) + C \frac{1}{\sqrt{v}} \left( \frac{\overline{v}^2 b (\log 2bn)^7}{n} \right)^{1/6} + \mathbb{P}(\mathcal{E}_n^c) \label{eq:sorryrogeryouaretigernow}\end{aligned}$$ where the inequality follows from and the fact that $\| R \|_\infty \leq \epsilon_n $ on the event $\mathcal{E}_n$ and the second identity from the Berry-Esseen bound (\[eq::CLT\]). By the Gaussian anti-concentration inequality of , $$\mathbb{P}( ||Z_n||_\infty \leq t + \epsilon_n )\leq \mathbb{P}( ||Z_n||_\infty \leq t) + \frac{\epsilon_n}{\underline{\sigma}} (\sqrt{2 \log b} +2).$$ Using the previous inequality on the first term of , we obtain that $$\begin{aligned} \mathbb{P}( \sqrt{n}||\hat\theta - \theta||_\infty \leq t)& \leq \mathbb{P}( ||Z_n||_\infty \leq t) + C \left[ \frac{\epsilon_n}{\underline{\sigma}} (\sqrt{2 \log b} +2) + \frac{1}{\sqrt{v}} \left( \frac{\overline{v}^2 b (\log 2bn)^7}{n} \right)^{1/6} \right] + \mathbb{P}(\mathcal{E}_n^c)\\ & \leq \mathbb{P}( ||Z_n||_\infty \leq t) + C \left [\frac{\epsilon_n}{\underline{\sigma}} (\sqrt{2 \log b} +2) + \frac{1}{\sqrt{v}} \left( \frac{\overline{v}^2 b (\log 2bn)^7}{n} \right)^{1/6} \right],\end{aligned}$$ where in the second inequality we have used the fact that $\mathbb{P}(\mathcal{E}^c_n) \leq \frac{1}{n}$ by and have absorbed this lower order term into higher order terms by increasing the value of $C$. 
By a symmetric argument, we have $$\mathbb{P}( \sqrt{n}||\hat\theta - \theta||_\infty \leq t) \geq \mathbb{P}( ||Z_n||_\infty \leq t) -C \left [\frac{\epsilon_n}{\underline{\sigma}} (\sqrt{2 \log b} +2) + \frac{1}{\sqrt{v}} \left( \frac{\overline{v}^2 b (\log 2bn)^7}{n} \right)^{1/6} \right].$$ The result now follows by bounding $\epsilon_n$ as in . $\Box$ The following lemma shows that the linear term $\sqrt{n}(\hat\nu - \nu)$ in has Gaussian-like behavior and is a key ingredient of our results. It is an application of the Berry-Esseen bound due to [@cherno2]. The proof is in . \[lem:hyper\] There exists a constant $C>0$, depending on $A$ only, such that $$\sup_{P\in {\cal P}} \sup_t \Bigl|\mathbb{P}(\sqrt{n}||\hat\nu - \nu||_\infty \leq t) - \mathbb{P}(||Z_n||_\infty \leq t)\Bigr| \leq C \frac{1}{\sqrt{v}} \left( \frac{\overline{v}^2 b (\log 2bn)^7}{n} \right)^{1/6},$$ where $Z_n \sim N_s(0,\Gamma)$. [**Proof of Lemma \[lemma::upsilon\].**]{} Throughout the proof, we set $G = G(\psi)$, where $\psi = \psi(P)$ for some $P \in \mathcal{P}_n$, and $\hat{G} = G(\hat{\psi})$ where $\hat{\psi} = \hat{\psi}(P)$ is the sample average from an i.i.d. sample from $P$. Recall that the matrices $\Gamma$ and $\hat{\Gamma}$ are given in Equations and , respectively. For convenience we will suppress the dependence of $\hat{\Gamma}$ and $\hat{G}$ on $\hat{\psi}$, and of $\Gamma$ and $G$ on $\psi$. Express $\hat\Gamma - \Gamma$ as $$\begin{aligned} (\hat G - G) V G^\top + G V (\hat G - G)^\top + & (\hat G - G) V (\hat G - G)^\top +\\ (\hat G - G)(\hat{V} - V) G^\top + & G (\hat{V} - V) ( \hat{G}- G)^\top + G (\hat V - V)G^\top + (\hat{G} - G) (\hat{V} - V)^\top (\hat{G} - G )^\top.\end{aligned}$$ The first, second and sixth terms are dominant, so it will be enough to compute high-probability bounds for $(\hat G - G) V G^\top$ and $G (\hat V - V)G^\top$. We first bound $(\hat G - G) V G^\top$.
For any $j$ and $l$ in $\{1,\ldots,s\}$ and using the Cauchy-Schwarz inequality, we have that $$\label{eq::aaa} \left| \left( \hat{G}_j - G_j \right) V G_l^\top \right| \leq \lambda_{\max}(V) \| \hat{G}_j - G_j\| B \leq \overline{v} B \| \hat{G}_j - G_j\|,$$ by the definition of $B$ (see Equation \[eq:H.and.B\]), where we recall that $G_j$ denotes the $j^{\mathrm{th}}$ row of $G$. It remains to bound the stochastic term $\max _j \| \hat{G}_j - G_j\|$. Towards that end, we will show that, for some constant $C$ dependent on $A$ only, $$\label{eq:hatGjmGj} \mathbb{P} \left( \max_j \|\hat G_j - G_j\|\leq C \overline{H} \sqrt{b \frac{ \log n}{ n}} \right) \geq 1 - 1/n.$$ Indeed, by a Taylor expansion, $$\begin{aligned} \hat G_j - G_j =(\hat\psi -\psi)^\top \int_0^1 H_j((1-t)\psi +t\hat{\psi})dt \quad \text{for all } j \in \{1,\ldots,s\},\end{aligned}$$ so that $$\max_j \|\hat G_j - G_j \| \leq \| \psi - \hat{\psi} \| \max_j \Big \| \int_0^1 H_j((1-t)\psi +t\hat{\psi})dt \Big\|_{\mathrm{op}}.$$ Since the coordinates of $\hat{\psi}$ are bounded in absolute value by $A$, the bound implies that, for some positive constant $C$ dependent on $A$ only, $ \mathbb{P} \left( \|\hat\psi-\psi\| \leq C \sqrt{b (\log n)/n} \right) \geq 1 - 1/n$, for all $P\in {\cal P}_n^{\mathrm{OLS}}$. We restrict to this event. By convexity of the operator norm $||\cdot ||_{\rm op}$ and our assumption, we have that $$\label{eq:here} \max_j \Biggl|\Biggl|\int_0^1 H_j((1-t)\psi +t \hat{\psi})dt\Biggr|\Biggr|_{\mathrm{op}}\le \overline{H},$$ yielding the bound in . Combined with (\[eq::aaa\]), we conclude that on an event of probability at least $1 - 1/n$, $\max_{j,l} |\hat\Gamma(j,l) - \Gamma(j,l)|\preceq \aleph_n$. This bound holds uniformly over $P \in \mathcal{P}_n$.
As for the other term $G (\hat V - V)G^\top$, we have that, by in , $$\max_{j,l} \left| G_j (\hat V - V)G_l^\top \right| \leq B^2 \| \hat{V} - V \|_{\mathrm{op}} \leq C B^2 \sqrt{ b \overline{v} \frac{ \log b + \log n }{n} },$$ with probability at least $ 1- \frac{1}{n}$, where $C$ depends only on $A$ and we have used the fact that $\max_j \| G_j (\psi(P))\|^2 \leq B^2$ uniformly over $P \in\mathcal{P}_n$. Thus, by a union bound, the claim holds on an event of probability at least $1- \frac{2}{n}$. $\Box$ [**Proof of Theorem \[thm::coverage\].**]{} Let $Z_n \sim N(0,\Gamma)$ and recall that $\hat{Z}_n \sim N(0,\hat{\Gamma})$. Using the triangle inequality, we have that $$\mathbb{P}(\theta \in \hat{C}_n) = \mathbb{P}(\sqrt{n}||\hat\theta - \theta||_\infty \leq \hat{t}_\alpha) \geq \mathbb{P}(||\hat Z_n||_\infty \leq \hat{t}_\alpha) - A_1 - A_2,$$ where $$A_1 = \sup_{t > 0} | \mathbb{P}(\sqrt{n}||\hat\theta - \theta||_\infty \leq t) - \mathbb{P}(||Z_n||_\infty \leq t) |$$ and $$A_2 = \sup_{t > 0} | \mathbb{P}(||Z_n||_\infty \leq t) - \mathbb{P}( ||\hat Z_n||_\infty \leq t)|.$$ Now $$\mathbb{P}(||\hat Z_n||_\infty \leq \hat{t}_\alpha) =\mathbb{E}[\mathbb{P}(||\hat Z_n||_\infty \leq \hat{t}_\alpha | \hat\Gamma)] = 1-\alpha,$$ by the definition of $\hat{t}_\alpha$. implies that $A_1 \leq C ( \Delta_{1,n} + \Delta_{2,n}) $, where $C$ depends on $A$ only. To bound $A_2$, consider the event $\mathcal{E}_n= \{ \max_{j,k} |\widehat{\Gamma} - \Gamma| \leq C \aleph_n\}$, where the constant $C$ is the same as in . Then, by the same Lemma, $\mathbb{P}(\mathcal{E}_n) \geq 1 - 1/n$, uniformly over all $P$ in ${\cal P}_n$. Next, we have that $$A_2 \leq \mathbb{E}\left[ \sup_{t > 0} \left| \mathbb{P}(||Z_n||_\infty \leq t) - \mathbb{P}( ||\hat Z_n||_\infty \leq t|\hat\Gamma)\right|; \mathcal{E}_n \right] + \mathbb{P}(\mathcal{E}_n^c),$$ where $\mathbb{E}[\cdot;\mathcal{E}_n]$ denotes expectation restricted to the event $\mathcal{E}_n$. 
By the Gaussian comparison, the term inside the expected value is bounded by $\Delta_{3,n}$. $\Box$ [**Proof of .**]{} For $j=1,\ldots,s$, let $\gamma_j =\sqrt{\Gamma_{j,j}}$, $\hat{\gamma}_j = \sqrt{ \hat{\Gamma}_{j,j}}$ and $\hat{t}_j = z_{\alpha/(2s)} \hat{\gamma}_j$. We use the same arguments and notation as in the proofs of and . Thus, let $\mathcal{E}_n$ be the event that $ \frac{\overline{H} ||\delta||^2}{2\sqrt{n}} < \epsilon_n$, where $\frac{\overline{H} ||\delta||^2}{2\sqrt{n}}$ is an upper bound on $\| R\|_\infty$, with $R$ the remainder in the Taylor series expansion and $\epsilon_n$ as in . Then, $\mathbb{P}\left( \mathcal{E}_n^c \right) \leq n^{-1}$ (see equation \[eq:Ac\]). Next, for each $t \in \mathbb{R}^{2s}_+$ and any Jacobian matrix $G = G(\psi(P))$, with $P \in \mathcal{P}_n$, let $$\label{eq:polyhedron2} P(G,t) = \left\{ x \in \mathbb{R}^b \colon v_l^\top x \leq t_l , \forall v_l \in \mathcal{V}(G) \right\},$$ where $\mathcal{V}(G)$ is defined in the proof of . Then, for any positive numbers $(t'_1,\ldots,t'_s)$ $$| \sqrt{n}(\hat{\nu}_j - \nu_j ) | \leq t'_j, j=1,\ldots,s \quad \text{if and only if } \quad \sqrt{n} (\hat{\psi} - \psi) \in P(G,t),$$ where the coordinates of $t \in \mathbb{R}^{2s}$ are as follows: for $l=1,\ldots,s$, $t_{2l-1} = t_{2l} = \frac{t'_l}{\|G_l\|}$. Consider now the class of subsets of $\mathbb{R}^b$ of the form specified in , where $t$ ranges over the positive vectors in $\mathbb{R}^{2s}$ and $G$ ranges in $ \{ G(\psi(P)), P \in \mathcal{P}_n\}$. This is a class of polytopes with at most $2s$ faces in $\mathbb{R}^b$.
Thus, using the same arguments as in the proof of , we obtain that $$\label{eq:delta1n.bonf} \sup_{t =(t_1,\ldots,t_s) \in \mathbb{R}^s_+} \left| \mathbb{P}\left(\sqrt{n}|\hat\nu_j - \nu_j| \leq t_j, \forall j\right) - \mathbb{P}\left(|Z_{n,j} | \leq t_j, \forall j\right) \right| \leq C \frac{1}{\sqrt{v}} \left( \frac{\overline{v}^2 b (\log 2bn)^7}{n} \right)^{1/6},$$ for some $C>0$ depending only on $A$, where $Z_n \sim N(0,\Gamma)$. Using the above display, and following the same arguments as in the proof of , we have that $$\begin{aligned} \mathbb{P}( \sqrt{n} |\hat\theta_j - \theta_j| \leq \hat{t}_j, \forall j) &= \mathbb{P}(\sqrt{n}|\hat\theta_j - \theta_j| \leq \hat{t}_j, \forall j;\ \mathcal{E}_n) + \mathbb{P}( \sqrt{n} |\hat\theta_j - \theta_j| \leq \hat{t}_j, \forall j;\ \mathcal{E}_n^c) \\ & \leq \mathbb{P}( \sqrt{n}|\hat\nu_j - \nu_j| \leq \hat{t}_j+\epsilon_n, \forall j) + \mathbb{P}(\mathcal{E}_n^c) \nonumber \\ & \leq \mathbb{P}( |Z_{n,j}| \leq \hat{t}_j+\epsilon_n, \forall j) + C \frac{1}{\sqrt{v}} \left( \frac{\overline{v}^2 b (\log 2bn)^7}{n} \right)^{1/6} + \frac{1}{n} \\ & \leq \mathbb{P}( |Z_{n,j}| \leq \hat{t}_j, \forall j) + C \left[ \frac{\epsilon_n}{\underline{\sigma}} (\sqrt{2 \log b} +2) + \frac{1}{\sqrt{v}} \left( \frac{\overline{v}^2 b (\log 2bn)^7}{n} \right)^{1/6} \right],\end{aligned}$$ where in the second-to-last inequality we have used the fact that $\mathbb{P}(\mathcal{E}^c_n) \leq \frac{1}{n}$ and in the last inequality we have applied the Gaussian anti-concentration inequality in (and have absorbed the term $\frac{1}{n}$ into higher order terms by increasing the value of $C$). 
A similar argument gives $$\mathbb{P}( \sqrt{n} |\hat\theta_j - \theta_j| \leq \hat{t}_j, \forall j)\geq \mathbb{P}( |Z_{n,j}| \leq \hat{t}_j, \forall j) - C \left[ \frac{\epsilon_n}{\underline{\sigma}} (\sqrt{2 \log b} +2) + \frac{1}{\sqrt{v}} \left( \frac{\overline{v}^2 b (\log 2bn)^7}{n} \right)^{1/6} \right].$$ To complete the proof, we will show that $$\label{eq:min} \mathbb{P}( |Z_{n,j}| \leq z_{\alpha/(2s)} \hat{\gamma}_j, \forall j) \geq (1-\alpha) - \frac{1}{n} - \min \left\{ C \Delta_{3,n}, \frac{ \aleph_n z_{\alpha/(2s)}}{(\min_j \gamma_j)^2} \left(\sqrt{ 2 + \log(2s ) } + 2 \right)\right\}.$$ Let $\hat{Z}_n \sim N(0,\hat \Gamma)$. By the Gaussian comparison , $$\mathbb{P}( |Z_{n,j}| \leq z_{\alpha/(2s)} \hat{\gamma}_j, \forall j) \geq \mathbb{P}( |\hat{Z}_{n,j}| \leq z_{\alpha/(2s)} \hat{\gamma}_j, \forall j) - I \geq 1 - \alpha - I$$ where $$I \leq \mathbb{E}\left[ \sup_{t = (t_1,\ldots,t_s) \in \mathbb{R}^s_+} \left| \mathbb{P}(|Z_{n,j} | \leq t_j, \forall j) - \mathbb{P}( |\hat Z_{n,j}| \leq t_j, \forall j |\hat\Gamma)\right|; \mathcal{F}_n \right] + \mathbb{P}(\mathcal{F}_n^c) \leq C \Delta_{3,n} + \frac{1}{n}.$$ In the above expression the constant $C$ is the same as in and $\mathcal{F}_n$ is the event that $\{ \max_{j,k} |\widehat{\Gamma} - \Gamma| \leq C \aleph_n\}$, which has probability at least $ 1- \frac{1}{n}$, again by . This gives the first bound in .
To prove the second bound in we let $\Xi_n = C \frac{\aleph_n}{\min_j \gamma_j}$, where $C$ is the constant in , and then notice that, on the event $\mathcal{F}_n$, $$|\hat\gamma_j - \gamma_j| = \frac{|\hat\gamma_j^2 - \gamma_j^2|} {|\hat\gamma_j + \gamma_j|} \leq \frac{|\hat\gamma_j^2 - \gamma_j^2|} {\gamma_j} \leq \frac{\max_j |\hat\gamma_j^2 - \gamma_j^2|}{\min_j \gamma_j} \leq \Xi_n.$$ Thus, $$\begin{aligned} \mathbb{P}\left( |Z_{n,j}| \leq z_{\alpha/(2s)} \hat{\gamma}_j, \forall j\right) & = \mathbb{P}\left( |Z_{n,j}| \leq z_{\alpha/(2s)} \hat{\gamma}_j, \forall j\right) -\mathbb{P}\left( |Z_{n,j}| \leq z_{\alpha/(2s)} \gamma_j, \forall j\right) + \mathbb{P}\left( |Z_{n,j}| \leq z_{\alpha/(2s)} \gamma_j, \forall j\right) \\ & \geq \mathbb{P} \left( |Z_{n,j}| \leq z_{\alpha/(2s)} \hat{\gamma}_j, \forall j;\mathcal{F}_n\right) -\mathbb{P}\left( |Z_{n,j}| \leq z_{\alpha/(2s)} \gamma_j, \forall j\right) + \mathbb{P}\left( |Z_{n,j}| \leq z_{\alpha/(2s)} \gamma_j, \forall j\right)\\ & \geq \mathbb{P} \left( |Z_{n,j}| \leq z_{\alpha/(2s)} \hat{\gamma}_j, \forall j;\mathcal{F}_n\right) -\mathbb{P}\left( |Z_{n,j}| \leq z_{\alpha/(2s)} \gamma_j, \forall j\right) + (1-\alpha), \end{aligned}$$ where in the last step we have used the union bound. 
Next, $$\mathbb{P} \left( |Z_{n,j}| \leq z_{\alpha/(2s)} \hat{\gamma}_j, \forall j;\mathcal{F}_n\right) \geq \mathbb{P} \left( |Z_{n,j}| \leq z_{\alpha/(2s)} (\gamma_j - \Xi_n), \forall j; \mathcal{F}_n \right) \geq \mathbb{P} \left( |Z_{n,j}| \leq z_{\alpha/(2s)} (\gamma_j - \Xi_n), \forall j \right) - \mathbb{P}\left( \mathcal{F}_n^c \right).$$ Thus, $$\begin{aligned} \mathbb{P}\left( |Z_{n,j}| \leq z_{\alpha/(2s)} \hat{\gamma}_j, \forall j\right) & \geq (1-\alpha)- \mathbb{P}\left( \mathcal{F}_n^c \right)+ \mathbb{P}\left( |Z_{n,j}| \leq z_{\alpha/(2s)} (\gamma_j - \Xi_n), \forall j \right) -\mathbb{P}\left( |Z_{n,j}| \leq z_{\alpha/(2s)} \gamma_j, \forall j\right)\\ & \geq (1-\alpha) - \frac{1}{n} - \frac{ \Xi_n z_{\alpha/(2s)}}{\min_j \gamma_j} \left(\sqrt{ 2 + \log(2s ) } + 2 \right), \end{aligned}$$ since, by the Gaussian anti-concentration inequality of , $$\mathbb{P}\left( |Z_{n,j}| \leq z_{\alpha/(2s)} (\gamma_j - \Xi_n), \forall j \right) -\mathbb{P}\left( |Z_{n,j}| \leq z_{\alpha/(2s)} \gamma_j, \forall j\right) \geq - \frac{ \Xi_n z_{\alpha/(2s)}}{\min_j \gamma_j} \left(\sqrt{ 2 + \log(2s ) } + 2 \right).$$ The result follows by combining all the above bounds and the fact that $\underline{\sigma}^2 = \min_{P \in \mathcal{P}_n} \min_j \Gamma(j,j)$. As usual, we have absorbed any lower order term (namely $\frac{1}{n}$) into higher order ones. $\Box$ [**Proof of .**]{} Let $Z_n \sim N(0,\Gamma)$ where $\Gamma = G V G^\top$ and $\hat Z_n \sim N(0,\hat\Gamma)$ where we recall that $\hat\Gamma = \hat G \hat V \hat G^\top$, $\hat G = G(\hat \psi)$ and $\hat V = n^{-1}\sum_{i=1}^n (W_i - \hat\psi)(W_i - \hat\psi)^\top$. Take $\mathcal{E}_n$ to be the event that $$\left\{ \max_{j,k} |\widehat{\Gamma} - \Gamma| \leq C \aleph_n \right\} \cap \left\{ \| V - \hat{V} \|_{\mathrm{op}} \leq C \daleth_n \right\},$$ where $C$ is the larger of the two constants in and in . 
Then, by and , $\mathbb{P}\left( \mathcal{E}_n \right) \geq 1-2/n$, uniformly over all the distributions in ${\cal P}_n$. By the triangle inequality, $$\label{eq:F.boot} \mathbb{P}(\theta \in \hat{C}^*_n) = \mathbb{P}(\sqrt{n}||\hat\theta - \theta||_\infty \leq \hat{t}^*_\alpha) \geq \mathbb{P}( \sqrt{n}||\hat \theta^* - \hat{\theta}||_\infty \leq \hat{t}^*_\alpha|(W_1,\ldots,W_n)) - (A_1 + A_2 + A_3),$$ where $$\begin{aligned} A_1 & = \sup_{t >0} \left| \mathbb{P}\left( \sqrt{n} \| \hat{\theta} - \theta \|_\infty \leq t \right) - \mathbb{P}( \| Z_n \|_\infty \leq t ) \right|,\\ A_2 & = \sup_{t>0} \left| \mathbb{P}( \| Z_n \|_\infty \leq t ) - \mathbb{P}( \| \hat{Z}_n \|_\infty \leq t ) \right|,\\ \text{and} & \\ A_3 & = \sup_{t >0} \left| \mathbb{P}( \| \hat{Z}_n \|_\infty \leq t ) - \mathbb{P}\left( \sqrt{n} \| \hat{\theta}^* - \hat{\theta} \|_\infty \leq t \Big| (W_1,\ldots,W_n) \right) \right|.\end{aligned}$$ Since, by definition, $\mathbb{P}( \sqrt{n}||\hat \theta^* - \hat{\theta}||_\infty \leq \hat{t}^*_\alpha|(W_1,\ldots,W_n)) \geq 1 - \alpha$, it follows from that, in order to establish (\[eq::boot-cov\]), we need to upper bound each of the terms $A_1$, $A_2$ and $A_3$ accordingly. The term $A_1$ has already been bounded by $C( \Delta_{1,n} + \Delta_{2,n} )$ in the earlier . For $A_2$ we use the Gaussian comparison as in the proof of , restricted to the event $\mathcal{E}_n$, to conclude that $A_2 \leq C \Delta_{3,n} + \frac{2}{n}$. Finally, to bound $A_3$, one can apply the same arguments as in Theorem \[theorem::deltamethod\], but restricted to the event $\mathcal{E}_n$, to the larger class of probability distributions $\mathcal{P}^*_n$, differing from $\mathcal{P}_n$ only in that $v$ is replaced by the smaller quantity $v_n > 0$ and $\overline{v}$ by the larger quantity $\overline{v}_n =\overline{v} + C \daleth_n$. In particular, the bootstrap distribution belongs to $\mathcal{P}^*_n$. 
In detail, one can replace $\psi$ with $\hat\psi$, and $\hat\psi$ with $\hat\psi^*$ and, similarly, $\Gamma$ with $\hat{\Gamma}$ and $\hat{\Gamma}$ with $\hat{\Gamma}^* = G(\hat{\psi}^*) \hat{V}^* G(\hat{\psi}^*)^\top$, where $\hat{V}^*$ is the empirical covariance matrix based on a sample of size $n$ from the bootstrap distribution. The assumption that $n$ is large enough so that $v_n$ and $\sigma^2_n$ are positive ensures that, on the event $\mathcal{E}_n$ of probability at least $1-2/n$, $\min_j \sqrt{\hat{\Gamma}(j,j)} > \sqrt{\underline{\sigma}^2 - C \aleph_n} >0$ and, by Weyl’s inequality, the minimal eigenvalue of $\hat{V}$ is no smaller than $v - C \daleth_n > 0$. In particular, the error terms $\Delta^*_{1,n}$ and $\Delta^*_{2,n}$ are well-defined (i.e. positive). Thus we have that $$\label{eq:A3} A_3 \leq C \left( \Delta^*_{1,n} + \Delta^*_{2,n} \right) + \frac{2}{n},$$ where the lower order term $\frac{2}{n}$ accounts for the restriction to the event $\mathcal{E}_n$. The result now follows by combining all the bounds, after noting that $\Delta_{1,n} \leq \Delta^*_{1,n}$ and $\Delta_{2,n} \leq \Delta^*_{2,n}$. To show that the same bound holds for the coverage of $\tilde{C}^*_\alpha$ we proceed in a similar manner. 
Using the triangle inequality, and uniformly over all the distributions in ${\cal P}_n$, $$\begin{aligned} \mathbb{P}(\theta \in \tilde{C}^*_n) & = \mathbb{P}(\sqrt{n} |\hat\theta_j - \theta_j| \leq \tilde{t}^*_{j,\alpha}, \forall j)\\ & \geq \mathbb{P}\left( \sqrt{n} |\hat \theta^*_j - \hat{\theta}_j | \leq \tilde{t}^*_{j,\alpha}, \forall j \Big| (W_1,\ldots,W_n) \right) - (A_1 + A_2 + A_3)\\ & \geq (1 - \alpha) - (A_1 + A_2 + A_3),\end{aligned}$$ where $$\begin{aligned} A_1 & = \sup_{t = (t_1,\ldots,t_s) \in \mathbb{R}_+^s} \left| \mathbb{P}\left( \sqrt{n} | \hat{\theta}_j- \theta_j | \leq t_j, \forall j \right) - \mathbb{P}( |Z_{n,j}| \leq t_j, \forall j ) \right|,\\ A_2 & = \sup_{t = (t_1,\ldots,t_s) \in \mathbb{R}_+^s } \left| \mathbb{P}( | Z_{n,j} | \leq t_j, \forall j ) - \mathbb{P}( | \hat{Z}_{n,j} | \leq t_j, \forall j ) \right|,\\ \text{and} & \\ A_3 & = \sup_{t = (t_1,\ldots,t_s) \in \mathbb{R}_+^s } \Big| \mathbb{P}( | \hat{Z}_{n,j} | \leq t_j, \forall j) - \mathbb{P}\left( \sqrt{n} | \hat{\theta}_j^* - \hat{\theta}_j | \leq t_j, \forall j \Big| (W_1,\ldots,W_n) \right) \Big|.\end{aligned}$$ The term $A_1$ is bounded by $C (\Delta_{1,n} + \Delta_{2,n})$, as shown in the first part of the proof of . The Gaussian comparison yields that $A_2 \leq C \Delta_{3,n} + \frac{2}{n}$. To bound the term $A_3$, we repeat the arguments used in the first part of the proof of , applied to the larger class $\mathcal{P}_n^*$ and restricting to the event $\mathcal{E}_n$. As argued above, we replace $\psi$ with $\hat\psi$ and $\hat\psi$ with $\hat\psi^*$ and, similarly, $\Gamma$ with $\hat{\Gamma}$ and $\hat{\Gamma}$ with $\hat{\Gamma}^*$. The assumption that $n$ is large enough guarantees that $v_n$ and $\sigma^2_n$ are positive on the event $\mathcal{E}_n$, which has probability at least $1 - \frac{2}{n}$. Thus, the right hand side of serves as an upper bound for the current term $A_3$ as well. The claimed bound then follows. 
$\Box$ Appendix 5: Proofs of Auxiliary Results {#appendix:auxilary} ======================================= [**Proof of .**]{} Let $Z$ be the number of objects that are not selected. Then $\mathbb{E}[Z] = n \left( 1 - \frac{1}{n} \right)^n \leq \frac{n}{e}$. Next, by the bounded difference inequality, $$\mathbb{P}\left(| Z - \mathbb{E}[Z] | \geq t \right) \leq 2 e^{ -\frac{t^2}{2 n}},$$ which implies that $$\mathbb{P}\left( Z > n - d \right) \leq \exp\left\{ - \frac{(n -d -n(1-1/n)^n)^2 }{2n} \right\}.$$ The claim follows immediately, since $n \geq \frac{d}{2}$ and $\left( 1 - \frac{1}{n} \right)^n \leq e^{-1}$ for all $n=1,2,\ldots$. $\Box$ [**Proof of Lemma \[lem:hyper\].**]{} Let $\psi$ be an arbitrary point in $\mathcal{S}_n$ and $G = G(\psi) \in \mathbb{R}^{s \times b }$ be the corresponding Jacobian. Recall that, for $j=1,\ldots,s$ the $j^{\mathrm{th}}$ row of $G$ is the transpose of $G_j = G_j(\psi)$, the gradient of $g_j$ at $\psi$. Let $\mathcal{V} = \mathcal{V}(G) = \left\{ v_1,\ldots,v_{2s} \right\}$, where for $j=1,2,\ldots,s$, we define $v_{2j-1} = \frac{G_j}{\| G_j \|}$ and $v_{2j} = -\frac{G_j}{\| G_j\|}$. For a given $t>0$ and for any Jacobian matrix $G = G(\psi)$, set $$\label{eq:polyhedron} P(G,t) = \left\{ x \in \mathbb{R}^b \colon v_l^\top x \leq t_l , \forall v_l \in \mathcal{V}(G) \right\},$$ where, for $j=1,\ldots,s$, $t_{2j-1} = t_{2j} = \frac{t}{\|G_j\|}$. Recalling that $\widehat{\nu} = G \widehat{\psi}$, we have that $$\left\| \sqrt{n}(\hat{\nu} - \nu ) \right\|_\infty \leq t \quad \text{if and only if } \quad \sqrt{n} (\hat{\psi} - \psi) \in P(G,t).$$ Similarly, if $\tilde{Z}_n \sim N_b(0,V)$ and $Z_n = G \tilde{Z}_n \sim N_s(0,\Gamma)$ $$\| Z_n \|_\infty \leq t\quad \text{if and only if } \quad \tilde{Z}_n \in P(G,t).$$ Now consider the class $\mathcal{A}$ of all subsets of $\mathbb{R}^b$ of the form specified in , where $t$ ranges over the positive reals and $G$ ranges in $ \{ G(\psi(P)), P \in \mathcal{P}\}$. 
Notice that this class consists of polytopes with at most $2s$ facets. Also, from the discussion above, $$\label{eq:simple.convex} \sup_{ P \in \mathcal{P}_n} \sup_{t >0} \left | \mathbb{P}\left( \| \sqrt{n} (\hat{\nu} - \nu) \|_\infty \leq t\right) -\mathbb{P}\left( \| Z_n \|_\infty \leq t \right) \right| = \sup_{A \in \mathcal{A}} \left | \mathbb{P}(\sqrt{n} (\hat{\psi} - \psi) \in A)- \mathbb{P}( \tilde{Z}_n \in A) \right|.$$ The claimed result follows from applying the Berry-Esseen bound for polyhedral classes, in the appendix, due to [@cherno2], to the term on the right hand side of . To that end, we need to ensure that conditions (M1’), (M2’) and (E1’) in that Theorem are satisfied. For each $i=1,\ldots,n$, set $\tilde{W}_i= (\tilde{W}_{i,1}, \ldots, \tilde{W}_{i,2s})= \left( (W_i - \psi)^\top v,v \in \mathcal{V}(G) \right)$. Condition (M1’) holds since, for each $l=1,\ldots,2s$, $$\mathbb{E}\left[ \tilde{W}^2_{i,l} \right] \geq \min_l v_l^\top V v_l \geq \lambda_{\min}(V),$$ where $V = \mathrm{Cov}[W]$. Turning to condition (M2’), we have that, for each $l=1,\ldots,2s$ and $k=1,2$, $$\begin{aligned} \mathbb{E}\left[ | \tilde{W}_{i,l}|^{2+k} \right] & \leq \mathbb{E}\left[ |v_l^\top (W_i - \psi)|^2 \|W_i - \psi\|^{k} \right]\\ & \leq \mathbb{E}\left[ |v_l^\top (W_i - \psi)|^2 \right] \left( 2A \sqrt{b} \right)^k\\ & \leq \overline{v} \left( 2A \sqrt{b} \right)^k,\end{aligned}$$ where the first inequality follows from the bound $| v^\top_l (W_i - \psi) |\leq \| W_i - \psi\|$ (as each $v_l$ is of unit norm), the second from the fact that the coordinates of $W_i$ are bounded in absolute value by $A$, and the third from the fact that $\overline{v}$ is the largest eigenvalue of $V$. Thus we see that by setting $B_n = \overline{v} \left( 2A \sqrt{b} \right)$, condition (M2’) is satisfied (here we have used the fact that $\overline{v} \geq 1$). Finally, condition (E1’) is easily satisfied, possibly by increasing the constant in the term $B_n$. 
Thus, gives $$\sup_{A \in \mathcal{A}} \left | \mathbb{P}(\sqrt{n} (\hat{\psi} - \psi) \in A) - \mathbb{P}( \tilde{Z}_n \in A) \right| \leq C \frac{1}{\sqrt{\lambda_{\min}(V)}} \left( \frac{ \overline{v}^2 b (\log 2bn)^7}{n} \right)^{1/6},$$ and the result follows from , the fact that the choice of $G = G(\psi)$ is arbitrary and the fact that $\lambda_{\rm min}(V(P)) \geq v$ for all $P\in {\cal P}_n$, by assumption. $\Box$ \[lem:operator\] Let $X_1,\ldots,X_n$ be independent, mean-zero vectors in $\mathbb{R}^p$, where $p \leq n$, such that $\max_{i=1,\ldots,n} \|X_i \|_\infty \leq K$ almost surely for some $K>0$, and with common covariance matrix $\Sigma$ satisfying $\lambda_{\max}(\Sigma) \leq U$. Then, there exists a universal constant $C>0$ such that $$\label{eq:vector.bernstein.simple} \mathbb{P}\left( \frac{1}{n} \left\| \sum_{i=1}^n X_i \right\| \leq C K \sqrt{ p \frac{\log n }{n}} \right) \geq 1 - \frac{1}{n}.$$ Letting $\hat{\Sigma} = \frac{1}{n} \sum_{i=1}^n X_i X_i^\top$, if $U \geq \eta > 0$, then there exists a $C>0$, dependent on $\eta$ only, such that $$\label{eq:matrix.bernstein.simple.2} \mathbb{P}\left( \| \widehat{\Sigma} - \Sigma \|_{\mathrm{op}} \leq C K \sqrt{p U \frac{ \log p + \log n}{n} } \right) \geq 1 - \frac{1}{n}.$$ [**Proof of .**]{} Since $\| X_i \| \leq K \sqrt{p}$ and $\mathbb{E}\left[ \| X_i \|^2 \right] \leq U p$ for all $i = 1,\ldots,n$, Proposition 1.2 in [@hsu12] yields that $$\label{eq:vector.bernstein} \mathbb{P}\left( \frac{1}{n} \left\| \sum_{i=1}^n X_i \right\| \leq \sqrt{\frac{U p}{n}} + \sqrt{ 8 \frac{U p}{n} \log n} + \frac{4 K \sqrt{p}}{3 n} \log n \right) \geq 1 - \frac{1}{n}.$$ Equation follows by bounding $\mathbb{E}\left[ \| X_i \|^2 \right]$ with $K^2 p$ instead of $Up$. Next, we prove . We let $\preceq$ denote the positive semi-definite ordering, whereby, for any $p$-dimensional symmetric matrices $A$ and $B$, $A \preceq B$ if and only if $B-A$ is positive semi-definite. 
For each $i =1,\ldots,n$, the triangle inequality and the assumptions in the statement yield the bound $$\left\| X_i X_i^\top - \Sigma\right\|_{\mathrm{op}} \leq \| X_i \|^2 + \lambda_{\max}(\Sigma) \leq K^2 p + U.$$ Similarly, $\| \mathbb{E}\left[ (X_i X_i^\top)^2 \right] - \Sigma^2\|_{\mathrm{op}} \leq K^2 p U$ for each $i = 1,\ldots, n$, since $$\mathbb{E}\left[ (X_i X_i^\top)^2 \right] - \Sigma^2 \preceq \mathbb{E}\left[ \| X_i \|^2 X_i X_i^\top \right] \preceq K^2 p \Sigma \preceq K^2 p U I_{p},$$ with $I_p$ the $p$-dimensional identity matrix. Thus, applying the Matrix Bernstein inequality [see Theorem 1.4 in @Tropp2012], we obtain that $$\label{eq:matrix.bernstein} \mathbb{P}\left( \| \widehat{\Sigma} - \Sigma \|_{\mathrm{op}} \leq \sqrt{ 2 K^2 p U \frac{\log p + \log 2n }{n}} + \frac{2}{3} (K^2 p + U) \frac{\log p + \log 2n }{n}\right)\geq 1 - \frac{1}{n}.$$ The bound follows by choosing $C$ large enough, depending on $\eta$, and using the fact that $p \leq n$. $\Box$ [**Remark.**]{} From , by using the looser bounds $$\left\| X_i X_i^\top - \Sigma\right\|_{\mathrm{op}} \leq 2 K^2 p \quad \text{and} \quad \mathbb{E}\left[ (X_i X_i^\top)^2 \right] - \Sigma^2 \preceq K^4 p^2 I_p,$$ one can obtain directly that $$\label{eq:matrix.bernstein.simple} \mathbb{P}\left( \| \widehat{\Sigma} - \Sigma \|_{\mathrm{op}} \leq C K^2 p \sqrt{\frac{ \log p + \log n}{n} } \right) \geq 1 - \frac{1}{n},$$ for some universal constant $C>0$. Clearly, the scaling in $p$ is worse. [**Proof of Lemma \[lemma::horrible\].**]{} Throughout, we drop the dependence on ${\widehat{S}}$ in our notation and assume without loss of generality that ${\widehat{S}}= \{1,\ldots,k\}$. We refer the reader to [@magnus07] for a comprehensive treatment of matrix calculus techniques. Recall that $\psi = \left[ \begin{array}{c} \sigma \\ \alpha\\ \end{array} \right]$ and $\xi = \left[ \begin{array}{c} w\\ \alpha\\ \end{array} \right]$, where $\sigma =\mathrm{vec}(\Sigma)$ and $w = \mathrm{vec}(\Omega)$. 
The dimension of both $\psi$ and $\xi$ is $b = k^2 + k$. For $1 \leq j \leq k$, let $$\beta_j = g_j(\psi) = e^\top_j \Omega\alpha,$$ where $e_j$ is the $j^{\mathrm{th}}$ element of the standard basis of $\mathbb{R}^k$. Then, we can write $$g_j(\psi) = g(f(\psi)),$$ with $f(\psi) = \xi \in \mathbb{R}^b$ and $g(\xi) = e^\top_j \Omega \alpha \in \mathbb{R}$. Using the chain rule, the derivative of $g_j(\psi)$ is $$D g_j(\psi) = D g(\xi) D f(\psi) = e_j^\top \Big[\left( \alpha^\top \otimes I_k \right) E + \Omega F\Big] \left[ \begin{array}{cc} - \Omega \otimes \Omega & 0 \\ 0 & I_k \end{array} \right],$$ where $$E = \Big[I_{k^2} \;\;\;\;\; 0_{k^2 \times k}\Big] = \frac{d w}{d \psi} \in \mathbb{R}^{k^2 \times b} \quad \text{and} \quad F = \Big[0_{k \times k^2} \;\;\;\;\; I_k\Big] = \frac{d \alpha}{d \psi} \in \mathbb{R}^{ k \times b}.$$ Carrying out the calculations, we have that $$\begin{aligned} \left( \alpha^\top \otimes I_k \right) E \left[ \begin{array}{cc} - \Omega \otimes \Omega & 0 \\ 0 & I_k \end{array} \right] & = \left( \alpha^\top \otimes I_k \right) \Big[I_{k^2} \;\;\;\;\; 0_{k^2 \times k}\Big] \left[ \begin{array}{cc} - \Omega \otimes \Omega & 0 \\ 0 & I_k \end{array} \right] \\ & = \Big[ - \left( \alpha^\top \otimes I_k \right) (\Omega \otimes \Omega) \;\;\;\;\; 0_{k \times k}\Big] \end{aligned}$$ and $$\begin{aligned} \Omega F \left[ \begin{array}{cc} - \Omega \otimes \Omega & 0 \\ 0 & I_k \end{array} \right] & = \Omega \Big[0_{k \times k^2} \;\;\;\;\; I_k\Big] \left[ \begin{array}{cc} - \Omega \otimes \Omega & 0 \\ 0 & I_k \end{array} \right] \\ & = \Omega \Big[ 0_{k \times k^2} \;\;\;\;\; I_k \Big] = \Big[ 0_{k \times k^2} \;\;\;\;\; \Omega \Big] .\end{aligned}$$ Plugging the last two expressions into the initial formula for $D g_j(\psi)$ we obtain that $$\begin{aligned} \nonumber D g_j(\psi) & = e^\top_j \Big( \left[ - \left( \alpha^\top \otimes I_k \right) (\Omega \otimes \Omega) \;\;\;\;\; 0_{k \times k}\right] + \left[ 0_{k \times k^2} \;\;\; 
\Omega \right] \Big)\\ & = \label{eq::gj} e^\top_j \Big( \left[ - \left( \alpha^\top \otimes I_k \right) (\Omega \otimes \Omega) \;\;\;\;\; \Omega\right] \Big). \end{aligned}$$ The gradient of $g_j$ at $\psi$ is just the transpose of $Dg_j(\psi)$. Thus, the Jacobian of the function $g$ is $$\label{eq::GG} \frac{d \beta}{d \psi} = G = \left( \begin{array}{c} G_1^\top\\ \vdots\\ G_k^\top \end{array} \right).$$ Next, we compute $Hg_j (\psi)$, the $b \times b$ Hessian of $g_j$ at $\psi$. Using the chain rule, $$H g_j(\psi) = D (D g_j(\psi)) = (I_b \otimes e^\top_j) \frac{ d \; \mathrm{vec} \Big( \left[ - \left( \alpha^\top \otimes I_k \right) (\Omega \otimes \Omega) \;\;\;\;\; \Omega\right] \Big)}{ d \psi},$$ where the first matrix is of dimension $b \times kb$ and the second matrix is of dimension $kb \times b$. Then, $$\label{eq:H} \frac{ d \; \mathrm{vec} \Big( \left[ - \left( \alpha^\top \otimes I_k \right) (\Omega \otimes \Omega) \;\;\;\;\; \Omega\right] \Big)}{ d \psi} = \left[ \begin{array}{c} -\frac{d \left( \alpha^\top \otimes I_k \right) (\Omega \otimes \Omega) }{d \psi}\\ \;\; \\ \frac{d \Omega}{d \psi} \end{array} \right].$$ The derivative at the bottom of the previous expression is $$\frac{d \Omega}{d \psi} = \frac{d \Omega}{d \Sigma} \frac{d \Sigma}{d \psi} = - (\Omega \otimes \Omega) E = - (\Omega \otimes \Omega) [I_{k^2} \;\;\;\;\; 0_{k^2 \times k}] = \Big[ - (\Omega \otimes \Omega) \;\;\;\;\; 0_{k^2 \times k} \Big].$$ The top derivative in is more involved. 
By the product rule, $$\frac{d \left( \alpha^\top \otimes I_k \right) (\Omega \otimes \Omega) }{d \psi} = \Big( ( \Omega \otimes \Omega) \otimes I_k \Big) \frac{d (\alpha^\top \otimes I_k) }{d \psi} + \Big( I_{k^2} \otimes (\alpha^\top \otimes I_k) \Big) \frac{ d (\Omega \otimes \Omega)}{d \psi}.$$ The first derivative in the last expression is $$\begin{aligned} \frac{d (\alpha^\top \otimes I_k) }{d \psi} & = \frac{d (\alpha^\top \otimes I_k) }{d \alpha} \frac{d \alpha}{d \psi} = (I_k \otimes K_{1,k} \otimes I_k) (I_k \otimes \mathrm{vec}(I_k)) F \\ & = (I_k \otimes \mathrm{vec}(I_k) )F = ( I_k \otimes \mathrm{vec}(I_k)) \Big[0_{k \times k^2} \;\;\;\;\; I_k \Big] \\ & = \Big[0_{k^3 \times k^2} \;\;\;\;\; I_k \otimes \mathrm{vec}(I_k) \Big] ,\end{aligned}$$ where $K_{k,1}$ is the appropriate commutation matrix and the third identity follows since $K_{k,1} = I_k$ and, therefore, $(I_k \otimes K_{1,k} \otimes I_k) = I_{k^3}$. Continuing with the second derivative in , $$\begin{aligned} \frac{ d (\Omega \otimes \Omega)}{d \psi} & = \frac{ d (\Omega \otimes \Omega)}{d \Omega} \frac{d \Omega}{d \Sigma} \frac{d \Sigma}{d \psi} = - J (\Omega \otimes \Omega) E \\ & = - J (\Omega \otimes \Omega) \Big[ I_{k^2} \;\;\;\;\; 0_{k^2 \times k}\Big] = -J \Big[ \Omega \otimes \Omega\ ;\;\;\;\; 0_{k^2 \times k} \Big], \end{aligned}$$ where $$J = \Big[ (I_k \otimes \Omega) \otimes I_{k^2} \Big] \Big( I_k \otimes K_{k,k} \otimes I_k \Big) \Big( I_{k^2} \otimes \mathrm{vec}(I_k) \Big) + \Big[ I_{k^2} \otimes( \Omega \otimes I_k) \Big] \Big( I_k \otimes K_{k,k} \otimes I_k \Big) \Big( \mathrm{vec}(I_k) \otimes I_{k^2} \Big).$$ To see this, notice that, by the product rule, we have $$J = \frac{d (\Omega\otimes \Omega)}{d \Omega} = \frac{d (\Omega \otimes I_k)( I_k \otimes \Omega) } {d \Omega} = \Big[ (I_k \otimes \Omega) \otimes I_{k^2} \Big] \frac{d (\Omega \otimes I_k)}{d \Omega} + \Big[ I_{k^2} \otimes( \Omega \otimes I_k) \Big] \frac{d (I_k \otimes \Omega)}{d \Omega}.$$ Next, $$\frac{d 
(\Omega \otimes I_k)}{d \Omega} = \Big( I_k \otimes K_{k,k} \otimes I_k \Big) \Big( I_{k^2} \otimes \mathrm{vec}(I_k) \Big) = \Big( I_{k^2} \otimes K_{k,k} \Big) \Big(I_k \otimes \mathrm{vec}(I_k) \otimes I_k \Big)$$ and $$\frac{d (I_k \otimes \Omega )}{d \Omega} = \Big( I_k \otimes K_{k,k} \otimes I_k \Big) \Big( \mathrm{vec}(I_k) \otimes I_{k^2} \Big) = \Big( K_{k,k} \otimes I_{k^2} \Big) \Big(I_k \otimes \mathrm{vec}(I_k) \otimes I_k \Big).$$ The formula for $J$ follows from the last three expressions. Notice that $J$ is a matrix of size $k^4 \times k^2$. Finally, plugging the expressions for $\frac{d (\alpha^\top \otimes I_k) (\Omega \otimes \Omega) }{d \psi}$ and $\frac{ d \Omega }{d \psi}$ in we get that the Hessian $H g_j(\psi)$ is $$\label{eq::Hessian} \frac{1}{2}\left( (I_b \otimes e^\top_j) H + H^\top (I_b \otimes e_j) \right)$$ where $$\label{eq:Halcazzo} H = \left[ \begin{array}{c} - \Big( (\Omega \otimes \Omega) \otimes I_k \Big) \Big[0_{k^3 \times k^2} \;\;\;\;\; I_k \otimes \mathrm{vec}(I_k) \Big] + \Big( I_{k^2} \otimes (\alpha^\top \otimes I_k )\Big) J \Big[ \Omega \otimes \Omega \;\;\;\;\; 0_{k^2 \times k}\Big]\\ \;\\ \Big[ - \Omega \otimes \Omega \;\;\;\;\; 0_{k^2 \times k} \Big] \end{array} \right].$$ So far we have ignored the fact that $\Sigma$ is symmetric. Accounting for the symmetry, the Hessian of $g_j(\psi)$ is $$D_h^\top H g_j(\psi) D_h,$$ where $D_h$ is the modified duplication matrix such that $D_h \psi_h = \psi$, with $\psi_h$ the sub-vector of $\psi$ that excludes the entries corresponding to the above-diagonal (or below-diagonal) entries of $\Sigma$. We now prove the bounds and . We will repeatedly use the fact that $\sigma_1(A \otimes B) = \sigma_1(A) \sigma_1(B)$ and, for a vector $x$, $\sigma_1(x) = \|x\|$. For notational convenience, we drop the dependence on $\psi$, since all our bounds hold uniformly over all $\psi \in \mathcal{S}_n$. 
The first bound in on the norm of the gradient of $g_j$ is straightforward: $$\begin{aligned} \nonumber ||G_j|| & \leq ||e_j|| \times \sigma_1\left( \left[ - \left( \alpha^\top \otimes I_k \right) (\Omega \otimes \Omega) \;\;\;\;\; \Omega\right] \right)\\ \nonumber & \leq \Big( ||\alpha||\times \sigma_1(\Omega)^2 + \sigma_1(\Omega) \Big)\\ \label{eq:Gj.constants} & \leq \frac{A^2 \sqrt{ k}}{u^2} + \frac{1}{u}\\ \nonumber & \leq C \frac{\sqrt{k}}{u^2},\end{aligned}$$ since $\sigma_1(\Omega) \leq \frac{1}{u}$, $\| \alpha \| \leq \sqrt{A^2 \mathrm{tr}(\Sigma)} \leq A^2 \sqrt{k}$, and we assume that $k \geq u^2$. Turning to the second bound in , we will bound the largest singular values of the individual terms in . First, for the lower block matrix in , we have that $$\sigma_1([ \Omega\otimes\Omega \;\;\;\;\; 0_{k^2 \times k}]) = \sigma_1( \Omega\otimes\Omega) = \sigma_1^2(\Omega) = 1/u^2.$$ Next, we consider the two matrices in the upper block of . For the first matrix we have that $$\begin{aligned} \label{eq:mammamia} \sigma_1 \Big( (\Omega\otimes\Omega\otimes I_k) \Big[ 0_{k^3 \times k^2 } \;\;\;\; I_k \otimes {\rm vec}(I_k) \Big] \Big)&= \sigma_1 \Big( \Big[ 0_{k^3 \times k^2} \;\;\;\; \Omega \otimes {\rm vec}(\Omega) \Big] \Big)\\ \nonumber & = \sigma_1\left( \Omega \otimes \mathrm{vec}(\Omega)\right) \\ \nonumber &= \sigma_1 ( \Omega) \sigma_1({\rm vec}(\Omega))\\ \nonumber & \leq \frac{\sqrt{k}}{u^2},\end{aligned}$$ since $$\sigma_1({\rm vec}(\Omega)) = ||\Omega||_F = \sqrt{\sum_{i=1}^k \sigma_i^2(\Omega)}\leq \sqrt{k}\sigma_1(\Omega) = \frac{\sqrt{k}}{u}.$$ The identity in is established using the following facts, valid for conformal matrices $A$, $B$, $C$, $D$ and $X$: - $(A \otimes B)(C \otimes D) = AC \otimes BD$, with $A = \Omega$, $B = \Omega \otimes I_k$, $C = I_k$ and $D = \mathrm{vec}(\Omega)$, and - $AXB = C$ is equivalent to $\left( B^\top \otimes A\right) \mathrm{vec}(X) = \mathrm{vec}(C)$, with $B=C = \Omega$ and $X = A = I_k$. 
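The two Kronecker-product facts just listed are standard and easy to verify numerically. The following sketch (ours, not part of the proof) checks them with numpy on arbitrary random matrices; here `vec` stacks columns in column-major order, matching the $\mathrm{vec}(\cdot)$ operator used throughout.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3
A, B, C, D, X = (rng.standard_normal((k, k)) for _ in range(5))

# vec(M): stack the columns of M (column-major / Fortran order)
vec = lambda M: M.reshape(-1, order="F")

# Mixed-product property: (A ⊗ B)(C ⊗ D) = AC ⊗ BD
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# AXB = C is equivalent to (B^T ⊗ A) vec(X) = vec(C)
assert np.allclose(np.kron(B.T, A) @ vec(X), vec(A @ X @ B))
```

Both assertions pass for any conformal matrices, which is exactly what makes these identities safe to use in the singular-value computations above.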
We now bound $\sigma_1 \Big([I_{k^2}\otimes \alpha^\top \otimes I_k] \; J \; [ \Omega\otimes\Omega \;\;\;\; 0_{k^2\times k}] \Big)$, the second matrix in the upper block in . We have that $$\begin{aligned} \sigma_1(J) & \leq 2\sigma_1\Big( (I_k \otimes \Omega \otimes I_{k^2})(I_k \otimes K_{k,k}\otimes I_k)(I_{k^2}\otimes {\rm vec}(I_k)) \Big)\\ &= 2\sigma_1(\Omega)||I_k||_F\\ &= 2\sqrt{k}\sigma_1(\Omega), \end{aligned}$$ since $\sigma_1(K_{k,k}) = 1$. Hence, using the fact that $\sigma_1([I_{k^2}\otimes \alpha^\top \otimes I_k]) = ||\alpha||$, $$\sigma_1 \Big([I_{k^2}\otimes \alpha^\top \otimes I_k] \; J \; [ \Omega\otimes\Omega \;\;\;\; 0_{k^2\times k}] \Big) \leq 2\sqrt{k} ||\alpha|| \sigma_1^3(\Omega) \leq 2 \sqrt{A U} \frac{k}{u^3},$$ since $\| \alpha \| \leq \sqrt{A U k}$. Thus, we have obtained the following bound for the largest singular value of the matrix $H$ in : $$\sigma_1(H)\leq C \Big( \frac{1}{u^2} + \frac{\sqrt{k}}{u^2}+ \frac{k}{u^3} \Big),$$ where $C$ is a positive number depending on $A$ only. Putting all the pieces together, $$\begin{aligned} \sigma_1(H_j) &= \sigma_1\left( \frac{1}{2}((I_b \otimes e_j)H + H^\top (I_b\otimes e_j))\right)\\ & \leq \sigma_1((I_b \otimes e_j)H)\\ & \leq \sigma_1(I_b)\sigma_1(e_j)\sigma_1(H)\\ & \leq C \Big( \frac{1}{u^2} + \frac{\sqrt{k}}{u^2}+ \frac{k}{u^3} \Big).\end{aligned}$$ Whenever $u \leq \sqrt{k}$, the dominant term in the above expression is $\frac{k}{u^3}$. This gives the bound on $\overline{H}$ in (\[eq::B-and-lambda\]). The bound on $\underline{\sigma}$ given in follows from . Indeed, for every $P \in \mathcal{P}^{\mathrm{OLS}}$, $$\min_j \sqrt{ G_j V G_j^\top} \geq \sqrt{v} \min_j \| G_j \|.$$ Then, using , $$\min_j \| G_j \| \geq \min_j \| \Omega_j \| \geq \lambda_{\min}(\Omega) \geq \frac{1 }{ U },$$ where $\Omega_j$ denotes the $j^{\mathrm{th}}$ row of $\Omega$. The final value of the constant $C$ depends only on $A$ and $U$, and since $U \leq A$, the constant can be taken to depend on $A$ only. 
$\Box$ Appendix 6: Anti-concentration and comparison bounds for maxima of Gaussian random vectors and Berry-Esseen bounds for polyhedral sets {#app:high.dim.clt} ====================================================================================================================================== Here we collect some results that are derived from [@chernozhukov2015comparison], [@cherno2] and [@nazarov1807maximal]. However, our statement of the results is slightly different from that in the original papers. The reason for this is that we need to keep track of some constants in the proofs that affect our rates. The following anti-concentration result for the maxima of Gaussian vectors follows from Lemma A.1 in [@cherno2] and relies on a deep result in [@nazarov1807maximal]. \[thm:anti.concentration\] Let $(X_1,\ldots,X_p)$ be a centered Gaussian vector in $\mathbb{R}^p$ with $\sigma_j^2 = \mathbb{E}[X_j^2] > 0$ for all $j=1,\ldots,p$. Moreover, let $\underline{\sigma} = \min_{1 \leq j \leq p} \sigma_j$. Then, for any $y = (y_1,\ldots,y_p) \in \mathbb{R}^p$ and $a > 0$ $$\mathbb{P}( X_j \leq y_j + a, \forall j) - \mathbb{P}( X_j \leq y_j, \forall j) \leq \frac{a}{\underline{\sigma}} \left( \sqrt{2 \log p} + 2 \right).$$ The previous result implies that, for any $a > 0$ and $y = (y_1,\ldots,y_p) \in \mathbb{R}^p_+$, $$\mathbb{P}( |X_j| \leq y_j + a, \forall j) - \mathbb{P}( |X_j| \leq y_j, \forall j) \leq \frac{a}{\underline{\sigma}} \left( \sqrt{2 \log 2p} + 2 \right)$$ and that, for any $y > 0$, $$\mathbb{P}( \max_j |X_j | \leq y + a) - \mathbb{P}(\max_j |X_j| \leq y) \leq \frac{a}{\underline{\sigma}} \left( \sqrt{2 \log 2p} + 2 \right).$$ The following high-dimensional central limit theorem follows from Proposition 2.1 in [@cherno2] and . Notice that we have kept the dependence on the minimal variance explicit. \[thm:high.dim.clt\] Let $X_1,\ldots,X_n$ be independent centered random vectors in $\mathbb{R}^p$. 
Let $S^X_n = \frac{1}{\sqrt{n}} \sum_{i=1}^n X_i$ and, similarly, let $S^Y_n = \frac{1}{\sqrt{n}} \sum_{i=1}^n Y_i$, where $Y_1,\ldots, Y_n$ are independent vectors with $Y_i \sim N_p(0,\mathbb{E}[X_i X_i^\top])$. Let $\mathcal{A}$ be the collection of polyhedra $A$ in $\mathbb{R}^p$ of the form $$A =\left\{ x \in \mathbb{R}^p \colon v^\top x \leq t_v , v \in \mathcal{V}(\mathcal{A}) \right\}$$ where $ \mathcal{V}(\mathcal{A}) \subset \mathbb{R}^p$ is a set of $m$ points of unit norm, with $m \leq (n p)^d$ for some constant $d>0$, and $( t_v \colon v \in \mathcal{V}(\mathcal{A}) )$ is a set of $m$ positive numbers. For each $i=1,\ldots,n$ let $$\tilde{X}_i = (\tilde{X}_{i1},\ldots,\tilde{X}_{im})^\top = \left( v^\top X_i, v \in \mathcal{V}(\mathcal{A}) \right).$$ Assume that the following conditions are satisfied, for some $B_n \geq 1$ and $\underline{\sigma}>0$: 1. $n^{-1} \sum_{i=1}^n \mathbb{E}\left[ \tilde{X}_{ij}^2 \right] \geq \underline{\sigma}^2$, for all $j=1,\ldots, m$; 2. $n^{-1} \sum_{i=1}^n \mathbb{E}\left[ | \tilde{X}_{ij}|^{2+k} \right] \leq B^{k}_n$, for all $j=1,\ldots,m$ and $k=1,2$; 3. $\mathbb{E}\left[ \exp\left( | \tilde{X}_{i,j} | / B_n \right) \right] \leq 2$, for $i=1,\ldots,n$ and $j=1,\ldots,m$. Then, there exists a constant $C>0$ depending only on $d$ such that $$\sup_{A \in \mathcal{A}} \left|\mathbb{P}(S^X_n \in A) - \mathbb{P}(S^Y_n \in A) \right| \leq \frac{C}{\underline{\sigma}} \left( \frac{B_n^2 \log^7(pn) }{n } \right)^{1/6}.$$ Finally, we make frequent use of the following comparison theorem for the maxima of Gaussian vectors. Its proof can be established using arguments from the proof of Theorem 4.1 in [@cherno2] – which itself relies on a modification of Theorem 1 from [@chernozhukov2015comparison] – along with the above anti-concentration bound of . As usual, we have kept the dependence on the minimal variance explicit. 
\[thm:comparisons\] Let $X \sim N_p(0,\Sigma_X)$ and $Y \sim N_p(0,\Sigma_Y)$ with $$\Delta = \max_{j,k} | \Sigma_X(j,k) - \Sigma_Y(j,k)|.$$ Let $\underline{\sigma}^2 = \max\{ \min_j \Sigma_X(j,j) , \min_j \Sigma_Y(j,j) \}$. Then, there exists a universal constant $C>0$ such that $$\sup_{t \in \mathbb{R}^p} \left| \mathbb{P}( X \leq t) - \mathbb{P}( Y \leq t) \right| \leq C \frac{\Delta^{1/3} (2 \log p)^{1/3}}{\underline{\sigma}^{2/3}}.$$ [**Remark.**]{} The above result further implies that $$\sup_{t >0 } \left| \mathbb{P}( \| X \|_\infty \leq t) - \mathbb{P}( \| Y \|_\infty \leq t) \right| \leq 2 C \frac{\Delta^{1/3} (2 \log p)^{1/3}}{\underline{\sigma}^{2/3}},$$ which corresponds to the original formulation of the Gaussian comparison theorem of [@chernozhukov2015comparison]. Appendix 7: The Procedures ========================== ------------------------------------------------------------------------ [Boot-Split]{} [Input]{}: Data ${\cal D} = \{(X_1,Y_1),\ldots, (X_{2n},Y_{2n})\}$. Confidence parameter $\alpha$. Constant $\epsilon$ (Section \[sec:loco.parameters\]).\ [Output]{}: Confidence set $\hat{C}^*_{{\widehat{S}}}$ for $\beta_{{\widehat{S}}}$ and $\hat{D}^*_{{\widehat{S}}}$ for $\gamma_{{\widehat{S}}}$. Randomly split the data into two halves ${\cal D}_{1,n}$ and ${\cal D}_{2,n}$. Use ${\cal D}_{1,n}$ to select a subset of variables ${\widehat{S}}$. This can be forward stepwise, the lasso, or any other method. Let $k= |{\widehat{S}}|$. Write ${\cal D}_{2,n}=\{(X_1,Y_1),\ldots, (X_n,Y_n)\}$. Let $P_n$ be the empirical distribution of ${\cal D}_{2,n}$. For $\beta_{{\widehat{S}}}$: Get $\hat\beta_{{\widehat{S}}}$ from ${\cal D}_{2,n}$ by least squares. Draw $(X_1^*,Y_1^*),\ldots, (X_n^*,Y_n^*) \sim P_n$. Let $\hat\beta^*_{{\widehat{S}}}$ be the estimator constructed from the bootstrap sample. Repeat $B$ times to get $\hat\beta_{{\widehat{S}},1}^*, \ldots,\hat\beta_{{\widehat{S}},B}^*$. 
Define $\hat{t}_\alpha$ by $$\frac{1}{B}\sum_{b=1}^B I\Bigl(\sqrt{n}||\hat\beta_{{\widehat{S}},b}^* - \hat\beta_{{\widehat{S}}}||_\infty > \hat{t}_\alpha\Bigr) = \alpha.$$ Output: $\hat{C}^*_{{\widehat{S}}} = \{ \beta\in\mathbb{R}^k:\ ||\beta-\hat\beta_{{\widehat{S}}}||_\infty\leq \hat{t}_\alpha/\sqrt{n}\}$. For $\gamma_{{\widehat{S}}}$: Get $\hat\beta_{{\widehat{S}}}$ from ${\cal D}_{1,n}$. This can be any estimator. For $j\in \hat{S}$ let $\hat\gamma_{\hat{S}}(j) = \frac{1}{n}\sum_{i=1}^n r_i$ where $r_i = (\delta_i(j) + \epsilon \xi_i(j))$, $\delta_i(j) = |Y_i - \hat\beta_{{\widehat{S}},j}^\top X_i| - |Y_i - \hat\beta_{{\widehat{S}}}^\top X_i|$ and $\xi_i(j)\sim {\rm Unif}(-1,1)$. Let $\hat\gamma_{{\widehat{S}}} = (\hat\gamma_{{\widehat{S}}}(j):\ j\in {\widehat{S}})$. Draw $(X_1^*,Y_1^*),\ldots, (X_n^*,Y_n^*) \sim P_n$. Let $\hat\gamma^*_{{\widehat{S}}}(j) = \frac{1}{n}\sum_{i=1}^n r_i^*$. Let $\hat\gamma_{{\widehat{S}}}^* = (\hat\gamma_{{\widehat{S}}}^*(j):\ j\in {\widehat{S}})$. Repeat $B$ times to get $\hat\gamma_{{\widehat{S}},1}^*, \ldots, \hat\gamma_{{\widehat{S}},B}^*$. Define $\hat{u}_\alpha$ by $$\frac{1}{B}\sum_{b=1}^B I\Bigl(\sqrt{n}||\hat\gamma_{{\widehat{S}},b}^* - \hat\gamma_{{\widehat{S}}}||_\infty > \hat{u}_\alpha\Bigr) = \alpha.$$ Output: $\hat{D}^*_{{\widehat{S}}} = \{ \gamma_{{\widehat{S}}} \in\mathbb{R}^k:\ ||\gamma_{{\widehat{S}}}-\hat\gamma_{{\widehat{S}}}||_\infty\leq \hat{u}_\alpha/\sqrt{n}\}$. ------------------------------------------------------------------------ ------------------------------------------------------------------------ [Normal-Split]{} [Input]{}: Data ${\cal D} = \{(X_1,Y_1),\ldots, (X_{2n},Y_{2n})\}$. Confidence parameter $\alpha$. Threshold and variance parameters $\tau$ and $\epsilon$ (only for $\gamma_{{\widehat{S}}}$).\ [Output]{}: Confidence set $\hat{C}_{{\widehat{S}}}$ for $\beta_{{\widehat{S}}}$ and $\hat{D}_{{\widehat{S}}}$ for $\gamma_{{\widehat{S}}}$. 
Randomly split the data into two halves ${\cal D}_{1,n}$ and ${\cal D}_{2,n}$. Use ${\cal D}_{1,n}$ to select a subset of variables ${\widehat{S}}$. This can be forward stepwise, the lasso, or any other method. Let $k= |{\widehat{S}}|$. For $\beta_{{\widehat{S}}}$: Get $\hat\beta_{{\widehat{S}}}$ from ${\cal D}_{2,n}$ by least squares. Output $\hat{C}_{{\widehat{S}}} = \bigotimes_{j\in {\widehat{S}}} C(j)$ where $C(j) = \hat\beta_{{\widehat{S}}}(j) \pm z_{\alpha/(2k)} \sqrt{\hat\Gamma_n(j,j)}$ where $\hat\Gamma$ is given by (\[eq::Ga\]). For $\gamma_{{\widehat{S}}}$: Get $\hat\beta_{{\widehat{S}}}$ from ${\cal D}_{1,n}$. This can be any estimator. For $j\in {\widehat{S}}$ let $\hat\gamma_{{\widehat{S}}}(j) = \frac{1}{n}\sum_{i=1}^n r_i$ where $r_i = (\delta_i(j) + \epsilon \xi_i(j))$, $\delta_i(j) = \left| Y_i - t_{\tau} \left( \hat\beta_{{\widehat{S}},j}^\top X_i \right) \right| - \left|Y_i - t_{\tau} \left( \hat\beta_{{\widehat{S}}}^\top X_i \right)\right|$ and $\xi_i(j)\sim {\rm Unif}(-1,1)$. Let $\hat\gamma_{{\widehat{S}}} = (\hat\gamma_{{\widehat{S}}}(j):\ j\in {\widehat{S}})$. Output $\hat{D}_{{\widehat{S}}} = \bigotimes_{j\in {\widehat{S}}} D(j)$ where $D(j) = \hat\gamma_{{\widehat{S}}}(j) \pm z_{\alpha/(2k)}\hat{\Sigma}(j,j)$, with $\hat{\Sigma}(j,j)$ given by . ------------------------------------------------------------------------ ------------------------------------------------------------------------ [Median-Split]{} [Input]{}: Data ${\cal D} = \{(X_1,Y_1),\ldots, (X_{2n},Y_{2n})\}$. Confidence parameter $\alpha$.\ [Output]{}: Confidence set $\hat{E}_{{\widehat{S}}}$. Randomly split the data into two halves ${\cal D}_{1,n}$ and ${\cal D}_{2,n}$. Use ${\cal D}_{1,n}$ to select a subset of variables ${\widehat{S}}$. This can be forward stepwise, the lasso, or any other method. Let $k= |{\widehat{S}}|$. Write ${\cal D}_{2,n}=\{(X_1,Y_1),\ldots, (X_n,Y_n)\}$. 
For $(X_i,Y_i)\in {\cal D}_{2,n}$ let $$W_i(j) = |Y_i-\hat\beta_{{\widehat{S}},j}^\top X_i| - |Y_i-\hat\beta_{{\widehat{S}}}^\top X_i|.$$ Let $W_{(1)}(j) \leq \cdots \leq W_{(n)}(j)$ be the order statistics and let $E(j) = [W_{(n-k_1+1)}(j),W_{(n-k_2)}(j)]$ where $$k_1 = \frac{n}{2} + \sqrt{n \log\left( \frac{2k}{\alpha}\right)},\ \ \ k_2 = \frac{n}{2} - \sqrt{n \log\left( \frac{2k}{\alpha}\right)}.$$ Let $\hat{E}_{{\widehat{S}}} = \bigotimes_{j\in {\widehat{S}}} E(j)$. ------------------------------------------------------------------------ [^1]: For simplicity, we assume that the data are split into two parts of equal size. The problem of determining the optimal size of the split is not considered in this paper. Some results on this issue are contained in [@shao1993linear].
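As a concrete illustration, the Boot-Split construction for $\beta_{{\widehat{S}}}$ can be sketched numerically. This is a minimal sketch assuming NumPy; the function name `boot_split_beta` and the marginal-correlation screening rule (standing in for the lasso or forward stepwise selection on ${\cal D}_{1,n}$) are illustrative choices, not part of the original procedure.

```python
import numpy as np

def boot_split_beta(X, Y, k=3, B=200, alpha=0.1, seed=0):
    """Sketch of Boot-Split for beta_S: select variables on the first half,
    fit and bootstrap on the second half, return the sup-norm radius."""
    rng = np.random.default_rng(seed)
    n = X.shape[0] // 2
    X1, Y1 = X[:n], Y[:n]                  # D_{1,n}: selection half
    X2, Y2 = X[n:], Y[n:]                  # D_{2,n}: inference half

    # Selection step (stand-in for the lasso / forward stepwise):
    # keep the k features with the largest marginal correlation with Y.
    S = np.sort(np.argsort(np.abs(X1.T @ Y1))[-k:])

    Xs = X2[:, S]
    beta_hat = np.linalg.lstsq(Xs, Y2, rcond=None)[0]

    # Bootstrap the distribution of sqrt(n) * ||beta* - beta_hat||_inf.
    dev = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)
        beta_star = np.linalg.lstsq(Xs[idx], Y2[idx], rcond=None)[0]
        dev[b] = np.sqrt(n) * np.max(np.abs(beta_star - beta_hat))
    t_hat = np.quantile(dev, 1.0 - alpha)

    # C*_S is the sup-norm ball of radius t_hat/sqrt(n) around beta_hat.
    return S, beta_hat, t_hat / np.sqrt(n)
```

The returned set is the box $\{\beta:\ ||\beta - \hat\beta_{{\widehat{S}}}||_\infty \leq \hat{t}_\alpha/\sqrt{n}\}$, so its coverage is simultaneous over the selected coordinates.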
--- abstract: 'The thermodynamic entropy of an isolated system is given by its von Neumann entropy. Over the last few years, there has been intense activity to understand thermodynamic entropy from the principles of quantum mechanics. More specifically, is the (von Neumann) entropy of entanglement between a system and some (separate) environment related to the thermodynamic entropy? It is difficult to obtain this relation for many-body systems, hence most of the work in the literature has focused on systems with a small number of degrees of freedom. In this work, we consider black holes, which are simple yet macroscopic systems, for which a direct connection between the entropy of entanglement and the Hawking temperature has not previously been made. Within the adiabatic approximation, we explicitly show that the Hawking temperature is indeed given by the rate of change of the entropy of entanglement across a black hole''s horizon with respect to the system energy. This provides further numerical evidence for understanding the key features of black hole thermodynamics from the viewpoint of quantum information theory.' author: - 'S. Santhosh Kumar' - 'S. Shankaranarayanan' title: Quantum entanglement and Hawking temperature --- Introduction ============ Equilibrium statistical mechanics provides a successful description of the thermodynamic properties of matter. More importantly, it relates entropy, a phenomenological quantity in thermodynamics, to the volume of a certain region in phase space [@Wehrl1978-RMP]. The laws of thermodynamics are equally applicable to quantum mechanical systems. A lot of progress has been made recently in studying cold trapped atoms that are largely isolated from their surroundings [@weiss2006-nature; @gross2008-nature; @smith2013-njp; @yukalov2007-LPL].
Furthermore, the availability of Feshbach resonances makes it possible to control the strength of interactions, to realize strongly correlated systems, and to drive these systems between different quantum phases in a controlled manner [@Osterloh2002; @wu2004; @rey2010-njp; @santos2010-njp]. These experiments have raised the possibility of understanding the emergence of thermodynamics from the principles of quantum mechanics. The fundamental questions that one hopes to answer from these investigations are: How do the macroscopic laws of thermodynamics emerge from reversible quantum dynamics? How does a closed quantum system thermalize? What are the relations between information, thermodynamics and quantum mechanics [@2006-Lloyd-NPhys; @2008-Brandao; @horodecki-2008; @popescu97; @vedral98; @plenio98]? While answers to these questions for many-body systems are out of reach, important progress has been made by considering simple lattice systems (see, for instance, Refs. [@1994-srednicki; @rigol2008-nature; @2012-srednicki; @rahul2015-ARCMP]). In this work, in an attempt to address some of the above questions, our focus is on another simple, yet macroscopic, system: black holes. It has long been conjectured that a black hole's thermodynamic entropy is given by its entropy of entanglement across the horizon [@bombelli86; @srednicki93; @eisert2005; @shanki2006; @shanki-review; @solodukhin2011; @shanki2013]. However, this entropy has never been directly related to the Hawking temperature [@hawking75]. Here we show that: (i) the Hawking temperature is given by the rate of change of the entropy of entanglement across a black hole's horizon with respect to the system energy; (ii) the information lost across the horizon is related to the black hole entropy, and the laws of black hole mechanics emerge from entanglement across the horizon.
The model we consider is complementary to other models that investigate the emergence of thermodynamics [@2006-Lloyd-NPhys; @2008-Brandao; @horodecki-2008; @popescu97; @vedral98; @plenio98]: First, we evaluate the entanglement entropy for relativistic free scalar fields propagating in a black-hole background, while the simple lattice models considered previously are non-relativistic. Second, quantum entanglement can be unambiguously quantified only for bipartite systems [@horodecki2009; @eisert2010]. While a bipartite division is an approximation for applications to many-body systems, here the event horizon provides a natural boundary. A relativistic free scalar field is, as always, the simplest model for which to evaluate the entanglement. However, even for free fields it is difficult to obtain the entanglement entropy: the states of free fields are Gaussian and are entirely characterized by the covariance matrix, and it is generally difficult to handle covariance matrices in an infinite-dimensional Hilbert space [@eisert2010]. There are two ways to calculate entanglement entropy in the literature. One approach is the replica trick, which rests on evaluating the partition function on an $n$-fold cover of the background geometry, where a cut is introduced throughout the exterior of the entangling surface [@eisert2010; @cardy2004]. The second is a [*direct approach*]{}, in which the Hamiltonian of the field is discretized and the reduced density matrix is evaluated in real space. We adopt the latter approach, as the entanglement entropy may have more symmetries than the Lagrangian of the system [@krishnand2014]. To remove spurious effects due to the coordinate singularity at the horizon[^1], we use Lemaître coordinates, which are explicitly time-dependent.
One of the features that we exploit in our computation is that, for a fixed Lemaître time coordinate, the Hamiltonian of the scalar field in Schwarzschild space-time reduces to the scalar field Hamiltonian in flat space-time [@shanki-review]. The procedure we adopt is the following: (i) we perturbatively evolve the Hamiltonian about the fixed Lemaître time; (ii) we obtain the entanglement entropy at different times. We show that at all times the entanglement entropy satisfies the area law, i.e. $S(\epsilon) = C(\epsilon) A$, where $S(\epsilon)$ is the entanglement entropy evaluated at a given Lemaître time $\epsilon$, $C(\epsilon)$ is a proportionality constant that depends on $\epsilon$, and $A$ is the area of the black hole horizon. In other words, the value of the entropy is different at different times. (iii) We calculate the change in entropy as a function of $\epsilon$, i.e., $\Delta S/\Delta \epsilon$, and similarly the change in energy $E(\epsilon)$, i.e., $\Delta E/\Delta \epsilon$. For several black-hole metrics, we explicitly show that the ratio of the rate of change of energy to the rate of change of entropy is identical to the Hawking temperature. The outline of the paper is as follows: In Sec. (\[sec.1\]), we set up our model Hamiltonian to obtain the entanglement entropy in ($D+2$)-dimensional space-time. We also define the [*entanglement temperature*]{}, which has the same structure as the temperature of statistical mechanics: the ratio of the change in total energy to the change in entanglement entropy. In Sec. (\[sec.2\]), we numerically show that, for different black hole space-times, the divergence-free [*entanglement temperature*]{} matches approximately the Hawking temperature obtained from the general theory of relativity and its Lovelock generalization. This provides strong evidence for the interpretation of entanglement entropy as the Bekenstein-Hawking entropy. Finally, in Sec.
(\[sec.3\]), we conclude with a discussion connecting our analysis with the eigenstate thermalization hypothesis for closed quantum systems [@2012-srednicki]. Throughout this work, the metric signature we adopt is $(+,-,-,-)$ and we set $\hbar=k_{B}=c=1$. Model and Setup {#sec.1} =============== Motivation ---------- Before evaluating the entanglement entropy (EE) of a quantum scalar field propagating in a black-hole background, we briefly discuss the motivation for studying the entanglement entropy of a scalar field. Consider the Einstein-Hilbert action with a positive cosmological constant ($|\Lambda|$): \[eq:EHAction\] $$S_{_{EH}}(\bar{g}) = M_{_{Pl}}^2 \int d^4x \, \sqrt{-\bar{g}}\left(\bar{R} - 2|\Lambda|\right).$$ Perturbing the above action w.r.t. the metric, $\bar{g}_{\mu\nu} = g_{\mu\nu} + h_{\mu\nu}$, the action up to second order becomes [@shanki-review]: $$S_{_{EH}}(g, h) = -\frac{1}{2} \int d^4x \, \sqrt{-g}\left[ \nabla_{\mu} h_{\alpha\beta} \nabla^{\mu} h^{\alpha\beta} - \Lambda \, h_{\alpha\beta} h^{\alpha\beta}\right].$$ The above action corresponds to a massive ($\Lambda$) spin-2 field ($h_{\mu\nu}$) propagating in the background metric $g_{\mu\nu}$. Rewriting $h_{\mu\nu} = M_{_{\rm Pl}}^{-1} \epsilon_{\mu\nu} \Phi(x^{\mu})$ \[where $\epsilon_{\mu\nu}$ is the constant polarization tensor\], the above action can be written as $$S_{_{EH}}(g, h) = -\frac{1}{2} \int d^4x \, \sqrt{-g}\left[ \nabla_{\mu}\Phi \nabla^{\mu}\Phi - \Lambda \Phi^2\right],$$ which is the action for a massive scalar field propagating in the background metric $g_{\mu\nu}$. In this work, we consider a massless ($\Lambda = 0$, corresponding to asymptotically flat space-time) scalar field propagating in a $(D + 2)$-dimensional spherically symmetric space-time.
Model ----- The canonical action for a massless, real scalar field $\Phi(x^{\mu})$ propagating in $(D + 2)$-dimensional space-time is \[equ21\] $${\cal S} = \frac{1}{2}\int d^{D+2}{\bf x} \, \sqrt{-g}\; g^{\mu\nu} \partial_{\mu}\Phi({\bf x}) \, \partial_{\nu}\Phi({\bf x})$$ where $g_{\mu\nu}$ is the spherically symmetric Lemaître line-element: \[equ22\] $$ds^2 = d\tau^2 - \left(1 - f[r(\tau,\xi)]\right) d\xi^2 - r^2(\tau,\xi) \, d\Omega^2_D$$ where $\tau,\xi$ are the time and radial coordinates in Lemaître coordinates, respectively, $r$ is the radial distance in Schwarzschild coordinates and $d\Omega_D$ is the $D$-dimensional angular line-element. In order for the line-element (\[equ22\]) to describe a black hole, the space-time must contain a singularity (say at $r = 0$) and have horizons. We assume that the asymptotically flat space-time contains one non-degenerate event horizon at $r_h$. The specific form of $f(r)$ corresponds to different space-times. The Lemaître coordinate system has the following interesting properties: (i) The coordinate $\tau$ is time-like and $\xi$ is space-like all across $0< r < \infty$. (ii) The Lemaître coordinate system does not have a coordinate singularity at the horizon. (iii) This coordinate system is time-dependent; test particles at rest relative to the reference system are particles moving freely in the given field. (iv) A scalar field propagating in this coordinate system is explicitly time-dependent. The spherical symmetry of the line-element (\[equ22\]) allows us to decompose the normal modes of the scalar field as: \[equ23\] $$\Phi({\bf x}) = \sum_{l,m_i} \Phi_{lm_i}(\tau,\xi) \, Z_{_{lm_i}}(\theta,\phi_i),$$ where $i \in \{1,2,\ldots, D-1\}$ and the $Z_{_{lm_i}}$'s are the real hyper-spherical harmonics. We define the following dimensionless parameters: $\tilde{r}=r/r_h$, $\tilde{\xi}=\xi/r_h$, $\tilde{\tau}=\tau/r_h$, $\tilde{\Phi}_{lm}=r_h \, \Phi_{lm}$. Substituting the decomposition (\[equ23\]) and using the orthogonality properties of the $Z_{_{lm_i}}$, the canonical massless scalar field action becomes \[equ26\] $${\cal S} = \frac{1}{2}\sum_{l,m_i} \int d\tilde\tau \, d\tilde\xi \; \tilde{r}^D \sqrt{1-f(\tilde r)} \left[ (\partial_{\tilde\tau} \tilde\Phi_{lm_i})^2 - \frac{(\partial_{\tilde\xi} \tilde\Phi_{lm_i})^2}{1-f(\tilde r)} - \frac{l(l+D-1)}{\tilde{r}^2}\tilde\Phi_{lm_i}^2 \right].$$ The above action contains non-linear time dependence through $f(\tilde r)$. Hence, the Hamiltonian obtained from the above action will have non-linear time dependence. While the full non-linear time dependence is necessary to understand small black holes, for large black holes it is sufficient to linearize the above action by fixing the time-slice and performing the following infinitesimal transformation about a particular Lemaître time $\tilde\tau$ [@toms]. More specifically, \[equ27\] $$\tilde\tau' = \tilde\tau + \epsilon, \quad \tilde\xi' = \tilde\xi, \quad \tilde{r}(\tilde\tau',\tilde\xi') = \tilde{r}(\tilde\tau + \epsilon, \tilde\xi), \quad \tilde\Phi_{lm_i}(\tilde\tau,\tilde\xi) \to \tilde\Phi'_{lm_i}(\tilde\tau',\tilde\xi) = \tilde\Phi_{lm_i}(\tilde\tau,\tilde\xi)$$ where $\epsilon$ is the infinitesimal Lemaître time. The functional expansion of $f(\tilde r)$ in $\epsilon$ and the following relation between the Lemaître coordinates, $$\tilde\xi - \tilde\tau = \int \frac{d\tilde r}{\sqrt{1-f(\tilde r)}},$$ allow us to perform the perturbative expansion of the above action. After performing the Legendre transformation, the Hamiltonian up to second order in $\epsilon$ is \[Hamilt\_1\] $$H(\epsilon) \simeq H_{_0} + \epsilon V_{_1} + \epsilon^2 V_{_2}$$ where $H_{_0}$ is the unperturbed scalar field Hamiltonian in flat space-time, and $V_{_1}$ and $V_{_2}$ are the perturbed parts of the Hamiltonian (for details, see Appendix \[app1\]). Physically, the above infinitesimal transformations (\[equ27\]) correspond to perturbatively expanding the scalar field about a particular Lemaître time. Important observations ---------------------- The Hamiltonian in Eq. (\[Hamilt\_1\]) is the key equation, regarding which we would like to stress the following points: First, in the limit $\epsilon \to 0$, the Hamiltonian reduces to that of a free scalar field propagating in flat space-time [@shanki-review]. In other words, the zeroth order Hamiltonian is identical for all the space-times. Higher order terms in $\epsilon$ contain information about the global space-time structure and, more importantly, the horizon properties.
Second, the Lemaître coordinate system is intrinsically time-dependent; the $\epsilon$ expansion of the Hamiltonian corresponds to a perturbation about the chosen Lemaître time. Here, we assume that the Hamiltonian $H$ undergoes adiabatic evolution and that $\Psi_{GS}$ is the instantaneous ground state at all Lemaître times. This assumption is valid for large black holes, for which Hawking evaporation is not significant. Also, since the line-element is time-asymmetric, the vacuum state is the Unruh vacuum. Evaluating the entanglement entropy for different values of $\epsilon$ corresponds to different values of the Lemaître time. As we will show explicitly in the next section, the entanglement entropy at a given $\epsilon$ satisfies the area law \[$S(\epsilon) \propto A$\] and the proportionality constant depends on $\epsilon$, i.e. $S(\epsilon) = C(\epsilon) A$. Third, it is not possible to obtain a closed-form analytic expression for the density matrix (tracing out the quantum degrees of freedom associated with the scalar field inside a spherical region of radius $r_h$) and hence we need to resort to numerical methods. In order to do that, we take a spatially uniform radial grid, $\{ r_j\}$, with $b = r_{j + 1} - r_j$, and discretize the Hamiltonian $H$ in Eq. (\[Hamilt\_1\]). The procedure to obtain the entanglement entropy for different $\epsilon$ is similar to the one discussed in Refs. [@srednicki93; @shanki-review]. In this work, we assume that the quantum state corresponding to the discretized Hamiltonian is the ground state, with wave-function $\Psi_{GS}(x_1,\ldots,x_n;y_1,\ldots,y_{N- n})$. The reduced density matrix $\rho(\vec y,{\vec y\,}')$ is obtained by tracing over the first $n$ of the $N$ oscillators: $$\rho(\vec y,{\vec y\,}') = \int \left(\prod_{i =1}^{n} dx_i\right) \Psi_{GS}(x_1,\ldots,x_n;\vec y\,) \, \Psi^{*}_{GS}(x_1,\ldots,x_n;{\vec y\,}').$$ Fourth, in this work we use the von Neumann entropy \[renyi\] $$S(\rho) = -{\rm tr}\left(\rho \ln \rho \right)$$ as the measure of entanglement.
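For Gaussian ground states, the trace over the first $n$ oscillators can be carried out in closed form, following the method of Refs. [@bombelli86; @srednicki93]. Below is a minimal numerical sketch assuming NumPy; the function name is illustrative, and the nearest-neighbour coupling matrix used in testing stands in for the actual discretized Hamiltonian of Eq. (\[Hamilt\_1\]).

```python
import numpy as np

def entanglement_entropy(K, n):
    """Ground-state entanglement entropy of the first n of N coupled
    oscillators, H = (1/2) p.p + (1/2) x.K.x, following Srednicki's
    method: psi_GS ~ exp(-x.Omega.x / 2) with Omega = K^{1/2}."""
    w, U = np.linalg.eigh(K)               # K symmetric positive definite
    Omega = (U * np.sqrt(w)) @ U.T         # matrix square root of K
    A = Omega[:n, :n]                      # block of the traced-out oscillators
    B = Omega[:n, n:]
    C = Omega[n:, n:]                      # block of the retained oscillators
    beta = 0.5 * B.T @ np.linalg.solve(A, B)
    gamma = C - beta
    # Eigenvalues of gamma^{-1} beta give the entanglement spectrum.
    lam = np.linalg.eigvals(np.linalg.solve(gamma, beta)).real
    S = 0.0
    for l in lam:
        if l <= 0.0:                       # guard against numerical noise
            continue
        xi = l / (1.0 + np.sqrt(max(1.0 - l * l, 0.0)))
        S += -np.log(1.0 - xi) - (xi / (1.0 - xi)) * np.log(xi)
    return S
```

Since the global state is pure, the entropy of a block equals that of its complement, which provides a simple consistency check on the implementation.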
In analogy with the microcanonical ensemble picture of equilibrium statistical mechanics, evaluating the Hamiltonian $H$ at different infinitesimal Lemaître times $\epsilon$ corresponds to setting the system at different internal energies. In this analogy we define the [*entanglement temperature*]{} [@sakaguchi89]: \[temp\_1\] $$T_{EE} = \frac{\Delta E}{\Delta S} = \frac{\Delta E/\Delta\epsilon}{\Delta S/\Delta\epsilon}.$$ The above definition is consistent with the statistical mechanical definition of temperature, where the temperature is obtained from the change in the entropy with respect to the change in the energy. In our case, the entanglement entropy and the energy depend on the Lemaître time, so we evaluate the changes in the entanglement entropy and energy w.r.t. $\epsilon$. In other words, we calculate the change in the ground state energy and in the EE for different values of $\epsilon$ and take the ratio of the two. As we will show in the next section, the EE and the energy grow linearly with $\epsilon$ and hence the temperature does not depend on $\epsilon$. While the EE and the energy individually diverge, their ratio is a non-divergent quantity: a dimensional analysis shows that, in the thermodynamic limit (setting $L$ finite with $N \to \infty$ and $b \to 0$), $T_{EE}$ in Eq. (\[temp\_1\]) is finite and independent of $\epsilon$. For large $N$, we show that, in natural units, the temperature calculated this way is identical to the Hawking temperature of the corresponding black hole [@hawking75]: \[eq:HawkingTemp\] $$T_{BH} = \frac{\kappa}{2\pi}, \qquad \kappa = \frac{1}{2}\left.\frac{df(r)}{dr}\right|_{_{r= r_h}}.$$ Fifth, it is important to note that the above [*entanglement temperature*]{} is non-zero only for $f(r) \neq 1$. In the case of flat space-time, our analysis shows that the [*entanglement temperature*]{} vanishes; below, we obtain $T_{EE}$ numerically for different black hole space-times.
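The ratio in Eq. (\[temp\_1\]) is what the numerics estimate from the sampled values of $E(\epsilon)$ and $S(\epsilon)$. The following is a small sketch of that estimate, assuming NumPy and using the same central difference scheme as in Appendix \[app2\]; the function name is illustrative, and synthetic linear data stand in for the computed energies and entropies.

```python
import numpy as np

def entanglement_temperature(eps, E, S):
    """Estimate T_EE = (Delta E / Delta eps) / (Delta S / Delta eps) by
    central differences at the interior points of a uniform eps grid."""
    h = eps[1] - eps[0]
    dE = (E[2:] - E[:-2]) / (2.0 * h)      # dE/d(eps), O(h^2) accurate
    dS = (S[2:] - S[:-2]) / (2.0 * h)      # dS/d(eps)
    return float(np.mean(dE / dS))
```

Because both $E$ and $S$ grow linearly with $\epsilon$, the central differences are constant across the grid and the estimate is independent of $\epsilon$, mirroring the behaviour described above.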
Results and Discussions {#sec.2} ======================= The Hamiltonian $H$ in Eq. (\[Hamilt\_1\]) is mapped to a system of $N$ coupled time-independent harmonic oscillators (HOs) with non-periodic boundary conditions. The interaction matrix elements of the Hamiltonian can be found in Ref. [@dropbox]. The total internal energy ($E$) and the entanglement entropy ($S$) for the ground state of the HOs are computed numerically as functions of $\epsilon$ using a central difference scheme (see Appendix \[app2\]). All the computations were done using MATLAB R$2012$a for the lattice size $N = 600$, $10 \leq n \leq 500$, with a minimum accuracy of $10^{-8}$ and a maximum accuracy of $10^{-12}$. In the following subsections, we compute $T_{EE}$ numerically for two different black hole space-times, namely the 4-dimensional Schwarzschild and Reissner-Nordström black holes, and show that the results match the Hawking temperature $T_{BH}$. $T_{EE}$ is calculated by averaging the [*entanglement temperature*]{} over the values of $n$ at fixed $N$. Schwarzschild (SBH) black holes ------------------------------- The 4-dimensional Schwarzschild black hole space-time (set $D=2$) in dimensionless units $\tilde r$ is given by the line element in Eq. (\[equ22\]) with $$f(\tilde r) = 1 - \frac{1}{\tilde r}.$$ In Fig. (\[fig1\]), we have plotted the total energy (in dimensionless units) and the EE versus $\epsilon$ for the 4-dimensional Schwarzschild space-time. The following points are important to note regarding the numerical results: First, for every $\epsilon$, the von Neumann entropy scales approximately as $S \sim (r_h/b)^2$. Second, the EE and the total energy increase with $\epsilon$. Using relation (\[temp\_1\]), we evaluate the "entanglement" temperature numerically. In dimensionless units, we get $T_{EE}=0.0793$, which is close to the value of the Hawking temperature, $0.079$. Moreover, for different values of $N$, we obtain approximately the same value of the entropy. The results are tabulated in Table (\[table1\]).
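For comparison, the Hawking temperature in Eq. (\[eq:HawkingTemp\]) can be evaluated directly from $f(\tilde r)$. Below is a short sketch, assuming NumPy and working in units where $r_h = 1$, so that $T_{BH} = f'(1)/(4\pi)$; the Reissner-Nordström profile anticipates the next subsection, and the helper names are illustrative.

```python
import numpy as np

def hawking_temperature(f, r_h=1.0, h=1e-6):
    """T_BH = kappa/(2 pi) with kappa = f'(r_h)/2, i.e. T_BH = f'(r_h)/(4 pi).
    The derivative is taken by central difference."""
    fprime = (f(r_h + h) - f(r_h - h)) / (2.0 * h)
    return fprime / (4.0 * np.pi)

# Metric functions with r measured in units of the horizon radius r_h:
f_schwarzschild = lambda r: 1.0 - 1.0 / r
f_reissner_nordstrom = lambda r, q=0.1: 1.0 - (1.0 + q**2) / r + q**2 / r**2
```

For Schwarzschild this gives $1/(4\pi) \approx 0.0796$, and for the charged case $(1-q^2)/(4\pi)$, consistent with the $T_{BH}$ column of Table (\[table1\]).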
See Appendix \[app3\] for plots of the energy and EE for $n=50, 80, 100$ and $130$. ![The plots of total energy (left) and EE (right) as a function of $\epsilon$ for the 4d Schwarzschild black hole. We set $N = 600$ and $n=150$. The cyan dots are the numerical data and the red line is the best linear fit to the data.[]{data-label="fig1"}](fig1) Reissner-Nordström (RN) black holes ------------------------------------ The 4-dimensional Reissner-Nordström black hole is given by the line element in Eq. (\[equ22\]), where \[equ219\] $$f(r) = 1 - \frac{2M}{r} + \frac{Q^2}{r^2}$$ and $Q$ is the charge of the black hole. Note that we have rescaled the radius w.r.t. the outer horizon ($r_h = M + \sqrt{M^2-Q^2}$). Choosing $q=Q/r_h$, we get $$f(\tilde r) = 1 - \frac{1+q^2}{\tilde r} + \frac{q^2}{\tilde r^2}$$ and the black hole temperature in units of $r_h$ is $T_{BH} = (1-q^2)/4\pi$. ![image](fig3) Note that we have evaluated the [*entanglement temperature*]{} at fixed charge $q$. For a fixed charge $q$, the first law of black hole mechanics is given by $dE = \left(\kappa/2\pi\right) dA$, where $A$ is the area of the black hole horizon. The energy and EE for different $q$ values have the same profile as in the previous case, as shown in the middle row of Fig. (\[fig3\]). See Appendix \[app3\] for plots for other values of $n$. As shown in Table (\[table1\]), $T_{EE}$ matches the Hawking temperature.

  **Black hole space-time**   &          & $\mathbf{T_{BH}}$ & $\mathbf{T_{EE}}$\
  Schwarzschild               &          & 0.07958           & 0.07927\
  Reissner-Nordström          & $q=0.1$  & 0.07878           & 0.07836\
                              & $q=0.2$  & 0.07639           & 0.07507\
                              & $q=0.3$  & 0.07242           & 0.07501\
                              & $q=0.4$  & 0.06685           & 0.06659\

  : Comparison of the Hawking temperature $T_{BH}$ with the [*entanglement temperature*]{} $T_{EE}$.[]{data-label="table1"}

Conclusions and outlook {#sec.3} ======================== In this work, we have given further evidence that 4-dimensional black hole entropy can be associated with the entropy of entanglement across the horizon, by explicitly deriving the entanglement temperature.
The entanglement temperature is given by the rate of change of the entropy of entanglement across a black hole's horizon with respect to the system energy. Our result sheds light on the interpretation of the temperature derived from entanglement as the Hawking temperature, one more step toward understanding black hole thermodynamics from the platform of quantum information theory. Some of the key features of our analysis are the following: First, while the entanglement entropy and the energy diverge in the limit $b \to 0$, the [*entanglement temperature*]{} is a finite quantity. Second, the [*entanglement temperature*]{} vanishes for flat space-time. While the evaluation of the entanglement entropy [*does not*]{} distinguish between a black-hole space-time and flat space-time, the entanglement temperature distinguishes the two. Our analysis also shows that the entanglement entropy satisfies all the properties of the black hole entropy. First, like the black hole entropy, the entanglement entropy increases and never decreases. Second, the entanglement entropy and the temperature satisfy the first law of black hole mechanics, $dE = T_{EE}\, dS$. We have shown this explicitly for the Schwarzschild and Reissner-Nordström black holes. It is quite remarkable that in higher-dimensional space-times the Rényi entropy provides a convergent alternative measure of entanglement [@shanki2013]; however, the entanglement temperature will then depend on the Rényi parameter. While a physical understanding of the Rényi parameter has emerged [@baez_renyi_2011], it is still not clear how to fix the Rényi parameter from first principles [@progress]. Our analysis also throws some light on the emergent gravity paradigm [@Sakharov2000; @jacobson95; @padmanabhan2010; @Verlinde2011], in which gravity is viewed not as a fundamental force. Here we have shown that the information lost across the horizon is related to the black hole entropy and that the laws of black hole mechanics emerge from the entanglement across the horizon.
Since General Relativity reduces gravity to an effect of the curvature of space-time, it is thought that the microscopic constituents would be the [*atoms of space-time*]{} itself. Our analysis shows that entanglement across horizons can be used as a building block of space-time [@VanRaamsdonk2010-GRG; @VanRaamsdonk2010-IJMP]. One of the unsettling questions in theoretical physics is whether, due to the Hawking temperature, the black hole performs a non-unitary transformation on the state of the system, the so-called information loss problem. Our analysis does not address this, for two reasons: (i) here we have fixed the radius of the horizon at all times and evaluated the change in the entropy, while to address information loss we would need to consider a changing horizon radius; (ii) we have used a perturbative Hamiltonian, and hence the analysis fails once the black hole shrinks to half its original size [@Almheiri2013-JHEP]. We hope to report on this in the future. While unitary quantum time-evolution is reversible and retains all information about the initial state, we have shown that the restriction of the degrees of freedom to the region outside the event horizon at all times leads to a temperature analogous to the Hawking temperature. Our analysis may also have relevance to the eigenstate thermalization hypothesis [@1994-srednicki; @rigol2008-nature; @2012-srednicki; @rahul2015-ARCMP], which we plan to explore. ACKNOWLEDGMENTS {#acknowledgments .unnumbered} =============== The authors wish to thank A. P. Balachandran, Charles Bennett, Samuel Braunstein, Saurya Das and Jens Eisert for discussions and comments. We would also like to thank the anonymous referee for useful comments. All numerical computations were done on the fast computing clusters at IISER-TVM. The work is supported by the Max Planck-India Partner Group on Gravity and Cosmology. SSK acknowledges the financial support of the CSIR, Govt. of India, through a Senior Research Fellowship.
SS is partially supported by Ramanujan Fellowship of DST, India. Calculation of Scalar field Hamiltonian in Lemaître coordinate {#app1} ============================================================== In this Appendix section, we give details of the derivation of the Hamiltonian (H) upto second order in $\e$. Using the orthogonal properties of the real spherical harmonics $Z_{_{lm_i}}$, the scalar field action reduces to, \[equu26\] S&=&\_[\_[l,m\_i]{}]{}dd \^D ł\[ (\_[ ]{}\_[lm\_i]{})\^2 - -\_[lm\_i]{}\^2\] where $\til{r}=r/r_h, \thin \til{\xi}=\xi/r_h,\thin \til{\t}=\t/r_h, \thin\til{\Phi}_{lm}=r_h \, \Phi_{lm} $ are dimensionless. Performing the following infinitesimal transformation [@toms] in the above resultant action: ’=+,’=,\ \_[lm\_i]{}(,)’\_[lm\_i]{}(’,)=\_[lm\_i]{}(,),\ (’,’)=(+,) The action in Eq. (\[equu26\]) becomes, S &&\_[\_[l,m\_i]{}]{}dd ł(+h\_1+\^2 h\_2/2)\^Dł\[ł(1-f-h\_1-ł\[h\_2+h\_1\^2 \])\^[1/2]{} (\_[ ]{}\_[lm\_i]{})\^2.\ & &ł.-ł(1-f-h\_1-ł\[h\_2+h\_1\^2 \])\^[-1/2]{} (\_[ ]{}\_[lm\_i]{})\^2 - .\ & &ł. ł(1-f-h\_1-ł\[h\_2+h\_1\^2 \])\^[1/2]{}\_[lm\_i]{}\^2\] \[equ28\] where $h_1=\dis\frac{\partial\til r}{\pa\til \t} ~~\mbox{and}~~ h_2=\dis\frac{\pa^2\til r}{\pa\til \t^2} $. Using the relation between the Lemaître coordinates - = gives the following expression, \[equ214\] h\_1=-,  h\_2=,   |\_= The Hamiltonian $(H)$ corresponding to the above Lagrangian is [ \[equ212\] H\_[\_[l,m\_i]{}]{}dł\[\_[lm\_i]{}\^2+ ł( \_)\^2+\_[\_[lm\_i]{}]{}\^2\] ]{} where \[equ210\] g\_[\_1]{}=+h\_1+\^2 h\_2/2,   g\_[\_2]{}= ,   \_[\_[lm\_i]{}]{}= g\_[\_1]{}\^[D/2]{}\_[lm\_i]{} and $\til{\Pi}_{lm_i}$ is the canonical conjugate momenta corresponding to the field $\til{\chi}_{lm_i}$. 
Upon quantization, $\til{\Pi}_{lm_i}$ and $\til{\chi}_{lm_i}$ satisfy the usual canonical commutation relation: \[equ211\] ł\[\_[lm\_i]{}ł([,]{}),\_[l’m’\_i]{}ł([,]{}) \]=i \_[ll’]{}\_[m\_im’\_i]{}ł(-) Using relations (\[equ214\]) and expanding the Hamiltonian up to second order in $\e$, we get, \[equ215\] H&\_[\_[l,m\_i]{}]{}\_\^dł\[\^2\_[lm\_i]{}+\^D ł\[\_[r]{} \]\^2.\ &ł.+\^2\_[lm\_i]{}\] The Hamiltonian in Eq. (\[equ215\])is of the form \[Hamilt\_11\] HH\_[\_0]{}+ V\_[\_1]{}+\^2 V\_[\_2]{} where $ H_{_0}$ is the unperturbed scalar field Hamiltonian in the flat space-time, $V_{_1} \mbox{and}\,V_{_2}$ are the perturbed parts of the Hamiltonian given by; H\_[\_0]{}&=&\_[\_[l,m\_i]{}]{}\_\^dł\[\^2\_[lm\_i]{}+ r\^Dł\[\_[r]{} \]\^2 +\^2\_[lm\_i]{}\]\ V\_[\_1]{}&=&\_[\_[l,m\_i]{}]{}\_\^dł\[ .\ & &ł. +\^2\_[lm\_i]{}\]\ V\_[\_2]{}&=&\_[\_[l,m\_i]{}]{}\_\^dł\[(H\^2\_3+H\_4)’\^2\_[lm\_i]{}+ (--+D H\_1 H\_1’-D H\_3 H\_1’+D H\_2’+H\_3 H\_3’ + H\_4’)’\_[lm\_i]{}\_[lm\_i]{}.\ &&ł.+(++-++ D\^2 H\_1’\^2-..\ &&ł.ł.-- D H\_1’ H\_3’+ H\_3’\^2-)\^2\_[lm\_i]{}\] where H\_[\_1]{}= , H\_[\_2]{}= , H\_[\_3]{}=, H\_[\_4]{}=ł()\^2+ and the redefined field operators are \_[lm\_i]{}= \_[lm\_i]{}= such that they satisfy the following canonical commutation relation \[equ216\] ł\[\_[lm\_i]{}(r, ),\_[l’m’\_i]{}(r’,)\]=i \_[ll’]{}\_[m\_im’\_i]{}(r- r’) The Hamiltonian $H$ in Eq. (\[Hamilt\_11\]) is mapped to a system of $N$ coupled time independent harmonic oscillators (HO) with non-periodic boundary conditions. The interaction matrix elements of the Hamiltonian can be found in the Ref.[@dropbox]. The total internal energy (E) and the entanglement entropy ($S_\a$) for the ground state of the HO’s is computed numerically as a function of $\e$ by using central difference scheme. 
Central Difference discretization {#app2} ================================== This is an effective method for approximating the derivative of a function in the neighbourhood of a discrete point, $x_i=x_0+i\,h$, with step size $h$. The Taylor expansions of the function in the forward and backward directions are given, respectively, by $$f(x+h) = f(x) + h f'(x) + \frac{h^2}{2!} f''(x) + \frac{h^3}{3!} f'''(x) + \ldots$$ $$f(x-h) = f(x) - h f'(x) + \frac{h^2}{2!} f''(x) - \frac{h^3}{3!} f'''(x) + \ldots$$ which imply $$f'(x) = \frac{f(x+h) - f(x-h)}{2h} + O(h^2)$$ $$f''(x) = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} + O(h^2)$$ $$f'''(x) = \frac{f(x+2h) - 2f(x+h) + 2f(x-h) - f(x-2h)}{2h^3} + O(h^2)$$ Plots of internal energy and EE as a function of $\epsilon$ for different black hole space-times {#app3} =========================================================================================== In this Appendix, we give plots of the EE for different black hole space-times. ![Plots of the EE as a function of $\epsilon$ for the 4-d Schwarzschild black hole with $N=300$ and $n=50,80,100$, and $130$, respectively. The blue dots are the numerical data and the red line is the best linear fit to the data.[]{data-label="fig2"}](fig2) ![[]{data-label="fig16"}](fig16) ![[]{data-label="fig4"}](fig4) [10]{} G. D. [Birkhoff]{}, “[Proof of a Recurrence Theorem for Strongly Transitive Systems]{},” [[*Proceedings of the National Academy of Science*]{} [**17**]{} (Dec., 1931) 650–655](http://dx.doi.org/10.1073/pnas.17.12.650). J. V. Neumann, “Proof of the quasi-ergodic hypothesis,” [[*Proceedings of the National Academy of Sciences*]{} [**18**]{} no. 1, (1932) 70–82](http://dx.doi.org/10.1073/pnas.18.1.70). L. Boltzmann, [*Lectures on Gas Theory*]{}. Dover Books on Physics. Dover Publications, 2011. L. D. Landau and E. M. Lifshitz, [*Statistical Physics*]{}, vol. V of [ *Course of Theoretical Physics*]{}. Elsevier, 3 ed., 1980. A. Wehrl, “[General properties of entropy]{},” [[*Rev. Mod. Phys.*]{} [**50**]{} no. 2, (Apr., 1978) 221–260](http://dx.doi.org/10.1103/RevModPhys.50.221).
, [Wenger Trevor]{}, and [Weiss David S.]{}, “[A quantum Newton’s cradle]{},” [[*Nature*]{} [**440**]{} no. 7086, (Apr, 2006) 900–903](http://dx.doi.org/10.1038/nature04693). , [Gross C.]{}, [Weller A.]{}, [Giovanazzi S.]{}, and [Oberthaler M. K.]{}, “[Squeezing and entanglement in a Bose-Einstein condensate]{},” [[*Nature*]{} [**455**]{} no. 7217, (Oct, 2008) 1216–1219](http://dx.doi.org/10.1038/nature07332). D. A. Smith, M. Gring, T. Langen, M. Kuhnert, B. Rauer, R. Geiger, T. Kitagawa, I. Mazets, E. Demler, and J. Schmiedmayer, “Prethermalization revealed by the relaxation dynamics of full distribution functions,” [[*New Journal of Physics*]{} [**15**]{} no. 7, (2013) 075011](http://dx.doi.org/10.1088/1367-2630/15/7/075011). V. I. Yukalov, “Bose-Einstein condensation and gauge symmetry breaking,” [[*Laser Physics Letters*]{} [**4**]{} no. 9, (2007) 632](http://dx.doi.org/10.1002/lapl.200710029). A. Osterloh, L. Amico, G. Falci, and R. Fazio, “Scaling of entanglement close to a quantum phase transition,” [[*Nature*]{} [**416**]{} (Dec, 2002) 608](http://dx.doi.org/10.1038/416608a). L.-A. Wu, M. S. Sarandy, and D. A. Lidar, “Quantum phase transitions and bipartite entanglement,” [[*Phys. Rev. Lett.*]{} [**93**]{} (Dec, 2004) 250404](http://dx.doi.org/10.1103/PhysRevLett.93.250404). J. von Stecher, E. Demler, M. D. Lukin, and A. M. Rey, “Probing interaction-induced ferromagnetism in optical superlattices,” [[*New Journal of Physics*]{} [**12**]{} no. 5, (2010) 055009](http://dx.doi.org/10.1088/1367-2630/12/5/055009). J. Dinerman and L. F. Santos, “Manipulation of the dynamics of many-body systems via quantum control methods,” [[*New Journal of Physics*]{} [**12**]{} no. 5, (2010) 055025](http://dx.doi.org/10.1088/1367-2630/12/5/055025). , “[Quantum Thermodynamics: Excuse our ignorance]{},” [[*Nature Physics*]{} [**2**]{} no. 11, (Nov, 2006) 727–728](http://dx.doi.org/10.1038/nphys456). and [M. B.
Plenio ]{}, “[Entanglement theory and the second law of thermodynamics]{},” [[*Nature Physics*]{} [**4**]{} no. 11, (Nov, 2008) 873–877](http://dx.doi.org/10.1038/nphys1100). , “[Quantum entanglement: Reversible path to thermodynamics]{},” [[*Nature Physics*]{} [**4**]{} no. 11, (Nov, 2008) 833–834](http://dx.doi.org/10.1038/nphys1123). S. Popescu and D. Rohrlich, “Thermodynamics and the measure of entanglement,” [[*Phys. Rev. A*]{} [**56**]{} (1997) R3319–R3321](http://dx.doi.org/10.1103/PhysRevA.56.R3319). V. Vedral and M. B. Plenio, “Entanglement measures and purification procedures,” [[*Phys. Rev. A*]{} [**57**]{} (Mar, 1998) 1619–1633](http://dx.doi.org/10.1103/PhysRevA.57.1619). M. B. Plenio and V. Vedral, “Teleportation, entanglement and thermodynamics in the quantum world,” [[ *Contemporary Physics*]{} [**39**]{} no. 6, (Nov., 1998) 431–446](http://dx.doi.org/10.1080/001075198181766). M. Srednicki, “Chaos and quantum thermalization,” [[*Phys. Rev. E*]{} [**50**]{} (1994) 888–901](http://dx.doi.org/10.1103/PhysRevE.50.888). , [Dunjko Vanja]{}, and [Olshanii Maxim]{}, “[Thermalization and its mechanism for generic isolated quantum systems]{},” [[*Nature*]{} [**452**]{} no. 7189, (Apr, 2008) 854–858](http://dx.doi.org/10.1038/nature06838). M. Rigol and M. Srednicki, “Alternatives to eigenstate thermalization,” [[*Phys. Rev. Lett.*]{} [**108**]{} (2012) 110601](http://dx.doi.org/10.1103/PhysRevLett.108.110601). R. Nandkishore and D. A. Huse, “Many-body localization and thermalization in quantum statistical mechanics,” [[*Annual Review of Condensed Matter Physics*]{} [**6**]{} no. 1, (2015) 15–38](http://dx.doi.org/10.1146/annurev-conmatphys-031214-014726). L. Bombelli, R. K. Koul, J. Lee, and R. D. Sorkin, “Quantum source of entropy for black holes,” [[ *Phys. Rev. D*]{} [**34**]{} (Jul, 1986) 373–383](http://dx.doi.org/10.1103/PhysRevD.34.373). M. Srednicki, “Entropy and area,” [[*Phys. Rev.
Lett.*]{} [**71**]{} (Aug, 1993) 666–669](http://dx.doi.org/10.1103/PhysRevLett.71.666). J. Eisert and M. Cramer, “Single-copy entanglement in critical quantum spin chains,” [[*Phys. Rev. A*]{} [**72**]{} (Oct, 2005) 042112](http://dx.doi.org/10.1103/PhysRevA.72.042112). S. Das and S. Shankaranarayanan, “How robust is the entanglement entropy-area relation?,” [[*Phys. Rev. D*]{} [**73**]{} (Jun, 2006) 121701](http://dx.doi.org/10.1103/PhysRevD.73.121701). S. Das, S. Shankaranarayanan, and S. Sur, [*Black hole entropy from entanglement: A review* ]{}, vol. 268 of [*Horizons in World Physics*]{}. Nova Science Publishers, New York, 2009. p. 211. S. N. Solodukhin, “Entanglement [Entropy]{} of [Black]{} [Holes]{},” [[*Living Reviews in Relativity*]{} [**14**]{} (2011) ](http://dx.doi.org/10.12942/lrr-2011-8). S. L. Braunstein, S. Das, and S. Shankaranarayanan, “Entanglement entropy in all dimensions,” [[ *Journal of High Energy Physics*]{} [**2013**]{} no. 7, (2013) 1–9](http://dx.doi.org/10.1007/JHEP07(2013)130). S. W. Hawking, “Particle creation by black holes,” [[*Communications in Mathematical Physics*]{} [**43**]{} (1975) 199–220](http://dx.doi.org/10.1007/BF02345020). R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, “Quantum entanglement,” [[*Rev. Mod. Phys.*]{} [**81**]{} (Jun, 2009) 865–942](http://dx.doi.org/10.1103/RevModPhys.81.865). J. Eisert, M. Cramer, and M. B. Plenio, “*Colloquium*: Area laws for the entanglement entropy,” [[*Rev. Mod. Phys.*]{} [**82**]{} (Feb, 2010) 277–306](http://dx.doi.org/10.1103/RevModPhys.82.277). P. Calabrese and J. L. Cardy, “[Entanglement entropy and quantum field theory]{},” [[ *Journal of Statistical Mechanics: Theory and Experiment*]{} [**0406**]{} (2004) P06002](http://dx.doi.org/10.1088/1742-5468/2004/06/P06002). K. Mallayya, R. Tibrewala, S. Shankaranarayanan, and T. Padmanabhan, “Zero modes and divergence of entanglement entropy,” [[*Phys. Rev.
D*]{} [**90**]{} (Aug, 2014) 044058](http://dx.doi.org/10.1103/PhysRevD.90.044058). S. Mukohyama, M. Seriu, and H. Kodama, “Thermodynamics of entanglement in Schwarzschild spacetime,” [*Phys. Rev. D*]{} [**58**]{} (Jul, 1998) 064001. L. D. Landau and E. M. Lifshitz, [*The Classical Theory of Fields*]{}, vol. II of [*Course of Theoretical Physics*]{}. Butterworth-Heinemann, 4 ed., 1975. D. J. Toms, [*[The Schwinger action principle and effective action]{}*]{}. Cambridge monographs on mathematical physics. Cambridge Univ. Press, Cambridge, 2007. H. Sakaguchi, “Renyi entropy and statistical mechanics,” [[*Progress of Theoretical Physics*]{} [**81**]{} no. 4, (1989) 732–737](http://dx.doi.org/10.1143/PTP.81.732). Click on the following ‘Dropbox’ link for more details on the MATLAB codes used for the numerical study reported in the paper: <https://www.dropbox.com/sh/wxmq02qxg6m3pbm/AABEE-k4MMow7FJj_JHgfrV_a?dl=0>. J. C. [Baez]{}, “Rényi entropy and free energy,” [[arXiv:1102.2098 \[quant-ph\]]{}](http://arxiv.org/abs/1102.2098). A. D. Sakharov, “Vacuum quantum fluctuations in curved space and the theory of gravitation,” [[*General Relativity and Gravitation*]{} [**32**]{} no. 2, (2000) 365–367](http://dx.doi.org/10.1023/A:1001947813563). T. Jacobson, “Thermodynamics of spacetime: The Einstein equation of state,” [[*Phys. Rev. Lett.*]{} [**75**]{} (Aug, 1995) 1260–1263](http://dx.doi.org/10.1103/PhysRevLett.75.1260). T. Padmanabhan, “Thermodynamical aspects of gravity: new insights,” [[*Reports on Progress in Physics*]{} [**73**]{} no. 4, (Apr., 2010) 046901](http://dx.doi.org/10.1088/0034-4885/73/4/046901). E. Verlinde, “On the origin of gravity and the laws of Newton,” [[*Journal of High Energy Physics*]{} [**2011**]{} no. 4, (2011) ](http://dx.doi.org/10.1007/JHEP04(2011)029). M. Van Raamsdonk, “Building up spacetime with quantum entanglement,” [[*General Relativity and Gravitation*]{} [**42**]{} no.
10, (2010) 2323–2329](http://dx.doi.org/10.1007/s10714-010-1034-0). M. Van Raamsdonk, “Building up spacetime with quantum entanglement,” [[*International Journal of Modern Physics D*]{} [**19**]{} no. 14, (2010) 2429–2435](http://dx.doi.org/10.1142/S0218271810018529). A. Almheiri, D. Marolf, J. Polchinski, and J. Sully, “Black holes: complementarity or firewalls?,” [[*Journal of High Energy Physics*]{} [**2013**]{} no. 2, (2013) 1–20](http://dx.doi.org/10.1007/JHEP02(2013)062). [^1]: In Schwarzschild coordinates, the region $r > 2M$ needs to be bipartited [@1998-Mukohyama].
--- author: - 'M. Csanád[^1]' bibliography: - '../../../master.bib' title: Time evolution of the sQGP with hydrodynamic models --- Introduction ============ The almost perfect fluidity of the experimentally created strongly interacting Quark-Gluon Plasma at the Relativistic Heavy Ion Collider (RHIC) [@Adcox:2004mh] showed that relativistic hydrodynamic models can be applied to describe the space-time picture of heavy-ion collisions and to infer the relation between experimental observables and the initial conditions. In this paper we investigate the relativistic, ellipsoidally symmetric model of Ref. [@Csorgo:2003ry]. Hadronic observables were calculated in Ref. [@Csanad:2009wc], while photonic observables in Ref. [@Csanad:2011jq]. We also show new solutions, which can be regarded as generalizations of the model of Ref. [@Csorgo:2003ry] to an arbitrary, temperature-dependent speed of sound, originally published in Ref. [@Csanad:2012hr]. Equations of hydrodynamics ========================== We denote space-time coordinates by $x^\mu = {\left({t, {\mathbf{r}}}\right)}$, with ${\mathbf{r}}=(r_x, r_y, r_z)$ being the spatial three-vector and $t$ the time in the lab-frame. The metric tensor is $g_{\mu\nu}=diag{\left({1,-1,-1,-1}\right)}$. Coordinate proper-time is defined as $\tau=\sqrt{t^2-|{\mathbf{r}}|^2}$. The fluid four-velocity is $u^\mu=\gamma{\left({1,{\mathbf{v}}}\right)}$, with ${\mathbf{v}}$ being the three-velocity, and $\gamma=1/\sqrt{1-|{\mathbf{v}}|^2}$. An analytic hydrodynamical solution is a functional form for the pressure $p$, energy density $\varepsilon$, entropy density $\sigma$, temperature $T$, and (if the fluid consists of individual conserved particles, or if there is some conserved charge or number) the conserved number density $n$.
Then the basic hydrodynamical equations are the continuity and energy-momentum conservation equations: $$\begin{aligned} \partial_\mu{\left({n u^\mu}\right)} = 0\;\textnormal{ and }\;\partial_\nu T^{\mu \nu} = 0\label{e:em}.\end{aligned}$$ The energy-momentum tensor of a perfect fluid is $$\begin{aligned} T^{\mu\nu} ={\left({\varepsilon+p}\right)}u^\mu u^\nu-pg^{\mu \nu} .\end{aligned}$$ The energy-momentum conservation equation can then be transformed (by projecting it orthogonally and parallel to $u^\mu$, respectively) to: $$\begin{aligned} {\left({\varepsilon+p}\right)}u^{\nu}\partial_{\nu}u^{\mu} & ={\left({g^{\mu\nu}-u^{\mu}u^{\nu}}\right)}\partial_{\nu}p,\label{e:euler} \\ {\left({\varepsilon+p}\right)}\partial_{\nu}u^{\nu}+u^{\nu}\partial_{\nu}\varepsilon & = 0\label{e:energy}.\end{aligned}$$ [Eq. (\[e:euler\])]{} is the relativistic Euler equation, while [Eq. (\[e:energy\])]{} is the relativistic form of the energy conservation equation. Note also that [Eq. (\[e:energy\])]{} is equivalent to the entropy conservation equation: $$\begin{aligned} \label{e:scont} \partial_\mu{\left({\sigma u^\mu}\right)}=0 .\end{aligned}$$ The Equation of State (EoS) closes the set of equations. We investigate the following EoS: $$\begin{aligned} \label{e:eos} \varepsilon = \kappa{\left({T}\right)} p ,\end{aligned}$$ while the speed of sound $c_s$ is calculated as $c_s = \sqrt{\partial p/\partial \varepsilon}$, i.e. for constant $\kappa$, the relation $c_s = 1/\sqrt{\kappa}$ holds. For the case when there is a conserved number density $n$, we also use the well-known relation for ideal gases: $$\begin{aligned} \label{e:tdef} p=nT. \end{aligned}$$ For $\kappa{\left({T}\right)}=$ constant, an ellipsoidally symmetric solution of the hydrodynamical equations is presented in Ref.
[@Csorgo:2003ry]: $$\begin{aligned} \label{e:tsol0} u^\mu = \frac{x^\mu}{\tau},\quad n = n_0\frac{V_0}{V}\nu{\left({s}\right)},\quad T = T_0{\left({\frac{V_0}{V}}\right)}^{{\frac{1}{\kappa}}}{\frac{1}{\nu{\left({s}\right)}}} ,\quad V = \tau^3,\quad s = \frac{r_x^2}{X^2} + \frac{r_y^2}{Y^2} + \frac{r_z^2}{Z^2},\end{aligned}$$ where $n_0$ and $T_0$ correspond to the proper time when the arbitrarily chosen volume $V_0$ was reached (i.e. $\tau_0 = V_0^{1/3}$), and $\nu{\left({s}\right)}$ is an arbitrary function of $s$. The quantity $s$ has ellipsoidal level surfaces, and obeys $u^\nu\partial_\nu s=0$. We call $s$ a *scaling variable*, and $V$ the effective volume of a characteristic ellipsoid. Furthermore, $X$, $Y$, and $Z$ are the time (lab-frame time $t$) dependent principal axes of an expanding ellipsoid. They have the explicit time dependence $X = \dot X_0 t$, $Y = \dot Y_0 t$, and $Z = \dot Z_0 t$, with $\dot X_0$, $\dot Y_0$, $\dot Z_0$ constants. Photon and hadron observables for constant EoS ============================================== From the above hydrodynamic solution with a constant EoS, the source functions can be written up. For bosonic hadrons, the source function takes the following form [@Csanad:2009wc]: $$\begin{aligned} S(x,p)d^4x=\mathcal{N}\frac{p_{\mu}\,d^3\Sigma^{\mu}(x)H(\tau)d\tau}{n(x)\exp\left(p_{\mu}u^{\mu}(x)/T(x)\right)-1},\end{aligned}$$ where $\mathcal{N}=g/(2\pi)^3$ (with $g$ being the degeneracy factor), and $H(\tau)$ is the proper-time probability distribution of the freeze-out. It is assumed to be a $\delta$ function or a narrow Gaussian centered at the freeze-out proper-time $\tau_0$. Furthermore, $\mu(x)/T(x)=\ln n(x)$ is the fugacity factor, $d^3 \Sigma_\mu(x)p^\mu$ is the Cooper-Frye factor (describing the flux of the particles), and $d^3 \Sigma_\mu(x)$ is the vector-measure of the freeze-out hyper-surface, pseudo-orthogonal to $u^\mu$. Here the source distribution is normalized such that $\int S(x,p) d^4 x d^3{\bf p}/E = N$, i.e.
one gets the total number of particles $N$ (using $c$=1, $\hbar$=1 units). Note that one has to change variables from $\tau$ to $t$, and so a Jacobian of $d\tau/dt=t/\tau$ has to be taken into account. For the source function of photon creation we have [@Csanad:2011jq]: $$\begin{aligned} \label{e:source} S(x,p)d^4x = \mathcal{N'}\frac{p_{\mu}\,d^3\Sigma^{\mu}(x)dt}{\exp\left(p_{\mu}u^{\mu}(x)/T(x)\right)-1} = \mathcal{N'}\frac{p_{\mu}u^{\mu}}{\exp\left(p_{\mu}u^{\mu}(x)/T(x)\right)-1}\,d^4x \end{aligned}$$ where $p_{\mu}d^3\Sigma^{\mu}$ is again the Cooper-Frye factor of the emission hyper-surfaces. Similarly to the previous case, we assume that the hyper-surfaces are pseudo-orthogonal to $u^\mu$, thus $d^3\Sigma^{\mu}(x) = u^{\mu}d^3x$. This then yields $p_{\mu}u^{\mu}$, which is the energy of the photon in the co-moving system. The photon creation is then assumed to happen from an initial time $t_i$ until a point sufficiently near the freeze-out. From these source functions, observables can be calculated, as detailed in Refs. [@Csanad:2009wc; @Csanad:2011jq]. Comparison to measured hadron and photon distributions ====================================================== Observables calculated from the above source functions were compared to data in Refs. [@Csanad:2009wc; @Csanad:2011jq]. Hadron fits determined the freeze-out parameters of the model [@Csanad:2009wc]: expansion rates, freeze-out proper-time and freeze-out temperature (in the center of the fireball). When describing direct photon data [@Csanad:2011jq], the free parameters (besides the ones fixed from hadronic fits) were $\kappa$ (the equation of state parameter) and $t_i$, the initial time of the evolution.
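Before turning to the data, the scaling property $u^\nu\partial_\nu s=0$ of the solution (\[e:tsol0\]) is easy to verify directly: along the Hubble-like flow $u^\mu=x^\mu/\tau$, a fluid element at $(t,\mathbf{r})$ moves to $(\lambda t,\lambda\mathbf{r})$, under which $s$ is invariant. A minimal numerical sketch (the expansion rates here are arbitrary illustrative numbers, not fit values):

```python
def s_var(t, r, rates):
    # s = r_x^2/X^2 + r_y^2/Y^2 + r_z^2/Z^2, with X = Xdot0*t, Y = Ydot0*t, Z = Zdot0*t
    return sum(ri ** 2 / (rate * t) ** 2 for ri, rate in zip(r, rates))

rates = (0.6, 0.5, 0.9)            # illustrative Xdot0, Ydot0, Zdot0
t, r = 4.0, (1.0, -0.5, 2.0)
s0 = s_var(t, r, rates)
for lam in (1.5, 2.0, 3.7):
    # the same fluid element at a later point of the flow u^mu = x^mu/tau
    print(s_var(lam * t, tuple(lam * ri for ri in r), rates) - s0)  # ~ 0
```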
  Parameter                  Symbol                  $N_1$ and HBT (hadrons)   Elliptic flow (hadrons)   $N_1$ (photons)
  -------------------------- ----------------------- ------------------------- ------------------------- -------------------
  Central FO temperature     $T_0$ \[MeV\]           199$\pm$3                 204$\pm$7                 $204$ (fixed)
  Eccentricity               $\epsilon$              0.80$\pm$0.02             0.34$\pm$0.03             $0.34$ (fixed)
  Transverse expansion       $u_t^2/b$               -0.84$\pm$0.08            -0.34$\pm$0.01            $-0.34$ (fixed)
  FO proper-time             $\tau_0$ \[fm/$c$\]     7.7$\pm$0.1               -                         $7.7$ (fixed)
  Longitudinal expansion     $\dot{Z_0^2}/b$         -1.6$\pm$0.3              -                         $-1.6$ (fixed)
  Equation of State          $\kappa$                -                         -                         $7.9 \pm 0.7$

  : Parameters of the model determined from hadron and photon observable data. See details in Refs. [@Csanad:2009wc; @Csanad:2011jq].[]{data-label="t:param"}

![Fits to invariant momentum distribution of pions [@Adler:2003cb] (left), HBT radii [@Adler:2003kt] (middle) and elliptic flow [@Adler:2004rq] (right). See the obtained parameters in Table \[t:param\].[]{data-label="f:hadronfits"}](N1.eps "fig:"){height="32.00000%"} ![](HBT.eps "fig:"){height="32.00000%"} ![](V2.eps "fig:"){height="32.00000%"} ![Fit to direct photon invariant transverse momentum data [@Adare:2008fqa] (left), comparison to elliptic flow data [@Adare:2011zr] (middle) and direct photon HBT predictions (right). See the model parameters in Table \[t:param\].[]{data-label="f:photonfits"}](n1pt.eps "fig:"){width="32.00000%"} ![](v2pt.eps "fig:"){width="32.00000%"} ![](hbtpt.eps "fig:"){width="32.00000%"} We compared our model to PHENIX 200 GeV Au+Au hadron and photon data from Refs. [@Adler:2003cb; @Adler:2004rq; @Adler:2003kt; @Adare:2008fqa]. Results are shown in Figs. \[f:hadronfits\] and \[f:photonfits\], while the model parameters are detailed in Table \[t:param\]. The EoS result from the photon fit is $\kappa=7.9\pm0.7_{stat}\pm1.5_{syst}$, or alternatively, using $\kappa=1/c_s^2$, $$\begin{aligned} c_s = 0.36\pm0.02_{stat}\pm0.04_{syst}\end{aligned}$$ which is in agreement with lattice QCD calculations [@Borsanyi:2010cj] and measured hadronic data [@Adare:2006ti; @Lacey:2006pn]. This represents an average EoS, as it may vary with temperature. The maximum value for $t_i$ within 95% probability is 0.7 fm/$c$. The initial temperature of the fireball (in its center) is then $$\begin{aligned} T_i = 507\pm12_{stat}\pm90_{syst}\textnormal{ MeV}\end{aligned}$$ at 0.7 fm/$c$. This is in accordance with other hydro models, as those values are in the $300-600$ MeV interval [@Adare:2008fqa]. Note that the systematic uncertainty comes from the analysis of a possible prefactor, as detailed in Ref. [@Csanad:2011jq]. Using the previously determined fit parameters, we can calculate the elliptic flow of direct photons in Au+Au collisions at RHIC.
This was compared to PHENIX data [@Adare:2011zr], as shown in Fig. \[f:photonfits\], and they were found not to be incompatible. We also calculated direct photon HBT radii as a prediction, and found $R_\textnormal{out}$ to be significantly larger than $R_\textnormal{side}$. New solutions for general Equation of State {#s:sols} =========================================== We found new solutions to the relativistic hydrodynamical equations for an arbitrary $\varepsilon=\kappa{\left({T}\right)}p$ Equation of State, as detailed in Ref. [@Csanad:2012hr]. These are the first solutions of their kind (i.e. with a non-constant EoS). In the case where we do not consider any conserved $n$ density, the solution is given as: $$\begin{aligned} \sigma &= \sigma_0 \frac{\tau_0^3}{\tau^3} ,\label{e:Tsol:s:0}\\ u^\mu & = \frac{x^\mu}{\tau} ,\\ \frac{\tau_0^3}{\tau^3} & = \exp{\left\{{\int_{T_0}^T{\left({\frac{\kappa{\left({\beta}\right)}}{\beta}+{\frac{1}{\kappa{\left({\beta}\right)}+1}}{\frac{\mathrm{d}{\kappa{\left({\beta}\right)}}}{\mathrm{d}{\beta}}}}\right)}{\mathrm{d}}\beta}\right\}} . \label{e:Tsol:s}\end{aligned}$$ For the case when the pressure is expressed as $p=nT$ with a conserved density $n$, another new solution can be written up as: $$\begin{aligned} n &= n_0 \frac{\tau_0^3}{\tau^3} ,\\ u^\mu & = \frac{x^\mu}{\tau} ,\\ \frac{\tau_0^3}{\tau^3} & = \exp{\left\{{\int_{T_0}^T{\left({{\frac{1}{\beta}}{\frac{\mathrm{d}{}}{\mathrm{d}{\beta}}}{\left[{\kappa{\left({\beta}\right)}\beta}\right]}}\right)}{\mathrm{d}}\beta}\right\}} . \label{e:Tsol:nT}\end{aligned}$$ Quantities denoted by the subscript 0 ($n_0$, $T_0$, $\sigma_0$) correspond to the proper-time $\tau_0$, which can be chosen arbitrarily. If for example $\tau_0$ is taken to be the freeze-out proper-time, then $T_0$ is the freeze-out temperature. These solutions are simple generalizations of the $\nu{\left({s}\right)}=1$ case of the solutions of Ref.
[@Csorgo:2003ry], and the latter also represents a relativistic generalization of the solution presented in Ref. [@Csorgo:2001xm]. It is important to note that the conserved-$n$ solution becomes ill-defined if ${\frac{\mathrm{d}{}}{\mathrm{d}{T}}} {\left({\kappa{\left({T}\right)}T}\right)}>0$ does not hold. In such a case, one can use the solution without conserved $n$ (Eqs. [(\[e:Tsol:s:0\])]{}–[(\[e:Tsol:s\])]{}). If $\kappa$ is given as a function of the pressure $p$ and not that of the temperature $T$, a third new solution can be given as: $$\begin{aligned} \sigma &= \sigma_0 \frac{\tau_0^3}{\tau^3} ,\label{e:psol:s}\\ u^\mu & = \frac{x^\mu}{\tau} ,\\ \frac{\tau_0^3}{\tau^3} & = \exp{\left\{{\int_{p_0}^p{\left({\frac{\kappa{\left({\beta}\right)}}{\beta}+{\frac{\mathrm{d}{\kappa{\left({\beta}\right)}}}{\mathrm{d}{\beta}}}}\right)}\frac{{\mathrm{d}}\beta}{\kappa{\left({\beta}\right)}+1}}\right\}} , \label{e:psol:p}\end{aligned}$$ i.e. almost the same as in [Eq. (\[e:Tsol:s\])]{}, except that here the integration variable is the pressure $p$. Utilizing a lattice QCD EoS =========================== A QCD equation of state has been calculated by the Budapest-Wuppertal group in Ref. [@Borsanyi:2010cj], with dynamical quarks, in the continuum limit. In their Eq. (3.1) and Table 2, they give an analytic parametrization of the trace anomaly $I=\epsilon-3p$ as a function of temperature. The pressure can also be calculated from it, as (using the normalized values and $\hbar=c=1$ units) $\frac{I}{T^4}=T\frac{\partial}{\partial T}\frac{p}{T^4}$. From this, we calculated the EoS parameter $\kappa=I/p+3$ as a function of the temperature, as shown in Fig. \[f:validity\_ttau\] (left plot). Since ${\frac{\mathrm{d}{}}{\mathrm{d}{T}}} {\left({\kappa{\left({T}\right)}T}\right)}$ becomes negative in a certain $T$ range, the solution without conserved number density $n$ (presented in Eqs. [(\[e:Tsol:s:0\])]{}–[(\[e:Tsol:s\])]{}) was used.
[@Csanad:2012hr] ![Left: The temperature dependence of the EoS parameter $\kappa$ from Ref. [@Borsanyi:2010cj] is shown with the solid black curve. In the shaded $T$ range (173 MeV - 230 MeV) ${\frac{\mathrm{d}{}}{\mathrm{d}{T}}} {\left({\kappa{\left({T}\right)}T}\right)}$ (red dashed line) becomes negative, thus the solution shown in Eqs. [(\[e:Tsol:s:0\])]{}–[(\[e:Tsol:s\])]{} shall be used with this EoS. Right: Time dependence of the temperature $T(\tau)$ (normalized with the freeze-out time $\tau_f$ and the freeze-out temperature $T_f$) is shown. The four thin red lines show this dependence in the case of constant $\kappa$ values, while the thicker blue lines show results based on the EoS of Ref. [@Borsanyi:2010cj]. []{data-label="f:validity_ttau"}](validity.eps "fig:"){width="49.00000%"} ![](ttau.eps "fig:"){width="48.30000%"} We utilized the obtained $\kappa(T)$ and calculated the time evolution of the temperature of the fireball from this solution of relativistic hydrodynamics. The result is shown in Fig. \[f:validity\_ttau\] (right plot). Clearly, the temperature falls off almost as fast as in the case of a constant $\kappa=3$, i.e. an ideal relativistic gas.
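This $T(\tau)$ behaviour follows from Eq. (\[e:Tsol:s\]); for constant $\kappa$ the exponent there reduces to $\kappa\ln(T/T_0)$, i.e. $T(\tau)=T_0(\tau_0/\tau)^{3/\kappa}$, which makes a convenient cross-check for any numerical evaluation of the integral. A short Python sketch (function names are ours, not from the paper) that also reproduces the order of magnitude of the initial temperatures quoted in the example below:

```python
import math

def log_volume_ratio(T, T0, kappa, dkappa, n=4000):
    # trapezoidal evaluation of the exponent in the general solution:
    #   ln(tau0^3/tau^3) = int_{T0}^{T} [ kappa(b)/b + kappa'(b)/(kappa(b)+1) ] db
    f = lambda b: kappa(b) / b + dkappa(b) / (kappa(b) + 1.0)
    h = (T - T0) / n
    return h * (0.5 * (f(T0) + f(T)) + sum(f(T0 + i * h) for i in range(1, n)))

def T_init(T_f, tau_f, tau_init, kappa_const):
    # closed form for constant kappa: T(tau) = T_f * (tau_f/tau)^(3/kappa)
    return T_f * (tau_f / tau_init) ** (3.0 / kappa_const)

# the numerical exponent must reduce to kappa*ln(T/T0) for constant kappa
k, T0, T = 4.0, 170.0, 550.0
num = log_volume_ratio(T, T0, lambda b: k, lambda b: 0.0)
print(abs(num - k * math.log(T / T0)))  # ~ 0

# T_f = 170 MeV, tau_f = 8 fm/c, tau_init = 1.5 fm/c
for kc in (3.0, 4.0, 4.3, 7.9):
    print(kc, T_init(170.0, 8.0, 1.5, kc))
```

With $\kappa$ around 4-4.5 this lands near $T_{\rm init}\approx550$ MeV, while the larger average $\kappa=7.9$ extracted from the photon fit would give a substantially lower initial temperature.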
Hence a given freeze-out temperature yields a significantly higher initial temperature than a higher $\kappa$ (i.e. a lower speed of sound $c_s$) would. Let us give an example. We fix the freeze-out temperature, based on lattice QCD results, to a reasonable value of $T_f = 170$ MeV. Let all the quantities with subscript 0 correspond to the freeze-out; we thus index them with $f$. In this case, already at $0.3\times\tau_f$ (30% of the freeze-out time), temperatures 2.5-3$\times$ higher than at the freeze-out can be reached. To give a full quantitative example, let us assume the following values: $$\begin{aligned} \tau_f = 8\;\rm{fm}/c \;\textnormal{ and }\;&\tau_{\rm init} = 1.5\;\rm{fm}/c\textnormal{, \;then}\\ T_f = 170\;\rm{MeV}\;\Rightarrow\;&T_{\rm init} \approx 550\;\rm{MeV}\end{aligned}$$ (and even higher if $\tau_{\rm init}$ is smaller). This value would have been reached with a constant EoS of $\kappa\approx4$, even though the extracted average EoS values are usually above this value. The reason may be that, over the largest part of the temperature range, $\kappa$ takes values close to 4, as shown in the left plot of Fig. \[f:validity\_ttau\]. In general, the mentioned lattice QCD equation of state of Ref. [@Borsanyi:2010cj] and our hydro solution yield a $T(\tau)$ dependence. Then, if the freeze-out temperature $T_f$ and the time evolution duration $\tau_f / \tau_{\rm init}$ are known, the initial temperature of the fireball can be easily calculated, or even read off the right plot of Fig. \[f:validity\_ttau\], as it was drawn with units normalized by the freeze-out temperature and proper-time. Conclusion ========== Exact parametric solutions of perfect hydrodynamics can be utilized to describe the matter produced in heavy-ion collisions at RHIC. We calculated observables from a relativistic, 1+3 dimensional, ellipsoidally symmetric, exact solution, and compared these to 200 GeV Au+Au PHENIX data.
Hadronic data are compatible with our model, and freeze-out parameters were extracted from fits to these data. [@Csanad:2009wc] From fits to direct photon data, we find that thermal radiation is consistent with these measurements, with an average speed of sound of $c_s = 0.36\pm0.02_{stat}\pm0.04_{syst}$. We can also set a lower bound on the initial temperature of the sQGP of $507\pm12_{stat}\pm90_{syst}$ MeV at $0.7$ fm/$c$. We also find that the thermal photon elliptic flow from this model is not incompatible with measurements. We also predicted photon HBT radii from this model. [@Csanad:2011jq] In the second part of this paper, we have presented the first analytic solutions of the equations of relativistic perfect fluid hydrodynamics for a general temperature-dependent speed of sound (i.e. a general Equation of State). Using our solutions and utilizing a lattice QCD Equation of State, we explored the initial state of heavy-ion reactions based on the reconstructed final state. In $\sqrt{s_{NN}}=200$ GeV Au+Au collisions, our investigations reveal a very high initial temperature, consistent with calculations based on the measured spectrum of low momentum direct photons. [@Csanad:2012hr] Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported by the NK-101438 OTKA grant and the Bolyai Scholarship (Hungarian Academy of Sciences) of M. Csanád. The author would also like to thank the organizers for the possibility of participating in the first International Conference on New Frontiers in Physics. [^1]:
--- abstract: 'In this expository and survey paper, along one of the main lines of bounding the ratio of two gamma functions, we look back and analyse some inequalities, several complete monotonicity results on functions involving ratios of two gamma or $q$-gamma functions, and necessary and sufficient conditions for functions involving ratios of two gamma or $q$-gamma functions to be logarithmically completely monotonic.' address: 'Research Institute of Mathematical Inequality Theory, Henan Polytechnic University, Jiaozuo City, Henan Province, 454010, China' author: - Feng Qi date: - Drafted on 7 April 2008 in Melbourne - 'Completed on Sunday 22 September 2008 in Carlisle B, VU Student Village, Australia' - title: 'Bounds for the ratio of two gamma functions—From Wendel’s and related inequalities to logarithmically completely monotonic functions' --- [^1] [^2] Introduction ============ The gamma and $q$-gamma functions --------------------------------- It is well-known that the classical Euler gamma function may be defined by $$\label{egamma} \Gamma(x)=\int^\infty_0t^{x-1} e^{-t}\operatorname{d\mspace{-2mu}}t,\quad x>0.$$ The logarithmic derivative of $\Gamma(x)$, denoted by $\psi(x)=\frac{\Gamma'(x)}{\Gamma(x)}$, is called the psi or digamma function, and $\psi^{(k)}(x)$ for $k\in \mathbb{N}$ are called the polygamma functions. It is common knowledge that the special functions $\Gamma(x)$, $\psi(x)$ and $\psi^{(k)}(x)$ for $k\in\mathbb{N}$ are fundamental and important and have extensive applications in mathematical sciences. The $q$-analogues of $\Gamma$ and $\psi$ are defined [@andrews pp.
493–496] for $x>0$ by $$\begin{gathered} \label{q-gamma-dfn} \Gamma_q(x)=(1-q)^{1-x}\prod_{i=0}^\infty\frac{1-q^{i+1}}{1-q^{i+x}},\quad 0<q<1,\\ \label{q-gamma-dfn-q>1} \Gamma_q(x)=(q-1)^{1-x}q^{\binom{x}2}\prod_{i=0}^\infty\frac{1-q^{-(i+1)}}{1-q^{-(i+x)}}, \quad q>1,\end{gathered}$$ and $$\begin{aligned} \label{q-gamma-1.4} \psi_q(x)=\frac{\Gamma_q'(x)}{\Gamma_q(x)}&=-\ln(1-q)+\ln q \sum_{k=0}^\infty\frac{q^{k+x}}{1-q^{k+x}}\\ &=-\ln(1-q)-\int_0^\infty\frac{e^{-xt}}{1-e^{-t}}\operatorname{d\mspace{-2mu}}\gamma_q(t) \label{q-gamma-1.5}\end{aligned}$$ for $0<q<1$, where $\operatorname{d\mspace{-2mu}}\gamma_q(t)$ is a discrete measure with positive masses $-\ln q$ at the positive points $-k\ln q$ for $k\in\mathbb{N}$, more accurately, $$\gamma_q(t)= \begin{cases} -\ln q\sum\limits_{k=1}^\infty\delta(t+k\ln q),&0<q<1,\\ t,&q=1. \end{cases}$$ See [@Ismail-Muldoon-119 p. 311]. The $q$-gamma function $\Gamma_q(z)$ has the following basic properties: $$\lim_{q\to1^+}\Gamma_q(z)=\lim_{q\to1^-}\Gamma_q(z)=\Gamma(z)\quad \text{and}\quad \Gamma_q(x)=q^{\binom{x-1}2}\Gamma_{1/q}(x).$$ The definition and properties of completely monotonic functions --------------------------------------------------------------- A function $f$ is said to be completely monotonic on an interval $I$ if $f$ has derivatives of all orders on $I$ and $(-1)^{n}f^{(n)}(x)\ge0$ for $x \in I$ and $n \ge0$. The class of completely monotonic functions has the following basic properties. \[p.161-widder\] A necessary and sufficient condition that $f(x)$ should be completely monotonic for $0<x<\infty$ is that $$f(x)=\int_0^\infty e^{-xt}\operatorname{d\mspace{-2mu}}\alpha(t),$$ where $\alpha(t)$ is nondecreasing and the integral converges for $0<x<\infty$. \[p.83-bochner\] If $f(x)$ is completely monotonic on $I$, $g(x)\in I$, and $g'(x)$ is completely monotonic on $(0,\infty)$, then $f(g(x))$ is completely monotonic on $(0,\infty)$. 
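The product in (\[q-gamma-dfn\]) converges geometrically in the number of retained factors, so these definitions are easy to probe numerically. The sketch below (function names are our own) recovers $\Gamma(5)=24$ in the $q\to1^-$ limit and checks the exact $q$-analogue $\Gamma_q(x+1)=\frac{1-q^x}{1-q}\,\Gamma_q(x)$ of the functional equation $\Gamma(x+1)=x\Gamma(x)$:

```python
import math

def q_gamma(x, q, terms=100000):
    # Gamma_q(x) = (1-q)^(1-x) * prod_{i>=0} (1-q^(i+1)) / (1-q^(i+x)),  0 < q < 1
    log_g = (1.0 - x) * math.log(1.0 - q)
    for i in range(terms):
        log_g += math.log(1.0 - q ** (i + 1)) - math.log(1.0 - q ** (i + x))
    return math.exp(log_g)

q, x = 0.999, 5.0
print(q_gamma(x, q))  # about 23.93, approaching Gamma(5) = 24 as q -> 1^-
ratio = q_gamma(x + 1, q) / (((1.0 - q ** x) / (1.0 - q)) * q_gamma(x, q))
print(ratio)  # ~ 1: the q-analogue of Gamma(x+1) = x Gamma(x) holds exactly
```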
The logarithmically completely monotonic functions -------------------------------------------------- A positive and $k$-times differentiable function $f(x)$ is said to be $k$-log-convex (or $k$-log-concave, respectively) with $k\ge2$ on an interval $I$ if and only if $[\ln f(x)]^{(k)}$ exists and $[\ln f(x)]^{(k)}\ge0$ (or $[\ln f(x)]^{(k)}\le0$, respectively) on $I$. A positive function $f(x)$ is said to be logarithmically completely monotonic on an interval $I\subseteq\mathbb{R}$ if it has derivatives of all orders on $I$ and its logarithm $\ln f(x)$ satisfies $(-1)^k[\ln f(x)]^{(k)}\ge0$ for $k\in\mathbb{N}$ on $I$. The notion “logarithmically completely monotonic function” was first put forward in [@Atanassov] without an explicit definition. This terminology was explicitly recovered in [@minus-one], whose revised and expanded version was formally published as [@minus-one.tex-rev]. It has been proved repeatedly in [@CBerg; @clark-ismail-NFAA.tex; @clark-ismail.tex; @compmon2; @absolute-mon.tex; @minus-one; @minus-one.tex-rev; @schur-complete] that a logarithmically completely monotonic function on an interval $I$ must also be completely monotonic on $I$. C. Berg points out in [@CBerg] that these functions are the same as those studied by Horn [@horn] under the name infinitely divisible completely monotonic functions. For more information, please refer to [@CBerg; @auscmrgmia] and related references therein. Outline of this paper --------------------- The history of bounding the ratio of two gamma functions now spans more than sixty years, dating back to the paper [@wendel] published by J. G. Wendel in 1948. The motivations for bounding the ratio of two gamma functions are various, including the establishment of asymptotic relations, refinements of Wallis’ formula, the approximation of $\pi$, and needs in statistics and other mathematical sciences.
In this expository and survey paper, along one of the main lines of bounding the ratio of two gamma functions, we look back and analyse some inequalities such as Wendel’s double inequality, Kazarinoff’s refinement of Wallis’ formula, Watson’s monotonicity, Gautschi’s double inequality, and Kershaw’s first double inequality, the complete monotonicity of several functions involving ratios of two gamma or $q$-gamma functions by Bustoz, Ismail, Lorch and Muldoon, and necessary and sufficient conditions for functions involving ratios of two gamma or $q$-gamma functions to be logarithmically completely monotonic. Some inequalities for bounding the ratio of two gamma functions =============================================================== In this section, we look back and analyse some related inequalities for bounding the ratio of two gamma functions. Wendel’s double inequality -------------------------- Our starting point is a paper published in 1948 by J. G. Wendel, which is the earliest one we have been able to locate. In order to establish the classical asymptotic relation $$\label{wendel-approx} \lim_{x\to\infty}\frac{\Gamma(x+s)}{x^s\Gamma(x)}=1$$ for real $s$ and $x$, by using Hölder’s inequality for integrals, J. G. Wendel [@wendel] elegantly proved the double inequality $$\label{wendel-inequal} \biggl(\frac{x}{x+s}\biggr)^{1-s}\le\frac{\Gamma(x+s)}{x^s\Gamma(x)}\le1$$ for $0<s<1$ and $x>0$.
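As a purely illustrative aside (ours, not part of the original survey), Wendel's double inequality and the limit it establishes are easy to spot-check numerically with Python's standard-library `math.lgamma`; the helper name `gamma_ratio` is an assumption made here for the sketch:

```python
import math

def gamma_ratio(x, s):
    """Gamma(x+s) / (x**s * Gamma(x)), computed through lgamma to
    avoid overflow of the gamma function at large x."""
    return math.exp(math.lgamma(x + s) - math.lgamma(x) - s * math.log(x))

# Wendel: (x/(x+s))**(1-s) <= Gamma(x+s)/(x**s * Gamma(x)) <= 1 for 0<s<1, x>0
for s in (0.1, 0.5, 0.9):
    for x in (0.5, 1.0, 10.0, 1000.0):
        r = gamma_ratio(x, s)
        assert (x / (x + s)) ** (1 - s) <= r <= 1.0

# Wendel's limit: the ratio tends to 1 as x grows
print(gamma_ratio(1000.0, 0.5))  # close to 1
```

The sampled strict inequalities hold with comfortable margins, so floating-point error is not a concern at these points.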
\[rem-2.1.1\] The inequality  can be rewritten for $0<s<1$ and $x>0$ as $$\label{wendel-inequal-rew} (x+s)^{s-1}\frac{\Gamma(x+1)}{\Gamma(x+s)}\le1\le x^{s-1}\frac{\Gamma(x+1)}{\Gamma(x+s)}.$$ The relation  results in $$\lim_{x\to\infty}(x+s)^{s-1}\frac{\Gamma(x+1)}{\Gamma(x+s)} =\lim_{x\to\infty}x^{s-1}\frac{\Gamma(x+1)}{\Gamma(x+s)}=1,$$ which suggests that the functions $$\label{2.5-function} (x+s)^{s-1}\frac{\Gamma(x+1)}{\Gamma(x+s)}\quad \text{and}\quad x^{s-1}\frac{\Gamma(x+1)}{\Gamma(x+s)}$$ or $$\label{2.5-function-2} (x+s)\biggl[\frac{\Gamma(x+1)}{\Gamma(x+s)}\biggr]^{1/(s-1)}\quad \text{and}\quad x\biggl[\frac{\Gamma(x+1)}{\Gamma(x+s)}\biggr]^{1/(s-1)}$$ are possibly increasing and decreasing, respectively. In [@abram p. 257, 6.1.46], the following limit was listed: For real numbers $a$ and $b$, $$\label{gamma-ratio-lim} \lim_{x\to\infty}\biggl[x^{b-a}\frac{\Gamma(x+a)}{\Gamma(x+b)}\biggr]=1.$$ The limits  and  are equivalent to each other since $$x^{t-s}\frac{\Gamma(x+s)}{\Gamma(x+t)}=\frac{\Gamma(x+s)}{x^s\Gamma(x)} \cdot\frac{x^t\Gamma(x)}{\Gamma(x+t)}.$$ Hence, the limit  may reasonably be called Wendel’s limit. For unknown reasons, Wendel’s paper [@wendel] was seemingly neglected by nearly all mathematicians for more than fifty years, until it was mentioned in [@Merkle-JMAA-99], to the best of my knowledge. Kazarinoff’s double inequality {#Wallis-section} ------------------------------ Starting from $$\label{John-Wallis-ineq} \frac1{\sqrt{\pi(n+1/2)}}<\frac{(2n-1)!!}{(2n)!!}<\frac1{\sqrt{\pi n}},\quad n\in\mathbb{N},$$ one form of the celebrated formula of John Wallis, which had been quoted for more than a century before the 1950s by writers of textbooks, D. K.
Kazarinoff proved in [@Kazarinoff-56] that the sequence $\theta(n)$ defined by $$\label{theta-dfn-kazar} \frac{(2n-1)!!}{(2n)!!} =\frac1{\sqrt{\pi[n+\theta(n)]}}$$ satisfies $\frac14<\theta(n)<\frac12$ for $n\in\mathbb{N}$, that is, $$\label{Wallis'inequality} \frac1{\sqrt{\pi(n+1/2)}}<\frac{(2n-1)!!}{(2n)!!} <\frac1{\sqrt{\pi(n+1/4)}},\quad n\in\mathbb{N}.$$ It was said in [@Kazarinoff-56] that it is unquestionable that inequalities similar to  can be improved indefinitely but at a sacrifice of simplicity, which is why the inequality  had survived so long. Kazarinoff’s proof of  is based upon the property $$\label{Phi-ineq} [\ln\phi(t)]''-\{[\ln\phi(t)]'\}^2>0$$ of the function $$\phi(t)=\int_0^{\pi/2}\sin^tx\operatorname{d\mspace{-2mu}}x=\frac{\sqrt\pi\,}2\cdot\frac{\Gamma((t+1)/2)}{\Gamma((t+2)/2)}$$ for $-1<t<\infty$. The inequality  was proved by making use of the well-known Legendre’s formula $$\label{Legendre's-formula} \psi(x)=-\gamma+\int_0^1\frac{t^{x-1}-1}{t-1}\operatorname{d\mspace{-2mu}}t$$ for $x>0$ and estimating the integrals $$\int_0^1\frac{x^t}{1+x}\operatorname{d\mspace{-2mu}}x\quad\text{and}\quad \int_0^1\frac{x^t\ln x}{1+x}\operatorname{d\mspace{-2mu}}x.$$ Since  is equivalent to the statement that the reciprocal of $\phi(t)$ has an everywhere negative second derivative, it follows that, for any positive $t$, $\phi(t)$ is less than the harmonic mean of $\phi(t-1)$ and $\phi(t+1)$, which implies $$\label{karz-2.17-ineq} \frac{\Gamma((t+1)/2)}{\Gamma((t+2)/2)}<\frac2{\sqrt{2t+1}},\quad t>-\frac12.$$ As a subcase of this result, the right-hand side inequality in  is established. Replacing $t$ by $2t$ in  and rearranging yield $$\label{karz-2.17-ineq-rew} \frac{\Gamma(t+1)}{\Gamma(t+1/2)}>\sqrt{t+\frac14}\quad\Longleftrightarrow\quad \biggl(t+\frac14\biggr)^{1/2-1}\frac{\Gamma(t+1)}{\Gamma(t+1/2)}>1$$ for $t>-\frac14$.
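Kazarinoff's bounds $\frac14<\theta(n)<\frac12$ lend themselves to a direct numerical check; in this illustrative sketch (the function names are ours) the double factorials are accumulated as a running product:

```python
import math

def wallis_ratio(n):
    """(2n-1)!!/(2n)!! as a running product, stable for moderate n."""
    r = 1.0
    for k in range(1, n + 1):
        r *= (2 * k - 1) / (2 * k)
    return r

def theta(n):
    """theta(n) defined by (2n-1)!!/(2n)!! = 1/sqrt(pi*(n+theta(n)))."""
    return 1.0 / (math.pi * wallis_ratio(n) ** 2) - n

# Kazarinoff: 1/4 < theta(n) < 1/2 for every natural n
for n in (1, 2, 10, 100, 1000):
    assert 0.25 < theta(n) < 0.5
print(theta(1), theta(1000))  # the sequence drifts down towards 1/4
```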
From , it follows that $$\lim_{t\to\infty}\biggl(t+\frac14\biggr)^{1/2-1}\frac{\Gamma(t+1)}{\Gamma(t+1/2)}=1.$$ This suggests that the function $$\biggl(t+\frac14\biggr)^{1/2-1}\frac{\Gamma(t+1)}{\Gamma(t+1/2)}\quad \text{or}\quad \biggl(t+\frac14\biggr)\biggl[\frac{\Gamma(t+1)}{\Gamma(t+1/2)}\biggr]^{1/(1/2-1)}$$ is perhaps decreasing or, more strongly, logarithmically completely monotonic. \[kazarinoff-rem-5\] The inequality  may be rewritten as $$\psi'\biggl(\frac{t+1}2\biggr)-\psi'\biggl(\frac{t+2}2\biggr) >\biggl[\psi\biggl(\frac{t+1}2\biggr)-\psi\biggl(\frac{t+2}2\biggr)\biggr]^2$$ for $t>-1$. Letting $u=\frac{t+1}2$ in the above inequality yields $$\psi'(u)-\psi'\biggl(u+\frac12\biggr) >\biggl[\psi(u)-\psi\biggl(u+\frac12\biggr)\biggr]^2$$ for $u>0$. This inequality has been generalized in [@Comp-Mon-Digamma-Trigamma-Divided.tex] to the complete monotonicity of a function involving divided differences of the digamma and trigamma functions as follows. \[CMDT-divided-thm\] For real numbers $s$, $t$, $\alpha=\min\{s,t\}$ and $\lambda$, let $$\label{Delta-lambda-dfn} \Delta_{s,t;\lambda}(x)=\begin{cases}\bigg[\dfrac{\psi(x+t) -\psi(x+s)}{t-s}\bigg]^2 +\lambda\dfrac{\psi'(x+t)-\psi'(x+s)}{t-s},&s\ne t\\ [\psi'(x+s)]^2+\lambda\psi''(x+s),&s=t \end{cases}$$ on $(-\alpha,\infty)$. Then the function $\Delta_{s,t;\lambda}(x)$ has the following complete monotonicity: 1. For $0<|t-s|<1$, 1. the function $\Delta_{s,t;\lambda}(x)$ is completely monotonic on $(-\alpha,\infty)$ if and only if $\lambda\le1$, 2. so is the function $-\Delta_{s,t;\lambda}(x)$ if and only if $\lambda\ge\frac1{|t-s|}$; 2. For $|t-s|>1$, 1. the function $\Delta_{s,t;\lambda}(x)$ is completely monotonic on $(-\alpha,\infty)$ if and only if $\lambda\le\frac1{|t-s|}$, 2. so is the function $-\Delta_{s,t;\lambda}(x)$ if and only if $\lambda\ge1$; 3. For $s=t$, the function $\Delta_{s,s;\lambda}(x)$ is completely monotonic on $(-s,\infty)$ if and only if $\lambda\le1$; 4. For $|t-s|=1$, 1.
the function $\Delta_{s,t;\lambda}(x)$ is completely monotonic if and only if $\lambda<1$, 2. so is the function $-\Delta_{s,t;\lambda}(x)$ if and only if $\lambda>1$, 3. and $\Delta_{s,t;1}(x)\equiv0$. Taking $\lambda=s-t>0$ in Theorem \[CMDT-divided-thm\] shows that the function $\frac{\Gamma(x+s)}{\Gamma(x+t)}$ on $(-t,\infty)$ is increasingly convex if $s-t>1$ and increasingly concave if $0<s-t<1$. Watson’s monotonicity {#Watson-sec} --------------------- In 1959, motivated by the result in [@Kazarinoff-56] mentioned in Section \[Wallis-section\], G. N. Watson [@waston] observed that $$\begin{gathered} \label{watson-formula} \frac1x\cdot\frac{[\Gamma(x+1)]^2}{[\Gamma(x+1/2)]^2} ={}_2F_1\biggl(-\frac12,-\frac12;x;1\biggr)\\* =1+\frac1{4x}+\frac1{32x(x+1)} +\sum_{r=3}^\infty\frac{[(-1/2)\cdot(1/2)\cdot(3/2)\dotsm(r-3/2)]^2} {r!x(x+1)\dotsm(x+r-1)}\end{gathered}$$ for $x>-\frac12$, which implies that the more general function $$\label{theta-dfn} \theta(x)=\biggl[\frac{\Gamma(x+1)}{\Gamma(x+1/2)}\biggr]^2-x$$ for $x>-\frac12$, whose special case is the sequence $\theta(n)$ for $n\in\mathbb{N}$ defined in , is decreasing, with $$\lim_{x\to\infty}\theta(x)=\frac14\quad \text{and}\quad \lim_{x\to(-1/2)^+}\theta(x)=\frac12.$$ This readily implies the sharp inequalities $$\label{theta-l-u-b} \frac14<\theta(x)<\frac12$$ for $x>-\frac12$, $$\label{watson-special-ineq} \sqrt{x+\frac14}\,< \frac{\Gamma(x+1)}{\Gamma(x+1/2)}\le \sqrt{x+\frac14+\biggl[\frac{\Gamma(3/4)}{\Gamma(1/4)}\biggr]^2}\, =\sqrt{x+0.36423\dotsm}$$ for $x\ge-\frac14$, and, by the Wallis cosine formula [@WallisFormula.html], $$\label{best-bounds-Wallis} \frac{1}{\sqrt{\pi(n+{4}/{\pi}-1)}}\le\frac{(2n-1)!!}{(2n)!!} <\frac{1}{\sqrt{\pi(n+1/4)}},\quad n\in\mathbb{N}.$$ In [@waston], an alternative proof of the double inequality  was also provided. It is easy to see that the inequality  extends and improves  when $s=\frac12$. The left-hand side inequality in  is better than the corresponding one in .
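Watson's assertions about $\theta(x)$ can likewise be probed numerically; the sketch below (our notation, purely illustrative) evaluates $\theta$ through `lgamma` and checks the decrease and the bounds on a few sample points:

```python
import math

def theta(x):
    """theta(x) = (Gamma(x+1)/Gamma(x+1/2))**2 - x for x > -1/2."""
    return math.exp(2 * (math.lgamma(x + 1) - math.lgamma(x + 0.5))) - x

xs = [-0.49, -0.25, 0.0, 1.0, 10.0, 1000.0]
vals = [theta(x) for x in xs]
# theta decreases from 1/2 (as x -> -1/2+) towards 1/4 (as x -> infinity)
assert all(u > v for u, v in zip(vals, vals[1:]))
assert all(0.25 < v < 0.5 for v in vals)
print(vals[1])  # at x = -1/4 this equals 1/4 + (Gamma(3/4)/Gamma(1/4))**2
```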
The formula  implies the complete monotonicity of the function $\theta(x)$ on $\bigl(-\frac12,\infty\bigr)$ defined by . Gautschi’s double inequalities ------------------------------ The first result of the paper [@gaut] was the double inequality $$\label{gaut-3-ineq} \frac{(x^p+2)^{1/p}-x}2<e^{x^p}\int_x^\infty e^{-t^p}\operatorname{d\mspace{-2mu}}t\le c_p\biggl[\biggl(x^p+\frac1{c_p}\biggr)^{1/p}-x\biggr]$$ for $x\ge0$ and $p>1$, where $$c_p=\biggl[\Gamma\biggl(1+\frac1p\biggr)\biggr]^{p/(p-1)}$$ or $c_p=1$. By an easy transformation, the inequality  was written in terms of the complementary gamma function $$\Gamma(a,x)=\int_x^\infty e^{-t}t^{a-1}\operatorname{d\mspace{-2mu}}t$$ as $$\label{gaut-4-ineq} \frac{p[(x+2)^{1/p}-x^{1/p}]}2<e^x\Gamma\biggl(\frac1p,x\biggr)\le pc_p\biggl[\biggl(x+\frac1{c_p}\biggr)^{1/p}-x^{1/p}\biggr]$$ for $x\ge0$ and $p>1$. In particular, letting $p\to\infty$, the double inequality $$\frac12\ln\biggl(1+\frac2x\biggr)\le e^xE_1(x)\le\ln\biggl(1+\frac1x\biggr)$$ for the exponential integral $E_1(x)=\Gamma(0,x)$ for $x>0$ was derived from , in which the bounds exhibit the logarithmic singularity of $E_1(x)$ at $x=0$. As a direct consequence of the inequality  for $p=\frac1s$, $x=0$ and $c_p=1$, the following simple inequality for the gamma function was deduced: $$\label{gaut-none-ineq} 2^{s-1}\le\Gamma(1+s)\le1,\quad 0\le s\le 1.$$ The second result of the paper [@gaut] was the following inequality, sharper and more general than : $$\label{gaut-6-ineq} e^{(s-1)\psi(n+1)}\le\frac{\Gamma(n+s)}{\Gamma(n+1)}\le n^{s-1}$$ for $0\le s\le1$ and $n\in\mathbb{N}$. It was obtained by proving that the function $$f(s)=\frac1{1-s}\ln\frac{\Gamma(n+s)}{\Gamma(n+1)}$$ is monotonically decreasing for $0\le s<1$.
Since $\psi(n)<\ln n$, it was derived from the inequality  that $$\label{gaut-6-ineq-simp} \biggl(\frac1{n+1}\biggr)^{1-s}\le\frac{\Gamma(n+s)}{\Gamma(n+1)}\le\biggl(\frac1n\biggr)^{1-s}, \quad 0\le s\le1,$$ which was also rewritten as $$\label{euler-gaut} \frac{n!(n+1)^{s-1}}{(s+1)(s+2)\dotsm(s+n-1)}\le\Gamma(1+s) \le\frac{(n-1)!n^s}{(s+1)(s+2)\dotsm(s+n-1)},$$ and so a simple proof of Euler’s product formula in the segment $0\le s\le1$ was shown by letting $n\to\infty$ in . For more information on refining the inequality , please refer to [@incom-gamma-L-N; @qi-senlin-mia; @Qi-Mei-99-gamma] and related references therein. The double inequalities  and  can be rearranged as $$\label{gaut-ineq-1} n^{1-s}\le\frac{\Gamma(n+1)}{\Gamma(n+s)}\le\exp((1-s)\psi(n+1))$$ and $$\label{gaut-ineq-2} n^{1-s}\le\frac{\Gamma(n+1)}{\Gamma(n+s)}\le (n+1)^{1-s}$$ for $n\in\mathbb{N}$ and $0\le s\le 1$. Furthermore, the inequality  can be rewritten as $$\label{gau-rew-1} n^{1-s}\frac{\Gamma(n+s)}{\Gamma(n+1)}\le1\le (n+1)^{1-s}\frac{\Gamma(n+s)}{\Gamma(n+1)}$$ or $$\label{gau-rew-2} n\biggl[\frac{\Gamma(n+s)}{\Gamma(n+1)}\biggr]^{1/(1-s)}\le1\le (n+1)\biggl[\frac{\Gamma(n+s)}{\Gamma(n+1)}\biggr]^{1/(1-s)}.$$ This supplies some clues suggesting that the sequences at the very ends of the inequalities  and  are monotonic. The left-hand side inequality in  and the upper bound in  have the following relationship $$\label{wendel-gautschi-comp} (n+s)^{1-s}\le\exp((1-s)\psi(n+1))$$ for $0\le s\le\frac12$ and $n\in\mathbb{N}$, and the inequality  reverses for $s>e^{1-\gamma}-1=0.52620\dotsm$, since the function $$\label{Q(x)-dfn} Q(x)=e^{\psi(x+1)}-x$$ was proved in [@Infinite-family-Digamma.tex Theorem 2] to be strictly decreasing on $(-1,\infty)$ and $$\label{Q-infty-lim} \lim_{x\to\infty}Q(x)=\frac12.$$ This means that Wendel’s double inequality  and Gautschi’s first double inequality  do not include each other, but they both contain Gautschi’s second double inequality .
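Gautschi's second double inequality, in the rearranged form $n^{1-s}\le\Gamma(n+1)/\Gamma(n+s)\le(n+1)^{1-s}$, can be sanity-checked on a grid of points; this is a small illustrative sketch (equality occurs at $s=0$ and $s=1$, hence the tiny relative tolerances):

```python
import math

def gautschi_ratio(n, s):
    """Gamma(n+1)/Gamma(n+s), evaluated through lgamma."""
    return math.exp(math.lgamma(n + 1) - math.lgamma(n + s))

for n in (1, 2, 5, 50):
    for s in (0.0, 0.25, 0.5, 0.75, 1.0):
        r = gautschi_ratio(n, s)
        assert n ** (1 - s) * (1 - 1e-9) <= r <= (n + 1) ** (1 - s) * (1 + 1e-9)
```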
In the reviews on the paper [@gaut] by the Mathematical Reviews and the Zentralblatt MATH, there is not a single word commenting on the inequalities  and . However, these two double inequalities later became a major source of a large amount of study on bounding the ratio of two gamma functions. Kershaw’s first double inequality {#kershaw-sec} --------------------------------- Inspired by the inequality , among other things, D. Kershaw presented in [@kershaw] the following double inequality $$\label{gki1} \biggl(x+\frac{s}2\biggr)^{1-s}<\frac{\Gamma(x+1)}{\Gamma(x+s)} <\biggl[x-\frac12+\biggl(s+\frac14\biggr)^{1/2}\biggr]^{1-s}$$ for $0<s<1$ and $x>0$. In the literature, it is called Kershaw’s first double inequality for the ratio of two gamma functions. It is easy to see that the inequality  refines and extends the inequalities  and . The inequality  may be rearranged as $$\label{gki1-rew} \biggl[x-\frac12+\biggl(s+\frac14\biggr)^{1/2}\biggr]^{s-1}\frac{\Gamma(x+1)}{\Gamma(x+s)} <1<\biggl(x+\frac{s}2\biggr)^{s-1}\frac{\Gamma(x+1)}{\Gamma(x+s)}$$ for $x>0$ and $0<s<1$.
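Kershaw's bounds are also easy to corroborate numerically on sample points; a minimal illustrative sketch (the helper name `ratio` is ours):

```python
import math

def ratio(x, s):
    """Gamma(x+1)/Gamma(x+s), evaluated through lgamma."""
    return math.exp(math.lgamma(x + 1) - math.lgamma(x + s))

# Kershaw: (x+s/2)**(1-s) < Gamma(x+1)/Gamma(x+s) < (x-1/2+sqrt(s+1/4))**(1-s)
for s in (0.1, 0.5, 0.9):
    for x in (0.1, 1.0, 10.0, 100.0):
        lo = (x + s / 2) ** (1 - s)
        hi = (x - 0.5 + math.sqrt(s + 0.25)) ** (1 - s)
        assert lo < ratio(x, s) < hi
```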
By virtue of  or , it is easy to see that $$\lim_{x\to\infty}\biggl[x-\frac12+\biggl(s+\frac14\biggr)^{1/2}\biggr]^{s-1} \frac{\Gamma(x+1)}{\Gamma(x+s)} =\lim_{x\to\infty}\biggl(x+\frac{s}2\biggr)^{s-1}\frac{\Gamma(x+1)}{\Gamma(x+s)}=1.$$ This suggests the monotonicity and, more strongly, the logarithmically complete monotonicity of the functions $$\biggl[x-\frac12+\biggl(s+\frac14\biggr)^{1/2}\biggr]^{s-1} \frac{\Gamma(x+1)}{\Gamma(x+s)} \quad\text{and}\quad \biggl(x+\frac{s}2\biggr)^{s-1}\frac{\Gamma(x+1)}{\Gamma(x+s)}$$ or $$\biggl[x-\frac12+\biggl(s+\frac14\biggr)^{1/2}\biggr] \biggl[\frac{\Gamma(x+1)}{\Gamma(x+s)}\biggr]^{1/(s-1)} \quad\text{and}\quad \biggl(x+\frac{s}2\biggr)\biggl[\frac{\Gamma(x+1)}{\Gamma(x+s)}\biggr]^{1/(s-1)}.$$ Some completely monotonic functions involving ratios of two gamma or ${q}$-gamma functions {#sec-cmf-ismail} ========================================================================================== In this section, we look back and analyse the complete monotonicity of several functions involving ratios of two gamma or $q$-gamma functions. Ismail-Lorch-Muldoon’s monotonicity results ------------------------------------------- Motivated by work on inequalities for the ratio of two gamma functions in [@kershaw; @laforgia-mc-1984; @lorch-ultra] and [@J.Wimp p. 155], M. E. H. Ismail, L. Lorch and M. E. Muldoon pointed out at the beginning of [@Ismail-Lorch-Muldoon] that simple monotonicity properties of the ratio of two gamma functions are useful. In [@Oliver pp. 118–119], the asymptotic formula $$\label{Oliver-asymp-formula-gamma-ratio} z^{b-a}\frac{\Gamma(z+a)}{\Gamma(z+b)}\sim1+\frac{(a-b)(a+b-1)}{2z} +\dotsm$$ as $z\to\infty$ along any curve joining $z=0$ and $z=\infty$ is listed, where $z\ne-a,-a-1,\dotsc$ and $z\ne-b,-b-1,\dotsc$.
Suggested by it, the following complete monotonicity was proved in [@Ismail-Lorch-Muldoon Theorem 2.4]: Let $a>b\ge0$, $a+b\ge1$ and $$\label{i-l-m-f-2-2} h(x)=\ln\biggl[x^{a-b}\frac{\Gamma(x+b)}{\Gamma(x+a)}\biggr].$$ Then both $h'(x)$ and $$\label{i-l-m-f-2} x^{b-a}\frac{\Gamma(x+a)}{\Gamma(x+b)}$$ are completely monotonic on $(0,\infty)$; The results fail when $a+b<1$ replaces $a+b\ge1$ in the hypotheses. Meanwhile, the following $q$-analogue of [@Ismail-Lorch-Muldoon Theorem 2.4] was also provided in [@Ismail-Lorch-Muldoon Theorem 2.5]: Let $a>b\ge0$, $a+b\ge1$, $q>0$, $q\ne1$ and $$h_q(x)=\ln\biggl[|1-q^x|^{a-b}\frac{\Gamma_q(x+b)}{\Gamma_q(x+a)}\biggr].$$ Then $h_q'(x)$ is completely monotonic on $(0,\infty)$; So is the function $$\label{i-l-m-f-2-q} |1-q^x|^{b-a}\frac{\Gamma_q(x+a)}{\Gamma_q(x+b)};$$ The result fails if $a+b<1$. The proof of [@Ismail-Lorch-Muldoon Theorem 2.4] can be outlined as follows: Using the integral representation $$\label{gauss-formula-psi} \psi(z)=-\gamma+\int_0^\infty\frac{e^{-t}-e^{-tz}}{1-e^{-t}}\operatorname{d\mspace{-2mu}}t$$ for $\operatorname{Re}z>0$ yields $$\label{h'(x)-int} h'(x)=\int_0^\infty\biggl[\frac{e^{-at}-e^{-bt}}{1-e^{-t}}+a-b\biggr]e^{-xt}\operatorname{d\mspace{-2mu}}t.$$ It was established in [@Ismail-Lorch-Muldoon Lemma 4.1] that if $0\le b<a$, $a+b\ge1$ and $b^2+(a-1)^2\ne0$, then $$\label{ILM-ineq-exp} \frac{w^b-w^a}{1-w}<a-b,\quad 0<w<1;$$ The result fails if the condition $a+b\ge1$ is replaced by $a+b<1$. Combining  and  with Theorem \[p.83-bochner\] results in [@Ismail-Lorch-Muldoon Theorem 2.4]. The proof of [@Ismail-Lorch-Muldoon Theorem 2.5] was finished by using the formula , the inequality , Theorem \[p.161-widder\] and Theorem \[p.83-bochner\]. 
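The elementary inequality in the lemma above, together with its failure for $a+b<1$, is visible in a direct numerical test; a minimal sketch with sample points chosen by us:

```python
# Ismail-Lorch-Muldoon Lemma 4.1: if 0 <= b < a, a+b >= 1 and
# b**2 + (a-1)**2 != 0, then (w**b - w**a)/(1-w) < a - b for all 0 < w < 1.
for (a, b) in ((1.0, 0.5), (2.0, 0.0), (1.5, 1.0)):
    for w in (0.01, 0.3, 0.7, 0.99):
        assert (w**b - w**a) / (1 - w) < a - b

# The conclusion fails once a + b < 1: e.g. a = 0.5, b = 0 at w = 0.01
a, b, w = 0.5, 0.0, 0.01
assert (w**b - w**a) / (1 - w) > a - b
```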
It is noted that [@Ismail-Lorch-Muldoon Theorem 2.4 and Theorem 2.5] mentioned above can be restated using the terminology “logarithmically completely monotonic function” as follows: The functions defined by  and  are logarithmically completely monotonic on $(0,\infty)$ if and only if $a+b\ge1$ for $a>b\ge0$, $q>0$ and $q\ne1$. Bustoz-Ismail’s monotonicity results {#Bustoz-Ismail-sec} ------------------------------------ In [@Bustoz-and-Ismail], it was noticed that inequalities like  are “immediate consequences of the complete monotonicity of certain functions. Indeed, one should investigate monotonicity properties of functions involving quotients of gamma functions and as a by-product derive inequalities of the aforementioned type. This approach is simpler and yields more general results.” In [@Bustoz-and-Ismail], it was revealed that 1. [@Bustoz-and-Ismail Theorem 1]: the function $$\label{ismail-f(x)-thm1} \frac1{(x+c)^{1/2}}\cdot\frac{\Gamma(x+1)}{\Gamma(x+1/2)},\quad x>\max\biggl\{-\frac12,-c\biggr\}$$ is completely monotonic on $(-c,\infty)$ if $c\le\frac14$, so is the reciprocal of  on $\bigl[-\frac12,\infty\bigr)$ if $c\ge\frac12$; 2. [@Bustoz-and-Ismail Theorem 3]: the function $$\label{ismail-g(x)-thm1} (x+c)^{a-b}\frac{\Gamma(x+b)}{\Gamma(x+a)}$$ for $1\ge b-a>0$ is completely monotonic on the interval $(\max\{-a,-c\},\infty)$ if $c\le\frac{a+b-1}2$, so is the reciprocal of  on $(\max\{-a,-c\},\infty)$ if $c\ge a$; 3. [@Bustoz-and-Ismail Theorem 7]: the function $$\label{ismail-h(x)-thm2} \frac{\Gamma(x+1)}{\Gamma(x+s)}\biggl(x+\frac{s}2\biggr)^{s-1}$$ for $0\le s\le1$ is completely monotonic on $(0,\infty)$; when $0<s<1$, it satisfies $(-1)^nf^{(n)}(x)>0$ for $x>0$; 4. [@Bustoz-and-Ismail Theorem 8]: the function $$\label{thm8-ismail-one} \Biggl(x-\frac12+\sqrt{s+\frac14}\,\Biggr)^{1-s}\frac{\Gamma(x+s)}{\Gamma(x+1)}$$ for $0<s<1$ is strictly decreasing on $(0,\infty)$. 
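As an illustration, the strict decrease asserted in [@Bustoz-and-Ismail Theorem 8] shows up immediately on a numerical sample (a sketch with our helper name `f`, not code from the cited paper):

```python
import math

def f(x, s):
    """(x - 1/2 + sqrt(s+1/4))**(1-s) * Gamma(x+s)/Gamma(x+1)."""
    return ((x - 0.5 + math.sqrt(s + 0.25)) ** (1 - s)
            * math.exp(math.lgamma(x + s) - math.lgamma(x + 1)))

# Theorem 8: f(., s) is strictly decreasing on (0, inf) for 0 < s < 1,
# approaching 1 from above as x -> infinity.
for s in (0.2, 0.5, 0.8):
    vals = [f(x, s) for x in (0.1, 0.5, 1.0, 5.0, 50.0)]
    assert all(u > v for u, v in zip(vals, vals[1:]))
    assert vals[-1] > 1.0
```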
\[lemma2-bus-ism\] A special case of Theorem \[p.83-bochner\] says that the function $\exp(-h(x))$ is completely monotonic on an interval $I$ if $h'(x)$ is completely monotonic on $I$. This was restated as [@Bustoz-and-Ismail Lemma 2.1]. In [@er p. 15 and p. 20], the following integral representation was listed: For $\operatorname{Re}z>0$, $$\label{psi-frac12} \psi\biggl(\frac12+\frac{z}2\biggr)-\psi\biggl(\frac{z}2\biggr) =2\int_0^\infty\frac{e^{-zt}}{1+e^{-t}}\operatorname{d\mspace{-2mu}}t.$$ The formula  and [@Bustoz-and-Ismail Lemma 2.1] are basic tools of the proof of [@Bustoz-and-Ismail Theorem 1]. The basic tools of the proof of [@Bustoz-and-Ismail Theorem 3] include [@Bustoz-and-Ismail Lemma 2.1] mentioned in Remark \[lemma2-bus-ism\], the formula , and the non-negativeness of the function $$\label{omega-ismail-nonneg-1} \omega(t)=2(b-a)\sinh\frac{t}2-2\sinh\frac{(b-a)t}2$$ for $b>a$ and $t\ge0$ and the function $$\label{omega-ismail-nonneg-2} (a-b)(1-e^{-t})+e^{(c-a)t}-e^{(c-b)t}$$ for $b>a$, $c\ge a$ and $t\ge0$. The proof of the complete monotonicity of the function  in [@Bustoz-and-Ismail Theorem 7] relies on the series representation $$\label{series-repr} \psi(x)=-\gamma-\frac1x+\sum_{n=1}^\infty\biggl(\frac1n-\frac1{x+n}\biggr)$$ in [@er p. 15], the positivity of the function $$\label{sinh-sinh-ismail} (1-s)\sinh t-\sinh[(1-s)t]$$ on $(0,\infty)$ for $0<s<1$, and the above Theorem \[p.83-bochner\] applied to $f(x)=e^{-x}$, as mentioned in Remark \[lemma2-bus-ism\]. The proof of the decreasing monotonicity of the function  just used the formula  and the conclusion stated in Remark \[lemma2-bus-ism\]. In fact, under corresponding assumptions, the functions , , and their reciprocals had been proved in [@Bustoz-and-Ismail] to be logarithmically completely monotonic. Ismail-Muldoon’s monotonicity results ------------------------------------- It was claimed in [@Ismail-Muldoon-119 p.
310] that “Many inequalities for special functions follow from monotonicity properties. Often such inequalities are special cases of the complete monotonicity of related special functions. For example, an inequality of the form $f(x)\ge g(x)$ for $x\in[a,\infty)$ with equality if and only if $x=a$ may be a disguised form of the complete monotonicity of $\frac{g(\varphi(x))}{f(\varphi(x))}$ where $\phi(x)$ is a nondecreasing function on $(a,\infty)$ and $\frac{g(\varphi(a))}{f(\varphi(a))}=1$”. Among other things, suggested by [@Bustoz-and-Ismail Theorem 3] mentioned in the above section, the following complete monotonicity was presented in [@Ismail-Muldoon-119 Theorem 2.5]: Let $a<b\le a+1$ and $$g(x)=\biggl(\frac{1-q^{x+c}}{1-q}\biggr)^{a-b}\frac{\Gamma_q(x+b)}{\Gamma_q(x+a)}.$$ Then $-[\ln g(x)]'$ is completely monotonic on $(-c,\infty)$ if $0\le c\le\frac{a+b-1}2$ and $[\ln g(x)]'$ is completely monotonic on $(-a,\infty)$ if $c\ge a\ge0$; Neither is completely monotonic for $\frac{a+b-1}2<c<a$. As a supplement of [@Ismail-Muldoon-119 Theorem 2.5], it was proved separately in [@Ismail-Muldoon-119 Theorem 2.6] that the first derivative of the function $$\ln\biggl[\biggl(\frac{1-q^x}{1-q}\biggr)^a\frac{\Gamma_q(x)}{\Gamma_q(x+a)}\biggr],\quad 0<q<1$$ is completely monotonic on $(0,\infty)$ for $a\ge1$. The proof of [@Ismail-Muldoon-119 Theorem 2.5] depends on deriving $$\frac{\operatorname{d\mspace{-2mu}}{}}{\operatorname{d\mspace{-2mu}}x}\ln g(x)=-\int_0^\infty e^{-xt} \biggl[\frac{e^{-bt}-e^{-at}}{1-e^{-t}}+(b-a)e^{-ct}\biggr]\operatorname{d\mspace{-2mu}}\gamma_q(t)$$ and [@Ismail-Muldoon-119 Lemma 1.2]: Let $0<\alpha<1$. Then $$\label{Lemma1.2-Ismail-Muldoon-119} \alpha e^{(\alpha-1)t}<\frac{\sinh(\alpha t)}{\sinh t}<\alpha,\quad t>0.$$ The inequalities become equalities when $\alpha=1$ and they are reversed when $\alpha>1$. The proof of [@Ismail-Muldoon-119 Theorem 2.6] is similar to that of [@Ismail-Muldoon-119 Theorem 2.5]. 
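Lemma 1.2 of [@Ismail-Muldoon-119], together with its reversal for $\alpha>1$, is another statement that a few lines of Python can corroborate (the sample points are ours, purely for illustration):

```python
import math

# Ismail-Muldoon Lemma 1.2: for 0 < alpha < 1 and t > 0,
#   alpha*exp((alpha-1)*t) < sinh(alpha*t)/sinh(t) < alpha,
# with both inequalities reversed when alpha > 1.
for alpha in (0.1, 0.5, 0.9):
    for t in (0.01, 1.0, 5.0, 20.0):
        r = math.sinh(alpha * t) / math.sinh(t)
        assert alpha * math.exp((alpha - 1) * t) < r < alpha

for alpha in (1.5, 2.0):
    for t in (0.01, 1.0, 5.0):
        r = math.sinh(alpha * t) / math.sinh(t)
        assert alpha < r < alpha * math.exp((alpha - 1) * t)
```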
It is clear that Theorem 2.5 and Theorem 2.6 in [@Ismail-Muldoon-119] mentioned above can be rewritten using the phrase “logarithmically completely monotonic function”. From [@Ismail-Muldoon-119 Theorem 2.5], the following inequality was derived in [@Ismail-Muldoon-119 Theorem 3.3]: For $0<q\le1$, the inequality $$\label{i-m-q-gamma-ineq} \frac{\Gamma_q(x+1)}{\Gamma_q(x+s)}>\biggl(\frac{1-q^{x+s/2}}{1-q}\biggr)^{1-s}$$ holds for $0<s<1$ and $x>-\frac{s}2$. In [@Alzer-Math-Nachr-2001], it was pointed out that the inequality $$\label{alzer-q-gamma-ineq} \frac{\Gamma_q(x+1)}{\Gamma_q(x+s)}<\biggl(\frac{1-q^{x+s}}{1-q}\biggr)^{1-s},\quad s\in(0,1)$$ is also valid for $x>-s$. As refinements of  and , the following double inequality was presented in [@Alzer-Math-Nachr-2001 Theorem 3.1]: For real numbers $0<q\ne1$ and $s\in(0,1)$, the double inequality $$\biggl[\frac{1-q^{x+\alpha(q,s)}}{1-q}\biggr]^{1-s} <\frac{\Gamma_q(x+1)}{\Gamma_q(x+s)} <\biggl[\frac{1-q^{x+\beta(q,s)}}{1-q}\biggr]^{1-s},\quad x>0$$ holds with the best possible values $$\alpha(q,s)=\begin{cases} \dfrac{\ln[(q^s-q)/(1-s)(1-q)]}{\ln q},&0<q<1\\ \dfrac{s}2,&q>1 \end{cases}$$ and $$\beta(q,s)=\frac{\ln\bigl\{1-(1-q)[\Gamma_q(s)]^{1/(s-1)}\bigr\}}{\ln q}.$$ As a direct consequence, it was derived in [@Alzer-Math-Nachr-2001 Corollary 3.2] that the inequality $$\label{lug-egp-alzer-ineq} [x+a(s)]^{1-s}\le\frac{\Gamma(x+1)}{\Gamma(x+s)}\le [x+b(s)]^{1-s}$$ holds for $s\in(0,1)$ and $x\ge0$ with the best possible values $a(s)=\frac{s}2$ and $b(s)=[\Gamma(s)]^{1/(s-1)}$. The inequality  had earlier been claimed in [@Lazarevic p. 248], but with a flawed proof. It was also generalized and extended in [@egp Theorem 3].
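Both the $q$-gamma lower bound of Ismail-Muldoon and Alzer's real-gamma corollary with the best constants $a(s)=\frac{s}2$ and $b(s)=[\Gamma(s)]^{1/(s-1)}$ can be spot-checked numerically. In the sketch below (ours, purely illustrative) $\Gamma_q$ is evaluated by truncating its defining infinite product, which is adequate for $q$ well inside $(0,1)$:

```python
import math

def gamma_q(x, q, terms=5000):
    """Gamma_q(x) for 0 < q < 1 via the truncated product
    (1-q)**(1-x) * prod_{i>=0} (1-q**(i+1))/(1-q**(i+x))."""
    p = 1.0
    for i in range(terms):
        p *= (1 - q ** (i + 1)) / (1 - q ** (i + x))
    return (1 - q) ** (1 - x) * p

# Ismail-Muldoon: Gamma_q(x+1)/Gamma_q(x+s) > ((1-q**(x+s/2))/(1-q))**(1-s)
q = 0.5
for s in (0.2, 0.5, 0.8):
    for x in (0.1, 1.0, 5.0):
        lhs = gamma_q(x + 1, q) / gamma_q(x + s, q)
        assert lhs > ((1 - q ** (x + s / 2)) / (1 - q)) ** (1 - s)

# Alzer: (x+s/2)**(1-s) <= Gamma(x+1)/Gamma(x+s) <= (x+b)**(1-s)
for s in (0.2, 0.5, 0.8):
    b = math.gamma(s) ** (1 / (s - 1))
    for x in (0.0, 0.5, 10.0):
        r = math.exp(math.lgamma(x + 1) - math.lgamma(x + s))
        assert (x + s / 2) ** (1 - s) <= r <= (x + b) ** (1 - s) * (1 + 1e-12)
```

The small tolerance on the upper bound reflects that $b(s)$ is best possible: equality is attained at $x=0$.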
Some logarithmically completely monotonic functions involving ratios of two gamma or ${q}$-gamma functions ========================================================================================================== In this section, we look back and analyse necessary and sufficient conditions for functions involving ratios of two gamma or $q$-gamma functions to be logarithmically completely monotonic. Some properties of a function involving exponential functions ------------------------------------------------------------- For real numbers $\alpha$ and $\beta$ with $\alpha\ne\beta$ and $(\alpha,\beta)\not\in\{(0,1),(1,0)\}$, let $$\label{q-dfn} q_{\alpha,\beta}(t)= \begin{cases} \dfrac{e^{-\alpha t}-e^{-\beta t}}{1-e^{-t}},&t\ne0;\\ \beta-\alpha,&t=0. \end{cases}$$ As seen in Section \[sec-cmf-ismail\], it is easy to see that the function $q_{\alpha,\beta}(t)$ and its variations play indispensable roles in the proofs of [@Bustoz-and-Ismail Theorem 3], [@Bustoz-and-Ismail Theorem 7], [@Ismail-Lorch-Muldoon Theorem 2.4], [@Ismail-Lorch-Muldoon Theorem 2.5], [@Ismail-Muldoon-119 Theorem 2.5] and [@Ismail-Muldoon-119 Theorem 2.6]. In order to bound ratios of two gamma or $q$-gamma functions, necessary and sufficient conditions for $q_{\alpha,\beta}(t)$ to be either monotonic or logarithmically convex have been investigated in [@mon-element-exp-final.tex; @mon-element-exp.tex-rgmia; @comp-mon-element-exp.tex; @notes-best-new-proof.tex; @notes-best.tex-mia; @notes-best.tex-rgmia]. \[q-mon-lem-2\] Let $t$, $\alpha$ and $\beta$ with $\alpha\ne\beta$ and $(\alpha,\beta)\not\in\{(0,1),(1,0)\}$ be real numbers. Then 1. the function $q_{\alpha,\beta}(t)$ increases on $(0,\infty)$ if and only if $(\beta-\alpha)(1-\alpha-\beta)\ge0$ and $(\beta-\alpha) (|\alpha-\beta| -\alpha-\beta)\ge0$; 2. the function $q_{\alpha,\beta}(t)$ decreases on $(0,\infty)$ if and only if $(\beta-\alpha)(1-\alpha-\beta)\le0$ and $(\beta-\alpha) (|\alpha-\beta| -\alpha-\beta)\le0$; 3.
the function $q_{\alpha,\beta}(t)$ increases on $(-\infty,0)$ if and only if $(\beta-\alpha)(1-\alpha-\beta)\ge0$ and $(\beta-\alpha) (2-|\alpha-\beta| -\alpha-\beta)\ge0$; 4. the function $q_{\alpha,\beta}(t)$ decreases on $(-\infty,0)$ if and only if $(\beta-\alpha)(1-\alpha-\beta)\le0$ and $(\beta-\alpha) (2-|\alpha-\beta| -\alpha-\beta)\le0$; 5. the function $q_{\alpha,\beta}(t)$ increases on $(-\infty,\infty)$ if and only if $(\beta-\alpha) (|\alpha-\beta| -\alpha-\beta)\ge0$ and $(\beta-\alpha) (2-|\alpha-\beta| -\alpha-\beta)\ge0$; 6. the function $q_{\alpha,\beta}(t)$ decreases on $(-\infty,\infty)$ if and only if $(\beta-\alpha) (|\alpha-\beta| -\alpha-\beta)\le0$ and $(\beta-\alpha) (2-|\alpha-\beta| -\alpha-\beta)\le0$. \[q-log-conv-thm\] The function $q_{\alpha,\beta}(t)$ on $(-\infty,\infty)$ is logarithmically convex if $\beta-\alpha>1$ and logarithmically concave if $0<\beta-\alpha<1$. If $1>\beta-\alpha>0$, then $q_{\alpha,\beta}(u)$ is $3$-log-convex on $(0,\infty)$ and $3$-log-concave on $(-\infty,0)$; If $\beta-\alpha>1$, then $q_{\alpha,\beta}(u)$ is $3$-log-concave on $(0,\infty)$ and $3$-log-convex on $(-\infty,0)$. Let $\lambda\in\mathbb{R}$. If $\beta-\alpha>1$, then the function $q_{\alpha,\beta}(t) q_{\alpha,\beta}(\lambda-t)$ is increasing on $\bigl(\frac\lambda2,\infty\bigr)$ and decreasing on $\bigl(-\infty, \frac\lambda2\bigr)$; if $0<\beta-\alpha<1$, it is decreasing on $\bigl(\frac\lambda2, \infty\bigr)$ and increasing on $\bigl(-\infty, \frac\lambda2\bigr)$. By noticing that the function $q_{\alpha,\beta}(t)$ can be rewritten as $$\label{rewr-f} q_{\alpha,\beta}(t)=\frac{\sinh[(\beta-\alpha)t/2]}{\sinh(t/2)}\exp\frac{(1-\alpha-\beta)t}2,$$ it is easy to see that the inequality , the non-negativeness of the functions  and , the positivity of the function  and the inequality  are all special cases of the monotonicity of the function $q_{\alpha,\beta}(t)$ on $(0,\infty)$ stated in Proposition \[q-mon-lem-2\].
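The rewriting of $q_{\alpha,\beta}(t)$ in terms of hyperbolic sines is a purely algebraic identity, which the following sketch confirms numerically at scattered points (function names are ours):

```python
import math

def q_ab(t, alpha, beta):
    """q_{alpha,beta}(t) from its defining expression."""
    if t == 0:
        return beta - alpha
    return (math.exp(-alpha * t) - math.exp(-beta * t)) / (1 - math.exp(-t))

def q_ab_sinh(t, alpha, beta):
    """Equivalent form sinh((beta-alpha)t/2)/sinh(t/2) * exp((1-alpha-beta)t/2)."""
    if t == 0:
        return beta - alpha
    return (math.sinh((beta - alpha) * t / 2) / math.sinh(t / 2)
            * math.exp((1 - alpha - beta) * t / 2))

for t in (-3.0, -0.5, 0.5, 3.0):
    for (a, b) in ((0.2, 0.9), (0.3, 1.8), (-1.0, 2.5)):
        assert math.isclose(q_ab(t, a, b), q_ab_sinh(t, a, b), rel_tol=1e-9)
```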
Necessary and sufficient conditions related to the ratio of two gamma functions ------------------------------------------------------------------------------- In this section, we survey necessary and sufficient conditions for some functions involving the ratio of two gamma functions to be logarithmically completely monotonic. ### The logarithmically complete monotonicity of the function $$\label{h-def-sandor} h_a(x)=\frac{(x+a)^{1-a}\Gamma(x+a)}{x\Gamma(x)}=\frac{(x+a)^{1-a}\Gamma(x+a)}{\Gamma(x+1)}$$ for $x>0$ and $a>0$, the reciprocal of the first function in  discussed in Remark \[rem-2.1.1\], was considered in [@sandor-gamma.tex-rgmia; @sandor-gamma-JKMS.tex]. \[thm-sandor-qi-2\] The function $h_a(x)$ has the following properties: 1. The function $h_a(x)$ is logarithmically completely monotonic on $(0,\infty)$ if $0<a<1$; 2. The function $[h_a(x)]^{-1}$ is logarithmically completely monotonic on $(0,\infty)$ if $a>1$; 3. For any $a>0$, $$\lim_{x\to0^+}h_a(x)=\frac{\Gamma(a+1)}{a^a} \quad \text{and}\quad \lim_{x\to\infty}h_a(x)=1.$$ In order to obtain a refined upper bound in , the logarithmically complete monotonicity of the function $$\label{f-def-sandor} f_a(x)=\frac{\Gamma(x+a)}{x^a\Gamma(x)}$$ for $x\in(0,\infty)$ and $a\in(0,\infty)$, the middle term in or the reciprocal of the second function in , was considered in [@sandor-gamma.tex-rgmia] and [@sandor-gamma-JKMS.tex Theorem 1.3]. \[thm-sandor-qi-final\] The function $f_a(x)$ has the following properties: 1. The function $f_a(x)$ is logarithmically completely monotonic on $(0,\infty)$ and $\lim_{x\to0+}f_a(x)=\infty$ if $a>1$; 2. The function $[f_a(x)]^{-1}$ is logarithmically completely monotonic on $(0,\infty)$ and $\lim_{x\to0+}f_a(x)=0$ if $0<a<1$; 3. $\lim_{x\to\infty}f_a(x)=1$ for any $a\in(0,\infty)$. As a straightforward consequence of combining Theorem \[thm-sandor-qi-2\] and Theorem \[thm-sandor-qi-final\], the following refinement of the upper bound in the inequality  is established.
\[sandor-qi-inequal-ref\] Let $x\in(0,\infty)$. If $0<a<1$, then $$\begin{gathered} \label{combined-inequal} \biggl(\frac{x}{x+a}\biggr)^{1-a} <\frac{\Gamma(x+a)}{x^a\Gamma(x)}\\* <\begin{cases} \dfrac{\Gamma(a+1)}{a^a}\biggl(\dfrac{x}{x+a}\biggr)^{1-a}\le1, &0<x\le\dfrac{ap(a)}{1-p(a)}, \\1,&\dfrac{ap(a)}{1-p(a)}<x<\infty, \end{cases}\end{gathered}$$ where $$\label{p-def-sandor} p(x)=\begin{cases} \biggl[\dfrac{x^x}{\Gamma(x+1)}\biggr]^{1/(1-x)},&x\ne1,\\ e^{-\gamma},&x=1. \end{cases}$$ If $a>1$, the reversed inequality of  holds. The logarithmically complete monotonicity of the function  and its generalized form were researched in [@sandor-gamma.tex-rgmia], [@sandor-gamma-JKMS.tex Theorem 1.5], [@sandor-gamma-2-ITSF.tex Theorem 1.4] and [@sandor-gamma-2-ITSF.tex-rgmia Theorem 1.4] respectively. ### In [@laj-7.pdf Theorem 1], the following logarithmically complete monotonicity result was established: The functions $$\label{laj-7-2-funct} \frac{\Gamma(x+t)}{\Gamma(x+s)}\biggl(x+\frac{s+t-1}2\biggr)^{s-t}\quad \text{and}\quad \frac{\Gamma(x+s)}{\Gamma(x+t)}(x+s)^{t-s}$$ for $0<s<t<s+1$ are logarithmically completely monotonic with respect to $x$ on $(-s,\infty)$. We cannot understand why the authors of [@laj-7.pdf] chose such special functions in . More precisely, we have no idea why the constants $\frac{s+t-1}2$ and $s$ were chosen in the polynomial factors of the functions listed in . Perhaps this can be explained by Theorem \[unify-log-comp-thm\] and Theorem \[polygamma-divided\] below. ### For real numbers $a$, $b$ and $c$, denote $\rho=\min\{a,b,c\}$ and let $$\label{h-def-sandor-new} H_{a,b;c}(x)=(x+c)^{b-a}\frac{\Gamma(x+a)}{\Gamma(x+b)}$$ for $x\in(-\rho,\infty)$. 
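Returning to the inequality , its crude consequence $(x/(x+a))^{1-a}<\Gamma(x+a)/[x^a\Gamma(x)]<1$ for $0<a<1$ (essentially Wendel's inequality) is easy to probe numerically through the log-gamma function. In the Python sketch below, $a=0.4$ is an arbitrary illustrative value in $(0,1)$:

```python
import math

def ratio(x, a):
    """Gamma(x+a) / (x**a * Gamma(x)), computed via lgamma for numerical stability."""
    return math.exp(math.lgamma(x + a) - math.lgamma(x) - a * math.log(x))

a = 0.4  # any sample value in (0, 1)
ok = all((x / (x + a)) ** (1 - a) < ratio(x, a) < 1.0
         for x in [0.05 * k for k in range(1, 200)])
print(ok)
```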
By recourse to the incomplete monotonicity results for the function $q_{\alpha,\beta}(t)$ obtained in [@mon-element-exp.tex-rgmia], the following incomplete but correct conclusions about the logarithmically complete monotonicity of the function $H_{a,b;c}(x)$ were obtained in [@sandor-gamma-3.tex-jcam; @sandor-gamma-3.tex-rgmia]. \[unify-log-comp-thm-orig\] Let $a$, $b$ and $c$ be real numbers and $\rho=\min\{a,b,c\}$. Then 1. the function $H_{a,b;c}(x)$ is logarithmically completely monotonic on $(-\rho,\infty)$ if $$\begin{aligned} (a,b;c)&\in\biggl\{(a,b;c):a+b\ge1,c\le b<c+\frac12\biggr\} \cup\biggl\{(a,b;c):a>b\ge c+\frac12\biggr\}\\ &\quad\cup\{(a,b;c):2a+1\le a+b\le1,a<c\} \cup\{(a,b;c):b-1\le a<b\le c\} \\ &\quad\setminus\{(a,b;c):a=c+1,b=c\}, \end{aligned}$$ 2. so is the function $[H_{a,b;c}(x)]^{-1}$ if $$\begin{aligned} (a,b;c)&\in\biggl\{(a,b;c):a+b\ge1, c\le a<c+\frac12\biggr\} \cup\biggl\{(a,b;c):b>a\ge c+\frac12\biggr\}\\ &\quad\cup\{(a,b;c):b<a\le c\} \cup\{(a,b;c):b+1\le a,c\le a\le c+1\}\\ &\quad\cup\{(a,b;c):b+c+1\le a+b\le1\}\\ &\quad\setminus\{(a,b;c):a=c+1,b=c\} \setminus\{(a,b;c):b=c+1,a=c\}. \end{aligned}$$ ### In [@notes-best-simple-equiv.tex Theorem 1], [@notes-best-simple-equiv.tex-RGMIA Theorem 1] and [@notes-best-simple.tex-rgmia Theorem 2], the function $$\label{differen-ineq} \delta_{s,t}(x)= \begin{cases} \dfrac{\psi(x+t)-\psi(x+s)}{t-s}-\dfrac{2x+s+t+1}{2(x+s)(x+t)},&s\ne t\\[1em] \psi'(x+s)-\dfrac1{x+s}-\dfrac1{2(x+s)^2},&s=t \end{cases}$$ for $|t-s|<1$ and $-\delta_{s,t}(x)$ for $|t-s|>1$ were proved to be completely monotonic on the interval $(-\min\{s,t\},\infty)$. By employing the formula , the monotonicity of $q_{\alpha,\beta}(t)$ on $(0,\infty)$ and the complete monotonicity of $\delta_{s,t}(x)$, necessary and sufficient conditions are presented for the function $H_{a,b;c}(x)$ to be logarithmically completely monotonic on $(-\rho,\infty)$ as follows. 
\[unify-log-comp-thm\] Let $a$, $b$ and $c$ be real numbers and $\rho=\min\{a,b,c\}$. Then 1. the function $H_{a,b;c}(x)$ is logarithmically completely monotonic on $(-\rho,\infty)$ if and only if $$\label{d1-dfn-new} \begin{split} (a,b;c)\in D_1(a,b;c)&\triangleq\{(a,b;c):(b-a)(1-a-b+2c)\ge0\}\\ &\quad\cap\{(a,b;c):(b-a) (\vert a-b\vert-a-b+2c)\ge0\}\\ &\quad\setminus\{(a,b;c):a=c+1=b+1\}\\ &\quad\setminus\{(a,b;c):b=c+1=a+1\}; \end{split}$$ 2. so is the function $H_{b,a;c}(x)$ on $(-\rho,\infty)$ if and only if $$\label{d2-dfn-new} \begin{split} (a,b;c)\in D_2(a,b;c)&\triangleq\{(a,b;c):(b-a)(1-a-b+2c)\le0\}\\ &\quad\cap\{(a,b;c):(b-a) (\vert a-b\vert-a-b+2c)\le0\}\\ &\quad\setminus\{(a,b;c):b=c+1=a+1\}\\ &\quad\setminus\{(a,b;c):a=c+1=b+1\}. \end{split}$$ The limit  implies that $\lim_{x\to\infty}H_{a,b;c}(x)=1$ is valid for all defined numbers $a,b,c$. Combining this with the logarithmically complete monotonicity of $H_{a,b;c}(x)$ yields that the inequality $$H_{a,b;c}(x)>1$$ holds if $(a,b;c)\in D_1(a,b;c)$ and reverses if $(a,b;c)\in D_2(a,b;c)$, that is, the inequality $$\label{wendel-gamma-ineq-orig} x+\lambda<\biggl[\frac{\Gamma(x+a)}{\Gamma(x+b)}\biggr]^{1/(a-b)}<x+\mu,\quad b>a$$ holds for $x\in(-a,\infty)$ if $\lambda\le\min\bigl\{a,\frac{a+b-1}2\bigr\}$ and $\mu\ge\max\bigl\{a,\frac{a+b-1}2\bigr\}$, which is equivalent to $$\label{wendel-gamma-ineq} \min\biggl\{a,\frac{a+b-1}2\biggr\}<\biggl[\frac{\Gamma(a)}{\Gamma(b)}\biggr]^{1/(a-b)} <\max\biggl\{a,\frac{a+b-1}2\biggr\},\quad b>a>0.$$ It is noted that a special case $0<a<b<1$ of the inequality  was derived in [@Chen-Oct-04-1051] from Elezović-Giordano-Pečarić’s theorem (see [@egp; @notes-best-new-proof.tex; @notes-best.tex-mia; @notes-best.tex-rgmia]). 
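The two-sided estimate $\min\bigl\{a,\frac{a+b-1}2\bigr\}<[\Gamma(a)/\Gamma(b)]^{1/(a-b)}<\max\bigl\{a,\frac{a+b-1}2\bigr\}$ for $b>a>0$ lends itself to a direct numerical spot-check via the log-gamma function. The pairs $(a,b)$ below are arbitrary samples with $b>a>0$ and $b\ne a+1$ (at $b=a+1$ the two bounds coincide and the enclosure degenerates to an equality):

```python
import math

def G(a, b):
    """[Gamma(a)/Gamma(b)]^(1/(a-b)), computed via lgamma."""
    return math.exp((math.lgamma(a) - math.lgamma(b)) / (a - b))

pairs = [(0.5, 1.2), (1.0, 2.5), (2.0, 2.3), (0.3, 3.0)]  # sample (a, b) with b > a > 0
ok = all(min(a, (a + b - 1) / 2) < G(a, b) < max(a, (a + b - 1) / 2)
         for a, b in pairs)
print(ok)
```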
Moreover, by making use of the inequality  and others, the double inequalities $$\frac{x+a}{x+b}(x+b)^{b-a}\le\frac{\Gamma(x+b)}{\Gamma(x+a)}\le(x+a)^{b-a},\quad x>0$$ and $$(x+a)e^{-\gamma/(x+a)}<\biggl[\frac{\Gamma(x+b)}{\Gamma(x+a)}\biggr]^{1/(b-a)} <(x+b)e^{-1/[2(x+b)]}, \quad x\ge1$$ were proved in [@Sandor-Oct-04-1052] to be valid for $0<a<b<1$. The two references [@Bencze-OQ1352; @Modan-Oct-04-1055] may also be useful and worth mentioning. Since the complete monotonicity of the function  had not been established and the main result in [@mon-element-exp.tex-rgmia] about the monotonicity of the function $q_{\alpha,\beta}(t)$ was incomplete at that time, necessary conditions for the function  to be logarithmically completely monotonic were not discovered in [@sandor-gamma-3.tex-jcam Theorem 1] and [@sandor-gamma-3.tex-rgmia Theorem 1], and the sufficient conditions in [@sandor-gamma-3.tex-jcam Theorem 1] and [@sandor-gamma-3.tex-rgmia Theorem 1] are imperfect. It is not difficult to see that all (complete) monotonicity results for functions involving the ratio of two gamma functions, established by Bustoz-Ismail in [@Bustoz-and-Ismail] and Ismail-Lorch-Muldoon in [@Ismail-Lorch-Muldoon], and related results in [@sandor-gamma-3.tex-jcam; @sandor-gamma-3.tex-rgmia; @sandor-gamma.tex-rgmia; @sandor-gamma-JKMS.tex], are special cases of Theorem \[unify-log-comp-thm\] above. ### From Theorem \[unify-log-comp-thm\] above, the following double inequalities for divided differences of the psi and polygamma functions may be deduced immediately. \[polygamma-divided\] Let $b>a\ge0$ and $k\in\mathbb{N}$. Then the double inequality $$\label{n-s-ineq} \frac{(k-1)!}{(x+\alpha)^k}<\frac{(-1)^{k-1} \bigl[\psi^{(k-1)}(x+b)-\psi^{(k-1)}(x+a)\bigr]}{b-a} <\frac{(k-1)!}{(x+\beta)^k}$$ for $x\in(-\rho,\infty)$ holds if $\alpha\ge\max\bigl\{a,\frac{a+b-1}2\bigr\}$ and $0\le\beta\le\min\bigl\{a,\frac{a+b-1}2\bigr\}$. 
It is remarkable that taking $b-a=1$ in  leads to $$\psi^{(k-1)}(x+a+1)-\psi^{(k-1)}(x+a)=(-1)^{k-1}\frac{(k-1)!}{(x+a)^k}$$ for $a\ge0$, $x>0$ and $k\in\mathbb{N}$, which is equivalent to the recurrence formula $$\label{recurrence-formula} \psi^{(n)}(z+1)-\psi^{(n)}(z)=(-1)^nn!z^{-n-1},\quad z>0,\quad n\ge0$$ listed in [@abram p. 260, 6.4.6]. For detailed information, see [@roots-polygamma-eq.tex-ajmaa; @roots-polygamma-eq.tex] and [@mon-element-exp-final.tex Remark 8]. For more information on divided differences of the psi and polygamma functions, please refer to [@notes-best-simple-equiv.tex-RGMIA; @notes-best-simple-equiv.tex; @notes-best-simple-open.tex; @simple-equiv.tex; @notes-best-simple.tex-rgmia; @simple-equiv-simple-rev.tex; @Comp-Mon-Digamma-Trigamma-Divided.tex; @AAM-Qi-09-PolyGamma.tex] and related references therein. It is worthwhile to note that some errors and defects that appeared in [@sandor-gamma-3.tex-jcam; @sandor-gamma-3.tex-rgmia] have been corrected and remedied in [@sandor-gamma-3-note.tex-final; @sandor-gamma-3-note.tex]. Necessary and sufficient conditions related to the ratio of two ${q}$-gamma functions ------------------------------------------------------------------------------------- The known results obtained by many mathematicians show that most properties of the ratio of two gamma functions may be transplanted to the case of the ratio of two $q$-gamma functions, as done in [@Ismail-Lorch-Muldoon Theorem 2.5] and [@Ismail-Muldoon-119 Theorem 2.5 and Theorem 2.6] mentioned above. Let $a,b$ and $c$ be real numbers, $\rho=\min\{a,b,c\}$, and define $$\label{H{q;a,b;c}(x)} H_{q;a,b;c}(x)=\biggl(\frac{1-q^{x+c}}{1-q}\biggr)^{a-b}\frac{\Gamma_q(x+b)}{\Gamma_q(x+a)}$$ for $x\in(-\rho,\infty)$, where $\Gamma_q(x)$ is the $q$-gamma function defined by  and . It is clear that the function  is a $q$-analogue of the function . 
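Returning to the recurrence formula , its $n=0$ instance $\psi(z+1)-\psi(z)=1/z$ can be confirmed numerically by approximating the digamma function with a central difference of the log-gamma function; the step size and test points below are arbitrary sample choices:

```python
import math

def digamma(z, h=1e-5):
    """Crude central-difference approximation of psi(z) = d/dz ln Gamma(z)."""
    return (math.lgamma(z + h) - math.lgamma(z - h)) / (2 * h)

ok = all(abs((digamma(z + 1) - digamma(z)) - 1.0 / z) < 1e-6
         for z in [0.5, 1.0, 2.3, 7.8])
print(ok)
```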
By virtue of the monotonicity of $q_{\alpha,\beta}(t)$ on $(0,\infty)$ and the formula , the following Theorem \[q-gamma-ratio\], a $q$-analogue of Theorem \[unify-log-comp-thm\], was obtained. \[q-gamma-ratio\] Let $a$, $b$ and $c$ be real numbers and $\rho=\min\{a,b,c\}$. Then the function $H_{q;a,b;c}(x)$ is logarithmically completely monotonic on $(-\rho,\infty)$ if and only if $(a,b;c)\in D_2(a,b;c)$, so is the function $H_{q;b,a;c}(x)$ if and only if $(a,b;c)\in D_1(a,b;c)$, where $D_1(a,b;c)$ and $D_2(a,b;c)$ are defined by  and  respectively. All complete monotonicity results obtained in [@Ismail-Lorch-Muldoon Theorem 2.5] and [@Ismail-Muldoon-119 Theorem 2.5 and Theorem 2.6] are special cases of Theorem \[q-gamma-ratio\]. Similar to Theorem \[polygamma-divided\], the following double inequality for divided differences of the $q$-psi function $\psi_q(x)$ for $0<q<1$ may be derived from Theorem \[q-gamma-ratio\]. \[polygamma-divided-q-gamma\] Let $b>a\ge0$, $k\in\mathbb{N}$ and $0<q<1$. Then the inequality $$\label{n-s-ineq-g-gamma} \frac{(-1)^{k-1}\bigl[\psi^{(k-1)}_q(x+b)-\psi^{(k-1)}_q(x+a)\bigr]}{b-a} <(-1)^{k-1}[\ln(1-q^{x+c})]^{(k)}$$ for $x\in(-\rho,\infty)$ holds if $0\le c\le\min\bigl\{a,\frac{a+b-1}2\bigr\}$ and reverses if $c\ge\max\bigl\{a,\frac{a+b-1}2\bigr\}$. Consequently, the identity $$\label{n-s-ineq-g-gamma-equality} \psi^{(k-1)}_q(x+1)-\psi^{(k-1)}_q(x) =[\ln(1-q^x)]^{(k)}$$ holds for $x\in(0,\infty)$ and $k\in\mathbb{N}$. Since identities  and  may be derived from inequalities  and , we can regard inequalities  and  as generalizations of identities  and . Logarithmically complete monotonicity for ratios of products of the gamma and ${q}$-gamma functions =================================================================================================== In this section, we review and analyse some (logarithmically) complete monotonicity results for ratios of products of the gamma and $q$-gamma functions. 
Let $a_i$ and $b_i$ for $1\le i\le n$ be real numbers and $\rho_n=\min_{1\le i\le n}\{a_i,b_i\}$. For $x\in(-\rho_n,\infty)$, define $$h_{\boldsymbol{a},\boldsymbol{b};n}(x)=\prod_{i=1}^n\frac{\Gamma(x+a_i)}{\Gamma(x+b_i)},$$ where $\boldsymbol{a}$ and $\boldsymbol{b}$ denote $(a_1,a_2,\dotsc,a_n)$ and $(b_1,b_2,\dotsc,b_n)$ respectively. Complete monotonicity --------------------- In [@Bustoz-and-Ismail Theorem 6], by virtue of the formula  and a special case of Theorem \[p.83-bochner\] mentioned in Remark \[lemma2-bus-ism\] above, the function $$\label{ismail-bustoz-ratio-gamma} x\mapsto\frac{\Gamma(x)\Gamma(x+a+b)}{\Gamma(x+a)\Gamma(x+b)}$$ for $a,b\ge0$, a special case of $h_{\boldsymbol{a},\boldsymbol{b};n}(x)$ for $n=2$, was proved to be completely monotonic on $(0,\infty)$. In [@psi-alzer Theorem 10], the function $h_{\boldsymbol{a},\boldsymbol{b};n}(x)$ was proved to be completely monotonic on $(0,\infty)$ provided that $0\le a_1\le\dotsm\le a_n$, $0\le b_1 \le \dotsm \le b_n$ and $\sum_{i=1}^ka_i\le\sum_{i=1}^kb_i$ for $1\le k\le n$. Its proof used the formula , a special case of Theorem \[p.83-bochner\] applied to $f(x)=e^{-x}$, and the following conclusion cited from [@marolk p. 10]: Let $a_i$ and $b_i$ for $i=1,\dotsc,n$ be real numbers such that $a_1\le\dotsm\le a_n$, $b_1\le\dotsm\le b_n$, and $\sum_{i=1}^ka_i\le\sum_{i=1}^kb_i$ for $k=1,\dotsc,n$. 
If the function $f$ is decreasing and convex on $\mathbb{R}$, then $$\sum_{i=1}^nf(b_i)\le\sum_{i=1}^nf(a_i).$$ In [@Ismail-Muldoon-119 Theorem 4.1], the functions $$-\frac{\operatorname{d\mspace{-2mu}}}{\operatorname{d\mspace{-2mu}}x}\ln\frac{\Gamma_q(x+a_1) \Gamma_q(x+a_2)\dotsm\Gamma_q(x+a_n)}{[\Gamma_q(x+\bar{a})]^n}$$ and $$\label{no-mean-gamma} \frac{\operatorname{d\mspace{-2mu}}}{\operatorname{d\mspace{-2mu}}x}\ln\frac{\Gamma_q(x+a_1) \Gamma_q(x+a_2)\dotsm\Gamma_q(x+a_n)} {[\Gamma_q(x)]^{n-1}\Gamma_q(x+a_1+a_2+\dotsm+a_n)}$$ were proved to be completely monotonic on $(0,\infty)$, where $a_1,\dotsc,a_n$ are positive numbers, $n\bar{a}=a_1+\dotsm+a_n$, and $0<q\le1$. In [@Malig-Pecaric-Persson-95], the function $$\label{mal-pec-pers-f} x\mapsto\frac{[\Gamma(x)]^{n-1}\Gamma\bigl(x+\sum_{i=1}^na_i\bigr)}{\prod_{i=1}^n\Gamma(x+a_i)}$$ for $a_i>0$ and $i=1,\dotsc,n$ was found to be decreasing on $(0,\infty)$. Motivated by the decreasing property of the function , H. Alzer proved in [@psi-alzer Theorem 11] that the function $$x\mapsto\frac{[\Gamma(x)]^\alpha\Gamma\bigl(x+\sum_{i=1}^na_i\bigr)}{\prod_{i=1}^n\Gamma(x+a_i)}$$ is completely monotonic on $(0,\infty)$ if and only if $\alpha=n-1$. It is clear that the decreasing property of the function  is just the special case $q\to1^-$ of the complete monotonicity of the function . Therefore, it seems that the authors of the papers [@psi-alzer; @Malig-Pecaric-Persson-95] were not aware of the results in [@Ismail-Muldoon-119 Theorem 4.1]. The complete monotonicity properties mentioned just now are in fact logarithmically complete ones. Logarithmically complete monotonicity ------------------------------------- Let $S_n$ be the symmetric group over the $n$ symbols $a_1,a_2,\dotsc,a_n$. Let $O_n$ and $E_n$ be the sets of odd and even permutations over $n$ symbols, respectively. 
For $a_1>a_2>\dotsm>a_n>0$, define $$F(x)=\frac{\prod_{\sigma\in E_n}\Gamma\bigl(x+a_{\sigma(2)} +2a_{\sigma(3)}+\dotsm+(n-1)a_{\sigma(n)}\bigr)} {\prod_{\sigma\in O_n}\Gamma\bigl(x+a_{\sigma(2)} +2a_{\sigma(3)}+\dotsm+(n-1)a_{\sigma(n)}\bigr)}.$$ It was proved in [@grin-ismail Theorem 1.1] that the function $F(x-a_2-2a_3-\dotsm-(n-1)a_n)$ is logarithmically completely monotonic on $(0,\infty)$. In [@grin-ismail Theorem 1.2], it was shown that the functions $$\label{fn-dfn} F_n(x)=\frac{\Gamma(x)\prod_{k=1}^{[n/2]}\Bigl[\prod_{m\in P_{n,2k}} \Gamma\Bigl(x+\sum_{j=1}^{2k}a_{m_j}\Bigr)\Bigr]} {\prod_{k=1}^{[(n+1)/2]}\Bigl[\prod_{m\in P_{n,2k-1}} \Gamma\Bigl(x+\sum_{j=1}^{2k-1}a_{m_j}\Bigr)\Bigr]}$$ for any $a_k>0$ and $k\in\mathbb{N}$ are logarithmically completely monotonic on $(0,\infty)$ and that any product of functions of the type  with different parameters $a_k$ is logarithmically completely monotonic on $(0,\infty)$ as well, where $P_{n,k}$ for $1\le k\le n$ is the set of all vectors $\boldsymbol{m}=(m_1,\dotsc,m_k)$ whose components are natural numbers such that $1\le m_\nu<m_\mu\le n$ for $1\le\nu<\mu\le k$ and $P_{n,0}$ is the empty set. Theorem 1.2 above is more general than Theorem 1.1. The case $n=2$ in Theorem 1.2 corresponds to the complete monotonicity of the function  obtained in [@Bustoz-and-Ismail Theorem 6]. In [@grin-ismail Theorem 3.2], it was shown that if $$F_q(x)=\frac{\prod_{\sigma\in E_n}\Gamma_q\bigl(x+a_{\sigma(2)} +2a_{\sigma(3)}+\dotsm+(n-1)a_{\sigma(n)}\bigr)} {\prod_{\sigma\in O_n}\Gamma_q\bigl(x+a_{\sigma(2)} +2a_{\sigma(3)}+\dotsm+(n-1)a_{\sigma(n)}\bigr)}$$ for $a_1>a_2>\dotsm>a_n>0$, then $F_q(x-a_2-2a_3-\dotsm-(n-1)a_n)$ is a logarithmically completely monotonic function of $x$ on $(0,\infty)$. 
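Since (logarithmically) complete monotonicity implies in particular that a function is decreasing, the $n=2$ case just mentioned, i.e. the Bustoz-Ismail ratio $\Gamma(x)\Gamma(x+a+b)/[\Gamma(x+a)\Gamma(x+b)]$, can be spot-checked numerically; the values $a=0.7$ and $b=1.6$ below are arbitrary illustrative choices:

```python
import math

def g(x, a, b):
    """Gamma(x)*Gamma(x+a+b) / (Gamma(x+a)*Gamma(x+b)), computed via lgamma."""
    return math.exp(math.lgamma(x) + math.lgamma(x + a + b)
                    - math.lgamma(x + a) - math.lgamma(x + b))

a, b = 0.7, 1.6  # arbitrary nonnegative sample values
xs = [0.1 * k for k in range(1, 300)]
vals = [g(x, a, b) for x in xs]
print(all(v1 >= v2 for v1, v2 in zip(vals, vals[1:])))  # decreasing on the grid
```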
In [@grin-ismail Theorem 3.3], it was stated that the functions $$\label{fn-dfn-2} F_{n,q}(x)=\frac{\Gamma_q(x)\prod_{k=1}^{[n/2]}\Bigl[\prod_{m\in P_{n,2k}} \Gamma_q\Bigl(x+\sum_{j=1}^{2k}a_{m_j}\Bigr)\Bigr]} {\prod_{k=1}^{[(n+1)/2]}\Bigl[\prod_{m\in P_{n,2k-1}} \Gamma_q\Bigl(x+\sum_{j=1}^{2k-1}a_{m_j}\Bigr)\Bigr]}$$ for any $a_k>0$ with $k=1,\dotsc,n$ are logarithmically completely monotonic on $(0,\infty)$, and so is any product of functions  with different parameters $a_k$. It is obvious that [@grin-ismail Theorem 3.2 and Theorem 3.3] are $q$-analogues of [@grin-ismail Theorem 1.1 and Theorem 1.2]. Some recent conclusions ----------------------- By recourse to the monotonicity of $q_{\alpha,\beta}(t)$ on $(0,\infty)$, the following sufficient conditions for the function $h_{\boldsymbol{a},\boldsymbol{b};n}(x)$ to be logarithmically completely monotonic on $(0,\infty)$ are derived. \[products-ratio-thm8\] If $$\label{c-1-cond} (b_i-a_i)(1-a_i-b_i)\ge0\quad\text{and}\quad (b_i-a_i) (|a_i-b_i|-a_i-b_i)\ge0$$ hold for $1\le i\le n$ and $$\label{c-3-cond} \sum_{i=1}^nb_i\ge\sum_{i=1}^na_i,$$ then the function $h_{\boldsymbol{a},\boldsymbol{b};n}(x)$ is logarithmically completely monotonic on $(-\rho_n,\infty)$. If inequalities in  and  are reversed, then the function $h_{\boldsymbol{b},\boldsymbol{a};n}(x)$ is logarithmically completely monotonic on $(-\rho_n,\infty)$. The $q$-analogue of Theorem \[products-ratio-thm8\] is as follows. \[products-ratio-thm9\] Let $a_i$ and $b_i$ for $1\le i\le n$ be real numbers and $\rho_n=\min_{1\le i\le n}\{a_i,b_i\}$. For $x\in(-\rho_n,\infty)$, define $$h_{q;\boldsymbol{a},\boldsymbol{b};n}(x)=\prod_{i=1}^n\frac{\Gamma_q(x+a_i)}{\Gamma_q(x+b_i)}$$ for $0<q<1$, where $\boldsymbol{a}$ and $\boldsymbol{b}$ stand for $(a_1,a_2,\dotsc,a_n)$ and $(b_1,b_2,\dotsc,b_n)$ respectively. If inequalities in  and  hold, then the function $h_{q;\boldsymbol{a},\boldsymbol{b};n}(x)$ is logarithmically completely monotonic on $(-\rho_n,\infty)$. 
If inequalities in  and  are reversed, then the function $h_{q;\boldsymbol{b},\boldsymbol{a};n}(x)$ is logarithmically completely monotonic on $(-\rho_n,\infty)$. Acknowledgements {#acknowledgements .unnumbered} ---------------- This article was presented on 9 October 2008 as a talk in the seminar held at the RGMIA, School of Engineering and Science, Victoria University, Australia, while the author was visiting the RGMIA between March 2008 and February 2009, supported by a grant from the China Scholarship Council. The author expresses many thanks to Professors Pietro Cerone and Sever S. Dragomir and other local colleagues at Victoria University for their invitation and hospitality throughout this period. [99]{} M. Abramowitz and I. A. Stegun (Eds), *Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables*, National Bureau of Standards, Applied Mathematics Series **55**, 9th printing, Washington, 1970. H. Alzer, *On some inequalities for the gamma and psi functions*, Math. Comp. **66** (1997), 373–389. H. Alzer, *Sharp bounds for the ratio of $q$-gamma functions*, Math. Nachr. **222** (2001), no. 1, 5–14. G. E. Andrews, R. A. Askey, and R. Roy, *Special Functions*, Cambridge University Press, Cambridge, 1999. R. D. Atanassov and U. V. Tsoukrovski, *Some properties of a class of logarithmically completely monotonic functions*, C. R. Acad. Bulgare Sci. **41** (1988), no. 2, 21–23. M. Bencze, *OQ 1352*, Octogon Math. Mag. **12** (2004), no. 1, 448. C. Berg, *Integral representation of some functions related to the gamma function*, Mediterr. J. Math. **1** (2004), no. 4, 433–439. S. Bochner, *Harmonic Analysis and the Theory of Probability*, California Monographs in Mathematical Sciences, University of California Press, Berkeley and Los Angeles, 1960. J. Bustoz and M. E. H. Ismail, *On gamma function inequalities*, Math. Comp. **47** (1986), 659–667. Ch.-P. Chen, *On an open problem by M. Bencze*, Octogon Math. Mag. **12** (2004), no. 2, 1051–1052. 
N. Elezović, C. Giordano and J. Pečarić, *The best bounds in Gautschi’s inequality*, Math. Inequal. Appl. **3** (2000), 239–252. A. Erdélyi, W. Magnus, F. Oberhettinger and F. G. Tricomi (Editors), *Higher Transcendental Functions*, Vol. 1, McGraw-Hill, New York, 1953. W. Gautschi, *Some elementary inequalities relating to the gamma and incomplete gamma function*, J. Math. Phys. **38** (1959/60), 77–81. A. Z. Grinshpan and M. E. H. Ismail, *Completely monotonic functions involving the gamma and $q$-gamma functions*, Proc. Amer. Math. Soc. **134** (2006), 1153–1160. B.-N. Guo and F. Qi, *A double inequality for divided differences and some identities of the psi and polygamma functions*, Aust. J. Math. Anal. Appl. **5** (2009), no. 2, Art. 18; Available online at <http://ajmaa.org/cgi-bin/paper.pl?string=v5n2/V5I2P18.tex>. B.-N. Guo and F. Qi, *Properties and applications of a function involving exponential functions*, Commun. Pure Appl. Anal. **8** (2009), no. 4, in press. R. A. Horn, *On infinitely divisible matrices, kernels and functions*, Z. Wahrscheinlichkeitstheorie und Verw. Geb. **8** (1967), 219–230. M. E. H. Ismail, L. Lorch, and M. E. Muldoon, *Completely monotonic functions associated with the gamma function and its $q$-analogues*, J. Math. Anal. Appl. **116** (1986), 1–9. M. E. H. Ismail and M. E. Muldoon, *Inequalities and monotonicity properties for gamma and $q$-gamma functions*, in: R.V.M. Zahar (Ed.), Approximation and Computation: A Festschrift in Honour of Walter Gautschi, ISNM, Vol. **119**, Birkhäuser, Basel, 1994, 309–323. D. K. Kazarinoff, *On Wallis’ formula*, Edinburgh Math. Notes **1956** (1956), no. 40, 19–21. D. Kershaw, *Some extensions of W. Gautschi’s inequalities for the gamma function*, Math. Comp. **41** (1983), 607–611. A. Laforgia, *Further inequalities for the gamma function*, Math. Comp. **42** (1984), no. 166, 597–600. A. Laforgia and P. 
Natalini, *Supplements to known monotonicity results and inequalities for the gamma and incomplete gamma functions*, J. Inequal. Appl. **2006** (2006), Article ID 48727, 1–8. I. Lazarević and A. Lupaş, *Functional equations for Wallis and Gamma functions*, Publ. Elektrotehn. Fak. Univ. Beograd. Ser. Electron. Telecommun. Automat. No. **461-497** (1974), 245–251. A.-J. Li, W.-Zh. Zhao and Ch.-P. Chen, *Logarithmically complete monotonicity properties for the ratio of gamma function*, Adv. Stud. Contemp. Math. (Kyungshang) **13** (2006), no. 2, 183–191. L. Lorch, *Inequalities for ultraspherical polynomials and the gamma function*, J. Approx. Theory **40** (1984), no. 2, 115–120. L. Maligranda, J. E. Pečarić, and L. E. Persson, *Stolarsky’s inequality with general weights*, Proc. Amer. Math. Soc. **123** (1995), 2113–2118. A. W. Marshall and I. Olkin, *Inequalities: Theory of Majorization and its Applications*, Academic Press, New York, 1979. M. Merkle, *Representations of error terms in Jensen’s and some related inequalities with applications*, J. Math. Anal. Appl. **231** (1999), 76–90. L. Modan, *A solution for the double inequality of OQ 1352*, Octogon Math. Mag. **12** (2004), no. 2, 1055. F. W. J. Olver, *Asymptotics and Special Functions*, Academic Press, New York/San Francisco/London, 1974. F. Qi, *A class of logarithmically completely monotonic functions and the best bounds in the first Kershaw’s double inequality*, J. Comput. Appl. Math. **206** (2007), no. 2, 1007–1014; Available online at <http://dx.doi.org/10.1016/j.cam.2006.09.005>. F. Qi, *A class of logarithmically completely monotonic functions and the best bounds in the first Kershaw’s double inequality*, RGMIA Res. Rep. Coll. **9** (2006), no. 2, Art. 16; Available online at <http://www.staff.vu.edu.au/rgmia/v9n2.asp>. F. Qi, *A double inequality for divided differences and some identities of psi and polygamma functions*, RGMIA Res. Rep. Coll. **10** (2007), no. 3, Art. 
6; Available online at <http://www.staff.vu.edu.au/rgmia/v10n3.asp>. F. Qi, *A completely monotonic function involving divided difference of psi function and an equivalent inequality involving sum*, RGMIA Res. Rep. Coll. **9** (2006), no. 4, Art. 5; Available online at <http://www.staff.vu.edu.au/rgmia/v9n4.asp>. F. Qi, *A completely monotonic function involving the divided difference of the psi function and an equivalent inequality involving sums*, ANZIAM J. **48** (2007), no. 4, 523–532. F. Qi, *A completely monotonic function involving divided differences of psi and polygamma functions and an application*, RGMIA Res. Rep. Coll. **9** (2006), no. 4, Art. 8; Available online at <http://www.staff.vu.edu.au/rgmia/v9n4.asp>. F. Qi, *Certain logarithmically $N$-alternating monotonic functions involving gamma and $q$-gamma functions*, Nonlinear Funct. Anal. Appl. **12** (2007), no. 4, 675–685. F. Qi, *Certain logarithmically $N$-alternating monotonic functions involving gamma and $q$-gamma functions*, RGMIA Res. Rep. Coll. **8** (2005), no. 3, Art. 5, 413–422; Available online at <http://www.staff.vu.edu.au/rgmia/v9n3.asp>. F. Qi, *Monotonicity and logarithmic convexity for a class of elementary functions involving the exponential function*, RGMIA Res. Rep. Coll. **9** (2006), no. 3, Art. 3; Available online at <http://www.staff.vu.edu.au/rgmia/v9n3.asp>. F. Qi, *The best bounds in Kershaw’s inequality and two completely monotonic functions*, RGMIA Res. Rep. Coll. **9** (2006), no. 4, Art. 2; Available online at <http://www.staff.vu.edu.au/rgmia/v9n4.asp>. F. Qi, *Three classes of logarithmically completely monotonic functions involving gamma and psi functions*, Integral Transforms Spec. Funct. **18** (2007), no. 7, 503–509. F. Qi, *Three classes of logarithmically completely monotonic functions involving gamma and psi functions*, RGMIA Res. Rep. Coll. **9** (2006), Suppl., Art. 6; Available online at <http://www.staff.vu.edu.au/rgmia/v9(E).asp>. F. 
Qi, *Three-log-convexity for a class of elementary functions involving exponential function*, J. Math. Anal. Approx. Theory **1** (2006), 100–103. F. Qi, J. Cao, and D.-W. Niu, *Four logarithmically completely monotonic functions involving gamma function and originating from problems of traffic flow*, RGMIA Res. Rep. Coll. **9** (2006), no. 3, Art 9; Available online at <http://www.staff.vu.edu.au/rgmia/v9n3.asp>. F. Qi and Ch.-P. Chen, *A complete monotonicity property of the gamma function*, J. Math. Anal. Appl. **296** (2004), no. 2, 603–607. F. Qi, P. Cerone and S. S. Dragomir, *Complete monotonicity results of divided difference of psi functions and new bounds for ratio of two gamma functions*, submitted. F. Qi and B.-N. Guo, *A property of logarithmically absolutely monotonic functions and the logarithmically complete monotonicity of a power-exponential function*, Available online at <http://arxiv.org/abs/0903.5038>. F. Qi and B.-N. Guo, *An alternative proof of Elezović-Giordano-Pečarić’s theorem*, Available online at <http://arxiv.org/abs/0903.1174>. F. Qi and B.-N. Guo, *Complete monotonicities of functions involving the gamma and digamma functions*, RGMIA Res. Rep. Coll. **7** (2004), no. 1, Art. 8, 63–72; Available online at <http://www.staff.vu.edu.au/rgmia/v7n1.asp>. F. Qi and B.-N. Guo, *Complete monotonicity results of a function involving the divided difference of the psi functions and consequences*, submitted. F. Qi and B.-N. Guo, *Necessary and sufficient conditions for a function involving divided differences of the di- and tri-gamma functions to be completely monotonic*, Available online at <http://arxiv.org/abs/0903.3071>. F. Qi and B.-N. Guo, *Necessary and sufficient conditions for functions involving the tri- and tetra-gamma functions to be completely monotonic*, Adv. Appl. Math. (2009), in press. F. Qi and B.-N. Guo, *Sharp inequalities for the psi function and harmonic numbers*, Available online at <http://arxiv.org/abs/0902.2524>. F. 
Qi and B.-N. Guo, *Some logarithmically completely monotonic functions related to the gamma function*, submitted. F. Qi and B.-N. Guo, *Wendel’s and Gautschi’s inequalities: Refinements, extensions, and a class of logarithmically completely monotonic functions*, Appl. Math. Comput. **205** (2008), no. 1, 281–290; Available online at <http://dx.doi.org/10.1016/j.amc.2008.07.005>. F. Qi and B.-N. Guo, *Wendel-Gautschi-Kershaw’s inequalities and sufficient and necessary conditions that a class of functions involving ratio of gamma functions are logarithmically completely monotonic*, RGMIA Res. Rep. Coll. **10** (2007), no. 1, Art. 2; Available online at <http://www.staff.vu.edu.au/rgmia/v10n1.asp>. F. Qi, B.-N. Guo and Ch.-P. Chen, *Some completely monotonic functions involving the gamma and polygamma functions*, J. Aust. Math. Soc. **80** (2006), 81–88. F. Qi, B.-N. Guo and Ch.-P. Chen, *The best bounds in Gautschi-Kershaw inequalities*, Math. Inequal. Appl. **9** (2006), 427–436. F. Qi, B.-N. Guo and Ch.-P. Chen, *The best bounds in Gautschi-Kershaw inequalities*, RGMIA Res. Rep. Coll. **8** (2005), no. 2, Art. 17, 311–320; Available online at <http://www.staff.vu.edu.au/rgmia/v8n2.asp>. F. Qi and S.-L. Guo, *Inequalities for the incomplete gamma and related functions*, Math. Inequal. Appl. **2** (1999), no. 1, 47–53. F. Qi, W. Li and B.-N. Guo, *Generalizations of a theorem of I. Schur*, RGMIA Res. Rep. Coll. **9** (2006), no. 3, Art. 15; Available online at <http://www.staff.vu.edu.au/rgmia/v9n3.asp>. F. Qi and J.-Q. Mei, *Some inequalities of the incomplete gamma and related functions*, Z. Anal. Anwendungen **18** (1999), no. 3, 793–799. F. Qi, D.-W. Niu, J. Cao, and Sh.-X. Chen, *Four logarithmically completely monotonic functions involving gamma function*, J. Korean Math. Soc. **45** (2008), no. 2, 559–573. J. Sándor, *On certain inequalities for the ratios of gamma functions*, Octogon Math. Mag. **12** (2004), no. 2, 1052–1054. G. N. 
Watson, *A note on gamma functions*, Proc. Edinburgh Math. Soc. **11** (1958/1959), no. 2, Edinburgh Math Notes No. 42 (misprinted 41) (1959), 7–9. E. W. Weisstein, *Wallis Cosine Formula*, From MathWorld—A Wolfram Web Resource; Available online at <http://mathworld.wolfram.com/WallisFormula.html>. J. G. Wendel, *Note on the gamma function*, Amer. Math. Monthly **55** (1948), no. 9, 563–564. D. V. Widder, *The Laplace Transform*, Princeton University Press, Princeton, 1941. J. Wimp, *Computation with Recurrence Relations*, Pitman, London, 1984. [^1]: The author was partially supported by the China Scholarship Council [^2]: This paper was typeset using AMS-LaTeX
--- abstract: 'It is known that multidimensional complex potentials obeying $\mathcal{PT}$-symmetry may possess all real spectra and continuous families of solitons. Recently it was shown that for multi-dimensional systems these features can persist when the parity symmetry condition is relaxed so that the potential is invariant under reflection in only a single spatial direction. We examine the existence, stability and dynamical properties of localized modes within the cubic nonlinear Schrödinger equation in such a scenario of partially $\mathcal{PT}$-symmetric potential.' author: - 'J. D’Ambroise' - 'P.G. Kevrekidis' title: ' Existence, Stability & Dynamics of Nonlinear Modes in a 2d Partially $\mathcal{PT}$ Symmetric Potential' --- Introduction ============ The study of $\mathcal{PT}$ (parity–time) symmetric systems was initiated through the works of Bender and collaborators [@Bender1; @Bender2]. Originally, it was proposed as an alternative to the standard quantum theory, where the Hamiltonian is postulated to be Hermitian. In these works, it was instead found that Hamiltonians invariant under $\mathcal{PT}$-symmetry, which are not necessarily Hermitian, may still give rise to completely real spectra. Thus, the proposal of Bender and co-authors was that these Hamiltonians are appropriate for the description of physical settings. In the important case of Schr[ö]{}dinger-type Hamiltonians, which include the usual kinetic-energy operator and the potential term $V(x)$, the $\mathcal{PT}$-invariance is consonant with complex potentials, subject to the constraint that $V^{\ast }(x)=V(-x)$. A decade later, it was realized (and since then it has led to a decade of particularly fruitful research efforts) that this idea can find fertile ground for its experimental realization, although not in quantum mechanics, where it was originally conceived. 
In this vein, numerous experimental realizations sprang up in the areas of linear and nonlinear optics [@Ruter; @Peng2014; @peng2014b; @RevPT; @Konotop], electronic circuits [@Schindler1; @Schindler2; @Factor], and mechanical systems [@Bender3], among others. Very recently, this now mature field of research has been summarized in two comprehensive reviews [@RevPT; @Konotop]. One of the particularly relevant playgrounds for the exploration of the implications of $\mathcal{PT}$-symmetry is that of nonlinear optics, especially because it can controllably involve the interplay of $\mathcal{PT}$-symmetry and nonlinearity. In this context, the propagation of light (in systems such as optical fibers or waveguides [@RevPT; @Konotop]) is modeled by the nonlinear Schrödinger equation of the form: $$\begin{aligned} \label{nls} i\Psi_z + \Psi_{xx} + \Psi_{yy} + U(x,y)\Psi + \sigma |\Psi|^2\Psi = 0.\end{aligned}$$ In the optics notation that we use here, the evolution direction is denoted by $z$, the propagation distance. Here, we restrict our considerations to two spatial dimensions and assume that the potential $U(x,y)$ is complex valued, representing gain and loss in the optical medium, depending on the sign of the imaginary part (negative for gain, positive for loss) of the potential. In this two-dimensional setting, the condition of full $\mathcal{PT}$-symmetry in two dimensions is that $U^*(x,y) = U(-x,-y)$. Potentials with full $\mathcal{PT}$ symmetry have been shown to support continuous families of soliton solutions [@OptSolPT; @WangWang; @LuZhang; @StabAnPT; @ricardo]. However, an important recent development was the fact that the condition of (full) $\mathcal{PT}$ symmetry can be relaxed. That is, either the condition $U^*(x,y)=U(-x,y)$ or $U^*(x,y)=U(x,-y)$ of, so-called, partial $\mathcal{PT}$ symmetry can be imposed, yet the system will still maintain all real spectra and continuous families of soliton solutions [@JYppt]. 
In the original contribution of [@JYppt], only the focusing nonlinearity case was considered for two select branches of solutions and the stability of these branches was presented for isolated parametric cases (of the frequency parameter of the solution). Our aim in the present work is to provide a considerably more “spherical” perspective of the problem. In particular, we examine the bifurcation of nonlinear modes from *all three* point spectrum eigenvalues of the underlying linear Schrödinger operator of the partially $\mathcal{PT}$-symmetric potential. Upon presenting the relevant model (section 2), we perform the relevant continuations (section 3) unveiling the existence of nonlinear branches *both* for the focusing and for the defocusing nonlinearity case. We also provide a systematic view towards the stability of the relevant modes (section 4), by characterizing their principal unstable eigenvalues as a function of the intrinsic frequency parameter of the solution. In section 5, we complement our existence and stability analysis by virtue of direct numerical simulations that manifest the result of the solutions’ dynamical instability when they are found to be unstable. Finally, in section 6, we summarize our findings and present our conclusions, as well as discuss some possibilities for future studies. Model, Theoretical Setup and Linear Limit ========================================= Motivated by the partially $\mathcal{PT}$-symmetric setting of [@JYppt], we consider the complex potential $U(x,y) = V(x,y) + iW(x,y)$ where $$\begin{aligned} V &=& \left( ae^{ - (y - y_0)^2} + be^{ - (y + y_0)^2}\right)\left(e^{-(x - x_0)^2 } + e^{-(x + x_0)^2 }\right)\nonumber \\ W &=& \beta\left(ce^{ - (y - y_0)^2} + de^{ - (y + y_0)^2}\right)\left(e^{-(x - x_0)^2 } - e^{-(x + x_0)^2 }\right).\label{VW}\end{aligned}$$ with real constants $\beta$, $a\neq b$ and $c\neq -d$. The potential is chosen with partial $\mathcal{PT}$-symmetry so that $U^*(x,y)=U(-x,y)$.
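The imposed symmetry is easy to verify numerically. The sketch below (pure Python; the grid extent and tolerance are illustrative choices, not taken from the paper) builds $U = V + iW$ from (\[VW\]) with the parameter values used later in the text and checks $U^*(x,y)=U(-x,y)$ pointwise:

```python
import math

# Parameter values used in the text: a=3, b=c=2, d=1, beta=0.1, x0=y0=1.5.
a, b, c, d, beta = 3.0, 2.0, 2.0, 1.0, 0.1
x0, y0 = 1.5, 1.5

def U(x, y):
    """Complex potential U = V + iW of Eq. (VW)."""
    gy_p, gy_m = math.exp(-(y - y0) ** 2), math.exp(-(y + y0) ** 2)
    gx_p, gx_m = math.exp(-(x - x0) ** 2), math.exp(-(x + x0) ** 2)
    V = (a * gy_p + b * gy_m) * (gx_p + gx_m)
    W = beta * (c * gy_p + d * gy_m) * (gx_p - gx_m)
    return complex(V, W)

# Partial PT symmetry: U*(x,y) = U(-x,y), i.e. V even and W odd in x.
pts = [(-2.0 + 0.5 * i, -2.0 + 0.5 * j) for i in range(9) for j in range(9)]
max_err = max(abs(U(x, y).conjugate() - U(-x, y)) for x, y in pts)
print(max_err < 1e-12)  # True
```

Since $a \neq b$, the analogous check in the $y$-direction fails, consistent with the potential having no symmetry in $y$.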
That is, the real part is even in the $x$-direction with $V(x,y) = V(-x,y)$ and the imaginary part is odd in the $x$-direction with $-W(x,y) = W(-x,y)$. The constants $a,b,c,d$ are chosen such that there is no symmetry in the $y$ direction. In [@JYppt] it is shown that the spectrum of the potential $U$ can be all real as long as $|\beta|$ is below a threshold value, after which a ($\mathcal{PT}$-) phase transition occurs; this is a standard property of $\mathcal{PT}$-symmetric potentials. We focus on the case $\beta=0.1$ and $a=3, b=c=2, d=1$ for which the spectrum is real, i.e., below the relevant transition threshold. Figure \[figVW\] shows plots of the potential $U$. The real part of the potential is shown on the left, while the imaginary part associated with gain-loss is on the right; the gain part of the potential corresponds to $W<0$ and occurs for $x<0$, while the loss part with $W>0$ occurs for $x>0$. Figure \[figVWeigs\] shows the spectrum of $U$, i.e., eigenvalues for the underlying linear Schrödinger problem $(\nabla^2 + U)\psi_0 = \mu_0\psi_0$. The figure also shows the corresponding eigenvectors for the three discrete real eigenvalues $\mu_0$. It is from these modes that we will seek bifurcations of nonlinear solutions in what follows. ![The plots show the spatial distribution of real ($V$, left panel) and imaginary ($W$, right panel) parts of the potential $U$ with $x_0=y_0=1.5$. []{data-label="figVW"}](VW.png){width="3.5in"} ![ The top left plot shows the spectrum of the Schrödinger operator associated with the potential $U$ in the complex plane (see also the text).
Plots of the magnitude of the normalized eigenvectors for the three discrete eigenvalues $\mu_0$ are shown in the other three plots.[]{data-label="figVWeigs"}](VWeigs.png){width="3.5in"} Existence: Nonlinear Modes Bifurcating from the Linear Limit ============================================================ As is customary, we focus on stationary soliton solutions of (\[nls\]) of the form $\Psi(x,y,z) = \psi(x,y)e^{i\mu z}$. Thus one obtains the following stationary equation for $\psi(x,y)$: $$\psi_{xx} + \psi_{yy} + U(x,y)\psi + \sigma|\psi|^2\psi = \mu \psi \label{stateq}$$ In [@JYppt] it is discussed that a continuous family of solitons bifurcates from each of the linear solutions in the presence of nonlinearity. In order to see this, let $\mu_0$ be a discrete simple real eigenvalue of the potential $U$ (such as one of the positive real eigenvalues in the top left plot of Fig. \[figVWeigs\]). Now, following [@JYppt], expand $\psi(x,y)$ in terms of $\epsilon = |\mu-\mu_0| \ll 1$ and substitute the expression $$\psi(x,y) = \epsilon^{1/2}\left[ c_0 \psi_0 + \epsilon \psi_1 + \epsilon^2 \psi_2 + \dots \right] \label{epsexpand}$$ into equation (\[stateq\]). This gives the equation for $\psi_1$ as $$L\psi_1 = c_0\left( \rho\psi_0 - \sigma |c_0|^2|\psi_0|^2\psi_0 \right) \label{psi1eq}$$ where $\rho = {\rm sgn}(\mu-\mu_0)$ and $$|c_0|^2 = \frac{\rho \langle \psi_0^*, \psi_0 \rangle}{\sigma \langle \psi_0^*, |\psi_0|^2 \psi_0\rangle}. \label{c0}$$ Here, $\psi_0^*$ plays the role of the adjoint solution to $\psi_0$. Thus in order to find solutions of (\[stateq\]) for $\sigma = \pm 1$ we perform a Newton continuation in the parameter $\mu$ where the initial guess for $\psi$ is given by the first two terms of (\[epsexpand\]).
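To illustrate how such a continuation is seeded, the sketch below evaluates $|c_0|^2$ from (\[c0\]) and forms the leading term of (\[epsexpand\]) for a stand-in discrete mode. All ingredients are assumptions for illustration: a real Gaussian profile replaces the true eigenmode $\psi_0$ (so the adjoint reduces to $\psi_0$ itself), and the grid parameters are arbitrary.

```python
import math

# Stand-in discrete mode: a real Gaussian profile on a 31x31 grid with
# spacing h (an illustrative substitute for a computed eigenmode psi0).
h = 0.2
xs = [-3.0 + h * i for i in range(31)]
psi0 = {(x, y): math.exp(-(x * x + y * y) / 2.0) for x in xs for y in xs}

def inner(f, g):
    # <f, g> ~ sum f g dx dy; for this real toy mode the adjoint is f itself
    return sum(f[p] * g[p] for p in f) * h * h

sigma, rho = 1.0, 1.0                          # focusing case, continuing to mu > mu0
num = rho * inner(psi0, psi0)
den = sigma * inner(psi0, {p: v ** 3 for p, v in psi0.items()})
c0 = math.sqrt(num / den)                      # |c_0| from Eq. (c0)

eps = 0.05                                     # eps = |mu - mu0|, assumed small
seed = {p: math.sqrt(eps) * c0 * v for p, v in psi0.items()}  # leading term of (epsexpand)
print(round(c0, 3))  # 1.414, i.e. sqrt(2) for this Gaussian profile
```

The `seed` field would then serve as the initial guess for the Newton iteration at the chosen $\mu$.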
The bottom left panel of Figure \[muvpow\] shows how the (optical) power $P(\mu) = \int\int |\psi|^2 dxdy$ of the solution grows as a function of increasing $\mu$ for $\sigma=1$, or as a function of decreasing $\mu$ for $\sigma=-1$ (from the linear limit). The first branch begins at the first real eigenvalue of $U$ at $\mu_0 \approx 0.286$, the second branch at $\mu_0 \approx 0.487$, and the third branch at $\mu_0 \approx 0.785$. Plots of the solutions and their corresponding time evolution and stability properties are shown in the next section. As a general starting comment on the properties of the branches, we point out that all the branches populate both the gain and the loss side. In the branch starting from $\mu_0 \approx 0.286$, all four “wells” of the potential of Fig. \[figVW\] appear to be populated, with the lower intensity “nodes” being more populated and the higher intensity ones less populated. The second branch starting at $\mu_0 \approx 0.487$, as highlighted also in [@JYppt], possesses an anti-symmetric structure in $x$ (hence the apparent vanishing of the density at the $x=0$ line). Both in the second and in the third branch, the higher intensity nodes of the potential appear to bear a higher intensity. Stability of the Nonlinear Modes: Spectral Analysis =================================================== ![The bottom left plot shows the power of the solution $\psi$ plotted in terms of the continuation parameter $\mu$. The curves begin at the lowest power (i.e., at the linear limit) at the discrete real eigenvalues of approximately $0.286$ (branch 1), $0.487$ (branch 2), $0.785$ (branch 3). Each power curve is drawn with its corresponding stability noted: a blue solid curve denotes a stable solution and a red dashed curve denotes an unstable solution.
The other three plots track the maximum real part of eigenvalues $\nu$ as a function of the continuation parameter $\mu$: the red dashed line represents the max real part of eigenvalues that are real (exponential instability) while the blue dotted line tracks the max real part for eigenvalues that have nonzero imaginary part (quartets); this case corresponds to oscillatory instabilities. []{data-label="muvpow"}](mupow.png){width="3.5in"} The natural next step is to identify the stability of the solutions. This is predicted by using the linearization ansatz: $$\label{pert} \Psi = e^{i\mu z}\left( \psi + \delta \left[ a(x,y)e^{\nu z} + b^*(x,y)e^{\nu^*z}\right] \right)$$ which yields the order $\delta$ linear system $$\left[ \begin{array}{cc} M_1 & M_2 \\ -M_2^* & -M_1^* \end{array} \right] \left[ \begin{array}{c} a\\ b \end{array} \right] = -i \nu \left[ \begin{array}{c} a\\ b \end{array} \right]$$ where $M_1 = \nabla^2 + U - \mu + 2\sigma|\psi|^2$, $M_2 = \sigma \psi^2$. Thus ${\rm max}({\rm Re}(\nu)) > 0$ corresponds to instability and ${\rm max}({\rm Re}(\nu)) = 0$ corresponds to (neutral) stability. In the bottom left panel of Figure \[muvpow\] the power curve is drawn with stability and instability as determined by $\nu$ noted by the solid or dashed curve, respectively. The other three plots in Figure \[muvpow\] show the maximum real part of eigenvalues $\nu$ for each of the three branches: the red dashed curve is the max real part of real eigenvalue pairs, and the blue dotted curve is the max real part of eigenvalue quartets with nonzero imaginary part. The former corresponds to exponential instabilities associated with pure growth, while the latter indicate so-called oscillatory instabilities, where growth is present concurrently with oscillations. In Figure \[eigs\] we plot some example eigenvalues in the complex plane for some sample unstable solutions. 
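The classification just described — instability iff $\max({\rm Re}(\nu)) > 0$, with real pairs signalling exponential growth and complex quartets signalling oscillatory instabilities — can be sketched as a small helper. The sample spectra below are hypothetical, not the computed ones from the figures.

```python
def classify(spectrum, tol=1e-10):
    """Classify a linearization spectrum {nu} as in the text: 'stable' if
    max Re(nu) <= 0; otherwise 'exponential' if the growth comes from a
    real pair, 'oscillatory' if it comes from a quartet with nonzero
    imaginary part, or 'mixed' if both types are present."""
    growth_real = max((z.real for z in spectrum if abs(z.imag) <= tol), default=0.0)
    growth_osc = max((z.real for z in spectrum if abs(z.imag) > tol), default=0.0)
    if max(growth_real, growth_osc) <= tol:
        return "stable"
    if growth_real > tol and growth_osc > tol:
        return "mixed"
    return "exponential" if growth_real > growth_osc else "oscillatory"

# Hypothetical sample spectra:
print(classify([1j, -1j, 0.0]))                                       # stable
print(classify([0.2, -0.2, 1j, -1j]))                                 # exponential
print(classify([0.1 + 0.5j, 0.1 - 0.5j, -0.1 + 0.5j, -0.1 - 0.5j]))  # oscillatory
```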
The dominant unstable eigenvalues within these can be seen to be consonant with the growth rates reported for the respective branches (and for these parameter values) in Fig. \[muvpow\]. ![Eigenvalues are plotted in the complex plane $(Re(\nu),Im(\nu))$ for a few representative solutions. One can compare the maximal real part with Figure \[muvpow\]. For example, the top left complex plane plot here shows that for branch 1 at $\mu\approx 0.71$ the eigenvalues with maximum real part are complex; this agrees with the top left plot of Figure \[muvpow\] where at $\mu\approx 0.71$ the blue dotted curve representing complex eigenvalues is bigger. Similarly one can check the other three eigenvalue plots here also agree with what is shown in Figure \[muvpow\], the top right for branch 1, the bottom left for branch 2 and the bottom right for branch 3.[]{data-label="eigs"}](eigs.png){width="3.5in"} The overarching conclusions from this stability analysis are as follows. The lowest $\mu$ branch, being the ground state in the defocusing case, is always stable in the presence of the self-defocusing nonlinearity. For the parameters considered, generic stability is also prescribed for the third branch under self-defocusing nonlinearity. The middle branch has a narrow interval of stability and then becomes unstable, initially (as shown in the top right of Fig. \[muvpow\]) via an oscillatory instability and then through an exponential one. In the focusing case (that was also focused on in [@JYppt] for the second and third branch), all three branches appear to be stable immediately upon their bifurcation from the linear limit, yet, all three of them subsequently become unstable. Branch 1 (that was not analyzed previously) features a combination of oscillatory and exponential instabilities. Branch 2 features an oscillatory instability which, however, only arises for a finite interval of frequencies $\mu$, and the branch restabilizes. 
On the other hand, when branch 3 becomes unstable, it does so through a real eigenvalue pair. Branches 2 and 3 terminate in a saddle-center bifurcation near $\mu=1.9$. The eigenvalue panels of Fig. \[eigs\] confirm that branch 1 (top panels) may possess one or two concurrent types of instability (in the focusing case), branch 2 (bottom left) can only be oscillatorily unstable in the focusing case (yet as is shown in Fig. \[muvpow\] it can feature both types of instabilities in the defocusing case), while branch 3, when unstable in the focusing case, is so via a real eigenvalue pair. Dynamics of Unstable Solutions ============================== Figures \[rkp1\], \[rkp2\], \[rkp3\] show the time evolution of three unstable solutions, one on each branch. All three time evolution examples we show here have a value of $\sigma = 1$. That is, they each correspond to a $\mu$-value that is larger than the initial discrete value $\mu_0$ and pertain to the focusing case. The time evolution figures share a common feature for the unstable solutions, namely that over time the magnitude of the solutions increases on the left side of the spatial grid. This agrees with what is expected from $\mathcal{PT}$-symmetry, since the left side of the spatial grid corresponds to the gain side of the potential $U$. Importantly, also, the nature of the instabilities varies from case to case, and is consonant with our stability expectations based on the results of the previous section. In Fig. \[rkp1\], branch 1 (for the relevant value of the parameter $\mu$) features an oscillatory instability (but with a small imaginary part). In line with this, we observe a growth that is principally exponential (cf. also the top panel for the power of the solution), yet also features some oscillation in the amplitude of the individual peaks.
It should be noted here that although two peaks result in growth and two in decay (as expected by the nature of $W$ in this case), one of them clearly dominates the others in amplitude. ![This figure shows the time evolution of the branch 1 solution for the value $\mu\approx 0.71$. The bottom left plot shows the magnitude of the solution $|\Psi|$ at $z=0$. Observe that this solution has four peaks in its magnitude over the two-dimensional spatial grid. The bottom right plot shows the solution at $z=23$. Observe that the magnitudes of the peaks on the left side have increased. The top left plot shows the time evolution of the power of the solution as a function of the evolution variable $z$. The top right plot here shows the evolution of the four peaks in the magnitude of the solution as a function of $z$ (blue = bottom left peak, red = top left peak, green = bottom right peak, cyan = top right peak). []{data-label="rkp1"}](rkp1mu=_71.png){width="3.5in"} In Fig. \[rkp2\], it can be seen that branch 2, when unstable in the focusing case, is subject to an oscillatory instability (with a fairly significant imaginary part). Hence the growth is not pure, but is accompanied by oscillations as is clearly visible in the top left panel. In this case, among the two principal peaks of the solution of branch 2, only the left one (associated with the gain side) is populated after the evolution shown. ![This figure is similar to Figure \[rkp1\] (the final evolution distance however is about $z=114$). Here the plots correspond to the time evolution of the branch 2 solution for the value $\mu\approx 0.94$. In the top right plot, the blue curve corresponds to the left peak of the magnitude of the solution over $z$ and the green corresponds to the right peak of the magnitude. []{data-label="rkp2"}](rkp2mu=_94.png){width="3.5in"} Lastly, in branch 3, the evolution (up to $z=42$) manifests the existence of an exponential instability.
The latter once again drives the gain part of the solution toward indefinite growth, with one of the associated peaks growing while the other (for $x>0$, on the lossy side) decays. ![This figure is similar to Figure \[rkp1\] (with an evolution up to distance $z=42$). Here, the plots correspond to the evolution of the branch 3 solution for the value $\mu\approx 1.0$. In the top right plot, the blue curve corresponds to the left peak of the magnitude of the solution over $z$ and green corresponds to the right peak of the magnitude. Clearly, once again, the gain side of the solution eventually dominates. []{data-label="rkp3"}](rkp3mu=1.png){width="3.5in"} It is worthwhile to mention that in the case of branch 2, the only branch that was found (via our eigenvalue calculations) to be unstable in the self-defocusing case, we also attempted to perform dynamical simulations for $\sigma=-1$. Nevertheless, in all the cases considered it was found that, fueled by the defocusing nature of the nonlinearity, a rapid spreading of the solution would take place (as $z$ increased), leading to a rapid interference of the results with the domain boundaries. For that reason, this evolution is not shown here. Conclusions & Future Challenges =============================== In the present work, we have revisited the partially $\mathcal{PT}$-symmetric setting originally proposed in [@JYppt] and have attempted to provide a systematic analysis of the existence, stability and evolutionary dynamics of the nonlinear modes that arise in the presence of such a potential for both self-focusing and self-defocusing nonlinearities. It was found that all three linear modes generate nonlinear counterparts. Generally, the defocusing case was found to be more robustly stable than the focusing one.
In the former, two of the branches were stable for all the values of the frequency considered, while in the focusing case, all three branches developed instabilities sufficiently far from the linear limit (although all of them were spectrally stable close to it). The instabilities could be of different types, both oscillatory (as for branch 2) and exponential (as for branch 3) or even of mixed type (as for branch 1). The resulting oscillatorily or exponentially unstable dynamics, respectively, led to the gain overwhelming the dynamics, resulting in indefinite growth in one or two of the gain peaks of our four-peak potential. Naturally, there are numerous directions that merit additional investigation. For instance, and although this would be of less direct relevance in optics, partial $\mathcal{PT}$ symmetry could be extended to 3 dimensions. There it would be relevant to appreciate the differences between potentials that are partially $\mathcal{PT}$ symmetric in one direction vs. those partially $\mathcal{PT}$ symmetric in two directions. Another relevant case to explore in the context of the present model is that where a $\mathcal{PT}$ phase transition has already occurred through the collision of the second and third linear eigenmode considered herein. Exploring the nonlinear modes and the associated stability in that case would be an interesting task in its own right. Such studies are presently under consideration and will be reported in future publications. [999]{} Bender, C. M.; Boettcher, S. Real Spectra in Non-Hermitian Hamiltonians Having $\mathcal{PT}$ Symmetry. *Phys. Rev. Lett.* **1998**, *80*, 5243-5246. Bender, C. M.; Brody, D. C.; Jones, H. F. Complex Extension of Quantum Mechanics. *Phys. Rev. Lett.* **2002**, *89*, 270401-1-270401-4. Rüter, C. E.; Makris, K. G.; El-Ganainy, R.; Christodoulides, D. N.; Segev, M.; Kip, D. Observation of parity-time symmetry in optics. *Nat. Phys.* **2010**, *6*, 192-195.
Peng, B.; Ozdemir, S. K.; Lei, F.; Monifi, F.; Gianfreda, M.; Long, G. L.; Fan, S.; Nori, F.; Bender, C. M.; Yang, L. Parity–time-symmetric whispering-gallery microcavities. *Nat. Phys.* **2014**, *10*, 394-398. Peng, B.; Ozdemir, S. K.; Rotter, S.; Yilmaz, H.; Liertzer, M.; Monifi, F.; Bender, C. M.; Nori, F.; Yang, L. Loss-induced suppression and revival of lasing. *Science* **2014**, *346*, 328-332. Suchkov, S. V.; Sukhorukov, A. A.; Huang, J.; Dmitriev, S. V.; Lee, C.; Kivshar, Yu. S. Nonlinear switching and solitons in PT-symmetric photonic systems. *Laser Photonics Rev.* **2016**, *10*, 177-213. Konotop, V. V.; Yang, J.; Zezyulin, D. A. Nonlinear waves in $\mathcal{PT}$-symmetric systems. *Rev. Mod. Phys.* **2016**, *88*, 035002-1-035002-65. Schindler, J.; Li, A.; Zheng, M. C.; Ellis, F. M.; Kottos, T. Experimental study of active LRC circuits with $\mathcal{PT}$ symmetries. *Phys. Rev. A* **2011**, *84*, 040101-1-040101-4. Schindler, J.; Lin, Z.; Lee, J. M.; Ramezani, H.; Ellis, F. M.; Kottos, T. $\mathcal{PT}$-symmetric electronics. *J. Phys. A: Math. Theor.* **2012**, *45*, 444029-1-444029-17. Bender, N.; Factor, S.; Bodyfelt, J. D.; Ramezani, H.; Christodoulides, D. N.; Ellis, F. M.; Kottos, T. Observation of Asymmetric Transport in Structures with Active Nonlinearities. *Phys. Rev. Lett.* **2013**, *110*, 234101-1-234101-5. Bender, C. M.; Berntson, B.; Parker, D.; Samuel, E. Observation of $\mathcal{PT}$ Phase Transition in a Simple Mechanical System. *Am. J. Phys.* **2013**, *81*, 173-179. Musslimani, Z. H.; Makris, K. G.; El-Ganainy, R.; Christodoulides, D. N. Optical Solitons in $\mathcal{PT}$ Periodic Potentials. *Phys. Rev. Lett.* **2008**, *100*, 030402-1-030402-4. Wang, H.; Wang, J. Defect solitons in parity-time periodic potentials. *Opt. Express* **2011**, *19*, 4030-4035. Lu, Z.; Zhang, Z.
Defect solitons in parity-time symmetric superlattices. *Opt. Express* **2011**, *19*, 11457-11462. Nixon, S.; Ge, L.; Yang, J. Stability analysis for solitons in $\mathcal{PT}$-symmetric optical lattices. *Phys. Rev. A* **2012**, *85*, 023822-1-023822-10. Achilleos, V.; Kevrekidis, P. G.; Frantzeskakis, D. J.; Carretero-González, R. Dark solitons and vortices in $\mathcal{PT}$-symmetric nonlinear media: From spontaneous symmetry breaking to nonlinear $\mathcal{PT}$ phase transitions. *Phys. Rev. A* **2012**, *86*, 013808-1-013808-7. Yang, J. Partially $\mathcal{PT}$ symmetric optical potentials with all-real spectra and soliton families in multidimensions. *Opt. Lett.* **2014**, *39*, 1133-1136. Wang, C.; Theocharis, G.; Kevrekidis, P. G.; Whitaker, N.; Law, K. J. H.; Frantzeskakis, D. J.; Malomed, B. A. Two-dimensional paradigm for symmetry breaking: The nonlinear Schrödinger equation with a four-well potential. *Phys. Rev. E* **2009**, *80*, 046611-1-046611-9.
--- abstract: 'This paper presents a class of Dynamic Multi-Armed Bandit problems where the reward can be modeled as the noisy output of a time varying linear stochastic dynamic system that satisfies some boundedness constraints. The class allows many seemingly different problems with time varying option characteristics to be considered in a single framework. It also opens up the possibility of considering many new problems of practical importance. For instance it affords the simultaneous consideration of temporal option unavailabilities and the dependencies between options with time varying option characteristics in a seamless manner. We show that, for this class of problems, the combination of any Upper Confidence Bound type algorithm with any efficient reward estimator for the expected reward ensures the logarithmic bounding of the expected cumulative regret. We demonstrate the versatility of the approach by the explicit consideration of a new example of practical interest.' author: - 'T. W. U. Madhushani$^{1}$ and D. H. S. Maithripala$^2$ and N. E. Leonard$^{3}$ [^1] [^2] [^3]' bibliography: - 'DynamicBandit.bib' title: '**Asymptotic Allocation Rules for a Class of Dynamic Multi-armed Bandit Problems**' --- Introduction {#sect:Introduction} ============ In decision theory Multi-Armed Bandit problems serve as a model that captures the salient features of human decision making strategies. The elementary case of a *1-armed bandit* is a slot machine with one lever that results in a numerical reward after every execution of the action. The reward is assumed to satisfy a specific but unknown probability distribution. A slot machine with multiple levers is known as a *Multi-Armed Bandit* (MAB) [@Sutton; @Robbins]. The problem is analogous to a scenario where an agent is repeatedly faced with several different options and is expected to make suitable choices in such a way that the cumulative reward is maximized [@Gittins]. 
This is known to be equivalent to minimizing the expected cumulative regret [@LaiRobbins]. Over decades optimal strategies have been developed to realize the above stated objective. In the standard multi-armed bandit problem the reward distributions are stationary. Thus if the mean values of all the options are known to the agent, in order to maximize the cumulative reward, the agent only has to sample from the option with the maximum mean. In reality this information is not available and the agent should choose options to maximize the cumulative reward while gaining sufficient information to estimate the true mean values of the option reward distributions. This is called the exploration-exploitation dilemma. In a case where the agent is faced with these choices with an infinite time horizon exploitation-exploration sampling rules are guaranteed to converge to the optimum option. In their seminal work Lai and Robbins [@LaiRobbins] established a lower bound for the cumulative regret for the finite time horizon case. Specifically, they establish a logarithmic lower bound for the number of times a sub-optimal option needs to be sampled by an optimal sampling rule if the total number of times the sub-optimal arms are sampled satisfies a certain boundedness condition. The pioneering work by [@LaiRobbins] establishes a confidence bound and a sampling rule to achieve logarithmic cumulative regret. These results are further simplified in [@AgrawalSimpl] by establishing a confidence bound using a sample mean based method. Improving on these results, a family of Upper Confidence Bound (UCB) algorithms for achieving asymptotic and uniform logarithmic cumulative regret was proposed in [@Auer]. These algorithms are based on the notion that the desired goal of achieving logarithmic cumulative regret is realized by choosing an appropriate uncertainty model, which results in optimal trade-off between reward gain and information gain through uncertainty. 
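A minimal sketch of a rule from this UCB family is given below, in pure Python. All particulars (the toy Bernoulli arms, the seed, the horizon) are illustrative assumptions; the confidence bonus $\sqrt{2\ln t / n_i}$ follows the UCB1 form of [@Auer].

```python
import math, random

def ucb1(pull, k, horizon):
    """Minimal UCB1-style rule: play each arm once, then choose the arm
    maximizing (sample mean) + sqrt(2 ln t / n_i).  `pull(i)` returns the
    reward of arm i; names and constants are illustrative."""
    counts = [0] * k
    means = [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:
            i = t - 1          # initialization: sample every arm once
        else:
            i = max(range(k),
                    key=lambda j: means[j] + math.sqrt(2.0 * math.log(t) / counts[j]))
        r = pull(i)
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]   # running sample mean
    return counts

random.seed(0)
true_means = [0.3, 0.5, 0.8]   # toy stationary Bernoulli arms
counts = ucb1(lambda i: float(random.random() < true_means[i]), 3, 5000)
print(counts)                  # the optimal arm (index 2) dominates the counts
```

The logarithmic regret guarantee manifests here as the suboptimal arms being pulled only a small fraction of the horizon.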
What all these schemes have in common is a three-step process: 1) a prediction step that involves the estimation of the expected reward characteristics for each option based on the information of the obtained rewards, 2) an objective function that captures the tradeoff between estimated reward expectation and the uncertainty associated with it, and 3) a decision making step that involves formulation of an action execution rule to realize a specified goal. For the standard MAB problem the reward associated with an option is considered as an iid stochastic process. Therefore in the frequentist setting the natural way of estimating the expectation of the reward is to consider the sample average [@LaiRobbins; @AgrawalSimpl; @Auer]. The papers [@Kauffman; @Reverdy] present how to incorporate prior knowledge about reward expectation in the estimation step by leveraging the theory of conditional expectation in the Bayesian setting. We highlight that all these estimators ensure certain asymptotic bounds on the tail probabilities of the estimate of the expected reward. We will call such an estimator an *efficient reward estimator*. Furthermore all these methods with the exception of [@LaiRobbins] rely on UCB type algorithms for the decision making process. An extension to the standard MAB problem is provided in [@Kleinberg2010] to include temporal option unavailabilities where they propose a UCB based algorithm that ensures that the expected regret is upper bounded by a function that grows as the square root of the number of time steps. In all of the previously discussed papers, the option characteristics are assumed to be static. However, many real world problems can be modeled as multi-armed bandit problems with dynamic option characteristics [@dacosta2008adaptive; @Slivkins; @granmo2010solving; @Garivier2011; @srivastava2014surveillance; @schulz2015learning; @tekin2010online]. In these problems reward distributions can change deterministically or stochastically.
The works [@dacosta2008adaptive; @Garivier2011; @srivastava2014surveillance] present allocation rules and associated regret bounds for a class of problems where the reward distributions change deterministically after an unknown number of time steps. The paper [@dacosta2008adaptive] presents a UCB1 based algorithm where they incorporate the Page-Hinkley change point detection method to identify the point at which the underlying option characteristics change. A discounted UCB or a sliding-window UCB algorithm is proposed in [@Garivier2011] to solve non-stationary MAB problems where the expectation of the reward switches to unknown constants at unknown time points. This work is extended in [@srivastava2014surveillance] by proposing a sliding-window UCL (SW-UCL) algorithm with adaptive window sizes for correlated Gaussian reward distributions. They incorporate the Page-Hinkley change point detection method to adjust the window size by identifying abrupt changes in the reward mean. Similarly, they also propose a block SW-UCL algorithm to restrict the transitions among arms. A class of MAB problems with gradually changing reward distributions is considered in [@Slivkins; @granmo2010solving]. Specifically, [@Slivkins] considers the case where the expectation of the reward follows a random walk while [@granmo2010solving] addresses the problem where, at each time step, the expectation of each reward is modified by an independent Gaussian perturbation of constant variance. In [@schulz2015learning] the expectation of the reward associated with an option is considered to depend on a linear static function of some known variables that characterize the option, and the authors propose to estimate the reward based on learning this function. A different class of dynamically and stochastically varying option characteristics is considered in [@tekin2010online] where the reward distribution of each option is modeled as a finite state irreducible, aperiodic, and reversible Markov chain.
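The core idea behind the sliding-window estimators just described can be sketched in a few lines: only the most recent rewards enter the sample mean, so an abrupt change in the reward distribution is forgotten after one window length, whereas the full-history mean lags indefinitely. The stream and window length below are illustrative, not taken from any of the cited papers.

```python
from collections import deque

def sliding_mean(stream, window):
    """Sliding-window sample mean: only the last `window` rewards count,
    so an abrupt change in the reward mean is forgotten after `window` steps."""
    buf = deque(maxlen=window)   # old rewards fall off the left automatically
    out = []
    for r in stream:
        buf.append(r)
        out.append(sum(buf) / len(buf))
    return out

# Deterministic toy stream: the reward mean switches from 0 to 1 at step 100.
stream = [0.0] * 100 + [1.0] * 100
sw = sliding_mean(stream, window=20)
full = [sum(stream[: t + 1]) / (t + 1) for t in range(len(stream))]
print(sw[-1], round(full[-1], 2))  # 1.0 0.5 -- the window has forgotten the old regime
```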
In this paper we consider a class of *Dynamic Multi-Armed Bandit* problems (DMAB) that will include most of the previously stated dynamic problems as special cases. Specifically we consider a class of DMAB problems where the reward of each option is the noisy output of a multivariate linear time varying stochastic dynamic system that satisfies some boundedness conditions. This formulation allows one to accommodate a wide class of real world problems such as the cases where the option characteristics vary periodically, aperiodically, or gradually in a stochastic way. Furthermore incorporating this dynamic structure allows one to easily capture the underlying characteristic variations of each option as well as allow the possibility of incorporating dependencies between options. To the best of our knowledge this is the first time that such a wide class of dynamic problems have been considered in one general setting. We also incorporate temporal option unavailabilities into our structure that helps broaden the applicability of this model in real world problems. To the best of our knowledge it is the first time that temporal option unavailabilities are incorporated in a setting where the reward distributions are non-stationary. One major advantage of this linear dynamic systems formulation is that it immediately allows us to use the vast body of linear dynamic systems theory including that of switched systems to the problem of classification and solution of different DMAB problems. In this paper we prove that if the system characteristics satisfy certain boundedness conditions and the number of times the optimal arm becomes unavailable is at most logarithmic, then the expected cumulative regret is logarithmically bounded from above when one combines any UCB type decision making algorithm with any efficient reward estimator. 
We demonstrate the effectiveness of the scheme using an example where an agent intends to maximize the information she gathers under the constraint of option unavailability and periodically varying option characteristics. In Section \[Secn:DMAB\] we formally state the class of DMAB problems considered in this paper. We show in Section \[Secn:AsymptoticAllocationRules\] that the combination of any UCB-type allocation rule with an efficient estimator guarantees that the expected cumulative regret is bounded above by a logarithmic function of the number of time steps. In Section \[Secn:EfficientEstimators\] we explicitly show, using a Hoeffding-type tail bound [@Garivier2011], that the sample mean estimator is an efficient estimator. Finally, in Section \[Secn:Example\] we provide a novel DMAB example that deals with unknown, periodically and continuously varying option characteristics.

Dynamic Multi-Armed Bandit Problem {#Secn:DMAB}
==================================

In this paper we consider a wide class of dynamic multi-armed bandit problems where the reward is a noisy measurement of a linear time-varying stochastic dynamic process. The ‘noise’ in the measurement and the ‘noise’ in the process are assumed to have bounded support. This is a reasonable assumption since the rewards in physical problems are bounded and non-negative. Consider a *k-armed bandit*. Let the reward associated with each option $i \in \{1,2,3,\ldots,k\}$ at the $t^{\mathrm{th}}$ time step be given by the real-valued random variable $X_i^t$. The expectation of this reward depends linearly on an $\mathbb{R}^m$-valued random variable $\theta^t$, which represents the option characteristics. The dynamics of the option characteristics can be multidimensional, and thus we allow $m$ to be larger than $k$. These option characteristics may evolve deterministically or stochastically. 
The reward is assumed to depend linearly on the option characteristics. The dependence of the reward on the option characteristics may be precisely known, or there may be some uncertainty about it. We model this uncertainty by an additive ‘noise’ term with finite support. We also allow the possibility of incorporating option dependencies, thereby allowing other options to directly or indirectly influence the reward associated with a given option. In order to capture this behavior in a concrete theoretical setting, we assume that the bounded random variables $\theta^t \in \chi_{\theta}\subset \mathbb{R}^m$, with $\chi_{\theta}$ compact, and $X_i^t\in [0,\chi_x]$ with $0\leq \chi_x<\infty$, satisfy the linear time-varying stochastic process $$\begin{aligned} \theta^t&=A^t\theta^{t-1}+B^tn^t_{\theta},\label{eq:Process}\\ X_i^t&=\gamma_i^t\,\left(H_i^t\theta^t+g_i^{t}\,n^t_{xi}\right),\label{eq:NoisyReward}\end{aligned}$$ where $\{n^t_{\theta}\}$ is a bounded $\mathbb{R}^q$-valued stochastic process with zero mean and constant covariance $\Sigma_\theta$, while $\{n^t_{xi}\}$ is a bounded $\mathbb{R}$-valued stochastic process with zero mean and constant variance $\sigma_{xi}^2$. We also let $\{\gamma_i^t\},\{g_i^t\}$ be real-valued deterministically varying sequences, while $\{A^t\},\{B^t\},\{H_i^t\}$ are matrix-valued deterministic sequences of appropriate dimensions. We allow the variances $\sigma_{xi}^2$ corresponding to each arm to be different. Letting $\gamma_i^t\in \{0,1\}$ allows us to model temporary option unavailabilities. Expression (\[eq:Process\]) describes the collective time-varying characteristics of all the options, and the absence or presence of $B^tn^t_{\theta}$ dictates whether these dynamics are deterministic or stochastic. Expression (\[eq:NoisyReward\]) describes how the reward depends on the option characteristics. 
The presence of the ‘noise’ term $g_i^{t}\,n^t_{xi}$ indicates that the rewards obtained, even given knowledge of the option characteristics, involve some bounded uncertainty. The case where $\{A^t\},\{B^t\},\{H_i^t\}$ each have a block-diagonal structure represents independent arms, while off-diagonal entries represent situations where the arms depend on each other. Notice that by setting $A^t\equiv I$ and $B^t\equiv 0$ we obtain the standard MAB with temporary option unavailabilities. One major advantage of this linear dynamic systems formulation is that it allows one to use the vast body of linear dynamic systems theory, including that of switched systems, in the classification and solution of different DMAB problems. From equations (\[eq:Process\]) and (\[eq:NoisyReward\]) we see that the expectations $E(\theta^t),E(X_i^t)$ evolve according to $$\begin{aligned} E(\theta^t)&=A^tE(\theta^{t-1}),\label{eq:EProcess}\\ E(X_i^t)&=\gamma_i^t\,H_i^tE(\theta^t),\label{eq:EReward}\end{aligned}$$ and that the covariances $\Sigma(\theta^t)\triangleq E(\theta^{t}{\theta^{t}}^T)-E(\theta^t){E(\theta^t)}^T$, $\Sigma(X_i^t)\triangleq E(X_i^{t}{X_i^{t}}^T)-E(X_i^t){E(X_i^t)}^T$ evolve according to $$\begin{aligned} \Sigma(\theta^t)&=A^t\Sigma(\theta^{t-1}){A^t}^T+B^t\Sigma_\theta{B^t}^T,\label{eq:VProcess}\\ \Sigma(X_i^t)&=(\gamma_i^t)^2\,\left(H_i^t\Sigma(\theta^t){H_i^t}^T+\sigma_{xi}^2{(g_i^t)}^2\right).\label{eq:VReward}\end{aligned}$$ Boundedness of $\theta^t$ implies that $E(\theta^{t})$ and $\Sigma(\theta^t)$ must remain bounded. Let $\Phi^t_\tau\triangleq \left(\prod_{j=\tau}^tA^j\right)$; then, since $E(\theta^t)=\Phi^t_1E(\theta^0)$, the expectation and the covariance of the reward become unbounded if $\lim_{t\to\infty}||\Phi^t_1||=\infty$. On the other hand, the expectation converges to zero if $\lim_{t\to\infty}||\Phi^t_1||=0$. 
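To make the model (\[eq:Process\])–(\[eq:NoisyReward\]) concrete, the following Python sketch simulates the process and the noisy rewards. The matrices $A$, $B$, $H$, the noise supports, and all numeric values are illustrative choices of ours, not taken from the paper; $A$ is chosen as a permutation (hence orthogonal) so that the boundedness discussed below holds.

```python
import numpy as np

rng = np.random.default_rng(0)

k, m = 3, 3          # number of arms, dimension of theta^t
T = 100              # time horizon

# Hypothetical system matrices: A is a cyclic permutation (orthogonal, so
# ||A|| = 1), B injects bounded zero-mean process noise, and reward i reads
# the i-th component of theta^t.
A = np.eye(m)[[1, 2, 0], :]
B = 0.1 * np.eye(m)
H = np.eye(k, m)
g = 0.05

theta = np.array([1.0, 2.0, 3.0])
rewards = np.zeros((T, k))
for t in range(T):
    n_theta = rng.uniform(-1.0, 1.0, size=m)    # bounded process noise n_theta^t
    theta = A @ theta + B @ n_theta             # theta^t = A^t theta^{t-1} + B^t n_theta^t
    gamma = np.ones(k)                          # gamma_i^t = 1: all options available
    n_x = rng.uniform(-1.0, 1.0, size=k)        # bounded reward noise n_xi^t
    rewards[t] = gamma * (H @ theta + g * n_x)  # X_i^t = gamma_i^t (H_i theta^t + g_i n_xi^t)
```

Because $B^t$ here is a constant nonzero matrix, the covariance of $\theta^t$ grows with $t$ and is bounded only over this finite horizon, exactly as noted for the second simulation case later in the paper.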
Thus sequences $\{A^t\}$ that satisfy the conditions $\limsup_{t\to\infty}||\Phi^t_1||=\bar{a}<\infty$ and $\liminf_{t\to\infty}||\Phi^t_1||={a}>0$ are the only ones that correspond to a meaningful DMAB problem. Thus, to ensure boundedness of $E(\theta^{t})$, we assume that: \[As:MainAssumption0\] The sequence $\{A^t\}$ satisfies: $$\begin{aligned} \limsup_{t\to\infty}\left|\left|\prod_{j=1}^tA^j\right|\right|&<\infty\\ \liminf_{t\to\infty}\left|\left|\prod_{j=1}^tA^j\right|\right|&>0\end{aligned}$$ and $\exists \:\: a,\bar{a}>0$ such that, $$\begin{aligned} a<\left|\left|\prod_{j=\tau}^tA^j\right|\right|<\bar{a},\:\:\:\:\:\end{aligned}$$ $\forall \:\: t\geq\tau$. Several practically significant examples of sequences $\{A^t\}$ that ensure this condition are those where $A^t$:

1. is an orthogonal matrix or a stochastic matrix (i.e. $||A^t||=1$),

2. is a periodic matrix (i.e. $A^t=A^{t+N}$ for some $N>0$),

3. corresponds to a stable switched system.

Next we consider the conditions needed for the boundedness of $\Sigma(X_i^t)$. Note that the covariance of $\theta^t$ is given by [$$\begin{aligned} \Sigma(\theta^t)&=\Phi^t_1\Sigma(\theta^{0}){\Phi^t_1}^T+\sum_{\tau=1}^t\Phi^{t}_{\tau+1}B^{\tau}\Sigma_\theta{B^{\tau}}^T(\Phi^{t}_{\tau+1})^T.\label{eq:VProcess1}\end{aligned}$$ ]{} Assumption \[As:MainAssumption0\] ensures that the first term on the right-hand side is bounded and that [$$\begin{aligned} ||\Sigma(\theta^t)||&\leq \bar{a}^2||\Sigma(\theta^{0})||+\bar{a}^2||\Sigma_\theta||\sum_{\tau=1}^t||B^\tau||^2.\label{eq:VProcess2}\end{aligned}$$ ]{} Thus $||\Sigma(\theta^t)||$ remains bounded over any finite time horizon if $||B^t||$ remains bounded over that period. On the other hand, if the sequence $\{B^t\}$ satisfies $||B^t||\leq c/t$ for some $c>0$, or if the number of time steps where $\Phi^{t}_{\tau}B^{\tau-1}\neq 0$ remains finite, then $||\Sigma(\theta^t)||$ is guaranteed to be bounded for all $t>0$. 
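The first of these examples can be checked numerically. The sketch below (the permutation matrix is an arbitrary illustrative choice of ours) verifies that for a constant orthogonal $A^t$ the product norms $||\Phi^t_1||$ stay pinned at one, so the assumption holds with $a = \bar{a} = 1$:

```python
import numpy as np

# Constant orthogonal (here: cyclic permutation) dynamics, A^t = A for all t.
A = np.eye(3)[[1, 2, 0], :]

Phi = np.eye(3)
norms = []
for t in range(12):
    Phi = A @ Phi                         # Phi^t_1 = A^t ... A^2 A^1
    norms.append(np.linalg.norm(Phi, 2))  # spectral norm of the product
```

Every entry of `norms` equals one, since a product of orthogonal matrices is orthogonal.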
Therefore, from (\[eq:VReward\]) we find that, in order to ensure the boundedness of $X_i^t$, the sequences $\{\gamma_i^t\},\{||H_i^t||\},\{g_i^t\}$ must necessarily be bounded from above, in addition to what is specified in Assumption \[As:MainAssumption0\]. In order to define a meaningful DMAB problem, the notion of an optimal option should be well defined. That is, $i^*\triangleq \arg\max_{i}\{H_i^tE(\theta^t)\}$ is independent of time. The following assumption specifies the conditions necessary for the boundedness of the reward $X_i^t$ as well as the conditions necessary for the existence of an optimal arm. \[As:MainAssumption\] We will assume that the sequences $\{\gamma_i^t\},\{B^t\},\{H_i^t\},\{g_i^t\}$ guarantee the following conditions for all $t>0$: $$\begin{aligned} ||\Sigma(\theta^t)||\leq &\sigma,\label{eq:SigmaBnd}\\ \gamma_i^t\in \{0,1\}&,\\ 0<g_i^t\leq&\bar{g}_i,\\ ||B^t|| \leq \frac{b}{t}\:\:\:\mbox{or}\:\: &||B^t||\neq 0 \:\:\:\mbox{finitely many times},\\ h_i<||H_i^t||\leq&\bar{h}_i,\label{eq:HBnd}\end{aligned}$$ and $\forall \: t\geq 0$ there exists a unique $i^*=i^t_*$ such that $$\begin{aligned} \Delta_{i}\leq \Delta_i^t&\triangleq {H_{i^t_*}}^tE(\theta^t_{i^t_*})-H_i^t E(\theta^t) \leq \bar{\Delta},\label{eq:OptimalArm}\end{aligned}$$ $\forall \:\:i\neq i^t_*$, while $$\begin{aligned} \sum_{j=2}^t\mathbb{I}_{\{\gamma^j_{i^*}=0\}} \leq \gamma \log{t},\label{eq:LogBndAvailability}\end{aligned}$$ for some $\bar{g}_i,h_i,\bar{h}_i,\bar{\Delta},{\Delta}_i,\gamma,\sigma,b>0$, where $\mathbb{I}_{\{\gamma^j=0\}}$ is the indicator function. Note that condition (\[eq:OptimalArm\]), which implies the existence of a well-defined optimal arm, is guaranteed if $({h_{i^*}}a||E(\theta^0_{i^*})||-\bar{h}_i\bar{a}||E(\theta^0)||)>0, \:\: \forall \:\: t\geq \tau>0$. Condition (\[eq:LogBndAvailability\]) implies that the optimal arm becomes unavailable at most logarithmically often in the number of time steps. 
Finally, the boundedness of $X_i^t$ is guaranteed by conditions (\[eq:SigmaBnd\]) – (\[eq:HBnd\]). We now proceed to analyze the regret of the DMAB problem stated above. Consider the probability space $(\Omega,\mathcal{U},\mathcal{P})$ and the increasing sequence of sub-$\sigma$-algebras $\mathcal{F}_{0}\subset\mathcal{F}_{1}\cdots \subset\mathcal{F}_{t}\cdots \subset\mathcal{F}_{n-1}\subset \mathcal{U}$ for $t=0,1,\cdots,n$, where $\mathcal{P}$ is the probability measure on the sigma algebra $\mathcal{U}$ of $\Omega$. The sigma algebra $\mathcal{F}_{t}$ represents the information available at the $t^{\mathrm{th}}$ time step. Let $\{\varphi_t\}_{t=1}^n$ be a sequence of random variables, each defined on $(\Omega,\mathcal{F}_{t-1},\mathcal{P})$ and taking values in $\{1,2,\cdots,k\}$. The random variable $\varphi_t$ models the action taken by the agent at the $t^{\mathrm{th}}$ time step; the value $i\in\{1,2,\cdots,k\}$ of $\varphi_t$ specifies that the $i^{\mathrm{th}}$ option is chosen at time step $t$. Then $\mathbb{I}_{\{\varphi_t =i\}}$ is the $\mathcal{F}_{t-1}$-measurable indicator random variable that takes the value one if the $i^{\mathrm{th}}$ option is chosen at step $t$ and zero otherwise. The DMAB problem is to find an allocation rule $\{\varphi_t\}_{t=1}^n$ that maximizes the expected cumulative reward or, equivalently, minimizes the cumulative regret. The cumulative reward after the $n^{\mathrm{th}}$ time step is defined to be the real-valued random variable $S_n$, defined on the probability space $(\Omega,\mathcal{F}_{n-1},\mathcal{P})$, given by $$\begin{aligned} S_n&=\sum_{t=1}^n\sum_{i=1}^k E(X_i^t\mathbb{I}_{\{\varphi_{t} =i\}}|\mathcal{F}_{t-1})\\ &=\sum_{t=1}^n\sum_{i=1}^k E(X_i^t|\mathcal{F}_{t-1})\mathbb{I}_{\{\varphi_{t} =i\}}. 
\end{aligned}$$ Thus the expected cumulative reward is $$\begin{aligned} E(S_n)=\sum_{t=1}^n\sum_{i=1}^k E({X}_i^t)E(\mathbb{I}_{\{\varphi_{t} =i\}}) \end{aligned}$$ where $T_i(n)=\sum_{t=1}^n\mathbb{I}_{\{\varphi_t =i\}}$ is a real-valued random variable defined on $(\Omega,\mathcal{F}_{n-1},\mathcal{P})$ that represents the number of times the $i^{\mathrm{th}}$ arm has been sampled in $n$ trials. Note that $E(X_i^t)=\gamma_i^tH_i^tE(\theta^t)$. Let $i^t_*=\arg\max_i\{E(X_i^t)\}$. Then the expected cumulative regret is defined as [ $$\begin{aligned} R_n&\triangleq \sum_{t=1}^n\left(E({X}^t_{i_t^*})- \sum_{i=1}^k\gamma_{i}^tE({X}_i^t)E(\mathbb{I}_{\{\varphi_{t} =i\}}) \right). \label{eq:DynRegret} \end{aligned}$$ ]{} Then from condition (\[eq:OptimalArm\]) we find that [ $$\begin{aligned} R_n &=\sum_{i=1}^k\sum_{t=1}^n\mathbb{I}_{\{\gamma_{i^*}^t=1\}}\left(H_{i^*}^tE(\theta_{i^*}^t)- \gamma_i^tH_{i}^tE(\theta_{i}^t) \right)E(\mathbb{I}_{\{\varphi_{t} =i\}})\nonumber\\ &+\sum_{i=1}^k\sum_{t=1}^n\mathbb{I}_{\{\gamma_{i^*}^t=0\}}\left(H_{i^t_*}^tE(\theta_{i^t_*}^t)- \gamma_i^tH_{i}^tE(\theta_{i}^t) \right)E(\mathbb{I}_{\{\varphi_{t} =i\}})\nonumber\\ & \leq \bar{\Delta}\sum_{i\neq i^*}^kE\left(T_i(n)\right). \end{aligned}$$ ]{} In their seminal work [@LaiRobbins], Lai and Robbins proved that, for the static MAB problem, the regret is bounded below by a logarithmic function of the number of time steps.

Asymptotic Allocation Rules for the DMAB Problem {#Secn:AsymptoticAllocationRules}
================================================

In this section we show how to construct *asymptotically efficient* allocation rules for the class of DMAB problems formally defined above. Specifically, in the following we show that the combination of any *UCB-based* decision-making process and an *efficient estimator* provides such an allocation rule. In the DMAB problem, $\mu_i^t\triangleq E(X_i^t)$ is time-varying. Thus one needs to consider a ‘time average’ of $\mu_i^t$. 
This time average depends on how one samples option $i$. Specifically, it is the $\mathcal{F}_{t-1}$-measurable random variable $$\begin{aligned} \widehat{\mu}_i^t&\triangleq \frac{1}{T_i(t)}\sum_{j=1}^{t}E(X_i^j)\mathbb{I}_{\{\varphi_j=i\}}. \label{eq:TimeAverageMean}\end{aligned}$$ This random variable cannot be estimated using the maximum likelihood principle since the $E(X_i^j)$ are unknown, and thus it has to be estimated by other means. We consider a $\mathcal{F}_{t-1}$-measurable random variable $\widehat{X}_i^t$ to be an estimator of $\widehat{\mu}_i^t$ if $E(\widehat{X}_i^t)=E(\widehat{\mu}_i^t)$. Let $\widehat{X}_i^t$ be a $\mathcal{F}_{t-1}$-measurable random variable such that $E(\widehat{X}_i^t)=E(\widehat{\mu}_i^t)$, and let $T_i(t)$ be the $\mathcal{F}_{t-1}$-measurable random variable that represents the number of times the $i^{\mathrm{th}}$ option has been sampled up to time $t$. An estimator $\widehat{X}_i^t$ that ensures $$\begin{aligned} \mathcal{P}\left(\widehat{X}_i^t\geq \widehat{\mu}_i^t+\sqrt{\frac{\vartheta}{T_i(t)}}\right)&\leq \frac{\nu\,\log{t}}{\exp\left(2\kappa \vartheta\right)},\label{eq:TailProbBnd1}\\ \mathcal{P}\left(\widehat{X}_i^t\leq \widehat{\mu}_i^t-\sqrt{\frac{\vartheta}{T_i(t)}}\right)&\leq \frac{\nu\,\log{t}}{\exp\left(2\kappa \vartheta\right)}, \label{eq:TailProbBnd2}\end{aligned}$$ for some $\kappa,\vartheta,\nu>0$, will be referred to as an *efficient reward estimator*. In Section \[Secn:EfficientEstimators\] we show that the frequentist sample mean estimator satisfies this requirement. Let $\widehat{X}_i^t$ be a $\mathcal{F}_{t-1}$-measurable random variable such that $E(\widehat{X}_i^t)=E(\widehat{\mu}_i^t)$. The allocation rule $\{\varphi_{t}\}_1^{n}$ will be referred to as *UCB-based* if it is chosen such that $$\begin{aligned} \mathbb{I}_{\{\varphi_{t+1}=i\}}=\left\{ \begin{array}{cl} 1 & \:\:\:Q^{t}_i=\max\{Q^{t}_1,\cdots,Q^{t}_k\}\label{eq:UCBallocation}\\ 0 & \:\:\: {\mathrm{o.w.}}\end{array}\right. 
\end{aligned}$$ with $$\begin{aligned} Q^t_i&\triangleq \widehat{X}^t_i+\sigma\sqrt{\frac{\Psi\left(t\right)}{T_i(t)}}\label{eq:UCBQ} \end{aligned}$$ where $\Psi(t)$ is an increasing function of $t$ with $\Psi(1)=0$ and $\sigma>0$. We also let $T_i(1)=1$ for all $i$. There are two choices for picking an option at the first time step. If some prior knowledge exists, one can use it as the initial estimate $\widehat{X}^1_i$; in the absence of such prior knowledge, one can sample every option once and use the sampled values for $\widehat{X}^1_i$. We will show below that combining any efficient estimator with a UCB-based allocation rule ensures that the number of times a suboptimal arm is sampled is bounded above by a logarithmic function of the number of samples. This result is formally stated below and proved in the appendix. \[Theom:DynamicRegret\] Let the conditions specified in Assumptions \[As:MainAssumption0\] and \[As:MainAssumption\] hold. Then any efficient estimator combined with a UCB-based allocation rule $\{\varphi_t\}_1^{n}$ ensures that, for every $i\in\{1,2,\cdots,k\}$ such that $i\neq i^{*}$ and for some $ l\geq 1$, $$\begin{aligned} E(T_i(n))\leq \gamma\log{n}+ \frac{4\sigma^2}{{\Delta^2_i}}\,\Psi(n) +\left(l+\nu\sum_{t=l}^{n-1}\frac{\log{t}}{t^{2\kappa\sigma^2\alpha}}\right).\end{aligned}$$ Thus the cumulative expected regret satisfies $$\begin{aligned} R_n\leq c_0\log{n} + c_1,\end{aligned}$$ for some constants $c_0,c_1>0$, if $\Psi(t)$ satisfies $$\begin{aligned} \alpha \log{t}\leq \Psi(t) \leq \beta \log{t},\end{aligned}$$ for some constants $3/(2\kappa\sigma^2)<\alpha\leq \beta$. 
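The allocation rule (\[eq:UCBallocation\])–(\[eq:UCBQ\]) amounts to one comparison per time step. A minimal sketch follows; the defaults $\sigma=1$ and $\Psi(t)=16\log t$ are our illustrative choices (the latter matches the UCB-Normal selection discussed next), and both are meant to be supplied by the caller in general.

```python
import numpy as np

def ucb_select(X_hat, T_i, t, sigma=1.0, Psi=lambda t: 16.0 * np.log(t)):
    """Return the arm maximizing Q_i^t = X_hat_i + sigma * sqrt(Psi(t) / T_i(t))."""
    Q = X_hat + sigma * np.sqrt(Psi(t) / T_i)
    return int(np.argmax(Q))
```

An under-sampled arm receives a larger exploration bonus, so between two arms with equal estimates the rule favors the one sampled less often.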
If one selects $\Psi(t)=16\log{t}$, one obtains the standard UCB-Normal algorithm proposed in [@Auer], while if one selects ${\Psi(t)}=\left(\Phi^{-1}\left(1-1/(\sqrt{2\pi e}\,t^2)\right)\right)^2$, where $\Phi^{-1}\left(\cdot\right)$ is the inverse of the cumulative distribution function of the normal distribution, one obtains the UCL algorithm proposed in [@Reverdy].

Efficient Estimators {#Secn:EfficientEstimators}
--------------------

Let $S_i^t$ be the $\mathcal{F}_{t-1}$-measurable random variable giving the cumulative reward received by choosing arm $i$ up to the $t^{\mathrm{th}}$ time step, $$\begin{aligned} S_i^t=\sum_{j=1}^{t}E(X_i^j\mathbb{I}_{\{\varphi_j=i\}}|\mathcal{F}_{j-1}) =\sum_{j=1}^{t}E(X_i^j|\mathcal{F}_{j-1})\mathbb{I}_{\{\varphi_j=i\}}.\end{aligned}$$ Define the $\mathcal{F}_{t-1}$-measurable sample mean estimate $\widehat{X}_i^t$ of the cumulative mean reward received from arm $i$ as $$\begin{aligned} \widehat{X}_i^t&\triangleq \left\{\begin{array}{lc}\widehat{X}_i^1 & \mathrm{if}\:\: T_i(t)=0\\ \frac{S_i^t}{T_i(t)} & \mathrm{o.w.} \end{array}\right..\label{eq:MeanAverage}\end{aligned}$$ Then $$\begin{aligned} E(\widehat{X}_i^t)&=\sum_{j=1}^{t}E\left(\frac{E(X_i^j| \mathcal{F}_{j-1})\mathbb{I}_{\{\varphi_j=i\}}}{T_i(t)}\right).\end{aligned}$$ Since $E(X_i^j| \mathcal{F}_{j-1})$ is independent of $\mathbb{I}_{\{\varphi_j=i\}}$ and $T_i(t)$, we have that $$\begin{aligned} E(\widehat{X}_i^t)&=\sum_{j=1}^{t}E(X_i^j)E\left(\frac{\mathbb{I}_{\{\varphi_j=i\}}}{T_i(t)}\right)\nonumber \\ &=E\left(\sum_{j=1}^{t}\frac{E(X_i^j)\mathbb{I}_{\{\varphi_j=i\}}}{T_i(t)}\right)=E(\widehat{\mu}_i^t).\end{aligned}$$ The tail probability distribution for the above sample mean is given by the following lemma, which follows from Theorem 4 of [@Garivier2011]. 
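The estimator (\[eq:MeanAverage\]) can be maintained incrementally. The sketch below assumes the convention $T_i(1)=1$ with the prior estimates $\widehat{X}_i^1$ counted as initial samples; class and method names are our own.

```python
import numpy as np

class SampleMeanEstimator:
    """Running per-arm sample mean X_hat_i = S_i / T_i, cf. eq. (MeanAverage)."""

    def __init__(self, prior):
        # Prior estimates X_hat_i^1 serve as the first 'sample' of each arm.
        self.S = np.asarray(prior, dtype=float).copy()
        self.T = np.ones(len(self.S))  # T_i(1) = 1 for all i

    def update(self, i, reward):
        self.S[i] += reward
        self.T[i] += 1.0

    def estimate(self):
        return self.S / self.T
```

Each `update` costs O(1), so the estimator adds negligible overhead to the UCB-based allocation rule it feeds.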
If the random process $\{{X}_i^t\}$ satisfies ${X}_i^t\in [0,\chi_x], \:\:\forall i\in\{1,2,\ldots,k\}$ and $t>0$, and $\widehat{X}_i^t,\widehat{\mu}_i^t$ are given by (\[eq:MeanAverage\]) and (\[eq:TimeAverageMean\]) respectively, then $$\begin{aligned} \mathcal{P}\left(\widehat{X}_i^t>\widehat{\mu}_i^t+\sqrt{\frac{\vartheta}{T_i(t)}}\right)\leq \frac{\nu\,\log{t}}{\exp\left(2\kappa \vartheta\right)}\end{aligned}$$ where $\kappa=\left(1-\frac{\eta^2}{16}\right)/\chi_x^2$ and $\nu=1/\log (1+\eta)$ for all $t>0$ and $\eta,\vartheta>0$. Thus (\[eq:TailProbBnd1\]) and (\[eq:TailProbBnd2\]) are satisfied for the sample mean estimate of the reward.

Example: Periodically Continuously Varying Option Characteristics {#Secn:Example}
=================================================================

In this section we consider a novel example of practical interest: an agent trying to maximize a reward that depends on certain periodically and continuously varying option characteristics. The agent is assumed to be unaware of any information about this periodic behavior. Specifically, we consider the problem where an agent is presented with $k$ options. Each option may vary with time and may become unavailable from time to time. For this example we assume that the options do not depend on each other. This is the case, for instance, if the agent is collecting human behavioral information in a recreational park and has several options for locating herself for this purpose. The average number of people who frequent the park may vary depending on whether it is morning, afternoon, or evening. Similar circumstances occur if one needs to select the optimal type of crops, highlight a particular product in a store, sample a set of sensors whose characteristics vary with the time of day, or advertise a particular event in supermarkets. 
In each of these cases, due to certain external events, some of the options may become temporarily unavailable as well. This class is characterized by a fixed periodic block-diagonal matrix $A^t=\mathrm{diag}(A^t_1,A^t_2,\cdots,A^t_k)$ where each $A_i^t$ is a $p\times p$ matrix that satisfies the property $A_i^{t+N}=A_i^t$ for some $N>1$. The matrix $A_i^t$ encodes the dynamics of the expected characteristics of the $i^{\mathrm{th}}$ option. For the example considered here the options do not depend on each other; thus we also set $B^t=[B_1,B_2,\cdots,B_k]^T$, where each $B_i$ is a $1\times p$ row matrix, and each $H_i^t=H_i$ has a corresponding block structure so that the option dynamics are not coupled. We consider two cases: one where the total number of people who visit the park is the same every day, and the more realistic case where the number of people that visit the park varies stochastically. For the first case we set $B^tn^t_{\theta}\equiv 0$, which guarantees the boundedness of the covariance of $\theta^t$ over infinite time horizons as well. In the second case we pick $B^tn^t_{\theta}\neq 0$, where $B^t\equiv B$ is a constant matrix and $n^t_{\theta}$ is a bounded random process with zero mean. In this second case the boundedness of the covariance of $\theta^t$ is guaranteed only over finite time horizons. For illustration we use $k=5$ and $N=3$. This problem can be modeled by selecting $\theta^t=(\theta^t_1,\theta^t_2,\cdots,\theta^t_5)$ where each $\theta^t_i\in \mathbb{R}^3$. We let $A^t=\mathrm{diag}(A^t_1,A^t_2,\cdots,A^t_5)$ where each $A^t_i=(A_i)^t$ with $$\begin{aligned} A_i=\begin{bmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 1 & 0 & 0\end{bmatrix}.\end{aligned}$$ The output matrix $H^t_i$ is a constant $1\times 3k$ row matrix whose entries are all zero except for the $(3i-2)^{\mathrm{th}}$ entry, which is equal to one. 
The direct noise coupling term corresponding to each reward is chosen to be $g^t_i\equiv 1$, and we assume that the noise is a bounded random variable $n^t_{x_i}$ with zero mean. Thus the expected reward of the $i^{\mathrm{th}}$ option satisfies $E(X_i^t)=H(A_i)^t\theta_i^0$ where $H=[1\:\:\:0\:\:\:0]$, and hence satisfies the condition $E(X_i^{t+3})=E(X_i^t)$ for all $t>0$. The initial condition is chosen such that $\theta^0=(\theta^0_1,\theta^0_2,\cdots,\theta^0_5)$ where each $\theta_i^0$ takes the form $\theta_i^0=\bar{\theta}_i[\alpha_1\:\:\: \alpha_2\:\:\:\alpha_3]^T$ with $\bar{\theta}_i\in \mathbb{R}$. The real positive constants $\alpha_1, \alpha_2, \alpha_3$ satisfy the condition $\alpha_1\alpha_2\alpha_3=1$ and capture the periodic variation of the number of visitors within a given day. The block-diagonal structure of $A^t,B^t,H_i^t$ amounts to the assumption that the numbers of people that frequent different locations are uncorrelated. The total number of people that visit the park on a particular day is given by $(\bar{\theta}_1+\bar{\theta}_2+\bar{\theta}_3+\bar{\theta}_4+\bar{\theta}_5)(\alpha_1+\alpha_2+\alpha_3)$. In the first case we assume that each $\bar{\theta}_i$ is fixed for each day and thus set $B^t\equiv 0$. In the other case we consider the more realistic situation where this value changes stochastically according to a uniform distribution on the support $[\bar{\theta}_i-50,\bar{\theta}_i+50]$; that is, we select $n_\theta^t$ to be uniformly distributed on $[-50,50]$. We also let the noise term $n^t_{xi}$ for each $i$ be uniformly distributed on $[-50,50]$. Notice that in this second case the growth condition for the covariance $\Sigma(X_i^t)$ is only satisfied over a finite time horizon. For the simulations we let $\alpha_1=3/4,\alpha_2=1,\alpha_3=4/3$ and $\bar{\theta}_1=400, \bar{\theta}_2=350, \bar{\theta}_3=750, \bar{\theta}_4=1000, \bar{\theta}_5=526$. 
Thus the optimal option is $i^*=4$, which is well defined for all $t$. For the simulations we use the UCB algorithm (\[eq:UCBallocation\]) – (\[eq:UCBQ\]) with $\Psi(t)=16\log{t}$ and the standard frequentist sample mean estimator (\[eq:MeanAverage\]) for the estimation of the rewards. In compliance with the assumption that the optimal option becomes unavailable at most logarithmically often, we let $$\begin{aligned} \gamma_i^t&=\left\{\begin{array}{lc} 0 & \mathrm{if}\:\:\left([\log{(n_i+t+1)}]-[\log{(n_i+t)}]\right)=1\\ 1 & \mathrm{o.w.} \end{array}\right.,\end{aligned}$$ for some integer $n_i>0$, where $[x]$ denotes the nearest integer to $x$. For convenience of simulation we let $\gamma_i^t\equiv 1$ for $i\neq i^*$ and choose $\{\gamma_{i^*}^t\}$ as above. We estimate the expected reward, $E(S_n)$, the expected cumulative regret, $E(R_n)$, and the expected number of times the optimal arm is selected, $E(T_{i^*}(n))$, by simulating the algorithm $1000$ times for each $1\leq n\leq200$ and computing the frequentist mean as an estimate of $E(S_n)$, $E(R_n)$, and $E(T_{i^*}(n))$. The expected reward, $E(S_n)$, the expected number of times the optimal arm is sampled, $E(T_{i^*}(n))$, and the expected cumulative regret, $E(R_n)$, are plotted against $n$ in Figures \[Fig:ErewardNoNoise\] – \[Fig:EregretNoNoise\] in the absence of process noise and in Figures \[Fig:ErewardNoise\] – \[Fig:EregretNoise\] for uniformly distributed process noise. We observe that, as expected, the covariance of the regret and the reward increases with the number of time steps when the option dynamics are influenced by uncertainty. However, since we only consider a finite time horizon, they remain bounded during this horizon. Notice, however, that the expected number of times the optimal arm is chosen behaves the same as when there is no process noise. 
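For reference, a stripped-down, single-run version of this simulation reproduces the qualitative behavior: the arm with $\bar{\theta}_4=1000$ dominates the sampling. The simplifications here are ours: no unavailability ($\gamma_i^t\equiv 1$), $\sigma=1$ in the UCB bonus, and the expected reward tracked directly through its period-3 pattern rather than through the full state-space recursion.

```python
import numpy as np

rng = np.random.default_rng(1)
k, N, n_steps = 5, 3, 200
alphas = np.array([3 / 4, 1.0, 4 / 3])
theta_bar = np.array([400.0, 350.0, 750.0, 1000.0, 526.0])

def expected_reward(i, t):
    # E(X_i^t) = theta_bar_i * alpha_{t mod 3}: period-3 daily variation.
    return theta_bar[i] * alphas[t % N]

# Initialize by sampling every option once (so T_i(1) = 1).
S = np.array([expected_reward(i, 0) + rng.uniform(-50, 50) for i in range(k)])
T = np.ones(k)
counts = np.zeros(k, dtype=int)
for t in range(1, n_steps):
    Q = S / T + np.sqrt(16.0 * np.log(t + 1) / T)  # Psi(t) = 16 log t, sigma = 1
    i = int(np.argmax(Q))
    counts[i] += 1
    S[i] += expected_reward(i, t) + rng.uniform(-50, 50)  # bounded reward noise
    T[i] += 1.0
```

Because the reward gaps (at least $187.5$ at the least favorable phase) far exceed both the noise support and the exploration bonus, the optimal arm ($i^*=4$, index 3 in zero-based Python) is identified almost immediately.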
![Expected reward $E(S_n)$ for the time varying DMAB with no process noise.[]{data-label="Fig:ErewardNoNoise"}](PeriodicNoNoiseEreward.jpg){width="50.00000%"} ![Expected number of times, $E(T_{i^*}(n))$, the optimal arm has been sampled for the time varying DMAB with no noise. []{data-label="Fig:EToptNoNoise"}](PeriodicNoNoiseEtopt.jpg){width="50.00000%"} ![Expected cumulative regret $E(R_n)$ for the time varying DMAB with no noise. []{data-label="Fig:EregretNoNoise"}](PeriodicNoNoiseEregret.jpg){width="50.00000%"} ![Expected reward $E(S_n)$ for the time varying DMAB with uniform process noise.[]{data-label="Fig:ErewardNoise"}](PeriodicNoiseEreward.jpg){width="50.00000%"} ![Expected number of times, $E(T_{i^*}(n))$, the optimal arm has been sampled for the time varying DMAB with uniform process noise. []{data-label="Fig:EToptNoise"}](PeriodicNoiseEtopt.jpg){width="50.00000%"} ![Expected cumulative regret $E(R_n)$ for the time varying DMAB with uniform process noise. []{data-label="Fig:EregretNoise"}](PeriodicNoiseEregret.jpg){width="50.00000%"}

Conclusion
==========

This paper presents a novel unifying framework for modeling a wide class of Dynamic Multi-Armed Bandit problems that allows one to consider option unavailabilities and option correlations in a single setting. The class of problems is characterized by situations where the reward for each option depends, with some uncertainty, on a multidimensional parameter that evolves according to a linear stochastic dynamic system capturing the internal and hidden collective behavior of the dynamically changing options. The dynamic system is assumed to satisfy certain boundedness conditions. For this class of problems we show that the combination of any Upper Confidence Bound type algorithm with any efficient estimator guarantees that the expected cumulative regret is bounded above by a logarithmic function of the number of time steps. We provide a novel, practically significant example to demonstrate these ideas. 
In the following we prove Theorem \[Theom:DynamicRegret\] by closely following the proofs provided in [@Auer; @Reverdy]. Let $C_i^t\triangleq \sqrt{\frac{\Psi(t)}{T_i(t)}}$. Then for $i\neq i^*$ and $l\geq 1$ $$\begin{aligned} E(T_i(n))&=\sum_{t=0}^{n-1}\mathcal{P}({\{\varphi_{t}=i\}})\leq l+\sum_{t=l}^{n-1}\mathcal{P}({\{\varphi_{t}=i\}})\\ &\leq l+\sum_{t=l}^{n-1}\mathcal{P}(\{Q_{i^*}^t< {Q^t_{i}}\}).\end{aligned}$$ Let $$\begin{aligned} \mathcal{A}_i^t&\triangleq\{\widehat{X}_{i^*}^t+C_{i^*}^t\geq \widehat{\mu}^t_{i^*}\},\\ \mathcal{B}_i^t&\triangleq\{\widehat{\mu}^t_{i^*}\geq \widehat{\mu}^t_{i}+2{C^t_{i}}\},\\ \mathcal{C}_i^t&\triangleq\{\widehat{\mu}^t_{i}+2{C_{i}}^t\geq\widehat{X}_{i}^t+{C^t_{i}}\},\\ \mathcal{D}_i^t&\triangleq\{\gamma^t_{i^*}\neq 0\}.\end{aligned}$$ Then we have $$\begin{aligned} \{\mathcal{A}_i^t\cap \mathcal{B}_i^t \cap \mathcal{C}_i^t \cap \mathcal{D}_i^t\} \subseteq \{Q_{i^*}^t\geq{Q_{i}}^t\}.\end{aligned}$$ Therefore, $$\begin{aligned} \{Q_{i^*}^t< {Q^t_{i}}\} \subseteq {\bar{\mathcal{A}}_i^t}\cup {\bar{\mathcal{B}}_i^t} \cup {\bar{\mathcal{C}}_i^t}\cup {\bar{\mathcal{D}}_i^t}.\end{aligned}$$ From the above we have $$\begin{aligned} \mathcal{P}(\{Q_{i^*}^t< {Q^t_{i}}\})&\leq \mathcal{P}(\{\widehat{X}_{i^*}^t+C^t_{i^*}<\widehat{\mu}^t_{i^*}\})\\ &\:\:\:\:+\mathcal{P}(\{\widehat{\mu}^t_{i^*}< \widehat{\mu}^t_{i}+2{C^t_{i}}\})\\ &\:\:\:\:+\mathcal{P}(\{\widehat{X}_{i}^t-C^t_{i}>\widehat{\mu}^t_{i}\})\\ &\:\:\:\:+\mathbb{I}_{\{\gamma^t_{i^*}= 0\}}.\end{aligned}$$ Note that conditions (\[eq:TailProbBnd1\]) and (\[eq:TailProbBnd2\]) on the tail probabilities of the distribution of the estimate $\widehat{X}_i^t$ give us $$\begin{aligned} \mathcal{P}(\{\widehat{X}_{i^*}^t+C_{i^*}^t<\widehat{\mu}^t_{i^*}\})&\leq\:\:\:\:\: \frac{\nu\,\log{t}}{\exp\left(2\kappa \sigma^2\Psi(t)\right)},\\ \mathcal{P}(\{\widehat{X}_{i}^t-C_{i}^t>\widehat{\mu}^t_{i}\}) &\leq\:\:\:\:\: \frac{\nu\,\log{t}}{\exp\left(2\kappa 
\sigma^2\Psi(t)\right)},\end{aligned}$$ and hence that $$\begin{aligned} \mathcal{P}(\{Q_{i^*}^t< {Q^t_{i}}\})&\leq \mathbb{I}_{\{\gamma^t_{i^*}= 0\}}+\mathcal{P}(\bar{B}^t_i)+\frac{2\nu\,\log{t}}{\exp\left(2\kappa\sigma^2 \Psi(t)\right)}.\end{aligned}$$ From condition (\[eq:LogBndAvailability\]) we have $\sum_{j=2}^t\mathbb{I}_{\{\gamma^j_{i^*} =0\}}\leq \gamma \log{t}$. Thus $$\begin{aligned} E(T_i(n))&\leq l+\gamma \log{n}+\sum_{t=l}^{n-1}\mathcal{P}(\bar{B}^t_i)+\sum_{t=l}^{n-1}\frac{2\nu\,\log{t}}{\exp\left(2\kappa \sigma^2\Psi(t)\right)}.\end{aligned}$$ Let us now find an upper bound for $\sum_{t=1}^n\mathcal{P}(\bar{B}^t_i)$. Note that $\bar{B}^t_i=\{\widehat{\mu}^t_{i^*}< \widehat{\mu}_i^t+2{C^t_{i}}\}$, and let $\Delta^t_{i}=\widehat{\mu}^t_{i^*}-\widehat{\mu}_i^t$. Then, if $\bar{B}^t_i$ holds, $$\begin{aligned} \frac{\Delta^t_{i}}{2}&<\sigma\sqrt{\frac{\Psi(t)}{T_i(t)}},\end{aligned}$$ where the inequality follows from (\[eq:UCBQ\]). Since $0<{\Delta_i}<\Delta^t_{i}$, we have that if $\bar{B}^t_i$ holds then $$\begin{aligned} T_i(t)&<\frac{4\sigma^2}{{\Delta^2_i}}\,\Psi(t).\end{aligned}$$ 
Thus, for sufficiently large $t$, $$\begin{aligned} &\bar{B}^t_i\subseteq \left\{T_i(t)<\frac{4\sigma^2}{{\Delta^2_i}}\,\Psi(t)\right\},\\ &\mathcal{P}\left(\{\bar{B}^t_i\}\right)\leq \mathcal{P}\left(\left\{T_i(t)<\frac{4\sigma^2}{{\Delta^2_i}}\,\Psi(t)\right\}\right).\end{aligned}$$ Thus $\mathcal{P}\left(\bar{B}^t_i\right)\neq 0$ only if $$\begin{aligned} T_i(t)&<\frac{4\sigma^2}{{\Delta^2_i}}\,\Psi(t).\end{aligned}$$ Thus we have $$\begin{aligned} \sum_{t=l}^{n-1}\mathcal{P}(\bar{B}^t_i)&=\sum_{t=l}^{\tilde{t}}\mathcal{P}(\bar{B}^t_i)\leq \frac{4\sigma^2}{{\Delta^2_i}}\,\Psi(n),\end{aligned}$$ and hence $$\begin{aligned} E(T_i(n))\leq l+\gamma \log{n}+\frac{4\sigma^2}{{\Delta^2_i}}\,\Psi(n) +\sum_{t=l}^{n-1}\frac{\nu\,\log{t}}{\exp\left(2\kappa \sigma^2\Psi(t)\right)}.\end{aligned}$$ If $\alpha \log{t}\leq \Psi(t) \leq \beta \log{t}$, then $$\begin{aligned} E(T_i(n))\leq l+\gamma\log{n}+\frac{4\sigma^2\beta}{{\Delta^2_i}}\,\log n +\sum_{t=l}^{n-1}\frac{\nu\,\log{t}}{\exp\left(2\kappa\sigma^2 \alpha \log{t}\right)}.\end{aligned}$$ The series on the right converges as long as $\alpha>3/(2\kappa \sigma^2)$. Thus, from (\[eq:DynRegret\]), the cumulative expected regret satisfies $$\begin{aligned} R_n\leq c_1+ \bar{\Delta}\sum_{i\neq i^*}^k\left(\gamma+\frac{4 \sigma^2\beta}{\Delta_i^2}\right)\,\log{n} ,\end{aligned}$$ where $$c_1=k\bar{\Delta}\nu\,\left(\frac{\log{2}}{2^{2\kappa\sigma^2 \alpha}}+\int_2^{n-1}\frac{1}{t^{2\kappa \sigma^2\alpha-1}}dt\right).$$ The integral on the right converges as $n\to \infty$ if $\alpha>3/(2\kappa\sigma^2)$. This completes the proof of Theorem \[Theom:DynamicRegret\]. [^1]: $^{1}$Department of Mechanical and Aerospace Engineering, Princeton University, NJ 08544, USA. [udarim@princeton.edu]{} [^2]: $^{2}$Department of Mechanical Engineering, University of Peradeniya, KY 20400, Sri Lanka. [smaithri@pdn.ac.lk]{} [^3]: $^{3}$Department of Mechanical and Aerospace Engineering, Princeton University, NJ 08544, USA. 
[naomi@princeton.edu]{}
--- abstract: 'This paper presents a novel efficient receiver design for wireless communication systems that incorporate orthogonal frequency division multiplexing (OFDM) transmission. The proposed receiver does not require channel estimation or equalization to perform coherent data detection. Instead, channel estimation, equalization, and data detection are combined into a single operation, and hence, the detector is denoted as a direct data detector ($D^{3}$). The performance of the proposed system is thoroughly analyzed theoretically in terms of bit error rate (BER), and validated by Monte Carlo simulations. The obtained theoretical and simulation results demonstrate that the BER of the proposed $D^{3}$ is only $3$ dB away from coherent detectors with perfect knowledge of the channel state information (CSI) in flat fading channels, and similarly in frequency-selective channels for a wide range of signal-to-noise ratios (SNRs). If CSI is not known perfectly, then the $D^{3}$ outperforms the coherent detector substantially, particularly at high SNRs with linear interpolation. The computational complexity of the $D^{3}$ depends on the length of the sequence to be detected, nevertheless, a significant complexity reduction can be achieved using the Viterbi algorithm.' author: - 'A. Saci, A. Al-Dweik, and A. Shami[^1][^2][^3]' title: Direct Data Detection of OFDM Signals Over Wireless Channels --- OFDM, fading channels, data detection, Viterbi, sequence detection, channel estimation, equalization. Introduction ============ In OFDM, the cyclic prefix renders the fading over each individual subcarrier approximately flat. Consequently, a low-complexity single-tap equalizer can be utilized to eliminate the impact of the multipath fading channel. Under such circumstances, the OFDM demodulation process can be performed once the fading parameters at each subcarrier, commonly denoted as channel state information (CSI), are estimated. 
In general, channel estimation can be classified into blind [@One-Shot-CFO-2014]-[@blind-massive-mimo-acd], and pilot-aided techniques [@Robust-CE-OFDM-2015]-[@pilot-ce-pilot-freq-domain]. Blind channel estimation techniques are spectrally efficient because they do not require any overhead to estimate the CSI, nevertheless, such techniques have not yet been adopted in practical OFDM systems. Conversely, pilot-based CSI estimation is preferred for practical systems, because typically it is more robust and less complex. In pilot-based CSI estimation, the pilot symbols are embedded within the subcarriers of the transmitted OFDM signal in time and frequency domain; hence, the pilots form a two dimensional (2-D) grid [@LTE-A]. The channel response at the pilot symbols can be obtained using the least-squares (LS) frequency domain estimation, and the channel parameters at other subcarriers can be obtained using various interpolation techniques [@Rayleigh-Ricean-Interpolation-TCOM2008]. Optimal interpolation requires a 2-D Wiener filter that exploits the time and frequency correlation of the channel, however, it is substantially complex to implement [@Interpolation-TCOM-2010], [@Wiener]. The complexity can be reduced by decomposing the 2-D interpolation process into two cascaded 1-D processes, and then, using less computationally-involved interpolation schemes [@Adaptive-Equalization-IEEE-Broadcasting-2008], [@Comp-Pilot-VTC2007]. Low complexity interpolation, however, is usually accompanied by error rate performance degradation [@Comp-Pilot-VTC2007]. It is also worth noting that most practical OFDM-based systems utilize a fixed grid pattern structure [@LTE-A]. Once the channel parameters are obtained for all subcarriers, the received samples at the output of the fast Fourier transform (FFT) are equalized to compensate for the channel fading. Fortunately, the equalization for OFDM is performed in the frequency domain using single-tap equalizers. 
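As a concrete sketch of this pilot-aided chain, the fragment below performs LS estimation at the pilot subcarriers, 1-D linear interpolation across the remaining subcarriers, and single-tap equalization. It is a minimal illustration only: the pilot spacing, noise level, and all variable names are choices made here, not taken from any standard.

```python
import numpy as np

rng = np.random.default_rng(1)
N, spacing = 64, 8                      # subcarriers and pilot spacing (illustrative)
pilot_idx = np.arange(0, N, spacing)

# True channel frequency response from a short random impulse response
h = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)
H = np.fft.fft(h, N)

d = rng.choice([-1.0, 1.0], size=N)     # BPSK data symbols
d[pilot_idx] = 1.0                      # known pilot symbols
r = H * d + 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# LS estimate at the pilots: divide by the known pilot value
H_ls = r[pilot_idx] / d[pilot_idx]

# 1-D linear interpolation (real and imaginary parts separately);
# np.interp holds the edge value beyond the last pilot
k = np.arange(N)
H_hat = np.interp(k, pilot_idx, H_ls.real) + 1j * np.interp(k, pilot_idx, H_ls.imag)

# Single-tap equalization followed by symbol-by-symbol detection
d_hat = np.sign((r / H_hat).real)
```

Interpolating the real and imaginary parts separately is the simplest of the schemes mentioned above; a 2-D Wiener filter would replace the `np.interp` step at considerably higher cost.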
The equalizer output samples, which are denoted as the decision variables, will be applied to a maximum likelihood detector (MLD) to regenerate the information symbols. In addition to the direct approach, several techniques have been proposed in the literature to estimate the CSI or detect the data symbols indirectly, by exploiting the correlation among the channel coefficients. For example, the per-survivor processing (PSP) approach has been widely used to approximate the maximum likelihood sequence estimator (MLSE) for coded and uncoded sequences [@PSP-Raheli], [@PSP-Zhu], [@Rev-1]. The PSP utilizes the Viterbi algorithm (VA) to recursively estimate the CSI without interpolation using the least mean squares (LMS) algorithm. Although the PSP provides superior performance when the channel is flat over the entire sequence, its performance degrades severely if this condition is not satisfied, even when the LMS step size is adaptive [@PSP-Zhu]. Multiple symbol differential detection (MSDD) can also be used for sequence estimation without explicit channel estimation. In such systems, the information is embedded in the phase difference between adjacent symbols, and hence, differential encoding is needed. Although differential detection is only $3$ dB worse than coherent detection in flat fading channels, its performance may deteriorate significantly in frequency-selective channels [@Divsalar], [@Diff-Xhang]. Consequently, Wu and Kam [@Wu; @2010] proposed a generalized likelihood ratio test (GLRT) receiver whose performance without CSI is comparable to that of the coherent detector in flat fading channels. Although the GLRT receiver is more robust than differential detectors in frequency-selective channels, its performance is significantly worse than that of coherent detectors. 
The signal at the channel output is estimated with a minimum mean square error (MMSE) estimator from the knowledge of the received signal and the second-order statistics of the channel and noise. Such an approach may provide a BER that is about $1$ dB from that of the ML coherent detector in flat fading channels, but at the expense of a large number of pilots. Decision-directed techniques can also be used to avoid conventional channel estimation. For example, the authors in [@Saci-Tcom] proposed a hybrid frame structure that enables blind decision-directed channel estimation. Although the proposed system manages to offer reliable channel estimates and BER in various channel conditions, the system structure follows the typical coherent detector design where equalization and symbol detection are required. Motivation and Key Contributions -------------------------------- Unlike conventional OFDM detectors, this work presents a new detector to regenerate the information symbols directly from the received samples at the FFT output, which is denoted as the direct data detector ($D^{3}$). By using the $D^{3}$, there is no need to perform channel estimation, interpolation, equalization, or symbol decision operations. The $D^{3}$ exploits the fact that channel coefficients over adjacent subcarriers are highly correlated and approximately equal. Consequently, the $D^{3}$ is derived by minimizing the difference between channel coefficients of adjacent subcarriers. The main limitation of the $D^{3}$ is that it suffers from a phase ambiguity problem, which can be resolved using pilot symbols; such pilots are part of a transmission frame in most practical standards [@WiMax], [@LTE-A]. To the best of the authors’ knowledge, there is no work reported in the published literature that uses the proposed principle. The $D^{3}$ performance is evaluated in terms of complexity, computational power, and bit error rate (BER), where analytic expressions are derived for several channel models and system configurations. 
The $D^{3}$ BER is compared to other widely used detectors such as the maximum likelihood (ML) coherent detector [@Proakis-Book-2001] with perfect and imperfect CSI, multiple symbol differential detector (MSDD) [@Divsalar], the ML sequence detector (MLSD) with no CSI [@Wu; @2010], and the per-survivor processing detector [@PSP-Raheli]. The obtained results show that the $D^{3}$ is more robust than all the other considered detectors in various cases of interest, particularly in frequency-selective channels at moderate and high SNRs. Moreover, the computational power comparison shows that the $D^{3}$ requires less than $35\%$ of the computational power required by the ML coherent detector. Paper Organization and Notations -------------------------------- The rest of this paper is organized as follows. The OFDM system and channel models are described in Section \[sec:Signal-and-Channel\]. The proposed $D^{3}$ is presented in Section \[sec:Proposed-System-Model\], and the efficient implementation of the $D^{3}$ is explored in Section \[sec:Efficient-Implementation-of-D3\]. The system error probability performance analysis is presented in Section \[sec:System-Performance-Analysis\]. Complexity analysis of the conventional pilot based OFDM and the $D^{3}$ are given in Section \[sec:Complexity-Analysis\]. Numerical results are discussed in Section \[sec:Numerical-Results\], and finally, the conclusion is drawn in Section \[sec:Conclusion\]. In what follows, unless otherwise specified, uppercase boldface and blackboard letters such as $\mathbf{H}$ and $\mathbb{H}$, will denote $N\times N$ matrices, whereas lowercase boldface letters such as $\mathbf{x}$ will denote row or column vectors with $N$ elements. Uppercase, lowercase, or bold letters with a tilde such as $\tilde{d}$ will denote trial values, and symbols with a hat, such as $\hat{\mathbf{x}}$, will denote the estimate of $\mathbf{x}$. 
Letters with an acute accent, such as $\acute{v}$, are used to denote the next index, i.e., $\acute{v}\triangleq v+1$. Furthermore, $\mathrm{E}\left[\cdot\right]$ denotes the expectation operation. Signal and Channel Models \[sec:Signal-and-Channel\] ==================================================== Consider an OFDM system with $N$ subcarriers modulated by a sequence of $N$ complex data symbols $\mathbf{d}=[d_{0}$, $d_{1}$, $\ldots$, $d_{N-1}]^{T}$. The data symbols are selected uniformly from a general constellation such as $M$-ary phase shift keying (MPSK) or quadrature amplitude modulation (QAM). In conventional pilot-aided OFDM systems [@IEEE-AC], $N_{P}$ of the subcarriers are allocated for pilot symbols, which can be used for channel estimation and synchronization purposes. The modulation process in OFDM can be implemented efficiently using an $N$-point inverse FFT (IFFT) algorithm, where its output during the $\ell$th OFDM block can be written as $\mathbf{x}(\ell)=\mathbf{F}^{H}\mathbf{d}(\ell)$ where $\mathbf{F}$ is the normalized $N\times N$ FFT matrix, and hence, $\mathbf{F}^{H}$ is the IFFT matrix. To simplify the notation, the block index $\ell$ is dropped for the remaining parts of the paper unless it is necessary to include it. Then, a CP of length $N_{\mathrm{CP}}$ samples, no less than the channel maximum delay spread ($\mathcal{D}_{\mathrm{h}}$), is appended to compose the OFDM symbol with a total length of $N_{\mathrm{t}}=N+N_{\mathrm{CP}}$ samples and a duration of $T_{\mathrm{t}}$ seconds. At the receiver front-end, the received signal is down-converted to baseband and sampled with period $T_{\mathrm{s}}=T_{\mathrm{t}}/N_{\mathrm{t}}$. In this work, the channel is assumed to be composed of $\mathcal{D}_{\mathrm{h}}+1$ independent multipath components, each of which has a gain $h_{m}\sim\mathcal{CN}\left(0,2\sigma_{h_{m}}^{2}\right)$ and delay $m\times T_{\mathrm{s}}$, where $m\in\{0\text{, }1\text{,}\ldots\text{, }\mathcal{D}_{\mathrm{h}}\}$. 
A quasi-static channel is assumed throughout this work, and thus, the channel taps are considered constant over one OFDM symbol, but they may change over two consecutive symbols. Therefore, the received sequence after dropping the CP samples and applying the FFT can be expressed as, $$\mathbf{r}=\mathbf{Hd+w}\label{eq:rx_sig_FD}$$ where $\left\{ \mathbf{r,w}\right\} \in\mathbb{C}^{N\times1}$, $w_{v}\sim\mathcal{CN}\left(0\text{, }2\sigma_{w}^{2}\right)$ is the additive white Gaussian noise (AWGN) vector and $\mathbf{H}$ denotes the channel frequency response (CFR) $$\mathbf{H}=\text{diag}\left\{ \left[H_{0}\text{, }H_{1}\text{,}\ldots\text{, }H_{N-1}\right]\right\} .$$ By noting that $\mathbf{r|}_{\mathbf{H,d}}\sim\mathcal{CN}\left(\mathbf{Hd}\text{, }2\sigma_{w}^{2}\mathbf{I}_{N}\right)$ where $\mathbf{I}_{N}$ is an $N\times N$ identity matrix, then it is straightforward to show that the MLD can be expressed as $$\mathbf{\hat{d}}=\arg\text{ }\min_{\tilde{\mathbf{d}}}\text{ }\left\Vert \mathbf{r-H}\tilde{\mathbf{d}}\right\Vert ^{2}\label{E-MLD-01}$$ where $\left\Vert \mathbf{\cdot}\right\Vert $ denotes the Euclidean norm, and $\tilde{\mathbf{d}}=\left[\tilde{d}_{0}\text{, }\tilde{d}_{1}\text{,}\ldots\text{, }\tilde{d}_{N-1}\right]^{T}$ denotes the trial values of $\mathbf{d}$. As can be noted from (\[E-MLD-01\]), the MLD requires the knowledge of $\mathbf{H}$. Moreover, because (\[E-MLD-01\]) describes the detection of more than one symbol, it is typically denoted as maximum likelihood sequence detector (MLSD). 
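The diagonal model in (\[eq:rx_sig_FD\]) can be verified numerically: with a CP at least as long as the delay spread, the FFT output equals the elementwise product of the CFR and the data. The NumPy sketch below performs the check in the noise-free case so that the equality is exact; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, N_cp, D_h = 64, 8, 3                 # subcarriers, CP length, delay spread

# BPSK data symbols on the N subcarriers
d = rng.choice([-1.0, 1.0], size=N)

# OFDM modulation: x = F^H d with the normalized (unitary) IFFT
x = np.fft.ifft(d) * np.sqrt(N)

# Multipath channel with D_h + 1 complex Gaussian taps
h = (rng.normal(size=D_h + 1) + 1j * rng.normal(size=D_h + 1)) / np.sqrt(2 * (D_h + 1))

# Append the cyclic prefix and pass through the channel (linear convolution)
x_cp = np.concatenate([x[-N_cp:], x])
y = np.convolve(x_cp, h)[: N_cp + N]

# Receiver: drop the CP and apply the unitary FFT -> r = H d elementwise
r = np.fft.fft(y[N_cp:]) / np.sqrt(N)
H = np.fft.fft(h, N)                    # channel frequency response

assert np.allclose(r, H * d)
```

Because $N_{\mathrm{CP}}\geq\mathcal{D}_{\mathrm{h}}$, the linear convolution restricted to the retained samples coincides with a circular convolution, which the FFT diagonalizes.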
If the elements of $\mathbf{d}$ are independent, the MLSD can be replaced by a symbol-by-symbol MLD $$\hat{d}_{v}=\arg\text{ }\min_{\tilde{d}_{v}}\text{ }\left\vert r_{v}-H_{v}\tilde{d}_{v}\right\vert ^{2}\text{.}\label{E-MLD-02}$$ Since perfect knowledge of $\mathbf{H}$ is infeasible, an estimated version of $\mathbf{H}$, denoted as $\hat{\mathbf{H}}$, can be used in (\[E-MLD-01\]) and (\[E-MLD-02\]) instead of $\mathbf{H}$. Another possible approach to implement the detector is to equalize $\mathbf{r}$, and then use a symbol-by-symbol MLD. Therefore, the equalized received sequence can be expressed as, $$\check{\mathbf{r}}=\left[\hat{\mathbf{H}}^{H}\hat{\mathbf{H}}\right]^{-1}\hat{\mathbf{H}}^{H}\mathbf{r}$$ and $$\hat{d}_{v}=\arg\min_{\tilde{d}_{v}}\left\vert \check{r}_{v}-\tilde{d}_{v}\right\vert ^{2}\text{, }\forall v\text{.}$$ It is interesting to note that solving (\[E-MLD-01\]) does not necessarily require the explicit knowledge of $\mathbf{H}$ under some special circumstances. For example, Wu and Kam [@Wu; @2010] noticed that in flat fading channels, i.e., $H_{v}=H$ $\forall v$, it is possible to detect the data symbols using the following MLSD, $$\mathbf{\hat{d}}=\arg\text{ }\max_{\tilde{\mathbf{d}}}\text{ }\frac{\left\vert \tilde{\mathbf{d}}^{H}\mathbf{r}\right\vert ^{2}}{\left\Vert \tilde{\mathbf{d}}\right\Vert ^{2}}.\label{E-Wu}$$ Although the detector described in (\[E-Wu\]) is efficient in the sense that it does not require the knowledge of $\mathbf{H}$, its BER is very sensitive to the channel variations. Proposed $D^{3}$ System Model\[sec:Proposed-System-Model\] ========================================================== One of the distinctive features of OFDM is that its channel coefficients over adjacent subcarriers in the frequency domain are highly correlated and approximately equal. 
The correlation coefficient between two adjacent subcarriers can be defined as $$\begin{aligned} \varrho_{f} & \triangleq & \mathrm{E}\left[H_{v}H_{\acute{v}}^{\ast}\right]\nonumber \\ & = & \mathrm{E}\left[\sum_{n=0}^{\mathcal{D}_{\mathrm{h}}}h_{n}e^{-j2\pi\frac{nv}{N}}\sum_{m=0}^{\mathcal{D}_{\mathrm{h}}}h_{m}^{\ast}e^{j2\pi\frac{m\acute{v}}{N}}\right]=\sum_{m=0}^{\mathcal{D}_{\mathrm{h}}}\sigma_{h_{m}}^{2}e^{j2\pi\frac{m}{N}}\label{eq:rho-f}\end{aligned}$$ where $\sigma_{h_{m}}^{2}=\mathrm{E}\left[\left\vert h_{m}\right\vert ^{2}\right]$. The difference between two adjacent channel coefficients is $$\Delta_{f}=\mathrm{E}\left[H_{v}-H_{\acute{v}}\right]=\mathrm{E}\left[\sum_{m=0}^{\mathcal{D}_{\mathrm{h}}}h_{m}e^{-j2\pi\frac{mv}{N}}\left(1-e^{-j2\pi\frac{m}{N}}\right)\right].$$ For large values of $N$, it is straightforward to show that $\varrho_{f}\rightarrow1$ and $\Delta_{f}\rightarrow0$. Similar to the frequency domain, the time domain correlation defined according to the Jakes’ model can be computed as [@Jakes-Model], $$\varrho_{t}=\mathrm{E}\left[H_{v}^{\ell}\left(H_{v}^{\acute{\ell}}\right)^{\ast}\right]=J_{0}\left(2\pi f_{d}T_{\mathrm{s}}\right)\label{eq:rho-t}$$ where $J_{0}\left(\cdot\right)$ is the zeroth-order Bessel function of the first kind, and $f_{d}$ is the maximum Doppler frequency. For large values of $N$, $2\pi f_{d}T_{\mathrm{s}}\ll1$, and hence $J_{0}\left(2\pi f_{d}T_{\mathrm{s}}\right)\approx1$, and thus $\varrho_{t}\approx1$. Using the same argument, the difference in the time domain $\Delta_{t}\triangleq\mathrm{E}\left[H_{v}^{\ell}-H_{v}^{\acute{\ell}}\right]\approx0$. Although the proposed system can be applied in the time domain, frequency domain, or both, the focus of this work is the frequency domain. 
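The closed form in (\[eq:rho-f\]) can be checked by Monte Carlo simulation. The sketch below, with an illustrative uniform power-delay profile chosen here, compares the sample correlation of two adjacent CFR coefficients against $\sum_{m}\sigma_{h_{m}}^{2}e^{j2\pi m/N}$ and confirms that its magnitude is close to one.

```python
import numpy as np

rng = np.random.default_rng(2)
N, D_h, trials = 64, 3, 200_000
var = np.ones(D_h + 1) / (D_h + 1)       # E|h_m|^2 per tap (uniform, illustrative)
m = np.arange(D_h + 1)

# Draw many channel realizations; each tap is zero-mean complex Gaussian
h = rng.normal(size=(trials, D_h + 1)) + 1j * rng.normal(size=(trials, D_h + 1))
h *= np.sqrt(var / 2)

# CFR at subcarriers v = 0 and v = 1 for every realization
H0 = h @ np.exp(-2j * np.pi * m * 0 / N)
H1 = h @ np.exp(-2j * np.pi * m * 1 / N)
rho_mc = np.mean(H0 * np.conj(H1))       # sample estimate of E[H_v H_{v+1}^*]

# Closed form: sum_m E|h_m|^2 exp(j 2 pi m / N)
rho = np.sum(var * np.exp(2j * np.pi * m / N))
```

With $N=64$ and a four-tap channel, both values have magnitude near unity, which is the property the $D^{3}$ exploits.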
Based on the aforementioned properties of OFDM, a simple approach to extract the information symbols from the received sequence $\mathbf{r}$ can be designed by minimizing the difference of the channel coefficients between adjacent subcarriers, which can be expressed as $$\mathbf{\hat{d}}=\arg\min_{\tilde{\mathbf{d}}}\sum_{v=0}^{N-2}\left\vert \frac{r_{v}}{\tilde{d}_{v}}-\frac{r_{\acute{v}}}{\tilde{d}_{\acute{v}}}\right\vert ^{2}.\label{E-DDD-00}$$ As can be noted from (\[E-DDD-00\]), the estimated data sequence $\mathbf{\hat{d}}$ can be obtained without the knowledge of $\mathbf{H}$. Moreover, there is no requirement for the channel coefficients over the considered sequence to be equal, and hence, the $D^{3}$ should perform fairly well even in frequency-selective fading channels. Nevertheless, it can be noted that (\[E-DDD-00\]) does not have a unique solution because both $\mathbf{d}$ and $-\mathbf{d}$ minimize (\[E-DDD-00\]). To resolve the phase ambiguity problem, one or more pilot symbols can be used as a part of the sequence $\mathbf{d}$. In such scenarios, the performance of the $D^{3}$ will be affected indirectly by the frequency selectivity of the channel because the capability of the pilot to resolve the phase ambiguity depends on its fading coefficient. Another advantage of using pilot symbols is that it will not be necessary to detect the $N$ symbols simultaneously. Instead, it will be sufficient to detect $\mathcal{K}$ symbols at a time, which can be exploited to simplify the system design and analysis. Using the same approach as in the frequency domain, the $D^{3}$ can be designed to work in the time domain as well, by minimizing the difference of the channel coefficients over two consecutive symbols in time, i.e., two subcarriers with the same index in two consecutive OFDM symbols, which is also applicable to single carrier systems. 
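The minimization in (\[E-DDD-00\]) can be illustrated with a small brute-force search over one segment. The sketch below (segment length, channel profile, and noise level are all illustrative choices made here) fixes the leading pilot to $+1$ to resolve the sign ambiguity:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
K, sigma_w = 6, 0.05                     # segment length and noise std (illustrative)

# Slowly varying channel across the K adjacent subcarriers
H = (1.0 + 0.02 * np.arange(K)) * np.exp(1j * 0.05 * np.arange(K))
d = np.append(1.0, rng.choice([-1.0, 1.0], size=K - 1))   # d[0] is the pilot
r = H * d + sigma_w * (rng.normal(size=K) + 1j * rng.normal(size=K))

def d3_cost(d_trial):
    # sum over adjacent subcarriers of |r_v/d_v - r_{v+1}/d_{v+1}|^2
    q = r / d_trial
    return np.sum(np.abs(np.diff(q)) ** 2)

# Brute-force search over all BPSK sequences with the pilot fixed to +1
cands = [np.array((1.0,) + t) for t in product([-1.0, 1.0], repeat=K - 1)]
d_hat = min(cands, key=d3_cost)
```

A single wrong sign makes some ratio $r_{v}/\tilde{d}_{v}$ flip to $\approx-H_{v}$, contributing a cost on the order of $|2H_{v}|^{2}$, which is why the true sequence wins by a wide margin at moderate noise levels.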
It can also be designed to work in both time and frequency domains, where the detector can be described as $$\mathbf{\hat{D}}_{\mathcal{L}\text{,}\mathcal{K}}\mathbf{=}\arg\min_{\mathbf{\tilde{\mathbf{D}}}_{\mathcal{L}\text{,}\mathcal{K}}}\text{ }J\left(\tilde{\mathbf{D}}_{\mathcal{L}\text{,}\mathcal{K}}\right)\label{eq:opt-D}$$ where $\mathbf{D}_{\mathcal{L}\text{,}\mathcal{K}}$ is an $\mathcal{L}\times\mathcal{K}$ data matrix, $\mathcal{L}$ and $\mathcal{K}$ are the time and frequency detection window sizes, and the objective function $J\left(\tilde{\mathbf{D}}\right)$ is given by $$J\left(\tilde{\mathbf{D}}_{\mathcal{L}\text{,}\mathcal{K}}\right)=\sum_{\ell=0}^{\mathcal{L}-1}\sum_{v=0}^{\mathcal{K}-2}\left(\left\vert \frac{r_{v}^{\ell}}{\tilde{d}_{v}^{\ell}}-\frac{r_{\acute{v}}^{\ell}}{\tilde{d}_{\acute{v}}^{\ell}}\right\vert ^{2}+\left\vert \frac{r_{v}^{\ell}}{\tilde{d}_{v}^{\ell}}-\frac{r_{v}^{\acute{\ell}}}{\tilde{d}_{v}^{\acute{\ell}}}\right\vert ^{2}\right)\text{.}\label{eq:objective-function}$$ For example, if the detection window size is chosen to be the LTE resource block, then $\mathcal{L}=14$ and $\mathcal{K}=12$. Moreover, the system presented in (\[eq:opt-D\]) can be extended to multi-branch receiver scenarios, i.e., single-input multiple-output (SIMO), as $$\begin{aligned} \hat{\mathbf{D}} & =\arg\min_{\tilde{\mathbf{D}}_{\mathcal{L}\text{,}\mathcal{K}}}\sum_{n=1}^{\mathcal{N}}\sum_{\ell=0}^{\mathcal{L}-1}\sum_{v=0}^{\mathcal{K}-2}\left(\left\vert \frac{r_{v}^{\ell,n}}{\tilde{d}_{v}^{\ell}}-\frac{r_{\acute{v}}^{\ell,n}}{\tilde{d}_{\acute{v}}^{\ell}}\right\vert ^{2}+\left\vert \frac{r_{v}^{\ell,n}}{\tilde{d}_{v}^{\ell}}-\frac{r_{v}^{\acute{\ell},n}}{\tilde{d}_{v}^{\acute{\ell}}}\right\vert ^{2}\right)\end{aligned}$$ where $\mathcal{N}$ is the number of receiving antennas. 
Efficient Implementation of $D^{3}$\[sec:Efficient-Implementation-of-D3\] ========================================================================= It can be noted from (\[eq:opt-D\]) and (\[eq:objective-function\]) that solving for $\hat{\mathbf{D}}$, given that $N_{P}$ pilot symbols are used, requires $M^{\mathcal{K}\mathcal{L}-N_{P}}$ trials if a brute-force search is adopted, which is prohibitively complex, and thus, reducing the computational complexity is crucial. ![Example of a 1-D segmentation over the frequency domain for an LTE-A resource block.\[fig:2D-to-1D\] ](graphics/fig_01_digram_1D_sigmentation_modified) \[subsec:The-Viterbi-Algorithm\]The Viterbi Algorithm (VA) ---------------------------------------------------------- By noting that each term in (\[E-DDD-00\]) couples only adjacent symbols, so that the trial sequence can be modeled as a first-order Markov process, MLSD techniques such as the VA can be used to implement the $D^{3}$ efficiently. For example, the trellis diagram of the VA with binary phase shift keying (BPSK) is shown in Fig. \[fig:Viterbi-D3\], and can be implemented as follows: 1. Initialize the path metrics $\left\{ \Gamma_{0}^{U},\acute{\Gamma}_{0}^{U},\Gamma_{0}^{L},\acute{\Gamma}_{0}^{L}\right\} =0$, where $U$ and $L$ denote the upper and lower branches, respectively. Since BPSK is used, the number of states is $2$. 2. Initialize the counter, $c=0$. 3. Compute the branch metric $J_{m,n}^{c}=\left\vert \frac{r_{c}}{\tilde{d}_{m}}-\frac{r_{\acute{c}}}{\tilde{d}_{n}}\right\vert ^{2}$, where $m$ is the current symbol index ($m=0\rightarrow\tilde{d}_{m}=-1$ and $m=1\rightarrow\tilde{d}_{m}=1$), and $n$ is the next symbol index using the same mapping as $m$. 4. 
Compute the path metrics using the following rules, $$\begin{array}{ccc} \Gamma_{\acute{c}}^{U}=\min\left[\Gamma_{c}^{U}\text{, }\acute{\Gamma}_{c}^{U}\right]+J_{00}^{c} & & \Gamma_{\acute{c}}^{L}=\min\left[\Gamma_{c}^{L}\text{, }\acute{\Gamma}_{c}^{L}\right]+J_{01}^{c}\\ \acute{\Gamma}_{\acute{c}}^{U}=\min\left[\Gamma_{c}^{U}\text{, }\acute{\Gamma}_{c}^{U}\right]+J_{10}^{c} & & \acute{\Gamma}_{\acute{c}}^{L}=\min\left[\Gamma_{c}^{L}\text{, }\acute{\Gamma}_{c}^{L}\right]+J_{11}^{c} \end{array}$$ 5. Track the surviving paths, $2$ paths in the case of BPSK. 6. Increase the counter, $c=c+1$. 7. If $c=\mathcal{K}$, the algorithm ends. Otherwise, go to step 3. ![Trellis diagram of the $D^{3}$ detector for BPSK.\[fig:Viterbi-D3\]](graphics/fig_02_digram_trellis) It is worth mentioning that placing a pilot symbol at the edge of a segment terminates the trellis. To simplify the discussion, assume that the pilot value is $-1$, and thus we compute only $J_{0,0}$ and $J_{1,0}$. Consequently, long data sequences can be divided into smaller segments bounded by pilots, which can reduce the delay by performing the detection over the sub-segments in parallel without sacrificing the error rate performance. System Design with an Error Control Coding ------------------------------------------ Forward error correction (FEC) coding can be integrated with the $D^{3}$ in two ways, based on the decoding process, i.e., hard or soft decision decoding. For the hard decision decoding, the integration of FEC coding is straightforward where the output of the $D^{3}$ is applied directly to the hard decision decoder (HDD). For the soft decision decoding, we can exploit the coded data to enhance the performance of the $D^{3}$, and then use the $D^{3}$ output to estimate the channel coefficients in a decision-directed manner. 
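A trellis pass of this kind can be sketched compactly in code. The NumPy fragment below is a minimal illustration of a Viterbi $D^{3}$ for uncoded BPSK over one segment; the function name, segment parameters, and the convention of a $+1$ pilot on the first subcarrier are choices made here, not taken from the paper (which uses a $-1$ pilot in its example).

```python
import numpy as np

def d3_viterbi_bpsk(r):
    """Viterbi pass of the BPSK D^3 over one segment.

    r : complex FFT-output samples; r[0] is assumed to carry a +1 pilot.
    The branch metric between trial symbols on adjacent subcarriers c and
    c+1 is |r_c/d_m - r_{c+1}/d_n|^2, as in the D^3 objective.
    """
    symbols = np.array([-1.0, 1.0])
    K = len(r)
    gamma = np.array([np.inf, 0.0])          # pilot forces the +1 start state
    back = np.zeros((K, 2), dtype=int)       # survivor (back-pointer) table
    for c in range(K - 1):
        new_gamma = np.empty(2)
        for n in range(2):                   # index of the next symbol
            costs = gamma + np.abs(r[c] / symbols - r[c + 1] / symbols[n]) ** 2
            back[c + 1, n] = int(np.argmin(costs))
            new_gamma[n] = costs[back[c + 1, n]]
        gamma = new_gamma
    path = np.empty(K, dtype=int)            # trace back the surviving path
    path[-1] = int(np.argmin(gamma))
    for c in range(K - 1, 0, -1):
        path[c - 1] = back[c, path[c]]
    return symbols[path]

# Illustrative usage on a smooth synthetic channel
rng = np.random.default_rng(4)
K = 8
H = np.exp(1j * 0.04 * np.arange(K))         # slowly varying CFR
d = np.append(1.0, rng.choice([-1.0, 1.0], size=K - 1))
r = H * d + 0.05 * (rng.normal(size=K) + 1j * rng.normal(size=K))
d_hat = d3_viterbi_bpsk(r)
```

Because the $D^{3}$ cost is a chain of pairwise terms, this recursion finds the exact minimizer with $O(\mathcal{K}M^{2})$ operations instead of $M^{\mathcal{K}-1}$ trials.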
The $D^{3}$ with coded data can be expressed as $$\mathbf{\hat{d}}=\arg\min_{\tilde{\mathbf{u}}\in\mathbb{U}}\sum_{v=0}^{N-2}\left\vert \frac{r_{v}}{\tilde{u}_{v}}-\frac{r_{\acute{v}}}{\tilde{u}_{\acute{v}}}\right\vert ^{2}\label{E-D3-Joint}$$ where $\mathbb{U}$ is the set of all codewords modulated with the same constellation used at the transmitter. Therefore, the trial sequences $\tilde{\mathbf{u}}$ are restricted to particular sequences. For the case of convolutional codes, the detection and decoding processes can be integrated smoothly since both use the VA. Such an approach can be adopted with linear block codes as well because trellis-based decoding can also be applied to block codes [@Trellis-Block]. Error Rate Analysis of the $D^{3}$\[sec:System-Performance-Analysis\] ===================================================================== The system BER analysis is presented for several cases according to the pilot and data arrangements. For simplicity, each case is discussed in a separate subsection. To make the analysis tractable, we consider BPSK modulation, while the BER of higher-order modulations is obtained via Monte Carlo simulations. ![Single-sided pilot segment. \[fig:Single-sided-pilot\]](graphics/fig_03_single_sided_diagram){width="0.48\columnwidth"} ![Double-sided pilot segment. \[fig:Double-sided-pilot\]](graphics/fig_04_double_sided_diagram){width="0.48\columnwidth"} Single-Sided Pilot \[subsec:Single-Sided-Pilot\] ------------------------------------------------ To detect a data segment that contains $\mathcal{K}$ symbols, at least one pilot symbol should be part of the segment in order to resolve the phase ambiguity problem. Consequently, the analysis in this subsection considers the case where there is only one pilot within the $\mathcal{K}$ symbols, as shown in Fig. \[fig:Single-sided-pilot\]. 
Given that the FFT output vector $\mathbf{r}=\left[r_{0}\text{, }r_{1}\text{,}\ldots,r_{N-1}\right]$ is divided into $L$ segments each of which consists of $\mathcal{K}$ symbols, including the pilot symbol, then the frequency domain $D^{3}$ detector can be written as, $$\hat{\mathbf{d}}_{l}=\arg\min_{\tilde{\mathbf{d}}}\sum_{v=l}^{\mathcal{K}-2+l}\left\vert \frac{r_{v}}{\tilde{d}_{v}}-\frac{r_{\acute{v}}}{\tilde{d}_{\acute{v}}}\right\vert ^{2}\,\,\,\,\mathcal{K}\in\left\{ 2,3,\dots,N-1\right\} \label{eq:d_hat}$$ where $l$ denotes the index of the first subcarrier in the segment, and without loss of generality, we consider that $l=0$. Therefore, by expanding (\[eq:d_hat\]) we obtain, $$\begin{gathered} \hat{\mathbf{d}}_{0}=\arg\min_{\tilde{\mathbf{d}}}\left(\frac{r_{0}}{\tilde{d}_{0}}-\frac{r_{1}}{\tilde{d}_{1}}\right)\left(\frac{r_{0}}{\tilde{d}_{0}}-\frac{r_{1}}{\tilde{d}_{1}}\right)^{\ast}+\cdots+\left(\frac{r_{\mathcal{K}-2}}{\tilde{d}_{\mathcal{K}-2}}-\frac{r_{\mathcal{K}-1}}{\tilde{d}_{\mathcal{K}-1}}\right)\left(\frac{r_{\mathcal{K}-2}}{\tilde{d}_{\mathcal{K}-2}}-\frac{r_{\mathcal{K}-1}}{\tilde{d}_{\mathcal{K}-1}}\right)^{\ast}\label{eq:analysis-expansion-01}\end{gathered}$$ which can be simplified to, $$\begin{gathered} \hat{\mathbf{d}}_{0}=\arg\min_{\tilde{\mathbf{d}}}\left\vert \frac{r_{0}}{\tilde{d}_{0}}\right\vert ^{2}+\left\vert \frac{r_{1}}{\tilde{d}_{1}}\right\vert ^{2}+\dots+\left\vert \frac{r_{\mathcal{K}-1}}{\tilde{d}_{\mathcal{K}-1}}\right\vert ^{2}-\frac{r_{0}r_{1}^{\ast}}{\tilde{d}_{0}\tilde{d}_{1}^{\ast}}-\frac{r_{0}^{\ast}r_{1}}{\tilde{d}_{0}^{\ast}\tilde{d}_{1}}-\cdots\\ -\frac{r_{\mathcal{K}-2}r_{\mathcal{K}-1}^{\ast}}{\tilde{d}_{\mathcal{K}-2}\tilde{d}_{\mathcal{K}-1}^{\ast}}-\frac{r_{\mathcal{K}-2}^{\ast}r_{\mathcal{K}-1}}{\tilde{d}_{\mathcal{K}-2}^{\ast}\tilde{d}_{\mathcal{K}-1}}.\label{eq:analysis-expansion-02}\end{gathered}$$ For BPSK, $\left\vert r_{v}/\tilde{d}_{v}\right\vert ^{2}=\left\vert r_{v}\right\vert ^{2}$, which is a constant term 
with respect to the optimization in (\[eq:analysis-expansion-02\]), and thus, these terms can be dropped. Therefore, the detector is reduced to $$\hat{\mathbf{d}}_{0}=\arg\max_{\tilde{\mathbf{d}}_{0}}\sum_{v=0}^{\mathcal{K}-2}\Re\left\{ \frac{r_{v}r_{\acute{v}}^{\ast}}{\tilde{d}_{v}\tilde{d}_{\acute{v}}}\right\} .$$ Given that the pilot symbol is placed in the first subcarrier and noting that $d_{v}\in\left\{ -1,1\right\} $, then $\tilde{d}_{0}=1$ and $\hat{\mathbf{d}}_{0}$ can be written as $$\hat{\mathbf{d}}_{0}=\arg\max_{\tilde{d}_{1},\ldots,\tilde{d}_{\mathcal{K}-1}}\frac{1}{\tilde{d}_{1}}\Re\left\{ r_{0}r_{1}^{\ast}\right\} +\sum_{v=1}^{\mathcal{K}-2}\frac{1}{\tilde{d}_{v}\tilde{d}_{\acute{v}}}\Re\left\{ r_{v}r_{\acute{v}}^{\ast}\right\} .\label{eq:d_hat_single_sided}$$ The sequence error probability ($P_{S}$), conditioned on the channel frequency response over the $\mathcal{K}$ symbols ($\mathbf{H}_{0}$) and the transmitted data sequence $\mathbf{d}_{0}$, can be defined as, $$P_{S}|_{\mathbf{H}_{0},\mathbf{d}_{0}}\triangleq\left.\Pr\left(\hat{\mathbf{d}}_{0}\neq\mathbf{d}_{0}\right)\right\vert _{\mathbf{H}_{0},\mathbf{d}_{0}}\label{eq:SEP-definition}$$ which can also be written in terms of the conditional probability of correct detection $P_{C}$ as, $$P_{S}|_{\mathbf{H}_{0},\mathbf{d}_{0}}=1-P_{C}|_{\mathbf{H}_{0},\mathbf{d}_{0}}\text{, \ \ }P_{C}|_{\mathbf{H}_{0},\mathbf{d}_{0}}=\Pr\left(\hat{\mathbf{d}}_{0}=\mathbf{d}_{0}\right)\mid_{\mathbf{H}_{0},\mathbf{d}_{0}}.\label{eq:SEP-Analysis-01}$$ Without loss of generality, we assume that $\mathbf{d}_{0}=[1\text{, }1\text{,}\ldots\text{, }1]\triangleq\mathbf{1}$. 
Therefore, $$P_{C}|_{\mathbf{H}_{0},\mathbf{\mathbf{1}}}=\Pr\left(\sum_{v=0}^{\mathcal{K}-2}\Re\left\{ r_{v}r_{\acute{v}}^{\ast}\right\} =\max_{\tilde{\mathbf{d}}_{0}}\left\{ \sum_{v=0}^{\mathcal{K}-2}\frac{\Re\left\{ r_{v}r_{\acute{v}}^{\ast}\right\} }{\tilde{d}_{v}\tilde{d}_{\acute{v}}}\right\} \right).\label{eq:probability-correct-sequence}$$ Since $\mathbf{d}_{0}$ has $\mathcal{K}-1$ data symbols, then there are $2^{\mathcal{K}-1}$ trial sequences, $\tilde{\mathbf{d}}_{0}^{(0)}$, $\tilde{\mathbf{d}}_{0}^{(1)}$,$\ldots$, $\tilde{\mathbf{d}}_{0}^{(\psi)}$, where $\psi=2^{\mathcal{K}-1}-1$, and $\tilde{\mathbf{d}}_{0}^{(\psi)}=[1\text{, }1\text{,}\ldots\text{, }1]$. The first symbol in every sequence is set to $1$, which is the pilot symbol. By defining $\sum_{v=0}^{\mathcal{K}-2}\frac{\Re\left\{ r_{v}r_{\acute{v}}^{\ast}\right\} }{\tilde{d}_{v}\tilde{d}_{\acute{v}}}\triangleq A_{n}$, where $\tilde{d}_{v}\text{, }\tilde{d}_{\acute{v}}\in\tilde{\mathbf{d}}_{0}^{(n)}$, then (\[eq:probability-correct-sequence\]) can be written as, $$P_{C}|_{\mathbf{H}_{0},\mathbf{\mathbf{1}}}=\Pr\left(A_{\psi}>A_{\psi-1},A_{\psi-2},\ldots,A_{0}\right)\label{E-PC-00}$$ which, as shown in Appendix I, can be simplified to $$P_{C}|_{\mathbf{H}_{0},\mathbf{\mathbf{1}}}=\prod\limits _{v=0}^{\mathcal{K}-2}\Pr\left(\Re\left\{ r_{v}r_{\acute{v}}^{\ast}\right\} >0\right).\label{eq:pc-expansion-2}$$ To evaluate $P_{C}|_{\mathbf{H}_{0},\mathbf{\mathbf{1}}}$ given in , it is necessary to compute $\Pr\left(\Re\left\{ r_{v}r_{\acute{v}}^{\ast}\right\} >0\right)$, which can be written as $$\Pr\left(\Re\left\{ r_{v}r_{\acute{v}}^{\ast}\right\} >0\right)=\Pr\left(\underbrace{r_{v}^{I}r_{\acute{v}}^{I}+r_{v}^{Q}r_{\acute{v}}^{Q}}_{r_{v,\acute{v}}^{\mathrm{SP}}}>0\right).\label{E-rSP}$$ Given that $\mathbf{d}_{0}=[1\text{, }1\text{,}\ldots\text{, }1]$, then $r_{v}^{I}=\Re\left\{ r_{v}\right\} =H_{v}^{I}+w_{v}^{I}$ and $r_{v}^{Q}=\Im\left\{ r_{v}\right\} =H_{v}^{Q}+w_{v}^{Q}$. 
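The decoupling behind (\[eq:pc-expansion-2\]) can be checked numerically: with the leading pilot fixed, each term of the sum is maximized independently by chaining signs along the segment, and this chain coincides with the brute-force maximizer. The sketch below uses illustrative parameters and writes the correlation with the conjugate product, which is the form obtained when expanding the $D^{3}$ metric for BPSK.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
K, sigma_w = 6, 0.1                       # segment length and noise std (illustrative)
H = 1.0 + 0.3 * (rng.normal(size=K) + 1j * rng.normal(size=K))  # mild selectivity
d = np.ones(K)                            # transmitted all-ones sequence; d[0] = pilot
r = H * d + sigma_w * (rng.normal(size=K) + 1j * rng.normal(size=K))

corr = (r[:-1] * np.conj(r[1:])).real     # Re{ r_v r_{v+1}^* } for adjacent pairs

def metric(dd):
    # sum_v Re{r_v r_{v+1}^*} / (d_v d_{v+1}) for a BPSK trial sequence dd
    return np.sum(corr / (dd[:-1] * dd[1:]))

# Brute force over all BPSK sequences with the pilot fixed to +1
cands = [np.array((1.0,) + t) for t in product([-1.0, 1.0], repeat=K - 1)]
d_brute = max(cands, key=metric)

# Chaining the signs maximizes every term of the sum independently
d_chain = np.ones(K)
for v in range(K - 1):
    d_chain[v + 1] = d_chain[v] * np.sign(corr[v])
```

The chained sequence attains $\sum_{v}|\Re\{r_{v}r_{\acute{v}}^{\ast}\}|$, the largest possible value of the metric, so the detected sequence equals the transmitted all-ones sequence exactly when every pairwise correlation is positive, which is the event whose probability (\[eq:pc-expansion-2\]) factorizes.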
Therefore, $r_{v}^{I}$, $r_{v}^{Q}$, $r_{\acute{v}}^{I}$ and $r_{\acute{v}}^{Q}$ are independent conditionally Gaussian random variables with averages $H_{v}^{I}$, $H_{v}^{Q}$, $H_{\acute{v}}^{I}$ and $H_{\acute{v}}^{Q}$, respectively, and the variance for all elements is $\sigma_{w}^{2}$. To derive the PDF of $r_{v,\acute{v}}^{\mathrm{SP}}$, the PDFs of $r_{v}^{I}r_{\acute{v}}^{I}$ and $r_{v}^{Q}r_{\acute{v}}^{Q}$ should be evaluated, each of which corresponds to the product of two Gaussian random variables. Although the product of two Gaussian variables is generally not Gaussian, its moment-generating function converges to that of a Gaussian distribution in the limit of large mean-to-standard-deviation ratios. Therefore, the product of two variables $X\sim\mathcal{N}(\mu_{x},\sigma_{x}^{2})$ and $Y\sim\mathcal{N}(\mu_{y},\sigma_{y}^{2})$ tends to $\mathcal{N}(\mu_{x}\mu_{y},\mu_{x}^{2}\sigma_{y}^{2}+\mu_{y}^{2}\sigma_{x}^{2})$ as the ratios $\mu_{x}/\sigma_{x}$ and $\mu_{y}/\sigma_{y}$ increase [@Product; @of; @2RV]. By noting that in (\[E-rSP\]) $\mathrm{E}\left[r_{y}^{x}\right]=H_{y}^{x}$, $x\in\left\{ I,Q\right\} $ and $y\in\left\{ v,\acute{v}\right\} $ and $\sigma_{r_{y}^{x}}=\sigma_{w}$, then $\mathrm{E}\left[r_{y}^{x}\right]/\sigma_{r_{y}^{x}}\gg1$ $\forall\left\{ x,y\right\} $ at moderate-to-high SNRs. Moreover, because the PDF of the sum or difference of two Gaussian random variables is also Gaussian, then, $r_{v,\acute{v}}^{\mathrm{SP}}\sim\mathcal{N}\left(\bar{\mu}_{\mathrm{SP}},\bar{\sigma}_{\mathrm{SP}}^{2}\right)$ where $\bar{\mu}_{\mathrm{SP}}=H_{v}^{I}H_{\acute{v}}^{I}+H_{v}^{Q}H_{\acute{v}}^{Q}$ and $\bar{\sigma}_{\mathrm{SP}}^{2}=\sigma_{w}^{2}\left(\left\vert H_{v}\right\vert ^{2}+\left\vert H_{\acute{v}}\right\vert ^{2}+\sigma_{w}^{2}\right)$. 
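The cited approximation for the product of two Gaussian variables can be checked directly by simulation; the sketch below (means and standard deviations chosen here for illustration) compares the sample mean and variance of the product against $\mu_{x}\mu_{y}$ and $\mu_{x}^{2}\sigma_{y}^{2}+\mu_{y}^{2}\sigma_{x}^{2}$.

```python
import numpy as np

rng = np.random.default_rng(6)
mu_x, mu_y, sx, sy = 3.0, 2.5, 0.3, 0.2   # high mean-to-sigma ratios (illustrative)
n = 1_000_000

x = rng.normal(mu_x, sx, n)
y = rng.normal(mu_y, sy, n)
z = x * y                                 # product of two Gaussian variables

mean_approx = mu_x * mu_y
var_approx = mu_x**2 * sy**2 + mu_y**2 * sx**2
# The exact variance carries an extra sx^2 * sy^2 term, which is
# negligible when mu_x/sx and mu_y/sy are large, as assumed here.
```

The approximation degrades at low SNR, where the dropped $\sigma_{x}^{2}\sigma_{y}^{2}$ term and the non-Gaussian tails of the product become noticeable.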
Consequently, $$P_{C}|_{\mathbf{H}_{0},\mathbf{1}}=\prod_{v=0}^{\mathcal{K}-2}\Pr\left(r_{v,\acute{v}}^{\mathrm{SP}}>0\right)=\prod_{v=0}^{\mathcal{K}-2}\left[1-Q\left(\sqrt{\frac{2\bar{\mu}_{\mathrm{SP}}}{\bar{\sigma}_{\mathrm{SP}}^{2}}}\right)\right]$$ and $$P_{S}|_{\mathbf{H}_{0},\mathbf{1}}=1-\prod_{v=0}^{\mathcal{K}-2}\left[1-Q\left(\sqrt{\frac{2\bar{\mu}_{\mathrm{SP}}}{\bar{\sigma}_{\mathrm{SP}}^{2}}}\right)\right]\label{eq:SEP}$$ where $Q\left(x\right)\triangleq\frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}\exp\left(-\frac{t^{2}}{2}\right)dt$. Since $H_{v}^{I}$ and $H_{v}^{Q}$ are independent, the condition on $\mathbf{H}_{0}$ in can be removed by averaging $P_{S}$ over the PDF of $\mathbf{H}_{0}^{I}$ and $\mathbf{H}_{0}^{Q}$ as, $$\begin{gathered} \mathrm{SEP}\mid_{\mathbf{d}=1}=\underbrace{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\dots\int_{-\infty}^{\infty}}_{2\mathcal{K}\text{ fold}}\mathrm{SEP}\mid_{\mathbf{H}_{0},\mathbf{d}=1}f_{\mathbf{H}_{0}^{I}}\left(H_{0}^{I},H_{1}^{I},\dots,H_{\mathcal{K}-1}^{I}\right)\times\\ f_{\mathbf{H}_{0}^{Q}}\left(H_{0}^{Q},H_{1}^{Q},\dots,H_{\mathcal{K}-1}^{Q}\right)dH_{0}^{I}dH_{1}^{I}\dots dH_{\mathcal{K}-1}^{I}dH_{0}^{Q}dH_{1}^{Q}\dots dH_{\mathcal{K}-1}^{Q}\text{.}\label{eq:unconditional-SER}\end{gathered}$$ Because the random variables $H_{i}^{I}$ and $H_{i}^{Q}$ $\forall i$ in are real and Gaussian, their PDFs are multivariate Gaussian distributions [@Proakis-Book-2001], $$f_{\mathbf{X}}\left(X_{0},X_{1},\dots,X_{\mathcal{K}-1}\right)=\frac{\exp\left(-\frac{1}{2}(\mathbf{X}-\boldsymbol{\mu})^{\mathrm{T}}\boldsymbol{\Sigma}^{-1}(\mathbf{X}-\boldsymbol{\mu})\right)}{\sqrt{(2\pi)^{\mathcal{K}}|\boldsymbol{\Sigma}|}}\label{eq:multi-variate-gaussian}$$ where $\boldsymbol{\mu}$ is the mean vector, defined as, $$\boldsymbol{\mu}=\mathrm{E}\left[\mathbf{X}\right]=\left[\mathrm{E}\left[X_{0}\right],\mathrm{E}\left[X_{1}\right],\dots,\mathrm{E}\left[X_{\mathcal{K}-1}\right]\right]^{T}$$ and $\boldsymbol{\Sigma}$ is
the covariance matrix, $\boldsymbol{\Sigma}=\mathrm{E}\left[\left(\mathbf{X}-\boldsymbol{\mu}\right)\left(\mathbf{X}-\boldsymbol{\mu}\right)^{T}\right].$ Due to the difficulty of evaluating the $2\mathcal{K}$-fold integral, we consider the special case of flat fading, which implies that $H_{v}=H_{\acute{v}}\triangleq H$ and $\left(H^{I}\right)^{2}+\left(H^{Q}\right)^{2}\triangleq\alpha^{2}$, where $\alpha$ is the channel fading envelope, $\alpha=\left\vert H\right\vert $. Therefore, the SEP expression in becomes, $$P_{S}|_{\alpha,\mathbf{1}}=1-\left[1-Q\left(\sqrt{\frac{\alpha^{2}}{\sigma_{w}^{2}\left(\alpha^{2}+\sigma_{w}^{2}\right)}}\right)\right]^{\mathcal{K}-1}.\label{eq:SEP-conditional-general}$$ Recalling the Binomial Theorem, $$\left(a+b\right)^{n}=\sum_{v=0}^{n}\binom{n}{v}a^{n-v}b^{v}\text{, }\binom{n}{v}\triangleq\frac{n!}{\left(n-v\right)!v!}\label{eq:binomial-theorem}$$ the SEP formula in can be written as, $$P_{S}|_{\alpha,\mathbf{1}}=1-\sum_{v=0}^{\mathcal{K}-1}\binom{\mathcal{K}-1}{v}\left(-1\right)^{v}\left[Q\left(\sqrt{\frac{\alpha^{2}}{\sigma_{w}^{2}\left(\alpha^{2}+\sigma_{w}^{2}\right)}}\right)\right]^{v}.\label{eq:sep_cond_higher_k}$$ The conditioning on $\alpha$ can be removed by averaging over the PDF of $\alpha$, which is Rayleigh, $$f\left(\alpha\right)=\frac{\alpha}{\sigma_{H}^{2}}e^{-\frac{\alpha^{2}}{2\sigma_{H}^{2}}}.\label{eq:rayleigh-pdf}$$ Hence, $$P_{S}|_{\mathbf{1}}=\int_{0}^{\infty}P_{S}|_{\alpha,\mathbf{1}}\text{ }f\left(\alpha\right)d\alpha.\label{E-Averaging}$$ Because the expression in contains high powers of the $Q$-function, $Q^{n}\left(x\right)$, evaluating the integral analytically becomes intractable for $\mathcal{K}>2$.
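The power form and the binomial-expanded form of the conditional SEP are algebraically identical; the short sketch below (hypothetical parameter values) verifies this numerically:

```python
import math
from math import comb

def Q(x):
    # Gaussian tail probability via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2))

def sep_power_form(alpha, s2, K):
    # Conditional SEP written as 1 - [1 - Q(.)]^(K-1).
    q = Q(math.sqrt(alpha**2 / (s2 * (alpha**2 + s2))))
    return 1 - (1 - q) ** (K - 1)

def sep_binomial_form(alpha, s2, K):
    # The same SEP after the Binomial Theorem expansion.
    q = Q(math.sqrt(alpha**2 / (s2 * (alpha**2 + s2))))
    return 1 - sum(comb(K - 1, v) * (-1) ** v * q**v for v in range(K))

for K in (2, 3, 7):
    for alpha in (0.1, 1.0, 3.0):
        assert abs(sep_power_form(alpha, 0.1, K) - sep_binomial_form(alpha, 0.1, K)) < 1e-12
```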
For the special case of $\mathcal{K}=2$, $P_{S}$ can be evaluated by substituting (\[eq:sep\_cond\_higher\_k\]) and (\[eq:rayleigh-pdf\]) into (\[E-Averaging\]); evaluating the integral yields the simple expression $$P_{S}|_{\mathbf{1}}=\frac{1}{2\left(\bar{\gamma}_{s}+1\right)}\text{, \ \ }\bar{\gamma}_{s}\triangleq\frac{\mathrm{E}\left[\left\vert d_{v}\right\vert ^{2}\right]\mathrm{E}\left[\left\vert H\right\vert ^{2}\right]}{2\sigma_{w}^{2}}\label{E-Pe_K2}$$ where $\bar{\gamma}_{s}$ is the average signal-to-noise ratio (SNR). Moreover, because all data sequences have an equal probability of error, $P_{S}|_{\mathbf{1}}=P_{S}$, which is also equivalent to the bit error rate (BER). It is interesting to note that (\[E-Pe\_K2\]) is similar to the BER of differential binary phase shift keying (DBPSK) [@Proakis-Book-2001]. However, the two techniques are essentially different, as $D^{3}$ does not require differential encoding, has no constraints on the shape of the signal constellation, and performs well even in frequency-selective fading channels. To evaluate $P_{S}$ for $\mathcal{K}>2$, we use an approximation for $Q\left(x\right)$ from [@Q-Func-Approx-02], which is given by $$Q\left(x\right)\approx\frac{1}{\sqrt{2\pi\left(x^{2}+1\right)}}e^{-\frac{1}{2}x^{2}},\text{ }x\in\lbrack0,\infty).\label{eq:Q-func-Approx}$$ Therefore, by substituting into the conditional SEP and averaging over the Rayleigh PDF , the evaluation of the SEP becomes straightforward. For example, evaluating the integral for $\mathcal{K}=3$ gives, $$P_{S}|_{\mathbf{1}}=\frac{\zeta_{1}}{\pi}\mathit{\mathrm{Ei}}\left(1,\zeta_{1}+1\right)e^{\zeta_{1}+1}\text{, \ \ }\zeta_{1}\triangleq\frac{1}{2\bar{\gamma}_{s}}\left(\frac{1}{\bar{\gamma}_{s}}+1\right)$$ where $\mathrm{Ei}\left(1,x\right)$ is the exponential integral (EI), $\mathrm{Ei}\left(1,x\right)\triangleq\int_{1}^{\infty}\frac{e^{-xt}}{t}dt$.
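The $\mathcal{K}=2$ closed form can be checked by direct Monte Carlo simulation of the two-symbol detector over a flat Rayleigh channel. The sketch below assumes the normalization $\mathrm{E}[|H|^{2}]=1$ with $\sigma_{w}^{2}$ the variance per real noise component, so that $\bar{\gamma}_{s}=1/(2\sigma_{w}^{2})$, and uses the conjugated correlation $\Re\{r_{0}r_{1}^{*}\}$, consistent with the mean $|H|^{2}$ in the analysis:

```python
import numpy as np

# Monte Carlo check of P_S = 1/(2(gamma_s + 1)) for K = 2 over a flat
# Rayleigh channel (BPSK, pilot d_0 = 1, data symbol d_1 = +/-1).
rng = np.random.default_rng(6)
n, gbar = 400_000, 5.0
s_w = np.sqrt(1 / (2 * gbar))               # per-component noise std
H = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
d1 = rng.choice([-1, 1], size=n)
w0 = s_w * (rng.normal(size=n) + 1j * rng.normal(size=n))
w1 = s_w * (rng.normal(size=n) + 1j * rng.normal(size=n))
r0, r1 = H + w0, H * d1 + w1                # pilot and data observations

d1_hat = np.sign((np.conj(r0) * r1).real)   # conjugated pairwise correlation
p_mc = np.mean(d1_hat != d1)
assert abs(p_mc - 1 / (2 * (gbar + 1))) < 4e-3
```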
Similarly, $P_{S}$ for $\mathcal{K}=7$ evaluates to $$P_{S}|_{\mathbf{1}}=\frac{\zeta_{2}}{64\pi^{3}}\left[e^{\zeta_{2}+3}\left(2\zeta_{2}+6\right)^{2}\text{ }\mathit{\mathrm{Ei}}\left(1,\zeta_{2}+3\right)-4\left(\zeta_{2}+1\right)\right]\text{, \ }\zeta_{2}\triangleq\frac{1}{2\bar{\gamma}_{s}}\left(\frac{1}{4\bar{\gamma}_{s}}+1\right).$$ Although the SEP is a very useful indicator of the system error probability performance, the BER is actually more informative. For a sequence that contains $\mathcal{K}_{D}$ information bits, the BER can be expressed as $P_{B}=\frac{1}{\Lambda}P_{S}$, where $\Lambda$ denotes the average number of bit errors given a sequence error, which can be defined as $$\Lambda=\sum_{m=1}^{\mathcal{K}_{D}}m\Pr\left(m\right).$$ Because the SEP is independent of the transmitted data sequence, then, without loss of generality, we assume that the transmitted data sequence is $\mathbf{d}_{0}^{(0)}$. Therefore, $$\Lambda=\sum_{m=1}^{\mathcal{K}_{D}}m\Pr\left(\left\Vert \mathbf{\hat{d}}_{0}\right\Vert ^{2}=m\right)$$ where $\left\Vert \mathbf{\hat{d}}_{0}\right\Vert ^{2}$, in this case, corresponds to the Hamming weight of the detected sequence $\mathbf{\hat{d}}_{0}$, which can be expressed as $$\Pr\left(\left\Vert \mathbf{\hat{d}}_{0}\right\Vert ^{2}=m\right)=\Pr\left(\mathbf{d}_{0}^{(0)}\rightarrow\bigcup\limits _{i}\mathbf{d}_{0}^{(i)}\right)\text{, }\left\Vert \mathbf{d}_{0}^{(i)}\right\Vert ^{2}=m$$ where $\mathbf{d}_{0}^{(0)}\rightarrow\mathbf{d}_{0}^{(i)}$ denotes the pairwise error probability (PEP). By noting that $\Pr\left(\mathbf{d}_{0}^{(0)}\rightarrow\mathbf{d}_{0}^{(i)}\right)\neq\Pr\left(\mathbf{d}_{0}^{(0)}\rightarrow\mathbf{d}_{0}^{(j)}\right)$ $\forall i\neq j$, deriving the PEP for all cases of interest is intractable. As an alternative, a simple approximation is derived.
For a sequence that consists of $\mathcal{K}_{D}$ information bits, the BER is bounded by $$\frac{1}{\mathcal{K}_{D}}P_{S}\leq P_{B}\leq P_{S}\text{.}\label{E-Bounds}$$ In practical systems, the number of bits in the detected sequence is generally not large, which implies that the upper and lower bounds in (\[E-Bounds\]) are relatively tight. Hence, the BER can be approximated by taking the effective divisor to be the midpoint between $1$ and $\mathcal{K}_{D}$, $$P_{B}\approx\frac{P_{S}}{0.5\left(1+\mathcal{K}_{D}\right)}.\label{E_PB}$$ The analysis of the general $1\times\mathcal{N}$ SIMO system is a straightforward extension of the single-input single-output (SISO) case. To simplify the analysis, we consider the flat channel case, where the conditional SEP can be written as, $$P_{S}|_{\mathbf{\alpha}}=1-\left[1-Q\left(\sqrt{\frac{\sum_{i=1}^{\mathcal{N}}\alpha_{i}^{2}}{\sigma_{w}^{2}\left(\mathcal{N}\sigma_{w}^{2}+\sum_{i=1}^{\mathcal{N}}\alpha_{i}^{2}\right)}}\right)\right]^{\mathcal{K}-1}.$$ Given that all the receiving branches are independent, the fading envelopes have Rayleigh distribution $\alpha_{i}\sim\mathcal{R}\left(2\sigma_{H}^{2}\right)$ $\forall i$, and thus $\sum_{i=1}^{\mathcal{N}}\alpha_{i}^{2}\triangleq a$ has a Gamma distribution, $a\sim\mathcal{G}\left(\mathcal{N},2\sigma_{H}^{2}\right)$, $$f\left(a\right)=\frac{a^{\mathcal{N}-1}}{\Gamma\left(\mathcal{N}\right)\left(2\sigma_{H}^{2}\right)^{\mathcal{N}}}e^{-\frac{a}{2\sigma_{H}^{2}}}.$$ Therefore, the unconditional SEP can be evaluated as, $$P_{S}=\int_{0}^{\infty}P_{S}|_{\mathbf{\alpha}}\text{ }f\left(a\right)da.$$ For the special case of $\mathcal{N}=2$, $\mathcal{K}=2$, $P_{S}$ can be evaluated as, $$P_{S}=\frac{1}{2}+Q\left(\frac{\varkappa}{\sqrt{\bar{\gamma}_{s}}}\right)\left[2\bar{\gamma}_{s}\left(\frac{\bar{\gamma}_{s}}{\sqrt{2}}+2\right)-e^{\varkappa^{2}}\right]-\bar{\gamma}_{s}\frac{\varkappa}{\sqrt{2\pi}}$$ where $\varkappa\triangleq\sqrt{2+\bar{\gamma}_{s}}.$ Computing the closed-form formulas for other values
of $\mathcal{N}$ and $\mathcal{K}$ follows the same approach used in the SISO case.

Double-Sided Pilot \[subsec:Double-Sided-Pilot\]
------------------------------------------------

Embedding more pilots in the detection segment can improve the detector’s performance. Consequently, it is worth investigating the effect of embedding more pilots on the SEP. More specifically, we consider a double-sided segment, $\tilde{d}_{0}=1$, $\tilde{d}_{\mathcal{K}-1}=1$, as illustrated in Fig. \[fig:Double-sided-pilot\]. In this case, the detector can be expressed as, $$\hat{\mathbf{d}_{0}}=\arg\max_{\tilde{\mathbf{d}}_{0}}\frac{\Re\left\{ r_{0}r_{1}\right\} }{\tilde{d}_{1}}+\frac{\Re\left\{ r_{\mathcal{K}-2}r_{\mathcal{K}-1}\right\} }{\tilde{d}_{\mathcal{K}-2}}+\sum_{v=1}^{\mathcal{K-}3}\frac{\Re\left\{ r_{v}r_{\acute{v}}\right\} }{\tilde{d_{v}}\tilde{d}_{\acute{v}}},\text{\thinspace\thinspace\thinspace\thinspace\ensuremath{\mathcal{K}\in\left\{ 3,4,\dots,N-1\right\} .}}\label{eq:d_hat_double_sided}$$ From the definition in , the probability of receiving the correct sequence can be derived based on the reduced number of trials as compared to .
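As with the single-sided case, the double-sided rule admits a minimal brute-force sketch for BPSK (illustrative code, not the paper's implementation; the pairwise correlation is again written with a conjugated product so that the noiseless metric is positive):

```python
import itertools
import numpy as np

def d3_detect_ds(r):
    # Brute-force double-sided D^3 detector for BPSK: pilots fixed at
    # both ends (d_0 = d_{K-1} = 1), trial data symbols in between.
    K = len(r)
    pair = [(r[v] * np.conj(r[v + 1])).real for v in range(K - 1)]
    trials = [(1,) + t + (1,) for t in itertools.product([1, -1], repeat=K - 2)]
    metric = lambda d: sum(pair[v] / (d[v] * d[v + 1]) for v in range(K - 1))
    return np.array(max(trials, key=metric))

# Noiseless sanity check over a flat channel.
H = 0.7 - 0.4j
d = np.array([1, -1, 1, 1])                 # data framed by pilots
assert np.array_equal(d3_detect_ds(H * d), d)
```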
Therefore, $$\begin{gathered} P_{C}|_{\mathbf{H}_{0},\mathbf{1}}=\Pr\Big(\left\{ \Re\left\{ r_{0}r_{1}\right\} +\Re\left\{ r_{\mathcal{K}-2}r_{\mathcal{K}-1}\right\} >0\right\} \cap\\ \left\{ \Re\left\{ r_{1}r_{2}\right\} >0\right\} \cap\left\{ \Re\left\{ r_{2}r_{3}\right\} >0\right\} \cap\dots\cap\left\{ \Re\left\{ r_{\mathcal{K}-3}r_{\mathcal{K}-2}\right\} >0\right\} \Big)\label{eq:pc-double-sided}\end{gathered}$$ which, similar to the single-sided case, can be written as, $$P_{C}|_{\mathbf{H}_{0},\mathbf{1}}=\Pr\left(\Re\left\{ r_{0}r_{1}\right\} +\Re\left\{ r_{\mathcal{K}-2}r_{\mathcal{K}-1}\right\} >0\right)\prod_{v=1}^{\mathcal{K}-3}\Pr\left(\Re\left\{ r_{v}r_{\acute{v}}\right\} >0\right).$$ Therefore, $$P_{S}|_{\mathbf{H}_{0},\mathbf{1}}=1-\left[1-Q\left(\sqrt{\frac{2\sqrt{2}\bar{\mu}_{\mathrm{SP}}}{\bar{\sigma}_{\mathrm{SP}}^{2}}}\right)\right]\times\prod_{v=1}^{\mathcal{K}-3}\left[1-Q\left(\sqrt{\frac{2\bar{\mu}_{\mathrm{SP}}}{\bar{\sigma}_{\mathrm{SP}}^{2}}}\right)\right].\label{eq:SEP-1}$$ For flat fading channels, the SEP expression in can be simplified by following the same procedure as in Subsection \[subsec:Single-Sided-Pilot\]; for the special case of $\mathcal{K}=3$, the SEP becomes, $$P_{S}=\left(\frac{\Upsilon}{2}-\sqrt{2}\right)\frac{1}{\Upsilon}\text{, \ }\Upsilon\triangleq\sqrt{8\bar{\gamma}_{s}+\sqrt{2}\left(4+\frac{1}{\bar{\gamma}_{s}}\right)}.$$ For $\mathcal{K}>3$, the approximation of $Q^{n}\left(x\right)$, as illustrated in Subsection \[subsec:Single-Sided-Pilot\], can be used in to average over the PDF in .
For example, the case $\mathcal{K}=4$ can be evaluated as, $$P_{S}=\frac{1}{8\pi\bar{\gamma}_{s}}\left(\Omega_{1}-1\right)e^{\Omega_{1}}\mathit{\mathrm{Ei}}\left(1,\Omega_{1}\right)\text{, \ }\Omega_{1}\triangleq1+\frac{\sqrt{2}}{4\bar{\gamma}_{s}}\left(1+\frac{1}{4\bar{\gamma}_{s}}\right).$$ For $\mathcal{K}=6$, $$P_{S}=\frac{\Omega_{1}-1}{4\pi^{2}}\left[1-\left[\left(\Omega_{1}-1\right)e^{\Omega_{2}}+2\right]\mathit{\mathrm{Ei}}\left(1,\Omega_{2}\right)\right]\text{, \ }\Omega_{2}\triangleq2+\frac{\sqrt{2}}{\bar{\gamma}_{s}}\left(8+\frac{1}{32\bar{\gamma}_{s}}\right).$$ For the double-sided pilot, $P_{B}=P_{S}$ for the case of $\mathcal{K}=3$, while it can be computed using (\[E\_PB\]) for $\mathcal{K}>3$.

Complexity Analysis\[sec:Complexity-Analysis\]
==============================================

The computational complexity is evaluated as the total number of primitive operations needed to perform the detection. The operations considered are the numbers of real additions ($R_{A}$), real multiplications ($R_{M}$), and real divisions ($R_{D}$) required to produce the set of detected symbols $\hat{\mathbf{d}}$ for each technique. It is worth noting that one complex multiplication ($C_{M}$) is equivalent to four $R_{M}$ and three $R_{A}$ operations, while one complex addition ($C_{A}$) requires two $R_{A}$. To simplify the analysis, we first assume that a constant modulus (CM) constellation such as MPSK is used; we then evaluate the complexity for higher-order modulation such as quadrature amplitude modulation (QAM).

Complexity of Conventional OFDM Detectors\[subsec:Complexity-of-Conventional\]
------------------------------------------------------------------------------

The conventional OFDM receiver consists of the following main steps, with the corresponding computational complexities:

1. Channel estimation of the pilot symbols, which computes $\hat{H}_{k}$ at all pilot subcarriers.
Assuming that the pilot symbol $d_{k}$ is selected from a CM constellation, then $\hat{H}_{k}=r_{k}d_{k}^{*}$ and hence, $N_{P}$ complex multiplications are required. Therefore, $R_{A}^{\left(1\right)}=3N_{P}$ and $R_{M}^{\left(1\right)}=4N_{P}$.

2. Interpolation, which is used to estimate the channel at the non-pilot subcarriers. The complexity of the interpolation process depends on the interpolation algorithm used. For comparison purposes, we assume that linear interpolation is used, which is the least complex interpolation algorithm. Linear interpolation requires one complex multiplication and two complex additions per interpolated sample. Therefore, the number of complex multiplications required is $N-N_{P}$ and the number of complex additions is $2\left(N-N_{P}\right)$. Hence, $R_{A}^{\left(2\right)}=7\left(N-N_{P}\right)$ and $R_{M}^{\left(2\right)}=4\left(N-N_{P}\right)$.

3. Equalization, where a single-tap equalizer requires $N-N_{P}$ complex divisions to compute the decision variables $\check{r}_{k}=\frac{r_{k}}{\hat{H}_{k}}=r_{k}\frac{\hat{H}_{k}^{*}}{\left|\hat{H}_{k}^{*}\right|^{2}}$, and one complex division requires two complex multiplications and one real division. Therefore, $R_{A}^{\left(3\right)}=6\left(N-N_{P}\right)$, $R_{M}^{\left(3\right)}=8\left(N-N_{P}\right)$ and $R_{D}^{\left(3\right)}=\left(N-N_{P}\right)$.

4. Detection, assuming symbol-by-symbol minimum distance detection, where the detector can be expressed as $\hat{d}_{k}=\arg\min_{\tilde{d}_{i}}J\left(\tilde{d}_{i}\right),\,\,\forall i\in\left\{ 0,1,\dots,M-1\right\} $ where $J\left(\tilde{d}_{i}\right)=\left|\check{r}_{k}-\tilde{d}_{i}\right|^{2}$. Assuming CM modulation is used, expanding the cost function and dropping the constant terms, we can write $J\left(\tilde{d}_{k}\right)=-\check{r}_{k}\tilde{d}_{k}^{*}-\check{r}_{k}^{*}\tilde{d}_{k}$.
We can also drop the minus sign from the cost function, turning the objective into the maximization $\hat{d}_{k}=\arg\max_{\tilde{d}_{i}}J\left(\tilde{d}_{i}\right)$. Since the two terms form a complex conjugate pair, $\check{r}_{k}\tilde{d}_{k}^{*}+\check{r}_{k}^{*}\tilde{d}_{k}=2\Re\left\{ \check{r}_{k}\tilde{d}_{k}^{*}\right\} $, and thus we can write the detected symbols as, $$\hat{d}_{k}=\arg\max_{\tilde{d}_{k}}\left(\Re\left\{ \check{r}_{k}\right\} \Re\left\{ \tilde{d}_{k}^{*}\right\} -\Im\left\{ \check{r}_{k}\right\} \Im\left\{ \tilde{d}_{k}^{*}\right\} \right).$$ Therefore, the number of real multiplications required for each information symbol is $2M$, and the number of additions is $M$. Hence, $R_{A}^{\left(4\right)}=\left(N-N_{P}\right)M$ and $R_{M}^{\left(4\right)}=2\left(N-N_{P}\right)M$. Finally, the total computational complexity per OFDM symbol can be obtained by adding the complexities of the individual steps $1\rightarrow4$, as: $$\begin{aligned} R_{A}^{CM} & ={\displaystyle \sum_{i=1}^{4}R_{A}^{\left(i\right)}=\left(13+M\right)N-\left(10+M\right)N_{P}}\\ R_{M}^{CM} & =\sum_{i=1}^{4}R_{M}^{\left(i\right)}=2N\left(6+M\right)-2N_{P}\left(4+M\right)\\ R_{D}^{CM} & =\sum_{i=1}^{4}R_{D}^{\left(i\right)}=N-N_{P}.\end{aligned}$$

Complexity of the $D^{3}$
-------------------------

The complexity of the $D^{3}$ based on the VA is mostly determined by the branch and path metric calculations. The branch metrics can be computed as $$J_{m,n}^{c}=\frac{\left\vert r_{c}\right\vert ^{2}}{\left\vert \tilde{d}_{m}\right\vert ^{2}}-\frac{r_{c}r_{\acute{c}}^{\ast}}{\tilde{d}_{m}\tilde{d}_{n}^{\ast}}-\frac{r_{c}^{\ast}r_{\acute{c}}}{\tilde{d}_{m}^{\ast}\tilde{d}_{n}}+\frac{\left\vert r_{\acute{c}}\right\vert ^{2}}{\left\vert \tilde{d}_{n}\right\vert ^{2}}.$$ For CM constellations, the first and last terms are constants, and hence, can be dropped.
Therefore, $$J_{m,n}^{c}=-\frac{r_{c}r_{\acute{c}}^{\ast}}{\tilde{d}_{m}\tilde{d}_{n}^{\ast}}-\frac{r_{c}^{\ast}r_{\acute{c}}}{\tilde{d}_{m}^{\ast}\tilde{d}_{n}}.\label{eq:branch-metric-viterbi}$$ By noting that the two terms in are a complex conjugate pair, $$J_{m,n}^{c}=-2\Re\left\{ \frac{r_{c}r_{\acute{c}}^{\ast}}{\tilde{d}_{m}\tilde{d}_{n}^{\ast}}\right\} .\label{eq:branch-metric-viterbi-02}$$ From the expression in , the constant “$-2$” can be dropped from the cost function; the problem is then turned into a maximization problem. Therefore, by expanding , we get, $$J_{m,n}^{c}=\Re\left\{ \frac{\Re\left\{ r_{c}\right\} \Re\left\{ r_{\acute{c}}^{\ast}\right\} -\Im\left\{ r_{c}\right\} \Im\left\{ r_{\acute{c}}^{\ast}\right\} +j\left[\Re\left\{ r_{c}\right\} \Im\left\{ r_{\acute{c}}^{\ast}\right\} +\Im\left\{ r_{c}\right\} \Re\left\{ r_{\acute{c}}^{\ast}\right\} \right]}{\Re\left\{ \tilde{d}_{m}\tilde{d}_{n}^{\ast}\right\} +j\Im\left\{ \tilde{d}_{m}\tilde{d}_{n}^{\ast}\right\} }\right\} .\label{eq:branch-metric-03}$$ By defining $\tilde{d}_{m}\tilde{d}_{n}^{\ast}\triangleq\tilde{u}_{m,n}$, and using complex-number identities, we get, $$J_{m,n}^{c}=\frac{\left[\Re\left\{ r_{c}\right\} \Re\left\{ r_{\acute{c}}^{\ast}\right\} -\Im\left\{ r_{c}\right\} \Im\left\{ r_{\acute{c}}^{\ast}\right\} \right]\Re\left\{ \tilde{u}_{m,n}\right\} +\left[\Re\left\{ r_{c}\right\} \Im\left\{ r_{\acute{c}}^{\ast}\right\} +\Im\left\{ r_{c}\right\} \Re\left\{ r_{\acute{c}}^{\ast}\right\} \right]\Im\left\{ \tilde{u}_{m,n}\right\} }{\Re\left\{ \tilde{u}_{m,n}\right\} ^{2}+\Im\left\{ \tilde{u}_{m,n}\right\} ^{2}}.\label{eq:branch-metric-04}$$ For CM, $\Re\left\{ \tilde{u}_{m,n}\right\} ^{2}+\Im\left\{ \tilde{u}_{m,n}\right\} ^{2}$ is constant, and hence, it can be dropped from the cost function, which implies that no division operations are required.
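The real-arithmetic expansion can be validated against a direct complex evaluation of $\Re\{r_{c}r_{\acute{c}}^{\ast}/(\tilde{d}_{m}\tilde{d}_{n}^{\ast})\}$; the sketch below uses random unit-modulus symbols (illustrative code):

```python
import numpy as np

# Validate the real-arithmetic expansion of Re{ r_c r_c'^* / (d_m d_n^*) }
# against the direct complex evaluation (illustrative sketch).
rng = np.random.default_rng(4)
for _ in range(1000):
    rc = complex(rng.normal(), rng.normal())
    rcp = complex(rng.normal(), rng.normal())
    dm, dn = np.exp(2j * np.pi * rng.random(2))      # unit-modulus symbols
    u = dm * np.conj(dn)                             # u_{m,n} = d_m d_n^*
    w = np.conj(rcp)
    A = rc.real * w.real - rc.imag * w.imag          # Re{r_c r_c'^*}
    B = rc.real * w.imag + rc.imag * w.real          # Im{r_c r_c'^*}
    expanded = (A * u.real + B * u.imag) / (u.real**2 + u.imag**2)
    direct = (rc * w / u).real
    assert abs(expanded - direct) < 1e-12
```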
To compute $J_{m,n}^{c}$, it is worth noting that the two bracketed terms are independent of $\left\{ m,n\right\} $, and hence, they are computed only once for each value of $c$. Therefore, the complexity at each step in the trellis can be computed as $R_{A}=3\times2^{M}$, $R_{M}=4+2\times2^{M}$ and $R_{D}=0$, where $2^{M}$ is the number of branches at each step in the trellis. However, if the trellis starts or ends with a pilot, then only $M$ computations are required. By noting that the number of full steps is $N-2N_{P}-1$, and the number of steps that require $M$ computations is $2\left(N_{P}-1\right)$, the total computations of the branch metrics (BM) are: $$\begin{aligned} R_{A}^{BM} & =\left(3\times2^{M}\right)\left(N-2N_{P}-1\right)+2\left(3\times M\right)\left(N_{P}-1\right)\\ R_{M}^{BM} & =\left(4+2^{M+1}\right)\left(N-2N_{P}-1\right)+2\left(N_{P}-1\right)\left(4+2M\right)\\ R_{D}^{BM} & =0\end{aligned}$$ The path metrics (PM) require $R_{A}^{PM}=\left(N-2N_{P}-1\right)+M\left(N_{P}-1\right)$ real additions.
Therefore, the total complexity is: $$\begin{aligned} R_{A}^{CM} & =\left(N-2N_{P}-1\right)\left(5\times2^{M}\right)+7M\left(N_{P}-1\right)\\ R_{M}^{CM} & =\left(N-2N_{P}-1\right)\left(4+2^{M+1}\right)+2\left(N_{P}-1\right)\left(4+2M\right)\\ R_{D}^{CM} & =0\end{aligned}$$

| $N$ | $128$ | $256$ | $512$ | $1024$ | $2048$ |
|---|---|---|---|---|---|
| $\eta_{R_{A}}$ | $0.58$ | $1.07$ | $1.21$ | $1.27$ | $1.31$ |
| $\eta_{R_{M}}$ | $0.77$ | $0.72$ | $0.68$ | $0.64$ | $0.61$ |
| $R_{D}$ | $96$ | $192$ | $384$ | $768$ | $1536$ |
| $\eta_{P}$ | $0.20$ | $0.21$ | $0.22$ | $0.26$ | $0.31$ |

: Computational complexity comparison using different values of $N$, $N_{P}=N/4$, for BPSK.\[tab:Computational-power-analysis\]

To compare the complexity of the $D^{3}$, we use the conventional detector with LS channel estimation, linear interpolation, zero-forcing (ZF) equalization, and MLD, denoted as coherent-L, as a benchmark due to its low complexity. The relative complexity is denoted by $\eta$, which corresponds to the ratio of the $D^{3}$ complexity to that of the conventional detector; i.e., $\eta_{R_{A}}$ denotes the ratio of real additions and $\eta_{R_{M}}$ the ratio of real multiplications. As depicted in Table \[tab:Computational-power-analysis\], $R_{A}$ for the $D^{3}$ is less than that of coherent-L only for BPSK with $N=128$, and it becomes larger for all the other considered values of $N$. For $R_{M}$, the $D^{3}$ always requires fewer multiplications than coherent-L, particularly for high values of $N$, where the ratio drops to $0.61$ for $N=2048$. It is worth noting that $R_{D}$ in the table corresponds to the number of divisions in the conventional OFDM receiver, since the $D^{3}$ does not require any division operations.
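The closed-form operation counts can be evaluated programmatically. The sketch below simply codes the derived totals (function names are illustrative; the table entries were produced with the authors' full setup and need not coincide with a naive evaluation of these expressions for every configuration):

```python
# Operation counts per OFDM symbol from the closed-form totals derived above.
def coherent_ops(N, NP, M):
    """Coherent-L receiver: returns (R_A, R_M, R_D)."""
    RA = (13 + M) * N - (10 + M) * NP
    RM = 2 * N * (6 + M) - 2 * NP * (4 + M)
    RD = N - NP
    return RA, RM, RD

def d3_ops(N, NP, M):
    """D^3 Viterbi detector: returns (R_A, R_M, R_D); no divisions needed."""
    RA = (N - 2 * NP - 1) * (5 * 2**M) + 7 * M * (NP - 1)
    RM = (N - 2 * NP - 1) * (4 + 2 ** (M + 1)) + 2 * (NP - 1) * (4 + 2 * M)
    return RA, RM, 0

# Example: BPSK (M = 1) with N_P = N/4, mirroring the table setup.
RA_c, RM_c, RD_c = coherent_ops(512, 128, 1)
RA_d, RM_d, RD_d = d3_ops(512, 128, 1)
eta_RA, eta_RM = RA_d / RA_c, RM_d / RM_c
```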
For a more informative comparison between the two systems, we use the computational power analysis presented in [@computational_power], where the total power for each detector is estimated based on the total number of operations. Table \[tab:Computational-power-analysis\] shows the relative computational power $\eta_{P}$, which indicates that the $D^{3}$ detector requires only $0.20$ of the power required by the coherent-L detector for $N=128$ and $0.31$ for $N=2048$. It is also worth noting that linear interpolation has lower complexity as compared to more accurate interpolation schemes such as spline interpolation [@spline-interpolation], [@Spline], which comes at the expense of the error rate performance. Therefore, the results presented in Table \[tab:Computational-power-analysis\] can generally be considered upper bounds on the relative complexity of the $D^{3}$; when more accurate interpolation schemes are used, the relative complexity drops even further.

Complexity with Error Correction Coding
---------------------------------------

To evaluate the impact of the complexity reduction of the $D^{3}$ in the presence of forward error correction (FEC) coding, convolutional codes are considered with soft and hard decision decoding using the VA. BPSK is the modulation considered for the complexity evaluation and the code rate is assumed to be $1/2$. For decoding of convolutional codes, the soft VA requires $n\times2^{K}$ additions/subtractions and multiplications per decoded bit, where $1/n$ is the code rate and $K$ is the constraint length [@P-Wu]. Therefore, for $1/2$ code rate, $R_{A}=R_{M}=2^{K+1}$. Given that each OFDM symbol has $N$ coded bits and $N/2$ information bits, the complexity per OFDM symbol becomes $R_{A}=R_{M}=N\times2^{K}$.
For the hard VA, $N\times2^{K}$ XOR operations are required for the branch metric computation, while $N\times2^{K-1}$ additions are required for the path metric computations. Because the XOR operation is a bit operation, its complexity is much lower than that of an addition. Assuming that additions use an 8-bit representation, the complexity of an addition operation is about eight times that of a XOR. Therefore, $R_{A}$, in this case, can be approximated as $N\left(2^{K}+2^{K-2}\right)$.

| $K$ | $3$ | $4$ | $5$ | $6$ | $7$ |
|---|---|---|---|---|---|
| Soft VA | $0.96$ | $0.97$ | $0.97$ | $0.98$ | $0.99$ |
| Hard VA | $0.24$ | $0.26$ | $0.28$ | $0.33$ | $0.41$ |

: Computational complexity comparison using hard and soft VA for different values of $K$, $N=2048$. \[T-coded\]

As can be noted from Table \[T-coded\], the complexity reduction when the soft VA is used is less significant as compared to the hard VA. Such a result is obtained because the soft VA requires the CSI to compute the reliability factors, which requires $N-N_{P}$ division operations when the $D^{3}$ is used. For hard decoding, the advantage of the $D^{3}$ is significant even for high constraint length values.

Numerical Results\[sec:Numerical-Results\]
==========================================

This section presents the performance of the $D^{3}$ detector in terms of BER for several operating scenarios. The system model follows the LTE-A physical layer (PHY) specifications [@LTE-A], where the adopted OFDM symbol has $N=512$, $N_{\mathrm{CP}}=64$, the sampling frequency is $f_{s}=7.68$ MHz, the subcarrier spacing is $\Delta f=15$ kHz, and the pilot grid follows that of Fig. \[fig:2D-to-1D\]. The total OFDM symbol period is $75$ $\mu$s, and the CP period is $4.69$ $\mu$s.
The channel models used are the flat Rayleigh fading channel and the typical urban (TUx) multipath fading model [@Typical; @Urban], which consists of $6$ taps with normalized delays of $\left[0,2,3,9,13,29\right]$ samples and average tap gains of $\left[0.2,0.398,0.2,0.1,0.063,0.039\right]$, corresponding to a severe frequency-selective channel. The TUx model is also used to model a moderate frequency-selective channel, where the number of taps is $9$ with normalized delays of $[0$, $1$, $\ldots$, $8]$ samples and average tap gains of $[0.269$, $0.174$, $0.289$, $0.117$, $0.023$, $0.058$, $0.036$, $0.026$, $0.008]$. The channel tap gains are assumed to be independent and Rayleigh distributed. The Monte Carlo simulation results included in this work are obtained by generating $10^{6}$ OFDM symbols per simulation run. Throughout this section, the ML coherent detector with perfect CSI is denoted as coherent, while the coherent detector with linear and spline interpolation is denoted as coherent-L and coherent-S, respectively. Moreover, the results are presented for the SISO system, $\mathcal{N}=1$, unless mentioned otherwise. Fig. \[fig:BER-Single-Double-Sided-Flat\] shows the BER of the single-sided (SS) and double-sided (DS) $D^{3}$ over flat fading channels for $\mathcal{K}=2,6$ and $3,7$, respectively, using BPSK. The number of data symbols is $\mathcal{K}_{D}=\mathcal{K}-1$ for the SS case and $\mathcal{K}_{D}=\mathcal{K}-2$ for the DS case, because there are pilot symbols at both ends of the data segment in the DS case. The results in the figure for the SS show that $\mathcal{K}$ has a noticeable impact on the BER, where the difference between the $\mathcal{K}=2$ and $6$ cases is about $1.6$ dB at a BER of $10^{-3}$. For the DS segment, the BER has the same trends as the SS except that it becomes closer to the coherent case, because having more pilots reduces the probability of sequence inversion due to the phase ambiguity problem.
The figure shows that the approximated and simulation results match very well for all cases, which confirms the accuracy of the derived approximations. The effect of the frequency selectivity is illustrated in Fig. \[fig:BER-SISO-D3-SS-6-taps\] for the SS and DS configurations using $\mathcal{K}_{D}=1$. As can be noted from the figure, frequency-selective channels introduce error floors at high SNRs, which is due to the difference between adjacent channel values caused by the channel frequency selectivity. Furthermore, the figure shows a close match between the simulation and the derived approximations. The approximation results are presented only for $\mathcal{K}=2$ because evaluating the BER for $\mathcal{K}>2$ becomes computationally prohibitive. For example, evaluating the integral for $\mathcal{K}=3$ requires solving a $6$-fold integral. The results for the frequency-selective channels are quite different from the flat fading cases. In particular, the BER performance drastically changes when the DS pilot segment is used. Moreover, the impact of the frequency selectivity is significant, particularly for the SS pilot case.

![BER of the SS and DS $D^{3}$ over flat fading channels using BPSK. \[fig:BER-Single-Double-Sided-Flat\] ](graphics/fig_05_ber_ss_ds_flat){width="0.4\paperwidth"}

![BER in frequency-selective channels using BPSK, $\mathcal{K}_{D}=1$ and $\mathcal{N}=1$. \[fig:BER-SISO-D3-SS-6-taps\] ](graphics/fig_07_ber_ss_ds_6_taps_9_taps_awgn){width="0.4\paperwidth"}

Fig. \[fig:BER-SIMO-D3-flat\] shows the BER of the $1\times2$ SIMO $D^{3}$ over flat fading channels for SS and DS pilot segments. It can be noted from the figure that the maximum ratio combiner (MRC) BER with perfect CSI outperforms the DS and SS systems by about $2$ and $3$ dB, respectively. Moreover, the figure shows that the MLSD [@Wu; @2010] and the $D^{3}$ have equivalent BER for the SISO and SIMO scenarios.
![BER of the $1\times2$ SIMO $D^{3}$ over flat fading channels with SS and DS pilot segments. \[fig:BER-SIMO-D3-flat\] ](graphics/fig_08_ber_ss_ds_flat_simo){width="0.4\paperwidth"}

![BER of the SISO $D^{3}$ and MLSD [@Wu; @2010] over the 6-taps frequency-selective channel using QPSK, $\mathcal{K}_{D}=1$, $\mathcal{N}=1$, $2.$\[fig:BER-SISO-D3-QPSK\] ](graphics/fig_10_ber_ds_6_taps_qpsk_siso_simo){width="0.4\paperwidth"}

Fig. \[fig:BER-SISO-D3-QPSK\] shows the BER of the SISO and $1\times2$ SIMO MLSD, coherent, coherent-S and coherent-L systems over frequency-selective channels. For both SISO and SIMO, the BER of all the considered techniques converges at low SNRs because the AWGN dominates the BER in the low SNR range. For moderate and high SNRs, the $D^{3}$ outperforms all the other considered techniques except the coherent detector, where the difference is about $3.5$ and $2.75$ dB at a BER of $10^{-3}$ for the SISO and SIMO systems, respectively.

![ ](graphics/fig_12_ber_comparasion){width="0.4\paperwidth"}

![ ](graphics/fig_11_ber_ds_6_taps_16qam_siso){width="0.4\paperwidth"}

![](graphics/fig_14_ber_d3_full_rb){width="0.4\paperwidth"}

![](graphics/fig_13_ber_d3_coded){width="0.4\paperwidth"}

Conclusion and Future Work\[sec:Conclusion\]
============================================

This work proposed a new receiver design for OFDM-based broadband communication systems. The new receiver performs the detection process directly from the FFT output symbols without the need for the conventional steps of channel estimation, interpolation, and equalization, which leads to a considerable complexity reduction. Moreover, the $D^{3}$ system can be deployed efficiently using the VA. The proposed system was analyzed theoretically, and simple closed-form expressions were derived for the BER in several cases of interest.
The analytical and simulation results show that the $D^{3}$ outperforms the coherent pilot-based receiver in terms of BER in various channel conditions, particularly in frequency-selective channels where the $D^{3}$ demonstrated high robustness. Although the $D^{3}$ may perform well even in severe fading conditions, it is crucial to evaluate its sensitivity to various practical imperfections. Thus, we will consider in our future work the performance of the $D^{3}$ in the presence of various system imperfections such as phase noise, synchronization errors and IQ imbalance. Moreover, we will evaluate the $D^{3}$ performance in mobile fading channels, where the channel variation may introduce intercarrier interference. Appendix I {#appendix-i .unnumbered} ========== Defining the events $E_{\psi,n}\triangleq\left\{ A_{\psi}>A_{n}\right\} $, $n\in\left\{ 0\text{, }1\text{, }\ldots,\psi-1\right\} $, we have $$P_{C}|_{\mathbf{H}_{0},\mathbf{\mathbf{1}}}=P\left(\bigcap\limits _{n=0}^{\psi-1}E_{\psi,n}\right).\label{E-PC-01}$$ Using the chain rule, $P_{C}|_{\mathbf{H}_{0},\mathbf{\mathbf{1}}}$ can be written as, $$P_{C}|_{\mathbf{H}_{0},\mathbf{\mathbf{1}}}=\Pr\left(\left.E_{\psi,\psi-1}\right\vert \bigcap\limits _{n=0}^{\psi-2}E_{\psi,n}\right)\Pr\left(\bigcap\limits _{n=0}^{\psi-2}E_{\psi,n}\right).$$ For $\mathcal{K}=2$, $\psi=1$, $\tilde{\mathbf{d}}_{0}^{(0)}=[1$, $-1]$, $\tilde{\mathbf{d}}_{0}^{(1)}=[1$, $1]$, and thus, $$\begin{aligned} P_{C}|_{\mathbf{H}_{0},\mathbf{\mathbf{1}}} & = & \Pr\left(E_{1,0}\right)\nonumber \\ & = & \Pr\left(\Re\left\{ r_{0}r_{1}\right\} >\Re\left\{ -r_{0}r_{1}\right\} \right)=\Pr\left(\Re\left\{ r_{0}r_{1}\right\} >0\right).\end{aligned}$$ For $\mathcal{K}=3$, $\psi=3$, $\tilde{\mathbf{d}}_{0}^{(0)}=[1$, $1$, $-1]$, $\tilde{\mathbf{d}}_{0}^{(1)}=[1$, $-1$, $-1]$, $\tilde{\mathbf{d}}_{0}^{(2)}=[1$, $-1$, $1]$ and $\tilde{\mathbf{d}}_{0}^{(3)}=[1$, $1$, $1]$. 
Using the chain rule $$\begin{aligned} P_{C}|_{\mathbf{H}_{0},\mathbf{\mathbf{1}}} & = & \Pr\left(E_{3,2}|E_{3,1}\text{, }E_{3,0}\right)\Pr\left(E_{3,1},E_{3,0}\right)\nonumber \\ & = & \Pr\left(E_{3,2}|E_{3,1}\text{, }E_{3,0}\right)\Pr\left(E_{3,1}|E_{3,0}\right)\Pr\left(E_{3,0}\right).\label{E-PrA3A0}\end{aligned}$$ However, $\Pr\left(E_{3,0}\right)=\Pr\left(A_{3}>A_{0}\right)$, and thus $$\begin{aligned} \Pr\left(E_{3,0}\right) & = & \Pr\left(\Re\left\{ r_{0}r_{1}+r_{1}r_{2}\right\} >\Re\left\{ r_{0}r_{1}-r_{1}r_{2}\right\} \right)\nonumber \\ & = & \Pr\left(\Re\left\{ r_{1}r_{2}\right\} >\Re\left\{ -r_{1}r_{2}\right\} \right)=\Pr\left(\Re\left\{ r_{1}r_{2}\right\} >0\right).\end{aligned}$$ The second term in (\[E-PrA3A0\]) can be evaluated by noting that the events $E_{3,1}$ and $E_{3,0}$ are independent. Therefore $\Pr\left(E_{3,1}|E_{3,0}\right)=\Pr\left(E_{3,1}\right)$, which can be computed as $$\begin{aligned} \Pr\left(E_{3,1}\right) & = & \Pr\left(\Re\left\{ r_{0}r_{1}+r_{1}r_{2}\right\} >\Re\left\{ -r_{0}r_{1}+r_{1}r_{2}\right\} \right)\nonumber \\ & = & \Pr\left(\Re\left\{ r_{0}r_{1}\right\} >\Re\left\{ -r_{0}r_{1}\right\} \right)=\Pr\left(\Re\left\{ r_{0}r_{1}\right\} >0\right).\end{aligned}$$ The first term in (\[E-PrA3A0\]) satisfies $\Pr\left(E_{3,2}|E_{3,1}\text{, }E_{3,0}\right)=1$ because if $A_{3}>\left\{ A_{1},A_{0}\right\} $, then $A_{3}>A_{2}$ as well. Consequently, $$P_{C}|_{\mathbf{H}_{0},\mathbf{\mathbf{1}}}=\Pr\left(\Re\left\{ r_{0}r_{1}\right\} >0\right)\Pr\left(\Re\left\{ r_{1}r_{2}\right\} >0\right).$$ By induction, it is straightforward to show that $P_{C}|_{\mathbf{H}_{0},\mathbf{\mathbf{1}}}$ can be written as, $$P_{C}|_{\mathbf{H}_{0},\mathbf{\mathbf{1}}}=\prod\limits _{n=0}^{\mathcal{K}-2}\Pr\left(\Re\left\{ r_{n}r_{\acute{n}}\right\} >0\right),$$ where $\acute{n}=n+1$. [10]{} IEEE Standard for Local and metropolitan area networks Part 16: Air Interface for Broadband Wireless Access Systems Amendment 3: Advanced Air Interface, IEEE Std. 802.16m, 2011. 
LTE; Evolved Universal Terrestrial Radio Access (E-UTRA), LTE physical layer, 3GPP TS 36.300, 2011. T. Hwang, C. Yang, G. Wu, S. Li, and G. Y. Li, OFDM and its wireless applications: A survey, IEEE Trans. Veh. Technol., vol. 58, no. 4, pp. 1673–1694, May 2009. D. Tsonev, *et al.*, “A 3-Gb/s single-LED OFDM-based wireless VLC link using a gallium nitride $\mu$LED,” *IEEE Photon. Technol. Lett*., vol. 26, no. 7, pp. 637-640, Apr. 2014. S. Dissanayake, J. Armstrong, “Comparison of ACO-OFDM, DCO-OFDM and ADO-OFDM in IM/DD systems,” *J. Lightw. Technol.*, vol. 31, no. 7, pp. 1063-1072, Apr. 2013. P. Guan et al., “5G field trials: OFDM-based waveforms and mixed numerologies,”* IEEE J. Sel. Areas Commun.*, vol. 35, no. 6, pp. 1234-1243, June 2017. Weile Zhang, Qinye Yin, Wenjie Wang, and Feifei Gao, “One-shot blind CFO and channel estimation for OFDM with multi-antenna receiver,” *IEEE Trans. Signal Process.*, vol. 62, no. 15, pp. 3799-3808, Aug. 2014. Song Noh, Youngchul Sung, Michael Zoltowski, “A new precoder design for blind channel estimation in MIMO-OFDM systems,”* IEEE Trans. Wireless Commun.*, vol. 13, no. 12, pp. 7011-7024, Dec. 2014. A. Saci, A. Al-Dweik, A. Shami, and Y. Iraqi, “One-shot blind channel estimation for OFDM systems over frequency-selective fading channels,” *IEEE Trans. Commun*., vol. 65, no. 12, pp. 5445-5458, Dec. 2017. A. Saci, A. Al-Dweik and A. Shami, “Blind channel estimation using cooperative subcarriers for OFDM systems,” *IEEE Int. Conf. Commun*. (ICC), Kansas City, USA, May 2018. X. Zhang and D. Xu, “Blind channel estimation for multiple antenna OFDM system subject to unknown carrier frequency offset,”* J. of Sys. Eng. and Electron.*, vol. 25, no. 5, pp. 721-727, Oct. 2014. A. Mezghani and A. L. Swindlehurst, “Blind estimation of sparse broadband massive MIMO channels with ideal and one-bit ADCs,” *IEEE Trans. Signal Process*., vol. 66, no. 11, pp. 2972-2983, June 2018. 
Hongting Zhang and Hsiao-Chun Wu, “Robust pilot detection techniques for channel estimation and symbol detection in OFDM systems,”* IEEE Signal Process.* *Lett.*, vol. 22, no. 6, pp. 733-737, June 2015. R. Shaked, N. Shlezinger and R. Dabora, “Joint estimation of carrier frequency offset and channel impulse response for linear periodic channels,” *IEEE Trans. Commun.*, vol. 66, no. 1, pp. 302-319, Jan. 2018. Y. Wang, G. Liu, F. Han and H. Qu, “Channel estimation and equalization for SS-OOFDM system with high mobility,” *IEEE Wireless Commun. Lett.*, vol. 23, no. 1, pp. 92-95, Jan. 2019. Chenhao Qi, Guosen Yue, Lenan Wu, and A. Nallanathan, “Pilot design for sparse channel estimation in OFDM-based cognitive radio systems,”* IEEE Trans. on Veh. Technol.*, vol. 63, no. 2, pp. 982-987, Feb. 2014. G. Liu, L. Zeng, H. Li, L. Xu, and Z. Wang, “Adaptive complex interpolator for channel estimation in pilot-aided OFDM system,” *J. Commun. Networks*, vol. 15, no. 5, pp. 496-503, Oct. 2013. Jung-Chieh Chen, Chao-Kai Wen, and Pangan Ting, “An efficient pilot design scheme for sparse channel estimation in OFDM systems,”* IEEE Commun. Lett.*, vol. 17, no. 7, pp. 1352-1355, July 2013. T. Lee, D. Sim, B. Seo and C. Lee, “Channel estimation scheme in oversampled frequency domain for FBMC-QAM systems based on prototype filter set,” *IEEE Trans. Veh. Technol.*, vol. 68, no. 1, pp. 728-739, Jan. 2019. P. Tan and N. Beaulieu, “Effect of channel estimation error on bit error probability in OFDM systems over Rayleigh and Ricean fading channels,”* IEEE Trans. Commun.*, vol. 56, no. 4, pp. 675-685., Apr. 2008. S. Tomasin and M. Butussi, “Analysis of interpolated channel estimation for mobile OFDM systems,* IEEE Trans. Commun.*, vol. 58, no. 5, pp. 1578-1588, May 2010. P. Hoeher, S. Kaiser and P. Robertson, “Two-dimensional pilot-symbol-aided channel estimation by Wiener filtering, *In Proc IEEE Int. Conf. on Acoustics, Speech, and Signal Processing*, vol. 3, Munich, 1997, pp. 1845-1848. F. 
D’Agostini, S. Carboni, M. De Castro, F. De Castro, and D. Trindade, “Adaptive concurrent equalization applied to multicarrier OFDM systems,”* IEEE Trans. Broadcast*, vol. 54, no. 3, pp. 441-447, Sep. 2008. M. Henkel, C. Schilling and W. Schroer, “Comparison of channel estimation methods for pilot aided OFDM systems,” in *Proc. IEEE VTC. Spring*, Dublin, 2007, pp. 1435-1439. R. Raheli, A. Polydoros and C-K Tzou, “Per-survivor processing: a general approach to MLSE in uncertain environments,” *IEEE Trans. Commun*., vol. 43, no. 2, pp. 354-364, Feb. 1995. Z. Zhu and H. Sadjadpour, “An adaptive per-survivor processing algorithm,” *IEEE Trans. Commun*., vol. 50, no. 11, pp. 1716-1718, Nov. 2002. M. Luise, R. Reggiannini and G. M. Vitetta, “Blind equalization/detection for OFDM signals over frequency-selective channels,” *IEEE J. Sel. Areas Commun*., vol. 16, no. 8, pp. 1568-1578, Oct. 1998. D. Divsalar and M. K. Simon, “Multiple-symbol differential detection of MPSK,” *IEEE Trans. Commun*., vol. 38, no. 3, pp. 300-308, Mar. 1990. L. Zhang, Z. Hong, Y. Wu, R. Boudreau and L. Thibault, “A novel differential detection for differential OFDM systems with high mobility,” *IEEE Trans. Broadcast*., vol. 62, no. 2, pp. 398-408, June 2016. M. Wu and P. Y. Kam, “Performance analysis and computational complexity comparison of sequence detection receivers with no explicit channel estimation,”* IEEE Trans. on Veh. Technol.*, vol. 59, no. 5, pp. 2625-2631, Jun 2010. M. Matinmikko and A. Mammela, “Estimator-correlator receiver in fading channels for signals with pilot symbols,” in *Proc. IEEE 15th Ann. Int. Symp. Pers. Indoor Mobile Radio Commun. (PIMRC)*, Barcelona, 2004, pp. 2278-2282, vol. 3. J. Proakis and M. Salehi, *Digital communications*, 5th ed. New York: McGraw-Hill, 2008. 
IEEE Standard for Information technology Telecommunications and information exchange between systems local and metropolitan area networks, specific requirements, Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, Amendment 4: Enhancements for Very High Throughput for Operation in Bands below 6 GHz, Dec. 2013. W. C. Jakes, *Microwave Mobile Communications*, 2nd ed. Wiley, 1994. S. Lin, T. Kasami, T. Fujiwara, and M. Fossorier, **“Trellises and trellis-based decoding algorithms for linear block codes”,** *Springer Science & Business Media*, vol. 443. 2012. Seijas-Macias, Antonio, and A. Oliveira. “An approach to distribution of the product of two normal variables,” *Discussiones Mathematicae Probability and Statistics*, vol. 32, no. 1-2, pp. 87-99, 2012. Borjesson, P., and C-E. Sundberg. “Simple approximations of the error function $Q(x)$ for communications applications,” *IEEE Trans. Commun.*, vol. 27, no. 3, pp. 639-643, March 1979. M. Tariq, A. Al-Dweik, B. Mohammad, H. Saleh and T. Stouraitis, “Computational power analysis of wireless communications systems using operation-level power measurements,” in *Proc. ICECTA*, Ras Al Khaimah, 2017, pp. 1-6. D. Petrinovic, “Causal cubic splines: formulations, interpolation properties and implementations,” *IEEE Trans. Signal Process.,* vol. 56, no. 11, pp. 5442-5453, Nov. 2008. D. Lamb, L. F. O. Chamon and V. H. Nascimento, “Efficient filtering structure for spline interpolation and decimation,”* IET Electron. Lett.*, vol. 52, no. 1, pp. 39-41, Aug. 1 2016. P. Wu, “On the complexity of turbo decoding algorithms, *IEEE VTS 53rd Veh. Technol. Conf.*, Spring 2001, Rhodes, Greece, 2001, pp. 1439-1443. ETSI TR 125 943 V9.0.0 (2010-02), Universal Mobile Telecommunications System (UMTS) Deployment Aspects, 3GPP TR 25.943, Release 9. [^1]: A. Saci, A. Al-Dweik and A. 
Shami are with the Department of Electrical and Computer Engineering, Western University, London, ON, Canada, (e-mail: {asaci, aaldweik, abdallah.shami}@uwo.ca). [^2]: A. Al-Dweik is also with the Department of Electrical and Computer Engineering, Khalifa University, Abu Dhabi, UAE, (e-mail: dweik@kustar.ac.ae). [^3]: Part of this work is protected by the US patent: A. Al-Dweik “Signal detection in a communication system.” U.S. Patent No. 9,596,119. 14 Mar. 2017.
{ "pile_set_name": "ArXiv" }
--- abstract: 'Let $L$ be a reductive subgroup of a reductive Lie group $G$. Let $G/H$ be a homogeneous space of reductive type. We provide a necessary condition for the properness of the action of $L$ on $G/H$. As an application we give examples of spaces that do not admit standard compact Clifford-Klein forms.' author: - Maciej Bocheński and Marek Ogryzek title: A restriction on proper actions on homogeneous spaces of a reductive type --- Introduction ============ Let $L$ be a locally compact topological group acting continuously on a locally compact Hausdorff topological space $M$. This action is called [***proper***]{} if for every compact subset $C \subset M$ the set $$L(C):=\{ g\in L \ | \ g\cdot C \cap C \neq \emptyset \}$$ is compact. In this paper, our main concern is the following question posed by T. Kobayashi [@kob3]: How “large” can a subgroup of $G$ be if it acts properly on a homogeneous space $G/H$? **(Q1)** We restrict our attention to the case where $M=G/H$ is a homogeneous space of reductive type and always assume that $G$ is a linear connected reductive real Lie group with Lie algebra $\mathfrak{g}.$ Let $H\subset G$ be a closed subgroup of $G$ with finitely many connected components and let $\mathfrak{h}$ be the Lie algebra of $H.$ The subgroup $H$ is reductive in $G$ if $\mathfrak{h}$ is reductive in $\mathfrak{g},$ that is, there exists a Cartan involution $\theta $ for which $\theta (\mathfrak{h}) = \mathfrak{h}.$ The space $G/H$ is called a homogeneous space of reductive type. \[def1\] Note that if $\mathfrak{h}$ is reductive in $\mathfrak{g}$ then $\mathfrak{h}$ is a reductive Lie algebra. It is natural to ask when a closed subgroup of $G$ acts properly on a space of reductive type $G/H.$ This problem was treated, inter alia, in [@ben], [@bt], [@kas], [@kob2], [@kob4], [@kob1], [@kul] and [@ok]. 
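As a toy illustration of the definition of properness (not taken from the paper): for the translation action of $\mathbb{Z}$ on $\mathbb{R}$, the set $L(C)$ is finite, hence compact, for every compact interval $C$, so the action is proper. A minimal sketch, with a hypothetical helper name:

```python
def returning_set(a, b, search_radius=100):
    """Integers g with (g + [a, b]) ∩ [a, b] nonempty, i.e. L(C) for C = [a, b]."""
    # the translate g + [a, b] meets [a, b] exactly when g + a <= b and g + b >= a
    return [g for g in range(-search_radius, search_radius + 1)
            if g + a <= b and g + b >= a]

print(returning_set(0.0, 1.0))   # -> [-1, 0, 1]: a finite (compact) set
```

By contrast, for a non-proper action some compact set returns to itself under infinitely many group elements.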
In [@kob2] one can find a very important criterion for a proper action of a subgroup $L$ reductive in $G.$ To state this criterion we need to introduce some additional notation. Let $\mathfrak{l}$ be the Lie algebra of $L.$ Take a Cartan involution $\theta$ of $\mathfrak{g}.$ We obtain the Cartan decomposition $$\mathfrak{g}=\mathfrak{k} + \mathfrak{p}. \label{eq1}$$ Choose a maximal abelian subspace $\mathfrak{a}$ in $\mathfrak{p}.$ The subspace $\mathfrak{a}$ is called the ***maximally split abelian subspace*** of $\mathfrak{p}$ and $\text{rank}_{\mathbb{R}}(\mathfrak{g}) := \text{dim} (\mathfrak{a})$ is called the ***real rank*** of $\mathfrak{g}.$ It follows from Definition \[def1\] that $\mathfrak{h}$ and $\mathfrak{l}$ admit Cartan decompositions $$\mathfrak{h}=\mathfrak{k}_{1} + \mathfrak{p}_{1} \ \text{and} \ \mathfrak{l}=\mathfrak{k}_{2} + \mathfrak{p}_{2},$$ given by Cartan involutions $\theta_{1}, \ \theta_{2}$ of $\mathfrak{g}$ such that $\theta_{1} (\mathfrak{h})= \mathfrak{h}$ and $\theta_{2} (\mathfrak{l})= \mathfrak{l}.$ Let $\mathfrak{a}_{1} \subset \mathfrak{p}_{1}$ and $\mathfrak{a}_{2} \subset \mathfrak{p}_{2}$ be maximally split abelian subspaces of $\mathfrak{p}_{1}$ and $\mathfrak{p}_{2},$ respectively. One can show that there exist $a,b \in G$ such that $\mathfrak{a}_{\mathfrak{h}} := \text{\rm Ad}_{a}\mathfrak{a}_{1} \subset \mathfrak{a}$ and $\mathfrak{a}_{\mathfrak{l}} := \text{\rm Ad}_{b}\mathfrak{a}_{2} \subset \mathfrak{a}.$ Denote by $W_{\mathfrak{g}}$ the Weyl group of $\mathfrak{g}.$ In this setting the following holds The following three conditions are equivalent 1. $L$ acts on $G/H$ properly. 2. $H$ acts on $G/L$ properly. 3. For any $w \in W_{\mathfrak{g}},$ $w\cdot \mathfrak{a}_{\mathfrak{l}} \cap \mathfrak{a}_{\mathfrak{h}} =\{ 0 \}.$ \[twkob\] Note that the criterion 3. in Theorem \[twkob\] depends on how $L$ and $H$ are embedded in $G$ up to inner-automorphisms. Theorem \[twkob\] provides a partial answer to Q1. 
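Condition 3 of Theorem \[twkob\] becomes a finite check once $\mathfrak{a}_{\mathfrak{h}}$ and $\mathfrak{a}_{\mathfrak{l}}$ are written in coordinates. A sketch for $\mathfrak{g}=\mathfrak{sl}(3,\mathbb{R})$, where $\mathfrak{a}$ is the space of traceless diagonal matrices and $W_{\mathfrak{g}}\cong S_{3}$ permutes the diagonal entries; the two lines chosen below are an illustrative (hypothetical) example, not one from the paper:

```python
import itertools
import numpy as np

def condition_3_holds(u, v):
    """For lines a_h = span{u} and a_l = span{v} inside a, check that
    w . a_l ∩ a_h = {0} for every w in the Weyl group S_3 of sl(3, R)."""
    for perm in itertools.permutations(range(3)):
        wv = np.asarray(v)[list(perm)]
        # two lines intersect nontrivially iff their spanning vectors are parallel
        if np.linalg.matrix_rank(np.vstack([u, wv])) < 2:
            return False
    return True

a_h = np.array([1.0, 1.0, -2.0])    # traceless: spans a line in a
a_l = np.array([1.0, -1.0, 0.0])    # traceless: spans another line
print(condition_3_holds(a_h, a_l))  # True: no Weyl translate of a_l meets a_h
print(condition_3_holds(a_h, a_h))  # False: the identity element already fails
```

For higher-dimensional $\mathfrak{a}_{\mathfrak{h}}$, $\mathfrak{a}_{\mathfrak{l}}$ the parallelism test would be replaced by a rank computation on the concatenated spanning sets.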
If $L$ acts properly on $G/H$ then $$\text{\rm rank}_{\mathbb{R}}(\mathfrak{l}) + \text{\rm rank}_{\mathbb{R}}(\mathfrak{h}) \leq \text{\rm rank}_{\mathbb{R}} (\mathfrak{g}).$$ \[coko\] Hence the real rank of $L$ is bounded by a constant which depends on $G/H,$ no matter how $H$ and $L$ are embedded in $G.$ In this paper we find a similar, stronger restriction for Lie groups $G,H,L$ by means of a certain tool which we call the a-hyperbolic rank (see Section 2, Definition \[dd2\] and Table \[tab1\]). In more detail we prove the following If $L$ acts properly on $G/H$ then $$\mathop{\mathrm{rank}}\nolimits_{\text{\rm a-hyp}}(\mathfrak{l}) + \mathop{\mathrm{rank}}\nolimits_{\text{\rm a-hyp}}(\mathfrak{h}) \leq \mathop{\mathrm{rank}}\nolimits_{\text{\rm a-hyp}} (\mathfrak{g}).$$ \[twgl\] Recall that a homogeneous space $G/H$ of reductive type admits a ***compact Clifford-Klein form*** if there exists a discrete subgroup $\Gamma \subset G$ such that $\Gamma$ acts properly on $G/H$ and $\Gamma \backslash G/H$ is compact. The space $G/H$ admits a ***standard compact Clifford-Klein form*** in the sense of Kassel-Kobayashi [@kako] if there exists a subgroup $L$ reductive in $G$ such that $L$ acts properly on $G/H$ and $L \backslash G/H$ is compact. In the latter case, for any discrete cocompact subgroup $\Gamma ' \subset L,$ the space $\Gamma ' \backslash G/H$ is a compact Clifford-Klein form. Therefore it follows from Borel’s theorem (see [@bor]) that any homogeneous space of reductive type admitting a standard compact Clifford-Klein form also admits a compact Clifford-Klein form. It is not known if the converse statement holds, but all known reductive homogeneous spaces $G/H$ admitting compact Clifford-Klein forms also admit standard compact Clifford-Klein forms. As a corollary to Theorem \[twgl\], we get examples of the semisimple symmetric spaces without standard compact Clifford-Klein forms. 
In particular, the following examples cannot be found in the existing literature: The homogeneous spaces $G/H=SL(2k+1, \mathbb{R})/SO(k-1,k+2)$ and $G/H=SL(2k+1, \mathbb{R})/Sp(k-1,\mathbb{R})$ for $k \geq 5$ do not admit standard compact Clifford-Klein forms. \[co1\] Let us mention the following results related to the above corollary. - T. Kobayashi proved in [@kobadm] that $SL(2k,\mathbb{R})/SO(k,k)$ for $k\geq 1$ and $SL(n,\mathbb{R})/Sp(l,\mathbb{R})$ for $0<2l \leq n-2$ do not admit compact Clifford-Klein forms. - Y. Benoist proved in [@ben] that $SL(2k+1,\mathbb{R})/SO(k,k+1)$ for $k\geq 1$ does not admit compact Clifford-Klein forms. - Y. Morita proved recently in [@mor] that $SL(p+q,\mathbb{R})/SO(p,q)$ does not admit compact Clifford-Klein forms if $p$ and $q$ are both odd. Note that these works are devoted to the problem of existence of compact Clifford-Klein forms on a given homogeneous space (not only standard compact Clifford-Klein forms). The a-hyperbolic rank and antipodal hyperbolic orbits ===================================================== Let $\Sigma_{\mathfrak{g}}$ be a system of restricted roots for $\mathfrak{g}$ with respect to $\mathfrak{a}.$ Choose a system of positive roots $\Sigma^{+}_{\mathfrak{g}}$ for $\Sigma_{\mathfrak{g}}.$ Then a fundamental domain of the action of $W_{\mathfrak{g}}$ on $\mathfrak{a}$ can be defined as $$\mathfrak{a}^{+} := \{ X\in \mathfrak{a} \ | \ \alpha (X) \geq 0 \ \text{\rm for any} \ \alpha \in \Sigma^{+}_{\mathfrak{g}} \}.$$ Note that $$sX+tY \in \mathfrak{a}^{+},$$ for any $s,t \geq 0$ and $X,Y\in \mathfrak{a}^{+}$. Therefore $\mathfrak{a}^{+}$ is a convex cone in the linear space $\mathfrak{a}.$ Let $w_{0} \in W_{\mathfrak{g}}$ be the longest element. 
One can show that $$-w_{0}: \mathfrak{a} \rightarrow \mathfrak{a}, \ \ X \mapsto -(w_{0} \cdot X)$$ is an involutive automorphism of $\mathfrak{a}$ preserving $\mathfrak{a}^{+}.$ Let $\mathfrak{b} \subset \mathfrak{a}$ be the subspace of all fixed points of $-w_{0}$ and put $$\mathfrak{b}^{+} := \mathfrak{b} \cap \mathfrak{a}^{+}.$$ Thus $\mathfrak{b}^{+}$ is a convex cone in $\mathfrak{a}.$ We also have $\mathfrak{b} = \text{Span} (\mathfrak{b}^{+}).$ The dimension of $\mathfrak{b}$ is called the a-hyperbolic rank of $\mathfrak{g}$ and is denoted by $$\mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}} (\mathfrak{g}).$$ \[dd2\] The a-hyperbolic ranks of the simple real Lie algebras can be deduced from Table \[tab1\]. A method of calculation of the a-hyperbolic rank of a simple Lie algebra can be found in [@bt]. The a-hyperbolic rank of a semisimple Lie algebra equals the sum of a-hyperbolic ranks of all its simple parts. For a reductive Lie algebra $\mathfrak{g}$ we put $$\mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}} (\mathfrak{g}) := \mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}}([\mathfrak{g},\mathfrak{g}]).$$ There is a close relation between $\mathfrak{b}^{+}$ and the set of antipodal hyperbolic orbits in $\mathfrak{g}.$ We say that an element $X \in \mathfrak{g}$ is [***hyperbolic***]{} if $X$ is semisimple (that is, $\mathrm{ad}_{X}$ is diagonalizable) and all eigenvalues of $\mathrm{ad}_{X}$ are real. An adjoint orbit $O_{X}:=\mathop{\mathrm{Ad}}\nolimits (G)(X)$ is said to be hyperbolic if $X$ (and therefore every element of $O_{X}$) is hyperbolic. An adjoint orbit $O_{Y}$ is antipodal if $-Y\in O_{Y}$ (and therefore for every $Z\in O_{Y},$ $-Z\in O_{Y}$). 
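For $\mathfrak{g}=\mathfrak{sl}(n,\mathbb{R})$ the construction of $\mathfrak{b}$ can be carried out explicitly: $\mathfrak{a}$ is the space of traceless diagonal matrices and, for the standard choice of positive roots, the longest Weyl element $w_{0}$ reverses the diagonal entries, so $-w_{0}$ sends $x$ to the negated reversal of $x$. The sketch below (an illustration with a hypothetical function name, not code from the paper) computes $\dim\mathfrak{b}$, which comes out as $\lfloor n/2\rfloor$:

```python
import numpy as np

def a_hyperbolic_rank_sl(n):
    """dim of the fixed space b of -w0 acting on a, for g = sl(n, R),
    where w0 (the longest Weyl element) reverses the coordinates."""
    M = -np.fliplr(np.eye(n))        # matrix of -w0: x -> -(reversal of x)
    # fixed vectors solve (M - I) x = 0; they satisfy x_i = -x_{n+1-i},
    # hence are automatically traceless and lie in a
    return int(n - np.linalg.matrix_rank(M - np.eye(n)))

print([a_hyperbolic_rank_sl(n) for n in range(2, 8)])  # [1, 1, 2, 2, 3, 3]
```

In particular $\mathfrak{sl}(3,\mathbb{R})$ has a-hyperbolic rank $1$ despite having real rank $2$, which is the kind of gap Theorem \[twgl\] exploits.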
There is a bijective correspondence between antipodal hyperbolic orbits $O_{X}$ in $\mathfrak{g}$ and elements $Y \in \mathfrak{b}^{+}.$ This correspondence is given by $$\mathfrak{b}^{+}\ni Y \mapsto O_{Y}.$$ Furthermore, for every hyperbolic orbit $O_{X}$ in $\mathfrak{g}$ the set $O_{X} \cap \mathfrak{a}$ is a single $W_{\mathfrak{g}}$-orbit in $\mathfrak{a}$. \[lma\] The main result =============== We need two basic facts from linear algebra. Let $V_{1},V_{2}$ be vector subspaces of a real linear space $V$ of finite dimension. Then $$\text{\rm dim} (V_{1}+V_{2})= \text{\rm dim} (V_{1}) + \text{\rm dim} (V_{2}) - \text{\rm dim} (V_{1}\cap V_{2}).$$ \[lma1\] Let $V_{1},...,V_{n}$ be a collection of vector subspaces of a real linear space $V$ of finite dimension and let $A^{+} \subset V$ be a convex cone. Assume that $$A^{+} \subset \bigcup_{k=1}^{n} V_{k}.$$ Then there exists a number $k$ such that $A^{+} \subset V_{k}.$ \[lma2\] We also need the following technical lemma. Choose a subalgebra $\mathfrak{h}$ reductive in $\mathfrak{g}$ which corresponds to a Lie group $H \subset G.$ Let $\mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+} \subset \mathfrak{a}_{\mathfrak{h}}$ be the convex cone constructed according to the procedure described in the previous section (for $[\mathfrak{h},\mathfrak{h}]$). Let $X\in \mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+}.$ The orbit $O_{X}:=\mathop{\mathrm{Ad}}\nolimits (G)(X)$ is an antipodal hyperbolic orbit in $\mathfrak{g}.$ \[lma3\] By Lemma \[lma\] the vector $X$ defines an antipodal hyperbolic orbit in $\mathfrak{h}.$ Therefore we can find $h \in H \subset G$ such that $\text{\rm Ad}_{h}(X) = - X$. 
Since a maximally split abelian subspace $\mathfrak{a} \subset \mathfrak{g}$ consists of vectors for which $\mathrm{ad}$ is diagonalizable with real eigenvalues and $$X \in \mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+} \subset \mathfrak{a}_{\mathfrak{h}} \subset \mathfrak{a},$$ the vector $X$ is hyperbolic in $\mathfrak{g}.$ It follows that $\mathop{\mathrm{Ad}}\nolimits (G)(X)$ is a hyperbolic orbit in $\mathfrak{g}$ and $-X \in \mathop{\mathrm{Ad}}\nolimits (G)(X).$ Now we are ready to give a proof of Theorem \[twgl\]. Assume that $\mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}}(\mathfrak{l}) + \mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}}(\mathfrak{h}) > \mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}} (\mathfrak{g})$ and let $\mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+},$ $\mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]}^{+},$ $\mathfrak{b}^{+}$ be appropriate convex cones. If $X \in \mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+}$ then $O_{X}^{H} := \mathop{\mathrm{Ad}}\nolimits (H)(X)$ is an antipodal hyperbolic orbit in $\mathfrak{h}.$ By Lemma \[lma3\] the orbit $O_{X}^{G} := \mathop{\mathrm{Ad}}\nolimits (G)(X)$ is an antipodal hyperbolic orbit in $\mathfrak{g}.$ By Lemma \[lma\] there exists $Y \in \mathfrak{b}^{+}$ such that $$O_{X}^{G} = O_{Y}^{G} = \mathop{\mathrm{Ad}}\nolimits (G)(Y).$$ Since $\mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+} \subset \mathfrak{a}_{\mathfrak{h}} \subset \mathfrak{a}$ and $\mathfrak{b}^{+} \subset \mathfrak{a}$, we get (according to Lemma \[lma\]) $X=w_{1} \cdot Y$ for a certain $w_{1} \in W_{\mathfrak{g}}.$ Therefore $$\mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+} \subset W_{\mathfrak{g}} \cdot \mathfrak{b}^{+} = \bigcup_{w\in W_{\mathfrak{g}}} w \cdot \mathfrak{b}^{+} \subset \bigcup_{w\in W_{\mathfrak{g}}} w \cdot \mathfrak{b}.$$ Analogously $$\mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]}^{+} \subset \bigcup_{w\in W_{\mathfrak{g}}} w \cdot \mathfrak{b}^{+} \subset \bigcup_{w\in W_{\mathfrak{g}}} w \cdot \mathfrak{b}.$$ By
Lemma \[lma2\] there exist $w_{\mathfrak{h}},w_{\mathfrak{l}} \in W_{\mathfrak{g}}$ such that $$\mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+} \subset w_{\mathfrak{h}}^{-1} \cdot \mathfrak{b} \ \ \text{\rm and} \ \ \mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]}^{+} \subset w_{\mathfrak{l}}^{-1} \cdot \mathfrak{b},$$ because $W_{\mathfrak{g}}$ acts on $\mathfrak{a}$ by linear transformations. Therefore $$\mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]} \subset w_{\mathfrak{h}}^{-1} \cdot \mathfrak{b} \ \ \text{\rm and} \ \ \mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]} \subset w_{\mathfrak{l}}^{-1} \cdot \mathfrak{b}$$ where $\mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]} := \text{Span}(\mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+})$ and $\mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]} := \text{Span} (\mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]}^{+}).$ We obtain $$w_{\mathfrak{h}} \cdot \mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]} \subset \mathfrak{b} , \ w_{\mathfrak{l}} \cdot \mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]} \subset \mathfrak{b}.$$ By the assumption and Lemma \[lma1\] $$\text{\rm dim}(w_{\mathfrak{h}} \cdot \mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]} \cap w_{\mathfrak{l}} \cdot \mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]}) >0.$$ Choose $0 \neq Y \in w_{\mathfrak{h}} \cdot \mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]} \cap w_{\mathfrak{l}} \cdot \mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]}.$ Then $$w_{\mathfrak{l}} \cdot X_{\mathfrak{l}}=Y= w_{\mathfrak{h}} \cdot X_{\mathfrak{h}} \ \text{\rm for some} \ X_{\mathfrak{h}} \in \mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}\backslash \{ 0 \} \ \text{\rm and} \ X_{\mathfrak{l}} \in \mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]}\backslash \{ 0 \}.$$ Take $w_{2} := w_{\mathfrak{h}}^{-1}w_{\mathfrak{l}} \in W_{\mathfrak{g}},$ we have $X_{\mathfrak{h}}= w_{2} \cdot X_{\mathfrak{l}}$ and $X_{\mathfrak{h}} \in \mathfrak{a}_{\mathfrak{h}}, \ X_{\mathfrak{l}} \in \mathfrak{a}_{\mathfrak{l}}.$ Thus $0 \neq X_{\mathfrak{h}} \in w_{2} \cdot 
\mathfrak{a}_{\mathfrak{l}} \cap \mathfrak{a}_{\mathfrak{h}}.$ The assertion follows from Theorem \[twkob\]. We can proceed to a proof of Corollary \[co1\]. For a reductive Lie group $D$ with Lie algebra $\mathfrak{d}$ and Cartan decomposition $$\mathfrak{d} = \mathfrak{k}_{\mathfrak{d}} + \mathfrak{p}_{\mathfrak{d}}$$ we define $d(D) := \text{dim} (\mathfrak{p}_{\mathfrak{d}}).$ We will need the following properties Let $L$ be a subgroup reductive in $G$ acting properly on $G/H.$ The space $L \backslash G /H$ is compact if and only if $$d(L)+d(H)=d(G).$$ \[twkk\] If $J \subset G$ is a semisimple subgroup then it is reductive in $G.$ \[twy\] Let $L \subset G$ be a semisimple Lie group acting properly on $G/H=SL(2k+1, \mathbb{R})/SO(k-1,k+2)$ or $G/H=SL(2k+1, \mathbb{R})/Sp(k-1,\mathbb{R}).$ Then $$\text{\rm rank}_{\mathbb{R}}(\mathfrak{l}) \leq 2.$$ \[p1\] Because $\mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}}(\mathfrak{g}) = 1+ \mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}}(\mathfrak{h})$, it follows from Table \[tab1\] and Theorem \[twgl\] that if $L$ is simple then $\text{rank}_{\mathbb{R}}(\mathfrak{l}) \leq 2.$ On the other hand if $L$ is semisimple then each (non-compact) simple part of $\mathfrak{l}$ adds at least $1$ to the a-hyperbolic rank of $\mathfrak{l}.$ Thus we also have $\text{rank}_{\mathbb{R}}(\mathfrak{l}) \leq 2.$ *Proof of Corollary \[co1\].* Assume now that $L$ is reductive in $G.$ Since the Lie algebra $\mathfrak{l}$ is reductive, we have $$\mathfrak{l} = \mathfrak{c}_{\mathfrak{l}} + [\mathfrak{l},\mathfrak{l}],$$ where $\mathfrak{c}_{\mathfrak{l}}$ denotes the center of $\mathfrak{l}.$ It follows from Corollary \[coko\] that $$\text{\rm rank}_{\mathbb{R}}(\mathfrak{l}) \leq k+1, \label{eq2}$$ and by Proposition \[p1\] we have $\text{rank}_{\mathbb{R}}([\mathfrak{l},\mathfrak{l}]) \leq 2.$ Note that $$d(G)-d(H)\geq k^{2} +2k +2.$$ We will show that if $L$ acts properly on $G/H$ and $k\geq 5$ then $$d(L) < k^{2} + 2k +2. 
\label{eq4}$$ Let $[\mathfrak{l},\mathfrak{l}] = \mathfrak{k}_{0} + \mathfrak{p}_{0}$ be a Cartan decomposition. From (\[eq2\]) $$d(L) \leq \text{\rm dim} (\mathfrak{c}_{\mathfrak{l}}) + \text{\rm dim} (\mathfrak{p}_{0}) \leq k+1 + \text{\rm dim} (\mathfrak{p}_{0}). \label{eq7}$$ Also, if $\text{rank}_{\mathbb{R}}([\mathfrak{l},\mathfrak{l}]) =2$ then it follows from Table \[tab1\] that (the only) non-compact simple part of $[\mathfrak{l},\mathfrak{l}]$ is isomorphic to $\mathfrak{sl}(3,\mathbb{R}),$ $\mathfrak{su}^{\ast}(6),$ $\mathfrak{e}_{6}^{\text{IV}}$ or $\mathfrak{sl}(3,\mathbb{C})$ (treated as a simple real Lie algebra). In such a case $$\text{\rm dim} (\mathfrak{p}_{0}) < 27. \label{eq5}$$ Therefore assume that $\text{rank}_{\mathbb{R}} ([\mathfrak{l},\mathfrak{l}])=1$ and let $\mathfrak{s} \subset [\mathfrak{l},\mathfrak{l}]$ be (the only) simple part of non-compact type. We have $$\text{\rm rank}_{\mathbb{R}} (\mathfrak{s}) =1. \label{eq3}$$ It follows from Theorem \[twy\] that $\mathfrak{s}$ is reductive in $\mathfrak{g}.$ Therefore $\mathfrak{s}$ admits a Cartan decomposition $$\mathfrak{s} = \mathfrak{k}_{\mathfrak{s}} + \mathfrak{p}_{\mathfrak{s}}$$ compatible with $\mathfrak{g}= \mathfrak{k} + \mathfrak{p},$ that is $\mathfrak{k}_{\mathfrak{s}} \subset \mathfrak{k}.$ We also have $\text{dim}(\mathfrak{p}_{\mathfrak{s}})=\text{dim}(\mathfrak{p}_{0}).$ Since $\mathfrak{k} = \mathfrak{so}(2k+1)$ we obtain $$\text{\rm rank} (\mathfrak{k}_{\mathfrak{s}}) \leq \text{\rm rank} (\mathfrak{k}) = k.$$ Using the above condition together with (\[eq3\]) we can check (by a case-by-case study of simple Lie algebras) that $$\text{\rm dim} (\mathfrak{p}_{\mathfrak{s}}) < 4k. \label{eq6}$$ Now (\[eq7\]), (\[eq5\]) and (\[eq6\]) imply that $$d(L) < 5k+1$$ for $k \geq 6,$ and $d(L)<33$ for $k=5.$ Thus we have shown (\[eq4\]). The assertion follows from Theorem \[twkk\]. [99]{} Y. Benoist, [*Actions propres sur les espaces homogènes réductifs*]{}, Ann. of Math. 144 (1996), 315-347. M. 
Bocheński, A. Tralle, [*Clifford-Klein forms and a-hyperbolic rank*]{}, Int. Math. Res. Notices (2014), DOI: 10.1093/imrn/rnu123. A. Borel, [*Compact Clifford-Klein forms of symmetric spaces*]{}, Topology 2 (1963), 111-122. F. Kassel, [*Proper actions on corank-one reductive homogeneous spaces*]{}, J. Lie Theory 18 (2008), 961-978. F. Kassel, T. Kobayashi, [*Poincaré series for non-Riemannian locally symmetric spaces*]{}, arXiv:1209.4075. T. Kobayashi, [*Proper actions on a homogeneous space of reductive type*]{}, Math. Ann. 285 (1989), 249-263. T. Kobayashi, [*Discontinuous groups acting on homogeneous spaces of reductive type*]{}, Representation theory of Lie groups and Lie algebras (Fuji-Kawaguchiko, 1990), World Sci. Publ., River Edge (1992), 59-75. T. Kobayashi, [*A necessary condition for the existence of compact Clifford-Klein forms of homogeneous spaces of reductive type,*]{} Duke Math. J. 67 (1992), 653-664. T. Kobayashi, [*On discontinuous groups acting on homogeneous spaces with noncompact isotropy subgroups*]{}, J. Geom. Phys. 12 (1993), 133-144. T. Kobayashi, [*Criterion for proper actions on homogeneous spaces of reductive groups*]{}, J. Lie Theory 6 (1996), 147-163. T. Kobayashi, T. Yoshino, [*Compact Clifford-Klein forms of symmetric spaces revisited*]{}, Pure Appl. Math. Quart. 1 (2005), 603-684. R. S. Kulkarni, [*Proper actions and pseudo-Riemannian space forms,*]{} Adv. Math. 40 (1981), 10-51. Y. Morita, [*A topological necessary condition for the existence of compact Clifford-Klein forms*]{}, arXiv:1310.7096, to appear in J. Differ. Geom. T. Okuda, [*Classification of semisimple symmetric spaces with $SL(2, \mathbb{R})$-proper actions,*]{} J. Differ. Geom. 94 (2013), 301-342. A. L. Onishchik, E. B. Vinberg [*Lie groups and algebraic groups,*]{} Springer (1990). K. Yosida, [*A theorem concerning the semisimple Lie groups,*]{} Tohoku Math. J. 44 (1938), 81-84. 
Maciej Bocheński Department of Mathematics and Computer Science, University of Warmia and Mazury, Słoneczna 54, 10-710, Olsztyn, Poland. email: mabo@matman.uwm.edu.pl Marek Ogryzek Department of Geodesy and Land Management, University of Warmia and Mazury, Prawocheńskiego 15, 10-720, Olsztyn, Poland. email: marek.ogryzek@uwm.edu.pl
--- address: 'Department of Mathematics, Imperial College, 180 Queens Gate, London SW7 2BZ, United Kingdom' author: - 'Proshun Sinha-Ray and Henrik Jeldtoft Jensen' title: 'Forest-fire models as a bridge between different paradigms in Self-Organized Criticality' --- Several types of models of self-organised criticality (SOC) exist[@Bak:book; @Jensen:book]. The original cellular automaton models were defined by a deterministic and conservative updating algorithm, with thresholds (barriers to activity), and stochastic driving [@BTW:SOC1; @BTW:SOC]. A new class of models was developed by Olami, Feder and Christensen (OFC) [@OFCmodel], who realised that a non-conservative threshold model might remain critical if driven uniformly. The OFC model is completely deterministic except for a random initial configuration. In both types of model the threshold is assumed to play a crucial role as a local rigidity which allows for a separation of time scales and, equally importantly, produces a large number of metastable states. The dynamics take the system from one of these metastable states to another. It is believed that separation of time scales and metastability are essential for the existence of scale invariance in these models. A seemingly very different type of model was developed by Drossel and Schwabl (DS) [@DroSchff]. No threshold appears explicitly in this model, and the separation of time scales is put in by hand by tuning the rates of two stochastic processes which act as driving forces for the model. The DS forest-fire (FF) model is defined on a $d$-dimensional square lattice. Empty sites are turned into “trees” with probability $p$ per site in every time step. A tree can catch fire stochastically when hit by “lightning”, with probability $f$ each time step, or deterministically when a neighbouring site is on fire. The model is found to be critical in the limit $p\rightarrow0$ together with $f/p\rightarrow0$.
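For concreteness, one synchronous update of the DS rules can be sketched in a few lines. This is our own minimal illustration: the array representation, periodic boundaries, the order in which burn-out, spread, lightning, and growth are applied, and the parameter values are assumptions, not prescribed by the model definition.

```python
import numpy as np

EMPTY, TREE, FIRE = 0, 1, 2

def ds_step(grid, p, f, rng):
    """One synchronous update of the Drossel-Schwabl forest-fire model.

    Rules: burning sites become empty; a tree ignites if a nearest
    neighbour burns (deterministic spread) or by lightning with
    probability f; empty sites grow a tree with probability p.
    """
    burning = grid == FIRE
    # Deterministic fire spread to nearest-neighbour trees (periodic BCs).
    neighbour_on_fire = (np.roll(burning, 1, 0) | np.roll(burning, -1, 0) |
                         np.roll(burning, 1, 1) | np.roll(burning, -1, 1))
    new = grid.copy()
    new[burning] = EMPTY
    ignite = (grid == TREE) & (neighbour_on_fire |
                               (rng.random(grid.shape) < f))  # lightning
    new[ignite] = FIRE
    grow = (grid == EMPTY) & (rng.random(grid.shape) < p)
    new[grow] = TREE
    return new

rng = np.random.default_rng(0)
grid = np.zeros((64, 64), dtype=int)
for _ in range(200):
    grid = ds_step(grid, p=0.05, f=0.001, rng=rng)
```

Driving $p$ and $f/p$ toward zero in such a simulation probes the critical regime discussed above.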
This model is a generalization of a model first suggested by Bak, Chen and Tang [@BCTff], which is identical to the DS model except that it does not contain the stochastic ignition by lightning. The BCT system is not critical [@BCTffnotSOC] (in less than three dimensions, see [@BCTffin3d]). A continuous-variable, uniformly driven deterministic version [@detff] also shows regular behavior for low values of $p$ [@detffnotSOC]. Thus the introduction of the stochastic lightning mechanism appeared to be necessary, at least in two dimensions, for the model to behave critically. A useful review can be found in [@ffrev]. In the present letter we present a transformation of the forest-fire model into a completely deterministic system. This model is an extension of the recently introduced auto-ignition forest-fire, a simple variation on the DS model [@autoigff]. As in that model, we find that all macroscopic statistical measures of the system are preserved. Specifically, we show that the three models have the same exponent for the probability density describing clusters of trees, similar probability densities of tree ages and, perhaps most unexpectedly, almost the same power spectrum for the number of trees on the lattice as a function of time. It is surprising that the temporal fluctuation spectrum can be the same in the deterministic model as in the DS forest fire, since even a small stochastic element in an updating algorithm is known to be capable of altering the power spectrum in a significant way [@jens-anders-grin]. [*Definition of model –* ]{} The SOC FF can be recast into an auto-ignition model. This model is identical to the DS model, except that the spontaneous ignition probability $f$ is replaced by an auto-ignition mechanism by which trees ignite automatically when their age $T$ after inception reaches a value $T_{max}$.
Choosing this value suitably with respect to $p$ gives a system with exactly the same behaviour and statistical properties as the DS model [@autoigff]. Thus one stochastic driving process has been removed and a threshold introduced, while maintaining the SOC state; this model also displays explicitly the relationship between threshold dynamics and the separation of time scales so necessary for the SOC state. The auto-ignition model can be turned into a completely deterministic critical model by eliminating the stochastic growth mechanism. In the deterministic model (which we shall call the regen FF) each cell is given an integer parameter $T$ which increases by one each time step. If $T>0$, the cell is said to be occupied; otherwise it is empty (or regenerating). The initial configuration is a random distribution of $T$-values and fires. Fires spread through nearest neighbours and the auto-ignition mechanism is again operative, so that a tree catches fire when its $T=T_{max}$. However, in this model when a tree catches fire its $T$-value is decremented by $T_{regen}$. Note that when $T_{regen}<T_{max}$, a cell may still be occupied after it has been ignited. The parameters $T_{max}$ and $T_{regen}$ can be thought of as having a qualitatively reciprocal relationship with $f$ and $p$ respectively (in terms of the average ‘waiting time’ for spontaneous ignition and tree regrowth), though this is less straightforward in the latter case because trees are not always burned down by fire. It is evident that $T_{regen}$ also sets, and allows direct control of, the degree of dissipation of the $T$-parameter in the system. [*Results –* ]{} We now turn to a comparison between the statistical properties of the stochastic DS FF and the entirely deterministic regen model, with reference to the partly deterministic auto-ignition model.
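The regen rules admit an equally short sketch. Again this is our own illustration, with an assumed synchronous update order (ignition tested before the uniform increment) and assumed parameter values:

```python
import numpy as np

def regen_step(T, on_fire, T_max, T_regen):
    """One synchronous update of the deterministic regen forest-fire
    model: every cell's integer parameter T grows by one per step; an
    occupied cell (T > 0) ignites when T reaches T_max or when a
    nearest neighbour burns; ignition subtracts T_regen from T."""
    spread = (np.roll(on_fire, 1, 0) | np.roll(on_fire, -1, 0) |
              np.roll(on_fire, 1, 1) | np.roll(on_fire, -1, 1))
    new_fire = (T > 0) & (spread | (T >= T_max))
    T = T + 1                 # uniform deterministic driving
    T[new_fire] -= T_regen    # non-conservative dissipation at ignition
    return T, new_fire

rng = np.random.default_rng(0)       # randomness only in the initial state
T = rng.integers(-5, 10, size=(64, 64))
fire = np.zeros((64, 64), dtype=bool)
for _ in range(100):
    T, fire = regen_step(T, fire, T_max=40, T_regen=30)
```

With $T_{regen}<T_{max}$ an ignited cell can remain occupied, as described above, and increasing $t=T_{max}/T_{regen}$ moves the sketch toward the critical regime.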
First we consider the probability density $p(s)$ of the tree cluster sizes [@cluster] simulated for different parameters for the different models. It is well known that the correlation length in the DS model (as measured by the cut-off $s_c$ in $p(s)$) increases as the critical point is approached by decreasing $p$, $f$ and $f/p$ [@DroSchff]. There is a corresponding increase in the power law regime for the cluster distribution in the auto-ignition model as $p$ is decreased and $T_{max}$ is increased [@autoigff]. The scaling behaviour of the cut-off $s_c$ is difficult to ascertain due to the limited range of data available, but seems to be of the form $\ln(s_c)\sim pT_{max}$, although we cannot exclude an algebraic dependence of the form $s_c\sim (pT_{max})^a$, with $a\simeq 6$. Fig. \[regenCl\] shows scaling plots for the regen model, and we see that here too the cut-off $s_c$ scales with the increasing ratio $t=T_{max}/T_{regen}$. We have approximately $\ln(s_c)\sim T_{max}$, though again the relation may be algebraic. The conclusion is that all three models approach a critical state described by the [*same*]{} power law $p(s)\sim s^{-\tau}$ with $\tau\simeq 2$. One expects the power law observed in the cluster size distribution to be reflected in power laws for spatial correlation functions. It is particularly interesting to study the age-age correlation function: $$C(r) = \langle T({\bf r}+{\bf r}_0)T({\bf r}_0)\rangle - \langle T({\bf r}_0)\rangle^2 \label{correlation}$$ This correlation function was never studied for the DS FF because the model does not consider the age $T({\bf r})$ explicitly. In Fig. \[T-Tfig\] we show the behavior of the age-age correlation function in the regen and DS models. As usual it is difficult to obtain a substantial power law region because of finite size limitations.
Nevertheless it is clear that $C(r)$ does exhibit power law dependence on $r$ and we find $C(r)\sim r^{-\eta}$ with $\eta\simeq 0.32, 0.21$ and $0.23$ for the regen, auto-ignition and DS models respectively. Interestingly, the same correlation function for empty sites (which have negative $T$ in the regen model) is also a power law, with $\eta\simeq 0.13$. Let us now turn to the temporal characteristics of the models. In Fig. \[agefig\] we show that the probability distribution of the ages of the trees has a very similar form for all three models. All are broad and exponential in character. Since it is a microscopic property, it is not surprising that there is some variation between the models. This variation may also be the reason for the different exponents in the age-age correlation functions mentioned above. It is remarkable that the DS FF exhibits a cut-off in the age distribution which is nearly as sharp as the cut-off in the two threshold models. This shows that the stochastic ignition process in the DS model, characterized by the lightning probability $f$, can be replaced in surprising detail by the deterministic age threshold. The collective temporal behaviour is represented by the power spectrum of the time variation of the total number of trees on the lattice. In Fig. \[nTfig\] these power spectra are shown for the DS and regen models (again, the power spectrum for the auto-ignition model is nearly identical). Our most surprising result is that the deterministic regeneration model has nearly the same power spectrum as the two other models, particularly in the light of the differences in the age profiles above. The equivalence between the three models allows us to think of the probabilistic growth and lightning in the DS FF model as effectively acting as thresholds. Qualitatively one can readily see that the probabilistic nature of the growth and the lightning can be interpreted as a kind of rigidity.
Namely, an empty site has a rigidity against being turned into a tree, characterized by $1/p$. A tree has a rigidity against fire, in the sense that a tree only catches fire when it is nearest neighbor to a fire or when hit by lightning. [*Discussion –* ]{} We now discuss the relationship between the regen model presented above and other SOC models. Our regen model is similar to the deterministic model introduced by Chen, Bak and Jensen [@detff]. The crucial difference, however, is that in the previous model the ratio $T_{regen}/T_{max}$ - which must be decreased to move closer to the critical point and obtain scale free behavior - is effectively held fixed at a finite value, and hence the model does not allow one to approach the critical state. The regen model has several features in common with the sandpile and earthquake models. It is similar to both sets of models in that the intrinsic dynamics is entirely deterministic and controlled by thresholds. The model is uniformly driven like the OFC earthquake model [@OFCmodel], and moreover, our deterministic FF model is genuinely non-conservative. It is worth noting that distributing the increase in $T$ randomly in a limited number of portions (rather than equally across all trees) each time step was found to destroy the criticality as the size of the portions increased. In one important respect our model is more similar to the BTW sandpile model than to the OFC model. Namely, when a site suffers relaxation (a tree catches fire) a fixed amount $T_{regen}$ is subtracted from the dynamical variable of that site. The same happens in the BTW model. In the OFC model, on the other hand, the dynamical variable of a relaxing site is reset to zero. This property has been argued to allow for a marginal synchronisation in the model and hence to be responsible for the OFC model’s ability, in contrast to the BTW model, to remain critical even in the non-conservative regime [@middleton].
Seen in this context the deterministic FF model presented here constitutes a very interesting mix of features from the BTW and OFC models. Our regen FF model is non-conservative and uniformly driven, and though the microscopic update does not support a marginal synchronisation, the model nevertheless exhibits the same scale free behavior as the DS FF. This gives a direct link between the SOC behaviour of the BTW, OFC and DS FF models, each of which is commonly assumed to be representative of different and distinct classes of SOC models (e.g. in [@uniMFpic]). Furthermore, the change in the mechanism for the renewal of the forest (from a probability for growth to a time for regeneration) and the resultant sandpile-like picture allows the identification of $p$ with a dissipation parameter (in terms of the subtraction of $T_{regen}$ on ignition) rather than as a driving parameter. This is quite contrary to the normally held and most obvious view - for the DS FF - that $p$ is the driving parameter (creating trees in the system), and that if anything $f$ controls the dissipation (the complete combustion of trees into empty sites). If this is so, we can speculate that it may be possible to relate the physical limits for critical behaviour in the BTW sandpile, $h,h/\epsilon\rightarrow0$ (where $h$ is the driving rate and $\epsilon$ the dissipation), and - recalling the qualitatively reciprocal relationships between $f$, $p$, $T_{max}$ and $T_{regen}$ noted earlier - those of the DS and regen forest-fire models, $f,f/p\rightarrow0$ and $1/T_{max},T_{regen}/T_{max}\rightarrow0$. The main difference between the deterministic FF model and the sandpile and earthquake models is that the dynamical variable $T$ is [*not*]{} transported to neighboring sites when a site relaxes and that the threshold exists only for the initiation and not the propagation of avalanches.
This difference can be summarized as the FF model being a model of two coupled fields, fires and trees, whereas the sandpile and earthquake models contain one self-coupled field, the energy of a site. Another difference is that the thresholds of the deterministic FF model must be tuned (to infinity) for the model to approach the critical regime. The reason for this is that the thresholds relate directly to the rate of driving in the model. The sandpile and earthquake models are different in that the SOC limit of slow driving can be reached without a tuning of the thresholds. Finally, we note that the regen model is critical with periodic boundary conditions (in contrast to the BTW and OFC models) and without external driving (unlike the DS model), and is therefore the only system which can be said to be completely self-contained. [*Conclusion –*]{} We have demonstrated that the stochastic Drossel-Schwabl forest-fire model can be turned into a deterministic threshold model without changing any of the collective statistical measures of the system in a significant way. The model greatly illuminates the relationship between different types of SOC models. [*Acknowledgements –*]{} HJJ is supported by EPSRC under contract GR/L12325. PSR is the recipient of an EPSRC PhD studentship. We would like to thank Barbara Drossel and Kim Christensen for helpful discussion and insight. [99]{} P. Bak, [*How Nature Works*]{}, Oxford University Press (1997). H. J. Jensen, [*Self-Organized Criticality*]{}, Cambridge University Press (1998). P. Bak, C. Tang, and K. Wiesenfeld, Phys. Rev. Lett. [**59**]{}, 381 (1987). P. Bak, C. Tang, and K. Wiesenfeld, Phys. Rev. A [**38**]{}, 364 (1988). Z. Olami, H. J. S. Feder, and K. Christensen, Phys. Rev. Lett. [**68**]{}, 1244 (1992). B. Drossel and F. Schwabl, Phys. Rev. Lett. [**69**]{}, 1629 (1992). P. Bak, K. Chen, and C. Tang, Physics Letters A [**147**]{}, 297 (1990). P. Grassberger and H. Kantz, J. Stat. Phys. [**63**]{}, 685 (1991). H.-M.
Bröker and P. Grassberger, Phys. Rev. E [**56**]{}, R4918 (1997). K. Chen, P. Bak, and M. H. Jensen, Physics Letters A [**149**]{}, 207 (1990). J. E. S. Socolar, G. Grinstein, and C. Jayaprakash, Phys. Rev. E [**47**]{}, 2366 (1993). S. Clar, B. Drossel, and F. Schwabl, J. Phys.: Condensed Matter [**8**]{}, 6803 (1996). P. Sinha-Ray and H. J. Jensen, to be published. H. J. Jensen, Phys. Rev. Lett. [**64**]{}, 3103 (1990); J. V. Andersen, H. J. Jensen, and O. G. Mouritsen, Phys. Rev. B [**44**]{}, 429 (1991); G. Grinstein, T. Hwa, and H. J. Jensen, Phys. Rev. A [**45**]{}, R559 (1992). Tree clusters are defined as groups of sites occupied by trees and all connected via [*nearest*]{} neighbor links. A. A. Middleton and C. Tang, Phys. Rev. Lett. [**74**]{}, 742 (1995); A. Corral, C. J. Perez, A. Diaz-Guilera, and A. Arenas, Phys. Rev. Lett. [**74**]{}, 118 (1995). A. Vespignani and S. Zapperi, Phys. Rev. E [**57**]{}, 6345 (1998).
--- abstract: 'Ensembles of alkali or noble-gas atoms at room temperature and above are widely applied in quantum optics and metrology owing to their long-lived spins. Their collective spin states maintain nonclassical nonlocal correlations, despite the atomic thermal motion in the bulk and at the boundaries. Here we present a stochastic, fully-quantum description of the effect of atomic diffusion in these systems. We employ the Bloch-Heisenberg-Langevin formalism to account for the quantum noise originating from diffusion and from various boundary conditions corresponding to typical wall coatings, thus modeling the dynamics of nonclassical spin states with spatial inter-atomic correlations. As examples, we apply the model to calculate spin noise spectroscopy, temporal relaxation of squeezed spin states, and the coherent coupling between two spin species in a hybrid system.' author: - Roy Shaham - Or Katz - Ofer Firstenberg bibliography: - 'diffusion\_bib.bib' title: Quantum Dynamics of Collective Spin States in a Thermal Gas --- Introduction\[sec:Introduction\] ================================ Gaseous spin ensembles operating at room temperature and above have attracted much interest for decades. At ambient conditions, alkali-metal vapors and odd isotopes of noble gases exhibit long spin-coherence times, ranging from milliseconds to hours [@Happer1972OPIntro; @Happer1977SERF; @katz2018storage1sec; @balabas2010minutecoating; @walker1997SEOPReview; @Walker2017He3review]. These spin ensembles, consisting of a macroscopic number of atoms, are beneficial for precision sensing, searches for new physics, and demonstration of macroscopic quantum effects [@Happer2010book; @Brown2010RomalisCPTviolation; @Sheng2013RomalisSubFemtoTesla; @Budker2007OpticalMagnetometryI; @Budker2013OpticalMagnetometryII; @Crooker2004SNSmagnetometerNature; @ItayAxions2019arxiv].
In particular, manipulations of collective spin states allow for demonstrations of basic quantum phenomena, including entanglement, squeezing, and teleportation [@Julsgaard2001PolzikEntanglement; @Sherson2006PolzikTeleportationDemo; @Jensen2011PolzikSqueezingStorage; @Polzik2010ReviewRMP] as well as storage and generation of photons [@Eisaman2005LukinSinglePhoton; @Peyronel2012LukinInteractingSinglePhotons; @Gorshkov2011LukinRydbergBlockadePhotonInteractions; @Borregaard2016SinglePhotonsOnMotionallyAveragedMicrocellsNcomm]. It is the collectively-enhanced coupling and the relatively low noise offered by these spin ensembles that make them particularly suitable for metrology and quantum information applications. Thermal atomic motion is an intrinsic property of the dynamics in gaseous systems. Gas-phase atoms, in low-pressure, room-temperature systems, move at hundreds of meters-per-second in ballistic trajectories, crossing the cell at sub-millisecond timescales and interacting with its boundaries. To suppress wall collisions, buffer gas is often introduced, which renders the atomic motion diffusive via velocity-changing collisions [@Kastler1957OPreview]. At the theory level, the effect of diffusion on the mean spin has been extensively addressed, essentially by describing the evolution of an impure (mixed) spin state in the cell using a mean-field approximation [@MasnouSeeuwsBouchiat1967diffusion; @WuHapper1988CoherentCellWallTheory; @Li2011RomalisMultipass; @Firstenberg2007DickeNarrowingPRA; @Firstenberg2010CoherentDiffusion; @XiaoNovikova2006DiffusionRamsey]. This common formalism treats the spatial dynamics of an average atom in any given position using a spatially-dependent density matrix. It accurately captures the single-atom dynamics but neglects both inter-atomic correlations and thermal fluctuations associated with the spin motion and collisions. 
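For the spin expectation value, the mean-field treatment referred to above amounts to a diffusion equation, $\partial_t\langle\mathbf{s}\rangle=D\nabla^{2}\langle\mathbf{s}\rangle$. A minimal one-dimensional finite-difference sketch of this mean-field evolution follows; the geometry, grid, parameter values, and the assumption of ideal spin-preserving (no-flux) walls are our own illustrations:

```python
import numpy as np

def diffuse_mean_spin(s, D, dx, dt, steps):
    """Explicit Euler integration of ds/dt = D d2s/dx2 on a 1D grid
    with no-flux (Neumann) boundaries, modelling ideal spin-preserving
    walls. Stability of the explicit scheme requires D*dt/dx**2 <= 0.5."""
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable"
    for _ in range(steps):
        padded = np.pad(s, 1, mode="edge")   # zero-gradient (no-flux) walls
        s = s + r * (padded[2:] - 2 * s + padded[:-2])
    return s

x = np.linspace(-0.5, 0.5, 101)
s0 = np.exp(-(x / 0.1) ** 2)                 # localized spin excitation
s1 = diffuse_mean_spin(s0, D=0.01, dx=x[1] - x[0], dt=1e-3, steps=500)
```

With no-flux walls the total spin is conserved while the excitation spreads out, which is the mean-field redistribution described in the text.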
Non-classical phenomena involving collective spin states, such as transfer of quantum correlations between non-overlapping light beams by atomic motion [@Xiao2019MultiplexingSqueezedLightDiffusion; @Bao2019SpinSqueezing; @bao2016spinSqueezing], call for a quantum description of the thermal motion. For spin-exchange collisions, which are an outcome of thermal motion, such a quantum description has received much recent attention [@Kong2018MitchellAlkaliSEEntanglement; @weakcollisions2019arxiv; @AlkaliNobleEntanglementKatz2020PRL; @Dellis2014SESpinNoisePRA; @Mouloudakis2019SEwavefunctionUnraveling; @Mouloudakis2020SEbipartiteEntanglement; @Vasilakis2011RomalisBackactionEvation]. However, the more direct consequences of thermal motion, namely the stochasticity of the spatial dynamics in the bulk and at the system’s boundaries, still lack a proper fully-quantum description. In this paper, we describe the effect of spatial diffusion on the quantum state of warm spin gases. Using the Bloch-Heisenberg-Langevin formalism, we identify the dissipation and noise associated with atomic thermal motion and with the scattering off the cell boundaries. Significant existing work in this field relies primarily on mean-field models, which address both wall coupling [@seltzer2009RomalishighTcoating] and diffusion in unconfined systems [@Lucivero2017RomalisDiffusionCorrelation]. The latter work derives the correlation function of diffusion-induced quantum noise from the correlation function of mass diffusion in unconfined systems. Here we derive the quantum noise directly from Brownian-motion considerations and provide a solution for confined geometries. Our model generalizes the mean-field results and enables the description of inter-atomic correlations and collective quantum states of the ensemble.
We apply the model to highly-polarized spin vapor and analyze the effect of diffusion in various conditions, including spin noise spectroscopy [@Sinitsyn2016SpinNoiseSpectroscopySNSreview; @Katsoprinakis2007SpinNoiseRelaxation; @Crooker2004SNSmagnetometerNature; @Crooker2014PRLspectroscopy; @Lucivero2016MitchellSpinNoiseSpectrscopySqueezedLight; @Lucivero2017MitchellNoiseSpectroscopyFundumentals], spin squeezing [@Kong2018MitchellAlkaliSEEntanglement; @Julsgaard2001PolzikEntanglement], and coupling of alkali to noble-gas spins in the strong coupling regime [@weakcollisions2019arxiv; @AlkaliNobleEntanglementKatz2020PRL]. The paper is arranged as follows. We derive in Sec. \[sec:Model\] the Bloch-Heisenberg-Langevin model for the evolution of the collective spin operator due to atomic Brownian motion and cell boundaries. We focus on highly-polarized ensembles in Sec. \[sec:Polarized-ensemb\] and provide the model solutions. In Sec. \[sec:Applications\], we present several applications of our model. We discuss how it is employed to describe the temporal evolution, to calculate experimental results, to provide insight, and to optimize setups for specific tasks. Limits of our model and differences from existing models, as well as future prospects, are discussed in Sec. \[sec:Discussion\]. We provide appendices that elaborate on (\[sec:diffusion-noise\]) the quantum noise produced by thermal motion, (\[sec:wall-coupling-toy\]) a simplified model for analyzing the scattering off the cell walls, (\[sec:solution-of-diffusion-relaxation\]) means of solving the Bloch-Heisenberg-Langevin equation, and (\[sec:Faraday-rotation-measurement\]) the Faraday rotation scheme used herein. ![(a) Atomic spins in the gas phase, comprising a collective quantum spin $\mathbf{\hat{s}}(\mathbf{r},t)$ and undergoing thermal motion. (b) In the diffusive regime, the spins spatially redistribute via frequent velocity-changing collisions. 
(c) Collisions (local interaction) with the walls of the gas cell may fully or partially depolarize the spin state. (d) Diffusion and wall collisions lead to a multimode evolution, here exemplified for a spin excitation $\hat{a}(\mathbf{r},t)\propto\hat{s}_{x}(\mathbf{r},t)-i\hat{s}_{y}(\mathbf{r},t)$ with an initial Gaussian-like spatial distribution $\langle\hat{a}(\mathbf{r},t)\rangle$ and for destructive wall collisions. In addition to the mode-specific decay $\Gamma_{n}$, each spatial mode accumulates mode-specific quantum noise $\mathcal{\hat{W}}_{n}(t)$.\[fig:diffusion-illustration\]](illustration7){width="1\columnwidth"} Model\[sec:Model\] ================== Consider a warm ensemble of $N_{\mathrm{a}}$ atomic spins confined in a cell, as illustrated in Fig. \[fig:diffusion-illustration\]a. Let $\mathbf{r}_{a}(t)$ be the classical location of the $a^{\text{th}}$ atom at time $t$ and define the single-body density function at some location $\mathbf{r}$ as $n_{a}(\mathbf{r})=\delta(\mathbf{r}-\mathbf{r}_{a}(t))$. We denote the spin operator of the $a^{\text{th}}$ atom by $\mathbf{\hat{s}}_{a}$ and define the space-dependent collective spin operator as $\mathbf{\hat{s}}(\mathbf{r},t)=\sum_{a=1}^{N_{\mathrm{a}}}\mathbf{\hat{s}}_{a}n_{a}(\mathbf{r})$. While formally $\mathbf{\hat{s}}(\mathbf{r},t)$ is sparse and spiked, practical experiments address only its coarse-grained properties, *e.g.*, due to finite spatial scale of the employed optical or magnetic fields. 
The time evolution of the collective spin operator is given by $$\frac{\partial\hat{\mathbf{s}}}{\partial t}=\sum_{a=1}^{N_{\mathrm{a}}}\frac{\partial\hat{\mathbf{s}}_{a}}{\partial t}n_{a}+\hat{\mathbf{s}}_{a}\frac{\partial n_{a}}{\partial t}.\label{eq:spin-equation}$$ Here the first term accounts for the internal degrees of freedom, including the local Hamiltonian evolution of the spins and spin-spin interactions, while the second term accounts for the external degrees of freedom, namely for motional effects. The focus of this paper is on the second term, considered in the diffusion regime as illustrated in Fig.$\,$\[fig:diffusion-illustration\]b. We consider the first term only for its contribution to the boundary conditions, via the effect of wall collisions as illustrated in Fig.$\,$\[fig:diffusion-illustration\]c. In the following, we first derive the equations governing the quantum operator $\mathbf{\hat{s}}(\mathbf{r},t)$ in the bulk and subsequently introduce the effect of the boundaries. Diffusion in the bulk\[subsec:Thermal-motion\] ---------------------------------------------- We consider the limit of gas-phase atoms experiencing frequent, spin-preserving, velocity-changing collisions, such as those characterizing a dilute alkali vapor in an inert buffer-gas. In this limit, the atomic motion is diffusive, and the local density evolution can be described by the stochastic differential equation [@dean1996langevinDiffusion] $$\partial n_{a}/\partial t=D\nabla^{2}n_{a}+\boldsymbol{\nabla}(\boldsymbol{\eta}\sqrt{n_{a}}),\label{eq:Dean-diffusion}$$ where $D$ is the diffusion coefficient, and $\boldsymbol{\eta}$ is a white Gaussian stochastic process whose components satisfy $\langle\eta_{i}(\mathbf{r},t)\eta_{j}(\mathbf{r}',t')\rangle_{\mathrm{c}}=2D\delta_{ij}\delta(\mathbf{r}-\mathbf{r}')\delta(t-t')$ for $i,j=x,y,z$. 
We use $\langle\cdot\rangle_{\mathrm{c}}$ to represent ensemble average over the classical atomic trajectories, differing from the quantum expectation value $\langle\cdot\rangle$. The first term in Eq.$\,$(\[eq:Dean-diffusion\]) leads to delocalization of the atomic position via deterministic diffusion, while the second term introduces fluctuations that localize the atoms to discrete positions. Equation (\[eq:Dean-diffusion\]), derived by Dean for Brownian motion in the absence of long-range interactions [@dean1996langevinDiffusion], is valid under the coarse-grain approximation, when the temporal and spatial resolutions are coarser than the mean-free time and path between collisions. Substituting $\partial n_{a}/\partial t$ into Eq.$\,$(\[eq:spin-equation\]), we obtain the Bloch-Heisenberg-Langevin dynamical equation for the collective spin $$\partial\hat{\mathbf{s}}/\partial t=i[\mathcal{H},\hat{\mathbf{s}}]+D\nabla^{2}\mathbf{\hat{s}}+\hat{\boldsymbol{f}}.\label{eq:spin-diffusion-equation}$$ Here $\mathcal{H}$ is the spin Hamiltonian in the absence of atomic motion, originating from the $\partial\hat{\mathbf{s}}_{a}/\partial t$ term in Eq.$\,$(\[eq:spin-equation\]). The quantum noise operator $\hat{\boldsymbol{f}}=\hat{\boldsymbol{f}}(\mathbf{r},t)$ is associated with the local fluctuations of the atomic positions. It can be formally written as $\hat{f}_{\mu}=\boldsymbol{\nabla}(\hat{\mathrm{s}}_{\mu}\boldsymbol{\eta}\slash\sqrt{n})$, where $\mu=x,y,z$, and $n=\sum_{a}n_{a}$ is the atomic density. The noise term has an important role in preserving the mean spin moments of the ensemble.
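Equation (\[eq:Dean-diffusion\]) coarse-grains an ensemble of Brownian trajectories. As a small numerical consistency sketch (one-dimensional, unconfined, with illustrative parameters of our own choosing), the walkers' mean-squared displacement should follow the free-diffusion law $\langle x^{2}\rangle=2Dt$:

```python
import numpy as np

# Brownian trajectories underlying the stochastic density n(x, t):
# each walker takes Gaussian steps of variance 2*D*dt per time step.
rng = np.random.default_rng(1)
D, dt, steps, n_atoms = 0.02, 2e-3, 1000, 20_000

x = np.zeros(n_atoms)
for _ in range(steps):
    x += np.sqrt(2 * D * dt) * rng.normal(size=n_atoms)

t = steps * dt
msd = x.var()   # free diffusion: <x^2> = 2 D t = 0.08 here
```

Histogramming such trajectories at the coarse-grained resolution reproduces both the deterministic spreading and the localizing fluctuations described by the two terms of Eq.$\,$(\[eq:Dean-diffusion\]).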
The commutation relation of different instances of the noise $\hat{f}_{\mu}=\hat{f}_{\mu}(\mathbf{r},t)$ and $\hat{f}_{\nu}'=\hat{f}_{\nu}(\mathbf{r}',t')$ satisfies $$\langle[\hat{f}_{\mu},\hat{f}_{\nu}']\rangle_{\mathrm{c}}=2i\epsilon_{\xi\mu\nu}D(\boldsymbol{\nabla}\boldsymbol{\nabla}')\hat{\mathrm{s}}_{\xi}\delta(\mathbf{r}-\mathbf{r}')\delta(t-t'),\label{eq:diffusion-noise-commutation-relation}$$ where $\epsilon_{\xi\mu\nu}$ is the Levi-Civita antisymmetric tensor. These commutation relations ensure the conservation of spin commutation relations $[\hat{\mathrm{s}}_{\mu}(\mathbf{r},t),\hat{\mathrm{s}}_{\nu}(\mathbf{r}',t)]=i\epsilon_{\xi\mu\nu}\hat{\mathrm{s}}_{\xi}\delta(\mathbf{r}-\mathbf{r}')$ on the operator level, compensating for the diffusion-induced decay in the bulk due to the $D\nabla^{2}$ term. We provide the full derivation of $\hat{\boldsymbol{f}}$ and its properties in Appendix \[sec:diffusion-noise\]. The spin noise is stronger at smaller length scales, where the diffusion-induced dissipation is faster; this is a manifestation of the fluctuation-dissipation theorem. Finally, as expected, ensemble averaging over the noise realizations leaves only the diffusion term in the mean-field Bloch equation for the spin $\partial\langle\mathbf{s}\rangle/\partial t=D\nabla^{2}\langle\mathbf{s}\rangle$, where $\langle\mathbf{s}\rangle=\langle\hat{\mathbf{s}}(\mathbf{r},t)\rangle$ is the spin expectation value at a coarse-grained position $\mathrm{\boldsymbol{r}}$. Boundary conditions\[subsec:Cell-walls\] ---------------------------------------- We now turn to derive the contribution of wall collisions to the quantum dynamics of the collective spin. When the atoms diffuse to the boundaries of the cell, their spin interacts with the surface of the walls.
This interaction plays an important role in determining the depolarization and decoherence times of the total spin [@Kastler1957OPreview; @Happer2010book] and may also induce frequency shifts [@Volk1979CoherentWalls; @Kwon1981CoherentXe131Wall; @simpson1978CellWallNMR; @Wu1987CoherentCellWallExp; @WuHapper1988CoherentCellWallTheory]. Bare glass strongly depolarizes alkali atoms, and magnetic impurities in the glass affect the nuclear spin of noble-gas atoms. To attenuate the depolarization at the walls, cells can be coated with spin-preserving coatings such as paraffin [@Alexandrov2002LightInducedDesorptionParaffin; @Graf2005paraffinPRA; @balabas2010minutecoating] or OTS [@seltzer2009RomalishighTcoating] for alkali vapor and Surfasil or SolGel [@Driehuys1993HapperSurfasilPRA; @Driehuys1995HapperSurfasilPRL; @Breeze1999Surfasil; @Hsu2000SolGelCoating] for spin-polarized xenon. The coupling between the spins and the cell walls constitutes the formal boundary conditions of Eq. (\[eq:spin-diffusion-equation\]). In the mean-field picture, the wall coupling can be described as a local scatterer for the spin density-matrix $\rho$. In this picture, assisted by kinetic gas theory, the boundary conditions can be written as [@MasnouSeeuwsBouchiat1967diffusion] $$(1+\tfrac{2}{3}\lambda\hat{\mathbf{n}}\cdot\boldsymbol{\nabla})\rho=(1-\tfrac{2}{3}\lambda\hat{\mathbf{n}}\cdot\boldsymbol{\nabla})\mathcal{S}\rho,\label{eq:mean-field-boundary}$$ where $\mathcal{S}$ is the wall scattering matrix. Here $\lambda$ denotes the mean free path of the atoms, related to the diffusion coefficient via $D=\lambda\bar{v}/3$, where $\bar{v}$ is the mean thermal velocity. We adopt a similar perspective in order to derive the coupling of the collective spin $\hat{\mathbf{s}}$ with the walls in the Bloch-Heisenberg-Langevin formalism. In this formalism, the scattering off the walls introduces not only decay, but also fluctuations. 
In the Markovian limit, when each scattering event is short, its action on a single spin is described by the stochastic scattering relation $$\mathcal{S}\hat{\mathbf{s}}_{a}=e^{-1/N}\hat{\mathbf{s}}_{a}+\hat{\boldsymbol{w}}_{a}.\label{eq:wall-scattering}$$ Here $N$ denotes the average number of wall collisions a spin withstands before depolarizing [@seltzer2009RomalishighTcoating]. The accompanying quantum noise process is $\hat{\boldsymbol{w}}_{a}$; it ensures the conservation of spin commutation relations at the boundary. Using the stochastic scattering matrix, we generalize the mean-field boundary condition [\[]{}Eq. (\[eq:mean-field-boundary\])[\]]{} for collective spin operators as $$(1-e^{-1/N})\hat{\mathbf{s}}+\tfrac{2}{3}\lambda(1+e^{-1/N})(\hat{\mathbf{n}}\cdot\boldsymbol{\nabla})\hat{\mathbf{s}}=\hat{\boldsymbol{w}}.\label{eq:boundary-condition}$$ Here $\hat{\boldsymbol{w}}(\mathbf{r},t)=\sum_{a}\hat{\boldsymbol{w}}_{a}n_{a}$, for positions $\mathbf{r}$ on the cell boundary, is the collective wall-coupling noise process affecting the local spin on the wall. $\hat{\boldsymbol{w}}$ is zero on average, and its statistical properties, together with the derivation of Eq.$\,$(\[eq:wall-scattering\]), are discussed in Appendix \[sec:wall-coupling-toy\]. The first term in Eq. (\[eq:boundary-condition\]) describes the fractional depolarization by the walls, and the second term describes the difference between the spin flux entering and exiting the wall. If the wall coupling also includes a coherent frequency-shift component, it can be added to these terms accordingly. The term on the right-hand side describes the associated white fluctuations. In the limit of a perfect spin-preserving coating, the boundary condition becomes a no-flux (Neumann) condition satisfying $(\hat{\mathbf{n}}\cdot\boldsymbol{\nabla})\hat{\mathbf{s}}=0$, and depolarization is minimized. This limit is realized for $N\gg R/\lambda$, where $R$ is the dimension of the cell$\,$[^1] [@Happer2010book].
In the opposite limit of strongly depolarizing walls, i.e. $N\lesssim1$, the (Dirichlet) boundary condition is $\hat{\mathbf{s}}=\hat{\boldsymbol{w}}/(1-e^{-1/N})$ [^2], rendering the scattered spin state random. For any other value of $N$ (partially-depolarizing walls), the boundary condition in Eq. (\[eq:boundary-condition\]) is identified as a stochastic Robin boundary condition [@ozisik2002boundarybook]. The two mechanisms discussed in this section — the bulk diffusion and the wall coupling — are independent physical processes. This is evident from the different parameters characterizing them — $D$ and $N$ — which are dictated by different physical scales, such as the buffer-gas pressure and the quality of the wall coating. The two processes also differ in nature: wall coupling leads to spin depolarization and thermalization, whereas diffusion redistributes the spin while conserving the total spin. They introduce independent fluctuations and dissipation, and they affect the spins in different spatial domains (the bulk and the boundary). That being said, both processes are necessary to describe the complete spin dynamics in a confined volume, simultaneously satisfying Eqs.$\,$(\[eq:spin-diffusion-equation\]) and (\[eq:boundary-condition\]). Polarized ensembles\[sec:Polarized-ensemb\] =========================================== When discussing non-classical spin states for typical applications, it is beneficial to consider the prevailing limit of highly-polarized ensembles. Let us assume that most of the spins point downwards ($-\hat{\mathbf{z}}$). In this limit, we follow the Holstein-Primakoff transformation [@Holstein1940PrimakoffHP; @kittel1987forHPapprox] and approximate the longitudinal spin component by its mean value $\mathbf{\hat{\mathrm{s}}}_{z}(\mathbf{r},t)=\mathbf{\mathrm{s}}_{z}$ (with $\mathrm{s}_{z}=-n/2$ for spin 1/2).
The ladder operator $\hat{\mathrm{s}}_{-}=\hat{\mathrm{s}}_{x}-i\hat{\mathrm{s}}_{y}$, which flips a single spin downwards at position $\mathbf{r}$, can be represented by the annihilation operator $\hat{a}=\hat{\mathrm{s}}_{-}/\sqrt{2|\mathrm{s}_{z}|}$. This operator satisfies the bosonic commutation relations $[\hat{a}(\mathbf{r},t),\hat{a}^{\dagger}(\mathbf{r}',t)]=\delta(\mathbf{r}-\mathbf{r}')$. Under these transformations, Eqs. (\[eq:spin-diffusion-equation\]) and (\[eq:boundary-condition\]) become $$\begin{aligned} \partial\hat{a}/\partial t & =i[\mathcal{H},\hat{a}]+D\nabla^{2}\hat{a}+\hat{f},\label{eq:HP heisenberg langevin}\\ (1-e^{-1/N})\hat{a} & =-\tfrac{2}{3}\lambda(1+e^{-1/N})\hat{\mathbf{n}}\cdot\boldsymbol{\nabla}\hat{a}+\hat{w},\label{eq:HP boundary condition}\end{aligned}$$ where both $\hat{f}=(\hat{f}_{x}-i\hat{f}_{y})/\sqrt{2|\mathbf{\mathrm{s}}_{z}|}$ and $\hat{w}=(\hat{w}_{x}-i\hat{w}_{y})/\sqrt{2|\mathbf{\mathrm{s}}_{z}|}$ are now vacuum noise processes (see Appendices \[sec:diffusion-noise\] and \[sec:wall-coupling-toy\]; note that $\hat{f}$ is spatially colored). Here, Eq. (\[eq:HP heisenberg langevin\]) describes the spin dynamics in the bulk, while Eq.$\,$(\[eq:HP boundary condition\]) holds at the boundary. We solve Eqs. (\[eq:HP heisenberg langevin\]) and (\[eq:HP boundary condition\]) by decomposing the operators into a superposition of non-local diffusion modes $\hat{a}(\mathbf{r},t)=\sum_{n}\hat{a}_{n}(t)u_{n}(\mathbf{r})$. We first identify the mode functions $u_{n}(\mathbf{r})$ by solving the homogeneous Helmholtz equation $(D\nabla^{2}+\Gamma_{n})u_{n}(\mathbf{r})=0$, where the eigenvalues $-\Gamma_{n}$ are fixed by the Robin boundary condition [\[]{}Eq. (\[eq:HP boundary condition\]) without the noise term[\]]{}.
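As a concrete illustration, the eigenvalues $\Gamma_{n}$ can be obtained numerically. Below is a minimal sketch for the symmetric modes $u_{n}(x)=\cos(k_{n}x)$ of a one-dimensional slab of width $L$; the function name and the bisection solver are our own choices, and only the Robin condition of Eq. (\[eq:boundary-condition\]) is taken from the text.

```python
import math

def robin_mode_rates(D, lam, N, L, n_modes=4):
    """Decay rates Gamma_n = D k_n^2 of the symmetric diffusion modes
    u_n(x) = cos(k_n x) in a 1D slab |x| <= L/2.  The wavenumbers solve
    the noiseless Robin condition of Eq. (boundary-condition),
      (1 - e^{-1/N}) cos(k L/2) = (2 lam/3)(1 + e^{-1/N}) k sin(k L/2),
    i.e. tan(x) = a L / (2 x) with x = k L/2 and
      a = 3 (1 - e^{-1/N}) / [2 lam (1 + e^{-1/N})]."""
    a = 3.0 * (1.0 - math.exp(-1.0 / N)) / (2.0 * lam * (1.0 + math.exp(-1.0 / N)))

    def g(x):
        return math.tan(x) - a * L / (2.0 * x)

    rates = []
    for n in range(n_modes):
        # one root per branch of tan(x), inside (n*pi, n*pi + pi/2)
        lo, hi = n * math.pi + 1e-9, n * math.pi + math.pi / 2.0 - 1e-9
        for _ in range(200):  # bisection; g(lo) < 0 < g(hi)
            mid = 0.5 * (lo + hi)
            if g(mid) < 0.0:
                lo = mid
            else:
                hi = mid
        k = 2.0 * lo / L
        rates.append(D * k * k)
    return rates
```

In the depolarizing limit ($N\lesssim1$, short $\lambda$) the roots approach the Dirichlet values $k_{n}=(2n+1)\pi/L$, while for a high-quality coating ($N\gg L/\lambda$) the lowest rate $\Gamma_{0}$ collapses toward zero, recovering the Neumann (no-flux) limit discussed above.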
The operator $\hat{a}_{n}(t)=\int_{V}\hat{a}(\mathbf{r},t)u_{n}^{\ast}(\mathbf{r})d^{3}\mathbf{r}$, where $V$ is the cell volume, annihilates a collective transverse spin excitation with a nonlocal distribution $|u_{n}(\mathbf{r})|^{2}$ and a relaxation rate $\Gamma_{n}$. These operators satisfy the bosonic commutation relation $[\hat{a}_{n},\hat{a}_{m}^{\dagger}]=\delta_{nm}$. The noise terms $\hat{f}$ and $\hat{w}$ are decomposed using the same mode-function basis. This leads to mode-specific noise terms $\mathcal{\hat{W}}_{n}(t)$, operating as independent sources. Assuming, for the sake of example, a magnetic (Zeeman) Hamiltonian $\mathcal{H}=\omega_{0}\hat{\mathrm{s}}_{z}$, where $\omega_{0}$ is the Larmor precession frequency around a $\hat{\mathbf{z}}$ magnetic field, the time evolution of the mode operators is given by $$\hat{a}_{n}(t)=\hat{a}_{n}(0)e^{-(i\omega_{0}+\Gamma_{n})t}+\mathcal{\hat{W}}_{n}(t).\label{eq:eigenmode-time-evolution}$$ The multimode decomposition and evolution are illustrated in Fig.$\,$\[fig:diffusion-illustration\]d, showing the first angular-symmetric mode distributions $u_{n}(\mathbf{r})$ of a cylindrical cell. In Table \[tab:Diffusion-mode-solutions\], we provide explicit solutions of the mode bases and associated decay rates for any given boundary properties in rectangular, cylindrical, and spherical cells. The solution procedure and the corresponding decomposition of the noise terms are demonstrated for an exemplary one-dimensional geometry in Appendix \[sec:solution-of-diffusion-relaxation\]. We note that asymptotically, the decay of high-order modes ($n\gg1$) is independent of cell geometry and is approximately given by $\Gamma_{n}\sim D(\pi nV^{-1/3})^{2}$, where $\pi nV^{-1/3}$ approximates the mode’s wavenumber. Applications\[sec:Applications\] ================================ The outlined Bloch-Heisenberg-Langevin formalism applies to various experimental configurations and applications.
It should be particularly useful when two constituents of the same system have different spatial characteristics, leading to different spatial modes. That occurs, for example, when coupling spins to optical fields (Fig.$\,$\[fig:diffusion-illustration\]d) or when mixing atomic species with different wall couplings. In this section, we consider three such relevant, real-life cases. ![image](APN_PSD_v4g){width="1.9\columnwidth"} Spin-noise spectroscopy\[subsec:Noise-spectrum\] ------------------------------------------------ Spin-noise spectroscopy (SNS) allows one to extract physical information from the noise properties of the spin system. It is used for magnetometry with atomic ensembles in or out of equilibrium [@Crooker2004SNSmagnetometerNature; @Crooker2014PRLspectroscopy; @Lucivero2017MitchellNoiseSpectroscopyFundumentals; @Katsoprinakis2007SpinNoiseRelaxation; @Tang2020DiffusionLowPressure_PRA], for low-field NMR [@Tayler2016zeroFieldNMR], for fundamental noise studies aimed at increasing metrological sensitivity [@Sheng2013RomalisSubFemtoTesla; @Crooker2004SNSmagnetometerNature], and more [@Sinitsyn2016SpinNoiseSpectroscopySNSreview]. SNS is also used to quantify inter-atomic correlations in squeezed states, when it is performed with precision surpassing the standard quantum limit [@Julsgaard2001PolzikEntanglement; @Katsoprinakis2007SpinNoiseRelaxation; @Kong2018MitchellAlkaliSEEntanglement]. Spin noise in an alkali vapor is affected by various dephasing mechanisms. Here we describe the effect of diffusion, given a spatially-fixed light beam employed to probe the spins. Since this probe beam may overlap with several spatial modes of diffusion, the measured noise spectrum depends on the beam size, cell dimensions, and diffusion characteristics. On the mean-field level, this effect has been described by motion of atoms in and out of the beam [@Pugatch2009UniversalDiffusion; @XiaoNovikova2006DiffusionRamsey].
Here we calculate the spin-noise spectrum directly from the quantum noise induced by the thermal motion as derived above. For concreteness, we consider two cylindrical cells of radius $R=1$ cm and length $L=3$ cm. One cell contains 100 Torr of buffer gas, providing $\lambda=\unit[0.5]{\mu m}$ and $D=1\,\mathrm{cm^{2}/s}$, and no spin-preserving coating ($N\lesssim1$; *e.g.*, as in Ref. [@Kong2018MitchellAlkaliSEEntanglement]). The other cell has a high-quality paraffin coating, allowing for $N=10^{6}$ wall collisions before depolarization [@balabas2010minutecoating], and only dilute buffer gas originating from outgassing of the coating, such that $\lambda=\unit[1]{mm}$ and $D=3\cdot10^{3}\,\mathrm{cm^{2}/s}$ [@sekiguchi2016JapaneseParaffinOutgassing]. A probe beam with waist radius $w_{0}$ measures the alkali spin $\hat{\mathbf{x}}$ component, oriented along the cylinder axis as presented in Fig.$\,$\[fig:APN-PSD\]a. The cell is placed inside a magnetic field $\mathbf{B}=2\pi f_{0}/g_{\mathrm{a}}\cdot\hat{\mathbf{z}}$ pointing along the spin polarization, where $g_{\mathrm{a}}$ is the alkali gyromagnetic ratio. In Appendix \[sec:Faraday-rotation-measurement\], we review the measurement details and calculate the spin-noise spectral density $S_{xx}(f)$ for both cells $$S_{xx}(f)=\sum_{n}\frac{|I_{n}^{(\mathrm{G})}|^{2}}{4}\frac{2\tilde{P}\Gamma_{n}}{\Gamma_{n}^{2}+4\pi^{2}(f-f_{0})^{2}},\label{eq:APN-PSD}$$ where $f$ is the frequency at which the spectrum is evaluated, $\Gamma_{n}$ is again the decay rate of the $n^{\text{th}}$ diffusion mode, $I_{n}^{(\mathrm{G})}$ is the overlap of the Gaussian probe beam with that mode, and $\tilde{P}$ depends on the spin polarization such that $\tilde{P}=1$ for highly polarized ensembles. The calculated spectra are shown in Figs.$\,$\[fig:APN-PSD\]b and \[fig:APN-PSD\]c for $w_{0}=1$ mm.
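Eq. (\[eq:APN-PSD\]) can be evaluated directly as an overlap-weighted sum of Lorentzians. A minimal sketch, with the mode rates $\Gamma_{n}$ and overlaps $I_{n}^{(\mathrm{G})}$ taken as inputs (the function name is our own):

```python
import math

def spin_noise_psd(f, f0, gammas, overlaps, P_tilde=1.0):
    """Spin-noise spectral density S_xx(f) of Eq. (APN-PSD): one
    Lorentzian per diffusion mode, centered at the Larmor frequency f0,
    with width Gamma_n and weight |I_n^(G)|^2 / 4."""
    return sum(
        abs(I) ** 2 / 4.0
        * 2.0 * P_tilde * G / (G ** 2 + 4.0 * math.pi ** 2 * (f - f0) ** 2)
        for G, I in zip(gammas, overlaps)
    )
```

Each Lorentzian integrates to unity over $f$, so the total noise power is $\tilde{P}\sum_{n}|I_{n}^{(\mathrm{G})}|^{2}/4$; the cusp discussed below arises when modes with widely different $\Gamma_{n}$ carry comparable weight.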
The cusp-like spectra originate from a sum of Lorentzians, whose relative weights correspond to the overlap of the probe beam with each given mode $|I_{n}^{(\mathrm{G})}|^{2}$. In the past, this cusp was identified as a universal phenomenon [@Pugatch2009UniversalDiffusion], while here we recreate this result using the eigenmodes and accounting for the boundary. With spin-preserving coating, the uniform mode $n=0$ decays more slowly, and its contribution to the noise spectrum is much more pronounced, while the higher-order modes decay faster due to the lack of buffer gas. The dominance of the central narrow feature thus depends on the overlap of the probe with the least-decaying mode $|I_{0}^{(G)}|^{2}$. To quantify it, we define the dimensionless noise content $\zeta=\int_{-f_{1/2}}^{f_{1/2}}S_{xx}(f)df$ as the fraction of the noise residing within the FWHM of the spectrum. Figure \[fig:APN-PSD\]d shows $\zeta$ for different beam sizes $w_{0}/R$. Evidently, the spin resonance is more significant in the buffer-gas cell, unless the probe beam covers the entire cell. This should be an important consideration in the design of such experiments. Squeezed-state lifetime\[subsec:Squeezed-state-lifetime\] --------------------------------------------------------- When the spin noise is measured with a sensitivity below the standard quantum limit, the spin ensemble is projected into a collective squeezed spin state. Such measurements are done primarily using optical Faraday rotation in paraffin-coated cells [@Julsgaard2001PolzikEntanglement; @Sherson2006PolzikTeleportationDemo; @Wasilewski2010PolzikEntanglementMagnetometry; @Jensen2011PolzikSqueezingStorage] and recently also in the presence of buffer gas [@Kong2018MitchellAlkaliSEEntanglement]. The duration of the probe pulse and the spatial profile of the probe beam determine the spatial profile of the squeezed spin state and hence its lifetime. We shall employ the same two cells from the previous section.
Given a probe pulse much shorter than $w_{0}^{2}/D$ and assuming the measurement sensitivity surpasses the standard quantum limit, a squeezed state is formed, with initial spin variance $\langle\hat{x}_{\mathrm{G}}^{2}(0)\rangle\le1/4$, where $\hat{x}_{G}(t)$ is the measured spin operator [\[]{}defined in Appendix \[sec:Faraday-rotation-measurement\] as a weighted integral over the local operator $\hat{x}(\mathbf{r},t)$[\]]{}. The state is remeasured (validated) after some dephasing time $t$ (see Fig.$\,$\[fig:multiexpnential-decay-of-squeezing\]a). Using the diffusion modes $u_{n}(\mathbf{r})$ with decay rates $\Gamma_{n}$, we use Eq.$\,$(\[eq:eigenmode-time-evolution\]) to calculate the evolution in the dark of the spin variance $$\langle\hat{x}_{\mathrm{G}}^{2}(t)\rangle=(\sum_{n}|I_{n}^{(G)}|^{2}e^{-\Gamma_{n}t})^{2}(\langle\hat{x}_{\mathrm{G}}^{2}(0)\rangle-1/4)+1/4.\label{eq:squeezing-vs-time}$$ Figures \[fig:multiexpnential-decay-of-squeezing\]b and \[fig:multiexpnential-decay-of-squeezing\]c present the calculated evolution. As expected, a narrow probe beam squeezes a superposition of diffusion modes (the first low-order modes in the buffer-gas cell are visualized in Fig.$\,$\[fig:diffusion-illustration\]d), which leads to a multi-exponential decay. The importance of thermal motion grows as the degree of squeezing increases, as the latter relies on squeezing in higher spatial modes. To see this, we plot in Fig. \[fig:multiexpnential-decay-of-squeezing\]d the decay of squeezing in the buffer-gas cell with a wide probe beam and with the initial state extremely squeezed, $\langle\hat{x}^{2}(\mathbf{r},t=0)\rangle\lll1/4$. The squeezing rapidly decays, as a power law, until only the lowest-order mode remains squeezed. This indicates the practical difficulty in achieving and maintaining a high degree of squeezing. An interesting behavior is apparent for the case of a large beam in a coated cell (Fig. \[fig:multiexpnential-decay-of-squeezing\]c, $w_{0}=8$ mm).
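Eq. (\[eq:squeezing-vs-time\]) is likewise simple to evaluate numerically; a minimal sketch (mode rates $\Gamma_{n}$ and probe weights $|I_{n}^{(G)}|^{2}$ are assumed inputs, normalized such that $\sum_{n}|I_{n}^{(G)}|^{2}=1$):

```python
import math

def squeezed_variance(t, gammas, weights, var0):
    """Eq. (squeezing-vs-time): variance of the probed quadrature after
    evolving in the dark for time t.  The squeezed part (var0 - 1/4)
    decays with the square of the overlap-weighted sum of mode
    exponentials, while vacuum noise (1/4) flows back in."""
    envelope = sum(w * math.exp(-G * t) for G, w in zip(gammas, weights))
    return envelope ** 2 * (var0 - 0.25) + 0.25
```

Because the envelope is a sum of exponentials with different $\Gamma_{n}$, the early-time decay is dominated by the fast high-order modes and the late-time decay by the slowest mode, which reproduces the multi-exponential behavior of panels (b) and (c).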
For this large beam, the significant overlap with the uniform mode produces a degree of squeezing that is especially long-lived. These results demonstrate the significance of accounting for many diffusion modes when considering fragile non-classical states or high-fidelity operations. For example, the presented calculations for the $\unit[25]{dB}$-squeezing require 1000 modes to converge. ![Lifetime of spin squeezing. (a) Experimental sequence comprising a short measurement (squeezing) pulse, followed by dephasing in the dark due to thermal motion for duration $t$, and a verification pulse. In the calculations, the Gaussian distribution is initially squeezed by the measurement to spin variance of $\langle\hat{x}_{\mathrm{G}}^{2}(0)\rangle=0.05$ ($\unit[7]{dB}$ squeezing). We take the same cell geometry as in Fig. \[fig:APN-PSD\] ($R=1$ cm, $L=3$ cm). (b) Degree of spin squeezing versus time in the buffer-gas cell, calculated using Eq.$\,$(\[eq:squeezing-vs-time\]). Spin squeezing exhibits multi-exponential decay associated with multiple diffusion modes. Larger probe beams lead to longer squeezing lifetimes, as the beam overlaps better with lower-order modes. (c) Spin squeezing in the coated cell. In a coated cell, the decay rate of the uniform diffusion mode, dominated by wall coupling, is substantially lower than that of higher-order modes. Therefore, ensuring a significant overlap of the probe beam with the uniform mode is even more important in coated cells for maximizing the squeezing lifetime. The dotted line in (b) and (c) is a single exponential decay $\langle\hat{x}^{2}(t)\rangle=\langle\hat{x}^{2}(0)\rangle e^{-2\Gamma_{w}t}+(1-e^{-2\Gamma_{w}t})/4$, shown for reference with $\Gamma_{w}=\pi^{2}D/w_{0}^{2}$ and $w_{0}=4$ mm; note the difference in time-scales between (b) and (c). (d) An extremely-squeezed state relies more on higher-order spatial modes and thus loses its squeezing degree rapidly (time is normalized by $T_{w}=R^{2}/\pi^{2}D$).
The calculation is initialized with a uniform distribution of squeezing and includes the first 1000 radial modes, required for convergence.\[fig:multiexpnential-decay-of-squeezing\]](nonexponential_decay_fig_v8a){width="0.95\columnwidth"} Coupling of alkali spins to noble-gas spins \[subsec:weak-collisions\] ---------------------------------------------------------------------- Lastly, we consider collisional spin-exchange between two atomic species [@Katz2015SERFHybridization; @Happer1977SERF; @Dellis2014SESpinNoisePRA; @Happer2010book; @Mouloudakis2019SEwavefunctionUnraveling; @weakcollisions2019arxiv]. When the two species experience different wall couplings, their spin dynamics is determined by different diffusion-mode bases. Therefore mutual spin exchange, which is due to a local coupling (atom-atom collisions), depends on the mode-overlap between these bases. Here we consider the coupling of alkali spins to noble-gas spins, such as helium-3, for potential applications in quantum optics [@AlkaliNobleEntanglementKatz2020PRL]. The nuclear spins of noble gases are well protected by the enclosing complete electronic shells and thus sustain many collisions with other atoms and with the cell walls. Their lifetime typically reaches minutes or even hours [@walker1997SEOPReview; @gemmel2010UltraSensitiveMagnetometer; @Walker2017He3review]. In an alkali-noble-gas mixture, the noble gas acts as a buffer both for itself and for the alkali atoms, so that both species diffuse, and their collective spin states can be described by our Bloch-Heisenberg-Langevin model. As the noble-gas spins do not relax by wall collisions, their lowest-order diffusion mode $u_{0}^{\mathrm{b}}(\mathbf{r})$ is that associated with the characteristic (extremely) long lifetime. Higher-order modes $u_{n}^{\mathrm{b}}(\mathbf{r})$ decay due to diffusion with typical rates $\Gamma_{\mathrm{wall}}n^{2}=n^{2}\pi^{2}D/R^{2}$, where $R$ is the length scale of the system.
For typical systems, $\Gamma_{\mathrm{wall}}$ is of the order of $(\unit[1\,]{ms})^{-1}-(\unit[1\,]{sec})^{-1}$. Consequently, to enjoy the long lifetimes of noble-gas spins, one should employ solely the uniform mode. The alkali spins couple locally to the noble-gas spins with a collective rate $J$ via spin-exchange collisions [@weakcollisions2019arxiv]. Unlike the noble-gas spins, the alkali spins are strongly affected by the cell walls, and consequently their low-order diffusion modes $u_{m}^{\mathrm{a}}(\mathbf{r})$ are different. This mode mismatch, between $u_{m}^{\mathrm{a}}(\mathbf{r})$ and $u_{n}^{\mathrm{b}}(\mathbf{r})$, leads to fractional couplings $c_{mn}J$, where $c_{mn}=\int_{V}d^{3}\mathbf{r}\,u_{m}^{\mathrm{a*}}(\mathbf{r})u_{n}^{\mathrm{b}}(\mathbf{r})$ are the overlap coefficients. In particular, $|c_{m0}|J$ are the couplings to the uniform (long-lived) mode of the noble-gas spins. Usually, no anti-relaxation coating is used in these experiments, thus $|c_{m0}|<1$. Here we demonstrate a calculation for a spherical cell of radius $R$, for which the radial mode bases $u_{m}^{\mathrm{a}}(\mathbf{r})$, $u_{n}^{\mathrm{b}}(\mathbf{r})$ and associated decay rates $\Gamma_{\mathrm{a}m},\,\Gamma_{\mathrm{b}n}$ are presented in Appendix \[sec:solution-of-diffusion-relaxation\], alongside the first $c_{m0}$ values for an uncoated cell (Table \[tab:overlap-in-spherical-cell\]). The calculation includes the first $m,n\le70$ modes [^3]. As the initial state, we consider a doubly-excited (Fock) state of the alkali spins $|\psi_{0}\rangle=\frac{1}{\sqrt{2}}(\sum_{m}\alpha_{m}a_{m}^{\dagger})^{2}|0\rangle_{\mathrm{a}}|0\rangle_{\mathrm{b}}=|2\rangle_{\mathrm{a}}|0\rangle_{\mathrm{b}}$, where $|0\rangle_{\mathrm{a}}|0\rangle_{\mathrm{b}}$ is the vacuum state with all spins pointing downwards.
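The overlap coefficients $c_{m0}$ can be sketched numerically. The snippet below assumes the uncoated (Dirichlet) limit for the alkali, with radial modes $u_{m}^{\mathrm{a}}\propto\sin(m\pi r/R)/r$ and the uniform noble-gas mode $u_{0}^{\mathrm{b}}=(4\pi R^{3}/3)^{-1/2}$; the function name and quadrature step count are our own choices.

```python
import math

def c_m0_uncoated(m, R=0.5, n_steps=4000):
    """Overlap c_{m0} = int_V u_m^{a*}(r) u_0^{b}(r) d^3r for a spherical
    cell of radius R, with strongly depolarizing walls for the alkali
    (Dirichlet modes u_m^a = sin(m pi r/R) / (r sqrt(2 pi R)), normalized
    over the sphere) and the uniform noble-gas mode.  Midpoint rule."""
    u0b = (4.0 * math.pi * R ** 3 / 3.0) ** -0.5
    dr = R / n_steps
    total = 0.0
    for i in range(n_steps):
        r = (i + 0.5) * dr
        u_ma = math.sin(m * math.pi * r / R) / (r * math.sqrt(2.0 * math.pi * R))
        total += u_ma * u0b * 4.0 * math.pi * r * r * dr
    return total
```

For these assumed modes the integral evaluates analytically to $c_{m0}=(-1)^{m+1}\sqrt{6}/(m\pi)$, so $\sum_{m}|c_{m0}|^{2}=1$ and the first 70 modes capture about $99\%$ of a uniform excitation, consistent with the truncation used here.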
We take the initial excitation to be spatially uniform, for which the coefficients $\alpha_{m}=c_{m0}$ satisfy $\sum_{m}\alpha_{m}u_{m}^{\mathrm{a}}(\mathbf{r})=u_{0}^{\mathrm{b}}(\mathbf{r})=(4\pi R^{3}/3)^{-1/2}$. We calculate the transfer of this excitation via spin-exchange to the uniform mode $\hat{b}_{0}$ of the noble-gas spins, *i.e.* to the state $|0\rangle_{\mathrm{a}}|2\rangle_{\mathrm{b}}=2^{-1/2}(\hat{b}_{0}^{\dagger})^{2}|0\rangle_{\mathrm{a}}|0\rangle_{\mathrm{b}}$. Figure \[fig:weak-collisions-exchange-fidelity\] displays the exchange fidelity $\mathcal{F}=\max_{t}|\langle\psi(t)|\,|0\rangle_{\mathrm{a}}|2\rangle_{\mathrm{b}}|^{2}$ as a function of both spin-exchange rate $J$ and quality of coating $N$. As $N$ increases, the initial uniform excitation better matches the lower-order modes of the alkali spins, which in turn couple more strongly to the uniform mode of the noble-gas spins. Indeed we find that the exchange fidelity grows with increasing $J$ and $N$. ![Excitation exchange between polarized alkali and noble-gas spins. Shown is the exchange fidelity of the doubly-excited (Fock) states $|2\rangle_{\mathrm{a}}|0\rangle_{\mathrm{b}}$ and $|0\rangle_{\mathrm{a}}|2\rangle_{\mathrm{b}}$. We assume a spherical cell containing potassium and helium-3. The quality $N$ of the wall coating for the alkali is varied between no coating ($N\le1$) and perfect coating ($N\rightarrow\infty$). The noble-gas spins do not couple to the cell walls. The exchange fidelity approaches 1 when $J\gg\Gamma_{\mathrm{wall}},\Gamma_{\mathrm{a}}$, as then the spin exchange is efficient for many diffusion modes; here $\Gamma_{\mathrm{wall}}$ is the contribution of wall collisions to the relaxation rate of the alkali spin (*i.e.*, the typical diffusion rate to the walls), and $\Gamma_{\mathrm{a}}$ is the contribution of atomic collisions. The calculations are performed for a cell radius $R=5$ mm and with $1$ atm of helium-3.
The diffusion constants are $D_{\mathrm{a}}=0.35\,\mathrm{cm^{2}/s}$ for the potassium (mean free path $\lambda_{\mathrm{a}}=50$ nm) so that $\Gamma_{\mathrm{wall}}=\pi^{2}D_{\mathrm{a}}/R^{2}=(\unit[70]{ms})^{-1}$, and $D_{\mathrm{b}}=0.7\,\mathrm{cm^{2}/s}$ for the helium (mean free path $\lambda_{\mathrm{b}}=20$ nm). The additional homogeneous decay of the alkali is $\Gamma_{\mathrm{a}}\approx6\,\mathrm{s^{-1}}$ [@Happer2010book]. The wall coating plays a significant role, since for $N\lambda_{\mathrm{a}}/R>1$ (*i.e.*, $N>10^{5}$) the diffusion modes of the potassium and helium spins match. \[fig:weak-collisions-exchange-fidelity\]](weak_collisions_fidelity_v2){width="0.95\columnwidth"} Discussion\[sec:Discussion\] ============================ We have presented a fully-quantum model, based on a Bloch-Heisenberg-Langevin formalism, for the effects of diffusion on the collective spin states in a thermal gas. The model is valid when the atomic mean free path is much shorter than the typical dimension of the apparatus. This is often the case for warm alkali-vapor systems, even when a buffer gas is not deliberately introduced, as the out-gassing of a spin-preserving wall-coating can lead to mean free paths on the order of millimeters [@sekiguchi2016JapaneseParaffinOutgassing]. We have mostly focused on highly-polarized spin ensembles, typically used to study nonclassical phenomena that employ the transverse component of the spin. It is important to note that Eqs. (\[eq:spin-diffusion-equation\]) and (\[eq:boundary-condition\]) hold generally and can be applied to unpolarized systems as well. For example, the presented analysis of spin-noise spectra holds for unpolarized vapor [\[]{}using $\tilde{P}=1/2$ in Eq.$\,$(\[eq:APN-PSD\])[\]]{} and is thus applicable to non-classical experiments done in that regime [@Kong2018MitchellAlkaliSEEntanglement].
Our model can also describe other space-dependent phenomena, such as the dynamics in the presence of nonuniform driving fields [@Xiao2019MultiplexingSqueezedLightDiffusion]. The presented model agrees with existing mean-field descriptions of diffusion of atomic spins. It further agrees with models employing the fluctuation-dissipation theorem to derive the spin-noise spectrum from the decay associated with diffusion. Importantly, it extends all these models by describing quantum correlations and explicitly deriving the quantum noise of the Brownian motion. Although the model does not formally apply to the special case of small, low-pressure, coated cells ($\lambda\gtrsim R,L$) [@Borregaard2016SinglePhotonsOnMotionallyAveragedMicrocellsNcomm; @sekiguchi2016JapaneseParaffinOutgassing; @Tang2020DiffusionLowPressure_PRA], where the atomic motion is predominantly ballistic, it may still provide a qualitative description of the effect of wall collisions on the uniform spin dynamics. Our results highlight the multimode nature of the dynamics. As exemplified for the applications considered in Sec. \[sec:Applications\], one often needs to account for multiple diffusion modes, with the high-order modes introducing additional quantum noise or reducing fidelities. As a rule of thumb, if $\varepsilon$ is the allowed infidelity or excess quantum noise, then one should include the first $\sim\varepsilon^{-1}$ modes in the calculations. Since thermal motion is inherent to gas-phase systems, our model could be beneficial to many studies of non-classical spin gases and particularly to warm alkali vapors. One such example is a recent demonstration of transfer of quantum correlations by the diffusion of alkali atoms between different spatial regions [@Xiao2019MultiplexingSqueezedLightDiffusion].
Other examples involve a single active region, e.g., when spin squeezing is performed using a small probe beam over a long probing time, with the goal of coupling efficiently to the uniform diffusion mode in a coated cell [@Polzik2010ReviewRMP; @Borregaard2016SinglePhotonsOnMotionallyAveragedMicrocellsNcomm]. The resulting spatio-temporal dynamics can be described using our model in order to assess the obtainable degree of squeezing. In particular, our model predicts that high buffer-gas pressure would improve the lifetime of squeezed states when small probe beams are employed (e.g., when using cavities), thus encouraging the realization of such experiments. We thank Eugene Polzik for fruitful discussions and insights. We acknowledge financial support by the European Research Council starting investigator grant Q-PHOTONICS 678674, the Israel Science Foundation, the Pazy Foundation, the Minerva Foundation with funding from the Federal German Ministry for Education and Research, and the Laboratory in Memory of Leon and Blacky Broder. Diffusion-induced noise\[sec:diffusion-noise\] ============================================== In the main text, we formulate the dynamics of a collective spin operator as driven by local density fluctuations. To derive Eq. (\[eq:spin-diffusion-equation\]), we use the Lagrangian version of Eq.$\,$(\[eq:Dean-diffusion\]), where the noise is defined for each particle individually $$\partial n_{a}/\partial t=D\nabla^{2}n_{a}+\boldsymbol{\nabla}[\boldsymbol{\eta}^{(a)}(t)n_{a}].\label{eq:Lagrangian-Dean-diffusion}$$ Here $\boldsymbol{\eta}^{(a)}(t)$ is a white Gaussian process with vanishing mean $\langle\boldsymbol{\eta}^{(a)}\rangle_{\mathrm{c}}=\mathbf{0}$ and with correlations $\langle\eta_{i}^{(a)}(t)\eta_{j}^{(a')}(t')\rangle_{\mathrm{c}}=2D\delta_{ij}\delta_{aa'}\delta(t-t')$.
Substituting these into Eq.$\,$(\[eq:spin-equation\]) provides the definition for the quantum noise components as $$\hat{f}_{\mu}(\mathbf{r},t)=\sum_{a=1}^{N_{\mathrm{a}}}\hat{\mathrm{s}}_{\mu}^{(a)}(t)\boldsymbol{\nabla}[\boldsymbol{\eta}^{(a)}n_{a}(\mathbf{r},t)].\label{eq:Lagnrangian-thermal-motion-noise}$$ Following the lines of Ref. [@dean1996langevinDiffusion], we consider an alternative, equivalent definition $$\hat{f}_{\mu}(\mathbf{r},t)=\boldsymbol{\nabla}[\hat{\mathrm{s}}_{\mu}(\mathbf{r},t)\boldsymbol{\eta}(\mathbf{r},t)\slash\sqrt{n}],\label{eq:Eulerian-thermal-motion-noise}$$ as also provided in the main text. According to both definitions, $\hat{\boldsymbol{f}}$ is a stochastic Gaussian process (linear operations on a Gaussian process accumulate to a Gaussian process) with a vanishing mean. Consequently, the equivalence of the two definitions is a result of the equality of the noise correlations $$\begin{aligned} \langle\hat{f}_{\mu}\hat{f}_{\nu}'\rangle_{\mathrm{c}}= & \langle\sum_{i}\nabla_{i}(\sum_{a}\hat{\mathrm{s}}_{\mu}^{(a)}n_{a}\eta_{i}^{(a)})\times\nonumber \\ & \:\sum_{j}\nabla'_{j}(\sum_{a'}\hat{\mathrm{s}}_{\nu}^{(a')}n_{a'}\eta_{j}^{(a')})\rangle_{\mathrm{c}}\nonumber \\ = & 2D(\boldsymbol{\nabla}\cdot\boldsymbol{\nabla}')(\sum_{a}\hat{\mathrm{s}}_{\mu}^{(a)}\hat{\mathrm{s}}_{\nu}^{(a)}n_{a})\delta(\mathbf{r}-\mathbf{r}')\delta(t-t')\nonumber \\ = & 2D(\boldsymbol{\nabla}\cdot\boldsymbol{\nabla}')\frac{\sum_{aa'}\hat{\mathrm{s}}_{\mu}^{(a)}n_{a}\hat{\mathrm{s}}_{\nu}^{(a')}n_{a'}}{\sum_{a'}n_{a'}}\delta(\mathbf{r}-\mathbf{r}')\delta(t-t')\nonumber \\ = & 2D(\boldsymbol{\nabla}\cdot\boldsymbol{\nabla}')(\hat{\mathrm{s}}_{\mu}\hat{\mathrm{s}}_{\nu}'/\sqrt{nn'})\delta(\mathbf{r}-\mathbf{r}')\delta(t-t')\nonumber \\ = & 
\langle\sum_{i}\nabla_{i}(\hat{\mathrm{s}}_{\mu}\eta_{i}/\sqrt{n})\times\sum_{j}\nabla_{j}'(\hat{\mathrm{s}}_{\nu}'\eta_{j}'/\sqrt{n'})\rangle_{\mathrm{c}},\label{eq:equivalence-of-eulerian-lagrangian-thermal-motion-quantum-noises}\end{aligned}$$ where we used the identity $n_{a}(\mathbf{r},t)n_{a'}(\mathbf{r},t)=\delta_{aa'}n_{a}(\mathbf{r},t)n_{a'}(\mathbf{r},t)$. Here and henceforth, we use primes to abbreviate the coordinates $(\mathbf{r}',t')$ of a field, *i.e.*, $F'=F(\mathbf{r}',t')$ and $F=F(\mathbf{r},t)$. The quantum noise, whose commutation relations are shown in Eq.$\,$(\[eq:diffusion-noise-commutation-relation\]), conserves the spin commutation relations $[\hat{\mathrm{s}}_{\mu}(\mathbf{r},t),\hat{\mathrm{s}}_{\nu}(\mathbf{r}',t)]=i\epsilon_{\xi\mu\nu}\hat{\mathrm{s}}_{\xi}\delta(\mathbf{r}-\mathbf{r}')$. This can be seen from $$\begin{aligned} \langle[\hat{\mathrm{s}}_{\mu}(\mathbf{r},t+dt),\hat{\mathrm{s}}_{\nu}(\mathbf{r}',t+dt)]-[\hat{\mathrm{s}}_{\mu}(\mathbf{r},t),\hat{\mathrm{s}}_{\nu}(\mathbf{r}',t)]\rangle_{\mathrm{c}}\\ =i\epsilon_{\xi\mu\nu}\delta(\mathbf{r}-\mathbf{r}')\langle\hat{\mathrm{s}}_{\xi}(\mathbf{r},t+dt)-\hat{\mathrm{s}}_{\xi}(\mathbf{r},t)\rangle_{\mathrm{c}}\nonumber \end{aligned}$$ and then $$\begin{aligned} i\epsilon_{\xi\mu\nu}D(\boldsymbol{\nabla}+\boldsymbol{\nabla}')^{2}[\hat{\mathrm{s}}_{\xi}\delta(\mathbf{r}-\mathbf{r}')]dt\label{eq:conservation-of-spin-commutation-relations}\\ =i\epsilon_{\xi\mu\nu}D(\nabla^{2}\hat{\mathrm{s}}_{\xi})\delta(\mathbf{r}-\mathbf{r}')dt,\nonumber \end{aligned}$$ where the last equality stems from $(\boldsymbol{\nabla}+\boldsymbol{\nabla}')\delta(\mathbf{r}-\mathbf{r}')=0$. In Sec. \[sec:Polarized-ensemb\], we focus on highly polarized ensembles, where the dynamics is described by the bosonic annihilation operator $\hat{a}$, under the Holstein-Primakoff approximation.
Under these conditions, the thermal noise operating on the bosonic excitations becomes $\hat{f}=\boldsymbol{\nabla}(\hat{a}\boldsymbol{\eta}/\sqrt{n})$. In addition, the same conditions ensure that $\hat{a}^{\dagger}\hat{a}=0$, $\hat{a}(\mathbf{r},t)\hat{a}^{\dagger}(\mathbf{r}',t)=\delta(\mathbf{r}-\mathbf{r}')$, and $\hat{a}(\mathbf{r},t)\hat{a}^{\dagger}(\mathbf{r}',t)\delta(\mathbf{r}-\mathbf{r}')=n\delta(\mathbf{r}-\mathbf{r}')$, thus providing $$\begin{aligned} \langle\hat{f}\hat{f}'^{\dagger}\rangle_{\mathrm{c}} & =\langle\boldsymbol{\nabla}\hat{a}\boldsymbol{\eta}/\sqrt{n}\,\boldsymbol{\nabla}'(\hat{a}^{\dagger})'\boldsymbol{\eta}'/\sqrt{n'}\rangle_{\mathrm{c}}\nonumber \\ & =-2D\nabla^{2}\delta(\mathbf{r}-\mathbf{r}')\delta(t-t'),\end{aligned}$$ and $\langle\hat{f}^{\dagger}\hat{f}'\rangle_{\mathrm{c}}=0$. Therefore, the noise becomes a vacuum noise and conserves the commutation relations of the bosonic operators. We denote the correlations of the diffusion noise in the bulk as $C(\mathbf{r},\mathbf{r}')=-2D\nabla^{2}\delta(\mathbf{r}-\mathbf{r}')$. Model for wall coupling\[sec:wall-coupling-toy\] ================================================ We adopt a simplified model for describing the scattering of atoms off the cell walls. The model assumes that the wall coupling is stochastic and Markovian, thus resulting in an exponential decay of the scattered spin, and that the noise due to diffusion in the bulk vanishes within a thin boundary layer at the wall. This leads to the scattering described by Eq.$\,$(\[eq:wall-scattering\]). The accompanying noise processes for atoms $a$ and $a'$ satisfy the relations $$[\hat{w}_{a}^{\mu}(t),\hat{w}_{a'}^{\nu}(t')]=i\epsilon_{\mu\nu\xi}e^{-1/N}(1-e^{-1/N})\hat{\mathrm{s}}_{\xi}^{(a)}\delta_{aa'}\frac{\varpi\delta(t-t')}{\bar{v}},\label{eq:lagrangian-wall-scattering-fluctuation}$$ where $\mu,\nu=x,y,z$. 
Here $\varpi=(e^{1/N}-1)^{-1}\lambda/3$ is the effective correlation distance of the wall-scattering noise, defined such that the commutation relations of the spin operators are conserved for all diffusion modes, *i.e.* for the entire cell (bulk and boundary). It changes monotonically from $\varpi=e^{-1/N}\lambda/3$ for spin-destroying walls ($N\ll1$) to $\varpi=N\lambda/3$ for spin-preserving walls ($N\gg1$). The continuous operator $\hat{\boldsymbol{w}}(\mathbf{r},t)$ used in the main text to describe the noise due to interactions with the cell walls is defined as $\hat{\boldsymbol{w}}=\sum_{a}\hat{\boldsymbol{w}}_{a}n_{a}$. It is the continuum analogue of $\hat{\boldsymbol{w}}_{a}(t)$, just as $\hat{\mathbf{s}}$ is of $\hat{\mathbf{s}}_{a}$. It vanishes for positions $\mathbf{r}$ whose distance from the boundary is larger than $\varpi$, and its commutation relations are $$\begin{aligned} \langle[\hat{w}^{\mu},\hat{w}'^{\nu}]\rangle_{\mathrm{c}}= & i\epsilon_{\mu\nu\xi}e^{-1/N}(1-e^{-1/N})\varpi/\bar{v}\,\times\nonumber \\ & \hat{\mathrm{s}}_{\xi}(\mathbf{r},t)\delta(\mathbf{r}-\mathbf{r}')\delta(t-t').\label{eq:eulerian-wall-noise-commutation}\end{aligned}$$ The last expression is defined only for coordinates $\mathbf{r},\mathbf{r}'$ on the cell boundary and vanishes elsewhere. As an example, for a rectangular cell with a wall at $x=L/2$, we shall define coordinates on the boundary $\mathbf{r}_{\perp}=y\hat{\mathbf{y}}+z\hat{\mathbf{z}}$ and substitute $\delta(\mathbf{r}-\mathbf{r}')=\frac{1}{\varpi}\delta(y-y')\delta(z-z')$ at $x=x'=L/2$. For a spherical cell with a wall at $|\mathbf{r}|=R$, we use $\delta(\mathbf{r}-\mathbf{r}')=\frac{1}{\varpi}\frac{\delta(\Omega-\Omega')}{R^{2}}$, where $\Omega$ is the angular position of coordinate $\mathbf{r}$. Using $\hat{\boldsymbol{w}}(\mathbf{r},t)$, the scattering matrix for the spin density operator becomes $\mathcal{S}\hat{\mathbf{s}}=e^{-1/N}\cdot\hat{\mathbf{s}}+\hat{\boldsymbol{w}}$. We write Eq. 
(\[eq:boundary-condition\]) for the spin density operator using the noise field $\hat{\boldsymbol{w}}$. In addition, $\hat{\boldsymbol{w}}$ is defined only on the boundary, such that $\left.(\hat{\mathbf{n}}\cdot\boldsymbol{\nabla})\hat{\boldsymbol{w}}\right|_{\text{boundary}}\propto\delta^{\prime}(0)$ and therefore vanishes. Finally, under the Holstein-Primakoff approximation, we use Eq.$\,$(\[eq:eulerian-wall-noise-commutation\]) to find the noise operating on $\hat{a}$ due to wall scattering. The operator $\hat{w}=\hat{w}_{-}/\sqrt{2|\mathrm{s}_{z}|}$ becomes a vacuum noise, satisfying $\langle\hat{w}^{\dagger}\hat{w}'\rangle_{\mathrm{c}}=0$, and for $\mathbf{r},\mathbf{r}'$ on the cell boundary, $$\langle\hat{w}\hat{w}'^{\dagger}\rangle_{\mathrm{c}}=2e^{-1/N}(1-e^{-1/N})\varpi/\bar{v}\delta(\mathbf{r}-\mathbf{r}')\delta(t-t').\label{eq:wall-vacuum-noise-commutation}$$ Considering a general spin distribution, the noise due to the walls exists only in a volume of order $\varpi S$, where $S$ is the cell surface area, while the noise due to diffusion in the bulk exists in the entire volume $V$. The ratio of the two scales as $\langle\hat{w}\hat{w}'^{\dagger}\rangle_{\mathrm{c}}/\langle\hat{f}\hat{f}'^{\dagger}\rangle_{\mathrm{c}}\propto\varpi S/V\propto\lambda/R$, where $R$ is the typical dimension of the cell. Thus in diffusive systems, the diffusion noise dominates over that of the wall scattering for most non-uniform spin distributions. Solving the diffusion-relaxation Bloch-Heisenberg-Langevin equations\[sec:solution-of-diffusion-relaxation\] ============================================================================================================ The diffusion-relaxation equation in the Bloch-Heisenberg-Langevin formalism, in the limit of a highly-polarized spin gas, is presented in Sec. \[sec:Polarized-ensemb\]. Here we first solve Eqs. 
(\[eq:HP heisenberg langevin\]) and (\[eq:HP boundary condition\]) for a simplified one-dimensional (1D) case by following the method described in the main text. We provide explicit expressions for the mode-specific noise sources due to motion in the bulk and at the boundary. Finally, we provide tabulated solutions for the three-dimensional (3D) cases of rectangular, cylindrical, and spherical cells. Consider a 1D cell with a single spatial coordinate $-L/2\le x\le L/2$. The functions $u_{k}(x)$ that solve the Helmholtz equation $\partial^{2}u_{k}/\partial x^{2}+k^{2}u_{k}=0$ are the relaxation-diffusion modes, where the decay rates $\Gamma$ introduced in the main text are $\Gamma=Dk^{2}$. These solutions are $u_{k}^{+}=A_{k}^{+}\cos(k^{+}x)$ and $u_{k}^{-}=A_{k}^{-}\sin(k^{-}x)$, comprising the symmetric and anti-symmetric modes. The annihilation operator decomposes into a superposition $\hat{a}(x,t)=\sum_{k,\pm}\hat{a}_{k}^{\pm}(t)u_{k}^{\pm}(x)$. To further simplify the example, we take only symmetric spin distributions and symmetric noise into consideration, i.e., we keep only the modes $u_{k}^{+}$ and omit the ’$+$’ superscript. Note that a physical noise is random and generally has no defined symmetry, but it can be decomposed into components with well-defined symmetry. The bulk diffusion equation becomes $\partial\hat{a}_{k}/\partial t=i[\mathcal{H},\hat{a}_{k}]-Dk^{2}\hat{a}_{k}+\int_{-L/2}^{L/2}\hat{f}u_{k}dx$. We break the boundary equation into a homogeneous part, where the noise is omitted, and an inhomogeneous part, which includes the noise. The former can be decomposed into the different modes and is simplified to the algebraic equation $\cot(kL/2)=2\frac{1+e^{-1/N}}{1-e^{-1/N}}\lambda k$ [^4]. For general values of $N$, this is a Robin boundary condition, which can be solved numerically or graphically as presented in Fig. \[fig:graphical-solution-for-Robin-condition\]. 
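A numerical alternative to the graphical solution is to bracket one root of the Robin condition per branch of the cotangent. The sketch below (our own illustrative code, using the 1D convention $D=\lambda\bar{v}$ from the footnote) rewrites $\cot(kL/2)=2\frac{1+e^{-1/N}}{1-e^{-1/N}}\lambda k$ in terms of $x=kL/2$ and uses a standard bracketing root finder:

```python
import numpy as np
from scipy.optimize import brentq

def symmetric_mode_wavenumbers(L, lam, N, n_modes=5):
    """Roots k_n of cot(k L/2) = 2*(1+e^{-1/N})/(1-e^{-1/N}) * lam * k
    (symmetric 1D modes).  In x = k L/2 the condition reads cot(x) = c*x with
    c = (4*lam/L)*(1+e^{-1/N})/(1-e^{-1/N}); since cot(x) - c*x decreases
    monotonically from +inf to -inf on each branch (n*pi, (n+1)*pi),
    every branch holds exactly one root."""
    c = 4.0 * lam / L * (1 + np.exp(-1.0 / N)) / (1 - np.exp(-1.0 / N))
    h = lambda x: 1.0 / np.tan(x) - c * x
    roots = [brentq(h, n * np.pi + 1e-9, (n + 1) * np.pi - 1e-9)
             for n in range(n_modes)]
    return 2.0 * np.array(roots) / L  # back to k = 2x/L

# destructive walls (N << 1) with lam << L: expect k_n -> (2n+1)*pi/L
k = symmetric_mode_wavenumbers(L=1e-2, lam=0.5e-6, N=1e-2)
```

For strongly depolarizing walls this reproduces the Dirichlet wavenumbers $k_{n}=(2n+1)\pi/L$ to a relative accuracy set by $\lambda/L$.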
The discrete solutions $k_{n}$ define a complete and orthonormal set of discrete modes $u_{n}=A_{n}\cos(k_{n}x)$, spanning all symmetric spin distributions in the 1D cell, and $\int_{-L/2}^{L/2}u_{m}^{\ast}u_{n}dx=\delta_{nm}$. These provide the discrete decay rates $\Gamma_{n}$. For example, in the Dirichlet case of destructive walls ($N\cdot\lambda/L\ll1$), $k_{n}=(2n+1)\pi/L$. The annihilation operators of the various modes are $\hat{a}_{n}(t)=\int_{-L/2}^{L/2}u_{n}^{\ast}(x)\hat{a}(x,t)dx$, and the noise operators are $\hat{f}_{n}(t)=\int_{-L/2}^{L/2}u_{n}^{\ast}(x)\hat{f}(x,t)dx$ and $\hat{w}(t)=[\hat{w}(L/2,t)+\hat{w}(-L/2,t)]/2$. ![Graphical solutions for the Robin boundary condition. Here we solve the 1D equations for a system of length $L=\unit[1]{cm}$ and mean free path $\lambda=\unit[0.5]{\mu m}$ (characteristic of $100$ Torr of buffer gas) and for different values of $N$.\[fig:graphical-solution-for-Robin-condition\]](robin_fig_v3){width="0.9\columnwidth"} The treatment of $\hat{f}_{n}$ as a bulk source term operating on independent modes is a common technique [@SteckNotes]. It differs, however, from the treatment of the noise at the boundaries. We deal with the boundary term by defining auxiliary fields $$\hat{a}(x,t)=\hat{p}(x,t)+\sum_{n}\hat{h}_{n}(t)u_{n}(x),$$ where $\hat{p}(x,t)$ is chosen to carry the wall noise as a source acting on the modes $\hat{a}_{n}$, while $\hat{h}_{n}$ solves the homogeneous equations in the absence of wall-induced fluctuations. Therefore $\hat{p}(x,t)$ is defined such that $\nabla^{2}\hat{p}(x,t)=0$. 
The diffusion-relaxation modes for the three cell geometries (Table \[tab:Diffusion-mode-solutions\]) are as follows.

*Rectangular cell* (symmetric $(+)$ and anti-symmetric $(-)$ modes; $-L/2\leq x\leq L/2$): the boundary equations are $\cot(k_{n}^{+}L/2)=\frac{2}{3}\frac{1+e^{-1/N}}{1-e^{-1/N}}\lambda k_{n}^{+}$ and $-\tan(k_{n}^{-}L/2)=\frac{2}{3}\frac{1+e^{-1/N}}{1-e^{-1/N}}\lambda k_{n}^{-}$, with modes $u_{n}^{+}(x)=A_{n}^{+}\cos(k_{n}^{+}x)$ and $u_{n}^{-}(x)=A_{n}^{-}\sin(k_{n}^{-}x)$.

*Cylindrical cell* (angular index $n$; $0\leq\rho\leq R$, $0\leq\varphi\leq2\pi$): the boundary equation is $-J_{n}(k_{\nu n}R)/J_{n}^{\prime}(k_{\nu n}R)=\frac{2}{3}\frac{1+e^{-1/N}}{1-e^{-1/N}}\lambda k_{\nu n}$, with modes $u_{\nu n}(\rho,\varphi)=A_{\nu n}J_{n}(k_{\nu n}\rho)e^{in\varphi}$.

*Spherical cell* (spherical indices $\ell,p$; $0\leq r\leq R$, $0\leq\theta\leq\pi$, $0\leq\varphi\leq2\pi$): the boundary equation is $-j_{\ell}(k_{n\ell}R)/j_{\ell}^{\prime}(k_{n\ell}R)=\frac{2}{3}\frac{1+e^{-1/N}}{1-e^{-1/N}}\lambda k_{n\ell}$, with modes $u_{n\ell p}(r,\theta,\varphi)=A_{n\ell p}j_{\ell}(k_{n\ell}r)Y_{\ell p}(\theta,\varphi)$.

In our 1D symmetric case, $\hat{p}(x,t)=\hat{p}(t)$ is uniform. Writing the full boundary equation for $\hat{a}$ provides $\hat{p}(t)=\hat{w}(t)/(1-e^{-1/N})$. We decompose $\hat{p}(t)$ into the modes to obtain $\hat{p}_{n}(t)=\int_{-L/2}^{L/2}\hat{p}(t)u_{n}(x)dx=2A_{n}\sin(k_{n}L/2)/k_{n}\cdot\hat{p}(t)$. Substituting this in Eq.$\,$(\[eq:HP heisenberg langevin\]) provides the equation for the homogeneous mode operators $\hat{h}_{n}$. 
In the case of a magnetic Zeeman Hamiltonian $\mathcal{H}=\omega_{0}\hat{\mathrm{S}}_{z}$, we find $$\partial\hat{h}_{n}/\partial t=-i\omega_{0}\hat{h}_{n}-\Gamma_{n}\hat{h}_{n}+\hat{f}_{n}-i\omega_{0}\hat{p}_{n}-\partial\hat{p}_{n}/\partial t,\label{eq:boundaryless-diffusion-eqn-eqn}$$ whose solutions are $$\begin{array}{cc} \hat{h}_{n}= & e^{-(i\omega_{0}+\Gamma_{n})t}\hat{h}_{n}\left(0\right)+\qquad\qquad\qquad\qquad\qquad\qquad\qquad\\ & \int_{0}^{t}e^{-(i\omega_{0}+\Gamma_{n})(t-\tau)}(\hat{f}_{n}(\tau)-(i\omega_{0}+\tfrac{\partial}{\partial\tau})\hat{p}_{n}(\tau))d\tau. \end{array}\label{eq:boundaryless-diffusion-eqn-solution}$$ Substituting into $\hat{a}_{n}(t)=\hat{p}_{n}(t)+\hat{h}_{n}(t)$ and differentiating with respect to $t$ provides the evolution of the annihilation operators of the spin modes $$\partial\hat{a}_{n}/\partial t=-(i\omega_{0}+\Gamma_{n})\hat{a}_{n}+\hat{f}_{n}+\hat{f}_{n}^{\mathrm{w}},\label{eq:mode-operator-diffusion-eqn}$$ where $$\hat{f}_{n}^{\mathrm{w}}=\Gamma_{n}\int_{-L/2}^{L/2}u_{n}^{\ast}(x)\hat{p}(x,t)dx=\frac{2A_{n}\Gamma_{n}\sin(k_{n}L/2)}{(1-e^{-1/N})k_{n}}\hat{w}$$ is the quantum noise due to wall collisions. Finally, we can combine the two noise terms and obtain the total, mode-specific, noise operator $$\hat{\mathcal{W}}_{n}=\int_{0}^{t}e^{-(i\omega_{0}+\Gamma_{n})(t-\tau)}[\hat{f}_{n}(\tau)+\hat{f}_{n}^{\mathrm{w}}(\tau)]d\tau,\label{eq:mode-noise-process}$$ appearing in Eq. (\[eq:eigenmode-time-evolution\]). 
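Since the combined noise has the white correlator $\langle[\hat{f}_{n}(\tau)+\hat{f}_{n}^{\mathrm{w}}(\tau)][\hat{f}_{n}^{\dagger}(\tau')+\hat{f}_{n}^{\mathrm{w}\dagger}(\tau')]\rangle_{\mathrm{c}}=2\Gamma_{n}\delta(\tau-\tau')$, integrating it against the decaying exponential in Eq. (\[eq:mode-noise-process\]) gives $\langle\hat{\mathcal{W}}_{n}\hat{\mathcal{W}}_{n}^{\dagger}\rangle_{\mathrm{c}}=1-e^{-2\Gamma_{n}t}$. This buildup can be checked with a c-number Euler-Maruyama sketch (our own illustrative parameters; a classical simulation reproduces only this normally-ordered moment):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def accumulated_noise_variance(gamma, omega0, T, dt, n_traj):
    """Euler-Maruyama integration of  da = -(i*omega0 + gamma)*a*dt + dW
    for a(0) = 0, with complex Gaussian noise obeying <dW dW*> = 2*gamma*dt.
    Returns the ensemble average of |a(T)|^2."""
    a = np.zeros(n_traj, dtype=complex)
    for _ in range(int(T / dt)):
        dW = np.sqrt(gamma * dt) * (rng.standard_normal(n_traj)
                                    + 1j * rng.standard_normal(n_traj))
        a += -(1j * omega0 + gamma) * a * dt + dW
    return np.mean(np.abs(a) ** 2)

# Gamma_n = 1, T = 1: the variance should approach 1 - exp(-2) ~ 0.865
var = accumulated_noise_variance(gamma=1.0, omega0=5.0, T=1.0, dt=1e-3,
                                 n_traj=20000)
```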
Under the influence of the noise sources $\hat{\mathcal{W}}_{n}$ and the dissipation $\Gamma_{n}$, the spin operators of the diffusion modes obey the fluctuation-dissipation theorem, and their commutation relations are conserved, resulting from $\langle(\hat{f}_{n'}+\hat{f}_{n'}^{\mathrm{w}})(\hat{f}_{n}^{\dagger}+\hat{f}_{n}^{\mathrm{w}\dagger})\rangle_{\mathrm{c}}=2\Gamma_{n}\delta_{n'n}\delta(t-t')$ and $\langle(\hat{f}_{n'}^{\dagger}+\hat{f}_{n'}^{\mathrm{w}\dagger})(\hat{f}_{n}+\hat{f}_{n}^{\mathrm{w}})\rangle_{\mathrm{c}}=0$. Note that the conservation of local commutation relations is already presented in Appendices \[sec:diffusion-noise\] and \[sec:wall-coupling-toy\] (where $\hat{f}$ applies for the bulk and $\hat{w}$ for the boundary) without the mode decomposition. Notably however, it also holds for the nonlocal (diffusion) modes. For completeness, we provide in Table \[tab:Diffusion-mode-solutions\] the diffusion-relaxation modes for rectangular, cylindrical, and spherical cells. Various applications, such as those involving collisional (local) coupling between two spin ensembles, also require the overlap coefficients $c_{mn}=\int_{V}d^{3}\mathbf{r}A_{m}^{\ast}(\mathbf{r})B_{n}(\mathbf{r})$ between diffusion modes $A_{m}(\mathbf{r})$ and $B_{n}(\mathbf{r})$. These are presented in Table \[tab:overlap-in-spherical-cell\] for spherically-symmetric modes, where $A_{m}(\mathbf{r})$ are modes for highly destructive walls ($N\lesssim1$), and $B_{n}(\mathbf{r})$ are for inert walls ($N\gg L/\lambda$). These conditions are typical for a mixture of alkali vapor and noble gas, as discussed in section$\,$\[sec:Applications\]. 
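The spherically symmetric overlap coefficients tabulated below reduce to one-dimensional radial integrals. A short numerical sketch (our own code, assuming the destructive-wall modes satisfy the Dirichlet condition $j_{0}(k_{m}R)=0$ and the inert-wall modes the Neumann condition, whose lowest mode is uniform, consistent with the limiting forms of the boundary equation) reproduces the first column:

```python
import numpy as np
from scipy.integrate import quad

R = 1.0
j0 = lambda x: np.sinc(x / np.pi)  # spherical Bessel j0(x) = sin(x)/x

def amplitude(k):
    """Normalization A such that 4*pi * int_0^R r^2 [A j0(k r)]^2 dr = 1."""
    norm2, _ = quad(lambda r: 4 * np.pi * r**2 * j0(k * r) ** 2, 0, R)
    return 1.0 / np.sqrt(norm2)

def overlap(ka, kb):
    """c = int_V A_a j0(ka r) * A_b j0(kb r) d^3r for radial modes."""
    Aa, Ab = amplitude(ka), amplitude(kb)
    val, _ = quad(lambda r: 4 * np.pi * r**2 * Aa * j0(ka * r)
                  * Ab * j0(kb * r), 0, R)
    return val

# destructive walls (Dirichlet): k_m = (m+1)*pi/R; inert walls: k = 0 (uniform)
c00 = overlap(1 * np.pi / R, 0.0)  # table entry 0.780
c10 = overlap(2 * np.pi / R, 0.0)  # table entry -0.390
```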
  $c_{mn}$   $n=0$    $n=1$    $n=2$    $n=3$    $n=4$
  ---------- -------- -------- -------- -------- --------
  $m=0$      0.780    0.609    -0.126   0.058    -0.033
  $m=1$      -0.390   0.652    0.622    -0.158   0.079
  $m=2$      0.260    -0.274   0.647    0.627    -0.173
  $m=3$      -0.195   0.182    -0.256   0.644    0.629
  $m=4$      0.156    -0.139   0.168    -0.246   0.643

  : Overlap coefficients of the first five spherically symmetric modes, *i.e.*, $\ell=p=0$. We take $A_{m}(\mathbf{r})$ to be the diffusion modes of a spherical cell with radius $R=1$ and destructive walls, and $B_{n}(\mathbf{r})$ to be the modes in the same cell but with spin-conserving walls.\[tab:overlap-in-spherical-cell\]

Faraday rotation measurement setup\[sec:Faraday-rotation-measurement\] ====================================================================== In Sec. \[sec:Applications\], we consider two experimental setups where the transverse component of a polarized spin ensemble is measured by means of Faraday rotation. This scheme is common in alkali spin measurements [@Braginsky1996QNDreview; @Julsgaard2001PolzikEntanglement; @Kong2018MitchellAlkaliSEEntanglement; @AppeltHapper1998SEOPtheoryPRA]. As illustrated in Fig. \[fig:APN-PSD\]a, we consider a cylindrical cell with radius $R$ and length $L$, with the cylinder axis along $\hat{\mathbf{x}}$. The spins are polarized along $\hat{\mathbf{z}}$, parallel to an external applied magnetic field $\mathbf{B}=B\hat{\mathbf{z}}$. We use $\rho$ and $\varphi$ as the cylindrical coordinates, and $x$ as the axial coordinate. A linearly-polarized probe beam travels along $\hat{\mathbf{x}}$ with a Gaussian intensity profile $I_{\mathrm{G}}(\mathbf{r})=I_{0}\exp(-2\rho^{2}/w_{0}^{2})$, where $w_{0}$ is the beam waist radius. We assume a negligible beam divergence within the cell and require the normalization $\int_{V}I_{\mathrm{G}}^{2}(\mathbf{r})d^{3}\mathbf{r}=1$, so that $(I_{0})^{-2}=\pi Lw_{0}^{2}(1-e^{-4R^{2}/w_{0}^{2}})/4$. 
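The quoted normalization can be verified directly, since the cell volume integral factorizes into the cell length times a radial Gaussian integral. A minimal numerical check (illustrative cell dimensions of our own choosing):

```python
import numpy as np
from scipy.integrate import quad

L_cell, R, w0 = 3.0, 1.0, 0.6  # illustrative cell length, radius, beam waist

# closed-form normalization: (I0)^{-2} = pi*L*w0^2*(1 - exp(-4 R^2/w0^2))/4
I0_sq = 4.0 / (np.pi * L_cell * w0**2 * (1 - np.exp(-4 * R**2 / w0**2)))

# int_V I_G^2 d^3r = L * int_0^R I0^2 exp(-4 rho^2/w0^2) * 2*pi*rho d rho
val, _ = quad(lambda rho: I0_sq * np.exp(-4 * rho**2 / w0**2)
              * 2 * np.pi * rho, 0, R)
val *= L_cell  # should equal 1 by construction
```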
The probe frequency is detuned from the atomic transition, such that the probe is not depleted and does not induce additional spin decay. The linear polarization of the probe rotates due to the Faraday effect, with the rotation angle proportional to the spin projection along the beam propagation direction. Therefore, measurement of the rotation angle provides a measurement of $\hat{\mathrm{s}}_{x}$ weighted by its overlap with the beam profile. Precisely, the operator $\hat{x}_{\mathrm{G}}(t)=\int_{V}d^{3}\mathbf{r}I_{\mathrm{G}}(\mathbf{r})\hat{x}(\mathbf{r},t)$, where $\hat{x}(\mathbf{r},t)=[\hat{a}(\mathbf{r},t)+\hat{a}^{\dagger}(\mathbf{r},t)]/2$, is measured in this scheme [@Polzik2010ReviewRMP]. We identify the atomic diffusion modes in the cylindrical cell as $u_{n}(\mathbf{r})$. Note that in Table \[tab:Diffusion-mode-solutions\], the modes require several labels, which we replace here with a single label $n$ for brevity. We decompose the spin operator and the probe intensity profile using the modes $\hat{x}(\mathbf{r},t)=\sum_{n}\hat{x}_{n}(t)u_{n}(\mathbf{r})$ and $I_{\mathrm{G}}(\mathbf{r})=\sum_{n}I_{n}^{(\mathrm{G})}u_{n}(\mathbf{r})$, where $\hat{x}_{n}(t)=(\hat{a}_{n}+\hat{a}_{n}^{\dagger})/2$ and $I_{n}^{(\mathrm{G})}=\int_{V}d^{3}\mathbf{r}I_{\mathrm{G}}(\mathbf{r})u_{n}^{\ast}(\mathbf{r})$. Using these, we express the measured spin operator as $\hat{x}_{\mathrm{G}}(t)=\sum_{n}I_{n}^{(\mathrm{G})}\hat{x}_{n}(t)$. 
We calculate the spin noise spectrum from its formal definition $$S_{xx}(f)=\lim_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T}\int_{0}^{T}\langle\hat{x}_{\mathrm{G}}(\tau)\hat{x}_{\mathrm{G}}(\tau')\rangle_{\mathrm{c}}e^{2\pi if(\tau-\tau')}d\tau d\tau'$$ utilizing the temporal evolution of the modes as given by Eq.$\,$(\[eq:eigenmode-time-evolution\]), and including the noise properties $\langle\hat{\mathcal{W}}_{n'}^{\dagger}(t)\hat{\mathcal{W}}_{n}(t)\rangle_{\mathrm{c}}=0$ and $\langle\hat{\mathcal{W}}_{n'}(t)\hat{\mathcal{W}}_{n}^{\dagger}(t)\rangle_{\mathrm{c}}=(1-e^{-2\Gamma_{n}t})\delta_{n'n}$ derived from Appendix$\,$\[sec:solution-of-diffusion-relaxation\]. The spin-noise spectral density appearing in Eq.$\,$(\[eq:APN-PSD\]) holds for both polarized and unpolarized ensembles [\[]{}$\tilde{P}=1$ and $\tilde{P}=\nicefrac{1}{2}$ in Eq.$\,$(\[eq:APN-PSD\]), respectively[\]]{}. For the considered geometry, the standard quantum limit is $\langle\hat{\mathrm{S}}_{x}^{2}\rangle\ge N_{\text{beam}}/4=nV_{\text{beam}}/4\cdot\frac{[1-\exp(-2R^{2}/w_{0}^{2})]^{2}}{1-\exp(-4R^{2}/w_{0}^{2})}$, where $N_{\text{beam}}=n\cdot[\int_{V}I_{\mathrm{G}}(\mathbf{r})d^{3}r]^{2}\slash\int_{V}I_{\mathrm{G}}^{2}(\mathbf{r})d^{3}r$ is the number of atoms in the beam, and $V_{\text{beam}}=\pi Lw_{0}^{2}$ is the beam volume [@Shah2010highBWmagnetometryAndAnalysis]. [^1]: For long-lived solutions of the diffusion equation, the flux towards the wall $\hat{\mathbf{n}}\cdot\boldsymbol{\nabla}\hat{\mathbf{s}}$ is of the same order as $\hat{\mathbf{s}}/R$. [^2]: This limit is obtained only when $\lambda\ll R$, which is also necessary for the validity of the diffusion equation. [^3]: Excess decay and noise due to the modes $m,n\ge70$ is introduced along the lines of Eq. (S4) in Ref. [@weakcollisions2019arxiv]. [^4]: Here in 1D, we use the relation $D=\lambda\bar{v}$ between the diffusion coefficient and the kinetic parameters. In 3D, this becomes $D=\lambda\bar{v}/3$, such that $\cot\left(kL/2\right)=\frac{2}{3}\frac{1+e^{-1/N}}{1-e^{-1/N}}\lambda k$. 
$\varpi$ has similar dimensionality dependence.
--- abstract: 'In extracting predictions from theories that describe a multiverse, we face the difficulty that we must assess probability distributions over possible observations, prescribed not just by an underlying theory, but by a theory together with a conditionalization scheme that allows for (anthropic) selection effects. This means we usually need to compare distributions that are consistent with a broad range of possible observations, with actual experimental data. One controversial means of making this comparison is by invoking the ‘principle of mediocrity’: that is, the principle that we are typical of the reference class implicit in the conjunction of the theory and the conditionalization scheme. In this paper, I quantitatively assess the principle of mediocrity in a range of cosmological settings, employing ‘xerographic distributions’ to impose a variety of assumptions regarding typicality. I find that for a fixed theory, the assumption that we are typical gives rise to higher likelihoods for our observations. If, however, one allows both the underlying theory and the assumption of typicality to vary, then the assumption of typicality does not always provide the highest likelihoods. Interpreted from a Bayesian perspective, these results support the claim that when one has the freedom to consider different combinations of theories and xerographic distributions (or different ‘frameworks’), one should favor the framework that has the highest posterior probability; and then from this framework one can *infer*, in particular, how typical we are. In this way, the invocation of the principle of mediocrity is more questionable than has been recently claimed.' 
author: - Feraz Azhar bibliography: - 'azhar\_15\_BIB\_v2.bib' title: Testing typicality in multiverse cosmology --- Introduction {#SEC:Introduction} ============ A generic prediction of the theories that underpin our current understanding of the large-scale structure of the universe is that the observable universe is not all that exists, and that we may be part of a vast landscape of (as yet) unobserved domains where the fundamental constants of nature, and perhaps the effective laws of physics more generally, vary. The predominant approach to characterizing this variability rests on theory-generated probability distributions that describe the statistics of constants associated with the standard models of particle physics and cosmology. The hope remains that plausible descriptions of such multi-domain universes (henceforth ‘multiverses’), generated, for example, from inflationary cosmology [@vilenkin_83; @linde_83; @linde_86] or the string theory landscape [@bousso+polchinski_00; @kachru+al_03; @freivogel+al_06; @susskind_07], will yield prescriptions for calculating these distributions in unambiguous ways. Subsequent comparisons with our observations would allow us to ascertain which multiverse models are indeed favored. To be more precise, one expects theories that describe a multiverse to set down a likelihood for observations we might make, given both the theory under consideration and conditions that restrict the vast array of domains to ones in which we might arise. This latter conditionalization is naturally couched in terms of conditions necessary for the existence of ‘us’, as defined by relevant features of the theory. 
The need for such ‘anthropic’ conditionalization, as captured, for example, in what has become known as Carter’s ‘Weak Anthropic Principle’ [@carter_74], is predicated on the presumption that most of the domains described by theories of the multiverse will not give rise to the specialized structures we see around us, nor indeed to complex biological life [@hartle_07]. Under this scenario, any observation we might make, conditionalized on theory alone, would prove to be unlikely; one should therefore restrict one’s attention to relevant domains so as to secure relevant probabilities for possible observations. An appropriate conditionalization scheme might make our observations more likely: but how likely should they be before we can count them as having been successfully predicted by the conjunction of a theory and a conditionalization scheme? One proposed solution to this problem is known as the ‘principle of mediocrity’ [@vilenkin_95]: in more current terminology, it assumes that we should reason as though we are typical members of a suitable reference class (see also [@gott_93; @page_96; @bostrom_02]). Under this assumption, for appropriately conditionalized distributions, as long as our observations are within some ‘typical’ range according to the distribution, we can count them as being successfully predicted. The assessment of this assumption is the topic of this paper, and constitutes a central concern in extracting predictions from any theory of the multiverse. The principle of mediocrity, or the ‘assumption of typicality’—as it will also be referred to in this paper—is not without its critics [@weinstein_06; @smolin_07; @hartle+srednicki_07]. A key issue involves how one defines the reference class with respect to which we are typical [@garriga+vilenkin_08]. 
This problem is even more stark given our ignorance of who or what we are trying to characterize, and the precise physical constraints we need to implement in order to do so [@weinstein_06; @azhar_14]. Rather than assessing this principle from a primarily conceptual point of view, we propose to test it *quantitatively*. In particular, we investigate how well it does in terms of accounting for our data in comparison to other assumptions regarding typicality, in a restricted set of multiverse cosmological scenarios. We do this by extending the program of @srednicki+hartle_10 to explore a variety of assumptions regarding typicality, building these assumptions into likelihoods for possible observations through the use of ‘xerographic distributions’ (in the terminology of @srednicki+hartle_10). The goal then is to find the conjunction of a theory and a xerographic distribution (which they call a ‘framework’) that gives rise to the highest likelihoods for our data. I will show that (1) for a fixed theory, the assumption that we are typical gives rise to higher likelihoods for our observations; but (2) if one allows both the underlying theory and the assumption of typicality to vary, then the assumption of typicality *does not* always provide the highest likelihoods. Interpreted from a Bayesian perspective, these results provide support for the claim that one should try to identify the framework with the highest posterior probability; and then from this framework, one can *infer* how typical we are. The structure of this paper is as follows. In section \[SEC:Xerographic\_Distributions\], I outline the general formalism within which I will be investigating assumptions regarding typicality, including the introduction of a statement of the principle of mediocrity adapted to our specific purposes. 
Section \[SEC:Multiverse\_Model\] introduces the multiverse model we will analyze (which is a generalization of the cosmological model of @srednicki+hartle_10), derives the central equations for relevant likelihoods from which we will eventually test assumptions regarding typicality, and shows that these likelihoods reduce to the results of @srednicki+hartle_10 under the appropriate simplifying assumptions. Explicit tests of the principle of mediocrity are presented in section \[SEC:Results\], and we conclude in section \[SEC:Discussion\] with a discussion of the context in which one should interpret the results of these tests. So I turn first to a description of the general formalism within which I will be working. Xerographic Distributions {#SEC:Xerographic_Distributions} ========================= Generalities {#SEC:Generalities} ------------ I begin by outlining the formalism of @srednicki+hartle_10, recasting relevant parts of their discussion to suit our computations in the next section. In general multiverse scenarios, it is possible that any reference class of which we believe we are a member, may have multiple members. Indeed, it is plausible that our accumulated data $D_{0}$, which gives a detailed description of our physical surroundings, might be replicated at different spacetime locations in the multiverse. A theory $\mathcal{T}$ describing this multiverse scenario, will, in principle, generate a likelihood for this data which we will denote by $P(D_{0}|\mathcal{T})$. This corresponds to a ‘third-person’ likelihood in the terminology of @srednicki+hartle_10—that is, a likelihood that does not include any information about which member of our reference class we might be. The quantity that takes this *indexical* information into account is a ‘first-person’ likelihood and will be denoted by $P^{(1p)}(D_{0}|\mathcal{T}, \xi)$, in accordance with the notation of @srednicki+hartle_10. 
The added ingredient here is the *xerographic distribution* $\xi$, a probability distribution that we specify *by assumption*, that encodes our belief about which member of our reference class we happen to be. Its functional form is independent of a given theory $\mathcal{T}$, and together with such a theory, constitutes a ‘framework’ $(\mathcal{T}, \xi)$ (in the notation of [@srednicki+hartle_10]). Thus the transition from a third-person likelihood $P(D_{0}|\mathcal{T})$ to a first-person likelihood $P^{(1p)}(D_{0}|\mathcal{T}, \xi)$ is effected by two ideas: (i) the conditionalization scheme, which (as mentioned in section \[SEC:Introduction\]) specifies our reference class, and (ii) a probability distribution over members of our reference class. In the case where there exist $L$ members of our reference class at spacetime locations $x_{l}$ for $l = 1,2,\dots,L$, we let the probability that we are the member at location $x_{l}$ be denoted by $\xi_{l}$. So the xerographic distribution is just the sequence of probabilities $\xi := \{\xi_{l}\}_{l=1}^{L}$, and will always be chosen so that it is normalized to unity: $\sum_{l=1}^{L}\xi_{l}=1$. We will assume throughout this paper that the total number of members $L$ is finite. The assumption that we are a typical member of this reference class, is then the statement that the probability that we are any one of these members is the same, and thus the xerographic distribution is given by the uniform distribution: $\xi_{l} = \frac{1}{L}$. Correspondingly, the assumption that we are atypical of this reference class will be given by xerographic distributions that deviate from the uniform distribution. How then do we propose to compute the first-person likelihood $P^{(1p)}(D_{0}|\mathcal{T}, \xi)$? To do so, we will introduce a few conventions that will form the basis of the general discussion here, and also the basis of the more specific examples we will pursue in the following two sections. 
Assume then that there exists a finite set of $N$ distinct domains in a multiverse, within each of which ‘observers’ may exist with some probability that may depend on the particular domain being considered. We will only track the existence or non-existence of observers in each domain, without concerning ourselves with issues such as: whether or not these observers are anything like ‘us’[^1], precisely where in these domains these observers might be located, and how many observers might exist in a domain. We thereby consider only whether a domain has observers in it or does not. Thus, there exists a total of $2^{N}$ possible configurations of observers in domains across the entire multiverse. We will denote each such configuration by an $N$-dimensional vector $\vec{\sigma}$ of binary digits, where $\sigma_{i} = 1$ denotes the existence of observers in domain $i$, and $\sigma_{i} = 0$ denotes there are no observers in that domain. The set of all such configurations $\vec{\sigma}$ will be denoted by $\mathcal{K}$, and we will denote the probability of a configuration $\vec{\sigma}$ by $P(\vec{\sigma})$. We live inside one of these domains and observe some data. Let $D_{0}$ denote the data that there exist observers who see this same data. We will take a theory $\mathcal{T}$ to describe an *observable* fact about each domain: in the model introduced in section \[SEC:Multiverse\_Model\], this will be the value of a binary quantity. So the probabilities $P(\vec{\sigma})$ will be given in general, independently of $\mathcal{T}$. But $\mathcal{T}$ will determine the subset $\mathcal{K}_{D_{0}}({\mathcal{T}})$ of those configurations $\vec{\sigma}$ in which $D_{0}$ is observed ($\mathcal{K}_{D_{0}}({\mathcal{T}})\subset\mathcal{K}$). 
The sum of the probabilities of the configurations $\vec{\sigma}$ belonging to the subset $\mathcal{K}_{D_{0}}({\mathcal{T}})$ is just the theory-generated third-person likelihood for our data: $P(D_{0}|\mathcal{T}) = \sum_{\vec{\sigma}\in\mathcal{K}_{D_{0}}(\mathcal{T})}P(\vec{\sigma})$. This specifies the likelihood that there is at least one observing system in the multiverse that witnesses the data $D_{0}$. To derive a first-person likelihood, we need to assume a xerographic distribution by first specifying a reference class that could plausibly describe ‘us’. Two natural reference classes to consider (which we will further develop in section \[SEC:Multiverse\_Model\]), following [@srednicki+hartle_10], are $(i)$ the reference class of all observers who witness our data $D_{0}$, and $(ii)$ the reference class of all observers (who do not necessarily see our data $D_{0}$). In either case, for any particular possible observer configuration $\vec{\sigma}$, the xerographic distribution encodes the probability that we are the reference-class member at some specified location. Owing to the simplicity of our model, a ‘location’ will correspond to a multiverse domain. In general, for a given $\mathcal{T}$ and given $\vec{\sigma}\in\mathcal{K}_{D_{0}}(\mathcal{T})$, only a subset of locations $L_{D_{0}}(\vec{\sigma},\mathcal{T})$ will contain observers who see our data $D_{0}$. 
From these considerations, we can write down the appropriate first-person likelihood $P^{(1p)}(D_{0}|\mathcal{T}, \xi)$ as follows: $$\label{EQN:1stPerson} P^{(1p)}(D_{0}|\mathcal{T}, \xi) := \sum_{\vec{\sigma}\in\mathcal{K}_{D_{0}}(\mathcal{T})} P(\vec{\sigma}) \sum_{l'\in L_{D_{0}}(\vec{\sigma},\mathcal{T})}\xi_{l'}\;.$$ To be clear, *for each configuration* $\vec{\sigma}$ in the above sum over configurations, one needs to compute the appropriately normalized xerographic distribution, subsequently summing that distribution over only those locations that could indeed correspond to us.[^2] We note that for the reference class of all observers who witness our data $D_{0}$ (called $(i)$ above): for each theory $\mathcal{T}$ and each configuration $\vec{\sigma}\in\mathcal{K}_{D_{0}}(\mathcal{T})$, the subset of locations that contain observers who witness our data $D_{0}$, namely $L_{D_{0}}(\vec{\sigma},\mathcal{T})$, is the entirety of the set of locations over which the xerographic distribution can be nonzero. Thus, by the normalization condition the xerographic distribution satisfies, we have, in Eq. (\[EQN:1stPerson\]): $\sum_{l'\in L_{D_{0}}(\vec{\sigma},\mathcal{T})}\xi_{l'} = 1$. For this reference class therefore, the first-person likelihood is independent of the functional form of the xerographic distribution, and reduces to the appropriate third-person likelihood: $P^{(1p)}(D_{0}|\mathcal{T}, \xi) \longrightarrow P(D_{0}|\mathcal{T})$. Preferred xerographic distributions {#SEC:Bayesian} ----------------------------------- The likelihood introduced in Eq. (\[EQN:1stPerson\]) can be analyzed from a Bayesian perspective, which, under the appropriate conditions, allows us to pick out a preferred xerographic distribution. 
As detailed by @srednicki+hartle_10, one can compute the posterior probability $P^{(1p)}(\mathcal{T}, \xi|D_{0})$ by Bayes’ theorem $$P^{(1p)}(\mathcal{T}, \xi | D_{0}) = \frac{P^{(1p)}(D_{0}|\mathcal{T}, \xi) P(\mathcal{T}, \xi)}{\sum_{(\mathcal{T}, \xi)}P^{(1p)}(D_{0}|\mathcal{T}, \xi) P(\mathcal{T}, \xi)},$$ where $P(\mathcal{T}, \xi)$ is the prior probability of the framework $(\mathcal{T}, \xi)$. We will be working below in a simplified setting where we assume we have a set of equally plausible frameworks at our disposal. That is, each framework enters into our Bayesian analysis with an equal prior. This implies the posterior probability is simply proportional to the likelihood: $$P^{(1p)}(\mathcal{T}, \xi|D_{0}) \propto P^{(1p)}(D_{0}|\mathcal{T}, \xi),$$ and the question of which framework is to be preferred (by virtue of having the highest posterior probability), becomes a question of which framework gives rise to the highest likelihood $P^{(1p)}(D_{0}|\mathcal{T}, \xi)$. It is from this preferred framework that one can select a preferred xerographic distribution. Given that we will use these distributions to encode assumptions regarding typicality, this selection will allow us to infer, in particular, how typical we are. Testing mediocrity {#SEC:Testing_typicality} ------------------ How then do we propose to test the principle of mediocrity? In the language introduced above, we can formulate a more precise version of the principle of mediocrity as follows: > [**PM**]{}: We are typical of the entirety of the reference class of observers in the multiverse who measure our data $D_{0}$. That is, in cases where there are (a finite number of) $L$ observers who measure $D_{0}$, situated at spacetime locations $x_{l}$ for $l = 1,2,\dots, L$, the probability that we are any one of these observers is $\frac{1}{L}$. The corresponding xerographic distribution is given by $\xi_{l} =\frac{1}{L}$, for $l = 1, 2, \dots, L$. 
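Under the equal-priors assumption just described, ranking frameworks by posterior probability is the same as ranking them by likelihood. A minimal numerical sketch of this point (the framework labels and likelihood values below are purely hypothetical, not drawn from the paper):

```python
# With equal priors P(T, xi), the posterior of Bayes' theorem reduces to the
# likelihood divided by the sum of likelihoods over all candidate frameworks.
likelihoods = {"F1": 0.40, "F2": 0.25, "F3": 0.10}  # assumed P^(1p)(D0 | T, xi)

total = sum(likelihoods.values())
posterior = {f: lik / total for f, lik in likelihoods.items()}

# The framework with the highest posterior is the one with the highest
# likelihood, so model comparison reduces to likelihood comparison.
best = max(posterior, key=posterior.get)
```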
In what follows, we will denote the xerographic distribution implementing [**PM**]{} by $\xi^{\textrm{PM}}$. Any other xerographic distribution, with either a non-uniform distribution over the reference class referred to in [**PM**]{}, or else any distribution over a reference class that is not the one referred to there, constitutes some form of *non-mediocrity*. With the assumptions of section \[SEC:Bayesian\] in mind, we propose to test the principle of mediocrity by comparing likelihoods $P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{PM}})$ against $P^{(1p)}(D_{0}|\mathcal{T}^{\star}, \xi^{\star})$, where $\xi^{\star}\neq\xi^{\textrm{PM}}$, and where we allow for the possibility that the underlying theory $\mathcal{T}$ can vary as well. Extending the cosmological model of Srednicki and Hartle {#SEC:Multiverse_Model} ======================================================== To explicate the schema of section \[SEC:Xerographic\_Distributions\], and to probe the plausibility of [**PM**]{} in more concrete settings, we now construct a generalization of the cosmological toy model presented by @srednicki+hartle_10 (see also their earlier paper [@hartle+srednicki_07]). In section \[SEC:SH\_multiverse\], we will demonstrate that our results for likelihoods for this extended model reproduce theirs under the appropriate simplifying assumptions. Model preliminaries {#SEC:Model_preliminaries} ------------------- Let $\mathcal{V} = \{1,2,\dots,N\}$ label $N$ distinct domains in a multiverse, each of which is assumed to have one of two ‘colors’, *red* or *blue*, corresponding to two possible values of some physical observable. The precise interpretation of this observable will not matter, and we will rely only on the fact of it taking two distinct values. Observers may exist in these domains with some probability, where we assume this probability is independent of the color of any domain. 
As outlined in section \[SEC:Generalities\], we characterize this probability by first introducing a vector of observer occupancy (or a ‘configuration’) via the notation $\vec{\sigma} := (\sigma_{1}, \sigma_{2}, \dots, \sigma_{N})$, where, for $i=1,2,\dots,N$, $\sigma_{i}=1$ denotes the existence of observers in domain $i$, and $\sigma_{i}=0$ denotes there are no observers in domain $i$. There will, of course, be $2^N$ such configurations $\vec{\sigma}$, the set of which we label $\mathcal{K}$. We will further assume that the probability of observers in a domain is independent of that for all the other domains, but also that these probabilities are *not* generally the same.[^3] This implies that the probability of $\vec{\sigma}$, denoted by $P(\vec{\sigma})$, factorizes into a product of marginals $P_{i}(\sigma_{i})$: $P(\vec{\sigma}) = \prod_{i=1}^{N}P_{i}(\sigma_{i})$. If we let $p_{i}$ denote the probability of the existence of observers in domain $i$, then $(1-p_{i})$ is the probability of no observers in that domain, and since $\sigma_{i} = \textrm{1 or 0}$, we can write $P_{i}(\sigma_{i}) = p_{i}^{\sigma_{i}}(1-p_{i})^{1-\sigma_{i}}$, giving $$\label{EQN:Prob_Joint} P(\vec{\sigma}) = \prod_{i=1}^{N}p_{i}^{\sigma_{i}}(1-p_{i})^{1-\sigma_{i}}.$$ As to our data, we assume: we exist within one of these domains and observe the color red. That is, our data $D_{0}$ is - There exists a domain with observers in it who see *red*. To write down the first-person likelihood for our data in accord with Eq. (\[EQN:1stPerson\]), we also need to specify the theories and the xerographic distributions we are interested in. The theories we consider are ones that specify the color of each of the $N$ domains. In particular, each theory will be denoted by a vector $\mathcal{T} = (T_{1}, T_{2}, \dots, T_{N})$, where, for $i=1,2,\dots,N$, $T_{i} = 1$ when the theory predicts that domain $i$ is red, and $T_{i} = 0$ when the theory predicts it is blue. 
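Eq. (\[EQN:Prob\_Joint\]) is straightforward to check numerically. The following sketch (the function name and the values of the $p_{i}$ are ours, purely illustrative) confirms that the $2^{N}$ configuration probabilities sum to unity, as independence across domains guarantees:

```python
from itertools import product

def config_prob(sigma, p):
    """Eq. (Prob_Joint): P(sigma) = prod_i p_i^{sigma_i} (1 - p_i)^{1 - sigma_i}."""
    out = 1.0
    for s, pi in zip(sigma, p):
        out *= pi if s else (1.0 - pi)
    return out

p = [0.3, 0.5, 0.8]  # assumed observer probabilities for N = 3 domains
total = sum(config_prob(sigma, p)
            for sigma in product([0, 1], repeat=len(p)))
# total is 1: the 2^N configurations exhaust the probability space.
```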
The xerographic distributions we consider will be defined to take the value zero for each domain outside some nonempty subset (among all possible (nonempty) subsets) of domains $\mathcal{V}=\{1,2,\dots,N\}$ in the multiverse. To fix notation, if we let $\mathcal{C}$ denote the power set of $\mathcal{V}$ excluding the empty set, i.e., $\mathcal{C} = 2^{\mathcal{V}}\backslash \emptyset$, then a nonempty subset of domains $c\in\mathcal{C}$, outside which a xerographic distribution must be zero, will be denoted by $c=\{v_{1}, v_{2}, \dots, v_{M}\}$ (where, of course, $1\leq M\leq N$). The $v_{\alpha}$’s are simply integers labeling different domains in $\mathcal{V}$. We will explicitly write xerographic distributions as $\xi_{c}$, with the subscript $c$ indicating the subset over which they can be nonzero (so this subset $c$ is a superset of the support of a xerographic distribution). For the sake of simplicity, we will assume that the elements of $c$ are listed in increasing order, though nothing physical in what follows will depend on this. Then for each possible $c$, our xerographic distributions will fall into two classes: - [**C1**]{}: The first class of xerographic distributions assumes that we are typical among instances of observers who see our data $D_{0}$, and will be denoted by $\xi^{\textrm{typD}}_c$. - [**C2**]{}: The second class of xerographic distributions assumes we are typical among instances of observers, regardless of what data they see (i.e., regardless of whether they measure red or blue for their particular domain). These distributions will be denoted by $\xi^{\textrm{typO}}_c$. Note that only *one* of the xerographic distributions imposes [**PM**]{}, namely, $\xi^{\textrm{PM}}:=\xi^{\textrm{typD}}_\mathcal{V}$. That is, the principle of mediocrity is represented by the distribution that assumes we are typical among *all* instances of our data over the multiverse (note that $c = \mathcal{V}$ for this distribution). 
Under the assumptions laid out in section \[SEC:Testing\_typicality\], any other xerographic distribution encodes some form of *non-mediocrity*. With these preliminaries in mind, we can construct the analog of Eq. (\[EQN:1stPerson\]) for our multiverse model. We will do so separately for [**C1**]{} and [**C2**]{} above. We turn first to [**C2**]{}, constructing the first-person likelihood for our data $D_{0}$ under the assumption that we are typical among instances of observers, regardless of their data, as encoded by $\xi^{\textrm{typO}}_c$. Construction of $P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typO}}_c)$ {#SEC:Construction_TypO} -------------------------------------------------------------------- As indicated in the outer sum of Eq. (\[EQN:1stPerson\]), the definition of the first-person likelihood for our data $D_{0}$ sums over the subset $\mathcal{K}_{D_{0}}(\mathcal{T})$ of configurations that contain our data, according to theory $\mathcal{T}$. Keeping in mind that any xerographic distribution must be zero outside a nonempty subset $c=\{v_{1}, v_{2}, \dots, v_{M}\}$ of multiverse domains $\mathcal{V}$, we let $\mathcal{K}_{D_{0}}(c,\mathcal{T})\subset\mathcal{K}$ denote the corresponding subset, i.e. the subset of all configurations $\vec{\sigma}$ that contain instances of our data $D_{0}$, according to theory $\mathcal{T}$, within the domains specified by $c$. Next we need to characterize the xerographic distribution $\xi^{\textrm{typO}}_c$. For a configuration $\vec{\sigma}\in\mathcal{K}_{D_{0}}(c,\mathcal{T})$, the total number of observers within $c=\{v_{1}, v_{2}, \dots, v_{M}\}$ is simply $\sum_{\alpha = 1}^{M}\sigma_{v_{\alpha}}$; and the xerographic distribution corresponding to $\xi^{\textrm{typO}}_c$ assigns to the location $x_{l}$ of each observer-system in $c$ the value $\xi_{l} = 1/(\sum_{\alpha = 1}^{M}\sigma_{v_{\alpha}})$ (recalling that a location in our model is simply a multiverse domain). 
In the computation of the first-person likelihood, we sum the xerographic distribution over only those locations that contain instances of our data $D_{0}$. To that end, following our notation in Eq. (\[EQN:1stPerson\]), for each configuration $\vec{\sigma}\in\mathcal{K}_{D_{0}}(c,\mathcal{T})$, let $L_{D_{0}}(c, \vec{\sigma}, \mathcal{T})$ denote the subset of locations in $c$ where our data $D_{0}$ exists, according to $\mathcal{T}$. Note that the total number of instances of our data in $c$ is just the size of this set, $|L_{D_{0}}(c, \vec{\sigma}, \mathcal{T})|$. This can be written as $|L_{D_{0}}(c, \vec{\sigma}, \mathcal{T})| = \sum_{\beta = 1}^{M}\sigma_{v_{\beta}}T_{v_{\beta}}$: the dummy variable $v_{\beta}$ on the right hand side shows the $c$-dependence; and although there is no explicit $D_{0}$ dependence, recall that $T_{v_{\beta}}=1$ iff domain $v_{\beta}$ is red, which, when $\sigma_{v_{\beta}}=1$, corresponds to the existence of our data $D_{0}$ in domain $v_{\beta}$. We can now calculate a closed-form expression for $P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typO}}_c)$, by directly adapting Eq. 
(\[EQN:1stPerson\]) for the particular case considered here: $$\begin{aligned} P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typO}}_c) &=& \sum_{\vec{\sigma}\in\mathcal{K}_{D_{0}}(c,\mathcal{T})} P(\vec{\sigma}) \sum_{l'\in L_{D_{0}}(c, \vec{\sigma},\mathcal{T})}\xi_{l'} \nonumber \\ &=& \sum_{\vec{\sigma}\in\mathcal{K}_{D_{0}}(c,\mathcal{T})} \prod_{i=1}^{N}p_{i}^{\sigma_{i}}(1-p_{i})^{1-\sigma_{i}} \sum_{l'\in L_{D_{0}}(c, \vec{\sigma},\mathcal{T})}\xi_{l'} \nonumber \\ &=& \sum_{\vec{\sigma}\in\mathcal{K}_{D_{0}}(c,\mathcal{T})} \prod_{i=1}^{N}p_{i}^{\sigma_{i}}(1-p_{i})^{1-\sigma_{i}} \sum_{l'\in L_{D_{0}}(c, \vec{\sigma},\mathcal{T})}\frac{1}{\sum_{\alpha = 1}^{M}\sigma_{v_{\alpha}}} \nonumber \\ &=& \sum_{\vec{\sigma}\in\mathcal{K}_{D_{0}}(c,\mathcal{T})} \prod_{i=1}^{N}p_{i}^{\sigma_{i}}(1-p_{i})^{1-\sigma_{i}} \frac{1}{\sum_{\alpha = 1}^{M}\sigma_{v_{\alpha}}} |L_{D_{0}}(c, \vec{\sigma}, \mathcal{T})| \nonumber \\ &=& \sum_{\vec{\sigma}\in\mathcal{K}_{D_{0}}(c,\mathcal{T})} \prod_{i=1}^{N}p_{i}^{\sigma_{i}}(1-p_{i})^{1-\sigma_{i}} \left(\frac{\sum_{\beta = 1}^{M} \sigma_{v_{\beta}}T_{v_{\beta}}}{\sum_{\alpha = 1}^{M} \sigma_{v_{\alpha}}}\right); \label{EQN:1stPersonO}\end{aligned}$$ where we have used Eq. (\[EQN:Prob\_Joint\]) to obtain the second line, and the fact that the value of the xerographic distribution is the same at each location ${l'\in L_{D_{0}}(c, \vec{\sigma},\mathcal{T})}$ to obtain the fourth line. Construction of $P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typD}}_c)$ {#SEC:Construction_TypD} -------------------------------------------------------------------- We turn next to [**C1**]{}, and the construction of the first-person likelihood for our data $D_{0}$, assuming we are typical among instances of our data, as encoded by $\xi^{\textrm{typD}}_c$. Having completed the above construction, it is straightforward to compute this. 
For a configuration $\vec{\sigma}\in\mathcal{K}_{D_{0}}(c,\mathcal{T})$, since the total number of observers who see our data within $c$ is $\sum_{\alpha = 1}^{M}\sigma_{v_{\alpha}}T_{v_{\alpha}}$, the xerographic distribution corresponding to $\xi^{\textrm{typD}}_c$ is $\xi_{l} = 1/(\sum_{\alpha = 1}^{M}\sigma_{v_{\alpha}}T_{v_{\alpha}})$ for each location $x_{l}$ of our data in $c$. The term that imposes the xerographic distribution in the computation of the first-person likelihood, namely $\sum_{l'\in L_{D_{0}}(c, \vec{\sigma},\mathcal{T})}\xi_{l'}$, is then simply unity: $\sum_{l'\in L_{D_{0}}(c, \vec{\sigma},\mathcal{T})}\xi_{l'}=1$. Hence, we find $$\label{EQN:1stPersonD} P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typD}}_c) = \sum_{\vec{\sigma}\in\mathcal{K}_{D_{0}}(c,\mathcal{T})} \prod_{i=1}^{N}p_{i}^{\sigma_{i}}(1-p_{i})^{1-\sigma_{i}}.$$ Note that we recover the analog of the claim in the last paragraph of section \[SEC:Generalities\]: that for [**C1**]{}, the first-person likelihood $P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typD}}_c)$, is equal to the appropriate third-person likelihood—which expresses the likelihood that there is at least one observing system that witnesses our data $D_{0}$, among the multiverse domains specified by $c$. Equations (\[EQN:1stPersonO\]) and (\[EQN:1stPersonD\]) are the appropriate generalizations of the likelihoods calculated for the cyclic cosmological model of @srednicki+hartle_10. In order to make this connection more precise, and to set the stage for some of the computations described in section \[SEC:Results\], we now show that in the case where our multiverse model reduces to the cosmological model of @srednicki+hartle_10, Eqs. (\[EQN:1stPersonO\]) and (\[EQN:1stPersonD\]) indeed reduce to their likelihoods. 
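For small $N$, Eqs. (\[EQN:1stPersonO\]) and (\[EQN:1stPersonD\]) can be checked by brute force over all $2^{N}$ configurations. The Python sketch below (function names and parameter values are ours, purely illustrative) implements both likelihoods, computing the configuration probability of Eq. (\[EQN:Prob\_Joint\]) inline; domains are indexed from 0:

```python
from itertools import product

def first_person_likelihood(T, c, p, typ="D"):
    """Brute-force Eqs. (1stPersonO)/(1stPersonD).

    T   : theory vector (T_i = 1 means domain i is red),
    c   : domains on which the xerographic distribution may be nonzero,
    p   : per-domain observer probabilities,
    typ : "D" for xi^typD_c, "O" for xi^typO_c.
    """
    N = len(T)
    total = 0.0
    for sigma in product([0, 1], repeat=N):
        n_obs = sum(sigma[v] for v in c)           # observers within c
        n_data = sum(sigma[v] * T[v] for v in c)   # instances of D0 within c
        if n_data == 0:                            # sigma not in K_D0(c, T)
            continue
        prob = 1.0                                 # Eq. (Prob_Joint)
        for s, pi in zip(sigma, p):
            prob *= pi if s else (1.0 - pi)
        weight = 1.0 if typ == "D" else n_data / n_obs
        total += prob * weight
    return total

# Example: N = 3, uniform p = 0.4, theory predicting the first two domains red.
p = [0.4, 0.4, 0.4]
T = (1, 1, 0)
lik_D = first_person_likelihood(T, c=range(3), p=p, typ="D")
lik_O = first_person_likelihood(T, c=range(3), p=p, typ="O")
```

Since the typO weight $n_{\textrm{data}}/n_{\textrm{obs}} \leq 1$ multiplies each contributing term, the code makes the inequality $P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typO}}_{c}) \leq P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typD}}_{c})$ used later in the paper manifest.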
A multiverse of cycles {#SEC:SH_multiverse} ---------------------- Consider, then, the case where the $N$ domains of the multiverse model introduced in section \[SEC:Model\_preliminaries\] are stretched out in time, so that the index $i$ that labels a domain, corresponds to the order in which the domain occurs in what we will refer to as a ‘multiverse of cycles’. Assume further that the xerographic distribution can be nonzero for each of these domains, so that $c=\mathcal{V}$. Furthermore, assume, as in [@srednicki+hartle_10], that the probability of observers existing in each of these cycles is the same (as well as independent of whether they exist in other cycles), and is given by $p_{i} = p$. We wish to first calculate the likelihood of our data under the assumption that we are typical among all instances of observers, i.e., $P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typO}}_\mathcal{V})$. Under these assumptions, Eq. (\[EQN:1stPersonO\]) reduces to $$P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typO}}_\mathcal{V}) = \sum_{\vec{\sigma}\in\mathcal{K}_{D_{0}}(\mathcal{V},\mathcal{T})} p^{\sum_{i=1}^{N}\sigma_{i}}(1-p)^{N-\sum_{j=1}^N\sigma_{j}}\left(\frac{\sum_{m = 1}^{N} \sigma_{m}T_{m}}{\sum_{n = 1}^{N} \sigma_{n}}\right). \label{EQN:1stPersonOHS}$$ The large amount of symmetry in this expression allows us to organize the outermost sum by separately considering configurations $\vec{\sigma}$ according to the total number of observers $\sum_{i=1}^{N}\sigma_{i}$ in each configuration. We will call the total number of observers in each configuration $n_{O}$, and note that this will range from 1 to $N$ for consistency with the fact that there exists at least one cycle with observers, that is, for consistency with our data (recall that we are not counting how many individuals are in each domain, just whether domains house observers or not). 
Equation (\[EQN:1stPersonOHS\]) can then be thought of as $P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typO}}_\mathcal{V}) = \sum_{n_{O}=1}^{N}F(p,n_{O}, N_{R})$, for some function $F$ we will compute below, where $N_{R}$ denotes the total number of red cycles predicted by the theory $\mathcal{T}$. There are two obvious cases to consider in this sum: namely, $1 \leq n_{O} \leq N_{R}$ and $N_{R} < n_{O} \leq N$, and we formally treat these two cases separately. For $1 \leq n_{O} \leq N_{R}$, one can generate an expression for $F(p, n_{O}, N_{R})$ by sequentially considering all possible numbers of observers (out of a maximum of $n_{O}$ observers) placed in $N_{R}$ red cycles. In general, one finds $$\begin{aligned} F(p,n_{O},N_{R})&=& p^{n_{O}}(1-p)^{N-n_{O}}\frac{1}{n_{O}}\left[\binom{N_{R}}{n_{O}} n_{O} + \binom{N_{R}}{n_{O}-1}\binom{N-N_{R}}{1} (n_{O}-1)+\cdots+\binom{N_{R}}{1}\binom{N-N_{R}}{n_{O}-1}\right]\nonumber \\ &=& p^{n_{O}}(1-p)^{N-n_{O}}\frac{1}{n_{O}}\sum_{k=0}^{n_{O}-1}\binom{N_{R}}{n_{O}-k}\binom{N-N_{R}}{k}(n_{O}-k)\nonumber \\ &=& p^{n_{O}}(1-p)^{N-n_{O}}\frac{1}{n_{O}}N_{R}\sum_{k=0}^{n_{O}-1}\binom{N-N_{R}}{k}\binom{N_{R}-1}{n_{O}-1-k}\nonumber \\ &=& p^{n_{O}}(1-p)^{N-n_{O}}\frac{1}{n_{O}}N_{R}\binom{N-1}{n_{O}-1}\nonumber \\ &=& p^{n_{O}}(1-p)^{N-n_{O}}N_{R}\frac{1}{N}\binom{N}{n_{O}}, \end{aligned}$$ where we have used a standard binomial identity to obtain the third and fifth lines, together with Vandermonde’s identity to obtain the fourth one. In the case where $N_{R} < n_{O} \leq N$, it is not difficult to show that we obtain precisely the same final result using a similar sequence of steps. Putting this all together, gives $$\begin{aligned} P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typO}}_\mathcal{V}) &=&\frac{N_{R}}{N}\sum_{n_{O}=1}^{N}\binom{N}{n_{O}}p^{n_{O}}(1-p)^{N-n_{O}}\nonumber \\ &=& \frac{N_{R}}{N}\left[1-(1-p)^{N}\right],\label{EQN:FP_SH_Nr_typO}\end{aligned}$$ agreeing with the corresponding expression (Eqs. 
(5.8) and (B5)) in [@srednicki+hartle_10]. A similar calculation (presented in the appendix) gives the correct formula for $P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{PM}})$ (recall that $\xi^{\textrm{PM}}:=\xi^{\textrm{typD}}_\mathcal{V}$ ), where $$\label{EQN:FP_SH_Nr} P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{PM}}) = 1-(1-p)^{N_{R}},$$ corresponding to Eq. (5.5) in [@srednicki+hartle_10]. In this way, the generalized multiverse model introduced in section \[SEC:Model\_preliminaries\] reduces to the cyclic model of @srednicki+hartle_10 under the appropriate simplifying assumptions. Evaluating the principle of mediocrity {#SEC:Results} ====================================== With the results of the last section in hand, we can now address the central conceptual task of this paper: namely, to assess the predictive power (understood in Bayesian terms) of the principle of mediocrity, under the scheme introduced in section \[SEC:Xerographic\_Distributions\]. We will develop the answers to three central questions in turn: (A) What framework produces the highest likelihoods? (B) For a fixed theory, what xerographic distribution gives rise to the framework with the highest likelihoods? (C) Does the principle of mediocrity generally provide the framework with the highest likelihoods? The best performing framework {#SEC:Best_Performing} ----------------------------- To begin our analysis, we show that the framework whose likelihood attains the supremum of the likelihoods for all frameworks considered, and for all assignments of probabilities $\{p_{i}\}_{i=1}^{N}$ to multiverse domains, is $(\mathcal{T}_{\textrm{all red}}, \xi^{\textrm{PM}})$: that is, the theory that predicts each of the domains is red ($\mathcal{T}=\mathcal{T}_{\textrm{all red}}$), together with the xerographic distribution corresponding to the principle of mediocrity ($\xi=\xi^{\textrm{PM}}:=\xi^{\textrm{typD}}_\mathcal{V}$). 
For this framework, the likelihood attained is the same as that with a xerographic distribution corresponding to typicality across all observers for $c=\mathcal{V}$, that is, for $\xi=\xi^{\textrm{typO}}_\mathcal{V}$. To show this, let the $p_{i}$’s be arbitrarily chosen but fixed. Note first that for arbitrary $c$ and $\mathcal{T}$, Eqs. (\[EQN:1stPersonO\]) and (\[EQN:1stPersonD\]) imply $P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typO}}_{c}) \leq P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typD}}_{c})$: as the same configurations $\vec{\sigma}\in\mathcal{K}_{D_{0}}(c,\mathcal{T})$ contribute to the sums in Eqs. (\[EQN:1stPersonO\]) and (\[EQN:1stPersonD\]), but in the case of Eq. (\[EQN:1stPersonO\]), each contributing term is multiplied by a factor that is less than or equal to 1. Now, the *maximal* value of $P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typD}}_{c})$ is attained when we choose $c$ and $\mathcal{T}$ such that $\mathcal{K}_{D_{0}}(c,\mathcal{T})$ includes the maximal number of configurations in the sum over (manifestly non-negative) probabilities of configurations. This will occur for the case where all multiverse domains are included ($c=\mathcal{V}$) and for the theory that predicts that all domains are red. Note finally that for $\mathcal{T} =\mathcal{T}_{\textrm{all red}}\equiv(1,1,\dots,1)$, $\sum_{\beta = 1}^{M} \sigma_{v_{\beta}}T_{v_{\beta}} = \sum_{\alpha = 1}^{M} \sigma_{v_{\alpha}}$ (regardless of $c$), and so the final term in brackets in Eq. (\[EQN:1stPersonO\]) is unity; hence the likelihood for the framework $(\mathcal{T}_{\textrm{all red}}, \xi^{\textrm{PM}})$, coincides with the likelihood for $(\mathcal{T}_{\textrm{all red}}, \xi^{\textrm{typO}}_\mathcal{V})$. To see how these conclusions manifest in a particular setting, consider the multiverse of cycles introduced in section \[SEC:SH\_multiverse\], which assumes that the probability of observers existing in any cycle is the same for each cycle and is given by $p$. 
We will consider the case where the xerographic distribution can be nonzero only on an initial segment of cycles starting with the first cycle and ending with some terminal cycle (after which the xerographic distribution is assumed to be zero). So for any number of cycles $N$, there are a total of $N$ such possible ‘cut-off schemes’. Figure \[FIG:SH\_N3\_likelihoods\] shows likelihoods as a function of $p$ for the case where $N=3$. ![image](azhar_15_Figure_1a){width="0.45\linewidth"} ![image](azhar_15_Figure_1b){width="0.45\linewidth"} We see that the highest likelihoods indeed occur for the framework with $(\mathcal{T}_{\textrm{all red}}, \xi^{\textrm{PM}})$ (uppermost trace in Fig. \[FIG:SH\_N3\_likelihoods\] (left)), and that these likelihoods coincide with the likelihoods for $(\mathcal{T}_{\textrm{all red}}, \xi^{\textrm{typO}}_\mathcal{V})$ (uppermost trace in Fig. \[FIG:SH\_N3\_likelihoods\] (right)). The best xerographic distribution for a fixed theory ---------------------------------------------------- It is an encouraging check of our intuition that the framework with the highest likelihood (and so also the highest posterior probability under the assumptions of section \[SEC:Bayesian\]) involves the theory that predicts all domains are red. A natural question that arises is: for a *fixed theory*, what xerographic distribution leads to the best performing framework? It does not take much more work to show that under these circumstances, the framework whose likelihood attains the supremum of the likelihoods over all xerographic distributions considered, for all assignments of probabilities $p_{i}$ to the multiverse domains, is the one whose xerographic distribution corresponds to [**PM**]{}: $\xi=\xi^{\textrm{PM}}:=\xi^{\textrm{typD}}_\mathcal{V}$. To prove this, fix the theory $\mathcal{T}$ and let the $p_{i}$’s be arbitrarily chosen but fixed. 
For an arbitrary $c$, we showed in section \[SEC:Best\_Performing\] that $P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typO}}_{c}) \leq P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typD}}_{c})$. Now, as in the proof there, the choice of $c$ that maximizes $P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{typD}}_{c})$ is the one that includes the greatest number of non-negative terms in the sum in Eq. (\[EQN:1stPersonD\]). This is just $c=\mathcal{V}$, and hence ($\mathcal{T}, \xi^{\textrm{PM}}$) will in general give the highest likelihood. We see how this claim manifests in a simple case, by again considering the multiverse of cycles introduced in section \[SEC:SH\_multiverse\], with xerographic distributions implementing a cut-off in time, as described in the last paragraph of section \[SEC:Best\_Performing\]. ![image](azhar_15_Figure_2){width="0.95\linewidth"} Figure \[FIG:SH\_N3\_FIxTheory\] displays plots of the difference between the likelihood for the framework with some theory $\mathcal{T}$ and the xerographic distribution corresponding to the principle of mediocrity, $\xi=\xi^{\textrm{PM}}:=\xi^{\textrm{typD}}_{\mathcal{V}}$, and the likelihood for the framework with the same theory and each of the other xerographic distributions (the probability $p$ of observers in domains labels the $x$-axis of each subplot). The case for $N=3$ cycles is displayed. We see that for each row of the plot (corresponding to a fixed theory), the difference is non-negative in each case, confirming the general claim advanced in this section. Does the principle of mediocrity generally give rise to the most predictive frameworks? {#SEC:PM_WBS} --------------------------------------------------------------------------------------- In light of the results in the last two subsections, it is natural to ask whether there are cases where [**PM**]{} does not provide the highest likelihoods. 
We will show by construction that indeed it does *not*, when one is allowed to vary both the underlying theory and the xerographic distribution. For the sake of simplicity, we will restrict attention to the multiverse of cycles introduced in section \[SEC:SH\_multiverse\], where again, the probability of observers in any cycle is the same for each cycle and is given by $p$. There is a strong motivation for embarking on this search, since some well-motivated theories suggest xerographic distributions which do not express the principle of mediocrity. We will further explicate such scenarios in the next section; but for now, we aim to determine whether in such instances, likelihoods are generally highest for those frameworks that involve the principle of mediocrity. To be explicit about the terms of our search: we are interested in whether, for a given framework characterized by [**PM**]{}, that is, for some ‘reference framework’ ($\mathcal{T}, \xi^{\textrm{PM}}:=\xi^{\textrm{typD}}_\mathcal{V}$), there could exist another framework ($\mathcal{T}^{\star}, \xi^{\star}$) with a higher likelihood for at least some values of $p$. To steer the discussion away from more trivial cases, we will only consider situations where: - (i) $\mathcal{T} \neq \mathcal{T}_{\textrm{all red}}$: since otherwise, by the results of section \[SEC:Best\_Performing\], we will not be able to find the required framework ($\mathcal{T}^{\star}, \xi^{\star}$). - (ii) $\mathcal{T} \neq \mathcal{T}_{\textrm{no red}}$: where $\mathcal{T}_{\textrm{no red}} = (0,0,\dots,0)$ is the theory that predicts no red cycles; since then the likelihood of our data $D_{0}$, given the framework ($\mathcal{T}_{\textrm{no red}}, \xi^{\textrm{PM}}$), vanishes, and we will be able to find the required frameworks ($\mathcal{T}^{\star}, \xi^{\star}$) rather trivially. 
- (iii) $\xi^{\star}\neq \xi^{\textrm{PM}}$: since we already know from section \[SEC:Best\_Performing\] that for $\xi^{\star} = \xi^{\textrm{PM}}$, $\mathcal{T}^{\star} = \mathcal{T}_{\textrm{all red}}$ will provide the supremum of the likelihoods in this case; moreover, we are interested in finding frameworks where [**PM**]{} is not integral to constructing higher likelihoods. - (iv) $(\mathcal{T}^{\star}, \xi^{\star}) \neq (\mathcal{T}_{\textrm{all red}}, \xi^{\textrm{typO}}_\mathcal{V})$: since again, this corresponds to the supremum as mentioned in (iii), following the results of section \[SEC:Best\_Performing\]. So consider the case $N=3$. Set $\mathcal{T}$ in our reference framework ($\mathcal{T}, \xi^{\textrm{PM}}$) to be the theory that predicts that exactly one of the three cycles is red: $\mathcal{T}=\mathcal{T}_{\textrm{one\,red}}$. Then, from Eq. (\[EQN:FP\_SH\_Nr\]), we have $P^{(1p)}(D_{0}|\mathcal{T}_{\textrm{one red}}, \xi^{\textrm{PM}}) = p$. It is straightforward to show from Eq. (\[EQN:1stPersonO\]) that for the theory that predicts only the first two cycles are red, which we will denote by $\mathcal{T}^{\star}=(1,1,0)\equiv\mathcal{T}_{RRB}$, the xerographic distribution given by $\xi^{\textrm{typO}}_{c=\{1,2\}}$, which (clearly) *does not* correspond to the principle of mediocrity, implies: $P^{(1p)}(D_{0}|\mathcal{T}_{RRB}, \xi^{\textrm{typO}}_{c=\{1,2\}}) = p(2-p)$. So the framework $(\mathcal{T}_{RRB}, \xi^{\textrm{typO}}_{c=\{1,2\}})$ has a likelihood higher than that of the framework involving [**PM**]{}, since $p(2-p) > p$ (for all non-trivial values of $p$, i.e. $p\in (0,1)$). More interesting behavior arises when we take $\mathcal{T}^{\star}$ to be the theory that predicts a total of two of the cycles are red and the xerographic distribution to be $\xi^{\textrm{typO}}_{\mathcal{V}}\equiv \xi^{\textrm{typO}}_{c=\{1,2,3\}}$. In this case, as seen from Eq. 
(\[EQN:FP\_SH\_Nr\_typO\]), $P^{(1p)}(D_{0}|\mathcal{T}_{\textrm{two\,red}}, \xi^{\textrm{typO}}_{\mathcal{V}}) = \frac{2}{3}p(p^2-3p+3)$. Whether this likelihood exceeds that of our reference framework ($\mathcal{T}_{\textrm{one\;red}}, \xi^{\textrm{PM}}$) depends on the value of $p$. In particular, for $p \gtrsim 0.63$, ($\mathcal{T}_{\textrm{one\,red}}, \xi^{\textrm{PM}}$) has a higher likelihood than ($\mathcal{T}_{\textrm{two\,red}}, \xi^{\textrm{typO}}_{\mathcal{V}}$), whereas the situation is reversed otherwise. Such behavior exhibits the parametric dependence of the success of [**PM**]{}. Both this situation and the one discussed in the last paragraph are displayed graphically in Fig. \[FIG:Figure\_3\] (left). ![image](azhar_15_Figure_3a){width="0.45\linewidth"} ![image](azhar_15_Figure_3b){width="0.45\linewidth"} Similar results can be obtained in the case where $N=4$ (see Fig. \[FIG:Figure\_3\] (right)). In this case, there exists a framework implementing [**PM**]{} with $P^{(1p)}(D_{0}|\mathcal{T}_{\textrm{two\,red}}, \xi^{\textrm{PM}}) = p(2-p)$. For all $p$, this is less than or equal to the likelihood $P^{(1p)}(D_{0}|\mathcal{T}_{\textrm{RRRB}}, \xi^{\textrm{typO}}_{c = \{1,2,3\}}) = p(p^{2}-3p+3)$, associated with the theory in which only the first three cycles are red ($\mathcal{T}^{\star}=(1,1,1,0)\equiv\mathcal{T}_{RRRB}$). In addition, our reference framework’s likelihood $P^{(1p)}(D_{0}|\mathcal{T}_{\textrm{two\,red}}, \xi^{\textrm{PM}})$ displays only parametric dominance over the likelihood associated with another framework not implementing [**PM**]{}, namely ($\mathcal{T}_{\textrm{three\;red}}, \xi^{\textrm{typO}}_\mathcal{V}$), which takes the value $P^{(1p)}(D_{0}|\mathcal{T}_{\textrm{three\;red}}, \xi^{\textrm{typO}}_\mathcal{V})=\frac{3}{4}p(4-6p+4p^{2}-p^{3})$. Again, interestingly, [**PM**]{} does not universally give rise to the highest likelihoods and in certain cases exhibits only a parameter-dependent dominance. 
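These polynomial likelihood comparisons are easy to check numerically. The short sketch below (plain Python; the function names are ours, not notation from the paper) reproduces the $N=3$ expressions quoted above and locates the crossover at $p^{\star}=(3-\sqrt{3})/2\approx 0.634$, consistent with the threshold quoted in the text.

```python
import math

# First-person likelihoods for the N = 3 multiverse of cycles, as quoted
# in the text. Function names are illustrative, not the paper's notation.

def L_one_red_pm(p):
    # Reference framework (T_one_red, xi^PM): P = p
    return p

def L_rrb_typo(p):
    # (T_RRB, xi^typO_{1,2}): P = p(2 - p)
    return p * (2.0 - p)

def L_two_red_typo(p):
    # (T_two_red, xi^typO_V): P = (2/3) p (p^2 - 3p + 3)
    return (2.0 / 3.0) * p * (p**2 - 3.0 * p + 3.0)

# p(2 - p) > p on (0,1): the non-PM framework (T_RRB, xi^typO_{1,2})
# dominates the PM reference framework for every non-trivial p.
assert all(L_rrb_typo(p) > L_one_red_pm(p) for p in (0.1, 0.5, 0.9))

# The crossover between p and (2/3) p (p^2 - 3p + 3) solves
# p^2 - 3p + 3/2 = 0, i.e. p* = (3 - sqrt(3)) / 2.
p_star = (3.0 - math.sqrt(3.0)) / 2.0
```

Below $p^{\star}$ the framework ($\mathcal{T}_{\textrm{two\,red}}, \xi^{\textrm{typO}}_{\mathcal{V}}$) has the higher likelihood, and above it the [**PM**]{} reference framework does, which is the parametric dependence displayed in Fig. \[FIG:Figure\_3\] (left).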
Discussion {#SEC:Discussion} ========== The principle of mediocrity is a controversial issue in multiverse cosmology. According to one way of thinking, it articulates our intuitions about how one should reason from an appropriately defined, peaked probability distribution, to a prediction of a possible observation. But crucially, this intuition has been developed either in controlled laboratory settings, or more generally, in cases where we understand, and have some (at least theoretical) control over, the conditions that obtain in systems of interest. The multiverse, however, is a different story. It is plausible that we are *more* typical of a set of appropriately restricted multiverse domains, but whether we can positively assert typicality heavily depends on who or what is predicted by the theories and the conditionalization schemes which we consider in multiverse cosmology. These latter schemes, at best, set down conditions that are necessary for the presence of ‘observers’, but there is an ambiguity in defining precisely who or what these observers are. In addition we do not know precisely what parameters or conditions need to be fixed within the confines of any theory in order to unambiguously describe these observers. As a result, it is not clear that typicality is justified, even if we conditionalize in accord with the ‘ideal reference class’ of @garriga+vilenkin_08. Of course, we *may* be typical, but following this line of thinking, we do not have good reason to assert that we are. The formalism of @srednicki+hartle_10 allows one to neatly address this multi-faceted concern, which affects our ability to reason in multiverse scenarios [@smeenk_14]. Through their formalism, one can formulate a set of assumptions regarding typicality, and from this set, one can calculate relevant likelihoods for possible observations, to then see how well different assumptions do in terms of describing our observations. 
This is implemented in a Bayesian framework, so that *if*, as we have assumed in this paper (see section \[SEC:Bayesian\]), we can assign equal amounts of prior credence to various candidate frameworks, then higher likelihoods translate to greater support for those frameworks given relevant experimental data. From the framework with the highest posterior probability then, one can *infer* how typical we are. How well does typicality do? As we have discovered within the admittedly simplified multiverse setting of section \[SEC:Multiverse\_Model\]: for a fixed theory, the principle of mediocrity yields likelihoods for our data that attain the supremum of all likelihoods considered, for all values of the probabilities of observers in domains. But an important caveat is that this result is *not* universal. Namely, if one is allowed to vary both the theory and the xerographic distribution implementing assumptions regarding typicality, the principle of mediocrity does not always provide the highest likelihoods. This is particularly pertinent because the set of candidate frameworks that constitute plausible alternatives for the description of a physical situation need not include xerographic distributions implementing the principle of mediocrity. One example where this occurs is in the situation where ‘Boltzmann brains’ exist and outnumber ordinary observers, but both sets of observers record the same data (see the discussion in [@srednicki+hartle_10], as well as [@albrecht+sorbo_04; @page_08; @gott_08]). In this case, the first-person likelihood of our observations might be higher under the principle of mediocrity; but an unwanted consequence of favoring the corresponding framework is the high likelihood of us *being* Boltzmann brains. This framework would also predict that our future measurements will be disordered, that is, uncorrelated with past measurements (as is presumed for Boltzmann brains). 
One way to avoid having to accept this consequence is by restricting the xerographic distribution accordingly—for example, by focussing attention on only a proper subset of appropriately chosen domains. This type of restriction has been actively developed in this paper, and is equivalent to an assumption of non-mediocrity under the scheme described in section \[SEC:Xerographic\_Distributions\].[^4] To sum up: if some of the frameworks one considers have xerographic distributions that do not implement the principle of mediocrity for relevant theories, then demanding that we favor a framework that includes the principle of mediocrity is hazardous. For as shown in section \[SEC:PM\_WBS\], we cannot guarantee it will produce the framework with the highest likelihood. It is important to note that we have selected a particular reference class within which to implement the principle of mediocrity—namely, observers who witness our experimental data. The motivation for this choice was to test a limiting case of a ‘top-down’ approach to conditionalization (see [@aguirre+tegmark_05; @weinstein_06; @garriga+vilenkin_08; @azhar_14] for further details on top-down approaches). This limiting case requires consideration of the maximally specific reference class in the setup at hand. The assumption of typicality with respect to this reference class (encoded in $\xi^{\textrm{typD}}_{\mathcal{V}}$) then corresponds to the principle of mediocrity. A key result in this paper is that typicality with respect to this reference class does not necessarily give rise to the highest likelihoods for our data $D_{0}$, if one is allowed to vary both the theory and the xerographic distribution under consideration (as explained in section \[SEC:PM\_WBS\]). It is also important to note that we have evaluated frameworks based on first-person likelihoods generated for our data $D_{0}$. 
These computations are in accordance with the approach adopted by @srednicki+hartle_10 [§IV], who invoke such likelihoods in the evaluation of frameworks (where they also use xerographic distributions that correspond to each of $\xi^{\textrm{typO}}_{\mathcal{V}}$ and $\xi^{\textrm{typD}}_{\mathcal{V}}$). Another significant question to address, which manifestly goes beyond the cosmological settings studied in this paper, is: how should one evaluate the predictive power of frameworks (and in particular, the principle of mediocrity) in a case where one aims to predict the value of an observable, say, that is not explicitly included in the conditionalization scheme adopted? For realistic cosmological calculations, securing the required separation of the observable from the conditionalization scheme is non-trivial. One naturally requires that the observable being predicted is (i) correlated with the conditionalization scheme (otherwise the conditionalization scheme will play no role in the predictive framework), but (ii) not perfectly correlated with the conditionalization scheme (otherwise one is open to the charge of circularity). Thus when it is not clear exactly how observables are correlated with the defining features of a conditionalization scheme, the need to strike a balance between (i) and (ii) gives rise to a difficult problem—namely, how to distinguish the observable to be predicted from the defining features of the conditionalization scheme (see @garriga+vilenkin_08 [§III] who mention such concerns). Assuming that one has a solution to this problem, a formalism that can handle this type of predictive setting is described by @srednicki+hartle_10 [§VI]. 
For the computation of first-person likelihoods, the appropriate way to proceed, as detailed by @garriga+vilenkin_08 and @hartle+hertog_13 [@hartle+hertog_15], is to explicitly leave out that part of our data that involves the observable we aim to predict, in the specification of our conditionalization scheme. The assumption of typicality with respect to the reference class implicit in this specification, would then be the appropriate implementation of the principle of mediocrity [@garriga+vilenkin_08]. It remains to apply the methods introduced by @srednicki+hartle_10, and advanced in this paper, to more realistic cosmological settings, in order to more fully assess the extent of the errors that may arise from universally imposing the principle of mediocrity (see [@hartle+hertog_13; @hartle+hertog_15] for recent work in this direction). For now, our conclusion must be that the principle of mediocrity (in the style of [@gott_93; @vilenkin_95; @page_96; @bostrom_02; @garriga+vilenkin_08]) is more questionable than has been claimed. I am very grateful to Jeremy Butterfield for insightful discussions and comments on an earlier version of this paper. I thank audiences at Cambridge and at the Munich Center for Mathematical Philosophy at Ludwig-Maximilians-Universit[ä]{}t M[ü]{}nchen for helpful feedback. I am supported by the Wittgenstein Studentship in Philosophy at Trinity College, Cambridge. Calculation of $P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{PM}})$ for the multiverse of cycles ============================================================================================ For the sake of completeness, we present the calculation for the first-person likelihood of our data $D_{0}$, given a theory $\mathcal{T}$, under the assumption of a xerographic distribution that imposes the principle of mediocrity, $\xi=\xi^{\textrm{PM}}:=\xi^{\textrm{typD}}_\mathcal{V}$. 
We work within the cyclic cosmological model of @srednicki+hartle_10 under the assumptions of section \[SEC:SH\_multiverse\]; where we have $N$ domains stretched out in time, each with a probability $p$ of housing observers. From these assumptions, Eq. (\[EQN:1stPersonD\]) reduces to $$P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{PM}}) = \sum_{\vec{\sigma}\in\mathcal{K}_{D_{0}}(\mathcal{V},\mathcal{T})} p^{\sum_{i=1}^{N}\sigma_{i}}(1-p)^{N-\sum_{j=1}^N\sigma_{j}}. \label{EQN:1stPersonDHS}$$ We organize the sum by separately considering configurations according to the total number of observers $n_{O}=\sum_{i=1}^{N}\sigma_{i}$ in each configuration $\vec{\sigma}$. Equation (\[EQN:1stPersonDHS\]) can then be thought of as $P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{PM}}) = \sum_{n_{O}=1}^{N}G(p,n_{O}, N_{R})$, for some function $G$ we will compute shortly. The total number of red cycles $N_{R}$ depends on the theory $\mathcal{T}$, and we will separately consider the two cases into which the sum naturally partitions, namely, $1 \leq n_{O} \leq N_{R}$ and $N_{R} < n_{O} \leq N$. For $1 \leq n_{O} \leq N_{R}$, we generate an expression for $G(p,n_{O}, N_{R})$ by sequentially placing all possible numbers of observers (out of a maximum of $n_{O}$ observers) in $N_{R}$ red cycles. In general, we find $$\begin{aligned} G(p,n_{O}, N_{R})&=& p^{n_{O}}(1-p)^{N-n_{O}}\left[\binom{N_{R}}{n_{O}} + \binom{N_{R}}{n_{O}-1}\binom{N-N_{R}}{1} +\cdots+\binom{N_{R}}{1}\binom{N-N_{R}}{n_{O}-1}\right] \nonumber \\ &=& p^{n_{O}}(1-p)^{N-n_{O}}\sum_{k=0}^{n_{O}-1}\binom{N_{R}}{n_{O}-k}\binom{N-N_{R}}{k} \nonumber \\ &=& p^{n_{O}}(1-p)^{N-n_{O}}\left[\binom{N}{n_{O}}-\binom{N-N_{R}}{n_{O}}\right]\nonumber\\ &=& p^{n_{O}}(1-p)^{N-n_{O}}\sum_{k=1}^{N_{R}}\binom{N-k}{n_{O}-1},\nonumber\end{aligned}$$ where we have used Vandermonde’s identity in obtaining the third line, and have iterated using Pascal’s formula in obtaining the fourth line. 
For $N_{R} < n_{O} \leq N$, we obtain the same final result using a similar sequence of steps. Putting this all together gives $$\begin{aligned} P^{(1p)}(D_{0}|\mathcal{T}, \xi^{\textrm{PM}}) &=& \sum_{n_{O}=1}^{N}p^{n_{O}}(1-p)^{N-n_{O}}\sum_{k=1}^{N_{R}}\binom{N-k}{n_{O}-1}\nonumber\\ &=& \sum_{k=1}^{N_{R}}\sum_{n_{O}=1}^{N-k+1}\binom{N-k}{n_{O}-1}p^{n_{O}}(1-p)^{N-n_{O}}\nonumber\\ &=& \sum_{k=1}^{N_{R}}p(1-p)^{k-1}\sum_{m=0}^{N-k}\binom{N-k}{m}p^{m}(1-p)^{N-k-m}\nonumber\\ &=& p\sum_{k=1}^{N_{R}}(1-p)^{k-1}\nonumber\\ &=& 1-(1-p)^{N_{R}},\end{aligned}$$ where we have used the binomial theorem to obtain the fourth line. This agrees with the corresponding expression (Eq. (5.5)) in [@srednicki+hartle_10], and Eq. (\[EQN:FP\_SH\_Nr\]) in the text above. [^1]: The simplicity of the scenarios we will consider makes this less egregious an assumption than it would otherwise be. Indeed, as mentioned in section \[SEC:Introduction\], the definition of observers and the subsequent specification of appropriate reference classes constitute a thorny issue, but we will not need to engage with it here (see [@hartle+srednicki_07] for further discussion). [^2]: The notation adopted in Eq. (\[EQN:1stPerson\]) is not to be confused with the claim that the theory in any way *determines* the xerographic distribution—it does not. We are simply spelling out how our assumptions regarding which possible member of a suitable reference class we might be enter into the determination of first-person likelihoods. This will become clearer when we consider concrete cosmological scenarios below—see Eq. (\[EQN:1stPersonO\]) for a preview. [^3]: The general results below (in section \[SEC:Results\]) do not depend on any assumption that the different domains receive independent but unequal (or even equal) probabilities. 
However, it is natural to assume $(a)$ that the existence of observers in different domains constitutes a collection of probabilistically independent events (e.g., due to separate processes of evolution), and $(b)$ that these probabilities can be unequal, reflecting the fact that different domains can vary in their hostility to life. [^4]: One might wonder why we do not simply reject a theory that, when partnered with the principle of mediocrity, does not constitute a plausible candidate framework. A response in the spirit of this paper is that when the status of the principle of mediocrity is uncertain, it makes sense to examine the predictions of frameworks that constrain otherwise well-motivated theories by assumptions of non-mediocrity. A more pointed response is that one is simply not justified in rejecting a theory just because we would not be typical according to that theory.  @hartle+srednicki_07 raise this objection in discussing a (hypothetical) theory that predicts the existence of many more observers on Jupiter than on Earth. They claim it is unreasonable to reject such a theory, when we notice we are human and not Jovian, just because we would not be typical according to the theory.
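The closed form $1-(1-p)^{N_{R}}$ derived in the appendix can also be confirmed by direct enumeration of the $2^{N}$ observer configurations. The minimal Python sketch below (helper names are ours, chosen for illustration) sums $p^{n_{O}}(1-p)^{N-n_{O}}$ over configurations that place at least one observer in a red cycle and compares the result with the closed form.

```python
from itertools import product

def pm_likelihood_bruteforce(red, p):
    """Sum p^{n_O} (1-p)^{N-n_O} over all configurations sigma in {0,1}^N
    that place at least one observer in a red cycle; `red` is the 0/1
    tuple of red cycles predicted by the theory T."""
    N = len(red)
    total = 0.0
    for sigma in product((0, 1), repeat=N):
        if any(s and r for s, r in zip(sigma, red)):
            n_O = sum(sigma)
            total += p**n_O * (1.0 - p)**(N - n_O)
    return total

def pm_likelihood_closed_form(red, p):
    # The appendix result: P^{(1p)}(D_0 | T, xi^PM) = 1 - (1-p)^{N_R}
    return 1.0 - (1.0 - p)**sum(red)
```

For $\mathcal{T}_{\textrm{one\,red}}$ with $N=3$ this returns $p$, and for $\mathcal{T}_{\textrm{two\,red}}$ with $N=4$ it returns $p(2-p)$, matching the expressions used in section \[SEC:PM\_WBS\].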
--- abstract: 'Sequential and Quantum Monte Carlo methods, as well as genetic-type search algorithms, can be interpreted as mean field and interacting particle approximations of Feynman-Kac models in distribution spaces. The performance of these population Monte Carlo algorithms is strongly related to the stability properties of nonlinear Feynman-Kac semigroups. In this paper, we analyze these models in terms of Dobrushin ergodic coefficients of the reference Markov transitions and the oscillations of the potential functions. Sufficient conditions for uniform concentration inequalities w.r.t. time are expressed explicitly in terms of these two quantities. We provide an original perturbation analysis that applies to annealed and adaptive FK models, yielding what seems to be the first results of this kind for these types of models. Special attention is devoted to the particular case of sampling Boltzmann-Gibbs measures. In this context, we design an explicit way of tuning the number of Markov Chain Monte Carlo iterations with the temperature schedule. We also propose and analyze an alternative interacting particle method based on an adaptive strategy to define the temperature increments.' address: - 'INRIA Bordeaux Sud-Ouest, team ALEA, Domaine Universitaire, 351, cours de la Libération, 33405 Talence Cedex, France.' - 'CEA-CESTA, 33114 Le Barp, France.' author: - François Giraud - Pierre Del Moral title: 'Non-Asymptotic Analysis of Adaptive and Annealed Feynman-Kac Particle Models' --- Introduction {#introduction .unnumbered} ============ Feynman-Kac ([*abbreviate FK*]{}) particle methods, also called sequential, quantum or diffusion Monte Carlo methods, are stochastic algorithms to sample from a sequence of complex high-dimensional probability distributions. These stochastic simulation techniques are of current use in numerical physics [@Assaraf-overview; @Assaraf; @Hetherington] to compute ground state energies in molecular systems. 
They are also used in statistics, signal processing and information sciences [@Cappe; @DM-filt; @DM-D-J; @DM-Guionnet-2] to compute posterior distributions of a partially observed signal or unknown parameters. In the evolutionary computing literature, these Monte Carlo methods are used as natural population search algorithms for solving optimization problems. From a purely mathematical viewpoint, these advanced Monte Carlo methods are an interacting particle system ([*abbreviate IPS*]{}) interpretation of FK models. For a more thorough discussion on these models, we refer the reader to the monograph [@DM-FK], and the references therein. The principle (see also [@DM-D-J] and the references therein) is to approximate a sequence of target probability distributions $(\eta_n)_n$ by a large cloud of random samples termed particles or walkers. The algorithm starts with $N$ independent samples from $\eta_0$ and then alternates two types of steps: an acceptance-rejection scheme equipped with a selection-type recycling mechanism, and a free exploration of the state space.\ In the recycling stage, the current cloud of particles is transformed by randomly duplicating and eliminating particles in a suitable way, similarly to a selection step in models of population genetics. In the Markov evolution step, particles move independently of one another (mutation step). This method is often used for solving sequential problems, such as filtering (see e.g., [@Cappe; @Doucet-F-G; @DM-filt]). In other interesting problems, these algorithms also turn out to be efficient at sampling from a single target measure $\eta$. In this context, the central idea is to find a judicious interpolating sequence of measures $(\eta_k)_{0\leq k\leq n}$ with increasing sampling complexity, starting from some initial distribution $\eta_0$, up to the terminal one $\eta_n=\eta$. 
Consecutive measures $\eta_k$ and $\eta_{k+1}$ are sufficiently similar to allow for efficient importance sampling and/or acceptance-rejection sampling. The sequential aspect of the approach is then an “artificial way” to introduce the difficulty of sampling gradually. In this vein, important examples are provided by annealed models. More generally, a crucial point is that large population sizes make it possible to cover several modes simultaneously. This is an advantage compared to standard MCMC methods, which are more likely to be trapped in local modes. These sequential samplers have been used with success in several application domains, including rare event simulation (see [@Cerou-RE]), stochastic optimization and, more generally, Boltzmann-Gibbs measure sampling ([@DM-D-J]). Up to now, IPS algorithms have been mostly analyzed using asymptotic (i.e., when the number of particles $N$ tends to infinity) techniques, notably through fluctuation theorems and large deviation principles (see for instance [@DM-DA; @DM-Guionnet-3], [@DM-Guionnet-2; @DM-L; @DM-M],[@Kunsch], [@Chopin], [@DM-filt], [@Cappe] and [@DM-FK] for an overview).\ Some non-asymptotic theorems have recently been developed ([@Cerou-var; @DM-D-J-adapt]), but unfortunately none of them applies to the analysis of annealed and adaptive FK particle models. On the other hand, these types of nonhomogeneous IPS algorithms are of current use for solving concrete problems arising in numerical physics and engineering sciences (see for instance [@Bertrand; @Giraud-RB; @Neal], [@Clapp; @Deutscher; @Minvielle], [@Jasra; @Schafer]). In the absence of non-asymptotic estimates, these particle algorithms are used as natural heuristics.\ The main contribution of this article is to analyze these two classes of time-nonhomogeneous IPS models. Our approach is based on semigroup techniques and on an original perturbation analysis to derive several uniform estimates w.r.t. 
the time parameter.\ More precisely, in the case of annealed-type models, we estimate explicitly the stability properties of the FK semigroup in terms of the Dobrushin ergodic coefficient of the reference Markov chain and the oscillations of the potential functions. We combine these techniques with non-asymptotic theorems on $L^p$-mean error bounds ([@DM-M]) and some useful concentration inequalities ([@DM-Hu-Wu; @DM-Rio]). Then, we provide parameter tuning strategies that allow us to deduce some useful uniform concentration inequalities w.r.t. the time parameter. These results apply to nonhomogeneous FK models associated with cooling temperature parameters. In this situation, the sequence of measures $\eta_n$ is associated with a nonincreasing temperature parameter. We mention that other independent approaches, such as Whiteley’s ([@Whiteley]) or Schweizer’s ([@Schweizer]), are based on, e.g., drift conditions, hyper-boundedness, spectral gaps, or non-asymptotic bias and variance decompositions. These approaches lead to convergence results that may also apply to non-compact state spaces. To our knowledge, these techniques are restricted to non-asymptotic variance theorems and they cannot be used to derive uniform and exponential concentration inequalities. It also seems difficult to extend these approaches to analyze the adaptive IPS model discussed in the present article. To address these questions, we develop a perturbation technique of stochastic FK semigroups. In contrast to traditional FK semigroups, the adaptive particle scheme is now based on random potential functions that depend on a cooling schedule adapted to the variability and the adaptation of the random populations.\ \ The rest of the article is organized as follows. In a preliminary section, we recall a few essential notions related to Dobrushin coefficients and FK semigroups. We also provide some important non-asymptotic results we use in the further development of the article. 
Section \[section-analyse-generale-FK\] is concerned with the semigroup stability analysis of these models. We also provide a couple of uniform $L^p$-deviation and concentration estimates. In Section \[section-Gibbs\] we apply these results to Boltzmann-Gibbs models associated with a decreasing temperature schedule. In this context, the IPS algorithm can be interpreted as a sequence of interacting simulated annealing algorithms ([*abbreviate ISA*]{}). We design an explicit way of tuning the number of Markov Chain Monte Carlo iterations with the temperature schedule. Finally, in Section \[section-adapt\], we propose an alternative ISA method based on an original adaptive strategy to design the temperature decrements on the fly. We provide a non-asymptotic study, based on a perturbation analysis. We end the article with $L^p$-deviation estimates as well as a couple of concentration inequalities. Statement of Some Results {#statement-of-some-results .unnumbered} ========================= Feynman-Kac particle algorithms consist of evolving an interacting particle system $(\zeta_n)_n = \left( \zeta_n^1, \ldots, \zeta_n^N \right)_n$ of size $N$, on a given state space $E$. Their evolution is decomposed into two genetic-type transitions: a selection step, associated with some positive potential function $G_n$; and a mutation step, where the selected particles evolve randomly according to a given Markov transition $M_n$ (a more detailed description of these IPS algorithms is provided in Section \[algo-classique\]). 
In this context, the occupation measures $\displaystyle{ \eta_n^N := \frac{1}{N}\sum_{1\leq i\leq N}\delta_{\zeta^i_n} }$ are $N$-approximations of a sequence of measures $\eta_n$ defined by the FK recursive formulae: $$\eta_n(f) = \frac{\eta_{n-1} \left( G_n \times M_n.f \right)}{\eta_{n-1}(G_n)} ,$$ for all bounded measurable functions $f$ on $E$ (a more detailed discussion on these evolution equations is provided in Section \[section-flot-FK\]).\ To describe with some precision the main results of the article, we consider the pair of parameters $(g_n,b_n)$ defined below. $$g_n := \underset{x,y\in E}{\sup} \frac{G_n(x)}{G_n(y)} \quad \quad \textmd{and} \quad \quad b_n = \beta(M_n) := \underset{\underset{A \subset E}{x,y\in E}}{\sup} \left| M_n(x,A) - M_n(y,A) \right|$$ The quantity $\beta(M_n)$ is called the Dobrushin ergodic coefficient of the Markov transition $M_n$. One of our first main results can be basically stated as follows: \[theo-statement-unif\] We assume that $$\underset{p \geq 1}{\sup} \; g_p \leq M \quad \quad \textmd{and} \quad \quad \underset{p \geq 1}{\sup} \; b_p \leq \frac{a}{a+M}$$ for some finite constant $M < \infty$ and some $a \in (0,1)$. In this situation, for any $n \geq 0$, $N \geq 1$, $y \geq 0$ and $f \in \mathcal{B}_1 (E)$, the probability of the event $$\vert \eta_n^N(f) - \eta_n(f) \vert \leq \frac{r_1^{\star} N + r_2^{\star} y}{N^2}$$ is greater than $1-e^{-y}$, where $r_1^{\star}$ and $r_2^{\star}$ are some constants that are explicitly defined in terms of $(a,M)$. In Section \[section-result-unif\], under the same assumptions as Theorem \[theo-statement-unif\], we also prove uniform $L^p$-mean error bounds as well as new concentration inequalities for unnormalized particle models. 
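The selection/mutation recursion above is short to implement. The following sketch alternates a multinomial selection step driven by $G_n$ with an independent mutation step driven by $M_n$, and uses the empirical measure $\eta_n^N$ as the $N$-particle approximation of $\eta_n$. The Gaussian potential and random-walk mutation kernel are toy choices of ours for illustration, not one of the models analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fk_particle_step(particles, G, M, rng):
    """One selection/mutation step of the generic IPS described above.

    G: potential function evaluated at the particle locations (selection weights).
    M: Markov mutation, given here as a sampler x -> x'.
    """
    w = G(particles)
    w = w / w.sum()
    # Selection: multinomial resampling proportional to G_n
    idx = rng.choice(len(particles), size=len(particles), p=w)
    # Mutation: independent moves according to M_n
    return M(particles[idx], rng)

# Toy example on E = R: potential G(x) = exp(-x^2/2) and a small
# Gaussian random-walk mutation kernel.
N = 10_000
particles = rng.normal(0.0, 3.0, size=N)
for _ in range(20):
    particles = fk_particle_step(
        particles,
        G=lambda x: np.exp(-0.5 * x**2),
        M=lambda x, r: x + 0.1 * r.normal(size=x.shape),
        rng=rng,
    )
# eta_n^N(f) is then the empirical average of f over the final particles;
# here the cloud concentrates near the minimum of x^2/2 at the origin.
```

In this toy run the repeated reweighting by $e^{-x^2/2}$ progressively contracts the initial $\mathcal{N}(0,9)$ cloud toward the origin, which is the qualitative behavior the stability analysis below quantifies.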
We also extend the analysis to the situation where $g_n \underset{n \rightarrow + \infty}{\longrightarrow} 1$.\ We mention here that the regularity conditions on $b_n$ may appear difficult to check since the Markov kernels are often dictated by the application under study. However, we can deal with this problem as soon as we can simulate a Markov kernel $K_n$ such that $\eta_n . K_n = \eta_n$. Indeed, to stabilize the system, the designer can “add” several MCMC evolution steps after each $M_n$-mutation step. From a more formal viewpoint, the target sequence $(\eta_n)_n$ is clearly also a solution of the FK measure-valued equations associated with the Markov kernels $M_n^{\prime} = M_n. K_n^{m_n}$, where the iteration numbers $m_n$ can be chosen freely. This system is more stable since the corresponding $b_n^{\prime}$ satisfy $$b_n^{\prime} = \beta(M_n^{\prime}) \leq b_n . \beta(K_n^{m_n}) \leq b_n . \beta(K_n)^{m_n} .$$ In such cases, Theorem \[theo-statement-unif\] and its extension provide sufficient conditions on the iteration numbers $m_n$ to ensure the convergence and the stability properties of the algorithm.\ These results apply to stochastic optimization problems. Let $V : E \rightarrow \mathbb{R}$ be a bounded potential function, $\beta_n$ a sequence which tends to infinity, and $m$ a reference measure on $E$. It is well known that the sequence of Boltzmann-Gibbs measures $$\eta_n (dx) \propto e^{- \beta_n . V(x)} m(dx)$$ concentrates on $V$’s global minima (in the sense of $m$-$\textmd{essinf}(V)$). In the above display, $\propto$ stands for proportionality. One central observation is that these measures can be interpreted as an FK flow of measures associated with potential functions $G_n = e^{-(\beta_n - \beta_{n-1}).V}$ and Markov kernels $M_n = K_{\beta_n}^{m_n k_0}$ where $K_{\beta_n}$ is a simulated annealing kernel (see Section \[section-Gibbs-tuning\]) and $m_n$ and $k_0$ are given iteration parameters. 
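The contraction mechanism behind the bound $b_n^{\prime} \leq b_n . \beta(K_n)^{m_n}$ is easy to check on a finite state space, where $\beta(K)$ is the maximal total-variation distance between two rows of the transition matrix. The sketch below uses an arbitrary 3-state kernel of our choosing (not a kernel from the paper) and verifies the submultiplicativity $\beta(K^m) \leq \beta(K)^m$ numerically.

```python
import numpy as np

def dobrushin(K):
    """Dobrushin ergodic coefficient of a finite Markov matrix:
    sup over state pairs (x, y) and events A of |K(x, A) - K(y, A)|,
    i.e. the maximal total-variation distance between two rows."""
    n = K.shape[0]
    return max(
        0.5 * np.abs(K[i] - K[j]).sum()
        for i in range(n)
        for j in range(n)
    )

# An arbitrary 3-state transition matrix, used only for illustration.
K = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])

beta_K = dobrushin(K)  # equals 0.5 for this matrix

# beta(K^m) <= beta(K)^m for every m >= 1: each extra MCMC sweep per
# mutation contracts the ergodic coefficient geometrically.
for m in range(1, 6):
    assert dobrushin(np.linalg.matrix_power(K, m)) <= beta_K**m + 1e-12
```

This geometric contraction is exactly why a number of MCMC iterations $m_n$ growing with $\beta_n$, as in Theorem \[theo-statement-optim\], can compensate for potentials with large oscillations.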
In the further development of this section, we let $K$ be the proposal transition of the simulated annealing transition $K_{\beta}$. In this context, the IPS methods can be used to minimize $V$. The conditions on $b_n$ and $g_n$ can be turned into conditions on the temperature schedule $\beta_n$ and the number of MCMC iterations $m_n$. Moreover, combining our results with standard concentration properties of Boltzmann-Gibbs measures, we derive some convergence results in terms of optimization performance. With this notation, our second main result is basically stated as follows. \[theo-statement-optim\] Let us fix $a \in (0,1)$. We assume that for any $x \in E$, $K^{k_0} (x, \cdot) \geq \delta \nu(\cdot)$ for some measure $\nu$ on $E$, some $\delta>0$ and some $k_0 \geq 1$. We also assume that the temperature schedule $\beta_p$ and the iteration numbers $m_p$ satisfy the following conditions: $$\underset{p \geq 1}{\sup} \; \Delta_p \leq \Delta \quad \quad \textmd{and} \quad \quad m_p \geq \frac{\log(\frac{e^{\Delta.\textmd{osc}(V)}+a}{a})e^{osc(V).\beta_p}}{\delta}$$ for some constant $\Delta$. For all $\varepsilon >0$, let $p_n^N (\varepsilon)$ be the proportion of particles $(\zeta_n^i)$ such that $ V (\zeta_n^i) \geq V_{\min} + \varepsilon$. Then, for any $n \geq 0$, $N \geq 1$, $y \geq 0$ and for all $\varepsilon^{\prime} < \varepsilon$, the probability of the event $$p_n^N (\varepsilon) \leq \frac{e^{-\beta_n(\varepsilon - \varepsilon^{\prime}) }}{m_{\varepsilon^{\prime}}} + \frac{r_1^{\star} N + r_2^{\star} y}{N^2}$$ is greater than $1-e^{-y}$, with $ m_{\varepsilon^{\prime}} = m \left( V \leq V_{\min} + \varepsilon ^{\prime} \right)$, and the same constants $(r_1^{\star},r_2^{\star})$ as the ones stated in Theorem \[theo-statement-unif\] (with $M=e^{\Delta}$). It is instructive to compare the estimates in the above theorem with the performance analysis of the traditional simulated annealing model ([*abbreviate SA*]{}). 
Firstly, most of the literature on SA models is concerned with the weak convergence of the law of the random states of the algorithm. When the initial temperature of the scheme is greater than some critical value, using a logarithmic cooling schedule, it is well known that the probability that the random state lies in the set of global extrema tends to $1$, as the time parameter tends to $\infty$. The cooling schedule presented in Theorem \[theo-statement-optim\] is again a logarithmic one. In contrast to the SA model, Theorem \[theo-statement-optim\] makes it possible to quantify the performance of the ISA model in terms of uniform concentration inequalities that do not depend on a critical parameter.\ \ In practice, choosing the sequence of increments $\Delta_n = (\beta_n - \beta_{n-1})$ in advance can cause computational problems. To solve this problem, adaptive strategies, where the increment $\Delta_n$ depends on the current set of particles $\zeta_{n-1}$, are commonly used in the engineering community (see for instance [@Jasra; @Schafer], [@Clapp; @Deutscher; @Minvielle]). In this context, we propose to study the case where the increment $\Delta_n^N$ is chosen so that $$\eta_{n-1}^N (e^{-{\Delta}_{n}^N \cdot V}) = \varepsilon ,$$ where $\varepsilon >0$ is a given constant (see Section \[description-algo-adapt\] for a detailed description of the algorithm). Computationally speaking, $\varepsilon$ is the expectation of the proportion of particles which are left unchanged by the recycling mechanism in the selection step. We interpret this particle process as a perturbation of a theoretical FK sequence $\eta_n$ associated with a theoretical temperature schedule $\beta_n$. Our main result is the following $L^p$-mean error estimate. 
\[theo-statement-adapt\] For any $p \geq 1$, $n \geq 0$, $N \geq 1$ and any function $f$ bounded by $1$, we have $$\mathbb{E} \left( \left| \eta_n^N(f) - \eta_n(f) \right|^p \right)^{1/p} \leq \frac{B_p}{\sqrt{N}} \sum_{k=0}^n \prod_{i=k+1}^n \left( b_i g_i (1+c_i) \right) ,$$ with $\displaystyle{ c_n = \frac{ V_{\max} e^{{\Delta}_n V_{\max}} }{\varepsilon \cdot \eta_{n-1}(V)} }$, $\Delta_n = \beta_n - \beta_{n-1}$ and $B_p$ defined below. $$\label{def-Bp} B_{2p}^{2p} = \frac{(2p)!}{2^p . p!} \quad ; \quad B_{2p+1}^{2p+1} = \frac{(2p+1)!}{2^p . p! \sqrt{2p+1} } .$$ Let us mention that, under appropriate regularity conditions on the parameters $b_n, g_n, c_n$, these $L^p$-mean error bounds also provide uniform concentration inequalities.\ The proofs of Theorem \[theo-statement-unif\], Theorem \[theo-statement-optim\] and Theorem \[theo-statement-adapt\], together with related uniform exponential estimates, are detailed in Sections \[section-result-unif\], \[section-Gibbs-tuning\] and \[gros-result-adapt\], respectively. Some Preliminaries {#sec:1} ================== Basic Notation -------------- Let $(E,r)$ be a complete, separable metric space and let $\mathcal{E}$ be the $\sigma$-algebra of Borel subsets of $E$. Denote by $\mathcal{P}(E)$ the space of probability measures on $E$. Let $\mathcal{B}(E)$ be the space of bounded, measurable, real-valued functions on $E$. Let $\mathcal{B}_1(E) \subset \mathcal{B}(E)$ be the subset of all functions bounded by $1$.\ \ If $\mu \in \mathcal{P}(E)$, $f \in \mathcal{B}(E)$ and $K, K_1, K_2$ are Markov kernels on $E$, then $\mu (f)$ denotes the quantity $\int_E f(x) \mu(dx)$, $K_1 . K_2$ denotes the Markov kernel defined by $$(K_1.K_2) (x,A) = \int_E K_1(x,dy) K_2(y,A),$$ $K.f$ denotes the function defined by $$K.f(x) = \int_E K(x,dy) f(y)~$$ and $\mu . 
K$ denotes the probability measure defined by $$\mu.K(A) = \int_E K(x,A) \mu(dx) .$$ If $G$ is a positive, bounded function on $E$, then $\psi_G: \mathcal{P}(E) \rightarrow \mathcal{P}(E) $ denotes the Boltzmann-Gibbs transformation associated with $G$, defined by $$\forall \mu \in \mathcal{P}(E), \; \forall f \in \mathcal{B}(E), \quad \psi_G(\mu) (f) = \frac{\mu(G \times f)}{\mu(G)} .$$ For any $ f \in \mathcal{B}(E)$, let $\displaystyle{ \Vert f \Vert_{\infty} = \underset{x \in E}{\sup} \vert f(x) \vert }$ and $osc(f) = (f_{\max}-f_{\min})$. Let $\mathcal{O}_1 (E) \subset \mathcal{B}(E) $ be the subset of functions $f$ so that $osc(f) \leq 1$. For any random variable $X: \Omega \rightarrow \mathbb{R}$ defined on some probability space $(\Omega,\mathcal{F},\mathbb{P})$, and any $p \geq 1$, $\Vert X \Vert_{p}$ stands for the $L^p$ norm $\displaystyle{ \mathbb{E} \left( |X|^p \right)^{1/p} }$. Let $\mathcal{P}_{\Omega}(E)$ be the set of random probability measures on $E$. For all $p \geq 1$, we denote by $d_p$ the distance on $\mathcal{P}_{\Omega}(E)$ defined for all random measures $ \hat{\mu} , \hat{\nu}$ by $$d_p(\hat{\mu} , \hat{\nu}) = \underset{f\in \mathcal{O}_1 (E) }{\sup} \Vert \hat{\mu}(f) - \hat{\nu}(f) \Vert_p .$$ Finally, for any $x\in E$, $\delta_x$ stands for the Dirac measure centered on $x$. Dobrushin Ergodic Coefficient {#section-Dob-var-tot} ----------------------------- Let us recall here the definitions as well as some simple properties that will be useful in the following. 
Let $\mu , \nu \in \mathcal{P}(E)$; the total variation distance between $\mu$ and $\nu$ is defined by $$\left\| \mu- \nu \right\|_{tv} = \sup \{ \left| \mu(A) - \nu(A) \right| ; A \in \mathcal{E} \} .$$ To each Markov kernel $K$ on $E$ is associated its Dobrushin ergodic coefficient $\beta(K) \in [0,1]$, defined by $$\beta(K)= \sup \{ K(x,A) - K(y,A) ; x,y\in E, A\in \mathcal{E} \}$$ or, in an equivalent way, $$\beta(K)= \sup \{ \frac{\left\| \mu.K - \nu.K \right\|_{tv}} {\left\| \mu - \nu \right\|_{tv}} ; \mu, \nu \in \mathcal{P}(E) , \mu \neq \nu \} .$$ The parameter $\beta(K)$ characterizes the mixing properties of the Markov kernel $K$. Note that the coefficient $\beta$ is submultiplicative, in the sense that $\beta(K_1.K_2) \leq \beta(K_1).\beta(K_2)$ for any pair of Markov kernels $K_1$, $K_2$. By definition, for any measures $\mu, \nu \in \mathcal{P}(E)$ and any Markov kernel $K$, we have $\left\| \mu.K - \nu.K \right\|_{tv} \leq \beta(K). \left\| \mu- \nu \right\|_{tv}$. Moreover, for any function $f \in \mathcal{B}(E)$, $$\label{prop-Dob} osc(K.f) \leq \beta(K)\cdot osc(f).$$ Further details on these ergodic coefficients can be found in the monograph [@DM-FK], including the following lemma, which we will need hereinafter. \[BG-tv\] Let $\mu, \nu \in \mathcal{P}(E)$ and $G$ a positive, bounded function on $E$ satisfying $\underset{x,y\in E}{\sup} \frac{G(x)}{G(y)} \leq g $, for some finite constant $g\geq 0$. In this situation, we have $$\left\| \Psi_G(\mu) - \Psi_G(\nu) \right\|_{tv} \leq g. \left\| \mu - \nu \right\|_{tv} .$$ Feynman-Kac Models ------------------ We recall here some standard tools related to FK models. They provide useful theoretical background and notation to formalize and analyze IPS methods (see e.g. [@DM-Guionnet-2; @DM-Hu-Wu; @DM-M] for further details).
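On a finite state space, all the objects above are easy to compute. The following sketch (ours, with invented numerical values) evaluates the total variation distance, the Dobrushin coefficient and the Boltzmann-Gibbs transformation, and checks both the contraction property $\left\| \mu.K - \nu.K \right\|_{tv} \leq \beta(K). \left\| \mu- \nu \right\|_{tv}$ and Lemma \[BG-tv\].

```python
def tv(mu, nu):
    """Total variation distance between two probability vectors."""
    return 0.5 * sum(abs(a - b) for a, b in zip(mu, nu))

def dobrushin(K):
    """beta(K) = sup_{x,y} ||K(x,.) - K(y,.)||_tv for a row-stochastic matrix."""
    n = len(K)
    return max(tv(K[x], K[y]) for x in range(n) for y in range(n))

def bg(mu, G):
    """Boltzmann-Gibbs transformation psi_G(mu): reweight by G and renormalize."""
    w = [m * g for m, g in zip(mu, G)]
    z = sum(w)
    return [x / z for x in w]

def push(mu, K):
    """mu.K on a finite space."""
    n = len(K)
    return [sum(mu[x] * K[x][y] for x in range(n)) for y in range(n)]

# Invented kernel, measures and potential on a two-point space.
K = [[0.9, 0.1], [0.2, 0.8]]
mu, nu = [0.6, 0.4], [0.1, 0.9]
G = [1.0, 2.0]
g = max(G) / min(G)
beta_K = dobrushin(K)
contraction_ok = tv(push(mu, K), push(nu, K)) <= beta_K * tv(mu, nu) + 1e-12
bg_ok = tv(bg(mu, G), bg(nu, G)) <= g * tv(mu, nu) + 1e-12
```

For this two-state kernel the contraction inequality is saturated, which illustrates that $\beta(K)$ is the sharpest such constant.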
### Evolution Equations {#section-flot-FK} Consider a sequence of probability measures $(\eta_n)_n$, defined by an initial measure $\eta_0$ and recursive relations $$\label{defeta} \forall f \in \mathcal{B}(E), \quad \eta_n(f) = \frac{\eta_{n-1} \left( G_n \times M_n.f \right)}{\eta_{n-1}(G_n)}$$ for positive functions $G_n \in \mathcal{B}(E)$ and Markov kernels $M_n$ with $M_n(x,\cdot) \in \mathcal{P}(E)$ and $M_n(\cdot,A) \in \mathcal{B}_1(E)$. This is the sequence of measures we mainly wish to approximate with the IPS algorithm. In an equivalent way, $(\eta_n)_n$ can be defined by the relation $$\eta_n = \phi_n(\eta_{n-1})$$ where $\phi_n: \; \mathcal{P}(E) \rightarrow \mathcal{P}(E)$ is the FK transformation associated with potential function $G_n$ and Markov kernel $M_n$ and defined by $$\phi_n(\eta_{n-1})=\psi_{G_{n}}(\eta_{n-1}).M_n$$ with $$\psi_{G_{n}}(\eta_{n-1})(dx):=\frac{1}{\eta_{n-1}(G_{n})}~G_{n}(x)~\eta_{n-1}(dx) .$$ The next formula provides an interpretation of the Boltzmann-Gibbs transformation in terms of a nonlinear Markov transport equation $$\Psi_{G_{n}}(\eta_{n-1})(dy)=\left(\eta_{n-1} S_{n,\eta_{n-1}}\right)(dy):=\int\eta_{n-1}(dx) S_{n,\eta_{n-1}}(x,dy)$$ with the Markov transition $ S_{n,\eta_n}$ defined below $$S_{n,\eta_{n-1}}(x,dy)=\varepsilon_n.G_n(x)~\delta_x(dy)+\left(1-\varepsilon_n.G_n(x)\right)~\Psi_{G_{n}}(\eta_{n-1})(dy),$$ (for any constant $\varepsilon_n > 0$ so that $\varepsilon_n.G_n \leq 1$). 
This implies $$\label{rec-eta-noyau} \eta_{n}=\eta_{n-1} K_{n,\eta_{n-1}}\quad\mbox{\rm with}\quad K_{n,\eta_{n-1}}= S_{n,\eta_{n-1}}M_{n}$$ Therefore, $\eta_n$ can be interpreted as the distribution of the random states $\overline{X}_n$ of a Markov chain whose Markov transitions $$\label{eta-interp-Markov} \mathbb{P} \left(\overline{X}_{n+1}\in dy~|~\overline{X}_n=x\right):=K_{n+1,\eta_n}(x,dy)$$ depend on the current distribution $\eta_n=\mbox{\rm Law}\left(\overline{X}_n\right)$.\ We finally recall that the measures $\eta_n$ admit the following functional representations: $$\eta_n (f) = \frac{\gamma_n(f)}{\gamma_n(1)}$$ ($1$ stands for the unit function) with the unnormalized FK measures $\gamma_n$ defined by the formulae $$\label{defgamma} \gamma_0 = \eta_0 \quad ; \quad \gamma_n(f) = \gamma_{n-1} (G_n \times M_n.f) .$$ Comparing this definition with (\[defeta\]), it is clear that the normalizing constant $\gamma_n(1)$ satisfies $$\label{defgamma1} \gamma_n(1) = \prod_{p=1}^n \eta_{p-1}(G_p) .$$ The special interest given to this quantity will be motivated in Section \[section-Gibbs-motiv\]. ### Feynman-Kac Semigroup {#semigroupFK} An important point is that the semigroup transformations $$\phi_{p,n}:=\phi_n\circ\phi_{n-1}\circ \cdots \circ \phi_{p+1}$$ admit a structure comparable to that of each of the $\phi_k$. To be more precise, for each integer $p$, let us define the unnormalized integral operator $Q_p$ $$\label{defQ} \forall f \in \mathcal{B}(E) \; , \quad Q_p.f = G_p.M_p.f$$ and the composition operators $Q_{p,n}$ defined by the backward recursion $$\label{defQpn} Q_{p,n} = Q_{p+1}. \left( Q_{p+2} \ldots Q_n \right) =Q_{p+1}.Q_{p+1,n} .$$ We use the convention $Q_{n,n} = Id$ for $p=n$. Comparing these definitions with (\[defgamma\]), it is clear that $\gamma_n = \gamma_{n-1} . Q_n$ and more generally $$\gamma_n = \gamma_p . Q_{p,n}$$ for any $p\leq n$.
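As a small numerical aside (ours, with toy values for $G_n$ and $M_n$), the flow (\[defeta\]) together with the product formula (\[defgamma1\]) can be computed exactly on a finite state space:

```python
import math

def fk_recursion(eta0, Gs, Ms):
    """Exact FK flow: eta_n = psi_{G_n}(eta_{n-1}).M_n, together with the
    normalizing constant gamma_n(1) = prod_p eta_{p-1}(G_p)."""
    eta, log_gamma = list(eta0), 0.0
    for G, M in zip(Gs, Ms):
        zG = sum(e * g for e, g in zip(eta, G))        # eta_{n-1}(G_n)
        log_gamma += math.log(zG)
        sel = [e * g / zG for e, g in zip(eta, G)]     # psi_{G_n}(eta_{n-1})
        n = len(eta)
        eta = [sum(sel[x] * M[x][y] for x in range(n)) for y in range(n)]
    return eta, math.exp(log_gamma)

# Two identical steps on a two-point space (invented values).
G = [1.0, 2.0]
M = [[0.7, 0.3], [0.4, 0.6]]
eta2, gamma2 = fk_recursion([0.5, 0.5], [G, G], [M, M])
```

For this particular choice, one can check by hand that $\eta_{p-1}(G_p) = 3/2$ at every step, so $\gamma_2(1) = 9/4$.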
The semigroup $\phi_{p,n}$ can be expressed in terms of $Q_{p,n}$ with the following formulae $$\phi_{p,n}(\mu)(f) = \frac{\mu(Q_{p,n}.f)}{\mu(Q_{p,n}.1)}$$ for any $f \in \mathcal{B} (E)$ and $\mu \in \mathcal{P} (E)$. Finally, if we set $$P_{p,n}.f = \frac{Q_{p,n}.f}{Q_{p,n}.1} \quad \textmd{and} \quad G_{p,n} = Q_{p,n}.1$$ then we find that $$\phi_{p,n}(\mu)(f) = \frac{\mu(G_{p,n}.P_{p,n}.f)}{\mu(G_{p,n})} ,$$ or in other words: $ \phi_{p,n}(\mu) = \psi_{G_{p,n}}(\mu).P_{p,n} $. The Interacting Particle System Model {#algo-classique} ------------------------------------- The central idea is to approximate the measures $\eta_n$ by simulating an interacting particle system $(\zeta_n)_n = \left( \zeta_n^1, \ldots, \zeta_n^N \right)_n$ of size $N$ so that $$\eta^N_{n}=\frac{1}{N}\sum_{1\leq i\leq N}\delta_{\zeta^i_n}\rightarrow_{N\uparrow\infty}\eta_n .$$ Of course, the main issue is to make precise and to quantify this convergence. The particle model is defined as follows.\ We start with $N$ independent samples $\zeta_0 = (\zeta_0^1, \ldots , \zeta_0^N)$ from $\eta_0$. Then, the particle dynamics alternates two genetic type transitions.\ During the first step, every particle $\zeta^i_{n}$ evolves to a new particle $\widehat{\zeta}^i_{n}$ randomly chosen with the distribution $$S_{n+1,\eta^N_{n}}( \zeta^i_{n},dx) :=\varepsilon_{n+1}.G_{n+1}( \zeta^i_{n})~\delta_{ \zeta^i_{n}}(dx)+ \left(1-\varepsilon_{n+1}.G_{n+1}( \zeta^i_{n})\right)~ \Psi_{G_{n+1}}(\eta^N_n)(dx)$$ with the updated measures $$\Psi_{G_{n+1}}(\eta^N_n)=\sum_{j=1}^N\frac{G_{n+1}( \zeta^j_{n})}{\sum_{k=1}^N G_{n+1}( \zeta^k_{n})} \delta_{ \zeta^j_{n}} .$$ This transition can be interpreted as an acceptance-rejection scheme with a recycling mechanism. In the second step, the selected particles $\widehat{\zeta}^i_{n}$ evolve randomly according to the Markov transitions $M_{n+1}$. 
In other words, for any $1\leq i\leq N$, we sample a random state $ \zeta^i_{n+1}$ with distribution $M_{n+1}\left(\widehat{\zeta}^i_{n},dx\right)$.\ In view of (\[eta-interp-Markov\]), if we replace $\eta_n^N$ by $\eta_n$, then the particles $\zeta_n^i$ coincide with $N$ independent copies of the Markov chain $\overline{X}_n$ defined in (\[rec-eta-noyau\]). On the other hand, by the law of large numbers, we have $\eta_0^N \simeq \eta_0$ so that $$\eta_1^N \simeq \eta_0^N . K_{1,\eta_0^N} \simeq \eta_0 . K_{1,\eta_0} = \eta_1 .$$ Iterating this approximation procedure, the empirical measure $\eta^N_{n}$ is expected to approximate $\eta_n$ at any time $n \geq 0$. As for the unnormalized measures $\gamma_n$, we define $$\gamma_n^N (1) = \prod_{p=1}^n \eta_{p-1}^N (G_p)$$ (mimicking formula (\[defgamma1\])) and more generally $\displaystyle{ \gamma_n^N (f) = \eta_n^N(f) \times \prod_{p=1}^n \eta_{p-1}^N (G_p) }$. Let us mention (see for instance [@DM-filt]) that these particle models provide an unbiased estimate of the unnormalized measures; that is, we have $$\forall f \in \mathcal{B}(E), \quad \mathbb{E} \left( \gamma_n^N (f) \right) = \gamma_n(f) .$$ In addition to the convergence analysis of $\eta_n^N$, the concentration properties of the unbiased estimators $\gamma_n^N(1)$ around their limiting values $\gamma_n(1)$ will also be considered thereafter. Some Non-Asymptotic Results {#section-Dob-estim} --------------------------- To quantify the FK semigroup stability properties, it is convenient to introduce the following parameters.
For any integers $p<n$, we set $$b_n := \beta(M_n) \quad \textmd{and} \quad b_{p,n} := \beta(P_{p,n}).$$ $$g_n := \sup_{x,y \in E}\frac{G_n(x)}{G_n(y)} \quad \textmd{and} \quad g_{p,n} := \sup_{x,y \in E}\frac{G_{p,n}(x)}{G_{p,n}(y)}.$$ The quantities $g_{p,n}$ and $b_{p,n}$ reflect, respectively, the oscillations of the potential functions $G_{p,n}$ and the mixing properties of the Markov transitions $P_{p,n}$ associated with the FK semigroup $\phi_{p,n}$ described in Section \[semigroupFK\]. Several contraction inequalities of $\phi_{p,n}$ w.r.t. the total variation norm or different types of relative entropies can be derived in terms of these two quantities (see for instance [@DM-FK]).\ The performance analysis developed in Sections \[estim-generales\] and \[section-Gibbs-tuning\] is partly based on the three non-asymptotic inequalities presented below.\ Firstly, the following $L^p$-mean error bound for all $f \in \mathcal{B}_1 (E)$ is proven in [@DM-M]: $$\mathbb{E} \left( \left| \eta_n^N (f) - \eta_n (f) \right|^p \right)^{1/p} \leq \frac{B_p}{\sqrt{N}} \sum_{k=0}^{n} g_{k,n}b_{k,n} \label{born-Lp-DMM}$$ where $B_p$ are the constants introduced in (\[def-Bp\]).\ Secondly, the following exponential concentration inequality is derived in [@DM-Rio].
For all $f \in \mathcal{B}_1(E)$ and any $\varepsilon > 0$ we have: $$\frac{-1}{N} \log \mathbb{P} \left( \vert \eta_n^N(f) - \eta_n(f) \vert \geq \frac{r_n}{N} + \varepsilon \right) \geq \frac{\varepsilon^2}{2} \left[ b_n^{\star} \overline{\beta}_n + \frac{\sqrt{2}r_n}{\sqrt{N}} + \varepsilon \left( 2r_n + \frac{b_n^{\star}}{3} \right) \right]^{-1} \label{ineg-conc}$$ where $r_n$, $\overline{\beta}_n$ and $b_n^{\star}$ are constants so that: $$\left\{ \begin{array}{llll} r_n& \leq & 4 \sum_{p=0}^{n} g_{p,n}^3 b_{p,n} \\ & &\\ {\overline{\beta_n}}^2 & \leq & 4 \sum_{p=0}^{n} g_{p,n}^2 b_{p,n}^2 \\ & &\\ b_n^{\star} & \leq & \displaystyle{ 2 \sup_{0 \leq p \leq n} g_{p,n} b_{p,n}} \end{array} \right.$$ Thirdly, the following concentration inequality for the unnormalized particle models $\gamma_n^N$ is provided in [@DM-Hu-Wu] (see Theorem $6.5$). Let $$\label{def-fonctions-h} \begin{array}{cll} & h_0 := x \mapsto 2(x+\sqrt{x}) & \quad \textmd{and} \\ & \displaystyle{ h_1 := x \mapsto \frac{x}{3} + \sqrt{2x} }. & \end{array}$$ Then, $\forall \epsilon \in \{ +1,-1 \}$ and $\forall y \geq 0$: $$\label{ineg-conc-nor} \mathbb{P} \left( \frac{\epsilon}{n} \log \left( \frac{\gamma_n^N (1)}{\gamma_n(1)} \right) \geq \frac{\bar{r} (n) }{N} h_0 (y) + \tau_n^{\star} \bar{\sigma}_n^2 h_1 \left( \frac{y}{N.\bar{\sigma}_n^2} \right) \right) \leq e^{-y}$$ where the quantities $\tau_n^{\star}$, $\bar{\sigma}_n^2$ and $\bar{r} (n)$ can be estimated as follows: - $\displaystyle{ \tau_n^{\star} = \underset{0 \leq q \leq n}{\sup} \tau_{q,n} }$, where the $\tau_{q,n}$ satisfy $$\label{estim-tau} \tau_{q,n} \leq \frac{4}{n} \sum_{p=q}^{n-1} g_{q,p} . g_{p+1} . b_{q,p} \; ;$$ - $ \displaystyle{ \bar{\sigma}_n^2 = \sum_{q=0}^{n-1} \sigma_q^2 \left( \frac{\tau_{q,n} }{\tau_n^{\star} } \right)^2 }$ where the $\sigma_q$ satisfy $\sigma_q \leq 1$; - $\bar{r} (n)$ satisfies $$\label{estim-r} \bar{r} (n) \leq \frac{8}{n} \sum_{0 \leq q \leq p <n} g_{p+1} . g_{q,p}^3 . b_{q,p} .$$ Non-Asymptotic Theorems {#section-analyse-generale-FK} ======================= The formulae (\[born-Lp-DMM\]), (\[ineg-conc\]) and (\[ineg-conc-nor\]) provide explicit non-asymptotic estimates in terms of the quantities $g_{p,n}$ and $b_{p,n}$. Written this way, however, they are of little direct use for tuning the IPS parameters, since the only known or computable objects are generally the reference Markov kernels $M_p$ and the elementary potential functions $G_p$. We thus have to estimate $g_{p,n}$ and $b_{p,n}$ with some precision in terms of the $g_p$ and $b_p$. This task is performed in Section \[estim-generales\]. In Section \[section-result-unif\], we then combine these estimates with the concentration results presented in Section \[section-Dob-estim\] to derive some useful uniform estimates w.r.t. the time parameter. Semigroup Estimates {#estim-generales} ------------------- We start with a series of technical lemmas. \[lemmette1\] Let $K$ be a Markov kernel and $G$ a positive function on $E$ satisfying $\displaystyle{ \underset{x,y \in E}{\sup} \frac{G(x)}{G(y)} \leq g }$, for some finite constant $g$. In this situation, we have $$\underset{x,y \in E}{\sup} \; \frac{K.G(x)}{K.G(y)} \leq 1 + \beta(K)(g-1) .$$ Let $x,y \in E$ be such that $K.G(x) \geq K.G(y)$, and let us write $$\frac{K.G(x)}{K.G(y)} \; \; = \; \; \frac{K.G(x) - K.G(y)}{K.G(y)} +1 \; \; \leq \; \; \frac{\beta(K) (G_{max} - G_{min})}{G_{min}} +1 .$$ We check the last inequality using the fact that $$K.G(x) - K.G(y) \leq osc(K.G) \leq \beta(K). osc (G) = \beta(K).(G_{max}-G_{min}) .$$ On the other hand, we have $ K.G(y) = \int_{u}G(u)K(y,du) \geq G_{min} $. The desired result is now obtained by taking the supremum over all $(x,y) \in E^2$ and noting that $\frac{\beta(K) (G_{max} - G_{min})}{G_{min}} +1 \leq 1+\beta(K) (g-1)$, since $\frac{G_{max}-G_{min}}{G_{min}} \leq g-1$.\ This ends the proof of the lemma.
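This lemma is easy to check numerically on a finite state space. The sketch below (with an invented kernel and potential) compares the oscillation ratio of $K.G$ with the bound $1+\beta(K)(g-1)$:

```python
def dobrushin(K):
    """beta(K) as half the maximal L1 distance between rows of K."""
    n = len(K)
    return max(0.5 * sum(abs(K[x][z] - K[y][z]) for z in range(n))
               for x in range(n) for y in range(n))

# Invented three-state kernel and positive potential.
K = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.2, 0.7]]
G = [1.0, 3.0, 2.0]
KG = [sum(K[x][y] * G[y] for y in range(3)) for x in range(3)]
g = max(G) / min(G)                      # oscillation ratio of G
ratio = max(KG) / min(KG)                # oscillation ratio of K.G
bound = 1 + dobrushin(K) * (g - 1)       # bound of the lemma
```

Here the kernel mixes ($\beta(K)<1$), so the oscillation ratio of $K.G$ is strictly smaller than that of $G$, as the lemma predicts.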
\[lemmette2\] Let $M$ be a Markov kernel, $Q$ a not necessarily normalized integral operator satisfying $ \displaystyle{ \underset{x,y \in E}{\sup} \frac{Q.1(x)}{Q.1(y)} \leq g }$, for some finite constant $g \geq 1$, and $f$ a bounded, non-negative function. In this situation, the Markov kernel $P$ defined by $$P.f(x) := \frac{M.Q.f(x)}{M.Q.1(x)}$$ satisfies the following property: $$\beta(P) \leq g.\beta(M).\beta(P^{\prime}) .$$ In the above display formula, $P^{\prime}$ is the Markov transition defined by $ \displaystyle{ P^{\prime}.f(x) := \frac{Q.f(x)}{Q.1(x)} }$. Note that $P.f(x)$ can be written in the form $P.f(x) = \Psi_{Q.1}(\delta_x.M) \left( P^{\prime}.f \right)$. Thus, for any $ x,y \in E$, we have that $$\begin{aligned} \left| P.f(x) - P.f(y) \right| & = & \left| \left( \Psi_{Q.1}(\delta_x.M) - \Psi_{Q.1}(\delta_y.M) \right) \left( P^{\prime}.f \right) \right| \\ & \leq & \left\| \left( \Psi_{Q.1}(\delta_x.M) - \Psi_{Q.1}(\delta_y.M) \right) \right\|_{tv}.osc \left( P^{\prime}.f \right) .\end{aligned}$$ By Lemma \[BG-tv\], this implies that $$\left\| \left( \Psi_{Q.1}(\delta_x.M) - \Psi_{Q.1}(\delta_y.M) \right) \right\|_{tv} \; \leq \; g. \left\| \delta_x.M - \delta_y.M \right\|_{tv} \; \leq \; g.\beta(M) .$$ Using (\[prop-Dob\]), we have $osc \left( P^{\prime}.f \right) \leq \beta(P^{\prime}).osc(f)$.\ This ends the proof of the lemma. \[estim-g-b\] For any integers $p \leq n$, we have: $$\begin{aligned} g_{p,n}-1 & \leq \sum_{k=p+1}^n (g_k -1) b_{p+1} \ldots b_{k-1} \label{estim-g} \\ b_{p,n} & \leq \prod_{k=p+1}^n b_k . g_{k,n} \label{estim-b}\end{aligned}$$ Let us prove (\[estim-g\]). By definition, we have $G_{p,n} = Q_{p,n}.1$. Combining (\[defQ\]) and (\[defQpn\]) applied to the unit function, we have $$Q_{p-1,n}(1) \; = \; Q_{p}. \left[ \left( Q_{p+1} \ldots Q_n \right).1 \right] \; = \; G_p \times M_p. 
\left( Q_{p,n}.1 \right) .$$ This implies that the functions $G_{p,n}$ satisfy the following “backward” relations: $$G_{n,n}=1 \quad ; \quad G_{p-1,n}=G_p \times M_p.G_{p,n}$$ Then, for any $x,y\in E$, we deduce that $$\frac{G_{p-1,n}(x)}{G_{p-1,n}(y)} = \underbrace{\frac{G_p(x)}{G_p(y)}}_{E_1} \times \underbrace{ \frac{ \left( M_{p}.G_{p,n} \right)(x) }{\left( M_{p}.G_{p,n} \right)(y)} }_{E_2} .$$ Notice that $E_1 \leq g_p$ (by definition), and by Lemma \[lemmette1\], we have $E_2 \leq 1 + \beta(M_{p}).(g_{p,n}-1)$. This shows the following backward inequalities: $$\label{estim-rec-g} g_{n,n}=1 \quad ; \quad g_{p-1,n} \leq g_p \left( 1+ b_p(g_{p,n}-1) \right)$$ We end the proof of (\[estim-g\]) by induction.\ To prove (\[estim-b\]), we use the formulae $$P_{p-1,n}.f \; = \; \frac{Q_{p-1,n}.f}{Q_{p-1,n}.1} \; = \; \frac{G_p \times M_p.Q_{p,n}.f}{G_p \times M_p.Q_{p,n}.1} \; = \; \frac{ M_p.Q_{p,n}.f}{M_p.Q_{p,n}.1} .$$ Recalling that $\displaystyle{ P_{p,n}.f = \frac{Q_{p,n}.f}{Q_{p,n}.1} } $, we apply Lemma \[lemmette2\] to check that $\beta(P_{p-1,n}) \leq \beta(M_p).g_{p,n}.\beta(P_{p,n}) $, from which we conclude that $$b_{p-1,n} \leq b_p.g_{p,n}.b_{p,n} .$$ We end the proof of (\[estim-b\]) by induction.\ This ends the proof of the lemma. We end this section with a useful technical lemma to control the quantity $g_{p,n} b_{p,n}$. \[lemme-gpn-bpn\] For any $p \leq n$, we have $$g_{p,n} b_{p,n} \leq \prod_{k=p+1}^n ( b_k.g_{k-1,n} ) .$$ Using Lemma \[estim-g-b\], we have $$\begin{array}{ccl} g_{p,n} b_{p,n} & \; \; \leq \; \; & \displaystyle{ g_{p,n} . \prod_{k=p+1}^n b_k g_{k,n} } \\ & \; \; = \; \; & g_{p,n} . (b_{p+1}g_{p+1,n}) . (b_{p+2} g_{p+2,n}) \ldots (b_{n-1} g_{n-1,n}) . (b_n \underbrace{g_{n,n}}_{=1} ) \\ & \; \; = \; \; & (g_{p,n} b_{p+1}) . (g_{p+1,n} b_{p+2}) \ldots (g_{n-1,n} b_n) \; \; = \; \; \displaystyle{ \prod_{k=p+1}^n b_k g_{k-1,n} } . \end{array}$$ This ends the proof of the lemma. 
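The backward inequality (\[estim-rec-g\]) at the heart of the previous proofs can be checked numerically. The sketch below (toy values, with the same $G$ and $M$ at every step) computes the functions $G_{p,n}$ by the backward recursion $G_{p-1,n}=G \times M.G_{p,n}$ and compares the oscillation ratios $g_{p,n}$ with the bound $g_p(1+b_p(g_{p,n}-1))$:

```python
def dobrushin(M):
    """beta(M) as half the maximal L1 distance between rows of M."""
    n = len(M)
    return max(0.5 * sum(abs(M[x][z] - M[y][z]) for z in range(n))
               for x in range(n) for y in range(n))

# Toy model with identical steps; all values are illustrative only.
M = [[0.8, 0.2], [0.3, 0.7]]
G = [1.0, 1.5]
b = dobrushin(M)                     # = b_k for every k
g = max(G) / min(G)                  # = g_k for every k

# Backward computation of G_{p,n}, starting from G_{n,n} = 1,
# recording the oscillation ratios g_{p,n} from p = n downwards.
n_steps = 5
Gpn = [1.0, 1.0]
ratios = [1.0]                       # g_{n,n} = 1
for _ in range(n_steps):
    MG = [sum(M[x][y] * Gpn[y] for y in range(2)) for x in range(2)]
    Gpn = [G[x] * MG[x] for x in range(2)]
    ratios.append(max(Gpn) / min(Gpn))

# Check the backward inequality g_{p-1,n} <= g_p (1 + b_p (g_{p,n} - 1)).
checks = [r_next <= g * (1 + b * (r - 1)) + 1e-12
          for r, r_next in zip(ratios, ratios[1:])]
```

In this run the ratios $g_{p,n}$ increase as $n-p$ grows but stay below the recursive bound at every step.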
The term $g_{p,n} b_{p,n}$ is central to the $L^p$-mean error bound (\[born-Lp-DMM\]). By Lemma \[lemme-gpn-bpn\] we have $$\displaystyle{\sum_{p=0}^{n} \prod_{k=p+1}^{n} b_k g_{k-1,n} < +\infty}\Longrightarrow\sum_{p=0}^{n} g_{p,n} b_{p,n} < +\infty .$$ This gives a sufficient condition for a uniform $L^p$ bound w.r.t. time $n$. The term $g_{p,n} b_{p,n}$ is also involved in the estimates of all the quantities defined in Section \[section-Dob-estim\], such as $r_n$, ${\overline{\beta_n}}^2$, and others. In addition, by Lemma \[BG-tv\] and the definition of $b_{p,n}$, we have the stability property $$\Vert \phi_{p,n} (\mu) - \phi_{p,n} (\nu) \Vert_{tv} \leq g_{p,n} b_{p,n} \Vert \mu - \nu \Vert_{tv} .$$ This shows that the term $g_{p,n} b_{p,n}$ is central to quantifying the stability properties of the semigroup $\phi_{p,n}$. Uniform Concentration Theorems {#section-result-unif} ------------------------------ To obtain uniform bounds w.r.t. the time horizon, Lemma \[lemme-gpn-bpn\] naturally leads to a sufficient condition of the following type: $$\underset{k \leq n}{\sup} \; b_k . g_{k-1,n} \leq a\quad \mbox{\rm for some} \quad a \in (0,1)$$ In this situation, we prove that $\displaystyle{ g_{p,n} b_{p,n} \leq a^{n-p} }$, and therefore, using (\[born-Lp-DMM\]), $$\underset{f \in \mathcal{B}_1 (E)}{\sup} \; \mathbb{E} \left( \left| \eta_n^N (f) - \eta_n (f) \right|^p \right)^{1/p} \leq \frac{B_p}{\sqrt{N}}~\frac{1}{1-a} ,$$ with the constants $B_p$ introduced in (\[def-Bp\]). We then fix the parameter $a \in (0,1)$ and we look for conditions on the $b_p$ so that $b_k g_{k-1,n} \leq a$. This parameter $a$ can be interpreted as a degree of performance of the $N$-approximation model. In order to exhibit relevant and applicable conditions, we study two typical classes of regularity conditions on the potential functions $G_p$. The first one relates to bounded coefficients $g_p$ (Theorem \[reg born\]).
In the second one, the parameters $g_p$ tend to $1$ as $p \rightarrow \infty$ (Theorem \[reg dec\]).\ The concentration inequalities developed in Theorem \[reg born\] will be described in terms of the parameters $(r_1^{\star },r_2^{\star })$ defined below. $$\label{def-r-star-12} \left\{ \begin{array}{lll} r_1^{\star }&=& \frac{9}{2} \frac{(M+a)^2}{1-a} + \sqrt{\frac{8}{\sqrt{1-a^2}} + \frac{18(M+a)^2}{\sqrt{N}} } \\ & &\\ r_2^{\star} &=& 18 \frac{(M+a)^2}{1-a} + \sqrt{\frac{8}{\sqrt{1-a^2}} + \frac{18 (M+a)^2}{\sqrt{N}}} \end{array} \right.$$ \[reg born\] We assume that $$\label{condition-born} \underset{p \geq 1}{\sup} \; g_p \leq M \quad \quad \textmd{and} \quad \quad \underset{p \geq 1}{\sup} \; b_p \leq \frac{a}{a+M}$$ for some finite $M \geq 1$. In this situation we have the following uniform estimates. - The $L^p$-error bound: $$\label{reg-born-Lp} \underset{n \geq 0}{\sup} \; \; d_p \left( \eta_n^N , \eta_n \right) \leq \frac{B_p}{2 (1-a) \sqrt{N} } \;$$ - For any $n\geq 0$, $N \geq 1$, $y \geq 0$ and $f \in \mathcal{B}_1 (E)$, the probability of the event $$\label{reg-born-eta} \vert \eta_n^N(f) - \eta_n(f) \vert \leq \frac{r_1^{\star} N + r_2^{\star} y}{N^2}$$ is greater than $1-e^{-y}$, with the parameters $(r_1^{\star },r_2^{\star })$ defined in (\[def-r-star-12\]). - For any $n\geq 0$, $N \geq 1$, $\epsilon \in \{ +1 , -1 \}$, $y \geq 0$ and $f \in \mathcal{B}_1 (E)$, the probability of the event $$\label{reg-born-gamma} \frac{\epsilon}{n} \log \left( \frac{\gamma_n^N (1)}{\gamma_n(1)} \right) \leq \frac{\tilde{r}_1}{N} h_0(y) + \tilde{r}_2.h_1 \left( \frac{y}{n.N} \right)$$ is greater than $1-e^{-y}$, with the parameters $ \tilde{r}_1 = \frac{8 M (M+a)^2}{1-a} $ and $ \tilde{r}_2 = \frac{4M}{1-a} $. Firstly, we prove the inequalities $$\label{estim-gpn-prod-born} \left\{ \begin{array}{llll} g_{p,n}&\leq &M+a \\ b_p . g_{p-1,n}&\leq&a \end{array} \right.$$ Let us assume that $b_p \leq \frac{1}{A}$ for some $A > 1$. 
Then, by Lemma \[estim-g-b\], $$(g_{p,n}-1) \leq \sum_{k=p+1}^{n} \frac{M-1}{A^{k-(p+1)}} = (M-1) \frac{1-(\frac{1}{A})^{n-p}}{1-\frac{1}{A}} \leq \frac{A}{A-1} (M-1) .$$ On the one hand, this estimation implies that $g_{p,n}\leq M+a$ as soon as $\displaystyle{ A \geq \frac{M+a-1}{a} =: A_1 }$. On the other hand, we can write $$b_p . g_{p-1,n} \leq \frac{1}{A} (g_{p-1,n} -1) + \frac{1}{A} = \frac{M-1}{A-1} + \frac{1}{A},$$ from which we check that $b_p . g_{p-1,n}\leq a$ as soon as $ A \geq \frac{M+a+\sqrt{(M+a)^2 - 4a}}{2a} =:A_2 $. The inequalities (\[estim-gpn-prod-born\]) are then proven using the fact that $A_1$ and $A_2$ are both lower than $\frac{M+a}{a}$. - By Lemma \[lemme-gpn-bpn\] and (\[estim-gpn-prod-born\]) we have $g_{p,n}b_{p,n} \leq a^{n-p}$. Combining this with (\[born-Lp-DMM\]), the $L^p$-error bound (\[reg-born-Lp\]) is clear. - Let us prove (\[reg-born-eta\]), which is a consequence of the concentration inequality (\[ineg-conc\]). Combining the estimations of $r_n$, ${\overline{\beta_n}}^2$ and $b_n^{\star}$ given in section \[section-Dob-estim\] with (\[estim-gpn-prod-born\]) and $g_{p,n}b_{p,n} \leq a^{n-p}$, we deduce that $$r_n \leq \frac{4(M+a)^2}{1-a} \, , \quad {\overline{\beta_n}}^2 \leq \frac{4}{1-a^2} \quad \textmd{and} \quad b_n^{\star} \leq 2.$$ These estimations applied in (\[ineg-conc\]) lead to $$- \log \; \mathbb{P} \left( \vert \eta_n^N(f) - \eta_n(f) \vert \geq \frac{r_1}{4N} + \varepsilon \right) \geq \frac{N \varepsilon^2}{r_2 + r_1(\frac{1}{\sqrt{N}} + \varepsilon)}$$ with $r_1 = \frac{18(M+a)}{1-a}$ and $r_2 = \frac{8}{\sqrt{1-a^2}}$. We set $y = \frac{N \varepsilon^2}{r_2 + r_1(\frac{1}{\sqrt{N}} + \varepsilon)} > 0$. 
Given some $y >0$, we have $$\varepsilon(y) = \frac{ r_1 y + \sqrt{r_1^2 y^2 + 4N(r_2 + \frac{r_1}{\sqrt{N}} )y } }{2N^2} \; \Longrightarrow \; y = \frac{N {\varepsilon(y)}^2}{r_2 + r_1(\frac{1}{\sqrt{N}} + \varepsilon(y))}$$ and then it is clear that $$\mathbb{P} \left( \vert \eta_n^N(f) - \eta_n(f) \vert \geq \frac{r_1}{4N} + \varepsilon(y) \right) \leq e^{-y}.$$ After some elementary calculations, we prove that $$\frac{r_1}{4N} + \varepsilon(y) \leq \frac{r_1^{\star} N + r_2^{\star} y}{N^2} .$$ - The last concentration inequality (\[reg-born-gamma\]) is a consequence of (\[ineg-conc-nor\]) and (\[estim-gpn-prod-born\]). Indeed, from the estimates (\[estim-tau\]) and (\[estim-r\]), we can easily show that the quantities $\tau_n^{\star}$ and $\bar{r}(n)$ satisfy $$\tau_n^{\star} \leq \frac{4M}{n(1-a)} \quad \textmd{and} \quad \bar{r}(n) \leq \frac{8.M(M+a)^2}{1-a} .$$ On the other hand, $\bar{\sigma}_n^2$ is trivially bounded by $n$. Then we find that $$\frac{\bar{r} (n) }{N} h_0 (y) + \tau_n^{\star} \bar{\sigma}_n^2 h_1 \left( \frac{y}{N.\bar{\sigma}_n^2} \right) = \frac{\bar{r} (n) }{N} h_0 (y) + \frac{y \tau_n^{\star}}{3N} + \sqrt{ \frac{2y(\tau_n^{\star} \bar{\sigma}_n^2)}{N} } \; .$$ Finally, (\[reg-born-gamma\]) is obtained by making the suitable substitutions. This ends the proof of the theorem. Let us now consider the case where $g_p$ decreases to $1$ as $p \rightarrow \infty$. The idea of the forthcoming analysis is to find a condition on the $b_p$ so that the $g_{p,n}$ are uniformly bounded w.r.t. $n$ by $g_{p+1}^{1+\alpha}$ with $$\alpha = \frac{a}{1-a} > 0 \quad \left( \Longleftrightarrow a = \frac{\alpha}{1 + \alpha} \right) .$$ The concentration inequalities developed in Theorem \[reg dec\] will be described in terms of the parameters $r_3^{\star}(n),r_4^{\star}(n)$ and $\tilde{r}_3(n),\tilde{r}_4,\tilde{r}_5(n)$ defined below. $$\label{def-r-star-34} \left\{ \begin{array}{lll} r_3^{\star } (n) &=& \frac{9. 
u_1(n)}{2(1-a)} + \sqrt{\frac{8}{\sqrt{1-a^2}} + \frac{18 . u_1(n)}{\sqrt{N}} } \\ & & \\ r_4^{\star} (n) &=& \frac{18 . u_1(n)}{1-a} + \sqrt{\frac{8}{\sqrt{1-a^2}} + \frac{18 . u_1(n)}{\sqrt{N}} } \end{array} \right.$$ $$\label{def-r-tilde-345} \left\{ \begin{array}{cllll} \tilde{r}_3(n)&=& \frac{16.u_2(n)}{1-a} & & \\ &&&&\\ \tilde{r}_4 &=& \frac{4}{3} \; \sum_{n \geq 0} g_{n+1}.a^n & \leq & \frac{4.g_1}{3(1-a)} \\ & &\\ \tilde{r}_5(n) &=& \frac{4\sqrt{2}. u_3(n) }{1-a} & & \end{array} \right.$$ The sequences $u_1(n)$, $u_2(n)$ and $u_3(n)$ used in the above formulae are defined by $$\begin{cases} u_1(n) \; = \; (1-a) \; \sum_{p \geq 0} g_{n-p+1}^{2(1+\alpha)} a^p \; \; \underset{n \rightarrow \infty}{\longrightarrow} 1 \\ u_2(n) \; = \; \frac{1}{n} \; \sum_{p=1}^n g_p^{3+2\alpha} \; \; \underset{n \rightarrow \infty}{\longrightarrow} 1 \\ u_3(n) \; = \; \left( \frac{1}{n} \; \sum_{p=0}^{n-1} g_{p+1}^2 \right)^{1/2} \; \; \underset{n \rightarrow \infty}{\longrightarrow} 1 . \end{cases}$$ Notice that the sequence $u_1(n)$ tends to $1$ by dominated convergence. Sequences $u_2(n)$ and $u_3(n)$ tend to $1$ by Cesaro’s theorem. \[reg dec\] We assume that $g_p \downarrow 1$ as $p \rightarrow \infty$ and the sequence $b_p$ satisfies for any $p \geq 1$ $$\label{cond-bp-gpun} b_p \leq \displaystyle{\frac{g_p^{\alpha} -1} {g_p^{\alpha+1} -1}} \left( \underset{p \rightarrow +\infty}{\longrightarrow} a \right) \quad\mbox{\rm and}\quad b_p \leq \displaystyle{\frac{a} {g_p^{\alpha+1}}} \left( \underset{p \rightarrow +\infty}{\longrightarrow} a \right) .$$ In this situation, we have the following uniform estimates. - The $L^p$-error bound $$\label{reg-dec-Lp} \underset{n \geq 0}{\sup} \; \; d_p \left( \eta_n^N , \eta_n \right) \leq \frac{B_p}{2 (1-a) \sqrt{N} } \;$$ with the constants $B_p$ introduced in (\[def-Bp\]). 
- For any $n\geq 0$, $N \geq 1$, $y \geq 0$ and $f \in \mathcal{B}_1 (E)$, the probability of the event $$\label{reg-dec-eta} \vert \eta_n^N(f) - \eta_n(f) \vert \; \; \leq \; \; \frac{r_3^{\star}(n). N + r_4^{\star}(n). y}{N^2}$$ is greater than $1-e^{-y}$, with the parameters $r_3^{\star } (n),r_4^{\star} (n)$ defined in (\[def-r-star-34\]). - For any $n\geq 0$, $N \geq 1$, $\epsilon \in \{ +1 , -1 \}$, $y \geq 0$ and $f \in \mathcal{B}_1 (E)$, the probability of the event $$\label{reg-dec-gamma} \frac{\epsilon}{n} \log \left( \frac{\gamma_n^N (1)}{\gamma_n(1)} \right) \; \; \leq \; \; \tilde{r}_3(n) \left( \frac{y + \sqrt{y} }{N} \right) + \tilde{r}_4 \left( \frac{y}{n.N} \right) + \tilde{r}_5(n) \sqrt{ \frac{y}{n.N} }$$ is greater than $1-e^{-y}$, with the parameters $\tilde{r}_3(n),\tilde{r}_4,\tilde{r}_5(n)$ defined in (\[def-r-tilde-345\]). Firstly, we prove that $$\label{estim-gpn-prod-dec} \textmd{(\ref{cond-bp-gpun})} \quad \Longrightarrow \quad \forall p \leq n, \; \left\{ \begin{array}{llll} g_{p,n}& \leq &(g_{p+1})^{1 + \alpha} \\ g_{p-1,n} . b_p & \leq & a \end{array} \right.$$ The proof of the first inequality comes from a simple backward induction on $p$ (with fixed $n$), using the formula $g_{p-1,n} \leq g_p \left( 1+ b_p(g_{p,n}-1) \right)$ (see (\[estim-rec-g\])). For $p=n$, $g_{p,n}$ is clearly smaller than $g_{p+1}^{1+\alpha}$ because $g_{n,n} = 1$. Now we assume that $g_{p,n} \leq g_{p+1}^{1+\alpha}$. In this case, $g_{p-1,n} \leq g_{p}^{1+\alpha}$ is met as soon as $$b_p \leq \frac{g_p^{\alpha} -1} {g_{p+1}^{\alpha+1} -1}.$$ Notice that this condition is met as soon as $b_p \leq \frac{g_p^{\alpha} -1}{g_{p}^{\alpha+1} -1}$, the sequence $(g_p)_p$ being decreasing. The second assertion is now immediate, since $g_{p-1,n} . b_p \leq g_p^{1+\alpha} . \frac{a}{g_p^{\alpha+1}} = a$. - Now that we have proved (\[estim-gpn-prod-dec\]) (which implies $g_{p,n}b_{p,n} \leq a^{n-p}$ by Lemma \[lemme-gpn-bpn\]), the $L^p$-mean error bound (\[reg-dec-Lp\]) comes from a simple substitution in (\[born-Lp-DMM\]).
- To prove (\[reg-dec-eta\]), we focus on the quantities ${\overline{\beta_n}}^2$, $b_n^{\star}$ and $r_n$ arising in the concentration inequality (\[ineg-conc\]). With (\[estim-gpn-prod-dec\]) and $g_{p,n}b_{p,n} \leq a^{n-p}$, we readily verify that $${\overline{\beta_n}}^2 \leq \frac{4}{1-a^2} \quad \textmd{and} \quad b_n^{\star} \leq 2 .$$ The term $r_n$ can be roughly bounded by $\frac{4}{1-a} g_1^{2(1+\alpha)}$, but another manipulation provides a more precise estimate. Indeed, using the fact that $b_{p,n} . g_{p,n} \leq a^{n-p}$ and $g_{p,n} \leq g_{p+1}^{1+ \alpha}$, we prove that $$\begin{array}{ccccccc} r_n & \; \; \leq \; \; & \displaystyle{4 \sum_{p=0}^{n} g_{p,n}^2 a^{n-p} } & \; \; \leq \; \; & \displaystyle{ 4 \sum_{p=0}^{n} g_{p+1}^{2(1+ \alpha)} a^{n-p} } &\; \; \leq \; \; & \displaystyle{ 4 \sum_{p=0}^{n} g_{n-p+1}^{2(1+ \alpha)} a^{p} }\\ &&&&& \; \; \leq \; \; & \displaystyle{ 4 \sum_{p \geq0} g_{n-p+1}^{2(1+ \alpha)} a^{p} } \\ &&&&& \; \; \leq \; \; & \displaystyle{ \frac{4.u_1(n)}{1-a} }. \end{array}$$ We prove (\[reg-dec-eta\]) using the same line of arguments as in the proof of Theorem \[reg born\]. - Let us prove the last concentration inequality (\[reg-dec-gamma\]). It is mainly a consequence of the inequality (\[ineg-conc-nor\]). Starting from the following decomposition $$\label{osef} \frac{\bar{r} (n) }{N} h_0 (y) + \tau_n^{\star} \bar{\sigma}_n^2 h_1 \left( \frac{y}{N.\bar{\sigma}_n^2} \right) = \frac{2 \bar{r} (n) }{N} \left( y + \sqrt{y} \right) + \frac{y \tau_n^{\star}}{3N} + \sqrt{ \frac{2y(\tau_n^{\star} \bar{\sigma}_n^2)}{N} } \; ,$$ we need to find some refined estimates of the quantities $\tau_n^{\star}$, $\bar{r}_n$ and $(\tau_n^{\star} . \bar{\sigma}_n^2)$. 
To estimate $\tau_n^{\star}$, we notice that $\forall q \leq p, \; g_{p+1} \leq g_{p-q+1}$, so that $$\begin{array}{ccccccc} \tau_{q,n} & \; \; \leq \; \; & \displaystyle{ \frac{4}{n} \sum_{p=q}^{n-1} g_{p+1} a^{p-q} } & \; \; \leq \; \; & \displaystyle{ \frac{4}{n} \sum_{p=q}^{n-1} g_{p-q+1} a^{p-q} } & \; \; \leq \; \; & \displaystyle{ \frac{4}{n} \sum_{p=0}^{n-q-1} g_{p+1} a^p } \\ &&&&& \; \; \leq \; \; & \displaystyle{ \frac{4}{n} \sum_{p=0}^{n-1} g_{p+1} a^p } . \end{array}$$ Finally, we have that $\displaystyle{ \tau_n^{\star} \leq \frac{4.U_0}{n} }$, where $\displaystyle{ U_0 = \sum_{p \geq 0} g_{p+1} a^p \leq \frac{g_1}{1-a} }$.\ We estimate $\bar{r}_n$ using the following inequalities: $$\begin{array}{ccccc} \bar{r}_n & \; \; \leq \; \; & \displaystyle{ \frac{8}{n} \sum_{0\leq q \leq p <n} g_{p+1} . g_{p,q}^3 . b_{p,q} } & \; \; \leq \; \; & \displaystyle{ \frac{8}{n} \sum_{p=0}^{n-1} \; \sum_{q=p-n}^p g_{p+1}^{3+2\alpha} a^{p-q} } \\ &&& \; \; \leq \; \; & \displaystyle{ \frac{8}{1-a} \cdot \underbrace{\frac{1}{n} \sum_{p=1}^{n} g_{p}^{3+2\alpha} }_{=u_2(n)} } \end{array}$$ Let us conduct one last useful estimate: $$\begin{array}{ccccc} \tau_n^{\star} . \bar{\sigma}_n^2 & \; \; \leq \; \; & \displaystyle{ \sum_{q =0}^{n-1} \tau_{q,n}^2} & \; \; \leq \; \; & \displaystyle{ \sum_{q =0}^{n-1} \left( \frac{4}{n} \; \sum_{p=0}^{n-q-1} \; g_{p+q+1} . a^p \right)^2 } \\ &&& \; \; \leq \; \; & \displaystyle{ \sum_{q =0}^{n-1} \; \frac{16}{n^2} \cdot g_{q+1}^2 \cdot \frac{1}{(1-a)^2} } \\ &&& \; \; \leq \; \; & \displaystyle{ \frac{16}{n.(1-a)^2} \times \underbrace{ \frac{1}{n} \sum_{q =0}^{n-1} g_{q+1}^2}_{= \left(u_3(n)\right)^2} } \end{array}$$ At last, we make the suitable substitutions in (\[osef\]) and obtain the desired inequality (\[reg-dec-gamma\]). This ends the proof of the theorem.
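As an illustrative numerical check (not part of the paper), one can verify the convergence to $1$ of the sequences $u_1(n)$, $u_2(n)$ and $u_3(n)$ for a concrete decreasing sequence $g_p \downarrow 1$; the choices $g_p = 1 + 1/p$, $a = 1/2$ and $\alpha = 1$ below are assumptions made purely for this sketch.

```python
# Numerical check (illustrative only) that u_1(n), u_2(n), u_3(n) -> 1
# for the assumed choice g_p = 1 + 1/p (decreasing to 1), a = 0.5, alpha = 1.

a, alpha = 0.5, 1.0

def g(p):
    return 1.0 + 1.0 / p

def u1(n, terms=2000):
    # u_1(n) = (1-a) * sum_{p >= 0} g_{n-p+1}^{2(1+alpha)} a^p;
    # indices q = n-p+1 below 1 are clipped to 1, which only affects
    # the exponentially damped tail of the series.
    s = sum(g(max(n - p + 1, 1)) ** (2 * (1 + alpha)) * a ** p
            for p in range(terms))
    return (1 - a) * s

def u2(n):
    # u_2(n) = (1/n) * sum_{p=1}^{n} g_p^{3+2*alpha}
    return sum(g(p) ** (3 + 2 * alpha) for p in range(1, n + 1)) / n

def u3(n):
    # u_3(n) = ( (1/n) * sum_{p=0}^{n-1} g_{p+1}^2 )^{1/2}
    return (sum(g(p + 1) ** 2 for p in range(n)) / n) ** 0.5

# Dominated convergence (u_1) and Cesaro's theorem (u_2, u_3) predict
# convergence to 1; numerically all three are within 1% at n = 10^4.
for u in (u1, u2, u3):
    assert abs(u(10_000) - 1.0) < 1e-2
```

With this choice of $g_p$ the three sequences approach $1$ at rate $O(\log n / n)$, consistent with the Cesàro averaging in their definitions.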
Interacting Simulated Annealing Models {#section-Gibbs} ====================================== Some Motivations {#section-Gibbs-motiv} ---------------- We consider the Boltzmann-Gibbs probability measure associated with an inverse “temperature” parameter $\beta \geq 0$ and a given potential function $V \in \mathcal{B}(E)$ defined by $$\label{def-mes-Gibbs} \mu_{\beta} (dx) = \frac{1}{Z_{\beta}} e^{- \beta.V(x)} m(dx) ,$$ where $m$ stands for some reference measure, and $Z_{\beta}$ is a normalizing constant. We let $\beta_n$ be a strictly increasing sequence (which may tend to infinity as $n \rightarrow \infty$). In this case, the measures $\eta_n = \mu_{\beta_n}$ can be interpreted as an FK flow of measures with potential functions $G_n = e^{-(\beta_n - \beta_{n-1})V}$ and Markov transitions $M_n$ chosen as MCMC dynamics for the current target distributions. Indeed, we have $$\mu_{\beta_{n}} (dx) = \frac{e^{- \beta_n.V(x)}}{Z_{\beta_n}} m(dx) = \frac{ Z_{\beta_{n-1}} } {Z_{\beta_n}} \underbrace{e^{-(\beta_n - \beta_{n-1})V(x)}}_{=G_n(x)} \underbrace{ \left( \frac{e^{- \beta_{n-1}.V(x)}}{Z_{\beta_{n-1}}} m(dx) \right) }_{=\mu_{\beta_{n-1}} (dx)}.$$ This shows $\displaystyle{ \mu_{\beta_{n}} = \psi_{G_n} (\mu_{\beta_{n-1}}) } $. Let $\phi_n$ stand for the FK transformation associated with potential function $G_n$ and Markov transition $M_n$. We have $$\phi_n(\mu_{\beta_{n-1}}) = \psi_{G_n} (\mu_{\beta_{n-1}}) . M_n = \mu_{\beta_{n}} . M_n = \mu_{\beta_{n}} .$$ Sampling from these distributions is a challenging problem in many application domains. The simplest one is sampling from a complex posterior distribution on a Euclidean space $E=\mathbb{R}^d$, for some $d \geq 1$. Let $x$ be a variable of interest, associated with a prior density $p(x)$ (easy to sample) with respect to Lebesgue measure $dx$ on $E$, and $y$ a vector of observations, associated with a calculable likelihood model $p(y \mid x)$.
In this context, we recall that $p(y \mid x)$ is the density of the observations given the variable of interest. The density $p(x \mid y)$ of the posterior distribution $\eta$ is given by Bayes’ formula $$p(x \mid y) \propto p(x). p(y \mid x).$$ In the case where $\eta$ is highly multimodal, it is difficult to sample from it directly. For example, classic MCMC methods tend to get stuck in local modes for very long times. As a result, they converge to their equilibrium $\eta$ only on impractically long time-scales. To overcome this problem, a common solution is to approximate the target distribution $\eta$ with a sequence of measures $\eta_0, \ldots , \eta_n$ with densities $$\eta_k(dx) \propto p(x). p(y \mid x)^{\beta_k} dx$$ where $\displaystyle{ (\beta_k)_{0 \leq k \leq n} }$ is a sequence of numbers increasing from $0$ to $1$, so that $\eta_0$ is the prior distribution of density $p(x)$, easy to sample, and the terminal measure $\eta_{n}$ is the target distribution $\eta$ (see for instance [@Bertrand; @Giraud-RB; @Minvielle; @Neal]). If we take $V:= x \mapsto - \log(p(y \mid x))$ and $m(dx) := p(x) dx$, then the $\eta_k$ coincide with the Boltzmann-Gibbs measures $\mu_{\beta_k}$ defined in (\[def-mes-Gibbs\]). In this context, IPS methods arise as a relevant approach, especially if $\eta$ is multimodal, since the use of a large number of particles allows several modes to be covered simultaneously.\ The normalizing constant $\gamma_n(1)$ coincides with the marginal likelihood $p(y)$. Computing this constant is another central problem, arising in model selection for hidden Markov chain problems and in Bayesian statistics.\ Next, we present another important application in physics and chemistry, known as free energy estimation.
The problem starts with an un-normalized density of the form $$q \left( \omega \mid T,\alpha \right) = \exp \left( -\frac{H(\omega , \alpha)}{k.T} \right)$$ where $H(\omega, \alpha)$ is the energy function of state $\omega$, $k$ is Boltzmann’s constant, $T$ is the temperature and $\alpha$ is a vector of system characteristics. The free energy $F$ of the system is defined by the quantity $$F(T, \alpha) = -k.T.\log(z(T, \alpha)),$$ where $z(T, \alpha)$ is the normalizing constant of the system density. See for instance [@Ceperley; @Ciccotti-Hoover; @Frankel-Smit] for a further discussion on these ground state energy estimation problems.\ Last, but not least, it is well known that sampling from Boltzmann-Gibbs measures is related to the problem of minimizing the potential function $V$. The central idea is that $\mu_{\beta}$ tends to concentrate on $V$’s minimizers as the inverse temperature $\beta$ tends to infinity. To be more precise, we provide an exponential concentration inequality in Lemma \[lemme-conc-Gibbs\].\ In this context, the IPS algorithm can be interpreted as a sequence of interacting simulated annealing ([*abbreviated ISA*]{}) algorithms. As they involve a population of $N$ individuals, evolving according to genetic type processes (selection, mutation), ISA methods also belong to the broad class of evolutionary algorithms for global optimization. These algorithms consist in exploring a state space with a population associated with an evolution strategy, i.e. an evolution based on selection, mutation and crossover. See [@Goldberg] or [@Le-Riche] and the references therein for an overview. As these algorithms involve complex, possibly adaptive strategies, their analysis is essentially heuristic, or sometimes asymptotic (see [@Cerf] for general convergence results on genetic algorithms). The reader will also find in [@DM-M-Efini] a proof, for an ISA method, of the a.s.
convergence to the global minimum in the case of a finite state space, when the time $n$ tends to infinity, and as soon as the population size $N$ is larger than a critical constant that depends on the oscillations of the potential fitness functions and the mixing properties of the mutation transitions.\ The results of the previous sections apply to the analysis of ISA optimization methods. Our approach is non-asymptotic since we estimate at each time $n$, and for a fixed population size $N$, the distance between the theoretical Boltzmann-Gibbs measure $\eta_n$ and its empirical approximation $\eta_n^N$. In some sense, as $\beta_n \rightarrow + \infty$, $\eta_n$ tends to the Dirac measure $\delta_{x^{\star}}$ where $x^{\star} = \underset{x \in E}{\textmd{argmin}} \; V(x)$, as soon as there is a single global minimum. Intuitively, then, if this distance admits a uniform bound w.r.t. the time $n$, for large time horizons we have $\eta_n^N \simeq \delta_{x^{\star}} $.\ In this section we propose to turn the conditions of Theorem \[reg born\] and Theorem \[reg dec\] into conditions on the temperature schedule to use, and on the number of MCMC steps, that ensure a given degree of performance. Then we combine the concentration results of section \[section-result-unif\] with Lemma \[lemme-conc-Gibbs\] to analyze the convergence of the IPS optimization algorithm. An ISA Optimization Model {#section-Gibbs-tuning} ------------------------- We fix an inverse temperature schedule $\beta_n$ and we set - $\eta_n(dx) = \mu_{\beta_n}(dx)= \frac{1}{Z_{\beta_n}} e^{- \beta_n V(x)} m(dx)$; - $G_n(x) = e^{-\Delta_n .V(x)}$ ; - and then $g_n = e^{\Delta_n . \textmd{osc} (V)}$, where $\Delta_n = \beta_n - \beta_{n-1}$ are the increments of inverse temperature. We let $K_{\beta}$ be the simulated annealing Markov transition with invariant measure $\mu_{\beta}$ and a proposal kernel $K(x,dy)$ reversible w.r.t. $m(dx)$.
We recall that $K_{\beta}(x,dy)$ is given by the following formulae: $$\begin{array}{llll} K_{\beta}(x,dy)= K(x,dy). \min \left( 1 , e^{-\beta \left( V(y) - V(x) \right)} \right) & \forall y \neq x & \\ & &\\ K_{\beta}(x,\{ x \}) = 1 - \int_{y \neq x} K(x,dy). \min \left( 1 , e^{-\beta \left( V(y) - V(x) \right)} \right) \end{array}$$ Under the assumption $K^{k_0}(x,\cdot) \geq \delta \nu(\cdot)$ for any $x$ with some integer $k_0 \geq 1$, some measure $\nu$ and some $\delta >0$, one can show (see for instance [@Bartoli]) that $$\beta(K_{\beta}^{k_0}) \leq \left( 1- \delta e^{- \beta \overline{\Delta V}(k_0)} \right) \label{dob-MH}$$ where $\overline{\Delta V}(k_0)$ is the maximum potential gap one can obtain making $k_0$ elementary moves with the Markov transition $K$. One way to control the mixing properties of the ISA model is to consider the Markov transition $M_p = K_{\beta_p}^{{k_0}.{m_p}}$, the simulated annealing kernel iterated $k_0.m_p$ times. In this case, the user must choose two tuning parameters, namely the temperature schedule $\beta_p$ and the iteration numbers $m_p$. Note that for all $b \in (0,1)$, the condition $b_p \leq b$ is turned into $\left( 1 - \delta e^{- \beta_p \overline{\Delta V}(k_0)} \right)^{m_p} \leq b,$ which can also be rewritten as follows. $$m_p \geq \frac{\log(\frac{1}{b})e^{\overline{\Delta V}(k_0).\beta_p}}{\delta}$$ We now prove a technical lemma that will be used in the sequel. It deals with the concentration properties of Boltzmann-Gibbs measures. \[lemme-conc-Gibbs\] For any $\beta > 0$, and for all $0 < \varepsilon^{\prime} < \varepsilon$, the Boltzmann-Gibbs measure $\mu_{\beta}$ satisfies $$\mu_{\beta} \left( V \geq V_{\min} + \varepsilon \right) \leq \frac{e^{-\beta(\varepsilon - \varepsilon^{\prime}) }}{m_{\varepsilon^{\prime}}} ,$$ where $m_{\varepsilon^{\prime}} = m \left( V \leq V_{\min} + \varepsilon ^{\prime} \right) >0$.
The normalizing constant $Z_{\beta}$ of the definition (\[def-mes-Gibbs\]) is necessarily equal to $\underset{E}{\displaystyle{\int}} e^{-\beta V} dm $. Then we have $$\begin{aligned} \mu_{\beta} \left( V \geq V_{\min} + \varepsilon \right) & \; \; = \; \; \frac{ \underset{V \geq V_{\min} + \varepsilon}{\displaystyle{\int}} e^{-\beta V} dm }{ \underset{V \geq V_{\min} + \varepsilon}{\displaystyle{\int}} e^{-\beta V} dm + \underset{V < V_{\min} + \varepsilon}{\displaystyle{\int}} e^{-\beta V} dm } \\ & \; \; \leq \; \; \underbrace{ \left( \underset{V \geq V_{\min} + \varepsilon}{\displaystyle{\int}} e^{-\beta V} dm \right) }_{A_1} \; \underbrace{ \left( \underset{V < V_{\min} + \varepsilon}{\displaystyle{\int}} e^{-\beta V} dm \right)^{-1} }_{A^{-1}_2} .\end{aligned}$$ Firstly, it is clear that $\displaystyle{ A_1 \leq e^{- \beta (V_{\min} + \varepsilon )} }$. Secondly, $\varepsilon^{\prime} < \varepsilon$ implies $\{ V \leq V_{\min} + \varepsilon^{\prime} \} \subset \{ V < V_{\min} + \varepsilon \} $, so we have $$A_2 \; \geq \underset{V \leq V_{\min} + \varepsilon^{\prime}}{\displaystyle{\int}} e^{-\beta V} dm \; \; \geq \; \; m \left( V \leq V_{\min} + \varepsilon ^{\prime} \right) e^{- \beta (V_{\min} + \varepsilon^{\prime} )} .$$ We end the proof by making the appropriate substitutions. Combining Lemma \[lemme-conc-Gibbs\], the theorems of section \[section-result-unif\] (with indicator test function $f = \mathbf{1}_{\lbrace V \geq V_{\min} + \varepsilon \rbrace}$), and the Dobrushin ergodic coefficient estimate (\[dob-MH\]), we prove the following theorem: \[theo-optim-base\] Let us fix $a \in (0,1)$. For any $\varepsilon >0$, $n\geq 0$ and $N \geq 1$, let $p_n^N (\varepsilon)$ denote the proportion of particles $(\zeta_n^i)$ s.t. $ V (\zeta_n^i) \geq V_{\min} + \varepsilon$. We assume that the inverse temperature schedule $\beta_p$ and the iteration numbers $m_p$ satisfy one of these two conditions: 1.
$ \underset{p \geq 1}{\sup} \; \Delta_p \leq \Delta < \infty$ (e.g. linear temperature schedule) and $$\displaystyle{m_p \geq \frac{\log(\frac{e^{\Delta.\textmd{osc}(V)}+a}{a})e^{\overline{\Delta V}(k_0).\beta_p}}{\delta}} .$$ 2. $\Delta_p \downarrow 0$ (as $p \rightarrow \infty$) and $\displaystyle{ m_p \geq \left( \textmd{osc}(V). \Delta_p + \log (\frac{1}{a}) \right) \frac{e^{\overline{\Delta V}(k_0).\beta_p}}{\delta}}$ . In this situation, for any $\varepsilon >0$, $n\geq 0$, $N \geq 1$, $y \geq 0$, and $\varepsilon^{\prime} \in (0, \varepsilon)$, the probability of the event $$p_n^N (\varepsilon) \; \; \leq \; \; \frac{e^{-\beta_n(\varepsilon - \varepsilon^{\prime}) }}{m_{\varepsilon^{\prime}}} + \frac{r_i^{\star} N + r_j^{\star} y}{N^2}$$ is greater than $1-e^{-y}$, with $(i,j) = (1,2)$ (and $M=e^{\Delta}$) in the case of bounded $\Delta_{p}$, and $(i,j) = (3,4)$ in the second case. We distinguish two error terms. The first one, $\displaystyle{ \left( \frac{e^{-\beta_n(\varepsilon - \varepsilon^{\prime}) }}{m_{\varepsilon^{\prime}}} \right) }$, is related to the concentration of the Boltzmann-Gibbs measure around the set of global minima of $V$. The second one, $\displaystyle{ \left( \frac{r_i^{\star} N + r_j^{\star} y}{N^2} \right) }$, is related to the concentration of the occupation measure around the limiting Boltzmann-Gibbs measure. Besides the fact that Theorem \[theo-optim-base\] provides tuning strategies which ensure the performance of the ISA model, the last concentration inequality makes explicit the relative importance of the other parameters, including the probabilistic precision $y$, the threshold $t$ on the proportion of particles possibly out of the area of interest, the final inverse temperature $\beta_n$ and the population size $N$.
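To make the tuning conditions concrete, the following is a minimal numerical sketch of condition 1 of the theorem above, computing the iteration numbers $m_p$ along a linear inverse temperature schedule $\beta_p = p \Delta$; every numerical value below ($\delta$, $\overline{\Delta V}(k_0)$, $\textmd{osc}(V)$, $a$, $\Delta$) is an assumption chosen purely for illustration.

```python
import math

# Hedged sketch of condition 1: for a bounded-increment schedule, the
# iterated kernel M_p = K_{beta_p}^{k_0 m_p} meets the Dobrushin target
# as soon as m_p >= log((e^{Delta.osc(V)} + a)/a) * e^{dV_k0.beta_p}/delta.
# All numerical values below are illustrative assumptions.

delta = 0.1   # minorization constant: K^{k_0}(x, .) >= delta * nu(.)
dV_k0 = 1.0   # \overline{\Delta V}(k_0): max potential gap in k_0 moves
osc_V = 2.0   # oscillation osc(V) of the potential
a     = 0.5   # target bound on the Dobrushin ergodic coefficients
Delta = 0.05  # constant inverse temperature increment (beta_p = p * Delta)

def m_required(beta_p):
    """Smallest integer m_p fulfilling condition 1 at inverse temp beta_p."""
    return math.ceil(
        math.log((math.exp(Delta * osc_V) + a) / a)
        * math.exp(dV_k0 * beta_p) / delta
    )

steps = [m_required(p * Delta) for p in range(1, 11)]
# The required iteration count grows exponentially with beta_p, the
# discrete counterpart of the logarithmic cooling schedules of SA.
assert all(s1 <= s2 for s1, s2 in zip(steps, steps[1:]))
```

The exponential growth of `m_required` with $\beta_p$ is exactly why, in practice, the trade-off between the temperature schedule and the MCMC effort discussed next matters.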
A simple equation, deduced from this last theorem, such as $\displaystyle{ \left( \frac{e^{-\beta_n(\varepsilon - \varepsilon^{\prime}) }}{m_{\varepsilon^{\prime}}} = \frac{r_i^{\star} N + r_j^{\star} y}{N^2} = \frac{t}{2} \right) } $ may be applied to the global tuning of an ISA model, which is generally a difficult task.\ One natural way to choose $\Delta_p$ is to look at the number $n_1$ of iterations needed to move from $\beta_p$ to $\beta_n$, and to compare it to $n_2$, the number of iterations needed to move from $\beta_p$ to $\beta_q$, and then from $\beta_q$ to $\beta_n$, with $\beta_p < \beta_q < \beta_n$. Roughly speaking, we have seen that the convergence condition was $b_p \leq \frac{a}{g_p}$, so that - $n_1 \simeq \left( osc(V).\Delta_{p,n}+ \log (\frac{1}{a}) \right) \frac{e^{\overline{\Delta V}(k_0).\beta_n}}{\delta}$ - $n_2 \simeq \left( osc(V).\Delta_{p,q}+ \log (\frac{1}{a}) \right) \frac{e^{\overline{\Delta V}(k_0).\beta_q}}{\delta} + \left( osc(V). \Delta_{q,n}+ \log (\frac{1}{a}) \right) \frac{e^{\overline{\Delta V}(k_0).\beta_n}}{\delta}$  \ where $\Delta_{p,q} := \beta_q - \beta_p$ for $p < q$. After some approximations we find that $$n_1 \leq n_2 \Longleftrightarrow \Delta_{p,q} \Delta_{q,n} \geq \frac{\log (\frac{1}{a} )}{osc(V). \overline{\Delta V}(k_0)}.$$ This condition doesn’t bring any relevant information in the case where $\Delta_p \longrightarrow 0$, except that the error decomposition $\eta_n^N-\eta_n = \sum_{p=0}^{n} \phi_{p,n}(\eta_p^N)- \phi_{p,n} \phi_{p}(\eta_{p-1}^N)$, underlying our analysis, is not adapted to the case where $\Delta_p \longrightarrow 0$ (which can be compared to the continuous time case). Nevertheless, this condition is interesting in the case of constant inverse temperature steps.
In this situation, the critical parameter $\Delta_{\beta}$ is given by $$\Delta_{\beta} = \sqrt{ \frac{\log (\frac{1}{a} )}{osc(V).\overline{\Delta V}(k_0)} } .$$ More precisely, above $\Delta_{\beta}$ the algorithm needs to run too many MCMC steps to stabilize the system. Conversely, when the variation of temperature is too small, it is difficult to reach the desired target measure. An Adaptive Temperature Schedule in ISA {#section-adapt} ======================================= As we already mentioned in the introduction, the theoretical tuning strategies developed in section \[section-Gibbs-tuning\] are of the same order as the logarithmic cooling schedule of traditional simulated annealing ([@Bartoli; @Cerf; @DM-M-Efini]). In contrast to SA models, we emphasize that the performance of the ISA models is not based on a critical initial temperature parameter. Another advantage of the ISA algorithm is to provide at any time step an $N$-approximation of the target measure at a given temperature. In other words, the population distribution reflects the probability mass distribution of the Boltzmann-Gibbs measure at that time. Computationally speaking, the change of temperature parameter $\Delta_p$ plays an important role. For instance, if $\Delta_p$ is taken too large, the selection process is dominated by a minority of well fitted particles and the vast majority of the particles are killed. The particle set’s diversity, which is one of the main advantages of the ISA method, is then lost. On the contrary, if $\Delta_p$ is taken too small, the algorithm doesn’t perform an appropriate selection. It wastes time by sampling from MCMC dynamics while the set of particles has already reached its equilibrium. The crucial point is to find a relevant balance between maintaining diversity and avoiding useless MCMC operations.\ Designing such a balance in advance is almost as hard as knowing the function $V$ in advance.
Therefore, it is natural to implement adaptive strategies that depend on the variability and the adaptation of the population particles (see for instance [@Jasra; @Schafer], [@Clapp; @Deutscher; @Minvielle] for related applications). In the general field of evolutionary algorithms, elaborating adaptive selection strategies is a crucial question (see, e.g., [@Baker]) and a challenging problem in the design of efficient algorithms. In the case of ISA methods, the common ways to choose $\Delta_p$ are based on simple criteria such as the expected number of particles killed (see section \[description-algo-adapt\]), or the variance of the weights (Effective Sample Size). All of these criteria are based on the same intuitive idea: achieving a reasonable amount of selection. As a result, all of these adaptive ISA models tend to perform similarly.\ In [@DM-D-J-adapt], the reader will find a general formalization of adaptive IPS algorithms. The idea is to define the adaptation as the choice of the times $n$ at which the resampling occurs. These times are chosen according to some adaptive criteria, depending on the current particle set, or more generally on the past process. Under weak conditions on the criteria, it is shown how the adaptive process asymptotically converges to a static process involving deterministic interaction times when the population size tends to infinity. A functional central limit theorem is then obtained for a large class of adaptive IPS algorithms.\ \ The approach developed in the following section is radically different, with a special focus on non-asymptotic convergence results for the ISA algorithm defined in section \[description-algo-adapt\]. The adaptation here consists in choosing the $\beta$ increment ${\Delta}_{n+1}^N$ so that $$\eta_n^N (e^{-{\Delta}_{n+1}^N \cdot V}) = \varepsilon$$ where $\varepsilon > 0$ is a given constant, at each iteration $n$.
We show that the associated stochastic process can be interpreted as a perturbation of the limiting FK flow. Feynman-Kac Representation -------------------------- Let $V \in \mathcal{B}(E)$. To simplify the analysis, without any loss of generality, we assume $V_{\min}=0$. Let us fix $\varepsilon > 0$. For any measure $\mu \in \mathcal{P}(E)$, we define the function $\lambda_{\mu}$ by $$\begin{array}{rccc} & [0,+\infty) & \longrightarrow & (0,1] \\ \lambda_{\mu} = & x & \mapsto & \mu \left( e^{-x \cdot V} \right) \\ \end{array}$$ This function is clearly decreasing (with $\lambda_{\mu} (0) = 1$), convex, and infinitely differentiable. Moreover, if $\mu \left( \{ V=0 \} \right) =0$, then it satisfies $\lambda_{\mu}(x) \longrightarrow 0$ when $x \to +\infty$. Therefore, we can define its inverse function $\kappa_{\mu}$: $$\begin{array}{rccl} & (0,1] & \longrightarrow & \; [0,+\infty) \\ \kappa_{\mu} = & \varepsilon & \mapsto & \; x \; \; \textmd{so that} \; \mu \left( e^{-x \cdot V} \right) = \varepsilon \\ \end{array}$$ This function is again decreasing, convex, infinitely differentiable, takes the value $0$ at $\varepsilon=1$, and it satisfies $\kappa_{\mu}(\varepsilon)\longrightarrow + \infty$ when $\varepsilon \to 0^+$.\ Now, we let $m$ be a reference measure on $E$ s.t. $m \left( \{ V=0 \} \right) =0$. We consider the sequence $(\beta_n)_n$ and its associated Gibbs measures $\eta_n = \mu_{\beta_n} \propto e^{-\beta_n V}. m$, defined recursively by the equation $${\Delta}_{n+1} := (\beta_{n+1} - \beta_{n}) = \kappa_{\eta_n}(\varepsilon) . \label{increment-theo}$$ Equivalently, we have $$\lambda_{\eta_n}({\Delta}_{n+1}) = \varepsilon \quad \textmd{or} \quad \eta_n (e^{-{\Delta}_{n+1} \cdot V}) = \varepsilon.$$ The main objective of this section is to approximate these target measures.
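For an empirical measure, the increment $\kappa_{\mu}(\varepsilon)$ can be evaluated numerically by exploiting the monotonicity of $\lambda_{\mu}$. The following is a minimal sketch (the potential values sampled below and the choice $\varepsilon = 1/2$ are illustrative assumptions) solving $\lambda_{\mu}(\Delta) = \varepsilon$ by dichotomy.

```python
import math
import random

# Sketch: evaluate kappa_mu(eps) for an empirical measure
# mu = (1/N) sum_i delta_{x_i}, i.e. find Delta >= 0 such that
# (1/N) sum_i exp(-Delta * V(x_i)) = eps, by bisection (dichotomy).
# The sampled potential values below are an illustrative assumption.

def kappa_empirical(V_values, eps, tol=1e-10):
    # lam(Delta) = mean exp(-Delta * v) decreases from 1 towards the
    # mass of {V = 0} (here 0), so a unique root exists for eps in (0, 1).
    lam = lambda d: sum(math.exp(-d * v) for v in V_values) / len(V_values)
    lo, hi = 0.0, 1.0
    while lam(hi) > eps:           # grow the bracket until lam(hi) <= eps
        hi *= 2.0
    while hi - lo > tol:           # plain bisection on [lo, hi]
        mid = 0.5 * (lo + hi)
        if lam(mid) > eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(0)
V_values = [random.uniform(0.1, 2.0) for _ in range(1000)]  # V_min ~ 0
delta_next = kappa_empirical(V_values, eps=0.5)
# Verify the defining equation lambda(Delta) = eps.
lam_at = sum(math.exp(-delta_next * v) for v in V_values) / len(V_values)
assert abs(lam_at - 0.5) < 1e-6
```

Since $\lambda_{\mu}$ is strictly decreasing and continuous, bisection converges unconditionally; about $\log_2(1/\mathrm{tol})$ evaluations of $\lambda_{\mu}$ suffice.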
Formally speaking, $(\eta_n)$ admits the FK structure described in section \[section-Gibbs-motiv\], with potential functions $G_n(x) = e^{-\Delta_n .V(x)}$, and some dedicated MCMC Markov kernels $M_n$. We let $g_n$ and $b_n$ denote the associated oscillation quantities and Dobrushin ergodic coefficients, and $\phi_n$ the corresponding FK transformations.\ \ Solving the equation $\displaystyle{ \eta_n (e^{-{\Delta}_{n+1} \cdot V}) = \varepsilon }$ can be interpreted as a way to impose some kind of theoretical regularity on the FK flow. Indeed, according to the formula (\[defgamma1\]) (and definition (\[def-mes-Gibbs\]) ), it is equivalent to finding $\Delta_{n+1}$ s.t. $$\frac{\gamma_{n+1} (1) }{\gamma_n(1)} = \varepsilon \; \; \left( = \frac{Z_{\beta_n + \Delta_{n+1} }}{Z_{\beta_n}} \right).$$ In other words, the sequence $\Delta_n$ is defined so that the normalizing constants $\gamma_n(1)$ evolve geometrically, with the constant ratio $\gamma_{n+1} (1) / \gamma_n(1) = \varepsilon$. Notice that these increments $\Delta_n$ are only theoretical, and the corresponding potential functions $G_n$ are not explicitly known. An Adaptive Interacting Particle Model {#description-algo-adapt} -------------------------------------- As in the classic IPS algorithm, we approximate the measures $\eta_n$ by simulating an interacting particle system $(\zeta_n)_n = \left( \zeta_n^1, \ldots, \zeta_n^N \right)_n$ of size $N$ so that $$\eta^N_{n}=\frac{1}{N}\sum_{1\leq i\leq N}\delta_{\zeta^i_n}\rightarrow_{N\uparrow\infty}\eta_n.$$ We start with $N$ independent samples from $\eta_0$ and then alternate selection and mutation steps, as described in section \[algo-classique\]. As we mentioned above, in contrast to the classic IPS model, the potential function $G_{n+1}$ arising in the selection is not known.
The selection step then starts by calculating the empirical increment $\Delta_{n+1}^N$ defined by $${\Delta}_{n+1}^N := \kappa_{\eta_n^N}(\varepsilon)$$ or $\lambda_{\eta_n^N}({\Delta}_{n+1}^N) = \eta_n^N (e^{-{\Delta}_{n+1}^N \cdot V}) = \varepsilon $. As the quantity $\displaystyle{\eta_n^N (e^{-{\Delta} \cdot V}) = \frac{1}{N} \sum_{1 \leq i \leq N} e^{-\Delta . V(\zeta_n^i)} } $ is easy to calculate for all $\Delta \geq 0$, one can compute $\Delta_{n+1}^N$ by, e.g., performing a dichotomy (bisection) algorithm. If we consider the stochastic potential functions $$G_{n+1}^N = e^{-{\Delta}_{n+1}^N .V}$$ then every particle $\zeta^i_{n}$ evolves to a new particle $\widehat{\zeta}^i_{n}$ randomly chosen with the following stochastic selection transition $$S_{n+1,\eta^N_{n}}^N( \zeta^i_{n},dx) :=G_{n+1}^N( \zeta^i_{n})~\delta_{ \zeta^i_{n}}(dx)+ \left(1-G_{n+1}^N( \zeta^i_{n})\right)~ \Psi_{G_{n+1}^N}(\eta^N_n)(dx) .$$ In the above formula, $\Psi_{G_{n+1}^N}(\eta^N_n)$ stands for the updated measure defined by $$\Psi_{G_{n+1}^N}(\eta^N_n)=\sum_{j=1}^N\frac{G_{n+1}^N( \zeta^j_{n})}{\sum_{k=1}^N G_{n+1}^N( \zeta^k_{n})} \delta_{ \zeta^j_{n}} .$$ Note that $V_{\min}=0$ ensures $0 < G_{n+1}^N \leq 1$. The mutation step consists in performing Markov transitions $M_{n+1}^N(\widehat{\zeta}^i_{n},\cdot)$, defined as $M_{n+1}(\widehat{\zeta}^i_{n},\cdot)$ by replacing $\beta_{n+1}$ by $\beta_{n+1}^N = \beta_n^N + \Delta_{n+1}^N$. Thus, conditionally on the previous particle set $\zeta_n$, the new population of particles $\zeta_{n+1}$ is sampled from the distribution $$\begin{aligned} & \mbox{\rm Law} \left( \zeta_{n+1}^1,...,\zeta_{n+1}^N \mid \zeta_{n}^1,...,\zeta_{n}^N \right) \nonumber \\ &=\; \; \left( \delta_{\zeta_{n}^1}.S^N_{n+1,\eta_{n}^N}.M_{n+1}^N \right) \otimes \cdots \otimes \left( \delta_{\zeta_{n}^N}.S^N_{n+1,\eta_{n}^N}.M_{n+1}^N \right) .
\label{loi-algo-adapt} \end{aligned}$$ The definition of $\Delta_{n+1}^N$ is to be interpreted as the natural approximation of the theoretical relation (\[increment-theo\]). On the other hand, it admits a purely algorithmic interpretation. As a matter of fact, conditionally on the $n$-th generation of particles $ \left( \zeta_n^1, \ldots, \zeta_n^N \right)$, the probability for any particle $ \zeta_n^i $ to be accepted, i.e. not affected by the recycling mechanism, is given by $G_{n+1}^N(\zeta_n^i) = e^{-\Delta_{n+1}^N . V(\zeta_n^i)}$. Then, the expectation of the number of accepted particles is given by $\sum_i e^{-\Delta_{n+1}^N . V(\zeta_n^i)}$. But it turns out that this quantity is exactly $N \times \eta_n^N (e^{-{\Delta}^{N}_{n+1} \cdot V})$, which is equal to $N . \varepsilon$ by definition of $\Delta_{n+1}^N$. Therefore, $\varepsilon$ is an approximation of the proportion of particles which remain in place during the selection step. In other words, at each generation $n$, the increment $\Delta_{n+1}^N$ is chosen so that the selection step kills on average $(1-\varepsilon).N$ particles. This type of tuning parameter is very important in practice to avoid degenerate behaviours. A Perturbation Analysis ----------------------- This section is mainly concerned with the convergence analysis of a simplified adaptive model. More precisely, we only consider the situation where the mutation transition in (\[loi-algo-adapt\]) is given by the limiting transition $M_{n+1}$. The analysis of the adaptive model (\[loi-algo-adapt\]) is much more involved, and our approach doesn’t apply directly to the study of the convergence of this model.\ Despite the adaptation, the sequence $\eta_n^N$ can be analyzed as a random perturbation of the theoretical sequence $\eta_n$. Let us fix $n$ and a population state $\zeta_n$ at time $n$.
If $\phi_{n+1}^N$ denotes the FK transformation associated with potential $G_{n+1}^N$ and kernel $M_{n+1}$, then, by construction, the measure $\eta_{n+1}^N$ is close to $\phi_{n+1}^N(\eta_{n}^N)$. In particular, by the Khintchine-type inequalities presented in [@DM-FK] we have $$\label{MZ-cond} \forall f \in \mathcal{B}_1 (E), \quad \mathbb{E} \left( \left| \eta_{n+1}^N (f) - \phi_{n+1}^N(\eta_{n}^N) (f) \right|^p \mid \zeta_n \right)^{1/p} \leq \frac{B_p}{\sqrt{N}},$$ with the constants $B_p$ introduced in (\[def-Bp\]). A simple but important remark about the Boltzmann-Gibbs transformations is that for any measure $\mu$ and any positive functions $G$ and $\tilde{G}$ we have $$\psi_{\tilde{G}} (\mu) \; = \; \psi_G \left( \psi_{\frac{\tilde{G}}{G}} (\mu) \right).$$ Therefore, if we take $\displaystyle{H_{n+1}^N:= \frac{G^N_{n+1}}{G_{n+1}} }$, then the perturbed transformation $\phi_{n+1}^N$ can be written in terms of the theoretical one $\phi_{n+1}$ by $$\label{mes-virt} \phi_{n+1}^N = \phi_{n+1} \circ \psi_{H_{n+1}^N}.$$ If we use an inductive approach, we face the following problem. Let $\eta$ be a deterministic measure ($\eta_n$ in our analysis) and $\hat{\eta}$ a random measure ($\eta_n^N$ in our analysis), close to $\eta$ under the $d_p$ distance (induction hypothesis). We also consider a Markov kernel $M$ and the potential functions $$\label{def-G-etc} G = e^{-\kappa_{\eta}(\varepsilon).V} , \quad \hat{G} = e^{-\kappa_{\hat{\eta}}(\varepsilon).V}, \quad \hat{H} = \frac{\hat{G}}{G}$$ and we let $\phi$ (respectively $\hat{\phi}$) be the FK transformation associated with the potential function $G$ (respectively $\hat{G}$). The question is now: how can we estimate $d_p(\phi(\eta),\hat{\phi}(\hat{\eta}))$ in terms of $d_p(\eta,\hat{\eta})$?\ To answer this question, we proceed with a two-step estimation. Firstly we estimate the distance between $\hat{\eta}$ and $\psi_{\hat{H}}(\hat{\eta}) $ (Lemma \[lemme-fonda-adapt\]).
Secondly we analyze the stability properties of the transformation $\phi$ (Lemma \[lemme-phi-Lp\]). This strategy is summarized by the following diagram. $$\begin{array}{ccc} \eta & \underset{\phi}{-\!\!\!\!-\!\!\!\!-\!\!\!\!-\!\!\!\!-\!\!\!\!-\!\!\!\!-\!\!\!\!-\!\!\!\!-\!\!\!\!\longrightarrow} & \phi(\eta) \\ \hat{\eta} & & \\ \downarrow & & \\ \psi_{\hat{H}}(\hat{\eta}) & \underset{\phi}{-\!\!\!\!-\!\!\!\!-\!\!\!\!-\!\!\!\!-\!\!\!\!-\!\!\!\!-\!\!\!\!-\!\!\!\!-\!\!\!\!\longrightarrow} & \hat{\phi}(\hat{\eta}) \end{array}$$ \[lemme-fonda-adapt\] Let $\eta \in \mathcal{P}(E)$, $\hat{\eta} \in \mathcal{P}_{\Omega}(E)$ and let $G$, $\hat{G}$, $\hat{H}$ be the positive functions on $E$ defined by equations (\[def-G-etc\]). If $\eta( \{V=0 \}) = \hat{\eta}( \{V=0 \}) = 0$ (a.s.), then for all $\varepsilon > 0$, we have $$d_p \left( \psi_{\hat{H}}(\hat{\eta}), \eta \right) \leq \frac{V_{\max} \cdot e^{\kappa_{\eta}(\varepsilon).V_{\max} } }{\eta(V)} \cdot d_p \left( \hat{\eta}, \eta \right) .$$ We simplify the notation by setting $x:=\kappa_{\eta}(\varepsilon)$ and $\hat{x} := \kappa_{\hat{\eta}}(\varepsilon)$.\ We start with the following observation: $$\label{prem-decomp} \psi_{\hat{H}} (\hat{\eta}) (f) - \hat{\eta} (f) = \frac{\hat{\eta} (\hat{H}.f )}{\hat{\eta}(\hat{H})} - \hat{\eta}(f) = \underbrace{\frac{1}{\hat{\eta}(\hat{H})}}_{A_1} \underbrace{\hat{\eta} \left[ \left(\hat{H} - \hat{\eta}(\hat{H}) \right).f \right]}_{A_2}$$ for any $f \in \mathcal{B}(E)$. We notice that $\hat{H} = \hat{G}/G = e^{(x-\hat{x}) \cdot V}$, which leads to the lower bound $\hat{\eta}(\hat{H}) = \hat{\eta} \left( e^{(x-\hat{x}) \cdot V} \right) \geq \hat{\eta} \left( e^{-\hat{x} \cdot V} \right) = \varepsilon$. The last equality comes from the definition of $\hat{x}$. This proves $|A_1| \leq \varepsilon^{-1}$.
On the other hand, we have $ osc(\hat{H}) = \left| e^{(x-\hat{x}) \cdot V_{\max}} - 1 \right| $, so that $$\begin{aligned} \left| A_2 \right| \leq \hat{\eta} \left( \left| \hat{H} - \hat{\eta}(\hat{H}) \right| \right) \cdot \left\| f \right\|_{\infty} & \leq osc(\hat{H}) \cdot \left\| f \right\|_{\infty} \nonumber \\ & \leq \left| e^{(x-\hat{x}) \cdot V_{\max}} - 1 \right| \cdot \left\| f \right\|_{\infty} . \label{apparition-u}\end{aligned}$$ The quantity $\displaystyle{ \hat{u} := \left( e^{(x-\hat{x}) \cdot V_{\max}} - 1 \right) }$ is intuitively small. Next, we provide an estimate in terms of the functions $\lambda_{\eta}$ and $\lambda_{\hat{\eta}}$. Given $\omega \in \Omega$, if $x \geq \hat{x}$, then we can write $$( \underbrace{\lambda_{\eta}(x)}_{=\varepsilon=\lambda_{\hat{\eta}}(\hat{x})} - \lambda_{\hat{\eta}}(x) ) = \left( \lambda_{\hat{\eta}}(\hat{x}) - \lambda_{\hat{\eta}}(x) \right) = \int_{\hat{x}}^{x} -\lambda^{'}_{\hat{\eta}} (s) ds .$$ Furthermore, for all $\mu \in \mathcal{P}(E)$ and $s \geq 0$, we have $$\begin{array}{rcl} -\lambda_{\mu}^{'}(s) = \mu \left( V \cdot e^{-sV} \right) & \geq & \mu \left( V \cdot e^{-s V_{\max}} \right) \\ & \geq & \mu(V) \cdot e^{-s V_{\max}}.
\end{array}$$ Then, we have $$\begin{aligned} \left( \lambda_{\eta}(x) - \lambda_{\hat{\eta}}(x) \right) & \geq \hat{\eta}(V) \frac{-1}{V_{\max}} \left[ e^{-s V_{\max}} \right]_{\hat{x}}^{x} \\ & = \hat{\eta}(V) \frac{e^{-x V_{\max}} }{V_{\max}} \left( e^{(x-\hat{x})V_{\max}} - 1 \right) \\ & = \hat{\eta}(V) \frac{e^{-x V_{\max}} }{V_{\max}} \hat{u} .\end{aligned}$$ By symmetry, we have $$x \leq \hat{x} \; \Longrightarrow \; \left( \lambda_{\hat{\eta}}(x) - \lambda_{\eta}(x) \right) \geq \hat{\eta}(V) \frac{e^{-x V_{\max}} }{V_{\max}} (-\hat{u}) .$$ This yields the almost sure upper bound $$\left| \hat{u} \right| \cdot \hat{\eta}(V) \frac{e^{-x V_{\max}} }{V_{\max}} \leq \left| \lambda_{\eta}(x) - \lambda_{\hat{\eta}}(x) \right| .$$ Using the decomposition $\hat{\eta}(V) = \eta(V) + (\hat{\eta}(V) - \eta(V) )$, by simple manipulation, we prove that $$\left| \hat{u} \right| \leq \frac{V_{\max} e^{x V_{\max}} }{\eta(V)} \underbrace{\left| \lambda_{\eta}(x) - \lambda_{\hat{\eta}}(x) \right|}_{A_3} + \underbrace{\frac{\left| \hat{\eta}(V) - \eta(V) \right|}{\eta(V)}}_{A_4} \cdot \underbrace{\left| \hat{u} \right|}_{A_5}$$ Considering the $L^p$ norm of the right-hand side of this inequality, one can check that - $A_3 = \left| (\eta - \hat{\eta}) \left( e^{-x \cdot V} \right) \right| $ so, as $osc \left( e^{-x \cdot V} \right) \leq 1$, $\Vert A_3 \Vert_p \leq d_p(\hat{\eta},\eta)$; - as $osc(V) = V_{\max}$, $\Vert A_4 \Vert_p \leq \frac{V_{\max}}{\eta(V)} \cdot d_p(\hat{\eta},\eta)$; - $A_5 = \left| \hat{u} \right| = e^{x V_{\max}} \left| e^{-\hat{x} V_{\max}} - e^{-x V_{\max}} \right| \leq e^{x V_{\max}} $.
Making the appropriate substitutions, we have $$\Vert \hat{u} \Vert_p \leq \frac{2 V_{\max} e^{x \cdot V_{\max}} }{\eta(V)} \cdot d_p(\hat{\eta},\eta).$$ Combining this result with (\[prem-decomp\]) and (\[apparition-u\]), we check that $$\left\Vert \psi_{\hat{H}} (\hat{\eta}) (f) - \hat{\eta} (f) \right\Vert_p \leq \frac{2 V_{\max} e^{x V_{\max}} }{\varepsilon \cdot \eta(V)} \cdot d_p(\hat{\eta},\eta) \cdot \left\| f \right\|_{\infty}$$ We finally go from $\left\| f \right\|_{\infty}$ to $\frac{osc(f)}{2}$ by noticing that $\psi_{\hat{H}} (\hat{\eta}) (f) - \hat{\eta} (f)$ is equal to $0$ for any constant function $f$, and by considering the above inequality taken for $\tilde{f}=f-\frac{f_{\max}+f_{min}}{2}$, which satisfies $\left\| \tilde{f} \right\|_{\infty} = \frac{osc(f)}{2}$.\ This ends the proof of the lemma. \[lemme-phi-Lp\] Let $\eta \in \mathcal{P}(E)$, $\hat{\eta} \in \mathcal{P}_{\Omega}(E)$, and let $\phi$ be a FK transformation associated with a positive function $G$ and a Markov kernel $M$. If we set $\displaystyle{ g:= \underset{x,y \in E}{\sup} G(x) / G(y) }$ and $b := \beta(M)$, then we have $$d_p \left( \phi(\hat{\eta}), \phi(\eta) \right) \leq g \cdot b \cdot d_p \left( \hat{\eta}, \eta \right) .$$ Let us fix $f \in \mathcal{B}(E)$. We have $$\begin{aligned} \phi(\hat{\eta}) (f) - \phi(\eta) (f) & = \frac{\hat{\eta}(G \times M . f)}{\hat{\eta}(G)} - \phi(\eta) (f) \\ & = \frac{\hat{\eta} \left( G \times \left[ M. \left( f-\phi(\eta) (f) \right) \right] \right)}{\hat{\eta} \left( G \right)} . \end{aligned}$$ Let $\displaystyle{ \tilde{f} = M . \left( f-\phi(\eta) (f) \right) } $. By property (\[prop-Dob\]), $\tilde{f}$ satisfies $osc(\tilde{f}) = osc(M.f) \leq b \cdot osc(f)$.\ Additionally we have $\eta(G \times \tilde{f}) = \eta(G \times M.f) - \eta \left( G \times \frac{\eta(G \times M.f)}{\eta(G)} \right) = 0 $. 
So we obtain $$\begin{aligned} \phi(\hat{\eta}) (f) - \phi(\eta) (f) = \frac{\hat{\eta}(G \times \tilde{f}) }{\hat{\eta}(G)} - \underbrace{\frac{\eta(G \times \tilde{f}) }{\hat{\eta}(G)}}_{=0} & = \frac{1}{\hat{\eta}(G)} (\hat{\eta} - \eta) (G \times \tilde{f}) \\ & = \frac{G_{\max}}{\hat{\eta}(G)} (\hat{\eta} - \eta) (\frac{G}{G_{\max}} \times \tilde{f}).\end{aligned}$$ Firstly, we notice that $\displaystyle{ \left| \frac{G_{\max}}{\hat{\eta}(G)} \right| \leq \frac{G_{\max}}{G_{\min}} \leq g }$. On the other hand, we notice that $\tilde{f}$ can be rewritten as $$\tilde{f} = M. \left( f - \psi_G(\eta)(M.f) \right) = (M.f) - \psi_G(\eta)(M.f).$$ It follows that $ \tilde{f}_{\max} \geq 0$ and $ \tilde{f}_{\min} \leq 0$. Under these conditions, $osc \left( \frac{G}{G_{\max}} \times \tilde{f} \right) \leq osc(\tilde{f}) \leq b \cdot osc(f) $. We conclude that $$\begin{aligned} \left( E \left| \frac{G_{\max}}{\hat{\eta}(G)} (\hat{\eta} - \eta) \left( \frac{G}{G_{\max}} \times \tilde{f} \right) \right|^p \right)^{1/p} & \leq g \cdot E \left[ E \left| (\hat{\eta} - \eta) \left( \frac{G}{G_{\max}} \times \tilde{f} \right) \right|^p \right]^{1/p} \\ & \leq g \cdot b \cdot d_p \left( \hat{\eta}, \eta \right) \cdot osc(f) .\end{aligned}$$ This ends the proof of the lemma. Non-Asymptotic Convergence Results {#gros-result-adapt} ---------------------------------- This section is mainly concerned with the proof of Theorem \[theo-statement-adapt\] stated on page . We also deduce some concentration inequalities of the ISA adaptive model. We start with the proof of Theorem \[theo-statement-adapt\].\ \ [*Proof of Theorem \[theo-statement-adapt\]:*]{}\ We fix $p \geq 1$ and we let $ \tilde{e}_n = \sum_{k=0}^n \prod_{i=k+1}^n b_i g_i (1+c_i) $. We notice that this sequence can also be defined with the recurrence relation $\tilde{e}_{n+1} = 1+g_{n+1} b_{n+1}(1+c_{n+1}) \cdot \tilde{e}_{n}$ starting at $\tilde{e}_0 = 1$. We also consider the following parameter. 
$$e_n := \frac{\sqrt{N}}{2 B_p} \cdot \underset{f \in \mathcal{O}_1 (E)}{\sup} \left\Vert \eta_n^N (f) - \eta_n (f) \right\Vert_p$$ We use an inductive proof to check that the proposition $ {\textbf{IH}(n) = \{ e_n \leq \tilde{e}_n \} }$ is met at any rank $n$. As $\eta_0^N$ is obtained with $N$ independent samples from $\eta_0$, $\textbf{IH}(0)$ is given by Khintchine’s inequality. Now suppose that $\textbf{IH}(n)$ is satisfied.\ According to the identity (\[mes-virt\]), we can write the following decomposition. $$\begin{aligned} \eta_{n+1}^N - \eta_{n+1} & = \left( \eta_{n+1}^N - \phi_{n+1}^N (\eta_{n}^N) \right) + \left(\phi_{n+1}^N (\eta_{n}^N) - \eta_{n+1} \right) \nonumber \\ & = \underbrace{\left( \eta_{n+1}^N - \phi_{n+1}^N (\eta_{n}^N) \right)}_{A_1} + \underbrace{\left( \phi_{n+1} \left( \psi_{H_{n+1}^N} (\eta_{n}^N) \right) - \phi_{n+1}(\eta_{n}) \right)}_{A_2} \label{decompAA}\end{aligned}$$ Given $\zeta_n$, and using (\[MZ-cond\]), we know that for every function $f \in \mathcal{O}_1 (E)$, we have $ \displaystyle{ \frac{\sqrt{N}}{2 B_p} \left\Vert A_1 (f) \right\Vert_p \leq 1 } $. To estimate $A_2$, we start by decomposing $ \left( \psi_{H_{n+1}^N} (\eta_{n}^N) - \eta_{n} \right)$ in this way: $$\psi_{H_{n+1}^N} (\eta_{n}^N) - \eta_{n} = \underbrace{ \left( \psi_{H_{n+1}^N} (\eta_{n}^N) - \eta_{n}^N \right) }_{Q_1} + \underbrace{\left( \eta_{n}^N - \eta_{n} \right)}_{Q_2} .$$ By the induction hypothesis, we have $\frac{\sqrt{N}}{2 B_p} \cdot \underset{f \in \mathcal{O}_1 (E)}{\sup} \left\Vert Q_2 (f) \right\Vert_p \leq \tilde{e}_{n}$.
Therefore, by Lemma \[lemme-fonda-adapt\], we find that $$\frac{\sqrt{N}}{2 B_p} \cdot \underset{f \in \mathcal{O}_1 (E)}{\sup} \left\Vert Q_1 (f) \right\Vert_p \leq c_{n+1} \cdot \tilde{e}_{n} .$$ Thus the measures $\psi_{H_{n+1}^N} (\eta_{n}^N)$ and $\eta_{n}$ satisfy $$\frac{\sqrt{N}}{2 B_p} \cdot \underset{f \in \mathcal{O}_1 (E)}{\sup} \left\Vert \psi_{H_{n+1}^N} (\eta_{n}^N) (f) - \eta_{n} (f) \right\Vert_p \leq (1+c_{n+1}) \cdot \tilde{e}_{n} .$$ Applying Lemma \[lemme-phi-Lp\], we also have $$\frac{\sqrt{N}}{2 B_p} \cdot \underset{f \in \mathcal{O}_1 (E)}{\sup} \left\Vert A_2 (f) \right\Vert_p \leq g_{n+1} b_{n+1} (1+c_{n+1}) \cdot \tilde{e}_{n} .$$ Returning to (\[decompAA\]), we conclude that $ e_{n+1} \leq 1+ g_{n+1} b_{n+1} (1+c_{n+1}) \cdot \tilde{e}_{n} = \tilde{e}_{n+1}$.\ This ends the proof of the theorem. $\mathbin{\vbox{\hrule\hbox{\vrule height1.4ex \kern0.6em\vrule height1.4ex}\hrule}}$\ \ We are now in a position to obtain a sufficient condition for uniform concentration and $L^p$-mean error bounds w.r.t. time. If the condition $b_n g_n (1+c_n) \leq a$ is satisfied for some $a < 1$ and any $n$, then we have the uniform error bounds $$\label{Lp-unif-adapt} d_p \left( \eta_n^N , \eta_n \right) \leq \frac{B_p}{2(1-a)\sqrt{N}} $$ for any $p$, with the constants $B_p$ introduced in (\[def-Bp\]). In addition, for any $f \in \mathcal{B}_1(E)$, we have the following concentration inequalities: $$\label{conc-adapt-un} \forall s \geq 0, \quad \mathbb{P} \left( \vert \eta_n^N(f) - \eta_n(f) \vert \geq s \right) \leq r_1 (\sqrt{N}.s) e^{-r_2 N s^2}$$ and $$\label{conc-adapt-deux} \forall y \geq 1, \quad \mathbb{P} \left( \vert \eta_n^N(f) - \eta_n(f) \vert \geq \frac{r(1+\sqrt{y})}{\sqrt{N}} \right) \leq e^{-y} ,$$ with the parameters $$\left\{ \begin{array}{lll} r_1&=& \displaystyle{ e^{1/2} (1-a) } \\ r_2&=& \displaystyle{ \frac{1}{2} (1-a)^2 } \\ r &=& \displaystyle{ \frac{2}{1-a} } .
\end{array} \right.$$ The inequality (\[Lp-unif-adapt\]) is a direct consequence of Theorem \[theo-statement-adapt\].\ Let us fix $n$, $f \in \mathcal{B}_1(E)$ and set $$X := \vert \eta_n^N(f) - \eta_n(f) \vert \quad \textmd{and} \quad \epsilon_N := \frac{1}{(1-a) \sqrt{N}} .$$ In this notation, we have $\displaystyle{ \left\Vert X \right\Vert_p \leq B_p \cdot \epsilon_N } $ for any $p \geq 1$. Let us fix $s\geq0$. By Markov's inequality, for all $t\geq0$ we have $$\label{ineg-markov-expo} \mathbb{P} \left( X \geq s \right) = \mathbb{P} \left( e^{t X} \geq e^{t s} \right) \leq e^{-st} \mathbb{E} \left( e^{t X} \right).$$ Using (\[def-Bp\]), we estimate the Laplace transform $\displaystyle{ \mathbb{E} \left( e^{t X} \right) }$: $$\begin{aligned} \mathbb{E} \left( e^{t X} \right) & = \sum_{p \geq 0} \mathbb{E} \left( \frac{t^p . X^p}{p!} \right) \\ & \leq \sum_{p \geq 0} \frac{t^{2p} \epsilon_N^{2p} }{(2p)!} \frac{(2p)!}{2^p. p!} + \sum_{p \geq 0} \frac{t^{2p+1} \epsilon_N^{2p+1} }{(2p+1)!} \frac{(2p+1)!}{2^p. p! \sqrt{2p+1}} \\ & \leq (1+ t \epsilon_N) e^{\frac{t^2 \epsilon_N^2}{2}} .\end{aligned}$$ Taking the inequality (\[ineg-markov-expo\]) with $\displaystyle{ t= \frac{1}{\epsilon_N} \left( \frac{s}{\epsilon_N} -1 \right) }$, we obtain $$\mathbb{P} \left( X \geq s \right) \leq \frac{s}{\epsilon_N} e^{-\frac{1}{2} \left[ \left(\frac{s}{\epsilon_N} \right)^2 -1 \right] } ,$$ which is equivalent to the first concentration inequality (\[conc-adapt-un\]).\ For the second one, we use the inequality $$u e^{-\frac{u^2}{2} } = e^{-(\frac{u^2}{2} - \log u)} \leq e^{-(\frac{u^2}{2} - u+1)}$$ with $u=\frac{s}{\epsilon_N}$. One can then check that for all $y\geq1$ and $u\geq 2$, the equation $y = \frac{u^2}{2} - u + \frac{1}{2}$ is equivalent to $u = 1+ \sqrt{2y} \leq 2(1+\sqrt{y}) $. Thus $\forall y\geq1$: $$\mathbb{P} \left( X \geq 2 \epsilon_N (1+\sqrt{y}) \right) \leq e^{-y} ,$$ which is equivalent to (\[conc-adapt-deux\]).\ This ends the proof of the corollary.
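The role of the condition $b_n g_n (1+c_n) \leq a < 1$ is transparent from the recursion $\tilde{e}_{n+1} = 1 + g_{n+1} b_{n+1} (1+c_{n+1}) \cdot \tilde{e}_{n}$: the sequence $\tilde{e}_n$ is then dominated by the geometric series $\sum_k a^k \leq 1/(1-a)$, which is what makes the error bounds uniform in time. A minimal numerical sketch (our own illustration, not part of the paper):

```python
def e_tilde_seq(q_list):
    """Iterate the recursion e~_{n+1} = 1 + q_{n+1} * e~_n with e~_0 = 1,
    where q_n stands for the product g_n * b_n * (1 + c_n)."""
    e, seq = 1.0, [1.0]
    for q in q_list:
        e = 1.0 + q * e
        seq.append(e)
    return seq

a = 0.8                                  # assume sup_n g_n b_n (1 + c_n) <= a < 1
seq = e_tilde_seq([a] * 500)             # worst case: every q_n equal to a
# e~_n = sum_{k <= n} a^k is dominated by the geometric series 1 / (1 - a)
assert all(e <= 1.0 / (1.0 - a) + 1e-12 for e in seq)
assert abs(seq[-1] - 1.0 / (1.0 - a)) < 1e-9
```

If instead some $q_n > 1$, the recursion grows without bound, which is why time-uniform control requires the contraction condition.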
[99.]{} Assaraf, R., Caffarel, M.: A Pedagogical Introduction to Quantum Monte Carlo. Mathematical models and methods for ab initio Quantum Chemistry, 45-73, Springer (2000) doi: 10.1007/978-3-642-57237-1\_3 Assaraf, R., Caffarel, M., Khelif, A.: Diffusion Monte Carlo with a fixed number of walkers. Physical Review E (2000) Bartoli, N., Del Moral, P.: Simulation & Algorithmes Stochastiques. Cépaduès (2001) Baker, J. E.: Adaptive Selection Methods for Genetic Algorithms. ICGA1, pp. 101-111. (1985) Bertrand, C., Hamada, Y., Kado, H.: MRI Prior Computation and Parallel Tempering Algorithm: A Probabilistic Resolution of the MEG/EEG Inverse Problem, Brain Topography, vol 14 num 1. (2001) Cappé, O., Moulines, E., Rydén, T.: Inference in Hidden Markov Models. Springer, New York, (2005) Ceperley, D. M.: Path Integrals in the Theory of Condensed Helium. Rev. Modern Phys. 67 279-355. (1995) Cerf, R.: Une théorie asymptotique des algorithmes génétiques. PhD thesis, Université de Montpellier 2, France. (1994) Ciccotti, G., Hoover, W. G.: Molecular-Dynamics Simulation of Statistical-Mechanical Systems. North-Holland, Amsterdam (1986). Cérou, F., Del Moral, P., Furon, T., Guyader, A.: Sequential Monte Carlo for Rare Event Estimation. Statistics and Computing, vol 22, no 3, 795–808 (2012) doi: 10.1007/s11222-011-9231-6 Cérou, F., Del Moral, P., Guyader, A.: A nonasymptotic variance theorem for unnormalized Feynman-Kac particle models. Annales de l’Institut Henri Poincaré, Probabilités et Statistiques, 47, 629–649 (2011) Chopin, N.: Central Limit Theorem for Sequential Monte Carlo methods and its application to Bayesian inference. Annals of Statistics, 32, 2385-2411 (2004) Clapp, T.: Statistical Methods in the Processing of Communications Data. Ph.D. thesis, Cambridge University Engineering Department. (2000) Del Moral, P.: Feynman-Kac formulae. Genealogical and interacting particle approximations. Springer, New York. Series: Probability and Applications (2004).
Del Moral, P.: Nonlinear Filtering: Interacting Particle Solution. Markov Processes and Related Fields, Vol. 2, No. 4 555–579 (1996). Dawson D.A., Del Moral P.: Large deviations for interacting processes in the strong topology. Statistical Modeling and Analysis for Complex Data Problem P. Duchesne and B. Rémillard Editors, pp. 179–209, Springer (2005) Del Moral, P., Doucet, A., Jasra, A: On Adaptive Resampling Procedures for Sequential Monte Carlo Methods. Bernoulli, Vol. 18, No. 1, pp. 252-278. (2012) Del Moral, P., Doucet, A., Jasra, A.: Sequential Monte Carlo Samplers. Journal of the Royal Statistical Society B, 68, 411–436 (2006) Del Moral, P., Guionnet, A.: On the stability of interacting processes with applications to filtering and genetic algorithms. Annales de l’Institut Henri Poincaré, vol 37, No. 2, 155-194 (2001). Del Moral P., Guionnet A. A Central Limit Theorem for Non Linear Filtering using Interacting Particle Systems. Annals of Applied Probability, Vol. 9, No. 2, 275-297 (1999). Del Moral P., Guionnet A. Large Deviations for Interacting Particle Systems. Applications to Non Linear Filtering Problems. Stochastic Processes and their Applications, vol. 78, pp. 69-95 (1998). Del Moral P., Hu P., Wu L.: On the Concentration Properties of Interacting Particle Processes. Technical Report INRIA Bordeaux Sud-Ouest (ALEA) no 7677 (2011). Del Moral P., Ledoux M. On the Convergence and the Applications of Empirical Processes for Interacting Particle Systems and Nonlinear Filtering. Journal of Theoret. Probability, Vol. 13, No. 1, pp. 225-257 (2000). Del Moral P., Miclo L.: On the Convergence and the Applications of the Generalized Simulated Annealing. SIAM Journal on Control and Optimization, Vol. 37, No. 4, 1222-1250, (1999). Del Moral, P., Miclo, L.: Branching and interacting particle systems approximations of Feynman-Kac formulae with applications to nonlinear filtering. 
Séminaire de Probabilités XXXIV, Lecture Notes in Mathematics, vol 1729, Springer, Berlin, pp. 1-145 (2000) Del Moral, P., Rio, E.: Concentration Inequalities for Mean Field Particle Models. Annals of Applied Probability, vol 21, no 3, 1017-1052 (2011). Deutscher, J., Blake, A., Reid, I.: Articulated body motion capture by annealed particle filtering. IEEE Conference on Computer Vision and Pattern Recognition. Vol. 2, pp. 126-133. (2000) Doucet, A., De Freitas, N., Gordon, N.: Sequential Monte Carlo Methods in Practice. Statistics for engineering and Information Science. Springer, New York. (2001) Frenkel, D., Smit, B.: Understanding Molecular Simulation. Academic Press, New York. (1996) Giraud, F., Minvielle, P., Sancandi, M., Del Moral, P.: Rao-Blackwellised Interacting Markov Chain Monte Carlo for Electromagnetic Scattering Inversion. arXiv stat.AP/1209.4006v2. (2012) Goldberg, D. E.: Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley (1989) Hetherington, J. H.: Observations on the statistical iteration of matrices. Physical Review A, vol 30, 2713-2719 (1984). Jasra, A., Stephens, D., Doucet, A., Tsagaris, T.: Inference for Lévy Driven stochastic volatility models via adaptive sequential Monte Carlo. Scand. J. Stat. (2008) Künsch, H. R.: Recursive Monte-Carlo filters: algorithms and theoretical analysis. Annals of Statistics, 33, 1983–2021 (2005) Le Riche, R., Schoenauer, M., Sebag, M.: Un état des lieux de l’optimisation évolutionnaire et de ses implications en sciences pour l’ingénieur, chapter in Modélisation Numérique: défis et perspectives, Vol. 2, Traité Mécanique et Ingénierie des Matériaux, P. Breitkopf and C. Knopf-Lenoir eds., Hermes publ., pp. 187-259 (2007). Minvielle, P., Doucet, A., Marrs, A., Maskell, S.: A Bayesian approach to joint tracking and identification of geometric shapes in video sequences. Image and Vision Computing. Vol. 28, pp. 111-123. (2010) Neal, R.
M.: Annealed importance sampling, Statistics and Computing, vol. 11, pp. 125-139. (2001) Schäfer, C., Chopin, N.: Sequential Monte Carlo on large binary sampling spaces. Statistics and Computing. (2011) Schweizer, N.: Non-asymptotic error bounds for sequential MCMC and stability of Feynman-Kac propagators. Working Paper, University of Bonn (2012) Whiteley, N.: Sequential Monte Carlo samplers: error bounds and insensitivity to initial conditions. Working Paper, University of Bristol (2011)
--- abstract: 'Based on our previous work [@paper2], we investigate here the effects on the wind and magnetospheric structures of weak-lined T Tauri stars due to a misalignment between the axis of rotation of the star and its magnetic dipole moment vector. In such configuration, the system loses the axisymmetry presented in the aligned case, requiring a fully three-dimensional approach. We perform three-dimensional numerical magnetohydrodynamic simulations of stellar winds and study the effects caused by different model parameters, namely the misalignment angle $\theta_t$, the stellar period of rotation, the plasma-$\beta$, and the heating index $\gamma$. Our simulations take into account the interplay between the wind and the stellar magnetic field during the time evolution. The system reaches a periodic behavior with the same rotational period of the star. We show that the magnetic field lines present an oscillatory pattern. Furthermore, we obtain that by increasing $\theta_t$, the wind velocity increases, especially in the case of strong magnetic field and relatively rapid stellar rotation. Our three-dimensional, time-dependent wind models allow us to study the interaction of a magnetized wind with a magnetized extra-solar planet. Such interaction gives rise to reconnection, generating electrons that propagate along the planet’s magnetic field lines and produce electron cyclotron radiation at radio wavelengths. The power released in the interaction depends on the planet’s magnetic field intensity, its orbital radius, and on the stellar wind local characteristics. We find that a close-in Jupiter-like planet orbiting at $0.05$ AU presents a radio power that is $\sim 5$ orders of magnitude larger than the one observed in Jupiter, which suggests that the stellar wind from a young star has the potential to generate strong planetary radio emission that could be detected in the near future with LOFAR. 
This radio power varies according to the phase of rotation of the star. For three selected simulations, we find a variation of the radio power of a factor $1.3$ to $3.7$, depending on $\theta_t$. Moreover, we extend the investigation done in @paper2 and analyze whether winds from misaligned stellar magnetospheres could cause a significant effect on planetary migration. Compared to the aligned case, we show that the time-scale $\tau_w$ for an appreciable radial motion of the planet is shorter for larger misalignment angles. While for the aligned case $\tau_w\simeq 100$ Myr, for a stellar magnetosphere tilted by $\theta_t = 30^{\rm o}$, $\tau_w$ ranges from $\sim 40$ to $70$ Myr for a planet located at a radius of $0.05$ AU. Further reduction of $\tau_w$ might occur for even larger misalignment angles and/or different wind parameters.' author: - 'A. A. Vidotto' - 'M. Opher' - 'V. Jatenco-Pereira' - 'T. I. Gombosi' title: 'Simulations of Winds of Weak-Lined T Tauri Stars. II.: The Effects of a Tilted Magnetosphere and Planetary Interactions' --- INTRODUCTION ============ T Tauri stars are pre-main sequence low-mass stars ($0.5 \lesssim M/M_\odot \lesssim 2$), with a range of spectral types from F to M, and radius $\lesssim 3-4~R_\odot$. They are usually classified into two categories, depending on their evolutionary stage. In an earlier stage, they are known as classical T Tauri stars (CTTSs), surrounded by circumstellar disks. In a later stage, with the dissipation of the accretion disk, they are known as weak-lined T Tauri stars (WTTSs). Thanks to spectropolarimetric measurements, the number of young low-mass stars with detected magnetic fields has significantly increased in the past decade. These detections have suggested that T Tauri stars present mean surface field strengths of the order of kG.
Surface magnetic maps, derived from spectropolarimetric data, indicate that the surface fields on T Tauri stars are more complex than a simple dipole and are often misaligned with the rotational axis of the star [@2007MNRAS.380.1297D; @2008MNRAS.386.1234D]. CTTSs such as BP Tau [@2008MNRAS.386.1234D], V2129 Oph [@2007MNRAS.380.1297D], CR Cha and CV Cha [@2009MNRAS.398..189H], and V2247 Oph [@2010MNRAS.402.1426D] present dipolar and octupolar components of the surface magnetic field moment that are asymmetric with respect to the rotational axis of the star. More recently, the first surface magnetic maps for a WTTS, V410 Tau, have been acquired [@2010MNRAS.403..159S], showing that, similar to the less evolved CTTSs, V410 Tau also presents a non-axisymmetric poloidal field. Despite the existing knowledge of the surface magnetic fields in young stars, the global structure of the stellar magnetic field is unknown. Magnetic field extrapolations from surface magnetograms using the potential field source surface (PFSS) method have been used to help us elucidate the geometry of the large-scale field around T Tauri stars [@2008MNRAS.386..688J; @2008MNRAS.389.1839G]. Such extrapolations, however, neglect the interaction of the field with the stellar wind and the temporal evolution of the system. Full magnetohydrodynamics (MHD) numerical simulations [@2003ApJ...595L..57R; @paper2] allow us to study the interplay between the stellar magnetic field and the wind. In this method, the dynamical interaction of the stellar wind and the magnetic field lines is a result of the action of magnetic, thermal, gravitational, and inertial forces. MHD simulations can be, however, computationally expensive and time consuming.
A comparison between PFSS and MHD models that used observed surface magnetic maps as a boundary condition can be found in @2006ApJ...653.1510R in the context of the solar wind, showing that PFSS models are able to reconstruct the large-scale structure of the solar corona when time-dependent changes in the photospheric flux can be neglected, although nonpotential effects can have a significant effect on the magnetic structure of the corona. The accurate determination of the wind properties and topology of the magnetic field of a star is necessary to solve a series of open questions. The rotational evolution of the star, for example, requires the knowledge of the topology of the field, as the shape of the magnetic field lines may enhance rotational braking caused by the stellar magnetized winds. Furthermore, studies of the magnetic interaction between a CTTS and its disk require the knowledge of the structure of the magnetic field of the star [@1990RvMA....3..234C; @1991ApJ...370L..39K; @2008MNRAS.386.1274L]. Determining realistic magnetic field topologies and wind dynamics is also key to understanding interactions between magnetized extra-solar planets and the star, such as interactions that lead to planetary migration [@2006ApJ...645L..73R; @2008MNRAS.389.1233L; @paper2], interactions between the stellar magnetic field and the planetary magnetosphere and also with the planetary atmosphere. As a next step towards a more realistic wind and magnetic field modeling of WTTSs, in this work we extend the study performed in @paper2, where the stellar rotation and magnetic moment vectors were assumed to be parallel. We now consider cases where these vectors are not aligned. Some numerical and theoretical models exist considering the case of an oblique magnetic geometry, mainly applied to the study of pulsars, with a few applications to other astrophysical objects [e.g., @1998MNRAS.300..718L; @2003ApJ...595.1009R; @2004ApJ...610..920R; @2007MNRAS.382..139T].
As a consequence of the oblique magnetic geometry, the system loses the axisymmetry present in the aligned case [@paper2], thus requiring a fully three-dimensional (3D) approach. We perform here 3D MHD numerical simulations of magnetized stellar winds of WTTSs, by considering at the base of the coronal wind a dipolar magnetic field that is tilted with respect to the rotational axis of the star. Complex, high-order multipole magnetic field configurations may exist at the surface, but a dipolar component should dominate at larger distances [e.g., @2007ApJ...664..975J; @2007MNRAS.380.1297D]. As the simulation evolves in time, the initial field configuration is modified by the interaction with the stellar wind, which in turn is also modified by the magnetic field geometry. The stellar wind of a host star is expected to directly influence an orbiting planet and its atmosphere. The interaction, for example, of a magnetized wind with a magnetized planet can give rise to reconnection of magnetic field lines. Reconnection processes appear in several places in the Solar System. For example, the magnetic field lines on the day-side of the Earth's magnetosphere (i.e., the side facing the Sun) are compressed by the interaction with the solar wind, while on the opposite side (the night-side) a tear-drop-shaped tail is formed [e.g., @1930Natur.126..129C]. The solar wind interaction with the magnetic planets of the Solar System (Earth, Jupiter, Saturn, Uranus, and Neptune) accelerates electrons that propagate along the planets' magnetic field lines, producing electron cyclotron radiation at radio wavelengths [@1998JGR...10320159Z]. By analogy to the magnetic planets in the Solar System, predictions have been made that extra-solar planets should produce cyclotron maser emission, if they harbor intrinsic magnetic fields, although predictions also exist for the case of non-magnetized planets.
Evidence that extrasolar giant planets can be magnetized was found by @2005ApJ...622.1075S [@2008ApJ...676..628S], who observed modulations of the Ca II H&K lines in phase with planetary orbital periods on extrasolar planetary systems. Such modulations were interpreted as induced activity on the stellar chromosphere caused by the interaction between the stellar and planetary magnetic fields.[^1] The consideration of a realistic wind is crucial to determine how the interaction between the stellar wind and the magnetosphere of an extrasolar planet occurs. Using the 3D, time-dependent MHD wind models developed in this paper, we investigate the planet-wind interaction. Such interaction can give rise to reconnection processes, which result in transfer of energy from the stellar wind to the planet’s magnetosphere. Analogously to the interaction of Jupiter’s magnetosphere with the solar wind, we estimate the radio power released from the interaction of a close-in giant planet with the wind of its host star. We have organized this paper as follows. §\[sec.numerics\] presents the MHD numerical model adopted to describe a magnetized stellar wind of WTTSs. In §\[sec.results\], we present the simulations performed and the results achieved, along with a comparison between wind models with different parameters. In §\[sec.discussion\], we discuss our stellar wind results, performing comparisons with other simpler stellar wind models. The interaction between a close-in magnetized giant planet and the stellar wind and an estimate of the radio power released from this interaction are presented in §\[sec.reconnection\]. In §\[sec.migration\], we investigate whether the action of magnetic torques from the stellar wind acting on a close-in giant planet is able to cause planetary migration. This investigation extends the one presented in @paper2, where the winds analyzed assumed that the axis of rotation of the star and the surface magnetic dipole moment were aligned.
In §\[sec.conclusions\], we present the conclusions. THE NUMERICAL MODEL {#sec.numerics} =================== To perform the simulations, we use the Block Adaptive Tree Solar-wind Roe Upwind Scheme (BATS-R-US), a 3D ideal MHD numerical code developed at the Center for Space Environment Modeling at University of Michigan [@1999JCoPh.154..284P]. BATS-R-US has a block-based computational domain, consisting of Cartesian blocks of cells that can be adaptively refined for the region of interest. It has been used to simulate the heliosphere [@2003ApJ...595L..57R; @2007ApJ...654L.163C], the outer-heliosphere [@1998JGR...103.1889L; @2003ApJ...591L..61O; @2006ApJ...640L..71O; @2007Sci...316..875O], coronal mass ejections [@2004JGRA..10901102M; @2005ApJ...627.1019L], the Earth’s magnetosphere [@2006AdSpR..38..263R] and the magnetosphere of Saturn [@2005GeoRL..3220S06H] and Uranus [@2004JGRA..10911210T], among others. In this work, we extend the model developed in @paper2 to study the wind structure of WTTSs, specifically when the magnetic moment of the star and the stellar rotational axis are non-parallel. 
BATS-R-US solves the ideal MHD equations, that in the conservative form are given by (in cgs units) $$\label{eq:continuity_conserve} \frac{\partial \rho}{\partial t} + \nabla\cdot \left(\rho {\bf u}\right) = 0$$ $$\label{eq:momentum_conserve} \frac{\partial \left(\rho {\bf u}\right)}{\partial t} + \nabla\cdot\left[ \rho{\bf u\,u}+ \left(p + \frac{B^2}{8\pi}\right)I - \frac{{\bf B\,B}}{4\pi}\right] = \rho {\bf g}$$ $$\label{eq:bfield_conserve} \frac{\partial {\bf B}}{\partial t} + \nabla\cdot\left({\bf u\,B} - {\bf B\,u}\right) = 0$$ $$\label{eq:energy_conserve} \frac{\partial\varepsilon}{\partial t} + \nabla \cdot \left[ {\bf u} \left( \varepsilon + p + \frac{B^2}{8\pi} \right) - \frac{\left({\bf u}\cdot{\bf B}\right) {\bf B}}{4\pi}\right] = \rho {\bf g}\cdot {\bf u} \, ,$$ where $\rho$ is the mass density, ${\bf u}$ the plasma velocity, ${\bf B}$ the magnetic field, $p$ the gas pressure, ${\bf g}$ the gravitational acceleration due to the central body, and $\varepsilon$ is the total energy density given by $$\label{eq:energy_density} \varepsilon=\frac{\rho u^2}{2}+\frac{p}{\gamma-1}+\frac{B^2}{8\pi} \, .$$ We consider an ideal gas, so $p=\rho k_B T/(\mu m_p)$, where $k_B$ is the Boltzmann constant, $T$ is the temperature, $m_p$ is the proton mass, $\mu =0.5 $ is the mean molecular weight of a totally ionized hydrogen gas, and $\gamma$ is the ratio of the specific heats (or heating parameter). In our simulations, we adopt either $\gamma=1.1$ or $\gamma=1.2$. The adopted grid is Cartesian and the star is placed at the origin. The axes $x$, $y$, and $z$ extend from $-75~r_0$ to $75~r_0$, where $r_0$ is the stellar radius. For all the cases studied, we apply $11$ levels of refinement in the simulation domain. Figure \[fig.grid\] presents a cut along the meridional $xz$-plane illustrating the refinement at the inner portion of the grid. We note that a higher resolution is used around the central star. 
This configuration has a total of $\sim 2.5 \times 10^7$ cells in the simulation domain. The smallest cells (closest to the star) have a size of $0.018~r_0$ and the maximum cell size is $3.75~r_0$. ![Meridional cut ($xz$-plane) of the adopted 3D grid in the simulations of misaligned magnetospheres, illustrating the refinement at the inner portion of the grid. Immediately around the star, the cell resolution is $0.018~r_0$ ($r_0$ is the stellar radius), and with distance from the star, the grid gets coarser. The coarsest resolution is in the outer corners of the grid (not shown above) and is $3.75~r_0$. \[fig.grid\]](f1.eps){height="7cm"} The star has $M_\star = 0.8~M_\odot$ and $r_0=2~R_\odot$. The grid is initialized with a 1D hydrodynamical wind for a fully ionized hydrogen plasma. The solution for $u_r (r)$ depends on the choice of the temperature at the base of the wind and on $\gamma$, and the only physically possible solution is the one that becomes supersonic when passing through the critical radius [@1958ApJ...128..664P]. By mass conservation in a steady wind (i.e., $\rho u_r r^2={\rm constant}$), we obtain the density profile from the radial velocity profile $u_r (r)$. The star is considered to rotate as a solid body with a period of rotation $P_0=2\pi/\Omega$, where $\Omega$ is the angular velocity of the star. 
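As an illustration of this initialization, the transonic wind solution can be sketched in the isothermal limit (a simplified stand-in for the polytropic setup actually used here), with the density then following from mass conservation:

```python
import math

def parker_w(x):
    """Solve the isothermal Parker wind equation on the transonic
    branch: w - ln(w) = 4 ln(x) + 4/x - 3, with w = (u/c_s)^2 and
    x = r/r_c the radius in units of the critical radius."""
    rhs = 4.0 * math.log(x) + 4.0 / x - 3.0
    g = lambda w: w - math.log(w)
    if x < 1.0:          # subsonic branch: g is decreasing on (0, 1)
        lo, hi = 1e-12, 1.0
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if g(mid) > rhs else (lo, mid)
    else:                # supersonic branch: g is increasing on (1, inf)
        lo, hi = 1.0, 1e6
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if g(mid) < rhs else (lo, mid)
    return 0.5 * (lo + hi)

def density(rho0, u0, r0, u, r):
    """Steady-wind mass conservation: rho * u_r * r^2 = constant."""
    return rho0 * u0 * r0**2 / (u * r**2)
```

The solution passes smoothly through $u = c_s$ at the critical radius and is supersonic beyond it; the polytropic case used in the paper behaves analogously but with a modified critical-point condition.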
Its axis of rotation lies in the $z$-direction $${\bf \Omega} = \Omega \hat{\bf z} \, .$$ The surface magnetic moment vector ${\bf m}$ is tilted with respect to ${\bf \Omega}$ at an angle $\theta_t$ $$|{\bf m \cdot \Omega}| = m \Omega \cos\theta_t \, .$$ The simulations are initialized with a dipolar magnetic field described in spherical coordinates $\{ r, \theta, \varphi \}$ by $$\label{eq:dipoler} B_r (t_0)= \frac{B_0 r_0 ^3}{r ^3} (\cos \theta \cos \theta_t + \sin \theta \cos \varphi (t_0) \sin \theta_t ) \, ,$$ $$\label{eq:dipoletheta} B_\theta (t_0) = \frac{B_0 r_0 ^3}{r ^3} \left(\frac12 \sin \theta \cos \theta_t - \frac12\cos \theta \cos \varphi (t_0) \sin \theta_t \right) \, ,$$ $$\label{eq:dipolephi} B_\varphi (t_0) = \frac{B_0 r_0 ^3}{r ^3} \frac{\sin \varphi (t_0) \sin \theta_t}{2} \, ,$$ where $B_0$ is the magnetic field intensity at the [*magnetic poles*]{} (where $\theta = \theta_t$ and $r=r_0$), $r$ is the radial coordinate, $\theta$ is the co-latitude, and $\varphi$ is the azimuthal angle measured in the equatorial plane. At the initial instant $t_0$, the vector [**m**]{} is in the $xz$-plane, tilted by an angle $\theta_t$ in the clockwise direction around the $y$-axis (Fig. \[fig.tilted-initial\]a). The inner boundary of the system is the base of the wind at $r=r_0$, where fixed boundary conditions are adopted. The outer boundary has outflow conditions, i.e., a zero gradient is set for all the primary variables (${\bf u}$, ${\bf B}$, $p$, and $\rho$). As the magnetic field is anchored on the star, in one stellar rotational period the surface magnetic moment vector ${\bf m}$ draws a cone whose central axis is the $z$-axis (Fig. \[fig.tilted-initial\]b). As a consequence, in the simulations with oblique magnetic geometries, the boundary conditions are time-dependent and the simulations reach a periodic configuration (§\[sec.temp.beh\]). 
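A minimal sketch of Eqs. (\[eq:dipoler\])–(\[eq:dipolephi\]), useful for checking the limiting cases (the function and variable names are ours, not from the code used in the simulations):

```python
import math

def tilted_dipole(B0, r0, r, theta, phi, theta_t):
    """Initial tilted-dipole field components (B_r, B_theta, B_phi)
    from the expressions in the text; all angles in radians."""
    a = B0 * r0**3 / r**3
    br = a * (math.cos(theta) * math.cos(theta_t)
              + math.sin(theta) * math.cos(phi) * math.sin(theta_t))
    bt = a * 0.5 * (math.sin(theta) * math.cos(theta_t)
                    - math.cos(theta) * math.cos(phi) * math.sin(theta_t))
    bp = a * 0.5 * math.sin(phi) * math.sin(theta_t)
    return br, bt, bp
```

At the magnetic pole ($\theta = \theta_t$, $\varphi = 0$, $r = r_0$) the field is purely radial with $|B_r| = B_0$, and for $\theta_t = 0$ the standard aligned dipole is recovered.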
The MHD solution is evolved in time from the initial dipolar configuration for the magnetic field to a fully self-consistent non-dipolar solution. The wind interacts with the magnetic field lines and deforms the initial dipolar configuration of the field. The stellar wind is in turn modified by the magnetic field, i.e., no fixed topology is assumed for either the magnetic field or the wind. ![The magnetic field configuration in the simulations of misaligned magnetospheres. (a) Orientation of the magnetic field lines in the $xz$-plane at the initial instant $t_0$. (b) In one stellar rotational period, the magnetic moment vector ${\bf m}$ draws a cone, whose central axis is the $z$-axis. Shown above are the magnetic moment vector for the initial instant $t_0$ and half-period later. $\theta_t$ is the angle between the vectors ${\bf m}$ and $\Omega$. \[fig.tilted-initial\] ](f2a.eps "fig:"){height="7cm"}\ ![The magnetic field configuration in the simulations of misaligned magnetospheres. (a) Orientation of the magnetic field lines in the $xz$-plane at the initial instant $t_0$. (b) In one stellar rotational period, the magnetic moment vector ${\bf m}$ draws a cone, whose central axis is the $z$-axis. Shown above are the magnetic moment vector for the initial instant $t_0$ and half-period later. $\theta_t$ is the angle between the vectors ${\bf m}$ and $\Omega$. \[fig.tilted-initial\] ](f2b.ps "fig:"){height="7cm"} SIMULATION RESULTS {#sec.results} ================== Table \[table\] shows the parameters adopted in the simulations. Common to all simulations are the magnetic field intensity of $B_0=1$ kG at the magnetic poles of the star and the temperature $T_0=10^6$ K at the base of the wind. We varied the misalignment angle $\theta_t$, the period of rotation of the star $P_0$, the density at the base of the wind $\rho_0$ (and consequently the plasma-$\beta$ at the base of the coronal wind, $\beta_0$), and the heating index $\gamma$. 
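As a consistency check, $\beta_0 = 8\pi p_0/B_0^2$ can be evaluated directly from these base parameters (a sketch using cgs constants; we assume $\beta_0$ is defined with the polar field strength $B_0$):

```python
import math

K_B = 1.380649e-16    # Boltzmann constant, erg/K
M_P = 1.67262192e-24  # proton mass, g

def plasma_beta(rho, T, B, mu=0.5):
    """Ratio of thermal to magnetic pressure, beta = 8*pi*p / B^2,
    with p = rho*k_B*T/(mu*m_p) for a fully ionized hydrogen gas."""
    p = rho * K_B * T / (mu * M_P)
    return 8.0 * math.pi * p / B**2

beta_fiducial = plasma_beta(rho=1e-11, T=1e6, B=1000.0)    # ~1/24
beta_low_rho = plasma_beta(rho=2.4e-12, T=1e6, B=1000.0)   # ~1/100
```

The first value comes out as $\approx 1/24$, consistent with the quoted $\beta_0 = 1/25$ to within rounding, and the reduced base density of case T10 reproduces $\beta_0 \approx 1/100$.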
Observations of WTTSs show that they possess rotational periods ranging from $0.5$ to $13$ d, with a distribution peaking at $P_0 \sim 2$ d [@2007ApJ...671..605C]. However, we did not consider $P_0>3$ d, as the dynamical effect of the misalignment is more significant in the cases where the star has a short period of rotation. This implies that we are in the lower range of observed rotational periods for WTTSs. A description of the choice of parameters used in the simulations (except for $\theta_t$) can be found in Section 3 of @paper2. The misalignment angle $\theta_t$ between the rotational axis of the star and the surface magnetic dipole moment vector was chosen to vary from $0^{\rm o}$ (aligned case) to $90^{\rm o}$, although surface magnetic maps of T Tauri stars have shown the existence of smaller angles [$\theta_t \lesssim 30^{\rm o}$, @2007MNRAS.380.1297D; @2008MNRAS.386.1234D]. Simulation T01 considers the aligned case. Simulations T02 to T06 represent cases with different $\theta_t$. With respect to our fiducial case T04, we varied $\gamma$ in case T07, $P_0$ in cases T08 and T09, and $\beta_0$ in case T10. 
[c c c c c c]{} Case & [$\rho_0$]{} & [$\gamma$]{} & [$P_0$]{} & [$\theta_t$]{} & [$\beta_0$]{}\ & (g cm$^{-3}$) & & (d) & ($^{\rm o}$) &\ T01 & $1\times 10^{-11}$ & $1.2$ & $1$ & $0$ & $1/25$\ T02 & $1\times 10^{-11}$ & $1.2$ & $1$ & $10$ & $1/25$\ T03 & $1\times 10^{-11}$ & $1.2$ & $1$ & $20$ & $1/25$\ T04 & $1\times 10^{-11}$ & $1.2$ & $1$ & $30$ & $1/25$\ T05 & $1\times 10^{-11}$ & $1.2$ & $1$ & $60$ & $1/25$\ T06 & $1\times 10^{-11}$ & $1.2$ & $1$ & $90$ & $1/25$\ T07 & $1\times 10^{-11}$ & $1.1$ & $1$ & $30$ & $1/25$\ T08 & $1\times 10^{-11}$ & $1.2$ & $3$ & $30$ & $1/25$\ T09 & $1\times 10^{-11}$ & $1.2$ & $0.5$ & $30$ & $1/25$\ T10 & $2.4\times 10^{-12}$ & $1.2$ & $1$ & $30$ & $1/100$\ Time-Dependent Behavior {#sec.temp.beh} ----------------------- Because the star is rotating and the magnetic field is asymmetric with respect to the axis of rotation, the simulations with an oblique magnetosphere exhibit periodic behavior with the same period as the stellar rotation. Depending on the physical conditions and the grid size, this periodic behavior is achieved once the system has relaxed, after a certain number of stellar rotations. For example, for a wind expansion velocity of $\sim 200$ km s$^{-1}$ propagating in a grid of size $75~r_0$, the time for the solution to relax in the grid is $\sim 75~r_0/(200$ km s$^{-1}) \sim 6~$days. Cases T02, T03, and T04 were run for $10$ days, T05 for $9$ days, T06 for $8$ days, T07 for $5$ days, T08 and T09 for $6$ days, and T10 for $7$ days. These time intervals were sufficient for the solution to relax in the grid. Figure \[fig.evolution\] illustrates the time-dependent behavior of our simulations, where we show meridional cuts of the total velocity of the wind for nine instants during one full period of rotation of the star for case T04, our fiducial case. The first panel represents a given instant $t = t_1$; the subsequent panels increase in multiples of $1/8~P_0$, until the cycle completes at instant $t=t_1+P_0$. 
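The relaxation-time estimate quoted above is straightforward to reproduce (a sketch; the $200$ km s$^{-1}$ expansion velocity is the illustrative value used in the text):

```python
R_SUN_KM = 6.957e5           # solar radius in km
r0_km = 2.0 * R_SUN_KM       # stellar radius, r0 = 2 R_sun
box_km = 75.0 * r0_km        # half-width of the simulation box
u_wind_kms = 200.0           # characteristic wind expansion velocity

# crossing time of the box, converted from seconds to days
t_relax_days = box_km / u_wind_kms / 86400.0   # ~6 days
```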
The magnetic field lines are represented by black lines and the white line denotes the contour where the magnetic field changes polarity, i.e., where $B_r = 0$. It can be seen that the initial instant $t = t_1$ and the final instant $t=t_1+P_0$ of a given stellar rotational period are identical, which exemplifies the periodic behavior of the simulation. The magnetic field has zones of open and closed field lines. The zone of closed field lines rotates around the stellar equatorial plane ($z=0$), as is readily seen if one uses the contour-line of $B_r=0$ (white line) as a guide. ![image](f3a.eps){height="5.5cm"} ![image](f3b.eps){height="5.5cm"} ![image](f3c.eps){height="5.5cm"}\ ![image](f3d.eps){height="5.5cm"} ![image](f3e.eps){height="5.5cm"} ![image](f3f.eps){height="5.5cm"}\ ![image](f3g.eps){height="5.5cm"} ![image](f3h.eps){height="5.5cm"} ![image](f3i.eps){height="5.5cm"} We recall that $B_r=0$ defines a two-dimensional surface, but because Fig. \[fig.evolution\] (as well as several other figures we present later on) is a meridional cut, $B_r=0$ is shown as a contour-line. When there is no misalignment ($\theta_t=0$), the surface $B_r=0$ coincides with the equatorial plane ($z=0$). However, in the case where $\theta_t \ne 0$, $B_r=0$ defines a wavy, time-dependent surface for an observer in an inertial reference frame. In the corotating frame of the star, the surface is still wavy, but it appears static. In both cases ($\theta_t = 0$ and $\theta_t \ne 0$), such a surface is the locus of points at the tips of the closed magnetic field lines. Therefore, we expect cusp-like structures (i.e., helmet-streamers) to also oscillate with the rotational period of the star. A three-dimensional view of the surface $B_r=0$ is shown in Fig. \[fig.Br-isosurface\]. ![Three-dimensional view of the isosurface defined by $B_r=0$ for case T04 after $8$ rotations of the star ($t=192$ h). The star is shown in red. 
\[fig.Br-isosurface\]](f4.eps){height="7cm"} The effects of the misalignment angle $\theta_t$ ------------------------------------------------ By comparing simulations T01 to T04, we can analyze the effects caused by a small misalignment between the stellar rotation axis and the stellar magnetic dipole moment vector on the wind structure and magnetic field configuration. Figure \[fig.radial-velocity\] presents radial velocity color maps for the cases T02 ($\theta_t=10^{\rm o}$), T03 ($\theta_t=20^{\rm o}$), T04 ($\theta_t=30^{\rm o}$), and the aligned case T01 ($\theta_t=0^{\rm o}$) for the entire simulation box (shown in the figure is a meridional cut in the $xz$-plane). The four panels are snapshots taken at $t=240$ h, sufficient time to ensure that the periodic behavior described in §\[sec.temp.beh\] has been achieved. For each of the misaligned cases, we note regions of higher velocities, surrounded by regions of lower velocities. This oscillatory behavior is seen in all the variables of the wind and is caused by the precession of the stellar magnetic field around the polar axis of the star. We also note that the increase in $\theta_t$ leads to faster winds on average. This is caused by the azimuthal derivative ($\partial / \partial \varphi$) terms in Eqs. (\[eq:continuity\_conserve\]) to (\[eq:energy\_conserve\]), which vanish in the aligned case. In the momentum equation \[Eq. 
\[eq:momentum\_conserve\]\], for instance, these terms are: the inertial term $$\label{eq.extrainertial} \frac{u_\varphi}{r \sin \theta} \left( \frac{\partial {u_r}}{\partial \varphi}\hat{\bf r} + \frac{\partial {u_\theta}}{\partial \varphi}\hat{\bf \theta} + \frac{\partial {u_\varphi}}{\partial \varphi} \hat{\bf \varphi}\right) \, ,$$ the pressure gradient in the azimuthal direction $$\label{eq.extrathermal} - \frac{1}{r \sin \theta} \frac{\partial {p}}{\partial \varphi} \hat{\bf \varphi} \, ,$$ and the magnetic force $({\bf \nabla \times B}){\bf \times B}/4\pi$ $$\label{eq.extramag} \frac{1}{4\pi r \sin \theta} \left[ B_\varphi \frac{\partial}{\partial \varphi} (B_r \hat{\bf r} + B_\theta \hat{\bf \theta}) - \frac{\partial}{\partial \varphi} \left( \frac{B_r^2}{2} + \frac{B_\theta^2}{2} \right) \hat{\bf \varphi}\right] \,,$$ where the first term inside the brackets refers to a magnetic tension and the second term refers to a gradient of the magnetic pressure. The terms in Eq. (\[eq.extrainertial\]) are negligible compared to the total inertial force, as are the terms in Eq. (\[eq.extrathermal\]) compared to the magnitude of the total pressure-gradient force. The magnetic tension and pressure in Eq. (\[eq.extramag\]) contribute more significantly to the acceleration of the wind under an oblique magnetic field configuration. Furthermore, the larger the misalignment angle $\theta_t$, the larger the magnitude of this contribution and, consequently, the larger the increase in the wind velocity. ![image](f5a.eps){height="7cm"} ![image](f5b.eps){height="7cm"}\ ![image](f5c.eps){height="7cm"} ![image](f5d.eps){height="7cm"} Figure \[fig.radial-velocity-cuts\] shows radial line cuts of the radial velocity in the $xz$-plane for the same instant shown in Fig. \[fig.radial-velocity\]: Fig. 
\[fig.radial-velocity-cuts\]a presents radial cuts along the [*magnetic pole*]{}, i.e., along co-latitude $\theta =10^{\rm o}$ for T02 (where $\theta_t=10^{\rm o}$), along co-latitude $\theta =20^{\rm o}$ for T03 (where $\theta_t=20^{\rm o}$) and so on; Fig. \[fig.radial-velocity-cuts\]b presents radial cuts along the [*magnetic equator*]{}, i.e., along co-latitude $\theta =100^{\rm o}$ for T02 (where $\theta_t=10^{\rm o}$), along co-latitude $\theta =110^{\rm o}$ for T03 (where $\theta_t=20^{\rm o}$) and so on. We note that along the magnetic pole, the curves of radial velocity for the tilted cases oscillate around the curve for the aligned case (black solid line, $\theta_t=0^{\rm o}$). However, along the magnetic equator, the radial velocity increases as $\theta_t$ gets larger, as explained in the previous paragraph. The wind radial velocity at $\sim 75~r_0$ is $227~$km s$^{-1}$ for case T01, $234~$km s$^{-1}$ for T02, $251~$km s$^{-1}$ for T03, and $275~$km s$^{-1}$ for T04. Figures \[fig.radial-velocity-cuts\]c and \[fig.radial-velocity-cuts\]d present the same cuts along the rotational poles and equator, respectively. ![image](f6a.eps){height="7cm"} ![image](f6b.eps){height="7cm"}\ ![image](f6c.eps){height="7cm"} ![image](f6d.eps){height="7cm"}\ Figure \[fig.Uphi-snapshot\] presents the inner portion of our simulation boxes at $t=240$ h for cases T01 to T04. The magnetic field lines are represented by black lines, and the white line represents the contour-line where $B_r = 0$. By following the white line in Fig. \[fig.Uphi-snapshot\], we note that the closed magnetic field lines, as well as the open field lines, are not rigid, presenting a warped zone around the rotational equatorial plane of the star ($z= 0$). The amplitude of the oscillations gets larger as $\theta_t$ increases. The color maps show azimuthal velocity. 
There is no significant variation in the magnitude of $u_\varphi$ between the simulations, but the spatial profile of $u_\varphi$ is highly dependent on the configuration of the magnetic field, and thus on $\theta_t$. As we can see, the highest values of $u_\varphi$ are achieved inside the closed magnetic field lines (close to the star), and the rotating wind is forced to follow the same oscillation pattern as the magnetic field lines. In the case of perfect alignment, the maximum azimuthal velocity occurs in the equatorial plane of the star (Fig. \[fig.Uphi-snapshot\]a). ![image](f7a.eps){height="7cm"} ![image](f7b.eps){height="7cm"}\ ![image](f7c.eps){height="7cm"} ![image](f7d.eps){height="7cm"} Two further simulations for different misalignment angles were performed: T05 ($\theta_t=60^{\rm o}$) and T06 ($\theta_t=90^{\rm o}$). These simulations represent more extreme cases of misalignment, T06 being the case where the axis of the magnetic moment at the base of the coronal wind is perpendicular to the rotational axis. Both present characteristics similar to the cases presented so far for $\theta_t \leq 30^{\rm o}$, though with enhanced wind velocities. For case T06, the region of lower radial velocity remains in the region of closed magnetic field lines, but this region is now around the rotational poles (at co-latitudes $\theta=0^{\rm o}$, $180^{\rm o}$). Figure \[fig.3D.T23.T24\] presents the 3D view of selected magnetic field lines for cases T05 (Fig. \[fig.3D.T23.T24\]a) and T06 (Fig. \[fig.3D.T23.T24\]b), illustrating the inherently three-dimensional nature of our simulations. ![3D view of selected magnetic field lines for (a) $\theta_t=60^{\rm o}$ (T05) and (b) $\theta_t=90^{\rm o}$ (T06). \[fig.3D.T23.T24\]](f8a.eps "fig:"){height="7cm"}\ ![3D view of selected magnetic field lines for (a) $\theta_t=60^{\rm o}$ (T05) and (b) $\theta_t=90^{\rm o}$ (T06). 
\[fig.3D.T23.T24\]](f8b.eps "fig:"){height="7cm"} We selected case T04 to describe the wind characteristics. The periodic movement of the stellar magnetosphere affects the entire wind structure, as can be seen in Fig. \[fig.t21.typical\], where we present meridional cuts of the following wind variables: $u_{\rm tot}$, $|B_r|$, $|B_\theta|$, $|B_\varphi|$, $\rho$, and $J_{\rm tot}$. Meridional cuts of $u_r$ and $|u_\varphi|$ can be found in Figs. \[fig.radial-velocity\] and \[fig.Uphi-snapshot\], respectively. $|B_r|$, $|B_\theta|$, and $|B_\varphi|$ present a dipolar configuration at the base of the coronal wind, but at other radii their solution depends on the interaction of the magnetic field with the wind. Because of this interaction, the stellar magnetosphere acquires an azimuthal component of the magnetic field, which can be seen in Fig. \[fig.t21.typical\]d, with maximum intensity at the interface between closed and open field lines. The density of the wind is not spherically symmetric, presenting higher densities around the $B_r=0$ surface (Fig. \[fig.t21.typical\]e). The total current density $J_{\rm tot} \propto |\nabla \times {\bf B}|$ is shown in Fig. \[fig.t21.typical\]f. ![image](f9a.eps){height="7cm"} ![image](f9b.eps){height="7cm"}\ ![image](f9c.eps){height="7cm"} ![image](f9d.eps){height="7cm"}\ ![image](f9e.eps){height="7cm"} ![image](f9f.eps){height="7cm"} The effect of a different $\gamma$ on the wind ---------------------------------------------- We now compare the results from simulations T04 and T07, where different values of $\gamma$ were adopted. The value of $\gamma$, which enters through the polytropic relation $p\propto \rho^{\gamma}$, sets the input of thermal energy and thus influences the thermal acceleration of the wind. Low values of $\gamma$ imply a proportionally large input of thermal energy in the wind and, consequently, high wind terminal velocities. 
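The role of $\gamma$ can be made concrete through the polytropic relation: combined with the ideal gas law, $p\propto \rho^{\gamma}$ implies $T = T_0 (\rho/\rho_0)^{\gamma-1}$, so for the same density contrast a smaller $\gamma$ yields a hotter, more thermally driven wind. A sketch, where the density contrast $\rho/\rho_0 = 5\times 10^{-3}$ is an assumed illustrative value, not taken from the simulations:

```python
def polytropic_T(T0, density_contrast, gamma):
    """Temperature from p ~ rho^gamma together with the ideal gas
    law, i.e. T = T0 * (rho/rho0)**(gamma - 1)."""
    return T0 * density_contrast**(gamma - 1.0)

T_gamma12 = polytropic_T(1e6, 5e-3, 1.2)  # a few 1e5 K
T_gamma11 = polytropic_T(1e6, 5e-3, 1.1)  # hotter for smaller gamma
```

With this assumed contrast the two temperatures come out a few $10^5$ K apart, of the same order as the mid-wind temperatures quoted for the two cases in the text.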
It was shown in @paper1 that when the wind is magnetized, the value chosen for $\gamma$ can alter the ratio between thermal and magnetic forces and thus accentuate the latitudinal dependence of the wind. In this work, we do not invoke the physical processes that may cause a larger input of energy in our models. Nevertheless, we study the effects of a smaller $\gamma$ on the wind of a magnetized star with oblique magnetic geometries. Figure \[fig.gamma\] presents the total velocity of the wind for both cases. By comparing simulations T04 ($\gamma=1.2$) and T07 ($\gamma=1.1$), we find that the terminal velocity achieved in simulation T07 is around $22\%$ larger than that achieved in T04. The wind temperature profiles of the two cases differ, as expected: at the equatorial plane, near $10~r_0$, for instance, $T \simeq 3.4 \times 10^5~$K for case T04 and $T \simeq 5.5 \times 10^5~$K for case T07. Furthermore, the magnetic field configuration differs between the two cases, with case T07 presenting a more compact zone of closed field lines. These results agree with our previous ones [@paper2; @paper1], which show that the heating parameter is important in defining the acceleration of the wind and the magnetic configuration around the star. ![Meridional cuts of the total wind velocity $u_{\rm tot}$ for cases with (a) $\gamma=1.2$ (T04) and (b) $\gamma = 1.1$ (T07). Black and white lines have the same meaning as before. \[fig.gamma\]](f10a.eps "fig:"){height="7cm"}\ ![Meridional cuts of the total wind velocity $u_{\rm tot}$ for cases with (a) $\gamma=1.2$ (T04) and (b) $\gamma = 1.1$ (T07). Black and white lines have the same meaning as before. \[fig.gamma\]](f10b.eps "fig:"){height="7cm"} The effect of different stellar rotational periods -------------------------------------------------- Here we explore the rotational effects on the dynamics of the wind, keeping $\theta_t=30^{\rm o}$. 
We select cases T04, T08, and T09 to perform this comparison, where the stellar rotational periods are $1$, $3$, and $0.5$ d, respectively. These periods of rotation are in the lower range of observed periods for WTTSs, so as to explore the maximum effects on the wind. Longer stellar rotational periods imply a wind that is less disturbed by the precession of the magnetic field. Figure \[fig.Prot\] shows the meridional cuts of total wind velocities for $P_0=0.5$ d (Fig. \[fig.Prot\]a), $P_0=1$ d (Fig. \[fig.Prot\]b), and $P_0=3$ d (Fig. \[fig.Prot\]c) at the same stellar rotational phase. The wind is more accelerated for lower $P_0$ (i.e., larger rotational velocities), as a result of the coupling of magnetic fields and rotation [e.g., @1967ApJ...148..217W; @1976ApJ...210..498B]. At the equatorial plane, $u_{\rm tot}=360$ km s$^{-1}$ for case T09, $u_{\rm tot}=250$ km s$^{-1}$ for case T04, and $u_{\rm tot}=180$ km s$^{-1}$ for case T08. ![Meridional cuts of total wind velocities $u_{\rm tot}$ at the same stellar rotational phase for (a) $P_0=0.5$ d (T09), (b) $P_0=1$ d (T04), and (c) $P_0=3$ d (T08). Black and white lines have the same meaning as before. \[fig.Prot\]](f11a.eps "fig:"){height="7cm"}\ ![Meridional cuts of total wind velocities $u_{\rm tot}$ at the same stellar rotational phase for (a) $P_0=0.5$ d (T09), (b) $P_0=1$ d (T04), and (c) $P_0=3$ d (T08). Black and white lines have the same meaning as before. \[fig.Prot\]](f11b.eps "fig:"){height="7cm"}\ ![Meridional cuts of total wind velocities $u_{\rm tot}$ at the same stellar rotational phase for (a) $P_0=0.5$ d (T09), (b) $P_0=1$ d (T04), and (c) $P_0=3$ d (T08). Black and white lines have the same meaning as before. 
\[fig.Prot\]](f11c.eps "fig:"){height="7cm"} The effect of a different $\beta_0$ on the wind ----------------------------------------------- In @paper2 [@paper1], we have shown that the ratio between the thermal and magnetic energy densities at the base of the wind ($\beta_0$) is a decisive factor in defining the magnetic configuration of the wind, as well as its velocity distribution. To study how $\beta_0$ influences the wind profile in the case of an oblique magnetic geometry, we compare simulations T04 and T10, which have the same model parameters, except for the density at the base of the wind (and thus different $\beta_0$). Figure \[fig.beta\] shows meridional cuts of the total velocity of the wind plotted for both cases, as well as the magnetic field lines (black lines) and the surface $B_r=0$ (white line). Both panels show a snapshot at the same rotational phase of the star. As in the aligned case [@paper2; @paper1], the wind is more accelerated for low $\beta_0$, where the magnetic energy density at the base of the wind is more important than the thermal energy density. The ratio of open to closed magnetic field lines is larger for case T10, with lower $\beta_0$, showing that a faster wind is able to open the field lines more efficiently. ![Comparison between simulations with different $\beta_0$: (a) $\beta_0=1/25$ (T04) and (b) $\beta_0=1/100$ (T10). Plots show meridional cuts of the total wind velocity $u_{\rm tot}$, magnetic field lines (black lines) and the surface $B_r=0$ (white line). \[fig.beta\]](f12a.eps "fig:"){height="7cm"}\ ![Comparison between simulations with different $\beta_0$: (a) $\beta_0=1/25$ (T04) and (b) $\beta_0=1/100$ (T10). Plots show meridional cuts of the total wind velocity $u_{\rm tot}$, magnetic field lines (black lines) and the surface $B_r=0$ (white line). 
\[fig.beta\]](f12b.eps "fig:"){height="7cm"} The “wavelength” of the magnetospheric oscillation -------------------------------------------------- The typical length $\lambda$ of the oscillation of the isocontour of $B_r=0$ (white lines in Figs. 3, 7, 9 - 12) can be estimated as in @2009ApJ...703....8L $$\label{eq.lambda} \lambda \simeq u_{\rm char} P_0 = 0.062~r_0 ~u_{\rm char}^{\rm (km/s)} P_0^{\rm(d)} \, ,$$ where $u_{\rm char}$ is the characteristic velocity of the plasma. For $P_0=1$ d and different $\theta_t$ (cases T02 to T04), we showed that the velocity of the wind increases with $\theta_t$. Therefore, from Eq. (\[eq.lambda\]), it follows immediately that $\lambda$ will be larger for larger $\theta_t$. In fact, our estimates for the inner portion of the grid show that $\lambda \simeq 13$, $14$, and $16~r_0$ for $\theta_t=10^{\rm o}$, $20^{\rm o}$, and $30^{\rm o}$, respectively, which are consistent with the values measured from the simulations, $\lambda_{\rm sim} \simeq 14$, $16$, and $17~r_0$. In the cases where $\gamma$ was compared (cases T04 and T07), we note that $\lambda$ is larger for case T07 ($\gamma=1.1)$ than for case T04 ($\gamma=1.2)$. This is again due to the larger wind velocities of case T07. The value of $\lambda$ calculated from Eq. (\[eq.lambda\]) was $\simeq 16$ and $20~r_0$ for T04 and T07, respectively, while the measured values of $\lambda_{\rm sim}$ from the inner region of the simulations were $\simeq 17$ and $19~r_0$. In the cases where $P_0$ was varied from $0.5$ to $3$ days (cases T04, T08, and T09), the velocity of the wind decreased as $P_0$ increased. These two changes act in opposite directions in Eq. (\[eq.lambda\]). However, the increase in $P_0$ proved more important for the increase of $\lambda$ than the accompanying decrease in the wind velocity. 
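The numerical coefficient in Eq. (\[eq.lambda\]) follows from unit conversion alone, as the sketch below shows (the characteristic velocity used in the usage example is illustrative, of the order of the equatorial values quoted for the fiducial case):

```python
R_SUN_KM = 6.957e5               # solar radius in km
r0_km = 2.0 * R_SUN_KM           # stellar radius, r0 = 2 R_sun
DAY_S = 86400.0                  # seconds per day

# 1 (km/s) * 1 day, expressed in units of r0: the 0.062 coefficient
coeff = DAY_S / r0_km

def wavelength_r0(u_char_kms, P0_days):
    """Oscillation wavelength lambda = u_char * P0, in units of r0."""
    return coeff * u_char_kms * P0_days
```

For example, $u_{\rm char} \approx 250$ km s$^{-1}$ and $P_0 = 1$ d gives $\lambda \approx 15.5~r_0$, of the order of the measured $\lambda_{\rm sim} \simeq 17~r_0$ for case T04.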
We find $\lambda \simeq 11$, $16$, and $37~r_0$ for $P_0=0.5$ d (T09), $P_0=1$ d (T04), and $P_0=3$ d (T08), respectively, while the measured values were $\lambda_{\rm sim} \simeq 12$, $17$, and $36~r_0$. Varying the ratio between the thermal and magnetic energy densities at the base of the wind (cases T04 and T10) has an important effect on the acceleration of the wind. Because case T10 presents a lower $\beta_0$ than case T04, T10 has a larger characteristic plasma velocity. Ultimately, this increases $\lambda$ from $\simeq 16~r_0$ (T04) to $ 30~r_0$ (T10). The measured values of $\lambda_{\rm sim}$ from the inner region of the simulations were $17$ and $31~r_0$. DISCUSSION OF STELLAR WIND RESULTS {#sec.discussion} ================================== An MHD wind model provides solutions for the density, velocity, and temperature profiles, along with the magnetic field configuration of the wind. Because the solution of the MHD equations is complex, some models adopt several approximations. This is the case of the Weber-Davis model [@1967ApJ...148..217W], first developed for the solar wind and later adapted to describe the winds of other stars . The simplifications involved in the Weber-Davis model are: the model is axisymmetric and stationary; it considers an open, radial magnetic field that acquires an azimuthal component due to the rotation of the star; the wind solution is valid for the equatorial plane; and it neglects the meridional components of the magnetic and velocity fields. Because only the open magnetic field lines contribute to angular momentum loss, the Weber-Davis model is expected to overestimate the angular momentum lost through a magnetized stellar wind, presenting shorter time-scales for stellar rotational braking than models that consider the existence of closed field-line regions [@1987MNRAS.226...57M]. Because it is one-dimensional, the Weber-Davis model can be easily integrated. 
A detailed description of its solution is given in . In contrast to the Weber-Davis model, our model presents a multi-component corona, with the co-existence of open and closed field-line regions. A latitudinally dependent velocity is observed, where the wind along the magnetic poles has a larger velocity than the wind along the equatorial regions. Details of the characteristics of the solutions of our wind model, in the context of aligned rotational axis and magnetic dipole moment, are presented in @paper1. Unfortunately, the three-dimensional nature of our model does not allow analytical expressions for the wind variables. Different wind scenarios in the framework of the Weber-Davis model were explored by and . The first investigated the characteristics of the stellar wind for a sample of stars hosting close-in giant planets, while the second used empirical data to constrain theoretical wind scenarios. Although the wind solutions from and are focused mainly on cool main-sequence stars, we compare the overall trend of our model with respect to these works. Mass-loss rates in such models are similar to the solar wind value and do not exceed about $10$ times the solar value . Wind terminal velocities found by range from $310$ km s$^{-1}$ for an isothermal wind temperature of $T_0=5\times 10^5~$K to $760$ km s$^{-1}$ for $T_0=2\times10^6~$K. Compared to our models, these mass-loss rates are about $6$ orders of magnitude smaller, while the velocities achieved are of the same order of magnitude. The difference in mass-loss rates is a consequence of the larger coronal densities adopted in our models. To our knowledge, there are no measurements of mass-loss rates and wind velocities for WTTSs with which to compare our results. The existing detections of mass-loss rates are based on the earlier evolutionary stage as a CTTS, when an accretion disk is still present [e.g., @1964ApJ...140.1409K; @2003ApJ...599L..41E; @2006ApJ...646..319E; @2007ApJ...657..897K; @2007ApJ...654L..91G]. 
Based on these detections, mass-loss rates are of the order of $10^{-10}$ to $10^{-7}~{\rm M}_\odot ~{\rm yr}^{-1}$ and wind terminal velocities $\simeq 400$ km s$^{-1}$. The parameters in our models were chosen to have values compatible with these. @paper2 considered models with the density at the base of the coronal wind spanning two orders of magnitude, which resulted in mass-loss rates ranging between $\sim 10^{-9}$ and $8 \times 10^{-8}$ M$_\odot$ yr$^{-1}$. In the present paper, the main goal of §\[sec.results\] was to analyze the effects of the tilt angle on the wind. We therefore did not explore several values of base density, and our models present mass-loss rates of about $ 9 \times 10^{-9}$ M$_\odot$ yr$^{-1}$. From Figures \[fig.radial-velocity\], \[fig.gamma\], \[fig.Prot\], and \[fig.beta\] we note that the wind terminal velocities obtained are $\simeq 350$ to $500$ km s$^{-1}$. @2007ApJ...657..897K suggest that, if the winds of WTTSs are simply a scaled-up version of the solar wind, WTTS winds should then be stronger than those of CTTSs, because the X-ray emission from WTTSs is stronger than that from CTTSs. However, the winds of CTTSs are believed to be powered by accretion, which could be the reason why the wind traced by HeI $\lambda$10830 is not detected in WTTSs [@2007ApJ...657..897K]. We would expect that when accretion ceases, the wind should become weaker. There is clearly a need for more observational constraints on the winds of WTTSs; with those in hand, we would be able to better constrain the parameters of our models. ON THE PLANET-WIND RECONNECTION {#sec.reconnection} =============================== The search for planets around young stars is ongoing, with two recent detections of massive giant planets: one around a $5$ Myr-old star [@2010arXiv1006.3070L] and one around a $12~$Myr-old star [@2010arXiv1006.3314L]. 
The stellar wind is expected to directly influence the planet and its atmosphere, e.g., by changing the configuration of the planet’s magnetosphere, producing nonthermal planetary magnetospheric radio emission, etc. So far, the few theoretical works investigating the influence of the stellar wind on the magnetospheres of planets were based on simplified treatments of the stellar wind, e.g., using the Parker wind model , the Weber-Davis wind model , assuming a solar-type stellar wind , or adopting scalings for the mass-loss rates and wind terminal velocities . The consideration of a realistic wind is crucial to determine how the interaction between the stellar wind and the magnetosphere of an extrasolar planet occurs. Our 3D, time-dependent MHD simulations of the stellar winds of WTTSs provide a powerful tool to investigate the planet-wind interaction, as they allow us to consider the effects of a more realistic wind and obtain key insights on the detectability of radio emission from extrasolar planets. In this section, we estimate the reconnection rate and the power released when reconnection between a close-in magnetized giant planet and the stellar wind takes place. This estimate is performed for four different wind simulations: T01, T02, T03, and T04, where the misalignment angle between the stellar rotation axis and its magnetic moment vector is $\theta_t=0^{\rm o}$, $10^{\rm o}$, $20^{\rm o}$, and $30^{\rm o}$, respectively. The Reconnection Rate --------------------- In this section, we estimate the rate of reconnection between the stellar and planetary magnetic field lines. This is necessary to evaluate the planetary radio emission (§\[subsec.radio\]). The magnetic field of the stellar wind has three components, $B_x$, $B_y$, and $B_z$. $B_x$ and $B_y$ are parallel to the stellar rotational equatorial plane, while $B_z$ is perpendicular to this plane. 
Considering a planet whose orbital plane coincides with the rotational equatorial plane of the star, and considering that the planet’s magnetic dipole moment is aligned in the $-z$-direction, magnetic field line reconnection can occur when the magnetic field of the stellar wind and the magnetic field of the planet are oriented anti-parallel to each other. This results in a transfer of energy from the stellar wind to the planet’s magnetosphere. The reconnection rate is the amount of magnetic flux that reconnects per unit time per unit length of the reconnection line or, equivalently, the strength of the electric field parallel to the reconnection merging line [e.g., @2000mare.book.....P]. The reconnection line refers to the line where magnetic field lines reconnect. The rate at which reconnection between two different plasmas happens depends, among other things, on the velocity of the plasma incident on the reconnection site [@1973ApJ...180..247P]. In the idealized case when the magnetic fields of two identical plasmas are exactly anti-parallel, the reconnection rate (or the generated electric field at the reconnection site) can be estimated as [e.g., @2008JGRA..11307210B] $$\label{eq.recsimple} E \simeq \frac{v_{\rm in} B}{c} \, ,$$ where $v_{\rm in} = C v_A$ is the inflow velocity of the plasma at the reconnection site, $v_A$ and $B$ are the Alfvén speed and magnetic field of the ambient plasma, respectively, and $c$ is the speed of light. The factor $C=l/L$ is the dissipation region aspect ratio, i.e., a property of the geometry of the reconnection region, which has a characteristic width $l$ and a characteristic length $L$. When reconnection occurs between two plasmas with different characteristics, Eq. 
(\[eq.recsimple\]) becomes more complex, taking into account the different magnetic field intensities and Alfvén speeds of the two plasmas [@2007PhPl...14j2114C; @2008JGRA..11307210B]: $$\label{eq.reccomplex} E \simeq C \frac{2}{c} \left( \frac{B_{z,1}^3 B_{z,2}^3}{4 \pi (B_{z,2} \rho_1 + B_{z,1} \rho_2)(B_{z,1} + B_{z,2})} \right) ^{1/2}\, ,$$ where the indices “1” and “2” distinguish the two plasmas at the site of the interaction, $B_{z,1}$ and $B_{z,2}$ are oriented anti-parallel to each other, and $\rho$ is the mass density. Planetary Radio Emission ------------------------ The solar wind interaction with the magnetic planets of the Solar System (Earth, Jupiter, Saturn, Uranus, and Neptune) accelerates electrons that propagate along the planets’ magnetic field lines, producing electron cyclotron radiation at radio wavelengths [@1998JGR...10320159Z]. By analogy with the magnetic planets of the Solar System, predictions have been made that magnetized extra-solar planets should produce cyclotron maser emission . The planetary radio emission depends on the planet’s magnetic field intensity[^2] and on the stellar wind power: in general, the stronger the stellar wind, the more radio-luminous the planet should be. So far, such radio signatures from stars hosting hot Jupiters have not been detected, and one possible reason may be the lack of instrumental sensitivity in the appropriate frequency range of the observations [@2000ApJ...545.1058B]. Theoretical estimates of the radio flux emitted by extrasolar planets carry a large uncertainty because the stellar wind properties are poorly constrained: @1999JGR...10414025F showed that a variation by a factor of $2$ in the wind velocity may change the level of radio power emission by a factor of $100$, with more recent works suggesting that the radio power emission is proportional to the incident wind power . 
Therefore, the potential observation of radio emission from extrasolar planets strongly depends on the nature of the stellar wind. Based on our simulated stellar winds (cases T01 to T04), we estimate the planet’s radio power. The electric field generated in the interaction is calculated from Eq. (\[eq.reccomplex\]), where plasma $1$ refers to the characteristics of the planet’s magnetosphere, while plasma $2$ refers to the local characteristics of the impacting stellar wind. Initially, we do not consider pile-up of the stellar wind magnetic field in the magnetosheath of the planet, but will do so in §\[subsec.pile\]. To derive the characteristics of the planet’s magnetospheric plasma (plasma $1$), we consider a hot Jupiter with a dipolar magnetic field aligned in the $-z$-direction and a magnetic intensity at the equator of $B_p=50~$G. The density of the planetary plasma $\rho_1$ is taken to be negligible, such that $\rho_2 B_{z,1} \gg \rho_1 B_{z,2}$ in Eq. (\[eq.reccomplex\])[^3]. We assume that the planet has the same radius as Jupiter, $R_p=R_{\rm Jup}\sim 0.05~r_0$. As the planet is not included in our simulations, we next calculate analytically the area of the planet’s magnetosphere that will interact with the stellar wind. 
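As a concrete illustration, Eq. (\[eq.reccomplex\]) can be evaluated numerically. The sketch below (in Gaussian cgs units) uses illustrative field and density values that are assumptions for this example only, not outputs of simulations T01–T04; it also checks that the asymmetric expression reduces to Eq. (\[eq.recsimple\]) with $v_{\rm in}=C v_A$ when the two plasmas are identical:

```python
import math

C_LIGHT = 3.0e10  # speed of light [cm/s]

def reconnection_E(Bz1, Bz2, rho1, rho2, C=0.1):
    """Asymmetric reconnection electric field of Eq. (eq.reccomplex), cgs units.

    Bz1, Bz2 in G; rho1, rho2 in g/cm^3; returns E in statvolt/cm."""
    num = Bz1**3 * Bz2**3
    den = 4.0 * math.pi * (Bz2 * rho1 + Bz1 * rho2) * (Bz1 + Bz2)
    return C * (2.0 / C_LIGHT) * math.sqrt(num / den)

# Consistency check: for two identical plasmas the expression reduces to
# the symmetric rate of Eq. (eq.recsimple), E = C v_A B / c.
B, rho = 0.05, 2.3e-15                    # illustrative values (assumed)
v_A = B / math.sqrt(4.0 * math.pi * rho)  # Alfven speed [cm/s]
E_sym = 0.1 * v_A * B / C_LIGHT
assert abs(reconnection_E(B, B, rho, rho) - E_sym) / E_sym < 1e-12

# Planet (plasma 1, rho1 ~ 0) against an illustrative local wind (plasma 2):
E = reconnection_E(Bz1=4.8, Bz2=0.05, rho1=0.0, rho2=2.3e-15)
assert E > 0.0
```

In the $\rho_1 \to 0$ limit used in the text, the rate is controlled by the wind density $\rho_2$ and the two field strengths.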
### The Size of the Planet’s Magnetosphere The interaction of the planet’s magnetosphere with the wind takes place at a distance $r_M$ from the center of the planet, where there is balance between the wind total pressure and the magnetic pressure of the planet: $$\label{eq.equilibrium} \frac{\rho_2 u_2^2}{2} + \frac{B_{\parallel,2}^2}{4\pi}= \frac{B_{z,1}^2}{4\pi}\, ,$$ where $B_{z,1}$ is the $z$-component of the magnetic field of the planet at the equatorial plane, $$\label{eq.Bzeq} B_{z,1}= \frac{B_p R_p^3}{r_M^3} \, ,$$ $B_{\parallel,2}$ is the component of the stellar magnetic field parallel to the boundary layer, and $u_2=(u_\varphi-u_K)$ is the relative velocity between the wind azimuthal velocity $u_\varphi$ and the circular Keplerian velocity of the planet, $u_K = (GM_\star/r)^{1/2}$. Substituting Eq. (\[eq.Bzeq\]) into (\[eq.equilibrium\]), we have $$\label{eq.rmagnetopause} \frac{r_M}{R_p} = \left[ \frac{B_p^2/2\pi}{ (\rho_2 u_2^2 + B_{\parallel,2}^2/2\pi)} \right]^{1/6} \, .$$ The size of the planet’s magnetosphere $r_M$ depends on the local characteristics of the stellar wind, on the orbital radius of the planet (through $u_K$), and on the planetary magnetic field. Figure \[fig.magnetosphere\]a presents the size of the planet’s magnetosphere if the stellar wind is given by cases T01 (aligned case, $\theta_t=0^{\rm o}$), T02 ($\theta_t=10^{\rm o}$), T03 ($\theta_t=20^{\rm o}$), and T04 ($\theta_t=30^{\rm o}$), calculated at $t=240~$h, for a range of planetary orbital radii up to $12~r_0 \sim 0.11~$AU. We note that the planet’s magnetosphere becomes larger as the misalignment angle $\theta_t$ decreases (i.e., the highest values of $r_M$ are found for the aligned case). This is a result of a lower wind total pressure $\frac12(\rho_2 u_2^2 + B_{\parallel,2}^2/2\pi)$ as $\theta_t$ gets smaller. 
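Equation (\[eq.rmagnetopause\]) is simple to evaluate; the minimal cgs sketch below uses local wind values ($\rho_2$, $u_2$, $B_{\parallel,2}$) that are illustrative assumptions for this example, not outputs of our simulations:

```python
import math

def standoff_radius(Bp, rho2, u2, Bpar2):
    """Magnetospheric standoff distance r_M / R_p from Eq. (eq.rmagnetopause).

    Bp, Bpar2 in G; rho2 in g/cm^3; u2 in cm/s."""
    p_wind = rho2 * u2**2 + Bpar2**2 / (2.0 * math.pi)
    return ((Bp**2 / (2.0 * math.pi)) / p_wind) ** (1.0 / 6.0)

Bp = 50.0                                  # planetary equatorial field [G]
rho2, u2, Bpar2 = 2.3e-15, 4.0e7, 0.05     # assumed local wind values (cgs)
rm = standoff_radius(Bp, rho2, u2, Bpar2)  # ~2.2 planetary radii
Bz1 = Bp / rm**3                           # interacting field, Eq. (eq.Bzeq)

# The 1/6 exponent makes r_M only weakly sensitive to the wind pressure:
# doubling rho2 shrinks the magnetosphere by ~11% when the kinetic term dominates.
assert standoff_radius(Bp, 2.0 * rho2, u2, Bpar2) < rm
```

The weak $1/6$ dependence is why the orbit-to-orbit variations in $r_M$ quoted below stay at the $\sim 5$–$11\%$ level even though the incident wind changes substantially with rotational phase.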
For the range of orbital radii analyzed here, both the kinetic and magnetic terms of the wind total pressure make significant contributions, except at small orbital radii ($r \lesssim 3~r_0$), where the magnetic term dominates. We see that for $r \lesssim 2~r_0$, the magnetosphere of the planet vanishes, due to the large wind total pressure. Because in the misaligned cases the characteristics of the wind impacting on the planet change with the stellar rotational phase (see, for instance, Fig. \[fig.evolution\]), the radius of the planet’s magnetosphere is expected to vary if the stellar rotational period differs from the orbital period of the planet. Considering a planet located at an orbital radius of $r = 5~r_0 \simeq 0.05$ AU in a circular orbit, for case T02, for instance, the variation in $r_M$ from its maximum to its minimum possible value is around $5\%$, while for cases T03 and T04 this variation is around $8\%$ and $11\%$, respectively. Figure \[fig.magnetosphere\]b shows $r_M$ as a function of the stellar rotational phase for a planet located at $r = 5~r_0 \simeq 0.05$ AU. ![The magnetospheric radius $r_M$ \[Eq. (\[eq.rmagnetopause\])\] of a planet orbiting a star as given by simulations T01, T02, T03, and T04. (a) Considering a given phase of the stellar rotation ($t=240~$h), and planet located in the $x$-axis. (b) Considering an orbital radius of $r = 5~r_0 \simeq 0.05$ AU, as a function of stellar rotational phase (lines presented are merely symbol connectors and do not represent actual values of $r_M$).\[fig.magnetosphere\]](f13a.eps "fig:"){height="7cm"}\ ![The magnetospheric radius $r_M$ \[Eq. (\[eq.rmagnetopause\])\] of a planet orbiting a star as given by simulations T01, T02, T03, and T04. (a) Considering a given phase of the stellar rotation ($t=240~$h), and planet located in the $x$-axis. 
(b) Considering an orbital radius of $r = 5~r_0 \simeq 0.05$ AU, as a function of stellar rotational phase (lines presented are merely symbol connectors and do not represent actual values of $r_M$).\[fig.magnetosphere\]](f13b.eps "fig:"){height="7cm"}\ Knowing the value of $r_M$, we can then calculate from Eq. (\[eq.Bzeq\]) the value of the magnetospheric magnetic field of the planet, $B_{z,1}$, that will interact with the stellar wind. Because the size of the planet’s magnetosphere can increase or decrease depending on the incident wind, $B_{z,1}$ will vary along the planetary orbit. The values of $B_{z,1}$ range between $6.1$ and $7.0$ G for case T02, $6.8$ and $8.9$ G for case T03, and $7.4$ and $10.4$ G for case T04, for a planet at an orbital radius $r = 5~r_0 \simeq 0.05$ AU. Thus, for instance, for $\theta_t=30^{\rm o}$, the interacting planetary magnetic field varies by up to a factor of $1.4$ along the planetary orbit. The more misaligned the stellar rotation axis is with respect to the stellar magnetic moment vector, the more variation is expected in the magnetospheric radius of the planet and in the interacting planetary magnetic field $B_{z,1}$. ### Estimate of the Planetary Radio Emission {#subsec.radio} The power released in the reconnection event, $P_{\rm rec}$, can be decomposed into a power released from the dissipation of the kinetic energy carried by the stellar wind, $P_k$, and a power released from the magnetic energy of the wind, $P_B$: $$\label{eq.pwrrec} P_{\rm rec} = a P_k + b P_B\, ,$$ where $a$ and $b$ are efficiency ratios, and we assume that $P_{\rm rec}$ depends linearly on $P_k$ and $P_B$. Observations of the Solar System can be explained by either {$a = 1\times 10^{-5}$, $b = 0$} or {$a=0$, $b=2\times 10^{-3}$} . In fact, argues that it is not possible to decide which incident power actually drives the radio power observed from the magnetic planets of the Solar System. 
If both incident powers contribute to the radio emission, the coefficients $a$ and $b$ need to satisfy the relation $a/(1\times 10^{-5}) + b/(2\times 10^{-3}) =1$ in order to match the observed radio power. However, we do not know whether $a$ and $b$ should remain the same in other planetary systems. Lacking a better estimate, we adopt $a$ and $b$ as in the Solar System, i.e., either $P_{\rm rec} = 1\times 10^{-5} P_k$ or $P_{\rm rec} = 2\times 10^{-3} P_B$. The magnetic power $P_B$ can be estimated as the Poynting flux of the stellar wind impacting on the planetary magnetospheric cross-section $S$: $$\label{eq.pwrB} P_B = \int c \frac{{\bf E}\times {\bf B}}{4\pi} \cdot {\rm d}{\bf S} \simeq c\frac{E B_{z,2}}{4\pi} \pi r_M^2\, ,$$ where the electric field $E$ is given by Eq. (\[eq.reccomplex\]). The constant $C$ in Eq. (\[eq.reccomplex\]) is assumed to be $C =l/L \sim 0.1$, as derived by different analytical and numerical methods [for a discussion, see @2008JGRA..11307210B]. This means we are assuming that the reconnection happens in a region with a characteristic width $l = 0.1 L$, where $L$ is the radius of the planet (the characteristic length). The kinetic power $P_k$ is $$\label{eq.pwrk} P_k = \int p_{\rm ram} {\bf u} \cdot {\rm d}{\bf S} \simeq \rho_2 u_2^3 \pi r_M^2\, ,$$ where $p_{\rm ram} = \rho_2 u_2^2 $ is the wind kinetic ram pressure. According to our stellar wind models, throughout the region of investigation ($r\lesssim 12~r_0\simeq 0.11$ AU), we find that $P_k> P_B$ , except near the orbital radius where the planet co-rotates with the stellar wind and, thus, $u_2=(u_\varphi-u_K)\simeq 0$ and $P_k\simeq 0$. Figure \[fig.rec-power\] shows the estimated powers released for the wind cases T01 ($\theta_t=0^{\rm o}$, solid lines) and T04 ($\theta_t=30^{\rm o}$, dot-dashed lines) as a function of planetary orbital radius for a single rotation phase at $t=240~$h. 
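The two incident powers of Eqs. (\[eq.pwrB\]) and (\[eq.pwrk\]) can be compared directly. In the cgs sketch below, the local wind values, the reconnection field $E$, and the magnetospheric radius are illustrative assumptions, not outputs of simulations T01–T04:

```python
import math

C_LIGHT = 3.0e10   # speed of light [cm/s]
R_JUP = 7.15e9     # Jupiter radius [cm]

def magnetic_power(E, Bz2, r_M):
    """Incident Poynting power of Eq. (eq.pwrB): P_B ~ c E Bz2 r_M^2 / 4 [erg/s]."""
    return C_LIGHT * E * Bz2 / (4.0 * math.pi) * math.pi * r_M**2

def kinetic_power(rho2, u2, r_M):
    """Incident kinetic power of Eq. (eq.pwrk): P_k ~ rho2 u2^3 pi r_M^2 [erg/s]."""
    return rho2 * u2**3 * math.pi * r_M**2

# Illustrative local wind and magnetosphere (assumed values, cgs):
rho2, u2, Bz2 = 2.3e-15, 4.0e7, 0.05
r_M = 2.2 * R_JUP
E = 1.0e-6            # assumed reconnection field [statvolt/cm]

P_B = magnetic_power(E, Bz2, r_M)
P_k = kinetic_power(rho2, u2, r_M)
assert P_k > P_B      # kinetic power dominates, as found in the text

# Solar-System-calibrated efficiencies: either P_rec = a P_k or P_rec = b P_B.
P_rec_k = 1.0e-5 * P_k
P_rec_B = 2.0e-3 * P_B
```

Away from the co-rotation radius, where $u_2 \simeq 0$ suppresses $P_k$, the kinetic channel carries more incident power for these assumed values.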
![Estimated dissipated power $P_{\rm rec}$ due to the interaction of a hot-Jupiter with the stellar wind for cases T01 (aligned, solid lines) and T04 ($\theta_t=30^{\rm o}$, dot-dashed lines) as a function of orbital radius of the planet, considered to lie in the $x$-axis. Red curves consider the power released from the incident kinetic power ($P_{\rm rec} = a P_k$), while black curves are from the incident magnetic power ($P_{\rm rec} = b P_B$). The curves refer to the solutions for the stellar wind at $t=240$ h. \[fig.rec-power\]](f14.eps){height="7cm"} Several phases of rotation of the star are considered in Figure \[fig.power\], which shows the estimated power released for the wind cases T02 ($\theta_t=10^{\rm o}$, Fig. \[fig.power\]a), T03 ($\theta_t=20^{\rm o}$, Fig. \[fig.power\]b), and T04 ($\theta_t=30^{\rm o}$, Fig. \[fig.power\]c). Figure \[fig.power\] presents the power released from the incident magnetic power of the wind ($P_{\rm rec} = b P_B$). The shaded area lies between the maximum and minimum power that can be released in the interaction, depending on the characteristics of the incident wind (i.e., the phase of rotation of the star). We see that the maximum emitted power becomes progressively higher as $\theta_t$ increases, while the minimum released power becomes progressively lower. This causes the shaded area to be larger for case T04 than for cases T02 or T03. For a planet at orbital radius $r=5~r_0\simeq 0.05$ AU, the ratio between the maximum and minimum released power due to the variation in the incident wind is a factor of $1.3$ for case T02 ($\theta_t=10^{\rm o}$), $2.2$ for case T03 ($\theta_t=20^{\rm o}$), and $3.7$ for case T04 ($\theta_t=30^{\rm o}$). 
Figure \[fig.power\] also shows the released power for the aligned case (red solid line), indicating that an inclination between the rotation axis of the star and the surface magnetic moment vector can contribute to an increase in the emitted power, depending on the phase of rotation of the star. ![Estimated dissipated power $P_{\rm rec}= b P_B$ due to the interaction of a hot-Jupiter with the stellar wind for cases (a) T02 ($\theta_t=10^{\rm o}$), (b) T03 ($\theta_t=20^{\rm o}$), and (c) T04 ($\theta_t=30^{\rm o}$). The shaded area lies between maximum and minimum power that can be released in the interaction, according to the rotational phase of the star. The red curve is for the aligned case T01 ($\theta_t=0^{\rm o}$). The emitted radio power, assuming a conversion given by Eq. (\[eq.pwrrad\]), is shown in the vertical axes on the right of each plot. \[fig.power\]](f15a.eps "fig:"){height="7cm"}\ ![Estimated dissipated power $P_{\rm rec}= b P_B$ due to the interaction of a hot-Jupiter with the stellar wind for cases (a) T02 ($\theta_t=10^{\rm o}$), (b) T03 ($\theta_t=20^{\rm o}$), and (c) T04 ($\theta_t=30^{\rm o}$). The shaded area lies between maximum and minimum power that can be released in the interaction, according to the rotational phase of the star. The red curve is for the aligned case T01 ($\theta_t=0^{\rm o}$). The emitted radio power, assuming a conversion given by Eq. (\[eq.pwrrad\]), is shown in the vertical axes on the right of each plot. \[fig.power\]](f15b.eps "fig:"){height="7cm"}\ ![Estimated dissipated power $P_{\rm rec}= b P_B$ due to the interaction of a hot-Jupiter with the stellar wind for cases (a) T02 ($\theta_t=10^{\rm o}$), (b) T03 ($\theta_t=20^{\rm o}$), and (c) T04 ($\theta_t=30^{\rm o}$). The shaded area lies between maximum and minimum power that can be released in the interaction, according to the rotational phase of the star. The red curve is for the aligned case T01 ($\theta_t=0^{\rm o}$). 
The emitted radio power, assuming a conversion given by Eq. (\[eq.pwrrad\]), is shown in the vertical axes on the right of each plot. \[fig.power\]](f15c.eps){height="7cm"} Part of this released energy can be used to accelerate electrons, generating radio emission: $$\label{eq.pwrrad} P_{\rm radio} =\eta P_{\rm rec} \, .$$ The efficiency $\eta$ of the conversion of $P_{\rm rec}$ into radio emission $P_{\rm radio}$ depends on the details of the physical processes that generate the radio emission (e.g., on the cyclotron-maser instability). Assuming $\eta=10\%$ , the radio power emitted from a Jupiter-like planet orbiting at a distance $r=5~r_0 \simeq0.05$ AU is $P_{\rm radio}\sim 10^{15}$ W. The radio power is also shown in Fig. \[fig.power\] (vertical axes on the right). A time-dependent radio emission has also been estimated by @2010MNRAS.tmp..735F. For the magnetic planets of the Solar System, $P_{\rm radio}\sim 10^{6.5}$ W (Neptune) to $\sim 10^{10.5}$ W (Jupiter), which means that, for our assumed giant planet orbiting our fictitious star, the radio power released is almost $5$ orders of magnitude larger than for Jupiter. This result suggests that stellar winds from pre-main-sequence young stars have the potential to generate stronger planetary radio emission than the solar wind. Our results are in accordance with previous works developed in the framework of stellar winds of stars at the early main-sequence phase . Table \[tab.results\] presents a summary of the results obtained for cases T01 to T04. The properties of the wind and of the reconnection site are quoted at $r=5~r_0$. ### Pile-up of the Stellar Wind Magnetic Field {#subsec.pile} Around the Earth, the magnetic field of the solar wind piles up in the magnetosheath. This enhances the magnetic field strength at the reconnection site and, ultimately, increases the electric field $E$. 
By analogy, here we consider what happens when magnetic field pile-up in the planet’s magnetosheath is taken into account. If the stellar wind is supersonic, a bow shock forms and the wind is deflected around the planetary magnetosphere. For cases T01 to T04, the wind becomes supersonic at $r \sim 4~r_0$ in the rotational equatorial plane. Downstream of the shock, the field strength $B_{\parallel,2}$, the velocity $u_2$, and the density $\rho_2$ that appear in Eqs. (\[eq.reccomplex\]), (\[eq.rmagnetopause\]), and (\[eq.pwrB\]) are given by shock conditions instead of arising directly from our stellar wind model. In this case, we use the Rankine-Hugoniot jump conditions to determine the magnetic field intensity $B_{\parallel,2}^{(s)}$, density $\rho_2^{(s)}$, and velocity $u_2^{(s)}$ in the magnetosheath [@opher-book-chapter]. Hence, except for Eqs. (\[eq.reccomplex\]), (\[eq.rmagnetopause\]), and (\[eq.pwrB\]), the equations presented in this section remain the same. Assuming a perpendicular shock with strength $\delta=\rho_2^{(s)}/\rho_2=4$ (i.e., the density in the magnetosheath is four times the value of the stellar wind density), the Rankine-Hugoniot jump conditions state that the magnetic field in the magnetosheath is $\sim 4$ times higher than the local value of the wind magnetic field (i.e., $B_{\parallel,2}^{(s)} \sim 4B_{\parallel,2}$), while the velocity drops by a factor of $\sim 4$ (i.e., $u_2^{(s)} \sim u_2/4$). As a consequence, the size $r_M^{(s)}$ of the planet’s magnetosphere \[Eq. (\[eq.rmagnetopause\])\] diminishes with respect to the value $r_M$ obtained when pile-up of the wind field lines is not considered. Within $r\lesssim 12~r_0$, the ratio is $r_M^{(s)}/r_M\gtrsim 0.68$ for case T01, and the reduction becomes slightly more significant for higher $\theta_t$, with $r_M^{(s)}/r_M\gtrsim 0.63$ for case T04. Despite the decrease in $r_M^{(s)}$, the increase in the reconnection rate $E^{(s)}$ \[Eq. 
(\[eq.reccomplex\])\] is such that the dissipated power $P_{\rm rec}^{(s)}=b P_B^{(s)}$ \[Eq. (\[eq.pwrB\])\] increases compared to the case when pile-up of the wind field lines is not considered. For case T01, $P_{\rm rec}^{(s)}$ increases to $4.8$ – $6.8$ times the values of $P_{\rm rec}$ presented in Fig. \[fig.power\], depending on the orbital radius of the planet. For case T02, this factor ranges between $4.5$ – $7.2$; it is $4.5$ – $8.4$ for case T03 and $4.5$ – $10.5$ for case T04, where these ranges now depend on both the location of the planet and the rotational phase of the star. This shows that, for a supersonic stellar wind, the consideration of a perpendicular shock can increase the dissipated power due to the interaction of a hot Jupiter with the stellar wind. On the Detectability of Planetary Radio Emission ------------------------------------------------ The detection of planetary radio emission depends on several factors, such as the distance $d$ to the extra-solar system, whether the conical beam of the cyclotron emission is directed towards us, and the emission bandwidth $\Delta f$ [@1999JGR...10414025F]. The radio flux that we detect on Earth is given by $$\label{eq.fluxrad} \Phi_{\rm radio} = \frac{P_{\rm radio}}{d^2 w {\Delta f}}\, ,$$ where $w$ is the solid angle of the conical emission beam. The stellar wind ultimately controls the incident power on the planet, while the planet’s characteristics control the frequency of the cyclotron emission $f_c$, and thus the emission bandwidth, assumed to be $\Delta f = 0.5 f_c$ [@1999JGR...10414025F].[^4] For our fictitious planet, the assumed magnetic field at the pole is $100$ G (maximum field strength), which emits at (maximum) $f_c = 2.8 B = 280$ MHz ($B$ given in G and $f_c$ in MHz), with a bandwidth of $\Delta f = 140$ MHz. 
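The conversion from emitted power to detected flux in Eq. (\[eq.fluxrad\]) is sketched below in SI units; with $P_{\rm radio}=10^{15}$ W, $d=10$ pc, and $\Delta f = 140$ MHz it reproduces the $\sim 0.6$ mJy and $\sim 4$ mJy estimates quoted in the text:

```python
import math

PC = 3.086e16   # parsec [m]
JY = 1.0e-26    # jansky [W m^-2 Hz^-1]

def radio_flux_mJy(P_radio, d, w, delta_f):
    """Detected flux of Eq. (eq.fluxrad), Phi = P / (d^2 w delta_f), in mJy."""
    return P_radio / (d**2 * w * delta_f) / JY * 1.0e3

P_radio = 1.0e15      # [W], the estimate of the radio-emission subsection
d = 10.0 * PC         # [m]
delta_f = 140.0e6     # [Hz], i.e. 0.5 f_c for f_c = 280 MHz

phi_iso = radio_flux_mJy(P_radio, d, 4.0 * math.pi, delta_f)  # spherical: ~0.6 mJy
phi_cone = radio_flux_mJy(P_radio, d, 1.8, delta_f)           # 45-deg hollow cone: ~4 mJy
```

The choice of beam solid angle $w$ alone changes the detected flux by almost an order of magnitude between the spherical and hollow-cone geometries.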
If our star is at a distance $d\sim10$ pc, using the estimate $P_{\rm radio}\sim 10^{15}$ W obtained in §\[subsec.radio\], we find that the radio flux detected at Earth would be $\Phi_{\rm radio} \sim 7.5/w$ mJy. For spherical emission, $w=4\pi$, and the detected flux is $\Phi_{\rm radio} \sim 0.6$ mJy, while for a hollow-cone beamed emission with a conical aperture of $45^{\rm o}$, $w\sim1.8$ sr and $\Phi_{\rm radio} \sim 4$ mJy. One possible explanation for the so-far unsuccessful radio detections may be a relatively small planetary field, which would produce cyclotron emission in a low-frequency range where instrumental sensitivity is still poor [@2000ApJ...545.1058B]. Low-frequency detectors, such as LOFAR, might be able to detect emission of a few mJy in a frequency range of $10$ to $240$ MHz in the future . Our hypothetical planet, for instance, could be observable by LOFAR. Throughout §\[sec.reconnection\], we have assumed a planet with an equatorial magnetic field of $50$ G, which is about $6$ times larger than Jupiter’s magnetic field of $8$ G at the equator [or $\sim 16$ G at the pole, @1998JGR...10311929C]. If we assume a planetary magnetic field intensity like Jupiter’s, i.e., $B_p=8$ G at the equator, the power $P_{\rm rec}=bP_B$ released from the reconnection between planetary and stellar wind field lines at $r=5~r_0\simeq 0.05~$AU will be a factor of $2.7$ to $3.4$ smaller than the estimates performed in §\[subsec.radio\] for $B_p=50~$G. Also, because the planetary magnetic field is smaller, the magnetosphere of the planet will face a stronger wind pressure and will vanish for $r\lesssim 4~r_0$ (in this case, $r_M =R_p$). 
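The cyclotron frequency relation $f_c = 2.8\,B$ MHz ($B$ in G, evaluated at the polar field strength, twice the equatorial value for a dipole) fixes the bandwidth $\Delta f = 0.5 f_c$; a quick numerical check of the values used in the text:

```python
def cyclotron_MHz(B_gauss):
    """Electron cyclotron frequency: f_c = 2.8 B, with B in G and f_c in MHz."""
    return 2.8 * B_gauss

def bandwidth_MHz(B_pole_gauss):
    """Emission bandwidth Delta f = 0.5 f_c, evaluated at the polar field."""
    return 0.5 * cyclotron_MHz(B_pole_gauss)

# B_p = 50 G at the equator -> 100 G at the pole:
assert abs(cyclotron_MHz(100.0) - 280.0) < 1e-9   # MHz
assert abs(bandwidth_MHz(100.0) - 140.0) < 1e-9   # MHz

# Jupiter-like B_p = 8 G at the equator -> 16 G at the pole:
assert abs(bandwidth_MHz(16.0) - 22.4) < 1e-9     # MHz, as used for B_p = 8 G
```

Because $\Delta f$ enters the denominator of Eq. (\[eq.fluxrad\]), a weaker planetary field narrows the bandwidth and partially offsets the reduction in emitted power, as discussed next.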
For a value of $P_{\rm rec} \simeq 3\times 10^{22}$ erg s$^{-1}$, and adopting the same efficiency $\eta =10\%$ for the conversion of the released energy into radio power, the radio flux of the planet arriving at Earth (adopting $d=10$ pc) would be $\Phi_{\rm radio}=14/w~$mJy, at a bandwidth of ${\Delta f}=22.4$ MHz, for a given solid angle $w$ of the conical emission beam. Compared to the $B_p=50$ G case, the drop in the radio power caused by the smaller planetary magnetic field ($B_p=8$ G) is more than compensated by the smaller ${\Delta f}$, leading to a radio flux that is about twice the value obtained for $B_p=50$ G. THE INFLUENCE OF THE WIND ON PLANET MIGRATION {#sec.migration} ============================================= In @paper2, we investigated the action of magnetic torques from the stellar wind on the planet and whether such torques were able to remove a significant amount of the planet’s orbital angular momentum, causing planetary migration. The idea is that the wind exerts a pressure $p_{\rm tot } = \frac12(\rho_2 u_2^2 + B_{\parallel,2}^2/2\pi)$ on the area of the planet’s magnetosphere, $A_{\rm eff} = \pi r_M^2$. This force $p_{\rm tot} A_{\rm eff}$ will produce a torque $\frac{d L_p}{dt}$ at the planetary orbital radius $r$ [@2008MNRAS.389.1233L]: $$\label{eq.torque.planet2} \left| \frac{d L_p}{dt} \right| \simeq \ (p_{\rm tot} A_{\rm eff}) r \, ,$$ where $L_p =M_p v_K r$ is the orbital angular momentum of the planet and $M_p$ is the mass of the planet. A change in the planet’s angular momentum also leads to $$\label{eq.torque.planet} \left| \frac{d L_p}{dt} \right| \simeq \frac12 M_p v_K \frac{d r}{dt} \simeq \frac12 M_p v_K \frac{r}{\tau_w}\, ,$$ where $\tau_w$ is the time-scale for an appreciable radial motion of the planet [@1996Natur.380..606L]. From Eqs. 
(\[eq.torque.planet2\]) and (\[eq.torque.planet\]), we can estimate this time-scale: $$\label{eq.time-scale} {\tau_w} \simeq \frac12 \frac{M_p v_K}{p_{\rm tot} A_{\rm eff}} \, .$$ Within the range of parameters adopted in the simulations performed in @paper2, we showed that the stellar winds of WTTSs were not expected to have a strong influence on the migration of close-in giant planets. The winds analyzed in that case assumed that the axis of rotation of the star and the stellar magnetic dipole moment were aligned. One aspect that was not investigated in @paper2 is the effect of a magnetic moment tilted with respect to the rotation axis. Using the results of the simulations presented in this paper, we thus compare the time-scales $\tau_w$ obtained for the aligned case (T01) and the misaligned case T04 ($\theta_t=30^{\rm o}$). We consider a planet with the same mass and radius as Jupiter, and an equatorial magnetic field of $B_p=50$ G. This comparison is shown in Figure \[fig.mig\], where we note that an inclination of the stellar magnetic field acts to reduce $\tau_w$ compared to the aligned case. The intermediate tilt angles investigated (i.e., $\theta_t=10^{\rm o}$, $20^{\rm o}$) result in time-scales $\tau_w (r)$ that lie between the solid line for case T01 and the dot-dashed line for case T04. Figure \[fig.mig\] illustrates a single rotational phase of the star. For other phases of rotation of the star, we observe the same behavior: $\tau_w$ calculated for the misaligned cases is smaller than for the aligned case. ![Time-scale $\tau_w$ for an appreciable radial motion of the planet. Solid line is for case T01 (aligned case) and dot-dashed line for case T04 ($\theta_t=30^{\rm o}$). The planet is assumed to lie in the $x$-axis. The curves refer to the solutions for the stellar wind at $t=240$ h. 
\[fig.mig\]](f16.eps){height="7cm"} The last column of Table \[tab.results\] presents the time-scale $\tau_w$ calculated for cases T01 to T04 for a planet at $r=5~r_0$. Compared to the aligned case, where $\tau_w \simeq 100$ Myr, case T04 ($\theta_t = 30^{\rm o}$), for example, shows considerably smaller time-scales, ranging from $\tau_w \sim 40$ to $70$ Myr. We expect that larger misalignment angles, $\theta_t>30^{\rm o}$, or other effects, such as an increase in the wind coronal base density or magnetic field intensity [as discussed in @paper2], could further reduce $\tau_w$. The time-scales derived here seem to be larger (and therefore less important) than those estimated for other processes, such as the interaction of the protoplanet with the disk wherein it was formed [@2006RPPh...69..119P]. However, as suggested by , the removal of planetary orbital angular momentum by the stellar wind may be important for synchronizing stellar rotation with the orbital motion of their planets during the pre-main-sequence phase. CONCLUSION {#sec.conclusions} ========== We have presented simulations of magnetized stellar winds in which the surface stellar magnetic moment is tilted with respect to the axis of rotation of the star. Such a configuration requires a fully 3D approach, as the system is not axisymmetric. By adopting a dipolar surface distribution of magnetic flux, we showed that the interaction of magnetic field lines and the wind leads to a periodic final solution, with the same rotational period as the star. The final magnetic field configuration of the stellar magnetosphere presents an oscillatory pattern. By varying several parameters of the simulations, we explored the effects of the misalignment angle $\theta_t$, the stellar rotation period $P_0$, the heating index $\gamma$, and the plasma-$\beta$ parameter at the magnetic pole of the star, $\beta_0$, on the final periodic solution of our simulations. 
We showed that an increase in $\theta_t$ or a decrease in $P_0$ leads to a more accelerated wind. The same is true if $\gamma$ or $\beta_0$ is decreased, as already demonstrated in the axisymmetric cases of our previous paper [@paper2]. We estimated the power released in the interaction between a close-in giant planet and the stellar wind. If the planet and wind are magnetized, the interaction results in a reconnection process, which releases energy that can be used to accelerate electrons. These electrons propagate along the planet’s magnetic field, producing cyclotron radiation at radio wavelengths. This calculation is motivated by the radio emission observed from the magnetic planets of the Solar System (Earth, Jupiter, Saturn, Uranus, and Neptune). We showed that the intensity of the radio emission varies, as the wind impacting on the planet changes according to the stellar phase of rotation. If radio emission from a planet orbiting a star with misaligned rotation and magnetic axes is to be detected, the radio power should present a larger temporal variation for higher $\theta_t$. Our estimates show that the radio power emitted by the fictitious extra-solar planet orbiting our star at $\sim 0.05~$AU can be $5$ orders of magnitude larger than the non-thermal radio power emitted by Jupiter. This suggests that the stellar wind from a young star has the potential to generate strong planetary radio emission, which could be detected by LOFAR. As a final point, we answered the question posed in @paper2 of whether winds from misaligned stellar magnetospheres could have a significant effect on planetary migration. In @paper2, only winds from stars where the rotation axis and the surface dipolar magnetic moment were aligned were considered. Compared to the aligned case, we showed here that the time-scale for an appreciable radial motion of the planet is shorter for larger misalignment angles. The authors would like to thank E. 
Shkolnik for useful comments. We also appreciate the comments and suggestions from the anonymous referee, which greatly improved the manuscript. AAV acknowledges support from FAPESP (04-13846-6), CAPES (BEX4686/06-3), and an STFC grant. MO acknowledges support from National Science Foundation CAREER Grant ATM-0747654. VJ-P thanks CNPq (305905/2007-4). The simulations presented here were performed on the Columbia supercomputer at NASA Ames Research Center.

Bastian, T. S., Dulk, G. A., & Leblanc, Y. 2000, , 545, 1058
Belcher, J. W., & MacGregor, K. B. 1976, , 210, 498
Benz, A. O. 2008, Living Reviews in Solar Physics, 5, 1
Bogovalov, S. V. 1999, , 349, 1017
Borovsky, J. E., Hesse, M., Birn, J., & Kuznetsova, M. M. 2008, Journal of Geophysical Research (Space Physics), 113, 7210
Bouvier, J. 2009, EAS Publications Series, 39, 199
Bouvier, J., Cabrit, S., Fernandez, M., Martin, E. L., & Matthews, J. M. 1993, , 272, 176
Camenzind, M. 1990, Reviews in Modern Astronomy, 3, 234
Cassak, P. A., & Shay, M. A. 2007, Physics of Plasmas, 14, 102114
Chapman, S., & Ferraro, V. C. A. 1930, , 126, 129
Choi, P. I., & Herbst, W. 1996, , 111, 283
Cieza, L., & Baliber, N. 2007, , 671, 605
Cohen, O., et al. 2007, , 654, L163
Cohen, O., Drake, J. J., Kashyap, V. L., Saar, S. H., Sokolov, I. V., Manchester, W. B., Hansen, K. C., & Gombosi, T. I. 2009, , 704, L85
Connerney, J. E. P., Acu[ñ]{}a, M. H., Ness, N. F., & Satoh, T. 1998, , 103, 11929
Donati, J.-F., et al. 2007, , 380, 1297
Donati, J.-F., et al. 2008, , 386, 1234
Donati, J.-F., et al. 2010, , 402, 1426
Edwards, S., Fischer, W., Hillenbrand, L., & Kwan, J. 2006, , 646, 319
Edwards, S., Fischer, W., Kwan, J., Hillenbrand, L., & Dupree, A. K. 2003, , 599, L41
Fares, R., et al. 2010, , 735, in press
Farrell, W. M., Desch, M. D., & Zarka, P. 1999, , 104, 14025
Farrell, W. M., Lazio, T. J. W., Zarka, P., Bastian, T. J., Desch, M. D., & Ryabov, B. P. 2004, , 52, 1469
G[ó]{}mez de Castro, A. I., & Verdugo, E. 2007, , 654, L91
Gregory, S.
G., Matt, S. P., Donati, J.-F., & Jardine, M. 2008, , 389, 1839
Grie[ß]{}meier, J.-M., et al. 2004, , 425, 753
Grie[ß]{}meier, J.-M., Motschmann, U., Mann, G., & Rucker, H. O. 2005, , 437, 717
Grie[ß]{}meier, J.-M., Zarka, P., & Spreeuw, H. 2007a, , 475, 359
Grie[ß]{}meier, J.-M., Preusse, S., Khodachenko, M., Motschmann, U., Mann, G., & Rucker, H. O. 2007b, , 55, 618
Hansen, K. C., Ridley, A. J., Hospodarsky, G. B., Achilleos, N., Dougherty, M. K., Gombosi, T. I., & T[ó]{}th, G. 2005, , 32, 20
Herbst, W., Bailer-Jones, C. A. L., Mundt, R., Meisenheimer, K., & Wackermann, R. 2002, , 396, 513
Holzwarth, V. 2005, , 440, 411
Holzwarth, V., & Jardine, M. 2007, , 463, 11
Hussain, G. A. J., et al. 2009, , 398, 189
Ip, W.-H., Kopp, A., & Hu, J.-H. 2004, , 602, L53
Jardine, M., & Cameron, A. C. 2008, , 490, 843
Jardine, M. M., Gregory, S. G., & Donati, J.-F. 2008, , 386, 688
Johns-Krull, C. M. 2007, , 664, 975
Johns-Krull, C. M., Valenti, J. A., & Koresko, C. 1999, , 516, 900
Kalapotharakos, C., & Contopoulos, I. 2009, , 496, 495
Kawaler, S. D. 1988, , 333, 236
Koenigl, A. 1991, , 370, L39
Kraft, R. P. 1967, , 150, 551
Kuhi, L. V. 1964, , 140, 1409
Kwan, J., Edwards, S., & Fischer, W. 2007, , 657, 897
Lafreni[è]{}re, D., Jayawardhana, R., & van Kerkwijk, M. H. 2010, arXiv:1006.3070
Lagrange, A.-M., et al. 2010, arXiv:1006.3314
Lanza, A. F. 2010, , 512, A77
Lazarian, A., & Opher, M. 2009, , 703, 8
Lazio, T. J. W., Carmichael, S., Clark, J., Elkins, E., Gudmundsen, P., Mott, Z., Szwajkowski, M., & Hennig, L. A. 2010, , 139, 96
Lazio, T. J. W., Farrell, W. M., Dietrick, J., Greenlees, E., Hogan, E., Jones, C., & Hennig, L. A. 2004, , 612, 511
Li, J., & Wickramasinghe, D. T. 1998, , 300, 718
Lin, D. N. C., Bodenheimer, P., & Richardson, D. C. 1996, , 380, 606
Linde, T. J., Gombosi, T. I., Roe, P. L., Powell, K. G., & Dezeeuw, D. L. 1998, , 103, 1889
Long, M., Romanova, M. M., & Lovelace, R. V. E. 2008, , 386, 1274
Lovelace, R. V. E., Romanova, M. M., & Barnard, A. W.
2008, , 389, 1233
Lugaz, N., Manchester, W. B., IV, & Gombosi, T. I. 2005, , 627, 1019
Manchester, W. B., Gombosi, T. I., Roussev, I., De Zeeuw, D. L., Sokolov, I. V., Powell, K. G., T[ó]{}th, G., & Opher, M. 2004, Journal of Geophysical Research (Space Physics), 109, 1102
Marilli, E., et al. 2007, , 463, 1081
Mestel, L., & Spruit, H. C. 1987, , 226, 57
Opher, M. 2010, Chap. 7 in: Heliophysics: Space Storms and Radiation: Causes and Effects, ed. C. J. Schrijver & G. L. Siscoe (Cambridge: Cambridge Univ. Press)
Opher, M., Liewer, P. C., Gombosi, T. I., Manchester, W., DeZeeuw, D. L., Sokolov, I., & Toth, G. 2003, , 591, L61
Opher, M., Stone, E. C., & Liewer, P. C. 2006, , 640, L71
Opher, M., Stone, E. C., & Gombosi, T. I. 2007, Science, 316, 875
Papaloizou, J. C. B., & Terquem, C. 2006, Reports on Progress in Physics, 69, 119
Parker, E. N. 1958, , 128, 664
Parker, E. N. 1973, , 180, 247
Pneuman, G. W., & Kopp, R. A. 1971, , 18, 258
Powell, K. G., Roe, P. L., Linde, T. J., Gombosi, T. I., & de Zeeuw, D. L. 1999, Journal of Computational Physics, 154, 284
Preusse, S., Kopp, A., B[ü]{}chner, J., & Motschmann, U. 2005, , 434, 1191
Preusse, S., Kopp, A., B[ü]{}chner, J., & Motschmann, U. 2006, , 460, 317
Preusse, S., Kopp, A., B[ü]{}chner, J., & Motschmann, U. 2007, , 55, 589
Priest, E., & Forbes, T. 2000, Magnetic Reconnection (Cambridge: Cambridge University Press)
Rebull, L. M., Stauffer, J. R., Megeath, S. T., Hora, J. L., & Hartmann, L. 2006, , 646, 297
Ridley, A. J., de Zeeuw, D. L., Manchester, W. B., & Hansen, K. C. 2006, Advances in Space Research, 38, 263
Riley, P., Linker, J. A., Miki[ć]{}, Z., Lionello, R., Ledvina, S. A., & Luhmann, J. G. 2006, , 653, 1510
Romanova, M. M., & Lovelace, R. V. E. 2006, , 645, L73
Romanova, M. M., Ustyugova, G. V., Koldoba, A. V., Wick, J. V., & Lovelace, R. V. E. 2003, , 595, 1009
Romanova, M. M., Ustyugova, G. V., Koldoba, A. V., & Lovelace, R. V. E. 2004, , 610, 920
Roussev, I. I., et al. 2003, , 595, L57
Shkolnik, E., Walker, G. A.
H., Bohlender, D. A., Gu, P.-G., & Kürster, M. 2005, , 622, 1075
Shkolnik, E., Bohlender, D. A., Walker, G. A. H., & Collier Cameron, A. 2008, , 676, 628
Skelly, M. B., Donati, J.-F., Bouvier, J., Grankin, K. N., Unruh, Y. C., Artemenko, S. A., & Petrov, P. 2010, , 403, 159
Spitkovsky, A. 2006, , 648, L51
Stevens, I. R. 2005, , 356, 1053
T[ó]{}th, G., Kov[á]{}cs, D., Hansen, K. C., & Gombosi, T. I. 2004, Journal of Geophysical Research (Space Physics), 109, 11210
Townsend, R. H. D., Owocki, S. P., & Ud-Doula, A. 2007, , 382, 139
Valenti, J. A., & Johns-Krull, C. M. 2004, , 292, 619
Vidotto, A. A., Opher, M., Jatenco-Pereira, V., & Gombosi, T. I. 2009a, , 703, 1734
Vidotto, A. A., Opher, M., Jatenco-Pereira, V., & Gombosi, T. I. 2009b, , 699, 441
Weber, E. J., & Davis, L., Jr. 1967, , 148, 217
Yang, H., Johns-Krull, C. M., & Valenti, J. A. 2008, , 136, 2286
Zarka, P. 1998, , 103, 20159
Zarka, P. 2007, , 55, 598
Zarka, P., Treumann, R. A., Ryabov, B. P., & Ryabov, V. B. 2001, , 277, 293

[c c c c c c c c c c c]{}
T01 & $ 1.13 $ & $ 52 $ & $ 9.45 $ & $ 3.30 $ & $ 5.63 $ & $ 1.08 $ & $ 2.09 $ & $ 1.16 $ & $ 1.16 $ & $ 99 $\
T02 & $ 0.93-1.10 $ & $ 47-56 $ & $ 8.44-9.66 $ & $ 3.33-3.78 $ & $ 6.07-7.00 $ & $ 1.08-1.34 $ & $ 1.95-2.03 $ & $ 0.61-1.26 $ & $ 1.09-1.42 $ & $ 74-89 $\
T03 & $ 0.62-1.07 $ & $ 32-59 $ & $ 6.59-10.4 $ & $ 3.25-4.50 $ & $ 6.80-8.88 $ & $ 1.05-1.91 $ & $ 1.79-1.96 $ & $ 0.11-1.18 $ & $ 0.92-2.05 $ & $ 54-76 $\
T04 & $ 0.39-1.07 $ & $ 14-64 $ & $ 5.03-10.9 $ & $ 2.84-4.62 $ & $ 7.37-10.4 $ & $ 0.91-2.36 $ & $ 1.70-1.90 $ & $ 0.0055-1.67 $ & $ 0.63-2.35 $ & $ 43-68 $

[^1]: suggest that a conductor planet moving relative to the stellar wind is also able to generate perturbations that can trigger the chromospheric modulations observed by @2005ApJ...622.1075S [@2008ApJ...676..628S], without requiring a magnetized planet.

[^2]: For predictions of planetary radio emission for a non-magnetized planet, see .
[^3]: In our simulations, for $r\lesssim 12~r_0$, the ratio $|B_{z,2}/B_{z,1}|$ ranges from $0.2$ to $9.3$. For example, for case T02 at $r\simeq 5~r_0 \simeq 0.05~$AU, $B_{z,1}\simeq 6.1$ G and $B_{z,2}\simeq -3.4$ G. As $B_{z,1}$ and $B_{z,2}$ have approximately the same order of magnitude, this condition implies that $\rho_1 \ll \rho_2 \sim 10^{-13}$ g cm$^{-3}$.

[^4]: Other authors adopt a higher value of $\Delta f \simeq 0.9 - 1.0 f_c$. If this is the case, the emission bandwidth we obtain would have to be multiplied by a factor of $1.8$–$2.0$, and the resulting radio flux divided by the same factor.
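The cyclotron emission frequency and the bandwidth trade-off mentioned in the footnotes can be checked numerically. The sketch below evaluates the electron cyclotron frequency $f_c = eB/(2\pi m_e c)$ (Gaussian units, $\approx 2.8$ MHz per gauss); the $\sim$14 G Jovian field value is used purely as an illustration, and the factor-of-2 flux rescaling follows from the ratio of the two bandwidth conventions:

```python
import math

def cyclotron_freq_mhz(b_gauss):
    """Electron cyclotron frequency f_c = e B / (2 pi m_e c) in Gaussian
    units, which evaluates to ~2.8 MHz per gauss of magnetic field."""
    e = 4.8032e-10      # electron charge [statC]
    m_e = 9.1094e-28    # electron mass [g]
    c = 2.9979e10       # speed of light [cm/s]
    return e * b_gauss / (2.0 * math.pi * m_e * c) / 1.0e6

# Illustrative: a Jupiter-like polar field of ~14 G gives f_c near 40 MHz,
# the observed high-frequency cutoff of Jovian decametric emission.
f_c = cyclotron_freq_mhz(14.0)

# Switching the assumed bandwidth from ~0.5 f_c to ~1.0 f_c spreads the same
# emitted power over twice the bandwidth, hence the factor ~2 in the flux.
flux_ratio = (1.0 * f_c) / (0.5 * f_c)
```

Since the emitted power is fixed, the flux density scales inversely with the adopted bandwidth, which is exactly the factor of $1.8$–$2.0$ quoted above.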
---
abstract: 'Thermal infrared photometry in the $L$- and $M''$-band and $L - M''$ colors of type-1 and type-2 active galactic nuclei (AGNs) are presented. After combining our observations with photometric data at similar wavelengths taken from the literature, we find that the excess of $L - M''$ colors of type-2 AGNs (37 sources, 50 data points) relative to type-1 AGNs (27 sources, 36 data points), due to dust extinction, is statistically detectable, but very small. We next investigate the $L - M''$ colors of type-2 AGNs by separating less dust-obscured type-2 AGNs and highly dust-obscured type-2 AGNs. In both cases, the $L - M''$ colors are similar to the intrinsic $L - M''$ color of unobscured AGNs, and the $L - M''$ color excess of the latter highly dust-obscured type-2 AGNs due to dust extinction is much smaller than that expected from the Galactic dust extinction curve. Contamination from starbursts and the time lag of flux variation are unlikely to explain this small $L - M''$ color excess, which is best explained if the dust extinction curve in the close vicinity of AGNs is fairly flat at 3–5 $\mu$m as a result of a size increase of the absorbing dust grains through coagulation.'
author:
- Masatoshi Imanishi
title: 'Thermal Infrared 3–5 $\mu$m Colors of Obscured and Unobscured Active Galactic Nuclei'
---

Introduction
============

According to the current unification paradigm for active galactic nuclei (AGNs), type-1 AGNs (which show broad optical emission lines) and type-2 AGNs (which do not) are intrinsically the same, but the nuclei of the latter class are obscured by dust that lies along our line of sight in dusty molecular tori close to the AGNs ([@ant93]). Estimation of the amount of dust along our line of sight in type-2 AGNs, and comparison with the amount in type-1 AGNs, is an important observational test of the unification paradigm.
A direct estimate of dust extinction toward highly luminous type-2 AGNs is necessary to answer the question “how common are highly luminous and highly dust-obscured AGNs (so-called type-2 quasars)?” ([@hal99]). X-ray spectroscopic observations of type-2 AGNs imply higher X-ray absorption columns than do observations of type-1 AGNs, supporting the unification paradigm (e.g., [@nan94; @smi96]). However, X-ray absorption is caused by both dust and gas. Estimating the amount of dust along our line of sight ($A_{\rm V}$) from X-ray absorption ($N_{\rm H}$) is therefore uncertain, since the $N_{\rm H}$/$A_{\rm V}$ ratios toward AGNs are found to vary by more than an order of magnitude ([@alo97]). For several reasons, the thermal infrared (3–5 $\mu$m) wavelength range is expected to be a powerful tool for estimating dust extinction toward AGNs. Firstly, flux attenuation in this band is caused purely by dust extinction, and the effects of dust extinction are wavelength dependent in the Galactic diffuse interstellar medium ([@rie85; @lut96]). Secondly, the absolute flux attenuation by dust extinction is smaller than at shorter wavelengths ([@rie85; @lut96]). Thirdly, extended stellar emission generally dominates over obscured AGN emission at $<$2 $\mu$m, whereas at $>$3 $\mu$m moderately luminous obscured AGNs show compact, AGN-related emission that dominates the observed fluxes (Alonso-Herrero et al. 1998, 2000, [@sim98a], but see Simpson, Ward, & Wall 2000). Finally, since the compact emission at 3–5 $\mu$m most likely originates in hot (600–1000 K) dust in the part of the dusty molecular torus very near to the AGN (close to the innermost dust sublimation region), the dust extinction toward the 3–5 $\mu$m emission region is almost the same as that toward the central engine itself.
Hence, by comparing observed continuum fluxes at more than one wavelength between 3 and 5 $\mu$m, we can estimate dust extinction toward obscured AGNs directly, up to high magnitudes of obscuration, without serious uncertainties in the subtraction of stellar emission. Some attempts to estimate dust extinction toward obscured AGNs have been made based on near-infrared 1–5 $\mu$m colors ([@sim98a; @sim99; @sim00]), but, given that only upper limits are available at 3–5 $\mu$m in most cases, the estimate depends heavily on data at $<$3 $\mu$m in the rest-frame, where stellar emission dominates the observed fluxes. We have conducted $L$ (3.5$\pm$0.3 $\mu$m) and $M'$ (4.7$\pm$0.1 $\mu$m) band photometry of type-1 and type-2 AGNs. The main aim is to investigate the $L - M'$ colors of a large number of type-1 and type-2 AGNs and to examine whether $L - M'$ colors are a good measure of the dust extinction toward AGNs. Throughout this paper, $H_{0}$ $=$ 75 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm M}$ = 0.3, and $\Omega_{\rm \lambda}$ = 0.7 are adopted.

Target Selection
================

The target sources were selected based on their proximity and high optical \[OIII\] emission line luminosities. These two criteria were adopted, respectively, to make detection at $M'$ feasible and to select reasonably luminous AGNs ([@sim98b]), for which contamination from extended star-formation-related emission (both stellar emission and dust emission powered by star-formation activity) is expected to be smaller than for less luminous AGNs. Our samples are heterogeneous and not statistically complete, but provide useful information on the $L - M'$ colors of AGNs.

Observation and Data Analysis
=============================

$L$ (3.5$\pm$0.3 $\mu$m) and $M'$ (4.7$\pm$0.1 $\mu$m) band photometry was performed at the NASA Infrared Telescope Facility (IRTF) using NSFCAM ([@shu94]). Table 1 gives details of the observations.
Sky conditions were photometric throughout the observing runs. The seeing sizes measured from standard stars were 0$\farcs$6–1$\farcs$2. The NSFCAM used a 256$\times$256 InSb array. For $M'$-band photometry, the smallest pixel scale (0$\farcs$06 pix$^{-1}$) was used during all the observing runs. For $L$-band photometry, the pixel scale of 0$\farcs$06 pix$^{-1}$ was used in November 1999, while that of 0$\farcs$15 pix$^{-1}$ was used in April and May 2000. The field of view is 14$''$ $\times$ 14$''$ and 38$''$ $\times$ 38$''$ in the case of 0$\farcs$06 pix$^{-1}$ and 0$\farcs$15 pix$^{-1}$ pixel scales, respectively. Each exposure was 0.3–0.4 sec long at $L$ and 0.12–0.2 sec at $M'$. A dithering technique was utilized with an amplitude of 3–10$''$ to place sources at five different positions on the array. At each dithering position, 50–200 frames were coadded. Offset guide stars were used whenever available to achieve high telescope tracking accuracy. Standard data analysis procedures were employed, using IRAF [^1]. Firstly, bad pixels were removed and the values of these pixels were replaced with interpolated values from the surrounding pixels. Secondly, the frames were dark-subtracted and then scaled to have the same median pixel value, so as to produce a flat frame. The dark-subtracted frames were then divided by a normalized flat frame to produce images at each dithering position. Standard stars and very bright AGNs were clearly seen in images at each dithering position, and so images that contained these were aligned to sub-pixel accuracy using these detected sources and then summed to produce the final images. However, for fainter AGNs, the sources were not always clearly recognizable in the individual images at each dithering position. In these cases the images were shifted based on the records of telescope offset, assuming that telescope pointing and tracking were accurate, and were then summed to produce final images. 
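The flat-fielding step described above (scale the dark-subtracted frames to a common median level, median-combine, normalize, then divide each frame by the result) can be sketched in pure Python; this is only a schematic illustration of the arithmetic on a toy pixel grid, not the IRAF procedure actually used:

```python
import statistics

def make_flat(frames):
    """Build a normalized sky flat: scale dark-subtracted frames to a common
    median level, median-combine them pixel by pixel, then normalize so the
    flat has a median of 1."""
    meds = [statistics.median(v for row in f for v in row) for f in frames]
    target = statistics.median(meds)
    scaled = [[[v * target / m for v in row] for row in f]
              for f, m in zip(frames, meds)]
    ny, nx = len(frames[0]), len(frames[0][0])
    flat = [[statistics.median(f[y][x] for f in scaled) for x in range(nx)]
            for y in range(ny)]
    norm = statistics.median(v for row in flat for v in row)
    return [[v / norm for v in row] for row in flat]

def flatten_frame(frame, flat):
    """Divide a dark-subtracted frame by the normalized flat."""
    return [[v / f for v, f in zip(rv, rf)] for rv, rf in zip(frame, flat)]

# Toy demonstration: two frames differing only in overall sky level yield
# a uniform flat of 1.0 (perfectly even detector response).
frames = [[[2.0, 2.0], [2.0, 2.0]], [[4.0, 4.0], [4.0, 4.0]]]
flat = make_flat(frames)
```

Median combining, rather than averaging, is what makes the dithered sources drop out of the flat: a given pixel contains sky in most frames and source flux in at most one or two.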
This procedure potentially broadens the effective point spread function in the final image, yielding larger source full widths at half maximum (FWHMs) than expected. At 3–5 $\mu$m, and particularly at $M'$, thermal emission from a small amount of occasionally transiting cirrus can increase sky background signals and affect data quality, even though the sky may look clear. Thus, before summing the frames, we confirmed that their sky background levels agreed to within 1%, showing that the data were not seriously affected by this kind of cirrus. The images of all the observed AGNs were spatially compact, with no clear extended emission found in either band. The measured FWHMs of some AGNs in the final images were slightly larger than the FWHMs of standard stars, but we attribute these larger FWHMs mainly to the uncertainty introduced by shifting and adding frames containing faint sources, as discussed above. Photometry was done with 6$''$ diameter apertures using the task “PHOT”. Since the 3–5 $\mu$m emission was compact, the resulting photometric magnitudes were almost independent of aperture size as long as the aperture sizes were sufficiently larger than the measured FWHMs. Flux calibrations were made using standard stars in the faint Elias standard star catalog [^2] or the IRTF bright infrared standard star catalog [^3]. The $M'$-band magnitudes of standard stars were assumed to be the same as the $M$-band magnitudes listed in these catalogs, because the $M - M'$ color is expected to be virtually 0 mag unless a standard star is of a very late type. $M'$-band photometry is inherently very difficult because of the large background noise. The photometric accuracy was strongly dependent on the uncorrectable spatial fluctuation of background signals; because of this uncorrectable fluctuation, $M'$-band photometric accuracy was not necessarily better in objects with longer integration times.
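The fixed-aperture measurement, and the stated insensitivity to aperture size for compact sources, can be illustrated with a minimal sketch (a toy pixel grid and an idealized point source, not the actual IRAF PHOT implementation):

```python
def aperture_sum(image, cx, cy, radius, sky=0.0):
    """Sum background-subtracted counts inside a circular aperture,
    mimicking a fixed-aperture measurement like IRAF's PHOT task."""
    total = 0.0
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                total += val - sky
    return total

# For a compact source, the result is nearly independent of the aperture
# radius once the aperture is much larger than the source FWHM.
img = [[0.0] * 21 for _ in range(21)]
img[10][10] = 100.0          # idealized (single-pixel) point source
small = aperture_sum(img, 10, 10, 3)
large = aperture_sum(img, 10, 10, 8)
```

With real seeing-broadened profiles the small aperture would miss a little flux in the wings, which is why the apertures used here (6$''$) are several times the measured FWHMs.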
Furthermore, since the smallest pixel scale (0$\farcs$06 pix$^{-1}$) had to be utilized in our observations to avoid saturation, emission from the compact AGN was spread over many pixels, which made the recognition of real detections even more difficult. To avoid spurious detections, we divided the $M'$-band data into two or three independent images and confirmed that the source positions agreed with each other.

Results
=======

Our new photometric measurements for type-1 and type-2 AGNs are tabulated in Table 2. We estimate $L - M'$ colors based on $L$- and $M'$-band photometric data taken on the same night or on two successive nights. Figure 1a shows the distribution of $L - M'$ colors of the AGNs measured with our standard 6$''$ diameter apertures. Based on photometry of quasars (at $<$2.2 $\mu$m, 3.7 $\mu$m, and 10.1 $\mu$m), Neugebauer et al. (1987) found that the continuum spectral energy distribution can be approximated by a power law of the form $F_{\nu} \propto \nu^{-1.4\pm0.3}$ at 3–5 $\mu$m, which implies that the intrinsic $L - M'$ color of quasars is 1.0$\pm$0.1. Since this color was derived using quasars (that is, highly luminous unobscured AGNs), any contamination from star-formation-related emission is expected to be very small. We adopt this value for the intrinsic $L - M'$ color of unobscured AGNs. The type-1 and type-2 AGNs in Figure 1a both show $L - M'$ colors similar to the intrinsic $L - M'$ color of unobscured AGNs. To increase the sample size, we searched the available literature for 3–5 $\mu$m photometric data on galaxies. Table 3 summarizes the results for sources whose magnitudes have been measured both at $\sim$3.5 $\mu$m ($L$- or $L'$-band) and at $\sim$4.7 $\mu$m ($M'$- or $M$-band). We exclude sources for which only upper limits were given at one wavelength.
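The quoted intrinsic color can be checked with a back-of-envelope calculation. Taking the Neugebauer et al. power law in the red sense ($F_{\nu}$ rising toward longer wavelengths, i.e. $F_{\nu} \propto \nu^{-1.4}$) and approximating the Vega reference spectrum at these wavelengths as Rayleigh–Jeans ($F_{\nu} \propto \nu^{2}$) — an approximation introduced here, not taken from the text — the Vega-relative color between two bands follows from the ratio of the two power laws:

```python
import math

def color_vs_rj(alpha, lam1_um=3.5, lam2_um=4.7):
    """Vega-relative color (mag) between bands at lam1 and lam2 for a source
    with F_nu ∝ nu**alpha, approximating the Vega reference spectrum as
    Rayleigh-Jeans (F_nu ∝ nu**2):
    m1 - m2 = -2.5 * (alpha - 2) * log10(nu1 / nu2)."""
    nu_ratio = lam2_um / lam1_um          # nu1 / nu2 = lam2 / lam1
    return -2.5 * (alpha - 2.0) * math.log10(nu_ratio)

lm = color_vs_rj(-1.4)   # close to the quoted intrinsic color of 1.0 +/- 0.1
```

The result, about 1.1 mag, agrees with the adopted intrinsic $L - M'$ color to within the quoted $\pm$0.1 mag uncertainty, given the crudeness of the Rayleigh–Jeans approximation.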
When photometry has been performed with several different aperture sizes by the same authors, we have tabulated the photometric results based on the smallest aperture, in order to minimize contamination from the extended star-formation-related emission of the host galaxies, except in the case of NGC 1068, for which photometry with a 0$\farcs$6 aperture was adopted ([@mar00]). The aperture sizes used are 0$\farcs$6–22$\farcs$5. The central wavelength and wavelength coverage of the filters used for photometry at these two bands differ slightly among authors, and magnitude conversion formulas between slightly different filters are not well established. We therefore assume that photometric magnitudes in slightly different filters are the same. Although this assumption might introduce an uncertainty in the final photometric magnitudes of 0.1 mag or so, this would not affect our conclusions. Figure 1b plots the $L - M'$ colors of these AGNs and starburst/LINER galaxies taken from the literature.

Discussion
==========

Comparison of $L - M'$ Colors
-----------------------------

After combining our data with data in the literature, we find that, for type-1 AGNs (27 sources, 36 data points), type-2 AGNs (37 sources, 50 data points), and starbursts/LINERs (22 sources, 23 data points), the median (mean) $L - M'$ colors are 0.9 (0.8), 0.9 (1.0), and 0.4 (0.3), respectively. The median $L - M'$ colors of both type-1 and type-2 AGNs are within the range of the intrinsic $L - M'$ color of unobscured AGNs (1.0$\pm$0.1), but that of starbursts/LINERs is clearly smaller than those of AGNs. Since the relative contribution of star-formation-related emission is larger in starbursts and LINERs than in AGNs, this implies that star-formation-related emission gives rise to bluer $L - M'$ colors than does AGN emission. Figure 2 shows the cumulative probability distribution of $L - M'$ colors of type-1 and type-2 AGNs.
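The comparison underlying Figure 2 rests on the two-sample Kolmogorov–Smirnov statistic: the maximum vertical distance between the two empirical cumulative distributions. A minimal sketch, using small toy samples rather than the measured AGN colors (the quoted 88% significance additionally requires converting this statistic through the KS probability distribution, which is not reproduced here):

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: maximum distance between
    the empirical cumulative distribution functions of samples a and b."""
    a, b = sorted(a), sorted(b)
    xs = sorted(set(a) | set(b))

    def ecdf(sample, x):
        # fraction of the sample with value <= x
        return sum(1 for v in sample if v <= x) / len(sample)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in xs)

# Illustrative only: hypothetical color samples, not the tabulated AGN data.
type1_colors = [0.7, 0.8, 0.9, 1.0, 1.1]
type2_colors = [0.9, 1.0, 1.1, 1.2, 1.3]
d = ks_statistic(type1_colors, type2_colors)
```

In practice one would use `scipy.stats.ks_2samp`, which also returns the associated probability; the hand-rolled version above is just to show what the statistic measures.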
We apply the Kolmogorov-Smirnov test and find that the probability that the two samples are drawn from different underlying distributions is 88%. Thus, statistically, the $L - M'$ colors of type-2 AGNs differ from those of type-1 AGNs. However, the difference is very small, the median (mean) color being 0.0 (0.2) mag redder in the former than in the latter. If the Galactic dust extinction curve ($A_{\rm L}$/$A_{\rm V}$ = 0.058, $A_{\rm M'}$/$A_{\rm V}$ = 0.023; [@rie85]) is adopted, the 0.2 mag difference in the $L - M'$ colors implies that type-2 AGNs typically have only $A_{\rm V}$ $\sim$ 6 mag more dust extinction than type-1 AGNs. We next investigate the “physical” aperture sizes used for the measurements of the $L - M'$ colors of the AGNs because, when larger physical aperture sizes are used, contamination from extended star-formation-related emission could be larger, which might decrease the $L - M'$ colors. The median physical aperture sizes are 3.0 kpc and 1.3 kpc for type-1 and type-2 AGNs, respectively. Figure 3 compares the cumulative probability distributions of the physical aperture sizes of type-1 and type-2 AGNs. We do not find any clear trend for the physical aperture size to be systematically larger for type-2 AGNs than for type-1 AGNs. Furthermore, for type-1 AGNs, the median $L - M'$ colors below and above the median physical aperture size (3.0 kpc) are 0.7 mag and 0.9 mag, respectively. For type-2 AGNs, those below and above the median physical aperture size (1.3 kpc) are 0.9 mag and 1.1 mag, respectively. Therefore, $L - M'$ colors of AGNs measured with larger physical aperture sizes are not systematically bluer due to greater contamination from extended star-formation-related emission. Thus, contamination from extended star-formation-related emission is unlikely to have strong, systematic effects on the $L - M'$ colors of AGNs.
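The conversion between an $L - M'$ color excess and visual extinction follows directly from the two extinction ratios quoted above: for a foreground screen, $E(L - M') = (A_{\rm L}/A_{\rm V} - A_{\rm M'}/A_{\rm V})\,A_{\rm V}$. A one-line check with the Rieke & Lebofsky (1985) values:

```python
def av_from_lm_excess(delta_lm, al_av=0.058, amp_av=0.023):
    """Visual extinction A_V implied by an L - M' color excess, assuming a
    foreground screen and the Rieke & Lebofsky (1985) Galactic curve:
    E(L - M') = (A_L/A_V - A_M'/A_V) * A_V."""
    return delta_lm / (al_av - amp_av)

# The 0.2 mag median color difference corresponds to only A_V ~ 6 mag of
# extra extinction; conversely, A_V = 50 mag of screen extinction would
# redden L - M' by ~1.7 mag.
av = av_from_lm_excess(0.2)
excess_50 = 50.0 * (0.058 - 0.023)
```

These two numbers reproduce the $A_{\rm V} \sim 6$ mag figure above and the $\sim$1.7 mag deviation discussed for the highly dust-obscured sources below.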
We next investigate the $L - M'$ colors of type-2 AGNs, distinguishing between those that are less dust-obscured and those highly dust-obscured, because even though a galaxy may be classified as a type-2 AGN, the dust extinction toward its nucleus could vary significantly. For some type-2 AGNs, dust extinction has been estimated to be high. IRAS 08572+3915, NGC 7172, and NGC 7479 display strong silicate dust absorption features at 9.7 $\mu$m ([@dud97; @roc91]) and strong carbonaceous dust absorption at 3.4 $\mu$m ([@imd00; @ima00]), which means that a large number of both carbonaceous and silicate dust grains lie in front of the background AGN emission. Since interstellar dust consists mainly of carbonaceous and silicate dust grains ([@mat77; @mat89]), the presence of many of these grains along our line of sight to the AGN implies high dust extinction toward the AGN emission. Besides the above sources, a clear 3.4 $\mu$m carbonaceous dust absorption feature is detected in the spectrum of NGC 1068 ([@bri94; @ima97]). For Cygnus A, the observed $L$-band flux is significantly smaller than that expected from the intrinsic AGN power, which is estimated based on the optical \[OIII\] flux or extinction corrected 2–10 keV X-ray luminosity ([@war96]); it is argued that the small $L$-band flux is a result of flux attenuation by dust extinction ([@war96]). If the properties of the obscuring dust are similar to those in the Galactic diffuse interstellar medium ($\tau_{3.4}$/$A_{\rm V}$ $=$ 0.004–0.007; [@pen94], $\tau_{9.7}$/$A_{\rm V}$ = 0.05–0.1; Roche & Aitken 1984, 1985, $A_{\rm L}$/$A_{\rm V}$ = 0.058; [@rie85]) [^4], then the estimated column density of the obscuring dust is very large, corresponding to $A_{\rm V}$ = 140 mag for Cygnus A (Ward 1996), $A_{\rm V}$ $>$ 100 mag for IRAS 08572+3915 ([@imu00]), $A_{\rm V}$ = 30 mag for NGC 1068 ([@bri94; @ima97; @mar00]), and $A_{\rm V}$ $>$ 20 mag for NGC 7172 and NGC 7479 ([@roc91; @ima00]). 
All of these sources are thus very likely to be highly dust-obscured AGNs. On the other hand, NGC 2992, IRAS 05189$-$2524, IRAS 20460+1925, MCG$-$5–23–16, and PKS 1345+12 show detectable broad Pa$\alpha$ or Pa$\beta$ emission lines at $<$2 $\mu$m (Veilleux, Goodrich, & Hill 1997a, Veilleux, Sanders, & Kim 1997b, 1999b). These sources are thus classed as less dust-obscured type-2 AGNs. In Figure 4 we plot the $L - M'$ colors of these representative samples of less dust-obscured and highly dust-obscured type-2 AGNs. For both less dust-obscured and highly dust-obscured type-2 AGNs, the $L - M'$ colors are similar to the intrinsic $L - M'$ color of unobscured AGNs. If the Galactic dust extinction curve of Rieke & Lebofsky (1985) is applied, screen dust extinction with $A_{\rm V}$ = 50 mag should make the $L - M'$ color deviate from the intrinsic color by $\sim$1.7 mag. The color deviation in the case of the Galactic dust extinction curve is so large that it should be easily recognizable in the highly dust-obscured type-2 AGNs in Figure 4. The actual color deviation in the highly dust-obscured type-2 AGNs is, however, much smaller than that expected. Dust obscuration toward type-2 AGNs might be due not only to the dusty tori in the close vicinity of AGNs, but also to dust in the host galaxies (on $>$100 pc scales). In the latter case, a screen dust extinction model is applicable. Among our five highly dust-obscured AGNs (IRAS 08572+3915, Cygnus A, NGC 1068, NGC 7172, and NGC 7479), NGC 7172 is thought to belong to this class of object ([@ima00]). In the former case, where obscuration comes from the torus, the dust has a temperature gradient, with the inner dust having a higher temperature ([@pie92]).
The $L$- and $M'$-band emission is dominated by $\sim$900 K and $\sim$600 K dust, respectively, and since the $M'$-band emitting dust is located further out than the $L$-band emitting dust, the $M'$-band emission suffers less flux attenuation by dust extinction than the $L$-band emission in type-2 AGNs. For IRAS 08572+3915, Cygnus A, NGC 1068, and NGC 7479, the dust extinction toward the 3–4 $\mu$m emission region estimated using 3–4 $\mu$m data is larger than that toward the $\sim$10 $\mu$m emission region estimated using $\sim$10 $\mu$m data, and/or that toward the $\sim$10 $\mu$m emission region is larger than that toward the $\sim$20 $\mu$m emission region estimated using $\sim$20 $\mu$m data; the presence of a temperature gradient in the obscuring dust is therefore strongly suggested, indicating that these objects are obscured by dusty tori ([@dud97; @imu00; @ima00]). The presence of this temperature gradient in the obscuring dust should increase (not decrease) the $L - M'$ colors compared to a screen dust extinction model and thus cannot explain the small $L - M'$ colors observed in these highly dust-obscured type-2 AGNs.

Possible Reasons for the Small $L - M'$ Color Excess in Dust-Obscured AGNs
--------------------------------------------------------------------------

### Time Lag between $L$- and $M'$-band Flux Variation

According to the unification paradigm for AGNs, 3–5 $\mu$m emission is dominated by thermal emission powered by UV to optical emission from the central engine. The UV to optical emission is known to be highly time variable (e.g., [@cla92; @nan98]). Since the $L$-band emission region ($\sim$900 K dust) is located closer to the central engine than the $M'$-band emission region ($\sim$600 K dust), a time lag in flux variation is expected, in the sense that the $L$-band flux responds to variations of the central UV to optical emission before the $M'$-band flux does.
A lower limit on the time lag is determined by the physical separation between the $L$- and $M'$-band emission regions. The physical separation depends strongly on the UV to optical luminosity of the central engine and on the assumed radial dust density distribution in the dusty torus. We use the code [*DUSTY*]{} ([@ive99]) to estimate the separation in the case of a reasonable dust spatial distribution. [*DUSTY*]{} solves the radiative transfer equation for a source embedded in a spherically symmetric dusty envelope and calculates the resulting radial temperature distribution. We use the same basic parameters for the spectral shape of the central UV to optical emission and the dust composition as those adopted in Imanishi & Ueno (2000), and consider central engines with UV to optical luminosities $>$10$^{11}$ $L_{\odot}$. We assume the ratio of the outer to inner radius of the dusty envelope to be 200, where the inner radius is determined by the UV to optical luminosity and by the dust sublimation temperature, which is assumed to be $\sim$1000 K. Using reasonable parameters, such as a dust extinction toward the central engine of $A_{\rm V}$ = 0–200 mag and a radial dust density distribution approximated by a power law ($\propto$ r$^{-\gamma}$) with an index of $\gamma$ = 0–2, we find that the physical separation between the 900 K and 600 K dust is always larger than a few light days, so the time lag is longer than a few days. The time lag of the flux variation between the $L$- and $M'$-bands could therefore affect the $L - M'$ color measurements. In fact, the $L - M'$ colors of sources with multiple observations in Tables 2 and 3 are indeed different on different observing dates, and these color differences could be attributed to the time lag. If all the highly dust-obscured type-2 AGNs happened to be observed when they were bright at $L$ but faint at $M'$, then the derived $L - M'$ colors of these dust-obscured type-2 AGNs would be small.
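The order of magnitude of this lag can be illustrated without running the full radiative transfer. Assuming (purely for illustration; the paper's estimates come from [*DUSTY*]{}) the simple optically thin equilibrium scaling $T \propto r^{-1/2}$, so that $r(T) = r_{\rm sub}\,(T_{\rm sub}/T)^2$, and an assumed sublimation radius $r_{\rm sub}$ of a few light-days:

```python
def shell_radius(temp_k, r_sub_lightdays, t_sub=1000.0):
    """Radius (in light-days) of dust at temperature temp_k, for the simple
    optically thin equilibrium scaling T ∝ r**-0.5, i.e.
    r = r_sub * (T_sub / T)**2.  The sublimation radius r_sub is a free
    parameter here, not a fitted value."""
    return r_sub_lightdays * (t_sub / temp_k) ** 2

# Light-travel separation between the M'-emitting (~600 K) and L-emitting
# (~900 K) shells, for an assumed sublimation radius of 3 light-days.
lag_days = shell_radius(600.0, 3.0) - shell_radius(900.0, 3.0)
```

Even this crude scaling gives a separation of several light-days, consistent with the statement that the lag between the $L$- and $M'$-band responses is longer than a few days.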
However, although this explanation cannot be ruled out completely, it seems implausible.

### Contamination from Compact Nuclear Starbursts

Compact 3–5 $\mu$m emission has hitherto been regarded as AGN-related emission. We now consider the possibility that this compact emission may contain a significant contribution from compact nuclear starbursts. The presence of such compact nuclear starbursts has been suggested in some obscured AGNs (e.g., [@gon00]; but see [@iva00]). If galaxies possessed both AGN and (less obscured) nuclear starburst activity, and if the intrinsic magnitude ratios of these two components were the same among galaxies, then the contribution of AGN emission to the observed 3–5 $\mu$m fluxes would be smaller in more highly dust-obscured AGNs as a result of the larger flux attenuation of the AGN emission. Thus the $L - M'$ colors would not necessarily be larger in more highly dust-obscured AGNs. When compact nuclear starbursts contribute significantly to the 3–5 $\mu$m fluxes measured within the central few arcsec, the nuclear 3–4 $\mu$m spectra are expected to display the 3.3 $\mu$m polycyclic aromatic hydrocarbon (PAH) emission feature ([@imd00]). However, the nuclear spectra of NGC 1068 (3$\farcs$8 $\times$ 3$\farcs$8), NGC 7172 (1$\farcs$2 $\times$ 5$''$), NGC 7479 (1$\farcs$2 $\times$ 5$''$), and IRAS 08572+3915 (1$\farcs$2 $\times$ 8$''$) show a clear 3.4 $\mu$m absorption feature but no detectable 3.3 $\mu$m PAH emission ([@ima97; @ima00; @imd00]), indicating that the observed 3–4 $\mu$m fluxes are dominated by obscured AGN emission and not by starbursts. For Cygnus A, the emission at $>$3 $\mu$m is dominated by nuclear compact emission ([@djo91], this work, [@imu00]) and the nuclear spectrum shows no detectable 11.3 $\mu$m PAH emission ([@imu00]), suggesting that in this case too starburst activity contributes little to the observed flux at $>$3 $\mu$m.
We conclude that for these five highly dust-obscured AGNs (NGC 1068, NGC 7172, NGC 7479, IRAS 08572+3915, and Cygnus A), the small $L - M'$ colors are unlikely to be caused by contamination from nuclear starburst activity. ### Flat Dust Extinction Curve at 3–5 $\mu$m All the above five sources except NGC 7172 are thought to be obscured by dusty molecular tori in the close vicinity of AGNs (see §5.1). High dust density and high turbulence velocity in these tori could promote dust coagulation ([@ros91]), which might make the dust size there much larger than that in the Galactic diffuse interstellar medium (Maiolino et al. 2000a, b). If the size of a typical dust grain in the AGNs’ dusty tori is as large as a few $\mu$m, as suggested by Maiolino et al. (2000a, b), the extinction curve at 3–5 $\mu$m could be much flatter than that in the Galactic diffuse interstellar medium. This flat dust extinction curve at 3–5 $\mu$m can explain the small $L - M'$ color excess in the highly dust-obscured AGNs. If this is the case, a strong caveat would need to be attached to estimates of the dust extinction toward AGNs obscured by dusty tori that assume the applicability of a dust extinction curve derived from the Galactic diffuse interstellar medium. Summary ======= The $L - M'$ colors of obscured and unobscured AGNs were investigated. The $L - M'$ color excess in highly dust-obscured AGNs due to dust extinction, when compared to less dust-obscured AGNs, was much smaller than that expected from the Galactic dust extinction curve. We argued that the size of the dust grains in the close vicinity of AGNs may be so large, due to coagulation, that the extinction curve at 3–5 $\mu$m is flatter than that in the Galactic diffuse interstellar medium. We thank P. Fukumura-Sawada, D. Griep and C. Kaminski for their support during the IRTF run. We are grateful to Drs. J. Rayner and W. Vacca for their kind instruction on how to use NSFCAM prior to actual observing runs, and to Dr. R. 
Nakamura for useful discussion about dust coagulation processes. Drs. T. Nakajima, C. C. Dudley, and the anonymous referee gave useful comments on this manuscript. MI was financially supported by the Japan Society for the Promotion of Science for his stays at the University of Hawaii. Drs. A. T. Tokunaga and H. Ando gave MI the opportunity to work at the University of Hawaii. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. Alonso-Herrero, A., Ward, M. J., & Kotilainen, J. K. 1997, MNRAS, 288, 977 Alonso-Herrero, A., Simpson, C., Ward, M. J., & Wilson, A. S. 1998, ApJ, 495, 196 Alonso-Herrero, A., Quillen, A. C., Simpson, C., Efstathiou, A., & Ward, M. J. 2000, AJ, in press (astro-ph/0012096) Antonucci, R. 1993, ARA&A, 31, 473 Becklin, E. E., Tokunaga, A. T., & Wynn-Williams, C. G. 1982, ApJ, 263, 624 Bridger, A., Wright, C. S., & Geballe, T. R. 1994, Infrared Astronomy with Arrays: the Next Generation, I. McLean ed. (Dordrecht: Kluwer Academic Publishers), p. 537 Clavel, J. et al. 1992, ApJ, 393, 113 Djorgovski, S., Weir, N., Matthews, K., & Graham, J. R. 1991, ApJ, 372, L67 Dudley, C. C. 1998, Ph. D. Thesis, University of Hawaii Dudley, C. C., & Wynn-Williams C. G. 1997, ApJ, 488, 720 Elvis, M., Willner, S. P., Fabbiano, G., Carleton, N. P., Lawrence, A., & Ward, M. 1984, ApJ, 280, 574 Gonzalez Delgado, R. M., Heckman, T., & Leitherer, C. 2000, ApJ, in press, (astro-ph/0008417) Halpern, J. P., Turner, T. J., & George, I. M. 1999, MNRAS, 307, L47 Heckman, T. M., Smith, E. P., Baum, S. A., van Breugel, W. J. M., Miley, G. K., Illingworth, G. D., Bothun, G. D., & Balick, B. 1986, ApJ, 311, 526 Imanishi, M. 2000, MNRAS, 319, 331 Imanishi, M., & Dudley, C. C. 2000, ApJ, 545, 701 Imanishi, M., & Ueno, S. 
2000, ApJ, 535, 626 Imanishi, M., Terada, H., Sugiyama, K., Motohara, K., Goto, M., & Maihara, T. 1997, PASJ, 49, 69 Ivanov, V. D., Rieke, G. H., Groppi, C. E., Alonso-Herrero, A., Rieke, M. J., & Engelbracht, C. W. 2000, ApJ, in press, (astro-ph/0007177) Ivezic, Z., Nenkova, M., & Elitzur, M. 1999, User Manual for DUSTY, University of Kentucky Internal Report, accessible at http://www.pa.uky.edu/$^{\sim}$moche/dusty/ (astro-ph/9910475) Jackson, N., & Rawlings, S. 1997, MNRAS, 286, 241 Lawrence, A., Ward, M., Elvis, M., Fabbiano, G., Willner, S. P., Carleton, N. P., & Longmore, A. 1985, 291, 117 Lutz, D., et al. 1996, A&A, 315, L269 Maiolino, R., Marconi, A., & Oliva, E. 2000a, A&A, in press, (astro-ph/0010066) Maiolino, R., Marconi, A., Salvati, M., Risaliti, G., Severgnini, P., Oliva, E., La Franca, F., & Vanzi, L. 2000b, A&A, in press, (astro-ph/0010009) Marco, O., & Alloin, D. 2000, A&A, 353, 465 Mathis, J. S., & Whiffen, G. 1989, ApJ, 341, 808 Mathis, J. S., Rumpl, W., & Nordsieck, K. H. 1977, ApJ, 217, 425 McAlary, C. W., McLaren, R. A., & Crabtree, D. R. 1979, ApJ, 234, 471 McAlary, C. W., McLaren, R. A., & McGonegal, R. J. 1983, ApJS, 52, 341 Moorwood, A. F. M., & Glass, I. S. 1984, A&A, 135, 281 Nandra, K., & Pounds, K. A. 1994, MNRAS, 268, 405 Nandra, K., Clavel, J., Edelson, R. A., George, I. M., Malkan, M. A., Mushotzky, R. F., Peterson, B. M., & Turner, T. J. 1998, ApJ, 505, 594 Neugebauer, G., Green, R. F., Matthews, K., Schmidt, M., Soifer, B. T., & Bennett, J. 1987, ApJS, 63, 615 Osterbrock, D. E. 1983, PASP, 95, 12 Osterbrock, D. E., & Miller, J. S. 1975, ApJ, 197, 535 Pendleton, Y. J., Sandford, S. A., Allamandola, L. J., Tielens, A. G. G. M., & Sellgren, K. 1994, ApJ, 437, 683 Pier, E. A., & Krolik, J. H. 1992, ApJ, 401, 99 Rieke, G. H., & Lebofsky M. J. 1985, ApJ, 288, 618 Roche, P. F., & Aitken, D. K. 1984, MNRAS, 208, 481 Roche, P. F., & Aitken, D. K. 1985, MNRAS, 215, 425 Roche, P. F., Aitken, D. K., Smith, C. H., & Ward, M. J. 
1991, MNRAS, 1991, 248, 606 Rossi, S. C. F., Benevides-Soares, P., & Barbuy, B. 1991, A&A, 251, 587 Rush, B., Malkan, M. A., & Spinoglio, L. 1993, ApJS, 89, 1 Scoville, N. Z., et al. 2000, AJ, 119, 991 Shure, M. A., Toomey, D. W., Rayner, J. T., Onaka, P., & Denault, A. J. 1994, Proc. SPIE, 2198, 614 Simpson, C. 1998a, ApJ, 509, 653 Simpson, C. 1998b, MNRAS, 297, L39 Simpson, C., Rawlings, S., & Lacy, M. 1999, MNRAS, 306, 828 Simpson, C., Ward, M., & Wall, J. V. 2000, MNRAS, 319, 963 Smith, D. A., & Done, C. 1996, MNRAS, 280, 355 Tadhunter, C. N., Morganti, R., Robinson, A., Dickson, R., Villar-Martin, M., & Fosbury, R. A. E. 1998, MNRAS, 298, 1035 Vader, J. P., Frogel, J. A., Terndrup, D. M., & Heisler, C. A. 1993, AJ, 106, 1743 Veilleux, S., Kim, D. -C., Sanders, D. B., Mazzarella, J. M., & Soifer, B. T. 1995, ApJS, 98, 171 Veilleux, S., Goodrich, R. W., & Hill, G. J. 1997a, ApJ, 477, 631 Veilleux, S., Kim, D. -C., & Sanders, D. B. 1999a, ApJ, 522, 113 Veilleux, S., Sanders, D. B., & Kim, D. -C. 1997b, ApJ, 484, 92 Veilleux, S., Sanders, D. B., & Kim, D. -C. 1999b, ApJ, 522, 139 Ward, M. J. 1996, in Cygnus A - Study of a Radio Galaxy, Carilli, C. L., & Harris, D. E., eds, Cambridge University Press, p.43 Ward, M., Elvis, M., Fabbiano, G., Carleton, N. P., Willner, S. P., & Lawrence, A. 1987, ApJ, 315, 74 Xu, C., Livio, M., & Baum, S. 1999, AJ, 118, 1169 Young, S., Axon, D. J., Hough, J. H., Fabian, A. C., & Ward, M. J. 
1998, MNRAS, 294, 478 [ccccccc]{} 3C 63 & 0.175 & 2 $^{a}$ & 1000 & 1200 & 1999 Nov 29 & 1999 Nov 29\ 3C 171 & 0.238 & 5 $^{a}$ & 1800 & 3600 & 1999 Nov 28 & 1999 Nov 29\ 3C 195 (0806$-$10) & 0.110 & 5 $^{b}$ & 900 & 1200 & 1999 Nov 28 & 1999 Nov 29\ 3C 234 & 0.184 & 21 $^{a}$ & 400 & 800 & 1999 Nov 29 & 1999 Nov 29\ & & & 200 & 400 & 2000 May 16 & 2000 May 16\ 3C 321 & 0.096 & 2 $^{a}$ & 400 & 1600 & 2000 May 15 & 2000 May 15\ & & & 400 & 1800 & 2000 May 16 & 2000 May 16\ 3C 445 & 0.056 & 1 $^{a}$ & 200 & 400 & 2000 May 15 & 2000 May 15\ 3C 456 & 0.233 & 6 $^{a}$ & 1200 & 2400 & 1999 Nov 28 & 1999 Nov 29\ Cygnus A & 0.056 & 1 $^{c}$ & 600 & 2400 & 2000 Apr 19 & 2000 Apr 19\ & & & 400 & 1800 & 2000 May 16 & 2000 May 16\ Mrk 231 & 0.042 & 1 $^{d}$ & 150 & 200 & 2000 Apr 18 & 2000 Apr 18\ PG 1534+580 (Mrk 290) & 0.032 & 1 $^{d}$ & 300 & 800 & 2000 Apr 19 & 2000 Apr 19\ PKS 1345+12 & 0.122 & 2 $^{d}$ & 600 & 800 & 2000 Apr 19 & 2000 Apr 19\ [cccccc]{} 3C 63 & 13.4$\pm$0.3 & $>$11.0 & $<$2.7 & 6 (16.6) & Sy2 $^{a}$\ 3C 171 & $>$13.9 & $>$10.9 & & 6 (21.1) & Sy2 $^{a}$\ 3C 195 & 10.7$\pm$0.1 & 9.9$\pm$0.1 & 0.8$\pm$0.1 & 6 (11.2) & Sy2 $^{b}$\ 3C 234 & 10.7$\pm$0.1 & 9.5$\pm$0.2 & 1.2$\pm$0.2 & 6 (17.3) & Sy2 $^{c}$\ & 10.6$\pm$0.1 & 9.3$\pm$0.2 & 1.3$\pm$0.2 & 6 (17.3) &\ 3C 321 $^{*}$ & 12.5$\pm$0.1 & 11.7$\pm$0.3 & 0.8$\pm$0.3 & 6 (10.0) & Sy2 $^{a}$\ & 12.5$\pm$0.1 & 11.4$\pm$0.2 & 1.1$\pm$0.2 & 6 (10.0) &\ 3C 445 & 9.2$\pm$0.1 & 8.4$\pm$0.1 & 0.8$\pm$0.1 & 6 (6.1) & Sy1 $^{a}$\ 3C 456 & 12.7$\pm$0.2 & $>$10.1 & $<$2.8 & 6 (20.8) & Sy2 $^{a}$\ Cygnus A & 12.7$\pm$0.1 & 11.4$\pm$0.3 & 1.3$\pm$0.3 & 6 (6.1) & Sy2 $^{d}$\ & 12.0$\pm$0.1 & 11.1$\pm$0.2 & 0.9$\pm$0.2 & 6 (6.1) &\ Mrk 231 & 7.4$\pm$0.1 & 6.4$\pm$0.1 & 1.0$\pm$0.1 & 6 (4.7) & Sy1 $^{e}$\ PG 1534+580 & 10.9$\pm$0.1 & 10.3$\pm$0.3 & 0.6$\pm$0.3 & 6 (3.6) & Sy1 $^{f}$\ PKS 1345+12 $^{*}$ & 11.5$\pm$0.1 & 10.7$\pm$0.2 & 0.8$\pm$0.2 & 6 (12.3) & Sy2 $^{e}$\ [cccccccc]{} Sy 1 & NGC 863 (Mrk 590) & 0.026 & 9.3 & 
8.5 & 0.8 & 8, 5 (3.9, 2.5) & 1\ & NGC 931 (Mrk 1040) & 0.017 & 8.4 & 7.6 & 0.8 & 8, 5 (2.6, 1.6) & 1\ & & & 8.9$\pm$0.1 & 7.9$\pm$0.2 & 1.0$\pm$0.2 & 7.9 (2.6) & 2\ & NGC 1365 & 0.005 & 7.6$\pm$0.1 & 6.9$\pm$0.1 & 0.7$\pm$0.1 & 9.1 (0.9) & 2\ & & & 7.6$\pm$0.1 & 7.0$\pm$0.1 & 0.6$\pm$0.1 & 5 (0.5) & 3\ & NGC 3227 & 0.004 & 8.9$\pm$0.1 & 8.4$\pm$0.5 & 0.5$\pm$0.5 & 4.6 (0.4) & 2\ & & & 8.6$\pm$0.1 & 8.0 & 0.6 & 15 (1.2) & 4\ & NGC 3516 & 0.009 & 8.4$\pm$0.1 & 8.1$\pm$0.6 & 0.3$\pm$0.6 & 15, 10 (2.6, 1.7)& 4\ & NGC 4051 & 0.002 & 9.0$\pm$0.1 & 8.0 & 1.0 & 15 (0.6) & 4\ & NGC 4151 & 0.003 & 7.4$\pm$0.1 & 6.4$\pm$0.1 & 1.0$\pm$0.1 & 7.9 (0.5) & 2\ & & & 7.3$\pm$0.1 & 6.4$\pm$0.1 & 0.9$\pm$0.1 & 10 (0.6) & 4\ & NGC 5548 & 0.017 & 8.6$\pm$0.1 & 8.0$\pm$0.3 & 0.6$\pm$0.3 & 7.9 (2.6) & 2\ & & & 9.0$\pm$0.1 & 8.0$\pm$0.7 & 1.0$\pm$0.7 & 10 (3.2) & 4\ & NGC 6814 & 0.005 & 9.8$\pm$0.1 & 8.9$\pm$0.6 & 0.9$\pm$0.6 & 7.9 (0.8) & 2\ & NGC 7469 & 0.016 & 8.1$\pm$0.1 & 7.0$\pm$0.2 & 1.1$\pm$0.2 & 7.9 (2.4) & 2\ & & & 8.0$\pm$0.1 & 7.4$\pm$0.1 & 0.6$\pm$0.1 & 5 (1.5) & 3\ & 3A 0557–385 & 0.034 & 8.3 & 7.1 & 1.2 & 5 (3.2) & 1\ & 3C 120 & 0.033 & 9.2$\pm$0.1 & 8.4$\pm$0.7 & 0.8$\pm$0.7 & 15 (9.2) & 4\ & 3C 273 & 0.158 & 8.0$\pm$0.1 & 7.3$\pm$0.2 & 0.7$\pm$0.2 & 9.1 (23.2)& 2\ & 3C 445 & 0.056 & 9.5 & 8.6 & 0.9 & 8, 5 (8.1, 5.1) & 5\ & Akn 120 & 0.032 & 9.0$\pm$0.1 & 8.1$\pm$0.3 & 0.9$\pm$0.3 & 4.6 (2.8)& 2\ & ESO 113–IG45 & 0.045 & 8.4$\pm$0.1 & 8.1$\pm$0.3 & 0.3$\pm$0.3 & 9.1 (7.5) & 2\ & IC 4329A & 0.016 & 7.8$\pm$0.1 & 7.5$\pm$0.1 & 0.3$\pm$0.1 & 4.6 (1.4) & 2\ & & & 7.9$\pm$0.1 & 6.9$\pm$0.3 & 1.0$\pm$0.3 & 15 (4.6) & 4\ & MCG$-$2–58–22 & 0.048 & 9.3 & 8.6 & 0.7 & 8, 5 (7.0, 4.4) & 1\ & MCG 8–11–11 & 0.020 & 8.9 & 7.9 & 1.0 & 8, 5 (3.0, 1.9) & 1\ & & & 8.8$\pm$0.1 & 8.1$\pm$0.3 & 0.7$\pm$0.3 & 7.9 (3.0) & 2\ & & & 8.8$\pm$0.1 & 7.3$\pm$0.2 & 1.5$\pm$0.2 & 15 (5.7) & 4\ & Mrk 79 & 0.022 & 9.3 & 8.4 & 0.9 & 8 (3.3) & 1\ & Mrk 231 & 0.042 & 7.4$\pm$0.1 & 6.5$\pm$0.1 & 0.9$\pm$0.1 & 
10 (7.8) & 4\ & Mrk 335 & 0.026 & 8.7$\pm$0.1 & 7.6$\pm$0.2 & 1.1$\pm$0.2 & 7.9 (3.9) & 2\ & Mrk 359 & 0.017 & 10.3$\pm$0.1 & 9.2$\pm$0.1 & 1.1$\pm$0.1 & 5 (1.6) & 6\ & Mrk 1152 & 0.053 & 10.6 & 9.4 & 1.2 & 8, 5 (7.7, 4.8) & 1\ Sy 2 & NGC 262 (Mrk 348) & 0.015 & 10.5$\pm$0.1 & 9.1$\pm$0.1 & 1.4$\pm$0.1 & 3 (0.9) & 7\ & NGC 526a & 0.019 & 9.6 & 8.4 & 1.2 & 8, 5 (2.9, 1.8) & 1\ & NGC 1052 & 0.005 & 9.8$\pm$0.1 & 9.1$\pm$0.2 & 0.7$\pm$0.2 & 3 (0.3) & 7\ & & & 9.7$\pm$0.1 & 9.3$\pm$0.2 & 0.4$\pm$0.2 & 4 (0.4) & 8\ & NGC 1068 & 0.004 & 5.3 & 3.7 & 1.6$\pm$0.4 & 0.6 (0.1) & 9\ & & & 4.5$\pm$0.1 & 3.2$\pm$0.1 & 1.3$\pm$0.1 & 3 (0.2) & 7\ & NGC 1275 & 0.018 & 8.1$\pm$0.1 & 7.1$\pm$0.1 & 1.0$\pm$0.1 & 7.9 (2.7) & 2\ & NGC 1808 & 0.003 & 8.7$\pm$0.1 & 8.7$\pm$0.2 & 0.0$\pm$0.2 & 5 (0.3) & 3\ & NGC 2992 & 0.008 & 9.2 & 8.4 & 0.8 & 6 (0.9) & 1\ & & & 10.0$\pm$0.1 & 9.2$\pm$0.3 & 0.8$\pm$0.3 & 3 (0.5) & 7\ & NGC 3094 & 0.008 & 8.2$\pm$0.1 & 7.5$\pm$0.1 & 0.7$\pm$0.1 & 5 (0.8) & 3\ & NGC 3281 & 0.011 & 8.4 & 7.2 & 1.2 & 6 (1.3) & 10\ & NGC 4418 $^{a}$ & 0.007 & 11.1$\pm$0.1 & 10.2$\pm$0.2 & 0.9$\pm$0.2 & 5 (0.7) & 3\ & NGC 4736 & 0.001 & 7.0$\pm$0.1 & 7.4$\pm$0.4 & $-$0.4$\pm$0.4 & 15 (0.3) & 4\ & NGC 4945 & 0.002 & 8.3 & 7.5 & 0.8 & 7.5 (0.3) & 11\ & NGC 4968 & 0.010 & 10.0$\pm$0.1 & 8.7$\pm$0.2 & 1.3$\pm$0.2 & 3 (0.6) & 7\ & NGC 5252 & 0.023 & 10.6$\pm$0.1 & 10.0$\pm$0.3 & 0.6$\pm$0.3 & 3 (1.3) & 7\ & NGC 5506 & 0.006 & 7.6$\pm$0.1 & 6.7$\pm$0.1 & 0.9$\pm$0.1 & 4.6 (0.5)& 2\ & & & 7.1$\pm$0.1 & 6.2$\pm$0.1 & 0.9$\pm$0.1 & 3 (0.4) & 7\ & NGC 7130 (IC 5135) & 0.016 & 10.0$\pm$0.1 & 9.9$\pm$0.2 & 0.1$\pm$0.2 & 7.8 (2.4) & 3\ & NGC 7172 & 0.009 & 9.1$\pm$0.1 & 8.5$\pm$0.1 & 0.6$\pm$0.1 & 5 (0.9) & 3\ & & & 8.2 & 7.3$\pm$0.1 & 0.9$\pm$0.1 & 8, 5 (1.4, 0.9) & 6\ & & & 9.5$\pm$0.1 & 8.6$\pm$0.1 & 0.9$\pm$0.1 & 3 (0.5) & 7\ & NGC 7314 & 0.005 & 10.3 & 9.4 & 0.9 & 5 (0.5) & 1\ & NGC 7479 & 0.008 & 9.8$\pm$0.1 & 8.6$\pm$0.1 & 1.2$\pm$0.1 & 5 (0.8) & 3\ & NGC 7582 & 0.005 & 7.8$\pm$0.1 & 
7.1$\pm$0.1 & 0.7$\pm$0.1 & 9.1 (0.9) & 2\ & NGC 7674 (Mrk 533) & 0.029 & 9.0$\pm$0.1 & 8.2$\pm$0.1 & 0.8$\pm$0.1 & 5 (2.7) & 3\ & & & 9.1$\pm$0.1 & 8.0$\pm$0.1 & 1.1$\pm$0.1 & 3 (1.6) & 7\ & 3C 33 & 0.060 & 11.9$\pm$0.1 & 11.2$\pm$0.1 & 0.7$\pm$0.1 & 3 (3.3) & 12\ & 3C 223 & 0.137 & 12.9$\pm$0.1 & 11.8$\pm$0.2 & 1.1$\pm$0.2 & 3 (6.8) & 12\ & 3C 234 & 0.185 & 10.9 & 9.3 & 1.6 & 6 (17.4) & 5\ & & & 10.5$\pm$0.1 & 9.9$\pm$0.1 & 0.6$\pm$0.1 & 3 (8.7) & 12\ & Circinus & 0.002 & 6.4 & 5.1 & 1.3 & 5 (0.2) & 11\ & IRAS 00198$-$7926 & 0.073 & 9.8 & 7.7 & 2.1 & 4.7 (6.1) & 13\ & IRAS 00521$-$7054 & 0.069 & 9.2 & 8.1 & 1.1 & 6.2 (7.6) & 13\ & IRAS 04385$-$0828 & 0.015 & 9.5 & 8.0 & 1.5 & 4.7 (1.3) & 13\ & IRAS 05189$-$2524 & 0.042 & 8.1$\pm$0.1 & 7.4$\pm$0.1 & 0.7$\pm$0.1 & 5 (3.9) & 3\ & & & 8.5 & 7.2 & 1.3 & 4.7 (3.6) & 13\ & IRAS 08572+3915 $^{b}$ & 0.058 & 9.2$\pm$0.3 & 8.0$\pm$0.3 & 1.2$\pm$0.4 & 4.5 (4.7) & 14\ & IRAS 20460+1925 & 0.181 & 9.2 & 8.3 & 0.9 & 11.9 (33.9) & 13\ & MCG$-$5–23–16 & 0.008 & 8.5$\pm$0.1 & 7.7$\pm$0.1 & 0.8$\pm$0.1 & 3 (0.5) & 7\ & Mrk 573 & 0.017 & 10.1$\pm$0.1 & 9.0$\pm$0.3 & 1.1$\pm$0.3 & 3 (1.0) & 7\ SB & NGC 520 & 0.008 & 9.6$\pm$0.1 & 9.1$\pm$0.2 & 0.5$\pm$0.2 & 7.8 (1.2) & 3\ LINER & NGC 613 & 0.005 & 9.5$\pm$0.1 & 8.7$\pm$0.1 & 0.8$\pm$0.1 & 5 (0.5) & 3\ Unknown & NGC 660 & 0.003 & 8.9$\pm$0.1 & 8.7$\pm$0.1 & 0.2$\pm$0.1 & 5 (0.3) & 3\ & NGC 828 & 0.018 & 10.4$\pm$0.1 & 10.6$\pm$0.1 & $-$0.2$\pm$0.1 & 5 (1.7) & 3\ & NGC 1614 (Mrk 617) & 0.016 & 9.0$\pm$0.1 & 8.6$\pm$0.1 & 0.4$\pm$0.1 & 7.8 (2.4) & 3\ & & & 9.2 & 8.8$\pm$0.1 & 0.4$\pm$0.1 & 5 (1.5) & 6\ & NGC 2110 & 0.008 & 9.4 & 8.9$\pm$0.2 & 0.5$\pm$0.2 & 6 (0.9) & 6\ & NGC 2339 & 0.007 & 9.7$\pm$0.1 & 9.5$\pm$0.2 & 0.2$\pm$0.2 & 7.8 (1.1) & 3\ & NGC 2388 & 0.014 & 10.1$\pm$0.1 & 9.8$\pm$0.2 & 0.3$\pm$0.2 & 5 (1.3) & 3\ & NGC 2623 & 0.018 & 10.6$\pm$0.1 & 10.2$\pm$0.2 & 0.4$\pm$0.2 & 5 (1.7) & 3\ & NGC 2782 & 0.009 & 10.1$\pm$0.1 & 9.6$\pm$0.1 & 0.5$\pm$0.1 & 5 (0.9)& 3\ & NGC 3079 & 
0.004 & 9.2 & 9.4$\pm$0.2 & $-$0.2$\pm$0.2 & 6 (0.5) & 6\ & NGC 4102 & 0.003 & 8.5$\pm$0.1 & 8.3$\pm$0.1 & 0.2$\pm$0.1 & 5 (0.3) & 3\ & NGC 4194 (Mrk 201) & 0.008 & 9.4$\pm$0.1 & 8.9$\pm$0.1 & 0.5$\pm$0.1 & 5 (0.8) & 3\ & NGC 4579 & 0.005 & 9.4 & 9.5$\pm$0.3 & $-$0.1$\pm$0.3 & 6 (0.6) & 6\ & NGC 4826 & 0.001 & 8.9 & 9.2$\pm$0.2 & $-$0.3$\pm$0.2 & 6 (0.1) & 6\ & NGC 6764 & 0.008 & 10.9 & 9.9$\pm$0.2 & 1.0$\pm$0.2 & 5 (0.8) & 6\ & NGC 7714 & 0.009 & 10.4$\pm$0.1 & 10.1$\pm$0.2 & 0.3$\pm$0.2 & 5 (0.9) & 3\ & NGC 7770 & 0.014 & 11.2$\pm$0.1 & 10.5$\pm$0.2 & 0.7$\pm$0.2 & 5 (1.3) & 3\ & NGC 7771 & 0.014 & 10.3$\pm$0.1 & 9.9$\pm$0.1 & 0.4$\pm$0.1 & 5 (1.3) & 3\ & MCG$-$3–4–14 & 0.033 & 10.7$\pm$0.1 & 10.2$\pm$0.2 & 0.5$\pm$0.2 & 5 (3.1) & 3\ & Mrk 331 & 0.018 & 9.8$\pm$0.1 & 9.5$\pm$0.2 & 0.3$\pm$0.2 & 5 (1.7) & 3\ & UGC 3094 & 0.025 & 10.6$\pm$0.1 & 10.2$\pm$0.2 & 0.4$\pm$0.2 & 5 (2.4)& 3\ [^1]: IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc (AURA), under a cooperative agreement with the National Science Foundation. [^2]: http://irtf.ifa.hawaii.edu/online/IRTF/Catalogs/Elias\_standards [^3]: http://irtf.ifa.hawaii.edu/online/IRTF/Catalogs/bright\_standards [^4]: $\tau_{3.4}$ and $\tau_{9.7}$ mean the optical depths of the 3.4 $\mu$m carbonaceous dust absorption and the 9.7 $\mu$m silicate dust absorption, respectively.
--- abstract: 'In this work, we study two-dimensional Galilean field theories with global translations and anisotropic scaling symmetries. We show that such theories have enhanced local symmetries, generated by the infinite dimensional spin-$\ell$ Galilean algebra with possible central extensions, under the assumption that the dilation operator is diagonalizable and has a discrete and non-negative spectrum. We study the Newton-Cartan geometry with anisotropic scaling, on which the field theories could be defined in a covariant way. With the well-defined Newton-Cartan geometry we establish the state-operator correspondence in anisotropic GCFT, determine the two-point functions of primary operators, and discuss the modular properties of the torus partition function which allows us to derive Cardy-like formulae.' author: - 'Bin Chen$^{1,2,3}$, Peng-Xiang Hao$^1$ and Zhe-fei Yu$^1$' title: 2d Galilean Field Theories with Anisotropic Scaling --- [*$^{1}$Department of Physics and State Key Laboratory of Nuclear Physics and Technology,\ Peking University, 5 Yiheyuan Rd, Beijing 100871, P. R. China\ $^{2}$Collaborative Innovation Center of Quantum Matter, 5 Yiheyuan Rd, Beijing 100871, P. R. China\ $^{3}$Center for High Energy Physics, Peking University, 5 Yiheyuan Rd, Beijing 100871, P. R. China*]{} Introduction ============ In two-dimensional (2D) spacetime, the global symmetry in a quantum field theory could be enhanced to a local one. The well-known example studied by J. Polchinski in [@Polchinski:1987dy] shows that a 2D Poincaré invariant QFT with scale invariance is in fact conformally invariant, provided that the theory is unitary and the dilation spectrum is discrete and non-negative. More recently, A. Strominger and D. Hofman relaxed the requirement of Lorentz invariance and studied the enhanced symmetries of theories with chiral scaling[@Hofman:2011zj]. They found two kinds of minimal theories. 
One kind is the two-dimensional conformal field theory (CFT$_2$)[@Belavin:1984vu], while the other kind is called the warped conformal field theory (WCFT). In a warped CFT, the global symmetry is $SL(2,R)\times U(1)$, and it is enhanced to an infinite-dimensional group generated by a Virasoro-Kac-Moody algebra. For studies of various aspects of 2D warped CFT, see [@Detournay:2012pc; @Hofman:2014loa; @Castro:2015csg; @Castro:2015uaa; @Song:2016gtd; @Song:2017czq; @Jensen:2017tnb; @Azeyanagi:2018har; @Apolo:2018eky; @Chaturvedi:2018uov; @Apolo:2018oqv; @Song:2019txa]. In this paper, we would like to investigate other types of two-dimensional field theories with enhanced symmetries. We will focus on the theories whose global symmetries include the translations along two directions, boost symmetry and anisotropic scaling symmetry. If the two directions are recognized as temporal and spatial directions, the anisotropic scaling is of Lifshitz type $x\rightarrow\lambda x,\ t\rightarrow\lambda^z t$. Recall that the scaling behaviour in a warped conformal field theory is chiral $$x\rightarrow\lambda x,\ \ \ y\rightarrow y,$$ while the one in a Galilean conformal field theory (GCFT) is[^1] $$x\rightarrow\lambda x,\ \ \ y\rightarrow \lambda y.$$ In Galilean CFT, the boost symmetry is of Galilean type rather than Lorentzian type $$y\rightarrow y+v x.$$ The Galilean CFT can be obtained by taking the non-relativistic limit of the conformal field theory. Thus Lorentz symmetry is broken in Galilean CFT. In this work, we consider the case with more general anisotropic scaling $$x\rightarrow\lambda^c x,\ \ \ y\rightarrow \lambda^d y,$$ and a Galilean boost symmetry. Our consideration is general enough to include the WCFT and GCFT as special cases. CFTs with anisotropic scaling could be related to strongly coupled systems in condensed matter physics and in some statistical systems[@Henkel:1997zz; @Henkel:2002vd; @Rutkevich:2010xs]. 
In particular, it is well known that the fermions at unitarity, which can be realized experimentally using trapped cold atoms at the Feshbach resonance[@Bartenstein:2004zza; @Regal:2004zza; @Zwierlein:2004zz], possess Schrödinger symmetry, and near the quantum critical points[@Sachdev2011] there is Lifshitz-type symmetry. In order to study these non-relativistic strong coupling systems holographically, people have tried to establish their gravity duals[^2] [@Son:2005rv; @Balasubramanian:2008dm; @Kachru:2008yh]. One essential requirement is the geometric realization of the symmetry. For a 2D QFT with enhanced symmetry, its role in the holographic duality becomes subtler and more interesting. In this case, the dual gravity must involve 3D gravity. As is well known, there are no local dynamical degrees of freedom in 3D gravity, but there could be boundary global degrees of freedom. The AdS spacetime is not globally hyperbolic and the boundary conditions at infinity play an important role. For AdS$_3$ gravity, under the Brown-Henneaux boundary conditions, the asymptotic symmetry group is generated by two copies of the Virasoro algebra[@Brown:1986nw], leading to the AdS$_3$/CFT$_2$ correspondence. However, there exist other sets of consistent boundary conditions. In particular, under the Compére-Song-Strominger boundary conditions, the asymptotic symmetry group is generated by the Virasoro-Kac-Moody U(1) algebra[@Compere:2013bya]. Therefore, under the CSS boundary conditions, the AdS$_3$ gravity could be dual to a warped conformal field theory. This AdS$_3$/WCFT correspondence has been studied in [@Song:2016gtd; @Apolo:2018eky; @Castro:2017mfj; @Chen:2019xpb; @Lin:2019dji]. 
The study of consistent asymptotic boundary conditions and the corresponding asymptotic symmetry groups has played an important role in setting up other holographic correspondences beyond AdS/CFT, including chiral gravity[@Li:2008dq], WAdS/WCFT[@Anninos:2008fx; @Compere:2009zj], Kerr/CFT[@Guica:2008mu], BMS/GCA[@Bagchi:2010eg; @Bagchi:2012cy], BMS/CFT[@Barnich:2010eb; @deBoer:2003vf; @Ball:2019atb] and the non-relativistic limit of the AdS/CFT[@Bagchi:2009my]. Since both WCFT and GCA are special cases in our study, it is tempting to guess that the anisotropic GCFT could be the holographic dual of a gravity theory. In order to investigate this possibility, one needs to study the enhanced symmetry of the field theory and in particular the geometry on which the theory is defined. We first study the enhanced symmetries, following the approach developed in [@Polchinski:1987dy; @Hofman:2011zj]. We find that even with anisotropic scaling and Galilean boost symmetry there are still infinitely many conserved charges in the theory, forming the infinite dimensional spin $\ell=\frac{d}{c}$ Galilean algebra. This algebra is different from the chiral part of the $W_\ell$ algebra, even though the weights of the conserved currents are the same. The next question we address is on what kind of geometry such theories should be defined. Can the local Lorentz symmetry be consistent with the scaling symmetry such that the theories are defined on the pseudo-Riemannian manifold? The answer is generally no. Since the Lorentz boost puts the two directions on an equal footing, only isotropic scaling could be consistent with Lorentz symmetry. Actually, as shown in [@Sibiryakov:2014qba], isotropic scaling may imply Lorentz invariance, under the assumption that the propagation speed of signals is finite, together with several other assumptions. The existence of isotropic scaling and Lorentz symmetry may lead to 2D CFT defined on Riemann surfaces. 
In 2D CFT, the combination of $L_0$ and $\bar{L}_0$ gives the dilation and Lorentz boost generators. In contrast, although 2D GCFT has isotropic scaling, the propagation speed in it is infinite and Lorentz invariance is broken as well. For the theories without Lorentz invariance, the geometry cannot be pseudo-Riemannian. Considering the loss of the local Lorentz symmetry, a natural alternative to pseudo-Riemannian geometry is the Newton-Cartan geometry. In [@Hofman:2014loa], it was noted that with the global translation and scaling symmetry, the restriction of Lorentz symmetry requires the theory to be conformally invariant, while the restriction of Galilean symmetry requires the theory to be a warped conformal field theory. Warped CFTs are defined on warped geometry, which is a kind of Newton-Cartan geometry with an additional scaling structure. For a Galilean invariant field theory[^3], it could be coupled to a Newton-Cartan geometry in a covariant way[@Son:2008ye; @Son:2013rqa; @Jensen:2014aia; @Hartong:2014pma; @Banerjee:2014pya; @Banerjee:2014nja; @Banerjee:2016laq; @Duval:2009vt; @Duval:1984cj]. For a 2D Galilean conformal field theory, it is expected to couple to a Newton-Cartan geometry with a scaling symmetry, but a detailed study is lacking. For the Galilean CFT with anisotropic scaling discussed in this paper, we show that it should be defined on a Newton-Cartan geometry with an additional scaling structure, similar to the warped geometry discussed in [@Hofman:2014loa]. These geometries are actually of vanishing curvature and non-vanishing torsion. One advantage of coupling the field theory to geometry is that the symmetries of the theory become manifest. The theories are defined by requiring that the classical action be invariant under certain coordinate reparametrizations. For 2D CFT, the Virasoro symmetry is manifest as worldsheet reparametrization invariance. 
For the background Newton-Cartan geometry, the coordinate reparametrization can be absorbed by the local scaling transformation as well as the local Galilean boost. In other words, the theories are defined on equivalence classes of the Newton-Cartan geometry with special scaling structure. The geometries related by local scaling and Galilean boost belong to the same equivalence class[^4]. Having defined these theories, we find infinitely many conserved charges by considering the currents coupled to the geometric quantities. These conserved charges are exactly the ones obtained using the method in [@Hofman:2011zj]. Furthermore, we study the radial quantization and the state-operator correspondence in the anisotropic GCFT with anisotropic scaling ratio $\ell$ being integer, analogous to the usual CFT$_2$ case. Remarkably, the primary operators in the theory with $\ell>1$ have unusual properties: they do not transform covariantly under the local transformations. Consequently the correlation functions become much more complicated than in the usual cases. The Newton-Cartan geometry with additional scaling structure has a natural conformal compactification with the cylinder topology $R\times S^1$. One has a well-defined torus in this case. This allows us to study the modular properties of the theories. We formulate the modular transformation and obtain the Cardy-like formulae in the general $\ell$ cases. For the warped CFTs with $\ell=0$ and the Galilean conformal field theory with $\ell=1$, we obtain results consistent with previous studies. The remaining parts are organized as follows. In Section 2, we generalize the Hofman-Strominger theorem to the anisotropic GCFT. Assuming that the dilation spectrum is discrete and non-negative, the theories coupled to Newton-Cartan geometry with global translation and scaling symmetries have infinitely many conserved charges. This means the global symmetries are enhanced to local ones. 
In Section 3, we discuss the properties of the Newton-Cartan geometry with additional scaling structure, on which our field theory could be consistently defined. It turns out that the geometries should have vanishing curvature but non-vanishing torsion. In Section 4, we give an intrinsic definition of these field theories, from which one can find the allowed local transformations and the corresponding infinitely many conserved charges directly. These discussions match the results in Section 2. In Section 5, we look further into these theories by considering the Hilbert space and the representation of the algebra. The state-operator correspondence is established. We also discuss the unusual properties of the primary operators for the $\ell>1$ cases. In Section 6, we calculate the two-point functions of the primary operators. A byproduct is the correlation functions of certain related descendant operators. In Section 7, we define the torus partition function and discuss its modular properties. We derive a Cardy-like formula which gives an estimate of the spectral density at high energy. For a unitary theory, the formula counts the degeneracy of the highly excited states. We conclude and give some discussions in Section 8. Enhanced Symmetries =================== In this section, we discuss the enhanced symmetries in two-dimensional (2D) field theory with boost symmetry and anisotropic scalings, using the method developed in [@Polchinski:1987dy] and [@Hofman:2011zj]. Usually, for a theory with global symmetries, we can define the corresponding conserved Noether currents and their conserved charges. However, there could be ambiguities in defining the currents. 
In 2D quantum field theory with scaling symmetry and boost symmetry, under the assumption that there exists a complete basis of local operators as the eigenvectors of the dilation operator with a discrete spectrum, the conserved currents can be organized in a form such that they have the canonical commutation relations with the generators. But the currents can be shifted by certain local operators without changing the commutation and conservation relations. Analyzing the behavior of the local operators leads to special relations of the currents, which in turn tell us that there may be infinitely many conserved charges. Global symmetries ----------------- The global symmetries of 2D QFT we consider in this work include the translations along two directions $x$ and $y$ $$x\rightarrow x'=x+\delta x, \hs{3ex} y\rightarrow y'=y+\delta y,$$ the dilations $$x\rightarrow x'=\lambda ^c x,\hs{3ex}y\rightarrow y'=\lambda ^d y,$$ where $c,d$ are non-negative, and the Galilean boost, which acts as $$y\rightarrow y'=y+v x.$$ It is worth noting that the dilation scales the two directions at the same time, but with different weights $c$ and $d$. We use a slightly different notation from the one in [@Hofman:2014loa]. The generators of the above symmetry transformations are denoted as $H, \bar{H}, D$ and $B$ respectively. They annihilate the vacuum, and satisfy the commutation relations $$[H,\bar{H}]=0,\hs{2ex}[D,H]=-c H,\hs{2ex}[D,\bar{H}]=-d \bar{H},$$ $$[B,H]=-\bar{H},\hs{2ex}[B,\bar{H}]=0,\hs{2ex}[B,D]=(d-c)B.$$ We assume that the dilation operator has a discrete spectrum and the theory has a complete basis of local operators which obey $$[H,\mO]=\partial_x \mO,\hs{2ex}[\bar{H},\mO]=\partial_y \mO,$$ $$[D,\mO]=(c x \partial_x +d y \partial_y)\mO+\D_\mO \mO,$$ where $\D_\mO$ is the non-negative scaling dimension of the operator $\mO$. 
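As a quick consistency check, these commutators can be realized by explicit differential operators acting on functions of $(x,y)$. The realization below is an illustrative choice (a sketch, not fixed by the text, which only specifies the abstract algebra):

```python
import sympy as sp

x, y, c, d = sp.symbols('x y c d')
f = sp.Function('f')(x, y)

# A differential-operator realization of the global generators:
H    = lambda g: sp.diff(g, x)                          # translation in x
Hbar = lambda g: sp.diff(g, y)                          # translation in y
D    = lambda g: c*x*sp.diff(g, x) + d*y*sp.diff(g, y)  # anisotropic dilation
B    = lambda g: x*sp.diff(g, y)                        # Galilean boost

def comm(A, C, g):
    """Commutator [A, C] acting on the test function g."""
    return sp.expand(A(C(g)) - C(A(g)))

# Residuals of the six algebra relations; each should vanish identically.
res = [
    sp.simplify(comm(H, Hbar, f)),               # [H, Hbar] = 0
    sp.simplify(comm(D, H, f) + c*H(f)),         # [D, H] = -c H
    sp.simplify(comm(D, Hbar, f) + d*Hbar(f)),   # [D, Hbar] = -d Hbar
    sp.simplify(comm(B, H, f) + Hbar(f)),        # [B, H] = -Hbar
    sp.simplify(comm(B, Hbar, f)),               # [B, Hbar] = 0
    sp.simplify(comm(B, D, f) - (d - c)*B(f)),   # [B, D] = (d - c) B
]
```

All six residuals vanish; in particular $[B,D]=(d-c)B$ reflects that the boost carries the relative scaling weight of the two directions.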
The global symmetries can restrict the two-point function of $\mO_1,\mO_2$ to be either of the form $${\left\langle}\mO_1(x_1,y_1) \mO_2(x_2,y_2){\right\rangle}=x_{12}^{-c(\Delta_1+\Delta_2)} f(\frac{y_{12}^c}{x_{12}^d}),$$ or of the form $${\left\langle}\mO_1(x_1,y_1) \mO_2(x_2,y_2){\right\rangle}=y_{12}^{-d(\Delta_1+\Delta_2)} f(\frac{x_{12}^d}{y_{12}^c})$$ where $x_{12}=x_1-x_2,\ y_{12}=y_1-y_2$ and $f$ is an a priori unknown function. Moreover in the case that the operators are invariant under the Galilean boost $$[B,\mO(x,y)]=x\partial_y \mO(x,y),$$ the two-point function of $\mO$ does not depend on $y_{12}$ $${\left\langle}\mO(x_1,y_1) \mO(x_2,y_2){\right\rangle}=N_{\mO}x_{12}^{-2c\Delta_{\mO}},$$ where $N_{\mO}$ is the normalization constant. Here for simplicity, we take $\mO_1=\mO_2$. The generators above are related to the conserved Noether currents by $$Q=\int \star J,$$ where the dual one-form $$(\star J)_\mu =H_{\mu\nu}J^{\nu}$$ is defined with the antisymmetric tensor $H_{\mu\nu}$, which serves as the volume form in the Newton-Cartan geometry which will be studied in the next section, and $J$ is the conserved current satisfying $$\nabla_{\mu}J^{\mu}=0.$$ In flat Newton-Cartan geometry, $$Q=\int J_x dx+\int J_y dy, \hs{3ex}\mbox{with}\hs{2ex}\partial_y J_x+\partial_x J_y=0.$$ The integral contour is the slice where we quantize the theory and define the Hilbert space. Corresponding to the generators $H, \bar{H}, D$ and $B$, the currents are denoted as $h_\m,\bar{h}_\m,d_\m,b_\m$. 
The canonical commutation relations of the currents and the charges are $$[H,h_x]=\partial_x h_x,\hs{2ex}[H,h_y]=\partial_x h_y,\hs{2ex}[H,\bar{h}_x]=\partial_x\bar{h}_x,\hs{2ex}[H,\bar{h}_y]=\partial_x\bar{h}_y,$$ $$[H,d_x]=\partial_x d_x+c h_x,\hs{2ex}[H,d_y]=\partial_x d_y+c h_y,\hs{2ex}[H,b_x]=\partial_x b_x+\bar{h}_x,\hs{2ex}[H,b_y]=\partial_x b_y+\bar{h}_y,$$ $$[\bar{H},h_x]=\partial_y h_x,\hs{2ex}[\bar{H},h_y]=\partial_y h_y,\hs{2ex}[\bar{H},\bar{h}_x]=\partial_y\bar{h}_x,\hs{2ex}[\bar{H},\bar{h}_y]=\partial_y\bar{h}_y,$$ $$[\bar{H},d_x]=\partial_y d_x+d h_x,\hs{2ex}[\bar{H},d_y]=\partial_y d_y+d h_y,$$ $$[\bar{H},b_x]=\partial_y b_x,\hs{2ex}[\bar{H},b_y]=\partial_yb_y,$$ $$[D,h_x]=(c x\partial_x+d y\partial_y)h_x+2c h_x,\hs{2ex}[D,h_y]=(c x\partial_x+d y\partial_y)h_y+(c+d) h_y,$$ $$[D,\bar{h}_x]=(c x\partial_x+d y\partial_y)\bar{h}_x+(c+d) \bar{h}_x,\hs{2ex}[D,\bar{h}_y]=(c x\partial_x+d y\partial_y)\bar{h}_y+2d \bar{h}_y,$$ $$[D,d_x]=(c x\partial_x+d y\partial_y)d_x+c d_x,\hs{2ex}[D,d_y]=(c x\partial_x+d y\partial_y)d_y+d d_y,$$ $$[D,b_x]=(c x\partial_x+d y\partial_y)b_x+d b_x,\hs{2ex}[D,b_y]=(c x\partial_x+d y\partial_y)b_y+(2d-c) b_y,$$ $$[B,h_x]=x\partial_yh_x- h_x,\hs{2ex}[B,h_y]=x\partial_yh_y,$$ $$[B,\bar{h}_x]=x\partial_y\bar{h}_x,\hs{2ex}[B,\bar{h}_y]=x\partial_y\bar{h}_y+\bar{h}_y,$$ $$[B,d_x]=x\partial_yd_x- d_x,\hs{2ex}[B,d_y]=x\partial_yd_y,$$ $$[B,b_x]=x\partial_yb_x,\hs{2ex}[B,b_y]=x\partial_yb_y+b_y.$$ We choose the above commutation relations by the following two requirements. One is that the differential operators must act on the fields properly, while the other is that we must recover the commutators of the generators. It is remarkable that there are ambiguities in defining the Noether currents: one can shift the currents by some local operators such that the commutation relations of the generators and the conservation laws are unchanged. One may organize the currents with respect to the canonical commutation relations to define the local operators.
The above canonical commutation relations imply that the dilation and boost currents can be expressed in terms of the translation currents up to some local operators $$d_x=c xh_x+d y\bar{h}_x+s_x,\hs{2ex}d_y=c xh_y+d y\bar{h}_y+s_y$$ $$b_x=x\bar{h}_x+w_x,\hs{2ex}b_y=x\bar{h}_y+w_y,$$ where $s_{x},s_y$ and $w_{x},w_y$ are local operators. In the following we will study the shifts of the currents that do not change the canonical commutation relations. Enhanced symmetries ------------------- Let us first study the boost symmetry and the boost current. The boost currents are related to the translation currents $$b_x=x\bar{h}_x+w_x,\hs{2ex}b_y=x\bar{h}_y+w_y.$$ The conservation laws read $$\partial_y b_x+\partial_x b_y=0,\hs{2ex}\partial_x\bar{h}_y+\partial_y\bar{h}_x=0,$$ which allow us to write the current $\bar{h}_y$ as $$\bar{h}_y=-\partial_x w_y-\partial_y w_x.$$ From the commutation relations, we learn that $w_x$ is invariant under the Galilean boost. From the discussion on the two-point functions, we find $$\partial_{y_1}{\left\langle}w_x(x_1,y_1) w_x(x_2,y_2){\right\rangle}=0,\hs{2ex}{\left\langle}\partial_y w_x \partial_y w_x{\right\rangle}=0.$$ From our assumptions that the spectrum of the dilation operator is discrete and non-negative, the following equation is valid as an operator equation $$\partial_y w_x=0.$$ We can shift the currents without changing the canonical commutation relations $$\bar{h}_y\rightarrow \bar{h}_y+\partial_x w_y,\hs{2ex}\bar{h}_x\rightarrow \bar{h}_x-\partial_y w_y.$$ The $\bar{h}_x$ component must be changed at the same time to keep the conservation law intact. Similar shifts also occur in the currents $b_\m$.
Under the above shift, we can set $$\bar{h}_y=0, \label{hbary}$$ such that $$\partial_y\bar{h}_x=0,$$ which implies that $\bar{h}_x$ is a function of $x$ only $$\bar{h}_x=\bar{h}_x(x).$$ This leads to the existence of an infinite set of conserved charges, $$M_{\epsilon}=\int \epsilon(x)\bar{h}_x(x) dx, \label{Mcharges}$$ where $\epsilon(x)$ is an arbitrary smooth function of $x$. It is easy to see that $M_1$ with $\epsilon=1$ actually generates the translation along the $y$ direction, while $M_x$ with $\epsilon(x)=x$ is the boost generator. This is consistent with the discussion in the warped CFT literature [@Hofman:2011zj; @Detournay:2012pc]. We should emphasize here that this infinite set of conserved charges is common in 2D local Galilean field theories. Next let us turn to the dilation current. Depending on the weight $c$, we will consider $c=0$ and $c\neq 0$ separately. ### Special case: $c=0$ In this case, we have $$d_x=d y\bar{h}_x+s_x,\hs{2ex} d_y=dy\bar{h}_y+s_y.$$ The equations above can be taken as the defining relations of the new local operators $s_x$ and $s_y$, taking into account the fact that $$\bar{h}_x=\bar{h}_x(x),\hs{2ex}\bar{h}_y=0.$$ The canonical commutation relations are still valid, as well as the conservation laws. Considering the conservation laws of $d_\m$ and $h_\m$ $$\partial_y d_x+\partial_x d_y=0,\hs{2ex}\partial_y h_x+\partial_x h_y=0,$$ we have $$d\bar{h}_x=-\partial_y s_x-\partial_x s_y.$$ Now $s_x$ is an operator of weight zero under the dilation. The two-point function is $${\left\langle}s_x s_x {\right\rangle}=\mbox{constant},$$ which implies that $$\partial_y s_x=0$$ is valid as an operator equation. We arrive at $$\bar{h}_x(x)=-\partial_x s_y.$$ Note that $s_y$ is an operator of weight $d$ under the dilation along the $y$ direction, such that $$\partial_y{\left\langle}s_y s_y {\right\rangle}=f(x)\partial_y y^{-2d}\neq 0.$$ But $s_y$ is invariant under the Galilean boost as well, which means that the above correlator should be vanishing.
The only way to be self-consistent is to set $s_y=0$, and therefore $\bar{h}_x=0$. This implies that a 2D theory with $c=0$ and the symmetries $$y\rightarrow y+v x,\hs{2ex}y\rightarrow \lambda y,\hs{2ex}y\rightarrow y+\delta y,$$ is inconsistent and does not exist. ### Other cases: $c\neq 0$ Next we turn to the $c\neq 0$ cases, in which we can normalize the dilation so that $c=1$. However, we keep $c$ unfixed in the following discussion in this section. One should note that the final results cannot be symmetric in $c$ and $d$, since the boost symmetry distinguishes the $x$ and $y$ directions. We start from the dilation currents $d_\m$ $$d_x=c xh_x+d y\bar{h}_x+s_x,\hs{2ex}d_y=c xh_y+d y\bar{h}_y+s_y.$$ The conservation law of $d_\m$ leads to the relation $$c h_y+d\bar{h}_x=-\partial_y s_x-\partial_x s_y.$$ Moreover we have $$\bar{h}_y=0.$$ We can shift the current $h_\m$ as follows $$h_y\rightarrow h'_y=h_y + \frac{1}{c}(\partial_y s_x+\partial_x s_y),\hs{3ex} h_x \rightarrow h'_x=h_x - \frac{1}{c}(\partial_x s_x+\partial_y s_y).$$ This will not change the commutation relations and the conservation laws. Considering the boost behaviour of $s_x,s_y$, after the shift we may have $$c h_y+d\bar{h}_x=0. \label{hy}$$ We can define a set of charges, $$L_\epsilon=\int \{c\epsilon(x) h_x(x,y)+d\epsilon'(x)y\bar{h}_x(x)\}dx+\int\{c\epsilon(x)h_y(x) \}dy, \label{Qcharges}$$ where $\epsilon(x)$ is an arbitrary smooth function of $x$ and $\epsilon'(x)=\p_x \epsilon$. $h_y(x)$ depends only on $x$, since its boost charge vanishes. We denote $$q_x=c\epsilon(x) h_x(x,y)+d\epsilon'(x)y\bar{h}_x(x), \hs{2ex}q_y=c\epsilon(x)h_y(x).$$ One can check that the charges $L_\epsilon$ are indeed conserved $$\partial_y q_x+\partial_x q_y=0,$$ provided that $$\partial_yh_x+\partial_xh_y=0.$$ Note that when $\epsilon=1$, $$L_1=\int h_xdx+\int h_ydy$$ generates the translation in the $x$ direction, while when $\epsilon=x$ $$L_x=\int \{cxh_x(x,y)+dy\bar{h}_x(x)\}dx+\int\{cxh_y(x) \}dy$$ generates the anisotropic scaling symmetry.
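The conservation of the $L_\epsilon$ current can be verified symbolically. The sketch below uses hypothetical component fields (names of our choosing), keeps $c$ and $d$ symbolic (assuming $d\neq 0$ so that $\bar{h}_x$ can be solved from $ch_y+d\bar{h}_x=0$), and checks $\partial_y q_x+\partial_x q_y=0$.

```python
import sympy as sp

# Encode the ingredients stated in the text:
#   d_y h_x + d_x h_y = 0,  c*h_y + d*hbar_x = 0,  h_y = h_y(x),  hbar_x = hbar_x(x).
x, y, c, d = sp.symbols('x y c d')
eps  = sp.Function('eps')(x)               # arbitrary smooth epsilon(x)
h_y  = sp.Function('h_y')(x)               # boost invariant: depends only on x
hbar = -c*h_y/d                            # enforces c*h_y + d*hbar_x = 0
H0   = sp.Function('H0')(x)                # y-independent piece of h_x
h_x  = -sp.diff(h_y, x)*y + H0             # enforces d_y h_x = -d_x h_y

q_x = c*eps*h_x + d*sp.diff(eps, x)*y*hbar
q_y = c*eps*h_y
divergence = sp.diff(q_x, y) + sp.diff(q_y, x)
assert sp.simplify(divergence) == 0
print("L_epsilon current is conserved")
```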
In the case that $d=0$, the relation $ch_y+d\bar{h}_x=0$ gives $$h_y=0.$$ Considering the conservation law, we find that $h_x$ depends only on $x$. This is exactly the case for the warped CFTs discussed in [@Hofman:2011zj; @Hofman:2014loa]. Algebra of enhanced symmetries ------------------------------ After some calculations, we arrive at the algebra, $$[L_\epsilon,L_{\tilde{\epsilon}}]=L_{c \epsilon'\tilde{\epsilon}-c \tilde{\epsilon}'\epsilon}+\cdots,$$ $$[L_\epsilon,M_{\tilde{\epsilon}}]=M_{d\epsilon'\tilde{\epsilon}-c \tilde{\epsilon}'\epsilon}+\cdots,$$ $$[M_\epsilon,M_{\tilde{\epsilon}}]=\cdots,$$ where $\epsilon$ and $\tilde{\epsilon}$ are arbitrary smooth functions of $x$ and the ellipses denote potential central extension terms allowed by the Jacobi identity.\ The algebra of the plane modes without central extension is $$\begin{aligned} \label{algebra} \nonumber \left[l_n,l_m\right]&=& c(n-m)l_{n+m},\\ \nonumber\left[l_n,m_m\right]&=& (dn-c m)m_{n+m} ,\\ \left[m_n,m_m\right] &=&0.\end{aligned}$$ This is the infinite dimensional spin-$\ell$ Galilean algebra, with $\ell=\frac{d}{c}$ [@Henkel:1997zz]. The central extension is constrained by the Jacobi identity [@Hosseiny:2014dxa]. There are various kinds of extensions, which we list here in order. - $T$-extension is always allowable: $$\left[L_n,L_m\right]= (n-m)L_{n+m}+\frac{c_T}{12}n(n^2-1)\delta_{n+m,0}.$$ This gives the Virasoro algebra. - $B$-extension is only allowable for $\ell=1$: $$\left[L_n,M_m\right]= (n-m)M_{n+m}+\frac{c_B}{12}n(n^2-1)\delta_{n+m,0}.$$ This gives the Galilean conformal algebra (GCA). The field theories equipped with GCA have been discussed in [@Bagchi:2009ca; @Bagchi:2009pe; @Bagchi:2016geg; @Bagchi:2017cpu]. - $M$-extension is only allowable for $d=0$, the infinite dimensional spin-$0$ Galilean algebra $$\left[M_n,M_m\right]=c_Mn\delta_{n+m,0}.$$ This is actually the algebra for the warped CFT, with $c_M$ being the Kac-Moody level.
- Infinite $M$-extensions, in which there are infinitely many $c_M$ charges $$[M_n,M_m]=(n-m){(c_M)}_{n+m},\hs{3ex} [L_n,{(c_M)}_m]=-m{(c_M)}_{n+m}.$$ A familiar case is the Schrödinger-Virasoro algebra, in which $\ell=1/2$. Note that for arbitrary spin $\ell$, there could be similar algebraic structures. Geometry ======== In this section, we discuss the underlying geometry on which the theories with anisotropic scaling and boost symmetries can be defined. Recall that a 2D CFT in the Euclidean signature is defined on a two-dimensional Riemann surface, which has the translation symmetries, rotation symmetry and a scaling symmetry. More importantly, the classical action is invariant under the (anti-)holomorphic transformations $$z\rightarrow f(z),\ \ \ \bar{z}\rightarrow \bar{f}(\bar{z}),$$ but the partition function and correlation functions may suffer from a potential quantum anomaly due to the change of the measure under the transformations. For the Galilean field theories, one needs to introduce the Newton-Cartan structure into the two-dimensional geometry to make the Galilean symmetries manifest. Furthermore, a special scaling structure is needed to define the dynamical variable, the affine connection. For the warped CFTs, the underlying Newton-Cartan geometry has been studied in [@Hofman:2014loa]. For the case at hand, however, we need to introduce a Newton-Cartan geometry with a different scaling structure. Flat Geometry ------------- We start with the geometry similar to the flat Euclidean geometry. Such geometry admits the following symmetries $$H:x\rightarrow x'=x+\delta x,$$ $$\bar{H}:y\rightarrow y'=y+\delta y,$$ $$B:y\rightarrow y'=y+v x.$$ Note that for different scalings $c,d$, the flat geometries are the same.
The invariant vector and one-form are respectively $$\bar{q}^a=\left( \begin{array}{c} 0\\ 1 \end{array} \right ),\ \ \ \ q_a=(1\ \ \ 0),\hs{3ex}a=1,2.$$ Similarly, there is a metric $$g_{ab}=q_aq_b=\left( \begin{array}{cc} 1 & 0\\ 0 & 0 \end{array} \right ),$$ which is flat and invariant under the boost transformation $$g=(B^{-1})^{T}g\,B^{-1}.$$ The metric is degenerate, and it is orthogonal to the invariant vector. It has one positive eigenvalue and one vanishing eigenvalue. Besides, there is an antisymmetric tensor $h_{ab}$ to lower the index $$q_a=h_{ab}\bar{q}^b.$$ It is invariant under the boost transformation as well. It is invertible with $h^{ab}h_{bc}=\d^a_c$, and its inverse helps us to raise the index $$\bar{q}^a=h^{ab}q_b.$$ With $h^{ab}$, we can obtain the upper-index metric $$\bar{g}^{ab}=\bar{q}^a\bar{q}^b=h^{ac}h^{bd}q_c q_d=h^{ac}h^{bd}g_{cd}.$$ Curved Geometry --------------- In the previous subsection, the vector space and the dual 1-form space were introduced to define the geometry. The antisymmetric tensor $h_{ab}$ maps the vectors to one-forms, and the metric $g_{ab}$ defines the inner product of the vectors. This is in contrast with the usual Riemannian geometry, in which the metric serves also as a tool to map the vectors to the one-forms. The curved geometry is defined by ‘gluing flat geometry’, in the sense that the tangent space is flat with the map determined by the zweibein. One needs to define the connection properly. The zweibein is required to map the space-time vectors to the tangent vectors, $$e^a_{\mu}:v^{\mu}\rightarrow \bar{v}^a.$$ The covariant derivative is $$D=\partial+\omega+\Gamma,$$ where $\omega$ is the spin connection to connect the points in the tangent space, while $\Gamma$ is the affine connection to connect the points in the base manifold. In the usual case, the affine connection is determined uniquely by requiring the metric to be compatible and torsion free, together with the zweibein postulate.
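The flat-structure relations can be checked numerically. The sketch below assumes explicit components (the boost-invariant vector $\bar{q}^a=(0,1)$, i.e. $\partial_y$; the boost-invariant one-form $q_a=(1,0)$, i.e. $dx$; $h_{ab}$ the antisymmetric symbol) and an arbitrary boost parameter; lower-index objects are transformed with $B^{-1}$.

```python
import numpy as np

v = 2.5                                        # arbitrary boost parameter
Bst  = np.array([[1.0, 0.0], [v, 1.0]])         # boost acting on vector components
Binv = np.linalg.inv(Bst)

qbar = np.array([0.0, 1.0])                    # invariant vector qbar^a
q    = np.array([1.0, 0.0])                    # invariant one-form q_a
h    = np.array([[0.0, 1.0], [-1.0, 0.0]])     # antisymmetric index-lowering tensor
g    = np.outer(q, q)                          # degenerate metric g_ab = q_a q_b

assert np.allclose(Bst @ qbar, qbar)           # qbar^a is boost invariant
assert np.allclose(q @ Binv, q)                # q_a is boost invariant
assert np.allclose(Binv.T @ g @ Binv, g)       # the metric is boost invariant
assert np.allclose(h @ qbar, q)                # q_a = h_ab qbar^b
assert np.allclose(g @ qbar, np.zeros(2))      # g is orthogonal to qbar^a
assert np.allclose(np.sort(np.linalg.eigvalsh(g)), [0.0, 1.0])  # eigenvalues 0 and 1
print("flat structure verified")
```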
In the Galilean case, the torsion-free condition cannot determine the spin connection uniquely, and other conditions should be imposed to get a unique spin connection, and then the affine connection via the zweibein postulate [@Jensen:2014aia; @Bergshoeff:2014uea]. From the zweibein postulate $$D_{\mu}e^a_{\nu}=0,$$ and the invertibility of $e^a_\m$, one may get the affine connection $$\Gamma^\rho_{\mu\nu}=e^\rho_a\partial_\mu e^a_\nu+e^\rho_a e^b_\nu \omega^a_{~b\mu},$$ where $$\omega^a_{~b\mu}=\bar{q}^aq_b\omega_{\mu}.$$ The torsion and curvature two-forms are respectively $$T^a=de^a+\omega^a_{~b} \wedge e^b, \hs{3ex} R^a_{~b}=d\omega^a_{~b}.$$ The metric compatibility requires $$D_\mu \bar{q}^a=D_\mu q_a=0.$$ Instead of the torsion-free condition, the condition proposed here is that the geometry is compatible with the scaling symmetry, i.e. the scaling structure is a covariant constant $$D_\mu J^a_b=0.$$ The scaling structure is defined to select the scaling weights of vectors and 1-forms $$J^a_b\bar{q}^b=-d\bar{q}^a, \hs{3ex} J^a_bq^b=-c q^a.$$ Under the scaling $$x\rightarrow \lambda^c x,\ \ \ y\rightarrow \lambda^dy,$$ the infinitesimal transformation is $$\Lambda^a_b=\delta^a_b+\lambda J^a_b.$$ The scaling structure is expressed covariantly as $$J^a_b=-c(q^cq_c)^{-1}q^aq_b-d(\bar{q}^c\bar{q}_c)^{-1}\bar{q}^a\bar{q}_b,$$ by requiring that $$\bar{q}_aq^a=0.$$ Note again that $\bar{q}^a$ and $q_a$ are the boost-invariant vector and one-form, and then the vector $q^a$ and one-form $\bar{q}_a$ are defined by the scaling structure in turn. Now the condition that the scaling structure is covariantly constant implies $$q_a\partial_\mu q^a=0,\ \ \ \bar{q}^a\partial_\mu \bar{q}_a=0,$$ which means that $$q_aq^a=\mbox{const.}, \ \ \ \bar{q}_a\bar{q}^a=\mbox{const.}$$ As the normalization should be the same at different points, one can choose the constants to be unity.
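A quick numerical check of the scaling structure in the flat frame, with the assumed unit normalizations $q_aq^a=\bar{q}_a\bar{q}^a=1$ and sample values of the weights $c,d$:

```python
import numpy as np

# Assumed flat components: q_a = q^a = (1,0), qbar^a = qbar_a = (0,1).
c, d = 2.0, 3.0
q_lo, q_up   = np.array([1.0, 0.0]), np.array([1.0, 0.0])   # q_a, q^a
qb_up, qb_lo = np.array([0.0, 1.0]), np.array([0.0, 1.0])   # qbar^a, qbar_a

# J^a_b = -c q^a q_b - d qbar^a qbar_b (unit normalizations absorbed)
J = -c*np.outer(q_up, q_lo) - d*np.outer(qb_up, qb_lo)
assert np.allclose(J @ qb_up, -d*qb_up)     # qbar^a carries weight -d
assert np.allclose(J @ q_up, -c*q_up)       # q^a carries weight -c
assert np.isclose(q_lo @ q_up, 1.0) and np.isclose(qb_lo @ qb_up, 1.0)
assert np.isclose(qb_lo @ q_up, 0.0)        # qbar_a q^a = 0
print("scaling structure verified")
```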
That the scaling structure is covariantly constant also implies that the spin connection can be expressed as $$\omega_\mu=-\frac{1}{c+d}(c\bar{q}_a\partial_\mu q^a+dq^b\partial_\mu \bar{q}_b).$$ One should also impose that $$D_\mu q^a=0.$$ This means that the weights of vectors do not change under parallel transport. This condition implies $$\bar{q}_a\partial_\mu q^a=q^a\partial_\mu \bar{q}_a.$$ Then one reaches the conclusion that, in the $(x,y)$ coordinates in the tangent space, $$\omega_\mu=0,$$ and in turn $$R=0.$$ However, the affine connection and the torsion $$\Gamma^\rho_{\mu\nu}=e^\rho_a\partial_\mu e^a_\nu,\ \ \ T^a=de^a$$ are now non-vanishing. This is the same as the warped geometry of warped CFTs. Affine connection ----------------- In this subsection, we discuss the various constraints to determine the affine connection without the help of the zweibein. The starting point is the Newton-Cartan geometry $(M,A_{\mu},G^{\mu\nu})$. $A_\mu$ is a temporal one-form which defines the local time direction, while $G^{\mu\nu}$ is the inverse metric on the spatial slice. One may define $$G^{\mu\nu}=\bar{A}^\mu\bar{A}^\nu,\ \ G_{\mu\nu}=A_\mu A_\nu,$$ and the antisymmetric tensor $$H_{\mu\nu}=e^a_{\mu}e^b_{\nu}h_{ab}=e^a_{[\mu} e^b_{\nu]} h_{ab}=A_{[\mu}\bar{A}_{\nu]}.$$ The velocity field $A^{\mu}$ is defined by $$A_\mu\bar{A}^\mu=0,\hs{2ex} \bar{A}_\mu A^\mu=0,$$ where $\bar{A}_\mu$ is the dual one-form of $A^\mu$ $$\bar{A}_\nu=H_{\mu\nu}A^\mu.$$ The vectors and one-forms are related to the zweibein in the last subsection by $$\hat A=\hat e \cdot \hat q.$$ In components, we have $$\bar{A}^\mu=e_a^\mu\bar{q}^a,\ \ \ A_\mu=e^a_\mu q_a,$$ $$\bar{A}_\mu=e^a_\mu\bar{q}_a,\ \ \ A^\mu=e_a^\mu q^a.$$ The question is what conditions should be imposed to determine the geometry completely. In the following, we review the fact that metric compatibility and the torsion-free condition cannot determine the affine connection uniquely.
The covariant derivative acts on a tensor as $$D_\mu V^\nu_\rho=\partial_\mu V^\nu_\rho+\Gamma^\nu_{\sigma\mu}V^\sigma_\rho-\Gamma^\sigma_{\rho\mu}V^\nu_\sigma.$$ The torsion is $$T^\mu_{\nu\rho}=\Gamma^\mu_{\nu\rho}-\Gamma^{\mu}_{\rho\nu},$$ and the curvature is defined as usual. Then the constancy condition of $A_\mu$ implies $$D_\mu A_\nu=\partial_\mu A_\nu-\Gamma^\rho_{\mu\nu}A_\rho=0,$$ which gives constraints on the temporal affine connection $$\Gamma^\rho_{\mu\nu}A_\rho=\partial_\mu A_\nu.$$ Along with the torsion-free condition $$T^\mu_{\nu\rho}=\Gamma^\mu_{\nu\rho}-\Gamma^{\mu}_{\rho\nu}=0,$$ one finds that the temporal one-form is closed $$\partial_\mu A_\nu-\partial_\nu A_\mu=0.$$ Considering the constancy of $\bar{A}^\mu$, one finds the affine connection $$\Gamma^\mu_{\nu\rho}=\bar{A}^\mu\bar{A}^\sigma(\bar{A}_{(\rho}\partial_{\nu)}\bar{A}_\sigma-\bar{A}_{(\nu|}\partial_\sigma\bar{A}_{|\rho)})+\bar{A}^\mu\partial_{(\nu}\bar{A}_{\rho)}+\bar{A}^\mu\bar{A}^\sigma A_{(\nu}F_{\rho)\sigma},$$ where $F_{\mu\nu}$ is an arbitrary anti-symmetric tensor. Moreover we impose the condition that the scaling structure is covariantly constant, which implies that the parallel transport keeps the scaling weight of the vectors invariant. This fact implies that $$D_\mu A^\nu=0,$$ and then $$F_{\mu\nu}=0.$$ The requirement that the scaling structure is covariantly constant also implies $$\bar{A}^\mu\bar{A}_\mu=\mbox{const}.$$ This in turn determines $A^\mu$ and the affine connection $$\Gamma^\rho_{\mu\nu}=0.$$ Actually these conditions are too strong to allow interesting geometry. To get a non-vanishing affine connection, one may relax the torsion-free condition. The only constraints we impose are the metricity and the condition that the scaling structure is covariantly constant.
Then the affine connection reads $$\Gamma^\rho_{\mu\nu}=A^{\rho}\partial_{\mu}A_\nu+\bar{A}^{\rho}\partial_{\mu}\bar{A}_\nu.$$ In this case, the curvature is vanishing, but the torsion tensor is not $$R=0,\ \ T_{\mu\nu}^\rho=A^\rho\partial_{[\mu}A_{\nu]}+\bar{A}^\rho\partial_{[\mu}\bar{A}_{\nu]}.$$ This is the case we focus on in this paper. Note that if one does not require the covariantly constant scaling structure, there are remaining ambiguities, the so-called Milne boosts, in defining the velocity vector. One may impose another set of consistent constraints, including the metric compatibility, the torsion-free condition and $$R^{(\mu\nu)}_{[\rho\sigma]}=0.$$ These conditions imply $$dF=0,\ \ \ F=dQ.$$ $F$ is closed and can be expressed as the exterior derivative of a local $U(1)$ connection coupled to the particle-number current. This is the so-called geometry with Newtonian connection. The field theories defined on such geometries have non-vanishing central terms, which are the particle numbers or the mass extensions. Defining Field Theories ======================= In this section, we discuss what kinds of field theories could be coupled to the geometry discussed above in a covariant way, and check that there are indeed infinitely many conserved charges in these theories. As shown above, the case $c=0$ does not lead to a consistent theory, so here we focus on the case $c\neq 0$. To simplify the notation, we use the freedom in the overall re-scaling to set $c=1$.
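Before proceeding, the claim in the previous section that this connection has vanishing curvature but non-vanishing torsion can be checked on a sample background. The sketch below assumes a frame $A=dx$, $\bar{A}=e^{\sigma(x,y)}dy$ with an arbitrary function $\sigma$ (our own choice of example), and builds the connection in the equivalent Weitzenböck form $\Gamma^\rho_{\mu\nu}=e^\rho_a\partial_\mu e^a_\nu$.

```python
import sympy as sp

x, y = sp.symbols('x y')
sigma = sp.Function('sigma')(x, y)

E = sp.Matrix([[1, 0], [0, sp.exp(sigma)]])   # rows: components of A_mu, Abar_mu
Einv = E.inv()                                 # gives A^mu, Abar^mu

# Gamma^rho_{mu nu} = e^rho_a d_mu e^a_nu; G[mu][rho, nu] holds the components
G = [Einv * E.diff(v) for v in (x, y)]

# curvature two-form R_{xy} = d_x Gamma_y - d_y Gamma_x + [Gamma_x, Gamma_y]
Rcurv = sp.simplify(G[1].diff(x) - G[0].diff(y) + G[0]*G[1] - G[1]*G[0])
assert Rcurv == sp.zeros(2, 2)                 # flat, for any sigma

# torsion T^y_{xy} = Gamma^y_{xy} - Gamma^y_{yx} is generically non-zero
T_y_xy = sp.simplify(G[0][1, 1] - G[1][1, 0])
assert T_y_xy == sp.diff(sigma, x)
print("R = 0 and T != 0 verified on the sample background")
```

The vanishing of the curvature here is a general feature of connections of the pure-frame (Weitzenböck) type, independent of the chosen $\sigma$.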
The geometry is defined by $A_\mu,\bar{A}_\mu,A^\mu,\bar{A}^\mu$, satisfying $$A_\mu A^\mu=1,\ \bar{A}_\mu\bar{A}^\mu=1,\ A_\mu\bar{A}^\mu=0,\ \bar{A}_\mu A^\mu=0.$$ In the discussion below, the canonical one-forms are chosen to be $$A=dx,\ \ \bar{A}=dy.$$ Under the scaling $$x\rightarrow \lambda x,\ \ y\rightarrow \lambda^dy,$$ the vector fields and 1-form fields transform as $$A_\mu\rightarrow \lambda A_\mu,\ \ \bar{A}_\mu\rightarrow \lambda^d\bar{A}_\mu,$$ $$A^\mu\rightarrow \lambda^{-1} A^\mu,\ \ \bar{A}^\mu\rightarrow \lambda^{-d} \bar{A}^\mu.$$ Under the boost $$y\rightarrow y+v x,$$ we have $$A_\mu\rightarrow A_\mu,\ \ \bar{A}_\mu\rightarrow\bar{A}_\mu+vA_\mu,$$ $$A^\mu\rightarrow A^\mu-v \bar{A}^\mu,\ \ \bar{A}^\mu\rightarrow\bar{A}^\mu.$$ Now we want to find the diffeomorphisms of the geometry by considering an infinitesimal coordinate transformation, $$x\rightarrow x+\epsilon(x,y),\ \ \ y\rightarrow y+\xi(x,y).$$ The infinitesimal variations are $$\delta (dx)=\partial_x\epsilon\, dx+\partial_y\epsilon\, dy,\hs{3ex}\delta (dy)=\partial_x\xi\, dx+\partial_y\xi\, dy. \label{if1}$$ These should be the same as the variations arising locally from the Galilean boost and anisotropic scaling transformations, $$\delta (dx)=\lambda\, dx,\hs{3ex}\delta (dy)=d\lambda\, dy+v\, dx. \label{if2}$$
Comparing the two variations above, we get the constraints on the transformations, $$d\partial_x \epsilon(x,y)=\partial_y \xi(x,y),$$ $$\partial_y\epsilon(x,y)=0.$$ The allowed infinitesimal transformations are $$x\rightarrow x+\epsilon(x),\ \ \ y\rightarrow (1+d\epsilon'(x))y, \label{epsilon1}$$ $$x\rightarrow x,\ \ \ y\rightarrow y+\xi(x). \label{epsilon2}$$ It turns out that the allowed finite symmetry transformations are $$x\rightarrow f(x) ,\hs{3ex}y\rightarrow f'(x)^dy,$$ and $$x\rightarrow x,\hs{3ex}y\rightarrow y+g(x).$$ From the infinitesimal transformations above, we read off the generators $$l_n=-x^{n+1}\partial_x-d(n+1)x^n y\partial_y,$$ $$m_n=x^{n+d}\partial_y,$$ which satisfy the algebra $$\begin{aligned} &&\left[l_n,l_m\right]= (n-m)l_{n+m},\\ &&\left[l_n,m_m\right]= (dn-m)m_{n+m} ,\\ &&\left[m_n,m_m\right] =0.\end{aligned}$$ This algebra is analogous to the Witt algebra, and it is called the spin-$d$ Galilean algebra. The centrally extended algebra $\tilde{g}=g\oplus C$ has been discussed in section 2. Now we require that the action of the theory be invariant under the symmetries above $$\delta S[\delta A_\mu,\delta \bar{A}_\mu]=0,$$ where $$\delta A_\mu=\lambda A_\mu,\ \ \ \delta \bar{A}_\mu=d\lambda \bar{A}_\mu+v A_\mu.$$ The corresponding currents can be read off from $$\delta S[\delta A_\mu,\delta \bar{A}_\mu]=\int H (J^\mu \delta A_\mu+\bar{J}^\mu\delta \bar{A}_\mu),$$ with $$\bar{J}^\mu A_\mu=0,\ \ \ J^\mu A_\mu+d\bar{J}^\mu \bar{A}_\mu=0.$$ In the canonical coordinates $(x,y)$, $$(\star\bar{J})_x=\bar{h}_x,\ \ (\star\bar{J})_y=\bar{h}_y,\ \ (\star J)_x=h_x,\ \ (\star J)_y=h_y.$$ The conditions are simply $$\bar{h}_y=0,\ \ h_y=-d\bar{h}_x,$$ which are exactly the relations found in section 2.
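The closure of the vector fields $l_n$, $m_n$ into the spin-$d$ Galilean algebra can be verified symbolically for sample modes and weights (a sanity check on examples, not a proof):

```python
import sympy as sp

# Check [l_n, l_k] = (n-k) l_{n+k}, [l_n, m_k] = (d*n-k) m_{n+k}, [m_n, m_k] = 0
# for the vector fields quoted above (c = 1), on a test function f(x, y).
x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)

def l(n, d):
    return lambda g: -x**(n + 1)*sp.diff(g, x) - d*(n + 1)*x**n*y*sp.diff(g, y)

def m(n, d):
    return lambda g: x**(n + d)*sp.diff(g, y)

def comm(P, Q, g):
    return sp.expand(P(Q(g)) - Q(P(g)))

for d in (0, 1, 2, 3):
    for n in (0, 1, 2):
        for k in (0, 1, 2):
            assert sp.simplify(comm(l(n, d), l(k, d), f) - (n - k)*l(n + k, d)(f)) == 0
            assert sp.simplify(comm(l(n, d), m(k, d), f) - (d*n - k)*m(n + k, d)(f)) == 0
            assert sp.simplify(comm(m(n, d), m(k, d), f)) == 0
print("spin-d Galilean algebra verified")
```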
The other condition is the conservation of the currents [@Hofman:2014loa] $$D_\mu J^\mu_a=0.$$ With $$J^\mu=q^aJ^\mu_a,\ \ \ \bar{J}^\mu=\bar{q}^aJ^\mu_a,$$ we have $$\nabla_\mu J^\mu=0,\ \ \ \nabla_\mu\bar{J}^\mu=0.$$ This implies $$\partial_y \bar{h}_x=0,\ \ \ \partial_y h_x+\partial_x h_y=0.$$ This allows us to define infinitely many conserved charges, as the charges $M_\epsilon$ and $L_\epsilon$ constructed in section 2. In summary, we have shown that the field theory defined on the Newton-Cartan geometry with anisotropic scaling and boost symmetry indeed possesses the conserved currents and charges we need. In the following discussion, we denote $\bar{h}_x=M(x)$ and $h_x=T(x,y)$. Quantization ============ In this section, we consider how to define the theories on the geometry discussed above. We will work in operator language, and we focus on the case with $\ell=d/c$ being an integer[^5]. For simplicity, we set $c=1$ such that $d$ is just an integer. Cylinder Interpretation ----------------------- The starting point is the so-called canonical cylinder characterized by a spatial circle $\phi$ and a temporal direction $t$ $$(\phi,t)\sim(\phi+2\pi,t).$$ One can get other kinds of spatial circles by tilting $t\rightarrow t+g(x)$. The compactified coordinate is considered in order to eliminate any potential infrared divergence. Now, we define the ‘lightcone coordinates’, $$x=t+\phi,\ \ y=t-\phi.$$ We impose the symmetry on the $x,\ y$ directions as discussed before $$x\rightarrow f(x),\ \ y\rightarrow f'(x)^{d}y,$$ and $$x\to x, \ \ y\rightarrow y+g(x),$$ with $f(x)$ and $g(x)$ being arbitrary smooth functions of $x$. Consider the following complex transformation which maps the canonical cylinder to the reference plane $$z=e^{ix}=e^{t_E-i\phi},\ \ \ \tilde{y}=(iz)^dy,$$ where $t_E=-it$ is the Wick-rotated time. We have not yet considered the tilting of the $y$ direction. The real-time cylinder is capped off at $t=0$ by a reference plane with imaginary time.
$$t_E\rightarrow-\infty,\ \ z\rightarrow 0,$$ $$t_E\rightarrow\infty,\ \ z\rightarrow \infty.$$ The Hilbert spaces are defined on the equal-imaginary-time slices. On the reference plane, this leads to the radial quantization. The ‘in state’ and ‘out state’ are defined by inserting the operators at $t_E=\mp \infty$. On the reference plane, these states are defined at the origin and the radial infinity. One can further put the operators at $y=0$ using the translation symmetry of the $y$ direction. The Hamiltonian operator relates different Hilbert spaces on the canonical cylinder, while the dilation on the plane relates the Hilbert spaces on different radial slices (in $x$, at different $y$) with each other. One can invert the procedure above to get the canonical cylinder from the reference plane. Notice that $z$ provides one real degree of freedom after the continuation, while the other degree of freedom is offered by $y$ instead of the analytical continuation of $\bar{z}$. The generators of the algebra act on the plane in the following way, $$L_n=-x^{n+1}\partial_x-d(n+1)x^ny\partial_y,$$ $$M_n=x^{n+d}\partial_y.$$ The generators $$L_{-1},\ L_0,\ L_1,\ M_{-d},\ \cdots\ ,M_d$$ can act regularly at each point, and they generate the global subgroup. Now we want to find a set of basis operators filling representations of the algebra, using the theory of induced representations. The subgroup keeping the origin invariant is generated by $$L_0,\ L_{n>0},\ \ M_{-d+1},\ M_{-d+2},\ \cdots.$$ The local operators can be labelled by the eigenvalues $(h_\mO, \xi_\mO)$ of the generators of the Cartan subalgebra $L_0,\ M_0$ $$[L_0,\mO(0,0)]=h_\mO \mO(0,0),\ \ \ [M_0,\mO(0,0)]=\xi_\mO \mO(0,0).$$ Requiring $h_\mO$ to be bounded below, one arrives at the highest weight representations $$[L_n,\mO(0,0)]=0,\ \ [M_n,\mO(0,0)]=0,\ \ \ \mbox{for}\ n>0.$$ This defines a primary operator. One can get the tower of descendant operators by acting with $L_{-n},\ M_{-n}$, $n>0$, on $\mO$.
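The statement about the stabilizer of the origin can be checked directly from the components of the plane vector fields; the sketch below uses the sample weight $d=2$ (c = 1).

```python
import sympy as sp

# L_n = -x**(n+1) d_x - d*(n+1)*x**n*y d_y and M_n = x**(n+d) d_y:
# L_{n>=0} and M_{n>=-d+1} vanish at (0,0), while L_{-1}, M_{-d} translate.
x, y = sp.symbols('x y')
d = 2
origin = {x: 0, y: 0}

def L_components(n):
    return (-x**(n + 1), -d*(n + 1)*x**n*y)

def M_component(n):          # only the d_y component is non-trivial
    return x**(n + d)

for n in range(0, 4):
    assert all(sp.sympify(comp).subs(origin) == 0 for comp in L_components(n))
for n in range(-d + 1, 3):
    assert sp.sympify(M_component(n)).subs(origin) == 0
assert sp.sympify(L_components(-1)[0]).subs(origin) == -1   # L_{-1} = -d_x
assert sp.sympify(M_component(-d)).subs(origin) == 1        # M_{-d} = d_y
print("origin stabilizer verified")
```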
The operators inserted at the origin give the states, $$\mO(0,0)|0\rangle\rightarrow |h_\mO,\xi_\mO \rangle.$$ This gives a bijection between the states in the Hilbert space in the infinite past and the operator insertions at the origin on the reference plane. Using the commutation relations, the states above fill representations of the algebra as well. Such representations will be discussed in the following subsection. The operators at other positions can be obtained by using the translations, $$\mO(x,y)=U^{-1}\mO(0,0)U,\ \ \ U=e^{-xL_{-1}+yM_{-d}}.$$ In order to compute the commutators $[L_n,\mO(x,y)]$ and $[M_n,\mO(x,y)]$, we notice the relations $$[L_n,\mO(x,y)]=U^{-1}[UL_nU^{-1},\mO(0,0)]U,\hs{3ex}[M_n,\mO(x,y)]=U^{-1}[UM_nU^{-1},\mO(0,0)]U.$$ By using the Baker-Campbell-Hausdorff (BCH) formula $$e^{-A}Be^A=B+[B,A]+\frac{1}{2!}[[B,A],A]+\cdots,$$ we have $$UL_nU^{-1}=\sum_{k=0}^{n+1}\frac{(n+1)!}{(n+1-k)!k!}(x^kL_{n-k}-dkyx^{k-1}M_{n+1-d-k}),$$ and get $$\begin{aligned} [L_n,\mO(x,y)] \nonumber&=&(-x^{n+1}\partial_x-d(n+1)x^ny\partial_y+(n+1)x^n h_\mO+dn(n+1)x^{n-1}\xi_\mO)\mO(x,y) \\ &+&\sum_{k=n+2-d}^{n}C_{n+1}^k dkyx^{k-1}(M_{n-d-k+1}\mO)(x,y),\ \ \ \mbox{for}\ n\geq -1.\end{aligned}$$ Similarly, by using $$UM_nU^{-1}=\sum_{k=0}^{n+d}\frac{(n+d)!}{(n+d-k)!k!}x^k M_{n-k},$$ we find $$[M_n,\mO(x,y)]=(x^{n+d}\partial_y+C^n_{n+d}x^n\xi_\mO)\mO(x,y)+\sum_{k=n+1}^{n+d-1}C^k_{n+d}x^k (M_{n-k}\mO)(x,y),\ \ \ \mbox{for}\ n\geq -d.$$ A special case is $d=0$. Now $M_0$ does not keep the origin invariant. Nevertheless, $M_0$ is still the generator of the Cartan subalgebra, $$[M_0,\mO(x,y)]=\xi_\mO \mO(x,y),$$ and $$[M_n,\mO(x,y)]=x^{n}\partial_y\mO(x,y)=x^n\xi_\mO \mO(x,y).$$ Representation -------------- The Hilbert space is spanned by the states filling the proper representations of the algebra. The critical assumption is that the spectrum of $L_0$ is bounded below, so that we can find the highest weight representations.
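The conjugation formula for $UM_nU^{-1}$ can be checked using only the mode algebra, without the BCH bookkeeping by hand. The sketch below (sample values $d=2$, $n=1$, $c=1$; the dictionary encoding of algebra elements is our own device) sums the terminating adjoint series $e^{\mathrm{ad}_A}M_n$ with $A=-xL_{-1}+yM_{-d}$ and compares it with the quoted closed form.

```python
import sympy as sp

x, y = sp.symbols('x y')

def bracket(e1, e2, d):
    """Lie bracket of algebra elements stored as {('L'|'M', mode): coeff},
    using [L_a,L_b]=(a-b)L_{a+b}, [L_a,M_b]=(d*a-b)M_{a+b}, [M_a,M_b]=0."""
    out = {}
    for (t1, a), c1 in e1.items():
        for (t2, b), c2 in e2.items():
            if t1 == 'L' and t2 == 'L':
                key, coeff = ('L', a + b), (a - b)*c1*c2
            elif t1 == 'L' and t2 == 'M':
                key, coeff = ('M', a + b), (d*a - b)*c1*c2
            elif t1 == 'M' and t2 == 'L':
                key, coeff = ('M', a + b), -(d*b - a)*c1*c2
            else:
                continue                                  # [M, M] = 0
            out[key] = sp.expand(out.get(key, 0) + coeff)
    return {k: v for k, v in out.items() if v != 0}

def adjoint_exp(A, elem, d, max_order=30):
    """e^{ad_A} elem; the series terminates for the elements used here."""
    total, term, fact = dict(elem), dict(elem), 1
    for k in range(1, max_order):
        term = bracket(A, term, d)
        if not term:
            break
        fact *= k
        for key, v in term.items():
            total[key] = sp.expand(total.get(key, 0) + v/sp.Integer(fact))
    return total

d, n = 2, 1
A = {('L', -1): -x, ('M', -d): y}
result = adjoint_exp(A, {('M', n): sp.Integer(1)}, d)
for k in range(n + d + 1):
    expected = sp.binomial(n + d, k)*x**k
    assert sp.simplify(result.get(('M', n - k), 0) - expected) == 0
print("U M_n U^{-1} mode formula verified")
```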
Starting with an arbitrary state and acting with the generators $L_n,\ M_n\ (n>0)$, one must reach a state annihilated by all the generators with positive roots. This is the primary state, which is a highest weight state. The Cartan subalgebra is $(L_0,\ M_0)$, so we can look for common eigenstates of $(L_0,\ M_0)$. We consider the case that the primary operators can be diagonalized, $$L_0|h,\xi\rangle=h|h,\xi\rangle,\ \ \ M_0|h,\xi\rangle=\xi|h,\xi\rangle,$$ $$L_n|h,\xi\rangle=0,\ \ \ M_n|h,\xi\rangle=0,\ \ n>0.$$ By acting with the generators $L_n,\ M_n$ with $n<0$, one gets the descendant states, which are labelled by two vectors $\vec{I},\ \vec{J}$, $$|\vec{I},\vec{J},h,\xi\rangle=L_{-1}^{I_1}\cdots M_{-1}^{J_1}\cdots|h,\xi\rangle.$$ A state is either a primary state or a descendant state, and the Hilbert space is spanned by the modules $$H=\oplus\sum V_{h,\xi},$$ where $V_{h,\xi}$ is the module consisting of a primary state and the tower of all its descendants. Note that all the null states must be removed. We have defined the Hilbert space at the origin and discussed the operator-state correspondence. The in-states are $$|\mO_{in}\rangle=\lim_{z,y\rightarrow0}\mO(z,y)|0\rangle.$$ The dilation operator relates one Hilbert space to the others on the reference plane.\ Now we consider the Hilbert space at the infinity. After the Wick rotation, the Hermitian conjugation becomes the reflection of the imaginary time $t_E\rightarrow -t_E$; on the reference plane, $$z\rightarrow z'=\frac{1}{z^{\ast}},\ \ \ \tilde{y}\rightarrow y'=(\frac{-1}{z^{\ast 2}})^d\tilde{y}^{\ast},$$ where $z^\ast$ is the complex conjugate of $z$.
The dual space is defined by the out-states $$\langle \mO_{out}|=\lim_{z',y'\rightarrow0}\langle 0|\tilde{\mO}(z',y'),$$ which can be defined at the infinity on the reference plane, corresponding to the infinite future on the canonical cylinder.\ The operator $\mO$ transforms as $$\mO(z',y')=(\frac{-1}{z^{\ast 2}})^h\mO(z,y),$$ so the conjugate of the primary operator $\mO$ is $$\mO^\dagger(z,y)=\mO(\frac{1}{z^{\ast}},(\frac{-1}{z^{\ast 2}})^d\tilde{y}^{\ast})(\frac{-1}{z^{\ast 2}})^h.$$ The dual state is, $$\langle \mO_{out}|=\lim_{z',y'\rightarrow 0}\langle 0|\tilde{\mO}(z',y')=\lim_{z,y\rightarrow0}\langle0|\mO^\dagger(z,y)=|\mO_{in}\rangle^\dagger.$$ To map the descendant states to the dual space, consider the mode expansions of the stress tensors, $$M(z)=\sum_{n}M_n z^{-n-1-d},\hs{3ex}T(z,y)=\sum_{n}L_nz^{-n-2}-d\sum_{n}(n+1)yM_{n-d}z^{-n-2}.$$ One can impose the condition that $M,\ T$ are Hermitian in the real-time theory, or equivalently that they are real in the imaginary-time theory. This leads to $$M_n^\dagger=(-1)^{d+1}M_{-n},\hs{3ex} L_n^\dagger=L_{-n}.$$ The inner product and the map are defined by the adjoint structure above. As $$L_0^\dagger=L_0,\ \ \ M_0^\dagger=(-1)^{d+1}M_{0},$$ $h$ is always real, but $\xi$ is real for odd $d$ and purely imaginary for even $d$. Note that the above conditions are not necessary. In the usual case of CFT$_2$, one imposes such conditions, with further constraints on the spectrum and central charges, to get a unitary field theory. In the cases where the theory is not unitary, one can define other adjoint structures. Two-point Correlation Functions =============================== In this section, we calculate the correlation functions of the primary operators in the theories with anisotropic scalings. In the usual CFT, unitarity implies the OPE convergence. For the non-unitary theories we may assume the OPE convergence to explore potential properties.
In the theories discussed above, considering the radial quantization on the reference plane, it is natural to expect that the operator product expansion converges if the theory is unitary. However, such theories cannot be unitary unless all the $\xi$'s vanish. Nevertheless, we assume OPE convergence in such theories. With the OPE convergence, the higher-point functions can be reduced to lower ones by inserting a complete set of operator basis. Thus the data of such theories are the spectrum and the OPE coefficients. The correlation functions in imaginary time must be time ordered in order to be well-defined. Correspondingly they are radially ordered on the reference plane. We will keep this point in mind without writing the radial ordering explicitly. The vacuum is invariant under the global group discussed in the previous section[^6] $$\langle0|G=0,$$ where $G$ are the generators of the global subgroup. Consequently the correlation functions are invariant under the global transformations, $$\langle0|G\mO(x_1,y_1)\mO(x_2,y_2)|0\rangle=0,$$ where $$G\in\{L_{-1},\ L_0,\ L_1,\ M_{-d},\ \cdots,\ M_d\}.$$ Moving $G$ from the left to the right gives the constraints on the two-point functions. For example, the translation symmetries require that the correlation functions depend only on $x=x_1-x_2$ and $y=y_1-y_2$. Let us discuss the cases one by one, setting $c=1$. The $d=0$ case is special, since the representation is special. As shown in [@Song:2017czq], the two-point function is $$\langle \mO_1(x,y)\mO_2(0,0)\rangle=d_\mO\delta_{h_1,h_2}\delta_{\xi_1,-\xi_2}\frac{1}{x^{2h_1}}e^{\xi y}.$$ For $d=1$, no descendant operators are involved when performing the local transformations on the primary operators. The two-point function is different from the other cases[@Bagchi:2009ca] $$\langle \mO_1(x,y)\mO_2(0,0)\rangle=d_\mO\delta_{h_1,h_2}\delta_{\xi_1,\xi_2}\frac{1}{x^{2h_1}}e^{2\xi\frac{y}{x}}. \label{2ptb1}$$ For $d \geq 2$, the correlation functions become much more involved. 
The correlation functions of the descendant operators with the primary operators are non-vanishing in such cases. Namely, we have to consider the following correlation functions $$f(n,d)=\langle(M_{n}\mO_1)(x,y)\mO_2(0,0)\rangle.$$ Solving the constraints from the invariance of the two-point functions under the global transformations, one gets $$f(-d+1,d)=-\frac{1}{2}xf(-d,d),$$ $$f(n,d)=\frac{(d-1)!(d-n)!}{2(2d-1)!(-n)!}(-1)^{n+d} x^{n+d} f(-d,d),\ \ \ \mbox{for}\ n\in [-d+2,0].$$ In the end, one finds $$\langle \mO_1(x,y)\mO_2(0,0)\rangle=d_\mO\delta_{h_1,h_2}\delta_{\xi_1,(-1)^{d+1}\xi_2}\frac{1}{x^{2h_1}}e^{2C_{2d-1}^d(-1)^{d+1}\xi\frac{y}{x^d}},$$ where $C^n_m$ is the binomial coefficient. When $d=1$, it reduces to the equation \eqref{2ptb1}. Modular Properties ================== In this section, we discuss the theories defined on the torus and their modular properties, i.e. the behaviour of the torus partition function under the modular transformations we will discuss below[^7]. The theories are defined on the cylinder with the spatial circle $$(\phi,t)\sim(\phi+2\pi,t).$$ Moreover, there is a thermal circle characterizing the temperature and angular potential $$(\phi,t)\sim(\phi+\alpha,t+\beta).$$ The translation charges are $$Q[\partial_t]=H=M_0^{(cyl)},\ \ \ Q[\partial_\phi]=M=L_0^{(cyl)}.$$ They are related to the plane boost and dilation charges by $$x\partial_x+d y\partial_y\rightarrow \partial_\phi,$$ $$x\partial_y\rightarrow \partial_t.$$ The torus partition function is $$Z(\alpha,\beta)=\Tr e^{-\beta H-\alpha M}.$$ The trace is taken over all states in the Hilbert space of the theory, and $(\alpha,\beta)$ is a pair of modular parameters characterizing the torus. This is similar to the discussion in [@Detournay:2012pc]. The torus can be described by the fundamental region on the plane $$(x,y)\sim(x,y)+p(\alpha_1,\alpha_2)+q(\beta_1,\beta_2),$$ where $p$ and $q$ are integers. 
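As an aside, the statement that different period pairs can generate the same lattice, and hence the same torus, is easy to check by brute force. The sketch below is pure illustration (the integer periods and the unimodular change of basis are arbitrarily chosen): it verifies that the two bases generate the same set of lattice points within a finite window.

```python
from itertools import product

def lattice(v1, v2, n):
    """Set of integer combinations p*v1 + q*v2 with |p|, |q| <= n."""
    return {(p * v1[0] + q * v2[0], p * v1[1] + q * v2[1])
            for p, q in product(range(-n, n + 1), repeat=2)}

# two periods of the torus, as vectors (alpha_1, alpha_2), (beta_1, beta_2)
v1, v2 = (2, 1), (1, 3)

# an SL(2,Z) change of basis: a*d - b*c = 1
a, b, c, d = 2, 1, 1, 1
w1 = (a * v1[0] + b * v2[0], a * v1[1] + b * v2[1])
w2 = (c * v1[0] + d * v2[0], c * v1[1] + d * v2[1])

# every point generated by the new basis in a small window lies in the
# lattice generated by the old basis, and vice versa
assert lattice(w1, w2, 5) <= lattice(v1, v2, 20)
assert lattice(v1, v2, 5) <= lattice(w1, w2, 20)
```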
There are different choices of $(\alpha,\beta)$ giving the same lattice and thus the same torus. Assuming that $(\alpha,\beta)$ and $(\gamma,\delta)$ give the same lattice, they are related to each other by an $SL(2,Z)$ transformation $$\left( \ba{cc} a&b\\ c&d \ea \right)\left( \ba{c} \alpha\\ \beta \ea \right)= \left( \ba{c} \gamma\\ \delta \ea \right)$$ with $$ad-bc=1, \hs{3ex}a,b,c,d\in Z.$$ Since there is no difference between $(\alpha,\beta)$ and $-(\alpha,\beta)$, the modular group is actually $SL(2,Z)/Z_2$. Note that the modular group is the isometry group acting on the modular parameters: it describes the different choices of identifications giving the same lattice and thus the same torus. The modular group is independent of the detailed theory; CFT$_2$ also belongs to this case. The modular group is generated by the $T$ and $S$ transformations. The $T$-transformation leads to $$(\alpha,\beta)\rightarrow (\alpha,\beta+\alpha), \hs{3ex} T=\left( \ba{cc} 1&0\\ 1&1 \ea \right).$$ The $S$-transformation leads to $$(\alpha,\beta)\rightarrow (-\beta,\alpha),\hs{3ex} S=\left( \ba{cc} 0&-1\\ 1&0 \ea \right).$$ Note that the $S$-transformation exchanges the identifications along the two cycles, instead of the two coordinates. Modular Invariance ------------------ We start with the torus $(\alpha,\beta)$ with two identifications, $$\label{torus1} (\phi,t)\sim(\phi+2\pi,t)\sim(\phi+\alpha,t+\beta).$$ Consider the symmetry transformation of the theory, $$\phi\rightarrow f(\phi),\ \ \ t\rightarrow f'(\phi)^d t,$$ combined with $$t\rightarrow t+g(\phi),$$ where we have set $c=1$ for simplicity. 
Under such transformations, $$(\phi,t)\rightarrow(\phi',t''),\nn$$ and $$(\phi+2\pi,t)\rightarrow (f(\phi+2\pi),f'(\phi+2\pi)^dt+g(\phi+2\pi)),\nn$$ $$(\phi+\alpha,t+\beta)\rightarrow (f(\phi+\alpha),f'(\phi+\alpha)^d(t+\beta)+g(\phi+\alpha)).\nn$$ We would like to find the symmetry transformations which are consistent with the torus identification. For an arbitrary point $(\phi',t'')$, there should be two identifications, $$(f(\phi),t'')\sim(f(\phi+2\pi),f'(\phi+2\pi)^dt+g(\phi+2\pi))\sim(f(\phi+\alpha),f'(\phi+\alpha)^d(t+\beta)+g(\phi+\alpha)),$$ where $$t''=f'(\phi)^dt+g(\phi).$$ For the identifications above to be proper, $f(\phi+2\pi)-f(\phi)$ and $f(\phi+\alpha)-f(\phi)$ should not depend on $\phi'$, so $$f(\phi)=\lambda \phi+q$$ since $\phi$ is real. The constant shift $q$ of $\phi$ does not matter, and we can set it to zero, $$f(\phi)=\lambda\phi.$$ Then the identifications become $$(\phi',t'')\sim(\phi'+f(2\pi),t''+g(\phi+2\pi)-g(\phi))\sim(\phi'+f(\alpha),t''+\lambda^d\beta+g(\phi+\alpha)-g(\phi)).$$ Similarly, $g(\phi+2\pi)-g(\phi)$ should not depend on $\phi'$, so $$g(\phi)=g_0(\phi)+k\phi+p,$$ where $g_0(\phi)=g_0(\phi+2\pi)$ is periodic. Moreover, $\lambda^d\beta+g(\phi+\alpha)-g(\phi)$ should not depend on $\phi'$, so $g_0(\phi)=g_0(\phi+\alpha)$ as well. Being periodic with both period $2\pi$ and generic period $\alpha$, $g_0$ is a constant and can be absorbed into $p$. The constant shift $p$ does not matter, and we may consider the transformation $$g(\phi)=k\phi.$$ In short, the transformation functions should be linear, leading to the identifications $$\label{torus2} (\phi',t'')\sim(\phi'+f(2\pi),t''+g(2\pi))\sim(\phi'+f(\alpha),t''+\lambda^d\beta+g(\alpha)).$$ Moreover, one can do an $S$-transformation to exchange the two identifications, $$\label{torus3} (\phi',t'')\sim(\phi'+f(\alpha),t''+\lambda^d\beta+g(\alpha))\sim(\phi'+f(2\pi),t''+g(2\pi)).$$ This gives the same torus as \eqref{torus2}, since only the order of the two identifications has been exchanged. 
In order to have a well-defined torus with the standard spatial identification $(\phi',t'')\sim(\phi'+2\pi,t'')$ as in \eqref{torus1}, we should impose the following conditions, $$f(\alpha)=2\pi,\ \ \ \ \lambda^d\beta+g(\alpha)=0,$$ which determine $$\lambda=\frac{2\pi}{\alpha},\ \ \ \ k=-\left(\frac{2\pi}{\alpha}\right)^d\frac{\beta}{\alpha}.$$ Therefore the allowed symmetry transformations are $$f(\phi)=\frac{2\pi}{\alpha}\phi,\ \ \ \ g(\phi)=-\left(\frac{2\pi}{\alpha}\right)^d\frac{\beta}{\alpha}\phi.$$ The new thermal cycle is $$(\phi,t)\sim(\phi+\alpha',t+\beta'),$$ where $$\alpha'=f(2\pi)=-\frac{4\pi^2}{\alpha},\ \ \ \ \beta'=g(2\pi)=-\left(\frac{2\pi}{\alpha}\right)^{d+1}\beta.$$ Note that the transformation of $\phi$ is purely a scaling, not leading to any anomaly in the partition function. When $d\neq 0$, the partition function is invariant under the modular transformation, $$Z(\alpha',\beta')=Z(\alpha,\beta),$$ since the modular transformations are re-scalings and Galilean boosts, which all belong to the global transformations. However, when $d=0$, the transformation of $t$ may introduce an anomaly due to the non-vanishing $c_M$, since the Galilean boost generated by $M_1$ does not belong to the global transformations[@Detournay:2012pc].\ Cardy-like formula: $d\neq 0$ ----------------------------- In this subsection, we calculate the spectral density at small $\alpha$ or large $\Delta,\xi$ by the saddle point approximation in the cases with $d\neq0$. In the holographic CFT, the degeneracy of the highly excited states could be related to the entropy of the dual configuration. The torus partition function can be recast into the one without vacuum charges, $$\tilde{Z}(\alpha,\beta)=\int_{\Delta_0}^{\infty}d\Delta\int_{\xi_0}^{\infty}d\xi\, e^{-\alpha\Delta-\beta\xi}\rho(\Delta,\xi),$$ where $\tilde{Z}$ is the partition function defined by $$\tilde{Z}(\alpha,\beta)=e^{\alpha M_v+\beta H_v}Z(\alpha,\beta).$$ Here $M_v$ and $H_v$ are the translation charges of the vacuum. They can be non-vanishing in various theories. 
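Though not part of the text, the quoted transformation $(\alpha,\beta)\rightarrow(\alpha',\beta')=(-4\pi^2/\alpha,\,-(2\pi/\alpha)^{d+1}\beta)$ can be sanity-checked arithmetically: applying it twice returns $(\alpha,(-1)^{d+1}\beta)$, so for odd $d$ it is an involution on the modular parameters. A minimal sketch (the sample values of $\alpha$, $\beta$ are arbitrary):

```python
import math

def s_transform(alpha, beta, d):
    """Modular map quoted in the text: (alpha, beta) -> (alpha', beta')."""
    return (-4 * math.pi ** 2 / alpha,
            -(2 * math.pi / alpha) ** (d + 1) * beta)

alpha, beta = 0.7, 1.3
for d in (1, 2, 3):
    a1, b1 = s_transform(alpha, beta, d)
    a2, b2 = s_transform(a1, b1, d)
    # applying the map twice gives (alpha, (-1)**(d+1) * beta)
    assert abs(a2 - alpha) < 1e-12
    assert abs(b2 - (-1) ** (d + 1) * beta) < 1e-12
```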
The spectral density can be read off via an inverse Laplace transformation, $$\rho(\Delta,\xi)=-\frac{1}{(2\pi)^2}\int d\alpha d\beta e^{\alpha \Delta+\beta\xi} \tilde{Z}(\alpha,\beta).$$ Just as in the 2D CFT case[@Cardy:1986ie], the key point is to use the modular invariance to re-express the equation above, and do the saddle point approximation at large $\Delta,\xi$. The modular invariance suggests $$\tilde{Z}(\alpha,\beta)=e^{(\alpha-\alpha')M_v+(\beta-\beta')H_v}\tilde{Z}(\alpha',\beta').$$ For non-vanishing $H_v$ and $M_v$, the saddle point is at $$\alpha=\frac{2\pi}{(1+\frac{\xi}{H_v})^{\frac{1}{d+1}}},$$ and the microcanonical entropy is $$S(\Delta,\xi)=\log \rho(\Delta,\xi)=\frac{2\pi(\Delta+M_v)}{(\frac{\xi}{H_v}-1)^{\frac{1}{d+1}}}+2\pi(\frac{\xi}{H_v}-1)^{\frac{1}{d+1}}M_v.$$ For $$\xi\gg H_v,\ \ \ \ \Delta\gg M_v, \nn$$ the entropy is $$S(\Delta,\xi)=2\pi\Delta(\frac{H_v}{\xi})^{\frac{1}{d+1}}+2\pi M_v(\frac{\xi}{H_v})^{\frac{1}{d+1}}. \label{entropy}$$ When $d=1,\ M_v=0$, it matches the result in [@Bagchi:2012xr], in which the entropy reproduces the Bekenstein-Hawking entropy of the flat cosmological horizon. For the general $d=1$ case, our result matches the one in [@Bagchi:2013qva]. Several remarks are in order:\ 1. We assume that there is no $M$-extension, so that the theory is anomaly-free and modular invariant.\ 2. The saddle point approximation is valid: $\tilde{Z}(\alpha,\beta)$ varies slowly around the putative saddle point. Near the saddle point, $\alpha$ is small, and the dominant part of the partition function is the contribution from the vacuum module. We have already extracted the vacuum charges so that $\tilde{Z}$ approaches a constant as $\alpha\rightarrow 0$. As a result, the vacuum charges are more important than the central charges, and it is they that appear in the entropy formula.\ 3. The relation \eqref{entropy} could be taken as a kind of entropy, or the degeneracy of the states, if there is no state with negative norm. 
Otherwise, the spectral density obtained from the integral is the difference between the densities of positive-norm and negative-norm states.\ 4. In the case $H_v=M_v=0$, the saddle point approximation is not valid, since there is no sharply peaked Gaussian contribution around the saddle point. Cardy-like formula: $d=0$ ------------------------- If the algebra has a non-vanishing $M$-extension, the torus partition function is covariant under the modular transformation: there is an anomaly due to the $M$ central charge. For the infinite dimensional spin-$\ell$ Galilean algebra with a finite number of central extensions, the only case with an $M$-extension is $d=0$, the warped CFT case. Now there is $$[M_n,M_m]=c_Mn\delta_{n+m,0}.$$ If $g(x)$ is non-vanishing, there are anomalies of the currents coming from the $M$-extension[@Detournay:2012pc] $$M(w)=w'^{-1}(M(x)+c_Mg'(x)),$$ $$T(w)=w'^{-2}\left(T(x)-\frac{c_T}{12}s(w,x)-g'(x)M(x)-\frac{c_M g'(x)^2}{2}\right),$$ where $s(w,x)$ is the Schwarzian derivative $$s(w,x)=\frac{w'''}{w'}-\frac{3}{2}(\frac{w''}{w'})^2.$$ The modular $S$-transformation leads to $$T\rightarrow \frac{4\pi^2}{\alpha^2}T-\frac{2\pi\beta}{\alpha^2}M+\frac{\beta^2c_M}{2\alpha^2}.$$ Therefore, the partition function is covariant under the transformation $$Z(\alpha,\beta)=e^{\frac{\beta^2c_M}{2\alpha}}Z(\alpha',\beta'),$$ where $$\alpha'=-\frac{4\pi^2}{\alpha},\ \ \ \ \beta'=-\frac{2\pi\beta}{\alpha}.$$ The partition function without the vacuum charges transforms as $$\tilde{Z}(\alpha,\beta)=e^{(\alpha-\alpha')M_v+(\beta-\beta')H_v+\frac{\beta^2c_M}{2\alpha}}\tilde{Z}(\alpha',\beta').$$ The saddle point is at $$\beta=-\frac{(\alpha+2\pi)H_v+\alpha\xi}{c_M},\ \ \ \ \alpha=2\pi\frac{\sqrt{H_v^2+2c_MM_v}}{\sqrt{H_v^2-2c_MM_v-2c_M\Delta+2H_v\xi+\xi^2}}.$$ The microcanonical ensemble entropy is $$S(\Delta,\xi)=-2\pi\frac{H_v(H_v+\xi)}{c_M}-\frac{2\pi}{c_M}\sqrt{(H_v^2+2c_MM_v)(-2c_M(M_v+\Delta)+(H_v+\xi)^2)}.$$ For $$\xi\gg H_v,\ \ \ \ \Delta\gg M_v, \nn$$ the entropy is 
$$S(\Delta,\xi)=-2\pi\frac{H_v\xi}{c_M}-\frac{2\pi}{c_M}\sqrt{(H_v^2+2c_MM_v)(-2c_M\Delta+\xi^2)},$$ which has been obtained in [@Detournay:2012pc]. Conclusion and Discussion ========================= In the present work we studied a class of general GCFTs with anisotropic scaling $x\to \l^c x,\ y\to \l^d y$. Under the assumption that the dilation operator is diagonalizable and has a discrete, non-negative spectrum, we showed in two different ways that field theories with global translation, Galilean boost and anisotropic scaling symmetries could have enhanced symmetries. The first way is to generalize the Hofman-Strominger theorem to the case at hand: the global symmetries are enhanced to the infinite dimensional spin-$\ell$ ($\ell=d/c$) Galilean algebra with possible central extensions. The second way relies on the Newton-Cartan geometry with scaling structure, on which the field theory can be defined in a covariant way. The enhanced local symmetries can then be understood as a consequence of the invariance of the action under coordinate reparametrization. Furthermore, we discussed the properties of the anisotropic GCFT. We established the state-operator correspondence by studying the representations of the algebra of the enhanced symmetries. We noticed that when $\ell>1$ the primary operators do not transform covariantly under the local symmetries. Consequently, the two-point correlation functions become more involved, as we had to consider the correlation functions of a certain set of descendant operators at the same time. With the Newton-Cartan geometry, we could consider the theory defined on a torus and discuss the modular properties of the torus partition function. The theories without $M$-extension are modular invariant, while the theories with $M$-extension are modular covariant. In both cases, we derived Cardy-like formulae to count the degeneracy of highly excited states. 
Having a covariant formalism, we can go further to explore other properties of such theories. One interesting issue is the scaling anomaly in the partition function. The theory is defined on an equivalence class of geometries; however, the measure is not invariant under the local transformations[^8]. An effective action should be given to describe the anomaly of the partition function. In 2D CFT, this effective action is the Liouville action[@Polyakov:1981rd]. For the warped CFT, the effective action is of Liouville type[@Hao2019]. It would be interesting to study the effective action of the anisotropic GCFT; we leave this issue to future work. Another interesting problem is to construct explicit simple examples realizing the enhanced symmetries, which may help us to understand the symmetries better. Another important direction is to bootstrap these field theories. Some efforts have been made for $\ell=1$ [@Bagchi:2016geg; @Bagchi:2017cpu] and $\ell=1/2$ [@Goldberger:2014hca]. It would be interesting if one could obtain dynamical information for these non-relativistic conformal field theories using well-established bootstrap equations with appropriate inputs and reasonable assumptions. One subtle but essential point is that the theories here are generally non-unitary. To the best of our knowledge, there are few analytical bootstrap results for non-unitary (conformal) theories. In fact, unitarity is needed for the non-negativity of the squares of the OPE coefficients, which is crucial for both the numerical and the analytical bootstrap methods. Also, note that the algebras here are generally not semi-simple (while the conformal algebra is), so studying the bootstrap in such theories is nontrivial. Even though the algebra generically includes an $SL(2,R)$ sector, the constraints from the crossing equation could be different from those of a simple 1D CFT, which has been studied in the SYK model[@Qiao:2017xif; @Simmons-Duffin:2017nub; @Mazac:2018qmi]. 
The anisotropic GCFTs allow us to study the holographic correspondence beyond AdS/CFT. For a non-relativistic scale invariant field theory, the underlying spacetime is better described by a Newton-Cartan geometry with an additional scaling structure. The bulk dual would be at least one dimension higher. Symmetry considerations may lead to the construction of the bulk dual. For example, as proposed in [@Hofman:2014loa], lower spin gravity could be the minimal holographic dual of the warped CFT. It would be interesting to investigate the holographic dual of a general GCFT with anisotropic scalings. \ We are grateful to Luis Apolo, Stephane Detournay, Wei Song, Jian-fei Xu for valuable discussions. The work was in part supported by NSFC Grant No. 11275010, No. 11335012, No. 11325522 and No. 11735001. We thank Tsinghua Sanya International Mathematics Forum for hospitality during the workshop Black holes and holography. [99]{} J. Polchinski, “Scale and Conformal Invariance in Quantum Field Theory,” Nucl. Phys. B [**303**]{}, 226 (1988). D. M. Hofman and A. Strominger, “Chiral Scale and Conformal Invariance in 2D Quantum Field Theory,” Phys. Rev. Lett.  [**107**]{}, 161601 (2011) \[arXiv:1107.2917 \[hep-th\]\]. A. A. Belavin, A. M. Polyakov and A. B. Zamolodchikov, “Infinite Conformal Symmetry in Two-Dimensional Quantum Field Theory,” Nucl. Phys. B [**241**]{}, 333 (1984). D. M. Hofman and B. Rollier, “Warped Conformal Field Theory as Lower Spin Gravity,” Nucl. Phys. B [**897**]{}, 1 (2015) \[arXiv:1411.0672 \[hep-th\]\]. S. Detournay, T. Hartman and D. M. Hofman, “Warped Conformal Field Theory,” Phys. Rev. D [**86**]{}, 124018 (2012) \[arXiv:1210.0539 \[hep-th\]\]. A. Castro, D. M. Hofman and N. Iqbal, “Entanglement Entropy in Warped Conformal Field Theories,” JHEP [**1602**]{}, 033 (2016) \[arXiv:1511.00707 \[hep-th\]\]. A. Castro, D. M. Hofman and G. 
Sárosi, “Warped Weyl fermion partition functions,” JHEP [**1511**]{}, 129 (2015) \[arXiv:1508.06302 \[hep-th\]\]. W. Song, Q. Wen and J. Xu, “Modifications to Holographic Entanglement Entropy in Warped CFT,” JHEP [**1702**]{}, 067 (2017) \[arXiv:1610.00727 \[hep-th\]\]. W. Song and J. Xu, “Correlation Functions of Warped CFT,” JHEP [**1804**]{}, 067 (2018) \[arXiv:1706.07621 \[hep-th\]\]. K. Jensen, “Locality and anomalies in warped conformal field theory,” JHEP [**1712**]{}, 111 (2017) \[arXiv:1710.11626 \[hep-th\]\]. T. Azeyanagi, S. Detournay and M. Riegler, “Warped Black Holes in Lower-Spin Gravity,” Phys. Rev. D [**99**]{}, no. 2, 026013 (2019) \[arXiv:1801.07263 \[hep-th\]\]. L. Apolo and W. Song, “Bootstrapping holographic warped CFTs or: how I learned to stop worrying and tolerate negative norms,” JHEP [**1807**]{}, 112 (2018) \[arXiv:1804.10525 \[hep-th\]\]. P. Chaturvedi, Y. Gu, W. Song and B. Yu, “A note on the complex SYK model and warped CFTs,” JHEP [**1812**]{}, 101 (2018) \[arXiv:1808.08062 \[hep-th\]\]. L. Apolo, S. He, W. Song, J. Xu and J. Zheng, “Entanglement and chaos in warped conformal field theories,” JHEP [**1904**]{}, 009 (2019) \[arXiv:1812.10456 \[hep-th\]\]. W. Song and J. Xu, “Structure Constants from Modularity in Warped CFT,” \[arXiv:1903.01346 \[hep-th\]\]. C. R. Hagen, “Scale and conformal transformations in galilean-covariant field theory,” Phys. Rev. D [**5**]{}, 377 (1972). M. Henkel, “Local Scale Invariance and Strongly Anisotropic Equilibrium Critical Systems,” Phys. Rev. Lett.  [**78**]{}, 1940 (1997) \[cond-mat/9610174 \[cond-mat.stat-mech\]\]. M. Henkel, “Phenomenology of local scale invariance: From conformal invariance to dynamical scaling,” Nucl. Phys. B [**641**]{}, 405 (2002) \[hep-th/0205256\]. S. Rutkevich, H. W. Diehl and M. A. Shpot, “On conjectured local generalizations of anisotropic scale invariance and their implications,” Nucl. Phys. B [**843**]{}, 255 (2011) \[arXiv:1005.1334 \[cond-mat.stat-mech\]\]. M. 
Bartenstein, A. Altmeyer, S. Riedl, S. Jochim, C. Chin, J. H. Denschlag and R. Grimm, “Crossover from a Molecular Bose-Einstein Condensate to a Degenerate Fermi Gas,” Phys. Rev. Lett.  [**92**]{}, 120401 (2004) \[cond-mat/0401109 \[cond-mat.other\]\]. C. A. Regal, M. Greiner and D. S. Jin, “Observation of Resonance Condensation of Fermionic Atom Pairs,” Phys. Rev. Lett.  [**92**]{}, 040403 (2004) \[cond-mat/0401554 \[cond-mat.stat-mech\]\]. M. W. Zwierlein, C. A. Stan, C. H. Schunck, S. M. F. Raupach, A. J. Kerman and W. Ketterle, “Condensation of Pairs of Fermionic Atoms near a Feshbach Resonance,” Phys. Rev. Lett.  [**92**]{}, 120403 (2004) \[cond-mat/0403049 \[cond-mat.soft\]\]. S. Sachdev, “[*Quantum Phase Transitions*]{},” 2nd Edition, Cambridge University Press, 2011. D. T. Son and M. Wingate, “General coordinate invariance and conformal invariance in nonrelativistic physics: Unitary Fermi gas,” Annals Phys.  [**321**]{}, 197 (2006) doi:10.1016/j.aop.2005.11.001 \[cond-mat/0509786\]. K. Balasubramanian and J. McGreevy, “Gravity duals for non-relativistic CFTs,” Phys. Rev. Lett.  [**101**]{}, 061601 (2008) \[arXiv:0804.4053 \[hep-th\]\]. S. Kachru, X. Liu and M. Mulligan, “Gravity duals of Lifshitz-like fixed points,” Phys. Rev. D [**78**]{}, 106005 (2008) \[arXiv:0808.1725 \[hep-th\]\]. S. A. Hartnoll, “Lectures on holographic methods for condensed matter physics,” Class. Quant. Grav.  [**26**]{}, 224002 (2009) \[arXiv:0903.3246 \[hep-th\]\]. J. D. Brown and M. Henneaux, “[Central charges in the canonical realization of asymptotic symmetries: an example from three-dimensional gravity]{},” [ Commun. Math. Phys. [**104**]{} (1986) 207–226](http://dx.doi.org/10.1007/BF01211590). G. Compére, W. Song and A. Strominger, “New Boundary Conditions for AdS3,” JHEP [**1305**]{}, 152 (2013) \[arXiv:1303.2662 \[hep-th\]\]. A. Castro, C. Keeler and P. Szepietowski, “Tweaking one-loop determinants in AdS$_{3}$,” JHEP [**1710**]{}, 070 (2017) \[arXiv:1707.06245 \[hep-th\]\]. 
B. Chen, P. X. Hao and W. Song, “Rényi Mutual Information in Holographic Warped CFTs,” \[arXiv:1904.01876 \[hep-th\]\]. Y. h. Lin and B. Chen, “Note on Bulk Reconstruction in AdS$_3$/WCFT$_2$,” \[arXiv:1905.04680 \[hep-th\]\]. W. Li, W. Song and A. Strominger, “Chiral Gravity in Three Dimensions,” JHEP [**0804**]{}, 082 (2008) \[arXiv:0801.4566 \[hep-th\]\]. D. Anninos, W. Li, M. Padi, W. Song and A. Strominger, “Warped AdS(3) Black Holes,” JHEP [**0903**]{}, 130 (2009) \[arXiv:0807.3040 \[hep-th\]\]. G. Compere and S. Detournay, “Boundary conditions for spacelike and timelike warped AdS$_3$ spaces in topologically massive gravity,” JHEP [**0908**]{}, 092 (2009) \[arXiv:0906.1243 \[hep-th\]\]. M. Guica, T. Hartman, W. Song and A. Strominger, “The Kerr/CFT Correspondence,” Phys. Rev. D [**80**]{}, 124008 (2009) \[arXiv:0809.4266 \[hep-th\]\]. A. Bagchi, “Correspondence between Asymptotically Flat Spacetimes and Nonrelativistic Conformal Field Theories,” Phys. Rev. Lett.  [**105**]{}, 171601 (2010) \[arXiv:1006.3354 \[hep-th\]\]. A. Bagchi and R. Fareghbal, “BMS/GCA Redux: Towards Flatspace Holography from Non-Relativistic Symmetries,” JHEP [**1210**]{}, 092 (2012) \[arXiv:1203.5795 \[hep-th\]\]. G. Barnich and C. Troessaert, “Aspects of the BMS/CFT correspondence,” JHEP [**1005**]{}, 062 (2010) \[arXiv:1001.1541 \[hep-th\]\]. J. de Boer and S. N. Solodukhin, “A Holographic reduction of Minkowski space-time,” Nucl. Phys. B [**665**]{}, 545 (2003) \[hep-th/0303006\]. A. Ball, E. Himwich, S. A. Narayanan, S. Pasterski and A. Strominger, “Uplifting AdS3/CFT2 to Flat Space Holography,” \[arXiv:1905.09809 \[hep-th\]\]. A. Bagchi and R. Gopakumar, “Galilean Conformal Algebras and AdS/CFT,” JHEP [**0907**]{}, 037 (2009) \[arXiv:0902.1385 \[hep-th\]\]. S. Sibiryakov, “From scale invariance to Lorentz symmetry,” Phys. Rev. Lett.  [**112**]{}, no. 24, 241602 (2014) \[arXiv:1403.4742 \[hep-th\]\]. D. T. 
Son, “Toward an AdS/cold atoms correspondence: A Geometric realization of the Schrodinger symmetry,” Phys. Rev. D [**78**]{}, 046003 (2008) \[arXiv:0804.3972 \[hep-th\]\]. D. T. Son, “Newton-Cartan Geometry and the Quantum Hall Effect,” \[arXiv:1306.0638 \[cond-mat.mes-hall\]\]. K. Jensen, “On the coupling of Galilean-invariant field theories to curved spacetime,” SciPost Phys.  [**5**]{}, no. 1, 011 (2018) \[arXiv:1408.6855 \[hep-th\]\]. R. Banerjee, A. Mitra and P. Mukherjee, “A new formulation of non-relativistic diffeomorphism invariance,” Phys. Lett. B [**737**]{}, 369 (2014) \[arXiv:1404.4491 \[gr-qc\]\]. R. Banerjee, A. Mitra and P. Mukherjee, “Localization of the Galilean symmetry and dynamical realization of Newton-Cartan geometry,” Class. Quant. Grav.  [**32**]{}, no. 4, 045010 (2015) \[arXiv:1407.3617 \[hep-th\]\]. R. Banerjee and P. Mukherjee, “Torsional Newton–Cartan geometry from Galilean gauge theory,” Class. Quant. Grav.  [**33**]{}, no. 22, 225013 (2016) \[arXiv:1604.06893 \[gr-qc\]\]. J. Hartong, E. Kiritsis and N. A. Obers, “Schrödinger Invariance from Lifshitz Isometries in Holography and Field Theory,” Phys. Rev. D [**92**]{}, 066003 (2015) \[arXiv:1409.1522 \[hep-th\]\]. C. Duval, G. Burdet, H. P. Kunzle and M. Perrin, “Bargmann Structures and Newton-cartan Theory,” Phys. Rev. D [**31**]{}, 1841 (1985). C. Duval and P. A. Horvathy, “Non-relativistic conformal symmetries and Newton-Cartan structures,” J. Phys. A [**42**]{} (2009) 465206 \[arXiv:0904.0531 \[math-ph\]\]. C. Duval and P. Horvathy, “Conformal Galilei groups, Veronese curves, and Newton-Hooke spacetimes,” J. Phys. A [**44**]{} (2011) 335203 \[arXiv:1104.1502 \[hep-th\]\]. D. Martelli and Y. Tachikawa, “Comments on Galilean conformal field theories and their geometric realization,” JHEP [**1005**]{} (2010) 091 \[arXiv:0903.5184 \[hep-th\]\]. C. Duval, G. W. Gibbons and P. Horvathy, “Celestial mechanics, conformal structures and gravitational waves,” Phys. Rev. 
D [**43**]{}, 3907 (1991) \[hep-th/0512188\]. P.-M. Zhang, M. Cariglia, M. Elbistan and P. A. Horvathy, “Scaling and conformal symmetries for plane gravitational waves,” arXiv:1905.08661 \[gr-qc\]. A. Hosseiny, “Possible Central Extensions of Non-Relativistic Conformal Algebras in 1+1,” J. Math. Phys.  [**55**]{}, 061704 (2014) \[arXiv:1403.4537 \[hep-th\]\]. E. A. Bergshoeff, J. Hartong and J. Rosseel, “Torsional Newton-Cartan geometry and the Schrödinger algebra,” Class. Quant. Grav.  [**32**]{}, no. 13, 135017 (2015) \[arXiv:1409.5555 \[hep-th\]\]. R. Andringa, E. Bergshoeff, S. Panda and M. de Roo, “Newtonian Gravity and the Bargmann Algebra,” Class. Quant. Grav.  [**28**]{}, 105011 (2011) \[arXiv:1011.1145 \[hep-th\]\]. A. Bagchi and I. Mandal, “On Representations and Correlation Functions of Galilean Conformal Algebras,” Phys. Lett. B [**675**]{}, 393 (2009) \[arXiv:0903.4524 \[hep-th\]\]. A. Bagchi, R. Gopakumar, I. Mandal and A. Miwa, “GCA in 2d,” JHEP [**1008**]{}, 004 (2010) \[arXiv:0912.1090 \[hep-th\]\]. J. L. Cardy, “Operator Content of Two-Dimensional Conformally Invariant Theories,” Nucl. Phys. B [**270**]{}, 186 (1986). A. Bagchi, S. Detournay, R. Fareghbal and J. Simón, “Holography of 3D Flat Cosmological Horizons,” Phys. Rev. Lett.  [**110**]{}, no. 14, 141302 (2013) \[arXiv:1208.4372 \[hep-th\]\]. A. Bagchi and R. Basu, “3D Flat Holography: Entropy and Logarithmic Corrections,” JHEP [**1403**]{}, 020 (2014) \[arXiv:1312.5748 \[hep-th\]\]. H. A. Gonzalez, D. Tempo and R. Troncoso, “Field theories with anisotropic scaling in 2D, solitons and the microscopic entropy of asymptotically Lifshitz black holes,” JHEP [**1111**]{}, 066 (2011) \[arXiv:1107.3647 \[hep-th\]\]. K. Jensen, “Anomalies for Galilean fields,” SciPost Phys.  [**5**]{}, no. 1, 005 (2018) \[arXiv:1412.7750 \[hep-th\]\]. A. M. Polyakov, “Quantum Geometry of Bosonic Strings,” Phys. Lett. B [**103**]{}, 207 (1981). Work in progress. A. Bagchi, M. 
Gary and Zodinmawia, “Bondi-Metzner-Sachs bootstrap,” Phys. Rev. D [**96**]{}, no. 2, 025007 (2017) \[arXiv:1612.01730 \[hep-th\]\]. A. Bagchi, M. Gary and Zodinmawia, “The nuts and bolts of the BMS Bootstrap,” Class. Quant. Grav.  [**34**]{}, no. 17, 174002 (2017) \[arXiv:1705.05890 \[hep-th\]\]. W. D. Goldberger, Z. U. Khandker and S. Prabhu, “OPE convergence in non-relativistic conformal field theories,” JHEP [**1512**]{}, 048 (2015) \[arXiv:1412.8507 \[hep-th\]\]. J. Qiao and S. Rychkov, “A tauberian theorem for the conformal bootstrap,” JHEP [**1712**]{}, 119 (2017) \[arXiv:1709.00008 \[hep-th\]\]. D. Simmons-Duffin, D. Stanford and E. Witten, “A spacetime derivation of the Lorentzian OPE inversion formula,” JHEP [**1807**]{}, 085 (2018) \[arXiv:1711.03816 \[hep-th\]\]. D. Mazác, “A Crossing-Symmetric OPE Inversion Formula,” \[arXiv:1812.02254 \[hep-th\]\]. [^1]: In some literature, the Galilean conformal field theory refers to the one with anisotropic scaling $t \to \l^2 t$ and $x_i \to \l x_i$, in particular in higher dimensions[@Hagen:1972pd]. With the particle number symmetry, the algebra becomes the Schrödinger algebra. Without the particle number symmetry, the anisotropic scaling could be $t \to \l^z t$ and $x_i \to \l x_i$ with $z\neq 2$. In this work, we focus on the two-dimensional case, and call the theory with $z=1$ the GCFT and the one with $z\neq 1$ the anisotropic GCFT. [^2]: For a nice review and complete references, please see [@Hartnoll:2009sz]. [^3]: For various kinds of Galilean field theories, please see [@Duval:2011mi; @Martelli:2009uc]. A brand-new application is the discussion of gravitational waves using the Newton-Cartan framework[@Duval:1990hj; @Zhang:2019gdm]. We thank A. Bagchi and P. A. Horvathy for bringing this point to our attention. [^4]: However, there are potential anomalies in the partition function, since the measure will change under the local transformations. We leave this point to future work. 
[^5]: The known examples of warped CFTs and GCA field theories are of this kind. In the case with integer $\ell$, the Cartan algebra is $(L_0,M_0)$. One can discuss their common eigenstates and construct the highest weight representation. We can choose $\ell$ to be other values as well, but then there is no $M_0$; the discussion is similar. From the highest weight representation labelled by the eigenvalue of $L_0$, one can also construct the descendant states with $L_{n<0},M_{n<0}$. [^6]: We also focus on the case with integer $\ell$ in this section. [^7]: We are grateful to Stephane Detournay for suggesting the study of modular properties of the theory. [^8]: For the Galilean field theories in higher dimensions, the anomaly issue was studied in [@Jensen:2014hqa].
--- abstract: 'The effect of an external electric field on electron-hole correlation in GaAs quantum dots is investigated. The electron-hole Schrödinger equation in the presence of an external electric field is solved using the explicitly correlated full configuration interaction (XCFCI) method, and accurate exciton binding energies and electron-hole recombination probabilities are obtained. The effect of the electric field was included in the 1-particle single-component basis functions by performing a variational polaron transformation. The quality of the wavefunction at small inter-particle distances was improved by using a Gaussian-type geminal function that depended explicitly on the electron-hole separation distance. The parameters of the explicitly correlated function were determined variationally at each field strength. The scaling of the total exciton energy, exciton binding energy, and electron-hole recombination probability with respect to the strength of the electric field was investigated. It was found that a 500 kV/cm change in electric field reduces the binding energy and recombination probability by factors of 2.6 and 166, respectively. The results show that the eh-recombination probability is affected much more strongly by the electric field than the exciton binding energy. Analysis using the polaron-transformed basis indicates that the exciton binding should asymptotically vanish in the limit of large field strength.' author: - 'Christopher J. Blanton' - Christopher Brenon - Arindam Chakraborty bibliography: - 'ref.bib' title: ' Development of polaron-transformed explicitly correlated full configuration interaction method for investigation of quantum-confined Stark effect in GaAs quantum dots ' --- Introduction ============ The influence of external electric fields on the optical properties of semiconductors has been studied extensively using both experimental and theoretical techniques. 
In bulk semiconductors, the shift in the optical absorption due to the external field is known as the Franz-Keldysh effect.[@seeger2004semiconductor] In quantum wells and quantum dots, the application of an electric field has been shown to modify the optical properties of nanosystems, a phenomenon known as the quantum-confined Stark effect (QCSE).[@miller1984band; @miller1985electric] The application of the external field induces various modifications in the optical properties of the nanomaterial, including the absorption coefficient, the spectral weight of transitions, and a change in $\lambda_\mathrm{max}$ of the absorption spectra. In certain cases, the applied field can lead to exciton ionisation.[@perebeinos2007exciton] The quantum-confined Stark effect has found application in the fields of electro-absorption modulators,[@bimberg2012quantum] solar cells,[@yaacobi2012combining] and light-emitting devices.[@de2012quantum] Recent experiments by Weiss et al. on semiconductor quantum dots have shown that the QCSE can also be enhanced by the presence of heterojunctions.[@park2012single] In some cases, the QCSE can be induced chemically because of close proximity to ligands.[@yaacobi2012combining] The QCSE also plays a major role in electric-field-dependent photoconductivity in CdS nanowires and nanobelts.[@li2012electric] Electric field has emerged as one of the tools to control and customize quantum dots as novel light sources.
In a recent study, electric field was used in the generation and control of polarization-entangled photons using GaAs quantum dots.[@ghali2012generation] It has been shown that the coupling between stacked quantum dots can be modified using electric field.[@talalaev2006tuning] The QCSE has been investigated using various theoretical techniques including perturbation theory,[@Jaziri1994171; @Kowalik2005; @Xie20091625; @He2010266; @Lu2011; @Chen2012786] variational techniques,[@Kuo200011051; @Dane2008278; @Barseghyan2009521; @Duque2010309; @Dane20101901; @Kirak2011; @Acosta20121936] and the configuration interaction method.[@Bester2005; @Szafran2008; @Reimer2008; @Korkusinski2009; @Kwaniowski2009821; @Pasek2012; @Luo2012; @Braskan2001775; @Braskan20007652; @Corni2003853141; @Lehtonen20084535] In the present work, the development of the explicitly correlated full configuration interaction (XCFCI) method is presented for investigating the effect of external electric field on quantum dots and wells. The XCFCI method is a variational method in which the conventional CI wavefunction is augmented by explicitly correlated Gaussian-type geminal functions.[@JoakimPersson19965915] The inclusion of an explicitly correlated function in the form of the wavefunction is important for the following two reasons.
First, the addition of the geminal function accelerates the convergence of the FCI energy with respect to the size of the underlying 1-particle basis set.[@Prendergast20011626] Second, the inclusion of an explicitly correlated function improves the form of the electron-hole wavefunction at small inter-particle distances, which is important for accurate calculation of the electron-hole recombination probability.[@RefWorks:2334; @Wimmer2006; @RefWorks:4030] The effect of the explicitly correlated function on the convergence of the CI energy has been investigated by Prendergast et al.[@Prendergast20011626] and is directly related to accurate treatment of the Coulomb singularity in the Hamiltonian.[@Hattig20124; @Kong201275; @Prendergast20011626] Varganov et al. have demonstrated the applicability of geminal-augmented multiconfiguration self-consistent-field wavefunctions for many-electron systems.[@varganov2010variational] Elward et al. have also performed variational calculations using explicitly correlated wavefunctions for treating electron-hole correlation in quantum dots.[@RefWorks:4030; @RefWorks:4031] One of the important features of the XCFCI method presented here is the inclusion of the external field in the ansatz of the wavefunction. This is achieved by defining a new set of field-dependent coordinates, which are generated by performing a variational polaron transformation[@harris1985variational] and recasting the original Hamiltonian in terms of the field-dependent coordinates. The variational polaron transformation was introduced by Harris and Silbey for studying quantum dissipation in the spin-boson system[@harris1985variational] and is used in the present work because of the mathematical similarity between the spin-boson and the field-dependent electron-hole Hamiltonians. The remainder of this article is organized as follows. The important features of the XCFCI method are summarized in Sec.
\[sec:xcfci\], the construction of the field-dependent basis functions is presented in Sec. \[sec:polaron\], the application of the XCFCI method using the field-dependent basis is presented in Sec. \[sec:results\], and the conclusions are provided in Sec. \[sec:conclusion\].

Theory
======

Explicitly correlated full configuration interaction {#sec:xcfci}
----------------------------------------------------

The field dependent electron-hole Hamiltonian is defined as[@RefWorks:4060; @RefWorks:2174] $$\begin{aligned} \label{eq:ham} H &= -\frac{\hbar^2}{2m_{\mathrm{e}}}\nabla^2_{\mathrm{e}} -\frac{\hbar^2} {2m_{\mathrm{h}}}\nabla^2_{\mathrm{h}} + v^\mathrm{ext}_\mathrm{e} + v^\mathrm{ext}_\mathrm{h} \\ \notag &- \frac{1}{\epsilon \vert \mathbf{r}_{\mathrm{eh} } \vert} + \vert e\vert \mathbf{F} \cdot (\mathbf{r}_{\mathrm{e}}-\mathbf{r}_{\mathrm{h}})\end{aligned}$$ where $m_{\mathrm{e}}$ is the mass of the electron, $m_{\mathrm{h}}$ is the mass of the hole, $\epsilon$ is the dielectric constant, and $\mathbf{F}$ is the external electric field. The external potentials $v^\mathrm{ext}_\mathrm{e}$ and $v^\mathrm{ext}_\mathrm{h}$ represent the confining potential experienced by the quasi-particles. The form of the XCFCI wavefunction is defined as $$\begin{aligned} \label{eq:xcfci} \Psi_\mathrm{XCFCI} &= \hat{G} \sum_k c_k \Phi_k\end{aligned}$$ where $c_k$ is the CI coefficient and $\Phi_k$ are the basis functions. The operator $\hat{G}$ is known as the geminal operator; it is an explicit function of $r_\mathrm{eh}$ and is defined as $$\begin{aligned} \hat{G} = \sum_{i=1}^{N_\mathrm{e}} \sum_{j=1}^{N_\mathrm{h}} \sum_{k=1}^{N_\mathrm{g}} b_{k}e^{-\gamma_k r_{ij}^2},\end{aligned}$$ where $N_\mathrm{g}$ is the number of Gaussian functions included in the expansion, and $N_\mathrm{e}$ and $N_\mathrm{h}$ are the numbers of electrons and holes, respectively. The parameters $b_k$ and $\gamma_k$ used in the definition of the geminal operator are obtained variationally.
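As a concrete illustration, the geminal factor for a single electron-hole pair ($N_\mathrm{e}=N_\mathrm{h}=1$) can be evaluated numerically. The sketch below uses the field-free parameters from Table \[tab:geminals\]; the helper function is illustrative only, not part of the actual XCFCI implementation.

```python
import numpy as np

def geminal(r_eh, b, gamma):
    """Evaluate g(r_eh) = sum_k b_k exp(-gamma_k r_eh^2) for a single
    electron-hole pair (N_e = N_h = 1)."""
    b = np.asarray(b, dtype=float)
    gamma = np.asarray(gamma, dtype=float)
    r_eh = np.atleast_1d(np.asarray(r_eh, dtype=float))
    return np.sum(b[:, None] * np.exp(-gamma[:, None] * r_eh[None, :]**2), axis=0)

# Field-free parameters from Table [tab:geminals]; the b_1 = 1, gamma_1 = 0
# term carries the bare CI wavefunction through unchanged.
b = [1.00, 1.40e-1, 4.35e-2]
gamma = [0.00, 2.29e-4, 1.13e-2]

r = np.array([0.0, 10.0, 100.0])
print(geminal(r, b, gamma))  # enhanced at r_eh = 0, tending to b_1 = 1 at large r
```

Because $\gamma_1 = 0$, the correction terms modify the wavefunction only at short electron-hole separations, which is exactly the region the bare CI expansion describes poorly.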
The construction of the basis functions used in the definition of the XCFCI wavefunction in Eq. will be discussed in Sec. \[sec:polaron\]. The XCFCI calculation is performed in two steps. In the first step, the parameters of the geminal operator are obtained variationally by performing the following minimization $$\begin{aligned} E[G_{\mathrm{min}}] &= \min_{b_k,\gamma_k} \frac{\langle G \Phi_0 \vert H \vert G \Phi_0 \rangle } {\langle G \Phi_0 \vert G \Phi_0 \rangle} . \end{aligned}$$ In the second step, the expansion coefficients $\{c_k\}$ are obtained variationally and are defined by the following minimization procedure $$\begin{aligned} \label{eq:Excfci} E_\mathrm{XCFCI} &= \min_{\mathbf{c}} \frac{\langle \Psi_\mathrm{XCFCI} \vert H \vert \Psi_\mathrm{XCFCI} \rangle } {\langle \Psi_\mathrm{XCFCI} \vert \Psi_\mathrm{XCFCI} \rangle } .\end{aligned}$$ The above equation can be rewritten as an FCI calculation in terms of transformed operators $$\begin{aligned} E_\mathrm{XCFCI} &= \min_{\mathbf{c}} \frac{\langle \Psi_\mathrm{FCI} \vert \tilde{H} \vert \Psi_\mathrm{FCI} \rangle } {\langle \Psi_\mathrm{FCI} \vert \tilde{1} \vert \Psi_\mathrm{FCI} \rangle }, \end{aligned}$$ where the transformed operators are defined as $$\begin{aligned} \label{eq:htilde} \tilde{H} &= G_\mathrm{min}^\dagger H G_\mathrm{min}, \\ \label{eq:stilde} \tilde{1} &= G_\mathrm{min}^\dagger G_\mathrm{min}.\end{aligned}$$ The exact expressions for the transformed operators in Eqs. and , and the discussion relevant to their derivation, have been presented earlier in Ref. and are not repeated here.
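Because the geminal-transformed basis is non-orthogonal ($\tilde{1} \neq 1$), the second minimization is equivalent to a generalized eigenvalue problem $\tilde{H}\mathbf{c} = E\,\tilde{1}\,\mathbf{c}$. The sketch below solves such a problem with random stand-in matrices (placeholders, not the actual integrals of $\tilde{H}$ and $\tilde{1}$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Random symmetric stand-in for H_tilde = G'HG, and a random symmetric
# positive-definite stand-in for the geminal overlap S_tilde = G'G.
A = rng.standard_normal((n, n))
H = 0.5 * (A + A.T)
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)

# Reduce H c = E S c to a standard eigenproblem with the Cholesky factor
# S = L L^T, then back-transform the CI coefficient vector.
L = np.linalg.cholesky(S)
Linv = np.linalg.inv(L)
w, V = np.linalg.eigh(Linv @ H @ Linv.T)
E_xcfci = w[0]            # minimized Rayleigh quotient, Eq. [eq:Excfci]
c = Linv.T @ V[:, 0]      # CI coefficients in the original basis
print(E_xcfci)
```

The lowest generalized eigenvalue is exactly the minimum of the Rayleigh quotient over $\mathbf{c}$, which is why a single diagonalization suffices for the second variational step.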
The $E_\mathrm{XCFCI}$ reduces to the conventional FCI energy in the limit that the geminal operator equals 1 $$\begin{aligned} E_\mathrm{FCI} = \lim_{G \rightarrow 1} E_\mathrm{XCFCI}\end{aligned}$$ We expect the $E_\mathrm{XCFCI}$ energy to be lower than the FCI energy for an identical set of basis functions, and earlier studies have shown this to be true.[@RefWorks:4031] After the successful completion of the XCFCI calculations, the field-dependent exciton binding energy was calculated from the difference between the non-interacting and interacting ground state energies. Defining the non-interacting Hamiltonian as $$\begin{aligned} \label{eq:h0} H_0 &= \lim_{\epsilon^{-1} \rightarrow 0} H ,\end{aligned}$$ the exciton binding energy is computed as $$\begin{aligned} \label{eq:Eb} E_\mathrm{B}[\mathbf{F}] &= E_\mathrm{XCFCI} - E_0^{(0)},\end{aligned}$$ where $E_0^{(0)}$ is defined as $$\begin{aligned} \label{eq:e0} E_0^{(0)} &= \min_{\Psi} \frac{\langle \Psi \vert H_0 \vert \Psi \rangle} {\langle \Psi \vert \Psi \rangle}.\end{aligned}$$ The field-dependent electron-hole recombination probability is obtained from the XCFCI wavefunction using the following expression[@RefWorks:4030; @RefWorks:4031] $$\begin{aligned} \label{eq:recomb} P_{\mathrm{eh}} [\mathbf{F}] = \frac{\langle\Psi_\mathrm{XCFCI} \vert \delta(\mathbf{r}_{\mathrm{e}}-\mathbf{r}_{\mathrm{h}}) \vert \Psi_\mathrm{XCFCI} \rangle} {\langle\Psi_\mathrm{XCFCI} \vert \Psi_\mathrm{XCFCI} \rangle}.\end{aligned}$$ The exciton binding energy and the recombination probability are functionals of the applied external field, as indicated explicitly in Eq. and , respectively.

Construction of field dependent basis set {#sec:polaron}
-----------------------------------------

One of the key features of the electron-hole Hamiltonian used in the present work is the presence of the field-dependent term in Eq. .
Since the convergence of the CI expansion depends on the quality of the underlying 1-particle basis, it is desirable to construct and use efficient single-particle basis sets. In the present work, we have developed field-dependent basis functions, and the details of the derivation are presented as follows. Starting with the expression for $H_0$ in Eq. , the zeroth-order Hamiltonian is expressed as a sum of non-interacting electron and hole Hamiltonians $$\begin{aligned} H_0 = H_0^\mathrm{e} + H_0^\mathrm{h},\end{aligned}$$ where the expressions for the single-component non-interacting Hamiltonians are given as $$\begin{aligned} H_0^\mathrm{e} &= T_\mathrm{e} + v_\mathrm{e}^\mathrm{ext} + \vert e \vert \mathbf{F} \cdot \mathbf{r}_\mathrm{e} \\ H_0^\mathrm{h} &= T_\mathrm{h} + v_\mathrm{h}^\mathrm{ext} - \vert e \vert \mathbf{F} \cdot \mathbf{r}_\mathrm{h}.\end{aligned}$$ As seen from the above equations, the coupling between the external field and the quasiparticle coordinates is linear. The above Hamiltonian shares mathematical similarity with the spin-boson Hamiltonian that has been used extensively in quantum dissipative systems.[@weiss2008quantum] In the present method, we perform an analogous transformation, which is defined by the following equations $$\begin{aligned} \label{eq:polar} \mathbf{q}_\mathrm{e} &= \mathbf{r}_\mathrm{e} + \lambda_\mathrm{e} \mathbf{F} \\ \mathbf{q}_\mathrm{h} &= \mathbf{r}_\mathrm{h} - \lambda_\mathrm{h} \mathbf{F}.\end{aligned}$$ Similar to the polaron transformation in the spin-boson system, the coordinates of the quasiparticles experience a shift due to the presence of the external field.[@weiss2008quantum] Using the method of variational polaron transformation by Harris and Silbey,[@harris1985variational] the shift parameter $\lambda$ is determined variationally.
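For a parabolic confinement, the effect of such a shift can be checked directly: completing the square in $\frac{1}{2}kx^2 + Fx$ moves the oscillator minimum to $x = -F/k$ and lowers the energy by $F^2/(2k)$. A finite-difference sketch of this check (one dimension, $\hbar = m = |e| = 1$; the grid parameters and values of $k$ and $F$ are arbitrary choices for illustration):

```python
import numpy as np

def ground_energy(k, F, n=1201, xmax=24.0):
    """Lowest eigenvalue of H = -(1/2) d^2/dx^2 + (1/2) k x^2 + F x,
    discretized with second-order finite differences on a uniform grid."""
    x = np.linspace(-xmax, xmax, n)
    h = x[1] - x[0]
    V = 0.5 * k * x**2 + F * x
    H = (np.diag(V + 1.0 / h**2)
         - np.diag(np.full(n - 1, 0.5 / h**2), 1)
         - np.diag(np.full(n - 1, 0.5 / h**2), -1))
    return np.linalg.eigvalsh(H)[0]

k, F = 1.0, 0.3
shift = ground_energy(k, F) - ground_energy(k, 0.0)
# The linear field term only displaces the oscillator and lowers the
# energy by F^2 / (2k), consistent with the polaron-shifted coordinate.
print(shift, -F**2 / (2 * k))
```

This is the one-dimensional analogue of why the shifted coordinates of Eq. \[eq:polar\] are a natural basis for the field-dependent problem.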
The field-dependent electronic basis functions are obtained by first constructing the Hamiltonian matrix using Gaussian-type orbitals (GTO) and then diagonalizing the resulting matrix $$\begin{aligned} H_0^\mathrm{e} \Phi_i^\mathrm{e} &= \epsilon_i^\mathrm{e} (\lambda_\mathrm{e}) \Phi_i^\mathrm{e} \quad i=1,\dots,M_\mathrm{e} \\ H_0^\mathrm{h} \Phi_j^\mathrm{h} &= \epsilon_j^\mathrm{h} (\lambda_\mathrm{h}) \Phi_j^\mathrm{h} \quad j=1,\dots,M_\mathrm{h}.\end{aligned}$$ The value of the shift parameter is obtained variationally by minimizing the trace $$\begin{aligned} \min_{\lambda} \sum_{i}^{M_\mathrm{e}} \epsilon_i^\mathrm{e} \implies \lambda_\mathrm{e}.\end{aligned}$$ The $\lambda_\mathrm{h}$ is obtained by a similar procedure. The electron-hole basis functions for the FCI calculations are constructed by taking a direct product between the sets of electron and hole single-component basis functions $$\begin{aligned} \{ \Phi_k \} &= \{ \Phi_i^\mathrm{e} \} \otimes \{ \Phi_j^\mathrm{h} \}.\end{aligned}$$ The procedure described above is a general method that is independent of the exact form of the external potential. However, if the external potential is of quadratic form, the field-dependent zeroth-order single-component Hamiltonian has an uncomplicated mathematical form and additional simplification can be achieved.

Results and Discussion {#sec:results}
======================

The electron-hole Hamiltonian in Eq.
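The trace minimization can be illustrated for the quadratic case by expressing $H_0^\mathrm{e}$ in a truncated oscillator basis centered at the shifted coordinate $q = x + \lambda F$ and scanning $\lambda$ (one dimension, $\hbar = m = |e| = 1$; the values of $k$, $F$, and the basis size below are arbitrary illustrative choices). The minimum falls at $\lambda = 1/k$, where the residual linear coupling vanishes and the basis is exactly adapted to the field:

```python
import numpy as np

def trace_h0(lam, k=0.5, F=0.4, N=10):
    """Trace of H0 = p^2/2 + k x^2/2 + F x over N oscillator basis
    functions centered at the shifted coordinate q = x + lam*F."""
    w = np.sqrt(k)
    n = np.arange(N)
    q = np.diag(np.sqrt((n[:-1] + 1) / (2 * w)), 1)   # position operator
    q = q + q.T
    H = (np.diag((n + 0.5) * w)                       # oscillator in q
         + F * (1.0 - k * lam) * q                    # residual linear coupling
         + (0.5 * k * lam**2 - lam) * F**2 * np.eye(N))
    return np.sum(np.linalg.eigvalsh(H))

# Scan the shift parameter; the trace is quadratic in lam with its
# minimum at lam = 1/k.
lams = np.linspace(0.0, 4.0, 401)
lam_opt = lams[int(np.argmin([trace_h0(l) for l in lams]))]
print(lam_opt)
```

In practice a line search would replace the brute-force scan, but the scan makes the variational character of the shift parameter explicit.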
has been used extensively for studying the optical rectification [@RefWorks:4060; @RefWorks:2174; @He2010266; @RefWorks:4137; @Chen2012786] effect in GaAs quantum dots, and all the system-specific parameters were obtained from previous calculations on the GaAs system.[@RefWorks:4060; @RefWorks:2174] The parabolic confinement potential has found widespread application[@Peeters19901486; @Que199211036; @Halonen19925980; @RefWorks:4035; @Jaziri1994171; @Rinaldi1996342; @RefWorks:4130; @RefWorks:2155; @Barseghyan2009521; @Taut2009; @He2010266; @Stobbe2010; @RefWorks:4034; @RefWorks:4033; @Kirak2011; @Trojnar2011] in the study of quantum dots and was used in the present work to approximate the external potential term in the Hamiltonian. All the parameters that are needed for a complete description of the electron-hole Hamiltonian used in the calculations are presented in Table \[tab:param\].

  Parameter          Value
  ------------------ ----------------------------
  $m_{\mathrm{e}}$   $0.067m_{0}$
  $m_{\mathrm{h}}$   $0.090m_{0}$
  $k_{\mathrm{e}}$   $9.048\times 10^{-7}$ a.u.
  $k_{\mathrm{h}}$   $1.122\times 10^{-6}$ a.u.
  $\epsilon$         $13.1\epsilon_{0}$

  : System dependent parameters used in the electron-hole Hamiltonian for the GaAs quantum dot [@RefWorks:4060; @RefWorks:2174]

\[tab:param\]

Following earlier work on the effect of electric field on non-linear optical properties of GaAs quantum dots,[@RefWorks:4060; @RefWorks:2174] the external electric field was aligned along the z-axis and the field strength was varied from zero to 500 kV/cm. Similar to the spin-boson Hamiltonian, the polaron transform resulted in shifted harmonic oscillators.[@weiss2008quantum] The eigenvalues and eigenfunctions of $H_0$ were obtained analytically, and the lowest ten eigenstates of the shifted harmonic oscillator Hamiltonian were used in the construction of the 1-particle basis.
The direct product between the electronic and the hole basis sets was performed to generate the electron-hole basis for the FCI calculations. The geminal minimization was performed using a set of three $\{b_k,\gamma_k\}$ parameters at each field strength, and the optimized values are presented in Table \[tab:geminals\].

  $F_z$ (kV/cm)   0                      100                    200                    300                    400                    500
  --------------- ---------------------- ---------------------- ---------------------- ---------------------- ---------------------- ----------------------
  $b_1$           1.00                   1.00                   1.00                   1.00                   1.00                   1.00
  $\gamma_1$      0.00                   0.00                   0.00                   0.00                   0.00                   0.00
  $b_2$           $1.40\times 10^{-1}$   $9.99\times 10^{-1}$   $4.99\times 10^{-2}$   $5.78\times 10^{-3}$   $1.00\times 10^{-2}$   $5.59\times 10^{-3}$
  $\gamma_2$      $2.29\times 10^{-4}$   $4.60\times 10^{-6}$   $1.11\times 10^{-2}$   $1.11$                 $1.00$                 $1.11$
  $b_3$           $4.35\times 10^{-2}$   $1.08\times 10^{-1}$   $8.90\times 10^{-2}$   $1.67\times 10^{-2}$   $2.00\times 10^{-2}$   $1.58\times 10^{-2}$
  $\gamma_3$      $1.13\times 10^{-2}$   $1.00\times 10^{-2}$   $1.01\times 10^{-3}$   $1.11\times 10^{-1}$   $1.01\times 10^{-1}$   $1.02\times 10^{-1}$

  : Optimized geminal parameters at each field strength.

\[tab:geminals\]

The total exciton energy for the field-free case was found to be 269.45 meV. The total exciton energy of the system as a function of the field strength is presented in Fig. \[fig:toteng\].

![Relative exciton energy compared to the fit $E = (-2.7925\times 10^{-6})F_z^2 + (-7.0938\times 10^{-5})F_z + 1$.[]{data-label="fig:toteng"}](fig1.pdf){width="85mm"}

It is seen that the total energy decreases with increasing field strength. Earlier studies on this system indicate that the exciton energy is a quadratic function of the applied field.[@Weiner1987842; @Robinson20051] To investigate the scaling of the total exciton energy with respect to the field strength, we have performed a least-squares fit of the calculated values with a second-order polynomial, and the results are presented in Fig. \[fig:toteng\].
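Such a quadratic fit is a one-line least-squares problem. The sketch below regenerates points from the fitted curve in Fig. \[fig:toteng\] and recovers its coefficients; it is a consistency check on the functional form, not a re-analysis of the computed energies:

```python
import numpy as np

# Coefficients of the quadratic fit to the relative exciton energy
# quoted in Fig. [fig:toteng] (F_z in kV/cm).
a, b, c = -2.7925e-6, -7.0938e-5, 1.0

F = np.linspace(0.0, 500.0, 6)       # field strengths used in the study
E_rel = a * F**2 + b * F + c         # points regenerated on the fitted curve

coeffs = np.polyfit(F, E_rel, 2)     # second-order least-squares fit
print(coeffs)                        # recovers [a, b, c]
```

For real (noisy) computed energies, the residuals of such a fit are what justify calling the scaling quadratic rather than, say, linear plus a saturation term.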
The results from these calculations confirm the quadratic scaling of the exciton energy as a function of the field strength. The exciton binding energy was calculated using Eq. and was found to be 28.52 meV for the field-free case. The effect of the external field on the exciton binding energy was investigated by calculating the relative binding energy, which is defined by the following equation $$\begin{aligned} \tilde{E}_\mathrm{B} &= \frac{E_\mathrm{B}[\mathbf{F}]}{E_\mathrm{B}[\mathbf{F}=0]} .\end{aligned}$$ It is seen from Fig. \[fig:relative\] that the exciton binding energy decreases with increasing field strength. As the field strength is increased from 0 to 500 kV/cm, the exciton binding energy decreases by a factor of 2.6. In addition to the calculation of the binding energy, the effect of the field on the electron-hole recombination probability was also investigated. Analogous to the relative binding energy, the relative recombination probability is defined as $$\begin{aligned} \tilde{P}_\mathrm{eh} &= \frac{P_\mathrm{eh}[\mathbf{F}]}{P_\mathrm{eh}[\mathbf{F}=0]},\end{aligned}$$ and is presented in Fig. \[fig:relative\].

![Comparison of $\tilde{E}_{\mathrm{B}}$ and $\tilde{P}_{\mathrm{eh}}$ as a function of electric field strength.[]{data-label="fig:relative"}](fig2.pdf){width="85mm"}

It is seen that there is a sharp decrease in the recombination probability with increasing field strength, and the recombination probability at 500 kV/cm is lower than in the field-free case by a factor of 166. One of the key results from this study is that the exciton binding energy and eh-recombination probability are affected differently by the external electric field. It is seen that the exciton binding energy and eh-recombination probability follow different scalings with respect to field strength. The polaron transformation also provides insight into the effect of electric field on the exciton binding energy in the limit of high field strengths.
Starting with the transformation defined in Eq. , the electron-hole Coulomb interaction in the transformed coordinates can be expressed as $$\begin{aligned} \frac{1}{\vert \mathbf{r}_\mathrm{e} - \mathbf{r}_\mathrm{h} \vert} &= \frac{1}{\vert (\mathbf{q}_\mathrm{e} - \mathbf{q}_\mathrm{h}) - (\lambda_e + \lambda_h) \mathbf{F} \vert } = v_\mathrm{eh}(\mathbf{q}) .\end{aligned}$$ It is seen that the above expression will be dominated by the field-dependent term in the limit of high field strength. A direct consequence of this condition is that, in the limit of high field strengths, we expect the exciton binding energy to be small $$\begin{aligned} \label{eq:limit} H(\mathbf{q}) \approx H_0(\mathbf{q}) \implies E_\mathrm{B} \approx 0 \quad \quad \mathrm{for}\, 1 \ll \vert \mathbf{F} \vert < \infty.\end{aligned}$$ It is important to note that the above conclusion is independent of the choice of the external potential.

Conclusion {#sec:conclusion}
==========

The effect of external electric field on the exciton binding energy and electron-hole recombination probability was computed using the explicitly correlated full configuration interaction method. Field-dependent basis functions were used in the calculations, and a variational polaron transformation scheme was developed for the construction of the field-dependent basis functions. It was found that both the exciton binding energy and the electron-hole recombination probability decrease with increasing field strength. One interesting conclusion from this study is that the binding energy and recombination probability follow different scalings with respect to the external electric field. For the range of field strengths studied, the recombination probability and exciton binding energy decrease by factors of 166 and 2.6, respectively. These results give important insight into the application of electric fields for manipulating excitons in quantum dots.
Acknowledgment is made to the donors of The American Chemical Society Petroleum Research Fund (52659-DNI6) and to Syracuse University for support of this research.
{ "pile_set_name": "ArXiv" }
---
abstract: 'We present here an overview of recent work in the subject of astrophysical manifestations of super-massive black hole (SMBH) mergers. This is a field that has been traditionally driven by theoretical work, but in recent years has also generated a great deal of interest and excitement in the observational astronomy community. In particular, the electromagnetic (EM) counterparts to SMBH mergers provide the means to detect and characterize these highly energetic events at cosmological distances, even in the absence of a space-based gravitational-wave observatory. In addition to providing a mechanism for observing SMBH mergers, EM counterparts also give important information about the environments in which these remarkable events take place, thus teaching us about the mechanisms through which galaxies form and evolve symbiotically with their central black holes.'
address: '$^1$ NASA Goddard Space Flight Center, Greenbelt, MD 20771'
author:
- 'Jeremy D. Schnittman$^{1}$'
title: 'Astrophysics of Super-massive Black Hole Mergers'
---

INTRODUCTION {#intro}
============

Following numerical relativity’s [*annus mirabilis*]{} of 2006, a deluge of work has explored the astrophysical manifestations of black hole mergers, from both the theoretical and observational perspectives. While the field has traditionally been dominated by applications to the direct detection of gravitational waves (GWs), much of the recent focus of numerical simulations has been on predicting potentially observable electromagnetic (EM) signatures. Of course, the greatest science yield will come from coincident detection of both the GW and EM signature, giving a myriad of observables such as the black hole mass, spins, redshift, and host environment, all with high precision [@bloom:09].
Yet even in the absence of a direct GW detection (and this indeed is the likely state of affairs for at least the next decade), the EM signal alone may be sufficiently strong to detect with wide-field surveys, and also unique enough to identify unambiguously as a SMBH merger. In this article, we review the brief history and astrophysical principles that govern the observable signatures of SMBH mergers. To date, the field has largely been driven by theory, but we also provide a summary of the observational techniques and surveys that have been utilized, including recent claims of potential detections of both SMBH binaries and also post-merger recoiling black holes. While the first public use of the term “black hole” is generally attributed to John Wheeler in 1967, as early as 1964 Edwin Salpeter proposed that gas accretion onto super-massive black holes provided the tremendous energy source necessary to power the highly luminous quasi-stellar objects (quasars) seen in the centers of some galaxies [@saltpeter:64]. Even earlier than that, black holes were understood to be formal mathematical solutions to Einstein’s field equations [@schwarzschild:16], although considered by many to be simply mathematical oddities, as opposed to objects that might actually exist in nature (perhaps most famously, Eddington’s stubborn opposition to the possibility of astrophysical black holes probably delayed significant progress in their understanding for decades) [@thorne:94]. In 1969, Lynden-Bell outlined the foundations for black hole accretion as the basis for quasar power [@lynden_bell:69]. The steady-state thin disks of Shakura and Sunyaev [@shakura:73], along with the relativistic modifications given by Novikov and Thorne [@novikov:73], are still used as the standard models for accretion disks today.
In the following decade, a combination of theoretical work and multi-wavelength observations led to a richer understanding of the wide variety of accretion phenomena in active galactic nuclei (AGN) [@rees:84]. In addition to the well-understood thermal disk emission predicted by [@shakura:73; @novikov:73], numerous non-thermal radiative processes such as synchrotron and inverse-Compton are also clearly present in a large fraction of AGN [@oda:71; @elvis:78]. Peters and Mathews [@peters:63] derived the leading-order gravitational wave emission from two point masses more than a decade before Thorne and Braginsky [@thorne:76] suggested that one of the most promising sources for such a GW signal would be the collapse and formation of a SMBH, or the (near head-on) collision of two such objects in the center of an active galaxy. In that same paper, Thorne and Braginsky build on earlier work by Estabrook and Wahlquist [@estabrook:75] and explore the prospects for a space-based method for direct detection of these GWs via Doppler tracking of inertial spacecraft. They also attempted to estimate event rates for these generic bursts, and arrived at quite a broad range of possibilities, from $\lesssim 0.01$ to $\gtrsim 50$ events per year, numbers that at least bracket our current best estimates for SMBH mergers [@sesana:07]. However it is not apparent that Thorne and Braginsky considered the hierarchical merger of galaxies as the driving force behind these SMBH mergers, a concept that was only just emerging at the time [@ostriker:75; @ostriker:77].
Within the galactic merger context, the seminal paper by Begelman, Blandford, and Rees (BBR) [@begelman:80] outlines the major stages of the SMBH merger: first the nuclear star clusters merge via dynamical friction on the galactic dynamical time $t_{\rm gal} \sim 10^8$ yr; then the SMBHs sink to the center of the new stellar cluster on the stellar dynamical friction time scale $t_{\rm df} \sim 10^6$ yr; the two SMBHs form a binary that is initially only loosely bound, and hardens via scattering with the nuclear stars until the loss cone is depleted; further hardening is limited by the diffusive replenishing of the loss cone, until the binary becomes “hard,” i.e., the binary’s orbital velocity is comparable to the local stellar orbital velocity, at which point the evolutionary time scale is $t_{\rm hard} \sim N_{\rm inf} t_{\rm df}$, with $N_{\rm inf}$ stars within the influence radius. This is typically much longer than the Hubble time, effectively stalling the binary merger before it can reach the point where gravitational radiation begins to dominate the evolution. Since $r_{\rm hard} \sim 1$ pc, and gravitational waves don’t take over until $r_{\rm GW} \sim 0.01$ pc, this loss cone depletion has become known as the “final parsec problem” [@merritt:05]. BBR thus propose that there should be a large cosmological population of stalled SMBH binaries with separation of order a parsec, and orbital periods of years to centuries. Yet to date not a single binary system with these sub-parsec separations has even been unambiguously identified. In the decades since BBR, numerous astrophysical mechanisms have been suggested as the solution to the final parsec problem [@merritt:05]. Yet the very fact that so many different solutions have been proposed and continue to be proposed is indicative of the prevailing opinion that it is still a real impediment to the efficient merger of SMBHs following a galaxy merger. 
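The hierarchy of timescales in the BBR picture can be made concrete with rough numbers. In the sketch below, the stellar count $N_{\rm inf} \sim 10^5$ and the Hubble time are illustrative assumptions, not values quoted in the text:

```python
# Order-of-magnitude BBR merger timescales, in years.
t_gal = 1e8      # nuclear star clusters merge (galactic dynamical friction)
t_df = 1e6       # SMBHs sink to the center (stellar dynamical friction)
N_inf = 1e5      # assumed number of stars within the influence radius

t_hard = N_inf * t_df    # hardening limited by diffusive loss-cone refilling
t_hubble = 1.4e10        # approximate Hubble time (assumed)

# With these numbers t_hard exceeds the Hubble time: the binary stalls near
# r_hard ~ 1 pc, well outside r_GW ~ 0.01 pc -- the "final parsec problem".
print(t_hard / t_hubble)
```

Any mechanism that drains angular momentum faster than loss-cone diffusion (gas drag, triaxiality, massive perturbers) shortens $t_{\rm hard}$ and can dissolve the apparent stall.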
However, the incontrovertible evidence that galaxies regularly undergo minor and major mergers during their lifetimes, coupled with a distinct lack of binary SMBH candidates, strongly suggest that nature has found its own solution to the final parsec problem. Or, as Einstein put it, “God does not care about mathematical difficulties; He integrates empirically.” For incontrovertible evidence of a SMBH binary, nothing can compare with the direct detection of gravitational waves from space. The great irony of gravitational-wave astronomy is that, despite the fact that the peak GW luminosity generated by black hole mergers outshines the [*entire observable universe*]{}, the extremely weak coupling to matter makes both direct and indirect detection exceedingly difficult. For GWs with frequencies less than $\sim 1$ Hz, the leading instrumental concept for nearly 25 years now has been a long-baseline laser interferometer with three free-falling test masses housed in drag-free spacecraft [@faller:89]. Despite the flurry of recent political and budgetary constraints that have resulted in a number of alternative, less capable designs, we take as our fiducial detector the classic LISA (Laser Interferometer Space Antenna) baseline design [@yellowbook:11]. For SMBHs with masses of $10^6 M_\odot$ at a redshift of $z=1$, LISA should be able to identify the location of the source on the sky within $\sim 10$ deg$^2$ a month before merger, and better than $\sim 0.1$ deg$^2$ with the entire waveform, including merger and ringdown [@kocsis:06; @lang:06; @lang:08; @kocsis:08a; @lang:09; @thorpe:09; @mcwilliams:10]. This should almost certainly be sufficient to identify EM counterparts with wide-field surveys such as LSST [@lsst:09], WFIRST [@spergel:13], or WFXT [@wfxt:12]. 
Like the cosmological beacons of gamma-ray bursts and quasars, merging SMBHs can teach us about relativity, high-energy astrophysics, radiation hydrodynamics, dark energy, galaxy formation and evolution, and how they all interact. A large variety of potential EM signatures have recently been proposed, almost all of which require some significant amount of gas in the near vicinity of the merging black holes [@schnittman:11]. Thus we must begin with the question of whether or not there [*is*]{} any gas present, and if so, what are its properties. Only then can we begin to simulate realistic spectra and light curves, and hope to identify unique observational signatures that will allow us to distinguish these objects from the myriad of other high-energy transients throughout the universe.

CIRCUMBINARY DISKS {#disk_theory}
==================

If there is gas present in the vicinity of a SMBH binary, it is likely in the form of an accretion disk, at least at some point in the system’s history. Disks are omnipresent in the universe for the simple reason that it is easy to lose energy through dissipative processes, but much more difficult to lose angular momentum. At larger separations, before the SMBHs form a bound binary system, massive gas disks can be quite efficient at bringing the two black holes together [@escala:05; @dotti:07]. As these massive gas disks are typically self-gravitating, their dynamics can be particularly complicated, and require high-resolution 3D simulations, which will be discussed in more detail in section \[MHD\_simulations\]. Here we focus on the properties of non-self-gravitating circumbinary accretion disks, which have traditionally employed the same alpha prescription for pressure-viscous stress scaling as in [@shakura:73]. Much of the early work on this subject was applied to protoplanetary disks around binary stars, or stars with massive planets embedded in their surrounding disks.
The classical work on this subject is Pringle (1991) [@pringle:91], who considered the evolution of a 1D thin disk with an additional torque term added to the inner disk. This source of angular momentum leads to a net outflow of matter, thus giving these systems their common names of “excretion” or “decretion” disks. Pringle considered two inner boundary conditions: one for the inflow velocity $v^r(R_{\rm in})\to 0$ and one for the surface density $\Sigma(R_{\rm in}) \to 0$. For the former case, the torque is applied at a single radius at the inner edge, leading to a surface density profile that increases steadily inwards towards $R_{\rm in}$. In the latter case, the torque is applied over a finite region in the inner disk, which leads to a relatively large evacuated gap out to $\gtrsim 6 R_{\rm in}$. In both cases, the angular momentum is transferred from the binary outwards through the gas disk, leading to a shrinking of the binary orbit. In [@artymowicz:91], SPH simulations were utilized to understand in better detail the torquing mechanism between the binary and the disk. They find that, in agreement with the linear theory of [@goldreich:79], the vast majority of the binary torque is transmitted to the gas through the $(l,m)=(1,2)$ outer Lindblad resonance (for more on resonant excitation of spiral density waves, see [@takeuchi:96]). The resonant interaction between the gas and the eccentric binary ($e=0.1$ for the system in [@artymowicz:91]) pumps energy and angular momentum into the gas, which is pulled along behind the more rapidly rotating interior point mass. This leads to a nearly evacuated disk inside of $r\approx 2a$, where $a$ is the binary’s semi-major axis.
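For reference, the 1D thin-disk models discussed above evolve the surface density $\Sigma(R,t)$ via the standard viscous diffusion equation, $\partial_t\Sigma = (3/R)\,\partial_R[R^{1/2}\partial_R(\nu\Sigma R^{1/2})]$. The explicit solver below is a minimal sketch with constant $\nu$, no binary torque term, and grid parameters of our own choosing; it reproduces the classic viscous spreading-ring behavior that underlies these calculations.

```python
import numpy as np

def evolve_disk(sigma, r, nu, t_end):
    """Explicitly integrate dSigma/dt = (3/r) d/dr[ r^{1/2} d/dr(nu Sigma r^{1/2}) ]
    with Sigma -> 0 at both boundaries (cf. Pringle's zero-density condition)."""
    dr = r[1] - r[0]
    dt = 0.1 * dr**2 / (3.0 * nu)          # safely below the explicit stability limit
    t = 0.0
    while t < t_end:
        f = nu * sigma * np.sqrt(r)        # nu * Sigma * r^{1/2}
        g = np.sqrt(r) * np.gradient(f, dr)
        sigma = sigma + dt * (3.0 / r) * np.gradient(g, dr)
        sigma[0] = sigma[-1] = 0.0
        t += dt
    return sigma

r = np.linspace(0.2, 3.0, 300)
sigma0 = np.exp(-((r - 1.0) / 0.1)**2)     # narrow ring of gas at r = 1
sigma1 = evolve_disk(sigma0, r, nu=1e-3, t_end=5.0)
```

The ring spreads viscously, with most of the mass drifting inward while a low-density tail carries the angular momentum outward; adding a torque term at the inner boundary converts this into the decretion solutions of Pringle.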
The interaction with the circumbinary disk not only removes energy and angular momentum from the binary, but it can also increase its eccentricity, and cause the binary pericenter to precess on a similar timescale, all of which could lead to potentially observable effects in GW observations [@armitage:05; @roedig:11; @roedig:12]. In [@artymowicz:94; @artymowicz:96], Artymowicz & Lubow expand upon [@artymowicz:91] and provide a comprehensive study of the effects of varying the eccentricity, mass ratio, and disk thickness on the behavior of the circumbinary disk and its interaction with the binary. Not surprisingly, they find that the disk truncation radius moves outward with binary eccentricity. Similarly, the mini accretion disks around each of the stars have an outer truncation radius that decreases with binary eccentricity. On the other hand, the location of the inner edge of the circumbinary disk appears to be largely insensitive to the binary mass ratio [@artymowicz:94]. For relatively thin, cold disks with aspect ratios $H/R\approx 0.03$, the binary torque is quite effective at preventing accretion, much as in the decretion disks of Pringle [@pringle:91]. In that case, the gas accretion rate across the inner gap is as much as $10-100\times$ smaller than that seen in a single disk, but the authors acknowledge that the low resolution of the SPH simulation makes these estimates inconclusive [@artymowicz:94]. When increasing the disk thickness to $H/R\approx 0.1$, the gas has a much easier time jumping the gap and streaming onto one of the two stars, typically the smaller one. For $H/R\approx 0.1$, the gas accretion rate is within a factor of two of the single-disk case [@artymowicz:96]. The accretion rate across the gap is strongly modulated at the binary orbital period, although the accretion onto the individual masses can be out of phase with each other.
The modulated accretion rate suggests a promising avenue for producing a modulated EM signal in the pre-merger phase, and the very fact that a significant amount of gas can in fact cross the gap is important for setting up a potential prompt signal at the time of merger. To adequately resolve the spiral density waves in a thin disk, 2D grid-based calculations are preferable to the inherently noisy and diffusive SPH methods. Armitage and Natarajan [@armitage:02] take a hybrid approach to the problem, and use a 2D ZEUS [@stone:92] hydrodynamics calculation to normalize the torque term in the 1D radial structure equation. Unlike [@artymowicz:91], they find almost no leakage across the gap, even for a moderate $H/R=0.07$. However, they do identify a new effect that is particularly important for binary black holes, as opposed to protoplanetary disks. For a mass ratio of $q\equiv m_2/m_1=0.01$, when a small accretion disk is formed around the primary, the evolution of the secondary due to gravitational radiation can shrink the binary on such short time scales that it plows into the inner accretion disk, building up gas and increasing the mass accretion rate and thus luminosity immediately preceding merger [@armitage:02]. If robust, this obviously provides a very promising method for generating bright EM counterparts to SMBH mergers. However, recent 2D simulations by [@baruteau:12] suggest that the gas in the inner disk could actually flow across the gap back to the outer disk, like snow flying over the plow. The reverse of this effect, gas piling up in the outer disk before leaking into the inner disk, has recently been explored by [@kocsis:12a; @kocsis:12b]. In the context of T Tau stars, [@gunther:02; @gunther:04] developed a sophisticated simulation tool that combines a polar grid for the outer disk with a Cartesian grid around the binary to best resolve the flow across the gap.
They are able to form inner accretion disks around each star, fed by persistent streams from the circumbinary disk. As a test, they compare the inner region to an SPH simulation and find good agreement, but only when the inner disks are artificially fed by some outer source, itself not adequately resolved by the SPH calculation [@gunther:04]. They also see strong periodic modulation in the accretion rate, due to a relatively large binary eccentricity of $e=0.5$.

![\[fig:macfadyen\] ([*left*]{}) Surface density and spiral density wave structure of a circumbinary disk with equal-mass BHs on a circular orbit, shown after the disk evolved for 4000 binary periods. The dimensions of the box are $x=[-5a,5a]$ and $y=[-5a,5a]$. ([*right*]{}) Time-dependent accretion rate across the inner edge of the simulation domain ($r_{\rm in}=a$), normalized by the initial surface density scale $\Sigma_0$. \[reproduced from MacFadyen & Milosavljevic (2008), ApJ [**672**]{}, 83\]](macfadyen_Sigma.epsi "fig:"){width="40.00000%"}
![\[fig:macfadyen\] ([*left*]{}) Surface density and spiral density wave structure of a circumbinary disk with equal-mass BHs on a circular orbit, shown after the disk evolved for 4000 binary periods. The dimensions of the box are $x=[-5a,5a]$ and $y=[-5a,5a]$. ([*right*]{}) Time-dependent accretion rate across the inner edge of the simulation domain ($r_{\rm in}=a$), normalized by the initial surface density scale $\Sigma_0$. \[reproduced from MacFadyen & Milosavljevic (2008), ApJ [**672**]{}, 83\]](macfadyen_mdot.eps "fig:"){width="50.00000%"}

MacFadyen and Milosavljevic (MM08) [@macfadyen:08] also developed a sophisticated grid-based code including adaptive mesh refinement to resolve the flows at the inner edge of the circumbinary disk in the SMBH binary context. However, they excise the inner region entirely to avoid excessive demands on their resolution around each black hole, so they are unable to study the behavior of mini accretion disks.
They also use an alpha prescription for viscosity and find qualitatively similar results to the earlier work described above: a gap with $R_{\rm in} \approx 2a$ due to the $m=2$ outer Lindblad resonance, spiral density waves in an eccentric disk, highly variable and periodic accretion, and accretion across the gap of $\sim 20\%$ of that expected for a single BH accretion disk with the same mass [@macfadyen:08]. The disk surface density as well as the variable accretion rate are shown in Figure \[fig:macfadyen\]. Recent work by the same group carried out a systematic study of the effect of mass ratio and found significant accretion across the gap for all values of $q=m_2/m_1$ between 0.01 and 1 [@dorazio:12]. The net result of these calculations seems to be that circumbinary gas disks are a viable mechanism for driving the SMBH binary through the final parsec to the GW-driven phase, and supplying sufficient accretion power to be observable throughout. Thus it is particularly perplexing that no such systems have been observed with any degree of certainty. According to simple alpha-disk theory, there should also be a point in the GW evolution where the binary separation is shrinking at such a prodigious rate that the circumbinary disk cannot keep up with it, and effectively decouples from the binary. At that point, gas should flow inwards on the relatively slow timescale corresponding to accretion around a single point mass, and a real gap of evacuated space might form around the SMBHs, which then merge in a near vacuum [@milos:05].

NUMERICAL SIMULATIONS {#simulations}
=====================

Vacuum numerical relativity {#vacuum}
---------------------------

In the context of EM counterparts, the numerical simulation of two equal-mass, non-spinning black holes in a vacuum is just about the simplest problem imaginable.
Yet the inherently non-linear behavior of Einstein’s field equations made this a nearly unsolvable Grand Challenge problem that frustrated generations of relativists. The effort spanned the 3+1 formulation of Arnowitt, Deser, and Misner in 1962 [@arnowitt:08], the first attempt at a numerical relativity (NR) simulation on a computer in 1964 [@hahn:64], and decades of uneven progress, slowed in large part by the limited computer power of the day (but also by important fundamental instabilities in the formulation of the field equations), before the ultimate solution by Pretorius in 2005 [@pretorius:05] and the subsequent deluge of papers in 2006 from multiple groups around the world (for a much more thorough review of this colorful story and the many technical challenges overcome by its participants, see [@centrella:10]). Here we will review just a few highlights from the recent NR results that are most pertinent to our present subject. For the first 50 years after their original conception, black holes (and general relativity as a whole) were largely relegated to mathematicians as a theoretical curiosity with little possibility of application in astronomy. All this changed in the late 1960s and early 70s, when both stellar-mass and super-massive black holes were not only observed, but also understood to be critical energy sources that play a major role in the evolution of galaxies and stars [@thorne:94]. A similar environment was present during the 1990s with regard to binary black holes and gravitational waves. Most believed in their existence, but after decades of false claims and broken promises, the prospect of direct detection of GWs seemed further away than ever. But then in 1999, construction was completed on the two LIGO observatories, and they began taking science data in 2002. At the same time, the space-based LISA concept was formalized with the “Yellow Book,” a report submitted to ESA in 1996, and together with NASA, an international science team was formed in 2001.
Astrophysics theory has long been data-driven, but here was a case where large-scale projects were being proposed and even funded based largely on theoretical predictions. The prospect of real observations and data in turn energized the NR community and provided new motivation to finally solve the binary BH merger problem. Long-duration, accurate waveforms are necessary for both the detection and characterization of gravitational waves. Generic binary sources are fully described by 17 parameters: the BH masses (2), spin vectors (6), binary orbital elements (6), sky position (2), and distance (1). To adequately cover this huge parameter space requires exceedingly clever algorithms and an efficient method for calculating waveforms. Fortunately, most NR studies to date suggest that even the most non-linear phase of the inspiral and merger process produces a relatively smooth waveform, dominated by the leading quadrupole mode [@centrella:10]. Additionally, in the early inspiral and late ringdown phases, relatively simple analytic expressions appear to be quite sufficient in matching the waveforms [@pan:11]. Even more encouraging is the fact that waveforms from different groups using very different methods agree to a high level of accuracy, thus lending confidence to their value as a description of the real world [@baker:07]. In addition to the waveforms, another valuable result from these first merger simulations was the calculation of the mass and spin of the final black hole, demonstrating that the GWs carried away a full $4\%$ of their initial energy in roughly an orbital time, and leave behind a moderately spinning black hole with $a/M=0.7$ [@baker:06a; @campanelli:06]. After the initial breakthrough with equal-mass, non-spinning black holes, the remarkably robust “moving puncture” method was soon applied to a wide variety of systems, including unequal masses [@berti:07], eccentric orbits [@hinder:08], and spinning BHs [@campanelli:06b]. 
As with test particles around Kerr black holes, when the spins are aligned with the orbital angular momentum, the BHs can survive longer before plunging, ultimately producing more GW power and resulting in a larger final spin. This is another critical result for astrophysics, as the spin evolution of SMBHs via mergers and gas accretion episodes is a potentially powerful diagnostic of galaxy evolution [@berti:08]. Perhaps the most interesting and unexpected result from the NR bonanza was the first accurate calculation of the gravitational recoil, which will be discussed in more detail in the following section. In addition to the widespread moving puncture method, the NR group at Cornell/Caltech developed a highly accurate spectral method that is particularly well-suited for long evolutions [@boyle:07]. Because it converges exponentially with resolution (as opposed to polynomial convergence for finite-difference methods), the spectral method can generate waveforms with dozens of GW cycles, accurate to a small fraction of phase. These long waveforms are particularly useful for matching the late inspiral to post-Newtonian (PN) equations of motion, the traditional tool of choice for GW data analysis for LIGO and LISA (e.g., [@cutler:93; @apostolatos:94; @kidder:95; @blanchet:06]). The down side of the spectral method has been its relative lack of flexibility, making it very time consuming to set up simulations of new binary configurations, particularly with arbitrary spins. If this problem can be overcome, spectral waveforms will be especially helpful in guiding the development of more robust semi-analytic tools (e.g., the effective-one-body approach of Buonanno [@buonanno:99]) for calculating the inspiral, merger, and ringdown of binary BHs with arbitrary initial conditions. The natural application for long, high-accuracy waveforms is as templates in the matched-filtering approach to GW data analysis. 
For LIGO, this is critical to detect most BH mergers, where much of the in-band power will come from the final stages of inspiral and merger. The high signal-to-noise expected from SMBHs with LISA means that most events will probably be detected with high significance even when using a primitive template library [@flanagan:98; @cutler:98]. However, for [*parameter estimation*]{}, high-fidelity waveforms are essential for faithfully reproducing the physical properties of the source. In particular, for spinning BHs, the information contained in the precessing waveform can greatly improve our ability to determine the sky position of the source, and thus improve our prospects for detecting and characterizing any EM counterpart [@lang:08; @thorpe:09; @lang:09].

Gravitational recoil {#recoil}
--------------------

In the general case where there is some asymmetry between the two black holes (e.g., unequal masses or spins), the GW radiation pattern will have a complicated multipole structure. The beating between these different modes leads to a net asymmetry in the momentum flux from the system, ultimately resulting in a recoil or kick imparted to the final merged black hole [@schnittman:08a]. This effect has long been anticipated for any GW source [@bonnor:61; @peres:62; @bekenstein:73], but the specific value of the recoil has been notoriously difficult to calculate using traditional analytic means [@wiseman:92; @favata:04; @blanchet:05; @damour:06]. Because the vast majority of the recoil is generated during the final merger phase, it is a problem uniquely suited for numerical relativity. Indeed, this was one of the first results published in 2006, for the merger of two non-spinning BHs with mass ratio 3:2, giving a kick of $90-100$ km/s [@baker:06b].
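Simple fitting formulas reproduce the NR kick results well in the non-spinning case. The sketch below implements the mass-asymmetry contribution alone, $v = A\eta^2\sqrt{1-4\eta}\,(1+B\eta)$ with $A\approx1.2\times10^4$ km/s and $B\approx-0.93$, the standard calibration of the non-spinning fit; it should be read as illustrative, not as the full spin-dependent recoil formula.

```python
import numpy as np

def recoil_mass_asymmetry(q, A=1.2e4, B=-0.93):
    """Recoil speed (km/s) from mass asymmetry alone, non-spinning BHs.
    q = m2/m1 <= 1; eta = q/(1+q)^2 is the symmetric mass ratio.
    The kick vanishes for equal masses (eta = 1/4) by symmetry."""
    eta = q / (1.0 + q)**2
    return A * eta**2 * np.sqrt(1.0 - 4.0 * eta) * (1.0 + B * eta)

q = np.linspace(0.01, 1.0, 1000)
v = recoil_mass_asymmetry(q)
q_max = q[np.argmax(v)]       # maximum kick of ~175 km/s near q ~ 1/3
```

For the 3:2 mass ratio quoted above, this formula gives $\sim 107$ km/s, within roughly $10\%$ of the NR value, illustrating why such fits are adequate for population-level studies.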
Shortly thereafter, a variety of initial configurations were explored, covering a range of mass ratios [@herrmann:07b; @gonzalez:07a], aligned spins [@herrmann:07a; @koppitz:07], and precessing spins [@campanelli:07; @tichy:07]. Arguably the most exciting result came with the discovery of the “superkick” configuration, where two equal-mass black holes have equal and opposite spins aligned in the orbital plane, leading to kicks of $>3000$ km/s [@gonzalez:07b; @campanelli:07; @tichy:07]. If such a situation were realized in nature, the resulting black hole would certainly be ejected from the host galaxy, leaving behind an empty nuclear host [@merritt:04]. Some of the many other possible ramifications include offset AGN, displaced star clusters, or unusual accretion modes. These and other signatures are discussed in detail below in section \[observations\]. Analogous to the PN waveform matching mentioned above, there has been a good deal of analytic modeling of the kicks calculated by the NR simulations [@schnittman:07a; @schnittman:08a; @boyle:08; @racine:09]. Simple empirical fits to the NR data are particularly useful for incorporating the effects of recoil into cosmological N-body simulations that evolve SMBHs along with merging galaxies [@baker:07b; @campanelli:07; @lousto:09; @vanmeter:10b]. While the astrophysical impacts of large kicks are primarily Newtonian in nature (even a kick of $v\sim 3000$ km/s is only $1\%$ of the speed of light), the underlying causes, while only imperfectly understood, clearly point to strong non-linear gravitational forces at work [@pretorius:07; @schnittman:08a; @rezzolla:10; @jaramillo:12; @rezzolla:13].

Pure electromagnetic fields {#EM_fields}
---------------------------

Shortly after the 2006–07 revolution, many groups began looking for the next big challenge in numerical relativity.
One logical direction was the inclusion of electromagnetic fields in the simulations, solving the coupled Einstein-Maxwell equations throughout a black hole merger. The first to do so was Palenzuela et al. [@palenzuela:09], who considered an initial condition with zero electric field and a uniform magnetic field surrounding an equal-mass, non-spinning binary a couple of orbits before merger. The subsequent evolution generates E-fields twisted around the two BHs, while the B-field remains roughly vertical, although it does experience some amplification (see Fig. \[fig:palenzuela\]).

![\[fig:palenzuela\] Magnetic and electric field configurations around a binary black hole $40M$ ([*left*]{}) and $20M$ ([*right*]{}) before merger. The electric fields get twisted around the black holes, while the magnetic fields remain roughly vertical. \[reproduced from Palenzuela et al. 2009, [*PRL*]{} [**103**]{}, 081101\]](Palenzuela-1.eps "fig:"){width="45.00000%"}
![\[fig:palenzuela\] Magnetic and electric field configurations around a binary black hole $40M$ ([*left*]{}) and $20M$ ([*right*]{}) before merger. The electric fields get twisted around the black holes, while the magnetic fields remain roughly vertical. \[reproduced from Palenzuela et al. 2009, [*PRL*]{} [**103**]{}, 081101\]](Palenzuela-2.eps "fig:"){width="45.00000%"}

The EM power from this system was estimated by integrating the radial Poynting flux through a spherical shell at large radius. They found only a modest ($30-40\%$) increase in EM energy, but there was a clear transient quadrupolar Poynting burst of power coincident with the GW signal, giving one of the first hints of astrophysical EM counterparts from NR simulations. This work was followed up by a more thorough study in [@moesta:10; @palenzuela:10a], which showed that the EM power $L_{\rm EM}$ scaled like the square of the total BH spin and proportional to $B^2$, as would be expected for a Poynting flux-powered jet [@blandford:77].
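That scaling matches the standard Blandford-Znajek expectation, $L_{\rm EM}\propto B^2 (a/M)^2$. The helper below encodes the commonly quoted order-of-magnitude normalization $L_{\rm BZ}\sim10^{45}\,{\rm erg\,s^{-1}}\,(B/10^4\,{\rm G})^2(M/10^9 M_\odot)^2(a/M)^2$; the normalization is a rough anchor of our own, not a number extracted from these simulations.

```python
def l_bz(b_gauss, m_msun, spin):
    """Order-of-magnitude Blandford-Znajek-type Poynting luminosity (erg/s),
    anchored (by assumption) to ~1e45 erg/s at B = 1e4 G, M = 1e9 Msun, a/M = 1."""
    return 1.0e45 * (b_gauss / 1.0e4)**2 * (m_msun / 1.0e9)**2 * spin**2

# Doubling the field quadruples the power; spin enters quadratically as well:
l1 = l_bz(1e4, 1e6, 0.7)
l2 = l_bz(2e4, 1e6, 0.7)
```

The quadratic dependence on both $B$ and spin is why modest field amplification during merger can translate into a large jump in Poynting luminosity.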
Force-free simulations {#force_free}
----------------------

In [@palenzuela:10b; @palenzuela:10c], Palenzuela and collaborators extended their vacuum simulations to include force-free electrodynamics. This is an approximation where a tenuous plasma is present, and can generate currents and magnetic fields, but carries no inertia to push those fields around. They found that any moving, spinning black hole can generate Poynting flux and a Blandford-Znajek-type jet [@blandford:77]. Compared to the vacuum case, force-free simulations of a merging binary predict significant amplification of EM power by a factor of $\sim 10 \times$, coincident with the peak GW power [@palenzuela:10c]. For longer simulations run at higher accuracy, [@moesta:12; @alic:12] found an even greater $L_{\rm EM}$ amplification of $\sim 30 \times$ that of electro-vacuum.

M/HD simulations {#MHD_simulations}
----------------

As mentioned above in section \[disk\_theory\], if there is an appreciable amount of gas around the binary BH, it is likely in the form of a circumbinary disk. This configuration has thus been the subject of most (magneto)hydrodynamical simulations. SPH simulations of disks that are not aligned with the binary orbit show a warped disk that can precess as a rigid body, and generally suffer more gas leakage across the inner gap, modulated at twice the orbital frequency [@larwood:97; @ivanov:99; @hayasaki:12]. In many cases, accretion disks can form around the individual BHs [@dotti:07; @hayasaki:08]. Massive disks have the ability to drive the binary towards merger on relatively short time scales [@escala:05; @dotti:07; @cuadra:09] and also align the BH spins at the same time [@bogdanovic:07] (although see also [@lodato:09; @lodato:13] for a counter result). Retrograde disks may be even more efficient at shrinking the binary [@nixon:11] and they may also be quite stable [@nixon:12].
Recent simulations by [@roedig:12] show that the binary will evolve due not only to torques from the circumbinary disk, but also to the transfer of angular momentum via gas streaming onto the two black holes. They find that the binary does shrink, and eccentricity can still be excited, but not necessarily at the rates predicted by classical theory. Following merger, the circumbinary disk can also undergo significant disruption due to the gravitational recoil, as well as the sudden change in potential energy due to the mass loss from gravitational waves. These effects lead to caustics forming in the perturbed disk, in turn leading to shock heating and potentially both prompt and long-lived EM afterglows [@oneill:09; @megevand:09; @rossi:10; @corrales:10; @zanotti:10; @ponce:12; @rosotti:12; @zanotti:13]. Any spin alignment would be critically important for both the character of the prompt EM counterpart, as well as the recoil velocity [@lousto:12; @berti:12]. Due to computational limitations, it is generally only possible to include the last few orbits before merger in a full NR simulation. Since there is no time to allow the system to relax into a quasi-steady state, the specific choice of initial conditions is particularly important for these hydrodynamic merger simulations. Some insight can be gained from Newtonian simulations [@shi:12] as well as semi-analytic models [@liu:10; @rafikov:12; @shapiro:13]. If the disk decouples from the binary well before merger, the gas may be quite hot and diffuse around the black holes [@hayasaki:11]. In that case, initial conditions of uniform-density diffuse gas may be appropriate. In merger simulations by [@farris:10; @bode:10; @bogdanovic:11], the diffuse gas experiences Bondi-type accretion onto each of the SMBHs, with a bridge of gas connecting the two before merger. Shock heating of the gas could lead to a strong EM counterpart.
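A useful brightness ceiling for all of these proposed counterparts is the Eddington luminosity, $L_{\rm Edd}=4\pi G M m_p c/\sigma_T$. The snippet below evaluates it in CGS units for the SMBH masses typical of LISA sources; it is a standard textbook formula, included here for scale rather than taken from any of the cited simulations.

```python
import math

G = 6.674e-8          # cm^3 g^-1 s^-2
C = 2.998e10          # cm/s
M_P = 1.673e-24       # proton mass, g
SIGMA_T = 6.652e-25   # Thomson cross-section, cm^2
M_SUN = 1.989e33      # g

def l_eddington(m_msun):
    """Eddington luminosity (erg/s) for a black hole of m_msun solar masses,
    assuming fully ionized hydrogen (electron scattering opacity)."""
    return 4.0 * math.pi * G * (m_msun * M_SUN) * M_P * C / SIGMA_T

l7 = l_eddington(1e7)   # ~1.3e45 erg/s for a 10^7 Msun black hole
```

Since $L_{\rm Edd}$ scales linearly with mass, a counterpart radiating at even a few percent of Eddington from a $10^7\,M_\odot$ remnant would rival luminous AGN.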
As a simple estimate for the EM signal, [@bogdanovic:11] use bremsstrahlung radiation to predict roughly Eddington luminosity peaking in the hard X-ray band. The first hydrodynamic NR simulations with disk-like initial conditions were carried out by [@farris:11] by allowing the disk to relax into a quasi-steady state before turning the GR evolution on. They found disk properties qualitatively similar to classical Newtonian results, with a low-density gap threaded by accretion streams at early times, and largely evacuated at late times when the binary decouples from the disk. Due to the low density and high temperatures in the gap, they estimate the EM power will be dominated by synchrotron (peaking in the IR for $M=10^8 M_\odot$), and reach Eddington luminosity. An analogous calculation was carried out by [@bode:12], yet they find EM luminosity orders of magnitude smaller, perhaps because they do not relax the initial disk for as long. Most recently, circumbinary disk simulations have moved from purely hydrodynamic to magneto-hydrodynamic (MHD), which allows them to dispense with alpha prescriptions of viscosity and incorporate the true physical mechanism behind angular momentum transport in accretion disks: magnetic stresses and the magneto-rotational instability [@balbus:98]. Newtonian MHD simulations of circumbinary disks find large-scale $m=1$ modes growing in the outer disk, modulating the accretion flow across the gap [@shi:12]. Similar modes were seen by [@noble:12], who used an analogous procedure to that of [@farris:11] to construct a quasi-stable state before allowing the binary to merge. They find that the MHD disk is able to follow the inspiraling binary to small separations, showing little evidence for the decoupling predicted by classical disk theory. However, the simulations of [@noble:12] use a hybrid space-time based on PN theory [@gallouin:12] that breaks down close to merger.
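As an aside, the decoupling separation predicted by classical disk theory can be estimated by equating the gravitational-wave inspiral time $t_{\rm GW}=(5/256)\,a^4/(m_1 m_2 M)$ (geometric units) with the viscous inflow time $t_\nu\simeq\tfrac{2}{3}[\alpha (H/R)^2\Omega]^{-1}$. The sketch below uses illustrative values $\alpha=0.1$ and $H/R=0.1$ of our own choosing, so the result is indicative only.

```python
def decoupling_radius(m1=0.5, m2=0.5, alpha=0.1, h_over_r=0.1):
    """Separation (in units of the total mass M = m1 + m2 = 1, i.e. in
    gravitational radii GM/c^2) where t_GW = t_nu for an alpha disk.
    t_GW = (5/256) a^4 / (m1 m2 M),  t_nu = (2/3) a^{3/2} / (alpha (H/R)^2 sqrt(M))
    =>   a^{5/2} = (512/15) m1 m2 sqrt(M) / (alpha (H/R)^2)."""
    M = m1 + m2
    a52 = (512.0 / 15.0) * m1 * m2 * M**0.5 / (alpha * h_over_r**2)
    return a52 ** 0.4

a_dec = decoupling_radius()   # a few tens of GM/c^2 for these parameters
```

For these parameters the estimate gives $a \approx 37\,GM/c^2$; thinner or less viscous disks decouple at larger separations, which is why the relativistic MHD disks described here, which behave as effectively thicker and more efficient at transporting angular momentum, can track the binary much closer to merger.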
Furthermore, while fully relativistic in its MHD treatment, the individual black holes are excised from the simulation due to computational limitations, making it difficult to estimate EM signatures from the inner flow. Farris et al. [@farris:12] have been able to overcome this issue and put the BHs on the grid with the MHD fluid. They find that the disk decouples at $a \approx 10M$, followed by a decrease in luminosity before merger, and then an increase as the gap fills in and resumes normal accretion, as in [@milos:05]. Giacomazzo et al. [@giacomazzo:12] carried out MHD merger simulations with similar initial conditions to both [@palenzuela:10a] and [@bode:10], with diffuse hot gas threaded by a uniform vertical magnetic field. Unlike in the force-free approximation, the inclusion of significant gas leads to a remarkable amplification of the magnetic field, which is compressed by the accreting fluid. [@giacomazzo:12] found the B-field increased by a factor of 100 during merger, corresponding to an increase in synchrotron power by a factor of $10^4$, which could easily lead to super-Eddington luminosities from the IR through hard X-ray bands. The near future promises a self-consistent, integrated picture of binary BH-disk evolution. By chaining the various methods described above, using the results from one method as initial conditions for another, we can evolve a circumbinary disk from the parsec level through merger and beyond.

Radiation transport {#radiation}
-------------------

Even with high resolution and perfect knowledge of the initial conditions, the value of the GRMHD simulations is limited by the lack of radiation transport and accurate thermodynamics, which have only recently been incorporated into local Newtonian simulations of steady-state accretion disks [@hirose:09a; @hirose:09b].
Significant future work will be required to incorporate the radiation transport into a fully relativistic global framework, required not just for accurate modeling of the dynamics, but also for the prediction of EM signatures that might be compared directly with observations.

![\[fig:pandurata\] A preliminary calculation of the broad-band spectrum produced by the GRMHD merger of [@giacomazzo:12], sampled near the peak of gravitational wave emission. Synchrotron and bremsstrahlung seeds from the magnetized plasma are ray-traced with [Pandurata]{} [@schnittman:13b]. Inverse-Compton scattering off hot electrons in a diffuse corona gives a power-law spectrum with cut-off around $kT_e$. The total mass is $10^7 M_\odot$ and the gas has $T_e = 100$ keV and optical depth of order unity.](pandurata.eps){width="65.00000%"}

Some recent progress has been made by using the relativistic Monte Carlo ray-tracing code [Pandurata]{} as a post-processor for MHD simulations of single accretion disks [@schnittman:13a; @schnittman:13b], reproducing soft and hard X-ray spectral signatures in agreement with observations of stellar-mass black holes. Applying the same ray-tracing approach to the MHD merger simulations of [@giacomazzo:12], we can generate light curves and broad-band spectra, ranging from synchrotron emission in the IR up through inverse-Compton peaking in the X-ray. An example of such a spectrum is shown in Figure \[fig:pandurata\], corresponding to super-Eddington luminosity at the peak of the EM and GW emission. Since the simulation in [@giacomazzo:12] does not include a cooling function, we simply estimate the electron temperature as 100 keV, similar to that seen in typical AGN coronas. Future work will explore the effects of radiative cooling within the NR simulations, as well as incorporating the dynamic metric into the ray-tracing analysis.
Of course, the ultimate goal will be to directly incorporate radiation transport as a dynamical force within the GRMHD simulations. Significant progress has been made recently in developing accurate radiation transport algorithms in a fully covariant framework [@ohsuga:11; @jiang:12; @sadowski:13], and we look forward to seeing them mature to the point where they can be integrated into dynamic GRMHD codes. In addition to [Pandurata]{}, there are a number of other relativistic ray-tracing codes (e.g., [@dolence:09; @shcherbakov:11]), currently based on the Kerr metric, which may also be adapted to the dynamic space-times of merging black holes.

OBSERVATIONS: PAST, PRESENT, AND FUTURE {#observations}
=======================================

One way to categorize EM signatures is by the physical mechanism responsible for the emission: stars, hot diffuse gas, or circumbinary/accretion disks. In Figure \[source\_chart\], we show the diversity of these sources, arranged according to the spatial and time scales on which they are likely to occur [@schnittman:11]. Over the course of a typical galaxy merger, we should expect the system to evolve from the upper-left to the lower-center to the upper-right regions of the chart. Sampling over the entire observable universe, the number of objects detected in each source class should be proportional to the product of the lifetime and observable flux of that object.

![\[source\_chart\] Selection of potential EM sources, sorted by timescale, typical size of emission region, and physical mechanism (blue/[*italic*]{} = stellar; yellow/Times-Roman = accretion disk; green/[**bold**]{} = diffuse gas/miscellaneous). The evolution of the merger proceeds from the upper-left through the lower-center, to the upper-right.](ss_v2.eps){width="85.00000%"}

Note that most of these effects are fundamentally Newtonian, and many are only indirect evidence of SMBH mergers, as opposed to the prompt EM signatures described above.
Yet they are also important in understanding the complete history of binary BHs, as they are crucial for estimating the number of sources one might expect at each stage in a black hole’s evolution. If, for example, we predict a large number of bright binary quasars with separations around $0.1$ pc, and find no evidence for them in any wide-field surveys (as has been the case so far, with limited depth and temporal coverage), we would be forced to revise our theoretical models. But if the same rate calculations accurately predict the number of dual AGN with separations of $\sim 1-10$ kpc, and GW or prompt EM detections are able to confirm the number of actual mergers, then we might infer the lack of binary quasars is due to a lack of observability, as opposed to a lack of existence. The long-term goal in observing EM signatures will be to eventually fill out a plot like that of Figure \[source\_chart\], determining event rates for each source class, and checking to make sure we can construct a consistent picture of SMBH-galaxy co-evolution. This is indeed an ambitious goal, but one that has met with reasonable success in other fields, such as stellar evolution or even the fossil record of life on Earth. Stellar Signatures ------------------ On the largest scales, we have strong circumstantial evidence of supermassive BH mergers at the centers of merging galaxies. From large optical surveys of interacting galaxies out to redshifts of $z \sim 1$, we can infer that $5-10\%$ of massive galaxies are merging at any given time, and the majority of galaxies with $M_{\rm gal} \gtrsim 10^{10} M_\odot$ have experienced a major merger in the past 3 Gyr [@bell:06; @mcintosh:08; @deravel:09; @bridge:10], with even higher merger rates at redshifts $z\sim 1-3$ [@conselice:03]. At the same time, high-resolution observations of nearby galactic nuclei find that every large galaxy hosts a SMBH in its center [@kormendy:95]. 
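The steady-state counting argument above (the number of systems caught in a given phase scales with that phase's lifetime) can be sketched with a toy calculation. Every number below is invented for illustration; none comes from the surveys discussed:

```python
# Toy duty-cycle estimate: N_phase = N_galaxies * merger_rate * t_phase.
# All inputs are assumed, illustrative values.
n_galaxies      = 1e6    # galaxies in a hypothetical survey volume
merger_rate     = 1e-10  # mergers per galaxy per year (assumed)
t_dual_agn      = 1e9    # yr spent as a ~kpc-separation dual AGN (assumed)
t_binary_quasar = 1e6    # yr spent as a ~0.1 pc binary quasar (assumed)

n_dual   = n_galaxies * merger_rate * t_dual_agn       # expected dual AGN
n_binary = n_galaxies * merger_rate * t_binary_quasar  # expected close binaries
print(n_dual, n_binary, n_dual / n_binary)
```

With these assumptions the long-lived kpc-scale phase outnumbers the short sub-parsec phase by a factor of a thousand, which is the qualitative sense in which a null result for binary quasars need not imply a null result for mergers.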
Yet we see a remarkably small number of dual AGN [@komossa:03; @comerford:09], and only one known source with an actual binary system where the BHs are gravitationally bound to each other [@rodriguez:06; @rodriguez:09]. Taken together, these observations strongly suggest that when galaxies merge, the merger of their central SMBHs inevitably follows, and likely occurs on a relatively short time scale, which would explain the apparent scarcity of binary BHs (although recent estimates by [@hayasaki:10] predict as many as $10\%$ of AGNs with $M\sim 10^7 M_\odot$ might be in close binaries with $a\sim 0.01$ pc). The famous “M-sigma” relationship between the SMBH mass and the velocity dispersion of the surrounding bulge also points to a merger-driven history over a wide range of BH masses and galaxy types [@gultekin:09]. There is additional indirect evidence for SMBH mergers in the stellar distributions of galactic nuclei, with many elliptical galaxies showing light deficits (cores), which correlate strongly with the central BH mass [@kormendy:09]. The cores suggest a history of binary BHs that scour out the nuclear stars via three-body scattering [@milosavljevic:01; @milosavljevic:02; @merritt:07], or even post-merger relaxation of recoiling BHs [@merritt:04; @boylan-kolchin:04; @gualandris:08; @guedes:09]. While essentially all massive nearby galaxies appear to host central SMBHs, it is quite possible that this is not the case at larger redshifts and smaller masses, where major mergers could lead to the complete ejection of the resulting black hole via large recoils. By measuring the occupation fraction of SMBHs in distant galaxies, one could infer merger rates and the distribution of kick velocities [@schnittman:07a; @volonteri:07; @schnittman:07b; @volonteri:08a; @volonteri:10]. The occupation fraction will of course also affect the LISA event rates, especially at high redshift [@sesana:07]. 
Another indirect signature of BH mergers comes from the population of stars that remain bound to a recoiling black hole that gets ejected from a galactic nucleus [@komossa:08a; @merritt:09; @oleary:09]. These stellar systems will appear similar to globular clusters, yet with smaller spatial extent and much larger velocity dispersions, as the potential is completely dominated by the central SMBH. With multi-object spectrometers on large ground-based telescopes, searching for these stellar clusters in the Milky Way halo or nearby galaxy clusters ($d \lesssim 40$ Mpc) is technically realistic in the immediate future. Gas Signatures: Accretion Disks ------------------------------- As discussed above in section \[disk\_theory\], circumbinary disks will likely have a low-density gap within $r\approx 2a$, although they may still be able to maintain significant gas accretion across this gap, even forming individual accretion disks around each black hole. The most sophisticated GRMHD simulations suggest that this accretion can be maintained even as the binary is rapidly shrinking due to gravitational radiation [@noble:12]. If the inner disks can survive long enough, the final inspiral may lead to a rapid enhancement of accretion power as the fossil gas is plowed into the central black hole shortly before merger [@armitage:02; @chang:10]. For small values of $q$, a narrow gap could form in the inner disk, changing the AGN spectra in a potentially observable way [@gultekin:12; @mckernan:13]. Regardless of [*how*]{} the gas reaches the central BH region, the simulations described above in section \[simulations\] all seem to agree that even a modest amount of magnetized gas can lead to a strong EM signature. 
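The compact-cluster signature mentioned above (stars carried off by a recoiling SMBH) can be sized with a one-line estimate: stars within $r \sim GM/v_{\rm kick}^2$ remain bound, and the dispersion at that radius is set by the hole. The mass and kick velocity below are assumed fiducial values, not taken from the text:

```python
import math

# Size and dispersion of the star cluster bound to a recoiling SMBH.
# Fiducial inputs are assumed for illustration.
G     = 6.674e-8      # cm^3 g^-1 s^-2
M_sun = 1.989e33      # g
pc    = 3.086e18      # cm

M_bh   = 1e7 * M_sun  # recoiling BH mass (assumed)
v_kick = 1.0e8        # 1000 km/s kick, in cm/s (assumed)

r_bound    = G * M_bh / v_kick**2        # radius of the bound population, cm
r_bound_pc = r_bound / pc
sigma = math.sqrt(G * M_bh / r_bound)    # cm/s; equals v_kick by construction
print(f"bound radius ~ {r_bound_pc:.2f} pc, dispersion ~ {sigma/1e5:.0f} km/s")
```

A few hundredths of a parsec with a ~1000 km/s dispersion is indeed far more compact, and far "hotter", than any globular cluster, which is what makes these systems spectroscopically distinctive.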
If the primary energy source for heating the gas is gravitational [@vanmeter:10], then typical efficiencies will be on the order of $\sim 1-10$%, comparable to that expected for standard accretion in AGN, although the much shorter timescales could easily lead to super-Eddington transients, depending on the optical depth and cooling mechanisms of the gas [@krolik:10]. However, if the merging BHs are able to generate strong magnetic fields [@palenzuela:09; @moesta:10; @palenzuela:10b; @giacomazzo:12], then hot electrons could easily generate strong synchrotron flux, or highly relativistic jets may be launched along the resulting BH spin axis, converting matter to energy with a Lorentz boost factor of $\Gamma \gg 1$. Even with purely hydrodynamic heating, particularly bright and long-lasting afterglows may be produced in the case of large recoil velocities, which can effectively disrupt the entire disk, leading to strong shocks and dissipation [@lippai:08; @shields:08; @schnittman:08b; @megevand:09; @rossi:10; @anderson:10; @corrales:10; @tanaka:10a; @zanotti:10]. Long-lived afterglows could be discovered in existing multi-wavelength surveys, but successfully identifying them as merger remnants as opposed to obscured AGN or other bright unresolved sources would require improved pipeline analysis of literally millions of point sources, as well as extensive follow-up observations [@schnittman:08b]. For many of these large-kick systems, we may observe quasar activity for millions of years afterward, with the source displaced from the galactic center, either spatially [@kapoor:76; @loeb:07; @volonteri:08b; @civano:10; @dottori:10; @jonker:10] or spectroscopically [@bonning:07; @komossa:08c; @boroson:09; @robinson:10]. 
However, large offsets between the redshifts of quasar emission lines and their host galaxies have also been interpreted as evidence of pre-merger binary BHs [@bogdanovic:09; @dotti:09; @tang:09; @dotti:10b], as due to the large relative velocities in merging galaxies [@heckman:09; @shields:09a; @vivek:09; @decarli:10], or as "simply" extreme examples of the class of double-peaked emitters, where the line offsets are generally attributed to the disk [@gaskell:88; @eracleous:97; @shields:09b; @chornock:10; @gaskell:10]. An indirect signature for kicked BHs could potentially show up in the statistical properties of active galaxies, in particular in the relative distribution of different classes of AGN in the "unified model" paradigm [@komossa:08b; @blecha:11]. For systems that open up a gap in the circumbinary disk, another EM signature may take the form of a quasar suddenly turning on as the gas refills the gap, months to years after the BH merger [@milos:05; @shapiro:10; @tanaka:10b]. But again, these sources would be difficult to distinguish from normal AGN variability without known GW counterparts. Some limited searches for this type of variability have recently been carried out in the X-ray band [@kanner:13], but for large systematic searches, we will need targeted time-domain wide-field surveys like PTF, Pan-STARRS, and eventually LSST. One of the most valuable scientific products from these time-domain surveys will be a better understanding of the range of variability of normal AGN, which will help us distinguish when an EM signal is most likely due to a binary [@tanaka:13]. In addition to the many potential prompt and afterglow signals from merging BHs, there has also been a significant amount of theoretical and observational work focusing on the early precursors of mergers. 
Following the evolutionary trail in Figure 1, we see that shortly after a galaxy merges, dual AGN may form with typical separations of a few kpc [@komossa:03; @comerford:09], sinking to the center of the merged galaxy on a relatively short timescale ($\lesssim$ 1 Gyr) due to dynamical friction [@begelman:80]. The galaxy merger process is also expected to funnel a great deal of gas to the galactic center, in turn triggering quasar activity [@hernquist:89; @kauffmann:00; @hopkins:08; @green:10]. At separations of $\sim 1$ pc, the BH binary (now “hardened” into a gravitationally bound system) could stall, having depleted its loss cone of stellar scattering and not yet reached the point of gravitational radiation losses [@milosavljevic:03]. Gas dynamical drag from massive disks ($M_{\rm disk} \gg M_{\rm BH}$) leads to a prompt inspiral ($\sim 1-10$ Myr), in most cases able to reach sub-parsec separations, depending on the resolution of the simulation [@escala:04; @kazantzidis:05; @escala:05; @dotti:07; @cuadra:09; @dotti:09b; @dotti:10a]. At this point, a proper binary quasar is formed, with an orbital period of months to decades, which could be identified by periodic accretion [@macfadyen:08; @hayasaki:08; @haiman:09a; @haiman:09b], density waves in the disk [@hayasaki:09], or periodic red-shifted broad emission lines [@bogdanovic:08; @shen:09; @loeb:10; @montuori:11]. If these binary AGN systems do in fact exist, spectroscopic surveys should be able to identify many candidates, which may then be confirmed or ruled out with subsequent observations over relatively short timescales ($\sim 1-10$ yrs), as the line-of-sight velocities of the BHs change by an observable degree. This approach has been attempted with various initial spectroscopic surveys, but as yet, no objects have been confirmed to be binaries by multi-year spectroscopic monitoring [@boroson:09; @lauer:09; @chornock:10; @eracleous:12]. 
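The "orbital period of months to decades" quoted above follows directly from Kepler's third law at sub-parsec separations. The total mass and separations below are assumed fiducial values chosen to bracket that range:

```python
import math

# Kepler's third law, P = 2*pi*sqrt(a^3 / (G*M)), for a circular SMBH binary.
G     = 6.674e-8    # cm^3 g^-1 s^-2
M_sun = 1.989e33    # g
pc    = 3.086e18    # cm
yr    = 3.156e7     # s

def orbital_period_yr(a_pc, M_msun):
    """Orbital period in years for separation a_pc [pc], total mass M_msun."""
    a = a_pc * pc
    return 2 * math.pi * math.sqrt(a**3 / (G * M_msun * M_sun)) / yr

P_close = orbital_period_yr(0.001, 1e8)  # milliparsec separation: months
P_wide  = orbital_period_yr(0.01,  1e8)  # 0.01 pc separation: about a decade
print(f"P(0.001 pc) ~ {P_close:.2f} yr, P(0.01 pc) ~ {P_wide:.1f} yr")
```

These periods are also why multi-year spectroscopic monitoring is the natural confirmation strategy: over $\sim 1-10$ yr, a genuine milliparsec binary completes one or more orbits.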
Gas Signatures: Diffuse Gas; “Other” ------------------------------------ In addition to the many disk-related signatures, there are also a number of potential EM counterparts that are caused by the accretion of diffuse gas in the galaxy. For the Poynting flux generated by the simulations of section \[simulations\], transient bursts or modulated jets might be detected in all-sky radio surveys [@kaplan:11; @oshaughnessy:11]. For BHs that get significant kicks at the time of merger, we expect to see occasional episodes of Bondi accretion as the BH oscillates through the gravitational potential of the galaxy over millions of years, as well as off-center AGN activity [@blecha:08; @fujita:09; @guedes:10; @sijacki:10]. On larger spatial scales, the recoiling BH could also produce trails of over-density in the hot interstellar gas of elliptical galaxies [@devecchi:09]. Also on kpc–Mpc scales, X-shaped radio jets have been seen in a number of galaxies, which could possibly be due to the merger and subsequent spin-flip of the central BHs [@merritt:02]. Another potential source of EM counterparts comes not from diffuse gas or accretion disks, but from the occasional capture and tidal disruption of normal stars by the merging BHs. These tidal disruption events (TDEs), which also occur in “normal” galaxies [@rees:88; @komossa:99; @halpern:04], may be particularly easy to identify in off-center BHs following a large recoil [@komossa:08a]. TDE rates may be strongly increased prior to the merger [@chen:09; @stone:10; @seto:10; @schnittman:10; @chen:11; @wegg:11], but the actual disruption signal may be truncated by the pre-merger binary [@liu:09], and post-merger recoil may also reduce the rates [@li:12]. These TDE events are likely to be seen by the dozen in coming years with Pan-STARRS and LSST [@gezari:09]. 
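The Bondi accretion episodes mentioned above can be given an order-of-magnitude scale with $\dot{M}_{\rm B} \sim 4\pi (GM)^2 \rho / c_s^3$ (dropping the dimensionless prefactor of order unity). The gas density and sound speed below are assumed for illustration only:

```python
import math

# Order-of-magnitude Bondi accretion rate for a kicked SMBH moving through
# diffuse galactic gas. All inputs are assumed, illustrative values.
G     = 6.674e-8    # cm^3 g^-1 s^-2
M_sun = 1.989e33    # g
m_p   = 1.673e-24   # g
yr    = 3.156e7     # s

M_bh = 1e7 * M_sun  # BH mass (assumed)
rho  = 1.0 * m_p    # gas density for n ~ 1 cm^-3 (assumed)
c_s  = 3.0e6        # 30 km/s effective sound speed (assumed)

mdot = 4 * math.pi * (G * M_bh)**2 * rho / c_s**3  # g/s
mdot_msun_yr = mdot * yr / M_sun
print(f"Bondi rate ~ {mdot_msun_yr:.3f} Msun/yr")
```

With these numbers the rate comes out at a few percent of a solar mass per year, i.e. a non-negligible fraction of the Eddington rate for this mass, enough to power visible off-center AGN activity during each passage through dense gas.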
In addition to the tidal disruption scenario, in [@schnittman:10] we showed how gas or stars trapped at the stable Lagrange points in a BH binary could evolve during inspiral and eventually lead to enhanced star formation, ejected hyper-velocity stars, highly-shifted narrow emission lines, and short bursts of super-Eddington accretion coincident with the BH merger. A completely different type of EM counterpart can be seen with pulsar timing arrays (PTAs). In this technique, small time delays ($\lesssim 10$ ns) in the arrival of pulses from millisecond radio pulsars would be direct evidence of extremely low-frequency (nano-Hertz) gravitational waves from massive ($\gtrsim 10^8 M_\odot$) BH binaries [@jenet:06; @sesana:08; @sesana:09; @jenet:09; @seto:09; @pshirkov:10; @vanhaasteren:10; @sesana:10]. By cross-correlating the signals from multiple pulsars around the sky, we can effectively make use of a GW detector the size of the entire galaxy. For now, one of the main impediments to GW astronomy with pulsar timing is the relatively small number of known, stable millisecond radio pulsars. Current surveys are working to increase this number and the uniformity of their distribution on the sky [@lee:13]. Even conservative estimates suggest that PTAs are probably only about ten years away from a positive detection of the GW stochastic background signal from the ensemble of SMBH binaries throughout the universe [@sesana:13]. The probability of resolving an individual source is significantly smaller, but if one were detected, it would be close enough ($z \lesssim 1$) to allow for extensive EM follow-up, unlike many of the expected LISA sources at $z \gtrsim 5$. Also, unlike LISA sources, PTA sources would be at an earlier stage in their inspiral and thus be much longer lived, allowing for even more extensive study. 
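The $\lesssim 10$ ns figure quoted above follows from the scaling $\delta t \sim h/(2\pi f)$, with the strain of a circular binary estimated as $h \sim 4 (G\mathcal{M}_c)^{5/3} (\pi f)^{2/3} / (c^4 d)$. The chirp mass, frequency, and distance below are assumed fiducial values:

```python
import math

# Timing residual induced by a nanohertz GW from a massive SMBH binary.
# Fiducial source parameters are assumed for illustration.
G     = 6.674e-8    # cm^3 g^-1 s^-2
c     = 2.998e10    # cm/s
M_sun = 1.989e33    # g
Mpc   = 3.086e24    # cm

Mc = 1e9 * M_sun    # chirp mass (assumed)
f  = 1e-8           # GW frequency in Hz, i.e. the nanohertz band
d  = 1000 * Mpc     # 1 Gpc distance (assumed)

h  = 4 * (G * Mc)**(5/3) * (math.pi * f)**(2/3) / (c**4 * d)
dt = h / (2 * math.pi * f)   # induced timing residual, seconds
print(f"strain h ~ {h:.1e}, residual ~ {dt*1e9:.0f} ns")
```

A strain of a few $\times 10^{-16}$ at nanohertz frequencies integrates to a residual of order 10 ns, which is why sub-10-ns timing stability over many pulsars is the benchmark for PTA detection.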
A sufficiently large sample of such sources would even allow us to test whether they are evolving due to GW emission or gas-driven migration [@kocsis:11; @tanaka:12; @sesana:12] (a test that might also be done with LISA with only a single source with sufficient signal-to-noise [@yunes:11]). CONCLUSION ========== Black holes are fascinating objects. They push our intuition to the limits, and never cease to amaze us with their extreme behavior. For a high-energy theoretical astrophysicist, the only thing more exciting than a real astrophysical black hole is [*two*]{} black holes, destroying everything in their path as they spiral together towards the point of no return. Thus one can easily imagine the frustration that stems from our lack of ability to actually see such an event, despite the fact that it outshines the entire observable universe. And the path forwards does not appear to be a quick one, at least not for gravitational-wave astronomy. One important step along this path is the engagement of the broader (EM) astronomy community. Direct detection of gravitational waves will not merely be a confirmation of a century-old theory—one more feather in Einstein’s Indian chief head-dress—but the opening of a window through which we can observe the entire universe at once, eagerly listening for the next thing to go bang in the night. And when it does, all our EM eyes can swing over to watch the fireworks go off. With a tool as powerful as coordinated GW/EM observations, we will be able to answer many of the outstanding questions in astrophysics: How were the first black holes formed? Where did the first quasars come from? What is the galaxy merger rate as a function of galaxy mass, mass ratio, gas fraction, cluster environment, and redshift? What is the mass function and spin distribution of the central BHs in these merging (and non-merging) galaxies? 
What is the central environment around the BHs, prior to merger: What is the quantity and quality (temperature, density, composition) of gas? What is the stellar distribution (age, mass function, metallicity)? What are the properties of the circumbinary disk? What is the time delay between galaxy merger and BH merger? These are just a few of the mysteries that will be solved with the routine detection and characterization of SMBH mergers, may we witness them speedily in our days! We acknowledge helpful conversations with John Baker, Manuela Campanelli, Bruno Giacomazzo, Bernard Kelly, Julian Krolik, Scott Noble, and Cole Miller. References {#references .unnumbered} ========== [99]{} Abell P A Alic D, Moesta P, Rezzolla L, Zanotti O and Jaramillo J L 2012 36 Anderson M, Lehner L, Megevand M and Neilsen D 2010 044004 Apostolatos T A, Cutler C, Sussman G J and Thorne K S 1994 6274 Armitage P J and Natarajan P 2002 9–12 Armitage P J and Natarajan P 2005 921–927 Arnowitt R, Deser S and Misner C W 2008 [ *Gen. Rel. Grav.*]{} [**40**]{} 1997–2007 Artymowicz P, Clarke C J, Lubow S H, and Pringle J E 1991 L35–L38 Artymowicz P and Lubow S H 1994 651–667 Artymowicz P and Lubow S H 1996 77 Baker J G, Centrella J, Choi D-I, Koppitz M, and van Meter J R 2006 111102 Baker J G, Centrella J, Choi D-I, Koppitz M, van Meter J R and Miller M C 2006 93–96 Baker J G, Campanelli M, Pretorius F and Zlochower Y 2007 S25–S31 Baker J G, Boggs W D, Centrella J, Kelly B J, McWilliams S T, Miller M C and van Meter J R 2007 1140–1144 Balbus S A and Hawley J F 1998 [*Rev. Mod. Phys.*]{} 70 1–53 Buonanno A and Damour T 1999 084006 Baruteau C, Ramirez-Ruiz E and Masset F 2012 L65 Begelman M C, Blandford R D and Rees M J 1980 [*Nature*]{} [**287**]{} 307–309 Bekenstein J D 1973 657 Bell E F 2006 [*Astroph. 
J.*]{} [**652**]{} 270–276 Berti E, Cardoso V, Gonzalez J, Sperhake U, Hannam M, Husa S and Brugmann B 2007 064034 Berti E and Volonteri M 2008 822 Berti E, Kesden M and Sperhake U 2012 124049 Blanchet L, Quasailah M S S and Will C M 2005 508 Blanchet L 2006 [*LRR*]{} 9 4 Blandford R D and Znajek R L 1977 433 Blecha L and Loeb A 2008 1311–1325 Blecha L, Cox T J, Loeb A and Hernquist L 2011 2154–2182 Bloom J 2009 \[arXiv:0902.1527\] Bode T, Haas R, Bogdanovic T, Laguna P and Shoemaker D 2010 1117 Bode T, Bogdanovic T, Haas R, Healy J, Laguna P and Shoemaker D 2010 45 Bogdanovic T, Reynolds C S and Miller M C 2007 147 Bogdanovic T, Smith B D, Sigurdsson S and Eracleous M 2008 455–480 Bogdanovic T, Eracleous M and Sigurdsson S 2009 288–292 Bogdanovic T, Bode T, Haas R, Laguna P and Shoemaker D 2011 094020 Bonning E W, Shields G A and Salviander S 2007 13–16 Bonnor W B and Rotenberg M A [*Proc. Royal Soc. A*]{} [**265**]{} 1320 Boroson T A and Lauer T R 2009 [*Nature*]{} [ **458**]{} 53–55 Boyer R H and Lindquist R W 1967 [*J. Mod. Phys.*]{} [**8**]{} 265 Boylan-Kolchin M, Ma C-P and Quataert E 2004 [*Astroph. J. Lett.*]{} [**613**]{} 37–40 Boyle L and Kesden M 2008 024017 Boyle M, Brown D A, Kidder L E, Mroue A H, Pfeiffer H P, Scheel M A, Cook G P and Teukolsky S A 2007 124038 Bridge C R, Carlberg R G and Sullivan M 2010 [ *Astroph. J.*]{} [**709**]{} 1067–1082 Campanelli M, Lousto C, Marronetti P and Zlochower Y 111101 Campanelli M, Lousto C and Zlochower Y 041501 Campanelli M, Lousto C, Zlochower Y and Merritt D 2007 5–8 Centrella J, Baker J G, Kelly B J and van Meter J R 2010 [*Rev. Mod. 
Phys.*]{} [**82**]{} 3069–3119 Chang P, Strubbe L E, Menou K and Quataert E 2010 2007–2016 Chen X, Madau P, Sesana A and Liu F K 2009 149–152 Chen X, Sesana A, Madau P and Liu F K 2011 13 Chornock R, Bloom J S, Cenko S B, Filippenko A V, Silverman J M, Hicks M D, Lawrence K J, Mendez A J, Rafelski M and Wolfe A M 2010 39–43 Civano F [*et al.*]{} 2010 209–222 Comerford J M 2009 [*Astroph. J.*]{} [**698**]{} 956–965 Conselice C J, Bershady M A, Dickinson M and Papovich C 2003 [*Astron. J.*]{} [**126**]{} 1183–1207 Corrales L R, Haiman Z and MacFadyen A 2010 947–962 Cuadra J, Armitage P J, Alexander R D and Begelman M C 2009 1423–1432 Cutler C, Finn L S, Poisson E and Sussman G J 1993 Cutler C 1998 7089 Damour T and Gopakumar A 2006 124006 Decarli R, Falomo R, Treves A and Barattini M 2010 [*Astron. & Astroph.*]{} [**511**]{} 27 de Ravel L 2009 [*Astron. & Astroph.*]{} [**498**]{} 379–397 Devecchi B, Rasia E, Dotti M, Volonteri M and Colpi M 2009 633–640 Dolence J C, Gammie C F, Moscibrodzka M and Leung P K 387 D’Orazio D J, Haiman Z and MacFadyen A 2012 submitted \[arXiv:1210.0536\] Dotti M, Colpi M, Haardt F and Mayer L 2007 956–962 Dotti M, Montuori C, Decarli R, Volonteri M, Colpi M and Haardt F 2009 L73–L77 Dotti M, Ruszkowski M, Paredi L, Colpi M, Volonteri M and Haardt F 2009 1640–1646 Dotti M, Volonteri M, Perego A, Colpi M, Ruszkowski M and Haardt F 2010 682–690 Dotti M and Ruszkowski M 2010 37–40 Dottori H, Diaz R J, Albacete-Colombo J F and Mast D 2010 42–46 Elvis M 1978 129 Eracleous M, Halpern J P, Gilbert A M, Newman J A and Filippenko A V 1997 216 Eracleous M, Boroson T A, Halpern J P and Liu J 2012 23 Escala A, Larson R B, Coppi P S and Mardones D 2004 765–777 Escala A, Larson R B, Coppi P S and Mardones D 2005 152–166 Estabrook F B and Wahlquist H D 1975 [ *Gen. Rel. Grav.*]{} [**6**]{}, 439–447 Faller J E, Bender P L, Hall J L, Hils D and Stebbins R T 1989 [*Adv. 
Space Res.*]{} [**9**]{} 107–111 Farris B D, Liu Y-K and Shapiro S L 2010 084008 Farris B D, Liu Y-K and Shapiro S L 2011 024024 Farris B D, Gold R, Pschalidis V, Etienne Z B and Shapiro S L 2012 221102 Favata M, Hughes S A and Holz D E 2004 5 Flanagan E E and Hughes S A 1998 4535 Fujita Y 2009 1050–1057 Gallouin L, Nakano H, Yunes N and Campanelli M 2012 235013 Gaskell M C 1988 [*LNP*]{} [**307**]{} 61 Gaskell M C 2010 [*Nature*]{} [**463**]{} E1 Gezari S 2009 1367–1379 698, 1367. Giacomazzo B, Baker J G, Miller M C, Reynolds C S and van Meter J R 2012 15 Goldreich P and Tremaine S 1979 857–871 Gonzalez J A, Hannam M, Sperhake U, Brugmann B and Husa S 2007 091101 Gonzalez J A, Hannam M, Sperhake U, Brugmann B and Husa S 2007 231101 Green P J, Myers A D, Barkhouse W A, Mulchaey J S, Bennert V N, Cox T J and Aldcroft T L 2010 [*Astroph. J.*]{} [ **710**]{} 1578–1588 Gualandris A and Merritt D 2008 [*Astroph. J.*]{} [**678**]{} 780–797 Guedes J, Madau P, Kuhlen M, Diemand J and Zemp M 2009 [*Astroph. J.*]{} [**702**]{} 890–900 Guedes J, Madau P, Mayer L and Callegari S 2011 125 Gultekin K et al. 2009 198–221 Gultekin K and Miller J M 2012 90 Gunther R and Kley W 2002 Å[**387**]{} 550 Gunther R, Schafer C and Kley W 2004 Å[**423**]{} 559 Hahn S G and Lindquist R W 1964 [*Ann. Phys.*]{} [**29**]{} 304 Haiman Z, Kocsis B, Menou K, Lippai Z and Frei Z 2009 [*Class. Quant. 
Grav.*]{} [**26**]{} 094032 Haiman Z, Kocsis B and Menou K 2009 1952–1969 Halpern J P, Gezari S and Komossa S 2004 572 Hayasaki K, Mineshige S and Ho L C 2008 1134–1140 Hayasaki K and Okazaki A T 2009 5 Hayasaki K, Ueda Y and Isobe N 2010, [*PASJ*]{} [**62**]{} 1351 Hayasaki K 2011 14 5 Hayasaki K, Saito H and Mineshige S 2012 [ *PASJ*]{} submitted \[arXiv:1211.5137\] Heckman T M, Krolik J H, Moran S M, Schnittman J D and Gezari S 2009 363–367 Hernquist L 1989 [*Nature*]{} [**340**]{} 687 Herrmann F, Hinder I, Shoemaker D, Laguna P and Matzner R A 2007 430–436 Herrmann F, Hinder I, Shoemaker D and Laguna P 2007 S33 Hinder I, Vaishnav B, Herrmann F, Shoemaker D M and Laguna P 2008 081502 Hirose S, Krolik J H and Blaes O 2009 16 Hirose S, Blaes O and Krolik J H 2009 781–788 Hopkins P F, Hernquist L, Cox T J and Keres D 2008 356 Ivanov P B, Papaloizou J C B and Polnarev A G 1999 79 Jaramillo J L, Macedo R P, Moesta P and Rezzolla L 2012 084030 Jenet F A 2006 1571–1576 Jenet F A 2009 \[arXiv:0909.1058\] Jiang Y-F, Stone J M and Davis S W 2012 14 Jonker P G, Torres M A P, Fabian A C, Heida M, Miniutti G and Pooley D 2010 645–650 Kanner J, Baker J G, Blackburn L, Camp J, Mooley K, Mushotzky R and Ptak A 2013 submitted \[arXiv:1305.5874\] Kaplan D L, O’Shaugnessy R, Sesana A and Volonteri M 2011 37 Kapoor R C 1976 Pramãna [**7**]{} 334–343 Kauffmann G and Haehnelt M 2000 576 Kazantzidis S, Mayer L, Colpi M, Madau P, Debattista V P, Wadsley J, Stadel J, Quinn T and Moore B 2005 L67–L70 Kidder L E 1995, 821 Kocsis B, Frei Z, Haiman Z and Menou K 2006 27–37 Kocsis B, Haiman Z and Menou K 2008 870–887 Kocsis B and Sesana A 2011 1467 Kocsis B, Haiman Z and Loeb A 2012 2660 Kocsis B, Haiman Z and Loeb A 2012 2680 Komossa S and Bode N 1999 [*Astron. & Astroph.*]{} [**343**]{} 775–787 Komossa S, Burwitz V, Hasinger G, Predehl P, Kaastra J S and Ikebe Y 2003 [*Astroph. J. 
Lett.*]{} [**582**]{} 15–19 Komossa S and Merritt D 2008a 21–24 Komossa S and Merritt D 2008b 89–92 Komossa S, Zhou H and Lu H 2008 81–84 Koppitz M, Pollney D, Reisswig C, Rezzolla L, Thornburg J, Diener P and Schnetter E 2007 041102 Kormendy J and Richstone D 1995 [*Ann. Rev. Astron. & Astroph.*]{} [**33**]{} 581 Kormendy J, Fisher D B, Cornell M E and Bender R 2009 [*Astroph. J. Suppl.*]{} [**182**]{} 216–309; Kormendy J and Bender R 2009 [*Astroph. J. Lett.*]{} [**691**]{} 142–146 Krolik J H 2010 774–779 Lang R N and Hughes S A 2006 122001 Lang R N and Hughes S A 2008 1184–1200 Lang R N and Hughes S A 2009 [ *Class. Quant. Grav.*]{} [**26**]{} 094035 Larwood J D and Papaloizou J C B 1997 288 Lauer T A and Boroson T R 2009 930–938 Lee K J 2013 688 Li S, Liu F K, Berczik P, Chen X and Sperzem R 2012 65 Lippai Z, Frei Z and Haiman Z 2008 5–8 LISA Assessment Study Report (“Yellow Book”) 2011 ESA/SRE(2011)3 Liu F K, Li S and Chen X 2009 133–137 Liu Y T and Shapiro S L 2010 123011 Lodato G, Nayakshin S, King A R and Pringle J E 2009 1392 Lodato G and Gerosa D 2013 30 Loeb A 2007 041103 Loeb A 2010 047503 Lousto C O and Zlochower Y 2009 064018 Lousto C O, Zlochower Y, Dotti M and Volonteri M 2012 084015 Lynden-Bell D 1969 [*Nature*]{} [**223**]{} 690 MacFadyen A I and Milosavljevic M 2008 83–93 McIntosh D H, Guo Y, Hertzberg J, Katz N, Mo H J, van den Bosch F C, and Yang X 2008 [*Mon. Not. Roy. Astron. Soc.*]{} [**388**]{} 1537–1556 McKernan B, Ford K E S, Kocsis B and Haiman Z 2013 1468 McWilliams S T, Thorpe J I, Baker J G and Kelly B J 2010 064014 Megevand M, Anderson M, Frank J, Hirschmann E W, Lehner L, Liebling S L, Motl P M and Neilsen D 2009 024012 Merrit D and Ekers R D 2002 [*Science*]{} [ **297**]{} 1310–1313 Merritt D, Milosavljevic M, Favata M, Hughes S A and Holz D E 2004 [*Astroph. J. Lett.*]{} [**607**]{} 9–12 Merritt D and Milosavljevic M 2005 [*LRR*]{} [ **8**]{} 8 Merritt D, Mikkola S and Szell A 2007 [ *Astroph. 
J.*]{} [**671**]{} 53–72 Merritt D, Schnittman J D and Komossa S 2009 1690–1710 Milosavljevic M and Merritt D 2001 [ *Astroph. J.*]{} [**563**]{} 34–62 Milosavljevic M, Merritt D, Rest A and van den Bosch F C 2002 [*Mon. Not. Roy. Astron. Soc.*]{} [**331**]{} 51–55 Milosavljevic M and Merritt D 2003 [ *Astroph. J.*]{} [**596**]{} 860–878 Milosavljevic M and Phinney E S 2005 93–96 Moesta P, Palenzuela C, Rezzolla L, Lehner L, Yoshida S and Pollney D 2010 064017 Moesta P, Alic D, Rezzolla L, Zanotti O and Palenzuela C 2012 32 Montuori C, Dotti M, Colpi M, Decarli R and Haardt F 2011 26 Murray S S 2012 [*SPIE*]{} [**8443**]{} 1L Nixon C J, King A R and Pringle J E 2011 66 Nixon C J 2012 2597 Noble S C, Mundim B C, Nakano H, Krolik J H, Campanelli M, Zlochower Y and Yunes N 2012 51 Novikov I D and Thorne K D 1973 in [*Black Holes*]{} ed. C DeWitt & B S DeWitt (New York: Gordon and Breach) O’Leary R M and Loeb A 2009 781–786 O’Neill S M, Miller M C, Bogdanovic T, Reynolds C S and Schnittman J D 2009 859–871 O’Shaughnessy R, Kaplan D L, Sesana A and Kamble A 2011 136 Oda M, Gorenstein P, Gursky H, Kellogg E, Schreier E, Tanenbaum H and Giacconi R 1971 1 Ohsuga K and Mineshige S 2011 2 Ostriker J P and Tremaine S D 1975 113–117 Ostriker J P and Hausman M A 1977 125–129 Palenzuela C, Anderson M, Lehner L, Liebling S L and Neilsen D 2009 081101 Palenzuela C, Lehner L and Yoshida S 2010 084007 Palenzuela C, Garrett T, Lehner L and Liebling S L 2010 044045 Palenzuela C, Lehner L and Liebling S L 2010 [*Science*]{} [**329**]{} 927 Pan Y, Buonanno A, Boyle M, Buchman L T, Kidder L E, Pfeiffer H P and Scheel M A 2011 124052 Peres A 1962 [*Phys. Rev.*]{} [**128**]{} 2471 Peters P C and Mathews J 1963 [*Phys. 
Rev.*]{} [**131**]{} 435–440 Pnce M, Faber J A and Lombardi J C 2012 71 Pretorius F 2005 121101 Pretorius F 2007 \[arXiv:0710.1338\] Pringle J E 1991 754–259 Pshirkov M S, Baskaran D and Postnov K A 2010 417–423 Racine E, Buonanno A and Kidder L 2009 044010 Rafikov R R 2012 submitted \[arXiv:1205.5017\] Rees M J 1984 [*Ann. Rev. Astron. Astroph.*]{} [**22**]{} 471 Rees M J 1988 [*Nature*]{} [**333**]{} 523–528 Rezzolla L, Macedo R P and Jaramillo J L 2010 221101 Rezzolla L 2013 \[arXiv:1303.6464\] Robinson A, Young S, Axon D J, Kharb P and Smith J E 2010 123–126 Rodriguez C, Taylor G B, Zavala R T, Peck A B, Pollack L K and Romani R W 2006 [*Astroph. J.*]{} [**646**]{} 49–60 Rodriguez C, Taylor G B, Zavala R T, Pihlstrom Y M and Peck A B 2009 37 Roedig C, Dotti M, Sesana A, Cuadra J and Colpi M 2011 3033 Roedig C, Sesana A, Dotti M, Cuadra J, Amaro-Seoane P and Haardt F 2012, Å[**545**]{}, A127 Rosotti G P, Lodato G and Price D J 2012 1958 Rossi E M, Lodato G, Armitage P J, Pringle J E and King A R 2010 2021–2035 Sadowski A, Narayan R, Tchekhovskoy A and Zhu Y 2013 3533 Saltpeter E E 1964 435 Schnittman J D and Buonanno A 2007 [ *Astroph. J. Lett.*]{} [**662**]{} 63–66 Schnittman J D 2007 [ *Astroph. J. Lett.*]{} [**667**]{} 133–136 Schnittman J D, Buonanno A, van Meter J R, Baker J G, Boggs W D, Centrella J, Kelly B J and McWilliams S T 2008 044031 Schnittman J D and Krolik J H 835–844 Schnittman J D 2010 39 Schnittman J D 2011 094021 Schnittman J D, Krolik J H and Noble S C 2013 156 Schnittman J D and Krolik J H 2013 submitted \[arXiv:1302.3214\] Schwarzschild K 1916 [*Prus. Acad. 
--- author: - '[^1] for the GRAND collaboration' bibliography: - 'biblio.bib' title: The GRAND project and GRANDProto300 experiment --- Introduction {#intro} ============ The Giant Radio Array for Neutrino Detection (GRAND) will be a network of 20 subarrays of $\sim$10000 radio antennas each, deployed in mountainous and radio-quiet sites around the world, totaling a combined area of 200000km$^2$. It will form an observatory of unprecedented sensitivity for ultra-high-energy cosmic particles (neutrinos, cosmic rays and gamma rays). Here we first detail the GRAND detection concept, its science case and experimental challenges. In a second part we detail the GRANDProto300 experiment, a pathfinder for GRAND, but also an appealing scientific project in its own right. The GRAND project {#GRAND} ================= Detection concept {#concept} ----------------- Principles of radio detection of air showers are detailed in [@Huege:2016veh; @Schroder:2016hrv], and the GRAND detection concept is presented in [@WP]. It is briefly summarized below, and also illustrated in figure \[principle\]. ![GRAND detection principle for cosmic rays or gammas (detection of the EAS induced by the direct interaction of the cosmic particles in the atmosphere) and neutrinos (underground interaction with subsequent decay of the tau lepton in the atmosphere)[]{data-label="principle"}](principle.pdf){width="10cm"} When it enters the Earth's atmosphere, a cosmic particle may interact with air molecules to induce an extensive air shower (EAS), which in turn generates electromagnetic radiation, mainly through the deflection by the Earth's magnetic field of the charged particles composing the shower[@Kahn206].
This so-called [*geomagnetic effect*]{} is coherent in the tens-of-MHz frequency range, generating short (<1$\mu$s) transient electromagnetic pulses, with amplitudes large enough to allow for the detection of the EAS[@Allan:1971; @Ardouin:2005qe; @Falcke:2005tc] if the primary particle’s energy is typically above 10$^{17}$eV. Cosmic neutrinos, however, have a very small probability of being detected through this process because of their tiny interaction cross-section with air particles. Yet a tau neutrino can produce a tau lepton under the Earth's surface through charged-current interactions. Thanks to its large range in rock and short lifetime, the tau may emerge into the atmosphere and eventually decay to induce a detectable EAS[@Fargion:2000iz]. The Earth's opacity to neutrinos of energies above 10$^{17}$eV however implies that only Earth-skimming trajectories allow for such a scenario. This peculiarity, which may at first seem a handicap for detection, turns out to be an asset for radio detection: because of relativistic effects, the radio emission is strongly beamed forward in a cone whose opening is given by the Cherenkov angle $\theta_C\sim1^{\circ}$. For air-shower trajectories close to the zenith, this induces a radio footprint on the ground a few hundred meters in diameter, requiring a large density of antennas for a good sampling of the signal. For very inclined trajectories, however, the larger distance of the antennas to the emission zone and the projection effect of the signal on the ground combine to generate a much larger footprint[@Huege:2016veh]. Targeting air showers with very inclined trajectories —either up-going for Earth-skimming neutrinos, or down-going for cosmic rays and gammas— makes it possible to detect them with a sparse array (typically one antenna per km$^2$). This is a key feature of the GRAND detector. Another driver in GRAND is to aim at mountainous areas with favorable topographies as deployment sites.
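The footprint sizes quoted above follow from simple cone geometry. The sketch below is a crude numerical illustration only: the function name and the flat-ground projection model are our assumptions, not part of the GRAND simulation chain.

```python
import math

THETA_C_DEG = 1.0  # Cherenkov opening angle in air, ~1 degree (from the text)

def footprint_diameter_m(distance_to_emission_m, ground_incidence_deg):
    """Crude estimate of the radio footprint: a cone of half-angle ~1 deg
    illuminates a spot of diameter 2*d*tan(theta_C), stretched by roughly
    1/cos(incidence) when projected onto the ground."""
    cone_diameter = 2 * distance_to_emission_m * math.tan(math.radians(THETA_C_DEG))
    return cone_diameter / math.cos(math.radians(ground_incidence_deg))

# Near-vertical shower seen from ~10 km: a footprint of a few hundred meters.
# Very inclined shower (incidence ~85 deg) seen from ~100 km: tens of km,
# which is what makes a sparse array of ~1 antenna per km^2 viable.
```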
An ideal topography consists of two opposing mountain ranges, separated by a few tens of kilometers. One range acts as a target for neutrino interactions, while the other acts as a screen on which the ensuing radio signal is projected. Simulations (see section \[simu\]) show that such configurations result in a detection efficiency improved by a factor $\sim$4 compared to a flat site. Detector performance {#simu} --------------------- ### Neutrino sensitivity ![Left: NEC4 simulation of the HorizonAntenna gain as a function of direction. Right: One simulated neutrino event displayed over the ground topography of the simulated area. The large red circle shows the position of the tau production and the red star, its decay. The dotted line indicates the shower trajectory. Circles mark the positions of triggered antennas. The color code represents the peak-to-peak voltage amplitude of the antennas. The limits of the simulated detector are indicated with a black line. []{data-label="sim"}](HAgain.png "fig:"){width="5cm"} ![Left: NEC4 simulation of the HorizonAntenna gain as a function of direction. Right: One simulated neutrino event displayed over the ground topography of the simulated area. The large red circle shows the position of the tau production and the red star, its decay. The dotted line indicates the shower trajectory. Circles mark the positions of triggered antennas. The color code represents the peak-to-peak voltage amplitude of the antennas. The limits of the simulated detector are indicated with a black line. []{data-label="sim"}](exHS1.png "fig:"){width="7cm"} In order to estimate the potential of the GRAND detector for the detection of cosmic neutrinos, an end-to-end simulation chain was developed, composed mostly of computationally efficient tools designed to take into account the very large size of the detector and its complex topography.
- The first element of the simulation chain is DANTON[@DANTON:note], a 3-D Monte-Carlo simulation of the neutrino propagation and interactions embedded in a realistic implementation of the ground topography. A back-tracking mode is also implemented in DANTON, reducing the computation time by several orders of magnitude for neutrino energies below 10$^{18}$eV. - The radio emission induced by each simulated tau decay is computed in our simulation chain through a semi-analytical treatment called [*radiomorphing*]{}. This method, detailed in [@Zilles:2018kwq], makes it possible to determine the radio signal induced by any shower at any location through analytical operations on simulated radio signals from one single reference shower. Radiomorphing provides a gain of two orders of magnitude in computation time compared to a standard simulation, for a relative difference of the signal amplitude below 10% on average. - A specific design was proposed for the GRAND antenna. This so-called [*HorizonAntenna*]{} is composed of 3 arms, allowing for a complete determination of the wave polarization. Placed 5m above ground, with a design optimized for the 50-200MHz frequency range, its sensitivity to horizontal signals is excellent. The HorizonAntenna response to EAS radio signals was simulated with the NEC4 code[@NEC4] (see figure \[sim\]) and integrated into the simulation chain. - The final step of the treatment is the trigger simulation: it requires that, for at least 5 units in one 9-antenna square cell, the peak-to-peak amplitude of the voltage signal at the output of the antennas be larger than 30$\mu$V (twice the expected stationary background noise) in an aggressive scenario, or 75$\mu$V (five times the expected stationary background noise) in a conservative one. This simulation chain was run over a 10000km$^2$ area, with 10000 antennas deployed along a square grid of 1km step size in an area of the TianShan mountain range, in the XinJiang Autonomous Province (China).
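The 5-out-of-9 trigger condition described above can be sketched as follows. This is a minimal illustration; the function name and the microvolt convention are ours, not the collaboration's code.

```python
# Threshold scenarios from the text: 2x and 5x the expected stationary noise.
AGGRESSIVE_UV = 30.0    # microvolts
CONSERVATIVE_UV = 75.0  # microvolts

def cell_triggers(peak_to_peak_uv, threshold_uv=CONSERVATIVE_UV):
    """A 9-antenna (3x3) square cell triggers when at least 5 of its units
    record a peak-to-peak voltage above the chosen threshold."""
    return sum(v > threshold_uv for v in peak_to_peak_uv) >= 5

# Example: 5 of 9 antennas above 75 uV -> the cell triggers.
signals = [80, 90, 100, 76, 77, 10, 20, 30, 40]
```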
This setup is displayed in figure \[sim\] together with one simulated event. The 3-year 90% C.L. sensitivity limit derived from this simulation is presented in figure \[limit\], and the implications for the science goals achievable by GRAND are detailed in section \[science\]. ### Reconstruction performance {#recons} Reconstruction of the direction of origin, energy and nature of the primary particle from radio data has now reached performance comparable to standard techniques [@Buitink:2016nkf; @Bezyazeekov:2015rpa; @Aab:2016eeq]. A key issue for GRAND will be to achieve similar results for nearly horizontal air showers detected with radio antennas only. Demonstrating this will be one of the goals of the GRANDProto300 experiment (see section \[GP300\]). Before that, simulation studies are used to evaluate and optimize the reconstruction performance of GRAND. In particular, we reconstructed the direction of origin of neutrino-induced air showers simulated with the ZHAireS code[@Zhaires:2012] over a GRAND-like array deployed on a toy-model topography, corresponding to a plane detector area facing the shower with a constant slope of 10$^{\circ}$ w.r.t. the horizontal. Applying a basic plane-wave-hypothesis reconstruction to these data yields an average reconstruction error of only a fraction of a degree. The different antenna heights make it possible to achieve such a resolution even for horizontal trajectories. A hyperbolic wavefront is presently being implemented, and may yield even better results, according to our understanding of the air-shower radio wavefront structure[@Corstanje:2014waa]. The method of [@Buitink:2014eqa] has been implemented in order to reconstruct the maximum of development of cosmic-ray-induced showers on a GRAND-like array. It yields resolutions on $X_{max}$ better than 40g$\cdot$cm$^{-2}$ provided that the shower core position is known.
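A plane-wave direction fit of the kind mentioned above reduces to a linear least-squares problem. The sketch below is illustrative only (the actual GRAND reconstruction is more sophisticated, e.g. using hyperbolic wavefronts): it fits $c\,t_i = c\,t_0 - \mathbf{u}\cdot\mathbf{x}_i$ for the unit vector $\mathbf{u}$ pointing back toward the source.

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def plane_wave_direction(positions, times):
    """Least-squares plane-wave fit: c*t_i = c*t0 - u . x_i.
    positions: (N, 3) antenna coordinates in meters; times: (N,) arrival
    times in seconds. Returns the unit vector u toward the source."""
    positions = np.asarray(positions, dtype=float)
    times = np.asarray(times, dtype=float)
    # Unknowns: [c*t0, u_x, u_y, u_z]; design matrix [1, -x_i].
    A = np.hstack([np.ones((len(times), 1)), -positions])
    b = C * times
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    u = coef[1:]
    return u / np.linalg.norm(u)
```

With exact plane-wave timing and non-degenerate antenna positions, the fit recovers the simulated direction to machine precision.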
These two preliminary results are encouraging signs that the large size of GRAND events compensates for the handicaps of a sparse array and very large zenith angles. Reconstruction methods are presently being refined with a goal of 0.1$^{\circ}$ for the angular resolution and 20g$\cdot$cm$^{-2}$ for the $X_{max}$ resolution. GRAND science case {#science} ------------------ ### Ultra-high energy neutrinos The sources, production and nature of the particles with the highest energies in the Universe are still a mystery, despite decades-long experimental efforts. Ultra-high-energy neutrinos could be an extremely valuable tool to answer this question: thanks to their very low interaction probability and neutral charge, these particles travel unimpeded from their sources over cosmological distances. Besides, their production is intrinsically linked to that of Ultra-High-Energy Cosmic Rays (UHECRs), be it at the source itself or along the UHECRs' journey through the Universe. In the latter case, UHECR interactions with the cosmic microwave background produce [*cosmogenic*]{} neutrinos[@GZK], whose differential flux depends mostly on the cosmological evolution of the sources and on the UHECR composition and energy spectrum. A collection of 20 subarrays such as the one used in the simulation presented in section \[simu\] would give GRAND a neutrino sensitivity sufficient to probe the full range of expected fluxes of cosmogenic neutrinos[@AlvesBatista:2018zui] within 10 years, as illustrated in figure \[limit\]. For several source candidates of UHECRs, models predict fluxes of neutrinos larger than those of cosmogenic origin (see figure \[limit\]). The GRAND sensitivity will thus make it possible to probe neutrino production at the source, while its expected excellent angular resolution (see section \[recons\]) will open the path to ultra-high-energy neutrino astronomy.
![Left: cosmogenic neutrino flux expectations derived from the latest results of the Pierre Auger Observatory [@AlvesBatista:2018zui], superimposed on the differential sensitivity limit derived from the 10000-antenna simulation presented in section \[simu\] (“GRAND10k”, purple area) and the extrapolation for the 20-times-larger GRAND array (“GRAND200k”, orange line). Right: expected neutrino fluxes produced at the source for different types of sources, superimposed on the GRAND10k and GRAND200k 3-year sensitivity limits. Taken from [@WP].[]{data-label="limit"}](sensitivity.pdf "fig:"){width="6cm"} ![Left: cosmogenic neutrino flux expectations derived from the latest results of the Pierre Auger Observatory [@AlvesBatista:2018zui], superimposed on the differential sensitivity limit derived from the 10000-antenna simulation presented in section \[simu\] (“GRAND10k”, purple area) and the extrapolation for the 20-times-larger GRAND array (“GRAND200k”, orange line). Right: expected neutrino fluxes produced at the source for different types of sources, superimposed on the GRAND10k and GRAND200k 3-year sensitivity limits. Taken from [@WP].[]{data-label="limit"}](neutrinoFromSources.pdf "fig:"){width="6cm"} ### UHECRs and gamma rays According to preliminary simulations, GRAND will benefit from a 100% detection efficiency for cosmic rays with zenith angles larger than 70$^{\circ}$ and energies above 10$^{18}$eV[@WP]. This will yield an exposure 15 times larger than that of the Pierre Auger Observatory. This, together with a field of view covering both the Northern and Southern hemispheres, will make GRAND an excellent tool to study the end of the UHECR spectrum. If a 20g$\cdot$cm$^{-2}$ resolution can be achieved on the $X_{max}$ measurement — a realistic goal given present experimental results [@Buitink:2016nkf; @Bezyazeekov:2015rpa] and preliminary simulation results, see section \[recons\]— GRAND will also be able to discriminate UHECRs of hadronic origin from UHE gamma rays.
Non-detection of cosmogenic gamma rays produced by photo-pion interactions of UHECRs with the CMB within 3 years of operation of GRAND would then exclude a light composition of UHECRs, while detection of UHE gamma rays from nearby sources would, on the other hand, probe the diffuse cosmic radio background[@Fixsen:1998kq], for instance. ### Fast Radio Bursts By incoherently adding the signals from the large number of antennas in a subarray, GRAND will also be able to detect a 30 Jy fast radio burst with a flat frequency spectrum[@WP]. Moreover, as incoherent summing preserves the wide field of view of a single antenna, GRAND may be able to detect several hundred FRBs per day. In addition, detection of a single FRB by several subarrays would make it possible to reconstruct the direction of origin of the signal. The path to GRAND ----------------- GRAND will perform standalone radio detection of air showers. This is a challenge, as measurements have shown that, outside polar areas, the rate of transient radio signals due to background sources (high-voltage power lines or transformers, planes, thunderstorms, etc.) dominates that of EAS in the tens-of-MHz frequency range by several orders of magnitude, a statement that will be even truer for neutrino-induced showers. Two questions naturally arise from this observation: how to collect and identify EAS events, and how to single out neutrino-induced events among them? Below we explain how we expect to tackle these two issues. ### Neutrino event identification At energies beyond 10$^{16}$eV, cosmic rays are expected to induce air showers at a rate larger than neutrinos by several orders of magnitude. Yet selecting events with trajectories reconstructed below the horizon will reject a huge fraction of them, while measuring the $X_{max}$ position (more than 100km from the ground for a cosmic ray with zenith angle larger than 80$^{\circ}$) will provide another very powerful discrimination tool.
Reconstruction performance will therefore be of key importance to successfully identify neutrino-induced events. ### Standalone radio detection of air showers {#autonomous} We believe that achieving a $\sim$100% detection efficiency for EAS combined with a $\sim$100% rejection of background is possible, taking into account the following elements: - The quality of the radio environment of the detection site is of paramount importance: it is first necessary that the frequency spectrum be clean in the targeted frequency range. For the site selection of the GRANDProto300 experiment, for example (see section \[GP300\]), we require a maximum of three continuous-wave emitters, and an integrated power of the spectrum no larger than twice the irreducible level due to Galactic + thermal ground emission over the targeted frequency range. More importantly, the rate of transient events should be evaluated with the same antenna as the one that will be used in the setup before considering deployment. In the case of GRANDProto300, a site with a trigger rate below 1kHz for a 6$\sigma$ threshold is considered acceptable, where $\sigma$ is the stationary noise level in the 50-200MHz range. Nine distinct sites have been evaluated for this experiment: six of them comply with these requirements, in remote, mountainous areas of Western or Southern China, where major background sources are screened by mountains. - Even in the quietest sites, the rate of transient radio events is typically a few tens to hundreds of Hz[@Charrier:2018fle]. The DAQ system of the experiment has to be designed accordingly, in order to guarantee a 100% live time of the acquisition system. - Digital radio detection of EAS has been an active field for two decades now, but no large-scale effort at autonomous radio detection has been initiated yet: trigger algorithms remain extremely basic, very often limited to signal-over-threshold logic, and above all, self-triggered radio data is very scarce.
We believe that much better can be done. To achieve this, a dedicated self-triggered radio detector has to be set up and used as a testbench to develop, test and optimize autonomous radio detection. This is one of the primary goals of the GRANDProto300 experiment, presented in the next section. - Once data from all contributing antennas have been collected and written to disk, it will be possible to discriminate background events from EAS in an offline analysis based on the distinct features of the two types of signals. In particular, it has already been shown that EAS radio signatures clearly differ from background events: for example, their time traces are usually much shorter[@Barwick:2016], while their amplitude[@Nelles:2014dja] and polarization[@CODALEMA:2017] patterns on the ground are very specific. Taking advantage of these specific features thus makes it possible to perform a very efficient rejection of background signals from radio data only[@Charrier:2018fle]. Identification of EAS from radio-only data is therefore not a physics issue, but a technical one. The GRANDProto300 experiment {#GP300} ============================ The GRANDProto300 (GP300) experiment will be an array of 300 detection units deployed in 2020 over 200km$^2$ in a $\sim$3000m-high, radio-quiet area on the borders of the Gobi desert and the Tibetan plateau. GP300 will be a pathfinder for the next stages of the GRAND project, and in particular for GRAND10k —the first GRAND subarray of 10000 units— expected to be deployed in 2025. The design of the GRAND detection unit and DAQ system will be finalized during the GRAND10k phase, and duplicated over the 20 subarrays which will compose the 200000-km$^2$-large GRAND detector. The deployment of GRAND is expected to be finalized in the early 2030s. Below we present the GP300 setup and its science goals. These goals are specific to the experiment, making GP300 a standalone project, meaningful even outside of the GRAND sequence.
The GRANDProto300 setup ----------------------- The details of the GP300 detector layout will be defined once the exact location of the detector is set, in order to take maximal advantage of its topography. Its baseline design consists of three nested arrays following a square grid of 190 antennas with 1km step size —the GRAND detector layout— 500m step size (90 antennas) and 200m step size (30 antennas). The GP300 detection unit will be composed of the [*HorizonAntenna*]{} specifically designed for GRAND (see section \[simu\]) and successfully tested on site in summer 2018. This antenna will be placed atop a 5m wooden pole such as those used for electrical power transmission. The signals from the three antenna arms will be fed into 50-200MHz passive filters followed by 500MHz, 14-bit ADCs. An FPGA combined with a CPU will allow for local on-the-fly processing of the data (filtering of continuous-wave emitters, rejection of transient signals with time traces not compatible with EAS, etc.). The timestamps of selected transient signals will be obtained from GPS and sent to the central DAQ over WiFi. For transient signals triggering five units or more, 3-$\mu$s-long time traces will then be retrieved and written to disk for offline analysis. Extrapolations of the TREND results indicate that the rate of such events will be a few mHz only[@Martineau-Huynh:2017bpw], but the DAQ system will be able to handle a 10Hz event rate, a safe margin. The power consumption of the detection unit will be 10W. A 100W solar panel will thus allow for 24/7 acquisition. Science goals of the GRANDProto300 experiment {#GP300science} --------------------------------------------- GP300 will primarily serve as a demonstrator for the GRAND detection concept.
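As a rough cross-check of the DAQ numbers quoted in the setup description, the raw data rate per detection unit can be estimated from the trace length, sampling rate, bit depth, and channel count. This is a back-of-envelope sketch only; packing and header overheads are ignored.

```python
# Numbers quoted in the text for one GP300 detection unit.
SAMPLING_HZ = 500e6      # ADC sampling rate
TRACE_S = 3e-6           # recorded trace length
BITS_PER_SAMPLE = 14
CHANNELS = 3             # three antenna arms
EVENT_RATE_HZ = 10       # budgeted DAQ event rate

samples_per_trace = int(SAMPLING_HZ * TRACE_S)       # 1500 samples
bits_per_event = samples_per_trace * BITS_PER_SAMPLE * CHANNELS
bytes_per_event = bits_per_event / 8                 # ~7.9 kB per event
rate_bytes_s = bytes_per_event * EVENT_RATE_HZ       # ~79 kB/s per unit
```

Even at the safe-margin 10 Hz rate, the per-unit throughput stays below $\sim$100 kB/s, which is comfortably within reach of a WiFi link to the central DAQ.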
In a first phase, the large amount of self-triggered data collected with GP300 will be used to refine and optimize the offline algorithms identifying EAS from the shapes of their time traces, their amplitude and polarization patterns, or their frequency content (see section \[autonomous\]). The algorithms developed with simulated data to reconstruct the primary-particle properties will be applied to the selected events in order to build the sky distribution, energy spectrum and elongation rate of the candidate EAS. GP300 being too small to allow for the detection of neutrinos, these events should correspond to cosmic rays in the energy range $10^{16.5}-10^{18}$eV, and the comparison of these results to those obtained by other experiments will make it possible to quantify the performance of GP300 in the selection and reconstruction of EAS. In a second phase, EAS selection algorithms —based on the offline treatments performed in the first phase, or on other promising techniques, such as machine learning[@FuhrerARENA:2018]— will be tested online, in order to minimize the rate of events transferred to the DAQ in view of GRAND10k. The GP300 radio array will be complemented by an array of particle detectors, optimized for the detection of air showers with energies above 10$^{16.5}$eV and zenith angles larger than 70$^{\circ}$. The electromagnetic content of these very inclined showers is fully absorbed in the atmosphere (see figure \[muons\]), allowing a pure measurement of the muon content of the showers with this particle-detector array. The radio array, for its part, measures the electromagnetic part of the shower only. GP300 will therefore be the only setup performing a direct separation of the two shower components at the detector level itself.
This is of particular interest between $10^{16.5}$ and $10^{18}$eV, an energy range below which a satisfying agreement between simulations and experimental data is observed[@Fomin:2016kul], while a significant excess of muons is measured at higher energies[@Aab:2014pza] (see also [@dembinski] for more details on this issue). The determination of the shower energy and the maximum of shower development from the radio data, combined with the measurement of the muon content by the particle-detector array on a shower-to-shower basis, would provide very interesting insight into the details of hadronic interactions at the highest energies and hopefully help us better understand and interpret the measurements of UHECRs. GP300 will also be a suitable setup to test alternative radio-based methods to measure the nature of the primaries [@Billoir:2015cua]. GP300 will also be a very effective instrument to study cosmic rays between $10^{16.5}$ and $10^{18}$eV: after one year of observation, it will have recorded more than 10$^5$ cosmic rays in this energy range. This large sample, combined with the precise measurement of energy and mass composition, will make it possible to infer the distributions of arrival directions of light and heavy primaries separately, and their variation as a function of energy [@WP]. This will place GP300 in a privileged position to study the transition between cosmic rays of Galactic and extragalactic origin, expected to occur between $10^{17}$ and $10^{18}$eV[@Dawson:2017rsp]. Finally, GP300 will search for Giant Pulses[@Eftekhari:2016fyo]: simulations show that the incoherent summing of the signals of its 300 antennas will allow the detection of giant pulses from the Crab[@WP], while the very wide beam formed by incoherent summing will allow for a full-sky survey of sources of similar intensities.
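The sensitivity gain from incoherent summation invoked above (for giant pulses, and earlier for FRBs) can be illustrated with a small Monte Carlo: summing the power of $N$ antennas with independent noise makes the noise-power fluctuations grow only as $\sqrt{N}$, while a common signal grows as $N$, so the SNR improves as $\sqrt{N}$. This is an illustration of the scaling only, not GRAND analysis code; all names are ours.

```python
import numpy as np

def snr_gain_incoherent(n_antennas, n_samples=20000, signal=0.5, seed=1):
    """Monte-Carlo illustration of incoherent (power) summation: return the
    SNR of a common weak signal against the fluctuations of the summed
    noise power of n_antennas independent Gaussian-noise channels."""
    rng = np.random.default_rng(seed)
    noise_power = rng.normal(size=(n_samples, n_antennas)) ** 2
    summed = noise_power.sum(axis=1)   # background-only summed power
    shift = n_antennas * signal        # a flat signal adds N * signal
    return shift / summed.std()

# The ratio snr_gain_incoherent(N) / snr_gain_incoherent(1) approaches
# sqrt(N), e.g. a factor ~10 for N = 100 antennas.
```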
GP300 will also study the Epoch of Reionization: an absolute calibration of 30 antennas at the 1mK level —a reachable goal[@WP]— and a summation of their signals would make it possible to measure the temperature of the sky with a precision high enough to identify the absorption feature due to the reionization of hydrogen by the first stars, expected below 100MHz. Conclusion ========== A detailed, robust and reliable simulation chain demonstrates that the 200000km$^2$ GRAND detector will reach a sensitivity allowing the detection of neutrinos of cosmogenic origin or directly emitted by the sources. This, combined with its exposure to UHECRs and UHE gamma rays, will make GRAND a very powerful tool to study the origin of UHECRs. The GRANDProto300 experiment, a setup of 300 antennas covering a 200km$^2$ area, will be deployed in 2020 in order to demonstrate that the autonomous detection of very inclined showers is possible, thus validating the GRAND detection principle. GRANDProto300 will not only be a pathfinder for GRAND: it will also be a self-standing experiment thanks to its rich and appealing science case. [^1]:
--- abstract: 'The notion of ${\mathbb}{A}^1$-degree provides an arithmetic refinement of the usual notion of degree in algebraic geometry. In this note, we compute ${\mathbb}{A}^1$-degrees of certain finite covers $f\colon {\mathbb}{A}^n\to {\mathbb}{A}^n$ induced by quotients under actions of Weyl groups. We use knowledge of the cohomology ring of partial flag varieties as a key input in our proofs.' author: - 'Joseph Knight, Ashvin A. Swaminathan, and Dennis Tseng' bibliography: - 'references.bib' title: 'On the $\mathbb{A}^1$-Degree of a Weyl Cover' --- Introduction {#sec-intro} ============ We work over a field $K$, which is arbitrary unless stated otherwise. Associated to a finite morphism $f\colon {\mathbb{A}}^n\to {\mathbb{A}}^n$ of $K$-varieties, we have the usual notion of its degree, denoted by $\deg f$ and defined to be the degree of the induced extension of function fields. Refining this, $\mathbb{A}^1$-enumerative geometry provides a notion of an ${\mathbb}{A}^1$-degree, denoted by $\deg^{{\mathbb{A}}^1}f$, which is an element of the Grothendieck-Witt ring ${\operatorname}{GW}(K)$.[^1] The Grothendieck-Witt ring is generated by symmetric bilinear forms on $K$-vector spaces up to isomorphism, and the usual degree $\deg f$ can be recovered by taking the rank of the bilinear form $\deg^{{\mathbb{A}}^1}f$. If $K$ is algebraically closed, then the rank homomorphism defines an isomorphism of rings ${\operatorname}{GW}(K) \overset{\sim}\longrightarrow {\mathbb}{Z}$, and $\deg^{{\mathbb{A}}^1}f$ contains no more information than $\deg f$. However, if $K={\mathbb}{R}$, then the rank homomorphism ${\operatorname}{GW}({\mathbb}{R})\to {\mathbb}{Z}$ has kernel isomorphic to $\mathbb{Z}$, reflecting the fact that $\deg^{{\mathbb{A}}^1}f$ also contains the data of the Brouwer degree of the underlying map of ${\mathbb}{R}$-manifolds. In general, $\deg^{{\mathbb{A}}^1}f$ can be viewed as an enrichment of $\deg f$ that contains interesting arithmetic data. 
In this paper, we compute ${\mathbb{A}}^1$-degrees of quotient maps induced by Weyl groups. As a first example, one may consider the quotient map $\pi\colon {\mathbb{A}}^n\to {\mathbb{A}}^n/S_n\simeq {\mathbb{A}}^n$ of affine space by the action of the symmetric group on the coordinates. The usual degree of $\pi$ is $\deg \pi = n!$, and it turns out that $\deg^{{\mathbb{A}}^1} \pi=\frac{n!}{2} \cdot (\langle 1\rangle +\langle -1\rangle)$ for $n\geq 2$. This follows easily from the fact that $S_n$ contains a simple reflection, leading us to the following preliminary observation. \[trivialthm\] Let $G$ be a finite group acting linearly on a finite-dimensional $K$-vector space $V$. If the ring $K[V]^G$ of $G$-invariants of $K[V]$ is a polynomial ring and $G$ contains a simple reflection, then the $\mathbb{A}^1$-degree of $\pi \colon {\operatorname}{Spec} K[V] \to {\operatorname}{Spec} K[V]^G$ is given by $$\begin{aligned} \deg^{{\mathbb{A}}^1} \pi = \frac{\deg \pi}{2} \cdot (\langle 1\rangle +\langle -1\rangle).\end{aligned}$$ For instance, this observation applies to quotients of root spaces by Weyl groups when $K$ is of characteristic zero by the Chevalley–Shephard–Todd theorem (see [@C55 (A)]) or in arbitrary characteristic when the Weyl group is of type $A$ or $C$ (see [@D73 Théorème]). We can also compute ${\mathbb{A}}^1$-degrees in situations where it does not apply. For example, we will show that the ${\mathbb{A}}^1$-degree of the quotient map ${\mathbb{A}}^4/(S_2 \times S_2) \to {\mathbb{A}}^4/S_4$ is given by $4\cdot \langle 1\rangle + 2\cdot \langle -1\rangle$, so in particular, the ${\mathbb{A}}^1$-degree is no longer a multiple of $\langle 1\rangle +\langle -1\rangle$. Generalizing this example, we prove the following: \[partialquotient\] Let $n_1,\ldots,n_r$ be positive integers satisfying $n = \sum_{i = 1}^r n_i$.
The ${\mathbb{A}}^1$-degree of the map $\pi \colon \mathbb{A}_K^n\big/\prod_{i=1}^{r}S_{n_i}\to \mathbb{A}_K^{n}/S_n$ is given by $$\begin{aligned} \deg^{{\mathbb{A}}^1} \pi & = \frac{\deg \pi - a}{2} \cdot (\langle 1\rangle +\langle -1\rangle) + a\cdot \langle 1 \rangle \\ & = \frac{1}{2}\left(\frac{n!}{\prod_{i=1}^{r}n_i!}+a\right)\cdot \langle 1\rangle + \frac{1}{2}\left(\frac{n!}{\prod_{i=1}^{r}n_i!}-a\right)\cdot \langle -1\rangle,\end{aligned}$$ where $a = \lfloor\frac{n}{2}\rfloor! \big/ \prod_{i=1}^{r}\lfloor\frac{n_i}{2}\rfloor!$ if at most one $n_i$ is odd and $a = 0$ otherwise. The proof of this theorem involves applying the algorithm in [@KW19 Section 2] together with knowledge of the cohomology ring of partial flag varieties of type $A$. Motivated by this, we extend the result to Weyl groups of other types as follows: \[bigthm\] Let $K$ be a field of characteristic $0$. Let $G$ be a simple complex Lie group with root space $V/K$, and let $P \subset G$ be a parabolic subgroup. Let $W$ be the Weyl group of $G$, and let $W_P \subset W$ be the associated parabolic subgroup. Then the ${\mathbb{A}}^1$-degree of the map $\pi \colon {{\operatorname}{Spec}}K[V]^{W_P} \to {{\operatorname}{Spec}}K[V]^W$ is given by $$\deg^{{\mathbb{A}}^1} \pi = \frac{\deg \pi - a}{2} \cdot (\langle 1 \rangle + \langle -1 \rangle) + a \cdot \langle \alpha \rangle,$$ where $\alpha \in K^\times$, and $a$ is equal to the number of cosets $\omega \cdot W_P \in W/W_P$ for which $\omega^{-1}\omega_0\omega \in W_P$, where $\omega_0 \in W$ is the longest word. The element $\alpha$ in the statement of Theorem \[bigthm\] depends on the choice of identifications of ${\operatorname}{Spec}(K[V]^W)$ and ${\operatorname}{Spec}(K[V]^{W_P})$ with ${\mathbb}{A}^{\dim(V)}$. Such identifications are equivalent to choosing generators of $K[V]^W$ and $K[V]^{W_P}$ as polynomial rings over $K$.
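As a sanity check, the ranks appearing in the type-$A$ theorem can be computed directly. The script below simply evaluates the stated formulas (the function names are ours); in particular it reproduces the example $\mathbb{A}^4/(S_2\times S_2)\to \mathbb{A}^4/S_4$, where $\deg\pi = 6$, $a = 2$, and $\deg^{\mathbb{A}^1}\pi = 4\cdot\langle 1\rangle + 2\cdot\langle -1\rangle$.

```python
from math import factorial, prod

def a_value(parts):
    """a from the theorem: floor(n/2)! / prod(floor(n_i/2)!) if at most one
    n_i is odd, and 0 otherwise."""
    n = sum(parts)
    if sum(p % 2 for p in parts) > 1:
        return 0
    return factorial(n // 2) // prod(factorial(p // 2) for p in parts)

def a1_degree(parts):
    """Multiplicities (#<1>, #<-1>) in deg^{A1} pi for the quotient map
    A^n / prod S_{n_i} -> A^n / S_n, with n = sum(parts)."""
    n = sum(parts)
    deg = factorial(n) // prod(factorial(p) for p in parts)
    a = a_value(parts)
    return ((deg + a) // 2, (deg - a) // 2)

# Example from the text: parts = [2, 2] gives 4<1> + 2<-1>.
```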
In particular, scaling a generator of $K[V]^W$ by $\alpha'$ scales $\deg^{{\mathbb{A}}^1} \pi$ by $(\alpha')^{-1}$, so there is always a choice of generators making the element $\alpha$ in the theorem equal to $1$. In the type-$A$ case (i.e., the setting of the previous theorem), we show that the obvious choice of generators given by elementary symmetric functions yields $\alpha=1$. On the other hand, the number $a$ in the statement of the theorem can be computed explicitly in all cases, as we demonstrate in the following result: \[cor-0groups\] We have $a = 0$ in Theorem \[bigthm\] except in the following cases, tabulated according to the Dynkin diagrams of $G$ and of the Levi quotient $P/U(P)$:

  $G$          $P/U(P)$                                                                                     $a$
  ------------ -------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------
  $A_n$        $\amalg_{i = 1}^r A_{n_i}$ with $n = \sum_{i = 1}^r n_i$ and $\#\{\text{odd }n_i\} \leq 1$    $\lfloor\frac{n}{2}\rfloor! \big/ \prod_{i=1}^{r}\lfloor\frac{n_i}{2}\rfloor!$
  $D_{2n+1}$   $D_{2n}$                                                                                     2
  $E_6$        $D_5$                                                                                        3
  $E_6$        $D_4$                                                                                        6

Thus, for all pairs $(G,P)$ not tabulated above, the ${\mathbb{A}}^1$-degree of the map $\pi \colon {{\operatorname}{Spec}}K[V]^{W_P} \to {{\operatorname}{Spec}}K[V]^W$ is given by $$\deg^{{\mathbb{A}}^1} \pi = \frac{\deg \pi}{2} \cdot (\langle 1 \rangle + \langle -1 \rangle).$$ Acknowledgments {#acknowledgments .unnumbered} --------------- This work was supervised by Kirsten Wickelgren at the 2019 Arizona Winter School. We would like to thank Kirsten Wickelgren and Matthias Wendt for offering advice and guidance and for engaging in many enlightening discussions on the subject of $\mathbb{A}^1$-enumerative geometry. We would also like to acknowledge Nicholas Kuhn, Victor Petrov, J. D. Quigley, Jason Starr, James Tao, and Libby Taylor. We used [sage]{} for explicit computations.
Background Material {#sec-back}
===================

Before we prove our results, we provide a brief exposition on Grothendieck-Witt rings and on the ${\mathbb{A}}^1$-degree in the case of finite maps between affine spaces. Strictly speaking, there is not (as of yet) a notion of ${\mathbb{A}}^1$-degree for maps of affine spaces in the literature on ${\mathbb{A}}^1$-enumerative geometry, which is largely concerned with maps of spheres. For maps of affine spaces, a notion of *local* ${\mathbb{A}}^1$-degree is defined in [@KW19 Definition 11], and a suitable notion of (global) ${\mathbb{A}}^1$-degree will be defined in forthcoming work of Kass et al. (see [@KLSW19preprint]). In this section, we largely follow [@KW19 Section 1], which gives an algorithm for computing the local ${\mathbb{A}}^1$-degree around a $K$-rational point in the source. Because the foundations are still being written, our results can be interpreted as follows. We start with a finite map $\pi\colon {\mathbb}{A}^n\to {\mathbb}{A}^n$ given in , , or .

1. We compute the local ${\mathbb{A}}^1$-degree of $\pi$ at the origin.

2. For each $K$-point $q\in {\mathbb}{A}^n$ such that all the closed points of $\pi^{-1}(q)$ are $K$-rational, the sum of the local degrees at the closed points of $\pi^{-1}(q)$ agrees with the local ${\mathbb{A}}^1$-degree of $\pi$ at the origin [@KW19 Corollary 31]. We use this sum as a preliminary definition of ${\mathbb{A}}^1$-degree ().

3. Our computation of the local ${\mathbb{A}}^1$-degree of $\pi$ at the origin will agree with the global notion of ${\mathbb{A}}^1$-degree in [@KLSW19preprint], since the global ${\mathbb{A}}^1$-degree will be computable as a sum of local ${\mathbb{A}}^1$-degrees.

The Grothendieck-Witt Ring {#GWdefsec}
--------------------------

We now recall the definition of the Grothendieck-Witt ring of $K$.
\[GWdef1\] Denoted by ${\operatorname}{GW}(K)$, the Grothendieck-Witt ring of $K$ is defined to be the group completion of the semi-ring (under the operations of direct sum and tensor product) of isomorphism classes of symmetric nondegenerate bilinear forms on finite-dimensional vector spaces valued in $K$.

In addition to the abstract definition of ${\operatorname}{GW}(K)$ given in , it is often useful to have an explicit presentation. For $u \in K^\times$, define $\langle u \rangle \in {\operatorname}{GW}(K)$ to be the class of the nondegenerate symmetric bilinear form that sends $(x,y) \in K^2$ to $u\cdot xy \in K$.

\[GWrelations\] The group ${\operatorname}{GW}(K)$ is generated by the elements $\langle u \rangle$ with $u \in K^\times$, subject to the following relations:

1. $\langle u \cdot v^2 \rangle = \langle u \rangle$,

2. $\langle u \rangle + \langle v \rangle = \langle u + v \rangle + \langle uv(u + v) \rangle$ if $u + v \neq 0$.

The second relation in  is easy to see. Let $e_1$ and $e_2$ be a basis of a rank 2 vector space, with a bilinear form represented by $\begin{pmatrix} u & 0\\ 0 & v\end{pmatrix}$ with respect to this basis. With respect to the new basis $e_1+e_2,\ ve_1-ue_2$, the bilinear form is represented by $\begin{pmatrix} u+v & 0\\ 0 & uv(u+v)\end{pmatrix}$.

As a corollary of the second relation in , the following relation is well-known and important for us. We could not find a proof in the literature, so we include one here.

\[secondrelation\] For any $u \in K^\times$ we have $\langle u \rangle + \langle -u \rangle = \langle 1 \rangle + \langle -1 \rangle$ as elements of ${\operatorname}{GW}(K)$.
We have the following equalities: $$\begin{aligned} \langle u\rangle + \langle -u\rangle &= (\langle u\rangle + \langle 1-u\rangle) + \langle -u\rangle- \langle 1-u\rangle\\ &= \langle 1\rangle +( \langle u(1-u)\rangle + \langle -u\rangle) -\langle 1-u\rangle \\ &= \langle 1\rangle + \langle -u^2 \rangle + \langle -u^2(-u)(u(1-u))\rangle -\langle 1-u\rangle\\ &= \langle 1\rangle + \langle -1\rangle + \langle 1-u\rangle -\langle 1-u\rangle. \qedhere\end{aligned}$$

The form $\langle 1 \rangle + \langle -1 \rangle$ in  is called the *hyperbolic form*. It is easy to see from the second relation in  that the product of the hyperbolic form with any element of ${\operatorname}{GW}(K)$ is an integral multiple of the hyperbolic form. Also, $\langle 1 \rangle + \langle -1 \rangle$ is equivalent to the bilinear form $\begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix}$ in ${\operatorname}{GW}(K)$. This is easy to see in characteristic not $2$, as the matrices $\begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}$ and $\begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix}$ define equivalent bilinear forms. In characteristic 2, this is true because $\begin{pmatrix} 0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 1\end{pmatrix}$ and $\begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}$ are equivalent bilinear forms over ${\mathbb}{F}_2$. For more information about Grothendieck-Witt rings, see [@EKM08; @Lam05; @WW19].

Algorithm for Computing ${\mathbb{A}}^1$-degrees {#A1def}
------------------------------------------------

In this subsection, we recall an algorithm from [@KW19] for computing the ${\mathbb{A}}^1$-degree of a finite map between affine spaces when there exists a fiber whose closed points are $K$-rational. Let $f\colon {\mathbb{A}}^n\to {\mathbb{A}}^n$ be a finite map, and let $(f_1,\ldots,f_n)$ be its component functions.
\[Qdef\] The local algebra of $f$ at a $K$-point $p=(a_1,\ldots,a_n)\in {\mathbb}{A}^n$ is $Q_p(f) {\vcentcolon=}K[x_1,\ldots,x_n]_{m_p}/(f_1-b_1,\ldots,f_n-b_n)$, where $(b_1,\ldots,b_n)=f(p)\in {\mathbb}{A}^n$ and $m_p$ is the maximal ideal of $p$. The distinguished socle element is $E_p(f) {\vcentcolon=}\det(a_{ij}) \in Q_p(f)$, where $a_{ij}\in K[x_1,\ldots,x_n]$ are polynomials such that $f_i-b_i= \sum_j{a_{ij}\cdot (x_j-a_j)}$. We will denote $Q_0(f)$ and $E_0(f)$ by $Q(f)$ and $E(f)$, respectively.

\[phidef\] To a linear functional $\phi\colon Q_p(f)\to K$, we can associate a symmetric bilinear form $\beta_{\phi}$ on $Q_p(f)$ defined by $\beta_{\phi}(a,b)=\phi(ab)$.

From [@KW19 Main Theorem], we can compute the local ${\mathbb{A}}^1$-degree of $f$ as the class of a bilinear form on $Q_p(f)$ (viewed as a $K$-vector space) in the Grothendieck-Witt ring of $K$.

\[thmdef-a1main\] The local ${\mathbb{A}}^1$-degree of $f$ at a $K$-rational point $p$ in the domain ${\mathbb}{A}^n$, denoted by $\deg_{p}^{{\mathbb{A}}^1} f$, is given by the class of the bilinear form $\beta_\phi$ in ${\operatorname}{GW}(K)$, where $\phi$ is any linear functional sending the distinguished socle element $E_p(f)$ to 1.

From this, one can compute the sum of the local ${\mathbb{A}}^1$-degrees in a fiber of $f$ whose closed points are all $K$-rational.

\[a1def\] The sum $$\sum_{p\in f^{-1}(q)}\deg_{p}^{{\mathbb{A}}^1} f$$ is independent of $q\in {\mathbb}{A}^n$, as $q$ varies over all $K$-rational points in ${\mathbb}{A}^n$ for which the closed points in $f^{-1}(q)$ are $K$-rational.

In all the cases we consider, we choose $q=0$, and $f^{-1}(0)$ is supported at the origin. In light of  and the forthcoming work of Kass et al. in [@KLSW19preprint], we make the following preliminary definition.
\[a1defr\] The global ${\mathbb{A}}^1$-degree of $f$, denoted $\deg^{{\mathbb{A}}^1}f$, is defined to be $$\begin{aligned} \deg^{{\mathbb{A}}^1}f{\vcentcolon=}\sum_{p\in f^{-1}(q)}\deg_{p}^{{\mathbb{A}}^1} f\end{aligned}$$ for any $K$-rational point $q\in {\mathbb}{A}^n$ such that the closed points in $f^{-1}(q)$ are $K$-rational.

Lastly, in the proof of  only, we will make use of the Jacobian element, which is defined as follows:

The Jacobian element is $J(f) {\vcentcolon=}{\operatorname}{det}\left(\frac{\partial f_i}{\partial x_j}\right) \in Q(f)$.

The Jacobian element and distinguished socle element are related to each other by the equation $J(f) = (\dim_K Q(f)) \cdot E(f)$ [@SS75 (4.7) Korollar], so $J(f)$ contains the same information as $E(f)$ if the characteristic of $K$ does not divide the dimension of $Q(f)$ as a $K$-vector space.

Proofs of the Results {#sec-proofs}
=====================

We first note that for all of the maps $\pi\colon {\mathbb}{A}^n\to {\mathbb}{A}^n$ we consider, $\pi^{-1}(0)$ is supported at the origin. This is because the orbit of $0\in {\mathbb}{A}^n$ under a linear group action is just the origin. In particular, this means $\deg^{{\mathbb{A}}^1}\pi$ can be evaluated using the definition of the local ${\mathbb{A}}^1$-degree in .
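To fix ideas, here is the smallest worked instance of the definitions above (a standard example, included by us for illustration): the squaring map $f(x) = x^2$ on ${\mathbb}{A}^1$, i.e., the quotient by the sign action (in odd characteristic).

```latex
% f(x) = x^2, p = 0: local algebra Q(f) = K[x]_{(x)}/(x^2), with K-basis {1, x}.
% Writing f = x \cdot x gives a_{11} = x, so E(f) = x; also
% J(f) = f'(x) = 2x = (\dim_K Q(f)) \cdot E(f), as expected.
% Take \phi with \phi(1) = 0 and \phi(x) = 1 (note x^2 = 0 in Q(f)). Then
\beta_\phi =
\begin{pmatrix}
  \phi(1\cdot 1) & \phi(1\cdot x)\\
  \phi(x\cdot 1) & \phi(x\cdot x)
\end{pmatrix}
=
\begin{pmatrix}
  0 & 1\\
  1 & 0
\end{pmatrix},
\qquad
\deg_0^{{\mathbb{A}}^1} f = \langle 1\rangle + \langle -1\rangle .
```

The resulting class is the hyperbolic form, with rank $2 = \deg f$, matching the pattern of the results above.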
Proof of Proposition \[trivialthm\]
-----------------------------------

Since $G$ contains a simple reflection $r$, the map $\pi$ factors through ${\operatorname}{Spec}(K[V]^r)$: $$\begin{aligned} \pi \colon {\operatorname}{Spec}(K[V]) \to {\operatorname}{Spec}(K[V]^r) \to {\operatorname}{Spec}(K[V]^G).\end{aligned}$$ It is easy to check (for example using [@KW19 Section 1]) that $$\deg^{{\mathbb{A}}^1}({\operatorname}{Spec}(K[V])\to {\operatorname}{Spec}(K[V]^r)) = \langle 1\rangle+\langle -1\rangle.$$ By the fact that local ${\mathbb{A}}^1$-degrees are multiplicative in compositions, we have that $$\deg^{{\mathbb{A}}^1} \pi = (\langle 1\rangle+\langle -1\rangle) \cdot \deg^{{\mathbb{A}}^1}({\operatorname}{Spec}(K[V]^r) \to {\operatorname}{Spec}(K[V]^G)).$$ It follows from the presentation of the Grothendieck-Witt ring in [@EKM08 Theorem 4.7] that any product with the hyperbolic form $\langle 1\rangle+\langle -1\rangle$ is actually an integral multiple of the hyperbolic form. Thus, there is some integer $N$ such that $\deg^{{\mathbb{A}}^1} \pi = N \cdot (\langle 1 \rangle + \langle -1 \rangle)$. Taking the rank of $\deg^{{\mathbb{A}}^1} \pi$, we find that $2N = \deg \pi$, which is the desired result.

It turns out to be more efficient from an expository standpoint to prove Theorem \[bigthm\] and Proposition \[cor-0groups\] before Theorem \[partialquotient\], so we order the remaining proofs accordingly.

Proof of Theorem \[bigthm\]
---------------------------

Consider the algebra $$\begin{aligned} Q{\vcentcolon=}Q(\pi) \simeq K[V]^{W_P}/(K[V]^{W})^+,\end{aligned}$$ where for a graded ring $R$, we denote by $R^+$ its irrelevant ideal. Suppose that we can produce a $K$-linear functional $\phi\colon Q\to K$ sending the distinguished socle element $E \in Q$ to $1$.
Then because $\pi^{-1}(0) = \{0\}$, it follows from Theorem \[thmdef-a1main\] that the symmetric bilinear form $\beta_\phi\colon Q\times Q\to K$ defined by $\beta_\phi(a_1,a_2){\vcentcolon=}\phi(a_1 a_2)$ has the property that its class in ${\operatorname}{GW}(K)$ is equal to $\deg^{{\mathbb{A}}^1} \pi$.

We now briefly sketch our idea for producing the desired functional $\phi$. The key observation is that $Q$ can be identified with the Chow ring of a certain partial flag variety; under this identification, we can choose $\phi$ to be a certain scalar multiple of the integration map. Because the cohomology of this partial flag variety is spanned by the classes of Schubert varieties, and because each Schubert variety has a dual Schubert variety, this forces $\beta_\phi$ to be a direct sum of copies of the bilinear forms $\begin{pmatrix} \alpha \end{pmatrix}$ and $\begin{pmatrix} 0 & \alpha\\ \alpha & 0\end{pmatrix}$, where the multiplicity of each form depends on how many Schubert varieties are cohomologically equivalent to their dual Schubert variety.

By [@B53 Proposition 29.2(a)], the partial flag variety $F{\vcentcolon=}G/P$ has cohomology[^2] $$\begin{aligned} H^{\bullet}(F,K) = K[V]^{W_P}/(K[V]^{W})^+ = Q.\end{aligned}$$ We might try to take the functional $\phi$ to be the integration map on $H^{\bullet}(F,K)$, but to make this work, we would need to verify that the distinguished socle element $E$, viewed as an element of $H^{\bullet}(F,K)$, integrates to $1$. This also depends on the choice and ordering of polynomial generators of $K[V]^{W_P}$ and $K[V]^{W}$ providing isomorphisms ${\operatorname}{Spec}(K[V]^{W_P})\simeq {\operatorname}{Spec}(K[V]^{W})\simeq {\mathbb{A}}^{\dim_K V}$. We verify this in the case where $G = \mathrm{SL}_n$ in Section \[sec-sln\], using elementary symmetric functions as the generators of the invariant rings.
For now, let $\alpha \in K^\times$ be such that $\frac{1}{\alpha}$ is the integral of $E$, and let $\phi$ be such that $\frac{1}{\alpha} \cdot \phi$ is the integration map. We now compute the intersection pairing $\beta_\phi$ on $Q$. To do this, we use the following three facts (see [@B05 Section 2.1]):

1. The cohomology $H^{\bullet}(F,K)$ of $F$ has a basis given by the classes of the Schubert varieties;

2. Schubert varieties are indexed by cosets of $W/W_P$; and

3. The basis of Schubert varieties has a dual basis under the integration pairing, also given by Schubert varieties. The Schubert variety dual to the Schubert variety associated to the coset $\omega W_P$ is given by the coset $\omega_0\omega W_P$, where $\omega_0\in W$ is the longest word.

It follows that the matrix of $\beta_\phi$ with respect to the basis of Schubert classes is block-diagonal, where the blocks are of two types: $\begin{pmatrix} \alpha \end{pmatrix}$ arising from self-dual Schubert classes, and $\begin{pmatrix} 0 & \alpha\\ \alpha & 0\end{pmatrix}$ arising from all other dual pairs. Note that the class of $\begin{pmatrix} \alpha \end{pmatrix}$ in ${\operatorname}{GW}(K)$ is given by $ \langle \alpha \rangle$ and that Lemma \[secondrelation\] implies that the class of $\begin{pmatrix} 0 & \alpha\\ \alpha & 0\end{pmatrix}$ is given by $\langle 1 \rangle + \langle -1 \rangle$. Let $a$ be the number of self-dual Schubert classes. Then the number of other dual pairs of Schubert classes is simply given by $\frac{1}{2}(\dim_{K} Q - a) = \frac{1}{2}(\deg \pi - a).$ The theorem now follows upon observing that $a$ is equal to the number of cosets $\omega \cdot W_P$ such that $\omega_0\omega$ belongs to the same coset, which is equivalent to saying that $\omega^{-1} \omega_0 \omega \in W_P$.

Proof of 
---------

Suppose that the Dynkin diagram of $G$ is not any one of $A_n$, $D_{n}$ for $n$ odd, or $E_6$.
Then the longest word $\omega_0$ is in the center of $W$ ([@Bou Planches I-IX]), and the support of $\omega_0$ is full (in the sense that every generator of $W$ is required to express $\omega_0$). It follows that $\omega^{-1}\omega_0\omega = \omega_0$ is not contained in any proper parabolic subgroup of $W$, so we must have that $a = 0$. We treat the remaining cases separately as follows.

### The $A_n$ Case

In this case, the Weyl group of $G$ is $W = S_n$, and any parabolic subgroup $W_P \subset W$ is of the form $W_P = \prod_{i = 1}^r S_{n_i}$, where $n = \sum_{i = 1}^r n_i$. The longest word $\omega_0\in S_n$ is the permutation that sends $i$ to $n+1-i$ for every $i$. Recall that the number $a$ of self-dual Schubert classes is equal to the number of cosets $\omega \cdot \prod_{i=1}^{r}S_{n_i}$ such that $\omega_0\omega$ belongs to the same coset, which is further equal to the number of elements in the set $P$ of partitions of $\{1,\ldots,n\}$ into blocks $B_1,\ldots,B_r$ (not necessarily contiguous) of sizes $n_1,\ldots,n_r$ such that exchanging $i$ and $n+1-i$ for each $i$ preserves those blocks. If all of the blocks are of even size, then $\# P$ is equal to the number of partitions of $\{1,\ldots,\frac{n}{2}\}$ into blocks of sizes $\frac{n_1}{2},\ldots,\frac{n_r}{2}$. If some block has odd size, then that block must contain $\frac{n+1}{2}$ (in particular, $n$ must be odd) and must therefore be the only block of odd size. Thus, if there is a single block of odd size, then $\# P$ is equal to the number of partitions of $\{1,\ldots,\frac{n-1}{2}\}$ into blocks of sizes $\lfloor\frac{n_1}{2}\rfloor,\ldots,\lfloor\frac{n_r}{2}\rfloor$, and $\# P = 0$ if more than one $n_i$ is odd. So $a = \#P = \lfloor\frac{n}{2}\rfloor! \big/ \prod_{i=1}^{r}\lfloor\frac{n_i}{2}\rfloor!$ if at most one $n_i$ is odd and $a = \#P = 0$ otherwise, as desired.
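This count is easy to confirm by brute force for small $n$. Here is a Python sketch (ours, with hypothetical helper names) comparing a direct coset count in $S_n$ against the closed formula; it uses that the condition $\omega^{-1}\omega_0\omega \in W_P$ depends only on the coset $\omega W_P$, so counting elements and dividing by $\# W_P$ counts cosets.

```python
from itertools import permutations
from math import factorial, prod

def self_dual_cosets(blocks):
    """Count cosets w * prod(S_{n_i}) in S_n with w^{-1} w0 w in the
    parabolic subgroup, by brute force over S_n (w0 = longest word)."""
    n = sum(blocks)
    # block index of each position 0..n-1 (contiguous blocks)
    bidx = [b for b, size in enumerate(blocks) for _ in range(size)]
    w0 = tuple(n - 1 - i for i in range(n))  # longest word: i -> n-1-i
    in_WP = lambda p: all(bidx[p[i]] == bidx[i] for i in range(n))
    count = 0
    for w in permutations(range(n)):
        winv = [0] * n
        for i, wi in enumerate(w):
            winv[wi] = i
        conj = tuple(winv[w0[w[i]]] for i in range(n))  # w^{-1} w0 w
        if in_WP(conj):
            count += 1
    return count // prod(factorial(s) for s in blocks)  # elements -> cosets

def a_formula(blocks):
    """Closed form: floor(n/2)! / prod(floor(n_i/2)!), or 0 if two n_i are odd."""
    n = sum(blocks)
    if sum(s % 2 for s in blocks) > 1:
        return 0
    return factorial(n // 2) // prod(factorial(s // 2) for s in blocks)
```

For instance, both functions return $2$ for the block sizes $(2,2)$, and $0$ for $(3,1)$, where two of the $n_i$ are odd.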
### The $D_n$ Case

In this case, the Weyl group of $G$ has presentation $$W = \langle r_1, \dots, r_n : (r_ir_j)^{m_{ij}}=1 \rangle,$$ where the $m_{ij}$ are defined by $$m_{ij}= \begin{cases} 1, & \text{if}\ i=j, \\ 2, & \text{if}\ (i,j)=(1,2), \text{ or if } |i-j|>1 \text{ and } (i,j) \neq (1,3),(3,1),\\ 3, & \text{if}\ |i-j|=1 \text{ with } i,j \geq 2, \text{ or if } (i,j) = (1,3) \text{ or } (3,1). \end{cases}$$ The generator $r_k$ of $W$ corresponds to the node $k$ of the Dynkin diagram of $D_n$, labeled so that nodes $1$ and $2$ are the two fork nodes, both joined to node $3$, and the nodes $3,4,\dots,n$ form a chain. The length $\ell(\omega)$ of $\omega \in W$ is the length of the shortest expression of $\omega$ as a product of the generators $r_k$. The unique longest element $\omega_0$ is an involution satisfying $\ell(\omega_0) = n^2-n$, and when $n$ is odd, $\omega_0$ acts by conjugation on the generators as follows: $\omega_0r_1\omega_0^{-1}=r_2$ (hence also $\omega_0r_2\omega_0^{-1}=r_1$), and $\omega_0r_i\omega_0^{-1}=r_i$ for $i \geq 3$ ([@Bou Planche IV]). The (proper) parabolic subgroups of $W$ are precisely those subgroups $W_I = \langle r_i : i \in I\rangle$, where $I \subsetneq \lbrace 1, \dots, n \rbrace$ is any subset. A parabolic subgroup $W_I$ is said to be *maximal* if $\# I = n-1$.

Let $n \geq 5$ be odd. We claim that if $\omega^{-1}\omega_0\omega \in W_I$ for some $\omega \in W$ and proper parabolic subgroup $W_I$, then $I= \lbrace 1, \dots, n-1 \rbrace$. Note that for such an $\omega$, any element $\omega' \in Z\omega$ also satisfies $(\omega')^{-1}\omega_0\omega' \in W_I$, where $Z$ is the centralizer of $\omega_0$ in $W$. Let $$\sigma_{i,k}= \begin{cases} \prod_{j = i}^k r_j, & \text{if}\ i \leq k, \\ 1, & \text{if}\ i>k. \end{cases}$$ The following table shows how to reduce the length of a coset representative of $Z\omega$ as above by left-multiplication with elements of $Z$.
In each row, the leftmost entry is a possible starting segment $b$ for $\omega$ expressed as a word $\omega = b \cdot c$, the middle entry is a re-expression of $b$ that is more convenient for the purpose of length reduction, and the rightmost entry is a shortened segment $b' \in Z b$ with $\ell(b') < \ell(b)$. Because $\{r_1r_2\} \cup \{r_i : i\geq 3\} \subset Z$ and because $\{r_i : i \geq 4\}$ is contained in the centralizer of $r_1$, it is sufficient to consider starting segments $b$ that begin with $r_1r_3$.

  Starting Segment        Re-expression of Starting Segment                                             Shortened Segment    Conditions
  ----------------------- ----------------------------------------------------------------------------- -------------------- ---------------------
  $r_1r_3r_1$             $r_3 \cdot r_1r_3$                                                            $r_1r_3$             n/a
  $r_1r_3r_2$             $r_1r_3r_2 \cdot (r_2r_3)^3 = (r_1r_2r_3) \cdot r_2r_3 $                      $r_1r_3$             n/a
  $r_1\sigma_{3,k}r_j$    $(r_1r_3r_j) \cdot \sigma_{4,k} $                                             $r_1\sigma_{3,k}$    $1 \leq j \leq 2$
  $r_1\sigma_{3,k}r_j $   $ r_1r_3\sigma_{4,j+1}r_j\sigma_{j+2,k} = r_{j+1}\cdot r_1r_3\sigma_{4,k} $   $r_1\sigma_{3,k} $   $3 \leq j \leq k-1$
  $r_1r_3r_j$             $r_j \cdot r_1r_3$                                                            $r_1r_3$             $j>4$

For example, the reduction in row 4 of the table is justified as follows: the defining relations of $W$ imply that $\sigma_{j+2,k}r_j=r_j \sigma_{j+2,k}$, and that $r_jr_{j+1}r_j=r_{j+1}r_jr_{j+1}$. Since each row of the table constitutes a reduction in length, we have shown that if the conjugacy class of $\omega_0$ meets $W_I$, then there is an element $$\omega \in S{\vcentcolon=}\lbrace r_1\sigma_{3,k} : 3 \leq k \leq n \rbrace$$ such that $\omega^{-1}\omega_0\omega \in W_I$.
The length of the longest element of $S$ is $n-1$, so we deduce that $$\ell(\omega^{-1}\omega_0\omega) \geq \ell(\omega_0)-2\cdot \ell(\omega) \geq (n^2-n)-2\cdot (n-1) = n^2-3n+2.$$ To finish the proof of the claim, it is enough to see that $\ell_k < n^2 - 3n + 2$, where $k<n$ and $\ell_k$ is the length of the longest element of the (unique) maximal parabolic subgroup not containing $r_k$. The maximal lengths in a Weyl group of type $A_r$ or $D_r$ are $\binom{r+1}{2}$ and $r^2-r$, respectively ([@Bou Planches I and IV]). Using this fact together with the additivity of maximal lengths in products of Coxeter groups, we find that $$\ell_k= \begin{cases} \binom{n}{2}, & \text{if}\ 1 \leq k \leq 2, \\ (k-1)^2-(k-1) + \binom{n-k+1}{2}, & \text{if}\ 2<k<n. \end{cases}$$ It is easy to check in each case that $f(n,k) {\vcentcolon=}n^2-3n+2-\ell_k >0$ when $n \geq 5$. For example, in the case $2<k<n$, one readily checks that $f(n,k)$ is minimal when $k=n-1$, in which case $f(n,n-1) = 2n-5 > 0$. Thus we have proven the claim.

Finally, in the case $I = \lbrace 1, \dots ,n-1 \rbrace$, let $P_I \subset G = \mathrm{SO}(2n)$ be an associated parabolic subgroup. We can realize the flag variety $G/P_I$ as a smooth quadric hypersurface $X$ of dimension $2n-2$. Indeed, under the obvious transitive action of $G$ on such a quadric, the subgroup $M=\mathrm{SO}(2n-2) \subset P_I \subset G$ (embedded in the standard way by acting as the identity on the last two coordinates) stabilizes a point $p \in X$. Since the stabilizer $M' \subset G$ of $p$ is parabolic and contains $M$, it follows by inspecting its Dynkin diagram that $M' = P_I$. By [@Reid72 Proof of Theorem 1.13], we have that $$H^{2n-2}(X,\mathbb{Z})=\mathbb{Z}L_1 \oplus \mathbb{Z}L_2,$$ where the $L_i$ are classes of linear subspaces on $X$ satisfying $L_1^2=L_2^2=1$ and $L_1 \cdot L_2 = 0$. Thus, there are exactly two self-dual classes, as desired.

### The $E_6$ Case

This case can be verified using the following [sage]{} code.
The first block of code computes the number of elements $\omega \in W$ such that $\omega^{-1}\omega_0 \omega \in W_P$ in the cases where $P$ is a maximal parabolic subgroup. (The loop treats the maximal parabolics obtained by deleting the nodes labeled $1$ through $4$; the remaining two cases follow from these by the diagram automorphism of $E_6$.)

```
INPUT:
E6 = WeylGroup(["E", 6])
w0 = E6.w0
for J in [[2,3,4,5,6], [1,3,4,5,6], [1,2,4,5,6], [1,2,3,5,6]]:
    i = 0
    for w in E6:
        if len((w.inverse()*w0*w).coset_representative(J).reduced_word()) == 0:
            i = i + 1
    print(i)

OUTPUT:
5760
0
0
0
```

The above code shows that the only maximal parabolic subgroups $P$ that give rise to a nonzero number of self-dual elements are the ones where the Dynkin diagram of $P/U(P)$ is $D_5$, which can be obtained by deleting the node labeled $1$ in the Dynkin diagram of $E_6$, or by deleting the node labeled $6$. In this case, the desired number of self-dual elements is given by $$\frac{\#\{\omega \in W : \omega^{-1}\omega_0\omega \in W_P\}}{\# W_P} = \frac{5760}{1920} = 3.$$ For the smaller parabolic subgroups, it suffices to consider only those $P$ that are properly contained in a maximal parabolic subgroup that gives rise to a non-zero number of self-dual elements.
The only such $P$ has the property that the Dynkin diagram of $P/U(P)$ is $D_4$; it is obtained by deleting the nodes labeled $1$ and $6$ from the Dynkin diagram of $E_6$. The second block of code handles this case:

```
INPUT:
i = 0
for w in E6:
    if len((w.inverse()*w0*w).coset_representative([2,3,4,5]).reduced_word()) == 0:
        i = i + 1
print(i)

OUTPUT:
1152
```

In this case, the desired number of self-dual elements is given by $$\frac{\#\{\omega \in W : \omega^{-1}\omega_0\omega \in W_P\}}{\# W_P} = \frac{1152}{192} = 6.$$ This completes the proof of .

Proof of Theorem \[partialquotient\] {#sec-sln}
------------------------------------

The idea is to use the same strategy as in the proof of Theorem \[bigthm\]. For convenience, let $m_i=\sum_{j = 1}^i n_j$. Consider the partial flag variety $F{\vcentcolon=}F(m_1,\dots,m_r)$ parametrizing flags of $\mathbb{C}$-vector spaces $0\subset V_1\subset \cdots\subset V_r=\mathbb{C}^n$ where $V_i$ has dimension $m_i$. By [@B53 Proposition 31.1], the integral cohomology ring of $F$ is given by $$\begin{aligned} H^{\bullet}(F,\mathbb{Z}) = \frac{\mathbb{Z}[x_1,\ldots,x_n]^{\prod_{i=1}^{r}S_{n_i}}}{(\mathbb{Z}[x_1,\ldots,x_n]^{S_n})^+}. \label{eq-IntCohom} \end{aligned}$$ For any field $K$ (regardless of characteristic), we have that $$Q:=Q(\pi) = \frac{K[x_1,\ldots,x_n]^{\prod_{i=1}^{r}S_{n_i}}}{(K[x_1,\ldots,x_n]^{S_n})^+} = H^{\bullet}(F,K).$$ We want to take the functional $\phi$ to be the integration map on $H^{\bullet}(F,K)$, so we need to verify that the distinguished socle element $E:=E(\pi)$, viewed as an element of $H^{\bullet}(F,K)$, integrates to $1$. To do this, consider the element ${\widetilde}{E} \in \mathbb{Z}[x_1, \dots, x_n]$ defined by the formula for the distinguished socle element in .
Viewing ${\widetilde}{E}$ as an element of $H^{\bullet}(F,\mathbb{Z})$ via the identification , it is easy to see that the image of ${\widetilde}{E}$ under the map $H^{\bullet}(F,\mathbb{Z}) \to H^{\bullet}(F,K)$ is equal to $E$. It now suffices to show that ${\widetilde}{E}$ is equal to the class of a point in $H^{\bullet}(F,\mathbb{Z})$. Notice that ${\widetilde}{E}\in H^{\text{top}}(F,\mathbb{Z})$ and that $H^{\text{top}}(F, \mathbb{Z}) \simeq \mathbb{Z}$. By [@SS75 proof of Korollar 4.7] (see also [@KW19 proof of Lemma 4]), $E$ is nonzero independently of $K$, so we can vary $K = \mathbb{F}_p$ over all primes $p$ to see that the image of ${\widetilde}{E}$ in $H^{\text{top}}(F, \mathbb{F}_p)$ must be nonzero for each prime $p$. It follows that ${\widetilde}{E}$ is a generator of $H^{\text{top}}(F,\mathbb{Z})\simeq \mathbb{Z}$ and therefore agrees with the class of a point up to sign. To determine the sign, it suffices to compute the sign of the Jacobian element $J:=J(\pi)$, taking $K=\mathbb{Q}$.

We first consider the case where $n_i=1$ for every $i$. In this case, the Jacobian element is $J = \prod_{1\leq i<j\leq n}(x_i-x_j)$ by [@LP02 Equation (1)]; notice that $J$ is a Vandermonde determinant and can be expressed using the Leibniz formula as $$\label{eq-1} J = \prod_{1\leq i<j\leq n}(x_{i}-x_{j}) = \sum_{\sigma \in S_n} {\operatorname}{sign}(\sigma) \cdot \prod_{i = 1}^n x_{\sigma(i)}^{n-i}.$$ On the other hand, the class of a point in $F$ is by definition given by $\prod_{i = 1}^n x_i^{n-i}$ (see [@BJS93 Section 1]). For any $\sigma \in S_n$, we have that $$\label{eq-2} \sigma \cdot \prod_{i = 1}^n x_i^{n-i} = \prod_{i = 1}^n x_{\sigma(i)}^{n-i} = {\operatorname}{sign}(\sigma) \cdot \prod_{i = 1}^n x_i^{n-i}.$$ It follows from combining  and  that $J = n! \cdot \prod_{i = 1}^n x_i^{n-i} \in H^{\text{top}}(F,\mathbb{Z})$.

We next consider the general case where not every $n_i$ is equal to $1$.
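Before doing so, here is a quick sanity check of the full-flag computation just carried out (our check, for $n = 2$, where $F = \mathbb{P}^1$):

```latex
% Relations in H^\bullet(F,\mathbb{Z}): e_1 = x_1 + x_2 = 0 and e_2 = x_1x_2 = 0,
% hence x_2 = -x_1 and x_1^2 = 0. The class of a point is x_1^{2-1}x_2^{2-2} = x_1, and
J = x_1 - x_2 = 2x_1 = 2! \cdot x_1^{2-1}x_2^{2-2},
% in agreement with J = n! \cdot \prod_i x_i^{n-i}.
```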
Consider the composition of maps $$\begin{aligned} \label{eq-comp} {\operatorname}{Spec}({\mathbb}{Q}[x_1,\ldots,x_n])\to {\operatorname}{Spec}({\mathbb}{Q}[x_1,\ldots,x_n]^{\prod_{i=1}^{r}S_{n_i}})\to {\operatorname}{Spec}({\mathbb}{Q}[x_1,\ldots,x_n]^{S_n}).\end{aligned}$$ The Jacobian element of the first map in  is the product of the Jacobian elements of the maps ${\operatorname}{Spec}({\mathbb}{Q}[x_{m_{k-1}+1},\ldots,x_{m_k}])\to {\operatorname}{Spec}({\mathbb}{Q}[x_{m_{k-1}+1},\ldots,x_{m_k}]^{S_{n_k}})$ over $1 \leq k \leq r$. It then follows from the chain rule that the Jacobian element of ${\operatorname}{Spec}({\mathbb}{Q}[x_1,\ldots,x_n]^{\prod_{i=1}^{r}S_{n_i}})\to {\operatorname}{Spec}({\mathbb}{Q}[x_1,\ldots,x_n]^{S_n})$ is $$\begin{aligned} J & = \prod_{1\leq i<j\leq n}(x_{i}-x_{j}) \bigg/\prod_{k = 1}^r \prod_{m_{k-1}+1\leq i<j\leq m_k}(x_{i}-x_{j}) \nonumber\\ & = \prod_{1\leq k<\ell\leq r}\prod_{i=1}^{n_k}\prod_{j=1}^{n_\ell}(x_{m_{k-1}+i}-x_{m_{\ell-1}+j}).\label{Jelement}\end{aligned}$$ In words, this Jacobian element takes the same form as the product of differences $\prod_{1\leq i<j\leq n}(x_i-x_j)$, but instead of taking all pairwise differences $x_i-x_j$ for $i<j$, we take only the pairs $i<j$ such that $i$ and $j$ lie in different blocks, where we partition $\{1,\dots,n\}$ into contiguous blocks of sizes $n_1,\dots,n_r$.

Now, we want to compare the Jacobian element with the class of a point in $F$. As before, we see directly that swapping two variables from different blocks switches the sign of the Jacobian element, while swapping two variables from the same block preserves it; the same is true of the formula for the class of a point in $F$. By [@B05 Section 2.1], the class of a point in $F$ is the Schubert polynomial associated to the permutation $$\label{eq-perm} m_{r-1}+1,\ldots,m_{r},m_{r-2}+1,\ldots,m_{r-1},\cdots, 1,\ldots,m_{1}$$ of the list $1, \dots, n$.
In words, the permutation  takes the numbers $1,\ldots,n$, splits them up into contiguous blocks of sizes $n_1,\ldots,n_r$, and reverses the order of the blocks (keeping the order within each block fixed). By [@BJS93 Block decomposition formula], the Schubert polynomial associated to  is given by $$\prod_{i=1}^{r}\left(\prod_{j=1}^{n_i}x_{m_{i-1}+j}\right)^{\sum_{k = i+1}^r n_k}.$$ Expanding out  and keeping track of the signs, we find that $$J = \frac{n!}{\prod_{i=1}^{r}n_i!} \cdot \prod_{i=1}^{r}\left(\prod_{j=1}^{n_i}x_{m_{i-1}+j}\right)^{\sum_{k = i+1}^r n_k},$$ so the signs agree. We deduce that $\alpha = 1$, and the theorem now follows from Theorem \[bigthm\] and .

[^1]: We give precise definitions of ${\operatorname}{GW}(K)$ and $\deg^{{\mathbb{A}}^1}f$ in Section \[sec-back\].

[^2]: Note that our partial flag variety is defined over ${\mathbb}{C}$, but we take its cohomology with coefficients in $K$.
--- abstract: 'Kaltofen has proposed a new approach in [@Kal92] for computing matrix determinants without divisions. The algorithm is based on a baby steps/giant steps construction of Krylov subspaces, and computes the determinant as the constant term of a characteristic polynomial. For matrices over an abstract ring, by the results of @BaSt82, the determinant algorithm, actually a straight-line program, leads to an algorithm with the same complexity for computing the adjoint of a matrix. However, the latter adjoint algorithm is obtained by the reverse mode of automatic differentiation, hence somehow is not “explicit”. We present an alternative (still closely related) algorithm for the adjoint that can be implemented directly, that is, without resorting to an automatic transformation. The algorithm is deduced by applying program differentiation techniques “by hand” to Kaltofen’s method, and is completely described. As a subproblem, we study the differentiation of programs that compute minimum polynomials of linearly generated sequences, and we use a lazy polynomial evaluation mechanism for reducing the cost of Strassen’s avoidance of divisions in our case.' address: | CNRS, Université de Lyon, INRIA\ Laboratoire LIP, ENSL, 46, Allée d’Italie, 69364 Lyon Cedex 07, France author: - Gilles Villard title: 'Kaltofen’s division-free determinant algorithm differentiated for matrix adjoint computation' --- [^1] matrix determinant, matrix adjoint, matrix inverse, characteristic polynomial, exact algorithm, division-free complexity, Wiedemann algorithm, automatic differentiation.

Introduction {#sec:intro}
============

Kaltofen has proposed in [@Kal92] a new approach for computing matrix determinants. This approach has brought breakthrough ideas for improving the complexity estimate for the problem of computing the determinant without divisions over an abstract ring (see [@Kal92; @KaVi04-2]).
With these foundations, the algorithm of @KaVi04-2 computes the determinant in $O(n^{2.7})$ additions, subtractions, and multiplications. The same ideas also lead to the currently best known bit complexity estimate of @KaVi04-2 for the problem of computing the characteristic polynomial. We consider the straight-line programs of [@Kal92] for computing the determinant over abstract fields or rings (with or without divisions). Using the reverse mode of automatic differentiation (see [@Lin70; @Lin76], and [@OWB71]), a straight-line program for computing the determinant of a matrix $A$ can be (automatically) transformed into a program for computing the adjoint matrix $A ^{*}$ of $A$. This principle, stated by @BaSt82 [Cor.5], is also applied by @Kal92 [Sec.1.2] for computing $A^*$. Since the adjoint program is derived by an automatic process, little is known about the way it computes the adjoint. The only available information seems to be the determinant program itself, and the knowledge we have of the differentiation process. Nor can the adjoint program be described, or implemented, without resorting to an automatic differentiation tool. In this paper, by studying the differentiation of Kaltofen’s determinant algorithm step by step, we produce an “explicit” adjoint algorithm. The determinant algorithm, which we first recall in Section \[sec:detK\] over an abstract field $\K$, uses a Krylov subspace construction, hence mainly reduces to vector times matrix, and matrix times matrix products. Another operation involved is computing the minimum polynomial of a linearly generated sequence. We apply the program differentiation mechanism, reviewed in Section \[sec:autodiff\], to the different steps of the determinant program in Section \[sec:differentiation\]. This leads us to the description of a corresponding new adjoint program over a field, in Section \[sec:adjointK\]. The algorithm we obtain somehow calls to mind the matrix factorization of @Ebe97 [(3.4)].
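The identity underlying the transformation of a determinant program into an adjoint program is $\partial \det(A)/\partial a_{i,j} = (A^{*})_{j,i}$; since the determinant is affine-linear in each entry, each partial derivative is even an exact finite difference. A minimal Python illustration of this identity (ours, purely for illustration — it uses $n^2$ determinant evaluations, whereas the point of reverse-mode differentiation is to obtain the whole adjoint at essentially the cost of one determinant computation):

```python
from fractions import Fraction
from itertools import permutations

def det(A):
    """Determinant by permutation expansion (fine for tiny matrices)."""
    n = len(A)
    total = Fraction(0)
    for perm in permutations(range(n)):
        # sign of perm from its inversion count
        inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        term = Fraction(-1 if inv % 2 else 1)
        for i in range(n):
            term *= A[i][perm[i]]
        total += term
    return total

def adjoint(A):
    """Adjoint from the determinant: adj(A)[j][i] = d det(A) / d a_ij.
    As det is affine-linear in each entry, the derivative equals the
    exact difference det(A + E_ij) - det(A)."""
    n = len(A)
    d = det(A)
    adj = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            B = [row[:] for row in A]
            B[i][j] += 1
            adj[j][i] = det(B) - d  # partial derivative of det at A
    return adj
```

One checks that $A \cdot \operatorname{adj}(A) = \det(A)\cdot I$, so the derivative of the determinant really does recover the adjoint.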
We note that our objectives are similar to Eberly’s, whose aim was to give an explicit inversion algorithm from the parallel determinant algorithm of @KaPa91. Our motivation for studying the differentiation and resulting adjoint algorithm is the importance of the determinant approach of @Kal92, and @KaVi04-2, for various complexity estimates. Recent advances around the determinant of polynomial or integer matrices (see [@EGV00; @KaVi04-2; @Sto03; @Sto05]), and matrix inversion (see [@JeVi06], and [@Sto08]) also justify the study of the general adjoint problem. For computing the determinant without divisions over a ring $\R$, Kaltofen applies the avoidance of divisions of @Str73 to his determinant algorithm over a field. We apply the same strategy for the adjoint. From the algorithm of Section \[sec:adjointK\] over a field, we deduce an adjoint algorithm over an arbitrary ring $\R$ in Section \[sec:nodiv\]. The avoidance of divisions involves computations with truncated power series. A crucial point in Kaltofen’s approach is a “baby steps/giant steps” scheme for reducing the corresponding power series arithmetic cost. However, since we use the reverse mode of differentiation, the flow of computation is modified, and the benefit of the baby steps/giant steps is partly lost for the adjoint. This leads us to introduce an early and lazy polynomial evaluation strategy so as not to increase the complexity estimate. The division-free determinant algorithm of @Kal92 uses $\sO(n^{3.5})$ operations in $\R$. The adjoint algorithm we propose has essentially the same cost. Our study may be seen as a first step for the differentiation of the more efficient algorithm of @KaVi04-2. The latter would require, in particular, considering asymptotically fast matrix multiplication algorithms that are not discussed in what follows.
Especially in our matrix context, we note that interpreting programs obtained by automatic differentiation may have connections with the interpretation of programs derived using the transposition principle. We refer for instance to the discussion of @Kal00-2 [Sec.6]. [**Cost functions.**]{} We let ${\sf M}(n)$ be such that two univariate polynomials of degree $n$ over an arbitrary ring $\R$ can be multiplied using ${\sf M}(n)$ operations in $\R$. The algorithm of @CaKa91 allows ${\sf M}(n)=O(n\log n \log\log n)$. The function $O({\sf M}(n))$ also measures the cost of truncated power series arithmetic over $\R$. For bounding the cost of polynomial gcd-type computations over a commutative field $\K$ we define the function ${\sf G}$. Let ${\sf G}(n)$ be such that the extended gcd problem (see [@vzGG99 Chap.11]) can be solved with ${\sf G}(n)$ operations in $\K$ for polynomials of degree $2n$ in $\K[x]$. The recursive Knuth/Schönhage half-Gcd algorithm (see [@Knu70; @Sch71; @Moe73]) allows ${\sf G}(n)=O({\sf M}(n)\log n)$. The minimum polynomial, of degree $n$, of a linearly generated sequence given by its first $2n$ terms can be computed in ${\sf G}(n) +O(n)$ operations (see [@vzGG99 Algorithm 12.9]). We will often use the notation $\sO$ that indicates missing factors of the form $\alpha (\log n )^{\beta}$, for two positive real numbers $\alpha$ and $\beta$. Kaltofen’s determinant algorithm over a field {#sec:detK} ============================================= Kaltofen’s determinant algorithm extends the Krylov-based method of @Wie86. The latter approach is successful in various situations. We refer especially to the algorithms of @KaPa91 and @KaSa91 around exact linear system solution, which have served as a basis for subsequent works. We may also point out the various questions investigated by @CEKSTV01-2, and references therein. Let $\K$ be a commutative field. We consider $A \in \K ^{n \times n}$, $u \in \K ^{ 1 \times n}$, and $v \in \K ^{n \times 1}$.
We introduce the Hankel matrix $H= \left(uA^{i+j-2}v\right) _{1\leq i,j \leq n} \in \K ^{n \times n}$, and let $h_k=uA^kv$ for $0\leq k \leq 2n-1$. We also assume that $H$ is non-singular: $$\label{eq:defH} \det H = \det \left[ \begin{array}{cccc} uv & uAv & \ldots & uA ^{n-1}v \\ uAv & uA ^{2}v & \ldots & uA ^{n}v \\ \vdots & \ddots & \vdots & \vdots\\ uA ^{n-1}v & \ldots & \ldots & uA ^{2n-2}v \end{array} \right] \neq 0.$$ In the applications, (\[eq:defH\]) is ensured either by construction of $A,u$, and $v$, as in [@Kal92; @KaVi04-2], or by randomization (see the above cited references around Wiedemann’s approach, and [@Kal92; @KaVi04-2]). One of the key ideas of @Kal92 for reducing the division-free complexity estimate for computing the determinant is to introduce a “baby steps/giant steps” behaviour in the Krylov subspace construction. With baby steps/giant steps parameters $s=\lceil \sqrt{n}\rceil$ and $r=\lceil 2n/s \rceil$ ($rs \geq 2n$) we consider the following algorithm. [rl]{}   \ Algorithm &\ [*Input:*]{} & $A \in \K ^{n \times n}, u \in \K ^{ 1 \times n}, v \in \K ^{n \times 1}$\ & --------------------------- --------------------------------------------------------------- [step i]{}. $v_0:=v$; For $i=1,\ldots,r-1$ do $v_i := A v_{i-1}$ \[-0.16cm\] [step ii]{}. $B:=A^r$ \[-0.16cm\] [step iii]{}. $u_0:=u$; For $j=1,\ldots,s-1$ do $u_j := u_{j-1} B$ \[-0.16cm\] [step iv]{}. For $i=0,1,\ldots,r-1$ do \[-0.16cm\] For $j=0,1,\ldots,s-1$ do $h_{i+jr}:=u_jv_i$ \[-0.16cm\] [step v]{}. $f:=$ the minimum polynomial of $\{h_k\}_{0\leq k \leq 2n-1}$ --------------------------- --------------------------------------------------------------- \ [*Output:*]{} & $\det A := (-1)^nf(0)$.\    We omit the proof of the next theorem that establishes the correctness and the cost of Algorithm [Det]{}, and refer to @Kal92. We may simply note that the sequence $\{h_k\}_{0\leq k \leq 2n-1}$ is linearly generated.
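As an illustration, the steps of Algorithm [Det]{} can be transcribed in Python over the rationals. This is only a toy sketch (ours): the matrix power $B=A^r$ is computed naively, and the minimum polynomial is obtained by solving the Hankel system directly by Gauss-Jordan elimination rather than by the fast extended gcd approach; all function names are ours.

```python
from fractions import Fraction
from math import isqrt

def mat_mul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def minpoly_constant_term(h, n):
    # Solve the Hankel system H (f_0..f_{n-1})^T = -(h_n..h_{2n-1})^T,
    # assuming det H != 0; Gauss-Jordan over the rationals.
    M = [[Fraction(h[i + j]) for j in range(n)] + [-Fraction(h[n + i])]
         for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                m = M[r][c] / M[c][c]
                M[r] = [x - m * y for x, y in zip(M[r], M[c])]
    return M[0][n] / M[0][0]          # f_0

def det_baby_giant(A, u, v):
    n = len(A)
    s = isqrt(n - 1) + 1              # s = ceil(sqrt(n))
    r = -(-2 * n // s)                # r = ceil(2n/s), so rs >= 2n
    vs = [v]                          # step i: baby steps v_i = A v_{i-1}
    for _ in range(r - 1):
        vs.append([sum(a * b for a, b in zip(row, vs[-1])) for row in A])
    B = A                             # step ii: B = A^r (naive powering here)
    for _ in range(r - 1):
        B = mat_mul(B, A)
    us = [u]                          # step iii: giant steps u_j = u_{j-1} B
    for _ in range(s - 1):
        us.append([sum(a * b for a, b in zip(us[-1], col)) for col in zip(*B)])
    h = [Fraction(0)] * (r * s)       # step iv: h_{i+jr} = u_j v_i
    for i in range(r):
        for j in range(s):
            h[i + j * r] = sum(a * b for a, b in zip(us[j], vs[i]))
    f0 = minpoly_constant_term(h, n)  # step v
    return (-1) ** n * f0

# small example where (eq:defH) holds: det H = -10 for these projections
A = [[Fraction(x) for x in row] for row in ([2, 1, 0], [1, 3, 1], [0, 1, 2])]
u = [Fraction(1), Fraction(0), Fraction(0)]
v = [Fraction(1), Fraction(1), Fraction(2)]
d = det_baby_giant(A, u, v)           # det A = 8
```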
In addition, if (\[eq:defH\]) is true, then the minimum polynomial $f$ of $\{h_k\}_{0\leq k \leq 2n-1}$, the minimum polynomial of $A$, and the characteristic (monic) polynomial of $A$ coincide. Hence $(-1)^nf(0)$ is equal to the determinant of $A$. Via an algorithm that can multiply two matrices of $\K ^{n \times n}$ in $O(n^{\omega})$ operations we have: \[theo:proofdet\] If $A\in \K ^{n \times n}$, $u\in \K ^{ 1 \times n}$, and $v\in \K ^{n \times 1}$ satisfy (\[eq:defH\]), then Algorithm [Det]{} computes the determinant of $A$ in $O(n^{\omega}\log n)$ operations in $\K$. For the matrix product we may set $\omega =3$, or $\omega=2.376$ using the algorithm of @CoWi90. In the rest of the paper we work with a cubic matrix multiplication algorithm. Our study has to be generalized if fast matrix multiplication is introduced. Backward automatic differentiation {#sec:autodiff} ================================== The determinant of $A\in \K^{n\times n}$ is a polynomial $\Delta$ in $\K[a_{1,1},\ldots,a_{i,j},\ldots,a_{n,n}]$ of the entries of $A$. We denote the adjoint matrix by $A ^{*}$ such that $A A ^{*}= A ^{*} A = (\det A) I$. As noticed by @BaSt82, the entries of $A ^{*}$ satisfy $$\label{eq:partial} a_{j,i}^{*}=\frac{\partial \Delta}{\partial a_{i,j}}, 1\leq i,j \leq n.$$ The reverse mode of automatic differentiation allows one to transform a program which computes $\Delta$ into a program which computes all the partial derivatives in (\[eq:partial\]). Among the rich literature about the reverse mode of automatic differentiation we may refer to the seminal works of @Lin70 [@Lin76] and @OWB71. For deriving the adjoint program from the determinant program we follow the lines of @BaSt82 and @Mor85. Algorithm [Det]{} is a straight-line program over $\K$. For a comprehensive study of straight-line programs see, for instance, [@BCS97 Chapter 4]. We assume that the entries of $A$ are stored initially in $n^2$ variables $\delta_i$, $-n^2 < i \leq 0$.
Then we assume that the algorithm is a sequence of arithmetic operations in $\K$, or assignments to constants of $\K$. Let $L$ be the number of such operations. We assume that the result of each instruction is stored in a new variable $\delta_i$, hence the algorithm is seen as a sequence of instructions $$\label{eq:slp1} \delta_i := \delta_j \text{~op~} \delta_k, \text{~op} \in {\{+,-,\times,\div\}}, ~-n^2 < j,k < i,$$ or $$\label{eq:slp2} \delta_i := c, ~c \in \K,$$ for $1\leq i \leq L$. Note that a binary arithmetic operation (\[eq:slp1\]) where one of the operands is a constant of $\K$ can be implemented with the aid of (\[eq:slp2\]). For any $0\leq i \leq L$, the determinant may be seen as a rational function $\Delta _i$ of $\delta_{-n^2+1}, \ldots, \delta_{i}$, such that $$\label{eq:firstinstruct} \Delta _0 (\delta_{-n^2+1}, \ldots, \delta_{0}) = \Delta (a_{1,1}, \ldots , a_{n,n}),$$ and such that the last instruction gives the result: $$\label{eq:lastinstruct} \det A = \delta_{L} = \Delta _{L} (\delta_{-n^2+1}, \ldots, \delta_{L}).$$ The reverse mode of automatic differentiation computes the derivatives (\[eq:partial\]) in a backward recursive way, from the derivatives of (\[eq:lastinstruct\]) to those of (\[eq:firstinstruct\]). Using (\[eq:lastinstruct\]) we start the recursion with $$\frac{\partial \Delta _L}{\partial \delta_L} = 1, ~ \frac{\partial \Delta _L}{\partial \delta_l} = 0, ~ -n^2 < l \leq L-1.$$ Then, writing $$\label{eq:deltaidentities} \Delta _{i-1} (\delta_{-n^2+1}, \ldots, \delta_{i-1})= \Delta _{i} (\delta_{-n^2+1}, \ldots, \delta_{i}) = \Delta _{i}(\delta_{-n^2+1}, \ldots, g(\delta_j, \delta_k)),$$ where $g$ is given by (\[eq:slp1\]) or (\[eq:slp2\]), we have $$\label{eq:maindiff} \frac{\partial \Delta _{i-1}}{\partial \delta_l} = \frac{\partial \Delta _{i}}{\partial \delta_l} + \frac{\partial \Delta _{i}}{\partial \delta_{i}} \frac{\partial g}{\partial \delta_l}, ~ -n^2 < l \leq i-1,$$ for $1\leq i \leq L$.
Depending on $g$, several cases may be examined. For instance, for an addition $\delta_i := g(\delta_k,\delta_j)=\delta_k + \delta_j$, (\[eq:maindiff\]) becomes $$\label{eq:diffadd} \frac{\partial \Delta _{i-1}}{\partial \delta_k} = \frac{\partial \Delta _{i}}{\partial \delta_k} + \frac{\partial \Delta _{i}}{\partial \delta_{i}}, ~~~\frac{\partial \Delta _{i-1}}{\partial \delta_j} = \frac{\partial \Delta _{i}}{\partial \delta_j} + \frac{\partial \Delta _{i}}{\partial \delta_{i}} ,$$ with the other derivatives ($l\neq k$ or $j$) remaining unchanged. In the case of a multiplication $\delta_i := g(\delta_k,\delta_j)=\delta_k \times \delta_j$, (\[eq:maindiff\]) gives that the only derivatives that are modified are $$\label{eq:diffmul} \frac{\partial \Delta _{i-1}}{\partial \delta_k} = \frac{\partial \Delta _{i}}{\partial \delta_k} + \frac{\partial \Delta _{i}}{\partial \delta_{i}}\,\delta_j, ~~~\frac{\partial \Delta _{i-1}}{\partial \delta_j} = \frac{\partial \Delta _{i}}{\partial \delta_j} + \frac{\partial \Delta _{i}}{\partial \delta_{i}}\,\delta_k.$$ We see for instance in (\[eq:diffmul\]), where $\delta_j$ is used for updating the derivative with respect to $\delta_k$, that the recursion uses intermediary results of the determinant algorithm. For the adjoint algorithm, we will assume that the determinant algorithm has been executed once, and that the $\delta_i$’s are stored in $n^2 +L$ memory locations. Recursion (\[eq:maindiff\]) gives a practical means, and a program, for computing the $N=n^2$ derivatives of $\Delta$ with respect to the $a_{i,j}$’s. For any rational function $Q$ in $N$ variables $\delta_{-N+1},\ldots , \delta_0$ the corresponding general statement is: \[theo:BaSt\] \[@BaSt82\] Let ${\mathcal P}$ be a straight-line program computing $Q$ in $L$ operations in $\K$. One can derive an algorithm $\partial {\mathcal P}$ that computes $Q$ and the $N$ partial derivatives ${\partial Q}/{\partial \delta_l}$ in less than $5L$ operations in $\K$.
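A compact Python model of this backward recursion may help fix ideas. A straight-line program is held as a tape of instructions $(\text{op}, j, k)$; a forward sweep evaluates it, and a backward sweep applies (\[eq:maindiff\]) case by case, as in (\[eq:diffadd\]) and (\[eq:diffmul\]). This sketch and its example program are ours: we differentiate the obvious $2\times 2$ determinant program and recover the transposed adjoint entries, as predicted by (\[eq:partial\]).

```python
from fractions import Fraction

def run_and_differentiate(inputs, program):
    """Evaluate a straight-line program, then run the backward recursion
    (reverse mode): bar[l] ends up holding d(result)/d(delta_l)."""
    tape = list(inputs)
    for op, j, k in program:
        a, b = tape[j], tape[k]
        tape.append(a + b if op == '+' else
                    a - b if op == '-' else
                    a * b if op == '*' else a / b)
    bar = [Fraction(0)] * len(tape)
    bar[-1] = Fraction(1)                       # d(result)/d(result) = 1
    for i in range(len(tape) - 1, len(inputs) - 1, -1):
        op, j, k = program[i - len(inputs)]
        if op == '+':
            bar[j] += bar[i]; bar[k] += bar[i]
        elif op == '-':
            bar[j] += bar[i]; bar[k] -= bar[i]
        elif op == '*':
            bar[j] += bar[i] * tape[k]; bar[k] += bar[i] * tape[j]
        else:                                   # delta_i = delta_j / delta_k
            bar[j] += bar[i] / tape[k]
            bar[k] -= bar[i] * tape[i] / tape[k]
    return tape[-1], bar[:len(inputs)]

# det of [[a11, a12], [a21, a22]] as a tape on inputs (a11, a12, a21, a22):
prog = [('*', 0, 3),    # delta_4 = a11 * a22
        ('*', 1, 2),    # delta_5 = a12 * a21
        ('-', 4, 5)]    # delta_6 = det
a = [Fraction(x) for x in (2, 3, 5, 7)]
det, grads = run_and_differentiate(a, prog)
# grads = [dDet/da11, dDet/da12, dDet/da21, dDet/da22], i.e. the
# transposed adjoint (7, -5, -3, 2) of [[2, 3], [5, 7]]
```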
Combining Theorem \[theo:BaSt\] with Theorem \[theo:proofdet\] gives the construction of an algorithm $\partial$[Det]{} for computing the adjoint matrix $A ^*$ (see [@BaSt82 Corollary 5]). The algorithm can be generated automatically via an automatic differentiation tool[^2]. However, it seems unclear how it could be programmed directly, and, to our knowledge, it has no interpretation of its own. Differentiating the determinant algorithm over a field {#sec:differentiation} ====================================================== We apply the backward recursion (\[eq:maindiff\]) to Algorithm [Det]{} of Section \[sec:detK\] for deriving the algorithm $\partial$[Det]{}. We assume that $A$ is non-singular, hence $A^*$ is non-trivial. By construction, the flow of computation for the adjoint is reversed compared to the flow of Algorithm [Det]{}, therefore we start with the differentiation of [step v]{}. Differentiation of the minimum polynomial constant term computation {#subsec:constantterm} ------------------------------------------------------------------- At [step v]{}, Algorithm [Det]{} computes the minimum polynomial $f$ of the linearly generated sequence $\{h_k\}_{0\leq k \leq 2n-1}$. Let $\lambda$ be the first instruction index at which all the $h_k$’s are known. We apply the recursion until step $\lambda$ globally; that is, we compute the derivatives of $\Delta _{\lambda}$. After the instruction $\lambda$, the determinant is viewed as a function $\Delta _{\text{\sc v}}$ of the $h_k$’s only. Following (\[eq:deltaidentities\]) we have $$\det (A) = \Delta _{\lambda}(\delta_{-n^2+1}, \ldots, \delta_{\lambda}) =\Delta _{\text{\sc v}}(h_0, \ldots , h_{2n-1}).$$ Hence we may focus on the derivatives $\partial \Delta _{\text{\sc v}} / \partial h_k$, $0\leq k \leq 2n-1$; the remaining ones are zero.
Using assumption (\[eq:defH\]) we know that the minimum polynomial $f$ of $\{h_k\}_{0\leq k \leq 2n-1}$ has degree $n$, and if $f(x)=f_0 + f_1 x + \ldots + f_{n-1} x^{n-1} + x^n$, then $f$ satisfies $$\label{eq:linsysf} H \left[ \begin{array}{c} f_0 \\ f_1 \\ \vdots \\ f_{n-1}\end{array} \right] = \left[ \begin{array}{cccc} h_0 & h_1 & \ldots & h_{n-1} \\ h_1 & h_{2} & \ldots & h_{n} \\ \vdots & \ddots & \vdots & \vdots\\ h_{n-1} & \ldots & \ldots & h_{2n-2} \end{array} \right] \left[ \begin{array}{c} f_0 \\ f_1 \\ \vdots \\ f_{n-1}\end{array} \right] = - \left[ \begin{array}{c} h_n \\ h_{n+1} \\ \vdots \\ h_{2n-1}\end{array} \right]$$ see, e.g., [@Kal92], or [@vzGG99 Algorithm 12.9] together with [@BGY80]. Applying Cramer’s rule we see that $$f_0 = (-1)^n \det \left[ \begin{array}{cccc} h_1 & h_2 & \ldots & h_{n} \\ h_2 & h_{3} & \ldots & h_{n+1} \\ \vdots & \ddots & \vdots & \vdots\\ h_{n} & \ldots & \ldots & h_{2n-1} \end{array} \right] / \det H,$$ hence, defining $H_A=\left( uA^{i+j-1}v\right) _{1\leq i,j \leq n}=\left( h_{i+j-1}\right)_{1\leq i,j \leq n} \in \K ^{n\times n}$, we obtain $$\label{eq:quof0} \Delta_{\text{\sc v}} = \frac{\det H_A}{\det H}.$$ Let $\tilde{{\mathcal K}}_u$ and ${\mathcal K}_v$ be the Krylov matrices $$\label{eq:defKu} \tilde{{\mathcal K}}_u = [u ^T, A ^Tu ^T, \ldots, (A ^T) ^{n-1}u ^T]^T \in \K ^{n\times n},$$ and $$\label{eq:defKv} {\mathcal K}_v = [v, Av, \ldots, A ^{n-1}v] \in \K ^{n\times n}.$$ Since $H=\tilde{{\mathcal K}}_u {\mathcal K}_v$, assumption (\[eq:defH\]) implies that both $\tilde{{\mathcal K}}_u$ and ${\mathcal K}_v$ are non-singular. Hence, using that $A$ is non-singular, we note that $H_A = \tilde{{\mathcal K}}_u A {\mathcal K}_v$ also is non-singular. For differentiating (\[eq:quof0\]), let us first specialize (\[eq:partial\]) to Hankel matrices. 
We denote by $(\partial \Delta /\partial a_{i,j})(H)$ the substitution of the entries of $H$ for the $a_{i,j}$’s in $\partial \Delta /\partial a_{i,j}$, for $1\leq i,j \leq n$. From (\[eq:partial\]) we have $$h ^* _{j,i} = \frac{\partial \Delta}{\partial a_{i,j}}(H), 1\leq i,j \leq n.$$ Since the entries of $H$ are constant along the anti-diagonals, we deduce that $$\frac{\partial \det H}{\partial h_k} = \sum _{i+j-2=k} \frac{\partial \Delta}{\partial a_{i,j}}(H) = \sum _{i+j-2=k} h ^* _{j,i}= \sum _{i+j-2=k} h ^* _{i,j}, ~0 \leq k \leq 2n-2.$$ In other words, we may write $$\label{eq:partialH} \frac{\partial \det H}{\partial h_k} = \sigma_k (H ^*), ~0 \leq k \leq 2n-1,$$ where, for a matrix $M=(m_{ij})$, we define $$\sigma_k(M)= \sum_{i+j-2=k} m_{ij}, ~1 \leq i,j \leq n,$$ with the convention that an empty sum is zero. The function $\sigma_k(M)$ is the sum of the entries in the anti-diagonal of $M$ starting with $m_{1,k+1}$ if $0 \leq k \leq n-1$, and $m_{k-n+2,n}$ if $n \leq k \leq 2n-2$. Shifting the entries of $H$ for obtaining $H_A$ we also have $$\label{eq:partialHA} \frac{\partial \det H_A}{\partial h_k} = \sigma_{k-1} (H_A ^*), ~0 \leq k \leq 2n-1.$$ Now, differentiating (\[eq:quof0\]), together with (\[eq:partialH\]) and (\[eq:partialHA\]), leads to $$\frac{\partial \Delta_{\text{\sc v}}}{\partial h_k} = \frac{(\partial \det H_A / \partial h_k)}{\det H} - \frac{(\partial \det H / \partial h_k)}{\det H} \frac{\det H_A}{\det H} = \frac{(\partial \det H_A / \partial h_k)}{\det H_A} \frac{\det H_A}{\det H} - \sigma _k (H ^{-1}) \Delta_{\text{\sc v}}$$ and, consequently, to $$\label{eq:diff5} \frac{\partial \Delta_{\text{\sc v}}}{\partial h_k} = \left( \sigma _{k-1} (H_A ^{-1}) - \sigma _k (H ^{-1})\right) \Delta_{\text{\sc v}}, ~0 \leq k \leq 2n-1.$$ With (\[eq:diff5\]) we identify the problem solved by the first step of the $\partial$[Det]{} algorithm, and provide initial information for interpreting or implementing the adjoint program.
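Identity (\[eq:diff5\]) is easy to test on a small example. In the following Python sketch (ours, for illustration only), the quotient $\det H_A/\det H$ of (\[eq:quof0\]) is differentiated with respect to each $h_k$ by forward-mode dual numbers, and the result is compared with the anti-diagonal sums of $H_A^{-1}$ and $H^{-1}$; the sequence $h_k$ is a handmade example with $\det H \neq 0$.

```python
from fractions import Fraction

class Dual:                 # x + eps*dx with eps^2 = 0
    def __init__(self, x, dx=0):
        self.x, self.dx = Fraction(x), Fraction(dx)
    def __add__(self, o):
        return Dual(self.x + o.x, self.dx + o.dx)
    def __sub__(self, o):
        return Dual(self.x - o.x, self.dx - o.dx)
    def __mul__(self, o):
        return Dual(self.x * o.x, self.x * o.dx + self.dx * o.x)
    def __truediv__(self, o):
        return Dual(self.x / o.x, (self.dx * o.x - self.x * o.dx) / (o.x * o.x))

def det(M):                 # cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    res = None
    for j in range(len(M)):
        t = M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
        res = t if res is None else (res + t if j % 2 == 0 else res - t)
    return res

def inverse(M):             # adjugate / determinant, over Fractions
    n, d = len(M), det(M)
    cof = [[(-1) ** (i + j) * det([r[:j] + r[j + 1:]
            for r in (M[:i] + M[i + 1:])]) for j in range(n)]
           for i in range(n)]
    return [[cof[j][i] / d for j in range(n)] for i in range(n)]

def sigma(M, k):            # anti-diagonal sum; 0 for an empty sum
    return sum(M[i][k - i] for i in range(len(M)) if 0 <= k - i < len(M))

n = 3
h = [Fraction(x) for x in (1, 3, 12, 50, 206, 838)]   # det H = -10 != 0
H = [[h[i + j] for j in range(n)] for i in range(n)]
H_A = [[h[i + j + 1] for j in range(n)] for i in range(n)]
delta_v = det(H_A) / det(H)

# right-hand side of (eq:diff5)
formula = [(sigma(inverse(H_A), k - 1) - sigma(inverse(H), k)) * delta_v
           for k in range(2 * n)]

# independent check: d(det H_A / det H)/dh_k via dual numbers
dual = []
for k in range(2 * n):
    hd = [Dual(x, int(m == k)) for m, x in enumerate(h)]
    Hd = [[hd[i + j] for j in range(n)] for i in range(n)]
    HAd = [[hd[i + j + 1] for j in range(n)] for i in range(n)]
    dual.append((det(HAd) / det(Hd)).dx)
```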
Various algorithms may be used for computing the minimum polynomial (for instance see [@vzGG99 Algorithm 12.9]), that will lead to corresponding algorithms for computing the left sides in (\[eq:diff5\]). However, we will not discuss these aspects, since the associated costs are not dominant in the overall complexity. We have recalled, in the introduction, that the minimum polynomial $f$ (its constant term $f(0)$) can be computed from the $h_k$’s in ${\sf G}(n) +O(n)$ operations in $\K$. Hence Theorem \[theo:BaSt\] gives an algorithm for computing the derivatives using $5{\sf G}(n) +O(n)$ operations. Alternatively, in the Appendix we propose a direct approach that takes advantage of (\[eq:diff5\]). Proposition \[prop:computsigma\] shows that if $f$, $H$, and $H_A$ are given, then the ${\partial \Delta_{\text{\sc v}}}/{\partial h_k}$’s can be computed in ${\sf G}(n) + O({\sf M}(n))$ operations in $\K$. Differentiation of the dot products ----------------------------------- For differentiating [step iv]{}, $\Delta$ is seen as a function $\Delta _{\text{\sc iv}}$ of the $u_j$’s and $v_i$’s. The entries of $u_j$ are used for computing the $r$ scalars $h_{jr},h_{1+jr}, \ldots, h_{(r-1)+jr}$ for $0\leq j \leq s-1$. The entries of $v_i$ are involved in the computation of the $s$ scalars $h_i, h_{i+r}, \ldots, h_{i+(s-1)r}$ for $0\leq i \leq r-1$. In (\[eq:maindiff\]), the new derivative ${\partial \Delta _{i-1}}/{\partial \delta_l}$ is obtained by adding the current instruction contribution to the previously computed derivative ${\partial \Delta _{i}}/{\partial \delta_l}$. Since all the $h_{i+jr}$’s are computed independently according to $$h_{i+jr} = \sum _{l=1}^n (u_j)_l (v_i)_l,$$ it follows that the derivative of $\Delta _{\text{\sc iv}}$ with respect to an entry $(u_j)_l$ or $(v_i)_l$ is obtained by summing up the contributions of the multiplications $(u_j)_l (v_i)_l$.
We obtain $$\label{eq:tmpDu} \frac{\partial \Delta _{\text{\sc iv}}}{\partial (u_j)_l} = \sum _{i=0}^{r-1} \frac{\partial \Delta _{\text{\sc v}}}{\partial h_{i+jr}} (v_i)_l, ~0\leq j \leq s-1,~1\leq l \leq n,$$ and $$\label{eq:tmpDv} \frac{\partial \Delta _{\text{\sc iv}}}{\partial (v_i)_l} = \sum _{j=0}^{s-1} \frac{\partial \Delta _{\text{\sc v}}}{\partial h_{i+jr}} (u_j)_l, ~0\leq i \leq r-1, ~1\leq l \leq n.$$ By abuse of notation (of the sign $\partial$), we let $\partial u_j$ be the $n\times 1$ vector, respectively $\partial v_i$ be the $1\times n$ vector, whose entries are the derivatives of $\Delta _{\text{\sc iv}}$ with respect to the entries of $u_j$, respectively $v_i$. Note that because of the index transposition in (\[eq:partial\]), it is convenient, here and in the following, to take the transpose form (column versus row) for the derivative vectors. Defining also $$\partial H = \left( \frac{\partial \Delta _{\text{\sc v}}}{\partial h_{i+jr}} \right)_{ 0\leq i \leq r-1, ~0\leq j \leq s-1} \in \K ^{r \times s},$$ we deduce, from (\[eq:tmpDu\]) and (\[eq:tmpDv\]), that $$\label{eq:diff4u} \left[ \partial u_0, \partial u_1, \ldots, \partial u_{s-1} \right] = \left[ v_0, v_1, \ldots, v_{r-1} \right] \partial H \in \K ^{n \times s},$$ and $$\label{eq:diff4v} \left[\begin{array}{c} ~~~~\partial v_0~~~~\\ \partial v_1\\ \vdots\\ \partial v_{r-1} \end{array} \right] = \partial H \left[\begin{array}{c} ~~~~u_0~~~~\\ u_1\\ \vdots\\ u_{s-1} \end{array} \right] \in \K^{r\times n}.$$ Identities (\[eq:diff4u\]) and (\[eq:diff4v\]) give the second step of the adjoint algorithm. In Algorithm [Det]{}, [step iv]{} costs essentially $2rsn$ additions and multiplications in $\K$. Here we have essentially $4rsn$ additions and multiplications using basic loops (as in [step iv]{}) for calculating the matrix products, that is, without an asymptotically fast matrix multiplication algorithm.
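The reshaping of (\[eq:tmpDu\])–(\[eq:tmpDv\]) into the products (\[eq:diff4u\])–(\[eq:diff4v\]) can be checked mechanically. In the Python sketch below (notation ours), the scalars $c_k$ stand for the already-known derivatives $\partial \Delta_{\text{\sc v}}/\partial h_k$; since each $h_{i+jr}$ is bilinear in $(u_j, v_i)$, the linear functional $\sum_k c_k h_{k}$ changes under a unit perturbation of one coordinate by exactly the claimed derivative, so the check is exact over the rationals.

```python
from fractions import Fraction
from random import Random

rng = Random(0)
n, r, s = 3, 4, 2
F = lambda: Fraction(rng.randint(-9, 9))

us = [[F() for _ in range(n)] for _ in range(s)]   # giant-step rows u_j
vs = [[F() for _ in range(n)] for _ in range(r)]   # baby-step columns v_i
c = [F() for _ in range(r * s)]                    # c_k plays dDelta_v/dh_k

def phi(us, vs):
    # linear functional sum_k c_k h_k with h_{i+jr} = sum_l (u_j)_l (v_i)_l
    return sum(c[i + j * r] * us[j][l] * vs[i][l]
               for i in range(r) for j in range(s) for l in range(n))

# (eq:diff4u): dU = [v_0 .. v_{r-1}] . dH, with dH[i][j] = c[i+jr]
dU = [[sum(vs[i][l] * c[i + j * r] for i in range(r)) for j in range(s)]
      for l in range(n)]                           # n x s
# (eq:diff4v): dV = dH . [u_0; ..; u_{s-1}]
dV = [[sum(c[i + j * r] * us[j][l] for j in range(s)) for l in range(n)]
      for i in range(r)]                           # r x n

base = phi(us, vs)
ok = True
for j in range(s):                                 # unit perturbations of u_j
    for l in range(n):
        us[j][l] += 1
        ok &= (phi(us, vs) - base == dU[l][j])
        us[j][l] -= 1
for i in range(r):                                 # unit perturbations of v_i
    for l in range(n):
        vs[i][l] += 1
        ok &= (phi(us, vs) - base == dV[i][l])
        vs[i][l] -= 1
```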
Differentiation of the matrix times vector and matrix products -------------------------------------------------------------- The recursive process for differentiating [step iii]{} to [step i]{} may be written in terms of the differentiation of the basic operation (or its transposed operation) $$\label{eq:pq} q := p \cdot M \in \K^{1\times n},$$ where $p$ and $q$ are row vectors of dimension $n$, and $M$ is an $n\times n$ matrix. We assume at this point (by construction of the recursion) that the column vectors $\partial p$ and $\partial q$ of derivatives of the determinant with respect to the entries of $p$ and $q$ are available. For instance, for differentiating [step iii]{}, we will consider the $\partial u_j$’s. We also assume that an $n \times n$ matrix $\partial M$, whose transpose gives the derivatives with respect to the $m_{ij}$’s, has been computed. Initially, for [step iii]{}, we will take $\partial B=0$. Following the lines of the previous section for obtaining (\[eq:diff4u\]) and (\[eq:diff4v\]), we see that differentiating (\[eq:pq\]) amounts to updating $\partial p$ and $\partial M$ according to $$\label{eq:diffpq} \left\{\begin{array}{l} \partial p := \partial p + M \cdot \partial q \in \K ^n,\\ \partial M := \partial M + \partial q \cdot p \in \K^{n\times n}. \end{array} \right.$$ Starting from the values of the $\partial u_j$’s computed with (\[eq:diff4u\]), and from $\partial B=0$, for the differentiation of [step iii]{}, (\[eq:diffpq\]) gives $$\label{eq:diff3} \left\{ \begin{array}{l} \partial u_{j-1} := \partial u_{j-1} + B \cdot \partial u_j,\\ \partial B := \partial B + \partial u_j \cdot u_{j-1}, ~j=s-1, \ldots, 1.
\end{array} \right.$$ For [step ii]{}, that is, $B:=A^r$, we show that the backward recursion leads to $$\label{eq:diffpow} \partial A := \sum _{k=1}^r A ^{r-k} \cdot \partial B \cdot A ^{k-1}.$$ Here, the notation $\partial A$ stands for the $n\times n$ matrix whose transpose gives the derivatives $\partial \Delta _{\text{\sc ii}}/{\partial a_{i,j}}$. We may show (\[eq:diffpow\]) by induction on $r$. For $r=1$, $\partial A = \partial B$ is true. If (\[eq:diffpow\]) is true for $r-1$, then let $C=A ^{r-1}$ and $B=CA$. Using (\[eq:diffpq\]), and overloading the notation $\partial A$, we have $$\left\{\begin{array}{l} \partial C = A \cdot \partial B \in \K ^{n \times n},\\ \partial A = \partial B \cdot C \in \K^{n\times n}. \end{array} \right.$$ Hence, using (\[eq:diffpow\]) for $r-1$, we establish that $$\begin{array}{ll} \partial A & = \partial A + \sum _{k=1}^{r-1} A ^{r-k-1} \cdot \partial C \cdot A ^{k-1},\\ & = \partial B \cdot C + \sum _{k=1}^{r-1} A ^{r-k-1} \cdot ( A \cdot \partial B) \cdot A ^{k-1}\\ & = \partial B \cdot A ^{r-1} + \sum _{k=1}^{r-1} A ^{r-k} \cdot \partial B \cdot A ^{k-1} = \sum _{k=1}^r A ^{r-k} \cdot \partial B \cdot A ^{k-1}. \end{array}$$ Any specific approach for computing $A ^r$ will lead to an associated program for computing $\partial A$. Let us look, in particular, at the case where [step ii]{} of Algorithm [Det]{} is implemented by repeated squaring, in essentially $\log_2 r$ matrix products. Consider the recursion $$\begin{array}{l} A_0:=A\\ \text{For~} k=1,\ldots,\log_2 r \text{~do~} A_{2^k}:= A_{2^{k-1}} \cdot A_{2^{k-1}}\\ B:=A_r \end{array}$$ that computes $B:= A ^r$.
The associated program for computing the derivatives is $$\label{eq:diffpowlog} \begin{array}{l} \partial A_r:= \partial B \\ \text{For~} k=\log_2 r,\ldots, 1 \text{~do~} \partial A _{2^{k-1}}:= A _{2^{k-1}} \cdot \partial A_{2^{k}} + \partial A_{2^{k}} \cdot A _{2^{k-1}} \\ \partial A:= \partial A_0, \end{array}$$ and costs essentially $2 \log_2 r$ matrix products. From the values of the $\partial v_i$’s computed with (\[eq:diff4v\]), we finally differentiate [step i]{}, and update $\partial A$ according to $$\label{eq:diff1} \left\{ \begin{array}{l} \partial v_{i-1} := \partial v_{i-1} + \partial v_i \cdot A,\\ \partial A := \partial A + v_{i-1}\cdot \partial v_i, ~i=r-1, \ldots, 1. \end{array} \right.$$ Now, $\partial A$ is the $n\times n$ matrix whose transpose gives the derivatives $\partial \Delta _{\text{\sc i}}/{\partial a_{i,j}} = \partial \Delta /{\partial a_{i,j}}$, hence from (\[eq:partial\]) we know that $A ^{*} = \partial A$. [step iii]{} and [step i]{} both cost essentially $r$ ($\approx s$) matrix times vector products. From (\[eq:diff3\]) and (\[eq:diff1\]) the differentiated steps both require $r$ matrix times vector products, and $2rn^2 +O(rn)$ additional operations in $\K$. The adjoint algorithm over a field {#sec:adjointK} ================================== We call [Adjoint]{} the algorithm obtained from the successive differentiations of Section \[sec:differentiation\]. Algorithm [Adjoint]{} is detailed below. We keep the notation of the previous sections. We use in addition $U \in \K^{s\times n}$ and $V \in \K^{n\times r}$ (resp. $\partial U \in \K^{n\times s}$ and $\partial V \in \K^{r\times n}$) for the right sides (resp. the left sides) of (\[eq:diff4u\]) and (\[eq:diff4v\]). The cost of [Adjoint]{} is dominated by [step iv]{}$^*$, which is the differentiation of the matrix power computation. As we have seen with (\[eq:diffpowlog\]), the number of operations is essentially twice that of Algorithm [Det]{}.
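The two ways of writing the derivative of the matrix power, the closed sum (\[eq:diffpow\]) and the reversed repeated-squaring recursion (\[eq:diffpowlog\]), can be compared on a random instance. The Python sketch below (ours) takes $r=4$ and checks that both produce the same matrix $\partial A$ from a given $\partial B$.

```python
from fractions import Fraction
from random import Random

rng = Random(1)
n, r = 3, 4                      # r a power of two
F = lambda: Fraction(rng.randint(-4, 4))
A = [[F() for _ in range(n)] for _ in range(n)]
dB = [[F() for _ in range(n)] for _ in range(n)]   # derivatives w.r.t. B = A^r

def mul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def power(X, k):
    R = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for _ in range(k):
        R = mul(R, X)
    return R

# closed form (eq:diffpow): dA = sum_{k=1}^{r} A^{r-k} . dB . A^{k-1}
dA_sum = [[Fraction(0)] * n for _ in range(n)]
for k in range(1, r + 1):
    dA_sum = add(dA_sum, mul(power(A, r - k), mul(dB, power(A, k - 1))))

# reversed repeated squaring (eq:diffpowlog)
logr = r.bit_length() - 1        # r = 2^logr
powers = [A]                     # A^(2^0), A^(2^1), ...
for _ in range(logr):
    powers.append(mul(powers[-1], powers[-1]))
dA_log = dB
for k in range(logr, 0, -1):
    P = powers[k - 1]            # A^(2^(k-1))
    dA_log = add(mul(P, dA_log), mul(dA_log, P))
```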
The code we give allows for an easy implementation. We note that if the product by $\det A$ is avoided in [step i]{}$^*$, then the algorithm computes the matrix inverse $A^{-1}$. We may put this into perspective with the algorithm given by @Ebe97. With $\tilde{{\mathcal K}}_u$ and ${\mathcal K}_v$ the Krylov matrices of (\[eq:defKu\]) and (\[eq:defKv\]), Eberly has proposed a processor-efficient inversion algorithm based on $$\label{eq:inverseeb} A ^ {-1}= {\mathcal K}_v H_A^{-1} \tilde{{\mathcal K}}_u.$$ To see whether a baby steps/giant steps version of (\[eq:inverseeb\]) would lead to an algorithm similar to [Adjoint]{} deserves further investigation. [rl]{}   \ Algorithm & ($\partial$[Det]{})\ [*Input:*]{} & $A \in \K ^{n \times n}$ non-singular, and the intermediary data of Algorithm [Det]{}\ & All the derivatives are initialized to zero\ & ------------------- ----------------------------------------------------------------------------------------- [step i]{}$^*$. [*/\* Requires the Hankel matrices $H$ and $H_A$, see (\[eq:diff5\]) \*/*]{} \[-0.16cm\] ${\partial \Delta_{\text{\sc v}}}/{\partial h_k} := \left( \sigma _{k-1} (H_A ^{-1}) - \sigma _k (H ^{-1})\right) \det A, ~0 \leq k \leq 2n-1$ [step ii]{}$^*$. [*/\* Requires the $u_j$’s and $v_i$’s, see (\[eq:diff4u\]) and (\[eq:diff4v\]) \*/*]{} \[-0.16cm\] $\partial U := V \cdot \partial H$ \[-0.16cm\] $\partial V := \partial H \cdot U$ [step iii]{}$^*$. [*/\* Requires $B=A ^r$, see (\[eq:diff3\]) \*/*]{} \[-0.16cm\] For $j=s-1, \ldots, 1$ do \[-0.16cm\] $\partial u_{j-1} := \partial u_{j-1} + B \cdot \partial u_j$ \[-0.16cm\] $\partial B := \partial B + \partial u_j \cdot u_{j-1}$ [step iv]{}$^*$. [*/\* Requires the powers of $A$, see (\[eq:diffpow\]) or (\[eq:diffpowlog\]) \*/*]{} \[-0.16cm\] $A ^* := \sum _{k=1}^r A ^{r-k} \cdot \partial B \cdot A ^{k-1}$ [step v]{}$^*$.
[*/\* See (\[eq:diff1\]) \*/*]{} \[-0.16cm\] For $i=r-1, \ldots, 1$ do \[-0.16cm\] $\partial v_{i-1} := \partial v_{i-1} + \partial v_i \cdot A$ \[-0.16cm\] $ A ^* := A ^* + v_{i-1}\cdot \partial v_i$ ------------------- ----------------------------------------------------------------------------------------- \ [*Output:*]{} & The adjoint matrix $A ^* \in \K^{n\times n}$.\    Application to computing the adjoint without divisions {#sec:nodiv} ====================================================== Now let $A$ be an $n\times n$ matrix over an abstract ring $\R$. Kaltofen’s algorithm for computing the determinant of $A$ without divisions applies Algorithm [Det]{} to a well-chosen univariate polynomial matrix $Z(z) = C + z (A-C)$ where $C \in {{\mathbb Z}}^{n \times n}$, with a dedicated choice of projections $u=\varphi \in {{\mathbb Z}}^{1\times n}$ and $v=\psi \in {{\mathbb Z}}^{n \times 1}$. The algorithm uses Strassen’s avoidance of divisions (see [@Str73; @Kal92]). Since the determinant of $Z$ is a polynomial of degree $n$ in $z$, the arithmetic operations over $\K$ in [Det]{} may be replaced by operations on power series in $\R [[z]]$ modulo $z^{n+1}$. Once the determinant of $Z(z)$ is computed, the evaluation $(\det Z)(1) = \det (C + 1 \times (A-C))$ gives the determinant of $A$. The choice of $C, \varphi$ and $\psi$ is such that, whenever a division by a truncated power series is performed, the constant coefficients are $\pm 1$. Therefore the algorithm necessitates no divisions. Note that, by construction of $Z(z)$, the constant terms of the power series involved when [Det]{} is called with inputs $Z(z), \varphi$ and $\psi$ are the intermediary values computed by [Det]{} with inputs $C, \varphi$ and $\psi$. The cost for computing the determinant of $A$ without divisions is then deduced as follows. In [step i]{} and [step ii]{} of Algorithm [Det]{} applied to $Z(z)$, the vector and matrix entries are polynomials of degree $O(\sqrt{n})$.
The cost of [step ii]{} dominates, and is $O(n^3 {\sf M}(\sqrt{n}) \log n)= \sO(n ^3 \sqrt{n})$ operations in $\R$. [step iii]{}, [iv]{}, and [v]{} cost $O(n^2 \sqrt{n})$ operations on power series modulo $z^{n+1}$, that is $O(n^2{\sf M}(n)\sqrt{n})$ operations in $\R$. Hence $\det Z(z)$ is computed in $\sO(n^3\sqrt{n})$ operations in $\R$, and $\det A$ is obtained with the same cost bound. A main property of Kaltofen’s approach (which also holds for the improved blocked version of @KaVi04-2), is that the scalar value $\det A$ is obtained via the computation of the polynomial value $\det Z (z)$. This property seems to be lost with the adjoint computation. We are going to see how Algorithm [Adjoint]{} applied to $Z(z)$ allows us to compute $A ^{*} \in \R ^{n\times n}$ in time $\sO(n^3\sqrt{n})$ operations in $\R$, but does not seem to allow the computation of $Z ^{*}(z) \in \R[z] ^{n\times n}$ with the same complexity estimate. Indeed, a key point in Kaltofen’s approach for reducing the overall complexity estimate, is to compute with small degree polynomials (degree $O(\sqrt{n})$) in [step i]{} and [step ii]{}. However, since the adjoint algorithm has a reversed flow, this point does not seem to be relevant for [Adjoint]{}, where polynomials of degree $n$ are involved from the beginning. Our approach for computing $A^*$ over $\R$ keeps the idea of running Algorithm [Adjoint]{} with input $Z(z)=C+z(A-C)$, such that $Z^*(z)$ has degree less than $n$, and gives $A^*=Z^*(1)$. In Section \[subsec:divifree\], we verify that the implementation using Proposition \[prop:computsigma\] needs no divisions. We then show in Section \[subsec:lazy\] how to establish the cost estimate $\sO(n^3\sqrt{n})$. The principle we follow is to start evaluating polynomials at $z=1$ as soon as computing with the entire polynomials is prohibitive.
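Strassen’s avoidance of divisions rests on the fact that dividing by a truncated power series with constant term $\pm 1$ reduces to ring operations: with $b_0 = 1$, the coefficients of $c = a/b$ satisfy $c_k = a_k - \sum_{m=1}^{k} b_m c_{k-m}$, which uses no divisions. A minimal Python sketch with integer coefficients (names ours):

```python
def series_mul(a, b, prec):
    # product of truncated series, coefficient lists a[0] + a[1]z + ...
    return [sum(a[m] * b[k - m] for m in range(k + 1)
                if m < len(a) and k - m < len(b)) for k in range(prec)]

def series_div_by_unit(a, b, prec):
    # quotient a/b mod z^prec when b[0] == 1: ring operations only
    assert b[0] == 1
    c = []
    for k in range(prec):
        ck = a[k] if k < len(a) else 0
        for m in range(1, min(k, len(b) - 1) + 1):
            ck -= b[m] * c[k - m]
        c.append(ck)
    return c

a = [1, 2, 3, 4]
b = [1, -1]                      # 1 - z
c = series_div_by_unit(a, b, 4)  # 1 + 3z + 6z^2 + 10z^3
```

Multiplying back, `series_mul(b, c, 4)` recovers the truncation of `a`, and no coefficient of `c` ever required a division.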
Division-free Hankel matrix inversion and anti-diagonal sums {#subsec:divifree}
------------------------------------------------------------

In Algorithm [Adjoint]{}, divisions may only occur during the anti-diagonal sums computation. We verify here that with the matrix $Z(z)$, and the special projections $\varphi \in {{\mathbb Z}}^{1 \times n},\psi \in {{\mathbb Z}}^{n \times 1}$, the approach described in the Appendix for computing the anti-diagonal sums requires no divisions. Equivalently, since we use Strassen’s avoidance of divisions, we verify that with the matrix $C$ and the projections $\varphi,\psi$, the approach necessitates no divisions. As we are going to see, this is a direct consequence of the construction of @Kal92. Here we let $h_k = \varphi C^k \psi$ for $0 \leq k \leq 2n-1$, $a(x)=x^{2n}$, and $b(x)=h_0 x^{2n-1}+h_1 x^{2n-2}+ \ldots + h_{2n-1}$. The extended Euclidean scheme with inputs $a$ and $b$ leads to a normal sequence, and after $n-1$ and $n$ steps of the scheme, we get (see [@Kal92 Sec.2]): $$\label{eq:euclidg} s(x)a(x) + t(x) b(x) = c(x), \text{with}\, \deg s= n-2, \deg t=n-1, \deg c =n,$$ and $$\label{eq:euclidf} \bar{s}(x)a(x) + \bar{t}(x) b(x) = \bar{c}(x), \text{with}\, \deg \bar{s}= n-1, \deg \bar{t}=n, \deg \bar{c}=n-1.$$ The polynomial $\bar{t}$ is such that $$\label{eq:bart} \bar{t}=\pm x^n + \text{intermediate~monomials~} + 1 = \pm f,$$ with $f$ the minimum polynomial of $\{h_k\}_{0\leq k \leq 2n-1}$. One may check, in particular, that the $n$ equations obtained by identifying the coefficients of degree $2n-1 \geq k \geq n$ in (\[eq:euclidf\]) give the linear system (\[eq:linsysf\]) that defines $f$. The polynomial $c$ also has leading coefficient $\pm 1$.
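As an illustration of how the cofactor $\bar{t}$ in the scheme above captures the minimum polynomial, the following sketch runs the extended Euclidean scheme on $a(x)=x^{2n}$ and $b(x)=h_0x^{2n-1}+\ldots+h_{2n-1}$ for a small, arbitrarily chosen sequence. It uses exact rational arithmetic rather than the division-free integer setting of @Kal92, and checks that the cofactor of $b$ obtained once the remainder degree drops below $n$ annihilates the sequence, as a multiple of $f$ must.

```python
from fractions import Fraction

def polydeg(p):
    # degree of a coefficient list (low degree first); -1 for the zero polynomial
    while p and p[-1] == 0:
        p = p[:-1]
    return len(p) - 1

def polysub(p, q):
    m = max(len(p), len(q))
    p = p + [Fraction(0)] * (m - len(p))
    q = q + [Fraction(0)] * (m - len(q))
    return [x - y for x, y in zip(p, q)]

def polymul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            out[i + j] += x * y
    return out

def polydivmod(p, q):
    # quotient and remainder of coefficient lists (low degree first)
    p = p[:]
    dq = polydeg(q)
    quo = [Fraction(0)] * max(polydeg(p) - dq + 1, 1)
    while polydeg(p) >= dq:
        d = polydeg(p)
        coef = p[d] / q[dq]
        quo[d - dq] = coef
        p = polysub(p, polymul([Fraction(0)] * (d - dq) + [coef], q))
    return quo, p

n = 4
h = [Fraction(v) for v in [3, 1, 4, 1, 5, 9, 2, 6]]  # arbitrary sequence h_0..h_{2n-1}

a = [Fraction(0)] * (2 * n) + [Fraction(1)]    # a(x) = x^{2n}
b = [h[2 * n - 1 - k] for k in range(2 * n)]   # b(x) = h_0 x^{2n-1} + ... + h_{2n-1}

# extended Euclid on (a, b), tracking only the cofactor of b
r0, r1 = a, b
t0, t1 = [Fraction(0)], [Fraction(1)]
while polydeg(r1) >= n:
    q, r = polydivmod(r0, r1)
    r0, r1 = r1, r
    t0, t1 = t1, polysub(t0, polymul(q, t1))
while len(t1) > 1 and t1[-1] == 0:
    t1.pop()

# the cofactor has degree at most n and annihilates the sequence,
# i.e. it is a multiple of the minimum polynomial f
assert polydeg(t1) <= n
for m in range(n):
    assert sum(t1[k] * h[m + k] for k in range(len(t1))) == 0
print("cofactor annihilates the sequence")
```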
By identifying the coefficients of degree $2n-1 \geq k \geq n$ in (\[eq:euclidg\]), we obtain: $$\label{eq:linsysg} H \left[ \begin{array}{c} t_0 \\ t_1 \\ \vdots \\ t_{n-1}\end{array} \right] = \left[ \begin{array}{cccc} h_0 & h_1 & \ldots & h_{n-1} \\ h_1 & h_{2} & \ldots & h_{n} \\ \vdots & \ddots & \vdots & \vdots\\ h_{n-1} & \ldots & \ldots & h_{2n-2} \end{array} \right] \left[ \begin{array}{c} t_0 \\ t_1 \\ \vdots \\ t_{n-1}\end{array} \right] = \pm \left[ \begin{array}{c} 0 \\ 0 \\ \vdots \\ 1\end{array} \right].$$ Therefore $t=\pm g$ with $g$ the polynomial needed for computing (\[eq:sigmaklowH\])-(\[eq:sigmakhighHA\]), in addition to $f$. Since $C, \varphi$, and $\psi$ are such that the extended Euclidean scheme necessitates no divisions (see  [@Kal92 Sec.2]), we see that both $f$ and $g$ may be computed with no divisions. The only remaining division in the algorithm for Proposition \[prop:computsigma\] is at (\[eq:lastcolHA\]). From (\[eq:bart\]), this division is by $f_0=1$. Lazy polynomial evaluation and division-free adjoint computation {#subsec:lazy} ---------------------------------------------------------------- We run Algorithm [Adjoint]{} with input $Z(z) \in \R[z]^{n\times n}$, and start with operations on truncated power series modulo $z^{n+1}$. We assume that Algorithm [Det]{} has been executed, and that its intermediary results have been stored. Using Proposition \[prop:computsigma\] and previous section, [step i]{}$^*$ requires $O({\sf G}(n){\sf M}(n))= \sO(n^2)$ operations in $\R$ for computing $\partial H(z)$ of degree $n$ in $\R [z]^{r\times s}$. [step ii]{}$^*$, [step iii]{}$^*$, and [v]{}$^*$ cost $O(n^2 \sqrt{n})$ operations in $\K$, hence, taking into account the power series operations, this gives $O(n^2 {\sf M}(n) \sqrt{n}) = \sO(n^3 \sqrt{n})$ operations in $\R$ for the division-free version. 
The cost analysis of [step iv]{}$^*$, using (\[eq:diffpowlog\]) over power series modulo $z^n$, leads to $\log_2 r$ matrix products, hence to the time bound $\sO(n^4)$, greater than the target estimate $\sO(n^3\sqrt{n})$. As noticed previously, [step iii]{} of Algorithm [Det]{} only involves polynomials of degree $O(\sqrt{n})$, while the reversed program for [step iv]{}$^*$ of Algorithm [Adjoint]{}, relies on $\partial B(z)$ whose degree is $n$. Since only $Z^*(1)=A^*$ is needed, our solution, for restricting the cost to $\sO(n^3\sqrt{n})$, is to start evaluating at $z=1$ during [step iv]{}$^*$. However, since power series multiplications are done modulo $z^n$, this evaluation must be lazy. The fact that matrices $Z^k(z)$, $1\leq k \leq r-1$, of degree at most $r-1$ are involved, enables the following. Let $a$ and $c$ be two polynomials such that $\deg a + \deg c = r-1$ in $\R[z]$, and let $b$ be of degree $n \geq r-2$ in $\R[z]$. Considering the highest degree part of $b$, and evaluating the lowest degree part at $z=1$, we define $b_H(z) = b_nz^{r-2} + \ldots + b_{n-r+2} \in \R[z]$ and $b_L = b_{n-r+1} + \ldots +b_0 \in \R$. We then remark that $$\label{eq:lazyeval} \begin{array}{ll} \left( a(z)b(z)c(z) \bmod z^{n+1}\right)(1) &= \left( a(z)(b_H(z)z^{n-r+2} +b_L)c(z) \bmod z^{n+1}\right)(1), \\ &= \left( a(z)b_H(z)c(z) \bmod z^{r-1}\right)(1) + \left( a(z)b_Lc(z)\right)(1). \end{array}$$ For modifying [step iv]{}$^*$, we follow the definition of $b_H$ and $b_L$, and first compute $\partial B_H(z) \in \R[z]^{n \times n}$ of degree $r-2$, and $\partial B_L \in \R^{n\times n}$. 
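Identity (\[eq:lazyeval\]) can be checked directly on scalar polynomials. The sketch below uses arbitrary illustrative coefficients (lists are low degree first) and verifies that evaluating $abc \bmod z^{n+1}$ at $z=1$ agrees with the short product against $b_H$ modulo $z^{r-1}$, evaluated at 1, plus the term $a(1)\,b_L\,c(1)$.

```python
def polymul(p, q):
    # convolution of coefficient lists (low degree first)
    out = [0] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            out[i + j] += x * y
    return out

def trunc_eval1(p, m):
    # (p mod z^m) evaluated at z = 1: sum of the first m coefficients
    return sum(p[:m])

n, r = 8, 4
a = [2, -1, 3]              # deg a = 2
c = [1, 5]                  # deg c = 1, so deg a + deg c = r - 1
b = list(range(1, n + 2))   # deg b = n

bH = b[n - r + 2:]          # high part: b_{n-r+2}, ..., b_n (degree r - 2)
bL = sum(b[:n - r + 2])     # low part evaluated at z = 1

lhs = trunc_eval1(polymul(polymul(a, b), c), n + 1)
rhs = trunc_eval1(polymul(polymul(a, bH), c), r - 1) + sum(a) * bL * sum(c)
assert lhs == rhs
print("lazy evaluation identity holds:", lhs, "==", rhs)
```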
Applying (\[eq:lazyeval\]), the sum $\sum _{k=1}^r Z ^{r-k}(z) \cdot \partial B (z) \cdot Z^{k-1}(z)$ may then be evaluated at $z=1$ by the program $$\label{eq:modifstep4} \begin{array}{ll} \text{Modified {\sc step iv}}^*.~~ & Z^* := \left( \sum _{k=1}^r Z ^{r-k}(z) \cdot \partial B_H(z) \cdot Z ^{k-1}(z) \bmod z^{r-1} \right) (1)\\ & Z^* := Z^* + \left( \sum _{k=1}^r Z ^{r-k}(z) \cdot \partial B_L \cdot Z ^{k-1}(z) \right) (1), \end{array}$$ in $\sO(n^3 {\sf M}(r)) = \sO(n^3\sqrt{n})$ operations in $\R$. This leads to an intermediary value $Z^* \in \R ^{n\times n}$ before [step v]{}$^*$. The value is updated at [step v]{}$^*$ with power series operations, and a final evaluation at $z=1$ in time $\sO(n^2r {\sf M}(n))=\sO(n^3\sqrt{n})$. Since only [step iv]{}$^*$ has been modified, we obtain the following result. Let $A \in \R ^{n \times n}$. If Algorithm [Adjoint]{}, modified according to (\[eq:modifstep4\]), is executed with input $Z(z)=C+z(A-C)$, power series operations modulo $z^{n+1}$, and a final evaluation at $z=1$, then the matrix adjoint $A^*$ is computed in $\sO(n^3\sqrt{n})$ operations in $\R$.

Concluding remarks
==================

We have developed an explicit algorithm for computing the matrix adjoint using only ring arithmetic operations. The algorithm has complexity estimate $\sO(n^{3.5})$. It represents a practical alternative to previously existing solutions for the problem, which rely on automatic differentiation of a determinant algorithm. Our description of the algorithm allows direct implementations. It should help in understanding how the adjoint is computed using Kaltofen’s baby steps/giant steps construction. Still, a full mathematical explanation remains to be investigated.
Our work remains to be generalized to the block algorithm of @KaVi04-2 (with the use of fast matrix multiplication algorithms), whose complexity estimate is currently the best known for computing the determinant, and the adjoint without divisions.\
[**Acknowledgements.**]{} We thank Erich Kaltofen, who brought reference [@OWB71] to our attention.

Appendix: Hankel matrix inversion and anti-diagonal sums {#appendix-hankel-matrix-inversion-and-anti-diagonal-sums .unnumbered}
========================================================

For implementing (\[eq:diff5\]), we study the computation of the anti-diagonal sums $\sigma _k$ of $H^{-1}$ and $H_A^{-1}$. We first use the formula of @LaChCa90 for Hankel matrix inversion. The minimum polynomial $f$ of $\{h_k\}_{0\leq k \leq 2n-1}$ is $f(x)=f_0 + f_1 x + \ldots + f_{n-1} x^{n-1} + x^n$, and satisfies (\[eq:linsysf\]). Let the last column of $H^{-1}$ be given by $$\label{eq:linsyslastcol} H \,[g_0, g_1, \ldots, g_{n-1}]^T = [0, \ldots, 0, 1]^T \in \K ^n.$$ Applying [@LaChCa90 Theorem 3.1] with (\[eq:linsysf\]) and (\[eq:linsyslastcol\]), we know that $$\label{eq:invH} H^{-1}= \left[ \begin{array}{cccc} f_{1} & \kern-3pt\ldots & f_{n-1} & \kern2pt 1 \\ \vdots & \kern-1pt{\mathinner{ \mkern2mu\raise1pt\hbox{.} \mkern2mu\raise4pt\hbox{.} \mkern1mu\raise7pt\vbox{\kern7pt\hbox{.}} \mkern1mu}}& \kern4pt{\mathinner{ \mkern2mu\raise1pt\hbox{.} \mkern2mu\raise4pt\hbox{.} \mkern1mu\raise7pt\vbox{\kern7pt\hbox{.}} \mkern1mu}}& \\ f_{n-1} & \kern1pt{\mathinner{ \mkern2mu\raise1pt\hbox{.} \mkern2mu\raise4pt\hbox{.} \mkern1mu\raise7pt\vbox{\kern7pt\hbox{.}} \mkern1mu}}& 0 & \\ 1& & & \end{array} \right]\!\!
\left[ \begin{array}{ccc} g_{0} & \ldots & g_{n-1} \\ & \ddots &\vdots \\ 0 & & g_0 \end{array} \right] - \left[ \begin{array}{cccc} g_{1} & \kern-3pt\ldots & g_{n-1} & \kern2pt 0 \\ \vdots & \kern-1pt{\mathinner{ \mkern2mu\raise1pt\hbox{.} \mkern2mu\raise4pt\hbox{.} \mkern1mu\raise7pt\vbox{\kern7pt\hbox{.}} \mkern1mu}}& \kern4pt{\mathinner{ \mkern2mu\raise1pt\hbox{.} \mkern2mu\raise4pt\hbox{.} \mkern1mu\raise7pt\vbox{\kern7pt\hbox{.}} \mkern1mu}}& \\ g_{n-1} & \kern1pt{\mathinner{ \mkern2mu\raise1pt\hbox{.} \mkern2mu\raise4pt\hbox{.} \mkern1mu\raise7pt\vbox{\kern7pt\hbox{.}} \mkern1mu}}& 0 & \\ 0& & & \end{array} \right]\!\! \left[ \begin{array}{ccc} f_{0} & \ldots & f_{n-1} \\ & \ddots &\vdots \\ 0& & f_0 \end{array} \right].$$ For deriving an analogous formula for $H_A ^{-1}$, using the notations of Section \[subsec:constantterm\], we first recall that $H=\tilde{{\mathcal K}}_u {\mathcal K}_v$ and $H_A = \tilde{{\mathcal K}}_u A {\mathcal K}_v$. Multiplying (\[eq:linsysf\]) on the left by $\tilde{{\mathcal K}}_u A \tilde{{\mathcal K}}_u^{-1}$ gives $$\label{eq:linsysHA} H_A \,[ f_0, f_1, \ldots, f_{n-1}]^T = - [h_{n+1}, h_{n+2}, \ldots, h_{2n}]^T.$$ We also notice that $$H_A H ^{-1}= \left( {\mathcal K}_u ^{-1} A ^T {\mathcal K}_u\right)^T,$$ and, using the action of $A ^T$ on the vectors $u ^T, \ldots, (A ^T)^{n-2}u ^T$, we check that $H_A H ^{-1}$ is the companion matrix $$H_A H ^{-1} = \left[ \begin{array}{cccc} 0 & 1 & & 0\\ \vdots & & \kern4pt\ddots & \\ 0 & \ldots & 0 & \kern8pt 1 \\ -f_0 & -f_1 & \ldots & -f_{n-1} \end{array} \right].$$ Hence the last column $[g_0^*, g_1 ^*,\ldots, g_{n-1}^*]$ of $H_A ^{-1}$ is the first column of $H^{-1}$ divided by $-f_0$. 
Using (\[eq:invH\]) for determining the first column of $H ^{-1}$, we get $$\label{eq:lastcolHA} [g^*_0, g^*_1, \ldots, g^*_{n-1}]^T = -\frac{g_0}{f_0}[f_1, \ldots, f_{n-1},1]^T +[g_1, \ldots, g_{n-1},0]^T.$$ Applying [@LaChCa90 Theorem3.1], now with (\[eq:linsysHA\]) and (\[eq:lastcolHA\]), we obtain $$\label{eq:invHA} H_A^{-1}= \left[ \begin{array}{cccc} f_{1} & \kern-3pt\ldots & f_{n-1} & \kern2pt 1 \\ \vdots & \kern-1pt{\mathinner{ \mkern2mu\raise1pt\hbox{.} \mkern2mu\raise4pt\hbox{.} \mkern1mu\raise7pt\vbox{\kern7pt\hbox{.}} \mkern1mu}}& \kern4pt{\mathinner{ \mkern2mu\raise1pt\hbox{.} \mkern2mu\raise4pt\hbox{.} \mkern1mu\raise7pt\vbox{\kern7pt\hbox{.}} \mkern1mu}}& \\ f_{n-1} & \kern1pt{\mathinner{ \mkern2mu\raise1pt\hbox{.} \mkern2mu\raise4pt\hbox{.} \mkern1mu\raise7pt\vbox{\kern7pt\hbox{.}} \mkern1mu}}& 0 & \\ 1& & & \end{array} \right]\!\! \left[ \begin{array}{ccc} g^*_{0} & \kern-1pt\ldots & \kern-1pt g^*_{n-1} \\ & \kern-1pt\ddots &\kern-1pt\vdots \\ 0& & \kern-1pt g^*_0 \end{array} \right] - \left[ \begin{array}{cccc} g^*_{1} & \kern-3pt\ldots & g^*_{n-1} & \kern2pt 0 \\ \vdots & \kern-1pt{\mathinner{ \mkern2mu\raise1pt\hbox{.} \mkern2mu\raise4pt\hbox{.} \mkern1mu\raise7pt\vbox{\kern7pt\hbox{.}} \mkern1mu}}& \kern4pt{\mathinner{ \mkern2mu\raise1pt\hbox{.} \mkern2mu\raise4pt\hbox{.} \mkern1mu\raise7pt\vbox{\kern7pt\hbox{.}} \mkern1mu}}& \\ g^*_{n-1} & \kern1pt{\mathinner{ \mkern2mu\raise1pt\hbox{.} \mkern2mu\raise4pt\hbox{.} \mkern1mu\raise7pt\vbox{\kern7pt\hbox{.}} \mkern1mu}}& 0 & \\ 0& & & \end{array} \right]\!\! \left[ \begin{array}{ccc} f_{0} & \ldots & f_{n-1} \\ & \ddots &\vdots \\ 0 & & f_0 \end{array} \right].$$ From (\[eq:invH\]) and (\[eq:invHA\]) we see that computing $\sigma _{k} (H ^{-1})$ and $\sigma _{k-1} (H_A ^{-1})$, for $0\leq k \leq 2n-1$, reduces to computing the anti-diagonal sums for a product of triangular Hankel times triangular Toeplitz matrices. 
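Formula (\[eq:invH\]) is easy to test numerically. The sketch below is a floating-point illustration: it takes the Hankel data $h_k=1/(k+1)$ (a Hilbert-type matrix, chosen here only because it is guaranteed nonsingular), solves (\[eq:linsysf\]) and (\[eq:linsyslastcol\]) for $f$ and $g$, and rebuilds $H^{-1}$ from the two products of triangular Hankel and triangular Toeplitz matrices.

```python
import numpy as np

n = 4
h = np.array([1.0 / (k + 1) for k in range(2 * n)])  # Hankel data h_0..h_{2n-1}
H = np.array([[h[i + j] for j in range(n)] for i in range(n)])

# f: H [f_0..f_{n-1}]^T = -[h_n..h_{2n-1}]^T, completed by the monic f_n = 1
f = np.linalg.solve(H, -h[n:2 * n])
fn = np.append(f, 1.0)
# g: last column of H^{-1}
e = np.zeros(n)
e[-1] = 1.0
g = np.linalg.solve(H, e)

def tri_hankel(v):
    # anti-lower-triangular Hankel factor: entry (i, j) = v[i + j + 1] when defined
    return np.array([[v[i + j + 1] if i + j + 1 < len(v) else 0.0
                      for j in range(n)] for i in range(n)])

def tri_toeplitz(v):
    # upper-triangular Toeplitz factor: entry (i, j) = v[j - i] for j >= i
    return np.array([[v[j - i] if j >= i else 0.0
                      for j in range(n)] for i in range(n)])

Hinv = tri_hankel(fn) @ tri_toeplitz(g) - tri_hankel(g) @ tri_toeplitz(fn)
assert np.allclose(Hinv, np.linalg.inv(H))
print("structured formula reproduces the inverse")
```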
Let $$M = LR = \left[ \begin{array}{cccc} l_{0} & l_1 & \ldots & l_{n-1} \\ l_1 & \kern11pt{\mathinner{ \mkern2mu\raise1pt\hbox{.} \mkern2mu\raise4pt\hbox{.} \mkern1mu\raise7pt\vbox{\kern7pt\hbox{.}} \mkern1mu}}& \kern4pt{\mathinner{ \mkern2mu\raise1pt\hbox{.} \mkern2mu\raise4pt\hbox{.} \mkern1mu\raise7pt\vbox{\kern7pt\hbox{.}} \mkern1mu}}& \\ \vdots & \kern1pt{\mathinner{ \mkern2mu\raise1pt\hbox{.} \mkern2mu\raise4pt\hbox{.} \mkern1mu\raise7pt\vbox{\kern7pt\hbox{.}} \mkern1mu}}& 0 & \\ l_{n-1} & & & \end{array} \right] \left[ \begin{array}{cccc} r_{0} & r_1 & \ldots & \kern8pt r_{n-1} \\ & \ddots & \kern4pt\ddots &\kern8pt r_{n-2} \\ &0 & \kern8pt\ddots & \vdots \\ & & & \kern4pt r_0 \end{array} \right].$$ We have $$\label{eq:tmpm1} m_{i,j} = \sum _{s =i-1}^{i+j-2} l_{s} r_{i+j-s -2}, ~1\leq i+j-1 \leq n,$$ and $$\label{eq:tmpm2} m_{i,j} = \sum _{s =i-1}^{n-1} l_{s} r_{i+j-s -2}, ~n \leq i+j-1 \leq 2n-1.$$ For $0 \leq k \leq 2n-2$, $\sigma _k(M)$ is defined by summing the $m_{i,j}$’s such that $i+j-2=k$. 
Using (\[eq:tmpm1\]) we obtain $$\begin{array}{ll} \sigma _k (M) & = \sum _{i=1}^{k+1} m_{i,k-i+2} = \sum _{i=1}^{k+1} \sum _{s=i-1}^{k} l_s r_{k-s}, \\ & = \sum _{s=0}^{k} (s+1) l_s r_{k-s}, ~0 \leq k \leq n-1, \end{array}$$ hence $$\label{eq:prodsigma1} (\sum _{s=0}^{n-1} l_s x ^{s+1})' (\sum _{s=0}^{n-1} r_s x ^{s}) \, \bmod x^n = \sum _{k=0}^{n-1} \sigma _k(M) \,x^k.$$ In the same way, using (\[eq:tmpm2\]) with $\bar{k}=k-n+2$, we have $$\begin{array}{ll} \sigma _k (M) & = \sum _{i=1}^{n-\bar{k}+1} m_{i+\bar{k}-1,n-i+1} = \sum _{i=1}^{n-\bar{k}+1} \sum _{s=i}^{n-\bar{k}+1} l_{s+\bar{k}-2} r_{n-s}, \\ & = \sum _{s=\bar{k}-1}^{n-1} (s+n-k) \,l_{s} r_{k-s}, ~n-1 \leq k \leq 2n-2, \end{array}$$ and $$\label{eq:prodsigma2} (\sum _{s=1}^{n} r_{n-s} x ^{s})' (\sum _{s=0}^{n-1} l_{n-s-1} x ^{s}) \, \bmod x^n = \sum _{k=0}^{n-1} \sigma _{2n-k-2}(M) \,x^k.$$ It remains to apply (\[eq:prodsigma1\]) and (\[eq:prodsigma2\]) to the structured matrix products in (\[eq:invH\]) and (\[eq:invHA\]), for computing the $\sigma _k(H ^{-1})$ and $\sigma _k(H_A ^{-1})$’s. Together with the minimum polynomial $f=f_0 + \ldots + f_{n-1} x^{n-1} + x^n$, let $g=g_0 + \ldots + g_{n-1} x^{n-1}$ (see (\[eq:linsyslastcol\])), and $g^*=g^*_0 + \ldots + g^*_{n-1} x^{n-1}$ (see (\[eq:lastcolHA\])).
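Identity (\[eq:prodsigma1\]) can be verified on a small example with arbitrary coefficients: form $M=LR$, sum its anti-diagonals directly, and compare with the coefficients of $(\sum_s l_s x^{s+1})'(\sum_s r_s x^s) \bmod x^n$.

```python
n = 5
l = [3, 1, 4, 1, 5]   # l_0..l_{n-1}, arbitrary
r = [2, 7, 1, 8, 2]   # r_0..r_{n-1}, arbitrary

L = [[l[i + j] if i + j < n else 0 for j in range(n)] for i in range(n)]
R = [[r[j - i] if j >= i else 0 for j in range(n)] for i in range(n)]
M = [[sum(L[i][s] * R[s][j] for s in range(n)) for j in range(n)]
     for i in range(n)]

# sigma_k(M): sum over the k-th anti-diagonal (entries with i + j = k, 0-indexed)
sigma = [sum(M[i][k - i] for i in range(n) if 0 <= k - i < n)
         for k in range(2 * n - 1)]

# coefficients of (sum_s l_s x^{s+1})' (sum_s r_s x^s) mod x^n:
# the derivative gives sum_s (s+1) l_s x^s, then convolve with r
dl = [(s + 1) * l[s] for s in range(n)]
coeffs = [sum(dl[s] * r[k - s] for s in range(k + 1)) for k in range(n)]
assert coeffs == sigma[:n]
print("derivative-product coefficients match the anti-diagonal sums")
```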
We may now combine, respectively, (\[eq:invH\]) and (\[eq:invHA\]) with (\[eq:prodsigma1\]), to obtain $$\label{eq:sigmaklowH} f'g-g'f \bmod x^n = \sum _{k=0}^{n-1} \sigma _k(H ^{-1}) \,x^k,$$ and $$\label{eq:sigmaklowHA} f'g^*-(g^*)'f \bmod x^n = \sum _{k=0}^{n-1} \sigma _k(H_A ^{-1}) \,x^k.$$ Defining also $\mbox{rev}(f)=1 + f_{n-1} x + \ldots + f_0 x^n$, $\mbox{rev}(g)= g_{n-1} x + \ldots + g_0 x^n$, and $\mbox{rev}(g^*)= g^*_{n-1} x + \ldots + g^*_0 x^n$, the combination of, respectively, (\[eq:invH\]) and (\[eq:invHA\]) with (\[eq:prodsigma2\]) leads to $$\label{eq:sigmakhighH} \mbox{rev}(g)'\mbox{rev}(f) - \mbox{rev}(f)'\mbox{rev}(g) \bmod x^n = \sum _{k=0}^{n-1} \sigma _{2n-k-2}(H^{-1}) \,x^k,$$ and $$\label{eq:sigmakhighHA} \mbox{rev}(g^*)'\mbox{rev}(f) - \mbox{rev}(f)'\mbox{rev}(g^*) \bmod x^n = \sum _{k=0}^{n-1} \sigma _{2n-k-2}(H_A^{-1}) \,x^k.$$ \[prop:computsigma\] Assume that the minimum polynomial $f$ and the Hankel matrices $H$ and $H_A$ are given. The anti-diagonal sums $\sigma _{k} (H ^{-1})$ and $\sigma _k (H_A ^{-1})$, for $0 \leq k \leq 2n-1$, can be computed in ${\sf G}(n)+O({\sf M}(n))$ operations in $\K$. Using the approach of @BGY80 we know that computing the last column of $H^{-1}$ reduces to an extended Euclidean problem of degree $2n$. Hence the polynomial $g$ is computed in ${\sf G}(n)+O(n)$ operations. From there, $g^*$ is computed using (\[eq:lastcolHA\]). Then, applying (\[eq:sigmaklowH\])-(\[eq:sigmakhighHA\]) leads to the cost $O({\sf M}(n))$. Baur, W., Strassen, V., 1983. The complexity of partial derivatives. Theor. Comp. Sc. 22, 317–330. Brent, R., Gustavson, F., Yun, D., 1980. [Fast solution of Toeplitz systems of equations and computation of Padé approximations]{}. Journal of Algorithms 1, 259–295. B[ü]{}rgisser, P., Clausen, M., Shokrollahi, M., 1997. [Algebraic Complexity Theory]{}. Volume 315, Grundlehren der mathematischen Wissenschaften. Springer-Verlag.
Cantor, D., Kaltofen, E., 1991. On fast multiplication of polynomials over arbitrary algebras. Acta Informatica 28 (7), 693–701. Chen, L., Eberly, W., Kaltofen, E., Saunders, B., Turner, W., Villard, G., 2002. [Efficient matrix preconditioners for black box linear algebra]{}. Linear Algebra and its Applications 343-344, 119–146. Coppersmith, D., Winograd, S., 1990. Matrix multiplication via arithmetic progressions. J. of Symbolic Computation 9 (3), 251–280. Eberly, W., Jul 1997. Processor-efficient parallel matrix inversion over abstract fields: two extensions. In: Proc. Second International Symposium on Parallel Symbolic Computation, Maui, Hawaii, USA. ACM Press, pp. 38–45. Eberly, W., Giesbrecht, M., Villard, G., Nov. 2000. [Computing the determinant and Smith form of an integer matrix]{}. In: [The 41st Annual IEEE Symposium on Foundations of Computer Science, Redondo Beach, CA]{}. IEEE Computer Society Press, pp. 675–685. [von zur Gathen]{}, J., Gerhard, J., 1999. Modern Computer Algebra. Cambridge University Press. Jeannerod, C., Villard, G., 2006. Asymptotically fast polynomial matrix algorithms for multivariable systems. Int. J. Control 79 (11), 1359–1367. Kaltofen, E., Jul. 1992. [On computing determinants without divisions]{}. In: International Symposium on Symbolic and Algebraic Computation, Berkeley, California USA. ACM Press, pp. 342–349. Kaltofen, E., 2000. Challenges of symbolic computation: my favorite open problems. J. of Symbolic Computation 29 (6), 891–919. Kaltofen, E., Pan, V., 1991. Processor efficient parallel solution of linear systems over an abstract field. In: Proc. 3rd Annual ACM Symposium on Parallel Algorithms and Architecture. ACM-Press, pp. 180–191. Kaltofen, E., Saunders, B., 1991. On [Wiedemann’s]{} method of solving sparse linear systems. In: Proc. AAECC-9. LNCS 539, Springer Verlag. pp. 29–38. Kaltofen, E., Villard, G., 2004. On the complexity of computing determinants. Computational Complexity 13, 91–130. Knuth, D., 1970. 
The analysis of algorithms. In: Proc. International Congress of Mathematicians, Nice, France. Vol. 3. pp. 269–274. Labahn, G., Choi, D., Cabay, S., 1990. [The inverses of block Hankel and block Toeplitz matrices]{}. SIAM J. Comput. 19 (1), 98–123. Linnainmaa, S., 1970. [The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors (in Finnish)]{}. Master’s thesis, University of Helsinki, Dpt of Computer Science. Linnainmaa, S., 1976. [Taylor expansion of the accumulated rounding errors]{}. BIT 16, 146–160. Moenck, R., 1973. [Fast computation of Gcds]{}. In: 5th ACM Symp. Theory Comp. pp. 142–151. Morgenstern, J., 1985. [How to compute fast a function and all its derivatives, a variation on the theorem of Baur-Strassen]{}. ACM SIGACT News 16, 60–62. Ostrowski, G. M., Wolin, J. M., Borisow, W. W., 1971. [Ü]{}ber die [Berechnung]{} von [Ableitungen]{} (in [German]{}). Wissenschaftliche Zeitschrift der Technischen Hochschule [für]{} Chemie, Leuna-Merseburg 13 (4), 382–384. Sch[ö]{}nhage, A., 1971. [Schnelle Berechnung von Kettenbruchentwicklungen]{}. Acta Informatica 1, 139–144. Storjohann, A., 2003. High-order lifting and integrality certification. Journal of Symbolic Computation 36 (3-4), 613–648, special issue International Symposium on Symbolic and Algebraic Computation (ISSAC’2002). Guest editors: M. Giusti & L. M. Pardo. Storjohann, A., 2005. The shifted number system for fast linear algebra on integer matrices. Journal of Complexity 21 (4), 609–650. Storjohann, A., Jul. 2008. [On the complexity of inverting integer and polynomial matrices]{}. Preprint D.R. Cheriton School of Computer Science, U. Waterloo, Ontario, Canada. Strassen, V., 1973. [Vermeidung von Divisionen]{}. J. Reine Angew. Math. 264, 182–202. Wiedemann, D., 1986. [Solving sparse linear equations over finite fields]{}. IEEE Trans. Inform. Theory IT-32, 54–62.
[^1]: This research was partly supported by the French National Research Agency, ANR Gecko. [^2]: We refer for instance to <http://www.autodiff.org>
---
abstract: 'It has been known that epidemic outbreaks in the SIR model on networks are described by phase transitions. Despite the similarity with percolation transitions, whether an epidemic outbreak occurs or not cannot be predicted with probability one in the thermodynamic limit. We elucidate its mechanism by deriving a simple Langevin equation that captures an essential aspect of the phenomenon. We also calculate the probability of epidemic outbreaks near the transition point.'
author:
- 'Junya Iwai${}^1$ and Shin-ichi Sasa${}^2$'
title: Intrinsic Unpredictability of Epidemic Outbreaks on Networks
---

Introduction
============

We start with the following question: How can it be determined whether an epidemic outbreak has occurred? Obviously, this is hard to answer, because an accurate model of epidemic spread in real societies, with their complicated and heterogeneous human-to-human contacts, cannot be constructed. Then, is it possible to predict the outbreak for a simple mathematical model? Even in this case, the manner of the early spread of disease may significantly influence states that manifest after a sufficiently long time. For example, it seems reasonable to conjecture that whether a single infected individual with a very high infection rate causes an outbreak may depend on the number of people infected by the individual, which is essentially stochastic. In the present paper, we attempt to formulate this conjecture. Specifically, we study the stochastic SIR model as the simplest epidemic model, where an edge in the network represents a human-to-human contact and the infection rate $\lambda$ (the infection probability per unit time in each edge) is a parameter of the SIR model (see e.g. Ref. [@allen2008introduction] for an introduction to the stochastic SIR model; see also Refs. [@boccaletti2006complex; @RevModPhys.81.591] for related social dynamics on complex networks).
The SIR model may be defined for well-mixed cases [@bailey1950simple; @bailey1953total; @metz1978epidemic; @martin1998final; @PhysRevE.76.010901; @PhysRevE.86.062103], homogeneous networks [@diekmann1998deterministic; @PhysRevE.64.050901; @PhysRevE.66.016128; @lanvcic2011phase; @bohman2012sir; @moreno2002epidemic], and scale-free networks [@moreno2002epidemic; @PhysRevLett.86.3200; @PhysRevE.64.066112; @gallos2003distribution]. A remarkable phenomenon is that when $\lambda$ exceeds a critical value ${\lambda_{\rm c}}$, a disease spreads to macroscopic scales from a single infected individual, which corresponds to an epidemic outbreak. This was found in well-mixed cases and random graphs, but ${\lambda_{\rm c}}=0$ for scale-free networks. That is, epidemic outbreaks are described as phase transition phenomena. In addition to its theoretical interest, the SIR model on networks has recently been studied to identify influential spreaders [@kitsak2010identification] and to determine better immunization strategies [@PhysRevLett.91.247901; @PhysRevLett.101.058701]. Although the phase transition in the SIR model may be a sort of percolation transition, its properties differ from those of standard percolation models. In the SIR model exhibiting the phase transition, the order parameter characterizing it may be the fraction of the infected population, which is denoted by $\rho$. Indeed, $\rho=0$ in the non-outbreak phase ($\lambda<{\lambda_{\rm c}}$), whereas the expectation of $\rho$ becomes continuously non-zero from $0$ when $\lambda > {\lambda_{\rm c}}$. This phenomenon is in accordance with the standard percolation transition. However, whereas the order parameter in the percolated phase, e.g. the fraction of the largest cluster, takes a definite value with probability one in the thermodynamic limit, the fraction of the infected population in the SIR model is not uniquely determined even in the thermodynamic limit.
In fact, it has been reported that the distribution function of the order parameter in SIR models with finite sizes shows two peaks at $\rho=0$ and $\rho=\rho_*$ for well-mixed cases [@bailey1953total; @metz1978epidemic; @martin1998final; @PhysRevE.76.010901], homogeneous networks [@diekmann1998deterministic; @lanvcic2011phase; @PhysRevE.64.050901], and scale-free networks [@gallos2003distribution]. Mathematically, the probability density of $\rho$ in the thermodynamic limit may be expressed as $$P(\rho;\lambda)= (1-q(\lambda)) \delta(\rho) +q(\lambda) \delta(\rho-\rho_*), \label{goal}$$ where $q=0$ for $\lambda \le {\lambda_{\rm c}}$ and $q\not = 0$ for $\lambda > {\lambda_{\rm c}}$. This means that the value of the fraction of the infected population in the outbreak phase, which is either $0$ or $\rho_*(\lambda)$, cannot be predicted with certainty. We call this phenomenon the [*intrinsic unpredictability of epidemic outbreaks*]{}. In this paper, we clarify the meaning of (\[goal\]). We first observe the phenomenon in the SIR model defined on a random regular graph. By employing a mean field approximation, we describe the epidemic spread dynamics in terms of a master equation for two variables. Then, with a system size expansion, we approximate the solutions to the master equation by those to a Langevin equation. Now we can analyze this Langevin equation and work out the mechanism of the appearance of the two peaks. We also calculate $q(\lambda)$ near the transition point.

Model {#sec_model}
=====

Let $G$ be a random $k$-regular graph consisting of $N$ nodes. For each $x \in G$, the state $\sigma_x \in \{ {{\rm S}}, {{\rm I}}, {{\rm R}}\} $ is defined, where ${{\rm S}}$, ${{\rm I}}$, and ${{\rm R}}$ represent Susceptible, Infective, and Recovered, respectively. The state of the whole system is given by $(\sigma_x)_{x \in G}$, which is denoted by ${{\boldsymbol \sigma}}$ collectively.
The SIR model on networks is described by a continuous time Markov process with infection rate $\lambda$ and recovery rate $\mu$. Concretely, the transition rate $W({{\boldsymbol \sigma}} \to {{\boldsymbol \sigma}}')$ of the Markov process is given as $$W({{\boldsymbol \sigma}} \to {{\boldsymbol \sigma}}') = \sum_{x \in G} w({{\boldsymbol \sigma}} \to {{\boldsymbol \sigma}}'|x),$$ with $$\begin{aligned} w({{\boldsymbol \sigma}} \to {{\boldsymbol \sigma}}'|x)& = & \lambda \left[\delta(\sigma_x,{{\rm S}})\delta(\sigma_x',{{\rm I}}) \sum_{y \in B(x)} \delta(\sigma_y,{{\rm I}}) \right] \nonumber \\ & & +\mu \delta(\sigma_x, {{\rm I}})\delta(\sigma_x', {{\rm R}}), \label{rate-netmodel}\end{aligned}$$ where $B(x)$ is a set of $k$-adjacent nodes to $x \in G$. Hereinafter, without loss of generality, we use dimensionless time by setting $\mu=1$. For almost all time sequences, infective nodes vanish after a sufficiently long time, and then the system reaches a stationary state, which is called the [*final state*]{}. The ratio of the total number of recovered nodes to $N$ in the final state is equivalent to the fraction of the infected population $\rho$. This quantity measures the extent of the epidemic spread. At $t=0$, we assume that $\sigma={{\rm I}}$ for only one node selected randomly and that $\sigma={{\rm S}}$ for the other nodes. In Fig. \[fig-sir-pm3d\], as an example, we show the result of numerical simulations for the model with $k=3$ and $N=8192$. We measured the probability density $P(\rho;\lambda)$ of the fraction of the infected population $\rho$ for various values of $\lambda$. This figure suggests that the expectation of $\rho$ becomes non-zero when $\lambda$ exceeds a critical value. The important observation here is that $\log P $ in the outbreak phase has a sharp peak near $\rho=0$, too. Indeed, the inset in Fig. \[fig-sir-pm3d\] clearly shows the existence of the two peaks in $\log P$ with $\lambda=1.5$. Similar graphs were reported in Refs. 
[@bailey1953total; @martin1998final; @gallos2003distribution; @PhysRevE.76.010901; @PhysRevE.64.050901; @lanvcic2011phase]. The existence of the two peaks is not due to a finite-size effect, as shown in Fig. \[fig-sir-Nxxx-p0-N16\], where the probability that $\rho > 1/16$, which is denoted by $p(\rho >1/16)$, is plotted as a function of $\lambda$ for several values of $N$. Note that $\lim_{N \to \infty} p(\rho >1/16)=q(\lambda)$ when $\rho_*(\lambda) >1/16$. These results suggest the limiting density (\[goal\]), where $q(\lambda)$ becomes continuously non-zero for $\lambda > {\lambda_{\rm c}}$ whereas $q(\lambda)=0$ for $\lambda \le {\lambda_{\rm c}}$. This is the phenomenon that we attempt to understand in this paper.

Analysis
========

Defining two variables $s\equiv \sum_{x}\delta(\sigma_x,S)/N$ and $i \equiv \sum_{x}\delta(\sigma_x,I)/N$, we consider a continuous-time Markov process of the two variables as an approximation of the SIR model on the network [@hufnagel2004forecast; @colizza2006modeling]. We expect the phenomenon we are concerned with to be reproduced within this approximation; we verify this at a later stage. The transition rate of $(s,i)\to (s,i-1/N)$ is exactly given as $Ni$, and we approximate the rate $(s,i)\to (s-1/N,i+1/N)$ as $\lambda k N s \psi $, where $\psi$ is the probability of finding $y \in B(x)$ such that $\sigma_y={{\rm I}}$ for any $x$. Here, the infective nodes form a connected cluster, and this cluster is tree-like because the typical size of the loops is $O(\log N)$. Now, as an approximation, we assume that there are $N i (k-2)$ edges connecting the tree-like cluster with susceptible nodes [@derrida1986random; @keeling2005networks]. Therefore, $\psi$ is estimated as the ratio of $N i (k-2)$ to the number of all edges $N k$ in the thermodynamic limit. That is, $\psi=i (k-2)/k$. Below, we focus on the case $k=3$. Let $P(s,i,t)$ be the probability density of $s(t)=s$ and $i(t)=i$.
Then, $P(s,i,t)$ obeys the master equation $$\begin{aligned} \frac{\partial P(s,i,t)}{\partial t} &=& N \left( i+\frac{1}{N} \right) P \left(s,i+\frac{1}{N},t \right) -N i P \left(s,i,t \right) \nonumber \\ &+& N \lambda \left (s+\frac{1}{N}\right) \left(i-\frac{1}{N}\right) P\left( s+\frac{1}{N},i-\frac{1}{N},t\right) \nonumber \\ &-& N\lambda s i P\left(s,i,t\right). \label{MST}\end{aligned}$$ When $N$ is sufficiently large, the master equation for $P(s,i,t)$ can be expanded as $$\frac{\partial P}{\partial t} +\partial_i J_i +\partial_s J_s+O\left( \frac{1}{N^2} \right)=0, \label{sNMST}$$ with $$\begin{aligned} J_i&=& \left(\lambda s-1\right) i P - \partial_i\left[ \frac{\left(\lambda s+1\right)i}{2N} P \right] + \partial_s \left(\frac{\lambda s i }{2N}P \right), \nonumber \\ J_s&=& -\lambda s i P - \partial_s \left( \frac{\lambda s i }{2N}P\right) +\partial_i \left(\frac{\lambda s i}{2N} P \right).\end{aligned}$$ By assuming that $O(1/N^2)$ terms can be ignored, we obtain the Fokker-Planck equation [@gardiner2004handbook]. It can be confirmed by direct calculation that this Fokker-Planck equation (\[sNMST\]) describes the time evolution of the probability density for the following set of Langevin equations: $$\begin{aligned} {\frac{d s}{d t}} &=& -\lambda s i -\sqrt{\frac{\lambda s i}{N}}\cdot \xi_1, \label{s_lgv} \\ {\frac{d i}{d t}} &=& \lambda s i - i +\sqrt{\frac{\lambda s i}{N}}\cdot \xi_1 +\sqrt{\frac{i}{N}}\cdot \xi_2, \label{i_lgv} \end{aligned}$$ where $\xi_i$ is Gaussian white noise that satisfies $\left<\xi_i\left(t\right) \right>=0$ and $ \left<\xi_i\left(t\right) \xi_j\left(t'\right) \right> =\delta_{i j}\delta\left(t-t'\right)$. The symbol “$\cdot$” in front of $\xi_1$ and $\xi_2$ in (\[s\_lgv\]) and (\[i\_lgv\]) represents the Ito product rule. The same equations as (\[s\_lgv\]) and (\[i\_lgv\]) were presented in Refs. [@hufnagel2004forecast; @colizza2006modeling]. In this description, the fraction of the infected population is given by $$\rho=1-s(\infty).$$ In Fig.
\[fig-sy\_lgv-pm3d\], we show the result of numerical simulations of the Langevin equations and . Comparing Fig. \[fig-sy\_lgv-pm3d\] with Fig. \[fig-sir-pm3d\], we find that the phenomenon under study is described by the Langevin equations and . Thus, our problem may be solved by analyzing them. Now, the key idea of our analysis is the introduction of a new variable $Y=\sqrt{i N}$. Then, and are re-written as $$\begin{aligned} {\frac{d s}{d t}} &=& \frac{1}{N}\left[ - \lambda s Y^2 -\sqrt{{\lambda s Y^2}}\cdot \xi_1\right], \label{s2_lgv} \\ {\frac{d Y}{d t}} &=& \frac{1}{2}\left\{\left(\lambda s - 1\right)Y -\frac{1}{4}\left({\lambda s +1} \right)\frac{1}{Y}\right\} \nonumber \\ && +\frac{1}{2}\sqrt{{\lambda s }}\cdot \xi_1 +\frac{1}{2}\sqrt{1}\cdot \xi_2, \label{Y_lgv}\end{aligned}$$ where it should be noted that the multiplication of the variable $Y$ and the noise does not appear in . We then consider the probability $q(\lambda)$ in the thermodynamic limit as the probability of observing $Y \simeq N^{1/2}$, because it is equivalent to $\rho >0$. Here, from and , we find that the characteristic time scale of $s$ is $N$ times that of $Y$. Thus, when $N$ is sufficiently large, $s$ almost retains its value when $Y$ changes over time. In particular, it is reasonable to set $s=1$ when $t$ is shorter than $N$. In this time interval, is expressed as $$\begin{aligned} {\frac{d Y}{d t}} &=& -\partial_Y U(Y)+\sqrt{2D}\xi, \label{Y2_lgv}\end{aligned}$$ where $D=(\lambda+1)/8$ and the potential $U(Y)$ is calculated as $$U(Y)=-\frac{1}{4}\left(\lambda - 1\right)Y^2 +\frac{1}{8}\left({\lambda +1}\right)\log(Y).$$ $\xi$ is Gaussian white noise with unit variance, where we have used the relation $\sqrt{\lambda}/2 \xi_1+1/2 \xi_2=\sqrt{\lambda+1}/2 \xi$. The initial condition is given as $Y(0)=1$. It should be noted that is independent of $N$. Thus, solutions satisfying $Y\simeq N^{1/2}$ in and correspond to solutions satisfying $Y \to \infty$ in . 
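The behavior of the reduced $s=1$ equation can be checked directly. The sketch below (all numerical parameters — time step, trajectory count, escape and absorption thresholds — are illustrative choices, not values taken from the simulations reported here) integrates the Langevin equation for $Y$ with the Euler–Maruyama scheme, using the drift $(\lambda-1)Y/2 - (\lambda+1)/(8Y)$ and $D=(\lambda+1)/8$, and estimates the fraction of trajectories escaping to large $Y$:

```python
import numpy as np

def outbreak_fraction(lam, n_traj=1500, dt=5e-3, t_max=60.0,
                      y_escape=40.0, y_absorb=0.05, seed=1):
    """Euler-Maruyama integration of dY/dt = -U'(Y) + sqrt(2D)*xi at s = 1,
    with drift (lam-1)Y/2 - (lam+1)/(8Y) and D = (lam+1)/8.
    A trajectory counts as an outbreak (Y -> infinity) once Y > y_escape,
    and as an extinct epidemic once Y < y_absorb."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(2.0 * (lam + 1.0) / 8.0 * dt)  # sqrt(2D dt)
    y = np.ones(n_traj)                      # initial condition Y(0) = 1
    absorbed = np.zeros(n_traj, dtype=bool)
    escaped = np.zeros(n_traj, dtype=bool)
    for _ in range(int(t_max / dt)):
        active = ~(absorbed | escaped)
        if not active.any():
            break
        y_safe = np.where(active, y, 1.0)    # avoid 1/Y on frozen paths
        drift = 0.5 * (lam - 1.0) * y_safe - (lam + 1.0) / (8.0 * y_safe)
        step = drift * dt + sigma * rng.standard_normal(n_traj)
        y = np.where(active, y + step, y)
        absorbed |= active & (y <= y_absorb)
        escaped |= active & (y >= y_escape)
    return float(escaped.mean())
```

For $\lambda<1$ every trajectory is absorbed, while for $\lambda>1$ a finite fraction escapes, consistent with the two-peak structure of the outbreak-size distribution.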
We identify $q(\lambda)$ with the probability of finding these solutions. We now derive this probability. First, we investigate the shapes of the graph $U\left(Y\right)$. We find that $U\left(0_+\right)=-\infty$ for any $\lambda$ and that $U\left(Y\right)$ monotonically increases in $Y$ for $\lambda <1$, while $U(Y)$ has a single maximum peak at $Y=Y_*$ for $\lambda >1$, where $$Y_\ast=\frac{1}{2} \sqrt{\frac{\lambda+1}{\lambda-1}}.$$ As a reference, in Fig. \[fig-U\_Y\], we show the shapes of $U\left(Y\right)$ for $\lambda=0.5$ and $1.2$. Next, based on the shapes of the potential function, we discuss the expected behavior of solutions to . When $\lambda<1$, the probability of $Y \to \infty$ is obviously zero because $U(Y)$ is a monotonically increasing function in $Y$. That is, $q(\lambda)=0$ in this case. The behavior for $\lambda>1$ is complicated. We thus focus on the case that $\lambda =1+{\epsilon}$, where ${\epsilon}$ is a small positive number. In this case, $Y_*\simeq {\epsilon}^{-1/2}$. We then note that if a solution $Y$ to happens to exceed $Y_*$, it is comparatively likely that $ Y \to \infty$. Assuming that the probability of $ Y \to \infty$ under the condition $Y \ge Y_*$ at some time is unity, we estimate $q(\lambda)$ as the probability that $Y$ exceeds $Y_*$. Furthermore, we express $q(\lambda)$ in terms of the transition rate $T$ from $Y=1$ to $Y=Y_*$. Noting that the transition rate from $Y=1$ to $Y=0$ is equal to the recovery rate in the original SIR model, we can write $$q=\frac{T}{1+T}. \label{qT}$$ Since $T$ is positive and finite, we obtain $0 < q(\lambda) < 1$. In this manner, we have clearly explained the probabilistic nature in the outbreak phase, and we have obtained ${\lambda_{\rm c}}=1$. Finally, we calculate $q(\lambda)$ quantitatively near the transition point. 
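The stated shape properties of $U(Y)$ are simple to verify numerically; a minimal check (the grid range is an arbitrary choice) confirms monotonicity for $\lambda<1$ and a single interior maximum at $Y_*$ for $\lambda>1$:

```python
import numpy as np

def U(Y, lam):
    """Potential of the reduced Langevin equation at s = 1."""
    return -0.25 * (lam - 1.0) * Y**2 + 0.125 * (lam + 1.0) * np.log(Y)

def Y_star(lam):
    """Position of the potential maximum, defined for lam > 1."""
    return 0.5 * np.sqrt((lam + 1.0) / (lam - 1.0))

Ygrid = np.linspace(0.01, 10.0, 20001)
# lam = 0.5: U increases monotonically; lam = 1.2: maximum near Y_* ~ 1.66
print(Ygrid[np.argmax(U(Ygrid, 1.2))], Y_star(1.2))
```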
From $Y_* \simeq {\epsilon}^{-1/2}$ and $U(Y_*) \simeq \log{\epsilon}$, we estimate the slope of the straight line connecting two points $\left(1,U(1)\right)$ and $\left(Y_*,U(Y_*)\right)$ in the $(Y,U)$ plane as $(U(Y_*)-U(1))/(Y_*-1)\simeq \sqrt{{\epsilon}}(\log {\epsilon})$, which approaches zero in the limit ${\epsilon}\rightarrow 0$. Thus, the transition from $Y=1$ to $Y=Y_*$ may be assumed to be free Brownian motion with the diffusion constant $D=(\lambda+1)/8$. The transition rate from $Y=1$ to $Y_*$ is then estimated as $T=2D/Y_*^2 = {\epsilon}+ O({\epsilon}^2)$. We thus obtain $$\begin{aligned} q(\lambda) &=& {\epsilon}+O({\epsilon}^2). \label{qep}\end{aligned}$$ In Fig. \[fig-sy\_lgv-sp-Nxxx-fit\], we compare the theoretical result with those obtained in numerical simulations of and . We measured the probability that $\rho > 0.003$, which is denoted as $p(\rho > 0.003)$. Recall that $\lim_{N \to \infty} p(\rho >0.003) =q(\lambda)$ when $\rho_*(\lambda) > 0.003$. Since the experimental result suggests $p(\rho >0.003)={\epsilon}+O({\epsilon}^2)$ in the limit $N \to \infty$, we claim that the theoretical result (\[qep\]) is in good agreement with the experimental result. Concluding remarks ================== In this paper, we have achieved a novel understanding of the intrinsic unpredictability of epidemic outbreaks by analyzing the Langevin equation , which effectively describes this singular phenomenon. Further, trajectories in the outbreak phase are divided into two groups: trajectories in one group are absorbed into zero, and the others diverge in . The division corresponds to the non-trivial limiting density given in . On the basis of this description, we calculated the probability of an epidemic outbreak near the transition point. Before ending the paper, we make a few remarks. First, the probability $q(\lambda)$ was studied in the mathematical literature (see [@yan2008distribution] and [@britton2010stochastic] as reviews.) 
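The algebra behind this estimate is worth making explicit: since $2D=(\lambda+1)/4$ and $Y_*^2=(\lambda+1)/[4(\lambda-1)]$, the expression $T=2D/Y_*^2$ evaluates to $\lambda-1={\epsilon}$ identically, so that $q=T/(1+T)={\epsilon}/(1+{\epsilon})={\epsilon}+O({\epsilon}^2)$. A short numerical check of these two formulas:

```python
def transition_rate(lam):
    """Free-diffusion estimate T = 2D / Y_*^2, with D = (lam+1)/8
    and Y_*^2 = (lam+1)/(4(lam-1)); algebraically T = lam - 1."""
    D = (lam + 1.0) / 8.0
    y_star_sq = (lam + 1.0) / (4.0 * (lam - 1.0))
    return 2.0 * D / y_star_sq

def q_outbreak(lam):
    """Outbreak probability q = T / (1 + T)."""
    T = transition_rate(lam)
    return T / (1.0 + T)
```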
To the best of our knowledge, the method proposed in this paper has never been used in previous studies. It might be interesting to connect our analysis with mathematical studies. Second, although we have investigated the simplest model in this paper, a similar analysis might be applied to various models. For example, we can consider the case that there are $m$ infected nodes at time $t=0$. Since the essence of the phenomenon is the existence of $Y_*$, the same result is obtained when $m$ is independent of $N$. However, for the case $m=c N$ with a small positive number $c$, $Y(t)$ is never absorbed to zero in the outbreak phase, because $Y(0)$ is infinitely far away from $Y=Y_*$. This is qualitatively different from the case $m=1$, which was reported in Refs. [@barbour1974functional; @miller2012epidemics]. In fact, as suggested in Fig. \[fig-sir-Nxxx-p0-mN128-N16\], $q(\lambda)$ jumps discontinuously to $q(\lambda)=1$, which is similar to the behavior observed in standard percolation transitions. Finally, as another generalization, one may study the behavior of the SIR model on more complex networks. In these cases, since the mean field approximation might not be effective, one needs to devise a new technique to describe the unpredictability of outbreaks. Moreover, one of the most interesting problems is to predict probabilistic epidemic outbreaks from limited data on realistic networks. We hope that future studies will address these problems. The authors thank N. Nakagawa, T. Nemoto and M. Itami for their useful comments. The present study was supported by KAKENHI No. 22340109 and No. 23654130. [10]{} L. Allen, in , edited by F. Brauer [*et al.*]{}, (Springer, Berlin, 2008), §3, p. 81. S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and D. U. Hwang, [*Phys. Rep.*]{} **424**, 175 (2006). C. Castellano, S. Fortunato, and V. Loreto, [*Rev. Mod. Phys.*]{} **81**, 1275 (2009). N. T. J. Bailey, [*Biometrika*]{} **37**, 193 (1950). N. T. J. Bailey, [*Biometrika*]{} **40**, 177 (1953). J.
A. J. Metz, [*Acta Biotheor.*]{} **27**, 75 (1978). A. Martin-L[ö]{}f, [*J. Appl. Probab.*]{} **35**, 671 (1998). D. A. Kessler and N. M. Shnerb, **76**, 010901 (2007). B. S. Bayati and P. A. Eckhoff, **86**, 062103 (2012). O. Diekmann, M. C. M. de Jong, and J. A. J. Metz, [*J. Appl. Probab.*]{} **35**, 448 (1998). D. H. Zanette, **64**, 050901 (2001). M. E. J. Newman, **66**, 016128 (2002). A. Lan[č]{}i[ć]{}, N. Antulov-Fantulin, M. [Š]{}iki[ć]{}, and H. [Š]{}tefan[č]{}i[ć]{}, **390**, 65 (2011). T. Bohman and M. Picollelli, **41**, 179 (2012). Y. Moreno, R. Pastor-Satorras, and A. Vespignani, **26**, 521 (2002). R. Pastor-Satorras and A. Vespignani, **86**, 3200 (2001). R. M. May and A. L. Lloyd, **64**, 066112 (2001). L. K. Gallos and P. Argyrakis, **330**, 117 (2003). M. Kitsak, L. K. Gallos, S. Havlin, F. Liljeros, L. Muchnik, H. E. Stanley, and H. A. Makse, **6**, 888 (2010). R. Cohen, S. Havlin, and D. Avraham, **91**, 247901 (2003). Y. Chen, G. Paul, S. Havlin, F. Liljeros, and H. E. Stanley, **101**, 058701 (2008). L. Hufnagel, D. Brockmann, and T. Geisel, **101**, 15124 (2004). V. Colizza, A. Barrat, M. Barth[é]{}lemy, and A. Vespignani, **68**, 1893 (2006). B. Derrida and Y. Pomeau, **1**, 45 (1986). M. J. Keeling and K. T. D. Eames, **2**, 295 (2005). C. Gardiner, (Springer, Berlin, 2004). P. Yan, in , edited by F. Brauer [*et al.*]{}, (Springer, Berlin, 2008), §10.5, p. 261. T. Britton, [*Math. Biosci.*]{} **225**, 24 (2010). A. D. Barbour, [*Adv. Appl. Probab.*]{} **6**, 21 (1974). J. C. Miller, (2012).
--- abstract: 'The many-body state of carriers confined in a quantum dot is controlled by the balance between their kinetic energy and their Coulomb correlation. In coupled quantum dots, both can be tuned by varying the inter-dot tunneling and interactions. Using a theoretical approach based on the diagonalization of the exact Hamiltonian, we show that transitions between different quantum phases can be induced through inter-dot coupling both for a system of few electrons (or holes) and for aggregates of electrons and holes. We discuss their manifestations in addition energy spectra (accessible through capacitance or transport experiments) and optical spectra.' address: - | Istituto Nazionale per la Fisica della Materia (INFM) and Dipartimento di Fisica,\ Università degli Studi di Modena e Reggio Emilia, Via Campi 213/A, 41100 Modena, Italy - ' Institut für Theoretische Physik, Karl–Franzenz–Universität Graz, Universitätsplatz 5, 8010 Graz, Austria ' author: - Massimo Rontani - Filippo Troiani - Ulrich Hohenester - Elisa Molinari title: Quantum phases in artificial molecules --- , , , and A. semiconductors ,A. nanostructures ,D. electron-electron interactions 71.35.-y ,71.45.Gm ,73.23.-b ,73.23.Hk Introduction ============ Semiconductor quantum dots (QDs) are a formidable laboratory for next-generation devices and for the actual realization of some key [*Gedankenexperimente*]{} in many-body physics [@book1; @book2; @book3]. Indeed, the number of electrons and holes in the QD can be controlled very accurately, and almost all relevant parameters influencing their strongly correlated states, like confinement potential and coupling with magnetic field and light, can be tailored in the experiments. The additional possibility of tuning the coupling between QDs enriches their physics and the possible applications. 
From the point of view of fundamental physics such coupling extends the analogy between quantum dots (“artificial atoms” [@artificialatoms]) and natural atoms, to artificial and natural molecules. The tunability of coupling among QDs makes it possible to explore all regimes between non-interacting dots and their merging into a single QD; many of these regimes are inaccessible in molecular physics. One of the peculiarities of QDs with respect to other solid state structures consists in the partial decoupling of a few degrees of freedom from all the others, which is due to the discrete nature of the spectrum [@book1; @book2; @book3]. Exploiting this feature in practice largely depends on the capability of integrating arrays of QDs, thus increasing the number of degrees of freedom that one can address with precision and coherent manipulation. This is precisely the strategy pursued by the semiconductor-based solid state implementations of quantum computation [@lloyd]. In general and basic terms, the tuning of inter-dot tunneling makes it possible to modify the relative position of the single-particle levels, thus inducing phase transitions in the many-body ground states and different degrees of spatial correlations among carriers. Manifestations of these phenomena in systems formed by carriers of only one type, whose ground and excited state properties are accessible through addition energy spectra, have been predicted. Here we point out that similar effects are also expected to occur for systems formed by both electrons and holes. We also show that, in spite of the obvious differences, strong similarities appear in the analysis of electrons and electron-hole systems, and a unified theoretical description is in order. Basically, a competition emerges between two trends.
On one side stands an atomic [*Aufbau*]{} logic, where carriers tend to occupy the lowest single-particle states available, thus minimizing the kinetic energy and the total spin, at the (energetic) cost of reducing spatial correlation among carriers. At the opposite extreme we find an enhanced degree of spatial correlation among carriers, which occurs through the occupation of orbitals other than the lowest. This implies an enhancement of the kinetic energy and a reduction of the repulsive one, and results in electron distributions maximizing the total spin (Hund’s rule). The balance between these two trends depends on the spacings of the single-particle levels involved, and these are precisely what can be tuned by controlling the inter-dot tunneling. When carriers of opposite charge, different effective masses and tunneling parameters come into play, the competition between both trends becomes even more delicate. Predictions of the actual ground and excited states of the many-body system thus require a careful theoretical treatment including all carrier-carrier interactions. Since the number of carriers in the dot can be controlled and kept relatively small, we can proceed through direct diagonalization of the exact many-body Hamiltonian, with no need to make a priori assumptions on the interactions. On the contrary, the results are a useful benchmark for the validity of the most common approximations for these systems. We find that different quantum phases correspond to different regimes of inter-dot coupling both for a system of few electrons (or holes) and for aggregates of electrons and holes, with various possible spatial configurations and the formation of different possible “subsystems” of inter-correlated particles. Besides, due to the negligible electron-hole exchange interaction in heterostructures such as GaAs, the two kinds of carriers can be treated as distinguishable particles.
Therefore spatial correlation among electrons and holes does not arise from the Fermi statistics: it needs instead the entanglement between the orbital degrees of freedom associated to holes and electrons, and turns out to depend only indirectly on the spin quantum numbers $ S_{e} $ and $ S_{h} $. After a brief summary of the state of the art in theoretical and experimental work on coupled dots (Sect. \[Review\]), in the following we describe the general Hamiltonian and solution scheme (Sect. \[Method\]). We then come to the results for electron- (Sect. \[Electron\]) and electron-hole systems (Sect. \[Electron-hole\]). The trends leading to different quantum phases are discussed in detail, together with their nature in terms of spin and spatial correlation functions. Experimental and theoretical background {#Review} ======================================= Early experimental and theoretical studies focused on electrostatically-coupled dots with negligible inter-dot tunneling [@review]. Here we consider [*artificial molecules*]{} [@leoscience], where carriers tunnel at appreciable rates between dots, and the wavefunction extends across the entire system. The formation of a miniband structure in a one-dimensional array of tunnel-coupled dots was demonstrated more than a decade ago [@leocrystal]. After that, the first studies considered “planar” coupled dots defined by electrodes in a two dimensional electron gas. In these devices the typical charging energy was much larger than the average inter-level spacing, hence linear [@planarlinear] and non-linear [@planarnonlinear] Single Electron Tunneling Spectroscopy (SETS), obtained by transport measurements at different values of the inter-dot conductance, could be explained by model theories based on capacitance parameterizations [@planarth]. Early studies also considered simple model Hamiltonians (usually Hubbard-like) with matrix elements treated as parameters [@Hubbardsimple]. 
Blick and coworkers clearly showed the occurrence of coherent molecular states across the entire two-dot setup, analyzing transport data [@blickI] and the response to a coherent wave interferometer [@blickII]. The tuning of coherent states was also probed by microwave excitations [@PAT], and coupling with environment acoustic phonons was studied [@spontaneous]. Planar coupled dots were also used to cool electron degrees of freedom [@vaart], to measure the magnetization as a function of the magnetic field [@magnetization], and to study the phenomenon of “bunching” of addition energies in large quantum dots [@bunching]. The so-called “vertical” experimental geometry was introduced later: it consists of a cylindrical mesa incorporating a triple barrier structure that defines two dots. So far, evidence of single-particle coherent states in an AlAs/GaAs heterostructure has been reported [@schmidt], while in AlGaAs/InGaAs structures clear SETS spectra of few-particle states have been observed as a function of the magnetic field $B$ and of the inter-dot barrier thickness [@guy]. A relevant part of theoretical research has addressed the study of few-particle states in vertical geometries, within the framework of the envelope function approximation. The two-electron problem was solved, by means of exact diagonalization, in different geometries by Bryant [@bryant] and by Oh [*et al.*]{} [@oh]. Systems with a number of electrons $N>2$ at $B\approx 0$ in cylindrical geometry have been studied by several methods: Hartree-Fock [@tamura], exact diagonalization for $N\le 5$ [@tokura], numerical solution of a generalized Hubbard model for $N\le 6$ [@ssc] and for $N>12$ with a “core” approximation suitable for the weak-coupling regime only [@asano], density functional theory [@bart].
Palacios and Hawrylak [@palacios] studied the energy spectrum in strong magnetic field and negligible inter-dot tunneling with various methods ($N\le 6$), and established a connection between the correlated ground states of the double-dot system and those of Fractional Quantum Hall Effect systems in double layers. In this perspective, Hu [*et al.*]{} [@dagotto] studied collective modes in mean-field theory, Imamura [*et al.*]{} [@aoki] exactly diagonalized the full Hamiltonian at strong $B$ and different values of tunneling ($N\le 4$), Martín-Moreno [*et al.*]{} [@tejedor] considered the occurrence of canted ground states. Also the far-infrared response of many-electron states was analyzed with various techniques [@mayrock]. Another interesting issue is the relation between quantum and “classical” ground states [@peeters] as the radius of the dot is enlarged, when electrons arrange to minimize electrostatic repulsion because the kinetic energy is quenched [@yannouleas; @helium]. The electronic properties of planar dots have also been studied theoretically, through a variety of techniques: configuration interaction or analytical methods with various approximations [@eto; @natalia], or density functional theory for larger values of $N$ [@leburton]. The infrared [@tapash] and the thermoelectric [@molenkamp] response were considered. Systems of coupled QDs are also among the most promising candidates for the implementation of semiconductor-based quantum information processing devices: some of the current proposals identify the qubits with either the spin [@qc] or the orbital degrees of freedom associated to the conduction band electrons in QDs. Research on few-electron systems in double dots is thus a new field in very rapid growth, with increasing focus on the possible quantum phases and how they can be driven by artificially controllable parameters such as inter-dot coupling, magnetic field, dot dimension. 
The study of such phases is expected to add insight into the physics of double layers, e.g. the conditions for Wigner crystallization, and of strongly correlated systems in general. The amount of experimental data on many-body states in artificial molecules is still limited, but the whole bunch of spectroscopic tools currently available (linear and non-linear transport, Raman spectroscopy) is now beginning to be employed (see e.g. SETS spectra in the $B$-$N$ space, Ref. [@Amaha] in this issue) and should allow the direct verification of theoretical predictions and a more general understanding of the basic phenomena and trends. Also the optical properties of coupled QDs depend both on the confinement of electrons and holes and on the effects of correlation among these carriers. In spite of their importance, however, such correlation effects are still largely unknown. From the experimental point of view, cleaved-edge overgrown samples have been used [@schedelbeck:97], but self-organized quantum dots are most commonly employed for optics. Their stacking was demonstrated [@fafard:00], and the splitting of the excitonic ground state in a single artificial molecule was studied as a function of the inter-dot distance. The lines in the photoluminescence spectra were explained in terms of transitions among excitonic states obtained by single-particle filling of delocalized bonding and anti-bonding electron and hole states [@schedelbeck:97; @bayer:01]. When a few photoexcited particles are present, however, the correlations induced by the carrier-carrier Coulomb interactions play a crucial role [@insertio]; single-particle tunneling and kinetic energies are also affected by the different energetic spacings of electron and hole single-particle states. The correlated ground and excited states will thus be governed by the competition of these effects, not included in previous theoretical descriptions of photoexcited artificial molecules [@filippoprl]. 
A detailed understanding of exciton and multiexciton states in coupled semiconductor QDs, however, is of great interest for the development of the optical implementations of quantum-information processing schemes, starting from the identification of well-characterized qubits [@bennet:00]. The possibility of complete optical control over the computational space formed by interacting excitons in quantum dots has recently been demonstrated in Refs. [@troiani:00] and [@biolatti:00]. We therefore expect that a systematic investigation of trends in the many-body phases of coupled dots will be actively pursued for systems of few electrons, and extended to systems of photoexcited electrons and holes in the near future. Many-body states of $N$ electrons and holes {#Method} =========================================== In the following we focus on the motion of few electrons and holes confined in two coupled quantum dots. Our primary interest is in the correlated nature of ground and excited states of the interacting system. Hereafter we consider a simplified model where, within the envelope function and effective mass approximations, two coupled identical vertical dots are described by a separable confining potential $V(\varrho,z)=V(\varrho)+V(z)$, with $V(\varrho)=\frac 1 2 m^*\omega_0^2\varrho^2$ an in-plane parabolic potential \[$\vec{\varrho}=(x,y)$, $m^*$ is the effective electron (hole) mass, $\omega_0$ the characteristic frequency\] and $V(z)$ a double square quantum well along the $z$ direction (see Fig. \[scheme\]). Each well (of width $L$ and barrier potential height $V_0$) corresponds to a dot; the coupling between the two dots is controlled either by varying the inter-dot distance $d$ (width of the inter-dot barrier) or the height of the inter-dot potential barrier. Varying $d$ implies considering differently grown devices.
The full many-body Hamiltonian $\mathcal{H}$ (in zero magnetic field) is the sum of the single-particle terms $H^{(0)}(\vec{r})=-\hbar^2\nabla^2/(2m^*) +V(\varrho,z)$ and of the two-body Coulomb interaction terms: $${\mathcal{H}}= \sum_{\xi=e,h}\sum_{i=1}^{N_{\xi}}\left[ H_{\xi}^{(0)}\!\left(\vec{r}_{\xi}^{\;(i)}\right)+ \sum_{j<i}\frac{e^2}{\kappa_r \left| \vec{r}_{\xi}^{\;(i)}-\vec{r}_{\xi}^{\;(j)} \right|}\right]- \sum_{i=1}^{N_e}\sum_{j=1}^{N_h} \frac{e^2}{\kappa_r \left| \vec{r}_{e}^{\;(i)}-\vec{r}_{h}^{\;(j)} \right|}. \label{e:hmanybody}$$ Here $\kappa_r$ is the dielectric constant of the semiconductor medium, and the subscript $e$ ($h$) refers to electrons (holes). Effective masses, characteristic frequencies, and details of the double well entering $H^{(0)}$ differ for electrons and holes. We choose this geometry for two reasons: firstly, experimental devices whose behavior can be described by this model are currently studied by several groups, allowing for precise tailoring of the dot geometry, strong spatial confinement, and hence observation of spectral features beyond the simple Coulomb Blockade behavior (e.g. in SETS spectra). Secondly, the cylindrical vertical geometry, contrary to in-plane devices, has the richest degree of symmetry, which is particularly helpful to theoretical work both in reducing the size of Hilbert space sectors and in analyzing electronic configurations. Specifically, $\mathcal{H}$ is invariant under any rotation in the spin space (the total spin $S$ and its projection $S_z$ are therefore conserved), rotation around the $z$ axis in real space (conservation of the $z$-component of the orbital angular momentum $M$), inversion with respect to the geometrical center of the system (parity conservation). In complete analogy with Molecular Physics [@slater] and for each species of carriers we introduce a spectroscopic notation to classify electronic terms, namely eigenstates of $\mathcal{H}$: $^{2S+1}M_{g,u}$. 
Here $g$ ($u$) stands for even (odd) parity and $M$ takes the labels $\Sigma$, $\Pi$, $\Delta$, $\ldots$ standing for $M=0,1,2,\ldots$ Actually, a $\Sigma$ term is also invariant under reflection with respect to a plane passing through the symmetry axis: in this case the notation takes the form $^{2S+1}\Sigma^{\pm}_{g,u}$, where $\pm$ labels the sign change under reflection [@insertioII]. We are interested here in the evolution of the ground and excited states as the inter-dot distance $d$ is varied. This feature shows a remarkable difference between artificial and natural molecules: in the latter the inter-nuclear distance is almost fixed, controlled by the nature of bonding, while in the former it can be tuned by adjusting electrodes or by growing different sample devices. Ground and excited states can be probed by several kinds of spectroscopies. Theoretically, once the energy spectrum is known after numerical diagonalization of $\mathcal{H}$, it is quite easy to compute the relevant observable quantities. A considerable achievement has been obtained by transport spectroscopies, like single-electron capacitance tunneling spectroscopy [@capacitance] or SETS [@tarucha] for the ground state, or non-linear tunneling spectroscopy [@leoexcited] for the excited states. In a transport experiment the chemical potential $\mu(N)$ of the double-dot is measured as the number of electrons $N$ is varied, charging the system one electron at a time. In fact, from the experimental value $\mu(N)$ one can infer information about the ground state, since $\mu\left(N\right)=E_0\left(N\right)-E_0\left(N-1\right)$, with $E_0(N)$ the ground-state energy of the $N$-body system [@spectroscopy]. Our theoretical strategy is straightforward: we compute the ground state energies $E_0(N)$ at different values of $N$, and from these the chemical potential $\mu$ to be compared with the spectra.
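This strategy can be illustrated on a minimal model (not the full many-body Hamiltonian above): the sketch below diagonalizes a two-site Hubbard-type caricature of the double dot — the hopping $t$ and on-site repulsion $U$ are assumed, purely illustrative parameters — in each particle-number sector and extracts $\mu(N)=E_0(N)-E_0(N-1)$:

```python
import numpy as np

# Spin-orbital indices for a two-site caricature of the double dot:
# 0 = (left, up), 1 = (left, down), 2 = (right, up), 3 = (right, down).
N_ORB = 4
HOPS = [(0, 2), (2, 0), (1, 3), (3, 1)]  # spin-conserving left<->right hops

def _apply(state, orb, create):
    """c_orb or c_orb^dagger on a Fock bitstring; None if the result vanishes."""
    occupied = (state >> orb) & 1
    if occupied == create:
        return None
    sign = (-1) ** bin(state & ((1 << orb) - 1)).count("1")  # fermionic parity
    return sign, state ^ (1 << orb)

def ground_energy(n, t=1.0, U=8.0):
    """Lowest eigenvalue of H = -t sum_s (c+_Ls c_Rs + h.c.)
    + U (n_Lu n_Ld + n_Ru n_Rd) in the N = n particle sector."""
    basis = [s for s in range(1 << N_ORB) if bin(s).count("1") == n]
    index = {s: k for k, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for s in basis:
        k = index[s]
        # on-site repulsion: U per doubly occupied site
        H[k, k] = U * ((s & 1) * ((s >> 1) & 1)
                       + ((s >> 2) & 1) * ((s >> 3) & 1))
        for a, b in HOPS:  # -t c+_a c_b
            hit = _apply(s, b, create=False)
            if hit is None:
                continue
            sgn1, s1 = hit
            hit = _apply(s1, a, create=True)
            if hit is None:
                continue
            sgn2, s2 = hit
            H[index[s2], k] += -t * sgn1 * sgn2
    return np.linalg.eigvalsh(H).min()

mu = {n: ground_energy(n) - ground_energy(n - 1) for n in (1, 2, 3, 4)}
```

In this toy model the $\mu(N)$ increase monotonically with $N$, the Coulomb-blockade-like staircase probed by transport spectroscopies.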
Single-dot far-infrared spectroscopies [@book1; @book2; @book3] are unsuitable to probe the relative motion of electrons and hence their correlation, because light only couples to the center-of-mass motion (generalized Kohn theorem) [@kohn]. This is also true for a system of vertically coupled quantum dots with cylindrical symmetry, as long as the in-plane confinement potential (orthogonal to the symmetry axis, e.g. the growth direction) is parabolic and the polarization of light is in the same plane. However, this limitation does not hold for two-photon processes like Raman scattering, where density fluctuations can excite collective modes of the interacting system [@Raman]. Finally, optical spectroscopy allows the study of few-particle states including electrons and holes. In the lowest order the light-semiconductor coupling is associated either with the absorption of a photon and the promotion of an electron from the valence to the conduction band, or with the reversed process, which is accounted for by a Hamiltonian of the form $-\mathbf{E \cdot P}$, where $ \mathbf{E}$ is the electric field and $ \mathbf{P} $ the material polarisation [@haug:93]. Within the rotating-wave and dipole approximations the luminescence spectrum for a QD initially prepared in state $ | \lambda \rangle $ can be computed according to Fermi’s golden rule: $$L_{ \sigma } ( \omega ) \propto \sum_{ \lambda' } \: \vert \, ( P_{\sigma} )_{ \lambda' , \lambda } \, \vert^{2} \; \delta( E_{ \lambda } + \hbar\omega - E_{ \lambda' } );$$ here $ (P_{\sigma})_{ \lambda , \lambda' } $ are the dipole matrix elements corresponding to the transition between states $\lambda$ and $\lambda'$ (through removal of one electron-hole pair) and the creation of a photon with helicity $\sigma=\pm$. Few-electron system {#Electron} =================== In this section we study the system of interacting carriers of the same species, e.g. electrons.
Let us start from the simplest case, that is the two-electron molecule. A theorem due to Wigner [@wignerth] guarantees that the ground state is always a singlet if time-reversal symmetry is preserved [@singlet]: however, dramatic alterations of the energy spectrum and wavefunction are driven by the inter-dot distance $d$ and the characteristic dot radius $\ell_0=(\hbar/m^*\omega_0)^{1/2}$. This is shown in Fig. \[scheme\]: in panels (a) and (b) we plot the total ground state kinetic $\left< E_k\right>$ and Coulomb $\left< V_{ee} \right>$ energy [@insertioIII], respectively, vs $d$. Coulomb interaction mixes up different configurations (i.e. Slater determinants) which contribute with different weight to the ground state. Besides $\left< E_k\right>$ and $\left< V_{ee} \right>$ for the true few-particle ground state $|\psi\rangle$ (diamond symbol), in Fig. \[scheme\] we also show the corresponding data of three prototypical states [@insertioIV]: $ | 1 \rangle \equiv | \sigma_{g}\!\uparrow , \sigma_{g}\!\downarrow \rangle $ (singlet), $ | 2 \rangle \equiv ( | \sigma_{g}\! \uparrow , \sigma_{u}\! \downarrow \rangle - | \sigma_{g}\! \downarrow , \sigma_{u}\! \uparrow \rangle ) / \sqrt{2} $ (singlet), $ | 3 \rangle \equiv ( | \sigma_{g}\! \uparrow , \sigma_{u}\! \downarrow \rangle + | \sigma_{g}\! \downarrow , \sigma_{u}\! \uparrow \rangle ) / \sqrt{2} $ (triplet). Note that the difference between the (identical) kinetic energies of states $ | 2 \rangle $ and $ | 3 \rangle $ and that of $ | 1 \rangle $ amounts exactly to the energy splitting $\Delta_{sas}$ between the single-particle states $ \sigma_{u} $ and $ \sigma_{g} $ \[Fig. \[scheme\] (a)\]. This quantity decreases exponentially as $d$ increases and as the probability of tunneling through the potential barrier goes to zero. While singlet and triplet states $ | 2 \rangle $ and $ | 3 \rangle $ have identical kinetic energy, the latter state is energetically favored as far as the interaction energy is concerned.
The splitting in $\left< V_{ee} \right>$ between $ | 2 \rangle $ and $ | 3 \rangle $ appearing in Fig. \[scheme\] (b) is an [*exchange energy,*]{} namely a consequence of the antisymmetry of the total wavefunction under particle permutations. The behavior of the ground state $|\psi\rangle$ partly resembles that of the state $ | 1 \rangle$, but shows significant deviations due to the mixing of configurations. The arrangement of electrons is naturally visualized by computing density functions in real space. However, both the single-particle density and the usual radial pair correlation function $g(\varrho)$ plotted in the $xy$ plane depend only on the relative distance, due to the cylindrical symmetry of the system. Hence, we follow Ref. [@maksym] and calculate the “angular” spin-resolved pair correlation function $$P_{s,s_0}( \vec{\varrho} , z ;\vec{\varrho}_0 , z_0 )=A_{s,s_0} \left<\sum_{i\ne j}\delta(\vec{\varrho}^{\;(i)}-\vec{\varrho}) \delta(z^{(i)}-z) \delta_{s^{(i)},s}\delta(\vec{\varrho}^{\;(j)}-\vec{\varrho}_0) \delta(z^{(j)}-z_0)\delta_{s^{(j)},s_0}\right>,$$ where $\left<\ldots\right>$ denotes the expectation value on a given state, the subscript $s$ refers to spin, and $A_{s,s_0}$ is a normalization factor such that $\int {\rm d}\vec{\varrho}\,{\rm d}z \,{\rm d}\vec{\varrho}_0\,{\rm d}z_0\, P_{s,s_0}$ $ ( \vec{\varrho} , z ; \vec{\varrho}_0 , z_0 )=1$. One electron with spin $s_0$ is held fixed at the position $(\vec{\varrho}_0, z_0)$, while the position $(\vec{\varrho},z)$ of the other, with spin $s$, is varied: thus $P_{s,s_0}( \vec{\varrho} , z ; \vec{\varrho}_0 , z_0 )$ is proportional to the conditional probability of finding the second electron given that the first one is fixed. This allows us to observe the relative spatial arrangement of the electrons and their angular correlation.
The spin-independent quantity is the total pair correlation function $P( \vec{\varrho} , z ; \vec{\varrho}_0 , z_0 )$, normalized as $$P( \vec{\varrho} , z ; \vec{\varrho}_0 , z_0 ) =\left[ \sum_{s=s_0}N_s(N_s-1) P_{s,s_0} + \sum_{s\ne s_0}N_sN_{s_0} P_{s,s_0}\right]/N(N-1),$$ where $N_s$ is the number of spin-$s$ electrons. In Fig. \[scheme\] (c) we plot the function $ \rho ( z ) \equiv \int\!\int {\rm d}x \, {\rm d}y \: P \:(x,y,z;x_{0},y_{0},z_{0}) $, showing how the fixed position of one electron (represented by the black circle) affects the spatial distribution of the other one along the symmetry axis $z$, for an inter-dot distance of 1 nm. The state $ | 1 \rangle $ clearly exhibits no spatial correlation between the two carriers: placing one electron in one quantum dot (QD) does not change the probability of finding the other one in either dot. In the case of the singlet state $ | 2 \rangle $ the spatial distribution of one electron is peaked around the other (fixed) one: the two particles tend to occupy the same QD. Opposite trends apply to the triplet state $ | 3 \rangle $. Again, the true ground state shows a mixed character: $ \rho ( z )$ has its largest peak in the “unoccupied” QD, but there is a finite probability for double occupancy of the same QD. The average values of the Coulomb energy $\left< V_{ee}\right>$ \[Fig. \[scheme\] (b)\] clearly reflect such behaviors. The curve referring to $ | 1 \rangle $ slowly decreases, because the value of $\left< V_{ee}\right>$ corresponding to two particles in different QDs diminishes as $d$ increases. A fortiori, $ \langle V_{ee} \rangle $ decreases for the triplet state: the electrons are always in different QDs.
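The qualitative behavior of the conditional density $\rho(z)$ just described can be reproduced in a 1D caricature: build the three prototypical pair states from bonding/anti-bonding combinations of dot-localized orbitals (Gaussians of assumed width and spacing, not the actual device orbitals) and measure how much conditional weight lands in the dot not occupied by the fixed electron:

```python
import numpy as np

# 1D caricature along the symmetry axis z: dot-localized orbitals are modeled as
# normalized Gaussians; the width w and the dot positions +/-3 nm are assumptions.
z = np.linspace(-10.0, 10.0, 801)
dz = z[1] - z[0]
w = 1.5
phiL = np.exp(-(z + 3.0) ** 2 / (2.0 * w**2))
phiR = np.exp(-(z - 3.0) ** 2 / (2.0 * w**2))
phiL /= np.sqrt((phiL**2).sum() * dz)
phiR /= np.sqrt((phiR**2).sum() * dz)
sg = (phiL + phiR) / np.sqrt(2.0)       # bonding sigma_g
su = (phiL - phiR) / np.sqrt(2.0)       # anti-bonding sigma_u

i0 = np.argmin(np.abs(z - 3.0))         # fix one electron at the right dot

def rho(psi):
    """Conditional density rho(z) ∝ |Psi(z, z0)|^2, normalized to unity."""
    r = psi[:, i0] ** 2
    return r / (r.sum() * dz)

# spatial parts of the three prototypical pair states (spin part factored out)
psi1 = np.outer(sg, sg)                                    # |1>, singlet
psi2 = (np.outer(sg, su) + np.outer(su, sg)) / np.sqrt(2)  # |2>, singlet
psi3 = (np.outer(sg, su) - np.outer(su, sg)) / np.sqrt(2)  # |3>, triplet

for name, psi in (("|1>", psi1), ("|2>", psi2), ("|3>", psi3)):
    other = rho(psi)[z < 0.0].sum() * dz   # weight in the *other* (left) dot
    print(name, round(other, 2))
```

In this toy model $ | 1 \rangle $ splits its conditional weight evenly between the two dots, $ | 2 \rangle $ piles the second electron onto the same dot as the fixed one, and $ | 3 \rangle $ pushes it entirely onto the other dot, mirroring the trends of Fig. \[scheme\] (c).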
The Coulomb energy is less affected by the inter-dot distance in the case of the state $ | 2 \rangle $, because $ \langle V_{ee} \rangle $ is mainly due to intra-dot interaction (both carriers in the same QD): the slight increase of $ \langle V_{ee} \rangle $ depends on the growing localization of the particles within a QD. The terms contributing to the Hamiltonian $\mathcal{H}$ of Eq. (\[e:hmanybody\]) scale differently with the characteristic length of the confinement potential $\ell_0$: the kinetic term scales as $\sim \ell_0^{-2}$, while the interaction term scales as $\sim \ell_0^{-1}$. For small dots, the kinetic term dominates and the system is Fermi-liquid like: here the ground state is determined by the successive filling of the empty lowest-energy single-particle levels, according to [*Aufbau*]{} atomic theory. As $\ell_0$ increases, the electrons become more and more correlated and arrange to minimize Coulomb repulsion, up to the limit of complete spatial localization (reminiscent of Wigner crystallization in 2D). Although for $N=2$ the ground state remains a $^1\Sigma_g$ term as $\ell_0$ is varied, we can gain further insight into the correlation dynamics by analyzing the two-body wavefunction. In the inset of Fig. \[fthree\] we plot the total pair correlation function $P( \vec{\varrho} , z ; \vec{\varrho}_0 , z_0 )$ for the $N=2$ triplet state. Here, $\varrho_0$ and $z_0$ are set equal to the average value of the in-plane radius and the maximum along $z$ of the single-particle density, respectively. In addition, $z$ is fixed at the position of the second, symmetric maximum of the density in the symmetry-axis direction: the resulting contour plot is the value of $P$ in the $xy$ plane (in units of $\ell_0$). This represents the probability of finding one electron in the plane of one dot, given that the second electron is fixed on the other dot. The other plots in Fig.
\[fthree\] show $P( \varrho , \varphi , z ; \varrho_0 , \varphi_0 , z_0 )$ vs the azimuthal angle $\varphi$: all other parameters $\varrho$, $\vec{\varrho}_0$, $z$, $z_0$ are fixed, with $\varrho=\varrho_0$. When $\varphi=\varphi_0$, the position coincides with that of the fixed electron, and the probability $P$ has a minimum (zero in the triplet case with $z=z_0$) due to the Pauli exclusion principle. As $\varphi$ is varied, the position follows a trajectory like the thick circle in the inset, starting from the bullet locating the other fixed electron in the $xy$ plane. After a $2\pi$-rotation, we are back at the starting point. These plots are a kind of “snapshot” of the angular correlation, as we freeze the motion of one electron. Figure \[fthree\] is organized in two columns, corresponding to the singlet ground state and to the triplet excited state, respectively, for different values of $\hbar\omega_0$ ($d=$ 1 nm). Solid lines refer to the case $z=z_0$, namely electrons on the same dot, while dashed lines refer to $z\ne z_0$, i.e. electrons on different dots. When $\hbar\omega_0$ is very large (40 meV, top row), the curves $P( \vec{\varrho} , z ; \vec{\varrho}_0 , z_0 )$ are almost flat. This flatness implies that the motion of the two electrons is substantially uncorrelated, except for the effect of Fermi statistics. In fact, in the triplet case, the probability of measuring two electrons on the same dot is negligible, and this holds at any value of $\hbar\omega_0$. On the contrary, in the singlet ground state there is a finite probability of measuring two electrons on the same dot. As $\hbar\omega_0$ is reduced (20 meV, middle row), angular correlation is turned on. This can be seen from the increase of the peak-to-valley ratio in the angular correlation function. The maximum is located at $\varphi-\varphi_0=\pi$, i.e. the two electrons, repelling each other, tend to be as far apart as possible.
This trend is even clearer at small values of $\hbar\omega_0$ (3.5 meV, bottom row). Plots of the second row ($\hbar\omega_0=$ 20 meV) should be compared with the corresponding plot of Fig. \[scheme\] (c): the former illustrate how electrons correlate in the $xy$ plane, the latter how they arrange along $z$. Note that in the first column of Fig. \[fthree\], as $\hbar\omega_0$ is decreased, the probability of measuring two electrons on the same dot increases until it equals the probability of measuring the electrons on different dots ($\hbar\omega_0=$ 3.5 meV). This is governed by the ratio between two fundamental energy scales: the harmonic-oscillator inter-level separation $\hbar\omega_0$ and the energy difference $\Delta_{sas}$ between antisymmetric and symmetric double-well wavefunctions. If $\hbar\omega_0\gg \Delta_{sas}$, the inter-dot tunneling is negligible with respect to the kinetic energy of the intra-dot motion, and the dots are almost quantum mechanically decoupled. In the opposite limit the system is coherent, and there is no difference between measuring one electron on one dot or on the other, since the system behaves as a single dot of doubled size. Let us now turn to the case $N>2$. We choose a particular set of parameters, namely $m^*=$ 0.067$m_e$, $\kappa_r=$ 12.4, $L=$ 12 nm, $V_0=$ 250 meV, $\hbar\omega_0=$ 5.78$N^{-1/4}$ meV, corresponding to a set of experimental devices currently under study [@guy]. The parameterization of $\hbar\omega_0(N)$ is meant to mimic the effect of the gate voltage on the electrostatic confinement potential $V(\varrho)$ [@bart]. We exactly diagonalize the Hamiltonian $\mathcal{H}$ of Eq. (\[e:hmanybody\]) for $N\le 6$, using up to 32 single-particle orbitals. Convergence is checked by controlling a cutoff on the average energy of the Slater determinants entering the computation.
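As a minimal illustration of such an exact diagonalization in a fixed $S_z$ sector, consider a two-site Hubbard model, i.e. one orbital per dot with hopping $t$ and on-site repulsion $U$ (a hypothetical stand-in for the full Hamiltonian, with assumed parameter values):

```python
import numpy as np

def two_site_spectrum(t, U):
    """Exact diagonalization of a two-site Hubbard model (one orbital per dot) in
    the S_z = 0 sector.  Basis: |up,down;0>, |0;up,down>, |up;down>, |down;up>."""
    H = np.array([[U,   0.0, -t,   t  ],
                  [0.0, U,   -t,   t  ],
                  [-t,  -t,  0.0,  0.0],
                  [ t,   t,  0.0,  0.0]])
    return np.linalg.eigvalsh(H)

t, U = 1.2, 8.0   # assumed hopping (~Delta_sas/2) and on-site repulsion, in meV
E = two_site_spectrum(t, U)
print(E[:2])
# The lowest state is the singlet (Wigner's theorem); the triplet sits at E = 0,
# and for U >> t the exchange splitting approaches J = 4 t^2 / U.
```

Even this four-state toy problem reproduces the singlet ground state and a singlet-triplet exchange splitting suppressed by the on-site repulsion; the full calculation does the same diagonalization in a vastly larger determinant basis.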
Our code uses the ARPACK package [@arpack] and isolates Hilbert space sectors with $S$ and $S_z$ fixed, unlike usual Lanczos approaches. Figure \[f:four\] shows the calculated ground state energy vs $d$ for $3\le N \le 6$. As $d$ is varied, one or two transitions between ground states of different symmetries occur. Specifically, while there is only one transition between two different electronic terms for $N=3$, two transitions take place for $N=4$ and $N=5$. The intermediate phase for $N=4$ exists only in a very narrow range of $d$ ($\sim$ 0.01 nm) in the neighborhood of $d=$ 3.45 nm. For $N=$ 6, again only two phases exist. However, at the intersection point of the $^1\Sigma_g$ and $^3\Sigma_g$ terms, the excited state $^3\Pi_g$ is almost degenerate in energy. These transitions can be understood by analyzing the many-body wavefunction of the different ground states [@ssc]. In the bottom panel of Fig. \[f:four\] we focus on the $N=4$ case and schematically depict the major-weight Slater determinant corresponding to each phase. The key point is that, as $d$ is decreased from the value of 4.5 nm, the “energy gap” $\Delta_{sas}$ between “bonding” and “anti-bonding” orbitals (i.e. symmetric and antisymmetric solutions of the double well along $z$) changes, from the limit of decoupled dots \[labeled c) in the figure\] to the strong-coupling limit \[labeled a)\]. In the c) case, the first-shell molecular orbitals $0\sigma_g$ and $0\sigma_u$ are almost degenerate and well separated in energy from the second shell; hence they are filled with four electrons giving the configuration $0\sigma_g^20\sigma_u^2$, i.e. two isolated dots with the first orbital shell completely filled. In the opposite limit, at small values of $d$ \[the a) case\], the bonding mini-band made of the $0\sigma_g$, $0\pi_u^+$, $0\pi_u^-$ single-particle orbitals is much lower in energy than the anti-bonding one.
The ambiguity of how to fill the lowest-energy orbitals, due to the degeneracy of the $0\pi_u^+$ and $0\pi_u^-$ levels, is resolved consistently with Hund’s first rule [@tarucha]: the two open-shell electrons occupy one orbital each with parallel spins (the configuration being $0\sigma_g^20\pi_u^+0\pi_u^-$), so that the exchange interaction keeps the electrons apart, minimizing Coulomb repulsion. This configuration is characteristic of a single dot, doubled in size [@tarucha]. In the intermediate phase b), the antibonding $0\sigma_u$ level is almost degenerate with the bonding $0\pi_u^+$ and $0\pi_u^-$ levels: while the first two electrons occupy the lowest-energy orbital $0\sigma_g$, the remaining two again arrange to maximize spin, consistently with a “generalized” Hund’s first rule. However, now three levels are almost degenerate, and we find that the ground state configuration is $0\sigma_g^20\sigma_u0\pi_u^+$: according to Hund’s second rule, the total orbital angular momentum $M$ is also maximized, which minimizes the interaction energy (the higher $m$, the smaller the Coulomb matrix element between single-particle levels). A similar reasoning applies to the transitions at $N\ne 4$. Let us now focus on the $N=5$ case in Fig. \[f:four\]. As $d$ increases, the ground state sequence is $^2\Pi_u \rightarrow\, ^4\Sigma_u \rightarrow\, ^2\Pi_u$; that is, the $^2\Pi_u$ term appears twice, corresponding to a continuous energy curve that crosses the $^4\Sigma_u$ term twice. However, if we examine which Slater determinants mainly contribute to $^2\Pi_u$, we find that the relevant configuration at small $d$ (I $\equiv$ $0\sigma_g^20\pi_u^{+2}0\pi_u^-$) differs from that at large $d$ (II $\equiv$ $0\sigma_g^20\sigma_u^20\pi_u^+$).
Moreover, the slope of the curve in the two regions is different, mainly due to the change in the balance between occupied bonding and anti-bonding levels, 5:0 for I and 3:2 for II, which controls the dependence of the overall kinetic energy on $d$ (see the previous discussion). This change in the “character” of the $^2\Pi_u$ term (i.e. the ratio between the weights of configurations I and II) is found to vary continuously with $d$. We also plot the first excited state of $^2\Pi_u$ symmetry (dashed line in the $N=5$ panel): clearly this curve anti-crosses the $^2\Pi_u$ ground state. Analyzing the slope and character of this excited state in the small- and large-$d$ regions, we find an inverted behavior with respect to the ground state: now the relevant configuration at small $d$ is II while that at large $d$ is I. The overall behavior can be understood as a consequence of the Wigner-von Neumann theorem, i.e. that the intersection of terms of identical symmetry is forbidden [@landau]. Therefore the two $^2\Pi_u$ terms anti-cross, while the $^2\Pi_u$ and $^4\Sigma_u$ terms, belonging to different irreducible representations of the symmetry group of $\mathcal{H}$, can freely cross and bring about ground state transitions. An analogous anti-crossing between the ground and excited states of $^1\Sigma_g$ symmetry is depicted in the $N=4$ panel (solid and dashed line, respectively). Results of Fig. \[f:four\] should be compared with those obtained by means of exact diagonalization of a generalized Hubbard model [@ssc], by density functional theory [@bart], and by the Hartree-Fock method [@tamura]. In all these works the window in $d$-space in which the ground state $^3\Pi_g$ at $N=4$ occurs is much larger, and a spurious additional intermediate phase at $N=6$ (corresponding to the excited state $^3\Pi_g$ in our calculation) appears. Therefore our results, which agree well with data obtained up to $N\le 5$ by exact diagonalization in Ref.
[@tokura], clearly demonstrate the importance of correlation beyond mean-field approaches. For realistic device parameters the interacting electronic system is so correlated that it is very difficult to obtain quantitatively reliable results with any approach other than configuration interaction. This point was already stressed, for single quantum dots, in Refs. [@pfannkuche] and [@prb]. To further characterize the different ground states vs $d$, in Fig. \[f:five\] we plot the spin-resolved pair correlation function $P_{\uparrow,\uparrow}( \vec{\varrho} , z ; \vec{\varrho}_0 , z_0 )$ for $N=4$, for values of $d$ corresponding to the three phases previously discussed. The position of a spin-up electron $(\vec{\varrho}_0 , z_0 )$ is fixed in one dot as in Fig. \[fthree\], and the contour plots of the top (bottom) row, with $z=z_0$ ($z\ne z_0$) fixed, correspond to the probability of measuring another spin-up electron on the same (other) dot in the $xy$ plane. The right column refers to the $^1\Sigma_g$ term at $d=$ 3.9 nm: there is only one “free” spin-up electron, and we can see that the probability of measuring it on the same dot as the fixed electron is negligible, while the probability distribution on the other dot depends only slightly on the position of the first, fixed electron. Therefore the two dots are quantum mechanically decoupled, each one filled with two electrons in the lowest shell. The motion of electrons in the two dots is almost uncorrelated in the $xy$ plane. In the opposite limit of small $d$ (left column, $d=$ 3.1 nm, three spin-up electrons), the coupling is so strong that it makes no difference whether the electron is measured on one dot or on the other, i.e. the contour plots are identical, and the system forms a coherent, strongly bound molecule. The fixed electron is “dressed” by its exchange and correlation hole, i.e. it repels other electrons that are at small distances from it.
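The Wigner-von Neumann no-crossing rule invoked above can be made concrete with the textbook two-level model: if two diabatic terms that would cross as a function of $d$ belong to the same symmetry, any coupling $W$ between them (the numbers below are illustrative, not fitted to the present calculation) opens a gap of at least $2|W|$, whereas for $W=0$ (different symmetries) the terms cross freely:

```python
import numpy as np

def adiabatic_levels(d, slope1=-1.0, slope2=1.0, W=0.3):
    """Eigenvalues of H(d) = [[e1(d), W], [W, e2(d)]] for two linearly crossing
    diabatic terms e1 = slope1*d, e2 = slope2*d; slopes and W are illustrative."""
    e1, e2 = slope1 * d, slope2 * d
    mean, half = 0.5 * (e1 + e2), 0.5 * (e1 - e2)
    gap = np.sqrt(half**2 + W**2)
    return mean - gap, mean + gap

d = np.linspace(-2.0, 2.0, 401)
lower, upper = adiabatic_levels(d)
print(np.min(upper - lower))   # minimum gap equals 2W at the would-be crossing
```

This is exactly the anti-crossing pattern of the two $^2\Pi_u$ curves in Fig. \[f:four\], while the symmetry-distinct $^2\Pi_u$ and $^4\Sigma_u$ terms behave as the uncoupled $W=0$ case.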
The middle column ($d=$ 3.5 nm, three spin-up electrons) shows that in the intermediate phase $^3\Pi_g$ the dots are coupled with a weak degree of coherence, namely the probabilities of measuring the electron on the two dots are significantly different. Planar correlation within the same dot is important, while it is negligible for motion on different dots. The above classification of phases also applies to $N\ne 4$. In this section we have shown results for the electron energy spectrum up to $N\le 6$. It is now straightforward to compute the linear single-electron tunneling spectroscopy (SETS) spectrum, and the comparison with very recent experimental data [@Amaha] shows remarkable agreement in many respects. Results for $B > 0$ will be presented elsewhere [@tobepubl].

Few electron-hole pairs {#Electron-hole}
=======================

We next consider systems of interacting carriers composed of an equal number of electrons and holes. Let us start by considering a single electron-hole pair (exciton) and the way in which its ground state depends on the width of the barrier. As the inter-dot distance $d$ increases, the splitting between the energies of the bonding ($ \sigma_{g} $) and anti-bonding ($ \sigma_{u} $) states decreases both for electrons and for holes. The energy cost associated with promoting the two particles from the bonding to the anti-bonding states becomes smaller and comparable to the gain in Coulomb energy arising from the correlation of their spatial distributions along $z$. In Fig. \[exciton\] we plot the functions $ \rho ( z )$ of electrons and holes, associated with given positions of the other carrier, at inter-dot distances of $ d = 1 $ nm (a) and $ d = 3 $ nm (b); the two insets show the weights of $ | 1 \rangle \equiv | \sigma_{g}^{e} \uparrow , \sigma_{g}^{h} \uparrow \rangle $ and $ | 2 \rangle \equiv | \sigma_{u}^{e} \uparrow , \sigma_{u}^{h} \uparrow \rangle $ in the electron-hole ground state.
The decrease in tunneling is accompanied by an increase in correlation (the electron is more localized around the hole and vice versa) and by an increasing contribution from the $ | 2 \rangle $ state. The slight differences between the plots associated with electrons and holes at each inter-dot distance are due to the differences in the barrier heights (400 meV for electrons and 215 meV for holes) and in the effective masses ($ m_{e}^{*} = 0.067\: m_{0} $ and $ m_{h}^{*} = 0.38\: m_{0} $) of the two carriers: as a consequence, electrons tunnel more than holes and tend to be less localized within one QD. It is worth noting that at $ d = 3 $ nm the electronic tunneling still induces a pronounced splitting between the two delocalized bonding and anti-bonding single-particle states ($ \epsilon_{b}= 35.11$ meV and $ \epsilon_{a}= 37.52$ meV). In spite of this, due to Coulomb correlation and to the reduced tunneling of holes, the energy and the spatial distribution of the excitonic ground state closely resemble those of an exciton in a single QD; moreover, the splitting between the ground and first excited states is negligible [@insertioV]. In other words, the “excitonic tunneling” is suppressed at smaller inter-dot distances than the electronic one. If the double dot is occupied by two electrons and two holes, both attractive and repulsive interactions are present. Intuitively one would expect carriers of identical charge to avoid each other and carriers of opposite charge to seek each other: the interplay between these trends is governed by the values of $ d $, $ S_{e} $, and $ S_{h} $. In Fig. \[f0010d1\] we compare such correlations for two different values of the electron and hole spin quantum numbers and for $ d = 1 $ nm. The lowest singlet-singlet state ($ S_{e} = 0, S_{h} = 0 $) is characterized by a small correlation between the electrons, (a), and by a more pronounced one between the holes, (c).
Analyzing the eigenfunction associated with this state, one observes that approximately only the electron single-particle state $ \sigma_{g}^{e} $ is (doubly) occupied, while for holes strong contributions arise from both bonding and anti-bonding states. As already mentioned, this difference between the behaviors of the two carriers depends on the fact that a gain in Coulomb energy has a greater kinetic cost for electrons than for holes. Besides, the spatial distribution of the holes, (e), is not affected by the position of the electrons [@insertioVI]. Let us now compare these correlation functions with the corresponding ones associated with the triplet-singlet configuration ($ S_{e} = 1, S_{h} = 0 $). Again the correlation between electrons and holes is negligible, (f): the two-electron and two-hole subsystems can thus be understood, to a good extent, independently of each other. As in the case of the prototypical state $ | 3 \rangle $, which the state of the two electrons here resembles, the probability of finding two electrons in the same QD is negligible, (b). Such spatial separation of the electrons induces a more pronounced separation for the holes too, as shown by the flattening of the smallest peak in the left well \[see Fig. \[f0010d1\] (b), (d)\]. The differences between the two spin configurations are even more dramatic at larger inter-dot distances. In Fig. \[f0010d3\] we show the same correlation functions at $ d = 3 $ nm. The triplet-singlet configuration \[Figs. \[f0010d3\] (b), (d), (f)\] shows the same features already observed at $ d = 1 $ nm; here the two electrons (holes) are perfectly localized in different QDs, due to the suppressed tunneling. The singlet-singlet configuration \[Figs. \[f0010d3\] (a), (c), (e)\] instead has undergone a transition to a phase in which all carriers are localized in either QD (due to symmetry). If the position of one of the four particles is fixed in one QD, all the others are localized in the same one.
This somewhat surprising effect can be explained in the following way: in a mean-field picture, due to the substantial similarity of the electron and hole wavefunctions, localizing the two excitons in two different QDs or in the same one makes no difference to the Coulomb energy, because terms of opposite sign cancel. If correlation comes into play, however, the localization of all particles in the same QD gives rise to the so-called “biexcitonic binding energy” $\Delta E$ (defined as the difference between twice the energy of the excitonic ground state and that of the biexcitonic one). Specifically, the binding energy $\Delta E$ is due to the correlations along the $x$ and $y$ directions: as in the case of the two electrons in Fig. \[fthree\], such correlations become strongly effective and lower the Coulomb energy when the particles are localized in the same QD. The comparison between the correlation functions corresponding to equal spin configurations at different inter-dot distances shows a trend similar to that of the two electrons alone. The population of the anti-bonding states increases with decreasing bonding-antibonding splitting, thus allowing a more pronounced spatial correlation between identical carriers. This dependence is particularly clear for the singlet states, while for the triplet ones a high degree of correlation is already guaranteed by the permutational symmetry of the few-particle wavefunction (i.e., by the fermionic nature of electrons and holes). The described behaviors are reflected in the values of the different contributions to the mean Coulomb energies of each spin configuration (Fig. \[contr\]). Let us start by considering the three spin arrangements $ ( S_{e} , S_{h} ) = (1,1),(0,1),(1,0)$.
The contributions to the Coulomb energy associated with the electron-electron interaction (a) all decrease monotonically with increasing $d$: the two electrons, each in a different QD, move farther and farther apart. If $ S_{e}=1 $ the spatial separation of the two electrons is a direct consequence of the permutational symmetry of the wavefunction; if $ S_{e}=0 $ the same effect arises from a proper linear combination of different Slater determinants and from the corresponding occupation of the electronic $ \sigma_{u}^{e} $ orbitals (which in turn depends on the tunneling; hence the different slopes of the $ S_{e}=0 $ and $ S_{e}=1 $ curves). Analogous behaviors occur for the hole-hole interaction (c). The main difference compared to the previous case is the higher degree of correlation between these carriers in the $ S_{h} = 0 $ case, already at small inter-dot distances. The trends of the electron-hole Coulomb interactions (b) are hardly distinguishable from one another. The monotonic decrease (in modulus) of $ \langle V_{eh} \rangle $ reflects that of the interaction energy between carriers localized in different QDs as they move apart. The plots associated with the singlet-singlet configuration (solid lines), however, show a transition towards a phase in which all carriers are localized in the same QD, already highlighted in Fig. \[f0010d3\]. The absolute values of all Coulomb terms correspondingly go through an abrupt increase for values of $d$ in the range between 2 and 2.5 nm. Let us finally consider the total energies (d). The lowest singlet-singlet state turns out to be the system’s ground state at any inter-dot distance. For $ d \lesssim 2$ nm the $ S_{e} = 0 $ and $ S_{e} = 1 $ configurations are degenerate with respect to the value of $ S_{h} $; the difference between the total energies follows that between the two-electron triplet and singlet states.
As $d$ increases, the energies of all spin configurations but $S_{e}=S_{h}=0$ asymptotically tend to a value which is twice the energy of the excitonic ground state in a single QD. The energy of the singlet-singlet state instead tends to that of the biexcitonic ground state in a single QD: the difference between these two asymptotic values is the already mentioned biexcitonic binding energy $\Delta E$.

Summary
=======

We have presented a unified theoretical description of the many-body states of a few electrons (holes) or few excitons confined in coupled quantum dots. In these systems, inter-dot coupling controls the competition between kinetic energy and Coulomb interactions, and can reach regimes far beyond those accessible in natural molecules. The resulting ground state is therefore very different for different values of the coupling: we have shown that a system of few electrons is characterized by different spin configurations depending on the inter-dot coupling, and we have extensively discussed the marked variations arising in the electron-electron correlation functions. In the case of two electrons and two holes we have identified ground states corresponding to both pairs being localized in one of the dots (weak coupling) or distributed over both dots (strong coupling). Tuning such phases by external fields is possible, and is found to induce novel quantum effects that will be described elsewhere [@tobepubl]. Manifestations of transitions between such phases in addition spectra or optical spectra are expected to lead to a direct experimental verification of many-body-theory predictions, and to the experimental control of the many-body states in nanoscale devices.

Acknowledgements
================

This work was supported in part by the EC through the SQID and Ultrafast Projects, and by INFM through PRA SSQI.

L. Jacak, P. Hawrylak, and A. Wójs, [*Quantum Dots,*]{} (Springer, Berlin, 1998). D. Bimberg, M. Grundmann, and N. N.
Ledentsov, [*Quantum Dot Heterostructures,*]{} (Wiley, Chichester, 1998). U. Woggon, [*Optical Properties of Semiconductor Quantum Dots,*]{} (Springer, Berlin, 1997). M. A. Kastner, Phys. Today [**46,**]{} 24 (1993). S. Lloyd, Science [**261,**]{} 1569 (1993). For a review see L. P. Kouwenhoven, C. M. Marcus, P. L. McEuen, S. Tarucha, R. M. Westervelt, and N. S. Wingreen, in [*Mesoscopic Electron Transport,*]{} edited by L. L. Sohn, L. P. Kouwenhoven, and G. Schoen, (Kluwer, Dordrecht, 1997), p. 105. Leo Kouwenhoven, Science [**268,**]{} 1440 (1995). L. P. Kouwenhoven, F. W. J. Hekking, B. J. van Wees, C. J. P. M. Harmans, C. E. Timmering, and C. T. Foxon, Phys. Rev. Lett. [**65,**]{} 361 (1990). F. R. Waugh, M. J. Berry, D. J. Mar, R. M. Westervelt, K. L. Campman, and A. C. Gossard, Phys. Rev. Lett. [**75,**]{} 705 (1995); F. R. Waugh, M. J. Berry, C. H. Crouch, C. Livermore, D. J. Mar, R. M. Westervelt, K. L. Campman, and A. C. Gossard, Phys. Rev. B [**53,**]{} 1413 (1996); C. Livermore, C. H. Crouch, R. M. Westervelt, K. L. Campman, and A. C. Gossard, Science [**274,**]{} 1332 (1996); T. H. Wang and S. Tarucha, Appl. Phys. Lett. [**71,**]{} 2499 (1997); A. S. Adourian, C. Livermore, R. M. Westervelt, K. L. Campman, and A. C. Gossard, Appl. Phys. Lett. [**75,**]{} 424 (1999). C. H. Crouch, C. Livermore, R. M. Westervelt, K. L. Campman, and A. C. Gossard, Appl. Phys. Lett. [**71,**]{} 817 (1997). K. A. Matveev, L. I. Glazman, and H. U. Baranger, Phys. Rev. B [**53,**]{} 1034 (1996); [*ibid.*]{} [**54,**]{} 5637 (1996); John M. Golden and Bertrand I. Halperin, Phys. Rev. B [**53,**]{} 3893 (1996); [*ibid.*]{} [**54,**]{} 16757 (1996). C. A. Stafford and S. Das Sarma, Phys. Rev. Lett. [**72,**]{} 3590 (1994); Gerhard Klimeck, Guanlong Chen, and Supriyo Datta, Phys. Rev. B [**50,**]{} 2316 (1994); Guanlong Chen, Gerhard Klimeck, Supriyo Datta, Guanhua Chen, and William A. Goddard III, Phys. Rev. B [**50,**]{} 8035 (1994); R. Kotlyar and S. Das Sarma, Phys. Rev. 
--- abstract: | We propose a new model for naturally realizing light Dirac neutrinos and explaining the baryon asymmetry of the universe through neutrinogenesis. To achieve these goals, we present a minimal construction which extends the standard model with a real singlet scalar, a heavy singlet Dirac fermion and a heavy doublet scalar besides three right-handed neutrinos, respecting lepton number conservation and a $Z_2^{}$ symmetry. The neutrinos acquire small Dirac masses due to the suppression by the ratio of the weak scale over a heavy mass scale. As a key feature of our construction, once the heavy Dirac fermion and doublet scalar go out of equilibrium, their decays induce the CP asymmetry from the interference of tree-level processes with the *radiative vertex corrections* (rather than the self-energy corrections). Although there is no lepton number violation, an equal and opposite amount of lepton asymmetry is generated in the left-handed and the right-handed neutrinos. The left-handed lepton asymmetry would then be converted to the baryon asymmetry in the presence of the sphalerons, while the right-handed lepton asymmetry remains unaffected.\ \[2mm\] author: - 'Pei-Hong Gu$^{1}_{}$' - 'Hong-Jian He$^{2}_{}$' - 'Utpal Sarkar$^{3}_{}$' title: Realistic Neutrinogenesis with Radiative Vertex Correction --- Strong evidence from neutrino oscillation experiments[@pdg2006] has so far pointed to tiny but nonzero masses for the active neutrinos. The smallness of the neutrino masses can be elegantly understood via the seesaw mechanism[@minkowski1977] in various extensions of the standard model (SM). The origin of the observed baryon asymmetry[@pdg2006] in the universe poses a real challenge to the SM, but within the seesaw scenario it can be naturally explained through leptogenesis [@fy1986; @luty1992; @fps1995; @ms1998; @di2002; @kl1984]. In the conventional leptogenesis scenario, lepton number violation is essential, as it is always associated with the mass generation of Majorana neutrinos.
However, the Majorana or Dirac nature of the neutrinos is unknown a priori and awaits upcoming experimental determination. It is important to note[@ars1998; @dlrw1999] that even with lepton number conservation, it is possible to generate the observed baryon asymmetry in the universe. Since the sphaleron processes[@krs1985] have no direct effect on the right-handed fields, a nonzero lepton asymmetry stored in the left-handed fields, which is equal but opposite to that stored in the right-handed fields, can be partially converted to the baryon asymmetry: as long as the interactions connecting the left-handed and the right-handed lepton numbers are too weak to reach equilibrium before the electroweak phase transition, the sphalerons convert the lepton asymmetry in the left-handed fields, leaving the asymmetry in the right-handed fields unaffected [@dlrw1999; @mp2002; @ap2006; @gdu2006; @gh2006]. For all the SM species, the Yukawa interactions are sufficiently strong to rapidly cancel the stored left- and right-handed lepton asymmetry. However, the effective Yukawa interactions of the ultralight Dirac neutrinos are exceedingly weak[@rw1983; @rs1984] and thus will not reach equilibrium until the temperature falls well below the weak scale. In some realistic models[@mp2002; @gdu2006; @gh2006], the effective Yukawa couplings of the Dirac neutrinos are naturally suppressed by the ratio of the weak scale over the heavy mass scale. Simultaneously, the heavy particles can decay with a CP asymmetry to generate the expected left-handed lepton asymmetry after they are out of equilibrium. This new type of leptogenesis mechanism is called neutrinogenesis [@dlrw1999]. In this paper, we propose a new model to generate the small Dirac neutrino masses and explain the origin of the cosmological baryon asymmetry, by extending the SM with a real scalar, a heavy Dirac fermion singlet and a heavy doublet scalar besides three right-handed neutrinos.
In comparison with all previous realistic neutrinogenesis models [@mp2002; @gdu2006; @gh2006], the Dirac neutrino masses in our new model are also suppressed by the ratio of the weak scale over the heavy mass scale, but the crucial difference is that in the decays of the heavy particles, the *radiative vertex corrections* (instead of the self-energy corrections) interfere with the tree-level diagrams to generate the required CP asymmetry and naturally realize neutrinogenesis.

| Fields | $SU(2)_{L}^{}$ | $U(1)_{Y}^{}$ | $Z_{2}^{}$ |
|:---:|:---:|:---:|:---:|
| $\psi_{L}^{}$ | **2** | $-1/2$ | $+$ |
| $\phi$ | **2** | $-1/2$ | $+$ |
| $\nu_{R}^{}$ | **1** | $0$ | $-$ |
| $D_{L,R}^{}$ | **1** | $0$ | $+$ |
| $\eta$ | **2** | $-1/2$ | $-$ |
| $\chi$ | **1** | $0$ | $-$ |

We summarize the field content in Table\[charge\], in which $\psi_{L}^{}$, $\phi$, $\nu_{R}^{}$, $D_{L,R}^{}$, $\eta$ and $\chi$ denote the left-handed lepton doublets, the SM Higgs doublet, the right-handed neutrinos, the heavy singlet Dirac fermion, the heavy doublet scalar and the real scalar, respectively. Here $\psi_{L}^{}$, $\nu_{R}^{}$, $D_L^{}$ and $D_R^{}$ carry lepton number $1$ while $\phi$, $\eta$ and $\chi$ have zero lepton number. For simplicity, we have omitted the family indices as well as other SM fields, which carry even parity under the discrete symmetry $Z_{2}^{}$. It should be noted that the conventional dimension-4 Yukawa interactions among the left-handed lepton doublets, the SM Higgs doublet and the right-handed neutrinos are forbidden under the $Z_{2}^{}$ symmetry.
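The charge assignments above can be verified mechanically. Below is a minimal bookkeeping sketch (a hypothetical helper, not part of the paper) that checks each operator of the Lagrangian given below for $Z_{2}^{}$ evenness and vanishing net lepton number; conjugated fields flip the sign of their lepton number, and the $SU(2)_{L}^{}\times U(1)_{Y}^{}$ contractions are taken as in the table.

```python
# (Z_2 parity, lepton number) assignments from the table; other SM fields are Z_2-even.
charges = {
    'psi_L': (+1, 1), 'phi': (+1, 0), 'nu_R': (-1, 1),
    'D':     (+1, 1), 'eta': (-1, 0), 'chi':  (-1, 0),
}

def allowed(term):
    """term = [(field, conjugated?), ...]; a term is invariant iff the product
    of Z_2 parities is +1 and the lepton numbers (sign flipped for conjugated
    fields) sum to zero."""
    parity, lepton = 1, 0
    for field, conj in term:
        z2, L = charges[field]
        parity *= z2
        lepton += -L if conj else L
    return parity == +1 and lepton == 0

terms = {
    'f  psibar_L phi D_R':   [('psi_L', True), ('phi', False), ('D', False)],
    'g  chi Dbar_L nu_R':    [('chi', False), ('D', True), ('nu_R', False)],
    'y  psibar_L eta nu_R':  [('psi_L', True), ('eta', False), ('nu_R', False)],
    'mu chi eta^dag phi':    [('chi', False), ('eta', True), ('phi', False)],
    # the forbidden SM-like neutrino Yukawa coupling:
    'psibar_L phi nu_R':     [('psi_L', True), ('phi', False), ('nu_R', False)],
}
for name, term in terms.items():
    print(f"{name:22s} -> {'allowed' if allowed(term) else 'forbidden'}")
```

The last entry reproduces the statement above: the conventional dimension-4 neutrino Yukawa term is odd under $Z_{2}^{}$ and hence forbidden.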
Our model also exactly conserves the lepton number, so we can write down the relevant Lagrangian as follows, $$\begin{aligned} \label{lagrangian1} -\mathcal{L} &\supset& \left\{f_{i}^{}\overline{\psi_{Li}^{}}\phi D_{R}^{} + g_{i}^{}\chi \overline{D_{L}^{}}\nu_{Ri}^{} +y_{ij}^{}\overline{\psi_{Li}^{}}\eta\nu_{Ri}^{} -\mu\chi\eta^{\dagger}_{}\phi\right.\nonumber\\ && \left.+M_{D}^{}\overline{D_{L}^{}}D_{R}^{} +\textrm{h.c.}\right\}+M_{\eta}^{2}\eta^{\dagger}_{}\eta\,, \end{aligned}$$ where $f_i^{}$, $g_i^{}$ and $y_{ij}^{}$ are the Yukawa couplings, while the cubic scalar coupling $\mu$ has mass dimension one. The parameters $M_D$ and $M_\eta$ in (\[lagrangian1\]) are the masses of the heavy singlet fermion $D$ and the heavy Higgs doublet $\eta$, respectively. Note that in the Higgs potential the scalar doublet $\eta$ has a positive mass-term as shown in the above Eq.(\[lagrangian1\]), while the Higgs doublet $\phi$ and singlet $\chi$ both have negative mass-terms[^1]. Lepton number conservation ensures that there is no Majorana mass term for any of the fermions. As we will discuss below, the vacuum expectation value (*vev*) of $\eta$ comes out to be much less than the *vev*s of the other fields. Thus the first two terms generate mixings of the light Dirac neutrinos with the heavy Dirac fermion, while the third term gives the light Dirac neutrino mass term. The complete mass matrix can now be written in the basis $\left\{ \nu_L^{},~ D_L^{},~\nu_R^{},~D_R^{}\right\}$ as $$\begin{aligned} \label{eq:Mnu44} M = \left[ \begin{array}{cccc}0 & 0 & a & b \\ 0 & 0 & c & d \\ a^{\dagger}_{} & c^{\dagger}_{} & 0 & 0 \\ b^{\dagger}_{} & d^{\dagger}_{} & 0 & 0\end{array}\right] \,, \end{aligned}$$ where $a \equiv y\langle \eta \rangle$, $ b\equiv f \langle \phi\rangle$, $ c\equiv g \langle \chi \rangle$ and $d\equiv M_{D}^{}$. As will be shown below, $\,d\gg a,b,c\,$.
So, the diagonalization of the mass matrix (\[eq:Mnu44\]) generates the light Dirac neutrino masses of order $a - bc/d$ and a heavy Dirac fermion mass of order $d$. As shown in Fig.\[massgeneration\], at low energy we can integrate out the heavy singlet fermion as well as the heavy doublet scalar. Then we obtain the following effective dimension-5 operators, $$\begin{aligned} \label{operator} \mathcal{O}_{5}^{} =\frac{\,f_{i}^{}g_{j}^{}\,}{M_{D}^{}} \overline{\psi_{Li}^{}}\phi\nu_{Rj}^{}\chi -\frac{\,\mu y_{ij}^{}\,}{M_{\eta}^{2}} \overline{\psi_{Li}^{}}\phi\nu_{Rj}^{}\chi +\textrm{h.c.}\,. \end{aligned}$$ Therefore, once the SM Higgs doublet $\phi$ and the real scalar $\chi$ both acquire their *vev*s, the neutrinos naturally acquire small Dirac masses, $$\begin{aligned} \mathcal{L}_m &=& -\left(m_{\nu}^{}\right)_{ij}^{} \overline{\nu_{Li}^{}}\nu_{Rj}^{}+\textrm{h.c.}\,, \end{aligned}$$ where $$\begin{aligned} \label{diracmass} m_{\nu}^{} ~\equiv~ m_{\nu}^{I}+ m_{\nu}^{II} \,, \end{aligned}$$ with [@rw1983] $$\begin{aligned} \label{diracmass1} && \left(m_{\nu}^{I}\right)_{ij}^{} ~=~ -f_{i}^{}g_{j}^{}\frac{\langle \phi \rangle \langle \chi \rangle}{M_{D}^{}} ~=~ -\frac{\,(bc)_{ij}^{}\,}{d} \,, \end{aligned}$$ and [@gh2006] $$\begin{aligned} \label{diracmass2} && \left(m_{\nu}^{II}\right)_{ij}^{~} ~=~ y_{ij}^{}\frac{\mu\langle \phi \rangle \langle \chi \rangle}{M_{\eta}^{2}} ~=~ a_{ij}^{~} \,. \end{aligned}$$ To quantify the second equality in (\[diracmass2\]), we note that, unlike the SM Higgs doublet, the heavy scalar doublet $\eta$ has a positive mass-term in the Higgs potential, so it develops a tiny nonzero *vev* only after $\phi$ and $\chi$ both acquire their *vev*s [@gh2006], $$\begin{aligned} \label{doubletvev} \langle\eta\rangle & \simeq & \frac{\,\mu\langle \phi\rangle \langle \chi \rangle \,} {M^{2}_{\eta}}\,.
\end{aligned}$$ With this we can derive the neutrino mass formula  $m_{\nu}^{II}= y\langle \eta \rangle \equiv a$  from the Lagrangian (\[lagrangian1\]), which confirms Eq.(\[diracmass2\]) above. In the reasonable parameter space of $\,M_D^{} \sim M_{\eta}^{}\sim \mu \gg \left<\chi\right>,\left<\phi\right> $ and $\,(f,\,g,\,y)={O}(1)$, we can naturally realize $ a \ll b,c \ll d\,$. Furthermore, using the second relations in (\[diracmass1\]) and (\[diracmass2\]) we can re-express the summed neutrino mass matrix as $$\begin{aligned} m_{\nu}^{} ~\equiv~ m_{\nu}^{I}+ m_{\nu}^{II} = -bc/d + a \,. \end{aligned}$$ This is consistent with the direct diagonalization of the original Dirac mass matrix (\[eq:Mnu44\]), as mentioned below Eq.(\[eq:Mnu44\]). It is clear that this mechanism of the neutrino mass generation has two essential features: (i) it generates Dirac masses for neutrinos, and (ii) it retains the essence of the conventional seesaw [@minkowski1977] by making the neutrino masses tiny via the small ratio of the weak scale over the heavy mass scale. It is thus called Dirac Seesaw [@gh2006]. In particular, compared to the classification of the conventional type-I and type-II seesaw, we may refer to Eqs.(\[diracmass1\]) and (\[diracmass2\]) as the type-I and type-II Dirac seesaw, respectively. From Eq.(\[diracmass\]) we see that both type-I and type-II seesaws can contribute to the $3\times 3$ mass-matrix $m_\nu^{}$ for the light neutrinos. There are three possibilities in general: (i) $m^I_\nu \gg m^{II}_\nu$, or (ii) $m^{I}_\nu \sim m^{II}_\nu$, or (iii) $m^{I}_\nu \ll m^{II}_\nu$. We note that for case-(iii), the type-II contribution alone can accommodate the neutrino oscillation data even if type-I is fully negligible; while for case-(i) and -(ii), the type-II contribution should still play a nontrivial role for $\nu$-mass generation because $m^I_\nu$ is rank-1 and an additional contribution from $m^{II}_\nu$ is necessary.
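The seesaw suppression stated above can be checked numerically in a one-generation toy version of the mass matrix (\[eq:Mnu44\]): the two Dirac masses are the singular values of the block $\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)$, and for $d \gg b,c \gg a$ the smaller one approaches $|a - bc/d|$. A minimal sketch (purely illustrative numbers, not fitted to data):

```python
import math

def dirac_masses(a, b, c, d):
    """Singular values of the one-generation Dirac block [[a, b], [c, d]].
    s_min is computed as |det|/s_max, which is numerically stable when
    d >> a, b, c (the direct formula (t - disc)/2 suffers cancellation)."""
    t = a*a + b*b + c*c + d*d          # tr(M M^T) = s_max^2 + s_min^2
    det = a*d - b*c                    # |det M| = s_max * s_min
    disc = math.sqrt(max(t*t - 4.0*det*det, 0.0))
    s_max = math.sqrt((t + disc) / 2.0)
    s_min = abs(det) / s_max
    return s_min, s_max

# Toy hierarchy d >> b, c >> a:
a, b, c, d = 1.0e-3, 1.0, 1.0, 1.0e6
m_light, m_heavy = dirac_masses(a, b, c, d)
print(m_light, abs(a - b*c/d))         # light Dirac mass ~ |a - bc/d|
print(m_heavy)                         # heavy Dirac mass ~ d
```

The light mass agrees with the seesaw estimate $|a - bc/d|$ to the stated accuracy, while the heavy mass reproduces $d$ up to relative corrections of order $(b^2+c^2)/d^2$.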
The rank-1 nature of $m_{\nu}^{I}=bc/d$ is due to the fact that there is only one heavy singlet fermion in our current minimal construction, which means that $m_{\nu}^{I}$ has two vanishing mass-eigenvalues. Hence, to accommodate the neutrino oscillation data[@pdg2006] in the case-(i) and -(ii) of our minimal construction always requires a nonzero contribution $m_{\nu}^{II}$ from the type-II Dirac seesaw[^2]. Let us explicitly analyze how this can be realized for the case-(i) and -(ii). As $m_{\nu}^{I}$ is rank-1, we can consider a basis for $m_{\nu}^{I}$ where one of the two massless states is manifest, i.e., $b_1^{} = c_1^{} =0$. For the remaining components of $b$ and $c$, we choose a generic parameter set[^3], $\,b_2^{} \approx b_3^{} \approx -c_2^{} \approx -c_3^{}\,$, which naturally realizes the maximal mixing angle $\theta_{23}^{}=45^{\circ}_{}$ for explaining the atmospheric neutrino mixing. Including the type-II Dirac-seesaw matrix $m_{\nu}^{II}=a$ will then account for the other mixing angles ($\theta_{12}^{},\,\theta_{13}^{}$) and the two other neutrino masses. Thus, we can naturally realize the light neutrino mass-spectrum via both normal hierarchy (NH) and inverted hierarchy (IH) schemes. To be concrete, the NH-scheme is realized in our case-(i) where the type-II Dirac-seesaw matrix $m_{\nu}^{II}=a\equiv \delta \ll m_0^{} \sim m_{\nu}^{I}$, with $m_0^{}$ the neutrino mass scale (fixed by the atmospheric neutrino mass-squared-difference $\Delta_{\textrm{a}}^{}$ with $\,m_0^{} \equiv \sqrt{\Delta_{\textrm{a}}^{}}\,$) and its relations to the nonzero $(b_j^{},\,c_j^{})$ are defined via $\,b_j^{} = \sqrt{m_0^{}d/2}+{O}(\delta)$ and $\,c_j^{} = -\sqrt{m_0^{}d/2}+{O}(\delta)$ for $j=2,3$. Thus we have $$\begin{aligned} \label{NH} \displaystyle m_{\nu}^{} = -\frac{bc}{d} + a =m_0^{}\left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & \frac{1}{2} & \frac{1}{2}\\ 0 & \frac{1}{2} & \frac{1}{2} \end{array}\right) + {O}(\delta )\,.
\end{aligned}$$ It is clear that Eq.(\[NH\]) predicts the neutrino masses, $\,(m_1^{},\,m_2^{},\,m_3^{}) = m_0^{}(0,\,0,\,1) + {O}(\delta)\,$, consistent with the NH mass-spectrum. Next, the IH-scheme can be realized in our case-(ii) where the type-II Dirac-seesaw matrix $m_{\nu}^{II}=a\equiv m_0^{}{\rm diag}(1,0,0) + \delta \,\sim\, m_\nu^I $ with $\,\delta \ll m_0^{}\,$, while the structure of the type-I Dirac seesaw matrix $m_{\nu}^{I}$ remains the same, $$\begin{aligned} \label{IH} \displaystyle m_{\nu}^{} = -\frac{bc}{d} + a =m_0^{}\left(\begin{array}{ccc} 1 & 0 & 0\\ 0 & \frac{1}{2} & \frac{1}{2}\\ 0 & \frac{1}{2} & \frac{1}{2} \end{array}\right) + {O}(\delta )\,. \end{aligned}$$ From this equation we deduce the neutrino masses, $\,(m_1^{},\,m_2^{},\,m_3^{}) = m_0^{}(1,\,1,\,0) + {O}(\delta)\,$, consistent with the IH mass-spectrum. So far we have discussed all three possibilities, the case-(i), -(ii) and -(iii), regarding the relative contributions of the type-I versus type-II seesaw to the neutrino mass matrix $m_\nu^{}$ for accommodating the oscillation data. The question of which one among these three possibilities is realized in nature should be answered by a more fundamental theory which can precisely predict the Yukawa couplings and masses for $D$ and $\eta$ as well as the [*vev*]{} of $\chi$. Finally, we also note that, since the neutrinos are Dirac particles, our Dirac-seesaw construction will be consistent with the possible non-observation of neutrinoless double beta decay ($0\nu\beta\beta$), which will be tested in the upcoming $0\nu\beta\beta$-experiments[@0nu2beta]. The real scalar $\chi$ is expected to acquire its *vev* near the weak scale, so we will set $ \langle\chi\rangle$ around ${O}(\textrm{TeV})$ [^4]. Under this setup, it is straightforward to see that $m_{\nu}^{I}$ will be efficiently suppressed by the ratio of the weak scale over the heavy mass.
For instance, we find that $m_{\nu}^{I}={O}(0.1)\,\textrm{eV}$ for $\,M_{D}^{}= {O}(10^{13-15}_{})\,\textrm{GeV}$ and $(f,\, g,\,y) = {O}(0.1-1)$, where $\langle \phi\rangle\simeq 174\,\textrm{GeV}$. It is also reasonable to set the trilinear scalar coupling $|\mu|$ to be around the scale of the $\eta$ mass $M_{\eta}^{}$. In consequence, the neutrino mass $m_{\nu}^{II}$ in (\[diracmass2\]) will be highly suppressed, similar to $m_{\nu}^{I}$. For example, we derive $m_{\nu}^{II}={O}(0.1)\,\textrm{eV}$ for $M_{\eta}^{} ={O}(10^{13-15}_{})\,\textrm{GeV}$ and $(y,\,\mu/M_\eta ) ={O}(0.1-1)$. So, we can naturally realize the Dirac neutrino masses around ${O}(0.1)\,\textrm{eV}$. We now demonstrate how to generate the observed baryon asymmetry in our model by invoking the neutrinogenesis [@dlrw1999] mechanism. Since the sphaleron processes [@krs1985] have no direct effect on the right-handed neutrinos, and the effective Yukawa interactions of the Dirac neutrinos are too weak to reach equilibrium until temperatures fall well below the weak scale, the lepton asymmetry stored in the left-handed leptons, which is equal but opposite to that stored in the right-handed neutrinos, can be partially converted to the baryon asymmetry by sphalerons. In particular, the final baryon asymmetry should be $$\begin{aligned} B ~=~ \frac{28}{79}\left(B-L_{SM}^{}\right) ~=~ -\frac{28}{79}L_{SM}^{} \,, \end{aligned}$$ for the SM with three generations of fermions and one Higgs doublet. In the pure type-I Dirac seesaw scenario [@gdu2006], we can generate the CP asymmetry through the interference between the tree-level decay and the self-energy loops if there exist at least two heavy fermion singlets. Similarly, the pure type-II Dirac seesaw model [@gh2006] also needs two heavy scalar doublets to obtain the self-energy loops in the decays.
In the following, we shall focus on the minimal construction with only one heavy singlet fermion and one heavy doublet scalar to realize the radiative vertex corrections for the CP asymmetry, although further extensions are allowed in our current scenario. In this framework, depending on the values of the masses and couplings, the leptogenesis can be realized either from the decay of the heavy singlet fermion or from the decay of the heavy doublet scalar. From the decay of the heavy singlet fermion to the left-handed leptons and the SM Higgs doublet, as shown in Fig.\[decay1\], the CP asymmetry is given by $$\begin{aligned} \label{cp1} \varepsilon^{I}_{} &\equiv& \frac{\Gamma\left(D_R^{}\rightarrow\psi_{L}^{}\phi^{\ast}_{} \right) -\Gamma\left(D^c_R\rightarrow\psi_{L}^{c}\phi\right)} {\Gamma_{D}^{}}\nonumber \\[3mm] &=& \frac{1}{4\pi}\frac{\textrm{Im} \left[\textrm{Tr}\left(f^{\dagger}_{}yg^{\dagger}_{}\right) \mu\right]M_{\eta}^{2}} {\left[\textrm{Tr} \left(f_{}^{\dagger}f\right)+\frac{1}{2}\textrm{Tr} \left(g_{}^{\dagger}g\right)\right]M_{D}^{3}}\nonumber \\[1.7mm] && \times \ln\left(1+\frac{M_{D}^{2}}{M_{\eta}^{2}} \right)\,, \end{aligned}$$ where $$\begin{aligned} \Gamma_{D}^{} &=& \frac{1}{16\pi}\left[\textrm{Tr} \left(f_{}^{\dagger}f\right)+\frac{1}{2}\textrm{Tr} \left(g_{}^{\dagger}g\right)\right]M_{D}^{} \end{aligned}$$ is the total decay width of $D$ or $D_{}^c$. Here we have taken $M_{D}^{}$ to be real after proper phase rotation. Furthermore, from the decay of the heavy doublet scalar to the left-handed leptons and the right-handed neutrinos, a CP asymmetry can also be produced. 
It is given by the interference of the tree-level process with the one-loop vertex diagram as shown in Fig.\[decay2\], $$\begin{aligned} \label{cp2} \hspace*{-3mm} \varepsilon^{II}_{} &\equiv& \frac{\,\Gamma\left(\eta\rightarrow\psi_{L}^{}\nu_{R}^{c} \right) -\Gamma\left(\eta^{\ast}_{}\rightarrow\psi_{L}^{c} \nu_{R}\right)\,} {\Gamma_{\eta}^{}} \nonumber\\[2mm] \hspace*{-3mm} &=& \frac{1}{4\pi}\frac{\textrm{Im}\left[\textrm{Tr} \left(f^{\dagger}_{}yg^{\dagger}_{}\right) \mu\right]M_{D}^{}}{\left[\textrm{Tr} \left(y_{}^{\dagger}y\right)M_{\eta}^{2}+ |\mu|^{2}_{}\right]} \ln\!\left(\!1+\frac{M_{\eta}^{2}}{M_{D}^{2}}\right)\,, \end{aligned}$$ where $$\begin{aligned} \Gamma_{\eta}^{} ~=~ \frac{1}{16\pi} \left[\textrm{Tr}\left(y^{\dagger}_{}y\right) +\frac{\left|\mu\right|^{2}_{}}{M_{\eta}^{2}}\right] M_{\eta}^{}\, \end{aligned}$$ is the total decay width of $\eta$ or $\eta^{\ast}_{}$. In the case where the masses of the heavy singlet fermion and heavy doublet scalar lie around the same scale, and their couplings are of the same order of magnitude, the two types of asymmetry of Eqs.(\[cp1\]) and (\[cp2\]) can both be important for neutrinogenesis. For illustration below, we will analyze two typical scenarios where one process dominates over the other. Scheme-1 is defined for $M_{D}^{}\ll M_{\eta}^{}$ and $f\sim g\sim y$, under which the final left- or right-handed lepton asymmetry mainly comes from the pair decays of $(D,\,D^{c}_{})$.
We can simplify the CP asymmetry (\[cp1\]) as $$\begin{aligned} \hspace*{-5mm} \varepsilon^{I}_{} &\simeq&\frac{1}{64\pi^{2}_{}} \frac{M_{D}^{}M_{\eta}^{2}\textrm{Im} \left[\textrm{Tr}\left(m_{\nu}^{I\dagger}m_{\nu}^{II} \right)\right]} {\langle\phi\rangle^{2}_{}\langle\chi\rangle^{2}_{} \Gamma_{D}^{}} \nonumber \\[3mm] \hspace*{-5mm} &=& \left[\frac{45}{\left(4\pi\right)^{7}_{} g_{\ast}^{}}\right]^{\frac{1}{2}}_{} \frac{1}{K_{D}^{}}\frac{M_{\eta}^{2}}{M_{D}^{2}} \nonumber \\[1.8mm] && \times\frac{M_{\textrm{Pl}}^{}M_{D}^{}\textrm{Im} \left[\textrm{Tr}\left(m_{\nu}^{I\dagger}m_{\nu}^{II} \right)\right]} {\langle\phi\rangle^{2}_{}\langle\chi\rangle^{2}_{}} \end{aligned}$$ with $$\begin{aligned} \label{kd} K_{D}^{} &\equiv& \left.\frac{\Gamma_{D}^{}}{H}\right|^{}_{T=M_{D}^{}} \end{aligned}$$ as a measurement of the deviation from equilibrium for $D$. Here $H$ is the Hubble constant, $$\begin{aligned} \label{eq:HubbleC} H(T) &=& \left(\frac{4\pi^{3}_{}g_{\ast}^{}}{45} \right)^{\frac{1}{2}} \frac{T^{2}_{}}{M_{\textrm{Pl}}^{}}\,, \end{aligned}$$ with $g_{\ast}^{}={O}(100)$ and $M_{\textrm{Pl}}\simeq 1.2\times 10^{19}_{}\,\textrm{GeV}$. 
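As a numerical illustration of $K_{D}^{}$, the decay width $\Gamma_{D}^{}$ and the Hubble rate $H(T)$ of Eq.(\[eq:HubbleC\]) can be evaluated directly. The sketch below assumes that a single flavor dominates the couplings $f$ and $g$ (an assumption on our part), with the sample values used in the Scheme-1 example discussed in the text:

```python
import math

# Departure-from-equilibrium parameter K_D = Gamma_D / H(T = M_D).
M_Pl   = 1.2e19      # GeV, Planck mass
g_star = 100.0       # relativistic degrees of freedom
M_D    = 1.8e11      # GeV (so that M_eta = 10 M_D = 1.8e12 GeV)
f, g   = 0.033, 0.005

# Decay width of D with single-flavor-dominated couplings:
Gamma_D = (f*f + 0.5*g*g) / (16.0 * math.pi) * M_D
# Hubble rate at T = M_D:
H_MD = math.sqrt(4.0 * math.pi**3 * g_star / 45.0) * M_D**2 / M_Pl
K_D = Gamma_D / H_MD
print(f"Gamma_D = {Gamma_D:.2e} GeV, H(M_D) = {H_MD:.2e} GeV, K_D = {K_D:.0f}")
```

With these inputs $K_{D}^{}\simeq 88$, matching the estimate quoted in the text below.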
Note that there is a correlation between $K_{D}^{}$ and $m_{\nu}^{I}$, $$\begin{aligned} \overline{m}_{I}^{2}&\equiv& \textrm{Tr}\left(m_{\nu}^{I\dagger}m_{\nu}^{I}\right) \nonumber\\ &=& \textrm{Tr}\left(f^{\dagger}_{}fgg^{\dagger}_{}\right) \frac{\langle\phi\rangle^{2}_{} \langle\chi\rangle^{2}_{}}{M_{D}^{2}} \nonumber\\ &=&\sum_{i}^{}\left|f_{i}^{} \right|_{}^{2}\left|g_{i}^{}\right|_{}^{2} \frac{\langle\phi\rangle^{2}_{} \langle\chi\rangle^{2}_{}}{M_{D}^{2}} \nonumber\\ &<&\sum_{i}^{}\left|f_{i}^{} \right|_{}^{2}\sum_{j}^{}\left|g_{j}^{}\right|_{}^{2} \frac{\langle\phi\rangle^{2}_{} \langle\chi\rangle^{2}_{}}{M_{D}^{2}} \nonumber\\ &=& 2\left(16\pi\right)^{2}_{} B_{L}^{}B_{R}^{}\Gamma_{D}^{2} \frac{\langle\phi\rangle^{2}_{} \langle\chi\rangle^{2}_{}}{M_{D}^{4}} \nonumber\\ &=& \frac{2\left(4\pi\right)^{5}_{}g_{\ast}^{}}{45} B_{L}^{}B_{R}^{}K_{D}^{2} \frac{\langle\phi\rangle^{2}_{} \langle\chi\rangle^{2}_{}}{M_{\textrm{Pl}}^{2}}\,, \end{aligned}$$ and hence $$\begin{aligned} \label{constrain} K_{D}^{} & > & \left[\frac{45}{2\left(4\pi\right)^{5}_{} g_{\ast}^{}B_{L}^{}B_{R}^{}}\right]^{\frac{1}{2}}_{} \frac{\,M_{\textrm{Pl}}^{}\overline{m}_{I}\,} {\langle\phi\rangle\langle\chi\rangle}\,. \end{aligned}$$ Here $B_{L}^{}$ and $B_{R}^{}$ are the branching ratios of the heavy fermion singlet decaying into the left-handed lepton doublets and the right-handed neutrinos, respectively. They satisfy the following relationship, $$\begin{aligned} \label{relation} B_{L}^{}+B_{R}^{}\equiv 1\,,\, ~&\Rightarrow&~ \, B_{L}^{}B_{R}^{} \leqslant \frac{1}{4}\,. \end{aligned}$$ For instance, we may choose the sample inputs, $\,M_{\eta}= 10\,M_{D}^{} = 1.8\times 10^{12}_{}\,\textrm{GeV}\gg M_D^{}$, $\left<\phi\right>=174\,\textrm{GeV}$, $\left<\chi\right>=400\,\textrm{GeV}$ and $(y,\,f,\, g,\,\mu/M_{\eta} ) \simeq (0.02,\,0.033,\,0.005,\,0.01)=O(0.01)$. 
Thus, we can estimate the light neutrino mass scale, $\overline{m}_{I}^{} ={O}(m_{\nu}^{I})= {O}(10\,m_{\nu}^{II})\simeq 0.06\,\textrm{eV}$. It follows that $\,B_{L}^{}B_{R}^{}\simeq 0.99\times 0.011 \simeq 0.011$, and thus $K_{D}^{}\simeq 88$. This leads to $\varepsilon_{}^{I} \simeq -2.4\times 10^{-5}_{}$ for the maximal CP phase. We then use the approximate relation [@kt1990; @ht2001] to deduce the final baryon asymmetry, $$\begin{aligned} \label{asymmetry} Y_{B}^{} \equiv \frac{n_{B}^{}}{s}&\simeq& - \frac{28}{79} \times \frac{0.3\left(\varepsilon_{~}^{I}/g_{\ast}^{}\right)} {K_{D}^{}\left(\ln K_{D}^{}\right)^{0.6}_{}} \nonumber\\ &\simeq& 10^{-10}_{}\,. \end{aligned}$$ This is consistent with the current observations [@pdg2006]. Furthermore, the relation $m_{\nu}^{I}={O}(0.1\,\textrm{eV}) \gg m_{\nu}^{II}$ shows the dominance of the type-I Dirac seesaw. As we mentioned earlier, this can be realized via the NH mass spectrum of the light neutrinos. We also note that even if the Dirac fermion singlet is at a fairly low mass scale, such as the TeV scale, it is still feasible to efficiently enhance the CP asymmetry as long as the ratio ${M_{\eta}}/{M_{D}}$ is large enough. In other words, we can realize low-scale neutrinogenesis without invoking the conventional resonant effect to enhance the CP asymmetry (which requires at least two heavy Dirac fermion singlets). Scheme-2 is defined by the other possibility, $M_{\eta}^{} \ll M_{D}^{}$ with $f\sim g\sim y$. Hence the final left- or right-handed lepton asymmetry is dominated by the pair decays of $(\eta,\,\eta^{\ast}_{})$. 
We derive the following CP asymmetry from (\[cp2\]), $$\begin{aligned} \hspace*{-5mm} \varepsilon^{II}_{} &\simeq& \frac{1}{64\pi^{2}_{}}\frac{M_{\eta}^{3}\textrm{Im} \left[\textrm{Tr}\left(m_{\nu}^{I\dagger}m_{\nu}^{II} \right)\right]} {\langle\phi\rangle^{2}_{}\langle\chi\rangle^{2}_{} \Gamma_{\eta}^{}} \nonumber\\[3mm] \hspace*{-5mm} &=&\left[\frac{45}{\left(4\pi\right)^{7}_{} g_{\ast}^{}}\right]^{\!\frac{1}{2}}_{}\frac{1}{K_{\eta}^{}}\!\! \nonumber\\[1.8mm] && \times \frac{\,M_{\textrm{Pl}}^{}M_{\eta}^{}\textrm{Im} \left[\textrm{Tr}\left(m_{\nu}^{I\dagger}m_{\nu}^{II} \right)\right]\,} {\langle\phi\rangle^{2}_{} \langle\chi\rangle^{2}_{}}\,, \end{aligned}$$ where $K_{\eta}^{}$ is given by $$\begin{aligned} K_{\eta}^{} &\equiv & \left.\frac{\Gamma_{\eta}^{}}{H} \right|^{}_{T=M_{\eta}^{}} \,, \end{aligned}$$ with the Hubble parameter $\,H(T)\,$ expressed in Eq.(\[eq:HubbleC\]). Here the parameter $\,K_{\eta}^{}\,$ measures the deviation from equilibrium for $\eta$. We deduce the correlation between $K_{\eta}^{}$ and $m_{\nu}^{II}$, $$\begin{aligned} \overline{m}_{II}^{2}&\equiv& \textrm{Tr}\left(m_{\nu}^{II\dagger}m_{\nu}^{II}\right) \nonumber\\[3mm] &=& \textrm{Tr}\left(y^{\dagger}_{}y\right) \frac{|\mu|^{2}_{}\langle\phi\rangle^{2}_{} \langle\chi\rangle^{2}_{}}{M_{\eta}^{4}} \nonumber\\[3mm] &=& \left(16\pi\right)^{2}_{}B_{f}^{}B_{s}^{} \Gamma_{\eta}^{2}\frac{\langle\phi\rangle^{2}_{} \langle\chi\rangle^{2}_{}}{M_{\eta}^{4}} \nonumber \\[3mm] &=& \frac{\left(4\pi\right)^{5}_{}g_{\ast}^{}}{45} B_{f}^{}B_{s}^{}K_{\eta}^{2} \frac{\langle\phi\rangle^{2}_{} \langle\chi\rangle^{2}_{}}{M_{\textrm{Pl}}^{2}}\,, \end{aligned}$$ and also $$\begin{aligned} K_{\eta}^{}=\left[\frac{45}{\left(4\pi\right)^{5}_{} g_{\ast}^{}B_{f}^{}B_{s}^{}}\right]^{\frac{1}{2}}_{} \frac{M_{\textrm{Pl}}^{}\overline{m}_{II}^{}} {\langle\phi\rangle\langle\chi\rangle}\,, \end{aligned}$$ where $B_{f}^{}$ and $B_{s}^{}$ are the branching ratios of the heavy scalar doublet decaying into the light 
fermions and the scalars, respectively. Similar to Eq.(\[relation\]), they satisfy $$\begin{aligned} B_{f}^{}+B_{s}^{}\equiv 1\,,\, ~&\Rightarrow&~ \, B_{f}^{}B_{s}^{} \leqslant \frac{1}{4}\,.\end{aligned}$$ For instance, given the sample inputs, $\,M_{\eta}^{}=26\mu =0.1\, M_{D}^{} =2\times 10^{13}_{}\,\textrm{GeV}\ll M_D\,$, $\left<\phi\right>=174\,\textrm{GeV}$, $\left<\chi\right>=400\,\textrm{GeV}$ and $\,(f,\,g,\,y)=(0.16,\,0.16,\,0.34)={O}(0.1)$, we can estimate the light neutrino mass scale, $\,\overline{m}_{II}^{} ={O}(m_{\nu}^{II})={O}(10\,m_{\nu}^{I}) \simeq 0.05\,\textrm{eV}$. Subsequently, we derive $B_{f}^{}B_{s}^{} \simeq 0.99\times 0.013\simeq 0.012$ and $K_{\eta}^{}\simeq 84$. This leads to $\varepsilon_{}^{II} \simeq -2.3\times 10^{-5}_{}$ for the maximal CP phase. Using the approximate analytical formula [@kt1990; @ht2001] for the baryon asymmetry, we arrive at $$\begin{aligned} \label{asymmetry2} Y_{B}^{} ~\simeq~ - \frac{28}{79} \times\frac{0.3 \left(\varepsilon_{~}^{II}/g_{\ast}^{}\right)} {K_{\eta}^{}\left(\ln K_{\eta}^{}\right)^{0.6}_{}} ~\simeq~ 10^{-10}_{}\,, \end{aligned}$$ consistent with the present observation [@pdg2006]. Furthermore, we note that in Scheme-2 the active neutrino masses are dominated by the type-II Dirac seesaw, $\,m_{\nu}^{II} ={O}(0.1\, \textrm{eV})\gg m_{\nu}^{I}$, and both the NH and IH neutrino mass spectra can be realized. In this paper, we have presented a new possibility to realize neutrinogenesis in the Dirac seesaw scenario. In our minimal construction, we introduce a real scalar $\chi$, a heavy singlet Dirac fermion $D$ and a heavy doublet scalar $\eta$, besides three right-handed singlet neutrinos, to the SM. Therefore, in contrast to previous realistic neutrinogenesis models, the *radiative vertex corrections* rather than the self-energy corrections interfere with the tree-level diagrams to generate the CP asymmetry in the decays of the heavy particles. 
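As a quick numerical cross-check of the two sample points above, the approximate formula for $Y_{B}^{}$ can be evaluated directly. The following sketch takes $g_{\ast}^{}=100$ and the quoted values of the CP asymmetries and $K$ parameters as inputs:

```python
import math

def baryon_asymmetry(eps, K, g_star=100.0):
    # Y_B ~ -(28/79) * 0.3 * (eps / g_star) / (K * (ln K)^0.6)
    return -(28.0 / 79.0) * 0.3 * (eps / g_star) / (K * math.log(K) ** 0.6)

# Scheme-1: eps^I ~ -2.4e-5 with K_D ~ 88
print(baryon_asymmetry(-2.4e-5, 88))   # ~ 1.2e-10
# Scheme-2: eps^II ~ -2.3e-5 with K_eta ~ 84
print(baryon_asymmetry(-2.3e-5, 84))   # ~ 1.2e-10
```

Both sample points indeed reproduce $Y_{B}^{}\simeq 10^{-10}$.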
Finally, we note that the real singlet scalar $\chi$ at the weak scale can couple to the SM Higgs doublet $\phi$ via the $Z_2^{}$-conserving quartic interaction $\chi\chi\phi^\dagger_{}\phi\,$. In consequence, the lightest neutral Higgs boson $h^0_{}$ is a mixture of $\phi^0_{}$ and $\chi$, leading to non-SM-like anomalous couplings of $h^0_{}$ with the weak gauge bosons ($W^\pm,\,Z$) and the SM fermions. This can significantly modify the light Higgs boson ($h^0_{}$) phenomenology at the Tevatron Run-2, the CERN LHC and the future International Linear Collider (ILC) [@bgm1977]. A systematic study of the collider phenomenology of $\phi^0_{}$ and $\chi$ is beyond the present scope and will be given elsewhere. [99]{} Particle Data Group, W.M. Yao *et al.*, Journal of Physics G **33**, 1 (2006); R.N. Mohapatra, A.Y. Smirnov, Ann. Rev. Nucl. Part. Sci. **56**, 569 (2006); A. Strumia and F. Vissani, hep-ph/0606054; and references therein. P. Minkowski, Phys. Lett. B **67**, 421 (1977); T. Yanagida, in [*Proc. of the Workshop on Unified Theory and the Baryon Number of the Universe*]{}, ed. O. Sawada and A. Sugamoto (KEK, Tsukuba, 1979), p. 95; M. Gell-Mann, P. Ramond, and R. Slansky, in [*Supergravity*]{}, ed. F. van Nieuwenhuizen and D. Freedman (North Holland, Amsterdam, 1979), p. 315; S.L. Glashow, in [*Quarks and Leptons*]{}, ed. M. L$\rm\acute{e}$vy [*et al.*]{} (Plenum, New York, 1980), p. 707; R.N. Mohapatra and G. Senjanovi$\rm\acute{c}$, Phys. Rev. Lett. **44**, 912 (1980); J. Schechter and J.W.F. Valle, Phys. Rev. D **22**, 2227 (1980). M. Fukugita and T. Yanagida, Phys. Lett. B **174**, 45 (1986); P. Langacker, R.D. Peccei, and T. Yanagida, Mod. Phys. Lett. A **1**, 541 (1986). M.A. Luty, Phys. Rev. D **45**, 455 (1992); R.N. Mohapatra and X. Zhang, Phys. Rev. D **46**, 5331 (1992). M. Flanz, E.A. Paschos, and U. Sarkar, Phys. Lett. B **345**, 248 (1995); M. Flanz, E.A. Paschos, U. Sarkar, and J. Weiss, Phys. Lett. B **389**, 693 (1996). E. Ma and U. 
Sarkar, Phys. Rev. Lett. **80**, 5716 (1998). S. Davidson and A. Ibarra, Phys. Lett. B **535**, 25 (2002); W. Buchm$\rm\ddot{u}$ller, P. Di Bari, and M. Pl$\rm\ddot{u}$macher, Nucl. Phys. B **643**, 390 (2002); P.H. Frampton, S.L. Glashow, and T. Yanagida, Phys. Lett. B **548**, 119 (2002); W. Buchm$\rm\ddot{u}$ller, P. Di Bari, and M. Pl$\rm\ddot{u}$macher, Nucl. Phys. B **665**, 445 (2003); G.F. Giudice, A. Notari, M. Raidal, A. Riotto, and A. Strumia, Nucl. Phys. B **685**, 89 (2004); V. Barger, D.A. Dicus, H.J. He, and T. Li, Phys. Lett. B **583**, 173 (2004); R.G. Felipe, F.R. Joaquim, and B.M. Nobre, Phys. Rev. D **70**, 085009 (2004); W. Rodejohann, Eur. Phys. J. C **32**, 235 (2004); P. Gu and X.J. Bi, Phys. Rev. D **70**, 063511 (2004); S. Antusch and S.F. King, Phys. Lett. B **597**, 199 (2004); R.N. Mohapatra and S. Nasri, Phys. Rev. D **71**, 033001 (2005); W. Buchm$\rm\ddot{u}$ller, P. Di Bari, and M. Pl$\rm\ddot{u}$macher, Annals Phys. **315**, 305 (2005); N. Sahu and U. Sarkar, Phys. Rev. D **74**, 093002 (2006); P.H. Gu, H. Zhang, and S. Zhou, Phys. Rev. D **74**, 076002 (2006); E.Kh. Akhmedov, M. Blennow, T. Hallgren, T. Konstandin, and T. Ohlsson, JHEP **0704**, 022 (2007); S. Antusch, Phys. Rev. D **76**, 023512 (2007); W. Buchm$\rm\ddot{u}$ller, R.D. Peccei, and T. Yanagida, Ann. Rev. Nucl. Part. Sci. **55**, 311 (2005); and references therein. For earlier related works, see M.Yu. Khlopov and A.D. Linde, Phys. Lett. B **138**, 265 (1984); F. Balestra *et al.*, Sov. J. Nucl. Phys. **39**, 626 (1984); M.Yu. Khlopov, Yu.L. Levitan, E.V. Sedelnikov, and I.M. Sobol, Phys. Atom. Nucl. **57**, 1393 (1994). E.Kh. Akhmedov, V.A. Rubakov, and A.Yu. Smirnov, Phys. Rev. Lett. **81**, 1359 (1998). K. Dick, M. Lindner, M. Ratz, and D. Wright, Phys. Rev. Lett. **84**, 4039 (2000). V.A. Kuzmin, V.A. Rubakov, and M.E. Shaposhnikov, Phys. Lett. B **155**, 36 (1985); R.N. Mohapatra and X. Zhang, Phys. Rev. D **45**, 2699 (1992). H. Murayama and A. Pierce, Phys. 
Rev. Lett. **89**, 271601 (2002); B. Thomas and M. Toharia, Phys. Rev. D **73**, 063512 (2006); B. Thomas and M. Toharia, Phys. Rev. D **75**, 013013 (2007). S. Abel and V. Page, JHEP **0605**, 024 (2006). D.G. Cerdeno, A. Dedes, and T.E.J. Underwood, JHEP **0609**, 067 (2006). P.H. Gu and H.J. He, JCAP **0612**, 010 (2006); P.H. Gu, H.J. He, and U. Sarkar, JCAP (2007), in press, arXiv:0705.3736 \[hep-ph\]. M. Roncadelli and D. Wyler, Phys. Lett. B **133**, 325 (1983). P. Roy and O. Shanker, Phys. Rev. Lett. **52**, 713 (1984); S. Panda, U. Sarkar, Phys. Lett. B **139**, 42 (1984); A.S. Joshipura, P. Roy, O. Shanker, and U. Sarkar, Phys. Lett. B **150**, 270 (1985); A.S. Joshipura, A. Mukherjee, and U. Sarkar, Phys. Lett. B **156**, 353 (1985); U. Sarkar, Phys. Rev. D **35**, 1528 (1987); S. Mishra, S.P. Misra, S. Panda, and U. Sarkar, Phys. Rev. D **35**, 975 (1987); U. Sarkar, Phys. Rev. D **59**, 037302 (1999). E.g., P.H. Frampton, S.L. Glashow, and T. Yanagida, Phys. Lett. B **548**, 119 (2002); V. Barger, D.A. Dicus, H.J. He, and T. Li, Phys. Lett. B **583**, 173 (2004); and references therein. For reviews, P. Vogel, arXiv:hep-ph/0611243; S.R. Elliott and P. Vogel, Ann. Rev. Nucl. Phys. Sci. **52**, 115 (2002); and references therein. Y.B. Zeldovich, I.Y. Kobzarev, and L.B. Okun, Sov. Phys. JETP **40**, 1 (1974); T.W. Kibble, Phys. Rept. **67**, 183 (1980). E.W. Kolb and M. S. Turner, *The Early Universe* (Addison-Wesley Publishing Co, 1990). A. D. Linde, *Particle Physics and Inflationary Cosmology* (Harwood Academic, Switzerland, 1990). G. Dvali and G. Senjanovic, Phys. Rev. Lett. **74**, 5178 (1995). S. Weinberg, Phys. Rev. D **9**, 3357 (1974). G. Dvali and G. Senjanovic, Phys. Rev. Lett. **72**, 9 (1994). H.B. Nielsen and Y. Takanishi, Phys. Lett. B **507**, 241 (2001). For some related works, *e.g.* W. Buchmuller, C. Greub, and P. Minkowski, Phys. Lett. B **267**, 395 (1991); O. Bahat-Treidel, Y. Grossman, and Y. Rozen, JHEP **0705**, 022 (2007); J.R. 
Espinosa and M. Quir$\rm\acute{o}$s, arXiv:hep-ph/0701145. [^1]: The general Higgs potential $V(\phi,\eta,\chi)$ was given in the Appendix of our first paper in Ref.[@gh2006]. [^2]: Note that the type-I Dirac seesaw alone can accommodate the oscillation data once we extend the current minimal construction to include a second heavy fermion $D'$, which makes $m_{\nu}^{I}$ rank 2; this is similar to the minimal (Majorana) neutrino seesaw studied before [@MMnuSS]. [^3]: Here we consider the difference between any two of the four components $|b_j^{}|$ and $|c_j^{}|$ ($j=2,3$) to be much smaller than the components themselves. [^4]: Here we comment on the cosmological domain wall problem associated with the spontaneous breaking of a discrete $Z_2$ symmetry. This problem arises during the phase transition (when the broken discrete symmetry gets restored at the transition temperature) because of the production of topological defects – domain walls – which carry too much energy and would spoil standard big-bang cosmology [@DW]. This can be avoided by inflation as long as the phase-transition temperature is above the inflation scale [@kt1990; @Linde]. Another resolution [@dvali1] of the domain wall problem is realized by the possibility of symmetry non-restoration at high temperature [@Weinberg]. It is also possible that a discrete symmetry like $Z_2$ is not a basic symmetry but appears as a remnant of a continuous symmetry such as $U(1)$, which is free from the domain wall problem [@dvali2]. Finally, the $Z_{2}^{}$ symmetry in our model can also be replaced by a global $U(1)_{D}^{}$ as in the pure type-I Dirac seesaw model [@rw1983], and the phenomenology of the Goldstone boson associated with this $U(1)_{D}^{}$ breaking was discussed in [@gdu2006].
--- abstract: 'We describe a space-efficient algorithm for solving a generalization of the subset sum problem in a finite group $G$, using a Pollard-$\rho$ approach. Given an element $z$ and a sequence of elements $S$, our algorithm attempts to find a subsequence of $S$ whose product in $G$ is equal to $z$. For a random sequence $S$ of length $d\log_2 n$, where $n=\#G$ and $d {\geqslant}2$ is a constant, we find that its expected running time is $O(\sqrt{n}\log n)$ group operations (we give a rigorous proof for $d > 4$), and it only needs to store $O(1)$ group elements. We consider applications to class groups of imaginary quadratic fields, and to finding isogenies between elliptic curves over a finite field.' author: - 'Gaetan Bisson and Andrew V. Sutherland' title: 'A low-memory algorithm for finding short product representations in finite groups' --- Introduction ============ Let $S$ be a sequence of elements in a finite group $G$ of order $n$, written multiplicatively. We say that $S$ *represents* $G$ if every element of $G$ can be expressed as the (ordered) product of a subsequence of $S$. Ideally, we want $S$ to be short, say $k=d\log_2 n$ for some constant $d$ known as the *density* of $S$. In order for $S$ to represent $G$, we clearly require $d{\geqslant}1$, and for sufficiently large $n$, any $d>1$ suffices. More precisely, Babai and Erdős [@babai-erdos] show that for all $$k {\geqslant}\log_2 n + \log_2 \log n + 2$$ there exists a sequence $S$ of length $k$ that represents $G$. Their proof is non-constructive, but, in the case that $G$ is abelian, Erdős and Rényi [@erdos-renyi] show that a randomly chosen sequence of length $$k = \log_2 n + \log_2 \log n + \omega_n$$ represents $G$ with probability approaching $1$ as $n\to\infty$, provided that $\omega_n\to\infty$. The randomness assumption is necessary, since it takes much larger values of $k$ to ensure that *every* sequence of length $k$ represents $G$, see [@eggleton-erdos; @white]. 
In related work, Impagliazzo and Naor prove that for a random sequence $S$ of density $d>1$, the distribution of subsequence products almost surely converges to the uniform distribution on $G$ as $n$ goes to infinity [@impagliazzo-naor Proposition 4.1]. This result allows us to bound the complexity of our algorithm for almost all $S$ with $d > 4$. Given a sequence $S$ that represents $G$ (or a large subset of $G$), we wish to find an explicit representation of a given group element $z$ as the product of a subsequence of $S$; we call this a *short product representation* of $z$. In the special case that $G$ is abelian and the elements of $S$ are distinct, this is the *subset sum problem* in a finite group. Variations of this problem and its decision version have long been of interest to many fields: complexity theory [@karp], cryptography [@merkle-hellman], additive number theory [@babai-erdos], Cayley graph theory [@alon-milman], and information theory [@alon-barak-manber], to name just a few. As a computational framework, we work with a generic group $G$ whose elements are uniquely identified, and assume that all group operations are performed by a black box that can also provide random group elements; see [@sutherland:thesis Chapter 1] for a formal model. Time complexity is measured by counting group operations (calls to the black box), and for space complexity we count the number of group elements that are simultaneously stored. In most practical applications, these metrics are within a polylogarithmic factor of the usual bit complexity. Working in this model ensures that our algorithms apply to any finite group for which a suitable black box can be constructed. It also means that finding short product representations is provably hard. Indeed, the discrete logarithm problem in a cyclic group of prime order has a lower bound of $\Omega(\sqrt{n})$ in the generic group model [@shoup], and is easily reduced to finding short product representations. 
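To make this reduction concrete: in a cyclic group $\langle g\rangle$ of order $n$, taking $S=(g^{2^0},g^{2^1},\ldots,g^{2^{k-1}})$ with $k=\lceil\log_2 n\rceil$ turns any short product representation of $z$ over $S$ into the binary expansion of $\log_g z$. A minimal sketch in $({\mathbb Z}/101{\mathbb Z})^*$ follows; the prime, generator, and exponent are our own illustrative choices, and brute force stands in for a generic solver:

```python
p, g = 101, 2              # 2 generates (Z/101Z)^*, a cyclic group of order n = 100
n = p - 1
k = n.bit_length()         # k = 7 = ceil(log2 n), so S has density d ~ 1
S = [pow(g, 1 << i, p) for i in range(k)]   # S = (g, g^2, g^4, ..., g^64)

z = pow(g, 77, p)          # target whose (secret) discrete logarithm is 77

# A subsequence of S with product z is a choice of bits b_0..b_{k-1} with
# prod S[i]^{b_i} = z, i.e. sum b_i * 2^i = log_g(z) (mod n).
for m in range(1 << k):
    prod = 1
    for i in range(k):
        if m >> i & 1:
            prod = prod * S[i] % p
    if prod == z:
        print(m % n)       # recovered discrete logarithm: 77
        break
```

Thus any algorithm that finds short product representations also solves the discrete logarithm problem, which justifies the generic lower bound cited above.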
In the particular group $G={\mathbb Z}/n{\mathbb Z}$, we note that finding short product representations is easier for non-generic algorithms: the problem can be lifted to $k$ subset sum problems in ${\mathbb Z}$, which for suitable inputs can be solved with a time and space complexity of $O(n^{0.3113})$ via [@howgravegraham-joux], beating the $\Omega(\sqrt{n})$ generic lower bound noted above. This is not so surprising, since working with integers is often easier than working in generic groups; for instance, the discrete logarithm problem in ${\mathbb Z}$ corresponds to integer division and can be solved in quasi-linear time. A standard technique for solving subset sum problems in generic groups uses a baby-step giant-step approach, which can also be used to find short product representations (Section \[sec:BSGS\]). This typically involves $O(2^{k/2})$ group operations and storage for $O(2^{k/2})$ group elements. The space bound can be improved to $O(2^{k/4})$ via a method of Schroeppel and Shamir [@schroeppel-shamir]. Here, we give a Pollard-$\rho$ type algorithm [@pollard] for finding short product representations in a finite group (Section \[sec:pollard\]). It only needs to store $O(1)$ group elements, and, assuming $S$ is a random sequence of density $d>4$, we prove that its expected running time is $O(\sqrt{n}\log{n})$ group operations; alternatively, by dedicating $O(n^\epsilon)$ space to precomputations, the time complexity can be reduced to $O(\sqrt{n})$ (Section \[sec:analysis\]). We also consider two applications: representing elements of the class group of an imaginary quadratic number field as short products of prime ideals with small norm (Section \[sec:relations\]), and finding an isogeny between two elliptic curves defined over a finite field (Section \[sec:isogenies\]). 
For the latter, our method combines the advantages of [@galbraith] and [@galbraith-hess-smart] in that it requires little memory and finds an isogeny that can subsequently be evaluated in polynomial time. In practice, our algorithm performs well so long as $d {\geqslant}2$, and its low space complexity allows it to feasibly handle much larger problem instances than other generic methods (Section \[sec:comput\]). Algorithms ========== Let $S$ be a sequence of length $k$ in a finite group $G$ of order $n$, let $z$ be an element of $G$, and let ${\mathcal P}(S)$ denote the set of all subsequences of $S$. Our goal is to find a preimage of $z$ under the product map ${\pi}:{\mathcal P}(S)\to G$ that sends a subsequence of $S$ to the (ordered) product of its elements. Baby-step giant-step {#sec:BSGS} -------------------- Let us first recall the baby-step giant-step method. We may express $S=AB$ as the concatenation of two subsequences of roughly equal length. For any sequence $y=(y_1,\ldots,y_m)$, let $\mu(y) = (y_m^{-1},\ldots,y_1^{-1})$, so that ${\pi}(y)$ and ${\pi}(\mu(y))$ are inverses in $G$. We then search for $x\in{\mathcal P}(A)$ (a baby step) and $y\in{\mathcal P}(B)$ (a giant step) which “collide” in the sense that ${\pi}(x) = {\pi}(z\mu(y))$, where $z\mu(y)$ denotes the sequence $(z,y_m^{-1},\ldots,y_1^{-1})$. > <span style="font-variant:small-caps;">Baby-step giant-step Algorithm</span>\ > **<span style="font-variant:small-caps;">Input:</span>** A finite sequence $S$ in a group $G$ and a target $z\in{\pi}({\mathcal P}(S))$.\ > **<span style="font-variant:small-caps;">Output:</span>** A subsequence of $S$ whose product is $z$.\ > > ---- ---------------------------------------------------------------------------------------- > 1. Express $S$ in the form $S=AB$ with $\#A\approx \#B$. > 2. For each $x\in{\mathcal P}(A)$, store $({\pi}(x),x)$ in a table indexed by ${\pi}(x)$. > 3. For each $y\in{\mathcal P}(B)$: > 4. 
Lookup ${\pi}(z\mu(y))$ in the table computed in Step 2. > 5. If ${\pi}(z\mu(y))={\pi}(x)$ is found then output $xy$, otherwise continue. > ---- ---------------------------------------------------------------------------------------- > The table constructed in Step 2 is typically implemented as a hash table, so that the cost of the lookup in Step 4 is negligible. Elements of ${\mathcal P}(A)$ and ${\mathcal P}(B)$ may be compactly represented by bit-strings of length $\lceil k/2\rceil = O(\log n)$, which is approximately the size of a single group element. If these bit-strings are enumerated in a suitable order, each step can be derived from the previous step using $O(1)$ group operations[^1]. The algorithm then performs a total of $O(2^{k/2})$ group operations and has a space complexity of $O(2^{k/2})$ group elements. One can make a time-space trade off by varying the relative sizes of $A$ and $B$. This algorithm has the virtue of determinism, but its complexity $O(n^{d/2})$ is exponential in the density $d$ (as well as $\log n$). For $d > 1$, a randomized approach works better: select $\sqrt{n}$ baby steps $x\in{\mathcal P}(A)$ at random, then select random giant steps $y\in{\mathcal P}(B)$ until a collision ${\pi}(z\mu(y))={\pi}(x)$ is found. Assuming that ${\pi}(x)$ and ${\pi}(z\mu(y))$ are uniformly distributed in $G$, we expect to use $\sqrt{n}$ giant steps. To reduce the cost of each step, one may partition $A$ and $B$ each into approximately $d$ subsequences $A_i$ and $B_i$ and precompute ${\pi}(x)$ for all $x\in{\mathcal P}(A_i)$, and ${\pi}(\mu(y))$ for all $y\in{\mathcal P}(B_i)$. This yields an expected running time of $O(\sqrt{n})$ group operations, using storage for $O(\sqrt{n})$ group elements, for any fixed $d$. 
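A minimal sketch of the deterministic meet-in-the-middle version in additive notation, for $G=({\mathbb Z}/n{\mathbb Z},+)$; the modulus, sequence, and target below are illustrative choices of ours:

```python
def bsgs_subset_product(S, z, n):
    """Find a subsequence of S whose sum is z (mod n), meet-in-the-middle.

    Mirrors the baby-step giant-step algorithm: split S = A + B, tabulate
    all 2^(#A) baby-step sums pi(x) in a hash table, then scan giant steps
    y and look up z - pi(y).  O(2^(k/2)) time and space for k = len(S).
    """
    k = len(S)
    A, B = S[:k // 2], S[k // 2:]
    # Step 2: table of pi(x) for every subsequence x of A, indexed by value.
    table = {}
    for m in range(1 << len(A)):
        s = sum(A[i] for i in range(len(A)) if m >> i & 1) % n
        table.setdefault(s, m)
    # Steps 3-5: for each y in P(B), look up pi(z mu(y)) = z - pi(y).
    for m in range(1 << len(B)):
        t = (z - sum(B[i] for i in range(len(B)) if m >> i & 1)) % n
        if t in table:
            x = table[t]
            return ([A[i] for i in range(len(A)) if x >> i & 1] +
                    [B[i] for i in range(len(B)) if m >> i & 1])
    return None

# Small illustrative instance in Z/127Z.
S = [3**i for i in range(1, 7)] + [5**i for i in range(1, 7)]
rep = bsgs_subset_product(S, 2, 127)
print(sum(rep) % 127)   # 2
```

Subsequences are encoded as bit-strings, matching the compact representation described above.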
A low-memory algorithm {#sec:pollard} ---------------------- In order to use the Pollard-$\rho$ technique, we need a pseudo-random function $\phi$ on the disjoint union ${\mathcal C}={\mathcal A}\sqcup{\mathcal B}$, where ${\mathcal A}={\mathcal P}(A)$ and ${\mathcal B}$ is the set $\{z\mu(y):y\in{\mathcal P}(B)\}$. This map $\phi$ is required to preserve collisions, meaning that ${\pi}(x)={\pi}(y)$ implies ${\pi}(\phi(x))={\pi}(\phi(y))$. Given a hash function ${\eta}:G\to{\mathcal C}$, we may construct such a map as $\phi={\eta}\circ{\pi}$. Under suitable assumptions (see Section \[sec:analysis\]), the Pollard-$\rho$ method can then be applied. > <span style="font-variant:small-caps;">Pollard-$\rho$ Algorithm</span>\ > **<span style="font-variant:small-caps;">Input:</span>** A finite sequence $S$ in a group $G$ and a target $z\in {\pi}({\mathcal P}(S))$.\ > **<span style="font-variant:small-caps;">Output:</span>** A subsequence of $S$ whose product is $z$.\ > > ---- ------------------------------------------------------------------------------------------- > 1. Pick a random element $w\in {\mathcal C}$ and a hash function ${\eta}:G\to {\mathcal C}$. > 2. Find the least $i > 0$ and $j {\geqslant}0$ such that $\phi^{(i+j)}(w)=\phi^{(j)}(w)$. > 3. If $j=0$ then return to Step 1. > 4. Let $s=\phi^{(i+j-1)}(w)$ and let $t=\phi^{(j-1)}(w)$. > 5. If ${\pi}(s) \ne {\pi}(t)$ then return to Step 1. > 6. If $s\in{\mathcal A}$ and $t=z\mu(y)\in{\mathcal B}$ then output $sy$ and terminate. > 7. If $t\in{\mathcal A}$ and $s=z\mu(y)\in{\mathcal B}$ then output $ty$ and terminate. > 8. Return to Step 1. > ---- ------------------------------------------------------------------------------------------- > Step 2 can be implemented with Floyd’s algorithm [@knuth-art2 Exercise 3.1.6] using storage for just two elements of ${\mathcal C}$, which fits in the memory space of $O(1)$ group elements. 
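Floyd's method for Step 2 can be sketched as follows: a "tortoise" advancing one step per iteration and a "hare" advancing two steps must meet inside the cycle, after which the tail length $j$ and the cycle length $i$ are recovered with two more passes, storing only a constant number of iterates. The successor map below is a hypothetical stand-in for $\phi$ with tail length $4$ and cycle length $6$:

```python
def floyd(phi, w):
    """Find the least i > 0 and j >= 0 with phi^(i+j)(w) = phi^(j)(w),
    storing only O(1) elements (Floyd's cycle-finding algorithm)."""
    tortoise, hare = phi(w), phi(phi(w))
    while tortoise != hare:              # phase 1: meet inside the cycle
        tortoise, hare = phi(tortoise), phi(phi(hare))
    tortoise, j = w, 0
    while tortoise != hare:              # phase 2: j = tail length
        tortoise, hare = phi(tortoise), phi(hare)
        j += 1
    hare, i = phi(tortoise), 1
    while tortoise != hare:              # phase 3: i = cycle length
        hare = phi(hare)
        i += 1
    return i, j

# Hypothetical rho-shaped orbit on ten states: tail 0->1->2->3, cycle 4->...->9->4.
succ = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 4}
print(floyd(succ.__getitem__, 0))   # (6, 4)
```

The iterates $s=\phi^{(i+j-1)}(w)$ and $t=\phi^{(j-1)}(w)$ needed in Step 4 can then be reached by replaying $\phi$ from $w$, again without storing the orbit.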
More sophisticated collision-detection techniques can reduce the number of evaluations of $\phi$ while still storing $O(1)$ elements, see [@brent; @sedgewick; @teske]. We prefer the method of *distinguished points*, which facilitates a parallel implementation [@vanoorschot-wiener]. Toy example ----------- Let $G=({\mathbb Z}/n{\mathbb Z},+)$ and define $S$ as the concatenation of the sequences $A=(3^i)$ and $B=(5^i)$ for $i\in\{1,\ldots,k/2\}$. We put $n=127$ and $k=12$, implying $d\approx 1.7$. With ${\mathcal C}={\mathcal A}\sqcup{\mathcal B}$ as above, we define ${\eta}:G\to {\mathcal C}$ via $$x\longmapsto\left\{\begin{array}{cl} (A_i)_{\{i:b_i=1\}}&\text{when }b_0=1\\ z\mu\left((B_i)_{\{i:b_i=1\}}\right)&\text{when }b_0=0 \end{array}\right.$$ where $\sum_{i=0}^{k/2} b_i2^i$ is the binary representation of $96x \bmod n$. Starting from $w=(2,-5^6,-5^3,-5^2,-5)$, the algorithm finds $i=6$ and $j=4$; the successive iterates $\phi^{(0)}(w),\ldots,\phi^{(9)}(w)$ are $$\begin{gathered} (2,-5^6,-5^3,-5^2,-5)\to(3^3,3^5)\to(2,-5^5,-5^4)\\ \to(2,-5^6,-5^5,-5^4,-5^2,-5)\to(3^2,3^4)\to(2,-5^5)\to(3,3^2,3^5)\\ \to(2,-5^2,-5)\to(2,-5^6,-5^4,-5^2,-5)\to(3,3^2,3^3,3^5)\to(3^2,3^4), \end{gathered}$$ where the final arrow returns to $\phi^{(4)}(w)=(3^2,3^4)$, closing a cycle of length $6$ after a tail of length $4$. The two preimages of $(3^2,3^4)$ yield the short product representation $$2\equiv 3+3^2+3^3+3^5+5+5^2+5^4+5^5+5^6\bmod 127.$$ Analysis {#sec:analysis} ======== The Pollard-$\rho$ approach is motivated by the following observation: if $\phi:X\to X$ is a random function on a set $X$ of cardinality $n$, then the expected size of the orbit of any $x\in X$ under the action of $\phi$ is $\sqrt{\pi n/2}$ (see [@sobol-random-sequences] for a rigorous proof). In our setting, $X$ is the set ${\mathcal C}$ and $\phi={\eta}\circ{\pi}$. Alternatively, since $\phi$ preserves collisions, we may regard $X$ as the set ${\pi}({\mathcal C})\subset G$ and use ${\varphi}={\pi}\circ {\eta}$. 
We shall take the latter view, since it simplifies our analysis. Typically the function ${\varphi}$ is not truly random, but under a suitable set of assumptions it may behave so. To rigorously analyze the complexity of our algorithm, we fix a real number $d>4$ and assume that: 1. the hash function ${\eta}:G\to {\mathcal C}$ is a random oracle; 2. $S$ is a random sequence of density $d$. For any finite set $U$, let ${\mathbb U}_U$ denote the uniform distribution on $U$, which assigns to each subset $X$ of $U$ the value $\#X/\#U$. For any function $f:U\to V$, let $f_*{\mathbb U}_U$ denote the *pushforward distribution* by $f$ of ${\mathbb U}_U$, which assigns to each subset $Y$ of $V$ the value $$f_*{\mathbb U}_U(Y) = \frac{\#\{u\in U: f(u)\in Y\}}{\#U}.$$ Assumption (2) implies that $A$ and $B$ are both random sequences with density greater than $2$. By [@impagliazzo-naor Proposition 4.1], this implies that $${\operatorname{Prob}}_A\left[ \left\|{\pi}_*{\mathbb U}_{{\mathcal A}}-{\mathbb U}_{G}\right\|{\geqslant}n^{-c} \right]{\leqslant}n^{-c},$$ where $c=(d-2)/4 > 1/2$, and the *variation distance* $\|\sigma-\tau\|$ between two distributions $\sigma$ and $\tau$ on $G$ is defined as the maximum value of $|\sigma(H)-\tau(H)|$ over all subsets $H$ of $G$. Similarly, we have $${\operatorname{Prob}}_B\left[ \left\|{\pi}_*{\mathbb U}_{{\mathcal B}}-{\mathbb U}_{G}\right\|{\geqslant}n^{-c} \right]{\leqslant}n^{-c}.$$ From now on we assume that $S$ is fixed and that ${\pi}_*{\mathbb U}_C$ is within variation distance $2n^{-c}$ of the uniform distribution on $G$; by the argument above, this happens with probability at least $1-2n^{-c}$. Recall that a *random oracle* ${\eta}:G\to {\mathcal C}$ is a random function drawn uniformly from ${\mathcal C}^G$, that is, each value ${\eta}(x)$ is drawn uniformly and independently from ${\mathcal C}$. Thus, for any $g\in G$, the distribution of ${\pi}({\eta}(g))$ is ${\pi}_*{\mathbb U}_C$. 
It is then easy to verify that $$\left\|({\eta}\mapsto {\pi}\circ {\eta})_*{\mathbb U}_{{\mathcal C}^G}-{\mathbb U}_{G^G}\right\|{\leqslant}2n^{-c}.$$ In other words, for a random oracle ${\eta}$, the function ${\varphi}={\pi}\circ {\eta}$ is very close to being a random oracle (from $G$ to $G$) itself. Since $c>1/2$, we obtain, as in [@pollard], an $O(\sqrt{n})$ bound on the expectation of the least positive integer $i+j$ for which ${\varphi}^{(i+j)}(g)={\varphi}^{(j)}(g)$, for any $g=\pi(w)\in G$. For $d > 2$, the probability that ${\pi}(s)\ne {\pi}(t)$ in Step 5 is $o(1)$, since ${\mathcal C}$ is then larger than $G$ and collisions in the map ${\varphi}$ (and $\phi$) are more likely to be caused by collisions in ${\pi}$ than collisions in ${\eta}$. Having reached Step 6, we obtain a short product representation of $z$ with probability $1/2$, since by results of [@impagliazzo-naor] the value of ${\pi}(x)$ is independent of whether $x\in {\mathcal A}$ or $x\in {\mathcal B}$. The expected running time is thus $O(k\sqrt{n})=O(\sqrt{n}\log n)$ group operations, and, as noted in Section \[sec:pollard\], the space complexity is $O(1)$ group elements. We summarize our analysis with the following proposition. \[prop:main\] Let $S$ be a random sequence of constant density $d > 4$ and let ${\eta}:G\to {\mathcal C}$ be a random oracle. Then our Pollard-$\rho$ algorithm uses $O(\sqrt{n}\log n)$ expected group operations and storage for $O(1)$ group elements. As in Section \[sec:BSGS\], to speed up the evaluation of the product map ${\pi}$, one may partition $A$ and $B$ into subsequences $A_i$ and $B_i$ of length $m$ and precompute ${\pi}({\mathcal P}(A_i))$ and ${\pi}(\mu({\mathcal P}(B_i))$. This requires storage for $O(k2^m/m)$ group elements and speeds up subsequent evaluations of ${\pi}$ by a factor of $m$. If we let $m=\epsilon\log_2 n$, for any $\epsilon>0$, we obtain the following corollary. 
Under the hypotheses of the proposition above, our Pollard-$\rho$ algorithm can be implemented to run in expected time $O(\sqrt{n})$ using $O(n^\epsilon)$ space. In our analysis above, we use a random $S$ with density $d > 4$ to prove that products of random elements of ${\mathcal A}$ and ${\mathcal B}$ are quasi-uniformly distributed in $G$. If we directly assume that both ${\pi}_*{\mathbb U}_{\mathcal A}$ and ${\pi}_*{\mathbb U}_{\mathcal B}$ are quasi-uniformly distributed, our analysis applies to all $d{\geqslant}2$, and in practice we find this to be the case. However, we note that this does not apply to $d<2$, for which we expect a running time of $O(n^{(4-d)/4}\log n)$, as discussed in Section \[sec:comput\]. Applications ============ As a first application, let us consider the case where $G$ is the ideal class group of an order ${\mathcal{O}}$ in an imaginary quadratic field. We may assume $${\mathcal{O}}={\mathbb Z}+\frac{D+\sqrt{D}}{2}{\mathbb Z},$$ where the *discriminant* $D$ is a negative integer congruent to $0$ or $1$ modulo $4$. Modulo principal ideals, the invertible ideals of ${\mathcal{O}}$ form a finite abelian group ${\operatorname{cl}({\mathcal{O}})}$ of cardinality $h$. The *class number* $h$ varies with $D$, but is on average proportional to $\sqrt{|D|}$ (more precisely, $\log h \sim \frac{1}{2}\log|D|$ as $D\to -\infty$, by Siegel’s theorem [@siegel]). Computationally, invertible ${\mathcal{O}}$-ideals can be represented as binary quadratic forms, allowing group operations in ${\operatorname{cl}({\mathcal{O}})}$ to be computed in time $O(\log^{1+\epsilon}|D|)$, via [@schonhage-fastforms]. Prime ideals {#sec:Sk} ------------ Let $\ell_i$ denote the $i$^th^ smallest prime number for which there exists an invertible ${\mathcal{O}}$-ideal of norm $\ell_i$ and let $\alpha_i$ denote the unique such ideal that has nonnegative trace.
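Which primes $\ell_i$ occur can be previewed numerically: heuristically, a prime $\ell$ coprime to the conductor is the norm of an invertible ${\mathcal{O}}$-ideal exactly when the Kronecker symbol $(D/\ell)$ is not $-1$. The sketch below evaluates the symbol via Euler's criterion for odd $\ell$; the discriminant $-71$ and the conductor assumption are illustrative only.

```python
def is_prime(n):
    return n > 1 and all(n % p for p in range(2, int(n ** 0.5) + 1))

def admissible(D, ell):
    """Kronecker symbol (D/ell) != -1 for a prime ell (assumes ell is coprime
    to the conductor; a value of 0, i.e. ramified, counts as admissible)."""
    if ell == 2:
        return D % 8 != 5              # (D/2) = -1 exactly when D = 5 (mod 8)
    return pow(D % ell, (ell - 1) // 2, ell) != ell - 1   # Euler's criterion

def norm_primes(D, k):
    """The k smallest primes ell_1 < ... < ell_k admissible for D."""
    out, ell = [], 2
    while len(out) < k:
        if is_prime(ell) and admissible(D, ell):
            out.append(ell)
        ell += 1
    return out

print(norm_primes(-71, 5))             # [2, 3, 5, 19, 29]
```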
For each positive integer $k$, let $S_k$ denote the sequence of (not necessarily distinct) ideal classes $$S_k = ([\alpha_1],[\alpha_2],\ldots,[\alpha_k]).$$ For algorithms that work with ideal class groups, $S_k$ is commonly used as a set of generators for ${\operatorname{cl}({\mathcal{O}})}$, and in practice $k$ can be made quite small, conjecturally $O(\log h)$. Proving such a claim is believed to be very difficult, but under the generalized Riemann hypothesis (GRH), Bach obtains the following result [@bach-erh]. Assume the GRH. If $D$ is a fundamental[^2] discriminant and $\ell_{k+1} > 6\log^2|D|$, then the set $S_k$ generates ${\operatorname{cl}({\mathcal{O}})}$. Unfortunately, this says nothing about short product representations in ${\operatorname{cl}({\mathcal{O}})}$. Recently, a special case of [@expander-grh Corollary 1.3] was considered in [@quantum-iso Theorem 2.1] which still assumes the GRH but is more suited to our short product representation setting. Nevertheless, for our purpose here, we make the following stronger conjecture. For every $d_0 >1$ there exist constants $c > 0$ and $D_0 < 0$ such that if $D {\leqslant}D_0$ and $S_k$ has density $d {\geqslant}d_0$ then 1. ${\pi}({\mathcal P}(S_k))=G$, that is, $S_k$ represents $G$; 2. $\left\|{\pi}_*{\mathbb U}_{{\mathcal P}(S_k)}-{\mathbb U}_G\right\|<h^{-c}$; where $G$ is the ideal class group ${\operatorname{cl}({\mathcal{O}})}$ and $h$ is its cardinality. In essence, these are heuristic analogs to the results of Erdős and Rényi, and of Impagliazzo and Naor, respectively, suggesting that the distribution of the classes $[\alpha_i]$ resembles that of random elements uniformly drawn from ${\operatorname{cl}({\mathcal{O}})}$. Note that (1), although seemingly weaker, is only implied by (2) when $c>1$. 
Empirically, (1) is easily checked: for $d_0=2$ we have verified it using $D_0=-3$ for every imaginary quadratic order with discriminant $D{\geqslant}-10^8$, and for $10^4$ randomly chosen orders with $D$ logarithmically distributed over the interval $[-10^{16},-10^{8}]$ (see Figure \[fig:hyp-rnd\]). Although harder to test, (2) is more natural in our context, and practical computations support it as well. Even though we see no way to prove this conjecture, we assume its veracity as a useful heuristic. ![Dots plot the minimal $k$ such that $S_k$ satisfies conjecture (1); gray dots for all discriminants $D{\geqslant}-10^8$ and black dots for ten thousand $D$ drawn at random according to a logarithmic distribution. The lines represent $k=d\log_2 h$ for $d=1,2$.[]{data-label="fig:hyp-rnd"}](hyp-rnd.pdf){width="\textwidth"} Short relations {#sec:relations} --------------- In [@hafner-mccurley], Hafner and McCurley give a subexponential algorithm to find representatives of the form $\prod\alpha_i^{e_i}$ for arbitrary ideal classes of imaginary quadratic orders; the ideals $\alpha_i$ have subexponential norms, but the exponents $e_i$ can be as large as the class number $h$. Asking for small exponents $e_i\in\{0,1\}$ means, in our terminology, writing elements $z\in G$ as short product representations on $S_k=([\alpha_i])$. Under the conjecture above, this can be achieved by our low-memory algorithm in $O(|D|^{1/4+\epsilon})$ expected time, using $k=O(\log h)$ ideals $\alpha_i$. We can even combine these approaches. If the target element $z$ is represented by an ideal of small norm, say $z=[\alpha_{k+1}]$, we get what we call a *short relation* for ${\operatorname{cl}({\mathcal{O}})}$. Conjecture (1) implies not only that the map that sends each vector $(e_1,\ldots,e_{k+1})\in{\mathbb Z}^{k+1}$ to the class of the ideal $\prod\alpha_i^{e_i}$ is surjective, but also that there exists a set of short relations generating its kernel lattice $\Lambda$. 
This gives a much better upper bound on the diameter of $\Lambda$ than was used by Hafner and McCurley, and their algorithm can be adapted to make use of this new bound and find, in subexponential time, representatives $\prod\alpha_i^{e_i}$ with ideals $\alpha_i$ of subexponential norm and exponents $e_i$ bounded by $O(\log|D|)$. See [@bisson-grh] for details, or [@quantum-iso] for an equivalent construction. Short isogenies {#sec:isogenies} --------------- Now let us consider the problem of finding an isogeny between two ordinary elliptic curves ${E}_1$ and ${E}_2$ defined over a finite field ${\mathbb F}_q$. This problem is of particular interest to cryptography because the discrete logarithm problem can then be transported from ${E}_1$ to ${E}_2$. An isogeny between curves ${E}_1$ and ${E}_2$ exists precisely when ${E}_1$ and ${E}_2$ lie in the same *isogeny class*. By a theorem of Tate, this occurs if and only if $\#{E}_1({\mathbb F}_q)=\#{E}_2({\mathbb F}_q)$, which can be determined in polynomial time using Schoof’s algorithm [@schoof-pointcounting]. The isogeny class of ${E}_1$ and ${E}_2$ can be partitioned according to the endomorphism rings of the curves it contains, each of which is isomorphic to an order ${\mathcal{O}}$ in an imaginary quadratic number field. Identifying isomorphic curves with their $j$-invariant, for each order ${\mathcal{O}}$ we define $${\operatorname{Ell}({\mathcal{O}})}=\left\{j({E}) : {\operatorname{End}({E})}\cong{\mathcal{O}}\right\},$$ where ${E}$ denotes an elliptic curve defined over ${\mathbb F}_q$. The set ${\operatorname{Ell}({\mathcal{O}})}$ to which a given curve belongs can be determined in subexponential time, under heuristic assumptions [@bisson-sutherland]. 
An isogeny from ${E}_1$ to ${E}_2$ can always be decomposed into two isogenies, one that is essentially determined by ${\operatorname{End}({E}_1)}$ and ${\operatorname{End}({E}_2)}$ (and can be made completely explicit but may be difficult to compute), and another connecting curves that lie in the same set ${\operatorname{Ell}({\mathcal{O}})}$. We shall thus restrict ourselves to the problem of finding an isogeny between two elements of ${\operatorname{Ell}({\mathcal{O}})}$. The theory of complex multiplication states that ${\operatorname{Ell}({\mathcal{O}})}$ is a principal homogeneous space (a *torsor*) for the class group ${\operatorname{cl}({\mathcal{O}})}$: each ideal $\alpha$ acts on ${\operatorname{Ell}({\mathcal{O}})}$ via an isogeny of degree ${\operatorname{N}}(\alpha)$, and this action factors through the class group. We may then identify each ideal class $[\alpha]$ with the image $[\alpha]j({E}_i)$ of its action on $j({E}_i)$. This allows us to effectively work in the group ${\operatorname{cl}({\mathcal{O}})}$ when computing isogenies from ${E}_i$. Galbraith addressed the search for an isogeny ${E}_1\to{E}_2$ using a baby-step giant-step approach in [@galbraith]; a low-memory variant was later given in [@galbraith-hess-smart] which produces an exponentially long chain of low-degree isogenies. From that, a linearly long chain of isogenies of subexponential degree may be derived by smoothing the corresponding ideal in ${\operatorname{cl}({\mathcal{O}})}$ using variants of the method of Hafner and McCurley (for instance, those mentioned in Section \[sec:relations\]); alternatively, our low-memory algorithm can be used to derive a chain of low-degree isogenies with length linear in $\log|D|$ (assuming our conjecture), and we believe this is the most practical approach. 
However, let us describe how our method applies naturally to the torsor ${\operatorname{Ell}({\mathcal{O}})}$, and directly finds a short chain of low-degree isogenies from ${E}_1$ to ${E}_2$ using very little memory. Let $S_k=AB$ be such that conjecture (1) holds, where $A$ and $B$ are roughly equal in size, and define ${\mathcal C}={\mathcal A}\sqcup{\mathcal B}$ where ${\mathcal A}={\mathcal P}(A)$ and ${\mathcal B}=\mu({\mathcal P}(B))$. We view each element of ${\mathcal A}$ as a short chain of isogenies of small prime degree $\ell_i={\operatorname{N}}(\alpha_i)$ that originates at ${E}_1$; similarly, we view elements of ${\mathcal B}$ as chains of isogenies originating at ${E}_2$. Now let ${\pi}:{\mathcal C}\to {\operatorname{Ell}({\mathcal{O}})}$ be the map that sends $x\in{\mathcal A}$ (resp. $x\in{\mathcal B}$) to the element of ${\operatorname{Ell}({\mathcal{O}})}$ that is the codomain of the isogeny chain defined by $x$ and originating at ${E}_1$ (resp. ${E}_2$). It suffices to find a collision between an element of ${\mathcal A}$ and an element of ${\mathcal B}$ under the map ${\pi}$: this yields an isogeny chain from ${E}_1$ and an isogeny chain from ${E}_2$ that have the same codomain. Composing the first with the dual of the second gives an isogeny from ${E}_1$ to ${E}_2$. The iteration function $\phi$ on ${\mathcal C}$ can now be defined as the composition ${\eta}\circ{\pi}$ where ${\eta}$ is a map from ${\operatorname{Ell}({\mathcal{O}})}$ to ${\mathcal C}$ that behaves like a random oracle. Using this formalism, our Pollard-$\rho$ algorithm can be applied directly, and under the conjecture it finds an isogeny in time $O(h^{1/2+\epsilon})$. In terms of space, it only needs to store $O(1)$ elements of ${\operatorname{cl}({\mathcal{O}})}$ and ${\operatorname{Ell}({\mathcal{O}})}$, which is $O(\log q)$ bits. However, in order to compute isogenies, modular polynomials $\Phi_\ell(X,Y)$ might be used, each of which requires $O(\ell^3\log\ell)$ bits. 
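The collision idea on the torsor can be previewed with a toy model in which both ${\operatorname{cl}({\mathcal{O}})}$ and ${\operatorname{Ell}({\mathcal{O}})}$ are replaced by ${\mathbb Z}/n{\mathbb Z}$ acting on itself by addition, so that a chain is simply the sum of the chosen "degrees". The sketch below finds colliding chains by tabulating one side; it is a memory-heavy baby-step giant-step stand-in for the low-memory $\rho$ variant, and all numeric values are arbitrary.

```python
from itertools import chain, combinations

def powerset(seq):
    return chain.from_iterable(combinations(seq, r) for r in range(len(seq) + 1))

def colliding_chains(n, A, B, j1, j2):
    """Find subsets a of A and b of B whose chains from j1 and j2 share an
    endpoint in the toy torsor Z/nZ, i.e. j1 + sum(a) = j2 + sum(b) (mod n)."""
    endpoints = {}
    for a in powerset(A):                      # tabulate all chains from j1
        endpoints.setdefault((j1 + sum(a)) % n, a)
    for b in powerset(B):                      # search for a meeting chain
        end = (j2 + sum(b)) % n
        if end in endpoints:
            return endpoints[end], b

a, b = colliding_chains(101, [2, 3, 5, 7], [11, 13, 17, 19], 0, 1)
print(a, b)    # (5, 7) (11,): both chains end at 12
```

Composing one chain with the "dual" of the other corresponds here to subtracting the two sums, mirroring the isogeny construction in the text.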
If we heuristically assume that $\ell_k = O(k\log k) = O(\log h\log\log h)$, the overall space complexity is then bounded by $O(\log^{3+\epsilon} h) = O(\log^{3+\epsilon} q)$ bits, which is polynomial in $\log q$. This can be improved to $O(\log^{2+\epsilon} q)$ bits by using the algorithm of [@sutherland-point-counting] to directly compute $\Phi_\ell(j({E}),Y)$ in a space-efficient manner. Computations {#sec:comput} ============ To test our generic low-memory algorithm for finding short product representations in a practical setting, we implemented black-boxes for three types of finite groups: 1. $G={E}({\mathbb F}_p)$, the elliptic curve ${E}:y^2=x^3+x+1$ over a finite field ${\mathbb F}_p$. 2. $G={\operatorname{cl}({\mathcal{O}})}$, where ${\mathcal{O}}$ is an order in an imaginary quadratic field.[^3] 3. $G={\operatorname{GL}(2,{\mathbb F}_p)}$, the group of invertible $2\times 2$ matrices over ${\mathbb F}_p$. To simplify the implementation, we restricted to cases where ${\mathbb F}_p$ is a prime field. The groups ${E}({\mathbb F}_p)$ are abelian groups, either cyclic or the product of two cyclic groups. The groups ${\operatorname{cl}({\mathcal{O}})}$ are also abelian, but may be highly non-cyclic (we specifically chose some examples with large $2$-rank), while the groups ${\operatorname{GL}(2,{\mathbb F}_p)}$ are non-abelian. For the groups ${E}({\mathbb F}_p)$, we used the sequence of points $S=(P_1,\ldots,P_k)$ with $P_i=(x_i,y_i)$, where $x_i$ is the $i$^th^ smallest positive integer for which $x_i^3+x_i+1$ is a quadratic residue $y_i^2$ modulo $p$ with $y_i {\leqslant}(p-1)/2$; our target $z$ was the point $P_{k+1}$. For the groups ${\operatorname{cl}({\mathcal{O}})}$, we used the sequence $S_k$ defined in Section \[sec:Sk\] with $z=[\alpha_{k+1}]$. For the groups ${\operatorname{GL}(2,{\mathbb F}_p)}$, we simply chose a sequence $S$ of length $k$ and a target element $z$ at random. 
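As an illustration of the first black box, the point sequence $S=(P_1,\ldots,P_k)$ can be generated in a few lines. The sketch restricts to odd primes $p\equiv 3 \pmod 4$, where a modular square root is a single exponentiation; $p=19$ is a toy modulus, not one used in the experiments.

```python
def point_sequence(p, k):
    """First k points P_i = (x_i, y_i) on y^2 = x^3 + x + 1 over F_p, where
    x_i is the i-th smallest positive x making x^3 + x + 1 a square and
    y_i <= (p - 1) / 2.  Assumes p is an odd prime with p = 3 (mod 4)."""
    pts, x = [], 1
    while len(pts) < k:
        r = (x * x * x + x + 1) % p
        if pow(r, (p - 1) // 2, p) <= 1:   # Euler: r is a square (or zero)
            y = pow(r, (p + 1) // 4, p)    # square root, valid as p = 3 (mod 4)
            pts.append((x, min(y, p - y))) # pick the root <= (p - 1) / 2
        x += 1
    return pts

print(point_sequence(19, 3))               # [(2, 7), (5, 6), (7, 3)]
```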
Table \[table:comput\] lists performance data obtained by applying our Pollard-$\rho$ algorithm to various groups $G$ and sequences $S$ of densities $d=k/\log_2 n$ ranging from just under $2$ to slightly more than $4$. Each row compares expected values with actual results that are averages over at least $10^3$ runs. The parameter $c$ counts the number of collisions $\phi^{(i+j)}(w)=\phi^{(j)}(w)$ that were needed for a run of the algorithm to obtain a short product representation. Typically $c$ is greater than $1$ because not every collision yields a short product representation. The parameter ${\rho_\text{tot}}$ is the sum of $\rho=i+j$ over the $c$ collisions required, and represents a lower bound on the number of times the map $\phi$ was evaluated. With efficient collision detection, the actual number is very close to ${\rho_\text{tot}}$ (using the method of distinguished points we were able to stay within $1\%$). ----------------------------------------------------- ------------ ----- ------ -- ------ --------------------- -- ------ --------------------- $G$ $\log_2 n$ $k$ $d$ $c$ ${\rho_\text{tot}}$ $c$ ${\rho_\text{tot}}$ \[3pt\] \[-5pt\] ${E}/{\mathbb F}_{2^{20}+7}$ 20.00 40 2.00 3.00 3144 3.00 3162 60 3.00 2.00 2568 2.01 2581 80 4.00 2.00 2567 2.01 2565 \[1pt\] ${E}/{\mathbb F}_{2^{24}+43}$ 24.00 48 2.00 3.00 12577 3.02 12790 72 3.00 2.00 10269 2.03 10381 96 4.00 2.00 10268 2.00 10257 \[1pt\] ${E}/{\mathbb F}_{2^{28}+3}$ 28.00 56 2.00 3.00 50300 2.95 49371 84 3.00 2.00 41070 2.02 41837 112 4.00 2.00 41069 1.98 40508 \[1pt\] ${E}/{\mathbb F}_{2^{32}+15}$ 32.00 64 2.00 3.00 201196 3.06 205228 96 3.00 2.00 164276 1.96 160626 128 4.00 2.00 164276 2.04 169595 \[1pt\] ${E}/{\mathbb F}_{2^{36}+31}$ 36.00 72 2.00 3.00 804776 2.95 796781 108 3.00 2.00 657097 2.00 655846 144 4.00 2.00 657097 1.98 657097 \[1pt\] ${E}/{\mathbb F}_{2^{40}+15}$ 40.00 80 2.00 3.00 3219106 2.90 3120102 120 3.00 2.00 2628390 1.97 2604591 160 4.00 2.00 2628390 2.06 2682827 \[3pt\] 
${\operatorname{cl}(1-2^{40})}$ 19.07 40 2.10 2.52 2088 2.44 2082 60 3.15 2.00 1859 2.02 1845 80 4.20 2.00 1858 2.01 1863 \[1pt\] ${\operatorname{cl}(1-2^{48})}$ 23.66 48 2.03 2.79 10800 2.75 10662 72 3.04 2.00 9140 1.97 8938 96 4.06 2.00 9140 1.99 9079 \[1pt\] ${\operatorname{cl}(1-2^{56})}$ 27.54 56 2.03 2.73 40976 2.69 40512 84 3.05 2.00 35076 2.06 36756 112 4.07 2.00 35076 1.98 35342 \[1pt\] ${\operatorname{cl}(1-2^{64})}$ 30.91 64 2.07 2.47 125233 2.59 131651 96 3.11 2.00 112671 1.98 111706 128 4.14 2.00 112671 1.99 111187 \[1pt\] ${\operatorname{cl}(1-2^{72})}$ 35.38 72 2.04 2.65 609616 2.60 598222 108 3.05 2.00 529634 2.00 534639 144 4.07 2.00 529634 2.00 532560 \[1pt\] ${\operatorname{cl}(1-2^{80})}$ 39.59 80 2.02 2.76 2680464 2.80 2793750 120 3.03 2.00 2283831 2.01 2318165 160 4.04 2.00 2283831 2.04 2364724 \[3pt\] ${\operatorname{GL}(2,{\mathbb F}_{37})}$ 20.80 42 2.02 2.87 4053 2.84 4063 62 2.98 2.00 3384 1.99 3358 84 4.04 2.00 3384 1.97 3388 \[1pt\] ${\operatorname{GL}(2,{\mathbb F}_{67})}$ 24.24 48 1.98 3.18 14087 3.08 13804 72 2.97 2.00 11168 2.10 11590 96 3.96 2.00 11167 2.01 11167 \[1pt\] ${\operatorname{GL}(2,{\mathbb F}_{131})}$ 28.12 56 1.99 3.09 53251 3.03 52070 84 2.99 2.00 42851 1.94 42019 112 3.98 2.00 42851 1.98 42146 \[1pt\] ${\operatorname{GL}(2,{\mathbb F}_{257})}$ 32.02 64 2.00 3.01 202769 3.03 204827 96 3.00 2.00 165237 2.02 165742 128 4.00 2.00 165237 2.00 165619 \[1pt\] ${\operatorname{GL}(2,{\mathbb F}_{511})}$ 36.10 72 1.99 3.07 842191 3.18 886141 108 2.99 2.00 679748 1.97 668416 144 3.99 2.00 679747 2.04 703877 \[1pt\] ${\operatorname{GL}(2,{\mathbb F}_{1031})}$ 40.04 80 2.00 3.03 3276128 2.99 3243562 120 3.00 2.00 2663155 2.02 2677122 160 4.00 2.00 2663154 2.08 2708512 \[3pt\] ----------------------------------------------------- ------------ ----- ------ -- ------ --------------------- -- ------ --------------------- : Comparison of expected vs. 
observed values on various groups.[]{data-label="table:comput"} The expected values of $c$ and ${\rho_\text{tot}}$ listed in Table \[table:comput\] were computed under the heuristic assumption that ${\eta}:G\to{\mathcal C}$ and ${\pi}:{\mathcal C}\to G$ are both random functions. This implies that while iterating $\phi$ we are effectively performing simultaneous independent random walks on $G$ and ${\mathcal C}$. Let $X$ and $Y$ be independent random variables for the number of steps the walks on $G$ and ${\mathcal C}$, respectively, take before reaching a collision. The probability that $\pi(s)=\pi(t)$ in Step 5 is $P(X {\leqslant}Y)$, and the algorithm then proceeds to find a short product representation with probability $1/2$. Using the probability density $u\exp(-u^2/2)du$ of $X/\sqrt{\#G}$ and $Y/\sqrt{\#{\mathcal C}}$, we find $${\operatorname{\mathbf{E}}[c]} = 2/{P(X{\leqslant}Y)} = 2(1+r),$$ where $r=\#G/\#{\mathcal C}$. One may also compute $${\operatorname{\mathbf{E}}[{\rho_\text{tot}}]} = {\operatorname{\mathbf{E}}[c]}{\operatorname{\mathbf{E}}[\min(X,Y)]} = \sqrt{2\pi n(1+r)}.$$ For $d > 2$, we have $r\approx 0$ for large $n$, so that ${\operatorname{\mathbf{E}}[c]}\approx 2$ and ${\operatorname{\mathbf{E}}[{\rho_\text{tot}}]}\approx \sqrt{2\pi n}$. For $d=2$, we have ${\operatorname{\mathbf{E}}[c]}=3$ and ${\operatorname{\mathbf{E}}[{\rho_\text{tot}}]}=\sqrt{3\pi n}$ (when $k$ is even). For $d < 2$, the value of ${\operatorname{\mathbf{E}}[c]}$ increases with $n$ and we have ${\operatorname{\mathbf{E}}[{\rho_\text{tot}}]}=O(n^{(4-d)/4})$. In addition to the tests summarized in Table \[table:comput\], we applied our low-memory algorithm to some larger problems that would be quite difficult to address with the baby-step giant-step method. Our first large test used $G={E}({\mathbb F}_p)$ with $p=2^{80}+13$, which is a cyclic group of order $n=p+1+1475321552477$, and the sequence $S=(P_1,\ldots,P_{k})$ with points $P_i$ defined as above with $k=200$, which gives $d\approx 2.5$.
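As a quick sanity check, the closed-form expectations for $c$ and ${\rho_\text{tot}}$ reproduce the entries of Table \[table:comput\]; the sketch below evaluates them for $n=2^{20}$ using the limiting values of $r$.

```python
from math import pi, sqrt

def expected(n, r):
    """E[c] = 2 / P(X <= Y) = 2 (1 + r) and E[rho_tot] = sqrt(2 pi n (1 + r)),
    with r = #G / #C (r -> 1/2 for d = 2 with k even, r -> 0 for d > 2)."""
    return 2 * (1 + r), sqrt(2 * pi * n * (1 + r))

n = 2 ** 20
print(expected(n, 1 / 2))   # d = 2: E[c] = 3, E[rho_tot] ~ 3144
print(expected(n, 0))       # d = 4: E[c] = 2, E[rho_tot] ~ 2567
```

These match the expected columns of the first table block (3144 and 2567 for the curve over ${\mathbb F}_{2^{20}+7}$).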
Our target element was $z=P_{201}$ with $x$-coordinate $391$. The computation was run in parallel on $32$ cores (3.0 GHz AMD Phenom II), using the distinguished points method.[^4] The second collision yielded a short product representation after evaluating the map $\phi$ a total of $1480862431620 \approx 1.35\sqrt{n}$ times. After precomputing $655360$ partial products (as discussed in Section \[sec:analysis\]), each evaluation of $\phi$ used $5$ group operations, compared to an average of $50$ without precomputation, and this required just $10$ megabytes of memory. The entire computation used approximately $140$ days of CPU time, and the elapsed time was about $4$ days. We obtained a short product representation for $z$ as the sum of $67$ points $P_i$ with $x$-coordinates less than $391$. In hexadecimal notation, the bit-string that identifies the corresponding subsequence of $S$ is: `542ab7d1f505bdaccdbeb6c2e92180d5f38a20493d60f031c1` Our second large test used the group $G={\operatorname{cl}(1-2^{160})}$, which is isomorphic to $$({\mathbb Z}/2{\mathbb Z})^{8} \times {\mathbb Z}/4{\mathbb Z}\times {\mathbb Z}/8{\mathbb Z}\times {\mathbb Z}/80894875660895214584{\mathbb Z},$$ see [@sutherland:thesis Table B.4]. We used the sequence $S_k$ with $k=200$, and chose the target $z=[\alpha_{201}]$ with ${\operatorname{N}}(\alpha_{201})=2671$. We ran the computation in parallel on $48$ cores, and needed $3$ collisions to obtain a short product representation, which involved a total of $2856153808020\approx 3.51\sqrt{n}$ evaluations of $\phi$. As in the first test, we precomputed $655360$ partial products so that each evaluation of $\phi$ used $5$ group operations. Approximately $900$ days of CPU time were used (the group operation in ${\operatorname{cl}(D)}$ is slower than in the group $E({\mathbb F}_p)$ used in our first example). We obtained a representative for the ideal class $z$ as the product of $106$ ideals with prime norms less than $2671$. 
The bit-string that encodes the corresponding subsequence of $S_k$ is: `5cf854598d6059f607c6f17b8fb56314e87314bee7df9164cd` Acknowledgments {#acknowledgments .unnumbered} =============== The authors are indebted to Andrew Shallue for his kind help and advice in putting our result in the context of subset sum problems, and to Steven Galbraith for his useful feedback on an early draft of this paper. [10]{} Noga Alon, Amnon Barak, and Udi Manber. On disseminating information reliably without broadcasting. In Radu Popescu-Zeletin, Gerard [Le Lann]{}, and Kane H. Kim, editors, [*Proceedings of the 7^th^ International Conference on Distributed Computing Systems*]{}, pages 74–81. IEEE Computer Society Press, 1987. Noga Alon and Vitali D. Milman. , isoperimetric inequalities for graphs, and superconcentrators. , 38:73–88, 1985. László Babai and Paul Erdős. Representation of group elements as short products. , 60:27–30, 1982. Eric Bach. Explicit bounds for primality testing and related problems. , 55(191):355–380, 1990. Gaetan Bisson. Computing endomorphism rings of elliptic curves under the [GRH]{}, 2010. In preparation. Gaetan Bisson and Andrew V. Sutherland. Computing the endomorphism ring of an ordinary elliptic curve over a finite field. , Special Issue on Elliptic Curve Cryptography, 2009. To appear. Richard P. Brent. An improved [Monte Carlo]{} factorization algorithm. , 20:176–184, 1980. Andrew M. Childs, David Jao, and Vladimir Soukharev. Constructing elliptic curve isogenies in quantum subexponential time, 2010. Preprint available at <http://arxiv.org/abs/1012.4019>. Roger B. Eggleton and Paul Erdős. Two combinatorial problems in group theory. , 28:247–254, 1975. Paul Erdős and Alfréd Rényi. Probabilistic methods in group theory. , 14(1):127–138, 1965. Steven D. Galbraith. Constructing isogenies between elliptic curves over finite fields. , 2:118–138, 1999. Steven D. Galbraith, Florian Hess, and Nigel P. Smart. Extending the [GHS]{} [Weil]{} descent attack. 
In Lars R. Knudsen, editor, [*Advances in Cryptology–EUROCRYPT ’02*]{}, volume 2332 of [*Lecture Notes in Computer Science*]{}, pages 29–44. Springer, 2002. James L. Hafner and Kevin S. McCurley. A rigorous subexponential algorithm for computing in class groups. , 2(4):837–850, 1989. Nick Howgrave-Graham and Antoine Joux. New generic algorithms for hard knapsacks. In Henri Gilbert, editor, [*Advances in Cryptology–EUROCRYPT ’10*]{}, volume 6110 of [*Lecture Notes in Computer Science*]{}, pages 235–256. Springer, 2010. Russel Impagliazzo and Moni Naor. Efficient cryptographic schemes provably as secure as subset sum. , 9(4):199–216, 1996. David Jao, Stephen D. Miller, and Ramarathnam Venkatesan. Expander graphs based on [GRH]{} with an application to elliptic curve cryptography. , 129(6):1491–1504, 2009. Richard M. Karp. Reducibility among combinatorial problems. In Raymond E. Miller, James W. Thatcher, and Jean D. Bohlinger, editors, [*Complexity of Computer Computations*]{}, pages 85–103. Plenum Press, 1972. Donald E. Knuth. . Addison-Wesley, 1998. Donald E. Knuth. . Addison-Wesley, 2005. Ralph Merkle and Martin Hellman. Hiding information and signatures in trapdoor knapsacks. , 24(5):525–530, 1978. John M. Pollard. A [Monte Carlo]{} method for factorization. , 15(3):331–334, 1975. Arnold Sch[ö]{}nhage. Fast reduction and composition of binary quadratic forms. In Stephen M. Watt, editor, [*International Symposium on Symbolic and Algebraic Computation–ISSAC ’91*]{}, pages 128–133. ACM Press, 1991. Ren[é]{} Schoof. Counting points on elliptic curves over finite fields. , 7:219–254, 1995. Richard Schroeppel and Adi Shamir. A [$T=O(2^{n/2}), S=O(2^{n/4})$]{} algorithm for certain [NP]{}-complete problems. , 10(3):456–464, 1981. Robert Sedgewick and Thomas G. Szymanski. The complexity of finding periods. In [*Proceedings of the 11^th^ ACM Symposium on the Theory of Computing*]{}, pages 74–80. ACM Press, 1979. Victor Shoup. 
Lower bounds for discrete logarithms and related problems. In [*Advances in Cryptology–EUROCRYPT ’97*]{}, volume 1233 of [ *Lecture Notes in Computer Science*]{}, pages 256–266. Springer-Verlag, 1997. Revised version. Carl Ludwig Siegel. [Ü]{}ber die [C]{}lassenzahl quadratischer [Z]{}ahlk[ö]{}rper. , 1:83–86, 1935. Ilya M. Sobol. On periods of pseudo-random sequences. , 9:333–338, 1964. Andrew V. Sutherland. Genus 1 point counting in quadratic space and essentially quartic time. In preparation. Andrew V. Sutherland. Order computations in generic groups. PhD thesis, MIT, 2007. <http://groups.csail.mit.edu/cis/theses/sutherland-phd.pdf>. Edlyn Teske. A space efficient algorithm for group structure computation. , 67:1637–1663, 1998. Paul C. van Oorschot and Michael J. Wiener. Parallel collision search with cryptanalytic applications. , 12:1–28, 1999. Edward White. Ordered sums of group elements. , 24:118–121, 1978. [^1]: With a Gray code, exactly one group operation is used per step, see [@knuth-art4f2]. [^2]: Meaning that either $D$ is square-free, or $D/4$ is an integer that is square-free modulo $4$. [^3]: We identify ${\mathcal{O}}$ by its discriminant $D$ and may write ${\operatorname{cl}(D)}$ instead of ${\operatorname{cl}({\mathcal{O}})}$. [^4]: In this parallel setting we may have collisions between two distinct walks (a $\lambda$-collision), or a single walk may collide with itself (a $\rho$-collision). Both types are useful.
--- abstract: 'Conventional graph-based dependency parsers guarantee a tree structure both during training and inference. Instead, we formalize dependency parsing as the problem of independently selecting the head of each word in a sentence. Our model which we call <span style="font-variant:small-caps;">DeNSe</span> (as shorthand for [**De**]{}pendency [**N**]{}eural [ **Se**]{}lection) produces a distribution over possible heads for each word using features obtained from a bidirectional recurrent neural network. Without enforcing structural constraints during training, <span style="font-variant:small-caps;">DeNSe</span> generates (at inference time) trees for the overwhelming majority of sentences, while non-tree outputs can be adjusted with a maximum spanning tree algorithm. We evaluate <span style="font-variant:small-caps;">DeNSe</span> on four languages (English, Chinese, Czech, and German) with varying degrees of non-projectivity. Despite the simplicity of the approach, our parsers are on par with the state of the art.[^1]' author: - 'Xingxing Zhang, Jianpeng Cheng' - | Mirella Lapata\ Institute for Language, Cognition and Computation\ School of Informatics, University of Edinburgh\ 10 Crichton Street, Edinburgh EH8 9AB\ [{x.zhang,jianpeng.cheng}@ed.ac.uk, mlap@inf.ed.ac.uk]{} bibliography: - 'eacl2017.bib' title: Dependency Parsing as Head Selection --- Introduction ============ Dependency parsing plays an important role in many natural language applications, such as relation extraction [@fundel2007relex], machine translation [@carreras2009non], language modeling [@chelba1997structure; @zhang-etal:2016] and ontology construction [@snow2004learning]. Dependency parsers represent syntactic information as a set of head-dependent relational arcs, typically constrained to form a tree. 
Practically all models proposed for dependency parsing in recent years can be described as graph-based [@mcdonald2005online] or transition-based [@yamada2003statistical; @nivre2006labeled]. Graph-based dependency parsers are typically arc-factored, where the score of a tree is defined as the sum of the scores of all its arcs. An arc is scored with a set of local features and a linear model, the parameters of which can be effectively learned with online algorithms [@crammer2001algorithmic; @crammer2003ultraconservative; @freund1999large; @collins2002discriminative]. In order to efficiently find the best scoring tree during training *and* decoding, various maximization algorithms have been developed [@eisner1996three; @eisner2000bilexical; @mcdonald2005non]. In general, graph-based methods are optimized globally, using features of single arcs in order to make the learning and inference tractable. Transition-based algorithms factorize a tree into a set of parsing actions. At each transition state, the parser scores a candidate action conditioned on the state of the transition system and the parsing history, and greedily selects the highest-scoring action to execute. This score is typically obtained with a classifier based on non-local features defined over a rich history of parsing decisions [@yamada2003statistical; @zhang2011transition]. Regardless of the algorithm used, most well-known dependency parsers, such as the MST parser [@mcdonald2005non] and the MaltParser [@nivre2006maltparser], rely on extensive feature engineering. Feature templates are typically manually designed and aim at capturing head-dependent relationships which are notoriously sparse and difficult to estimate. More recently, a few approaches [@chen2014fast; @pei2015effective; @DBLP:journals/corr/KiperwasserG16a] apply neural networks for learning dense feature representations.
The learned features are subsequently used in a conventional graph- or transition-based parser, or better-designed variants [@dyer2015transition]. In this work, we propose a simple neural network-based model which learns to select the head for each word in a sentence without enforcing tree-structured output. Our model which we call [DeNSe]{} (as shorthand for [**De**]{}pendency [**N**]{}eural [**Se**]{}lection) employs bidirectional recurrent neural networks to learn feature representations for words in a sentence. These features are subsequently used to predict the head of each word. Although there is nothing inherent in the model to enforce tree-structured output, when tested on an English dataset, it is able to generate trees for 95% of the sentences, 87% of which are projective. The remaining non-tree (or non-projective) outputs are post-processed with the Chu-Liu-Edmonds (or Eisner) algorithm. [DeNSe]{} uses the head selection procedure to estimate arc weights during training. During testing, it essentially reduces to a standard graph-based parser when it fails to produce tree (or projective) output. We evaluate our model on benchmark dependency parsing corpora, representing four languages (English, Chinese, Czech, and German) with varying degrees of non-projectivity. Despite the simplicity of our approach, experiments show that the resulting parsers are on par with the state of the art. Related Work ============ #### Graph-based Parsing Graph-based dependency parsers employ a model for scoring possible dependency graphs for a given sentence. The graphs are typically factored into their component arcs and the score of a tree is defined as the sum of the scores of its arcs. This factorization enables tractable search for the highest scoring graph structure which is commonly formulated as the search for the maximum spanning tree (MST).
The Chu-Liu-Edmonds algorithm [@chu1965shortest; @edmonds1967optimum; @mcdonald2005non] is often used to extract the MST in the case of non-projective trees, and the Eisner algorithm [@eisner1996three; @eisner2000bilexical] in the case of projective trees. During training, weight parameters of the scoring function can be learned with margin-based algorithms [@crammer2001algorithmic; @crammer2003ultraconservative] or the structured perceptron [@freund1999large; @collins2002discriminative]. Beyond basic first-order models, the literature offers a few examples of higher-order models involving sibling and grandparent relations [@carreras2007experiments; @koo2010dual; @zhang2012generalized]. Although more expressive, such models render both training and inference more challenging. #### Transition-based Parsing As the term implies, transition-based parsers conceptualize the process of transforming a sentence into a dependency tree as a sequence of transitions. A transition system typically includes a stack for storing partially processed tokens, a buffer containing the remaining input, and a set of arcs containing all dependencies between tokens that have been added so far [@nivre2003efficient; @nivre2006labeled]. A dependency tree is constructed by manipulating the stack and buffer, and appending arcs with predetermined operations. Most popular parsers employ an *arc-standard* [@yamada2003statistical; @nivre2004incrementality] or *arc-eager* transition system [@nivre2008algorithms].
In an *arc-standard* system [@yamada2003statistical; @nivre2004incrementality], the transitions include a <span style="font-variant:small-caps;">Shift</span> operation, which removes the first word in the buffer and pushes it onto the stack; a <span style="font-variant:small-caps;">Left-Arc</span> operation, which adds an arc from the word at the beginning of the buffer to the word on top of the stack; and a <span style="font-variant:small-caps;">Right-Arc</span> operation, which adds an arc from the word on top of the stack to the word at the beginning of the buffer. During parsing, the transition from one configuration to the next is greedily scored with a linear classifier whose features are defined according to the stack and buffer. The above arc-standard system builds a projective dependency tree bottom up, under the assumption that an arc is only added when the dependent node has already found all its dependents. Extensions include the *arc-eager* system [@nivre2008algorithms], which always adds an arc at the earliest possible stage, a more elaborate (reduce) action space to handle non-projective parsing [@attardi2006experiments], and the use of non-local training methods to avoid greedy error propagation [@zhang2008tale; @huang2010dynamic; @zhang2011transition; @goldberg2012dynamic].

#### Neural Network-based Features

Neural network representations have a long history in syntactic parsing [@Mayberry:Miikkulainen:1999; @henderson:2004:ACL; @titov-henderson:2007:ACLMain]. Recent work uses neural networks in lieu of the linear classifiers typically employed in conventional transition- or graph-based dependency parsers. For example, feed-forward neural networks have been used to learn features for a transition-based parser [@chen2014fast] and for a graph-based parser [@pei2015effective], and tensor decomposition has been applied to obtain word embeddings in their syntactic roles, which are subsequently used in a graph-based parser [@lei2014low].
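The arc-standard transitions described above can be made concrete with a small sketch. This is a minimal, unlabeled toy system: in a real parser a trained classifier picks each action, whereas here the derivation is supplied explicitly and the integer word ids are illustrative.

```python
def arc_standard(n, actions):
    """Run an arc-standard derivation over words 1..n (0 is the ROOT token).

    `actions` is a sequence of "shift", "left-arc", "right-arc".
    Returns the list of (head, dependent) arcs that were added.
    """
    stack, buffer, arcs = [], list(range(n + 1)), []
    for action in actions:
        if action == "shift":
            stack.append(buffer.pop(0))
        elif action == "left-arc":      # buffer front becomes head of stack top
            arcs.append((buffer[0], stack.pop()))
        elif action == "right-arc":     # stack top becomes head of buffer front
            head = stack.pop()
            arcs.append((head, buffer.pop(0)))
            buffer.insert(0, head)      # head is exposed again for further arcs
    return arcs

# "ROOT kids love candy": love heads kids and candy, ROOT heads love
arcs = arc_standard(3, ["shift", "shift", "left-arc",
                        "shift", "right-arc", "right-arc"])
# arcs == [(2, 1), (2, 3), (0, 2)]
```

Note the bottom-up property: the dependent is removed from further consideration as soon as its arc is added, so it can collect no more dependents of its own.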
Stack long short-term memory networks have been used to redesign the components of a transition-based system, where the buffer, stack, and action sequences are modeled separately [@dyer2015transition]. The hidden states of these LSTMs are concatenated and used as features for a final transition classifier. Kiperwasser and Goldberg (K&G16) use bidirectional LSTMs to extract features for a transition- and a graph-based parser, whereas related work builds a greedy arc-standard parser using similar features. In our work, we formalize dependency parsing as the task of finding for each word in a sentence its most probable head. Both head selection and the features it is based on are learned using neural networks. The idea of modeling child-parent relations independently dates back to earlier reranking work using an edge-factored model to generate $k$-best parse trees, which are subsequently reranked using a model based on rich global features. Later work showed that a head selection variant of a loopy belief propagation parser performs worse than a model which incorporates tree structure constraints. Our parser is conceptually simpler: we rely on head selection to do most of the work and decode the best tree *directly* without using a reranker. In common with recent neural network-based dependency parsers, we aim to alleviate the need for hand-crafting feature combinations. Beyond feature learning, we further show that it is possible to simplify the training of a graph-based dependency parser in the context of bidirectional recurrent neural networks.

Dependency Parsing as Head Selection
====================================

In this section we present our parsing model, [DeNSe]{}, which tries to predict the head of each word in a sentence. Specifically, the model takes as input a sentence of length $N$ and outputs $N$ $\langle$head, dependent$\rangle$ arcs. We describe the model focusing on unlabeled dependencies and then discuss how it can be straightforwardly extended to the labeled setting.
We begin by explaining how words are represented in our model and then give details on how [DeNSe]{} makes predictions based on these learned representations. Since there is no guarantee that the outputs of [DeNSe]{} are trees (although they mostly are), we also discuss how to extend [DeNSe]{} in order to enforce projective and non-projective tree outputs. Throughout this paper, lowercase boldface letters denote vectors (e.g., $\mathbf{v}$ or $\mathbf{v}_i$), uppercase boldface letters denote matrices (e.g., $\mathbf{M}$ or $\mathbf{M}_b$), and lowercase letters denote scalars (e.g., $w$ or $w_i$).

Word Representation {#sec:wordrepr}
-------------------

Let $S=(w_0, w_1, \dots, w_N)$ denote a sentence of length $N$; following common practice in the dependency parsing literature [@Kubler:etal:2009], we add an artificial [root]{} token represented by $w_0$. Analogously, let $(\mathbf{a}_0, \mathbf{a}_1, \dots, \mathbf{a}_N)$ denote the representation of sentence $S$, with $\mathbf{a}_i$ representing word $w_i \quad (0 \le i \le N)$. Besides encoding information about each $w_i$ in isolation (e.g., its lexical meaning or POS tag), $\mathbf{a}_i$ must also encode $w_i$’s positional information within the sentence. Such information has been shown to be important in dependency parsing [@mcdonald2005online]. For example, in the sentence “[root]{} a dog is chasing a cat”, the head of the first *a* is *dog*, whereas the head of the second *a* is *cat*. Without considering positional information, a model cannot easily decide which *a* (nearer or farther) to assign to *dog*. Long short-term memory networks (LSTMs; Hochreiter and Schmidhuber, 1997), a type of recurrent neural network with a more complex computational unit, have proven effective at capturing long-term dependencies. In our case LSTMs allow us to represent each word on its own and within a sequence, leveraging long-range contextual information.
As shown in Figure \[fig:dense\], we first use a forward LSTM ($\text{LSTM}^F$) to read the sentence from left to right and then a backward LSTM ($\text{LSTM}^B$) to read the sentence from right to left, so that the entire sentence serves as context for each word:[^2] $$\label{eq:forward} \mathbf{h}_i^F, \mathbf{c}_i^F = \text{LSTM}^F(\mathbf{x}_i, \mathbf{h}_{i-1}^F, \mathbf{c}_{i-1}^F)$$ $$\label{eq:backward} \mathbf{h}_i^B, \mathbf{c}_i^B = \text{LSTM}^B(\mathbf{x}_i, \mathbf{h}_{i+1}^B, \mathbf{c}_{i+1}^B)$$ where $\mathbf{x}_i$ is the feature vector of word $w_i$, $\mathbf{h}_i^F \in \mathbb{R}^{d}$ and $\mathbf{c}_i^F \in \mathbb{R}^{d}$ are the hidden states and memory cells for the $i$th word $w_i$ in $ \text{LSTM}^F$ and $d$ is the hidden unit size. $\mathbf{h}_i^F$ is also the representation for $w_{0:i}$ ($w_i$ and its left neighboring words) and $\mathbf{c}_i^F$ is an internal state maintained by $ \text{LSTM}^F$. $\mathbf{h}_i^B \in \mathbb{R}^d$ and $\mathbf{c}_i^B \in \mathbb{R}^d$ are the hidden states and memory cells for the backward $\text{LSTM}^B$. Each token $w_i$ is represented by $\mathbf{x}_i$, the concatenation of two vectors corresponding to $w_i$’s lexical and POS tag embeddings: $$\label{eq:fearep} \mathbf{x}_i = [ \mathbf{W}_e \cdot e(w_i); \mathbf{W}_t \cdot e(t_i) ]$$ where $e(w_i)$ and $e(t_i)$ are one-hot vector representations of token $w_i$ and its POS tag $t_i$; $\mathbf{W}_e \in \mathbb{R}^{s \times |V|}$ and $\mathbf{W}_t \in \mathbb{R}^{q \times |T|}$ are the word and POS tag embedding matrices, where $|V|$ is the vocabulary size, $s$ is the word embedding size, $|T|$ is the POS tag set size, and $q$ the tag embedding size. The hidden states of the forward and backward LSTMs are concatenated to obtain$~\mathbf{a}_i$, the final representation of $w_i$: $$\label{eq:ann} \mathbf{a}_i = [ \mathbf{h}_i^F; \mathbf{h}_i^B ] \quad i \in [0, N]$$ Note that bidirectional LSTMs are one of many possible ways of representing word $w_i$. 
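The two passes and the concatenation of Equations \[eq:forward\]–\[eq:ann\] can be sketched with a toy one-dimensional recurrent unit standing in for the LSTM. The fixed weights, the plain tanh update, and the scalar hidden state are illustrative stand-ins; the actual model uses two-layer LSTMs with memory cells and learned parameters.

```python
import math

def rnn_step(x, h, w_x=0.5, w_h=0.8):
    # toy recurrent update standing in for an LSTM cell
    return math.tanh(w_x * x + w_h * h)

def bidirectional_encode(xs):
    """Return a_i = (h_i^F, h_i^B) for each position i of the input sequence."""
    fwd, h = [], 0.0
    for x in xs:                          # left-to-right pass
        h = rnn_step(x, h)
        fwd.append(h)
    bwd, h = [0.0] * len(xs), 0.0
    for i in range(len(xs) - 1, -1, -1):  # right-to-left pass
        h = rnn_step(xs[i], h)
        bwd[i] = h
    # pairing of the two hidden states at each position, as in Equation [eq:ann]
    return list(zip(fwd, bwd))
```

Because the forward state at position $i$ summarizes $w_{0:i}$ and the backward state summarizes $w_{i:N}$, the pair sees the whole sentence, which is what makes the positional disambiguation above possible.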
Alternative representations include embeddings obtained from feed-forward neural networks [@chen2014fast; @pei2015effective], character-based embeddings [@ballesteros-dyer-smith:2015:EMNLP], and more conventional hand-crafted features.

Head Selection
--------------

We now move on to discuss our formalization of dependency parsing as head selection. We begin with unlabeled dependencies and then explain how the model can be extended to predict labeled ones. In a dependency tree, a head can have multiple dependents, whereas a dependent can have only one head. Based on this fact, dependency parsing can be formalized as follows. Given a sentence $S=(w_0, w_1, \dots, w_N)$, we aim to find for each word $w_i \in \{w_1, w_2, \dots, w_N\}$ the most probable head $w_j \in \{w_0, w_1, \dots, w_N\}$. For example, in Figure \[fig:dense\], to find the head for the token *love*, we calculate probabilities $P_{head}(\text{\sc root}|\text{love}, S)$, $P_{head}(\text{kids}|\text{love}, S)$, and $P_{head}(\text{candy}|\text{love}, S)$, and select the highest. More formally, we estimate the probability of token $w_j$ being the head of token $w_i$ in sentence $S$ as: $$\label{eq:softmax} P_{head}(w_j|w_i, S) = \frac{\exp(g(\mathbf{a}_j, \mathbf{a}_i))}{ \sum_{k=0}^{N} \exp( g(\mathbf{a}_k, \mathbf{a}_i) ) }$$ where $\mathbf{a}_i$ and $\mathbf{a}_j$ are vector-based representations of $w_i$ and $w_j$, respectively (described in Section \[sec:wordrepr\]); $g(\mathbf{a}_j, \mathbf{a}_i)$ is a neural network with a single hidden layer that computes the associative score between representations $\mathbf{a}_i$ and $\mathbf{a}_j$: $$\label{eq:tanh} g(\mathbf{a}_j, \mathbf{a}_i) = \mathbf{v}_a^\top \cdot \tanh( \mathbf{U}_a \cdot \mathbf{a}_j + \mathbf{W}_a \cdot \mathbf{a}_i )$$ where $\mathbf{v}_a \in \mathbb{R}^{2d}$, $\mathbf{U}_a \in \mathbb{R}^{2d \times 2d}$, and $\mathbf{W}_a \in \mathbb{R}^{2d \times 2d}$ are weight matrices of $g$.
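A direct transcription of Equations \[eq:softmax\] and \[eq:tanh\] in plain Python. The small matrices stand in for the learned parameters $\mathbf{U}_a$, $\mathbf{W}_a$, $\mathbf{v}_a$, and as in the equation the softmax here ranges over all candidates; a practical implementation would additionally mask the $j=i$ candidate.

```python
import math

def g(a_j, a_i, U, W, v):
    """Associative score between candidate head a_j and dependent a_i."""
    hidden = [math.tanh(sum(U[r][k] * a_j[k] for k in range(len(a_j))) +
                        sum(W[r][k] * a_i[k] for k in range(len(a_i))))
              for r in range(len(v))]
    return sum(v[r] * hidden[r] for r in range(len(v)))

def p_head(a, i, U, W, v):
    """P_head(w_j | w_i, S) for every candidate head j, given representations a."""
    scores = [g(a[j], a[i], U, W, v) for j in range(len(a))]
    z = sum(math.exp(s) for s in scores)
    return [math.exp(s) / z for s in scores]
```

Note that each call normalizes over the heads of a single dependent $w_i$, so the $N$ softmaxes of a sentence are independent of one another.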
Note that the candidate head $w_j$ can be the [root]{}, while the dependent $w_i$ cannot. Equations \[eq:softmax\] and \[eq:tanh\] compute the probability of adding an arc between two words, in a fashion similar to the neural attention mechanism in sequence-to-sequence models [@bahdanau:2014]. We train our model by minimizing the negative log likelihood of the gold standard $\langle$head, dependent$\rangle$ arcs in all training sentences: $$\hspace*{-2ex}J(\theta) = - \frac{ 1 }{ |\mathcal{T}| } \sum_{S \in \mathcal{T}} \sum_{i=1}^{N_S} \log P_{head}(h(w_i) | w_i, S )$$ where $\mathcal{T}$ is the training set, $h(w_i)$ is $w_i$’s gold standard head[^3] within sentence $S$, and $N_S$ the number of words in $S$ (excluding [root]{}). During inference, for each word $w_i~(i \in [1, N_S])$ in $S$, we greedily choose the most likely head $w_j~(j \in [0, N_S])$: $$\label{eq:inference} w_j = \operatorname*{arg\,max}_{w_j: j \in [0, N_S]} P_{head}(w_j|w_i, S)$$ Note that the prediction for each word $w_i$ is made independently of the other words in the sentence. Given our greedy inference method, there is no guarantee that the predicted $\langle$head, dependent$\rangle$ arcs form a tree (e.g., the output may contain cycles). However, we empirically observed that most outputs during inference are indeed trees. For instance, on an English dataset, 95% of the structures predicted on the development set are trees, and 87% of them are projective, whereas on a Chinese dataset, 87% of the predicted structures form trees, 73% of which are projective. This indicates that although the model does not explicitly model tree structure during training, it is able to figure out from the data (which consists of trees) that it should predict them. So far we have focused on unlabeled dependencies; however, it is relatively straightforward to extend <span style="font-variant:small-caps;">DeNSe</span> to produce labeled dependencies. We simply train an additional classifier to predict labels for the arcs which have already been identified.
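The greedy unlabeled decoding step above, together with the well-formedness check it requires, can be sketched as follows: heads are chosen independently per word from the probability table, and the output is a tree iff following head pointers from every word reaches the root without revisiting a node.

```python
def greedy_heads(p):
    """p[i][j] = P_head(w_j | w_i, S); pick the argmax head for each word i >= 1.

    Self-loops (j == i) are masked out.
    """
    return {i: max((j for j in range(len(p[i])) if j != i),
                   key=lambda j: p[i][j])
            for i in range(1, len(p))}

def is_tree(head):
    """True iff the head map contains no cycle, i.e., every word reaches ROOT (0)."""
    for u in head:
        seen, v = set(), u
        while v != 0:
            if v in seen:          # revisited a node before reaching ROOT: cycle
                return False
            seen.add(v)
            v = head[v]
    return True
```

Only the outputs for which `is_tree` fails need to be handed to an MST algorithm.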
The classifier takes as input features $[\mathbf{a}_i; \mathbf{a}_j; \mathbf{x}_i; \mathbf{x}_j]$ representing properties of the arc $\langle w_j, w_i \rangle$. These consist of $\mathbf{a}_i$ and $\mathbf{a}_j$, the LSTM-based representations for $w_i$ and $w_j$ (see Equation \[eq:ann\]), and their word and part-of-speech embeddings, $\mathbf{x}_i$ and $\mathbf{x}_j$ (see Equation \[eq:fearep\]). Specifically, we use a trained [DeNSe]{} model to go through the training corpus and generate features and corresponding dependency labels as training data. We employ a two-layer rectifier network [@glorot:2011] for the classification task.

Maximum Spanning Tree Algorithms {#sec:maxim-spann-tree}
--------------------------------

As mentioned earlier, greedy inference may not produce well-formed trees. In this case, the output of <span style="font-variant:small-caps;">DeNSe</span> can be adjusted with a maximum spanning tree algorithm. We use the Chu-Liu-Edmonds algorithm [@chu1965shortest; @edmonds1967optimum] for building non-projective trees and the Eisner algorithm for projective ones. Following , we view a sentence $S=(w_0=\text{\sc root}, w_1, \dots, w_N)$ as a graph $G_S=\langle V_S, E_S \rangle$ with the sentence words and the dummy root symbol as vertices, and a directed edge between every pair of distinct words as well as from the root symbol to every word. The directed graph $G_S$ is defined as: $$\begin{aligned} V_S &= \{w_0 = \text{\sc root}, w_1, \dots ,w_N\} \\ E_S &= \{ \langle i, j \rangle: i \ne j, \langle i, j \rangle \in [0, N] \times [1, N] \} \\ s(i, j) &= P_{head}(w_i | w_j, S) \quad \langle i, j \rangle \in E_S \end{aligned}$$ where $s(i, j)$ is the weight of edge $\langle i, j \rangle$ and $P_{head}(w_i | w_j, S)$ is given by the trained model (Equation \[eq:softmax\]). The problem of dependency parsing now boils down to finding the tree with the highest score, which is equivalent to finding an MST in $G_S$ [@mcdonald2005non].
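The reduction to an MST problem can be made concrete with a compact recursive sketch of the Chu-Liu-Edmonds algorithm over an arc-weight dictionary. This is an illustrative implementation, not the paper's code; in practice the weights would be log-probabilities $\log s(i,j)$ so that tree scores add up.

```python
def find_cycle(head, root=0):
    """Return a cycle in the head map, or None if every chain reaches root."""
    for u in head:
        path, v = [], u
        while v != root and v not in path:
            path.append(v)
            v = head[v]
        if v != root:
            return path[path.index(v):]
    return None

def chu_liu_edmonds(score, nodes, root=0):
    """Maximum spanning arborescence; score[(h, d)] is the weight of arc h -> d."""
    best = {d: max((h for h in nodes if h != d and (h, d) in score),
                   key=lambda h: score[(h, d)])
            for d in nodes if d != root}
    cycle = find_cycle(best, root)
    if cycle is None:
        return best                        # greedy choice is already a tree
    C, c = set(cycle), max(nodes) + 1      # contract the cycle into fresh node c
    new_score, old_edge = {}, {}
    for (h, d), w in score.items():
        if h in C and d in C:
            continue
        nh, nd = (c if h in C else h), (c if d in C else d)
        # arcs entering the cycle are re-weighted by the cycle arc they break
        w_adj = w - score[(best[d], d)] if d in C else w
        if (nh, nd) not in new_score or w_adj > new_score[(nh, nd)]:
            new_score[(nh, nd)] = w_adj
            old_edge[(nh, nd)] = (h, d)
    sub = chu_liu_edmonds(new_score, [v for v in nodes if v not in C] + [c], root)
    heads, broken = {}, None
    for d, h in sub.items():               # expand the contracted solution
        oh, od = old_edge[(h, d)]
        heads[od] = oh
        if d == c:
            broken = od                    # cycle arc displaced by the entering arc
    for v in C:
        if v != broken:
            heads[v] = best[v]
    return heads
```

The first `best` assignment is exactly the greedy head selection of Equation \[eq:inference\], which is why greedy decoding amounts to running a single iteration of this procedure.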
#### Non-projective Parsing

To build a non-projective parser, we solve the MST problem with the Chu-Liu-Edmonds algorithm [@chu1965shortest; @edmonds1967optimum]. The algorithm selects for each vertex (excluding [root]{}) the in-coming edge with the highest weight. If a tree results, it must be the maximum spanning tree and the algorithm terminates. Otherwise, there must be a cycle, which the algorithm identifies, contracts into a single vertex, and recalculates edge weights going into and out of the cycle. The greedy inference strategy described in Equation \[eq:inference\] is essentially a sub-procedure of the Chu-Liu-Edmonds algorithm, with the algorithm terminating after the first iteration. In our implementation, we only run the Chu-Liu-Edmonds algorithm on graphs with cycles, i.e., non-tree outputs.

#### Projective Parsing

For projective parsing, we solve the MST problem with the Eisner algorithm. The time complexity of the Eisner algorithm is $O(N^3)$, while checking if a tree is projective can be done considerably faster, with an $O(N\log N)$ algorithm. Therefore, we only apply the Eisner algorithm to the non-projective output of our greedy inference strategy. Finally, it should be noted that the *training* of our model does not rely on the Chu-Liu-Edmonds or Eisner algorithm, or any other graph-based algorithm. MST algorithms are only used at *test* time to correct non-tree outputs, which are a minority; [DeNSe]{} acquires the underlying tree structure constraints from the data without an explicit learning algorithm.

Experiments {#sec:experiments}
===========

We evaluated our parser in a projective and a non-projective setting. In the following, we describe the datasets we used and provide training details for our models. We also present comparisons against multiple previous systems and analyze the parser’s output.

Datasets
--------

In the projective setting, we assessed the performance of our parser on the English Penn Treebank (PTB) and the Chinese Treebank 5.1 (CTB).
Our experimental setup closely follows previous work. For English, we adopted the Stanford basic dependencies (SD) representation [@demarneffe:2006].[^4] We follow the standard splits of PTB: sections 2–21 were used for training, section 22 for development, and section 23 for testing. POS tags were assigned using the Stanford tagger [@toutanova:2003] with an accuracy of 97.3%. For Chinese, we follow the standard split of [CTB5]{} into training, development, and test sections. The original constituency trees in CTB were converted to dependency trees with the Penn2Malt tool.[^5] We used gold segmentation and gold POS tags as in previous work. In the non-projective setting, we assessed the performance of our parser on Czech and German, the largest non-projective datasets released as part of the CoNLL 2006 multilingual dependency parsing shared task. Since there is no official development set in either dataset, we used the last 374/367 sentences in the Czech/German training set as development data.[^6] Projective statistics of the four datasets are summarized in Table \[tab:proj\].

| Dataset | # Sentences | % Projective |
|:--------|------------:|-------------:|
| English | 39,832 | 99.9 |
| Chinese | 16,091 | 100.0 |
| Czech | 72,319 | 76.9 |
| German | 38,845 | 72.2 |

: Projective statistics on four datasets. Number of sentences and percentage of projective trees are calculated on the training set.[]{data-label="tab:proj"}

Training Details
----------------

We trained our models on an Nvidia GPU card; training takes one to two hours. Model parameters were uniformly initialized to $[-0.1, 0.1]$. We used Adam [@kingma:2014] to optimize our models with hyper-parameters recommended by the authors (i.e., learning rate 0.001, first momentum coefficient 0.9, and second momentum coefficient 0.999). To alleviate the gradient exploding problem, we rescaled the gradient when its norm exceeded 5 [@pascanu:2013].
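The gradient rescaling step ("clip by global norm") can be sketched as follows, with the gradient flattened into one list of values and the threshold of 5 as above:

```python
import math

def rescale_gradient(grad, max_norm=5.0):
    """Scale `grad` down so that its L2 norm does not exceed `max_norm`."""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm <= max_norm:
        return grad                       # norm within budget: leave unchanged
    return [g * (max_norm / norm) for g in grad]
```

Rescaling preserves the gradient's direction while bounding the size of each update step.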
Dropout [@srivastava:2014] was applied to our model with the strategy recommended in the literature [@zaremba:2014; @semeniuta-etal-2016:COLING]. On all datasets, we used two-layer LSTMs and set $d=s=300$, where $d$ is the hidden unit size and $s$ is the word embedding size. As in previous neural dependency parsing work [@chen2014fast; @dyer2015transition], we used pre-trained word vectors to initialize our word embedding matrix $\mathbf{W}_e$. For the PTB experiments, we used 300-dimensional pre-trained GloVe[^7] vectors [@pennington:2014]. For the CTB experiments, we trained 300-dimensional GloVe vectors on the Chinese Gigaword corpus, which we segmented with the Stanford Chinese Segmenter [@tseng:2005]. For Czech and German, we did not use pre-trained word vectors. The POS tag embedding size was set to $q=30$ in the English experiments, $q=50$ in the Chinese experiments, and $q=40$ in both the Czech and German experiments.

Results {#sec:projr}
-------

For both the English and Chinese experiments, we report unlabeled (UAS) and labeled attachment scores (LAS) on the development and test sets; following standard practice, punctuation is excluded from the evaluation. Experimental results on PTB are shown in Table \[tab:en\_depparse\]. We compared our model with several recent papers following the same evaluation protocol and experimental settings. The first block in the table contains mostly graph-based parsers which do not use neural networks: Bohnet10 [@bohnet:2010], Martins13 [@martins:2013], and Z&M14 [@zhang:2014:acl]. Z&N11 [@zhang2011transition] is a transition-based parser with non-local features. Accuracy results for all four parsers are taken from the respective publications.
| Parser | Dev UAS | Dev LAS | Test UAS | Test LAS |
|:------------------|--------:|--------:|---------:|---------:|
| Bohnet10 | — | — | 92.88 | 90.71 |
| Martins13 | — | — | 92.89 | 90.55 |
| Z&M14 | — | — | 93.22 | 91.02 |
| Z&N11 | — | — | 93.00 | 90.95 |
| C&M14 | 92.00 | 89.70 | 91.80 | 89.60 |
| Dyer15 | 93.20 | 90.90 | 93.10 | 90.90 |
| Weiss15 | — | — | 93.99 | 92.05 |
| Andor16 | — | — | **94.61** | **92.79** |
| K&G16 *graph* | — | — | 93.10 | 91.00 |
| K&G16 *trans* | — | — | 93.90 | 91.90 |
| DeNSe-Pei | 90.77 | 88.35 | 90.39 | 88.05 |
| DeNSe-Pei+E | 91.39 | 88.94 | 91.00 | 88.61 |
| DeNSe | 94.17 | 91.82 | 94.02 | 91.84 |
| DeNSe+E | **94.30** | **91.95** | 94.10 | 91.90 |

: Results on the English dataset (PTB with Stanford Dependencies). +E: we post-process non-projective output with the Eisner algorithm.[]{data-label="tab:en_depparse"}

The second block in Table \[tab:en\_depparse\] presents results obtained from neural network-based parsers. C&M14 [@chen2014fast] is a transition-based parser using features learned with a feed-forward neural network. Although very fast, its performance is inferior to graph-based parsers and strong non-neural transition-based parsers (e.g., Z&N11). Dyer15 [@dyer2015transition] uses (stack) LSTMs to model the states of the buffer, the stack, and the action sequence of a transition system. Weiss15 [@weiss:2015] is another transition-based parser, with a more elaborate training procedure. Features are learned with a neural network model similar to C&M14, but much larger, with two layers. The hidden states of the neural network are then used to train a structured perceptron for better beam-search decoding. Andor16 [@andor-EtAl:2016] is similar to Weiss15, but uses a globally normalized training algorithm instead.
| Parser | Dev UAS | Dev LAS | Test UAS | Test LAS |
|:------------------|--------:|--------:|---------:|---------:|
| Z&N11 | — | — | 86.00 | 84.40 |
| Z&M14 | — | — | **87.96** | **86.34** |
| C&M14 | 84.00 | 82.40 | 83.90 | 82.40 |
| Dyer15 | 87.20 | **85.90** | 87.20 | 85.70 |
| K&G16 *graph* | — | — | 86.60 | 85.10 |
| K&G16 *trans* | — | — | 87.60 | 86.10 |
| DeNSe-Pei | 82.50 | 80.74 | 82.38 | 80.55 |
| DeNSe-Pei+E | 83.40 | 81.63 | 83.46 | 81.65 |
| DeNSe | 87.27 | 85.73 | 87.63 | 85.94 |
| DeNSe+E | **87.35** | 85.85 | 87.84 | 86.15 |

: Results on the Chinese dataset (CTB). +E: we post-process non-projective outputs with the Eisner algorithm.[]{data-label="tab:ch_depparse"}

| Parser | PTB Dev | PTB Test | CTB Dev | CTB Test |
|:-----------|--------:|---------:|--------:|---------:|
| C&M14 | 43.35 | 40.93 | 32.75 | 32.20 |
| Dyer15 | 51.94 | 50.70 | **39.72** | **37.23** |
| DeNSe | 51.24 | 49.34 | 34.74 | 33.66 |
| DeNSe+E | **52.47** | **50.79** | 36.49 | 35.13 |

: UEM results on PTB and CTB.[]{data-label="tab:uem"}

Unlike all models above, [DeNSe]{} does not use any kind of transition- or graph-based algorithm during training and inference. Nonetheless, it obtains a UAS of 94.02%. Around 95% of the model’s outputs after inference are trees, 87% of which are projective. When we post-process the remaining 13% of non-projective outputs with the Eisner algorithm ([DeNSe+E]{}), we obtain a slight improvement on UAS (94.10%). Kiperwasser and Goldberg extract features from bidirectional LSTMs and feed them to a graph- (K&G16 [*graph*]{}) and a transition-based parser (K&G16 [*trans*]{}). Their LSTMs are jointly trained with the parser objective. [DeNSe]{} yields very similar performance to their transition-based parser, while it outperforms K&G16 [*graph*]{}. A key difference between [DeNSe]{} and K&G16 lies in the training objective.
The objective of [DeNSe]{} is log-likelihood based *without* tree structure constraints (the model is trained to produce a distribution over possible heads for each word, where each head selection is independent), while K&G16 employ a max-margin objective *with* tree structure constraints. Although our probabilistic objective is non-structured, it is perhaps easier to train compared to a margin-based one. We also assessed the importance of the bidirectional LSTM on its own by replacing our LSTM-based features with those obtained from a feed-forward network. Specifically, we used the 1-order-atomic features introduced in [@pei2015effective], which represent POS tags, modifiers, heads, and their relative positions. As can be seen in Table \[tab:en\_depparse\] ([DeNSe]{}-Pei), these features are less effective compared to LSTM-based ones, and the contribution of the MST algorithm (Eisner) during decoding is more pronounced ([DeNSe]{}-Pei+E). We observe similar trends on the Chinese, German, and Czech datasets (see Tables \[tab:ch\_depparse\] and \[tab:ger\_depparse\]).

(a) ![image](ptb-crop.pdf){width="49.00000%"} (b) ![image](ctb-crop.pdf){width="49.00000%"}

| Parser | Czech UAS | Czech LAS | German UAS | German LAS |
|:------------------|----------:|----------:|-----------:|-----------:|
| MST-1st | 86.18 | — | 89.54 | — |
| MST-2nd | 87.30 | — | 90.14 | — |
| Turbo-1st | 87.66 | — | 90.52 | — |
| Turbo-3rd | 90.32 | — | **92.41** | — |
| RBG-1st | 87.90 | — | 90.24 | — |
| RBG-3rd | **90.50** | — | 91.97 | — |
| DeNSe-Pei | 86.00 | 77.92 | 89.42 | 86.48 |
| DeNSe-Pei+CLE | 86.52 | 78.42 | 89.52 | 86.58 |
| DeNSe | 89.60 | 81.70 | 92.15 | 89.58 |
| DeNSe+CLE | 89.68 | 81.72 | 92.19 | 89.60 |

: Non-projective results on the CoNLL 2006 dataset. +CLE: we post-process non-tree outputs with the Chu-Liu-Edmonds algorithm.[]{data-label="tab:ger_depparse"}

Results on CTB follow a similar pattern.
As shown in Table \[tab:ch\_depparse\], <span style="font-variant:small-caps;">DeNSe</span> outperforms all previous neural models (see the test set columns) on UAS and LAS. <span style="font-variant:small-caps;">DeNSe</span> performs competitively with Z&M14, a non-neural model with a complex high-order decoding algorithm involving cube pruning and strategies for encouraging diversity. Post-processing the output of the parser with the Eisner algorithm generally improves performance (by 0.21%; see the last row in Table \[tab:ch\_depparse\]). Again we observe that 1-order-atomic features [@pei2015effective] are inferior to the LSTM-based ones. Table \[tab:uem\] reports unlabeled sentence-level exact match (UEM) for English and Chinese. Interestingly, even when using the greedy inference strategy, [DeNSe]{} yields a UEM comparable to Dyer15 on PTB. Finally, in Figure \[fig:uaslen\] we analyze the performance of our parser on sentences of different length. On both PTB and CTB, <span style="font-variant:small-caps;">DeNSe</span> has an advantage on long sentences compared to C&M14 and Dyer15. For Czech and German, we closely follow the evaluation setup of CoNLL 2006. We report both UAS and LAS, although most previous work has focused on UAS. Our results are summarized in Table \[tab:ger\_depparse\]. We compare [DeNSe]{} against three non-projective graph-based dependency parsers: the MST parser [@mcdonald2005non], the Turbo parser [@martins:2013], and the RBG parser [@lei2014low]. We show the performance of these parsers in first-order (e.g., MST-1st) and higher-order settings (e.g., Turbo-3rd). The results of MST-1st, MST-2nd, RBG-1st, and RBG-3rd, as well as those of Turbo-1st and Turbo-3rd, are taken from the corresponding publications. We show results for our parser with greedy inference (see <span style="font-variant:small-caps;">DeNSe</span> in the table) and when we use the Chu-Liu-Edmonds algorithm to post-process non-tree outputs ([DeNSe]{}+CLE).
As can be seen, <span style="font-variant:small-caps;">DeNSe</span> outperforms all other first- (and second-) order parsers on both German and Czech. As in the projective experiments, we observe a slight improvement (on both UAS and LAS) when using an MST algorithm. On German, [DeNSe]{} is comparable with the best third-order parser, while on Czech it lags behind Turbo-3rd and RBG-3rd. This is not surprising considering that <span style="font-variant:small-caps;">DeNSe</span> is a first-order parser and only uses words and POS tags as features. Comparison systems use a plethora of hand-crafted features and more sophisticated high-order decoding algorithms. Finally, note that a version of <span style="font-variant:small-caps;">DeNSe</span> with the features in [@pei2015effective] is consistently worse (see the second block in Table \[tab:ger\_depparse\]).

| Dataset | # Sentences | Tree (before) | Proj (before) | Tree (after) | Proj (after) |
|:--------|------------:|--------------:|--------------:|-------------:|-------------:|
| PTB | 1,700 | 95.1 | 86.6 | 100.0 | 100.0 |
| CTB | 803 | 87.0 | 73.1 | 100.0 | 100.0 |
| Czech | 374 | 87.7 | 65.5 | 100.0 | 72.7 |
| German | 367 | 96.7 | 67.3 | 100.0 | 68.1 |

: Percentage of trees and projective trees on the development set before and after [DeNSe]{} uses an MST algorithm. On PTB and CTB we use the Eisner algorithm, and on Czech and German we use the Chu-Liu-Edmonds algorithm.[]{data-label="tab:treerate"}

Our experimental results demonstrate that using an MST algorithm during inference can slightly improve the model’s performance. We further examined the extent to which the MST algorithm is necessary for producing dependency trees. Table \[tab:treerate\] shows the percentage of trees before and after the application of the MST algorithm across the four languages. In the majority of cases [DeNSe]{} outputs trees (ranging from 87.0% to 96.7%), and a significant proportion of them are projective (ranging from 65.5% to 86.6%).
Therefore, only a small proportion of outputs (14.0% on average) need to be post-processed with the Eisner or Chu-Liu-Edmonds algorithm.

Conclusions {#sec:conclusions}
===========

In this work we presented [DeNSe]{}, a neural dependency parser which we train without a transition system or graph-based algorithm. Experimental results show that [DeNSe]{} achieves competitive performance across four different languages and can seamlessly transfer from a projective to a non-projective parser simply by changing the post-processing MST algorithm during inference. In the future, we plan to increase the coverage of our parser by using tri-training techniques [@li2014ambiguity] and multi-task learning [@luong:2015].

#### Acknowledgments

We would like to thank Adam Lopez and Frank Keller for their valuable feedback. We acknowledge the financial support of the European Research Council (ERC; award number 681760).

[^1]: Our code is available at <http://github.com/XingxingZhang/dense_parser>.

[^2]: For more detail on LSTM networks, see, e.g., Hochreiter and Schmidhuber (1997).

[^3]: Note that $h(w_i)$ can be [root]{}.

[^4]: We obtained SD representations using the Stanford parser v.3.3.0.

[^5]: <http://stp.lingfil.uu.se/~nivre/research/Penn2Malt.html>

[^6]: We make the number of sentences in the development and test sets comparable.

[^7]: <http://nlp.stanford.edu/projects/glove/>
---
abstract: 'Being a homologue of the new Fe-based type of high-temperature superconductors, CeFePO exhibits magnetism, Kondo and heavy-fermion phenomena. We experimentally studied the electronic structure of CeFePO by means of angle-resolved photoemission spectroscopy. In particular, contributions of the Ce $4f$-derived states and their hybridization to the Fe $3d$ bands were explored using both symmetry selection rules for excitation and their photoionization cross-section variations as a function of photon energy. It was experimentally found, and later confirmed by LDA as well as DMFT calculations, that the Ce 4$f$ states hybridize with the Fe 3$d$ states of $d_{3z^2-r^2}$ symmetry near the Fermi level, which discloses their participation in the electron-correlation phenomena and provides insight into the mechanism of superconductivity in oxopnictides.'
author:
- 'M.G. Holder'
- 'A. Jesche'
- 'P. Lombardo'
- 'R. Hayn'
- 'D. V. Vyalikh'
- 'S. Danzenbächer'
- 'K. Kummer'
- 'C. Krellner'
- 'C. Geibel'
- 'Yu. Kucherenko'
- 'T. Kim'
- 'R. Follath'
- 'S. L. Molodtsov'
- 'C. Laubschat'
title: 'CeFePO: *f*-*d* hybridization and quenching of superconductivity'
---

The unusual superconducting properties of the novel Fe-based oxopnictides with transition temperatures ($T_c$) up to 55K have attracted considerable attention [@Kamihara08; @Chen08; @Ren08; @GFChen08]. While pure $R$FeAsO ($R$: rare-earth elements) compounds reveal metallic properties, doping by F on O sites leads to superconductivity. The proximity of the superconducting state to spin-density wave formation gave rise to speculations that the underlying pairing mechanism is based on magnetic fluctuations [@Mazin2009]. Superconductivity without doping, although at reduced $T_c$ with respect to the arsenides, is found in the isoelectronic phosphides, except for $R$=Ce [@Kamihara2008; @Baumbach2009].
In CeFeAsO both Fe and Ce order antiferromagnetically, below Néel temperatures of 140K [@Zhao2008] and 3.7K [@Jesche2009], respectively. A gradual replacement of As by P leads first to the vanishing of the Fe magnetism, coupled with a change of the Ce order to ferromagnetism [@Luo2009]. For further P doping the Ce order is suppressed, resulting in a paramagnetic heavy-fermion compound [@Bruning2009]. This wide variation of properties is a consequence of a strong sensitivity of the valence-band (VB) structure to the lattice parameters and to interaction with localized $f$ states. Close to the Fermi level ($E_F$) the electronic structure of $R$Fe$Pn$O ($Pn$: phosphorus or arsenic) materials is dominated by five energy bands that have predominantly Fe $3d$ character [@Vildosola2008; @Kuroki2009]. Small variations of the lattice parameters affect particularly two of these bands, namely those containing $d_{xy}$ and $d_{3z^2-r^2}$ orbitals. Increasing the distance of the pnictogen ions to the Fe plane shifts the $d_{xy}$-derived band towards lower and the $d_{3z^2-r^2}$-derived bands towards higher binding energies (BE), leading to a transition from 3D to 2D behavior of the Fermi surface (FS). As discussed in Ref. \[\], superconductivity delicately depends on nesting conditions between the FS sheets generated by the above mentioned bands around the $\Gamma$ point and those located around the $M$ point in the Brillouin zone (BZ). The nesting conditions may be affected by variations of the lattice parameters or by interaction with 4$f$ states. The purpose of the present work is to study the electronic structure of CeFePO by means of angle-resolved photoemission (ARPES) in order to understand possible reasons for the quenching of superconductivity. We find that closely below $E_F$ both the position and the dispersion of the valence bands are strongly changed with respect to those in LaFePO, which is at least partly due to interactions with the Ce 4$f$ states.
Hybridization of the Fe 3$d$-derived VBs with the Ce 4$f$ states leads, around the $\bar\Gamma$ point of the surface BZ, to a strong 4$f$ admixture to the valence bands, accompanied by a reconstruction of the Fermi surface and a shift of the 4$f$-derived quasiparticle band to lower binding energies. Experiments were performed at the “$1^3$-ARPES” setup at BESSY (Berlin) as described in Ref. \[\], at temperatures around 10K, on single crystals grown from a Sn flux as specified in Ref. \[\]. Due to the setup geometry, the vector potential $\bm{A}$ of the incident light is parallel to the sample surface at vertical polarization (VP) and possesses an additional perpendicular component at horizontal polarization (HP). Dipole matrix elements for the photoexcitation depend on the spatial extension of the orbital along the direction of $\bm{A}$. This means that in normal emission geometry states of $d_{3z^2-r^2}$ symmetry will contribute only at HP, while those of $d_{xz,yz}$ and $d_{x^2-y^2}$ ($d_{xy}$, depending on the orientation of the sample in the $(x,y)$ plane) symmetry will be detected at both VP and HP $-$ though with different relative intensities. ![(Color online) Experimental ARPES images recorded from CeFePO at *h*$\nu$=112eV and VP along the $\bar\Gamma$ - $\bar M$ (a) and $\bar\Gamma$ -$\bar X$ (b) directions in the surface BZ, and calculated energy bands for a slab containing 15 atomic layers, with a P terminated surface, treating 4$f$ states as quasi-core (c) and valence states (d). Size of the dots indicates contribution of $d$ orbitals of the outermost Fe layer (solid dots) or of Ce 4$f$ states (4th layer, open dots). 
The labels indicate the orbitals with strongest contribution to the bands.[]{data-label="ARPES"}](ARPES){width="8.5cm"} Photoemission (PE) spectra of Ce systems reveal a well known double-peak structure consisting of a component at about 2eV BE, roughly reflecting the 4$f^0$ final state expected from PE excitation of an unhybridized 4$f^1$ ground state, and a feature close to $E_F$ that is only due to hybridization and reproduces the ground-state configuration of mainly 4$f^1$ character. In our measurements we made use of strong variations of the 4$f$ photoionization cross section around the 4$d\rightarrow$ 4$f$ absorption threshold due to a Fano resonance: 4$f$ emission becomes resonantly enhanced (suppressed) at *h*$\nu$=121eV (112eV) photon energy[@Mol1997]. Valence-band maps taken at VP and a photon energy of 112eV are shown in Fig. \[ARPES\](a) and (b) for two high symmetry directions in the surface Brillouin zone. Along the $\bar\Gamma$ -$\bar X$ direction two energy bands cross $E_F$ at x$_1\approx$ 0.1 $\overline{\Gamma X}$ ($A_1$) and x$_2\approx$ 0.4$\overline{\Gamma X}$ ($A_2$), respectively. In LaFePO similar bands are observed but the crossings occur closer to the $\bar X$ point at x$_1\approx$ 0.2 $\overline{\Gamma X}$ and x$_2\approx$ 0.7 $\overline{\Gamma X}$ [@Lu2008]. In the vicinity of the $\bar M$ point two additional bands can be distinguished \[Fig. \[ARPES\](a), dashed\], that merge in LaFePO. All these bands are discussed in Ref. \[\] on the basis of LDA bulk band-structure calculations, using internally relaxed parameters and rescaling calculated band energies by a factor of two. In this way, the Fermi level crossings x$_1$ and x$_2$ are caused by $d_{xz, yz}$ and $d_{3z^2-r^2}$-derived states, respectively. The latter should hardly be visible at VP and hence at least for the present measurement a different character of the $A_2$ band has to be concluded. 
Another parabolic hole-like band (labeled $B$) comes very close to $E_F$ and has no direct counterpart in LaFePO. In order to take into account the surface sensitivity of ARPES and the fact that band positions of surface and subsurface atomic layers may differ in BE from the bulk ones [@Vyalikh2009], slab calculations were performed by means of the linear-muffin-tin-orbital (LMTO) method [@And75]. It follows from the structural and cohesive properties that the CeFePO crystal cleaves mainly between the FeP and CeO stacks, so that the surface will be terminated either by P or by Ce atoms. In the case of a P terminated slab, the Fe atoms occupy the second (subsurface) layer and the main contribution to the PE intensity stems from the high-cross-section Fe 3$d$-derived bands. A schematic view of the 15-atomic-layer-thick slab as well as results of the respective slab calculations are shown in Fig. \[ARPES\](c). Note that the 4$f$ states were treated as quasi-core states in order to avoid the well-known failures of LDA in describing strongly localized states. The effect of the surface on the observed bands can be explained to some extent by the spatial orientation of the involved Fe $d$ states. The calculated band structure of the Fe layer in the center of the slab is very close to the bulk band structure. Band $B$ is quite well described by $d_{x^2-y^2}$ states, which are not strongly influenced by surface effects, since these orbitals are oriented in the ($x$,$y$) plane and contribute to the Fe$-$P bonds but have negligible overlap with the Ce states. Two bands of $d_{xz}$ and $d_{yz}$ symmetry cross the Fermi level in the same way as bands $A_1$ and $A_2$. Close to the $\bar \Gamma$ point, band $A_1$ reveals increasing $d_{3z^2-r^2}$ character. Besides these bands, the calculation predicts a further band ($A_3$) of $d_{3z^2-r^2}$ character closer to the $\bar X$ point, resembling the situation reported in LaFePO [@Lu2008]. 
However, this band does not appear in the above ARPES maps, because this emission is symmetry forbidden for VP excitation. Our calculations show that the $d_{3z^2-r^2}$ states (and to a minor degree bands of $d_{xz}$ and $d_{yz}$ symmetry) overlap with the adjacent Ce layer, where they exhibit linear combinations of $f$ symmetry at the Ce sites, and are thus allowed to hybridize with the Ce 4$f$ states. The experimentally observed behavior of band $A_1$ may reflect effects of such hybridization, since it strongly deviates from a parabolic dispersion. In order to get a rough estimate of this effect, results of calculations where the 4$f$ states are treated as valence-band states are shown in Fig. \[ARPES\](d). Due to their interaction with the $f$ states, the Fe $d_{3z^2-r^2}$ states no longer contribute to the band structure close to $E_F$. Instead, a band of this symmetry appears at about 0.33eV BE at the $\bar\Gamma$ point. ![(Color online) ARPES images taken along the $\bar\Gamma$ - $\bar M$ direction with the VP light at *h*$\nu$=112eV (top, off resonance for $f$ emission), 121eV (middle, on resonance for $f$ emission), and with the HP light at *h*$\nu$=121eV (bottom, sensitive to the $d_{3z^2-r^2}$ orbitals).[]{data-label="ResPES"}](ResPES){width="8.0cm"} An investigation of the discussed hybridization between the Fe 3$d_{3z^2-r^2}$-derived bands and the Ce 4$f$ states is possible by enhancing the cross section of photoexcitation, switching from VP to HP (3$d$ bands) and exploiting the 4$d \rightarrow$ 4$f$ Fano resonance (4$f$ states). The respective PE maps are shown in Fig. \[ResPES\]. In the topmost map, taken with VP at *h*$\nu$=112eV, bands $A_1$, $A_2$ and $B$ are of comparable intensity, reflecting their Fe 3$d_{xz, yz}$ and 3$d_{x^2-y^2}$ character. Switching to *h*$\nu$=121eV, the intensity of bands $A_1$ and $A_2$ becomes substantially larger compared to that of band $B$. 
This is caused by the resonant enhancement of partial 4$f$ admixtures to the former bands. In particular, the intensity of band $A_1$ grows strongly between 0.1eV BE and the Fermi level, supporting the above assumption about the hybridization with the Ce 4$f$ states. In addition, two other features appear: ([*i*]{}) a peak directly at $E_F$ that reflects the Ce 4$f^1$ final state and ([*ii*]{}) a further band with its top at about 0.1eV BE, labeled $C$. Finally, at HP and *h*$\nu$=121eV (Fig. \[ResPES\] bottom), band $C$ appears strongly enhanced, indicating its predominant 3$d_{3z^2-r^2}$ character. Thus, its visibility at 121eV and VP is only due to finite Ce 4$f$ admixtures. Band $A_3$, on the other hand, is still not observed. In the results of our calculations \[see Fig. \[ARPES\](d)\] the Fe 3$d_{3z^2-r^2}$-derived band at 0.33eV BE corresponds to band $C$, but the calculated band has a higher BE than in the experiment due to the well-known overestimation of the 4$f$-VB interaction obtained with LDA. In Fig. \[ARPES\](c) this band is absent (the respective subsurface Fe 3$d$ states form band $A_3$); however, a similar band \[small dots in Fig. \[ARPES\] (c)\] is found at 0.25eV BE, which is derived from the Fe 3$d$ states of the central (bulk) layer. Thus, the possible presence of Ce at the surface may influence band $C$ and other bands of 3$d_{3z^2-r^2}$ character. In order to investigate this effect, the calculations were repeated for a Ce terminated slab \[see Fig. \[Calc\](a)\]. The results reveal, at the $\bar \Gamma$ point, the formation of a surface-derived band of 3$d_{3z^2-r^2}$ symmetry close to the experimentally obtained position of band $C$, while band $A_3$ is not observed. The remaining band structure looks quite similar to that calculated for the P terminated slab. One can see in Fig. \[ARPES\](d) that the lowest-lying Ce 4$f$-derived band is pushed above $E_F$ in those regions of **k** space where it interacts with the valence bands. 
This is in interesting correspondence to the experimentally observed behavior of the 4$f^1$-derived feature at $E_F$ \[Fig. \[ResPES\], on resonance\]: Around the $\bar\Gamma$ point this feature disappears and seems to be pushed across the Fermi energy by the parabolic valence bands, that in turn reveal certain 4$f$ admixtures in this region of **k** space. Similar interaction phenomena have been reported for the Yb 4$f^{13}$ bulk emission of the heavy-fermion system YbRh$_2$Si$_2$ [@Vyalikh2009] as well as for the respective surface component of YbIr$_2$Si$_2$ [@Danz2006; @Danz2007]. In the latter case, where the 4$f$ emission is relatively far away from the Fermi energy (0.6eV BE), the phenomenon could be described quantitatively in the light of a simplified approach to the Periodic Anderson Model (PAM) where 4$f$ dispersion and 4$f$ admixtures to the valence bands are explained by linear combinations of 4$f$ and valence-band states. For 4$f$ emissions at $E_F$ the mentioned approach is, unfortunately, not applicable because interaction with unoccupied valence states is not properly considered. In order to solve this problem we present in the following an elaborated approach to PAM based on dynamical mean-field theory (DMFT). ![(Color online) (a) Calculated energy bands for a Ce terminated slab constructed by interchanging the FeP and CeO stacks of the slab shown in Fig. \[ARPES\](c). The meaning of the symbols is the same as in Fig. \[ARPES\](c) and (d). (b) Distribution of the spectral intensity calculated by means of the Periodic Anderson Model.[]{data-label="Calc"}](Calc){width="8.0cm"} For a numerical simulation of hybridization effects within PAM we consider a valence band of bandwidth $W$=1.2eV and center at $\epsilon_d$=0.7eV BE, with parabolic dispersion in the relevant part of [**k**]{} space and a 4$f$ state at $\epsilon_f$=2eV BE. The self-energy was calculated by DMFT in a way as recently proposed in Ref. 
[@Sordi2007] but applying the noncrossing approximation (NCA) [@Bickers1987] as impurity solver, as in Ref. \[\]. With a hybridization parameter $t_{df}$=0.3eV and an on-site Coulomb repulsion $U$=7eV, the DMFT equation provides the self-energy of the hybridized 4$f$ states. Results in Fig. \[Calc\](b) show that the peak at $E_F$ is caused by $f$-$d$ hybridization and might be interpreted as the tail of the Kondo resonance, which is located above $E_F$ [@Reinert2001]. For those [**k**]{} values where the VB comes close to the Fermi level, the $f$ state is pushed towards lower BE (above $E_F$), as reflected in the PE spectrum by a decrease of 4$f$-derived intensity at $E_F$, while the intensity of the interacting valence band becomes enhanced by substantial 4$f$ admixtures. In our study, we compared ARPES data of CeFePO with results of LDA slab calculations and analyzed the effect of $f$-$d$ hybridization both in the framework of LDA and of PAM. Without adjustment of internal lattice parameters, our slab calculations qualitatively reproduce the observed band dispersions and characters, demonstrating the importance of surface effects in the electronic structure. In particular, the termination of the surface by either P or Ce atoms strongly affects the shape and position of the bands. For an interpretation of the experimental data, a coexistence of both terminations must be considered. The strongest influence of surface effects is found for the Fe 3$d_{3z^2-r^2}$ orbitals, which have the largest overlap with, and therefore the most pronounced interaction with, the Ce-derived states. As a consequence, a missing Ce surface layer induces the formation of surface-derived bands which are not observed in the bulk band structure. In LaFePO the Fe 3$d_{3z^2-r^2}$-derived states form a pocket in the Fermi surface around the $\bar \Gamma$ point, which is reproduced by our slab calculations if interactions with the 4$f$ states are neglected \[Fig. \[ARPES\](c)\]. 
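The level repulsion described here, a dispersive valence band pushing an $f$-derived level at $E_F$ up above the Fermi energy wherever the two approach each other, can be illustrated with a minimal two-level hybridization model. The band shape and hybridization strength in the sketch below are made-up illustrative numbers, not the PAM/DMFT parameters quoted in the text; it only demonstrates the generic mechanism.

```python
import numpy as np

# Minimal two-level hybridization model: a hole-like valence band eps_v(k)
# crossing E_F = 0 hybridizes with a flat f-derived level pinned at E_F.
# All numbers (band shape, V) are illustrative assumptions.
def hybridized_levels(eps_v, eps_f=0.0, V=0.1):
    """Eigenvalues of the 2x2 Hamiltonian [[eps_v, V], [V, eps_f]]."""
    avg = 0.5 * (eps_v + eps_f)
    split = np.sqrt(0.25 * (eps_v - eps_f) ** 2 + V ** 2)
    return avg - split, avg + split  # lower / upper hybridized band

k = np.linspace(-1.0, 1.0, 201)
eps_v = 0.4 - 1.5 * k ** 2          # bare hole-like band (eV), crosses E_F
lower, upper = hybridized_levels(eps_v)

# Wherever the bare band reaches E_F, the f-like level is repelled to about
# +V, i.e. pushed above the Fermi level, while the band acquires f character.
crossing = np.argmin(np.abs(eps_v))
print(upper[crossing])
```

At the crossing the upper (mostly $f$-like) level sits roughly $V$ above $E_F$, mimicking the disappearance of the $4f^1$-derived intensity near $\bar\Gamma$.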
In CeFePO this pocket is missing due to the $f$-$d$ hybridization \[Fig. \[ARPES\](d)\]. The Fe 3$d_{xz, yz}$-derived states are not as strongly affected by the hybridization. Two bands of this symmetry cross the Fermi level near $\bar \Gamma$, while two others exhibit intersections near the $\bar M$ point. In LaFePO, each pair of these bands is nearly degenerate, forming Fermi pockets around the $\bar \Gamma$ and $\bar M$ points, respectively. The different behavior of these bands in CeFePO might also be a consequence of the $f$-$d$ hybridization. Superconductivity depends crucially on electronic interactions between different FS sheets. Following the discussion in Ref. \[\], it is governed by nesting between a sheet around the $M$ point and sheets at $\Gamma$ formed by bands of $d_{xz, yz}$ and $d_{xy}$ symmetry, respectively. Thus, the strong modifications of the Fermi surface induced by the Ce 4$f$ states suppress in CeFePO the superconductivity that is observed in other $R$FePO compounds without strong $f$-$d$ correlation. On the other hand, the 4$f$ states themselves are heavily affected by the interaction with the valence bands, as reflected by the observed dispersion of the Kondo resonance, which may be important for understanding the quenching of magnetism and the appearance of heavy-fermion properties in CeFePO. This work was supported by the DFG project VY64/1-1, and by the Science and Technology Center in Ukraine (STCU), grant 4930. The authors would like to thank S. Borisenko for support at the “1$^3$-ARPES” beam line at BESSY. [99]{} Y. Kamihara et al., J. Am. Chem. Soc. **130**, 3296 (2008). X. H. Chen et al., Nature (London) **453**, 761 (2008). Z.-A. Ren et al., Europhys. Lett. **82**, 57002 (2008). G. F. Chen et al., Phys. Rev. Lett. **100**, 247002 (2008). I. I. Mazin and J. Schmalian, Physica C **469**, 614 (2009). Y. Kamihara et al., Phys. Rev. B **78**, 184512 (2008). R. E. Baumbach et al., New J. Phys. **11**, 025018 (2009). J. Zhao et al., Nature Mater. 
**7**, 953 (2008). A. Jesche et al., New J. Phys. **11**, 103050 (2009). Y. Luo et al., arXiv:0907.2691 (unpublished). E. M. Br[ü]{}ning et al., Phys. Rev. Lett. **101**, 117206 (2009). V. Vildosola et al., Phys. Rev. B **78**, 064518 (2008). K. Kuroki et al., Phys. Rev. B **79**, 224511 (2009). D. S. Inosov et al., Phys. Rev. B **77**, 212504 (2008). C. Krellner and C. Geibel, J. Crystal Growth **310**, 1875 (2008). S. L. Molodtsov et al., Phys. Rev. B **78**, 142 (1997). D. H. Lu et al., Nature (London) **455**, 81 (2008). D. V. Vyalikh et al., Phys. Rev. Lett. **103**, 137601 (2009). O. K. Andersen, Phys. Rev. B [**12**]{}, 3060 (1975). S. Danzenb[ä]{}cher et al., Phys. Rev. Lett. **96**, 106402 (2006). S. Danzenb[ä]{}cher et al., Phys. Rev. B **75**, 045109 (2007). G. Sordi et al., Phys. Rev. Lett. **99**, 196403 (2007); our model coincides with theirs but we adopted the notations $d \rightarrow f$ and $p \rightarrow d$. N. E. Bickers, Rev. Mod. Phys. **59**, 845 (1987). P. Lombardo et al., Phys. Rev. B **74**, 085116 (2006). F. Reinert et al., Phys. Rev. Lett. **87**, 106401 (2001).
--- abstract: 'This paper is concerned with the modeling errors that appear in numerical methods for inverse medium scattering problems (IMSP). Optimization-based iterative methods are widely employed to solve IMSP; they are computationally intensive because a series of Helmholtz equations needs to be solved numerically. Hence, rough approximations of the Helmholtz equations can significantly speed up the iterative procedure. However, rough approximations lead to instability and inaccurate estimations. Using Bayesian inverse methods, we incorporate the modeling errors brought by the rough approximations. Modeling errors are assumed to be random variables following some complex Gaussian mixture (CGM) distribution and, in addition, well-posedness of IMSP in the statistical sense is established by extending the general theory to involve CGM noise. Then, we generalize the real-valued expectation-maximization (EM) algorithm used in the machine learning community to our complex-valued case to learn the parameters of the CGM distribution. Based on these preparations, we generalize the recursive linearization method (RLM) to a new iterative method, named the Gaussian mixture recursive linearization method (GMRLM), which takes modeling errors into account. Finally, we provide two numerical examples to illustrate the effectiveness of the proposed method.' 
address: - 'School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an 710049, China' - 'School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an, 710049, China' - 'School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an 710049, China' - 'School of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, China' author: - Junxiong Jia - Bangyu Wu - Jigen Peng - Jinghuai Gao bibliography: - 'references.bib' title: Recursive linearization method for inverse medium scattering problems with complex mixture Gaussian error learning --- [^1] Introduction ============ Scattering theory has played a central role in the field of mathematical physics, which is concerned with the effect that an inhomogeneous medium has on an incident particle or wave [@ColtonThirdBook]. Usually, the total field is viewed as the sum of an incident field and a scattered field. Then, the inverse scattering problems focus on determining the nature of the inhomogeneity from a knowledge of the scattered field [@Bleistein2001Book; @ColtonSIAMReview2000], which have played important roles in diverse scientific areas such as radar and sonar, geophysical exploration, medical imaging and nano-optics. Deterministic computational methods for inverse scattering problems can be classified into two categories: nonlinear optimization based iterative methods [@Bao2015TopicReview; @Metivier2016IP; @Natterer1995IP] and imaging based direct methods [@Cakoni2006Book; @Cheney2001IP]. Direct methods are called qualitative methods which need no direct solvers and visualize the scatterer by highlighting its boundary with designed imaging functions. Iterative methods are usually called quantitative methods, which aim at providing some functions to represent the scatterer. Because a sequence of direct and adjoint scattering problems need to be solved, the quantitative methods are computationally intensive. 
This paper is concerned with nonlinear optimization based iterative methods, focusing especially on the recursive linearization method (RLM) for inverse medium scattering problems [@Bao2015TopicReview]. Although the computational obstacle can be handled in some circumstances, the accuracy of the forward solver is still a critical topic, particularly for applications in seismic exploration [@Fichtner2011Book] and medical imaging [@Koponen2014IEEE]. Many efficient forward solvers based on finite difference methods, finite element methods and spectral methods have been proposed [@Teresa2006IP; @Wang1997JASA]. Here, we will not propose a new forward solver to reduce the computational load, but instead attempt to reformulate the nonlinear optimization model within the Bayesian inverse framework, which can incorporate statistical properties of the model errors induced by rough forward solvers. By using these statistical properties, we aim to reduce the computational load of the inverse procedure. In order to give a clear sketch of our idea, let us provide a concise review of Bayesian inverse methods adapted to our purpose. Let $X$ denote some separable Banach space; then the forward problem is usually modeled as follows $$\begin{aligned} \label{forwardForm} d = \mathcal{F}(m) + \epsilon,\end{aligned}$$ where $d \in \mathbb{C}^{N_{d}}$ ($N_{d} \in \mathbb{N}^{+}$) stands for the measured data, $m \in X$ represents the parameter of interest and $\epsilon$ denotes noise. For inverse scattering problems, $m$ is just the scatterer, and $\mathcal{F}$ represents a Helmholtz equation combined with some measurement operator. Nonlinear optimization based iterative methods formulate the inverse problem as follows $$\begin{aligned} \label{optimiFormu} \min_{m \in X} \Bigg\{ \frac{1}{2}\big\|d - \mathcal{F}(m)\big\|_{2}^{2} + \mathcal{R}(m) \Bigg\},\end{aligned}$$ where $\mathcal{R}(\cdot)$ stands for some regularization operator and $\|\cdot\|_{2}$ represents the $\ell^{2}$-norm. 
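As a concrete toy instance of the regularized data-misfit functional (\[optimiFormu\]), the sketch below evaluates it for a hypothetical discretized linear forward map $\mathcal{F}(m) = Am$ with a Tikhonov penalty $\mathcal{R}(m) = \frac{\lambda}{2}\|m\|_{2}^{2}$. The matrix $A$, the data $d$ and the penalty weight are illustrative stand-ins, not the paper's Helmholtz-based forward model.

```python
import numpy as np

# Toy discretization: a complex-valued linear forward map F(m) = A m
# standing in for the (nonlinear) Helmholtz forward operator.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50)) + 1j * rng.standard_normal((20, 50))
m_true = rng.standard_normal(50)   # a hypothetical "scatterer" vector
d = A @ m_true                     # noiseless synthetic data

def objective(m, lam=1e-2):
    # (1/2)||d - F(m)||_2^2  +  R(m),  with Tikhonov R(m) = (lam/2)||m||^2
    misfit = 0.5 * np.linalg.norm(d - A @ m) ** 2
    return misfit + 0.5 * lam * np.linalg.norm(m) ** 2

# The true parameter fits the data exactly, so only the penalty remains.
print(objective(m_true) <= objective(np.zeros(50)))  # True
```

In the actual IMSP setting, each evaluation of the misfit requires a Helmholtz solve, which is what makes rough approximations of $\mathcal{F}$ attractive.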
Different from the minimization problem (\[optimiFormu\]), Bayesian inverse methods reformulate the inverse problem as a stochastic inference problem, which has the ability to give uncertainty quantifications [@inverse_fluid_equation; @Besov_prior; @Junxiong2016IP; @book_comp_bayeisn; @acta_numerica]. Bayesian inverse methods aim to provide complete posterior information; however, they can also offer point estimates. Up to now, there are two frequently used point estimators: the maximum a posteriori (MAP) estimate and the conditional mean (CM) estimate [@Tenorio2006Book]. For problems defined on a finite-dimensional space, the MAP estimate is obviously just the solution of the minimization problem (\[optimiFormu\]), as illustrated rigorously in [@book_comp_bayeisn]. Different from the finite-dimensional case, only recently have rigorous results on the relationship between MAP estimates and the minimization problem (\[optimiFormu\]) been obtained in [@Burger2014IP; @MAPSmall2013; @Dunlop2016IP] for $X$ an infinite-dimensional space. Simply speaking, if the minimization problem (\[optimiFormu\]) is used to solve our inverse problem, then an implicit assumption has been made, namely that the noise $\epsilon$ is sampled from some Gaussian distribution $\mathcal{N}(\bar{\epsilon},\Sigma_{\epsilon})$ with mean $\bar{\epsilon}$ and covariance operator $\Sigma_{\epsilon}$. In real-world applications, we would like to use a fast forward solver (of limited accuracy) to obtain an estimation that is as accurate as possible. Hence, the noise is usually brought not only by inaccurate measurements but also by a rough forward solver and inaccurate physical assumptions [@Calvetti2017]. 
Let us denote by $\mathcal{F}_{a}(\cdot)$ the forward operator related to some rough forward solver; then (\[forwardForm\]) can be rewritten as follows, by following the method used in [@Koponen2014IEEE] $$\begin{aligned} \label{forwardForm2} d = \mathcal{F}_{a}(m) + (\mathcal{F}(m) - \mathcal{F}_{a}(m)) + \epsilon.\end{aligned}$$ By denoting $\xi := (\mathcal{F}(m) - \mathcal{F}_{a}(m))$, we obtain $$\begin{aligned} \label{forwardForm3} d = \mathcal{F}_{a}(m) + \xi + \epsilon.\end{aligned}$$ From the perspective of Bayesian methods, we can model $\xi$ as a random variable, which obviously has the following two important features 1. $\xi$ depends on the unknown function $m$; 2. $\xi$ may be distributed according to a complicated probability measure. Concerning feature (1), we can relax this tough problem by assuming that $\xi$ is independent of $m$ but that the probability distribution of $\xi$ and the prior probability measure of $m$ are related to each other [@Lasanen2012IPI]. Concerning feature (2), to the best of our knowledge, the existing literature only provides a compromise, namely assuming that $\xi$ is sampled from some Gaussian probability distribution [@Junxiong2016; @Koponen2014IEEE]. Here, we attempt to provide a more realistic assumption on the probability measure of the random variable $\xi$. Notice that Bayes’ formula is also one of the fundamental tools in statistical machine learning [@PR2006Book], a field that attracts numerous researchers from various areas, e.g., computer science, statistics and mathematics. Notice that for problems such as background subtraction [@Yong2017IEEE], low-rank matrix factorization [@Zhao2015IEEE] and principal component analysis [@MENG2012487; @Zhao2014ICML], learning algorithms deduced from Bayes’ formula are useful, and the errors brought by inaccurate forward modeling also appear. 
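The decomposition $d = \mathcal{F}_{a}(m) + \xi + \epsilon$ can be made concrete with a toy example: below, the "accurate" operator is a fine-grid quadrature discretization of a simple integral operator and the "rough" one is a coarse-grid version of the same operator, so $\xi$ is exactly the discretization error. The integral kernel, receiver locations and test scatterer are made-up illustrative choices, not the Helmholtz model of the paper.

```python
import numpy as np

def forward(m_func, n):
    # Toy forward operator: F(m) = integral_0^1 sin(x + t) m(t) dt, evaluated
    # at three "receiver" points x, using an n-point midpoint quadrature rule.
    t = (np.arange(n) + 0.5) / n
    x = np.array([0.0, 0.5, 1.0])
    K = np.sin(x[:, None] + t[None, :]) / n   # quadrature weights baked in
    return K @ m_func(t)

m = lambda t: np.exp(-t)        # a hypothetical test "scatterer"
F_accurate = forward(m, 2000)   # stands in for the accurate F(m)
F_rough = forward(m, 8)         # stands in for the rough solver F_a(m)
xi = F_accurate - F_rough       # the model error xi = F(m) - F_a(m)

# xi is small compared to the signal, but not zero; its statistics are
# exactly what the CGM model is meant to capture.
print(np.linalg.norm(xi) < np.linalg.norm(F_accurate))  # True
```

In the paper's setting such residuals, collected over training scatterers, would serve as samples from which the CGM parameters are learned.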
For modeling errors appearing in machine learning tasks, the Gaussian mixture model is widely used, since it can approximate any probability measure in some sense [@PR2006Book]. Gaussian mixture distributions usually have a density function of the form $$\begin{aligned} \sum_{k = 1}^{K}\pi_{k} \mathcal{N}(\cdot \,| \,\zeta_{k},\Sigma_{k}),\end{aligned}$$ where $\mathcal{N}(\cdot \,| \,\zeta_{k},\Sigma_{k})$ stands for a Gaussian probability density function with mean value $\zeta_{k}$ and covariance matrix $\Sigma_{k}$, and for every $k$, $\pi_{k}\in (0,1)$ satisfies $\sum_{k=1}^{K}\pi_{k} = 1$. In the following, we always assume that the measurement noise $\epsilon$ is a Gaussian random variable with mean $0$ and covariance matrix $\nu I$ ($\nu \in \mathbb{R}^{+}$ and $I$ is an identity matrix). For our problem, if we assume that $\xi$ is sampled from some Gaussian mixture probability distribution, we can intuitively pose the following optimization problem $$\begin{aligned} \label{modelQ1} \min_{m\in X}\Bigg\{-\ln\Big( \sum_{k = 1}^{K}\pi_{k} \mathcal{N}(d-\mathcal{F}_{a}(m) \,| \, \zeta_{k},\Sigma_{k} + \nu I) \Big) + \mathcal{R}(m) \Bigg\}.\end{aligned}$$ In the machine learning field, there are usually plenty of training samples, and the forward problems are not computationally intensive compared with the inverse medium scattering problem. Hence, alternating iterative methods are used there to find the optimal solution and estimate the modeling error simultaneously [@Zhao2015IEEE]. However, considering the lack of learning data and the high computational load of our forward problems, we cannot trivially generalize these alternating iterative methods to our case. In order to employ Gaussian mixture distributions, we will meet the following three problems 1. Under which conditions Bayes’ formula and the MAP estimate with Gaussian mixture distributions hold in infinite-dimensional spaces; 2. 
How to construct learning examples and how to learn the parameters in the Gaussian mixture distributions. Firstly, since we can hardly have as many learning examples as for usual machine learning problems, we will meet the situation that the number of learning examples is smaller than the number of discretization points, which is also an ill-posed problem. Secondly, the solution of the Helmholtz equation is a complex-valued function. Because of that, we should develop learning algorithms for complex-valued variables, which differ from the classical ones for machine learning tasks [@PR2006Book; @Yong2017IEEE]. 3. For the complicated minimization problem (\[modelQ1\]), how to construct a suitable iterative-type method, i.e., some modified RLM. In this paper, we provide a primitive study of these three problems. Theoretical foundations for using Gaussian mixture distributions in infinite-dimensional problems are established. The learning algorithm is designed based on the relationship between the real Gaussian distribution and the complex Gaussian distribution. Through careful calculations, a modified RLM named the Gaussian mixture recursive linearization method (GMRLM) is proposed to efficiently solve the inverse medium problem with multi-frequency data. Numerical examples are finally reported to illustrate the effectiveness of the proposed method. The outline of this paper is as follows. In Section 2, the general Bayesian inverse method with a Gaussian mixture noise model is established, and the relationship between MAP estimators and classical regularization methods is also discussed. In Section 3, the well-posedness of the inverse medium scattering problem in the Bayesian sense is proved. Then, we propose the learning algorithm for the Gaussian mixture distribution by generalizing the real-valued expectation-maximization (EM) algorithm to a complex-valued EM algorithm. At last, we deduce the Gaussian mixture recursive linearization method. 
In Section 4, two typical numerical examples are given, which illustrate the effectiveness of the proposed methods. Bayesian inverse theory with Gaussian mixture distribution {#BayeTheoSection} ========================================================== In this section, we prove the well-posedness and illustrate the validity of MAP estimate of inverse problems under the Bayesian inverse framework when the noise is assumed to be a random variable sampled from a complex valued Gaussian mixture distribution. Before diving into the main contents, let us provide a brief notation list which will be used in all of the following parts of this paper. **Notations:** - For an integer $N$, denote $\mathbb{C}^{N}$ as $N$-dimensional complex vector space; $\mathbb{R}^{+}$ and $\mathbb{N}^{+}$ represent positive real numbers and positive integers respectively; - For a Banach space $X$, $\|\cdot\|_{X}$ stands for the norm defined on $X$ and, particularly, $\|\cdot\|_{2}$ represents the $\ell^{2}$-norm of $\ell^{2}$ space. - For a matrix $\Sigma$, denote its determinant as $\det(\Sigma)$; - Denote $B(m,R)$ as a ball with center $m$ and radius $R$. Particularly, denote $B_{R} := B(0,R)$ when the ball is centered at origin; - Denote $X$ and $Y$ to be some Banach space; For an operator $F :\, X \rightarrow Y$, denote $F'(x_{0})$ as the Fréchet derivative of $F$ at $x_{0} \in X$. - Denote $\text{Re}(\xi)$, $\text{Imag}(\xi)$, $\xi^{T}$, $\xi^{H}$ and $\bar{\xi}$ as the real part, imaginary part, transpose, conjugate transpose and complex conjugate of $\xi \in \mathbb{C}^{N}$ respectively; - The notation $\eta \sim p(\eta)$ stands for a random variable $\eta$ obeys the probability distribution with density function $p(\cdot)$. 
Let $\mathcal{N}_{c}(\eta \,|\, \zeta,\Sigma)$ represent the density function of the $N_{d}$-dimensional complex valued Gaussian probability distribution [@Goodman1963Annals] defined as follows $$\begin{aligned} \mathcal{N}_{c}(\eta \, | \, \zeta,\Sigma) := \frac{1}{(\pi)^{N_{d}}\det(\Sigma)} \exp\left( -\Big\|\eta-\zeta\Big\|_{\Sigma}^{2} \right),\end{aligned}$$ where $\zeta$ is an $N_{d}$-dimensional complex valued vector, $\Sigma$ is a positive definite Hermitian matrix and $\|\cdot\|_{\Sigma}^{2}$ is defined as follows $$\begin{aligned} \big\|\eta-\zeta\big\|_{\Sigma}^{2} := \big(\eta-\zeta\big)^{H} \, \Sigma^{-1} \, \big(\eta-\zeta\big),\end{aligned}$$ with the superscript $H$ standing for conjugate transpose. Denote $\eta := \xi + \epsilon$; then formula (\[forwardForm3\]) can be written as follows $$\begin{aligned} d = \mathcal{F}_{a}(m) + \eta,\end{aligned}$$ where $$\begin{aligned} d \in \mathbb{C}^{N_{d}}, \quad \eta \sim \sum_{k = 1}^{K}\pi_{k}\mathcal{N}_{c}(\eta \,|\, \zeta_{k},\Sigma_{k} + \nu I),\end{aligned}$$ with $N_{d}$, $K$ denoting some positive integers and $\nu \in \mathbb{R}^{+}$. Before going further, let us state the following basic assumptions on the approximate forward operator $\mathcal{F}_{a}$. **Assumption 1.** 1. for every $\epsilon > 0$ there are $M = M(\epsilon) \in \mathbb{R}$ and $C\in\mathbb{R}$ such that, for all $m \in X$, $$\begin{aligned} \|\mathcal{F}_{a}(m)\|_{2} \leq C \exp(\epsilon\|m\|_{X}^{2} + M). \end{aligned}$$ 2. for every $r > 0$ there is $K = K(r) > 0$ such that, for all $m \in X$ with $\|m\|_{X} < r$, we have $$\begin{aligned} \|\mathcal{F}_{a}'(m)\|_{op} \leq K, \end{aligned}$$ where $\|\cdot\|_{op}$ denotes the operator norm. At this stage, we need to provide some basic notions of the Bayesian inverse method when $m$ lies in some infinite-dimensional space. 
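The complex Gaussian density $\mathcal{N}_{c}$ defined above can be evaluated directly from its formula, as the sketch below does for a small Hermitian positive definite $\Sigma$; the test vectors are arbitrary illustrative values.

```python
import numpy as np

# Direct evaluation of the circular complex Gaussian density
#   N_c(eta | zeta, Sigma) = exp(-(eta-zeta)^H Sigma^{-1} (eta-zeta))
#                            / (pi^{N_d} det(Sigma)),
# following the definition in the text. Sigma must be Hermitian positive
# definite, so det(Sigma) and the quadratic form are real and positive.
def complex_gaussian_density(eta, zeta, Sigma):
    diff = eta - zeta
    quad = np.real(np.conj(diff) @ np.linalg.solve(Sigma, diff))
    norm = np.pi ** len(eta) * np.real(np.linalg.det(Sigma))
    return np.exp(-quad) / norm

eta = np.array([0.3 + 0.1j, -0.2j])          # arbitrary test point
zeta = np.zeros(2, dtype=complex)            # zero mean
Sigma = np.diag([1.0 + 0j, 2.0 + 0j])        # diagonal Hermitian covariance
p = complex_gaussian_density(eta, zeta, Sigma)
print(p > 0)  # True: a valid (positive) density value
```

Note that, unlike the real multivariate normal, there is no factor $1/2$ in the exponent and the normalization uses $\pi^{N_d}$ rather than $(2\pi)^{N_d/2}$.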
Following the work [@inverse_fluid_equation; @acta_numerica], let $\mu_{0}$ stand for the prior probability measure defined on a separable Banach space $X$ and let $\mu^{d}$ denote the posterior probability measure. Then Bayes’ formula may be written as follows: $$\begin{aligned} \frac{d\mu^{d}}{d\mu_{0}}(m) & = \frac{1}{Z(d)} \exp\Big( \Phi(m;d) \Big), \label{DefineMuY} \\ Z(d) & = \int_{X} \exp\Big( \Phi(m;d) \Big)\mu_{0}(dm), \label{DefineOfZd}\end{aligned}$$ where $\frac{d\mu^{d}}{d\mu_{0}}(\cdot)$ represents the Radon-Nikodym derivative and $$\begin{aligned} \Phi(m;d) := \ln\Bigg\{\sum_{k = 1}^{K} \pi_{k} \frac{1}{\pi^{N_{d}}\det(\Sigma_{k} + \nu I)} \exp\left( -\Big\|d-\mathcal{F}_{a}(m)-\zeta_{k}\Big\|_{\Sigma_{k} + \nu I}^{2} \right) \Bigg\}.\end{aligned}$$ Well-posedness -------------- In this subsection, we prove the following result, which justifies formulas (\[DefineMuY\]) and (\[DefineOfZd\]) under some general conditions. \[wellPosedBaye\] Let Assumption 1 hold for some $\epsilon$, $r$, $K$ and $M$. Assume that $X$ is a separable Banach space, $\mu_{0}(X) = 1$ and that $\mu_{0}(X\cap B) > 0$ for some bounded set $B$ in $X$. In addition, we assume $\int_{X} \exp(2\epsilon \|m\|_{X}^{2}) \mu_{0}(dm) < \infty$. Then, for every $d \in \mathbb{C}^{N_{d}}$, $Z(d)$ given by (\[DefineOfZd\]) is positive and the probability measure $\mu^{d}$ given by (\[DefineMuY\]) is well-defined. In addition, there is $C = C(r) > 0$ such that, for all $d_{1}, d_{2} \in B(0,r)$, $$\begin{aligned} d_{\text{Hell}}(\mu^{d_{1}}, \mu^{d_{2}}) \leq C \|d_{1} - d_{2}\|_{2},\end{aligned}$$ where $d_{\text{Hell}}(\cdot,\cdot)$ denotes the Hellinger distance between two probability measures. In order to prove this theorem, we need to verify the three conditions stated in Assumption 4.2 and Theorem 4.4 of [@Dashti2014].
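To make the Radon-Nikodym formula (\[DefineMuY\]) concrete, posterior expectations can be approximated by reweighting prior samples with $\exp(\Phi(m;d))$ (self-normalized importance sampling). The sketch below is purely illustrative: it assumes a one-dimensional toy setting with prior $\mathcal{N}(0,1)$, forward map $\mathcal{F}_{a}(m) = m$, and a single mixture component with $\zeta_{1} = 0$ and $\Sigma_{1} + \nu I = 1$; none of these choices come from the paper.

```python
import numpy as np

# Self-normalized importance sampling: E_post[m] ~ sum_i w_i m_i with
# w_i proportional to exp(Phi(m_i; d)) and m_i drawn from the prior.
rng = np.random.default_rng(0)
m = rng.standard_normal(200_000)       # prior samples, m ~ N(0, 1)
d = 1.0 + 0.0j                         # toy datum
Phi = -np.abs(d - m) ** 2              # log-likelihood for K = 1, up to a constant
w = np.exp(Phi - Phi.max())            # subtract the max for numerical stability
w /= w.sum()
posterior_mean = float(np.sum(w * m))  # analytic posterior mean here is 2/3
```

For this toy posterior, density proportional to $\exp(-m^{2}/2 - (1-m)^{2})$, completing the square gives mean $2/3$, which the weighted sample average reproduces up to Monte Carlo error.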
Since $$\begin{aligned} \sum_{k = 1}^{K} \pi_{k} \frac{1}{\pi^{N_{d}}\det(\Sigma_{k}+\nu I)} \exp\left( -\Big\|d-\mathcal{F}_{a}(m)-\zeta_{k}\Big\|_{\Sigma_{k}+\nu I}^{2} \right) \leq 1,\end{aligned}$$ we know that $$\begin{aligned} \label{Cond1} \Phi(m;d) \leq 0.\end{aligned}$$ In the following, we denote $$\begin{aligned} f_{k}(d,m):= \big(d-\mathcal{F}_{a}(m)-\zeta_{k}\big)^{H} \, \big(\Sigma_{k}+\nu I\big)^{-1} \, \big(d-\mathcal{F}_{a}(m)-\zeta_{k}\big).\end{aligned}$$ Then, we have $$\begin{aligned} \nabla_{d}f_{k}(d,m) & = (d - \mathcal{F}_{a}(m) - \zeta_{k})^{H}(\Sigma_{k}+\nu I)^{-1} + \overline{(d-\mathcal{F}_{a}(m)-\zeta_{k})^{H}(\Sigma_{k}+\nu I)^{-1}} \\ & = 2\text{Re}\Big( (d - \mathcal{F}_{a}(m) - \zeta_{k})^{H}(\Sigma_{k}+\nu I)^{-1} \Big).\end{aligned}$$ Through some simple calculations, we find that $$\begin{aligned} \label{DdPhi1} \nabla_{d}\Phi(m;d) = - \sum_{k=1}^{K} 2 g_{k} \text{Re}\Big( (d - \mathcal{F}_{a}(m) - \zeta_{k})^{H}(\Sigma_{k}+\nu I)^{-1} \Big),\end{aligned}$$ where $$\begin{aligned} \label{gkDef} g_{k} := \frac{\pi_{k}\mathcal{N}_{c}(d-\mathcal{F}_{a}(m) \,|\, \zeta_{k},\Sigma_{k}+\nu I)} {\sum_{j=1}^{K}\pi_{j}\mathcal{N}_{c}(d-\mathcal{F}_{a}(m) \,|\, \zeta_{j},\Sigma_{j}+\nu I)}.\end{aligned}$$ From expression (\[DdPhi1\]) and (i) of Assumption 1, we can deduce that $$\begin{aligned} \label{BoundDd} \|\nabla_{d}\Phi(m;d)\|_{2} \leq C\big( 1 + \|d\|_{2} + \exp(\epsilon\|m\|_{X}^{2}) \big),\end{aligned}$$ where the constant $C$ depends on $K$, $\{\Sigma_{k}\}_{k=1}^{K}$ and $\{\zeta_{k}\}_{k = 1}^{K}$.
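The weights $g_{k}$ in (\[gkDef\]) form a softmax over the log-densities $\ln\pi_{k} + \ln\mathcal{N}_{c}(\cdot)$. When computing them numerically, the individual densities easily underflow; a standard remedy (an implementation note, not something prescribed by the paper) is the log-sum-exp trick:

```python
import numpy as np

def responsibilities(log_weights):
    """Compute g_k = exp(l_k) / sum_j exp(l_j) from the unnormalized
    log-weights l_k = ln(pi_k) + ln N_c(...), shifting by the maximum
    so the exponentials never underflow to an all-zero denominator."""
    shifted = log_weights - np.max(log_weights)
    w = np.exp(shifted)
    return w / w.sum()

# Two components whose densities are astronomically small but whose
# log-weights differ by ln 3: the result is still exactly (1/4, 3/4).
g = responsibilities(np.array([-1000.0, -1000.0 + np.log(3.0)]))
```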
Considering (\[BoundDd\]), we obtain $$\begin{aligned} \label{Cond2} |\Phi(m;d_{1}) - \Phi(m;d_{2})| \leq C\big( 1 + r + \exp(\epsilon\|m\|_{X}^{2}) \big) \|d_{1} - d_{2}\|_{2}.\end{aligned}$$ By our assumptions, the following relation obviously holds: $$\begin{aligned} \label{Cond3} C^{2}\big( 1 + r + \exp(\epsilon\|m\|_{X}^{2}) \big)^{2} \in L_{\mu_{0}}^{1}(X;\mathbb{R}).\end{aligned}$$ At this stage, estimates (\[Cond1\]), (\[Cond2\]) and (\[Cond3\]) verify Assumption 4.2 and the conditions of Theorem 4.4 in [@Dashti2014]. Employing the theory constructed in [@Dashti2014], we complete the proof. The assumptions on the prior probability measure are rather general; they include the Gaussian probability measure and the TV-Gaussian probability measure [@TGPrior2016] for suitable spaces $X$. MAP estimate ------------ Through the MAP estimate, the Bayesian inverse method and the classical regularization method are brought into accordance with each other. Because our aim is to develop an efficient optimization method, we need to demonstrate the validity of the MAP estimate, which provides the theoretical foundation for our method. First, let us assume that the prior probability measure $\mu_{0}$ is a Gaussian probability measure and define the following functional: $$\begin{aligned} \label{MiniProForm} J(m) = \left \{\begin{aligned} & -\Phi(m;d) + \frac{1}{2} \|m\|_{E}^{2} \quad \text{if }m\in E, \text{ and} \\ & + \infty, \quad\quad\quad\quad\quad\quad\quad\,\,\, \text{else.} \end{aligned}\right.\end{aligned}$$ Here $(E,\|\cdot\|_{E})$ denotes the Cameron-Martin space associated to $\mu_{0}$. In infinite dimensions, we adopt the small-ball approach constructed in [@MAPSmall2013]. For $m \in E$, let $B(m,\delta) \subset X$ be the open ball centred at $m$ with radius $\delta$ in $X$. Then, we can prove the following theorem, which encapsulates the idea that probability is maximized where $J(\cdot)$ is minimized. \[SmallBall\] Let Assumption 1 hold and assume that $\mu_{0}(X) = 1$.
Then the function $J(\cdot)$ defined by (\[MiniProForm\]) satisfies, for any $m_{1}, m_{2} \in E$, $$\begin{aligned} \lim_{\delta\rightarrow 0}\frac{\mu(B(m_{1},\delta))}{\mu(B(m_{2},\delta))} = \exp\Big( J(m_{2}) - J(m_{1}) \Big).\end{aligned}$$ In order to prove this theorem, let us verify the following two conditions concerning $\Phi(m;d)$: 1. for every $r > 0$ there exists $K = K(r) \in \mathbb{R}$ such that, for all $m \in X$ with $\|m\|_{X} \leq r$, we have $\Phi(m;d) \geq K$. 2. for every $r > 0$ there exists $L = L(r) > 0$ such that, for all $m_{1}, m_{2} \in X$ with $\|m_{1}\|_{X}, \|m_{2}\|_{X} < r$, we have $|\Phi(m_{1};d) - \Phi(m_{2};d)| \leq L \|m_{1} - m_{2}\|_{X}$. For the first condition, by employing Jensen’s inequality, we have $$\begin{aligned} \Phi(m;d) & = \ln\Big( \sum_{k = 1}^{K}\pi_{k} \mathcal{N}_{c}\big(d - \mathcal{F}_{a}(m) \, | \, \zeta_{k}, \Sigma_{k}+\nu I\big) \Big) \\ & \geq \sum_{k = 1}^{K} \pi_{k} \ln\Bigg( \frac{1}{\pi^{N_{d}}\det(\Sigma_{k} + \nu I)} \exp\bigg( -\big\| d-\mathcal{F}_{a}(m) - \zeta_{k} \big\|_{\Sigma_{k}+\nu I}^{2} \bigg) \Bigg) \\ & \geq \sum_{k = 1}^{K} \pi_{k} \Big( -\big\| d - \mathcal{F}_{a}(m) - \zeta_{k} \big\|_{\Sigma_{k}+\nu I}^{2} - N_{d}\ln(\pi) - \ln\det(\Sigma_{k}+\nu I) \Big) \\ & \geq - C \big( 1 + \|d\|_{2}^{2} + \exp(\epsilon r^{2}) \big),\end{aligned}$$ where $C$ is a positive constant depending on $K$, $\{\pi_{k}\}_{k=1}^{K}$, $\{\Sigma_{k}\}_{k = 1}^{K}$, $\{\zeta_{k}\}_{k=1}^{K}$ and $N_{d}$. Now, the first condition holds true by choosing $K = - C \big( 1 + \|d\|_{2}^{2} + \exp(\epsilon r^{2}) \big)$. In order to verify the second condition, we denote $$\begin{aligned} f_{k}(d,m):= \big(d-\mathcal{F}_{a}(m)-\zeta_{k}\big)^{H} \, \big(\Sigma_{k}+\nu I\big)^{-1} \, \big(d-\mathcal{F}_{a}(m)-\zeta_{k}\big),\end{aligned}$$ and then focus on the derivative of $f_{k}$ with respect to $m$.
Through some calculations, we find that $$\begin{aligned} \nabla_{m}f_{k}(d,m) = - 2 \text{Re}\Big( (d-\mathcal{F}_{a}(m)-\zeta_{k})^{H}(\Sigma_{k}+\nu I)^{-1}\mathcal{F}_{a}'(m) \Big).\end{aligned}$$ Hence, we have $$\begin{aligned} \label{DePhi1} \nabla_{m}\Phi(m;d) = - \sum_{k=1}^{K}2 g_{k} \text{Re}\Big( (d-\mathcal{F}_{a}(m)-\zeta_{k})^{H}(\Sigma_{k}+\nu I)^{-1}\mathcal{F}_{a}'(m) \Big),\end{aligned}$$ where $g_{k}$ is defined as in (\[gkDef\]). Using Assumption 1 and formula (\[DePhi1\]), we find that $$\begin{aligned} |\Phi(m_{1};d) - \Phi(m_{2};d)| \leq C K (1+\|d\|_{2} + \exp(\epsilon \, r^{2})) \|m_{1} - m_{2}\|_{X}.\end{aligned}$$ Setting $L = C K (1+\|d\|_{2} + \exp(\epsilon \, r^{2}))$, the second condition obviously holds. Combining these two conditions with (\[Cond1\]), we can complete the proof by using Theorem 4.11 in [@Dashti2014]. Now, if we assume $\mu_{0}$ is a TV-Gaussian probability measure, then we can define the following functional: $$\begin{aligned} \label{MiniProFormTG} J(m) = \left \{\begin{aligned} & -\Phi(m;d) + \lambda \|m\|_{\text{TV}} + \frac{1}{2} \|m\|_{E}^{2} \quad \text{if }m\in E, \text{ and} \\ & + \infty, \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\,\, \text{else.} \end{aligned}\right.\end{aligned}$$ Using similar methods as in the Gaussian case together with the above functional (\[MiniProFormTG\]), we can prove a similar theorem to illustrate that the MAP estimate is also the minimizer of $\min_{m\in X}J(m)$. Inverse medium scattering problem {#SecInverMedi} ================================= In this section, we apply the general theory developed in Section \[BayeTheoSection\] to a specific inverse medium scattering problem. Then we construct algorithms to learn the parameters appearing in the complex Gaussian mixture distribution. Finally, we present the model-error-compensation based recursive linearization method.
Since the model errors are estimated by Gaussian mixture distributions, the proposed iterative method is named the Gaussian mixture recursive linearization method (GMRLM). Now, let us provide some basic settings of the inverse scattering problem considered in this paper. In the following, we assume that the total field $u$ satisfies $$\begin{aligned} \label{zongEq} \Delta u + \kappa^{2}(1+q)u = 0 \quad \text{in }\mathbb{R}^{2},\end{aligned}$$ where $\kappa > 0$ is the wavenumber, and $q(\cdot)$ is a real function known as the scatterer, representing the inhomogeneous medium. We assume that the scatterer has a compact support contained in the ball $B_{R} = \{ \mathbf{r}\in\mathbb{R}^{2} : \, |\mathbf{r}| < R \}$ with boundary $\partial B_{R} = \{ \mathbf{r}\in\mathbb{R}^{2} : \, |\mathbf{r}| = R \}$, and satisfies $-1 < q_{\text{min}} \leq q \leq q_{\text{max}} < \infty$, where $q_{\text{min}}$ and $q_{\text{max}}$ are two constants. The scatterer is illuminated by a plane incident field $$\begin{aligned} u^{\text{inc}}(\mathbf{r}) = e^{i\kappa\mathbf{r}\cdot\mathbf{d}},\end{aligned}$$ where $\mathbf{d} = (\cos\theta, \sin\theta) \in \{ \mathbf{r}\in\mathbb{R}^{2} \, : \, |\mathbf{r}| = 1 \}$ is the incident direction and $\theta \in (0,2\pi)$ is the incident angle.
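Since $|\mathbf{d}| = 1$, the incident plane wave solves the homogeneous Helmholtz equation $\Delta u^{\text{inc}} + \kappa^{2}u^{\text{inc}} = 0$, which can be checked numerically with a five-point Laplacian; the wavenumber, angle, evaluation point and grid spacing below are arbitrary illustrative choices:

```python
import numpy as np

kappa, theta = 2.0, 0.7
d = np.array([np.cos(theta), np.sin(theta)])   # unit incident direction

def u_inc(x, y):
    """Plane incident field e^{i kappa r.d}."""
    return np.exp(1j * kappa * (x * d[0] + y * d[1]))

# Five-point finite-difference Laplacian at an arbitrary point (x0, y0).
x0, y0, h = 0.3, -0.2, 1e-3
lap = (u_inc(x0 + h, y0) + u_inc(x0 - h, y0)
       + u_inc(x0, y0 + h) + u_inc(x0, y0 - h) - 4.0 * u_inc(x0, y0)) / h**2
residual = abs(lap + kappa**2 * u_inc(x0, y0))  # O(h^2), so nearly zero
```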
Obviously, the incident field satisfies $$\begin{aligned} \label{inEq} \Delta u^{\text{inc}} + \kappa^{2} u^{\text{inc}} = 0 \quad \text{in }\mathbb{R}^{2}.\end{aligned}$$ The total field $u$ consists of the incident field $u^{\text{inc}}$ and the scattered field $u^{s}$: $$\begin{aligned} \label{fenjieEq} u = u^{\text{inc}} + u^{s}.\end{aligned}$$ It follows from (\[zongEq\]), (\[inEq\]) and (\[fenjieEq\]) that the scattered field satisfies $$\begin{aligned} \label{scatterEq} \Delta u^{s} + \kappa^{2}(1+q)u^{s} = -\kappa^{2}qu^{\text{inc}} \quad \text{in }\mathbb{R}^{2},\end{aligned}$$ together with the following Sommerfeld radiation condition: $$\begin{aligned} \label{radiatEq} \lim_{|\mathbf{r}| \rightarrow \infty}r^{1/2}\big(\partial_{r}u^{s} - i\kappa u^{s}\big) = 0,\end{aligned}$$ where $r = |\mathbf{r}|$. Well-posedness in the sense of Bayesian formulation {#WellSubsec} --------------------------------------------------- In this subsection, we suppose that the scatterer $q(\cdot)$ appearing in (\[zongEq\]) has compact support with $\text{supp}(q) \subset \Omega \subset B_{R}$, where $\Omega$ is a square region. For the reader’s convenience, we provide an illustration of this relation in Figure \[illuFig\].
\ Because the scatterer $q(\cdot)$ is assumed to have compact support, problem (\[scatterEq\]) and (\[radiatEq\]), defined on $\mathbb{R}^{2}$, can be reformulated as the following problem defined on a bounded domain [@Bao2015TopicReview]: $$\begin{aligned} \label{BoundedHelEq} \left \{\begin{aligned} & \Delta u^{s} + \kappa^{2}(1+q)u^{s} = -\kappa^{2}qu^{\text{inc}} \quad \text{in }B_{R}, \\ & \partial_{\mathbf{n}}u^{s} = \mathcal{T}u^{s} \quad \text{on }\partial B_{R}, \end{aligned}\right.\end{aligned}$$ where $\mathcal{T}$ is the Dirichlet-to-Neumann (DtN) operator defined as follows: for any $\varphi \in H^{1/2}(\partial B_{R})$, $$\begin{aligned} (\mathcal{T}\varphi)(R,\theta) = \kappa\sum_{n\in\mathbb{Z}}\frac{H^{(1)'}_{n}(\kappa R)}{H^{(1)}_{n}(\kappa R)}\hat{\varphi}_{n}e^{in\theta},\end{aligned}$$ where $H^{(1)}_{n}$ is the Hankel function of the first kind of order $n$ and $$\begin{aligned} \hat{\varphi}_{n} = (2\pi)^{-1}\int_{0}^{2\pi} \varphi(R,\theta)e^{-in\theta}d\theta.\end{aligned}$$ For problem (\[BoundedHelEq\]), we define the map $\mathcal{S}(q,\kappa)$ by $u^{s} = \mathcal{S}(q,\kappa)u^{\text{inc}}$ as in [@Bao2015TopicReview]. From [@Bao2010StochasticSource; @ColtonThirdBook], we easily see that the following estimate holds for problem (\[BoundedHelEq\]): $$\begin{aligned} \label{EstimateH} \|u^{s}\|_{H^{2}(\Omega)} \leq C \|q\|_{L^{\infty}(\Omega)}\|u^{\text{inc}}\|_{L^{2}(B(0,R))}.\end{aligned}$$ Considering the Sobolev embedding theorem, we can define the following measurement operator: $$\begin{aligned} \label{MeaOp1} \mathcal{M}(\mathcal{S}(q,\kappa)u^{\text{inc}})(x) = \big( u^{s}(x_{1}), \ldots, u^{s}(x_{N_{d}}) \big)^{T},\end{aligned}$$ where $x_{i} \in \partial\Omega$, $i = 1,2,\ldots,N_{d}$, are the points where the wave field $u^{s}$ is measured.
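Truncating the DtN series above in practice requires the Fourier coefficients $\hat{\varphi}_{n}$ of the boundary data. On a uniform grid in $\theta$ the trapezoid rule for $\hat{\varphi}_{n}$ reduces exactly to an FFT divided by the number of samples; a minimal sketch (the boundary function below is an arbitrary test case, not data from the paper):

```python
import numpy as np

def fourier_coefficients(phi_samples):
    """Approximate phi_hat_n = (2 pi)^{-1} int_0^{2 pi} phi(theta) e^{-i n theta} dtheta
    from equispaced samples; on a periodic grid the trapezoid rule is fft/N.
    Entry n of the result holds phi_hat_n for n = 0,...,N//2-1, then negative n."""
    N = phi_samples.size
    return np.fft.fft(phi_samples) / N

theta = 2.0 * np.pi * np.arange(64) / 64
phi = 3.0 * np.exp(2j * theta) + 0.5      # phi_hat_2 = 3, phi_hat_0 = 0.5, others 0
phi_hat = fourier_coefficients(phi)
```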
In practice, we employ a uniaxial PML technique to transform the problem defined on the whole domain into a problem defined on a bounded rectangular domain, as seen in Figure \[illuFig2\]. \ Let $D$ be the rectangle containing $\Omega = [x_{1},x_{2}] \times [y_{1},y_{2}]$ with $\text{supp} (q) \subset \Omega$, and let $d_{1}$ and $d_{2}$ be the thicknesses of the PML layers along $x$ and $y$, respectively. Let $s_{1}(x) = 1+i\sigma_{1}(x)$ and $s_{2}(y) = 1+i\sigma_{2}(y)$ be the model medium properties; usually we can simply take $$\begin{aligned} \sigma_{1}(x) = \left\{\begin{aligned} & \sigma_{0}\left( \frac{x-x_{2}}{d_{1}} \right)^{p} \quad \text{for }x_{2} < x < x_{2} + d_{1} \\ & 0 \quad\quad\quad\quad\quad\quad\,\,\,\,\, \text{for }x_{1} \leq x \leq x_{2} \\ & \sigma_{0}\left( \frac{x_{1} - x}{d_{1}} \right)^{p} \quad \text{for }x_{1} - d_{1} < x < x_{1}, \end{aligned}\right.\end{aligned}$$ and $$\begin{aligned} \sigma_{2}(y) = \left\{\begin{aligned} & \sigma_{0}\left( \frac{y-y_{2}}{d_{2}} \right)^{p} \quad \text{for }y_{2} < y < y_{2} + d_{2} \\ & 0 \quad\quad\quad\quad\quad\quad\,\,\,\, \text{for }y_{1} \leq y \leq y_{2} \\ & \sigma_{0}\left( \frac{y_{1} - y}{d_{2}} \right)^{p} \quad \text{for }y_{1} - d_{2} < y < y_{1}, \end{aligned}\right.\end{aligned}$$ where the constant $\sigma_{0} > 1$ and the integer $p \geq 2$. Denote $$s = \text{diag}(s_{2}(y)/s_{1}(x), s_{1}(x)/s_{2}(y));$$ then the truncated PML problem can be defined as follows: $$\begin{aligned} \label{PMLBoundedHelEq} \left \{\begin{aligned} & \nabla\cdot(s \nabla u^{s}) + s_{1}s_{2}\kappa^{2}(1+q)u^{s} = -\kappa^{2}qu^{\text{inc}} \quad \text{in }D, \\ & u^{s} = 0 \quad \text{on }\partial D. \end{aligned}\right.\end{aligned}$$ Similarly to the physical problem (\[BoundedHelEq\]), we introduce the map $\mathcal{S}_{a}(q,\kappa)$ defined by $u^{s}_{a} = \mathcal{S}_{a}(q,\kappa)u^{\text{inc}}$, where $u^{s}_{a}$ stands for the solution of the truncated PML problem (\[PMLBoundedHelEq\]).
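The absorption profiles $\sigma_{1}$, $\sigma_{2}$ and the complex stretching functions are straightforward to code; a minimal sketch, where the values $\sigma_{0} = 2$ and $p = 2$ are illustrative defaults consistent with $\sigma_{0} > 1$ and $p \geq 2$:

```python
def sigma(t, t1, t2, thickness, sigma0=2.0, p=2):
    """Polynomial PML absorption profile: zero on [t1, t2] and a ramp of
    degree p across a layer of the given thickness on either side."""
    if t1 <= t <= t2:
        return 0.0
    if t2 < t < t2 + thickness:
        return sigma0 * ((t - t2) / thickness) ** p
    if t1 - thickness < t < t1:
        return sigma0 * ((t1 - t) / thickness) ** p
    raise ValueError("point lies outside the PML-truncated domain D")

def s1(x, x1, x2, d1):
    """Complex stretching s_1(x) = 1 + i * sigma_1(x)."""
    return 1.0 + 1j * sigma(x, x1, x2, d1)

# Inside the physical region the medium is unstretched; halfway into the
# right PML layer the profile equals sigma0 * (1/2)^p = 0.5.
inside = s1(0.5, 0.0, 1.0, 0.5)
halfway = sigma(1.25, 0.0, 1.0, 0.5)
```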
Using similar methods as for problem (\[BoundedHelEq\]), we can prove that $u^{s}_{a}$ is a continuous function and satisfies $$\begin{aligned} \label{EstimateHDis} \|u^{s}_{a}\|_{L^{\infty}(D)} \leq C \|q\|_{L^{\infty}(D)} \|u^{\text{inc}}\|_{L^{2}(D)}.\end{aligned}$$ Now, we can define a measurement operator similar to (\[MeaOp1\]) as follows: $$\begin{aligned} \label{MeaOp2} \mathcal{M}(\mathcal{S}_{a}(q,\kappa)u^{\text{inc}})(x) = \big( u^{s}_{a}(x_{1}), \ldots, u^{s}_{a}(x_{N_{d}}) \big)^{T},\end{aligned}$$ where $x_{i} \in \partial D$, $i = 1,2,\ldots,N_{d}$. In order to introduce appropriate Gaussian probability measures, we give the following assumptions on the covariance operator. **Assumption 2.** Let $A$ be an operator, densely defined on the Hilbert space $\mathcal{H} = L^{2}(D;\mathbb{R}^{d})$, which satisfies the following properties: 1. $A$ is positive-definite, self-adjoint and invertible; 2. the eigenfunctions $\{\varphi_{j}\}_{j\in\mathbb{N}}$ of $A$ form an orthonormal basis for $\mathcal{H}$; 3. the eigenvalues satisfy $\alpha_{j} \asymp j^{2/d}$, for all $j\in\mathbb{N}$; 4. there is $C > 0$ such that $$\begin{aligned} \sup_{j\in\mathbb{N}}\left( \|\varphi_{j}\|_{L^{\infty}} + \frac{1}{j^{1/d}}\text{Lip}(\varphi_{j}) \right) \leq C. \end{aligned}$$ At this point, we can show well-posedness of the inverse medium scattering problem with certain Gaussian prior probability measures. For a constant $s > 1$, we take the prior probability measure to be a Gaussian measure $\mu_{0} := \mathcal{N}(\bar{q},A^{-s})$, where $\bar{q}$ is the mean value and the operator $A$ satisfies Assumption 2. In addition, we take $X = C^{t}$ with $t < s - 1$. Then we know that $\mu_{0}(X) = 1$ by Example 2.19 in [@Dashti2014].
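A draw from such a prior can be sketched via a truncated Karhunen-Loève expansion. The concrete choices below are an illustrative assumption consistent with Assumption 2, not the operator used in the paper: $d = 1$ on $(0,1)$, $A$ the negative Dirichlet Laplacian with eigenpairs $\alpha_{j} = (j\pi)^{2}$ and $\varphi_{j} = \sqrt{2}\sin(j\pi x)$, so that $\alpha_{j} \asymp j^{2/d}$.

```python
import numpy as np

def sample_prior(x, s=2.0, J=100, qbar=0.0, seed=0):
    """Truncated Karhunen-Loeve draw q = qbar + sum_j alpha_j^{-s/2} xi_j phi_j
    from N(qbar, A^{-s}), with xi_j i.i.d. standard normal."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(J)
    q = np.full_like(x, qbar, dtype=float)
    for j in range(1, J + 1):
        alpha_j = (j * np.pi) ** 2
        q += alpha_j ** (-s / 2.0) * xi[j - 1] * np.sqrt(2.0) * np.sin(j * np.pi * x)
    return q

x = np.linspace(0.0, 1.0, 101)
q = sample_prior(x)   # one realization; larger s gives smoother draws
```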
For the scattering problem, we can take $\mathcal{F}_{a}(q) = \mathcal{M}(\mathcal{S}_{a}(q,\kappa)u^{\text{inc}})$ and let the noise $\eta$ obey a Gaussian mixture distribution with density function $$\sum_{k = 1}^{K}\pi_{k}\mathcal{N}_{c}(\eta \,|\, \zeta_{k},\Sigma_{k}+\nu I).$$ Then, the measured data $d \in \mathbb{C}^{N_{d}}$ are $$\begin{aligned} \label{DataSpecPro} d = \mathcal{F}_{a}(q) + \eta.\end{aligned}$$ \[BayeTheoScatter\] Consider the two-dimensional problem (\[PMLBoundedHelEq\]) (or problem (\[BoundedHelEq\])), and assume that the space $X$, the prior $q \sim \mu_{0}$ and the noise $\eta$ are specified as in the previous two paragraphs of this subsection. Then the Bayesian inverse problem of recovering the input $q \in X$ of problem (\[PMLBoundedHelEq\]) (problem (\[BoundedHelEq\])) from the data $d$ given as in (\[DataSpecPro\]) is well formulated: the posterior $\mu^{d}$ is well defined in $X$, it is absolutely continuous with respect to $\mu_{0}$, and the Radon-Nikodym derivative is given by (\[DefineMuY\]) and (\[DefineOfZd\]). Moreover, there is $C = C(r)$ such that, for all $d_{1},d_{2} \in \mathbb{C}^{N_{d}}$ with $|d_{1}|,|d_{2}| \leq r$, $$\begin{aligned} d_{\text{Hell}}(\mu^{d_{1}},\mu^{d_{2}}) \leq C \|d_{1} - d_{2}\|_{2}.\end{aligned}$$ From Section \[BayeTheoSection\], we know that Theorem \[BayeTheoScatter\] holds once Assumption 1 is verified. According to the estimates (\[EstimateHDis\]) and (\[MeaOp2\]), we find that $$\begin{aligned} \|\mathcal{F}_{a}(q)\|_{2} \leq C \|q\|_{L^{\infty}(D)},\end{aligned}$$ which indicates that statement (1) of Assumption 1 holds. In order to verify statement (2) of Assumption 1, we denote $u_{a}^{s} + \delta u = \mathcal{F}_{a}(q+\delta q)$.
By simple calculations, we deduce that $\delta u$ satisfies $$\begin{aligned} \label{deltaUEq} \left \{\begin{aligned} & \nabla\cdot(s \nabla \delta u) + s_{1}s_{2}\kappa^{2}(1+q)\delta u = -\kappa^{2}\delta q(u^{\text{inc}} + s_{1}s_{2} u_{a}^{s}) \quad \text{in }D, \\ & \delta u = 0 \quad \text{on }\partial D. \end{aligned}\right.\end{aligned}$$ Now, denoting by $\mathcal{F}_{a}'(q)$ the Fréchet derivative of $\mathcal{F}_{a}(q)$, we find that $$\begin{aligned} \label{FDerQ} \mathcal{F}_{a}'(q)\delta q = \mathcal{M}(\delta u),\end{aligned}$$ where $\delta u$ is the solution of equations (\[deltaUEq\]). By using some basic estimates for equations (\[PMLBoundedHelEq\]), we obtain $$\begin{aligned} \label{FDerEst} \|\mathcal{F}_{a}'(q)\delta q\|_{2} \leq \|\delta u\|_{L^{\infty}(D)} \leq C(1+\|q\|_{L^{\infty}(\Omega)})\|\delta q\|_{L^{\infty}(D)},\end{aligned}$$ where $C$ depends on $\kappa$, $D$, $s_{1}$ and $s_{2}$. Estimate (\[FDerEst\]) ensures that statement (2) of Assumption 1 holds, and the proof is completed by employing Theorem \[wellPosedBaye\]. From the proof of Theorem \[BayeTheoScatter\], we can see that Theorem \[SmallBall\] holds true for the inverse medium scattering problem considered in this subsection. Hence, we can compute the MAP estimate by minimizing the functional defined in (\[MiniProForm\]) with the forward operator defined in (\[DataSpecPro\]). If we assume $\mu_{0}$ is a TV-Gaussian probability measure, similar results can be obtained: the posterior probability measure is well-defined and the MAP estimate can be obtained by solving $\min_{q\in X}J(q)$ with $J$ defined in (\[MiniProFormTG\]). Since there are no new ingredients, we omit the details. Learn parameters of complex Gaussian mixture distribution {#LearnSection} --------------------------------------------------------- Estimating the parameters is one of the key steps in modeling noise by a complex Gaussian mixture distribution.
This key step consists of two fundamental elements: learning examples and learning algorithms. The learning examples are the approximation errors $e := \mathcal{F}(q) - \mathcal{F}_{a}(q)$, that is, the differences between the measured values produced by the slow accurate forward solver and by the fast approximate forward solver. In order to obtain this error, we would need to know the unknown function $q$, which is impossible. However, in practical problems, we usually have some prior knowledge of the unknown function $q$. Relying on this prior knowledge, we can construct probability measures to generate functions which we believe share similar statistical properties with the real unknown function $q$. For this, we refer to a recent paper [@Iglesias2014IP]. Since this procedure depends on the specific application field, we only provide details in Section \[SecNumer\] for concrete numerical examples. For the learning algorithms, the expectation-maximization (EM) algorithm is often employed in the machine learning community [@PR2006Book]. Here, we need to notice that the variables are complex valued and complex Gaussian distributions are used in our case. This leads to some differences from the classical real-variable situation. In order to provide a clear explanation, let us recall some basic relationships between complex Gaussian distributions and real Gaussian distributions, which are proved in [@Goodman1963Annals]. Let $e = (e_{1}, \ldots, e_{N_{d}})^{T}$ be an $N_{d}$-tuple of complex Gaussian random variables. Let $\tau_{k} := \text{Re}(e_{k})$ and $\varsigma_{k} := \text{Imag}(e_{k})$ be the real and imaginary parts of $e_{k}$ with $k = 1,\ldots,N_{d}$; then $$\begin{aligned} \label{defTau} \tau = (\tau_{1},\varsigma_{1},\ldots,\tau_{N_{d}},\varsigma_{N_{d}})\end{aligned}$$ is a $2N_{d}$-tuple of random variables. From the basic theory of complex Gaussian distributions, we know that $\tau$ is $2N_{d}$-variate Gaussian distributed.
Denote the covariance matrix of $e$ by $\Sigma$ and the covariance matrix of $\tau$ by $\tilde{\Sigma}$. As usual, we assume $\Sigma$ is a positive definite Hermitian matrix; then $\tilde{\Sigma}$ is a positive definite symmetric matrix by Theorem 2.2 and Theorem 2.3 in [@Goodman1963Annals]. In addition, we have the following lemma, which is proved in [@Goodman1963Annals]. \[complexGauPro\] For complex Gaussian distributions, we have that the matrix $\Sigma$ is isomorphic to the matrix $2\tilde{\Sigma}$, $e^{H}\Sigma e = \tau^{T}\tilde{\Sigma}\tau$ and $\text{det}(\Sigma)^{2} = \text{det}(\tilde{\Sigma})$. Let $N_{s} \in \mathbb{N}^{+}$ stand for the number of learning examples, and let $e_{n} = (e_{1}^{n}, \ldots, e_{N_{d}}^{n})^{T}$ with $n = 1,\ldots,N_{s}$ represent the $N_{s}$ learning examples. Then, for some fixed $K \in \mathbb{N}^{+}$, we need to solve the following optimization problem to obtain estimates of the parameters: $$\begin{aligned} \max_{\{\pi_{k}, \zeta_{k}, \Sigma_{k}\}_{k=1}^{K}} J_{G}(\{\pi_{k}, \zeta_{k}, \Sigma_{k}\}_{k =1}^{K}),\end{aligned}$$ where $$\begin{aligned} J_{G}(\{\pi_{k}, \zeta_{k}, \Sigma_{k}\}_{k =1}^{K}) := \sum_{n = 1}^{N_{s}} \ln \Bigg\{ \sum_{k = 1}^{K} \pi_{k} \mathcal{N}_{c}(e_{n} \, | \, \zeta_{k},\Sigma_{k}) \Bigg\}\end{aligned}$$ is the log-likelihood of the learning examples. In the following, we only present the two parts that differ from the real-variable Gaussian case.
**Estimation of means**: Setting the derivatives of $J_{G}(\{\pi_{k}, \zeta_{k}, \Sigma_{k}\}_{k =1}^{K})$ with respect to the means $\zeta_{k}$ of the complex Gaussian components to zero and using Lemma \[complexGauPro\], we obtain $$\begin{aligned} 0 = -\sum_{n=1}^{N_{s}}\frac{\pi_{k}\mathcal{N}_{c}(e_{n}\,|\,\zeta_{k},\Sigma_{k})} {\sum_{j = 1}^{K}\pi_{j}\mathcal{N}_{c}(e_{n}\,|\,\zeta_{j},\Sigma_{j})}\tilde{\Sigma}_{k}^{-1}(\tau_{n} - \tilde{\zeta}_{k}),\end{aligned}$$ where $\tau_{n}$ is defined as in (\[defTau\]) with $e$ replaced by $e_{n}$, $\tilde{\zeta}_{k}$ is also defined as in (\[defTau\]) with $e$ replaced by $\zeta_{k}$, and $\tilde{\Sigma}_{k}$ is the covariance matrix corresponding to $\Sigma_{k}$. Hence, after some simplification, we find that $$\begin{aligned} \zeta_{k} = \frac{1}{\tilde{N}_{k}}\sum_{n=1}^{N_{s}}\gamma_{nk}e_{n},\end{aligned}$$ where $$\begin{aligned} \label{defineTilN} \tilde{N}_{k} = \sum_{n=1}^{N_{s}}\gamma_{nk}, \quad \gamma_{nk} = \frac{\pi_{k}\mathcal{N}_{c}(e_{n}\,|\,\zeta_{k},\Sigma_{k})} {\sum_{j = 1}^{K}\pi_{j}\mathcal{N}_{c}(e_{n}\,|\,\zeta_{j},\Sigma_{j})}.\end{aligned}$$ In the above formula, $\tilde{N}_{k}$ is usually interpreted as the effective number of points assigned to cluster $k$, and $\gamma_{nk}$ is the responsibility, a quantity depending on the latent variables [@PR2006Book].
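Combining the responsibilities $\gamma_{nk}$ and the mean update above with the standard covariance update (derived next) gives one full EM iteration. The sketch below is illustrative: the two-cluster data in $\mathbb{C}^{2}$ are synthetic, and a small $\delta I$ is added to each covariance (as in the regularized update discussed later in this section) to keep it nonsingular.

```python
import numpy as np

def complex_em(E, K=2, iters=50, delta=1e-6):
    """EM for a K-component complex Gaussian mixture fitted to the rows of E
    (shape N_s x N_d). Responsibilities are computed from log-densities for
    numerical stability; each covariance update adds delta*I."""
    Ns, Nd = E.shape
    pis = np.full(K, 1.0 / K)
    zetas = E[np.linspace(0, Ns - 1, K).astype(int)].copy()  # spread-out init
    Sigmas = np.array([np.eye(Nd, dtype=complex) for _ in range(K)])
    for _ in range(iters):
        # E-step: gamma_{nk} = pi_k N_c(e_n | zeta_k, Sigma_k) / sum_j (...).
        logw = np.empty((Ns, K))
        for k in range(K):
            diff = E - zetas[k]
            quad = np.real(np.einsum('ni,ij,nj->n', diff.conj(),
                                     np.linalg.inv(Sigmas[k]), diff))
            _, logdet = np.linalg.slogdet(Sigmas[k])
            logw[:, k] = np.log(pis[k]) - quad - Nd * np.log(np.pi) - logdet
        logw -= logw.max(axis=1, keepdims=True)
        gamma = np.exp(logw)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: tilde-N_k, means, regularized covariances, mixing weights.
        Nk = gamma.sum(axis=0)
        for k in range(K):
            zetas[k] = (gamma[:, k, None] * E).sum(axis=0) / Nk[k]
            diff = E - zetas[k]
            Sigmas[k] = (gamma[:, k, None, None] *
                         np.einsum('ni,nj->nij', diff, diff.conj())).sum(axis=0) / Nk[k]
            Sigmas[k] = Sigmas[k] + delta * np.eye(Nd)
        pis = Nk / Ns
    return pis, zetas, Sigmas

# Synthetic learning examples: two well-separated clusters in C^2.
rng = np.random.default_rng(1)
noise = lambda n: 0.1 * (rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2)))
E = np.vstack([(2 + 2j) + noise(40), (-2 - 2j) + noise(40)])
pis, zetas, Sigmas = complex_em(E)
```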
**Estimation of covariances**: For the covariances, we need to use latent variables to write down the following complete-data log-likelihood function, as in formula (9.40) of [@PR2006Book]: $$\begin{aligned} \sum_{n = 1}^{N_{s}}\sum_{k = 1}^{K}\gamma_{nk}\Big\{ \ln\pi_{k} + \ln\mathcal{N}_{c}(e_{n} \, | \, \zeta_{k}, \Sigma_{k}) \Big\}.\end{aligned}$$ Now, for $k = 1,\dots,K$, we prove that $$\begin{aligned} \Sigma_{k} := \frac{1}{\tilde{N}_{k}}\sum_{n = 1}^{N_{s}}\gamma_{nk}(e_{n} - \zeta_{k})(e_{n} - \zeta_{k})^{H}\end{aligned}$$ solves the following maximization problem: $$\begin{aligned} \label{ZuidaWen} \max_{\{\Sigma_{k}\}_{k=1}^{K}}\Bigg\{\sum_{n = 1}^{N_{s}}\sum_{k = 1}^{K}\gamma_{nk} \Big( \ln\pi_{k} + \ln\mathcal{N}_{c}(e_{n} \, | \, \zeta_{k}, \Sigma_{k}) \Big)\Bigg\}.\end{aligned}$$ Denote $$\begin{aligned} L = \sum_{n = 1}^{N_{s}}\sum_{k = 1}^{K}\gamma_{nk} \Big( \ln\pi_{k} + \ln\mathcal{N}_{c}(e_{n} \, | \, \zeta_{k}, \Sigma_{k}) \Big).\end{aligned}$$ Let $$\begin{aligned} B_{k} := \frac{1}{\tilde{N}_{k}}\sum_{n = 1}^{N_{s}}\gamma_{nk}(e_{n} - \zeta_{k})(e_{n} - \zeta_{k})^{H}\end{aligned}$$ and notice that $$\begin{aligned} \sum_{n=1}^{N_{s}}\sum_{k=1}^{K}\gamma_{nk}(e_{n}-\zeta_{k})^{H}\Sigma_{k}^{-1}(e_{n}-\zeta_{k}) & = \sum_{n=1}^{N_{s}}\sum_{k=1}^{K}\gamma_{nk}\text{tr}\Big( \Sigma_{k}^{-1}(e_{n}-\zeta_{k})(e_{n}-\zeta_{k})^{H} \Big) \\ & = \sum_{k=1}^{K}\text{tr}\Big( \Sigma_{k}^{-1}\sum_{n=1}^{N_{s}}\gamma_{nk}(e_{n}-\zeta_{k})(e_{n}-\zeta_{k})^{H} \Big) \\ & = \sum_{k=1}^{K}\tilde{N}_{k}\text{tr}\Big( \Sigma_{k}^{-1}B_{k} \Big),\end{aligned}$$ where $\tilde{N}_{k}$ is defined as in (\[defineTilN\]).
Then, using the explicit form of the density function, we obtain $$\begin{aligned} \label{LDEF} L = - \sum_{k = 1}^{K}\tilde{N}_{k}\ln\text{det}(\Sigma_{k}) - \sum_{k=1}^{K}\tilde{N}_{k}\text{tr}(\Sigma_{k}^{-1}B_{k}) - N_{s}N_{d}\ln\pi + \sum_{k=1}^{K}\tilde{N}_{k}\ln\pi_{k}.\end{aligned}$$ Define $p(\xi,\Sigma):= \frac{1}{\pi^{N_{d}}\text{det}(\Sigma)}\exp\left( -\xi^{H}\Sigma^{-1}\xi \right)$; then we have $$\begin{aligned} \label{JDEF} \begin{split} J & = \sum_{k=1}^{K} \tilde{N}_{k} \int_{\xi}p(\xi,\Sigma_{k}^{-1})\ln\Big( p(\xi,B_{k}^{-1})/p(\xi,\Sigma_{k}^{-1}) \Big) d\xi \\ & = \sum_{k=1}^{K} \tilde{N}_{k} \int_{\xi} \Bigg\{ \left( \ln\text{det}(B_{k}) - \xi^{H}B_{k}\xi \right)p(\xi,\Sigma_{k}^{-1}) \\ & \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad - \left( \ln\text{det}(\Sigma_{k}) - \xi^{H}\Sigma_{k}\xi \right)p(\xi,\Sigma_{k}^{-1}) \Bigg\}d\xi \\ & = \sum_{k=1}^{K}\tilde{N}_{k}\ln\text{det}(B_{k}) + \sum_{k=1}^{K}\tilde{N}_{k}\text{tr}(I) \\ & \quad\quad\quad\quad\quad\quad\quad\quad\quad - \sum_{k=1}^{K}\tilde{N}_{k}\text{tr}(\Sigma_{k}^{-1}B_{k}) - \sum_{k=1}^{K}\tilde{N}_{k}\ln\text{det}(\Sigma_{k}), \end{split}\end{aligned}$$ where Corollary 4.1 in [@Goodman1963Annals] has been used for the last equality. On comparing the final result of (\[LDEF\]) with (\[JDEF\]), one observes that any family of Hermitian positive definite matrices $\{\Sigma_{k}\}_{k=1}^{K}$ that maximizes $L$ maximizes $J$, and conversely. Now, $\ln u \leq u-1$, with equality holding if and only if $u = 1$. Thus $$\begin{aligned} \label{Jleq1} \begin{split} J & = \sum_{k=1}^{K} \tilde{N}_{k} \int_{\xi}p(\xi,\Sigma_{k}^{-1})\ln\Big( p(\xi,B_{k}^{-1})/p(\xi,\Sigma_{k}^{-1}) \Big) d\xi \\ & \leq \sum_{k=1}^{K} \tilde{N}_{k} \int_{\xi} p(\xi,\Sigma_{k}^{-1})\Big( p(\xi,B_{k}^{-1})/p(\xi,\Sigma_{k}^{-1}) - 1 \Big) d\xi = 0. \end{split}\end{aligned}$$ Equality in (\[Jleq1\]) holds if and only if $p(\xi,\Sigma_{k}) = p(\xi,B_{k})$ for $k=1,\ldots,K$.
Hence, $\Sigma_{k} = B_{k}\, (k=1,\ldots,K)$ solves problem (\[ZuidaWen\]). With these preparations, we can easily construct the EM algorithm following the line of reasoning shown in Chapter 9 of [@PR2006Book]. For conciseness, the details are omitted and we summarize the EM algorithm in Algorithm \[algComplexEM\]. In Algorithm \[algComplexEM\], if the parameters satisfy $N_{d} < N_{s}$, we can usually obtain nonsingular matrices $\{\Sigma_{k}\}_{k=1}^{K}$. However, in our case, we cannot generate so many learning examples $N_{s}$, while the number of measuring points $N_{d}$ is usually very large for real-world applications. Hence, we will meet the situation $N_{d} > N_{s}$, which makes $\{\Sigma_{k}\}_{k=1}^{K}$ a family of singular matrices. In order to solve this problem, we adopt a simple strategy: replace the estimation of $\Sigma_{k}$ in Step 3 by the following formula: $$\begin{aligned} \label{MStepReg} \Sigma_{k}^{\text{new}} = \frac{1}{\tilde{N}_{k}} \sum_{n=1}^{N_{s}}\gamma_{nk}(e_{n}-\zeta_{k}^{\text{new}}) (e_{n}-\zeta_{k}^{\text{new}})^{H} + \delta I,\end{aligned}$$ where $\delta$ is a small positive number called the regularization parameter. Adjoint state approach with model error compensation ---------------------------------------------------- By Algorithm \[algComplexEM\], we obtain the estimated mixing coefficients, mean values and covariance matrices. From the statements in Section \[BayeTheoSection\] and Subsection \[WellSubsec\], it is obvious that we need to solve optimization problems as follows: $$\begin{aligned} \min_{q\in L^{\infty}(\Omega)} \Big\{ - \Phi(q;d) + \mathcal{R}(q) \Big\},\end{aligned}$$ where $$\begin{aligned} - \Phi(q;d)\! = \!
- \ln\Bigg\{ \sum_{k=1}^{K}\pi_{k}\frac{1}{\pi^{N_{d}}\det(\Sigma_{k}+\nu I)} \exp\Big( -\Big\| d-\mathcal{F}_{a}(q)-\zeta_{k} \Big\|_{\Sigma_{k}+\nu I}^{2} \Big) \Bigg\}, \\ \mathcal{R}(q) = \frac{1}{2}\|A^{s/2}q\|_{L^{2}(\Omega)}^{2}\, \quad \text{or} \quad \mathcal{R}(q) = \lambda \|q\|_{\text{TV}} + \frac{1}{2}\|A^{s/2}q\|_{L^{2}(\Omega)}^{2}. \label{DefFunR}\end{aligned}$$ The different forms of the functional $\mathcal{R}$ come from the different assumptions on the prior probability measure: a Gaussian probability measure or a TV-Gaussian probability measure. For the multi-frequency approach to the inverse medium scattering problem, the forward operator in each optimization problem is related to $\kappa$, so we rewrite $\mathcal{F}_{a}(q)$ and $\Phi(q;d)$ as $\mathcal{F}_{a}(q,\kappa)$ and $\Phi(q,\kappa;d)$, which emphasizes the dependence on $\kappa$. We have a series of wavenumbers $0 < \kappa_{1} < \kappa_{2} < \cdots < \kappa_{N_{w}} < \infty$, and we actually need to solve a series of optimization problems $$\begin{aligned} \label{opt1} \min_{q\in L^{\infty}(\Omega)} \Big\{ - \Phi(q,\kappa_{i};d) + \mathcal{R}(q) \Big\}\end{aligned}$$ with $i$ from $1$ to $N_{w}$, where the solution of each optimization problem serves as the initial guess for the next one. Denote $F(q) = - \Phi(q,\kappa_{i};d)$. To minimize the cost functional by a gradient method, it is required to compute the Fréchet derivatives of the functionals $F$ and $\mathcal{R}$.
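The warm-started sweep over the wavenumbers $\kappa_{1} < \cdots < \kappa_{N_{w}}$ can be organized as a simple driver loop. The sketch below is schematic: `grad(q, kappa)` is a hypothetical callable returning the gradient of the cost $-\Phi(q,\kappa;d) + \mathcal{R}(q)$, and the inner solver is a plain gradient-descent stand-in rather than the paper's recursive linearization solver.

```python
def continuation_solve(kappas, grad, q0, steps=200, lr=0.1):
    """Solve min_q { -Phi(q, kappa_i; d) + R(q) } for kappa_1 < ... < kappa_{N_w},
    warm-starting each subproblem from the previous minimizer.  `grad` is a
    hypothetical callable giving the gradient of the cost at (q, kappa)."""
    q = q0
    for kappa in kappas:
        for _ in range(steps):   # stand-in inner solver: plain gradient descent
            q = q - lr * grad(q, kappa)
    return q

# Toy check with quadratic costs (q - kappa)^2: the final iterate should
# track the largest wavenumber.
q_final = continuation_solve([1.0, 2.0, 3.0], lambda q, k: 2.0 * (q - k), q0=0.0)
```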
For the functional $\mathcal{R}$ in (\[DefFunR\]), the Fréchet derivatives are $$\begin{aligned} \label{DerR1} \mathcal{R}'(q) = A^{s}q, \quad \text{or} \quad \mathcal{R}'(q) = A^{s}q + 2\lambda \nabla\cdot\left( \frac{\nabla q}{\sqrt{|\nabla q|^{2} + \delta}} \right),\end{aligned}$$ where for the TV-Gaussian prior case we used the following modified version of $\mathcal{R}$, $$\begin{aligned} \mathcal{R}(q) = \lambda\int_{\Omega}\sqrt{|\nabla q|^{2}+\delta} + \frac{1}{2}\|A^{s/2}q\|_{L^{2}(\Omega)}^{2},\end{aligned}$$ with $\delta$ a small smoothing parameter avoiding a zero denominator in (\[DerR1\]). Next, we consider the functional $F$, where $\mathcal{F}_{a}$ is the forward operator related to problem (\[PMLBoundedHelEq\]). A simple calculation yields the derivative of $F$ at $q$: $$\begin{aligned} \label{zuihou0} F'(q)\delta q = \text{Re} \Big( \mathcal{M}(\delta u), \sum_{k = 1}^{K} \gamma_{k} (\Sigma_{k}+\nu I)^{-1}(d-\mathcal{F}_{a}(q,\kappa_{i})-\zeta_{k}) \Big),\end{aligned}$$ where $\delta u$ satisfies $$\begin{aligned} \label{deltaUEq2} \left \{\begin{aligned} & \nabla\cdot(s \nabla \delta u) + s_{1}s_{2}\kappa_{i}^{2}(1+q)\delta u = -\kappa_{i}^{2}\delta q(u^{\text{inc}} + s_{1}s_{2}u_{a}^{s}) \quad \text{in }D, \\ & \delta u = 0 \quad \text{on }\partial D, \end{aligned}\right.\end{aligned}$$ and $$\begin{aligned} \gamma_{k} = \frac{\pi_{k}\mathcal{N}_{c}(d-\mathcal{F}_{a}(q) \, | \, \zeta_{k},\Sigma_{k}+\nu I) }{\sum_{j = 1}^{K} \pi_{j}\mathcal{N}_{c}(d-\mathcal{F}_{a}(q) \, | \, \zeta_{j},\Sigma_{j}+\nu I)
}\end{aligned}$$ To compute the Fréchet derivative, we introduce the adjoint system $$\begin{aligned} \label{AdjSystem} \left \{\begin{aligned} & \nabla\cdot(\bar{s}\nabla v) + \bar{s}_{1}\bar{s}_{2}\kappa_{i}^{2}(1+q) v = - \kappa_{i}^{2} \sum_{j = 1}^{N_{d}}\delta(x-x_{j})\rho_{j} \quad \text{in }D, \\ & v = 0 \quad \text{on }\partial D, \end{aligned}\right.\end{aligned}$$ where $\rho_{j} \, (j=1,\ldots,N_{d})$ denotes the $j$th component of $\sum_{k = 1}^{K} \gamma_{k} (\Sigma_{k}+\nu I)^{-1}(d-\mathcal{F}_{a}(q,\kappa_{i})-\zeta_{k}) \in \mathbb{C}^{N_{d}}$. Multiplying equation (\[deltaUEq2\]) by the complex conjugate of $v$ on both sides and integrating over $D$ yields $$\begin{aligned} \int_{D}\nabla\cdot(s \nabla \delta u)\bar{v} + s_{1}s_{2}\kappa_{i}^{2}(1+q)\delta u \bar{v} = - \int_{D}\kappa_{i}^{2}\delta q(u^{\text{inc}} + s_{1}s_{2}u_{a}^{s})\bar{v}.\end{aligned}$$ By the integration by parts formula, we obtain $$\begin{aligned} \int_{D}\delta u \Big( \nabla\cdot(s \nabla \bar{v}) + s_{1}s_{2}\kappa_{i}^{2}(1+q)\bar{v} \Big) = - \kappa_{i}^{2}\int_{D}\delta q (u^{\text{inc}} + s_{1}s_{2}u_{a}^{s})\bar{v}.\end{aligned}$$ Taking the complex conjugate of equation (\[AdjSystem\]) and plugging it into the above equation yields $$\begin{aligned} -\kappa_{i}^{2}\int_{D}\delta u \sum_{j = 1}^{N_{d}}\delta(x-x_{j})\bar{\rho}_{j} = - \kappa_{i}^{2}\int_{D}\delta q (u^{\text{inc}} + s_{1}s_{2}u_{a}^{s})\bar{v},\end{aligned}$$ which implies $$\begin{aligned} \label{zuihou1} \Big( \mathcal{M}(\delta u), \sum_{k = 1}^{K} \gamma_{k} (\Sigma_{k}+\nu I)^{-1}(d-\mathcal{F}_{a}(q,\kappa_{i})-\zeta_{k}) \Big) = \int_{D}\delta q (u^{\text{inc}} + s_{1}s_{2}u_{a}^{s})\bar{v}.\end{aligned}$$ Combining (\[zuihou0\]) and (\[zuihou1\]), we find that $$\begin{aligned} F'(q)\delta q = \text{Re}\int_{D}\delta q (u^{\text{inc}} + s_{1}s_{2}u_{a}^{s})\bar{v},\end{aligned}$$ which gives the Fréchet derivative as follows: $$\begin{aligned} \label{Fdd1} F'(q) = \text{Re}\big( (\bar{u}^{\text{inc}} +
\bar{s}_{1}\bar{s}_{2}\bar{u}_{a}^{s})v \big).\end{aligned}$$ With these preparations, we can construct the Gaussian mixture recursive linearization method (GMRLM), which is shown in Algorithm \[alg23\]. Notice that for the recursive linearization method (RLM) shown in [@Bao2015TopicReview], a single gradient descent iteration for each fixed wavenumber already provides an acceptable recovered function. So we also iterate only once for each fixed wavenumber. Numerical examples {#SecNumer} ================== In this section, we provide two numerical examples in two dimensions to illustrate the effectiveness of the proposed method. In the following, we assume that $\Omega = \{x\in\mathbb{R}^{2} \, : \, \|x\|_{2} \leq 1\}$ with $\Omega \subset D$, where $D$ is the PML domain with $d_{1} = d_{2} = 0.15$, $p = 2.5$ and $\sigma_{0} = 1.5$. For the forward solver, the finite element method (FEM) is employed, and the scattering data are obtained by numerically solving the forward scattering problem with an adaptive mesh technique. For the following two examples, we choose $N_{w} = 20$, and $\textbf{d}_{j} \, (j = 1,\ldots,N_{w})$ are equally distributed around $\partial D$. Equally spaced wavenumbers are used, starting from the lowest wavenumber $\kappa_{\text{min}} = \pi$ and ending at the highest wavenumber $\kappa_{\text{max}} = 10\pi$. Denote by $\Delta\kappa = (\kappa_{\text{max}} - \kappa_{\text{min}})/9 = \pi$ the step size of the wavenumber; then the ten equally spaced wavenumbers are $\kappa_{j} = j\Delta\kappa$, $j = 1,\ldots,10$. We place $400$ receivers equally spaced along the boundary of $\Omega$, as shown in Figure \[illuFig2\]. For the initial guess of the unknown function $q$, there are numerous strategies, e.g., methods based on the Born approximation [@Bao2015TopicReview; @Bleistein2001Book]. Since the main point here is not the initial guess, we simply set the initial $q$ to be identically zero.
In order to show the stability of the proposed method, some relative random noise is added to the data, i.e., $$\begin{aligned} u^{s}|_{\partial \Omega} := (1+\sigma \text{rand})u^{s}|_{\partial\Omega},\end{aligned}$$ where rand gives uniformly distributed random numbers in $[-1,1]$ and $\sigma$ is a noise level parameter, taken to be $0.02$ in our numerical experiments. Define the relative error by $$\begin{aligned} \text{Relative Error} = \frac{\|q-\tilde{q}\|_{L^{2}(\Omega)}}{\|q\|_{L^{2}(\Omega)}},\end{aligned}$$ where $\tilde{q}$ is the reconstructed scatterer and $q$ is the true scatterer. **Example 1**: For the first example, let $$\begin{aligned} \tilde{q}(x,y) = 0.3 (1-x)^{2}e^{-x^{2}-(y+1)^{2}} - (0.2x - x^{3} - y^{5})e^{-x^{2}-y^{2}} - 0.03 e^{-(x+1)^{2} - y^{2}}\end{aligned}$$ and reconstruct the scatterer defined by $$\begin{aligned} q(x,y) = \tilde{q}(3x,3y)\end{aligned}$$ inside the unit disk $\{x\in\mathbb{R}^{2} \, : \, \|x\|_{2} < 1\}$. ![True scatterer and five typical learning examples[]{data-label="TrueLearnFig1"}](TrueLearnFig1.jpg "fig:"){width="100.00000%"}\ Denote by $U[b_{1},b_{2}]$ the uniform distribution with minimum value $b_{1}$ and maximum value $b_{2}$. We now assume that some prior knowledge of the function $q$ is available. Based on this prior knowledge, we generate $200$ learning examples according to the following function $$\begin{aligned} q_{e}(x,y) := \sum_{k = 1}^{3}(1-x^{2})^{a_{k}^{1}}(1-y^{2})^{a_{k}^{2}} a_{k}^{3} \exp\bigg(-a_{k}^{4}(x - a_{k}^{5})^{2} - a_{k}^{6}(y - a_{k}^{7})^{2}\bigg),\end{aligned}$$ where $$\begin{aligned} & a_{k}^{1}, a_{k}^{2} \sim U[1,3], \quad a_{k}^{3} \sim U[-1,1], \\ & a_{k}^{4}, a_{k}^{6} \sim U[8,10], \quad a_{k}^{5}, a_{k}^{7} \sim U[-0.8,0.8].\end{aligned}$$ To provide an intuitive sense, we show the true scatterer and several learning examples in Figure \[TrueLearnFig1\].
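A sketch of this learning-example generator is straightforward (function and variable names are ours; the parameter ranges follow the formula above, and the seed is arbitrary):

```python
import math
import random

def sample_example(rng):
    """Draw the 3x7 parameter array a_k^1..a_k^7 and return q_e as a function."""
    params = []
    for _ in range(3):
        a1 = rng.uniform(1, 3)
        a2 = rng.uniform(1, 3)
        a3 = rng.uniform(-1, 1)
        a4 = rng.uniform(8, 10)
        a5 = rng.uniform(-0.8, 0.8)
        a6 = rng.uniform(8, 10)
        a7 = rng.uniform(-0.8, 0.8)
        params.append((a1, a2, a3, a4, a5, a6, a7))

    def q_e(x, y):
        return sum(
            (1 - x * x) ** a1 * (1 - y * y) ** a2 * a3
            * math.exp(-a4 * (x - a5) ** 2 - a6 * (y - a7) ** 2)
            for a1, a2, a3, a4, a5, a6, a7 in params
        )

    return q_e

rng = random.Random(0)
examples = [sample_example(rng) for _ in range(200)]
print(examples[0](1.0, 0.3), examples[0](0.2, -1.0))  # both 0.0
```

Since $a_k^1, a_k^2 \ge 1$, the factors $(1-x^2)^{a_k^1}(1-y^2)^{a_k^2}$ force every learning example to vanish on the boundary of the square $[-1,1]^2$, as the last line checks.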
We use $409780$ elements to obtain accurate solutions, which we regard as $\mathcal{S}(q)u^{\text{inc}}$. To test our approach, $16204$ elements will be used to obtain $\mathcal{S}_{a}(q)u^{\text{inc}}$. The learning algorithm with $K = 4$ proposed in Subsection \[LearnSection\] is used to learn the statistical properties of the differences $e_{n}^{i} := \mathcal{F}(q_{n},\kappa_{i}) - \mathcal{F}_{a}(q_{n},\kappa_{i})$ with $\kappa_{i} = i\cdot \pi \, (i = 1,\ldots, 10)$, where $q_{n} \, (n = 1,\ldots, 200)$ stand for the learning examples. Concerning the regularizing term, we take $A = 0.01 \Delta$, $s = 1.5$ and $\lambda = 0$; the fractional power $A^{s}$ can be computed by the Fourier transform. Since regularization is not the main point of our paper, we will not discuss strategies for choosing $A$ in detail. ![Relative errors with different parameters: the green dotted line shows the relative errors obtained by the RLM with 16204 elements; the cyan dotted line with circles shows the relative errors obtained by the RLM with 183198 elements; the blue solid line shows the relative errors obtained by the GMRLM with 16204 elements.[]{data-label="RelaErrEx1"}](RelaErrEx1.jpg "fig:"){width="50.00000%"}\ \[table1\] [c|c|c|c]{} Algorithm & Element Number & Wavenumber & Relative Error\ RLM & 16204 & $7\pi$ & 7.10%\ RLM & 183198 & $10\pi$ & 0.26%\ GMRLM & 16204 & $4\pi$ & 0.49%\ RLM & 16204 & $4\pi$ & 30.58%\ RLM & 183198 & $4\pi$ & 20.82%\ RLM & 183198 & $9\pi$ & 0.42%\ The relative errors of the RLM with a small element number, the RLM with a large element number and the GMRLM with a small element number are shown in Figure \[RelaErrEx1\], which illustrates the effectiveness of the proposed method. For the case of a small element number, the RLM diverges when $\kappa\approx 7\pi$. The reason is that a large number of elements is needed to ensure the convergence of finite element methods for Helmholtz equations with high wavenumbers.
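A back-of-the-envelope resolution count makes this plausible. Assuming a uniform mesh and a fixed rule of twelve points per wavelength (both the mesh model and the ppw value are our illustrative assumptions, not parameters from the paper), the element count required to resolve the field grows quadratically in the wavenumber:

```python
import math

side = 2.3    # the computational domain D is roughly [-1.15, 1.15]^2
ppw = 12.0    # assumed points per wavelength

def elements_needed(kappa):
    """Triangles in a uniform mesh whose size h resolves the wavelength with `ppw` points."""
    h = (2.0 * math.pi / kappa) / ppw  # mesh size
    cells = math.ceil(side / h) ** 2   # square cells covering D
    return 2 * cells                   # two triangles per square cell

for k in (4, 7, 10):
    print(k, elements_needed(k * math.pi))
```

Under this crude rule, 16204 elements are enough near $\kappa = 4\pi$ but not near $7\pi$, consistent with the observed divergence; the pollution effect for Helmholtz problems in fact demands even finer meshes at high wavenumbers.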
Our error compensation method cannot eliminate such errors, so it also diverges when $\kappa\approx 7\pi$. However, when $\kappa \approx 4\pi$, our method provides a recovered function with a relative error comparable to the result obtained by the RLM with more than eleven times as many elements at $\kappa \approx 9\pi$. Hence, through the learning process, the GMRLM can give an acceptable recovered function much faster than the traditional RLM. ![Recovered functions with different parameters. (a): true function; (b): minimum relative error estimate for the RLM with 16204 elements and the wavenumber computed up to $7\pi$; (c): minimum relative error estimate for the RLM with 183198 elements and the wavenumber computed up to $10\pi$; (d): minimum relative error estimate for the GMRLM with 16204 elements and the wavenumber computed up to $4\pi$; (e): recovered function for the RLM with 16204 elements and the wavenumber computed up to $4\pi$; (f): recovered function for the RLM with 183198 elements and the wavenumber computed up to $4\pi$.[]{data-label="PcolorComEx1"}](PcolorComEx1.jpg "fig:"){width="100.00000%"}\ In addition, we list the exact relative errors and element numbers in Table \[table1\]. In Figure \[PcolorComEx1\], the true scatterer function is shown on the top left, together with five results obtained by the RLM and the GMRLM with different parameters. From these, we can visually see the effectiveness of the proposed method. **Example 2**: For the second example, let $$\begin{aligned} q(x,y) := \left \{\begin{aligned} & 0.7 \qquad \text{for } -0.3 \leq x \leq 0.3 \text{ and }-0.3 \leq y \leq 0.3 \\ & -0.1 \quad\! \text{for } -0.1 < x < 0.1 \text{ and }-0.1 < y < 0.1 \\ & 0 \qquad\,\,\,\,\, \text{in the remaining part of the square } -1\leq x\leq 1 \text{ and } -1\leq y\leq 1. \end{aligned}\right.\end{aligned}$$ As in Example 1, we need to generate some learning examples.
Here, we assume that there is a square in $[-1,1]^{2}$, but that we do not know the position, size and height of the square. We assume that the position, size and height are all uniform random variables, with the height in $[-1,1]$ and the square supported in $[-1,1]^{2}$. As in Example 1, we generate 200 learning examples. To give the reader an intuitive idea, we show the true scatterer and five typical learning examples in Figure \[TrueLearnFig2\]. ![True scatterer and five typical learning examples[]{data-label="TrueLearnFig2"}](TrueLearnFig2.jpg "fig:"){width="100.00000%"}\ ![Relative errors with different parameters: the green dotted line shows the relative errors obtained by the RLM with 16204 elements; the cyan dotted line with circles shows the relative errors obtained by the RLM with 183198 elements; the blue solid line shows the relative errors obtained by the GMRLM with 16204 elements.[]{data-label="RelaErrEx2"}](RelaErrEx2.jpg "fig:"){width="50.00000%"}\ For this discontinuous scatterer, we take the same parameter values as in Example 1. Beyond our expectation, the proposed algorithm converges even faster than the RLM with more than eleven times as many elements, as shown in Figure \[RelaErrEx2\]. In our understanding, the reason for such fast convergence is that the means and covariances learned by the complex EM algorithm not only compensate numerical errors but also encode, through the learning examples, some prior information about the true scatterer. Only when the wavenumber reaches $9\pi \approx 28.27$ does the RLM with 183198 elements provide a recovered function with a relative error similar to that obtained by the GMRLM. The RLM with only 16204 elements diverges, as in Example 1, when the wavenumber is too large, and the proposed algorithm still cannot compensate for the loss of physics, as shown in Figure \[RelaErrEx2\]. The exact relative errors and element numbers are given in Table \[table2\].
\[table2\] [c|c|c|c]{} Algorithm & Element Number & Wavenumber & Relative Error\ RLM & 16204 & $8\pi$ & 36.49%\ RLM & 183198 & $10\pi$ & 9.71%\ GMRLM & 16204 & $5\pi$ & 13.92%\ GMRLM & 16204 & $7\pi$ & 12.09%\ RLM & 16204 & $7\pi$ & 39.28%\ RLM & 183198 & $7\pi$ & 22.03%\ RLM & 183198 & $9\pi$ & 12.76%\ ![Recovered functions with different parameters. (a): true function; (b): minimum relative error estimate for the RLM with 16204 elements and the wavenumber computed up to $8\pi$; (c): minimum relative error estimate for the RLM with 183198 elements and the wavenumber computed up to $10\pi$; (d): minimum relative error estimate for the GMRLM with 16204 elements and the wavenumber computed up to $8\pi$; (e): recovered function for the RLM with 16204 elements and the wavenumber computed up to $8\pi$; (f): recovered function for the RLM with 183198 elements and the wavenumber computed up to $8\pi$.[]{data-label="PcolorComEx2"}](PcolorComEx2.jpg "fig:"){width="100.00000%"}\ Finally, we show the image of the true scatterer on the top left of Figure \[PcolorComEx2\]. On the top middle, the best result obtained by the RLM with 16204 elements is given. From this image, we can see that it fails to recover the small square embedded in the large square. The best result obtained by the RLM with 183198 elements is shown on the top right. It is much better than the function obtained with 16204 elements. At the bottom of Figure \[PcolorComEx2\], we show the best result obtained by the GMRLM with 16204 elements on the left, and the results obtained by the RLM (computed up to the same wavenumber as the GMRLM) with 16204 elements and 183198 elements in the middle and on the right-hand side, respectively. The function recovered by the GMRLM is not as good as the one obtained by the RLM with more than eleven times as many elements and a higher wavenumber.
However, beyond our expectation, it already captures the small square embedded in the large square, a feature that was not incorporated in our 200 learning examples. In summary, the proposed GMRLM converges much faster than the classical RLM, and it provides a much better result at the same discretization level. Conclusions =========== In this paper, we model the errors introduced by a rough discretization as Gaussian mixture random variables. Based on this assumption, we construct a general Bayesian inverse framework and prove relations between MAP estimates and regularization methods. The general theory is then applied to a specific inverse medium scattering problem. Well-posedness in the statistical sense is proved, and the related optimization problem is obtained. In order to estimate the parameters of the Gaussian mixture distribution, we rigorously generalize the EM algorithm from real variables to complex variables, which incorporates a machine learning process into the classical inverse medium problem. Finally, the adjoint problem is derived, and the RLM is generalized to the GMRLM based on the preceding considerations. Two numerical examples are given, which demonstrate the effectiveness of the proposed methods. This work is just a beginning, and many problems remain to be solved. For example, we did not give a principle for choosing the parameter $K$ appearing in the Gaussian mixture distribution. In addition, in order to learn the model errors more accurately, one could attempt to design new algorithms that efficiently adjust the parameters of the Gaussian mixture distribution within the inverse iterative procedure. Acknowledgments {#acknowledgments .unnumbered} =============== This work was partially supported by the NSFC under grant Nos. 11501439, 11771347, 91730306, 41390454 and partially supported by the Major projects of the NSFC under grant Nos.
41390450 and 41390454, and partially supported by the postdoctoral science foundation project of China under grant no. 2017T100733.
--- abstract: 'The geometric median of a planar domain is the point minimizing the average distance from itself to all points of the domain. Here we derive a gradient system for computing the geometric median of a triangular domain and formulate an intuitive characteristic property of this median. It states that the three average distances from the geometric median to the three sides of the boundary of the triangular domain are all equal. These results are then generalized to other types of domains, as well as to other median-like points.' author: - | Petr Panov[$^1$]{}, Alexey Savvateev$^{2,3}$\ $^1$National Research University, <panovpeter@mail.ru>\ $^2$Moscow Institute of Physics and Technology, <hibiny@mail.ru>\ $^3$Central Economics and Mathematics Institute, RAS title: On the geometric median of a triangular domain and other median-like points --- > **Keywords:** geometric median, gradient system, triangular domain. The geometric median is a natural spatial generalization of the statistical median of a one-dimensional sample which, as is well known, minimizes the total distance to all elements of the sample. It is precisely this minimizing property that underlies the definition of the geometric median $m$ of a finite set of points $P_1,\dots P_n$ in the plane: $$\label{eq:nmedian} m = \mathop{\mathrm{arg\,min}}\limits_{X\in\mathbb R^2} \sum_{i=1}^n{|P_i-X|}.$$ Since the beginning of the last century, the geometric median and its immediate generalizations have been used as a practical tool in economics [@Weber1909]. In parallel, the mathematical properties of the discrete median have continued to be investigated, and efficient numerical methods for finding it have been developed [@Wesolowsky1993]. Toward the end of the century, interest shifted to the continuous case, with a growing body of research on geometric medians of curves and domains [@Fekete2005; @Zhang2014].
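For the discrete median (\[eq:nmedian\]) the classical numerical method is Weiszfeld's iteration, which repeatedly replaces $X$ by the average of the $P_i$ weighted by $1/|P_i-X|$. A minimal sketch (the stopping rule and the handling of iterates landing exactly on a data point are simplified here):

```python
import math

def weiszfeld(points, iters=2000):
    """Geometric median of a finite point set via Weiszfeld's fixed-point iteration."""
    # start from the centroid
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        wsum = wx = wy = 0.0
        for px, py in points:
            d = math.hypot(px - x, py - y)
            if d < 1e-12:        # iterate landed on a data point
                return (x, y)
            w = 1.0 / d
            wsum += w
            wx += w * px
            wy += w * py
        x, y = wx / wsum, wy / wsum
    return (x, y)

# For the vertices of a square the median is the center, by symmetry:
print(weiszfeld([(0, 0), (2, 0), (2, 2), (0, 2)]))
```

At the median, the sum of the unit vectors pointing from $m$ toward the $P_i$ vanishes; this is the discrete counterpart of the gradient systems derived below for domains.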
In our exposition we focus precisely on the continuous case. We begin by deriving a gradient system for finding the geometric median of a triangular domain (Theorem \[TriangleSystem\]). This yields a simple and compact characteristic property of the geometric median of such a domain (Proposition \[TriangleSystem’\]). These results are then generalized to other types of domains, as well as to other median-like points. Recall that, by analogy with the discrete case (\[eq:nmedian\]), the geometric median $m$ of a domain $\Omega\subset \mathbb R^2$ is defined as $$m = \mathop{\mathrm{arg\,min}}\limits_{X\in\mathbb R^2}\int_{P\in\Omega}{|P - X|}\,dP,$$ where $|P - X|$ is the usual Euclidean distance between the points $P$ and $X$. After introducing the notation $$\label{eq:SigmaOmega} \Sigma_\Omega(X) = \int_{P\in\Omega}|P-X|\,dP$$ the same definition can be written more concisely as $$m = \mathop{\mathrm{arg\,min}}\limits_{X\in\mathbb R^2} \Sigma_\Omega(X).$$ We will need one more piece of notation: for points $P_1$ and $P_2$ in the plane, let $$\Sigma_{P_1P_2}(X) = \int_{P\in P_1P_2}|P-X|\,dP,$$ where the integration is over the segment $P_1P_2$. Note that the average distance from a point $X$ to the points of the segment $P_1P_2$ is $\Sigma_{P_1P_2}(X)/|P_2-P_1|$. We now proceed to the first result. \[TriangleSystem\] A point $m$ is the geometric median of the triangular domain $\Delta$ with vertices $P_1,P_2,P_3$ if and only if $$\label{eq:TriangleSystem} \frac{\Sigma_{P_1P_2}(m)}{|P_2 - P_1|}\,\overrightarrow{P_1P_2} + \frac{\Sigma_{P_2P_3}(m)}{|P_3 - P_2|}\,\overrightarrow{P_2P_3} + \frac{\Sigma_{P_3P_1}(m)}{|P_1 - P_3|}\,\overrightarrow{P_3P_1} = 0.$$ [*Proof*]{}. By definition, the median $m= m(\Delta)$ is a critical point of the function $\Sigma_\Delta$.
Take a small vector $\vec \delta$, $|\vec \delta| = \delta$, and compute the increment of the function $\Sigma_\Delta$ when its argument is shifted by this vector, $$\delta\Sigma_\Delta(X)= \Sigma_\Delta(X + \vec \delta) - \Sigma_\Delta(X).$$ At the critical point $m$ this increment must be of order $o(\delta)$. Set $P'_i = P_i - \vec \delta$ and denote by $\Delta'$ the shifted triangle with the primed vertices (Fig. \[fig:BrownTriangle\]). ![The triangle $\Delta'$ is shifted relative to $\Delta$ by the vector $-\vec \delta$[]{data-label="fig:BrownTriangle"}](BrownTrianglePM) The figure shows that the increment of $\Sigma_\Delta$ under a shift of the argument by the vector $\vec \delta$ can also be written in the form $$\delta\Sigma_\Delta(X) = \Sigma_{\Delta'}(X) - \Sigma_\Delta(X).$$ Denote by $\pi_i$ the parallelogram with vertices $P_i,P_{i+1},P'_{i+1},P'_i$, and by $|\pi_i|$ its area. Then, as Figure \[fig:BrownTriangle\] shows, this increment is a sum of three terms of the form $\Sigma_{\pi_i}(X)$, taken with appropriate signs. Moreover, as $\delta \to 0$ the mean values of the integrals $\Sigma_{\pi_i}(X)$ and $\Sigma_{P_iP_{i+1}}(X)$ approach each other, namely, $$\frac{\Sigma_{\pi_i}(X)}{|\pi_i|} = \frac{\Sigma_{P_iP_{i+1}}(X)}{|P_{i+1}-P_i|} + o(1).$$ The question of the sign with which the term $\Sigma_{\pi_i}(X)$ enters the increment $\delta\Sigma_\Delta(X)$ is settled as follows. Multiply both sides of the previous equality by the oriented area of the parallelogram $\pi_i$, namely, by $-\vec\delta\wedge\overrightarrow{P_iP_{i+1}}$: $$-\frac{\vec\delta\wedge\overrightarrow{P_iP_{i+1}}}{|\pi_i|}\, \Sigma_{\pi_i}(X) = -\vec\delta\wedge\overrightarrow{P_iP_{i+1}}\, \frac{\Sigma_{P_iP_{i+1}}(X)}{|P_{i+1}-P_i|} + o(\delta).$$ It is easy to check that the fraction in front of $\Sigma_{\pi_i}(X)$ is exactly the required sign.
Thus, $$\begin{gathered} \label{eq:deltaSigma} \delta\Sigma_\Delta(X) = \\ = -\vec\delta\wedge\ \left( \frac{\Sigma_{P_1P_2}(X)}{|P_2 - P_1|}\,\overrightarrow{P_1P_2} + \frac{\Sigma_{P_2P_3}(X)}{|P_3 - P_2|}\,\overrightarrow{P_2P_3} + \frac{\Sigma_{P_3P_1}(X)}{|P_1 - P_3|}\,\overrightarrow{P_3P_1} \right) + o(\delta).\end{gathered}$$ We see that the increment $\delta\Sigma_\Delta(X)$ at a point $X$ is of order $o(\delta)$ if and only if the expression in parentheses vanishes. The theorem is proved. In fact, the vector in parentheses in (\[eq:deltaSigma\]) is the gradient of the function $\Sigma_\Delta(X)$ rotated by $-90^\circ$, so equation (\[eq:TriangleSystem\]) is indeed a gradient system. The gradient system (\[eq:TriangleSystem\]) can be written in a more compact and symmetric form. The following statement is a consequence of Theorem \[TriangleSystem\]. \[TriangleSystem’\] A point $m$ is the geometric median of the triangular domain $\Delta$ with vertices $P_1,P_2,P_3$ if and only if $$\label{eq:TriangleSystem'} \frac{\Sigma_{P_1P_2}(m)}{|P_2-P_1|} = \frac{\Sigma_{P_2P_3}(m)}{|P_3-P_2|} = \frac{\Sigma_{P_3P_1}(m)}{|P_1-P_3|} .$$ Thus the geometric median of a triangular domain is the point for which the three average distances to the three sides of the boundary triangle are all equal. [*Proof*]{}.
In the triangle, express one side through the other two, $\overrightarrow{P_3P_1} = -\overrightarrow{P_1P_2} - \overrightarrow{P_2P_3}$; then equality (\[eq:TriangleSystem\]) can be rewritten as $$\overrightarrow{P_1P_2}\,\left(\frac{\Sigma_{P_1P_2}(m)}{|P_2 - P_1|} - \frac{\Sigma_{P_3P_1}(m)}{|P_1 - P_3|}\right) + \overrightarrow{P_2P_3}\,\left(\frac{\Sigma_{P_2P_3}(m)}{|P_3 - P_2|} - \frac{\Sigma_{P_3P_1}(m)}{|P_1 - P_3|}\right) = 0.$$ Since the vectors $P_1P_2$ and $P_2P_3$ are linearly independent, the corresponding coefficients must vanish, so relation (\[eq:TriangleSystem’\]) holds at the critical point. The corollary is proved. Note that all integrals of the form $\Sigma_{PQ}(X)$ appearing in the gradient systems (\[eq:TriangleSystem\]) and (\[eq:TriangleSystem’\]) can be evaluated in closed form; the integrand is the square root of a quadratic polynomial. Thus the geometric median is a common zero of two elementary functions. Nevertheless, no closed-form analytic formula for it appears to be known. In any case, the geometric median of a triangular domain does not occur among the 25000 points listed in Clark Kimberling’s Encyclopedia of Triangle Centers [@Kimb]. We note that this encyclopedia is constantly growing and, moreover, is equipped with verification tools that allow one to check whether a given point belongs to it. The following result is proved in exactly the same way as Theorem \[TriangleSystem\]. \[PiSystem\] A point $m$ is the geometric median of the polygonal domain $\Pi$ with vertices $P_1,\dots,P_n$ if and only if $$\label{eq:PiSystem} \frac{\Sigma_{P_1P_2}(m)}{|P_2 - P_1|}\, \overrightarrow{P_1P_2}+ \dots + \frac{\Sigma_{P_nP_{n+1}}(m)}{|P_{n+1} - P_n|}\,\overrightarrow{P_nP_{n+1}} = 0.$$ Without giving a proof, we confine ourselves here to Figure \[fig:BrownPoly\], an analogue of Figure \[fig:BrownTriangle\].
![The position of the point $X + \vec\delta$ in the polygon $\Pi$ is the same as the position of the point $X$ in the shifted polygon $\Pi'$[]{data-label="fig:BrownPoly"}](BrownPoly) For an arbitrary planar domain $\Omega$, as an immediate consequence of relation (\[eq:PiSystem\]), we obtain the following result. \[OmegaSystem\] Let $\Omega$ be a planar domain with piecewise smooth boundary. A point $m$ is its geometric median if and only if $$\label{eq:OmegaSystem} \int_{P\in\partial\Omega}{|P-m|}\,\overrightarrow{dP} = 0.$$ To verify this statement, it suffices to inscribe in the curve $\partial\Omega$ a polygon with vertices $P_1,\dots,P_n$, write equality (\[eq:PiSystem\]) for its geometric median, and observe that the left-hand side of this equality is a Riemann sum for the integral (\[eq:OmegaSystem\]). We now additionally drop the restriction $n=2$ on the dimension and, instead of the function $\Sigma_\Omega$ defined by (\[eq:SigmaOmega\]), consider functions of the more general form $$\label{eq:SimgaOmegaf} \Sigma_\Omega^f(X) = \int_{P\in\Omega}{f(P-X)}\,dP,$$ where $f$ is a function of the argument $P\in \mathbb R^n$. Here the following general result holds, which covers all the previous ones. \[th:fOmegaSystem\] Let the function $f$ be continuous and let $\Omega$ be a bounded domain with piecewise smooth boundary in the space $\mathbb R^n$. Then for the function $\Sigma^f_\Omega$ the criticality condition at a point $m$ is equivalent to the equality $$\label{eq:fromStokes} \int_{P\in\partial\Omega}{f(P-m)}\,\overrightarrow{n(P)}\,dP = 0,$$ where $\overrightarrow{n(P)}$ is the unit outward normal vector to the boundary of the domain $\Omega$ at the point $P$. The proof of this theorem is quite analogous to the proof of Theorem \[OmegaSystem\]. Let us return to where we started, namely, to the geometric median of a triangular domain.
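Before doing so, the characteristic property (\[eq:TriangleSystem’\]) can at least be checked numerically on a concrete triangle. The sketch below locates the median as the zero of the vector field of the gradient system (\[eq:TriangleSystem\]); the quadrature order, grid ranges and tolerances are arbitrary choices of ours:

```python
import math

P1, P2, P3 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)

def avg_dist(P, Q, X, n=100):
    """Average distance from X to the segment PQ (composite Simpson rule)."""
    total = 0.0
    for i in range(n + 1):
        t = i / n
        dx = P[0] + t * (Q[0] - P[0]) - X[0]
        dy = P[1] + t * (Q[1] - P[1]) - X[1]
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * math.hypot(dx, dy)
    return total / (3 * n)

def field(X):
    """Norm of the left-hand side of the gradient system; zero exactly at the median."""
    fx = fy = 0.0
    for P, Q in ((P1, P2), (P2, P3), (P3, P1)):
        a = avg_dist(P, Q, X)
        fx += a * (Q[0] - P[0])
        fy += a * (Q[1] - P[1])
    return math.hypot(fx, fy)

# grid search with successive refinement around the best point so far
best, step = (1 / 3, 1 / 3), 0.1
for _ in range(4):
    cx, cy = best
    best = min(((cx + i * step, cy + j * step)
                for i in range(-10, 11) for j in range(-10, 11)), key=field)
    step /= 4

a12 = avg_dist(P1, P2, best)
a23 = avg_dist(P2, P3, best)
a31 = avg_dist(P3, P1, best)
print(best, a12, a23, a31)  # the three averages nearly coincide
```

Up to the accuracy of the search, the three average distances to the sides agree, as the corollary predicts.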
As already mentioned, in the general case no exact solution of the gradient system (\[eq:TriangleSystem\]) or of the system (\[eq:TriangleSystem’\]) is available. We succeeded only in one degenerate case, that of a “flattened” triangle with sides $\alpha,\beta,\gamma$ in which $\gamma = |\alpha - \beta|$. We mean the following statement. Consider triangles with sides $\alpha,\beta,\gamma$, where the two larger sides $\alpha$ and $\beta$ are fixed, while the smaller side $\gamma$ is a parameter. Denote the geometric median of such a triangle by $m(\gamma)$. Now let $\gamma$ tend to $|\alpha-\beta|$; then $m(\gamma)$ tends to a point $m$ whose distance from the common vertex of the sides $\alpha$ and $\beta$ equals $\sqrt{\alpha\beta/2}$. This statement is a simple consequence of equality (\[eq:TriangleSystem’\]) applied directly to the flattened triangle with sides $\alpha,\beta$ and $|\alpha-\beta|$, for which the equality still makes sense even though the very notion of the geometric median is no longer entirely well defined. We add that among the flattened triangles there are two types of isosceles ones, namely the triangles with sides $\alpha,\alpha/2,\alpha/2$ and $\alpha,\alpha,0$. The paper [@Panov2018] contains more precise asymptotic information on the location of their geometric medians. [9]{} Alfred Weber, *Über den Standort der Industrien. 1. Teil: Reine Theorie des Standorts*, 2. Auflage, Tübingen, 1909 G.O. Wesolowsky, *The Weber Problem: History and Perspectives*, Location Science, 1993, 1, pp. 5–23 Sandor P. Fekete, Joseph S.B. Mitchell, and Karin Beurer, *On the Continuous Fermat–Weber Problem*, Operations Research, 2005, V. 53, no 1, pp. 61–76 Thomas T.C.K. Zhang, John G. Carlsson, *Continuous Fermat–Weber Problem for a Convex Polygon Using Euclidean Distance*, 2014, <https://arxiv.org/abs/1403.3715> Clark Kimberling, *Encyclopedia of Triangle Centers*\ <http://faculty.evansville.edu/ck6/encyclopedia/> P.A. Panov, *On the geometric median of convex as well as triangular and other polygonal domains*, Izv. Irkutsk. Gos. Univ., Ser. Matematika, in press
--- abstract: | In many problems of classical analysis extremal configurations appear to exhibit complicated fractal structure. This makes it much harder to describe extremals and to attack such problems. Many of these problems are related to the [*multifractal analysis*]{} of [*harmonic measure*]{}. We argue that, searching for extremals in such problems, one should work with random fractals rather than deterministic ones. We introduce a new class of fractals, [*random conformal snowflakes*]{}, and investigate its properties, developing tools to estimate spectra and showing that extremals can be found in this class. As an application we significantly improve known estimates from below on the extremal behaviour of harmonic measure, showing how to construct a rather simple snowflake which has a spectrum quite close to the conjectured extremal value. author: - 'D. Beliaev' - 'S. Smirnov' bibliography: - 'snow.bib' title: Random conformal snowflakes --- Introduction ============ It became apparent during the last decade that extremal configurations in many important problems in classical complex analysis exhibit complicated fractal structure. This makes such problems more difficult to approach than similar ones where extremal objects are smooth. As an example one can consider the coefficient problem for univalent functions. Bieberbach formulated his famous conjecture arguing that the Koebe function, which maps the unit disc to the plane with a straight slit, is extremal. The Bieberbach conjecture was ultimately proved by de Branges in 1985 [@deBranges], while the sharp growth asymptotics was obtained by Littlewood [@Littlewood25] in 1925 by a much easier argument. However, the coefficient growth problem for bounded functions remains widely open, largely due to the fact that the extremals must be of fractal nature (cf. [@CaJo]).
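The extremal role of the Koebe function is easy to see at the level of coefficients: $k(z)=z/(1-z)^2$ has Taylor coefficients $a_n=n$, the exact growth asserted by the Bieberbach conjecture. A quick series computation confirms this:

```python
# Taylor coefficients of the Koebe function k(z) = z/(1-z)^2,
# obtained by squaring the geometric series and shifting by one power of z.
N = 20
geom = [1] * (N + 1)                      # 1/(1-z) = sum_n z^n
sq = [sum(geom[k] * geom[n - k] for k in range(n + 1)) for n in range(N + 1)]
koebe = [0] + sq[:N]                      # multiply by z: shift indices by one
print(koebe[:6])  # [0, 1, 2, 3, 4, 5]
```

By de Branges' theorem, every normalized univalent function satisfies $|a_n|\le n$, so this linear growth is the extremal one.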
This relates (see [@BeSmECM]) to a more general question of finding the [*universal multifractal spectrum*]{} of [*harmonic measure*]{} defined below, which includes many other problems, in particular conjectures of Brennan, Carleson and Jones, Kraetzer, Szegö, and Littlewood. In this paper we report on our search for extremal fractals. We argue that one should study random fractals instead of deterministic ones. We introduce a new class of random fractals, [*random conformal snowflakes*]{}, investigate its properties, and as a consequence significantly improve known estimates from below for the multifractal spectra of harmonic measure. Multifractal analysis of harmonic measure ----------------------------------------- It became clear recently that the appropriate language for many problems in geometric function theory is given by the [*multifractal analysis*]{} of [*harmonic measure*]{}. The concept of the multifractal spectrum of a measure was introduced by Mandelbrot in 1971, in two papers [@Mandelbrot72; @Mandelbrot74] devoted to the distribution of energy in a turbulent flow. We use the definitions that appeared in 1986 in a seminal physics paper [@HJKPS] by Halsey, Jensen, Kadanoff, Procaccia, and Shraiman, who tried to understand and describe scaling laws of physical measures on different fractals of physical nature (strange attractors, stochastic fractals like DLA, etc.). There are various notions of spectra and several ways to make a rigorous definition. Two standard spectra are the [*packing*]{} and [*dimension*]{} spectra. The packing spectrum of harmonic measure $\omega$ in a domain $\Omega$ with a compact boundary is defined as $$\pi_{\Omega}(t)= \sup\,\Big\{q:\ \forall\delta>0~\exists~\delta\text{-packing}~\{B\}~\text{with}~\sum \mathrm{diam}(B)^t\omega(B)^q\,\ge\,1\Big\}\ ,$$ where a $\delta$-packing is a collection of disjoint open sets whose diameters do not exceed $\delta$.
The [*dimension spectrum*]{} is defined in terms of harmonic measure $\omega$ on the boundary of $\Omega$ (in the case of a simply connected domain $\Omega$, harmonic measure is the image under the Riemann map $\phi$ of the normalised length on the unit circle). The dimension spectrum gives the dimension of the set of points where harmonic measure satisfies a certain power law: $$f(\alpha)~:=~\mathrm{dim}\, \Big\{z:~\omega\br{B(z,\delta)}\,\approx\,\delta^\alpha\,,~\delta\to0\Big\},~\alpha\ge\frac12~.$$ Here $\mathrm{dim}$ stands for the Hausdorff or Minkowski dimension, leading to possibly different spectra. The restriction $\alpha \ge 1/2$ is due to Beurling’s inequality. Of course in general there will be many points where the measure behaves differently at different scales, so one has to add $\limsup$’s and $\liminf$’s to the definition above – consult [@Makarov] for details. In our context it is more suitable to work with a modification of the packing spectrum which is specific to the harmonic measure on a two-dimensional simply connected domain $\Omega$. In this case we can define the [*integral means spectrum*]{} as $$\beta_\phi(t)~:=~\limsup_{r\to1+}\frac{\log \int_{0}^{2\pi}|\phi'(re^{i\theta})|^t d\theta}{|\log(r-1)|},~t\in{{\mathbb R}}~,$$ where $\phi$ is a Riemann map from the complement of the unit disc onto a simply connected domain $\Omega$. Connections between all these spectra for particular domains are not that simple, but the [*universal spectra*]{} $$\Pi(t)=\sup_\Omega \pi(t), \quad F(\alpha)=\sup_\Omega f(\alpha), \ \ \mathrm{and} \ \ B(t)=\sup_\Omega\beta(t)$$ are related by Legendre-type transforms: $$\begin{aligned} F(\alpha)&=&\inf_{0\le t \le 2} (\alpha \Pi(t)+t), \quad \alpha \ge 1~, \\ \Pi(t)&=&\sup_{\alpha\ge 1} \left(\frac{F(\alpha)-t}{\alpha}\right), \quad 0\le t\le 2~, \\ \Pi(t)&=&B(t)-t+1~.\end{aligned}$$ See Makarov’s survey [@Makarov] for details.
Random fractals --------------- One of the main problems in the computation of the integral means spectrum (or other multifractal spectra) is the fact that the derivative of a Riemann map for a fractal domain depends on the argument in a very irregular way: $\phi'$ is a “fractal” object in itself. We propose to study random fractals to overcome this problem. For a random function $\phi$ it is natural to consider the [*average integral means spectrum:*]{} $$\begin{aligned} \bar\beta(t)&=&\sup\brs{\beta: \int_1(r-1)^{\beta-1}\int_0^{2\pi} {{\mathbb E}}\brb{|\phi'(r e^{i\theta})|^t}d \theta d r=\infty} \\ &=&\inf\brs{\beta: \int_1(r-1)^{\beta-1}\int_0^{2\pi} {{\mathbb E}}\brb{|\phi'(r e^{i\theta})|^t}d \theta d r<\infty}.\end{aligned}$$ The average spectrum does not have to be related to the spectra of a particular realization. We want to point out that even if $\phi$ has the same spectrum a.s., this does not guarantee that $\bar\beta(t)$ is equal to the a.s. value of $\beta(t)$. Moreover, it can happen that $\bar\beta$ is not the spectrum of [*any*]{} particular domain. But one can see that $\bar\beta(t)$ is bounded by the universal spectrum $B(t)$. Indeed, suppose that there is a random $\phi$ with $\bar\beta(t)>B(t)+\epsilon$; hence for any $r$ there are particular realizations of $\phi$ with $\int |\phi'(z)|^t d\theta>(r-1)^{-B(t)-\epsilon/2}$. Then by Makarov’s fractal approximation [@Makarov] there is a (deterministic) function $F$ such that $\beta_F(t)>B(t)$, which is impossible by the definition of $B(t)$. For many classes of random fractals ${{\mathbb E}}|\phi'|^t$ (or its growth rate) does not depend on the argument. This allows us to drop the integration with respect to the argument and study the growth rate along any particular radius. Perhaps more importantly, ${{\mathbb E}}|\phi'|^t$ is no longer a “fractal” function.
One can think that this is not a big advantage compared to the usual integral means spectrum: instead of averaging over different arguments we average over different realizations of a fractal. But most fractals are the result of some kind of iterative construction, which means that they are invariant under some (random) transformation. Thus ${{\mathbb E}}|\phi'|^t$ is a solution of some kind of equation. Solving this equation (or estimating its solutions) we can find $\bar\beta(t)$. In this paper we want to show how one can employ these ideas. In Section \[sec:def\] we introduce a new class of random fractals that we call random conformal snowflakes. In Section \[sec:spectrum\] we show that $\bar\beta(t)$ for this class is related to the main eigenvalue of a particular integral operator. We also prove the fractal approximation for this class in Section \[sec:approximation\]. In Appendix \[sec:application\] we give an example of a snowflake and prove that for this snowflake $\bar\beta(1)>0.23$. This significantly improves the previously known estimate $B(1)>0.17$ due to Pommerenke [@Pommerenke75]. Conformal snowflake {#sec:def} =================== The construction of our conformal snowflake is similar to the construction in Pommerenke’s paper [@Pommerenke67lms]. The main difference is the introduction of randomness. By $\Sigma'$ we denote the class of all univalent functions $\phi:{{\mathbb D}}_-\to {{\mathbb D}}_-$ such that $\phi(\infty)=\infty$ and $\phi'(\infty)\in {{\mathbb R}}$. Let $\phi\in \Sigma'$ be a function with expansion at infinity $\phi(z)=b_1 z+\dots$; by ${{\mathrm cap\:}}\phi={{\mathrm cap\:}}\Omega$ we denote the logarithmic capacity of $\phi$, which is equal to $\log|b_1|$. We will also use the so-called [*Koebe $n$-root transform*]{} which is defined as $$({K}\phi)(z)=({K}_n\phi)(z)=\sqrt[n]{\phi(z^n)}.$$ It is a well known fact that the Koebe transform is well defined and ${K}\phi\in\Sigma'$.
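For instance, the effect of ${K}_n$ on the capacity can be read off directly from the expansion at infinity (a one-line verification):

```latex
({K}_n\phi)(z)=\sqrt[n]{\phi(z^n)}
             =\sqrt[n]{b_1 z^n+b_0+\dots}
             =b_1^{1/n}\,z\,\bigl(1+O(z^{-n})\bigr)^{1/n},
\qquad\text{so}\quad
{{\mathrm cap\:}}({K}_n\phi)=\log\bigl|b_1^{1/n}\bigr|=\frac{1}{n}\,{{\mathrm cap\:}}\phi .
```

Similarly, for a composition the leading coefficients multiply, so the capacities add.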
It is easy to check that the Koebe transform divides the capacity by $n$ and that the capacity of a composition is the sum of the capacities. First we define the deterministic snowflake. To construct a snowflake we need a building block $\phi \in \Sigma'$ and an integer $k\ge 2$. Our snowflake will be the result of the following iterative procedure: we start with the building block, and at the $n$-th step we take the composition of our function and the $k^n$-root transform of the rotated building block. Let $\phi \in \Sigma'$ and $\theta\in [0,2\pi]$. By $\phi_\theta(z)$ we denote the map whose range is the rotation of that for $\phi$, namely $e^{i\theta}\phi(z e^{-i\theta})$. Let $\phi\in \Sigma'$, let $k\ge 2$ be an integer, and let $\{\theta_n\}$ be a sequence of numbers from ${{\mathbb T}}$. Let $f_0(z)=\phi_{\theta_0}(z)$ and $$\begin{aligned} f_{n}(z)=f_{n-1}(K_{k^n}\phi_{\theta_n}(z))= \phi_{\theta_0}(\phi_{\theta_1}^{1/k}(\dots\phi_{\theta_n}^{1/k}(z^{k^n})\dots)).\end{aligned}$$ The conformal snowflake $f$ is the limit of $f_n$. For simplicity $S={{\mathbb C}}\setminus f({{\mathbb D}}_-)$ and $g=f^{-1}$ are also called a snowflake. Sometimes it is easier to work with a slightly different symmetric snowflake $$\bar f_{n}(z)=\phi_{\theta_1}^{1/k}(\dots\phi_{\theta_n}^{1/k}(z^{k^n})\dots)= \Phi_1\circ \cdots \circ\Phi_n(z),$$ where $\Phi_j={K}_{k^j}\phi_{\theta_j}$. There are two equivalent ways to construct the symmetric snowflake from the usual one. One is to take the Koebe transform ${K}f_n$, another is to start with $f_0(z)=z$. It is easy to see that $f_n=\Phi_0\circ\cdots\circ\Phi_n.$ How does this snowflake grow? This is easy to analyse by looking at the evolution of $\bar f_n$. At every step we add $k^n$ equidistributed (according to the harmonic measure) small copies of the building block. But they are not exact copies, they are distorted a little bit by a conformal mapping.
Figures \[fbar\] and \[f\] show images of the first four functions $\bar f_n$ and $f_n$ with $k=2$, where the building block is a slit map (which adds a straight slit of length $4$). ![The third generation of a snowflake: $f_3$.[]{data-label="pic2"}](snow7313long.eps) ![The image of a small boundary arc under $\bar f_3$ with three Green’s lines.[]{data-label="pic3"}](snow7313locgreen.eps) Let $f_n=\phi_{\theta_0}(\bar f_n(z))$ be the $n$-th approximation to the snowflake with a building block $\phi$ and $k\ge 2$. Then ${{\mathrm cap\:}}( f_n)$ and ${{\mathrm cap\:}}(\bar f_n)$ are bounded by (and converge to) ${{\mathrm cap\:}}(\phi)k/(k-1)$ and ${{\mathrm cap\:}}(\phi)/(k-1)$. \[l:boundedcapacity\] This lemma follows immediately from the standard facts that $$\begin{aligned} {{\mathrm cap\:}}(f\circ g)&=&{{\mathrm cap\:}}(f)+{{\mathrm cap\:}}(g),\\ {{\mathrm cap\:}}(K_n f)&=&{{\mathrm cap\:}}(f)/n.\end{aligned}$$ The conformal snowflake is well defined: namely, let $f_n$ be the $n$-th approximation to a snowflake with a building block $\phi$ and $k\ge 2$. Then there is $f\in\Sigma'$ such that the $f_n$ converge to $f$ uniformly on every compact subset of ${{\mathbb D}}_-$. \[thm:conv\] Fix $\epsilon>0$. It is enough to prove that the $\bar f_n$ converge uniformly on ${{\mathbb D}}_\epsilon=\{z: |z|\ge 1+\epsilon\}$. Suppose that $m>n$, so that we can write $\bar f_m=\bar f_n \circ \Phi_{n,m}$ where $\Phi_{n,m}=\Phi_{n+1}\circ\cdots\circ \Phi_m$ and $$|\bar f_n(z)-\bar f_m(z)|= |\bar f_n(z)-\bar f_n(\Phi_{n,m}(z))|\le \max_{\zeta \in {{\mathbb D}}_\epsilon}|\bar f_n'(\zeta)||z-\Phi_{n,m}(z)|.$$ By Lemma \[l:boundedcapacity\] ${{\mathrm cap\:}}(\bar f_n)$ is uniformly bounded, hence by the growth theorem the derivative of $\bar f_n$ is uniformly bounded in ${{\mathbb D}}_\epsilon$. Thus it is enough to prove that $\Phi_{n,m}(z)$ converge uniformly to $z$.
Let $\phi(z)=b_1 z+\dots$ at infinity; then $\Phi_n(z)=b_1^{1/k^{n}}z+\dots$ The functions $\Phi_{n,m}$ have the expansion $$b_1^{k^{-n}+\dots +k^{-m}} z+\dots=b_1^{(n,m)}z+\dots$$ Obviously, $b_1^{(n,m)}\to 1$ as $n\to\infty$. This proves that $\Phi_{n,m}(z)\to z$ uniformly on ${{\mathbb D}}_\epsilon$, hence the $f_n$ converge uniformly. A uniform limit of functions from $\Sigma'$ is either a constant or a function from $\Sigma'$. Since ${{\mathrm cap\:}}(f_n)$ is uniformly bounded, the limit cannot be a constant. Let $\phi\in\Sigma'$ and let $k\ge 2$ be an integer. The random conformal snowflake is a conformal snowflake defined by $\phi$, $k$, and $\{\theta_n\}$, where the $\theta_n$ are independent random variables uniformly distributed on ${{\mathbb T}}$. \[stationary\] Let $\phi\in\Sigma'$, let $k\ge 2$ be an integer, and let $\psi=\phi^{-1}$. Let $f$ be a corresponding random snowflake and $g=f^{-1}$. Then the distribution of $f$ is invariant under the transformation $\Sigma'\times {{\mathbb T}}\mapsto \Sigma'$ which is defined by $$(f,\theta)\mapsto \phi_\theta({K}_k f).$$ In other words $$\begin{aligned} f(z)&=&\phi_\theta(({K}_k f)(z))= \phi_\theta(f^{1/k}(z^k)),\\ g(z)&=&({K}_k g)(\psi_\theta(z))= g^{1/k}(\psi^k_\theta(z)),\end{aligned}$$ where $\theta$ is uniformly distributed on ${{\mathbb T}}$. Both equalities should be understood in the sense of distributions, i.e. the distributions of both parts are the same. Let $f$ be a snowflake generated by $\{\theta_n\}$. The probability distribution of the family of snowflakes is the infinite product of (normalised) Lebesgue measures on ${{\mathbb T}}$. By the definition $$f(z)= \lim_{n\to\infty} \phi_{\theta_0}(\phi_{\theta_1}^{1/k}(\dots\phi_{\theta_n}^{1/k}(z^{k^n})\dots))$$ and $$\phi_\theta(({K}_k f)(z))=\lim_{n\to\infty} \phi_{\theta}(\phi_{\theta_0}^{1/k}(\dots\phi_{\theta_n}^{1/k}(z^{k^{n+1}})\dots)),$$ hence $\phi_\theta({K}_k f)$ is just the snowflake defined by the sequence $\theta, \theta_0, \theta_1,\dots$.
So the transformation $f(z)\mapsto\phi_\theta(({K}_k f)(z))$ is just a shift on $[-\pi,\pi]^{\mathbb N}$. Obviously the product measure is invariant under the shift. This proves the stationarity of $f$. The stationarity of $g$ follows immediately from the stationarity of $f$. There is another way to think about random snowflakes. Let $\mathcal M$ be the space of probability measures on $\Sigma'$. And let $T$ be the random transformation $f \mapsto \phi_\theta ({K}_k f)$, where $\theta$ is uniformly distributed on $[-\pi,\pi]$. Obviously $T$ acts on $\mathcal M$. The distribution of a random snowflake is the only measure which is invariant under $T$. In some sense the random snowflake is an analog of a Julia set: it semi-conjugates $z^k$ and $\psi_\theta^k$. ------------------------------------ -------------------------------- ------------------------------------- $\stackrel{f}{\longleftarrow}$ ![image](circle.eps){width="0.7cm"} ![image](snow24L.eps){width="3cm"} $\uparrow \psi_\theta^k(z)$ $\uparrow z^k$ $\stackrel{f}{\longleftarrow}$ ![image](circle.eps){width="0.7cm"} ![image](snow24L.eps){width="3cm"} ------------------------------------ -------------------------------- ------------------------------------- The random conformal snowflakes are also rotationally invariant; the exact meaning is given by the following theorem. Let $\phi\in\Sigma'$, let $k\ge 2$, and let $g$ be the corresponding snowflake. Then $g$ is rotationally invariant, namely $g(z)$ and $e^{i\omega}g(e^{-i\omega}z)$ have the same distribution for any $\omega$. Let $g_n(z)$ be the $n$-th approximation to the snowflake defined by the sequence of rotations $\theta_0,\dots,\theta_n$. We claim that $\tilde g_n(z)=e^{i\omega}g_n(e^{-i\omega}z)$ is the approximation to the snowflake defined by $\tilde\theta_0,\dots,\tilde\theta_n$ where $\tilde\theta_j=\theta_j+\omega k^j$ (we add arguments $\mod 2 \pi$). We prove this by induction. Obviously this is true for $\tilde g_0$. Suppose that it is true for $\tilde g_{n-1}$.
By the definition of $g_n$ and the assumption that $g_{n-1}(e^{-i\omega}z)=e^{-i\omega}\tilde g_{n-1}(z)$ we have that $$\begin{aligned} e^{i\omega}g_n(e^{-i\omega}z)&=&e^{i\omega}e^{i\theta_n/k^n} \psi^{1/k^n}(e^{-i\theta_n} g_{n-1}^{k^n}(e^{-i\omega}z)) \\ &=& e^{i\tilde\theta_n/k^n}\psi^{1/k^n}(e^{-i\tilde\theta_n}\tilde g_{n-1}^{k^n}(z)) = \tilde g_n(z).\end{aligned}$$ Obviously the $\tilde \theta_n$ are also independent and uniformly distributed on ${{\mathbb T}}$, hence $\tilde g_n$ has the same distribution as $g_n$. The distributions of $|g(z)|$ and $|g'(z)|$ depend on $|z|$ only. The same is true for $f$. Spectrum of a conformal snowflake {#sec:spectrum} ================================= As we discussed above, for random fractals it is more natural to consider the average spectrum $\bar \beta(t)$ instead of the usual spectrum $\beta(t)$. We will work with $\bar \beta(t)$ only and “spectrum” will always mean $\bar \beta(t)$. We will write ${{\mathcal L}}$ for the class of functions on $(1,\infty)$ that are bounded on compact sets and integrable in a neighbourhood of $1$. In particular, these functions belong to $L^1[1,R]$ for any $1<R<\infty$. Let $F(z)=F(|z|)=F(r)={{\mathbb E}}\brb{\,|g'(r)/g(r)|^\tau\log^{\sigma}|g(r)|}$ where $\tau=2-t$ and $\sigma=\beta-1$. The $\bar\beta(t)$ spectrum of the snowflake is equal to $$\inf\brs{\beta: F(r)\in {{\mathcal L}}}.$$ By the definition $\bar\beta$ is the minimal value of $\beta$ such that $$\int_1\int_0^{2\pi}(r-1)^{\beta-1}{{\mathbb E}}|f'(r e^{i\theta})|^t d\theta d r$$ is finite. We change variables to $w=f(z)=f(re^{i\theta})$ $$\begin{aligned} \int\int {{\mathbb E}}\brb{|f'(re^{i\theta})|^t}(r-1)^{\beta-1} dr d\theta &=& \int \frac{{{\mathbb E}}\brb{|f'(z)|^t}(|z|-1)^{\beta-1}dm(z)}{r} \\&=& \int {{\mathbb E}}\brb{\frac{|g'(w)|^{2-t}(|g(w)|-1)^{\beta-1}}{|g(w)|}}dm \, ,\end{aligned}$$ where $m$ is the Lebesgue measure.
Note that $|g|$ is uniformly bounded and $g$ is rotationally invariant, hence the last integral is finite if and only if $$\int_1 |g'(r)|^\tau(|g(r)|-1)^\sigma d r<\infty.$$ Since $|g|$ is bounded between $1$ and a uniform constant, we have that $|g'(r)|^\tau (|g(r)|-1)^\sigma$ is comparable up to an absolute constant to $$\br{\frac{|g'(r)|}{|g(r)|}}^\tau \log^\sigma|g(r)|.$$ If $F \in {{\mathcal L}}$ then it is a solution of the following equation: $$\label{eq} F(r)=\frac{1}{k^\sigma}\int_{-\pi}^{\pi} F(|\psi^k(re^{i\theta})|)|\psi^{k-1}(re^{i\theta}) \psi'(re^{i\theta})|^\tau\frac{d\theta}{2\pi}.$$ By Theorem \[stationary\] $g(z)$ and $g^{1/k}(\psi_\theta^{k}(z))$ have the same distribution, hence $$\begin{aligned} F(r)&=&{{\mathbb E}}\brb{\left|g'(r)/g(r)\right|^\tau\log^\sigma |g(r)|} \\ &=& {{\mathbb E}}\brb{\left|\frac{(g^{1/k}(\psi_\theta^k(r)))'}{g^{1/k}(\psi_\theta^k(r))}\right|^\tau \log^\sigma|g^{1/k}(\psi_\theta^k(r))|}\\ &=& {{\mathbb E}}\brb{\left|\frac{g'(\psi^k_\theta(r))}{g(\psi_\theta^k(r))}\right|^\tau\log^\sigma|g(\psi^k_\theta(r))| \frac{|\psi'_\theta(r)\psi^{k-1}_\theta(r)|^\tau}{k^\sigma}},\end{aligned}$$ where $\theta$ has a uniform distribution. The expectation is the integral with respect to the joint distribution of $g$ and $\theta$; since they are independent, this joint distribution is just a product measure.
So we can write it as a double integral: first we take the expectation with respect to the distribution of $g$ and then with respect to the (uniform) distribution of $\theta$ $$\begin{aligned} F(r)&=&\int_{-\pi}^\pi \br{\int \left|\frac{g'(\psi^k_\theta(r))}{g(\psi_\theta^k(r))}\right|^\tau\log^\sigma|g(\psi^k_\theta(r))| \frac{|\psi'_\theta(r)\psi^{k-1}_\theta(r)|^\tau}{k^\sigma}d g} \frac{d\theta}{2\pi} \\ &=& \int_{-\pi}^\pi \br{\int \left|\frac{g'(\psi^k_\theta(r))} {g(\psi_\theta^k(r))}\right|^\tau\log^\sigma|g(\psi^k_\theta(r))| d g}\frac{|\psi'_\theta(r)\psi^{k-1}_\theta(r)|^\tau}{k^\sigma} \frac{d\theta}{2\pi}.\end{aligned}$$ The inner integral is equal to $F(\psi_\theta^k(r))=F(\psi^k(e^{-i\theta}r))$ by the definition of $F$, hence $$\begin{aligned} F(r)&=&\int_{-\pi}^\pi F(\psi^k(e^{-i\theta}r)) \frac{|\psi'(e^{-i\theta}r)\psi^{k-1}(e^{-i\theta}r)|^\tau}{k^\sigma} \frac{d\theta}{2\pi} \\ &=& \frac{1}{k^\sigma}\int_{-\pi}^\pi F(\psi^k(e^{i\theta}r)) |\psi'(e^{i\theta}r)\psi^{k-1}(e^{i\theta}r)|^\tau \frac{d\theta}{2\pi}\end{aligned}$$ which completes the proof. This equation is the key ingredient in our calculations. One can think of $F$ as the main eigenfunction of an integral operator. Hence the problem of finding the spectrum of the snowflake boils down to a question about the main eigenvalue of a particular integral operator. Usually it is not very difficult to estimate the latter. This justifies the definition: $$Qf(r):=k \int_{-\pi}^{\pi} f (|\psi^k(re^{i\theta})|)\, |\psi^{k-1}(re^{i\theta}) \psi'(re^{i\theta})|^\tau\frac{d\theta}{2\pi}\, .$$ Using this notation we can rewrite (\[eq\]) as $$k^{\beta}F=QF.$$ Note that this is in fact an ordinary kernel operator: $|\psi|$ is a smooth function of $\theta$, hence we can change the variable and write it as an integral operator. As mentioned above, the study of $F$ is closely related to the study of the operator $Q$ and its eigenvalues.
And our estimate of the spectrum is in fact an estimate of the main eigenvalue. Adjoint operator ---------------- First of all we want to find a formally adjoint operator. Let $\nu$ be a bounded function and $R>1$ such that $D_R \subset \psi^k(D_R)$, where $D_R=\brs{z:1<|z|<R}$. $$\begin{aligned} \int_1^R Qf(r)\nu(r)d r &=& \int_1^R \nu(r)k \int_0^{2\pi} f(\psi^k(re^{i\theta}))\, |\psi'(re^{i\theta})\psi^{k-1}(re^{i\theta})|^\tau\frac{d\theta}{2\pi}d r \\ &=& \int_{D_R}\frac{\nu(|z|)}{|z|}\frac{k}{ 2 \pi}f(\psi^k(z)) |\psi'(z)\psi^{k-1}(z)|^\tau d m(z),\end{aligned}$$ where $d m $ is the Lebesgue measure. Changing the variable to $w=\psi^k(z)$ we get $$\begin{aligned} &&\int_{\psi^k(D_R)}\frac{\nu(|\phi(w^{1/k})|)}{|\phi(w^{1/k})|} \frac{1}{k 2\pi}f(w)|\phi'(w^{1/k})w^{1/k-1}|^{2-\tau} d m(w) \\ &\ge& \int_1^R\int_0^{2\pi k} \frac{r\nu(|\phi(r^{1/k}e^{i\theta/k})|)}{|\phi(r^{1/k}e^{i\theta/k})|} f(r)r^{\frac{(k-1)(\tau-2)}{k}}|\phi'(r^{1/k}e^{i\theta/k})|^{2-\tau} \frac{d\theta}{2\pi k}d r \\ &=& \int_1^R f(r)\int_0^{2\pi } \frac{r\nu(|\phi(r^{1/k}e^{i\theta})|)}{|\phi(r^{1/k}e^{i\theta})|} r^{\frac{(k-1)(\tau-2)}{k}}|\phi'(r^{1/k}e^{i\theta})|^{2-\tau} \frac{d\theta}{2\pi}d r.\end{aligned}$$ So we define another operator $$P\nu(r):={r^{1-\frac{(k-1)(2-\tau)}{k}}} \int_0^{2\pi} \frac{\nu(|\phi(r^{1/k}e^{i\theta})|)}{|\phi(r^{1/k}e^{i\theta})|} |\phi'(r^{1/k}e^{i\theta})|^{2-\tau}\frac{d\theta}{2\pi}. \label{defP}$$ Changing $2-\tau$ to $t$ we can rewrite (\[defP\]) as $$P\nu(r):={r^{1-\frac{(k-1)t}{k}}} \int_0^{2\pi} \frac{\nu(|\phi(r^{1/k}e^{i\theta})|)}{|\phi(r^{1/k}e^{i\theta})|} |\phi'(r^{1/k}e^{i\theta})|^{t}\frac{d\theta}{2\pi}.$$ The inequality above can be written as $$\label{cover} \int_1^R Q f(r)\nu(r)d r \ge \int_1^R f(r)P\nu(r) d r.$$ We would like to note that for $R=\infty$ there is equality, since $\psi^k({{\mathbb D}}_-)$ covers ${{\mathbb D}}_-$ exactly $k$ times.
In this case $$\label{conj} \int_1^\infty Q f(r)\nu(r)d r = \int_1^\infty f(r)P\nu(r) d r,$$ so the operators $P$ and $Q$ are formally adjoint on $[1,\infty)$. The operator $Q=Q(t)$ acts on ${{\mathcal L}}$ if $\int|\phi'(r e^{i\theta})|^t d \theta$ is bounded. If $t\ge 1$ then it also acts on $L^1(1,\infty)$. Let $\nu=1$ in (\[conj\]). Then $$\int_1^\infty Q f(r)d r = \int_1^\infty f(r) \int_{-\pi}^{\pi} \frac{r^{1-\frac{(k-1)t}{k}}}{|\phi(r^{1/k}e^{i\theta})|} |\phi'(r^{1/k}e^{i\theta})|^{t}\frac{d\theta}{2\pi}\,d r.$$ Let $r<R$, then $$\frac{r^{1-\frac{(k-1)t}{k}}}{|\phi(r^{1/k}e^{i\theta})|}<R^{1-\frac{(k-1)t}{k}},$$ so the second integral is bounded since $\int |\phi'|^t d\theta$ is bounded. This proves that $Q f$ is in ${{\mathcal L}}$. To prove that it acts on $L^1(1,\infty)$ we should consider large values of $r$. At infinity $\phi(z)=c z+ \dots $ and $\phi'(z)=c+\dots$, hence $$\begin{aligned} \frac{r^{1-\frac{(k-1)t}{k}}}{|\phi(r^{1/k}e^{i\theta})|} |\phi'(r^{1/k}e^{i\theta})|^{t}\approx \frac{r^{1-\frac{(k-1)t}{k}} |c|^t}{|c|r^{1/k}}= \mathrm{const}\, r^{1-\frac{(k-1)t}{k}-\frac{1}{k}} \\ =\mathrm{const}\, r^{\frac{(k-1)(1-t)}{k}}. \end{aligned}$$ Thus the second integral is comparable (up to a universal constant) to $r^{\frac{(k-1)(1-t)}{k}}$, so it is bounded if and only if $t\ge 1$. Note that the assumption on the integral of $|\phi'|^t$ is just a bit stronger than $\beta_\phi(t)=0$. We restrict ourselves to building blocks that are smooth up to the boundary; for such building blocks this assumption is always true. The condition $t\ge 1$ is technical, due to the behavior at infinity, which should be irrelevant. Introducing a weight at infinity we could get rid of this assumption. Next we want to discuss how the eigenvalues of $P$ and $Q$ are related to the spectrum of the snowflake.
If $F$ is integrable then it is a solution of (\[eq\]) and using (\[cover\]) we can write $$\int_1^R F(r)\nu(r)d r = \int_1^R \frac{Q F (r)}{k^{1+\sigma}}\nu(r)d r\ge \int_1^R F(r) \nu(r) \frac{P\nu(r)}{\nu(r)k^{\sigma+1}}d r.$$ Suppose that $t$ is fixed. Let us fix a positive test function $\nu$. If $P\nu(r)> \nu(r)k^{\sigma+1}$ for all $r$ then we arrive at a contradiction; this means that $F(r)$ for this particular pair of $\tau$ and $\sigma$ cannot be integrable. Using this fact we can estimate $\bar\beta(t)$ from below. Hence any positive $\nu$ gives a lower bound on the spectrum. $$\label{betalog} \bar\beta(t)\ge \min_{1\le r \le R}\log\left(\frac{P\nu(r)}{\nu(r)}\right)/\log k.$$ Obviously, the best choice of $\nu$ is an eigenfunction of $P$ corresponding to the maximal eigenvalue. This proves the following lemma: Let $\lambda$ be the maximal eigenvalue of $P$ (on any interval $[1,R]$ such that $D_R \subset \psi^k(D_R)$); then $\bar\beta(t)\ge \log \lambda/\log k$. Fractal approximation {#sec:approximation} ===================== In this section we prove the fractal approximation by conformal snowflakes. Namely, we show that for any $t$ one can construct a snowflake with a building block which is smooth up to the boundary and with $\bar\beta(t)$ arbitrarily close to $B(t)$. The proof of this theorem is similar to the proof of the fractal approximation for standard snowflakes but it is less technical. \[thm:approximation\] For any $\epsilon$ and $t$ there are a building block $\phi\in\Sigma' \cap C^\infty(\{|z|\ge 1\})$ and a positive integer $k$ that define a snowflake with $\bar\beta(t)>B(t)-\epsilon$. We will use the following lemma. \[l:polygon\] For any $\epsilon>0$, $t\in {{\mathbb R}}$ there is $A>0$ such that for any $\delta>0$ there is a function $\phi\in\Sigma' \cap C^\infty $ such that $$\int \left|\phi'(re^{i\theta})\right|^t d \theta > A \br{ \frac{1}{\delta}}^{B(t)-\epsilon}$$ for $\delta>r-1$.
Moreover, the capacity of $\phi$ is bounded by a universal constant that does not depend on $\delta$. There is a function $f$ with $\beta_f(t)>B(t)-\epsilon$. Hence there is a constant $A$ such that $$\int \left| f'(r e^{i\theta}) \right|^t d\theta > A (r-1)^{-B(t)+2\epsilon}.$$ The only problem is that this function is not smooth up to the boundary. Set $\phi(z)=f(s z)$. Obviously, $\phi(z) \rightrightarrows f(z)$ as $s \to 1$. If we fix a scale $\delta$ then there is $s$ sufficiently close to $1$ such that $\int|\phi'/\phi|^t d \theta > A \delta^{-B(t)+2\epsilon}/2$. But for $r<1+\delta$ the integral cannot be smaller, by subharmonicity. It is easy to see that $${{\mathrm cap\:}}(f_n)<{{\mathrm cap\:}}(f)={{\mathrm cap\:}}(\phi)/(1-1/k)<2{{\mathrm cap\:}}(\phi),$$ hence ${{\mathrm cap\:}}(f_n)$ and $|f_n(z)|$ for $|z|<2$ are bounded by universal constants that depend on the capacity of $\phi$ only and do not depend on $k$. It also follows that $|K_k f_n(z)|<1+c/k$ for $|z|<2$ and $c$ depending on ${{\mathrm cap\:}}(\phi)$ only. Let us fix $t$ and let $\phi$ be a function from Lemma \[l:polygon\] for $\delta=c/k$. By $I(f,\delta)$ we denote $$\int_{-\pi}^\pi \left| f'(re^{i\theta}) \right|^t d \theta,$$ where $r=\exp(\delta)$. The $k$-root transform changes integral means in a simple way: $$I(K_k f,\delta/k)=\int \left|\frac{f'(r^k e^{i k\theta})}{f^{(k-1)/k}(r^k e^{i k\theta})}\right|^t r^{t(k-1)}d\theta.$$ As we mentioned before, the capacity of the snowflake is bounded by a universal constant, hence $|f|$ can be bounded by a universal constant. Thus $$I(K_k f,\delta/k)>\mathrm{const}\, I(f,\delta).$$ The function $f_{n+1}$ is a composition of a (random) function $\phi_\theta$ with $K_k f_n$.
The expectation of $I(f_{n+1},1/k^{n+1})$ conditioned on $f_n$ is $$\begin{aligned} {{\mathbb E}}\brb{I(f_{n+1},1/k^{n+1})\mid f_n}&=& \int \int |\phi_\theta'(K_k f_n(re^{i\xi}))|^t|(K_k f_n)'(re^{i\xi})|^t d\xi d \theta \\ &=&\int |(K_k f_n)'(re^{i\xi})|^t \int |\phi'(e^{-i\theta} K_k f_n(re^{i\xi}))|^t d \theta d \xi,\end{aligned}$$ where $r=\exp(1/k^{n+1})$. We know that $|K_k f_n(re^{i\xi})|-1<c/k$. By our choice of $\phi$ $$\int |\phi'(|K_k f_n(re^{i\xi})|e^{-i\theta})|^t d \theta> A \br{\frac{k}{c}}^{B(t)-\epsilon}.$$ So $$\begin{aligned} {{\mathbb E}}\brb{I(f_{n+1},1/k^{n+1})} &>& A \br{\frac{k}{c}}^{B(t)-\epsilon} {{\mathbb E}}\brb{I(K_k f_n,1/k^{n+1})} \\ &>&A \br{\frac{k}{c}}^{B(t)-\epsilon} \mathrm{const}\, {{\mathbb E}}\brb{ I(f_n,1/k^n)}.\end{aligned}$$ Applying this inequality $n$ times we obtain $${{\mathbb E}}\brb{I(f_n,1/k^n)}>\mathrm{ const}^n\, \br{\frac{k}{c}}^{n(B(t)-\epsilon)}.$$ So $$\frac{\log{{\mathbb E}}\brb{ I(f_n,1/k^n)}}{n\log k}>B(t)-2\epsilon$$ for sufficiently large $k$. This completes the proof. Appendix: example of an estimate {#sec:application} ================================ The main purpose of this section is to show that using conformal snowflakes it is not very difficult to find good estimates. In particular, this means that if one of the famous conjectures mentioned in the introduction is wrong, then it should be possible to find a counterexample. In this section we will give an example of a simple snowflake and estimate its spectrum at $t=1$. We could do essentially the same computations for other values of $t$, but $B(1)$ is of special interest because it is related to the coefficient problem and the Littlewood conjecture (see [@BeSmECM] for details). As a building block we use a very simple function: a straight slit map. We use the following scheme: first we define a building block, and this gives us the operator $P$. By (\[betalog\]) any positive function $\nu$ gives us an estimate on the spectrum.
To choose $\nu$, we find the first eigenvector of the discretized operator $P$ and approximate it by a rational function. We compute $P\nu$ using Euler’s quadrature formula and estimate the error term. The minimum of $P\nu/\nu$ gives us the desired estimate of $\beta(1)$. For $t=1$ we give a rigorous estimate of the error term in the computation of $P\nu/\nu$; for other values of $t$ we give approximate values (computed with less precision) without any estimates of the error terms. Single slit domain ------------------ We use straight slit functions. First we define the basic slit function $$\label{slit} \phi(z,l)=\phi_l(z)=\mu_2\br{\frac{\sqrt{\mu_1^2(z s)+l^2/(4l+4)}}{\sqrt{1+l^2/(4l+4)}}},$$ where $s$ is a constant close to $1$, and $\mu_1$ and $\mu_2$ are the Möbius transformation that maps ${{\mathbb D}}_-$ onto the right half-plane and its inverse: $$\begin{aligned} \mu_1(z)=\frac{z-1}{z+1},\\ \mu_2(z)=\frac{z+1}{z-1}.\end{aligned}$$ We also need the inverse function $$\psi(z,l)=\psi_l(z)=\phi(z,l)^{-1}.$$ The function $\phi$ first maps ${{\mathbb D}}_-$ onto the right half-plane, then we cut off a straight horizontal slit starting at the origin and map it back. The image $\phi_l({{\mathbb D}}_-)$ is ${{\mathbb D}}_-$ with a horizontal slit starting from $1$. The length of the slit is $l$. The derivative of a slit map has singularities at points that are mapped to $1$. But if we take $s>1$ then these singularities are not in ${{\mathbb D}}_-$. We set $s=1.002$. We study the snowflake generated by $\phi(z)=\phi_{73}(z)$ with $k=13$ (the numbers $13$ and $73$ were found experimentally). Figure \[pic2\] shows the image of the unit circle under $f_3$. Figure \[pic3\] shows the image of a small arc under $\bar f_3$ and three Green’s lines. First we have to find the critical radius $R$ such that $D_R\subset \psi^k(D_R)$.
By the symmetry of $\phi$, the critical radius is the only positive solution of $$\psi^k(x)=x.$$ This equation cannot be solved explicitly, but we can solve it numerically (we do not care about the error term since we can take any greater value of $R$). The approximate value of $R$ is $ 76.1568$. To be on the safe side we fix $R=76.2$. The disc occupies just a small portion of $\psi^k(D_R)$, which means that there is a large margin in the inequality (\[cover\]). By (\[betalog\]) any positive function $\nu$ gives a lower bound on the spectrum. And this estimate is sharp when $\nu$ is the main eigenfunction of $P$. So we have to find an “almost” eigenfunction of $P$. Almost eigenfunction of operator P ---------------------------------- Even for such a simple building block we cannot find the eigenfunction explicitly. Instead we look for some sort of approximation. The first idea is to replace the integral operator $P$ by its discretized version. Here we use the simplest and quite crude approximation. Choose sufficiently large $N$ and $M$. Let $r_n=1+(R-1)n/N$ and $\theta_m= 2\pi m /M$. Instead of $P$ we have an $N\times N$ matrix with elements $$P_{n,n'}=\sum_{m} r_n^{1-t(k-1)/k}\frac{|\phi'(r_n^{1/k}e^{i\theta_{m}})|^t} {|\phi(r_n^{1/k}e^{i\theta_{m}})|M},$$ where the summation is over all indices $m$ such that $r_{n'}$ is the nearest point to $|\phi(r_n^{1/k}e^{i\theta_m})|$. This defines the discretized operator $P_N$. Let $\lambda_N$ and $V_N$ be the main eigenvalue and the corresponding eigenvector. A priori, $\lambda_N$ should converge to $k^{\beta(t)}$, but this is not easy to prove and it is not clear how to find the rate of convergence. But this crude estimate gives us a fast test of whether the pair $\phi$ and $k$ defines a snowflake with a large spectrum (this is how we found $k=13$ and $l=73$). Instead of proving convergence of $\lambda_N$ and estimating the error term we will study $V_N$, which is the discrete version of the eigenfunction.
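The computation of the critical radius can be sketched numerically as follows (a non-rigorous sketch, not the authors' code; on the positive real axis the equation $\psi^k(x)=x$ amounts to $|\phi(x^{1/k})|=x$, which we solve by bisection, with $s=1.002$, $l=73$, $k=13$ as in the text):

```python
import math

s, l, k = 1.002, 73.0, 13
c = l * l / (4.0 * l + 4.0)          # the constant l^2/(4l+4) from the slit map

def phi_real(x):
    """The slit map phi(x, 73) on the real axis x > 1 (all values are real here)."""
    w = (x * s - 1.0) / (x * s + 1.0)          # mu_1(x s), lies in (0, 1)
    u = math.sqrt(w * w + c) / math.sqrt(1.0 + c)
    return (u + 1.0) / (u - 1.0)               # mu_2(u)

def h(x):
    """h changes sign at the critical radius: |phi(x^(1/k))| = x."""
    return abs(phi_real(x ** (1.0 / k))) - x

a, b = 2.0, 200.0                              # h(a) > 0 > h(b)
for _ in range(60):                            # bisection
    m = 0.5 * (a + b)
    a, b = (m, b) if h(m) > 0 else (a, m)

R = 0.5 * (a + b)
print(R)                                       # close to 76.1568 from the text
```

The bisection bracket $[2,200]$ is an arbitrary choice wide enough to contain the root; any larger value of $R$ would also be admissible, as noted above.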
We approximate $V_N$ by a rational function of relatively small degree (in our case $5$), or by any other simple function. In our case we find the rational function by linear least-squares fitting. Either way we get a nice and simple function $\nu$ which is supposed to be close to the eigenfunction of $P$. We would like to note that the procedure by which we obtained $\nu$ is highly non-rigorous, but that does not matter: as soon as we have some explicit function $\nu$, we can plug it into $P$ and get a rigorous estimate of $\beta$. In our case we take $N=1000$ and $M=500$. The logarithm of the first eigenvalue is $0.2321$ (it is $0.23492$ if we take $s=1$). Figure \[eigenvectors\] shows a plot of the coordinates of the first eigenvector.

![image](eigenvector.eps){width="8cm"}

We rescale these data from $[1,1000]$ to the interval $[1,R]$ and approximate them by a rational function $\nu$: $$\begin{aligned} \nu(x)=(7.1479+8.9280 x - 0.07765 x^2+ 1.733 \times 10^{-3} x^3 - \\ 2.0598 \times 10^{-5} x^4 + 9.5353 \times 10^{-8}x^5)/( 2.7154+ 13.2845 x). \end{aligned}$$ Figure \[nu\] shows the plot of $\nu$.

![An “almost” eigenfunction[]{data-label="nu"}](nu.eps){width="8cm"}

Estimates of derivatives
------------------------

To estimate $\beta(1)$ we have to integrate $\nu(|\phi|)|\phi'|/|\phi|$. It is easy to see that the main contribution to the derivative is given by the factor $|\phi'|/|\phi|$. Assume for a while that $s=1$. The fraction $|\phi'(z)/\phi(z)|$ can be written as $$\label{fraction} \frac{|z-1|}{|z|\sqrt{|(z-z_1)(z-z_2)|}},$$ where $$\begin{aligned} z_1=\frac{-5033-292 i \sqrt{74}}{5625}\approx -0.894756 - 0.446556 i, \\ z_2=\frac{-5033+292 i \sqrt{74}}{5625}\approx -0.894756 + 0.446556 i. \end{aligned}$$ The singular points $z_1$ and $z_2$ are mapped to $1$, and $\phi'$ has a square-root-type singularity at these points. They will play an essential role in all further calculations.
We introduce the notation $z_1=x+i y$ and $z_2=x-i y$; for $z$ we will use polar coordinates $z=r e^{i\theta}$. We compute the integral of $f=|\nu(\phi)\phi'/\phi|$ using the Euler quadrature formula based on the trapezoid quadrature formula $$\int_0^{2\pi} f(x)dx \approx S_\epsilon^n(f)= S_\epsilon(f) -\sum_{k=1}^{n-1} \gamma_{2k}\epsilon^{2k}\br{f^{(2k-1)}(2\pi)-f^{(2k-1)}(0)},$$ where $S_\epsilon(f)$ is the trapezoid quadrature formula with step $\epsilon$ and $\gamma_k=B_k/k!$, where $B_k$ is the $k$-th Bernoulli number. The error term in the Euler formula is $$-\gamma_{2n}\max f^{(2n)}\epsilon^{2n}2\pi. \label{error1}$$ In our case the function $f$ is periodic, so the boundary terms with the derivatives vanish. This means that we can use (\[error1\]) for any $n$ as an estimate of the error in the trapezoid quadrature formula. The function $\phi$ has two singular points, $z_1$ and $z_2$, and the derivative of $\phi$ blows up near these points; this is why we introduce the scaling factor $s$. We can write a power series of $\phi$ near $z_1$ (near $z_2$ the situation is the same by symmetry): $$\phi^{(k)}=c_{-k}(z-z_1)^{-k+1/2}+c_{-k+1}(z-z_1)^{-k+3/2}+\dots+c_0+\dots\ .$$ This means that for $s>1$ the derivative can be estimated by $$|c_{-k}|(s-1)^{-k+1/2}+|c_{-k+1}|(s-1)^{-k+3/2}+\dots\ .$$ The tail of this series can be controlled because the series converges in a disc of fixed radius (the radius is $|z_1+1|$), so the sum of the tail can be estimated by the sum of a geometric progression. Writing these power series explicitly we find (for $s=1.002$) $$\begin{aligned} {3} |\phi'| & < 55, & \quad |\phi''| & < 11800, & \quad |\phi^{(3)}| &< 8.69\times 10^6,\\ |\phi^{(4)}| & < 1.08 \times 10^{10}, &\quad |\phi^{(5)}| & < 1.90 \times 10^{13}, &\quad |\phi^{(6)}| & < 4.25 \times 10^{16}, \\ |\phi^{(7)}| & < 1.17 \times 10^{20}.
&\quad & & &\end{aligned}$$ The maximal values of the first six derivatives of $\nu$ are $$\begin{aligned} {3} |\nu'|&<0.28, &\quad |\nu''|&<0.45, &\quad |\nu^{(3)}|&<1.12, \\ |\nu^{(4)}|&<3.69, &\quad |\nu^{(5)}|&<15.3, &\quad |\nu^{(6)}|&<76.2.\end{aligned}$$ The derivative $\partial_\theta|\phi|$ can be estimated by $r |\phi'|$. We can write the sixth derivative of $\nu(|\phi|)|\phi'|/|\phi|$ as a rational function of the partial derivatives of $|\phi|$, $|\phi'|$, and $\nu$. Then we apply the triangle inequality and plug in the above estimates. Finally we have $$\left|\frac{\partial^6}{\partial \theta^6}\br{\frac{\nu(|\phi|)|\phi'|}{|\phi|}}\right|<1.65\times 10^{21}.$$ Plugging $\epsilon=\pi/5000$ and the estimate on the sixth derivative into (\[error1\]), we find that the error term in this case is less than $0.0034$. Next we have to estimate the modulus of continuity with respect to $r$. First we calculate $$\partial_r|z-(a+b i)|^2=2 r - 2 (a\cos\theta+b\sin\theta).$$ Applying this formula several times we find $$\begin{aligned} \partial_r\br{\frac{|\phi'|}{|\phi|}}= \partial_r \br{\frac{|z-1|}{r\sqrt{|z-z_1||z-z_2|}}} \le \partial_r \br{\frac{|z-1|}{\sqrt{|z-z_1||z-z_2|}}} \\ = \frac{r-\cos\theta}{|z-1|S}-\frac{r-x\cos\theta-y\sin\theta}{2|z-z_1|^2S}|z-1|- \frac{r-x\cos\theta+y\sin\theta}{2|z-z_2|^2S}|z-1|, \end{aligned}$$ where $S=\sqrt{|z-z_1||z-z_2|}$. Factoring out $$\frac{1}{2|z-1|\cdot|z-z_1|^{5/2}|z-z_2|^{5/2}}$$ we get $$\begin{aligned} 2(r-\cos\theta)|z-z_1|^2|z-z_2|^2 -(r-x\cos\theta-y\sin\theta)|z-1|^2|z-z_2|^2 \\ -(r-x\cos\theta+y\sin\theta)|z-1|^2|z-z_1|^2 \\ =-2(r^2-1)(2\cos^2\theta r (x-1)+\cos\theta (r^2+1)(x-1)+2 r y^2). \end{aligned}$$ This is a quadratic function with respect to $\cos\theta$. Taking the values of $x$ and $y$ into account we can write it as $$\cos^2\theta+\cos\theta\br{r+\frac{1}{r}}\frac{1}{2}-\frac{592}{5625}~.$$ This quadratic function has two real roots. Their sum is $-(r+1/r)/2\le-1$, hence one root is definitely less than $-1$.
The product of the roots is a small negative number, which means that the second root is positive and less than $1$. A simple calculation shows that this root decreases as $r$ grows, so the corresponding value of $\theta$ increases. Hence it attains its maximal value at $r=1.4$, and the maximal value is at most $1.48$. This gives us that the radial derivative of $|\phi'|/|\phi|$ can be positive only on the arc $\theta\in[-1.48,1.48]$. By subharmonicity it attains its maximum on the boundary of $\{z\mid 1<r<1.4,\ -1.48<\theta<1.48\}$. It is not very difficult to check that the maximum is at $z=1.4$ and is equal to $0.36$. Let $$I(r)=\int_{-\pi}^\pi \nu(|\phi(r^{1/k}e^{i\theta})|) \left|\frac{\phi'(r^{1/k}e^{i\theta})}{\phi(r^{1/k}e^{i\theta})}\right| \frac{d \theta}{2\pi}.$$ The derivative is $$I'(r)=\frac{1}{k r^{1-1/k}}\br{\int_{-\pi}^\pi \nu'(|\phi|)\partial_r |\phi| \frac{|\phi'|}{|\phi|}\frac{d \theta}{2\pi}+ \int_{-\pi}^\pi \nu(|\phi|)\partial_r\br{\frac{|\phi'|}{|\phi|}}\frac{d \theta}{2\pi}}.$$ By symmetry the first integral is zero. In the second integral $$\nu(|\phi|)\partial_r\br{\frac{|\phi'|}{|\phi|}}$$ can be positive only when $\theta\in[-1.48,1.48]$, and even in this case it is bounded by $0.36/(r^{1-1/k} k 2\pi)$. Hence $$I'(r)<2 \cdot 1.48 \cdot 0.36/(r^{1-1/k}k 2\pi)<0.0131 r^{1/k-1}.$$ If we compute the values $I(r_1)$ and $I(r_2)$ (with precision $0.0034$), then the minimum of $P(\nu)/\nu$ on $[r_1,r_2]$ is at least $$r_1^{1/k}(\min\{I(r_1),I(r_2)\}-0.0034-0.0131(r_2-r_1) r^{1/k-1})/\nu(r_1). \label{int}$$

![Values of $I(r)$[]{data-label="integral"}](int.eps)

![Plot of $\log(P\nu/\nu)/\log k$[]{data-label="log"}](log.eps)

We take $3000$ equidistributed points on $[1,R]$ and compute $I(r)$ at these points. The data for $I(r)$ are shown in Figure \[integral\]. Applying the error estimate (\[int\]) we find a rigorous lower bound on $P\nu/\nu$.
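The bookkeeping in (\[int\]) is simple enough to automate: on each cell $[r_1,r_2]$ of the grid, one subtracts the quadrature error $0.0034$ and the drift bound $0.0131(r_2-r_1)r^{1/k-1}$ from the smaller of the two computed values of $I$. A schematic helper is sketched below; the profiles for $I$ and $\nu$ are placeholders rather than the computed data, and we take $r=r_1$ in the drift term:

```python
import numpy as np

def cell_lower_bound(r1, r2, I1, I2, nu_r1, k=13,
                     quad_err=0.0034, drift_c=0.0131):
    """Lower bound for P(nu)/nu on [r1, r2], following estimate (int):
    grid values of I minus quadrature error minus the variation bound."""
    drift = drift_c * (r2 - r1) * r1 ** (1.0 / k - 1.0)
    return r1 ** (1.0 / k) * (min(I1, I2) - quad_err - drift) / nu_r1

# placeholder profiles, just to exercise the bookkeeping
r = np.linspace(1.0, 76.2, 3000)
I = 2.0 + 0.001 * r          # stand-in for the computed integrals I(r)
nu = 1.0 + 0.01 * r          # stand-in for nu evaluated on the grid
bounds = [cell_lower_bound(r[i], r[i + 1], I[i], I[i + 1], nu[i])
          for i in range(len(r) - 1)]
lower = min(bounds)          # global lower bound over [1, R]
```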
The minimum of $P\nu/\nu$ is at least $1.8079$, which means that $$\beta(1)>0.2308.$$ Figure \[log\] shows the plot of $\log(P\nu/\nu)/\log k$.

Estimates of spectrum for other values of $t$
---------------------------------------------

We also computed lower bounds on the spectrum of the same snowflake for other values of $t$. Below we give the base-$13$ logarithms of the eigenvalues of the discretized operator $P$ ($N=1000$, $M=400$), lower bounds on $\log(P\nu/\nu)/\log k$, the values $t^2/4$, and upper bounds on the universal spectrum from [@HeSh; @MaPo]. For values of $t$ close to zero we cannot find a function that gives us a positive lower bound.

   $t$    $\log_{13} \lambda$   $\beta(t)>$   $t^2/4$   $\beta(t)<$
  ------ --------------------- ------------- --------- -------------
   -2.0   0.6350                0.56          1         1.218
   -1.8   0.5348                0.48          0.81      1.042
   -1.6   0.4395                0.39          0.64      0.871
   -1.4   0.3502                0.31          0.49      0.706
   -1.2   0.2678                0.220         0.36      0.549
   -1.0   0.1936                0.152         0.25      0.403
   -0.8   0.1290                0.0925        0.16      0.272
   -0.6   0.0756                0.0430        0.09      0.159
   -0.4   0.0353                0.0050        0.04      0.072
   -0.2   0.0100                0             0.01      0.0179
    0.2   0.0105                0             0.01      0.031
    0.4   0.0387                0.0280        0.04      0.184
    0.6   0.0858                0.0795        0.09      0.276
    0.8   0.1515                0.1505        0.16      0.368
    1.0   0.234                 0.234         0.25      0.460
    1.2   0.334                 0.332         0.36      0.613
    1.4   0.448                 0.442         0.49      0.765
    1.6   0.576                 0.570         0.64      0.843
    1.8   0.713                 0.698         0.81      0.921
    2.0   0.859                 0.821         1         1
---
abstract: 'Using heavy quark effective theory a factorized form for the inclusive production rate of a heavy meson can be obtained, in which the nonperturbative effect related to the heavy meson is characterized by matrix elements defined in the heavy quark effective theory. Using this factorization, predictions for the full spin density matrices of spin-1 and spin-2 mesons can be obtained, and they are characterized by only one coefficient representing the nonperturbative effect. Predictions for spin-1 heavy mesons are compared with experiments performed at $e^+e^-$ colliders in the energy range from $\sqrt{s}=10.5$GeV to $\sqrt{s}=91$GeV; complete agreement is found for the $D^*$- and $B^*$-mesons. There are distinct differences from the existing approach and they are discussed.'
address: |
    Institute of Theoretical Physics,\
    Academia Sinica,\
    P.O.Box 2735, Beijing 100080, China\
    e-mail: majp@itp.ac.cn
author:
- 'J.P. Ma'
title: Spin Alignment of Heavy Meson Revisited
---

Heavy quark effective theory (HQET) is a powerful tool to study the properties of heavy hadrons which contain one heavy quark $Q$[@HQET; @Review]. HQET allows such a study to start directly from QCD. It is widely used in studies of decays of heavy hadrons; in comparison, only a few works have used it to study the production of heavy hadrons. In 1994 Falk and Peskin used HQET to predict the spin alignment of a heavy hadron in its inclusive production[@FP]. In this talk we will reexamine the subject and restrict ourselves to the case of a spin-1 meson. A spin-1 heavy meson $H^*$ is a bound state of a heavy quark $Q$ and a system of light degrees of freedom in QCD, like gluons and light quarks. In the work[@FP] the total angular momentum $j$ of the light system is taken as $1/2$. In the heavy quark limit, the orbital angular momentum of $Q$ can be neglected and only the spin of $Q$ contributes to the total spin of $H^*$.
Once the heavy quark $Q$ is produced, it will combine with the light system to form $H^*$. Because parity is conserved in QCD, the probabilities for the light system to have positive and negative helicity are the same. Therefore, one can predict the probabilities for production of $H^*$ with a left-handed heavy quark $Q$ as: $$P(\bar B^*(\lambda=-1)):P(\bar B^*(\lambda=0)): P(\bar B^*(\lambda=1)) =\frac{1}{2} : \frac{1}{4} :0,$$ where $\lambda$ is the helicity of $H^*$. These results are easily derived; however, three questions or comments can be raised about them:

(a) In general the spin information is contained in a spin density matrix, and the probabilities are its diagonal part. The question is what happens with the non-diagonal part. It should be noted that this part is also measured in experiment.

(b) It is possible that the light system has total angular momentum $j=3/2$ and still forms $H^*$. One may argue that the production of such a system is suppressed. Is it possible to derive the full spin density matrix without the assumption $j=1/2$?

(c) How can we systematically add corrections to the approximation which leads to the results in Eq.(1)?

To respond to these questions, let us look at an inclusive production of $H^*$ in detail. In its inclusive production a heavy meson is formed with a heavy quark $Q$ and other light degrees of freedom; the light degrees of freedom can be a system of light quarks and gluons. Because of its large mass $m_Q$, the heavy quark is produced by interactions at short distance. Therefore the production can be studied with perturbative QCD. The heavy quark, once produced, will combine with light degrees of freedom to form a hadron. The formation is a long-distance process in which momentum transfers are small, hence the formed hadron will carry most of the momentum of the heavy quark.
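As an independent cross-check of Eq.(1): the ratios are just squared Clebsch–Gordan coefficients for coupling the heavy-quark spin (fixed at helicity $-1/2$) to a light system with $j=1/2$ whose two helicities are equally probable. This can be verified mechanically, e.g. with sympy:

```python
from sympy import Rational, S
from sympy.physics.quantum.cg import CG

half = S.Half
m_Q = -half                      # left-handed heavy quark
prob = {}
for lam in (-1, 0, 1):           # helicity of H*
    p = S.Zero
    for m_j in (-half, half):    # light system j = 1/2, both helicities equally likely
        amp = CG(half, m_Q, half, m_j, 1, lam).doit()
        p += Rational(1, 2) * amp**2
    prob[lam] = p

# prob reproduces the ratios 1/2 : 1/4 : 0 of Eq.(1)
```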
The above discussion implies that the production rate can be factorized: the perturbative part is for the production of a heavy quark, while the nonperturbative part is for the formation. For the nonperturbative part an expansion in the inverse of $m_Q$ can be performed systematically in the framework of HQET. This type of factorization was first used for parton fragmentation into a heavy hadron[@Ma]. In this talk we will not discuss the factorization in detail; the details can be found in [@Ma1]. We directly give our results and make a comparison with experiment. It should be noted that the factorization can be performed for any inclusive production of $H^*$. Because most experiments to measure spin alignment are performed at $e^+e^-$ colliders, we present the results for the inclusive production at $e^+e^-$ colliders. We consider the process $$e^+({\bf p})+e^{-}(-{\bf p})\to H^*({\bf k}) +X,$$ where the three-momenta are given in the brackets. In this process we assume that the initial beams are unpolarized. We denote the helicity of $H^*$ as $\lambda$, with $\lambda=-1,0,1$. All information about the polarization of $H^*$ is contained in a spin density matrix, which may be unnormalized or normalized; we will call these the unnormalized and normalized spin density matrices, respectively. The unnormalized spin density matrix can be defined as $$R(\lambda, \lambda',{\bf p},{\bf k}) = \sum_X \langle H^*(\lambda)X\vert {\cal T}\vert e^+e^-\rangle \cdot \langle H^*(\lambda')X\vert {\cal T}\vert e^+e^-\rangle^*,$$ where the conservation of total energy-momentum and the spin average of the initial state are implied. ${\cal T}$ is the transition operator.
The cross-section with a given helicity $\lambda$ is given by: $$\sigma(\lambda) = \frac{1}{2s} \int \frac {d^3k}{(2\pi)^3} R(\lambda, \lambda,{\bf p},{\bf k}).$$ From Eq.(3) the normalized spin density matrix is defined by $$\rho_{\lambda\lambda'}({\bf p},{\bf k}) = \frac {R(\lambda, \lambda',{\bf p},{\bf k})} {\sum_\lambda R(\lambda, \lambda,{\bf p},{\bf k})}.$$ It should be noted that it is the normalized spin density matrix that is measured in experiment. It is straightforward to perform the factorization mentioned above for the unnormalized spin density matrix in the rest frame of $H^*$, which is related to the moving frame only by a Lorentz boost. In the rest frame we can define a creation operator for $H^*$: $$\vert H^*(\lambda)\rangle =a^\dagger(\lambda) \vert 0\rangle= \bfeps(\lambda)\cdot{\bf a}^\dagger \vert 0 \rangle ,$$ where $\bfeps(\lambda)$ is the polarization vector. In the rest frame the field $h_v$ of the heavy quark $Q$ in HQET has two non-zero components. We denote them as: $$h_v(x)=\left(\begin{array}{c} \psi(x) \\ 0 \end{array}\right).$$ With these notations we define two operators: $$O(H^*) = \frac{1}{6}{\rm Tr} \psi a_i^\dagger a_i \psi^\dagger, \ \ O_s(H^*) = \frac{i}{12} {\rm Tr} \sigma_i \psi a^\dagger_j a_k \psi^\dagger \varepsilon_{ijk},$$ where $\varepsilon_{ijk}$ is the totally antisymmetric tensor and $\sigma_i\ (i=1,2,3)$ are the Pauli matrices.
The results for the unnormalized spin density matrix read: $$\begin{aligned} R(\lambda, \lambda',{\bf p},{\bf k}) &=&\frac{1}{3} a({\bf p}, {\bf k}) \langle 0 \vert O(H^*) \vert 0 \rangle \bfeps^*(\lambda) \cdot\bfeps(\lambda') \nonumber\\ &+& \frac{i}{3} {\bf b}({\bf p},{\bf k})\cdot [\bfeps^*(\lambda)\times\bfeps(\lambda')] \cdot \langle 0 \vert O_s(H^*) \vert 0 \rangle +{\cal O}(m_Q^{-2}).\end{aligned}$$ The quantities $a({\bf p}, {\bf k})$ and ${\bf b}({\bf p},{\bf k})$ characterize the spin density matrix of the heavy quark $Q$ produced in the inclusive process $$e^+({\bf p})+e^{-}(-{\bf p})\to Q({\bf k},{\bf s}) +X,$$ where ${\bf s}$ is the spin vector of $Q$ in its rest frame, and the rest frame is related to the moving frame only by a Lorentz boost. The unnormalized spin density matrix $R_Q({\bf s},{\bf p},{\bf k})$ of $Q$ can be defined by replacing $H^*(\lambda)$ with $Q({\bf k},{\bf s})$ in Eq.(3). This matrix can be calculated in perturbation theory because of the heavy quark mass. The result in general takes the form $$R_Q({\bf s},{\bf p},{\bf k}) =a({\bf p},{\bf k}) +{\bf b}({\bf p},{\bf k}) \cdot {\bf s},$$ where $a({\bf p},{\bf k})$ and ${\bf b}({\bf p},{\bf k})$ are the same as in Eq.(9). The physical interpretation of Eq.(9) is the following: the coefficients $a({\bf p},{\bf k})$ and ${\bf b}({\bf p},{\bf k})$ characterize the production of $Q$ and can be calculated with perturbative QCD, while the two matrix elements defined in HQET characterize the nonperturbative effects of the formation of $H^*$ with the heavy quark $Q$.
With Eq.(9) we obtain: $$\rho({\bf p},{\bf k}) =\frac{1}{3}\left( \begin{array}{ccc} 1+P_3, & -P_+, & 0 \\ -P_-, & 1, & -P_+ \\ 0, &-P_-, &1-P_3 \end{array}\right),$$ with $$P_3= \frac{b_3(\bp,\bk)}{a(\bp,\bk)} \cdot \frac{\langle 0 \vert O_s(H^*) \vert 0 \rangle} {\langle 0 \vert O(H^*) \vert 0 \rangle},\ \ \ P_\pm = \frac{b_1(\bp,\bk)\pm ib_2(\bp,\bk)}{\sqrt{2} a(\bp,\bk)} \cdot \frac{\langle 0 \vert O_s(H^*) \vert 0 \rangle} {\langle 0 \vert O(H^*) \vert 0 \rangle}.$$ The indices of the matrix in Eq.(12) run from $-1$ to $1$. Without knowing the coefficients and the matrix elements we can already predict that $\rho_{00}=1/3$ and $\rho_{1-1}=\rho_{-11}=0$. With these results we are in a position to compare with experiment. The experiments to measure the polarization of $B^*$ were performed at LEP with $\sqrt s =M_Z$ by different experimental groups. To measure the polarization the dominant decay $B^*\to \gamma B$ is used, where the polarization of the photon is not observed. Because parity is conserved and the distribution of the angle between the moving directions of $\gamma$ and of $B^*$ is measured, one can only determine the matrix element $\rho_{00}$. If we denote by $\theta$ the angle between the moving directions of $B^*$ and of $\gamma$ in the $B^*$ rest frame and by $\phi$ the azimuthal angle of $\gamma$, then the angular distribution is given by $W_{B^*\to B\gamma}(\theta,\phi)\propto \sum_{\lambda\lambda'} \rho_{\lambda\lambda'}(\delta_{\lambda\lambda'}-Y_{1\lambda}(\theta,\phi) Y^*_{1\lambda'}(\theta,\phi))$. Integrating over $\phi$ and using our result $\rho_{00}=1/3$, the distribution in $\theta$ is isotropic. In experiment one indeed finds that the distribution is isotropic in $\theta$.
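The $\phi$-integration removes the off-diagonal terms, and with $\rho_{00}=1/3$ (hence $\rho_{11}+\rho_{-1-1}=2/3$) the surviving combination of $|Y_{1\lambda}|^2$ is independent of $\theta$. A short numerical confirmation, using the explicit $|Y_{1m}|^2$; the split between $\rho_{11}$ and $\rho_{-1-1}$ below is arbitrary and chosen only for illustration:

```python
import numpy as np

theta = np.linspace(0.0, np.pi, 181)
# |Y_{1m}(theta, phi)|^2 -- independent of phi
Y2 = {
    0:  3.0 / (4.0 * np.pi) * np.cos(theta) ** 2,
    1:  3.0 / (8.0 * np.pi) * np.sin(theta) ** 2,
    -1: 3.0 / (8.0 * np.pi) * np.sin(theta) ** 2,
}

# any diagonal density matrix with rho_00 = 1/3 and unit trace
rho = {0: 1.0 / 3.0, 1: 0.45, -1: 1.0 - 1.0 / 3.0 - 0.45}
W = sum(rho[m] * Y2[m] for m in (-1, 0, 1))   # phi-integrated distribution
# W is constant in theta, i.e. the distribution is isotropic
```

The same cancellation happens for $W_{B^*\to B\gamma}$, since the extra $\delta_{\lambda\lambda'}$ term is isotropic anyway.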
The experimental results at $\sqrt s =M_Z$ are[@DB; @AB; @OB]: $$\begin{aligned} \rho_{00} &=& 0.32\pm 0.04\pm0.03,\ \ \ \ {\rm DELPHI}, \nonumber \\ \rho_{00}&=&0.33\pm0.06\pm0.05,\ \ \ \ {\rm ALEPH}, \nonumber \\ \rho_{00}&=&0.36\pm0.06\pm0.07,\ \ \ \ {\rm OPAL}.\end{aligned}$$ These results agree well with our prediction $\rho_{00}=1/3$. The polarization measurement for the $D^*$-meson has been done at different $\sqrt s$; in some experiments the non-diagonal part of the spin density matrix has also been measured, through the azimuthal angular distribution in $D^*$ decay, where the decay mode into two pseudo-scalars, i.e., $D^*\to D\pi$, is used. Denoting by $\theta$ the angle between the moving directions of $D^*$ and of $\pi$ in the $D^*$ rest frame and by $\phi$ the azimuthal angle of $\pi$, the angular distribution of $\pi$ is given by $W_{D^*\to D\pi}(\theta,\phi)\propto \sum_{\lambda\lambda'} \rho_{\lambda\lambda'}Y_{1\lambda}(\theta,\phi) Y^*_{1\lambda'}(\theta,\phi)$. Integrating over $\phi$ and using our result $\rho_{00}=1/3$, the distribution in $\theta$ is again isotropic. The experimental results are summarized in Table 1 and are also partly summarized in [@THK]. From Table 1 we can see that the $\rho_{00}$ measured by all experimental groups is close to the prediction $\rho_{00}=\frac{1}{3}$. The most precise result is obtained by CLEO; its deviation from the prediction is $2\%$. The largest deviation, $20\%$, is in the result obtained by OPAL at $\sqrt s =91$GeV. In general, $\rho_{00}$ depends on the energy of $H^*$. Our results give that $\rho_{00}$ is a constant in the heavy quark limit, i.e., the energy dependence is suppressed by $m_Q^{-2}$. In experiment only a very weak energy dependence is observed, e.g., in the CLEO results[@CLEO].
From our results, $\rho_{1-1}$ is exactly zero in the heavy quark limit. The results from TPC and from HRS are consistent with this, while a nonzero value is obtained by OPAL, with a $3\sigma$ deviation from zero. These deviations may be explained by effects of higher orders in $m_c^{-1}$; these effects are expected to be substantial, because $m_c$ is not so large. It is interesting to note that only the results from OPAL at $\sqrt s =91$GeV show large deviations from our predictions, while the results from other groups agree well with them. At $\sqrt s =10.5{\rm GeV\ or\ } 29$GeV, the effect of $Z$-boson exchange can be neglected, hence parity is conserved. We obtain $\rho_{10}=0$. This prediction is also in agreement with the experimental results obtained by TPC and by HRS.

[**Table 1**]{}. Experimental Results for $D^*$

  Collaboration   $\sqrt s $ in GeV   Results
  --------------- ------------------- -----------------------------------
  CLEO[@CLEO]     10.5                $\rho_{00}=0.327\pm0.006$
  HRS[@HRS]       29                  $\rho_{00}=0.371\pm0.016$
                                      $\rho_{1-1}=0.04\pm0.03$
                                      $\rho_{10}=0.00\pm0.01$
  TPC[@TPC]       29                  $\rho_{00}=0.301\pm0.042\pm0.007$
                                      $\rho_{1-1}=0.01\pm0.03\pm0.00$
                                      $\rho_{10}=0.03\pm0.03\pm0.00$
  SLD[@SLD]       91                  $\rho_{00}=0.34\pm0.08\pm0.13$
  OPAL[@OB]       91                  $\rho_{00}=0.40\pm0.02\pm0.01$
                                      $\rho_{1-1}=-0.039\pm0.014$

Since our results are derived without knowing the total angular momentum $j$ of the light degrees of freedom in the heavy meson, their agreement with experiment cannot be used to extract information about $j$ from the experimental data above and in Table 1, although $\rho_{00}=1/3$ can also be obtained by taking $j=1/2$. One way to extract $j$ might be to measure the difference $\rho_{11}-\rho_{-1-1}$, but this seems not possible, because the polarization of $H^*$ is measured through its parity-conserving decay and the polarization of the decay products is not observed in experiment.
In the heavy quark limit, the nondiagonal elements $\rho_{1-1}$ and $\rho_{-11}$ are zero, while the other nondiagonal matrix elements are nonzero if parity is not conserved, even though the initial state is unpolarized. At higher orders in $m_Q^{-1}$ this can change; e.g., $H^*$ can have a tensor polarization. The factorization can also be done for inclusive productions of a spin-2 meson; the results can be found in [@Ma1]. Experimentally only the spin alignment of $D_2^*(2460)$ has been measured, with large errors, and the experimental results seem not to be in agreement with the predictions. The reason may be large corrections at higher orders in $m_c^{-1}$. To summarize: using the approach of QCD factorization and employing HQET, we obtain predictions for the full spin density matrices of spin-1 and spin-2 heavy mesons. The leading order predictions for a spin-1 meson agree well with experiment. Within this approach the three questions asked before are answered. Although we have given in this talk detailed predictions for inclusive production of a spin-1 heavy meson at an $e^+e^-$ collider, our approach can easily be generalized to other inclusive productions, and testable predictions can be made without a detailed calculation; for example, in inclusive production of $B^*$ at an electron-hadron or a hadron-hadron collider we always have the prediction $\rho_{00}=1/3$ and $\rho_{-11}=\rho_{1-1}=0$ in the heavy quark limit.

[0]{} N. Isgur and M.B. Wise, Phys. Lett. B232 (1989) 113, ibid B237 (1990) 527 E. Eichten and B. Hill, Phys. Lett. B234 (1990) 511 B. Grinstein, Nucl. Phys. B339 (1990) 253 H. Georgi, Phys. Lett. B240 (1990) 447 M. Neubert, Phys. Rept. C245 (1994) 259 M. Shifman, Lectures given at the Theoretical Advanced Study Institute [*QCD and Beyond*]{}, University of Colorado, June 1995, hep-ph/9510377 A.F. Falk and M.E. Peskin, Phys. Rev. D49 (1994) 3320 J.P. Ma, Nucl. Phys. B506 (1997) 329 J.P. Ma, hep-ph/0111237, to appear in Nucl. Phys. B. P. Abreu et al., DELPHI Collaboration, Z. Phys. C68 (1995) 353 D. Buskulic et al., ALEPH Collaboration, Z. Phys. C69 (1993) 393 K. Ackerstaff et al., OPAL Collaboration, Z. Phys. C74 (1997) 437 T.H. Kress, hep-ex/0101047, to be published in the proceedings of the 30th International Symposium on Multiparticle Dynamics (ISMD 2000) G. Brandenburg et al., CLEO Collaboration, Phys. Rev. D58 (1998) 052002 S. Abachi et al., HRS Collaboration, Phys. Lett. B199 (1987) 585 S. Aihara et al., TPC Collaboration, Phys. Rev. D43 (1991) 29 K. Abe et al., SLD Collaboration, presented at the International Europhysics Conference on High-Energy Physics (HEP97), Jerusalem, Israel, 1997, Report No. SLAC-PUB 7574
---
author:
- |
    [**Anna Kovalenko$^1$, Razin Farhan Hussain$^1$, Omid Semiari$^2$, and Mohsen Amini Salehi$^1$**]{}\
    $^1$[{aok8889, razinfarhan.hussain1, amini}@louisiana.edu]{} $^2$[osemiari@georgiasouthern.edu]{}\
    $^1$High Performance Cloud Computing ([HPCC](http://hpcclab.org/)) Laboratory,\
    School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette, LA, USA\
    $^2$Department of Electrical and Computer Engineering, Georgia Southern University, Statesboro, GA, USA\
bibliography:
- 'references.bib'
title: 'Robust Resource Allocation Using Edge Computing for Vehicle to Infrastructure (V2I) Networks'
---

Acknowledgments {#acknowledgments .unnumbered}
===============

Portions of this research were conducted with high performance computational resources provided by the Louisiana Optical Network Infrastructure [@LONI]; this work was supported by the Louisiana Board of Regents under grant number LEQSF(2016-19)-RD-A-25.
---
author:
- 'C.R. Cowley[^1]'
- 'S. Hubrig'
title: 'The absorption and emission spectrum of the magnetic Herbig Ae star HD 190073[^2]'
---

Introduction {#sec:intro}
============

The intriguing spectrum of the magnetic Herbig Ae star HD190073 (V1295Aql) has attracted the attention of classical as well as modern spectroscopists (Merrill 1933; Swings & Struve 1940; Catala et al. 2007, henceforth CAT). Pogodin, Franco & Lopes (2005, henceforth P05) give a detailed description of the spectrum along with a historical resume of investigations from the 1930’s. We note that the nature of HD190073 as a young, Herbig Ae star became widely recognized some three decades after Herbig’s (1960) seminal paper. HD190073 was included among the 24 young stars studied for abundances by Acke & Waelkens (2004, henceforth AW). In this important paper, the authors made the bold assumption that abundances could be determined for stars of this nature using standard techniques: plane-parallel, one-dimensional models in hydrostatic equilibrium. The models were used to obtain abundances from absorption lines with equivalent widths less than 150 mÅ. These assumptions might very well be questioned. Material is being accreted by these young stars, and the infall velocities are thought to be near free-fall, several hundred kms$^{-1}$. Does this infall produce shocks and heating of the atmospheres that could invalidate models that neglect such complications? AW nevertheless proceeded. Although they did not state this explicitly, the justification for their assumptions is empirical, and may be found in their results. Basically, it is the fact that their approach yields entirely reasonable stellar parameters and abundances, including agreement between lines of neutral and ionized elements. Stated simply, their assumptions led them to self-consistent results. We make these same assumptions in the present work, taking some comfort in the fact that self-consistency is all one ever has in science.
While AW’s studies were both competent and thorough, better observational material is currently available, making it possible to use systematically weaker lines, and to study more elements. We have also made use of the wings of the Balmer lines, not used by AW. The lower Balmer lines have central emissions; in the case of H$\alpha$, the emission dominates the feature. The Balmer lines, and especially H$\alpha$, have been extensively studied (e.g. P05; Cuttela & Ringuelet 1990). In the present paper we also study the weaker metallic emission lines, to provide information on the physical conditions where this emission occurs. This was discussed by CAT, who were primarily concerned with the magnetic field of HD 190073. They also give a detailed qualitative description of the metallic emission lines, primarily Ti and Fe (cf. Sect. \[sec:emiss\] and following). Hubrig et al. (2006, 2009) reported a longitudinal magnetic field of 84$\pm$30 Gauss, up to 104$\pm$19 Gauss, while CAT found a longitudinal field of 74$\pm$10 Gauss.

Observations {#sec:obs}
============

We downloaded 8 HARPS spectra from the ESO archive, all obtained on 11 November 2008 within 74 minutes of one another. These were averaged, binned to 0.02Å, and mildly Fourier filtered. The resulting spectrum has a signal-to-noise ratio of 350 or more. The resolution of HARPS spectra, usually cited as over 100000, is not significantly modified for our purposes by the averaging, as Fig. \[fig:line4481\] illustrates.

![ The HARPS spectrum (ADP.HARPS.2008-11-10T23:43:14.386\_2\_SID\_A) and averaged spectra in the region of the Mg doublet $\lambda$4481Å. The HARPS (gray and red in online version) spectrum has been displaced slightly downward for display purposes.[]{data-label="fig:line4481"}](4481.ps){width="55mm" height="83mm"}

UVES spectra, obtained on 18 September 2005, cover the region from 3044 to 10257 Å. They were used for special features (e.g. the \[Ca \] lines), but not for abundances.
Reduction {#sec:reduction}
=========

![The region from $\lambda\lambda$4000 to 4050 Å shows numerous measurable absorption features. Note the broad absorption at $\lambda$4026 Å, which is He . Fe   $\lambda$4045 Å shows strong emission as well as absorption.[]{data-label="fig:line4050"}](4050.ps){width="54mm" height="83mm"}

The averaged HARPS spectrum was measured for 1796 lines. The UVES spectrum was also measured for line identifications in the region $\lambda\lambda$3054–3867 Å. We measured 760 absorption lines there, which were often severely affected by emission. Many absorption lines, especially weak ones, were not significantly affected by emission and are suitable for abundance determinations. Figure \[fig:line4050\] shows a typical region with many relatively unperturbed absorptions. Preliminary, automated identifications were made, and wavelength coincidence statistics (WCS, Cowley & Hensberge 1981) were performed. A few spectra not investigated by AW were analyzed: He , Na , Al , Si , S , S , Co , Mn , Mn , Ni , Zn , and Sr . We found no exotic elements, such as lanthanides, or unusual 4d or 5d elements.

![ A comparison of equivalent width measurements by AW and the present study (UMICH).[]{data-label="fig:pltdif"}](pltdif.ps){width="54mm" height="83mm"}

Lines were chosen for equivalent width measurement with the help of the automated identification list, which lists plausible identifications within 0.03 Å of any measured feature. Blends were rejected. Usually we avoided lines with equivalent widths greater than 20 mÅ, but in order to compare our measurements with those of AW, we included a few stronger lines. A comparison of measurements is given in Fig. \[fig:pltdif\]. Generally, the measurements agree well with one another, and differences can usually be explained by judgments of where to draw the continuum when a line is partially in emission, or there is emission close by.
The difference in the case of one of the solid circles is surely due to emission, as Ti  $\lambda$4398 Å falls between two strong emission lines. The other solid point is for O $\lambda$3947 Å. This is apparently a misidentification. Note that Fig. \[fig:pltdif\] is not logarithmic.

The model atmosphere and abundance methods {#sec:model}
==========================================

The methods used to obtain abundances from the equivalent widths, including model atmosphere construction, are explained in some detail in two previous papers (Cowley et al. 2010a,b). Briefly, the $T(\tau_{5000})$ from Atlas 9 (Kurucz 1993) as implemented by Sbordone et al. (2004) was used with Michigan software to produce depth-dependent models. The effective temperature and gravity were selected from ionization and excitation equilibrium as well as fits to the wings of H$\beta$–H$\delta$. We adopted a somewhat lower temperature than AW, 8750 K, and $\log g = 3.0$; AW used 9250 K and $\log g=3.5$. We also used a lower microturbulence, 2 kms$^{-1}$, compared to AW’s 3 kms$^{-1}$, but this is not important for most of our weaker lines. Oscillator strengths were taken from the modern literature when possible, or from compilations by NIST (Ralchenko et al. 2010, preferred) or VALD (Kupka et al. 1999). Default damping constants were used as in the studies cited, but they are unimportant for weak lines.
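For the weak lines used here the analysis sits on the linear part of the curve of growth, where equivalent width scales linearly with abundance, so a first-order abundance correction from a measured versus modeled equivalent width is simply a logarithmic ratio. A minimal sketch of that weak-line scaling (illustrative only; it does not reproduce the Michigan software, and the equivalent widths below are made-up numbers):

```python
import math

def weak_line_correction(w_obs_mA, w_model_mA):
    """First-order abundance correction (dex) on the linear part of
    the curve of growth, where W is proportional to the abundance."""
    return math.log10(w_obs_mA / w_model_mA)

# A hypothetical line observed at 13 mA but modeled at 10 mA suggests
# raising the abundance by about 0.11 dex.
print(round(weak_line_correction(13.0, 10.0), 2))  # -> 0.11
```

For stronger, saturated lines (those above the ca. 20 mÅ cut used here) this linear scaling breaks down, which is one reason the analysis favors weak lines.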
Abundances
==========

--------------- ------------------------- ------ ----- ------- -------
Ion             $\log({\rm El}/{\Sum})$   sd     $N$   Sun     AW
\[1.5pt\]
He              –1.15                     0.38   2     –1.11
C               –3.40                     0.23   36    –3.61   –3.55
[**N** ]{}\*    –3.50                     0.38   9     –4.21   –3.40
O               –3.29                     0.10   12    –3.35   –3.38
[**Na** ]{}     –5.25                     0.24   5     –5.80
Mg              –4.29                     0.23   3     –4.44   –4.52
Mg              –4.54                     0.16   8     –4.44
[**Al** ]{}\*   –6.07                            1     –5.59   –6.01
Si              –4.43                     0.36   7     –4.53   –4.41
Si              –4.61                     0.13   10    –4.53
S               –4.62                     0.06   3     –4.92
S               –4.40                     0.45   6
Ca              –5.78                     0.11   2     –5.70   –5.41
Ca              –5.63                     0.19   6     –5.70
[**Sc** ]{}\*   –9.16                     0.13   10    –8.89   –9.00
Ti              –7.18                     0.19   32    –7.09   –7.07
V               –8.07                     0.11   14    –8.11   –7.93
Cr              –6.54                     0.12   6     –6.40   –6.35
Cr              –6.37                     0.17   22    –6.40
Mn              –6.60                     0.28   3     –6.61
Mn              –6.53                     0.24   13    –6.61
Fe              –4.54                     0.15   182   –4.54   –4.53
Fe              –4.54                     0.21   145   –4.54
Co              –7.13                            1     –7.05
Ni              –5.86                     0.15   18    –5.82   –5.73
Ni              –5.68                     0.23   5     –5.82
Zn              –7.48                     0.08   2     –7.48
Sr              –8.53                     0.54   2     –9.17
Y               –10.05                    0.19   8     –9.83   –9.79
Zr              –9.43                     0.15   13    –9.46   –9.12
Ba              –9.88                     0.12   3     –9.86   –9.72
\[1.5pt\]
--------------- ------------------------- ------ ----- ------- -------

: Abundances in HD190073 from the current study and AW.[]{data-label="tab:abund"}

The AW abundances have been converted from differential values, using the Anders & Grevesse (1989) abundances, which AW adopted. Our abundances (see Table \[tab:abund\]) refer to the Asplund et al. (2009) scale. The case is not strong that any of these abundances differ significantly from the solar abundance. Nevertheless, we have highlighted in bold face some elements that deserve additional attention. Nitrogen, in particular, deserves attention, as it has been found in excess in the Herbig Ae star HD 101412 (Cowley et al. 2010a). Asterisks mark cases where AW and the present work agree on possibly significant departures from solar abundances. NLTE effects could also be responsible for some non-solar abundances (Kamp et al. 2001).

Neutral helium {#sec:he1}
--------------

The helium abundance is from $\lambda\lambda$4026 (see Fig. \[fig:line4050\]) and 4713 Å.
Both lines are weak, but in excellent agreement with one another. However, $\lambda$4471 Å was also found in absorption and analyzed. The value of $\log({\rm He}/\Sum)$ from this line was found to be $-$1.53, some 0.4 dex below the mean of the other two lines. We have chosen to disregard this value, as the line is possibly weakened by partial emission. If it is included in the average, we find $\log({\rm He}/\Sum) = -1.29\pm 0.21$(sd), still solar, within the errors. The D$_3$ line ($\lambda$5876 Å) of He is in emission, and is included in Table \[tab:lindat\]. It was observed at numerous phases by P05, whose observations (see their Fig. 3) do not show the central reversal clearly seen in our Fig. \[fig:d3\]. This feature was measured at $\lambda^*$5875.60, virtually unshifted from the expected photospheric position $\lambda$5875.64. Moreover, an LTE synthesis using Voigt profiles and an assumed solar abundance matches the observed absorption in shape and strength. Given the likelihood of NLTE, it is unclear how seriously to take this result. Nevertheless, the D$_3$ absorption is consistent with photospheric absorption and a solar abundance.

![ He D$_3$ line in the spectrum of HD190073. The central absorption is arguably photospheric, and agrees in shape and strength with a calculated absorption profile.[]{data-label="fig:d3"}](d3.ps){width="54mm" height="83mm"}

The D$_3$ emission is remarkably similar in morphology to the metallic emissions (with unshifted absorptions), though it is much broader. It resembles the P05 illustrations at phases ‘a’ or ‘e’, with a maximum shifted somewhat to the red, and a longer violet tail.

The emission spectrum {#sec:emiss}
=====================

A second focus of the current paper is the emission spectrum, in particular, permitted and forbidden metallic lines.

![The emission/absorption, metallic-line spectrum of two Herbig Ae stars, contrasted.
Note the proclivity of the intrinsically stronger lines to be in emission in HD 190073.[]{data-label="fig:p2dat"}](p2dat.ps){width="54mm" height="83mm"}

Previous studies (P05, CAT) provide detailed descriptions of numerous metallic emission lines, primarily of Fe , Ti , and Cr , as well as the Na D-lines, and have illustrations similar to the upper spectrum of Fig. \[fig:p2dat\]. As the P05 work has a temporal dimension lacking in the present study, we briefly summarize their findings. The emissions show mild temporal variations both in strengths and widths. The profiles are somewhat asymmetric, with their peak intensities generally very slightly red-shifted with respect to the photospheric absorption spectrum. The widths of the features are significantly larger than these shifts, and a considerable fraction of the emission is shifted to the blue. The P05 observations are all in good agreement with the current findings, which we take to be a representative sample. Table \[tab:lindat\] gives measurements of the peak intensities, equivalent widths, and FWHM for the strong, relatively unblended emission lines as they appear on our averaged HARPS spectra. Multiplet numbers follow the spectrum designation. The intensities are in units of the continuum, and the equivalent widths are the areas of the emissions above the continuum, which is assumed to have unit intensity. The measurements are from segments fitted by eye to the emission lines, as illustrated in Fig. \[fig:5183\].

![ Segment fits (black) to the emission from Mg-2 $\lambda$5183.60 Å, the strongest of the Mg b lines. Observations: gray (red online) with dots. The portion of the fitted curve near the central absorption is a by-eye estimate of the missing part of the profile. The maximum, designated $I^0$, and other properties of similar interpolations are given in Table \[tab:lindat\].[]{data-label="fig:5183"}](5183.ps){width="54mm" height="83mm"}

All lines in the table had central absorptions.
In measuring the $I^0$-values, an attempt was made to interpolate over this absorption, so the $I^0$-value is a few per cent higher than the maximum of the emission. The accuracy of the measurements varies. Repeated measurements show that the FWHM values are generally consistent to within 10%. Underlying emission from other lines is the prime cause of the uncertainty.

----------- --------------- ------- ----------- ------------ ---------------------
$\lambda$ \[Å\]   Ion/Mult.       $I^0$   $W$ \[Å\]   FWHM \[Å\]   FWHM \[kms$^{-1}$\]
\[1.5pt\]
4045.81     Fe I-43         1.11    0.130       1.02         75.6
4063.59     Fe I-43         1.08    0.065       0.83         61.2
4071.74     Fe I-43         1.11    0.113       0.82         60.4
4077.71     Sr II-1         1.12    0.109       0.87         64.0
4143.87     Fe I-43         1.05    0.032       0.61         44.1
4163.65     Ti II-105       1.05    0.056       0.93         67.0
4173.46     Fe I-27         1.12    0.095       0.80         57.5
4178.86     Fe II-28        1.12    0.180       1.26         90.4
4215.52     Sr II-1         1.06    0.472       0.81         57.6
4233.17     Fe II-27        1.27    0.402       1.29         91.4
4246.82     Sc II-7         1.07    0.062       0.82         57.9
4271.76     Fe I-42         1.06    0.620       0.82         57.5
4290.22     Ti II-42        1.12    0.112       0.87         60.8
4294.10     Ti II-20        1.13    0.119       0.82         57.2
4300.05     Ti II-41        1.23    0.308       1.06         73.9
4307.86     Ti II-41        1.15    0.179       1.04         72.4
4351.77     Fe I-27         1.28    0.420       1.31         90.2
4383.55     Fe I-41         1.14    0.119       0.83         56.8
4404.75     Fe I-41         1.08    0.077       0.86         58.5
4443.79     Ti II-19        1.24    0.311       1.06         71.5
4450.48     Ti II-19        1.06    0.075       1.04         70.1
4468.51     Ti II-31        1.25    0.281       0.89         59.7
4491.41     Fe II-37        1.12    0.155       1.15         76.8
4501.27     Ti II-31        1.24    0.319       1.03         68.6
4508.29     Fe II-38        1.17    0.198       1.04         69.2
4515.34     Fe II-37        1.12    0.176       1.25         83.0
4533.97     Ti II-50        1.19    0.261       1.18         78.0
4541.52     Fe II-38        1.06    0.048       0.80         52.8
4558.65     Cr II-44        1.14    0.186       1.17         76.9
4563.76     Ti II-50        1.16    0.183       0.97         63.8
4571.97     Ti II-82        1.26    0.332       1.07         70.2
4576.34     Fe II-38        1.07    0.081       0.97         63.5
4588.20     Cr II-44        1.09    0.105       0.96         62.7
4618.80     Cr II-44        1.11    1.350       1.07         69.4
4629.34     Fe II-37        1.17    0.211       1.00         64.8
4634.07     Cr II-44        1.04    0.043       0.87         56.3
4731.45     Fe II-43        1.05    0.058       0.93         58.9
4805.09     Ti II-92        1.04    0.332       0.86         53.7
4824.13     Cr II-30        1.08    0.953       1.06         65.9
4923.92     Fe II-42        1.65    1.340       1.70         103.5
4957.60     Fe I-318        1.74    1.590       1.82         110.1
5169.03     Fe II-42        1.77    1.890       2.14         124.1
5183.60     Mg I-2          1.23    0.303       1.10         63.6
5197.58     Fe II-49        1.23    0.231       1.23         70.9
5234.63     Fe II-49        1.22    0.437       1.22         69.9
5264.81     Fe II-48        1.04    0.030       0.66         37.6
5284.11     Fe II-41        1.08    0.117       1.20         68.1
5362.87     Fe II-48        1.15    0.216       1.22         68.2
5534.85     Fe II-55        1.10    0.140       1.10         59.6
5875.64     He I-4          1.09    0.390       4.17         212.7
5889.95     Na I-D$_2$      1.76    1.380       1.38         70.2
5895.92     Na I-D$_1$      1.70    1.130       1.36         69.2
5991.38     Fe II-55p       1.03    0.043       1.38         69.1
6238.39     Fe II-74        1.07    0.078       1.14         54.8
6247.56     Fe II-74        1.13    0.217       1.46         70.1
6347.11     Si II-2         1.11    0.325       2.50         118.1
6371.37     Si II-2         1.08    0.228       2.86         134.6
6416.92     Fe II-74        1.06    0.098       1.42         65.7
6432.68     Fe II-40        1.05    0.073       1.16         54.1
6562.82     H I             6.82    32.20       5.34         243.9
5158.78     \[Fe II-19F\]   1.04    0.012       0.282        16.4
5577.35     \[O I-3F\]      1.01    0.004       0.44         23.4
6300.30     \[O I-1F\]      1.11    0.056       0.41         19.3
6363.78     \[O I-1F\]      1.04    0.019       0.42         19.7
7291.47     \[Ca II-1F\]    1.11    0.046       0.43         17.8
7323.89     \[Ca II-1F\]    1.10    0.033       0.28         11.3
\[1.5pt\]
----------- --------------- ------- ----------- ------------ ---------------------

: Maximum intensity measurements, equivalent widths, and full widths at half maximum for selected emission lines.[]{data-label="tab:lindat"}

Resumé: the permitted emissions
-------------------------------

We summarize salient properties of the permitted emission lines:

- The centers of gravity are shifted by ca. 5 kms$^{-1}$ to the red.

- The profiles are somewhat asymmetric, with a longer violet than red tail.

- The central absorption wavelengths are photospheric within the errors of measurement. That is, the radial velocities of the weaker photospheric absorptions agree with those of the central absorption components of the emission lines.

- The emission lines are the intrinsically strongest lines. Weaker lines show weaker emission, until, for the weakest lines, the observed features are all in absorption.

CAT suggested the emissions arose in conditions similar to those of a photosphere.
They suggested a heated region with densities and temperatures in the range of $10^{13}$–$10^{14}$ cm$^{-3}$ and $15\,000$–$20\,000$ K. Their value of 65 kms$^{-1}$ as a typical FWHM for the emissions agrees well with our measurements (Table \[tab:lindat\]). The origin of this velocity, however, is not readily apparent. They speculate that these velocities are due to supersonic turbulence. Such “turbulence” might arise from the roil of accreting material settling onto the photosphere.

Forbidden lines\[sec:forbidln\]
-------------------------------

### The \[O \] lines\[sec:ForO1\]

![The nebular ($\rm ^3P_2$–$^1\rm D_2$) \[O \] transition $\lambda$6300 Å. The long arrow points to the laboratory position at 6300.30 Å. The stellar feature seems slightly red shifted. The short arrow points to a narrow, blue-shifted satellite feature that is also seen at the same displacement in \[O \] $\lambda\lambda$6363 and 5577 Å. The strong absorption lines are atmospheric.[]{data-label="fig:6300"}](6300.ps){width="55mm" height="83mm"}

Both $\lambda\lambda$6300 and 6363 \[O \] are present as well as the auroral transition $\lambda$5577 Å. In addition to what we shall call the main features, all three lines show faint, sharp, “satellite” components shifted to the violet by ca. 25 kms$^{-1}$. This structure is illustrated in Figs. \[fig:6300\] and  \[fig:5577d\]. The main \[O \] features are roughly one third the width of the typical permitted metallic-line features, but their peak intensities have comparably small red shifts of ca. 5 kms$^{-1}$. It is plausible to assume the \[O \] arises in a region further from the star, and therefore of lower density than the gas giving rise to the permitted metallic lines. The satellite emissions may arise in a polar stream, if we assume the system is viewed pole-on. The velocity, however, is not high. The satellite features of all three \[O \] lines are shifted by ca. $-25$ kms$^{-1}$.
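The velocity widths quoted throughout (and the last column of Table \[tab:lindat\]) follow from the measured wavelength widths via the Doppler relation, FWHM(kms$^{-1}$) $= c\,\Delta\lambda_{\rm FWHM}/\lambda$. A quick consistency check against two table entries:

```python
C_KM_S = 299_792.458  # speed of light in km/s

def fwhm_kms(wavelength_angstrom, fwhm_angstrom):
    """Convert a FWHM measured in Angstroms to a velocity width (km/s)."""
    return C_KM_S * fwhm_angstrom / wavelength_angstrom

# Fe I-43 at 4045.81 A: tabulated 1.02 A and 75.6 km/s
print(round(fwhm_kms(4045.81, 1.02), 1))   # -> 75.6
# Sr II-1 at 4077.71 A: tabulated 0.87 A and 64.0 km/s
print(round(fwhm_kms(4077.71, 0.87), 1))   # -> 64.0
```

The narrow forbidden lines (FWHM of roughly 0.3–0.4 Å) translate to only 10–25 kms$^{-1}$ by the same relation, roughly a third of the permitted-line widths, as noted above.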
### The \[Ca -F1\] doublet {#sec:forbidca}

![The \[Ca \] lines $\lambda\lambda$7291 (black) and 7324 Å (gray, red in online version) from multiplet 1-F. Wavelengths for $\lambda$7324 Å are at the top abscissa. The arrow indicates a possible narrow component of $\lambda$7291 Å corresponding to those seen in \[O \].[]{data-label="fig:forca2"}](forca2.ps){width="55mm" height="83mm"}

Hamann (1994) has noted the presence of \[Ca \] in a number of young stars, including the Herbig Ae star V380 Ori. We are not aware that the lines have been previously noted in HD190073. They are also seen in supernovae (Kirshner & Kwan 1975) and extragalactic spectra (Donahue & Voit 1993). Merrill (1943) reported \[Ca II\] lines in emission in the peculiar hydrogen-poor binary $\upsilon$ Sgr (see also Greenstein & Merrill 1946). We find definite emissions at the positions of the forbidden $\rm ^2S_{1/2}$–$ ^2\rm D_{3/2,5/2}$ transitions. The air wavelengths, determined from the energy levels, are 7291.47 and 7323.89 Å. These features were also identified on the UVES spectrogram. The spectrum (Fig. \[fig:forca2\]) is too noisy or blended to show the presence of a satellite feature of $\lambda$7324 Å, but one might be present for $\lambda$7291 Å. Measurements of the main features are included in Table \[tab:lindat\]. The maxima are shifted to the red by 2 to 4 kms$^{-1}$, in general agreement with the \[O \] and metallic lines. The FWHM agree with those of other forbidden lines.

### \[Fe \]\[sec:fe2forbid\]

![ Forbidden Fe  lines from multiplet 19-F. The wavelength scale for the weaker of the two lines, $\lambda$5376 Å (gray, red in online version) is at the top of the figure. Vertical lines mark the positions of wavelengths derived from the atomic energy levels.[]{data-label="fig:5158"}](5158.ps){width="55mm" height="83mm"}

Several workers have discussed \[Fe II\] emission lines in Herbig Ae/Be stars (Finkenzeller 1985; Donati et al. 1997).
We find a definite, sharp emission feature with a maximum measured at $\lambda^*$5158.84 Å. This wavelength is close to that of \[Fe -19F\], $\lambda$5158.78 Å. (Laboratory positions for \[Fe \] are from Fuhr & Wiese (2006) rather than the RMT.) Another line from this multiplet is weakly present (Fig. \[fig:5158\]). Both features are seen on the unaveraged HARPS spectra as well as UVES spectra taken some 3 years previously (see Sect. \[sec:obs\]). Several other lines in Multiplet 19-F are arguably present ($\lambda\lambda$5261, 5296, 5072 Å); other lines are masked by blends or fall in a HARPS order gap. The line $\lambda$5158 Å is entered in Table \[tab:lindat\]. It has a FWHM comparable to that of the other forbidden lines. We found no other \[Fe \] features that could be said to be unambiguously present. The well-known \[Fe \] line $\lambda$4244 Å is, at best, marginally present.

### Physical conditions from the forbidden lines

“Critical electron densities” are obtained from observed forbidden transitions by equating the Einstein spontaneous decay rate to the collisional deexcitation rate. Typical values are given by Draine (2011) for $T_{\rm e} = 10\,000$ K. For the \[O \] $\rm ^1D_2$ level (the upper level of $\lambda\lambda$6300 and 6363 Å), the critical $N_{\rm e}$ is 1.6$\times 10^6$ cm$^{-3}$. We calculate a similar critical density for the \[Ca \] lines with the help of rates calculated by Burgess et al. (1995), and a lifetime of the $\rm ^2D$ term given by NIST (Ralchenko et al. 2010). We see no evidence of the \[S -2F\] pair at $\lambda\lambda$6717 and 6731 Å, though they have been observed in Herbig Ae/Be stars (Corcoran & Ray 1997). For this pair, Draine gives critical densities of $10^3$–$10^4$ cm$^{-3}$. We conclude the forbidden lines we do see arise in a region where the electron density is between $10^4$ and $10^{6-7}$ cm$^{-3}$.

![ The auroral ($\rm ^1D_2$–$ ^1\rm S_0$) \[O \] transition $\lambda$5577 (gray, red in online version).
The long arrow points to 5577.34 Å, the NIST wavelength. The maximum and center of gravity of the line are shifted slightly to the red, as are those of the nebular lines (Fig. \[fig:6300\]). The shorter arrow points to the sharp, blue-shifted feature with the same displacement as the sharp component of \[O \] $\lambda\lambda$6300 and 6363 Å.[]{data-label="fig:5577d"}](5577d.ps){width="55mm" height="83mm"}

When the nebular as well as auroral transitions of \[O \] are available, the ratio may allow one to determine values of $T_{\rm e}$ and $N_{\rm e}$ compatible with the observation. The average of three measurements on $\lambda$5577 gives $W = 0.0042\,$Å, which with $W = 0.056\,$Å for $\lambda$6300 yields a ratio of 13.3. If we assume the excited levels of O arise from electron excitation, we may interpolate in the plot of Gorti et al. (2011) to find the acceptable values given in Table \[tab:Gorti\]. With the electron density constraint given above, we find temperatures in the range 7500 to 10000 K for the volume where the forbidden lines are formed. An alternate interpretation of \[O \] emission in Herbig Ae/Be systems is discussed by Acke, van den Ancker & Dullemond (2005). In their model, the excited O levels arise primarily from the photodissociation of the OH molecule.

---------------- ------------------
$T_{\rm e}$      $\log N_{\rm e}$
\[1.5pt\]
5000             8
7500             7
10000            6.5
12000            6.2
\[1.5pt\]
---------------- ------------------

: Values of $T_{\rm e}$ (in K) and $N_{\rm e}$ (in cm$^{-3}$) compatible with the observed $\lambda$6300/5577-ratio = 13.3.[]{data-label="tab:Gorti"}

We are grateful for the availability of the ESO archive. This research has made use of the SIMBAD data base, operated at CDS, Strasbourg, France. Our calculations have made extensive use of the VALD atomic data base (Kupka et al. 1999), as well as the facilities provided by NIST (Ralchenko et al. 2010). CRC thanks colleagues at Michigan for many helpful suggestions.
Jesús Hernández suggested that we examine the forbidden lines.

Acke, B., Waelkens, C.: 2004, A&A 427, 1017 (AW)

Acke, B., van den Ancker, M.E., Dullemond, C.P.: 2005, A&A 436, 209

Anders, E., Grevesse, N.: 1989, Geochim. Cosmochim. Acta 53, 197

Asplund, M., Grevesse, N., Sauval, A.J., Scott, P.: 2009, ARA&A 47, 481

Burgess, A., Chidichimo, M.C., Tully, J.A.: 1995, A&A 300, 627

Catala, C., Alecian, E., Donati, J.-F., et al.: 2007, A&A 462, 293 (CAT)

Corcoran, M., Ray, T.P.: 1997, A&A 321, 189

Cowley, C.R., Hensberge, H.: 1981, ApJ 244, 252

Cowley, C.R., Hubrig, S., González, J.F., Savanov, I.: 2010a, A&A 523, 65

Cowley, C.R., Hubrig, S., Palmeri, P., et al.: 2010b, MNRAS 405, 1271

Cuttela, M., Ringuelet, A.E.: 1990, MNRAS 246, 20

Donahue, M., Voit, G.M.: 1993, ApJ 414, L17

Donati, J.-F., Semel, M., Carter, B.D., Rees, D.E., Cameron, A.C.: 1997, MNRAS 291, 658

Draine, B.T.: 2011, [*Physics of the Interstellar and Intergalactic Medium*]{}, Princeton Univ. Press, Princeton, NJ

Finkenzeller, U.: 1985, A&A 151, 340

Fuhr, J.R., Wiese, W.L.: 2006, J. Phys. Chem. Ref. Data 35, 1669

Greenstein, J.L., Merrill, P.W.: 1946, ApJ 104, 177

Hamann, F.: 1994, ApJS 93, 485

Herbig, G.H.: 1960, ApJS 4, 337

Hubrig, S., Yudin, R.V., Schöller, M., Pogodin, M.A.: 2006, A&A 446, 1089

Hubrig, S., Stelzer, B., Schöller, M., et al.: 2009, A&A 502, 283

Kamp, I., Iliev, I.Kh., Paunzen, E., et al.: 2001, A&A 375, 899

Kirshner, R.P., Kwan, J.: 1975, ApJ 197, 415

Kupka, F., Piskunov, N.E., Ryabchikova, T.A., Stempels, H.C., Weiss, W.W.: 1999, A&AS 138, 119

Kurucz, R.L.: 1993, [*ATLAS9 Stellar Atmosphere Programs and 2 km/s grid*]{}, CD-Rom, No. 13, Smithsonian Ap.
Obs., Cambridge/MA

Merrill, P.W.: 1933, ApJ 77, 51

Merrill, P.W.: 1943, PASP 55, 242

Pogodin, M.A., Franco, G.A.P., Lopes, D.F.: 2005, A&A 438, 239

Ralchenko, Yu., Kramida, A.E., Reader, J., NIST ASD team: 2010, [*NIST Atomic Spectra Database*]{}, version 3.1.5, http://physics.nist.gov/as of \[2010, Feb 10\], National Institute of Standards and Technology, Gaithersburg, MD

Sbordone, L., Bonifacio, P., Castelli, F., Kurucz, R.L.: 2004, Mem. Soc. Astron. Ital. Suppl. 5, 93

Swings, P., Struve, O.: 1940, ApJ 91, 546

[^1]: Corresponding author: [cowley@umich.edu]{}

[^2]: Based on ESO Archival data, from ESO programme 076.B-0055(A) and programme 082.D-0833(A)
---
abstract: 'The order-by-order renormalization of the self-consistent mean-field potential in many-body perturbation theory for normal Fermi systems is investigated in detail. Building on previous work mainly by Balian and de Dominicis, as a key result we derive a thermodynamic perturbation series that manifests the consistency of the adiabatic zero-temperature formalism with perturbative statistical mechanics—for both isotropic and anisotropic systems—and satisfies at each order and for all temperatures the thermodynamic relations associated with Fermi-liquid theory. These properties are proved to all orders.'
author:
- Corbinian Wellenhofer
bibliography:
- 'refs.bib'
title: 'Zero-Temperature Limit and Statistical Quasiparticles in Many-Body Perturbation Theory'
---

Introduction
============

Many-body perturbation theory (MBPT) represents the elementary framework for calculations aimed at the properties of nonrelativistic many-fermion systems at zero and finite temperature. In general, for Fermi systems the correct ground state is not a normal state but involves Cooper pairs [@PhysRev.150.202; @PhysRevLett.15.524; @doi:10.1142/S0217979292001249; @RevModPhys.66.129; @Salmhofer:1999uq]. However, pairing effects can often be neglected for approximate calculations of thermodynamic properties close to zero temperature. For such calculations there are two formalisms: first, there is grand-canonical perturbation theory, and second, the zero-temperature formalism based on the adiabatic continuation of the ground state [@PhysRev.84.350; @Goldstone:1957zz; @Nozbook; @Runge; @negele; @Fetter; @Abrikosov]. In their time-dependent (i.e., in frequency space) formulations, these two formalisms give matching results if all quantities are derived from the exact Green’s functions, i.e., from the self-consistently renormalized propagators [@Luttinger:1960ua; @Fetter; @Abrikosov; @Feldman1999].
The renormalization of MBPT in frequency space can be generalized to vertex functions [@dommar2; @dommar1; @Hausmann; @Rossi:2015lda; @PhysRevB.93.161102; @DICKHOFF2004377; @VanHoucke:2011ux], and is essential to obtain a fully consistent framework for calculating transport properties [@Baym:1961zz; @Baym:1962sx; @stefanuc]. Nevertheless, the use of bare propagators has the benefit that in that case the time integrals can be performed analytically. With bare propagators, MBPT in its most basic form corresponds to a perturbative expansion in terms of the interaction Hamiltonian $V$ about the noninteracting system with Hamiltonian $H_0$, where $H=H_0+V$ is the full Hamiltonian. First-order self-energy effects can be included to all orders in bare MBPT by expanding instead about a reference Hamiltonian $H_\text{ref}=H_0+U_1$, where $U_1$ includes the first-order contribution to the (frequency-space) self-energy $\Sigma_{1,{{\textbf}{k}}}$ as a self-consistent single-particle potential (mean field). The renormalization of $H_\text{ref}$ in terms of $U_1$ has the effect that all two-particle reducible diagrams with first-order pieces (single-vertex loops) are canceled. At second order the self-energy becomes frequency dependent and complex, so the equivalence between the propagator renormalization in frequency space and the renormalization of the mean-field part of $H_\text{ref}$ in bare MBPT is restricted to the Hartree-Fock level. Zero-temperature MBPT calculations with bare propagators and a Hartree-Fock reference Hamiltonian ${H_\text{ref}=H_0+U_1}$ are common in quantum chemistry and nuclear physics. With a Hartree-Fock reference Hamiltonian (or, with ${H_\text{ref}=H_0}$), however, the adiabatic zero-temperature formalism is inconsistent with the zero-temperature limit (${T\rightarrow 0}$) of grand-canonical MBPT. 
The (main) fault however lies not with zero-temperature MBPT, but with the grand-canonical perturbation series: in the bare grand-canonical formalism (with $H_\text{ref}\in\{H_0,H_0+U_1\}$) there is a mismatch in the Fermi-Dirac distribution functions caused by using the *reference* spectrum $\varepsilon_{{\textbf}{k}}$ together with the *true* chemical potential $\mu$, and in general this leads to deficient results [@Fritsch:2002hp; @PhysRevC.89.064009; @Wellenhofer:2017qla]. The adiabatic formalism on the other hand uses the reference chemical potential, i.e., the reference Fermi energy ${\varepsilon}_{\text{F}}$. Related to this is the presence of additional contributions from two-particle reducible diagrams, the so-called anomalous contributions, in the grand-canonical formalism. This issue is usually dealt with by modifying the grand-canonical perturbation series for the free energy in terms of an expansion about the chemical potential $\mu_\text{ref}\xrightarrow{T\rightarrow 0}{\varepsilon}_{\text{F}}$ of the reference system [@Kohn:1960zz; @brout2] (see also Sec. \[sec42\]). This expansion introduces additional anomalous contributions, and for isotropic systems these can be seen to cancel the old ones for ${T\rightarrow 0}$ [@Luttinger:1960ua]. Thus, the modified perturbation series for the free energy $F(T,\mu_\text{ref})$ reproduces the adiabatic series in the isotropic case. For anisotropic systems, however, the anomalous contributions persist at ${T=0}$ (for ${H_\text{ref}=H_0+U_1}$, at fourth order and beyond). Negele and Orland [@negele] interpret this feature as follows: there is nothing fundamentally wrong with the bare zero-temperature formalism, but for anisotropic systems the adiabatic continuation must be based on a better reference Hamiltonian $H_\text{ref}$. Since the convergence rate[^1] of MBPT depends on the choice of $H_\text{ref}$, this issue is relevant also for finite-temperature calculations, and for isotropic systems. 
Recently, Holt and Kaiser [@PhysRevC.95.034326] have shown that including the real part of the bare second-order contribution to the (on-shell) self-energy, $\text{Re}\,[\Sigma_{2,{{\textbf}{k}}}(\varepsilon_{{\textbf}{k}})]$, as the second-order contribution to the self-consistent mean field has a significant effect in perturbative nuclear matter calculations with modern two- and three-nucleon potentials (see, e.g., Refs. [@RevModPhys.81.1773; @MACHLEIDT20111; @Bogner:2009bt]). However, a formal clarification for the renormalization of $H_\text{ref}$ in terms of $\text{Re}\,[\Sigma_{2,{{\textbf}{k}}}(\varepsilon_{{\textbf}{k}})]$ was not included in Ref. [@PhysRevC.95.034326]. In particular, from the discussion of Ref. [@PhysRevC.95.034326] it is not clear whether the use of this second-order mean field should be considered an improvement or not, compared to calculations with a Hartree-Fock mean field.[^2] A general scheme where the reference Hamiltonian is renormalized at each order in grand-canonical MBPT was introduced by Balian, Bloch, and de Dominicis [@Balian1961529] (see also Refs. [@Balian1961529b; @Balianph; @BlochImperFerm; @boer; @1964mbpdedom]). This scheme however leads to a mean field whose functional form is given by $U[n_{\textbf}{k},T]$, where $n_{\textbf}{k}(T,\mu)$ is the Fermi-Dirac distribution and the explicit temperature dependence involves factors $\operatorname{\operatorname{e}}^{\pm({\varepsilon}_{\textbf}{k}-\mu)/T}$. Because of the $\operatorname{\operatorname{e}}^{\pm({\varepsilon}_{\textbf}{k}-\mu)/T}$ factors, the resulting perturbation series is well-behaved only at sufficiently large temperatures, and its ${T\rightarrow 0}$ limit does not exist.[^3] A different renormalization scheme was outlined by Balian and de Dominicis (BdD) in Refs. [@statquasi3; @statquasi1] (see also Refs. [@1964mbpdedom; @boer]). At second order, this scheme leads to the mean field employed by Holt and Kaiser [@PhysRevC.95.034326]. 
The outline given in Refs. [@statquasi3; @statquasi1] indicates the following results:

1.  The functional form of the mean field is to all orders given by $U[n_{\textbf}{k}]$, i.e., there is no explicit temperature dependence (apart from the one given by the Fermi-Dirac distributions), so the ${T\rightarrow 0}$ limit exists.

2.  The zero-temperature limit of the renormalized grand-canonical perturbation series for the free energy $F(T,\mu)$ reproduces the (correspondingly renormalized) adiabatic series for the ground-state energy $E^{(0)}({\varepsilon}_{\text{F}})$ to all orders; i.e., the reference spectrum ${\varepsilon}_{\textbf}{k}$ has been adjusted to the true chemical potential $\mu$, with ${\varepsilon}_{\text{F}}=\mu$ at ${T=0}$.

3.  One obtains at each perturbative order and for all temperatures the thermodynamic relations associated with Fermi-liquid theory [@Landau]. This result corresponds to the notion of statistical quasiparticles [@PhysRevA.7.304; @pethick2; @Fermiliq].

The most intricate part in establishing these results is as follows. For ${T\neq 0}$, there are no energy denominator poles in the (proper) expressions for the perturbative contributions to the grand-canonical potential. The BdD renormalization scheme however introduces such poles, and therefore a regularization procedure is required to apply the scheme. So far, this issue has been studied in more detail only for the case of impurity systems [@BALIAN1971229; @LUTTINGER19731; @keitermorandi]. Motivated by this situation, in the present paper we revisit the order-by-order renormalization of the reference Hamiltonian in bare MBPT.[^4] First, in Sec. \[sec2\] we give a short review of grand-canonical perturbation theory with bare propagators and introduce the various order-by-order renormalizations of the reference Hamiltonian.
We also discuss how dynamical quasiparticles arise in (frequency-space) MBPT, and show that their energies are distinguished from the ones of the statistical quasiparticles associated with result (2). In Sec. \[sec3\] we discuss the regularization procedure for the BdD renormalization scheme, and analyze the resulting expressions for the second- and third-order contributions to the grand-canonical potential and the BdD mean field. In Sec. \[sec4\] we prove to all orders that the BdD renormalized perturbation series satisfies the Fermi-liquid relations (2) and, as a consequence, manifests the consistency of the adiabatic zero-temperature formalism (1). The paper is concluded in Sec. \[summary\]. In Appendix \[app1\], we derive explicitly the renormalized contribution from two-particle reducible diagrams at fourth order. In Appendix \[app2\], we discuss in more detail the various forms of the self-energy, derive various expressions for the mean occupation numbers, and examine the functional relations between the grand-canonical potential and the (various forms of the) self-energy in bare MBPT.

Grand-Canonical Perturbation Theory {#sec2}
===================================

Setup {#sec21}
-----

We consider a homogeneous but not necessarily isotropic system of nonrelativistic fermions in thermodynamic equilibrium. The Hamiltonian is given by ${H= H_0+V}$, where $V$ is a two-body operator representing pair interactions. Multi-fermion interactions do not raise any new formal or conceptual issues, and are therefore neglected. For notational simplicity and without loss of generality we assume a single species of spinless fermions. If there is no external potential, then $H_0$ is the kinetic energy operator.
We now introduce an additional one-body operator $U$, and write $$\begin{aligned} \label{Eq0} H= \underbrace{(H_0+U)}_{H_\text{ref}}+(V- U).\end{aligned}$$ The operator $U$ represents a mean field, i.e., an effective one-body potential which allows one to define a solvable reference system that includes the effects of pair interactions in the system to a certain degree. For a homogeneous system the mean field should preserve translational invariance, so the eigenstates $\ket{\psi_{{\textbf{k}}}}$ of the momentum operator are eigenstates of $H_\text{ref}=H_0+U$, i.e., $$\begin{aligned} H_{\text{ref}} \ket{\psi_{{\textbf{k}}}} = {\varepsilon}_{{\textbf{k}}}\ket{\psi_{{\textbf{k}}}}.\end{aligned}$$ Because the mean field is supposed to include interaction effects self-consistently, the single-particle energies ${\varepsilon}_{{\textbf{k}}}$ are determined by the self-consistent equation $$\begin{aligned} {\varepsilon}_{{\textbf{k}}} = {\varepsilon}_{0,{\textbf{k}}} + U_{{\textbf{k}}}[{\varepsilon}_{{\textbf{k}}}],\end{aligned}$$ where ${\varepsilon}_{0,{\textbf{k}}}=\braket{\psi_{{\textbf{k}}}|H_0|\psi_{{\textbf{k}}}}$ and $U_{{\textbf{k}}}[{\varepsilon}_{{\textbf{k}}}]=\braket{\psi_{{\textbf{k}}}|U|\psi_{{\textbf{k}}}}$.[^5] The occupation number representation of the reference Hamiltonian $H_{\text{ref}}=H_0+ U$ is then given by $$\begin{aligned} \mathcal{H}_{\text{ref}} &= \sum_{{\textbf{k}}} \braket{\psi_{{\textbf{k}}}|H_{\text{ref}}|\psi_{{\textbf{k}}} } a^\dagger_{{\textbf{k}}} a_{{\textbf{k}}},\end{aligned}$$ where $a^\dagger_{\textbf{k}}$ and $a_{\textbf{k}}$ are creation and annihilation operators with respect to momentum eigenstates. If not indicated explicitly otherwise, we assume the thermodynamic limit where $\sum_{\textbf{k}}\rightarrow\int \!
d^3 k/(2\pi)^3$.[^6] The occupation number representation of the perturbation Hamiltonian ${H-H_\text{ref}=V-U}$ is given by $$\begin{aligned} \label{VHamilton} \mathcal{V}&= \frac{1}{2!}\sum_{{\textbf{k}}_1,{\textbf{k}}_2,{\textbf{k}}_3,{\textbf{k}}_4} \braket{\psi_{{\textbf{k}}_1}\psi_{{\textbf{k}}_2}| V|\psi_{{\textbf{k}}_3}\psi_{{\textbf{k}}_4} } a^\dagger_{{\textbf{k}}_1} a^\dagger_{{\textbf{k}}_2} a_{{\textbf{k}}_4} a_{{\textbf{k}}_3} {\nonumber \\}& \quad -\sum_{{\textbf{k}}} \braket{\psi_{{\textbf{k}}}| U|\psi_{{\textbf{k}}} } a^\dagger_{{\textbf{k}}} a_{{\textbf{k}}},\end{aligned}$$ where momentum conservation is implied, i.e., ${\textbf{k}}_1+{\textbf{k}}_2={\textbf{k}}_3+{\textbf{k}}_4$. We assume that the potential $V$ is sufficiently regular(ized) such that no ultraviolet [@Hammer:2000xg; @Wellenhofer:2018dwh] or infrared [@PhysRev.106.364] divergences appear in perturbation theory.[^7] Further, we require that $V$ has a form (e.g., finite-ranged interactions) for which the thermodynamic limit exists; see, e.g., Refs. [@haagbook; @ruelle; @lieb].

Perturbation series and diagrammatic analysis {#sec22}
---------------------------------------------

### Grand-canonical perturbation series

For truncation order $N$, the perturbation series for the grand-canonical potential $\Omega(T,\mu)$ is given by $$\begin{aligned} \label{MBPTgc0} \Omega(T,\mu) = \Omega_\text{ref}(T,\mu) +\Omega_{U}(T,\mu) + \sum_{n=1}^{N} \Omega_n(T,\mu),\end{aligned}$$ where $$\begin{aligned} \Omega_\text{ref}(T,\mu) &= T\sum_{{\textbf{k}}}\ln(\bar n_{\textbf{k}}), {\nonumber \\}\Omega_{U}(T,\mu)&= - \sum_{{\textbf{k}}} U_{{\textbf{k}}} n_{\textbf{k}}.\end{aligned}$$ Here, $\bar n_{\textbf{k}}=1-n_{\textbf{k}}$, with $n_{\textbf{k}}=[1+\operatorname{e}^{\beta({\varepsilon}_{\textbf{k}}-\mu)}]^{-1}$ the Fermi-Dirac distribution function, and $\beta=1/T$.
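As a concrete illustration of the self-consistency equation for the single-particle energies together with $\Omega_\text{ref}$ and $\Omega_U$, the following sketch iterates a toy contact-interaction mean field $U_{\textbf{k}}=g\varrho$ (a constant Hartree shift, so the fixed point reduces to a single scalar). The coupling $g$, chemical potential, temperature, and momentum grid are illustrative choices, not values taken from the text.

```python
import numpy as np

# Toy self-consistent mean field for spinless fermions: a contact
# interaction gives U_k = g*rho, so eps_k = eps_{0,k} + U_k[eps_k]
# becomes a scalar fixed-point problem.  All parameters are illustrative.
g, mu, T = 0.5, 1.0, 0.1
beta = 1.0 / T

kmax, nk = 10.0, 4000
k = np.linspace(1e-6, kmax, nk)
w = k**2 / (2.0 * np.pi**2) * (k[1] - k[0])   # isotropic measure d^3k/(2 pi)^3

U = 0.0
for _ in range(200):                           # fixed-point iteration
    eps = 0.5 * k**2 + U                       # eps_k = eps_{0,k} + U_k
    n = 1.0 / (1.0 + np.exp(np.clip(beta * (eps - mu), -500, 500)))
    rho = np.sum(w * n)                        # density rho = sum_k n_k
    U_new = g * rho
    if abs(U_new - U) < 1e-12:
        break
    U = U_new

# Omega_ref = T sum_k ln(nbar_k) in a numerically stable form,
# and the counterterm Omega_U = -sum_k U_k n_k from the -U vertex.
Omega_ref = -T * np.sum(w * np.logaddexp(0.0, -beta * (eps - mu)))
Omega_U = -np.sum(w * U * n)
```

For a weak coupling the iteration converges in a few steps; the converged $U$ then satisfies the self-consistency condition to the requested tolerance.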
From the grand-canonical version of Wick’s theorem one obtains the following formula [@Fetter; @BDDnuclphys7] for $\Omega_{n}(T,\mu)$: $$\begin{aligned} \label{OmegaT} \Omega_n= -\frac{1}{\beta}\frac{(-1)^{n}}{n!} \int\limits_{0}^\beta \! d \tau_n \cdots d \tau_1 \; \Braket{ \mathcal{T}\big[ \mathcal{V}(\tau_n) \cdots \mathcal{V}(\tau_1) \big] }_{L},\end{aligned}$$ where $\mathcal{T}$ is the time-ordering operator and $\mathcal{V}(\tau)=\operatorname{e}^{\mathcal{H}_\text{ref} \tau}\mathcal{V}\operatorname{e}^{-\mathcal{H}_\text{ref} \tau}$ is the interaction picture representation (in imaginary time) of the perturbation operator $\mathcal{V}$ given by Eq. .

### Classification of diagrams

The various ways the Wick contractions in the unperturbed ensemble average $\braket{\ldots}$ can be performed can be represented by Hugenholtz diagrams, i.e., diagrams composed of $V$ and $-U$ vertices,[^8] and directed lines attached to vertices at both ends. Left-pointing lines are called holes and correspond to factors $n_{\textbf{k}}$; right-pointing lines are called particles and have factors $\bar n_{\textbf{k}}$. In the case of two-particle reducible diagrams, momentum conservation implies that there are two or more lines with identical three-momenta. We refer to these lines as articulation lines. The diagrammatic parts connected via articulation lines are referred to as pieces. Two-particle irreducible diagrams have only $V$ vertices. Two-particle reducible diagrams where at least one set of lines with identical three-momenta includes both holes and particles are called anomalous, with the indicative lines referred to as anomalous articulation lines. All other (two-particle reducible or irreducible) diagrams are called normal.
The parts of anomalous diagrams connected via anomalous articulation lines are called normal pieces.[^9] In general, normal two-particle reducible diagrams transform into anomalous diagrams under vertex permutations; see Figs. \[fig3red\], \[figx\], and \[fig4\]. In Eq. , the subscript $L$ means that only linked diagrams are taken into account. By virtue of the time integration and the time-ordering operator, in Eq.  there is no distinction between diagrams connected via vertex permutations; in particular, there is no distinction between normal and anomalous two-particle reducible diagrams. The distinction between the different diagrams in the permutation-invariant sets of diagrams is however relevant for the time-independent formulas discussed below.

### Time-independent formulas

From Eq. , Bloch and de Dominicis [@BDDnuclphys7] (see also Refs. [@1964mbpdedom0; @boer; @keitermorandi]) have derived several time-independent formulas for $\Omega_{n}(T,\mu)$. One of them, here referred to as the direct formula, is given by $$\begin{aligned} \label{direct} \Omega_{n}^\text{direct}&=\frac{1}{\beta} \frac{(-1)^{n}}{2\pi {\text{i}}} \oint_{C} dz \frac{\operatorname{e}^{-\beta z}}{z^2} \Braket{ \mathcal{V} \frac{1}{{D}_n-z} \cdots \mathcal{V} \frac{1}{{D}_1-z} \mathcal{V} }_{\!L},\end{aligned}$$ where the contour $C$ encloses all the poles ${z=0,{D}_1,\ldots,{D}_n}$, with ${D}_{\nu\in\{1,\ldots,n\}}$ the energy denominators for the respective diagrams. Furthermore, in Eq. , it is implied that the contributions from all poles are summed before the momentum integration, i.e., the $z$ integral is performed inside the momentum integrals. This has the consequence that the integrands of the momentum integrals have no poles (for ${T\neq 0}$, see below) from vanishing energy denominators. The expressions obtained from the direct formula deviate from the ones obtained from the time-dependent formula Eq.
, but—as evident from the derivation of the direct formula [@BDDnuclphys7]—the sum of the direct expressions obtained for a set of diagrams that is closed under vertex permutations is equivalent (but not identical) to the expression obtained from Eq. . From the cyclic property of the trace, another time-independent formula can be derived [@BDDnuclphys7], here referred to as the cyclic formula, i.e., $$\begin{aligned} \label{cyclic} \Omega_{n}^\text{cyclic}&=\frac{1}{n} \frac{(-1)^{n+1}}{2\pi {\text{i}}} \oint_{C} dz \frac{\operatorname{e}^{-\beta z}}{z} \Braket{ \mathcal{V} \frac{1}{{D}_n-z} \cdots \mathcal{V} \frac{1}{{D}_1-z} \mathcal{V} }_{\!L},\end{aligned}$$ where again it is implied that the $z$ integral is performed inside the momentum integrals; again, this has the consequence that the integrands have no poles (for ${T\neq 0}$). The direct and the cyclic formula give equivalent (but not identical) expressions only for the sums of diagrams connected via cyclic vertex permutations, and the cyclic expressions for the individual diagrams in these cyclic groups are equivalent. Finally, from the analysis of the contributions from the different poles in Eq.  one can *formally* write down a reduced form of the cyclic formula [@BDDnuclphys7], here referred to as the reduced formula, i.e., $$\begin{aligned} \label{reduced} \Omega_{n}^\text{reduced}&= \frac{(-1)^{n+1}}{\mathcal{O}} \underset{z=0}{\text{Res}} \frac{\operatorname{e}^{-\beta z}}{z} \Braket{ \mathcal{V} \frac{1}{{D}_n-z} \cdots \mathcal{V} \frac{1}{{D}_1-z} \mathcal{V} }_{\!L},\end{aligned}$$ where $\mathcal{O}$ is the order of the pole at $z=0$. The reduced expressions for normal diagrams are identical to the usual expressions of zero-temperature MBPT, except that the step functions are replaced by Fermi-Dirac distributions.
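The residue structure of the direct formula can be checked with a short computer-algebra sketch: at second order (a single energy denominator $D$), the sum of the residues at the double pole $z=0$ and the simple pole $z=D$, divided by $\beta$, should give the kernel $(1-\beta D-\operatorname{e}^{-\beta D})/(\beta D^2)$ that reappears in the second-order mean field of the direct scheme in Sec. \[sec23\].

```python
import sympy as sp

# Second-order check of the direct formula: the contour integral of
# e^{-beta z}/z^2 * 1/(D - z) equals the sum of residues at z = 0 and z = D.
# Dividing by beta should reproduce F^direct(D).
z, D, beta = sp.symbols('z D beta', positive=True)

integrand = sp.exp(-beta * z) / (z**2 * (D - z))
res_sum = sp.residue(integrand, z, 0) + sp.residue(integrand, z, D)
F_direct = (1 - beta * D - sp.exp(-beta * D)) / (beta * D**2)

diff = sp.simplify(res_sum / beta - F_direct)
assert abs(float(diff.subs({beta: 1.3, D: 0.7}))) < 1e-12
```

The residue at $z=0$ contributes $(1-\beta D)/D^2$ and the residue at $z=D$ contributes $-\operatorname{e}^{-\beta D}/D^2$, so the agreement is exact, not merely numerical.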
As a consequence, while at ${T=0}$ the energy denominator poles in these expressions are at the integration boundary, for ${T\neq 0}$ they are in the interior. This entails that the reduced expressions for individual diagrams are not well-defined for ${T\neq 0}$. Last, we note that each of the time-independent formulas can be applied also to unlinked diagrams (the only change being the omission of the subscript $L$); this will become relevant in Sec. \[sec4\]. ### Classification of perturbative contributions Anomalous diagrams give no contribution in zero-temperature MBPT. However, the contributions from anomalous diagrams in grand-canonical MBPT do *not* vanish for ${T\rightarrow 0}$ (in the thermodynamic limit). The reduced integrands (which are well-defined at ${T=0}$) for diagrams with identically vanishing energy denominators[^10] have terms of the form $$\begin{aligned} \label{anomT0lim} \frac{\partial^\nu n_{\textbf}{k}}{\partial \mu^\nu} \xrightarrow{T\rightarrow 0} \delta^{(\nu)}(\mu-{\varepsilon}_{\textbf}{k}),\end{aligned}$$ e.g., $\beta n_{\textbf}{k} \bar n_{\textbf}{k}=\partial n_{\textbf}{k}/\partial \mu \xrightarrow{T\rightarrow 0} \delta(\mu-{\varepsilon}_{\textbf}{k})$. Contributions with such terms are called anomalous contributions. 
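The delta-function character of the anomalous factor $\beta n_{\textbf}{k}\bar n_{\textbf}{k}=\partial n_{\textbf}{k}/\partial\mu$ is easy to verify numerically: for any $T$ it carries unit weight when integrated over the single-particle energy, and its peak at ${\varepsilon}_{\textbf}{k}=\mu$ narrows proportionally to $T$. A minimal sketch (grid and temperatures are illustrative):

```python
import numpy as np

# beta * n * (1 - n) = dn/dmu is a nascent delta function in eps - mu:
# unit weight for every T, with a peak of width ~T centered at eps = mu.
mu = 1.0
eps = np.linspace(-10.0, 12.0, 200001)
deps = eps[1] - eps[0]

for T in (0.5, 0.1, 0.01):
    beta = 1.0 / T
    x = np.clip(beta * (eps - mu), -500, 500)
    n = 1.0 / (1.0 + np.exp(x))
    anom = beta * n * (1.0 - n)
    weight = np.sum(anom) * deps        # integral over eps -> 1 for all T
    peak = eps[np.argmax(anom)]         # -> mu
    assert abs(weight - 1.0) < 1e-5
    assert abs(peak - mu) < 1e-3
```

As $T$ decreases the same unit weight is concentrated into an ever narrower interval around $\mu$, which is the content of Eq.  for ${\nu=1}$.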
There are also contributions that vanish for ${T\rightarrow 0}$, e.g., $$\begin{aligned} n_{\textbf}{k} \bar n_{\textbf}{k}=T\frac{\partial n_{\textbf}{k}}{\partial \mu} \xrightarrow{T\rightarrow 0} 0.\end{aligned}$$ Such pseudoanomalous contributions can be associated also with *normal* two-particle reducible diagrams via the relation $$\begin{aligned} \label{ndouble} \bar n_{\textbf}{k}=1-n_{\textbf}{k},\end{aligned}$$ i.e., $$\begin{aligned} \label{ndouble1} n_{\textbf}{k} n_{\textbf}{k} &= n_{\textbf}{k} -n_{\textbf}{k} \bar n_{\textbf}{k}, \\ \label{ndouble2} \bar n_{\textbf}{k} \bar n_{\textbf}{k} &= \bar n_{\textbf}{k} -n_{\textbf}{k} \bar n_{\textbf}{k}, \\ \label{ndouble3} n_{\textbf}{k} n_{\textbf}{k} n_{\textbf}{k} &= n_{\textbf}{k} -2 n_{\textbf}{k} \bar n_{\textbf}{k}+n_{\textbf}{k} \bar n_{\textbf}{k} \bar n_{\textbf}{k}, \\ \label{ndouble4} \bar n_{\textbf}{k} \bar n_{\textbf}{k} \bar n_{\textbf}{k} &= \bar n_{\textbf}{k} -2 n_{\textbf}{k} \bar n_{\textbf}{k}+n_{\textbf}{k} n_{\textbf}{k} \bar n_{\textbf}{k},\end{aligned}$$ etc.[^11] Contributions which are not anomalous or pseudoanomalous are referred to as normal contributions. Following loosely Balian, Bloch, and de Dominicis [@Balian1961529], we refer to the application of Eq.  according to Eqs. –, etc. as disentanglement, denoted symbolically by $\div$. For the ${T\rightarrow 0}$ limit, the energy denominator exponentials present in the direct and cyclic formula all have to be evaluated via $$\begin{aligned} \label{Dexp} \bar n_{\textbf}{k} \operatorname{\operatorname{e}}^{-\beta ({\varepsilon}_{\textbf}{k}-\mu)} = n_{\textbf}{k}.\end{aligned}$$ The simple relations given by Eqs. (\[ndouble\]) and (\[Dexp\]) play a crucial role in many of the issues and results discussed in the present paper. 
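The identities in Eqs. (\[ndouble\])–(\[Dexp\]) are elementary properties of the Fermi-Dirac function and can be verified directly; the following sketch checks the disentanglement relations and the elimination of the energy-denominator exponential on random single-particle energies (parameter values are arbitrary):

```python
import numpy as np

# Check the disentanglement identities n^2 = n - n*nbar and
# nbar^3 = nbar - 2 n nbar + n^2 nbar, the relation n*nbar = T dn/dmu,
# and nbar * e^{-beta(eps-mu)} = n used to eliminate exponentials.
rng = np.random.default_rng(0)
T, mu = 0.3, 1.0
beta = 1.0 / T
eps = rng.uniform(-2.0, 4.0, size=1000)

n = 1.0 / (1.0 + np.exp(beta * (eps - mu)))
nbar = 1.0 - n

assert np.allclose(n**2, n - n * nbar)
assert np.allclose(nbar**3, nbar - 2 * n * nbar + n**2 * nbar)
assert np.allclose(nbar * np.exp(-beta * (eps - mu)), n)

# n*nbar = T * dn/dmu, checked with a forward finite difference
dmu = 1e-6
n_shift = 1.0 / (1.0 + np.exp(beta * (eps - (mu + dmu))))
assert np.allclose(n * nbar, T * (n_shift - n) / dmu, atol=1e-4)
```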
Discrete spectrum inconsistency and anomalous contributions {#sec23a} ----------------------------------------------------------- Apart from being essential for practical many-body calculations, the thermodynamic limit is in fact essential for the thermodynamic consistency of the grand-canonical perturbation series at low $T$, in particular for ${T\rightarrow 0}$, in the general case (see below). For a finite system with a discrete spectrum at ${T=0}$ one has either $\mu\in\{{\varepsilon}_{\textbf}{k}\}$ or $\mu\not\in\{{\varepsilon}_{\textbf}{k}\}$. Both cases are inconsistent. In the first case the ${T\rightarrow 0}$ limit is singular, because in that case the anomalous contributions diverge. In addition, for discrete systems and $\mu\in\{{\varepsilon}_{\textbf}{k}\}$ the ${T\rightarrow 0}$ limit is singular due to energy denominator singularities.[^12] In the second case the anomalous contributions vanish for ${T\rightarrow 0}$. From $F(T,\mu)=\Omega(T,\mu)+\mu\varrho(T,\mu)$ and the fact that all contributions to $\varrho(T,\mu)=-\partial\Omega(T,\mu)/\partial\mu$ except the ones from $\Omega_\text{ref}(T,\mu)$ are anomalous, for $\mu\not\in\{{\varepsilon}_{\textbf}{k}\}$ we obtain $$\begin{aligned} \label{gcfinite} F(T,\mu)\xrightarrow{T\rightarrow 0} E^{(0)}(\mu), \;\;\;\;\;\;\;\;\;\; \varrho(T,\mu)\xrightarrow{T\rightarrow 0}\sum_{{\textbf}{k}}\theta(\mu-{\varepsilon}_{\textbf}{k}),\end{aligned}$$ where $E^{(0)}({\varepsilon}_{\text{F}})$ corresponds to the adiabatic series. As noted by Kohn and Luttinger [@Kohn:1960zz], the two parts of Eq.  are inconsistent with each other. A possible definition of the chemical potential at ${T=0}$ in the finite case is $$\begin{aligned} \mu(T=0,\varrho)=\frac{E^{(0)}(\varrho+1)+E^{(0)}(\varrho)}{2}.\end{aligned}$$ The second part of Eq.  
however is equivalent to $$\begin{aligned} \mu(T=0,\varrho)=\frac{E_\text{ref}^{(0)}(\varrho+1)+E_\text{ref}^{(0)}(\varrho)}{2} \equiv \mu_\text{ref}(T=0,\varrho),\end{aligned}$$ which contradicts the previous equation. For a given particle number the true chemical potential deviates from the chemical potential of the reference system, and Eq.  would imply that they are equal at $T=0$. Thus, in the discrete case the ${T\rightarrow 0}$ limit of the grand-canonical perturbation series is inconsistent also for $\mu\not\in\{{\varepsilon}_{\textbf}{k}\}$. The same inconsistency can arise in the thermodynamic limit if the reference spectrum has a gap $\Delta$ and $\mu\not\in\{{\varepsilon}_{\textbf}{k}\}$. The ${T\rightarrow 0}$ limit is still smooth in the discrete (gapped) case, so the inconsistency is still present for nonzero $T$, although it is washed out at sufficiently high $T$. Qualitatively, in the discrete case the inconsistency is relevant if the spectrum does not resolve the anomalous terms $\partial^\nu n_{\textbf}{k}/\partial \mu^\nu$. As discussed in Sec. \[sec23\], contributions with such terms can be seen to account for the mismatch generated by using the reference spectrum together with the true chemical potential. If the anomalous terms are not sufficiently resolved the information about this mismatch gets lost and one approaches the paradoxical result that $\mu(T,\varrho)= \mu_\text{ref}(T,\varrho)$.[^13] There are two ways the discrete (gapped) spectrum inconsistency for $\mu\not\in\{{\varepsilon}_{\textbf}{k}\}$ can be partially resolved, i.e., 1. by using the reference chemical potential instead of the true one, 2. by choosing a mean-field that leads to $\mu({T=0},\varrho)= \mu_\text{ref}({T=0},\varrho)$. Case (i) corresponds to the modified perturbation series $F(T,\mu_\text{ref})$. The partial resolution of the discrete spectrum inconsistency in that case is as follows: 1. 
$F(T,\mu_\text{ref})$ involves additional anomalous contributions, and the failure to resolve the old ones is balanced (for anisotropic systems, only partially) by not resolving the new ones. In the gapped case $F(T,\mu_\text{ref})$ reproduces the adiabatic series in the ${T\rightarrow 0}$ limit (if $\mu_\text{ref}\not\in\{{\varepsilon}_{\textbf}{k}\}$). In the gapless case the adiabatic series is reproduced only for isotropic systems; in that case the old and new anomalous contributions cancel for ${T\rightarrow 0}$. Thus, there is still a remainder of the discrete spectrum inconsistency. The information about anisotropy encoded in the anomalous contributions is not resolved in the discrete case at low $T$. In particular, for $F(T,\mu_\text{ref})$ the thermodynamic limit (and ${\Delta\rightarrow 0}$ limit, respectively) and the ${T\rightarrow 0}$ limit are noncommuting limits in the anisotropic case. Regarding case (ii), there are three mean-field renormalization schemes that lead to $\mu(T,\varrho)= \mu_\text{ref}(T,\varrho)$, and accordingly, $F(T,\mu)=F(T,\mu_\text{ref})$: the direct, the cyclic, and the BdD scheme; see Sec. \[sec23\]. The anomalous diagrams are removed in each scheme, but in the direct and cyclic schemes there are still anomalous contributions. Hence, for the direct and cyclic schemes there is no discrete spectrum inconsistency despite anomalous contributions. However, these schemes are well-behaved only at high $T$ where the inconsistency ceases to be relevant. In particular, the ${T\rightarrow 0}$ limit does not exist for the direct and cyclic scheme. The ${T\rightarrow 0}$ limit exists for the BdD scheme, but for ${N>2}$ this scheme exists only in the thermodynamic limit. The commutativity of the ${T\rightarrow 0}$ and ${\Delta \rightarrow 0}$ limits is fully restored in the BdD scheme, irrespective of isotropy. 
The anomalous contributions can be removed and the result $\mu({T=0},\varrho)= \mu_\text{ref}({T=0},\varrho)$ can be achieved also for finite systems, via the mean-field renormalization scheme specified by Eq.  below.[^14] For ${N\leq 2}$ the Eq.  scheme converges to the BdD scheme, but for ${N>2}$ it becomes ill-defined (singular, for ${N\geq 4}$) in the thermodynamic limit. Altogether, we have: 1. The result $\mu({T=0},\varrho)= \mu_\text{ref}({T=0},\varrho)$ can be achieved for finite systems and in the thermodynamic limit, irrespective of isotropy, but for ${N>2}$ these two cases are not smoothly connected. The commutativity of limits can however be fully restored for $F(T,\mu_\text{ref})$: 1. For $F(T,\mu_\text{ref})$ together with the mean-field renormalization scheme given by Eq. \[Ufermi\] below the limit ${T\rightarrow 0}$ commutes with both the thermodynamic limit and the ${\Delta \rightarrow 0}$ limit, irrespective of isotropy. This is because for $F(T,\mu_\text{ref})$, at ${T=0}$ the Eq. \[Ufermi\] scheme removes the anomalous contributions. Case (ii) and case (iii) both lead to the adiabatic formalism, irrespective of isotropy. There are however still anomalous contributions at finite $T$ in case (iii), and the reference chemical potential is identified with the true chemical potential only in case (ii).[^15] Mean-field renormalization schemes {#sec23} ---------------------------------- The usual choices for the mean-field potential are ${U=0}$ (free reference spectrum) or ${U=U_1}$ (Hartree-Fock spectrum). In general, one expects that the choice ${U=U_1}$ leads to an improved perturbation series, compared to ${U=0}$. 
For first-order MBPT this can be seen from the fact that ${U=U_1(T,\mu)}$ and ${U=U_1(\varepsilon_{\text{F}})}$, respectively, are stationary points of the right-hand sides of the inequalities $$\begin{aligned} \Omega(T,\mu) &\leq \Omega_\text{ref}(T,\mu) +\Omega_{U}(T,\mu) + \Omega_1(T,\mu), \\ E^{(0)}(\varepsilon_\text{F}) &\leq E^{(0)}_\text{ref}(\varepsilon_\text{F}) +E^{(0)}_{U}(\varepsilon_\text{F}) + E^{(0)}_1(\varepsilon_\text{F}),\end{aligned}$$ for grand-canonical and adiabatic MBPT, respectively. For truncation orders $N> 1$, however, no similar formal argument singles out ${U=U_1}$ as the best choice. For both ${U=0}$ and ${U=U_1}$, in the thermodynamic limit the grand-canonical perturbation series does not reproduce the adiabatic one for $T\rightarrow 0$. The adiabatic series is also not reproduced in the discrete case, and in that case the grand-canonical series is inconsistent, in general (in particular, for ${U\in\{\,0,U_1\}}$); see Sec. \[sec23a\]. It is now important to note that, at least for ${U=0}$, in general bare grand-canonical MBPT leads to deficient results also in the thermodynamic limit. This is particularly evident for a system with a first-order phase transition: for ${U=0}$ it is impossible to obtain the nonconvex single-phase constrained free energy from $\Omega(T,\mu)$, since $\Omega(T,\mu)$ is necessarily a single-valued function of $\mu$ for ${U=0}$; see also Refs. [@Fritsch:2002hp; @PhysRevC.89.064009; @Wellenhofer:2017qla]. This deficiency can be repaired by modifying the expression for $F(T,\mu)$ in terms of a (truncated) formal expansion about the chemical potential $\mu_\text{ref}\xrightarrow{T\rightarrow 0}{\varepsilon}_{\text{F}}$ of the reference system; see Sec. \[sec42\] for details. This expansion introduces additional contributions, and the structure of these contributions is very similar to that of the anomalous diagrams.
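The grand-canonical inequality above is the Gibbs-Bogoliubov bound $\Omega \leq \Omega_\text{ref}+\langle V-U\rangle_\text{ref}$, which is easy to test in a finite-dimensional toy model: for Hermitian matrices $H=H_0+V$ and $H_\text{ref}=H_0+U$, the exact free energy never exceeds the first-order bound for any trial mean field. A sketch with random toy matrices (all parameters illustrative):

```python
import numpy as np

# Finite-dimensional Gibbs-Bogoliubov check: for Hermitian H = H0 + V and
# H_ref = H0 + U, F(H) <= F(H_ref) + <H - H_ref>_ref for every trial U.
# H0, V and the trial mean fields are random toy data.

def free_energy(H, beta):
    w = np.linalg.eigvalsh(H)
    return -np.log(np.sum(np.exp(-beta * w))) / beta

rng = np.random.default_rng(1)
d, beta = 6, 2.0
H0 = np.diag(rng.uniform(0.0, 2.0, d))
A = rng.normal(size=(d, d))
V = 0.1 * (A + A.T)
H = H0 + V

F_exact = free_energy(H, beta)
bounds = []
for scale in (0.0, 0.5, 1.0):             # family of trial mean fields
    U = scale * np.diag(np.diag(V))
    Href = H0 + U
    w, P = np.linalg.eigh(Href)
    p = np.exp(-beta * w)
    p /= p.sum()                           # reference Gibbs weights
    expect = np.sum(p * np.diag(P.T @ (H - Href) @ P))
    bounds.append(free_energy(Href, beta) + expect)

assert all(F_exact <= b + 1e-12 for b in bounds)
```

The bound holds for every trial $U$; the stationarity statement in the text corresponds to the fact that the optimal diagonal mean field minimizes the right-hand side.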
For isotropic systems it can be seen that the anomalous parts of these additional contributions cancel the old ones for ${T\rightarrow 0}$, leading to $$\begin{aligned} F(T,\mu_\text{ref})\xrightarrow{T\rightarrow 0}E^{(0)}(\varepsilon_{\text{F}})\end{aligned}$$ in the isotropic case.[^16] For $\Omega(T,\mu)$ with ${U=U_1(T,\mu)}$ one has $\varrho(T,\mu)=\sum_{\textbf}{k} n_{\textbf}{k}$ at truncation order ${N=1}$. Thus, for ${N=1}$ (but not for ${N>1}$) it is $$\begin{aligned} F(T,\mu)=F(T,\mu_\text{ref}),\end{aligned}$$ with $\mu(T,\varrho)=\mu_\text{ref}(T,\varrho)$, where $F(T,\mu_\text{ref})$ now corresponds to the modified series with ${U=U_1(T,\mu_\text{ref})}$. For $\Omega(T,\mu)$ the change from ${U=0}$ to ${U=U_1}$ removes all anomalous (and normal) diagrams with single-vertex loops. For $F(T,\mu_\text{ref})$ both the reference spectrum and the reference chemical potential get renormalized, and both the anomalous diagrams and the additional ones with single-vertex loops are removed. Now, these features make evident that there is a deficiency in the grand-canonical series with ${U=0}$ irrespective of the presence of a first-order phase transition: there is a mismatch in the Fermi-Dirac distribution functions generated by using the spectrum of $H_0$ together with the true chemical potential, leading to decreased perturbative convergence, compared to $F(T,\mu_\text{ref})$ with the same setup.[^17] One may interpret the anomalous contributions as a symptom of this mismatch. 
In that sense, the “expanding away” of the mismatch, i.e., the construction of $F(T,\mu_\text{ref})$, corresponds to a symptomatic treatment that provides, as a remedy, additional anomalous contributions that counteract the old ones.[^18] The mismatch can however be ameliorated (cured, for $N=1$, in the $U=U_1$ case) by improving the quality of the reference Hamiltonian: the change from ${U=0}$ to ${U=U_1}$ removes the main symptom (and the corresponding remedy, in the modified case), the anomalous diagrams with single-vertex loops. Altogether, this suggests that the convergence behavior of $\Omega(T,\mu)$ is inferior to that of $F(T,\mu_\text{ref})$ also for ${U=U_1}$. Moreover, one can suspect that both $\Omega(T,\mu)$ and $F(T,\mu_\text{ref})$ may be further improved by using a mean field beyond Hartree-Fock. In the best case, the additional mean-field contributions should remove all the remaining anomalous diagrams (and additional diagrams, for the modified series), i.e., the ones with higher-order pieces, and lead to $F(T,\mu)=F(T,\mu_\text{ref})$ for truncation orders ${N>1}$. In the following, we introduce three different renormalization schemes where the mean field receives additional contributions for each $N$, i.e., $$\begin{aligned} \label{mbptren1} U^{\aleph,(\ast\ast),\div}=U_1+\sum_{n=2}^N U^{\aleph,(\ast\ast),\div}_n.\end{aligned}$$ Here, $\aleph$ refers to one of the three time-independent formulas (direct, cyclic, or reduced), $\div$ to the disentanglement, and $\ast\ast$ to the regularization of energy denominators required to make the reduced formula well-defined.
Equation  is understood to imply a reordering of the perturbation contributions such that a given order $n \in\{1,\ldots, N\}$ involves only diagrams for which $$\begin{aligned} \label{Ncounting} \mathscr{N}(V)+\mathscr{N}(U_1)+\sum_{m=2}^N m \mathscr{N}(U_m) =n ,\end{aligned}$$ where $\mathscr{N}(V)$ is the number of $V$ vertices, and $\mathscr{N}(U_1)$ and $\mathscr{N}(U_m)$ the numbers of $-U_1$ and $-U^{\aleph,(\ast\ast),\div}_m$ vertices, respectively. This can be implemented by writing Eq.  as $$\begin{aligned} H = \underbrace{(H_0+U^{\aleph,(\ast\ast),\div})}_{H_\text{ref}} + \lambda V - \lambda U_1-\sum_{n=2}^N\lambda^n U^{\aleph,(\ast\ast),\div}_n\end{aligned}$$ and ordering the perturbation series with respect to powers of $\lambda$ (which is set to $\lambda=1$ at the end). For each truncation order $N$, the three schemes constitute three (different, for ${N>1}$) stationary points of MBPT. Related to this, in each of the three schemes the (direct, cyclic, and reduced, respectively) contributions from anomalous diagrams are removed, and in each scheme the relation between the particle number and the chemical potential matches the adiabatic relation, i.e., $$\begin{aligned} U=U^{\aleph,(\ast\ast),\div}:\;\;\;\;\; \varrho(T,\mu)= -\frac{\partial \Omega(T,\mu)}{\partial\mu}=\sum_{{\textbf{k}}} n_{\textbf{k}},\end{aligned}$$ so $F(T,\mu)=F(T,\mu_\text{ref})$ holds in each scheme. The ${T\rightarrow 0}$ limit exists, however, only for the case where $U=U^{\text{reduced},\ast\ast,\div}$. In that case, the grand-canonical formalism and zero-temperature MBPT are consistent with each other for both isotropic and anisotropic systems, given that the adiabatic continuation is based on $H_\text{ref}=H_0+U^{\text{reduced},\ast\ast,\div}$.
### Scheme by Balian, Bloch, and de Dominicis (direct scheme) In the renormalization scheme by Balian, Bloch, and de Dominicis [@Balian1961529], the mean-field potential is, for truncation order $N$, defined as $$\begin{aligned} \label{Udirect} U^{\text{direct},\div}_{{\textbf}{k}} = \sum_{n=1}^{N} U^{\text{direct},\div}_{n,{\textbf}{k}} &= \sum_{n=1}^{N} \frac{\delta \Omega_{n,\text{normal}}^{\text{direct},\div}}{\delta n_{\textbf}{k}} = \frac{\delta \mathcal{D}^{\text{direct},\div}}{\delta n_{\textbf}{k}},\end{aligned}$$ i.e., only the direct contributions from normal diagrams are included, and $\div$ (disentanglement) means that for each set of (normal) articulation lines with identical three-momenta only one (hole or particle) distribution function appears \[i.e., only the first term of the right-hand sides of Eqs. –, etc., is included\]. The ${n=1}$ contribution to the mean field corresponds (as in the other schemes) to the usual Hartree-Fock single-particle potential, i.e., $$\begin{aligned} \label{UHF} U_{1,{\textbf}{k}} = \sum_{{\textbf}{k}'} \braket{\psi_{{\textbf}{k}}\psi_{{\textbf}{k}'}| V|\psi_{{\textbf}{k}} \psi_{{\textbf}{k}'}} n_{{\textbf}{k}'},\end{aligned}$$ where antisymmetrization is implied. For the higher-order contributions, the functional derivative $\delta/\delta n_{\textbf}{k}$ has to be evaluated *without* applying Eq. , i.e., the energy denominator exponentials have to be kept in the form that results from the contour integral. Otherwise, the functional derivative would be ill-defined (due to the emergence of poles). For ${n=2}$, one finds $$\begin{aligned} \label{U2dir} U^{\text{direct},(\div)}_{2,{\textbf}{k}} &= \frac{1}{2} \sum_{{\textbf}{k}_2, {\textbf}{k}_3, {\textbf}{k}_4} \! 
|\braket{\psi_{{\textbf}{k}}\psi_{{\textbf}{k}_2}| V|\psi_{{\textbf}{k}_3} \psi_{{\textbf}{k}_4}}|^2 {\nonumber \\}& \quad \times \Big[ n_{{\textbf}{k}_2} \bar n_{{\textbf}{k}_3} \bar n_{{\textbf}{k}_4}\mathcal{F}^\text{direct}(D) - n_{{\textbf}{k}_3} n_{{\textbf}{k}_4} \bar n_{{\textbf}{k}_2} \mathcal{F}^\text{direct}(-D) \Big] ,\end{aligned}$$ where $$\begin{aligned} \label{F2T0lim} \mathcal{F}^\text{direct}(D) &= \frac{1-\beta D-\operatorname{\operatorname{e}}^{-\beta D}}{\beta D^2} \xrightarrow{D\rightarrow 0} -\frac{\beta}{2},\end{aligned}$$ with $D={\varepsilon}_{{\textbf}{k}_3}+{\varepsilon}_{{\textbf}{k}_4} - {\varepsilon}_{{\textbf}{k}_2} - {\varepsilon}_{{\textbf}{k}}$. The ${T\rightarrow 0}$ limit of Eq.  is singular, due to the energy denominator exponential in $\mathcal{F}^\text{direct}(D)$: the functional derivative has removed one distribution function in the integrand, inhibiting the complete elimination of the energy denominator exponential via Eq. . Hence, the renormalization scheme of Balian, Bloch, and de Dominicis is of interest only for systems which are sufficiently close to the classical limit.[^19] The direct contributions from anomalous diagrams composed of two normal pieces that are not (but may involve) $-U$ vertices have the factorized form $$\begin{aligned} \label{directfactorized} \Omega_{n_1+n_2,\text{anomalous}}^{\text{direct},\div} &= -\frac{\beta}{2} \sum_{\textbf}{k} U^{\text{direct},\div}_{n_1,{\textbf}{k}} n_{\textbf}{k} \bar n_{\textbf}{k} \, U^{\text{direct},\div}_{n_2,{\textbf}{k}} (2-\delta_{n_1,n_2}) ,\end{aligned}$$ and similar for anomalous diagrams with several normal (non $-U$) pieces; see Sec. \[sec42\]. Given that for normal diagrams with $-U$ vertices the functional derivative in Eq.  acts only on the diagrammatic lines,[^20] Eq.  implies that the direct contributions from these diagrams are all canceled by the contributions from the corresponding diagrams with $-U$ pieces. 
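Numerically, the kernel $\mathcal{F}^\text{direct}(D)$ of Eq. \[F2T0lim\] has a removable singularity at ${D=0}$; a stable evaluation (a sketch using `np.expm1` near the singular point) confirms the finite limit $-\beta/2$ and the fixed sign of the kernel:

```python
import numpy as np

# F^direct(D) = (1 - beta*D - e^{-beta D})/(beta*D^2) has a removable
# singularity at D = 0 with limit -beta/2.  Writing the numerator as
# -(expm1(-beta*D) + beta*D) keeps the evaluation stable for small D.
def F_direct(D, beta):
    D = np.asarray(D, dtype=float)
    with np.errstate(divide='ignore', invalid='ignore'):
        val = -(np.expm1(-beta * D) + beta * D) / (beta * D**2)
    return np.where(np.abs(beta * D) < 1e-6, -beta / 2.0, val)

beta = 4.0
assert abs(float(F_direct(0.0, beta)) + beta / 2) < 1e-12
assert abs(float(F_direct(1e-4, beta)) + beta / 2) < 1e-3
assert float(F_direct(2.0, beta)) < 0.0   # kernel is negative for D > 0
```

That the value at ${D=0}$ is $-\beta/2$, i.e., grows without bound for ${T\rightarrow 0}$, is precisely the singular behavior noted in the text.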
The resulting perturbation series is then given by $$\begin{aligned} \label{MBPTrenorm1a} \Omega(T,\mu) &= \Omega_\text{ref}(T,\mu) + \Omega_{U}(T,\mu) + \mathcal{D}^{\text{direct},\div}(T,\mu).\end{aligned}$$ Using ${\varepsilon}_{{\textbf}{k}}={\varepsilon}_{0,{\textbf}{k}}+U^{\text{direct},\div}_{{\textbf}{k}}$, Eq.  can be written in the equivalent form $$\begin{aligned} \label{MBPTrenorm1} \Omega[n_{\textbf}{k},T] &=T \sum_{\textbf}{k} \big( n_{\textbf}{k}\ln n_{\textbf}{k} + \bar n_{\textbf}{k}\ln \bar n_{\textbf}{k}\big) +\sum_{\textbf}{k} \left({\varepsilon}_{0,{\textbf}{k}} - \mu \right) n_{\textbf}{k} {\nonumber \\}& \quad + \mathcal{D}^{\text{direct},\div}[n_{\textbf}{k},T],\end{aligned}$$ which, using Eqs.  and , can be seen to be stationary under variations of the distribution functions, $\delta \Omega[n_{\textbf}{k},T]/\delta n_{\textbf}{k} =0$. From this one readily obtains the following expressions for the fermion number $\varrho$, the entropy $S$, and the internal energy $E$: $$\begin{aligned} \label{StatQP1a} \varrho &= \sum_{\textbf}{k} n_{\textbf}{k}, \\ \label{StatQP2a} S &=-\sum_{\textbf}{k} \big( n_{\textbf}{k}\ln n_{\textbf}{k} + \bar n_{\textbf}{k}\ln \bar n_{\textbf}{k}\big) - \frac{\partial \mathcal{D}^{\text{direct},\div}[n_{\textbf}{k},T]}{\partial T} , \\ E &= \sum_{\textbf}{k} {\varepsilon}_{0,{\textbf}{k}} n_{\textbf}{k} + \mathcal{D}^{\text{direct},\div}[n_{\textbf}{k},T] - T\frac{\partial \mathcal{D}^{\text{direct},\div}[n_{\textbf}{k},T]}{\partial T}.\end{aligned}$$ The variation of the internal energy $\delta E[n_{\textbf}{k},T]/\delta n_{\textbf}{k}$ is given by $$\begin{aligned} \label{StatQP3a} \frac{\delta E}{\delta n_{\textbf}{k}} &= {\varepsilon}_{{\textbf}{k}} -T\frac{\partial U^{\text{direct},\div}_{{\textbf}{k}}[n_{\textbf}{k},T]}{\partial T}.\end{aligned}$$ The relations given by Eqs. 
, and match those of Fermi-liquid theory [@Landau; @Landau2; @Landau3], except for the terms due to the explicit temperature dependence of $\mathcal{D}^{\text{direct},\div}[n_{\textbf}{k},T]$ and $U^{\text{direct},\div}_{\textbf}{k}[n_{\textbf}{k},T]$. ### Cyclic scheme There is a straightforward variant of the scheme by Balian, Bloch, and de Dominicis: the cyclic scheme, with mean-field potential $$\begin{aligned} \label{Ucyclic} U^{\text{cyclic},\div}_{{\textbf}{k}} = \sum_{n=1}^{N} U^{\text{cyclic},\div}_{n,{\textbf}{k}} &= \sum_{n=1}^{N} \frac{\delta \Omega_{n,\text{normal}}^{\text{cyclic},\div}}{\delta n_{\textbf}{k}} = \frac{\delta \mathcal{D}^{\text{cyclic},\div}}{\delta n_{\textbf}{k}}.\end{aligned}$$ At second order one has $$\begin{aligned} \label{U2cyc} U^{\text{cyclic},(\div)}_{2,{\textbf}{k}} &= -\frac{1}{4} \sum_{{\textbf}{k}_2, {\textbf}{k}_3, {\textbf}{k}_4} \! |\braket{\psi_{{\textbf}{k}}\psi_{{\textbf}{k}_2}| V|\psi_{{\textbf}{k}_3} \psi_{{\textbf}{k}_4}}|^2 {\nonumber \\}& \quad \times \Big[ n_{{\textbf}{k}_2} \bar n_{{\textbf}{k}_3} \bar n_{{\textbf}{k}_4} \mathcal{F}^\text{cyclic}(D) -n_{{\textbf}{k}_3} n_{{\textbf}{k}_4} \bar n_{{\textbf}{k}_2} \mathcal{F}^\text{cyclic}(-D) \Big],\end{aligned}$$ where $$\begin{aligned} \label{F2cycT0lim} \mathcal{F}^\text{cyclic}(D) &= \frac{1-\operatorname{\operatorname{e}}^{-\beta D}}{ D} \xrightarrow{D\rightarrow 0} \beta.\end{aligned}$$ In the cyclic scheme, the perturbation series and thermodynamic relations have the same structure as in the direct scheme. In particular, the same factorization property holds (see Sec. \[sec42\]), and again the zero-temperature limit does not exist \[as evident from Eq. \]. 
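The contrasting ${D\rightarrow 0}$ behavior of the two second-order kernels, and the $\beta$ dependence responsible for the failure of the ${T\rightarrow 0}$ limit, can be confirmed symbolically (a sketch using sympy):

```python
import sympy as sp

# Second-order kernels of the cyclic and direct schemes:
# F^cyclic(D) = (1 - e^{-beta D})/D                    -> beta    as D -> 0,
# F^direct(D) = (1 - beta D - e^{-beta D})/(beta D^2)  -> -beta/2 as D -> 0.
# Both D -> 0 values grow without bound as beta = 1/T -> infinity,
# consistent with the absence of a T -> 0 limit noted in the text.
D, beta = sp.symbols('D beta', positive=True)
F_cyc = (1 - sp.exp(-beta * D)) / D
F_dir = (1 - beta * D - sp.exp(-beta * D)) / (beta * D**2)

lim_cyc = sp.limit(F_cyc, D, 0)
lim_dir = sp.limit(F_dir, D, 0)
assert lim_cyc == beta
assert lim_dir == -beta / 2
```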
The direct scheme is, however, distinguished from the cyclic scheme in that it leads to the identification of the Fermi-Dirac distribution functions with the exact mean occupation numbers [@Balian1961529; @Balian1961529b; @boer] (see also Appendix \[app22\]) and, in the classical limit, to the virial expansion [@Balian1961529b]. This indicates that, for calculations close to the classical limit, the direct scheme is preferable to the cyclic scheme. ### Reduced scheme(s) In the renormalization scheme outlined by Balian and de Dominicis (BdD) [@statquasi3; @statquasi1], the term $\mathcal{D}^{\text{direct},\div}[n_{\textbf}{k},T]$ (or $\mathcal{D}^{\text{cyclic},\div}[n_{\textbf}{k},T]$) is replaced by a term $\mathcal{D}^\text{BdD}[n_{\textbf}{k}]$ that has no explicit temperature dependence in addition to the one given by the functional dependence on $n_{\textbf}{k}(T,\mu)$, and satisfies $$\begin{aligned} \label{T0BdD} \mathcal{D}^\text{BdD}(T,\mu) \xrightarrow{T\rightarrow 0} \sum_{n=1}^N E^{(0)}_n({\varepsilon}_{\text{F}}),\end{aligned}$$ where, by Eq. , $\mu\xrightarrow{T\rightarrow 0}{\varepsilon}_{\text{F}}$, and $E^{(0)}_n({\varepsilon}_{\text{F}})$ corresponds to the sum of all contributions of order $n$ in zero-temperature MBPT. This implies consistency with the adiabatic zero-temperature formalism irrespective of isotropy. The BdD mean field is given by $$\begin{aligned} \label{UBdDfirst} U^\text{BdD}_{\textbf}{k}[n_{\textbf}{k}]=\frac{\delta\mathcal{D}^\text{BdD}[n_{\textbf}{k}]}{\delta n_{\textbf}{k}}.\end{aligned}$$ Since $\mathcal{D}^\text{BdD}[n_{\textbf}{k}]$ is supposed to have no explicit temperature dependence, it must be constructed by eliminating all energy denominator exponentials via Eq. . But then the functional derivative will lead to poles. To make the functional derivative well-defined, the energy denominators have to be regularized.
Now, as first recognized by Balian and de Dominicis [@BALIAN1960502] as well as Horwitz, Brout and Englert [@brout1], for a finite system with a discrete spectrum the following renormalized perturbation series can be constructed: $$\begin{aligned} \label{MBPTrenorm2a} \Omega(T,\mu) &= \Omega_\text{ref}(T,\mu) + \Omega_{U}(T,\mu) + \mathcal{D}^{\text{reduced,}\ast,\div}(T,\mu),\end{aligned}$$ with mean field $$\begin{aligned} \label{Uredast} U^{\text{reduced,}\ast,\div}_{\textbf}{k}=\frac{\delta \mathcal{D}^{\text{reduced,}\ast,\div}}{\delta n_{\textbf}{k}},\end{aligned}$$ where $$\begin{aligned} \label{MBPTrenorm2b} \mathcal{D}^{\text{reduced,}\ast,\div}(T,\mu) = \sum_{n=1}^{N} \Omega_{n,\text{normal}}^{\text{reduced,}\ast,\div}\xrightarrow{T\rightarrow 0} \sum_{n=1}^N E^{(0)}_n({\varepsilon}_{\text{F}}),\end{aligned}$$ with $\mu\xrightarrow{T\rightarrow 0}{\varepsilon}_{\text{F}}$. Here, ${\ast}$ means that the energy denominator poles are excluded in the discrete state sums (which makes the reduced formula well-defined for a finite system). Equation  entails another factorization property, i.e., (see Sec. \[sec43\]) $$\begin{aligned} \label{reducedfactorized1} \Omega_{n_1+n_2,\text{anomalous}}^{\text{reduced,}\ast,\div} &= -\frac{\beta}{2} \sum_{\textbf}{k} U^{\text{reduced,}\ast,\div}_{n_1,{\textbf}{k}} n_{\textbf}{k} \bar n_{\textbf}{k} U^{\text{reduced,}\ast,\div}_{n_2,{\textbf}{k}} {\nonumber \\}&\quad \times (2-\delta_{n_1,n_2}) .\end{aligned}$$ In Eq. , $\div$ implies that the pseudoanomalous terms from the reduced expressions for normal two-particle reducible diagrams with the same pieces are added (to the reduced expressions for the corresponding anomalous diagrams). Equations  and  lead to the Fermi-liquid relations for $\varrho$, $S$, and $\delta E/\delta n_{\textbf}{k}$.
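In the noninteracting limit ($\mathcal{D}=0$, so ${\varepsilon}_{\textbf}{k}={\varepsilon}_{0,{\textbf}{k}}$), the stationarity $\delta\Omega[n_{\textbf}{k},T]/\delta n_{\textbf}{k}=0$ at the Fermi-Dirac distribution can be checked directly on a discretized spectrum. A sketch with arbitrary test values for $T$, $\mu$, and the single-particle energies:

```python
import numpy as np

T, mu = 0.5, 1.0
eps = np.linspace(0.0, 3.0, 50)   # discretized single-particle energies

def omega(n):
    """Grand potential Omega[n] in the noninteracting limit (D = 0)."""
    nb = 1.0 - n
    return T * np.sum(n * np.log(n) + nb * np.log(nb)) + np.sum((eps - mu) * n)

n_fd = 1.0 / (1.0 + np.exp((eps - mu) / T))   # Fermi-Dirac distribution

# dOmega/dn_k = T*log(n_k/(1-n_k)) + eps_k - mu vanishes at n_k = n_fd
grad = T * np.log(n_fd / (1.0 - n_fd)) + (eps - mu)
print(np.max(np.abs(grad)))   # ~0 (machine precision)

# Omega grows under any small perturbation away from the stationary point
dn = 1e-3 * np.random.default_rng(0).standard_normal(eps.size)
print(omega(n_fd + dn) > omega(n_fd))   # True
```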
The validity of the ${\ast}$ prescription for finite systems is however somewhat questionable, since it disregards the contributions from the energy denominator poles present in the cyclic and direct case for ${T\neq 0}$ \[see Eqs.  and \].[^21] In the thermodynamic limit the contributions from energy denominator poles have measure zero. However, the thermodynamic limit of $\mathcal{D}^{\text{reduced,}\ast,\div}$ is singular at ${T\neq 0}$, due to terms with energy denominator poles of even degree.[^22] In addition, there are terms with several (odd) energy denominator poles for which the thermodynamic limit is not well-defined, as evident from the Poincaré-Bertrand transformation formula Eq. ; this implies that in the thermodynamic limit $U^{\text{reduced,}\ast,\div}_{\textbf}{k}$ is ill-defined also at ${T=0}$. All in all, Eqs. \[MBPTrenorm2a\], , and indicate that the BdD renormalization scheme should correspond to $$\begin{aligned} \label{bddassume} \mathcal{D}^\text{BdD}[n_{\textbf}{k}] = \mathcal{D}^{\text{reduced,}\ast\ast,\div}[n_{\textbf}{k}],\end{aligned}$$ where ${\ast\ast}$ refers to the energy denominator regularization for infinite systems. Statistical versus dynamical quasiparticles {#sec24} ------------------------------------------- The statistical quasiparticles associated with the BdD renormalization scheme are distinguished from the dynamical quasiparticles [@Noz1; @Noz2; @Benfatto2006] associated with the asymptotic stability of the low-lying excited states. In the following, we examine how dynamical quasiparticles arise in grand-canonical MBPT, and compare their energies to the ones of the statistical quasiparticles (i.e., the single-particle energies in the BdD scheme). More details on the (various forms of the) self-energy are given in Appendix \[app2\]. In particular, in Appendix \[app22\] we show that (only) in the direct scheme the exact mean occupation numbers $f_{{\textbf}{k}}(T,\mu)$ are identified with the Fermi-Dirac distributions. 
Note that since the ${T\rightarrow 0}$ limit does not exist for the direct scheme, this result is consistent with the discontinuity of $f_{{\textbf}{k}}(T,\mu)$ at ${T=0}$. The consistency of $f_{{\textbf}{k}}({T\neq 0},\mu)=n_{{\textbf}{k}}({T\neq 0},\mu)$ with the results discussed below is examined in Appendix \[app22\]. ### Dynamical quasiparticles without mean field In MBPT (for normal systems), dynamical quasiparticles arise as follows. The perturbative contributions $\Sigma_{n,{\textbf}{k}}(z,T,\mu)$ to the frequency-space self-energy $\Sigma_{{\textbf}{k}}(z,T,\mu)$ are given by a specific analytic continuation (see Appendix \[app21\]) of the perturbative contributions to the Matsubara self-energy $\Xi_{{\textbf}{k}}(z_l,T,\mu)$, where $$\begin{aligned} \label{Matsubarafreq} z_l=\frac{{\text{i}}(2l+1)\pi}{\beta}+\mu\end{aligned}$$ are the Matsubara frequencies, with $l\in\mathbb{Z}$. For example, in bare MBPT (with ${U=0}$) the two-particle irreducible second-order contribution to $\Xi_{{\textbf}{k}}(z_l,T,\mu)$ is given by \[see Eq. \] $$\begin{aligned} \label{sigma2} \Xi_{2,{\textbf}{k}}(z_l,T,\mu) &= \frac{1}{2} \sum_{{\textbf}{k}_2, {\textbf}{k}_3, {\textbf}{k}_4} \! |\braket{\psi_{{\textbf}{k}}\psi_{{\textbf}{k}_2}| V|\psi_{{\textbf}{k}_3} \psi_{{\textbf}{k}_4}}|^2 n_{{\textbf}{k}_2} \bar n_{{\textbf}{k}_3} \bar n_{{\textbf}{k}_4} {\nonumber \\}& \quad \times \frac{e^{-\beta({\varepsilon}_{{\textbf}{k}_3}+{\varepsilon}_{{\textbf}{k}_4} - {\varepsilon}_{{\textbf}{k}_2} - z_l)}-1}{{\varepsilon}_{{\textbf}{k}_3}+{\varepsilon}_{{\textbf}{k}_4} - {\varepsilon}_{{\textbf}{k}_2} - z_l}.\end{aligned}$$ From this, the expression for $\Sigma_{2,{\textbf}{k}}(z,T,\mu)$ is obtained by *first* substituting $\operatorname{\operatorname{e}}^{\beta(z_l-\mu)}=-1$ and *then* performing the analytic continuation. Using Eq. , one gets $$\begin{aligned} \label{sigma2b} \Sigma_{2,{\textbf}{k}}(z,T,\mu) &= -\frac{1}{2} \sum_{{\textbf}{k}_2, {\textbf}{k}_3, {\textbf}{k}_4} \! 
|\braket{\psi_{{\textbf}{k}}\psi_{{\textbf}{k}_2}| V|\psi_{{\textbf}{k}_3} \psi_{{\textbf}{k}_4}}|^2 {\nonumber \\}& \quad \times \frac{ n_{{\textbf}{k}_2} \bar n_{{\textbf}{k}_3} \bar n_{{\textbf}{k}_4} + n_{{\textbf}{k}_3} n_{{\textbf}{k}_4} \bar n_{{\textbf}{k}_2}} {{\varepsilon}_{{\textbf}{k}_3}+{\varepsilon}_{{\textbf}{k}_4} - {\varepsilon}_{{\textbf}{k}_2} - z}.\end{aligned}$$ As evident from the second-order contribution, setting $z = \omega\pm{\text{i}}\eta$, with $\omega$ real and $\eta$ infinitesimal, leads to the general relation [@kadanoffbaym] $$\begin{aligned} \label{dynquasi1a} \Sigma_{\textbf}{k}(\omega\pm{\text{i}}\eta,T,\mu) = \mathcal{S}_{\textbf}{k}(\omega,T,\mu) \mp {\text{i}}\mathcal{J}_{\textbf}{k}(\omega,T,\mu),\end{aligned}$$ where $\mathcal{S}_{\textbf}{k}$ and $\mathcal{J}_{\textbf}{k}$ are real, and ${\mathcal{J}_{\textbf}{k}\geq 0}$ (see Appendix \[app21\]). From the property that at ${T=0}$ the energy denominators in the expressions for the perturbative contributions to the self-energy, $\Sigma_{n,{\textbf}{k}}(z,T,\mu)$, vanish only for $z\rightarrow \mu$, Luttinger [@PhysRev.121.942] showed that $$\begin{aligned} \label{dynquasi1} \mathcal{J}_{\textbf}{k}(\omega,0,\mu) \xrightarrow{\omega\rightarrow \mu} C_{\textbf}{k}(\mu)\, (\omega-\mu)^2,\end{aligned}$$ with ${C_{\textbf}{k}(\mu)\geq 0}$. Crucial for our discussion (i.e., in particular for the next paragraph), this result holds not only if $\Sigma_{\textbf}{k}$ is calculated using self-consistent propagators but also if $\Sigma_{\textbf}{k}$ is calculated using bare propagators. In Ref. [@PhysRev.119.1153], Luttinger showed that Eq.  
implies a discontinuity at ${T=0}$ and ${{\textbf}{k}={\textbf}{k}_{\text{F}}}$ of the exact mean occupation numbers $f_{{\textbf}{k}}(T,\mu)$ of the momentum eigenstates $\ket{\psi_{{\textbf}{k}}}$, i.e., [@Luttinger:1960ua; @Parry] $$\begin{aligned} f_{{\textbf}{k}}(T,\mu)=\braket{\!\braket{a_{\textbf}{k}^\dagger a_{\textbf}{k}}\!} = \int \limits_{-\infty}^\infty \!\frac{d\omega}{2\pi} \frac{1}{1+\operatorname{\operatorname{e}}^{\beta(\omega-\mu)}}\mathcal{A}_{\textbf}{k}(\omega,T,\mu),\end{aligned}$$ where $\braket{\!\braket{\ldots}\!}$ denotes the true ensemble average, and the spectral function $\mathcal{A}_{\textbf}{k}(\omega,T,\mu)$ is given by [@kadanoffbaym] (see also Appendix \[app21\]) $$\begin{aligned} \label{spectral} \mathcal{A}_{\textbf}{k}(\omega,T,\mu)=\frac{2 \mathcal{J}_{\textbf}{k}(\omega,T,\mu)}{\left[\omega-{\varepsilon}_{0,{\textbf}{k}}-\mathcal{S}_{\textbf}{k}(\omega,T,\mu)\right]^2+\left[\mathcal{J}_{\textbf}{k}(\omega,T,\mu)\right]^2}.\end{aligned}$$ The (true) Fermi momentum ${\textbf}{k}_{\text{F}}$, defined in terms of the discontinuity of $f_{{\textbf}{k}}(0,\mu)$, is determined by [@PhysRev.119.1153] $$\begin{aligned} \label{dynquasi2} \mu={\varepsilon}_{0,{\textbf}{k}_{\text{F}}} + \mathcal{S}_{{\textbf}{k}_{\text{F}}}(\mu,0,\mu).\end{aligned}$$ The lifetime of a single-mode excitation with momentum ${\textbf}{k}$ of the ground state is determined by the width of the spectral function at ${T=0}$ [@Fetter; @kadanoffbaym]. From Eqs.  and , the width vanishes (i.e., the excitation becomes stable against decay into collective modes) for ${\omega\rightarrow \mu}$ and ${{\textbf}{k}\rightarrow{\textbf}{k}_{\text{F}}}$. 
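For a schematic, frequency-independent self-energy, the spectral function $\mathcal{A}_{\textbf}{k}$ of Eq. \[spectral\] reduces to a Lorentzian, and the sum rule $\int\! \frac{d\omega}{2\pi}\,\mathcal{A}_{\textbf}{k}(\omega,T,\mu)=1$ can be checked numerically. A toy sketch (constant $\mathcal{S}_{\textbf}{k}$ and $\mathcal{J}_{\textbf}{k}$, and all parameter values, are assumptions made only for this illustration):

```python
import numpy as np
from scipy.integrate import quad

eps0, S, J = 1.0, 0.4, 0.2   # schematic constant self-energy values

def A(w):
    """Spectral function for omega-independent S and J (a Lorentzian)."""
    return 2.0 * J / ((w - eps0 - S) ** 2 + J ** 2)

norm, _ = quad(lambda w: A(w) / (2.0 * np.pi), -5e3, 5e3,
               points=[eps0 + S], limit=200)
print(norm)   # ~1, up to small tail corrections from the finite window
```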
The energies $\mathcal{E}_{\textbf}{k}$ of the dynamical quasiparticles are therefore determined by $$\begin{aligned} \label{dynquasi2b} \mathcal{E}_{\textbf}{k}(\mu)={\varepsilon}_{0,{\textbf}{k}} + \mathcal{S}_{{\textbf}{k}}(\mathcal{E}_{\textbf}{k},0,\mu),\end{aligned}$$ where ${\mathcal{E}_{\textbf}{k}\approx \mu}$ and ${{\textbf}{k}\approx{\textbf}{k}_{\text{F}}}$ (low-lying excitations).[^23] ### Dynamical quasiparticles with mean field The distinction between the energies of statistical and dynamical quasiparticles can now be made explicit, in a specific sense. For bare MBPT with mean field $U_{\textbf}{k}(T,\mu)$ the self-energy is given by $$\begin{aligned} \label{dynquasi3b} \Sigma_{{\textbf}{k}}(z,T,\mu) &= -U_{\textbf}{k}(T,\mu) +\Sigma'_{{\textbf}{k}}(z,T,\mu).\end{aligned}$$ Here, the first term corresponds to the contribution from the self-energy diagram composed of a single $-U$ vertex. Since bare propagators are used, $\Sigma'_{{\textbf}{k}}(z,T,\mu)$ involves not only one- and two-particle irreducible but also two-particle reducible self-energy diagrams (including diagrams with $-U$ vertices); see, e.g., Ref. [@PLATTER2003250].[^24] It can be seen that \[see Eq. \] $$\begin{aligned} \label{dynquasi4} \Sigma'_{n,{\textbf}{k}}(z) = \left[\frac{\delta \Omega_n^{\text{reduced}}[n_{\textbf}{k}]}{\delta n_{\textbf}{k}}\bigg|_{{\varepsilon}_{\textbf}{k}= z} \right]_{{\textbf}{k} \notin \{\text{articulation lines}\}},\end{aligned}$$ (with $\text{Im}[z]\neq 0$). Instead of Eq.  
we have $$\begin{aligned} \Sigma_{{\textbf}{k}}(\omega\pm{\text{i}}\eta,T,\mu) &= -U_{\textbf}{k}(T,\mu) +\mathcal{S}'_{{\textbf}{k}}(\omega,T,\mu)\mp{\text{i}}\mathcal{J}'_{{\textbf}{k}}(\omega,T,\mu),\end{aligned}$$ with $$\begin{aligned} \label{dynquasi3d} \mathcal{J}'_{\textbf}{k}(\omega,0,\mu) = C'_{\textbf}{k} (\omega-\mu)^2.\end{aligned}$$ The spectral function is now given by $$\begin{aligned} \label{spectralSJ} \mathcal{A}_{\textbf}{k}(\omega,T,\mu)=\frac{2 \mathcal{J}'_{\textbf}{k} (\omega,T,\mu)}{\Big[\omega-{\varepsilon}_{{\textbf}{k}}-\text{Re}\left[\Sigma_{{\textbf}{k}}(\omega\pm{\text{i}}\eta,T,\mu)\right]\Big]^2+\left[\mathcal{J}'_{\textbf}{k}(\omega,T,\mu)\right]^2}.\end{aligned}$$ Using ${\varepsilon}_{{\textbf}{k}}={\varepsilon}_{0,{\textbf}{k}}+U_{{\textbf}{k}}(T,\mu)$, this becomes $$\begin{aligned} \mathcal{A}_{\textbf}{k}(\omega,T,\mu)=\frac{2 \mathcal{J}'_{\textbf}{k}(\omega,T,\mu)}{\left[\omega-{\varepsilon}_{0,{\textbf}{k}}-\mathcal{S}'_{\textbf}{k}(\omega,T,\mu)\right]^2+\left[\mathcal{J}'_{\textbf}{k}(\omega,T,\mu)\right]^2},\end{aligned}$$ so the (true) Fermi momentum ${\textbf}{k}_{\text{F}}$ is determined by $$\begin{aligned} \label{kfdef} \mu={\varepsilon}_{0,{\textbf}{k}_{\text{F}}} + \mathcal{S}'_{{\textbf}{k}_{\text{F}}}(\mu,0,\mu),\end{aligned}$$ and the dynamical quasiparticle energies $\mathcal{E}_{\textbf}{k}$ are given by $$\begin{aligned} \label{quasiU} \mathcal{E}_{\textbf}{k}(\mu)={\varepsilon}_{0,{\textbf}{k}} + \mathcal{S}'_{{\textbf}{k}}(\mathcal{E}_{\textbf}{k},0,\mu),\end{aligned}$$ where ${\mathcal{E}_{\textbf}{k}\approx \mu}$ and ${{\textbf}{k}\approx{\textbf}{k}_{\text{F}}}$. One has ${\mathcal{E}_{\textbf}{k} = {\varepsilon}_{\textbf}{k}}$ for ${N\leq 2}$ within the BdD renormalization scheme, but from Eq.  as well as Eqs.  and it is clear that this correspondence breaks down for truncation orders ${N>2}$.
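The self-consistency condition for the quasiparticle energy, $\mathcal{E}_{\textbf}{k}={\varepsilon}_{0,{\textbf}{k}}+\mathcal{S}'_{{\textbf}{k}}(\mathcal{E}_{\textbf}{k},0,\mu)$, can be solved by simple fixed-point iteration whenever $\mathcal{S}'$ is a contraction near the solution. A sketch with a toy real self-energy (the functional form of `S` below is an arbitrary assumption, chosen only for illustration):

```python
eps0 = 1.0
S = lambda w: 0.3 / (1.0 + w**2)   # toy real part of the self-energy

# fixed-point iteration of E = eps0 + S(E)
E = eps0
for _ in range(100):
    E = eps0 + S(E)

print(E)                       # converged quasiparticle energy
print(abs(E - eps0 - S(E)))    # residual of the self-consistency condition, ~0
```

Since $|dS/dw|<1$ near the solution here, the iteration converges geometrically; for less well-behaved self-energies a bracketing root finder would be the safer choice.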
To have ${\mathcal{E}_{\textbf}{k} = {\varepsilon}_{\textbf}{k}}$ for ${N>2}$ the mean field must satisfy $$\begin{aligned} \label{luttward} U_{\textbf}{k}(0,\mu) = \mathcal{S}'_{{\textbf}{k}}(\mu,0,\mu),\end{aligned}$$ but then no statistical quasiparticle relations are obtained. In particular, formally extending Eq.  to momenta ${\textbf}{k}\in[0,{\textbf}{k}_{\text{F}}]$, the mean-field renormalization specified by Eq.  leads to $$\begin{aligned} \sum_{\textbf}{k} \theta(\mu-\mathcal{E}_{\textbf}{k}) = \sum_{\textbf}{k} \theta(\mu-{\varepsilon}_{\textbf}{k}),\end{aligned}$$ but the relation (i.e., Luttinger’s theorem [@PhysRev.121.942; @stefanuc; @Baym:1962sx]) $$\begin{aligned} \sum_{\textbf}{k} \theta(\mu-\mathcal{E}_{\textbf}{k})=\varrho(T=0,\mu)\end{aligned}$$ is satisfied only for truncation orders $N\leq 2$. Regularization of energy denominators {#sec3} ===================================== An energy denominator regularization scheme is a procedure that allows one to evaluate the contributions associated with the various parts ${\mathcal{F}_\alpha=f_\alpha/D}$ of the energy denominator terms ${\mathcal{F}=\sum_\alpha\mathcal{F}_\alpha}$ separately \[cf., e.g., Eq. \]. The (formal) splitting of the $\mathcal{F}$’s into parts $\mathcal{F}_\alpha$ introduces poles, so the essence of any regularization scheme must be a change in the way the contributions near the zeros of the denominators $D=\prod_\nu D_\nu^{n_\nu}$ of these terms are evaluated (in particular for the case where some $n_\nu$ are even). This change must be such that, for a fixed mean field, the same results are obtained as from the original unregularized expressions for the $\mathcal{F}$’s (e.g., the expressions obtained from the direct or cyclic formula). For the second-order normal contribution the regularization is (essentially) unique and corresponds to evaluating the two parts of Eq.  separately via principal value integrals.
For the higher-order contributions, the regularization scheme introduced here starts by adding infinitesimal imaginary parts to the individual energy denominators $D_\nu$, i.e., $\prod_\nu D_\nu^{n_\nu}\rightarrow \prod_\nu (D_\nu+i\eta_\nu)^{n_\nu}$. The regularization then corresponds to evaluating the various parts with energy denominator terms $\mathcal{F}_{\alpha,[\{\eta_\nu\}]}=f_\alpha/\prod_\nu (D_\nu+i\eta_\nu)^{n_\nu}$ via the Sokhotski-Plemelj-Fox formula. That this is a valid procedure can be seen from the fact that (after adding infinitesimal imaginary parts) the Sokhotski-Plemelj-Fox formula can be applied (formally) also to the unsplit expressions with energy denominator terms $\mathcal{F}_{[\{\eta_\nu\}]}=\sum_\alpha\mathcal{F}_{\alpha,[\{\eta_\nu\}]}$, and after its application the splitting corresponds again (i.e., as in the second-order case) to a separation into principal value integrals, by virtue of Eq.  below. The crucial point of this particular regularization scheme is that it allows one to separate the normal, anomalous, and pseudoanomalous contributions (at finite $T$) such that these contributions have a form that matches the (regularized) disentangled reduced formula. This feature is essential for the cancellation of the pseudoanomalous contributions and the factorization of the anomalous contributions, and these properties lead to the thermodynamic Fermi-liquid relations via the BdD scheme. In other words, the Fermi-liquid relations uniquely determine the regularization of the energy denominators.[^25] In Sec. \[sec31\] we introduce the formal approach to the energy denominator regularization for the BdD scheme.[^26] The numerical evaluation of the resulting expressions is discussed in Sec. \[sec32\]. Formal regularization {#sec31} --------------------- From the cyclic expressions, the regularized (${\ast\ast}$) disentangled ($\div$) reduced expressions are obtained by performing the following steps:

1. add infinitesimal imaginary parts $\eta_\nu$ to the energy denominators $D_\nu$ (where $\eta_1\neq \eta_2\neq \ldots$),

2. eliminate the energy denominator exponentials via Eq. ,

3. apply Eq. .

Here, the first step is part of $\ast\ast$, the second step is part of the reduction, and the third step is associated with $\div$. Then

1. for two-particle reducible diagrams, average over the signs $\text{sgn}(\eta_\nu)$ of the imaginary parts,

2. split the integrals such that the various parts of the cyclic energy denominator terms are integrated separately, then suitably relabel indices in some integrals, and finally recombine the integrals that lead to normal, pseudoanomalous and anomalous contributions,

3. observe that the pseudoanomalous contributions vanish (this is proved to all orders in Sec. \[sec4\]),

4. observe that the anomalous contributions factorize (this is proved to all orders in Sec. \[sec4\]),

where the first step is part of $\ast\ast$, and the second, third and fourth steps are associated with $\div$ and reduction. To show how these rules arise we now regularize, disentangle, and reduce the expressions for the contributions from the normal second-order diagram and from selected third-order diagrams. ![The normal second-order diagram. It is invariant under vertex permutations.[]{data-label="fig2normal"}](fig1.pdf){width="12.00000%"} The cyclic expression for the normal second-order diagram shown in Fig. \[fig2normal\] is given by $$\begin{aligned} \label{Omega2} \Omega_{2,\text{normal}}^\text{cyclic}= -\frac{1}{8} \sum_{ijab} \zeta^{ijab} n_{ij}\bar n_{ab} \frac{1-\operatorname{\operatorname{e}}^{-\beta D_{ab,ij}}}{D_{ab,ij}},\end{aligned}$$ where $\zeta^{ijab}=V^{ij,ab}V^{ab,ij}$, with $V^{ij,ab}= \braket{\psi_{{\textbf}{k}_i}\psi_{{\textbf}{k}_j}|V|\psi_{{\textbf}{k}_a}\psi_{{\textbf}{k}_b} }$.
Moreover, $\sum_i=\int d^3 k_i/(2\pi)^3$, $n_{ij}=n_{{\textbf}{k}_i}n_{{\textbf}{k}_j}$ and $\bar n_{ab}=(1-n_{{\textbf}{k}_a})(1-n_{{\textbf}{k}_b})$, and $D_{ab,ij}={\varepsilon}_{{\textbf}{k}_a}+{\varepsilon}_{{\textbf}{k}_b}-{\varepsilon}_{{\textbf}{k}_i}-{\varepsilon}_{{\textbf}{k}_j}$. In Eq. , the term $(1-\operatorname{\operatorname{e}}^{-\beta D_{ab,ij}})/D_{ab,ij}$ is regular for ${D_{ab,ij}=0}$. To evaluate the two parts of the numerator of this term separately, we add an infinitesimal imaginary term ${\text{i}}\eta$ to the energy denominator. This leads to $$\begin{aligned} \label{Omega2b} \Omega_{2,\text{normal}}^\text{cyclic}&= -\frac{1}{8} \sum_{ijab} \zeta^{ijab} n_{ij} \bar n_{ab} \frac{1-\operatorname{\operatorname{e}}^{-\beta D_{ab,ij}}}{D_{ab,ij}+{\text{i}}\eta} {\nonumber \\}&= -\frac{1}{8} \sum_{ijab} \zeta^{ijab} n_{ij} \bar n_{ab} \frac{1}{D_{ab,ij}+{\text{i}}\eta} {\nonumber \\}& \quad +\frac{1}{8} \sum_{ijab} \zeta^{ijab} n_{ab} \bar n_{ij} \frac{1}{D_{ab,ij}-{\text{i}}\eta},\end{aligned}$$ where we have applied Eq.  to eliminate the energy denominator exponential in the second part.
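The elimination of the energy denominator exponential in the last step rests on the Fermi-Dirac identity $n_{ij}\bar n_{ab}\operatorname{e}^{-\beta D_{ab,ij}}=\bar n_{ij} n_{ab}$ (the chemical potential drops out, since $D_{ab,ij}$ contains as many particle as hole energies). A quick numerical sketch with arbitrary test values for the energies, $\beta$, and $\mu$:

```python
import numpy as np

rng = np.random.default_rng(1)
beta, mu = 1.3, 0.7
n = lambda e: 1.0 / (1.0 + np.exp(beta * (e - mu)))   # Fermi-Dirac
nbar = lambda e: 1.0 - n(e)

ei, ej, ea, eb = rng.uniform(0.0, 3.0, size=4)        # arbitrary test energies
D = ea + eb - ei - ej                                 # D_{ab,ij}

lhs = n(ei) * n(ej) * nbar(ea) * nbar(eb) * np.exp(-beta * D)
rhs = nbar(ei) * nbar(ej) * n(ea) * n(eb)
print(np.isclose(lhs, rhs))   # True
```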
Relabeling indices $(i,j)\leftrightarrow (a,b)$ and recombining the two terms leads to $$\begin{aligned} \label{Omega2c} \Omega_{2,\text{normal}}^\text{cyclic}&= -\frac{1}{8} \sum_{ijab} \zeta^{ijab} n_{ij} \bar n_{ab} \left[\frac{1}{D_{ab,ij}+{\text{i}}\eta}+\frac{1}{D_{ab,ij}-{\text{i}}\eta}\right] {\nonumber \\}& \equiv \Omega_{2,\text{normal}}^{\text{reduced,}\ast\ast,(\div)}=\Omega_{2,\text{normal}}^\text{BdD}.\end{aligned}$$ From this, one obtains for the second-order contribution to the BdD mean field the expression $$\begin{aligned} \label{Omega2d} U^\text{BdD}_{2,i} = U_{2,i}^{\text{reduced,}\ast\ast,(\div)} &= -\frac{1}{4} \sum_{jab} \zeta^{ijab} \big( n_{j} \bar n_{ab}+n_{ab} \bar n_{j}\big) {\nonumber \\}& \quad \times \left[\frac{1}{D_{ab,ij}+{\text{i}}\eta}+\frac{1}{D_{ab,ij}-{\text{i}}\eta}\right].\end{aligned}$$ Note that the expressions for $\Omega_{2,\text{normal}}^\text{BdD}$ and $U^\text{BdD}_{2,i}$ are real. Given that the integration variables include $D_{ab,ij}$ \[or an equivalent variable, see Eq. \], this can be seen explicitly from the Sokhotski-Plemelj theorem $$\begin{aligned} \label{plemelj} \frac{1}{x+{\text{i}}\eta}= \frac{P}{x}-{\text{i}}\pi\,\text{sgn}(\eta)\,\delta(x),\end{aligned}$$ where $P$ refers to the Cauchy principal value. For actual numerical calculations it is however more practical not to use $D_{ab,ij}$ as an integration variable, and then the application of the Sokhotski-Plemelj theorem requires further attention. This issue is discussed in Sec. \[sec32\]. It will be useful now to examine how Eq.  can be derived from the direct formula. The direct expression is given by $$\begin{aligned} \label{Omega2dira} \Omega_{2,\text{normal}}^\text{direct}= \frac{1}{4} \sum_{ijab} \zeta^{ijab} n_{ij}\bar n_{ab} \frac{1-\operatorname{\operatorname{e}}^{-\beta D}-\beta D}{\beta D^2},\end{aligned}$$ where $D=D_{ab,ij}$. 
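The Sokhotski-Plemelj theorem, Eq. \[plemelj\], can itself be verified numerically with a smooth test function: for small $\eta$, the real part of $\int dx\,\varphi(x)/(x+{\text{i}}\eta)$ approaches the principal value integral and the imaginary part approaches $-\pi\,\text{sgn}(\eta)\,\varphi(0)$. A sketch using a Gaussian (the test function and the value of $\eta$ are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

phi = lambda x: np.exp(-x**2)   # smooth, even test function with phi(0) = 1
eta = 1e-3

# 1/(x + i*eta) = x/(x^2 + eta^2) - i*eta/(x^2 + eta^2)
re, _ = quad(lambda x: phi(x) * x / (x**2 + eta**2), -10, 10,
             points=[0.0], limit=400)
im, _ = quad(lambda x: -phi(x) * eta / (x**2 + eta**2), -10, 10,
             points=[0.0], limit=400)

print(re)           # ~0: the principal value vanishes for even phi
print(im / np.pi)   # ~ -phi(0) = -1
```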
Adding an imaginary part to the energy denominator we have $$\begin{aligned} \label{Omega2dirb} \Omega_{2,\text{normal}}^\text{direct}= \frac{1}{4} \sum_{ijab} \zeta^{ijab} n_{ij}\bar n_{ab} \frac{1-\operatorname{\operatorname{e}}^{-\beta D}-\beta D}{\beta (D+{\text{i}}\eta)^2}.\end{aligned}$$ Here, the integral can be evaluated in terms of the Sokhotski-Plemelj-Fox formula [@fox] $$\begin{aligned} \label{plemelj2} \frac{1}{(x+{\text{i}}\eta)^n}= \frac{P}{x^n} + {\text{i}}\pi(-1)^{n} \,\text{sgn}(\eta)\,\delta^{(n-1)}(x),\end{aligned}$$ where now $P$ denotes the Hadamard finite part [@hadamard] (see also Refs. [@MONEGATO2009425; @galap; @dispersions]), i.e., $$\begin{aligned} \label{hadamard} \int \!\! dx\, \varphi(x)\frac{P}{x^{n+1}}\equiv \frac{1}{n!} \lim_{y\rightarrow 0} \frac{\partial^{n}}{\partial y^{n}}\!\! \int \!\! dx\, \varphi(x)\frac{P}{x-y},\end{aligned}$$ and $\delta^{(n-1)}(x)=\partial^{n-1} \delta(x)/\partial x^{n-1}$. Note that this prescription satisfies $x^k/(x+{\text{i}}\eta)^n=1/(x+{\text{i}}\eta)^{n-k}$. Since $\partial(1-\operatorname{\operatorname{e}}^{-\beta D}-\beta D)/\partial D=0$ for $D=0$, evaluating Eq.  with the Sokhotski-Plemelj-Fox formula gives the same result as Eq. . This equivalence is maintained if the three parts of $1-\operatorname{\operatorname{e}}^{-\beta D}-\beta D$ are integrated separately (and evaluated with the Sokhotski-Plemelj-Fox formula). That is, applying *first* the Sokhotski-Plemelj-Fox formula and *then* Eq.  and the relabeling of indices, we find $$\begin{aligned} \label{Omega2dirc} \Omega_{2,\text{normal}}^\text{direct} &= -\frac{1}{4} \sum_{ijab} \zeta^{ijab} n_{ij}\bar n_{ab} \, D\frac{P}{D^2} =-\frac{1}{4} \sum_{ijab} \zeta^{ijab} n_{ij}\bar n_{ab} \, \frac{P}{D} {\nonumber \\}& \equiv \Omega_{2,\text{normal}}^{\text{reduced,}\ast\ast,(\div)}=\Omega_{2,\text{normal}}^\text{BdD}.\end{aligned}$$ It is now important to note that applying Eq. 
and relabeling indices in the second part (which implies $D\rightarrow -D$) *before* applying the Sokhotski-Plemelj-Fox formula would lead to incorrect results, i.e., this procedure would leave the real part invariant but produce a finite imaginary part. This is because $$\begin{aligned} \label{Omega2wronga} f(D)\frac{\operatorname{\operatorname{e}}^{-\beta D}}{(D+{\text{i}}\eta)^2} &=f(D)\operatorname{\operatorname{e}}^{-\beta D}\frac{P}{D^2}+{\text{i}}\pi \,\text{sgn}(\eta)\,\delta(D)\,\beta f(0) {\nonumber \\}& \quad -{\text{i}}\pi \,\text{sgn}(\eta)\,\delta(D)\,\frac{\partial f(D)}{\partial D},\end{aligned}$$ whereas $$\begin{aligned} \frac{f(-D)}{(-D+{\text{i}}\eta)^2} =f(-D)\frac{P}{(-D)^2}+{\text{i}}\pi \,\text{sgn}(\eta)\,\delta(D)\,f(0).\end{aligned}$$ In general, for ${n>1}$ it is $$\begin{aligned} f(D)\frac{\operatorname{\operatorname{e}}^{-\beta D}}{(D+{\text{i}}\eta)^n} \neq \frac{f(-D)}{(-D+{\text{i}}\eta)^n}.\end{aligned}$$ However, note that $f(-D)\frac{P}{(-D)^n}=f(D)\frac{P}{D^n}$ since $D$ is integrated in the whole real domain, and therefore $$\begin{aligned} \text{Re}\left[f(D)\frac{\operatorname{\operatorname{e}}^{-\beta D}}{(D+{\text{i}}\eta)^n}\right] \equiv \text{Re}\left[\frac{f(-D)}{(-D+{\text{i}}\eta)^n}\right]\end{aligned}$$ for the considered $f(D)$. Hence, applying Eq.  and relabeling indices without first applying the Sokhotski-Plemelj-Fox formula becomes valid if we average over the sign of $\eta$, i.e., $$\begin{aligned} \frac{1}{2}\sum_{\text{sgn}(\eta)}f(D)\frac{\operatorname{\operatorname{e}}^{-\beta D}}{(D+{\text{i}}\eta)^n} = \frac{1}{2}\sum_{\text{sgn}(\eta)} \frac{f(-D)}{(-D+{\text{i}}\eta)^n}.\end{aligned}$$ Note that the average has to be taken for all three parts of Eq. , otherwise imaginary parts would remain. ![The third-order two-particle irreducible diagrams. Each diagram is invariant under cyclic vertex permutations. 
The first (pp) and second (hh) diagram transform into each other under noncyclic permutations, and the third (ph) diagram is permutation invariant.[]{data-label="fig3ppph"}](fig2.pdf){width="45.00000%"} The cyclic expressions for the third-order two-particle irreducible diagrams shown in Fig. \[fig3ppph\] are given by $$\begin{aligned} \label{Omega3pp} \Omega_{3,\text{pp}}^\text{cyclic}&= \frac{1}{24} \sum_{ijabcd} \zeta_{\text{pp}}^{ijabcd} n_{ij}\bar n_{abcd} \mathcal{F}_\text{pp}^\text{cyclic}, \\ \label{Omega3hh} \Omega_{3,\text{hh}}^\text{cyclic}&= \frac{1}{24} \sum_{ijklab} \zeta_{\text{hh}}^{ijklab} n_{ijkl}\bar n_{ab} \mathcal{F}_\text{hh}^\text{cyclic}, \\ \label{Omega3ph} \Omega_{3,\text{ph}}^\text{cyclic}&= \frac{1}{3} \sum_{ijkabc} \zeta_{\text{ph}}^{ijkabc} n_{ijk}\bar n_{abc} \mathcal{F}_\text{ph}^\text{cyclic},\end{aligned}$$ where $\zeta_{\text{pp}}^{ijabcd}=V^{ij,ab}V^{ab,cd}V^{cd,ij}$, $\zeta_{\text{hh}}^{ijklab}=V^{ij,ab}V^{kl,ij}V^{ab,kl}$, and $\zeta_{\text{ph}}^{ijkabc}=V^{ij,ab}V^{kb,ic}V^{ac,jk}$. The energy denominator terms are given by $$\begin{aligned} \mathcal{F}_\text{pp,hh,ph}^\text{cyclic} = \left[\frac{1}{D_1 D_2} +\frac{\operatorname{\operatorname{e}}^{-\beta D_1}}{D_1 (D_1 - D_2)} -\frac{\operatorname{\operatorname{e}}^{-\beta D_2}}{D_2 (D_1 - D_2)}\right],\end{aligned}$$ with $D_1=D_{ab,ij}$ and $D_2=D_{cd,ij}$ for the pp diagram, $D_1=D_{ab,ij}$ and $D_2=D_{ab,kl}$ for the hh diagram, and $D_1=D_{ab,ij}$ and $D_2=D_{ac,jk}$ for the ph diagram. In each case, substituting $D_1\rightarrow D_1+{\text{i}}\eta_1$ and $D_2\rightarrow D_2+{\text{i}}\eta_2$, with $\eta_1\neq\eta_2$, splitting the integrals, eliminating the energy denominator exponentials and relabeling indices leads to $$\begin{aligned} \mathcal{F}_\text{pp,hh,ph}^{\text{cyclic},\ast\ast} &= \left[\frac{1}{(D_1+{\text{i}}\eta_1) (D_2+{\text{i}}\eta_2)} +\frac{1}{(D_1-{\text{i}}\eta_1) (D_2+{\text{i}}\eta_2)}\right. {\nonumber \\}&\quad \left. 
+\frac{1}{(D_1-{\text{i}}\eta_1) (D_2-{\text{i}}\eta_2)}\right] \equiv \mathcal{F}_\text{pp,hh,ph}^{\text{reduced},\ast\ast} ,\end{aligned}$$ which is real. Substituting this for $\mathcal{F}_\text{pp,hh,ph}^\text{cyclic}$ in Eqs. , , and  and performing the functional derivative one obtains the third-order contribution to $U^\text{BdD}$. ![The six third-order two-particle reducible diagrams composed of one second-order and one first-order piece. Articulation lines are shown as dashed lines. The shaded blobs represent vertices with loops (first-order pieces). In each row, the diagram on the left is a normal diagram, and the other two are anomalous. The diagrams in each row transform into each other under cyclic vertex permutations. The set of all six diagrams is closed under general vertex permutations.[]{data-label="fig3red"}](fig3.pdf){width="48.00000%"} The normal third-order two-particle reducible diagrams are shown in Fig. \[fig3red\]. Also shown are the cyclically related anomalous diagrams. The cyclic expression for the sum of these diagrams is given by $$\begin{aligned} \label{Omega3_21} \Omega_{3,(21)}^\text{cyclic}&= -\frac{1}{4} \sum_{ijab} \zeta^{ijab} n_{ij}\bar n_{ab} \mathcal{F}_{(21)}^\text{cyclic} \big(n_i U_{1,i}-\bar n_a U_{1,a}\big),\end{aligned}$$ where $$\begin{aligned} \mathcal{F}_{(21)}^\text{cyclic} = \frac{1-\operatorname{\operatorname{e}}^{-\beta D}-\beta D\operatorname{\operatorname{e}}^{-\beta D}}{D^2} ,\end{aligned}$$ with $D=D_{ab,ij}$. In Hartree-Fock MBPT, the contribution from these diagrams is (as is well known) canceled by the corresponding diagrams where the first-order pieces are replaced by $-U_{1}$ vertices. Nevertheless, it will still be useful to regularize these contributions. We will then find that, if $U_1$ were left out, the anomalous part of these diagrams could still be canceled via $U_2^\text{BdD}$.[^27] Substituting $D\rightarrow D+{\text{i}}\eta$ and applying Eqs. 
and and the relabeling $(i,j)\leftrightarrow (a,b)$, we can separate $\Omega_{3,(21)}^\text{cyclic}$ into the three contributions $$\begin{aligned} \label{Omega3_21a} \Omega_{3,(21),\text{normal}}^{\text{reduced,}\circ\circ,\div}&= -\frac{1}{4} \sum_{ijab} \zeta^{ij,ab} n_{ij}\bar n_{ab} \frac{1}{(D+{\text{i}}\eta)^2} \big(U_{1,i}-U_{1,a}\big), \\ \label{Omega3_21b} \Omega_{3,(21),\text{anom.}}^{\text{reduced,}\circ\circ,\div}&= \frac{\beta}{4} \sum_{ijab} \zeta^{ij,ab} n_{ij}\bar n_{ab} \frac{1}{D+{\text{i}}\eta} \big(\bar n_iU_{1,i}- n_aU_{1,a}\big), \\ \label{Omega3_21c} \Omega_{3,(21),\text{pseudo-a.}}^{\text{reduced,}\circ\circ,\div}&= \frac{1}{4} \sum_{ijab} \zeta^{ij,ab} n_{ij}\bar n_{ab} \left[\frac{1}{(D+{\text{i}}\eta)^2}-\frac{1}{(D-{\text{i}}\eta)^2}\right] {\nonumber \\}&\quad \times \big(n_iU_{1,i}-\bar n_aU_{1,a}\big).\end{aligned}$$ Here, ${\circ\circ}$ refers to an incomplete (in fact, incorrect) regularization: none of the three contributions given by Eqs. , , and  is real, and (more severely) their sum is not real either. As explained below Eq. , the reason for this deficiency is that we have applied Eq.  and relabeled indices without applying the Sokhotski-Plemelj-Fox formula first. 
To repair this we have to average over the signs of the imaginary parts, which leads to $$\begin{aligned} \label{T0canc} \Omega_{3,(21),\text{normal}}^{\text{reduced,}\ast\ast,\div}&= -\frac{1}{8} \sum_{ijab} \zeta^{ij,ab} n_{ij}\bar n_{ab} {\nonumber \\}&\quad \times \left[\frac{1}{(D+{\text{i}}\eta)^2}+\frac{1}{(D-{\text{i}}\eta)^2}\right] \big(U_{1,i}- U_{1,a}\big), \\ \Omega_{3,(21),\text{anom.}}^{\text{reduced,}\ast\ast,\div}&= \frac{\beta}{8} \sum_{ijab} \zeta^{ij,ab} n_{ij}\bar n_{ab} \left[\frac{1}{(D+{\text{i}}\eta)}+\frac{1}{(D-{\text{i}}\eta)}\right] {\nonumber \\}&\quad \times (\bar n_iU_{1,i}- n_aU_{1,a}), \\ \label{321pseudovanish} \Omega_{3,(21),\text{pseudo-a.}}^{\text{reduced,}\ast\ast,\div}&= 0.\end{aligned}$$ The pseudoanomalous contribution has vanished: this feature, which is essential to obtain the Fermi-liquid relations at $T\neq 0$ (but not ${T=0}$), holds to all orders (see Sec. \[sec4\]). Note that the vanishing of the pseudoanomalous contributions holds only if all vertex permutations are included, i.e., it does not hold separately for cyclically closed sets (in the present case, the two rows in Fig. \[fig3red\]). The anomalous contribution has the factorized form given by Eq.  (with $\ast\ast$ instead of $\ast$), i.e., $$\begin{aligned} \Omega_{3,(21),\text{anom.}}^{\text{reduced,}\ast\ast,\div}&= -\beta \sum_i U_{2,i}^{\text{reduced,}\ast\ast,(\div)} n_i \bar n_i \, U_{1,i}.\end{aligned}$$ Thus, the anomalous contribution from the diagrams of Fig. \[fig3red\] gets canceled by the contribution from the diagram shown in Fig. \[fig4UU\] where one piece is a first-order diagram and the other one either a $-U_1$ vertex or a $-U_{2}^\text{BdD}$ vertex. The same cancellation occurs between the rotated diagram and the one with two mean-field vertices, and similarly for the case where both $U_1$ and $U_{2}^\text{BdD}$ are included. 
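The sign-averaged double poles appearing in the normal contribution above can be illustrated in the same way: $\frac{1}{2}\big[(D+{\text{i}}\eta)^{-2}+(D-{\text{i}}\eta)^{-2}\big]$ is real and, as $\eta\rightarrow 0$, reproduces the Hadamard finite part. A minimal numerical sketch with a toy integral of our choosing, for which the finite part equals $-2$:

```python
import numpy as np

def averaged_double_pole(eta, n=400_001):
    # (1/2)[1/(x-1+i*eta)^2 + 1/(x-1-i*eta)^2] = Re 1/(x-1+i*eta)^2,
    # integrated over [0, 2] with the trapezoidal rule
    x = np.linspace(0.0, 2.0, n)
    t = x - 1.0
    f = (t**2 - eta**2) / (t**2 + eta**2)**2
    return ((f[:-1] + f[1:]) * 0.5 * np.diff(x)).sum()

def hadamard_cutoff(epsilon):
    # Fp int_0^2 dx/(x-1)^2: cut out |x-1| < eps and subtract the divergent 2/eps;
    # the result is -2 for every eps < 1
    return (1.0 / epsilon - 1.0) + (1.0 / epsilon - 1.0) - 2.0 / epsilon

print(averaged_double_pole(1e-2))   # close to -2
print(hadamard_cutoff(1e-3))        # exactly -2
```

Both prescriptions agree, consistent with the even-degree poles of the BdD scheme being evaluated in terms of the Hadamard finite part (see Sec. \[sec32\]).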
![The anomalous diagram composed of two pieces of the mean-field or single-vertex loop type.[]{data-label="fig4UU"}](fig4.pdf){width="14.00000%"} Integration variables {#sec32} --------------------- We now discuss how the formulas derived in Sec. \[sec31\] can be evaluated in numerical calculations. Nonvanishing contributions with poles of even degree appear first at fourth order in the BdD renormalization scheme. These have to be evaluated in terms of the Hadamard finite part, which obviously represents a major difficulty in the numerical application of the BdD scheme at high orders. We leave out the discussion of methods to evaluate the Hadamard finite part numerically, and defer numerical applications of the BdD scheme (and the other schemes) to future research. For an isotropic system and MBPT without a mean-field potential ($U_{\mathbf{k}}=0$) where ${\varepsilon}_{\mathbf{k}}={\varepsilon}_{0,\mathbf{k}}=\mathbf{k}^2/(2M)$, using as integration variables relative momenta $\mathbf{p}=(\mathbf{k}_i-\mathbf{k}_j)/2$ and $\mathbf{A}=(\mathbf{k}_a-\mathbf{k}_b)/2$ as well as the average momentum $\mathbf{K}=(\mathbf{k}_i+\mathbf{k}_j)/2=(\mathbf{k}_a+\mathbf{k}_b)/2$, one obtains from Eq. 
the following expression for the second-order normal contribution: $$\begin{aligned} \label{Omega2num1} \Omega_{2,\text{normal}}^\text{BdD}&= -2 M \sum_{\mathbf{K},\mathbf{p},\mathbf{A}} \zeta^{ijab} n_{ij} \bar n_{ab} \frac{P}{A^2-p^2}.\end{aligned}$$ The functional derivative of this expression with respect to $n_{\mathbf{k}_i}$ is given by $$\begin{aligned} \label{U2num1} U_{2,\mathbf{k}_i}^\text{BdD}[{\varepsilon}_{0,\mathbf{k}_i}]= -4 M \sum_{\mathbf{p},\mathbf{A}} \zeta^{ijab} \big(n_{j} \bar n_{ab}+n_{ab} \bar n_{j}\big) \frac{P}{A^2-p^2}.\end{aligned}$$ For truncation order ${N=2}$, the single-particle energies in the BdD scheme are obtained from the self-consistent equation $$\begin{aligned} {\varepsilon}_{\mathbf{k}}=\frac{\mathbf{k}^2}{2M}+ U_{1,\mathbf{k}}+ U_{2,\mathbf{k}}^\text{BdD}[{\varepsilon}_{\mathbf{k}}],\end{aligned}$$ where one may use for $U_{2,\mathbf{k}}^\text{BdD}[{\varepsilon}_{\mathbf{k}}]$ the expression obtained by substituting in Eq.  the term $1/D_{ab,ij}$, with $D_{ab,ij}={\varepsilon}_{\mathbf{k}_a}+{\varepsilon}_{\mathbf{k}_b}-{\varepsilon}_{\mathbf{k}_i}-{\varepsilon}_{\mathbf{k}_j}$, for $M/(A^2-p^2)$ if this substitution does not introduce additional poles; otherwise one must go back to the expression with infinitesimal imaginary parts, Eq. . This issue can be seen also in the $U_{\mathbf{k}}=0$ case if $\mathbf{k}_a$, $\mathbf{k}_i$ and $\mathbf{k}_j$ are used as integration variables to evaluate Eq. . Considering a one-dimensional system for simplicity, we have $$\begin{aligned} \label{Omega2num2a} \Omega_{2,\text{normal}}^\text{BdD}&= -\frac{ M}{4}\!\! \sum_{k_a,k_i,k_j}\!\! \zeta^{ijab} n_{ij} \bar n_{ab} \, \left[\frac{1}{\kappa+{\text{i}}\eta} +\frac{1}{\kappa-{\text{i}}\eta} \right],\end{aligned}$$ with $k_b=k_i+k_j-k_a$ and $\sum_k=\int dk/(2\pi)$. Moreover, $\kappa=(k_a-k_i)(k_a-k_j)$, i.e., now there are two poles. To bring Eq. 
into a form where the Sokhotski-Plemelj theorem can be applied, we note that $$\begin{aligned} &(k_a-k_i+ {\text{i}}\eta)(k_a-k_j+{\text{i}}\eta) {\nonumber \\}&\quad \quad= (\kappa +{\text{i}}\eta) \, \theta(2k_a-k_i-k_j) +(\kappa - {\text{i}}\eta) \, \theta(k_i+k_j-2k_a),\end{aligned}$$ so $$\begin{aligned} \sum_{\text{sgn}(\eta)}\frac{1}{\kappa+{\text{i}}\eta} = \sum_{\text{sgn}(\eta)} \frac{1}{(k_a-k_i+{\text{i}}\eta)(k_a-k_j+{\text{i}}\eta)}.\end{aligned}$$ The Sokhotski-Plemelj theorem can now be applied (assuming that $k_a$ is integrated after $k_i$ or $k_j$), which leads to $$\begin{aligned} \label{Omega2num2b} \Omega_{2,\text{normal}}^\text{BdD}&= -\frac{ M}{2} \sum_{k_a,k_i,k_j} \zeta^{ijab} n_{ij} \bar n_{ab} {\nonumber \\}&\quad \times \bigg[\frac{P}{k_a-k_i}\frac{P}{k_a-k_j} + \pi^2 \delta(k_a-k_i)\delta(k_a-k_j) \bigg],\end{aligned}$$ where the integration order is fixed. Changing the integration order such that $k_a$ is integrated first would lead to an incorrect result, as evident from the Poincaré-Bertrand transformation formula [@hardy; @poincare; @bertrand; @Muskheli; @dispersions] $$\begin{aligned} \label{bertrand} &\int \!\! dx \!\int \!\! dy \,\,\varphi(x,y) \frac{P}{x-y}\frac{P}{x-z} {\nonumber \\}&\quad = \int \!\! dy \!\int \!\! dx \,\,\varphi(x,y) \frac{P}{x-y}\frac{P}{x-z} + \pi^2 \varphi(z,z).\end{aligned}$$ Since it involves only one pole, the expression given by Eq.  is preferable to the one where $\mathbf{k}_a$, $\mathbf{k}_i$ and $\mathbf{k}_j$ are used as integration variables. At third order the issue manifested by the Poincaré-Bertrand transformation formula becomes unavoidable. For an isotropic system and $U_{\mathbf{k}}=0$, using relative and average momenta as integration variables one obtains for $\Omega_{3,\text{pp}}^\text{BdD}$ the expression (see also Refs. [@Kondo; @YosidaHirosi]) $$\begin{aligned} \label{Omega3ppnum} \Omega_{3,\text{pp}}^\text{BdD}&= \frac{M^2}{3} \!
\sum_{\mathbf{K},\mathbf{p},\mathbf{A},\mathbf{B}} \zeta^{ijabcd}_\text{pp} n_{ij} \bar n_{abcd} {\nonumber \\}& \quad \times \left[3\frac{P}{A^2-p^2}\frac{P}{B^2-p^2}+\pi^2 \frac{\delta(A-p)\delta(B-p)}{(A+p)(B+p)}\right],\end{aligned}$$ where $\mathbf{p}=(\mathbf{k}_i-\mathbf{k}_j)/2$, $\mathbf{A}=(\mathbf{k}_a-\mathbf{k}_b)/2$, $\mathbf{B}=(\mathbf{k}_c-\mathbf{k}_d)/2$ and $\mathbf{K}=(\mathbf{k}_i+\mathbf{k}_j)/2$. In Eq. , the integration order is such that $p$ is integrated after $A$ or $B$.[^28] The expression for $\Omega_{3,\text{hh}}^\text{BdD}$ is similar to Eq. . For $\Omega_{3,\text{ph}}^\text{BdD}$, however, using relative and average momenta as integration variables leads to $$\begin{aligned} \label{Omega3phnum} \Omega_{3,\text{ph}}^\text{BdD}&= \frac{8 M^2}{3} \sum_{\mathbf{K},\mathbf{p},\mathbf{A},\mathbf{Y}} \zeta^{ijkabc}_\text{ph} n_{ijk} \bar n_{abc} {\nonumber \\}& \quad \times \left[\mathcal{F}_{[\eta_1,\eta_2]}+\mathcal{F}_{[-\eta_1,\eta_2]}+\mathcal{F}_{[-\eta_1,-\eta_2]} \right],\end{aligned}$$ where $\mathbf{p}=(\mathbf{k}_i-\mathbf{k}_j)/2$, $\mathbf{A}=(\mathbf{k}_a-\mathbf{k}_b)/2$, $\mathbf{Y}=(\mathbf{k}_a-\mathbf{k}_c)/2$ and $\mathbf{K}=(\mathbf{k}_i+\mathbf{k}_j)/2$, and $$\begin{aligned} \mathcal{F}_{[\eta_1,\eta_2]}= \frac{1}{\big[A^2-p^2+{\text{i}}\eta_1\big]\, \big[(\mathbf{p}-\mathbf{A})\cdot(\mathbf{A}-2\mathbf{Y}+\mathbf{p})+{\text{i}}\eta_2\big]}.\end{aligned}$$ From here one would have to proceed similarly to the steps that lead from Eq.  to Eq. . Factorization to All Orders {#sec4} =========================== Here, we prove to all orders that the BdD renormalization scheme implies the thermodynamic relations associated with Fermi-liquid theory and (consequently) leads to a perturbation series that manifests the consistency of the adiabatic zero-temperature formalism, for both isotropic and anisotropic systems. First, in Sec. 
\[sec41\], we examine more closely how the linked-cluster theorem manifests itself. Second, in Sec. \[sec42\] we systematize the disentanglement ($\div$) of the grand-canonical perturbation series. These two steps provide the basis for Sec. \[sec43\], where we prove to all orders the reduced factorization property for finite systems, Eq. . In Sec. \[sec44\] we then infer that the reduced factorization property holds also for the BdD renormalization scheme. This implies the Fermi-liquid relations and the consistency of the adiabatic formalism. Finally, in Sec. \[sec45\] we point out that the BdD renormalization scheme maintains the cancellation of the divergences (at ${T=0}$) from energy denominator poles and discuss the minimal renormalization requirement for the consistency of the adiabatic formalism with the modified perturbation series for the free energy, $F(T,\mu_\text{ref})$, in the anisotropic case. Linked-cluster theorem {#sec41} ---------------------- Letting the truncation order (formally) go to infinity, the sum of all perturbative contributions to $\Omega(T,\mu)$ can be written as $$\begin{aligned} \label{OmegaY} \Delta \Omega = \sum_{n=1}^\infty \Omega_n =-\frac{1}{\beta}\ln \left[1-\beta\sum_{n=1}^\infty \Upsilon_n \right],\end{aligned}$$ where $\Upsilon_n$ denotes the contribution of order $n$ from both linked and unlinked diagrams. We refer to the various linked parts of an unlinked diagram as subdiagrams. Further, we denote the contribution—evaluated via a given *time-independent* ($\aleph$) formula (i.e., direct, cyclic, or reduced with $\ast$ or $\ast\ast$)—to $\Upsilon_n$ from a diagram composed of $K=\sum_{i=1}^{k} \alpha_i$ linked parts involving $k$ different subdiagram species $\Gamma_1\neq \Gamma_2 \neq \ldots \neq \Gamma_k$ where each $\Gamma_i$ appears $\alpha_i$ times in the complete diagram, by $\Upsilon^\aleph_{[\Gamma_{1}^{\alpha_1}\cdots\Gamma_{k}^{\alpha_k}]_n}$. In this notation, Eq. 
reads $$\begin{aligned} \label{redfac1} \Delta\Omega = -\frac{1}{\beta} \ln \Big[1-\beta \sum_{n=1}^\infty \sum_{[\Gamma_{1}^{\alpha_1}\cdots\Gamma_{k}^{\alpha_k}]_n} \sum_O \Upsilon^\aleph_{[\Gamma_{1}^{\alpha_1}\cdots\Gamma_{k}^{\alpha_k}]_n} \Big],\end{aligned}$$ where $\sum_{[\Gamma_{1}^{\alpha_1}\cdots\Gamma_{k}^{\alpha_k}]_n}$ is the sum over all possible (i.e., those consistent with order $n$) combinations of subdiagrams (including repetitions), and $\sum_O$ denotes the sum over all distinguishable vertex permutations of the unlinked diagram that leave the subdiagrams invariant. This is illustrated in Fig. \[figx\]. We write $$\begin{aligned} \label{Gammasum} \sum_{n=1}^\infty \sum_{[\Gamma_{1}^{\alpha_1}\cdots\Gamma_{k}^{\alpha_k}]_n} \sum_O \Upsilon^\aleph_{[\Gamma_{1}^{\alpha_1}\cdots\Gamma_{k}^{\alpha_k}]_n} = \sum_{\Gamma_{1}^{\alpha_1}\cdots\Gamma_{k}^{\alpha_k}} \sum_O \Upsilon^\aleph_{\Gamma_{1}^{\alpha_1}\cdots\Gamma_{k}^{\alpha_k}}.\end{aligned}$$ We have $$\begin{aligned} \label{redfac1b} \sum_{\Gamma_{1}^{\alpha_1}\cdots\Gamma_{k}^{\alpha_k}} \sum_O \Upsilon^\aleph_{\Gamma_{1}^{\alpha_1}\cdots\Gamma_{k}^{\alpha_k}} = \sum_{\widetilde{\Gamma}_{1}^{\alpha_1}\cdots\widetilde{\Gamma}_{k}^{\alpha_k}} \sum_P \Upsilon^\aleph_{\widetilde{\Gamma}_{1}^{\alpha_1}\cdots\widetilde{\Gamma}_{k}^{\alpha_k}} ,\end{aligned}$$ where $\sum_P$ denotes the sum over all distinguishable vertex orderings, and $\sum_{\widetilde{\Gamma}_{1}^{\alpha_1}, \ldots,\widetilde{\Gamma}_{k}^{\alpha_k}}$ sums over all combinations of subdiagrams where in the underlying set of linked diagrams $\{\Gamma_i\}$ only one (arbitrary) element is included for each set of diagrams that is closed under vertex permutations. For example, among the first two diagrams of Fig. \[fig3ppph\] only one is included, and only one of the six diagrams of Fig. \[fig3red\]. ![Vertex permutations for an unlinked diagram with two linked parts (subdiagrams). 
If the diagram in the first row represents the original vertex ordering, then the second and third diagram correspond to (nonoverlapping and overlapping, respectively) orderings $\in O$, and the fourth diagram to an ordering $\in P/O$.[]{data-label="figx"}](fig5.pdf){width="25.00000%"} The generalization of Eq.  for $\Upsilon_n$ is given by $$\begin{aligned} \label{OmegadirectT} \Upsilon_n^{\text{direct}[P]}= -\frac{1}{\beta}\frac{(-1)^{n}}{n!} \int\limits_{0}^\beta \! d \tau_n \cdots d \tau_1 \; \Braket{ \mathcal{T}\big[ \mathcal{V}(\tau_n) \cdots \mathcal{V}(\tau_1) \big] }.\end{aligned}$$ We denote the expressions obtained from Eq.  for the contribution from a given permutation invariant set of (linked or unlinked) diagrams by $\Upsilon^{\text{direct}[P]}_{\widetilde{\Gamma}_{1}^{\alpha_1}\cdots \widetilde{\Gamma}_{k}^{\alpha_k}}$. As noted in Sec. \[sec22\], these expressions are equivalent to the summed expressions obtained from any of the time-independent formulas (direct, cyclic, or reduced with $\ast$ or $\ast\ast$), i.e., $$\begin{aligned} \label{sumuppi} \Upsilon^{\text{direct}[P]}_{\widetilde{\Gamma}_{1}^{\alpha_1}\cdots \widetilde{\Gamma}_{k}^{\alpha_k}} &= \sum_P\Upsilon^\aleph_{\widetilde{\Gamma}_{1}^{\alpha_1}\cdots \widetilde{\Gamma}_{k}^{\alpha_k}}.\end{aligned}$$ Now, the number of ways the $n$ perturbation operators in Eq.  can be partitioned into the subgroups specified by $\Upsilon^{\text{direct}[P]}_{\widetilde{\Gamma}_{1}^{\alpha_1}\cdots \widetilde{\Gamma}_{k}^{\alpha_k}}$ is given by [@Abrikosov; @Fetter] $$\begin{aligned} \label{partitioningcount} \frac{1}{\alpha_1! \cdots \alpha_k!}\frac{n!}{(n_1!)^{\alpha_1} \cdots (n_k!)^{\alpha_k}},\end{aligned}$$ where $n_i$ are the orders of the respective subdiagrams. From Eq. 
, this leads to $$\begin{aligned} \label{redfac2} \sum_P\Upsilon^\aleph_{\widetilde{\Gamma}_{1}^{\alpha_1}\cdots \widetilde{\Gamma}_{k}^{\alpha_k}} &= -\frac{1}{\beta} \prod_{i=1}^k \frac{\left(-\beta \, \Upsilon^{\text{direct}[P]}_{\widetilde{\Gamma}_{i}}\right)^{\alpha_i}}{\alpha_i!} {\nonumber \\}&= -\frac{1}{\beta} \prod_{i=1}^k \frac{\left(-\beta \,\sum_{P_i} \Upsilon^\aleph_{\widetilde{\Gamma}_{i}}\right)^{\alpha_i}}{\alpha_i!},\end{aligned}$$ where in the second step we have applied Eq. . In Sec. \[sec43\] we will see that Eq.  implies the (direct, cyclic, and reduced) factorization properties for anomalous diagrams. It is now straightforward to verify by explicit comparison with Eq.  that $$\begin{aligned} \label{redfac3} \sum_{\widetilde{\Gamma}_{1}^{\alpha_1}\cdots\widetilde{\Gamma}_{k}^{\alpha_k}} \sum_P \left(-\beta\Upsilon^\aleph_{\widetilde{\Gamma}_{1}^{\alpha_1}\cdots\widetilde{\Gamma}_{k}^{\alpha_k}}\right) &= \sum_{\alpha=1}^\infty \frac{1}{\alpha!} \bigg[ -\beta \sum_{\Gamma} \Upsilon^\aleph_{\Gamma} \bigg]^\alpha {\nonumber \\}&= -1+\exp\Big( -\beta \sum_{\Gamma} \Upsilon^\aleph_{\Gamma}\Big).\end{aligned}$$ Applying this to Eq.  leads to $$\begin{aligned} \Delta \Omega &= \sum_{\Gamma} \Upsilon^\aleph_{\Gamma},\end{aligned}$$ which constitutes the linked-cluster theorem. Now, expanding the logarithm in Eq.  we find $$\begin{aligned} \label{Insert2} \Delta \Omega&= \sum_{n=1}^\infty \, \Bigg[ \sum_{\nu,\{n_i\},\{k_i\}} \beta^{k_1+\ldots+k_\nu-1} \binom{k_1+\ldots+k_\nu}{k_1,\ldots,k_\nu} \frac{(\Upsilon_{n_1})^{k_1}\cdots (\Upsilon_{n_\nu})^{k_\nu}}{k_1 +\ldots +k_\nu}\Bigg],\end{aligned}$$ where the inner sum is subject to the constraints $\sum_{i=1}^\nu n_i k_i=n$, $1\leq n_1<n_2<\ldots< n_\nu$, $k_i\geq 1$, and $\nu\geq 1$. The linked-cluster theorem implies that, if evaluated in terms of the usual Wick contraction formalism [@Fetter], in Eq.  
the contributions with $\nu=1$ and $k_1=1$ from unlinked diagrams are all canceled by the contributions with $\nu>1$ or $k_1>1$. The individual expressions from these canceling terms are not size extensive, i.e., in the thermodynamic limit they diverge with higher powers of the confining volume. Disentanglement {#sec42} --------------- Here, we first introduce the cumulant formalism,[^29] which allows systematizing the disentanglement ($\div$). Then, we show that this formalism provides a new representation and evaluation method for the contributions associated (in the usual Wick contraction formalism) with anomalous diagrams and the subleading parts of Eqs. –, etc. Finally, we construct and discuss the modified perturbation series for the free energy $F(T,\mu_\text{ref})$. ### Cumulant formalism We define $\mathcal{C}_{i_1 \ldots i_n}$ as the unperturbed ensemble average of a fully contracted (indicated by paired indices) but not necessarily linked sequence of creation and annihilation operators, i.e., $$\begin{aligned} \label{chi1def} \mathcal{C}_{i_1 \ldots i_n}=\braket{a_{i_1}^\dagger a_{i_1} \cdots a_{i_n}^\dagger a_{i_n}},\end{aligned}$$ where some of the index tuples may be identical (articulation lines). In Eq. , all contractions are of the hole type. 
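Returning briefly to the combinatorics of Sec. \[sec41\]: the partitioning count quoted there is easy to verify by brute-force enumeration. The sketch below (the helper names are ours) counts unordered set partitions of $n$ elements with a prescribed multiset of block sizes $\{n_i^{\alpha_i}\}$ and compares the result with the closed formula $\frac{1}{\alpha_1!\cdots\alpha_k!}\frac{n!}{(n_1!)^{\alpha_1}\cdots(n_k!)^{\alpha_k}}$:

```python
from math import factorial
from itertools import combinations

def count_partitions(elems, sizes):
    """Brute-force count of unordered set partitions of `elems` into blocks
    whose sizes form the multiset `sizes` (small cases only)."""
    if not sizes:
        return 1 if not elems else 0
    e, rest = elems[0], elems[1:]  # anchor the first element to avoid overcounting
    total = 0
    for s in sorted(set(sizes)):
        remaining = list(sizes)
        remaining.remove(s)
        for block in combinations(rest, s - 1):  # remaining members of e's block
            left = [x for x in rest if x not in block]
            total += count_partitions(left, remaining)
    return total

def formula(sizes):
    # (1/(alpha_1!...alpha_k!)) * n! / ((n_1!)^alpha_1 ... (n_k!)^alpha_k)
    n = sum(sizes)
    denom = 1
    for s in set(sizes):
        a = sizes.count(s)
        denom *= factorial(a) * factorial(s) ** a
    return factorial(n) // denom

for sizes in ([2, 2, 1], [3, 2], [2, 2, 2], [1, 1, 3]):
    assert count_partitions(list(range(sum(sizes))), sizes) == formula(sizes)
```

For instance, five vertices split into two pairs and a singleton give $5!/(2!\,(2!)^2\,1!)=15$ distinct partitions, which the enumeration reproduces.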
For the case where there are also particles we introduce the notation $$\begin{aligned} \label{chi1defpart} \mathcal{C}_{i_1 \cdots i_n}^{a_1 \cdots a_m}=\braket{a_{i_1}^\dagger a_{i_1} \cdots a_{i_n}^\dagger a_{i_n} a_{a_1} a_{a_1}^\dagger \cdots a_{a_m} a_{a_m}^\dagger }.\end{aligned}$$ This can be expressed in terms of functional derivatives of the unperturbed partition function $\mathcal{Y}_{\!\text{ref}}=\operatorname{Tr}[\operatorname{e}^{-\beta(\mathcal{H}_\text{ref}-\mu \mathcal{N})}]$, i.e.,[^30] $$\begin{aligned} \label{chi1defpart2} \mathcal{C}_{i_1 \cdots i_n}^{a_1 \cdots a_m}&=\frac{1}{\mathcal{Y}_{\!\text{ref}}} \frac{\delta}{\delta[-\beta {\varepsilon}_{i_1}]} \cdots \frac{\delta}{\delta[-\beta {\varepsilon}_{i_n}]} {\nonumber \\}&\quad\times \bigg(1-\frac{\delta}{\delta[-\beta {\varepsilon}_{a_1}]}\bigg) \cdots \bigg(1-\frac{\delta}{\delta[-\beta {\varepsilon}_{a_m}]}\bigg)\;\mathcal{Y}_{\!\text{ref}}.\end{aligned}$$ This shows that the upper indices can be lowered iteratively, i.e., $\mathcal{C}_{i_1 \cdots i_n}^{a_1 \cdots a_m}=\mathcal{C}_{i_1 \cdots i_n}^{a_1 \cdots a_{m-1}}-\mathcal{C}_{i_1 \cdots i_n a_m}^{a_1 \cdots a_{m-1}}$, which leads to $$\begin{aligned} \label{chi1defpart3} \mathcal{C}_{i_1 \cdots i_n}^{a_1 \cdots a_m}= \sum_{\mathcal{P}\subset\{1,\ldots,m\}} (-1)^{|\mathcal{P}|} \mathcal{C}_{i_1 \cdots i_n \{a_k\}_{k \in \mathcal{P}} }.\end{aligned}$$ The cumulants $\mathcal{K}_{i_1 \ldots i_n}$ are defined by $$\begin{aligned} \label{Maverage} \mathcal{K}_{i_{1}\ldots i_{n}}= \frac{\delta^n \ln \mathcal{Y}_{\!\text{ref}}}{\delta[-\beta {\varepsilon}_{i_1}] \cdots \delta[-\beta {\varepsilon}_{i_n}]}.\end{aligned}$$ The relation between the $\mathcal{K}$’s and the $\mathcal{C}$’s is given by [@sokal] $$\begin{aligned} \label{generalG} \mathcal{C}_{i_1 \cdots i_n}=\sum_{\substack{\mathcal{P}\in\,\text{partitions}\\\text{of}\,\{1,\ldots,n\}}} \prod_{I\in \mathcal{P}} \mathcal{K}_{ \{i_k\}_{k\in I} }.\end{aligned}$$ These 
formulas provide an alternative way (compared to the Wick contraction formalism) to evaluate the various contributions $\Upsilon_{[\Gamma_{1}^{\alpha_1}\cdots\Gamma_{k}^{\alpha_k}]_n}$ in Eq. . ### Simply connected unlinked diagrams For linked diagrams without articulation lines (i.e., two-particle irreducible diagrams) the contributions from higher cumulants have measure zero for infinite systems. For such diagrams, the (sums of the) contributions from higher cumulants vanish also in the finite case, via exchange antisymmetry. This is clear, since these (nonextensive) contributions are absent in the Wick contraction formalism. For instance, for the first-order diagram $\mathcal{C}_{ij}=\mathcal{K}_{i}\mathcal{K}_{j}+\delta_{ij}\mathcal{K}_{ii}$ the part $\delta_{ij}\mathcal{K}_{ii}$ gives no contribution (by antisymmetry). Overall, we have $$\begin{aligned} \label{highK1} i_1\neq i_2\neq \ldots \neq i_n:\;\;\; \mathcal{C}_{i_1\cdots i_n}=\prod_{\nu=1}^n \mathcal{K}_{i_\nu}\end{aligned}$$ for linked diagrams. This means that for linked two-particle irreducible diagrams the cumulant formalism leads to the same expressions as the Wick contraction formalism. For two-particle reducible diagrams, however, there are additional size extensive contributions from higher cumulants corresponding to articulation lines with identical three-momenta. This has the effect that for each set of normal articulation lines with identical three-momenta there is only a single distribution function, i.e., $$\begin{aligned} \label{Gnormal} \mathcal{C}_{i_1 \cdots i_n j \cdots j}^{a_1 \cdots a_m}&=\mathcal{C}_{i_1 \cdots i_n j}^{a_1 \cdots a_m}, \\ \label{Gnormal2} \mathcal{C}_{i_1 \cdots i_n}^{a_1 \cdots a_m b \cdots b}&=\mathcal{C}_{i_1 \cdots i_n }^{a_1 \cdots a_m b},\end{aligned}$$ see Ref. [@Wellenhofer:2017qla]. 
For example, $\mathcal{C}_{i_1\ldots i_n jj}=\mathcal{C}_{i_1\ldots i_n}(\mathcal{K}_{j}\mathcal{K}_{j}+\mathcal{K}_{jj})=\mathcal{C}_{i_1\ldots i_n j}$, since $\mathcal{K}_{j}\mathcal{K}_{j}+\mathcal{K}_{jj}=n_jn_j+n_j\bar n_j=n_j$. Equations  and , together with Eq. , imply that the contributions from anomalous diagrams are zero: $$\begin{aligned} \label{anomzero} \mathcal{C}_{i_1 \cdots i_n j \cdots j}^{a_1 \cdots a_m j \cdots j}=0.\end{aligned}$$ The contributions from anomalous diagrams and from the subleading parts of Eqs. –, etc., now arise instead from unlinked diagrams (with normal subdiagrams). That is, for unlinked diagrams composed of $N$ subdiagrams the contributions from higher cumulants connecting $N$ lines with distinct three-momenta are size extensive. For example, for the case of three first-order subdiagrams with indices $(i,j)$, $(k,l)$, and $(m,n)$ one has the contributions $$\begin{aligned} \delta_{ik}\delta_{jm} \mathcal{K}_{i i}\mathcal{K}_{j j} \mathcal{K}_{l} \mathcal{K}_{n}, \ldots, \;\; \delta_{ik}\delta_{im} \mathcal{K}_{i i i} \mathcal{K}_{j} \mathcal{K}_{l} \mathcal{K}_{n},\ldots,\end{aligned}$$ where $\mathcal{K}_{i i i}=n_i\bar n_i \bar n_i-n_i n_i \bar n_i$ and the ellipses represent terms with other index combinations. By virtue of the linked-cluster theorem, the size extensive contributions from unlinked diagrams where not all higher-cumulant indices correspond to different subdiagrams cancel against the corresponding terms with $\nu>1$ or $k_1>1$ in Eq. . The remaining size extensive contributions from unlinked diagrams are exactly those where the different (normal) subdiagrams are simply connected via higher cumulants. This provides a new representation for the contributions associated (in the Wick contraction formalism) with anomalous diagrams and the contributions not included in Eqs.  and . 
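The low-order cumulants used above follow directly from the definition: for a single mode, $\ln\mathcal{Y}_{\!\text{ref}}$ reduces to $\ln\big(1+\operatorname{e}^{-\beta({\varepsilon}-\mu)}\big)$, and successive derivatives with respect to $-\beta{\varepsilon}$ give $\mathcal{K}_i=n_i$, $\mathcal{K}_{ii}=n_i\bar n_i$, and $\mathcal{K}_{iii}=n_i\bar n_i(1-2n_i)$. A short symbolic check (a sketch using SymPy; the variable names are ours):

```python
import sympy as sp

x, c = sp.symbols('x c', real=True)      # x = -beta*eps, c = beta*mu
lnY = sp.log(1 + sp.exp(x + c))          # single-mode ln Y_ref
n = sp.exp(x + c) / (1 + sp.exp(x + c))  # Fermi-Dirac occupation

K1 = sp.diff(lnY, x)     # first cumulant
K2 = sp.diff(lnY, x, 2)  # second cumulant
K3 = sp.diff(lnY, x, 3)  # third cumulant

assert sp.simplify(K1 - n) == 0
assert sp.simplify(K2 - n * (1 - n)) == 0
assert sp.simplify(K3 - n * (1 - n) * (1 - 2 * n)) == 0
# simplest case of vanishing anomalous contractions:
# C_j^j = K_j - (K_j*K_j + K_jj) = n - n^2 - n*(1-n) = 0
assert sp.simplify(K1 - (K1**2 + K2)) == 0
```

The last assertion verifies, for the simplest case, the vanishing of anomalous contractions, $\mathcal{C}_j^j=\mathcal{K}_j-(\mathcal{K}_j\mathcal{K}_j+\mathcal{K}_{jj})=0$.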
### Modified thermodynamic perturbation series There are two methods for the construction of the modified perturbation series for the free energy $F(T,\mu_\text{ref})$. The first, introduced by Kohn and Luttinger [@Kohn:1960zz], is based on grand-canonical MBPT; it constructs $F(T,\mu_\text{ref})$ in terms of a truncated formal expansion[^31] of $F(T,\mu)$ about $\mu_\text{ref}$, see Refs. [@Kohn:1960zz; @PhysRevC.89.064009; @Wellenhofer:2017qla] for details. The second method, due to Brout and Englert [@brout2], starts from the canonical ensemble. In canonical perturbation theory [@PhysRev.115.1374; @Parry], Eq.  is replaced by $$\begin{aligned} \mathscr{C}_{i_1 \ldots i_n}=\braket{a_{i_1}^\dagger a_{i_1} \cdots a_{i_n}^\dagger a_{i_n}}_{\!\varrho},\end{aligned}$$ where $\braket{\ldots}_{\!\varrho}$ denotes the unperturbed canonical ensemble average which involves only Fock states $\ket{\Psi_{\!\varrho}}$ with fixed $\varrho=\Braket{\Psi_{\!\varrho}|\mathcal{N}|\Psi_{\!\varrho}}$. From this, we proceed analogously to the grand-canonical case, with $\mathcal{Y}_{\!\text{ref}}$ replaced by the unperturbed canonical partition function $\mathcal{Z}_\text{ref}=\sum_{\Psi_{\!\varrho}}\braket{\Psi_{\!\varrho}|\operatorname{\operatorname{e}}^{-\beta \mathcal{H}_\text{ref}}|\Psi_{\!\varrho}}$, i.e., the cumulants are now given by $$\begin{aligned} \mathscr{K}_{i_{1}\ldots i_{n}}= \frac{\delta^n \ln \mathcal{Z}_\text{ref}}{\delta[-\beta {\varepsilon}_{i_1}] \cdots \delta[-\beta {\varepsilon}_{i_n}]}.\end{aligned}$$ The decisive new step is now to evaluate the cumulants not directly (which would be practically impossible) but using the Legendre transformation $$\begin{aligned} \label{legendre} \ln \mathcal{Z}_\text{ref}(T,\varrho)&=\ln \mathcal{Y}_{\!\text{ref}}(T,\mu_\text{ref})- \mu_\text{ref} \frac{\partial \ln \mathcal{Y}_{\!\text{ref}}(T,\mu_\text{ref})}{\partial \mu_\text{ref}},\end{aligned}$$ where $\mu_\text{ref}$ is the chemical potential of an unperturbed 
grand-canonical system with the same mean fermion number as the fully interacting canonical system, i.e., $$\begin{aligned} \varrho=\frac{1}{\beta} \frac{\partial \ln \mathcal{Y}_{\!\text{ref}}(T,\mu_\text{ref})}{\partial \mu_\text{ref}} =-\frac{\partial \Omega_\text{ref}(T,\mu_\text{ref})}{\partial \mu_\text{ref}} =\sum_{\mathbf{k}} \tilde{n}_{\mathbf{k}},\end{aligned}$$ where $\tilde{n}_{\mathbf{k}}$ denotes the Fermi-Dirac distribution with $\mu_\text{ref}$ as the chemical potential. With $\varrho$ being fixed, $\varrho=\sum_{\mathbf{k}} \tilde{n}_{\mathbf{k}}$ determines $\mu_\text{ref}$ as a functional of the spectrum $\varepsilon_{\mathbf{k}}$. From this and Eq. , the expression for $\mathscr{K}_{i}$ is given by $$\begin{aligned} \mathscr{K}_{i} &= \frac{\delta \ln \mathcal{Y}_{\!\text{ref}}}{\delta[-\beta \varepsilon_i]} -\frac{1}{\beta}\frac{\partial \ln \mathcal{Y}_{\!\text{ref}}}{\partial \mu_\text{ref}} \Bigg(\frac{\delta \mu_\text{ref}}{\delta \varepsilon_i}\Bigg)_{\!\varrho} +\varrho\,\Bigg( \frac{\delta\mu_\text{ref}}{\delta\varepsilon_i}\Bigg)_{\!\varrho} = \tilde{n}_{i}. \end{aligned}$$ The higher $\mathscr{K}$’s can then be determined iteratively, i.e., $$\begin{aligned} \mathscr{K}_{i_1 i_2} &=\Bigg(\frac{\delta \mathscr{K}_{i_1}}{\delta[-\beta \varepsilon_{i_2}]}\Bigg)_{\!\varrho} {\nonumber \\}&= \delta_{i_1i_2} \frac{\partial \mathscr{K}_{i_1}}{\partial[-\beta \varepsilon_{i_1}]} - \frac{\partial \mathscr{K}_{i_1}}{\partial \mu_\text{ref}}\, \Bigg[\frac{\partial \varrho}{\partial\mu_\text{ref}}\Bigg]^{-1}\!\!
\frac{\delta \varrho}{\delta[-\beta \varepsilon_{i_2}]} {\nonumber \\}&= \delta_{i_1i_2} \tilde{n}_{i_1} (1-\tilde{n}_{i_1}) -\frac{\tilde{n}_{i_1} (1-\tilde{n}_{i_1})\tilde{n}_{i_2} (1-\tilde{n}_{i_2})}{\sum_{i} \tilde{n}_{i} (1-\tilde{n}_{i})}.\end{aligned}$$ For $\mathscr{K}_{i_1 i_2 i_3}$ and beyond there is also a contribution where the energy derivative acts on $[\partial \varrho/\partial\mu_\text{ref}]^{-1}$, with $\partial \varrho/\partial\mu_\text{ref}=\beta\sum_{i} \tilde{n}_{i} (1-\tilde{n}_{i})$, i.e.,[^32] $$\begin{aligned} \label{Kscr3} \big[\mathscr{K}_{i_1 i_2 i_3}\big]_{i_1\neq i_2 \neq i_3} &= - \frac{\partial \mathscr{K}_{i_1i_2}}{\partial \mu_\text{ref}} \Bigg[\frac{\partial \varrho}{\partial\mu_\text{ref}}\Bigg]^{-1} \frac{\delta \varrho}{\delta[-\beta \varepsilon_{i_3}]} {\nonumber \\}&\quad - \frac{\partial \mathscr{K}_{i_1}}{\partial \mu_\text{ref}} \frac{\delta \varrho}{\delta[-\beta \varepsilon_{i_2}]} \frac{\partial}{\partial[-\beta\varepsilon_{i_3}]} \Bigg[\frac{\partial \varrho}{\partial\mu_\text{ref}}\Bigg]^{-1}.\end{aligned}$$ One can show that $\big[\mathscr{K}_{i_1 \ldots i_n}\big]_{i_a \neq i_b \,\forall a,b\in[1,n]}=\mathcal{O}(1/\varrho^{n-1})$, see Ref. [@Wellenhofer:2017qla], so the size extensive contributions from unlinked diagrams are again given by simply connected diagrams. For isotropic systems the anomalous parts of these contributions cancel at each order in the zero-temperature limit,[^33] thus $$\begin{aligned} \text{isotropy:}\;\; F(T,\mu_\text{ref}) \xrightarrow{T\rightarrow 0} E^{(0)}({\varepsilon}_{\text{F}}),\end{aligned}$$ with $\mu_\text{ref} \xrightarrow{T\rightarrow 0} {\varepsilon}_{\text{F}}$. By construction, within each of the order-by-order renormalization schemes (direct, cyclic, BdD), at each order the modified perturbation series $F(T,\mu_\text{ref})$ matches the grand-canonical perturbation series for the free energy $F(T,\mu)=\Omega(T,\mu)-\mu \,\partial \Omega(T,\mu)/\partial \mu$. However, the zero-temperature limit exists only for the BdD scheme (see Sec. 
\[sec2\]). Factorization theorem(s) {#sec43} ------------------------ Using the direct formula, the cyclic formula, or the reduced formula for finite systems ($\ast$) and applying the cumulant formalism to Eq.  leads to $$\begin{aligned} \label{redfac2cumulant} \sum_{P/A}\Upsilon^{\text{direct},\div}_{[\widetilde{\Pi}_{1}^{\alpha_1}\cdots \widetilde{\Pi}_{k}^{\alpha_k}]_n} &= -\frac{1}{\beta} \prod_{i=1}^k \frac{\left(-\beta \,\sum_{P_i/A_i} \Upsilon^{\text{direct},\div}_{\widetilde{\Pi}_{i}}\right)^{\alpha_i}}{\alpha_i!}, \\ \label{redfac2cumulantcyc} \sum_{P/A}\Upsilon^{\text{cyclic},\div}_{[\widetilde{\Pi}_{1}^{\alpha_1}\cdots \widetilde{\Pi}_{k}^{\alpha_k}]_n} &= -\frac{1}{\beta} \prod_{i=1}^k \frac{\left(-\beta \,\sum_{P_i/A_i} \Upsilon^{\text{cyclic},\div}_{\widetilde{\Pi}_{i}}\right)^{\alpha_i}}{\alpha_i!},\end{aligned}$$ $$\begin{aligned} \label{redfac7} \sum_{P/A} \Upsilon^{\text{reduced,}\ast,(\div)}_{[\widetilde{\Pi}_{1}^{\alpha_1}\cdots \widetilde{\Pi}_{k}^{\alpha_k}]_n} &= -\frac{1}{\beta} \prod_{i=1}^k \frac{\left(-\beta \,\sum_{P_i/A_i} \Upsilon^{\text{reduced,}\ast,\div}_{\widetilde{\Pi}_{i}}\right)^{\alpha_i}}{\alpha_i!},\end{aligned}$$ where the $\widetilde{\Pi}_i$ are all normal diagrams, and $P/A$ excludes those permutations that lead to anomalous diagrams. The combinatorics (and sign factors) of the higher-cumulant connections matches the combinatorics of the functional derivatives that generate the mean-field contributions from the perturbative contributions to the grand-canonical potential. Hence, Eqs.  and prove the direct and the cyclic version of the factorization property given by Eq.  and its cyclic analog, and Eq.  proves the reduced factorization property for finite systems $(\ast)$, Eq. . Note that Eq.  implies that in the reduced finite case the pseudoanomalous contributions vanish at each order. The reduced version of the factorization theorem can also be proved as follows. 
For a given unlinked diagram where none of the linked parts are overlapping (see Fig. \[figx\]), the reduced formula has the form $$\begin{aligned} \Upsilon^{\text{reduced,}\ast,(\div)}_{[{\Pi}_{1}^{\alpha_1}\cdots {\Pi}_{k}^{\alpha_k}]_n} &\sim \underset{z=0}{\text{Res}} \frac{\operatorname{\operatorname{e}}^{-\beta z}}{z} \frac{1}{(-z)^K} \prod_i\frac{1}{{D}_i-z} {\nonumber \\}& \sim \beta^{K-1} \prod_i\frac{1}{{D}_i} + \text{extra terms},\end{aligned}$$ where the extra terms are proportional to $\beta^{K-n}$, with $n\in\{2,\ldots,K\}$. The reduced expressions for unlinked diagrams with overlapping linked parts are composed entirely of such extra terms. These extra terms are incompatible with the linked-cluster theorem: they do not match the temperature dependence of (the disentangled reduced expressions) for the corresponding contributions with $\nu>1$ or $k_1>1$ in Eq. . The extra terms must therefore cancel each other at each order in the sum $\sum_{P/A}$.[^34] Thus, symbolically we have $$\begin{aligned} \label{redfac7b} \sum_{P/A} \Upsilon^{\text{reduced,}\ast,(\div)}_{[\widetilde{\Pi}_{1}^{\alpha_1}\cdots \widetilde{\Pi}_{k}^{\alpha_k}]_n} & \sim \beta^{K-1} \sum_{P/A}\left[ \prod_i\frac{1}{{D}_i} \right],\end{aligned}$$ which is equivalent to Eq. . Statistical quasiparticles {#sec44} -------------------------- The energy denominator regularization maintains the linked-cluster theorem. 
From the proof of the (reduced) factorization theorem it can be inferred that this suffices to establish that $$\begin{aligned} \label{redfac7BdD} \sum_{P/A} \Upsilon^{\text{reduced,}\ast\ast,\div}_{[\widetilde{\Pi}_{1}^{\alpha_1}\cdots \widetilde{\Pi}_{k}^{\alpha_k}]_n} &= -\frac{1}{\beta} \prod_{i=1}^k \frac{\left(-\beta \,\sum_{P_i/A_i} \Upsilon^{\text{reduced,}\ast\ast,\div}_{\widetilde{\Pi}_{i}}\right)^{\alpha_i}}{\alpha_i!},\end{aligned}$$ which (by virtue of the cumulant formalism) implies the BdD factorization property $$\begin{aligned} \label{reducedfactorizedBdD} \Omega_{n_1+n_2,\text{anomalous}}^{\text{reduced,}\ast\ast,\div} &= -\frac{\beta}{2} \sum_{\textbf}{k} U^{\text{reduced,}\ast\ast,\div}_{n_1,{\textbf}{k}} n_{\textbf}{k} \bar n_{\textbf}{k} \, U^{\text{reduced,}\ast\ast,\div}_{n_2,{\textbf}{k}} {\nonumber \\}& \quad \times (2-\delta_{n_1,n_2}) ,\end{aligned}$$ and similar \[i.e., as specified by Eq. \] for anomalous contributions with several pieces (subdiagrams, in the cumulant formalism). It is now clear how the cancellation between the contributions from simply connected diagrams composed of $V$ vertices and those where also $-U$ vertices are present works. For a given simply connected diagram, only the subdiagrams with *single* higher-cumulant connections can be replaced by $-U$ vertices, so at truncation orders $2N+1$ and $2N+2$ all anomalous contributions are removed if the mean field includes all contributions $U^{\aleph,\div}_{n,{\textbf}{k}}$ with $n\leq N$. However, this does *not* imply consistency with the adiabatic formalism for $U^{\aleph,\div}_{n,{\textbf}{k}}=U^{\text{reduced},\ast\ast,\div}_{n,{\textbf}{k}}$ (irrespective of isotropy), since the relation between chemical potential $\mu$ and the fermion number $\varrho$ does not match the adiabatic relation $\varrho=\sum_{\textbf}{k} \theta({\varepsilon}_{\text{F}}-{\varepsilon}_{\textbf}{k})$. 
For the consistency of the grand-canonical and the adiabatic formalism, the BdD mean field must include all contributions up to the truncation order; only then does one preserve the thermodynamic relations of the pure mean-field theory (where ${H=H_0+U}$, with $U\equiv U[n_{\textbf}{k}]$), i.e., the Fermi-liquid relations $$\begin{aligned} \label{StatQP1aFERMILIQ} \varrho &= \sum_{\textbf}{k} n_{\textbf}{k}, \\ \label{StatQP2aFERMILIQ} S &=-\sum_{\textbf}{k} \big( n_{\textbf}{k}\ln n_{\textbf}{k} + \bar n_{\textbf}{k}\ln \bar n_{\textbf}{k}\big) , \\ \label{StatQP3aFERMILIQ} \frac{\delta E}{\delta n_{\textbf}{k}} &= {\varepsilon}_{{\textbf}{k}} .\end{aligned}$$ These relations are valid for all temperatures. Zero-temperature limit {#sec45} ---------------------- At zero temperature, the energy denominator poles are at the boundary of the integration region,[^35] which implies that the contributions from two-particle reducible diagrams with several identical energy denominators diverge [@Wellenhofer:2018dwh; @Feldman1996]. For MBPT with ${U=0}$ or ${U=U_1}$ one finds that the divergent contributions cancel each other at each order.[^36] This cancellation is maintained in the BdD renormalization scheme: the cancellation occurs separately for normal contributions, and for the sum of the matching contributions the Sokhotski-Plemelj-Fox formula is consistent with the ${T\rightarrow 0}$ limit.
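For the pure mean-field reference, the Fermi-liquid relations above can be verified directly: with $\Omega=-\frac{1}{\beta}\sum_{{\textbf}{k}}\ln\big(1+\operatorname{e}^{-\beta({\varepsilon}_{{\textbf}{k}}-\mu)}\big)$, the thermodynamic entropy $S=-\partial\Omega/\partial T|_\mu$ coincides with the combinatorial form $-\sum_{{\textbf}{k}}(n_{{\textbf}{k}}\ln n_{{\textbf}{k}}+\bar n_{{\textbf}{k}}\ln \bar n_{{\textbf}{k}})$. A minimal numeric sketch; the discrete spectrum and the parameter values are arbitrary illustrative choices:

```python
import numpy as np

# Arbitrary illustrative single-particle spectrum and parameters.
eps = np.linspace(0.0, 5.0, 200)
mu, T = 2.0, 0.5

def occ(T):
    # Fermi-Dirac occupation numbers n_k for the mean-field spectrum.
    return 1.0 / (1.0 + np.exp((eps - mu) / T))

def omega(T):
    # Grand-canonical potential of the mean-field reference.
    return -T * np.sum(np.log1p(np.exp(-(eps - mu) / T)))

n = occ(T)
nbar = 1.0 - n
S_comb = -np.sum(n * np.log(n) + nbar * np.log(nbar))  # combinatorial entropy
h = 1e-5
S_thermo = -(omega(T + h) - omega(T - h)) / (2 * h)    # S = -dOmega/dT at fixed mu

print(S_comb, S_thermo)
```

The two values agree to the accuracy of the finite difference, illustrating that the combinatorial entropy formula is thermodynamically consistent for the mean-field reference at any temperature.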
Notably, the energy denominator regularization is not required to construct a thermodynamic perturbation series that is consistent with the adiabatic formalism in the anisotropic case: at ${T=0}$, the BdD factorization theorem takes the form $$\begin{aligned} \label{reducedfactorizedT0} T=0:\;\;\Omega_{n_1+n_2,\text{anomalous}} &= -\frac{1}{2} \sum_{\textbf}{k} U^{\text{reduced},\ast\ast,\div}_{n_1,{\textbf}{k}} \delta({\varepsilon}_{\textbf}{k}-\mu) {\nonumber \\}&\quad \times U^{\text{reduced},\ast\ast,\div}_{n_2,{\textbf}{k}} (2-\delta_{n_1,n_2}),\end{aligned}$$ and similar for anomalous diagrams with several pieces. (At ${T=0}$, the symbols $\ast\ast$ and $\div$ (and the specification of $\aleph$ to reduced) are not needed for the separation of normal and anomalous contributions to the grand-canonical potential.) Thus, as recognized by Feldman *et al.* [@Feldman1999], to cancel the anomalous contributions to $\Omega(T=0,\mu)$ the following mean field is sufficient (for truncation orders below $2N+2$): $$\begin{aligned} \label{Ufermi} U^{L_{\text{F}}}_{{\textbf}{k}} = U_{1,{\textbf}{k}} + \sum_{n=2}^N L_{\text{F}}\left[ U^{\text{reduced,}\ast\ast,\div}_{n,{\textbf}{k}}(T=0,\mu)\right],\end{aligned}$$ where $L_{\text{F}}$ satisfies $L_{\text{F}}[g({\textbf}{k})] = g({\textbf}{k})$ for ${\varepsilon}_{\textbf}{k}=\mu$ and is smoothed off away from ${\varepsilon}_{\textbf}{k}=\mu$. There are still anomalous contributions to the particle number, so the adiabatic series is not reproduced. The renormalization given by Eq.  (with $\mu$ replaced by $\mu_\text{ref}$) is however sufficient for the consistency of the adiabatic formalism with the modified perturbation series $F(T,\mu_\text{ref})$ in the anisotropic case.
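The operator $L_{\text{F}}$ is fixed only by its on-shell property. One possible realization, assumed here purely for illustration (the Gaussian window and its width $\Delta$ are hypothetical choices, not a prescription from the text), is a smooth window that equals unity on the Fermi surface and decays away from it:

```python
import numpy as np

# Hypothetical smooth window realizing L_F: equals 1 at eps_k = mu, decays away.
# The Gaussian form and the width Delta are illustrative assumptions.
def L_F(g, eps, mu, Delta=0.5):
    return g * np.exp(-((eps - mu) / Delta) ** 2)

eps = np.linspace(0.0, 4.0, 401)
mu = 2.0
g = 0.3 * eps - 0.1 * eps**2          # stand-in for a contribution U_{n,k}(T=0, mu)
gF = L_F(g, eps, mu)

i = np.argmin(np.abs(eps - mu))       # on the Fermi surface, L_F[g] = g
print(gF[i], g[i])
```

Any such window leaves the on-shell value untouched while suppressing the contribution far from ${\varepsilon}_{{\textbf}{k}}=\mu$, which is all that the cancellation of the anomalous contributions to $\Omega(T=0,\mu)$ requires.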
Conclusion {#summary} ========== In the present paper, we have, substantiating the outline by Balian and de Dominicis (BdD) [@statquasi3; @statquasi1],[^37] derived a thermodynamic perturbation series for infinite Fermi systems that (1) is consistent with the adiabatic zero-temperature formalism for both isotropic and anisotropic systems and (2) satisfies at each order and for all temperatures the thermodynamic relations associated with Fermi-liquid theory. This result arises, essentially, as a corollary of the linked-cluster theorem. The proof of (2) \[which implies (1)\] given here relies, apart from the earlier analysis of the disentanglement ($\div$) conducted by Balian, Bloch, and de Dominicis [@Balian1961529] and the outline provided by Balian and de Dominicis, on the application of the cumulant formalism (as a systematic method to perform $\div$) introduced by Brout and Englert [@brout2; @PhysRev.115.824]. The statistical quasiparticles associated with the thermodynamic Fermi-liquid relations are distinguished from the dynamical quasiparticles associated with the asymptotic stability of the low-lying excited states; in particular, the energies of dynamical and statistical quasiparticles are different. In the perturbation series derived in the present paper the reference Hamiltonian is renormalized at each order in terms of additional contributions to the self-consistent mean-field potential. Conceptually, such an order-by-order renormalization is appealing: At each new order, not only is new information about interaction effects included, but this information automatically improves the reference point. Nevertheless, the relevance of this perturbation series depends on its convergence rate compared to the modified perturbation series for the free energy $F(T,\mu_\text{ref})$ with a fixed reference Hamiltonian; e.g., Hartree-Fock, or the (modified) second-order BdD mean field. 
In addition to the complete removal of anomalous contributions, the higher-order mean-field contributions also lead to partial cancellations of normal two-particle reducible contributions. This suggests that the convergence rate may indeed improve by renormalizing the mean field at each order.[^38] Apart from the question of convergence, beyond second order the practicality of the BdD renormalization scheme is impeded by the increasingly complicated regularization procedure required for its numerical application. An alternative renormalization scheme, the direct scheme, was introduced by Balian, Bloch, and de Dominicis [@Balian1961529] (and rederived in the present paper, together with yet another scheme, the cyclic one). The thermodynamic relations resulting from the direct scheme however deviate from the Fermi-liquid relations. More severely, for the direct (and the cyclic) scheme the zero-temperature limit does not exist. The direct scheme may however still be useful for numerical calculations close to the classical limit. In particular, the corresponding perturbation series reproduces the virial expansion in the classical limit [@Balian1961529b]. The BdD renormalization scheme is thus mainly targeted at calculations not too far from the degenerate limit, in particular perturbative nuclear matter calculations (see, e.g., Refs. [@BOGNER200559; @PhysRevC.82.014314; @PhysRevC.83.031301; @PhysRevLett.110.032504; @PhysRevC.88.025802; @PhysRevC.89.025806; @PhysRevC.93.054314; @PhysRevC.94.054307; @Drischler:2017wtt; @PhysRevC.87.014322; @PhysRevC.89.044321; @PhysRevC.91.054311; @Sammarruca:2018bqh]). Notably, the statistical quasiparticle relations may be useful for the application of the Sommerfeld expansion [@prakash] and to connect with phenomenological parametrizations [@Yasin:2018ckc]. In conclusion, future research on the many-fermion problem will investigate the perturbation series derived in the present paper.[^39]\ I thank A. Carbone, C. Drischler, K.
Hebeler, J. W. Holt, F. Hummel, N. Kaiser, R. Lang, M. Prakash, S. Reddy, A. Schwenk and W. Weise for useful discussions. Moreover, I thank the referees for helpful comments. Finally, I thank the group T39 (TU München), the INT (Seattle), the CEA (Saclay) and the ECT\* (Trento), where parts of this work have been presented, for their warm hospitality. This work is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Projektnummer 279384907 – SFB 1245 as well as the DFG and NSFC through the CRC 110 “Symmetries and the Emergence of Structure in QCD”. Two-particle reducible diagrams at fourth order {#app1} =============================================== Here we derive explicitly the regularized ($\ast\ast$) disentangled ($\div$) reduced expressions for the contributions from two-particle reducible diagrams at fourth order. Diagrams with single-vertex loops are canceled by the ones with $-U_1$ vertices; the remaining diagrams with $V$ vertices only are shown in Fig. \[fig4\]. One can choose indices such that for each diagram the matrix elements are given by $$\begin{aligned} \zeta= V^{ij,ab}V^{ik,cd}V^{cd,ki}V^{ab,ij},\end{aligned}$$ and the energy denominators corresponding to the two second-order pieces are given by $D_1=D_{ab,ij}$ and $D_2=D_{cd,ik}$. The cyclic expression for the sum of the diagrams in each row $\nu \in\{1,2,3,4\}$ can then be written as $$\begin{aligned} \Omega^\text{cyclic}_{4,\nu} = \xi_{\nu} \sum_{ijkabcd} \zeta \; \mathcal{N}_{\nu} \, \mathcal{F}_{\nu}^\text{cyclic},\end{aligned}$$ where $\xi_{1,2,3,4}=(-1/4,-1/4,1/8,1/8)$. For the chosen indices the $\mathcal{N}_\nu$ are fixed as $\mathcal{N}_{1}=\mathcal{N}_{2}= n_{iijk} \bar{n}_{abcd}$, $\mathcal{N}_{3}= n_{iijcd} \bar{n}_{abk}$, and $\mathcal{N}_{4}= n_{abk} \bar{n}_{iijcd}$. Finally, from Eq.
, the $\mathcal{F}^\text{cyclic}_\nu$ are given by $$\begin{aligned} \mathcal{F}_{1}^\text{cyclic} &= \frac{1}{D_1^2 D_{1+2}} -\frac{\operatorname{\operatorname{e}}^{-\beta (D_{1+2})}}{D_2^2 D_{1+2}} +\frac{\operatorname{\operatorname{e}}^{-\beta D_1}D_{1-2}}{D_1^2 D_2^2} -\beta\frac{\operatorname{\operatorname{e}}^{-\beta D_1}}{D_1 D_2}, \\ \mathcal{F}_{2}^\text{cyclic} &= \frac{1}{D_1D_2 D_{1+2}} -\frac{\operatorname{\operatorname{e}}^{-\beta (D_{1+2})}}{D_1D_2 D_{1+2}} +\frac{\operatorname{\operatorname{e}}^{-\beta D_1}}{D_1 D_2 D_{1-2}} {\nonumber \\}&\quad -\frac{\operatorname{\operatorname{e}}^{-\beta D_2}}{D_1 D_2 D_{1-2}}, \\ \mathcal{F}_{3}^\text{cyclic} &= \frac{1}{D_1^2 D_{1-2}} -\frac{\operatorname{\operatorname{e}}^{-\beta (D_{1-2})}}{D_2^2 D_{1-2}} +\frac{\operatorname{\operatorname{e}}^{-\beta D_1}D_{1+2}}{D_1^2 D_2^2} +\beta\frac{\operatorname{\operatorname{e}}^{-\beta D_1}}{D_1 D_2}, \\ \mathcal{F}_{4}^\text{cyclic} &= -\frac{1}{D_1^2 D_{1-2}} +\frac{\operatorname{\operatorname{e}}^{\beta (D_{1-2})}}{D_2^2 D_{1-2}} -\frac{\operatorname{\operatorname{e}}^{\beta D_1}D_{1+2}}{D_1^2 D_2^2} +\beta\frac{\operatorname{\operatorname{e}}^{\beta D_1}}{D_1 D_2},\end{aligned}$$ where $D_{1\pm 2}=D_1 \pm D_2$. Although their individual parts have poles, the $\mathcal{F}^\text{cyclic}_{\nu}$ are regular for any zero of $D_1 D_2 D_{1+2} D_{1-2}$. To separate the various parts, we add infinitesimal imaginary parts to the energy denominators, i.e., $$\begin{aligned} {D}_{1} \rightarrow \,& {D}_{1,\eta_{1}}={D}_{1} +{\text{i}}\eta_{1}, \\ {D}_{2} \rightarrow \,& {D}_{2,\eta_{2}}={D}_{2} +{\text{i}}\eta_{2},\end{aligned}$$ where $|\eta_1| \neq |\eta_2|$, since otherwise ${D}_{1+2,\eta_{1+2}}$ or ${D}_{1-2,\eta_{1-2}}$ has zeros. Averaging over the signs of the imaginary parts and applying Eqs. 
\[ndouble\] and , we can reorganize the sum of the 12 diagrams according to $$\begin{aligned} \sum_{\nu=1}^4\Omega^\text{cyclic}_{4,\nu} = \Omega_{4,\text{normal}}^{\text{reduced,}\ast\ast,\div} + \Omega_{4,\text{anom.}}^{\text{reduced,}\ast\ast,\div} + \Omega_{4,\text{pseudo-a.}}^{\text{reduced,}\ast\ast,\div},\end{aligned}$$ where $$\begin{aligned} \label{4normal} \Omega_{4,\text{normal}}^{\text{reduced,}\ast\ast,\div} = & \sum_{\alpha=1}^4 \bigg[ \frac{1}{8} \sum_{ijkabcd} \!\! \zeta \; \mathcal{N}^\text{normal}_{\alpha}\; \mathcal{F}^{\text{reduced},\ast\ast}_{\alpha,\text{normal}} \bigg] , \\ \label{4anom} \Omega_{4,\text{anom.}}^{\text{reduced,}\ast\ast,\div} = & \sum_{\alpha=1,3,4} \bigg[ \frac{\beta}{8} \sum_{ijkabcd} \!\! \zeta \; \mathcal{N}^\text{anom.}_{\alpha} \; \mathcal{F}^{\text{reduced},\ast\ast}_{\alpha,\text{anom.}} \bigg] , \\ \label{4pseud} \Omega_{4,\text{pseudo-a.}}^{\text{reduced,}\ast\ast,\div} = & \sum_{\alpha=1,3,4} \bigg[ \frac{1}{8} \sum_{ijkabcd} \!\! \zeta \; \mathcal{N}^\text{anom.}_{\alpha}\; \mathcal{R}^{\text{reduced},\ast\ast}_{\alpha} \bigg] ,\end{aligned}$$ with $$\begin{aligned} \mathcal{F}^{\text{reduced},\ast\ast}_{\alpha,\text{normal}} = \!\!\!\!\! \sum_{\text{sgn}(\eta_{1}),\text{sgn}(\eta_{2})} \!\! \mathcal{F}^{\text{reduced},\ast\ast}_{\alpha,\text{normal},[\eta_1,\eta_2]} ,\end{aligned}$$ and similar for the anomalous and pseudoanomalous contributions. The correspondence $\alpha \cong \nu$ holds only for the anomalous contributions, and the normal ones with $\alpha=3,4$. For the normal contributions with $\alpha=1,2$, we combine the (disentangled) contributions from the first two ($\alpha=1$) and the third two ($\alpha=2$) diagrams in the first two rows. Regarding the pseudoanomalous contributions, each $\alpha$ corresponds to several $\nu$’s, by virtue of the application of Eq. . 
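The regularity of the $\mathcal{F}^\text{cyclic}_{\nu}$ at the zeros of $D_1 D_2 D_{1+2} D_{1-2}$, noted above, can also be confirmed symbolically: e.g., for $\mathcal{F}_{2}^\text{cyclic}$ the two terms that are individually singular at $D_{1-2}=0$ combine to a finite limit. A sketch using sympy; the numerical values in the check are arbitrary:

```python
import sympy as sp

b, D1, D2 = sp.symbols('beta D_1 D_2', positive=True)

# F_2^cyclic as given in the text; the last two terms are individually
# singular at D_1 = D_2 (a zero of D_{1-2}), the first two at D_{1+2} = 0.
F2 = (1/(D1*D2*(D1 + D2))
      - sp.exp(-b*(D1 + D2))/(D1*D2*(D1 + D2))
      + sp.exp(-b*D1)/(D1*D2*(D1 - D2))
      - sp.exp(-b*D2)/(D1*D2*(D1 - D2)))

# The poles cancel: the limit D_2 -> D_1 exists (checked here for beta=1, D_1=2).
lim = sp.limit(F2.subs({b: 1, D1: 2}), D2, 2)
print(lim)
```

The same check goes through for the other $\mathcal{F}^\text{cyclic}_{\nu}$ and for the zeros of $D_{1+2}$, in line with the statement that only the full row sums, not the individual terms, are regular.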
In the anomalous contribution, the occupation factors are $$\begin{aligned} \label{Nanom1} \mathcal{N}^\text{anom.}_1&= n_{iabk} \bar{n}_{ijcd},\\ \label{Nanom2} \mathcal{N}^\text{anom.}_3&= n_{iabcd} \bar{n}_{ijk},\\ \label{Nanom3} \mathcal{N}^\text{anom.}_4&= n_{ijk} \bar{n}_{iabcd},\end{aligned}$$ ![The 12 fourth-order two-particle reducible diagrams composed of two second-order normal pieces. In each of the four rows (1, 2, 3, 4), the first (and third) diagram is a normal diagram, the other anomalous. The diagrams in each row transform into each other under cyclic vertex permutations. The set of all 12 diagrams is closed under general vertex permutations. []{data-label="fig4"}](fig6.pdf){width="48.00000%"} and the corresponding energy denominator factors are $$\begin{aligned} \mathcal{F}^{\text{reduced},\ast\ast}_{1,\text{anom.}} &=\!\!\!\!\!\! \sum_{\text{sgn}(\eta_{1}),\text{sgn}(\eta_{2})} \!\! \left[-\frac{2}{{D}_{1,\eta_1}{D}_{2,\eta_2}}\right] , \\ \mathcal{F}^{\text{reduced},\ast\ast}_{3,\text{anom.}} &=\!\!\!\!\!\! \sum_{\text{sgn}(\eta_{1}),\text{sgn}(\eta_{2})} \!\!\frac{1}{{D}_{1,\eta_1}{D}_{2,\eta_2}}, \\ \mathcal{F}^{\text{reduced},\ast\ast}_{4,\text{anom.}} &=\!\!\!\!\!\! \sum_{\text{sgn}(\eta_{1}),\text{sgn}(\eta_{2})} \!\!\frac{1}{{D}_{1,\eta_1}{D}_{2,\eta_2}}.\end{aligned}$$ Suitably relabeling indices, we obtain the BdD factorization property $$\begin{aligned} \label{4thorderfactorized} \Omega_{4,\text{anom.}}^{\text{reduced,}\ast\ast,\div} =-\frac{\beta}{2}\sum_i U_{2,i}^{\text{reduced,}\ast\ast,(\div)} n_i \bar n_i \, U_{2,i}^{\text{reduced,}\ast\ast,(\div)},\end{aligned}$$ with $U_{2,i}^{\text{reduced,}\ast\ast,(\div)}$ given by Eq. . Relabeling indices according to Eqs.
, , and , the energy denominators in the pseudoanomalous contribution are $$\begin{aligned} \mathcal{R}^{\text{reduced},\ast\ast}_{1,[\eta_1,\eta_2]} &= -\frac{2}{{D}_{1,\eta_1}({D}_{2,\eta_2})^2} +\frac{2}{({D}_{1,\eta_1})^2{D}_{2,\eta_2}} {\nonumber \\}& \quad -\frac{2}{{D}_{1,\eta_1}{D}_{2,\eta_2}{D}_{1-2,\eta_{1-2}}} -\frac{2}{{D}_{1,\eta_2}{D}_{2,\eta_1}{D}_{1-2,\eta_{2-1}}} {\nonumber \\}& \quad +\frac{1}{({D}_{2,\eta_1})^2{D}_{1-2,\eta_{2-1}}} +\frac{1}{({D}_{2,\eta_2})^2{D}_{1-2,\eta_{1-2}}} {\nonumber \\}& \quad +\frac{1}{({D}_{1,\eta_1})^2{D}_{1-2,\eta_{1-2}}} +\frac{1}{({D}_{1,\eta_2})^2{D}_{1-2,\eta_{2-1}}} , \\ \mathcal{R}^{\text{reduced},\ast\ast}_{3,[\eta_1,\eta_2]} &= -\frac{2}{({D}_{2,\eta_2})^2{D}_{1+2,\eta_{1+2}}} -\frac{2}{{D}_{1,\eta_1}{D}_{2,\eta_2}{D}_{1+2,\eta_{1+2}}} {\nonumber \\}& \quad +\frac{1}{{D}_{1,\eta_1}({D}_{2,\eta_2})^2} +\frac{1}{({D}_{1,\eta_1})^2{D}_{2,\eta_2}} , \\ \mathcal{R}^{\text{reduced},\ast\ast}_{4,[\eta_1,\eta_2]} &= \frac{2}{({D}_{2,\eta_2})^2{D}_{1+2,\eta_{1+2}}} +\frac{2}{{D}_{1,\eta_1}{D}_{2,\eta_2}{D}_{1+2,\eta_{1+2}}} {\nonumber \\}& \quad -\frac{1}{{D}_{1,\eta_1}({D}_{2,\eta_2})^2} -\frac{1}{({D}_{1,\eta_1})^2{D}_{2,\eta_2}}.\end{aligned}$$ In these expressions, the parts with three different denominators require special attention: the formal application of the Sokhotski-Plemelj-Fox formula assumes that each energy denominator is used as an explicit integration variable, but this is not possible for terms with denominators of the form ${D}_{1,\eta_1}{D}_{2,\eta_2}{D}_{1\pm 2,\eta_{1\pm 2}}$. 
To evaluate these terms, we use the relations $$\begin{aligned} \label{combineD} \frac{1}{({D}_{1,\eta_1})^2{D}_{1\pm 2,\eta_{1\pm 2}}}\pm \frac{1}{{D}_{1,\eta_1}{D}_{2,\eta_2}{D}_{1\pm 2,\eta_{1\pm 2}}} &= \pm \frac{1}{({D}_{1,\eta_1})^2{D}_{2,\eta_{2}}}, \\ \label{combineD2} \frac{1}{({D}_{2,\eta_2})^2{D}_{1\pm 2,\eta_{1\pm 2}}}\pm \frac{1}{{D}_{1,\eta_1}{D}_{2,\eta_2}{D}_{1\pm 2,\eta_{1\pm 2}}} &= \pm\frac{1}{{D}_{1,\eta_1}({D}_{2,\eta_{2}})^2}.\end{aligned}$$ This leads to $$\begin{aligned} \mathcal{R}^{\text{reduced},\ast\ast}_{1} &=0 , \\ \label{4pseudoav2} \mathcal{R}^{\text{reduced},\ast\ast}_{3} &=\!\!\! \sum_{\text{sgn}(\eta_{1}),\text{sgn}(\eta_{2})} \left[ -\frac{1}{{D}_{1,\eta_1}({D}_{2,\eta_2})^2} +\frac{1}{({D}_{1,\eta_1})^2{D}_{2,\eta_2}} \right] , \\ \label{4pseudoav3} \mathcal{R}^{\text{reduced},\ast\ast}_{4} &=\!\!\! \sum_{\text{sgn}(\eta_{1}),\text{sgn}(\eta_{2})} \left[ \frac{1}{{D}_{1,\eta_1}({D}_{2,\eta_2})^2} -\frac{1}{({D}_{1,\eta_1})^2{D}_{2,\eta_2}} \right] .\end{aligned}$$ One sees that $\mathcal{R}^{\text{reduced},\ast\ast}_{3}$ and $\mathcal{R}^{\text{reduced},\ast\ast}_{4}$ are antisymmetric under $D_1 \leftrightarrow D_2$. In each case, the remaining part of the integrand is symmetric under $D_1 \leftrightarrow D_2$. 
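The denominator relations used above are elementary partial-fraction identities and can be checked symbolically, including the infinitesimal imaginary parts; a short sympy verification (the equation labels in the comments refer to the relations just given):

```python
import sympy as sp

D1, D2, e1, e2 = sp.symbols('D_1 D_2 eta_1 eta_2', real=True)
a = D1 + sp.I*e1     # D_{1,eta_1}
c = D2 + sp.I*e2     # D_{2,eta_2}; then D_{1+-2,eta_{1+-2}} = a +- c

# Upper and lower signs of the first relation (combineD):
plus_id  = 1/(a**2*(a + c)) + 1/(a*c*(a + c)) - 1/(a**2*c)
minus_id = 1/(a**2*(a - c)) - 1/(a*c*(a - c)) + 1/(a**2*c)
# Upper sign of the partner relation (combineD2):
plus_id2 = 1/(c**2*(a + c)) + 1/(a*c*(a + c)) - 1/(a*c**2)

print(sp.simplify(plus_id), sp.simplify(minus_id), sp.simplify(plus_id2))
```

All three expressions simplify to zero, confirming that the three-denominator terms are absorbed exactly as stated.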
Thus, the pseudoanomalous contribution is zero: $$\begin{aligned} \Omega_{4,\text{pseudo-a.}}^{\text{reduced,}\ast\ast,\div} =0.\end{aligned}$$ Finally, in the normal contribution the occupation factors are $$\begin{aligned} \label{Nnorm1} \mathcal{N}^\text{normal}_1&= n_{ijk} \bar{n}_{abcd},\\ \label{Nnorm2} \mathcal{N}^\text{normal}_2&= n_{abcd} \bar{n}_{ijk},\\ \label{Nnorm3} \mathcal{N}^\text{normal}_3&= n_{ijcd} \bar{n}_{abk},\\ \label{Nnorm4} \mathcal{N}^\text{normal}_4&= n_{abk} \bar{n}_{ijcd},\end{aligned}$$ and the energy denominator factors are $$\begin{aligned} \label{Enorm1} \mathcal{F}^{\text{reduced},\ast\ast}_{1,\text{normal},[\eta_1,\eta_2]} &= -\frac{2}{({D}_{1,\eta_1})^2{D}_{1+2,\eta_{1+2}}} {\nonumber \\}& \quad -\frac{2}{{D}_{1,\eta_1}{D}_{2,\eta_2}{D}_{1+2,\eta_{1+2}}} , \\ \label{Enorm2} \mathcal{F}^{\text{reduced},\ast\ast}_{2,\text{normal},[\eta_1,\eta_2]} &= \frac{2}{({D}_{2,\eta_2})^2{D}_{1+2,\eta_{1+2}}} +\frac{2}{{D}_{1,\eta_1}{D}_{2,\eta_2}{D}_{1+2,\eta_{1+2}}} , \\ \label{Enorm3} \mathcal{F}^{\text{reduced},\ast\ast}_{3,\text{normal},[\eta_1,\eta_2]} &= \frac{1}{({D}_{1,\eta_1})^2{D}_{1-2,\eta_{1-2}}} +\frac{1}{({D}_{1,\eta_2})^2{D}_{1-2,\eta_{2-1}}} , \\ \label{Enorm4} \mathcal{F}^{\text{reduced},\ast\ast}_{4,\text{normal},[\eta_1,\eta_2]} &=-\frac{1}{({D}_{1,\eta_1})^2{D}_{1-2,\eta_{1-2}}} -\frac{1}{({D}_{1,\eta_2})^2{D}_{1-2,\eta_{2-1}}} ,\end{aligned}$$ where we have suitably relabeled indices. Applying Eqs.  and , the averaged expressions are given by $$\begin{aligned} \label{Enorm1av} \mathcal{F}^{\text{reduced},\ast\ast}_{1,\text{normal}} &=\!\!\! \sum_{\text{sgn}(\eta_{1}),\text{sgn}(\eta_{2})} \left[-\frac{2}{({D}_{1,\eta_1})^2{D}_{2,\eta_{2}}}\right] , \\ \label{Enorm2av} \mathcal{F}^{\text{reduced},\ast\ast}_{2,\text{normal}} &=\!\!\! \sum_{\text{sgn}(\eta_{1}),\text{sgn}(\eta_{2})} \frac{2}{{D}_{1,\eta_{1}}({D}_{2,\eta_2})^2} , \\ \label{Enorm3av} \mathcal{F}^{\text{reduced},\ast\ast}_{3,\text{normal}} &=\!\!\!
\sum_{\text{sgn}(\eta_{1}),\text{sgn}(\eta_{2})} \frac{1}{({D}_{1,\eta_1})^2{D}_{1-2,\eta_{1-2}}} , \\ \label{Enorm4av} \mathcal{F}^{\text{reduced},\ast\ast}_{4,\text{normal}} &=\!\!\! \sum_{\text{sgn}(\eta_{1}),\text{sgn}(\eta_{2})} \left[-\frac{1}{({D}_{1,\eta_1})^2{D}_{1-2,\eta_{1-2}}}\right] .\end{aligned}$$ In addition to the contribution from the 12 diagrams shown in Fig. \[fig4\], in the BdD scheme the two-particle reducible contribution at fourth order involves the six diagrams of Fig. \[fig3red\] with the first-order subdiagrams replaced by $-U_{2}^\text{BdD}$ vertices, and the diagram composed of two $-U_{2}^\text{BdD}$ vertices (Fig. \[fig4UU\]). The anomalous contributions from these 19 diagrams cancel each other (as a consequence of Eq. ). Notably, there is also a partial analytic cancellation between the contributions from Eqs.  and and the normal contribution from the third-order diagrams with one $-U_{2}^\text{BdD}$ vertex (Fig. \[fig3red\]), see Refs. [@Becker:1971asg; @Jones:1970hwh; @Wellenhofer:2017qla]. Such partial cancellations can be found also at higher orders for the normal contribution from certain (normal) two-particle reducible diagrams, i.e., for those where cutting the articulation lines and closing them such that an unlinked diagram (with two linked parts) is generated leaves the number of holes invariant.[^40] Finally, the two-particle reducible fourth-order contribution to $U^\text{BdD}$ is given by the functional derivative of the regularized disentangled reduced normal contributions from the diagrams of Fig. \[fig4\] and the ones of Fig. \[fig3red\] with the first-order subdiagrams replaced by $-U_{2}^\text{BdD}$ vertices.[^41] Self-energy, mass function, mean field and all that {#app2} =================================================== Here, we discuss the various forms of the self-energy, and their relation to the grand-canonical potential, the mean occupation numbers, and the (various forms of the) mean field. 
Analytic continuation(s) of the Matsubara self-energy {#app20} ----------------------------------------------------- Although it is defined in terms of the self-consistent Dyson equation, the proper Matsubara self-energy $\Xi_{\textbf}{k}(z_l)$ can also be calculated using bare propagators; in that case, two-particle reducible self-energy diagrams also contribute to $\Xi_{\textbf}{k}(z_l)$; see, e.g., Ref. [@PLATTER2003250]. From $\Xi_{\textbf}{k}(z_l)$ the frequency-space self-energy $\Sigma_{\textbf}{k}(z)$ is obtained as the analytic continuation of $\Xi_{\textbf}{k}(z_l)$ that has the following properties:[^42] 1. $\Sigma_{\textbf}{k}(z)$ is analytic in both the upper and lower half plane, vanishes at infinity, and has a branch cut along the real axis where $\text{Im}[\Sigma_{\textbf}{k}(z)]$ changes sign, with ${\text{Im}[\Sigma_{\textbf}{k}(z)]\lessgtr 0}$ for ${\text{Im}[z] \gtrless 0}$. With these properties, $\Sigma_{\textbf}{k}(z)$ leads to the spectral representation of the mean occupation number, see Eq.  below. Now, as shown below, in bare MBPT another analytic continuation of the Matsubara self-energy $\Xi_{\textbf}{k}(z_l)$ can be defined, here referred to as the mass function $\mathcal{M}_{\textbf}{k}(z)$. It has the following properties: 1. $\mathcal{M}_{\textbf}{k}(z)$ is entire and real on the real axis for ${T\neq 0}$. 2. It vanishes at infinity, except for $1/\text{Re}[z]=0^{+}$ where it has an essential singularity. 3. For $\text{Re}[z]>\mu$ it has an essential singularity at ${T=0}$. If $z={\varepsilon}_{\textbf}{k}+\mathcal{M}_{\textbf}{k}(z)$ has no solutions off the real axis, then one can obtain from $\mathcal{M}_{\textbf}{k}(z)$ another simple expression for the mean occupation numbers: the mass function representation $f_{\textbf}{k}=n(\mathscr{E}_{\textbf}{k})$, see Eq. .
The ${T\rightarrow 0}$ limit of this representation is singular for $\mathscr{E}_{\textbf}{k}>\mu$, and gives $f_{\textbf}{k}=n_{\textbf}{k}$ for $\mathscr{E}_{\textbf}{k}<\mu$. The question of whether $z={\varepsilon}_{\textbf}{k}+\mathcal{M}_{\textbf}{k}(z)$ may have nonreal solutions is discussed further below. We have not found an argument that guarantees the existence of nonreal solutions. The functional forms of the bare perturbative contributions to $\Xi_{\textbf}{k}(z_l)$, $\Sigma_{\textbf}{k}(z)$, and $\mathcal{M}_{\textbf}{k}(z)$ are related to those of the different time-independent formulas ($\aleph$) for the perturbative contributions to the grand-canonical potential, $\Omega_n^\aleph$. This is examined in part \[app23\] of this Appendix. Mean occupation numbers from Dyson equation {#app21} ------------------------------------------- Here, we first derive the mass function representation for the mean occupation number, Eq. . Then, we derive the spectral representation Eq. . Only the spectral representation can also be derived from the real-time propagator.[^43] Last, we examine the relation between the collision self-energy $\Sigma^{\text{coll}}_{\textbf}{k}(\omega)$ and the frequency-space self-energy $\Sigma_{\textbf}{k}(z)$ at ${T=0}$. ### Mass function The imaginary-time propagator is given by $$\begin{aligned} \mathscr{G}_{\textbf}{k}(\tau-\tau')=-\Braket{\! \Braket{ \mathcal{T}\left[a_{\textbf}{k}(\tau) a_{\textbf}{k}^\dagger(\tau') \right] } \!},\end{aligned}$$ where $a_{\textbf}{k}(\tau)=a_{\textbf}{k}\operatorname{\operatorname{e}}^{-{\varepsilon}_{\textbf}{k} \tau}$ and $a^\dagger_{\textbf}{k}(\tau)=a^\dagger_{\textbf}{k}\operatorname{\operatorname{e}}^{{\varepsilon}_{\textbf}{k} \tau}$, and $\braket{\!\braket{\ldots}\!}$ is the true ensemble average.
Its Fourier series is $$\begin{aligned} \label{Gfourierser} \mathscr{G}_{\textbf}{k}(\tau)=\frac{1}{\beta} \lim_{l_\text{max} \rightarrow \infty} \sum_{l\in \mathcal{L}(l_\text{max}) } \mathscr{G}_{\textbf}{k}(z_l) \operatorname{\operatorname{e}}^{-z_l\tau},\end{aligned}$$ where $\mathcal{L}(l_\text{max})=\{-l_\text{max},\ldots,l_\text{max}\}$, and $z_l$ are the Matsubara frequencies (see Eq. ). The Fourier coefficients are given by $$\begin{aligned} \label{Gfourierserinv} \mathscr{G}_{\textbf}{k}(z_l)=\int \limits_{0}^\beta d\tau \,\mathscr{G}_{\textbf}{k}(\tau)\operatorname{\operatorname{e}}^{z_l\tau}.\end{aligned}$$ The Dyson equation in Fourier (Matsubara) space is given by $$\begin{aligned} \mathscr{G}_{\textbf}{k}(z_l) &= g_{\textbf}{k}(z_l) + g_{\textbf}{k}(z_l)\, \Xi_{\textbf}{k}(z_l)\, \mathscr{G}_{\textbf}{k}(z_l),\end{aligned}$$ where $\Xi_{\textbf}{k}(z_l)$ is the Matsubara self-energy and $g_{{\textbf}{k}}(z_l)$ is the unperturbed propagator in Matsubara space, i.e., $$\begin{aligned} \label{g0fourier} g_{{\textbf}{k}}(z_l) =\frac{1}{z_l-{\varepsilon}_{\textbf}{k}}.\end{aligned}$$ Iterating the Dyson equation and summing the resulting geometric series leads to $$\begin{aligned} \label{Dysonsummed} \mathscr{G}_{\textbf}{k}(z_l) &= \frac{1}{z_l-{\varepsilon}_{\textbf}{k}-\Xi_{\textbf}{k}(z_l)}.\end{aligned}$$ Inserting this into the Fourier series Eq.  and replacing the discrete frequency sums by a contour integral leads to $$\begin{aligned} \label{Gim0C0} \mathscr{G}_{\textbf}{k}(\tau) &= \! \oint\limits_{C_0[l_\text{max}]} \!\!\! \frac{d z}{2\pi {\text{i}}} \, \operatorname{\operatorname{e}}^{-z\tau}n(z)\, \frac{1}{z-{\varepsilon}_{\textbf}{k}-\mathcal{M}_{\textbf}{k}(z)} ,\end{aligned}$$ where $l_\text{max}\rightarrow \infty$ is implied, and $n(z)=[1+\operatorname{\operatorname{e}}^{\beta(z-\mu)}]^{-1}$.
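(As a consistency check, in the unperturbed case $\Xi_{\textbf}{k}=0$ the Matsubara sum can be evaluated directly: the $z_l$ are the poles of $n(z)$, i.e., $z_l=\mu+{\text{i}}\pi(2l+1)/\beta$, and a symmetrically truncated sum of $g_{\textbf}{k}(z_l)/\beta$ converges to $n({\varepsilon}_{\textbf}{k})-1/2$, the remaining $1/2$ being supplied by the convergence factor $\operatorname{e}^{z_l 0^{+}}$ from the $\tau\rightarrow 0^{-}$ limit. A numeric sketch with arbitrary parameter values:)

```python
import numpy as np

beta, mu, eps_k = 2.0, 1.0, 1.7           # arbitrary illustrative parameters
L = 200000

l = np.arange(-L, L)                      # symmetric truncation pairs l and -l-1
z_l = mu + 1j*np.pi*(2*l + 1)/beta        # fermionic Matsubara frequencies z_l
S = np.sum(1.0/(z_l - eps_k)).real/beta   # (1/beta) * sum_l g_k(z_l)

n_exact = 1.0/(1.0 + np.exp(beta*(eps_k - mu)))
# Truncated sum gives n(eps_k) - 1/2; the 1/2 comes from the e^{z_l 0^+} factor.
print(S + 0.5, n_exact)
```

The truncation error decays like $1/L$, so the two printed values agree to several digits for the chosen $L$.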
The contour $C_0[l_\text{max}]$ encloses all the Matsubara poles $z=z_{l\in\mathcal{L}(l_\text{max})}$ but not the pole at $z={\varepsilon}_{\textbf}{k}+\mathcal{M}_{\textbf}{k}(z)$, see Fig. \[figc1\]. Note that by construction $C_0[l_\text{max}]$ crosses the real axis. Thus, for Eq.  to be equivalent to Eq. , the mass function $\mathcal{M}_{\textbf}{k}(z)$ must be an analytic continuation of the Matsubara self-energy $\Xi_{\textbf}{k}(z_l)$ that is analytic on the real axis (and near the Matsubara poles). This is straightforward to arrange: for the second-order two-particle irreducible contribution to $\mathcal{M}_{\textbf}{k}(z)$ we obtain from Eq.  the expression $$\begin{aligned} \label{sigma2bMass} \mathcal{M}_{2,{\textbf}{k}}(z) &= \frac{1}{2} \sum_{{\textbf}{k}_2, {\textbf}{k}_3, {\textbf}{k}_4} \! |\braket{\psi_{{\textbf}{k}}\psi_{{\textbf}{k}_2}| V|\psi_{{\textbf}{k}_3} \psi_{{\textbf}{k}_4}}|^2 n_{{\textbf}{k}_2} \bar n_{{\textbf}{k}_3} \bar n_{{\textbf}{k}_4} {\nonumber \\}& \quad \times \frac{e^{-\beta({\varepsilon}_{{\textbf}{k}_3}+{\varepsilon}_{{\textbf}{k}_4} - {\varepsilon}_{{\textbf}{k}_2} - z)}-1}{{\varepsilon}_{{\textbf}{k}_3}+{\varepsilon}_{{\textbf}{k}_4} - {\varepsilon}_{{\textbf}{k}_2} - z},\end{aligned}$$ i.e., in contrast to Eq. , we do *not* substitute $\operatorname{\operatorname{e}}^{\beta(z_l-\mu)}=-1$ before performing the analytic continuation, and similarly for higher-order contributions. Since with this prescription there are no poles, $\mathcal{M}_{\textbf}{k}(z)$ is entire and real on the real axis, and regular everywhere except for $\text{Re}[z]\rightarrow \infty$. However, the ${T\rightarrow 0}$ limit of $\mathcal{M}_{\textbf}{k}(z)$ is singular for $\text{Re}[z]>\mu$, due to terms $\operatorname{\operatorname{e}}^{\beta (z-\mu)}$ as in Eq. . ![Contours $C_0[l_\text{max}]$ (left panel) and $C_1$ (right panel).[]{data-label="figc1"}](C1.pdf){width="45.00000%"} In Eq.
, for $\tau<0$ the term $\operatorname{\operatorname{e}}^{-z\tau} n_{\textbf}{k}(z)$ is regular at infinity, and the term $[z-{\varepsilon}_{\textbf}{k}-\mathcal{M}_{\textbf}{k}(z)]^{-1}$ vanishes at infinity. Hence, if we assume that $z={\varepsilon}_{\textbf}{k}+\mathcal{M}_{\textbf}{k}(z)$ has no solutions off the real axis, for $\tau<0$ the contour $C_0[l_\text{max}]$ can be deformed into the contour $C_1$ (see Fig. \[figc1\]) that encloses the pole on the real axis at $$\begin{aligned} \label{massSC} \mathscr{E}_{\textbf}{k}={\varepsilon}_{\textbf}{k}+\mathcal{M}_{\textbf}{k}(\mathscr{E}_{\textbf}{k}).\end{aligned}$$ For nonreal $z=x+iy$, $z={\varepsilon}_{\textbf}{k}+\mathcal{M}_{\textbf}{k}(z)$ is equivalent to the two coupled nonlinear equations $x={\varepsilon}_{\textbf}{k}+\text{Re}[\mathcal{M}_{\textbf}{k}(x+iy)]$ and $y=\text{Im}[\mathcal{M}_{\textbf}{k}(x+iy)]$. At second order, this is given by $$\begin{aligned} \label{Mcoup1} x &= {\varepsilon}_{\textbf}{k} - U_{{\textbf}{k}} + U_{1,{\textbf}{k}} + \mathcal{M}^{\ddagger}_{2,{\textbf}{k}} {\nonumber \\}& \quad + \frac{1}{2} \sum_{{\textbf}{k}_2, {\textbf}{k}_3, {\textbf}{k}_4} \! |\braket{\psi_{{\textbf}{k}}\psi_{{\textbf}{k}_2}| V|\psi_{{\textbf}{k}_3} \psi_{{\textbf}{k}_4}}|^2 \frac{n_{{\textbf}{k}_2} \bar n_{{\textbf}{k}_3} \bar n_{{\textbf}{k}_4}}{\left[D(x)\right]^2+y^2} {\nonumber \\}& \quad \times \{ D(x)[\cos(\beta y)e^{-\beta D(x)}-1] -y \sin(\beta y) e^{-\beta D(x)} \} , \\ \label{Mcoup2} y &= \frac{1}{2} \sum_{{\textbf}{k}_2, {\textbf}{k}_3, {\textbf}{k}_4} \! |\braket{\psi_{{\textbf}{k}}\psi_{{\textbf}{k}_2}| V|\psi_{{\textbf}{k}_3} \psi_{{\textbf}{k}_4}}|^2 \frac{n_{{\textbf}{k}_2} \bar n_{{\textbf}{k}_3} \bar n_{{\textbf}{k}_4}}{\left[D(x)\right]^2+y^2} {\nonumber \\}& \quad \times \{ y[ \cos(\beta y)e^{-\beta D(x)}-1] +D(x) \sin(\beta y) e^{-\beta D(x)} \},\end{aligned}$$ with $D(x)={\varepsilon}_{{\textbf}{k}_3}+{\varepsilon}_{{\textbf}{k}_4} - {\varepsilon}_{{\textbf}{k}_2} - x$. 
Here, $-U_{{\textbf}{k}}$ corresponds to the diagram composed of a single $-U$ vertex, $U_{1,{\textbf}{k}}$ to the one with a single $V$ vertex, and $\mathcal{M}^{\ddagger}_{2,{\textbf}{k}}$ denotes the second-order two-particle reducible contribution to $\mathcal{M}_{2,{\textbf}{k}}(z)$. We did not see an argument that guarantees that Eqs.  and have solutions for ${y\neq 0}$. Assuming that there is only the pole given by Eq. , we get $$\begin{aligned} \label{fMass0} \mathscr{G}_{\textbf}{k}(\tau<0) &=n\left(\mathscr{E}_{\textbf}{k}\right)\operatorname{\operatorname{e}}^{-\mathscr{E}_{\textbf}{k}\tau},\end{aligned}$$ and the expression for the mean occupation numbers is given by $$\begin{aligned} \label{fMass} f_{\textbf}{k} &= \mathscr{G}_{\textbf}{k}(0^{-}) =n\left(\mathscr{E}_{\textbf}{k}\right),\end{aligned}$$ i.e., the exact mean occupation numbers are given by the Fermi-Dirac distribution with the reference spectrum renormalized in terms of the on-shell mass function $\mathcal{M}_{\textbf}{k}(\mathscr{E}_{\textbf}{k})$ defined via Eq.  and the analytic continuation of the Matsubara self-energy $\Xi_{\textbf}{k}(z_l)$ that is real analytic on the real axis. As discussed, the ${T\rightarrow 0}$ limit of $\mathcal{M}_{\textbf}{k}(\mathscr{E}_{\textbf}{k})$ is ill-behaved. ### Frequency-space self-energy The real-time propagator is[^44] $$\begin{aligned} {\text{i}}G_{\textbf}{k}(t-t')= \Braket{\! \Braket{ \mathcal{T}\left[a_{\textbf}{k}(t) a_{\textbf}{k}^\dagger(t') \right] } \!},\end{aligned}$$ with $a_{\textbf}{k}(t)=a_{\textbf}{k}\operatorname{\operatorname{e}}^{-{\text{i}}{\varepsilon}_{\textbf}{k} t}$ and $a^\dagger_{\textbf}{k}(t)=a^\dagger_{\textbf}{k}\operatorname{\operatorname{e}}^{{\text{i}}{\varepsilon}_{\textbf}{k}t}$. It can be decomposed as $$\begin{aligned} {\text{i}}G_{\textbf}{k}(t-t')= \theta(t-t') \underbrace{\braket{\! 
\braket{a_{\textbf}{k}(t) a_{\textbf}{k}^\dagger(t') }\!} }_{ {\text{i}}G^{>}_{\textbf}{k}(t-t')} - \theta(t'-t)\underbrace{\braket{\! \braket{a_{\textbf}{k}^\dagger(t') a_{\textbf}{k}(t) }\!} }_{-{\text{i}}G^{<}_{\textbf}{k}(t-t')},\end{aligned}$$ where we have defined the correlation functions ${\text{i}}G^{>}_{\textbf}{k}(t-t')$ and ${\text{i}}G^{<}_{\textbf}{k}(t-t')$. The Fourier transforms of the real-time propagator and the correlation functions are given by $$\begin{aligned} G_{\textbf}{k}(\omega)&=\int \limits_{-\infty}^\infty \!\! dt \, {\text{i}}G_{\textbf}{k}(t)\operatorname{\operatorname{e}}^{{\text{i}}\omega t}, \\ G^{>}_{\textbf}{k}(\omega)&=\int \limits_{-\infty}^\infty \!\! dt \, {\text{i}}G_{\textbf}{k}^{>}(t)\operatorname{\operatorname{e}}^{{\text{i}}\omega t}, \\ G^{<}_{\textbf}{k}(\omega)&=\int \limits_{-\infty}^\infty \!\! dt \, \left(-{\text{i}}G_{\textbf}{k}^{<}(t)\operatorname{\operatorname{e}}^{{\text{i}}\omega t}\right),\end{aligned}$$ with inverse transforms $$\begin{aligned} \label{Gfouriertransform} {\text{i}}G_{\textbf}{k}(t)=\int \limits_{-\infty}^\infty \!\! \frac{d \omega}{2 \pi} \, G_{\textbf}{k}(\omega) \operatorname{\operatorname{e}}^{-{\text{i}}\omega t}, \\ {\text{i}}G^{>}_{\textbf}{k}(t)=\int \limits_{-\infty}^\infty \!\! \frac{d \omega}{2 \pi} \, G^{>}_{\textbf}{k}(\omega) \operatorname{\operatorname{e}}^{-{\text{i}}\omega t}, \\ -{\text{i}}G^{<}_{\textbf}{k}(t)=\int \limits_{-\infty}^\infty \!\! \frac{d \omega}{2 \pi} \, G^{<}_{\textbf}{k}(\omega) \operatorname{\operatorname{e}}^{-{\text{i}}\omega t}.\end{aligned}$$ The Fourier transforms of the correlation functions satisfy the KMS relation [@kadanoffbaym; @ThesisRios] (see also Refs. 
[@PhysRev.115.1342; @Haag1967; @doi:10.1143/JPSJ.12.570]) $$\begin{aligned} G^{<}_{\textbf}{k}(\omega) = \operatorname{\operatorname{e}}^{-\beta(\omega-\mu)} G^{>}_{\textbf}{k}(\omega).\end{aligned}$$ From this relation it follows that we can write $$\begin{aligned} G^{>}_{\textbf}{k}(\omega) &= \bar n(\omega) \mathcal{A}_{\textbf}{k}(\omega) , \\ G^{<}_{\textbf}{k}(\omega) &= n(\omega) \mathcal{A}_{\textbf}{k}(\omega),\end{aligned}$$ with $$\begin{aligned} \mathcal{A}_{\textbf}{k}(\omega) &= G^{>}_{\textbf}{k}(\omega)+G^{<}_{\textbf}{k}(\omega),\end{aligned}$$ and $\bar n(\omega)=1-n(\omega)$. From the Lehmann representations of $G^{>}_{\textbf}{k}(t)$ and $G^{<}_{\textbf}{k}(t)$ it can be seen that the spectral function is semipositive, i.e., $$\begin{aligned} \label{Ageq0} \mathcal{A}_{\textbf}{k}(\omega) & \geq 0 ,\end{aligned}$$ and satisfies the sum rule $$\begin{aligned} \label{sumrule} \int \limits_{-\infty}^\infty \!\! \frac{d \omega}{2\pi}\mathcal{A}_{\textbf}{k}(\omega) & =1,\end{aligned}$$ see e.g., Refs. [@Fetter; @ThesisRios]. Consider now the function $\Gamma_{\textbf}{k}(z)$ defined by $$\begin{aligned} \label{Gammadef} \Gamma_{\textbf}{k}(z) &= \int \limits_{-\infty}^\infty \!\! \frac{d \omega}{2\pi} \frac{\mathcal{A}_{\textbf}{k}(\omega)}{z-\omega}.\end{aligned}$$ From the Lehmann representation of the imaginary-time propagator $\mathscr{G}_{\textbf}{k}(\tau)$ it can be seen that [@Fetter] $$\begin{aligned} \mathscr{G}_{\textbf}{k}(z_l) &= \int \limits_{-\infty}^\infty \!\! \frac{d \omega}{2\pi} \frac{\mathcal{A}_{\textbf}{k}(\omega)}{z_l-\omega}.\end{aligned}$$ From the sum rule for $\mathcal{A}_{\textbf}{k}(\omega)$, Eq. , it then follows that $\Gamma_{\textbf}{k}(z)$ corresponds to the (unique [@baym]) analytic continuation of $\mathscr{G}_{\textbf}{k}(z_l)$ that satisfies $\Gamma_{\textbf}{k}(z)\sim z^{-1}$ for $|z|\rightarrow \pm\infty$. From Eq. 
, this can be obtained via[^45] $$\begin{aligned} \label{Sigmadef} \Gamma_{\textbf}{k}(z) &= \frac{1}{z-{\varepsilon}_{\textbf}{k}-\Sigma_{\textbf}{k}(z)},\end{aligned}$$ where the frequency-space self-energy $\Sigma_{\textbf}{k}(z)$ is defined as the analytic continuation of the Matsubara self-energy $\Xi_{\textbf}{k}(z_l)$ that satisfies $\Sigma_{\textbf}{k}(z)\rightarrow 0$ for $|z|\rightarrow \infty$. In bare MBPT, this is given by the prescription noted before Eq.  in Sec. \[sec24\], i.e., one first substitutes $\operatorname{\operatorname{e}}^{\beta(\omega_l-\mu)}=-1$ and then performs the analytic continuation. For convenience, we give again the irreducible part of the bare second-order contribution to $\Sigma_{\textbf}{k}(z)$, i.e., $$\begin{aligned} \label{sigma2again} \Sigma_{2,{\textbf}{k}}(z) &= -\frac{1}{2} \sum_{{\textbf}{k}_2, {\textbf}{k}_3, {\textbf}{k}_4} \! |\braket{\psi_{{\textbf}{k}}\psi_{{\textbf}{k}_2}| V|\psi_{{\textbf}{k}_3} \psi_{{\textbf}{k}_4}}|^2 \frac{ n_{{\textbf}{k}_2} \bar n_{{\textbf}{k}_3} \bar n_{{\textbf}{k}_4} + n_{{\textbf}{k}_3} n_{{\textbf}{k}_4} \bar n_{{\textbf}{k}_2}} {{\varepsilon}_{{\textbf}{k}_3}+{\varepsilon}_{{\textbf}{k}_4} - {\varepsilon}_{{\textbf}{k}_2} - z}.\end{aligned}$$ From Eqs.  and , we obtain for the spectral function the expression $$\begin{aligned} \label{spectralfundef} \mathcal{A}_{\textbf}{k}(\omega) &={\text{i}}\Big[\Gamma_{\textbf}{k}(\omega+{\text{i}}\eta) - \Gamma_{\textbf}{k}(\omega-{\text{i}}\eta) \Big] {\nonumber \\}&={\text{i}}\left[\frac{1}{\omega-{\varepsilon}_{\textbf}{k}-\Sigma_{\textbf}{k}(\omega+{\text{i}}\eta)+{\text{i}}\eta} - \text{c.c.} \right],\end{aligned}$$ where c.c. denotes the complex conjugate. Note that inserting Eq.  into Eq.  leads to the Breit-Wigner form of the spectral function, Eq. . 
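The construction of the spectral function from the two boundary values of $\Gamma_{\textbf}{k}(z)$, together with its semipositivity and the sum rule, can be checked numerically. In this sketch the one-pole self-energy and all parameters are illustrative assumptions:

```python
import numpy as np

# A(w) = i [Gamma(w + i eta) - Gamma(w - i eta)] for a toy self-energy
# Sigma(z) = g^2 / (z - E1); eps, E1, g, eta are arbitrary toy numbers.
eps, E1, g, eta = 0.0, 1.5, 0.6, 0.05

def Gamma(z):
    Sigma = g**2 / (z - E1)
    return 1.0 / (z - eps - Sigma)

w = np.linspace(-25.0, 25.0, 400001)
A = (1j * (Gamma(w + 1j * eta) - Gamma(w - 1j * eta))).real

sum_rule = np.sum(A) * (w[1] - w[0]) / (2.0 * np.pi)
print("min A   :", A.min())     # semipositivity
print("sum rule:", sum_rule)    # ~ 1 up to the truncated Lorentzian tails
```

With the finite regulator $\eta$ the two quasiparticle peaks are Lorentzians of width $\sim\eta$, so the integral reproduces the sum rule only up to the tails cut off at the ends of the frequency grid.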
The relation between $\mathcal{A}_{\textbf}{k}(\omega)$ and the Fourier transform of the real-time propagator $G_{\textbf}{k}(\omega)$ is obtained as follows: $$\begin{aligned} \label{Gomega} G_{\textbf}{k}(\omega) &= \int \limits_{-\infty}^\infty \!\! dt \,\operatorname{\operatorname{e}}^{{\text{i}}\omega t} \Big[\theta(t) {\text{i}}G^{>}_{\textbf}{k}(t)+\theta(-t) {\text{i}}G^{<}_{\textbf}{k}(t) \Big] {\nonumber \\}&= -\int \limits_{-\infty}^\infty \!\! dt \,\operatorname{\operatorname{e}}^{{\text{i}}\omega t} \left[ \int \limits_{-\infty}^\infty \!\! \frac{d \xi}{2 \pi{\text{i}}} \,\frac{\operatorname{\operatorname{e}}^{-{\text{i}}\xi t} }{\xi+{\text{i}}\eta} {\text{i}}G^{>}_{\textbf}{k}(t) \right. \left. - \int \limits_{-\infty}^\infty \!\! \frac{d \xi}{2 \pi{\text{i}}} \,\frac{\operatorname{\operatorname{e}}^{-{\text{i}}\xi t} }{\xi-{\text{i}}\eta} {\text{i}}G^{<}_{\textbf}{k}(t) \right] {\nonumber \\}&= -\int \limits_{-\infty}^\infty \!\! \frac{d \xi}{2\pi {\text{i}}} \left[ \frac{G^{>}_{\textbf}{k}(\omega-\xi)}{\xi+{\text{i}}\eta}+\frac{G^{<}_{\textbf}{k}(\omega-\xi)}{\xi-{\text{i}}\eta} \right] {\nonumber \\}&= -\int \limits_{-\infty}^\infty \!\! \frac{d \xi}{2\pi{\text{i}}} \left[ \frac{\bar n(\xi) \mathcal{A}_{\textbf}{k}(\xi)}{\omega-\xi+{\text{i}}\eta} +\frac{n(\xi) \mathcal{A}_{\textbf}{k}(\xi)}{\omega-\xi-{\text{i}}\eta} \right],\end{aligned}$$ where we have used the relation [@Fetter] $$\begin{aligned} \theta(\pm t) = \mp\int\limits_{-\infty}^\infty \!\! \frac{d \xi}{2 \pi{\text{i}}} \, \frac{\operatorname{\operatorname{e}}^{-{\text{i}}\xi t} }{\xi\pm {\text{i}}\eta}.\end{aligned}$$ From Eq.  we then have $$\begin{aligned} {\text{i}}G_{\textbf}{k}(t)= -\int \limits_{-\infty}^\infty \!\! \frac{d \omega}{2\pi} \operatorname{\operatorname{e}}^{-{\text{i}}\omega t}\!\! \int \limits_{-\infty}^\infty \!\! 
\frac{d \xi}{2\pi{\text{i}}} \left[ \frac{\bar n(\xi) \mathcal{A}_{\textbf}{k}(\xi)}{\omega-\xi+{\text{i}}\eta} +\frac{n(\xi) \mathcal{A}_{\textbf}{k}(\xi)}{\omega-\xi-{\text{i}}\eta} \right].\end{aligned}$$ For $t<0$ we can close the $\omega$ integral in the upper half plane. Interchanging the integration order, we then get $$\begin{aligned} \label{Gtless0} {\text{i}}G_{\textbf}{k}(t<0)= -\int \limits_{-\infty}^\infty \!\! \frac{d \xi}{2\pi} \, \operatorname{\operatorname{e}}^{-{\text{i}}\xi t} n(\xi) \mathcal{A}_{\textbf}{k}(\xi).\end{aligned}$$ Thus, the expression for the exact mean occupation numbers is $$\begin{aligned} \label{fspectralrep} f_{\textbf}{k} &= -{\text{i}}G_{\textbf}{k}(0^{-}) =\int \limits_{-\infty}^\infty \!\! \frac{d \xi}{2\pi} \, n(\xi) \mathcal{A}_{\textbf}{k}(\xi).\end{aligned}$$ Here, in contrast to Eq. , the ${T\rightarrow 0}$ limit is well-behaved, and its analysis reveals that $f_{\textbf}{k}(T=0,\mu)$ has a discontinuity at ${\textbf}{k}={\textbf}{k}_{\text{F}}$, see Ref. [@PhysRev.119.1153] and Sec. \[sec24\]. ![Contours $C^{\pm}_0$ (left panel) and $C_2$ (right panel).[]{data-label="figc2"}](C2.pdf){width="45.00000%"} The result given by Eq.  can also be obtained directly from the Fourier expansion of the imaginary-time propagator, Eq. . That is, taking first the limit ${l_\text{max}\rightarrow\infty}$ and then performing the analytic continuation of $\Xi_{\textbf}{k}(z_l)$ to $\Sigma_{\textbf}{k}(z)$ we get $$\begin{aligned} \mathscr{G}_{\textbf}{k}(\tau) &= \oint\limits_{C^{\pm}_0} \frac{d z}{2\pi {\text{i}}} \, \operatorname{\operatorname{e}}^{-z\tau} n(z) \frac{1}{z-{\varepsilon}_{\textbf}{k}-\Sigma_{\textbf}{k}(z)} ,\end{aligned}$$ with $C^{\pm}_0=C^{+}_0+C^{-}_0$, where $C^{+}_0$ encloses the Matsubara poles in the upper half plane without crossing the real axis, and $C^{-}_0$ the poles in the lower half plane. 
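The spectral representation of the mean occupation numbers can also be checked directly: for a Lorentzian spectral function of shrinking width centered at ${\varepsilon}$, the $\xi$ integral reproduces the Fermi-Dirac value $n({\varepsilon})$. The parameters below are illustrative assumptions:

```python
import numpy as np

# f = int dxi/(2 pi) n(xi) A(xi) with a normalized Lorentzian A of width
# gamma centered at eps; beta, mu, eps, gamma are arbitrary toy numbers.
beta, mu, eps = 5.0, 0.0, 0.3

def fermi(x):
    # numerically stable form of 1/(exp(beta*(x-mu)) + 1)
    return 0.5 * (1.0 - np.tanh(0.5 * beta * (x - mu)))

xi = np.linspace(-60.0, 60.0, 2_000_001)
dxi = xi[1] - xi[0]

for gamma in (0.5, 0.05, 0.005):
    A = 2.0 * gamma / ((xi - eps) ** 2 + gamma ** 2)  # int A dxi/(2 pi) = 1
    f = np.sum(fermi(xi) * A) * dxi / (2.0 * np.pi)
    print(f"gamma={gamma:7.3f}  f={f:.6f}")

print("Fermi-Dirac n(eps) =", fermi(eps))
```

As $\gamma\to 0$ the Lorentzian approaches $2\pi\delta(\xi-{\varepsilon})$ and $f$ converges to $n({\varepsilon})$; at larger $\gamma$ the tails weighted by the Fermi function shift $f$ away from the sharp-quasiparticle value.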
Since $\Sigma_{\textbf}{k}(z)$ is analytic in the two half planes and vanishes at complex infinity, and $\operatorname{\operatorname{e}}^{-z\tau} n_{\textbf}{k}(z)$ is regular at infinity for $\tau<0$, for $\tau<0$ these two contours can be deformed into the contour $C_2$ that encloses the real axis, see Fig. \[figc2\], i.e., $$\begin{aligned} \label{Gim0C0again} \mathscr{G}_{\textbf}{k}(\tau<0) &= \!\! \int\limits_{-\infty}^\infty \!\! \frac{d \xi}{2\pi {\text{i}}} \, \operatorname{\operatorname{e}}^{-\xi\tau} n(\xi) \underbrace{\left[\frac{1}{\xi-{\varepsilon}_{\textbf}{k}-\Sigma_{\textbf}{k}(\xi+{\text{i}}\eta)+{\text{i}}\eta} -\text{c.c.}\right]}_{-{\text{i}}\mathcal{A}_{\textbf}{k}(\xi)},\end{aligned}$$ which is just the Wick rotation of Eq. . ### Collision self-energy at zero temperature The self-energy corresponding to the real-time propagator, here referred to as the collision self-energy $\Sigma_{\textbf}{k}^\text{coll}(\omega)$, can be defined by [@Fetter][^46] $$\begin{aligned} \label{Ecoll} G_{\textbf}{k}(\omega) = {\text{i}}\frac{1}{\omega-{\varepsilon}_{\textbf}{k}- \Sigma_{\textbf}{k}^\text{coll}(\omega) }.\end{aligned}$$ In the following, we examine how at ${T=0}$ the collision self-energy relates to the frequency-space self-energy $\Sigma_{\textbf}{k}(z)$. For this, using the Sokhotski-Plemelj theorem we rewrite Eq.  as $$\begin{aligned} G_{\textbf}{k}(\omega) = \int \limits_{-\infty}^\infty \!\! 
\frac{d \xi}{2\pi{\text{i}}} \mathcal{A}_{\textbf}{k}(\xi) \frac{P}{\omega-\xi} - \frac{\bar n(\omega) \mathcal{A}_{\textbf}{k}(\omega)}{2} + \frac{ n(\omega) \mathcal{A}_{\textbf}{k}(\omega)}{2} .\end{aligned}$$ From $n(\omega)\xrightarrow{T\rightarrow 0}\theta(\mu-\omega)$, at zero temperature we have $$\begin{aligned} G_{\textbf}{k}(\omega) = \theta(\omega-\mu)\,G_{\textbf}{k}^\text{R}(\omega) + \theta(\mu-\omega)\,G_{\textbf}{k}^\text{A}(\omega) ,\end{aligned}$$ with the Fourier transforms of the retarded and advanced propagators given by $$\begin{aligned} G_{\textbf}{k}^R(\omega)&= \int \limits_{-\infty}^\infty \!\! \frac{d \xi}{2\pi{\text{i}}} \mathcal{A}_{\textbf}{k}(\xi) \frac{P}{\omega-\xi} - \frac{\mathcal{A}_{\textbf}{k}(\omega)}{2}, \\ G_{\textbf}{k}^A(\omega)&= \int \limits_{-\infty}^\infty \!\! \frac{d \xi}{2\pi{\text{i}}} \mathcal{A}_{\textbf}{k}(\xi) \frac{P}{\omega-\xi} + \frac{\mathcal{A}_{\textbf}{k}(\omega)}{2}.\end{aligned}$$ Comparing with Eq.  we see that $$\begin{aligned} G_{\textbf}{k}^\text{R}(\omega) = {\text{i}}\Gamma_{\textbf}{k}(\omega+ {\text{i}}\eta), \;\;\;\;\;\;\;\; G_{\textbf}{k}^\text{A}(\omega) = {\text{i}}\Gamma_{\textbf}{k}(\omega- {\text{i}}\eta).\end{aligned}$$ Thus, from Eq.  we can at ${T=0}$ make the identification $$\begin{aligned} \Sigma_{\textbf}{k}^\text{coll}(\omega)= \theta(\omega-\mu)\,\Sigma_{\textbf}{k}(\omega+ {\text{i}}\eta) + \theta(\mu-\omega)\,\Sigma_{\textbf}{k}(\omega- {\text{i}}\eta).\end{aligned}$$ From Eq. 
, i.e., $$\begin{aligned} \label{dynquasi1aagain} \Sigma_{\textbf}{k}(\omega\pm{\text{i}}\eta) = \mathcal{S}_{\textbf}{k}(\omega) \mp {\text{i}}\mathcal{J}_{\textbf}{k}(\omega),\end{aligned}$$ we have (at ${T=0}$) $$\begin{aligned} \Sigma_{\textbf}{k}^\text{coll}(\omega) &= \theta(\omega-\mu)\,\big[\mathcal{S}_{\textbf}{k}(\omega)-{\text{i}}\mathcal{J}_{\textbf}{k}(\omega) \big] {\nonumber \\}& \quad +\theta(\mu-\omega)\,\big[\mathcal{S}_{\textbf}{k}(\omega)+{\text{i}}\mathcal{J}_{\textbf}{k}(\omega) \big].\end{aligned}$$ In particular, (as discussed in Sec. \[sec24\]), at zero temperature it is ${\mathcal{J}_{\textbf}{k}(\omega) \xrightarrow{\omega \rightarrow \mu} C_{\textbf}{k}(\mu) (\omega-\mu)^2}$, with $C_{\textbf}{k}(\mu)\geq 0$, so (at ${T=0}$) $$\begin{aligned} \text{Im}\big[\Sigma_{\textbf}{k}^\text{coll}(\omega)\big] &= -\theta(\omega-\mu)\,\mathcal{J}_{\textbf}{k}(\omega) +\theta(\mu-\omega)\,\mathcal{J}_{\textbf}{k}(\omega) {\nonumber \\}&\xrightarrow{\omega \rightarrow \mu} -C_{\textbf}{k}(\mu)\, (\omega-\mu)|\omega-\mu|.\end{aligned}$$ Finally, for the on-shell collision self-energy this leads to $$\begin{aligned} \text{Im}\big[\Sigma_{\textbf}{k}^\text{coll}({\varepsilon}_{\textbf}{k})\big] &\xrightarrow{{\varepsilon}_{\textbf}{k} \rightarrow \mu} -C_{\textbf}{k}(\mu)\, ({\varepsilon}_{\textbf}{k}-\mu)|{\varepsilon}_{\textbf}{k}-\mu|,\end{aligned}$$ which is the property quoted in Refs.[@Holt13a; @Kaiser:2001ra].[^47] Mean occupation numbers from direct mean-field renormalization {#app22} -------------------------------------------------------------- In the direct renormalization scheme the exact mean occupation numbers are identified with the Fermi-Dirac distributions (i.e., with the mean occupation numbers in the reference system), i.e., $$\begin{aligned} \label{fisn} \text{direct scheme:}\;\;\;\; f_{\textbf}{k}=n_{\textbf}{k}.\end{aligned}$$ From this one may conclude that in the direct scheme the mass function is zero, 
$\mathcal{M}_{\textbf}{k}(z)=0$, and the spectral function is given by the unperturbed one, $\mathcal{A}_{\textbf}{k}(\omega)=2\pi \delta(\omega-{\varepsilon}_{\textbf}{k})$. More generally, one may conclude that the Matsubara self-energy is zero, $\Xi_{\textbf}{k}(z_l)=0$. However, these conclusions come with two caveats:

1. The cancellations that lead to Eq.  are not available in Matsubara space. That is, the result $\Xi_{\textbf}{k}(z_l)=0$ is obtained only from the Fourier expansion of the direct expression for the propagator, Eq. . If one instead Fourier expands the (unperturbed) propagators (cf. Appendix \[app23\]) in the time-integral representation, Eq. , then one obtains the usual result, i.e., $\Xi_{\textbf}{k}(z_l)\neq 0$, also in the direct scheme.

2. The (proper) Matsubara self-energy $\Xi_{\textbf}{k}(z_l)$ is defined in terms of the Dyson equation, Eq. . The Dyson equation is inconsistent with a fixed perturbative truncation order; in contrast, Eq.  relies on a finite truncation order $N$.

Caveat (1) implies that the ${T\neq 0}$ part[^48] of the general results of Sec. \[sec24\] can be obtained also in the direct scheme, and caveat (2) makes evident that these results and Eq.  do not contradict each other; they correspond to different partial summations of a divergent asymptotic series. ### Proof of Eq. 
(\[fisn\]) The perturbation series for the imaginary-time propagator $\mathscr{G}_{{\textbf}{k}}(\tau-\tau')$ is given by $$\begin{aligned} \mathscr{G}_{{\textbf}{k}}(\tau-\tau') &= g_{{\textbf}{k}}(\tau-\tau') +\sum_{n=1}^N \mathscr{G}_{n,{\textbf}{k}}(\tau-\tau').\end{aligned}$$ Here, the unperturbed propagator $g_{{\textbf}{k}}(\tau-\tau')$ is given by $$\begin{aligned} \label{smallg} g_{{\textbf}{k}}(\tau-\tau') &= - \Braket{ \mathcal{T}\left[a_{\textbf}{k}(\tau) a_{\textbf}{k}^\dagger(\tau') \right] } {\nonumber \\}&= \theta(\tau-\tau')\, n_{\textbf}{k} \operatorname{\operatorname{e}}^{{\varepsilon}_{\textbf}{k} (\tau-\tau')} - \theta(\tau'-\tau)\, \bar n_{\textbf}{k} \operatorname{\operatorname{e}}^{{\varepsilon}_{\textbf}{k} (\tau-\tau')},\end{aligned}$$ and its Matsubara coefficients are given by Eq. . The perturbative contributions $\mathscr{G}_{n,{\textbf}{k}}(\tau-\tau')$ are given by the expression [@Fetter] $$\begin{aligned} \label{Gmbpt} \mathscr{G}_{n,{\textbf}{k}}(\tau-\tau') &= \frac{(-1)^{n+1}}{n!} \int \limits_{0}^{\beta} d\tau_n\cdots d\tau_1 {\nonumber \\}&\quad \times \Braket{ \mathcal{T}\big[ a_{\textbf}{k}(\tau) \, \mathcal{V}(\tau_n) \cdots \mathcal{V}(\tau_1) \, a_{\textbf}{k}^\dagger(\tau') \big] }_{L} {\nonumber \\}&\equiv\mathscr{G}^{\text{direct}(P)}_{n,{\textbf}{k}}(\tau-\tau').\end{aligned}$$ This can be written as $$\begin{aligned} \mathscr{G}_{n,{\textbf}{k}}(\tau-\tau') &= (-1)^{n+1} \int \limits_{0}^{\beta} d\tau_{n} \int \limits_{0}^{\tau_n} d\tau_{n-1}\cdots \int \limits_{0}^{\tau_2} d\tau_1 {\nonumber \\}&\quad \times \Braket{ \mathcal{T}\big[ a_{\textbf}{k}(\tau) \, \mathcal{V}(\tau_{n}) \cdots \mathcal{V}(\tau_1) \, a_{\textbf}{k}^\dagger(\tau') \big] }_{\!L} {\nonumber \\}&\equiv\mathscr{G}^\text{direct}_{n,{\textbf}{k}}(\tau-\tau') .\end{aligned}$$ From here, we can follow the steps that led Bloch and de Dominicis [@BDDnuclphys7] to the direct formula[^49] for the perturbative contributions to the grand-canonical 
potential (see Sec. \[sec22\]). Because $\mathscr{G}_{n,{\textbf}{k}}(\tau)$ is antiperiodic with period $\beta$ we can, without loss of generality, set $\tau<0$ and $\tau'<0$. For $\tau-\tau'<0$, this leads to $$\begin{aligned} \label{Gdirect} \mathscr{G}^\text{direct}_{n,{\textbf}{k}}(\tau<0)&= \operatorname{\operatorname{e}}^{{\varepsilon}_{\textbf}{k}\tau} \frac{(-1)^{n}}{2\pi {\text{i}}} \oint_{C} dz \frac{\operatorname{\operatorname{e}}^{-\beta z}}{z^2} {\nonumber \\}&\quad \times \Braket{ \mathcal{V} \frac{1}{{D}_n-z} \cdots \mathcal{V} \frac{1}{{D}_1-z} \mathcal{V} \,a_{\textbf}{k}^\dagger\, a_{\textbf}{k} }_{\!L}.\end{aligned}$$ For truncation order $N$, the contributions to $\mathscr{G}_{{\textbf}{k}}(\tau)$ are given by all linked (one-particle irreducible and reducible) propagator diagrams that satisfy Eq. . Applying the cumulant formalism, the contributions to Eq.  are given by normal propagator diagrams[^50] with normal Hugenholtz diagrams attached via higher-cumulant connections, plus diagrams composed of multiple normal propagator diagrams simply-connected via higher-cumulant connections attached to normal Hugenholtz diagrams. With the mean field given by $$\begin{aligned} U_{\textbf}{k}=\sum_{n=1}^N U_{n,{\textbf}{k}}^{\text{direct},\div},\end{aligned}$$ the contributions with higher-cumulant connections are removed. 
*Furthermore*, because propagator diagrams involve all possible orderings of the vertices, (an analog of) the direct factorization theorem applies also for the remaining contributions; e.g., for a one-particle irreducible propagator diagram with non $-U$ self-energy part (i.e., at least one $V$ vertex is involved) we have $$\begin{aligned} \label{Gdirectfactorized1} \mathscr{G}_{n,{\textbf}{k}}^{\div}(\tau)&= g_{{\textbf}{k}}(\tau) \; U^{\text{direct,}\div}_{n,{\textbf}{k}},\end{aligned}$$ for a one-particle reducible diagram with two non $-U$ self-energy parts we have $$\begin{aligned} \label{Gdirectfactorized2} \mathscr{G}_{n_1+n_2,{\textbf}{k}}^{\div}(\tau)&= g_{{\textbf}{k}}(\tau) \; U^{\text{direct,}\div}_{n_1,{\textbf}{k}} \; U^{\text{direct,}\div}_{n_2,{\textbf}{k}},\end{aligned}$$ etc. Hence, in the direct scheme these contributions are canceled by the diagrams where the self-energy parts are replaced by $-U^{\text{direct},\div}$ vertices. Thus,[^51] $$\begin{aligned} \text{direct scheme:}\;\;\;\; \mathscr{G}_{{\textbf}{k}}(\tau)= g_{{\textbf}{k}}(\tau),\end{aligned}$$ and Eq. (\[fisn\]) is proved. Self-energy, mass function, and grand-canonical potential {#app23} --------------------------------------------------------- The (proper) Matsubara self-energy $\Xi_{\textbf}{k}(z_l)$ can be calculated using self-consistent propagators or using bare propagators (or, anything in between). In the bare case, also two-particle reducible self-energy diagrams contribute to $\Xi_{\textbf}{k}(z_l)$; see, e.g., Ref. [@PLATTER2003250]. Below, we first explain how the bare perturbative contributions to the improper Matsubara self-energy $\Xi^\star_{\textbf}{k}(z_l)$ can be obtained. From this, the bare contributions to $\Xi_{\textbf}{k}(z_l)$ are obtained via the restriction to one-particle irreducible diagrams. 
Second, we derive the functional relations between the bare perturbative contributions to the (various forms of the) improper self-energy and the grand-canonical potential.[^52] In particular, we find the simple relation for the proper frequency-space self-energy $\Sigma_{\textbf}{k}(z)$ given by Eq. . ### Matsubara self-energy The improper Matsubara self-energy $\Xi^\star_{{\textbf}{k}}(z_l)$ is defined by [@Luttinger:1960ua] $$\begin{aligned} \label{Xistar0} \mathscr{G}_{{\textbf}{k}}(z_l) &= g_{{\textbf}{k}}(z_l) + g_{{\textbf}{k}}(z_l)\, \Xi^\star_{{\textbf}{k}}(z_l)\, g_{{\textbf}{k}}(z_l),\end{aligned}$$ i.e., the perturbative contributions to $\Xi^\star_{{\textbf}{k}}(z_l)$ are defined by $$\begin{aligned} \label{Xistar1} \mathscr{G}_{n,{\textbf}{k}}(z_l) &= g_{{\textbf}{k}}(z_l)\, \Xi^\star_{n,{\textbf}{k}}(z_l)\, g_{{\textbf}{k}}(z_l).\end{aligned}$$ For example, from Eq.  the second-order irreducible contribution to $\mathscr{G}_{{\textbf}{k}}(\tau)$ is given by $$\begin{aligned} \mathscr{G}_{2,{\textbf}{k}}(\tau) &= -\frac{1}{2} \sum_{{\textbf}{k}_2,{\textbf}{k}_3,{\textbf}{k}_4} |\braket{\psi_{{\textbf}{k}}\psi_{{\textbf}{k}_2}| V|\psi_{{\textbf}{k}_3}\psi_{{\textbf}{k}_4}}|^2 \int \limits_{0}^{\beta} \! d\tau_1 \int \limits_{0}^{\beta} \!d\tau_2 {\nonumber \\}&\quad \times g_{\textbf}{k}(\tau-\tau_1)g_{\textbf}{k}(\tau_2-0) g_{{\textbf}{k}_2}(\tau_{21})g_{{\textbf}{k}_3}(\tau_{12}) g_{{\textbf}{k}_4}(\tau_{12}),\end{aligned}$$ with $\tau_{ij}=\tau_i-\tau_j$. Inserting the Fourier series of the unperturbed propagators $g_{{\textbf}{k}}(\tau)=\beta^{-1}\sum_l g_{{\textbf}{k}}(z_l) e^{-z_l \tau}$ we obtain the expression $$\begin{aligned} \label{Xistar2} \mathscr{G}_{2,{\textbf}{k}}(\tau) &= -\frac{1}{2} \sum_{{\textbf}{k}_2,{\textbf}{k}_3,{\textbf}{k}_4} \!\! 
|\braket{\psi_{{\textbf}{k}}\psi_{{\textbf}{k}_2}| V|\psi_{{\textbf}{k}_3}\psi_{{\textbf}{k}_4}}|^2 \sum_{l,l_2,l_3,l_4} {\nonumber \\}&\quad \times \frac{1}{\beta^4} \operatorname{\operatorname{e}}^{-z_{l}\,\tau} \int \limits_{0}^{\beta} \! d\tau_1 \operatorname{\operatorname{e}}^{-(z_{l_3}+z_{l_4}-z_{l_2}-z_{l})\,\tau_1} {\nonumber \\}&\quad \times \left[g_{{\textbf}{k}}(z_{l}) \right]^2\,g_{{\textbf}{k}_2}(z_{l_2})g_{{\textbf}{k}_3}(z_{l_3}) g_{{\textbf}{k}_4}(z_{l_4}),\end{aligned}$$ where we have eliminated the $\tau_2$ integral and one Matsubara sum via the relation $$\begin{aligned} \label{kroneck} \frac{1}{\beta}\int \limits_{0}^{\beta} \! d\tau_2 \operatorname{\operatorname{e}}^{\pm(z_{l_3}+z_{l_4}-z_{l_2}-z_{l'})\,\tau_2}=\delta_{l_3+l_4,l_2+l'}.\end{aligned}$$ From $\mathscr{G}_{2,{\textbf}{k}}(\tau)=\beta^{-1}\sum_l \mathscr{G}_{2,{\textbf}{k}}(z_l) e^{-z_l \tau}$ and Eq.  we then find that $$\begin{aligned} \label{Xistar3} \Xi_{2,{\textbf}{k}}[g_{\textbf}{k}(z_l),z_l] &= -\frac{1}{2} \sum_{{\textbf}{k}_2,{\textbf}{k}_3,{\textbf}{k}_4} \!\! |\braket{\psi_{{\textbf}{k}}\psi_{{\textbf}{k}_2}| V|\psi_{{\textbf}{k}_3}\psi_{{\textbf}{k}_4}}|^2 \; \sum_{l_2,l_3,l_4} {\nonumber \\}&\quad \times \frac{1}{\beta^3}\int \limits_{0}^\beta \! d\tau \operatorname{\operatorname{e}}^{-(z_{l_3}+z_{l_4}-z_{l_2}-z_{l})\,\tau} {\nonumber \\}&\quad \times g_{{\textbf}{k}_2}(z_{l_2})g_{{\textbf}{k}_3}(z_{l_3}) g_{{\textbf}{k}_4}(z_{l_4}),\end{aligned}$$ i.e., $$\begin{aligned} \label{Xistar4} \Xi_{2,{\textbf}{k}}[g_{\textbf}{k}(\tau),z_l] &= -\frac{1}{2} \sum_{{\textbf}{k}_2,{\textbf}{k}_3,{\textbf}{k}_4} \!\! |\braket{\psi_{{\textbf}{k}}\psi_{{\textbf}{k}_2}| V|\psi_{{\textbf}{k}_3}\psi_{{\textbf}{k}_4}}|^2 \; {\nonumber \\}&\quad \times \int \limits_{0}^\beta \! d\tau \operatorname{\operatorname{e}}^{z_l\,\tau} g_{{\textbf}{k}_2}(\tau)g_{{\textbf}{k}_3}(\tau) g_{{\textbf}{k}_4}(\tau).\end{aligned}$$ Since $\tau>0$ in the time integral, from Eq.  
we have $$\begin{aligned} \label{Xi2calc} \Xi_{2,{\textbf}{k}}[n_{\textbf}{k},z_l] &= -\frac{1}{2} \sum_{{\textbf}{k}_2,{\textbf}{k}_3,{\textbf}{k}_4} |\braket{\psi_{{\textbf}{k}}\psi_{{\textbf}{k}_2}| V|\psi_{{\textbf}{k}_3}\psi_{{\textbf}{k}_4}}|^2 {\nonumber \\}&\quad \times \int \limits_{0}^{\beta} \! d\tau \operatorname{\operatorname{e}}^{-({\varepsilon}_{{\textbf}{k}_3}+{\varepsilon}_{{\textbf}{k}_4}-{\varepsilon}_{{\textbf}{k}_2}-z_l)\,\tau} n_{{\textbf}{k}_2}\bar n_{{\textbf}{k}_3}\bar n_{{\textbf}{k}_4},\end{aligned}$$ and carrying out the time integral we get Eq. . ### Functional relations The functional relations between the perturbative contributions to the improper Matsubara self-energy and the grand-canonical potential are given by (see, e.g., Ref. [@Luttinger:1960ua]) $$\begin{aligned} \label{funcrel1a} \Omega^{\aleph}_{n}[g_{{\textbf}{k}}(z_l)] &= \frac{1}{2n\beta} \sum_{\textbf}{k} \sum_{l} g_{{\textbf}{k}}(z_l)\, \Xi^\star_{n,{\textbf}{k}}[g_{{\textbf}{k}}(z_l),z_l], \\ \label{funcrel1b} \Xi^\star_{n,{\textbf}{k}}[g_{{\textbf}{k}}(z_l),z_l] &= \beta\frac{\delta \Omega^{\aleph}_{n}[g_{{\textbf}{k}}(z_l)]}{\delta [g_{{\textbf}{k}}(z_l)]},\end{aligned}$$ and similar for $\Omega^{\aleph}_{n}[g_{{\textbf}{k}}(\tau)]$ and $\Xi^\star_{n,{\textbf}{k}}[g_{{\textbf}{k}}(\tau),\tau]$. The question is, what does $\aleph$ correspond to? To find this out, we first evaluate the expression obtained from Eq.  for the second-order normal contribution, i.e., $$\begin{aligned} \label{O2calc1} \Omega^{\text{direct}(P)}_{2,\text{normal}}[g_{{\textbf}{k}}(\tau)] &= \frac{1}{8 \beta} \sum_{{\textbf}{k}_1,{\textbf}{k}_2,{\textbf}{k}_3,{\textbf}{k}_4} \!\! 
|\braket{\psi_{{\textbf}{k}_1}\psi_{{\textbf}{k}_2}| V|\psi_{{\textbf}{k}_3}\psi_{{\textbf}{k}_4}}|^2 W^{\text{direct}(P)}_{{\textbf}{k}_1,{\textbf}{k}_2,{\textbf}{k}_3,{\textbf}{k}_4},\end{aligned}$$ where $$\begin{aligned} \label{Wcalc1} W^{\text{direct}(P)}_{{\textbf}{k}_1,{\textbf}{k}_2,{\textbf}{k}_3,{\textbf}{k}_4} &= \int \limits^{\beta}_0 \!\!d\tau_1 \int \limits^{\beta}_0 \!\! d\tau_2 \, g_{{\textbf}{k}_1}(\tau_{21})g_{{\textbf}{k}_2}(\tau_{21})g_{{\textbf}{k}_3}(\tau_{12}) g_{{\textbf}{k}_4}(\tau_{12}) {\nonumber \\}&= \int \limits^{\beta}_0 \!\!d\tau_1 \!\!\!\! \int \limits^{\beta-\tau_1}_{-\tau_1} \!\! d\tau' \, g_{{\textbf}{k}_1}(\tau')g_{{\textbf}{k}_2}(\tau')g_{{\textbf}{k}_3}(-\tau') g_{{\textbf}{k}_4}(-\tau') {\nonumber \\}&= \int \limits^{\beta}_0 \!\!d\tau_1 \!\!\ \int \limits^{0}_{-\tau_1} \!\! d\tau' \, \bar n_{{\textbf}{k}_1}\bar n_{{\textbf}{k}_2}n_{{\textbf}{k}_3} n_{{\textbf}{k}_4} \operatorname{\operatorname{e}}^{-D\tau'} {\nonumber \\}& \quad + \int \limits^{\beta}_0 \!\!d\tau_1 \!\! \int \limits^{\beta-\tau_1}_{0} \!\!\!\! d\tau' \, n_{{\textbf}{k}_1}n_{{\textbf}{k}_2}\bar n_{{\textbf}{k}_3} \bar n_{{\textbf}{k}_4} \operatorname{\operatorname{e}}^{-D\tau'} {\nonumber \\}&= \bar n_{{\textbf}{k}_1}\bar n_{{\textbf}{k}_2}n_{{\textbf}{k}_3} n_{{\textbf}{k}_4} \frac{\operatorname{\operatorname{e}}^{\beta D}-1-\beta D}{D^2} {\nonumber \\}& \quad + n_{{\textbf}{k}_1}n_{{\textbf}{k}_2}\bar n_{{\textbf}{k}_3} \bar n_{{\textbf}{k}_4} \frac{\operatorname{\operatorname{e}}^{-\beta D}-1+\beta D}{D^2},\end{aligned}$$ with $D={\varepsilon}_{{\textbf}{k}_3}+{\varepsilon}_{{\textbf}{k}_4} - {\varepsilon}_{{\textbf}{k}_2} - {\varepsilon}_{{\textbf}{k}_1}$.[^53] Now, we can evaluate $W_{{\textbf}{k}_1,{\textbf}{k}_2,{\textbf}{k}_3,{\textbf}{k}_4}$ also by inserting in the first expression in Eq.  the Fourier expansion of the unperturbed propagators. 
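The closed form obtained by carrying out the two elementary $\tau'$ integrations of the branch integrands can be checked against a brute-force numerical evaluation of the double $\tau$ integral; the occupations and the energy denominator $D$ below are illustrative toy numbers:

```python
import math

# Double tau-integral of the two theta-function branches:
#   nb1*nb2*n3*n4 * exp(-D*t')  for t' in (-t1, 0),
#   n1*n2*nb3*nb4 * exp(-D*t')  for t' in (0, beta - t1).
beta, D = 2.0, 0.7
n1, n2, n3, n4 = 0.9, 0.8, 0.2, 0.1          # illustrative occupations
nb1, nb2, nb3, nb4 = 1 - n1, 1 - n2, 1 - n3, 1 - n4

# closed form from carrying out both integrations
W_closed = (nb1*nb2*n3*n4 * (math.exp(beta*D) - 1 - beta*D) / D**2
            + n1*n2*nb3*nb4 * (math.exp(-beta*D) - 1 + beta*D) / D**2)

# composite midpoint rule, splitting the inner integral exactly at t' = 0
N = 800
h = beta / N
W_num = 0.0
for i in range(N):
    t1 = (i + 0.5) * h
    M1 = max(1, int(N * t1 / beta))          # subintervals of (-t1, 0)
    h1 = t1 / M1
    for j in range(M1):
        tp = -t1 + (j + 0.5) * h1
        W_num += h * h1 * nb1*nb2*n3*n4 * math.exp(-D * tp)
    M2 = max(1, int(N * (beta - t1) / beta)) # subintervals of (0, beta-t1)
    h2 = (beta - t1) / M2
    for j in range(M2):
        tp = (j + 0.5) * h2
        W_num += h * h2 * n1*n2*nb3*nb4 * math.exp(-D * tp)

print("numerical :", W_num)
print("closed    :", W_closed)
```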
This leads to $$\begin{aligned} \label{Wcalc2} W^{\aleph}_{{\textbf}{k}_1,{\textbf}{k}_2,{\textbf}{k}_3,{\textbf}{k}_4} &= \frac{1}{\beta^3} \sum_{l_1,l_2,l_3,l_4} \delta_{l_3+l_4,l_2+l_1} g_{{\textbf}{k}_1}(z_{l_1}) g_{{\textbf}{k}_2}(z_{l_2})g_{{\textbf}{k}_3}(z_{l_3}) g_{{\textbf}{k}_4}(z_{l_4}).\end{aligned}$$ Note that this is the expression we get by substituting $\Xi^\star_{2,{\textbf}{k}}[g_{{\textbf}{k}}(z_l),z_l]$ into Eq. . Using Eq.  we find $$\begin{aligned} \label{Wcalc3} W^{\aleph}_{{\textbf}{k}_1,{\textbf}{k}_2,{\textbf}{k}_3,{\textbf}{k}_4} &= \beta \int \limits_{0}^{\beta} \! d\tau \operatorname{\operatorname{e}}^{-({\varepsilon}_{{\textbf}{k}_3}+{\varepsilon}_{{\textbf}{k}_4}-{\varepsilon}_{{\textbf}{k}_2}-{\varepsilon}_{{\textbf}{k}_1})\,\tau} n_{{\textbf}{k}_1} n_{{\textbf}{k}_2}\bar n_{{\textbf}{k}_3}\bar n_{{\textbf}{k}_4} {\nonumber \\}&= \beta n_{{\textbf}{k}_1} n_{{\textbf}{k}_2}\bar n_{{\textbf}{k}_3}\bar n_{{\textbf}{k}_4} \frac{\operatorname{\operatorname{e}}^{-\beta D}-1}{D},\end{aligned}$$ which corresponds to the *cyclic* expression, Eq. . However, we could have easily evaluated Eq.  such that the expression given by Eq.  would be obtained (i.e., by reversing the step that led to Eq. ). Thus, the $\aleph$ in Eq.  depends on how the Matsubara sums are carried out. The identification of $\aleph$ with cyclic can however be fixed (formally) by substituting $\Xi^\star_{n,{\textbf}{k}}[n_{{\textbf}{k}},z_l]$ for $\Xi^\star_{n,{\textbf}{k}}[g_{{\textbf}{k}}(z_l),z_l]$, i.e., $$\begin{aligned} \label{funcrel1c} \Omega^\text{cyclic}_{n}[g_{{\textbf}{k}}(z_l)] &= \frac{1}{2n\beta} \sum_{\textbf}{k} \sum_{l} g_{{\textbf}{k}}(z_l)\, \Xi^\star_{n,{\textbf}{k}}[n_{{\textbf}{k}},z_l].\end{aligned}$$ Note that no functional derivative relation is available for $\Xi^\star_{n,{\textbf}{k}}[n_{\textbf}{k},z_l]$. Now, from Eqs.  
and , we obtain by analytic continuation the relations $$\begin{aligned} \label{funcrel2a} \Omega_{n}^\aleph[g_{{\textbf}{k}}(z)] &= \frac{1}{2n} \sum_{\textbf}{k} \oint\limits_{C_0} \frac{d z}{2\pi {\text{i}}} \, g_{{\textbf}{k}}(z)\,n_{{\textbf}{k}}(z)\, \Xi^\star_{n,{\textbf}{k}}[g_{{\textbf}{k}}(z),z], \\ \label{funcrel2b} \Xi^\star_{n,{\textbf}{k}}[g_{{\textbf}{k}}(z'),z'] &= \frac{\delta \Omega_{n}^\aleph[g_{{\textbf}{k}}(z)]}{\delta[ g_{{\textbf}{k}}(z')]},\end{aligned}$$ where $C_0 \in \{C_0[l_\text{max}],C_0^{\pm}\}$, with $C_0[l_\text{max}]$ from Fig. \[figc1\] and $C_0^{\pm}$ from Fig. \[figc2\]. Note that these relations require that $\Xi^\star_{n,{\textbf}{k}}$ is represented as a functional of $g_{{\textbf}{k}}(z)$. Replacing $\Xi^\star_{n,{\textbf}{k}}[g_{{\textbf}{k}}(z),z]$ by the mass function $\mathcal{M}^\star_{n,{\textbf}{k}}[n_{\textbf}{k},z]$ leads to $$\begin{aligned} \label{funcrel2c} \Omega_{n}^\text{cyclic}[n_{\textbf}{k}] &= \frac{1}{2n} \sum_{\textbf}{k} \oint\limits_{C_0[l_\text{max}]} \!\!\!\!\! \frac{d z}{2\pi {\text{i}}} \, g_{{\textbf}{k}}(z)\,n_{{\textbf}{k}}(z)\, \mathcal{M}^\star_{n,{\textbf}{k}}[n_{\textbf}{k},z].\end{aligned}$$ Because $g_{{\textbf}{k}}(z)\,n_{{\textbf}{k}}(z)\, \mathcal{M}^\star_{n,{\textbf}{k}}[n_{\textbf}{k},z]$ vanishes at infinity we can deform the contour $C_0[l_\text{max}]$ into the contour $C_1$ from Fig. \[figc1\]. 
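The contour deformation argument, in which an entire mass function leaves only the residue at the pole of $g_{\textbf}{k}(z)$ at $z={\varepsilon}_{\textbf}{k}$, can be verified by a numerical integral over a small circle; the toy mass function and all parameters are illustrative assumptions:

```python
import cmath, math

# (1/(2 pi i)) oint dz n(z) M(z) / (z - eps) on a small circle around eps
# should equal n(eps) * M(eps) when M is entire and the nearest Matsubara
# pole of n(z), at mu + i*pi/beta, lies outside the circle.
beta, mu, eps = 3.0, 0.2, 0.8

def n(z):
    return 1.0 / (cmath.exp(beta * (z - mu)) + 1.0)

def M(z):
    D = 2.5 - z                      # removable singularity at D = 0 only
    return 0.1 * (cmath.exp(-beta * D) - 1.0) / D

r, K = 0.05, 256                     # radius and number of quadrature nodes
s = 0.0
for k in range(K):
    th = 2.0 * math.pi * (k + 0.5) / K
    z = eps + r * cmath.exp(1j * th)
    dz = 1j * r * cmath.exp(1j * th) * (2.0 * math.pi / K)
    s += n(z) * M(z) * dz / (z - eps)
s /= 2j * math.pi

print("contour :", s.real)
print("residue :", (n(eps) * M(eps)).real)
```

The midpoint rule on a circle converges spectrally for analytic integrands, so even a modest number of nodes reproduces the residue to machine precision.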
Since $\mathcal{M}^\star_{n,{\textbf}{k}}(z)$ is entire, we get only the contributions from the pole at $1/g_{{\textbf}{k}}(z)=0$, i.e., at $z={\varepsilon}_{\textbf}{k}$, so $$\begin{aligned} \label{funcrel2d} \Omega_{n}^\text{cyclic}[n_{\textbf}{k}] &= \frac{1}{2n} \sum_{\textbf}{k} n_{{\textbf}{k}} \mathcal{M}^\star_{n,{\textbf}{k}}[n_{\textbf}{k},{\varepsilon}_{\textbf}{k}].\end{aligned}$$ Finally, as discussed above, from the expressions for $\mathcal{M}^\star_{n,{\textbf}{k}}[n_{\textbf}{k},z]$ the ones for the perturbative contributions to the frequency-space self-energy $\Sigma^\star_{n,{\textbf}{k}}[n_{\textbf}{k},z]$ are obtained by substituting $\operatorname{\operatorname{e}}^{\beta(z_l-\mu)}=-1$ and removing the remaining energy denominator exponentials via Eq. . From this, we find (analogous to zero-temperature MBPT [@Kaiser:2001ra; @Kaiser:2013bua]) that $$\begin{aligned} \label{sigmastarred} \Sigma^\star_{n,{\textbf}{k}}[n_{\textbf}{k},z] = \frac{\delta\Omega_n^{\text{reduced}}[n_{\textbf}{k}]}{\delta n_{\textbf}{k}}\bigg|_{{\varepsilon}_{\textbf}{k}= z},\end{aligned}$$ which implies the relation for the proper frequency-space self-energy $\Sigma_{n,{\textbf}{k}}(z)$ given by Eq. . [^1]: In general, MBPT corresponds to divergent asymptotic series [@negele; @PhysRevLett.121.130405; @rossi; @refId0; @Marino:2019wra], so convergence rate should be understood in terms of the result at optimal truncation. [^2]: To be precise, since Ref. [@PhysRevC.95.034326] uses the adiabatic formalism, the considered self-energy is not the frequency-space self-energy of the imaginary-time formalism, $\Sigma_{{{\textbf}{k}}}(z)$, but the collisional one $\Sigma^\text{coll}_{{\textbf}{k}}(\omega)$, which however satisfies $\text{Re}\,[\Sigma^\text{coll}_{{{\textbf}{k}}}(\omega)]=\text{Re}\,[\Sigma_{{{\textbf}{k}}}(\omega\pm {\text{i}}\eta)]$. 
We use the notion of frequency in a somewhat generalized sense, i.e., by frequency we refer mostly to the argument $z\in \mathbb{C}$ of the (usual) analytic continuation $\Sigma_{{{\textbf}{k}}}(z)$ of the Matsubara self-energy $\Xi_{{{\textbf}{k}}}(z_l)$. See Appendix \[app21\] for details. [^3]: Note that in Luttinger’s analysis [@PhysRev.174.263] of the scheme by Balian, Bloch, and de Dominicis it is incorrectly assumed that the mean field has the form $U[n_{\textbf}{k}]$. This is why in Ref. [@Fermiliq] Luttinger’s paper has been (incorrectly) associated with statistical quasiparticles. [^4]: Several of the results presented here are also discussed in the author’s dissertation [@Wellenhofer:2017qla], but note that some technical details are missing and several typos appear there. [^5]: In the Hartree-Fock case the self-consistency requirement can be evaded for isotropic systems at ${T=0}$ by replacing in the expression for $U_{{\textbf}{k}}=U_{1,k}$ the distribution functions $n_k=\theta(\mu-{\varepsilon}_k)$ by $\theta(k_{{\text{F}},\text{ref}}-k)$, where the unperturbed Fermi momentum $k_{{\text{F}},\text{ref}}$ is defined via ${{\varepsilon}_{k_{{\text{F}},\text{ref}}}=\mu}$. In that case, first-order MBPT is identical for ${U=0}$ and ${U=U_1}$ (more generally, ${U\propto U_1}$). [^6]: In the thermodynamic limit the expressions for all size extensive quantities scale linearly with the confining volume. For notational simplicity, we neglect the scale factors. For discussions regarding our choice of basis states, see Refs. [@2016arXiv160900014C; @PhysRevA.95.062124]. [^7]: At ${T=0}$, in MBPT there are still divergences due to vanishing energy denominators, but these cancel each other at each order [@Wellenhofer:2018dwh] (see also Sec. \[sec44\]). [^8]: The diagram composed of a single $-U$ vertex corresponds to $\Omega_{U}(T,\mu)$ and is excluded here. For the diagrammatic rules, see, e.g., Refs. [@szabo; @Runge]. 
[^9]: That is, normal pieces correspond to the linked normal subdiagrams of the normal unlinked diagram generated by cutting all anomalous articulation lines and closing them in each separated part. [^10]: That is, diagrams with energy denominators involving only articulation lines with identical three-momenta. Such diagrams are anomalous. Note that one must distinguish between anomalous (normal) diagrams and anomalous (normal) contributions. [^11]: In the case of the direct formula there are also pseudoanomalous contributions (of a different kind, i.e., terms $\sim T$) from the pole at $z=0$. Furthermore, in both the direct and the cyclic case the expressions for diagrams with several identical energy denominators involve terms $\sim T^{-\nu}$ with $\nu\geq 1$. In Hartree-Fock MBPT such diagrams appear first at sixth order, i.e., normal two-particle reducible diagrams composed of three second-order pieces. Because terms $\sim T^{-\nu}$ with $\nu\geq 1$ do not appear in the reduced formula, for the cyclic sums of diagrams these terms cancel each other in the ${T\rightarrow 0}$ limit. [^12]: These singularities are present also in the adiabatic case for ${\varepsilon}_{\text{F}}\in\{{\varepsilon}_{\textbf}{k}\}$ (i.e., for open-shell systems). [^13]: This inconsistency has been overlooked in Ref. [@SANTRA2017355]. [^14]: As discussed in Sec. \[sec23\], the Eq.  scheme leads to $\mu(T,\varrho)= \mu_\text{ref}(T,\varrho)$ for nonzero $T$ if the (pseudoanomalous) contributions from energy denominator poles are excluded, but it is not clear whether this is justified. [^15]: Accordingly, for the grand-canonical series the Eq. \[Ufermi\] scheme does not remove the anomalous contributions to $\rho(T,\mu)$ at ${T=0}$ (or ${T\neq 0}$), so in that case the adiabatic series is not reproduced (in any case) and the discrete spectrum inconsistency persists. 
[^16]: We note that $F(T,\mu_\text{ref})$ with $U\in\{\,0,U_1\}$ and ${N=2}$ has been employed in nuclear matter calculations in Refs. [@Tolos:2007bh; @Fritsch:2002hp; @Fiorilla:2011sr; @Holt:2013fwa; @PhysRevC.89.064009; @PhysRevC.92.015801; @PhysRevC.93.055802]. For nuclear matter calculations with self-consistent propagators, see, e.g., Refs. [@PhysRevC.88.044302; @PhysRevC.90.054322; @PhysRevC.98.025804]. [^17]: For additional details and numerical evidence, see Ref. [@Wellenhofer:2017qla]. [^18]: At second order the two types of anomalous contributions have been found to give individually very large but nearly canceling contributions in nuclear matter calculations [@Wellenhofer:2017qla; @Tolos:2007bh; @PhysRevC.89.064009]. [^19]: See also the next paragraph, and Sec. \[summary\]. [^20]: That is, the functional dependence on $n_{\textbf}{k}$ of the $-U$ vertices is *not* taken into account in Eq. . [^21]: If the pole contributions are included for a finite system then Eq.  is valid only for ${T\rightarrow 0}$ (and the ${T\rightarrow 0}$ limit exists only for $\mu\not\in\{{\varepsilon}_{\textbf}{k}\}$). In that sense, the construction of the thermodynamic Fermi-liquid relations via MBPT depends on the thermodynamic limit. [^22]: At ${T=0}$, these singular terms cancel each other, see Ref. [@Wellenhofer:2018dwh] and Sec. \[sec45\]. [^23]: Note that the relation ${\mathcal{E}_{{\textbf}{k}_{\text{F}}}=\partial E({T=0},\varrho)/\partial \varrho}$ (Hugenholtz-Van Hove theorem [@HUGENHOLTZ1958363; @Baym:1962sx]) is trivial if $E({T=0},\varrho)$ is derived from $\Omega(T,\mu)$. [^24]: Note that this implies that there are diagrams with several identical energy denominators, i.e., the Hadamard finite part appears. [^25]: A different regularization scheme can for example be set up via $\prod_\nu D_\nu^{n_\nu}\rightarrow (\prod_\nu D_\nu)^{n_\nu}+i\eta$. 
The parts $\mathcal{F}_{\alpha,[\eta]}$ then have a form that deviates from the reduced formula (in particular, the pseudoanomalous contributions do not cancel; see also Appendix \[app1\]), so the Fermi-liquid relations cannot be obtained in this scheme. [^26]: Rules for the formal regularization have been presented also in Refs. [@BALIAN1971229; @LUTTINGER19731; @keitermorandi] for the case of impurity systems. [^27]: It should be noted that, while the complete cancellation of two-particle reducible diagrams (with first-order pieces) is specific to $U_1$, including $U_2^\text{BdD}$, $U_3^\text{BdD}$, etc. does not only eliminate anomalous contributions but also partially cancels normal contributions. Note also that the reduced contributions from normal two-particle reducible diagrams with single-vertex loops can be resummed as geometric series; in zero-temperature MBPT this is equivalent to the change from ${U=0}$ to ${U=U_1}$ for isotropic systems (only). [^28]: Notably, the same expression results if one naively introduces principal values in $\Omega_{3,\text{pp}}^\text{reduced}$ and averages over three different integration orders (where in one case $p$ is integrated before $A$ or $B$); for Eq.  this procedure would, however, lead to an incorrect result. [^29]: In the context of MBPT for Fermi systems this method was introduced by Brout and Englert [@brout2; @PhysRev.115.824] (see also Ref. [@horwitz2]). [^30]: The number operator is given by $\mathcal{N}=\sum_{\textbf}{k} a_{\textbf}{k}^\dagger a_{\textbf}{k}$. [^31]: The mean field is not expanded; i.e., the expansion is performed after $U(T,\mu)$ is replaced by $U(T,\mu_\text{ref})$. This (and the truncation of the expansion) makes evident that at a given order the modified and the unmodified perturbation series lead to different results; see also Sec.\[sec23\] and Refs. [@PhysRevC.89.064009; @Wellenhofer:2017qla]. [^32]: Note that Eq. (B.12) of Ref. 
[@brout2] is not valid; e.g., it misses the second part of Eq. . [^33]: This feature is expected from indirect arguments [@Luttinger:1960ua; @Wellenhofer:2017qla]. The cancellation has been shown explicitly to all orders for certain subclasses of diagrams [@Wellenhofer:2017qla], but no direct proof to all orders exists. [^34]: This cancellation is not always purely algebraic; see Eqs.  and . [^35]: For an interesting implication of this feature, i.e., the singularity at fourth order and ${T=0}$ of the Maclaurin expansion in terms of $x=\mu_\uparrow-\mu_\downarrow$ (or, $x=\varrho_\uparrow-\varrho_\downarrow$) for a system of spin one-half fermions with spins $\uparrow$ and $\downarrow$, see Refs. [@PhysRevC.91.065201; @PhysRevC.93.055802; @Wellenhofer:2017qla]. Note, however, that the statement in Refs. [@PhysRevC.93.055802; @Wellenhofer:2017qla] that the convergence radius of the expansion is still zero (instead of just very small) near (but not at) the degenerate limit appears somewhat questionable. In particular, Fig. 6 of Ref. [@PhysRevC.93.055802] should be interpreted not in terms of the radius of convergence but in terms of convergence at $x=\pm 1$. [^36]: See Ref. [@Wellenhofer:2018dwh] \[and Eq. \] for an example of this. We defer a more detailed analysis of these cancellations to a future publication. [^37]: Other studies regarding the derivation of statistical quasiparticle relations can be found in Refs. [@nort1; @nort2; @article; @PhysRevA.1.1243; @RevModPhys.39.771; @PhysRev.153.263; @TUTTLE1966510; @1964quasip; @PhysRev.121.957; @PhysRev.124.583]. [^38]: Note also that somewhat similar methods have been applied with considerable success for (certain) finite systems [@Tichai:2018qge; @PhysRevLett.121.032501; @PhysRevLett.75.2787]. [^39]: More generally, the effect on convergence of higher-order contributions to the mean field (in the modified perturbation series for the free energy) will be investigated. 
[^40]: We defer a more detailed analysis of these partial analytic cancellations to a future publication. [^41]: Here, the functional derivative is supposed to disregard the implicit dependence on $n_{\textbf}{k}$ of $U_{2}^\text{BdD}$; see Sec. \[sec23\]. [^42]: See the second paragraph of part \[app21\] of this Appendix. [^43]: In that sense, the mass function representation (as well as the direct representation of part \[app22\]) represents a purely statistical result, while the spectral representation corresponds to a statistical-dynamical result. Only the statistical-dynamical result has a well-behaved ${T\rightarrow 0}$ limit. [^44]: In this paragraph we follow for the most part Kadanoff and Baym [@kadanoffbaym], Fetter and Walecka [@Fetter], and Ref. [@ThesisRios]. [^45]: Note that Eqs.  and imply that $\text{Im}[\Sigma_{\textbf}{k}(z)]\lessgtr 0$ for $\text{Im}[z] \gtrless 0$. [^46]: Note that Fetter and Walecka omit the factor ${\text{i}}$ in the definition of $G_{\textbf}{k}(\omega)$, so no ${\text{i}}$ appears in their version of our Eq. , i.e., in Eq. (9.33) of Ref. [@Fetter]. [^47]: In the adiabatic formalism only real-time propagators appear, so it is the collisional self-energy that is calculated. [^48]: We note again that in the direct (and cyclic) scheme the ${T\rightarrow 0}$ limit is nonexistent. [^49]: Because of the two external lines no cyclic and reduced versions of Eq.  are available; see Ref. [@BDDnuclphys7] for details on the derivation of the cyclic formula and the reduced formula. [^50]: We use the notion normal propagator diagrams to refer to diagrams that have no anomalous articulation lines and are either (i) one-particle irreducible propagator diagrams or (ii) one-particle reducible propagator diagrams where all cuttable propagator lines go in the same direction. [^51]: In the cyclic and the BdD scheme only the contributions with Hugenholtz diagrams attached via higher-cumulant connections can be canceled. 
The remaining propagator contributions in these schemes are then given by Eqs. , , etc., with the $-U$ vertices (but not the self-energy parts) given by $-U^{\text{cyclic},\div}_n$ and $-U^{\text{BdD}}_n$, respectively, plus diagrams that have self-energy parts consisting of $-U^{\text{cyclic},\div}_n$ and $-U^{\text{BdD}}_n$ vertices (with $2\leq n\leq N$), respectively. [^52]: For the self-consistent functional relations between the proper self-energy and the grand-canonical potential, see, e.g., Refs. [@Abrikosov; @Luttinger:1960ua; @Baym:1962sx]. [^53]: It can be seen by regularizing the energy denominators that the expression obtained from Eqs.  and is equivalent to the direct, cyclic, and regularized reduced expressions for the (permutation invariant) second-order normal diagram; see Sec. \[sec31\].
--- abstract: 'In this manuscript we provide a family of lower bounds on the indirect Coulomb energy for atomic and molecular systems in two dimensions in terms of a functional of the single particle density with gradient correction terms.' address: - '$^1$ Departamento de Física, P. Universidad Católica de Chile,' - '$^2$ Departamento de Física, P. Universidad Católica de Chile, ' author: - 'Rafael D. Benguria$^1$' - Matěj Tušek$^2$ title: '**Indirect Coulomb Energy for Two-Dimensional Atoms**' --- Introduction ============ Since the advent of quantum mechanics, the impossibility of solving exactly problems involving many particles has been clear. These problems are of interest in such areas as atomic and molecular physics, condensed matter physics, and nuclear physics. It was, therefore, necessary from the early beginnings to estimate various energy terms of a system of electrons as functionals of the single particle density $\rho_{\psi}(x)$, rather than as functionals of their wave function $\psi$. The first estimates of this type were obtained by Thomas and Fermi in 1927 (see [@Li81] for a review), and by now they have given rise to a whole discipline under the name of [*Density Functional Theory*]{} (see, e.g., [@Be13] and references therein). In Quantum Mechanics of many particle systems the main object of interest is the wavefunction $\psi \in \bigwedge^N L^2({{{\mathord{\mathbb R}}}}^3)$ (the antisymmetric tensor product of $L^2({{{\mathord{\mathbb R}}}}^3)$). More explicitly, for a system of $N$ fermions, $\psi(x_1, \dots, x_i, \dots, x_j, \dots, x_N)= - \psi(x_1, \dots, x_j, \dots, x_i, \dots, x_N)$, in view of Pauli’s Exclusion Principle, and $\int_{{{\mathord{\mathbb R}}}^{3N}} |\psi|^2 \, dx_1 \dots dx_N=1$. Here, $x_i \in {{\mathord{\mathbb R}}}^3$ denotes the coordinates of the $i$-th particle. 
From the wavefunction $\psi$ one can define the one–particle density (single particle density) as $$\rho_{\psi}(x) = N \int_{{{\mathord{\mathbb R}}}^{3(N-1)}} |\psi (x, x_2, \dots, x_N)|^2 \, dx_2 \dots dx_N, \label{density}$$ and from here it follows that $\int_{{{\mathord{\mathbb R}}}^3} \rho_{\psi} (x) \, dx = N$, the number of particles, and $\rho_{\psi}(x)$ is the density of particles at $x \in {{\mathord{\mathbb R}}}^3$. Notice that since $\psi$ is antisymmetric, $|\psi|^2$ is symmetric, and it is immaterial which variable is set equal to $x$ in (\[density\]). In Atomic and Molecular Physics, given that the expectation value of the Coulomb attraction of the electrons by the nuclei can be expressed in closed form in terms of $\rho_{\psi}(x)$, the interest focuses on estimating the expectation value of the kinetic energy of the system of electrons and the expectation value of the Coulomb repulsion between the electrons. Here, we will be concerned with the latter. The most natural approximation to the expectation value of the Coulomb repulsion between the electrons is given by $$D(\rho_{\psi},\rho_{\psi})=\frac{1}{2} \int \rho_{\psi} (x) \frac{1}{|x-y|} \rho_{\psi}(y) \, {\mathrm{d}}x \, {\mathrm{d}}y,$$ which is usually called the [*direct term*]{}. The remainder, i.e., the difference between the expectation value of the electronic repulsion and $D(\rho_{\psi},\rho_{\psi})$, say $E$, is called the [*indirect term*]{}. In 1930, Dirac [@Di30] gave the first approximation to the indirect Coulomb energy in terms of the single particle density. Using an argument with plane waves, he approximated $E$ by $$E \approx -c_D \int \rho_{\psi}^{4/3} \, dx, \label{eq:dirac}$$ where $c_D=(3/4)(3/\pi)^{1/3} \approx 0.7386$ (see, e.g., [@Mo06], p. 299). Here we use units in which the absolute value of the charge of the electron is one. The first rigorous lower bound for $E$ was obtained by E.H. Lieb in 1979 [@Li79], using the Hardy–Littlewood Maximal Function [@StWe71]. 
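As a concrete numerical aside (ours, not from the original paper), Dirac's approximation (\[eq:dirac\]) can be evaluated for a hydrogen-like density $\rho(r)=e^{-2r}/\pi$ (normalized to one particle), for which $\int\rho^{4/3}\,d^3x$ has the closed form $(27/64)\,\pi^{-1/3}$; a simple radial quadrature reproduces it:

```python
import math

# Dirac exchange constant c_D = (3/4)(3/pi)^{1/3} ~ 0.7386
c_D = 0.75 * (3.0 / math.pi) ** (1.0 / 3.0)

# Illustrative density (an assumption for this example): rho(r) = e^{-2r}/pi.
def rho(r):
    return math.exp(-2.0 * r) / math.pi

# \int rho^{4/3} d^3x = 4*pi \int_0^inf rho(r)^{4/3} r^2 dr,
# evaluated with a composite trapezoidal rule (accuracy suffices here).
def integral_rho_43(rmax=30.0, n=300000):
    h = rmax / n
    s = 0.0
    for i in range(n + 1):
        r = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * rho(r) ** (4.0 / 3.0) * r ** 2
    return 4.0 * math.pi * s * h

exact = (27.0 / 64.0) * math.pi ** (-1.0 / 3.0)  # analytic value for this rho
num = integral_rho_43()
dirac_estimate = -c_D * num  # Dirac's approximation to the indirect energy
```

The point is only to make the size of the functional $\int\rho^{4/3}$ tangible for a simple density; nothing in the bounds below depends on this example.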
There he found that $E \geq -8.52 \int \rho_{\psi}^{4/3} \, dx$. The constant $8.52$ was substantially improved by E.H. Lieb and S. Oxford in 1981 [@LiOx81], who proved the bound $$E \ge -C \int \rho_{\psi}^{4/3} \, dx, \label{eq:LO}$$ with $C = C_{LO}=1.68$. In their proof, Lieb and Oxford used Onsager’s electrostatic inequality [@On39], and a localization argument. The best value for $C$ is unknown, but Lieb and Oxford [@LiOx81] proved that it is greater than or equal to $1.234$. The Lieb–Oxford value was later improved to $1.636$ by Chan and Handy in 1999 [@ChHa99]. Since the work of Lieb and Oxford [@LiOx81], there has been a special interest in quantum chemistry in constructing corrections to the Lieb–Oxford term involving the gradient of the single particle density. This interest arises with the expectation that states with a relatively small kinetic energy have a smaller indirect part (see, e.g., [@LePe93; @PeBuEr96; @VeMeTr09] and references therein). Recently, Benguria, Bley, and Loss obtained an alternative to (\[eq:LO\]), which has a lower constant (close to $1.45$) at the expense of adding a gradient term (see Theorem 1.1 in [@BeBlLo12]), which we state below in a slightly modified way, \[BBL\] For any normalized wave function $\psi(x_1, \dots, x_N)$ and any $0 < \epsilon < 1/2$ we have the estimate $$E(\psi) \ge - 1.4508 \, (1+\epsilon) \int_{{{\mathord{\mathbb R}}}^3} \rho_{\psi}^{4/3} dx -\frac{3}{2 \epsilon} (\sqrt{\rho_{\psi}}, |p| \sqrt{\rho_{\psi}}) \label{exch}$$ where $$(\sqrt \rho, |p| \sqrt \rho) := \int_{{{\mathord{\mathbb R}}}^3} |\widehat{\sqrt \rho}(k)|^2 |2\pi k| d k = \frac{1}{2\pi^2} \int_{{{\mathord{\mathbb R}}}^3} \int_{{{\mathord{\mathbb R}}}^3} \frac{|\sqrt{\rho(x)} - \sqrt{\rho(y)}|^2 }{|x-y|^4} dx dy \ . 
\label{eq:KE}$$ Here, $\widehat f(k)$ denotes the Fourier transform $$\widehat f (k) = \int_{{{\mathord{\mathbb R}}}^3} e^{-2\pi i k \cdot x} f(x) d x\ .$$ i\) For many physical states the contribution of the last two terms in (\[exch\]) is small compared with the contribution of the first term. See, e.g., the Appendix in [@BeBlLo12]; ii\) For the second equality in (\[eq:KE\]) see, e.g., [@LiLo01], Section 7.12, equation (4), p. 184; iii\) It was already noticed by Lieb and Oxford (see the remark after equation (26), p. 261 of [@LiOx81]) that for uniform densities the Lieb–Oxford constant should be $1.45$ instead of $1.68$; iv\) In the same vein, J. P. Perdew [@Pe91], by employing results for a uniform electron gas in its low density limit, showed that in the Lieb–Oxford bound one ought to have $C \ge 1.43$ (see also [@LePe93]). After the work of Lieb and Oxford [@LiOx81] many people have considered bounds on the indirect Coulomb energy in lower dimensions (in particular see, e.g., [@HaSe01] for the one-dimensional case; [@LiSoYn95], [@NaPoSo11], [@RaPiCaPr09], and [@RaSeGo11] for the two-dimensional case, which is important for the study of quantum dots). Recently, Benguria, Gallegos, and Tušek [@BeGaTu12] gave an alternative to the Lieb–Solovej–Yngvason bound [@LiSoYn95], with a constant much closer to the numerical values proposed in [@RaSeGo11] (see also the references therein), at the expense of adding a gradient term: \[thm:LO\] Let $\psi\in L^{2}({{\mathord{\mathbb R}}}^{2N})$ be normalized to one and symmetric (or antisymmetric) in all its variables. 
Define $$\rho_{\psi}(x)=N\int_{{{\mathord{\mathbb R}}}^{2(N-1)}}|\psi|^{2}(x,x_{2},\ldots,x_{N})~{\mathrm{d}}x_{2}\ldots{\mathrm{d}}x_{N}.$$ If $\rho_{\psi}\in L^{3/2}({{\mathord{\mathbb R}}}^2)$ and $|\nabla\rho_{\psi}^{1/4}|\in L^2({{\mathord{\mathbb R}}}^2)$, then, for all $\epsilon>0$, $$E(\psi)\equiv\langle\psi,\sum_{i<j}^{N}|x_{i}-x_{j}|^{-1}\psi\rangle-D(\rho_{\psi},\rho_{\psi})\geq -(1+\epsilon)\beta\int_{{{\mathord{\mathbb R}}}^{2}}\rho_{\psi}^{3/2} \, {\mathrm{d}}x-\frac{4}{\beta\epsilon}\int_{{{\mathord{\mathbb R}}}^{2}}|\nabla\rho_{\psi}^{1/4}|^{2} \, {\mathrm{d}}x \label{eq:ind_en_est}$$ with $$\beta=\left(\frac{4}{3}\right)^{3/2}\sqrt{5\pi-1}\simeq 5.9045. \label{beta}$$ i\) The constant $\beta \simeq 5.9045$ in (\[eq:ind\_en\_est\]) is substantially lower than the constant $C_{LSY} \simeq 481.27$ found in [@LiSoYn95] (see equation (5.24) of lemma 5.3 in [@LiSoYn95]). ii\) Moreover, the constant $\beta$ is close to the numerical values (i.e., $\simeq 1.95$) of [@RaPiCaPr09] (and references therein), but is not sharp. In the literature there are, so far, three approaches to prove lower bounds on the exchange energy, namely: i\) The approach introduced by E.H. Lieb in 1979 [@Li79], which uses as the main tool the Hardy–Littlewood Maximal Function [@StWe71]. This method was used in the first bound of Lieb [@Li79]. Later it was used in [@LiSoYn95] to obtain a lower bound on the exchange energy of two–dimensional Coulomb systems. It has the advantage that it may be applied in a wide class of problems, but it does not yield sharp constants. ii\) The use of Onsager’s electrostatic inequality [@On39] together with localization techniques, introduced by Lieb and Oxford [@LiOx81]. This method yields very sharp constants. It was used recently in [@BeBlLo12] to get a new type of bounds including gradient terms (for three dimensional Coulomb systems). 
In some sense the constant $1.4508$ recently obtained in [@BeBlLo12] is the best possible (see the comments after Theorem \[BBL\]). The only disadvantage of this approach is that it depends on the use of Onsager’s electrostatic inequality (which in turn relies on the fact that the Coulomb potential is the fundamental solution of the Laplacian). Because of this, it cannot be used in the case of two–dimensional atoms, because $1/|x|$ is not the fundamental solution of the two–dimensional Laplacian. iii\) The use of the stability of matter of an auxiliary many particle system. This idea was used by Lieb and Thirring [@LiTh75] to obtain lower bounds on the kinetic energy of a system of electrons in terms of the single particle density. In connection with the problem of getting lower bounds on the exchange energy it was used for the first time in [@BeGaTu12], to get a lower bound on the exchange energy of two–dimensional Coulomb systems including gradient terms. This method provides very good, although not sharp, constants. As we mentioned above, during the last twenty years there has been a special interest in quantum chemistry in constructing corrections to the Lieb–Oxford term involving the gradients of the single particle density. This interest arises with the expectation that states with a relatively small kinetic energy have a smaller indirect part (see, e.g., [@LePe93; @PeBuEr96; @VeMeTr09] and references therein). While the form of the leading term (i.e., the dependence as an integral of $\rho^{4/3}$ in three dimensions or as an integral of $\rho^{3/2}$ in two dimensions) is dictated by Dirac’s argument (using plane waves), there is no such clear argument, nor common agreement, concerning the structure of the gradient corrections. 
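The dimensional constraint behind the admissible gradient terms can be made explicit. In two dimensions, rescaling $\rho_\lambda(x)=\lambda^2\rho(\lambda x)$ multiplies $\int|\nabla\rho^\alpha|^\gamma\,{\mathrm{d}}x$ by $\lambda^{(2\alpha+1)\gamma-2}$ and $\int\rho^{3/2}\,{\mathrm{d}}x$ by $\lambda$; demanding that both scale the same way (as an inverse length) forces $\alpha=(3-\gamma)/(2\gamma)$. A short exact-arithmetic check of this exponent identity (illustrative only):

```python
from fractions import Fraction

def grad_term_exponent(gamma):
    """Power of lambda acquired by int_{R^2} |grad(rho^alpha)|^gamma dx
    under rho(x) -> lambda^2 rho(lambda x), with alpha = (3-gamma)/(2*gamma)."""
    alpha = (3 - gamma) / (2 * gamma)
    # rho_lambda^alpha(x) = lambda^{2 alpha} rho^alpha(lambda x): the gradient
    # contributes lambda^{2 alpha + 1}, raised to the power gamma, while the
    # two-dimensional volume element contributes lambda^{-2}.
    return (2 * alpha + 1) * gamma - 2

# Sample gammas in the admissible range (1, 3), kept as exact rationals.
gammas = [Fraction(n, 4) for n in range(5, 12)]  # 5/4, 3/2, ..., 11/4
exponents = [grad_term_exponent(g) for g in gammas]
# Every admissible gamma yields exponent 1, matching int rho^{3/2} dx,
# which picks up lambda^{3-2} = lambda under the same rescaling.
```

This is why a whole one-parameter family of gradient terms is available: the exponent identity holds for every $\gamma\in(1,3)$, not just the case $\gamma=2$, $\alpha=1/4$ treated earlier.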
The reason we introduced the particular gradient term $\int_{{{\mathord{\mathbb R}}}^{2}}|\nabla\rho_{\psi}^{1/4}|^{2} \, {\mathrm{d}}x$ in our earlier work [@BeGaTu12] was basically that we already knew the stability of matter arguments for the auxiliary system. However, there is a whole one-parameter family of such gradient terms that can be dealt with in the same manner. In this manuscript we obtain lower bounds whose gradient terms range over this one–parameter family. One interesting feature of our bounds is that the constant $\beta$ in front of the leading term remains the same (i.e., its value is independent of the parameter that labels the different possible gradient terms), while the constant in front of the gradient term is parameter dependent. Our main result is the following theorem. \[thm:BeTu\] Let $1<\gamma<3$, and $\alpha = (3-\gamma)/(2 \gamma)$. Assume that $\rho_{\psi}\in L^{3/2}({{\mathord{\mathbb R}}}^2)$ and $|\nabla \rho_{\psi}^{\alpha}| \in L^{\gamma}({{\mathord{\mathbb R}}}^2)$. Let $C(p) = 2^{1-p/2}$ for $0<p \le 2$, while $C(p)=1$ for $p \ge 2$. Then, for all $\epsilon >0$ we have $$E(\psi)\equiv\langle\psi,\sum_{i<j}^{N}|x_{i}-x_{j}|^{-1}\psi\rangle-D(\rho_{\psi},\rho_{\psi}) \geq -\tilde{b}^2 \int_{{{\mathord{\mathbb R}}}^{2}}\rho_{\psi}^{3/2} \, {\mathrm{d}}x - \tilde{a}^2 \int_{{{\mathord{\mathbb R}}}^{2}}|\nabla\rho_{\psi}^{\alpha}|^{\gamma} \, {\mathrm{d}}x. \label{MainLowerBound}$$ Here, $$\tilde{b}^{2}=\left(\frac{4}{3}\right)^{3/2}{\sqrt{5\pi-1}} \, (1+\epsilon) = \beta \, (1+\epsilon) \label{constant1}$$ where $\beta$ is the same constant that appears in (\[beta\]). Also, $$\tilde{a}^{2}=\frac{2^\gamma C(\gamma)}{3-\gamma}\left(\frac{1}{\beta \, \epsilon} \,\frac{\gamma-1}{3-\gamma}\, C\bigg(\frac{\gamma}{\gamma-1}\bigg)\right)^{\gamma-1}. 
\label{constant2}$$ In particular, we have (with a fixed $\epsilon$) $$\tilde{a}^{2}|_{\gamma\to 1+}=\sqrt{2}.$$ i\) Our previous Theorem \[thm:LO\] is a particular case of Theorem \[thm:BeTu\], for the value $\gamma=2$, $\alpha=1/4$. ii\) Notice that $\tilde{b}^2$ is independent of $\gamma$, and it is therefore the same as in [@BeGaTu12]. iii\) The constant in front of the gradient term depends on the power $\gamma$ and, of course, on $\epsilon$. However, as $\gamma \to 1+$, this constant converges to $\sqrt{2}$ independently of the value of $\epsilon$. In the rest of the manuscript we give a sketch of the proof of this theorem, which follows closely the proof of the particular result \[thm:LO\] in [@BeGaTu12]. Auxiliary lemmas ================ First we need a standard convexity result. \[lem:norm\_comp\] Let $x,y\in{{\mathord{\mathbb R}}}$, and $p>0$. Then $$|x|^p+|y|^p\leq C(p)|x+iy|^p,$$ where $C(p)=2^{1-p/2}$ for $0<p\leq 2$, and $C(p)=1$ for $p\geq 2$. The constant $C(p)$ is sharp. If $p\geq 2$, the assertion follows, e.g., from the fact that the $l^{p}$-norm is decreasing in $p$. On the other hand, for $0<p<2$, the assertion follows from the concavity of the mapping $t \to t^{1/n}$ for $t>0$ and $n>1$. The next lemma is a generalization of the analogous result introduced in [@BeLoSi07] and used in the proof of Theorem \[thm:LO\] above (see [@BeGaTu12]). This lemma is later needed to prove a Coulomb Uncertainty Principle. \[lem:uncert\_princ\] Let $D_{R}$ stand for the disk of radius $R$ centered at the origin $(0,0)$. Moreover, let $u=u(|x|)$ be a smooth function such that $u(R)=0$ and $1<\gamma<3$. 
Then the following uncertainty principle holds $$\begin{split} &\left|\int_{D_{R}}\big[ 2u(|x|)+|x|u'(|x|)\big]f(x)^{1/\alpha}\right|\leq\\ &\leq\frac{1}{\alpha}\left(C(\gamma)\int_{D_{R}}|\nabla f(x)|^{\gamma}\, {\mathrm{d}}x\right)^{1/\gamma}\left(C(\delta)\int_{D_{R}}|x|^{\delta}|u(|x|)|^{\delta}|f(x)|^{3/(2\alpha)}\, {\mathrm{d}}x\right)^{1/\delta}, \end{split}$$ where $$\label{eq:coeff} \frac{1}{\alpha}=\frac{2\gamma}{3-\gamma},\qquad \frac{1}{\gamma}+\frac{1}{\delta}=1.$$ Set $g_{j}(x)=u(|x|)x_{j}$. Then we have, $$\begin{split} &\int_{D_{R}}[2u(|x|)+|x|u'(|x|)]f(x)^{1/\alpha}\, {\mathrm{d}}x=\sum_{j=1}^{2}\int_{D_{R}}[\partial_{j}g_{j}(x)]f(x)^{1/\alpha}\, {\mathrm{d}}x= \\ &=\sum_{j}\int_{D_{R}}f(x)\partial_{j}[g_{j}(x)f(x)^{1/\alpha-1}]\, {\mathrm{d}}x-\left(\frac{1}{\alpha}-1\right)\sum_{j}\int_{D_{R}}f(x)^{1/\alpha-1}g_{j}(x)\partial_{j}f(x)\, {\mathrm{d}}x=\\ &=-\frac{1}{\alpha}\int_{D_{R}}\langle \nabla f(x),\, x\rangle u(|x|)f(x)^{1/\alpha-1}\, {\mathrm{d}}x. \end{split}$$ In the last equality we integrated by parts and made use of the fact that $u$ vanishes on the boundary $\partial D_{R}$. Next, the Hölder inequality implies $$\begin{split} &\left|\int_{D_{R}}\big[ 2u(|x|)+|x|u'(|x|)\big]f(x)^{1/\alpha}\right|\leq \\ &\frac{1}{\alpha}\left(\int_{D_{R}}\sum_{j=1}^{2}|\partial_{j}f(x)|^{\gamma}\, {\mathrm{d}}x\right)^{1/\gamma}\left(\int_{D_{R}}\sum_{j=1}^{2}|x_{j}|^{\delta} |u(|x|)|^{\delta}|f(x)|^{(1/\alpha-1)\delta}\, {\mathrm{d}}x\right)^{1/\delta}. \end{split}$$ The rest follows from Lemma $\ref{lem:norm_comp}$. A stability result for an auxiliary two-dimensional molecular system ==================================================================== Here we follow the method introduced in [@BeGaTu12]. That is, in order to prove our Lieb–Oxford type bound (with gradient corrections) in two dimensions we use a stability of matter type result on an auxiliary molecular system. 
This molecular system is an extension of the one studied in [@BeGaTu12], which was adapted from the similar result in three dimensions discussed in [@BeLoSi07] (this last one corresponds to the zero mass limit of the model introduced in [@En87; @EnDr87; @EnDr88]). We begin with a typical Coulomb Uncertainty Principle which uses the kinetic energy of the electrons in a ball to bound the Coulomb singularities. For every smooth non-negative function $\rho$ on the closed disk $D_{R} \subset {{\mathord{\mathbb R}}}^2$, and for any $a,b>0$ we have $$ab \, \alpha\left|\int_{D_{R}}\left(\frac{1}{|x|}-\frac{2}{R}\right)\rho(x)\, {\mathrm{d}}x\right|\leq \frac{a^\gamma C(\gamma)}{\gamma}\,\int_{D_{R}}|\nabla\rho(x)^{\alpha}|^{\gamma}\, {\mathrm{d}}x+\frac{b^\delta C(\delta)}{\delta}\,\int_{D_{R}}\rho^{3/2}\,{\mathrm{d}}x,$$ where $1<\gamma< 3$, and $\alpha$ and $\delta$ are as in (\[eq:coeff\]). In Lemma \[lem:uncert\_princ\] we set $u(r)=1/r-1/R$ and $f=\rho^{\alpha}$. The assertion of the theorem then follows from Young's inequality with exponents $\gamma$ and $\delta$. We now introduce the auxiliary molecular system through the “energy functional” $$\xi(\rho)= \tilde{a}^2 \int_{{{\mathord{\mathbb R}}}^2} |\nabla \rho^{\alpha}|^\gamma \, {\mathrm{d}}x + \tilde{b}^2 \int_{{{\mathord{\mathbb R}}}^2} \rho^{3/2} \, {\mathrm{d}}x - \int_{{{\mathord{\mathbb R}}}^2} V(x) \rho (x) \, {\mathrm{d}}x +D(\rho,\rho) +U, \label{EnergyFunctional}$$ where $$V(x) = \sum_{i=1}^K \frac{z}{|x-R_i|},\quad D(\rho,\rho) = \frac{1}{2} \int_{{{\mathord{\mathbb R}}}^2 \times {{\mathord{\mathbb R}}}^2} \rho(x) \frac{1}{|x-y|} \rho(y) \, {\mathrm{d}}x\, {\mathrm{d}}y,\quad U = \sum_{1 \le i < j \le K} \frac{z^2}{|R_i-R_j|}$$ with $z>0$ and $R_{i}\in{{\mathord{\mathbb R}}}^{2}$. As above we assume $1<\gamma<3$, and $\alpha=(3-\gamma)/(2\gamma)$. The choice of $\alpha$ (in terms of $\gamma$) is made in such a way that the first two terms in (\[EnergyFunctional\]) scale as one over a length. 
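For completeness, the proof of the Coulomb Uncertainty Principle above leaves two steps implicit. With $u(r)=1/r-1/R$ one has $|x|^{\delta}|u(|x|)|^{\delta}=(1-|x|/R)^{\delta}\leq 1$ on $D_{R}$, so Lemma \[lem:uncert\_princ\] (with $f=\rho^{\alpha}$) first gives a product bound, and Young's inequality $AB\leq A^{\gamma}/\gamma+B^{\delta}/\delta$ (for $A,B\geq 0$ and $1/\gamma+1/\delta=1$) then splits it:

```latex
% Step 1: Lemma [lem:uncert_princ] with f = rho^alpha, using |x| |u(|x|)| <= 1 on D_R:
\[
  ab\,\alpha \left| \int_{D_R} \Big( \frac{1}{|x|} - \frac{2}{R} \Big)
  \rho(x)\,\mathrm{d}x \right|
  \leq \Big( a^{\gamma} C(\gamma) \int_{D_R} |\nabla \rho^{\alpha}|^{\gamma}\,
  \mathrm{d}x \Big)^{1/\gamma}
  \Big( b^{\delta} C(\delta) \int_{D_R} \rho^{3/2}\,\mathrm{d}x \Big)^{1/\delta} .
\]
% Step 2: Young's inequality AB <= A^gamma/gamma + B^delta/delta applied to the
% two factors on the right yields the stated bound:
\[
  \leq \frac{a^{\gamma} C(\gamma)}{\gamma}
       \int_{D_R} |\nabla \rho^{\alpha}|^{\gamma}\,\mathrm{d}x
     + \frac{b^{\delta} C(\delta)}{\delta} \int_{D_R} \rho^{3/2}\,\mathrm{d}x .
\]
```

Only the elementary bound $|x||u(|x|)|\leq 1$ is needed beyond the lemma itself; the constants $a^{\gamma}$ and $b^{\delta}$ are simply absorbed into the two factors before Young's inequality is applied.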
Indeed, let us denote $$K(\rho) \equiv \tilde{a}^2 \int_{{{\mathord{\mathbb R}}}^2} |\nabla \rho^{\alpha}|^\gamma \, {\mathrm{d}}x + \tilde{b}^2 \int_{{{\mathord{\mathbb R}}}^2} \rho^{3/2} \, {\mathrm{d}}x.$$ Given any trial function $\rho \in L^1({{\mathord{\mathbb R}}}^2)$ and setting $\rho_{\lambda} (x) = \lambda^{2} \rho(\lambda x)$ (thus preserving the $L^1$ norm), it is simple to see that with our choice of $\alpha$ we have $K(\rho_{\lambda})= \lambda K(\rho)$. If we now introduce constants $a,b_{1},b_{2}>0$ so that $$\begin{aligned} &\tilde{a}^{2}=\frac{a^\gamma C(\gamma)}{2\alpha\, \gamma} \label{eq:a_def}\\ &\tilde{b}^{2}=\frac{b_{2}^{\delta} C(\delta)}{2\alpha \, \delta}+b_{1}^{2}\nonumber\end{aligned}$$ (again with $\delta$ given by (\[eq:coeff\])), we may use the proof of [@BeGaTu12 Lemma 2.5] step by step. In particular, $$\xi(\rho)\geq b_{1}^{2}\int_{{{\mathord{\mathbb R}}}^{2}}\rho^{3/2}{\mathrm{d}}x-\int_{{{\mathord{\mathbb R}}}^{2}}V\rho~{\mathrm{d}}x+ab_{2}\sum_{j=1}^{K}\int_{B_{j}}\left(\frac{1}{2|x-R_{j}|}-\frac{1}{D_{j}}\right)\rho(x){\mathrm{d}}x+ D(\rho,\rho)+U,$$ where $$D_j = \frac{1}{2} \min \{|R_k-R_j| \bigm| k \neq j \},$$ and $B_j$ is a disk with center $R_j$ and of radius $D_j$. Thus as in [@BeGaTu12 Lemma 2.5] we have that, for $$\label{eq:z_cond} z\leq ab_{2}/2,$$ it holds $$\label{eq:main_est} \xi(\rho)\geq \sum_{j=1}^{K}\frac{1}{D_{j}}\left[\frac{z^{2}}{8}-\frac{4}{27b_{1}^{4}}\left(2z^{3}(\pi-1)+\pi a^{3}b_{2}^{3}\right)\right].$$ Consequently we arrive at the following theorem. 
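The scaling identity $K(\rho_{\lambda})=\lambda K(\rho)$ amounts to power counting in $\lambda$, which can be recorded symbolically; the following sketch (using sympy, with the exponent bookkeeping only, not the integrals themselves) checks that both terms of $K$ scale with exponent one:

```python
import sympy as sp

# power counting for rho_lambda(x) = lambda^2 * rho(lambda x) in two dimensions
g = sp.symbols('gamma', positive=True)   # the exponent gamma, 1 < gamma < 3
alpha = (3 - g) / (2 * g)                # alpha as chosen in the text

# gradient term: rho_lambda^alpha contributes lambda^(2*alpha), each derivative
# one more power of lambda, |.|^gamma multiplies the exponent by gamma,
# and the change of variables dx contributes lambda^(-2)
grad_term_exp = sp.simplify((2 * alpha + 1) * g - 2)

# rho^(3/2) term: lambda^(2 * 3/2) from rho_lambda, lambda^(-2) from dx
dens_term_exp = sp.Rational(3, 2) * 2 - 2

# both terms scale as lambda^1, i.e. as one over a length
assert grad_term_exp == 1 and dens_term_exp == 1
```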
\[theo:stability\] For all non-negative functions $\rho$ such that $\rho\in L^{3/2}({{\mathord{\mathbb R}}}^{2})$ and $|\nabla\rho^\alpha|\in L^\gamma({{\mathord{\mathbb R}}}^{2})$, we have that $$\xi(\rho) \geq 0, \label{EFgreaterthanzero}$$ provided that $$z\leq \max_{\sigma\in(0,1)}h(\sigma), \label{maxcharge}$$ where $$h(\sigma)=\min\left\{\frac{a}{2}\left(\tilde{b}^{2}\, \frac{3-\gamma}{\gamma-1}\, C\bigg(\frac{\gamma}{\gamma-1}\bigg)^{-1} (1-\sigma)\right)^{(\gamma-1)/\gamma},\, \frac{27}{64}\,\frac{\tilde{b}^{4}}{5\pi-1}\sigma^2\right\}, \label{EquationForH}$$ with $a$ given by (\[eq:a\_def\]). In order to arrive at (\[EquationForH\]) we set $b_{2}$ in (\[eq:main\_est\]) to be the smallest possible under the condition (\[eq:z\_cond\]), i.e., $b_{2}=2z/a$, and we introduced $\sigma=b_{1}^{2}/\tilde{b}^{2}$. Proof of Theorem \[thm:BeTu\] ============================= In this section we give the proof of the main result of this paper, namely Theorem \[thm:BeTu\]. We use an idea introduced by Lieb and Thirring in 1975 in their proof of the stability of matter [@LiTh75] (see also the review article [@Li76] and the recent monograph [@LiSe09]). This idea was first used in this context in [@BeGaTu12]. Consider the inequality (\[EFgreaterthanzero\]), with $K=N$ (where $N$ is the number of electrons in our original system), $z=1$ (i.e., the charge of the electrons), and $R_i=x_i$ (for all $i=1, \dots, N$). With this choice, according to (\[maxcharge\]), the inequality (\[EFgreaterthanzero\]) is valid as long as $\tilde{a}$ and $\tilde{b}$ (which are now free parameters) satisfy the constraint $$1 \le \max_{\sigma\in(0,1)}h(\sigma), \label{eq:3.1}$$ with $\sigma_{0}$ (which maximizes $h(\sigma)$) such that $h(\sigma_{0})=1$. Let us introduce $\epsilon>0$ and set $\sigma_{0}=1/(1+\epsilon)$. 
Then the smallest $\tilde{b}$ such that the assumptions of Theorem \[theo:stability\] may be in principle fulfilled reads $$\tilde{b}^{2}=\left(\frac{4}{3}\right)^{3/2} {\sqrt{5\pi-1}} \, (1+\epsilon). \label{tildeb}$$ Hence $a$ has to be chosen large enough, namely such that $$1=\frac{a}{2}\left(\tilde{b}^{2}\, \frac{3-\gamma}{\gamma-1}\, C\bigg(\frac{\gamma}{\gamma-1}\bigg)^{-1} \frac{\epsilon}{1+\epsilon}\right)^{(\gamma-1)/\gamma},$$ which due to (\[eq:a\_def\]) implies $$\tilde{a}^{2}=\frac{2^\gamma C(\gamma)}{3-\gamma}\left(\left(\frac{3}{4}\right)^{3/2}(5\pi-1)^{-1/2}\, \frac{1}{\epsilon}\,\frac{\gamma-1}{3-\gamma}\, C\bigg(\frac{\gamma}{\gamma-1}\bigg)\right)^{\gamma-1}. \label{tildea}$$ Since $$\lim_{\gamma \to 1+}C(\gamma)=\sqrt{2},\quad \lim_{\gamma \to 1+}\left(\frac{\gamma-1}{3-\gamma}\, C\bigg(\frac{\gamma}{\gamma-1}\bigg)\right)^{\gamma-1}=1,$$ we have (with a fixed $\epsilon$) $$\tilde{a}^{2}|_{\gamma \to 1+}=\sqrt{2}.$$ Then take any normalized wavefunction $\psi(x_1,x_2, \dots, x_N)$, and multiply (\[EFgreaterthanzero\]) by $| \psi(x_1, \dots, x_N)|^2$ and integrate over all the electronic configurations, i.e., on ${{\mathord{\mathbb R}}}^{2N}$. Moreover, take $\rho=\rho_{\psi}(x)$. We get at once, $$E(\psi)\equiv\langle\psi,\sum_{i<j}^{N}|x_{i}-x_{j}|^{-1}\psi\rangle-D(\rho_{\psi},\rho_{\psi}) \geq - \tilde{a}^2 \int_{{{\mathord{\mathbb R}}}^2} |\nabla \rho^{\alpha}|^\gamma \, {\mathrm{d}}x - \tilde{b}^2 \int_{{{\mathord{\mathbb R}}}^2} \rho^{3/2} \, {\mathrm{d}}x \label{eq:3.4}$$ provided $\tilde{a}$ and $\tilde{b}$ satisfy (\[tildea\]) and (\[tildeb\]), respectively. In general the two integral terms in (\[MainLowerBound\]) are not comparable. If one takes a very rugged $\rho$, normalized to $N$, the gradient term may be very large while the other term can remain small. However, if one takes a smooth $\rho$, the gradient term can be very small as we illustrate in the example below. 
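The Gaussian example worked out below admits a direct numerical check of the stated closed forms for $L$ and $G$; here is a sketch in Python (the values of $\gamma$, $A$, $C$ are arbitrary admissible choices, not taken from the text):

```python
import numpy as np
from scipy import integrate
from scipy.special import gamma as Gamma

g = 1.7                      # any exponent with 1 < gamma < 3
A, C = 0.8, 1.3              # arbitrary Gaussian parameters
alpha = (3 - g) / (2 * g)

# L and G in polar coordinates for rho(|x|) = C * exp(-A |x|^2)
L_num = integrate.quad(
    lambda r: 2 * np.pi * r * (C * np.exp(-A * r**2))**1.5, 0, np.inf)[0]
# |grad rho^alpha| = C^alpha * 2*alpha*A*r * exp(-alpha*A*r^2)
G_num = integrate.quad(
    lambda r: 2 * np.pi * r
    * (C**alpha * 2 * alpha * A * r * np.exp(-alpha * A * r**2))**g,
    0, np.inf)[0]

# closed forms stated in the text
L_cl = C**1.5 * 2 * np.pi / (3 * A)
G_cl = (C**(alpha * g) * np.pi * 2**g * (A * alpha)**(g / 2 - 1)
        * Gamma(1 + g / 2) * g**(-g / 2 - 1))

assert abs(L_num - L_cl) < 1e-6 * L_cl
assert abs(G_num - G_cl) < 1e-6 * G_cl
```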
Let us denote $$L(\rho)=\int_{{{\mathord{\mathbb R}}}^2} \rho(x)^{3/2} \, {\mathrm{d}}x$$ and $$G(\rho)=\int_{{{\mathord{\mathbb R}}}^2} (|\nabla \rho(x)^{\alpha}|)^{\gamma} \, {\mathrm{d}}x,$$ with $\alpha=(3-\gamma)/(2 \gamma)$. We will evaluate them for the normal distribution $$\rho(|x|)=C\mathrm{e}^{-A|x|^{2}}$$ where $C,\, A>0$. Some straightforward integration yields $$L=C^{3/2}\frac{2\pi}{3A},$$ while $$G=C^{\alpha \, \gamma} \pi 2^{\gamma} (A \alpha)^{(\gamma/2)-1} \Gamma\left(1+\frac{\gamma}{2}\right) \gamma^{-(\gamma/2)-1}.$$ With $C=NA/\pi$, $$\int_{{{\mathord{\mathbb R}}}^{2}}\rho(|x|) \, {\mathrm{d}}x=N,$$ and we have $$\frac{G}{L}= {3} \left(\frac{\sqrt{2}}{\gamma} \right)^{\gamma} \left(\frac{\pi}{N}\right)^{\gamma/2} \Gamma\left(1+\frac{\gamma}{2}\right) (3-\gamma)^{(\gamma/2)-1},$$ i.e., in the “large number of particles” limit, the $G$ term becomes negligible, for all $1< \gamma< 3$. Acknowledgments {#acknowledgments .unnumbered} =============== It is a pleasure to dedicate this manuscript to Elliott Lieb on his eightieth birthday. The scientific achievements of Elliott Lieb have inspired generations of Mathematical Physicists. This work has been supported by the Iniciativa Científica Milenio, ICM (CHILE) project P07–027-F. The work of RB has also been supported by FONDECYT (Chile) Project 1100679. The work of MT has also been partially supported by the grant 201/09/0811 of the Czech Science Foundation. [21]{} R. D. Benguria, [*Density Functional Theory*]{}, in [**Encyclopedia of Applied and Computational Mathematics**]{} (B. Engquist, [*et al*]{}, Eds.), Springer-Verlag, Berlin, 2013. R. D. Benguria, G. A. Bley, and M. Loss, [*An improved estimate on the indirect Coulomb Energy*]{}, International Journal of Quantum Chemistry [**112**]{}, 1579–1584 (2012). R. D. Benguria, M. Loss, and H. Siedentop, [*Stability of atoms and molecules in an ultrarelativistic Thomas–Fermi–Weizsäcker model*]{}, J. Math. Phys. [**49**]{}, article 012302 (2008). R. D. 
Benguria, P. Gallegos, and M. Tušek, [*New Estimate on the Two-Dimensional Indirect Coulomb Energy*]{}, Annales Henri Poincaré (2012). G. K.–L. Chan and N. C. Handy, [*Optimized Lieb–Oxford bound for the exchange–correlation energy*]{}, Phys. Rev. [**A 59**]{}, 3075–3077 (1999). P. A. M. Dirac, [*Note on Exchange Phenomena in the Thomas Atom*]{}, Mathematical Proceedings of the Cambridge Philosophical Society, [**26**]{}, 376–385 (1930). E. Engel, [*Zur relativistischen Verallgemeinerung des TFDW Modells*]{}, Ph.D. Thesis, Johann Wolfgang Goethe Universität zu Frankfurt am Main, 1987. E. Engel and R. M. Dreizler, [*Field–theoretical approach to a relativistic Thomas–Fermi–Weizsäcker model*]{}, Phys. Rev. A [**35**]{}, 3607–3618 (1987). E. Engel and R. M. Dreizler, [*Solution of the relativistic Thomas–Fermi–Dirac–Weizsäcker model for the case of neutral atoms and positive ions*]{}, Phys. Rev. A [**38**]{}, 3909–3917 (1988). C. Hainzl and R. Seiringer, [*Bounds on One–dimensional Exchange Energies with Applications to Lowest Landau Band Quantum Mechanics*]{}, Letters in Mathematical Physics [**55**]{}, 133–142 (2001). M. Levy and J. P. Perdew, [*Tight bound and convexity constraint on the exchange–correlation–energy functional in the low–density limit, and other formal tests of generalized–gradient approximations*]{}, Physical Review B [**48**]{}, 11638–11645 (1993). E. H. Lieb, *The stability of matter*, Rev. Mod. Phys. **48**, 553–569 (1976). E. H. Lieb, [*A Lower Bound for Coulomb Energies*]{}, Physics Letters [**70 A**]{}, 444–446 (1979). E. H. Lieb, [*Thomas–Fermi and related theories of Atoms and Molecules*]{}, Rev. Mod. Phys. [**53**]{}, 603–641 (1981). E. H. Lieb and M. Loss, [**Analysis, Second Edition**]{}, Graduate Studies in Mathematics, vol. 14, Amer. Math. Soc., Providence, RI, 2001. E. H. Lieb and S. Oxford, [*Improved Lower Bound on the Indirect Coulomb Energy*]{}, International Journal of Quantum Chemistry [**19**]{}, 427–439 (1981). E. H. Lieb and R. 
Seiringer, [**The Stability of Matter in Quantum Mechanics**]{}, Cambridge University Press, Cambridge, UK, 2009. E. H. Lieb, J. P. Solovej, and J. Yngvason, [*Ground States of Large Quantum Dots in Magnetic Fields*]{}, Physical Review B [**51**]{}, 10646–10666 (1995). E. H. Lieb and W. Thirring, [*Bound for the Kinetic Energy of Fermions which Proves the Stability of Matter*]{}, Phys. Rev. Lett. [**35**]{}, 687–689 (1975); Errata [**35**]{}, 1116 (1975). J. D. Morgan III, [*Thomas–Fermi and other density functional theories*]{}, in [**Springer handbook of atomic, molecular, and optical physics, vol. 1**]{}, pp. 295–306, edited by G.W.F. Drake, Springer–Verlag, NY, 2006. P.–T. Nam, F. Portmann, and J. P. Solovej, [*Asymptotics for two dimensional Atoms*]{}, preprint, 2011. L. Onsager, [*Electrostatic Interactions of Molecules*]{}, J. Phys. Chem. [**43**]{}, 189–196 (1939). \[Reprinted in [**The collected works of Lars Onsager (with commentary)**]{}, World Scientific Series in 20$^{th}$ Century Physics, vol. 17, pp. 684–691, Edited by P.C. Hemmer, H. Holden and S. Kjelstrup Ratkje, World Scientific Pub., Singapore, 1996.\] J. P. Perdew, [*Unified Theory of Exchange and Correlation Beyond the Local Density Approximation*]{}, in [**Electronic Structure of Solids ’91**]{}, pp. 11–20, edited by P. Ziesche and H. Eschrig, Akademie Verlag, Berlin, 1991. J. P. Perdew, K. Burke, and M. Ernzerhof, [*Generalized Gradient Approximation Made Simple*]{}, Phys. Rev. Lett. [**77**]{}, 3865–3868 (1996). E. Räsänen, S. Pittalis, K. Capelle, and C. R. Proetto, [*Lower bounds on the Exchange–Correlation Energy in Reduced Dimensions*]{}, Phys. Rev. Lett. [**102**]{}, article 206406 (2009). E. Räsänen, M. Seidl, and P. Gori–Giorgi, [*Strictly correlated uniform electron droplets*]{}, Phys. Rev. B [**83**]{}, article 195111 (2011). E. M. Stein and G. Weiss, [**Introduction to Fourier Analysis on Euclidean Spaces**]{}, Princeton University Press, Princeton, NJ, 1971. A. Vela, V. 
Medel, and S. B. Trickey, [*Variable Lieb–Oxford bound satisfaction in a generalized gradient exchange–correlation functional*]{}, The Journal of Chemical Physics [**130**]{}, 244103 (2009).
--- author: - 'Yu. V. Dymchenko' title: Equality of the capacity and the modulus of a condenser in a sub-Finsler space --- The equality of the capacity and the modulus of a condenser plays an important role in geometric function theory: it links function-theoretic and geometric properties of sets. For conformal capacities and moduli in the plane the equality was proved by L. Ahlfors and A. Beurling in [@ahlfors]. This result was subsequently sharpened in the works of B. Fuglede [@fuglede] and W. Ziemer [@ziemer]. J. Hesse [@hesse] extended it to the $p$-capacity and the $p$-modulus in the case where the plates of the condenser do not meet the boundary of the domain. For the Euclidean metric the equality of capacity and modulus under the most general assumptions was proved by V. A. Shlyk [@shlyk2]; this proof was later somewhat simplified in the work of M. Ohtsuka [@ohtsuka]. For Riemannian metrics the equality was proved in [@capmod]. Finsler spaces were introduced as a generalization of Riemannian manifolds to the case where the metric depends not only on the point but also on the direction. The equality of the capacity and the modulus of a condenser in Finsler spaces under the most general assumptions was established in [@dymch2009]. Carnot–Carathéodory spaces and sub-Finsler spaces differ from Riemannian and Finsler spaces, respectively, by restricting the class of admissible paths. For the basic questions of analysis on Carnot groups see, for example, the book [@folland]. Capacities and moduli of condensers, as well as properties of various function classes on Carnot groups, have recently been studied by the group of S. K. Vodopyanov (see, e.g., [@vod89; @vod98; @vod96]). In particular, the equality of the capacity and the modulus of a condenser was established by I. G. Markina in [@markina2003]. Sub-Finsler spaces have been studied, for example, in [@clelland2006628; @ber13; @donne; @ber14; @buk14]. We now recall the main definitions and notation. Proofs of many of the statements below can be found in [@folland]. 
A stratified homogeneous group (or Carnot group) is a connected, simply connected nilpotent Lie group $\G$ whose Lie algebra decomposes into a direct sum of vector spaces $V_1\oplus V_2\oplus\dots\oplus V_m$ such that $[V_1,V_k]=V_{k+1}$ for $k=1,2,\dots,m-1$ and $[V_1,V_m]=\{0\}$. Here $[X,Y]=XY-YX$ is the commutator of the elements $X$ and $Y$, and $[V_1,V_j]$ is the linear span of the elements $[X,Y]$ with $X\in V_1$, $Y\in V_j$, $j=1,2,\dots,m$. Let the left-invariant vector fields $X_{11}$, $X_{12}$, …, $X_{1n_1}$ form a basis of $V_1$. Define the subbundle $HT$ of the tangent bundle $T\G$ with fibers $HT_x$, $x\in \G$, given by the linear span of the vector fields $X_{11}(x)$, $X_{12}(x)$, …, $X_{1n_1}(x)$. We call $HT$ the horizontal tangent bundle, and its fibers $HT_x$ the horizontal tangent spaces at the points $x\in\G$. Extend the basis $X_{11}$, …, $X_{1n_1}$ to a basis $X_{ij}$, $j=1,2,\dots,n_i$, $i=1,2,\dots,m$, of the whole Lie algebra, where each $X_{ij}$ is an iterated commutator of some of the vectors $X_{1j}$, $j=1,2,\dots,n_1$. Thus $n_i$ is the dimension of the space $V_i$, $i=1,2,\dots,m$. Every element $x\in \G$ can be represented uniquely in the form $x=\exp\left(\sum\limits_{i,j}x_{ij}X_{ij}\right)$. The numbers $\{x_{ij}\}$ are called the coordinates of the element $x$. This yields a one-to-one correspondence between the group $\G$ and the space $R^N$, where $N=n_1+n_2+\cdots+n_m$ is the topological dimension of the group $\G$. The Lebesgue measure on $R^N$ induces a bi-invariant Haar measure on $\G$, which we denote by $dx$. Write $x_i=(x_{i1},x_{i2},\dots,x_{in_i})$, $i=1,2,\dots,m$. Define the dilations $\delta_\lambda x$, $\lambda>0$, by the formula $\delta_\lambda x=(\lambda x_1,\lambda^2x_2,\dots,\lambda^m x_m)$. We also have $d(\delta_\lambda x)=\lambda^Q dx$, where $Q=\sum\limits_iin_i$ is the homogeneous dimension of the group $\G$. 
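As a concrete illustration (not part of the general development above), consider the first Heisenberg group, the simplest nonabelian Carnot group: coordinates $(x,y,t)$, dilations $\delta_\lambda(x,y,t)=(\lambda x,\lambda y,\lambda^2 t)$, so $n_1=2$, $n_2=1$, topological dimension $N=3$, and homogeneous dimension $Q=1\cdot2+2\cdot1=4$. A Monte Carlo sketch in Python checking that the measure of a homogeneous-norm ball scales as $r^Q$ (the Korányi-type norm used here is one standard choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def hnorm(x, y, t):
    # Koranyi-type homogeneous norm on the first Heisenberg group
    return ((x**2 + y**2)**2 + t**2)**0.25

def ball_volume(r, n=1_000_000):
    # Monte Carlo over the box [-r, r]^2 x [-r^2, r^2], which contains B(0, r)
    x = rng.uniform(-r, r, n)
    y = rng.uniform(-r, r, n)
    t = rng.uniform(-r**2, r**2, n)
    box = (2 * r) * (2 * r) * (2 * r**2)
    return box * np.mean(hnorm(x, y, t) < r)

v1, v2 = ball_volume(1.0), ball_volume(2.0)
ratio = v2 / v1
# |B(0, 2r)| / |B(0, r)| should equal 2^Q = 16 up to Monte Carlo error
assert abs(ratio - 16) < 0.5
```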
Let $F(x,\xi)$ be a nonnegative function defined for $x\in\G$, $\xi\in HT_x$, depending smoothly on $x$ and $\xi$, which is a Finsler metric on each fiber $HT_x$; that is: 1\) for every $a>0$ one has $F(x,a\xi)=aF(x,\xi)$, and $F(x,\xi)>0$ for $\xi\ne0$, $x\in\G$; 2\) for all $x\in\G$, $\xi,\eta\in HT_x$ the form $\nabla^2_HF^2(x,\eta)(\xi,\xi)$ is positive definite, where $$(\nabla^2_H)_{ij}=\frac12(X_{1i}X_{1j}+X_{1j}X_{1i}),\quad i,j=1,2,\dots,n_1.$$ On the cotangent bundle $HT^*$ define the function $H(x,\omega)$, $x\in\G$, $\omega\in HT^*_x$, as the supremum of $\omega(\xi)$ over all $\xi\in HT_x$ satisfying $F(x,\xi)\le1$. In what follows we identify $\omega$ with the vector whose components are the coordinates of the differential form $\omega$ in the basis $\omega_i$ dual to the basis $X_{1i}$, i.e., $\omega_i(X_{1j})=\delta_{ij}$ for $i,j=1,2,\dots,n_1$. A curve $\gamma:(a,b)\to\G$ is called horizontal if $\dot \gamma(t)\in HT_{\gamma(t)}$ for almost all $t\in (a,b)$. The length of such a curve is defined as the integral $l(\gamma)=\int\limits_a^bF(\gamma(t),\dot\gamma(t))dt$. If the length is finite, the curve is called rectifiable. On the group $\G$ fix a homogeneous norm $|\cdot|$ satisfying the conditions: $|x|\ge0$ for every $x\in\G$, with $|x|=0$ only for $x=0$; $|x^{-1}|=|x|$; $|\delta_\lambda x|=\lambda |x|$. Define the ball centered at $x\in\G$ of radius $r>0$ by $B(x,r)=\{y\in \G: |x^{-1}y|<r\}$. Note that it is a left translate of the ball $B(0,r)$, which in turn is the image of the unit ball $B(0,1)$ under the dilation $\delta_r$. It is known [@folland] that there exists a constant $C$ such that for all $x,y\in\G$ $$\label{ner} \left||xy|-|x|\right|\le C|y|\text{ for } |y|\le \frac{|x|}2.$$ We normalize the measure $dx$ so that $|B(0,1)|=\int\limits_{B(0,1)}dx=1$; clearly $|B(0,r)|=r^Q$. By means of a continuous function $g(x)$, positive on $\G$, we define the volume element $d\sigma=g(x)dx$. 
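For the fiberwise Riemannian special case $F(x,\xi)=\sqrt{\xi^{T}A\xi}$ with a positive definite matrix $A$, the dual function is $H(x,\omega)=\sqrt{\omega^{T}A^{-1}\omega}$; a numerical sketch of the defining supremum (the matrix $A$ and covector $\omega$ below are arbitrary test data, not from the text):

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical fiberwise data on a two-dimensional horizontal space:
# F(xi) = sqrt(xi^T A xi) with a positive definite matrix A
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def F(xi):
    return np.sqrt(xi @ A @ xi)

def H_numeric(omega):
    """H(omega) = sup { omega(xi) : F(xi) <= 1 }, by constrained optimization."""
    res = minimize(lambda xi: -(omega @ xi), x0=[0.1, 0.1],
                   constraints=[{'type': 'ineq', 'fun': lambda xi: 1.0 - F(xi)}])
    return -res.fun

omega = np.array([1.0, 2.0])
H_num = H_numeric(omega)
# closed form for this Riemannian F: H(omega) = sqrt(omega^T A^{-1} omega)
H_exact = float(np.sqrt(omega @ np.linalg.solve(A, omega)))
assert abs(H_num - H_exact) < 1e-4
```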
Define the distance $d_c(x,y)$ between two points $x,y\in\G$ as the infimum of the lengths of curves joining $x$ and $y$. Let $D$ be a domain in $\G$ and let $E_0,E_1\subset \bar D$ be closed disjoint sets. The triple $(E_0,E_1,D)$ is called a condenser. We say that a curve $\gamma:(a,b)\to D$ joins the sets $E_0$ and $E_1$ if $\liminf\limits_{t\to a} d(\gamma(t), E_0)=\liminf\limits_{t\to b} d(\gamma(t),E_1)=0$, where $d(x,y)=|x^{-1}y|$ for $x,y\in\G$. The family of all such locally rectifiable curves is denoted by $\Gamma(E_0,E_1,D)$. The distances $d$ and $d_c$ are equivalent to each other, and the topology generated by the distance $d$ is equivalent to the Euclidean one [@vod98]. A nonnegative Borel function $\rho$ on $D$ is called admissible for a family $\Gamma$ of curves lying in $D$ if for every $\gamma\in\Gamma$ one has $\int\limits_\gamma\rho F(x,dx)=\int\limits_a^b\rho(\gamma(t)) F(\gamma(t),\dot\gamma(t))dt \ge1$, where $\gamma(t)$ is a parametrization of $\gamma$ by a parameter $t\in(a,b)$. The set of all admissible functions for $\Gamma$ is denoted by $\operatorname{adm}\Gamma$. Let $p>1$. Define the $p$-modulus of the condenser $(E_0,E_1,D)$ by $$M_{p,F}(E_0,E_1,D)=\inf\int\limits_D\rho^p\,d\sigma,$$ where the infimum is taken over all $\rho\in\operatorname{adm}\Gamma(E_0,E_1,D)$. A function $u:D\to \mathbb R$ is called locally Lipschitz in $D$ if for every compact subset $D'\subset D$ there exists a constant $L$ such that $|u(x)-u(y)|\le L d_c(x,y)$ for all $x,y\in D'$. Define the class $L^1_{p,F}(D)$ as the closure of the class of functions locally Lipschitz in $D$ with respect to the norm $$\|u\|_{L^1_{p,F}(D)}=\left(\int\limits_DH(x,Xu)^p\,d\sigma\right)^{1/p},$$ where $Xu=(X_{11}u,X_{12}u,\dots, X_{1n_1}u)$ is the horizontal gradient of the function $u$, which is well defined by the Rademacher theorem for Carnot groups (see [@mitchell1985]). 
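As a point of reference for these definitions, in the Euclidean special case ($\G=R^n$ with $F(x,\xi)=|\xi|$ and $g\equiv1$) the $p$-modulus can be computed explicitly for the family $\Gamma$ of curves joining the two vertical sides of the rectangle $D=(0,a)\times(0,b)$ in the plane; a standard worked example (not from the text):

```latex
\[
  M_{p}(\Gamma)=a^{1-p}b .
\]
Upper bound: every curve of $\Gamma$ has horizontal extent at least $a$,
so $\rho\equiv 1/a$ is admissible and
\[
  M_{p}(\Gamma)\le\int_{D}a^{-p}\,dx = a^{1-p}b .
\]
Lower bound: for any admissible $\rho$, H\"older's inequality on each
horizontal segment $\gamma_{y}(s)=(s,y)$ gives
\[
  1\le\Bigl(\int_{0}^{a}\rho(s,y)\,ds\Bigr)^{p}
   \le a^{p-1}\int_{0}^{a}\rho(s,y)^{p}\,ds ,
\]
and integrating over $y\in(0,b)$ yields $M_{p}(\Gamma)\ge a^{1-p}b$.
For $p=2$ this is the classical extremal length relation $M_{2}=b/a$.
```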
Denote by $\operatorname{Adm}(E_0,E_1,D)$ the set of nonnegative functions in $L_{p,F}^1(D)\cap C(D)$ equal to zero (respectively, one) in some neighborhood of $E_0$ (respectively, $E_1$). Define the $p$-capacity of the condenser: $$C_{p,F}(E_0,E_1,D)=\inf\int\limits_DH(x,Xu)^p\,d\sigma,$$ where the infimum is taken over all functions $u\in\operatorname{Adm}(E_0,E_1,D)$. The infimum in the definition of $M_{p,F}(E_0,E_1,D)$ may be taken over admissible functions that are continuous in $D\setminus(E_0\cup E_1)$. **Proof.** Let $0<\varepsilon<1/2$, and let $D_k$, $k=1,2,\ldots$, be open sets exhausting $D\setminus( E_0\cup E_1)$ from inside, i.e., $\overline{D_k}\subset D_{k+1}$, $\bigcup\limits_{k=1}^\infty D_k=D\setminus( E_0\cup E_1)$; let $d_k=d(\partial D_k,\partial D_{k+1})$, $k\ge1$. For uniformity of the argument put $d_{-1}=d_0=\infty$, $D_0=\emptyset$. For each $k\ge1$ cover the compact set $\overline{D_k}\setminus D_{k-1}$ by finitely many balls $B(x_i,r_i)$ with $x_i\in \overline{D_k}\setminus D_{k-1}$ and $r_i<\min(d_{k-2},d_k)/2$. This gives a locally finite cover of the domain $D\setminus( E_0\cup E_1)$ by balls $B(x_i,r_i)$, $i\ge1$, lying in $D\setminus( E_0\cup E_1)$. Note that the cover by the balls $B(x_i,2r_i)$ with the same centers and doubled radii has the same property. We may additionally assume that all $r_i<1/2$. Let $\{h_i(x)\}$ be a partition of unity on $D\setminus( E_0\cup E_1)$ subordinate to the cover $\{B(x_i,r_i)\}$. Take an admissible function $\rho$ for $\Gamma(E_0,E_1,D)$ such that $$\int\limits_D\rho^p\,d\sigma<M_{p,F}(E_0,E_1,D)+\varepsilon.$$ Let $\varphi(z)$ be an infinitely differentiable nonnegative function on $\G$ supported in $B(0,1)$ with $\int\limits_\G\varphi(z)\,dz=1$. Set $\rho_i=h_i\rho$, $\varphi_t(x)=t^{-Q}\varphi(\delta_{1/t}x)$, $\tilde \rho_i=\int\limits_\G\rho_i(y)\varphi_t(xy^{-1})dy$. 
For each $i\ge1$ choose a parameter $0<t_i<\varepsilon$ so that for $t\le t_i$ one has $\|\tilde\rho_i-\rho_i\|_{p,F}<2^{-i}\varepsilon^{1/p}$, where the norm is taken in the space $L_{p,F}(D)$, $\|\rho\|_{p,F}=\left(\int\limits_D\rho^p\,d\sigma\right)^{1/p}$ (see [@folland Prop. 1.20], with left translations replaced by right ones and vice versa). We also require that $zB(x_i,r_i)\subset B(x_i,2r_i)$ for every $z$ with $|z|\le t_i$; this is possible by inequality (\[ner\]). The function $\log F(x,\xi)$ is uniformly continuous on the compact set $\{(x,\xi):x\in \overline{B(x_i,2r_i)}, 1/2\le F(x,\xi)\le3/2\}$; hence there exists $\delta>0$ such that for $|z|<\delta$, $|\xi'-\xi''|<\delta$, and any $x\in B(x_i,2r_i)$ with $zx\in \overline{B(x_i,2r_i)}$, $$\label{1} \frac{F(zx,\xi')}{F(x,\xi'')}\ge(1+\varepsilon)^{-1}.$$ Here $\xi'$, $\xi''$ are regarded as vectors in the basis $X_{1i}$, $i=1,2,\dots,n_1$, equipped with the Euclidean norm. From now on we assume that $t_i<\delta$. The function $\tilde\rho=\sum\limits_i\tilde\rho_i$ is infinitely differentiable in $D\setminus( E_0\cup E_1)$ and $$\label{5} \int\limits_D\tilde\rho^p\,d\sigma<M_{p,F}(E_0,E_1,D)+2\varepsilon,$$ if we set $\tilde\rho=0$ on $E_0\cup E_1$. We next show that the function $(1+\varepsilon)\tilde\rho$ is admissible for $\Gamma(E_0,E_1,D)$. 
If $\gamma\in\Gamma(E_0,E_1,D)$, then $$\label{3} 1\le\int\limits_\gamma\rho\,F(x,dx)=\int\limits_\gamma\sum_i\rho_i\,F(x,dx)=\sum_i\int\limits_{\gamma\cap B(x_i,r_i)}\rho_i\,F(x,dx).$$ We transform the integral of the function $\tilde\rho$ along $\gamma$: $$\begin{gathered} \label{2} \int\limits_\gamma\tilde\rho\,F(x,dx)=\int\limits_\gamma\sum_i\tilde\rho_i\,F(x,dx)=\int\limits_\gamma\sum_i\int\limits_\G\rho_i(y^{-1}x)\varphi_{t_i}(y)\,dy\,F(x,dx)= \\ =\int\limits_\gamma\sum_i\int\limits_\G\rho_i((\delta_{t_i}z)^{-1}x)\varphi(z)\,dz\,F(x,dx)= \sum_i\int\limits_{B(0,1)}\varphi(z)\,dz\int\limits_{\gamma\cap B(x_i,2r_i)}\rho_i((\delta_{t_i}z)^{-1}x)\,F(x,dx)=\\=\int\limits_{B(0,1)}\varphi(z)\,dz\sum_i\int\limits_{\gamma\cap B(x_i,2r_i)}\rho_i((\delta_{t_i}z)^{-1}x)\,F(x,dx). \end{gathered}$$ Let $z\in B(0,1)$. For each $i=1,2,\dots$ consider an arc $\gamma'$ of the set $\gamma\cap B(x_i,2r_i)$. Put $\tilde\gamma'=(\delta_{t_i}z)^{-1}\cdot\gamma'$. This curve is horizontal due to the left invariance of the fields $X_{1j}$, $j=1,2,\dots,n_1$. Join the corresponding endpoints of $\gamma'$ and $\tilde\gamma'$ by two horizontal curves lying in $B(x_i,2r_i)\setminus B(x_i,r_i)$, and replace $\gamma'$ by the union of $\tilde\gamma'$ with these horizontal curves. After all these modifications we obtain a curve $\tilde\gamma_z$ which is again admissible for $\Gamma(E_0,E_1,D)$. Making the change of variable $y=(\delta_{t_i}z)^{-1}x$ and parametrizing the curve $\gamma$ by the Finsler arc length (if the curve is not rectifiable, the arc length is measured from some point of the curve with the appropriate sign), we obtain $$\int\limits_{\gamma\cap B(x_i,2r_i)}\rho_i((\delta_{t_i}z)^{-1}x)\,F(x,dx)\ge(1+\varepsilon)^{-1}\int\limits_{\tilde\gamma_z\cap B(x_i,r_i)}\rho_i(y)\,F(y,dy)$$ by (\[1\]). Substituting this into (\[2\]) and using (\[3\]), we conclude that $\int\limits_\gamma\tilde\rho\,F(x,dx)\ge(1+\varepsilon)^{-1}$, i.e., $(1+\varepsilon)\tilde\rho\in\operatorname{adm}\Gamma(E_0,E_1,D)$. 
By the arbitrariness of $\varepsilon$ and inequality (\[5\]), the lemma is proved. The following result was established in $R^n$ by V. A. Shlyk in [@shlyk2] and modified in [@ohtsuka1999]. Define a system of closed sets $E_{ij}$, $j\ge0$, $i=0,1$, such that $E_{ij}\subset \mathop{\rm int}E_{i,j-1}$ for $j\ge1$, $E_i=\bigcap\limits_{j=0}^\infty E_{ij}$, $E_{00}\cap E_{10}=\emptyset$. \[lemma\] Let $\rho\in L_{p,F}(D)$ be a positive function continuous in $D\setminus(E_0\cup E_1)$. For every $\varepsilon>0$ there exists a function $\rho'$ with $\rho'\ge\rho$ in $D$ such that 1. $\int\limits_D\rho'^p\,d\sigma\le\int\limits_D\rho^p\,d\sigma+\varepsilon$; 2. if for every $j\ge0$ there exists a curve $\gamma_j\in\Gamma(E_{0j},E_{1j},D)$ such that\ $\int\limits_{\gamma_j}\rho'\,F(x,dx)\le\alpha$, then there exists a curve $\tilde\gamma\in \Gamma(E_0,E_1,D)$ such that $\int\limits_{\tilde\gamma}\rho\, F(x,dx)\le\alpha+\varepsilon$. **Proof.** Let $E^j=E_{0j}\cup E_{1j}$, $W_j=E^{j-1}\setminus \mathop{\rm int}E^j$, $d_j=\min(d_c(\partial E_{0j},\partial E_{0,j-1}),d_c(\partial E_{1,j-1},\partial E_{1j}))>0$. Since the function $\rho$ is positive in $D\setminus (E_0\cup E_1)$, one can find a sequence $\varepsilon_j\to0$ as $j\to\infty$ such that $$\begin{gathered} \label{11}\sum_{j=1}^\infty(1+\varepsilon_j^{-1})^p\varepsilon_j^{p+1}<\varepsilon,\\ \label{12}\alpha\varepsilon_j<d_j\inf_{W_j\cap D}\rho.\end{gathered}$$ Form a sequence of compact sets $D_j$ such that $D_j\subset \mathop{\rm int} D_{j+1}$, $\bigcup\limits_{j=1}^\infty D_j=D$, and $\int\limits_{D\setminus D_j}\rho^p\,d\sigma<\varepsilon_j^{p+1}$. Let $V_j=(D\setminus D_j)\cap W_j$. Put $$\rho'(x)=\left\{ \begin{array}{cl} (1+\varepsilon_j^{-1})\rho(x), & x\in V_j; \\ \rho(x), & x\in D\setminus\bigcup\limits_{j=1}^\infty V_j. \end{array} \right.$$ We show that the function $\rho'$ satisfies the conditions of the lemma. 
Using (\[11\]), we have: $$\begin{aligned} \int\limits_D\rho'^p\,d\sigma&=\sum_{j=1}^\infty\int\limits_{V_j}\left((1+\varepsilon_j^{-1})\rho\right)^p\,d\sigma+\int\limits_{D\setminus\bigcup\limits_{j=1}^\infty V_j}\rho^p\,d\sigma\le \\ &\le \sum_{j=1}^\infty(1+\varepsilon_j^{-1})^p\int\limits_{V_j}\rho^p\,d\sigma+\int\limits_D\rho^p\,d\sigma\le \\ &\le \sum_{j=1}^\infty(1+\varepsilon_j^{-1})^p\varepsilon_j^{p+1}+\int\limits_D\rho^p\,d\sigma\le\int\limits_D\rho^p\,d\sigma+\varepsilon.\end{aligned}$$ Thus condition 1 is satisfied. We now verify condition 2. Fix $j\ge1$. For $k\ge j$ the curve $\gamma_k\in\Gamma(E_{0j},E_{1j},D)$; consequently, it contains two arcs: $\gamma'_k$, joining $\partial E_{0j}$ with $\partial E_{0,j-1}$, and $\gamma''_k$, joining $\partial E_{1,j-1}$ with $\partial E_{1j}$. The arcs $\gamma'_k$ and $\gamma''_k$ are not contained in $V_j$. Indeed, if the opposite held, then using inequality (\[12\]) we would deduce that $$\alpha\ge\int\limits_{\gamma_k}\rho'\,F(x,dx)\ge\int\limits_{\gamma'_k}\rho'\,F(x,dx)\ge\varepsilon_j^{-1}\int\limits_{\gamma'_k}\rho\,F(x,dx)\ge\varepsilon_j^{-1}d_j\inf_{W_j\cap D}\rho>\alpha,$$ and similarly for $\gamma''_k$ — a contradiction. Hence $$\gamma_k\cap(D_j\cap(E_{i,j-1}\setminus \mathop{\rm int}E_{ij}))\ne\emptyset,\quad i=0,1,\quad k\ge j.$$ Denote $\gamma_k=\gamma_{0k}$. We describe an algorithm that extracts from a sequence of curves $\gamma_{j-1,k}$ a subsequence $\gamma_{jk}$. Note that the set $D_j\cap(E_{i,j-1}\setminus \mathop{\rm int}E_{ij})$ is compact. Hence from the sequence $\gamma_{j-1,k}$ one can select a subsequence (which we again denote by $\gamma_{j-1,k}$) converging to some curve $\gamma_0$ for which the set $M=\gamma_0\cap(D_j\cap(E_{0,j-1}\setminus \mathop{\rm int}E_{0j}))\ne\emptyset$. Take some point $x_{0j}\in M$. 
Since $\rho$ is continuous at the point $x_{0j}$, one can choose a ball $B(x_{0j},r(x_{0j}))$ such that for every geodesic line $l$ joining the center of the ball with its boundary, $$\label{13} \int\limits_l\rho\,F(x,dx)\le\frac\varepsilon{2^{j+3}}.$$ Discarding the first few terms of the sequence $\gamma_{j-1,k}$, we may assume that every curve of this subsequence meets the ball $B(x_{0j},r(x_{0j}))$. In the same way consider the set $D_j\cap(E_{1,j-1}\setminus \mathop{\rm int}E_{1j})$, a point $x_{1j}$ of this set, and a ball $B(x_{1j},r(x_{1j}))$ satisfying a condition analogous to (\[13\]); again extract from $\gamma_{j-1,k}$ a subsequence all of whose members meet this ball. The resulting subsequence is the desired sequence $\gamma_{jk}$. Carry out this construction successively for $j=1,2,\dots$. Consider the diagonal sequence $\gamma_{kk}$. The curve $\gamma_{kk}$ meets each of the balls $B(x_{ij},r(x_{ij}))$, $i=0,1$, $1\le j\le k$, in at least two points. Join two points of intersection with the center of the corresponding ball by geodesic lines. We obtain a curve $\tilde\gamma_k\in\Gamma(E_{0k},E_{1k},D)$ passing through the points $x_{0j}$, $x_{1j}$, $j=1,2,\dots,k$. For this curve, using condition (\[13\]), we have: $$\int\limits_{\tilde\gamma_k}\rho\,F(x,dx)\le\int\limits_{\gamma_{kk}}\rho\,F(x,dx)+2\sum_{j=1}^k\frac\varepsilon{2^{j+3}}\le\alpha+\frac\varepsilon4.$$ Let $\Gamma_0$ be the family of horizontal curves joining $x_{00}$ and $x_{10}$ in $D\setminus(E_0\cup E_1)$, and let $\Gamma_{ij}$ be the family of horizontal curves in $D\setminus(E_0\cup E_1)$ joining $x_{ij}$ and $x_{i,j+1}$, $i=0,1$, $j=1,2,\dots$. 
Then $$\inf_{\gamma\in\Gamma_0}\int\limits_\gamma\rho\,F(x,dx)+\sum_{j=1}^k\inf_{\gamma\in\Gamma_{0j}}\int\limits_\gamma\rho\,F(x,dx)+\sum_{j=1}^k\inf_{\gamma\in\Gamma_{1j}}\int\limits_\gamma\rho\,F(x,dx) \le\int\limits_{\tilde\gamma_k}\rho\,F(x,dx)\le\alpha+\frac\varepsilon4.$$ This holds for every $k$; consequently, $$\inf_{\gamma\in\Gamma_0}\int\limits_\gamma\rho\,F(x,dx)+\sum_{j=1}^\infty\inf_{\gamma\in\Gamma_{0j}}\int\limits_\gamma\rho\,F(x,dx)+\sum_{j=1}^\infty\inf_{\gamma\in\Gamma_{1j}}\int\limits_\gamma\rho\,F(x,dx) \le\alpha+\frac\varepsilon4.$$ Choose curves $C_0\in\Gamma_0$ and $C_{ij}\in\Gamma_{ij}$, $i=0,1$, $j=1,2,\dots$, so that $$\begin{aligned} \int\limits_{C_0}\rho\,F(x,dx)&<\inf_{\gamma\in\Gamma_0}\int\limits_\gamma\rho\,F(x,dx)+\frac\varepsilon2,\\ \int\limits_{C_{ij}}\rho\,F(x,dx)&<\inf_{\gamma\in\Gamma_{ij}}\int\limits_\gamma\rho\,F(x,dx)+\frac\varepsilon{2^{j+3}}.\end{aligned}$$ Let $\tilde\gamma=\dots+C_{01}+C_0+C_{11}+\dots$. Then $\tilde\gamma\in\Gamma(E_0,E_1,D)$ and $$\int\limits_{\tilde\gamma}\rho\,F(x,dx)\le\alpha+\frac\varepsilon4+\frac\varepsilon2+2\sum_{j=1}^\infty\frac\varepsilon{2^{j+3}}=\alpha+\varepsilon.$$ The lemma is proved. We now prove the main theorem. Let $D$ be a domain in $\G$, and let $E_0$, $E_1$ be disjoint nonempty compact subsets of $\bar D$. Then $$M_{p,F}(\Gamma(E_0,E_1,D))=C_{p,F}(E_0,E_1,D).$$ **Proof.** First we prove the inequality $$\label{14} M_{p,F}(\Gamma(E_0,E_1,D))\le C_{p,F}(E_0,E_1,D).$$ Let $u\in\operatorname{Adm}(E_0,E_1,D)$, and let $\Gamma_0$ be the subfamily of locally rectifiable horizontal curves $\gamma$ in $\Gamma(E_0,E_1,D)$ such that $u$ is absolutely continuous on every closed rectifiable subcurve of $\gamma$. Define the function $\rho(x)=H(x,Xu)$ on $D$. Let $\gamma\in\Gamma_0$, $\gamma:(a,b)\to D$. 
If $a<t_1<t_2<b$, then $$\int\limits_\gamma\rho\,F(x,dx)\ge\int\limits_{t_1}^{t_2} H(\gamma(t),Xu(\gamma(t)))F(\gamma(t),\dot\gamma(t))\,dt\ge\left|\int\limits_{t_1}^{t_2} (Xu(\gamma(t)),\dot\gamma(t))\,dt\right|=|u(\gamma(t_2))-u(\gamma(t_1))|.$$ By the arbitrariness of $t_1$ and $t_2$ we get $\int\limits_\gamma\rho\,F(x,dx)\ge1$. Thus $\rho\in\operatorname{adm}\Gamma_0$. Consequently, $$M_{p,F}(\Gamma_0)\le\int\limits_D\rho^p\,d\sigma=\int\limits_DH(x,Xu)^p\,d\sigma.$$ Since $M_{p,F}(\Gamma_0)=M_{p,F}(\Gamma(E_0,E_1,D))$ (see [@fuglede] and [@markina2003]), taking the infimum over $u$ we obtain inequality (\[14\]). We now prove the opposite inequality $$\label{16} M_{p,F}(\Gamma(E_0,E_1,D))\ge C_{p,F}(E_0,E_1,D)$$ in the case $(E_0\cup E_1)\cap\partial D=\emptyset$. Let $\rho\in \operatorname{adm}\Gamma(E_0,E_1,D)$ be a function continuous in $D\setminus(E_0\cup E_1)$. Define on $D$ the function $u(x)=\min(1,\inf\int\limits_{\beta_x}\rho\,F(x,dx))$, where the infimum is taken over all locally rectifiable horizontal curves $\beta_x$ joining $E_0$ and $x$, oriented toward the point $x$. We show that $u\in\operatorname{Adm}(E_0,E_1,D)$ and $H(x,Xu)\le \rho$ almost everywhere in $D$. If $u\equiv1$, this is obvious. If $u\not\equiv1$, let $\alpha_{x_1x_2}$ be a shortest curve joining $x_1$ and $x_2$, oriented toward the point $x_2$, where the points $x_1$ and $x_2$ are chosen sufficiently close to each other. Let $\beta_{x_1}$ be a rectifiable curve joining $x_1$ and $E_0$. Then $$u(x_2)\le\int\limits_{\beta_{x_1}}\rho\,F(x,dx)+\int\limits_{\alpha_{x_1x_2}}\rho\,F(x,dx)\le\int\limits_{\beta_{x_1}}\rho\,F(x,dx)+\max_{x\in\alpha_{x_1x_2}}\rho(x)d_c(x_1,x_2).$$ Here $d_c$ is measured from $x_1$ to $x_2$. Since $\beta_{x_1}$ is arbitrary, $$u(x_2)\le u(x_1)+\max_{x\in\alpha_{x_1x_2}}\rho(x)d_c(x_1,x_2).$$ By arguments analogous to those in [@mitchell1985], the derivatives $X_{1j}u$, $j=1,2,\dots,n_1$, exist almost everywhere in $D$. 
Let $x_1$ be such a point, and let a smooth curve through $x_1$ in the direction of a vector $\xi$ be given. Letting $x_2$ tend to $x_1$ along this curve, we obtain $$Xu(x_1)(\xi)\le \rho(x_1)F(x_1,\xi).$$ Dividing by $F(x_1,\xi)$ and taking the supremum over all $\xi$, we conclude that $H(x_1,Xu(x_1))\le\rho(x_1)$. Hence $$C_{p,F}(E_0,E_1,D)\le\int\limits_DH(x,Xu)^p\,d\sigma\le\int\limits_D\rho^p\,d\sigma.$$ Passing to the infimum over $\rho$, we obtain inequality (\[16\]) in the case $\partial D\cap(E_0\cup E_1)=\emptyset$. Thus the theorem is proved in this case. Now consider the general case $\partial D\cap(E_0\cup E_1)\ne\emptyset$. Let $0<\varepsilon<1/2$. Consider a function $\rho$, continuous on $D\setminus(E_0\cup E_1)$ and admissible for $\Gamma(E_0,E_1,D)$, such that $$\int\limits_{D\setminus(E_0\cup E_1)}\rho^p\,d\sigma<\varepsilon+M_{p,F}(E_0,E_1,D).$$ We may assume that $\rho>0$ on $D\setminus(E_0\cup E_1)$; otherwise we replace it by the function $\max(\rho(x),h(x))$, where $h(x)>0$ is a continuous function with arbitrarily small integral $\int\limits_Dh^p\,d\sigma$. Let $\rho'$, $E_{0j}$, $E_{1j}$ be the same as in Lemma \[lemma\]. Let us show that $$\int\limits_\gamma\rho'\,F(x,dx)>1-2\varepsilon$$ for all $\gamma\in\Gamma(E_{0j},E_{1j},D)$ once $j$ is sufficiently large. Indeed, if this were not so, there would exist $j_k$ and $\gamma_k\in\Gamma(E_{0j_k},E_{1j_k},D)$ such that $\int\limits_{\gamma_k}\rho'\,F(x,dx)\le1-2\varepsilon$. By Lemma \[lemma\] there would then exist a curve $\tilde\gamma\in\Gamma(E_0,E_1,D)$ such that $\int\limits_{\tilde\gamma}\rho\,F(x,dx)\le1-\varepsilon$, contradicting the admissibility of $\rho$. Define the function $$\tilde\rho(x)=\left\{ \begin{array}{ll} \dfrac{\rho'}{1-2\varepsilon}, & x\in D\setminus(E_{0j}\cup E_{1j}); \\ 0, & x\notin D\setminus(E_{0j}\cup E_{1j}). \end{array} \right.$$ It belongs to $\operatorname{adm}\Gamma(E_0,E_1,D\cup E_{0j}\cup E_{1j})$. 
Therefore, in view of the particular case already proved, $$\begin{gathered} C_{p,F}(E_0,E_1,D)\le C_{p,F}(E_0,E_1,D\cup E_{0j}\cup E_{1j})=M_{p,F}(E_0,E_1,D\cup E_{0j}\cup E_{1j})\le\\ \le\int\limits_D\tilde\rho^p\,d\sigma\le(M_{p,F}(E_0,E_1,D)+2\varepsilon)(1-2\varepsilon)^{-p}.\end{gathered}$$ Letting $\varepsilon\to0$, we obtain inequality (\[16\]), and hence the assertion of the theorem. Lemma \[lemma\] also implies the continuity of the module, that is, $$\lim\limits_{j\to\infty}M_{p,F}(E_{0j},E_{1j},D)=M_{p,F}(E_0,E_1,D).$$ [10]{} В. Н. Берестовский, *Однородные пространства с внутренней метрикой и субфинслеровы многообразия. Научный семинар.*, http://gct.math.nsc.ru/?p=2632. В. Н. Берестовский, *Универсальные методы поиска нормальных геодезических на группах Ли с левоинвариантной субримановой метрикой*, Сиб. матем. журн., **55**, №5, (2014), 959–970. А. В. Букушева, *Слоения на распределениях с финслеровой метрикой*, Изв. Сарат. ун-та. Нов. сер. Сер. Математика. Механика. Информатика, **14**, №3, (2014), 247–251. С. Водопьянов, А. Ухлов, *Пространства Соболева и $(P,Q)$-квазиконформные отображения групп Карно*, Сиб. мат. ж., **39**, №4, (1998), 776–795. С. К. Водопьянов, *Теория потенциала на однородных группах*, Матем. сб., **180**, №1, (1989), 57–77. С. К. Водопьянов, *Монотонные функции и квазиконформные отображения на группах Карно*, Сиб. матем. журн., **37**, №6, (1996), 1269–1295. Ю. В. Дымченко, *Равенство емкости и модуля конденсатора на поверхности*, *Аналитическая теория чисел и теория функций. 17 (Зап. научн. семин. ПОМИ)*, **276**, 112–133, СПб.: Наука, 2001. Ю. В. Дымченко, *Равенство емкости и модуля конденсатора в финслеровых пространствах*, Матем. заметки, **85**, №4, (2009), 594–602. В. А. Шлык, *О равенстве $p$-емкости и $p$-модуля*, Сиб. мат. журн., **34**, №6, (1993), 216–221. L. Ahlfors, A. Beurling, *Conformal invariants and function-theoretic null-sets*, Acta Mathematica, **83**, (1950), 101–129. H. Aikawa, M. Ohtsuka, *Extremal length of vector measures*, Ann. Acad. 
Sci. Fenn. Ser. A., **24**, (1999), 61–88. J. N. Clelland, C. G. Moseley, *Sub-Finsler geometry in dimension three*, Differential Geometry and its Applications, **24**, №6, (2006), 628–651. E. Le Donne, *A metric characterization of Carnot groups*, http://arxiv.org/abs/1304.7493v2. G. Folland, E. Stein, *Hardy Spaces on Homogeneous Groups*, Mathematical Notes Series, Princeton University Press, 1982. B. Fuglede, *Extremal length and functional completion*, Acta Math., **98**, (1957), 171–219. J. Hesse, *A $p$-extremal length and $p$-capacity equality*, Ark. Mat., **13**, №1, (1975), 131–144. I. Markina, *On coincidence of p-module of a family of curves and p-capacity on the Carnot group*, Rev. Mat. Iberoamericana, **19**, №1, (2003), 143–160. J. Mitchell, *On Carnot-Caratheodory metrics*, J. Differential Geom., **21**, №1, (1985), 35–45. M. Ohtsuka, *Extremal length and precise functions*, GAKUTO International Series, Gakkōtosho, 2003. W. P. Ziemer, *Extremal length and conformal capacity*, Trans. Amer. Math. Soc., **126**, №3, (1967), 460–473.
--- author: - | Yonghui Ling$^\textup{a,}$[^1], Xiubin Xu$^\textup{b,}$[^2]\ $^\textup{a}$Department of Mathematics, Zhejiang University, Hangzhou 310027, China\ $^\textup{b}$Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China title: 'Semilocal Convergence Behavior of Halley’s Method Using Kantorovich’s Majorant Principle' --- **Abstract:** The present paper is concerned with the semilocal convergence of Halley’s method for solving nonlinear operator equations in Banach spaces. Under some so-called majorant conditions, a new semilocal convergence analysis for Halley’s method is presented. This analysis enables us to drop the assumption that the majorizing function has a second root, while still guaranteeing the Q-cubic convergence rate. Moreover, a new error estimate based on a directional derivative of the second derivative of the majorizing function is also obtained. This analysis also yields two important special cases: convergence results under Kantorovich-type and under Smale-type assumptions. **Keywords:** Halley’s Method; Majorant Condition; Majorizing Function; Majorizing Sequence; Kantorovich-type Convergence Criterion; Smale-type Convergence Criterion\ [**Subject Classification:**]{} 47J05, 65J15, 65H10. Introduction {#section:Introduction} ============ In this paper, we are concerned with the numerical approximation of a solution $x$ of the nonlinear equation $$\label{eq:NonlinearOperatorEquation} F(x) = 0,$$ where $F$ is a given nonlinear operator mapping a nonempty open convex subset $D$ of a Banach space $X$ into another Banach space $Y$. Newton’s method with initial point $x_0$ is defined by $$\label{iteration:NewtonMethod} x_{k+1} = x_k - F'(x_k)^{-1}F(x_k), \ \ \ k = 0,1,2,\ldots,$$ which is one of the most efficient methods known for solving such equations. 
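In the scalar case $X = Y = \mathbb{R}$, iteration (\[iteration:NewtonMethod\]) reduces to the familiar tangent-line recursion. The following minimal sketch (our own illustrative example; the test function $F(x) = x^2 - 2$ is not taken from the paper) shows the iteration in code:

```python
def newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Newton's method x_{k+1} = x_k - F'(x_k)^{-1} F(x_k) for a scalar F."""
    x = x0
    for _ in range(max_iter):
        step = F(x) / dF(x)  # in one dimension, F'(x)^{-1} F(x) is a quotient
        x -= step
        if abs(step) < tol:
            break
    return x

# Approximate the positive root of F(x) = x^2 - 2, i.e. sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

Under the semilocal hypotheses discussed in this paper, the number of correct digits roughly doubles per step, which is the Q-quadratic rate of Kantorovich's theorem.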
One of the famous results on Newton’s method (\[iteration:NewtonMethod\]) is the well-known Kantorovich theorem [@Kantorvich1982], which guarantees convergence of the method to a solution under semilocal conditions. It does not require a priori existence of a solution, proving instead the existence of the solution and its uniqueness in some region. Another important result concerning Newton’s method (\[iteration:NewtonMethod\]) is Smale’s point estimate theory [@Smale1986], which assumes that the nonlinear operator is analytic at the initial point. Since then, Kantorovich-type theorems have been the subject of much further research; see, for example, [@GraggTapia1974; @Deuflhard1979; @Ypma1982; @Gutierrez2000; @XuLi2007; @XuLi2008]. Concerning Smale’s point estimate theory, Wang and Han [@WangHan1997] discussed the $\alpha$-criteria under weaker conditions and generalized the theory. In particular, Wang [@Wang1999] introduced weak Lipschitz conditions, called Lipschitz conditions with $L$-average, under which Kantorovich-type convergence criteria and Smale’s point estimate theory can be investigated together. Recently, Ferreira and Svaiter [@Ferreira2009a] presented a new convergence analysis for Kantorovich’s theorem which makes clear, with respect to Newton’s method (\[iteration:NewtonMethod\]), the relationship between the majorizing function $h$ and the nonlinear operator $F$ under consideration. Specifically, they studied the semilocal convergence of Newton’s method (\[iteration:NewtonMethod\]) under the following majorant condition: $$\|F'(x_0)^{-1}[F'(y) - F'(x)]\| \leq h'(\|y - x\| + \|x - x_0\|) - h'(\|x - x_0\|), \ \ x,y \in {\bm{\mathrm{B}}}(x_0,R), R > 0,$$ where $\|y - x\| + \|x - x_0\| < R$ and $h:[0,R) \to \mathbb{R}$ is a continuously differentiable, convex and strictly increasing function satisfying $h(0) > 0$ and $h'(0) = - 1$ and having a zero in $(0,R)$. 
This convergence analysis relaxes the assumptions needed to guarantee Q-quadratic convergence (see Definition \[definition:Q-orderConvergence\]) of Newton’s method (\[iteration:NewtonMethod\]) and yields a new estimate of the Q-quadratic convergence rate. The same type of analysis was also used in [@Ferreira2009b] to study the local convergence of Newton’s method. Halley’s method in Banach spaces, defined by $$\label{iteration:HalleyMethod} x_{k+1} = x_k - [{\bm{\mathrm{I}}}- L_F(x_k)]^{-1}F'(x_k)^{-1}F(x_k), \ \ k = 0,1,2,\ldots,$$ where the operator $L_F(x)=\frac{1}{2}F'(x)^{-1}F''(x)F'(x)^{-1}F(x)$, is another famous iteration for solving the nonlinear equation (\[eq:NonlinearOperatorEquation\]). Convergence results for this method and its modifications have recently been studied under assumptions of Newton–Kantorovich type; see, for example, [@Candela1990a; @HanWang1997; @Argyros2004; @YeLi2006; @Ezquerro2005; @Argyros2012]. There are also studies of Smale-type convergence for Halley’s method (\[iteration:HalleyMethod\]) when the nonlinear operator $F$ is analytic at the initial point; see, for example, [@WangHan1990; @Wang1997; @Han2001]. Motivated by the ideas of Ferreira and Svaiter in [@Ferreira2009a], in the rest of this paper we study the semilocal convergence of Halley’s method (\[iteration:HalleyMethod\]) under some so-called majorant conditions. Suppose that $F$ is a twice Fréchet differentiable operator and that there exists $x_0 \in D$ such that $F'(x_0)$ is nonsingular. In addition, let $R > 0$ and let $h : [0,R) \to \mathbb{R}$ be a twice continuously differentiable function. We say that the operator $F''$ satisfies the majorant condition if $$\label{condition:MajorantCondition} \|F'(x_0)^{-1}[F''(y) - F''(x)]\| \leq h''(\|y - x\| + \|x - x_0\|) - h''(\|x - x_0\|), \ \ x,y \in {\bm{\mathrm{B}}}(x_0,R),$$ where $\|y - x\| + \|x - x_0\| < R$ and the following assumptions hold: 1. $h(0) > 0, h''(0) > 0, h'(0) = -1.$ 2. 
$h''$ is convex and strictly increasing in $[0,R)$. 3. $h$ has zero(s) in $(0,R)$; let $t^*$ denote the smallest zero, and assume $h'(t^*) < 0.$ Under the assumption that the second derivative of $F$ satisfies the majorant condition, we establish a semilocal convergence result for Halley’s method (\[iteration:HalleyMethod\]). In our convergence analysis, the assumptions needed to guarantee Q-cubic convergence of Halley’s method (\[iteration:HalleyMethod\]) are relaxed. In addition, we obtain a new error estimate based on a directional derivative of the second derivative of the majorizing function. We drop the assumption that the majorizing function has a second root, while still guaranteeing Q-cubic convergence. Moreover, the majorizing function need not even be defined beyond its first root. In particular, this convergence analysis allows us to obtain some important special cases, including Kantorovich-type convergence results under Lipschitz conditions and Smale-type convergence results under the $\gamma$-condition (see Definition \[definition:GammaCondition\]). The rest of this paper is organized as follows. In Section 2, we introduce some preliminary notions and properties of the majorizing function. In Section 3, we study Halley’s method applied to the majorizing function and establish results concerning the majorizing sequence. The main results on semilocal convergence and the new error estimate are stated and proved in Section 4. In Section 5, we present two special cases of our main results. Finally, in Section 6, some remarks and a numerical example are offered. Preliminaries {#section:Preliminaries} ============= Let $X$ and $Y$ be Banach spaces. Throughout the paper, for $x \in X$ and a positive number $r$, we use ${\bm{\mathrm{B}}}(x,r)$ to denote the open ball with radius $r$ centered at $x$, and we let $\overline{{\bm{\mathrm{B}}}(x,r)}$ denote its closure. 
Throughout this paper, for a convergent sequence $\{x_n\}$ in $X$, we use the notion of Q-order of convergence (see [@Jay2001] or [@Potra1989] for more details). \[definition:Q-orderConvergence\] A sequence $\{x_n\}$ converges to $x^*$ with Q-order (at least) $q \geq 1$ if there exist a constant $c \geq 0$ and an integer $N \geq 0$ such that for all $n \geq N$ we have $$\|x^* - x_{n+1}\| \leq c\|x^* - x_n\|^q.$$ For $q = 2,3$ the convergence is said to be (at least) Q-quadratic and Q-cubic, respectively. The notions of the Lipschitz condition (see [@Wang2000; @Deuflhard2004]) and of the $\gamma$-condition (see [@WangHan1997]) are defined as follows. \[definition:LipschitzCondition\] The condition on the operator $F$ $$\|F(x) - F(y)\| \leq L\|x - y\|,\ \ \ x,y\in D$$ is usually called the Lipschitz condition in the domain $D$ with constant $L$. If $F$ is only required to satisfy $$\|F(x) - F(x_0)\| \leq L\|x - x_0\|,\ \ \ x \in {\bm{\mathrm{B}}}(x_0,r),$$ we call it the center Lipschitz condition in the ball ${\bm{\mathrm{B}}}(x_0,r)$. In particular, if $F'(x_0)^{-1}F'$ satisfies the Lipschitz condition, i.e. $$\|F'(x_0)^{-1}[F'(x) - F'(y)]\| \leq L\|x - y\|,\ \ \ x,y \in {\bm{\mathrm{B}}}(x_0,r),$$ we call it the affine covariant Lipschitz condition. The corresponding center Lipschitz condition is referred to as the affine covariant center Lipschitz condition. \[definition:GammaCondition\] Let $F:D\subset X \to Y$ be a thrice continuously differentiable nonlinear operator, with $D$ open and convex. Suppose $x_0 \in D$ is a given point, and let $0 < r \leq 1/\gamma$ be such that ${\bm{\mathrm{B}}}(x_0,r) \subset D$. 
$F$ is said to satisfy the $\gamma$-condition (with 1-order) on ${\bm{\mathrm{B}}}(x_0,r)$ if $$\|F'(x_0)^{-1}F''(x)\| \leq \frac{2\gamma}{(1 - \gamma\|x - x_0\|)^3}.$$ $F$ is said to satisfy the $\gamma$-condition with 2-order on ${\bm{\mathrm{B}}}(x_0,r)$ if the following relation holds: $$\|F'(x_0)^{-1}F'''(x)\| \leq \frac{6\gamma^2}{(1 - \gamma\|x - x_0\|)^4}.$$ For the convergence analysis, we need the following useful lemmas about elementary convex analysis. The first one is slightly modified from the one in [@Ferreira2009b]. \[lemma:ConvexFunctionProperties1\] Let $R > 0$. If $g : [0,R) \to \mathbb{R}$ is continuously differentiable and convex, then 1. $(1 - \theta)g'(\theta t) \leq \cfrac{g(t) - g(\theta t)}{t} \leq (1 - \theta)g'(t)$ for all $t \in (0,R)$ and $0 \leq \theta \leq 1$. 2. $\cfrac{g(u) - g(\theta u)}{u} \leq \cfrac{g(v) - g(\theta v)}{v}$ for all $u,v \in [0,R),\ u < v$ and $0 \leq \theta \leq 1$. \[lemma:ConvexFunctionProperties2\] Let $I \subset \mathbb{R}$ be an interval and $g : I \to \mathbb{R}$ be convex. Then 1. For any $u_0 \in \textup{int}(I)$, there exists (in $\mathbb{R}$) $$\label{definition:DirectionalDerivative} D^-g(u_0) := \lim_{u \to u_0^-} \frac{g(u_0) - g(u)}{u_0 - u} = \sup_{u < u_0} \frac{g(u_0) - g(u)}{u_0 - u}.$$ 2. If $u, v, w \in I$ and $u \leq v \leq w$, then $$g(v) - g(u) \leq [g(w) - g(u)]\frac{v - u}{w - u}.$$ For the convenience of analysis, we define the majorizing function with respect to Halley’s method (\[iteration:HalleyMethod\]) as follows. \[definition:MajorizingFunction\] Let $F: D \subset X \to Y$ be a twice continuously differentiable nonlinear operator. For a given initial guess $x_0 \in D$, we assume $F'(x_0)$ is nonsingular. 
A twice continuously differentiable function $h: [0,R) \to \mathbb{R}$ is said to be a majorizing function to $F$ at $x_0$ if $F''$ satisfies the majorant condition in ${\bm{\mathrm{B}}}(x_0,R) \subset D$ together with the following initial conditions: $$\label{condition:InitialCondition} \|F'(x_0)^{-1}F(x_0)\| \leq h(0),\ \|F'(x_0)^{-1}F''(x_0)\| \leq h''(0).$$ The following lemma describes some basic properties of the majorizing function $h$. \[lemma:MajorizingFunctionProperties\] Let $R > 0$ and let $h : [0,R) \to \mathbb{R}$ be a twice continuously differentiable function which satisfies assumptions $(A1)-(A3)$. Then 1. $h'$ is strictly convex and strictly increasing on $[0,R)$. 2. $h$ is strictly convex on $[0,R)$, $h(t) > 0$ for $t \in [0,t^*)$ and the equation $h(t) = 0$ has at most one root on $(t^*,R)$. 3. $-1 < h'(t) < 0$ for $t \in (0,t^*)$. \(i) follows from assumption (A2) and $h''(0) > 0$ in (A1). Part (i) implies that $h$ is strictly convex. By assumption (A1), part (i) and $h(t^*) = 0$, we know that the equation $h(t) = 0$ has at most one root on $(t^*,R)$. Since $h(t^*) = 0$, $h(0) > 0$ and $t^*$ is the smallest zero of $h$, one has $h(t) > 0$ for $t \in [0,t^*)$. It remains to show (iii). First, since $h$ is strictly convex, we obtain from Lemma \[lemma:ConvexFunctionProperties1\] that $$h'(t) < \frac{h(t^*) - h(t)}{t^* - t},\ \ \ t \in [0,t^*).$$ This implies $0 = h(t^*) > h(t) + h'(t)(t^* - t)$. In view of $h(t) > 0$ in $[0,t^*)$, we get $h'(t) < 0$. Second, as $h'$ is strictly increasing and $h'(0) = -1$, we have $h'(t) > -1$ for $t \in (0,t^*)$. This completes the proof. Halley’s Method Applied to the Majorizing Function {#section:HalleyMethodAppliedToTheMajorizingFunction} ================================================== Let $$\label{function:BanachSpaceIterativeFunctionHalleyMethod} H_F(x) := x - [{\bm{\mathrm{I}}}- L_F(x)]^{-1}F'(x)^{-1}F(x)$$ be the iterative function of Halley’s method, where $L_F(x)=\frac{1}{2}F'(x)^{-1}F''(x)F'(x)^{-1}F(x)$. 
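To see the iteration at work in the scalar case, the sketch below (an illustrative example of ours; the cubic polynomial and all constants are hypothetical choices, not data from the paper) applies Halley's method (\[iteration:HalleyMethod\]) to $h(t) = t^3/6 + t^2/2 - t + 1/10$, which satisfies assumptions (A1)-(A3): $h(0) = 1/10 > 0$, $h'(0) = -1$, $h''(0) = 1 > 0$, $h''(t) = t + 1$ is convex and strictly increasing, and the smallest zero $t^* \approx 0.1058$ has $h'(t^*) < 0$:

```python
def halley(f, df, d2f, t0, n_steps=6):
    """Iterate t_{k+1} = t_k - [1/(1 - L(t_k))] * f(t_k)/f'(t_k),
    where L(t) = f(t) f''(t) / (2 f'(t)^2), and return all iterates."""
    ts = [t0]
    t = t0
    for _ in range(n_steps):
        L = f(t) * d2f(t) / (2.0 * df(t) ** 2)
        t = t - (1.0 / (1.0 - L)) * f(t) / df(t)
        ts.append(t)
    return ts

# A scalar function satisfying (A1)-(A3); smallest zero t* ~ 0.1058, h'(t*) < 0.
h   = lambda t: t**3 / 6.0 + t**2 / 2.0 - t + 0.1
dh  = lambda t: t**2 / 2.0 + t - 1.0
d2h = lambda t: t + 1.0

ts = halley(h, dh, d2h, 0.0)
```

The iterates increase monotonically toward $t^*$ and the residual collapses to machine precision within a few steps, consistent with the Q-cubic rate; this same scalar recursion reappears below as the majorizing sequence governing Halley's method in Banach spaces.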
Suppose $h$ is the majorizing function to $F$ (see Definition \[definition:MajorizingFunction\]). Then Halley’s method applied to $h$ can be written as $$\label{function:RealSpaceIterativeFunctionHalleyMethod} H_h(t) := t - \frac{1}{1 - L_h(t)}\cdot\frac{h(t)}{h'(t)}, \ \ \ t \in [0,R),$$ where $L_h(t) = h(t)h''(t)/(2h'(t)^2)$. In order to obtain the convergence of the majorizing sequence generated by applying Halley’s method to the majorizing function, we need some useful lemmas. \[lemma:estimate\_Lh(t)\] Let $h : [0,R) \to \mathbb{R}$ be a twice continuously differentiable function and satisfy assumptions $(A1)-(A3)$. Then we have $0 \leq L_h (t) \leq 1/4$ for $t \in [0,t^*]$. Define the function $$\phi(s) = h(t) + h'(t)(s - t) + \frac{1}{2}h''(t)(s - t)^2,\ \ \ s \in [t,t^*].$$ Then, by Lemma \[lemma:MajorizingFunctionProperties\] (ii), $\phi(t) = h(t) > 0$ for $t \in [0,t^*)$. In addition, we have $$\label{eq:Phi(t*)} \phi(t^*) = h(t) + h'(t)(t^* - t) + \frac{1}{2}h''(t)(t^* - t)^2.$$ By using Taylor’s formula, one has that $$\label{eq:h(t*)} h(t^*) = h(t) + h'(t)(t^* - t) + \frac{1}{2}h''(t)(t^* - t)^2 + \int_0^1 (1 - \tau)[h''(t + \tau(t^* - t)) - h''(t)](t^* - t)^2 \, \textup{d}\tau.$$ Since $h(t^*) = 0$ and $h''$ is increasing, it follows from (\[eq:Phi(t\*)\]) and (\[eq:h(t\*)\]) that $ \phi(t^*) \leq 0$. Thus, there exists a real root of $\phi(s)$ in $[t,t^*]$, so the discriminant of $\phi(s)$ is nonnegative, i.e., $h'(t)^2 - 2h''(t)h(t) \geq 0$, which gives $h''(t)h(t)/h'(t)^2 \leq 1/2$. Since $h(t) \geq 0$ and $h''(t) > 0$ on $[0,t^*]$, we conclude that $0 \leq L_h (t) \leq 1/4$ for $t \in [0,t^*]$. The proof is complete. \[lemma:estimate\_Hh(t)\] Let $h : [0,R) \to \mathbb{R}$ be a twice continuously differentiable function and satisfy assumptions $(A1)-(A3)$. Then, for all $t \in [0,t^*)$, $t < H_h(t) < t^*$. Moreover, $h'(t^*) < 0$ if and only if there exists $t \in (t^*,R)$ such that $h(t) < 0$. 
For $t \in [0,t^*)$, since $h(t) > 0$, $-1 < h'(t) < 0$ (from Lemma \[lemma:MajorizingFunctionProperties\]) and $0 \leq L_h (t) \leq 1/4$ (from Lemma \[lemma:estimate\_Lh(t)\]), one has that $t < H_h (t)$. Furthermore, for any $t \in (0,t^*]$, it follows from the definition of the directional derivative (\[definition:DirectionalDerivative\]) and assumption (A2) that $D^- h''(t) > 0$. Thus, we have $$D^- H_h (t) = \frac{h(t)^2[3h''(t)^2 - 2 h'(t)D^- h''(t)]}{[h(t)h''(t) - 2 h'(t)^2]^2} > 0,\ \ t \in (0,t^*].$$ This implies that $H_h(t) < H_h(t^*) = t^*$ for any $t \in (0,t^*)$. Thus the first part of this lemma is shown. For the second part, if $h'(t^*) < 0$, then it is obvious that there exists $t \in (t^*,R)$ such that $h(t) < 0$. Conversely, noting that $h(t^*) = 0$, by Lemma \[lemma:ConvexFunctionProperties1\], we have $h(t) > h(t^*) + h'(t^*)(t - t^*)$ for $t \in (t^*,R)$, which implies $h'(t^*) < 0$. This completes the proof. \[remark:h’(t\*)\] The condition $h'(t^*) < 0$ in (A3) is implied by either of the following properties: 1. $h(t^{**}) = 0$ for some $t^{**} \in (t^*,R)$. 2. $h(t) < 0$ for some $t \in (t^*,R)$. In the usual versions of Kantorovich-type and Smale-type theorems for Halley’s method (e.g., [@HanWang1997; @Han2001]), condition (a) is assumed in order to guarantee Q-cubic convergence. As discussed above, this condition is more restrictive than the condition $h'(t^*) < 0$ in assumption (A3). \[lemma:estimate\_t\*-Hh(t)\] Let $h : [0,R) \to \mathbb{R}$ be a twice continuously differentiable function and satisfy assumptions $(A1)-(A3)$. 
Then $$\label{estimate:t*-Hh(t)} t^* - H_h(t) \leq \left[\frac{1}{3} \frac{h''(t^*)^2}{h'(t^*)^2} + \frac{2}{9} \frac{D^-h''(t^*)}{-h'(t^*)}\right](t^* - t)^3,\ \ \ t \in [0,t^*).$$ By the definition of $H_h$ in (\[function:RealSpaceIterativeFunctionHalleyMethod\]), we may derive the following relation $$\begin{aligned} t^* - H_h(t) &=& \frac{1}{1 - L_h(t)} \left[(1 - L_h(t))(t^* - t) + \frac{h(t)}{h'(t)}\right]\\ &=& - \frac{1}{h'(t)(1 - L_h(t))} \int_0^1 \big[h''(t + \tau(t^* - t)) - h''(t)\big](t^* - t)^2 (1 - \tau) {\mathrm{d}}\tau\\ && + \ \frac{t^* - t}{2(1 - L_h(t))} \frac{h''(t)}{h'(t)^2}\int_0^1 h''(t + \tau(t^* - t))(t^* - t)^2 (1 - \tau) {\mathrm{d}}\tau.\end{aligned}$$ Since $h''$ is convex and $t < t^*$, it follows from Lemma \[lemma:ConvexFunctionProperties2\] (ii) that $$h''(t + \tau(t^* - t)) - h''(t) \leq [h''(t^*) - h''(t)]\frac{\tau(t^* - t)}{t^* - t}.$$ Then, noting that $h''$ is strictly increasing, we have $$t^* - H_h(t) \leq -\frac{h''(t^*) - h''(t)}{6h'(t)(1 - L_h(t))}(t^* - t)^2 + \frac{h''(t^*)h''(t)}{4h'(t)^2(1 - L_h(t))}(t^* - t)^3.$$ In view of the facts that $h'(t) < 0, h''(0) > 0$ and $h', h''$ are strictly increasing on $[0,t^*)$ by Lemma \[lemma:MajorizingFunctionProperties\] and that $0 \leq L_h (t) \leq 1/4$ for $t \in [0,t^*]$ by Lemma \[lemma:estimate\_Lh(t)\], the preceding relation can be further reduced to $$\label{ineq:estimatet*-Hh(t)} t^* - H_h(t) \leq \frac{2}{9}\frac{h''(t^*) - h''(t)}{- h'(t)}(t^* - t)^2 + \frac{1}{3} \frac{h''(t^*)^2}{h'(t^*)^2}(t^* - t)^3.$$ As $h'$ is increasing, $h'(t^*) < 0$ and $h'(t) < 0$ in $[0,t^*)$, we have $$\frac{h''(t^*) - h''(t)}{- h'(t)} \leq \frac{h''(t^*) - h''(t)}{- h'(t^*)} = \frac{1}{- h'(t^*)}\frac{h''(t^*) - h''(t)}{t^* - t}(t^* - t) \leq \frac{D^-h''(t^*)}{- h'(t^*)}(t^* - t),$$ where the last inequality follows from Lemma \[lemma:ConvexFunctionProperties2\] (i). Combining the above inequality with (\[ineq:estimatet\*-Hh(t)\]), we conclude that (\[estimate:t\*-Hh(t)\]) holds. 
This completes the proof. By Definition \[definition:MajorizingFunction\], if $h$ is the majorizing function to $F$ at $x_0$, then the results in Lemma \[lemma:estimate\_Lh(t)\], Lemma \[lemma:estimate\_Hh(t)\] and Lemma \[lemma:estimate\_t\*-Hh(t)\] also hold. Let $\{t_k\}$ denote the majorizing sequence generated by $$\label{majorizingsequence;tk} t_0 = 0,\ t_{k+1} = H_h(t_k) = t_k - \frac{1}{1 - L_h(t_k)}\cdot\frac{h(t_k)}{h'(t_k)},\ \ \ k = 0,1,2,\ldots.$$ Therefore, by using Lemma \[lemma:estimate\_Hh(t)\] and Lemma \[lemma:estimate\_t\*-Hh(t)\], one concludes that \[theorem:ConvergenceRealSpaceHalleyIterqtion\] Let the sequence $\{t_k\}$ be defined by $(\ref{majorizingsequence;tk})$. Then $\{t_k\}$ is well defined, strictly increasing and contained in $[0,t^*)$. Moreover, $\{t_k\}$ satisfies $(\ref{estimate:t*-Hh(t)})$ and converges to $t^*$ Q-cubically. Semilocal Convergence Results for Halley’s Method {#section:SemilocalConvergenceOfHalleyMethod} ================================================= In this section, we study the semilocal convergence of Halley’s method (\[iteration:HalleyMethod\]) in Banach spaces. Assume that $F$ is a twice continuously differentiable nonlinear operator on some open convex domain $D$. For a given initial guess $x_0 \in D$, suppose that $F'(x_0)^{-1}$ exists. The following lemmas, which provide clear relationships between the majorizing function and the nonlinear operator, play key roles in the convergence analysis of Halley’s method (\[iteration:HalleyMethod\]). \[lemma:estimateF’(x)-1F’(x0)\] Suppose that $\|x - x_0\| \leq t < t^*$ and that $h : [0,t^*) \to \mathbb{R}$ is twice continuously differentiable and is the majorizing function to $F$ at $x_0$. Then $F'(x)$ is nonsingular and $$\label{estimate;normF'(x)-1F'(x0)} \|F'(x)^{-1}F'(x_0)\| \leq - \frac{1}{h'(\|x - x_0\|)} \leq - \frac{1}{h'(t)}.$$ In particular, $F'$ is nonsingular in ${\bm{\mathrm{B}}}(x_0,t^*)$. Take $x \in \overline{{\bm{\mathrm{B}}}(x_0,t)}$, $0 \leq t < t^*$. 
Since $$F'(x) = F'(x_0) + \int_0^1 [F''(x_0 + \tau(x - x_0)) - F''(x_0)](x - x_0){\mathrm{d}}\tau + F''(x_0)(x - x_0),$$ by using conditions (\[condition:MajorantCondition\]) and (\[condition:InitialCondition\]), we have $$\begin{aligned} \|F'(x_0)^{-1}F'(x) - {\bm{\mathrm{I}}}\| &\leq& \int_0^1 \|F'(x_0)^{-1}[F''(x_0^{\tau}) - F''(x_0)]\|\|x - x_0\| d\tau + \|F'(x_0)^{-1}F''(x_0)\|\|x - x_0\|\\ &\leq& \int_0^1 \big[h''(\tau\|x - x_0\|) - h''(0)\big]\|x - x_0\| d\tau + h''(0)\|x - x_0\|\\ &=& h'(\|x - x_0\|) - h'(0),\end{aligned}$$ where $x_0^{\tau} = x_0 + \tau(x - x_0)$. Since $h'(0) = - 1$ and $-1 < h'(t) < 0$ for $t \in (0,t^*)$ by Lemma \[lemma:MajorizingFunctionProperties\], we get $$\|F'(x_0)^{-1}F'(x) - {\bm{\mathrm{I}}}\| \leq h'(t) - h'(0) < 1.$$ It follows from the Banach lemma that $F'(x_0)^{-1}F'(x)$ is nonsingular and that (\[estimate;normF’(x)-1F’(x0)\]) holds. The proof is complete. \[lemma:estimateF’(x0)-1F”(x)\] Suppose that $\|x - x_0\| \leq t < t^*$ and that $h : [0,t^*) \to \mathbb{R}$ is twice continuously differentiable and is the majorizing function to $F$ at $x_0$. Then $\|F'(x_0)^{-1}F''(x)\| \leq h''(\|x - x_0\|) \leq h''(t)$. By using (\[condition:MajorantCondition\]), we have $$\begin{aligned} \|F'(x_0)^{-1}F''(x)\| &\leq& \|F'(x_0)^{-1}[F''(x) - F''(x_0)]\| + \|F'(x_0)^{-1}F''(x_0)\|\\ &\leq& h''(\|x - x_0\|) - h''(0) + h''(0) = h''(\|x - x_0\|).\end{aligned}$$ Since $h''$ is strictly increasing, we get $h''(\|x - x_0\|) \leq h''(t)$. The proof is complete. \[lemma:ConvergenceAuxiliaryResults\] Suppose that $h : [0,t^*) \to \mathbb{R}$ is twice continuously differentiable. Let $\{x_k\}$ be generated by Halley’s method $(\ref{iteration:HalleyMethod})$ and let $\{t_k\}$ be generated by $(\ref{majorizingsequence;tk})$. If $h$ is the majorizing function to $F$ at $x_0$, then, for all $k = 0,1,2,\ldots$, we have 1. $F'(x_k)^{-1}$ exists and $\|F'(x_k)^{-1}F'(x_0)\| \leq - 1/h'(\|x_k - x_0\|) \leq - 1/h'(t_k)$. 2. $\|F'(x_0)^{-1}F''(x_k)\| \leq h''(t_k)$. 3. 
$\|F'(x_0)^{-1}F(x_k)\| \leq h(t_k)$. 4. $[{\bm{\mathrm{I}}}- L_F(x_k)]^{-1}$ exists and $\|[{\bm{\mathrm{I}}}- L_F(x_k)]^{-1}\| \leq 1 /(1 - L_h(t_k))$. 5. $\|x_{k+1} - x_k\| \leq t_{k+1} - t_k$. (i)-(v) are obvious for the case $k = 0$. Now we assume that they hold for some $n \in \mathbb{N}$. By the inductive hypothesis (v) and Theorem \[theorem:ConvergenceRealSpaceHalleyIterqtion\], we have $\|x_{n+1} - x_0\| \leq t_{n+1} < t^*$. It follows from Lemma \[lemma:estimateF’(x)-1F’(x0)\] and Lemma \[lemma:estimateF’(x0)-1F”(x)\] that (i) and (ii) hold for $k = n+1$, respectively. As for (iii), we can derive the following relation from [@HanWang1997]: $$\begin{aligned} F(x_{n+1}) &=& \frac{1}{2}F''(x_n)L_F(x_n)(x_{n+1} - x_n)^2 + \int_0^1 (1 - \tau)[F''(x_n^{\tau}) - F''(x_n)](x_{n+1} - x_n)^2 {\mathrm{d}}\tau,\end{aligned}$$ where $x_n^{\tau} = x_n + \tau(x_{n+1} - x_n)$. Applying (\[condition:MajorantCondition\]) and the inductive hypotheses (i)-(ii) and (iv)-(v), we can obtain $$\begin{aligned} \lefteqn{\|F'(x_0)^{-1}F(x_{n+1})\| \leq \frac{1}{2}\|F'(x_0)^{-1}F''(x_n)\|\|L_F(x_n)\|\|x_{n+1} - x_n\|^2}\\ && + \int_0^1 [h''(\tau\|x_{n+1} - x_n\| + \|x_n - x_0\|) - h''(\|x_n - x_0\|)]\|x_{n+1} - x_n\|^2(1 - \tau) {\mathrm{d}}\tau\\ &\leq& \frac{1}{4}h''(t_n)\frac{h(t_n)h''(t_n)}{h'(t_n)^2}(t_{n+1} - t_n)^2 + \int_0^1 \big[h''(\tau(t_{n+1} - t_n) + t_n) - h''(t_n)\big](t_{n+1} - t_n)^2(1 - \tau) {\mathrm{d}}\tau\\ &=& h(t_{n+1}).\end{aligned}$$ This means (iii) holds for $k = n +1$. By Lemma \[lemma:estimate\_Lh(t)\] and the inductive hypotheses (i)-(iii), we get (iv) for $k = n +1$. Finally, for (v), we have $$\begin{aligned} \label{estimate:normxn+2-xn+1} \|x_{n+2} - x_{n+1}\| &\leq& \|[{\bm{\mathrm{I}}}- L_F(x_{n+1})]^{-1}\| \|F'(x_{n+1})^{-1}F'(x_0)\| \|F'(x_0)^{-1}F(x_{n+1})\|\nonumber\\ &\leq& - \frac{1}{1 - L_h(t_{n+1})}\frac{h(t_{n+1})}{h'(t_{n+1})} = t_{n+2} - t_{n+1}.\end{aligned}$$ Therefore, the statements hold for all $k = 0,1,2,\ldots$. 
This completes the proof. We are now ready to prove the semilocal convergence results (convergence, convergence rate and uniqueness) for Halley’s method (\[iteration:HalleyMethod\]). \[theorem:SemilocalConvergenceHalleyMethodMajorantCondition\] Let $F:D\subset X \to Y$ be a twice continuously differentiable nonlinear operator, $D$ open and convex. Assume that there exists a starting point $x_0\in D$ such that $F'(x_0)^{-1}$ exists, and that $h$ is the majorizing function to $F$ at $x_0$, i.e., $(\ref{condition:MajorantCondition})$ and $(\ref{condition:InitialCondition})$ hold and $h$ satisfies assumptions $(A1)-(A3)$. Then the sequence $\{x_k\}$ generated by Halley’s method $(\ref{iteration:HalleyMethod})$ for solving equation $(\ref{eq:NonlinearOperatorEquation})$ with starting point $x_0$ is well defined, is contained in ${\bm{\mathrm{B}}}(x_0,t^*)$ and converges to a point $x^* \in \overline{{\bm{\mathrm{B}}}(x_0,t^*)}$ which is a solution of equation $(\ref{eq:NonlinearOperatorEquation})$. By Lemma \[lemma:ConvergenceAuxiliaryResults\], we conclude that the sequence $\{x_k\}$ is well defined. By Lemma \[lemma:ConvergenceAuxiliaryResults\] (v) and Theorem \[theorem:ConvergenceRealSpaceHalleyIterqtion\], we have $\|x_k - x_0\| \leq t_k < t^*$ for any $k \in \mathbb{N}$, which means that $\{x_k\}$ is contained in ${\bm{\mathrm{B}}}(x_0,t^*)$. It follows from (\[estimate:normxn+2-xn+1\]) and Theorem \[theorem:ConvergenceRealSpaceHalleyIterqtion\] that $$\sum_{k = N}^\infty \|x_{k+1} - x_k\| \leq \sum_{k = N}^\infty (t_{k+1} - t_k) = t^* - t_N < + \infty,$$ for any $N \in \mathbb{N}$. Hence $\{x_k\}$ is a Cauchy sequence in ${\bm{\mathrm{B}}}(x_0,t^*)$ and so converges to some $x^* \in \overline{{\bm{\mathrm{B}}}(x_0,t^*)}$. The above inequality also implies that $\|x^* - x_k\| \leq t^* - t_k$ for any $k \in \mathbb{N}$. It remains to prove that $F(x^*) = 0$. It follows from the proof of Lemma \[lemma:estimateF’(x)-1F’(x0)\] that $\{\|F'(x_k)\|\}$ is bounded. 
By Lemma \[lemma:ConvergenceAuxiliaryResults\], we have $$\|F(x_k)\| \leq \|F'(x_k)\|\|F'(x_k)^{-1}F(x_k)\| \leq \|F'(x_k)\|(1 - L_h(t_k))(t_{k+1} - t_k).$$ Letting $k \to \infty$ and noting that $L_h(t_k)$ is bounded (by Lemma \[lemma:estimate\_Lh(t)\]) and that $\{t_k\}$ is convergent, we have $\lim\limits_{k \to \infty} F(x_k) = 0$. Since $F$ is continuous in $\overline{{\bm{\mathrm{B}}}(x_0,t^*)}$, $\{x_k\} \subset {\bm{\mathrm{B}}}(x_0,t^*)$ and $\{x_k\}$ converges to $x^*$, we also have $\lim\limits_{k \to \infty}F(x_k) = F(x^*)$. This completes the proof. \[theorem:ConvergenceRate\] Under the assumptions of Theorem $\ref{theorem:SemilocalConvergenceHalleyMethodMajorantCondition}$, we have the following error bound: $$\label{estimate:ErrorEstimateHalleyMethod} \|x^* - x_{k+1}\| \leq (t^* - t_{k+1})\left(\frac{\|x^* - x_k\|}{t^* - t_k}\right)^3,\ \ \ k = 0,1,\ldots.$$ Thus, the sequence $\{x_k\}$ generated by Halley’s method $(\ref{iteration:HalleyMethod})$ converges Q-cubically as follows: $$\label{estimate:ErrorEstimateExactHalleyMethod} \|x^* - x_{k+1}\| \leq \left[\frac{1}{3} \frac{h''(t^*)^2}{h'(t^*)^2} + \frac{2}{9} \frac{D^-h''(t^*)}{-h'(t^*)}\right]\|x^* - x_k\|^3,\ \ \ k = 0,1,\ldots.$$ Set $\Gamma_F(x) = [{\bm{\mathrm{I}}}- L_F(x)]^{-1}$. Applying standard analytical techniques, one has that $$\begin{aligned} x^* - x_{k+1} &=& - \Gamma_F(x_k)F'(x_k)^{-1}[- F'(x_k)(x^* - x_k) - F(x_k)] - \Gamma_F(x_k)L_F(x_k)(x^* - x_k)\\ &=& - \Gamma_F(x_k)F'(x_k)^{-1} \int_0^1 (1 - \tau)[F''(x_k^{\tau}) - F''(x_k)](x^* - x_k)^2 {\mathrm{d}}\tau \\ && + \frac{1}{2}\Gamma_F(x_k)F'(x_k)^{-1}F''(x_k)\left[F'(x_k)^{-1} \int_0^1 (1 - \tau)F''(x_k^{\tau})(x^* - x_k)^2 {\mathrm{d}}\tau\right](x^* - x_k),\end{aligned}$$ where $x_k^{\tau} = x_k + \tau(x^* - x_k)$. 
Using (\[condition:MajorantCondition\]), one has that $$\int_0^1 \|F'(x_0)^{-1}[F''(x_k^{\tau}) - F''(x_k)]\|(1 - \tau) {\mathrm{d}}\tau \leq \int_0^1 [h''(\tau\|x^* - x_k\| + \|x_k - x_0\|) - h''(\|x_k - x_0\|)](1 - \tau) {\mathrm{d}}\tau.$$ Then, we use Lemma \[lemma:ConvexFunctionProperties2\] to obtain $$\begin{aligned} h''(\tau\|x^* - x_k\| + \|x_k - x_0\|) - h''(\|x_k - x_0\|) &\leq& h''(\tau\|x^* - x_k\| + t_k) - h''(t_k)\\ &\leq& [h''(\tau(t^* - t_k) + t_k) - h''(t_k)]\frac{\|x^* - x_k\|}{t^* - t_k}.\end{aligned}$$ Combining this with Lemma \[lemma:estimateF’(x0)-1F”(x)\] and Lemma \[lemma:ConvergenceAuxiliaryResults\], we obtain $$\begin{aligned} \|x^* - x_{k+1}\| &\leq& - \frac{1}{(1 - L_h(t_k))h'(t_k)} \left[\int_0^1 [h''(\tau(t^* - t_k) + t_k) - h''(t_k)](1 - \tau) {\mathrm{d}}\tau\right] \frac{\|x^* - x_k\|^3}{t^* - t_k}\\ && + \frac{1}{2}\frac{h''(t_k)}{(1 - L_h(t_k))h'(t_k)^2} \left[\int_0^1 h''(\tau(t^* - t_k) + t_k)(1 - \tau) {\mathrm{d}}\tau\right] \|x^* - x_k\|^3\\ &=& - \frac{1}{(1 - L_h(t_k))h'(t_k)}\left(\frac{\|x^* - x_k\|}{t^* - t_k}\right)^3 h(t_k)\frac{t^* - t_{k+1}}{t_{k+1} - t_k} = (t^* - t_{k+1})\left(\frac{\|x^* - x_k\|}{t^* - t_k}\right)^3.\end{aligned}$$ This shows that (\[estimate:ErrorEstimateHalleyMethod\]) holds for all $k \in \mathbb{N}$. Then (\[estimate:ErrorEstimateExactHalleyMethod\]) follows from Lemma \[lemma:estimate\_t\*-Hh(t)\]. The proof is complete. \[theorem:UniquenessSolution\] Under the assumptions of Theorem $\ref{theorem:SemilocalConvergenceHalleyMethodMajorantCondition}$, the limit $x^*$ of the sequence $\{x_k\}$ is the unique zero of equation $(\ref{eq:NonlinearOperatorEquation})$ in ${\bm{\mathrm{B}}}(x_0,\rho)$, where $\rho$ is defined as $\rho := \sup\{t \in [t^*,R): h(t) \leq 0\}$. We first show that the solution $x^*$ of (\[eq:NonlinearOperatorEquation\]) is unique in $\overline{{\bm{\mathrm{B}}}(x_0,t^*)}$. Assume that there exists another solution $x^{**}$ in $\overline{{\bm{\mathrm{B}}}(x_0,t^*)}$.
Then $\|x^{**} - x_0\| \leq t^*$. Now we prove by induction that $$\label{estimate:normx**-xk} \|x^{**} - x_k\| \leq t^* - t_k,\ \ \ k = 0,1,2,\ldots.$$ It is clear that the case $k = 0$ holds because of $t_0 = 0$. Assume that the above inequality holds for $k = n$ for some $n \in \mathbb{N}$. By the same argument as in Theorem \[theorem:ConvergenceRate\], applied to the solution $x^{**}$, we have $$\|x^{**} - x_{n+1}\| \leq (t^* - t_{n+1})\left(\frac{\|x^{**} - x_n\|}{t^* - t_n}\right)^3.$$ Then, by applying the inductive hypothesis (\[estimate:normx\*\*-xk\]) to the above inequality, one has that (\[estimate:normx\*\*-xk\]) also holds for $k = n+1$. Since $\{x_k\}$ converges to $x^*$ and $\{t_k\}$ converges to $t^*$, from (\[estimate:normx\*\*-xk\]) we conclude $x^{**} = x^*$. Therefore, $x^*$ is the unique zero of (\[eq:NonlinearOperatorEquation\]) in $\overline{{\bm{\mathrm{B}}}(x_0,t^*)}$. It remains to prove that $F$ does not have zeros in ${\bm{\mathrm{B}}}(x_0,\rho)\backslash \overline{{\bm{\mathrm{B}}}(x_0,t^*)}$. To prove this by contradiction, assume that $F$ does have a zero there, i.e., there exists $x^{**} \in D \subset X$ such that $t^* < \|x^{**} - x_0\| < \rho$ and $F(x^{**}) = 0$. We will show that this assumption leads to a contradiction. Firstly, we have the following observation: $$\label{estimate:F(x**)} F(x^{**}) = F(x_0) + F'(x_0)(x^{**} - x_0) + \frac{1}{2}F''(x_0)(x^{**} - x_0)^2 + \int_0^1 (1 - \tau)[F''(x_0^{\tau}) - F''(x_0)](x^{**} - x_0)^2 {\mathrm{d}}\tau,$$ where $x_0^{\tau} = x_0 + \tau(x^{**} - x_0)$. Secondly, we use (\[condition:MajorantCondition\]) to yield $$\begin{aligned} \lefteqn{\left\|\int_0^1 (1 - \tau)F'(x_0)^{-1}[F''(x_0^{\tau}) - F''(x_0)](x^{**} - x_0)^2 {\mathrm{d}}\tau\right\|}\nonumber\\ &\leq& \int_0^1 [h''(\tau\|x^{**} - x_0\|) - h''(0)]\|x^{**} - x_0\|^2 (1 - \tau) {\mathrm{d}}\tau \nonumber\\ &=& h(\|x^{**} - x_0\|) - h(0) - h'(0)\|x^{**} - x_0\| - \frac{1}{2}h''(0) \|x^{**} - x_0\|^2.
\label{estimate:F(x**)2}\end{aligned}$$ Thirdly, by applying (\[condition:InitialCondition\]), one has that $$\begin{aligned} \lefteqn{\left\|F'(x_0)^{-1}[F(x_0) + F'(x_0)(x^{**} - x_0) + \frac{1}{2}F''(x_0)(x^{**} - x_0)^2]\right\|}\nonumber\\ &\geq& \|x^{**} - x_0\| - \|F'(x_0)^{-1}F(x_0)\| - \frac{1}{2} \|F'(x_0)^{-1}F''(x_0)\|\|x^{**} - x_0\|^2 \nonumber\\ &\geq& \|x^{**} - x_0\| - h(0) - \frac{1}{2}h''(0) \|x^{**} - x_0\|^2. \label{estimate:F(x**)1}\end{aligned}$$ In view of $F(x^{**}) = 0$ and $h'(0) = - 1$, combining (\[estimate:F(x\*\*)1\]) and (\[estimate:F(x\*\*)2\]), we obtain from (\[estimate:F(x\*\*)\]) that $$h(\|x^{**} - x_0\|) - h(0) + \|x^{**} - x_0\| - \frac{1}{2}h''(0) \|x^{**} - x_0\|^2 \geq \|x^{**} - x_0\| - h(0) - \frac{1}{2}h''(0) \|x^{**} - x_0\|^2,$$ which is equivalent to $h(\|x^{**} - x_0\|) \geq 0$. Note that $h$ is strictly convex by Lemma \[lemma:MajorizingFunctionProperties\]. Hence $h$ is strictly positive in the interval $(\|x^{**} - x_0\|,R)$. So, we get $\rho \leq \|x^{**} - x_0\|$, which contradicts the assumption $\|x^{**} - x_0\| < \rho$. Therefore, $F$ does not have zeros in ${\bm{\mathrm{B}}}(x_0,\rho)\backslash \overline{{\bm{\mathrm{B}}}(x_0,t^*)}$ and $x^*$ is the unique zero of equation (\[eq:NonlinearOperatorEquation\]) in ${\bm{\mathrm{B}}}(x_0,\rho)$. The proof is complete.

Special Cases {#section:SpecialCases}
=============

In this section we present two special cases of the convergence results obtained in Section \[section:SemilocalConvergenceOfHalleyMethod\]: namely, convergence results under an affine covariant Lipschitz condition and under the $\gamma$-condition.
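Before turning to the special cases, the Q-cubic rate in Theorem \[theorem:ConvergenceRate\] is easy to observe numerically. The sketch below (our scalar illustration, outside the Banach-space setting above) applies Halley’s method in its scalar closed form, $x_{k+1} = x_k - 2F(x_k)F'(x_k)/[2F'(x_k)^2 - F(x_k)F''(x_k)]$, to $F(x) = x^3 - 2$; the recorded errors shrink faster than quadratically from one step to the next.

```python
def halley(F, dF, d2F, x0, tol=1e-14, max_iter=20):
    """Scalar Halley iteration: x_{k+1} = x_k - 2 F F' / (2 F'^2 - F F'')."""
    x, history = x0, [x0]
    for _ in range(max_iter):
        f, fp, fpp = F(x), dF(x), d2F(x)
        step = 2.0 * f * fp / (2.0 * fp * fp - f * fpp)
        x -= step
        history.append(x)
        if abs(step) < tol:
            break
    return x, history

# F(x) = x**3 - 2 with simple root x* = 2**(1/3)
root, hist = halley(lambda x: x**3 - 2.0,
                    lambda x: 3.0 * x**2,
                    lambda x: 6.0 * x,
                    x0=1.0)
errors = [abs(h - 2.0 ** (1.0 / 3.0)) for h in hist]
```

Starting from $x_0 = 1$, the error drops from about $2.6 \times 10^{-1}$ to $10^{-2}$ to below $10^{-6}$ in two steps, consistent with cubic order.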
Convergence results under the affine covariant Lipschitz condition
------------------------------------------------------------------

In [@HanWang1997], by using the majorizing technique, Han and Wang studied the semilocal convergence of Halley’s method (\[iteration:HalleyMethod\]) under the affine covariant Lipschitz condition: $$\label{condition:LipschitzCondition} \|F'(x_0)^{-1}[F''(y) - F''(x)]\| \leq L\|y - x\|,\ \ \ x,y \in D.$$ The majorizing function employed in [@HanWang1997] is $$\label{majorizingfunction:cubicfunction} f(t) = \beta - t + \frac{\eta}{2} t^2 + \frac{L}{6} t^3.$$ If we choose this cubic polynomial as the majorizing function $h$ in (\[condition:MajorantCondition\]), then we can see that the majorant condition (\[condition:MajorantCondition\]) reduces to the Lipschitz condition (\[condition:LipschitzCondition\]) and that assumptions (A1) and (A2) are satisfied for $f$. Moreover, if the following Kantorovich-type convergence criterion holds $$\label{criterion:LipschitzConditionConvergenceCriterion} \beta < b := \frac{2(\eta + 2\sqrt{\eta^2 + 2L})}{3(\eta + \sqrt{\eta^2 + 2L})^2},$$ then assumption (A3) is satisfied for $f$. Thus, the concrete forms of Theorem \[theorem:SemilocalConvergenceHalleyMethodMajorantCondition\], Theorem \[theorem:ConvergenceRate\] and Theorem \[theorem:UniquenessSolution\] are given as follows. \[theorem:SemilocalConvergenceHalleyMethodLipschitzCondition\] Let $F:D\subset X \to Y$ be a twice continuously differentiable nonlinear operator, $D$ open and convex. Assume that there exists a starting point $x_0\in D$ such that $F'(x_0)^{-1}$ exists, that $F$ satisfies the affine covariant Lipschitz condition $(\ref{condition:LipschitzCondition})$, and that $\|F'(x_0)^{-1}F(x_0)\| \leq \beta$ and $\|F'(x_0)^{-1}F''(x_0)\| \leq \eta$.
If $(\ref{criterion:LipschitzConditionConvergenceCriterion})$ holds, then the sequence $\{x_k\}$ generated by Halley’s method $(\ref{iteration:HalleyMethod})$ for solving equation $(\ref{eq:NonlinearOperatorEquation})$ with starting point $x_0$ is well defined, is contained in ${\bm{\mathrm{B}}}(x_0,t^*)$ and converges to a point $x^* \in \overline{{\bm{\mathrm{B}}}(x_0,t^*)}$ which is the solution of equation $(\ref{eq:NonlinearOperatorEquation})$, where $t^*$ is the smallest positive root of $f$ $(defined \ by \ (\ref{majorizingfunction:cubicfunction}))$ in $[0,r_1]$, with $r_1 = 2/(\eta + \sqrt{\eta^2 + 2L})$ the positive root of $f'$. The limit $x^*$ of the sequence $\{x_k\}$ is the unique zero of equation $(\ref{eq:NonlinearOperatorEquation})$ in ${\bm{\mathrm{B}}}(x_0,t^{**})$, where $t^{**}$ is the root of $f$ in the interval $[r_1,+\infty)$. Moreover, the following error bound holds: $$\|x^* - x_{k+1}\| \leq (t^* - t_{k+1})\left(\frac{\|x^* - x_k\|}{t^* - t_k}\right)^3,\ \ \ k = 0,1,\ldots.$$ Furthermore, the sequence $\{x_k\}$ converges Q-cubically as follows: $$\|x^* - x_{k+1}\| \leq \frac{3(\eta + Lt^*)^2 + 2L(1 - \eta t^* - Lt^{*2}/2)}{9(1 - \eta t^* - Lt^{*2}/2)^2}\|x^* - x_k\|^3,\ \ \ k = 0,1,\ldots.$$

Convergence results under the $\gamma$-condition
------------------------------------------------

The notion of the $\gamma$-condition (see Definition \[definition:GammaCondition\]) for operators in Banach spaces was introduced in [@WangHan1997] by Wang and Han to study Smale’s point estimate theory. In this subsection, we will give the semilocal convergence results for Halley’s method (\[iteration:HalleyMethod\]) under the $\gamma$-condition. As we will discuss, these convergence results can be applied to Smale’s condition (see [@Smale1986] for more details about Smale’s condition).
Smale [@Smale1986] studied the convergence and error estimation of Newton’s method (\[iteration:NewtonMethod\]) under the hypotheses that $F$ is analytic and satisfies $$\label{condition:SmaleCondition} \left\|F'(x_0)^{-1}F^{(n)}(x_0)\right\| \leq n!\gamma^{n-1},\ \ \ n \geq 2,$$ where $x_0$ is a given point in $D$ and $\gamma$ is defined by $$\label{constant:GammaSmale} \gamma := \sup_{k > 1}\left\|\frac{F'(x_0)^{-1}F^{(k)}(x_0)} {k!}\right\|^{\frac{1}{k-1}}.$$ Wang and Han [@WangHan1990] substantially improved Smale’s results by introducing a majorizing function $$\label{majorizingfunction:GammaCondition} f(t) = \beta - t + \frac{\gamma t^2}{1 - \gamma t},\ \ \ \gamma > 0, \ 0 \leq t < \frac{1}{\gamma}.$$ If we choose this function as the majorizing function $h$, then we can see that the majorant condition (\[condition:MajorantCondition\]) reduces to the following condition: $$\label{condition:SmaleMajorizingCondition} \|F'(x_0)^{-1}[F''(y) - F''(x)]\| \leq \frac{2\gamma}{(1 - \gamma\|y - x\| - \gamma\|x - x_0\|)^3} - \frac{2\gamma}{(1 - \gamma\|x - x_0\|)^3},\ \ \gamma > 0,$$ where $\|y - x\| + \|x - x_0\| < 1/\gamma$, and that assumptions (A1) and (A2) are satisfied for $f$. Moreover, if $\alpha := \beta\gamma < 3 - 2\sqrt{2}$, then assumption (A3) is satisfied for $f$. Thus, the concrete forms of Theorem \[theorem:SemilocalConvergenceHalleyMethodMajorantCondition\], Theorem \[theorem:ConvergenceRate\] and Theorem \[theorem:UniquenessSolution\] are given as follows. \[theorem:SemilocalConvergenceHalleyMethodSmaleMajorizingCondition\] Let $F:D\subset X \to Y$ be a twice continuously differentiable nonlinear operator, $D$ open and convex. Assume that there exists a starting point $x_0\in D$ such that $F'(x_0)^{-1}$ exists, that $F$ satisfies condition $(\ref{condition:SmaleMajorizingCondition})$, and that $\|F'(x_0)^{-1}F(x_0)\| \leq \beta$ and $\|F'(x_0)^{-1}F''(x_0)\| \leq 2\gamma$.
If $\alpha := \beta\gamma < 3 - 2\sqrt{2}$, then the sequence $\{x_k\}$ generated by Halley’s method $(\ref{iteration:HalleyMethod})$ for solving equation $(\ref{eq:NonlinearOperatorEquation})$ with starting point $x_0$ is well defined, is contained in ${\bm{\mathrm{B}}}(x_0,t^*)$ and converges to a point $x^* \in \overline{{\bm{\mathrm{B}}}(x_0,t^*)}$ which is the solution of equation $(\ref{eq:NonlinearOperatorEquation})$. The limit $x^*$ of the sequence $\{x_k\}$ is the unique zero of equation $(\ref{eq:NonlinearOperatorEquation})$ in ${\bm{\mathrm{B}}}(x_0,t^{**})$, where $t^*$ and $t^{**}$ are given as $$\label{root:SmaleMajorizingFunctionfRoot} t^* = \frac{1 + \alpha - \sqrt{(1 + \alpha)^2 - 8\alpha}}{4\gamma} \ \ \textup{and}\ \ t^{**} = \frac{1 + \alpha + \sqrt{(1 + \alpha)^2 - 8\alpha}}{4\gamma},$$ respectively. Moreover, the following error bound holds: $$\label{error:SmaleTypeSemilocalConvergenceErrorBound} \|x^* - x_{k+1}\| \leq (t^* - t_{k+1})\left(\frac{\|x^* - x_k\|}{t^* - t_k}\right)^3,\ \ \ k = 0,1,\ldots.$$ Furthermore, the sequence $\{x_k\}$ converges Q-cubically as follows: $$\label{rate:SmaleTypeSemilocalConvergenceRate} \|x^* - x_{k+1}\| \leq \frac{8\gamma^2}{3[2(1 - \gamma t^*)^2 - 1]^2} \|x^* - x_k\|^3,\ \ \ k = 0,1,\ldots.$$ The next result gives a condition that is easier to check than condition (\[condition:MajorantCondition\]), provided that the majorizing function $h$ is thrice continuously differentiable. \[lemma:RelationMajorantConditionAndGammaCondition2Order\] Let $F:D\subset X \to Y$ be a thrice continuously differentiable nonlinear operator, $D$ open and convex. Let $h : [0,R) \to \mathbb{R}$ be a thrice continuously differentiable function with convex $h''$. Then $F$ satisfies condition $(\ref{condition:MajorantCondition})$ if and only if $$\label{condition:F'(x0)-1F'''(x)} \|F'(x_0)^{-1}F'''(x)\| \leq h'''(\|x - x_0\|),$$ for all $x \in D$ with $\|x - x_0\| < R$.
If $F$ satisfies (\[condition:MajorantCondition\]), then (\[condition:F’(x0)-1F”’(x)\]) holds trivially. Conversely, if $F$ satisfies (\[condition:F’(x0)-1F”’(x)\]), then we have $$\begin{aligned} \|F'(x_0)^{-1}[F''(y) - F''(x)]\| &\leq& \int_0^1 \|F'(x_0)^{-1}F'''(x + \tau (y - x))\| \|y - x\| {\mathrm{d}}\tau\\ &\leq& \int_0^1 h'''(\|x - x_0\| + \tau\|y - x\|)\|y - x\| {\mathrm{d}}\tau\\ &=& h''(\|y - x\| + \|x - x_0\|) - h''(\|x - x_0\|),\end{aligned}$$ which implies that $F$ satisfies (\[condition:MajorantCondition\]). The proof is complete. If the majorizing function $h$ is defined by (\[majorizingfunction:GammaCondition\]), then (\[condition:F’(x0)-1F”’(x)\]) becomes $$\label{condition:GammaCondition2Order} \|F'(x_0)^{-1}F'''(x)\| \leq \frac{6\gamma^2}{(1 - \gamma\|x - x_0\|)^4},$$ which means that $F$ satisfies the $\gamma$-condition with 2-order (see Definition \[definition:GammaCondition\]) in ${\bm{\mathrm{B}}}(x_0,R)$. By [@WangHan1997], if $F$ satisfies the $\gamma$-condition with 2-order, then $F$ satisfies the $\gamma$-condition (with 1-order). One typical and important class of examples satisfying the $\gamma$-condition with 2-order (\[condition:GammaCondition2Order\]) is that of analytic operators. The following lemma shows that an analytic operator satisfies the $\gamma$-condition with 2-order. \[lemma:RelationGammaConditionAndAnalyticOperator\] Let $F : D \to Y$ be an analytic nonlinear operator. Suppose that $x_0 \in D$ is a given point, $F'(x_0)$ is invertible and that ${\bm{\mathrm{B}}}(x_0,1/\gamma) \subset D$. Then $F$ satisfies the $\gamma$-condition with 2-order $(\ref{condition:GammaCondition2Order})$ in ${\bm{\mathrm{B}}}(x_0,1/\gamma)$, where $\gamma$ is defined by $(\ref{constant:GammaSmale})$.
For any $x \in {\bm{\mathrm{B}}}(x_0,1/\gamma)$, since $F$ is an analytic operator, we have $$F'(x_0)^{-1}F'''(x) = \sum_{n=0}^\infty \frac{1}{n!} F'(x_0)^{-1}F^{(n + 3)}(x_0)(x - x_0)^n.$$ This together with (\[constant:GammaSmale\]) directly leads to $$\|F'(x_0)^{-1}F'''(x)\| \leq \gamma^2 \sum_{n=0}^\infty (n+3)(n+2)(n+1)(\gamma\|x - x_0\|)^n.$$ Noting that $\gamma\|x - x_0\| < 1$ for $x \in {\bm{\mathrm{B}}}(x_0,1/\gamma)$, we have $$\sum_{n=0}^\infty (n+3)(n+2)(n+1)(\gamma\|x - x_0\|)^n = \frac{6}{(1-\gamma\|x - x_0\|)^4},$$ which completes the proof. From Lemma \[lemma:RelationMajorantConditionAndGammaCondition2Order\] and Lemma \[lemma:RelationGammaConditionAndAnalyticOperator\], we conclude that the semilocal convergence results obtained in Theorem \[theorem:SemilocalConvergenceHalleyMethodSmaleMajorizingCondition\] also hold when $F$ is an analytic operator. \[theorem:SemilocalConvergenceForAnalyticOperator\] Let $F: D \to Y$ be an analytic operator, $D$ open and convex. Assume that there exists $x_0 \in D$ such that $F'(x_0)$ is nonsingular. If $\|F'(x_0)^{-1}F(x_0)\| \leq \beta$ and $\alpha := \beta\gamma < 3 - 2\sqrt{2}$, where $\gamma$ is given by $(\ref{constant:GammaSmale})$, then the sequence $\{x_k\}$ generated by Halley’s method $(\ref{iteration:HalleyMethod})$ for solving equation $(\ref{eq:NonlinearOperatorEquation})$ with starting point $x_0$ is well defined, is contained in ${\bm{\mathrm{B}}}(x_0,t^*)$ and converges to a point $x^* \in \overline{{\bm{\mathrm{B}}}(x_0,t^*)}$ which is the solution of equation $(\ref{eq:NonlinearOperatorEquation})$. The limit $x^*$ of $\{x_k\}$ is the unique zero of equation $(\ref{eq:NonlinearOperatorEquation})$ in ${\bm{\mathrm{B}}}(x_0,t^{**})$, where $t^*$ and $t^{**}$ are given in $(\ref{root:SmaleMajorizingFunctionfRoot})$.
Moreover, the error estimate and the convergence rate for $\{x_k\}$ are characterized by $(\ref{error:SmaleTypeSemilocalConvergenceErrorBound})$ and $(\ref{rate:SmaleTypeSemilocalConvergenceRate})$, respectively.

Remarks and Numerical Example
=============================

All the well-known one-point iterative methods with third order of convergence are given by the following unified form (see [@Hernandez2005; @Hernandez2009] for more details): $$\label{iteration:IterativeFamilyMethods} \left\{ \begin{array}{l} x_{n + 1} = x_n - H(L_F(x_n)) F'(x_n)^{-1} F(x_n),\\ H(L_F(x_n)) = {\bm{\mathrm{I}}}+ \frac{1}{2} L_F(x_n) + \sum_{k \geq 2} a_k L_F(x_n)^k,\\ L_F(x_n) = F'(x_n)^{-1} F''(x_n)F'(x_n)^{-1}F(x_n), \ \ \ n \in \mathbb{N}, \end{array} \right.$$ where $\{a_k\}_{k \geq 2}$ is a nonnegative and nonincreasing real sequence such that $$\sum_{k = 0}^\infty a_k t^k < + \infty, \ \ t \in [- \frac{1}{2}, \frac{1}{2}] \ \ \ \text{with} \ \ a_0 = 1, a_1 = \frac{1}{2}.$$ Thus, if $L_F(x_n)$ exists and $\|L_F(x_n)\| \leq 1/2$, then (\[iteration:IterativeFamilyMethods\]) is well defined. In particular, when $a_k = 1/2^k$ for any $k \geq 0$, (\[iteration:IterativeFamilyMethods\]) reduces to Halley’s method (\[iteration:HalleyMethod\]). Hernández and Romero in [@Hernandez2009] studied the semilocal convergence of (\[iteration:IterativeFamilyMethods\]) under the following condition: $$\label{condition:MajorantConditionLike} \|F''(x) - F''(y)\| \leq |p''(u) - p''(v)|, \ \ x, y \in D, u, v \in [a, s] \ \text{such that} \ \|x - y\| \leq |u - v|,$$ where $p$ is a sufficiently differentiable nonincreasing and convex real function in an interval $[a, b]$ such that $p(a) > 0 > p(b)$ and $p'''(t) \geq 0$ in $[a, s]$, and $s$ is the unique simple solution of $p(t) = 0$ in $[a, b]$.
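In the scalar case one can check directly that the choice $a_k = 1/2^k$ turns the series into the geometric sum $H(t) = \sum_{k \geq 0} (t/2)^k = (1 - t/2)^{-1}$, and that the resulting step coincides with the closed form of Halley’s step, $2FF'/(2F'^2 - FF'')$. A small numerical confirmation (our illustration, with $F(x) = x^3 - 2$ as a sample function):

```python
def H_series(t, n_terms=60):
    """H(t) = sum_{k>=0} a_k t^k with a_k = 1/2**k, i.e. a geometric series."""
    return sum((t / 2.0)**k for k in range(n_terms))

# The series sums to the Halley corrector 1/(1 - t/2) for |t| <= 1/2.
t = 0.4
series_val = H_series(t)
closed_val = 1.0 / (1.0 - t / 2.0)

# Scalar check: H(L_F) * F/F' reproduces Halley's closed-form step
# for the sample function F(x) = x**3 - 2 (|L_F| <= 1/2 at this point).
x = 1.1
F, Fp, Fpp = x**3 - 2.0, 3.0 * x**2, 6.0 * x
L_F = Fpp * F / Fp**2                      # scalar version of L_F(x)
step_series = H_series(L_F) * F / Fp
step_closed = 2.0 * F * Fp / (2.0 * Fp**2 - F * Fpp)
```

Both step expressions agree to machine precision, which is the scalar content of the remark that $a_k = 1/2^k$ recovers Halley’s method.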
We point out that condition (\[condition:MajorantCondition\]) used in our convergence analysis is affine invariant, while condition (\[condition:MajorantConditionLike\]) is not (see [@Deuflhard1979; @Deuflhard2004] for more details about the affine invariant theory), and that the assumptions of the majorizing function used in our analysis are weaker than the ones in [@Hernandez2009]. Furthermore, our convergence analysis provides a clear relationship between the majorizing function and the nonlinear operator, see Lemmas \[lemma:estimateF’(x)-1F’(x0)\], \[lemma:estimateF’(x0)-1F”(x)\] and \[lemma:ConvergenceAuxiliaryResults\]. To illustrate the theoretical results, we provide a numerical example on a nonlinear Hammerstein integral equation of the second kind. Consider the integral equation: $$\label{eq:NonlinearHammersteinEquation} u(s) = f(s) + \lambda \int_a^b k(s, t) u(t)^n {\mathrm{d}}t, \ \ \lambda \in \mathbb{R}, n \in \mathbb{N},$$ where $f$ is a given continuous function satisfying $f(s) > 0$ for $s \in [a, b]$ and the kernel function $k(s, t)$ is continuous and positive in $[a, b] \times [a, b]$. Let $X = Y = C[a, b]$ and $D = \{u \in C[a, b]: u(s) \geq 0, s \in [a, b]\}$. Then, finding a solution of (\[eq:NonlinearHammersteinEquation\]) is equivalent to finding a solution of $F(x) = 0$, where $F: D \to C[a, b]$ is defined by $$F(u)(s) = u(s) - f(s) - \lambda \int_a^b k(s, t) u(t)^n {\mathrm{d}}t, \ \ s \in [a, b], \lambda \in \mathbb{R}, n \in \mathbb{N}.$$ We adopt the max-norm.
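As a quick numerical illustration of this operator equation (a sketch under the parameter choices made in the example below: $[a, b] = [0, 1]$, $n = 3$, $f \equiv 1$, $\lambda = 1$, and the Green kernel $G$), a Nyström discretization on a trapezoidal grid combined with the fixed-point iteration $u \leftarrow f + \lambda K u^3$ converges here, and the computed solution stays within the existence radius $t^* \approx 0.236$ reported for $\lambda = 1$ in Table \[table:DomainExistenceUniqueness\]:

```python
# Nystrom discretization of u = 1 + lam * int_0^1 G(s,t) u(t)^3 dt
# (illustrative choices: f = 1, n = 3, lam = 1, Green kernel G as below).
N = 101
h = 1.0 / (N - 1)
s = [i * h for i in range(N)]

def G(si, tj):
    """Green kernel on [0,1]^2: t(1 - s) for t <= s, s(1 - t) for s <= t."""
    return tj * (1.0 - si) if tj <= si else si * (1.0 - tj)

w = [h] * N                      # trapezoidal quadrature weights
w[0] = w[-1] = h / 2.0
K = [[w[j] * G(s[i], s[j]) for j in range(N)] for i in range(N)]

lam = 1.0
u = [1.0] * N                    # start from x0 = f = 1
for _ in range(60):              # fixed-point iteration u <- 1 + lam * K u^3
    cubed = [v**3 for v in u]
    u = [1.0 + lam * sum(Ki[j] * cubed[j] for j in range(N)) for Ki in K]

max_dev = max(abs(v - 1.0) for v in u)   # max-norm distance from x0 = 1
```

Halley’s method applied to the discretized system would converge in far fewer iterations; the plain fixed-point form is used only to keep the sketch short.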
The first and second derivatives of $F$ are given by $$F'(u)v(s) = v(s) - n \lambda \int_a^b k(s, t) u(t)^{n - 1} v(t) {\mathrm{d}}t, \ \ v \in D,$$ and $$F''(u)[vw](s) = - n(n - 1)\lambda \int_a^b k(s, t)u(t)^{n - 2} (vw)(t) {\mathrm{d}}t, \ \ v, w \in D.$$ We choose $[a, b] = [0, 1], n = 3, x_0(t) = f(t) = 1$ and $k(s, t) = G(s, t)$, the Green kernel on $[0,1] \times [0, 1]$ defined by $$G(s,t)= \begin{cases} \displaystyle \frac{(b - s)(t - a)}{b - a} = t(1 - s),\ t \leq s,\\ \displaystyle \frac{(b - t)(s - a)}{b - a} = s(1 - t),\ s \leq t. \end{cases}$$ Let $M = \max\limits_{s \in [0, 1]} \int_0^1 |k(s, t)| {\mathrm{d}}t$. Then $M = 1/8$. Thus, we obtain that $$\|F'(x_0)^{-1}\| \leq \frac{8}{8 - 3 |\lambda|}, \ \ \|F'(x_0)^{-1}F(x_0)\| \leq \frac{|\lambda|}{8 - 3 |\lambda|}, \ \ \|F'(x_0)^{-1}F''(x_0)\| \leq \frac{6 |\lambda|}{8 - 3 |\lambda|}.$$ In addition, for any $x, y \in D$, we have $$\|F'(x_0)^{-1}[F''(x) - F''(y)]\| \leq \frac{6 |\lambda|}{8 - 3 |\lambda|} \|x - y\|.$$ So, we obtain the values of $\beta, \eta$ and $L$ in (\[majorizingfunction:cubicfunction\]) as follows: $$\beta = \frac{|\lambda|}{8 - 3 |\lambda|}, \ \ \eta = \frac{6 |\lambda|}{8 - 3 |\lambda|}, \ \ L = \frac{6 |\lambda|}{8 - 3 |\lambda|}.$$ Consequently, the convergence criterion (\[criterion:LipschitzConditionConvergenceCriterion\]) holds for any $|\lambda| \in [0, 32/27)$, and Theorem \[theorem:SemilocalConvergenceHalleyMethodLipschitzCondition\] is applicable and the sequence generated by Halley’s method (\[iteration:HalleyMethod\]) with initial point $x_0$ converges to a zero of $F$ defined by (\[eq:NonlinearHammersteinEquation\]). For the special cases of integral equation (\[eq:NonlinearHammersteinEquation\]) with $n = 3$ when $\lambda = 1/4, 1/2, 3/4, 1$ and $f(t) = 1$, the corresponding domains of existence and uniqueness of solution, together with those obtained by Hernández and Romero in [@Hernandez2007], are given in Table \[table:DomainExistenceUniqueness\].
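For instance, at $\lambda = 1$ the constants above give $\beta = 1/5$ and $\eta = L = 6/5$, so $5f(t) = t^3 + 3t^2 - 5t + 1 = (t - 1)(t^2 + 4t - 1)$, and hence $t^* = \sqrt{5} - 2 \approx 0.236068$ and $t^{**} = 1$, exactly the $\lambda = 1$ entries in Table \[table:DomainExistenceUniqueness\]. The sketch below (our illustration) reproduces these roots by bisection, and also confirms that $b$ in $(\ref{criterion:LipschitzConditionConvergenceCriterion})$ is precisely the value of $\beta$ at which $f(r_1) = 0$:

```python
import math

lam = 1.0
beta = lam / (8.0 - 3.0 * lam)             # = 1/5
eta = L = 6.0 * lam / (8.0 - 3.0 * lam)    # = 6/5

f = lambda t: beta - t + eta * t**2 / 2.0 + L * t**3 / 6.0

s_root = math.sqrt(eta**2 + 2.0 * L)
r1 = 2.0 / (eta + s_root)                  # positive root of f'
b = 2.0 * (eta + 2.0 * s_root) / (3.0 * (eta + s_root)**2)
criterion_ok = beta < b                    # Kantorovich-type criterion
threshold_residual = f(r1) - beta + b      # f(r1) evaluated with beta = b

def bisect(g, lo, hi, iters=200):
    """Bisection assuming a sign change of g on [lo, hi]."""
    glo = g(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (glo > 0.0) == (g(mid) > 0.0):
            lo, glo = mid, g(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_star = bisect(f, 0.0, r1)                # smallest positive root of f
t_star_2 = bisect(f, r1, 5.0)              # root of f in [r1, +inf)
```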
We notice that our convergence analysis gives better existence balls and uniqueness domains than those obtained in [@Hernandez2007].

  ----------- ---------------------------------------------- ------------------------- ---------------------------------------------- -------------------------------
  $\lambda$   Existence (this paper)                         Uniqueness (this paper)   Existence [@Hernandez2007]                     Uniqueness [@Hernandez2007]
  0.25        $\overline{{\bm{\mathrm{B}}}(1, 0.0346081)}$   (1, 4.06814)              $\overline{{\bm{\mathrm{B}}}(1, 0.0348595)}$   (1, 4.06798)
  0.5         $\overline{{\bm{\mathrm{B}}}(1, 0.0783777)}$   (1, 2.35026)              $\overline{{\bm{\mathrm{B}}}(1, 0.0814400)}$   (1, 2.34809)
  0.75        $\overline{{\bm{\mathrm{B}}}(1, 0.138260)}$    (1, 1.54454)              $\overline{{\bm{\mathrm{B}}}(1, 0.157580)}$    (1, 1.52953)
  1           $\overline{{\bm{\mathrm{B}}}(1, 0.236068)}$    (1, 1)                    $\overline{{\bm{\mathrm{B}}}(1, 0.402436)}$    (1, 0.853166)
  ----------- ---------------------------------------------- ------------------------- ---------------------------------------------- -------------------------------

  : Domains of existence and uniqueness of solution for Halley’s method with $f(t) = 1$[]{data-label="table:DomainExistenceUniqueness"}

I. K. Argyros, On the Newton-Kantorovich Hypothesis for Solving Equations, J. Comput. Appl. Math., 169 (2004) 315-332. I. K. Argyros, Ball Convergence Theorems for Halley’s Method in Banach Space, J. Appl. Math. Comput., 38 (2012) 453-465. V. Candela and A. Marquina, Recurrence Relations for Rational Cubic Methods I: The Halley Method, Computing, 44 (1990) 169-184. P. Deuflhard, Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms, Springer-Verlag, Berlin Heidelberg, 2004. P. Deuflhard and G. Heindl, Affine Invariant Convergence Theorems for Newton’s Method and Extensions to Related Methods, SIAM J. Numer. Anal., 16 (1979) 1-10. J. A. Ezquerro and M. A. Hernández, On the R-Order of the Halley Method, J. Math. Anal. Appl., 303 (2005) 591-601. O. P. Ferreira, Local Convergence of Newton’s Method in Banach Space from the Viewpoint of the Majorant Principle, IMA J. Numer. Anal., 29 (2009) 746-759. O. P.
Ferreira and B. F. Svaiter, Kantorovich’s Majorants Principle for Newton’s Method, Comput. Optim. Appl., 42 (2009) 213-229. W. B. Gragg and R. A. Tapia, Optimal Error Bounds for the Newton-Kantorovich Theorem, SIAM J. Numer. Anal., 11 (1974) 10-13. J. M. Gutiérrez and M. A. Hernández, Newton’s Method under Weak Kantorovich Conditions, IMA J. Numer. Anal., 20 (2000) 521-532. D. Han, The Convergence on a Family of Iterations with Cubic Order, J. Comput. Math., 19 (2001) 467-474. D. Han and X. Wang, The Error Estimates of Halley’s Method, Numer. Math. JCU (Engl. Ser.), 6 (1997) 231-240. M. A. Hernández and N. Romero, On a Characterization of Some Newton-Like Methods of R-Order at Least Three, J. Comput. Appl. Math., 183 (2005) 53-66. M. A. Hernández and N. Romero, Application of Iterative Processes of R-Order at Least Three to Operators with Unbounded Second Derivative, Appl. Math. Comput., 185 (2007) 737-747. M. A. Hernández and N. Romero, Toward a Unified Theory for Third R-Order Iterative Methods for Operators with Unbounded Second Derivative, Appl. Math. Comput., 215 (2009) 2248-2261. L. O. Jay, A Note on Q-Order of Convergence, BIT Numer. Math., 41 (2001) 422-429. L. V. Kantorovich and G. P. Akilov, Functional Analysis, Pergamon Press, Oxford, 1982. F. A. Potra, On Q-Order and R-Order of Convergence, J. Optim. Theory Appl., 63 (1989) 415-431. S. Smale, Newton’s Method Estimates from Data at One Point, In: The Merging of Disciplines: New Directions in Pure, Applied and Computational Mathematics (R. Ewing, K. Gross, and C. Martin editors), pages 185-196. Springer-Verlag, New York, 1986. X. Wang, Convergence on the Iteration of Halley Family in Weak Conditions, Chinese Sci. Bull., 42 (1997) 552-555. X. Wang, Convergence of Newton’s Method and Inverse Functions Theorem in Banach Space, Math. Comput., 68 (1999) 169-186. X. Wang, Convergence of Newton’s Method and Uniqueness of the Solution of Equations in Banach Space, IMA J. Numer. Anal., 20 (2000) 123-134. X.
Wang and D. Han, On the Dominating Sequence Method in the Point Estimates and Smale’s Theorem, Scientia Sinica Ser. A., 33 (1990) 135-144. X. Wang and D. Han, Criterion $\alpha$ and Newton’s Method in the Weak Conditions (in Chinese), Math. Numer. Sinica, 19 (1997) 103-112. X. Xu and C. Li, Convergence of Newton’s Method for Systems of Equations with Constant Rank Derivatives, J. Comput. Math., 25 (2007) 705-718. X. Xu and C. Li, Convergence Criterion of Newton’s Method for Singular Systems with Constant Rank Derivatives, J. Math. Anal. Appl., 345 (2008) 689-701. X. Ye and C. Li, Convergence of the Family of the Deformed Euler-Halley Iterations under the Hölder Condition of the Second Derivative, J. Comput. Appl. Math., 194 (2006) 294-308. T. J. Ypma, Affine Invariant Convergence Results for Newton’s Method, BIT Numer. Math., 22 (1982) 108-118.

[^1]: Corresponding author. lingyinghui@163.com (Y. Ling), xxu@zjnu.cn (X. Xu).

[^2]: The second author’s work was supported in part by the National Natural Science Foundation of China (Grant No. 61170109 and No. 10971194).
---
abstract: 'Gamma-ray spectroscopy provides diagnostics of particle acceleration in solar flares, but care must be taken when interpreting the spectra due to effects of the angular distribution of the accelerated particles (such as relativistic beaming) and Compton reprocessing of the radiation in the solar atmosphere. In this paper, we use the GEANT4 Monte Carlo package to simulate the interactions of accelerated electrons and protons and study these effects on the gamma-rays resulting from electron bremsstrahlung and pion decay. We consider the ratio of the 511 keV annihilation-line flux to the continuum at 200 keV and in the energy band just above the nuclear de-excitation lines (8–15 MeV) as a diagnostic of the accelerated particles and a point of comparison with data from the X17 flare of 2003 October 28. We also find that pion secondaries from accelerated protons produce a positron annihilation line component at a depth of $\sim$ 10 g cm$^{-2}$, and that the subsequent Compton scattering of the 511 keV photons produces a continuum that can mimic the spectrum expected from the 3$\gamma$ decay of orthopositronium.'
author:
- ShiChao Tang and David M. Smith
title: 'GEANT4 Simulations of Gamma-Ray Emission from Accelerated Particles in Solar Flares'
---

Introduction
============

In solar flares, electrons and ions are accelerated to non-thermal energies. When these particles interact with the ambient medium, they can produce photons with energies up to the gamma-ray range. The electrons produce continuum emission via the bremsstrahlung process, while the ions (protons and heavier nuclei) can produce excited and radioactive nuclei which, through de-excitation or decay, produce emission lines usually at energies $\lesssim$ 7 MeV. Ions with energy above $\sim$ 200 MeV can produce pions by interacting with ambient nuclei.
These pions then produce a gamma-ray continuum via $\pi^0\rightarrow 2\gamma$ or $\pi^{\pm} \rightarrow \mu^{\pm} \rightarrow e^{\pm}\rightarrow \gamma_{brem}$. There is also 511 keV line emission from annihilation of positrons created by the decay of $\beta^{+}$-emitting radioactive nuclei or $\pi^{+}$. Whether radioactive nuclei or $\pi^{+}$ contribute more to the positron population depends on the hardness of the injected ions [@murphy84]. Positrons may also come from $e^{-}e^{+}$ pair production by the gamma-ray continuum. These continua and lines provide information on particle acceleration in solar flares, but we can only observe those photons that reach us. Since the accelerated electrons are relativistic, the angular distribution of bremsstrahlung will tend to follow that of the original electrons, so electrons beamed downward along the magnetic field will put most of their radiation into the Sun. Photons created deep in the solar atmosphere by any process are less likely to escape than those created in the corona or chromosphere. These effects will change the relative luminosity of different spectral components, but the location and directionality of the photon production processes will also change the spectral shape of each as well. Bremsstrahlung intrinsically creates different spectra in different directions, with the hardest spectrum in the beam direction. Compton scattering can also affect the observed spectra [e.g., @kotoku07] by scattering photons to lower energy. The importance of this process depends on both the depth where the original photons are produced and their direction relative to the line of sight. Accurate simulations of the location, beaming, reprocessing, and absorption of flare photons are therefore just as important to interpreting spectra as modeling of the original radiation mechanism.
Examples of the importance of directionality and location in interpreting observed bremsstrahlung spectra are seen in recent work by @krucker08 and @kontar06. The picture of electrons accelerated directly down field lines into the deep solar atmosphere is seldom found to agree with observations. Strong scattering, as from interactions with magnetohydrodynamic waves, can make the distribution function evolve into one that is more isotropic than when the electrons were injected into the magnetic loop. @bret09 compared different kinds of instabilities that may cause this anisotropy decrease. The evolution speed of the distribution function is sensitive to the loop magnetic field as described by @karlicky09. Magnetic mirroring can produce a “pancake” distribution moving mostly parallel to the solar surface [@dermer86]. @krucker08 proposed a scenario in which the highest energy bremsstrahlung in flares is produced in the magnetic loop top, because they found that the coronal source is harder and becomes dominant above 500 keV in the 2005 January 20 flare. In this picture, the angular distribution of gamma-ray producing electrons is also isotropic because they are trapped by strong scattering. @kontar06 concluded that the angular distribution of electrons producing hard X-rays at flare footpoints is isotropic by including the Compton-scattered X-ray “albedo” surrounding the footpoints in their spectral analysis. In the present work, we perform Monte Carlo simulations with the toolkit GEANT4 [@agostinelli03] to illustrate the effects of beaming and reprocessing on observable gamma-ray components from flare-accelerated electrons and protons. We focus on electron bremsstrahlung and the secondary radiation from pion production by protons, since we believe that GEANT4 addresses these components more accurately than it does the nuclear de-excitation lines that dominate between these energy ranges. 
In particular, we study the positron-annihilation line and the production of continuum radiation from 8–15 MeV, a range bracketed at the bottom by the energy at which de-excitation lines first become negligible and at the top by the maximum energy observed by the [*Reuven Ramaty High-Energy Solar Spectroscopic Imager (RHESSI)*]{} [@lin02], which we will use to compare our simulations to a flare observation. If the 8–15 MeV continuum is taken to be bremsstrahlung from flare-accelerated electrons, we show that there is a strong constraint on the angular distribution of the electrons, an effect discussed also by @kotoku07. The positron-annihilation line is always isotropic, because the positrons mostly slow down to thermal speed before they annihilate. Thus, a comparison of the annihilation line and the 8–15 MeV continuum gives us further information on the electron angular distribution, again under the assumption that both components are due to bremsstrahlung and its reprocessing. We will also discuss the other sources for these energy bands: for the line, $\beta^{+}$ decay and positrons from $\pi^{+}$ decay; and for the continuum, bremsstrahlung from the electrons and positrons produced in charged-pion decay, together with reprocessing of gamma-rays from neutral-pion decay in the solar atmosphere and in the instrument. Positrons can annihilate through the 3$\gamma$ orthopositronium channel. The resulting continuum, while it has a characteristic shape, can be mistaken for Comptonization of the 511 keV line [@share04] and can greatly affect the estimation of the ratio between the annihilation line and other spectral components. For an example both of the capabilities of these simulations and of instrumental effects, we use an observation of the large X-class flare of 2003 October 28 with [*RHESSI*]{} and compare the time-integrated spectrum with our simulations. The models and method are shown in detail in §\[sec:model\]. 
In §\[sec:simu\] we show the simulation results and compare them with the 2003 October 28 flare, and §\[sec:dis\] provides the summary and discussion.

Model and method {#sec:model}
================

The GEANT4 toolkit {#sec:geant}
------------------

We used the Monte Carlo simulation package GEANT4 [@agostinelli03], which is widely used in experimental high-energy physics for simulating the passage of particles through matter. The physics processes offered cover all the electromagnetic and hadronic processes we are interested in. GEANT4 treats individual simulated particles one at a time rather than distributions of particles, and carries them through a mass model of the universe defined by geometrical boundaries between materials rather than a grid. When a particle is “injected”, GEANT4 calculates the mean free path of each discrete physics process implemented, samples a random distance associated with each process, and selects the process with the shortest distance (unless the distance to the nearest material boundary is closer, in which case the particle is taken to the boundary). It then determines all the physics properties of the particle after the chosen process (including its new position), taking account of the continuous physics processes (such as energy loss by electrons) that happen within this step. As this goes on, it builds up a “track” of the particle until it comes to rest, leaves the volume, or reaches a low-energy threshold. Daughter particles, when created, are tracked immediately, with the parent particle put aside to be followed after the daughter particle is finished. Cascades can thus be followed deeply, restricted only by computer memory. In our simulations, we inject electrons or protons into a model of the solar atmosphere and track their interactions with the ambient material, recording the angular and energy distribution of photons leaving the Sun. 
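The per-step competition between discrete processes described above can be sketched in a few lines. This is only a schematic illustration in Python, not GEANT4's actual (C++) interface; the process names and mean free paths below are arbitrary.

```python
import math
import random

def choose_step(mean_free_paths, distance_to_boundary):
    """Pick the interaction (or boundary crossing) that happens first.

    mean_free_paths: dict mapping a process name to its mean free
    path (cm) in the current material.  Each discrete process proposes
    a distance drawn from an exponential distribution with that mean;
    the shortest proposal wins, unless the nearest material boundary
    is closer still.
    """
    proposals = {name: -mfp * math.log(1.0 - random.random())
                 for name, mfp in mean_free_paths.items()}
    process, distance = min(proposals.items(), key=lambda kv: kv[1])
    if distance_to_boundary < distance:
        return "boundary", distance_to_boundary
    return process, distance

random.seed(1)
# Illustrative mean free paths (cm) and boundary distance, not real values:
step = choose_step({"bremsstrahlung": 5.0, "ionization": 0.5}, 10.0)
```

In the real toolkit the winning process then updates the particle state, after continuous losses along the step are applied.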
@kotoku07 used GEANT4 to simulate electron bremsstrahlung in solar flares, and included the effect of Compton scattering on the emerging bremsstrahlung continuum. In this work, we also quantify the positron-annihilation line resulting from bremsstrahlung photons pair-producing in the Sun, and simulate the gamma-ray emissions originating from accelerated protons as well, considering the continuum and annihilation-line photons resulting from pion creation.

### Electromagnetic processes {#sec:em}

GEANT4 allows the user to select the particular physics processes to be used, including alternate versions of some processes. The following electromagnetic processes are implemented in our simulations: bremsstrahlung, ionization, and Coulomb scattering for electrons and positrons, and pair production, Compton scattering, and photoelectric absorption for photons (the latter is not significant at the energies of interest). Because we focus on very high energies, we treat the chromosphere as a cold target even though its temperature can reach tens of keV when bombarded by flare-accelerated particles. Annihilation is also implemented for positrons, but without the formation of positronium, so the annihilation always produces two gamma-ray photons of 511 keV. Some fraction of positrons in the real solar atmosphere may form parapositronium (which also decays to two 511 keV photons) and orthopositronium (which decays to three continuum photons with maximum energy 511 keV), but the orthopositronium continuum can be quenched by collisions at high densities; see @murphy05 for extensive recent calculations. We will return to this problem in §\[sec:oct28\] while comparing our simulations with a flare observation. The processes mentioned above, except for Coulomb scattering, are implemented through the Penelope physics package [@salvat06], which is valid above $\sim$ 250 eV. 
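The consequence of the no-positronium simplification can be made explicit with a small bookkeeping function. The 1:3 para-to-orthopositronium ratio is the standard spin-statistical weight; the positronium and quenching fractions are our own illustrative parameters, not quantities taken from @murphy05.

```python
def line_photons_per_positron(f_ps, f_quench):
    """511 keV line photons emitted per annihilating positron.

    f_ps: fraction of positrons that annihilate via positronium.
    f_quench: fraction of orthopositronium broken up or converted by
              collisions before its 3-gamma decay, so that it ends up
              annihilating to two 511 keV photons anyway.
    Direct annihilation and parapositronium (1/4 of positronium by
    spin statistics) give two 511 keV photons; surviving
    orthopositronium (3/4) gives three continuum photons and no
    line photons.
    """
    direct = 1.0 - f_ps
    para = 0.25 * f_ps
    ortho_two_gamma = 0.75 * f_ps * f_quench
    return 2.0 * (direct + para + ortho_two_gamma)

# The simplification used in our runs: no positronium at all.
two_gamma_only = line_photons_per_positron(0.0, 0.0)
```

With full positronium formation and no quenching, the line yield drops to 0.5 photons per positron, which is why the positronium treatment matters for line-to-continuum ratios.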
Bremsstrahlung in Penelope includes electron-electron bremsstrahlung, which can be significant above several hundred keV in flares [@kontar07], using cross sections calculated after @seltzer85. Coulomb scattering needs special consideration because its mean free path is much smaller than that of any other process, and a full treatment would take too much time. GEANT4 therefore simulates the net effect of the many Coulomb interactions within a given step as a statistical expectation, instead of treating them one by one.

### Hadronic processes {#sec:hadron}

@chin09 suggested that because the different GEANT4 models for proton inelastic collisions are not consistent with each other in the energy range of tens of MeV, these models are not yet reliable for quantitative work; our initial simulations using these packages confirm this conclusion for our application. We found that the proton inelastic process as simulated in the HadronPhysicsQGSP\_BERT\_HP physics module produces a continuum-like photon spectrum extending to $\sim$ 20 MeV. This conflicts with most observations of gamma-ray flares [@share95 e.g.], in which this component falls off dramatically at 8 MeV, above the complex of lines from the de-excitation of nitrogen, carbon, and oxygen. The observed spectrum is also dominated by individual lines rather than a continuum. Another package, HadronPhysicsQGSP\_BIC\_HP, shows a more realistic cutoff around 8 MeV but still produces only a continuum-like shape and not the observed complex of de-excitation lines. In Figure \[fig:modcompare\] we show the difference between these two physics modules for the proton inelastic scattering component and the entire proton-derived flux, including the high-energy component from pion production. For this comparison we injected mono-energetic protons of 1 GeV downward into the model Sun described in §\[sec:solarmod\] and recorded all the photons coming out at an angle $\beta$ from the solar normal such that $\cos(\beta)>0.8$. 
Pion production and decay, on the other hand, are simpler processes, and there is agreement among multiple models for their simulation. As can be seen in Figure \[fig:modcompare\], the two GEANT4 physics modules agree well as to this photon component (dominant above 10 MeV). We also compared these results from GEANT4 with the output of the code developed by Reuven Ramaty, Ronald Murphy, and others, which has been successfully applied to large flares [@murphy87]. For this comparison, we put mono-energetic protons of 1 GeV into an atmosphere of hydrogen and helium with He/H = 0.1, and recorded all the photons produced. As shown in Figure \[fig:pioncompare\], the results are consistent with GEANT4; in the energy range 8–15 MeV, the difference is less than 20%. Since we believe we can therefore trust GEANT4 for pion processes as well as electromagnetic cascades, we will ignore the inelastic process for this work and concentrate on the 8–15 MeV range, which should be dominated by bremsstrahlung both from primary electrons and from secondary electrons and positrons from pion decay.

The solar model {#sec:solarmod}
---------------

In the present simulation, we treat the solar atmosphere as parallel layers. This is a reasonable approximation because the size of the emission region is always much smaller than the solar radius. We use the analytical approximation of @kotoku07 to the Harvard-Smithsonian reference atmosphere [@gingerich71] as the solar mass-density profile, which is $$\begin{aligned}
\label{eq:density}
\rho(z)=3.19\times10^{-7}\exp\left(-\frac{z}{h}\right)~~\mathrm{g~cm^{-3}}.\end{aligned}$$ Here $z$ is the height measured from the photosphere and $h$ is the scale height, which is $\sim 400$ km for $z<0$ and $\sim 110$ km for $z>0$. The specific vertical structure model will not affect the simulation results, however: all the processes we care about, except decays, depend only on the column density along the path. 
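For the profile of equation (\[eq:density\]), the conversion from height to overlying column density is analytic. A minimal sketch, with variable names of our own choosing:

```python
import math

RHO0 = 3.19e-7      # g cm^-3 at the photosphere (z = 0)
H_ABOVE = 110e5     # scale height in cm for z > 0 (~110 km)
H_BELOW = 400e5     # scale height in cm for z < 0 (~400 km)

def density(z_cm):
    """Mass density (g cm^-3) of the model atmosphere at height z."""
    h = H_ABOVE if z_cm > 0 else H_BELOW
    return RHO0 * math.exp(-z_cm / h)

def column_depth(z_cm):
    """Vertical column density (g cm^-2) integrated from height z upward."""
    if z_cm >= 0:
        return RHO0 * H_ABOVE * math.exp(-z_cm / H_ABOVE)
    # below the photosphere: full column above z = 0 plus the slab from z to 0
    return RHO0 * H_ABOVE + RHO0 * H_BELOW * (math.exp(-z_cm / H_BELOW) - 1.0)
```

The column above the photosphere in this model is about 3.5 g cm$^{-2}$, and it grows steeply below $z=0$.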
Because our model solar atmosphere is made up of parallel slabs, we can always transform the parameter $z$ into column depth without changing our results. The only exception is decay processes, which depend on time. However, the longest lifetime we need to consider is that of $\pi^{\pm}$, which is 2.6$\times 10^{-8}$ s. During this short time, the pion travels less than 1 km even if it has an energy of up to $10^{4}$ MeV, which is much shorter than the length scale of the system. So the decay always happens approximately where the short-lived particles are produced, and the result will not change with different vertical structures. Another consideration is that when the chromosphere is bombarded by the flare-accelerated particles, it will evaporate and fill up the magnetic loop. This will change the vertical structure. If the particles were to interact high in a narrow column, the pattern of photon escape as a function of solar normal angle would be very different, with tangential escape much easier. However, @aschwanden97 find that the density of the evaporation upflow is around 10$^{10}$ cm$^{-3}$. Taking the longest flare loops, with lengths of around 10$^{11}$ cm, the added column density will be less than 0.01 g cm$^{-2}$. As we will show in §\[sec:simu\], all the processes we are interested in happen at a column density greater than 1 g cm$^{-2}$. Therefore, evaporation should not strongly influence our results. The elemental abundance of our model Sun is taken from @grevesse07, and we assume that the abundances of the photosphere and corona are the same. In future work we will implement more realistic photospheric and coronal abundances, including the enhancement of low first ionization potential (FIP) elements in the corona. 
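Two order-of-magnitude claims above, the sub-kilometer pion decay length and the small evaporation column, can be checked directly. The constants are standard, and the arithmetic is only illustrative:

```python
import math

# Charged-pion decay length at a total energy of 1e4 MeV:
M_PI = 139.6               # MeV, charged-pion rest mass
C_TAU = 3.0e10 * 2.6e-8    # cm: c times the pi+/- proper lifetime (~780 cm)
gamma = 1e4 / M_PI
beta = math.sqrt(1.0 - 1.0 / gamma**2)
decay_length_cm = gamma * beta * C_TAU   # a few times 1e4 cm, i.e. below 1 km

# Column added by evaporation along a 1e11 cm loop at n = 1e10 cm^-3:
M_H = 1.67e-24             # g, hydrogen atom mass
evap_column = 1e10 * 1e11 * M_H          # ~2e-3 g cm^-2, below the 0.01 quoted
```

Both numbers confirm the approximations used in the text: pions decay essentially at their production site, and chromospheric evaporation adds a negligible column compared to the depths where the relevant processes occur.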
@kotoku07 used a pure hydrogen atmosphere, which underestimates bremsstrahlung efficiency, since that rises approximately as the square of the atomic number $Z$, and also underestimates pair production, the cross-section for which increases even more steeply with $Z$. We find that 10 MeV photons, for example, produce 12% more positrons in a realistic atmosphere than in a hydrogen atmosphere.

Results {#sec:result}
=======

Simulations {#sec:simu}
-----------

We inject electrons and protons with different energies and angular distributions into our model Sun and track them as well as their secondaries. The tracking stops if the particles leave the Sun or if their energy falls below 50 keV. However, because positrons seldom annihilate until they thermalize, the 50 keV cutoff does not apply to them; we track them until they annihilate or leave the Sun. We also track pions to their decay.

### Interactions of Accelerated Electrons {#sec:ele}

Table \[tab:e\] shows all the different models of injected electrons used in the simulations. For the downward-beamed and downward-isotropic distributions, the electrons are initialized just above the model solar atmosphere. For the isotropic and pancake distributions, they are injected at an integrated column depth of $8 \times 10^{-5}$ g cm$^{-2}$. The results are not sensitive to this parameter as long as it is not very deep in the atmosphere. Since these distributions would arise from magnetic mirroring, the electrons are confined to this region and reflected artificially in the simulation. We do not invoke real magnetic mirroring because the gyration radius is so small compared to the other length scales in the simulation that including the magnetic field would make the runs impractically slow. In Table \[tab:e\], columns 4–7 show the ratio between the 511 keV line flux and the continuum in two places: the energy flux per keV at 200 keV and the integrated continuum from 8–15 MeV. 
The simulation marked with dashes gave no photons from 8–15 MeV. The columns marked “(sim.)” are the ratios from the direct output of the simulations. To get the ratios marked “(cnvlv.),” we convolved our simulated spectra with the instrumental response matrix of [*RHESSI*]{}, so we could compare the simulations with the ratio of counts in a flare observed with that spacecraft. The last three columns give the production efficiency for photons in each band exiting the Sun per input particle in the simulation. The first half of Table \[tab:e\] represents a disk flare (cosine of the viewing angle $>0.8$) and the last half represents a limb flare (cosine of the viewing angle between 0.2 and 0.4).

  ----------------- ---------- -------------------- --------- ---------- ---------- ---------- --------- --------- ---------
  Angular           Spectral   Viewing              511/200   511/200    511/8–15   511/8–15   511 keV   200 keV   8–15 MeV
  Distribution      Index      Angle (cos$\beta$)   (sim.)    (cnvlv.)   (sim.)     (cnvlv.)   photons   photons   photons
  ----------------- ---------- -------------------- --------- ---------- ---------- ---------- --------- --------- ---------
  Downward beamed   2.2        $>$0.8               1.13      0.84       16.5       8.67       3.8e-6    3.4e-6    2.3e-7
                    3.2                             0.45      0.15       14.5       5.07       2.9e-7    6.4e-7    2e-8
  Downward iso.     2.2                             1.92      0.94       22.7       11.5       1.1e-5    5.9e-6    5e-7
                    3.2                             0.45      0.18       58.0       13.1       5.8e-7    1.3e-6    1e-8
  Isotropic         2.2                             0.95      3.44       0.093      0.47       9.03e-6   9.5e-6    9.7e-5
                    2.7                             0.42      1.98       0.091      0.53       1.9e-6    4.5e-6    2.1e-5
                    3.2                             0.11      0.73       0.12       0.77       1.6e-7    1.5e-6    1.4e-6
  Pancake           2.2                             2.57      1.12       10.0       5.86       1.5e-5    6.0e-6    1.5e-6
                    2.7                             1.21      0.60       14.0       8.71       3.2e-6    2.7e-6    2.3e-7
                    3.2                             0.40      0.30       10.5       8.72       6.3e-7    1.6e-6    6e-8
  Downward beamed   2.2        0.4–0.2              0.46      0.43       1.38       1.36       6.2e-7    1.4e-6    4.5e-7
                    3.2                             0.09      0.02       -          -          6.0e-8    6.6e-7    -
  Downward iso.     2.2                             0.96      0.91       1.09       1.85       3.9e-6    4.1e-6    3.6e-6
                    3.2                             0.15      0.29       1.60       4.89       2.4e-7    1.6e-6    1.5e-7
  Isotropic         2.2                             0.55      3.48       0.046      0.46       4.4e-6    8.1e-6    9.6e-5
                    2.7                             0.25      1.85       0.058      0.524      1.2e-6    4.9e-6    2.1e-5
                    3.2                             0.05      0.77       0.06       0.71       7.0e-8    1.3e-6    1.4e-6
  Pancake           2.2                             0.81      3.69       0.058      0.47       9.4e-6    1.2e-5    1.6e-4
                    2.7                             0.30      1.98       0.062      0.54       2.15e-6   7.2e-6    3.4e-5
                    3.2                             0.14      1.05       0.072      0.66       5.2e-7    3.8e-6    7.3e-6
  ----------------- ---------- -------------------- --------- ---------- ---------- ---------- --------- --------- ---------

  : Flux ratios of photons in three energy bands from simulations of electron injection (see text). []{data-label="tab:e"}

The electron angular distribution affects the outgoing photons in several ways. First, the bremsstrahlung photons are highly beamed in the direction of the electrons’ motion. The more the primary electrons are beamed downward, the fewer photons can escape to be observed. Second, the more the primary electrons are beamed downward, the deeper positrons are produced and the fewer annihilation photons escape without being scattered or absorbed. Therefore, the escaping flux and the annihilation-line-to-continuum ratio both depend on the injected angular distribution. Not all the photons recorded reach the detector directly after they are produced. Some of them may have been Compton scattered, changing both their energy and direction. Compton scattering becomes very important when the angular distribution of injected electrons is mostly downward. In this case, since most direct bremsstrahlung photons head into the Sun, Compton “albedo” can contribute a significant part of the observed spectrum, and may even become dominant at lower energies [@kontar06; @kotoku07]. In panels A and B of Figure \[fig:e\_all\], we plot the outgoing gamma-ray spectra for different parameters of injected electrons, collected for $\cos \beta > 0.8$. The spectra are normalized to outgoing photons per MeV per incoming electron. 
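For concreteness, the four injected angular distributions of Table \[tab:e\] can be sampled as follows. This is a sketch with our own naming, not the actual GEANT4 primary-generator code; $\cos\theta$ is measured from the outward solar normal, so $-1$ means straight down.

```python
import random

def sample_cos_theta(distribution):
    """Sample cos(theta) for an injected electron; theta is measured from
    the outward solar normal, so cos(theta) = -1 is straight down."""
    u = random.random()
    if distribution == "downward_beamed":
        return -1.0                   # delta function, straight down
    if distribution == "downward_isotropic":
        return -u                     # uniform over the downward hemisphere
    if distribution == "isotropic":
        return 2.0 * u - 1.0          # uniform over the full sphere
    if distribution == "pancake":
        return 0.6 * u - 0.3          # uniform in -0.3 < cos(theta) < 0.3
    raise ValueError(distribution)
```

Uniformity in $\cos\theta$ corresponds to isotropy in solid angle, which is why the hemispherical and pancake cases are drawn as flat distributions in $\cos\theta$ over their respective ranges.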
As expected, the harder the spectral index, the higher the production rate, since high-energy electrons are more efficient for thick-target bremsstrahlung. It can also be seen that the production rate decreases with more downward beaming (panel C of Figure \[fig:e\_all\]). The spectra show a softening at low energies, the extra flux coming from Compton-scattered photons that were originally directed downward. This effect can be seen most clearly in the last panel of Figure \[fig:e\_all\], in which we show the results with Compton scattering turned on and off. As expected, this effect is most significant when the electron distribution is most downward. Figure \[fig:a\_all\] shows the gamma-ray spectrum recorded at different values of the outgoing angle $\beta$ between the photon direction and the outward solar normal. The two traces in each panel represent a disk flare (solid) and a limb flare (dashed). There are two processes that affect the production rate at different outgoing angles: relativistic beaming in a downward electron distribution will tend to put out more photons at large $\beta$, while, on the other hand, photons coming out at large $\beta$ have longer paths in the Sun and are more likely to scatter. At lower energies, the second process is more important and will make disk flares appear brighter. At high energy, however, the beaming effect is more important and will cause limb brightening. This result may explain the discovery of limb brightening at $>$ 0.3 MeV [@vestrand87] but not over the range 5–500 keV, which is dominated by much lower energy photons [@li94; @li95]. The same effect can cause a spectral break within a given flare: our simulations show spectral hardening for limb flares in the high-energy band (higher than about 400 keV) but not below, in agreement with the observations of @li95. 
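The attenuation half of this competition can be illustrated with a plane-parallel toy model: a photon created at vertical optical depth $\tau$ escapes along viewing angle $\beta$ with probability $\approx \exp(-\tau/\cos\beta)$, ignoring photons scattered back into the line of sight. The numbers below are illustrative, not fitted to the simulations:

```python
import math

def escape_probability(tau_vertical, cos_beta):
    """Plane-parallel escape probability for a photon created at vertical
    optical depth tau_vertical and leaving at angle beta from the solar
    normal: pure exponential attenuation, no scattering back into the beam."""
    return math.exp(-tau_vertical / cos_beta)

# The same production depth penalizes limb viewing far more than disk viewing:
disk = escape_probability(0.5, 0.9)   # near disk center
limb = escape_probability(0.5, 0.3)   # near the limb
```

This slant-path penalty dominates at low energies, where photons are produced below significant optical depth, while relativistic beaming wins at high energies, producing the observed limb brightening there.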
For the simulated isotropic distribution, most photons observed are direct bremsstrahlung, which is also isotropic, so there is no significant spectral evolution with viewing angle, as shown in panel E of Figure \[fig:a\_all\]. For pancake distributions, it is still true that the main component observed is direct bremsstrahlung, but beaming to high $\beta$ is visible at all energies, as shown in panel F of Figure \[fig:a\_all\]. In Table \[tab:e\], we summarize the ratio between the 511 keV line flux and the flux at 200 keV as well as the ratio between the 511 keV line and the 8–15 MeV continuum from our simulations. In order to compare these ratios with [*RHESSI*]{} data, we convolved the spectra from our simulations with the spectral response matrix of the *RHESSI* instrument [@smith02]; both the direct output of the simulations and the ratio in the “count space” of the instrument after convolution are shown in Table \[tab:e\]. It is interesting to note that the ratios can either decrease or increase from the convolution process, depending on the overall shape of the spectrum. When the high-energy bremsstrahlung escapes the Sun easily (such as for an isotropic electron distribution), not only does it overwhelm the solar annihilation line, but the multi-MeV bremsstrahlung photons also pair-produce in the spacecraft, so that the count spectrum has a more prominent 511 keV line than the photon spectrum. For flare spectra in which little MeV bremsstrahlung escapes, on the other hand, the most important instrumental effect is that the solar 511 keV photons (which are significant in this case) often Compton scatter out of [ *RHESSI*]{}’s detectors after a single interaction, so that they register as continuum photons instead of line photons, causing the count spectrum to have a less significant line than the photon spectrum. 
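Schematically, the convolution to “count space” is a matrix multiplication of the photon spectrum by the instrument response; the off-diagonal terms are what move full-energy photons into lower-energy count bins. The 3×3 matrix below is a toy stand-in, not the real [*RHESSI*]{} response of @smith02:

```python
# Toy response: rows are count-energy bins, columns are photon-energy bins.
# Off-diagonal terms move photon flux down to lower-energy count bins
# (Compton escape from the detector), the effect that changes the
# line/continuum ratios between the "(sim.)" and "(cnvlv.)" columns.
response = [
    [0.7, 0.2, 0.1],   # lowest count bin picks up scatter from above
    [0.0, 0.5, 0.2],
    [0.0, 0.0, 0.4],   # only part of the flux is detected at full energy
]
photons = [1.0, 2.0, 5.0]  # incident photon spectrum, arbitrary units

counts = [sum(r * p for r, p in zip(row, photons)) for row in response]
```

Because some flux scatters out of the detector entirely, the column sums of a realistic response are below unity and the total counts fall short of the total incident photons, as in this toy case.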
A solar gamma-ray spectrometer with a heavy anticoincidence shield, such as the Gamma-Ray Spectrometer on the [*Solar Maximum Mission*]{} [@forrest80], would be less susceptible to all these instrumental effects, and the line-to-continuum ratios would be much more similar in the count and photon spectra.

### Interactions of Accelerated Protons {#sec:proton}

We also simulated the interaction between accelerated protons and the solar atmosphere. The simulated solar atmosphere was the same as in the electron simulations. As discussed above, we are at this time simulating only the production of pions and their secondaries, not nuclear excitation, spallation, and radioactive decay. We simulated downward-beamed and downward-isotropic proton distributions, with results shown in Table \[tab:p\] and Figure \[fig:proton\_out\]. A pancake distribution of protons is not included, since it has been ruled out for at least one gamma-ray flare by observations of strong redshifts in the nuclear de-excitation lines [@smith03]. The shape of the outgoing spectrum changes little when the angular and spectral distributions of the protons are varied, but the overall gamma-ray luminosity is greater for harder spectral indices and for the more isotropic distribution. This is expected, since pions are produced only by the highest-energy protons and since the downward-isotropic distribution will produce some pions at shallower column depths where photons are better able to escape (see below). The spectra observed at $0.2<\cos\beta<0.4$ extend to higher energy because more photons emerge without scattering.

  --------------- ---------- -------------------- --------- ---------- ---------- ---------- --------- --------- ---------
  Angular         Spectral   Viewing              511/200   511/200    511/8–15   511/8–15   511 keV   200 keV   8–15 MeV
  Distribution    Index      Angle (cos$\beta$)   (sim.)    (cnvlv.)   (sim.)     (cnvlv.)   photons   photons   photons
  --------------- ---------- -------------------- --------- ---------- ---------- ---------- --------- --------- ---------
  Downward iso.   2.2        $>$0.8               57.6      15.3       4.51       0.72       1.9e-3    3.36e-5   4.3e-4
                  3.2                             71.6      17.0       6.40       0.80       3.0e-4    4.17e-6   4.67e-5
  Downward beam   2.2                             61.7      17.1       3.07       0.60       6.6e-4    1.1e-5    2.1e-4
                  3.2                             72.7      17.9       5.22       0.71       1.4e-4    2.0e-6    2.7e-5
  Pancake         2.2                             57.3      14.9       6.83       0.86       3.5e-3    6.14e-5   5.15e-4
                  3.2                             73.3      17.3       10.0       0.93       4.7e-4    6.4e-6    4.7e-5
  Downward iso.   2.2        0.4–0.2              54.5      17.7       0.64       0.45       5.77e-4   1.03e-5   8.8e-4
                  3.2                             74.7      18.3       1.06       0.49       9.28e-5   1.24e-6   8.73e-5
  Downward beam   2.2                             65.5      19.0       0.80       0.45       1.71e-4   2.61e-6   2.13e-4
                  3.2                             75.1      19.1       1.33       0.49       3.99e-5   5.32e-7   3.01e-5
  Pancake         2.2                             54.7      16.9       0.70       0.46       1.1e-3    2.07e-5   1.6e-3
                  3.2                             76.0      17.8       1.21       0.51       1.6e-4    2.17e-6   1.36e-4
  --------------- ---------- -------------------- --------- ---------- ---------- ---------- --------- --------- ---------

  : Flux ratios from simulations of proton injection.[]{data-label="tab:p"}

Pions are produced very deep in the Sun, at column depths of tens of g cm$^{-2}$, as shown in Figure \[fig:piproduct\]. The production rate falls off more quickly with depth for the downward-isotropic distribution, since protons at a shallow angle can go through a large column of solar atmosphere and still produce pions at relatively small depth. Pions have very short lifetimes (2.6$\times 10^{-8}$ s for $\pi^{\pm}$ and 8.4$\times 10^{-17}$ s for $\pi^0$), so they decay where they are produced. The positrons produced by $\pi^{+}$ decay have to travel some distance before they slow down and annihilate. Figure \[fig:annidepth\] shows the depth distributions of positron annihilation events for both injected protons and electrons when both have a spectral index of 2.2 and energy ranges of 100–10000 MeV and 0.1–100 MeV, respectively. Each proton in this case is more than 200 times as likely as an electron to produce a positron, and the proton-produced positrons tend to be created deeper in the solar atmosphere. 
Among the positrons originating from protons, we found that 70% originate from $\pi^{+}$ decay, 20% from pair production by the gamma-rays produced in $\pi^0$ decay, and 10% from more indirect cascade processes (for example, $\pi^{-} \rightarrow e^{-} \rightarrow {\rm bremsstrahlung} \rightarrow {\rm pair~production}$). Most of the annihilation photons that leave the Sun in the simulations experienced Compton scattering, so that they are observed in a continuum below the line. In Figure \[fig:outannidepth\], we plot the depth distributions of only those annihilation events corresponding to photons that escape the Sun. As expected, the deeper the annihilation happens, the fewer photons escape without scattering. More than 70% of the gamma-ray photons we observed experienced Compton scattering. The resulting Compton continuum can mimic two other spectral components just below 511 keV: the continuum from the 3$\gamma$ decay of orthopositronium and the broad lines from $\alpha-\alpha$ reactions. Orthopositronium will only survive collisional disruption before decay at low densities; thus, a 511 keV line with little continuum directly below it can be taken as a sign of annihilation at moderately high density but not great depth. @share04 found that if the 511 keV line was produced under 5–7 g cm$^{-2}$, the Compton continuum would be similar to what is seen below the line by [*RHESSI*]{} from the X17 flare of 2003 October 28. Our simulations show that a significant fraction of positrons annihilate at just this depth if they originate from pions. Positrons from $\beta^{+}$ decay of spallation products will probably have a shallower distribution, since they can be created by more numerous, lower-energy protons of tens of MeV that do not penetrate to these depths.

The X17 Flare of 2003 October 28 {#sec:oct28}
--------------------------------

This flare, the second largest in the “Halloween storms” of 2003, occurred near disk center at a heliocentric angle of $\arccos(0.87)$. 
[*RHESSI*]{} missed the rapid rise phase and peak of the flare because it was crossing the South Atlantic Anomaly, and it only provides data after 11:06 UT. The *RHESSI* data extend from 3 keV to 17 MeV, with energy resolution of $\sim$1–10 keV across this range. The high-energy spectrum of the flare is shown in Figure \[fig:spec\], using data from the rear segments of the [*RHESSI*]{} germanium detectors [@smith02]. The 511 keV line and the 2.2 MeV line from neutron capture on ambient protons are most clearly visible. Since this spectrum is uncorrected for instrument response, many of the counts are shifted to lower energies by Compton scattering in the instrument. A spectral accumulation taken 15 *RHESSI* orbits (about one day) earlier has been subtracted as background, since the geographical position and radiation history of the spacecraft were similar at that time. In Table \[tab:ob\], we list the ratio between the 511 keV line flux and the continua around 200 keV and 8–15 MeV for comparison to our simulations; the results are shown graphically in Figure \[fig:ratio\]. We find that if all or most of the 511 keV line flux resulted from pion interactions, there would have been more 8–15 MeV continuum observed, regardless of the angular distribution of the injected protons (Table \[tab:p\]). Most of the positrons are therefore from other sources, either $\beta^{+}$ decays (not simulated here) or bremsstrahlung gammas from high-energy flare electrons (Table \[tab:e\]). If all or most of the 511 keV line flux came from accelerated electrons, the 8–15 MeV continuum would also be overproduced for an isotropic distribution. Domination of the positron source either by $\beta^{+}$ decay or by mostly downward electrons is allowed. In the comparison above, we did not consider the effect of 3$\gamma$ annihilation on the 511 keV line. 
Most of the annihilation in our model occurs below the photosphere, where the density is high enough that orthopositronium will be destroyed by collisions before it decays; thus we believe that this is a good approximation. However, if there were any 3$\gamma$ annihilation, it would make the 511 keV line to 8–15 MeV ratios even smaller, thus strengthening the conclusion that pion decay does not dominate positron production in this flare.

  --------- ------------ -------------- -------------------- ---------------
  511 keV   Continuum    Continuum      511 keV flux /       511 keV flux /
  line      8–15 MeV     at 200 keV     per-keV flux         8–15 MeV flux
                         (per keV)      at 200 keV
  --------- ------------ -------------- -------------------- ---------------
  43561     5031         5120           8.5                  8.65
  --------- ------------ -------------- -------------------- ---------------

  : Flux ratios for the flare of 2003 October 28 from *RHESSI* data, for comparison with Tables \[tab:e\] and \[tab:p\]. Data in the first two columns are in raw background-subtracted counts.[]{data-label="tab:ob"}

Discussion and Summary {#sec:dis}
======================

We have used the GEANT4 package to simulate the spectra produced by the interactions of high-energy flare particles in the Sun, emphasizing electron bremsstrahlung and pion production by protons. We find that the angular distribution of primary electrons accelerated in solar flares can greatly affect the shape and production rate of outgoing gamma-ray photons. In general, the more the injection is downward beamed, the steeper the outgoing spectrum and the lower the production rate. @kotoku07 found the same result and suggested that limb flares should therefore have harder spectra than disk flares. Extending those results down to lower energies, we find that a downward-biased distribution will cause limb brightening at higher ($\gtrsim$ 0.3 MeV) energy and limb darkening at lower energy. An isotropic distribution will show no bias, and a pancake distribution will produce limb brightening at all energies. 
These results are due to the combined effects of bremsstrahlung and Compton scattering. We modeled two classes of mechanism that can produce positrons in flares: the electromagnetic cascade from accelerated electrons and the decay products of pions. The third source, and perhaps the most important, is radioactive decay, which we postpone studying until we can further evaluate and perhaps improve the nuclear cross-sections available in GEANT4. We found that the annihilation line resulting from accelerated electrons has only a weak dependence on the angular distribution of the electrons. Since the bremsstrahlung continuum has a strong dependence, the ratio of the annihilation line to the continuum can constrain the electron angular distribution if the electron contribution to the positron population can be isolated. @murphy84 and @gan04 used the time history of the C/O de-excitation lines from 4 MeV to 7 MeV to estimate the positron production due to decay of spallation products (i.e., due to accelerated protons below the pion production threshold). Such a technique, combined with observations of photons up to 100 MeV [@arkhangelskaja09 e.g.] to fix the pion contribution to the annihilation line, could allow the electron contribution to be isolated if the three components (electrons, lower-energy protons, and high-energy protons) have different time profiles. If the electron contribution to the annihilation line can be isolated, it becomes a new and valuable diagnostic for the electron spectrum and angular distribution. Future space missions with $\sim$ 10” imaging in the gamma-ray range could allow spatial as well as spectral and temporal information to be used to isolate the electron contribution to the annihilation line. 
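Isolating the electron contribution when the three positron sources have distinct time profiles is, at its simplest, a linear least-squares unmixing problem. The sketch below uses entirely made-up time profiles and amplitudes, purely to illustrate the method:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial
    pivoting (sufficient for this well-conditioned toy problem)."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    n = 3
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical per-source 511 keV time profiles (arbitrary units):
profiles = [
    [1.0, 0.5, 0.2, 0.1],   # prompt: electron bremsstrahlung cascade
    [0.2, 0.8, 1.0, 0.6],   # delayed: beta+ decay of spallation products
    [0.6, 1.0, 0.4, 0.2],   # pion-decay positrons
]
amps_true = [2.0, 1.0, 0.5]
observed = [sum(a * p[t] for a, p in zip(amps_true, profiles))
            for t in range(4)]

# Normal equations of the least-squares fit: (P P^T) x = P y
gram = [[sum(pi[t] * pj[t] for t in range(4)) for pj in profiles]
        for pi in profiles]
rhs = [sum(p[t] * observed[t] for t in range(4)) for p in profiles]
amps_fit = solve3(gram, rhs)
```

With real data, the fit would be done per energy band and with uncertainties, but the principle is the same: distinct time profiles make the mixing matrix well conditioned and the electron amplitude recoverable.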
Comparing our simulations to [*RHESSI*]{} data from the 2003 October 28 flare, we find that the high ratio of the 511 keV line to the 8–15 MeV continuum implies that either radioactivity, bremsstrahlung from downward-biased electrons, or a combination of the two dominates the positron production in this flare. We also find that positrons from $\pi^{+}$ decay annihilate at a column depth of $\sim$ 10 g cm$^{-2}$ in the Sun, and most of the gamma-ray photons they produce experience Compton scattering before escaping, producing a continuum that resembles the 3$\gamma$ decay of orthopositronium. In gamma-ray flares, a hardening break around 0.6 MeV is often found in the power-law component of the spectra. This break may be interpreted as indicating two populations of electrons [@krucker08], or as electron-electron bremsstrahlung becoming dominant above that energy [@kontar07]. Based on our simulation, it is also possible that the component above 0.6 MeV is caused not by primary electrons accelerated in the flares but by electrons and positrons from the decay of pions generated by the accelerated protons. To test this possibility, we will need to compare our simulations with flare data up to $\sim 100$ MeV to fix the normalization of the pion component and determine whether it can account for what is usually interpreted as the hard tail of the bremsstrahlung spectrum. 
The authors thank Ronald Murphy, Gerald Share, Albert Shih, Troy Porter, and Eduard Kontar for contributing by explanation and example to this work and our understanding. This work was supported by NASA grant NNG05G189G-004, NASA contract NAS5-98033, and the China Scholarship Council Postgraduate Scholarship Program. 
Agostinelli, S., et al. 2003, NIMPA, 506, 250 
Arkhangelskaja, I. V., Kotov, Yu. D., Kalmykov, P. A., & Glyanenko, A. S. 2009, Advances in Space Research, 43, 589 
Aschwanden, M. J., & Benz, A. O. 1997, ApJ, 480, 825 
Bret, A. 2009, ApJ, 699, 990 
Chin, M. P. W., & Spyrou, N. M. 2009, Applied Radiation and Isotopes, 67, 406 
Dermer, C. D., & Ramaty, R. 1986, ApJ, 301, 962 
Forrest, D. J., et al. 1980, Sol. Phys., 65, 15 
Gan, W. Q. 2004, Sol. Phys., 219, 279 
Gingerich, O., Noyes, R. W., Kalkofen, W., & Cuny, Y. 1971, Sol. Phys., 18, 347 
Grevesse, N., Asplund, M., & Sauval, A. J. 2007, Space Sci. Rev., 130, 105 
Karlicky, M., & Kasparova, J., A&A, 506, 1437 
Kontar, E. P., & Brown, J. C., ApJL, 653, L149 
Kontar, E. P., Emslie, A. G., Massone, A. M., Piana, M., Brown, J. C., & Prato, M. 2007, ApJ, 670, 857 
Kotoku, J., Makishima, K., Matsumoto, Y., Kohama, M., Terada, Y., & Tamagawa, T. 2007, PASJ, 59, 1161 
Krucker, S., Hurford, G. J., MacKinnon, A. L., Shih, A. Y., & Lin, R. P. 2008, ApJL, 687, L63 
Li, P., Hurley, K., Barat, C., Niel, M., Talon, R., & Kurt, V. 1994, ApJ, 426, 758 
Li, P. 1995, ApJ, 443, 855 
Lin, R. P., et al. 2002, Sol. Phys., 210, 3 
Murphy, R. J., & Ramaty, R. 1984, Adv. Space Res., 4, 127 
Murphy, R. J., Dermer, C. D., & Ramaty, R. 1987, ApJS, 63, 721 
Murphy, R. J., Share, G. H., Skibo, J. G., & Kozlovsky, B. 2005, ApJS, 161, 495 
Salvat, F., Fernandez-Varea, J. M., & Sempau, J. 2006, *PENELOPE-2006: A Code System for Monte Carlo Simulation of Electron and Photon Transport*, Workshop Proceedings, Barcelona, Spain, 4-7 July 2006 
Seltzer, S. M., & Berger, M. J. 1985, Nucl. Instrum. Meth. B, 12, 95 
Share, G. H., & Murphy, R. J. 2005, ApJ, 452, 993 
Share, G. H., Murphy, R. J., Smith, D. M., Schwartz, R. A., & Lin, R. P. 2004, ApJL, 615, L169 
Smith, D. M., et al. 2002, Sol. Phys., 210, 33 
Smith, D. M., Share, G. H., Murphy, R. J., Schwartz, R. A., Shih, A. Y., & Lin, R. P. 2003, ApJL, 595, L81 
Vestrand, W. T., Forrest, D. J., Chupp, E. L., Rieger, E., & Share, G. H. 1987, ApJ, 322, 1010 
![A comparison between the nuclear de-excitation spectra generated with two different physics modules in GEANT4. The details of the simulation parameters are in the text. 
The thick and thin solid lines represent the total spectrum and the contribution of proton inelastic processes respectively, from the module HadronPhysicsQGSP\_BERT\_HP. The thick and thin dashed lines are from the module HadronPhysicsQGSP\_BIC\_HP.[]{data-label="fig:modcompare"}](OutFileCompareNew.eps){width="60.00000%"} ![A comparison between the pion-decay spectra generated with the HadronPhysicsQGSP\_BERT\_HP module of GEANT4 and the code developed at NRL by R. Murphy et al.; the HadronPhysicsQGSP\_BIC\_HP module gives similar results. *Dash dot:* $\pi^0$ decay. *Dotted:* bremsstrahlung of $e^{\pm}$ from the decay of $\pi^{\pm}$. *Dashed:* positron annihilation. *Thin solid:* the total spectrum from GEANT4. *Thick solid:* the total spectrum from the NRL code.[]{data-label="fig:pioncompare"}](pion_compare.eps){width="60.00000%"} ![ The outgoing gamma-ray spectrum for different electron spectra and distributions. *Panel A*: Isotropic distribution. *Panel B*: Directly downward-beamed distribution. *Panel C*: the outgoing spectrum for different injected angular distributions with electron spectral index 2.2. From top down: (1) isotropic distribution; (2) pancake distribution uniform within angles $\theta$ such that $-0.3<\cos \theta <0.3$; (3) downward-isotropic distribution; and (4) downward-beamed distribution. *Panel D*: the gamma-ray spectrum from directly downward-beamed electrons with a spectral index 2.2, with the Compton scattering process turned on (solid) and off (dotted). All electron spectra are cut off at 0.1 MeV and 100 MeV and the outgoing spectra are collected at angles $\beta$ from the solar normal such that $\cos \beta > 0.8$.[]{data-label="fig:e_all"}](e_all_new.eps){width="80.00000%"} ![ The gamma-ray spectrum recorded at different outgoing angles $\beta$ from the solar normal. The solid line is $\cos \beta > 0.8$ and the dashed line is $0.2 < \cos \beta < 0.4$. *Panel A*: downward-beamed distribution with spectral index 2.2. 
*Panel B*: downward-beamed distribution with spectral index 3.2. *Panel C*: downward-isotropic distribution with spectral index 2.2. *Panel D*: downward-isotropic distribution with spectral index 3.2. *Panel E*: isotropic distribution with spectral index 2.2. *Panel F*: pancake distribution with spectral index 2.2 (isotropic within angles $\theta$ of the solar normal such that $-0.2<\cos \theta <0.2$). []{data-label="fig:a_all"}](a_all_new.eps){width="80.00000%"} ![ The outgoing gamma-ray spectrum for different parameters of injected protons. *Dash dot:* $\pi^0$ decay. *Dotted*: bremsstrahlung of $e^{\pm}$ from the decay of $\pi^{\pm}$. *Dashed*: positron annihilation. *Thin solid:* the total spectrum. *Panels A and B*: downward-beamed distributions with spectral indices 2.2 and 3.2, respectively. *Panels D and E*: downward-isotropic distributions with spectral indices 2.2 and 3.2, respectively. The above four panels are for $\cos \beta>0.8$. *Panels C and F*: downward-beamed and downward-isotropic distributions with spectral index 2.2, recorded at $0.2<\cos \beta<0.4$. All proton spectra are cut off at 100 MeV and 10000 MeV.[]{data-label="fig:proton_out"}](proton_out_new.eps){width="90.00000%"} ![ The pion-producing depth distribution for different parameters of injected protons. *Panels A and B*: downward-beamed distributions with spectral indices 2.2 and 3.2, respectively. *Panels C and D*: downward-isotropic distributions with spectral indices 2.2 and 3.2, respectively. In each panel, *solid*: $\pi^{+}$; *dashed*: $\pi^0$; *dotted*: $\pi^{-}$. []{data-label="fig:piproduct"}](PionProduct.eps){width="80.00000%"} ![ The distribution of the depth where positron annihilation occurs. *Solid line*: positrons generated from injected protons. *Dashed line*: positrons generated from injected electrons. Both the electrons and protons follow a downward-isotropic distribution with spectral index 2.2. 
[]{data-label="fig:annidepth"}](AnniFile_new.eps){width="60.00000%"} ![ The distribution of the depth where positron annihilation occurs, corresponding to photons collected at $\cos \beta>0.8$. *Dashed line*: photons that experience Compton scattering before being detected. *Dotted line*: photons that do not experience Compton scattering. *Solid line*: total. *Panels A and B*: downward-beamed distributions with spectral indices 2.2 and 3.2, respectively. *Panels C and D*: downward-isotropic distributions with spectral indices 2.2 and 3.2, respectively.[]{data-label="fig:outannidepth"}](OutAnniFile.eps){width="80.00000%"} ![ The overall spectrum of the flare of 28 October 2003, from *RHESSI* (11:06 to 11:26 UT). A small discontinuity at 375 keV, a peak at 3 MeV, and a dip around 9 MeV are known instrumental artifacts that are accounted for when model spectra are fitted to these data. The effect of Compton scattering of solar photons in the instrument is most clearly seen at 2 MeV, where there is a shoulder due to Compton backscattering of 2.2 MeV neutron capture photons in [*RHESSI*]{}’s germanium detectors.[]{data-label="fig:spec"}](rhessi_spec.eps){width="80.00000%"} ![The ratio of the 511 keV line flux to the continua around 200 keV and 8–15 MeV. The meanings of the different symbols are shown in the figure. If all of the 511 keV photons came from protons and from isotropically injected electrons, the 511 keV/8–15 MeV ratio would fall well below the observed value; a combination of proton injection (plus symbol) and electron injection that is not completely isotropic (cross symbol) may reproduce the observation. See the text for more details.[]{data-label="fig:ratio"}](flux_ratio.eps){width="80.00000%"}
--- abstract: 'We performed Self-Consistent Greens Function (SCGF) calculations for symmetric nuclear matter using realistic nucleon-nucleon (NN) interactions and effective low-momentum interactions ($V_{low-k}$), which are derived from such realistic NN interactions. We compare the spectral distributions resulting from such calculations. We also introduce a density-dependent effective low-momentum interaction which accounts for the dispersive effects in the single-particle propagator in the medium.' author: - 'P. Bożek[^1]' - 'D. J. Dean[^2]' - 'H. Müther[^3]' title: Correlations and effective interactions in nuclear matter --- Introduction ============ The description of bulk properties of nuclear systems starting from realistic nucleon-nucleon (NN) interactions is a long-standing and unsolved problem. Various models for the NN interaction have been developed, which describe the experimental NN phase shifts up to the threshold for pion production with high accuracy[@cdbonn; @arv18; @nijmw; @n3lo]. A general feature of all these interaction models is the presence of strong short-range and tensor components, which lead to corresponding correlations in the nuclear many-body wave-function. Hartree-Fock mean-field theory, which represents the lowest-order many-body calculation one can perform with such realistic NN interactions, fails to produce bound nuclei [@reviewartur; @localint] precisely because Hartree-Fock does not fully incorporate many-body correlation effects. That correlations beyond the mean field are important is supported by experiments exploring the spectral distribution of the single-particle strength. One experimental fact found in all nuclei is the global depletion of the Fermi sea. A recent experiment from NIKHEF puts this depletion of the proton Fermi sea in ${}^{208}$Pb at a little less than 20% [@bat01] in accordance with earlier nuclear matter calculations [@vond1]. 
Another consequence of the presence of short-range and tensor correlations is the appearance of high-momentum components in the ground state wave-function to compensate for the depleted strength of the mean field. Recent JLab experiments [@rohe:04] indicate that the amount and location of this strength are consistent with earlier predictions for finite nuclei [@mudi:94] and calculations of infinite matter [@frmu:03]. These data and their analysis, however, are not sufficient to allow for a detailed comparison with the predictions derived from the various interaction models at high momenta. In this paper, we want to investigate the possibility of separating the predictions for correlations at low and medium momenta, which are constrained by the NN scattering matrix below pion threshold, from the high-momentum components, which may strongly depend on the underlying model for the NN interaction. For that purpose we will perform nuclear many-body calculations within a model space that allows for the explicit evaluation of low-momentum correlations. The effective Hamiltonian for this model space will be constructed from a realistic interaction to account for correlations outside the model space. This concept of a model space and effective operators appropriately renormalized for this model space has a long history in approaches to nuclear many-body physics. As an example we mention the effort to evaluate effective operators to be used in Hamiltonian diagonalization calculations of finite nuclei. For a review on this topic see e.g. [@morten:04]. The concept of a model space for the study of infinite nuclear matter was used e.g. by Kuo et al.[@kumod1; @kumod2; @kumod3]. Also the Brueckner-Hartree-Fock (BHF) approximation can be considered a model space approach. In this case one restricts the model space to just one Slater determinant and determines the effective interaction through a calculation of the G-matrix, the solution of the Bethe-Goldstone equation. 
The effective Hamiltonians for such model space calculations have frequently been evaluated within Rayleigh-Schrödinger perturbation theory, leading to a non-hermitian and energy-dependent result. The energy-dependence can be removed by considering the so-called folded diagrams, as has been discussed e.g. by Brandow[@brandow:67] and Kuo[@kuo:71]. We note that the folded-diagram expansion yields effective interaction terms among three and more particles, even if one considers a realistic interaction with two-body terms only[@polls:83; @polls:85]. In recent years the folded-diagram technique has been applied to derive an effective low-momentum potential $V_{low-k}$[@bogner:03] from a realistic NN interaction. By construction, $V_{low-k}$ potentials reproduce the deuteron binding energy, the low-energy phase shifts and the half-on-shell $T$ matrix calculated from the underlying realistic NN interaction up to the chosen cut-off parameter. The resulting $V_{low-k}$ turns out to be rather independent of the original NN interaction if this cut-off parameter for the relative momenta is below the value of the pion-production threshold in NN scattering. The off-shell characteristics of the $V_{low-k}$ effective interaction are not constrained by experimental data and can influence the many-body character of the interaction. For finite nuclei we find that one does indeed obtain different binding energies for $^{16}$O depending on the underlying NN interaction from which one derives the $V_{low-k}$ interaction. For example, using coupled-cluster techniques at the singles and doubles level (CCSD) [@dean04] we find binding energies for $^{16}$O at a lab-momentum cutoff of $\Lambda=2.0$ fm$^{-1}$ to be $-143.4\pm 0.4$ MeV and $-153.3\pm 0.4$ MeV for the N$^3$LO [@n3lo] and CD-Bonn two-body interactions, respectively. 
The CCSD calculations were carried out in up to 7 major oscillator shells (with extrapolations to an infinite model space) using the intrinsic Hamiltonian defined as $H=T-T_{cm}+V_{low-k}$ where $T_{cm}$ is the center of mass kinetic energy. Attractive energies are obtained if such a $V_{low-k}$ interaction is used in a Hartree-Fock calculation of nuclear matter or finite nuclei[@corag:03; @kuck:03]. High-momentum correlations, which are required to obtain bound nuclear systems from a realistic NN interaction (see above), are taken into account in the renormalization procedure which leads to $V_{low-k}$. Supplementing these Hartree-Fock calculations with corrections up to third order in the Goldstone perturbation theory leads to results for the ground-state properties of $^{16}$O and $^{40}$Ca, which are in fair agreement with the empirical data[@corag:03]. (One should note that $T_{cm}$ was not included in these calculations.) Calculations in infinite matter demonstrate that $V_{low-k}$ seems to be quite a good approximation for the evaluation of low-energy spectroscopic data. The results for the pairing derived from the bare interaction are reproduced[@kuck:03]. The predictions of pairing properties also agree with results obtained with phenomenological interactions like the Gogny force[@gogny; @sedrak:03]. The $V_{low-k}$ interaction also yields a good approximation for the calculated binding energy of nuclear matter at low densities. At high densities, however, BHF calculations using $V_{low-k}$ yield too much binding energy and do not reproduce the saturation feature of nuclear matter[@kuck:03]. This is due to the fact that $V_{low-k}$ does not account for the effects of the dispersive quenching of the two-particle propagator, as is done e.g. in the Brueckner $G$-matrix derived from a realistic NN interaction. The saturation can be obtained if a three-nucleon force is added to the Hamiltonian[@bogner:05]. 
An alternative technique to determine an effective Hamiltonian for a model space calculation is based on a unitary transformation of the Hamiltonian. It has been developed by Suzuki[@suzuki:82] and leads to an energy-independent, Hermitian effective interaction. The unitary-model-operator approach (UMOA) has also been used to evaluate the ground-state properties of finite nuclei[@suz13; @suz15; @fuji:04; @roth:05]. In the present study we are going to employ the unitary transformation technique to determine an effective interaction, which corresponds to the $V_{low-k}$ discussed above. This effective interaction will then be used in self-consistent Green’s function (SCGF) calculations of infinite nuclear matter. Various groups have recently developed techniques to solve the corresponding equations and determine the energy- and momentum-distribution of the single-particle strength in a consistent way[@frmu:03; @bozek0; @bozek1; @bozek2; @dewulf:03; @rd; @frmu:05]. Therefore we can study the correlation effects originating from $V_{low-k}$ inside the model space and compare them to the correlations derived from the bare interaction. Furthermore we use the unitary transformation technique to determine an effective interaction which accounts for dispersive effects missing in the original $V_{low-k}$ (see discussion above). After this introduction we will present the method for evaluating the effective interaction in section 2 and briefly review the basic features of the SCGF approach in section 3. The results of our investigations are presented in section 4, which is followed by the conclusions. Effective interaction ===================== For the definition and evaluation of an effective interaction to be used in a nuclear structure calculation, which is restricted to a subspace of the Hilbert space, the so-called model space, we follow the usual notation and define a projection operator $P$, which projects onto this model space. 
The operator projecting on the complement of this subspace is identified by $Q$ and these operators satisfy the usual relations like $P+Q=1$, $P^2=P$, $Q^2=Q$, and $PQ=0=QP$. It is the aim of the Unitary Model Operator Approach (UMOA) to define a unitary transformation $U$ in such a way, that the transformed Hamiltonian does not couple the $P$ and $Q$ space, i.e. $QU^{-1}HUP=0$. For a many-body system the resulting Hamiltonian can be evaluated in a cluster expansion, which leads to many-body terms. This is very similar to the folded diagram expansion, which has been discussed above. In UMOA studies of finite nuclei terms up to three-body clusters have been evaluated[@suz13; @suz15] indicating a convergence of the expansion up to this order. In the present study we would like to determine an effective two-body interaction and therefore consider two-body systems only. We define the effective interaction as $$V_{eff} = U^{-1}\left( h_0 + v_{12}\right) U - h_0\,,\label{eq:veff1}$$ with $v_{12}$ representing the bare NN interaction. The operator $h_0$ denotes the one-body part of the two-body system and contains the kinetic energy of the interacting particles. This formulation will lead to an effective interaction corresponding to $V_{low-k}$. Since, however, we want to determine an effective interaction of two nucleons in the medium of nuclear matter, we will also consider the possibility to add a single-particle potential to $h_0$. Note that in any case $h_0$ commutes with the projection operators $P$ and $Q$. The operator for the unitary transformation $U$ can be expressed as[@suz24] $$U=(1+\omega-\omega ^{\dagger})(1+\omega \omega ^{\dagger} +\omega ^{\dagger}\omega )^{-1/2}\,,\label{eq:veff2}$$ with an operator $\omega$ satisfying $\omega=Q\omega P$ such that $\omega^2 = \omega^{\dagger 2} = 0$. In the following we will describe how to determine the matrix elements of this operator $\omega$. 
As a first step we solve the two-body eigenvalue equation $$\left( h_0 + v_{12}\right)\vert \Phi _{k}\rangle =E_{k}\vert \Phi _{k}\rangle\,. \label{eq:veff2a}$$ This can be done separately for each partial wave of the two-nucleon problem. Partial waves are identified by total angular momentum $J$, spin $S$ and isospin $T$. The relative momenta are appropriately discretized such that we can reduce the eigenvalue problem to a matrix diagonalization problem. Momenta below the cut-off momentum $\Lambda$ define the $P$ space and will subsequently be denoted by $\vert p\rangle$ and $\vert p'\rangle$. Momenta representing the $Q$ space will be labeled by $\vert q\rangle$ and $\vert q'\rangle$, while states $\vert i\rangle$, $\vert j\rangle$, $\vert k \rangle$ and $\vert l \rangle$ refer to basis states of the total $P+Q$ space. From the eigenstates $\vert \Phi _{k}\rangle$ we determine those $N_P$ ($N_P$ denoting the dimension of the $P$ space) eigenstates $\vert \Phi _{p}\rangle$, which have the largest overlap with the $P$ space and determine $$\label{eq:veff3} \langle q\vert\omega\vert p'\rangle =\sum_{p=1}^{N_P}\langle q\vert Q\vert \Phi _{p}\rangle \langle \tilde{\varphi}_{p}\vert p'\rangle,$$ with $\vert \varphi_{p}\rangle = P\vert \Phi_{p}\rangle$ and $\langle \tilde{\varphi}_{p}\vert$ denoting the biorthogonal state, satisfying $$\sum_{p}\langle \tilde{\varphi} _{k}|p\rangle \langle p|\varphi _{k'}\rangle =\delta _{k,k'} \quad \mbox{and} \quad \sum_{k}\langle p'|\tilde{\varphi} _{k}\rangle \langle \varphi _{k}|p\rangle =\delta _{p,p'}\,.\label{eq:veff4}$$ In the next step we solve the eigenvalue problem in the $P$ space $$\omega ^{\dagger}\omega\vert\chi_{p}\rangle =\mu _{p}^{2}|\chi_{p} \rangle\, ,\label{eq:veff5}$$ and use the results to define $$\vert\nu _{p}\rangle =\frac{1}{\mu _{p}}\omega \vert\chi _{p}\rangle ,\label{eq:veff5a}$$ which due to the fact that $\omega=Q\omega P$, can be written as $$\label{eq:veff5b} \langle q|\nu _{p}\rangle =\frac{1}{\mu _{p}} 
\sum_{p'}\langle q|\omega |p'\rangle \langle p'|\chi_{p}\rangle\, .$$ Using Eqs. (\[eq:veff5\]) - (\[eq:veff5b\]) and the representation of $U$ in Eq. (\[eq:veff2\]), the matrix elements of the unitary transformation operator $U$ can be written $$\begin{aligned} \label{eq:Up'p} \langle p''|U|p'\rangle &=&\langle p''|(1+\omega^{\dagger}\omega )^{-1/2}|p'\rangle \nonumber \\ &=&\sum_{p=1}^{N_P}(1+\mu_{p}^{2})^{-1/2} \langle p''|\chi_{p}\rangle \langle \chi_{p}|p'\rangle \,,\end{aligned}$$ $$\begin{aligned} \label{eq:Uqp} \langle q|U|p'\rangle &=&\langle q|\omega (1+\omega^{\dagger}\omega )^{-1/2}|p'\rangle \nonumber \\ &=&\sum_{p=1}^{N_P}(1+\mu_{p}^{2})^{-1/2}\mu _{p} \langle q|\nu _{p}\rangle \langle \chi _{p}|p'\rangle\, ,\end{aligned}$$ $$\begin{aligned} \label{eq:Upq} \langle p'|U|q\rangle &=&-\langle p'|\omega ^{\dagger}(1+\omega \omega ^{\dagger})^{-1/2} |q\rangle \nonumber \\ &=&-\sum_{p=1}^{N_P}(1+\mu_{p}^{2})^{-1/2}\mu_{p} \langle p'\vert\chi_{p}\rangle \langle \nu_{p}\vert q\rangle \,, \end{aligned}$$ $$\begin{aligned} \label{eq:Uq'q} \langle q'|U|q\rangle &=&\langle q'|(1+\omega \omega ^{\dagger})^{-1/2}\vert q\rangle \nonumber \\ &=&\sum_{p=1}^{N_P}\{(1+\mu_{p}^{2})^{-1/2}-1\} \langle q'|\nu _{p}\rangle \langle \nu _{p}|q\rangle + \delta _{q,q'}\,.\end{aligned}$$ These matrix elements of $U$ can then be used to determine the matrix elements of the effective interaction $V_{eff}$ according to Eq.(\[eq:veff1\]). They might also be used to define matrix elements of other effective operators. Self-consistent Green’s function approach ========================================= One of the key quantities within the Self-consistent Green’s Function (SCGF) approach is the retarded single-particle (sp) Green’s function or sp propagator $G(k,\omega)$ (see e.g.[@diva:05]). 
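The unitary-transformation construction of the previous section can be illustrated with a small numerical sketch before turning to the Green's function formalism. A random symmetric toy matrix plays the role of $h_0+v_{12}$ here (all dimensions and numbers are purely illustrative, not a nuclear interaction): the sketch selects the $N_P$ eigenstates with largest $P$-space overlap, builds $\omega$ and $U$, and verifies the decoupling $QU^{-1}HUP=0$.

```python
import numpy as np

rng = np.random.default_rng(1)
N, NP = 8, 3                            # full space and model (P) space dimensions (toy values)
h0 = np.diag(np.linspace(0.0, 7.0, N))  # toy one-body part
v = rng.normal(scale=0.3, size=(N, N))
v12 = 0.5*(v + v.T)                     # toy symmetric two-body interaction
H = h0 + v12

E, C = np.linalg.eigh(H)                # two-body eigenvalue problem, Eq. (veff2a)
# select the NP eigenstates with the largest overlap with the P space
sel = np.argsort(np.sum(C[:NP, :]**2, axis=0))[-NP:]
phiP, phiQ = C[:NP, sel], C[NP:, sel]   # P- and Q-space components of the selected states
tilde = np.linalg.inv(phiP).T           # biorthogonal P-space states, Eq. (veff4)
omega = phiQ @ tilde.T                  # <q|omega|p'>, Eq. (veff3)

# U = (1 + omega - omega^dag) (1 + omega omega^dag + omega^dag omega)^(-1/2), Eq. (veff2)
A = np.zeros((N, N))
A[NP:, :NP] = omega
M = np.eye(N) + A @ A.T + A.T @ A       # block-diagonal, symmetric, positive definite
w, S = np.linalg.eigh(M)
U = (np.eye(N) + A - A.T) @ (S @ np.diag(w**-0.5) @ S.T)

Heff = U.T @ H @ U                      # transformed Hamiltonian
Veff = Heff[:NP, :NP] - h0[:NP, :NP]    # effective interaction in the P space, Eq. (veff1)
```

For these toy parameters the transformed Hamiltonian comes out block diagonal, and the $P$-space block $h_0+V_{eff}$ reproduces the selected eigenvalues exactly, as the construction requires.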
Its imaginary part can be used to determine the spectral function $$\label{spec_g2} A(k,\omega)=-2\,{\mathrm{Im}}\,G(k,\omega+{\mathrm{i}}\eta)\,.$$ The spectral function provides information about the energy- and momentum-distribution of the single-particle strength, i.e. the probability for adding or removing a particle with momentum $k$ and leaving the residual system at an excitation energy related to $\omega$. In the limit of the mean-field or quasi-particle approximation the spectral function is represented by a $\delta$-function and takes the simple form $$A(k,\omega)=2\pi\delta(\omega -\varepsilon_k) \,,\label{eq:specqp}$$ with the quasi-particle energy $\varepsilon_k$ for a particle with momentum $k$. The sp Green’s function can be obtained from the solution of the Dyson equation, which reduces for the system of homogeneous infinite matter to a simple algebraic equation $$\left[\omega -\frac{k^2}{2m}-\Sigma(k,\omega)\right] G(k,\omega) = 1\,,\label{eq:dyson}$$ where $\Sigma(k,\omega)$ denotes the complex self-energy. The self-energy can be decomposed into a generalized Hartree-Fock part plus a dispersive contribution $$\label{spec_Sigma} \Sigma(k,\omega)=\Sigma^{HF}(k)-\frac{1}{\pi}\int_{-\infty}^{+\infty} {\mathrm{d}}\omega^{\prime} \, \frac{{\mathrm{Im}}\Sigma(k,\omega^{\prime}+ {\mathrm{i}}\eta)} {\omega-\omega^{\prime}}.$$ The next step is to obtain the self-energy in terms of the in-medium two-body scattering $T$ matrix. 
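Before doing so, the content of Eqs. (\[spec\_g2\])-(\[eq:dyson\]) can be made concrete with a toy self-energy consisting of a constant Hartree-Fock part and a constant width (all numbers below are illustrative, not output of the calculations discussed here): the Dyson equation then yields a Lorentzian spectral function, normalized to one and peaked at the quasiparticle energy.

```python
import numpy as np

# toy single-particle input at one momentum k (illustrative values, in MeV)
k2_2m = 40.0          # kinetic energy k^2/2m
sigma_hf = -50.0      # generalized Hartree-Fock self-energy
gamma = 5.0           # constant width, i.e. Im Sigma = -gamma/2

omega = np.linspace(-400.0, 400.0, 200001)
G = 1.0/(omega - k2_2m - sigma_hf + 0.5j*gamma)   # Dyson equation, Eq. (eq:dyson)
A = -2.0*G.imag                                   # spectral function, Eq. (spec_g2)

norm = np.trapz(A, omega)/(2.0*np.pi)             # sum rule: should be close to 1
e_qp = omega[np.argmax(A)]                        # quasiparticle peak at k2_2m + sigma_hf
```

As the width `gamma` is sent to zero the Lorentzian collapses to the quasiparticle $\delta$-function of Eq. (\[eq:specqp\]).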
It is possible to express ${\mathrm{Im}}\Sigma(k,\omega+{\mathrm{i}}\eta)$ in terms of the retarded $T$ matrix [@frmu:03; @bozek3; @kadanoff] (for clarity, spin and isospin quantum numbers are suppressed) $$\begin{aligned} \label{im_sigma} {\mathrm{Im}}\Sigma(k,\omega+{\mathrm{i}}\eta)&=& \frac{1}{2}\int \frac{{\mathrm{d}}^3k^{\prime}}{(2\pi)^3} \int_{-\infty}^{+\infty} \frac{{\mathrm{d}}\omega^{\prime}}{2\pi} \left<{\mathbf{kk}}^{\prime}| {\mathrm{Im}}T(\omega+\omega^{\prime}+{\mathrm{i}}\eta)| {\mathbf{kk}}^{\prime}\right> \nonumber \\ && \qquad \times [f(\omega^{\prime})+b(\omega+\omega^{\prime})] A(k^{\prime},\omega^{\prime}).\end{aligned}$$ Here and in the following $f(\omega)$ and $b(\omega)$ denote the Fermi and Bose distribution functions, respectively. These functions depend on the chemical potential $\mu$ and the inverse temperature $\beta$ of the system. The in-medium scattering matrix $T$ is to be determined as a solution of the integral equation $$\begin{aligned} \left<{\mathbf{kk}}^{\prime}|T(\Omega+{\mathrm{i}}\eta)| {\mathbf{pp}}^{\prime}\right> & = &\left<{\mathbf{kk}}^{\prime}|V| {\mathbf{pp}}^{\prime}\right> + \int \frac{d^3q\,d^3q^\prime}{\left(2\pi\right)^6} \left<{\mathbf{kk}}^{\prime}|V| {\mathbf{qq}}^{\prime}\right>G^0_{\mathrm{II}}(\mathbf{qq}^\prime,\Omega+i\eta) \nonumber \\ &&\quad\quad\times \left<{\mathbf{qq}}^{\prime}|T(\Omega+{\mathrm{i}}\eta)| {\mathbf{pp}}^{\prime}\right>\,,\label{eq:tscat0}\end{aligned}$$ where $$\label{two_pp} G^0_{\mathrm{II}}(k_1,k_2,\Omega+i\eta)= \int_{-\infty}^{+\infty}\frac{{\mathrm{d}}\omega}{2\pi} \int_{-\infty}^{+\infty}\frac{{\mathrm{d}}\omega^{\prime}}{2\pi} A(k_1,\omega)A(k_2,\omega^{\prime}) \frac{1-f(\omega)-f(\omega^{\prime})} {\Omega-\omega-\omega^{\prime}+i\eta}\,.$$ 
The matrix elements of the two-body interaction $V$ represent either the bare NN interaction $v_{12}$ or the effective interaction $V_{eff}$, in which case the integrals are cut at the cut-off parameter $\Lambda$. The in-medium scattering equation (\[eq:tscat0\]) can be reduced to a set of one-dimensional integral equations if the two-particle Green’s function in (\[two\_pp\]) is written as a function of the total and relative momenta of the interacting pair of nucleons and the usual angle-average approximation is employed (see *e.g.* [@angleav] for the accuracy of this approximation). This leads to integral equations in the usual partial waves, which can be solved very efficiently if the two-body interaction is represented in terms of separable interaction terms of a sufficient rank[@bozek1]. Finally, we consider the generalized Hartree-Fock contribution to the self-energy in (\[spec\_Sigma\]), which takes the form $$\label{hf_sigma} \Sigma^{HF}(k) = \frac{1}{2} \int \frac{{\mathrm{d}}^3k^{\prime}}{(2\pi)^3} \left<{\mathbf{k}},{\mathbf{k}}^{\prime}\right| V \left|{\mathbf{k}},{\mathbf{k}}^{\prime}\right> n(k^{\prime}),$$ where $n(k)$ is the correlated momentum distribution, which is to be calculated from the spectral function by $$\label{occupation} n(k)= \int_{-\infty}^{+\infty} \frac{{\mathrm{d}}\omega}{2\pi} f(\omega) A(k,\omega).$$ Also the energy per particle, $E/A$, can be calculated from the spectral function using Koltun’s sum rule $$\label{eda} \frac{E}{A}=\frac{1}{\rho} \int \frac{{\mathrm{d}}^3k}{(2\pi)^3} \int_{-\infty}^{+\infty} \frac{{\mathrm{d}}\omega}{2\pi} \frac{1}{2}\left(\frac{k^2}{2m}+\omega\right)A(k,\omega)f(\omega)\,.$$ Eqs.(\[spec\_g2\])-(\[occupation\]) define the so-called $T$-matrix approach to the SCGF equations. They form a symmetry conserving approach in the sense of [@kadanoff], which means that thermodynamical relations like the Hughenholtz-Van Hove theorem[@hugenholtz; @bozek1] are obeyed. 
The Brueckner-Hartree-Fock (BHF) approximation, which is very popular in nuclear physics, can be regarded as a simple approximation to this $T$-matrix approach. In the BHF approximation one reduces the spectral function $A(k,\omega)$ to the quasiparticle approximation (\[eq:specqp\]). Furthermore one ignores the hole-hole scattering terms in the scattering Eq.(\[eq:tscat0\]), which means that one replaces $$\left(1-f(\omega)-f(\omega')\right) \quad\rightarrow \quad \left(1-f(\omega) \right) \left(1-f(\omega')\right)\,, \label{eq:pauliop}$$ which is the usual Pauli operator (at finite temperature). This reduces the in-medium scattering equation to the Bethe-Goldstone equation. The removal of the hole-hole scattering terms leads to real self-energies $\Sigma(k,\omega)$ at energies $\omega$ below the chemical potential, i.e. for the hole states. Results and discussion ====================== In the following we discuss results for symmetric nuclear matter obtained from Self-Consistent Green’s Function (SCGF) calculations. These calculations are either performed in the complete Hilbert space using the bare CD-Bonn [@cdbonn] interaction or in the model space, which is defined by a cut-off parameter $\Lambda$ = 2 fm$^{-1}$ in the two-body scattering equation, employing the corresponding effective interaction $V_{low-k}$, which is derived from the CD-Bonn interaction using the techniques described in Sect. II. We note that using this unitary model operator technique we were able to reproduce the results of the BHF calculations presented in [@kuck:03], which used tabulated matrix elements of [@bogner:03], with good accuracy. The NN interaction has been restricted to partial waves with total angular momentum $J$ less than 6. Results for the calculated energy per nucleon are displayed in Fig. \[fig:becd1\] for various densities, which are labeled by the corresponding Fermi momentum $k_F$. 
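Before comparing interactions, the scale of these energies can be checked against Eqs. (\[occupation\]) and (\[eda\]) in the $T=0$ free-gas quasiparticle limit, where the Koltun sum rule must reduce to the textbook value $E/A=\tfrac{3}{5}E_F$ (the constants and $k_F$ below are standard illustrative values, not output of the calculations discussed here):

```python
import numpy as np

hbarc = 197.327   # MeV fm
m = 938.918       # nucleon mass (MeV)
kF = 1.36         # Fermi momentum (fm^-1), close to the empirical saturation density
nu = 4            # spin-isospin degeneracy of symmetric nuclear matter

k = np.linspace(0.0, 3.0, 30001)           # momentum grid (fm^-1)
ek = (hbarc*k)**2/(2.0*m)                  # free single-particle spectrum (MeV)
nk = (k < kF).astype(float)                # Eq. (occupation) in the T=0 quasiparticle limit

rho = nu*np.trapz(k**2*nk, k)/(2.0*np.pi**2)            # density from n(k)
# Koltun sum rule, Eq. (eda), with A(k, omega) = 2*pi*delta(omega - ek):
E_A = nu*np.trapz(k**2*0.5*(ek + ek)*nk, k)/(2.0*np.pi**2*rho)
EF = (hbarc*kF)**2/(2.0*m)                 # Fermi energy (MeV)
```

For $k_F=1.36$ fm$^{-1}$ this yields the expected free-gas kinetic energy of about 23 MeV per nucleon, against which the attractive contributions discussed below are to be read.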
The effective interaction $V_{low-k}$ accounts for a considerable fraction of the short-range NN correlations, which are induced by realistic interactions like the CD-Bonn interaction. Therefore, already the Hartree-Fock approximation using this $V_{low-k}$ yields reasonable results for the energies as can be seen from the dotted line of Fig. \[fig:becd1\]. Hartree-Fock calculations using the bare CD-Bonn interaction yield positive energies ranging between 2 MeV per nucleon and 15 MeV per nucleon for the densities considered in this figure. Note that the CD-Bonn interaction should be considered a soft realistic interaction. Interaction models, which are based on local potentials, like the Argonne interaction [@arv18], yield more repulsive Hartree-Fock energies [@localint]. ![(Color online) Binding energy per nucleon for symmetric nuclear matter as function of the Fermi momentum: Results of self-consistent $T$-matrix calculations for the CD-Bonn potential (dashed line) are compared to results of calculations using $V_{low-k}$ with $\Lambda=2$fm$^{-1}$ in the Hartree-Fock approximation (dotted line), the self-consistent second order approximation (dashed-dotted line) and for the self-consistent $T$-matrix approximation (solid line) within the model space. []{data-label="fig:becd1"}](becd1.eps){width="10.5cm"} The inclusion of correlations within the model space yields a substantial decrease of the energy. The self-consistent $T$-matrix approach provides additional attraction ranging between 6 MeV per nucleon at a density of 0.4 $\rho_0$ (with $\rho_0$ the empirical saturation density) and 3 MeV per nucleon at 2 $\rho_0$. The fixed cut-off parameter $\Lambda$ seems to reduce the phase-space available for correlations beyond the mean-field approach at higher densities. Therefore the energy calculated in the self-consistent $T$-matrix approach reduces to the Hartree-Fock result at large densities. Fig. 
\[fig:becd1\] also displays the energies resulting from a SCGF calculation within the model space, in which the $T$-matrix has been approximated by the corresponding scattering matrix including only terms up to second order in the NN interaction $V$. The results of such second-order calculations in $V_{low-k}$ are represented by the dashed-dotted line and show a very good agreement with the model-space calculations including the full $T$-matrix. This confirms the validity of approaches which consider correlation effects within the model space in a perturbative way. All these model space calculations using $V_{low-k}$, however, fail to reproduce the results of the SCGF calculations obtained in the complete space using the bare NN interaction, which are labeled by CD Bonn T-matrix in Fig. \[fig:becd1\]. In particular, the model space calculations yield too attractive energies at high densities and therefore do not exhibit a minimum for the energy as a function of density. This confirms the results of the BHF calculations of [@kuck:03]. It has been argued [@kuck:03] that this overestimate of the binding energy at high densities is due to the fact that $V_{low-k}$ does not account for the quenching of correlation effects, caused by the Pauli principle and by dispersive effects in the single-particle propagator, which becomes more important with increasing density. Therefore, we try to account for the dispersive quenching effects by adopting the following two-step procedure. In a vein similar to the use of a G-matrix within a self-consistent BHF calculation, as a first step we perform BHF calculations using $V_{low-k}$. The resulting single-particle spectrum is approximated by an effective mass parameterization. This parameterization of the mean field is employed to define the single-particle operator $h_0$, used in Eq. (\[eq:veff1\]) and the following equations of Sect. II (see also [@fuji:04]).
The resulting effective interaction is used again for a BHF calculation within the model space, leading to an update of the mean field parameterization. The procedure is repeated until a self-consistent result is obtained. Since the mean field parameterization depends on the density, this method yields an effective density-dependent interaction, which in the limit of the density $\rho\to 0$ coincides with $V_{low-k}$. Therefore, we call this effective interaction the density-dependent $V_{low-k}$ or, in short, $V_{low-k}(\rho)$. Such a procedure amounts to summing up certain higher order terms in the full many-body problem. ![(Color online) Same as Fig. \[fig:becd1\] but for $V_{low-k}(\rho)$ calculated at each density[]{data-label="fig:becd2"}](becd2.eps){width="10.5cm"} In a second step this $V_{low-k}(\rho)$ is used in SCGF calculations at the corresponding density. Energies resulting from such model space calculations using $V_{low-k}(\rho)$ are presented in Fig. \[fig:becd2\]. The comparison of the various calculations within the model space exhibits the same features as discussed above for the original $V_{low-k}$. The correlations within the model space provide a substantial reduction of the energy, as can be seen from the comparison of the self-consistent $T$-matrix approach with the Hartree-Fock results. The approach treating correlations up to second order in $V_{low-k}(\rho)$ yields energies which are very close to those of the complete $T$-matrix approach. The density dependence of the effective interaction $V_{low-k}(\rho)$ yields a significant improvement in the comparison between the model space calculations and the SCGF calculation using the bare CD-Bonn interaction. Note that the energy scale has been adjusted going from Fig. \[fig:becd1\] to Fig. \[fig:becd2\]. The discrepancy remaining at densities above $\rho_0$ might be due to the effects of the Pauli quenching, which are not included in $V_{low-k}(\rho)$.
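The two-step construction of $V_{low-k}(\rho)$ described above is, at its core, a self-consistency (fixed-point) loop: BHF calculation, effective-mass fit, new $h_0$, new effective interaction, repeated until the mean-field parameterization stops changing. The sketch below shows only this control flow; the update function and all numbers are illustrative placeholders of our own, not the authors' actual BHF machinery:

```python
# Schematic self-consistency loop for V_low-k(rho).  The update function is
# a toy placeholder (an artificial contraction), standing in for a BHF
# calculation followed by an effective-mass fit to the resulting spectrum.

def bhf_effective_mass(m_star):
    """Placeholder update: returns the effective mass fitted to the
    single-particle spectrum of a BHF run with h_0 built from m_star."""
    return 0.7 + 0.2 * m_star  # toy contraction; fixed point at 0.875

def solve_self_consistent(m_star=1.0, tol=1e-10, max_iter=200):
    for _ in range(max_iter):
        m_new = bhf_effective_mass(m_star)
        if abs(m_new - m_star) < tol:
            return m_new
        m_star = m_new
    raise RuntimeError("effective-mass iteration did not converge")

m_fixed = solve_self_consistent()
```

Once the loop converges, the resulting $V_{low-k}(\rho)$ (represented here only by the converged effective-mass parameter) would be handed to the SCGF calculation at that density.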
These deviations could also originate from the simple parameterization of the dispersive quenching in $V_{low-k}(\rho)$. Our investigations also provide the possibility to explore the effects of correlations evaluated within the model space using the effective interaction $V_{low-k}$. We can furthermore compare these correlation effects with the corresponding effects determined by the bare interaction in the unrestricted space. As a first example, we discuss the imaginary part of the self-energy calculated at the empirical saturation density $\rho_0$ for various nucleon momenta $p$ as displayed in Fig. \[fig:im10\]. The calculations within the model space reproduce the results of the unrestricted calculations with a good accuracy in the energy interval for $\omega$ ranging between 50 MeV below and 50 MeV above the chemical potential $\mu$. The remaining differences around the Fermi energy can be attributed to the difference in the effective masses obtained using the $V_{low-k}$ and the bare potential [@wi98]. The agreement between the $T$-matrix results around $\omega=\mu$ using the two potentials is improved if one rescales by the ratio of the effective masses. The imaginary part calculated with $V_{low-k}$, however, is much smaller than the corresponding result obtained for the bare interaction at energies $\omega -\mu$ above 100 MeV. Furthermore, the model space calculations do not reproduce the imaginary part for energies below the chemical potential at momenta $k$ above 400 MeV/c. ![(Color online) Imaginary part of the self-energy as a function of the energy $\omega$ for various momenta $p$ as indicated in the panels (see Eq. ( )). The results have been determined for the empirical saturation density $\rho_0$; using $V_{low-k}$ in the $T$-matrix approximation (solid line), using $V_{low-k}$ in the second order approximation (dashed-dotted line), and employing the CD-Bonn interaction in the $T$-matrix approximation (dotted line).
The dashed line in the first panel denotes the results of the $T$-matrix calculation with the CD-Bonn potential rescaled by the ratio of the effective masses at the Fermi momentum obtained with the $V_{low-k}$ and the bare CD-Bonn potential. []{data-label="fig:im10"}](im10.eps){width="10.5cm"} The imaginary part of the self-energy is a very important ingredient for the evaluation of the spectral function $A(k,\omega )$ and therefore also for the calculation of the occupation probability $n(k)$ (see Eq. (\[occupation\])). The small values for the imaginary part of the self-energy at high momenta $k$ and negative energies $\omega -\mu$ lead to occupation probabilities at these momenta which are much smaller than the corresponding predictions derived from bare realistic NN interactions, as can be seen from Fig. \[fig:nofk\]. This missing strength in the prediction of $V_{low-k}$ at high momenta is accompanied by larger occupation probabilities at low momenta. The self-consistent $T$-matrix approximation using CD-Bonn yields an occupation probability at $k=0$ of 0.897, while the corresponding number using $V_{low-k}$ is 0.920. At this density, the calculation including only terms up to second order in $V_{low-k}$ yields a rather good approximation to the self-consistent $T$-matrix approximation within the model space. ![(Color online) Momentum distribution $n(k)$ (see Eq. ()) calculated for nuclear matter at the empirical saturation density $\rho_0$. Results of the $T$-matrix approximation within the model space (solid line) are compared to results of the second order approximation (dashed-dotted line) and the $T$-matrix approximation (dotted line) in the unrestricted space.[]{data-label="fig:nofk"}](nofk.eps){width="10.5cm"} As a second example we consider the imaginary part of the self-energy calculated at a lower density $\rho=0.4\times \rho_0$. The results displayed in Fig. \[fig:im04\] refer to nucleons with momentum $k=0$.
Also at this density we find that the imaginary part evaluated with $V_{low-k}$ drops to zero at large positive energies much faster than the predictions derived from the bare interaction (see upper panel on the left in Fig. \[fig:im04\]). It is worth noting that at this low density the second order approximation is not as good an approximation to the full $T$-matrix approach as it is at higher densities. Characteristic differences between the dashed-dotted and the solid line show up at energies $\omega$ close to the chemical potential. In order to trace the origin of these differences, we display in Fig. \[fig:im04\] the contributions of various partial waves of NN interaction channels to this imaginary part. It turns out that the differences are largest in the $^3S_1-^3D_1$ and the $^1S_0$ channels. This means that the perturbative approach is not very successful in those two channels which tend to form quasi-bound states. In these channels all particle-particle hole-hole ladders have to be summed up to obtain the pairing solution. Note that the pairing solutions are suppressed at higher densities, if the effects of short-range correlations are properly taken into account [@bozek4; @muwi05]. Furthermore, we would like to point out that a different scale is used in the two lower panels of Fig. \[fig:im04\]. Taking this into account, it is evident from this figure that the main contribution to the imaginary part of the self-energy, and therefore to the deviation of the spectral function from the mean-field approach, originates from the NN interaction in the $^3S_1-^3D_1$ channel. ![Imaginary part of the self-energy as a function of the energy $\omega$ for nucleons with momentum $k=0$ calculated at the density $\rho=0.4\times \rho_0$.
Results of the $T$-matrix approach (solid line) and the second order approximation (dashed-dotted line) within the model space are compared to results obtained in the unrestricted calculation (dotted line).[]{data-label="fig:im04"}](im04all.eps){width="10.5cm"} Conclusions =========== During the last few years it has become very popular to perform nuclear structure calculations using effective low-momentum NN interactions. These $V_{low-k}$ interactions are based on a realistic model of the NN interaction. They are constructed to be different from zero only within a model space defined by a cut-off $\Lambda$ in the relative momenta of the interacting nucleons. Within this model space they reproduce the NN data of the underlying bare interaction, although the many-body solutions may show differences with different starting NN interactions. For this study we performed Self-Consistent Green's Function (SCGF) calculations of symmetric nuclear matter employing $V_{low-k}$ effective interactions as well as the bare CD Bonn interaction they are based on. Special attention was paid to the correlations which can be described within this model space as compared to correlations predicted by the underlying interaction within the unrestricted space. Using a cut-off $\Lambda$ = 2 fm$^{-1}$ we find that the spectral distribution of the single-particle strength in an energy window of plus or minus 50 MeV around the Fermi energy is rather well reproduced by the calculation using $V_{low-k}$. The effective interaction $V_{low-k}$ is softer than typical realistic NN interactions. Therefore, for many observables it is sufficient to approximate the full in-medium scattering matrix $T$ by the approximation including terms up to second order in $V_{low-k}$. This justifies the use of the resummed effective interaction in many-body approximations that do not include the ladder-diagram resummation.
Special attention must be paid to nuclear systems at smaller densities: the possible formation of quasi-bound states may require the non-perturbative treatment of the NN scattering in the medium. This also has implications for the use of $V_{low-k}$ in studies of weakly bound nuclear systems. The model space approach cannot reproduce correlation effects, which lead to spectral strength at high energies and high momenta. For nuclear matter at the empirical saturation density $\rho_0$ the momentum distribution is reliably predicted up to a momentum of 400 MeV/c. The $V_{low-k}$ approach overestimates the binding energy per nucleon at high densities. Therefore, we introduced a density-dependent effective interaction $V_{low-k}(\rho)$ which we constructed along the same lines as the original $V_{low-k}$. The new effective interaction accounts for a dispersive correction of the single-particle propagator in the medium. This improves the behavior of the effective interaction significantly. For densities above $\rho_0$, however, the binding energies calculated with $V_{low-k}(\rho)$ are still too large. This might be improved by determining effective three-nucleon forces explicitly from the underlying bare interaction. This work is supported in part by the Polish State Committee for Scientific Research Grant No. 2P03B05925, the U.S. Department of Energy under Contract No. DE-AC05-00OR22725 with UT-Battelle, LLC (Oak Ridge National Laboratory) and the Deutsche Forschungsgemeinschaft (SFB 382). [99]{} R. Machleidt, F. Sammarruca, and Y. Song, Phys. Rev. C **53**, [R1483]{} (1996). R.B. Wiringa, V.G.J. Stoks, and R. Schiavilla, Phys. Rev. C **51**, 38 (1995). V.G.J. Stoks, R.A.M. Klomp, C.P.F. Terheggen, and J.J. de Swart, Phys. Rev. C **49**, 2950 (1994). D. R. Entem and R. Machleidt, Phys. Rev. C [**68**]{}, 41001(R) (2003). H. Müther and A. Polls, Prog. Part. Nucl. Phys. **45**, [243]{}[(2000)]{}. H. Müther and A. Polls, Phys. Rev. C **61**, 014304 (2000). M.F.
van Batenburg, Ph.D. Thesis, University of Utrecht (2001). B.E. Vonderfecht, W.H. Dickhoff, A. Polls, and A. Ramos, Phys. Rev. C **44**, R1265 (1991). D. Rohe, *et al.*, Phys. Rev. Lett. **93**, 182501 (2004). H. M[ü]{}ther and W.H. Dickhoff, Phys. Rev. C **49**, R17 (1994). T. Frick and H. M[ü]{}ther, Phys. Rev. C **68**, 034310 (2003). D.J. Dean, T. Engeland, M. Hjorth-Jensen, M. Kartamychev, and E. Osnes, Prog. Part. Nucl. Phys. **53**, 419 (2004). Z.Y. Ma and T.T.S. Kuo, Phys. Lett. **127B**, 137 (1983). H.Q. Song and T.T.S. Kuo, Phys. Rev. C **43**, 2883 (1991). T.T.S. Kuo and Y. Tzeng, Int. J. of Mod. Phys. E **3**, 523 (1994). B.H. Brandow, Rev. Mod. Phys. **39**, 771 (1967). T.T.S. Kuo, S.Y. Lee, and K.F. Ratcliff, Nucl. Phys. **A 176**, 65 (1971). A. Polls, H. Müther, A. Faessler, T.T.S. Kuo, and E. Osnes, Nucl. Phys. **A 401**, 124 (1983). H. Müther, A. Polls, and T.T.S. Kuo, Nucl. Phys. **A 435**, 548 (1985). S.K. Bogner, T.T.S. Kuo, and A. Schwenk, Phys. Rep. **386**, 1 (2003). D.J. Dean and M. Hjorth-Jensen, Phys. Rev. C69, 54320 (2004). L. Coraggio, N. Itaco, A. Covello, A. Gargano, and T.T.S. Kuo, Phys. Rev. C **68**, 034320 (2003). J. Kuckei, F. Montani, H. Müther, and A. Sedrakian, Nucl. Phys. **A 723**, 32 (2003). J. Decharge and D. Gogny, Phys. Rev. C **21**, 1568 (1980). A. Sedrakian, T.T.S. Kuo, H. Müther, and P. Schuck, Phys. Lett. **B 576**, 68 (2003). S.K. Bogner, A. Schwenk, R.J. Furnstahl, and A. Nogga, Nucl. Phys. **A 763**, 59 (2005). K. Suzuki, Prog. Theoret. Phys. **68**, 246 (1982). K. Suzuki and R. Okamoto, Prog. Theor. Phys. **92**, 1045 (1994). H. Kumagai, K. Suzuki, and R. Okamoto, Prog. Theor. Phys. **97**, 1023 (1997). S. Fujii, R. Okamoto, and K. Suzuki, Phys. Rev. C **69**, 034328 (2004). R. Roth, P. Papakonstantinou, N. Paar, H. Hergert, T. Neff, and H. Feldmeier, preprint nucl-th/0510036. P. Bo[ż]{}ek and P. Czerski, Eur. Phys. J. A **11**, 271 (2001). P. Bo[ż]{}ek, Phys. Rev. C **65**, 054306 (2002). P. Bo[ż]{}ek, Eur. 
Phys. J. A **15**, 325 (2002). Y. Dewulf, W.H. Dickhoff,D. Van Neck, E.R. Stoddard, and M. Waroquier, Phys. Rev. Lett **90**, 152501 (2003). W. H. Dickhoff and E. P. Roth, Acta Phys. Pol. B **33**, 65 (2002); E. P. Roth, Ph.D. thesis Washington University, St. Louis (2000). T. Frick, H. M[ü]{}ther, A. Rios, A. Polls, and A. Ramos, Phys. Rev. C **71**, 014313 (2005). K. Suzuki, Prog. Theor. Phys. **68**, 246 (1982). W.H. Dickhoff and D. Van Neck, *Many-Body Theory Exposed!* (World Scientific, Singapore, 2005). P. Bożek, Phys. Rev. C [**59**]{}, 2619 (1999). L. P. Kadanoff and G. Baym, *Quantum Statistical Mechanics* (Benjamin, New York, 1962). E. Schiller, H. Müther, and P. Czerski, Phys. Rev. C **59**, 2934 (1999). N.M. Hugenholtz and L. Van Hove, Physica **24**, 363 (1958). W.H. Dickhoff, Phys. Rev. C [**58**]{}, 2807 (1998). P. Bożek, Phys. Lett. B [**551**]{}, 93 (2003). H. Müther and W.H. Dickhoff, Phys. Rev. C **72**, 054313 (2005). [^1]: Electronic address : piotr.bozek@ifj.edu.pl [^2]: Electronic address : deandj@ornl.gov [^3]: Electronic address : herbert.muether@uni-tuebingen.de
--- abstract: 'Using a slightly weaker definition of cellular algebra, due to Goodman ([@G2] Definition 2.9), we prove that for a symmetric cellular algebra, the dual basis of a cellular basis is again cellular. Then a nilpotent ideal is constructed for a symmetric cellular algebra. The ideal connects the radicals of cell modules with the radical of the algebra. It also reveals some information on the dimensions of simple modules. As a by-product, we obtain some equivalent conditions for a finite dimensional symmetric cellular algebra to be semisimple.' title: Radicals of symmetric cellular algebras --- [^1] Yanbo Li Department of Information and Computing Sciences,\ Northeastern University at Qinhuangdao;\ Qinhuangdao, 066004, P.R. China\ School of Mathematics Sciences, Beijing Normal University;\ Beijing, 100875, P.R. China\ `E-mail: liyanbo707@163.com` **Introduction** {#xxsec1} ================ Cellular algebras were introduced by Graham and Lehrer [@GL] in 1996, motivated by previous work of Kazhdan and Lusztig [@KL]. They were defined by a so-called cellular basis with some nice properties. The theory of cellular algebras provides a systematic framework for studying the representation theory of non-semisimple algebras which are deformations of semisimple ones. One can parameterize simple modules for a finite dimensional cellular algebra by methods in linear algebra. Many classes of algebras from mathematics and physics are found to be cellular, including Hecke algebras of finite type, Ariki-Koike algebras, $q$-Schur algebras, Brauer algebras, Temperley-Lieb algebras, cyclotomic Temperley-Lieb algebras, Jones algebras, partition algebras, Birman-Wenzl algebras and so on; we refer the reader to [@G; @GL; @RX; @Xi1; @Xi2] for details. An equivalent basis-free definition of cellular algebras was given by Koenig and Xi [@KX1], which is useful in dealing with structural problems.
Using this definition, in [@KX5], Koenig and Xi made explicit an inductive construction of cellular algebras called inflation, which produces all cellular algebras. In [@KX7], Brauer algebras were shown to be iterated inflations of group algebras of symmetric groups, and more information about these algebras was then found. There are some generalizations of cellular algebras; we refer the reader to [@DR; @GRM; @GRM2; @WB] for details. Recently, Koenig and Xi [@KX8] introduced affine cellular algebras, which contain cellular algebras as special cases. Affine Hecke algebras of type A and infinite dimensional diagram algebras like the affine Temperley-Lieb algebras are affine cellular. It is an open problem to find explicit formulas for the dimensions of simple modules of a cellular algebra. By the theory of cellular algebras, this is equivalent to determining the dimensions of the radicals of bilinear forms associated with cell modules. In [@LZ], for a quasi-hereditary cellular algebra, Lehrer and Zhang found that the radicals of bilinear forms are related to the radical of the algebra. This leads us to study the radical of a cellular algebra. However, we do not yet know how to deal with general cellular algebras. In this paper, we study the radicals of [*symmetric*]{} cellular algebras. Note that Hecke algebras of finite types, Ariki-Koike algebras over any ring containing inverses of the parameters, and Khovanov’s diagram algebras are all symmetric cellular algebras. The trivial extension of a cellular algebra is also a symmetric cellular algebra. For details, see [@BS], [@MM], [@XX]. Throughout this paper, we will adopt a slightly weaker definition of cellular algebra due to Goodman ([@G2] Definition 2.9). It is helpful to note that the results of [@GL] remain valid under his weaker axiom. In case $2$ is invertible, these two definitions are equivalent.
We begin by recalling definitions and some well-known results on symmetric algebras and cellular algebras in Section 2. Then in Section 3, we prove that for a symmetric cellular algebra, the dual basis of a cellular basis is again cellular. In Section 4, a nilpotent ideal of a symmetric cellular algebra is constructed. This ideal connects the radicals of cell modules with the radical of the algebra and also reveals some information on the dimensions of simple modules. As a by-product, in Section 5, we obtain some equivalent conditions for a finite dimensional symmetric cellular algebra to be semisimple. **Preliminaries** {#xxsec2} ================= In this section, we start with the definitions of symmetric algebras and cellular algebras (a slightly weaker version due to Goodman) and then recall some well-known results about them. Let $R$ be a commutative ring with identity and $A$ an associative $R$-algebra. As an $R$-module, $A$ is finitely generated and free. Suppose that there exists an $R$-bilinear map $f:A\times A\rightarrow R$. We say that $f$ is non-degenerate if the determinant of the matrix $(f(a_{i},a_{j}))_{a_{i},a_{j}\in B}$ is a unit in $R$ for some $R$-basis $B$ of $A$. We say $f$ is associative if $f(ab,c)=f(a,bc)$ for all $a,b,c\in A$, and symmetric if $f(a,b)=f(b,a)$ for all $a,b\in A$. \[2.1\] An $R$-algebra $A$ is called symmetric if there is a non-degenerate associative symmetric bilinear form $f$ on $A$. Define an $R$-linear map $\tau: A\rightarrow R$ by $\tau(a)=f(a,1)$. We call $\tau$ a symmetrizing trace. Let $A$ be a symmetric algebra with a basis $B=\{a_{i}\mid i=1,\ldots,n\}$ and $\tau$ a symmetrizing trace. Denote by $D=\{D_{i}\mid i=1,\ldots,n\}$ the basis determined by the requirement that $\tau(D_{j}a_{i})=\delta_{ij}$ for all $i, j=1,\ldots,n$. We will call $D$ the dual basis of $B$. For arbitrary $1\leq i,j \leq n$, write $a_{i}a_{j}=\sum\limits_{k}r_{ijk}a_{k}$, where $r_{ijk}\in R$.
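These notions can be made concrete in a small example of our own (not taken from the paper): the full matrix algebra $A=M_2(\mathbb{Q})$ with the ordinary matrix trace as symmetrizing trace. The dual basis of the basis of matrix units is given by transposition, and both the defining property $\tau(D_ja_i)=\delta_{ij}$ and the identity $a_iD_j=\sum_k r_{kij}D_k$ of the lemma below can be checked directly:

```python
# Toy check (our own example): A = M_2(Q) with tau = ordinary matrix trace.
# Basis: the matrix units E_pq; the dual basis is D_{E_pq} = E_qp, since
# tau(E_qp * E_rs) = delta_pr * delta_qs.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tau(a):  # symmetrizing trace: ordinary matrix trace
    return a[0][0] + a[1][1]

def unit(p, q):  # matrix unit E_pq
    return [[1 if (i, j) == (p, q) else 0 for j in range(2)] for i in range(2)]

pairs = [(p, q) for p in range(2) for q in range(2)]
basis = [unit(p, q) for (p, q) in pairs]   # a_1, ..., a_4
dual = [unit(q, p) for (p, q) in pairs]    # D_1, ..., D_4

# Defining property of the dual basis: tau(D_j a_i) = delta_ij.
for j, D in enumerate(dual):
    for i, a in enumerate(basis):
        assert tau(matmul(D, a)) == (1 if i == j else 0)

# Structure constants a_i a_j = sum_k r_ijk a_k, read off entrywise.
def r(i, j, k):
    p, q = pairs[k]
    return matmul(basis[i], basis[j])[p][q]

# First identity of the lemma below: a_i D_j = sum_k r_kij D_k.
for i in range(4):
    for j in range(4):
        lhs = matmul(basis[i], dual[j])
        rhs = [[sum(r(k, i, j) * dual[k][p][q] for k in range(4))
                for q in range(2)] for p in range(2)]
        assert lhs == rhs
```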
Fixing a symmetrizing trace $\tau$ for $A$, we have the following lemma. \[2.2\] Let $A$ be a symmetric $R$-algebra with a basis $B$ and the dual basis $D$. Then the following hold: $$a_{i}D_{j}=\sum_{k}r_{kij}D_{k};\,\,\,\,\,D_{i}a_{j}=\sum_{k}r_{jki}D_{k}.$$ We only prove the first equation. The other one is proved similarly. Suppose that $a_{i}D_{j}=\sum\limits_{k}r_{k}D_{k}$, where $r_{k}\in R$ for $k=1,\cdots,n$. Left-multiplying both sides of the equation by $a_{k_{0}}$ and then applying $\tau$, we get $\tau(a_{k_{0}}a_{i}D_{j})=r_{k_{0}}$. Clearly, $\tau(a_{k_{0}}a_{i}D_{j})=r_{k_{0},i,j}$. This implies that $r_{k_{0}}=r_{k_{0},i,j}$. Given a symmetric algebra, it is natural to consider the relation between two dual bases determined by two different symmetrizing traces. For this we have the following lemma. \[2.3\] Suppose that $A$ is a symmetric $R$-algebra with a basis $B=\{a_{i}\mid i=1, \cdots, n\}$. Let $\tau, \tau'$ be two symmetrizing traces. Denote by $\{D_{i}\mid i=1, \cdots, n\}$ the dual basis of $B$ determined by $\tau$ and $\{D_{i}'\mid i=1, \cdots, n\}$ the dual basis determined by $\tau'$. Then for $1\leq i \leq n$, we have $$D_{i}'=\sum_{j=1}^{n}\tau(a_{j}D_{i}')D_{j}.$$ It is proved by a method similar to that of Lemma \[2.2\]. Graham and Lehrer introduced the so-called cellular algebras in [@GL], and Goodman later weakened the definition in [@G2]. We will adopt Goodman’s definition throughout this paper. [([@G2])]{}\[2.4\] Let $R$ be a commutative ring with identity. An associative unital $R$-algebra is called a cellular algebra with cell datum $(\Lambda, M, C, i)$ if the following conditions are satisfied: [(C1)]{} The finite set $\Lambda$ is a poset. Associated with each ${\lambda}\in\Lambda$, there is a finite set $M({\lambda})$. The algebra $A$ has an $R$-basis $\{C_{S,T}^{\lambda}\mid S,T\in M({\lambda}),{\lambda}\in\Lambda\}$.
[(C2)]{} The map $i$ is an $R$-linear anti-automorphism of $A$ with $i^{2}=id$ and $$i(C_{S,T}^{\lambda})\equiv C_{T,S}^{\lambda}\,\,\,\,(\rm {mod}\,\,\, A(<{\lambda}))$$ for all ${\lambda}\in\Lambda$ and $S,T\in M({\lambda})$, where $A(<{\lambda})$ is the $R$-submodule of $A$ generated by $\{C_{S^{''},T^{''}}^\mu \mid S^{''},T^{''}\in M(\mu),\mu<{\lambda}\}$. [(C3)]{} If ${\lambda}\in\Lambda$ and $S,T\in M({\lambda})$, then for any element $a\in A$, we have\ $$aC_{S,T}^{\lambda}\equiv\sum_{S^{'}\in M({\lambda})}r_{a}(S',S)C_{S^{'},T}^{{\lambda}} \,\,\,\,(\rm {mod}\,\,\, A(<{\lambda})),$$ where $r_{a}(S^{'},S)\in R$ is independent of $T$. Applying $i$ to the equation in [(C3)]{}, we obtain [(C3$'$)]{} $C_{T,S}^{\lambda}i(a)\equiv\sum\limits_{S^{'}\in M({\lambda})}r_{a}(S^{'},S)C_{T,S^{'}}^{{\lambda}} \,\,\,\,(\rm mod \,\,\,A(<{\lambda})).$ Graham and Lehrer’s original definition in [@GL] requires that $i(C_{S,T}^{\lambda})=C_{T,S}^{\lambda}$ for all ${\lambda}\in\Lambda$ and $S,T\in M({\lambda})$. But Goodman pointed out that the results of [@GL] remain valid under his weaker axiom. In case $2\in R$ is invertible, these two definitions are equivalent. The following lemma is easily checked using Definition \[2.4\]. [([@GL])]{} Let ${\lambda}\in\Lambda$ and $a\in A$. Then for arbitrary elements $S,T,U,V\in M({\lambda})$, we have $$C_{S,T}^{\lambda}aC_{U,V}^{\lambda}\equiv \Phi_{a}(T,U)C_{S,V}^{\lambda}\,\,\,\, (\rm mod\,\,\, A(<{\lambda})),$$ where $\Phi_{a}(T,U)\in R$ depends only on $a$, $T$ and $U$. We often omit the index $a$ when $a=1$; that is, we write $\Phi_{1}(T,U)$ as $\Phi(T,U)$. Let us now recall the definition of cell modules. [([@GL])]{}\[2.7\] Let $A$ be a cellular algebra with cell datum $(\Lambda, M, C, i)$.
For each ${\lambda}\in\Lambda$, define the left $A$-module $W({\lambda})$ as follows: $W({\lambda})$ is a free $R$-module with basis $\{C_{S}\mid S\in M({\lambda})\}$ and $A$-action defined by\ $$aC_{S}=\sum_{S^{'}\in M({\lambda})}r_{a}(S^{'},S)C_{S^{'}} \,\,\,\,(a\in A,S\in M({\lambda})),$$ where $r_{a}(S^{'},S)$ is the element of $R$ defined in Definition [\[2.4\]]{} [(C3)]{}. Note that $W({\lambda})$ may be thought of as a right $A$-module via\ $$C_{S}a=\sum_{S^{'}\in M({\lambda})}r_{i(a)}(S^{'},S)C_{S^{'}} \,\,\,\,(a\in A,S\in M({\lambda})).$$ We will denote this right $A$-module by $i(W({\lambda}))$. [([@GL])]{}\[2.8\] There is a natural isomorphism of $R$-modules $$C^{{\lambda}}:W({\lambda})\otimes_{R}i(W({\lambda}))\rightarrow R{\rm -span}\{C_{S,T}^{{\lambda}}\mid S,T\in M({\lambda})\},$$ defined by $(C_{S},C_{T})\rightarrow C_{S,T}^{{\lambda}}$. For a cell module $W({\lambda})$, define a bilinear form $\Phi _{{\lambda}}:\,\,W({\lambda})\times W({\lambda})\longrightarrow R$ by $\Phi _{{\lambda}}(C_{S},C_{T})=\Phi(S,T)$. It plays an important role in studying the structure of $W({\lambda})$. It is easy to check that $\Phi(T,U)=\Phi(U,T)$ for arbitrary $T,U\in M({\lambda})$. Define $$\operatorname{rad}{\lambda}:= \{x\in W({\lambda})\mid \Phi_{{\lambda}}(x,y)=0 \,\,\,\text{for all} \,\,\,y\in W({\lambda})\}.$$ If $\Phi _{{\lambda}}\neq 0$, then $\operatorname{rad}{\lambda}$ is the radical of the $A$-module $W({\lambda})$. Moreover, if ${\lambda}$ is a maximal element in $\Lambda$, then $\operatorname{rad}{\lambda}=0$. The following results were proved by Graham and Lehrer in [@GL]. [[@GL]]{} Let $K$ be a field and $A$ a finite dimensional cellular algebra. For any ${\lambda}\in\Lambda$, denote the $A$-module $W({\lambda})/\operatorname{rad}{\lambda}$ by $L_{{\lambda}}$. Let $\Lambda_{0}=\{{\lambda}\in\Lambda\mid \Phi_{{\lambda}}\neq 0\}$.
Then $\{L_{{\lambda}}\mid {\lambda}\in\Lambda_{0}\}$ is a complete set of [(]{}representatives of equivalence classes of[)]{} absolutely simple $A$-modules. [([@GL])]{} \[glthm\]Let $K$ be a field and $A$ a cellular $K$-algebra. Then the following are equivalent.\ [(1)]{} The algebra $A$ is semisimple.\ [(2)]{} The nonzero cell representations $W({\lambda})$ are irreducible and pairwise inequivalent.\ [(3)]{} The form $\Phi_{{\lambda}}$ is non-degenerate (i.e. $\operatorname{rad}{\lambda}=0$) for each ${\lambda}\in\Lambda$. For any ${\lambda}\in\Lambda$, fix an order on $M({\lambda})$ and let $M({\lambda})=\{S_{1},S_{2},\cdots,S_{n_{{\lambda}}}\}$, where $n_{{\lambda}}$ is the number of elements in $M({\lambda})$. The matrix $G({\lambda})=(\Phi(S_{i},S_{j}))_{1\leq i,j\leq n_{{\lambda}}}$ is called the Gram matrix. It is easy to see that the determinant of $G({\lambda})$ is independent of the order chosen on $M({\lambda})$. By the definition of $G({\lambda})$ and $\operatorname{rad}{\lambda}$, for a finite dimensional cellular algebra $A$, it is clear that if $\Phi_{{\lambda}}\neq 0$, then $\dim_{K}L_{{\lambda}}=\operatorname{rank}G({\lambda})$. **Symmetric cellular algebras** =============================== In this section, we prove that for a symmetric cellular algebra, the dual basis of a cellular basis is again cellular. Let $A$ be a symmetric cellular algebra with a cell datum $(\Lambda, M, C, i)$. Denote the dual basis by $D=\{D_{S,T}^{\lambda}\mid S,T\in M({\lambda}),{\lambda}\in\Lambda\}$ throughout, which satisfies $$\tau(C_{S,T}^{{\lambda}}D_{U,V}^{\mu})=\delta_{{\lambda}\mu}\delta_{SV}\delta_{TU}.$$ For any ${\lambda}, \mu\in \Lambda$, $S,T\in M({\lambda})$, $U,V\in M(\mu)$, write $$C_{S,T}^{{\lambda}}C_{U,V}^{\mu}=\sum\limits_{\epsilon\in\Lambda,X,Y\in M(\epsilon)} r_{(S,T,{\lambda}),(U,V,\mu),(X,Y,\epsilon)}C_{X,Y}^{\epsilon}.$$ The following lemma plays an important role throughout this paper.
\[2.14\] Let $A$ be a symmetric cellular algebra with a cell datum $(\Lambda, M, C, i)$ and $\tau$ a given symmetrizing trace. For arbitrary ${\lambda},\mu\in\Lambda$ and $S,T,P,Q\in M({\lambda})$, $U,V\in M(\mu)$, the following hold:\ [(1)]{}$D_{U,V}^{\mu}C_{S,T}^{{\lambda}}=\sum\limits_{\epsilon\in \Lambda, X,Y\in M(\epsilon)}r_{(S,T,{\lambda}),(Y,X,\epsilon),(V,U,\mu)}D_{X,Y}^{\epsilon}.$\ [(2)]{}$C_{S,T}^{{\lambda}}D_{U,V}^{\mu}=\sum\limits_{\epsilon\in \Lambda, X,Y\in M(\epsilon)}r_{(Y,X,\epsilon),(S,T,{\lambda}),(V,U,\mu)}D_{X,Y}^{\epsilon}.$\ [(3)]{}$C_{S,T}^{{\lambda}}D_{T,Q}^{{\lambda}}=C_{S,P}^{{\lambda}}D_{P,Q}^{{\lambda}}.$\ [(4)]{}$D_{T,S}^{{\lambda}}C_{S,Q}^{{\lambda}}=D_{T,P}^{{\lambda}}C_{P,Q}^{{\lambda}}.$\ [(5)]{}$C_{S,T}^{{\lambda}}D_{P,Q}^{{\lambda}}=0\,\, if \,\,T\neq P.$\ [(6)]{}$D_{P,Q}^{{\lambda}}C_{S,T}^{{\lambda}}=0\,\, if \,\,Q\neq S.$\ [(7)]{}$C_{S,T}^{{\lambda}}D_{U,V}^{\mu}=0 \,\,\,\,if\,\,\, \mu\nleq {\lambda}.$\ [(8)]{}$D_{U,V}^{\mu}C_{S,T}^{{\lambda}}=0 \,\,\,\,if \,\,\,\mu\nleq {\lambda}.$ (1) and (2) are corollaries of Lemma \[2.2\]. The equations (5), (6), (7), (8) are corollaries of (1) and (2). We now prove (3). By (2), we have $$C_{S,T}^{\lambda}D_{T,Q}^{\lambda}=\sum_{\epsilon\in \Lambda, X,Y\in M(\epsilon)}r_{(Y,X,\epsilon),(S,T,{\lambda}),(Q,T,{\lambda})}D_{X,Y}^{\epsilon}$$ $$C_{S,P}^{\lambda}D_{P,Q}^{\lambda}=\sum_{\epsilon\in \Lambda, X,Y\in M(\epsilon)}r_{(Y,X,\epsilon),(S,P,{\lambda}),(Q,P,{\lambda})}D_{X,Y}^{\epsilon}.$$ On the other hand, by (C3) of Definition \[2.4\] we also have $$r_{(Y,X,\epsilon),(S,T,{\lambda}),(Q,T,{\lambda})}=r_{(Y,X,\epsilon),(S,P,{\lambda}),(Q,P,{\lambda})}$$ for all $\epsilon\in \Lambda$ and $X,Y\in M(\epsilon)$. This completes the proof of (3). \(4) is proved similarly. \[2.15\] Let $A$ be a symmetric cellular algebra with a cell datum $(\Lambda, M, C, i)$.
Then the dual basis $D=\{D_{S,T}^{\lambda}\mid S,T\in M({\lambda}),{\lambda}\in\Lambda\}$ is again a cellular basis of $A$ with respect to the opposite order on $\Lambda$. Clearly, we only need to consider (C2) and (C3) of Definition \[2.4\]. Now we proceed in two steps.\ [*Step 1.*]{} (C2) holds. Let $i(D_{S,T}^{{\lambda}})=\sum\limits_{\epsilon\in\Lambda, X,Y\in M(\epsilon)}r_{X,Y,\epsilon}D_{X,Y}^{\epsilon}$ with $r_{X,Y,\epsilon}\in R$. Suppose there exists $\eta\ngeq{\lambda}$ such that $r_{P,Q,\eta}\neq 0$ for some $P,Q\in M(\eta)$. Then $\tau(i(D_{S,T}^{{\lambda}})C_{Q,P}^\eta)=r_{P,Q,\eta}\neq 0$. This implies that $i(D_{S,T}^{{\lambda}})C_{Q,P}^\eta\neq 0$. Thus $C_{P,Q}^{\eta}D_{S,T}^{{\lambda}}\neq 0$. But since $\eta\ngeq{\lambda}$, Lemma \[2.14\] (7) gives $C_{P,Q}^{\eta}D_{S,T}^{{\lambda}}=0$, a contradiction. This implies that $$i(D_{S,T}^{{\lambda}})\equiv \sum\limits_{X,Y\in M({\lambda})}r_{X,Y,{\lambda}}D_{X,Y}^{{\lambda}}\,\,\,(\mod A_{D}(>{\lambda})).$$ Now assume $r_{U,V,{\lambda}}\neq 0$. Then $i(D_{S,T}^{{\lambda}})C_{V,U}^{{\lambda}}\neq 0$, hence $C_{U,V}^{{\lambda}}D_{S,T}^{{\lambda}}\neq 0$. By Lemma \[2.14\] (5), $V=S$. We can get $U=T$ similarly.\ [*Step 2.*]{} (C3) holds. For arbitrary $C_{S,T}^{{\lambda}}$, by Lemma \[2.14\] (2), we have $$C_{S,T}^{{\lambda}}D_{U,V}^{\mu}=\sum_{\epsilon\in\Lambda, X,Y\in M(\epsilon)}r_{(Y,X,\epsilon),(S,T,{\lambda}),(V,U,\mu)}D_{X,Y}^{\epsilon}.$$ By (C3) of Definition \[2.4\], if $\epsilon <\mu$, then $r_{(Y,X,\epsilon),(S,T,{\lambda}),(V,U,\mu)}=0$. Therefore, $$C_{S,T}^{{\lambda}}D_{U,V}^{\mu}\equiv\sum_{X,Y\in M(\mu)}r_{(Y,X,\mu),(S,T,{\lambda}),(V,U,\mu)}D_{X,Y}^{\mu}\,\,\,\,\,(\mod A_{D}(>\mu)),$$ where $A_{D}(>\mu)$ is the $R$-submodule of $A$ generated by $$\{D_{S^{''},T^{''}}^\eta \mid S^{''},T^{''}\in M(\eta),\eta>\mu\}.$$ By (C3$'$) of Definition \[2.4\], if $Y\neq V$, then $r_{(Y,X,\mu),(S,T,{\lambda}),(V,U,\mu)}=0$.
So $$C_{S,T}^{{\lambda}}D_{U,V}^{\mu}\equiv\sum_{X\in M(\mu)}r_{(V,X,\mu),(S,T,{\lambda}),(V,U,\mu)}D_{X,V}^{\mu}\,\,\,(\mod A_{D}(>\mu)).$$ Clearly, for arbitrary $X\in M(\mu)$, we have $$r_{(V,X,\mu),(S,T,{\lambda}),(V,U,\mu)}=r_{C_{T,S}^{{\lambda}}}(U,X),$$ which is independent of $V$. Since $C_{S,T}^{{\lambda}}$ is arbitrary, it follows that $$aD_{U,V}^{\mu}\equiv \sum_{U'\in M(\mu)}r_{i(a)}(U,U')D_{U',V}^{\mu}\,\,\,\,\,(\mod A_{D}(>\mu))$$ for any $a\in A$. By Definition \[2.4\], $r_{i(a)}(U,U')$ is independent of $V$. Using the original definition of cellular algebras, Graham proved in [@G3] that the dual basis of a cellular basis is again cellular in the case when $\tau(a)=\tau(i(a))$ for all $a\in A$. Since the dual basis is again cellular, for arbitrary elements $S,T,U,V\in M({\lambda})$, it is clear that $$D_{S,T}^{\lambda}D_{U,V}^{\lambda}\equiv \Psi(T,U)D_{S,V}^{\lambda}\,\,\,\, (\rm mod\,\,\, A(>{\lambda})),$$ where $\Psi(T,U)\in R$ depends only on $T$ and $U$. Then we also have Gram matrices $G'({\lambda})$ defined by the dual basis. It is then natural to ask how $G({\lambda})$ and $G'({\lambda})$ are related. To study this, we need the following lemma. \[2.17\]Let $A$ be a symmetric cellular algebra with cell datum $(\Lambda, M, C, i)$.
For every ${\lambda}\in\Lambda$ and $S,T,U,V,P\in M({\lambda})$, we have $$C_{S,T}^{{\lambda}}D_{T,U}^{{\lambda}}C_{U,V}^{{\lambda}}D_{V,P}^{{\lambda}}=\sum_{Y\in M({\lambda})}\Phi(Y,V)\Psi(Y,V)C_{S,T}^{{\lambda}}D_{T,P}^{{\lambda}}.$$ By Lemma \[2.14\] (1), we have $$\begin{aligned} & & C_{S,T}^{{\lambda}}D_{T,U}^{{\lambda}}C_{U,V}^{{\lambda}}D_{V,P}^{{\lambda}} =C_{S,T}^{{\lambda}}(D_{T,U}^{{\lambda}}C_{U,V}^{{\lambda}})D_{V,P}^{{\lambda}}\\&=&\sum_{\epsilon\in\Lambda,X,Y\in M(\epsilon)}r_{(U,V,{\lambda}),(Y,X,\epsilon),(U,T,{\lambda})}C_{S,T}^{{\lambda}}D_{X,Y}^{\epsilon}D_{V,P}^{{\lambda}}.\end{aligned}$$ If $\varepsilon>{\lambda}$, then by Lemma \[2.14\] (7), $C_{S,T}^{{\lambda}}D_{X,Y}^{\epsilon}=0$; if $\varepsilon<{\lambda}$, by Definition \[2.4\] (C3), $r_{(U,V,{\lambda}),(Y,X,\epsilon),(U,T,{\lambda})}=0$. This implies that $$\begin{aligned} & & \sum_{\epsilon\in\Lambda,X,Y\in M(\epsilon)}r_{(U,V,{\lambda}),(Y,X,\epsilon),(U,T,{\lambda})}C_{S,T}^{{\lambda}}D_{X,Y}^{\epsilon}D_{V,P}^{{\lambda}}\\&=&\sum_{X,Y\in M({\lambda})}r_{(U,V,{\lambda}),(Y,X,{\lambda}),(U,T,{\lambda})}C_{S,T}^{{\lambda}}D_{X,Y}^{{\lambda}}D_{V,P}^{{\lambda}}.\end{aligned}$$ By Definition \[2.4\] (C3), if $X\neq T$, then $r_{(U,V,{\lambda}),(Y,X,{\lambda}),(U,T,{\lambda})}=0$. Hence, $$\begin{aligned} & & \sum_{X,Y\in M({\lambda})}r_{(U,V,{\lambda}),(Y,X,{\lambda}),(U,T,{\lambda})}C_{S,T}^{{\lambda}}D_{X,Y}^{{\lambda}}D_{V,P}^{{\lambda}}\\&=&\sum_{Y\in M({\lambda})}r_{(U,V,{\lambda}),(Y,T,{\lambda}),(U,T,{\lambda})}C_{S,T}^{{\lambda}}D_{T,Y}^{{\lambda}}D_{V,P}^{{\lambda}}.\end{aligned}$$ Note that $$D_{T,Y}^{{\lambda}}D_{V,P}^{{\lambda}}\equiv\Psi(Y,V)D_{T,P}^{{\lambda}} \,\,\,\,\,\,(\mod A_{D}(>{\lambda})).$$ Moreover, by Lemma \[2.14\] (7), if $\epsilon>{\lambda}$, then $C_{S,T}^{{\lambda}}D_{X,Y}^{\epsilon}=0$. 
Thus $$\sum_{Y\in M({\lambda})}r_{(U,V,{\lambda}),(Y,T,{\lambda}),(U,T,{\lambda})}C_{S,T}^{{\lambda}}D_{T,Y}^{{\lambda}}D_{V,P}^{{\lambda}}=\sum\limits_{Y\in M({\lambda})}\Phi(Y,V)\Psi(Y,V)C_{S,T}^{{\lambda}}D_{T,P}^{{\lambda}}.$$ This completes the proof. By Lemma \[2.14\], $C_{U,V}^{{\lambda}}D_{V,P}^{{\lambda}}$ is independent of $V$, so is $\sum\limits_{Y\in M({\lambda})}\Phi(Y,V)\Psi(Y,V)$. Then for any ${\lambda}\in\Lambda$, we can define a constant $k_{{\lambda},\tau}$ as follows. \[2.18\] Keep the notation above. For ${\lambda}\in\Lambda$, take an arbitrary $V\in M({\lambda})$. Define $$k_{{\lambda}, \tau}=\sum\limits_{X\in M({\lambda})}\Phi(X,V)\Psi(X,V).$$ Note that $\{k_{{\lambda}, \tau}\mid{\lambda}\in\Lambda\}$ is not independent of the choice of symmetrizing trace. Fixing a symmetrizing trace $\tau$, we often write $k_{{\lambda}, \tau}$ as $k_{{\lambda}}$. The following lemma reveals the relation among $G({\lambda})$, $G'({\lambda})$ and $k_{{\lambda}}$. \[2.19\] Let $A$ be a symmetric cellular algebra with cell datum $(\Lambda, M, C, i)$. For any ${\lambda}\in\Lambda$, fix an order on the set $M({\lambda})$. Then $G({\lambda})G'({\lambda})=k_{{\lambda}}E$, where $E$ is the identity matrix. For an arbitrary ${\lambda}\in\Lambda$, according to the definition of $G({\lambda})$, $G'({\lambda})$ and $k_{{\lambda}}$, we only need to show that $\sum\limits_{Y\in M({\lambda})}\Phi(Y,U)\Psi(Y,V)=0$ for arbitrary $U,V\in M({\lambda})$ with $U\neq V$. In fact, on one hand, for arbitrary $S\in M({\lambda})$, by Lemma \[2.14\] (5), $U\neq V$ implies that $C_{S,U}^{{\lambda}}D_{V,S}^{{\lambda}}=0$. Then $C_{S,U}^{{\lambda}}D_{U,S}^{{\lambda}}C_{S,U}^{{\lambda}}D_{V,S}^{{\lambda}}=0$. 
On the other hand, by a similar method as in the proof of Lemma \[2.17\], $$\begin{aligned} C_{S,U}^{{\lambda}}D_{U,S}^{{\lambda}}C_{S,U}^{{\lambda}}D_{V,S}^{{\lambda}}&=&\sum_{\epsilon\in\Lambda,X,Y\in M(\epsilon)}r_{(S,U,{\lambda}),(Y,X,\epsilon),(S,U,{\lambda})}C_{S,U}^{{\lambda}}D_{X,Y}^{\epsilon}D_{V,S}^{{\lambda}}\\ &=&\sum_{Y\in M({\lambda})}r_{(S,U,{\lambda}),(Y,U,{\lambda}),(S,U,{\lambda})}C_{S,U}^{{\lambda}}D_{U,Y}^{{\lambda}}D_{V,S}^{{\lambda}}\\&=&\sum_{Y\in M({\lambda})}\Phi(Y,U)\Psi(Y,V)C_{S,U}^{{\lambda}}D_{U,S}^{{\lambda}}.\end{aligned}$$ Then $\sum\limits_{Y\in M({\lambda})}\Phi(Y,U)\Psi(Y,V)C_{S,U}^{{\lambda}}D_{U,S}^{{\lambda}}=0$. This implies that $$\tau(\sum\limits_{Y\in M({\lambda})}\Phi(Y,U)\Psi(Y,V)C_{S,U}^{{\lambda}}D_{U,S}^{{\lambda}})=0.$$ Since $\tau(C_{S,U}^{{\lambda}}D_{U,S}^{{\lambda}})=1$, then $\sum\limits_{Y\in M({\lambda})}\Phi(Y,U)\Psi(Y,V)=0$. \[2.20\] Let $A$ be a symmetric cellular algebra over an integral domain $R$. Then $k_{{\lambda}}=0$ for any ${\lambda}\in\Lambda$ with $\operatorname{rad}{\lambda}\neq 0$. Since $|G({\lambda})|= 0$ is equivalent to $\operatorname{rad}{\lambda}\neq 0$, then by Lemma \[2.19\], $\operatorname{rad}{\lambda}\neq 0$ implies that $k_{{\lambda}}=0$. Using the dual basis, for each ${\lambda}\in\Lambda$, we can also define the cell module $W_{D}({\lambda})$. Then the following lemma is clear. \[2.21\] There is a natural isomorphism of $R$-modules $$D^{{\lambda}}:W_{D}({\lambda})\otimes_{R}i(W_{D}({\lambda}))\rightarrow R{\rm -span}\{D_{S,T}^{{\lambda}}\mid S,T\in M({\lambda})\},$$ defined by $(D_{S},D_{T})\rightarrow D_{S,T}^{{\lambda}}$. **Radicals of Symmetric Cellular Algebras** {#xxsec3} =========================================== To study radicals of symmetric cellular algebras, we need the following lemma. \[lmr2\]Let $A$ be a symmetric cellular algebra. 
Then for any ${\lambda}\in\Lambda$, the elements of the form $\sum\limits_{S,U\in M({\lambda})}r_{SU}C_{S,V}^{{\lambda}}D_{V,U}^{{\lambda}}$ with $r_{SU}\in R$ form an ideal of $A$. Denote the set of the elements of the form $\sum\limits_{S,U\in M({\lambda})}r_{SU}C_{S,V}^{{\lambda}}D_{V,U}^{{\lambda}}$ by $I^{{\lambda}}$. Then for any $\eta\in\Lambda$, $P,Q\in M(\eta)$, and $S,U\in M({\lambda})$, we claim that the element $C_{P,Q}^{\eta}C_{S,V}^{{\lambda}}D_{V,U}^{{\lambda}}\in I^{{\lambda}}$. In fact, by (C3) of Definition \[2.4\] and Lemma \[2.14\] (7), $$\begin{aligned} C_{P,Q}^{\eta}C_{S,V}^{{\lambda}}D_{V,U}^{{\lambda}}&=&\sum_{\epsilon\in\Lambda, X,Y\in M(\epsilon)}r_{(P,Q,\eta),(S,V,{\lambda}),(X,Y,\epsilon)}C_{X,Y}^{\epsilon}D_{V,U}^{{\lambda}}\\ &=&\sum_{X\in M({\lambda})}r_{(P,Q,\eta),(S,V,{\lambda}),(X,V,{\lambda})}C_{X,V}^{{\lambda}}D_{V,U}^{{\lambda}}.\end{aligned}$$ That $C_{S,V}^{{\lambda}}D_{V,U}^{{\lambda}}C_{P,Q}^{\eta}\in I^{{\lambda}}$ is proved similarly. We will denote $\sum\limits_{{\lambda}\in\Lambda, k_{\lambda}=0}I^{{\lambda}}$ by $I^\Lambda$. Similarly, for each ${\lambda}\in\Lambda$, the elements of the form $\sum\limits_{S,U\in M({\lambda})}r_{U,S}D_{U,V}^{{\lambda}}C_{V,S}^{{\lambda}}$ with $r_{U,S}\in R$ also form an ideal $I_{D}^{{\lambda}}$ of $A$. Denote $\sum\limits_{{\lambda}\in\Lambda, k_{\lambda}=0}I_{D}^{{\lambda}}$ by $I_{D}^\Lambda$. Define $$I=I^\Lambda+I_{D}^\Lambda$$ and define $\Lambda_{1}=\{{\lambda}\in\Lambda\mid \operatorname{rad}{\lambda}=0\},$ $\Lambda_{2}=\Lambda_{0}-\Lambda_{1},$ $\Lambda_{3}=\Lambda-\Lambda_{0},$ $\Lambda_{4}=\{{\lambda}\in\Lambda_{1}\mid k_{{\lambda}}=0\}$. Now we are in a position to give the main results of this paper. \[thm\] Suppose that $R$ is an integral domain and that $A$ is a symmetric cellular algebra with a cellular basis $C=\{C_{S,T}^{\lambda}\mid S,T\in M({\lambda}),{\lambda}\in\Lambda\}$.
Let $\tau$ be a symmetrizing trace on $A$ and let $\{D_{T,S}^{\lambda}\mid S,T\in M({\lambda}),{\lambda}\in\Lambda\}$ be the dual basis of $C$ with respect to $\tau$. Then\ [(1)]{} $I\subseteq \operatorname{rad}A$, $I^{3}=0$.\ [(2)]{} $I$ is independent of the choice of $\tau$.\ Moreover, if $R$ is a field, then\ [(3)]{} $\dim_{R}I\geq\sum\limits_{{\lambda}\in\Lambda_{2}}(n_{{\lambda}}+\dim_{R}\operatorname{rad}{\lambda})\dim_{R}L_{{\lambda}}+\sum\limits_{{\lambda}\in\Lambda_{4}}n_{{\lambda}}^{2},$ where $n_{{\lambda}}$ is the number of the elements in $M({\lambda})$.\ [(4)]{} $\sum\limits_{{\lambda}\in\Lambda_{2}}(\dim_{K}L_{{\lambda}})^{2}-\sum\limits_{{\lambda}\in\Lambda_{3}}n_{{\lambda}}^{2}\leq \sum\limits_{{\lambda}\in\Lambda_{2}}(\dim_{K}\operatorname{rad}{\lambda})^{2}-\sum\limits_{{\lambda}\in\Lambda_{4}}n_{{\lambda}}^{2}.$ \(1) $I\subseteq \operatorname{rad}A$ , $I^{3}=0$.\ Firstly, we prove $(I^{\Lambda})^2=0$. Obviously, by the definition of $I^\Lambda$, every element of $(I^{\Lambda})^2$ can be written as a linear combination of elements of the form $C_{S_{1},T}^{{\lambda}}D_{T,S_{2}}^{{\lambda}}C_{U_{1},V}^{\mu}D_{V,U_{2}}^{\mu}$(we omit the coefficient here) with $k_{\lambda}=0$ and $k_\mu=0$. If $\mu<{\lambda}$, then $C_{S_{1},T}^{{\lambda}}D_{T,S_{2}}^{{\lambda}}C_{U_{1},V}^{\mu}D_{V,U_{2}}^{\mu}=0$ by Lemma \[2.14\] (8). If $\mu>{\lambda}$, then by Lemma \[2.14\] (1) and (7), $$C_{S_{1},T}^{{\lambda}}D_{T,S_{2}}^{{\lambda}}C_{U_{1},V}^{\mu}D_{V,U_{2}}^{\mu}= \sum_{Y\in M({\lambda})}r_{(U_{1},V,\mu),(Y,T,{\lambda}),(S_{2},T,{\lambda})}C_{S_{1},T}^{{\lambda}}D_{T,Y}^{{\lambda}}D_{V,U_{2}}^{\mu}.$$ However, by Lemma \[2.15\], every $D_{P,Q}^{\eta}$ with nonzero coefficient in the expansion of $D_{T,Y}^{{\lambda}}D_{V,U_{2}}^{\mu}$ satisfies $\eta\geq\mu$. Since $\mu>{\lambda}$, then $\eta>{\lambda}$. 
Now, by Lemma \[2.14\] (7), we have $C_{S_{1},T}^{{\lambda}}D_{P,Q}^{\eta}=0$, that is, $C_{S_{1},T}^{{\lambda}}D_{T,S_{2}}^{{\lambda}}C_{U_{1},V}^{\mu}D_{V,U_{2}}^{\mu}=0$ if $\mu>{\lambda}$. If ${\lambda}=\mu$, by Lemma \[2.14\] (3) and (4), we only need to consider the elements of the form $$C_{S_{1},T_{1}}^{{\lambda}}D_{T_{1},S_{2}}^{{\lambda}}C_{S_{2},T_{2}}^{{\lambda}}D_{T_{2},S_{3}}^{{\lambda}}.$$ By Lemma \[2.17\] and Lemma \[2.20\], $$\begin{aligned} C_{S_{1},T_{1}}^{{\lambda}}D_{T_{1},S_{2}}^{{\lambda}}C_{S_{2},T_{2}}^{{\lambda}}D_{T_{2},S_{3}}^{{\lambda}}= k_{{\lambda}}C_{S_{1},T_{1}}^{{\lambda}}D_{T_{1},S_{3}}^{{\lambda}}=0.\end{aligned}$$ Then we get that all the elements of the form $C_{S_{1},T}^{{\lambda}}D_{T,S_{2}}^{{\lambda}}C_{U_{1},V}^{\mu}D_{V,U_{2}}^{\mu}$ are zero, that is, $(I^{\Lambda})^2=0$. Similarly, we get $(I_{D}^{\Lambda})^2=0$. To prove $I^3=0$, we now only need to consider the elements in $I^{\Lambda}I_{D}^{\Lambda}I^{\Lambda}$ and $I_{D}^{\Lambda}I^{\Lambda}I_{D}^{\Lambda}$. For ${\lambda},\mu,\eta\in\Lambda$ with $k_{\lambda}=k_\mu=k_\eta=0$ and $S,T,M\in M({\lambda})$, $U,V,N\in M(\mu)$, $P,Q,W\in M(\eta)$, suppose that $C_{S,T}^{{\lambda}}D_{T,M}^{{\lambda}}D_{U,V}^{\mu}C_{V,N}^{\mu}C_{P,Q}^{\eta}D_{Q,W}^{\eta}\neq 0$. If ${\lambda}>\mu$, then any $D_{X,Y}^{\epsilon}$ with nonzero coefficient in the expansion of $D_{T,M}^{{\lambda}}D_{U,V}^{\mu}$ satisfies $\epsilon\geq{\lambda}$, so $\epsilon>\mu$, this implies that $D_{X,Y}^{\epsilon}C_{V,N}^{\mu}=0$ by Lemma \[2.14\], a contradiction. If ${\lambda}<\mu$, then any $D_{X,Y}^{\epsilon}$ with nonzero coefficient in the expansion of $D_{T,M}^{{\lambda}}D_{U,V}^{\mu}$ satisfies $\epsilon\geq\mu$, so $\epsilon>{\lambda}$, this implies that $C_{S,T}^{{\lambda}}D_{X,Y}^{\epsilon}=0$ by Lemma \[2.14\], a contradiction. Thus ${\lambda}=\mu$. Similarly, we get $\eta=\mu$. 
By a direct computation, we can also get $C_{S,T}^{{\lambda}}D_{T,M}^{{\lambda}}D_{U,V}^{\mu}C_{V,N}^{\mu}C_{P,Q}^{\eta}D_{Q,W}^{\eta}=0$. This implies that $I^{\Lambda}I_{D}^{\Lambda}I^{\Lambda}=0$. Similarly $I_{D}^{\Lambda}I^{\Lambda}I_{D}^{\Lambda}=0$ is proved. Then $I^3=0$ follows. Now it is clear that $I\subseteq \operatorname{rad}A$ for $I$ is a nilpotent ideal of $A$.\ (2) $I$ is independent of the choice of $\tau$.\ Let $\tau$ and $\tau'$ be two symmetrizing traces and $D$, $d$ the dual bases determined by $\tau$ and $\tau'$ respectively. By Lemma \[2.3\], for arbitrary $d_{U,V}^{{\lambda}}\in d$, $$d_{U,V}^{{\lambda}}=\sum_{\varepsilon\in\Lambda, X,Y\in M(\varepsilon)}\tau(C_{X,Y}^{\varepsilon}d_{U,V}^{{\lambda}})D_{Y,X}^{\varepsilon}.$$ Then for arbitrary $S\in M({\lambda})$, $$C_{S,U}^{{\lambda}}d_{U,V}^{{\lambda}}=\sum_{\varepsilon\in\Lambda, X,Y\in M(\varepsilon)}\tau(C_{X,Y}^{\varepsilon}d_{U,V}^{{\lambda}})C_{S,U}^{{\lambda}}D_{Y,X}^{\varepsilon}.$$ By Lemma \[2.14\] (7), (8), if $\varepsilon<{\lambda}$, then $C_{X,Y}^{\varepsilon}d_{U,V}^{{\lambda}}=0$; if $\varepsilon>{\lambda}$, then $C_{S,U}^{{\lambda}}D_{Y,X}^{\varepsilon}=0.$ This implies that $$C_{S,U}^{{\lambda}}d_{U,V}^{{\lambda}}=\sum_{X,Y\in M({\lambda})}\tau(C_{X,Y}^{{\lambda}}d_{U,V}^{{\lambda}})C_{S,U}^{{\lambda}}D_{Y,X}^{{\lambda}}.$$ By Lemma \[2.14\] (5), if $Y\neq U$, then $C_{S,U}^{{\lambda}}D_{Y,X}^{{\lambda}}=0$. Hence $$C_{S,U}^{{\lambda}}d_{U,V}^{{\lambda}}=\sum_{X\in M({\lambda})}\tau(C_{X,U}^{{\lambda}}d_{U,V}^{{\lambda}})C_{S,U}^{{\lambda}}D_{U,X}^{{\lambda}}.$$ Noting that $\tau(C_{X,U}^{{\lambda}}d_{U,V}^{{\lambda}})=\tau(d_{U,V}^{{\lambda}}C_{X,U}^{{\lambda}})$, it follows from Lemma \[2.14\] that $d_{U,V}^{{\lambda}}C_{X,U}^{{\lambda}}=0$ if $X\neq V$. 
Thus $$C_{S,U}^{{\lambda}}d_{U,V}^{{\lambda}}=\tau(C_{V,U}^{{\lambda}}d_{U,V}^{{\lambda}})C_{S,U}^{{\lambda}}D_{U,V}^{{\lambda}}.$$ Similarly, we obtain $$C_{S,U}^{{\lambda}}D_{U,V}^{{\lambda}}=\tau'(C_{V,U}^{{\lambda}}D_{U,V}^{{\lambda}})C_{S,U}^{{\lambda}}d_{U,V}^{{\lambda}},$$ $$d_{V,U}^{{\lambda}}C_{U,S}^{{\lambda}}=\tau(C_{V,U}^{{\lambda}}d_{U,V}^{{\lambda}})D_{V,U}^{{\lambda}}C_{U,S}^{{\lambda}},$$ $$D_{V,U}^{{\lambda}}C_{U,S}^{{\lambda}}=\tau'(C_{V,U}^{{\lambda}}D_{U,V}^{{\lambda}})d_{V,U}^{{\lambda}}C_{U,S}^{{\lambda}}.$$ The above four formulas imply that $I$ is independent of the choice of symmetrizing trace.\ (3) $\dim_{R}I\geq\sum\limits_{{\lambda}\in\Lambda_{2}}(n_{{\lambda}}+\dim_{R}\operatorname{rad}{\lambda})\dim_{R}L_{{\lambda}}+\sum\limits_{{\lambda}\in\Lambda_{4}}n_{{\lambda}}^{2}.$\ For any ${\lambda}\in\Lambda_{2}$ and $S,T\in M({\lambda})$, it follows from Lemma \[2.14\] that $$C_{S,T}^{{\lambda}}D_{T,T}^{{\lambda}}\equiv \sum\limits_{X\in M({\lambda})}\Phi(X,S)D_{X,T}^{{\lambda}}\,\,\,\,\,(\mod A_{D}(>{\lambda})),$$ $$D_{T,T}^{{\lambda}}C_{T,S}^{{\lambda}}\equiv \sum\limits_{Y\in M({\lambda})}\Phi(Y,S)D_{T,Y}^{{\lambda}}\,\,\,\,\,(\mod A_{D}(>{\lambda})).$$ Let $V$ be the $R$-space generated by $$\{\sum\limits_{X\in M({\lambda})}\Phi(X,S)D_{X,T}^{{\lambda}}\mid S,T\in M({\lambda})\}\cup \{\sum\limits_{Y\in M({\lambda})}\Phi(Y,S)D_{T,Y}^{{\lambda}}\mid S,T\in M({\lambda})\}.$$ Then it is easy to know from the definition of $I^{\lambda}$ and $I_{D}^{\lambda}$ that $$\dim_{R}(I^{{\lambda}}+I_{D}^{\lambda})\geq\dim V.$$ Note that by Lemma \[2.21\], $D^{{\lambda}}: (D_{S},D_{T})\rightarrow D_{S,T}^{{\lambda}}$ is an isomorphism of $R$-modules. 
So we only need to consider the dimension of the space $V'$ generated by $$\{\sum\limits_{X\in M({\lambda})}\Phi(X,S)D_{X}\otimes D_{T}\mid S,T\in M({\lambda})\}\cup \{D_{T}\otimes\sum\limits_{Y\in M({\lambda})}\Phi(Y,S)D_{Y}\mid S,T\in M({\lambda})\}.$$ Since $\Phi_{{\lambda}}\neq 0$ and $\operatorname{rank}G({\lambda})=\dim_{R}L_{{\lambda}}$, we have $\dim V'=2n_{{\lambda}}\dim_{R}L_{{\lambda}}-(\dim_{R}L_{{\lambda}})^2,$ that is, $\dim V'=\dim_{R}L_{{\lambda}}\times(n_{{\lambda}}+\dim_{R}\operatorname{rad}{\lambda})$. Thus $$\dim_{R}(I^{{\lambda}}+I_{D}^{{\lambda}})\geq \dim_{R}L_{{\lambda}}\times(n_{{\lambda}}+\dim_{R}\operatorname{rad}{\lambda}).$$ The above inequality also holds for any ${\lambda}\in\Lambda_{4}$; since $\operatorname{rad}{\lambda}=0$ there, it yields $$\dim_{R}(I^{{\lambda}}+I_{D}^{{\lambda}})\geq n_{{\lambda}}^{2}$$ for any ${\lambda}\in\Lambda_{4}$. It is clear from Lemma \[2.15\] that $\dim_{R}I\geq \sum\limits_{{\lambda}\in\Lambda_{2}}\dim_{R}(I^{{\lambda}}+I_{D}^{{\lambda}})+\sum\limits_{{\lambda}\in\Lambda_{4}}n_{{\lambda}}^{2}$ and then item (3) follows.\ (4) $\sum\limits_{{\lambda}\in\Lambda_{2}}(\dim_{K}L_{{\lambda}})^{2}-\sum\limits_{{\lambda}\in\Lambda_{3}}n_{{\lambda}}^{2}\leq \sum\limits_{{\lambda}\in\Lambda_{2}}(\dim_{K}\operatorname{rad}{\lambda})^{2}-\sum\limits_{{\lambda}\in\Lambda_{4}}n_{{\lambda}}^{2}.$\ By (1) and (3), $$\dim_{R}\operatorname{rad}A\geq\sum\limits_{{\lambda}\in\Lambda_{2}}(n_{{\lambda}}+\dim_{R}\operatorname{rad}{\lambda})\dim_{R}L_{{\lambda}}+\sum\limits_{{\lambda}\in\Lambda_{4}}n_{{\lambda}}^{2}.$$ By the formula $$\dim_{R}\operatorname{rad}A=\dim_{R}A-\sum_{{\lambda}\in\Lambda_{0}}(\dim_{R}L_{{\lambda}})^{2},$$ we have $$\dim_{R}A-\sum_{{\lambda}\in\Lambda_{0}}(\dim_{R}L_{{\lambda}})^{2}\geq \sum\limits_{{\lambda}\in\Lambda_{2}}(n_{{\lambda}}+\dim_{R}\operatorname{rad}{\lambda})\dim_{R}L_{{\lambda}}+\sum\limits_{{\lambda}\in\Lambda_{4}}n_{{\lambda}}^{2}.$$ That is,
$$\sum_{{\lambda}\in\Lambda_{3}}n_{{\lambda}}^{2}+\sum_{{\lambda}\in\Lambda_{0}}n_{{\lambda}}^{2}-\sum_{{\lambda}\in\Lambda_{0}}(\dim_{R}L_{{\lambda}})^{2}\geq \sum\limits_{{\lambda}\in\Lambda_{2}}(n_{{\lambda}}+\dim_{R}\operatorname{rad}{\lambda})\dim_{R}L_{{\lambda}}+\sum\limits_{{\lambda}\in\Lambda_{4}}n_{{\lambda}}^{2},$$ or $$\sum_{{\lambda}\in\Lambda_{3}}n_{{\lambda}}^{2}+\sum_{{\lambda}\in\Lambda_{2}}n_{{\lambda}}^{2}-\sum_{{\lambda}\in\Lambda_{2}}(\dim_{R}L_{{\lambda}})^{2}\geq \sum\limits_{{\lambda}\in\Lambda_{2}}(n_{{\lambda}}+\dim_{R}\operatorname{rad}{\lambda})\dim_{R}L_{{\lambda}}+\sum\limits_{{\lambda}\in\Lambda_{4}}n_{{\lambda}}^{2},$$ or $$\sum\limits_{{\lambda}\in\Lambda_{2}}(\dim_{K}L_{{\lambda}})^{2}-\sum\limits_{{\lambda}\in\Lambda_{3}}n_{{\lambda}}^{2}\leq \sum\limits_{{\lambda}\in\Lambda_{2}}n_{{\lambda}}^{2}-\sum\limits_{{\lambda}\in\Lambda_{2}}(n_{{\lambda}}+ \dim_{R}\operatorname{rad}{\lambda})\dim_{R}L_{{\lambda}}-\sum\limits_{{\lambda}\in\Lambda_{4}}n_{{\lambda}}^{2}.$$ According to $\dim_{R}L_{{\lambda}}=n_{{\lambda}}-\dim_{R}\operatorname{rad}{\lambda}$, the right side of the above inequality is $\sum\limits_{{\lambda}\in\Lambda_{2}}(\dim_{K}\operatorname{rad}{\lambda})^{2}-\sum\limits_{{\lambda}\in\Lambda_{4}}n_{{\lambda}}^{2}$ and this completes the proof. Let $R$ be an integral domain and $A$ a symmetric cellular algebra. Let ${\lambda}$ be a minimal element in $\Lambda$. If $\operatorname{rad}{\lambda}\neq 0$, then $R-{\rm span}\{C_{S,T}^{{\lambda}}\mid S,T\in M({\lambda})\}\subset \operatorname{rad}A$. If $a=\sum\limits_{X,Y\in M({\lambda})}r_{X,Y}C_{X,Y}^{{\lambda}}$ is not in $\operatorname{rad}A$, then there exists some $D_{U,V}^{\mu}$ such that $aD_{U,V}^{\mu}\notin \operatorname{rad}A$. If $\mu\neq{\lambda}$, then $aD_{U,V}^{\mu}=0$ by Lemma \[2.14\], so it is in $\operatorname{rad}A$. If $\mu={\lambda}$, then $aD_{U,V}^{\mu}\in \operatorname{rad}A$ by Theorem \[thm\]. This is a contradiction.
Let $A$ be a finite dimensional symmetric cellular algebra and $r\in \operatorname{rad}A$. Assume that ${\lambda}\in\Lambda$ satisfies:\ [(1)]{} There exist $S,T\in M({\lambda})$ such that $C_{S,T}^{{\lambda}}$ appears in the expansion of $r$ with nonzero coefficient.\ [(2)]{} For any $\mu>{\lambda}$ and $U,V\in M(\mu)$, the coefficient of $C_{U,V}^{\mu}$ in the expansion of $r$ is zero.\ Then $k_{{\lambda}}=0$. Since $r=\sum\limits_{\varepsilon\in\Lambda, X,Y\in M(\varepsilon)}r_{X,Y,\varepsilon}C_{X,Y}^{\varepsilon}\in \operatorname{rad}A$, we have $rD_{T,S}^{{\lambda}}\in \operatorname{rad}A$. The conditions (1) and (2) imply that $$rD_{T,S}^{{\lambda}}=\sum\limits_{X\in M({\lambda})}r_{X,T,{\lambda}}C_{X,T}^{{\lambda}}D_{T,S}^{{\lambda}}.$$ It is easy to check that $(rD_{T,S}^{{\lambda}})^{n}=(k_{{\lambda}}r_{S,T,{\lambda}})^{n-1}rD_{T,S}^{{\lambda}}$. Applying $\tau$ to both sides of this equation, we get $\tau((rD_{T,S}^{{\lambda}})^{n})=(k_{{\lambda}}r_{S,T,{\lambda}})^{n-1}r_{S,T,{\lambda}}$. If $k_{{\lambda}}\neq 0$, then $\tau((rD_{T,S}^{{\lambda}})^{n})\neq 0$. Hence $rD_{T,S}^{{\lambda}}$ is not nilpotent, and hence $rD_{T,S}^{{\lambda}}\notin \operatorname{rad}A$, a contradiction. This implies that $k_{{\lambda}}=0$. [**Example**]{} The group algebra $\mathbb{Z}_{3}S_{3}$.
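The products listed in this example can be checked mechanically. The following is a small verification sketch (the encoding of $S_3$ as permutation tuples, the composition convention, and all names are ours, not part of the paper); it confirms, for instance, that $C_{1,1}^{(3)}D_{1,1}^{(3)}$ is again the sum of all six group elements:

```python
def compose(p, q):
    # composition of permutations of {0,1,2}: (p*q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

e, s1, s2 = (0, 1, 2), (1, 0, 2), (0, 2, 1)
s1s2, s2s1 = compose(s1, s2), compose(s2, s1)
s1s2s1 = compose(s1, s2s1)

def mult(a, b, mod=3):
    """Multiply two elements of Z_3 S_3, given as dicts {permutation: coefficient}."""
    out = {}
    for p, cp in a.items():
        for q, cq in b.items():
            r = compose(p, q)
            out[r] = (out.get(r, 0) + cp * cq) % mod
    return {k: v for k, v in out.items() if v}

# C_{1,1}^{(3)} is the sum of all six group elements
C3 = {g: 1 for g in (e, s1, s2, s1s2, s2s1, s1s2s1)}
# D_{1,1}^{(3)} = -s_2 + s_1 s_2 + s_2 s_1  (coefficients taken mod 3)
D3 = {s2: 2, s1s2: 1, s2s1: 1}

print(mult(C3, D3) == C3)  # prints True
```

Since $C_{1,1}^{(3)}$ is the sum of all group elements, the product depends only on the coefficient sum of $D_{1,1}^{(3)}$, which is $-1+1+1=1$ in $\mathbb{Z}_3$; the script confirms this.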
The algebra has a basis $$\{1, s_{1}, s_{2}, s_{1}s_{2}, s_{2}s_{1}, s_{1}s_{2}s_{1}\}.$$ A cellular basis is $C_{1,1}^{(3)}=1+s_{1}+s_{2}+s_{1}s_{2}+s_{2}s_{1}+s_{1}s_{2}s_{1}$, $C_{1,1}^{(2,1)}=1+s_{1}$, $C_{1,2}^{(2,1)}=s_{2}+s_{1}s_{2}$, $C_{2,1}^{(2,1)}=s_{2}+s_{2}s_{1}$, $C_{2,2}^{(2,1)}=1+s_{1}s_{2}s_{1}$, $C_{1,1}^{(1^3)}=1$.\ The corresponding dual basis is $D_{1,1}^{(3)}=-s_{2}+s_{1}s_{2}+s_{2}s_{1}$, $D_{1,1}^{(2,1)}=s_{1}+s_{2}-s_{1}s_{2}-s_{2}s_{1}$, $D_{2,1}^{(2,1)}=s_{2}-s_{1}s_{2}$, $D_{1,2}^{(2,1)}=s_{2}-s_{2}s_{1}$, $D_{2,2}^{(2,1)}=s_{2}-s_{1}s_{2}-s_{2}s_{1}+s_{1}s_{2}s_{1}$, $D_{1,1}^{(1^3)}=1-s_{1}-s_{2}+s_{1}s_{2}+s_{2}s_{1}-s_{1}s_{2}s_{1}$. It is easy to check that $\Lambda_{3}=\{(3)\}$ and $\Lambda_{1}=\{(1^3)\}$. Then $\dim_{K}\operatorname{rad}A=4$. Now we compute $I$. $C_{1,1}^{(3)}D_{1,1}^{(3)}=1+s_{1}+s_{2}+s_{1}s_{2}+s_{2}s_{1}+s_{1}s_{2}s_{1}$, $C_{1,2}^{(2,1)}D_{2,1}^{(2,1)}=1+s_{1}-s_{2}s_{1}-s_{1}s_{2}s_{1}$, $C_{1,2}^{(2,1)}D_{2,2}^{(2,1)}=s_{2}+s_{1}s_{2}-s_{2}s_{1}-s_{1}s_{2}s_{1}$, $C_{2,1}^{(2,1)}D_{1,2}^{(2,1)}=1-s_{1}-s_{1}s_{2}+s_{1}s_{2}s_{1}$, $C_{2,1}^{(2,1)}D_{1,1}^{(2,1)}=s_{2}+s_{2}s_{1}-s_{1}-s_{1}s_{2}$. Then $\dim_{K} I=4$. This implies that $I=\operatorname{rad}A$. **Semisimplicity of symmetric cellular algebras** ================================================= As a by-product of the results on radicals, we will give some equivalent conditions for a finite dimensional symmetric cellular algebra to be semisimple. \[3.5\] Let $A$ be a finite dimensional symmetric cellular algebra.
Then the following are equivalent.\ [(1)]{} The algebra $A$ is semisimple.\ [(2)]{} $k_{{\lambda}}\neq 0$ for all ${\lambda}\in\Lambda$.\ [(3)]{} $\{C_{S,T}^{{\lambda}}D_{T,T}^{{\lambda}}\mid{\lambda}\in\Lambda, S,T\in M({\lambda})\}$ is a basis of $A$.\ [(4)]{} For any ${\lambda}\in\Lambda$, there exist $S,T\in M({\lambda})$ such that $(C_{S,T}^{{\lambda}}D_{T,S}^{{\lambda}})^{2}\neq 0$.\ [(5)]{} For any ${\lambda}\in\Lambda$ and arbitrary $S,T\in M({\lambda})$, $(C_{S,T}^{{\lambda}}D_{T,S}^{{\lambda}})^{2}\neq 0$. (2)$\Longrightarrow$(1) If $k_{{\lambda}}\neq 0$ for all ${\lambda}\in\Lambda$, then $\operatorname{rad}{\lambda}=0$ for all ${\lambda}\in\Lambda$ by Corollary \[2.20\]. This implies that $A$ is semisimple by Theorem \[glthm\]. (1)$\Longrightarrow$(2) Assume that there exists some ${\lambda}\in\Lambda$ such that $k_{{\lambda}}=0$. Then it is easy to check that $I^{{\lambda}}$ is a nilpotent ideal of $A$. Moreover, $I^{{\lambda}}\neq 0$: since $\tau(C_{U,V}^{{\lambda}}D_{V,U}^{{\lambda}})=1$, we have $C_{U,V}^{{\lambda}}D_{V,U}^{{\lambda}}\neq 0$. This implies that $I^{{\lambda}}\subseteq \operatorname{rad}A$. But $A$ is semisimple, a contradiction. This implies that $k_{{\lambda}}\neq 0$ for all ${\lambda}\in\Lambda$. (2)$\Longrightarrow$(3) Let $\sum\limits_{{\lambda}\in\Lambda, S,T\in M({\lambda})}k_{S,T,{\lambda}}C_{S,T}^{{\lambda}}D_{T,T}^{{\lambda}}=0$. Take a maximal element ${\lambda}_{0}\in\Lambda$. For arbitrary $X,Y\in M({\lambda}_{0})$, $$\begin{aligned} C_{X,X}^{{\lambda}_{0}}D_{X,Y}^{{\lambda}_{0}}(\sum_{{\lambda}\in\Lambda, S,T\in M({\lambda})}k_{S,T,{\lambda}}C_{S,T}^{{\lambda}}D_{T,T}^{{\lambda}})=k_{{\lambda}_{0}}\sum_{T\in M({\lambda}_{0})}k_{Y,T,{\lambda}_{0}}C_{X,T}^{{\lambda}_{0}}D_{T,T}^{{\lambda}_{0}}=0.\end{aligned}$$ This implies that $\tau(k_{{\lambda}_{0}}\sum\limits_{T\in M({\lambda}_{0})}k_{Y,T,{\lambda}_{0}}C_{X,T}^{{\lambda}_{0}}D_{T,T}^{{\lambda}_{0}})=0$, i.e., $k_{{\lambda}_{0}}k_{Y,X,{\lambda}_{0}}=0$.
Since $k_{{\lambda}_{0}}\neq 0$, we get $k_{Y,X,{\lambda}_{0}}=0$. Repeating this process, we get that all the $k_{S,T,{\lambda}}$ are zero. (3)$\Longrightarrow$(2) Since $\{C_{S,T}^{{\lambda}}D_{T,T}^{{\lambda}}\mid{\lambda}\in\Lambda, S,T\in M({\lambda})\}$ is a basis of $A$, we have $$1=\sum_{{\lambda}\in\Lambda, S,T\in M({\lambda})}k_{S,T,{\lambda}}C_{S,T}^{{\lambda}}D_{T,T}^{{\lambda}}.$$ For arbitrary $\mu\in\Lambda$ and $U,V\in M(\mu)$, we have $$\begin{aligned} C_{U,V}^{\mu}D_{V,V}^{\mu}&=&\sum_{{\lambda}\in\Lambda, S,T\in M({\lambda})}k_{S,T,{\lambda}}C_{S,T}^{{\lambda}}D_{T,T}^{{\lambda}}C_{U,V}^{\mu}D_{V,V}^{\mu}\\ &=&k_{\mu}\sum_{ X\in M(\mu)}k_{X,U,\mu}C_{X,V}^{\mu}D_{V,V}^{\mu}.\end{aligned}$$ This implies that $k_{\mu}\neq 0$ since $C_{U,V}^{\mu}D_{V,V}^{\mu}\neq 0$. The fact that $\mu$ is arbitrary implies that $k_{{\lambda}}\neq 0$ for all ${\lambda}\in\Lambda$. (2)$\Longleftrightarrow$(4) and (2)$\Longleftrightarrow$(5) are clear by Lemma \[2.17\]. Let $R$ be an integral domain and $A$ a symmetric cellular algebra with a cell datum $(\Lambda, M, C, i)$. Let $K$ be the field of fractions of $R$ and $A_{K}=A\otimes_{R}K$. If $A_{K}$ is semisimple, then $$\{\mathcal {E}_{S,T}^{{\lambda}}=C_{S,S}^{{\lambda}}D_{S,T}^{{\lambda}}C_{T,T}^{{\lambda}}\mid{\lambda}\in\Lambda,S,T\in M({\lambda})\}$$ is a cellular basis of $A_{K}$. Moreover, if ${\lambda}\neq\mu$, then $\mathcal {E}_{S,T}^{{\lambda}}\mathcal {E}_{U,V}^{\mu}=0$. Firstly, we prove that $\{\mathcal {E}_{S,T}^{{\lambda}}\mid{\lambda}\in\Lambda,S,T\in M({\lambda})\}$ is a basis of $A_{K}$. We only need to show that the elements of this set are linearly independent over $K$.
By Lemma \[2.14\], we have $$\begin{aligned} \mathcal {E}_{S,T}^{{\lambda}} &=&\sum\limits_{X\in M({\lambda})}r_{(T,T,{\lambda}),(X,S,{\lambda}),(T,S,{\lambda})}C_{S,S}^{{\lambda}}D_{S,X}^{{\lambda}}\\ &=&\sum\limits_{X\in M({\lambda})}\Phi(X,T)C_{S,X}^{{\lambda}}D_{X,X}^{{\lambda}}\end{aligned}$$ for all ${\lambda}\in\Lambda, S,T\in M({\lambda})$. Since $A_{K}$ is semisimple, all $G({\lambda})$ are non-degenerate. Moreover, $\{C_{S,T}^{{\lambda}}D_{T,T}^{{\lambda}}\mid{\lambda}\in\Lambda, S,T\in M({\lambda})\}$ is a basis of $A_{K}$ by Corollary \[3.5\], then $$\{\mathcal {E}_{S,T}^{{\lambda}}=C_{S,S}^{{\lambda}}D_{S,T}^{{\lambda}}C_{T,T}^{{\lambda}}\mid{\lambda}\in\Lambda,S,T\in M({\lambda})\}$$ is a basis of $A_{K}$. Secondly, $i(\mathcal {E}_{S,T}^{{\lambda}})\equiv\mathcal {E}_{T,S}^{{\lambda}}$ for arbitrary ${\lambda}\in\Lambda$, and $S,T\in M({\lambda})$. This is clear by Lemma \[2.14\] and \[2.15\]. Thirdly, for arbitrary $a\in A$, since $\{C_{S,T}^{{\lambda}}\mid{\lambda}\in\Lambda, S,T\in M({\lambda})\}$ is a cellular basis of $A$, we have $$\begin{aligned} a\mathcal{E}_{S,T}^{{\lambda}}&=&aC_{S,S}^{{\lambda}}D_{S,T}^{{\lambda}}C_{T,T}^{{\lambda}}\\ &=&\sum_{X\in M({\lambda})}r_{a}(X,S)C_{X,S}^{{\lambda}} D_{S,T}^{{\lambda}}C_{T,T}^{{\lambda}}\\ &=&\sum_{X\in M({\lambda})}r_{a}(X,S)C_{X,X}^{{\lambda}} D_{X,T}^{{\lambda}}C_{T,T}^{{\lambda}}\\ &=&\sum_{X\in M({\lambda})}r_{a}(X,S)\mathcal {E}_{X,T}^{{\lambda}}.\end{aligned}$$ Clearly, $r_{a}(X,S)$ is independent of $T$. Then $$\{\mathcal {E}_{S,T}^{{\lambda}}=C_{S,S}^{{\lambda}}D_{S,T}^{{\lambda}}C_{T,T}^{{\lambda}}\mid{\lambda}\in\Lambda,S,T\in M({\lambda})\}$$ is a cellular basis of $A_{K}$. 
Finally, for any ${\lambda}, \mu\in\Lambda$, $S,T\in M({\lambda})$, $U,V\in M(\mu)$, $$\begin{aligned} \mathcal {E}_{S,T}^{{\lambda}}\mathcal {E}_{U,V}^{\mu}&=&C_{S,S}^{{\lambda}}D_{S,T}^{{\lambda}}C_{T,T}^{{\lambda}}C_{U,U}^{\mu}D_{U,V}^{\mu}C_{V,V}^{\mu}\\ &=&\sum_{\epsilon\in\Lambda, X,Y\in M(\epsilon)}r_{(T,T,{\lambda}),(U,U,\mu),(X,Y,\epsilon)} C_{S,S}^{{\lambda}}D_{S,T}^{{\lambda}}C_{X,Y}^{\epsilon}D_{U,V}^{\mu}C_{V,V}^{\mu}.\end{aligned}$$ By Lemma \[2.14\], $C_{S,S}^{{\lambda}}D_{S,T}^{{\lambda}}C_{X,Y}^{\epsilon}D_{U,V}^{\mu}C_{V,V}^{\mu}\neq 0$ implies $\epsilon\geq{\lambda}, \epsilon\geq\mu$. On the other hand, by Definition \[2.4\], $r_{(T,T,{\lambda}),(U,U,\mu),(X,Y,\epsilon)}\neq 0$ implies $\epsilon\leq{\lambda}$ and $\epsilon\leq\mu$. Therefore, if ${\lambda}\neq\mu$, then $\mathcal {E}_{S,T}^{{\lambda}}\mathcal {E}_{U,V}^{\mu}=0$. **Acknowledgement** The author thanks his supervisor Prof. C.C. Xi. He also thanks Dr. Wei Hu and Zhankui Xiao for many helpful conversations. J. Brundan and C. Stroppel, [*Highest weight categories arising from Khovanov’s diagram algebra I: cellularity*]{}, arXiv:0806.1532v1. J. Du and H.B. Rui, [*Based algebras and standard bases for quasi-hereditary algebras*]{}, Trans. Amer. Math. Soc., **350**, (1998), 3207-3235. M. Geck, [*Hecke algebras of finite type are cellular*]{}, Invent. Math., **169**, (2007), 501-517. F. Goodman, [*Cellularity of cyclotomic Birman-Wenzl-Murakami algebras*]{}, J. Algebra, **321**, (2009), 3299-3320. J.J. Graham, [*Modular representations of Hecke algebras and related algebras*]{}, PhD Thesis, Sydney University, 1995. J.J. Graham and G.I. Lehrer, [*Cellular algebras*]{}, Invent. Math., **123**, (1996), 1-34. R.M. Green, [*Completions of cellular algebras*]{}, Comm. Algebra, **27**, (1999), 5349-5366. R.M. Green, [*Tabular algebras and their asymptotic versions*]{}, J. Algebra, **252**, (2002), 27-64. D. Kazhdan and G.
Lusztig, [*Representations of Coxeter groups and Hecke algebras*]{}, Invent. Math., **53**, (1979), 165-184. S. Koenig and C.C. Xi, [*On the structure of cellular algebras*]{}, In: I. Reiten, S. Smalø and O. Solberg (Eds.): Algebras and Modules II. Canadian Mathematics Society Proceedings, Vol. **24**, (1998), 365-386. S. Koenig and C.C. Xi, [*Cellular algebras: Inflations and Morita equivalences*]{}, J. London Math. Soc. (2), **60**, (1999), 700-722. S. Koenig and C.C. Xi, [*A characteristic-free approach to Brauer algebras*]{}, Trans. Amer. Math. Soc., **353**, (2001), 1489-1505. S. Koenig and C.C. Xi, [*Affine cellular algebras*]{}, preprint. G.I. Lehrer and R.B. Zhang, [*A Temperley-Lieb analogue for the BMW algebra*]{}, arXiv:0806.0687v1. G. Malle and A. Mathas, [*Symmetric cyclotomic Hecke algebras*]{}, J. Algebra, **205**, (1998), 275-293. E. Murphy, [*The representations of Hecke algebras of type $A_{n}$*]{}, J. Algebra, **173**, (1995), 97-121. H.B. Rui and C.C. Xi, [*The representation theory of cyclotomic Temperley-Lieb algebras*]{}, Comment. Math. Helv., **79**, no. 2, (2004), 427-450. B.W. Westbury, [*Invariant tensors and cellular categories*]{}, J. Algebra, **321**, (2009), 3563-3567. C.C. Xi, [*Partition algebras are cellular*]{}, Compositio Math., **119**, (1999), 99-109. C.C. Xi, [*On the quasi-heredity of Birman-Wenzl algebras*]{}, Adv. Math., **154**, (2000), 280-298. C.C. Xi and D.J. Xiang, [*Cellular algebras and Cartan matrices*]{}, Linear Algebra Appl., **365**, (2003), 369-388. [^1]: This work is partially supported by the Research Fund for the Doctoral Program of Higher Education, Ministry of Education of China.
--- author: - 'by Bill Baritompa, Rainer Löwen, Burkard Polster, and Marty Ross' title: Mathematical Table Turning Revisited --- [Abstract]{} We investigate under which conditions a rectangular table can be placed with all four feet on a ground described by a function $\mathbb R^2\to \mathbb R$. We start by considering highly idealized tables that are just simple rectangles. We prove that given any rectangle, any continuous ground and any point on the ground, the rectangle can be positioned such that all its vertices are on the ground and its center is on the vertical through the distinguished point. This is a mathematical existence result and does not provide a practical way of actually finding a balancing position. An old, simple, beautiful, intuitive and applicable, but not very well known argument guarantees that a square table can be balanced on any ground that is not “too wild”, by turning it on the spot. In the main part of this paper we turn this intuitive argument into a mathematical theorem. More precisely, our theorem deals with rectangular tables each consisting of a solid rectangle as top and four line segments of equal length as legs. We prove that if the ground does not rise by more than $\arctan\left (\frac{1}{\sqrt 2}\right) \approx 35.26^\circ$ between any two of its points, and if the legs of the table are at least half as long as its diagonals, then the table can be balanced anywhere on the ground, without any part of it digging into the ground, by turning the table on the spot. This significantly improves on related results recently reported on in [@Martin] and [@Polster1] by also dealing with tables that are not square, optimizing the allowable “wobblyness” of the ground, giving minimal leg lengths that ensure that the table won’t run into the ground, and providing (hopefully) a more accessible proof. 
Finally, we give a summary of related earlier results, prove a number of related results for tables of shapes other than rectangles, and give some advice on using our results in real life.

%&\$\#!!!
=========

You sit down at a table and notice that it is wobbling, because it is standing on a surface that is not quite even. What to do? Curse, yes, of course. Apart from that, it seems that the only quick fix to this problem is to wedge something under one of the feet of the table to stabilise it. However, there is another simple approach to solving this annoying problem. Just turn the table on the spot! More often than not, you will find a position in which all four legs of the table are touching the ground. This may seem somewhat counterintuitive. So, why and under what conditions does this trick work?

Balancing Mathematical Tables—a Matter of Existence
===================================================

In the mathematical analysis of the problem, we will first assume that the ground is the graph of a function $g:\mathbb R^2\to \mathbb R$, and that a [*mathematical table*]{} consists of the four vertices of a rectangle of diameter 2 whose center is on the $z$-axis. What we are then interested in is determining for which choices of the function $g$ can a mathematical table be [*balanced locally*]{}: that is, when can a table be moved such that its center remains on the $z$-axis, and all its vertices end up on the ground.

We first observe that it is not always possible to balance a mathematical table locally. Consider, for example, the reflectively symmetric function of the angle $\theta$ about the $z$-axis with $$g(\theta)= \left \{ \begin{array}{l} 2 \quad \mbox{ if } 0\leq \theta < \frac{\pi}{2} \mbox{ or } \pi \leq \theta < \frac{3\pi}{2}, \\ 1 \quad \mbox{ otherwise}. \end{array} \right.$$ So, the ground consists of four quadrants, two at height 1 and two at height 2; see Figure \[cliff\].
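For readers who like to experiment, here is a small numerical sketch (entirely our own; the helper names are invented for illustration) of why no *horizontal* placement of a square table works on this ground: the ground heights under the four corners always alternate 2, 1, 2, 1, so the average heights under the two diagonals always differ by 1, while the diagonals of a balanced rectangle would have to agree at their common midpoint. (Tilted placements require the fuller argument below.)

```python
import math

def g(theta):
    """The clifflike ground: height depends only on the angle about the z-axis."""
    theta %= 2 * math.pi
    high = (0 <= theta < math.pi / 2) or (math.pi <= theta < 3 * math.pi / 2)
    return 2.0 if high else 1.0

def diagonal_midpoint_gap(gamma):
    """Ground heights under the corners of a horizontal square table (diagonals
    of length 2, centre on the z-axis) rotated by gamma: compare the average
    height under one pair of opposite corners with the other pair."""
    h = [g(gamma + k * math.pi / 2) for k in range(4)]
    return abs((h[0] + h[2]) / 2 - (h[1] + h[3]) / 2)

# The averages always differ by exactly 1, but the diagonals of a rectangle
# bisect each other, so a balanced table would need them to agree: no
# horizontal placement can put all four corners on this ground.
assert all(diagonal_midpoint_gap(i * 0.01) == 1.0 for i in range(629))
```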
It is not hard to see that a square mathematical table cannot be balanced locally on such a clifflike piece of ground. On the other hand, we can prove the following theorem: A mathematical table can always be balanced locally, as long as the ground function $g$ is continuous. This result is a seemingly undocumented corollary of a theorem by Livesay [@Livesay], which can be phrased as follows: [*For any continuous function $f$ defined on the unit sphere, we can position a given mathematical table with all its vertices on the sphere such that $f$ takes on the same value at all four vertices.*]{} Note that since our mathematical table has diagonals of length 2, its four vertices will be on the unit sphere iff the centers of the table and the sphere coincide. Choose as the continuous function the [*vertical distance*]{} from the ground, $$f:\mathbb S^2 \to \mathbb R:(x,y,z) \mapsto z-g(x,y).$$ Note that here and in everything that follows the vertical distance of a point in space from the ground is really a signed vertical distance; depending on whether the point is above, on or below the ground its vertical distance is positive, zero or negative, respectively. Now, we are guaranteed a position of our rectangle with center at the origin such that all its vertices are the same vertical distance from the ground. This means that we can balance our mathematical table locally by translating it this distance in the vertical direction. Easy! Balancing Real Tables...by Turning the Tables ============================================= So, one of our highly idealized tables can be balanced locally on any continuous ground. However, being an existence result, Theorem 1 is less applicable to our real-life balancing act than it appears at first glance. Here are two problems that seem worth pondering: 1. [*Mathematical vs. 
Real Tables.*]{} A real table consists of four legs and a table top; our theorem only tells us that we can balance the four endpoints of the legs of this real table. However, balancing the whole real table in this position may be physically impossible, since the table top or other parts of the legs may run into the ground. To deal with this complication, we define a [*real table*]{} to consist of a solid rectangle with diameters of length 2 as [*top*]{}, and four line segments of equal length as [*legs*]{}. These legs are attached to the top at right angles, as shown in Figure \[wobble\]. The end points of the legs of a real table form its [*associated mathematical table*]{}. We say that a real table is balanced locally if its associated mathematical table is balanced locally, and if no point of the real table is below the ground. 2. [*Balancing by Turning.*]{} A second problem with our analysis so far is that Theorem 1, while guaranteeing a balancing position, provides no practical method for finding it. After all, although we restrict the center of the table to the $z$-axis, there are still four degrees of freedom to play with when we are actually trying to find a balancing position. The following rough argument indicates how, by turning a table on the spot in a certain way, we should be able to locate a balancing position, as long as we are dealing with a square table and a ground that is not “too crazy”. Unlike most other real-world applications of the Intermediate Value Theorem, it seems that this neat argument is not as well-known as it deserves. We have not been able to pinpoint its origin, but from personal experience we know that the argument has been around for at least thirty five years and that people keep rediscovering it. 
In terms of proper references in which variations of the argument explicitly appear, we are only aware of [@gardner], [@gardner1], [@gardner2] (Chapter 6, Problem 6), [@Hunzinker], [@Kraft], [@Martin], [@Polster], [@Polster1] and [@Polster2]; the earliest reference in this list, [@gardner], is Martin Gardner’s [*Mathematical Games*]{} column in the May 1973 issue of [*Scientific American*]{}. Note that an essential ingredient of the argument is the simple fact that a quarter-turn around its centre takes a square into itself—to move the table from the initial position to the end position takes roughly a quarter-turn around the $z$-axis. Closely related well-documented quarter-turn arguments date back almost a century; see, for example, Emch’s proof that any oval contains the vertices of a square in [@Emch] or [@Mayerson], Section 4. At any rate, we definitely do not claim to have invented this argument. At first glance, the above argument appears reasonable and, if true, would provide a foolproof method for balancing a square table locally by turning. However, for arbitrary continuous ground functions, it appears just about impossible to turn this intuitive argument into a rigorous proof. In particular, it seems very difficult to suitably model the rotating action, so that the vertical distance of the hovering vertices depends continuously upon the rotation angle, and such that we can always be sure to finish in the end position. As a second problem, it is easy to construct continuous grounds on which real tables cannot be balanced locally. For example, consider a real square table with short legs, together with a wedge-shaped ground made up of two steep half-planes meeting in a ridge along the $x$-axis. Then it is clear that the solid table top hitting the ground will prevent the table from being balanced locally on this ridge. By restricting ourselves to grounds that are not too wild, we can prove that [*balancing locally by turning*]{} really works. 
Suppose the ground is described by a Lipschitz continuous function[^1] with Lipschitz constant less than or equal to $\frac{1}{\sqrt 2}$. Then a real table with ratio $$r=\frac{\mbox {length short side}}{\mbox{length long side}}$$ can be balanced locally on this ground by turning if its legs have length greater than or equal to $\frac{1}{\sqrt {1+r^2}}$. Since $0<r\leq 1$, the maximum of $\frac{1}{\sqrt {1+r^2}}$ in this range is 1, while all our tables have diagonals of length 2. Thus we conclude that any real table whose legs are at least half as long as its diagonal can be balanced locally by turning on any “good” ground. If we are dealing with a square table, then this table can definitely be balanced locally by turning if its legs are at least half as long as its sides. Because of the half-turn symmetry of rectangles, we can be sure to reach a balancing position of a rectangular table whilst turning it 180 degrees on the spot. As we indicated earlier, to balance a square table we never have to turn it much more than 90 degrees. For an outline of the following proof for the special case of square tables, aimed at a very general audience, see [@Polster1]. Furthermore, it has just come to our attention that André Martin has also recently published a proof of this result in the special case of square tables and Lipschitz continuous ground functions with Lipschitz constant less than $2-{\sqrt 3}$. In terms of angles, Martin’s Lipschitz constant corresponds to 15 degrees and ours, which is optimal for local turning, to approximately 35.26 degrees. [*Proof.*]{} We again start by considering a mathematical table $ABCD$ with diameters of length 2 and centre $O$. Our approach is to bound the wobblyness of our ground by a suitable Lipschitz condition such that putting the two opposite vertices $A$ and $C$ on the ground, and wobbling the table about $AC$ until $B$ and $D$ are at equal vertical distance from the ground, are unique operations. 
This ensures that everything in sight moves continuously, as we turn the table on the spot. Following this, it is easy to conclude that we can balance the table locally by turning it. Our intuition tells us that to successfully place the four corners, we need four degrees of freedom, four separate motions of the table. Putting our intuition into effect, we approach our balancing act as a succession of four Intermediate Value Theorem (IVT) arguments, taking one “dimension” at a time.

FIRST VERTEX: $A$

Start out with the table hovering horizontally above the ground so that $OA$ lies above the positive $x$-axis, and lower the table until $A$ touches the ground.

SECOND VERTEX: $C$

We now show that since the ground function $g$ is continuous, $A$ can be slid along the ground, towards the $z$-axis (with $O$ sliding up or down the $z$-axis), so that $C$ also touches the ground. To do this, consider the function $$D(t)= |(t,0,g(t,0))-(-t,0,g(-t,0))|^2=4t^2+(g(t,0)-g(-t,0))^2\, .$$ Since our table has diagonals of length 2, we want a value of $t\leq \frac{2}{2}=1$ such that $D(t)=2^2=4$. Since $D$ is continuous, $D(0)=0$ and $D(1)\geq 4$, this follows trivially from IVT.

UNIQUENESS OF $C$

Assuming that $g$ is Lipschitz with $Lip(g)\leq 1$, we show that the above positioning of $C$ on the ground is unique. This follows from the fact that the function $D$ is strictly monotonic; this can be seen by differentiating $D$, noting that $|g(t,0)-g(-t,0)|\leq 2t$ with equality only if the function $g(t,0)$ is linear with slope $\pm 1$ in the interval under consideration. (Lipschitzness is enough for this differentiation argument to work, but a direct algebraic argument is also easy).

EQUAL HOVERING POSITION

We now rotate the table through an angle $\theta\in [-\frac{\pi}{2}, \frac{\pi}{2}]$ about the diagonal $AC$. We choose the direction so that rotating the table through the angle $-\frac{\pi}{2}$ brings the table into a vertical position with $B$ lying above $AC$.
We want to prove the existence of a $\theta$ for which the points $B$ and $D$ are at an equal vertical distance from the ground: we call such a position an [*equal hovering position*]{}. To show that there is such a special position, we first choose $\theta=-\frac{\pi}{2}$. The table is now vertical, with $B$ above $AC$ and $D$ below $AC$. Since the segments $AB$ and $BC$ are orthogonal, one of the slopes[^2] of these segments will be greater than or equal to 1. Hence, since $Lip(g)\leq 1$ and since both $A$ and $C$ are on the ground, we conclude that $B$ is above or on the ground; similarly, we conclude that $D$ is below or on the ground. If we now rotate the table about $AC$ until $\theta=\frac{\pi}{2}$, then $B$ is below or on the ground and $D$ is above or on the ground. Now, a straightforward application of IVT guarantees a value of $\theta$ for which $B$ and $D$ are at an equal vertical distance from the ground.

UNIQUENESS OF THE EQUAL HOVERING POSITION

We now fix $k\leq 1$ and take the ground to have Lipschitz constant at most $k$. We show there exists a choice of $k$ which guarantees the uniqueness of the hovering position. Take $A$ and $C$ to be touching the ground as above, with $AC$ then inclined at an angle $\phi$. In the following, we sometimes need to express the various objects as functions of $\theta$, the rotation angle about $AC$ (when assuming the inclination angle $\phi$ to be fixed, which is the case when we are referring to a particular ground); then, for example, $AB$ would be expressed as $AB(\theta)$. At other times, we need to express the objects as functions of $\phi$ and $\theta$ (when we are not referring to a particular ground); $AB$ would then be expressed as $AB(\phi, \theta)$. Here $\phi\in [-\frac{\pi}{4}, \frac{\pi}{4}]$ and $\theta\in [-\frac{\pi}{2}, \frac{\pi}{2}]$. We first note that for any equal hovering position the slopes of both $AB$ and $BC$ must be at most $k$ in magnitude.
To see this, suppose $AB$ has slope greater than $k$. Then, clearly, $B$ is either above or below the ground. Since $CD$ is parallel to $AB$, it has the same slope as $AB$; further, if $B$ is higher than $A$, then $D$ is lower than $C$, and vice versa. Therefore, if $B$ is above the ground, then $D$ is below the ground, and vice versa. It follows that equal hovering is impossible. Second, let $tangentB(\theta)$ and $tangentD(\theta)$ be the tangent vectors to the semi-circles swept out by the points $B(\theta)$ and $D(\theta)$, and let $vertB(\theta)$ and $vertD(\theta)$ be respectively the vertical distances of $B$ and $D$ to the ground. Note that we have an equal hovering position iff $vertB(\theta)-vertD(\theta)=0$. It is easy to see that in the $\theta$-interval where the slope of $tangentB(\theta)$ is greater than or equal to $k$, $vertB(\theta)$ is strictly decreasing. Also, since $tangentB(\theta)=-tangentD(\theta)$, $vertD(\theta)$ is strictly increasing in this interval. Thus, $vertB(\theta)-vertD(\theta)$ is strictly decreasing. Now, let’s choose $$k = \min_{\phi,\theta}\,\max\{slopeAB(\phi, \theta), slopeBC(\phi,\theta), slopetangentB(\phi,\theta)\}.$$ Of course, $slopeAB(\phi, \theta)$, $slopeBC(\phi,\theta)$, and $slopetangentB(\phi,\theta)$ denote the slopes of $AB$, $BC$ and the tangent vector at $B$, respectively, and the minimum is taken over all choices of $\phi\in [-\frac{\pi}{4}, \frac{\pi}{4}]$ and $\theta\in [-\frac{\pi}{2}, \frac{\pi}{2}]$. Also, because of compactness, the minimum above is actually achieved. Given this choice of $k$, we shall show that in the interval where equal hovering is possible the slope of the tangent is at least $k$. So, in this interval, $vertB(\theta)-vertD(\theta)$ is strictly decreasing and thus the equal hovering position must be unique. We first show that $k=\frac{1}{\sqrt{2}}$. Note that the vectors $AB(\phi, \theta), BC(\phi,\theta),$ and $tangentB(\phi,\theta)$ are mutually orthogonal.
If we then write $(0,0,1)$ in terms of this orthogonal frame and take norms, it immediately follows that $$1 =\sin^2\beta_1 + \sin^2\beta_2 +\sin^2\beta_3\, ,$$ where $\beta_1$, $\beta_2$ and $\beta_3$ are the angles the three vectors make with the $xy$-plane. Therefore, at least one of the $\sin^2\beta_j$ is at least $\frac{1}{{3}}$, and thus the vertical slope ($=|\tan\beta_j|$) of the corresponding vector must be at least $\frac{1}{\sqrt{2}}$. It follows that $k \geq \frac{1}{\sqrt{2}}$. To demonstrate the minimum $k=\frac{1}{\sqrt{2}}$ is achieved, we show any table can be oriented in the critical position, with all three slopes equal to $\frac{1}{\sqrt{2}}$, and with associated tilt angle $\phi$ between $-\frac{\pi}{4}$ and $\frac{\pi}{4}$. To do this, consider the tripod formed from three edges of a cube tilted to have vertical diagonal shown in the left diagram in Figure \[check\]. These edges are mutually orthogonal, and one easily calculates that the slopes of all three edges are $\frac{1}{\sqrt{2}}$. Notice that every table is similar to one of the grey rectangles, shown in the right diagram, created by moving the point $A'$ from $P$ to $B'$. Furthermore, it is clear that the slope of the diagonal $A'C'$ is less than the slope of $B'C'$, which is equal to $\frac{1}{\sqrt 2}$, guaranteeing the tilt angle $\phi$ is in the desired range. By scaling and translating the rectangle suitably, and relabelling the vertices $A',B',$ and $C'$ as $A,B$, and $C$, respectively, we arrive at the desired orientation of our table. It remains to show that $k=\frac{1}{\sqrt 2}$ implies that the slope of the tangent is at least $k$ in the interval where equal hovering is possible. Note that in this interval, $slopeAB$ and $slopeBC$ are at most $\frac{1}{\sqrt 2}$. Then the equation $1 =\sin^2\beta_1 + \sin^2\beta_2 +\sin^2\beta_3\, $ implies that $slopetangentB$ is at least $\frac{1}{\sqrt 2}$. 
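The orthogonal-frame identity at the heart of this computation is easy to confirm numerically. The following sketch (our own; the helper name is invented) draws random mutually orthogonal unit vectors and checks both the identity $1 =\sin^2\beta_1 + \sin^2\beta_2 +\sin^2\beta_3$ and its consequence that at least one of the three slopes is always at least $\frac{1}{\sqrt 2}$.

```python
import math
import random

def random_orthonormal_frame(rng):
    """Gram-Schmidt on random Gaussian vectors: three mutually orthogonal
    unit vectors in R^3 (helper of ours, for illustration only)."""
    basis = []
    while len(basis) < 3:
        v = [rng.gauss(0, 1) for _ in range(3)]
        for b in basis:
            d = sum(vi * bi for vi, bi in zip(v, b))
            v = [vi - d * bi for vi, bi in zip(v, b)]
        n = math.sqrt(sum(vi * vi for vi in v))
        if n > 1e-9:
            basis.append([vi / n for vi in v])
    return basis

rng = random.Random(0)
for _ in range(1000):
    frame = random_orthonormal_frame(rng)
    # sin(beta_j) is just the z-component of the j-th unit vector, so the
    # identity 1 = sin^2 b1 + sin^2 b2 + sin^2 b3 is orthogonality of the frame.
    assert abs(sum(v[2] ** 2 for v in frame) - 1) < 1e-9
    # Hence some sin^2 is >= 1/3, i.e. some slope |tan beta_j| >= 1/sqrt(2).
    slopes = [abs(v[2]) / max(math.hypot(v[0], v[1]), 1e-300) for v in frame]
    assert max(slopes) >= 1 / math.sqrt(2) - 1e-9
```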
CONTINUITY OF $A$, $B$, $C$, AND $D$

All of the above calculations were performed with $OA$ projecting to the positive $x$-axis. We now consider rotating the table about the $z$-axis (while of course being willing to tilt the table as we rotate). So let $\gamma$ be the angle the projection of $OA$ makes with the positive $x$-axis. By our Lipschitz hypothesis, for any $\gamma$ there is a unique equal hovering position (with the projection of $OA$ making the angle $\gamma$, and tilting the table around $AC$ an angle between $-\frac{\pi}{2}$ and $\frac{\pi}{2}$ in a fixed direction). We need to show that the positions of the four vertices $A$, $B$, $C$ and $D$ of the equally hovering table are continuous functions of $\gamma$. To do this, consider a sequence $\gamma_n \to \gamma$, with corresponding corner positions $A_n$, $B_n$, $C_n$, and $D_n$ and $A,B,C,$ and $D$. We want to show $A_n\to A$, $B_n\to B$, $C_n\to C$, and $D_n\to D$. By compactness, we can take a subsequence so that the corners converge to something: $A_n\to A^\ast$, $B_n\to B^\ast$, etc. But, by continuity of everything in sight, $A^\ast$ and $C^\ast$ are touching the ground, $B^\ast$ and $D^\ast$ are at an equal vertical distance from the ground, and all four points are corners of the kind of table we are considering, with the projection of $OA^\ast$ making an angle $\gamma$ with the positive $x$-axis. By uniqueness, we must have $A^\ast=A$, $B^\ast=B$, $C^\ast =C$, and $D^\ast =D$; since every convergent subsequence thus has the same limit, the full sequences converge, as desired.

BALANCING POSITION

With the uniqueness of the equal hovering position for a given $\gamma$, and with the continuous dependence of this position of the table upon $\gamma$, we conclude that the distance that $B$ and $D$ are hovering above the ground is also a continuous function of $\gamma$. If we are dealing with a square table, we can now finish the proof using IVT one more time, as described in the intuitive table turning argument presented at the beginning of this section.
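For the square case, this final IVT step can be illustrated with a one-dimensional caricature (ours, not part of the proof): for a toy ground depending only on the angle about the $z$-axis, the difference between the summed ground heights under the two diagonals of a horizontal square changes sign under a quarter-turn, because a quarter-turn takes the square into itself while swapping its diagonals; bisection then finds an orientation where the two diagonals agree.

```python
import math

def g(theta):
    # A smooth, gently sloping toy ground depending only on the angle theta
    # about the z-axis (our own choice, for illustration only).
    return 0.2 * math.sin(2 * theta + 1) + 0.1 * math.cos(3 * theta + 0.5)

def f(gamma):
    """Summed ground heights under one diagonal of a horizontal unit square,
    rotated by gamma, minus those under the other diagonal.  A quarter-turn
    maps the square to itself but swaps the diagonals, so (up to rounding)
    f(gamma + pi/2) == -f(gamma)."""
    c = [g(gamma + k * math.pi / 2) for k in range(4)]
    return (c[0] + c[2]) - (c[1] + c[3])

# Sign change on [0, pi/2]  =>  the Intermediate Value Theorem gives a root.
a, b = 0.0, math.pi / 2
assert f(a) * f(b) < 0
for _ in range(60):                    # plain bisection
    m = (a + b) / 2
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
assert abs(f((a + b) / 2)) < 1e-9      # an orientation where the diagonals agree
```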
For a general rectangle, let the [*initial position*]{} be an equal hovering position for which the $z$-coordinate of the center of the table is a minimum, and let the [*end position*]{} be an equal hovering position for which the $z$-coordinate of the center of the table is a maximum. Note that the hovering vertices in the initial position must be on or below the ground: if not, we could create a lower equal hovering position, contradicting minimality, by pushing vertically down on the table until the hovering vertices touch the ground. Similarly, in the end position the hovering vertices are on or above the ground. Now, IVT can be applied to guarantee that among all the equal hovering positions there is at least one balancing position.

BALANCING REAL TABLES

To balance a real table of side-length ratio $r$, we determine a balancing position of the associated mathematical table, as described above. We now show that legs of length at least $\frac{1}{\sqrt{1+r^2}}$ guarantee that, balanced in this position, none of the points of the real table are below the ground. We give the complete argument for a square table, and then describe how things have to be modified to give the result for arbitrary rectangular tables. In the following, we will refer to the four vertices of the table top as $A',B',C'$ and $D'$, corresponding to the vertices $A,B,C$ and $D$, respectively. We first convince ourselves that no matter how long the legs of our table are, no part of a leg of the balanced table will be below the ground. Let’s consider the orthogonal tripod consisting of $AB$, $AD$ and the leg at $A$. Since the Lipschitz constant of our ground is at most $\frac{1}{\sqrt 2}$, the slopes of $AB$ and $AD$ are less than or equal to this value; thus, arguing as above, we see that the leg must have slope at least $\frac{1}{\sqrt 2}$.
This implies that no leg of our balanced table will dip below the ground.[^3] It remains to choose the length of the legs such that no point of the table top of our balanced table will ever be below the ground. First, fix the length of the legs and consider the inverted [solid]{} circular cone, whose vertex is one of the vertices of our mathematical table, whose symmetry axis is vertical, and whose slope is $\frac{1}{\sqrt 2}$. Intersecting this cone with the plane in which the table top lies gives a conic section which is either an ellipse, a parabola or a hyperbola.[^4] Note that since we intersect the plane with a solid cone this conic section will be “filled in”. We can be sure that a point in this plane is not below the ground if it is contained in the conic section. Therefore, what we want to show is that the union of the four conic sections associated with the four vertices of our mathematical table contains the whole table top. It is clear that the four conic sections are congruent and that any two of them can be brought into coincidence via a translation. Furthermore, given any point of one of these conic sections, this point and the respective points in the other three conic sections form a square that is congruent to our table top. Finally, since the legs have slope of at least $\frac{1}{\sqrt 2}$, the end point of a leg of our table on the plane is contained in the conic section associated with the other end point of this leg. To show that we need legs of length at least $\frac{1}{\sqrt 2}$, consider a special ground with Lipschitz constant $\frac{1}{\sqrt 2}$. This ground coincides with the $xy$-plane outside the unit circle, and above the unit circle it is the surface of the cone with vertex $(0,0,\frac{1}{\sqrt 2})$ and base the unit circle. Since the diagonals of our table are of length 2, the mathematical table will balance locally on this ground iff its vertices are on the unit circle. 
This means that the legs have to be at least as long as the cone is high if we want to ensure that no point of the table top is below the ground; it follows that we have to choose the length of our legs to be at least $\frac{1}{\sqrt 2}$. If the length of the legs is equal to $\frac{1}{\sqrt 2}$ and the table is balanced on this ground, then the four conic sections are circles that intersect in the center of the table top as shown in Figure \[intersect\]. As you can see, the table top is indeed contained in the union of these four circles. If we make the legs longer, the circles will overlap more. If we make the legs shorter, the circles will no longer overlap in the middle. Now, consider any ground, and take the legs to be of length $\frac{1}{\sqrt 2}$; clearly, if we can show that this table does not dip below the ground, then the same is true for any table with longer legs. When we tilt the table away from the horizontal position, the intersection pattern of the conic sections gets more complicated. The critical observation is that tilting the table results in the conic sections getting larger: it can be shown that each conic section contains a copy of one of the circles in Figure \[intersect\].[^5] Since, given any point of one of these conic sections, it and the corresponding points in the other three conic sections form a square that is congruent to our table top, the union of these conic sections will contain a possibly translated image of the union of circles that we encountered before; see Figure \[intersect1\]. So, in a way our previous picture has just grown a little bit and been translated. (Note, however, that it is not immediate that the conic sections together cover the table top rather than some translation of the table top).
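The circle-cover claim in the horizontal case is easy to spot-check numerically. A minimal sketch, under our own conventions (the top is a square with diagonals of length 2, so of side $\sqrt 2$, and each disc of radius 1 is centred at a corner of the top):

```python
import math

h = math.sqrt(2) / 2      # half the side of a square top with diagonals of length 2
corners = [(sx * h, sy * h) for sx in (-1, 1) for sy in (-1, 1)]

def covered(x, y, radius=1.0, eps=1e-9):
    """Does (x, y) lie in some disc of the given radius centred at a corner?"""
    return min(math.hypot(x - cx, y - cy) for cx, cy in corners) <= radius + eps

# Every sampled point of the top lies in the union of the four discs; the
# worst case is the centre of the top, which sits on all four circles at
# distance exactly 1 from each corner.
n = 200
assert all(covered(-h + 2 * h * i / n, -h + 2 * h * j / n)
           for i in range(n + 1) for j in range(n + 1))
assert abs(math.hypot(h, h) - 1) < 1e-12
```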
Using the fact that each conic section contains a translated copy of one of the circles, together with the few simple convex shapes of the conic sections that we are dealing with, we can conclude that no matter how we tilt the table, the union of these conic sections will always be a simply connected domain. This means that we can be sure that the table top is contained in this union if we can show that the boundary of the table top is contained in it. We proceed to show that for all possible positions of our table in space the sides of the table top never dip below the ground. Clearly, it suffices to show this for one of the sides of the table top, say $A'B'$. For this we consider the possible positions of the rectangle with vertices $A, A',B$ and $B'$ in space. We start with the rectangle vertical and $AB$ horizontal. Draw lines of slope $\frac{1}{\sqrt 2}$ ending in $A$ and $B$; see Figure \[thales\] (left). Since the point of intersection of these two lines is not above $A'B'$, no point of this segment can be below the ground when the rectangle is positioned in such a way. Now rotate the rectangle around its center, keeping it in a vertical plane, and keeping the slope of $AB$ less than or equal to $\frac{1}{\sqrt 2}$. Again, draw lines of slope $\frac{1}{\sqrt 2}$ ending in $A$ and $B$; see Figure \[thales\] (middle). Again, the position of the point at which these two lines intersect tells you whether $A'B'$ can possibly touch the ground with the rectangle in this position. Since the two lines always intersect in the same angle, we know that the points of intersection are on a circle segment through $A$ and $B$; see Figure \[thales\] (right). Now, tilt the original rectangle around $AB$. We repeat everything that we have done so far to end up with another circle segment. However, the apex of this circle segment will then be closer to $A'B'$ than the one we encountered before. In fact, the more we tilt, the closer we will get; see Figure \[tiltrec\].
Since the apex of one of these circles is the point closest to $A'B'$, and since the apex corresponds to $AB$ being horizontal, we now calculate just how close this apex gets when we tilt around $AB$ through a maximal possible angle. This maximal possible angle is attained if $AD$ (which is orthogonal to $AB$) has slope $\frac{1}{\sqrt 2}$. It is a routine exercise to check that in this position the slope of the line connecting $A$ with the midpoint of $A'B'$ is $\frac{1}{\sqrt 2}$. This means that in this position the apex will be contained in $A'B'$. We conclude that if we choose the legs of our square table to be at least $\frac{1}{\sqrt 2}$ long, then the boundary of the table top, and hence also the table top itself, will not dip below the ground. For tables that are not square, the same arguments apply up to the point where we start tilting the rectangle $AA'B'B$ around $AB$. We now have to worry about two different rectangles corresponding to the longer and shorter sides of the table top. If $r$ is the ratio of the lengths of the sides of the table, then it is easy to see that the critical length of the legs that we need to avoid running into the ground is the length that makes the longer of the two rectangles similar to the rectangle that we considered in the square case. This critical length is $\frac{1}{\sqrt{ 1+r^2}}$.$\Box$

Other Balancing Acts
====================

Horizontal balancing {#horizontal-balancing .unnumbered}
--------------------

When we balance a table locally, the table will usually not end up horizontal, and a beer mug placed on the table may still be in danger of sliding off. It would be great if we could arrange it so that the table is not only balanced but also horizontal, maybe by moving the center of the table off the $z$-axis and balancing it somewhere else on the ground. Just imagine the ground to be a tilted plane, and you can see that this will not be possible in general.
However, Fenn [@Fenn] proved the following result: [*If a continuous ground coincides with the $xy$-plane outside a compact convex disc and if the ground never dips below the $xy$-plane inside the disc, then a given square table can be balanced horizontally such that the center of the table lies above the disc.*]{} Let’s call the special kind of ground described here a [*Fenn ground*]{} and the part of this ground inside the distinguished compact disc its [*hill*]{}. The problem of horizontally balancing tables consisting of plane shapes other than squares on Fenn grounds has also been considered. Here ‘horizontal balancing on a Fenn ground’ means that in the balancing position some interior points of the shape are situated above the hill. It has been shown by Zaks [@Zaks] that a triangular table can be balanced on any Fenn ground. In fact, he showed that if we start out with a horizontal triangle somewhere in space and mark a point inside the triangle, then we can balance this triangle on any Fenn ground, with the marked point above the hill, by just translating the triangle. Fenn also showed that tables with four legs that are not concircular and those forming regular polygons with more than four legs cannot always be balanced horizontally on Fenn grounds. Zaks mentions an unpublished proof by L.M. Sonneborn that any polygon table with more than four legs cannot always be balanced horizontally on Fenn grounds. It is not known whether any concircular quadrilateral tables other than squares can always be balanced horizontally on Fenn grounds. See [@Kronheimer], [@Mayerson], [@Mayerson1], and [@Mayerson2] for further results relating to this line of research.
Local Balancing of Exotic Mathematical Tables {#local-balancing-of-exotic-mathematical-tables .unnumbered}
---------------------------------------------

Taking things to a different mathematical extreme, we can consider a table consisting of $n\geq 3$ [*leg points*]{} in 3-space together with an additional [*center point*]{}. We then ask whether, given any continuous ground, it is possible to always balance this table locally, that is, move this configuration of $n+1$ points into a position in which the $n$ leg points are on the ground, and the center is on the $z$-axis. The example of a plane ground shows that the leg points of an always locally balancing table have to be coplanar. Let’s consider the example of a ground that contains part of a sphere that is large enough to ensure that all legs of our table end up on this part of the sphere whenever the table is locally balanced on this ground. Then intersecting this sphere with the plane that the leg points are contained in gives a circle that all leg points are contained in. Hence the leg points of the table are concircular. Now, let’s consider a ground that includes part of an ellipsoid which does not contain a copy of the circumcircle of the leg points of the table; moreover, we choose the ellipsoid large enough so that all leg points of our table end up on the ellipsoid whenever the table is locally balanced on this ground. Then intersecting the ellipsoid with the plane containing the leg points gives an ellipse that is different from the circumcircle of the leg points. However, this is impossible if the table contains more than four leg points because five points on an ellipse determine this ellipse uniquely. We conclude that an always locally balancing table must have three or four leg points and that these points are concircular. Note that requiring concircularity in the case of three points is not superfluous since we need to exclude the case of three collinear points.
Livesay’s theorem, which made the proof of Theorem 1 so easy, has a counterpart for triangles, due to Floyd [@Floyd]. It is a straightforward exercise to apply this result to prove the following theorem: If the ground function is continuous, a triangular table whose three leg points are contained in a sphere around its center can be balanced locally. Of course, one should be able to prove a lot more when it comes to balancing triangular tables! In the case that the center and the (three or four) leg points of an always locally balancing table are coplanar, we can say a little bit more about the location of the center point with respect to the leg points. Begin by balancing the table in the $xy$-plane and drawing the circles around the center that contain leg points; if one of the leg points coincides with the center, then also consider this point as one of the circles. Now it is easy to see that there cannot be more than two such circles. Otherwise a ground that coincides with the $xy$-plane inside the third smallest circle and that lies above the plane outside this circle would clearly thwart all local balancing efforts. Therefore, if we want to check whether our favorite set of three or four concircular points is the set of leg points of a locally balancing table, there are usually very few positions of the center relative to the leg points which need to be considered. Perhaps the most natural choice for the center is the center of the circle that the leg points are contained in. As a corollary to the above theorem for triangles, we conclude that a triangular table with this natural choice of center is always locally balancing. In the case of four concircular points with this natural choice of center, we do not know whether any tables apart from the rectangular ones are always locally balancing. However, a result worth mentioning in this context is Theorem 3 in Meyerson’s paper [@Mayerson] (see also the concluding remarks in Martin’s paper [@Martin]).
It can be phrased as follows: [*Given a continuous ground and one of these special four-legged tables in the $xy$-plane, the table can be rotated in the $xy$-plane around its center such that in this new position the four points on the ground above the leg points are coplanar.*]{} The quadrilateral formed by the coplanar points on the ground will be congruent to the table if and only if the plane it is contained in is horizontal, in which case we have actually found a balancing position for our table. In all other cases, the quadrilateral on the ground is a deformed version of the table. Still, if the ground is not too wild, both quadrilaterals will be very similar, and lifting the table up onto the ground should result in the table not wobbling too much. Livesay’s theorem is a generalization of a theorem by Dyson [@Dyson], which only deals with the square case. A higher-dimensional counterpart of Dyson’s theorem arises as a special case of results of Joshi ([@Joshi], Theorem 2) and Yang ([@Yang], Theorem 3): [*Given a continuous real-valued function defined on the $n$-sphere, there are $n$ mutually orthogonal diameters of this sphere such that the function takes on the same value at all $2n$ end points of these diameters.*]{} Note that the endpoints of $n$ mutually orthogonal diameters of the $n$-sphere are the vertices of an $n$-dimensional orthoplex, one of the regular solids in $n$ dimensions. (For example, a 1-dimensional orthoplex is just a line segment and a 3-dimensional orthoplex is an octahedron.) Using the same simple argument as in the case of Livesay’s theorem, we can prove the following theorem: An $(n-1)$-dimensional orthoplex-shaped table in $\mathbb R^n$ can be balanced locally on any ground given by a continuous function $g:\mathbb R^{n-1}\to \mathbb R$. For other closely related results see [@deMira], [@Fenn1], [@Hadwiger], [@Yamabe], [@Yang1], [@Yang2], and [@Yang3].
Balance Everywhere {#balance-everywhere .unnumbered} ------------------ Imagine a square table with diameter of length 2 suspended horizontally high above some ground, with its center on the $z$-axis. Rotate it a certain angle about the $z$-axis, release it, and let it drop to the ground. It is easy to identify continuous grounds such that all four legs of the table will hit the ground simultaneously, no matter what release angle you choose. Of course, any horizontal plane will do, and so will any ground that contains a vertical translate of the unit circle. We leave it as an exercise for the reader to construct a ground that is not of this type but admits horizontal balancing for any angle. Also, the reader may wish to convince themselves that the following is true: suppose we are dealing with a ground as in Theorem 2. If the center of the table has the same $z$-coordinate in all its equal hovering positions (positions in which $A$ and $C$ touch the ground and $B$ and $D$ are at equal vertical distance from the ground), then in fact the table is balanced in all these positions. Some Practical Advice ===================== Short Legs and Tiled Floors {#short-legs-and-tiled-floors .unnumbered} --------------------------- Note that if you shorten one of the legs of a real-life square table, this table will wobble if you set it down on the plane, and no turning or tilting will fix this problem. In real life, rectangular tables the ends of whose legs do not form a perfect rectangle are not uncommon and, as our simple example shows, those uneven legs may conspire to make our anti-wobble tactics fail. Considering our example of a discontinuous ground at the beginning of this article, it should be clear that a wobbling table on a tiled floor may also defy our table turning efforts.
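Returning to continuous grounds, the equal-hovering mechanism lends itself to a quick numerical experiment. The sketch below (ours; the ground function is an arbitrary smooth example, not from the text) tracks $h(\theta)=(g_1+g_3)-(g_2+g_4)$, the combined ground height under one diagonal pair of legs minus that under the other, for a square table with legs on the unit circle. A quarter turn swaps the diagonals, so $h(\theta+\pi/2)=-h(\theta)$, and the intermediate value theorem forces a zero: an equal-hovering angle.

```python
# Sketch (ours): locate an equal-hovering angle for a square table on a
# continuous ground by bisection.  h compares the ground heights under the
# two diagonal leg pairs, and satisfies h(theta + pi/2) = -h(theta).
import math

def ground(x, y):
    # an arbitrary smooth continuous ground, for illustration only
    return 0.3*math.sin(2*x) + 0.2*math.cos(3*y) + 0.1*x*y

def h(theta):
    legs = [(math.cos(theta + k*math.pi/2), math.sin(theta + k*math.pi/2))
            for k in range(4)]
    g0, g1, g2, g3 = (ground(x, y) for x, y in legs)
    return (g0 + g2) - (g1 + g3)

a, b = 0.0, math.pi/2        # h(b) = -h(a), so a sign change is guaranteed
for _ in range(60):          # bisection
    m = 0.5*(a + b)
    if h(a)*h(m) <= 0:
        b = m
    else:
        a = m
theta_star = 0.5*(a + b)
print(abs(h(theta_star)) < 1e-9)   # True: equal hovering at theta_star
```

The quoted theorems of course require more care (tilting tables, Lipschitz conditions on the ground); this sketch only illustrates the intermediate-value step of the argument.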
How to Turn Tables in Practice {#how-to-turn-tables-in-practice .unnumbered} ------------------------------ In practice, it does not seem to matter how exactly you turn your table on the spot, as long as you turn roughly around the center of the table. Notice that you needn’t actually establish the equal hovering: as you rotate towards the correct balancing position, there will be less and less wobble-room until, at the correct rotation, the balancing position is forced. With a square table, you can even go for a little bit of a journey, sliding the table around in your (continuous) backyard. As long as you aim to get back to your starting position, incorporating a quarter turn in your overall movement, you can expect to find a balancing position. [99]{} de Mira Fernandes, A. Funzioni continue sopra una superficie sferica. [*Portugaliae Math.*]{} [**4**]{} (1943), 69–72. Dyson, F. J. Continuous functions defined on spheres. [*Ann. of Math.*]{} [**54**]{} (1951), 534–536. Emch, A. Some properties of closed convex curves in a plane. [*Amer. J. Math.*]{} [**35**]{} (1913), 407–412. Fenn, Roger. The table theorem. [*Bull. London Math. Soc.*]{} [**2**]{} (1970), 73–76. Fenn, Roger. Some applications of the width and breadth of a closed curve to the two-dimensional sphere. [*J. London Math. Soc.*]{} [**10**]{} (1975), 219–222. Floyd, E. E. Real-valued mappings of spheres. [*Proc. Amer. Math. Soc.*]{} [**6**]{} (1955), 957–959. Gardner, Martin. Mathematical Games column in [*Scientific American*]{} (May 1973), 104. Gardner, Martin. Mathematical Games column in [*Scientific American*]{} (June 1973), 109–110. Gardner, Martin. [*Knotted Doughnuts and Other Mathematical Entertainments*]{}. W.H. Freeman and Company, New York, 1986. Hadwiger, H. Ein Satz über stetige Funktionen auf der Kugelfläche. [*Arch. Math.*]{} [**11**]{} (1960), 65–68. Hunziker, Markus. The Wobbly Table Problem. [*In Summation*]{} Vol.
7 (April 2005), 5–7 (Newsletter of the Department of Mathematics, Baylor University). Joshi, Kapil D. A non-symmetric generalization of the Borsuk-Ulam theorem. [*Fund. Math.*]{} [**80**]{} (1973), 13–33. Kraft, Hanspeter. The wobbly garden table. [*J. Biol. Phys. Chem.*]{} 1 (2001), 95–96. Kronheimer, E. H. and Kronheimer, P. B. The tripos problem. [*J. London Math. Soc.*]{} [**24**]{} (1981), 182–192. Livesay, George R. On a theorem of F. J. Dyson. [*Ann. of Math.*]{} [**59**]{} (1954), 227–229. Martin, André. On the stability of four feet tables. http://arxiv.org/abs/math-ph/0510065 Meyerson, Mark D. Balancing acts. The Proceedings of the 1981 Topology Conference (Blacksburg, Va., 1981). [*Topology Proc.*]{} [**6**]{} (1981), 59–75. Meyerson, Mark D. Convexity and the table theorem. [*Pacific J. Math.*]{} [**97**]{} (1981), 167–169. Meyerson, Mark D. Remarks on Fenn’s “the table theorem” and Zaks’ “the chair theorem”. [*Pacific J. Math.*]{} [**110**]{} (1984), 167–169. Polster, Burkard; Ross, Marty and QED (the cat). Table Turning Mathsnack in [*Vinculum*]{} 42(2), June 2005 (Vinculum is the quarterly magazine for secondary school teachers published by the Mathematical Association of Victoria, Australia. Also available at www.mav.vic.edu.au/curres/mathsnacks/mathsnacks.html). Polster, Burkard; Ross, Marty and QED (the cat). Turning the Tables: feasting from a mathsnack. [*Vinculum*]{}, 42(4), 4 November 2005, 6–9 (also available at www.mav.vic.edu.au/curres/mathsnacks/mathsnacks.html). Vinculum, Editorial Board. Mathematical inquiry—from a snack to a meal. [*Vinculum*]{}, 42(3), September 2005, 11–12 (also available at www.mav.vic.edu.au/curres/mathsnacks/ $\mbox{mathsnacks.html}$). Yamabe, Hidehiko and Yujobô, Zuiman. On the continuous function defined on a sphere. [*Osaka Math. J.*]{} [**2**]{} (1950), 19–22. Yang, Chung-Tao. On theorems of Borsuk-Ulam, Kakutani-Yamabe-Yujobô and Dyson. I. [*Ann. of Math.*]{} [**60**]{} (1954), 262–282. Yang, Chung-Tao. 
On theorems of Borsuk-Ulam, Kakutani-Yamabe-Yujobô and Dyson. II. [*Ann. of Math.*]{} [**62**]{} (1955), 271–283. Yang, Chung-Tao. Continuous functions from spheres to euclidean spaces. [*Ann of Math.*]{} [**62**]{} (1955), 284–292. Yang, Chung-Tao. On maps from spheres to euclidean spaces. [*Amer. J. Math.*]{} [**79**]{} (1957), 725–732. Zaks, Joseph. The chair theorem. Proceedings of the Second Louisiana Conference on Combinatorics, Graph Theory and Computing (Louisiana State Univ., Baton Rouge, La., 1971), pp. 557–562. Louisiana State Univ., Baton Rouge, La., 1971. [^1]: Recall that for the ground function $g$ to be Lipschitz continuous means that there exists a $k$ such that the slope of the line segment connecting any two points on the ground is at most $k$: $|g(P)-g(Q)|\leq k|P-Q|$ for any $P,Q\in {\mathbb R}^2$. The Lipschitz constant of $g$ is then defined to be the optimal (smallest) choice of $k$. Also, recall that any Lipschitz continuous function is automatically continuous. [^2]: We emphasize that the slope of a line in space is always nonnegative. [^3]: For certain grounds with Lipschitz constant $\frac{1}{\sqrt 2}$ it is possible that a leg of a balanced table may lie along the ground. [^4]: Parabolas and hyperbolas can occur since the plane that the table top is contained in can have maximum slope greater than $\frac{1}{\sqrt 2}$. [^5]: To see this note that what we are looking at are the possible intersections of a given cone with planes that are a fixed distance from the vertex of the cone.
--- abstract: 'We analyse the response of a spatially extended direction-dependent local quantum system, a detector, moving on the Rindler trajectory of uniform linear acceleration in Minkowski spacetime, and coupled linearly to a quantum scalar field. We consider two spatial profiles: (i) a profile defined in the Fermi-Walker frame of an arbitrarily-accelerating trajectory, generalising the isotropic Lorentz-function profile introduced by Schlicht to include directional dependence; and (ii) a profile defined only for a Rindler trajectory, utilising the associated frame, and confined to a Rindler wedge, but again allowing arbitrary directional dependence. For (i), we find that the transition rate on a Rindler trajectory is non-thermal, and dependent on the direction, but thermality is restored in the low and high frequency regimes, with a direction-dependent temperature, and also in the regime of high acceleration compared with the detector’s inverse size. For (ii), the transition rate is isotropic, and thermal in the usual Unruh temperature. We attribute the non-thermality and anisotropy found in (i) to the *leaking* of the Lorentz-function profile past the Rindler horizon.' author: - 'Sanved Kolekar[^1]' date: November 2019 title: | Directional dependence of the Unruh effect\ for spatially extended detectors --- Introduction ============ The Unruh effect [@Fulling:1972md; @Davies:1974th; @Unruh:1976db] states that an observer of negligible spatial size on a worldline of uniform linear acceleration in Minkowski spacetime reacts to the Minkowski vacuum of a relativistic quantum field by thermal excitations and de-excitations, in the Unruh temperature $g/(2\pi)$, where $g$ is the observer’s proper acceleration. 
The acceleration singles out a distinguished direction in space, and an observer with direction-sensitive equipment will in general see a direction-dependent response; however, for the Lorentz-invariant notion of direction-sensitivity introduced in [@Takagi:1985tf], the associated temperature still turns out to be equal to $g/(2\pi)$, independently of the direction. For textbooks and reviews, see [@Birrell:1982ix; @Crispino:2007eb; @Fulling:2014wzx]. In this paper we address the response of uniformly linearly accelerated observers in Minkowski spacetime, operating direction-sensitive equipment of nonzero spatial size. We ask whether the temperature seen by these observers is still independent of the direction. The question is nontrivial: while a spatially pointlike detector with a monopole coupling is known to be a good approximation for the interaction between the quantum electromagnetic field and electrons on atomic orbitals in processes where the angular momentum interchange is insignificant [@MartinMartinez:2012th; @Alhambra:2013uja], finite-size effects can be expected to have a significant role in more general situations [@DeBievre:2006pys; @Hummer:2015xaa; @Pozas-Kerstjens:2015gta; @Pozas-Kerstjens:2016rsh; @Pozas-Kerstjens:2017xjr; @Simidzija:2018ddw]. Also, the notion of a finite-size accelerating body has significant subtlety: while a rigid body undergoing uniform linear acceleration in Minkowski spacetime can be defined in terms of the boost Killing vector, different points on the body have differing values of the proper acceleration, and the body as a whole does not have an unambiguous value of ‘acceleration’. It would be interesting to ask whether the resultant transition rate is thermal at all and, if so, at what temperature. The observer’s response could hence be expected to depend explicitly on the body’s shape as well as its size.
A related issue is the following: An analysis of a direction-dependent point-like detector re-affirms that the Unruh bath is isotropic even though there is a preferred direction in the Rindler frame, the spatial direction along the direction of acceleration [@Takagi:1985tf]. However, analysing drag forces on drifting particles in the Unruh bath reveals through the Fluctuation-Dissipation Theorem that the quantum fluctuations in the Unruh bath are not isotropic [@Kolekar:2012sf]. These anisotropies could be relevant for direction-dependent spatially extended detectors whose length scales are of the order of, or greater than, the correlation scales associated with the quantum fluctuations. We consider the response of spatially extended direction-dependent detectors in uniform linear acceleration in two models of such a detector: (i) a spatial sensitivity profile that generalises the isotropic Lorentz-function considered by Schlicht [@schlicht] to include spatial anisotropy, and (ii) a spatial sensitivity profile defined in terms of the geometry of the Rindler wedge, and explicitly confined to this wedge, following De Bievre and Merkli [@DeBievre:2006pys]. We begin in Section \[schlichtsection\] by briefly reviewing a detector with an isotropic Lorentz-function spatial profile [@schlicht], highlighting the role of a spatial profile as the regulator of the quantum field’s Wightman function, and recalling how the Unruh effect thermality arises for this detector. In section \[directiondetsection\] we generalise the Lorentz-function spatial profile to include spatial anisotropy, initially for an arbitrarily-accelerated worldline, relying on the Fermi-Walker frame along the trajectory. We then specialise to a Rindler trajectory of uniform linear acceleration. We find that the transition rate is non-thermal and angle-dependent.
Thermality is however restored in the low and high frequency regimes, and also in the regime of high acceleration compared with the inverse of the detector’s spatial extent. In section \[rindlerframesection\] we analyse a profile defined in the Rindler frame of a Rindler trajectory, and confined to the Rindler wedge, following De Bievre and Merkli [@DeBievre:2006pys]. We find that the transition rate is isotropic and thermal at the usual Unruh temperature. In section \[discsection\] we discuss and resolve the discrepancy of these two outcomes. The key property responsible for the non-thermality and anisotropy for the Lorentz-function profile is that this profile leaks outside the Rindler wedge, past the Rindler horizon. The leaking is an unphysical side effect of a detector model with a noncompact spatial profile, and it is unlikely to have a counterpart in spatially extended detectors with a more fundamental microphysical description. We leave the development of such spatially extended detector models subject to future work. Spatially isotropic Lorentz-function profile\[schlichtsection\] =============================================================== In this section we briefly review Schlicht’s generalisation [@schlicht] of a two-level Unruh-DeWitt detector [@Unruh:1976db; @DeWitt:1979] to a nonzero spatial size. We consider a massless scalar field $\phi$ in four-dimensional Minkowski spacetime, and a two-level quantum system, a detector, localised around a timelike worldline $x(\tau)$, parametrised in terms of the proper time $\tau$. The interaction Hamiltonian reads $H_{int} = c \, m(\tau) \,\chi(\tau) \phi(\tau)$, where $c$ is a coupling constant, $m(\tau)$ is the detector’s monopole moment operator, $\chi(\tau)$ is the switching function that specifies how the interaction is turned on and off, and $\phi(\tau)$ is the spatially smeared field operator. 
The formula for $\phi(\tau)$ is $$\phi(\tau) = \int d^3\xi \; f_{\epsilon} ({\bm \xi}) \, \phi(x(\tau, {\bm \xi})) \ , \label{smearedoperator}$$ where ${\bm \xi} = (\xi^1, \xi^2, \xi^3)$ stands for the spatial coordinates associated with the local Fermi-Walker transported frame and $x(\tau, {\bm \xi})$ is a spacetime point written in terms of the Fermi-Walker coordinates. The smearing profile function $f_{\epsilon} ({\bm \xi})$ specifies the spatial size and shape of the detector in its instantaneous rest frame. In linear order perturbation theory, the detector’s transition probability is then proportional to the response function, $${\cal F}(\omega) = \int_{-\infty}^{\infty} du \, \chi(u) \int_{-\infty}^{\infty} ds \, \chi(u -s) e^{- i \omega s} \, W(u,u-s) \ , \label{transprobability}$$ where $\omega$ is the transition energy, $W(\tau,\tau^\prime) = \langle \Psi | \phi(\tau) \phi(\tau^\prime) | \Psi \rangle$ and $|\Psi \rangle$ is the initial state of the scalar field. The choice for the smearing profile function $f_{\epsilon}$ made in [@schlicht] was the three-dimensional isotropic Lorentz-function, $$f_{\epsilon}({\bm \xi})= \frac{1}{\pi^2} \frac{\epsilon}{{(\xi^{2}+\epsilon^{2})}^2} \ , \label{schlichtprofile}$$ where the positive parameter $\epsilon$ of dimension length characterises the effective size. The selling point of the profile function is that it allows the switch-on and switch-off to be made instantaneous; for a strictly pointlike detector, by contrast, instantaneous switchings would produce infinities and ambiguities [@schlicht]. In particular, for a detector that is switched off at proper time $\tau$, the derivative of ${\cal F}$ with respect to $\tau$ can be understood as a transition rate, in the ‘ensemble of ensembles’ sense discussed in [@Langlois:2005if; @Louko:2007mu]. 
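As a consistency check (ours, not part of the original text), one can verify numerically that the Lorentz-function profile integrates to unity over $\mathbb{R}^3$, so the smearing in the field operator introduces no overall rescaling:

```python
# Check (ours) that the Lorentz-function profile f_eps is normalized:
# integrating over R^3 in spherical shells, the total is 1 for any eps > 0.
import math
from scipy.integrate import quad

def f_eps(xi, eps):
    return (1.0/math.pi**2) * eps / (xi**2 + eps**2)**2

for eps in (0.1, 1.0, 5.0):
    total, err = quad(lambda r: 4*math.pi*r**2 * f_eps(r, eps), 0, math.inf)
    print(round(total, 10))   # 1.0 in each case
```

Analytically this follows from $\int_0^\infty r^2/(r^2+\epsilon^2)^2\, dr = \pi/(4\epsilon)$, which cancels the $\epsilon/\pi^2$ prefactor against the $4\pi$ solid-angle factor.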
If the switch-on takes place in the infinite past, the transition rate formula becomes $${\dot {\cal F}}(\omega) = 2 \operatorname{Re}\int_0^\infty ds \,e^{-i \omega s} \, W(\tau,\tau - s) \ . \label{eq:transrate-schlicht}$$ When the trajectory is the Rindler trajectory of uniform linear acceleration of magnitude $g>0$, and $|\Psi\rangle$ is the Minkowski vacuum, the transition rate becomes [@schlicht] $${\dot {\cal F}}(\omega) = \frac{1}{2 \pi} \; \frac{\omega}{1+ g^2 \epsilon^2} \; \frac { e^{\frac{2\omega }{g} \tan^{-1}\left( g \epsilon \right)}} { e^{\frac{2 \pi \omega}{g}} -1 } \ . \label{schlichttrans}$$ In the limit $\epsilon \rightarrow 0$, ${\dot {\cal F}}$ reduces to the Planckian formula in the Unruh temperature $g/(2\pi)$, consistently with other ways of obtaining the response of a pointlike detector in the long time limit [@Unruh:1976db; @DeWitt:1979; @letaw; @Takagi:1986kn; @Fewster:2016ewy]. For $\epsilon$ strictly positive, ${\dot {\cal F}}$ is no longer Planckian. However, we wish to observe here that ${\dot {\cal F}}$ is still thermal, in the sense that it satisfies the detailed balance condition, $$\begin{aligned} {\dot {\cal F}}(-\omega) = e^{\beta\omega}{\dot {\cal F}}(\omega) \ , \end{aligned}$$ where the inverse temperature now reads $\beta = \bigl(2\pi - 4 \tan^{-1}( g \epsilon) \bigr) /g$. The temperature is thus higher than the usual Unruh temperature. This feature has to our knowledge not received attention in the literature, and we shall discuss its geometric origins in section \[discsection\]. Spatially anisotropic Lorentz-function profile {#directiondetsection} ============================================== In this section we generalise the isotropic Lorentz-function profile to include spatial anisotropy. General trajectory ------------------ Let $x(\tau)$ again be a timelike worldline parametrised by its proper time $\tau$, so that the four-velocity $u^a := \frac{dx^a}{d\tau}$ is a unit timelike vector.
The four-acceleration vector $a^a := u^b \nabla_b u^a$ is orthogonal to $u^a$, and its direction is Fermi-Walker transported along the trajectory only when the trajectory stays in a timelike plane, as seen by considering the torsion and hypertorsion of the trajectory in the Letaw-Frenet equations [@letaw; @kolekar]. We define the direction dependence by writing the expression for $\phi(\tau)$ in Eq.(\[smearedoperator\]) as $$\frac{d \phi(\tau)}{d \Omega} = \int d\xi \; f_{\epsilon} ({\bm \xi}) \, \phi(x(\tau, {\bm \xi})) = \phi_\Omega(\tau) \ , \label{smearedoperatorang}$$ such that ${\bm \xi}$ points in the direction specified by the angles $\Theta_0$ and $\phi_0$. Integrating $\phi_\Omega(\tau)$ over the full solid angle then reproduces the smeared field operator $\phi(\tau)$ in the Schlicht case. Assuming the two-level quantum system to couple linearly to $\phi_\Omega$, one can then proceed to calculate the transition rate as per the formula in Eq.(\[transprobability\]) with the corresponding $W_\Omega(\tau,\tau^\prime)$ equal to $ \langle \Psi | \phi_\Omega(\tau) \phi_\Omega(\tau^\prime) | \Psi \rangle$. Equivalently, one can consider a detector whose spatial profile has the radial dependence of Eq.(\[schlichtprofile\]) and depends on the angles $\Theta_0$ and $\phi_0$ through $$\begin{aligned} f_{\epsilon}({\bm \xi},\Theta_{0})= \frac{1}{2\pi^3} \frac{\epsilon}{{(\xi^{2}+\epsilon^{2})}^2} \frac{\delta (\theta - \Theta_{0})}{\sin\Theta_{0}} \delta(\phi - \phi_0) \ , \label{angprof}\end{aligned}$$ where $\theta$ and $\phi$ are measured in the ${\bm \xi} = (\xi^1, \xi^2, \xi^3)$ space. One can once again note that integrating over the solid angle $d\Omega_0 = \sin\Theta_{0} d\Theta_{0} d\phi_{0}$ yields the isotropic profile of Eq.(\[schlichtprofile\]).
Following the steps in section \[schlichtsection\], the transition rate formula becomes $$\begin{aligned} {\dot {\cal F}}_{\Theta_0}(\omega) = 2 \operatorname{Re}\int_0^\infty ds \,e^{-i \omega s} \, W_{\Theta_0}(\tau,\tau - s) \ , \label{angtransitionrate}\end{aligned}$$ where $$\begin{aligned} W_{\Theta_0}(\tau,\tau^\prime) = \langle \Psi | \phi(\tau, \Theta_0) \phi(\tau^\prime,\Theta_0) | \Psi \rangle \ , \label{angwhitmannfunction}\end{aligned}$$ and the smeared field operator reads $$\begin{aligned} \phi(\tau, \Theta_0) = \int d^3\xi \; f_{\epsilon} ({\bm \xi}, \Theta_0) \, \phi(x(\tau, {\bm \xi})) \ , \label{smearedRARF}\end{aligned}$$ provided these expressions are well defined. We shall now show that the expressions are well defined provided $\Theta_0 \ne \pi/2$, and we give a more explicit formula for ${\dot {\cal F}}_{\Theta_0}$. Suppose hence from now on that $\Theta_0 \ne \pi/2$. We work in global Minkowski coordinates in which points on Minkowski spacetime are represented by their position vectors, following the notation in [@schlicht]. The trajectory is written as $x^b(\tau)$. At each point on the trajectory, we introduce three spacelike unit vectors $e^b_{(\alpha)}(\tau)$, $\alpha = 1,2,3$, which are orthogonal to each other and to $u^b(\tau) = \frac{dx^b(\tau)}{d\tau}$, and are Fermi-Walker transported along the trajectory. We coordinatise the hyperplane orthogonal to $u^b(\tau)$ by ${\bm \xi} = (\xi^1, \xi^2, \xi^3)$ by $$\begin{aligned} x^b(\tau, {\bf \xi}) = x^b(\tau) + \xi^\alpha e^b_{(\alpha)}(\tau) \ . \end{aligned}$$ Using (\[angwhitmannfunction\]) and (\[smearedRARF\]), we obtain $$W_{\Theta_0}(\tau,\tau^\prime) = \frac{1}{(2 \pi)^3} \int \frac{d^3 {\bf k}}{2 \omega({\bf k})} \; g_{\Theta_0} ( {\bf k}, \tau) \, g^*_{\Theta_0} ( {\bf k}, \tau^\prime) \ , \label{gwhitmann}$$ where $\omega({\bf k}) = \sqrt{{\bf k}^2}$ and $$g_{\Theta_0} ( {\bf k}, \tau) = \int d^3\xi \; f_{\epsilon}({\bm \xi},\Theta_{0}) e^{i k_b x^b(\tau, \, {\bm \xi} )} \ . 
\label{gdef}$$ The index $\alpha$ takes the values $(1,2,3)$, and $e^b_{(\alpha)}(\tau)$ are the orthogonal Fermi unit basis vectors in the spatial directions orthogonal to $u^b(\tau)$. Then, defining the $3$-vector $\tilde{{\bf k}}$ with components $(\tilde{{\bf k}})_\alpha = k_b {\bf e}^b_{(\alpha)}(\tau)$ and working in spherical coordinates in the ${\bf \xi}$-space, we can recast Eq. (\[gdef\]) to get $$\begin{aligned} g_{\Theta_0}( {\bf k}, \tau) &=& \frac{1}{\pi} e^{i k_b x^b(\tau)} \int_0^\infty d\xi \, \frac{\xi^2\epsilon}{(\xi^2+\epsilon^2)^2} e^{i \xi \cos\Theta_0 |\tilde{{\bf k}}|} \notag \\ & = & e^{i k_b x^b(\tau)} \left( I_R + I_M \right) \ , \end{aligned}$$ where $I_R$ and $I_M$ are the real and imaginary parts of the integral. Here, $\tilde{{\bf k}}$ is oriented along the $z$ direction in the ${\bf \xi}$-space and the angle $\Theta_0$ is measured from the $z$ direction. The real part can be evaluated by contour integration, with the result $$\begin{aligned} I_R &=& \frac{1}{\pi} \int_{0}^\infty d\xi \frac{\xi^2\epsilon}{(\xi^2+\epsilon^2)^2} \cos(\xi |\tilde{{\bf k}}| \cos\Theta_0) \nonumber \\ &=& \frac{1}{4} \frac{\partial}{\partial \epsilon} \left( \epsilon e^ {- \epsilon |\tilde{{\bf k}}| |\cos{\Theta_0}| }\right) \nonumber \\ &=& \frac{1}{4} \left( 1-\epsilon |\tilde{{\bf k}}| |\cos\Theta_0| \right) e^{-\epsilon |\tilde{{\bf k}}| |\cos \Theta_0|} \ . \label{IR}\end{aligned}$$ The imaginary part can be reduced to the exponential integral $E_1$ [@dlmf], with the result $$\begin{aligned} I_M = \frac{i \operatorname{sgn}(a)}{4\pi} \Bigl[ (|a|+1) e^{|a|} E_1(|a|) + (|a|-1) e^{-|a|} \operatorname{Re}\bigl( E_1(-|a|) \bigr) \Bigr] \ , \end{aligned}$$ where $a = \epsilon |\tilde{{\bf k}}| \cos \Theta_0$. Note that the replacement $\Theta_0 \to \pi - \Theta_0$ leaves $I_R$ invariant but gives $I_M$ a minus sign.
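The closed form in Eq.(\[IR\]) can be confirmed numerically. The sketch below (ours, not part of the paper) compares a direct quadrature of the cosine integral against $\tfrac{1}{4}(1-\epsilon q)e^{-\epsilon q}$, writing $q = |\tilde{{\bf k}}|\,|\cos\Theta_0| \geq 0$:

```python
# Check (ours) of the closed form for I_R: with q = |k| |cos(Theta_0)| >= 0,
#   (1/pi) * Int_0^inf  xi^2 * eps / (xi^2 + eps^2)^2 * cos(q*xi) dxi
# should equal (1/4) * (1 - eps*q) * exp(-eps*q).
import math
from scipy.integrate import quad

def I_R_numeric(eps, q):
    # oscillatory quadrature over the half line (QAWF algorithm)
    val, err = quad(lambda xi: xi**2 * eps / (xi**2 + eps**2)**2,
                    0, math.inf, weight='cos', wvar=q)
    return val / math.pi

def I_R_closed(eps, q):
    return 0.25 * (1.0 - eps*q) * math.exp(-eps*q)

for eps, q in [(0.5, 1.0), (1.0, 2.0), (2.0, 0.3)]:
    print(abs(I_R_numeric(eps, q) - I_R_closed(eps, q)) < 1e-6)   # True
```

Since the integrand is even in $\cos\Theta_0$, the check with $q \geq 0$ covers both hemispheres.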
To proceed further, we assume that the profile function is invariant under $\Theta_0 \rightarrow \pi - \Theta_0$, that is, under $\cos(\Theta_0) \rightarrow - \cos(\Theta_0)$. Since $I_M$ is an odd function of $\cos(\Theta_0)$, it does not contribute to $g_{\Theta_0} \left( {\bf k}, \tau \right)$ under such an invariance, whereas $I_R$, being even in $\cos(\Theta_0)$, contributes. Physically, this would mean that the direction-sensitive detector reads off the average of two transition rates from the $\Theta_0$ and $\pi - \Theta_0$ directions respectively. Eq.(\[gdef\]) then becomes $$g_{\Theta_0} \left( {\bf k}, \tau \right) = \frac{1}{4} \frac{\partial}{\partial \epsilon} \left( \epsilon \; e^ {- \epsilon |\tilde{{\bf k}}| |\cos{\Theta_0}| } \; e^{i k_b x^b(\tau)}\right) \ . $$ Using the fact that $k_b$ is a null vector, it is straightforward to show that $|\tilde{{\bf k}}| = - [k_b u^b(\tau)]$. Substituting the above expression in Eq.(\[gwhitmann\]) and performing the straightforward ${\bf k}$ integral, we can write a compact expression for $W_{\Theta_0}$ of the following form $$\begin{aligned} W_{\Theta_0}(\tau,\tau^\prime) &=& \frac{-1}{16} \frac{\partial}{\partial \epsilon^\prime} \frac{\partial}{\partial \epsilon^{\prime \prime}} \Biggl( \frac{4\pi\epsilon^\prime \epsilon^{\prime \prime} }{ \left[ T\left(\epsilon^\prime, \epsilon^{\prime \prime}, \tau, \tau^\prime \right) \right]^2 - \left[ X \left( \epsilon^\prime, \epsilon^{\prime \prime}, \tau, \tau^\prime \right) \right]^2 } \Biggr)_{\epsilon^\prime = \epsilon, \epsilon^{\prime \prime} = \epsilon} \label{wfinalcompact}\end{aligned}$$ where the functions $T\left(\epsilon^\prime, \epsilon^{\prime \prime}, \tau, \tau^\prime \right)$ and $X\left(\epsilon^\prime, \epsilon^{\prime \prime}, \tau, \tau^\prime \right)$ are found to be $$\begin{aligned} T\left(\epsilon^\prime, \epsilon^{\prime \prime}, \tau, \tau^\prime \right) &=& \left(t(\tau) -t(\tau^\prime) \right) - i |\cos\Theta_{0}| \left( \epsilon^{\prime \prime} {\dot{t}}(\tau) + \epsilon^{\prime} {\dot{t}}(\tau^\prime) \right) \label{Tdef} \\ X\left(\epsilon^\prime, \epsilon^{\prime \prime}, \tau, \tau^\prime \right) &=& \left({\bf x}(\tau) -{\bf x}(\tau^\prime) \right) - i |\cos\Theta_{0}| \left( \epsilon^{\prime \prime} {\dot{{\bf x}}}(\tau) + \epsilon^{\prime} {\dot{{\bf x}}}(\tau^\prime) \right) \label{Xdef}\end{aligned}$$ The overdot refers to the derivative with respect to the proper time $\tau$ or $\tau^\prime$. Expanding the above expression, $W_{\Theta_0}(\tau,\tau^\prime) $ can also be written as $$\begin{aligned} W_{\Theta_0}(\tau,\tau^\prime) = \frac{1}{16} & \bigg\{& \frac{4 \pi}{-T^2 + X^2} + \frac{i 8 \pi \epsilon^{\prime \prime} |\cos\Theta_{0}| \left[ -T\dot{t} + X \dot{{\bf x}} \right] }{\left( -T^2+X^2 \right)^2} \nonumber \\ && + \frac{i 8 \pi \epsilon^{\prime} |\cos\Theta_{0}| \left[-T \dot{t}^{\prime} + X \dot{{\bf x}}^{\prime} \right]}{\left(-T^2+X^2 \right)^2} \nonumber \\ && - \frac{32 \pi \epsilon^{\prime} \epsilon^{\prime\prime} |\cos\Theta_{0}|^{2} \left[ -T \dot{t}^\prime + X \dot{{\bf x}}^{\prime} \right] \left[ -T \dot{t}+X\dot{{\bf x}} \right] }{\left( -T^2+X^2 \right)^3} \nonumber \\ && + \frac{8 \pi \epsilon^{\prime} \epsilon^{\prime\prime} |\cos\Theta_{0}|^{2} \left[- \dot{t} \dot{t}^{\prime} + \dot{{\bf x}}\dot{{\bf x}}^{\prime} \right] }{ \left( -T^2+X^2 \right)^2} \; \bigg\}_{\epsilon^\prime = \epsilon, \epsilon^{\prime \prime} = \epsilon} \label{Wfinal}\end{aligned}$$ The first term is of the familiar form one gets for the total transition rate; however, one must note that both the functions $T\left(\epsilon^\prime, \epsilon^{\prime \prime}, \tau, \tau^\prime \right)$ and $X\left(\epsilon^\prime, \epsilon^{\prime \prime}, \tau, \tau^\prime \right)$ depend on the angle $\Theta_0$. Another feature of Eq.(\[Wfinal\]) is that $\epsilon$ and $|\cos\Theta_0|$ always appear as a product in the expression.
Given that $|\cos\Theta_0|$ is always non-negative, one can formally absorb it in the definition of $\epsilon$ itself. Then, in the point-like limit of the detector, that is, when taking the limit $\epsilon \rightarrow 0$, one will arrive at an expression which is independent of the angular direction. Thus, to have a direction dependence in the transition rate, one needs to have the spatial extension of the detector modelled using a finite positive $\epsilon$ parameter in the present model. We have thus finished our construction of the direction-dependent spatially extended detector. Substituting Eq.(\[Wfinal\]) in Eq.(\[angtransitionrate\]) gives us the angular transition rate of the detector. The expression is general and will hold for any accelerating trajectory in a flat spacetime. Rindler trajectory ------------------ We shall now analyse the direction-dependent transition rate for the special case of the Rindler trajectory. Substituting for the trajectory $t(\tau) = (1/g)\sinh (g\tau) $, $x(\tau) =(1/g) \cosh (g\tau) $ and $y = z = 0$ in Eqs.(\[Tdef\]) and (\[Xdef\]), we have $$\begin{aligned} -T^2+X^2 & = \frac{2}{g^2} \bigg\{ 1 - \sqrt{1-{c^{\prime}}^2} \sqrt{1-{c^{\prime\prime}}^2} \cosh \bigl[ g(\tau-\tau^{\prime})-i(\alpha_{c^{\prime\prime}}+\alpha_{c^{\prime}} ) \bigr] \notag \\ & \hspace{8ex} - \left( \frac{{c^\prime}^2 + {c^{\prime\prime}}^2 }{2} \right) \bigg\} \ , \label{TXRindler}\end{aligned}$$ where $c^{\prime\prime}=i|\cos\Theta_{0}| g \epsilon^{\prime\prime}$, $c^{\prime}=i|\cos\Theta_{0}| g \epsilon^{\prime}$, $\cos \alpha_{c^{\prime\prime}} = 1/\sqrt{1-{c^{\prime\prime}}^2}$ and $\cos \alpha_{c^{\prime}} = 1/\sqrt{1-{c^{\prime}}^2}$. As is expected for a stationary trajectory, the above expression depends on the proper time $\tau$ and $\tau^\prime$ through their difference $\tau - \tau^\prime$ only.
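Eq.(\[TXRindler\]) can be verified directly with complex arithmetic. In the sketch below (ours), we substitute the Rindler trajectory into Eqs.(\[Tdef\]) and (\[Xdef\]), write $c^{\prime\prime}=ip$ and $c^{\prime}=iq$ with $p = |\cos\Theta_0|\, g\epsilon^{\prime\prime}$ and $q = |\cos\Theta_0|\, g\epsilon^{\prime}$, for which $\sqrt{1-{c^{\prime\prime}}^2}=\sqrt{1+p^2}$ and $\alpha_{c^{\prime\prime}}=\tan^{-1}p$, and compare against the closed form:

```python
# Check (ours) of the closed form for -T^2 + X^2 on the Rindler trajectory
# t = sinh(g*tau)/g, x = cosh(g*tau)/g, with arbitrarily chosen parameters.
import cmath, math

g   = 1.2
cth = abs(math.cos(0.7))     # |cos(Theta_0)|
e2, e1 = 0.35, 0.6           # eps'' and eps'
tau, tau_p = 0.9, 0.25

t    = lambda s: math.sinh(g*s)/g
x    = lambda s: math.cosh(g*s)/g
tdot = lambda s: math.cosh(g*s)
xdot = lambda s: math.sinh(g*s)

# direct substitution into the definitions of T and X
T = (t(tau) - t(tau_p)) - 1j*cth*(e2*tdot(tau) + e1*tdot(tau_p))
X = (x(tau) - x(tau_p)) - 1j*cth*(e2*xdot(tau) + e1*xdot(tau_p))
direct = -T**2 + X**2

# closed form, rewritten with c'' = i*p, c' = i*q
p, q = cth*g*e2, cth*g*e1
closed = (2/g**2)*(1 + (p*p + q*q)/2
                   - math.sqrt((1 + p*p)*(1 + q*q))
                     * cmath.cosh(g*(tau - tau_p) - 1j*(math.atan(p) + math.atan(q))))
print(abs(direct - closed) < 1e-12)   # True
```

The agreement holds for any choice of the parameters, since the two sides are equal as analytic functions.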
Further substituting Eq.(\[TXRindler\]) in Eqs.(\[wfinalcompact\]) and (\[angtransitionrate\]), we get $${\dot {\cal F}}_{\Theta_0}(\omega) =\frac{1}{16} \frac{\partial}{\partial \epsilon^{\prime}} \frac{\partial}{\partial \epsilon^{\prime\prime}} \bigg\{ 2 \operatorname{Re}\int_0^{\infty} ds \, e^{-i\omega s} \frac{4\pi\epsilon^{\prime\prime} \epsilon^{\prime}}{-T^{2}+X^{2}} \bigg\}_{\epsilon^\prime = \epsilon, \epsilon^{\prime \prime} = \epsilon} \label{angFinter}$$ Using the symmetry of the integrand under the simultaneous exchange $s \rightarrow -s$ and $i \rightarrow -i$, which follows from Eq.(\[TXRindler\]), we can express the integral as a contour integral over the full real line $s \in (-\infty, \infty)$. Further, $(- T^2 + X^2)$ is periodic under $s \rightarrow s + 2\pi i /g$. One can then close the contour along the line $s + 2\pi i /g$ and evaluate the residues at the enclosed poles, to get $$\begin{aligned} {\dot {\cal F}}_{\Theta_0}(\omega) &= \frac{\partial}{\partial \epsilon^{\prime}} \frac{\partial}{\partial \epsilon^{\prime\prime}} \bigg\{\frac{\pi^{2}g\epsilon^{\prime} \epsilon^{\prime\prime}e^{\frac{\omega}{g} \left[ \tan^{-1} \left( |\cos\Theta_{0}|g\epsilon^{\prime} \right)+ \tan^{-1}\left( |\cos\Theta_{0}|g\epsilon^{\prime \prime} \right)\right]} }{4 \left( e^{\frac{2 \pi \omega}{g}}-1\right)} \notag \\[1ex] & \hspace{13ex} \times \frac{\sin \left( \frac{\omega}{g} \cosh^{-1} (c) \right)}{\sqrt{1-{c^{\prime}}^2} \sqrt{1-{c^{\prime \prime}}^2} \sinh \left[ \cosh^{-1} (c) \right] } \; \; \bigg\}_{\epsilon^\prime = \epsilon, \epsilon^{\prime \prime} = \epsilon}\end{aligned}$$ where $c = [1 - ({c^\prime}^2 + {c^{\prime \prime}}^2)/2]/\sqrt{1 - {c^\prime}^2}\sqrt{1- {c^{\prime \prime}}^2}$. Note that $c\ge1$.
Considering only the factor inside the braces (without the partial derivatives), it does seem to satisfy the KMS condition, with the inverse temperature being $2 \pi/ g - (2/g)\tan^{-1} \left( |\cos\Theta_{0}|g\epsilon^{\prime} \right) - (2/g)\tan^{-1}\left( |\cos\Theta_{0}|g\epsilon^{\prime \prime} \right)$. One gets such a result in the Schlicht case for the total transition rate, with $\epsilon^{\prime} = \epsilon^{\prime \prime}$ and without the $\cos\Theta_0$ dependence. However, in the present case, the additional partial derivatives break the KMS property of ${\dot {\cal F}}_{\Theta_0}(\omega) $. Another way to arrive at the final expression is to first differentiate the integrand in Eq.(\[angFinter\]) and then perform the contour integration. This leads to the following $${\dot {\cal F}}_{\Theta_0}(\omega) = 2 \operatorname{Re}\int_0^{\infty} ds \, e^{-i\omega s} \, \frac{1}{D_\epsilon(s)}$$ where $$\begin{aligned} \frac{1}{D_\epsilon(s)} = \frac{ g^{2} \pi\bigg\{ 3 b^2 \epsilon^2 + b^4 \epsilon^4 - 2(1- b^2 \epsilon^2)\sinh^{2}\left[ \frac{gs}{2}-i\alpha \right] - 2 i b \epsilon \sinh\left[ gs - i2\alpha \right] \bigg\}}{32 \left( 1+b^2\epsilon^2 \right)^3 \sinh^{4}\left[ \frac{gs}{2}-i\alpha \right]} \label{denominator}\end{aligned}$$ and $b = g |\cos\Theta_0|$. This contour integral can be calculated, for each of the three terms, using a procedure similar to the one outlined for the integral in Eq.(\[angFinter\]).
One finally gets the angular transition rate to be $$\begin{aligned} {\dot {\cal F}}_{\Theta_0}(\omega) &=& \frac{\pi g^{2}}{32 {\left(1+b^2 \epsilon^2 \right)}^3} \; \; \frac { e^{\frac{2\omega }{g} \tan^{-1}\left( g|\cos\Theta_{0}|\epsilon \right)}}{\left( e^{\frac{2 \pi \omega}{g}} -1\right) } \nonumber \\ && \times \bigg\{ \; \; \frac{16 \pi}{3} \left(3 b^2 \epsilon^2 + b^4 \epsilon^4 \right) \frac{\omega}{g} \left(4+\frac{\omega^2}{g^2} \right) \nonumber \\ && \; \; \; \; \; \; + 16 \pi \left(1-b^2\epsilon^2 \right) \frac{\omega}{g} + 32 \pi b \epsilon \frac{\omega^2}{g^2} \; \; \bigg\}\end{aligned}$$ Thus ${\dot {\cal F}}_{\Theta_0}(\omega)$ is not KMS thermal in general, except when $\Theta_0 = \pi/2$. In the case $\Theta_0 = \pi/2$, $b$ vanishes and one recovers the usual Unruh temperature. Interestingly, even though the regularization in Eq.(\[Wfinal\]) does not hold in the $\Theta_0 = \pi/2$ case, we find that the final expression is indeed finite in this case. For $\Theta_0 \neq \pi/2$, the quadratic term in the polynomial in $(\omega/g)$ breaks the thermality of the whole expression by just a sign. One can check that the polynomial in the braces does not possess a real root and hence is positive for all real values of $\omega$. Thus the transition rate ${\dot {\cal F}}_{\Theta_0}(\omega)$ is always positive, as expected. In the low frequency regime $|\omega/g| \ll 1$, the terms linear in $(\omega/g)$ dominate over the quadratic and cubic terms, whereas in the high frequency regime $|\omega/g| \gg 1$, the term cubic in $(\omega/g)$ dominates over the linear and quadratic terms. Hence, in both these limits, ${\dot {\cal F}}_{\Theta_0}(\omega)$ is KMS with the inverse temperature equal to $2 \pi/ g - (4/g)\tan^{-1} \left( |\cos\Theta_{0}|g\epsilon \right)$, that is, one observes an angle-dependent temperature.
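The limiting KMS temperature just derived can be checked numerically. The following sketch (toy values of $g$ and $\epsilon$, chosen only for illustration) evaluates the inverse temperature $2\pi/g - (4/g)\tan^{-1}\left( |\cos\Theta_{0}|g\epsilon \right)$ and confirms that the usual Unruh value $2\pi/g$ is recovered both at $\Theta_0 = \pi/2$ and in the point-like limit $\epsilon \rightarrow 0$, with the effective temperature growing as $\Theta_0$ decreases towards the direction of acceleration:

```python
import math

def inverse_temperature(theta0, g, eps):
    """Angle-dependent inverse temperature read off from the high- and
    low-frequency KMS limits of the angular transition rate."""
    return 2 * math.pi / g - (4 / g) * math.atan(abs(math.cos(theta0)) * g * eps)

g, eps = 1.0, 0.3  # toy acceleration and detector size (natural units)

beta_unruh = 2 * math.pi / g  # usual Unruh inverse temperature
assert math.isclose(inverse_temperature(math.pi / 2, g, eps), beta_unruh)

# point-like limit eps -> 0 restores isotropy for any angle
assert math.isclose(inverse_temperature(0.2, g, 1e-12), beta_unruh, rel_tol=1e-9)

# the temperature (1/beta) is maximal along the acceleration direction Theta_0 = 0
temps = [1 / inverse_temperature(t, g, eps) for t in (0.0, 0.5, 1.0, math.pi / 2)]
assert temps == sorted(temps, reverse=True)
```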
In the $\Theta_0 = \pi/2$ direction, the temperature is the same as the usual Unruh temperature, while it increases as $\Theta_0$ decreases in the domain $0 \leq \Theta_0 \leq \pi/2$; along the direction of acceleration, it is maximal. Let us look at the combination $\epsilon |\cos\Theta_0|$. As mentioned earlier, $\epsilon$ and $|\cos\Theta_0|$ always appear as a product in the expression. Let us assume that for $\Theta_0 \neq \pi/2$ the product $\epsilon |\cos\Theta_0| \gg 1$, by taking $\epsilon \gg 1$ for a finite $|\cos\Theta_0|$. In this case, the term with pre-factor $b^4 \epsilon^4$ dominates over the rest of the terms in the braces, and ${\dot {\cal F}}_{\Theta_0}(\omega)$ is KMS thermal with the inverse temperature equal to $2 \pi/ g - (4/g)\tan^{-1} \left( |\cos\Theta_{0}|g\epsilon \right)$. The $\epsilon$ parameter represents the length scale of the spatial extension of the detector, so a large $\epsilon$ signifies a sufficiently extended detector, whereas, as mentioned earlier, in the point-like limit of the detector $\epsilon \rightarrow 0$ one recovers the usual isotropic Unruh temperature. This suggests that the features mentioned above are due specifically to the spatial extension of the detector. We comment on this spurious result in the discussion section, after analysing the spatially extended detector from the Rindler co-moving frame of reference in the next section. One might suspect that the vanishing of the direction dependence of the spatially extended detector in the point-like limit $\epsilon \rightarrow 0^+$ could be a feature of the transition rate itself, wherein one has formally subtracted an infinite constant term by taking the difference of the transition probabilities of the detector at $\tau$ and $\tau + d\tau$ to arrive at the transition rate expression.
The constant infinite term may, perhaps, contain information about the direction dependence on $\Theta_0$ even in the point-like limit $\epsilon \rightarrow 0^+$. One can check the above suspicion by explicitly computing the transition probability for the extended detector when the detector is switched on and off smoothly *around* the finite proper times $\tau_0$ and $\tau_f$ respectively. The general expression for the transition probability is given as $${\cal F}(\omega) = \int_{-\infty}^{\infty} du \, \chi(u) \int_{-\infty}^{\infty} ds \, \chi(u -s) e^{- i \omega s} \, W_{\Theta_0, \epsilon}(u,u-s) \label{transitionprobability}$$ where $\chi(\tau)$ is a smooth switching function which vanishes for $\tau < \tau_0$ and $\tau > \tau_f$, is unity for $\tau_0 < \tau < \tau_f$, and interpolates smoothly in between. For the stationary Rindler trajectory, we have $W_{\Theta_0, \epsilon}(u,u-s) = W_{\Theta_0, \epsilon}(s)$, as shown in Eqs.(\[TXRindler\]) and (\[wfinalcompact\]). Hence, in Eq.(\[transitionprobability\]) for the transition probability, we can interchange the order of integration and perform the $u$ integral first to get $${\cal F}(\omega) = \int_{-\infty}^{\infty} ds \, e^{- i \omega s} \, W_{\Theta_0, \epsilon}(s) \, Q(s) \label{transitionprobability2}$$ where $Q(s) = \int_{-\infty}^{\infty} du \, \chi(u) \chi(u -s)$ is also a smooth analytic function in the complex $s$ plane. One can then perform the contour integral in Eq.(\[transitionprobability2\]), by choosing an appropriate contour and evaluating the residues at the poles of the Wightman function $W_{\Theta_0, \epsilon}(s)$, to obtain a finite result.
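The overlap structure of $Q(s)$ is easy to make concrete. As a minimal numerical sketch (a sharp boxcar window standing in for the smooth switching function $\chi$, purely for illustration), $Q(s) = \int \chi(u)\chi(u-s)\,du$ reduces to the triangle $\max(0, \Delta\tau - |s|)$ for a window of width $\Delta\tau = \tau_f - \tau_0$:

```python
def Q_boxcar(s, tau0, tauf):
    """Overlap Q(s) = integral of chi(u) chi(u - s) du for a boxcar window chi
    supported on [tau0, tauf]; the integral is just the overlap length of the
    window with its shifted copy."""
    lo = max(tau0, tau0 + s)
    hi = min(tauf, tauf + s)
    return max(0.0, hi - lo)

tau0, tauf = 0.0, 5.0
dt = tauf - tau0
for s in (-6.0, -2.5, 0.0, 1.0, 4.9, 7.0):
    # triangle of base 2*dt and height dt, vanishing for |s| >= dt
    assert abs(Q_boxcar(s, tau0, tauf) - max(0.0, dt - abs(s))) < 1e-12
```

A genuinely smooth $\chi$ rounds off the corners of this triangle but preserves the qualitative picture: $Q(s)$ is peaked at $s=0$ with support of order the switching duration.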
However, from the expression in Eq.(\[denominator\]), one can see that $\epsilon$ and $|\cos{\Theta_0}|$ always appear as a product; hence, evaluating the contour integral in Eq.(\[transitionprobability2\]) preserves the product structure, which implies that taking the point-like limit $\epsilon \rightarrow 0^+$ makes the $\Theta_0$ dependence go away. In fact, one can verify that even in the non-stationary case the product structure of $\epsilon$ and $|\cos{\Theta_0}|$ still holds, and the direction dependence again vanishes in the point-like limit $\epsilon \rightarrow 0^+$. A spatially extended detector in the Rindler frame {#rindlerframesection} ================================================== Our aim in this section is to investigate the response of a spatially extended detector, operating in its co-moving frame and coupled to the Minkowski vacuum state of the scalar field, following De Bievre and Merkli [@DeBievre:2006pys]. We take the centre of mass of the detector, with co-ordinates $x_0(\tau)$, to follow the Rindler trajectory with uniform acceleration $g$. We work in Rindler co-ordinates with the following form of the metric $$ds^2 = \exp{(2 g z)} \left( - dt^2 + dz^2 \right) + d x^2_{\perp} \label{rindlermetric}$$ We further assume the usual monopole interaction Hamiltonian term proportional to the value of the field on the trajectory, but now with the field replaced by the smeared field $\phi(\tau)$ obtained through $$\phi(\tau) = \int dz d^2 x_{\perp} e^{g z} f \left(z, x_{\perp}, z_0(\tau), x_{\perp 0}(\tau) \right) \phi(x) \label{smeared}$$ where $dz \, d^2 x_{\perp} \, e^{g z}$ is the 3-volume element of the $t = $ constant hypersurface and $f \left(z, x_{\perp}, z_0(\tau), x_{\perp 0}(\tau) \right)$ is the profile function which encodes the spatial geometry of the extended detector itself.
For the particular detector considered, $z_0(\tau) =0 = x_{\perp 0}(\tau) $; the detector is centred, with its centre of mass at the origin. The pullback of the Wightman function relevant for calculating the detector response function is $$\begin{aligned} W(\tau,\tau^\prime) = \langle 0_M | \phi(\tau) \phi(\tau^\prime) | 0_M \rangle \label{whitmannfunction}\end{aligned}$$ with the transition rate being $${\dot {\cal F}}(E) = 2 \operatorname{Re}\int_0^\infty ds \,e^{-i E s} \, W(\tau,\tau - s)$$ The quantised scalar field in terms of the mode solutions for the metric in Eq.(\[rindlermetric\]) is $$\phi(x) = \int d\omega \int d^2 k_{\perp} \left[ {\hat a}_{\omega, k_{\perp}} v_{\omega, k_{\perp}}(x) + {\hat a}^{\dagger}_{\omega, k_{\perp}} v^{\star}_{\omega, k_{\perp}}(x) \right] \label{field}$$ where the mode solutions are given in terms of the modified Bessel function as $$v_{\omega, k_{\perp}}(x) = \left[\frac{\sinh \left( \pi\omega /g \right)}{4\pi^{4}g} \right]^{1/2} K_{i\omega /g} \left[ \frac{\sqrt{k_{\bot}^{2} + m^{2}}}{g e^{-g z}} \right] e^{ik_{\perp} \cdot x_{\perp} - i \omega t} \label{modsol}$$ The smeared field operator defined in Eq.(\[smeared\]) can then be expressed as $$\phi(\tau) = \int d\omega \int d^2 k_{\perp} \left[ {\hat a}_{\omega, k_{\perp}} h_{\omega, k_{\perp}}(\tau) + {\hat a}^{\dagger}_{\omega, k_{\perp}} h^{\star}_{\omega, k_{\perp}}(\tau) \right]$$ with the corresponding smeared field modes being $$\begin{aligned} h_{\omega, k_{\perp}}(\tau) &=& \left[\frac{\sinh \left( \pi\omega /g \right)}{4\pi^{4}g} \right]^{1/2} e^{ - i \omega t} \; u_{\omega, k_{\perp}}\left(z_0(\tau), x_{\perp 0}(\tau) \right) \end{aligned}$$ and $$\begin{aligned} u_{\omega, k_{\perp}}\left(z_0(\tau), x_{\perp 0}(\tau) \right) &=& \int dz \, d^2x_{\perp} e^{g z} f \left(z, x_{\perp}, z_0(\tau), x_{\perp 0}(\tau) \right) \nonumber \\ && \times K_{i\omega /g} \left[ \frac{\sqrt{k_{\bot}^{2} + m^{2}}}{g e^{-g z}} \right] e^{ik_{\perp} \cdot x_{\perp}} \end{aligned}$$ The pullback of the Wightman
function given in Eq.(\[whitmannfunction\]) is then expressed in terms of the smeared field modes to become $$\begin{aligned} W(\tau,\tau^\prime) &=& \int d\omega \int d^{2}k_{\perp} \left[ \left( \eta_{\omega}+1 \right) h_{\omega, k_{\perp}}(\tau) h^{\star}_{\omega, k_{\perp}}(\tau^\prime) + \eta_{\omega} \, h^{\star}_{\omega, k_{\perp}}(\tau) h_{\omega, k_{\perp}}(\tau^\prime) \right] \nonumber \\ &=& \int d\omega \int d^{2}k_{\perp} \frac{\sinh \left( \pi\omega /g \right)}{4\pi^{4}g} \bigg[ \left( \eta_{\omega}+1 \right) u_{\omega, k_{\perp}}(\tau) u^{\star}_{\omega, k_{\perp}}(\tau^\prime) e^{- i \omega (\tau - \tau^\prime)} \nonumber \\ \; \; \; & & + \; \eta_{\omega} \, u^{\star}_{\omega, k_{\perp}}(\tau) u_{\omega, k_{\perp}}(\tau^\prime) \, e^{ i \omega (\tau - \tau^\prime)} \bigg] \end{aligned}$$ where $\eta_{\omega} = 1/(\exp{(\beta \omega)} - 1)$ is the Planckian factor with the usual Unruh temperature. Since for the Rindler trajectory we have $z_0(\tau) =0 = x_{\perp 0}(\tau) $, the $u_{\omega, k_{\perp}}$ are just constants. Then $W(\tau, \tau^\prime) = W(\tau - \tau^\prime) = W(s)$, as expected for a Killing trajectory. The transition rate is straightforward to obtain, and we get $$\begin{aligned} {\dot {\cal F}}(E) &=& \int d\omega \int d^{2}k_{\perp} \frac{\sinh \left( \pi\omega /g \right)}{4\pi^{4}g} \bigg[ \Theta(\omega + E) \left( \eta_{\omega}+1 \right) u_{\omega, k_{\perp}} u^{\star}_{\omega, k_{\perp}} \nonumber \\ \; \; \; & & + \; \Theta(\omega - E)\; \eta_{\omega} \, u^{\star}_{\omega, k_{\perp}} u_{\omega, k_{\perp}} \bigg] \end{aligned}$$ Thus ${\dot {\cal F}}(E) $ satisfies the KMS condition for an arbitrary profile function, $$\frac{{\dot {\cal F}}(E)}{{\dot {\cal F}}(-E)} = \frac{\eta_{E}}{\eta_{E}+1} = e^{- \beta E}$$ with the usual Unruh temperature. The above result regarding thermality is quite general and holds for any smooth profile function which falls off towards the Rindler spatial infinity.
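The detailed-balance step in the last equation rests only on the algebraic identity $\eta_E/(\eta_E + 1) = e^{-\beta E}$ satisfied by the Planckian factor; a quick numerical confirmation (arbitrary values of $\beta$ and $E$, chosen only for illustration):

```python
import math

def planck(beta, omega):
    """Planckian occupation factor eta_omega = 1 / (exp(beta*omega) - 1)."""
    return 1.0 / (math.exp(beta * omega) - 1.0)

for beta in (0.5, 2.0, 2 * math.pi):  # 2*pi/g with g = 1 is the Unruh value
    for E in (0.3, 1.0, 4.0):
        eta = planck(beta, E)
        # eta/(eta+1) = e^{-beta E}: the KMS/detailed-balance ratio
        assert math.isclose(eta / (eta + 1.0), math.exp(-beta * E), rel_tol=1e-12)
```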
One could even have included a direction dependent angle, as in the case of Eq.(\[angprof\]). However, the result would still be the same, since the spatial part does not contribute to the $\tau$ integral in the co-moving frame. Discussion {#discsection} ========== We have analysed two models of spatially extended detectors with direction dependence on the Rindler trajectory. The first model is based on a Schlicht type construction, with a direction-sensitive Lorentz-function profile for the smeared field operator with a characteristic length $\epsilon$, defined in the Fermi co-ordinates attached to the uniformly accelerated trajectory. The second model has a very general direction sensitive profile for the smeared field operator, but defined in the Rindler wedge corresponding to the trajectory. The transition rates for the two models were found to differ significantly when evaluated on the Rindler trajectory. In the first model, the spectrum was found to be anisotropic and non-KMS in general. Only in the two frequency limits $\omega/g \ll 1$ and $\omega/g \gg 1$ was the spectrum KMS thermal, with a direction-weighted inverse temperature $2 \pi/ g - (4/g)\tan^{-1} \left( |\cos\Theta_{0}|g\epsilon \right)$. Further, for an arbitrary frequency but for $g \epsilon \gg 1$, the spectrum is KMS thermal again with the same direction dependent temperature. In contrast, for the second model, the transition rate was found to be KMS thermal and isotropic for an arbitrary direction sensitive profile for the detector. The reason for the discrepancy between the results of the two models can be understood by analysing the tails of the chosen profiles relative to the Rindler horizon. In the first model with the Schlicht type profile, the constant time slices chosen in the Fermi co-ordinates extend all the way through the Rindler horizon, since these slices form a subset of the Cauchy surfaces foliating the global flat spacetime.
Hence the Lorentz-function profile defined on such a Cauchy surface has a tail extending well beyond the Rindler horizon, which implies that the spatially extended detector is made up of constituents which leak outside the Rindler wedge. In such a case, when the proper time increases, the points at constant spatial coordinates on the orthogonal spatial hypersurfaces in the other Rindler wedge move to the past, and this casts doubt on the transition probability formula that involves the response function, since the formula is derived from time-dependent perturbation theory and involves time-ordered evolution. It is then a pure coincidence that Schlicht’s derivation of the spectrum in Eq.(\[schlichttrans\]) is KMS thermal with a higher temperature than the usual Unruh temperature, since the validity of the quantum description underlying the formula itself is suspect, other than in the case when $\epsilon$ is set to zero, wherein the detector model is restricted to the Rindler wedge with the usual Unruh temperature. However, one can expect the transition rate expression derived in Eqs.(\[angtransitionrate\]) and (\[Wfinal\]) to be valid for detector trajectories not involving causal horizons. On the other hand, if the support of the detector’s profile is contained in the Rindler wedge, then the corresponding transition rate is KMS at the usual Unruh temperature of the Rindler trajectory, as is evident from the results of the second model of the detector defined in the Rindler wedge. This holds regardless of whether the peak of the chosen spatial profile coincides with the reference trajectory of the detector. The underlying reason is that the monopole interaction defines the energy gap of the detector with respect to the proper time of the reference trajectory, even when the proper time at the peak of the profile may be quite different from the proper time at the reference trajectory, which is the Rindler trajectory in the present case.
One could question whether this way of defining the interaction is a reasonable model of the microphysics of an extended body. When the profile has a peak, perhaps a more reasonable model would be to choose the reference trajectory to coincide with the peak of the profile. Thus, based on the second model defined in the Rindler wedge, we conclude that the Unruh effect is directionally isotropic, with the usual Unruh temperature, for spatially extended direction dependent detectors. Nevertheless, there could be some interest in gaining more quantitative control over what happens for the Schlicht type detector model when the profile leaks outside the Rindler wedge, beyond Schlicht’s Lorentz-function profile. Schlicht’s profile gives a KMS spectrum for the case discussed in Sec. \[schlichtsection\], but at a different temperature, greater than the usual Unruh temperature. One could question whether other leaking profiles still give a KMS spectrum (at some temperature), or whether any deviation from Schlicht’s profile necessarily breaks KMS. Acknowledgments {#acknowledgments .unnumbered} =============== We thank Jorma Louko for helpful discussions and useful comments on the draft. SK thanks the Department of Science and Technology, India for financial support and the University of Nottingham for hospitality where part of the work was completed. [99]{} S. A. Fulling, Phys. Rev. D [**7**]{}, 2850 (1973). P. C. W. Davies, J. Phys. A [**8**]{}, 609 (1975). W. G. Unruh, Phys. Rev. D [**14**]{}, 870 (1976). S. Takagi, Phys. Lett.  [**148B**]{}, 116 (1984). N. D. Birrell and P. C. W. Davies, *Quantum Fields in Curved Space* (Cambridge University Press, Cambridge, 1982). L. C. B. Crispino, A. Higuchi and G. E. A. Matsas, Rev. Mod. Phys.  [**80**]{}, 787 (2008) \[arXiv:0710.5373 \[gr-qc\]\]. S. Fulling and G. Matsas, Scholarpedia [**9**]{}, no. 10, 31789 (2014). E. Martín-Martínez, M. Montero and M. del Rey, Phys. Rev. D [**87**]{}, 064038 (2013) \[arXiv:1207.3248 \[quant-ph\]\]. Á. M.
Alhambra, A. Kempf and E. Martín-Martínez, Phys. Rev. A [**89**]{}, 033835 (2014) \[arXiv:1311.7619 \[quant-ph\]\]. S. De Bievre and M. Merkli, Class. Quant. Grav.  [**23**]{}, 6525 (2006) \[arXiv:math-ph/0604023\]. D. Hümmer, E. Martín-Martínez and A. Kempf, Phys. Rev. D [**93**]{}, 024019 (2016) \[arXiv:1506.02046 \[quant-ph\]\]. A. Pozas-Kerstjens and E. Martín-Martínez, Phys. Rev. D [**92**]{}, 064042 (2015) \[arXiv:1506.03081 \[quant-ph\]\]. A. Pozas-Kerstjens and E. Martín-Martínez, Phys. Rev. D [**94**]{}, 064074 (2016) \[arXiv:1605.07180 \[quant-ph\]\]. A. Pozas-Kerstjens, J. Louko and E. Martín-Martínez, Phys. Rev. D [**95**]{}, 105009 (2017) \[arXiv:1703.02982 \[quant-ph\]\]. P. Simidzija and E. Martín-Martínez, Phys. Rev. D [**98**]{}, 085007 (2018) \[arXiv:1809.05547 \[quant-ph\]\]. S. Kolekar and T. Padmanabhan, Phys. Rev. D [**86**]{}, 104057 (2012) \[arXiv:1205.0258 \[gr-qc\]\]. S. Schlicht, Class. Quant. Grav. **21**, 4647 (2004) \[arXiv:gr-qc/0306022\]. B. S. DeWitt, in [*General Relativity: an Einstein centenary survey*]{}, edited by S. W. Hawking and W. Israel (Cambridge University Press, Cambridge, 1979). P. Langlois, “Imprints of spacetime topology in the Hawking-Unruh effect,” PhD Thesis, University of Nottingham, arXiv:gr-qc/0510127. J. Louko and A. Satz, Class. Quant. Grav.  [**25**]{}, 055012 (2008) \[arXiv:0710.5671 \[gr-qc\]\]. J. Letaw, Phys. Rev. D **23**, 1709 (1981). S. Takagi, Prog. Theor. Phys. Suppl.  [**88**]{}, 1 (1986). C. J. Fewster, B. A. Juárez-Aubry and J. Louko, Class. Quant. Grav.  [**33**]{}, 165003 (2016) \[arXiv:1605.01316 \[gr-qc\]\]. S. Kolekar and J. Louko, Phys. Rev. D [**96**]{}, 024054 (2017) \[arXiv:1703.10619 \[hep-th\]\]. *NIST Digital Library of Mathematical Functions.* `http://dlmf.nist.gov/`, Release 1.0.19 of 2018-06-22. F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, and B. V. Saunders, eds. [^1]: sanved.kolekar@cbs.ac.in
--- abstract: 'In this paper, I address the oscillation probability of $O$(GeV) neutrinos of all active flavours produced inside the Sun and detected at the Earth. Flavours other than electron-type neutrinos may be produced, for example, by the annihilation of WIMPs which may be trapped inside the Sun. In the GeV energy regime, matter effects are important both for the “1–3” system and the “1–2” system, and for different neutrino mass hierarchies. A numerical scan of the multidimensional three-flavour parameter space is performed, “inspired” by the current experimental situation. One important result is that, in the three-flavour oscillation case, $P_{\alpha\beta}\neq P_{\beta\alpha}$ for a significant portion of the parameter space, even if there is no $CP$-violating phase in the MNS matrix. Furthermore, $P_{\mu\mu}$ has a significantly different behaviour from $P_{\tau\tau}$, which may affect expectations for the number of events detected at large neutrino telescopes.' --- CERN-TH-2000-168\   hep-ph/0006157\ .3in [**The Oscillation Probability of GeV Solar Neutrinos of All Active Species**]{} 0.5in André de Gouvêa 0.1in [*CERN - Theory Division\ CH-1211 Geneva 23, Switzerland*]{} .2in Introduction ============ In the Standard Model of particle physics, neutrinos are strictly massless. Any evidence for neutrino masses would, therefore, imply physics beyond the Standard Model. Even though the direct experimental determination of a neutrino mass is (probably) far beyond the current experimental reach, experiments have been able to obtain indirect, and recently very strong, evidence for neutrino masses, via neutrino oscillations. The key evidence for neutrino oscillations comes from the angular dependent flux of atmospheric muon-type neutrinos measured at SuperKamiokande [@atmospheric], combined with a large deviation of the muon-type to electron-type neutrino flux ratio from theoretical predictions. 
This “atmospheric neutrino puzzle” is best solved by assuming that $\nu_{\mu}$ oscillates into $\nu_{\tau}$ and that the $\nu_e$ does not oscillate. For a recent analysis of all the atmospheric neutrino data see [@atmos_analysis]. On the other hand, measurements of the solar neutrino flux [@Cl; @Kamiokande; @GALLEX; @SAGE; @Super-K] have always been plagued by a large suppression of the measured solar $\nu_e$ flux with respect to theoretical predictions [@SSM]. Again, this “solar neutrino puzzle” is best resolved by assuming that $\nu_e$ oscillates into a linear combination of the other flavour eigenstates [@bksreview; @rate_analysis] (for a more conservative analysis of the event rates and the inclusion of the “dark side” of the parameter space, see [@dark_side]). The most recent analysis of the solar neutrino data which includes the mixing of three active neutrino species can be found in [@solar_3]. Neutrino oscillations were first hypothesised by Bruno Pontecorvo in the 1950’s [@Pontecorvo]. The hypothesis of three flavour mixing was first raised by Maki, Nakagawa and Sakata [@MNS]. In light of the solar neutrino puzzle, Wolfenstein [@W] and Mikheyev and Smirnov [@MS] realized that neutrino–matter interactions could affect in very radical ways the survival probability of electron-type neutrinos which are produced in the solar core and detected at the Earth (MSW effect). Since then, significant effort has been devoted to understanding the oscillation probabilities of electron-type neutrinos produced in the Sun. For example, in [@KP_3] the survival probability of solar electron-type neutrinos was discussed in the context of three-neutrino mixing including matter effects, and solutions to the solar neutrino puzzle in this context were studied (for example, in [@KP_3; @MS_3; @solar_3]). 
In this paper, the understanding of solar neutrino oscillations is extended to the case of other active neutrino species ($\nu_{\mu}$, $\nu_{\tau}$, and antineutrinos) produced in the solar core. Even though only electron-type neutrinos are produced by the nuclear reactions which take place in the Sun’s innards, it is well known that, in a number of dark matter models, dark matter particles can be trapped gravitationally inside the Sun, and that the annihilation of these should yield a flux of high energy neutrinos ($E_{\nu}\gtrsim 1$ GeV) of all species which may be detectable at the Earth [@DM_review]. Indeed, this is one of the goals of very large “neutrino telescopes,” such as AMANDA [@Amanda] or BAIKAL [@Baikal]. It is important to understand how neutrino oscillations will affect the expected event rates at these experiments.[^1] The oscillation probability of all neutrino species has, of course, been studied in different contexts, such as in the case of neutrinos produced in the core of supernovae [@supernova] or in the case of neutrinos propagating in constant electron number densities [@barger_etal]. The latter case has been receiving a considerable amount of attention from neutrino factory studies [@nufact]. The case at hand (GeV solar neutrinos) differs significantly from those mentioned above, in at least a few of the following: source-detector distance, electron number density average value and position dependency, energy average value and spectrum. Neutrino factory studies, for example, are interested in $O$(1000) km base-lines, $O$(10) GeV electron-type and muon-type neutrinos produced via muon decay propagating in roughly constant, Earth-like (matter densities around 3 g/cm$^3$) electron number densities. The paper is organised as follows. In Sec. 2, the well known case of two-flavour oscillations is reviewed in some detail, while special attention will be paid to neutrinos produced inside the Sun. In Sec.
3 the same discussion is extended to the less familiar case of three-flavour oscillations. Again, special attention is paid to neutrinos produced in the Sun’s core. In Sec. 4 the results presented in Sec. 3 will be analysed numerically, and the three-neutrino multi-dimensional parameter space will be explored. Sec. 5 contains a summary of the results and the conclusions. It is important to comment at this point that one of the big challenges of studying three-flavour oscillations is the multi-dimensional parameter space, composed of three mixing angles, two mass-squared differences, and one complex phase, plus the neutrino energy. For this reason, the discussions presented here will take advantage of the current experimental situation to constrain the parameter space, and of the possibility of producing neutrinos of all species via dark matter annihilations to constrain the neutrino energies to the range from a few to tens of GeV. Two-Flavour Oscillations ======================== In this section, the well studied case of two-flavour oscillations will be reviewed [@general_review]. This is done in order to present the formalism which will be later extended to the case of three-flavour oscillations and describe general properties of neutrino oscillations and of neutrinos produced in the Sun’s core. Generalities ------------ Neutrino oscillations take place because, similar to what happens in the quark sector, neutrino weak eigenstates are different from neutrino mass eigenstates. 
The two sets are related by a unitary matrix, which is, in the case of two-flavour mixing, parametrised by one mixing angle $\vartheta$.[^2] $$\left(\matrix{\nu_{e} \cr \nu_{x} }\right)= \left(\matrix{U_{e1}&U_{e2}\cr U_{x1}&U_{x2}}\right) \left(\matrix{\nu_{1} \cr \nu_{2} }\right)= \left(\matrix{\cos\vartheta&\sin\vartheta\cr -\sin\vartheta&\cos\vartheta}\right) \left(\matrix{\nu_{1} \cr \nu_{2} }\right),$$ where $\nu_1$ and $\nu_2$ are neutrino mass eigenstates with masses $m_1$ and $m_2$, respectively, and $\nu_x$ is the flavour eigenstate orthogonal to $\nu_e$. All physically distinguishable situations can be obtained if $0\leq\vartheta\leq\pi/2$ and $m_1^2\leq m_2^2$ or $0\leq\vartheta\leq\pi/4$ and no constraint is imposed on the masses-squared. In the case of oscillations in vacuum, it is trivial to compute the probability that a neutrino produced in a flavour state $\alpha$ is detected as a neutrino of flavour $\beta$, assuming that the neutrinos are ultrarelativistic and propagate with energy $E_{\nu}$: $$P_{\alpha\beta}=|U_{\beta1}|^2|U_{\alpha1}|^2+|U_{\beta2}|^2|U_{\alpha2}|^2+2Re\left( U_{\beta1}^*U_{\beta2}U_{\alpha1}U_{\alpha2}^* e^{i\frac{\Delta m^2x}{2E_{\nu}}}\right).$$ Here $\Delta m^2\equiv m^2_{2}-m_1^2$ is the mass-squared difference between the two mass eigenstates and $x$ is the distance from the detector to the source. It is trivial to note that $P_{\alpha\beta}=P_{\beta\alpha}$ since all $U_{\alpha i}$ are real and the theory is $T$-conserving. Furthermore, note that $\vartheta$ is indistinguishable from $\pi/2-\vartheta$ (or, equivalently, the sign of $\Delta m^2$ is not physical), and all physically distinguishable situations are obtained by allowing $0\leq\vartheta\leq\pi/4$ and choosing a fixed sign for $\Delta m^2$. In the case of nontrivial neutrino–medium interactions, the computation of $P_{\alpha\beta}$ can be rather involved. 
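Specialising the vacuum expression above to $\alpha=\beta=e$ gives the familiar closed form $P_{ee}=1-\sin^2 2\vartheta\,\sin^2(\Delta m^2 x/4E_{\nu})$. The sketch below (a dimensionless toy phase $\phi \equiv \Delta m^2 x/2E_{\nu}$, not tied to any particular experiment) verifies this against the amplitude-level computation:

```python
import cmath
import math

def P_ee_amplitude(theta, phase):
    """Survival probability from the oscillation amplitude
    A = cos^2(theta) + sin^2(theta) * exp(i * phase),
    with phase = Delta m^2 * x / (2 E_nu) in natural units."""
    amp = math.cos(theta) ** 2 + math.sin(theta) ** 2 * cmath.exp(1j * phase)
    return abs(amp) ** 2

def P_ee_closed_form(theta, phase):
    """Familiar closed form: 1 - sin^2(2 theta) sin^2(phase / 2)."""
    return 1.0 - math.sin(2 * theta) ** 2 * math.sin(phase / 2.0) ** 2

for theta in (0.1, math.pi / 6, math.pi / 4):
    for phase in (0.0, 0.7, math.pi, 5.0):
        assert math.isclose(P_ee_amplitude(theta, phase),
                            P_ee_closed_form(theta, phase),
                            rel_tol=1e-12, abs_tol=1e-12)
```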
Assuming that the neutrino–medium interactions can be expressed in terms of an effective potential for the neutrino propagation, one has to solve $$\frac{\rm d}{{\rm d}t}\left(\matrix{\nu_{1}(t) \cr \nu_{2}(t) }\right)= -i\left[\left(\matrix{E_1 & 0 \cr 0 & E_2}\right)+\left(\matrix{V_{11}(t) & V_{12}(t) \cr V_{12}(t)^* & V_{22}(t)}\right)\right]\left(\matrix{\nu_{1}(t) \cr \nu_{2}(t) }\right),$$ with the appropriate boundary conditions (either a $\nu_e$ or a $\nu_x$ as the initial state, for example). In the ultrarelativistic limit one may approximate $E_2-E_1\simeq \Delta m^2/2E_{\nu}$, ${\rm d}/{\rm d}t\simeq{\rm d}/{\rm d}x$, and $V_{ij}(t)\simeq V_{ij}(x)$. A very crucial assumption is that there is no kind of neutrino absorption due to the neutrino–medium interaction, [*i.e.,*]{} the $2\times 2$ Hamiltonian for the neutrino system is Hermitian. It is interesting to ask what can be said about $P_{\alpha\beta}$ in very general terms. First, the conservation of probability requires that $$\begin{aligned} P_{ee}+P_{ex}&=&1, \\ P_{xe}+P_{xx}&=&1.\end{aligned}$$ Second, given that the Hamiltonian evolution is unitary, $$P_{ee}+P_{xe}=1. \label{extra_constraint}$$ It is easy to show that the extra constraint $P_{ex}+P_{xx}=1$ is redundant. Eq. (\[extra\_constraint\]) can be understood by the following “intuitive” argument: if the same amount of $\nu_e$ and $\nu_x$ is produced, independent of what happens to $\nu_e$ and $\nu_x$ during flight, the number of $\nu_e$ and $\nu_x$ detected in the end has to be the same. In light of the constraints above, one can show that there is only one independent $P_{\alpha\beta}$, which is normally chosen to be $P_{ee}$. The others are given by $P_{ex}=P_{xe}=1-P_{ee}$ and $P_{xx}=P_{ee}$. Note that the equality $P_{ex}=P_{xe}$ is [*not*]{} a consequence of $T$-invariance, but a consequence of the unitarity of the Hamiltonian evolution, and is particular to the two-flavour oscillation case, as will be shown later.
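The counting of independent probabilities above can be checked against a generic unitary two-state evolution. The following sketch (an arbitrary $2\times2$ unitary built from one angle and one phase, purely illustrative) confirms probability conservation, the constraint of Eq. (\[extra\_constraint\]), and the resulting relations $P_{ex}=P_{xe}=1-P_{ee}$ and $P_{xx}=P_{ee}$:

```python
import cmath
import math

def probabilities(alpha, delta):
    """Transition probabilities from a generic 2x2 unitary evolution matrix
    U = [[cos a, sin a e^{i d}], [-sin a e^{-i d}, cos a]]."""
    U = [[math.cos(alpha), math.sin(alpha) * cmath.exp(1j * delta)],
         [-math.sin(alpha) * cmath.exp(-1j * delta), math.cos(alpha)]]
    # P[i][j] = probability for the j-th initial flavour to end as the i-th
    return [[abs(U[i][j]) ** 2 for j in range(2)] for i in range(2)]

P = probabilities(0.63, 1.2)
P_ee, P_ex = P[0][0], P[0][1]
P_xe, P_xx = P[1][0], P[1][1]

assert math.isclose(P_ee + P_xe, 1.0)  # conservation of probability
assert math.isclose(P_ee + P_ex, 1.0)  # unitarity constraint
assert math.isclose(P_ex, P_xe) and math.isclose(P_xx, P_ee)
```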
Oscillation of Neutrinos Produced in the Sun’s Core --------------------------------------------------- It is well known [@W; @MS] that neutrino–Sun interactions affect the oscillation probabilities of neutrinos produced in the Sun’s core in very nontrivial ways. Indeed, all but one solution to the solar neutrino puzzle rely heavily on neutrino–Sun interactions [@bksreview; @rate_analysis; @dark_side]. The survival probability of electron-type solar neutrinos has been computed in many different approximations by a number of people over the years, and can be understood in very simple terms [@general_review]. In the presence of electrons, the differential equation satisfied by the two-neutrino system is, in the flavour basis, $$\frac{\rm d}{{\rm d}x}\left(\matrix{\nu_{e}(x) \cr \nu_{x}(x) }\right)=-i\left[ \frac{\Delta m^2}{2E_{\nu}}\left(\matrix{|U_{e2}|^2 & U_{e2}^*U_{x2} \cr U_{e2}U_{x2}^* & |U_{x2}|^2 }\right) +\left(\matrix{A(x) & 0 \cr 0 & 0}\right)\right] \left(\matrix{\nu_{e}(x) \cr \nu_{x}(x) }\right), \label{eq_2ns}$$ where terms proportional to the $2\times 2$ identity matrix were neglected, since they play no role in the physics of neutrino oscillations. $$A(x)=\sqrt{2}G_FN_e(x) \label{A(x)}$$ is the charged-current contribution to the $\nu_e$-$e$ forward scattering amplitude, $G_F$ is Fermi’s constant, and $N_e(x)$ is the position-dependent electron number density. In the case of the Sun [@SSM] (see also [@bahcall_www]), $A\equiv A(0)\simeq 6\times 10^{-3}$ eV$^2$/GeV, assuming an average core density of 79 g/cm$^3$, and $A(x)$ falls roughly exponentially until close to the Sun’s edge. Sufficiently far from the Sun’s edge, $A(x)$ is effectively zero. 
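The quoted size of $A$ can be reproduced with a short unit-conversion exercise. The sketch below is my own bookkeeping (the constants are standard PDG values, and the assumption of roughly one electron per nucleon at the quoted average density is mine); it evaluates $\sqrt{2}G_FN_e$ and expresses it in the units used here:

```python
import math

G_F = 1.16637e-5       # Fermi constant, GeV^-2
HBARC = 1.97327e-14    # hbar * c, GeV * cm
N_AVOG = 6.02214e23    # Avogadro's number, mol^-1

def matter_A(n_e_per_cm3):
    """A = sqrt(2) G_F N_e, converted to eV^2/GeV."""
    n_e_gev3 = n_e_per_cm3 * HBARC ** 3         # electron density in GeV^3
    a_gev = math.sqrt(2.0) * G_F * n_e_gev3     # potential in GeV
    return a_gev * 1e18                         # GeV -> eV, then eV -> eV^2/GeV

# Average core density 79 g/cm^3, ~one electron per nucleon (assumed here)
A0 = matter_A(79.0 * N_AVOG)
```

Under these assumptions the result comes out close to the quoted $6\times10^{-3}$ eV$^2$/GeV.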
A particularly simple way of understanding the propagation of electron-type neutrinos produced in the Sun’s core to the Earth is to start with a $\nu_e$ state in the basis of the eigenstates of the Hamiltonian evaluated at the production point, $|\nu_e\rangle=\cos\vartheta_M(0)|\nu_L\rangle+\sin\vartheta_M(0)|\nu_H\rangle$, where $|\nu_H\rangle$ ($|\nu_L\rangle$) correspond to the highest (lowest) instantaneous Hamiltonian eigenstate. The matter mixing angle $\vartheta_M\equiv\vartheta_M(0)$ is given by $$\cos 2\vartheta_M=\frac{\Delta m^2\cos 2\vartheta-2E_{\nu}A} {\sqrt{(\Delta m^2)^2+4E_{\nu}^2A^2-4E_{\nu}A\Delta m^2\cos 2\vartheta}}. \label{cos2tm}$$ The evolution of this initial state from the Sun’s core is described by an arbitrary unitary matrix until the neutrino reaches the Sun’s edge. From this point on, one can rotate the state to the mass basis and follow the vacuum evolution of the state. Therefore, $P_{ee}(x)$, where $x$ is the distance from the Sun’s edge to some point far away from the Sun (for example, the Earth), is $$P_{ee}(x)=\left|\left(\matrix{U_{e1}^* & U_{e2}^*}\right)\left(\matrix{1 & 0 \cr 0 & e^{-i\frac{\Delta m^2x}{2E_{\nu}}}}\right)\left(\matrix{A & B \cr -B^* & A^*}\right) \left(\matrix{\cos\vartheta_M \cr \sin\vartheta_M}\right)\right|^2, \label{Peex}$$ where overall phases in the amplitude have been neglected. The matrix parametrised by $A,B$ represents the evolution of the system from the Sun’s core to vacuum, and also rotates the state into the mass basis.[^3] Expanding Eq. 
(\[Peex\]), and assuming that there is no coherence in the Sun’s core between $\nu_L$ and $\nu_H$,[^4] one arrives at the well-known expression (first derived using a different language in [@Petcov_eq] and [@PP]) $$P_{ee}(x)=P_1\cos^2\vartheta + P_2\sin^2\vartheta -\cos 2\vartheta_M \sqrt{P_c(1-P_c)}\sin 2\vartheta\cos\left(\frac{\Delta m^2x}{2E_{\nu}}+\delta \right), \label{pee}$$ where $\delta$ is the phase of $AB^*$, $P_c\equiv |B|^2=1-|A|^2$ is the “level crossing probability”, and $P_1=1-P_2=\frac{1}{2}+\frac{1}{2}\left(1-2P_c\right)\cos 2\vartheta_M$ is interpreted as the probability that the neutrino exits the Sun as a $\nu_1$. Eq. (\[pee\]) should be valid in all cases of interest, and exhibits a rich variety of features. In the case of the solar neutrino puzzle, the neutrino energies of interest range from hundreds of keV to ten MeV, and matter effects start to play a role for values of $\Delta m^2$ as high as $10^{-4}$ eV$^2$. In the adiabatic limit ($P_c\rightarrow 0$) very small values of $P_{ee}$ are attainable when $\cos 2\vartheta_M\rightarrow -1$ and $\sin^2\vartheta$ is small. More generally, in this limit $P_{ee}=\sin^2\vartheta$. This is what happens for all solar neutrino energies in the case of the LOW solution,[^5] for solar neutrino energies above a few MeV in the case of the LMA solution, and for 400 keV $\lesssim E_{\nu}\lesssim 1$ MeV energies in the case of the SMA solution. In the extremely nonadiabatic limit, which is reached when $\Delta m^2/2E_{\nu}\ll A$, one has $P_c\rightarrow \cos^2\vartheta$ and $\cos 2\vartheta_M\rightarrow -1$, and the original vacuum oscillation expression is obtained, up to the “matter phase” $\delta$. This is generically what happens in the VAC solution to the solar neutrino puzzle. If the electron number density is in fact exponential, one can solve Eq. (\[eq\_2ns\]) exactly [@exponential; @PC]. 
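For numerical work it is convenient to code the matter angle of Eq. (\[cos2tm\]), writing the denominator as the quadrature sum $\sqrt{(\Delta m^2\cos2\vartheta-2E_{\nu}A)^2+(\Delta m^2\sin2\vartheta)^2}$. The sketch below follows the units used in the text ($\Delta m^2$ in eV$^2$, $A$ in eV$^2$/GeV, $E_{\nu}$ in GeV); the function name is mine:

```python
import math

def cos_2theta_M(dm2, theta, A, E):
    """Matter mixing angle at the production point, Eq. (cos2tm).
    dm2 in eV^2, A in eV^2/GeV, E in GeV."""
    num = dm2 * math.cos(2.0 * theta) - 2.0 * E * A
    den = math.hypot(num, dm2 * math.sin(2.0 * theta))
    return num / den
```

In the vacuum limit $A\rightarrow 0$ this reduces to $\cos2\vartheta$, while deep in the matter-dominated regime ($2E_{\nu}A\gg\Delta m^2$) it approaches $-1$, the two limits used repeatedly in the text.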
For $N_e(x)=N_e(0)~e^{-x/r_0}$, where $x=0$ is the centre of the Sun, $$P_c=\frac{e^{-\gamma\sin^2\vartheta}-e^{-\gamma}}{1-e^{-\gamma}}, \label{pc}$$ [@PC; @check] where $$\gamma=2\pi r_0\frac{\Delta m^2}{2E_{\nu}}=1.05\left(\frac{\Delta m^2} {10^{-6}~{\rm eV}^2}\right)\left(\frac{1~{\rm GeV}}{E_{\nu}}\right), \label{gamma}$$ for $r_0=R_{\odot}/10.54=6.60\times 10^4$ km [@bahcall_www]. In the case of the Sun, the exponential profile approximation has been examined [@check], and was shown to be very accurate, especially if one allows $r_0$ to vary as a function of $\Delta m^2/2E_{\nu}$. The exact expression for $\delta$ has also been obtained [@Petcov_eq], and readers are referred to [@P_phase] for details concerning physical implications of the matter phase. Its effects will not be discussed here any further. The Case of Antineutrinos ------------------------- Antineutrinos that are produced in the Sun’s core obey a differential equation similar to Eq. (\[eq\_2ns\]), except that the sign of the matter potential changes, [*i.e.*]{} $A(x)\leftrightarrow -A(x)$, and $U_{\alpha i}\leftrightarrow U^*_{\alpha i}$ (this is immaterial since, in the two-flavour mixing case, all $U_{\alpha i}$ are real). Instead of working out the probability of an electron-type antineutrino being detected as an electron-type antineutrino $P_{\bar{e}\bar{e}}$ from scratch, there is a very simple way of relating it to $P_{ee}$. One only has to note that the equation of motion for antineutrinos is obtained by applying the following transformations to Eq. (\[eq\_2ns\]): replace $\vartheta\rightarrow \pi/2-\vartheta$, subtract the matrix $A(1_{2\times 2})$, where $1_{2\times 2}$ is the $2\times 2$ identity matrix, and relabel $\nu_e(x)\leftrightarrow \nu_x(x)$.[^6] Therefore, $P_{\bar{e}\bar{e}}(\vartheta)= P_{xx}(\pi/2-\vartheta)=P_{ee}(\pi/2-\vartheta)$ (this was pointed out in [@Chizhov]). 
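The level-crossing probability of Eqs. (\[pc\]) and (\[gamma\]) is simple to code, and its adiabatic ($\gamma\gg1$) and extreme-nonadiabatic ($\gamma\rightarrow0$, $P_c\rightarrow\cos^2\vartheta$) limits provide ready checks (a sketch; function names and the conversion factor $2\times1.267$ for eV$^2\,$km/GeV are my own):

```python
import math

def gamma_adiab(dm2, E, r0_km=6.60e4):
    """Adiabaticity parameter of Eq. (gamma): 2 pi r0 (dm2 / 2E).
    dm2 in eV^2, E in GeV; 2*1.267 turns eV^2 km / GeV into a pure number."""
    return 2.0 * math.pi * r0_km * 2.0 * 1.267 * dm2 / E

def P_crossing(dm2, E, theta):
    """Level-crossing probability for an exponential profile, Eq. (pc)."""
    g = gamma_adiab(dm2, E)
    return (math.exp(-g * math.sin(theta) ** 2) - math.exp(-g)) / (1.0 - math.exp(-g))
```

The numerical prefactor 1.05 quoted in Eq. (\[gamma\]) comes out directly from the stated $r_0$.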
Remember that, in the case of vacuum oscillations, $\vartheta$ is physically equivalent to $\pi/2-\vartheta$, so $P_{\bar{e}\bar{e}}=P_{ee}$. In the more general case of nontrivial matter effects, this is clearly not the case, since the presence of matter (or antimatter) explicitly breaks $CP$-invariance. It is curious to note that, in the case of two-flavour oscillations, there is no $T$-noninvariance, [*i.e.,*]{} $P_{\alpha\beta}=P_{\beta\alpha}$, while there is potentially large $CP$ violation, [*i.e.,*]{} $P_{\alpha\beta}\neq P_{\bar{\alpha}\bar{\beta}}$, even if the Hamiltonian for the system is explicitly $T$-noninvariant and $CP$-noninvariant, as is the case of the propagation of neutrinos produced in the Sun (namely $A(t)$ is a generic function of time and $A(t)$ for neutrinos is $-A(t)$ for antineutrinos). Three Flavour Oscillations ========================== Currently, aside from the solar neutrino puzzle, there is even more convincing evidence for neutrino oscillations, namely the suppression of the muon-type neutrino flux in atmospheric neutrino experiments [@atmospheric]. This atmospheric neutrino puzzle is best solved by $\nu_{\mu}\leftrightarrow\nu_{\tau}$ oscillations with a large mixing angle [@atmos_analysis]. Furthermore, the values of $\Delta m^2$ required to solve the atmospheric neutrino puzzle are at least one order of magnitude higher than the values required to solve the solar neutrino puzzle. For this reason, in order to solve both neutrino puzzles in terms of neutrino oscillations, three neutrino families are required. In this section, the oscillations of three neutrino flavours will be considered. In order to simplify the discussion, I will concentrate on neutrinos with energies ranging from a few to tens of GeV, which is the energy range expected for neutrinos produced by the annihilation of dark matter particles which are possibly trapped inside the Sun. 
Furthermore, a number of experimentally inspired constraints on the neutrino oscillation parameter space will be imposed, as will become clear later. Generalities ------------ Similar to the two-flavour case, the “mapping” between the flavour eigenstates $\nu_e$, $\nu_{\mu}$, and $\nu_{\tau}$ and the mass eigenstates $\nu_i$, $i=1,2,3$ with masses $m_i$ can be performed with a general $3\times 3$ unitary matrix, which is parametrised by three mixing angles ($\theta$, $\omega$, and $\xi$) and a complex phase $\phi$. In shorthand notation, $\nu_{\alpha}=U_{\alpha i}\nu_{i}$, where $\alpha=e,\mu,\tau$ and $i=1,2,3$. The MNS mixing matrix [@MNS] will be written, similar to the standard CKM quark mixing matrix [@PDG], as [$$\left(\matrix{U_{e1} & U_{e2} & U_{e3} \cr U_{\mu1} & U_{\mu2} & U_{\mu3} \cr U_{\tau1} & U_{\tau2} & U_{\tau3}}\right) =\left(\matrix{c\omega~c\xi & s\omega~c\xi & s\xi e^{i\phi} \cr -s\omega~c\theta-c\omega~s\theta~s\xi e^{-i\phi} & c\omega~c\theta- s\omega~s\theta~s\xi e^{-i\phi} & s\theta~c\xi \cr s\omega~s\theta-c\omega~c\theta~s\xi e^{-i\phi} & -c\omega~s\theta-s\omega~c\theta~s\xi e^{-i\phi} & c\theta~c\xi}\right), \label{MNSmatrix}$$ ]{} where $c\zeta\equiv\cos\zeta$ and $s\zeta\equiv\sin\zeta$ for $\zeta=\omega,\theta,\xi$. If the neutrinos are Majorana particles, two extra phases should be added to the MNS matrix, but, since they play no role in the physics of neutrino oscillations, they can be safely ignored. All physically distinguishable situations can be obtained if one allows $0\leq\phi\leq\pi$, all angles to vary between $0$ and $\pi/2$, and no restriction is imposed on the sign of the mass-squared differences, $\Delta m^2_{ij}\equiv m^2_i-m^2_j$. Note that there are only two independent mass-squared differences, which are chosen here to be $\Delta m^2_{21}$ and $\Delta m^2_{31}$. 
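The parametrisation of Eq. (\[MNSmatrix\]) can be coded directly, and its unitarity checked for arbitrary angles and phase (a sketch; the function name is mine):

```python
import numpy as np

def mns(omega, theta, xi, phi):
    """MNS mixing matrix in the parametrisation of Eq. (MNSmatrix)."""
    co, so = np.cos(omega), np.sin(omega)
    ct, st = np.cos(theta), np.sin(theta)
    cx, sx = np.cos(xi), np.sin(xi)
    em = np.exp(-1j * phi)
    return np.array([
        [co * cx,                       so * cx,                       sx * np.exp(1j * phi)],
        [-so * ct - co * st * sx * em,  co * ct - so * st * sx * em,   st * cx],
        [so * st - co * ct * sx * em,  -co * st - so * ct * sx * em,   ct * cx]])
```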
All experimental evidence from solar, atmospheric, and reactor neutrino experiments [@atmospheric; @Cl; @Kamiokande; @GALLEX; @SAGE; @Super-K; @reactor] can be satisfied,[^7] somewhat conservatively, by assuming [@general_review]: $10^{-4}$ eV$^2\lesssim |\Delta m^2_{31}|\simeq|\Delta m^2_{32}|\lesssim 10^{-2}$ eV$^2$, $0.3\lesssim\sin^2\theta\lesssim 0.7$, $10^{-11}$ eV$^2\lesssim |\Delta m^2_{21}|\lesssim 10^{-4}$ eV$^2$, $\sin^2\xi\lesssim 0.1$, while $\omega$ is mostly unconstrained. There is presently no information on $\phi$. In determining these bounds, it was explicitly assumed that only three active neutrinos exist. A few comments about the constraints imposed above are in order. First, one may complain that $\omega$ is more constrained by the solar neutrino data than mentioned above. The situation is far from definitive, however. As pointed out recently in [@dark_side], if the uncertainty on the $^8$B neutrino flux is inflated or if some of the experimental data is not considered (especially the Homestake data [@Cl]) in the fit, a much larger range of $\Delta m^2_{21}$ and $\omega$ is allowed. Furthermore, if three-flavour mixing is considered [@solar_3], different regions in the parameter space $\Delta m^2_{21}$-$\sin^2\omega$ are allowed for different values of $\sin^2\xi$, even if $\sin^2\xi$ is constrained to be small. Second, the limits from the Chooz and Palo Verde reactor experiments [@reactor] do not constrain $\sin^2\xi$ for $|\Delta m^2_{31}|\lesssim 10^{-3}$ eV$^2$. Furthermore, their constraints apply to $\sin^2 2\xi$, so values of $\sin^2\xi$ close to one should also be allowed. However, the constraints from the atmospheric neutrino data require $\cos^2\xi$ to be close to one. This is easy to understand. 
Assuming that $L_{21}^{\rm osc}$ is much larger than the Earth’s diameter and that $\Delta m^2_{31}=\Delta m^2_{32}$, $$P_{\mu\mu}^{\rm atm}=1-4\cos^2\xi\sin^2\theta(1-\cos^2\xi\sin^2\theta) \sin^2\left(\frac{\Delta m^2_{31}x}{4E_{\nu}}\right),$$ according to Eq. (\[p3vac\]) below. Almost maximal mixing implies that $\cos^2\xi\sin^2\theta\simeq 1/2$. With the further constraint from $P_{ee}^{\rm atm}$, namely $\sin^2 2\xi\simeq 0$, one concludes that $\cos^2\xi\simeq 1$ and $\sin^2\theta\simeq 1/2$. In the case of oscillations in vacuum, it is straightforward to compute the oscillation probabilities $P_{\alpha\beta}$ of detecting a flavour $\beta$ given that a flavour $\alpha$ was produced: $$\label{p3vac} P_{\alpha\beta}=\sum_{i,j}U_{\alpha i}^*U_{\alpha j}U_{\beta i} U_{\beta j}^*e^{i\frac{\Delta m^2_{ij}x}{2E_{\nu}}}.$$ The three different oscillation lengths, $L_{\rm osc}^{ij}$, are numerically given by $$L_{\rm osc}^{ij}= \frac{4\pi E_{\nu}}{\Delta m^2_{ij}}=2.47\times10^{8}{\rm km}\left(\frac{E} {1~\rm GeV}\right) \left(\frac{10^{-8}~\rm eV^2}{\Delta m^2_{ij}}\right),$$ which are to be compared to the Earth-Sun distance (1 a.u.$=1.496\times 10^{8}$ km). In the energy range of interest, 1 GeV$\lesssim E_{\nu}\lesssim 100$ GeV, and given the experimental constraints on the parameter space described above, it is easy to see that $L_{\rm osc}^{31}$ and $L_{\rm osc}^{32}$ are much smaller than 1 a.u., and that their effects should “wash out” due to any realistic neutrino energy spectrum, detector energy resolution, or other “physical” effects. Such terms will therefore be neglected henceforth. In contrast, $L_{\rm osc}^{21}$ may be as large as (and maybe even much larger than!) the Earth-Sun distance. Note that a nonzero phase $\phi$ implies $T$-violation, [*i.e.,*]{} $P_{\alpha\beta}\neq P_{\beta\alpha}$, unless $L_{\rm osc}^{21}\gg 1$ a.u. This will be discussed in more detail later. 
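Eq. (\[p3vac\]) and the oscillation lengths are easy to evaluate numerically. The sketch below uses the same mixed units as before (eV$^2$, km, GeV; the conversion factor and function names are mine), and the amplitude form makes the unitarity sum $\sum_{\beta}P_{\alpha\beta}=1$ automatic:

```python
import numpy as np

def P_vac3(U, alpha, beta, dm2_21, dm2_31, x_km, E_GeV):
    """Three-flavour vacuum probability, Eq. (p3vac).
    dm2 in eV^2, x in km, E in GeV; U is the MNS matrix."""
    phases = np.exp(1j * 2.0 * 1.267 * np.array([0.0, dm2_21, dm2_31]) * x_km / E_GeV)
    return float(np.abs(np.sum(U[beta] * np.conj(U[alpha]) * phases)) ** 2)

def L_osc_km(dm2, E_GeV):
    """Oscillation length 4 pi E / dm2, in km."""
    return np.pi * E_GeV / (1.267 * dm2)
```

For $\Delta m^2=10^{-8}$ eV$^2$ and $E_{\nu}=1$ GeV this reproduces the quoted $2.47\times10^8$ km.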
In the presence of neutrino–medium interactions, the situation is, in general, more complicated (indeed, much more!). Similar to the two-neutrino case, it is important to discuss what is known about the oscillation probabilities. From the conservation of probability one has $$\begin{aligned} P_{ee}+P_{e\mu}+P_{e\tau}&=&1, \nonumber \\ P_{\mu e}+P_{\mu\mu}+P_{\mu\tau}&=&1, \\ P_{\tau e}+P_{\tau\mu}+P_{\tau\tau}&=&1, \nonumber \end{aligned}$$ and, similar to the two-neutrino case, unitarity of the Hamiltonian evolution implies $$\begin{aligned} P_{ee}+P_{\mu e}+P_{\tau e}&=&1, \nonumber \\ P_{e\mu}+P_{\mu\mu}+P_{\tau\mu}&=&1, \label{const3}\end{aligned}$$ A third equation of this kind, $P_{e\tau}+P_{\mu\tau}+P_{\tau\tau}=1$, is redundant. As before, Eqs. (\[const3\]) can be understood by arguing that, if equal numbers of all neutrino species are produced, the number of $\nu_{\beta}$’s to be detected should be the same, regardless of $\beta$, simply because the neutrino propagation is governed by a unitary operator. One may therefore express all $P_{\alpha\beta}$ in terms of only four quantities. Here, these are chosen to be $P_{ee}$, $P_{e\mu}$, $P_{\mu\mu}$, and $P_{\tau\tau}$. The others are given by $$\begin{aligned} P_{e\tau} & = & 1-P_{ee}-P_{e\mu}, \nonumber \\ P_{\mu e} & = & 1+P_{\tau\tau}-P_{ee}-P_{\mu\mu}-P_{e\mu}, \nonumber \\ P_{\mu\tau} & = & P_{ee}+P_{e\mu}-P_{\tau\tau}, \\ P_{\tau e} &= & P_{\mu\mu}+P_{e\mu}-P_{\tau\tau}, \nonumber \\ P_{\tau\mu} & = & 1-P_{\mu\mu}-P_{e\mu}. \nonumber \end{aligned}$$ Note that, in general, $P_{\alpha\beta}\neq P_{\beta\alpha}$. 
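These relations hold for [*any*]{} unitary evolution, which can be illustrated with a random unitary operator (a sketch; the seed and the QR construction are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(7)
# Random 3x3 unitary evolution operator via QR of a complex Gaussian matrix
S, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
P = np.abs(S.T) ** 2      # P[alpha, beta] = |<nu_beta| S |nu_alpha>|^2
e, mu, tau = 0, 1, 2
```

The rows of $P$ sum to one (conservation of probability), the columns sum to one (unitarity), and the five derived expressions for $P_{e\tau}$, $P_{\mu e}$, $P_{\mu\tau}$, $P_{\tau e}$, and $P_{\tau\mu}$ follow.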
Oscillation of Neutrinos Produced in the Sun’s Core --------------------------------------------------- The propagation of neutrinos in the Sun’s core can, similar to the two-neutrino case, be described by the differential equation $$\frac{\rm d}{{\rm d}x}\nu_{\alpha}(x)=-i\left( \sum_{i=2}^{3}\left(\frac{\Delta m^2_{i1}}{2E_{\nu}}\right)U_{\alpha i}^*U_{\beta i} +A(x) \delta_{\alpha e}\delta_{\beta e}\right) \nu_{\beta}(x), \label{eq_3nus}$$ where $\delta_{\eta\zeta}$ is the Kronecker delta symbol and a sum over the repeated flavour index $\beta$ is implied. Terms proportional to the identity $\delta_{\alpha\beta}$ are neglected because they play no role in the physics of neutrino oscillations. The matter induced potential $A(x)$ is given by Eq. (\[A(x)\]). As in the two-neutrino case, it is useful to first discuss the initial states $\nu_{\alpha}$ in the Sun’s core, and to express them in the basis of instantaneous Hamiltonian eigenstates, which will be referred to as $|\nu_H\rangle$, $|\nu_M\rangle$, and $|\nu_L\rangle$ ($H=$ high, $M=$ medium, and $L=$ low). Therefore $$|\nu_{\alpha}\rangle=H_{\alpha}|\nu_H\rangle+M_{\alpha}|\nu_M\rangle+L_{\alpha} |\nu_L\rangle,$$ where $\langle\nu_{\alpha}|\nu_{\alpha '}\rangle=\delta_{\alpha\alpha '}$. As before (see Eq. [\[Peex\]]{}), the probability of detecting this initial state as a $\beta$-type neutrino far away from the Sun ([*e.g.,*]{} at the Earth) is given by $$P_{\alpha\beta}=\left|\left(\matrix{U_{\beta1}^* & U_{\beta2}^* & U_{\beta3}^*}\right) \left(\matrix{1 & 0 & 0 \cr 0 & e^{-i\frac{\Delta m^2_{21}x}{2E_{\nu}}} & 0 \cr 0 & 0 & e^{-i\frac{\Delta m^2_{31}x}{2E_{\nu}}}}\right)\left(V_{3\times 3}\right) \left(\matrix{L_{\alpha} \cr M_{\alpha} \cr H_{\alpha}}\right)\right|^2, \label{palphabetax}$$ where $V_{3\times 3}$ is an arbitrary $3\times 3$ unitary matrix which takes care of propagating the initial state until the edge of the Sun and rotating the state into the mass basis. 
In order to proceed, it is useful to take advantage of the constraints on the neutrino parameter space and the energy range of interest. Note that $A\gtrsim\frac{|\Delta m^2_{31}|}{2E_{\nu}}\gg \frac{|\Delta m^2_{21}|}{2E_{\nu}}$ (remember that the energy range of interest is 1 GeV$\lesssim E_{\nu}\lesssim 100$ GeV and that $A\simeq 6\times 10^{-3}$ eV$^2$/GeV). It has been shown explicitly [@KP_3], assuming the neutrino mass-squared hierarchy to be $m_3^2>m_2^2>m_1^2$,[^8] that, if the mass-squared differences are very hierarchical ($|\Delta m^2_{31}|\gg |\Delta m^2_{21}|$), the three-level system “decouples” into two two-level systems, [*i.e.,*]{} one can first deal with matter effects in the “$H-M$” system and then with the matter effects in the “$M-L$” system. One way of understanding why this is the case is to realize that the “resonance point” corresponding to $\Delta m^2_{31}$ is very far away from the resonance point corresponding to $\Delta m^2_{21}$. With this in mind, it is fair to approximate (this is similar to what is done, for example, in [@P_3]) $$V_{3\times 3}=\left(\matrix{A^L & B^L & 0 \cr -B^{L*} & A^{L*} & 0 \cr 0 & 0 & 1} \right)\left(\matrix{1 & 0 & 0 \cr 0 & A^{H} & B^{H} \cr 0 & -B^{H*} & A^{H*}} \right),$$ where $|B^H|^2=1-|A^H|^2\equiv P_c^H$, $|B^L|^2=1-|A^L|^2\equiv P_c^L$. The superscripts $H$, $L$ correspond to the “high” and the “low” resonances, respectively. It is also possible to obtain an approximate expression for the initial states in the Sun’s core. Following the result outlined above, this state should be described by two matter angles, $\xi_M$ and $\omega_M$, corresponding to each of the two-level systems. Both should be given by Eq. 
(\[cos2tm\]), where, in the case of $\cos 2\xi_M$, $\vartheta$ is to be replaced by $\xi$ and $\Delta m^2$ by $\Delta m^2_{31}$, while in the case of $\cos 2\omega_M$, $\vartheta$ is to be replaced by $\omega$, $\Delta m^2$ by $\Delta m^2_{21}$, and $A$ is to be replaced by $A\cos\xi$ [@P_3; @solar_3]. Furthermore, because $A\cos\xi\gg\frac{|\Delta m^2_{21}|}{2E_{\nu}}$, $\cos 2\omega_M$ can be safely replaced by $-1$ (remember that $\cos^2\xi\gtrsim0.9$). Within these approximations, in the Sun’s core, $$\begin{aligned} \label{inistate3} |\nu_{e}\rangle&=&\sin\xi_M|\nu_H\rangle+\cos\xi_M|\nu_M\rangle, \nonumber \\ |\nu_{\mu}\rangle&=&\sin\theta\cos\xi_M|\nu_H\rangle- \sin\theta\sin\xi_M|\nu_M\rangle -\cos\theta|\nu_L\rangle, \\ |\nu_{\tau}\rangle&=&\cos\theta\cos\xi_M|\nu_H\rangle- \cos\theta\sin\xi_M|\nu_M\rangle+\sin\theta|\nu_L\rangle. \nonumber\end{aligned}$$ The accuracy of this approximation has been tested numerically in the range of parameters of interest, and the difference between the “exact” result and the approximate result presented in Eq. (\[inistate3\]) is negligible. Keeping all this in mind, it is straightforward to compute all oscillation probabilities, starting from Eq. (\[palphabetax\]). From here on, $\phi=0$ (no $T$-violating phase in the mixing matrix, such that all $U_{\alpha i}$ are real) will be assumed, in order to simplify expressions and render the results cleaner. 
At the end of the day one obtains $$\begin{aligned} \label{pall3} P_{\alpha\beta}&=&a_{\alpha}^2 (U_{\beta 1})^2 + b_{\alpha}^2 (U_{\beta 2})^2 + c_{\alpha}^2 (U_{\beta 3})^2 + 2a_{\alpha}b_{\alpha} (U_{\beta1}U_{\beta2}) \cos\left(\frac{\Delta m^2_{21}x}{2E_{\nu}}+\delta^L\right) \nonumber \\ &&{\rm or} \\ P_{\alpha\beta}&=&\left(a_{\alpha}U_{\beta 1} + b_{\alpha}U_{\beta 2}\right)^2 + c_{\alpha}^2 (U_{\beta 3})^2 - 4a_{\alpha}b_{\alpha} (U_{\beta1}U_{\beta2}) \sin^2\left(\frac{\Delta m^2_{21}x}{4E_{\nu}}+\frac{\delta^L}{2}\right), \nonumber\end{aligned}$$ where $\delta^L$ is the matter phase, induced in the low resonance, and $$\begin{aligned} a_e&=&\sqrt{P_2^HP_c^L}, \nonumber \\ b_e&=&\sqrt{P_2^H(1-P_c^L)}, \nonumber \\ c_e&=&\sqrt{P_3^H}, \nonumber \\ a_{\mu}&=&-\sqrt{(1-P_c^L)}\cos\theta-\sqrt{P_3^HP_c^L}\sin\theta, \nonumber \\ b_{\mu}&=&\sqrt{P_c^L}\cos\theta-\sqrt{P_3^H(1-P_c^L)}\sin\theta, \\ c_{\mu}&=&\sqrt{P_2^H\sin^2\theta}, \nonumber \\ a_{\tau}&=&\sqrt{(1-P_c^L)}\sin\theta-\sqrt{P_3^HP_c^L}\cos\theta, \nonumber \\ b_{\tau}&=&-\sqrt{P_c^L}\sin\theta-\sqrt{P_3^H(1-P_c^L)}\cos\theta, \nonumber \\ c_{\tau}&=&\sqrt{P_2^H\cos^2\theta}, \nonumber \end{aligned}$$ and $P_2^H=1-P_3^H=(|A^H|^2\cos^2\xi_M+|B^H|^2\sin^2\xi_M)$, which can also be written as $P_2^H=\frac{1}{2}+\frac{1}{2}\left(1-2P_c^H\right)\cos 2\xi_M$. This is to be compared with the expression for $P_1$ obtained in the two-flavour case. Note that $a^2_{\alpha}+b^2_{\alpha}+c^2_{\alpha}=1$. The effect of $\delta^L$ will not be discussed here, and from here on $\delta^L$ will be set to zero. For details about the significance of $\delta^L$ for solar neutrinos in the two-flavour case, readers are referred to [@Petcov_eq; @P_phase]. Many comments are in order. First, in the nonadiabatic limit, which is reached for very large energies, $P_c^H\rightarrow \cos^2\xi$, $P_c^L\rightarrow \cos^2\omega$ and $\cos 2\xi_M\rightarrow -1$. 
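As a sanity check, the coefficients above can be coded and the normalisation $a_{\alpha}^2+b_{\alpha}^2+c_{\alpha}^2=1$ verified directly (a sketch; the function and argument names are mine):

```python
import math

def entrance_coeffs(theta, Pc_L, P3_H):
    """Coefficients (a_alpha, b_alpha, c_alpha) entering Eq. (pall3).
    Pc_L is the low-resonance crossing probability; P3_H = 1 - P2_H."""
    sL, cL = math.sqrt(Pc_L), math.sqrt(1.0 - Pc_L)
    s3, c3 = math.sqrt(P3_H), math.sqrt(1.0 - P3_H)   # c3 = sqrt(P2^H)
    st, ct = math.sin(theta), math.cos(theta)
    a = {'e': c3 * sL,
         'mu': -cL * ct - s3 * sL * st,
         'tau': cL * st - s3 * sL * ct}
    b = {'e': c3 * cL,
         'mu': sL * ct - s3 * cL * st,
         'tau': -sL * st - s3 * cL * ct}
    c = {'e': s3,
         'mu': c3 * st,
         'tau': c3 * ct}
    return a, b, c
```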
It is trivial to check that in this limit $a_{\alpha}\rightarrow U_{\alpha 1}$, $b_{\alpha}\rightarrow U_{\alpha 2}$, $c_{\alpha}\rightarrow U_{\alpha 3}$, and the vacuum oscillation result is reproduced, up to the matter induced phase $\delta^L$. Second, $P_{ee}$ can be written as $$P_{ee}=P_2^H\cos^2\xi(P_{ee}^{2\nu})+P_3^H\sin^2\xi, \label{pee3}$$ where $P_{ee}^{2\nu}$ is the two-neutrino result obtained in the previous section (see Eq. (\[pee\])) in the limit $\cos 2\vartheta_M\rightarrow -1$. It is easy to check that Eq. (\[pee\]) would be exactly reproduced (with $\vartheta_M$ replaced by $\omega_M$, of course) if the $\cos2\omega_M= -1$ approximation were dropped. For solar neutrino energies (100 keV$\lesssim E_{\nu}\lesssim 10$ MeV), $\xi_M\rightarrow\xi$, $P_c^H\rightarrow 0$ and therefore $P_2^H~(P_3^H)\rightarrow\cos^2\xi~(\sin^2\xi)$, reproducing correctly the result of the survival probability of electron-type solar neutrinos in a three-flavour oscillation scenario (see [@general_review] and references therein). In this scenario there is no “$H-L$” resonance inside the Sun, because $\frac{|\Delta m^2_{31}|}{2E_{\nu}}\gg A$ for solar neutrino energies. On the other hand, in the case $P_c^H\rightarrow 0$ and $\cos 2\xi_M\rightarrow -1$, $P_3^H\rightarrow 1$ and electron-type neutrinos exit the Sun as a pure $\nu_3$ mass eigenstate, and do not undergo vacuum oscillations even if $\Delta m^2_{21}$ is very small. In contrast, $\nu_{\mu}$ and $\nu_{\tau}$ always undergo vacuum oscillations if $\Delta m^2_{21}$ is small enough. The reason for this is simple. The generic feature of matter effects is to “push” $\nu_e$ into the heavy mass eigenstate, while $\nu_{\mu}$ and $\nu_{\tau}$ are “pushed” into the light mass eigenstates. This situation is changed by nonadiabatic effects, as argued above. Finally, it is important to note that all equations obtained are also valid in the case of inverted hierarchies ($m_3^2<m_{1,2}^2$ or $m_{2}^2<m_1^2$). 
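The factorised form of Eq. (\[pee3\]) can be checked against the full expression of Eq. (\[pall3\]) for $\alpha=\beta=e$. The sketch below sets $\delta^L=0$ and treats the dimensionless phase $\Delta m^2_{21}x/2E_{\nu}$ as an input (function names are mine):

```python
import math

def pee_full(omega, xi, Pc_L, P3_H, phase):
    """P_ee from Eq. (pall3) with alpha = beta = e and delta^L = 0."""
    P2 = 1.0 - P3_H
    a = math.sqrt(P2 * Pc_L)
    b = math.sqrt(P2 * (1.0 - Pc_L))
    c = math.sqrt(P3_H)
    Ue1 = math.cos(omega) * math.cos(xi)
    Ue2 = math.sin(omega) * math.cos(xi)
    Ue3 = math.sin(xi)
    return (a**2 * Ue1**2 + b**2 * Ue2**2 + c**2 * Ue3**2
            + 2.0 * a * b * Ue1 * Ue2 * math.cos(phase))

def pee_factorised(omega, xi, Pc_L, P3_H, phase):
    """Eq. (pee3): P_ee = P2^H cos^2(xi) P_ee^{2nu} + P3^H sin^2(xi),
    with the two-flavour P_ee^{2nu} taken in the cos(2 omega_M) -> -1 limit."""
    p2nu = (Pc_L * math.cos(omega)**2 + (1.0 - Pc_L) * math.sin(omega)**2
            + math.sqrt(Pc_L * (1.0 - Pc_L)) * math.sin(2.0 * omega) * math.cos(phase))
    return (1.0 - P3_H) * math.cos(xi)**2 * p2nu + P3_H * math.sin(xi)**2
```

The two expressions agree identically, term by term, as claimed in the text.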
This has been discussed in detail in the two-neutrino oscillation case [@earth_matter], and is also applicable here. It is worthwhile to point out that, in the approximation $\Delta m^2_{31} \simeq\Delta m^2_{32}$, the transformation $\Delta m^2_{21} \rightarrow-\Delta m^2_{21}$ can be reproduced by transforming $\omega\rightarrow\pi/2-\omega$, $\theta\rightarrow \pi-\theta$ and redefining the sign of $\nu_{\tau}$. Therefore, one is in principle allowed to fix the sign of $\Delta m^2_{21}$ as long as $\theta$ is allowed to vary between $0$ and $\pi$. In the case of inverted hierarchies (especially when $\Delta m^2_{31}<0$) one expects to see no “level crossing” (indeed, matter effects tend to increase the distance between the “energy” levels in this case), but matter effects are still present, because the initial state in the Sun’s core can be nontrivial ([*i.e.*]{}, $\vartheta_M\neq\vartheta$). Note that $\nu_e$ is still “pushed” towards $\nu_H$, even in the case of inverted hierarchies, and the expressions for the matter mixing angles, Eq. (\[cos2tm\]), and the initial states inside the Sun, Eq. (\[inistate3\]), are still valid. The consequence of no “level crossing” is that the adiabatic limit does not connect, for example, $\nu_H\rightarrow\nu_3$ but rather $\nu_H\rightarrow\nu_2$ (or $\nu_1$, depending on the sign of $\Delta m^2_{21}$). This information is in fact contained in the equations above. The crucial feature is that, for example, when $\Delta m^2_{31}<0$, $P_c^H\rightarrow 1$ in the “adiabatic limit,” and the matrix $V_{3\times 3}$ correctly “connects” $\nu_H\rightarrow\nu_2$ (or $\nu_1$)! Another curious feature is that, in the limit $|\Delta m^2_{31}|/2E_{\nu}\gg A$, $\cos 2\xi_M\rightarrow-\cos2\xi$, $P_c^H\rightarrow 1$ and Eq. (\[pee3\]) correctly reproduces the survival probability of electron-type solar neutrinos in the three-flavour oscillation case. Note that in this case the sign of $\Delta m^2_{31}$ does not play any role, as expected. 
On the other hand, it is still true that $P_c^{(H,L)}\rightarrow\cos^2(\xi,\omega)$ in the extreme nonadiabatic limit, and vacuum oscillation results are reproduced, as expected. Again, in this limit, one is not sensitive to the sign of $\Delta m^2_{31}$, as expected. The Case of Antineutrinos ------------------------- As in the two-neutrino case, the difference between neutrinos and antineutrinos is that the equivalent of Eq. (\[eq\_3nus\]) for antineutrinos can be obtained by changing $A(x)\rightarrow -A(x)$ and $U_{\alpha i}\leftrightarrow U_{\alpha i}^*$. Unlike the two-flavour case, however, there is no set of variable transformations that allows one to exactly relate the differential equation for the neutrino and antineutrino systems. One should, however, note that if the signs of both $\Delta m^2$ are changed and $U_{\alpha i}\leftrightarrow U_{\alpha i}^*$, the neutrino equation turns into the antineutrino equation, up to an overall sign. This means, for example, that the instantaneous eigenvalues of the antineutrino Hamiltonian can be read from the eigenvalues of the neutrino Hamiltonian with $\Delta m^2_{ij}\leftrightarrow-\Delta m^2_{ij}$, $U_{\alpha i}\leftrightarrow U_{\alpha i}^*$ plus an overall sign. When it comes to computing $P_{\bar{\alpha}\bar{\beta}}$ this global sign difference is not relevant, and therefore $P_{\bar{\alpha}\bar{\beta}}(\Delta m^2_{ij},U_{\alpha i})= P_{\alpha\beta}(-\Delta m^2_{ij},U_{\alpha i}^*)$. Results and Discussions ======================= This section contains the compilation and discussion of a number of results concerning the oscillation of GeV neutrinos of all species produced in the Sun’s core. The goal here is to explore the multidimensional parameter space spanned by $\Delta m^2_{21}$, $\Delta m^2_{31}$, $\sin^2\omega$, $\sin^2\theta$, and $\sin^2\xi$ (and $E_{\nu}$). It will be assumed throughout that the electron number density profile of the Sun is exponential, so that Eq. (\[pc\]) can be used. 
As mentioned before, the numerical accuracy of this approximation is quite good, and certainly good enough for the purposes of this paper. Therefore, both $P_c^H$ and $P_c^L$ which appear in Eq. (\[pall3\]) will be given by Eqs. (\[pc\], \[gamma\]), with $\vartheta\rightarrow\xi$, $\Delta m^2\rightarrow\Delta m^2_{31}$ in the former, and $\vartheta\rightarrow\omega$, $\Delta m^2\rightarrow\Delta m^2_{21}$ in the latter. When computing $P_{\alpha\beta}$, an averaging over “seasons” is performed, which “washes out” the effect of very small oscillation wavelengths. Furthermore, integration over neutrino energy distributions is performed. Finally, all $P_{\alpha\beta}$ to be computed should be understood as the value of $P_{\alpha\beta}$ at the Earth’s surface, [*i.e.,*]{} Earth matter effects are not included. This is done in order to make the Sun matter effects in the evaluation of $P_{\alpha\beta}$ clearer. It should be stressed that Earth matter effects may play a significant role in particular regions of the parameter space, but the discussion of such effects will be left for another opportunity. Because the parameter space to be explored is multidimensional, it is necessary to make two-dimensional projections of it, such that “illustrative” points are required. The following points in the parameter space are chosen, all inspired by the current experimental situation:

- ATM: $\Delta m^2_{31}=3\times 10^{-3}$ eV$^2$, $\sin^2\theta=0.5$, and $\sin^2\xi=0.01$;
- LMA: $\Delta m^2_{21}=2\times 10^{-5}$ eV$^2$, $\sin^2\omega=0.2$;
- SMA: $\Delta m^2_{21}=6\times 10^{-6}$ eV$^2$, $\sin^2\omega=0.001$;
- LOW: $\Delta m^2_{21}=1\times 10^{-7}$ eV$^2$, $\sin^2\omega=0.4$;
- VAC: $\Delta m^2_{21}=1\times 10^{-10}$ eV$^2$, $\sin^2\omega=0.55$.

ATM corresponds to the best fit point of the solution to the atmospheric neutrino puzzle [@atmos_analysis], and a value of $\sin^2\xi=0.01$ which is consistent with all the experimental bounds. 
Note that some “subset” of ATM will always be assumed (for example, $\sin^2\theta$ is fixed while exploring the ($\Delta m^2_{31}\times\sin^2\xi$)-plane). For each analysis, it will be clear which quantities are the “variables” and which are held fixed at their “preferred point” values. All other points refer to sample points in the regions which best solve the solar neutrino puzzle [@bksreview; @rate_analysis; @dark_side], and the notation should be obvious. Initially, a flat neutrino energy distribution with $E_{\nu}^{min}=1$ GeV and $E_{\nu}^{max}=5$ GeV is considered (for concreteness), and the case of higher average energies is briefly discussed later. The Case of Vacuum Oscillations ------------------------------- If neutrinos were produced and propagated exclusively in vacuum, the oscillation probabilities would be given by Eq. (\[p3vac\]). This would be the case for neutrinos produced in the Sun’s core if either the electron number density were much smaller than its real value or if very low energy neutrinos were being considered. Nonetheless, it is still useful to digress briefly on the “would-be” vacuum oscillation probabilities in order to better understand the matter effects. In the case of pure vacuum oscillations, it is trivial to check that $P_{\alpha\beta}=P_{\beta\alpha}$ (remember that the MNS matrix phase $\phi$ has been set to zero), and therefore all $P_{\alpha\beta}$ can be parametrised by three quantities, namely $P_{\alpha\alpha}$, $\alpha=e,\mu,\tau$. It is easy to show that $$P_{\alpha\beta}=P_{\beta\alpha}\Leftrightarrow P_{e\mu}=\frac{1}{2}(1+P_{\tau\tau}-P_{\mu\mu}-P_{ee}). \label{pab=pba}$$ From Eq. (\[p3vac\]) $$\begin{aligned} \label{p3vac_aa} P_{\alpha\alpha}&=&U_{\alpha1}^4+U_{\alpha2}^4+U_{\alpha3}^4+ 2U_{\alpha1}^2U_{\alpha2}^2 \cos\left(\frac{\Delta m^2_{21}x}{2E_{\nu}}\right)\nonumber \\ &&{\rm or} \\ P_{\alpha\alpha}&=&(1-U_{\alpha3}^2)^2+U_{\alpha3}^4-4U_{\alpha1}^2U_{\alpha2}^2 \sin^2\left(\frac{\Delta m^2_{21}x}{4E_{\nu}}\right). 
\nonumber \end{aligned}$$ Note that there is no dependence on $\Delta m^2_{31}$. Particularly simple limits can be reached when $L_{\rm osc}^{21}$ is either very small or very large compared with the Earth-Sun distance. In both limits $P_{\alpha\alpha}$ is independent of $\Delta m^2_{21}$ and, in the latter case, $P_{\alpha\alpha}$ depends only on $U_{\alpha3}^2$. Fig. \[dm21\_vacuum\] depicts constant $P_{\alpha\beta}$ contours in the ($\Delta m^2_{21}\times\sin^2\omega$)-plane, at ATM. Remember that, here, $P_{e\mu}$ is not an independent quantity but is a linear combination of all $P_{\alpha\alpha}$. Note that $P_{ee}$ is symmetric under $\omega\rightarrow\pi/2-\omega$, and that $P_{\mu\mu}\leftrightarrow P_{\tau\tau}$ when $\omega\rightarrow\pi/2-\omega$. The latter property is a consequence of $\theta=\pi/4$. Also, in the case of $P_{ee}$, the $L_{\rm osc}^{21}\rightarrow\infty$ limit coincides with the $\omega\rightarrow 0, \pi/2$ limit at any $L_{\rm osc}^{21}$ (this is because either $U_{e1}$ or $U_{e2}$ goes to zero). This is not true of $P_{\mu\mu}$ or $P_{\tau\tau}$ unless $\sin^2\xi=0$. Another important consequence of $L_{\rm osc}^{21}\gg 1$ a.u. is that $T$-violating effects are absent, even if $\phi$ is nonzero. This can be seen by looking at the second expression in Eq. (\[p3vac\_aa\]), which is a function only of $|U_{\alpha 3}|^2$ in the limit $L_{\rm osc}^{21}\rightarrow\infty$. Finally, one should note that oscillatory effects are maximal for $\Delta m^2_{21}\simeq 2\times 10^{-8}$ eV$^2$. In this region $ \cos\left(\frac{\Delta m^2_{21}x}{2E_{\nu}}\right)\simeq -1$, and the largest suppression to all $P_{\alpha\alpha}$ is obtained when $U_{\alpha1}^2U_{\alpha2}^2$ is maximum. For example, $P_{ee}$ is smallest when $\omega=\pi/4$, since $U_{e1}^2U_{e2}^2\propto\sin^2 2\omega$. There are no “localised” maxima for $P_{\alpha\alpha}$ because $U_{\alpha1}^2U_{\alpha2}^2$ is positive definite. 
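The equivalence of the two expressions in Eq. (\[p3vac\_aa\]) rests only on the row normalisation $U_{\alpha1}^2+U_{\alpha2}^2+U_{\alpha3}^2=1$ and the identity $\cos\phi=1-2\sin^2(\phi/2)$. A minimal numerical check (the helper names are illustrative, not from the paper):

```python
import math
import random

def p_aa_form1(u1, u2, u3, phase):
    # First form: sum of fourth powers plus a cosine interference term
    return u1**4 + u2**4 + u3**4 + 2 * u1**2 * u2**2 * math.cos(phase)

def p_aa_form2(u1, u2, u3, phase):
    # Second form: rewritten using row normalisation and cos(p) = 1 - 2 sin^2(p/2)
    return (1 - u3**2)**2 + u3**4 - 4 * u1**2 * u2**2 * math.sin(phase / 2)**2

random.seed(1)
for _ in range(100):
    # Draw a random normalised row of the mixing matrix
    raw = [random.random() for _ in range(3)]
    norm = math.sqrt(sum(x * x for x in raw))
    u1, u2, u3 = (x / norm for x in raw)
    phase = random.uniform(0.0, 20.0)  # Delta m^2_21 x / (2 E_nu)
    assert abs(p_aa_form1(u1, u2, u3, phase) - p_aa_form2(u1, u2, u3, phase)) < 1e-12
```

The check also makes the $T$-conservation argument transparent: as the interference term dies away ($L_{\rm osc}^{21}\rightarrow\infty$), only the $|U_{\alpha3}|^2$-dependent pieces of the second form survive.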
“Normal” Neutrino Hierarchy --------------------------- When matter effects are “turned on,” the situation can be dramatically different. This is especially true in the case of normal neutrino mass hierarchies ($m_1^2<m_2^2<m_3^2$), which will be discussed first. The first effect one should observe is that, even though $L_{\rm osc}^{31}\ll 1$ a.u., $P_{\alpha\beta}$ depend rather nontrivially on $\Delta m^2_{31}$. This dependence comes from the terms $P_3^H$ and $P_2^H=1-P_3^H$ in Eq. (\[pall3\]). Remember that $P_3^H$ is interpreted as the probability that a $\nu_e$ produced in the Sun’s core exits the Sun as a $\nu_3$ mass eigenstate. When matter effects are negligible (such as in the limit of small neutrino energies) $P_3^H\rightarrow\sin^2\xi$, its “vacuum limit.” Fig. \[p3h\] depicts constant $P_3^H$ contours in the ($\Delta m^2_{31}/E_{\nu}\times\sin^2\xi$)-plane. Note that, for $\Delta m^2_{31}/E_{\nu}\sim 10^{-2}$ eV$^2$/GeV, $P_3^H\rightarrow 1$, even for small values of $\sin^2\xi$. In this region, $\nu_e$’s produced in the Sun’s core exit the Sun as pure $\nu_3$’s. Therefore, $P_{e\alpha}\simeq U_{\alpha3}^2$. Because of unitarity in the propagation, $\nu_{\mu}$’s and $\nu_{\tau}$’s exit the Sun as linear combinations of the light mass eigenstates, and not only undergo vacuum oscillations but are also susceptible to further matter effects (dictated by the “$M-L$” system, as described in Sec. 3). For future reference, at ATM, $P_3^H\simeq0.87$ when averaged over the energy range mentioned in the beginning of this section. As $\Delta m^2_{31}/E_{\nu}$ decreases (as is the case for higher energy neutrinos) the nonadiabaticity of the “$H-M$” system starts to become relevant, and $P_3^H\rightarrow\sin^2\xi$, as argued in Sec. 3.2. A hint of this behaviour can already be seen in Fig. \[p3h\], for small values of $\Delta m^2_{31}/E_{\nu}$. The information due to the “$M-L$” matter effect is encoded in $P_c^L$, present in Eq. (\[pall3\]). Fig. 
\[1-pcl\] depicts contours of constant $1-P_c^L$ in the ($\Delta m^2_{21}/E_{\nu}\times\sin^2\omega$)-plane. One should note that $1-P_c^L$ reaches its extreme nonadiabatic limit, $\sin^2\omega$, when $\Delta m^2_{21}/E_{\nu}\lesssim 10^{-7}$ eV$^2$/GeV. For $\Delta m^2_{21}/E_{\nu}\gtrsim 10^{-7}$ eV$^2$/GeV, matter effects increase the value of $1-P_c^L$. One can use the intuition from the two-flavour solution to the solar neutrino puzzle to better appreciate the results presented here. In the case of the solutions to the solar neutrino puzzle, the energies of interest range from 100 keV to 10 MeV, and large matter effects happen around $\Delta m^2\sim 10^{-5}$ eV$^2$. Furthermore, at $\Delta m^2\sim 10^{-10}$ eV$^2$ one encounters the “just-so” solution, which is characterised by very long wave-length vacuum oscillations. Rescaling to $O$(GeV) energies, the equivalent of the “just-so” solution happens for $\Delta m^2_{21}\sim (10^{-8}-10^{-7})$ eV$^2$, while large matter effects would be present at $\Delta m^2\sim (10^{-3}-10^{-2})$ eV$^2$. Indeed, one observes large matter effects for $\Delta m^2_{31}\sim (10^{-3}-10^{-2})$ eV$^2$. $\Delta m^2_{21}\sim (10^{-5}-10^{-6})$ eV$^2$ corresponds to the region between the LOW and VAC solutions, where matter effects distort $P_{\alpha\beta}$ from its pure vacuum value, but no dramatic suppression or enhancement takes place. Incidentally, this behaviour has physical consequences in the solution to the solar neutrino problem, as was first pointed out in [@alex]. Figs. \[dm31\_ssxi\_lma\] and \[dm31\_ssxi\_low\] depict contours of constant $P_{\alpha\alpha}$ and $P_{e\mu}$ in the ($\Delta m^2_{31}\times\sin^2\xi$)-plane. As expected, in the region where $P_3^H\sim 1$, $P_{ee}$ and $P_{e\mu}$ do not depend on $\Delta m^2_{21}$ or $\sin^2\omega$, namely $P_{ee}\sim\sin^2\xi$ and $P_{e\mu}\sim 0.5\cos^2\xi$. Remember that the results depicted in Figs. 
\[dm31\_ssxi\_lma\] and \[dm31\_ssxi\_low\] (and all other plots from here on) are for an energy band from 1 to 5 GeV. On the other hand, $P_{\mu\mu}$ and $P_{\tau\tau}$ do depend on the point (LMA, SMA, etc), even for $P_3^H\sim 1$, as foreseen. This dependence will be discussed in what follows. In the limit $P_3^H=1$, $\sin^2\theta=0.5$ $$\begin{aligned} c_{\mu}^2=&c_{\tau}^2=&0, \\ a_{\mu}^2=&b_{\tau}^2=&0.5(1+2\sqrt{P_c^L(1-P_c^L)}), \\ b_{\mu}^2=&a_{\tau}^2=&0.5(1-2\sqrt{P_c^L(1-P_c^L)}), \\ \label{amubmu} a_{\mu}b_{\mu}=&-a_{\tau}b_{\tau}=&0.5(1-2P_c^L),\end{aligned}$$ and $$\begin{aligned} \label{pmmtt} P_{(\mu\mu,\tau\tau)}&=&\frac{1}{2}(1-U_{(\mu,\tau)3}^2)\pm\sqrt{P_c^L(1-P_c^L)} (U_{(\mu,\tau)1}^2-U_{(\mu,\tau)2}^2) \\ &\pm& (1-2P_c^L)U_{(\mu,\tau)1}U_{(\mu,\tau)2} \cos\left(\frac{\Delta m^2_{21}x}{2E_{\nu}}\right). \nonumber\end{aligned}$$ At both LMA and SMA, the oscillatory term averages out to zero, while at VAC $\cos\left(\frac{\Delta m^2_{21}x}{2E_{\nu}}\right)=1$. It is only at LOW that the oscillatory term is nontrivial, as was mentioned in the analogy between the situation at hand and the solutions to the solar neutrino puzzle. Furthermore, at SMA, $1-P_c^L$ is tiny (see Fig. \[1-pcl\]), so it is fair to approximate $P_{\mu\mu}\simeq P_{\tau\tau}\simeq 0.5(1-0.5\times0.99) \simeq 0.25$, in agreement with Fig. \[dm31\_ssxi\_lma\](bottom). At LMA, it is fair to approximate $\sin^2\xi=0$. In this limit, $U_{\mu1}^2-U_{\mu2}^2\simeq -0.5\cos 2\omega+\sin 2\omega\sin\xi$, while $U_{\tau1}^2-U_{\tau2}^2\simeq -0.5\cos 2\omega-\sin 2\omega\sin\xi$. Therefore, because $\cos 2\omega=0.6>0$, $P_{\mu\mu}$ is significantly less than $P_{\tau\tau}$, since $\sqrt{P_c^L(1-P_c^L)}$ is nonnegligible. Roughly, $P_{\mu\mu}\simeq 0.15$ and $P_{\tau\tau}\simeq 0.4$, using the approximations above. Again, there is agreement with Fig. \[dm31\_ssxi\_lma\](top). 
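The small-$\sin\xi$ expansions of $U_{\mu1}^2-U_{\mu2}^2$ and $U_{\tau1}^2-U_{\tau2}^2$ quoted above can be checked numerically. The sketch below assumes the standard parametrisation of the MNS matrix with vanishing phase, identifying $\omega$, $\xi$, $\theta$ with the (12), (13), (23) rotation angles; this identification is an assumption here, and the signs would need adjusting if Eq. (\[MNSmatrix\]) differs:

```python
import math

def mns_row(flavour, omega, xi, theta):
    """Rows of the MNS matrix for (omega, xi, theta) = (12, 13, 23)
    rotation angles, CP phase set to zero.  This standard
    parametrisation is an assumption, not taken from the paper."""
    so, co = math.sin(omega), math.cos(omega)
    sx, cx = math.sin(xi), math.cos(xi)
    st, ct = math.sin(theta), math.cos(theta)
    rows = {
        "e":   (co * cx, so * cx, sx),
        "mu":  (-so * ct - co * st * sx, co * ct - so * st * sx, st * cx),
        "tau": (so * st - co * ct * sx, -co * st - so * ct * sx, ct * cx),
    }
    return rows[flavour]

omega = 0.5 * math.acos(0.6)   # cos(2 omega) = 0.6, as in the text
xi = math.asin(0.1)            # a small sin(xi)
theta = math.pi / 4            # maximal "atmospheric" mixing

u1, u2, _ = mns_row("mu", omega, xi, theta)
approx = -0.5 * math.cos(2 * omega) + math.sin(2 * omega) * math.sin(xi)
assert abs((u1**2 - u2**2) - approx) < 0.01   # agreement up to O(sin^2 xi)

u1, u2, _ = mns_row("tau", omega, xi, theta)
approx = -0.5 * math.cos(2 * omega) - math.sin(2 * omega) * math.sin(xi)
assert abs((u1**2 - u2**2) - approx) < 0.01
```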
In order to understand the behaviour at LOW and VAC, one should take advantage of the fact that $1-P_c^L\rightarrow\sin^2\omega$. In this case, it proves more advantageous to use the second form of Eq. (\[pall3\]) in order to express all $P_{\alpha\beta}$ $$\begin{aligned} \label{1-pcl=sso} P_{ee}&=&P_2^H\cos^2\xi+P_3^H\sin^2\xi-{\rm (Osc)}_{ee}, \nonumber \\ P_{e\mu}&=&\left(P_2^H\sin^2\xi+P_3^H\cos^2\xi\right)\sin^2\theta- {\rm (Osc)}_{e\mu}, \\ P_{\mu\mu}&=&\left(\cos^2\theta+\sqrt{P_3^H}\sin\xi\sin^2\theta\right)^2 +P_2^H\cos^2\xi\sin^4\theta-{\rm (Osc)}_{\mu\mu}, \nonumber \\ P_{\tau\tau}&=&\left(\sin^2\theta+\sqrt{P_3^H}\sin\xi\cos^2\theta\right)^2 +P_2^H\cos^2\xi\cos^4\theta-{\rm (Osc)}_{\tau\tau}, \nonumber\end{aligned}$$ where $${\rm (Osc)}_{\alpha\beta}=4a_{\alpha}b_{\alpha}U_{\beta1}U_{\beta2} \sin^2\left(\frac{\Delta m^2_{21}x}{4E_{\nu}}\right)$$ are the oscillatory terms. When $L_{\rm osc}^{21}\gg 1$ a.u., the oscillatory terms are zero, and $P_{\alpha\beta}$ are particularly simple. Note that in this limit many simplifications happen: $P_{\alpha\beta}$ is independent of $\omega$ and $\Delta m^2_{21}$, and $P_{\mu\mu}=P_{\tau\tau}$ if $\sin^2\theta=\cos^2\theta$, as can be observed in Fig. \[dm31\_ssxi\_low\](bottom). A very important fact is that, when the oscillatory terms are neglected, $2P_{e\mu}=1+P_{\tau\tau}-P_{\mu\mu}-P_{ee}$, as one may easily verify directly. As argued before, when this condition is satisfied, $P_{\alpha\beta}=P_{\beta\alpha}$. This is not the case in the presence of nonnegligible oscillation effects or when $P_c^L\neq\cos^2\omega$. Both statements are trivial to verify directly. 
For example, $$\label{abmumu} 4a_e b_e U_{\mu1} U_{\mu2}=P_2^H\sin 2\omega\left[\sin 2\omega\left(\sin^2\xi \sin^2\theta-\cos^2\theta\right)-\sin\xi\sin 2\theta\cos 2\omega\right]$$ while $$\label{abee} 4a_{\mu} b_{\mu} U_{e1} U_{e2}=\cos^2\xi\sin 2\omega\left[\sin 2\omega \left(P_3^H\sin^2\theta-\cos^2\theta\right)- \sqrt{P_3^H}\sin 2\theta\cos 2\omega\right],$$ so ${\rm Osc}_{e\mu}\neq{\rm Osc}_{\mu e}$. Figs. \[sst\_lma\] and \[sst\_low\] depict $P_{\alpha\beta}$ as a function of $\sin^2\theta$ at LMA and SMA, and at LOW and VAC, respectively. In these figures, all $P_{\alpha\beta}$ are plotted, in order to illustrate that $P_{\alpha\beta}\neq P_{\beta\alpha}$ at LMA, SMA and LOW. Note that at LMA and SMA, the difference comes from the fact that $P_3^H\neq \sin^2\xi$ [*and*]{} $P_c^L\neq \cos^2\omega$. At LOW, $P_c^L\simeq\cos^2\omega$, but $P_3^H\neq \sin^2\xi$ [*and*]{} nontrivial oscillatory terms render $P_{\alpha\beta}\neq P_{\beta\alpha}$. At VAC, even though $P_3^H\neq \sin^2\xi$, $P_{\alpha\beta}=P_{\beta\alpha}$ because $P_c^L=\cos^2\omega$ [*and*]{} because “1-2” oscillations don’t have “time” to happen. From Eqs. (\[1-pcl=sso\]) one can roughly understand the dependence of $P_{\alpha\beta}$ on $\sin^2\theta$. Obviously $P_{ee}$ does not depend on $\theta$ (by the very form of the MNS matrix, Eq. (\[MNSmatrix\])), while $P_{e\mu}$ ($P_{e\tau}$) depends almost exclusively on $\sin^2\theta$ ($\cos^2\theta$). This is guaranteed by the fact that $P_3^H\gg P_2^H$ even at LMA and LOW, when one expects the interference terms to play a significant role. It is also worthwhile to note that, as expected, at VAC and SMA the curves are very similar, a behaviour that can be understood from earlier discussions. Finally, Fig. \[dm21\_std\] depicts constant $P_{\alpha\beta}$ contours in the ($\Delta m^2_{21}\times\sin^2\omega$)-plane, at ATM. In light of the previous discussions, the shapes and forms can be readily understood. 
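The claim below Eq. (\[1-pcl=sso\]) that the averaged probabilities satisfy $2P_{e\mu}=1+P_{\tau\tau}-P_{\mu\mu}-P_{ee}$ (and hence $P_{\alpha\beta}=P_{\beta\alpha}$) follows from $P_2^H+P_3^H=1$ alone, and can be verified with a short numerical sketch (helper names are illustrative):

```python
import math
import random

def averaged_probs(p3h, theta, xi):
    """Eq. (1-pcl=sso) with the oscillatory terms averaged to zero
    (valid when L_osc^21 >> 1 a.u.); p2h = 1 - p3h by unitarity."""
    p2h = 1.0 - p3h
    st2, ct2 = math.sin(theta)**2, math.cos(theta)**2
    sx2, cx2 = math.sin(xi)**2, math.cos(xi)**2
    p_ee = p2h * cx2 + p3h * sx2
    p_em = (p2h * sx2 + p3h * cx2) * st2
    p_mm = (ct2 + math.sqrt(p3h) * math.sin(xi) * st2)**2 + p2h * cx2 * st2**2
    p_tt = (st2 + math.sqrt(p3h) * math.sin(xi) * ct2)**2 + p2h * cx2 * ct2**2
    return p_ee, p_em, p_mm, p_tt

random.seed(0)
for _ in range(100):
    p_ee, p_em, p_mm, p_tt = averaged_probs(random.random(),
                                            random.uniform(0, math.pi / 2),
                                            random.uniform(0, math.pi / 2))
    # With the oscillatory terms neglected, 2 P_emu = 1 + P_tautau - P_mumu - P_ee
    assert abs(2 * p_em - (1 + p_tt - p_mm - p_ee)) < 1e-12
```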
First note that the shapes of the constant $P_{ee}$ and $P_{e\mu}$ regions resemble those of the pure vacuum oscillations depicted in Fig. \[dm21\_vacuum\], with two important differences. First, the constant values of the contours are quite different. For example, $P_{ee}$ varies from a few percent to less than 15%, while in the case of pure vacuum oscillations, $P_{ee}$ varies from 30% to 100%. This can be roughly understood numerically by noting that $P_{(ee,e\mu)}\simeq P_2^H P_{(ee,e\mu)}^{\rm vac}$ (remember that $P_2^H=1-P^H_3\simeq 0.13$ when averaged over the energy range of interest). Second, at high $\Delta m^2_{21}$, the regions are distorted. This is due to nontrivial matter effects in the “$M-L$” system. Note that the contours follow the constant $1-P_c^L$ curves depicted in Fig. \[1-pcl\]. The $P_{\mu\mu}$ and $P_{\tau\tau}$ contours are considerably less familiar, and require some more discussion. Many features are rather prominent. For example, the plane is roughly divided into a $\sin^2\omega>0.5$ and $\sin^2\omega<0.5$ structure, and large (small) values of $P_{\mu\mu}$ ($P_{\tau\tau}$) are constrained to the $\sin^2\omega>0.5$ half, and vice-versa. Also, there is a rough $P_{\mu\mu}(\omega) \leftrightarrow P_{\tau\tau}(\pi/2-\omega)$ symmetry in the picture, which was present in the pure vacuum case (see Fig. \[dm21\_vacuum\]). This symmetry is absent for large values of $\Delta m^2_{21}$, similar to what happens in the case of $P_{ee}$, and is due, as mentioned in the previous paragraph, to the fact that $P_c^L$ is significantly different from $\cos^2\omega$ in this region. The other features are also fairly simple to understand, and are all due to the fact that $P_3^H\gg\sin^2\xi$. It is convenient to start the discussion in the limit when $L_{21}^{\rm osc}\gg 1$ a.u. (the very small $\Delta m^2_{21}$ region). As was noted before, $P_{\alpha\beta}$ are given by Eq. (\[1-pcl=sso\]) where the Osc$_{\alpha\beta}$ terms vanish. 
It is therefore easy to see that $P_{\alpha\beta}$ do not depend on $\omega$ or $\Delta m^2_{21}$ (as mentioned before), and furthermore it is trivial to compute the value of $P_{\alpha\beta}$ given that we are at ATM and that $P_3^H\simeq 0.87$. The next curious feature is that there is a “band” around $\sin^2\omega=1/2$ where $P_{\mu\mu}\simeq P_{\mu\mu}(L_{21}^{\rm osc}\rightarrow\infty)$. The same is true of $P_{\tau\tau}$. This is due to the fact that, around $\sin^2\omega\simeq 1/2$, $a_{\mu}b_{\mu}$ and $a_{\tau}b_{\tau}$ vanish when $P_3^H$ is large. In the limit $P_3^H=1$ one can use Eq. (\[amubmu\]) and note that indeed both $a_{\mu}b_{\mu}$ and $a_{\tau}b_{\tau}$ vanish at $P_c^L=1/2$. However, for values of $\Delta m^2_{21}\lesssim 10^{-7}$ eV$^2$, $P_c^L\simeq\cos^2\omega$, which explains the band around $\omega\simeq \pi/4$. Slight distortions are due to the fact that $P_3^H\neq1$, and are easily computed from the exact expressions. Again in the limit $P_3^H=1$, $P_c^L=\cos^2\omega$, the coefficient of the $\cos\left(\frac{\Delta m^2_{21}x}{2E_{\nu}}\right)$ term in Eq. (\[pmmtt\]) is $$\pm\frac{1}{2}(1-2\cos^2\omega)\left(-\frac{1}{2}\sin 2\omega\mp 0.1\cos 2\omega\right),$$ if $\sin^2\xi$ terms are neglected. The $+,-$ signs are for $P_{\mu\mu}$ while the $-,+$ signs are for $P_{\tau\tau}$. It is trivial to verify numerically (if a little tedious) that the $P_{\mu\mu}$ term has a maximum at $\sin^2\omega\simeq 0.1$ and a minimum at $\sin^2\omega\simeq 0.8$. For $P_{\tau\tau}$ the maximum (minimum) is at $\sin^2\omega\simeq 0.9 (0.2)$. It is important to comment that the minima are negative numbers. On the other hand, from Fig. \[dm21\_vacuum\] (as mentioned before) it is easy to see that $\cos\left(\frac{\Delta m^2_{21}x}{2E_{\nu}}\right)$ is minimum for $\Delta m^2_{21}\simeq 2\times 10^{-8}$ eV$^2$ (this is where all $P_{\alpha\alpha}$ are maximally suppressed in Fig. \[dm21\_vacuum\]). 
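The “tedious” numerical verification of these extrema is a one-line scan. A minimal sketch for the $P_{\mu\mu}$ coefficient (the upper sign choice above):

```python
import math

def coefficient_mumu(ss_omega):
    # Coefficient of cos(Delta m^2_21 x / 2E) in P_mumu, in the limit
    # P_3^H = 1, P_c^L = cos^2(omega), with sin^2(xi) terms neglected.
    c2w = 1.0 - 2.0 * ss_omega                          # cos(2 omega)
    s2w = 2.0 * math.sqrt(ss_omega * (1.0 - ss_omega))  # sin(2 omega)
    # 0.5 * (1 - 2 cos^2 omega) * (-0.5 sin(2 omega) - 0.1 cos(2 omega))
    return 0.5 * (2.0 * ss_omega - 1.0) * (-0.5 * s2w - 0.1 * c2w)

# Scan sin^2(omega) over (0, 1) and locate the extrema.
grid = [i / 1000.0 for i in range(1, 1000)]
vals = [coefficient_mumu(s) for s in grid]
s_max = grid[vals.index(max(vals))]
s_min = grid[vals.index(min(vals))]
assert 0.05 < s_max < 0.2   # maximum near sin^2(omega) ~ 0.1
assert 0.7 < s_min < 0.9    # minimum near sin^2(omega) ~ 0.8
assert min(vals) < 0        # the minima are indeed negative
```

The $P_{\tau\tau}$ case follows by flipping the sign of the $0.1\cos 2\omega$ piece, which reflects the extrema about $\sin^2\omega=1/2$.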
Combining these two pieces of information, it is simple to understand the maxima/minima of $P_{\mu\mu}$ and $P_{\tau\tau}$ at $\Delta m^2_{21}\simeq 2\times 10^{-8}$ eV$^2$: Minima occur when the coefficient is maximum ([*e.g.,*]{} at $\sin^2\omega\simeq 0.1$ for $P_{\mu\mu}$) while maxima occur when the coefficient is minimum ([*e.g.,*]{} at $\sin^2\omega\simeq 0.8$ for $P_{\mu\mu}$). A description of what has happened is the following: The matter effects “compress” the constant $P_{\mu\mu}$ ($P_{\tau\tau}$) contours from the pure vacuum oscillation case (presented in Fig. \[dm21\_vacuum\]) to the $\sin^2\omega<1/2$ ($>1/2$) half of the plane, and a new region “appears” on the other half. This other region is characterised by negative values of the coefficients of the oscillatory terms, which are not attainable in the case of pure vacuum oscillations (see Eq. (\[p3vac\_aa\])). Finally, in the region where the oscillatory effects average out, the $P_{\mu\mu}$ and $P_{\tau\tau}$ contours are also best understood from Eq. (\[pmmtt\]) and the paragraphs which follow it, in the limit that $\cos\left(\frac{\Delta m^2_{21}x}{2E_{\nu}}\right)\rightarrow 0$. It is simple to see, for example, that $P_{\mu\mu}<P_{\tau\tau}$ if $\cos 2\omega>0$ ($\sin^2\omega<1/2$), while the situation is reversed if $\cos 2\omega<0$. This is indeed what one observes in Fig. \[dm21\_std\]. “Inverted” Neutrino Hierarchy ----------------------------- Here I turn to the case of an “inverted” neutrino hierarchy, namely $\Delta m^2_{31}<0$. Currently, there is no experimental hint as to what the sign of $\Delta m^2_{31}$ should be, so there is no reason to believe that the “normal” hierarchy is to be preferred over the “inverted” hierarchy. Indeed, even from a theoretical/model-building point of view, there are no strong reasons for or against a particular neutrino mass hierarchy [@theory_review]. The discussion will be restricted to $\Delta m^2_{21}>0$ for two reasons. 
First, the $\Delta m^2_{21}<0$ case can be approximately read off from the $\Delta m^2_{21}>0$ case by changing $\omega\rightarrow\pi/2-\omega$, as mentioned before. Second, and most important, there are some experimental hints as to the sign of $\Delta m^2_{21}$ [@solar_3; @dark_side]. For example, the SMA solution only exists for one sign of $\Delta m^2_{21}$, while the LMA and LOW solutions prefer one particular sign. Even in the case of VAC there is the possibility of obtaining information concerning the sign of $\Delta m^2_{21}$ from solar neutrino data [@alex]. Therefore, the notation introduced in the beginning of this section (ATM, SMA, LMA, LOW, VAC) still applies, and one should simply remember that here $\Delta m^2_{31}<0$. As advertised, the largest effect of $\Delta m^2_{31}<0$ is on the typical values of $P_{c}^H$. From Eq. (\[pc\]), keeping in mind that here $\gamma$ is negative, $$P_c^H=\frac{1-e^{-|\gamma|\cos^2\xi}}{1-e^{-|\gamma|}},$$ where $\gamma$ is given by Eq. (\[gamma\]) with $\Delta m^2\rightarrow\Delta m^2_{31}$. Since $|\gamma|\gg 1$ (see Eq. (\[gamma\])), $P_c^H=1$ for all values of $\Delta m^2_{31}$ and $\sin^2\xi$ of interest. Indeed, this is true for any value of $\sin^2\xi$ as long as $|\Delta m^2_{31}|\gg 10^{-6}$ eV$^2$. This is to be contrasted with the normal hierarchy case, where there is always some value of $\sin^2\xi$ (which is a function of $\Delta m^2_{31}$) below which $P_c$ deviates significantly from its adiabatic limit. All of the $\Delta m^2_{31}$ dependence of $P_{\alpha\beta}$ is therefore encoded in $\xi_M$. However, in the case $\Delta m^2_{31}<0$ it is trivial to show that $-1<\cos 2\xi_M<-\cos 2\xi$, where the upper bound is reached in the limit $|\Delta m^2_{31}/2E_{\nu}|\gg A$ (as mentioned before; the minus sign takes care of the “unorthodox” $P_c\rightarrow 1$ adiabatic limit). 
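The two limits of the inverted-hierarchy hopping probability quoted above are immediate from the closed-form expression; a minimal numerical sketch (illustrative values of $|\gamma|$ only):

```python
import math

def pc_h_inverted(gamma_abs, ss_xi):
    """Hopping probability for Delta m^2_31 < 0:
    P_c^H = (1 - exp(-|gamma| cos^2 xi)) / (1 - exp(-|gamma|))."""
    cc_xi = 1.0 - ss_xi
    return (1.0 - math.exp(-gamma_abs * cc_xi)) / (1.0 - math.exp(-gamma_abs))

# For |gamma| >> 1 (the regime quoted for |Delta m^2_31| >> 1e-6 eV^2),
# P_c^H -> 1 for any sin^2 xi of interest:
for ss_xi in (0.0, 0.05, 0.1):
    assert abs(pc_h_inverted(50.0, ss_xi) - 1.0) < 1e-9
# In the opposite limit |gamma| -> 0 one recovers the vacuum value cos^2 xi:
assert abs(pc_h_inverted(1e-6, 0.1) - 0.9) < 1e-4
```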
Since one is interested in $\sin^2\xi<0.1$ ($-\cos 2\xi<-0.8$), the range for $\xi_M$ is rather limited, and therefore any $\Delta m^2_{31}$ effects are bound to be very small. Larger $\Delta m^2_{31}$ effects are expected for larger $\sin^2\xi$. In light of this, Fig. \[ssxi\_min\] depicts $P_{\alpha\alpha}$ and $P_{e\mu}$ as a function of $\sin^2\xi$ at the various points (LMA, SMA, LOW, VAC), for $\Delta m^2_{31}=-3\times 10^{-3}$ eV$^2$ and $\sin^2\theta=0.5$. It is interesting to compare the results presented here with the pure vacuum case. In the limit $P_2^H=1$ $$P_{ee}=\cos^2\xi\left[P_c^L\cos^2\omega+(1-P_c^L)\sin^2\omega+\sqrt{P_c^L(1-P_c^L)} \sin 2\omega\cos\left(\frac{\Delta m^2_{21}L}{2E_{\nu}}\right)\right],$$ while the pure vacuum result in the same region of the parameter space is $$P_{ee}^{\rm vac}=\cos^4\xi\left[\cos^4\omega+\sin^4\omega+2\sin^2\omega\cos^2\omega \cos\left(\frac{\Delta m^2_{21}L}{2E_{\nu}}\right)\right]+\sin^4\xi.$$ In the limit $P_c^L=\cos^2\omega$, the difference is $$P_{ee}-P_{ee}^{\rm vac}=\left(P_{ee}^{2\nu, \rm vac}\right) \cos^2\xi\sin^2\xi-\sin^4\xi,$$ where $P_{ee}^{2\nu, \rm vac}$ is the electron neutrino survival probability in the two-flavour case with $\Delta m^2=\Delta m^2_{21}$ and vacuum mixing angle $\omega$. This difference vanishes at $\sin^2\xi=0$ and at $\sin^2\xi=\frac{P_{ee}^{2\nu, \rm vac}}{1+ P_{ee}^{2\nu, \rm vac}}$ (which lies between 0 and 0.5). Furthermore, it is a concave function of $\sin^2\xi$, which means that $P_{ee}$ is [*larger*]{} than the pure vacuum case for values of $\sin^2\xi< \frac{P_{ee}^{2\nu, \rm vac}}{1+ P_{ee}^{2\nu, \rm vac}}$. Away from the limit $P_c^L=\cos^2\omega$, keeping in mind that the oscillatory terms average out, $P_{ee}$ is still larger than the pure vacuum case if $\cos^2\omega>\sin^2\omega$ since $P_c^L\leq\cos^2\omega$, as one can easily verify. 
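Writing $s=\sin^2\xi$ and $P=P_{ee}^{2\nu,\rm vac}$, the difference above is $Ps(1-s)-s^2=s\left[P-s(1+P)\right]$, which makes the two zeros and the sign pattern explicit. A minimal numerical sketch (the value of $P$ is illustrative):

```python
def survival_difference(p2nu_vac, ss_xi):
    """P_ee - P_ee^vac in the limit P_2^H = 1, P_c^L = cos^2 omega,
    expressed through the two-flavour vacuum survival probability."""
    return p2nu_vac * (1.0 - ss_xi) * ss_xi - ss_xi**2

p2nu = 0.6                   # an illustrative two-flavour survival probability
root = p2nu / (1.0 + p2nu)   # second zero, sin^2 xi = P / (1 + P) = 0.375
assert abs(survival_difference(p2nu, 0.0)) < 1e-12
assert abs(survival_difference(p2nu, root)) < 1e-12
# Between the two zeros the (concave) difference is positive ...
assert survival_difference(p2nu, 0.5 * root) > 0
# ... and beyond the second zero it turns negative:
assert survival_difference(p2nu, 0.9) < 0
```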
Also, in the limit $P_2^H=1$, $\sin^2\theta=1/2$, $$P_{\mu\mu}=\frac{1}{2}\left[(1-P_c^L)U_{\mu1}^2+P_c^LU_{\mu2}^2+ U_{\mu3}^2+2\sqrt{P_c^L (1-P_c^L)}U_{\mu1}U_{\mu2}\cos\left(\frac{\Delta m^2_{21}L}{2E_{\nu}}\right)\right].$$ The same expression applies for $P_{\tau\tau}$ with $U_{\mu i}\rightarrow U_{\tau i}$. This is a consequence of $\sin^2\theta=\cos^2\theta$. Furthermore, in the limit $\sin^2\xi\rightarrow 0$ (and for $\sin^2\theta=\cos^2\theta$), $U_{\mu i}=U_{\tau i}$, which explains why $P_{\mu\mu}=P_{\tau\tau}$ for $\sin^2\xi\lesssim 10^{-2}$. At VAC this equality remains for all values of $\sin^2\xi$. The reason for this is that, at VAC, the expression simplifies tremendously and $P_{\mu\mu}=P_{\tau\tau}=\frac{1}{4}\left(1+\cos^2\xi\right)$. In the same region of the parameter space, the pure vacuum oscillation case yields $P_{\mu\mu}^{\rm vac}=P_{\tau\tau}^{\rm vac}=\frac{1}{2}\cos^4\xi-\cos^2\xi+1$. Note that, in this region of the parameter space, $P_{\mu\mu}^{\rm vac}\geq P_{\mu\mu}$, with the inequality saturated at $\cos^2\xi=1$. The same result also applies (approximately) at SMA, since the oscillatory terms are proportional to $\sqrt{P_c^L(1-P_c^L)}$ and $1-P_c^L$ is very small at SMA (see Fig. \[1-pcl\]). The equality $P_{\mu\mu}=P_{\tau\tau}$ is broken at larger values of $\sin^2\xi$ because $P_c^L\neq \cos^2\omega$ at SMA. It remains to discuss how $P_{\mu\mu}$ and $P_{\tau\tau}$ differ from the pure vacuum case at LMA and LOW. In the limit $P_c^L=\cos^2\omega$, and averaging out the oscillatory terms, $$\label{eq_diff} P_{\mu\mu}-P_{\mu\mu}^{\rm vac}=\frac{\sin\xi}{2}\left[\sin\xi\left(U_{\mu3}^2- (\cos^2\omega U_{\mu1}^2+\sin^2\omega U_{\mu2}^2)\right)- \sin 2\omega(U_{\mu1}^2-U_{\mu2}^2) \right].$$ This difference goes to zero as $\sin^2\xi\rightarrow 0$. This is to be expected, since in this limit the difference between $P_2^H$ and $\cos^2\xi$ disappears. For small values of $\sin^2\xi$, the last term in Eq. 
\[eq\_diff\] dominates, and, as discussed before, $U_{\mu1}^2-U_{\mu2}^2=-0.5\cos 2\omega+O(\sin\xi)$. Therefore, $P_{\mu\mu}-P_{\mu\mu}^{\rm vac}>0$ ($<0$) for $\cos 2\omega>0$ ($<0$). The expression for $P_{\tau\tau}$ can be obtained from Eq. (\[eq\_diff\]) by replacing $U_{\mu i}\rightarrow U_{\tau i}$ and changing the sign of the last term. Therefore, since $U_{\tau1}^2-U_{\tau2}^2=-0.5\cos 2\omega+O(\sin\xi)$, $P_{\tau\tau}-P_{\tau\tau}^{\rm vac}>0$ ($<0$) for $\cos 2\omega<0$ ($>0$). When the oscillatory terms do not average out, it is easy to verify explicitly that the behaviour of the oscillatory terms follows the behaviour of the average terms, discussed above, and the inequalities obtained above still apply. The situation, however, changes when $P_c^L\neq \cos^2\omega$, [*i.e.,*]{} when matter effects due to the “M-L” system are relevant. In this region, a behaviour similar to the one observed in the “normal” hierarchy case is expected, since $\Delta m^2_{21}>0$. Fig. \[sso\_min\] depicts constant $P_{\alpha\beta}$ contours in the ($\Delta m^2_{21}\times\sin^2\omega$)-plane. One should be able to see upon close inspection that the region $P_{ee}<30\%$ is smaller in Fig. \[sso\_min\] than the same region in the pure vacuum oscillation case, Fig. \[dm21\_vacuum\]. Also, the constant $P_{\mu\mu}$ ($P_{\tau\tau}$) contours are shifted to larger (smaller) values of $\sin^2\omega$. The other prominent (and expected, as mentioned above) feature is the distortion of the contours at large values of $\Delta m^2_{21}$. This behaviour is similar to the one observed in Fig. \[dm21\_std\]. I conclude this subsection with a comment on antineutrinos. As discussed previously, $P_{\bar{\alpha}\bar{\beta}}(\Delta m^2_{21},\Delta m^2_{31}) =P_{\alpha\beta}(-\Delta m^2_{21},-\Delta m^2_{31})$, such that the “normal” hierarchy yields “inverted” hierarchy results for antineutrinos, and vice-versa. One cannot, however, apply Fig. \[dm21\_std\] and Fig. 
\[sso\_min\] for the antineutrinos because both $\Delta m^2_{ij}$ have to change sign, not just $\Delta m^2_{31}$. Qualitatively, however, it is possible to understand the constant $P_{\bar{\alpha}\bar{\beta}}$ contours by examining Figs. \[dm21\_std\] and \[sso\_min\] reflected in a mirror positioned at $\sin^2\omega=0.5$, meaning that $P_{\bar{\alpha}\bar{\beta}}(\sin^2\omega,\Delta m^2_{31})\simeq P_{\alpha\beta}(\cos^2\omega,-\Delta m^2_{31})$. The equality is not exact because one is also required to change $\theta\rightarrow \pi-\theta$, as mentioned earlier. Higher Neutrino Energies ------------------------ As the average neutrino energy increases, the values of $P_{\alpha\beta}$ start to resemble more closely the pure vacuum case. This is easy to see from Figs. \[p3h\] and \[1-pcl\]. Any deviation of $1-P_c^L$ from $\sin^2\omega$ goes away even at LMA for $E_{\nu}\simeq 50$ GeV, while “H-M” effects remain important up to $E_{\nu}\simeq 1$ TeV, even though quantitatively the effect decreases noticeably. This can be illustrated by the value of $P_3^H$ at ATM, for example, which drops from 0.87 for energies which range from 1 to 5 GeV (see the previous subsections) to 0.058, for energies which range from 100 to 110 GeV. Furthermore, all $L^{\rm osc}_{ij}$ increase as the energy increases, for fixed values of $\Delta m^2_{ij}$. Therefore, LOW becomes indistinguishable from VAC at $E_{\nu}\simeq 100$ GeV. For $O$(TeV) neutrinos the sensitivity to $\Delta m^2_{21}$ remains only for its highest allowed values, while one should start worrying about nontrivial oscillatory effects due to $L_{31}^{\rm osc}$. The case of higher energy neutrinos contains a more serious complication: neutrino absorption inside the Sun. As the neutrino energy increases, one has to start worrying about the fact that absorptive neutrino interactions can take place. 
According to [@absorption], for neutrinos produced in the Sun’s core, absorption becomes important for $E_{\nu}\gtrsim 200$ GeV. In this case, $\nu_e$ and $\nu_\mu$ interact with nuclear matter and produce electrons and muons, respectively. The former are captured and “lost” inside the Sun, while the latter stop before decaying into low energy neutrinos. The case of $\nu_{\tau}$-Sun interactions is more interesting, because the $\tau$-leptons produced via charged current interactions decay before “stopping”, yielding $\nu_{\tau}$’s with slightly reduced energies. Therefore, it is possible to get a flux of very high energy initial state $\tau$-neutrinos but not muon or electron-type neutrinos. Such effects have been studied for high energy galactic neutrinos traversing the Earth [@absorption_earth]. The effect of neutrino oscillations inside the Sun in the presence of nonnegligible neutrino absorption is certainly of great interest but is beyond the scope of this paper. Conclusions =========== The oscillation probability of $O$(GeV) neutrinos of all flavours produced in the Sun’s core has been computed, including matter effects, which are, in general, nontrivial. In particular, it was shown that, unlike the two-flavour oscillation case, in the three-flavour case the probability of a neutrino produced in the flavour eigenstate $\alpha$ to be detected as a flavour eigenstate $\beta$ ($P_{\alpha\beta}$) is (in general) different from $P_{\beta\alpha}$, even if the $CP$-violating phase of the MNS matrix vanishes. This is, of course, expected since Sun–neutrino interactions explicitly break $T$-invariance. Indeed, it is the case of two-flavour oscillations which is special, in the sense that the number of independent oscillation probabilities is too small because of unitarity. The results of a particular scan of the parameter space are presented in Sec. 4. 
In this case, special attention was paid to the regions of the parameter space which are preferred by the current experimental situation. It turns out that, in the case of a “normal” neutrino mass hierarchy, it is possible to suppress $P_{ee}$ tremendously with respect to its pure vacuum oscillation values, by a mechanism that is similar to the well-known MSW effect in the case of two-flavour oscillations: the parameters are such that electron-type neutrinos produced in the Sun’s core exit the Sun (almost) as pure mass eigenstates, and the $\nu_e$ component of this eigenstate is small. Both $P_{\mu\mu}$ and $P_{\tau\tau}$ can be significantly suppressed, and the constant $P_{\mu\mu}$ and $P_{\tau\tau}$ contours as a function of the “solar” angle and the smaller mass-squared differences are nontrivial. One important feature is that when $P_{\mu\mu}$ is significantly suppressed, $P_{\tau\tau}$ is not, and vice-versa. One consequence of this is that, for some regions of the parameter space, it is possible to have an enhancement of $\nu_{\tau}$’s detected in the Earth with respect to the number of $\nu_{\mu}$’s (or vice-versa). This may have important implications for solar WIMP annihilation searches at neutrino telescopes, and will be studied at another opportunity. It is important to note that the effect of neutrino oscillations on the expected event rate at neutrino telescopes will depend on the expected production rate of individual neutrino species inside the Sun, which is, of course, model dependent. In the case of an “inverted” mass hierarchy, the situation is very similar to the pure vacuum case, and no particular suppression of any $P_{\alpha\alpha}$ is possible. Indeed, for a large region of the parameter space $P_{ee}$ is in fact enhanced, a feature which is also observed in the two-flavour case [@earth_matter]. 
The case of higher energy neutrinos was very briefly discussed, and the crucial point is to note that, for neutrino energies above a few hundred GeV, the absorption of neutrinos by the Sun becomes important. The study of absorption effects is beyond the scope of this paper. Finally, it is important to reemphasise that the values of $P_{\alpha\beta}$ computed here are to be understood as if they were evaluated at the Earth’s surface. No Earth-matter effects have been included. It is possible that Earth-matter effects are important, especially the ones related to $\Delta m^2_{31}$, in the event that $U_{e3}^2\equiv\sin^2\xi$ turns out to be “large.” Acknowledgements {#acknowledgements .unnumbered} ================ I would like to thank John Ellis for suggesting the study of GeV solar neutrinos, and for many useful discussions and comments on the manuscript. I also thank Amol Dighe and Hitoshi Murayama for enlightening discussions and for carefully reading this manuscript and providing useful comments. [99]{} Super-Kamiokande Collaboration (Y. Fukuda [*et al*]{}.), [*Phys. Rev. Lett.*]{} [**81**]{}, 1562 (1998), hep-ex/9807003. N. Fornengo, M.C. Gonzalez-Garcia, and J.W.F. Valle, FTUV-00-13, IFIC-00-14, hep-ph/0002147. B.T. Cleveland [*et al*]{}., [*Astrophys. J.*]{} [ **496**]{}, 505 (1998). KAMIOKANDE Collaboration (Y. Fukuda [*et al*]{}.), [*Phys. Rev. Lett.*]{} [**77**]{}, 1683 (1996). GALLEX Collaboration (W. Hampel [*et al*]{}.), [*Phys. Lett.*]{} [**B447**]{}, 127 (1999). SAGE Collaboration (J.N. Abdurashitov [*et al*]{}.), [ *Phys. Rev.*]{} [**C 59**]{}, 2246 (1999); SAGE Collaboration (J.N. Abdurashitov [ *et al*]{}.), astro-ph/9907113. Super-Kamiokande Collaboration (Y. Fukuda [*et al*]{}.), [*Phys. Rev. Lett.*]{} [**81**]{}, 1158 (1998), hep-ex/9805021. J.N. Bahcall, S. Basu, and M.H. Pinsonneault, [*Phys. Lett.*]{} [**B433**]{}, 1 (1998), astro-ph/9805135. J.N. Bahcall, P.I. Krastev, and A.Yu. Smirnov, [*Phys. 
Rev.*]{} [**D 58**]{}, 096016 (1998), hep-ph/9807216. M.C. Gonzalez-Garcia [*et al*]{}., FTUV-99-41, hep-ph/9906469. A. de Gouvêa, A. Friedland, and H. Murayama, UCB-PTH-00-03, hep-ph/0002064. G.L. Fogli [*et al*]{}., BARI-TH-365-99, hep-ph/9912231. B. Pontecorvo, [*Zh. Eksp. Teor. Fiz.*]{} [**33**]{}, 549 (1957). Z. Maki, M. Nakagawa, and S. Sakata, [*Prog. Theor. Phys.*]{} [**28**]{}, 870 (1962). L. Wolfenstein, [*Phys. Rev.*]{} [**D 17**]{}, 2369 (1978); S.P. Mikheyev and A.Yu. Smirnov, [*Yad. Fiz. (Sov. J. of Nucl. Phys.)*]{} [**42**]{}, 1441 (1985). T.K. Kuo and J. Pantaleone, [*Phys. Rev. Lett.*]{} [**57**]{}, 1805 (1986); [*Phys. Rev.*]{} [**D 35**]{}, 3432 (1987). S.P. Mikheyev and A.Yu. Smirnov, [*Phys. Lett.*]{} [**B 200**]{}, 560 (1987). for a review, see G. Jungman, M. Kamionkowski, and K. Griest, [*Phys. Rep.*]{} [**267**]{}, 195 (1996). AMANDA Collaboration (P. Askebjer [*et al*]{}.), [*Nucl. Phys. Proc. Suppl.*]{} [**77**]{}, 474 (1999). BAIKAL Collaboration (V.A. Balkanov [*et al*]{}.), [*Prog. Part. Nucl. Phys.*]{} [**40**]{}, 391 (1998). J. Ellis, R.A. Flores, and S.S. Masood, [*Phys. Lett.*]{} [**B 294**]{}, 229 (1992). T.K. Kuo and J. Pantaleone, [*Phys. Rev.*]{} [**D 37**]{}, 298 (1988). V. Barger [*et al.*]{}, [*Phys. Rev.*]{} [**D 22**]{}, 2718 (1980). see, [*e.g.*]{}, A. De Rújula, M.B. Gavela, and P. Hernández, [*Nucl. Phys.*]{} [**B 547**]{}, 21 (1999); V. Barger, S. Geer, and K. Whisnant, [*Phys. Rev.*]{} [**D 61**]{}, 053004 (2000); A. Bueno, M. Campanelli, and A. Rubbia, ICARUS-TM-2000-01, hep-ph/0005007. for a general and updated review see S.M. Bilenkii, C. Giunti, and W. Grimus, [*Prog. Part. Nucl. Phys.*]{} [**43**]{}, 1 (1999), hep-ph/9812360. `http://www.sns.ias.edu/~jnb/` S.T. Petcov, [*Phys. Lett.*]{} [**B 214**]{}, 139 (1988). S. Pakvasa and J. Pantaleone, [*Phys. Rev. Lett.*]{} [**65**]{}, 2479 (1990). T. Kaneko, [*Prog. Theor. Phys.*]{} [**78**]{}, 532 (1987); S. Toshev, [*Phys. Lett.*]{} [**B 196**]{}, 170 (1987); M. 
Ito, T. Kaneko, and M. Nakagawa, [*Prog. Theor. Phys.*]{} [**79**]{}, 13 (1988) \[Erratum [**79**]{}, 555 (1988)\]. S.T. Petcov, [*Phys. Lett.*]{} [**B 200**]{}, 373 (1988). P.I. Krastev and S.T. Petcov, [*Phys. Lett.*]{} [**B 207**]{}, 64 (1988). S.T. Petcov and J. Rich, [*Phys. Lett.*]{} [**B 224**]{}, 426 (1989); J. Pantaleone, [*Phys. Lett.*]{} [**B 251**]{}, 618 (1990). M.V. Chizhov, IC-99-135, hep-ph/9909439. Particle Data Group (C. Caso [*et al*]{}.), [*Eur. Phys. J.*]{} [**C 3**]{}, 1 (1998). Chooz Collaboration (M. Apollonio [*et al*]{}.), [*Nucl. Phys. Proc. Suppl.*]{} [**77**]{}, 159 (1999); F. Boehm [*et al.*]{}, STANFORD-HEP-00-03, hep-ex/0003022. LSND Collaboration (C. Athanassopoulos [*et al*]{}.), [*Phys. Rev. Lett.*]{} [**81**]{}, 1774 (1998); [*Nucl. Phys. Proc. Suppl.*]{} [**77**]{}, 207 (1999). J. Pantaleone, [*Phys. Rev.*]{} [**D 43**]{}, 641 (1991). A. de Gouvêa, A. Friedland, and H. Murayama, LBNL-44351, hep-ph/9910286. A. Friedland, UCB-PTH-00-04, hep-ph/0002063. For recent reviews see, [*e.g.*]{} R.N. Mohapatra, to appear in “Current Aspects of Neutrino Physics”, ed. by D. Caldwell, Springer-Verlag, 2000, hep-ph/9910365; S.M. Barr and I. Dorsner, BA-00-15, hep-ph/0003058. R. Gandhi [*et al.,*]{} [*Astropart. Phys.*]{} [**5**]{}, 81 (1996). S. Iyer, M.H. Reno, and I. Sarcevic, [*Phys. Rev.*]{} [**D 61**]{}, 053003 (2000). [^1]: Some effects have already been studied, in the two-neutrino case, in [@EFM]. [^2]: If the neutrinos are Majorana particles, there is also a Majorana phase, which will be ignored throughout since it plays no role in the physics of neutrino oscillations. [^3]: The most general form of a $2\times 2$ unitary matrix is $\left(\matrix{A & B \cr -B^* & A^*}\right)\left(\matrix{1 & 0 \cr 0 & e^{i\zeta}} \right)$, where $|A|^2+|B|^2=1$ and $0\leq\zeta\leq 2\pi$. In the case of neutrino oscillations, however, the physical quantities are $|A|^2$ and the phase of $AB^*$, and therefore $\zeta$ can be ignored. 
[^4]: This is in general the case, because one has to consider that neutrinos are produced at different points in space and time. [^5]: See [@bksreview; @rate_analysis; @dark_side] for the labelling of the regions of the parameter space that solve the solar neutrino puzzle. [^6]: If one decides to limit $0\leq\vartheta\leq\pi/4$, a similar result can be obtained if $\Delta m^2\rightarrow-\Delta m^2$, explicitly: $P_{\bar{e}\bar{e}}(\Delta m^2)=P_{ee}(-\Delta m^2)$. [^7]: There is evidence for neutrino oscillations coming from the LSND experiment [@LSND]. Such evidence has not yet been confirmed by another experiment, and will not be considered in this paper. If, however, it is indeed confirmed, it is quite likely that a fourth, sterile, neutrino will have to be introduced into the picture. [^8]: I will work under this assumption for the time being.
--- abstract: 'Oscillatory double-diffusive convection (ODDC, more traditionally called semiconvection) is a form of linear double-diffusive instability that occurs in fluids that are unstably stratified in temperature (Schwarzschild unstable), but stably stratified in chemical composition (Ledoux stable). This scenario is thought to be quite common in the interiors of stars and giant planets, and understanding the transport of heat and chemical species by ODDC is of great importance to stellar and planetary evolution models. Fluids unstable to ODDC have a tendency to form convective thermo-compositional layers which significantly enhance the fluxes of temperature and chemical composition compared with microscopic diffusion. Although a number of recent studies have focused on studying properties of both layered and non-layered ODDC, few have addressed how additional physical processes such as global rotation affect its dynamics. In this work we first study how rotation affects the linear stability properties of ODDC. Using direct numerical simulations, we then analyze the effect of rotation on properties of layered and non-layered ODDC, and study how the angle of the rotation axis with respect to the direction of gravity affects layering. We find that rotating systems can be broadly grouped into two categories, based on the strength of rotation. Qualitative behavior in the more weakly rotating group is similar to non-rotating ODDC, but strongly rotating systems become dominated by vortices that are invariant in the direction of the rotation vector and strongly influence transport. We find that whenever layers form, rotation always acts to reduce thermal and compositional transport.' 
author: - Ryan Moll and Pascale Garaud title: 'The effect of rotation on oscillatory double-diffusive convection (semiconvection)' --- Introduction {#sec:Intro} ============ In the gaseous interiors of stars and giant planets, regions that are unstably stratified in temperature (Schwarzschild unstable) but stably stratified in chemical composition (Ledoux stable) are likely to be common. Fluids stratified in this way are, by definition, stable to the sort of overturning motion that occurs in standard convection. However, @walin1964 and @kato1966 showed that, given the right conditions, infinitesimal perturbations can trigger an instability which takes the form of over-stable gravity waves. This instability, often known as semiconvection but more accurately described as oscillatory double-diffusive convection (ODDC) after @spiegel1969, can lead to significant augmentation of the turbulent transport of temperature and chemical species through a fluid, and is therefore an important process to consider in evolution models of stars and giant planets. Double-diffusive fluids with the kind of stratification described here were first discussed in the geophysical scientific community in the context of volcanic lakes [@Newman1976] and the polar ocean [@Timmermans2003; @toole2006]. There, they became well-known for their propensity to form density staircases consisting of convectively mixed layers separated by stably stratified interfaces. As a result, layered convection is usually studied in experiments where a layered configuration is imposed as an initial condition, rather than following naturally from the growth and non-linear saturation of ODDC. Thermo-compositional layering was first studied in laboratory experiments involving salt water [@Turner1965; @lindenshirtcliffe1978], or aqueous sugar/salt solutions [@shirtcliffe1973], that were initialized with layers. 
The results from these studies were then used to inform studies of double-diffusive fluids in stars [@langer1985; @merryfield1995] and giant planets [@stevenson1982; @leconte2012; @nettelmann2015]. However, recent studies have taken a different approach to characterize the dynamics of double-diffusive layering. Advances in high-performance computing have made it feasible to study ODDC using 3D numerical simulations. @rosenblum2011 discovered that layers may form spontaneously in a linearly unstable system, and proposed a mechanism to explain how layer formation occurs. This mechanism, known as the $\gamma-$instability, was originally put forward by @radko2003mlf to explain layer formation in fingering convection but was found to apply to ODDC as well. The simulations of @rosenblum2011 also demonstrated the existence of a non-layered phase of ODDC which had been neglected by nearly all previous studies except that of @langer1985, who proposed a model for mixing of chemical species by semiconvection that ignores layering entirely [see reviews by @merryfield1995; @Moll2016]. Next, @Mirouh2012 identified the parameter regimes in which layers do and do not form by the $\gamma-$instability in ODDC. @Wood2013 then studied the thermal and compositional fluxes through layered ODDC, and @Moll2016 studied the transport characteristics through non-layered ODDC. In each of these studies, a fairly simple model was used in which the only body force considered was gravity. It is natural to wonder how additional physical mechanisms may affect the long term dynamics of ODDC. Global rotation is one such mechanism that is particularly relevant to the gas giant planets in our own solar system due to their rapid rotation periods ($\sim 9.9$ hours for Jupiter and $\sim 10.7$ hours for Saturn). It is also potentially important to rapidly rotating extra-solar giant planets, and massive stars. 
There have been some recent studies of rotating layered convection in double-diffusive fluids, but only for the geophysical parameter regime [@CarpTimm2014] in conditions that are not unstable to ODDC (or to the $\gamma-$instability). In this work we study the effect of global rotation on the linear stability properties and long-term dynamics associated with ODDC. In Section \[sec:mathMod\] we introduce our mathematical model and in Section \[sec:LinStab\] we study how rotation affects its linear stability properties. We analyze the impact of Coriolis forces on the formation of thermo-compositional layers in Section \[sec:thetaZero\] by studying a suite of simulations with parameter values selected to induce layer formation in non-rotating ODDC. In Section \[sec:diffParams\] we show results from two other sets of simulations at different values of the diffusivities and of the background stratification and study how rotation affects the dynamics of the non-layered phase of ODDC. In Section \[sec:incSims\] we study the effect of colatitude on layer formation. Finally, in Section \[sec:conclusion\] we discuss our results and present preliminary conclusions. Mathematical Model {#sec:mathMod} ================== The basic model assumptions for rotating ODDC are similar to those made in previous studies of the non-rotating system [@rosenblum2011; @Mirouh2012; @Wood2013; @Moll2016]. As in previous work, we consider a domain that is significantly smaller than a density scale height, and where flow speeds are significantly smaller than the sound speed of the medium. This allows us to use the Boussinesq approximation [@spiegelveronis1960] and to ignore the effects of curvature. We consider a 3D Cartesian domain centered at radius $r=r_0$, and oriented in such a way that the $z$-axis is in the radial direction, the $x$-axis is aligned with the azimuthal direction, and the $y$-axis is aligned with the meridional direction. 
We also assume constant background gradients of temperature, $T_{0z}$, and chemical composition, $\mu_{0z}$, over the vertical extent of the box, which are defined as follows: $$\begin{aligned} T_{0z} = \frac{\partial T}{\partial r} = \frac{T}{p} \frac{\partial p}{\partial r} \nabla \, , \nonumber \\ \mu_{0z} =\frac{\partial \mu}{\partial r} = \frac{\mu}{p} \frac{\partial p}{\partial r} \nabla_{\mu} \, ,\end{aligned}$$ where all the quantities are taken at $r =r_0$. Here, $p$ denotes pressure, $T$ is temperature, $\mu$ is the mean molecular weight, and $\nabla$ and $\nabla_{\mu}$ have their usual astrophysical definitions: $$\nabla = \frac{d \ln T}{d \ln p} \mbox{ , } \nabla_\mu = \frac{d \ln \mu}{d \ln p} \, \mbox{ at } r = r_0 \, .$$ We use a linearized equation of state in which perturbations to the background density profile, $\tilde{\rho}$, are given by $$\frac{\tilde{\rho}}{\rho_0} = -\alpha \tilde{T} + \beta \tilde{\mu} \, ,$$ where $\tilde{T}$, and $\tilde{\mu}$ are perturbations to the background profiles of temperature and chemical composition, respectively, and $\rho_0$ is the mean density of the domain. The coefficient of thermal expansion, $\alpha$, and of compositional contraction, $\beta$, are defined as $$\begin{aligned} \alpha &= &-\frac{1}{\rho_0} \left.\frac{\partial \rho}{\partial T}\right|_{p,\mu} \, , \nonumber \\ \beta &= &\frac{1}{\rho_0} \left.\frac{\partial \rho}{\partial \mu}\right|_{p,T} \, .\end{aligned}$$ We take the effect of rotation into account by assuming that the rotation vector is given by: $$\label{eq:RotAxis} \mathbf{\Omega} = \left| \mathbf{\Omega} \right| \left( 0,\sin{\theta},\cos{\theta} \right) \, ,$$ where $\theta$ is the angle between the rotation axis and the $z-$axis. With this assumed rotation vector, a domain placed at the poles has a rotation axis aligned with the $z$-direction ($\theta=0$), while at the equator the rotation axis is in the $y$-direction ($\theta=\frac{\pi}{2}$). 
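The linearized equation of state and the rotation vector above translate directly into a few lines of code. The following is a minimal sketch (function names and argument conventions are our own, purely for illustration, and are not part of the PADDI code used later in the paper):

```python
import numpy as np

def density_perturbation(T_pert, mu_pert, alpha, beta):
    """Linearized equation of state: rho_tilde/rho_0 = -alpha*T_tilde + beta*mu_tilde."""
    return -alpha * T_pert + beta * mu_pert

def rotation_vector(Omega, theta):
    """Rotation vector |Omega|*(0, sin(theta), cos(theta)) at colatitude theta.

    theta = 0 places the domain at the pole (rotation along z),
    theta = pi/2 at the equator (rotation along y).
    """
    return Omega * np.array([0.0, np.sin(theta), np.cos(theta)])
```

For example, a warm but compositionally heavy perturbation with $\alpha\tilde{T} = \beta\tilde{\mu}$ is neutrally buoyant, since the two contributions to $\tilde{\rho}/\rho_0$ cancel.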
Due to the small sizes of the domains considered (compared to a stellar or planetary radius) we use an $f$-plane approximation where rotation is assumed to be constant throughout the domain. In what follows we use new units for length, $[l]$, time, $[t]$, temperature, $[T]$, and chemical composition, $[\mu]$, defined as $$\begin{aligned} \label{eq:nondim} [l] &=& d = \left( \frac{\kappa_T \nu}{\alpha g \left| T_{0z} - T_{0z}^{\rm ad} \right|} \right)^{\frac{1}{4}} = \left( \frac{\kappa_T \nu}{\alpha g \frac{T}{p} \left| \frac{\partial p}{\partial r} \right| \left| \nabla - \nabla_{\rm ad} \right|} \right)^{\frac{1}{4}} \, , \nonumber \\ {[t]} &= &\frac{d^2}{\kappa_T} \, , \nonumber \\ {[T]} &= &d \left| T_{0z} - T_{0z}^{\rm ad} \right| \, , \nonumber \\ {[\mu]} &= &\frac{\alpha}{\beta} d \left| T_{0z} - T_{0z}^{\rm ad} \right| \, ,\end{aligned}$$ where $g$ is the local gravitational acceleration, $\nu$ is the local viscosity, $\kappa_T$ is the local thermal diffusivity, and where $T_{0z}^{\rm ad}$ is the adiabatic temperature gradient defined as $$T_{0z}^{\rm ad} = \frac{T}{p} \frac{dp}{dr} \nabla_{\rm ad} \mbox{ at } r = r_0\, .$$ The non-dimensional governing equations for rotating ODDC are then given by $$\begin{aligned} \label{eq:GovEq} \nabla \cdot \mathbf{u} &= &0 \, , \nonumber \\ \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} &= &-{\rm Pr}\nabla \tilde{p} + {\rm Pr}\left( \tilde{T} - \tilde{\mu} \right)\mathbf{\hat{e}}_z + {\rm Pr}\nabla^2 \mathbf{u} - \sqrt{\rm Ta^*} \left( \frac{\mathbf{\Omega}}{\left| \mathbf{\Omega} \right|} \times \mathbf{u} \right) \, , \nonumber \\ \frac{\partial \tilde{T}}{\partial t} + \mathbf{u} \cdot \nabla \tilde{T} - w &= &\nabla^2 \tilde{T} \, , \nonumber \\ \frac{\partial \tilde{\mu}}{\partial t} + \mathbf{u} \cdot \nabla \tilde{\mu} - R_0^{-1} w &= &\tau \nabla^2 \tilde{\mu} \, ,\end{aligned}$$ where $\mathbf{u} = (u,v,w)$ is the velocity field. 
This introduces the usual non-dimensional diffusion parameters ${\rm Pr}$ (the Prandtl number) and $\tau$ (the diffusivity ratio) as $${\rm Pr}=\frac{\nu}{\kappa_T} \: , \: \tau=\frac{\kappa_{\mu}}{\kappa_T} \, ,$$ where $\kappa_{\mu}$ is the compositional diffusivity, and the inverse density ratio, $R_0^{-1}$, as $$R_0^{-1} = \frac{\beta \left| \mu_{0z} \right|}{\alpha \left| T_{0z} - T_{0z}^{\rm ad} \right|} \, .$$ In a non-rotating model ${\rm Pr}$, $\tau$ and $R_0^{-1}$ are sufficient to fully describe the system. In a rotating model though, we must introduce a fourth non-dimensional parameter that controls the strength of rotation, $${\rm Ta^*} = \frac{4 \left| \mathbf{\Omega} \right|^2 d^4}{\kappa_T^2} \, ,$$ which is related to the commonly defined Taylor number in studies of rotating Rayleigh-Bénard convection as: $$\label{eq:TaNum} {\rm Ta} = \frac{4 \left| \mathbf{\Omega} \right|^2 L_z^4}{\nu^2} = {\rm Pr}^{-2} \left( \frac{L_z}{d} \right)^4 {\rm Ta}^* \, .$$ Values of ${\rm Ta^*}$ and $d$ in a stellar or planetary interior are difficult to estimate due to uncertainty in the superadiabaticity of double-diffusive regions. However we can make reasonable estimates for their upper and lower bounds. As in @nettelmann2015 we define the superadiabaticity, $\Delta T_{0z}$, as $$\Delta T_{0z} = \frac{\nabla - \nabla_{\rm ad}}{\nabla_{\rm ad}} = \frac{ T_{0z} - T_{0z}^{\rm ad} }{ T_{0z}^{\rm ad} } \, .$$ In their study values of $\Delta T_{0z}$ were typically between $10^{-2}$ and $10^2$ (see their Figure 2). This range of values, combined with data from @french2012, allows us to calculate $d$ and ${\rm Ta^*}$ as a function of depth for the interior of Jupiter. From Figure \[fig:RealTaylor\] we see that the lowest estimates for ${\rm Ta^*}$ are on the order of $10^{-3}$ (for large $\Delta T_{0z}$) and the upper bound is between $1$ and $10$ (for small $\Delta T_{0z}$). 
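The unit definitions and the conversion between ${\rm Ta^*}$ and the classical Taylor number amount to a short calculation. Below is a minimal sketch (variable names are ours; the inputs to `length_unit` are dimensional, while `Lz_over_d` is the domain aspect ratio $L_z/d$):

```python
def length_unit(kappa_T, nu, alpha, g, superad_gradient):
    """Length unit d = (kappa_T*nu / (alpha*g*|T0z - T0z_ad|))**(1/4)."""
    return (kappa_T * nu / (alpha * g * abs(superad_gradient))) ** 0.25

def taylor_numbers(Omega, d, kappa_T, nu, Lz_over_d):
    """Modified Taylor number Ta* and the corresponding classical Ta."""
    Ta_star = 4.0 * Omega**2 * d**4 / kappa_T**2
    Pr = nu / kappa_T
    Ta = Pr**-2 * Lz_over_d**4 * Ta_star  # Ta = Pr^-2 (Lz/d)^4 Ta*
    return Ta_star, Ta
```

Note that even a modest ${\rm Ta^*}$ corresponds to an enormous classical Taylor number: with ${\rm Pr}=0.1$ and $L_z = 100d$ (the domain size used below), ${\rm Ta^*}=1$ already gives ${\rm Ta}=10^{10}$.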
As we will show later, this range includes values of ${\rm Ta^*}$ which indicate significant rotational effects on the dynamics of ODDC. Larger values of $\Delta T_{0z}$ are expected in the case of layered ODDC, where $T_{0z}$ is close to $T_{0z}^{\rm ad}$, while smaller values are expected in the case of non-layered ODDC, where $T_{0z}$ is closer to the radiative temperature gradient. ![Values of ${\rm Ta^*}$ (left) and $d$ in units of meters (right) estimated for the interior Jupiter using data from @french2012. Estimates are made for various values of $\Delta T_{0z}$ between $10^{-2}$ and $10^2$.[]{data-label="fig:RealTaylor"}](Figure1.pdf){width="\linewidth"} The conditions for ODDC to occur in a non-rotating fluid are defined by ${\rm Pr}$, $\tau$, and, most importantly, $R_0^{-1}$ [@baines1969]. For a system to be unstable to infinitesimal perturbations, $R_0^{-1}$ must be within the following range: $$\label{eq:LinStabCrit} 1 < R_0^{-1} < R_c^{-1} \equiv \frac{{\rm Pr}+1}{{\rm Pr}+\tau} \, .$$ If $R_0^{-1} < 1$, the system is unstable to standard convection, and if $R_0^{-1} > R_c^{-1}$ the system is linearly stable. It should be noted that while a fluid with $R_0^{-1} > R_c^{-1}$ may be linearly stable, it is still possible for an instability to be triggered through finite amplitude perturbations [assuming that the perturbations are of the right functional form, see @huppert1976; @proctor1981]. When we discuss ODDC, however, we are referring only to the linearly unstable kind of double-diffusive convection. Linear stability analysis {#sec:LinStab} ========================= We analyze the linear stability of rotating double-diffusive convection by first linearizing the governing equations in (\[eq:GovEq\]) around $\tilde{T}=\tilde{\mu}=\mathbf{u}=0$. 
We then assume that the functional form of the perturbations is $$\{\mathbf{u},\tilde{T},\tilde{\mu}\} = \{\mathbf{\hat{u}},\hat{T},\hat{\mu}\} \exp\left( ilx + imy + ikz + \lambda t \right) \, ,$$ where the hatted quantities are the mode amplitudes, and where $l$, $m$, and $k$ are the wave numbers for the $x$, $y$, and $z$ directions, respectively. By assuming solutions of this form, we get the following dispersion relation: $$\begin{aligned} \label{eq:DispRel} \left( \lambda + {\rm Pr}K^2 \right)^2 \left( \lambda + \tau K^2 \right) \left( \lambda + K^2 \right)& \nonumber \\ - \frac{K_H^2}{K^2}{\rm Pr}\left( \lambda + {\rm Pr}K^2 \right) \left[ \left( \lambda + \tau K^2 \right) - R_0^{-1}\left( \lambda + K^2 \right) \right]& \nonumber \\ + {\rm Ta^*} \frac{\left( m\sin{\theta} + k\cos{\theta} \right)^2}{K^2} \left( \lambda + \tau K^2 \right) \left( \lambda + K^2 \right)& = 0 \, ,\end{aligned}$$ where $K=\sqrt{l^2+m^2+k^2}$ and $K_H$ is the magnitude of the horizontal wavenumber defined as $K_H=\sqrt{l^2+m^2}$. @Worthem1983 presented a similar linear stability analysis for rotating fingering convection (which has a similar dispersion relation to ODDC) which also included vertical velocity gradients and lateral gradients of temperature and chemical composition. As expected, when the additional physical effects are removed, and when the background gradients of temperature and chemical composition are assumed to be negative, their dispersion relation [Equation 6.2 of @Worthem1983] is equivalent to the one shown here. Other similar linear stability analyses of rotating double-diffusive systems can be found in @kerr1986 and @kerr1995. As in non-rotating ODDC, it can be shown that the fastest growing linear modes in the rotating case have purely vertical fluid motions which span the height of the domain (i.e., $k=0$). In fact, in Equation (\[eq:DispRel\]) we see that when $\theta=0$ and $k=0$ the rotation-dependent term drops out altogether. 
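Since Equation (\[eq:DispRel\]) is a quartic polynomial in $\lambda$, mode growth rates can be obtained numerically with a simple root find. The following is a minimal sketch (our own code, not part of PADDI), with defaults matching the fiducial parameters ${\rm Pr}=\tau=0.1$, $R_0^{-1}=1.25$:

```python
import numpy as np

def growth_rate(l, m, k, Pr=0.1, tau=0.1, R0inv=1.25, Ta=0.0, theta=0.0):
    """Root of the quartic dispersion relation with the largest real part,
    for a perturbation with (nonzero) wavenumbers (l, m, k)."""
    K2 = l**2 + m**2 + k**2
    KH2 = l**2 + m**2
    # Linear factors (lambda + a), stored highest power first
    fP = np.array([1.0, Pr * K2])    # lambda + Pr*K^2
    fT = np.array([1.0, tau * K2])   # lambda + tau*K^2
    fD = np.array([1.0, K2])         # lambda + K^2
    quartic = np.polymul(np.polymul(fP, fP), np.polymul(fT, fD))
    buoyancy = (KH2 / K2) * Pr * np.polymul(fP, fT - R0inv * fD)
    rotation = (Ta * (m * np.sin(theta) + k * np.cos(theta))**2 / K2
                * np.polymul(fT, fD))
    coeffs = np.polyadd(np.polysub(quartic, buoyancy), rotation)
    roots = np.roots(coeffs)
    return roots[np.argmax(roots.real)]
```

The returned root is the complex growth rate of the mode: a positive real part with a nonzero imaginary part corresponds to the over-stable oscillations characteristic of ODDC, and at $\theta=0$ the result for $k=0$ modes is independent of ${\rm Ta^*}$, as noted above.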
The fastest growing modes in rotating systems with $\theta=0$ are therefore identical to their non-rotating counterparts both in horizontal wavenumber and growth rate. However, when $\theta = 0$ rotation does affect modes with $k \neq 0$, and always acts to reduce their growth rates. This is illustrated in Figure \[fig:LinStabPhase\] which shows mode growth rates as a function of $k$ and $K_H$ for various values of ${\rm Ta^*}$. In this “polar” configuration, the mode growth rate only depends on the total horizontal wavenumber $K_H$, and not on $l$ or $m$ individually. As rotation increases we see that modes with $k \neq 0$ grow more slowly or become stable, while only modes with very low $k$ or $k = 0$ remain unstable. ![In each of the panels, ${\rm Pr}=\tau=0.1$, $R_0^{-1}=1.25$. Panels (a-c): Growth rates versus horizontal and vertical wave numbers for stated values of ${\rm Ta^*}$ with $\theta=0$. Panels (d-f): Surface of null growth rate for ${\rm Ta^*} = 1$ and stated values of $\theta$. The line shows the axis of $l$ wavenumbers. All points on this axis are unaffected by rotation, including the fastest-growing modes. []{data-label="fig:LinStabPhase"}](Figure2.pdf){width="\linewidth"} When $\theta \neq 0$ the fastest growing modes still have $k=0$. However, to avoid the attenuating effect of rotation on their growth rates, they must satisfy the additional constraint, $$\label{eq:InclinedConstraint} m\sin{\theta} = -k\cos{\theta} \, .$$ Consequently, the fastest growing modes must have both $m=0$ and $k=0$. Because of this extra constraint, there are fewer modes that grow at the fastest rate. This is well illustrated in Figure \[fig:LinStabPhase\] where we see that in rotating ODDC there is a ring of modes, inclined at an angle of $\theta$, that are unaffected by rotation. 
When $\theta \neq 0$, this ring intersects the $k=0$ plane at only two points meaning that there are only two fastest growing modes whose growth rates are not diminished by the effects of rotation. These unaffected fastest growing modes take the form of invariant vertically oscillating planes, spanned by the direction of gravity and the rotation axis (see Section \[sec:incSims\] for more details on this limit). Simulations with $\theta=0$ {#sec:thetaZero} =========================== Reproducing the conditions of stellar or planetary interiors in laboratory experiments is practically impossible, so in order to understand the development of rotating ODDC beyond linear theory, we must study results from direct numerical simulations (DNS). In this section we analyze data from 3D numerical simulations run using a version of the pseudo-spectral, triply periodic, PADDI Code [@traxler2011], which has been modified to take into account the effects of rotation. Each simulation is run with ${\rm Pr}=\tau=0.1$, $R_0^{-1}=1.25$. We have chosen these values because non-rotating simulations at these parameters have been found to spontaneously form layers [as predicted by $\gamma$-instability theory, see @Mirouh2012] which allows us to evaluate how global rotation affects the formation and evolution of these layers. We focus on 5 simulations with ${\rm Ta^*}=0,0.01,0.1,1$ and $10$. Based on their qualitative behavior, we consider the simulations with ${\rm Ta^*}=0.01,$ and $0.1$ to be “low ${\rm Ta^*}$" and the simulations with ${\rm Ta^*}=1$ and $10$ to be “high ${\rm Ta^*}$". Each has an effective resolution of $384^3$ mesh points and the simulation domains have dimensions of $(100d)^3$. The simulations are initialized with random infinitesimal perturbations to the temperature field. 
When studying the behavior of rotating ODDC using DNS, the quantities of greatest relevance to astrophysical models are the vertical fluxes of temperature and chemical composition through the domain. We express these fluxes in terms of thermal and compositional Nusselt numbers, ${\rm Nu}_T$ and ${\rm Nu}_{\mu}$, which are measures of total fluxes (turbulent $+$ diffusive) in units of the diffusive flux. Using the non-dimensionalization described in Section \[sec:mathMod\], ${\rm Nu}_T$ and ${\rm Nu}_{\mu}$ are expressed as $$\begin{aligned} {\rm Nu}_T(t) &= &1 + \langle \tilde{w}\tilde{T} \rangle = 1 + \left\langle \left| \nabla \tilde{T} \right|^2 \right\rangle \, , \\ {\rm Nu}_{\mu}(t) &= &1 + \frac{\langle \tilde{w}\tilde{\mu} \rangle}{\tau R_0^{-1}} = 1 + \frac{\left\langle \left| \nabla \tilde{\mu} \right|^2 \right\rangle}{\left(R_0^{-1}\right)^2} \, ,\end{aligned}$$ where angle brackets denote an average over all three spatial dimensions, and $\left| \nabla \tilde{T} \right|^2$ and $\left| \nabla \tilde{\mu} \right|^2$ are the thermal and compositional dissipations [for a detailed explanation of the dissipations, see @Wood2013; @Moll2016]. In practice, we are most interested in the capacity of ODDC to induce vertical turbulent mixing. We therefore quantify transport in terms of the non-dimensional turbulent flux of temperature, ${\rm Nu}_T-1$, and the non-dimensional turbulent flux of chemical species, ${\rm Nu}_{\mu}-1$. These quantities can also be viewed as the ratio of turbulent diffusivity to the microscopic diffusivity for each transported quantity. We also note that for astrophysical objects it is usually possible to estimate the heat flux by observing the intrinsic luminosity, but direct measurements of the compositional flux are more difficult to obtain. However, the rate of compositional transport may be inferred through observations of the heat flux if a set relationship exists between them. 
For this reason we also express our results in terms of the total inverse flux ratio, $\gamma_{\rm tot}^{-1}$, given (non-dimensionally) by $$\gamma_{\rm tot}^{-1} = \tau R_0^{-1} \frac{{\rm Nu}_{\mu}}{{\rm Nu}_T} \, .$$ This is the ratio of the total buoyancy flux due to compositional transport, to the total buoyancy flux due to heat transport, which was first discussed by @stevenson1977. This ratio is typically smaller than one in the double-diffusive regime when significant turbulent mixing occurs, and describes what fraction of the total energy flux can be used to mix high-$\mu$ chemical species upwards. The inverse flux ratio is also a crucial player in the $\gamma-$instability theory: indeed, as shown by @Mirouh2012, a necessary and sufficient condition for layer formation in ODDC is that $\gamma_{\rm tot}^{-1}$ be a decreasing function of $R_0^{-1}$. Furthermore, $d \gamma_{\rm tot}^{-1} / dR_0^{-1}$ controls the growth rate of layering modes. Finally, measuring how the relative influence of rotation changes as rotating simulations evolve offers insight into how our results may scale to larger systems. We measure the influence of rotation with a Rossby number (the ratio of the inertial force to the Coriolis force), which is usually defined as a turbulent velocity divided by the product of a length scale and the rotation rate. Here, we define the Rossby number as $$\label{eq:RoNum} {\rm Ro} = \frac{u_{\rm h,rms}}{2 \pi L_h \sqrt{\rm Ta}^*} \, ,$$ where $u_{\rm h,rms}$ is the rms horizontal velocity, and $L_h$ is the expectation value of the horizontal length scale of turbulent eddies over the power spectrum, defined as $$L_h = \frac{ \sum_{l,m,k} \frac{\left( |\hat u_{lmk}|^2 + |\hat v_{lmk}|^2 \right)}{\sqrt{l^2 + m^2}} }{ \sum_{l,m,k} (|\hat u_{lmk}|^2 + |\hat v_{lmk}|^2 ) } \, ,$$ where $\hat u_{lmk}$ and $\hat v_{lmk}$ are the amplitudes of the Fourier modes of $u$ and $v$, respectively, with wavenumber $(l,m,k)$. 
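These spectral definitions can be evaluated directly from gridded velocity data. The following is a minimal sketch with conventions of our own choosing: a triply periodic cubic box of side $2\pi$ so that wavenumbers are integers, with the $K_H=0$ modes excluded from both sums since the weight $1/K_H$ is undefined there:

```python
import numpy as np

def horizontal_scale_and_rossby(u, v, Ta_star):
    """Spectral eddy scale L_h and Rossby number Ro from the horizontal
    velocity fields u, v on a (2*pi)-periodic cubic grid."""
    n = u.shape[0]
    # Spectral energy in the horizontal velocity components
    E = np.abs(np.fft.fftn(u))**2 + np.abs(np.fft.fftn(v))**2
    wav = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers on a 2*pi box
    KH = np.sqrt(wav[:, None, None]**2 + wav[None, :, None]**2) * np.ones_like(E)
    mask = KH > 0                        # drop K_H = 0 modes (assumption)
    Lh = np.sum(E[mask] / KH[mask]) / np.sum(E[mask])
    u_h_rms = np.sqrt(np.mean(u**2 + v**2))
    Ro = u_h_rms / (2.0 * np.pi * Lh * np.sqrt(Ta_star))
    return Lh, Ro
```

As a sanity check, a single horizontal mode $u = \cos x$ yields $L_h = 1$, the inverse of its wavenumber.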
We define ${\rm Ro}$ this way because in systems where $\theta=0$, only horizontal velocity components are affected by rotation. Growth and saturation of the linear instability ----------------------------------------------- Figure \[fig:basicInstNoInc\] shows the turbulent compositional flux as a function of time for each of the simulations with ${\rm Pr} = \tau = 0.1$, and $R_0^{-1} = 1.25$, focusing on the growth and saturation of the basic instability of ODDC. It clearly shows that the growth of the linear instability in simulations of rotating ODDC (with $\theta = 0$) behaves similarly to the non-rotating case. This is not surprising since the overall growth of the linear instability is dominated by the fastest growing modes, which in this case are completely unaffected by rotation (see Section \[sec:LinStab\]). ![Exponential growth and early stages of the non-linear saturation of the turbulent compositional flux for simulations with ${\rm Pr}=\tau=0.1$, $R_0^{-1}=1.25$, $\theta=0$ and stated values of ${\rm Ta^*}$. The growth rates are independent of ${\rm Ta^*}$. The fluxes immediately after non-linear saturation are also more-or-less independent of ${\rm Ta^*}$, except for the cases with ${\rm Ta}^* = 0$ and ${\rm Ta}^* = 10$ (see text for details).[]{data-label="fig:basicInstNoInc"}](Figure3.pdf){width="0.5\linewidth"} After the initial growth of the linear instability, each simulation reaches a non-linear saturation (at around $t=300$ in each case) and becomes homogeneously turbulent. Figure \[fig:basicInstNoInc\] shows that the compositional flux in the homogeneously turbulent phase is roughly independent of ${\rm Ta^*}$ at low ${\rm Ta^*}$. For ${\rm Ta^*}=0.01, 0.1$ and $1$ the mean fluxes during this phase are statistically similar to one another, while the most rapidly rotating simulation (${\rm Ta^*}=10$) reaches a plateau that is slightly higher than the others. 
Note that the compositional flux in the non-rotating simulation (${\rm Ta^*}=0$) behaves differently, because in this case layers begin to form almost immediately after saturation. This causes it to continue to grow after saturation (albeit at a slower rate), never achieving a quasi-steady state as the rotating simulations do. We now look in more detail at the behavior of the low Ta\* and high Ta\* sets of simulations. Low $\rm Ta^{*}$ simulations {#sec:LowTa} ---------------------------- Figure \[fig:fig\_flux\] shows that in low ${\rm Ta^*}$ simulations the homogeneously turbulent phase (where fluxes remain more or less statistically steady) is followed by a series of step-wise increases in the compositional (and thermal) flux, which are indicative of layers that form spontaneously through the $\gamma$-instability and then merge progressively over time. In each case, three layers initially form which then merge into two, and ultimately into a single layer with a single interface. This final configuration is statistically stationary. We therefore find the qualitative evolution of layers to be consistent with previous studies of non-rotating layered ODDC [@rosenblum2011; @Wood2013]. However, the progressive increase in rotation rate introduces quantitative differences between rotating and non-rotating cases, even at low ${\rm Ta^*}$. The rotation rate clearly affects the time scales for layer formation and layer mergers, with stronger rotation leading to delays in both processes. ![Long-term behavior of the turbulent compositional flux (left) and of $\gamma_{\rm tot}^{-1}$ (right) for stated values of ${\rm Ta^*}$. In each simulation, ${\rm Pr}=\tau=0.1$, $R_0^{-1}=1.25$, and $\theta=0$. In low ${\rm Ta^*}$ simulations, the turbulent compositional flux increases in a stepwise manner indicative of layer formation, while in the high ${\rm Ta^*}$ cases there is no clear evidence for similar stepwise increases. 
[]{data-label="fig:fig_flux"}](Figure4.pdf){width="\linewidth"}

The formation of layers can be understood quantitatively by studying the growth of “layering modes” predicted by $\gamma$-instability theory [as in @stellmach2011; @rosenblum2011; @Mirouh2012 for example]. Each layering mode corresponds to a horizontally invariant, vertically sinusoidal perturbation to the background density profile. To analyze them, we therefore look at the amplitude of the Fourier modes of density perturbations with wave numbers $(0,0,k_n)$, where $k_n=\frac{2 \pi n}{L_z}$, with $L_z = 100d$ the domain height and $n$ the number of layers in the process of forming. The evolution of the $(0,0,k_2)$ and $(0,0,k_3)$ modes as a function of time for each of the three low ${\rm Ta^*}$ simulations is shown in Figure \[fig:LayerModes\].

![Time series of the amount of energy in layering modes $(0,0,k_2)$ (left) and $(0,0,k_3)$ (right) for simulations with ${\rm Pr}=\tau=0.1$, $R_0^{-1} = 1.25$, and $\theta=0$, for various values of ${\rm Ta}^*$. Layering modes are horizontally invariant perturbations to the background profiles of temperature and chemical composition. Also shown are the theoretical amplitudes these layering modes must attain in order to trigger layered convection. Perturbation amplitudes in the low ${\rm Ta^*}$ regime attain this amplitude, but fall short in the ${\rm Ta^*}=1$ simulation. \[fig:LayerModes\]](Figure5.pdf){width="\linewidth"}

For simulations with ${\rm Ta^*} = 0, 0.01$ and $0.1$, $(0,0,k_3)$ is the mode that initially grows to have the largest amplitude, which explains why the staircases first form with three layers. We see that, at low ${\rm Ta^*}$, the $(0,0,k_3)$ modes all initially grow at roughly the same rate.
This is unsurprising, since rotation does not have a direct effect on the $\gamma$-instability: Coriolis terms only appear in the momentum equation in (\[eq:GovEq\]), which is ignored in the mean-field theory upon which the $\gamma$-instability is based [@Mirouh2012]. Rotation could in principle have an indirect influence on layer formation by significantly affecting the turbulent fluxes in the homogeneously turbulent phase and changing the relationship between $\gamma^{-1}_{\rm tot}$ and $R_0^{-1}$, but as we see in Figure \[fig:basicInstNoInc\] this is not the case in the low ${\rm Ta^*}$ regime. To understand the delay in the formation of layers we must instead look at the amplitude that the density perturbations must achieve in order to trigger the onset of layered convection. Indeed, rotation is well known to delay the onset of instability in the case of thermal convection between parallel plates [@chandrasekhar1961], so by analogy, we expect that the localized positive density gradients caused by the growth of layering modes must be larger to trigger convective overturning and cause the staircase to appear. In the Appendix, we estimate the critical density gradient needed to trigger convection in rotating Rayleigh-Bénard convection. Using this result, we then compute the amplitude the layering modes must achieve, as a function of $k_n$ and ${\rm Ta^*}$, to be $$\begin{aligned}
\label{eq:LayerModeEq}
\left| A_n \right| = \left| \frac{ \frac{3\pi^4}{H_n^4}\left(\frac{H_n^4{\rm Ta}^*}{2{\rm Pr}^2\pi^4}\right)^{\frac{2}{3}} + \frac{27\pi^4}{4H_n^4} + \left( R_0^{-1} - 1 \right) }{2k_n} \right| \, ,\end{aligned}$$ where $H_n = L_z / n = 2\pi/k_n$ is the nondimensional layer height associated with the layering mode $(0,0,k_n)$. This amplitude is shown in Figure \[fig:LayerModes\] for each of the $(0,0,k_3)$ modes in the low ${\rm Ta^*}$ simulations.
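As a concrete check on how rotation raises this threshold, Equation (\[eq:LayerModeEq\]) can be evaluated directly. The short Python sketch below is ours (the function name and the loop over ${\rm Ta^*}$ are illustrative), with parameters matching the fiducial simulations (${\rm Pr}=0.1$, $R_0^{-1}=1.25$, $L_z=100d$):

```python
import math

def critical_amplitude(n, Ta_star, Pr=0.1, R0_inv=1.25, L_z=100.0):
    """Critical layering-mode amplitude |A_n| from Eq. (eq:LayerModeEq).

    H_n = L_z / n is the nondimensional layer height and
    k_n = 2 pi n / L_z the vertical wavenumber of the (0, 0, k_n) mode.
    """
    H_n = L_z / n
    k_n = 2.0 * math.pi * n / L_z
    rotating = (3.0 * math.pi**4 / H_n**4) \
        * (H_n**4 * Ta_star / (2.0 * Pr**2 * math.pi**4)) ** (2.0 / 3.0)
    non_rotating = 27.0 * math.pi**4 / (4.0 * H_n**4)
    return abs((rotating + non_rotating + (R0_inv - 1.0)) / (2.0 * k_n))

# The threshold for the n = 3 mode rises with the rotation rate:
for Ta_star in (0.0, 0.01, 0.1, 1.0):
    print(f"Ta* = {Ta_star}: |A_3| = {critical_amplitude(3, Ta_star):.3f}")
```

The monotonic rise of $|A_3|$ with ${\rm Ta^*}$ is the quantitative counterpart of the delayed layer formation discussed above.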
Consistent with this picture, the layering modes stop growing shortly after achieving their respective critical amplitudes (except for the ${\rm Ta^*} = 1$ case, see Section \[sec:highTa\]). This indicates that layered convection has commenced, taking the form of turbulent convective plumes bounded by freely moving, stably stratified interfaces. In each case the mode $(0,0,k_3)$ is then overtaken by modes $(0,0,k_2)$ and ultimately $(0,0,k_1)$ (not shown here). These multi-layer phases are metastable in that they persist over many eddy-turnover times before merging. Snapshots of the 3-, 2-, and 1-layered phases for ${\rm Ta^*}=0$ and ${\rm Ta^*}=0.1$ are shown in Figures \[fig:snapTa0\]b and \[fig:snapTa10\]b.

![(a) Density profiles and (b) snapshots of the chemical composition field in the 3-, 2-, and 1-layered phases for a non-rotating simulation (${\rm Ta^*}=0$) with ${\rm Pr}=\tau=0.1$ and $R_0^{-1}=1.25$. []{data-label="fig:snapTa0"}](Figure6.pdf){width="0.9\linewidth"}

![(a) Density profiles and (b) snapshots of the chemical composition field in the 3-, 2-, and 1-layered phases for a simulation with ${\rm Ta^*}=0.1$, ${\rm Pr}=\tau=0.1$, $R_0^{-1}=1.25$ and $\theta=0$. Noteworthy are the layer interfaces, which are more stably stratified than in the non-rotating case. There is also a larger positive density gradient in the layers themselves.[]{data-label="fig:snapTa10"}](Figure7.pdf){width="0.9\linewidth"}

Rotation also has a strong influence on several aspects of the dynamics of layered convection, including the mean density profile within the layers, the stability of the interfaces, and, as mentioned earlier, the merger timescale. In Figures \[fig:snapTa0\]a and \[fig:snapTa10\]a, horizontally averaged density profiles show in greater detail the structure of the layers themselves in the three-, two-, and one-layered phases.
Stronger rotation is correlated with larger positive density gradients in the layers themselves, which in turn necessarily leads to more stably stratified layer interfaces (at fixed $R_0^{-1}$). The increase with rotation rate of the density gradients within the layers is similar to what occurs in rotating Rayleigh-Bénard convection [@julien1996]. It is usually argued that turbulent buoyancy mixing by convection adjusts the mean density gradient (outside of any potential boundary layers) to a state of marginal stability. Combined with the fact that the theoretical critical density gradient for marginal stability increases with ${\rm Ta^*}$ (see Equation (\[eq:LayerModeEq\])), our results are therefore not surprising. To study this quantitatively, we first calculate the $z$-derivative of a horizontally averaged density profile and then estimate the gradient in each layer by fitting profiles from a range of time steps over which the layer is stable. We repeat this procedure for the 3-, 2-, and 1-layered stages in the low $\rm{Ta}^*$ simulations. Figure \[fig:DensGrad\] shows this data as a function of the product of ${\rm Pr}$ and the thermal Rayleigh number which, in our non-dimensionalization, is defined as $${\rm Ra}_T = \frac{g\alpha \left| T_{0z} - T_{0z}^{\rm ad} \right| \left( H_n d \right)^4}{\kappa_T \nu} = H_n^4 \, .$$ Also included are theoretical values for the density gradient, calculated using Equation (\[eq:DensGradAp\]) in the appendix and given by $$\frac{d\langle \rho \rangle_H}{dz} = \frac{3 \pi^4}{H_n^4} \left( \frac{H_n^4{\rm Ta^*}}{2{\rm Pr}^2\pi^4} \right)^{\frac{2}{3}} + \frac{27\pi^4}{4H_n^4} \, ,$$ where $\langle \cdot \rangle_H$ represents a horizontal average.

![Measurements of horizontally averaged density gradients in layers for two low ${\rm Ta^*}$ simulations. Also shown are predicted density gradients for systems with given rotation rates and layer heights.
Both simulations have ${\rm Pr}=\tau=0.1$ and $R_0^{-1} = 1.25$.[]{data-label="fig:DensGrad"}](Figure8.pdf){width="0.55\linewidth"}

We indeed find that the density gradients in layers are largest for the simulation with the higher rotation rate (${\rm Ta^*}=0.1$), and that the results from this simulation also fit well with the prediction. The case with ${\rm Ta^*}=0.01$, on the other hand, shows much greater variability in density gradient, particularly at larger layer heights, making it difficult to determine whether or not the theoretical predictions are valid. From Figure \[fig:DensGrad\] we also see that in both cases the steepening of the density gradient is somewhat greater for smaller layers. However, this dependence of density gradient on layer height is weaker than the theory suggests. In Figure \[fig:fig\_flux\] we see that the simulation with ${\rm Ta^*}=0.1$ spends a greater amount of time in the three- and two-layered phases than either of the other two simulations, and consequently takes about twice as long as the non-rotating run (${\rm Ta^*}=0$) to reach the one-layered phase. The root cause is related to the lower supercriticality of convection within the layers, combined with the increased stability of the interfaces. Through inspection of the density profiles of our layered simulations, we see that the positions and shapes of the interfaces in rotating simulations have less variability with time compared with non-rotating simulations. We also see in Figure \[fig:fig\_flux\] that the fluxes in the rotating layered systems oscillate less than they do in the non-rotating ones. @Wood2013 already discussed these large-amplitude oscillations in non-rotating simulations and attributed them to the presence of large plumes of fluid periodically punching through interfaces, causing spikes in transport.
In our layered rotating simulations (particularly the ${\rm Ta^*}=0.1$ case) the absence of large amplitude oscillations in the layered phase suggests that the motion of these plumes is inhibited, possibly because convection is weaker and the interfacial gradients are stronger. Since the plumes could be key players in the layer merger events, their suppression in the rotating simulations could also explain why the merger timescale is longer. All of these effects also contribute to the reduction of mean fluxes of temperature and chemical composition through density staircases in rotating ODDC compared with the non-rotating case. To show this quantitatively, Table 1 presents measurements of ${\rm Nu}_T-1$ and ${\rm Nu}_{\mu}-1$ (with standard deviations) for each value of ${\rm Ta^*}$. For low ${\rm Ta^*}$ simulations, which clearly form convective layers, the fluxes are measured in the one-layered phase. The simulation with ${\rm Ta^*}=0.01$ shows $2\%$ and $7\%$ decreases in thermal and compositional fluxes, respectively, compared to the non-rotating simulation, while the ${\rm Ta^*}=0.1$ simulation shows $38\%$ and $44\%$ reductions.

   ${\rm Ta^*}$   ${\rm Ta}$   ${\rm Nu}_T - 1$   ${\rm Nu}_{\mu} - 1$   $\gamma_{\rm tot}^{-1}$
  -------------- ------------ ------------------ ---------------------- -------------------------
   0              0            25.3 $\pm$ 10.8    149.8 $\pm$ 84.4       0.68 $\pm$ 0.12
   0.01           1            24.5 $\pm$ 5.2     139.0 $\pm$ 37.5       0.68 $\pm$ 0.074
   0.1            10           15.5 $\pm$ 4.0     82.9 $\pm$ 28.4        0.62 $\pm$ 0.067
   1              100          10.2 $\pm$ 2.1     46.5 $\pm$ 12.4        0.52 $\pm$ 0.061
   10             1000         14.4 $\pm$ 3.1     61.7 $\pm$ 14.7        0.51 $\pm$ 0.061
   10 (narrow)    1000         44.0 $\pm$ 11.2    216.5 $\pm$ 58.9       0.62 $\pm$ 0.16

  : Non-dimensional mean turbulent thermal and compositional fluxes through the domain. Mean values are calculated by taking time averages of the turbulent fluxes in the ultimate statistically stationary state achieved by the simulation. In each case, ${\rm Pr}=\tau=0.1$, $R_0^{-1}=1.25$ and $\theta=0$.
For the cases with ${\rm Ta}^* = 0$, 0.01 and 0.1, these fluxes are measured in the 1-layered phase. For the cases with ${\rm Ta}^* = 1$ and 10, fluxes are measured once the system reaches a statistically steady state (see Figure \[fig:fig\_flux\]). \[tab:FluxTable\]

@Wood2013 showed that fluxes in non-rotating layered ODDC follow a power-law scaling that depends on the product of the Prandtl number and the thermal Rayleigh number. Figure \[fig:LayerSats\] shows the mean non-dimensional turbulent compositional flux as a function of ${\rm PrRa}_T$ for each of our low ${\rm Ta^*}$ simulations (${\rm Ta^*}=0$, $0.01$, $0.1$). To collect this data, average fluxes were computed in the 1-, 2-, and 3-layered phases of each simulation.

![Non-dimensional, mean turbulent compositional flux as a function of ${\rm Pr Ra}_T$ for low ${\rm Ta^*}$ simulations with ${\rm Pr}=\tau=0.1$, $R_0^{-1}=1.25$. Mean values are computed by time averaging the instantaneous turbulent compositional flux in the 3-, 2-, and 1-layered phases. In the ${\rm Ta^*}=0.1$ simulations rotation acts to reduce the turbulent compositional flux in each layered phase. However, roughly the same power law applies to all simulations. \[fig:LayerSats\]](Figure9.pdf "fig:"){width="0.5\linewidth"} \[fig:LayerSats\_a\]

We clearly see that rotation leads to reduced transport rates in layered convection. This is not entirely surprising, because rotation is known to reduce the convective efficiency in Rayleigh-B[é]{}nard convection [@Rossby1969]. Bearing in mind the very limited amount of data available, we nevertheless attempt to fit it with flux laws of the form ${\rm Nu}_T-1 = A({\rm PrRa}_T)^a$ and ${\rm Nu}_{\mu}-1 = B({\rm PrRa}_T)^b$, using a nonlinear least-squares fit. The results are presented in Table 2. We find that rotation affects the constants of proportionality ($A$ and $B$) much more than it affects the exponent of ${\rm Pr}{\rm Ra}_T$ ($a$ and $b$).
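The fitting procedure can be sketched as follows. The data below are synthetic, generated from the non-rotating best-fit values of Table 2 with artificial scatter, purely to illustrate the nonlinear least-squares approach; they are not the actual simulation measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def flux_law(pr_ra, coeff, expo):
    # Nu - 1 = A (Pr Ra_T)^a
    return coeff * pr_ra ** expo

# Synthetic illustration: Pr Ra_T = Pr H_n^4 for n = 3, 2, 1 layers
# (Pr = 0.1, L_z = 100), with values drawn from the non-rotating fit
# A = 0.076, a = 0.32, plus 5% artificial noise.
rng = np.random.default_rng(0)
pr_ra = 0.1 * (100.0 / np.array([3, 2, 1])) ** 4
nu_minus_1 = flux_law(pr_ra, 0.076, 0.32) * (1.0 + 0.05 * rng.standard_normal(3))

popt, _ = curve_fit(flux_law, pr_ra, nu_minus_1, p0=(0.1, 0.3))
print(popt)  # recovers roughly (0.076, 0.32)
```

With only three points per layered phase, the fit is under-constrained in practice, which is why the caveat about the limited amount of data matters.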
For ${\rm Ta^*}=0.01$, rotation has a minimal effect on the fluxes in each layered phase and the relationship between flux and ${\rm Pr}{\rm Ra}_T$ is the same as in the non-rotating case [@Wood2013]. For ${\rm Ta^*}=0.1$, however, rotation reduces the coefficient $A$ by almost a factor of $5$ and increases the exponent $a$ by around $15\%$. There is evidence, however, that this change in the exponent may be due to the fact that the relative effect of rotation decreases for increasing values of ${\rm Pr}{\rm Ra}_T$. Indeed, Figure \[fig:RoPlot\] shows an increase in Rossby number as layers merge in low ${\rm Ta^*}$ simulations (see Equation (\[eq:RoNum\]) for the definition of the Rossby number). This suggests that for sufficiently large layer heights rotational effects could become negligible, and the flux law probably tends to the one found by @Wood2013. This will need to be verified in simulations using larger computational domains.

\[tab:FitData\]

  -------------- ------------ ------- ------ ------- ------
   ${\rm Ta^*}$   ${\rm Ta}$   $A$     $a$    $B$     $b$
   0              0            0.076   0.32   0.21    0.36
   0.01           1            0.071   0.32   0.22    0.35
   0.1            10           0.016   0.37   0.035   0.42
  -------------- ------------ ------- ------ ------- ------

  : Best fits for data presented in Figure \[fig:LayerSats\]. \[tab:RaTFits\]

![Rossby number Ro (left) and average horizontal lengthscale of turbulent eddies $L_h$ (right) for simulations with ${\rm Pr}=\tau=0.1$, $R_0^{-1}=1.25$ and $\theta = 0$, for various values of ${\rm Ta^*}$. Noteworthy is that ${\rm Ro}$ increases as layers merge in the low ${\rm Ta^*}$ regime, suggesting a decreased influence of rotation.
Also note how the horizontal length scale in the high ${\rm Ta^*}$ simulations, which host a large-scale vortex, is constrained by the domain size.[]{data-label="fig:RoPlot"}](Figure10.pdf){width="\linewidth"}

High ${\rm Ta^{*}}$ simulations {#sec:highTa}
-------------------------------

In contrast to the low ${\rm Ta^*}$ case, the behavior of high ${\rm Ta^*}$ simulations (${\rm Ta^*}=1$ and $10$) is radically different from that described in studies of non-rotating ODDC. In Figure \[fig:fig\_flux\] we see that neither of the high ${\rm Ta^*}$ simulations shows clear stepwise increases in either the compositional flux or $\gamma_{\rm tot}^{-1}$. Instead we see turbulent fluxes that grow slowly and oscillate rapidly after saturation of the linear instability until they reach a highly variable statistically stationary state. The ${\rm Ta^*} = 1$ and ${\rm Ta^*} = 10$ simulations are themselves quite different from one another. Figure \[fig:snapTa100\] shows that the growth of step-like density perturbations through the $\gamma$-instability occurs in the ${\rm Ta^*}=1$ simulation just as it does in the low ${\rm Ta^*}$ cases. However, from Figure \[fig:LayerModes\] we see that their amplitudes never become large enough to trigger the onset of convection. The absence of the standard stepwise increase in the fluxes associated with the transition to layered convection also supports the idea that the latter does not happen in this simulation (see Figure \[fig:fig\_flux\]). Interestingly, the snapshot of chemical composition shown in Figure \[fig:snapTa100\] reveals that the system is dominated by a large-scale cyclonic vortex. Inspection of the vertical velocity field shows that it is (roughly) constant within the vortex, which is consistent with Taylor-Proudman balance but is inconsistent with a system composed of convective layers separated by interfaces that resist penetrative motion.
In some sense, it is perhaps more appropriate to consider ${\rm Ta^*}=1$ to be a transitional case rather than a high ${\rm Ta^{*}}$ case, because it displays features of both the high and low ${\rm Ta^{*}}$ regimes. At significantly higher ${\rm Ta^*}$ (in this case ${\rm Ta^*}=10$) we see from Figure \[fig:snapTa1000\] that the growth of perturbations to the density profile is completely suppressed for the duration of the simulation. Instead, after a transitional period the system becomes dominated by a cyclonic vortex similar to that observed in the ${\rm Ta^*}=1$ simulation, albeit with much stronger vorticity.

![(a) Density profiles and (b) snapshots of the chemical composition field for a simulation with ${\rm Ta}^*=1$, ${\rm Pr}=\tau=0.1$, $R_0^{-1}=1.25$ and $\theta=0$. Note the presence of both a large scale vortex and layers. []{data-label="fig:snapTa100"}](Figure11.pdf){width="0.8\linewidth"}

![(a) Density profiles and (b) snapshots of the chemical composition field for a simulation with ${\rm Ta}^*=10$, ${\rm Pr}=\tau=0.1$, $R_0^{-1}=1.25$ and $\theta=0$. Note the complete absence of perturbations to the background density profile, indicating that layering modes do not grow.[]{data-label="fig:snapTa1000"}](Figure12.pdf){width="0.9\linewidth"}

From the snapshots of vertical vorticity, $\omega_z$, in Figure \[fig:vorticity\] we see that the simulations with ${\rm Ta^*} = 1$ and $10$ have highly concentrated, vertically invariant vortex cores, necessarily surrounded by a more diffuse region of mostly anti-cyclonic vorticity (since $\iint \omega_z(x,y,z) \, dx \, dy = 0$ for all $z$). We find that, based on all available simulations, these large-scale vortices only occur in the high ${\rm Ta^*}$ regime. By comparison, the ${\rm Ta^*}=0.1$ simulation shows no large-scale coherent structure in the vorticity field (which is true of the other low ${\rm Ta^*}$ simulations as well).
![Volume-rendered plots of the component of vorticity in the $z$-direction, $\omega_z$, for three simulations with ${\rm Pr}=\tau=0.1$, $R_0^{-1}=1.25$ and $\theta = 0$. (a) ${\rm Ta^*} = 0.1$. (b) ${\rm Ta^*} = 1$. (c) ${\rm Ta^*} = 10$. Purple/blue implies positive (cyclonic) vorticity, while red/yellow implies negative (anticyclonic) vorticity. The first simulation is in the low ${\rm Ta^*}$ regime (${\rm Ta^*}=0.1$) and the other two are in the high ${\rm Ta^*}$ regime (${\rm Ta}^* = 1$ and ${\rm Ta}^* = 10$). Vertically coherent, large scale vortices are present in the high ${\rm Ta^*}$ simulations, but no large-scale coherent structures exist in the low ${\rm Ta^*}$ case.[]{data-label="fig:vorticity"}](Figure13.pdf){width="0.9\linewidth"} These features are strongly reminiscent of the large scale vortices found by @guervilly2014 in rotating Rayleigh-Bénard convection using stress-free boundary conditions. In a parameter study they found that Reynolds numbers greater than $300$ and Rossby numbers less than $0.15$ were needed for large scale vortices to form. Using the Reynolds number from @guervilly2014 defined as $${\rm Re} = \frac{w_{\rm rms} L_z}{\rm Pr} \, ,$$ where $w_{\rm rms}$ is the rms vertical velocity, we find that values of ${\rm Re}$ for our simulations are $\sim 10^3$. Meanwhile, the Rossby numbers are shown in Figure \[fig:RoPlot\] and are less than $0.1$ for high ${\rm Ta^*}$. This suggests that their vortex formation process may be applicable to our high ${\rm Ta^*}$ simulations despite the significant differences in the systems being studied (ODDC vs. Rayleigh-Bénard convection). Also as in @guervilly2014, we find that whenever large scale vortices form they always grow to fill the horizontal extent of the domain[^1]. @julien2012 proposed that this may always occur in Cartesian domains using the $f-$plane approximation regardless of box size. 
However, they argued that this would be limited in practice by the Rossby radius of deformation in astrophysical objects. Beyond that size, convection or ODDC would likely lead to the development of zonal flows in banded structures instead. The amount of energy that can be extracted from ODDC to drive large-scale vortices in the high ${\rm Ta^*}$ regime is illustrated in Figure \[fig:fig\_horizKE\], where we see that the vast majority of the kinetic energy in the ${\rm Ta^*}=1$ and $10$ simulations goes into horizontal fluid motions.

![Horizontal kinetic energy as a fraction of the total for simulations with ${\rm Pr}=\tau=0.1$, $R_0^{-1}=1.25$, and $\theta = 0$, for various values of ${\rm Ta}^*$. In the high ${\rm Ta^*}$ case this quantity is a proxy for the strength of the large-scale vortices, since they almost entirely dominate the energetics of the system. []{data-label="fig:fig_horizKE"}](Figure14.pdf){width="0.5\linewidth"}

These results showing ratios of horizontal kinetic energy to total kinetic energy are consistent with those calculated in @guervilly2014. The total amount of kinetic energy in vertical fluid motions remains roughly the same in all the simulations, however; it is the total kinetic energy of the system that is much larger for high ${\rm Ta^*}$ than for low ${\rm Ta^*}$ simulations. It is worth noting that while the thermal and compositional fluxes (see Figure \[fig:fig\_flux\]) and the vertical velocity stop growing and reach a statistically stationary state (at around $t=3500$ in both high ${\rm Ta^*}$ simulations), the total kinetic energy continues to grow (driven by the continued growth of the horizontal kinetic energy) and has not saturated by $t=6000$. This can be attributed to the fact that horizontal fluid motions are only limited by viscosity, and may only saturate on the global viscous diffusion timescale, which is $\sim 10^5$ in these simulations.
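The criteria of @guervilly2014 invoked above can be made explicit with a small helper; this is a sketch under the stated thresholds (${\rm Re} > 300$, ${\rm Ro} < 0.15$), where the function name and the example values are ours, for illustration only:

```python
def large_scale_vortex_candidate(w_rms, L_z, Pr, Ro, Re_min=300.0, Ro_max=0.15):
    """Apply the criteria of Guervilly et al. (2014): Re > 300 and Ro < 0.15.

    Re = w_rms * L_z / Pr in the nondimensional units used here.
    """
    Re = w_rms * L_z / Pr
    return Re > Re_min and Ro < Ro_max

# A high-Ta* run with Re ~ 10^3 and Ro < 0.1 satisfies both criteria,
# while an otherwise identical run with Ro ~ 0.5 does not:
print(large_scale_vortex_candidate(w_rms=1.0, L_z=100.0, Pr=0.1, Ro=0.08))  # True
print(large_scale_vortex_candidate(w_rms=1.0, L_z=100.0, Pr=0.1, Ro=0.5))   # False
```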
Unlike in the low ${\rm Ta^*}$ regime, where average fluxes are calculated by time-integrating over the quasi-steady 1-layered phase, in the high ${\rm Ta^*}$ regime we time-integrate the fluxes from $t=3500$ to the end of the simulations. From Table 1 we see that the simulation with ${\rm Ta^*}=1$ has the weakest transport in either ${\rm Ta^*}$ regime, with $58.1\%$ and $67.4\%$ reductions in thermal and compositional transport, respectively, compared to the non-rotating case. Interestingly, the ${\rm Ta^*}=10$ simulation shows a slight increase in flux over the ${\rm Ta^*}=1$ case. A possible explanation for this is that the presence of a stably stratified interface separating layers in the ${\rm Ta^*}=1$ simulation inhibits vertical motion through the large-scale vortex. This could also suggest that in the high ${\rm Ta^*}$ regime, increased rotation may actually serve to enhance transport rather than suppress it, through vertical motions whose coherence is strengthened by the vortex. However, by contrast with the layered regime, fluxes in the presence of a large-scale vortex are highly dependent on the aspect ratio of the box. In Table 1, the narrower simulation at ${\rm Ta^*}=10$ has significantly higher fluxes than its wider counterpart. This makes it challenging to scale our results to simulations with larger domains, let alone apply them to more realistic astrophysical situations. Also noteworthy in Table 1 is that $\gamma_{\rm tot}^{-1}$ in the ultimate statistically steady state is roughly the same across all simulations (namely $\gamma_{\rm tot}^{-1} \approx 0.5-0.65$), with high ${\rm Ta^*}$ simulations having a $\gamma_{\rm tot}^{-1}$ that is at most $15\%$ lower than in the low ${\rm Ta^*}$ regime.
This is significantly less than the variability that occurs due to the formation of large-scale structures (layers or vortices): Figure \[fig:fig\_flux\]b shows how $\gamma_{\rm tot}^{-1}$ increases from roughly $0.35$ in the homogeneously turbulent phase to about $0.6$ in the ultimate stages.

Varying ${\rm Pr}$, $\tau$, and $R_0^{-1}$ {#sec:diffParams}
==========================================

We now study the effect of varying ${\rm Pr}$, $\tau$, and $R_0^{-1}$ on both the quantitative and qualitative attributes of rotating ODDC discussed in the previous section. This is not meant to be an exhaustive study, but rather to test whether the conclusions from the previous section still hold.

Varying ${\rm Pr}$ and $\tau$ {#sec:varyPrTau}
-----------------------------

In order to study the effect of varying ${\rm Pr}$ and $\tau$, we show a set of simulations with ${\rm Pr} = \tau = 0.3$ and $R_0^{-1}=1.1$. As in Section \[sec:thetaZero\], we have chosen parameters at which layers form in non-rotating ODDC. Figure \[fig:vorticity03\] shows the evolution of the turbulent compositional flux as a function of time for simulations with ${\rm Ta^*}=0, 0.09, 0.9, 9,$ and $90$ (corresponding to ${\rm Ta}=0, 1, 10, 100$ and $1000$). Consistent with Section \[sec:thetaZero\], we find that at low ${\rm Ta^*}$ stepwise increases in mixing rates indicate the transition to layered convection, whereas in the high ${\rm Ta^*}$ cases no such increases occur. The transition between low and high ${\rm Ta^*}$ is still at ${\rm Ta^*} \approx 1$ (equivalently, ${\rm Ta}=10$ when ${\rm Pr}=0.3$). This shows that ${\rm Ta^*}$ is a more appropriate bifurcation parameter than ${\rm Ta}$ for determining when ODDC is rotationally dominated. As in Section \[sec:thetaZero\], layer formation only occurs in the low ${\rm Ta^*}$ regime.
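The correspondence between the two Taylor numbers quoted above is consistent with ${\rm Ta^*} = {\rm Pr}^2 \, {\rm Ta}$; note that this relation is inferred here from the $({\rm Ta}, {\rm Ta^*})$ pairs quoted in the text rather than restated from the formal definition. A minimal check:

```python
import math

def Ta_star(Ta, Pr):
    # Ta* = Pr**2 * Ta, inferred from the (Ta, Ta*) pairs quoted in the text
    return Pr**2 * Ta

# Pr = 0.3: Ta = 1000 maps to Ta* = 90; Pr = 0.1: Ta = 100 maps to Ta* = 1
print(Ta_star(1000, 0.3), Ta_star(100, 0.1))
```

This is why the bifurcation at ${\rm Ta^*} \approx 1$ corresponds to ${\rm Ta} = 100$ when ${\rm Pr}=0.1$ but to ${\rm Ta}=10$ when ${\rm Pr}=0.3$.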
As with the ${\rm Ta^*}=1$ simulation from Section \[sec:thetaZero\], which shows characteristics of both high and low ${\rm Ta^*}$ ODDC, the ${\rm Ta^*}=0.9$ simulation here develops a large-scale vortex, as well as layer-like perturbations to the background density profile (without evidence of actual layered convection). Large-scale vortices are observed in simulations with ${\rm Ta^*}=0.9$ and $9$ and look very similar to the corresponding snapshots of the ${\rm Ta^*}=1$ and $10$ simulations in Figures \[fig:snapTa100\] and \[fig:snapTa1000\] from the previous section. Interestingly, however, the large-scale vortex does not form in our most rapidly rotating simulation with ${\rm Ta^*}=90$; instead we see multiple small-scale vortices (see Figure \[fig:vorticity03\]). Analysis of ${\rm Re}$ and ${\rm Ro}$ for this simulation places it in a regime where large-scale vortices should form according to the criteria of @guervilly2014. This suggests that there may be additional constraints on the formation of large-scale vortices in ODDC, which should be determined through a more in-depth survey of parameter space in a future study. Surprisingly, the compositional fluxes for the ${\rm Ta^*}=9$ and $90$ runs are similar, which is likely a coincidence, as we saw that the fluxes in the presence of large-scale vortices depend on domain size.

![(a) Non-dimensional turbulent compositional flux for simulations with ${\rm Pr}=\tau=0.3$ and $R_0^{-1}=1.1$. One simulation is in the low ${\rm Ta^*}$ regime (${\rm Ta^*}=0.09$) and the other three are in the high ${\rm Ta^*}$ regime. (b) Snapshot of the component of vorticity in the $z$-direction for the most rapidly rotating simulation at ${\rm Ta^*}=90$, which appears to be dominated by small-scale vortices.
This may suggest that large-scale vortices only occur in a specific range of values of ${\rm Ta^*}$ (with $\theta=0$).\[fig:vorticity03\]](Figure15.pdf){width="\linewidth"}

Simulations at large $R_0^{-1}$
-------------------------------

The simulations we have presented so far used small values of $R_0^{-1}$, which are conducive to layer formation in non-rotating ODDC. However, there is a range of larger values of $R_0^{-1}$ where a system is unstable to ODDC, but where layers are not predicted to form spontaneously through the $\gamma$-instability. Previous studies have shown that, without exception, simulations in this parameter regime remain non-layered for as long as they are run. These simulations are dominated by large-scale gravity waves and were studied in depth by @Moll2016 in the context of non-rotating ODDC. In that work they found that the growth of large-scale gravity waves is associated with very moderate (but still non-zero) increases in thermal and compositional transport. However, these increases are very small compared to the increases in turbulent transport due to layers, and are likely unimportant for the purposes of stellar and planetary modeling. As a result, turbulent transport by ODDC at $R_0^{-1}$ greater than the layering threshold $R_L^{-1}$ can be ignored. We now address how rotation affects non-layered ODDC (i.e., ODDC at $R_0^{-1} > R_L^{-1}$). As in Section \[sec:thetaZero\], we present five simulations with ${\rm Ta^*} = 0, 0.01, 0.1, 1$ and $10$, and with ${\rm Pr}=\tau=0.1$ and $\theta = 0$. However, for each of these simulations we now set $R_0^{-1} = 4.25$. For comparison, for the stated values of ${\rm Pr}$ and $\tau$, $R_L^{-1} \simeq 1.7$ and the critical inverse density ratio for marginal stability is $R_c^{-1}=5.5$. As in Section \[sec:thetaZero\], we find that high $R_0^{-1}$ simulations can be divided into two general classes of behavior depending on ${\rm Ta^*}$.
As seen in the snapshots in Figure \[fig:snaps425\], low ${\rm Ta^*}$ simulations are qualitatively similar to non-rotating simulations in that they are dominated by large-scale gravity waves. The strongest gravity-wave mode in both the ${\rm Ta^*}=0$ and ${\rm Ta^*}=0.01$ simulations has three wavelengths in the vertical direction, one wavelength in the $x$-direction, and is invariant in the $y$-direction. The simulation with ${\rm Ta^*}=0.1$, by contrast, is dominated by a larger-scale mode with a single wavelength in each spatial direction. Despite their qualitative similarity, inspection of the compositional flux in Figure \[fig:fig01\_flux\] shows large reductions compared to the non-rotating simulation (${\rm Ta^*}=0$), even in the case where ${\rm Ta^*}=0.01$. To understand why this is the case, note that ${\rm Ro}$ is small even in the lowest ${\rm Ta^*}$ simulation, because the rms velocities are very small in this regime. Rotation therefore plays a role in the saturation of the gravity waves and acts to reduce their amplitudes, which in turn significantly reduces the mixing rates.

![Snapshots of the horizontal velocity field ($u$ or $v$) for simulations with ${\rm Pr}=\tau=0.1$, $R_0^{-1}=4.25$, and $\theta = 0$, for various values of ${\rm Ta^*}$: (a) ${\rm Ta}^* = 0$, (b) ${\rm Ta}^* = 0.01$, (c) ${\rm Ta}^* = 0.1$, (d) ${\rm Ta}^* = 1$, and (e) ${\rm Ta}^* = 10$. []{data-label="fig:snaps425"}](Figure16.pdf){width="0.25\linewidth"}

![Time series of the turbulent compositional flux (left) and Rossby number (right) for simulations with ${\rm Pr}=\tau=0.1$, $R_0^{-1}=4.25$, and $\theta = 0$ for various values of ${\rm Ta^*}$. []{data-label="fig:fig01_flux"}](Figure17.pdf){width="\linewidth"}

As seen in the snapshot in Figure \[fig:snaps425\]e, the ${\rm Ta^*}=10$ simulation is dominated by vertically invariant vortices, while the ${\rm Ta^*}=1$ simulation is again a transitional case, showing evidence both of gravity waves and of vortices.
A significant difference with the results of Section \[sec:thetaZero\], however, is that vortices at low $R_0^{-1}$ are large-scale, while those at high $R_0^{-1}$ are small-scale (for the same values of ${\rm Ta^*}$). This suggests that the formation of large-scale vortices requires a more unstable stratification (which leads to more turbulence) than is present in the high $R_0^{-1}$ simulations shown here. This is, again, qualitatively consistent with the findings of @guervilly2014 that large-scale vortices only form for sufficiently high Reynolds number. The most rapidly rotating simulation (${\rm Ta^*}=10$) shows a slight increase in the compositional flux compared to the non-rotating simulation but remains far less efficient than layered convection. Importantly, as with the non-rotating simulation, layers never form at any point (when $R_0^{-1} = 4.25$). Consequently, fluxes through non-layered (high $R_0^{-1}$) systems are effectively diffusive, and the conclusion from @Moll2016, namely that turbulent fluxes are negligible for non-layered systems, remains valid for all the simulations presented here.

Inclined simulations {#sec:incSims}
====================

So far, for simplicity, we have discussed simulations in which the rotation vector is aligned with the direction of gravity, and which only model conditions applicable to the polar regions of a star or giant planet. We now discuss the dynamics of ODDC at lower latitudes (i.e., simulations with $\theta \ne 0$). In what follows, we return to the parameters studied in Section \[sec:thetaZero\] (i.e., ${\rm Pr}=\tau=0.1$ and $R_0^{-1}=1.25$). We focus on two sets of simulations with ${\rm Ta^*} = 0.1$ and $10$, respectively, each comprising runs with angles $\theta = \frac{\pi}{8}$, $\frac{\pi}{4}$, $\frac{3\pi}{8}$, and $\frac{\pi}{2}$. Figure \[fig:basicInstInc\] shows the growth of the linear instability, by way of the turbulent compositional flux as a function of time, for the set of simulations with ${\rm Ta^*}=10$.
Each simulation grows at roughly the same rate regardless of inclination, which is expected from linear theory. However, there is a slight difference between the amplitudes in inclined simulations and the non-inclined simulation. This can be understood by considering that the simulations are initialized with small-amplitude random perturbations on the grid scale, and many more modes are initially attenuated in the inclined case than in the case where $\theta=0$ (see Section \[sec:LinStab\]). As a result, the amount of energy in the initial perturbations projected onto the fastest growing modes is smaller. ![Nondimensional turbulent compositional flux during the primary instability growth phase, and immediately following non-linear saturation for simulations with ${\rm Pr}=\tau=0.1$, $R_0^{-1}=1.25$, ${\rm Ta^*}=10$, and various values of $\theta$. []{data-label="fig:basicInstInc"}](Figure18.pdf){width="0.5\linewidth"} Figure \[fig:InstabSnap\] shows snapshots of the chemical composition field during the growth of the primary instability for simulations with $\theta = 0$, $\frac{\pi}{8}$, $\frac{\pi}{4}$, $\frac{3\pi}{8}$, and $\frac{\pi}{2}$. In all inclined simulations, there are prominent modes that are invariant in the direction of rotation. In simulations with smaller (or no) inclinations ($\theta = 0$ and $\frac{\pi}{8}$), the dominant modes are those with structure in both the $x$ and $y$ directions, while simulations with larger inclinations (simulations that are closer to the equator) have a strong preference for modes that are invariant in the plane spanned by the rotation and gravity vectors. ![Snapshots of the vertical velocity field during the growth of the linear instability for simulations with ${\rm Pr}=\tau=0.1$, $R_0^{-1}=1.25$, ${\rm Ta^*}=1$, and various values of $\theta$: (a) $\theta = 0$, (b) $\theta = \pi/8$, (c) $\theta = \pi/4$, (d) $\theta = 3\pi/8$ and (e) $\theta = \pi/2$.
[]{data-label="fig:InstabSnap"}](Figure19.pdf){width="0.25\linewidth"} While the behavior of the linearly unstable phase is qualitatively similar for both low and high ${\rm Ta^*}$ simulations regardless of $\theta$, we find that this is not the case after the saturation of the basic instability. While inclination has only small effects on systems in the low ${\rm Ta^*}$ regime, it has a more significant influence on post-saturation dynamics in the high ${\rm Ta^*}$ regime. Figure \[fig:IncFlux\] shows turbulent compositional fluxes for simulations in the low ${\rm Ta^*}$ regime (${\rm Ta^*}=0.1$). The fluxes in the homogeneously turbulent phase are roughly independent of $\theta$, indicating that inclination should have a minimal effect on the growth rate of layering modes through the $\gamma$-instability. Indeed, the stepwise increases in fluxes over time show that layer formation occurs at all inclinations. Inspection of the chemical composition profiles shows that the layer interfaces are perpendicular to the direction of gravity, regardless of inclination. The latter appears to affect the layer formation timescale and layer merger rate, but this could simply be due to the inherent stochasticity of the convective layers. Finally, aside from the equatorial case, we find that inclination has a minimal impact on flux in each layered phase, so the flux laws discussed in Section \[sec:LowTa\] apply more or less at all latitudes. As a result, we expect that heat and compositional fluxes through layered convection on a sphere should be fairly isotropic. ![Long-term behavior of nondimensional turbulent fluxes of composition for simulations with stated values of $\theta$ and with ${\rm Ta^*}=0.1$ (left) and ${\rm Ta^*}=10$ (right). In both sets of simulations, ${\rm Pr}=\tau=0.1$, and $R_0^{-1}=1.25$.
In the low ${\rm Ta^*}$ case the succession of layered phases is similar for polar and inclined simulations, with only small differences in layering time scales and turbulent fluxes. In the high ${\rm Ta^*}$ case, fluxes in inclined simulations are sharply attenuated compared to the polar case. []{data-label="fig:IncFlux"}](Figure20.pdf){width="\linewidth"} Figure \[fig:IncFlux\] also shows the turbulent compositional flux for simulations in the high ${\rm Ta^*}$ regime (${\rm Ta^*}=10$). The lack of clear stepwise increases indicates that layer formation is suppressed for most values of $\theta$ (as is the case in non-inclined simulations). The notable exception to this rule is the simulation at the equator ($\theta=\frac{\pi}{2}$), where layers are observed to form even in the high ${\rm Ta^*}$ case. Why they form in this case remains to be determined. Another major difference between inclined and non-inclined simulations is that there is no evidence for the large scale vortices in simulations with $\theta \ne 0$, even though they are observed in $\theta=0$ simulations at the same parameters (see Section \[sec:thetaZero\]). This is illustrated in Figure \[fig:IncVort\], which shows the quantity $\omega_{yz}=\frac{ \mathbf{\omega} \cdot \mathbf{\Omega} }{\left| \mathbf{\Omega} \right|}$. There are many smaller scale vortices aligned with the rotation axis but no large scale vortex. This is true even in the simulation with the smallest inclination ($\theta=\frac{\pi}{8}$), bringing into question whether large scale vortices would be common in stars and planets except exactly at the poles. The inclination of the small scale vortices is associated with smaller vertical transport, and Figure \[fig:IncFlux\] suggests that mixing becomes less efficient as $\theta$ gets larger (except very close to the equator). ![Snapshots of $\omega_{yz}$, the component of the vorticity parallel to the rotation axis, during saturation of the linear instability.
Shown are simulations with ${\rm Pr}=\tau=0.1$, $R_0^{-1}=1.25$, ${\rm Ta^*}=1$, and (a) $\theta=\frac{\pi}{8}$, (b) $\theta=\frac{\pi}{4}$ and (c) $\theta=\frac{3\pi}{8}$. In each case, coherent small scale vortices are aligned with the axis of rotation. \[fig:IncVort\]](Figure21.pdf){width="\linewidth"} Conclusion {#sec:conclusion} ========== Summary and discussion ---------------------- The main result of this study is the discovery of two distinct regimes in rotating ODDC depending on whether the rotation rate $\Omega$ is high or low. We find that the most appropriate parameter for determining if a system is in one regime or the other is ${\rm Ta^*} =\frac{4 \Omega^2 d^4}{\kappa_T^2}$, where $d$ is given in Equation (\[eq:nondim\]) and $\kappa_T$ is the thermal diffusivity. The transition from the regime with slow rotation to the regime that is rotationally dominated occurs consistently at ${\rm Ta^*} \approx 1$. In the low $\rm{Ta^*}$ regime in polar regions (with $\theta=0$), rotating ODDC behaves in a qualitatively similar way to non-rotating ODDC. The transition to layered convection (or lack thereof) at low $\rm{Ta^*}$ is consistent with the predictions of $\gamma$-instability theory made for non-rotating ODDC [@Mirouh2012]: at parameters where layers form in non-rotating ODDC, we also observe layer formation in low $\rm{Ta^*}$ simulations. Likewise, in the simulations we ran at non-layered parameters, we find that low $\rm{Ta^*}$ simulations do not form layers, and are dominated by gravity waves like their non-rotating counterparts [@Moll2016]. We understand this to be true because the thermal and compositional fluxes immediately after saturation of the primary instability of ODDC are unaffected by rotation in this regime. Since the $\gamma$-instability only depends on these fluxes, it is similarly unaffected. 
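For concreteness, the regime criterion above is easy to evaluate numerically. The following minimal sketch (the sample values of $\Omega$, $d$, and $\kappa_T$ are purely illustrative, not taken from the simulations in this paper) computes ${\rm Ta^*}$ and classifies the regime using the ${\rm Ta^*} \approx 1$ transition found here:

```python
def ta_star(omega, d, kappa_t):
    """Modified Taylor number Ta* = 4 Omega^2 d^4 / kappa_T^2."""
    return 4.0 * omega**2 * d**4 / kappa_t**2

def regime(ta):
    """Classify using the Ta* ~ 1 transition observed in the simulations."""
    return "rotationally dominated" if ta > 1.0 else "slowly rotating"

# Illustrative (hypothetical) nondimensional values:
print(regime(ta_star(omega=0.05, d=1.0, kappa_t=1.0)))  # slowly rotating
print(regime(ta_star(omega=5.0, d=1.0, kappa_t=1.0)))   # rotationally dominated
```

Since ${\rm Ta^*}$ scales as $\Omega^2$, a factor of ten in rotation rate moves a system by two decades in ${\rm Ta^*}$, which is why the two regimes are well separated in practice.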
Given the limited number of available simulations, we cannot say definitively what effect (if any) rotation has on the layering threshold $R_L^{-1}$ (the value of the inverse density ratio, $R_0^{-1}$, below which layers are predicted to form through the $\gamma$-instability, and above which they are not), only that it is not significant in the low $\rm{Ta^*}$ simulations presented here, which are far from that threshold. However, we believe that $R_L^{-1}$ would be relatively unaffected by rotation for ${\rm Ta^*} < 1$. Beyond these qualitative similarities with non-rotating simulations, however, rotation in low $\rm{Ta^*}$ simulations has a deleterious effect on thermal and compositional transport in both the layered and non-layered parameter regimes. For a given layer height, turbulent fluxes through a thermo-compositional staircase decrease as rotation increases (e.g. by about $50\%$ in the $\rm{Ta^*}=0.1$ simulation presented in Section \[sec:LowTa\]). However, our results also suggest that this effect becomes smaller as the layer height increases (through mergers, for example). For reasonably large layer heights, we postulate that rotation has a minimal effect on ODDC, and that the flux laws originally proposed by @Wood2013 actually hold. Turbulent fluxes through non-layered ODDC in the gravity-wave-dominated phase are reduced by as much as $90\%$ compared with the non-rotating case, but this merely implies that they remain negligible as discussed by @Moll2016. Finally, low $\rm{Ta^*}$ simulations at higher colatitude $\theta$ are not significantly different from their polar counterparts. Inclination has a negligible effect on the temperature and compositional fluxes, but may induce differences in the time scales of layer formation and mergers. In the high ${\rm Ta^*}$ regime, dynamics are radically different from non-rotating and low ${\rm Ta^*}$ simulations.
Most striking is that layer formation is inhibited at low inverse density ratios (except in the equatorial case). Instead, the dynamics are dominated by vortices aligned with the direction of rotation, which span the domain. Their horizontal scales seem to depend on $R_0^{-1}$, $\theta$ and ${\rm Ta^*}$. In polar regions, we observe that some simulations become dominated by a single large scale cyclonic vortex which grows to fill the domain, similar to those observed by @guervilly2014 in rotating Rayleigh-Bénard convection. Our preliminary data show that this phenomenon may be limited to low $R_0^{-1}$, together with ${\rm Ta^*}$ between $1$ and $10$, but the precise conditions necessary for these large scale vortices to form remain to be determined. We find that large scale vortices do not occur in the most rapidly rotating simulation (${\rm Ta^*} = 90$) at low $R_0^{-1}$, in any of the high $R_0^{-1}$ simulations, or in any of the inclined simulations. In these cases the system dynamics are instead dominated by many smaller scale vortices of both polarities. Turbulent fluxes through different types of vortices vary with parameters in a complex manner, and it is therefore difficult to make general statements about them. The fluxes in the presence of large scale vortices are significant, but appear to be highly dependent on the dimensions of the domain, which makes it difficult to predict in situ mixing in a star or planet. On the other hand, fluxes in the presence of small scale vortices are not likely to be dependent on domain size, but their dependence on ${\rm Ta^*}$ and $R_0^{-1}$ has yet to be extensively studied. The most definitive aspect of the fluxes in simulations that host small scale vortices is that they are highly dependent on the inclination $\theta$, with higher inclinations causing less efficient transport (except at the equator). This is because velocities are constrained to be aligned with the axis of rotation by Taylor-Proudman effects.
It is interesting to note that in the high ${\rm Ta^*}$ regime, layers form in our equatorial simulation ($\theta=\frac{\pi}{2}$). In this run, turbulent fluxes through the layers are comparable to layered fluxes in the low ${\rm Ta^*}$ and non-rotating regimes. All of this suggests that the poles and equator may be regions of strongly enhanced temperature and compositional transport in ODDC, while turbulent mixing at latitudes in between is quenched. Finally, simulations where ${\rm Ta^*} \approx 1$ appear to be edge cases with features of both high and low ${\rm Ta^*}$. At parameters conducive to layering, simulations with ${\rm Ta^*} \approx 1$ show evidence of perturbations to the background density profiles, indicating the growth of the $\gamma$-instability. However, we also see the development of large-scale vertically invariant vortices which prevent actual layered convection from occurring. Also, when ${\rm Ta^*} \approx 1$ at non-layered, gravity-wave-dominated parameters, we see evidence of gravity waves as well as small thin vortices which are nearly vertically invariant. There are several caveats to these conclusions that should be mentioned. The dimensionality of the parameter space that would need to be explored to provide a comprehensive study of rotating ODDC is high, and comprises $(L_x,L_y,L_z)$, $\theta$, ${\rm Pr}$, $\rm \tau$, ${\rm Ta}^*$ and finally $R_0^{-1}$. Computational limitations have forced us to be highly selective about the sets of simulations explored, so this study does not constitute a comprehensive sweep of parameter space. As such, there may be behaviors that occur at unexplored parameters that are not addressed here. First of all, in the interest of reducing computational expense, we have chosen to run most of our simulations in domains with dimensions $(100d)^3$. With boxes of this size, layers always merge until a single interface remains (as in the non-rotating case).
It would be interesting to see, in a taller domain, whether rotation has a role in determining the layer height (i.e. to see if layers stop merging before reaching the one-layered phase). Wider domain sizes may also help to answer questions about the vortices present in high ${\rm Ta^*}$ simulations. In particular, they may reveal whether large scale vortices have a characteristic horizontal length, or whether they always grow to fill the domain. Wider boxes may also show if there are so far undetected large scale features emerging in systems dominated by small scale vortices. Another area of uncertainty is that the chosen values of $R_0^{-1}$, $R_0^{-1} = 1.25$ and $R_0^{-1} = 4.25$, are fairly close to the convective and marginal stability thresholds, respectively, making them somewhat extreme cases. While we do not believe that choosing less extreme parameter values would lead to dramatic qualitative changes in the results, we cannot rule this possibility out until further work has been completed. Finally, for computational reasons, the values of ${\rm Pr}$ and $\tau$ chosen for our simulations (${\rm Pr}=\tau=0.1$ and $0.3$) are substantially larger than the values in stellar interiors (where ${\rm Pr}\sim\tau\sim10^{-6}$) and the interiors of giant planets like Jupiter and Saturn (where ${\rm Pr}\sim\tau\sim10^{-3}$). Consequently, there may be additional physical effects that occur at low parameter values that are not observed here. However, the values used here may be closer to actual values for ice giants such as Uranus and Neptune, whose equations of state are influenced by the presence of water and methane ices in their atmospheres [@Redmer2011]. Prospects for stellar and planetary modeling -------------------------------------------- As summarized above, our results for the low ${\rm Ta^*}$ regime show that attenuation of the fluxes due to rotational effects is not likely to be significant for astrophysical models or observations.
In this regime we advocate use of the parameterizations presented in @Wood2013 for layered ODDC and @Moll2016 for non-layered ODDC. However, a potentially observable effect of rotating ODDC in this regime could be related to how rotation affects the structure of layers and interfaces in low ${\rm Ta^*}$ simulations. We have found in our simulations that rotation leads to layer interfaces that are more stably stratified than in non-rotating simulations. In the future, it may become possible to observe such steep density gradients in a star through asteroseismology. This line of inquiry could also be extended to gas giant planets. In Saturn, for example, it may be possible to detect density gradients using ring seismology [@Fuller2014], and in the future, we may even be able to probe the interior structure of Jupiter through detection of global modes [@gaulme2011]. In the high ${\rm Ta^*}$ regime, the results of this study have potential observational implications for thermal and compositional transport in stars and planets. The sensitivity of the turbulent heat flux to the inclination in our high ${\rm Ta^*}$ simulations suggests that the transport in a rapidly rotating giant planet could vary substantially with latitude (with higher fluxes at the poles and equator). Indeed, the gas giants in our own solar system are found to have luminosities that are independent of latitude, despite the fact that regions close to the equator receive more solar energy. Since we would expect regions of Jupiter’s or Saturn’s atmosphere that receive more radiation from the sun to have higher luminosities (because they are reradiating more solar energy), the latitudinal isotropy of the outgoing luminosity suggests that more heat from the interiors of these planets is being radiated at the poles than at other latitudes. Further study is warranted to determine if rapidly rotating ODDC contributes to this effect.
The large-scale vortices present in the polar regions of the high ${\rm Ta^*}$ simulations offer intriguing observational potential: regions with strong heat and compositional fluxes and strongly collimated vertical flows. However, there is reasonable doubt as to whether these large scale vortices represent a real physical phenomenon. We only observe them to occur in polar simulations (in a limited range of ${\rm Ta^*}$), and it is possible that even a slight misalignment between the direction of gravity and the rotation axis could prevent them from forming. For stars, in the case of semi-convection zones adjacent to convection zones, @Moore2015 showed that non-rotating ODDC is always in the layered regime, and that transport through the semi-convective region is so efficient that the latter gets rapidly absorbed into the convection zone. In essence, aside from a fairly short transient period, the star evolves in a similar way whether one takes semi-convection into account or ignores it altogether and uses the Schwarzschild criterion to determine the convective boundary. Our results suggest that this conclusion remains true for slowly rotating stars. However, if the star is in the high ${\rm Ta^*}$ regime instead, layered convection is suppressed, and transport through the semi-convective region is much weaker, and may possibly depend on latitude. This would in turn imply fairly different evolutionary tracks and asteroseismic predictions. This work was funded by NSF-AST 1211394 and NSF-AST 1412951. The authors are indebted to Stephan Stellmach for granting them the use of his code, and for helping them implement the effects of rotation. Appendix: Minimum mode amplitudes for layered convection {#sec:appA .unnumbered} ======================================================== In rotating systems, density perturbations must grow to a higher amplitude for layered convection to occur than in non-rotating ones.
This can be better understood by considering how convective plumes form at the edges of the diffusive boundaries in layered convection. In order for a hot plume at the bottom of a layer to rise, it must displace the fluid above it. Because the interfaces act as flexible but more-or-less impenetrable boundaries, fluid that is moving upward because of the rising plume must be deflected by the top boundary and displaced horizontally. Rotation resists motion involving gradients of velocity in the direction of the rotation axis. In order to overcome this resistance, and therefore for convection to take place in the layer, a more strongly positive density gradient must be present between the interfaces. We make quantitative estimates of this effect on layered convection in ODDC by adapting a theory related to rotating Rayleigh-Bénard convection [@chandrasekhar1961]. Assuming free boundary conditions, the critical Rayleigh number, ${\rm Ra}_c$, for rotating Rayleigh-Bénard convection is the following function of ${\rm Ta^*}$: $$\label{eq:RaCritAp} {\rm Ra}_c({\rm Ta}^*) = 3\pi^4 \left( \frac{H^4 {\rm Ta}^*}{2{\rm Pr}^2\pi^4} \right)^{\frac{2}{3}} + \frac{27\pi^4}{4} \, ,$$ where $H$ is the layer height.
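This critical Rayleigh number is straightforward to evaluate; the following minimal sketch (the parameter values are illustrative only) checks that it recovers the classical free-free value $27\pi^4/4$ in the non-rotating limit and that rotation raises the threshold for convection:

```python
import numpy as np

def ra_crit(ta_star, H, pr):
    """Critical Rayleigh number for rotating Rayleigh-Benard convection
    with free boundaries, Eq. (RaCritAp) above."""
    return (3.0 * np.pi**4 * (H**4 * ta_star / (2.0 * pr**2 * np.pi**4))**(2.0 / 3.0)
            + 27.0 * np.pi**4 / 4.0)

# Non-rotating limit recovers the classical free-free value 27*pi^4/4:
assert abs(ra_crit(0.0, H=30.0, pr=0.1) - 27.0 * np.pi**4 / 4.0) < 1e-9
# Rotation raises the threshold for convection:
assert ra_crit(10.0, H=30.0, pr=0.1) > ra_crit(0.1, H=30.0, pr=0.1)
```

Note that the rotational term grows as $({\rm Ta^*})^{2/3}$, so for fixed $H$ and ${\rm Pr}$ the stabilizing effect of rotation eventually dominates the constant non-rotating contribution.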
In our non-dimensionalization the critical density gradient for a convective layer, $\left| \frac{\partial \rho}{\partial z} \right|_c$, can be written in terms of ${\rm Ra}_c$ as $$\label{eq:RaDimConstAp} \left| \frac{\partial \rho}{\partial z} \right|_c = \frac{ {\rm Ra}_c }{H^4} \, .$$ Then by considering a density profile defined as $$\label{eq:DensProfAp} \rho = (1-R_0^{-1})z + 2 A_n \sin{\left( k_n z \right)} \, ,$$ where $A_n$ is the amplitude of the perturbation from a single layering mode with vertical wavenumber $k_n$, we provide a second definition for the critical density gradient $$\label{eq:DensGradAp} \left| \frac{\partial \rho}{\partial z} \right|_c = \max{\left( \frac{\partial \rho}{\partial z} \right)} = 1-R_0^{-1} + 2 A_n k_n \, .$$ From equations (\[eq:RaCritAp\]) through (\[eq:DensGradAp\]) we can then generate an expression for $\left| A_n \right|$ in terms of ${\rm Ta}^*$, $R_0^{-1}$, $H$, and $k_n$, and thus get an estimate for the critical layering mode amplitude for the onset of layered convection, $$\label{eq:ModeAmpAp} \left| A_n \right| = \left| \frac{\frac{{\rm Ra}_c}{H^4} + \left(R_0^{-1} - 1\right)}{2 k_n} \right| = \left| \frac{ \frac{3\pi^4}{H^4}\left(\frac{H^4{\rm Ta}^*}{2{\rm Pr}^2\pi^4}\right)^{\frac{2}{3}} + \frac{27\pi^4}{4H^4} + \left( R_0^{-1} - 1 \right) }{2k_n} \right| \, .$$ This formula recovers Equation (29) of @rosenblum2011 in the non-rotating limit, as long as the term $27 \pi^4/4H^4$ can be neglected (which is always true for physically realizable layer heights, which typically have $H > 30$). Baines, P. & Gill, A. 1969, J. Fluid Mech., 37 Carpenter, J. R. & Timmermans, M.-L. 2014, Journal of Physical Oceanography, 44, 289 , S. 1961, [Hydrodynamic and hydromagnetic stability]{} (International Series of Monographs on Physics, Oxford: Clarendon) , M., [Becker]{}, A., [Lorenzen]{}, W., [Nettelmann]{}, N., [Bethkenhagen]{}, M., [Wicht]{}, J., & [Redmer]{}, R. 2012, , 202, 5 , J.
2014, , 242, 283 , P., [Schmider]{}, F.-X., [Gay]{}, J., [Guillot]{}, T., & [Jacob]{}, C. 2011, , 531, A104 , C., [Hughes]{}, D. W., & [Jones]{}, C. A. 2014, Journal of Fluid Mechanics, 758, 407 , H. E. & [Moore]{}, D. R. 1976, Journal of Fluid Mechanics, 78, 821 , K., [Legg]{}, S., [McWilliams]{}, J., & [Werne]{}, J. 1996, Journal of Fluid Mechanics, 322, 243 , K., [Rubio]{}, A. M., [Grooms]{}, I., & [Knobloch]{}, E. 2012, Geophysical and Astrophysical Fluid Dynamics, 106, 392 , S. 1966, PASJ, 18, 374 , O. S. 1995, Journal of Fluid Mechanics, 301, 345 , O. S. & [Holyer]{}, J. Y. 1986, Journal of Fluid Mechanics, 162, 23 , N., [El Eid]{}, M. F., & [Fricke]{}, K. J. 1985, , 145, 179 , J. & [Chabrier]{}, G. 2012, [A&A]{}, 540, A20 , P. F. & [Shirtcliffe]{}, T. G. L. 1978, Journal of Fluid Mechanics, 87, 417 , W. J. 1995, [ApJ]{}, 444, 318 , G. M., [Garaud]{}, P., [Stellmach]{}, S., [Traxler]{}, A. L., & [Wood]{}, T. S. 2012, ApJ, 750, 61 , R., [Garaud]{}, P., & [Stellmach]{}, S. 2016, , 823, 33 , K. & [Garaud]{}, P. 2015, ArXiv e-prints , N., [Fortney]{}, J. J., [Moore]{}, K., & [Mankovich]{}, C. 2015, , 447, 3422 , F. C. 1976, Journal of Physical Oceanography, 6, 157 , M. R. E. 1981, Journal of Fluid Mechanics, 105, 507 Radko, T. 2003, J. Fluid Mech., 497, 365 , R., [Mattsson]{}, T. R., [Nettelmann]{}, N., & [French]{}, M. 2011, , 211, 798 , E., [Garaud]{}, P., [Traxler]{}, A., & [Stellmach]{}, S. 2011, [ApJ]{}, 731, 66 , H. T. 1969, Journal of Fluid Mechanics, 36, 309 , T. G. L. 1973, Journal of Fluid Mechanics, 57, 27 , E. A. 1969, Comments on Astrophysics and Space Physics, 1, 57 , E. A. & [Veronis]{}, G. 1960, ApJ, 131, 442 , S., [Traxler]{}, A., [Garaud]{}, P., [Brummell]{}, N., & [Radko]{}, T. 2011, ArXiv e-prints , D. J. 1982, , 30, 755 , D. J. & [Salpeter]{}, E. E. 1977, , 35, 239 , M.-L., [Garrett]{}, C., & [Carmack]{}, E. 
2003, Deep Sea Research Part I: Oceanographic Research, 50, 1305 , J., [Krishfield]{}, R., [Proshutinsky]{}, A., [Ashjian]{}, C., [Doherty]{}, K., [Frye]{}, D., [Hammar]{}, T., [Kemp]{}, J., [Peters]{}, D., [Timmermans]{}, M., [von der Heydt]{}, K., [Packard]{}, G., & [Shanahan]{}, T. 2006, EOS Transactions, 87, 434 , A., [Stellmach]{}, S., [Garaud]{}, P., [Radko]{}, T., & [Brummell]{}, N. 2011, ArXiv e-prints Turner, J. 1965, International Journal of Heat and Mass Transfer, 8, 759 Walin, G. 1964, Tellus, 16, 389 , T. S., [Garaud]{}, P., & [Stellmach]{}, S. 2013, ApJ, 768, 157 , S., [Mollo-Christensen]{}, E., & [Ostapoff]{}, F. 1983, Journal of Fluid Mechanics, 133, 297 [^1]: To verify this, we have run an additional simulation in a domain of horizontal scale $200 d\times 200d$, and height $50d$. The large-scale vortex grew to fill the domain in this case as well.
--- abstract: 'In this paper, the dynamics of a modified Leslie-Gower predator-prey system with two delays and diffusion is considered. By calculating stability switching curves, the stability of the positive equilibrium and the existence of Hopf bifurcation and double Hopf bifurcation are investigated on the parametric plane of two delays. Taking the two time delays as bifurcation parameters, the normal form on the center manifold near the double Hopf bifurcation point is derived, and the unfoldings near the critical points are given. Finally, we obtain the complex dynamics near the double Hopf bifurcation point, including the existence of quasi-periodic solutions on a 2-torus, quasi-periodic solutions on a 3-torus, and strange attractors.' author: - 'Yanfei Du$^{1,2}$' - 'Ben Niu$^{2}$' - 'Junjie Wei$^{2}$' title: 'Two delays induce Hopf bifurcation and double Hopf bifurcation in a diffusive Leslie-Gower predator-prey system' --- **Diffusive predator-prey models with delays have been investigated widely, and the delay-induced Hopf bifurcation analysis has been well studied. However, the bifurcation analysis of predator-prey models with two simultaneously varying delays has not been well established. Neither a Hopf bifurcation theorem with two parameters nor the derivation of the normal form for double Hopf bifurcation induced by two delays has been proposed in the literature. In this paper, we investigate a diffusive Leslie-Gower model with two delays, and carry out Hopf and double Hopf bifurcation analysis of the model. Applying the method of studying a characteristic equation with two delays, we obtain the stability switching curves and the crossing direction, after which we give the Hopf bifurcation theorem in the two-parameter plane for the first time. Under some conditions, the intersections of two stability switching curves are double Hopf bifurcation points.
To figure out the dynamics near the double Hopf bifurcation point, we calculate the normal form on the center manifold. The normal form derivation used in this paper can be extended to other models with two delays, one delay, or no delay.** Introduction ============ The Leslie-Gower model, one of the most widely used predator-prey models, was proposed by Leslie and Gower [@P.; @Leslie; @PH; @Leslie] $$\begin{aligned} &\dot{u}=r_1u(1-\frac{u}{K})-a uv,\\ &\dot{v}=r_2v(1-\frac{v}{\gamma u}), \end{aligned}$$ where $u(t)$ and $v(t)$ represent the populations of the prey and the predator at time $t$, respectively. $r_1$ and $r_2$ are the intrinsic growth rates for the prey and predator, respectively. $K$ is the environmental carrying capacity for the prey population. $a$ is the per capita capturing rate of prey by a predator during unit time. $\frac{v}{\gamma u}$ is the Leslie-Gower term with carrying capacity of the predator $\gamma u$, which means that the carrying capacity is proportional to the population size of the prey, and $\gamma$ is referred to as a measure of the quality of the prey as food for the predator. Since then, various studies of this model and its modifications have been carried out. [@M.; @Aziz; @J.; @Collings; @P.; @Feng; @Y.; @Ma; @J.; @Zhou; @Yuan; @S.; @L.1; @Yuan; @S.; @L.2] Refuges have important effects on the coexistence of predator and prey, and reduce the chance of extinction due to predation. Chen et al. [@F.; @Chen] incorporated a refuge protecting $mu$ of the prey into the Leslie-Gower system, which means that the remaining $(1-m)u$ of the prey is available to the predator. They considered the following Leslie-Gower predator-prey model $$\begin{aligned} &\dot{u}=(r_1-b_1u)u-a_1(1-m)vu,\\ &\dot{v}=[ r_2-a_2\frac{v}{(1-m)u}] v, \end{aligned}$$ where $m\in \left[ 0,1\right) $ is a refuge protecting rate of the prey. Time delays are ubiquitous in predator-prey systems.
Time delays appear to play an important role in the stability of species densities, and much research has been carried out to determine their effect on predator-prey systems. May [@R.M.; @May] considered the feedback time delay in prey growth, and the term $r_1u(t)(1-\frac{u(t-\tau)}{K})$ is the well-known delayed logistic equation. Another type of time delay was introduced into the negative feedback of the predator’s density in the Leslie-Gower model in Refs. [@Yuan; @R.; @A.F.; @Nindjin], which denoted the time taken for digestion of the prey. Liu et al. [@Leslie] considered both delays mentioned above, and investigated a modified Leslie-Gower predator-prey system with two delays described by the following system: $$\label{odepredator} \left\lbrace \begin{array}{lll} \dot{u}(t) &=& r_1u(t)[ 1-\frac{u(t-\tau_1)}{K}] -a(1-m)u(t)v(t), \\ \dot{v}(t) &=&r_2v(t)[ 1-\frac{v(t-\tau_2)}{\gamma(1-m)u(t-\tau_2)}] , \\ \end{array} \right.$$ with the initial conditions $$\label{initial} \begin{array}{l} (\varphi_{1},\varphi_2)\in\textbf{C}([-\tau,0],\mathbb{R}_+^2),\varphi_i(0)>0,i=1,2, \end{array}$$ where $\tau_1$ is the feedback time delay in prey growth, $\tau_2$ is the feedback time delay in predator growth, and we define $\tau={\rm max}\{\tau_1,\tau_2\}$. For systems with two delays, the general approach is to fix one delay and vary the other, or to let $\tau_1+\tau_2=\tau$. [@Song; @K.; @Li; @S.; @Ruan; @C.; @Xu; @L.; @Deng; @Y.; @Ma] Sometimes, however, we want to investigate the dynamics of a system when the two delays vary simultaneously. To discuss systems with two delays, Gu et al. [@Gu; @K] analyzed the characteristic quasipolynomial $$p(s)=p_0(s)+p_1(s)e^{-\tau_1s}+p_2(s)e^{-\tau_2s},$$ where $$p_l(s)=\sum_{k=0}^np_{lk}s^k,$$ and provided a detailed study of the stability crossing curves on which the characteristic quasipolynomial has at least one imaginary zero, together with the crossing direction.
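System (\[odepredator\]) can be made concrete by direct integration with a simple fixed-step Euler scheme that stores the history required by the two delays (a minimal sketch for illustration only, not the analytical machinery used in this paper; all parameter values below are hypothetical):

```python
import numpy as np

def simulate(r1=1.0, r2=0.5, K=2.0, a=0.4, m=0.2, gamma=1.0,
             tau1=0.5, tau2=0.8, u0=1.0, v0=0.5, dt=1e-3, T=50.0):
    """Euler integration of the two-delay Leslie-Gower system; the first
    max(tau1, tau2)/dt entries serve as the constant initial history."""
    n = int(round(T / dt))
    lag1, lag2 = int(round(tau1 / dt)), int(round(tau2 / dt))
    u = np.full(n + 1, float(u0))
    v = np.full(n + 1, float(v0))
    for i in range(max(lag1, lag2), n):
        u_d1 = u[i - lag1]                     # u(t - tau1)
        u_d2, v_d2 = u[i - lag2], v[i - lag2]  # u(t - tau2), v(t - tau2)
        du = r1 * u[i] * (1.0 - u_d1 / K) - a * (1.0 - m) * u[i] * v[i]
        dv = r2 * v[i] * (1.0 - v_d2 / (gamma * (1.0 - m) * u_d2))
        u[i + 1] = u[i] + dt * du
        v[i + 1] = v[i] + dt * dv
    return u, v
```

For small delays such trajectories settle toward the coexistence equilibrium, while pushing $\tau_1$ or $\tau_2$ across the stability switching curves discussed in the text produces sustained oscillations.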
Lin and Wang [@Lin; @X] considered the characteristic function $$D(\lambda;\tau_1,\tau_2)= P_{0}(\lambda)+P_{1}(\lambda)e^{-\lambda\tau_1}+P_{2}(\lambda)e^{-\lambda\tau_2}+P_{3}(\lambda)e^{-\lambda(\tau_1+\tau_2)}.$$ They derived an explicit expression for the stability switching curves in the $(\tau_1, \tau_2)$ plane, and gave a criterion to determine switching directions. Since prey and predators are distributed inhomogeneously in space, diffusion should be taken into account in more realistic ecological models. To reveal new phenomena caused by the introduction of an inhomogeneous spatial environment, Du and Hsu [@Y.; @Du] considered a diffusive predator-prey model $$\label{diffusion du} \left\{ \begin{array}{ll} \dfrac{\partial u(x,t)} {\partial t}= d_1\Delta u(x,t)+\lambda u(x,t)-\alpha u(x,t)^2-\beta u(x,t)v(x,t),&x\in \Omega, t>0,\\ \dfrac{\partial v(x,t)}{\partial t }= d_2\Delta v(x,t)+\mu v(x,t)[1-\delta \frac{v(x,t)}{u(x,t)}],&x\in \Omega, ~t>0, \\ \dfrac{\partial u(x,t)} {\partial \nu}= 0,~~\dfrac{\partial v(x,t)} {\partial \nu}=0, & x\in \partial \Omega, t>0.\\ \end{array} \right.$$ The Neumann boundary condition means that no species can pass across the boundary of $\Omega$. They showed the existence of steady-state solutions with certain prescribed spatial patterns.
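In a method-of-lines discretization, the zero-flux (Neumann) conditions in (\[diffusion du\]) can be realized by reflecting boundary values into ghost points. The following one-dimensional sketch (illustrative coefficients only, not the computation of the cited work) advances the Du-Hsu kinetics plus diffusion by one explicit Euler step:

```python
import numpy as np

def neumann_laplacian(w, dx):
    """Second difference with zero-flux boundaries via ghost-point reflection."""
    g = np.empty(w.size + 2)
    g[1:-1] = w
    g[0], g[-1] = w[1], w[-2]  # mirror values so dw/dx = 0 at both ends
    return (g[:-2] - 2.0 * w + g[2:]) / dx**2

def step(u, v, dx, dt, d1=0.01, d2=0.01, lam=1.0, alpha=1.0,
         beta=0.5, mu=0.5, delta=1.0):
    """One explicit Euler step of the Du-Hsu reaction-diffusion kinetics."""
    du = d1 * neumann_laplacian(u, dx) + lam * u - alpha * u**2 - beta * u * v
    dv = d2 * neumann_laplacian(v, dx) + mu * v * (1.0 - delta * v / u)
    return u + dt * du, v + dt * dv
```

With these (hypothetical) coefficients the homogeneous coexistence state is $u = v = \lambda/(\alpha + \beta/\delta)$; starting from a nearby uniform profile, repeated calls to `step` relax back to it, and the explicit scheme requires $dt \lesssim dx^2 / (2\max(d_1, d_2))$ for stability.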
Motivated by the previous work, we consider the following modified Leslie-Gower predator-prey model with diffusion and Neumann boundary conditions $$\label{diffusion predator} \left\{ \begin{array}{l} \begin{array}{l} \dfrac{\partial u(x,t)} {\partial t}= d_1\Delta u(x,t)+r_1u(x,t)[ 1-\frac{u(x,t-\tau_1)}{K}] -a(1-m)u(x,t)v(x,t),\\ \dfrac{\partial v(x,t)}{\partial t }= d_2\Delta v(x,t)+r_2v(x,t)[1-\frac{v(x,t-\tau_2)}{\gamma(1-m)u(x,t-\tau_2)}],~~~ \\ \end{array} x\in [0,l\pi],~t>0\\ \dfrac{\partial u(x,t)} {\partial x}= 0,~~\dfrac{\partial v(x,t)} {\partial x}=0, ~at~ x=0~and~l\pi,\\ \end{array} \right.$$ where $d_1,d_2>0$ are the diffusion coefficients characterizing the rate of spatial dispersion of the prey and predator populations, respectively, and $m\in [0,1)$ is the refuge protection rate of the prey. The spatial interval has been normalized to $[0,l\pi]$. There is an extensive literature on predator-prey models with delays; we refer to Refs. [@SHANSHAN; @CHEN; @I.; @Al-Darabsah; @Tian; @C; @Yang; @R; @Yuan; @R.] and the references therein. Among these works, the effects of a single delay have been discussed widely. In this paper, we focus on the joint effect of the two delays on system (\[diffusion predator\]). Adapting the method given in Ref. [@Lin; @X], which was proposed for the stability switching analysis of delayed differential equations, we apply it to reaction-diffusion systems with two delays and obtain stability switching results when $(\tau_1,\tau_2)$ varies simultaneously. To perform the bifurcation analysis, we extend the normal form method of Faria and Magalhães [@Faria; @FariaJDE] to the double Hopf bifurcation analysis of the reaction-diffusion system (\[diffusion predator\]); normal form theory is an efficient method for bifurcation analysis, which transforms the original system into a qualitatively equivalent equation of the simplest form by means of near-identity nonlinear transformations. 
Many realistic problems involve two delays, such as epidemic models, [@K.; @L.; @Cooke; @Jackson; @M.] population interactions, [@H.; @I.; @Freedman; @SHANSHAN; @CHEN; @Song; @K.; @Li; @S.; @Ruan; @C.; @Xu; @L.; @Deng; @Y.; @Ma] neural networks, [@J.; @Wei; @and; @S.; @Ruan] coupled oscillators, [@Nguimdo] and so on. As is well known, delay usually destabilizes the equilibrium and induces Hopf bifurcation, which gives rise to periodic activities. For these systems, when we take the two delays as parameters, two Hopf bifurcation curves may intersect, and thus a double Hopf bifurcation may occur, which is a source of complicated dynamical behaviors. The dynamics near a double Hopf bifurcation are much richer than those near a Hopf bifurcation: one can usually observe periodic and quasi-periodic oscillations, the coexistence of several oscillations, two- or three-dimensional invariant tori, and even chaos. [@Kuznetsov; @Guckenheimer] The analysis of double Hopf bifurcation provides a qualitative classification of the bifurcating solutions arising from the singularity, which helps us to determine the dynamics corresponding to different values of the two delays near the double Hopf bifurcation point. In fact, the results obtained and the methods used in this paper can be applied to the realistic problems with two delays mentioned above. On the one hand, as was pointed out by Lin and Wang, [@Lin; @X] the method of stability switching curves can be used to find the curves on which stability switches and to determine the crossing direction, as long as the characteristic equation of the system under consideration has the form of (\[character\]). On the other hand, the normal form derivation proposed in this paper can also be applied to reaction-diffusion systems with two delays and Neumann boundary conditions, by which the dynamics near the double Hopf singularity can be obtained. This paper is organized as follows. 
In section \[existence\], we investigate the stability of the positive equilibrium and the existence of Hopf bifurcation by the method of stability switching curves given in Ref. [@Lin; @X]. In section \[normal form\], taking the two delays as bifurcation parameters, we derive the normal form on the center manifold near the double Hopf bifurcation point and give the unfoldings near the critical points. In section \[simulations\], we carry out some numerical simulations to support our analytical results. Stability switching curves and existence of double Hopf bifurcation {#existence} =================================================================== In this section, we perform a bifurcation analysis near the positive equilibrium $E^*$ of system (\[diffusion predator\]). In order to discuss the joint effect of the two delays $\tau_1$ and $\tau_2$ on system (\[diffusion predator\]), we apply the method of stability switching curves given in Ref. [@Lin; @X]. Stability switching curves {#Stability switching curves} -------------------------- Clearly, system (\[diffusion predator\]) has a unique positive constant equilibrium $E^*(u^*,v^*)$ with $u^*=\frac{Kr_1}{r_1+aK\gamma (1-m)^2}$ and $v^*=\gamma(1-m)u^*$. 
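The equilibrium formulas can be sanity-checked numerically. The snippet below uses illustrative parameter values (our own, not from the paper) and verifies that both reaction terms vanish at $E^*(u^*,v^*)$.

```python
# Numerical sketch: the unique positive equilibrium E*(u*, v*).
# Parameter values are illustrative only, not taken from the paper.
r1, r2, K, a, gamma, m = 1.0, 0.5, 2.0, 0.6, 0.8, 0.2

u_star = K * r1 / (r1 + a * K * gamma * (1 - m) ** 2)
v_star = gamma * (1 - m) * u_star

# At E*, both reaction terms of the spatially homogeneous, non-delayed
# system vanish:
f1 = r1 * u_star * (1 - u_star / K) - a * (1 - m) * u_star * v_star
f2 = r2 * v_star * (1 - v_star / (gamma * (1 - m) * u_star))
print(u_star, v_star, f1, f2)  # f1 and f2 should both be ~0
```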
The linearization of system (\[diffusion predator\]) at the equilibrium $E^*$ is $$\label{linear e2} \frac{\partial }{\partial t} \left( \begin{array}{l} u(x,t) \\ v(x,t) \\ \end{array}\right) = (D\Delta + A) \left( \begin{array}{l} u(x,t)\\ v(x,t)\\ \end{array} \right) + B \left( \begin{array}{l} u(x,t-\tau_1)\\ v(x,t-\tau_2)\\ \end{array} \right) +C\left( \begin{array}{l} u(x,t-\tau_2)\\ v(x,t-\tau_2)\\ \end{array} \right),$$ where $$D=\left( \begin{array}{cc} d_1& 0\\ 0& d_2\\ \end{array}\right),~ A=\left( \begin{array}{cc} 0& -a(1-m)u^*\\ 0& 0\\ \end{array}\right),~ B=\left( \begin{array}{cc} -\frac{r_1u^*}{K}& 0\\ 0 & 0\\ \end{array}\right),~ C=\left( \begin{array}{cc} 0& 0\\ \gamma(1-m)r_2 & -r_2\\ \end{array}\right),$$ and $u(x,t),v(x,t)$ satisfy the homogeneous Neumann boundary condition. The characteristic equation of (\[linear e2\]) is $$\label{characterAG} {\rm det}(\lambda I_2-M_n-A-Be^{-\lambda\tau_1}-Ce^{-\lambda\tau_2})=0,$$ where $I_2$ is the $2\times 2$ identity matrix and $M_n=-\frac{n^2}{l^2} D$, $n\in \mathbb{N}_0$. The characteristic equation (\[characterAG\]) is equivalent to $$\label{character} D_n(\lambda;\tau_1,\tau_2)= P_{0,n}(\lambda)+P_{1,n}(\lambda)e^{-\lambda\tau_1}+P_{2,n}(\lambda)e^{-\lambda\tau_2}+P_{3,n}(\lambda)e^{-\lambda(\tau_1+\tau_2)}=0,$$ where $$\begin{array}{l} P_{0,n}(\lambda)=(\lambda+d_1\frac{n^2}{l^2})(\lambda+d_2\frac{n^2}{l^2}),\\P_{1,n}(\lambda)=\frac{r_1}{K}u^*(\lambda+d_2\frac{n^2}{l^2}),\\P_{2,n}(\lambda)=r_2(\lambda+d_1\frac{n^2}{l^2})+a(1-m)^2\gamma r_2u^*,\\P_{3,n}(\lambda)=\frac{r_1}{K}u^*r_2. \end{array}$$ When $\tau_1=\tau_2=0$, Eq. (\[character\]) becomes $$\label{charactertau120} \lambda^2+A_n\lambda+B_n=0,$$ where $$\begin{aligned} &A_n=d_1\frac{n^2}{l^2}+d_2\frac{n^2}{l^2}+\frac{r_1}{K}u^*+r_2>0,\\ &B_n=d_1d_2\frac{n^4}{l^4}+\frac{r_1}{K}u^*d_2\frac{n^2}{l^2}+r_2d_1\frac{n^2}{l^2}+a(1-m)^2\gamma r_2u^*+\frac{r_1}{K}u^*r_2>0. 
\end{aligned}$$ It is clear that all roots of (\[charactertau120\]) have negative real parts. Thus, when $\tau_1=\tau_2=0$, the positive equilibrium $E^*(u^*,v^*)$ is locally asymptotically stable. \[Du\] Du and Hsu [@Y.; @Du] proved that the positive equilibrium of the diffusive Leslie-Gower predator-prey system (\[diffusion du\]) is globally asymptotically stable under certain conditions. We also mention that Chen et al. [@SHANSHAN; @CHEN] gave a global stability result for a diffusive Leslie-Gower predator-prey system with two delays (the delay terms are different from those proposed in this paper). However, the delays in this paper will destabilize the equilibrium. Thus, we only give the global stability result for $\tau_1=\tau_2=0$, which is a direct application of Proposition 2.1 in Ref. [@Y.; @Du]. Applying the global asymptotic stability result in Ref. [@Y.; @Du] directly, we have the following result: if $\frac{r_1}{K}>a(1-m)$, the positive equilibrium $E^*(u^*,v^*)$ of (\[diffusion predator\]) is globally asymptotically stable when $\tau_1=\tau_2=0$. In order to apply the method of stability switching curves, [@Lin; @X] we first verify that assumptions (i)-(iv) in Ref. [@Lin; @X] all hold for any fixed $n$. - Finite number of characteristic roots on $\mathbb{C}_+=\{\lambda\in\mathbb{C}:{\rm Re}\lambda>0\}$ under the condition $${\rm deg} (P_{0,n}(\lambda))\geq {\rm max}\{{\rm deg}(P_{1,n}(\lambda)),{\rm deg}(P_{2,n}(\lambda)),{\rm deg}(P_{3,n}(\lambda))\}.$$ - $P_{0,n}(0)+P_{1,n}(0)+P_{2,n}(0)+P_{3,n}(0)\neq 0$. - $P_{0,n}(\lambda),P_{1,n}(\lambda),P_{2,n}(\lambda),P_{3,n}(\lambda)$ are coprime polynomials. - $\lim\limits_{\lambda\rightarrow\infty} \left(\left| \frac{P_{1,n}(\lambda)}{P_{0,n}(\lambda)}\right|+ \left| \frac{P_{2,n}(\lambda)}{P_{0,n}(\lambda)}\right|+\left| \frac{P_{3,n}(\lambda)}{P_{0,n}(\lambda)}\right|\right) <1$. In fact, conditions (ii)-(iv) are obviously satisfied, and (i) follows from Ref. [@J.; @Hale]. 
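The stability claim at $\tau_1=\tau_2=0$ can also be checked numerically: for each mode $n$, the quadratic (\[charactertau120\]) has positive coefficients, so both roots lie in the open left half-plane. The sketch below confirms this for the first few modes; parameter values are illustrative only.

```python
import numpy as np

# Check of the tau1 = tau2 = 0 stability claim: for each of the first few
# spatial modes n, the quadratic in lambda has positive coefficients and
# hence roots in the open left half-plane.  Parameters are illustrative.
r1, r2, K, a, gamma, m = 1.0, 0.5, 2.0, 0.6, 0.8, 0.2
d1, d2, l = 0.1, 0.2, 1.0
u_star = K * r1 / (r1 + a * K * gamma * (1 - m) ** 2)

worst = -np.inf
for n in range(10):
    mu = n ** 2 / l ** 2
    An = (d1 + d2) * mu + r1 * u_star / K + r2
    Bn = (d1 * d2 * mu ** 2 + r1 * u_star / K * d2 * mu + r2 * d1 * mu
          + a * (1 - m) ** 2 * gamma * r2 * u_star + r1 * u_star / K * r2)
    assert An > 0 and Bn > 0
    worst = max(worst, max(np.roots([1.0, An, Bn]).real))
print("largest real part over modes n = 0..9:", worst)  # negative
```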
Thus, similar to Ref. [@K.L.; @Cooke], we have the following lemma. As the delays $(\tau_1,\tau_2)$ vary continuously in $\mathbb{R}_+^2$, the number of zeros (counting multiplicity) of $D_n(\lambda; \tau_1,\tau_2)$ on $\mathbb{C}_+$ can change only if a zero appears on or crosses the imaginary axis. To find the stability switching curves, we seek all the points $(\tau_1,\tau_2)$ such that $D_n(\lambda; \tau_1,\tau_2)$ has at least one zero on the imaginary axis. Substituting $\lambda =i\omega~(\omega>0)$ into (\[character\]), we obtain $$\label{characteriomega} (P_{0,n}(i\omega)+P_{1,n}(i\omega)e^{-i\omega\tau_1})+(P_{2,n}(i\omega)+P_{3,n}(i\omega)e^{-i\omega\tau_1})e^{-i\omega\tau_2}=0.$$ From $|e^{-i\omega\tau_2}|=1$, we have $$|P_{0,n}(i\omega)+P_{1,n}(i\omega)e^{-i\omega\tau_1}|=|P_{2,n}(i\omega)+P_{3,n}(i\omega)e^{-i\omega\tau_1}|.$$ Thus, we have $$\label{omegatau1} |P_{0,n}(i\omega)|^2+|P_{1,n}(i\omega)|^2-|P_{2,n}(i\omega)|^2-|P_{3,n}(i\omega)|^2=2A_{1,n}(\omega)\cos(\omega\tau_1)-2B_{1,n}(\omega)\sin(\omega\tau_1),$$ with $$\begin{aligned} A_{1,n}(\omega)={\rm Re}(P_{2,n}(i\omega)\overline{P}_{3,n}(i\omega)-P_{0,n}(i\omega)\overline{P}_{1,n}(i\omega)),\\ B_{1,n}(\omega)={\rm Im}(P_{2,n}(i\omega)\overline{P}_{3,n}(i\omega)-P_{0,n}(i\omega)\overline{P}_{1,n}(i\omega)).\\ \end{aligned}$$ If $A_{1,n}(\omega)^2+B_{1,n}(\omega)^2>0$, there exists a function $\varphi_{1,n}(\omega)$ such that $$\begin{aligned} A_{1,n}(\omega)=\sqrt{A_{1,n}(\omega)^2+B_{1,n}(\omega)^2}\cos(\varphi_{1,n}(\omega)),\\ B_{1,n}(\omega)=\sqrt{A_{1,n}(\omega)^2+B_{1,n}(\omega)^2}\sin(\varphi_{1,n}(\omega)),\\ \end{aligned}$$ where $\varphi_{1,n}(\omega)={\rm arg}\{P_{2,n}(i\omega)\overline{P}_{3,n}(i\omega)-P_{0,n}(i\omega)\overline{P}_{1,n}(i\omega)\}\in(-\pi,\pi]$. 
Thus, (\[omegatau1\]) can be written as $$\label{omegatau1xin} |P_{0,n}(i\omega)|^2+|P_{1,n}(i\omega)|^2-|P_{2,n}(i\omega)|^2-|P_{3,n}(i\omega)|^2 =2\sqrt{A_{1,n}(\omega)^2+B_{1,n}(\omega)^2}\cos(\varphi_{1,n}(\omega)+\omega\tau_1).$$ It is obvious that there exists $\tau_1\in \mathbb{R}_+$ satisfying (\[omegatau1xin\]) if and only if $$\label{conditiontau1} \left( |P_{0,n}(i\omega)|^2+|P_{1,n}(i\omega)|^2-|P_{2,n}(i\omega)|^2-|P_{3,n}(i\omega)|^2\right)^2 \leq 4(A_{1,n}(\omega)^2+B_{1,n}(\omega)^2).$$ Denote the set of $\omega\in\mathbb{R}_+$ satisfying (\[conditiontau1\]) by $\Sigma^1_n$. We notice that (\[conditiontau1\]) also includes the case $A_{1,n}^2(\omega)+B_{1,n}^2(\omega)=0$. Denote $$\cos(\theta_{1,n}(\omega))=\dfrac{ |P_{0,n}(i\omega)|^2+|P_{1,n}(i\omega)|^2-|P_{2,n}(i\omega)|^2-|P_{3,n}(i\omega)|^2}{2\sqrt{A_{1,n}(\omega)^2+B_{1,n}(\omega)^2}},~~~~~\theta_{1,n}\in[0,\pi],$$ which leads to $$\label{tau1} \tau_{1,j_1,n}^{\pm}(\omega)=\dfrac{\pm\theta_{1,n}(\omega)-\varphi_{1,n}(\omega)+2j_1\pi}{\omega},~~~j_1\in\mathbb{Z}.$$ Similarly, we have $$\label{tau2} \tau_{2,j_2,n}^{\pm}(\omega)=\dfrac{\pm\theta_{2,n}(\omega)-\varphi_{2,n}(\omega)+2j_2\pi}{\omega},~~~j_2\in\mathbb{Z},$$ where $$\begin{aligned} \cos(\theta_{2,n}(\omega))=\dfrac{|P_{0,n}(i\omega)|^2-|P_{1,n}(i\omega)|^2+|P_{2,n}(i\omega)|^2-|P_{3,n}(i\omega)|^2}{2\sqrt{A_{2,n}(\omega)^2+B_{2,n}(\omega)^2}},~~~~~\theta_{2,n}\in[0,\pi],\\ A_{2,n}(\omega)={\rm Re}(P_{1,n}(i\omega)\overline{P}_{3,n}(i\omega)-P_{0,n}(i\omega)\overline{P}_{2,n}(i\omega))=\sqrt{A_{2,n}(\omega)^2+B_{2,n}(\omega)^2}\cos(\varphi_{2,n}(\omega)),\\ B_{2,n}(\omega)={\rm Im}(P_{1,n}(i\omega)\overline{P}_{3,n}(i\omega)-P_{0,n}(i\omega)\overline{P}_{2,n}(i\omega))=\sqrt{A_{2,n}(\omega)^2+B_{2,n}(\omega)^2}\sin(\varphi_{2,n}(\omega)).\\ \end{aligned}$$ Here the condition on $\omega$ is $$\label{conditiontau2} \left( |P_{0,n}(i\omega)|^2-|P_{1,n}(i\omega)|^2+|P_{2,n}(i\omega)|^2-|P_{3,n}(i\omega)|^2\right)^2 
\leq 4(A_{2,n}(\omega)^2+B_{2,n}(\omega)^2).$$ Denote the set of $\omega\in\mathbb{R}_+$ satisfying (\[conditiontau2\]) by $\Sigma^2_n$. In fact, by expanding both sides of the two conditions, one can show that (\[conditiontau1\]) is equivalent to (\[conditiontau2\]). Thus, $\Sigma^1_n=\Sigma^2_n\stackrel{\vartriangle}{=}\Omega_n$. The set $$\begin{array}{r} \Omega_n=\Bigg\{ \omega\in \mathbb{R}_+:F_n(\omega)\stackrel{\vartriangle}{=} ( |P_{0,n}(i\omega)|^2+|P_{1,n}(i\omega)|^2-|P_{2,n}(i\omega)|^2-|P_{3,n}(i\omega)|^2)^2\\ \left.-4(A_{1,n}(\omega)^2+B_{1,n}(\omega)^2)\leq 0 \Bigg\}\right. \end{array}$$ is called the crossing set of $D_n(\lambda;\tau_1,\tau_2)=0$. Obviously, when $\omega\in \Omega_n$, both (\[conditiontau1\]) and (\[conditiontau2\]) hold. Now we consider the composition of the set $\Omega_n$. The crossing set $\Omega_n$ consists of a finite number of intervals of finite length. **Proof.** We follow a method similar to that in Ref. [@Lin; @X] to show this result. Since $F_n(\omega)$ is a polynomial of degree eight and $F_n(+\infty)=+\infty$, $F_n(\omega)$ has a finite number of roots on $\mathbb{R}_+$. If $F_n(0)>0$, denote the roots of $F_n(\omega)=0$ by $0<a_{1,n}<b_{1,n}\leq a_{2,n}<b_{2,n}<\cdots\leq a_{N,n}<b_{N,n}<+\infty$, and we have $\Omega_n=\bigcup\limits_{j=1}^N\Omega_{j,n},~~ \Omega_{j,n}=[a_{j,n},b_{j,n}].$ If $F_n(0)\leq 0$, denote the roots of $F_n(\omega)$ by $0<b_{1,n}\leq a_{2,n}<b_{2,n}<\cdots\leq a_{N,n}<b_{N,n}<+\infty$, and we have $\Omega_n=\bigcup\limits_{j=1}^N\Omega_{j,n},~~ \Omega_{1,n}=\left( 0,b_{1,n}\right] , \Omega_{j,n}=[a_{j,n},b_{j,n}]~~ (j\geq 2). ~~~~~~~~~~\Box$ In fact, we can verify that when $\tau_1=\tau_{1,j_1,n}^+(\omega)$, we have $\tau_2=\tau_{2,j_2,n}^-(\omega)$, and when $\tau_1=\tau_{1,j_1,n}^-(\omega)$, we have $\tau_2=\tau_{2,j_2,n}^+(\omega)$. 
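The interval structure of the crossing set can be seen numerically. The sketch below evaluates $F_0(\omega)$ (the $n=0$ mode) on a grid and brackets its sign changes; with these illustrative parameter values (ours, not the paper's), $\Omega_0$ consists of a single interval.

```python
import numpy as np

# Sketch: locate the crossing set Omega_0 (n = 0 mode) as the set where
# F_0(omega) <= 0, by bracketing sign changes of F_0 on a grid.
# Parameter values are illustrative only.
r1, r2, K, a, gamma, m = 1.0, 0.5, 2.0, 0.6, 0.8, 0.2
u_star = K * r1 / (r1 + a * K * gamma * (1 - m) ** 2)
b1 = r1 * u_star / K
c0 = a * (1 - m) ** 2 * gamma * r2 * u_star
p3 = b1 * r2

def F(w):
    # P_{0,0}, ..., P_{3,0} evaluated at lambda = i*w
    P0, P1, P2, P3 = -(w ** 2), 1j * b1 * w, c0 + 1j * r2 * w, p3
    lhs = abs(P0) ** 2 + abs(P1) ** 2 - abs(P2) ** 2 - abs(P3) ** 2
    z = P2 * np.conj(P3) - P0 * np.conj(P1)       # A_{1,0} + i B_{1,0}
    return lhs ** 2 - 4 * abs(z) ** 2

grid = np.linspace(1e-3, 3.0, 3000)
vals = np.array([F(w) for w in grid])
sign_changes = np.where(np.diff(np.sign(vals)) != 0)[0]
endpoints = [grid[i] for i in sign_changes]
print("approximate interval endpoints of Omega_0:", endpoints)
```

A bisection refinement of each bracketed endpoint would recover the $a_{j,n}$, $b_{j,n}$ of the proof to any desired accuracy.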
Denote $$\label{Tzfk} \begin{aligned} \mathcal{T}_{j_1,j_2,n}^{\pm j}&=\left\lbrace \left( \tau_{1,j_1,n}^{\pm}(\omega),\tau_{2,j_2,n}^{\mp}(\omega)\right):\omega\in\Omega_{j,n} \right\rbrace \\&=\left\lbrace \left( \dfrac{\pm\theta_{1,n}(\omega)-\varphi_{1,n}(\omega)+2j_1\pi}{\omega},\dfrac{\mp\theta_{2,n}(\omega)-\varphi_{2,n}(\omega)+2j_2\pi}{\omega}\right):\omega\in\Omega_{j,n} \right\rbrace, \end{aligned}$$ $$\label{Tk} \mathcal{T}^{j}_n=\bigcup_{j_1=-\infty}^{\infty}\bigcup_{j_2=-\infty}^{\infty}(\mathcal{T}_{j_1,j_2,n}^{+j}\cup\mathcal{T}_{j_1,j_2,n}^{-j})\cap \mathbb{R}_+^2,$$ and $$\mathcal{T}_n=\bigcup_{j=1}^N\mathcal{T}^j_n.$$ Any $(\tau_1,\tau_2)\in \mathcal{T}_n$ is called a crossing point, which makes $D_n(\lambda;\tau_1,\tau_2)=0$ have at least one root $i\omega$ with $\omega$ belonging to the crossing set $\Omega_n$. The set $\mathcal{T}_n$, which is the collection of all the crossing points, is called the set of stability switching curves. Since $F_n(a_{j,n})=F_n(b_{j,n})=0$, we have $$\theta_{i,n}(a_{j,n})=\delta_i^a\pi,~~\theta_{i,n}(b_{j,n})=\delta_i^b\pi,$$ where $\delta_i^a,\delta_i^b=0,1,i=1,2$. By (\[tau1\]) and (\[tau2\]), we can easily confirm that $$\label{connect} \begin{aligned} (\tau_{1,j_1,n}^{+j}(a_{j,n}),\tau_{2,j_2,n}^{-j}(a_{j,n}))=(\tau_{1,j_1+\delta_1^a,n}^{-j}(a_{j,n}),\tau_{2,j_2-\delta_2^a,n}^{+j}(a_{j,n})),\\ (\tau_{1,j_1,n}^{+j}(b_{j,n}),\tau_{2,j_2,n}^{-j}(b_{j,n}))=(\tau_{1,j_1+\delta_1^b,n}^{-j}(b_{j,n}),\tau_{2,j_2-\delta_2^b,n}^{+j}(b_{j,n})).\\ \end{aligned}$$ Thus, for the stability switching curves corresponding to $\Omega_{j,n}$, $\mathcal{T}_{j_1,j_2,n}^{+j}$ is connected to $\mathcal{T}_{j_1+\delta_1^a,j_2-\delta_2^a,n}^{-j}$ at one end $a_{j,n}$, and connected to $\mathcal{T}_{j_1+\delta_1^b,j_2-\delta_2^b,n}^{-j}$ at the other end $b_{j,n}$. 
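A crossing point can be constructed numerically from these formulas. In the sketch below (illustrative parameters, $n=0$ mode, "+" branch with $j_1$ shifted so that $\tau_1\geq 0$), instead of evaluating the closed-form $\tau_2$ expression we recover $e^{-i\omega\tau_2}$ directly from the characteristic equation once the modulus condition holds, and then verify that $i\omega$ is indeed a root of $D_0$ at the resulting $(\tau_1,\tau_2)$.

```python
import cmath
import numpy as np

# Sketch: construct one crossing point (tau1, tau2) for the n = 0 mode and
# verify that i*omega is then a root of D_0(lambda; tau1, tau2).
# Parameter values are illustrative only.
r1, r2, K, a, gamma, m = 1.0, 0.5, 2.0, 0.6, 0.8, 0.2
u_star = K * r1 / (r1 + a * K * gamma * (1 - m) ** 2)
b1 = r1 * u_star / K
c0 = a * (1 - m) ** 2 * gamma * r2 * u_star
p3 = b1 * r2

def P(lam):  # P_{0,0}(lam), ..., P_{3,0}(lam)
    return lam ** 2, b1 * lam, r2 * lam + c0, p3

def D(lam, tau1, tau2):
    P0, P1, P2, P3 = P(lam)
    return (P0 + P1 * cmath.exp(-lam * tau1)
            + (P2 + P3 * cmath.exp(-lam * tau1)) * cmath.exp(-lam * tau2))

found = None
for omega in np.linspace(0.05, 3.0, 600):
    P0, P1, P2, P3 = P(1j * omega)
    z = P2 * P3.conjugate() - P0 * P1.conjugate()   # A_{1,0} + i B_{1,0}
    lhs = abs(P0) ** 2 + abs(P1) ** 2 - abs(P2) ** 2 - abs(P3) ** 2
    if abs(lhs) > 2 * abs(z):                        # omega not in Omega_0
        continue
    theta1 = np.arccos(lhs / (2 * abs(z)))           # theta_{1,0} in [0, pi]
    phi1 = cmath.phase(z)                            # varphi_{1,0}
    tau1 = ((theta1 - phi1) / omega) % (2 * np.pi / omega)  # "+" branch
    E1 = cmath.exp(-1j * omega * tau1)
    w = -(P0 + P1 * E1) / (P2 + P3 * E1)             # = e^{-i omega tau2}
    tau2 = (-cmath.phase(w)) % (2 * np.pi) / omega
    found = (omega, tau1, tau2)
    break

omega, tau1, tau2 = found
print(f"crossing point: omega={omega:.4f}, tau1={tau1:.4f}, tau2={tau2:.4f}")
assert abs(D(1j * omega, tau1, tau2)) < 1e-8
```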
Crossing directions ------------------- In the following, in order to identify the existence of Hopf bifurcation, we consider the directions in which the roots of (\[character\]) cross the imaginary axis as $(\tau_1,\tau_2)$ deviates from a stability switching curve $\mathcal{T}_n^j$, by the method given by Lin and Wang. [@Lin; @X] Let $\lambda=\sigma+i\omega$. By (\[character\]) and the implicit function theorem, $\tau_1$ and $\tau_2$ can be expressed as functions of $\sigma$ and $\omega$. From (\[character\]), we have $$\begin{aligned} &\dfrac{\partial {\rm Re} D_n(\lambda;\tau_1,\tau_2)}{\partial \sigma}|_{\lambda=i\omega} =R_0, \dfrac{\partial {\rm Im} D_n(\lambda;\tau_1,\tau_2)}{\partial \sigma}|_{\lambda=i\omega} =I_0, \end{aligned}$$ $$\dfrac{\partial {\rm Re} D_n(\lambda;\tau_1,\tau_2)}{\partial \omega}|_{\lambda=i\omega}=-I_0, \dfrac{\partial {\rm Im} D_n(\lambda;\tau_1,\tau_2)}{\partial \omega}|_{\lambda=i\omega}=R_0,$$ $$\begin{aligned} &\dfrac{\partial {\rm Re} D_n(\lambda;\tau_1,\tau_2)}{\partial \tau_l}|_{\lambda=i\omega} =R_l, \dfrac{\partial {\rm Im} D_n(\lambda;\tau_1,\tau_2)}{\partial \tau_l}|_{\lambda=i\omega} =I_l, \end{aligned}$$ where $l=1,2$. By the implicit function theorem, if $ {\rm det}\left(\begin{array}{cc} R_1& R_2\\I_1&I_2 \end{array}\right)=R_1I_2-R_2I_1\neq 0, $ then we have $$\label{juzhen} \Delta(\omega):=\left( \begin{array}{cc} \frac{\partial \tau_1}{\partial \sigma}&\frac{\partial \tau_1}{\partial \omega}\\\frac{\partial \tau_2}{\partial \sigma}&\frac{\partial \tau_2}{\partial \omega} \end{array}\right) \arrowvert_{\sigma=0,\omega\in \Omega_n}=-\left( \begin{array}{cc} R_1& R_2\\I_1&I_2 \end{array}\right)^{-1}\left( \begin{array}{cc} R_0&-I_0\\I_0&R_0 \end{array}\right) .$$ For any stability switching curve $\mathcal{T}_{j_1,j_2,n}^{\pm j}$, the direction of the curve corresponding to increasing $\omega\in \Omega_{j,n}$ is called the positive direction, i.e. 
from $(\tau_{1,j_1,n}^{\pm j}(a_{j,n}),\tau_{2,j_2,n}^{\mp j}(a_{j,n}))$ to $(\tau_{1,j_1,n}^{\pm j }(b_{j,n}),\tau_{2,j_2,n}^{\mp j}(b_{j,n}))$. The region on the left-hand (right-hand) side as we head in the positive direction of the curve is called the region on the left (right). As we mentioned in the previous section, $\mathcal{T}_{j_1,j_2,n}^{+j}$ is connected to $\mathcal{T}_{j_1+\delta_1^a,j_2-\delta_2^a,n}^{-j}$ at $a_{j,n}$, so the positive directions of the two curves are opposite. Since the tangent vector of $\mathcal{T}_{j_1,j_2,n}^{\pm j}$ at $p^{\pm}(\tau_{1,j_1,n}^{\pm},\tau_{2,j_2,n}^{\mp})$ along the positive direction is $(\frac{\partial \tau_1}{\partial \omega},\frac{\partial \tau_2}{\partial \omega})\mid_{p^{\pm}}\stackrel{\vartriangle}{=}\overrightarrow{T}_{p^{\pm}}$, the normal vector of $\mathcal{T}_{j_1,j_2,n}^{\pm j}$ pointing to the right region is $(\frac{\partial \tau_2}{\partial \omega},-\frac{\partial \tau_1}{\partial \omega})\mid_{p^{\pm}}\stackrel{\vartriangle}{=}\overrightarrow{n}_{p^{\pm}}$ (see Fig. \[fig:neighbor\]). On the other hand, as a pair of complex characteristic roots crosses the imaginary axis to the right half plane, $(\tau_1,\tau_2)$ moves along the direction $(\frac{\partial \tau_1}{\partial \sigma},\frac{\partial \tau_2}{\partial \sigma})\mid_{p^{\pm}}$. We conclude that if the inner product of these two vectors is positive, i.e., $$\label{delta} \delta(\omega)\mid_{p^{\pm}}:=\frac{\partial \tau_1}{\partial \sigma}\frac{\partial \tau_2}{\partial \omega}-\frac{\partial \tau_2}{\partial \sigma}\frac{\partial \tau_1}{\partial \omega}\mid_{p^{\pm}}>0,$$ then Eq. (\[character\]) has two more characteristic roots with positive real parts in the region on the right of $\mathcal{T}_{j_1,j_2,n}^{\pm j}$. If the inequality (\[delta\]) is reversed, then (\[character\]) has two more characteristic roots with positive real parts in the region on the left. It is easy to see that $\delta(\omega)={\rm det} \Delta(\omega)$. 
Since $ {\rm det}\left(\begin{array}{cc} -R_0& I_0\\-I_0&-R_0 \end{array}\right)=R_0^2+I_0^2\geq 0,$ condition (\[delta\]) can be written as $R_1I_2-R_2I_1>0$ provided that either $R_0\neq 0$ or $I_0\neq 0$; this is satisfied since we do not consider the case in which $i\omega$ is a multiple root of $D_n(\lambda;\tau_1,\tau_2)=0$, i.e., $\frac{d D_n(\lambda;\tau_1,\tau_2)}{d \lambda}\mid_{\lambda=i\omega}=R_0+iI_0\neq 0$. We can verify that $$\begin{array}{l} R_1I_2-R_2I_1\mid_{p^{\pm}}\\={\rm Im}\{\overline{-i\omega (P_{1,n}e^{-i\omega\tau_{1,j_1,n}^{\pm}}+P_{3,n}e^{-i\omega(\tau_{1,j_1,n}^{\pm}+\tau_{2,j_2,n}^{\mp})})}(-i\omega)(P_{2,n}e^{-i\omega\tau_{2,j_2,n}^{\mp}}+P_{3,n}e^{-i\omega(\tau_{1,j_1,n}^{\pm}+\tau_{2,j_2,n}^{\mp})})\}\\ =\pm\omega^2 |P_{2,n}\overline{P}_{3,n}-P_{0,n}\overline{P}_{1,n}|\sin \theta_{1,n}.\\ \end{array}$$ Hence, $$\label{pone} \begin{array}{l} \delta(\omega\in \mathring{\Omega}_{j,n})\mid_{p^+}>0,~~\forall p^+\in\mathcal{T}_{j_1,j_2,n}^{+ j},~ and ~ \delta(\omega\in \mathring{\Omega}_{j,n})\mid_{p^-}<0,~~\forall p^-\in\mathcal{T}_{j_1,j_2,n}^{- j}\\ \end{array}$$ since $\theta_{1,n}( \mathring{\Omega}_{j,n})\subset (0,\pi)$. Here, $\mathring{\Omega}_{j,n}$ denotes the interior of $\Omega_{j,n}$. We have the following conclusion. \[direction\] For any $j=1,2,\cdots,N$, we have $$\delta(\omega\in \mathring{\Omega}_{j,n}) >0(<0),~~\forall (\tau_1(\omega),\tau_2(\omega))\in\mathcal{T}_{j_1,j_2,n}^{+ j}((\tau_1(\omega),\tau_2(\omega))\in\mathcal{T}_{j_1,j_2,n}^{- j}).$$ Therefore, the region on the right of $\mathcal{T}_{j_1,j_2,n}^{+j}$ $(\mathcal{T}_{j_1,j_2,n}^{- j})$ has two more (fewer) characteristic roots with positive real parts. ![This is a part of the stability switching curves corresponding to $\Omega_{j,n}=[a_{j,n},b_{j,n}]$. The blue curve stands for $\mathcal{T}_{j_1,j_2,n}^{+ j}$, with two ends $A(\tau_{1,j_1,n}^{+j}(a_{j,n}),\tau_{2,j_2,n}^{-j}(a_{j,n}))$, and $B(\tau_{1,j_1,n}^{+j}(b_{j,n}),\tau_{2,j_2,n}^{-j}(b_{j,n}))$. 
The red curve denotes $\mathcal{T}_{j_1+\delta_1^a,j_2-\delta_2^a,n}^{- j}$, which is connected to $\mathcal{T}_{j_1,j_2,n}^{+ j}$ at $A$ corresponding to $a_{j,n}$, with the positive direction from $A$ to $C$. []{data-label="fig:neighbor"}](neighbor-eps-converted-to.pdf){width="70.00000%" height="40.00000%"} We can see from Fig. \[fig:neighbor\] that the region on the right of $\mathcal{T}_{j_1,j_2,n}^{+ j}$ and the region on the left of $\mathcal{T}_{j_1+\delta_1^a,j_2-\delta_2^a,n}^{- j}$, indicated by the black arrows, have two more characteristic roots with positive real parts. Thus, as we move along these curves, the stability crossing directions are consistent (see Fig. \[fig:neighbor\]: the region with two more characteristic roots with positive real parts lies on the same side of $\mathcal{T}_{j_1,j_2,n}^{+j}$ and $\mathcal{T}_{j_1+\delta_1^a,j_2-\delta_2^a,n}^{-j}$). Any given direction $\overrightarrow{l}=(l_1,l_2)$ points to the right region of the curve $\mathcal{T}_{j_1,j_2,n}^{\pm j}$ if its inner product with the right-hand side normal $(\frac{\partial \tau_2}{\partial \omega},-\frac{\partial \tau_1}{\partial \omega})$ is positive, i.e., $$\label{l1l2} l_1\frac{\partial \tau_2}{\partial \omega}-l_2\frac{\partial \tau_1}{\partial \omega}>0,$$ and it points to the left region of the curve $\mathcal{T}_{j_1,j_2,n}^{\pm j}$ if this inner product is negative. We have the following result. \[coro\] As $(\tau_1,\tau_2)$ crosses the curve $\mathcal{T}_{j_1,j_2,n}^{\pm j}$ along the direction $\overrightarrow{l}=(l_1,l_2)$, there are two more (fewer) characteristic roots with positive real parts if $$-l_1(I_0I_1+R_0R_1)-l_2(I_0I_2+R_0R_2)>0 (<0).$$ Proof. 
From (\[juzhen\]), the left side of (\[l1l2\]) becomes $$\label{cc} [-l_1(I_0I_1+R_0R_1)-l_2(I_0I_2+R_0R_2)]/[R_1I_2-I_1R_2].$$ If $-l_1(I_0I_1+R_0R_1)-l_2(I_0I_2+R_0R_2)>0$, then $(l_1,l_2)$ lies on the same (opposite) side as the right-hand side normal of $\mathcal{T}_{j_1,j_2,n}^{+j}$ ($\mathcal{T}_{j_1,j_2,n}^{-j}$). From Theorem \[direction\], we conclude that there are two more characteristic roots of (\[character\]) with positive real parts as $(\tau_1,\tau_2)$ crosses the curve along the direction $\overrightarrow{l}=(l_1,l_2)$. We can prove the result similarly when the inequality is reversed.               $\Box$ Theorem of Hopf bifurcation --------------------------- From the previous discussion, we have the following conclusion about Hopf bifurcation. ![A sketch of the transformation from the $(\tau_1,\tau_2)$ plane to the $(\delta_1,\delta_2)$ plane.[]{data-label="fig:transform"}](transform-eps-converted-to.pdf){width="65.00000%" height="40.00000%"} \[bifur\] For any $j=1,2,\cdots,N$, $\mathcal{T}_n^j$ is a Hopf bifurcation curve in the following sense: for any $p\in \mathcal{T}_n^j$ and for any smooth curve $\Gamma$ intersecting $\mathcal{T}_n^j$ transversely at $p$, denote the tangent of $\Gamma$ at $p$ by $\overrightarrow{l}$. If $ \frac{\partial {\rm Re}\lambda}{\partial \overrightarrow{l}}\mid_p\neq 0 $, and the other eigenvalues of (\[character\]) at $p$ have non-zero real parts, then system (\[diffusion predator\]) undergoes a Hopf bifurcation at $p$ when the parameters $(\tau_1,\tau_2)$ cross $\mathcal{T}_n^j$ at $p$ along $\Gamma$. Proof. Denote $p(\tau_{1}^0,\tau_2^0)$. Let $U$ be a neighbourhood of $p$. Suppose that the equation of the curve $\Gamma$ is $\Gamma(\tau_1,\tau_2)=0$. Introduce a mapping $J:U\rightarrow \mathbb{R}^2$, whose coordinate component functions are given by $\left\lbrace \begin{array}{c} \delta_1=\delta_1(\tau_1,\tau_2)\\ \delta_2=\delta_2(\tau_1,\tau_2) \end{array}\right.$. 
Suppose that $J$ locally maps $p(\tau_1^0,\tau_2^0)$, $\mathcal{T}_n^j$ and $\Gamma$ to $p'(0,0)$, the $\delta_1$ axis and the $\delta_2$ axis, respectively (shown in Fig. \[fig:transform\]), and that the Jacobian determinant $\frac{\partial (\delta_1,\delta_2)}{\partial (\tau_1,\tau_2)}\arrowvert _p$ of the mapping $J$ is not zero. Then, by the inverse function theorem, there exists a neighborhood of $p'$, $O(p')$, such that there is a unique inverse mapping $J^{-1}$ of $J$, $$\left\lbrace \begin{array}{c} \tau_1=\tau_1(\delta_1,\delta_2),\\ \tau_2=\tau_2(\delta_1,\delta_2), \end{array}\right. ~~~~(\delta_1,\delta_2)\in O(p').$$ Now the characteristic equation of system (\[diffusion predator\]) with $\delta_1=0$ has a purely imaginary root $i\omega$ at $\delta_2=0$. We only need to further verify that $\frac{d {\rm Re} \lambda}{d \delta_2}\neq 0$. In fact, we have $\delta_1=\Gamma(\tau_1,\tau_2)$, and the tangent vector of the curve $\Gamma$ is $\overrightarrow{l}=(-\frac{\partial \Gamma}{\partial \tau_2},\frac{\partial \Gamma}{\partial \tau_1})^T=(-\frac{\partial \delta_1}{\partial \tau_2},\frac{\partial \delta_1}{\partial \tau_1})^T$. For convenience, denote $J_1=\left(\begin{array}{cc} \frac{\partial \delta_1}{\partial \tau_1}&\frac{\partial \delta_1}{\partial \tau_2}\\ \frac{\partial \delta_2}{\partial \tau_1}&\frac{\partial \delta_2}{\partial \tau_2} \end{array} \right) $. Obviously, $\overrightarrow{e}_{\delta_2}=(0,1)^T=\frac{1}{{\rm det} J_1} J_1 ~\overrightarrow{l}$. 
Since $\frac{d {\rm Re} \lambda}{d \delta_2}=\frac{d \sigma}{d \delta_2}=\frac{\partial \sigma}{\partial \overrightarrow{e}_{\delta_2}}=(\frac{\partial \sigma}{\partial \delta_1},\frac{\partial \sigma}{\partial \delta_2})^T \cdotp \overrightarrow{e}_{\delta_2}$, we further treat the inner product as matrix multiplication $(\frac{\partial \sigma}{\partial \delta_1},\frac{\partial \sigma}{\partial \delta_2}) \overrightarrow{e}_{\delta_2}=(\frac{\partial \sigma}{\partial \tau_1},\frac{\partial \sigma}{\partial \tau_2})J_1^{-1} \frac{1}{{\rm det}J_1}J_1 \overrightarrow{l}=\frac{1}{{\rm det}J_1}(\frac{\partial \sigma}{\partial \tau_1},\frac{\partial \sigma}{\partial \tau_2})^T\cdotp \overrightarrow{l}=\frac{1}{{\rm det}J_1}\frac{\partial {\rm \sigma}}{\partial \overrightarrow{l}} $. Thus, the transversality condition $\frac{d {\rm Re} \lambda}{d \delta_2}\mid_{\delta_2=0}\neq 0$ holds if $\frac{\partial {\rm Re \lambda}}{\partial \overrightarrow{l}}\mid_p\neq 0$. According to Corollary 2.4 in Ref. [@JWu], the conclusion follows.               $\Box$ Suppose that there exist $\omega_{j_1,k_1}\in \Omega_{j_1,k_1}$ and $\omega_{j_2,k_2}\in\Omega_{j_2,k_2}$ such that $\mathcal{T}_{k_1}^{j_1}$ and $\mathcal{T}_{k_2}^{j_2}$ intersect. Then there are two pairs of purely imaginary roots of (\[character\]) at the intersection. Thus, system (\[diffusion predator\]) may undergo double Hopf bifurcations near the positive equilibrium $E^*$ at the intersection of two stability switching curves. Normal form on the center manifold for double Hopf bifurcation {#normal form} ============================================================== From the previous section, when two stability switching curves intersect, system (\[diffusion predator\]) may undergo double Hopf bifurcations near the positive equilibrium $E^*$. 
In order to investigate the dynamical behavior of system (\[diffusion predator\]) near the double Hopf bifurcation point, we calculate the normal form of the double Hopf bifurcation by applying the normal form method for partial functional differential equations. [@Faria] Without loss of generality, we assume $\tau_1>\tau_2$ throughout this section; otherwise, all derivations proceed in a similar way. Let $\overline{u}(x,t)=u(x,\tau_1t)-u^*$ and $\overline{v}(x,t)=v(x,\tau_1t)-v^*$, and drop the bars; then system (\[diffusion predator\]) can be written as $$\label{diffusion predator fourier} \frac{\partial }{\partial t} \left( \begin{array}{l} u(x,t) \\ v(x,t) \\ \end{array}\right) = \tau_1(D\Delta + A) \left( \begin{array}{l} u(x,t)\\ v(x,t)\\ \end{array} \right) +\tau_1 B \left( \begin{array}{l} u(x,t-1)\\ v(x,t-1)\\ \end{array} \right) +\tau_1C\left( \begin{array}{l} u(x,t-\tau_2/\tau_1)\\ v(x,t-\tau_2/\tau_1)\\ \end{array} \right)+\tau_1\left( \begin{array}{l} f_1\\ f_2\\ \end{array} \right),$$ where $$\begin{array}{l} f_1=-\frac{r_1}{K}u(x,t)u(x,t-1)-a(1-m)u(x,t)v(x,t),\\ f_2=-\frac{r_2}{\gamma(1-m)u^*}v(x,t)v(x,t-\tau_2/\tau_1)+\frac{r_2}{u^*}v(x,t)u(x,t-\tau_2/\tau_1)+\frac{r_2}{u^*}v(x,t-\tau_2/\tau_1)u(x,t-\tau_2/\tau_1)\\-\frac{r_2\gamma(1-m)}{u^*}u^2(x,t-\tau_2/\tau_1)+\frac{r_2}{\gamma(1-m)u^{*2}}v(x,t)u(x,t-\tau_2/\tau_1)v(x,t-\tau_2/\tau_1)\\-\frac{r_2}{u^{*2}}v(x,t)u^2(x,t-\tau_2/\tau_1)-\frac{r_2}{u^{*2}}v(x,t-\tau_2/\tau_1)u^2(x,t-\tau_2/\tau_1)+\frac{r_2\gamma(1-m)}{u^{*2}}u^3(x,t-\tau_2/\tau_1). 
\end{array}$$ For the Neumann boundary condition, we define the real-valued Hilbert space $$X=\left\lbrace (u,v)^T\in H^2(0,l\pi)\times H^2(0,l\pi):\frac{\partial u}{\partial x}=\frac{\partial v}{\partial x}=0~ at ~ x=0,l\pi\right\rbrace,$$ and denote the corresponding complexification space of $X$ by $X_{\mathbb{C}}:=X\oplus iX=\{U_1+iU_2:U_1,U_2\in X\}$, with the usual complex-valued $L^2$ inner product $\langle U,V\rangle=\int_0^{l\pi}(\overline{u}_1v_1+\overline{u}_2v_2)dx,$ for $U=(u_1,u_2)^T$, $V=(v_1,v_2)^T\in X_{\mathbb{C}}$. Let $\mathscr{C}:=C([-1,0],X_{\mathbb{C}})$ denote the phase space with the sup norm. We write $u^t\in\mathscr{C}$ for $u^t(\theta)=u(t+\theta)$, $-1\leq \theta\leq 0$. Denote the double Hopf bifurcation point by $(\tau_1^*,\tau_2^*)$. Introduce two bifurcation parameters $\sigma=(\sigma_1,\sigma_2)$ by setting $\sigma_1=\tau_1-\tau_1^*$, $\sigma_2=\tau_2-\tau_2^*$, and denote $U(t)=(u(t),v(t))^T$; then (\[diffusion predator fourier\]) can be written as $$\label{dudt} \dfrac{dU(t)}{dt}=D(\tau_1^*+\sigma_1,\tau_2^*+\sigma_2)\Delta U(t)+L(\tau_1^*+\sigma_1,\tau_2^*+\sigma_2)(U^t)+F(\tau_1^*+\sigma_1,\tau_2^*+\sigma_2,U^t),$$ where $$\begin{array}{l} D(\tau_1^*+\sigma_1,\tau_2^*+\sigma_2)=(\tau_1^*+\sigma_1)D=\tau_1^*D+\sigma_1 D,\\ L(\tau_1^*+\sigma_1,\tau_2^*+\sigma_2)U^t\\=(\tau_1^*+\sigma_1)AU^t(0)+(\tau_1^*+\sigma_1)BU^t(-1)+(\tau_1^*+\sigma_1)CU^t(-(\tau_2^*+\sigma_2)/(\tau_1^*+\sigma_1))\\=\tau_1^*(AU^t(0)+BU^t(-1)+CU^t(-\tau_2^*/\tau_1^*))+\sigma_1(AU^t(0)+BU^t(-1)+CU^t(-\tau_2^*/\tau_1^*))\\+\tau_1^*C(U^t(-(\tau_2^*+\sigma_2)/(\tau_1^*+\sigma_1))-U^t(-\tau_2^*/\tau_1^*))+\sigma_1C(U^t(-(\tau_2^*+\sigma_2)/(\tau_1^*+\sigma_1))-U^t(-\tau_2^*/\tau_1^*)),\\ F(\tau_1^*+\sigma_1,\tau_2^*+\sigma_2,U^t)= (\tau_1^*+\sigma_1) (f_1, f_2)^T. 
\end{array}$$ Consider the linearized system of (\[dudt\]) $$\label{dudtlinear} \dfrac{dU(t)}{dt}=\tau_1^*D\Delta U(t)+\tau_1^*(AU^t(0)+BU^t(-1)+CU^t(-\tau_2^*/\tau_1^*))\stackrel{\vartriangle}{=}D_0\Delta U(t)+L_0(U^t).$$ It is well known that the eigenvalues of $D\Delta$ on $X$ are $-d_1\frac{n^2}{l^2}$ and $-d_2\frac{n^2}{l^2}$, $n\in \mathbb{N}_0=\{0,1,2,\cdots\}$, with corresponding normalized eigenfunctions $\beta_n^{1}(x)=\gamma_n(x)(1,0)^T$ and $\beta_n^{2}(x)=\gamma_n(x)(0,1)^T$, where $\gamma_n(x)=\dfrac{\cos\frac{n}{l}x}{\parallel\cos\frac{n}{l}x\parallel_{L^2}}$. Define $\mathscr{B}_n$, the subspace of $\mathscr{C}$, by $\mathscr{B}_n:={\rm span} \left\lbrace \langle v(\cdot),\beta_n^{j}\rangle\beta_n^{j}~\arrowvert ~v\in \mathscr{C},j=1,2\right\rbrace$, satisfying $L(\tau_1,\tau_2)(\mathscr{B}_n)\subset {\rm span}\{\beta_n^{1},\beta_n^{2}\}$. For simplicity of notation, we write $\left\langle v(\cdot),\beta_n \right\rangle=\left( \langle v(\cdot),\beta_n^{1} \rangle, \langle v(\cdot),\beta_n^{2} \rangle \right)^T. $ System (\[dudt\]) can be written as $$\label{dudt2} \dfrac{dU(t)}{dt}=D_0\Delta U(t)+L_0(U^t)+G(\sigma,U^t),$$ where $$\begin{aligned} G(\sigma,U^t)&=\sigma_1(D\Delta U^t(0)+AU^t(0)+BU^t(-1)+CU^t(-\tau_2^*/\tau_1^*))\\&+(\tau_1^*+\sigma_1)C(U^t(-(\tau_2^*+\sigma_2)/(\tau_1^*+\sigma_1))-U^t(-\tau_2^*/\tau_1^*))\\&+F(\tau_1^*+\sigma_1,\tau_2^*+\sigma_2,U^t). 
\end{aligned}$$ Rewrite (\[dudt2\]) as an abstract ordinary differential equation on $\mathscr{C}$ [@Faria] $$\label{ode} \frac{d}{dt}U^t=AU^t+X_0G(\sigma,U^t),$$ where $A$ is the infinitesimal generator of the $C_0$-semigroup of solution maps of the linear equation (\[dudtlinear\]), defined by $ \label{A} A:\mathscr{C}_0^1\cap\mathscr{C}\rightarrow \mathscr{C}, ~A\varphi=\dot{\varphi}+X_0[D_0\Delta\varphi(0)+L_0(\varphi)-\dot{\varphi}(0)], $ with ${\rm dom}(A)=\{\varphi\in\mathscr{C}:\dot{\varphi}\in\mathscr{C},\varphi(0)\in {\rm dom}(\Delta)\}$, and $X_0$ is given by $X_0(\theta)=0$ for $\theta\in[-1,0)$ and $X_0(0)=I$. Then on $\mathscr{B}_n$, the linear equation $\frac{d}{dt}U(t)=D_0\Delta U(t)+L_0(U^t)$ is equivalent to the retarded functional differential equation on $\mathbb{C}^2$: $ \label{RFDE} \dot{z}(t)=-\frac{n^2}{l^2}D_0z(t)+L_0z^t. $ Define functions of bounded variation $\eta_m\in BV([-1,0],\mathbb{R})$ such that $$-\frac{k_m^2}{l^2}D_0\varphi(0)+L_0(\varphi)=\int_{-1}^0d\eta_m(\theta)\varphi(\theta), \varphi\in \mathscr{C}.$$ Let $A_m$ ($m=1,2$) denote the infinitesimal generator of the semigroup generated by (\[RFDE\]), and $A_m^*$ denote the formal adjoint of $A_m$ under the bilinear form $$(\alpha,\beta)_m=\alpha(0)\beta(0)-\int_{-1}^0\int_0^\theta\alpha(\xi-\theta)d\eta_m(\theta)\beta(\xi)d\xi.$$ From the previous section, we know that system (\[dudtlinear\]) has two pairs of purely imaginary eigenvalues $ \pm i\omega_{j_1,k_1}\tau_1^*,$ $\pm i\omega_{j_2,k_2}\tau_1^* $ at the double Hopf bifurcation point, while the other eigenvalues have nonzero real parts. For simplicity of notation, we denote them by $\{\pm i\omega_1\tau_1^*,\pm i\omega_2\tau_1^*\}$. Suppose that $\omega_1:\omega_2\neq m:n$ for $m,n\in\mathbb{N}$ and $1\leq m,n\leq 3$, i.e., we do not consider the strongly resonant cases. 
Using the formal adjoint theory, we decompose $\mathscr{C}$ by $\{\pm i\omega_1\tau_1^*,\pm i\omega_2\tau_1^*\}$ as $\mathscr{C}=P_m\oplus Q_m$, where $Q_m=\{\phi\in \mathscr{C}:(\psi,\phi)_m=0,~for ~\psi~\in P_m^*\}$, $m=1,2$. We choose the bases $\varPhi_1(\theta)=(\phi_1(\theta),\overline{\phi}_1(\theta))$, $\varPhi_2(\theta)=(\phi_3(\theta),\overline{\phi}_3(\theta))$, $\varPsi_1(s)=(\psi_1(s),\overline{\psi}_1(s))^T$, $\varPsi_2(s)=(\psi_3(s),\overline{\psi}_3(s))^T$, in $P_1$, $P_2$, $P_1^*$, $P_2^*$, respectively, satisfying $(\Psi_m,\Phi_m)_{m}=I$, and $$A_m\Phi_m=\Phi_mB_m,~A_m^*\Psi_m=B_m \Psi_m,~ m=1,2,$$ with $B_1={ \rm diag} (i\omega_1\tau_1^*,-i\omega_1\tau_1^*)$, $B_2={ \rm diag} (i\omega_2\tau_1^*,-i\omega_2\tau_1^*)$. Denote $\Phi(\theta)=(\Phi_1(\theta),\Phi_2(\theta))$, and $ \Psi(s)={ (\Psi_1(s),\Psi_2(s))^T}.$ By direct calculation, we have $$\begin{aligned} \phi_1(\theta)=(1,r_{12})^Te^{i\omega_1\tau_1^*\theta},\phi_3(\theta)=(1,r_{32})^Te^{i\omega_2\tau_1^*\theta},\\ \psi_1(s)=D_1(1,r_{12}^*)e^{-i\omega_1\tau_1^*s}, \psi_3(s)=D_2(1,r_{32}^*)e^{-i\omega_2\tau_1^*s}, \end{aligned}$$ where $$\begin{array}{ll} r_{12}=\dfrac{\gamma(1-m)r_2e^{-i\omega_1\tau_2^*}}{r_2e^{-i\omega_1\tau_2^*}+d_2\frac{n^2}{l^2}+i\omega_1},~~~~ r_{32}=\dfrac{\gamma(1-m)r_2e^{-i\omega_2\tau_2^*}}{r_2e^{-i\omega_2\tau_2^*}+d_2\frac{n^2}{l^2}+i\omega_2},\\ r_{12}^*=\dfrac{-a(1-m)u^*}{r_2e^{-i\omega_1\tau_2^*}+d_2\frac{n^2}{l^2}+i\omega_1},~~~~ r_{32}^*=\dfrac{-a(1-m)u^*}{r_2e^{-i\omega_2\tau_2^*}+d_2\frac{n^2}{l^2}+i\omega_2},\\ D_1=\frac{1 }{1+r_{12}^*r_{12}-\tau_1^*\frac{r_1}{K}u^*e^{-i\omega_1\tau_1^*}+r_{12}^*\gamma(1-m)r_2\tau_2^*e^{-i\omega_1\tau_2^*}-r_2\tau_2^*e^{-i\omega_1\tau_2^*}r_{12}^*r_{12} },\\D_2=\frac{1 }{1+r_{32}^*r_{32}-\tau_1^*\frac{r_1}{K}u^*e^{-i\omega_2\tau_1^*}+r_{32}^*\gamma(1-m)r_2\tau_2^*e^{-i\omega_2\tau_2^*}-r_2\tau_2^*e^{-i\omega_2\tau_2^*}r_{32}^*r_{32} }. 
\end{array}$$ Now, we can decompose $\mathscr{C}$ into a center subspace and its orthogonal complement, i.e., $$\label{ker} \mathscr{C}=\mathcal{P}\oplus{\rm Ker}\pi,$$ where $\pi:\mathscr{C}\rightarrow\mathcal{P}$ is the projection defined by $ \pi(\varphi)=\sum_{m=1}^2\Phi_m(\Psi_m,\langle\varphi(\cdot),\beta_{k_m}\rangle)_m\cdot\beta_{k_m}, $ with $\beta_{k_m}=\left( \beta_{k_m}^{1}, \beta_{k_m}^{2}\right) $, $m=1,2$. Define the enlarged phase space [@Faria] $\mathscr{BC}:=\{ \psi:[-1,0]\rightarrow X_{\mathbb{C}}:\psi {\rm ~is~ continuous~ on~} [-1,0), $ $\exists \lim_{\theta\rightarrow 0^-}\psi(\theta)\in X_{\mathbb{C} }\} .$ According to (\[ker\]), $U^t$ can be decomposed as $$\label{Ut} U^t(\theta)=\phi_1(\theta)z_1\gamma_{k_1}+\overline{\phi}_1(\theta)z_2\gamma_{k_1}+\phi_3(\theta)z_3\gamma_{k_2}+\overline{\phi}_3(\theta)z_4\gamma_{k_2}+w(\theta)\stackrel{\vartriangle}{=}\Phi(\theta) z_x+w(\theta),$$ where $w \in \mathscr{C}^1\bigcap{\rm Ker}\pi:=\mathcal{Q}^1$ for any $t$. Then in $\mathscr{BC}$ the system (\[ode\]) is equivalent to the system $$\label{zdoty} \begin{array}{l} \dot{z_1}=i\omega_1\tau_1^*z_1+\psi_1(0) \langle G(\sigma,\Phi(\theta) z_x+w(\theta)),\beta_{k_1}\rangle,\\ \dot{z_2}=-i\omega_1\tau_1^*z_2+\overline{\psi}_1(0)\langle G(\sigma,\Phi(\theta) z_x+w(\theta)),\beta_{k_1}\rangle,\\ \dot{z_3}=i\omega_2\tau_1^*z_3+\psi_3(0) \langle G(\sigma,\Phi(\theta) z_x+w(\theta)),\beta_{k_2}\rangle,\\ \dot{z_4}=-i\omega_2\tau_1^*z_4+\overline{\psi}_3(0) \langle G(\sigma,\Phi(\theta) z_x+w(\theta)),\beta_{k_2}\rangle,\\ \frac{dw}{dt} =A_1w+(I-\pi)X_0 G(\sigma,\Phi(\theta) z_x+w(\theta)), \end{array}$$ where $A_1$ is the restriction of $A$ to $\mathcal{Q}^1\subset{\rm Ker}\pi$, regarded as an operator from $\mathcal{Q}^1$ to ${\rm Ker}\pi$, with $A_1\varphi=A\varphi$ for $\varphi\in \mathcal{Q}^1$. 
Consider the formal Taylor expansion $$G(\sigma,\varphi)=\frac{1}{2!}G_2(\sigma,\varphi)+\frac{1}{3!}G_3(\sigma,\varphi),$$ where $G_j$ is the $j{\rm th}$ Fréchet derivative of $G$, which we calculate in section 1 of the supplementary material. Then (\[zdoty\]) can be written as $$\label{zdotTaylor} \begin{array}{l} \dot{z}=Bz+\sum\limits_{j\geq 2}\frac{1}{j!}f_j^1(z,w,\sigma),\\ \frac{d}{dt}w =A_1w+\sum\limits_{j\geq 2}\frac{1}{j!}f_j^2(z,w,\sigma), \end{array}$$ where $z=(z_1,z_2,z_3,z_4)\in \mathbb{C}^4, w\in \mathcal{Q}^1$, $B={\rm diag}(B_1,B_2)={\rm diag}(i\omega_1\tau_1^*,-i\omega_1\tau_1^*,i\omega_2\tau_1^*,-i\omega_2\tau_1^*)$, and $f_j=(f_j^1,f_j^2), j\geq 2$, are defined by $$\label{fj1fj2} \begin{array}{l} f_j^1(z,w,\sigma)=\left(\begin{array}{c} \psi_1(0) \langle G_j(\sigma,\Phi(\theta) z_x+w(\theta)),\beta_{k_1}\rangle\\ \overline{\psi}_1(0)\langle G_j(\sigma,\Phi(\theta) z_x+w(\theta)),\beta_{k_1}\rangle\\ \psi_3(0) \langle G_j(\sigma,\Phi(\theta) z_x+w(\theta)),\beta_{k_2}\rangle\\ \overline{\psi}_3(0) \langle G_j(\sigma,\Phi(\theta) z_x+w(\theta)),\beta_{k_2}\rangle\\ \end{array} \right),\\ f_j^2=(I-\pi)X_0 G_j(\sigma,\Phi(\theta) z_x+w(\theta)). \end{array}$$ Similarly to Ref. 
[@Faria], and using the notation therein, we define the operators $M_j=(M_j^1,M_j^2)$, $j\geq 2$, by $$\label{mjU} \begin{array}{ll} M_j^1:V_j^{4+2}(\mathbb{C}^4)\rightarrow V_j^{4+2}(\mathbb{C}^4),&(M_j^1p)(z,\sigma)=D_zp(z,\sigma)Bz-Bp(z,\sigma),\\ M_j^2:V_j^{4+2}(\mathcal{Q}_1)\subset V_j^{4+2}({\rm Ker\pi})\rightarrow V_j^{4+2}({\rm Ker\pi}),&(M_j^2h)(z,\sigma)=D_zh(z,\sigma)Bz-A_1h(z,\sigma).\\ \end{array}$$ When $\omega_1:\omega_2\neq m:n$ for $m,n\in\mathbb{N}$ and $1\leq m,n\leq 3$ (i.e., the case of nonresonant double Hopf bifurcation), it is easy to verify that $$\begin{aligned} &{\rm Im}(M_2^1)^c={\rm span}\left\lbrace \sigma_1z_1e_1,\sigma_2z_1e_1,\sigma_1z_2e_2,\sigma_2z_2e_2,\sigma_1z_3e_3,\sigma_2z_3e_3,\sigma_1z_4e_4,\sigma_2z_4e_4 \right\rbrace,\\ &{\rm Im}(M_3^1)^c={\rm span}\left\{ z_1^2z_2e_1 , z_1z_3z_4e_1 , z_1z_2^2e_2 , z_2z_3z_4e_2, z_3^2z_4e_3 , z_1z_2z_3e_3 , z_3z_4^2e_4, z_1z_2z_4e_4 \right\}, \end{aligned}$$ where $e_1=(1,0,0,0)^T,e_2=(0,1,0,0)^T,e_3=(0,0,1,0)^T,e_4=(0,0,0,1)^T$. According to Ref. 
[@Faria], by recursive transformations of variables of the form $$\label{zyalpha} (z,w,\sigma)=(\widehat{z},\widehat{w},\sigma)+\frac{1}{j!}(U_j^1(\widehat{z},\sigma),U_j^2(\widehat{z},\sigma),0),$$ with $U_j=(U_j^1,U_j^2)\in V_j^{4+2}(\mathbb{C}^4)\times V_j^{4+2}(\mathcal{Q}_1)$, system (\[zdotTaylor\]) is transformed into the normal form $$\label{zdotnormalform} \begin{array}{l} \dot{z}=Bz+\sum\limits_{j\geq 2}\frac{1}{j!}g_j^1(z,w,\sigma),\\ \frac{dw}{dt} =A_1w+\sum\limits_{j\geq 2}\frac{1}{j!}g_j^2(z,w,\sigma), \end{array}$$ where $g_j=(g_j^1,g_j^2), j\geq 2$, are given by $g_j(z,w,\sigma)=\overline{f}_j(z,w,\sigma)-M_jU_j(z,\sigma)$, and $U_j\in V_j^{4+2}(\mathbb{C}^4)\times V_j^{4+2}(\mathcal{Q}_1)$ are expressed as $$\label{Ujzalpha} U_j(z,\sigma)=(M_j)^{-1}{\rm Proj}_{{\rm Im}(M_j^1)\times{\rm Im}(M_j^2)}\circ\overline{f}_j(z,0,\sigma),$$ where $\overline{f}_j=(\overline{f}_j^1,\overline{f}_j^2)$ stand for the terms of order $j$ in $(z,w)$, obtained after the computation of normal forms up to order $j-1$. From Ref. [@Faria], the normal form truncated to the third order has the following form $$\dot{z}=Bz+\frac{1}{2!}g_2^1(z,0,\sigma)+\frac{1}{3!}g_3^1(z,0,0)+h.o.t..$$ Here $g_3^1(z,0,0)={\rm Proj}_{{\rm Ker}(M_3^1)}\overline{f}_3^1(z,0,0)$, where $$\label{f31bar} \begin{aligned} \overline{f}_3^1(z,0,0)=f_3^1(z,0,0)+\frac{3}{2}[D_zf_2^1(z,0,0)U_2^1(z,0)\\+D_wf_2^1(z,0,0)U_2^2(z,0)-D_zU_2^1(z,0)g_2^1(z,0,0)], \end{aligned}$$ and $(U_2^1(z,\sigma),U_2^2(z,\sigma))\in V_2^{4+2}(\mathbb{C}^4)\times V_2^{4+2}(\mathcal{Q}_1)$ is given by (\[Ujzalpha\]). The calculations of $g_2^1(z,0,\sigma)$ and $g_3^1(z,0,0)$ depend heavily on tedious derivations, such as solving for the center manifold function and computing several projections; we therefore leave the details to sections 2 and 3 of the supplementary material. 
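The resonance structure underlying ${\rm Ker}(M_3^1)$ admits a quick numerical check: $M_j^1$ acts diagonally on monomials, sending $z^qe_k$ (with $q=(q_1,q_2,q_3,q_4)$) to $(q\cdot\lambda-\lambda_k)z^qe_k$, where $\lambda=i\tau_1^*(\omega_1,-\omega_1,\omega_2,-\omega_2)$, so the kernel consists exactly of the resonant monomials listed in ${\rm Im}(M_3^1)^c$. The sketch below is illustrative only; the frequency values are the ones obtained later in section \[simulations\].

```python
import numpy as np

def homological_eigenvalue(q, k, w1, w2, tau1=1.0):
    """Eigenvalue of M_j^1 on the monomial z^q e_k:
    (M_j^1 p)(z) = D_z p(z) B z - B p(z) with
    B = tau1*diag(i*w1, -i*w1, i*w2, -i*w2), so z^q e_k is an
    eigenvector with eigenvalue q.lam - lam_k."""
    lam = 1j * tau1 * np.array([w1, -w1, w2, -w2])
    return np.dot(q, lam) - lam[k]

# Frequencies at the double Hopf point HH found in the simulations section.
w1, w2 = 0.61081, 0.94964

# The eight third-order resonant monomials listed in Im(M_3^1)^c:
resonant = [((2, 1, 0, 0), 0), ((1, 0, 1, 1), 0),   # z1^2 z2 e1, z1 z3 z4 e1
            ((1, 2, 0, 0), 1), ((0, 1, 1, 1), 1),   # z1 z2^2 e2, z2 z3 z4 e2
            ((0, 0, 2, 1), 2), ((1, 1, 1, 0), 2),   # z3^2 z4 e3, z1 z2 z3 e3
            ((0, 0, 1, 2), 3), ((1, 1, 0, 1), 3)]   # z3 z4^2 e4, z1 z2 z4 e4
```

Each listed monomial has eigenvalue exactly zero, while any other cubic monomial has a nonzero eigenvalue precisely because $\omega_1:\omega_2$ is nonresonant.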
Finally, the normal form truncated to the third order on the center manifold for the double Hopf bifurcation is obtained as follows: $$\label{normalform} \begin{aligned} &\dot{z}_1=i\omega_1\tau_1^*z_1+K_{11}\sigma_1z_1+K_{21}\sigma_2z_1+ K_{2100}z_1^2z_2+K_{1011}z_1z_3z_4, \\ &\dot{z}_2=-i\omega_1\tau_1^*z_2+\overline{K_{11}}\sigma_1z_2+\overline{K_{21}}\sigma_2z_2+ \overline{K_{2100}}z_1z_2^2+\overline{K_{1011}}z_2z_3z_4,\\ &\dot{z}_3=i\omega_2\tau_1^*z_3+K_{13}\sigma_1z_3+K_{23}\sigma_2z_3+ K_{0021}z_3^2z_4+K_{1110}z_1z_2z_3 ,\\ &\dot{z}_4=-i\omega_2\tau_1^*z_4+\overline{K_{13}}\sigma_1z_4+\overline{K_{23}}\sigma_2z_4+ \overline{K_{0021}}z_3z_4^2+\overline{K_{1110}}z_1z_2z_4 . \end{aligned}$$ Make the polar coordinate transformation $$\begin{array}{ll} z_1=\rho_1\cos \theta_1+i\rho_1\sin\theta_1,& z_2=\rho_1\cos \theta_1-i\rho_1\sin\theta_1,\\z_3=\rho_2\cos \theta_2+i\rho_2\sin\theta_2,& z_4=\rho_2\cos \theta_2-i\rho_2\sin\theta_2, \end{array}$$ where $\rho_1,\rho_2>0$. Denote $\epsilon_1={\rm Sign}({\rm Re}K_{2100})$, $\epsilon_2={\rm Sign}({\rm Re}K_{0021})$, rescale by $\widehat{\rho}_1=\rho_1\sqrt{|{\rm Re}K_{2100}|}$, $\widehat{\rho}_2=\rho_2\sqrt{|{\rm Re}K_{0021}|}$, $\widehat{t}=t\epsilon_1$, and drop the hats; then we obtain the system equivalent to (\[normalform\]) $$\label{normalformcylin} \begin{aligned} &\dot{\rho}_1=\rho_1(\nu_1+\rho_1^2+b\rho_2^2),\\ &\dot{\rho}_2=\rho_2(\nu_2+c\rho_1^2+d\rho_2^2). \end{aligned}$$ Here $$\begin{aligned} & \nu_1=\epsilon_1({\rm Re}K_{11}\sigma_1+{\rm Re}K_{21}\sigma_2), \nu_2=\epsilon_1({\rm Re}K_{13}\sigma_1+{\rm Re}K_{23}\sigma_2), \\&b=\frac{\epsilon_1\epsilon_2{\rm Re}K_{1011}}{{\rm Re}K_{0021}}, c=\frac{{\rm Re}K_{1110}}{{\rm Re}K_{2100}},d=\epsilon_1\epsilon_2.\\ \end{aligned}$$ As discussed in Chapter 7.5 of Ref. [@Guckenheimer], there are twelve distinct kinds of unfoldings for Eq. (\[normalformcylin\]) (see Table 1). 
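The equilibria of the truncated amplitude system (\[normalformcylin\]) organize the local bifurcation diagram: besides the origin and the two single-mode equilibria, a mixed-mode equilibrium is obtained by solving the linear system $\nu_1+\rho_1^2+b\rho_2^2=0$, $\nu_2+c\rho_1^2+d\rho_2^2=0$ in $(\rho_1^2,\rho_2^2)$. A minimal numerical sketch (the coefficient and $\nu$ values below are merely illustrative):

```python
import numpy as np

def amplitude_rhs(rho, nu1, nu2, b, c, d):
    """Right-hand side of rho1' = rho1*(nu1 + rho1^2 + b*rho2^2),
    rho2' = rho2*(nu2 + c*rho1^2 + d*rho2^2)."""
    p1, p2 = rho
    return np.array([p1 * (nu1 + p1**2 + b * p2**2),
                     p2 * (nu2 + c * p1**2 + d * p2**2)])

def equilibria(nu1, nu2, b, c, d):
    """Nonnegative equilibria: the origin, the two single-mode states, and
    (when it lies in the positive quadrant) the mixed-mode state."""
    eqs = [(0.0, 0.0)]
    if -nu1 > 0:
        eqs.append((np.sqrt(-nu1), 0.0))
    if -nu2 / d > 0:
        eqs.append((0.0, np.sqrt(-nu2 / d)))
    det = d - b * c
    if det != 0:
        p1sq = (b * nu2 - d * nu1) / det   # Cramer's rule for (rho1^2, rho2^2)
        p2sq = (c * nu1 - nu2) / det
        if p1sq > 0 and p2sq > 0:
            eqs.append((np.sqrt(p1sq), np.sqrt(p2sq)))
    return eqs
```

Note that $d-bc$, the determinant entering the mixed-mode solution, is exactly the quantity distinguishing the sub-cases in Table 1.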
\[twelve\]

  Case     Ia    Ib    II    III   IVa   IVb   V     VIa   VIb   VIIa   VIIb   VIII
  -------- ----- ----- ----- ----- ----- ----- ----- ----- ----- ------ ------ ------
  $d$      +1    +1    +1    +1    +1    +1    -1    -1    -1    -1     -1     -1
  $b$      +     +     +     -     -     -     +     +     +     -      -      -
  $c$      +     +     -     +     -     -     +     -     -     +      +      -
  $d-bc$   +     -     +     +     +     -     -     +     -     +      -      -

  : The twelve unfoldings of system (\[normalformcylin\]).

In section \[simulations\], case VIa arises; thus we draw the bifurcation set and phase portraits for the unfolding of case VIa in Fig. \[fig:VIa\].

![Phase portraits for the unfoldings of case VIa with $\epsilon_1=1$.[]{data-label="fig:VIa"}](VIa-eps-converted-to.pdf){height="44.00000%"}

Numerical simulations {#simulations}
=====================

In this section, we carry out numerical simulations. We choose $$\label{para1} r_1=0.8,r_2=1,a=1.3,K=0.7,\gamma=1,m=0.27,l=2, d_1=0.3,d_2=0.4,$$ and obtain the unique positive constant equilibrium $E^*(0.4358,0.3181)$, which is globally asymptotically stable when $\tau_1=\tau_2=0$ according to Remark \[Du\]. To illustrate the dynamics in the presence of delays, we follow the process given in section 2.1. As shown in Fig. \[fig:F0T0\] a), $F_0(0)>0$, and $F_0(\omega)=0$ has four roots $a_{1,0}=0.2587,b_{1,0}=0.6682,a_{2,0}=0.7697$, and $b_{2,0}=1.1791$. The crossing set is $\Omega_{1,0}\bigcup\Omega_{2,0}=[a_{1,0},b_{1,0}]\bigcup[a_{2,0},b_{2,0}]$. For the two endpoints of $\Omega_{1,0}$, we have $\theta_{1,0}(a_{1,0})=\pi$, $\theta_{2,0}(a_{1,0})=\pi$, $\theta_{1,0}(b_{1,0})=\pi$, and $\theta_{2,0}(b_{1,0})=0$, and $\delta_1^a=1$, $\delta_2^a=1$, $\delta_1^b=1$, $\delta_2^b=0$. From (\[Tzfk\]) and (\[Tk\]), we can get the stability switching curves $\mathcal{T}^1_0$ corresponding to $\Omega_{1,0}$, which are shown in Fig. \[fig:F0T0\] b). 
From the previous discussion in (\[connect\]), $\mathcal{T}_{j_1,j_2,0}^{+1}$ is connected to $\mathcal{T}_{j_1+1,j_2-1,0}^{-1}$ at the left endpoint $a_{1,0}$ and to $\mathcal{T}_{j_1+1,j_2,0}^{-1}$ at the right endpoint $b_{1,0}$ for any $j_1,j_2$. To show the structure of the stability switching curves and the crossing directions clearly, we take the left-most curve of $\mathcal{T}_0^1$ (i.e., $\tau_0^{1(1)}$ in Fig. \[fig:F0T0\] b)) as an example and draw it in Fig. \[fig:T0direction\] a). From bottom to top, it starts with a part of $\mathcal{T}_{0,0,0}^{+1}$, which is connected to $\mathcal{T}_{1,0,0}^{-1}$ at $b_{1,0}$. $\mathcal{T}_{1,0,0}^{-1}$ is linked to $\mathcal{T}_{0,1,0}^{+1}$ at $a_{1,0}$, which is again connected to $\mathcal{T}_{1,1,0}^{-1}$ at $b_{1,0}$, and so on. The numerical results coincide with the analytical result in (\[connect\]). In fact, the remaining curves of $\mathcal{T}_0^1$ in Fig. \[fig:F0T0\] b) have a similar structure to $\tau_0^{1(1)}$. Similarly, the stability switching curves $\mathcal{T}^2_0$ corresponding to $\Omega_{2,0}$ are shown in Fig. \[fig:F0T0\] c), and the lowest curve of $\mathcal{T}_0^2$ (marked $\tau_0^{2(1)}$ in Fig. \[fig:F0T0\] c)) is drawn in Fig. \[fig:T0direction\] b). All the stability switching curves for $n=0$ are given by $\mathcal{T}_0=\mathcal{T}^1_0\cup\mathcal{T}^2_0$. a\) ![a) Graph of $F_0(\omega)$. b) Stability switching curves $\mathcal{T}^1_0$. c) Stability switching curves $\mathcal{T}^2_0$.[]{data-label="fig:F0T0"}](F0-eps-converted-to.pdf "fig:"){width="27.00000%"} b) ![a) Graph of $F_0(\omega)$. b) Stability switching curves $\mathcal{T}^1_0$. c) Stability switching curves $\mathcal{T}^2_0$.[]{data-label="fig:F0T0"}](T10-eps-converted-to.pdf "fig:"){width="27.00000%"} c) ![a) Graph of $F_0(\omega)$. b) Stability switching curves $\mathcal{T}^1_0$. 
c) Stability switching curves $\mathcal{T}^2_0$.[]{data-label="fig:F0T0"}](T30-eps-converted-to.pdf "fig:"){width="27.00000%"} a\) ![a) The detailed structure of the left-most curve of $\mathcal{T}_0^1$ (marked $\tau_0^{1(1)}$ in Fig. \[fig:F0T0\] b)). The blue (red) arrow represents the positive direction of $\mathcal{T}_{j_1,j_2,0}^{+1}$ ($\mathcal{T}_{j_1,j_2,0}^{-1}$). b) The detailed structure of the lowest curve of $\mathcal{T}_0^2$ (marked $\tau_0^{2(1)}$ in Fig. \[fig:F0T0\] c)). The blue (red) arrow represents the positive direction of $\mathcal{T}_{j_1,j_2,0}^{+2}$ ($\mathcal{T}_{j_1,j_2,0}^{-2}$). From Lemma \[direction\], we know that the regions on the right (left) of the blue (red) curves, which the black arrows point to, have two more characteristic roots with positive real parts.[]{data-label="fig:T0direction"}](T01direction-eps-converted-to.pdf "fig:"){width="46.00000%"} b) ![a) The detailed structure of the left-most curve of $\mathcal{T}_0^1$ (marked $\tau_0^{1(1)}$ in Fig. \[fig:F0T0\] b)). The blue (red) arrow represents the positive direction of $\mathcal{T}_{j_1,j_2,0}^{+1}$ ($\mathcal{T}_{j_1,j_2,0}^{-1}$). b) The detailed structure of the lowest curve of $\mathcal{T}_0^2$ (marked $\tau_0^{2(1)}$ in Fig. \[fig:F0T0\] c)). The blue (red) arrow represents the positive direction of $\mathcal{T}_{j_1,j_2,0}^{+2}$ ($\mathcal{T}_{j_1,j_2,0}^{-2}$). From Lemma \[direction\], we know that the regions on the right (left) of the blue (red) curves, which the black arrows point to, have two more characteristic roots with positive real parts.[]{data-label="fig:T0direction"}](T02direction-eps-converted-to.pdf "fig:"){width="46.00000%"} When $n=1$, $F_1(0)>0$, and $F_1(\omega)=0$ has four roots $a_{1,1}=0.184,b_{1,1}=0.5264,a_{2,1}=0.8607,b_{2,1}=1.189$, which is shown in Fig. \[fig:F1T1\] a). 
Thus, the crossing set is $\Omega_{1,1}\bigcup\Omega_{2,1}=[a_{1,1},b_{1,1}]\bigcup[a_{2,1},b_{2,1}]$, and we can get the stability switching curves $\mathcal{T}^1_1$ and $\mathcal{T}^2_1$, which are shown in Fig. \[fig:F1T1\] b) and c). Hence all the stability switching curves for $n=1$ are given by $\mathcal{T}_1=\mathcal{T}^1_1\cup\mathcal{T}^2_1$. a\) ![a) Graph of $F_1(\omega)$. b) Stability switching curves $\mathcal{T}^1_1$. c) Stability switching curves $\mathcal{T}^2_1$.[]{data-label="fig:F1T1"}](F1-eps-converted-to.pdf "fig:"){width="27.00000%"} b) ![a) Graph of $F_1(\omega)$. b) Stability switching curves $\mathcal{T}^1_1$. c) Stability switching curves $\mathcal{T}^2_1$.[]{data-label="fig:F1T1"}](T11-eps-converted-to.pdf "fig:"){width="27.00000%"} c) ![a) Graph of $F_1(\omega)$. b) Stability switching curves $\mathcal{T}^1_1$. c) Stability switching curves $\mathcal{T}^2_1$.[]{data-label="fig:F1T1"}](T31-eps-converted-to.pdf "fig:"){width="27.00000%"} When $n=2$, $F_2(0)>0$, and $F_2(\omega)=0$ has two roots $a_{1,2}=0.8968,b_{1,2}=1.171$, as shown in Fig. \[fig:F2T2\] a). The crossing set is $\Omega_{1,2}=[a_{1,2},b_{1,2}]$. The stability switching curves $\mathcal{T}^1_2$ corresponding to $\Omega_{1,2}$ are shown in Fig. \[fig:F2T2\] b). All the stability switching curves for $n=2$ are given by $\mathcal{T}_2=\mathcal{T}^1_2$. a\) ![a) Graph of $F_2(\omega)$. b) Stability switching curves $\mathcal{T}^1_2$. []{data-label="fig:F2T2"}](F2-eps-converted-to.pdf "fig:"){width="27.00000%"} b) ![a) Graph of $F_2(\omega)$. b) Stability switching curves $\mathcal{T}^1_2$. []{data-label="fig:F2T2"}](T12-eps-converted-to.pdf "fig:"){width="27.00000%"} When $n=3$, $F_3(0)>0$, and $F_3(\omega)=0$ has two roots $a_{1,3}=0.6638,b_{1,3}=0.9798$, as shown in Fig. \[fig:F3T3\] a). The crossing set is $\Omega_{1,3}=[a_{1,3},b_{1,3}]$, and the stability switching curves $\mathcal{T}^1_3$ corresponding to $\Omega_{1,3}$ are shown in Fig. \[fig:F3T3\] b). 
Thus all the stability switching curves for $n=3$ are given by $\mathcal{T}_3=\mathcal{T}^1_3$. When $n\geq 4$, numerical calculation indicates that $F_n(\omega)>0$ for any $\omega$; thus there are no stability switching curves on the $(\tau_1,\tau_2)$ plane for $n\geq 4$. a\) ![a) Graph of $F_3(\omega)$. b) Stability switching curves $\mathcal{T}^1_3$. []{data-label="fig:F3T3"}](F3-eps-converted-to.pdf "fig:"){width="27.00000%"} b) ![a) Graph of $F_3(\omega)$. b) Stability switching curves $\mathcal{T}^1_3$. []{data-label="fig:F3T3"}](T13-eps-converted-to.pdf "fig:"){width="27.00000%"} a\) ![a) The left-most curve and the lowest curve of $\mathcal{T}_0$ intersect at $(\tau_1,\tau_2)=(3.9042,1.406)$, which is a double Hopf bifurcation point on the $\tau_1-\tau_2$ plane. Crossing directions are marked by arrows. b) The complete bifurcation sets near HH.[]{data-label="fig:tau1tau2"}](tau1tau2-eps-converted-to.pdf "fig:"){width="44.00000%" height="32.00000%"} b) ![a) The left-most curve and the lowest curve of $\mathcal{T}_0$ intersect at $(\tau_1,\tau_2)=(3.9042,1.406)$, which is a double Hopf bifurcation point on the $\tau_1-\tau_2$ plane. Crossing directions are marked by arrows. b) The complete bifurcation sets near HH.[]{data-label="fig:tau1tau2"}](tau1tau2HHgai-eps-converted-to.pdf "fig:"){width="44.00000%" height="32.00000%"} Combining the stability switching curves shown in Figs. \[fig:F0T0\]-\[fig:F3T3\] and zooming in on the region $(\tau_1,\tau_2)\in [0,5]\times[0,3]$, we obtain the Hopf bifurcation curves shown in Fig. \[fig:tau1tau2\] a). We focus on the bottom-left region bounded by the left-most curve of $\mathcal{T}^1_0$ and the lowest curve of $\mathcal{T}^2_0$, shown in Fig. \[fig:tau1tau2\] a). Notice that the left-most curve and the lowest curve among all the stability switching curves are both part of $\mathcal{T}_0$. 
By Theorem \[direction\], we can verify that the positive equilibrium $E^*$ is stable in the bottom-left region, since the crossing directions of the two switching curves (the black line and the blue line) both point out of the region. Moreover, the two stability switching curves intersect at the point $(3.9042,1.406)$, which is a double Hopf bifurcation point; we denote it by HH. For HH, using the normal form derivation process given in section \[normal form\], we have $\omega_1= 0.61081$, $\omega_2=0.94964$, $K_{11} = 0.0947 - 0.0071i$, $K_{21} = -0.2689 + 0.4408i$, $K_{13} = 0.1196 + 1.2137i$, $K_{23} = 1.6381 - 2.5531i$, $K_{2100} = 0.0154 - 0.0146i$, $K_{1011}= 0.4878 + 0.2082i$, $K_{0021} = -0.9861 - 0.9526i$, $K_{1110}= -0.1778 - 0.1523i$. Furthermore, we have the normal form (\[normalformcylin\]) with $\epsilon_1 =1, \epsilon_2 = -1, b = 0.4946,c = -11.5623, d = -1$, and $d-bc = 4.7192$. This means that the unfolding system near the double Hopf bifurcation point HH is of type VIa. According to Guckenheimer and Holmes, [@Guckenheimer] near the double Hopf bifurcation point there are eight kinds of phase diagrams in eight regions, divided by the semi-lines $L_1$-$L_8$ with $$\begin{aligned} &L_1:\tau_2=(\tau_1-3.9042)/( -13.6972 )+1.406 ~~ ~ (\tau_1>3.9042);\\ &L_2:\tau_2=(\tau_1-3.9042)/( 2.8383)+1.406 ~~ ~ (\tau_1>3.9042) ;\\ &L_3:\tau_2=(\tau_1-3.9042)/( 1.2106 )+1.406 ~~ ~(\tau_2>1.406) ;\\ &L_4:\tau_2=(\tau_1-3.9042)/( 0.6790)+1.406+o(\tau_1-3.9042)~~ ~(\tau_2>1.406);\\ &L_5:\tau_2=(\tau_1-3.9042)/( 0.6790)+1.406~~ ~(\tau_2>1.406);\\ &L_6:\tau_2=(\tau_1-3.9042)/( -3.5180)+1.406~~ ~(\tau_2>1.406);\\ &L_7:\tau_2=(\tau_1-3.9042)/( -13.6972)+1.406 ~~ ~(\tau_1<3.9042) ;\\ &L_8:\tau_2=(\tau_1-3.9042)/( 2.8381)+1.406 ~~ ~(\tau_1<3.9042) .\\ \end{aligned}$$ According to Fig. \[fig:VIa\], we have the bifurcation set near HH shown in Fig. \[fig:tau1tau2\] b). 
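The step from the real parts of the $K$ coefficients to the unfolding type is mechanical, and can be sketched as follows (the coefficient values are those reported above for HH; small rounding differences against the published $b$, $c$, and $d-bc$ are tolerated):

```python
import numpy as np

# Real parts of the K coefficients computed at the double Hopf point HH
# (only the real parts enter the unfolding classification).
ReK2100, ReK1011 = 0.0154, 0.4878
ReK0021, ReK1110 = -0.9861, -0.1778

eps1 = np.sign(ReK2100)                  # = +1
eps2 = np.sign(ReK0021)                  # = -1
b = eps1 * eps2 * ReK1011 / ReK0021
c = ReK1110 / ReK2100
d = eps1 * eps2

def unfolding_case(b, c, d):
    """Look up the unfolding type of Table 1 from the signs of d, b, c, d-bc."""
    key = (int(np.sign(d)), int(np.sign(b)), int(np.sign(c)),
           int(np.sign(d - b * c)))
    table = {(1, 1, 1, 1): "Ia", (1, 1, 1, -1): "Ib", (1, 1, -1, 1): "II",
             (1, -1, 1, 1): "III", (1, -1, -1, 1): "IVa", (1, -1, -1, -1): "IVb",
             (-1, 1, 1, -1): "V", (-1, 1, -1, 1): "VIa", (-1, 1, -1, -1): "VIb",
             (-1, -1, 1, 1): "VIIa", (-1, -1, 1, -1): "VIIb",
             (-1, -1, -1, -1): "VIII"}
    return table[key]
```

With these inputs the sign pattern $(d,b,c,d-bc)=(-1,+,-,+)$ is recovered, confirming case VIa.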
It is found that HH is the intersection of a supercritical Hopf bifurcation curve and a subcritical Hopf bifurcation curve. When $\tau_1=1.74,\tau_2=0.67$ in $D_2$, the positive equilibrium is a sink, as shown in Fig. \[fig:E\]. When $\tau_1=3.62,\tau_2=1.435$ in region $D_3$, there is a stable periodic solution originating from a supercritical Hopf bifurcation, which is shown in Fig. \[fig:periodic\]. a\) ![ When $\tau_1=1.74,\tau_2=0.67$ in $D_2$, the positive equilibrium is asymptotically stable.[]{data-label="fig:E"}](Eu-eps-converted-to.pdf "fig:"){width="27.00000%" height="30.00000%"} b)![ When $\tau_1=1.74,\tau_2=0.67$ in $D_2$, the positive equilibrium is asymptotically stable.[]{data-label="fig:E"}](Ev-eps-converted-to.pdf "fig:"){width="27.00000%" height="30.00000%"} c) ![ When $\tau_1=1.74,\tau_2=0.67$ in $D_2$, the positive equilibrium is asymptotically stable.[]{data-label="fig:E"}](Euvud-eps-converted-to.pdf "fig:"){width="27.00000%" height="30.00000%"} a\) ![ When $\tau_1=3.62,\tau_2=1.435$ in $D_3$, there is a stable periodic solution.[]{data-label="fig:periodic"}](periodicu-eps-converted-to.pdf "fig:"){width="27.00000%" height="30.00000%"} b)![ When $\tau_1=3.62,\tau_2=1.435$ in $D_3$, there is a stable periodic solution.[]{data-label="fig:periodic"}](periodicv-eps-converted-to.pdf "fig:"){width="27.00000%" height="30.00000%"} c) ![ When $\tau_1=3.62,\tau_2=1.435$ in $D_3$, there is a stable periodic solution.[]{data-label="fig:periodic"}](periodicuvud-eps-converted-to.pdf "fig:"){width="27.00000%" height="30.00000%"} Finally, we show the existence of quasi-periodic solutions and the results of the Poincaré map on a Poincaré section. Since the double Hopf bifurcation point is the intersection of two curves from $\mathcal{T}_0$, all periodic or quasi-periodic solutions nearby are spatially homogeneous. Thus, we choose the solution curve of $(u(0,t),v(0,t))$ at $x=0$ to show the rich dynamics. 
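The time series behind Figs. \[fig:E\] and \[fig:periodic\] can be reproduced with a simple method-of-steps integration of the spatially homogeneous system. The sketch below assumes the underlying non-spatial model is the modified Leslie-Gower system $u'=r_1u\left(1-u(t-\tau_1)/K\right)-a(1-m)uv$, $v'=r_2v\left(1-v(t-\tau_2)/(\gamma(1-m)u(t-\tau_2))\right)$, which is consistent with the nonlinear terms $f_1,f_2$ above and reproduces the equilibrium $E^*(0.4358,0.3181)$; it is an illustration, not the scheme used for the figures.

```python
import numpy as np

# Parameter set (para1) from the simulations section.
r1, r2, a, K, gamma, m = 0.8, 1.0, 1.3, 0.7, 1.0, 0.27

# Positive equilibrium: v* = gamma*(1-m)*u* and r1*(1 - u*/K) = a*(1-m)*v*.
ustar = r1 / (r1 / K + a * gamma * (1 - m) ** 2)
vstar = gamma * (1 - m) * ustar

def simulate(tau1, tau2, T=300.0, dt=0.005, pert=0.02):
    """Forward-Euler method of steps with a constant history near E*."""
    n1, n2 = int(round(tau1 / dt)), int(round(tau2 / dt))
    nh = max(n1, n2)                       # history length in steps
    N = int(round(T / dt))
    u = np.empty(nh + N + 1)
    v = np.empty(nh + N + 1)
    u[:nh + 1] = ustar + pert              # constant history on [-max(tau1,tau2), 0]
    v[:nh + 1] = vstar + pert
    for k in range(nh, nh + N):
        du = r1 * u[k] * (1 - u[k - n1] / K) - a * (1 - m) * u[k] * v[k]
        dv = r2 * v[k] * (1 - v[k - n2] / (gamma * (1 - m) * u[k - n2]))
        u[k + 1] = u[k] + dt * du
        v[k + 1] = v[k] + dt * dv
    return u, v
```

With $(\tau_1,\tau_2)=(1.74,0.67)$ in $D_2$ the trajectory settles onto $E^*$; substituting the parameter pairs from the captions above (preferably with a smaller `dt` or a higher-order scheme near the bifurcation point) allows the periodic and quasi-periodic regimes to be explored.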
Since the periodic solutions oscillate in an infinite-dimensional phase space, [@JWu] we give simulations near the double Hopf bifurcation point in the space $u(0,t)-v(0,t)-u(0,t-\tau_1)$ and choose the Poincaré sections $v(0,t)=v^*$ and $\dot{u}(0,t)=0$, respectively, in Fig. \[fig:23chaos\]. The system exhibits rich dynamical behavior near the bifurcation point. When $(\tau_1,\tau_2)=(3.82,1.4345)$ in $D_4$, system (\[diffusion predator\]) has a quasi-periodic solution on a 2-torus (Fig. \[fig:23chaos\] a)). It becomes a quasi-periodic solution on a 3-torus, which breaks down through an orbit-connection bifurcation at $(\tau_1,\tau_2)=(3.9043, 1.418)$ (Fig. \[fig:23chaos\] b)). When the parameters vary and enter $D_6$, the three-dimensional torus vanishes. Since the breakdown of a 3-torus may be accompanied by chaos, [@P.; @Battelino; @D.; @Ruelle; @J.P.; @Eckmann] we search near the double Hopf bifurcation point and find that strange attractors exist. We can see that when $\tau_1=3.905, \tau_2=1.4136$ in region $D_6$, system (\[diffusion predator\]) has a strange attractor, which is shown on the Poincaré section in Fig. \[fig:23chaos\] c). a\) ![ The phase portraits in $u(0,t)-v(0,t)-u(0,t-\tau_1)$, and the corresponding Poincaré map on a Poincaré section $v(0,t)=v^*$ and $\dot{u}(x,t)=0$. The parameter is given as a) $\tau_1=3.82,\tau_2=1.4345$ in $D_4$, b) $\tau_1=3.9043, \tau_2=1.418$ near $L_4$, c) $\tau_1=3.905, \tau_2=1.4136$ in region $D_6$, respectively. Note that the transient states have been deleted for a clear expression.[]{data-label="fig:23chaos"}](2torusuvud-eps-converted-to.pdf "fig:"){width="27.50000%" height="30.00000%"} ![ The phase portraits in $u(0,t)-v(0,t)-u(0,t-\tau_1)$, and the corresponding Poincaré map on a Poincaré section $v(0,t)=v^*$ and $\dot{u}(x,t)=0$. The parameter is given as a) $\tau_1=3.82,\tau_2=1.4345$ in $D_4$, b) $\tau_1=3.9043, \tau_2=1.418$ near $L_4$, c) $\tau_1=3.905, \tau_2=1.4136$ in region $D_6$, respectively. 
Note that the transient states have been deleted for a clear expression.[]{data-label="fig:23chaos"}](2torus7-eps-converted-to.pdf "fig:"){width="27.50000%" height="30.00000%"} ![ The phase portraits in $u(0,t)-v(0,t)-u(0,t-\tau_1)$, and the corresponding Poincaré map on a Poincaré section $v(0,t)=v^*$ and $\dot{u}(x,t)=0$. The parameter is given as a) $\tau_1=3.82,\tau_2=1.4345$ in $D_4$, b) $\tau_1=3.9043, \tau_2=1.418$ near $L_4$, c) $\tau_1=3.905, \tau_2=1.4136$ in region $D_6$, respectively. Note that the transient states have been deleted for a clear expression.[]{data-label="fig:23chaos"}](2torus8-eps-converted-to.pdf "fig:"){width="27.50000%" height="30.00000%"}\ b) ![ The phase portraits in $u(0,t)-v(0,t)-u(0,t-\tau_1)$, and the corresponding Poincaré map on a Poincaré section $v(0,t)=v^*$ and $\dot{u}(x,t)=0$. The parameter is given as a) $\tau_1=3.82,\tau_2=1.4345$ in $D_4$, b) $\tau_1=3.9043, \tau_2=1.418$ near $L_4$, c) $\tau_1=3.905, \tau_2=1.4136$ in region $D_6$, respectively. Note that the transient states have been deleted for a clear expression.[]{data-label="fig:23chaos"}](3torusuvud-eps-converted-to.pdf "fig:"){width="27.50000%" height="30.00000%"} ![ The phase portraits in $u(0,t)-v(0,t)-u(0,t-\tau_1)$, and the corresponding Poincaré map on a Poincaré section $v(0,t)=v^*$ and $\dot{u}(x,t)=0$. The parameter is given as a) $\tau_1=3.82,\tau_2=1.4345$ in $D_4$, b) $\tau_1=3.9043, \tau_2=1.418$ near $L_4$, c) $\tau_1=3.905, \tau_2=1.4136$ in region $D_6$, respectively. Note that the transient states have been deleted for a clear expression.[]{data-label="fig:23chaos"}](3torus7-eps-converted-to.pdf "fig:"){width="27.50000%" height="30.00000%"} ![ The phase portraits in $u(0,t)-v(0,t)-u(0,t-\tau_1)$, and the corresponding Poincaré map on a Poincaré section $v(0,t)=v^*$ and $\dot{u}(x,t)=0$. 
The parameter is given as a) $\tau_1=3.82,\tau_2=1.4345$ in $D_4$, b) $\tau_1=3.9043, \tau_2=1.418$ near $L_4$, c) $\tau_1=3.905, \tau_2=1.4136$ in region $D_6$, respectively. Note that the transient states have been deleted for a clear expression.[]{data-label="fig:23chaos"}](3torus8-eps-converted-to.pdf "fig:"){width="27.50000%" height="30.00000%"}\ c) ![ The phase portraits in $u(0,t)-v(0,t)-u(0,t-\tau_1)$, and the corresponding Poincaré map on a Poincaré section $v(0,t)=v^*$ and $\dot{u}(x,t)=0$. The parameter is given as a) $\tau_1=3.82,\tau_2=1.4345$ in $D_4$, b) $\tau_1=3.9043, \tau_2=1.418$ near $L_4$, c) $\tau_1=3.905, \tau_2=1.4136$ in region $D_6$, respectively. Note that the transient states have been deleted for a clear expression.[]{data-label="fig:23chaos"}](chaosuvud-eps-converted-to.pdf "fig:"){width="27.50000%" height="30.00000%"} ![ The phase portraits in $u(0,t)-v(0,t)-u(0,t-\tau_1)$, and the corresponding Poincaré map on a Poincaré section $v(0,t)=v^*$ and $\dot{u}(x,t)=0$. The parameter is given as a) $\tau_1=3.82,\tau_2=1.4345$ in $D_4$, b) $\tau_1=3.9043, \tau_2=1.418$ near $L_4$, c) $\tau_1=3.905, \tau_2=1.4136$ in region $D_6$, respectively. Note that the transient states have been deleted for a clear expression.[]{data-label="fig:23chaos"}](chaos7-eps-converted-to.pdf "fig:"){width="27.50000%" height="30.00000%"} ![ The phase portraits in $u(0,t)-v(0,t)-u(0,t-\tau_1)$, and the corresponding Poincaré map on a Poincaré section $v(0,t)=v^*$ and $\dot{u}(x,t)=0$. The parameter is given as a) $\tau_1=3.82,\tau_2=1.4345$ in $D_4$, b) $\tau_1=3.9043, \tau_2=1.418$ near $L_4$, c) $\tau_1=3.905, \tau_2=1.4136$ in region $D_6$, respectively. Note that the transient states have been deleted for a clear expression.[]{data-label="fig:23chaos"}](chaos8-eps-converted-to.pdf "fig:"){width="27.50000%" height="30.00000%"} Concluding remarks ================== This paper deals with a modified Leslie-Gower predator-prey system with two delays and diffusion. 
We focus on the joint effect of the two delays on the dynamical behavior of the system. Applying the method of stability switching curves, we find the stable region of the positive equilibrium and obtain Hopf bifurcation results. By locating the intersections of stability switching curves near the stable region, we identify the double Hopf bifurcation point. Through the calculation of the normal form of the system, we obtain the corresponding unfolding system and the bifurcation set. We theoretically prove and numerically illustrate the existence of quasi-periodic solutions on a two-torus, quasi-periodic solutions on a three-torus, and even strange attractors. The Hopf bifurcation theorem for systems with a single parameter has long been established, but the corresponding theorem for two parameters has not been well formulated. In this paper, we define Hopf bifurcation curves on the plane $(\tau_1,\tau_2)$ and give a sufficient condition for the existence of Hopf bifurcation in the two-parameter plane. The derivation of the normal form for the double Hopf bifurcation is difficult and the calculation is lengthy. It is even harder for systems with two simultaneously varying delays, since the change of time scale $t\rightarrow\frac{t}{\tau_1}$ transforms only one delay to 1, while the other becomes $\frac{\tau_2}{\tau_1}$, which makes the Taylor expansion of the nonlinear term $G(\sigma, U^t)$ with $U^t(-\frac{\tau_2}{\tau_1})=U^t(-\frac{\tau_2^*+\sigma_2}{\tau_1^*+\sigma_1})$ very complicated. We note that the method used here to calculate the normal form can also be applied, with slight modifications, to other systems with two delays, one delay, or no delay. The normal form formula we give here applies at a double Hopf bifurcation point with $k_1=k_2=0$.
The cases of $k_1=0,k_2\neq 0$ and $k_1\neq 0,k_2\neq 0$ can be deduced in a similar way, but in our model these double Hopf bifurcation points are not located at the boundary of the stability region, hence an unstable manifold always exists. Supplementary Material {#supplementary-material .unnumbered} ====================== In the supplementary material, we give the detailed calculation process of the second and third order normal forms near the double Hopf bifurcation. Acknowledgments {#acknowledgments .unnumbered} =============== The authors are grateful to the handling editor and anonymous referees for their careful reading of the manuscript and valuable comments, which have greatly improved the exposition of the paper. Y. Du is supported by the Education Department of Shaanxi Province (grant No. 18JK0123). B. Niu and J. Wei are supported by the National Natural Science Foundation of China (grant Nos. 11701120 and 11771109) and the Foundation for Innovation at HIT(WH).
--- abstract: 'We initiate the development of a horizon-based initial (or rather final) value formalism to describe the geometry and physics of the near-horizon spacetime: data specified on the horizon and a future ingoing null boundary determine the near-horizon geometry. In this initial paper we restrict our attention to spherically symmetric spacetimes made dynamic by matter fields. We illustrate the formalism by considering a black hole interacting with a) inward-falling, null matter (with no outward flux) and b) a massless scalar field. The inward-falling case can be exactly solved from horizon data. For the more involved case of the scalar field we analytically investigate the near slowly evolving horizon regime and propose a numerical integration for the general case.' author: - Sharmila Gunasekaran - Ivan Booth bibliography: - 'ivp.bib' title: Horizons as boundary conditions in spherical symmetry --- Introduction ============ This paper begins an investigation into what horizon dynamics can tell us about external black hole physics. At first thought this might seem obvious: if one watches a numerical simulation of a black hole merger and sees a post-merger horizon ringdown (see for example [@sxs]) then it is natural to think of that oscillation as a source of emitted gravitational waves. However this cannot be the case. Neither event nor apparent horizons can actually send signals to infinity: apparent horizons lie inside event horizons which in turn are the boundary for signals that can reach infinity[@Hawking:1973uf]. It is not horizons themselves that interact but rather the “near horizon” fields. This idea was (partially) formalized as a “stretched horizon” in the membrane paradigm[@Thorne:1986iy]. Then the best that we can hope for from horizons is that they act as a proxy for the near horizon fields with horizon evolution reflecting some aspects of their dynamics. 
As explored in [@Jaramillo:2011re; @Jaramillo:2011rf; @Jaramillo:2012rr; @Rezzolla:2010df; @Gupta:2018znn] there should then be a correlation between horizon evolution and external, observable, black hole physics. Robinson-Trautman spacetimes (see for example [@Griffiths:2009dfa]) demonstrate that this correlation cannot be perfect. In those spacetimes there can be outgoing gravitational (or other) radiation arbitrarily close to an isolated (equilibrium) horizon[@Ashtekar:2000sz]. Hence our goal is two-fold: both to understand the conditions under which a correlation will exist and to learn precisely what information it contains. The idea that horizons should encode physical information about black hole physics is not new. The classical definition of a black hole as the complement of the causal past of future null infinity [@Hawking:1973uf] is essentially global and so defines a black hole spacetime rather than a black hole *in* some spacetime. However there are also a range of geometrically defined black hole boundaries based on outer and/or marginally trapped surfaces that seek to localize black holes. These include apparent[@Hawking:1973uf], trapping[@Hayward:1993wb], isolated [@Ashtekar:1998sp; @Ashtekar:1999yj; @Ashtekar:2000sz; @PhysRevD.49.6467] and dynamical [@Ashtekar:2003hk] horizons as well as future holographic screens [@Bousso:2015qqa]. These quasilocal definitions of black holes have successfully localized black hole mechanics to the horizon[@Ashtekar:1998sp; @Ashtekar:1999yj; @Ashtekar:2003hk; @Booth:2003ji; @Bousso:2015qqa; @Hayward:1993wb] and been particularly useful in formalizing what it means for a (localized) black hole to evolve or be in equilibrium. 
They are used in numerical relativity not only as excision surfaces (see, for example the discussions in [@Baumgarte:2010ndz; @Thornburg:2006zb]) but also in interpreting physics (for example [@Dreyer:2002mx; @Cook:2007wr; @Chu:2010yu; @Jaramillo:2011re; @Jaramillo:2011rf; @Jaramillo:2012rr; @Rezzolla:2010df; @Lovelace:2014twa; @Gupta:2018znn; @Owen:2017yaj]). In this paper we work to quantitatively link horizon dynamics to observable black hole physics. To establish an initial framework and build intuition we for now restrict our attention to spherically symmetric marginally outer trapped tubes (MOTTs) in similarly symmetric spacetimes. Matter fields are included to drive the dynamics. Our primary approach is to take horizon data as a (partial) final boundary condition that is used to determine the fields in a region of spacetime in its causal past. In particular these boundary conditions constrain the geometry and physics of the associated “near horizon” spacetime. The main application that we have in mind is interpreting the physics of evolving horizons that have been generated by either numerical simulations or theoretical considerations. Normally, data on a MOTT by itself is not sufficient to specify any region of the external spacetime. As shown in Fig. \[hd1\] even for a spacelike MOTT (a dynamical horizon) the region determined by a standard (3+1) initial value formulation would lie entirely within the event horizon. More information is needed to determine the near-horizon spacetime and hence in this paper we work with a characteristic initial value formulation [@MR1032984; @doi:10.1063/1.1724305; @Winicour:2012znc; @Winicour:2013gha; @Madler:2016xju] where extra data is specified on a null surface $\mathcal{N}$ that is transverse to the horizon (Fig. \[hd2\]). Intuitively the horizon records inward-moving information while $\mathcal{N}$ records the outward-moving information. Together they are sufficient to reconstruct the spacetime. 
There is an existing literature that studies spacetime near horizons; however, it does not exactly address this problem. Most works focus on isolated horizons. [@Li:2015wsa] and [@Li:2018knr] examine spacetime near an isolated extremal horizon as a Taylor series expansion off the horizon while [@Krishnan:2012bt] and [@Lewandowski:2018khe] study spacetime near more general isolated horizons but in a characteristic initial value formulation with the extra information specified on a transverse null surface. [@Booth:2012xm] studied both the isolated and dynamical case though again as a Taylor series expansion off the horizon. In the case of the Taylor expansions, as one goes to higher and higher orders one needs to know higher and higher order derivatives of metric quantities at the horizon to continue the expansion. While the current paper instead investigates the problem as a final value problem, it otherwise closely follows the notation of and uses many results from [@Booth:2012xm]. It is organized as follows. We introduce the final value formulation of spherically symmetric general relativity in Sec.\[sec:formulation\]. We illustrate this for infalling null matter in Sec.\[sec:matter modelsI\] and then for the much more interesting massless scalar field in Sec.\[sec:matter modelsII\]. We conclude with a discussion of results in Sec.\[sec:discussion\]. Formulation {#sec:formulation} =========== Coordinates and metric ---------------------- We work in a spherically symmetric spacetime (${\cal M}, g$) and a coordinate system whose non-angular coordinates are $\rho$ (an ingoing affine parameter) and $v$ (which labels the ingoing null hypersurfaces and increases into the future). Hence, $g_{\rho \rho} =0$ and the curves tangent to the future-oriented inward-pointing $$N = \frac{\partial}{\partial \rho} \label{E1}$$ are null. We then scale $v$ so that $\mathcal{V}= \frac{\partial}{\partial v}$ satisfies $$\mathcal{V} \cdot N = -1.
\label{E2}$$ One coordinate freedom still remains: the scaling of the affine parameter on the individual null geodesics $$\tilde{\rho} = f(v) \rho \, . \label{gaugefree}$$ In subsection \[physconf\] we will fix this freedom by specifying how $N$ is to be scaled along the $\rho = 0$ surface $\Sigma$ (which we take to be a black hole horizon). ![Coordinate system for characteristic evolution. We work with final boundary conditions so that in the region of interest $\rho < 0$. []{data-label="fig:3p1"}](cdsys.pdf) Next we define the future-oriented outward-pointing null normal to the spherical surfaces $S_{(v,\rho)}$ as $\ell^a$ and scale so that $$\begin{aligned} \ell \cdot N = -1 \label{affine2} \, . \end{aligned}$$ With this choice the four-metric $g_{ab}$ and induced two-metric $\tilde{q}_{ab}$ on the $S_{(v,\rho)}$ are related by $$g^{ab} = \tilde{q}^{ab} - \ell^a N^b - N^a \ell^b \, . \label{decomp}$$ Further for some function $C$ we can write $${\cal V} = \ell - C N \, . \label{crel1}$$ The coordinates and normal vectors are depicted in Fig.\[fig:3p1\] and give the following form of the metric: $$ds^2 = 2 C(v,\rho) {\text{d}}v^2 - 2 {\text{d}}v \, {\text{d}}\rho + R(v,\rho)^2 {\text{d}}\Omega^2 \label{metant}$$ where $R(v,\rho)$ is the areal radius of the $S_{(v,\rho)}$ surfaces. Note the similarity to ingoing Eddington-Finkelstein coordinates for a Schwarzschild black hole. However $\nicefrac{\partial}{\partial \rho}$ points inwards as opposed to the outward oriented $\nicefrac{\partial}{\partial r}$ in those coordinates (hence the negative sign on the $\mbox{d} v \mbox{d} \rho$ cross-term). Typically, as shown in Fig.\[fig:3p1\] we will be interested in regions of spacetime that are bordered in the future by a surface $\Sigma$ of indeterminate sign on which $\rho=0$ and a null $\mathcal{N}$ which is one of the $v$=constant surfaces (and so $\rho < 0$ in the region of interest). 
We will explore how data on those surfaces determines the region of spacetime in their causal past. Equations of motion ------------------- In this section we break up the Einstein equations relative to these coordinates, beginning by defining some geometric quantities that appear in the equations. First the null expansions for the $\ell^a$ and $N^a$ congruences are $$\begin{aligned} \theta_{(\ell)} &= \tilde{q}^{ab} \nabla_a \ell_b = \frac{2}{R} {{\cal L}\, }_\ell R \quad \mbox{and} \label{nullexp1} \\ \theta_{(N)} &= \tilde{q}^{ab} \nabla_a N_b = \frac{2}{R} {{\cal L}\, }_N R = \frac{2}{R} R_{,\rho}. \label{nullexp2} \end{aligned}$$ while the inaffinities of the null vector fields are $$\begin{aligned} \kappa_N &= - N^a N_b \nabla_a \ell^b = 0 \quad \mbox{and} \\ \kappa_\mathcal{V} &= \kappa_\ell - C \kappa_N = - \ell^a N_b \nabla_a \ell^b \,. \end{aligned}$$ By construction $\kappa_N = 0$ and so we can drop it from our equations and henceforth write $$\kappa \equiv \kappa_{\mathcal{V}} = \kappa_\ell \, .$$ Finally the Gaussian curvature of $S_{(v,\rho)}$ is: $$\tilde{K} = \frac{1}{R^2} \, .$$ Then these curvature quantities are related by constraint equations along the surfaces of constant $\rho$ $$\begin{aligned} \mathcal{L}_{\mathcal{V}}R = & {{\cal L}\, }_\ell R - C {{\cal L}\, }_N R \quad \text{(by definition)}\, , \label{const1} \\ \mathcal{L}_\mathcal{V} \theta_{(\ell)} = & \kappa \, \theta_{(\ell)} + C \left( \frac{1}{R^2 } + \theta_{(N)} \theta_{(\ell)} - G_{ \ell N} \right) \nonumber \\ & - \left( G_{\ell \ell} + \frac{1}{2} \theta_{(\ell)}^2 \right) \,, \label{const2}\\ \mathcal{L}_\mathcal{V} \theta_{(N)} = & -\kappa\, \theta_{(N)} - \left( \frac{1}{R^2 } + \theta_{(N)} \theta_{(\ell)} - G_{\ell N} \right) \nonumber \\ & + C \left( G_{\! N \! 
N} + \frac{1}{2} \theta_{(N)}^2 \right), \label{const3}\end{aligned}$$ and “time” derivatives in the $\rho$ direction $$\begin{aligned} {\cal{L}}_{N} \theta_{(N)} = & -\frac{\theta_{(N)}^2}{2} - G_{\!N \!N}, \label{cevol1} \\ {\cal{L}}_N \theta_{(\ell)} =& -\frac{1}{R^2} - \theta_{(N)} \theta_{(\ell)} + G_{\ell N}, \label{cevol2} \\ {\cal{L}}_N \kappa = & \frac{1}{R^2 } + \frac{1}{2} \theta_{(N)} \theta_{(\ell)} - \frac{1}{2} G_{\tilde{q}} - G_{\ell N}, \label{cevol3}\end{aligned}$$ where by the choice of the coordinates $$\kappa = {\cal{L}}_N C \, .$$ These equations can be derived from the variations for the corresponding geometric quantities (see, for example, [@Booth:2006bn] and [@Booth:2012xm]) and of course are coupled to the matter content of the system through the Einstein equations $$G_{ab } = 8 \pi T_{ab} \; .$$ Using (\[nullexp1\]) and (\[nullexp2\]) we can rewrite the constraint and evolution equations in terms of the metric coefficients and coordinates as: $$\begin{aligned} R_{,v} = & \, R_\ell - C R_N \,, \label{const1a} \\ R_{\ell,v} = & \, \kappa R_{\ell} + \frac{C \left( 1+ 2 R_\ell R_N \right)}{2 R} - \frac{R}{2} \,( G_{\ell \ell} + C G_{\ell N}) \,, \label{const2a}\\ R_{N,v} = & -\kappa R_{N} - \frac{ \left( 1+ 2 R_\ell R_N \right)}{2 R} + \frac{R}{2} \, ( G_{\ell N} + CG_{N\! N}) \,, \label{const3a}\end{aligned}$$ and $$\begin{aligned} R_{,\rho \rho} &= - \frac{R}{2} \, G_{\! N \! N}, \label{fcevol1} \\ ({R R_{\ell}})_{,\rho} &= -\frac{1}{2} + \frac{R^2}{2} G_{\ell N}, \label{fcevol2} \\ C_{,\rho \rho} &= \frac{1}{R^2} + \frac{2R_{\ell} R_{N }}{R^2} - \frac{1}{2} G_{\tilde{q}} - G_{\ell N}, \label{fcevol3}\end{aligned}$$ where $$\kappa = C_{,\rho} \, . \label{fcevol0}$$ For those who don’t want to work through the derivations of [@Booth:2006bn] and [@Booth:2012xm], these can also be derived fairly easily (thanks to the spherical symmetry) from an explicit calculation of the Einstein tensor for (\[metant\]).
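The last remark is easy to make concrete. The sketch below (our own illustration, not part of the paper; it assumes `sympy` is available) computes the Einstein tensor of (\[metant\]) directly, here for the Schwarzschild case with $C = \rho/(2(2m-\rho))$ and $R = 2m-\rho$ (the static, uncharged limit of the null-dust solution of the following sections), and confirms that it vanishes:

```python
import sympy as sp

# Coordinates (v, rho, theta, phi) and the metric (metant):
#   ds^2 = 2 C(v,rho) dv^2 - 2 dv drho + R(v,rho)^2 dOmega^2 .
v, rho, th, ph, m = sp.symbols('v rho theta phi m', positive=True)
x = [v, rho, th, ph]
C = rho / (2*(2*m - rho))   # assumed static (Schwarzschild) form of C
R = 2*m - rho               # areal radius; horizon (rho = 0) at R = 2m

g = sp.Matrix([[2*C, -1, 0, 0],
               [-1,  0, 0, 0],
               [ 0,  0, R**2, 0],
               [ 0,  0, 0, R**2*sp.sin(th)**2]])
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad}(g_{db,c} + g_{dc,b} - g_{bc,d})
Gam = [[[sp.simplify(sp.Rational(1, 2)*sum(
            ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                        - sp.diff(g[b, c], x[d])) for d in range(n)))
         for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = Gamma^a_{bc,a} - Gamma^a_{ba,c}
#              + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
Ric = sp.Matrix(n, n, lambda b, c: sp.simplify(sum(
    sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
    + sum(Gam[a][a][d]*Gam[d][b][c] - Gam[a][c][d]*Gam[d][b][a]
          for d in range(n)) for a in range(n))))

Rscal = sp.simplify(sum(ginv[a, b]*Ric[a, b]
                        for a in range(n) for b in range(n)))
G = (Ric - sp.Rational(1, 2)*Rscal*g).applyfunc(sp.simplify)
# Vacuum: every component of G vanishes identically.
```

Replacing `C` and `R` by generic functions of $(v,\rho)$ in the same script reproduces the component equations above.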
Final Data {#physconf} ---------- We will focus on the case where $\rho =0$ is an isolated or dynamical horizon $H$. Thus $${\theta_{(\ell)}}{\overset{\scriptscriptstyle{H}}{=}}0 \quad \Longleftrightarrow \quad R_\ell {\overset{\scriptscriptstyle{H}}{=}}0 \; .$$ The notation ${\overset{\scriptscriptstyle{H}}{=}}$ indicates that the equality holds on $H$ (but not necessarily anywhere else). Further we can use the coordinate freedom (\[gaugefree\]) to set $$R_N {\overset{\scriptscriptstyle{H}}{=}}R_{,\rho}| {\overset{\scriptscriptstyle{H}}{=}}-1 \, . \label{RIC}$$ On $H$, the constraints (\[const1a\])-(\[const3a\]) fix three of $$\begin{aligned} \{C, \kappa, R, R_\ell, R_N, G_{\ell \ell}, G_{\ell N}, G_{NN} \}\end{aligned}$$ given the other five quantities. For example if $R_\ell {\overset{\scriptscriptstyle{H}}{=}}0$ and $R_N {\overset{\scriptscriptstyle{H}}{=}}-1$ then (\[const1a\]) and (\[const2a\]) give $$R_{,v} {\overset{\scriptscriptstyle{H}}{=}}C {\overset{\scriptscriptstyle{H}}{=}}\frac{R^2 G_{\ell \ell}}{1 - R^2 G_{\ell N}} \label{Rp}$$ and (\[const3a\]) gives $$\kappa = C_{\rho} {\overset{\scriptscriptstyle{H}}{=}}\frac{1}{2R} - \frac{R}{2} \left(G_{\ell N} + C G_{\! N \! N} \right) \label{kappaH} \, .$$ Thus if $G_{\ell \ell}$ and $G_{\ell N}$ are specified for $v_i \leq v \leq v_f$ on $H$ and $R(v_f) {\overset{\scriptscriptstyle{H}}{=}}R_f$ then one can solve (\[Rp\]) to find $R$ over the entire range. Equivalently one could take $R$ and one of $G_{\ell \ell}$ or $G_{\ell N}$ as primary and then solve for the other component of the stress-energy. Of course, in general the matter terms will also be constrained by their own equations; these will be treated in later sections. Further data on $\rho = 0$ will generally not be sufficient to fully determine the regions of interest and data will also be needed on an $\mathcal{N}$. Again this will depend on the specific matter model. 
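In practice (\[Rp\]) can be integrated numerically. The sketch below (our own illustration; the function names and the constant-flux test profile are assumptions, not from the paper) steps $R_{,v} = R^2 G_{\ell\ell}/(1 - R^2 G_{\ell N})$ backward in $v$ from the final condition $R(v_f) = R_f$:

```python
def horizon_radius(v_grid, G_ll, G_lN, R_f):
    """Solve R_{,v} = R^2 G_ll(v) / (1 - R^2 G_lN(v))  (Eq. (Rp)) by RK4,
    backward in v from the final condition R(v_f) = R_f.
    G_ll and G_lN are callables giving the horizon values of G_{ll}
    and G_{lN}; v_grid is increasing with v_grid[-1] = v_f."""
    def f(v, R):
        return R**2 * G_ll(v) / (1.0 - R**2 * G_lN(v))
    R, Rs = R_f, [R_f]
    for i in range(len(v_grid) - 1, 0, -1):
        h = v_grid[i - 1] - v_grid[i]          # negative: stepping backward
        vv = v_grid[i]
        k1 = f(vv, R)
        k2 = f(vv + h/2, R + h*k1/2)
        k3 = f(vv + h/2, R + h*k2/2)
        k4 = f(vv + h, R + h*k3)
        R += h*(k1 + 2*k2 + 2*k3 + k4)/6
        Rs.append(R)
    return Rs[::-1]                            # R at each v in v_grid

# Constant flux G_ll = g0 with G_lN = 0 has the exact solution
# 1/R(v) = 1/R_f + g0*(v_f - v), a convenient accuracy check.
v_grid = [0.01*i for i in range(1001)]         # v in [0, 10]
Rs = horizon_radius(v_grid, lambda v: 0.01, lambda v: 0.0, R_f=1.0)
# Rs[0] ≈ 1/1.1, the exact value for this constant-flux case.
```

As expected for positive infalling flux, the reconstructed horizon radius decreases toward the past.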
Nevertheless if there is a MOTT at $\rho = 0$ then the constraints provide significant information about the horizon. If $G_{\ell \ell} = 0$ (no flux of matter through the horizon) then we have an isolated horizon with $C=0$, a constant $R$ and a null $H$. This is independent of other components of the stress-energy. Alternatively if $G_{\ell \ell} > 0$ (the energy conditions forbid it to be negative) and $ G_{\ell N} < 1/R^2$ then we have a dynamical horizon with $C>0$, increasing $R$ and spacelike $H$[^1]. Note that this growth doesn’t depend in any way on $G_{\! N \! N}$: there is no sense in which the growing horizon “catches” outward moving matter and hence grows even faster. The behaviour of the coordinates relative to isolated and dynamical horizons along with $\mathscr{I}^+$ is illustrated in Fig. (\[dyniso\]). The evolution equations are more complicated and depend on the matter field equations. We examine two such cases in the following sections. Traceless inward flowing null matter {#sec:matter modelsI} ==================================== As our first example consider matter that falls entirely in the inward $N$-direction with no outward $\ell$-flux. Then data on the horizon should be sufficient to entirely determine the region of spacetime traced by the horizon-crossing inward null geodesics: there are no dynamics that don’t involve the horizon. Translating these words into equations, we assume that $$T_{ab} N^a N^b = 0 \label{R1}$$ (no matter flows in the outward $\ell$-direction). Further, for simplicity we also assume that it is trace-free $$g^{ab} T_{ab} = 0 \quad \Leftrightarrow \quad T_{\tilde{q}} = 2 T_{\ell N} \; . \label{R2}$$ Then we can solve for the metric using only the Bianchi identities $$\nabla_a G^{ab} = 0,$$ without any reference to detailed equations of motion for the matter field. 
Keeping spherical symmetry but temporarily suspending the other simplifying assumptions they may be written as: $$\begin{aligned} \mathcal{L}_\ell (R^2G_{\! N \! N}) + & \mathcal{L}_N(R^2G_{\ell N}) + R^2( 2 \kappa_{\ell} G_{\! N \! N}) \nonumber \\ & + \frac{1}{2} R^2\theta_{(N)} G_{\tilde{q}} = 0, \label{bianchi1} \\ \mathcal{L}_N(R^2G_{\ell \ell}) + & \mathcal{L}_\ell(R^2G_{\ell N}) + R^2(- 2 \kappa_{N} G_{\ell \ell}) \nonumber \\ & + \frac{1}{2} R^2 \theta_{(\ell)} G_{\tilde{q}} = 0 \, . \label{bianchi2}\end{aligned}$$ In terms of metric coefficients with $\kappa_N = 0$ plus (\[R1\]) and (\[R2\]) these reduce to: $$\begin{aligned} (R^4 G_{\ell N})_{,\rho} = 0 \,\,\text{ and } \label{constd1}\\ (R^2 G_{\ell \ell})_{,\rho} + \frac{1}{R^2} ( R^4 G_{\ell N} )_{,v} = 0. \label{constd2}\end{aligned}$$ As we shall see, this class of matter includes interesting examples like Vaidya-Reissner-Nordström (charged null dust). We now demonstrate that given knowledge of $G_{\ell \ell}$ and $G_{\ell N}$ over a region of horizon $\bar{H} = \{ H: v_i \leq v \leq v_f \}$ as well as $R(v_f) {\overset{\scriptscriptstyle{H}}{=}}R_f$ we can determine the spacetime everywhere out along the horizon-crossing inward null geodesics. On the horizon -------------- First consider the constraints on $\bar{H}$. In this case it is tidier to take $R$ and $G_{\ell N}$ as primary. Then we can specify $$\begin{aligned} R {\overset{\scriptscriptstyle{H}}{=}}R_H (v) \quad \mbox{and} \quad G_{\ell N} {\overset{\scriptscriptstyle{H}}{=}}\frac{Q(v)}{R_H^4} \label{GLNH}\end{aligned}$$ for some functions $R_H(v)$ (dimensions of length) and $Q(v)$ (dimensions of length squared) where the form of the latter is chosen for future convenience.
Then $$C {\overset{\scriptscriptstyle{H}}{=}}R_{H,v}$$ and by (\[Rp\]) $$G_{\ell \ell } {\overset{\scriptscriptstyle{H}}{=}}R_{H,v} \left( \frac{1}{R_H^2} - \frac{Q}{R_H^4} \right) \, . \label{GLLH}$$ Finally by (\[kappaH\]), $$\kappa {\overset{\scriptscriptstyle{H}}{=}}C_{,\rho} {\overset{\scriptscriptstyle{H}}{=}}\frac{1}{2R_H} \left(1 - \frac{Q}{R_H^2} \right) \, . \label{kappaHRN}$$ Off the horizon --------------- Next, integrate away from $\bar{H}$. First with $G_{\! N \! N}=0$ (\[fcevol1\]) can be integrated with initial condition (\[RIC\]) to give $$R(v,\rho) = R_H(v) - \rho \, .$$ Then with (\[GLNH\]) we can integrate (\[constd1\]) to find $$\begin{aligned} G_{\ell N} & = \frac{Q}{R^4} \end{aligned}$$ and use this result and (\[GLLH\]) to integrate (\[constd2\]) to get $$\begin{aligned} G_{\ell \ell} & = \frac{\left(R_H^2-Q \right) R_{H,v}}{R_H^2 R^2} - \frac{\rho \, Q_{,v} }{R_HR^3} \, . \end{aligned}$$ With these results in hand and initial condition $R_\ell {\overset{\scriptscriptstyle{H}}{=}}0$ we integrate (\[fcevol2\]) to get $$R_\ell = \frac{\rho(Q-R_H^2+ \rho R_H )}{2R^2 R_H }$$ and finally with initial conditions (\[Rp\]) and (\[kappaH\]) we can integrate (\[fcevol3\]) to find $$C = R_{H,v} - R_\ell \, . \label{TF2}$$ Comparison with Vaidya-Reissner-Nordström ----------------------------------------- We can now compare this derivation to a known example. The Vaidya-Reissner-Nordström (VRN) metric takes the form $$\mbox{d} s^2 = - \left(1 - \frac{2m(v)}{r}+\frac{q(v)^2}{r^2} \right) \mbox{d}v^2 + 2 \mbox{d} v \mbox{d} r + r^2 \mbox{d} \Omega^2$$ where the apparent horizon $r_H = m + \sqrt{m^2 - q^2}$ and $r$ is an affine parameter of the ingoing null geodesics.
To put it into the form of (\[metant\]) where the affine parameter measures distance off the horizon we make the transformation $$r = r_H - \rho$$ whence the metric takes the form $$\begin{aligned} \mbox{d} s^2 =& \left(2 r_{H,v} - \frac{\rho \left(q^2 - r_H (r_H-\rho) \right) }{r_H (r_H-\rho)^2} \right) \mbox{d}v^2 \\& - 2 \mbox{d} v \mbox{d} \rho + (r_H - \rho)^2 \mbox{d} \Omega^2 \, . \nonumber\end{aligned}$$ That is $$\begin{aligned} C & = r_{H,v} - \frac{\rho \left(q^2 - r_H (r_H-\rho) \right) }{2r_H (r_H-\rho)^2}\\ R & = r_H - \rho \end{aligned}$$ and on the horizon $$\begin{aligned} C {\overset{\scriptscriptstyle{H}}{=}}r_{H,v} \qquad \mbox{and} \qquad R {\overset{\scriptscriptstyle{H}}{=}}r_H \end{aligned}$$ as expected. To do a complete match we calculate the rest of the quantities. First appropriate null vectors are $$\begin{aligned} \ell &= \frac{\partial}{\partial v} + \left(r_{H,v} - \frac{\rho \left(q^2 - r_H (r_H-\rho) \right) }{2r_H (r_H-\rho)^2}\right) \frac{\partial}{\partial \rho} \\ N &= \frac{\partial}{\partial \rho} \, . \end{aligned}$$ Then direct calculation shows that $$\begin{aligned} R_\ell & = \frac{\rho \left(q^2 - r_H (r_H-\rho) \right) }{2r_H (r_H-\rho)^2} \\ R_N & = - 1\end{aligned}$$ and $$\begin{aligned} G_{\ell \ell} & = \frac{\left(r_H^2-q^2 \right) r_{H,v}}{r_H^2 r^2} - \frac{2 \rho q q_{,v} }{r_H r^3} \\ G_{\ell N} & = \frac{q^2}{(r_H-\rho)^4} \\ G_{NN} & = 0 \\ G_{\tilde{q}} & = \frac{2q^2}{(r_H-\rho)^4} \; . \end{aligned}$$ It is clear that with $R_H = r_H$ and $Q=q^2$ our general results (\[GLNH\])-(\[TF2\]) give rise to the VRN spacetime (as they should). As expected the data on the horizon is sufficient to determine the spacetime everywhere back out along the ingoing null geodesics: we simply solve a set of (coupled) ordinary differential equations along each curve. With the matter providing the only dynamics and that matter only moving inwards along the geodesics the problem is quite straightforward.
In this case there is no need to specify extra data on $\mathcal{N}$. We now turn to the more interesting case where the dynamics are driven by a scalar field for which there will be both inward and outward fluxes of matter. Massless scalar field {#sec:matter modelsII} ===================== Spherical spacetimes containing a massless scalar field $\phi(v,\rho)$ are governed by the stress-energy tensor $$\begin{aligned} T_{ab} & = \nabla_a \phi \nabla_b \phi - \frac{1}{2} g_{ab} \nabla^c \phi \nabla_c \phi \, . \label{stress_sc}\end{aligned}$$ This system has nonvanishing inward and outward fluxes: $$\begin{aligned} T_{\ell \ell} &= (\phi_{\ell})^2 \\ T_{NN} & = (\phi_{N})^2.\end{aligned}$$ Here and in the following keep in mind that $N = \frac{\partial}{\partial \rho}$ and so $\phi_N = \phi_{, \rho}$. We also observe from (\[stress\_sc\]) that $$T_{\ell N} = 0.$$ These fluxes are related by the wave equation $$\Box_g \phi := \nabla^\alpha \nabla_\alpha \phi = 0 \implies { \left(R \phi_\ell \right)_{,\rho} = - R_\ell \phi_{,\rho } }. \label{wave_eq}$$ For our purposes we are not particularly interested in the value of $\phi$ itself but rather in the associated net fluxes of energy in the ingoing and outgoing null directions. Hence we define $$\begin{aligned} \Phi_\ell = \sqrt{4 \pi} R \phi_{\ell} \quad \mbox{and} \quad \Phi_N = \sqrt{4 \pi} R \phi_{N} \, .\end{aligned}$$ Respectively these are the square roots of the scalar field energy fluxes in the $N$ and $\ell$ directions. That is, over a sphere of radius $R$, $\Phi_\ell$ is the square root of the total integrated flux in the $N$-direction and $\Phi_N$ is the square root of the total integrated flux in the $\ell$-direction. Though not strictly correct, we will often refer to $\Phi_\ell$ and $\Phi_N$ themselves as fluxes.
Then (\[wave\_eq\]) becomes $$\begin{aligned} \Phi_{\ell, \rho} = - \frac{R_\ell \Phi_N}{R} \, \label{wave_eqII}\end{aligned}$$ or, making use of the fact that $\phi_{,v \rho} = \phi_{,\rho v}$, $$\begin{aligned} \Phi_{N,v} = - \kappa \Phi_N - C \Phi_{N,\rho} - \frac{R_N \Phi_\ell}{R} \label{intCon} \, . \end{aligned}$$ These can usefully be understood as advection equations with sources. Recall that a general homogeneous advection equation can be written in the form $$\begin{aligned} \frac{\partial \psi}{\partial t} + C \frac{\partial \psi}{\partial x} = 0\end{aligned}$$ where $C$ is the speed of flow of $\psi$: if $C$ is constant then this has the exact solution $$\psi = \psi (x-Ct)$$ and so any pulse moves with speed $\frac{dx}{dt} = C$. Any non-homogeneous term corresponds to a source which adds or removes energy from the system. Then (\[wave\_eqII\]) tells us that the flux in the $N$-direction ($\Phi_\ell)$ is naturally undiminished as it flows along a (null) surface of constant $v$ and increasing $ \rho$. However the interaction with the flux in the $\ell$ direction can cause it to increase or decrease. Similarly (\[intCon\]) describes the flow of the flux in the $\ell$-direction ($\Phi_N$) along a surface of constant $\rho$ and increasing $v$. Rewriting with respect to the affine derivative (see Appendix \[daff\]) $D_v = \partial_v + \kappa$ it becomes $$\begin{aligned} D_v \Phi_N + C \Phi_{N,\rho} = - \frac{R_N \Phi_\ell}{R} \label{adN} \, . \end{aligned}$$ Then, as might be expected, $\Phi_N$ naturally flows with coordinate speed $C$ (recall that $\ell = \frac{\partial}{\partial v} + C \frac{\partial}{\partial \rho}$ so this is the speed of outgoing light relative to the coordinate system) but its strength can be augmented or diminished by interactions with the outward flux. 
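The advection picture above can be made concrete in a few lines of code. The following sketch is our own illustration (the grid, the speed $C$, and the pulse shape are arbitrary choices, not taken from the paper): it propagates a pulse with a first-order upwind discretization of the homogeneous equation $\psi_{,t} + C\psi_{,x} = 0$ and compares the result with the exact translated solution $\psi(x,t) = \psi(x - Ct)$.

```python
import math

def advect_upwind(psi, C, dx, dt, n_steps):
    """March psi_t + C psi_x = 0 with first-order upwind differences
    (valid for C > 0) on a periodic grid."""
    nu = C * dt / dx                      # Courant number; need nu <= 1
    for _ in range(n_steps):
        psi = [psi[i] - nu * (psi[i] - psi[i - 1]) for i in range(len(psi))]
    return psi

# Arbitrary illustrative setup (not taken from the paper).
N, L, C = 400, 1.0, 0.5
dx = L / N
dt = 0.5 * dx / C                         # safely inside the bound dt < dx/C
steps = 100
x = [i * dx for i in range(N)]
pulse = lambda c: [math.exp(-((xi - c) ** 2) / 0.005) for xi in x]

psi_final = advect_upwind(pulse(0.3), C, dx, dt, steps)
exact = pulse(0.3 + C * steps * dt)       # exact solution: psi(x - C t)
err = max(abs(a - b) for a, b in zip(psi_final, exact))
```

The upwind step only draws on data from the side the pulse arrives from, which is the discrete analogue of the causal restrictions discussed in Appendix \[causalityApp\].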
System of first order PDEs {#firstsystem} -------------------------- Together (\[wave\_eqII\]) and (\[intCon\]) constitute a first-order system of partial differential equations for the scalar field. We now restructure the gravitational field equations in the same way. First, with respect to $\Phi_\ell$ and $\Phi_N$ the constraint equations (\[const1\])-(\[const3\]) on constant $\rho$ surfaces become: $$\begin{aligned} R_{,v} &= R_\ell - C R_N \label{consc1} \\ R_{\ell,v} &= \kappa R_{\ell} + \frac{C \left( 1 + 2 R_{\ell} R_N \right)}{2 R} - \frac{\Phi_\ell^2}{R} \label{consc2} \\ R_{N,v} &= -\kappa R_{N} - \frac{\left( 1 + 2 R_{\ell} R_N \right)}{2 R} + \frac{C \Phi_N^2}{R} \label{consc3} \end{aligned}$$ while the “time”-evolution equations (\[cevol1\])-(\[cevol3\]) are: $$\begin{aligned} & {R_{,\rho\rho} = - \frac{\Phi_N^2}{R}} \label{C1} \\ & { \left( R R_{\ell}\right)_{,\rho} = - \frac{1}{2} }\label{C2a} \\ & {C_{,\rho \rho}}\,\, {= \frac{1 + 2 R_\ell R_N}{R^2} - \frac{2 \Phi_\ell \Phi_N}{R^2} } \, . \label{C3}\end{aligned}$$ Two of these equations can be simplified. First, integrating (\[C2a\]) from $\rho = 0$ on which $R_\ell {\overset{\scriptscriptstyle{H}}{=}}0$ we find $$\begin{aligned} R_\ell = - \frac{ \rho}{2 R} \label{Rl} \, . \end{aligned}$$ This can be substituted into (\[consc2\]) to turn it into an algebraic constraint $$\begin{aligned} C = 2 \Phi_\ell^2 - 2 R_\ell\left(\kappa R + R_\ell \right) \label{C2} \, . \end{aligned}$$ Despite these simplifications, the presence of interacting outward and inward matter fluxes means that, in contrast to the dust examples, this is truly a set of coupled partial differential equations. Hence we can expect that the matter and spacetime dynamics will be governed by off-horizon data in addition to data at $\rho = 0$. We reformulate as a system of first-order PDEs in the following way. First designate $$\begin{aligned} \; \; \{R, R_N, \kappa, \Phi_\ell, \Phi_N \} \end{aligned}$$ as the *primary variables*. 
The *secondary variables* $\{R_\ell, C\}$ are defined by (\[Rl\]) and (\[C2\]) in terms of the primaries. Next on $\rho=\mbox{constant}$ surfaces the primary variables are constrained by $$\begin{aligned} R_{,v} &= R_\ell - C R_{N} \; \; \mbox{and} \label{REq} \\ R_{N, v} &= -\kappa R_N - \frac{ 1}{2 R} \left( 1 + 2 R_{\ell} R_{N} - 2 C \Phi_N^2\right) \label{RNVEq}\end{aligned}$$ along with scalar flux equation (\[intCon\]) while their time evolution is governed by $$\begin{aligned} R_{,\rho} &= R_N \label{RNEq} \\ R_{N,\rho} & = - \frac{\Phi_N ^2}{R} \label{RNNEq} \\ \kappa_{,\rho} & = \frac{1}{R^2} \left(1 + 2 R_\ell R_{N} - 2 \Phi_{\ell} \Phi_{N}\right) \label{kappaEq} \\ { \Phi}_{\ell,\rho} &= - \frac{R_\ell \Phi_N}{R} \label{PhiLrhoEq} \, . \, \end{aligned}$$ We now consider how all of these equations may be used to integrate final data. The scheme is closely related to that used in [@Winicour:2012znc]. Final data on $\bar{H}$ and $\bar{\mathcal{N}}$ {#flux:onh} ----------------------------------------------- In line with the depiction in \[hd2\], we specify final data on $H \cup {\cal N}$ or rather on the sections $\bar{H} \cup \bar{{\cal N}}$ where $$\begin{aligned} \bar{H} &= \{(0,v)\in H: v_i \leq v \leq v_f\} \; \; \mbox{and} \\ \bar{\mathcal{N}} & = \{(\rho,v_f) \in \mathcal{N}: \rho_i \leq \rho \leq 0 \} \nonumber \, . \end{aligned}$$ Their intersection sphere is $\bar{H} \cap \bar{{\cal N}} = (0,v_f)$. Here and in what follows we suppress the angular coordinates. The scalar field data specified on those sections depends on whether $\bar{H}$ is isolated or dynamic. The final data is $$\begin{aligned} \bar{H}:&\; \; \left\{ \begin{array}{ll} \mbox{isolated:} & \Phi_\ell \equiv 0 \\ \mbox{dynamic:} & \Phi_\ell \; \; \mbox{and} \; \; \Phi_{N,\rho} \end{array} \label{data} \right. \\ \bar{{\cal N}}:&\;\; \Phi_N \nonumber \; \; \mbox{and} \\ \bar{H} \cap \bar{\cal N}:& \;\; R=R_o \nonumber \; . 
\end{aligned}$$ $\Phi_\ell$ and $\Phi_{N,\rho}$ on $\bar{H}$ are functions of $v$ while $\Phi_N$ on $\mathcal{\bar N}$ is a function of $\rho$. For a dynamic horizon $\Phi_N$ on $\bar{\mathcal{N}}$ and $\Phi_{N,\rho}$ on $\bar{H}$ should be compatible on $\bar{H}\cap \bar{\cal N}$. $R_o$ is a single number. Further on ${H}$ we specify $$\begin{aligned} R_\ell {\overset{\scriptscriptstyle{H}}{=}}0 \; \; \mbox{and} \; \; R_N {\overset{\scriptscriptstyle{H}}{=}}-1\end{aligned}$$ where the null vectors are scaled in the usual way and, as before, the notation ${\overset{\scriptscriptstyle{H}}{=}}$ indicates that all quantities on both sides of the equality are evaluated on $H$. This data can be used to evaluate all variables on $\bar{H}$. First from (\[C2\]) and (\[REq\]) $$\begin{aligned} C {\overset{\scriptscriptstyle{H}}{=}}& \, 2 \Phi_\ell^2 \label{Feq1} \; \; \mbox{and}\\ R {\overset{\scriptscriptstyle{H}}{=}}& \, R_o + 2\int_{v_f}^{v} \! \! \Phi_\ell^2 \, \mbox{d} v \, . \label{Rdyn1}\end{aligned}$$ Then these can be combined with (\[intCon\]): $$\begin{aligned} \kappa {\overset{\scriptscriptstyle{H}}{=}}& \, \frac{1}{2R} \left( 1 - 2C \Phi_{N}^2 \right) \label{Feq3}\end{aligned}$$ to get a first order equation to solve for $\Phi_N$ on $\bar{H}$: $$\begin{aligned} \Phi_{N,v} + \frac{1}{2R} \left( 1 - 4 \Phi_\ell^2 \Phi_N^2 \right) \Phi_N {\overset{\scriptscriptstyle{H}}{=}}- 2 \Phi_\ell^2 \Phi_{N,\rho} + \frac{\Phi_\ell}{R} \label{PhiNFV}\end{aligned}$$ Thus we have a first-order final value problem for $\Phi_N$ on $\bar{H}$. For an isolated horizon it can be explicitly solved as $$\begin{aligned} \Phi_N^{\mbox{\tiny{iso}}} {\overset{\scriptscriptstyle{H}}{=}}\Phi_{N_f} e^{- \nicefrac{(v - v_f)}{2R_o}} \label{Phi_solved}\end{aligned}$$ where $\Phi_{N_f} = \Phi_N(0, v_f)$. Equivalently as discussed in Appendix \[daff\], $\Phi_N$ is affinely constant on an isolated horizon. Hence from (\[data\]) we can obtain all primary and secondary quantities on $\bar{H}$. 
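On an isolated horizon the final value problem for $\Phi_N$ reduces to the single linear ODE $\Phi_{N,v} = -\Phi_N/(2R_o)$, so (\[Phi\_solved\]) is easy to check numerically. The sketch below is our own toy check (the values of $R_o$, $v_i$ and the final datum are arbitrary illustrative numbers): it marches the equation backwards from $v_f$ with classical RK4 and compares against the closed form.

```python
import math

def rk4_backward(f, y_final, v_f, v_i, n):
    """Integrate dy/dv = f(v, y) from v_f back to v_i -- a final value
    problem, as in the text -- with classical RK4."""
    h = (v_i - v_f) / n                   # negative step: into the past
    v, y = v_f, y_final
    for _ in range(n):
        k1 = f(v, y)
        k2 = f(v + h / 2, y + h * k1 / 2)
        k3 = f(v + h / 2, y + h * k2 / 2)
        k4 = f(v + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        v += h
    return y

# Illustrative numbers only; not from the paper.
R_o, v_f, v_i, Phi_Nf = 1.0, 0.0, -3.0, 0.7
rhs = lambda v, Phi: -Phi / (2.0 * R_o)   # Phi_{N,v} = -Phi_N/(2 R_o) on H
Phi_num = rk4_backward(rhs, Phi_Nf, v_f, v_i, 300)
Phi_exact = Phi_Nf * math.exp(-(v_i - v_f) / (2.0 * R_o))
```

Note that the flux grows as $e^{(v_f-v)/2R_o}$ towards the past, which is why the integration interval must be kept short compared to the relevant decay scales.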
We can also integrate data down $\mathcal{\bar{N}}$. With $\Phi_N$ as known data on $\bar{\cal N}$ and final values known for all quantities on $\bar{H} \cap \bar{\mathcal{N}}$ the rest can be calculated in order: i) Solve (\[RNEq\]) and (\[RNNEq\]) for $R$ and $R_N$. \[i\] ii) Calculate $R_\ell$ from (\[Rl\]). iii) Solve (\[PhiLrhoEq\]) for $\Phi_\ell$. iv) Solve (\[kappaEq\]) for $\kappa$. v) Calculate $C$ from (\[C2\]). \[vi\] We then have all data on $\bar{\cal N}$. ![The constraint equations along with horizon initial conditions, i.e. $R_\ell {\overset{\scriptscriptstyle{H}}{=}}0, R_N {\overset{\scriptscriptstyle{H}}{=}}-1 $ determine $C$ and $R$ on $\bar{H}$[]{data-label="fig:onh"}](scalar_scheme_redo_final.pdf) Integrating from the final data ------------------------------- We now consider how that data can be integrated into the causal past of $\bar{H} \cup \bar{\mathcal{N}}$. The basic steps in any integration scheme are demonstrated in a simple numerical integration based on forward Euler approximations. Assume a discretization $\{v_m, \rho_n\}$ (with $m$ and $n$ at their maxima along the final surfaces) by steps $\Delta v$ and $\Delta \rho$. Then if all data is known at $(v_m, \rho_{n+1})$ and $(v_{m+1}, \rho_n)$ we can extend that data to $(v_m, \rho_n)$. This is done by the following procedure (illustrated in FIG. \[vmrhon\]). ![How data is calculated at $(v_m, \rho_n)$: a) $\Phi_N$ is approximated from data at $(v_{m+1}, \rho_n)$ b) $\Phi_{N,\rho}$ is approximated by comparing $\Phi_N$ at $(v_m, \rho_n)$ and $(v_{m}, \rho_{n+1})$ and c) $R$, $R_N$, $\Phi_\ell$ and $\kappa$ are approximated from data at $(v_m, \rho_{n+1})$.[]{data-label="vmrhon"}](vmrhon) a) Use (\[intCon\]) at $(v_{m+1}, \rho_n)$ to find $\Phi_{N,v}$. 
Then $$\begin{aligned} \Phi_N(v_{m}, \rho_n) \approx \Phi_N(v_{m+1}, \rho_n) - \Phi_{N,v}(v_{m+1}, \rho_n) \Delta v\end{aligned}$$ b) Approximate $$\begin{aligned} \Phi_{N,\rho} (v_{m}, \rho_n) \approx \frac{\Phi_{N} (v_{m}, \rho_{n+1}) - \Phi_{N} (v_{m}, \rho_n)}{\Delta \rho}\end{aligned}$$ c) Use (\[RNEq\])-(\[PhiLrhoEq\]) at $(v_m, \rho_{n+1})$ to find $R_{,\rho}$, $R_{N,\rho}$, $\kappa_{,\rho}$ and $\Phi_{\ell,\rho}$. Then generically if $X$ is any of these quantities $$\begin{aligned} X(v_{m}, \rho_n) &\approx X(v_{m}, \rho_{n+1}) - X_{,\rho}(v_{m}, \rho_{n+1}) \Delta \rho \end{aligned}$$ ![Evolving $\Phi$ in the $- \frac{\partial}{\partial v}$ direction. []{data-label="fig:onh1"}](scalar_scheme_cross.pdf) Then there are several ways that this basic computational unit may be used to step through the spacetime. The most straightforward is an outward integration from the horizon where we start with the data on $\bar{\mathcal{N}}$ and use it in (\[intCon\]) to find $\Phi_N (v_{f} - \Delta v , \rho)$. Then it is straightforward to use (\[RNEq\])-(\[PhiLrhoEq\]) to integrate all other primary quantities out from $\bar{H}$ along $v =v_{f} - \Delta v$. This process, which is illustrated in FIG. \[fig:onh1\], can be repeated for $v =v_{f} - 2 \Delta v$, $v =v_{f} -3 \Delta v$ and so on until we reach the end of the known data at $v = v_i$. The preceding is the most direct method of integration. However if we are mostly interested in spacetime near the horizon (as we will be in the next section) then it is more useful to integrate backwards along the surfaces of constant $\rho$ rather than out along the surfaces of constant $v$ (as in FIG. \[fig:onh\] but on other $\rho$ surfaces). In this case we repeatedly apply steps a)-c) to integrate in the $- \frac{\partial}{\partial v}$ direction, stopping at each step to calculate $\Phi_{N,\rho}$ to use in the evolution of $\Phi_N$ to the next $v_m$. 
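Step c) can be sanity-checked on vacuum final data, where closed forms are available. With $\Phi_\ell = \Phi_N = 0$ the $\rho$-evolution reduces to $R_{,\rho} = R_N$, $R_{N,\rho} = 0$ and $\kappa_{,\rho} = (1 + 2R_\ell R_N)/R^2$ with $R_\ell = -\rho/(2R)$, and horizon data $R = R_o$, $R_N = -1$, $\kappa = 1/(2R_o)$ should reproduce $R = R_o - \rho$ and $\kappa = R_o/(2(R_o-\rho)^2)$ (the latter closed form is our own direct integration; it is not quoted in the text). A minimal explicit-Euler sketch with arbitrary step sizes:

```python
# Vacuum sanity check of the rho-integration in step c).  With
# Phi_l = Phi_N = 0 the evolution system reduces to
#   R_{,rho} = R_N,   R_{N,rho} = 0,   kappa_{,rho} = (1 + 2 R_l R_N)/R^2,
# with the algebraic relation R_l = -rho/(2 R).  Horizon data: R = R_o,
# R_N = -1, kappa = 1/(2 R_o).
R_o = 1.0
n, rho_end = 20000, -0.5                  # rho < 0 lies outside the horizon
h = rho_end / n
R, R_N, kappa = R_o, -1.0, 1.0 / (2 * R_o)
rho = 0.0
for _ in range(n):                        # explicit Euler in rho
    R_l = -rho / (2 * R)
    dkappa = (1 + 2 * R_l * R_N) / R ** 2
    R, kappa = R + h * R_N, kappa + h * dkappa
    rho += h
```

In vacuum the $v$-direction drops out entirely, so this isolates the $\rho$-marching part of the computational unit described above.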
As we shall see in the next section, in the slowly evolving, near horizon case things simplify: these pauses are not necessary as the $\Phi_{N,\rho}$ term will become negligible. Hence one can integrate continuously in the $v$-direction. It may not be immediately obvious how this integration scheme obeys causality and what restricts it to determining points inside the domain of dependence. This is briefly discussed in Appendix \[causalityApp\]. Spacetime near a slowly evolving horizon ---------------------------------------- We now apply the formalism to a concrete example: weak scalar fields near the horizon. Physically the black hole will be close to equilibrium and hence the horizon slowly evolving in the sense of [@Booth:2003ji; @Booth:2006bn]. “Near horizon” means that we expand all quantities as Taylor series in $\rho$ and keep terms up to order $\rho^2$. “Weak scalar field” means that we assume $$\begin{aligned} \Phi_N, \Phi_{\ell} \sim \frac{\epsilon}{R}\end{aligned}$$ and then expand the terms of the Taylor series up to order $\epsilon^2$. To order $\epsilon^0$ the spacetime will be vacuum (and Schwarzschild), order $\epsilon^1$ will be a test scalar field propagating on the Schwarzschild background and order $\epsilon^2$ will include the back-reaction of the scalar field on the geometry. ### Expanding the equations We expand all quantities as Taylor series in $\rho$. That is, for $X \in \{ R, R_N, R_\ell, \kappa, C, \Phi_\ell, \Phi_N\}$ $$\begin{aligned} X(v, \rho) = \sum_{n = 0}^{\infty} \frac{ \rho^nX^{(n)}(v)}{n!} \end{aligned}$$ with $$\begin{aligned} R_N^{(n)} = R^{(n+1)} \; \; \mbox{and} \; \kappa^{(n)} = C^{(n+1)} \; . \end{aligned}$$ The free final data is $\Phi_\ell^{(0)}$ on $\bar{H}$, $R_o$ on $\bar{H} \cap \bar{\mathcal{N}}$ and the Taylor expanded $$\begin{aligned} \Phi_{N_f} (\rho) = \sum_{n=0}^{\infty} \frac{\rho^n}{n!} \Phi_{N_f}^{(n)} \, \label{PNf}\end{aligned}$$ on $\bar{\mathcal{N}}$. 
Following [@Frolov:1998wf] we give names to special cases of this free data: i) *out-modes*: no flux through $\bar{H}$ ($ \Phi_{\ell}^{(0)} = 0$),\ non-zero flux through $\bar{\mathcal{N}}$ ($\Phi^{(n)}_{N} \neq 0$ for some $n$) ii) *down-modes*: non-zero flux through $\bar{H}$ ($\Phi_{\ell}^{(0)} \neq 0$),\ zero flux through ${\bar{\mathcal{N}}}$ ($\Phi^{(n)}_{N} = 0$ for all $n$) From the free data we construct the rest of the final data on $\bar{H}$. Equations (\[Feq1\]) and (\[Feq3\]) give $$\begin{aligned} C^{(0)} & = 2 \Phi_\ell^{(0)^{\mbox{\scriptsize{2}}}} \\ C^{(1)} & = \kappa^{(0)} \approx \frac{1}{2 R^{(0)}} \, . \end{aligned}$$ Here and in what follows the $\approx$ indicates that terms of order $\epsilon^3$ or higher have been dropped. Further, by our gauge choice $$\begin{aligned} R_N^{(0)} = R^{(1)} = -1 \end{aligned}$$ and so from (\[Rdyn1\]) $$\begin{aligned} R^{(0)} = R_o + \int_{v_f}^{v} \! \! C^{(0)} \, \mbox{d} v \, . \label{Rdyn}\end{aligned}$$ This is an order $\epsilon^2$ correction as long as the interval of integration is small relative to $\nicefrac{1}{\epsilon}$. The last piece of final data on $\bar{H}$ is $\Phi_N^{(0)}$ and comes from the first-order differential equation (\[PhiNFV\]) $$\begin{aligned} \frac{\mbox{d}\Phi_N^{(0)}}{\mbox{d} v} + \frac{\Phi_N^{(0)}}{2R_o} \approx \frac{ \Phi_\ell^{(0)} }{R_o} \label{PN1} \, \end{aligned}$$ which has the solution $$\begin{aligned} \Phi_N^{(0)} = \Phi_{N_f}^{(0)}e^{\nicefrac{(v_f - v)}{2R_o}} + \frac{e^{-\nicefrac{v}{2R_o}}}{R_o} \int_{v_f}^v e^{\nicefrac{\tilde{v}}{2R_o}} \Phi_\ell^{(0)} \mbox{d} \tilde{v} \label{PLInt}\end{aligned}$$ in which the free data $\Phi_{N_f}^{(0)}$ comes in as a boundary condition. Note that scalar fields that start small on the boundaries remain small in the interior, again as long as the integration time is short compared to $\nicefrac{1}{\epsilon}$. We assume that this is the case. 
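The final value problem (\[PN1\]) can be spot-checked numerically. In the sketch below (our own illustration; all parameter values are arbitrary) we take an affinely constant influx $\Phi_\ell^{(0)} = \Phi_{\ell_f}^{(0)}e^{V}$ with $V = (v-v_f)/2R_o$, for which an integrating factor gives the closed form $\Phi_N^{(0)} = (e^{V} - e^{-V})\Phi_{\ell_f}^{(0)} + e^{-V}\Phi_{N_f}^{(0)}$ (a solution one can confirm by substitution into (\[PN1\])), and march the ODE backwards from $v_f$ with RK4:

```python
import math

# Illustrative parameters only; not from the paper.
R_o, v_f, v_i = 1.0, 0.0, -2.0
Phi_lf, Phi_Nf = 0.3, 0.5                 # final data on the horizon

V = lambda v: (v - v_f) / (2.0 * R_o)

def rhs(v, Phi_N):
    """(PN1): d Phi_N/dv + Phi_N/(2 R_o) = Phi_l/R_o with the affinely
    constant influx Phi_l = Phi_lf * exp(V)."""
    return -Phi_N / (2.0 * R_o) + Phi_lf * math.exp(V(v)) / R_o

# March backwards from the final value at v_f with RK4.
n = 2000
h = (v_i - v_f) / n
v, y = v_f, Phi_Nf
for _ in range(n):
    k1 = rhs(v, y)
    k2 = rhs(v + h / 2, y + h * k1 / 2)
    k3 = rhs(v + h / 2, y + h * k2 / 2)
    k4 = rhs(v + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    v += h

# Closed form obtained with an integrating factor.
closed = ((math.exp(V(v_i)) - math.exp(-V(v_i))) * Phi_lf
          + math.exp(-V(v_i)) * Phi_Nf)
```

The same check works for any influx profile: only the particular integral in the closed form changes.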
From the final data, the black hole is close to equilibrium and the horizon is slowly evolving to order $\epsilon^2$. That is, the expansion parameter [@Booth:2003ji; @Booth:2006bn] is small: $$\begin{aligned} C \left( \frac{1}{2} \theta_{(N)}^2 + G_{ab} N^a N^b \right) \approx \left(\frac{4 \Phi_\ell^2 }{R^2} \right) \sim \frac{4\epsilon^2}{R^2} \; . \end{aligned}$$ Further we already have the first-order expansion of $C$: $$\begin{aligned} C \approx 2 \Phi_\ell^{(0)^{\mbox{\scriptsize{2}}}} \! + \frac{\rho}{2 R_o} \, . \end{aligned}$$ That is, (to first order) there is a null surface at $$\begin{aligned} \rho_{\mbox{\tiny{EHC}}} \approx - 4 R_o \Phi_\ell^{(0)^{\mbox{\scriptsize{2}}}} \; . \end{aligned}$$ This null surface is the event horizon candidate discussed in [@Booth:2012xm]: if the horizon remains slowly evolving throughout its future evolution and ultimately transitions to isolation then the event horizon candidate is the event horizon. Moving off the horizon to calculate up to second order in $\rho$, from (\[RNEq\]) and (\[RNNEq\]) we find $$\begin{aligned} R_N^{(1)} = R^{(2)} & \approx - \frac{\Phi_{N}^{{(0)}^{\mbox{\scriptsize{2}}}}}{R_o} \label{RX2} \\ R_N^{(2)} & \approx -\frac{\Phi_N^{(0)} \left(\Phi_N^{(0)}+2R_o \Phi_N^{(1)} \right)}{R_o^2} \, \label{RX3} \end{aligned}$$ and so from (\[Rl\]) $$\begin{aligned} R_\ell^{(0)} & = 0 \\ R_\ell^{(1)} & = - \frac{1}{2R^{(0)}} \\ R_\ell^{(2)} & = - \frac{1}{R^{{(0)}^{\mbox{\scriptsize2}}}} \, . \end{aligned}$$ Note that the last two terms will include terms of order $\epsilon^2$ once the (\[Rdyn\]) integration is done to calculate $R^{(0)}$. From (\[PhiLrhoEq\]) we can rewrite the $\Phi_\ell^{(n)}$ terms with respect to the $\Phi_N^{(n)}$: $$\begin{aligned} \Phi_\ell^{(1)} &= 0 \label{PL1} \\ \Phi_\ell^{(2)} & \approx \frac{ \Phi^{(0)}_N}{2 R_o^2} \label{PL2} \, . 
\end{aligned}$$ The vanishing linear-order term reflects the fact that close to the horizon (where $R_\ell = 0$) the inward flux decouples from the outward (\[PhiLrhoEq\]) and so freely propagates into the black hole. Physically this means that (to first order in $\rho$ near the horizon) the horizon flux is approximately equal to the “near-horizon” flux. Next, from (\[kappaEq\]) $$\begin{aligned} \kappa^{(1)} & = C^{(2)} \approx\frac{1}{R^{{(0)}^{\mbox{\scriptsize2}}}} -\frac{ 2 \Phi_\ell^{(0)} \Phi_N^{(0)}}{R_o^2} \mbox{ and} \\ \kappa^{(2)} & \approx \frac{3}{R^{{(0)}^{\mbox{\scriptsize3}}}} - \frac{2\Phi_\ell^{(0)} \left(2 \Phi_N^{(0)} + R_o \Phi_N^{(1)}\right) }{R_o^3} \, . \end{aligned}$$ Again keep in mind that the $R^{(0)}$ terms will be corrected to order $\epsilon^2$ from (\[Rdyn\]). Finally these quantities may be substituted into (\[intCon\]) to get differential equations for the $\Phi^{(n)}_N$: $$\begin{aligned} \frac{\mbox{d} \Phi_N^{(1)} }{\mbox{d} v} + \frac{\Phi_N^{(1)}}{R_o} & \approx \frac{ \Phi_{\ell}^{(0)}}{R_o^2} - \frac{\Phi_N^{(0)} }{R_o^2} \label{PN2} \\ \frac{\mbox{d} \Phi_N^{(2)}}{\mbox{d} v} + \frac{3 \Phi_N^{(2)}}{2R_o} & \approx \frac{2 \Phi_{\ell}^{(0)}}{R_o^3} - \frac{5 \Phi_N^{(0)}}{2 R_o^3} - \frac{3 \Phi_N^{(1)} }{R_o^2} \label{PN3} \, . \end{aligned}$$ Like (\[PLInt\]) these are easily solved with an integrating factor and respectively have $\Phi_{N_f}^{(1)}$ and $\Phi_{N_f}^{(2)}$ as boundary conditions. Note the important simplification in this regime that enables these straightforward solutions. The fact that $R_\ell \sim \rho $ has raised the $\rho$-order of the $\Phi_{N,\rho}$ terms. As a result we can integrate directly across the $\rho=\mbox{constant}$ surfaces rather than having to pause at each step to first calculate the $\rho$-derivative. The $\Phi_{N_f}^{(n)}$ are final data for these equations. They can be solved order-by-order and then substituted back into the other expressions to reconstruct the near-horizon spacetime. 
It is also important that the matter and geometry equations decompose cleanly in orders of $\epsilon$: we can solve the matter equations at order $\epsilon$ relative to a fixed background geometry and then use those results to solve for the corrections to the geometry at order $\epsilon^2$. ### Constant inward flux We now consider the concrete example of an affinely constant flux through $\bar{H}$ along with an analytic flux through $\bar{\mathcal{N}}$. Then by Appendix \[daff\] $$\begin{aligned} \Phi_\ell^{(0)} = \Phi_{\ell_f}^{(0)} {e}^{V} \, ,\end{aligned}$$ where $ \Phi_{\ell_f}^{(0)} $ is the value of $\Phi_\ell^{(0)}$ at $v_f$ and $V = \frac{v-v_f}{2 R_o}$ while $\Phi_{N_f}$ retains its form from (\[PNf\]). We solve the equations for this data up to second order in $\rho$ and $\epsilon$. First, for the $\Phi_N^{(n)}$ equations we find: $$\begin{aligned} \Phi_N^{(0)} \approx & \; \left(e^V \!-e^{-V} \right) \Phi_{\ell_f}^{(0)} + e^{-V} \Phi_{N_f}^{(0)} \label{matter1} \\ \Phi_N^{(1)} \approx & \; \frac{2 \Phi_{\ell_f}^{(0)} }{R_o} \left(e^{-V} - e^{-2V} \right) + \frac{2\Phi_{N_f}^{(0)}}{R_o} \left(e^{-2V} - e^{-V} \right) \\ & \! + \Phi_{N_f}^{(1)} e^{-2V} \nonumber \\ \Phi_N^{(2)} \approx & - \frac{\Phi_{\ell_f}^{(0)}}{4R_o^2} \left(e^V \!+14 e^{-V} - 48 e^{-2V}\! + 33 e^{-3V} \right) \\ & +\frac{\Phi_{N_f}^{(0)}}{2R_o^2} \left(7e^{-V} \! - 24 e^{-2V} +17 e^{-3V} \right) \nonumber \\ & + \frac{6\Phi_{N_f}^{(1)}}{R_o} \left(e^{-3V} \!- e^{-2V} \right) + \Phi_{N_f}^{(2)} e^{-3V}\end{aligned}$$ and so $$\begin{aligned} \Phi_\ell^{(0)} & = e^{V} \Phi_{\ell_f}^{(0)}\, \\ \Phi_\ell^{(1)} &= 0 \\ \Phi_\ell^{(2)} & \approx \frac{ \Phi_{\ell_f}^{(0)} }{2R_o^2} \left(e^V \!-e^{-V} \right) + \frac{ \Phi_{N_f}^{(0)}}{2R_o^2} e^{-V} \label{matter2}\, . \end{aligned}$$ The scalar field equations are linear and so it is not surprising that to this order in $\epsilon$ each solution can be thought of as a linear combination of down and out modes. 
However for the geometry at order $\epsilon^2$, down and out modes no longer combine in a linear way. These quantities can be found simply by substituting the $\Phi_\ell^{(n)}$ and $\Phi_N^{(n)}$ into the expression for $R^{(n)}$, $R_N^{(n)}$, $R_\ell^{(n)}$, $C^{(n)}$ and $\kappa^{(n)}$ given in the last section. They are corrected at order $\epsilon^2$ by flux terms that are quadratic in combinations of $\Phi_{\ell_f}^{(m)}$ and $\Phi_{N_f}^{(n)}$. The terms are somewhat messy and the details not especially enlightening. Hence we do not write them out explicitly here. ### $\bar{H}-\bar{\mathcal{N}}$ correlations {#correlSect} From the preceding sections it is clear that there does not need to be any correlation between the scalar field flux crossing $\bar{H}$ and that crossing $\bar{\mathcal{N}}$. These fluxes are actually free data. Any correlations will result from appropriate initial configurations of the fields. In this final example we consider a physically interesting case where such a correlation exists. Consider quadratic affine final data (Appendix \[daff\]) on $\bar{H} = \{(v, 0): v_i < v < v_f\}$: $$\begin{aligned} \Phi_\ell^{(0)} = a_0 e^V + a_1e^{2V} + a_2 e^{3V} \label{quadratic}\end{aligned}$$ for $V = \nicefrac{v-v_f}{2R_o}$ along with similarly quadratic affine data on $\bar{\mathcal{N}}$: $$\begin{aligned} \Phi_{N_f} = \Phi_{N_f}^{(0)} + \rho \Phi_{N_f}^{(1)} + \frac{\rho^2}{2} \Phi_{N_f}^{(2)} . \end{aligned}$$ A priori these are uncorrelated but let us restrict the initial configuration so that $\Phi_{N}^{(n)}(v_i) = 0$. That is, there is no $\Phi_N$ flux through $v=v_i$. Then the process to apply these conditions is, given the free final data on $\bar{H}$: i) Solve for the $\Phi_N^{(n)}$ from (\[PN1\]), (\[PN2\]) and (\[PN3\]). ii) Solve $\Phi_N^{(n)} (v_i) = 0$ to find the $\Phi_{N_f}^{(n)}$ in terms of the $a_n$. These are linear equations and so the solution is straightforward. 
iii) Substitute the resulting expressions for $\Phi_N^{(n)}$ into results from the previous sections to find all other quantities. These calculations are straightforward but quite messy. Here we only present the final results for $\Phi_{N_f}$: $$\begin{aligned} \Phi_{N_f}^{(0)} \approx &( 1\! - e^{2V_i}) a_0 + \frac{2a_1(1\!-e^{3V_i})}{3} + \frac{a_2 (1\!-e^{4V_i})}{2} \\ \Phi_{N_f}^{(1)} \approx & \frac{2a_0 (e^{2V_i} - e^{3V_i})}{R_o} + \frac{a_1 (1 + 8 e^{3V_i} - 9 e^{4V_i} )}{6R_o} \\ & + \frac{a_2 (1 + 5 e^{4 V_i} - 6 e^{5V_i} )}{5R_o} \nonumber \\ \Phi_{N_f}^{(2)} \approx & -\frac{a_0 (1+14 e^{2V_i} - 48 e^{3V_i} +33 e^{4V_i} ) }{4R_o^2} \\ & -\frac{a_1 (1 + 35 e^{3V_i} - 135 e^{4V_i} + 99 e^{5V_i} ) }{15R_o^2} \nonumber \\ & +\frac{a_2 (1 -35 e^{4V_i} + 144 e^{5V_i} -110 e^{6V_i} ) }{20R_o^2} \nonumber\end{aligned}$$ where $V_i = V(v_i)$. If $V_i$ is sufficiently negative that we can neglect the exponential terms: $$\begin{aligned} \Phi_{N_f}^{(0)} \approx &\; a_0 + \frac{2a_1}{3} + \frac{a_2 }{2} \\ \Phi_{N_f}^{(1)} \approx & \frac{a_1}{6R_o} + \frac{a_2}{5R_o} \nonumber \\ \Phi_{N_f}^{(2)} \approx & -\frac{a_0 }{4R_o^2} - \frac{a_1 }{15R_o^2} + \frac{a_2 }{20R_o^2} \; . \nonumber\end{aligned}$$ In either case the flux through $\bar{H}$ fully determines the flux through $\bar{\mathcal{N}}$. The constraint at $v_i$ is sufficient to determine the Taylor expansion of the flux through $\bar{\mathcal{N}}$ relative to the expansion of the flux through $\bar{H}$. Though we have only done this to second order, we expect the same process to fix the expansions to arbitrary order. Discussion {#sec:discussion} ========== In this paper we have begun building a formalism that constructs spacetime in the causal past of a horizon $\bar{H}$ and an intersecting ingoing null surface $\bar{{\cal N}}$ using final data on those surfaces. 
It can be thought of as a specialized characteristic initial value formulation and is particularly closely related to that developed in [@Winicour:2013gha]. Our main interest has been to use the formalism to better understand the relationship between horizon dynamics and off-horizon fluxes. So far we have restricted our attention to spherical symmetry and so have included matter fields to drive the dynamics. One of the features of characteristic initial value problems is that they isolate free data that may be specified on each of the initial surfaces. Hence it is no surprise that the corresponding data in our formalism is also free and uncorrelated. We considered two types of data: inward-flowing null matter and massless scalar fields. For the inward-flowing null matter, data on the horizon actually determines the entire spacetime running backwards along the ingoing null geodesics that cross $\bar{H}$. Physically this makes sense: this is the only flow of matter and so there is nothing else to contribute to the dynamics. More interesting are the massless scalar field spacetimes. In that case matter can flow both inwards and outwards and, further, inward-moving radiation can scatter outwards and vice versa. For the weak-field near-horizon regime that we studied most closely, the free final data is the scalar field flux through $\bar{H}$ and $\bar{\mathcal{N}}$ along with the value of $R$ at their intersection. Hence, as noted, these fluxes are uncorrelated. However we also considered the case where there was no initial flux of scalar field travelling “up” the horizon. In this case the coefficients of the Taylor expansion of the inward flux on $\bar{H}$ fully determined those on $\bar{\mathcal{N}}$ (though in a fairly complicated way). This constraint is physically reasonable: one would expect the dominant matter fields close to a black hole horizon to be infalling as opposed to travelling (almost) parallel to the horizon. 
It is hard to imagine a mechanism for generating strong parallel fluxes. While we have so far worked in spherical symmetry, the current work still suggests ways to think about the horizon-$\mathscr{I}^+$ correlation problem for general spacetimes. For a dynamic non-spherical vacuum spacetime, gravitational wave fluxes will be the analogue of the scalar field fluxes of this paper and almost certainly they will also be free data. Then any correlations will necessarily result from special initial configurations. However, as in our example, these may not need to be very exotic. It may be sufficient to eliminate strong outward-travelling near-horizon fluxes. In future work we will examine these more general cases in detail. This work was supported by NSERC Grants 2013-261429 and 2018-04873. We are thankful to Jeff Winicour for discussions on characteristic evolution during the 2017 Atlantic General Relativity Workshop and Conference at Memorial University. IB would like to thank Abhay Ashtekar, José-Luis Jaramillo and Badri Krishnan for discussions during the 2018 “Focus Session on Dynamical Horizons, Binary Coalescences, Simulations and Waveforms” at Penn State. Causal past of $\bar{H} \cup \bar{\cal{N}}$ {#causalityApp} =========================================== In this appendix we consider how the general integration scheme for the scalar field spacetimes of Section \[sec:matter modelsII\] “knows” how to stay within the past domain of dependence of $\bar{H} \cup \bar{\mathcal{N}}$. ![Causality restrictions on $\Delta v$: the CFL condition restricts the choice of $\Delta v$ to ensure that attempted numerical evolutions respect causality. In this figure the $\rho$ and $v$ coordinates are drawn to be perpendicular to clarify the connection with the usual advection equation: to compare to other diagrams rotate by about $45^\circ$ clockwise and skew so coordinate curves are no longer perpendicular. The dashed lines are null and have slope $C$ in this coordinate system. 
If data at points $A$, $B$ and $C$ is used to determine $\Phi_{N,\rho}$ then the size of the discrete $v$-evolution is limited to lie inside the null line from point $C$. The largest $\Delta v$ allowed by the restriction evolves to $D$. []{data-label="causality"}](Adv_Causality.pdf) First, it is clear how the process develops spacetime up to the bottom left-hand null boundary ($v=v_i$) of the past domain of dependence. The bottom right-hand boundary is a little more complicated but follows from the advection form of the $\Phi_{N,v}$ equation (\[adN\]). Details will depend on the exact scheme of integration but the general picture is as follows. Assume that we have discretized the problem so that we are working at points $(v_j, \rho_k)$. Then, in using (\[adN\]) to move from a surface $v_j$ to $v_{j-1}$, the Courant-Friedrichs-Lewy (CFL) condition tells us that the maximum allowed $\Delta v$ is bounded by $$\begin{aligned} \Delta v < \frac{\Delta \rho}{C} \, ,\end{aligned}$$ where $\Delta \rho$ is the coordinate separation of the points that we are using to calculate the right-hand side of (\[adN\]): to get second-order accuracy in $\Phi_{N,\rho}$ we use a centred derivative $$\begin{aligned} \Phi_{N,\rho} \approx \frac{\Phi_N(v_{j},\rho_{k+1} )- \Phi_N(v_{j},\rho_{k-1}) }{2 \Delta \rho}\end{aligned}$$ and so need adjacent points as shown in FIG. \[causality\]. ![A cartoon showing the CFL-limited past domain of dependence of $\bar{H} \cup \bar{\mathcal{N}}$. Null lines are now drawn at $45^\circ$ so the analytic past domain of dependence is bound by the heavy dashed null lines running back from the ends of $\bar{H}$ and $\bar{\mathcal{N}}$. A (very coarse) discretization is depicted by the gray lines and the region that cannot be determined with dashed lines. The boundary points of that region are heavy dots. []{data-label="causality_2"}](FullCausal_Discrete.pdf) Then the lower-right causal boundary of FIG. 
\[dod\] is enforced by a combination of the endpoints of $\bar{\mathcal{N}}$ and the CFL condition as shown in Figure \[causality\_2\]. The numerical past domain of dependence necessarily lies inside the analytic domain. The coarseness of the discretization in the figure dramatizes the effect: a finer discretization would keep the domains closer. Affine derivatives and final data {#daff} ================================= The off-horizon $\rho$-coordinate in our coordinate system is affine while $v$ is not. However, as seen in the main text, when considering the final data on $\bar{H}$ it is more natural to work relative to an affine parameter. This is somewhat complicated because $\Phi_\ell$ and $\Phi_N$ are respectively linearly dependent on $\ell$ and $N$ and the scaling of those vectors is also tied to the coordinates via (\[E1\]), (\[E2\]) and (\[crel1\]). In this appendix we will discuss the affine parameterization of the horizon and the associated affine derivatives for various quantities. Restricting our attention to an isolated horizon $\bar{H}$ with $\kappa = \frac{1}{2R_o}$, consider a reparameterization $$\begin{aligned} \tilde{v} = \tilde{v} (v) \, . \end{aligned}$$ Then $$\begin{aligned} \frac{\partial}{\partial v} = \frac{\mbox{d} \tilde{v}}{\mbox{d} v} \frac{\partial}{\partial \tilde{v}}\end{aligned}$$ and so $$\begin{aligned} \ell = e^V \tilde{\ell} \; \; \mbox{and} \; N = e^{-V} \tilde{N}\end{aligned}$$ where we have defined $V$ so that $ \displaystyle e^V=\frac{\mbox{d} \tilde{v}}{\mbox{d} v} \, . $ Hence $$\begin{aligned} \tilde{\kappa} = - \tilde{N}_b \tilde{\ell}^a \nabla_a \tilde{\ell}^b = e^{-V} \left(\kappa - \frac{\mbox{d} V}{\mbox{d} v} \right) \, \end{aligned}$$ and so for an affine parameterization ($ \kappa=\partial_v V $): $$\begin{aligned} e^V = \exp \left( \frac{v-v_f}{2R_o} \right) \end{aligned}$$ for some $v_f$ and $$\begin{aligned} \tilde{v} - \tilde{v}_o = 2 R_o e^V \end{aligned}$$ for some $\tilde{v}_o$. 
The $v_f$ freedom corresponds to the freedom to rescale an affine parameterization by a constant multiple while the $\tilde{v}_o$ is the freedom to set the zero of $\tilde{v}$ wherever you like. Now consider derivatives with respect to this affine parameter. For a regular scalar field $$\begin{aligned} \frac{\mbox{d} f}{\mbox{d} \tilde{v}} = e^{-V} \frac{\mbox{d}f}{\mbox{d} v} \, . \end{aligned}$$ However in this paper we are often interested in scalar quantities that are defined with respect to the null vectors: $$\begin{aligned} \Phi^{(0)}_{\ell} = e^{V} \Phi^{(0)}_{\tilde{\ell}} \; \; \mbox{and} \; \Phi^{(0)}_{N} = e^{-V} \Phi^{(0)}_{\tilde{N}} \, . \end{aligned}$$ Then $$\begin{aligned} \frac{\mbox{d} \Phi^{(0)}_{\tilde{\ell}}}{\mbox{d} \tilde{v}}\! &= \! e^{-V} \! \frac{\mbox{d}}{\mbox{d} v} \left(e^{-V} \Phi^{(0)}_\ell \! \right) \! = \! e^{-2V} \! \! \left(\! \frac{\mbox{d} \Phi^{(0)}_\ell}{\mbox{d} v} \! - \kappa \Phi^{(0)}_\ell \! \! \right) \\ \frac{\mbox{d} \Phi^{(0)}_{\!\tilde{N}}}{\mbox{d} \tilde{v}}\! &= e^{-V} \! \frac{\mbox{d}}{\mbox{d} v} \left(e^{V} \Phi^{(0)}_N \right) = \frac{\mbox{d} \Phi^{(0)}_N}{\mbox{d} v} \! + \kappa \Phi^{(0)}_N \, . \end{aligned}$$ That is, these quantities are affinely constant if $$\begin{aligned} \Phi_\ell = e^{V} \Phi^{(0)}_{\ell_f} \; \; \mbox{and} \; \Phi_N = e^{-V} \Phi^{(0)}_{N_f} \end{aligned}$$ for some constants $\Phi^{(0)}_{\ell_f}$ and $\Phi^{(0)}_{N_f}$. In the main text we write this affine derivative on $\bar{H}$ as $D_v$ with its exact form depending on the $\ell$ or $N$ dependence of the quantity being differentiated. Finally at (\[quadratic\]) we consider a $\Phi_\ell$ that is “affinely quadratic”. 
By this we mean that: $$\begin{aligned} \Phi_{\tilde{\ell}} &= A_o + A_1 \tilde{v} + A_2 \tilde{v} ^2 \nonumber \\ & \mspace{70mu} \Updownarrow \nonumber \\ \Phi_\ell &= a_o e^{V} + a_1 e^{2V} + a_2 e^{3V} \, ,\end{aligned}$$ where for simplicity we have set $\tilde{v}_o$ to zero (so that $v=0$ is $\tilde{v}=2R_o$) and absorbed the extra $2R_o$s into the $a_n$. [^1]: $G_{\ell N} > 1/R^2$ signals that another MOTS has formed outside the original one and so a numerical simulation would see an apparent horizon “jump” [@Booth:2005ng; @Bousso:2015qqa]. In the current paper all matter satisfies $G_{\ell N} < 1/R^2$ and so this situation does not arise.
--- abstract: 'A classical recursive construction for mutually orthogonal latin squares (MOLS) is shown to hold more generally for a class of permutation codes of length $n$ and minimum distance $n-1$. When such codes of length $p+1$ are included as ingredients, we obtain a general lower bound $M(n,n-1) \ge n^{1.079}$ for large $n$, gaining a small improvement on the guarantee given by MOLS.' address: - 'Sergey Bereg: Computer Science, University of Texas at Dallas, Richardson, TX ' - 'Peter J.  Dukes: Mathematics and Statistics, University of Victoria, Victoria, BC ' author: - Sergey Bereg - 'Peter J. Dukes' title: | A lower bound on permutation codes\ of distance $n-1$ --- ------------------------------------------------------------------------ Introduction ============ Let $n$ be a positive integer. The *Hamming distance* between two permutations $\sigma, \tau \in \mathcal{S}_n$ is the number of non-fixed points of $\sigma \tau^{-1}$, or, equivalently, the number of disagreements when $\sigma$ and $\tau$ are written as words in single-line notation. For example, $1234$ and $3241$ are at distance three. A *permutation code* PC$(n,d)$ is a subset $\Gamma$ of $\mathcal{S}_n$ such that the distance between any two distinct elements of $\Gamma$ is at least $d$. The language of classical coding theory is often used: elements of $\Gamma$ are *words*, $n$ is the *length* of the code, and the parameter $d$ is the *minimum distance*, although for our purposes it is not important whether distance $d$ is ever achieved. Permutation codes are also called *permutation arrays* by some authors, where the words are written as rows of a $|\Gamma| \times n$ array. The investigation of permutation codes essentially began with the articles [@DV; @FD]. After a decade or so of inactivity on the topic, permutation codes enjoyed a resurgence due to various applications. See [@CCD; @H; @SM] for surveys of construction methods and for more on the coding applications. 
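As a quick aid to experimentation, the two equivalent descriptions of the distance above can be checked by machine. The following is a minimal Python sketch of our own (function names are ours, not from the paper):

```python
def hamming(sigma, tau):
    # number of positions where the single-line words disagree
    return sum(1 for s, t in zip(sigma, tau) if s != t)

def nonfixed_quotient(sigma, tau):
    # number of non-fixed points of sigma composed with tau^{-1}
    position = {symbol: p for p, symbol in enumerate(tau)}  # tau^{-1}
    return sum(1 for symbol in tau if sigma[position[symbol]] != symbol)

# The example from the text: 1234 and 3241 are at distance three.
sigma, tau = (1, 2, 3, 4), (3, 2, 4, 1)
```

The two functions agree on every pair of permutations, reflecting the equivalence of the two definitions.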
For positive integers $n \ge d$, we let $M(n,d)$ denote the maximum size of a PC$(n,d)$. It is easy to see that $M(n,1)=M(n,2)=n!$, and that $M(n,n)=n$. The Johnson bound $M(n,d) \le n!/(d-1)!$ holds. The alternating group $A_n$ shows that $M(n,3)=n!/2$. More generally, a sharply $k$-transitive subgroup of $\mathcal{S}_n$ furnishes a permutation code of (maximum possible) size $n!/(n-k)!$. For instance, the Mathieu groups $M_{11}$ and $M_{12}$ are maximum PC$(11,7)$ and PC$(12,7)$, respectively. On the other hand, determination of $M(n,d)$ absent any algebraic structure appears to be a difficult problem. As an example, at present it is known only that $78 \le M(7,5) \le 134$; see [@BD; @JLOS] for details. A table of bounds on $M(n,d)$ can be found in [@SM]. In [@CKL], it was shown that the existence of $r$ mutually orthogonal latin squares (MOLS) of order $n$ yields a permutation code PC$(n,n-1)$ of size $rn$. Although construction of MOLS is challenging in general, the problem is at least well studied. Lower bounds on MOLS can be applied to the permutation code setting, though for small $n$ that are not prime powers the code sizes can be much larger than the MOLS guarantee. For example, $M(6,5) =18$ despite the nonexistence of orthogonal latin squares of order six, and $M(10,9) \ge 49$, [@JS], whereas no triple of MOLS of order 10 is known. On the other hand, it is straightforward to see, [@CKL], that $M(n,n-1)=n(n-1)$ implies existence of a full set of MOLS (equivalently a projective plane) of order $n$, so any nontrivial upper bound on permutation codes would have major impact on design theory and finite geometry. This connection is explored in more detail in [@BM]. Permutation codes are used in [@JS2] for some recent MOLS constructions. Let $N(n)$ denote the maximum number of MOLS of order $n$. Chowla, Erdős and Strauss showed in [@CES] that $N(n)$ goes to infinity. 
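The sharply $2$-transitive case can be made concrete: for a prime $p$, the affine maps $x \mapsto ax+b$ on ${\mathbb{Z}}_p$ form a group of order $p(p-1)$ whose distinct elements agree in at most one point, giving a PC$(p,p-1)$ that meets the Johnson bound $p!/(p-2)! = p(p-1)$. The sketch below is our own illustration (not from the paper), verifying this for $p=5$:

```python
from itertools import combinations

p = 5  # any prime works here
# The sharply 2-transitive group {x -> a*x + b : a != 0} on Z_p,
# with each map written in single-line notation.
code = [tuple((a * x + b) % p for x in range(p))
        for a in range(1, p) for b in range(p)]

def hamming(s, t):
    return sum(1 for u, v in zip(s, t) if u != v)

size = len(set(code))
min_dist = min(hamming(s, t) for s, t in combinations(code, 2))
```

Maps with the same slope $a$ disagree everywhere (distance $p$), while maps with different slopes agree in exactly one point (distance $p-1$), so `min_dist` equals $p-1$.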
Wilson, [@WilsonMOLS], found a construction strong enough to prove $N(n) \ge n^{1/17}$ for sufficiently large $n$. Subsequently, Beth [@Beth] tightened some number theory in the argument to lift the exponent to $1/14.8$. In terms of permutation codes, then, one has $M(n,n-1) \ge n^{1+1/14.8}$ for sufficiently large $n$. Our main result in this note gives a small improvement to the exponent. \[main\] $M(n,n-1) \ge n^{1+1/12.533}$ for sufficiently large $n$. The proof is essentially constructive, although it requires, as do [@Beth; @WilsonMOLS], the selection of a ‘small’ integer avoiding several arithmetic progressions. This is guaranteed by the Buchstab sieve; see [@Ivt]. Apart from this number theory, our construction method generalizes a standard design-theoretic construction for MOLS to permutation codes possessing a small amount of additional structure. Some setup for our methodology is given in the next two sections, and the proof of Theorem \[main\] is given in Section \[proof\] as a consequence of the somewhat stronger Theorem \[idem-bound\]. We conclude with a discussion of some possible next directions for this work. Idempotent permutation codes and latin squares ============================================== Let $[n]:=\{1,2,\dots,n\}$. Recall that a *fixed point* of a permutation $\pi:[n]\rightarrow [n]$ is an element $i \in [n]$ such that $\pi(i)=i$. In single-line notation, this says symbol $i$ is in position $i$. Of course, for the identity permutation $\iota$, every element is a fixed point. Let us say that a permutation code is *idempotent* if each of its words has exactly one fixed point. As some justification for the definition, recall that a latin square $L$ of order $n$ is idempotent if the $(i,i)$-entry of $L$ equals $i$ for each $i \in [n]$. So, a maximum PC$(n,n)$ is idempotent if and only if the ‘corresponding’ latin square is idempotent. 
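This correspondence is easy to verify computationally. In the sketch below (our own toy example, not from the paper), the rows of an idempotent latin square of order $3$ pairwise disagree in every position, so they form a maximum PC$(3,3)$, and each word has exactly one fixed point:

```python
from itertools import combinations

# An idempotent latin square of order 3, written row by row.
L = [(1, 3, 2),
     (3, 2, 1),
     (2, 1, 3)]
n = len(L)

# Latin property: every row and every column is a permutation of 1..n.
rows_ok = all(sorted(row) == list(range(1, n + 1)) for row in L)
cols_ok = all(sorted(row[j] for row in L) == list(range(1, n + 1)) for j in range(n))

# Distinct rows of a latin square never agree in any position: distance n.
min_dist = min(sum(1 for u, v in zip(s, t) if u != v) for s, t in combinations(L, 2))

# Idempotency of the square translates into exactly one fixed point per word.
fixed_counts = [sum(1 for i, v in enumerate(row, start=1) if v == i) for row in L]
```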
We are particularly interested in idempotent PC$(n,n-1)$ in which every symbol is a fixed point of the same number, say $r$, of words; these we call $r$-*regular* and denote by $r$-IPC$(n,n-1)$. Permutation codes with extra ‘distributional’ properties have been investigated before. For example, ‘$k$-uniform’ permutation arrays are introduced in [@DV], while ‘$r$-balanced’ and ‘$r$-separable’ permutation arrays are considered in [@DFKW]. However, our definition is seemingly new, or at least not obviously related to these other conditions. If there exists an $r$-IPC$(n,n-1)$, say $\Delta$, then $\Delta \cup \{\iota\}$ is also a PC$(n,n-1)$. Consequently, $M(n,n-1) \ge rn+1$. Since $M(n,n-1) \le n(n-1)$, it follows that $r \le n-2$. On the other hand, if $\Gamma$ is a PC$(n,n-1)$ containing $\iota$, then the words of $\Gamma$ at distance exactly $n-1$ from $\iota$ form an IPC$(n,n-1)$. Concerning the $r$-regular condition, whether $\iota \in \Gamma$ or not, we may find an $r$-IPC$(n,n-1)$ with $$\label{r-formula} r=\max_{\sigma \in \Gamma} \min_{i \in [n]} |\{\tau \in \Gamma \setminus \{\sigma\}: \tau(i)=\sigma(i)\}|.$$ In more detail, if $\sigma$ achieves the maximum in (\[r-formula\]), then for each $i=1,\dots,n$ we choose exactly $r$ elements $\tau \in \Gamma$ which agree with $\sigma$ in position $i$. After relabelling each occurrence of $\sigma(i)$ to $i$, we have the desired $r$-idempotent PC$(n,n-1)$. A question in its own right is whether there exists an $r$-IPC$(n,n-1)$ for $r = \lfloor \frac{1}{n}(M(n,n-1)-1) \rfloor$. However, relatively little is known about maximum permutation code sizes. Indeed, the exact value of $M(n,n-1)$ is known only for $n=q$, a prime power, ($M(q,q-1) = q(q-1)$, [@FD]) and for $n=6$ ($M(6,5)=18$, [@K6]). 
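Formula (\[r-formula\]) can be evaluated directly on any known code. As a sanity check of our own (not from the paper), applying it to the PC$(5,4)$ formed by the maps $x \mapsto ax+b$ on ${\mathbb{Z}}_5$ returns $r = 3 = n-2$, since every word agrees with exactly three others in each position:

```python
p = 5
# The maps x -> a*x + b (a != 0) on Z_p form a PC(p, p-1) of size p(p-1).
code = [tuple((a * x + b) % p for x in range(p))
        for a in range(1, p) for b in range(p)]

def r_from_formula(gamma):
    # r = max over sigma of min over i of #{tau != sigma : tau(i) = sigma(i)}
    n = len(gamma[0])
    return max(min(sum(1 for tau in gamma if tau != sigma and tau[i] == sigma[i])
                   for i in range(n))
               for sigma in gamma)

r = r_from_formula(code)
```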
\[ipc6\] A 2-IPC$(6,5)$: $$\begin{array}{cc} 1\ 3\ 5\ 6\ 2\ 4\ & 1\ 4\ 6\ 2\ 3\ 5 \\ 6\ 2\ 4\ 5\ 3\ 1\ & 5\ 2\ 1\ 3\ 6\ 4\\ 5\ 6\ 3\ 1\ 4\ 2\ & 4\ 5\ 3\ 2\ 6\ 1\\ 2\ 5\ 6\ 4\ 1\ 3\ & 3\ 6\ 1\ 4\ 2\ 5\\ 3\ 1\ 4\ 6\ 5\ 2\ & 6\ 4\ 2\ 1\ 5\ 3\\ 4\ 3\ 2\ 5\ 1\ 6\ & 2\ 1\ 5\ 3\ 4\ 6\\ \end{array}$$ A 3-IPC$(10,9)$ (symbol ‘0’ is used for ‘10’): $$\begin{array}{ccc} \ 1\ 8\ 6\ 2\ 9\ 5\ 4\ 0\ 3\ 7& \ 1\ 5\ 0\ 9\ 7\ 4\ 3\ 6\ 2\ 8& \ 1\ 3\ 8\ 0\ 6\ 9\ 5\ 2\ 7\ 4\\ \ 8\ 2\ 1\ 9\ 6\ 7\ 0\ 4\ 5\ 3& \ 3\ 2\ 5\ 6\ 9\ 8\ 1\ 7\ 0\ 4& \ 9\ 2\ 8\ 7\ 4\ 1\ 3\ 0\ 6\ 5\\ \ 5\ 9\ 3\ 2\ 7\ 8\ 0\ 1\ 4\ 6& \ 8\ 7\ 3\ 5\ 2\ 0\ 4\ 9\ 6\ 1& \ 9\ 4\ 3\ 1\ 6\ 2\ 8\ 5\ 0\ 7\\ \ 9\ 7\ 1\ 4\ 0\ 3\ 5\ 6\ 8\ 2& \ 6\ 3\ 2\ 4\ 1\ 0\ 9\ 7\ 5\ 8& \ 0\ 8\ 9\ 4\ 3\ 1\ 2\ 5\ 7\ 6\\ \ 0\ 3\ 6\ 7\ 5\ 2\ 1\ 4\ 8\ 9& \ 3\ 8\ 4\ 0\ 5\ 7\ 9\ 1\ 6\ 2& \ 7\ 9\ 2\ 8\ 5\ 4\ 6\ 3\ 0\ 1\\ \ 9\ 8\ 7\ 5\ 1\ 6\ 0\ 3\ 2\ 4& \ 8\ 9\ 4\ 1\ 0\ 6\ 2\ 7\ 3\ 5& \ 5\ 1\ 0\ 3\ 9\ 6\ 8\ 4\ 7\ 2\\ \ 3\ 0\ 9\ 8\ 1\ 2\ 7\ 6\ 4\ 5& \ 5\ 4\ 6\ 0\ 8\ 1\ 7\ 9\ 2\ 3& \ 8\ 6\ 0\ 2\ 4\ 3\ 7\ 5\ 1\ 9\\ \ 2\ 6\ 7\ 1\ 9\ 0\ 5\ 8\ 4\ 3& \ 4\ 1\ 9\ 0\ 2\ 3\ 6\ 8\ 5\ 7& \ 7\ 0\ 1\ 3\ 4\ 5\ 9\ 8\ 2\ 6\\ \ 0\ 7\ 4\ 6\ 1\ 5\ 8\ 2\ 9\ 3& \ 2\ 1\ 6\ 8\ 0\ 7\ 3\ 5\ 9\ 4& \ 4\ 5\ 7\ 3\ 6\ 8\ 2\ 0\ 9\ 1\\ \ 6\ 5\ 1\ 7\ 2\ 9\ 8\ 3\ 4\ 0& \ 4\ 6\ 5\ 8\ 7\ 1\ 9\ 2\ 3\ 0& \ 2\ 4\ 8\ 9\ 3\ 5\ 6\ 7\ 1\ 0\\ \end{array}$$ The connection with MOLS is important in the sequel. The following result is essentially the construction from MOLS to PC$(n,n-1)$ in [@CKL], except that here we track the idempotent condition. \[idemp-MOLS\] If there exist $r$ mutually orthogonal idempotent latin squares of order $n$, then there exists an $r$-IPC$(n,n-1)$. Suppose $L_1,\dots,L_r$ are the hypothesized latin squares, each on the set of symbols $[n]$. For each $i \in [n]$ and $j \in [r]$, define the permutation $\pi_{i,j} \in \mathcal{S}_n$ by $\pi_{i,j}(x) = y$ if and only if the $(x,y)$-entry of $L_j$ is $i$. Let $\Gamma = \{\pi_{i,j} : i \in [n], j \in [r]\}$. 
Consider distinct permutations $\pi_{i,j}$ and $\pi_{h,k}$ in $\Gamma$. They have no agreements if $j=k$, by the latin property, and they have exactly one agreement if $j \neq k$ by the orthogonality of squares $L_j$ and $L_k$. So $\Gamma$ is a PC$(n,n-1)$. Moreover, since each $L_j$ is an idempotent latin square, the permutation $\pi_{i,j}$ has only the fixed point $i$. It follows that $\Gamma$ is in fact an $r$-IPC$(n,n-1)$. We remark that the maximum number of mutually orthogonal idempotent latin squares of order $n$ is either $N(n)$ or $N(n)-1$, since we may permute rows and columns of one square so that its main diagonal is a constant, and then permute symbols of the other squares. That is, our idempotent condition is negligible as far as the rate of growth of $r$ in terms of $n$ is concerned. \[prime-powers\] For prime powers $q$, there exists a $(q-2)$-IPC$(q,q-1)$. More generally, MacNeish’s lower bound on MOLS leads to a similar bound for idempotent permutation codes. \[product\] If $n=q_1\dots q_t$ is factored as a product of powers of distinct primes, then there exists a $(q-2)$-IPC$(n,n-1)$ where $q=\min\{q_i : i=1,\dots,t\}$. Finally, it is worth briefly considering a ‘reverse’ of the MOLS construction for PC$(n,n-1)$. Suppose a PC$(n,n-1)$, say $\Gamma$, is partitioned into PC$(n,n)$, say $\Gamma_1,\dots,\Gamma_r$. We define $r$ partial latin squares as linear combinations of permutation matrices for $\Gamma_i$ with symbolic coefficients. Since two distinct words of the code have at most one agreement, overlaying any two of the $r$ partial latin squares leads to distinct ordered pairs of symbols over the common non-blank cells. We merely offer an example, but remark that this viewpoint is helpful for our recursive construction to follow. The $2$-IPC$(6,5)$ of Example \[ipc6\] admits a partition into three disjoint PC$(6,6)$; this can be seen by reading the twelve words four at a time, down the first column of the displayed array and then the second. 
Each of these sub-arrays is converted into a partial latin square of order six, where a permutation $\pi$ having fixed point $i$ fills all cells of the form $(x,\pi(x))$ in its square with symbol $i$. $$\begin{array}{|c|c|c|c|c|c|} \hline 1&4&&&3&2\\ \hline &2&1&&4&3\\ \hline &&3&2&1&4\\ \hline 3&&&4&2&1\\ \hline 4&1&2&3&&\\ \hline 2&3&4&1&&\\ \hline \end{array} \hspace{1cm} \begin{array}{|c|c|c|c|c|c|} \hline 1&&5&6&2&\\ \hline 5&2&6&1&&\\ \hline 2&6&&5&&1\\ \hline &1&2&&6&5\\ \hline 6&&1&&5&2\\ \hline &5&&2&1&6\\ \hline \end{array} \hspace{1cm} \begin{array}{|c|c|c|c|c|c|} \hline &6&4&3&&5\\ \hline 6&&&5&3&4\\ \hline 4&5&3&&6&\\ \hline 5&3&6&4&&\\ \hline &4&&6&5&3\\ \hline 3&&5&&4&6\\ \hline \end{array}$$ A recursive construction using block designs ============================================ In this section, we observe that idempotent permutation codes can be combined to produce larger such codes. Since the resultant code must preserve at most one agreement between different words, we are naturally led to consider block designs to align the ingredient codes. A *pairwise balanced design* PBD$(n,K)$ is a pair $(V,\mathcal{B})$, where $V$ is a set of size $n$, $\mathcal{B}$ is a family of subsets of $V$ with sizes in $K$, and such that every pair of distinct elements of $V$ belongs to exactly one set in $\mathcal{B}$. The sets in $\mathcal{B}$ are called *blocks*. Thinking of a PBD as a special type of hypergraph, we refer to the elements of $V$ as *vertices* or *points*. The following construction is inspired by a similar one for MOLS; see [@CD Theorem 3.1]. \[pbd-construction\] If there exists a PBD$(n,K)$ and, for every $k \in K$, there exists an $r$-IPC$(k,k-1)$, then there exists an $r$-IPC$(n,n-1)$. Let $([n],\mathcal{B})$ be a PBD$(n,K)$. For each block $B \in \mathcal{B}$, take a copy of an $r$-IPC$(|B|,|B|-1)$ on the symbols of $B$. 
Its permutations are, say, $\pi_{i,j}^B:B \rightarrow B$, for $i \in B$, $j =1,\dots,r$, where $\pi_{i,j}^B(i)=i$ is the unique fixed point for $\pi_{i,j}^B$. Let $i \in [n]$ and put $\mathcal{B}_i:=\{B \in \mathcal{B}: i \in B\}$, the set of blocks containing symbol $i$. Since $\mathcal{B}$ is the block set of a PBD, we have that $\mathcal{B}_i$ is a partition of $[n] \setminus \{i\}$. For $j=1,\dots,r$, define a permutation $\pi_{i,j}:[n] \rightarrow [n]$ by $$\pi_{i,j}(x) = \begin{cases} i & \text{if~} x =i, \\ \pi^B_{i,j}(x) & \text{if~} x\neq i, \text{~where~} x \in B \in \mathcal{B}_i. \end{cases}$$ We claim that $\{\pi_{i,j}:i \in [n], j \in [r]\}$ is an $r$-IPC$(n,n-1)$ such that, for each $i$, the subset $\{\pi_{i,j}: j \in [r]\}$ has precisely the fixed point $i$. First, each $\pi_{i,j}$ is a permutation. That $([n],\mathcal{B})$ is a PBD ensures that $\pi_{i,j}$ is well-defined and bijective. In particular, if $a \in [n]$, $a \neq i$, then $\{i,a\}$ is contained in a unique block, say $A \in \mathcal{B}$, so $a$ lies in exactly one block of $\mathcal{B}_i$ and the value $\pi_{i,j}(a)=\pi^A_{i,j}(a)$ is unambiguous. It remains to check the minimum distance. Consider $\pi_{i,j}$ and $\pi_{i,j'}$ for $j \neq j'$. They agree on $i$, but suppose for contradiction that they agree also on $h \neq i$. Let $B$ be the unique block of $\mathcal{B}_i$ containing $h$. By construction, we must have $\pi^B_{i,j}$ agreeing with $\pi^B_{i,j'}$ at $h$, and this is a contradiction to the minimum distance being $|B|-1$ within this component code. Now, consider $\pi_{i,j}$ and $\pi_{i',j'}$ for $i \neq i'$. Suppose they agree at distinct positions $h$ and $l$. Say $\pi_{i,j}(h)=\pi_{i',j'}(h)=a$ and $\pi_{i,j}(l) = \pi_{i',j'}(l) = b$. Then $\{i,i',h,a\}$ and $\{i,i',l,b\}$ are in the same block. It follows that $h,l$ are in the same block and we get a contradiction again. We illustrate the construction of Theorem \[pbd-construction\]. Figure \[idpc10\] shows a PBD$(10,\{3,4\})$ at left. 
The design is built from an affine plane of order three (on vertex set $\{1,\dots,9\}$) with one parallel class extended (to vertex $0$). In the center, template idempotent permutation codes of lengths 3 and 4 are shown. The code of length three is simply an idempotent latin square, but note that the code of length four achieves minimum distance three. On the right is shown the resultant $1$-IPC$(10,9)$, an unimpressive code for illustration only. It can be checked that two rows agree in at most one position (which if it exists is found within the unique block containing the chosen row labels). $$\begin{array}{cccc} \{0,1,2,3\} & \{1,4,7\} & \{1,5,9\} & \{1,6,8\} \\ \{0,4,5,6\} & \{2,5,8\} & \{2,6,7\} & \{2,4,9\} \\ \{0,7,8,9\} & \{3,6,9\} & \{3,4,8\} & \{3,5,7\} \\ \end{array}$$ (TikZ source drawing the ten points and twelve blocks of this PBD omitted.) $$\begin{array}{ccc} a&c&b\\ c&b&a\\ b&a&c\\ \end{array} \hspace{1cm} \begin{array}{cccc} a&c&d&b\\ c&b&d&a\\ d&a&c&b\\ b&c&a&d\\ \end{array}$$ $$\begin{array}{c} 0\ 2\ 3\ 1\ 5\ 6\ 4\ 8\ 9\ 7\\ 2\ 1\ 3\ 0\ 
7\ 9\ 8\ 4\ 6\ 5\\ 3\ 0\ 2\ 1\ 9\ 8\ 7\ 6\ 5\ 4\\ 1\ 2\ 0\ 3\ 8\ 7\ 9\ 5\ 4\ 6\\ 5\ 7\ 9\ 8\ 4\ 6\ 0\ 1\ 3\ 2\\ 6\ 9\ 8\ 7\ 0\ 5\ 4\ 3\ 2\ 1\\ 4\ 8\ 7\ 9\ 5\ 0\ 6\ 2\ 1\ 3\\ 8\ 4\ 6\ 5\ 1\ 3\ 2\ 7\ 9\ 0\\ 9\ 6\ 5\ 4\ 3\ 2\ 1\ 0\ 8\ 7\\ 7\ 5\ 4\ 6\ 2\ 1\ 3\ 8\ 0\ 9\\ \end{array}$$ We conclude this section with an existence result for pairwise balanced designs. This is implicit in early constructions of mutually orthogonal latin squares, [@CES; @WilsonMOLS]. We extend the use to permutation codes later. \[pbd-existence\] Suppose $m,t,u$ are integers satisfying $N(t) \ge m-1$ and $0 \le u \le t$. Then there exists a PBD$(mt+u,\{m,m+1,t,u\})$. We use some design theory terminology from [@WilsonMOLS]. First, the existence of $m-1$ MOLS of order $t$ gives a transversal design TD$(m+1,t)$. Delete all but $u$ points from one group of this TD, and turn groups into blocks. We claim that the resulting system is a PBD of the required parameters. The total number of points is $mt+u$ and block sizes are in $\{m,m+1,t,u\}$, as needed. And, every pair of points, whether in the same or different groups of the original TD, appear together in a unique block. An improved exponent {#proof} ==================== We apply the partition and extension technique from [@BMS] to construct an idempotent permutation code. Consider AGL$(1,p)$, the affine general linear group of permutations on ${\mathbb{Z}}_p$, i.e. $$\text{AGL}(1,p)=\{x \mapsto ax+b : a,b\in {\mathbb{Z}}_p, a\neq 0\}.$$ First, we pick the largest $k<n$ such that $2k\le\lceil p/k\rceil+1$. Note that $2x\le\lceil p/x\rceil+1$ holds if $x=\sqrt{p/2}$ and fails if $x=\sqrt{p/2}+1$. Therefore $k=\lfloor \sqrt{p/2}\rfloor$ or $k=\lceil \sqrt{p/2}\rceil$. Put $s=\lceil p/k\rceil$. Write $n=p$ as a sum $n=n_1+n_2+\dots +n_k$ where $n_i\ge s$ for all $i$. Take an ordered partition ${\mathcal{P}}=(P_1,\dots,P_k)$ of ${\mathbb{Z}}_p$ into $k$ blocks of consecutive elements with $|P_i|=n_i$ for all $i$. 
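The parameters $k$ and $s$ here are simple to compute. The sketch below (our own code; the function name is an invention for illustration) reproduces the selection rule and checks the two facts stated around it: $k$ is within rounding of $\sqrt{p/2}$, and $k \le s-k+1$, so enough cosets are available.

```python
def choose_k_s(p):
    # largest k < p with 2k <= ceil(p/k) + 1, and s = ceil(p/k)
    ceil_div = lambda a, b: -(-a // b)  # integer ceiling division
    k = max(x for x in range(1, p) if 2 * x <= ceil_div(p, x) + 1)
    return k, ceil_div(p, k)

# For p = 101: floor(sqrt(101/2)) = 7, and indeed k = 7, s = ceil(101/7) = 15.
k, s = choose_k_s(101)
```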
Let us say that a permutation $\pi : {\mathbb{Z}}_p \rightarrow {\mathbb{Z}}_p$ is [*covered by*]{} $P_i$ for some $i=1,\dots,k$ if there exists $x\in P_i$ such that $\pi(x)\in P_i$. \[lemma-prime-plusone\] Let $p$ be a prime with $p\ge 5$. For any integer $j$ with $k \le j\le s$, and any $i=1,\dots,k$, all permutations in $B_j:=\{ jx+b : b\in{\mathbb{Z}}_p\}$ are covered by $P_i$. Take a permutation $\pi(x)=jx+b$ from $B_j$. Suppose that $P_i=\{m+1,m+2,\dots,m+n_i\}$. Then $\pi(P_i)=\{\pi(m+1),\pi(m+2),\dots,\pi(m+n_i)\}$ is an arithmetic sequence over ${\mathbb{Z}}_p$ with common difference $j$. We show that $\pi(P_i)\cap P_i\ne\emptyset$. Suppose to the contrary that $\pi(P_i)\cap P_i =\emptyset$. Since $j\le s$, $P_i$ does not have any element in $\{\pi(m')+1,\pi(m')+2,\dots, \pi(m')+n_i-1\}$ for any $m',m+1\le m'<m+n_i$. Let $S$ be the set $\{t \in {\mathbb{Z}}: j(m+1)+b\le t\le j(m+n_i)+b\}$ and let $S_p=\{t \pmod{p} : t\in S\}$. Since $\pi(P_i)\cap P_i =\emptyset$, $P_i$ does not have any element in set $S_p$. If $|S|\ge p$ then $|S_p|=p$ and $S_p={\mathbb{Z}}_p$ contradicting $S_p\cap P_i =\emptyset$. Thus $|S|=j(n_i-1)+1<p$ and $|S_p|=|S|$. Now, in view of $$jn_i\ge ks=k\cdot \lceil p/k\rceil\ge p,$$ we have $|{\mathbb{Z}}_p \setminus S_p|\le j-1$. This contradicts the fact that $|P_i|=n_i>j-1$. \[prime-plusone\] For primes $p\ge 5$, there exists an $r$-IPC$(p+1,p)$ where $r=k-1 = O(\sqrt{p})$ as above. Following [@BMS], create the distance-$p$ partition system $\Pi=({\mathcal{M}},{\mathcal{P}},{\mathcal{Q}})$ on ${\mathbb{Z}}_p$ such that: 1. ${\mathcal{M}}=(M_1,M_2,\dots,M_{k+1})$, where $M_1,M_2,\dots,M_k$ are selected as cosets $B_k,B_{k+1},\dots,B_{2k-1}$ from Lemma \[lemma-prime-plusone\], and $M_{k+1}=B_1$ is an additional coset, noting that $p \ge 5$ implies $k \ge 2$; 2. ${\mathcal{P}}={\mathcal{Q}}=(P_1,P_2,\dots,P_{k})$. First, we show that there are enough cosets to choose from Lemma \[lemma-prime-plusone\], i.e. that $k\le s-k+1$. 
This follows by the choice of $k$ and $s$ above. The extension $ext(\Pi)$ of $\Pi$ is a PC$(p+1,p)$ by [@BMS Theorem 1]. A $(k-1)$-IPC$(p+1,p)$ can be obtained from it as follows. Every permutation in $M_j, j=1,\dots,k$, has a unique fixed point. A fixed point of a permutation may disappear due to the extension. We remove permutations without fixed points. There are at most $n_j$ permutations removed from $ext(M_j)$. The sets $P_1,\dots,P_k$ are disjoint, and so every symbol $0,1,\dots,p-1$ is a fixed point in at least $k-1$ of the remaining permutations. By removing some permutations if necessary we can make every symbol $0,1,\dots,p-1$ a fixed point exactly $k-1$ times. Finally, we carefully choose $M_{k+1}$. Pick any $k-1$ permutations from coset $B_1$ except the identity permutation $0,1,2, \dots, p-1$. Adjoin $p$ at the end of every permutation. Then symbol $p$ will be the only fixed point and the entire permutation code is a $(k-1)$-IPC$(p+1,p)$. Next we cite an important number-theoretic result used in [@Beth] for MOLS. Let $2=p_0,p_1,\dots,p_k$ be the primes less than or equal to $y$, and let $\omega=\{a_0,a_1,\dots,a_k,b_1,\dots,b_k\}$ be a set of $2k+1$ integers. Let $B_\omega (x,y)$ denote the number of positive integers $z \le x$ which do not lie in any of the arithmetic progressions $z \equiv a_i \pmod{p_i}$, $i=0,1,\dots,k$ or $z \equiv b_j \pmod{p_j}$, $j=1,\dots,k$. Then $B_\omega(x,x^{1/4.2665})$ tends to infinity with $x$, independent of the selections $\omega$. The tools are now in place for our asymptotic lower bound on $M(n,n-1)$. \[idem-bound\] For sufficiently large $n$, there exists an $r$-IPC$(n,n-1)$ with $r \ge n^{1/12.533}$. We follow a similar strategy as in [@Beth; @WilsonMOLS], applying the Buchstab sieve. Put $\gamma=1/12.533$ and $r=\lceil n^\gamma \rceil$. Choose a prime $m$ with $2(r+2)^2 \le m \le 4(r+2)^2$. Then in view of Theorems \[product\] and \[prime-plusone\], there exist both $r$-IPC$(m,m-1)$ and $r$-IPC$(m+1,m)$. 
Use the Buchstab sieve to select an integer $t'$, $0 \le t' \le n^{4.2665 \times 2 \gamma} < n^{0.681}$, so that, with $t:=t'+\lfloor \frac{n}{m+1} \rfloor$, we have $t \not\equiv 0 \pmod{p}$ and $mt \not\equiv n \pmod{p}$ for each prime $p \le m$. Put $u=n-mt$ so that $n=mt+u$. By choice of $t$, and Theorem \[product\] again, there exist $r$-IPC$(t,t-1)$ and $r$-IPC$(u,u-1)$. In addition, we have $N(t) >m$ from MacNeish’s bound. Since $t \ge \frac{n}{m+1}$, we have $u \le t$. And, for large $n$, $$t \le n^{0.681} + n/(m+1) \le n/m + \left( n^{0.681} - \frac{n}{m(m+1)} \right) \le n/m,$$ from which it follows that $u \ge 0$. By Lemma \[pbd-existence\], there exists a PBD$(mt+u,\{m,m+1,t,u\})$. Hence, by Theorem \[pbd-construction\], there exists an $r$-IPC$(n,n-1)$. Our main result, Theorem \[main\], stating that $M(n,n-1) \ge n^{1+1/12.533}$ is now an immediate consequence of Theorem \[idem-bound\]. Discussion ========== Our exponent $1/12.533$ is only slightly better than the $1/14.8$ already known for MOLS. However, in certain cases it may be possible to construct a PBD whose block sizes are large primes or primes plus one. For example, a projective plane of order $p$ is a PBD$(p^2+p+1,\{p+1\})$. If $p'$ is another prime, say with $\sqrt{2p} < p' < p$, then, by deleting all but $p'$ points from one line of this plane we obtain a PBD$(p^2+p',\{p',p,p+1\})$. Our construction gives an $r$-IPC$(n,n-1)$ with $r$ on the order of $n^{1/4}$, and this is not in general subsumed by existing MOLS bounds nor existing permutation code constructions. A little more generally, an exponent approaching 1/4 can be achieved when $n$ has a representation $n=p_1+p_2 p_3$ for primes $p_i$ satisfying $n^{1/2-\epsilon} < p_1 < \max\{p_2,p_3\} <n^{1/2+\epsilon}$. The exponent could also be improved if a better construction for designs with large block sizes could be used in place of Lemma \[pbd-existence\]. 
Even with our family of designs from Lemma \[pbd-existence\], the hypothesis $N(t) \ge m-1$ significantly harms our exponent. Wilson’s construction for MOLS in [@WilsonMOLS] drops this strong requirement on $t$. However, a preliminary look at the construction suggests that a suitable relaxation for permutation codes PC$(n,n-1)$ likely demands a partition into codes of full distance, so that some latin square structure is maintained. This is an idea worth exploring in future work. In another effort to work around the hypothesis $N(t) \ge m-1$, we explored letting $t=s^2$ for an integer $s$ with no prime factors up to about $\sqrt{m}$. Our remainder $u=n-ms^2$ is then a quadratic in $s$ and one must avoid an extra arithmetic progression. The allowed range for $s$ is too small for the trade-off to be worthwhile. Applying equation (\[r-formula\]) to a known permutation code with $n=60$, we can report the existence of a $6$-IPC$(60,59)$. By comparison, it is only known that $N(60) \ge 5$; see [@Abel]. As a next step in researching $r$-IPC$(n,n-1)$, it would be interesting to accumulate some additional good examples, primarily in the case when neither $n$ nor $n-1$ is a prime power. Finding a maximum idempotent code (with the assumption on $r$-regularity dropped) is closely related to finding a smallest maximal set of permutations at distance $n$ in a PC$(n,n-1)$. Some preliminary experiments on known codes suggest that it is sometimes possible to have one permutation at distance exactly $n-1$ to all others. As one example, the current lower bound in the case $n=54$ is $408$ (see [@BMS]), yet there is an idempotent code of size $407$. Finally, we remark that using designs to join permutation codes may be a fruitful approach not only for smaller Hamming distances, but also perhaps for other measures of discrepancy, such as the Lee metric. [99]{} R.J.R. Abel, Existence of five MOLS of orders 18 and 60. *J. Combin. Des.* 23 (2015), 135–139. S. Bereg, L. Morales and I.H. 
Sudborough, Extending permutation arrays: improving MOLS bounds. *Des. Codes Cryptogr.* 83 (2017), 661–683. T. Beth, Eine Bemerkung zur Abschätzung der Anzahl orthogonaler lateinischer Quadrate mittels Siebverfahren. *Abh. Math. Sem. Univ. Hamburg* 53 (1983), 284–288. J. Bierbrauer and K. Metsch, A bound on permutation codes. *Electron. J. Combin.* 20 (2013), P6, 12 pp. M. Bogaerts and P. Dukes, Semidefinite programming for permutation codes. *Discrete Math.* 326 (2014), 34–43. S. Chowla, P. Erdős, and E.G. Strauss, On the maximal number of pairwise orthogonal latin squares of a given order. [*Canad. J. Math.*]{} 12 (1960), 204–208. W. Chu, C.J. Colbourn, and P.J. Dukes, Permutation codes for powerline communication. [*Des. Codes Cryptography*]{} 32 (2004), 51–64. C.J. Colbourn and J.H. Dinitz, Making the MOLS table. *Computational and constructive design theory*, 67–134, Math. Appl., 368, Kluwer Acad. Publ., Dordrecht, 1996. C.J. Colbourn, T. Kløve, and A.C.H. Ling, Permutation arrays for powerline communication and mutually orthogonal Latin squares. [*IEEE Trans. Inform. Theory*]{} 50 (2004), 1289–1291. M. Deza and S.A. Vanstone, Bounds for permutation arrays. [*J. Statist. Plann. Inference*]{} 2 (1978), 197–209. C. Ding, F.-W. Fu, T. Kløve and V.K.-W. Wei, Constructions of permutation arrays. *IEEE Trans. Inform. Theory* 48 (2002), 977–980. P. Frankl and M. Deza, On the maximum number of permutations with given maximal or minimal distance. [*J. Combin. Theory Ser. A*]{} 22 (1977), 352–360. S. Huczynska, Powerline communication and the 36 officers problem. *Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci.* 364 (2006), 3199–3214. H. Iwaniec, J. van de Lune and H.J.J. te Riele, The limits of Buchstab’s iteration sieve. *Nederl. Akad. Wetensch. Indag. Math.* 42 (1980), 409–417. I. Janiszczak, W. Lempken, P.R.J. Östergård and R. Staszewski, Permutation codes invariant under isometries, *Des. Codes Cryptogr.* 75 (2015), 497–507. I. Janiszczak and R. 
Staszewski, An improved bound for permutation arrays of length 10. Preprint 4, Institute for Experimental Mathematics, University Duisburg-Essen, 2008. I. Janiszczak and R. Staszewski, Isometry invariant permutation codes and mutually orthogonal Latin squares, <https://arxiv.org/abs/1812.06886>. T. Kløve, Classification of permutation codes of length 6 and minimum distance 5. [*Proc. Int. Symp. Information Theory Appl.*]{}, 2000, 465–468. D.H. Smith and R. Montemanni, A new table of permutation codes. *Des. Codes Cryptogr.* 63 (2012), 241–253. R.M. Wilson, Concerning the number of mutually orthogonal Latin squares. *Discrete Math.* 9 (1974), 181–198.
--- address: - | Mathematical Institute\ University of Oslo\ P. O. Box 1053\ N–0316 Oslo, Norway - 'Université Paris 7, UFR de Mathématiques et Institut de Mathématiques de Jussieu, Case Postale 7012, 2, Place Jussieu, F–75251 Paris Cedex 05' - | Mathematical Institute\ University of Bergen\ Allég 55\ N–5007 Bergen, Norway author: - Geir Ellingsrud - Joseph Le Potier - 'Stein A. Str[ø]{}mme' title: ' Some Donaldson invariants of $\CC\PP^2$' --- *In memory of the victims of the Kobe earthquake* Introduction {#introduction .unnumbered} ============ For an integer $n\ge2$, let $q_{4n-3}$ be the coefficient of the Donaldson polynomial of degree $4n-3$ of $P=\CC\PP^2$. An interpretation of $q_{4n-3}$ in an algebro-geometric context is the following. Let $M_n$ denote the Gieseker-Maruyama moduli space of semistable coherent sheaves on $P$ with rank 2 and Chern classes $c_1=0$ and $c_2=n$. For such a sheaf $F$, the Grauert-Mülich theorem implies that the restriction of $F$ to a general line $L\sub P$ splits as $F_L \iso \OO_L\dsum\OO_L$, and that the exceptional lines form a curve $J(F)$ of degree $n$ in the dual projective plane $P\v$. The association $F\mapsto J(F)$ is induced from a morphism of algebraic varieties, called the Barth map, $f_n\: M_n \to P_n$. Here $P_n=\PP^{n(n+3)/2}$ is the linear system parameterizing all curves of degree $n$ in $P\v$. Let $H\in\Pic(P_n)$ be the hyperplane class and let $\alpha = f_n^*H$. The interpretation of the Donaldson invariant is: $$q_{4n-3} = \int_{M_n} \alpha^{4n-3}.$$ Thus $q_{4n-3}$ is the degree of $f_n$ times the degree of its image. From [@Bart-2] it follows that $f_n$ is generically finite for all $n\ge2$, that $f_2$ is an isomorphism and $q_5=1$, and that $f_3$ is of degree 3 and $q_9=3$. Le Potier [@LePo] proved that $f_4$ is birational onto its image and that $q_{13}=54$. The value of $q_{13}$ has also been computed independently by Tikhomirov and Tyurin [@Tyur-1 prop. 4.1] and by Li and Qin [@Li-Qin thm. 6.29]. 
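Since $q_{4n-3}$ is the degree of $f_n$ times the degree of its image, it is worth noting that the dimension count never obstructs generic finiteness: using $\dim M_n = 4n-3$ (recalled later in this note) and $\dim P_n = n(n+3)/2$, the source never has larger dimension than the target for $n\ge2$. A small illustrative check (the helper names are ours; the formulas are those quoted in the text):

```python
def dim_moduli(n):
    # Gieseker-Maruyama moduli space M_n of semistable rank-2 sheaves
    # with c_1 = 0 and c_2 = n on the plane has dimension 4n - 3.
    return 4 * n - 3

def dim_curves(n):
    # P_n, the linear system of degree-n curves in the dual plane,
    # is a projective space of dimension n(n + 3)/2.
    return n * (n + 3) // 2

# A generically finite Barth map f_n: M_n -> P_n requires
# dim M_n <= dim P_n; the difference is (n - 2)(n - 3)/2 >= 0 for n >= 2,
# with equality of dimensions exactly at n = 2 and n = 3.
for n in range(2, 50):
    assert dim_moduli(n) <= dim_curves(n)
```

The equality cases $n=2,3$ are precisely those where $f_n$ maps between spaces of the same dimension, consistent with $f_2$ being an isomorphism and $f_3$ a degree-3 cover.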
The main result in the present note is the following \[thm1\] $q_{17}=2540$ and $q_{21}=233208$. The proof consists of two parts. The first part, treated in this note, is to express $q_{4n-3}$ in terms of certain classes on the Hilbert scheme of length-$(n+1)$ subschemes of $P$. This is theorems \[thm2\] and \[thm3\] below. The second part is to evaluate these classes numerically. This has been carried out in [@Elli-Stro-5 prop. 4.2]. Let $H_{n+1}=\Hilb^{n+1}_P$ denote the Hilbert scheme parameterizing closed subschemes of $P$ of length $n+1$. There is a universal closed subscheme $\Z\sub H_{n+1}\x P$. Consider the vector bundles $$\E = R^1{p_1}_* (\I_{\Z}\*{p_2}^* \OO_{P}(-1))\text{ and } \G = R^1{p_1}_* \I_{\Z}$$ on $H_{n+1}$ of ranks $n+1$ and $n$, respectively, and the linebundle $$\L = \det(\G) \* \det(\E)\i.$$ \[thm2\] Let the notation be as above. Then $$q_{17} = \int_{H_6} s_{12}(\E\*\L) \quad\text{and}\quad q_{21} = \dfrac25 \int_{H_7} s_{14}(\E\*\L).$$ This result was obtained both by Tikhomirov and Tyurin [@Tyur-Tikh], using the method of “geometric approximation procedure” and by Le Potier [@LePo-3], using “coherent systems”. We present in this note what we believe is a considerably simplified proof, which is strongly hinted at on the last few pages of [@Tyur-Tikh]. The formula for $q_{17}$ is a special case of the following formula: \[thm3\] For $2\le n\le 5$, we have $$q_{4n-3} = \dfrac1{2^{5-n}}\int_{H_{n+1}} c_1(\L)^{5-n} s_{3n-3}(\E\*\L).$$ With this it is also easy to recompute $q_5$, $q_9$, and $q_{13}$ using similar techniques as in [@Elli-Stro-5]. We let $h$, $h\v$, and $H$ be the hyperplane classes in $P$, $P\v$, and $P_n$, respectively. In general, if $\omega$ is a divisor class, we denote by $\OO(\omega)$ the corresponding linebundle and its natural pullbacks. This work is heavily inspired by conversations with A. Tyurin, and we thank him for generously sharing his ideas. 
We would also like to express our gratitude towards the Taniguchi Foundation. Hulsbergen sheaves ================== Barth [@Bart-2] used the term Hulsbergen bundle to denote a stable rank-2 vector bundle $F$ on $P$ with $c_1(F)=0$ and $H^0(P,F(1))\ne0$. We modify this definition a little as follows: A *Hulsbergen sheaf* is a coherent sheaf $F$ on $P$ which admits a non-split short exact sequence (*Hulsbergen sequence*) $$\label{Hulsbergen} 0 \to \OO_P \to F(1) \to \I_Z(2) \to 0,$$ where $Z\sub P$ is a closed subscheme of finite length (equal to $c_2(F)+1$). Note that a Hulsbergen sheaf is not necessarily semistable or locally free. However: \[GM\] Let $F$ be a Hulsbergen sheaf with $c_2(F)=n>0$. Then the set $J(F)\sub P\v$ of exceptional lines for $F$ is a curve of degree $n$, defined by the determinant of the bundle map $$m\: H^1(P,F(-2))\*\OO_{P\v}(-1) \to H^1(P,F(-1))\*\OO_{P\v}$$ induced by multiplication with a variable linear form. First note from the Hulsbergen sequence that the two cohomology groups have dimension $n$. It is easy to see that any Hulsbergen sheaf is slope semistable, in the sense that it does not contain any rank-1 subsheaf with positive first Chern class. Thus by [@Bart-1 thm. 1], $F_L \iso \OO_L \dsum \OO_L$ for a general line $L$. On the other hand, it is clear that a line $L$ is exceptional if and only if $m$ is not an isomorphism at the point $[L]\in P\v$. It is straightforward to construct a moduli space for Hulsbergen sequences. For any length-$(n+1)$ subscheme $Z\sub P$, the isomorphism classes of extensions are parameterized by $\PP(\Ext^1_P(\I_Z(2),\OO_P)\v)$. By Serre duality, $$\Ext^1_P(\I_Z(2),\OO_P)\v \iso H^1(P,\I_Z(-1)).$$ For varying $Z$, these vector spaces glue together to form the vector bundle $\E$ over $H_{n+1}$, hence $D_n=\PP(\E)$ is the natural parameter space for Hulsbergen sequences. Let $\OO(\tau)$ be the associated tautological quotient linebundle. 
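The ranks $n+1$ and $n$ of $\E$ and $\G$ quoted above follow from Riemann–Roch on $\PP^2$: for the twists used here, $h^0(\I_Z(d)) = h^2(\I_Z(d)) = 0$ (there are no sections of $\OO(-1)$, no nonzero constants vanishing on a nonempty $Z$, and $h^2(\OO(d))=0$ for $d\ge-2$), so $h^1 = \operatorname{length}(Z) - \chi(\OO(d))$ with $\chi(\OO_{\PP^2}(d)) = (d+1)(d+2)/2$. A quick sanity check (function names are ours):

```python
def chi(d):
    # Euler characteristic of the line bundle O(d) on the projective plane.
    return (d + 1) * (d + 2) // 2

def h1_ideal(length, d):
    # h^1(P^2, I_Z(d)) for Z of the given length, in the range of twists
    # where h^0(I_Z(d)) = h^2(I_Z(d)) = 0, so h^1 = length - chi(O(d)).
    return length - chi(d)

for n in range(2, 20):
    Z = n + 1                         # the subschemes parametrized by H_{n+1}
    assert h1_ideal(Z, -1) == n + 1   # rank of E = R^1 p_* (I_Z(-1))
    assert h1_ideal(Z, 0) == n        # rank of G = R^1 p_* I_Z
```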
For later use, note that for any divisor class $\omega$ on $H_{n+1}$, we have $\pi_*(\tau+\pi^*\omega)^{k+n} = s_k(\E(\omega))$, where $\pi\: D_n \to H_{n+1}$ is the natural map [@IT]. The tautological quotient $\pi^*\E \to \OO(\tau)$ gives rise to a short exact sequence on $D_n\x P$: $$0 \to \OO(\tau) \to \F(h) \to (\pi\x1)^*\I_{\Z}(2h) \to 0$$ which defines a complete family $\F$ of Hulsbergen sheaves. As we noted earlier, a Hulsbergen sheaf is not necessarily semistable. On the other hand, the *generic* Hulsbergen sheaf is stable if $n\ge 2$. It follows that the family $\F$ induces a *rational* map $g_n\: D_n \to M_n$. By [lemma \[GM\]]{} above, there is also a Barth map $b_n\: D_n \to P_n$, defined everywhere, and by construction, the following diagram commutes: $$\begin{CD} D_n @>b_n>> P_n \\ @V{g_n}VV @VV{||}V \\ M_n @>>f_n> P_n \end{CD}$$ Put $\lambda=c_1(\pi^*\L)$. Then $b_n^*H = \tau+\lambda$. Let $L\sub P$ be a line. Twist the universal Hulsbergen sequence by $-2h$ and $-3h$ respectively. Multiplication by an equation for $L$ gives rise to the vertical arrows in a commutative diagram with exact rows on $D_n\x P$: $$\begin{CD} 0 @>>> \OO(\tau-3h) @>>> \F(-2h) @>>> (\pi\x1)^*\I_Z(-h) @>>>0 \\ @. @VVV @VVV @VVV @.\\ 0 @>>> \OO(\tau-2h) @>>> \F(-h) @>>> (\pi\x1)^*\I_Z @>>>0 \end{CD}$$ Pushing this down via the first projection, we get the following exact diagram on $D_n$: $$\begin{CD} 0@>>>R^1{p_1}_* \F(-2h) @>>> \pi^*\E @>>> \OO(\tau)@>>> 0\\ @. @Vm_LVV @VVV @VVV\\ 0@>>> R^1{p_1}_* \F(-h) @>\iso>> \pi^*\G @>>> 0 \end{CD}$$ Here the last map of the top row is nothing but the tautological quotient map on $\PP(\E)$. Let $A(L)\sub D_n$ be the set of Hulsbergen sequences where $L$ is an exceptional line for the middle term. Clearly, $A(L)$ is the degeneration locus of the left vertical map $m_L$ above. 
Hence the divisor class of $A(L)$ is $$\begin{aligned} [A(L)]&= c_1(R^1{p_1}_* \F(-h)) - c_1(R^1{p_1}_* \F(-2h)) \\ &= \pi^*c_1(\G) - \pi^*c_1(\E) + \tau \\ &= \tau+\lambda. \end{aligned}$$ On the other hand, $A(L)$ is the inverse image of a hyperplane in $P_n$ under $b_n$, so its divisor class is $b_n^*H$. The case $n\le 5$ ================= For $2\le n\le 5$, the rational map $g_n$ is dominating, and the general fiber is isomorphic to $\PP^{n-5}$. For $n\ge 5$, the map $g_n$ is generically injective with image of codimension $n-5$. In particular, $g_5$ is birational. Everything follows from the observation that the fiber over a point $[F]\in M_n$ in the image of $g_n$ is the projectivization of $H^0(P,F(1))$, and that for general such $F$, this vector space has dimension $h^0(F(1))=\max(1,6-n)$, which is easily seen from . The assertion about the codimension follows from a dimension count: $\dim(M_n)=4n-3$ and $\dim(D_n)=3n+2$. The first half of [theorem \[thm2\]]{} now follows: First of all, since $g_5$ is birational, the two morphisms $f_5$ and $b_5$ have the same image and the same degree. Therefore $q_{17}$ can be computed as $$q_{17} = \int_{D_5} H^{17} =\int_{D_5} (\tau+\lambda)^{17} = \int_{H_6} s_{12}(\E\*\L).$$ For [theorem \[thm3\]]{}, let $L_1,\dots,L_{5-n}$ be general lines in $P$, and let $B_n\sub D_n$ be the locus of Hulsbergen sequences where the closed subscheme $Z$ meets all these $5-n$ lines. The cohomology class of $B_n$ in $H^*(D_n)$ is $\lambda^{5-n}$. \[cover\] Let $2\le n\le5$. The general nonempty fiber of $g_n$ meets $B_n$ in $2^{5-n}$ points, hence the rational map $g_n|_{B_n}\: B_n \to M_n$ is dominating and generically finite, of degree $2^{5-n}$. The general nonempty fiber is of the form $\PP(H^0(P,F(1))\v)$. It suffices to show that the restriction of $\L$ to this fiber has degree 2 (if $n<5$). For this, it suffices to consider a linear pencil in the fiber. 
So let $\sigma_0$ and $\sigma_1$ be two independent global sections of $F(1)$, and consider the pencil they span. Now $\sigma_0\wedge \sigma_1 \in H^0(P,\wedge^2F)=H^0(P,\OO_P(2))$ is the equation of a conic $C\sub P$ which contains the zero scheme $V(t_0\sigma_0 + t_1\sigma_1)$ of each section in the pencil, $(t_0,t_1)\in\PP^1$. Since $C$ meets a general line in two points, it follows that there are exactly two members of the pencil whose zero set meets a general line. To complete the proof of [theorem \[thm3\]]{}, by [lemma \[cover\]]{} we now have for $2\le n\le5$: $$\begin{aligned} 2^{5-n}\,q_{4n-3} &= 2^{5-n}\int_{M_n} H^{4n-3} \\ &=\int_{B_n} (\tau+\lambda)^{4n-3} \\ &=\int_{D_n} \lambda^{5-n}\,(\tau+\lambda)^{4n-3} \\ &=\int_{H_{n+1}}c_1(\L)^{5-n} \,s_{3n-3}(\E\*\L). \end{aligned}$$ This completes the proof of the theorems for $n\le 5$. The case $n=6$ ============== For $n\ge6$ the techniques above will say something about the restriction of the Barth map to the Brill-Noether locus $B\sub M_n$ of semistable sheaves whose first twist admit a global section. For general $n$ this locus is too small to carry enough information about $M_n$, but in the special case $n=6$, it is actually a divisor, whose divisor class $\beta=[B]$ we can determine. Now $\Pic(M_n)\*\QQ$ has rank 2, generated by $\alpha$ and $\delta=[\Delta]$, the class of the locus $\Delta\sub M_n$ corresponding to non-locally free sheaves [@LePo-1]. In $\Pic(M_6)\*\QQ$, the following relation holds: $$\beta = \frac52 \,\alpha - \frac12\,\delta.$$ Let $\xi\:X\to M_6$ be a morphism induced by a flat family $\F$ of semistable sheaves on $P$, parameterized by some variety $X$. For certain divisor classes $a$ and $d$ on $X$, the second and third Chern classes of $\F$ can be written in the form $$c_2(\F) = a\,h+6\,h^2, \quad c_3(\F) = d\,h^2$$ modulo higher codimension classes on $X$. 
The Grothendieck Riemann-Roch theorem for the projection $p\: X\x P \to X$ easily gives (for example using [@schubert]) that $$-c_1(p_!\F(h)) = \frac52\, a- \frac12\, d.$$ The locus $\xi\i B\sub X$ is set-theoretically the support of $R^1 p_*\F(h)$. It is not hard to see that one can take the family $X$ in such a way that the 0-th Fitting ideal of $R^1 p_*\F(h)$ is actually reduced. Therefore the left hand side of the equation above is $\xi^*\beta$. On the other hand, $a=\xi^*\alpha$ by the usual definition of the $\mu$ map of Donaldson [@Dona-1], and $d=\xi^*\delta$. Since the family $\F/X$ was arbitrary, the required relation is actually universal, and so holds also in $\Pic(M_6)\*\QQ$. (It suffices to take a family with the properties that (i) $\xi^*\:\Pic_\QQ(M_6) \to \Pic_\QQ(X)$ is injective, (ii) the Fitting ideal above is reduced, and (iii) the general non-locally free sheaf in the family has colength 1 in its double dual.) With this, we complete the proof of the second part of [theorem \[thm2\]]{} in the following way. The general fiber of $f_6$ restricted to $\Delta$ has dimension 1, so $f_6(\Delta)$ has dimension 19, see e.g. [@Stro-1]. Therefore we get $$\begin{aligned} \int_{H_7} s_{14}(\E\*\L) &= \int_{D_6}(\lambda+\tau)^{20} \\ &= \int_{M_6} \beta\, \alpha^{20} \\ &= \int_{M_6} (\frac52\, \alpha - \frac12\,\delta)\,\alpha^{20} \\ &= \frac52\int_{M_6} \alpha^{21} -\frac12\int_{\Delta}\alpha^{20} = \frac52\, q_{21}. \end{aligned}$$ A geometric interpretation ========================== A *Darboux configuration* in $P\v$ consists of a pair $(\Pi,C)$ where $\Pi\sub P\v$ is the union of $n+1$ distinct lines, no three concurrent, and $C\sub P\v$ is a curve of degree $n$ passing through all the nodes of $\Pi$. If we let $Z\sub P$ consist of the $n+1$ points dual to the components of $\Pi$, we have by Hulsbergen’s theorem [@Bart-2 thm. 4] a natural 1-1 correspondence between Hulsbergen sequences and Darboux configurations $(\Pi,C)$, by letting $C=J(F)$. 
Therefore $D_n$ can be used as a compactification of the set of Darboux configurations, and the intersection number $$\int_{D_n} \lambda^i (\tau+\lambda)^{3n+2-i} = \int_{H_{n+1}} c_1(\L)^i s_{2n+2-i}(\E\*\L)$$ can be interpreted as the number of Darboux configurations $(\Pi,C)$ where $\Pi$ passes through $i$ given points and $C$ passes through $3n+2-i$ given points. It is not known whether the Barth map has degree 1 for $n\ge5$. A related question is the following: Let $(\Pi,C)$ be a general Darboux configuration ($n\ge 5$). Is the inscribed polygon $\Pi$ uniquely determined by $C$? [10]{} W. Barth. Moduli of vector bundles on the projective plane. , 42:63–91, 1977. W. Barth. Some properties of stable rank-2 vector bundles on [$\PP_n$]{}. , 226:125–150, 1977. S. K. Donaldson. Polynomial invariants for smooth 4-manifolds. , 29:257–315, 1990. G. Ellingsrud and S. A. Str[ø]{}mme. Bott’s formula and enumerative geometry. To appear in Journal of the AMS. W. Fulton. . Number 2 in Ergebnisse der Mathematik und ihrer Grenz-Gebiete. Springer-Verlag, Berlin-Heidelberg-New York, 1984. S. Katz and S. A. Str[ø]{}mme. , a [Maple]{} package for intersection theory and enumerative geometry. Software and documentation available from the authors or by anonymous ftp from ftp.math.okstate.edu or linus.mi.uib.no, 1992. J. Le Potier. Systèmes cohérent et polynômes de [Donaldson]{}. Preprint. J. Le Potier. Sur le groupe [Picard]{} de l’espace de modules des fibrés stables sur [$\PP_2$]{}. , 13:141–155, 1981. J. Le Potier. Fibrés stables sur le plan projectif et quartiques de [Lüroth]{}. Preprint, Oct 1989. W.-P. Li and Z. Qin. Lower-degree [Donaldson]{} polynomial invariants of rational surfaces. , 2:413–442, 1993. S. A. Str[ø]{}mme. Ample divisors on fine moduli spaces on the projective plane. , 187:405–523, 1984. A. Tikhomirov and A. N. Tyurin. Application of geometric approximation procedure to computing the [Donaldson’s]{} polynomials for [$\CC\PP^2$]{}. , 12:1–71, 1994. A. N. 
Tyurin. The moduli spaces of vector bundles on threefolds, surfaces and curves [I]{}. Erlangen preprint, 1990.
--- abstract: 'We report the first detection of X-ray emission from a brown dwarf in the Pleiades, the M7-type Roque 14, obtained using the EPIC detectors on [[*XMM-Newton*]{}]{}. This is the first X-ray detection of a brown dwarf intermediate in age between $\approx 12$ and $\approx 320$ Myr. The emission appears persistent, although we cannot rule out flare-like behaviour with a decay time-scale $> 4$ ks. The time-averaged X-ray luminosity of $\approx 3.3 \pm 0.8 \times 10^{27}$ , and its ratios with the bolometric ([$L_{\rm X}/L_{\rm bol}$]{} $\approx 10^{-3.05}$) and H$\alpha$ (/ $\approx 4.0$) luminosities suggest magnetic activity similar to that of active main-sequence M dwarfs, such as the M7 old-disc star VB 8, though the suspected binary nature of Roque 14 merits further attention. No emission is detected from four proposed later-type Pleiades brown dwarfs, with upper limits to in the range 2.1–3.8 $\times 10^{27}$ and to [$\log (L_{\rm X}/L_{\rm bol})$]{} in the range $-3.10$ to $-2.91$.' author: - | K.R. Briggs$^{1}$[^1] and J.P. Pye$^2$[^2]\ $^1$Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland\ $^2$Department of Physics and Astronomy, University of Leicester, Leicester LE1 7RH, UK\ date: Accepted 2004 June 8th title: 'X-ray emission from a brown dwarf in the Pleiades' --- X-rays: stars – stars: low-mass and brown dwarfs, activity, coronae – stars: individual: Roque 14 – open clusters and associations: individual: the Pleiades Introduction {#intro} ============ Magnetic activity, generating chromospheric H$\alpha$ and coronal X-ray and radio emissions, is a ubiquitous feature of main-sequence (MS) late-type stars (spectral types $\approx$F5–M7). Studies of these diagnostic emissions have found consistency in the character of magnetic activity throughout this range, despite the expected change in dynamo mechanism demanded by the absence of a radiative interior in stars of spectral types $\approx$ M3 and later. 
However, recent studies suggest the magnetic activity of ‘ultracool’ objects, with spectral types $\approx$ M8 and later, is quite different. The observed persistent (‘non-flaring’) levels of X-ray ([$L_{\rm X}/L_{\rm bol}$]{}) and H$\alpha$ () emission from MS late-type stars increase with decreasing Rossby number, $Ro = P / \tau_{\rm C}$, where $P$ is the rotation period and $\tau_{\rm C}$ is the convective turnover time, until reaching respective ‘saturation’ plateaus of [$L_{\rm X}/L_{\rm bol}$]{} $\sim 10^{-3}$ and $\sim 10^{-3.5}$ (e.g. Delfosse [et al. ]{}1998 for M dwarfs). The fraction of field stars showing persistent chromospheric emission levels close to saturation increases toward later spectral types, peaking around M6–7 (Gizis [et al. ]{}2000). However, around spectral type M9 persistent H$\alpha$ emission levels begin to plummet dramatically (Gizis [et al. ]{}2000). Among L-type dwarfs no rotation–activity connection is found: continues to fall steeply toward later spectral types despite most L-dwarfs being fast rotators (Mohanty & Basri 2003). A proposed explanation is that magnetic fields diffuse with increasing efficiency in the increasingly neutral atmospheres of cooler dwarfs (Meyer & Meyer-Hofmeister 1999; Mohanty [et al. ]{}2002), overwhelming the importance of a rotation-driven dynamo efficiency in chromospheric heating. Magnetic activity is still observed, however, in the forms of H$\alpha$ flaring on some L dwarfs (e.g. Liebert [et al. ]{}2003), and flaring and apparently-persistent radio emission from several ultracool dwarfs (Berger [et al. ]{}2001; Berger 2002). Detections of X-ray emission from ultracool field dwarfs are scarce. The M7 old-disc star VB 8 shows persistent and flaring X-ray emission levels of [$L_{\rm X}/L_{\rm bol}$]{}$\approx 10^{-4.1}$–$10^{-2.8}$, similar to those of active M dwarfs (Fleming [et al. ]{}1993; Schmitt, Fleming & Giampapa 1995; Fleming, Giampapa & Garza 2003). 
However, the persistent levels of X-ray emission from the M8 old-disc star VB 10 and the M9 $\sim$320 Myr-old brown dwarf LP 944-20 are at least an order of magnitude lower – [$L_{\rm X}/L_{\rm bol}$]{}$\approx 10^{-5.0}$ (Fleming [et al. ]{}2003) and [$L_{\rm X}/L_{\rm bol}$]{}$< 10^{-5.7}$ (Rutledge [et al. ]{}2000), respectively – despite the latter being a fast rotator ([$v \sin i$]{}$= 30$ ). Yet transient strong magnetic activity is evidenced by the flaring X-ray emission, with peak [$L_{\rm X}/L_{\rm bol}$]{}$\approx 10^{-3.7}$–$10^{-1.0}$, that has been observed on both VB 10 (Fleming, Giampapa & Schmitt 2000) and LP 944-20, and on the M9 field dwarfs LHS 2065 (Schmitt & Liefke 2002) and 1RXS J115928.5-524717 (Hambaryan [et al. ]{}2004). Interestingly the temperature of the dominant X-ray-emitting plasma appears to be low, $T \approx 10^{6.5}$ K, whether it is measured in the persistent (VB 10) or flaring (LP 944-20 and 1RXS J115928.5-524717) emission state. While such low temperatures are typical for the persistent coronae of inactive stars – M dwarfs (Giampapa [et al. ]{}1996) and the Sun (Orlando, Peres & Reale 2000) alike – the temperatures of flaring plasma are significantly higher, with $T > 10^{7.0}$ K (Güdel [et al. ]{}2004; Reale, Peres & Orlando 2001). As very young substellar objects ($t \la 5$ Myr) may have photospheres as warm as MS M5–6 dwarfs, an individual brown dwarf may experience a transition from ‘stellar-like’ to ‘ultracool’ magnetic activity as it cools. Brown dwarfs in star-forming regions are routinely observed to emit X-rays at high levels, [$L_{\rm X}/L_{\rm bol}$]{}$\ga 10^{-3.5}$, arising from plasma at $T \ga 10^{7.0}$ K, similar to those of dMe stars and higher-mass young stars (e.g. Neuhäuser & Comerón 1998; Imanishi, Tsujimoto & Koyama 2001; Mokler & Stelzer 2002; Preibisch & Zinnecker 2002; Feigelson [et al. ]{}2002). 
The $\approx 12$ Myr-old, low-mass brown dwarf TWA 5B, of ultracool spectral type M8.5–9, exhibits apparently persistent X-ray emission at [$L_{\rm X}/L_{\rm bol}$]{} $\approx 10^{-3.4}$, like younger brown dwarfs, but with $T \approx 10^{6.5}$ K (Tsuboi [et al. ]{}2003), like field ultracool dwarfs and LP 944-20. At around spectral type M8–9, between the ages of $\sim 10$ and $\sim 300$ Myr for brown dwarfs, persistent X-ray emission levels appear to fall by a factor $\sim 100$ and coronal temperatures appear constrained to $T \la 10^{6.5}$ K, even during flares and at high emission levels. The population of brown dwarfs in the Pleiades cluster, $135$ pc away (Pan, Shao & Kulkarni 2004), of age $\approx 125$ Myr (Stauffer, Schultz & Fitzpatrick 1998) and spanning spectral types M6.5–early-L (Mart[í]{}n [et al. ]{}1998), is therefore crucial to understanding the evolution of substellar magnetic activity and the conflict of a rotationally-driven magnetic dynamo against atmospheric neutrality. [[*ROSAT*]{}]{} observations of brown dwarfs in the Pleiades detected no X-ray emission at the level of [$L_{\rm X}/L_{\rm bol}$]{}$\ga 10^{-2.5}$ (Neuhäuser [et al. ]{}1999, and see Section 4.4). We present a deeper X-ray (0.3–4.5 keV) observation of five candidate brown dwarfs in the Pleiades (described in Section 2), using the more sensitive [[*XMM-Newton*]{}]{} observatory (Section 3). We detect X-ray emission from the M7 Roque 14, investigate its temporal and spectral nature, and place upper limits on the X-ray emission levels of the undetected brown dwarfs: Teide 1, Roque 9, Roque 11 and Roque 12 (Section 4). We discuss the relative X-ray and H$\alpha$ emissions of these objects in the context of magnetic activity on ultracool dwarfs and its evolution (Section 5) and close by summarising our findings (Section 6). 
Sample of brown dwarfs in the Pleiades ====================================== Five objects in our [[*XMM-Newton*]{}]{} field have been proposed as candidate brown dwarf members of the Pleiades on the basis of optical and near-infrared (NIR) photometry in the [*IZJHK*]{} bands (Zapatero Osorio [et al. ]{}1997a; Pinfield [et al. ]{}2000; 2003). Observed and derived physical parameters are listed in Table \[tbl\_sample\]. All except Roque 9 have published spectral types consistent with those expected of $\approx$ 125 Myr-old brown dwarfs. Further evidence for or against membership of the Pleiades and hence substellar status is summarised below: [**Teide 1**]{} is the on-axis target of the [[*XMM-Newton*]{}]{} observation. Its status as a brown dwarf member of the Pleiades is well-established on the basis of its proper motion (Rebolo, Zapatero Osorio & Mart[í]{}n 1995), the detection of Li in its spectrum and its radial velocity (Rebolo [et al. ]{}1996). It shows H$\alpha$ emission with variable equivalent width, $EW_{\rm H\alpha} = 3.5$–8.6 Å (Rebolo [et al. ]{}1995; 1996). It has been suspected to have a lower-mass companion from its position on a $JK$ colour-magnitude diagram (Pinfield [et al. ]{}2003). [**Roque 11**]{} has an anomalous position on a $JHK$ colour-colour diagram (Pinfield [et al. ]{}2003) but its radial velocity of $-3.5 \pm 7$ is consistent with those of other Pleiades members, and its Na [i]{} absorption is lower than that of field stars of the same spectral type, indicating lower gravity, and hence youth (Zapatero Osorio [et al. ]{}1997b). [**Roque 12**]{} has a radial velocity consistent with Pleiades membership (Festin 1998), low Na [i]{} absorption and is a strong H$\alpha$ emitter with $EW_{\rm H\alpha} = 19.7$ Å (Martín [et al. ]{}1998). [**Roque 14**]{} has low Na [i]{} absorption and strong H$\alpha$ emission with $EW_{\rm H\alpha} = 17.0$ Å (Zapatero Osorio [et al. ]{}1997b). 
It has been suspected to be a near-equal mass binary on the basis of its position in an $IK$ colour-magnitude diagram (Pinfield [et al. ]{}2003) but no comparably bright companion with separation $> 0.1$ arcsec has been found (Mart[í]{}n [et al. ]{}2000). [**Roque 9**]{} has no published spectral type but we estimate a spectral type of M8 as its NIR photometry is similar to those of Teide 1 and Roque 11. We shall use the term “brown dwarf” to refer to all five objects, but note that the evidence in support of substellar status varies across the sample, and is weak for Roque 9. Observation and data analysis {#sec_obs} ============================= The [[*XMM-Newton*]{}]{} observation, 0094780101, was centred on Teide 1 (J2000: $\alpha=03^{\rm h} 47^{\rm m} 18\fs0$, $\delta=+24\degr 22\arcmin 31\arcsec$) and conducted on 2000 September 1 in orbit 134. The Thick optical blocking filter was placed in front of all three EPIC detectors: the [pn]{} (Str[ü]{}der [et al. ]{}2001) was exposed for 40.6 ks and each MOS (M1 and M2; Turner [et al. ]{}2001) for 33.0 ks, beginning 7.5 ks later. The data were processed using the [science analysis system]{} ([sas]{}) v5.4.1[^3] and each EPIC eventlist was further filtered to exclude flagged ‘bad’ events, and uncalibrated event patterns ($> 12$ for MOS; $> 4$ for [pn]{}). Several short intervals affected by high background were also excluded. We considered only events with PI in the range 300–4500 (nominally energies of 0.3–4.5 keV, and PI is reported in units of eV or keV from this point) to reduce background contamination. We extracted an image with $4 \times 4$ arcsec square pixels from each detector and performed source detection in each image. The procedure, using tasks available in the [sas]{}, is described in detail in Briggs & Pye, in preparation. 
In brief, potential sources were located using a wavelet detection package ([ewavelet]{}), and masked out of the photon image while it was adaptively smoothed (using [asmooth]{}) to generate a model of the background. The spatial variation of vignetting and quantum efficiency was modelled in an exposure map (produced using [eexpmap]{}). The images, background and exposure maps[^4] of the three EPIC instruments were also mosaicked to optimize the sensitivity of the analysis and [ewavelet]{} was used to locate potential sources in the EPIC image. In each image, at the position of each [ewavelet]{} source, a maximum likelihood fitting of the position-dependent instrument point spread function (PSF) was applied ([emldetect]{}) to parametrize each source and those with [*ML*]{} $> 6$ were retained. [emldetect]{} additionally reconstructed a smooth model of the input image by adding PSFs to the background map with the location, normalization and extent of the parametrized sources. We repeated the source detection procedure on 20 images generated from this model image using Poisson statistics, and 20 images generated from the source-free background map, and found in both cases a mean number of false detections per image of 5 with [ *ML*]{} $> 6$ and 2 with [*ML*]{} $> 7$. Approximately 130 X-ray sources were found, of which 34 were associated with proposed members of the Pleiades. Matching of the [emldetect]{} source positions with NIR positions of these members (Pinfield [et al. ]{}2000) revealed a boresight shift in the EPIC image of $\alpha_{\rm X} \cos\delta_{\rm X} - \alpha_{\rm NIR} \cos\delta_{\rm NIR} = 3.7$ arcsec and $\delta_{\rm X} - \delta_{\rm NIR} = -0.8$ arcsec, which is accounted for in the remainder of this work. The corrected positions of sources associated with the Pleiades were all within 6 arcsec of the NIR positions. 
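The boresight shift quoted above is simply the mean offset over the matched X-ray/NIR position pairs, which is then subtracted from every X-ray position. Schematically (a sketch, not the pipeline code; the function name and decimal-degree input convention are ours):

```python
import math

def boresight_shift(matches):
    """Mean offsets (d_RA * cos(dec), d_dec), in arcsec, over matched
    (X-ray, near-IR) position pairs given in decimal degrees."""
    da = dd = 0.0
    for (ra_x, dec_x), (ra_n, dec_n) in matches:
        da += (ra_x - ra_n) * math.cos(math.radians(dec_x)) * 3600.0
        dd += (dec_x - dec_n) * 3600.0
    n = len(matches)
    return da / n, dd / n

# applying the correction means subtracting this mean shift from each
# X-ray position before re-matching against the NIR catalogue
```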
The probabilities of a positional coincidence within 6 arcsec of a brown dwarf with a spurious detection or a source unrelated to the Pleiades are $\approx 10^{-3}$ and $\approx 0.02$, respectively. ML-fitting was performed in each image at the NIR position of each brown dwarf. The successful detection of Roque 14 is described further in Section 4.1, and investigations of the temporal and spectral nature of its emission are pursued in Sections 4.2 and 4.3, respectively. In no other case was an [*ML*]{} value $> 1$ found. Upper limits to the X-ray luminosity of these objects are calculated in Section 4.4. Results {#sec_res} ======= An X-ray detection of Roque 14 ------------------------------ Roque 14 was returned as an X-ray source by [ewavelet]{} in the M2, [pn]{}, and mosaicked-EPIC images. [emldetect]{} determined [*ML*]{} values of 2.1, 5.9 and 7.0 (1.6, 3.0 and $3.3 \sigma$) in these respective images, confirming the detection at [*ML*]{} $> 6$ ($3.0 \sigma$) only in the mosaicked-EPIC image[^5] (Fig. \[fig\_img\_x\]a). The best-fitting X-ray source position in the EPIC image was 0.8 arcsec offset from the NIR position. We estimated the [1-$\sigma$]{} uncertainty in the relative positions as the sum in quadrature of the statistical fitting uncertainty, 0.73 arcsec, an uncertainty in the EPIC absolute pointing of 1.0 arcsec, and an uncertainty in the optical position of 0.5 arcsec; thus the positional offset of the source is $\approx 0.6 \sigma$. The total number of EPIC source counts determined by [emldetect]{} was $24.7 \pm 5.9$. We calculate the X-ray luminosity after consideration of the source spectrum in Section 4.3. Further analysis of the X-ray emission from Roque 14 has been conducted both including and excluding periods affected by high background, and no significant improvement has been found by excluding those periods. Therefore, to maximize the number of events available, we report the results of including the short periods of high background. 
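The $\approx 0.6\sigma$ positional consistency of Roque 14 quoted above follows directly from the quadrature sum of the three uncertainties; as a check (variable names are ours, the values are those given in the text):

```python
import math

# 1-sigma positional uncertainties (arcsec): ML-fit statistics,
# EPIC absolute pointing, and the optical/NIR position
stat, pointing, optical = 0.73, 1.0, 0.5
sigma_pos = math.sqrt(stat**2 + pointing**2 + optical**2)

offset = 0.8                       # X-ray minus NIR offset (arcsec)
significance = offset / sigma_pos  # ~0.6 sigma, as quoted in the text
```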
We extracted source events from a circle centred on the best-fitting source position with radius 10.75 arcsec, which encloses only 54 per cent of the source counts (Ghizzardi 2002) but optimises the signal:noise ratio, and background events from the surrounding annulus, 8 times larger than the source extraction region. M1 and M2 eventlists were combined as these instruments have near-identical sensitivity, spectral response and exposure length. A total of 32 counts in all EPIC instruments (23 from [pn]{} and 9 from the two [MOS]{} cameras) was extracted from the source region and 140 (99 [pn]{}; 41 [MOS]{}) counts from the background region. Thus we expect 17.5 (12.4 [pn]{}; 5.1 [MOS]{}) background counts in the source region. As the background region will contain $\approx 46$ per cent of the $\approx 25$ source counts, we expect a contribution of source counts ($\approx 1.4$) similar to the [1-$\sigma$]{} uncertainty ($\approx 1.2$) in the estimated mean background counts. The probability of 32 counts or more appearing in the source region as a result of a Poissonian fluctuation in a mean background of 17.5 counts is 0.0011 ($3.2 \sigma$), in support of the [emldetect]{} detection. The events extracted from the source region were used to examine the temporal and spectral behaviour of Roque 14. Events were extracted from a larger source-free region on the same CCD as Roque 14 to make a better model of the time and energy distribution of the background. Transient or persistent emission? --------------------------------- Examination of the arrival times of the photons in the source eventlist suggests a concentration, particularly of lower-energy (0.3–1.4 keV) events, in the $\approx 10$ ks interval 10–20 ks after the start of the [pn]{} exposure (Fig. \[fig\_r14\_ev\]). We investigated the statistical significance of this possible variability using the Kolmogorov–Smirnov (K–S) statistic. 
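Before turning to the time-series analysis, the background scaling and the quoted detection significance can be reproduced with elementary Poisson arithmetic (a sketch using only the standard library; the counts are those reported above):

```python
import math

def poisson_tail(k, lam):
    """P(N >= k) for N ~ Poisson(lam), via the complementary CDF."""
    term, cdf = math.exp(-lam), 0.0
    for n in range(k):
        cdf += term
        term *= lam / (n + 1)
    return 1.0 - cdf

bkg_counts, area_ratio = 140, 8          # annulus counts; annulus/source area
expected_bkg = bkg_counts / area_ratio   # = 17.5 counts in the source region

# chance of >= 32 counts from background alone; the text quotes
# 0.0011 (3.2 sigma)
p_chance = poisson_tail(32, expected_bkg)
```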
The K–S statistic is the maximum vertical difference between the cumulative distribution functions (CDFs) of an observed dataset and a test distribution (or two observed datasets). As our test model must account for the significant number of background events in the source extraction region, which is subject to Poissonian fluctuations about its expected number, we have performed Monte Carlo simulations to assess the confidence level of deviation from the test model as a function of K–S value. Our test-model CDF was constructed from the arrival times of events in the background extraction region and arrival times of a number of source events (such that the ratio of source:background counts was as estimated from the observed data) chosen at uniform intervals in the CDF of a constant count-rate source. Sets of arrival times for [pn]{} and [MOS]{} events were constructed separately and then merged before calculating the test-model CDF for all EPIC data (Fig. \[fig\_cdf\_mod\]a). Each trial dataset in our simulations was composed of the observed numbers of [pn]{} (23) and [MOS]{} (9) events. The number of background events was drawn from a Poisson distribution with mean value as estimated from the data. Arrival times for these background events were drawn at random from the observed distribution for background events. Arrival times for the remaining events were drawn at random from the modelled distribution for source events. [MOS]{} and [pn]{} datasets were again constructed separately before being combined. For each trial dataset the K–S statistic was calculated. The CDF of these K–S values gives the confidence level of deviation from the test model as a function of measured K–S value (Fig. \[fig\_nhp\_ks\]a). The simulated distribution of K–S values for a test model of a constant source count-rate, based on 10000 trials, is practically indistinguishable from the standard calculated distribution. 
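The Monte Carlo calibration described above can be sketched as follows. This is an illustrative reconstruction, not the original analysis: the exposure span `T`, the random seed, and the flat background time distribution are assumptions (the real test used the observed background arrival times and built pn and MOS sets separately). With a constant-rate model and a flat background the trial datasets reduce to uniform samples, which is consistent with the observation above that the simulated distribution matches the standard one:

```python
import math, random

random.seed(1)

T = 25.0        # assumed time span of the merged exposure (ks)
N_TOT = 32      # counts extracted from the source region
B_MEAN = 17.5   # expected background counts in that region
KS_OBS = 0.110  # observed K-S value quoted in the text

def poisson_draw(mu):
    # Knuth's multiplication method (adequate for small mu)
    L, k, p = math.exp(-mu), 0, 1.0
    while p > L:
        p *= random.random()
        k += 1
    return k - 1

def ks_stat(times, cdf):
    """Maximum distance between the empirical CDF and a model CDF."""
    xs = sorted(times)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        c = cdf(x)
        d = max(d, (i + 1) / n - c, c - i / n)
    return d

model_cdf = lambda t: t / T  # constant count-rate test model

ks_trials = []
for _ in range(5000):
    # Poisson-fluctuating background level, remainder assigned to source
    n_bkg = min(poisson_draw(B_MEAN), N_TOT)
    times = ([random.uniform(0, T) for _ in range(n_bkg)] +         # background
             [random.uniform(0, T) for _ in range(N_TOT - n_bkg)])  # source
    ks_trials.append(ks_stat(times, model_cdf))

ks_trials.sort()
crit90 = ks_trials[int(0.9 * len(ks_trials))]
conf = sum(k < KS_OBS for k in ks_trials) / len(ks_trials)
print(f"90 per cent critical K-S value: {crit90:.3f}")
print(f"confidence of deviation for D = 0.110: {conf:.0%}")  # ~20 per cent, as quoted
```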
The K–S value of the observed dataset for this test model is 0.110 (Fig. \[fig\_cdf\_mod\]c), which corresponds to just 20 per cent confidence that the source count rate is not constant. Following the same procedure we have additionally tested models in which the source emission is purely transient (Fig. \[fig\_cdf\_mod\]a), with the profile of fast (here, instantaneous) rise and exponential decay typical of flares on late-type stars, beginning 10 ks after the start of the [pn]{} observation. For short decay time-scales, when the source and background models are starkly different (Fig. \[fig\_cdf\_mod\]a), Fig. \[fig\_nhp\_ks\]a shows that the standard calculation significantly overestimates the simulated confidence level of deviation from the test model. The K–S test rules out flare models with decay time-scale of 3 ks or less at the 90 per cent confidence level, but permits those with time-scale 4 ks or longer (Fig. \[fig\_cdf\_mod\]c). We therefore cannot exclude the possibility that the observed emission from Roque 14 was due to a flare-like outburst similar to those with decay time-scales of $\sim 5$ ks seen from the M9 ultracool dwarfs LP 944-20 (Rutledge [et al. ]{}2000), LHS 2065 (Schmitt & Liefke 2002), and RXS J115928.5-524717 (Hambaryan [et al. ]{}2004).

Temperature of X-ray emitting plasma
------------------------------------

Fig. \[fig\_r14\_ev\] suggests the excess of counts was chiefly in the low-energy (0.3–1.4 keV) band, indicative of plasma with temperature $T \la 10^{7.25}$ K typical in coronae on late-type stars. We attempted to constrain the source temperature by implementing a K–S test of the PI values of the observed events, as described above for arrival times. Model distributions of PI values were tested for an isothermal optically-thin plasma source, with an array of temperatures, $T$, at intervals of 0.25 dex in $\log T$ in the range $6.0 \le \log T$ (K) $\le 8.0$ (Fig. \[fig\_cdf\_mod\]b).
Each source model was generated in XSPEC (Arnaud 1996) using an [[apec]{}]{} model (Smith [et al. ]{}2001) with solar abundances and absorbing column density typical for the Pleiades, [$N_{\rm H}$]{}$= 2.0 \times 10^{20}$ cm$^{-2}$ (Stauffer 1984; Paresce 1984), convolved with the [pn]{} response matrix and ancillary response at the source position. While it was possible to rule out models with $T < 10^{6.5}$ K at the 90 per cent confidence level, due to the small number of counts and the hard spectrum of the significant number of background events we were unable to exclude high source temperatures, even up to $10^{8.0}$ K (Fig. \[fig\_cdf\_mod\]d). For a plasma temperature of $10^{7.0}$ K the sensitivity ratio of [pn]{} to [MOS]{} is 3.6 in the 0.3–4.5 keV energy band and the [MOS]{} count-to-flux conversion factor is $6.4 \times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$ per (0.3–4.5 keV) cs$^{-1}$. Thus, the total [MOS]{}-equivalent effective exposure time for Roque 14 was 117.6 ks, its [MOS]{}-equivalent count-rate was $(2.1 \pm 0.5) \times 10^{-4}$ cs$^{-1}$, and its time-averaged X-ray luminosity was $(3.3 \pm 0.8) \times 10^{27}$ erg s$^{-1}$, with [$L_{\rm X}/L_{\rm bol}$]{}$\approx 10^{-3.05}$. The assumption of a plasma temperature of $10^{6.5}$ K would give $\approx (3.5 \pm 0.8) \times 10^{27}$ erg s$^{-1}$.

Upper limits to X-ray emission from undetected brown dwarfs
-----------------------------------------------------------

No significant X-ray emission was detected from any of the remaining four brown dwarfs[^6]. At the NIR position of each brown dwarf, we counted events detected by each EPIC instrument in the 0.3–1.4 keV energy range[^7]. A radius of 8 arcsec was used to minimize contamination by counts from sources near Roque 11 and Teide 1. The expected number of background counts was estimated from the value at the source position in the reconstructed image generated by [emldetect]{} in the 0.3–1.4 keV band.
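The time-averaged luminosity of Roque 14 quoted in Section 4.3 above can be cross-checked from the count rate, the conversion factor, and the 135 pc distance adopted later in the text; the result agrees with the quoted $(3.3 \pm 0.8) \times 10^{27}$ erg s$^{-1}$ within its uncertainty (the residual difference reflects rounding of the published inputs):

```python
import math

MOS_RATE = 2.1e-4   # MOS-equivalent count rate (counts/s)
CONV = 6.4e-12      # erg cm^-2 per 0.3-4.5 keV count (10^7 K plasma)
D_PC = 135.0        # adopted Pleiades distance (pc)

flux = MOS_RATE * CONV            # erg cm^-2 s^-1
d_cm = D_PC * 3.0857e18           # pc -> cm
l_x = 4.0 * math.pi * d_cm**2 * flux
print(f"L_X ~ {l_x:.1e} erg/s")   # ~3e27, within the quoted (3.3 +/- 0.8)e27
```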
This enabled us to account for stray counts from nearby sources, and we note that the respective backgrounds for Teide 1 and Roque 11 were thus 25 and 36 per cent higher than the values at those positions in the [asmooth]{} background map. Upper limits to the source counts at 95 per cent confidence were calculated using the Bayesian method described by Kraft, Burrows & Nousek (1991). These upper limits were corrected for the enclosed energy fraction at the source off-axis angle (0.40–0.46 for a radius of 8 arcsec; Ghizzardi 2002) and converted to [MOS]{}-equivalent count-rates by dividing by the value at the source position in the [MOS]{}-equivalent EPIC exposure map. The assumption of a $10^{7.0}$ K plasma with solar abundances and [$N_{\rm H}$]{}$=2 \times 10^{20}$ cm$^{-2}$ gives a [MOS]{} count-to-flux conversion factor of $7.2 \times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$ per (0.3–1.4 keV) cs$^{-1}$ and upper limits to the 0.3–4.5 keV X-ray luminosity in the range 1.9–3.4 $\times 10^{27}$ erg s$^{-1}$, with upper limits to [$L_{\rm X}/L_{\rm bol}$]{} in the range $10^{-3.10}$–$10^{-2.91}$ (listed in Table \[tbl\_activity\]). To optimise our sensitivity to the detection of X-ray emission from these brown dwarfs and put the tightest possible constraint on their mean X-ray luminosity, we constructed a composite image of all the available EPIC data in $1 \times 1$ arcmin squares centred on the NIR positions of the four brown dwarfs (Fig. \[fig\_img\_x\]b). The total [MOS]{}-equivalent exposure time for the composite brown dwarf was 439.3 ks. [emldetect]{} detected the source $\approx 25$ arcsec west of Roque 11, but no source at the position of the composite brown dwarf. We calculated an upper limit to the mean X-ray luminosity of the four brown dwarfs of $1.1 \times 10^{27}$ erg s$^{-1}$ using the method described above. This corresponds to a mean [$L_{\rm X}/L_{\rm bol}$]{} of $10^{-3.4}$.
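The Bayesian upper-limit construction of Kraft, Burrows & Nousek (1991) can be sketched numerically as below. This is a minimal reimplementation, not the code used in the analysis; the grid size and simple Riemann integration are arbitrary choices:

```python
import math

def kbn_upper_limit(n_obs, b, cl=0.95, s_max=60.0, steps=24000):
    """Bayesian upper limit on source counts (Kraft, Burrows & Nousek
    1991): flat prior on s >= 0, Poisson likelihood, known background
    expectation b."""
    def post(s):
        lam = s + b
        if lam == 0.0:
            return 1.0 if n_obs == 0 else 0.0
        return math.exp(-lam + n_obs * math.log(lam) - math.lgamma(n_obs + 1))

    ds = s_max / steps
    weights = [post(i * ds) for i in range(steps + 1)]
    norm = sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= cl * norm:          # smallest s enclosing cl of the posterior
            return i * ds
    return s_max

# Analytic check: with no counts and no background the posterior is
# e^(-s), so the 95 per cent limit is -ln(0.05) ~ 3.0 counts.
print(f"{kbn_upper_limit(0, 0.0):.2f}")  # ~3.0
```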
Deeper observations or a composite analysis of a larger sample are required to determine if the mean activity level of Pleiades brown dwarfs lies well below the saturated level. Considerably lower 2-$\sigma$ ($\approx 95$ per cent confidence) upper limits, to [$L_{\rm X}$]{} in the range 5.2–9.5 $\times 10^{26}$ erg s$^{-1}$, and to [$L_{\rm X}/L_{\rm bol}$]{} in the range $10^{-3.83}$–$10^{-3.43}$, have been reported for these objects (excepting Roque 9) by Neuhäuser [et al. ]{}(1999; henceforth N99) using a number of observations of the Pleiades by the [[*ROSAT*]{}]{} PSPC. All five brown dwarfs were included in 7 separate exposures longer than 1.5 ks of 4 different PSPC pointings in the [[*ROSAT*]{}]{} public archive; the total exposure time of each field ranged from 22.5–39.9 ks. They were best-observed in the “Pleiades Center” field, at off-axis angles in the range 14–24 arcmin. The longest of the three exposures of this field, 22.4 ks from a total of 35.4 ks, was not used in N99, probably due to a faulty aspect solution (Hodgkin, Jameson & Steele 1995). We have recalculated 95 per cent confidence upper limits by applying the Bayesian method as described above, using the broad-band (0.1–2.4 keV) images, background maps and exposure maps of the 6 remaining archival exposures, and merging data from exposures of the same field. We used a variety of extraction radii in each field and further combined data from different fields obtained with similar enclosed energy fraction (calculated using Boese 2000, equation 9), to find the strictest upper limit for each brown dwarf. We were also mindful of avoiding nearby X-ray sources detected by [[*XMM-Newton*]{}]{}. Optimum enclosed energy fractions ranged from 0.35–0.75. We converted PSPC count-rates to unabsorbed fluxes in the 0.1–2.4 keV band using a conversion factor of $1.0 \times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ per cs$^{-1}$, appropriate for a $10^{7.0}$ K plasma with low absorption, as used in N99. Fluxes in the 0.3–4.5 keV energy band would be 5 per cent higher.
A distance of 135 pc was used to calculate luminosities (N99 used 125 pc). The tightest upper limits to the X-ray luminosity we could apply were 1.2–2.5 $\times 10^{28}$ erg s$^{-1}$ for Roque 9, 14, 11, and 12, and 4.3 $\times 10^{28}$ erg s$^{-1}$ for Teide 1, which may be contaminated by stray counts from the X-ray-bright K5 Pleiad HII 1348, $\approx 50$ arcsec away (Briggs & Pye 2003a). Upper limits to [$L_{\rm X}/L_{\rm bol}$]{} were in the range $10^{-2.4}$–$10^{-1.8}$. Hence, we conclude that the [[*ROSAT*]{}]{} observations were not sufficient to detect Roque 14 at the level detected here by [[*XMM-Newton*]{}]{}, and the current [[*XMM-Newton*]{}]{} observation places the strictest upper limits thus far to the X-ray emission levels of Pleiades brown dwarfs of spectral type M7.5–8.

Discussion
==========

X-ray emission from ultracool dwarfs
------------------------------------

We have detected X-ray emission only from the M7-type Roque 14 at [$L_{\rm X}/L_{\rm bol}$]{} $\approx 10^{-3.05}$, and placed an upper limit to the mean emission level from the four later-type Pleiades brown dwarfs at [$L_{\rm X}/L_{\rm bol}$]{} $< 10^{-3.4}$. This is consistent both with the pattern of emission from active main-sequence M stars, which scales with bolometric luminosity, and with the idea that persistent X-ray emission levels fall around spectral type M8 as magnetic field dissipates more easily in cooler, more neutral lower atmospheres. We can make some comparison of the character of the magnetic activity of Roque 14 with that of other low-mass stars based on the relative levels of chromospheric and coronal emission. Fleming (1988) has reported a mean ratio of [$L_{\rm X}/L_{\rm H\alpha}$]{} $\approx 6.7$ for a sample of active, frequently-flaring dMe stars and Reid, Hawley & Mateo (1995) have reported a value of $\sim 3$ for M0–6 field stars outside of flares. Unfortunately, [$L_{\rm X}$]{} and [$L_{\rm H\alpha}$]{} are both typically variable and simultaneous measurements are scarce.
Observed values and calculated ratios of [$L_{\rm X}$]{} and [$L_{\rm H\alpha}$]{} for the H$\alpha$- and X-ray-detected dwarfs VB 8, VB 10, LHS 2065, and LP 944-20 are listed in Table \[tbl\_field\]. We stress that these [$L_{\rm X}/L_{\rm H\alpha}$]{} ratios are not calculated from simultaneous measurements of [$L_{\rm X}$]{} and [$L_{\rm H\alpha}$]{}, but from a simple ranking of the observed values of each. Nevertheless, a striking change in non-flaring emission appears to take place between the M7 star VB 8, in which [$L_{\rm X}$]{} $>$ [$L_{\rm H\alpha}$]{}, and the M8 and M9 dwarfs VB 10 and LP 944-20, in which [$L_{\rm X}$]{} $<$ [$L_{\rm H\alpha}$]{} (noted by Fleming [et al. ]{}2003). This suggests that the efficiency of persistent coronal heating decays more quickly in response to the increasingly neutral atmosphere of ultracool dwarfs than the efficiency of chromospheric heating. Within the same sample, excluding LP 944-20, transition region heating, inferred from C [iv]{} emission, appears to remain as efficient as chromospheric heating (Hawley & Johns-Krull 2003). The ratio of the single measurements of [$L_{\rm X}$]{} and [$L_{\rm H\alpha}$]{} from Roque 14 is 4.0. If Roque 14’s measured chromospheric and coronal emissions are interpreted as persistent this suggests that its magnetic activity is of similar character to that of VB 8 and active main-sequence M dwarfs. Although we cannot exclude that Roque 14’s observed X-ray emission is solely the result of a flare with decay time-scale $\sim 5$ ks, like the observed high-level X-ray emission from the M8 old-disc star VB 10 and the M9 dwarfs LP 944-20, LHS 2065 and RXS J115928.5-524717, the high H$\alpha$ emission level of Roque 14 supports a persistently higher level of magnetic activity than on these cooler dwarfs. Roque 14 has been suspected to be a near-equal-mass binary, but [$L_{\rm X}/L_{\rm H\alpha}$]{}$>2$ even if the X-ray emission is interpreted as coming from two stars[^8]. An observation by the [*Hubble Space Telescope*]{} NICMOS camera does not support Roque 14’s binarity, finding no companion of comparable brightness at separations $> 0.1$ arcsec, or 13.5 AU (Martín [et al. ]{}2000).
While interaction between two close binary components could be influential in the X-ray production mechanism, the emission is at a similar level to that produced by magnetic activity on single active MS M dwarfs. The observed strong H$\alpha$ emission from the M7.5 Roque 12 hints at activity similar to that of Roque 14. In contrast, the measured H$\alpha$ emission levels of the M8 Pleiades brown dwarfs Teide 1 and Roque 11 are very similar to that of the M8 VB 10, and we speculate that the magnetic activity of these objects is already in the regime where [$L_{\rm X}$]{} $<$ [$L_{\rm H\alpha}$]{} and their persistent X-ray emission levels are more than an order of magnitude lower than that of Roque 14. Deeper X-ray observations are required to test this prediction.

The evolution of X-ray emission from brown dwarfs
-------------------------------------------------

Very young brown dwarfs in star-forming regions are now routinely observed to emit X-rays at levels of [$L_{\rm X}/L_{\rm bol}$]{}$\sim 10^{-4}$–$10^{-3}$ that arise from hot ($T > 10^{7}$ K) plasma (e.g. Imanishi [et al. ]{}2001; Preibisch & Zinnecker 2002; Feigelson [et al. ]{}2002). A growing body of evidence indicates that young brown dwarfs experience accretion and outflows just like young low-mass stars (Jayawardhana, Mohanty & Basri 2003). Their X-ray emission is very likely to be produced by similar means to that of T Tauri stars. This is probably largely coronal emission as a result of magnetic activity, as in older low-mass stars, and as young substellar objects may have photospheres as warm as 2900 K (e.g. Baraffe [et al. ]{}1998), their atmospheric conditions are ripe for the efficient coronal heating seen on active MS M5–6 stars. However, H$\alpha$ emission may arise predominantly from material accreting on to the young brown dwarf, rather than from a hot chromosphere.
X-ray emission levels appear to rise with H$\alpha$ emission levels up to $EW_{{\rm H}\alpha} \approx 20$ Å, as would be expected for H$\alpha$ emission predominantly from a magnetically-powered chromosphere, but there is no detection of X-rays from a young brown dwarf with $EW_{{\rm H}\alpha} > 30$ Å (Tsuboi [et al. ]{}2003). This is consistent with the scenario emerging from X-ray surveys of T Tauri stars wherein samples of stars showing signs of strong accretion, such as high $EW_{{\rm H}\alpha}$, appear to show lower levels of X-ray emission than weakly-accreting objects of similar mass (e.g. Flaccomio, Micela & Sciortino 2003). X-ray emission has been previously detected from just two brown dwarfs older than 5 Myr. While any accretion is expected to have ceased by the age of the 12 Myr-old TWA 5B, this object has a much lower mass than Roque 14, 0.014–0.043 M$_{\sun}$ (Neuhäuser [et al. ]{}2000), and is unlikely to be a good model for its youthful X-ray activity. Conversely, LP 944-20 appears to have a similar mass to Roque 14, 0.056–0.064 M$_{\sun}$ (Tinney 1998). It has cooled to a spectral type of M9, so its lower atmosphere is highly neutral, which is probably the key reason why its persistent X-ray emission level is $\sim 100$ times lower than those of many very young brown dwarfs and the detected level from Roque 14. However, we should be wary of assuming that the activity levels of all coeval brown dwarfs are the same, and that Roque 14 and LP 944-20 are representative of their respective ages. There is a large spread in the observed [$L_{\rm X}/L_{\rm bol}$]{} values of young brown dwarfs, a significant number being undetected at upper limits $< 10^{-4}$ (e.g. Imanishi [et al. ]{}2001). Roque 14 has one of the highest H$\alpha$ emission levels among Pleiades brown dwarf candidates (Zapatero Osorio [et al. ]{}1997b), while LP 944-20 has one of the lowest among M9 field dwarfs (Mohanty & Basri 2003).
So, while we may expect the X-ray emission level of brown dwarfs to decrease as they age from $\approx 125$ to $\approx 320$ Myr, as apparently observed by comparing Roque 14 and LP 944-20, the effect is probably exaggerated in choosing these two objects as representative. Larger, and necessarily deeper, surveys of the X-ray emission of brown dwarfs in the field and in the Pleiades (and other clusters) are required to make further progress in understanding the evolution of the magnetic activity of substellar and ultracool objects.

Summary
=======

We have observed five candidate brown dwarfs in the Pleiades with [[*XMM-Newton*]{}]{}, detecting X-ray emission from the M7 Roque 14. The low number of counts and significant contribution of background counts prevent meaningful constraint of the temperature and exclusion of transient emission. Assuming a plasma temperature of $10^{7.0}$ K and nominal absorbing column to the Pleiades of $2 \times 10^{20}$ cm$^{-2}$, the time-averaged X-ray luminosity is [$L_{\rm X}$]{} $= (3.3 \pm 0.8) \times 10^{27}$ erg s$^{-1}$, and its ratios with the bolometric ([$L_{\rm X}/L_{\rm bol}$]{} $\approx 10^{-3.05}$) and H$\alpha$ ([$L_{\rm X}/L_{\rm H\alpha}$]{} $\approx 4.0$) luminosities resemble those of active main-sequence M dwarfs, as has been observed for the M7 old-disc star VB 8. We have placed the tightest upper limits thus far on the X-ray emission levels of the four later-type Pleiades brown dwarfs: $<2.1$–$3.8 \times 10^{27}$ erg s$^{-1}$, and [$L_{\rm X}/L_{\rm bol}$]{} $< 10^{-3.10}$–$10^{-2.91}$. The [[*XMM-Newton*]{}]{} data do not exclude that Roque 14’s observed high level of X-ray emission is solely the result of a flare with decay time-scale $\sim 5$ ks like the observed high-level emission from the M8–9 field dwarfs VB 10, LP 944-20, LHS 2065 and RXS J115928.5-524717, but the high H$\alpha$ emission level of Roque 14 is supportive of a persistently higher level of magnetic activity than on these cooler dwarfs.
However, the similarity of the low H$\alpha$ emission levels of Teide 1 and Roque 11 to that of the M8 old-disc star VB 10 prompts us to speculate that the persistent X-ray emission levels of M8 brown dwarfs in the Pleiades may be low, with [$L_{\rm X}/L_{\rm bol}$]{}$\la 10^{-5}$, like those of VB 10 and the M9, $\approx 320$ Myr-old brown dwarf LP 944-20. Deeper X-ray observations, less contaminated by background, are required to confirm a persistent high level of X-ray emission from Roque 14, to assess its coronal temperature, and to probe the typical X-ray emission levels of Pleiades brown dwarfs. Coordinated programmes to study X-ray, H$\alpha$ and radio emission from ultracool and brown dwarfs of a range of ages are required to enable significant progress toward understanding the mechanisms and evolution of magnetic activity on these low-mass, cool objects.

Acknowledgments {#acknowledgments .unnumbered}
===============

JPP acknowledges the financial support of the UK Particle Physics and Astronomy Research Council (PPARC). The authors thank David Burrows for providing source code for calculating upper limits, and Manuel Güdel for useful discussions. This work uses data obtained by [[*XMM-Newton*]{}]{}, an ESA science mission with instruments and contributions directly funded by ESA Member States and the USA (NASA), as part of the [[*XMM-Newton*]{}]{} Survey Science Centre Guaranteed Time programme. The work also made use of archival material from the SIMBAD and VIZIER systems at CDS, Strasbourg, NASA’s Astrophysics Data System, the [[*ROSAT*]{}]{} Data Archive of the Max-Planck-Institut für extraterrestrische Physik (MPE) at Garching, Germany, and the Leicester Database and Archive Service (LEDAS).

Arnaud K. A., 1996, in Jacoby G.H., Barnes J., eds, ASP Conf. Ser. Vol. 101, Astronomical Data Analysis Software and Systems V. Astron. Soc. Pac., San Francisco, p. 67 Baraffe I., Chabrier G., Allard F., Hauschildt P. H., 1998, A&A, 337, 403 Berger E.
et al., 2001, Nat, 410, 338 Berger E., 2002, ApJ, 572, 503 Boese F. G., 2000, A&AS, 141, 507 Briggs K. R., Pye J. P., 2003a, MNRAS, 345, 714 Briggs K. R., Pye J. P., 2003b, Adv. Sp. Res., 32, 1081 Delfosse X., Forveille T., Perrier C., Mayor M., 1998, A&A, 331, 581 Feigelson E. D., Broos P., Gaffney J. A., Garmire G., Hillenbrand L. A., Pravdo S. H., Townsley L., Tsuboi Y., 2002, [ApJ]{}, 574, 258 Festin L., 1998, MNRAS, 298, L34 Flaccomio E., Micela G., Sciortino S., 2003, A&A, 397, 611 Fleming T. A., 1988, Ph.D. thesis, Univ. Arizona Fleming T. A., Giampapa M. S., Schmitt J. H. M. M., Bookbinder J. A., 1993, ApJ, 410, 387 Fleming T. A., Giampapa M. S., Schmitt J. H. M. M., 2000, ApJ, 533, 372 Fleming T. A., Giampapa M. S., Garza D., 2003, ApJ, 594, 982 Ghizzardi S., 2002, technical document, EPIC-MCT-TN-012 Giampapa M. S., Rosner R., Kashyap V., Fleming T. A., Schmitt J. H. M. M., Bookbinder J. A., 1996, ApJ, 463, 707 Gizis J. E., Monet D. G., Reid I. N., Kirkpatrick J. D., Liebert J., Williams R. J., 2000, AJ, 120, 1085 Güdel M., Audard M., Reale F., Skinner S. L., Linsky J. L., 2004, A&A, 416, 713 Hambaryan V., Staude A., Schwope A. D., Scholz R.-D., Kimeswenger S., Neuhäuser R., 2004, A&A, 415, 265 Hawley S. L., Johns-Krull C. M., 2003, ApJ, 588, L109 Hodgkin S. T., Jameson R. F., Steele I. A., 1995, MNRAS, 274, 869 Imanishi K., Tsujimoto M., Koyama K., 2001, ApJ, 563, 361 Jayawardhana R., Mohanty S., Basri G., 2003, ApJ, 592, 282 Kraft R. P., Burrows D. N., Nousek J. A., 1991, [ApJ]{}, 374, 344 Liebert J., Kirkpatrick J. D., Cruz K. L., Reid I. N., Burgasser A., Tinney C. G., Gizis J. E., 2003, AJ, 125, 343 Martín E. L., 1999, MNRAS, 302, 59 Martín E. L., Ardila D. R., 2001, AJ, 121, 2758 Martín E. L., Basri G., Zapatero Osorio M. R., Rebolo R., López R. J. G., 1998, ApJ, 507, L41 Martín E. L., Brandner W., Bouvier J., Luhman K. L., Stauffer J., Basri G., Zapatero Osorio M.
R., Barrado y Navascués D., 2000, ApJ, 543, 299 Meyer F., Meyer-Hofmeister E., 1999, A&A, 341, L23 Mohanty S., Basri G., Shu F., Allard F., Chabrier G., 2002, ApJ, 571, 469 Mohanty S., Basri G., 2003, ApJ, 583, 451 Mokler F., Stelzer B., 2002, A&A, 391, 1025 Neuhäuser R., Comerón F., 1998, Sci, 282, 83 Neuhäuser R. et al., 1999, [A&A]{}, 343, 883 Neuhäuser R., Guenther E. W., Petr M. G., Brandner W., Huélamo N., Alves J., 2000, [A&A]{}, 360, L39 Orlando S., Peres G., Reale F., 2000, ApJ, 528, 524 Pan X., Shao M., Kulkarni S. R., 2004, Nat, 427, 326 Paresce F., 1984, AJ, 89, 1022 Pinfield D. J., Hodgkin S. T., Jameson R. F., Cossburn M. R., Hambly N. C., Devereux N., 2000, MNRAS, 313, 347 Pinfield D. J., Dobbie P. D., Jameson R. F., Steele I. A., Jones H. R. A., Katsiyannis A. C., 2003, MNRAS, 342, 1241 Preibisch T., Zinnecker H., 2002, [AJ]{}, 123, 1613 Reale F., Peres G., Orlando S., 2001, ApJ, 557, 906 Rebolo R., Zapatero Osorio M. R., Martín E. L., 1995, Nat, 377, 129 Rebolo R., Martín E. L., Basri G., Marcy G. W., Zapatero Osorio M. R., 1996, ApJ, 469, L53 Reid N., Hawley S. L., Mateo M., 1995, [MNRAS]{}, 272, 828 Rutledge R. E., Basri G., Martín E. L., Bildsten L., 2000, ApJ, 538, L141 Schmitt J. H. M. M., Liefke C., 2002, A&A, 382, L9 Schmitt J. H. M. M., Fleming T. A., Giampapa M. S., 1995, [ApJ]{}, 450, 392 Smith R. K., Brickhouse N. S., Liedahl D. A., Raymond J. C., 2001, ApJ, 556, L91 Stauffer J. R., 1984, ApJ, 280, 189 Stauffer J. R., Schultz G., Kirkpatrick J. D., 1998, [ApJ]{}, 499, L199 Strüder L. et al., 2001, A&A, 365, L18 Tinney C. G., 1998, [MNRAS]{}, 296, L42 Tinney C. G., Reid I. N., 1998, MNRAS, 301, 1031 Tsuboi Y., Maeda Y., Feigelson E. D., Garmire G. P., Chartas G., Mori K., Pravdo S. H., 2003, ApJ, 587, L51 Turner M. J. L. et al., 2001, [A&A]{}, 365, L27 Zapatero Osorio M. R., Rebolo R., Martín E. L., 1997a, A&A, 317, 164 Zapatero Osorio M. R., Rebolo R., Martín E.
L., Basri G., Magazzù A., Hodgkin S. T., Jameson R. F., Cossburn M. R., 1997b, ApJ, 491, L81 Zapatero Osorio M. R., Rebolo R., Martín E. L., Hodgkin S. T., Cossburn M. R., Magazzù A., Steele I. A., Jameson R. F., 1999, A&AS, 134, 537 \[lastpage\] [^1]: briggs@astro.phys.ethz.ch [^2]: pye@star.le.ac.uk [^3]: http://xmm.vilspa.esa.es/. [^4]: The [pn]{} exposure map must be scaled by a factor to account for its higher sensitivity compared to M1 and M2. This factor is dependent on the source spectrum and discussed in Section 4.3. [^5]: [*ML*]{} values output by [emldetect]{} have been corrected as advised in [*XMM News 29*]{}. [^6]: A source detected by a previous analysis 15 arcsec from Roque 9 only in the 0.8–1.5 keV band is not recovered in this procedure, and was in any case considered more likely to be the chance alignment of a background source (Briggs & Pye 2003b). [^7]: For a $T = 10^{7.0}$ K plasma with solar abundances and [$N_{\rm H}$]{}$=2 \times 10^{20}$ cm$^{-2}$ the 0.3–1.4 keV energy band enables detection at a given signal:noise ratio against the observed background spectrum for the lowest total of 0.3–4.5 keV source counts. The required number of source counts increases only slowly with the upper energy bound up to 4.5 keV and we do not find better constraint of the Roque 14 source variability or plasma temperature using this stricter energy cut. [^8]: As [$L_{\rm H\alpha}$]{} is calculated here from $EW_{{\rm H}\alpha}$, as a ratio with the continuum, it should be little changed whether there is one source or two similar sources.
--- abstract: 'The luminous $z=0.286$ quasar [HE0450–2958]{} is interacting with a companion galaxy at 6.5 kpc distance and the whole system radiates in the infrared at the level of an ultraluminous infrared galaxy (ULIRG). A so far undetected host galaxy triggered the hypothesis of a mostly “naked” black hole (BH) ejected from the companion by three-body interaction. We present new HST/NICMOS 1.6$\mu$m imaging data at 0.1 arcsec resolution and VLT/VISIR 11.3$\mu$m images at 0.35 arcsec resolution that are for the first time resolving the system in the near- and mid-infrared. We combine these data with existing optical HST and CO maps. (i) At 1.6$\mu$m we find an extension N-E of the quasar nucleus that is likely a part of the host galaxy, though not its main body. If true, a combination with upper limits on a main body co-centered with the quasar brackets the host galaxy luminosity to within a factor of $\sim$4 and places [HE0450–2958]{} directly onto the $M_\mathrm{BH}-M_\mathrm{bulge}$-relation for nearby galaxies. (ii) A dust-free line of sight to the quasar suggests a low dust obscuration of the host galaxy, but the formal upper limit for star formation lies at 60 M$_\odot$/yr. [HE0450–2958]{} is consistent with lying at the high-luminosity end of Narrow-Line Seyfert 1 Galaxies, and more exotic explanations like a “naked quasar” are unlikely. (iii) All 11.3$\mu$m radiation in the system is emitted by the quasar nucleus. It has warm ULIRG-strength IR emission powered by black hole accretion and is radiating at super-Eddington rate, $L/L_\mathrm{Edd}=6.2^{+3.8}_{-1.8}$, or 12 $M_\odot$/year. (iv) The companion galaxy is covered in optically thick dust and is not a collisional ring galaxy. It emits in the far infrared at ULIRG strength, powered by Arp220-like star formation (strong starburst-like). An M82-like SED is ruled out.
(v) With its black hole accretion rate, [HE0450–2958]{} does not produce enough new stars to maintain its position on the $M_\mathrm{BH}-M_\mathrm{bulge}$-relation, and star formation and black hole accretion are spatially disjoint. This relation can then only be maintained when averaged over a longer timescale ($\la$500 Myr), and/or the bulge has to grow by redistribution of preexisting stars. (vi) Systems similar to [HE0450–2958]{} with spatially disjoint ULIRG-strength star formation and quasar activity might be common at high redshifts, but at $z<0.43$ we find only $<$4% (3/77) candidates for a similar configuration.'
author: - 'Knud Jahnke, David Elbaz, Eric Pantin, Asmus Böhm, Lutz Wisotzki, Geraldine Letawe, Virginie Chantry, Pierre-Olivier Lagage'
title: |
  The QSO HE0450–2958: Scantily dressed or heavily robed?\
  A normal quasar as part of an unusual ULIRG.
---

Introduction
============

In the current framework of galaxy evolution, galaxies and black holes are intimately coupled in their formation and evolution. The masses of galactic bulges and their central black holes (BHs) in the local Universe follow a tight relation [e.g. @haer04] with only 0.3 dex scatter. Currently it is not clear how this relation comes about and if and how it evolved over the last 13 Gyrs, but basically all semi-analytic models now include feedback from active galactic nuclei (AGN) as a key ingredient to achieve agreement with observations [e.g. @hopk06c; @some08]. In these models it is assumed that black hole growth by accretion and energetic re-emission from the ignited AGN back into the galaxy can form a self-regulating feedback chain. This feedback loop can potentially regulate or possibly also truncate star formation and in this process create and maintain the red/blue color–magnitude bimodality of galaxies.
In this light, any galaxy with an abnormal deviation from the $M_\mathrm{BH}$–$M_\mathrm{bulge}$-relation will be an important laboratory for understanding the coupling mechanisms of black hole and bulge growth. It will set observational limits for these models, and constrain the time-lines and required physics involved. Since the early work by @bahc94 [@bahc95b] on QSO host galaxies with the [*Hubble Space Telescope (HST)*]{} and the subsequently resolved dispute about putatively “naked” QSOs [@mcle95a], no cases of QSOs without surrounding host galaxies were found – when detection limits were correctly interpreted. Only recently the QSO [HE0450–2958]{} renewed the discussion, when @maga05 made the case that the upper limit on the host galaxy of [HE0450–2958]{} was a factor of six too faint with respect to the $M_\mathrm{BH}$–$M_\mathrm{bulge}$-relation. In light of a number of competing explanations for this, the nature of the [HE0450–2958]{} system needs to be settled. The QSO [HE0450–2958]{} (a.k.a. IRAS 04505–2958) at a redshift of $z=0.286$ was discovered by @low88 as a warm IRAS source. [HE0450–2958]{} is a radio-quiet quasar, with a distorted companion galaxy at 1.5 arcsec (=6.5 kpc) distance at the same redshift, likely in direct interaction with the QSO [@cana01]. The combined system shows an infrared luminosity of an ultraluminous infrared galaxy (ULIRG, $L_\mathrm{IR}>10^{12}$ L$_\odot$). [HE0450–2958]{} was observed with the [Hubble Space Telescope (HST)]{} and its WFPC2 camera [@boyc96] in F702W (=$R$ band) and ACS camera [@maga05] in F606W (=$V$ band); neither observation allowed detection of a host galaxy centered on the quasar position within its limits (Figure \[fig:allwave\], left column). @maga05 estimated an expected host galaxy brightness if [HE0450–2958]{} was a normal QSO system that obeyed the $M_\mathrm{BH}$–$M_\mathrm{bulge}$-relation in the local Universe and given a BH mass estimate or luminosity of the QSO.
They concluded that the ACS F606W detection limits were six times fainter than the expected value for the host galaxy, which qualified [HE0450–2958]{} as very unusual. ![image](fig1_all.eps){width="\textwidth"} @maga05 sparked a flurry of subsequent papers attempting to explain the non-detection of a host galaxy matching the black hole. Over time three different alternative explanations have been put forward and substantiated: 1. [HE0450–2958]{} is a normal QSO nucleus, but with a massive black hole residing in an under-massive host galaxy. The system lies substantially off the local $M_\mathrm{BH}$–$M_\mathrm{bulge}$-relation; the host galaxy possibly hides just below the F606W detection limit [@maga05]. 2. The host galaxy is actually absent and [HE0450–2958]{} is a truly “naked” QSO, by means of a black hole ejection event in a gravitational three-body interaction or gravitational recoil following the merger of [HE0450–2958]{} with the companion galaxy [@hoff06; @haeh06; @bonn07]. 3. The original black hole mass estimate was too high [@merr06; @kim07; @leta07] and is in fact $\sim$10 times lower. With comparably narrow ($\sim$1500 km/s FWHM) broad QSO emission lines, the QSO could be the high-luminosity analog of the class of narrow-line Seyfert 1 galaxies (NLSy1). The host galaxy could be normal for the black hole mass and fully consistent with the ACS upper limits. In this article we present new data, initially motivated by the still undetected host galaxy and by the possibility that the host galaxy might be obscured by substantial amounts of dust. We want to investigate the overall cool and warm dust properties of the system, using new near-infrared (NIR) and mid-infrared (MIR) images. The F606W ACS band is strongly susceptible to dust attenuation, and dust could have prevented the detection of the host galaxy in the optical. With the new NIR data we look at a substantially more transparent wavelength.
At the same time, the new infrared data are meant to localize the source(s) of the ULIRG emission. Three components are candidates for this: the AGN nucleus, the host galaxy, and the companion galaxy. Our NIR data allow us to trace star formation, and the MIR image traces the hot dust in the system. We present the new data and interpret them in view of the knowledge from X-ray to radio wavelengths accumulated since the article of @maga05. Throughout we will use Vega zero-points and a cosmology of $h=H_0/(100\mathrm{km s^{-1} Mpc^{-1}})=0.7$, $\Omega_M=0.3$, and $\Omega_\Lambda=0.7$, corresponding to a distance modulus of 40.84 for $z=0.286$ and a linear scale of 4.312 kpc/″. The IR angle ============ Up to now the only existing infrared observations of [HE0450–2958]{} were from the 2MASS survey in the near-infrared $J$, $H$, and $K$ bands at $\sim$4″ resolution, and in the MIR from the IRAS mission [@grij87; @low88] at 12, 25, 60, and 100 $\mu$m with about 4′ resolution. Neither survey resolves the different individual components of the system (QSO, companion galaxy, foreground star). De Grijp et al. ([-@grij87]) noted that the [HE0450–2958]{} system shows the MIR/FIR luminosities of a ULIRG system, but due to the coarse IRAS resolution it was not clear which components of the system are responsible for this emission. We want to localize the dust emission in two ways: (a) a direct observation of the hot dust component at 8.9$\mu$m (rest-frame) with the VISIR imager at the ESO VLT; (b) a localization of dust in general by combining new HST near-infrared and the existing ACS optical data. For this purpose we obtained HST NIC2 imaging in the rest-frame $J$-band at $\sim$1.3$\mu$m.
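The adopted distance modulus and angular scale follow directly from this cosmology. As a cross-check, a minimal pure-Python sketch (the Simpson integrator and step count are our implementation choices, not from the paper):

```python
import math

def E(z, om=0.3, ol=0.7):
    # dimensionless Hubble parameter for flat LambdaCDM
    return math.sqrt(om * (1 + z)**3 + ol)

def comoving_distance_mpc(z, h=0.7, n=1000):
    # Simpson integration of (c/H0) * int_0^z dz'/E(z'); n must be even
    c = 299792.458  # km/s
    hubble_dist = c / (100.0 * h)  # Hubble distance in Mpc
    w = [1 if i in (0, n) else (4 if i % 2 else 2) for i in range(n + 1)]
    s = sum(wi / E(i * z / n) for i, wi in enumerate(w))
    return hubble_dist * (z / n) / 3.0 * s

z = 0.286
dc = comoving_distance_mpc(z)
dl = (1 + z) * dc                    # luminosity distance
da = dc / (1 + z)                    # angular-diameter distance
dm = 5 * math.log10(dl * 1e6 / 10)   # distance modulus
kpc_per_arcsec = da * math.radians(1 / 3600) * 1e3
print(round(dm, 2), round(kpc_per_arcsec, 2))  # -> 40.84 4.31
```

This reproduces the quoted 40.84 mag and 4.312 kpc/″ to within rounding.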
VISIR 11.3$\mu$m imaging data ----------------------------- In the near and mid infrared the [HE0450–2958]{} system clearly has a spectral energy distribution (SED) that is composed of more than a single component: in Figure \[fig:iras\_sed\] we model the IRAS and VISIR flux densities with a composite SED of a quasar plus a star-forming component. For the quasar we test the median and 68 percentile reddest quasar SEDs from @elvi94, and for the star-forming component an Arp220-like starbursting SED, but we also tried a moderately star-forming M82 SED, both from @elba02. The median quasar SED plus Arp220 can reproduce the data at all wavelengths, except at observed 25$\mu$m, where it leaves a small mismatch. The 68 percentile reddest SED, on the other hand, matches there as well. For both cases the flux predicted for the companion galaxy at 11.3$\mu$m lies below the detection limit, as observed. Milder, M82-like star formation can be ruled out on the same basis, as it predicts a detection of the companion also at 11.3$\mu$m – both from the information in the mid infrared and when extrapolating the observed $H$-band flux. @papa08 match a simple model of two black-body emission curves to the four IRAS points, yielding a cool dust component heated by star formation and a warm dust component which can be attributed to intense AGN emission (see Section \[sec:discussion\_ulirg\]). While it is not possible to spatially resolve the system at FIR wavelengths with current telescopes, we aim for the longest wavelength at which this is currently possible, in order to localize the warm emission component and test whether it comes solely from the (optically visible) QSO or from additional sources.
![\[fig:iras\_sed\] The SED of [HE0450–2958]{} in the mid-infrared: shown are the IRAS flux density measurements from @grij87 [*(open circles)*]{}, our VISIR data point [*(filled circle)*]{} and upper limit on the companion galaxy [*(arrow)*]{}, and overlaid composite AGN plus starburst SEDs [*(lines)*]{}. For the quasar nucleus we use the median [*(green dashed line)*]{} and 68 percentile reddest SEDs [*(black dashed line)*]{} from @elvi94; the starburst [*(red solid line)*]{} is a model for Arp220 by @elba02. The median quasar plus Arp220 SED [*(green solid line)*]{} can explain the data except for a slightly too low value at the observed 25$\mu$m point, but with the 68 percentile SED [*(black solid line)*]{} the match is perfect. The predicted flux of the companion galaxy, where the star formation of the system is located [*(bar)*]{}, lies below our detection limit, consistent with the data. Milder star formation templates such as M82 can be ruled out, since they predict too high fluxes for the companion – also from the observed $H$-band data – which should be visible in the VISIR image. ](fig2.eps){width="\columnwidth"} The observations were performed using VISIR, the ESO/VLT mid-infrared imager and spectrograph mounted on unit 3 of the VLT (Melipal). VISIR provides a pixel size of 0.075″ and a total field-of-view of 19.2″. The diffraction limited resolution is 0.35″ FWHM. The standard “chopping and nodding” mid-infrared observational technique was used to suppress the background dominating at these wavelengths. All observations were interlaced with standard star observations of HD 29085 (4.45 Jy) and HD 41047 (7.21 Jy). The estimated sensitivity was 4 mJy/10$\sigma$/1h. Imaging data were obtained on the 12th of December 2005 in service observing mode, through the PAH2 filter centered on 11.3 $\mu$m with a half-band width of 0.6 $\mu$m.
Weather conditions were very good, the optical seeing was below 1″, and the object was always observed at an airmass of 1.15, which resulted in a diffraction limited image of 0.35″ resolution. Chopping/nodding parameters were 8″/8″ and 0.25 Hz/0.033 Hz. The total time spent on-source was 1623 s. The data were reduced using a dedicated pipeline written in IDL, which performs the chopping/nodding correction and removes the spurious stripes due to detector instabilities [@pant07]. The reduced data were finally flux-calibrated using the two reference stars as photometric calibrators. The error on the photometry due to variations of the atmospheric transmission is estimated to be less than 2% (3$\sigma$). NICMOS $H$-band imaging data ---------------------------- The ACS $V$-band is too blue to penetrate any substantial amount of dust. With the scenario of a dust-enshrouded host galaxy in mind, we acquired new HST NICMOS data (NIC2 with 0.075″ plate scale) in the F160W $H$-band (program \#10797, cycle 15) to reduce the dust attenuation by a factor of 3.5 in magnitude space. A total of 5204 s integration on target had to be split into two observation attempts due to telescope problems, carried out in July 2006 and 2007. These yielded two sets of data with 2602 s integration each, but slightly different orientations. In order to minimize chromatic effects, we also observed a point spread function (PSF) calibrator star (EIS J033259.33–274638.5) with SED characteristics over the F160W filter bandpass similar to a mean QSO template. We do not know the actual SED of [HE0450–2958]{} itself, as no NIR imaging or spectroscopic data of the system with high enough spatial resolution exist to date. By comparing the PSFs predicted by the TinyTim package [@kris03], we found K4III to be the stellar type yielding the most similar PSF.
The only cataloged stars faint enough not to immediately saturate were observed by the ESO Imaging Survey [EIS, @groe02], are located in the E-CDFS, and had to be observed 6 months apart in time from [HE0450–2958]{}. Since we also want to minimize PSF variations due to differences in observing strategy, we applied the same dither patterns for both [HE0450–2958]{} and the PSF star. Due to the absolute pointing accuracy of HST, the centroid location of the star relative to the chip is shifted by about 15 pixels (1.1″) from the QSO centroid towards the companion galaxy. Data reduction and combination of the individual frames were carried out using a mix of STScI pipeline data products, pyraf, and our own procedures in MIDAS and Fortran. The resulting image is shown together with the analysis in Figure \[fig:dataimages\]a. Two parts of the team analyzed the combined images in complementary ways, by decomposition of the components using two-dimensional modeling and by image deconvolution. ### Uncertainty in the PSF {#sec:psfuncertainty} In order to detect a putative faint host galaxy underneath the bright QSO nucleus we require precise knowledge of the PSF. The PSF will vary spatially, with the energy distribution in the filter, as well as temporally, with a changing effective focus of the telescope due to its changing thermal history. We opt for a double approach: first, we observe the separate PSF star with the properties described in the last section (see Fig. \[fig:dataimages\]b). Secondly, we also have the foreground star available, located at 1.8″ distance from the QSO to the north-west. It is classified as a G star [@low89]. Its on-chip distance to the QSO leaves only room for small spatial variations, but its SED in the $H$-band will likely not perfectly match the SED of the QSO. It is difficult to assess the PSF uncertainty at the position of the QSO.
In principle we have a combined effect of color, spatial, and temporal variation, but only one bit of information: the difference between the foreground star and the PSF star. We thus model the expected difference in the shape of these two stars with TinyTim and then compare their actual observed shapes. This shows that the foreground star should be slightly narrower than the observed PSF star, which is consistent with the PSF star’s later, redder spectral type and an increase of PSF width with wavelength. We also observe this effect in the data, though somewhat stronger than predicted. A temporal variation can thus not be separated out and ruled out. In any case we conclude that the PSF star is wider and will thus yield more conservative (=fainter) estimates for a QSO host galaxy, while in case of a non-detection the foreground star will yield brighter upper limits. For two-dimensional modeling of the system we use [galfit]{} [@peng02]. In order to quantify the PSF uncertainty for this process, we first let [galfit]{} fit a single point source, represented by the PSF star, to the foreground star. In this process we use an error map created from the data itself, and we add the sky as a free parameter. We minimize the influence of the nearby QSO on the foreground star by first fitting the former with a single point source as well, removing its modeled contribution, and masking out the remaining residuals starting at 0.9″ from the star. The PSF created in this way is shown in Figure \[fig:dataimages\]c. This image is fed into the modeling process of the PSF star, or later the QSO/host/companion system. The residual flux in this process is of the order of 3% of the total inside the 0.5″ radius aperture where most apparent residuals are located; the absolute value of the residuals in the same region is 14%.
This means that it will generally be impossible to detect any host galaxy of less than 3% of the total flux of the QSO, and it will even be difficult to isolate a somewhat brighter smooth galaxy in the non-smooth residuals. This level of residuals is consistent with experience from the HST ACS camera, where we find that due to PSF uncertainties 5% of the total flux is the approximate detection limit for faint host galaxies [@jahn04b Jahnke et al. in prep.]. Including the structured PSF residuals, we will only consider a host galaxy component as significant if it has clearly more than 3–5% of residual flux inside a 0.5″ radius of the QSO, or if it shows up as a non-co-centric structure above the noise outside this region. In absolute magnitudes relative to the QSO these limits correspond to the following: inside a 0.5″ radius of the QSO we can hide a galaxy co-centric with the QSO of up to $M_H\sim-24.7$ (for the 3% case) or $M_H\sim-25.2$ (for 5%). Results ======= VISIR ----- We detect a single unresolved point source in the VISIR field-of-view with a flux density of 62.5 mJy at observed 11.3$\mu$m (Figure \[fig:allwave\], right column). This compares to 69.3 mJy in the IRAS 12 $\mu$m channel. There is no second source detected in the field down to a point-source sensitivity of at least 3 mJy at the 5$\sigma$ level. Extended sources of the visual size of the companion galaxy have a 5$\sigma$ detection limit of 5.5 mJy. With only one source in the total 19.2″ VISIR field, three optical sources have in principle to be considered as potential counterparts: the QSO nucleus, the companion galaxy, and the foreground star. However, the star is of G spectral type and can thus be safely ruled out. We find that the initial position of the MIR point source as recorded in the VISIR image header comes to lie between the QSO and the companion, somewhat closer to the QSO.
To clarify this we conducted an analysis of the pointing accuracy of VISIR, testing the astrometry of a number of reference stars observed with VISIR at different epochs. The two results are: (1) in all cases the offset between targeted and effective RA, Dec is less than 1″ rms, but (2) there is a systematic offset of 0.15 s in RA recorded in the FITS header, so the true positions need to be corrected by –0.15 s in RA. This correction places the MIR point source exactly onto the locus of the QSO in the HST ACS images. It is thus clearly the QSO nucleus that is responsible for all of the 11.3 $\mu$m emission. NICMOS ------ ### Host galaxy {#results:host} To extract information on the host galaxy, we use three different methods to remove the flux contribution of the QSO nucleus. First, we make a model-independent test for obvious extended emission: in a simple peak subtraction we remove a PSF from the QSO, scaled to the total flux inside a two-pixel radius around the QSO center. This is a robust approach that is independent of specific model assumptions and quite insensitive to the noise distribution in the image [@jahn04b]. As a result, the peak-subtracted image shows no obvious extended residual, i.e. host galaxy, centered on the QSO when using the PSF star as PSF. As a second step we use, on the one hand, [galfit]{} to model the two-dimensional light distribution of the [HE0450–2958]{} system and decompose it into different morphological components. On the other hand, we use the MCS deconvolution method [@maga98] to mathematically deconvolve the system to a well defined and narrower PSF. The procedure we follow is based on the one described in @chan07. For [galfit]{} we use the two empirical PSFs; for MCS deconvolution we construct a number of combinations of empirical PSFs and TinyTim models, including very red dust-like SED components.
While these two approaches are methodologically complementary, their results agree, as can be seen in Figure \[fig:dataimages\]: the inner part of the QSO inside a 0.5″ radius is consistent with a point source within the PSF uncertainties, but there is extra flux present outside this radius. The structure of the PSF removal or deconvolution residuals points to a substantial mismatch between the shape of the QSO nucleus and the separately observed PSF star, but also to the overly simple models of TinyTim. In order to remove obvious residual PSF structure a very red SED needs to be assumed, which at this point cannot be discriminated from a marginally resolved red component on top of the AGN point source. However, in light of the non-average properties of this QSO, a mean QSO SED is not expected either. In the following we present our results in more detail and focus on the [galfit]{} results, since they allow a more direct estimate of the significance of detected structures. A comparison of the original and point source-removed images in the optical and NIR, and the MIR image, are shown in Figure \[fig:allwave\]. We use [galfit]{} to perform a number of different model fits. In all of them the foreground star and the QSO nucleus are described by a pure point source, while the companion galaxy is fit with one or two Sérsic[^1] components with free axis ratio, or left unmodelled. We also attempt to add another Sérsic component for the putative host galaxy. We always leave the Sérsic parameter $n$ free, although the companion galaxy is too complex and the putative host galaxy too faint for $n$ to be interpreted physically. With the PSF star used as PSF, [galfit]{} finds a result consistent with the peak subtraction. A positive residual of $H\sim17.7$ inside a $\sim$1″ radius aperture has a flux below 2% of the 13.05 mag of the QSO itself (see Figure \[fig:dataimages\]e+f).
Even though we choose an aperture larger than in our calculation in Section \[sec:psfuncertainty\], we obtain a value far below our significance limit, so no significant co-centered host galaxy is seen in this way. If we use the foreground star as PSF (Fig. \[fig:dataimages\]g) we find – as expected – a residual flux that is slightly higher than before, and consistent values for two different approaches: first, for a pure PSF fit to the QSO location, integrating the flux of the residual within a 1″ radius aperture around the QSO, except along the SE–NW axis where we expect residual flux from the foreground star and companion galaxy. Secondly, we get a similar flux for a fitted additional host galaxy Sérsic component. These two approaches yield a magnitude of $H$$\sim$15.8 and 16.2, respectively, for the host, $\sim$1.5 mag brighter than for the PSF star fit. $H$$\sim$16 corresponds to $\sim$6% of the 13.05 mag of the QSO nucleus. Again the QSO residual shows substantial structure, as reported in Section \[sec:psfuncertainty\]. It consists of nested rings of positive and negative flux, typical signs of a small but non-zero width difference between the PSF we use and the actual one. The bulk of the structure is contained in the innermost 0.5″ radius and contains 2/3 of the residual flux. The remaining residual of 2% of the total flux outside this radius is again insignificant, and no main body of the host galaxy co-centered with the quasar is found which satisfies our significance criterion. Going back to the PSF residuals that we quantified earlier on, we detect no co-centered host galaxy at a level above 3% of the flux of the quasar nucleus, corresponding to an upper limit of $H=16.9$. ![image](fig3_all.eps){width="\textwidth"} However, after removal of the point source, a feature becomes clearer, which we dub the “NE-extension”.
This faint structure extends from the QSO to the N–E, and it can be traced starting at the edge of the strong PSF residuals at 0.6″ (2.5 kpc) N–E of the nucleus (Figures \[fig:dataimages\] and \[fig:ne-extension\]). Some signs of it are already visible in the optical, when going back to the F606W image [@maga05 see also Fig. \[fig:allwave\]], but it is much more pronounced in the new $H$-band data compared to the $V$-band. The NE-extension is possibly part of a tidal arm similar to the arm towards the south of the companion, already described by @cana01, but our $H$-band image shows it to be clearly disjoint from the companion galaxy. Due to its proximity it is very likely associated with the QSO, even though it is clearly not its main body. It is unlikely that the NE-extension is just a gas cloud with star formation induced by the radio jet in the system, since it lies at least 50$^\circ$ from the jet direction [@feai07]. It is also unlikely to be a chance superposition with an emission-line gas cloud, as seen by @leta08, since the observed $H$-band does not contain any strong enough line. The NE-extension contains non-negligible flux far above the noise of the background, is unaffected by QSO residuals, and is independent of the PSF used. We estimate its brightness at $H=18.8$ using an aperture encompassing all visible extension outside the QSO nucleus residual. The same region in the ACS image has $V=21.6$, so $(V-H)=2.8$. ![\[fig:ne-extension\] A slight zoom into the inner region of [HE0450–2958]{} to show the newly found “NE-extension” of the QSO. We removed the star and the QSO using the PSF star as PSF. An extension to the N–E is visible (marked with [*red brackets*]{}) at a distance of 0.6″–1.5″ that is clearly not due to PSF residuals – a very similar result is seen when using the narrower foreground star as PSF (Fig. \[fig:dataimages\]e), or MCS deconvolution (Fig. \[fig:dataimages\]d).
This structure is disjoint from the companion galaxy and thus very likely belongs to the QSO host galaxy itself. The estimated brightness is $H=18.8$. The image size is 4.5″ on the side. ](fig4.eps){width="\columnwidth"} In summary, we detect no significant host galaxy that is co-centered with the QSO. We conclude this from the size and shape of the residuals underneath the QSO in comparison to the “PSF star minus foreground star” subtraction residuals we discussed above. The NE-extension, however, which can be seen outside of the residuals of the QSO nucleus, is a real and significant emission structure – and it is very likely associated with the main part of the host galaxy. ### Companion galaxy In the ACS $V$-band the companion galaxy located 1.5″ to the S–E appears clumpy, with several bright knots as well as lower surface brightness in the center. @cana01 even call the companion a “collisional ring galaxy”. With the NICMOS $H$-band we get a substantially different picture. The galaxy, at $H=15.2$, is still asymmetric, with tidal extensions, but contrary to the visible wavelengths it is smooth and shows a pronounced center: clear signs of substantial dust, distributed not smoothly but unevenly and clumpily, with a concentration towards the center that only shows up in the optical (Fig. \[fig:companion\]). The complexity of the companion is manifested in the fact that there is no good description with either one or two Sérsic components when the azimuthal shape is restricted to ellipses. The Sérsic index of the companion is around $n=2$ for a single Sérsic component, and $n<1$ if two Sérsic components are used. Taken at face value, both cases point to a more disk- than bulge-like companion, but a substantial fraction of the flux is contained in the non-symmetric distorted part of the companion – and this should be the main description of the companion.
More complex descriptions have been put forward, proposing either an additional faint AGN hosted by the companion galaxy [@leta09], explosive quasar outflows [@lipa09], or quasar-induced star formation [@elba09]. ![\[fig:companion\] Zoom on the companion galaxy. In contrast to the optical band (Fig. \[fig:allwave\]) the galaxy has a pronounced peak of emission and no ring. The light in the optical is obviously attenuated by dust, very strongly in the center where the dust is optically thick, less in the outer regions. Image size is 3″ on the side. ](fig5.eps){width="\columnwidth"} Discussion ========== Where is the ULIRG? {#sec:discussion_ulirg} ------------------- There was substantial confusion in the literature about the source of the ULIRG-strength IRAS MIR and FIR emission. From the uncorrected \[OII\] line flux a star formation rate (SFR) of only 1 M$_\odot$/yr can be inferred [@kim07]. @maga05 still assign the ULIRG emission to the companion galaxy due to its Balmer decrement, which yields non-negligible dust extinction, while @kim07 note that the corrected SFR would still be below 10 M$_\odot$/yr. This number is in strong disagreement with a SFR of up to $\sim$800 M$_\odot$/yr inferred from the total IR luminosity, or 370 M$_\odot$/yr from CO [@papa08]. The new NICMOS images show that the stars in the companion galaxy are not distributed in a ring but smoothly (Fig. \[fig:companion\]), and that optically thick dust creates the ring-like structure in the optical ACS images (Fig. \[fig:allwave\]a+b). This means that at optical wavelengths only information from the less extincted outer regions of the galaxy, as well as the surface of the strongly extincted central regions, is seen. UV-based SFRs must therefore dramatically underestimate the true SFRs when corrected with dust extinction estimated from (also optical-wavelength) Balmer decrements.
The actual scale of the uncertainty in $A_V$, the optical extinction correction, can be estimated by comparing $A_V$ estimates from Balmer lines and Paschen/Brackett lines in other ULIRGs. @dann05 studied five ULIRGs for which they estimated $A_V$ both from H$\alpha$/H$\beta$ and from Pa$\alpha$/Br$\gamma$. The NIR-derived values for $A_V$ were in every case significantly larger, by factors ranging from $\sim$1.16 to $\sim$10 (mean 4.0). As this factor does not scale in any way with the optical $A_V$ estimate, but only with the NIR estimate, we cannot determine a correction for [HE0450–2958]{}. When starting out with the redshifted \[OII\] line at $\lambda$3727 and $A_\mathrm{3727}=A_V\times1.57$ like @kim07, and the correction factors from @dann05, a huge range of possible star formation rates arises. An average of the two [*lowest*]{} correction values from @dann05, 1.16 and 1.65, yields $A_V\sim 2.1$ and $A_\mathrm{3727}\sim3.3$, or a corrected SFR of 21 M$_\odot$/yr. Using their mean correction factor of $\sim$4 would lead to $A_\mathrm{3727}\sim9.4$ or $>$5000 M$_\odot$/yr. So already a value below the mean correction ($A_V\sim3$) would make these numbers consistent with FIR-emission-based SFR estimates. This directly shows that optical/UV line-emission-based SFRs as used by @kim07 cannot be used at all to constrain the true SFR of ULIRGs, and do not provide an argument against strong star formation in the companion. @papa08 approximate the IRAS IR SED with a 2-component black-body model and find a cool component with $T_\mathrm{dust}^\mathrm{cool}=47$ K, dust mass $M_\mathrm{dust}^\mathrm{cool}\sim10^8$ M$_\odot$, and $L_\mathrm{FIR} \sim 2.1\times10^{12}$ L$_\odot$, and a warm component with $T_\mathrm{dust}^\mathrm{warm}=184$ K, $M_\mathrm{dust}^\mathrm{warm}\sim5\times10^4$ M$_\odot$, and $L_\mathrm{MIR} \sim 2.6\times10^{12}$ L$_\odot$.
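The arithmetic behind these extinction-corrected SFRs can be made explicit; a minimal Python sketch, where the base optical estimate $A_V\approx1.5$ is our assumption, back-derived from the quoted $A_V\sim2.1$:

```python
def corrected_sfr(sfr_uncorrected, a_v):
    # attenuation at rest-frame [OII] 3727 A: A_3727 = 1.57 * A_V (as in the text)
    a_3727 = 1.57 * a_v
    # each magnitude of extinction hides a factor 10^0.4 in flux, hence in SFR
    return sfr_uncorrected * 10 ** (0.4 * a_3727)

sfr0 = 1.0                # uncorrected [OII]-based SFR in Msun/yr
a_v_optical = 1.5         # assumed Balmer-based optical estimate (our back-derivation)
low = a_v_optical * (1.16 + 1.65) / 2   # average of the two lowest correction factors
mean = a_v_optical * 4.0                # mean NIR/optical correction factor
print(round(corrected_sfr(sfr0, low)))   # roughly the quoted 21 Msun/yr
print(corrected_sfr(sfr0, mean) > 5000)  # exceeds 5000 Msun/yr
```

The spread of three orders of magnitude between the two cases is the point: the optical correction is essentially unconstrained.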
We can now for the first time spatially localize the warm component: the single 11.3$\mu$m point source detected with VISIR is coincident with the position of the QSO nucleus. Since the measured flux density is consistent with a warm component having the previously known 12 $\mu$m IRAS flux density, we conclude that the QSO nucleus by itself is already a ULIRG-level emitter, but with warmer dust than expected from star formation. For localizing star formation in the system, two recent new datasets are available, radio data from @feai07 and the CO maps by @papa08. While the radio maps do not set strong constraints when trying to exploit the radio–FIR relation to assign a location to the FIR emission, the CO data are more powerful: at least the bulk, possibly all, of the molecular gas and thus star formation activity is located in the companion galaxy. We can add two further constraints from our NICMOS and VISIR images. Both the mid-infrared SED of the system (Fig. \[fig:iras\_sed\]) and an extrapolation from the $H$-band are consistent with Arp220-like star formation, while ruling out milder, M82-like conditions. In the latter case the companion would have to be visible in our observed 11.3$\mu$m image, but it is absent (Fig. \[fig:allwave\]). Together with the dense and clumpy dust geometry of the companion inferred from comparing optical and NIR morphology, it becomes clear that the companion is responsible for most, if not all, of the 370 M$_\odot$/yr star formation. If we follow the 5:1 CO detection significance for the companion given by @papa08, this means that at least 5/6=83% of the CO is located in the companion, and thus also $\ge83$% of the star formation and FIR emission. This converts to an integrated IR luminosity of $L_\mathrm{FIR} \ge 1.75\times10^{12}$ L$_\odot$, so the companion also qualifies as a ULIRG by itself. While the presence of very strong star formation in the companion is now clear, its trigger is a priori not so obvious.
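The companion's share of the FIR luminosity follows directly from the CO significance; a one-line check of the numbers quoted above:

```python
# cool FIR component from the two-black-body fit: L_FIR ~ 2.1e12 Lsun
l_fir_cool = 2.1e12
companion_fraction = 5.0 / 6.0        # from the 5:1 CO detection significance
l_fir_companion = companion_fraction * l_fir_cool
print(f"{l_fir_companion:.2e}")       # -> 1.75e+12, above the 1e12 ULIRG threshold
```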
The most probable explanation is merger-induced star formation, so the system would be a classical ULIRG – just with a non-standard geometry – but there is room for a radio-jet-induced effect as well. One of the lobes of the jets from the QSO is located directly at the companion position. If and how much this contributes to star formation in the companion still needs to be quantified. Host galaxy detection {#sec:hostgalaxydetection} --------------------- With the companion identified as the main star-former, the CO data limit the fraction of the total cool dust located within the putative host galaxy to less than 1/6. Thus 1/6 of the FIR-inferred SFR by @papa08, using $\mathrm{SFR} = 1.76\times 10^{-10}\,(\mathrm{L_{IR}/L_\odot})$ M$_\odot$/yr, corresponds to an upper limit of 62 M$_\odot$/yr. This leaves room for a non-negligible amount of SF in the host galaxy, but it is also an upper limit[^2]. If we assume the host galaxy to have a mix of old and young stellar populations as we find for other QSO host galaxies at these redshifts [@jahn04a; @leta07], we can convert this to an expected $H$-band flux. If the host galaxy had the same population mix as @cana01 modelled for the companion galaxy[^3] – 95.5% of a 10 Gyr old population with a 5 Gyr e-folding SFR timescale plus 4.5% of a 128 Myr young population – this SFR upper limit would translate to an expected NIR magnitude 1.75 mag fainter than the companion, or $H\ge16.95$. The combined color and $K$-correction term is $V-H_\mathrm{z=0.285} = 1.66$, and changes by only about $\pm$0.3 mag for a pure old (10 Gyr) or young (100 Myr) population. It is thus rather insensitive to the exact choice of stellar population. However, this limit will get brighter if the host galaxy contained less dust – by about 0.3 mag per magnitude decrease in $A_V$.
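The 62 M$_\odot$/yr limit is simple arithmetic on the numbers quoted above; sketched:

```python
l_fir_cool = 2.1e12             # Lsun, cool dust component from the two-black-body fit
l_host_max = l_fir_cool / 6.0   # at most 1/6 of the cool dust sits in the host
sfr_host_max = 1.76e-10 * l_host_max   # Kennicutt-type calibration used in the text
print(round(sfr_host_max))      # -> 62 Msun/yr
```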
With that in mind, this limit is not more stringent than the limit from NICMOS itself: no significant main host galaxy body is found after PSF removal (Section \[results:host\]), so an upper limit from the NIR decomposition of $H=16.9$ applies for a host galaxy co-centered with the quasar nucleus. We therefore conclude that the current upper limit from NICMOS lies at around $H\sim16.9$. This is consistent with the CO/FIR limits. How do these numbers relate to the current upper limit for a co-centered host galaxy from the optical HST data? We convert our $H$-band limit to absolute $V$-band magnitudes, again assuming that the host galaxy has the same stellar population mix as the companion. In the conversion to $M_V$ we assume two different values for the dust extinction, (a) $A_V=0$, motivated by the nearly dust-free line of sight to the QSO nucleus, and (b) a moderate $A_V=1$ (corresponding to $A_{H\mathrm{(z=0.285)}}\sim0.29$). This yields host-galaxy upper limits of $M_V>-21.25$ and $>-22.55$ for cases (a) and (b), respectively. If we convert the @maga05 upper limits to our $h=0.7$ cosmology and assume the same stellar population and dust properties, we obtain $M_V>-20.6$ and $>-21.6$, respectively. We note that this corresponds to a detection limit of only 1.5% of the total quasar flux in the optical. This factor of two is due to the better determined PSF in the ACS images. It allows @maga05 to set somewhat stricter upper limits for a nucleus-co-centered host galaxy component, particularly if low dust extinction is present. Concerning lower limits to the host galaxy, the NE-extension (Figure \[fig:ne-extension\]) is a structure of real emission that can be traced towards the QSO from $\sim$1.5″ down to a radius of 0.6″, where the region of substantial PSF residuals begins. We cannot say for sure whether it continues further inward from this position.
Signs of this structure are visible in the ACS $V$-band (see Figure \[fig:allwave\], left column), but it is not clear whether the more compact region only $\sim$0.2 arcsec N–E of the nucleus in the ACS image is real or an artefact of the deconvolution process. We measured the $(V-H)$-color to be 2.8 outside this region, which is consistent with a stellar population of intermediate age. In the dust-free case this color corresponds to a $\sim$2.1 Gyr old single stellar population [@bruz03 solar metallicity], for $A_V=1.0$ to an age of 800 Myr. This is consistent with stellar material from a host galaxy, e.g. tidally ejected disk stars. We conclude that with its spatial detachment from the companion galaxy this NE-extension is likely a part of the host galaxy, possibly as a tidal extension, and its vicinity to the QSO makes other interpretations less likely. With this interpretation, we obtain an $H\le18.8$ [*lower*]{} limit for the host, corresponding to $M_V<-20.4$ ($A_V=0$) or $<-20.7$ ($A_V=1$). If we include this off-center emission in the upper limit for a co-centered host galaxy, we obtain a total host galaxy upper limit of $M_V>-21.2$ and $-22.0$. We thus bracket the host galaxy luminosity in the $V$-band by 0.8 and 1.3 mag or factors of $\sim2$ and $\sim3.5$, respectively. Formally, the CO detection significance and NICMOS give the same limit on a star formation rate of up to $\sim60$ M$_\odot$/yr. If we take into account the stricter ACS $V$-band limits of $M_V>-20.6$ and $>-21.6$, depending on dust cases (a) and (b), these are fainter by 1.3 and 0.6 mag than the CO-predicted magnitudes. Inversely, these reduce the upper limits on star formation to 18 and 35 M$_\odot$/yr, respectively. Beyond $A_V=2$ mag the CO and NICMOS limits again become the most stringent. This means that we cannot rule out dust obscuration in the host galaxy.
At the same time the dust-free line of sight to the quasar nucleus is a strong argument against large amounts of dust, unless a very special geometrical configuration is invoked, while the warm ULIRG emission from the QSO points to dust in the very central few 100 pc. Only better CO limits or a detection of the host galaxy in the NIR will be able to finally resolve this matter.

Black hole mass, galaxy luminosity, and the NLSy1 angle
-------------------------------------------------------

Black hole mass estimates for [HE0450–2958]{} vary significantly across the literature. The original estimate of 8$\times$10$^8$ $M_\odot$ [@maga05] was later revised to a substantially lower value of 4$\times$10$^7$ $M_\odot$ [@leta07]. Both values are virial estimates based on H$\beta$ width, but while narrow and broad components were separately measured in the former study, the FWHM of the whole line was used in the latter. This revised value is consistent with the independent virial estimate of 6–9$\times$10$^7$ $M_\odot$ by @merr06, and even with an estimate from X-ray variability, $2^{+7}_{-1.3}$$\times$10$^7$ $M_\odot$ [@zhou07]. Since the virial estimates agree now, we will adopt the range 4–9$\times$10$^7$ $M_\odot$ for the black hole mass. @merr06 noted the rather narrow broad emission lines of [HE0450–2958]{} and suggested that it should actually be viewed not as a standard QSO but as a higher-$L$ analog of local NLSy1s. If we compare [HE0450–2958]{} with estimates from the literature [@grup04; @ohta07], we find that [HE0450–2958]{} is consistent with the high black hole mass end of the known NLSy1 distribution and does not need to constitute a new “higher-$L$ NLSy1 analog” class of its own. But is it consistent regarding other properties as well? Morphologically, NLSy1 hosts are mostly spirals, often barred, and mostly not strongly disturbed [@ohta07].
Since galaxies have increasing bulge mass with increasing black hole mass, it is not clear which structural properties to expect and whether a merging system like this is consistent with the properties of the local, lower-mass NLSy1 population. There is even a debate on how different NLSy1s actually are from normal Seyferts. Recent studies show smaller BH mass differences between normal broad-line Sy1 and NLSy1 when using line dispersions instead of FWHM [@wats07], although a difference might remain. If galaxies with potentially core outflow-affected lines are considered separately, NLSy1 share the same $M_\mathrm{BH}-\sigma_\mathrm{bulge}$-relation with BLSy1, but their accretion rates are confirmed to lie often close to the Eddington limit [@komo07]. If we compute the [HE0450–2958]{} accretion luminosity – as derived from the $V$-band absolute magnitude of the quasar nucleus ($M_V=-25.75$, recomputed from the HST/ACS data with updated AGN color and $K$-correction) and a bolometric correction of $BC_V\sim8$ [@marc04; @elvi94] – in relation to its Eddington luminosity, we obtain from $M_\mathrm{BH}=6.5\pm2.5\times10^7$ $M_\odot$ a super-Eddington ratio of $L/L_\mathrm{Edd}=6.2^{+3.8}_{-1.8}$. This is consistent with high Eddington ratios observed for NLSy1 [@warn04; @math05a]. ![\[fig:m\_m\] $M_\mathrm{BH}$–$L_\mathrm{bulge}$-relation for inactive galaxies in the local Universe as presented by @tund07, with data from @haer04, @shan04 and @mclu04 [*(black lozenges and lines)*]{}. Overplotted are the upper limits for the host galaxy of [HE0450–2958]{} for the dust-free case by @maga05 from $V$-band imaging [*(small blue arrow)*]{} and with an $A_V=1$ added [*(small red arrow)*]{}, with their original black hole estimate, converted to our cosmology.
The [*blue and red rectangles*]{} show the range for black hole mass estimates and our new lower limits for the (total) galaxy luminosity from NICMOS and new upper limits based on the (still better constrained) optical HST data. Note: Here we combined the off-center flux lower limit (NICMOS $H$-band) with the upper limit for a co-centered host galaxy (ACS $V$-band) for a total upper limit. The arrows to the bottom right show the conversion of our $L_\mathrm{galaxy}$ limits to $L_\mathrm{bulge}$ limits for bulge-to-disk ratios of 1:2 and 1:4. Both the dust-free and the $A_V=1$ dust case show a galaxy that is fully consistent with the black hole mass, even if the bulge-to-disk ratio is accounted for. ](fig6.eps){width="\columnwidth"} With the new data and an explicit assumption/interpretation that the NE-extension is indeed associated with the host galaxy, we can for the first time present a black hole mass for [HE0450–2958]{} and bracketing limits for its host galaxy luminosity. We can thus place [HE0450–2958]{} on the $M_\mathrm{BH}$–$L_\mathrm{bulge}$-relation of active and inactive galaxies, with more than just an upper limit for galaxy luminosity. In Figure \[fig:m\_m\] we show data from @haer04 and others, as collected by @tund07. We overplot the limits on [HE0450–2958]{} for the two assumptions of dust attenuation strength (Sec. \[sec:hostgalaxydetection\]). This shows that even when applying a sensible conversion factor of 1 to 1/4 (up to 1.5 mag) to convert from total to bulge luminosity, the host of [HE0450–2958]{} will be a perfectly normal galaxy in this parameter space, with a luminosity around the knee of the galaxy luminosity function, $L\sim L^*$. Contrary to the claim by @maga05, it does not deviate substantially from the local $M_\mathrm{BH}$–$L_\mathrm{bulge}$-relation for normal, inactive massive galaxies, mainly due to the revised mass estimate for the black hole.
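The super-Eddington ratio quoted in the preceding subsection follows from textbook arithmetic (a sketch; the adopted solar $V$ magnitude, solar luminosity, and Eddington coefficient are standard assumed values, not taken from the paper, so the result only approximately reproduces the quoted 6.2):

```python
# Rough reproduction of the super-Eddington ratio L/L_Edd ~ 6 quoted
# in the text. M_V_sun and the Eddington coefficient are standard
# textbook values assumed here for illustration.

M_V_qso = -25.75      # quasar nucleus absolute V magnitude (from text)
M_V_sun = 4.83        # solar absolute V magnitude (assumed)
BC_V = 8.0            # bolometric correction, text: ~8
M_BH = 6.5e7          # black hole mass in Msun (adopted range midpoint)
L_SUN = 3.846e33      # erg/s (assumed)

L_V = 10 ** (-0.4 * (M_V_qso - M_V_sun))   # V-band luminosity in Lsun
L_bol = BC_V * L_V * L_SUN                 # bolometric luminosity, erg/s
L_edd = 1.26e38 * M_BH                     # Eddington luminosity, erg/s

ratio = L_bol / L_edd
print(f"L/L_Edd ~ {ratio:.1f}")  # close to the quoted 6.2
```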
However, this also means that [HE0450–2958]{} does not show an $M_\mathrm{BH}$/$L_\mathrm{bulge}$ ratio different from local broad-line AGN, consistent with being an NLSy1 analog if the @komo07 result is taken as a base. With the normal $M_\mathrm{BH}$/$L_\mathrm{bulge}$ ratio and the fact that we can now rule out huge amounts of obscuring dust around the QSO nucleus, the most likely explanation for the evasive host galaxy is indeed a high $L/L_\mathrm{Edd}$ accretion rate system – an NLSy1 at the high-mass end of the normal NLSy1 population. With the current evidence, Occam’s Razor favors this explanation over more exotic scenarios such as the ejection of the QSO’s black hole in a 3-body interaction or a gravitational recoil event involving the companion galaxy [e.g. @hoff06; @haeh06; @merr06; @bonn07]. However, these scenarios are formally not ruled out even if the upper limit were pushed down by another $\sim$5 magnitudes. All evidence combined is consistent with a system of a QSO with ULIRG-size IR emission, residing in an $L^*$ host galaxy that is in the process of colliding with a substantially more luminous and possibly more massive companion ULIR-galaxy[^4]. Much deeper high-resolution NIR imaging with a well-controlled PSF is the best way to finally find and trace the host galaxy (bulge) component of [HE0450–2958]{} predicted here, co-centered with the QSO nucleus, and to estimate its luminosity and mass directly.

Black hole – galaxy coevolution
-------------------------------

Given the black hole mass and Eddington ratio, the accretion rate of the BH is 12 M$_\odot$/yr. At the same time @papa08 derive a star formation rate from CO of 370 M$_\odot$/yr, predominantly in the companion galaxy. Applying a correction factor of 0.5 for mass returned to the interstellar matter by stellar winds, the stellar mass growth of the whole [HE0450–2958]{} system from star formation is 185 M$_\odot$/yr.
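The growth-rate comparison can be made explicit in a few lines (a sketch; the accretion rate of 12 M$_\odot$/yr is the value entering the 12/185 ratio quoted in the text, and the H$_2$ mass is inferred here by inverting the quoted gas-consumption timescale rather than taken directly from @papa08):

```python
# Arithmetic behind the coevolution comparison for HE0450-2958.

mdot_bh = 12.0        # Msun/yr, BH accretion rate (from text)
sfr = 370.0           # Msun/yr, total SFR from CO/FIR (@papa08)
recycling = 0.5       # fraction of mass returned by stellar winds

stellar_growth = sfr * (1.0 - recycling)       # 185 Msun/yr
ratio = mdot_bh / stellar_growth
print(f"BH/stellar growth ratio = {100 * ratio:.1f}%")          # ~6.5%
print(f"vs. local M_BH/M_bulge ~ 0.14%: factor {ratio/0.0014:.0f}")

# The quoted gas-consumption timescale of 9.5e7 yr then implies an
# H2 reservoir of roughly (inferred, not quoted directly here):
t_gas = 9.5e7                                  # yr
m_h2 = t_gas * stellar_growth                  # ~1.8e10 Msun
print(f"implied M_H2 ~ {m_h2:.1e} Msun")
```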
The ratio of black hole accretion and stellar mass growth is then 12/185=6.5%, which is substantially higher than the $M_\mathrm{BH}$/$M_\mathrm{bulge}$ relation for local galaxies of 0.14% [@haer04]. We can conclude the following: If activity timescales are identical for star formation and BH accretion, this system grows in black hole mass much more rapidly than the bulge is required to grow to keep the system on the $M_\mathrm{BH}$/$M_\mathrm{bulge}$ relation. Bulge growth at the required rate through in-situ star formation is in fact not possible at all, since the star formation is taking place in the companion and not the host galaxy. So in any case, a potential maintenance of the relation for this system, if it indeed holds, needs to be seen as an average over more than several 10$^8$ yr. On the other hand, a gas consumption timescale of 9.5$\times$10$^7$ yr – obtained by dividing the H$_2$ masses by the SF rates derived by @papa08 and accounting for 50% mass recycling – is possibly longer than the luminous quasar accretion phase. This adds to the requirement that processes like the tidal forces of the galaxy interaction redistribute mass, adding stars to the bulge of the host galaxy. These stars would to a large extent have already existed in the host galaxy’s disk or in the companion before the interaction, rather than being formed only now. The “coevolution” of the host galaxy and its black hole in [HE0450–2958]{} is clearly a two-part process: the build-up of stellar mass and the build-up of black hole and bulge mass. The former will take place on timescales of $>$1 Gyr through star formation; the latter two can “coevolve” if seen as an average over timescales longer than the BH accretion lifetime, plus a few dynamical timescales for the redistribution of stellar orbits of, say, $<$500 Myr.

How many [HE0450–2958]{}s are there?
------------------------------------

[HE0450–2958]{} is an unusual object. AGN in ULIRGs are common, but AGN right next to ULIRGs are not, particularly not luminous QSOs with inconspicuous host galaxies next to extreme star formers.
So is [HE0450–2958]{} one of a kind, or was it just the scarcity of IR imaging at 1 arcsec resolution and of high-resolution CO maps that has prevented us from finding similar objects en masse? In the higher-redshift Universe there was a recent report of a very similar system [@youn08]. LH850.02 at $z=3.3$ is the brightest submm galaxy in the Lockman hole. Using the Submillimeter Array, the authors find two components, of which one is a ULIRG with intense star formation, while the other component likely harbors an AGN. At $z>2$, however, objects like this might be quite common, since merging rates and gas reservoirs were much larger than today. If there existed a substantial number of similar systems at low redshifts, this would allow us to study mechanisms of the high-redshift Universe at much smaller distances. We try to estimate the frequency of such systems in the local Universe using the three morphologically best studied samples of quasars at $0.05\la z<0.43$. We deliberately use optically selected quasars only, as they have no bias with respect to the frequency of merger signatures or extreme SFRs, as IR-selected samples have by construction. In this way statements about the general population are possible. @jahn04a investigated a volume-limited and complete sample of 19 luminous QSOs out to $z=0.2$. While at least five of these QSOs are seen in intermediate and late stages of major mergers, only one, HE1254–0934, is a likely ULIRG[^5], as determined from its IRAS fluxes. It is also among the most distorted systems, with a companion at $\sim$1 arcsec distance from the QSO nucleus. The companion is more luminous than the host galaxy, and shows a substantial tidal tail. It looks remarkably similar to [HE0450–2958]{}. The two other samples are not volume-limited, so the selection function is unclear – except that these quasars stem from either optical or radio surveys, but not the IR.
@floy04 studied the morphologies of two intermediate- and high-luminosity samples of ten radio-quiet and seven radio-loud quasars at $0.29<z<0.43$, using HST imaging data. Only one of their 17 quasars shows a distorted geometry similar to [HE0450–2958]{} (1237–040 at $z=0.371$), but there is no information about the total IR emission or star-formation rates. The IRAS flux limit of 200 mJy corresponds to an upper limit of $L_\mathrm{IR}\sim6\times10^{12}$ $L_\odot$ at $z=0.37$, so ULIRG-strength emission from 1237–040 could have gone unnoticed by IRAS. A recent study by @kim08b determined the morphologies of 45 HST-archived quasars at $z<0.35$. It has one object in common with @floy04 and three objects with @jahn04a. Of their sample, three other objects (HE0354–5500, PG1613+658, PKS2349–01) are clearly merging with a nearby companion, and are likely ULIRGs as judged from their IRAS fluxes. However, only in the case of HE0354–5500 are the quasar and companion still well separated, with envelopes that have not yet merged into a common halo. The two other cases are in a very late merger state and star formation will likely occur all over the system. This adds up to at most 3 of 77 QSOs in the three samples combined that are possibly [HE0450–2958]{}-like. At $\le$4%, such systems are indeed rare in the local Universe. These three quasars, however, should be investigated in more detail. It needs to be tested how strong their star formation actually is, where in the system it is localized, and whether the separated companion is in any way connected to the AGN fuelling. If a situation similar to that of [HE0450–2958]{} is found, the result can set strong constraints on the ULIRG–AGN evolutionary scenario [@sand96] and the creation mechanisms of AGN at high redshifts. It can contribute to answering the question whether SF-ULIRG activity in AGN systems is an indicator of a specific mechanism of AGN fuelling.
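The sample accounting behind the quoted frequency can be spelled out (a sketch; overlap counts are as stated in the text):

```python
# Frequency estimate for HE0450-2958-like systems in the three
# optically selected quasar samples discussed in this subsection.

n_jahnke = 19          # Jahnke et al. 2004, volume-limited sample
n_floyd = 17           # Floyd et al. 2004 (10 RQQ + 7 RLQ)
n_kim = 45             # Kim et al. 2008, HST-archived quasars
overlap = 1 + 3        # 1 shared with Floyd, 3 shared with Jahnke

n_total = n_jahnke + n_floyd + n_kim - overlap   # 77 unique QSOs
n_candidates = 3       # HE0354-5500, PG1613+658, PKS2349-01 at most

frac = n_candidates / n_total
print(f"{n_candidates}/{n_total} = {100 * frac:.1f}%  (<= 4%)")
```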
Or whether these are just the most gas-rich merger-triggered AGN systems at the top end of SFRs, with a continuous sequence towards less gas-rich merger-triggered AGN systems. The merging–AGN fuelling mechanism could be identical from ULIRGs down to the Seyfert regime, where at some point secular mechanisms become more dominant. Lower-SFR systems could just be the consequence of lower gas mass, but this might only mildly impact the – much smaller – AGN fuelling rate.

Conclusions
===========

With new NIR and MIR images to spatially resolve the [HE0450–2958]{} system, and in the light of previously existing data, we find: 1. The companion galaxy is covered in optically thick and unevenly distributed dust. This makes it appear as a collisional ring galaxy in the optical, but it is intrinsically smooth, with NIR emission increasing towards a pronounced center. The star formation in the companion is similar to the strong starburst Arp 220, while softer M82-like star formation is ruled out. This can reconcile the SFR estimates from the optical and FIR. The companion is a star-formation powered ULIRG. 2. Our MIR image confirms a single warm dust point source at the location of the QSO nucleus. This supports a two-component dust SED with the warm component fully associated with the QSO nucleus, which is an AGN-powered ULIRG. 3. A dust-free line of sight to the quasar nucleus is evidence that the host galaxy is not obscured by large amounts of dust. However, the ULIRG-strength warm IR emission by the nucleus and the upper limit on star formation in the host galaxy of a substantial 60 M$_\odot$/yr leave room for dust. 4. With $H\ge16.9$ the current NICMOS images do not set stronger upper limits on the host galaxy of [HE0450–2958]{}. The $V$-band, $H$-band, and CO constraints give $M_V\ge-21.2$ to $M_V\ge-22.0$, depending on the assumed dust masses. 5. Flux in the NE-extension of $H=18.8$ is likely associated with the QSO’s host galaxy.
It corresponds to a first lower limit of $M_V<-20.4$ for the host galaxy. With a black hole of $\sim6.5\pm 2.5 \times10^7$ M$_\odot$ and an accretion rate of 12 M$_\odot$/yr corresponding to super-Eddington accretion, $L/L_\mathrm{Edd}=6.2^{+3.8}_{-1.8}$, the host galaxy is consistent with the $M_\mathrm{BH}$–$M_\mathrm{bulge}$-relation for normal galaxies. It is also consistent with [HE0450–2958]{} being an NLSy1 at the high end of the known black hole mass distribution. The reason for the high accretion rate is unclear but could be connected to [HE0450–2958]{} being in an early stage of merging with its gas-rich companion. A more exotic explanation for the system is currently not required by any data, but can in the end only be ruled out with much deeper, high-resolution NIR images to find the main body and bulge of the host galaxy. 6. If host galaxy and black hole in [HE0450–2958]{} are co-evolving according to the local $M_\mathrm{BH}$–$M_\mathrm{bulge}$ relation, this has to occur averaged over longer timescales (up to $\sim$500 Myr), and/or the mass growth of the bulge is predominantly caused not by the current star formation in the system, but by redistribution of preexisting stars. 7. A configuration as in the [HE0450–2958]{} system, with separate locations of the QSO nucleus and a strongly star-forming ULIRG companion, might be common at $z>2$ where gas masses and merger rates were higher, but at a fraction of $\le$4% it is extremely rare in the local Universe. The authors would like to thank E. F. Bell, A. Martínez Sansigre, H. Dannerbauer, E. Schinnerer, K. Meisenheimer, F. Courbin, P. Magain and H.-R. Klöckner for very fruitful discussions and helpful pointers. Based on observations made with ESO Telescopes at the Paranal Observatory under programme ID 276.B-5011.
Also based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program \#10797. This research has made use of the NASA/IPAC Extragalactic Database (NED). KJ acknowledges support through the Emmy Noether Programme of the German Science Foundation (DFG) with grant number JA 1114/3-1. AB is funded by the Deutsches Zentrum für Luft- und Raumfahrt (DLR) under grant 50 OR 0404. VC, Research Fellow, thanks Belgian Funds for Scientific Research. This work was also supported by PRODEX experiment arrangement 90312 (ESA and PPS Science Policy, Belgium). [*Facilities:*]{} , .

Bahcall, J. N., Kirhakos, S., & Schneider, D. P. 1994, ApJ, 435, L11
Bahcall, J. N., Kirhakos, S., & Schneider, D. P. 1995, ApJ, 450, 486
Beelen, A., Cox, P., Benford, D. J., Dowell, C. D., Kovács, A., Bertoldi, F., Omont, A., & Carilli, C. L. 2006, ApJ, 642, 694
Bonning, E. W., Shields, G. A., & Salviander, S. 2007, ApJ, 666, L13
Boyce, P. J., Disney, M. J., Blades, J. C., Boksenberg, A., Crane, P., Deharveng, J. M., Macchetto, F. D., Mackay, C. D., & Sparks, W. B. 1996, ApJ, 473, 760
Bruzual, G., & Charlot, S. 2003, MNRAS, 344, 1000
Canalizo, G., & Stockton, A. 2001, ApJ, 555, 719
Chantry, V., & Magain, P. 2007, A&A, 470, 467
Dannerbauer, H., Rigopoulou, D., Lutz, D., Genzel, R., Sturm, E., & Moorwood, A. F. M. 2005, A&A, 441, 999
Dasyra, K. M., Tacconi, L. J., Davies, R. I., Genzel, R., Lutz, D., Naab, T., Burkert, A., Veilleux, S., & Sanders, D. B. 2006, ApJ, 638, 745
de Grijp, M. H. K., Lub, J., & Miley, G. K. 1987, A&AS, 70, 95
Elbaz, D., Cesarsky, C. J., Chanial, P., Aussel, H., Franceschini, A., Fadda, D., & Chary, R. R.
2002, A&A, 384, 848
Elbaz, D., et al. 2009, to be submitted to A&A
Elvis, M., Wilkes, B. J., McDowell, J. C., Green, R. F., Bechtold, J., Willner, S. P., Oey, M. S., Polomski, E., & Cutri, R. 1994, ApJS, 95, 1
Feain, I. J., Papadopoulos, P. P., Ekers, R. D., & Middelberg, E. 2007, ApJ, 662, 872
Floyd, D. J. E., Kukula, M. J., Dunlop, J. S., McLure, R. J., Miller, L., Percival, W. J., Baum, S. A., & O’Dea, C. P. 2004, MNRAS, 355, 196
Groenewegen, M. A. T., Girardi, L., Hatziminaoglou, E., Benoist, C., Olsen, L. F., da Costa, L., Arnouts, S., Madejsky, R., Mignani, R. P., Rité, C., Sikkema, G., Slijkhuis, R., & Vandame, B. 2002, A&A, 392, 741
Grupe, D., & Mathur, S. 2004, ApJ, 606, L41
Haehnelt, M. G., Davies, M. B., & Rees, M. J. 2006, MNRAS, 366, L22
Häring, N., & Rix, H.-W. 2004, ApJ, 604, L89
Hoffman, L., & Loeb, A. 2006, ApJ, 638, L75
Hopkins, P. F., Hernquist, L., Cox, T. J., Di Matteo, T., Robertson, B., & Springel, V. 2006, ApJS, 163, 1
Jahnke, K., Kuhlbrodt, B., & Wisotzki, L. 2004, MNRAS, 352, 399
Jahnke, K., Sánchez, S. F., Wisotzki, L., Barden, M., Beckwith, S. V. W., Bell, E. F., Borch, A., Caldwell, J. A. R., Häu[ß]{}ler, B., Heymans, C., Jogee, S., McIntosh, D. H., Meisenheimer, K., Peng, C. Y., Rix, H.-W., Somerville, R. S., & Wolf, C. 2004, ApJ, 614, 568
Kim, M., Ho, L. C., Peng, C. Y., Barth, A. J., Im, M., Martini, P., & Nelson, C. H. 2008, submitted to ApJ, arXiv:0807.1337
Kim, M., Ho, L. C., Peng, C. Y., & Im, M. 2007, ApJ, 658, 107
Komossa, S., & Xu, D. 2007, ApJ, 667, L33
Krist, J. 2003, STScI, http://www.stsci.edu/software/tinytim
Letawe, G., Magain, P., Chantry, V., & Letawe, Y. 2009, MNRAS, 607
Letawe, G., Magain, P., & Courbin, F. 2008, A&A, 480, 69
Letawe, G., Magain, P., Courbin, F., Jablonka, P., Jahnke, K., Meylan, G., & Wisotzki, L. 2007, MNRAS, 378, 83
Lipari, S., Bergmann, M., Sanchez, S.
F., Garcia-Lorenzo, B., Terlevich, R., Mediavilla, E., Taniguchi, Y., Zheng, W., Punsly, B., Ahumada, A., & Merlo, D. 2009, ApJ (in press), arXiv:0901.3292
Low, F. J., Cutri, R. M., Huchra, J. P., & Kleinmann, S. G. 1988, ApJ, 327, L41
Low, F. J., Cutri, R. M., Kleinmann, S. G., & Huchra, J. P. 1989, ApJ, 340, L1
Magain, P., Courbin, F., & Sohy, S. 1998, ApJ, 494, 472
Magain, P., Letawe, G., Courbin, F., Jablonka, P., Jahnke, K., Meylan, G., & Wisotzki, L. 2005, Nature, 437, 381
Marconi, A., Risaliti, G., Gilli, R., Hunt, L. K., Maiolino, R., & Salvati, M. 2004, MNRAS, 351, 169
Mathur, S., & Grupe, D. 2005, ApJ, 633, 688
McLeod, K. K., & Rieke, G. H. 1995, ApJ, 454, L77
McLure, R. J., & Dunlop, J. S. 2004, MNRAS, 352, 1390
Merritt, D., Storchi-Bergmann, T., Robinson, A., Batcheldor, D., Axon, D., & Cid Fernandes, R. 2006, MNRAS, 367, 1746
Ohta, K., Aoki, K., Kawaguchi, T., & Kiuchi, G. 2007, ApJS, 169, 1
Pantin, E., Vanzi, L., & Weilenman, U. 2007, in 2007 ESO Instrument Calibration Workshop (Springer Verlag)
Papadopoulos, P. P., Feain, I. J., Wagg, J., & Wilner, D. J. 2008, ApJ, 684, 845
Peng, C. Y., Ho, L. C., Impey, C. D., & Rix, H.-W. 2002, AJ, 124, 266
Sanders, D. B., & Mirabel, I. F. 1996, ARA&A, 34, 749
Sérsic, J. 1968, Atlas de Galaxias Australes, Observatorio Astronomico de Cordoba
Shankar, F., Salucci, P., Granato, G. L., De Zotti, G., & Danese, L. 2004, MNRAS, 354, 1020
Somerville, R. S., Hopkins, P. F., Cox, T. J., Robertson, B. E., & Hernquist, L. 2008, MNRAS, 391, 481
Tundo, E., Bernardi, M., Hyde, J. B., Sheth, R. K., & Pizzella, A. 2007, ApJ, 663, 53
Warner, C., Hamann, F., & Dietrich, M. 2004, ApJ, 608, 136
Watson, L. C., Mathur, S., & Grupe, D. 2007, AJ, 133, 2435
Younger, J. D., Dunlop, J. S., Peck, A. B., Ivison, R. J., Biggs, A. D., Chapin, E. L., Clements, D. L., Dye, S., Greve, T.
R., Hughes, D. H., Iono, D., Smail, I., Krips, M., Petitpas, G. R., Wilner, D., Schael, A. M., & Wilson, C. D. 2008, MNRAS, 387, 707
Zhou, X.-L., Yang, F., Lü, X.-R., & Wang, J.-M. 2007, AJ, 133, 432

[^1]: The Sérsic profile [@sers68] is a generalized galaxy profile with variable wing strength, set by the “Sérsic parameter” $n$. It reverts to the exponential disk profile typical for spiral galaxies for $n=1$, and for $n=4$ it becomes the de Vaucouleurs profile found for many elliptical galaxies.

[^2]: Note that for the galaxy-scale star formation regions around QSO nuclei the dust can be heated by a mix of stellar emission and energy from the AGN. In this sense the 47 K found for the cool dust component of [HE0450–2958]{} agrees well with the mean SF-heated dust around higher-$z$ QSOs [also 47 K, @beel06], and can be composed of intrinsically cooler dust (20–30 K) plus AGN heating. This temperature could thus be a hint that a part of this cool dust component is indeed located in the QSO host galaxy and not in the companion.

[^3]: @cana01 used optical spectra only. With the optically thick dust now detected, we have to restrict their diagnosis mainly to the outer parts and surface of the companion. The population mix there might be identical to that in the core of the companion, but it does not necessarily have to be.

[^4]: It is interesting to note that the “companion” is close to a factor of 10 more luminous than the host galaxy. With all uncertainties included, it would still appear as if the typical mass-ratio upper limit of 1:3 for the merging galaxies in a ULIRG system [@dasy06] were exceeded here. However, when using the dynamical masses from @papa08 to predict a black hole mass in the host galaxy consistent with the @haer04 relation, we get a merger mass ratio of 1:1 or 1:2.
[^5]: This is a borderline case because it will fall slightly below or above the ULIRG definition limit depending on whether we include the upper limits at 12 and 25 $\mu$m or assume the flux to be zero.
---
abstract: 'It is demonstrated that radiative corrections increase tunneling probability of a charged particle.'
address:
- '$^1$ School of Physics, University of New South Wales, Sydney 2052, Australia'
- '$^2$ Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, Michigan 48824, USA'
author:
- 'V.V. Flambaum$^{1}$ and V.G. Zelevinsky$^{2}$[^1]'
title: Quantum Münchhausen effect in tunneling
---

The famous Baron von Münchhausen saved himself from a swamp by pulling himself up by his own hair [@Mun]. According to classical physics, such a feat seems to be impossible. However, we live in a quantum world. In the tunneling of a charged particle, the head of the particle’s wave function can send a photon to the tail, which absorbs this photon and penetrates the barrier with enhanced probability. Obviously, such photon feedback should work in two-body tunneling, where the first particle, while being accelerated by the potential after tunneling, can emit a (virtual) photon that increases the energy of the second particle and hence its tunneling probability. The Münchhausen mechanism may be helpful in the tunneling of a composite system. It is related to phonon-assisted tunneling but requires no special device, being always provided by the interaction of a charged particle with the radiation field. The interaction of a tunneling object with other degrees of freedom of the system, and the influence of this interaction on the tunneling probability, was for a long time a topic of intensive study initiated by Caldeira and Leggett [@cal]. Their general conclusion, in agreement with intuitive arguments, was that any friction-type interaction suppresses the tunneling. At the same time, it was realized that such an interaction leads to distortions of the barrier that can assist the tunneling.
The simplest effect is associated with the zero-point vibrations of the source responsible for the existence of the barrier. This is important for the probabilities of subbarrier nuclear reactions, as pointed out by Esbensen [@esb]. In the last decade, many experimental and theoretical efforts were devoted to the understanding of related aspects of subbarrier reactions; see the recent review [@bal] and references therein. Below we discuss the interaction of a charged tunneling object with the electromagnetic field that always accompanies the motion of the object. Formally speaking, we are looking for the effects of radiative corrections on the single-particle tunneling. These effects can be described by the Schrödinger equation with the self-energy operator: $$\label{H} \hat{H}\Psi({\bf r}) + \int \Sigma({\bf r},{\bf r}';E) \Psi({\bf r}') d^3r'= E \Psi({\bf r})$$ where $\hat{H}$ is the unperturbed particle Hamiltonian, which includes a barrier potential, and $\Sigma=M-i\Gamma/2$ is the complex nonlocal and energy-dependent operator determined by the coupling to virtual photons and the possibility of real photon emission. The “photon hand” here connects two points ${\bf r}$ and ${\bf r}'$ of the same wave function. In the one-photon approximation the self-energy due to the interaction with the transverse radiation field can be written as $$\Sigma({\bf r},{\bf r}';E)=\sum_{{\bf k},\lambda}|g_{{\bf k}}|^{2} \sum_{n}\frac{\langle {\bf r}|(\hat{{\bf p}}\cdot {\bf e}_{{\bf k}\lambda}) e^{i{\bf k\hat{r}}} |n\rangle\langle n|(\hat{{\bf p}}\cdot{\bf e}^{\ast}_{{\bf k}\lambda}) e^{-i{\bf k\hat{r}}}|{\bf r}'\rangle}{E-E_{n}-\omega_{{\bf k}}-i0}.
\label{1}$$ Here we sum over unperturbed stationary states $|n\rangle$; $\hat{{\bf r}}$ and $\hat{{\bf p}}$ are the position and momentum operators, respectively; the photons are characterized by the momentum ${\bf k}$, frequency $\omega_{{\bf k}}$ and polarization $\lambda$; the polarization vectors ${\bf e}_{{\bf k}\lambda}$ are perpendicular to ${\bf k}$ so that the momentum operators commute with the exponents. The normalization factors are included in $g_{{\bf k}}\propto \omega_{{\bf k}}^{-1/2}$. The relativistic generalization of (\[1\]) is straightforward. The Hermitian part $M$ of the self-energy operator is given by the principal value integral over photon frequencies in (\[1\]). The expectation value of $M$ is responsible for the Lamb shift of bound energy levels. It also contains the mass renormalization for a free particle, which should be subtracted. Our problem is different from the energy shift calculation for bound states since we are interested in the change of the wave function of the tunneling particle. However, we can use some features of the conventional approach. As is well known from Lamb shift calculations, one can use different approximations in the two regions of integration over the photon frequency $\omega$. In the nonrelativistic low-frequency region, $\omega<\beta m$, where the parameter $\beta<1$ is chosen in such a way that typical excitation energies of a particle in the well $\delta E$ are smaller than $\beta m$ (in the hydrogen Lamb shift problem the fine-structure constant $\alpha$ plays the role of the borderline scale parameter), it is possible to neglect the exponential factors in (\[1\]). The high-frequency contribution to $M$, where the potential can be considered as a perturbation to free motion, has been calculated, e.g., in Ref. [@Akhieser]. The two contributions match smoothly at $\omega =\beta m$. It is easy to estimate the mass operator $M$ with logarithmic accuracy.
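The logarithmic estimate carried out in the next paragraph rests on an elementary frequency integral. Schematically (a sketch, assuming $|E-E_{n}|\ll\omega$ throughout the integration range $\omega_{min}<\omega<\beta m$, which is the regime where the logarithm dominates):

```latex
% After the mass-renormalization subtraction, each intermediate state
% |n> contributes a frequency integral of the form
\int_{\omega_{min}}^{\beta m} d\omega\,
   \frac{E-E_{n}}{E-E_{n}-\omega}
\;\approx\; -(E-E_{n})\int_{\omega_{min}}^{\beta m}\frac{d\omega}{\omega}
\;=\; (E_{n}-E)\,L ,
\qquad L=\ln\frac{\beta m}{\omega_{min}} .
% Closure over the intermediate states then converts the sum into an
% operator sandwiched between momentum operators:
\sum_{n}\hat{{\bf p}}\,|n\rangle\,(E_{n}-E)\,\langle n|\,\hat{{\bf p}}
 \;=\; \hat{{\bf p}}\,(\hat{H}-E)\,\hat{{\bf p}} ,
% which is the structure appearing in the mass operator below.
```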
After summation over polarizations and standard regularization [@Akhieser], the low-frequency part of the operator $M$ can be written as $$\label{sigma} \hat{M}(E)=\frac{2 Z^2 \alpha}{3 \pi m^{2}} \int d\omega \sum_n \hat{{\bf p}}|n\rangle \frac{E - E_n}{E - E_n -\omega}\langle n|\hat{{\bf p}}$$ where $Ze$ is the particle charge, and $m$ is the mass of the particle (the reduced mass in the alpha-decay case). We use the units $\hbar=c=1$. Replacing the logarithm arising from the frequency integration by its average value $L= \ln(\beta m/\omega_{min})$, we can use the closure relation and obtain a simple expression $$\begin{aligned} \hat{M}(E)&=&\frac{2Z^{2}\alpha}{3\pi m^{2}}L\hat{{\bf p}}(\hat{H} -E)\hat{{\bf p}} \\ &=&\frac{Z^{2}\alpha}{3\pi m^{2}}L\left\{\nabla^{2}\hat{U} +[(\hat{H}-E),\hat{{\bf p}}^{2}]_{+} \right\}. \label{2}\end{aligned}$$ The mean value of the term with the anticommutator $[...,...]_{+}$ in eq. (\[2\]) is equal to zero since $(\hat{H}-E)\Psi_0=0$, where $\Psi_0$ is the unperturbed wave function. A correction to the wave function due to this term can be calculated by using perturbation theory and the unperturbed Schrödinger equation, $$\delta\Psi=\frac{2Z^{2}\alpha}{3\pi m}L[U-\langle 0|U|0\rangle]\Psi_{0}. \label{2a}$$ This correction is not essential since it does not influence the exponent in the tunneling amplitude. Combining the remaining term in eq. (\[2\]) with the high-frequency contribution, which contains $L= \ln(m/\beta m)$, see Ref.
[@Akhieser], the result can be presented as an effective local operator proportional to the Laplacian $\nabla^{2}U({\bf r})$, $$\label{sigmaloc} M({\bf r}, {\bf r}'; E) \simeq \nabla^{2}U({\bf r}) \delta({\bf r}- {\bf r}')\frac{ Z^2 \alpha}{3 \pi m^2} \ln\frac{m}{U_0} \equiv \delta U({\bf r})\delta({\bf r}-{\bf r}').$$ Here we used the barrier height $U_0$ as a lower cut-off $\omega_{min}$ of the integration over frequencies (below we give a semiclassical estimate which also leads to a more accurate evaluation of the logarithmic factor). For the tunneling of an extended object, the mass $m$ in the argument of the logarithm should be replaced by the inverse size of the particle, $1/r_0$, which comes from the upper frequency cut-off given in this case by the charge form factor. The obtained result is physically equivalent to averaging over the position fluctuations due to the coupling to virtual photons. Thus, in the logarithmic approximation the mass operator is reduced to a local correction $\delta U({\bf r})$ to the potential $U({\bf r})$. The Laplacian of the potential energy, $\nabla^{2} U({\bf r})$, is negative near the maximum of the barrier (correspondingly, it is positive near the bottom of the potential well). Therefore, we obtain a negative correction $\delta U({\bf r})$ to the potential barrier, which leads to the conclusion that the jiggling of the particle caused by the photon field increases its tunneling amplitude. The numerical value of the correction to the potential is small, $\sim$ 1 keV, for alpha-decay. However, in some cases it may be noticeable due to the exponential dependence on the height of the barrier (recall the notorious cold fusion problem). Also, there exist theories, like QCD, where the radiative corrections are not small. In many-body systems one can use collective modes, such as phonons, to transfer energy. This can influence electron tunneling through quantum dots or insulating surfaces.
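To get a feel for the size of this local correction, the expression for $\delta U$ can be evaluated numerically. The sketch below is ours, with purely illustrative barrier parameters: the charge, mass, barrier height, and curvature are assumptions for the sake of example, not values taken from the text.

```python
# Order-of-magnitude sketch (ours) of the local correction
#   delta_U = (Z^2 alpha / 3 pi m^2) ln(m / U0) * laplacian(U).
# All parameter values below are illustrative assumptions.
import math

HBARC = 197.327          # MeV*fm; converts 1/m^2 into fm^2 via (HBARC/m)^2
ALPHA = 1.0 / 137.036    # fine structure constant

def delta_U(Z, m_MeV, U0_MeV, laplacian_U):
    """Correction to the potential in MeV, with laplacian_U in MeV/fm^2.
    Negative near the barrier top, where the Laplacian of U is negative."""
    prefactor = (Z ** 2 * ALPHA) / (3.0 * math.pi) * (HBARC / m_MeV) ** 2
    return prefactor * math.log(m_MeV / U0_MeV) * laplacian_U

# Hypothetical alpha-decay-like numbers: Z = 2, reduced mass ~ 3727 MeV,
# barrier height 25 MeV, barrier radius ~ 7 fm; near a parabolic barrier
# top the curvature is negative, here crudely -2 U0 / R^2.
Z, m, U0, R = 2, 3727.0, 25.0, 7.0
correction = delta_U(Z, m, U0, -2.0 * U0 / R ** 2)
print(correction)  # small and negative: the barrier is slightly lowered
```

Since $\nabla^{2}U<0$ at the barrier top, the correction comes out negative, in line with the conclusion above that the radiative correction increases the tunneling amplitude.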
Energy exchange between a tunneling particle and the nuclear environment is known to be important in subbarrier nuclear fission and fusion [@bal]. A more detailed analysis can be performed using the semiclassical WKB approximation for the tunneling wave functions. The semiclassical radial Green function of unperturbed motion under the barrier can be written in terms of the classical momentum in the forbidden region, $p(r;E)=[2m(U(r)-E)]^{1/2}$, at a given energy $E$ as $$G(r,r';E)=-m[p(r)p(r')]^{-1/2}\left\{e^{-\int_{r'}^{r}d\xi\,p(\xi)}\Theta(r-r') +e^{-\int_{r}^{r'}d\xi\,p(\xi)}\Theta(r'-r)\right\} \label{3}$$ where $\Theta(x)$ is the step function. The full three-dimensional Green function $G(E)=\sum_n |n\rangle(E-E_{n})^{-1}\langle n|$ also contains angular harmonics, which can be separated in the routine way, accounting for the fact that in the long wavelength approximation for the $s$-wave solution the intermediate states are $p$-waves. Indeed, the operator of electric dipole radiation ${\bf \hat p}$ converts an initial $s$-wave $\Psi$ in eq. (\[H\]) into an intermediate $p$-wave state $|n\rangle$. Therefore, it is sufficient to keep the $p$-wave part of the radial Green function and to use closure in the sum over angular harmonics. The kernel of the integral term in the Schrödinger equation (\[H\]) contains $$K(r,r';E)=\int d\omega\,G(r,r';E-\omega). \label{4}$$ The integrand consists of terms falling exponentially as $|r-r'|$ increases. The potential $U(r)$ is assumed to be a smooth function. Therefore, we can put $p(r')\approx p(r)$. Now it is easy to perform the integration over $\omega$ in eq. (\[4\]), which leads to $$K(r,r';E)=-\frac{1}{|r-r'|}\left\{e^{-p_{min}|r-r'|} - e^{-p_{max}|r-r'|}\right\} \label{4a}$$ where $p_{min}=[2m(U_{p}(r)-E)]^{1/2}$, $p_{max}=[2\beta ]^{1/2}m$, and $U_{p}(r)$ is the effective $p$-wave radial potential, which includes the centrifugal part.
This expression has a very narrow maximum near $r=r'$, with the width $|r-r'| \sim 1/p_{max}$. This is a measure of the non-locality of the self-energy operator $M(r,r';E)$. In any nonrelativistic application the kernel can be treated as proportional to the delta-function. The proportionality coefficient can be found by integration over $r$. Thus, we obtain the local behavior of the kernel, $$K(r,r';E)\approx -L(r)\delta(r-r'), \label{5}$$ where now we determine the lower limit of the logarithm which appeared in our previous derivation (\[sigmaloc\]) as related to the local value of the potential, $$L(r)=\ln\frac{m}{|U_{p}(r)-E|}. \label{6}$$ The substitution into eq. (\[sigmaloc\]) gives $$\label{deltaU} \delta U({\bf r})= \frac{ Z^2 \alpha}{3 \pi m^2} \ln\frac{m}{|U_{p}(r)-E|} \nabla^{2}U({\bf r}).$$ As usual, this semiclassical expression is not valid near the turning points, where $U_{p}(r)=E$. However, a very weak logarithmic singularity does not produce any practical limitations on the applicability of eq. (\[deltaU\]). The conclusion that the tunneling probability is enhanced seems to contradict common sense: radiation should cause energy losses and reduce the tunneling amplitude of the charged particle. However, such an argument may be valid only for real photon emission. This emission is described by the anti-Hermitian part of the self-energy operator, which originates from the delta-function corresponding to on-shell processes, $$\label{gamma} \Gamma({\bf r},{\bf r}';E) =\frac{4 Z^2 \alpha}{3m^{2} } \int d\omega \sum_n \langle {\bf r}|\hat{{\bf p}} e^{i{\bf k\hat{r}}}|n\rangle\langle n| \hat{{\bf p}}e^{-i{\bf k\hat{r}}}|{\bf r}'\rangle \omega\delta(E - E_n -\omega).$$ Because of energy conservation, the sum here includes only states $|n\rangle$ with energy $E_n$ below $E$. Consider, for example, tunneling from the ground $s$-state. A dipole transition transfers the particle from the $s$-state to a $p$-state.
However, there are no quasidiscrete $p$-states $|n\rangle$ below the ground state in the potential well. Scattering $p$-waves can penetrate the potential barrier from the continuum with an exponentially small amplitude. This means that $\Gamma({\bf r},{\bf r}';E)$ is again exponentially small if one or both arguments ${\bf r}$ and ${\bf r}'$ are under the barrier or inside the potential well. Therefore, $\Gamma({\bf r},{\bf r}';E)$ does not considerably influence the tunneling amplitude. The reason for this can be easily understood. Real radiation would be impossible if there were no tunneling. Hence, the radiation width must vanish together with the tunneling width. On the contrary, the real part of $\Sigma$ under the barrier would be present even if the tunneling probability vanished. To avoid misunderstanding, we need to stress that the contribution to the radiation intensity from the barrier area and the potential well, which was, in application to nuclear alpha-decay, the subject of recent experimental [@exp] and theoretical [@Dyakonov; @Bertch; @B] studies, may still be important. The radiation amplitude with $E_{s}-E_{p}=\omega$ contains the matrix element $$\label{v} \langle s|\hat{{\bf p}}|p\rangle=\frac{1}{\omega}\langle s|[\hat{H},\hat{{\bf p}}]|p\rangle= \frac{i}{\omega}\langle s|\nabla \hat{U}|p\rangle.$$ When one moves inside the barrier from the outer turning point inwards, the resonance $s$-wave function exponentially increases while the non-resonance $p$-wave function exponentially decreases. As a result, the product $\psi_s(r)\psi_p(r)$ does not change considerably. This means that the contribution to the real radiation from the inner area may be comparable to that from the area outside the barrier.
The gradient $\nabla U$ changes its sign near the maximum of the potential, which implies a destructive interference between the radiation from the different areas (since $|s\rangle$ is the non-oscillating ground state wave function, the product $\psi_s(r)\psi_p(r)$ does not change sign inside the barrier). This work was supported by NSF grant 96-05207. V.F. gratefully acknowledges the hospitality and support from the NSCL. [99]{} G.A. Bürger, [*Wunderbare Reisen zu Wasser und Lande, Feldzüge und lustige Abenteuer des Freyherrn von Münchhausen*]{} (London, 1786). A.O. Caldeira and A.J. Leggett, Ann. Phys. [**149**]{}, 374 (1983). H. Esbensen, Nucl. Phys. A [**352**]{}, 147 (1981). A.B. Balantekin and N. Takigawa, Rev. Mod. Phys. [**70**]{}, 77 (1998). A.I. Akhiezer and V.B. Berestetskii, [*Quantum Electrodynamics*]{} (Interscience Publishers, NY, 1965). J. Kasagi [*et al.*]{}, Phys. Rev. Lett. [**79**]{}, 371 (1997). M.I. Dyakonov and I.V. Gornyi, Phys. Rev. Lett. [**76**]{}, 3542 (1996). T. Papenbrock and G.F. Bertsch, Phys. Rev. Lett. [**80**]{}, 4141 (1998). C.A. Bertulani, D.T. de Paula and V.G. Zelevinsky, [*to be published*]{}. [^1]: email address: zelevinsky@nscl.msu.edu
--- abstract: 'Many real world problems can now be effectively solved using supervised machine learning. A major roadblock is often the lack of an adequate quantity of labeled data for training. A possible solution is to assign the task of labeling data to a crowd, and then infer the true label using aggregation methods. A well-known approach for aggregation is the Dawid-Skene (DS) algorithm, which is based on the principle of Expectation-Maximization (EM). We propose a new simple, yet effective, EM-based algorithm, which can be interpreted as a ‘hard’ version of DS, that allows much faster convergence while maintaining similar accuracy in aggregation. We show the use of this algorithm as a quick and effective technique for online, real-time sentiment annotation. We also prove that our algorithm converges to the estimated labels at a linear rate. Our experiments on standard datasets show a significant speedup in time taken for aggregation - up to $\sim$8x over Dawid-Skene and $\sim$6x over other fast EM methods, at competitive accuracy performance. The code for the implementation of the algorithms can be found at <https://github.com/GoodDeeds/Fast-Dawid-Skene>.'
author: - 'Vaibhav B Sinha, Sukrut Rao, Vineeth N Balasubramanian' bibliography: - 'fastdawidskene.bib' title: 'Fast Dawid-Skene: A Fast Vote Aggregation Scheme for Sentiment Classification' --- Introduction ============ Supervised learning has been highly effective in solving challenging tasks in sentiment analysis over the last few years. However, the success of supervised learning for the domain in recent years has been premised on the availability of large amounts of data to effectively train models.
Obtaining a large labeled dataset is time-consuming, expensive, and sometimes infeasible; this has often been the bottleneck in translating the success of machine learning models to newer problems in the domain. An approach that has been used to solve this problem is to crowdsource the annotation of data, and then aggregate the crowdsourced labels to obtain ground truths. Online platforms such as Amazon Mechanical Turk and CrowdFlower provide a friendly interface where data can be uploaded, and workers can annotate labels in return for a small payment. With the ever-growing need for large labeled datasets and the prohibitive costs of seeking experts to label large datasets, crowdsourcing has been used as a viable option for a variety of tasks, including sentiment scoring [@CSsentimentscoring], opinion mining [@CScommodityreview], general text processing [@Snow:2008:CFG:1613715.1613751], taxonomy creation [@Bragg2013CrowdsourcingMC], or domain-specific problems, such as in the biomedical field [@DBLP:journals/corr/GuanGDH17; @Albarqouni2016AggNetDL], among many others. In recent times, there is a growing need for a fast, real-time solution for judging the sentiment of various kinds of data, such as speech, text articles, and social media posts. Given the ubiquitous use of the internet and social media today, and the wide reach of any information disseminated on these platforms, it is critical to have an efficient vetting process to ensure prevention of the usage of these platforms for anti-social and malicious activities. Sentiment data is one such parameter that could be used to identify potentially harmful content. A very useful source for identifying harmful content is other users of these internet services, who report such content to the service administrators. Often, these services are set up such that on receiving such a flag, they ask other users interacting with the same content to classify whether the content is harmful or not.
Then, based on these votes, a final decision can be made without the need for any human intervention. Some such works include: crowdsourcing the sentiment associated with words [@CSsentimenttoword], crowdsourcing sentiment scoring for online media [@CSsentimentscoring], crowdsourcing the classification of words to be used as part of a lexicon for sentiment analysis [@CSlexicon], crowdsourcing sentiment judgment for video review [@CSvideoreview], crowdsourcing for commodity review [@CScommodityreview], and crowdsourcing the production of word-level annotations for opinion mining tasks [@CSsyntacticrelatedness]. However, with millions of users creating and adding new content every second, it is necessary that this decision be quick, so as to keep up with and effectively address all flags being raised. This indicates a need for fast vote aggregation schemes that can provide results for a stream of data in real time. The use of crowdsourced annotations requires a check on the reliability of the workers and the accuracy of the annotations. While the platforms provide basic quality checks, it is still possible for workers to provide incorrect labels due to misunderstanding, ambiguity in the data, carelessness, lack of domain knowledge, or malicious intent. This can be countered by obtaining labels for the same question from a large number of annotators, and then aggregating their responses using an appropriate scheme. A simple approach is to use majority voting, where the answer chosen by the majority of annotators is taken to be the true label; this is often effective. However, many other methods have been proposed that perform significantly better than majority voting, and these methods are summarized further in Section \[related\].
Despite the various recent methods proposed, one of the most popular, robust, and oft-used methods to date for aggregating annotations is the Dawid-Skene algorithm, proposed by [@dawid1979maximum], based on the Expectation Maximization (EM) algorithm. This method uses the M-step to compute error rates, which are the probabilities of a worker providing an incorrect class label to a question with a given true label, and the class marginals, which are the probabilities of a randomly selected question having a particular true label. These are then used to update the proposed set of true labels in the E-step, and the process continues till the algorithm converges on a proposed set of true labels (further described in Section \[dawidskenealgo\]). In this work, we propose a new simple, yet effective, EM-based algorithm for the aggregation of crowdsourced responses. Although formulated differently, the proposed algorithm can be interpreted as a ‘hard’ version of Dawid-Skene (DS) [@dawid1979maximum], similar to Classification EM [@celeux1992classification] being a hard version of the original EM. The proposed method converges up to 7.84x faster than DS, while maintaining similar accuracy. As part of this work, we also propose a hybrid approach, a combination of our algorithm with the Dawid-Skene algorithm, that brings together the high rate of convergence of our algorithm and the better likelihood estimation of the Dawid-Skene algorithm. Related Work {#related} ============ The Expectation-Maximization algorithm for maximizing likelihood was first formalized by [@10.2307/2984875]. Soon after, Dawid and Skene [@dawid1979maximum] proposed an EM-based algorithm for estimating maximum likelihood of observer error rates, which became very popular for crowdsourced aggregation and is still considered by many as a baseline for performance.
Many researchers, to this day, have worked on analyzing and extending the Dawid-Skene methodology (henceforth called DS), of which we summarize the more recent efforts below. Work on crowdsourced data aggregation has not been confined to sentiment analysis or opinion mining tasks; most of the methods are generic and can easily be used for such tasks. A new model, GLAD, was proposed in [@NIPS2009_3644], that could simultaneously infer the true label, the expertise of the worker, and the difficulty of the problem, and use this to improve on the labeling scheme. [@Raykar:2010:LC:1756006.1859894] improved upon DS by jointly learning the classifier while aggregating the crowdsourced labels. However, the efforts of [@NIPS2009_3644] were restricted to binary choice settings, and [@Raykar:2010:LC:1756006.1859894] focused on classification performance, which is not the focus of this work. [@ipeirotis2010quality] presented improvements over DS to recover from biases in labels provided by the crowd, such as cases where a worker always provides a higher label than the true label when labels are ordinal. More recently, [@NIPS2016_6124] analyzed and characterized the tradeoff between the cost of obtaining labels from a large group of people per data point and the improved accuracy on doing so, as well as the differences between adaptive and non-adaptive DS schemes. In addition to these efforts, there has also been a renewed interest in recent years in understanding the rates of convergence of the Dawid-Skene method. [@minimax-optimal-convergence-rates-for-estimating-ground-truth-from-crowdsourced-labels] obtained the convergence rates of a projected EM algorithm under the homogeneous DS model, which however is a constrained version of the general DS model. [@NIPS2014_5431] proposed a two-stage algorithm which uses spectral methods to offset the limitations of DS to achieve near-optimal rate convergence.
[@article] recently proposed a permutation-based generalization of the DS model, and derived optimal rates of convergence for these models. However, none of these efforts have explicitly focused on increasing the speed of convergence, or making Dawid-Skene more efficient in practice. The work in [@IWMV] is the closest in this regard, where they proposed an EM-based Iterative Weighted Majority Voting (IWMV) algorithm which experimentally leads to fast convergence. We use this method for comparison in our experiments. In addition to methods based on Dawid-Skene, other methods for vote aggregation have been developed, such as using Gaussian processes [@Rodrigues:2014:GPC:3044805.3044941] and online learning methods [@Welinder2010OnlineCR]. The scope of the problem addressed by Dawid-Skene has also been broadened, to allow cases such as when a data point may have multiple true labels [@DUAN20145723]. (In this work, we show how our method can be extended to this setting too.) For ensuring reliability of the aggregated label, a common approach is to use a large number of annotators, which may however increase the cost. To mitigate this, work has also been done to intelligently assign questions to particular annotators [@0768fc60fef84637864e13671a981243], reduce the number of labels needed for the same accuracy [@Welinder2010OnlineCR], consider the biases in annotators [@NIPS2011_4311] and so on. Recent work on vote aggregation also includes deep learning-based approaches, such as [@Albarqouni2016AggNetDL; @training-deep-neural-nets-aggregate-crowdsourced-responses; @DBLP:journals/corr/abs-1709-01779]. A survey of many earlier methods related to vote aggregation can be found in the work of [@10.1007/978-3-642-41154-0_1] and [@sheshadri2013square]. Moreover, a benchmark collection of methods and datasets for vote aggregation is defined in [@sheshadri2013square], which we use for evaluating the performance of our method. 
While many new methods have been developed, the DS algorithm still remains relevant as one of the most robust techniques, and is used as a baseline for nearly every new method. Inspired by [@celeux1992classification], our work proposes a simple EM-based algorithm for vote aggregation that provides performance similar to Dawid-Skene but with a much faster convergence rate. We now describe our method. Proposed Algorithm {#algos} ================== We propose an Expectation-Maximization (EM) based algorithm for efficient vote aggregation. The E-step estimates the dataset annotation based on the current parameters, and the M-step estimates the parameters which maximize the likelihood of the dataset. Starting from a set of initial estimates, the algorithm alternates between the M-step and the E-step till the estimates converge. Although formulated using a different approach to the aggregation problem, we call our algorithm Fast Dawid-Skene (FDS) because of its similarity to the DS algorithm (described in Section \[dawidskenealgo\]). Preliminaries {#subsec_preliminaries} ------------- For convenience, we use the analogy of a question-answer setting to model the crowdsourcing of labels. The data shown to the crowd is viewed as a question, and the possible labels as choices of answers from the crowd worker/participant. Let the questions (data points, problems) that need to be answered be $q = \{1,2,3,\dots,Q\}$ and the annotators (participants, workers) labeling them be $a = \{1,2,3,\dots,A\}$. The task requires the participants to label each question by selecting one of a predefined set of choices (options), $c = \{1,2,3,\dots, C\}$, which is the same across all questions. A participant is said to answer a given question when s/he chooses an option as the answer for that question.
A participant need not answer all the questions, and in fact, for a large pool of questions, it is reasonable to assume that a participant might be invited to answer only a small subset of all the questions. Each question is assumed to be answered by at least one participant (ideally, more). We also assume that the choice selected by a participant for a question is independent of the choice selected by any other participant. This assumption holds for real-world applications that use contemporary crowdsourcing methods, where participants generally do not know each other, and are often physically and geographically separated, and thus do not influence each other. Besides, while answering a question, the participants have no knowledge of the choices chosen by previous participants in these settings. The Fast Dawid-Skene Algorithm {#ouralgo} ------------------------------ We now derive the proposed Fast Dawid-Skene (FDS) algorithm under the assumption that each question has only one correct choice, and that a participant can select only one choice for each question. (In Section \[discussions\], we show how our method can be extended to relax this assumption.) Our goal is to aggregate the choices of the crowd for a question and to approximate the correct choice. Consider the question $q$. Let the $K$ participants that answered this question be $\{q_1, q_2, \dots, q_K\}$. The value of $K$ may vary for different questions. Let the choices chosen by these $K$ participants for question $q$ be $\{c_{q_1}, c_{q_2}, \dots, c_{q_K}\}$, and the correct (or aggregated) answer to be estimated for the question $q$ be $Y_q$. We define the answer to the question $q$ to be the choice $c \in \{1,2,\dots,C\}$ for which $P\left(Y_{q} = c | c_{q_1}, c_{q_2}, \dots, c_{q_K}\right)$ is maximum. 
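As a toy illustration of this setting (the data and annotator names below are hypothetical), each question can be stored as a map from annotators to their chosen option; majority voting over such data, which serves as the initial estimate later in the algorithm, is then a one-liner:

```python
# Minimal sketch of the setting: each question maps annotators to choices.
# Majority voting picks the most frequent choice per question
# (ties broken arbitrarily by Counter's insertion order).
from collections import Counter

def majority_vote(answers):
    return {q: Counter(votes.values()).most_common(1)[0][0]
            for q, votes in answers.items()}

answers = {
    0: {"a1": 1, "a2": 1, "a3": 2},
    1: {"a1": 2, "a3": 2},          # not every annotator answers every question
    2: {"a2": 1, "a3": 1},
}
print(majority_vote(answers))  # {0: 1, 1: 2, 2: 1}
```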
Using Bayes’ theorem and the independence assumption among participants’ answers, we obtain: $$\begin{aligned} \label{e1} P&(Y_{q} = c | c_{q_1},c_{q_2},\dots, c_{q_K})\nonumber \\ &= \frac{P(c_{q_1}, c_{q_2}, \dots, c_{q_K} | Y_{q} = c)P(Y_{q} = c)}{\sum\limits_{c'=1}^{C} P(c_{q_1}, c_{q_2}, \dots, c_{q_K} | Y_{q} = c')P(Y_{q} = c')}\nonumber\\ &= \frac{\left(\prod\limits_{k = 1}^{K} P(c_{q_k} | Y_{q} = c)\right)P(Y_{q} = c)}{\sum\limits_{c' = 1}^{C} \left(\prod\limits_{k = 1}^{K} P(c_{q_k} | Y_{q} = c')\right)P(Y_{q} = c') } \end{aligned}$$ Let $T_{qc}$ be the indicator that the answer to question $q$ is choice $c$. Using our formulation: $$\label{e2} T_{qc} = \begin{cases} 1 &c = \underset{j \in \{1,2,\dots,C\}}{\arg\max}\, P(Y_{q} = j | c_{q_1}, c_{q_2}, \dots, c_{q_K}) \\ 0 & \text{otherwise} \end{cases}$$ These $T_{qc}$s serve as the proposed answer sheet. To determine the correct (or aggregated) choice for a question $q$, we need the values of $P(c_{q_k} | Y_{q} = c)$ for all $k$ and $c$, which however are not known given only the choices from the crowd annotators. However, if the correct choices are known for all the questions, we can compute these parameters. Let $q_k$ be the annotator $a$. To compute the parameters, we first define the following sets: $$S_{a}^{(c)} = \left\{ i\, |\, Y_i = c \wedge a \text{ has answered question } i \right\}$$ and $$T_{c_a}^{(c)} = \left\{ i \,|\, Y_i = c \wedge a \text{ has answered } c_a \text{ on question } i \right\}$$ Then, we have: $$\label{e3} P(c_a | Y_{q} = c) = \frac{ \left| T_{c_a}^{(c)} \right|}{ \left| S_a^{(c)} \right|}$$ where $\left| \cdot \right| $ denotes the cardinality of the set. Also, $P(Y_{q} = c)$ can be defined as: $$\label{e4} P(Y_{q} = c) = \frac{\text{Number of questions having answer as }c}{\text{Total number of questions}}$$ The above quantities can be estimated if we have the correct choices, and conversely, the correct choices can be obtained using the above quantities.
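This decision rule can be sketched directly, assuming the parameters $P(c_{q_k} | Y_q = c)$ and $P(Y_q = c)$ are already known; all numbers below are hypothetical and purely illustrative:

```python
def hard_assign(votes, pi, p, C):
    """Pick the class c maximizing p[c] * prod over (a, l) in votes of
    pi[a][c][l], i.e. the arg-max posterior. votes: list of
    (annotator, choice) pairs for one question."""
    best_c, best_s = 0, -1.0
    for c in range(C):
        s = p[c]
        for a, l in votes:
            s *= pi[a][c][l]          # P(annotator a says l | true class c)
        if s > best_s:
            best_c, best_s = c, s
    return best_c

# Hypothetical confusion matrices: a0 is reliable, a1 answers uniformly.
pi = {"a0": [[0.9, 0.1], [0.1, 0.9]],
      "a1": [[0.5, 0.5], [0.5, 0.5]]}
p = [0.5, 0.5]  # uniform class marginals
print(hard_assign([("a0", 0), ("a1", 1)], pi, p, 2))  # 0: the reliable vote wins
```

Note how the uninformative annotator contributes the same factor to every class, so the reliable annotator's vote decides the arg-max.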
We hence use an Expectation-Maximization (EM) strategy, where the E-step calculates the correct answer for each question, while the M-step determines the maximum likelihood parameters using equations \[e3\] and \[e4\]. There are no pre-calculated values of the parameters to begin with, and so in the first E-step, we estimate the correct choices using majority voting. We continue applying the EM steps until convergence. As the convergence criterion, we require the total difference between the class marginals of two consecutive iterations to fall below a fixed threshold. We discuss the convergence criterion in more detail in Section \[experiments\]. The proposed algorithm is summarized below in Algorithm \[fdsalgorithm\].
Input: Crowdsourced choices of $Q$ questions by $A$ participants (annotators) from $C$ choices.
Output: Proposed true choices $T_{qc}$.
Initialize: estimate $T$s using majority voting.
Repeat until convergence:
*M-step:* Obtain the parameters $P(c_a | Y_{q} = c)$ and $P(Y_{q} = c)$ using Equations \[e3\] and \[e4\].
*E-step:* Estimate $T$s using the parameters $P(c_a | Y_{q} = c)$ and $P(Y_{q} = c)$, with the help of Equations \[e2\] and \[e1\].
Connection to Dawid-Skene Algorithm {#dawidskenealgo} ----------------------------------- The Dawid-Skene algorithm [@dawid1979maximum] was one of the earliest EM-based methods for aggregation, and still remains popular and competitive with newer approaches. In this subsection, we briefly describe the Dawid-Skene methodology and show the connection of our approach to this method. As defined in [@dawid1979maximum], the maximum likelihood estimators for the DS method are given by: $$\begin{aligned} \hat{\pi}_{cl}^{(a)} &= \frac{\text{number of times participant $a$ chooses $l$ when $c$ is correct}}{\text{number of questions seen by participant $a$ when $c$ is correct}} \end{aligned}$$ and $\hat{p_c}$, which is the probability that a question drawn at random has a correct label of $c$. Let $n_{ql}^{(a)}$ be the number of times participant $a$ chooses $l$ for question $q$.
Let $\{T_{qc} : q = 1,2,\dots, Q\}$ be the indicator variables for question $q$. If choice $m$ is true, for question $q$, $T_{qm} = 1$ and $\forall j \ne m,\,T_{qj} = 0$. Given the assumptions made in Section \[subsec\_preliminaries\], when the true responses of all questions are available, the likelihood is given by: $$\label{e8} \prod_{q=1}^{Q} \prod_{c=1}^{C} \left\{ p_c \prod_{a=1}^{A} \prod_{l=1}^{C} \left(\pi_{cl}^{(a)}\right)^{n_{ql}^{(a)}}\right\}^{T_{qc}}$$ where $n_{ql}^{(a)}$ and $T_{qc}$ are known. Using equation \[e8\], we obtain the maximum likelihood estimators as: $$\label{e9} \hat{\pi}_{cl}^{(a)} = \frac{\sum_q T_{qc} n_{ql}^{(a)}}{\sum_l \sum_q T_{qc} n_{ql}^{(a)}}$$ $$\label{e10} \hat{p}_c = \frac{\sum_q T_{qc}}{Q}$$ We then obtain using Bayes’ theorem: $$\label{e11} p(T_{qc} = 1 | \text{data}) = \frac{\prod_{a=1}^{A} \prod_{l=1}^{C} (\pi_{cl}^{(a)})^{n_{ql}^{(a)}} p_c }{ \sum_{r=1}^{C} \prod_{a=1}^{A} \prod_{l=1}^{C} (\pi_{rl}^{(a)})^{n_{ql}^{(a)}} p_r}$$ The DS algorithm is then defined by using equations \[e9\] and \[e10\] to obtain the estimates of $p$s and $\pi$s in the M-step, followed by using equation \[e11\] and the estimates of $p$s and $\pi$s to calculate the new estimates of $T$s in the E-step. These two steps are repeated until convergence (when the values don’t change over an iteration). A close examination of the DS and proposed FDS algorithms shows that our algorithm can be perceived as a ‘hard’ version of DS. The DS algorithm derives the likelihood assuming that the correct answers (which are ideally binary-valued) are known, but uses the values for $T_{qc}$ (which form a probability distribution over the choices) directly as obtained from equation \[e11\]. Instead, in our formulation, we always have $T_{qc}$ as either $0$ or $1$ after each E-step. 
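To make this contrast concrete, the following is a minimal end-to-end sketch, in our reading, of the hard EM loop of FDS; the smoothing constant `eps`, which avoids zero probabilities for unseen annotator-choice pairs, is our addition and not part of the algorithm as stated:

```python
# Sketch (ours) of the FDS loop: majority-vote init, then alternate an M-step
# (class marginals p and annotator error rates pi) with a hard E-step that
# collapses the posterior of Equation (e1) to an arg-max, per Equation (e2).
import numpy as np

def fast_dawid_skene(votes, Q, A, C, max_iter=100, tol=1e-6, eps=1e-9):
    """votes: list of (question, annotator, choice) triples with integer ids.
    Returns a length-Q array of hard labels."""
    # First E-step: majority voting.
    counts = np.zeros((Q, C))
    for q, a, l in votes:
        counts[q, l] += 1
    T = counts.argmax(axis=1)
    prev_p = None
    for _ in range(max_iter):
        # M-step: class marginals and per-annotator confusion matrices.
        p = np.bincount(T, minlength=C) / Q
        pi = np.full((A, C, C), eps)          # eps smooths zero counts
        for q, a, l in votes:
            pi[a, T[q], l] += 1
        pi /= pi.sum(axis=2, keepdims=True)
        # E-step + hard assignment: arg-max log-posterior per question.
        logpost = np.log(p + eps) + np.zeros((Q, C))
        for q, a, l in votes:
            logpost[q] += np.log(pi[a, :, l])
        T = logpost.argmax(axis=1)
        # Convergence: change in class marginals below a fixed threshold.
        if prev_p is not None and np.abs(p - prev_p).sum() < tol:
            break
        prev_p = p
    return T
```

On synthetic votes where two annotators are reliable and one is adversarial, the loop settles on the reliable annotators' labels within a couple of iterations.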
Our method is similar to the well-known Classification EM proposed in [@celeux1992classification], which shows that a ‘hard’ version of EM converges significantly faster and scales to large datasets [@jollois2007speed]. We show empirically in Section \[experiments\] that this subtle difference between DS and FDS ensures that changes in the answer sheet dampen quickly, and allows our method to converge much faster than DS with comparable performance. A careful implementation of both FDS and DS provides a solution in $O(QACn)$ time under the assumption that there is only one correct choice for each question, where $n$ is the number of iterations required by the algorithm to converge. As the cost per iteration of FDS is similar to that of DS by the nature of its formulation, the speedup of our algorithm is proportional to the ratio of the numbers of iterations the two algorithms require to converge, which we also confirm experimentally.

Theoretical Guarantees for Convergence
--------------------------------------

In this subsection, we establish guarantees for convergence. We prove that if we start from an area close to a local maximum of the likelihood, we are guaranteed to converge to that maximum at a linear rate. For the analysis of our algorithm’s convergence, we first frame it in a way similar to the Classification EM algorithm proposed by [@celeux1992classification]. Classification EM introduces an extra C-step (Classification step) after the E-step. This is the step that assigns each question a single answer, thus performing a ‘hard’ clustering of questions over options instead of the ‘soft’ clustering of DS. To continue with the proof we will use the notation used for DS. The term $ P(c_{q_k} | Y_{q} = c)$ for FDS is replaced by $\pi_{cc_{qk}}^{q_k}$ and the term $ P(Y_q = c) $ for FDS is replaced by $p_c$. $n_{ql}^{(a)}$ used by DS would be either $1$ or $0$ for the setting considered.
Having established the analogy, we restate the algorithm in CEM form (Algorithm \[cemalgorithm\]).

*Algorithm \[cemalgorithm\] (FDS in CEM form).* *Input:* Crowdsourced choices of $Q$ questions by $A$ participants (annotators) from $C$ choices. *Output:* Proposed true choices $T_{qc}$.

1. Estimate $T$s using majority voting. This essentially performs the first E and C steps.
2. Repeat until convergence:
   - *M-step:* Obtain the parameters, $\pi$s and $p$s, using Equations \[e3\] and \[e4\].
   - *E-step:* Estimate $T$s using the parameters, $\pi$ and $p$, with the help of Equation \[e1\].
   - *C-step:* Assign $T$s using the values obtained in the E-step and Equation \[e2\].

We prove the convergence of the CEM algorithm similarly to [@celeux1992classification]. For the proof, let us first form partitions. We form $C$ partitions out of all the questions based on their correct answer in a step. $$P_c = \{q | Y_q = c\}$$ In the CEM approach, each question can belong to only one partition. Now, we define the CML (Classification Maximum Likelihood) criterion: $$C_2(P,p,\pi) = \sum_{c=1}^{C} \sum_{q \in P_c} \log \left({ p_c f(q, \pi_c)}\right)$$ In the above equation, $\pi_c = \{\pi_{cj}^{(a)} | \forall j \in \{1\dots C\} \text{ and a } \in \{1\dots A\} \}$ and $$f(q,\pi_c) = \prod_{a=1}^{A} \prod_{l=1}^{C} \left(\pi_{cl}^{(a)}\right)^{n_{ql}^{(a)}}$$ To prove convergence, we introduce some additional notation. Note that we begin the algorithm by first performing a majority vote. This assigns each question to a class and forms the first partition, which we denote $P^0$. We then proceed to the M-step and estimate $\pi$ and $p$; we denote this first set of parameters by $\pi^1$ and $p^1$. The next EC step gives the next partition, $P^1$. Thus, the algorithm continues to calculate $(P^{m}, p^{m+1}, \pi^{m+1})$ from $(P^{m}, p^{m}, \pi^{m})$ in the M-step. Then, in the EC step, it calculates $(P^{m+1}, p^{m+1}, \pi^{m+1})$ from $(P^{m}, p^{m+1}, \pi^{m+1})$.
For the sequence $(P^{m}, p^{m}, \pi^{m})$ obtained by FDS, the value of $C_2(P^{m}, p^{m}, \pi^{m})$ increases and converges to a stationary value. Under the assumption that the $p$s and $\pi$s are well defined, the sequence $(P^{m}, p^{m}, \pi^{m})$ converges to a stationary point. To prove the above theorem we show that\ $C_2(P^{m+1}, p^{m+1}, \pi^{m+1}) \ge C_2(P^{m}, p^{m}, \pi^{m}) \, \forall m > 1$.\ Note that equations \[e3\] and \[e4\] maximize the likelihood given the values of $T$ and $n$ (as shown by [@dawid1979maximum]), i.e., $T$ is known, and so the $\pi$s and $p$s obtained by the M-step maximize the likelihood. We need to show that maximizing the likelihood is the same as maximizing the CML criterion, $C_2$. In the case of hard clustering, for each $q$, only one class $c$ can have $T_{qc}$ equal to $1$; all other classes have $T_{qc}$ equal to $0$. With this observation, we can rewrite the CML criterion as: $$\begin{aligned} C_2(P,p,\pi) &= \sum_{c=1}^{C} \sum_{q \in P_c} \log (p_c f(q, \pi_c))\\ &= \log \left\{\prod_{q=1}^{Q} \prod_{c=1}^{C} \left( p_c f(q, \pi_c) \right)^{T_{qc}} \right\}\\ &= \log \left\{ \prod_{q=1}^{Q} \prod_{c=1}^{C} \left( p_c \prod_{a=1}^{A} \prod_{l=1}^{C} \left(\pi_{cl}^{(a)}\right)^{n_{ql}^{(a)}} \right)^{T_{qc}} \right\} \end{aligned}$$ Thus, maximizing the likelihood is equivalent to maximizing $C_2$. So, we have that after the M-step, $C_2(P^{m}, p^{m+1}, \pi^{m+1}) \ge C_2(P^{m}, p^{m}, \pi^{m})$.\ Now, we consider the EC step. Observe that for each question $q$, we choose the answer as the option $c'$ for which $p_{c'} f(q,\pi_{c'}) \ge p_c f(q,\pi_c)$ for all $c$ (by definition of the criterion for the C-step). Thus, $\log { p_c f(q, \pi_c)}$ increases individually for each question, and so cumulatively, $C_2(P^{m+1}, p^{m+1}, \pi^{m+1}) \ge C_2(P^{m}, p^{m+1}, \pi^{m+1})$.\ Combining the two inequalities, we obtain $$C_2(P^{m+1}, p^{m+1}, \pi^{m+1}) \ge C_2(P^{m}, p^{m}, \pi^{m})$$ This proves that $C_2$ increases at each step.
Since the number of questions is finite, the number of partitions is also finite; the value of $C_2$ must therefore converge after a finite number of iterations.\ On convergence, we obtain $ C_2(P^{m+1}, p^{m+1}, \pi^{m+1}) = \\C_2(P^{m}, p^{m+1}, \pi^{m+1}) = C_2(P^{m}, p^{m}, \pi^{m})$ for some $m$. By definition of the C-step, the first equality implies that $P^{m+1} = P^{m}$. Also, under the assumption that the $p$s and $\pi$s are well defined, we have $p^m = p^{m+1}$ and $\pi^{m+1} = \pi^m$. This proves convergence to a stationary point. To analyze the rate of convergence, we define $M$ to be the set of matrices $U \in \mathbb{R}^{C \times Q}$ of nonnegative values, such that the sum of the values in each column is 1 and the sum along each row is nonzero.\ Consider the criterion to be maximized as: $$C_2'(U,p,\pi) = \sum_{c=1}^{C} \sum_{q=1}^{Q} u_{qc} \log (p_c f(q, \pi_c))$$ With the above definitions, Proposition 3 of [@celeux1992classification] guarantees a linear rate of convergence for FDS to a local maximum from a neighborhood around that maximum.

Hybrid Algorithm {#hybridalgo}
----------------

While the proposed FDS method is quick and effective, DS, by using the softer marginals, can obtain better likelihood values (which we also found in some of our experiments). A comparison of the likelihood values over multiple datasets (described in Section 4) is provided in Table 2. To bring together the best of both DS and FDS, we propose a hybrid version: we begin with DS, and at each step we keep track of the sum of the absolute values of the differences in the class marginals ($p_c$s). When this sum falls below a certain threshold, we switch to the FDS algorithm and continue (Algorithm \[hybalgorithm\]). Our empirical studies showed that this hybrid algorithm can maintain high levels of accuracy along with faster convergence (Section \[experiments\]).
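The switching logic can be sketched compactly. This is a minimal illustration with assumed names and a dense votes matrix; the `gamma` threshold plays the role of the class-marginal criterion described above:

```python
import numpy as np

def soft_em_round(votes, T, C):
    """One DS-style round: M-step from a (possibly soft) answer key T,
    then the soft posterior P(Y_q = c | data). Returns (posterior, p)."""
    Q, A = votes.shape
    p = T.sum(axis=0) / Q
    pi = np.zeros((A, C, C))
    for a in range(A):
        for q in range(Q):
            pi[a, :, votes[q, a]] += T[q]
        pi[a] /= np.maximum(pi[a].sum(axis=1, keepdims=True), 1e-12)
    logpost = np.tile(np.log(np.maximum(p, 1e-12)), (Q, 1))
    for a in range(A):
        logpost += np.log(np.maximum(pi[a, :, votes[:, a]], 1e-12))
    post = np.exp(logpost - logpost.max(axis=1, keepdims=True))
    return post / post.sum(axis=1, keepdims=True), p

def hybrid_aggregate(votes, C, gamma=0.005, tol=1e-4, max_iter=100):
    """Run soft DS until the class marginals settle, then hard FDS."""
    Q = votes.shape[0]
    T = np.zeros((Q, C))
    for q in range(Q):  # majority-vote initialization
        T[q, np.argmax(np.bincount(votes[q], minlength=C))] = 1.0
    switched = False
    p_prev = T.sum(axis=0) / Q
    for _ in range(max_iter):
        post, p = soft_em_round(votes, T, C)
        drift = np.abs(p - p_prev).sum()
        if drift < gamma:
            switched = True            # marginals settled: switch DS -> FDS
        if switched:                   # FDS: hard C-step
            T = np.zeros((Q, C))
            T[np.arange(Q), post.argmax(axis=1)] = 1.0
        else:                          # DS: keep the soft marginals
            T = post
        if drift < tol:
            break
        p_prev = p
    return T.argmax(axis=1)
```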
We observe, however, that a likelihood similar to DS’s does not necessarily translate to better accuracy, and in fact FDS outperforms Hybrid on some datasets.

*Algorithm \[hybalgorithm\] (Hybrid).* *Input:* Crowdsourced choices for $Q$ questions by $A$ participants given $C$ choices per question; threshold $\gamma$. *Output:* Aggregated choices $T_{qc}$.

1. Estimate $T$s using majority voting.
2. Repeat while $\sum_c | p_c^t - p_c^{t-1} | \ge \gamma$:
   - *M-step:* Obtain parameters, $\hat{\pi}_{cl}^{(a)}$ and $\hat{p}_c$, using equations \[e9\] and \[e10\].
   - *E-step:* Estimate $T$s using parameters, $\hat{\pi}_{cl}^{(a)}$ and $\hat{p}_c$, using equation \[e11\].
3. Run the EM steps of Algorithm \[fdsalgorithm\] (FDS) until convergence.

Experimental Results {#experiments}
====================

We validated the proposed method on several publicly available datasets for vote aggregation, and the results are presented in this section. We first describe the datasets, the competing methods used for comparison, and the performance metrics before presenting the results.

#### Datasets:

We used seven real-world datasets to compare the performance of the proposed method against other methods. These include *LabelMe* [@Russell2008; @R7807338], *SentimentPolarity (SP)* [@Pang:2005:SSE:1219840.1219855; @Rodrigues:2014:GPC:3044805.3044941], *DAiSEE* [@d2016daisee; @kamath2016crowdsourced], and four datasets from the SQUARE benchmark [@sheshadri2013square]: *Adult2* [@ipeirotis2010quality], *BM* [@DBLP:journals/corr/abs-1209-3686], *TREC2010* [@Buckley10-notebook], and *RTE* [@Snow:2008:CFG:1613715.1613751]. Many of the datasets had varying numbers of annotators per data point. For uniformity, we set a threshold for each dataset, and all data points with fewer annotators than the threshold were removed.
In our experiments, we studied the performance of all the methods while varying the number of annotators from one up to the threshold, taking a random subset of all annotators for each data point at each step (we maintained the same random seed across the methods, and conducted multiple trials to verify the results presented herewith). Also, the *TREC2010* dataset has an ‘unknown’ class, which we removed for our experiments. Table 1 lists the size, the number of classes, and the number of annotators in each dataset, along with the average speedups.

  ----------- -------- ------------ ------------------ ------------------- --------------------- --------------------
  Dataset      \# qns   \# options   Maximum \# of      Speedup of FDS      Speedup of FDS        Speedup of Hybrid
                        (per qn)     annotators         over DS in          over IWMV in          over DS in
                                     (per qn)           Time (Iterations)   Time (Iterations)     Time (Iterations)
  Adult2          305            4                  9         6.61 (7.87)           1.32 (1.15)          2.30 (2.43)
  BM             1000            2                  5         2.69 (4.51)           1.70 (1.02)          1.49 (2.03)
  TREC2010       3670            4                  5         7.84 (8.64)           6.09 (2.93)          4.39 (4.59)
  DAiSEE         4628            4                 10         6.57 (7.37)           4.40 (2.04)          4.11 (4.37)
  LabelMe         589            8                  3         7.55 (8.59)           0.54 (1.14)          5.15 (5.47)
  RTE             800            2                 10         3.14 (4.95)           2.63 (1.24)          1.88 (2.24)
  SP             4968            2                  5         3.00 (3.95)           2.78 (0.94)          2.40 (2.54)
  ----------- -------- ------------ ------------------ ------------------- --------------------- --------------------

  \[datasettable\]

#### Baseline Methods:

A total of six aggregation algorithms were used in our experiments for evaluation: Majority Voting (MV), Dawid-Skene (DS) [@dawid1979maximum], IWMV [@IWMV], GLAD [@NIPS2009_3644], the proposed Fast Dawid-Skene (FDS), and the proposed hybrid algorithm. IWMV is among the fastest methods using EM for aggregation under general settings. [@IWMV] compared IWMV against other well-known aggregation methods, including [@Raykar:2010:LC:1756006.1859894], [@Karger] and [@LPI], and showed that IWMV gives an accuracy comparable to these algorithms but does so in much less time. We hence compare our performance to IWMV in this work.
GLAD [@NIPS2009_3644], another popular method, was proposed only for questions with two choices; we hence use this method for comparison only on the binary label datasets in our experiments.

#### Performance Metrics:

For each experiment, the following metrics were observed: the accuracy of the aggregated results (against the provided ground truth), the time taken, and the number of iterations needed for empirical convergence. For DS, FDS, and Hybrid, the negative log likelihood after each iteration was also observed. For MV, only the accuracy was observed. The experiments were conducted on a 4-core system with Intel Core i5-5200U 2.20GHz processors and 8GB RAM.

(Figure \[fig\_result\_graphs\]: experimental results for the Adult2, BM, TREC2010, DAiSEE, LabelMe, RTE, and SP datasets.)

#### Results:

The results of our experiments are presented in Figure 1 and Table \[logltable\]. Table \[datasettable\] shows the speedup in time and in the number of iterations needed to converge of FDS over DS and IWMV, and of Hybrid over DS, averaged over all observations with varying numbers of annotators.

  ---------- ---------- ---------- ----------
                    FDS         DS     Hybrid
  Adult2        1283.75    1153.09    1154.97
  BM            2110.16    2094.76    2100.32
  TREC2010     13109.26   12180.84   12346.91
  DAiSEE       39968.08   36178.16   36350.61
  LabelMe       1714.50    1655.94    1660.06
  RTE           3741.61    3679.63    3680.32
  SP           12472.00   12433.70   12440.70
  ---------- ---------- ---------- ----------

  \[logltable\]

#### Performance Analysis of Fast Dawid-Skene:

The results show that FDS gives accuracies similar to DS, Hybrid, GLAD, and IWMV, and a significant improvement over MV, on most datasets, the exceptions being the BM and LabelMe datasets. On LabelMe, the aggregation accuracy is not on par with DS or Hybrid but is still significantly higher than MV and comparable to IWMV. On the BM dataset, the accuracies of FDS and IWMV are slightly lower than MV but comparable to each other.
In terms of time taken, we notice that apart from the LabelMe dataset, FDS performs much better than DS, Hybrid, IWMV and GLAD throughout. In the case of LabelMe, IWMV is faster, but the margin is very small (around 0.1 sec). This leads us to infer that, in general, FDS gives accuracies comparable to other methods while taking significantly less time.

#### Performance Analysis of the Hybrid Method:

The goal of the Hybrid algorithm is to converge to a likelihood similar to DS in much less time. From the experiments (especially Table 2), we see that this is indeed the case: the log likelihood of the Hybrid algorithm is close to that of DS and consistently better than FDS. This naturally leads to accuracies almost identical to those obtained by DS, as confirmed in the results. The total time taken for convergence is much lower for Hybrid than for DS. Moreover, the time taken for convergence by Hybrid is consistently low and does not deviate as much as IWMV. While IWMV outperforms Hybrid with respect to time on a few datasets, the proposed Hybrid outperforms IWMV on accuracy on those datasets. These observations support Hybrid as an algorithm that consistently achieves accuracies similar to DS in much less time across datasets.

#### Implementation Details:

We discuss two important implementation details of the proposed methods in this section: *initialization* and *stopping conditions*. As argued in [@dawid1979maximum], a symmetric initialization of the parameters (all $P(Y_q = c)$s equal to $1 / C$) corresponds to a start from a saddle point, from which the EM algorithm has difficulty converging. Instead, a good initialization is to start from the majority voting estimate. While performing majority voting, it can often happen that two or more options tie for the highest number of votes. In such situations, we randomly choose an option among those which received the highest votes[^1].
We maintained the same random seed for all methods which required this decision. The ideal convergence criterion would be the point at which the answer sheet proposed by an algorithm stops changing. This condition is met within a few iterations for FDS and Hybrid, but DS does not converge under this criterion in a reasonable number of steps. For example, on the *DAiSEE* dataset, DS did not converge even after 100 iterations (compared to $\le 10$ for FDS). To address this issue, we set the convergence criterion as the point when the difference in class marginals is less than $10^{-4}$. We do not include the changes in participant error rates in the final convergence criterion because we observed that their fluctuations could lead to stopping prematurely. Similarly, the criterion for switching from DS to FDS in the Hybrid algorithm is the point when the change in class marginals is less than 0.005 (which occurred at approximately 45-75% of the total iterations across the datasets).

Online Vote Aggregation
=======================

Online aggregation of crowdsourced responses is an important setting in today’s applications, where data points may arrive as a stream. We consider a setting in which we have access to an initial set of questions and have obtained the proposed answer key using FDS. We also have $P(Y = c)$ and $P(c_a| Y = c) \,\forall\, c, a$ at this time. When we receive a new question and the answers from multiple participants for this new question, we first estimate the answer for this question directly using majority voting. We then update the parameters using the M-step in Algorithm \[fdsalgorithm\]. After the M-step, we run the E-step only for this question to re-obtain the aggregated choice. Finally, to incorporate the new knowledge about the participants, we run the M-step one last time.
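The four-step recipe above can be sketched as follows. This is a minimal illustration with assumed names; a production version would update sufficient statistics incrementally rather than recomputing the M-step from scratch:

```python
import numpy as np

def m_step(votes, T, C):
    """Maximum-likelihood parameters from the current hard answer key."""
    Q, A = votes.shape
    p = T.sum(axis=0) / Q
    pi = np.zeros((A, C, C))
    for a in range(A):
        for q in range(Q):
            pi[a, :, votes[q, a]] += T[q]
        pi[a] /= np.maximum(pi[a].sum(axis=1, keepdims=True), 1e-12)
    return p, pi

def online_update(votes, T, new_votes, C):
    """Fold one streamed question into an existing FDS solution.

    votes: (Q, A) past answers; T: (Q, C) current hard answer key;
    new_votes: length-A answers for the incoming question.
    """
    votes = np.vstack([votes, new_votes])
    t = np.zeros(C)
    t[np.argmax(np.bincount(new_votes, minlength=C))] = 1.0  # majority vote
    T = np.vstack([T, t])
    p, pi = m_step(votes, T, C)            # M-step over all questions
    score = np.log(np.maximum(p, 1e-12))   # E-step for the new question only
    for a in range(votes.shape[1]):
        score = score + np.log(np.maximum(pi[a, :, new_votes[a]], 1e-12))
    T[-1] = 0.0
    T[-1, np.argmax(score)] = 1.0
    p, pi = m_step(votes, T, C)            # final M-step refreshes parameters
    return votes, T, p, pi
```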
We conducted experiments on the *SP* dataset[^2], and observed almost the same accuracy for online FDS as for offline FDS (Table 4) across different numbers of annotators. Table 3 shows the results for the maximum number of annotators (= 5).

  \[onlinetable\]

  ---------------------------- -------- -------- --------
                                     DS      FDS   Hybrid
  Accuracy                       90.94%   90.60%   90.64%
  Time taken to converge (s)       4.40     3.76     4.09
  \# Iterations to converge          26        4        5
  ---------------------------- -------- -------- --------

  \[onvsofftable\]

  ------------ -------- -------- -------- --------
  Accuracy            2        3        4        5
  FDS            85.59%   88.41%   90.02%   90.74%
  Online FDS     83.57%   88.06%   89.90%   90.60%
  ------------ -------- -------- -------- --------

Extension to Multiple Correct Options {#discussions}
=====================================

The proposed FDS method can be extended to solve the aggregation problem under different settings. We describe one such extension below, using the same notation as in Section \[subsec\_preliminaries\]. In real-world machine learning settings such as multi-label learning, a data point might belong to multiple classes, which results in more than one true choice per question. For such cases, we now assume that participants are allowed to choose more than one choice for each question. Algorithm \[fdsalgorithm\] originally assumes that every question has exactly one correct choice. To overcome this limitation, we can make a simple modification in how we interpret questions when multiple options are correct. We treat every (question, option) pair as a separate binary classification problem, where the label is true if the option is chosen for that question, and false otherwise. This transforms a task with $Q$ questions and $C$ options each into a task with $QC$ questions and two options each. This is valid because, in this setting, the correctness of an option is independent of the correctness of all other options for that question.
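The (question, option) expansion can be sketched as follows (the input representation — a nested list of per-annotator option sets — is our assumption):

```python
import numpy as np

def to_binary_tasks(multi_votes, C):
    """Expand a Q-question, C-option multi-label task into Q*C binary
    questions, one per (question, option) pair.

    multi_votes[q][a] is the set of options annotator a selected for
    question q. Returns an int array of shape (Q*C, A) where row q*C+c
    holds each annotator's binary vote 'option c was chosen for q'.
    """
    Q, A = len(multi_votes), len(multi_votes[0])
    binary = np.zeros((Q * C, A), dtype=int)
    for q in range(Q):
        for a in range(A):
            for c in multi_votes[q][a]:
                binary[q * C + c, a] = 1
    return binary
```

The resulting matrix can be fed directly to the single-answer FDS procedure, one binary question at a time.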
We ran experiments using this model on the Affect Annotation Love dataset *(AffectAnnotation)* used in [@DUAN20145723] (which was specifically developed for this setting), and compared the performance of FDS with DS and Hybrid. Our results are summarized in Table 5 (annotators = 5, averaged over five subsets), showing the significantly improved results of FDS over DS. Hybrid attempts to follow DS in the likelihood estimation, and thus does not perform as well as FDS in this case. Moreover, FDS also outperformed the methods proposed in [@DUAN20145723], whose best accuracy on this dataset was $\approx92\%$.

  \[multtable\]

  ---------------------------- -------- -------- --------
                                     DS      FDS   Hybrid
  Accuracy                       88.66%   94.14%   89.26%
  Time taken to converge (s)       0.44    0.057     0.14
  \# Iterations to converge        29.6        2      5.8
  ---------------------------- -------- -------- --------

Conclusion
==========

In this paper we introduced a new EM-based method for vote aggregation in crowdsourced data settings. Our method, Fast Dawid-Skene (FDS), turns out to be a ‘hard’ version of the popular Dawid-Skene (DS) algorithm, and shows up to 7.84x speedup over DS and up to 6.09x speedup over IWMV in time taken for convergence. We also propose a hybrid variant that can switch between DS and FDS to provide the best of both in terms of accuracy and speed. We compared the performance of the proposed methods against other state-of-the-art EM algorithms including DS, IWMV and GLAD, and our results showed that FDS and the Hybrid approach indeed provide very fast convergence at accuracies comparable to DS, IWMV and GLAD. We proved that our algorithm converges to the estimated labels at a linear rate. We also showed how the proposed methods can be used for online vote aggregation, and extended them to the setting where there are multiple correct answers, demonstrating the generalizability of the methods.
[^1]: We also tried a variant in which the option with the highest running class marginal was used to break ties, but this variant did not perform as well as randomized majority voting across all methods. We also ran many trials with different random seeds, and found the results to be almost the same as those presented.
[^2]: More results, including on other datasets, are available at <https://sites.google.com/view/fast-dawid-skene/>
---
abstract: 'We have estimated the ages of a sample of A–type Vega–like stars by using Strömgren *uvby$ \beta $* photometric data and theoretical evolutionary tracks. We find that 13 percent of these A stars have been reported as Vega–like stars in the literature and that the ages of this subset run the gamut from very young (50 Myr) to old (1 Gyr), with no obvious age difference compared to those of field A stars. We clearly show that the fractional IR luminosity decreases with the ages of Vega–like stars.'
author:
- 'Inseok Song, J.-P. Caillault,'
- 'David Barrado y Navascués,'
- 'John R. Stauffer'
title: 'Ages of A–type Vega–like stars from *uvby$ \beta $* Photometry'
---

Introduction
============

There are several unusual sub–groups among the A–type stars, such as the metallic–line stars (Am), the peculiar A stars (Ap), $ \lambda $ Bootis type stars, and shell stars [@AbtMorrell95]. Another class of stars with many members amongst the A dwarfs is that of the Vega–like stars. Vega–like stars show excess IR emission attributable to an optically thin dust disk around them. These disks are believed to have very little or no gas [@LBA99]. It is very important to know the ages and, hence, the evolutionary stages of these stars, since they are believed to be signposts of exo–planetary systems or of on–going planet formation. However, determining the ages of individual A–type stars is a very difficult task. Some indirect age dating methods for A–type stars include the use of late–type companions if any exist (HR 4796A and Fomalhaut; see @Stauffer95 [@Barrado97; @myPhD]) or using stellar kinematic groups (Fomalhaut, Vega and $ \beta $ Pictoris; see @David98 and @BSSC). The use of Strömgren *uvby$ \beta $* photometry [@ATF97], however, provides a more direct and general determination of the ages of A–type stars.
The photometric *uvby*$ \beta $ system as defined by @Stromgren63 and @CM66 allows for reasonably accurate determination of stellar parameters like effective temperature $ T_{eff} $, surface gravity $ g $, and metallicity for B, A, and F stars [@crawford79; @NSW93 and references therein]. The $ T_{eff} $ and $ g $ values can then be used to estimate directly the ages of stars when they are coupled with theoretical evolutionary tracks (though for individual stars these estimates have relatively large error bars). In this letter, we describe our application of this technique to a volume limited sample of 200 A stars.

Method
======

$ T_{eff}\protect $ and $ \log g\protect $ determination
--------------------------------------------------------

Extensive catalogues of *uvby*$ \beta $ data have been published by @HM80, @Olsen83, and @OP84. We have used these catalogues and the WEBDA[^1] database to find *uvby$ \beta $* photometry data for our sample of A–type stars. Numerous calibration methods of effective temperature and surface gravity using *uvby$ \beta $* photometry have been published. @MD85, in particular, demonstrate that their calibration yields $ T_{eff} $ and $ \log g $ to a 1 $ \sigma $ accuracy of $ 260 $ K and $ 0.10 $ dex, respectively. However, as pointed out by @NSW93, $ \log g $ from @MD85’s calibration depends on the $ T_{eff} $ value while the most desirable calibration method should not. Therefore, we used the @MD85 grids with Napiwotzki et al.’s gravity modification to eliminate the $ \log g $ dependence on $ T_{eff} $ for early–type stars. The subsequent temperature calibration is in agreement with the integrated–flux temperatures $ \left( T_{eff}=(\pi F/\sigma )^{1/4}\right) $ from @Code, @Beeckmans, and @Malagnini at the 1% level, and the accuracy of $ \log g $ ranges from $ \approx 0.10 $ dex for early A stars to $ \approx 0.25 $ dex for hot B stars [@NSW93].
A rapidly rotating star has a surface gravity smaller at the equator than at the poles, and both the local effective temperature and surface brightness are therefore lower at the equator than at the poles. Thus, in comparing a rotating star with a non–rotating star of the same mass, the former is always cooler. But the apparent luminosity change of a rotating star depends on the inclination angle ($ i $) such that a pole–on $ \left( i=0^{\circ }\right) $ star is brighter and an edge–on $ \left( i=90^{\circ }\right) $ star is dimmer than a non–rotating star [@Kraft]. In all cases, the combination of the luminosity and temperature changes results in an older inferred age compared to the non–rotating case. This effect is prominent in spectral types B and A, in which most stars are rapidly rotating ($ v\sin i\geq 100km/sec $). Recently, @FB98 simulated the effect of stellar rotation on the Strömgren *uvby$ \beta $* photometric indices. They concluded that the effect of stellar rotation is to enhance the stellar main sequence age by an average of $ 40\% $. Therefore, we included the stellar rotation correction suggested by @FB98. However, their rotation correction schemes are available only for stars with spectral type between approximately B7–A4. We extended the range of the rotation correction such that for stars earlier than B7, we used the correction scheme for B7 stars, and for stars later than A4, we used the correction scheme for A4 stars. Therefore, stars earlier or later than Figueras & Blasi’s (1998) range will have more uncertain ages. Large uncertainties in the estimated ages are mainly due to the large error in $ \log g $. However, using a rotation correction scheme based on the projected stellar rotational velocities $ \left( v\sin i\right) $ rather than a scheme based on the true stellar rotational velocities $ \left( v\right) $ may also have introduced uncertainties.
The stellar rotation decreases the effective temperature depending on the inclination angle (a small change of $ T_{eff} $ for $ i\approx 0^{\circ } $ but a large change of $ T_{eff} $ for $ i\approx 90^{\circ } $), but the current rotation correction scheme cannot distinguish between the case of large $ v $ with small $ i $ and the case of small $ v $ with large $ i $. Thus, rotation correction using $ v\sin i $ instead of $ v $ may cause uncertainty in stellar ages.

Ages of Open Clusters
---------------------

The theoretical evolutionary grids of @Schaller92 were used to estimate ages of stars from $ T_{eff} $ and $ \log g $. To verify that our age dating method is working, we applied the method to a few open clusters with ages determined by other methods – $ \alpha $ Perseus (80 Myr), Pleiades (125 Myr), NGC 6475 (220 Myr), M34 (225 Myr), and Hyades (660 Myr). The ages for the first two clusters are based on recent application of the lithium depletion boundary method (LDBM) (@PerAge and @Stauffer99 for $ \alpha $ Perseus; @Stauffer98 for the Pleiades). The ages for the other clusters are from upper main sequence isochrone fitting (UMSIF), and are taken from @JonesProsser or @Lynga. The age scales based on the two different methods (LDBM and UMSIF) are not yet consistent with each other, and both have possible systematic errors. The current best UMS isochrone ages for $ \alpha $ Perseus and the Pleiades are in the ranges 50–80 Myr and 80–150 Myr, respectively. In Figure \[OpenCluster\], one can see that the isochrones of these open clusters are fairly well reproduced. However, there are some deviations from the expected values. Stars that are younger than or close to 100 Myr, like stars in $ \alpha $ Perseus, tend to lie below the theoretical 100 Myr isochrone. We therefore assigned an age of 50 Myr to the stars below the 100 Myr isochrone.
At intermediate ages, the open cluster data provide a mixed message – the M34 Strömgren age appears to be younger than the UMSIF age, whereas the NGC 6475 Strömgren age seems older than the UMSIF age. This could be indicative of the inhomogeneous nature of the ages (some from LDBM, some from relatively old UMS models, some from newer models) to which we are comparing the Strömgren ages. If we could use $ v $ data instead of $ v\sin i $, and if one could construct a rotation correction scheme using $ v $ values, then the new correction scheme would bind the stars of a given cluster more tightly to the cluster’s locus than in the uncorrected case. However, the $ v\sin i $ rotation correction scheme used in this study shifts the loci of clusters and only moderately reduces the standard deviations of the ages (see, e.g., the case for the Pleiades in Figure \[roteffect\]).

Field A stars and Vega–like stars
=================================

We have identified 200 A dwarfs within 50 pc with known $ v\sin i $ values and measured *uvby$ \beta $* photometric indices. The distance limit of 50 pc was chosen so that the photospheres of most A–type stars within the given volume should be detected in the 12 $ \mu $m IRAS band and so that the volume should contain enough A–type stars to draw a statistically significant result. Since rotation greatly affects the estimated stellar ages, we only included stars with known $ v\sin i $ values (from SIMBAD) throughout this study. $ T_{eff} $ and $ \log g $ values were calculated and corrected to account for the rotation effects as described in the previous section. Among these A stars, 26 have been identified as possible Vega–like stars by cross–indexing the current list with Song’s [-@myPhD] master list of “proposed” Vega–like stars. Estimated ages, along with other data — spectral type, fractional IR luminosity $ f $, *uvby$ \beta $* photometric data, and $ v\sin i $ — are summarized in Table \[AVegas\].
The frequency of Vega–like stars in our sample is 13% in good agreement with the results from other volume limited surveys: $ 14\pm 5\% $ from @Plets99’s [-@Plets99] survey of the incidence of the Vega phenomenon among main sequence and post main sequence stars and about $ 15 $% or more from the review article on the Vega phenomenon by @LBA99. More than 95% of our sample stars are listed in the IRAS Point Source Catalog and/or Faint Source Catalog and were detected at least at 12 $ \mu $m, and about 75% of them were detected at 12 and 25 $ \mu $m. Based on the 12 and 25 $ \mu $m IRAS fluxes, we checked whether there could be more IR excess stars besides the 26 already reported in the literature. Photospheric IR fluxes at the IRAS bands were calculated by using $$\label{eqn} F_{\nu }=6.347\times 10^{4}\frac{\pi ^{2}R^{2}}{\lambda ^{3}}\frac{1}{\exp \left( \frac{14388}{\lambda T}\right) -1}\, [Jy]$$ where $ \pi $ is parallax in arcseconds, $ R $ is stellar radius in solar radii, $ \lambda $ is wavelength in $ \mu $m, and $ T $ is stellar effective temperature in Kelvins [@myPhD]. In Equation \[eqn\], $ R $ and $ T $ values were calculated from the $ M_{v} $ versus $ R $ or $ T $ relations [@AQ] where $ M_{v} $ values were determined from apparent visual magnitude (from SIMBAD) and *Hipparcos* distance data. Uncertainties of IR fluxes ($ \Delta F_{\nu } $) were calculated from $$\label{Ferror} \Delta F_{\nu }=F_{\nu }\left( \pi _{\circ },R_{\circ },T_{\circ }\right) \left[ \frac{2\Delta \pi }{\pi }+\frac{2\Delta R}{R}\right] \, [Jy]$$ where flux uncertainty due to $ \Delta \mathrm{T} $ is negligible (less than 0.02% for a given 1% error in T at 10,000K). Average flux uncertainties due to $ \pi $ and $ R $ uncertainties are 3% and 4%, respectively. 
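A direct transcription of Equations \[eqn\] and \[Ferror\] (function names are ours; note that the $ \pi $ in Equation \[eqn\] is the parallax, not the circle constant):

```python
import math

def photospheric_flux_jy(plx_arcsec, radius_rsun, wavelength_um, teff_k):
    """Predicted photospheric flux at an IRAS band, per Equation (1):
    F_nu = 6.347e4 * plx^2 R^2 / lambda^3 / (exp(14388/(lambda T)) - 1)."""
    return (6.347e4 * plx_arcsec**2 * radius_rsun**2 / wavelength_um**3
            / (math.exp(14388.0 / (wavelength_um * teff_k)) - 1.0))

def flux_uncertainty_jy(f_nu, plx, d_plx, radius, d_radius):
    """Flux uncertainty per Equation (2); the Teff term is negligible
    and omitted, as in the text."""
    return f_nu * (2.0 * d_plx / plx + 2.0 * d_radius / radius)
```

For instance, a star at 10 pc (parallax 0.1 arcsec) with an assumed radius of 1.7 solar radii and $ T_{eff}=10{,}000 $ K yields a 12 $ \mu $m photospheric flux of a few Jy, comfortably above typical IRAS detection limits.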
If we define the significance of IR excess ($ r_{\nu } $) as excess IR flux normalized by the uncertainty, then it can be calculated by $ r_{\nu }=(F_{IRAS}-F_{\nu })/\Delta F $ where $ \Delta F $ is the total flux uncertainty due to $ \Delta F_{\nu } $ and $ \Delta F_{IRAS} $ ($ F_{IRAS} $ and $ \Delta F_{IRAS} $ stand for flux value and flux uncertainty value from the IRAS catalog, respectively). $ \Delta F_{\nu } $ and $ \Delta F_{IRAS} $ were added in quadrature to calculate the total flux uncertainty ($ \Delta F $). We define the *bona–fide* Vega–like stars to be those that show significant IR excesses, $ r_{\nu }\geq 3.0 $, at three or more IRAS bands, with the most prominent excess at 60 $ \mu $m. We have found that 51 additional stars show significant IR excesses ($ r_{\nu }\geq 3.0 $) at both 12 and 25 $ \mu $m. However, only 14 of them turned out to be legitimate Vega–like star candidates. The other 37 stars are either luminosity class $ III $ stars (whose IR excesses would not arise because of a circumstellar dust disk) or stars whose excess radiation can easily be explained with a nearby companion star within the IRAS beam. The new Vega–like candidates are summarized in Table \[new\] with their $ r_{\nu } $ values at 12 and 25 $ \mu $m. Determining the $ f $ values for the Vega–like candidates with only 12 and 25 $ \mu $m IR flux measurements is difficult, because, for most of the cases, stellar photospheric flux dominates compared to any excess at these wavelengths; thus a slight error in the photospheric flux calculation results in a large error in $ f $ values. For this reason, we have not taken these stars into account in our consideration of $ f $ versus age relation (see below). 
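The excess-significance criterion can be sketched as follows; the flux values are hypothetical and serve only to illustrate the $ r_{\nu }\geq 3.0 $ cut.

```python
import math

def excess_significance(f_iras, df_iras, f_phot, df_phot):
    """r_nu = (F_IRAS - F_phot) / Delta_F, where the catalog and
    photospheric flux uncertainties are added in quadrature."""
    return (f_iras - f_phot) / math.hypot(df_iras, df_phot)

# Hypothetical star: strong 60-micron excess, photospheric 12-micron flux
r60 = excess_significance(9.0, 0.5, 1.0, 0.3)
r12 = excess_significance(30.0, 1.5, 29.0, 1.2)
significant = [band for band, r in [("60um", r60), ("12um", r12)] if r >= 3.0]
```

At 12 and 25 $ \mu $m the photosphere dominates, so a small error in `f_phot` moves $ r_{\nu } $ across the threshold easily — the reason the text excludes such stars from the $ f $ versus age analysis.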
The photospheric flux calculated from @Plets99’s [-@Plets99] empirical relation between the visual magnitude and the IRAS 12 $ \mu $m magnitude is always higher than the photospheric flux values calculated by using Equation \[eqn\]; thus the significance of the IR excess for most of the new Vega–like stars falls below the 3 $ \sigma $ threshold when @Plets99’s method is used. Therefore, these 14 new candidates have to be treated with care. We considered two different sets of Vega–like stars: (1) using *all* proposed Vega–like stars (case A, N=26) and (2) using only the *bona–fide* stars (case B, N=20). The second column of Table \[AVegas\] indicates the case(s) to which each star belongs. Our conclusion, discussed below, does not depend on the choice of case. We assume that all of the A stars in the sample are post–ZAMS stars. We make that assumption because of simple timescale arguments (the ratio of the number of $ <10 $ Myr old stars to the number of 100–300 Myr old stars should be of order $ <10/200 $ or 5%), and because we expect pre–ZAMS A stars to be located generally in star forming regions, which would make them easy to identify. There is no obvious age difference between field A–type stars and A–type Vega–like stars within 50 pc, with both groups running the gamut from very young (50 Myr) to old (1 Gyr). This result (and those in @Silverstone and @Song00) contrasts with @Habing99’s [-@Habing99] claim of the Vega phenomenon ending sharply at around 400 Myr. We have checked whether a correlation exists between ages and dust properties by comparing our estimated ages of A–type Vega–like stars and their fractional IR luminosities, $ f\equiv (L_{IR}/L_{*}) $, found in @myPhD. Unfortunately, a plot of $ f $ versus age is not very informative, mainly because of the large uncertainties of the estimated ages for individual stars. 
Therefore, we divided the Vega–like stars into two groups, one for the stars younger than 200 Myr and the other for stars older than 200 Myr, and calculated each group’s average $ f $–value (Table \[fages\]). Clearly, the younger A–type Vega–like stars have higher $ f $ values compared to those of the older ones (case–independent). However, we cannot more accurately quantify this relation because the uncertainties in $ T_{eff} $ and $ \log g $ are large. Summary and Discussion ====================== In an attempt to determine the ages of A–type Vega–like stars, we have used a technique involving *uvby$ \beta $* photometry and theoretical $ \log T_{eff}-\log g $ evolutionary tracks. In addition, we have applied corrections for the effects of rapid rotation. As a test of this procedure, we have estimated the ages of a few open clusters and find that our values are in good agreement with their standard ages. We then applied this age dating method to the 200 A–type stars within 50 pc with known $ v\sin i $ values. Thirteen percent of these A stars have been reported as Vega–like stars in the literature, and their ages run the gamut from very young (50 Myr) to old (1 Gyr) with no obvious age difference compared to the field A–stars. The younger Vega–like stars have higher $ f $ values compared to those of the older ones. Vega–like stars are closely related to the $ \lambda $ Boo stars, which are metal–deficient A–stars with IR excesses. Vega itself is discussed as a possible member of the $ \lambda $ Boo class [@HR]. An age determination of $ \lambda $ Boo stars was presented by @IB based on the assumption that $ \lambda $ Boo stars are main sequence stars. However, @HR argue that $ \lambda $ Boo stars are probably pre–main sequence stars. If $ \lambda $ Boo stars are indeed closely related to the Vega–like stars, then, based on our determination of the main sequence nature of Vega–like stars, it is likely that most of the $ \lambda $ Boo stars are main sequence stars. 
[36]{} natexlab\#1[\#1]{} , H. A. & [Morrell]{}, N. I. 1995, [*ApJS*]{}, [**99**]{}, 135 , R., [Torra]{}, J., & [Figueras]{}, F. 1997, [*A&A*]{}, [**322**]{}, 147 , D. 1998, [**]{}, [**339**]{}, 831 , D., [Stauffer]{}, J. R., [Hartmann]{}, L., & [Balachandran]{}, S. C. 1997, [**]{}, [**475**]{}, 313 , D., [Stauffer]{}, J. R., [Song]{}, I., & [Caillault]{}, J.-P. 1999, [**]{}, [**520**]{}, L123 , G. & [Martín]{}, E. L. 1999, [**]{}, [**510**]{}, 266 , F. 1977, [**]{}, [**60**]{}, 1 , A. D., [Bless]{}, R. C., [Davis]{}, J., & [Brown]{}, R. H. 1976, [ ** ]{}, [**203**]{}, 417 , N. 2000, [*Allen’s Astrophysical Quantities*]{}, AIP Press, Springer-Verlag: New York, 4th ed. , D. L. 1979, [*ApJ*]{}, [**84**]{}, 12 , D. L. & [Mander]{}, J. 1966, [*AJ*]{}, [**71**]{}, 114 , F. & [Blasi]{}, F. 1998, [**]{}, [**329**]{}, 957 , H. J., [*et al.*]{} 1999, [**]{}, [**401**]{}, 456 , B. & [Mermilliod]{}, M. 1980, [*A&AS*]{}, [**40**]{}, 1 , H. & [Rentzsch-Holm]{}, I. 1995, [**]{}, [**303**]{}, 819 , I. K. & [Barzova]{}, I. S. 1995, [**]{}, [**302**]{}, 735 , B. F. & [Prosser]{}, C. F. 1996, [**]{}, [**111**]{}, 1193 , R. P. 1970, in [*Spectroscpic Astrophysics*]{}, edited by C. [Herbig]{}, University of California Press (Berkeley), 383–423 , A. M., [Backman]{}, D. E., & [Artymowicz]{}, P. 2000, in [ *Protostars and planets [IV]{}*]{}, edited by V. [Mannings]{}, A. P. [Boss]{}, & S. S. [Russell]{}, (Tucson: University of Arizona Press), 639–672 , G. 1987, Catalogue of Open Cluster Data 5th Ed., catalog No. VII/92A , M. L., [Morossi]{}, C., [Rossi]{}, L., & [Kurucz]{}, R. L. 1986, [ ** ]{}, [**162**]{}, 140 , J. C. 1995, in [*Information and On-Line Data in Astronomy*]{}, edited by D. [Egret]{} & M. A. [Albrecht]{}, Kluwer Academic Press, Dordrecht, 127–138 , T. T. & [Dworetsky]{}, M. M. 1985, [*MNRAS*]{}, [**217**]{}, 305 , R., [Schönberner]{}, D., & [Wenske]{}, V. 1993, [*A&A*]{}, [ **268**]{}, 653 , E. H. 1983, [*A&AS*]{}, [**54**]{}, 55 , E. H. & [Perry]{}, C. L. 
1984, [*A&AS*]{}, [**56**]{}, 229 , H. & [Vynckier]{}, C. 1999, [**]{}, [**343**]{}, 496 , G., [Schaerer]{}, D., [Meynet]{}, G., & [Maeder]{}, A. 1992, [ ** ]{}, [**96**]{}, 269 , M. D. 2000, Ph.D. thesis, University of California Los Angeles , B. & [Dworetsky]{}, M. M. 1995, [*A&A*]{}, [**293**]{}, 446 , I. 2000, Ph.D. thesis, University of Georgia Song, I., Caillault, J.-P., [Barrado y Navascués]{}, D., & Stauffer, J. R. 2000, [*ApJL*]{}, [**532**]{}, 41 , J. R., [Hartmann]{}, L. W., & [Barrado y Navascués]{}, D. 1995, [ ** ]{}, [**454**]{}, 910 , J. R., [Schultz]{}, G., & [Kirkpatrick]{}, J. D. 1998, [**]{}, [**499**]{}, L199 , J. R., [*et al.*]{} 1999, [**]{}, [**527**]{}, 219 , B. 1963, [*QJRAS*]{}, [**4**]{}, 8 ------------ ---------- ------------ ------------------------------ ------------ ----------------- --------------- ---------------- ------------ -------------------- ---------------- ------------- ------------- -------------- [HD]{} [Case]{} [Sp.]{} [$ f\equiv L_{IR}/L_{*} $]{} [$ v\sin i $]{} [type]{} [$ \times 10^{3} $]{} [$ m_{1} $]{} [$ c_{1} $]{} [$ \beta $]{} [(km/s)]{} [$ \log T_{e} $]{} [$ \log g $]{} [lower]{} [best]{} [upper]{} [3003]{} [A]{} [A0V]{} [15]{} [0.014]{} [0.179]{} [0.991]{} [2.910]{} [115]{} [3.993]{} [4.347]{} [–]{} [$ 50 $]{} [$ 247 $]{} [14055]{} [AB]{} [A1Vnn]{} [0.048]{} [0.005]{} [0.166]{} [1.048]{} [2.889]{} [240]{} [4.028]{} [4.188]{} [$ 50 $]{} [$ 163 $]{} [$ 245 $]{} [38678]{} [AB]{} [A2Vann]{} [0.17]{} [0.054]{} [0.188]{} [0.996]{} [2.877]{} [230]{} [3.990]{} [4.189]{} [$ 50 $]{} [$ 231 $]{} [$ 347 $]{} [39014]{} [AB]{} [A7V]{} [0.11]{} [0.126]{} [0.182]{} [0.961]{} [2.790]{} [225]{} [3.937]{} [3.797]{} [$ 522 $]{} [$ 541 $]{} [$ 663 $]{} [39060]{} [AB]{} [A3V]{} [3]{} [0.094]{} [0.196]{} [0.891]{} [2.859]{} [140]{} [3.955]{} [4.352]{} [–]{} [$ 50 $]{} [$ 299 $]{} [40932]{} [AB]{} [Am]{} [0.23]{} [0.093]{} [0.200]{} [0.981]{} [2.853]{} [20]{} [3.919]{} [3.966]{} [$ 565 $]{} [$ 693 $]{} [$ 693 $]{} [50241]{} [AB]{} 
[A7IV]{} [1.1]{} [0.126]{} [0.175]{} [0.998]{} [2.788]{} [230]{} [3.938]{} [3.686]{} [$ 501 $]{} [$ 664 $]{} [$ 890 $]{} [71155]{} [AB]{} [A0V]{} [0.062]{} [-0.007]{} [0.158]{} [1.026]{} [2.896]{} [130]{} [4.013]{} [4.205]{} [$ 50 $]{} [$ 169 $]{} [$ 266 $]{} [74956]{} [AB]{} [A1V]{} [0.22]{} [0.034]{} [0.151]{} [1.087]{} [2.876]{} [85]{} [3.979]{} [3.857]{} [$ 372 $]{} [$ 390 $]{} [$ 403 $]{} [78045]{} [AB]{} [Am]{} [0.03]{} [0.077]{} [0.188]{} [0.960]{} [2.871]{} [40]{} [3.928]{} [4.184]{} [$ 50 $]{} [$ 427 $]{} [$ 610 $]{} [91312]{} [AB]{} [A7IV]{} [0.093]{} [0.121]{} [0.208]{} [0.850]{} [2.821]{} [135]{} [3.922]{} [4.191]{} [$ 50 $]{} [$ 414 $]{} [$ 647 $]{} [95418]{} [AB]{} [A1V]{} [0.0062]{} [-0.006]{} [0.158]{} [1.088]{} [2.880]{} [40]{} [3.991]{} [3.883]{} [$ 335 $]{} [$ 358 $]{} [$ 369 $]{} [99211]{} [AB]{} [A0V]{} [0.012]{} [0.117]{} [0.194]{} [0.894]{} [2.822]{} [145]{} [3.925]{} [4.069]{} [$ 392 $]{} [$ 600 $]{} [$ 684 $]{} [102647]{} [A]{} [A3V]{} [0.012]{} [0.043]{} [0.211]{} [0.973]{} [2.899]{} [120]{} [3.958]{} [4.299]{} [–]{} [$ 50 $]{} [$ 331 $]{} [125162]{} [AB]{} [A0sh]{} [0.042]{} [0.051]{} [0.183]{} [0.999]{} [2.894]{} [100]{} [3.966]{} [4.188]{} [$ 50 $]{} [$ 313 $]{} [$ 451 $]{} [135379]{} [A]{} [A3V]{} [0.24]{} [0.043]{} [0.200]{} [1.011]{} [2.914]{} [60]{} [3.949]{} [4.281]{} [$ 50 $]{} [$ 166 $]{} [$ 378 $]{} [139006]{} [AB]{} [A0V]{} [0.023]{} [-0.001]{} [0.146]{} [1.058]{} [2.871]{} [135]{} [4.008]{} [3.952]{} [$ 267 $]{} [$ 314 $]{} [$ 322 $]{} [159492]{} [A]{} [A7V]{} [0.094]{} [0.102]{} [0.204]{} [0.883]{} [2.858]{} [80]{} [3.927]{} [4.322]{} [–]{} [$ 50 $]{} [$ 419 $]{} [161868]{} [AB]{} [A0V]{} [0.068]{} [0.015]{} [0.173]{} [1.051]{} [2.898]{} [220]{} [4.011]{} [4.199]{} [$ 50 $]{} [$ 184 $]{} [$ 277 $]{} [172167]{} [AB]{} [A0V]{} [0.013]{} [0.003]{} [0.157]{} [1.088]{} [2.903]{} [15]{} [3.987]{} [4.031]{} [$ 267 $]{} [$ 354 $]{} [$ 383 $]{} [172555]{} [AB]{} [A7V]{} [0.9]{} [0.112]{} [0.200]{} [0.839]{} [2.839]{} [175]{} [3.942]{} 
[4.376]{} [–]{} [–]{} [$ 50 $]{} [178253]{} [A]{} [A0/A1V]{} [0.064]{} [0.018]{} [0.184]{} [1.060]{} [2.889]{} [225]{} [4.008]{} [4.112]{} [$ 164 $]{} [$ 254 $]{} [$ 316 $]{} [181296]{} [AB]{} [A0Vn]{} [0.14]{} [0.000]{} [0.157]{} [1.002]{} [2.916]{} [420]{} [4.133]{} [4.898]{} [–]{} [–]{} [$ 50 $]{} [192425]{} [A]{} [A2V]{} [0.067]{} [0.028]{} [0.188]{} [1.024]{} [2.920]{} [160]{} [3.987]{} [4.354]{} [–]{} [$ 50 $]{} [$ 166 $]{} [216956]{} [AB]{} [A3V]{} [0.046]{} [0.037]{} [0.206]{} [0.990]{} [2.906]{} [100]{} [3.957]{} [4.291]{} [$ 50 $]{} [$ 156 $]{} [$ 344 $]{} [218396]{} [AB]{} [A5V]{} [0.22]{} [0.178]{} [0.146]{} [0.678]{} [2.739]{} [55]{} [3.868]{} [4.166]{} [$ 50 $]{} [$ 732 $]{} [$ 1128 $]{} ------------ ---------- ------------ ------------------------------ ------------ ----------------- --------------- ---------------- ------------ -------------------- ---------------- ------------- ------------- -------------- : A–stars with IR excesses\[AVegas\] -------- ------------------- --------- -------------- -------------- -------- HD other Sp. 
Remark number name type 12 $ \mu $m 25 $ \mu $m 2262 $ \kappa $ Phe A7V 3.8 3.3 6961 33 Cas A7V 3.7 3.6 18978 11 Eri A4V 3.9 3.5 20320 13 Eri A5m 4.1 3.1 SB 78209 15 UMa A1m 5.0 3.7 87696 21 LMi A7V 3.4 3.3 103287 $ \gamma $ UMa A0V 4.8 4.2 SB 112185 $ \epsilon $ UMa A0p 7.5 6.4 SB 123998 $ \eta $ Aps A2m 5.0 4.0 137898 10 Ser A8IV 4.0 4.0$ ^{*} $ 141003 $ \beta $ Ser A2IV 5.0 4.0 Double 192696 33 Cyg A3IV–Vn 5.7 4.7 SB 203280 $ \alpha $ Cep A7IV 10.2 5.5 214846 $ \beta $ Oct A9IV–V 7.1 5.8 -------- ------------------- --------- -------------- -------------- -------- : New A–type Vega–like candidates\[new\] $ ^{*} $100 $ \mu $m excess, 25 $ \mu $m excess $ r_{\nu }=1.9 $ --- ------- ---------- --------------------- ------ Number Average $ f $ of stars ($ \times 10^{3} $) A $ < $ 200 Myr 12 1.79 $ > $ 200 Myr 14 0.18 B $ < $ 200 Myr 7 0.71 $ > $ 200 Myr 13 0.17 --- ------- ---------- --------------------- ------ : Ages and $ f\protect $ values of Vega–like stars\[fages\] [^1]: Web version of BDA (Open clusters database, @BDA) http://obswww.unige.ch/webda
--- abstract: 'We produce the family of Calabi-Yau hypersurfaces $X_{n}$ of $({\mathbb P}^{1})^{n+1}$ in higher dimension whose inertia group contains non-commutative free groups. This is completely different from Takahashi’s result [@ta98] for Calabi-Yau hypersurfaces $M_{n}$ of ${\mathbb P}^{n+1}$.' address: - ' (Masakatsu Hayashi) Department of Mathematics, Graduate School of Science, Osaka University, Machikaneyamacho 1-1, Toyonaka, Osaka 560-0043, Japan ' - ' (Taro Hayashi) Department of Mathematics, Graduate School of Science, Osaka University, Machikaneyamacho 1-1, Toyonaka, Osaka 560-0043, Japan ' author: - Masakatsu Hayashi and Taro Hayashi title: 'Calabi-Yau hypersurfaces in the direct product of ${\mathbb P}^{1}$ and inertia groups' --- Introduction ============ Throughout this paper, we work over ${\mathbb C}$. Given an algebraic variety $X$, it is natural to consider its birational automorphisms $\varphi {\colon}X \dashrightarrow X$. The set of these birational automorphisms forms a group ${\operatorname{Bir}}(X)$ with respect to composition. When $X$ is a projective space ${\mathbb P}^{n}$, or equivalently an $n$-dimensional rational variety, this group is called the Cremona group. In the higher-dimensional case ($n \geq 3$), though many elements of the Cremona group have been described, its whole structure is little known. Let $V$ be an $(n+1)$-dimensional smooth projective rational manifold. In this paper, we treat subgroups called the “inertia group" (defined below) of some hypersurface $X \subset V$, which originated in [@gi94]. The inertia group consists of those elements of the Cremona group that act on $X$ as the identity. 
In Section \[cyn\], we mention the result (Theorem \[tak\]) of Takahashi [@ta98] about the smooth Calabi-Yau hypersurfaces $M_{n}$ of ${\mathbb P}^{n+1}$ of degree $n+2$ (that is, $M_{n}$ is a hypersurface such that it is simply connected, there is no holomorphic $k$-form on $M_{n}$ for $0<k<n$, and there is a nowhere vanishing holomorphic $n$-form $\omega_{M_{n}}$). It turns out that the inertia group of $M_{n}$ is trivial (Theorem \[intro2\]). Takahashi’s result (Theorem \[tak\]) is proved by using the “Noether-Fano inequality". It is a useful result that tells us when two Mori fiber spaces are isomorphic. Theorem \[intro2\] is a direct consequence of Takahashi’s result. In Section \[cy1n\], we consider Calabi-Yau hypersurfaces $$X_{n} = (2, 2, \ldots , 2) \subset ({\mathbb P}^{1})^{n+1}.$$ Let $${\operatorname{UC}}(N) {\coloneqq}\overbrace{{\mathbb Z}/2{\mathbb Z}* {\mathbb Z}/2{\mathbb Z}* \cdots * {\mathbb Z}/2{\mathbb Z}}^{N} = \operatorname*{\raisebox{-0.8ex}{\scalebox{2.5}{$\ast$}}}_{i=1}^{N}\langle t_{i}\rangle$$ be the *universal Coxeter group* of rank $N$, where ${\mathbb Z}/2{\mathbb Z}$ is the cyclic group of order 2. There is no non-trivial relation between its $N$ natural generators $t_{i}$. Let $$p_{i} {\colon}X_{n} \to ({\mathbb P}^{1})^{n}\ \ \ (i=1, \ldots , n+1)$$ be the natural projections which are obtained by forgetting the $i$-th factor of $({\mathbb P}^{1})^{n+1}$. Then the $n+1$ projections $p_{i}$ are generically finite morphisms of degree 2. Thus, for each index $i$, there is a birational transformation $$\iota_{i} {\colon}X_{n} \dashrightarrow X_{n}$$ that permutes the two points of general fibers of $p_{i}$, and this provides a group homomorphism $$\Phi {\colon}{\operatorname{UC}}(n+1) \to {\operatorname{Bir}}(X_{n}).$$ From now on, we set $P(n+1) {\coloneqq}({\mathbb P}^{1})^{n+1}$. Cantat-Oguiso proved the following theorem in [@co11]. 
$($[@co11 Theorem 1.3 (2)]$)$\[iota\] Let $X_{n}$ be a generic hypersurface of multidegree $(2,2,\ldots,2)$ in $P(n+1)$ with $n \geq 3$. Then the morphism $\Phi$ that maps each generator $t_{j}$ of ${\operatorname{UC}}(n+1)$ to the involution $\iota_{j}$ of $X_{n}$ is an isomorphism from ${\operatorname{UC}}(n+1)$ to ${\operatorname{Bir}}(X_{n})$. Here “generic” means that $X_{n}$ belongs to the complement of some countable union of proper closed subvarieties of the complete linear system $\big| (2, 2, \ldots , 2)\big|$. Let $X \subset V$ be a projective variety. The *decomposition group* of $X$ is the group $$\begin{aligned} {\operatorname{Dec}}(V, X) {\coloneqq}\{f \in {\operatorname{Bir}}(V)\ |\ f(X) =X \text{ and } f|_{X} \in {\operatorname{Bir}}(X) \}.\end{aligned}$$ The *inertia group* of $X$ is the group $$\begin{aligned} \label{inertia} {\operatorname{Ine}}(V, X) {\coloneqq}\{f \in {\operatorname{Dec}}(V, X)\ |\ f|_{X} = {\operatorname{id}}_{X}\}.\end{aligned}$$ Then it is natural to consider the following question: \[qu\] Is the sequence $$\begin{aligned} \label{se} 1 \longrightarrow {\operatorname{Ine}}(V, X) \longrightarrow {\operatorname{Dec}}(V, X) \overset{\gamma}{\longrightarrow} {\operatorname{Bir}}(X) \longrightarrow 1\end{aligned}$$ exact, i.e., is $\gamma$ surjective? Note that, in general, this sequence is not exact, i.e., $\gamma$ is not surjective (see Remark \[k3\]). When the sequence is exact, the group ${\operatorname{Ine}}(V, X)$ measures how many ways one can extend ${\operatorname{Bir}}(X)$ to the birational automorphisms of the ambient space $V$. Our main result is the following theorem, answering a question asked by Ludmil Katzarkov: \[intro\] Let $X_{n} \subset P(n+1)$ be a smooth hypersurface of multidegree $(2, 2, \ldots, 2)$ and $n \geq 3$. Then: - $\gamma {\colon}{\operatorname{Dec}}(P(n+1), X_{n}) \to {\operatorname{Bir}}(X_{n})$ is surjective, in particular Question $\ref{qu}$ is affirmative for $X_{n}$. 
- If, in addition, $X_{n}$ is generic, there are $n+1$ elements $\rho_{i}$ $(1 \leq i \leq n+1)$ of ${\operatorname{Ine}}(P(n+1), X_{n})$ such that $$\langle \rho_{1}, \rho_{2}, \ldots , \rho_{n+1} \rangle \simeq \underbrace{{\mathbb Z}* {\mathbb Z}* \cdots * {\mathbb Z}}_{n+1} \subset {\operatorname{Ine}}(P(n+1), X_{n}).$$ In particular, ${\operatorname{Ine}}(P(n+1), X_{n})$ is an infinite non-commutative group. Our proof of Theorem \[intro\] is based on an explicit computation of elementary flavour. We also consider another type of Calabi-Yau manifolds, namely smooth hypersurfaces of degree $n+2$ in ${\mathbb P}^{n+1}$ and obtain the following result: \[intro2\] Suppose $n \geq 3$. Let $M_{n} = (n+2) \subset {\mathbb P}^{n+1}$ be a smooth hypersurface of degree $n+2$. Then Question $\ref{qu}$ is also affirmative for $M_{n}$. More precisely: - ${\operatorname{Dec}}({\mathbb P}^{n+1}, M_{n}) = \{ f \in {\operatorname{PGL}}(n+2, {\mathbb C}) = {\operatorname{Aut}}({\mathbb P}^{n+1})\ |\ f(M_{n}) = M_{n}\}$. - ${\operatorname{Ine}}({\mathbb P}^{n+1}, M_{n}) = \{{\operatorname{id}}_{{\mathbb P}^{n+1}}\}$, and $\gamma {\colon}{\operatorname{Dec}}({\mathbb P}^{n+1}, M_{n}) \overset{\simeq}{\longrightarrow} {\operatorname{Bir}}(M_{n}) = {\operatorname{Aut}}(M_{n})$. It is interesting that the inertia groups of $X_{n} \subset P(n+1) = ({\mathbb P}^{1})^{n+1}$ and $M_{n} \subset {\mathbb P}^{n+1}$ have completely different structures though both $X_{n}$ and $M_{n}$ are Calabi-Yau hypersurfaces in rational Fano manifolds. \[k3\] There is a smooth quartic $K3$ surface $M_{2} \subset {\mathbb P}^{3}$ such that $\gamma$ is not surjective (see [@og13 Theorem 1.2 (2)]). In particular, Theorem \[intro2\] is not true for $n = 2$. Preliminaries ============= In this section, we prepare some definitions and properties of birational geometry and introduce the Cremona group. Divisors and singularities -------------------------- Let $X$ be a projective variety. 
A *prime divisor* on $X$ is an irreducible subvariety of codimension one, and a *divisor* (resp. *${\mathbb Q}$-divisor* or *${\mathbb R}$-divisor*) on $X$ is a formal linear combination $D = \sum d_{i}D_{i}$ of prime divisors where $d_{i} \in {\mathbb Z}$ (resp. ${\mathbb Q}$ or ${\mathbb R}$). A divisor $D$ is called *effective* if $d_{i} \geq 0$ for every $i$, and we write $D \geq 0$. The closed set $\bigcup_{i}D_{i}$, the union of the prime divisors, is called the *support* of $D$ and is denoted Supp$(D)$. A ${\mathbb Q}$-divisor $D$ is called *${\mathbb Q}$-Cartier* if, for some $0 \neq m \in {\mathbb Z}$, $mD$ is a Cartier divisor (i.e. a divisor whose divisorial sheaf ${\mathcal O}_{X}(mD)$ is an invertible sheaf), and $X$ is called ${\mathbb Q}$-*factorial* if every divisor is ${\mathbb Q}$-Cartier. Note that, since a regular local ring is a unique factorization domain, every divisor on a smooth variety is automatically Cartier. Let $f {\colon}X \dashrightarrow Y$ be a birational map between normal projective varieties, $D$ a prime divisor, and $U$ the domain of definition of $f$; that is, the maximal subset of $X$ such that there exists a morphism $f {\colon}U \to Y$. Then ${\operatorname{codim}}(X\setminus U) \geq 2$ and $D \cap U \neq \emptyset$, and the image $(f|_{U})(D \cap U)$ is a locally closed subvariety of $Y$. If the closure of that image is a prime divisor of $Y$, we call it the *strict transform* of $D$ (also called the *proper transform* or *birational transform*) and denote it by $f_{*}D$. We define $f_{*}D = 0$ if the codimension of the image $(f|_{U})(D \cap U)$ is $\geq$ 2 in $Y$. We can also define the strict transform $f_{*}Z$ for a subvariety $Z$ of larger codimension; if $Z \cap U \neq \emptyset$ and the dimension of the image $(f|_{U})(Z \cap U)$ is equal to $\dim Z$, then we define $f_{*}Z$ as the closure of that image; otherwise $f_{*}Z = 0$. 
A *log pair* $(X, D)$ is a pair of a normal projective variety $X$ and an ${\mathbb R}$-divisor $D \geq 0$. For a log pair $(X, D)$, it is more natural to consider the *log canonical divisor* $K_{X} + D$ instead of the canonical divisor $K_{X}$. A projective birational morphism $g {\colon}Y \to X$ is a *log resolution* of the pair $(X, D)$ if $Y$ is smooth, ${\operatorname{Exc}}(g)$ is a divisor, and $g_{*}^{-1}(D) \cup {\operatorname{Exc}}(g)$ has simple normal crossing support (i.e. each component is a smooth divisor and all components meet transversely), where ${\operatorname{Exc}}(g)$ is the exceptional set of $g$; a divisor *over* $X$ is a divisor $E$ on some smooth variety $Y$ endowed with a proper birational morphism $g {\colon}Y \to X$. If we write $$K_{Y} + \Gamma + \sum E_{i} = g^{*}(K_{X}+D) + \sum a_{E_{i}}(X, D)E_{i},$$ where $\Gamma$ is the strict transform of $D$ and $E_{i}$ runs through all prime exceptional divisors, then the number $a_{E_{i}}(X, D)$ is called the *discrepancy of $(X, D)$ along $E_{i}$*. The *discrepancy of* $(X, D)$ is given by $${\operatorname{discrep}}(X, D) {\coloneqq}\inf\{ a_{E_{i}}(X, D)\ |\ E_{i} \text{ is a prime exceptional divisor over } X\}.$$ The discrepancy $a_{E_{i}}(X, D)$ along $E_{i}$ is independent of the choice of the birational map $g$ and depends only on $E_{i}$. Let us denote ${\operatorname{discrep}}(X, D) = a_{E}$. A pair $(X, D)$ is *log canonical* (resp. *Kawamata log terminal* ($klt$)) if $a_{E} \geq 0$ (resp. $a_{E} > 0$). A pair $(X, D)$ is *canonical* (resp. *terminal*) if $a_{E} \geq 1$ (resp. $a_{E} > 1$). Cremona groups -------------- Let $n$ be a positive integer. The *Cremona group* ${\operatorname{Cr}}(n)$ is the group of automorphisms of ${\mathbb C}(X_{1}, \ldots, X_{n})$, the ${\mathbb C}$-algebra of rational functions in $n$ independent variables. 
Given $n$ rational functions $F_{i} \in {\mathbb C}(X_{1}, \ldots, X_{n})$, $1 \leq i \leq n$, there is a unique endomorphism of this algebra that maps $X_{i}$ to $F_{i}$, and this is an automorphism if and only if the rational transformation $f$ defined by $f(X_{1}, \ldots, X_{n}) = (F_{1}, \ldots, F_{n})$ is a birational transformation of the affine space ${\mathbb A}^{n}$. Compactifying ${\mathbb A}^{n}$, we get $${\operatorname{Cr}}(n) = {\operatorname{Bir}}({\mathbb A}^{n}) = {\operatorname{Bir}}({\mathbb P}^{n})$$ where Bir$(X)$ denotes the group of all birational transformations of $X$. At the end of this section, we define two subgroups of ${\operatorname{Cr}}(n)$ introduced by Gizatullin [@gi94]. Let $V$ be an $(n+1)$-dimensional smooth projective rational manifold and $X \subset V$ a projective variety. The *decomposition group* of $X$ is the group $${\operatorname{Dec}}(V, X) {\coloneqq}\{f \in {\operatorname{Bir}}(V)\ |\ f(X) =X \text{ and } f|_{X} \in {\operatorname{Bir}}(X) \}.$$ The *inertia group* of $X$ is the group $${\operatorname{Ine}}(V, X) {\coloneqq}\{f \in {\operatorname{Dec}}(V, X)\ |\ f|_{X} = {\operatorname{id}}_{X}\}.$$ The decomposition group is also denoted by Bir$(V, X)$. By the definition, the correspondence $$\gamma {\colon}{\operatorname{Dec}}(V, X) \ni f \mapsto f|_{X} \in {\operatorname{Bir}}(X)$$ defines the exact sequence: $$\begin{aligned} \label{seq} 1 \longrightarrow {\operatorname{Ine}}(V, X) = \ker \gamma \longrightarrow {\operatorname{Dec}}(V, X) \overset{\gamma}{\longrightarrow} {\operatorname{Bir}}(X).\end{aligned}$$ So, it is natural to consider the following question (which is same as Question \[qu\]) asked by Ludmil Katzarkov: \[qexact\] Is the sequence $$\begin{aligned} \label{exact} 1 \longrightarrow {\operatorname{Ine}}(V, X) \longrightarrow {\operatorname{Dec}}(V, X) \overset{\gamma}{\longrightarrow} {\operatorname{Bir}}(X) \longrightarrow 1\end{aligned}$$ exact, i.e., is $\gamma$ surjective? 
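As a concrete illustration of the correspondence above between rational transformations and automorphisms of the function field, consider the standard involution $(x, y) \mapsto (1/x, 1/y)$ of ${\mathbb A}^{2}$, an element of ${\operatorname{Cr}}(2)$. A quick check with exact rational arithmetic (the sample point is arbitrary):

```python
from fractions import Fraction

def f(p):
    """The birational involution (x, y) -> (1/x, 1/y) of A^2:
    undefined on the coordinate axes, its own inverse elsewhere,
    hence it induces an automorphism of C(x, y)."""
    x, y = p
    return (1 / x, 1 / y)

p = (Fraction(2, 3), Fraction(-5, 7))
assert f(p) == (Fraction(3, 2), Fraction(-7, 5))
assert f(f(p)) == p  # f o f = id wherever f is defined
```

The map is regular only on the complement of the axes, yet it still defines an automorphism of the whole rational function field — the basic phenomenon that makes ${\operatorname{Cr}}(n)$ larger than ${\operatorname{Aut}}({\mathbb P}^{n})$ for $n \geq 2$.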
In general, the above sequence is not exact, i.e., $\gamma$ is not surjective. In fact, there is a smooth quartic $K3$ surface $M_{2} \subset {\mathbb P}^{3}$ such that $\gamma$ is not surjective ([@og13 Theorem 1.2 (2)]). Calabi-Yau hypersurface in ${\mathbb P}^{n+1}$ {#cyn} ============================================== Our goal in this section is to prove Theorem \[intro2\] (i.e. Theorem \[ta\]). Before that, we introduce the result of Takahashi [@ta98]. Let $X$ be a normal ${\mathbb Q}$-factorial projective variety. A 1*-cycle* is a formal linear combination $C = \sum a_{i}C_{i}$ of proper curves $C_{i} \subset X$ which are irreducible and reduced. By the theorem of the base of Néron-Severi (see [@kl66]), the group of numerical equivalence classes of 1-cycles with real coefficients is a finite-dimensional ${\mathbb R}$-vector space, denoted $N_{1}(X)$. The dimension of $N_{1}(X)$, or of its dual $N^{1}(X)$ with respect to the intersection form, is called the *Picard number* and is denoted $\rho(X)$. $($[@ta98 Theorem 2.3]$)$\[tak\] Let $X$ be a Fano manifold $($i.e. a manifold whose anti-canonical divisor $-K_{X}$ is ample$)$ with $\dim X \geq 3$ and $\rho(X) = 1$, and let $S \in |-K_{X}|$ be a smooth hypersurface with ${\operatorname{Pic}}(X) \to {\operatorname{Pic}}(S)$ surjective. Let $\Phi {\colon}X \dashrightarrow X'$ be a birational map to a ${\mathbb Q}$-factorial terminal variety $X'$ with $\rho(X') = 1$ which is not an isomorphism, and let $S' = \Phi_{*}S$. Then $K_{X'} + S'$ is ample. This theorem is proved by using the *Noether-Fano inequality*, one of the most important tools in birational geometry: it gives a precise bound on the singularities of the indeterminacies of a birational map and conditions under which the map becomes an isomorphism. This inequality is essentially due to [@im71], and Corti proved the general case of an arbitrary Mori fiber space of dimension three [@co95]. 
It was extended to all dimensions in [@ta95], [@bm97], [@is01], and [@df02] (see also [@ma02]). In particular, a log generalized version obtained independently in [@bm97], [@ta95] is used for the proof of Theorem \[tak\]. Next, we consider $n$-dimensional *Calabi-Yau manifolds* $X$ in this paper. Such an $X$ is a projective manifold which is simply connected, $$H^{0}(X, \Omega_{X}^{i}) = 0\ \ \ (0<i<\dim X = n),\ \ \textrm{and \ } H^{0}(X, \Omega_{X}^{n}) = {\mathbb C}\omega_{X},$$ where $\omega_{X}$ is a nowhere vanishing holomorphic $n$-form. The following theorem, which is the same as Theorem \[intro2\], is a consequence of Theorem \[tak\]. It provides an example of a Calabi-Yau hypersurface $M_{n}$ whose inertia group consists only of the identity transformation. \[ta\] Suppose $n \geq 3$. Let $M_{n} = (n+2) \subset {\mathbb P}^{n+1}$ be a smooth hypersurface of degree $n+2$. Then $M_{n}$ is a Calabi-Yau manifold of dimension $n$ and Question $\ref{qexact}$ is affirmative for $M_{n}$. More precisely: - ${\operatorname{Dec}}({\mathbb P}^{n+1}, M_{n}) = \{ f \in {\operatorname{PGL}}(n+2, {\mathbb C}) = {\operatorname{Aut}}({\mathbb P}^{n+1})\ |\ f(M_{n}) = M_{n}\}$. - ${\operatorname{Ine}}({\mathbb P}^{n+1}, M_{n}) = \{{\operatorname{id}}_{{\mathbb P}^{n+1}}\}$, and $\gamma {\colon}{\operatorname{Dec}}({\mathbb P}^{n+1}, M_{n}) \overset{\simeq}{\longrightarrow} {\operatorname{Bir}}(M_{n}) = {\operatorname{Aut}}(M_{n})$. By the Lefschetz hyperplane section theorem for $n \geq 3$, $\pi_{1}(M_{n}) \simeq \pi_{1}({\mathbb P}^{n+1}) = \{{\operatorname{id}}\}$ and ${\operatorname{Pic}}(M_{n}) = {\mathbb Z}h$, where $h$ is the hyperplane class. By the adjunction formula, $$K_{M_{n}} = (K_{{\mathbb P}^{n+1}} + M_{n})|_{M_{n}} = -(n+2)h + (n+2)h = 0$$ in Pic$(M_{n})$. 
By the exact sequence $$0 \longrightarrow {\mathcal O}_{{\mathbb P}^{n+1}}(-(n+2)) \longrightarrow {\mathcal O}_{{\mathbb P}^{n+1}} \longrightarrow {\mathcal O}_{M_{n}} \longrightarrow 0$$ and $$h^{k}({\mathcal O}_{{\mathbb P}^{n+1}}(-(n+2))) = 0\ \ \text{for}\ \ 1 \leq k \leq n,$$ we get $$H^{k}({\mathcal O}_{M_{n}}) \simeq H^{k}({\mathcal O}_{{\mathbb P}^{n+1}}) = 0\ \ \text{for}\ \ 1 \leq k \leq n-1.$$ Hence $H^{0}(\Omega^{k}_{M_{n}}) = 0$ for $1 \leq k \leq n-1$ by the Hodge symmetry, and $M_{n}$ is a Calabi-Yau manifold of dimension $n$. Since ${\operatorname{Pic}}(M_{n}) = {\mathbb Z}h$, there is no small projective contraction of $M_{n}$; in particular, $M_{n}$ has no flop. Thus by Kawamata [@ka08], we get ${\operatorname{Bir}}(M_{n}) = {\operatorname{Aut}}(M_{n})$, and $g^{*}h = h$ for $g \in {\operatorname{Aut}}(M_{n}) = {\operatorname{Bir}}(M_{n})$. So we have $g = \tilde{g}|_{M_{n}}$ for some $\tilde{g} \in {\operatorname{PGL}}(n+2, {\mathbb C})$. Assume that $f \in {\operatorname{Dec}}({\mathbb P}^{n+1}, M_{n})$. Then $f_{*}(M_{n}) = M_{n}$ and $K_{{\mathbb P}^{n+1}} + M_{n} = 0$. Thus by Theorem \[tak\], $f \in {\operatorname{Aut}}({\mathbb P}^{n+1}) = {\operatorname{PGL}}(n+2, {\mathbb C})$. This proves (1) and the surjectivity of $\gamma$. Let $f|_{M_{n}} = {\operatorname{id}}_{M_{n}}$ for $f \in {\operatorname{Dec}}({\mathbb P}^{n+1}, M_{n})$. Since $f \in {\operatorname{PGL}}(n+2, {\mathbb C})$ by (1) and $M_{n}$ generates ${\mathbb P}^{n+1}$, i.e., the projective hull of $M_{n}$ is ${\mathbb P}^{n+1}$, it follows that $f = {\operatorname{id}}_{{\mathbb P}^{n+1}}$ if $f|_{M_{n}} = {\operatorname{id}}_{M_{n}}$. Hence ${\operatorname{Ine}}({\mathbb P}^{n+1}, M_{n}) = \{{\operatorname{id}}_{{\mathbb P}^{n+1}}\}$, i.e., $\gamma$ is injective. So, $\gamma {\colon}{\operatorname{Dec}}({\mathbb P}^{n+1}, M_{n}) \overset{\simeq}{\longrightarrow} {\operatorname{Bir}}(M_{n}) = {\operatorname{Aut}}(M_{n})$. 
Calabi-Yau hypersurface in $({\mathbb P}^{1})^{n+1}$ {#cy1n} ==================================================== As shown in the previous section, the Calabi-Yau hypersurface $M_{n}$ of ${\mathbb P}^{n+1}$ with $n \geq 3$ has only the identity transformation in its inertia group. However, there exist Calabi-Yau hypersurfaces in a product of ${\mathbb P}^{1}$'s which do not satisfy this property, as our result (Theorem \[main\]) shows. To simplify notation, we write $$\begin{aligned} P(n+1) &{\coloneqq}({\mathbb P}^{1})^{n+1} = {\mathbb P}^{1}_{1} \times {\mathbb P}^{1}_{2} \times \cdots \times {\mathbb P}^{1}_{n+1},\\ P(n+1)_{i} &{\coloneqq}{\mathbb P}^{1}_{1} \times \cdots \times {\mathbb P}^{1}_{i-1} \times {\mathbb P}^{1}_{i+1} \times \cdots \times {\mathbb P}^{1}_{n+1} \simeq P(n),\end{aligned}$$ and denote by $$\begin{aligned} p^{i} {\colon}P(n+1) &\to {\mathbb P}^{1}_{i} \simeq {\mathbb P}^{1},\\ p_{i} {\colon}P(n+1) &\to P(n+1)_{i}\end{aligned}$$ the natural projections. Let $H_{i}$ be the divisor class of $(p^{i})^{*}({\mathcal O}_{{\mathbb P}^{1}}(1))$. Then $P(n+1)$ is a Fano manifold of dimension $n+1$ and its canonical divisor has the form $\displaystyle{-K_{P(n+1)} = \sum^{n+1}_{i=1}2H_{i}}$. Therefore, by the adjunction formula, a hypersurface $X_{n} \subset P(n+1)$ has trivial canonical divisor if and only if it has multidegree $(2, 2, \ldots, 2)$. Moreover, for $n \geq 3$, $X_{n} = (2, 2, \ldots, 2)$ is a Calabi-Yau manifold of dimension $n$ and, for $n=2$, a $K3$ surface (i.e. a 2-dimensional Calabi-Yau manifold). This is shown by the same method as in the proof of Theorem \[ta\]. From now on, let $X_{n}$ be a generic hypersurface of $P(n+1)$ of multidegree $(2, 2, \ldots , 2)$ with $n \geq 3$. Let us write $P(n+1) = {\mathbb P}^{1}_{i} \times P(n+1)_{i}$. Let $[x_{i1} : x_{i2}]$ be the homogeneous coordinates of ${\mathbb P}^{1}_{i}$.
Hereafter, we work in the affine locus and denote by $\displaystyle x_{i} = \frac{x_{i2}}{x_{i1}}$ the affine coordinate of ${\mathbb P}^{1}_{i}$ and by ${\bf z}_{i}$ those of $P(n+1)_{i}$. In terms of $x_{i}$, $X_{n}$ can be written by the following equation: $$\begin{aligned} \label{xn} X_{n} = \{ F_{i,0}({\bf z}_{i})x_{i}^{2} + F_{i,1}({\bf z}_{i})x_{i} + F_{i,2}({\bf z}_{i}) = 0 \}\end{aligned}$$ where each $F_{i,j}({\bf z}_{i})$ $(j = 0, 1, 2)$ is a quadratic polynomial in ${\bf z}_{i}$. Now, we consider the two involutions of $P(n+1)$: $$\begin{aligned} \tau_{i} {\colon}(x_{i}, {\bf z}_{i}) &\to \left(-x_{i}- \frac{F_{i,1}({\bf z}_{i})}{F_{i,0}({\bf z}_{i})}, {\bf z}_{i} \right)\label{tau}\\ \sigma_{i} {\colon}(x_{i}, {\bf z}_{i}) &\to \left(\frac{F_{i,2}({\bf z}_{i})}{x_{i} \cdot F_{i,0}({\bf z}_{i})}, {\bf z}_{i} \right).\label{sigma}\end{aligned}$$ Then $\tau_{i}|_{X_{n}} = \sigma_{i}|_{X_{n}} = \iota_{i}$ by the definition of $\iota_{i}$ (cf. Theorem \[iota\]). We get two birational automorphisms of $X_{n}$: $$\begin{aligned} \rho_{i} = \sigma_{i} \circ \tau_{i} {\colon}(x_{i}, {\bf z}_{i}) &\to \left( \frac{F_{i,2}({\bf z}_{i})}{-x_{i} \cdot F_{i,0}({\bf z}_{i}) - F_{i,1}({\bf z}_{i})}, \ {\bf z}_{i} \right)\\ \rho'_{i} = \tau_{i} \circ \sigma_{i} {\colon}(x_{i}, {\bf z}_{i}) &\to \left( -\frac{x_{i} \cdot F_{i,1}({\bf z}_{i}) + F_{i,2}({\bf z}_{i})}{x_{i}\cdot F_{i,0}({\bf z}_{i})}, \ {\bf z}_{i} \right).\end{aligned}$$ Both $\rho_{i}$ and $\rho'_{i}$ are in Ine$(P(n+1), X_{n})$, map points not in $X_{n}$ to other points not in $X_{n}$, and satisfy $\rho_{i}^{-1} = \rho'_{i}$ since $\tau_{i}^{2} = \sigma_{i}^{2} = {\operatorname{id}}_{P(n+1)}$. \[order\] Each $\rho_{i}$ has infinite order.
By the definition of $\rho_{i}$ and $\rho'_{i} = \rho_{i}^{-1}$, it suffices to show $$\begin{aligned} {\begin{pmatrix} 0 & F_{i,2}\\ -F_{i,0} & -F_{i,1} \end{pmatrix}}^{k} \neq \alpha I\end{aligned}$$ for any $k \in {\mathbb Z}\setminus \{0\}$, where $I$ is the identity matrix and $\alpha \in {\mathbb C}^{\times}$. The eigenvalues of this matrix are $$\frac{-F_{i,1} \pm \sqrt{F_{i,1}^{2} - 4F_{i,0}F_{i,2}}}{2}.$$ Here $F_{i,1}^{2} - 4F_{i,0}F_{i,2} \neq 0$ for all $i$, as $X_{n}$ is general. If $\begin{pmatrix} 0 & F_{i,2}\\ -F_{i,0} & -F_{i,1} \end{pmatrix}^{k} = \alpha I$ for some $k \in {\mathbb Z}\setminus \{0\}$ and $\alpha \in {\mathbb C}^{\times}$, then $$\left(\frac{-F_{i,1} + \sqrt{F_{i,1}^{2} - 4F_{i,0}F_{i,2}}}{-F_{i,1} - \sqrt{F_{i,1}^{2} - 4F_{i,0}F_{i,2}}}\right)^{k} = 1,$$ which contradicts the assumption that $X_{n}$ is generic. We remark that Proposition \[order\] is also implicitly proved in the proof of Theorem \[main\]. Our main result is the following (which is the same as Theorem \[intro\]): \[main\] Let $X_{n} \subset P(n+1)$ be a smooth hypersurface of multidegree $(2, 2, \ldots, 2)$ with $n \geq 3$. Then: - $\gamma {\colon}{\operatorname{Dec}}(P(n+1), X_{n}) \to {\operatorname{Bir}}(X_{n})$ is surjective; in particular, Question $\ref{qexact}$ is affirmative for $X_{n}$. - If, in addition, $X_{n}$ is generic, the $n+1$ elements $\rho_{i} \in {\operatorname{Ine}}(P(n+1), X_{n})$ $(1 \leq i \leq n+1)$ satisfy $$\langle \rho_{1}, \rho_{2}, \ldots , \rho_{n+1} \rangle \simeq \underbrace{{\mathbb Z}* {\mathbb Z}* \cdots * {\mathbb Z}}_{n+1} \subset {\operatorname{Ine}}(P(n+1), X_{n}).$$ In particular, ${\operatorname{Ine}}(P(n+1), X_{n})$ is an infinite non-commutative group.
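Outside the proofs, the algebra above lends itself to a quick numerical sanity check. The sketch below (our own illustration; the rational coefficient values stand in for generic quadrics $F_{i,0}$, $F_{i,1}$, $F_{i,2}$ evaluated at a fixed point and are an assumption, not from the paper) verifies in exact arithmetic that $\tau_{i}$ and $\sigma_{i}$ are involutions, that $\sigma_{i} \circ \tau_{i}$ agrees with the displayed closed form of $\rho_{i}$, and that no small power of the matrix from the proof of Proposition \[order\] is a scalar matrix:

```python
from fractions import Fraction as Fr

# Generic rational stand-ins for F_{i,0}, F_{i,1}, F_{i,2} at a fixed z_i.
F0, F1, F2 = Fr(3), Fr(-5), Fr(7)

tau = lambda x: -x - F1 / F0          # tau_i acting on the coordinate x_i
sigma = lambda x: F2 / (x * F0)       # sigma_i acting on the coordinate x_i
rho = lambda x: F2 / (-x * F0 - F1)   # closed form of rho_i = sigma_i o tau_i

for x in (Fr(1), Fr(2, 3), Fr(-4, 7)):
    assert tau(tau(x)) == x and sigma(sigma(x)) == x  # both are involutions
    assert sigma(tau(x)) == rho(x)                    # composite matches rho_i

# rho_i would have finite order only if some power of the matrix
# [[0, F2], [-F0, -F1]] were a scalar matrix alpha*I; check that the
# off-diagonal entries of the first 40 powers never both vanish.
M = [[Fr(0), F2], [-F0, -F1]]
P = [[Fr(1), Fr(0)], [Fr(0), Fr(1)]]
for _ in range(40):
    P = [[P[0][0] * M[0][0] + P[0][1] * M[1][0],
          P[0][0] * M[0][1] + P[0][1] * M[1][1]],
         [P[1][0] * M[0][0] + P[1][1] * M[1][0],
          P[1][0] * M[0][1] + P[1][1] * M[1][1]]]
    assert not (P[0][1] == 0 and P[1][0] == 0)
```

Of course this checks only finitely many powers for one coefficient choice; the eigenvalue-ratio argument in the proof covers all $k$ at once.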
Let ${\operatorname{Ind}}(\rho)$ be the union of the indeterminacy loci of each $\rho_{i}$ and $\rho^{-1}_{i}$; that is, $\displaystyle {\operatorname{Ind}}(\rho) = \bigcup_{i=1}^{n+1}\big({\operatorname{Ind}}(\rho_{i}) \cup {\operatorname{Ind}}(\rho^{-1}_{i})\big)$ where ${\operatorname{Ind}}(\rho_{i})$ is the indeterminacy locus of $\rho_{i}$. Clearly, ${\operatorname{Ind}}(\rho)$ has codimension $\geq 2$ in $P(n+1)$. Let us show Theorem \[main\] (1). Suppose $X_{n}$ is generic. For a general point $x \in P(n+1)_{i}$, the set $p_{i}^{-1}(x)$ consists of two points. Writing these two points as $y$ and $y'$, the correspondence $y \leftrightarrow y'$ defines a natural birational involution of $X_{n}$; this is the involution $\iota_{i}$. Then, by Cantat-Oguiso’s result [@co11 Theorem 3.3 (4)], ${\operatorname{Bir}}(X_{n})$ $(n\geq 3)$ coincides with the group $\langle \iota_{1}, \iota_{2}, \ldots , \iota_{n+1} \rangle \simeq \underbrace{{\mathbb Z}/2{\mathbb Z}* {\mathbb Z}/2{\mathbb Z}* \cdots * {\mathbb Z}/2{\mathbb Z}}_{n+1}$. The two involutions $\tau_{i}$ and $\sigma_{i}$ of $P(n+1)$ constructed in \[tau\] and \[sigma\] are extensions of the covering involution $\iota_{i}$. Hence, $\tau_{i}|_{X_{n}} = \sigma_{i}|_{X_{n}} = \iota_{i}$. Thus $\gamma$ is surjective. Since automorphisms of $X_{n}$ come from those of the total space $P(n+1)$, the claim also holds in the case that $X_{n}$ is not generic. This completes the proof of Theorem \[main\] (1). Next, we show Theorem \[main\] (2). By Proposition \[order\], the order of each $\rho_{i}$ is infinite. Thus it suffices to show that there is no non-trivial relation among the $n + 1$ elements $\rho_{i}$. We argue by contradiction.
Suppose to the contrary that there is a non-trivial relation between the $n+1$ elements $\rho_{i}$, that is, $$\begin{aligned} \label{rho} \rho_{i_{1}}^{n_{1}} \circ \rho_{i_{2}}^{n_{2}} \circ \cdots \circ \rho_{i_{l}}^{n_{l}} = {\operatorname{id}}_{P(n+1)}\end{aligned}$$ where $l$ is a positive integer, $n_{k} \in {\mathbb Z}\setminus\{0\}$ $(1\leq k \leq l)$, and each $\rho_{i_{k}}$ denotes one of the $n + 1$ elements $\rho_{i}$ $(1 \leq i \leq n+1)$ and satisfies $\rho_{i_{k}} \neq \rho_{i_{k+1}}$ $(1 \leq k \leq l-1)$. Put $N = |n_{1}| + \cdots + |n_{l}|$. In the affine coordinates $(x_{i_{1}}, {\bf z}_{i_{1}})$, where $x_{i_{1}}$ is the affine coordinate of the $i_{1}$-th factor ${\mathbb P}^{1}_{i_{1}}$, we can choose two distinct points $(\alpha_{1}, {\bf z}_{i_{1}})$ and $(\alpha_{2}, {\bf z}_{i_{1}})$, $\alpha_{1} \neq \alpha_{2}$, which are contained in neither $X_{n}$ nor ${\operatorname{Ind}}(\rho)$. By a suitable projective linear coordinate change of ${\mathbb P}^{1}_{i_{1}}$, we can set $\alpha_{1} = 0$ and $\alpha_{2} = \infty$. Paying attention to the $i_{1}$-th element $x_{i_{1}}$ of the new coordinates, we keep the same letters $F_{i_{1},j}({\bf z}_{i_{1}})$ for the defining equation of $X_{n}$, that is, $X_{n}$ can be written as $$X_{n} = \{ F_{i_{1},0}({\bf z}_{i_{1}})x_{i_{1}}^{2} + F_{i_{1},1}({\bf z}_{i_{1}})x_{i_{1}} + F_{i_{1},2}({\bf z}_{i_{1}}) = 0 \}.$$ Here the two points $(0, {\bf z}_{i_{1}})$ and $(\infty, {\bf z}_{i_{1}})$ are not included in $X_{n} \cup {\operatorname{Ind}}(\rho)$. From the assumption, both of the following equalities hold: $$\begin{aligned} \rho_{i_{1}}^{n_{1}} \circ \cdots \circ \rho_{i_{l}}^{n_{l}}(0, {\bf z}_{i_{1}}) &= (0, {\bf z}_{i_{1}})\\ \rho_{i_{1}}^{n_{1}} \circ \cdots \circ \rho_{i_{l}}^{n_{l}}(\infty, {\bf z}_{i_{1}}) &= (\infty, {\bf z}_{i_{1}}).\label{infty}\end{aligned}$$ We proceed by dividing into the following two cases. [(i). The case where $n_{1} > 0$.
Write $\rho_{i_{1}} \circ \rho_{i_{1}}^{n_{1}-1} \circ \rho_{i_{2}}^{n_{2}} \circ \cdots \circ \rho_{i_{l}}^{n_{l}} = {\operatorname{id}}_{P(n+1)}$. ]{} Let us denote $\rho_{i_{1}}^{n_{1}-1} \circ \cdots \circ \rho_{i_{l}}^{n_{l}}(0, {\bf z}_{i_{1}}) = (p, {\bf z}_{i_{1}}')$; then, by the definition of $\rho_{i_{1}}$, it maps $p$ to $0$. That is, the equation $F_{i_{1},2}({\bf z}'_{i_{1}}) = 0$ is satisfied. On the other hand, the intersection of $X_{n}$ and the hyperplane $(x_{i_{1}}=0)$ is written as $$X_{n} \cap (x_{i_{1}}=0) = \{F_{i_{1},2}({\bf z}_{i_{1}}) = 0\}.$$ This implies that $(0, {\bf z}'_{i_{1}}) = \rho_{i_{1}}(p, {\bf z}'_{i_{1}}) = (0, {\bf z}_{i_{1}})$ is a point on $X_{n}$, contradicting the fact that $(0, {\bf z}_{i_{1}}) \notin X_{n}$. [(ii). The case where $n_{1} < 0$. Write $\rho^{-1}_{i_{1}} \circ \rho_{i_{1}}^{n_{1}+1} \circ \rho_{i_{2}}^{n_{2}} \circ \cdots \circ \rho_{i_{l}}^{n_{l}} = {\operatorname{id}}_{P(n+1)}$. ]{} Using the assumption \[infty\], we derive a contradiction in the same way as in (i). Precisely, we argue as follows. Let us write $\displaystyle x_{i_{1}} = \frac{1}{y_{i_{1}}}$; then $(x_{i_{1}} = \infty, {\bf z}_{i_{1}}) = (y_{i_{1}} = 0, {\bf z}_{i_{1}})$, and $X_{n}$ and $\rho^{-1}_{i_{1}}$ can be written as $$X_{n} {\coloneqq}\{F_{i_{1},0}({\bf z}_{i_{1}}) + F_{i_{1},1}({\bf z}_{i_{1}})y_{i_{1}} + F_{i_{1},2}({\bf z}_{i_{1}})y_{i_{1}}^{2} = 0\},$$ $$\rho^{-1}_{i_{1}} {\colon}(y_{i_{1}}, {\bf z}_{i_{1}}) \to \left(\ -\frac{F_{i_{1},0}({\bf z}_{i_{1}})}{F_{i_{1},1}({\bf z}_{i_{1}}) + y_{i_{1}}\cdot F_{i_{1},2}({\bf z}_{i_{1}})},\ {\bf z}_{i_{1}} \right).$$ Let us denote $ \rho_{i_{1}}^{n_{1}+1} \circ \rho_{i_{2}}^{n_{2}} \circ \cdots \circ \rho_{i_{l}}^{n_{l}} (y_{i_{1}} =0, {\bf z}_{i_{1}}) = (y_{i_{1}} = q, {\bf z}_{i_{1}}'')$; then $\rho^{-1}_{i_{1}}$ maps $q$ to $0$.
That is, the equation $F_{i_{1},0}({\bf z}''_{i_{1}}) = 0$ is satisfied, but the intersection of $X_{n}$ and the hyperplane $(y_{i_{1}} = 0)$ is written as $$X_{n}\cap (y_{i_{1}} = 0) = \{F_{i_{1},0}({\bf z}_{i_{1}}) = 0\}.$$ This implies that $(y_{i_{1}}=0, {\bf z}''_{i_{1}}) = \rho^{-1}_{i_{1}}(y_{i_{1}} = q, {\bf z}_{i_{1}}'') = (x_{i_{1}}=\infty, {\bf z}_{i_{1}})$ is a point on $X_{n}$; that is, $(x_{i_{1}}=\infty, {\bf z}_{i_{1}}) \in X_{n} \cap (x_{i_{1}}=\infty)$. This is a contradiction. From (i) and (ii), we conclude that no such relation exists. This completes the proof of Theorem \[main\] (2). Note that Theorem \[main\] (2) also holds for the cases $n = 1$ and $2$, though (1) does not. [**Acknowledgements:** ]{} The authors would like to express their sincere gratitude to their supervisor Professor Keiji Oguiso, who suggested this subject and has given much encouragement and invaluable and helpful advice. [aaaaaa]{} A. Bruno, K. Matsuki, *Log Sarkisov program*, Internat. J. Math. [**8**]{} no.4 (1997), 451-494. S. Cantat, K. Oguiso, *Birational automorphism group and the movable cone theorem for Calabi-Yau manifolds of Wehler type via universal Coxeter groups*, preprint [arXiv:1107.5862](http://arxiv.org/abs/1107.5862), to appear in Amer. J. Math. A. Corti, *Factoring birational maps of threefolds after Sarkisov*, J. Algebraic Geom. [**4**]{} no.2 (1995), 223-254. T. de Fernex, *Birational transformations of varieties*, University of Illinois at Chicago Ph. D. Thesis (2002). M. H. Gizatullin, *The decomposition, inertia and ramification groups in birational geometry*, Algebraic Geometry and its Applications, Aspects of Math. [**E25**]{} (1994), 39-45. V. A. Iskovskikh, *Birational rigidity of Fano hypersurfaces in the framework of Mori theory*, Usp. Mat. Nauk [**56**]{} no.2 (2001), 3-86; English transl., Russ. Math. Surveys [**56**]{} no.2 (2001), 207-291. V. A. Iskovskikh, Yu. I.
Manin, *Three-dimensional quartics and counterexamples to the Lüroth problem*, Mat. Sb. [**86**]{} no.1 (1971), 140-166; English transl., Math. Sb. [**15**]{} no.1 (1971), 141-166. Y. Kawamata, *Flops connect minimal models*, Publ. Res. Inst. Math. Sci. [**44**]{} no.2 (2008), 419-423. S. Kleiman, *Toward a numerical theory of ampleness*, Ann. of Math. [**84**]{} no.3 (1966), 293-344. K. Matsuki, *Introduction to the Mori Program*, Universitext, Springer, New York (2002). K. Oguiso, *Quartic K3 surfaces and Cremona transformations*, Arithmetic and geometry of $K3$ surfaces and Calabi-Yau threefolds, Fields Inst. Commun. [**67**]{}, Springer, New York (2013), 455-460. N. Takahashi, *Sarkisov program for log surfaces*, Tokyo University Master Thesis, 1995. N. Takahashi, *An application of Noether-Fano inequalities*, Math. Z. [**228**]{} no.1 (1998), 1-9.
--- abstract: 'The Particle Swarm Optimization (PSO) algorithm is developed for solving the Schaffer F6 function in fewer than $4000$ function evaluations on a total of $30$ runs. Four variations of the Full Model of the PSO algorithm are presented, consisting of combinations of Ring and Star topologies with Synchronous and Asynchronous particle updates.' author: - title: Particle Swarm and EDAs --- particle swarm optimization, full model, asynchronous update Introduction ============ The Full Model PSO can be realized using combinations of Ring and Star topologies with Synchronous and Asynchronous Particle Updates. The four types of Particle Swarm Optimization (PSO) model are the Full Model, Cognition Model, Social Model, and Selfless Model. The Full Model learns from itself and others: $\phi_{1} > 0$, $\phi_{2} > 0$. The Cognition Model learns from itself: $\phi_{1} > 0$, $\phi_{2} = 0$. The Social Model learns from others: $\phi_{1} = 0$, $\phi_{2} > 0$. The Selfless Model learns from others, $\phi_{1} = 0$, $\phi_{2} > 0$, except for the best particle in the swarm, which learns from changing itself randomly ($g \neq i$) [@b4]. There are two types of PSO topologies: Ring and Star. The star topology is dynamic, but the ring topology is not. For the star neighborhood topology, the social component of the particle velocity update reflects information obtained from all the particles in the swarm [@b1]. There are two types of particle update methods: asynchronous and synchronous. The asynchronous method updates the particles one at a time, while the synchronous method updates the particles all at once. The asynchronous update method is similar to the Steady-State Genetic Algorithm update method, while the synchronous update method is similar to the Generational Genetic Algorithm update method.
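The four models above differ only in which terms of the velocity update survive. A minimal one-dimensional sketch (the function name and signature are our own, not taken from [@b4]):

```python
import random

def velocity_update(v, x, pbest, nbest, w=1.0, phi1=2.05, phi2=2.05):
    """One-dimensional PSO velocity update.

    Full Model: phi1 > 0 and phi2 > 0.  Cognition Model: phi2 = 0.
    Social Model: phi1 = 0 (the Selfless Model additionally forbids the
    swarm's best particle from following itself, g != i).
    """
    r1, r2 = random.random(), random.random()
    return w * v + phi1 * r1 * (pbest - x) + phi2 * r2 * (nbest - x)
```

Setting `phi2 = 0` makes the result independent of the neighborhood best, recovering the Cognition Model.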
The Asynchronous Particle Update Method allows for newly discovered solutions to be used more quickly [@b4]. Synchronous updates are done separately from particle position updates. Asynchronous updates calculate the new best positions after each particle position update and have the advantage of being given immediate feedback about the best regions of the search space. Feedback with synchronous updates is only given once per iteration. Carlisle and Dozier reason that asynchronous updates are more important for *lbest* PSO, where immediate feedback will be more beneficial in loosely connected swarms, while synchronous updates are more appropriate for *gbest* PSO [@b1]. Having the algorithm terminate when a maximum number of iterations, or function evaluations, has been exceeded is useful when the objective is to evaluate the best solution found in a restricted time period [@b1]. Methodology =========== In PSO, the vectors are $\textbf{x} = <x_{k0},x_{k1},...,x_{kn-1}>$, $\textbf{p} = <p_{k0},p_{k1},...,p_{kn-1}>$, and $\textbf{v} = <v_{k0},v_{k1},...,v_{kn-1}>$, where $k$ represents the particle and $n$ represents the dimension. The x-vector represents the current position in the search space. The p-vector represents the location of the best solution found so far by the particle. The v-vector represents the gradient (direction) in which the particle will travel if undisturbed [@b4]. The fitness values are $x_{fitness}(i)$ and $p_{fitness}(i)$. The x-fitness records the fitness of the x-vector. The p-fitness records the fitness of the p-vector [@b4]. Ring Topology with Synchronous Particle Update PSO -------------------------------------------------- The Ring Topology with Synchronous Particle Update PSO (RS PSO) is used for a sparsely connected population so as to speed up convergence. In this case the particles have a predefined neighborhood based on their location in the topological space.
The connections between the particles increase the convergence speed, which causes the swarm to focus the search on local optima by exploiting the information of solutions found in the neighborhood. The synchronous update provides feedback about the best region of the search space once every iteration, when all the particles have moved at least once from their previous position. Ring Topology with Asynchronous Particle Update PSO --------------------------------------------------- In the Ring Topology with Asynchronous Particle Update PSO (RA PSO), information moves at a slower rate through the social network, so convergence is slower, but larger parts of the search space are covered compared to the star structure. This provides better performance, in terms of the quality of solutions found, for multi-modal problems than the star structure. Asynchronous updates provide immediate feedback about the best regions of the search space, while synchronous updates only provide feedback once per iteration. Star Topology with Synchronous Particle Update PSO -------------------------------------------------- The Star Topology with Synchronous Particle Update PSO (SS PSO) uses a global neighborhood with the star topology. Whenever searching for the best particle, it checks every particle in the swarm instead of just the neighborhood of three used in a ring topology. The synchronous update only provides feedback once each cycle, so all the particles in the swarm will update their positions before more feedback is provided, instead of checking whether one of the recently updated particles has a better fitness than the particle deemed best at the beginning of the cycle. Star Topology with Asynchronous Particle Update PSO --------------------------------------------------- The Star Topology with Asynchronous Particle Update PSO (SA PSO) updates particles one at a time, which allows newly discovered solutions to be used more quickly.
The Star Topology uses a global neighborhood, meaning that the entire swarm can communicate with one another and each particle bases its search off of the global best particle known to the swarm. The benefit of using a global neighborhood is that it allows for quicker convergence, since the best known particle is communicated to all the particles in the swarm. Experiment ========== The experiment consists of four instances of a Full Model PSO with a cognition learning rate, $\phi_{1}$, and a social learning rate, $\phi_{2}$, equal to 2.05. To regulate the velocity and improve the performance of the PSO, the constriction coefficient is implemented to ensure convergence. The inertia weight, $\omega$, is also implemented to control the exploration and exploitation abilities of the swarm. Both topologies in this experiment use an $\omega$ value of 1.0, in order to facilitate exploration and increase diversity. The particles in this experiment are updated in two different ways: synchronously and asynchronously. Asynchronous Particle Update is a method that updates particles one at a time and allows newly discovered solutions to be used more quickly, while Synchronous Particle Update is a method that updates all the particles at once. The four instances of the PSO are variations of the two Particle Update methods and the two topologies described. With these four instances of the PSO, a population of 30 particles is evolved and each particle’s fitness is evaluated; this is done 30 times for each PSO. The number of function evaluations is observed after each population of 30 is evolved, and these 30 best function evaluation values for 30 runs are used to perform ANOVA tests and T-Tests to determine the equivalence classes of the four instances of the PSO. Ring Topology with Synchronous Particle Update PSO -------------------------------------------------- The RS PSO updates synchronously at the end of every iteration.
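The text does not spell out the constriction coefficient; assuming the standard Clerc–Kennedy form with $\phi = \phi_{1} + \phi_{2} = 4.1$, it works out as follows (a sketch, not the authors' code):

```python
import math

def constriction(phi1=2.05, phi2=2.05):
    """Clerc-Kennedy constriction coefficient chi; requires phi1 + phi2 > 4."""
    phi = phi1 + phi2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

chi = constriction()  # approximately 0.7298 for phi = 4.1
```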
It uses ring topology to compare and select the best solution within the neighborhood of three. Ring Topology with Asynchronous Particle Update PSO --------------------------------------------------- The RA PSO updates asynchronously, which allows for quick updates, and uses ring topology to compare solutions within a neighborhood of three. Star Topology with Synchronous Particle Update PSO -------------------------------------------------- The SS PSO updates synchronously, which only allows for one update per iteration, and uses star topology to compare solutions within a global neighborhood. Star Topology with Asynchronous Particle Update PSO --------------------------------------------------- The SA PSO updates asynchronously, which allows for quicker updates on newly discovered solutions. The star topology uses a global neighborhood to compare solutions, which allows for quicker convergence. Results =======

  **Run**         ***RS***    ***RA***    ***SS***    ***SA***
  --------------- ----------- ----------- ----------- -----------
  1               4000        77          129         75
  2               4000        71          57          72
  3               82          82          82          65
  4               62          60          4000        71
  5               4000        72          49          56
  6               72          4000        48          4000
  7               95          4000        83          189
  8               45          4000        4000        4000
  9               71          54          4000        4000
  10              61          68          91          4000
  11              4000        66          38          89
  12              50          4000        71          4000
  13              4000        4000        4000        4000
  14              4000        72          4000        4000
  15              4000        65          4000        4000
  16              4000        57          4000        146
  17              54          69          58          4000
  18              76          81          65          53
  19              58          77          47          4000
  20              4000        95          4000        4000
  21              55          4000        89          56
  22              90          65          51          4000
  23              4000        72          4000        4000
  24              4000        4000        4000        73
  25              90          4000        55          52
  26              55          4000        4000        4000
  27              4000        58          61          40
  28              65          4000        47          4000
  29              62          4000        110         4000
  30              68          4000        68          64
  ***Average***   1640.3667   1642.0333   1509.9667   2170.0333

  : PSO Fitness Data Set[]{data-label="table_PSO_Dataset"}

  **Groups**   **Count**   **Sum**   **Average**   **Variance**
  ------------ ----------- --------- ------------- --------------
  ***RS***     30          49211     1640.3667     2840033.757
  ***RA***     30          49261     1642.0333     3834547.482
  ***SS***     30          45299     1509.9667     3713758.378
  ***SA***     30          65101     2170.0333     3959879.413

  : Anova Test Summary[]{data-label="table_Anova_Single_Factor_Summary"}

  **Source**      **SS**      **df**   **MS**    **F**   **P-value**   **F crit**
  --------------- ----------- -------- --------- ------- ------------- ------------
  ***Between***   7721005     3        2573668   0.7     0.6           2.7
  ***Within***    445098352   116      3837055                         
  ***Total***     452819357   119                                      

  : Anova Test Variation Summary[]{data-label="table_Anova_Var"}

The results place all four algorithms in the same equivalence class using both the ANOVA and Student T-tests. The ANOVA test of the four algorithms yields a p-value of 0.57, so the F-Test is then performed to determine which two-tailed two-sample T-Test to use. In each comparison between algorithms, the T-Test results in a t Stat value that is smaller than the t Critical value, therefore the null hypothesis is accepted. The data set used is shown in Table \[table\_PSO\_Dataset\], while the ANOVA test results are shown in Tables \[table\_Anova\_Single\_Factor\_Summary\] and \[table\_Anova\_Var\]. Representative T-tests are shown in Tables \[table\_tTest\_RS\_SS\] and \[table\_tTest\_SS\_SA\].
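The reported statistics can be reproduced from the table values alone. A minimal check of the pooled two-sample t statistic, using the RS vs. SS numbers of Table \[table\_tTest\_RS\_SS\] (the helper name is ours):

```python
import math

def pooled_t(mean1, var1, n1, mean2, var2, n2):
    """Two-sample t statistic assuming equal variances (pooled)."""
    sp2 = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))

# RS vs. SS summary values from the t-test table: t comes out near 0.26,
# well below the two-tailed critical value 2.0017, so the null hypothesis
# of equal means is retained.
t = pooled_t(1640.367, 3840033.757, 30, 1509.967, 3713758.378, 30)
```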
Ring Topology with Synchronous Particle Update PSO --------------------------------------------------

                                     **RS**        **SS**
  ---------------------------------- ------------- ------------
  **Mean**                           1640.367      1509.967
  **Variance**                       3840033.757   3713758.37
  **Observations**                   30            30
  **Pooled Variance**                3776896.068   
  **Hypothesized Mean Difference**   0             
  **df**                             58            
  **t Stat**                         0.2599        
  **P(T${<=}$t) one-tail**           0.3979        
  **t Critical one-tail**            1.6716        
  **P(T${<=}$t) two-tail**           0.7959        
  **t Critical two-tail**            2.0017        

  : t-Test: Two-Sample Assuming Equal Variances[]{data-label="table_tTest_RS_SS"}

The RS PSO performs better than the SA PSO, as observed from the T-test. It provides solutions of comparable quality to the RA PSO but is slower than the RA PSO, as it waits for all the particles to be updated. The SS PSO outperforms the RS PSO by a significant margin, as it has an appreciably lower mean than the RS PSO in the T-test. The T-test is shown in Table \[table\_tTest\_RS\_SS\]. Ring Topology with Asynchronous Particle Update PSO --------------------------------------------------- The RA PSO results in better quality solutions than the SA PSO, since larger parts of the search space are covered compared to the star structure. Using the RA PSO, solutions are found more quickly than when using an RS PSO. The SS PSO is a relatively slower algorithm and results in solutions of lesser quality. Star Topology with Synchronous Particle Update PSO -------------------------------------------------- The SS PSO is found to be in the same equivalence class as all the other algorithms in the experiment. However, the mean value of the SS PSO is slightly smaller than the mean values of the other three algorithms. It appears that it is able to find solutions slightly more quickly than the algorithms using the ring topology, as it compares solutions using a global neighborhood, allowing for quicker convergence.
The T-test is shown in Table \[table\_tTest\_SS\_SA\]. Star Topology with Asynchronous Particle Update PSO ---------------------------------------------------

                                     **SS**        **SA**
  ---------------------------------- ------------- -------------
  **Mean**                           1509.967      2170.033
  **Variance**                       3713758.378   3959879.413
  **Observations**                   30            30
  **Pooled Variance**                3836818.895   
  **Hypothesized Mean Difference**   0             
  **df**                             58            
  **t Stat**                         -1.305        
  **P(T${<=}$t) one-tail**           0.0985        
  **t Critical one-tail**            1.6716        
  **P(T${<=}$t) two-tail**           0.1970        
  **t Critical two-tail**            2.0017        

  : t-Test: Two-Sample Assuming Equal Variances \[table\_tTest\_SS\_SA\]

The SA PSO is found to be in the same equivalence class as all of the other algorithms in this experiment. The mean value of the SA PSO is larger than the mean values of the other three algorithms, and the F value is found to be larger than the F crit value when comparing the SA PSO to each of the other algorithms as well, so the T-Test: Two-Sample Assuming Equal Variances is performed. In each comparison of the SA PSO to the other three algorithms, the T-Test results in a t Stat value that is smaller than the t Critical two-tail value, therefore the null hypothesis that the hypothesized mean difference is zero is accepted. Comparison of Run Times ----------------------- A comparison of the run times for each algorithm shows that the asynchronous algorithms run more quickly than the synchronous algorithms, and the ring algorithms result in longer run times than the star algorithms. The RA PSO algorithm has an average runtime of 22.56 seconds, based on 30 runs of the algorithm. The RS PSO algorithm has an average runtime of 18.07 seconds, based on 30 runs of the algorithm. The SS PSO algorithm has an average runtime of 7.07 seconds, based on 30 runs of the algorithm. The SA PSO algorithm has an average runtime of 4.65 seconds, based on 30 runs of the algorithm. The SA PSO algorithm has the smallest run time, while the RA PSO algorithm has the longest run time. Conclusions =========== The four different types of PSO are significant in their own way and have different applications. The Ring and Star topologies determine the scope of feedback, whereas the choice of synchronous or asynchronous method decides the timing of feedback. The results indicate that all four algorithms are in the same equivalence class, so there is no statistically significant difference in their performance. The T-tests indicate that the best quality solutions are provided by the Star Synchronous algorithm. The SA PSO algorithm is the quickest algorithm, while the RA PSO algorithm is the slowest. These results are as expected and show that the asynchronous algorithms are quicker than the synchronous algorithms and the star algorithms have a significantly smaller run time than that of the ring algorithms. Breakdown of the Work ===================== Alison Jenkins - RA PSO and Introduction, Methodology, (Introduction). RA PSO part in Methodology, Experiment, and Results sections of [LaTeX]{} report. Vinika Gupta - RS PSO and Methodology (Modification and Conclusion). RS PSO part in Methodology, Experiment, and Results sections. Full editing and modification of [LaTeX]{} report. Alexis Myrick - SS PSO and Result. SS PSO part in Methodology, Experiment, and Results sections of [LaTeX]{} report. Mary Lenoir - SA PSO and Experiment. SA PSO part in Methodology, Experiment, and Results sections of [LaTeX]{} report. [00]{} Engelbrecht, Andries P. *Computational Intelligence: An Introduction*. John Wiley & Sons, 2007. Joseph, Anthony D., et al. *Adversarial Machine Learning*. Cambridge University Press, 2018. Sarkar, Dipanjan. *Text Analytics with Python*. Apress, 2016. Dozier, J. *Computational Intelligence and Adversarial Machine Learning: Particle Swarm Optimization*.
Powerpoint Presentation, COMP6970 - Computational Intelligence and Adversarial Machine Learning class, Auburn University, 2019.
--- abstract: 'This report reviews the recent experimental results from the CLAS collaboration (Hall B of Jefferson Lab, or JLab) on Deeply Virtual Compton Scattering (DVCS) and Deeply Virtual Meson Production (DVMP) and discusses their interpretation in the framework of Generalized Parton Distributions (GPDs). The impact of the experimental data on the applicability of the GPD mechanism to these exclusive reactions is discussed. Initial results obtained from JLab 6 GeV data indicate that DVCS might already be interpretable in this framework while GPD models fail to describe the exclusive meson production (DVMP) data with the GPD parameterizations presently used. An exception is the $\phi$ meson production for which the GPD mechanism appears to apply. The recent global analyses aiming to extract GPDs from fitting DVCS CLAS and world data are discussed. The GPD experimental program at CLAS12, planned with the upcoming 12 GeV upgrade of JLab, is briefly presented.' author: - | Hyon-Suk Jo\ \ Institut de Physique Nucléaire d’Orsay, 91406 Orsay, France title: '[**Deeply Virtual Compton Scattering and Meson Production at JLab/CLAS**]{}' --- Introduction {#introduction .unnumbered} ============ Generalized Parton Distributions take the description of the complex internal structure of the nucleon to a new level by providing access to, among other things, the correlations between the (transverse) position and (longitudinal) momentum distributions of the partons in the nucleon. They also give access to the orbital momentum contribution of partons to the spin of the nucleon. GPDs can be accessed via Deeply Virtual Compton Scattering and exclusive meson electroproduction, processes where an electron interacts with a parton from the nucleon by the exchange of a virtual photon and that parton radiates a real photon (in the case of DVCS) or hadronizes into a meson (in the case of DVMP). 
The amplitude of the studied process can be factorized into a hard-scattering part, exactly calculable in pQCD or QED, and a non-perturbative part, representing the soft structure of the nucleon, parametrized by the GPDs. In the leading-twist and leading-order approximation, there are four independent quark helicity conserving GPDs for the nucleon: $H$, $E$, $\tilde{H}$ and $\tilde{E}$. These GPDs are functions of three variables $x$, $\xi$ and $t$, among which only $\xi$ and $t$ are experimentally accessible. The quantities $x+\xi$ and $x-\xi$ represent respectively the longitudinal momentum fractions carried by the initial and final parton. The variable $\xi$ is linked to the Bjorken variable $x_{B}$ through the asymptotic formula: $\xi=\frac{x_{B}}{2-x_{B}}$. The variable $t$ is the squared momentum transfer between the initial and final nucleon. Since the variable $x$ is not experimentally accessible, only Compton Form Factors, or CFFs (${\cal H}$, ${\cal E}$, $\tilde{{\cal H}}$ and $\tilde{{\cal E}}$), whose real parts are weighted integrals of GPDs over $x$ and whose imaginary parts are combinations of GPDs along the lines $x=\pm\xi$, can be extracted. The reader is referred to Refs. [@gpd1; @gpd2; @gpd3; @gpd4; @gpd5; @gpd6; @gpd7; @gpd8; @vgg1; @vgg2; @bmk] for detailed reviews on the GPDs and the theoretical formalism. Deeply Virtual Compton Scattering {#deeply-virtual-compton-scattering .unnumbered} ================================= ![Handbag diagram for DVCS (left) and diagrams for Bethe-Heitler (right), the two processes contributing to the amplitude of the $eN \to eN\gamma$ reaction.[]{data-label="fig:diagrams"}](dvcs_bh.png){height="0.11\textheight"} Among the exclusive reactions allowing access to GPDs, Deeply Virtual Compton Scattering (DVCS), which corresponds to the electroproduction of a real photon off a nucleon $eN \to eN\gamma$, is the key reaction since it offers the simplest, most straightforward theoretical interpretation in terms of GPDs.
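As a quick numerical illustration of the kinematic relations recalled above, the following Python sketch evaluates the asymptotic skewness and the parton momentum fractions; the input values ($x_{B}=0.3$, $x=0.25$) are purely illustrative and are not taken from the data discussed here.

```python
# Kinematic relations quoted above: the skewness xi is linked to the
# Bjorken variable x_B by the asymptotic formula xi = x_B / (2 - x_B),
# and x + xi, x - xi are the longitudinal momentum fractions carried by
# the initial and final parton, respectively.

def skewness(x_B):
    """Asymptotic skewness xi for a given Bjorken x_B."""
    return x_B / (2.0 - x_B)

def parton_fractions(x, xi):
    """Momentum fractions of the initial and final parton."""
    return x + xi, x - xi

# Illustrative valence-region kinematics (assumed values):
xi = skewness(0.3)                       # 0.3 / 1.7, roughly 0.176
x_in, x_out = parton_fractions(0.25, xi)
```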
The DVCS amplitude interferes with the amplitude of the Bethe-Heitler (BH) process, which leads to the exact same final state. In the BH process, the real photon is emitted by either the incoming or the scattered electron, while in the case of DVCS it is emitted by the target nucleon (see Figure \[fig:diagrams\]). Although these two processes are experimentally indistinguishable, the BH is well known and exactly calculable in QED. At current JLab energies (6 GeV), the BH process is highly dominant (in most of the phase space), but the DVCS process can be accessed via the interference term arising from the two processes. With a polarized beam and/or a polarized target, different types of asymmetries can be extracted: beam-spin asymmetries ($A_{LU}$), longitudinally polarized target-spin asymmetries ($A_{UL}$), transversely polarized target-spin asymmetries ($A_{UT}$), and double-spin asymmetries ($A_{LL}$, $A_{LT}$). Each type of asymmetry gives access to a different combination of Compton Form Factors. ![DVCS beam-spin asymmetries as a function of $-t$, for different values of $Q^{2}$ and $x_{B}$. The (black) circles represent the latest CLAS results [@bsa3], the (red) squares and the (green) triangles are the results, respectively, from Ref. [@bsa1] and Ref. [@halla]. The black dashed curves represent Regge calculations [@jml]. The blue curves correspond to the GPD calculations of Ref. [@vgg1] (VGG) at twist-2 (solid) and twist-3 (dashed) levels, with the contribution of the GPD $H$ only.[]{data-label="fig:bsa"}](bsa.png){height="0.34\textheight"} The first results on DVCS beam-spin asymmetries published by the CLAS collaboration were extracted using data from non-dedicated experiments [@bsa1; @bsa2]. Also using non-dedicated data, CLAS published DVCS longitudinally polarized target-spin asymmetries in 2006 [@tsa].
In 2005, the first part of the e1-DVCS experiment was carried out in Hall B of JLab using the CLAS spectrometer [@clas] and an additional electromagnetic calorimeter, made of 424 lead-tungstate scintillating crystals read out via avalanche photodiodes, specially designed and built for the experiment. This additional calorimeter was located at forward angles, where the DVCS/BH photons are mostly emitted, as the standard CLAS configuration does not allow detection at those forward angles. This first CLAS experiment dedicated to DVCS measurements, with this upgraded setup allowing a fully exclusive measurement, ran using a 5.766 GeV polarized electron beam and a liquid-hydrogen target. From the data of this experiment, CLAS published in 2008 the largest set of DVCS beam-spin asymmetries ever extracted in the valence quark region [@bsa3]. Figure \[fig:bsa\] shows the corresponding results as a function of $-t$ for different bins in ($Q^{2}$, $x_{B}$). The predictions using the GPD model from VGG (Vanderhaeghen, Guichon, Guidal) [@vgg1; @vgg2] overestimate the asymmetries at low $|-t|$, especially for small values of $Q^{2}$, which can be expected since the GPD mechanism is only supposed to be valid at high $Q^{2}$. Regge calculations [@jml] are in fair agreement with the results at low $Q^{2}$ but fail to describe them at high $Q^{2}$, as expected. We are currently working on extracting DVCS unpolarized and polarized absolute cross sections from the e1-DVCS data [@hsj]. Having both the beam-spin asymmetries and the longitudinally polarized target-spin asymmetries, a largely model-independent GPD analysis in leading twist was performed, fitting simultaneously the values for $A_{LU}$ and $A_{UL}$ obtained with CLAS at three values of $t$ and fixed $x_{B}$, to extract numerical constraints on the imaginary parts of the Compton Form Factors (CFFs) ${\cal H}$ and $\tilde{{\cal H}}$, with average uncertainties of the order of 30% [@guidal_clas].
Before that, the same analysis was performed fitting the DVCS unpolarized and polarized cross sections published by the JLab Hall A collaboration [@halla] to extract numerical constraints on the real and imaginary parts of the CFF ${\cal H}$ [@guidal_fitter_code]. Another GPD analysis in leading twist, assuming the dominance of the GPD $H$ (the contributions of $\tilde{H}$, $E$ and $\tilde{E}$ being neglected) and using the CLAS $A_{LU}$ data as well as the DVCS JLab Hall A data, was performed to extract constraints on the real and imaginary parts of the CFF ${\cal H}$ [@moutarde]. Similar analyses were performed using results published by the HERMES collaboration [@guidal_moutarde; @guidal_hermes]. A third approach was developed, using a model-based global fit to the available world data to calculate the real and imaginary parts of the CFF ${\cal H}$ [@km]. When we compare the results of those different analyses for the imaginary part of ${\cal H}$, they appear to be relatively compatible (such a comparison plot can be found in Ref. [@km2]). Deeply Virtual Meson Production {#deeply-virtual-meson-production .unnumbered} =============================== The CLAS collaboration published several results on pseudoscalar meson electroproduction ($\pi^{0}$, $\pi^{+}$) [@ps1; @ps2]. However, those are not reviewed in this paper, which restricts itself to vector mesons. CLAS published cross-section measurements for the following vector mesons: $\rho^{0}$ [@rho01; @rho02], $\omega$ [@omega] and $\phi$ [@phi1; @phi2], contributing significantly to the world data on vector mesons with measurements in the valence quark region, corresponding to low $W$ ($W<5$ GeV). First measurements of $\rho^{+}$ electroproduction are being extracted from the e1-DVCS data mentioned above [@rho+]. ![Longitudinal cross sections for $\rho^{0}$ as a function of $W$ at fixed $Q^{2}$ (CLAS and world data). The results from CLAS are shown as full circles. The blue curves are VGG GPD-based predictions.
The red curves represent GK GPD-based predictions: total (solid), valence quarks (dashed), sea quarks and gluons (dot-dashed).[]{data-label="fig:rho"}](rho_vgg_gk.png){height="0.37\textheight"} As the leading-twist handbag diagram is only valid for the longitudinal part of the cross section of those vector mesons, it is required to separate the longitudinal and transverse parts of the cross sections extracted from the experimental data by analyzing the decay angular distribution of the meson. Figure \[fig:rho\] shows the longitudinal cross sections of the $\rho^{0}$ meson production $\sigma_{L}(\gamma^{*} p \to p \rho^{0})$ as a function of $W$ at fixed $Q^{2}$, for different bins in $Q^{2}$. As a function of increasing $W$, those cross sections first drop at low $W$ ($W<5$ GeV, corresponding to the valence quark region) and then rise slightly at higher $W$. The longitudinal cross sections of the $\omega$ meson production seem to show the same behavior as a function of $W$ as the one observed for the $\rho^{0}$ meson. The GPD-based predictions from VGG and from GK (Goloskokov, Kroll) [@gk] describe those results quite well at high $W$, but both GPD models fail by a large margin to reproduce the behavior at low $W$ (see the curves on Figure \[fig:rho\]). The $\phi$ meson production, which is mostly sensitive to gluon GPDs, is a different case, as its longitudinal cross sections as a function of $W$ show a different behavior, rising continuously with increasing $W$ all the way from the lowest $W$ region; these cross sections are very well described by the GPD model predictions [@gk_phi]. The reason why the GPD models fail to describe the data for the $\rho^{0}$ and $\omega$ mesons at low $W$ (valence quark region) is unclear at this point. The handbag mechanism might not be dominant in the low $W$ valence region, as the minimum value of $|-t|$ increases with decreasing $W$ and higher-twist effects grow with $t$.
Another possibility is that the handbag mechanism might actually be dominant in the low $W$ valence region but an important contribution is missing in the GPD models. DVCS and DVMP at CLAS12 {#dvcs-and-dvmp-at-clas12 .unnumbered} ======================= With the upcoming 12 GeV upgrade of JLab’s CEBAF accelerator, the instrumentation in the experimental halls will be upgraded as well. In Hall B, the CLAS detector will be replaced by the new CLAS12 spectrometer, under construction, with the study of Generalized Parton Distributions as one of the highest priorities of its future experimental program. The experiments currently proposed have the following goals:

- DVCS beam-spin asymmetries on the proton,
- DVCS longitudinal target-spin asymmetries on the proton,
- DVCS transverse target-spin asymmetries on the proton,
- DVCS on the neutron,
- DVCS unpolarized and polarized cross sections,
- DVMP: pseudoscalar meson electroproduction,
- DVMP: vector meson electroproduction.

To study DVCS on the neutron, a central neutron detector was designed to be added to the base equipment of the CLAS12 spectrometer. A combined analysis of DVCS on the proton and on the neutron allows flavor separation of GPDs. ![The CLAS12 detector currently under construction.[]{data-label="fig:clas12"}](clas12.pdf){height="0.3\textheight"} JLab 12 GeV will provide high luminosity (L$\sim10^{35}$cm$^{-2}$s$^{-1}$) for high accuracy measurements to study GPDs in the valence quark region and test the models on a large $x_{B}$ scale. The new CLAS12 spectrometer, with its large acceptance allowing measurements over a large kinematic range, will be perfectly suited to a rich GPD experimental program. Conclusions {#conclusions .unnumbered} =========== The CLAS collaboration produced the largest set of data for DVCS and exclusive vector meson production ever extracted in the valence quark region.
The VGG GPD model agrees fairly well with the DVCS asymmetry data at high $Q^{2}$ but fails to reproduce them at lower $Q^{2}$. As for the exclusive vector meson data, GPD models describe well the longitudinal cross sections at high $W$ (the region corresponding to sea quarks and/or gluons), which seem to be interpretable in terms of the leading-twist handbag diagram (quark/gluon GPDs), but fail by a large margin for $W<5$ GeV (corresponding to the valence quark region), except for the $\phi$ meson, for which the GPD formalism seems to apply. We need experimental data at higher $Q^{2}$, while staying in the valence quark region, to extend the DVCS data over a larger kinematic domain and provide more constraints for the GPD models, and to test the validity regime of the GPD mechanism for DVMP. JLab 12 GeV will provide high luminosity for high accuracy measurements to test models on a large $x_{B}$ scale and thus will be a great facility to study GPDs in the valence quark region. The new CLAS12 spectrometer, with its large acceptance, will be well suited for a rich and exciting GPD experimental program. Acknowledgments {#acknowledgments .unnumbered} =============== Thanks to P. Stoler and R. Ent for the opportunity to give this presentation. Thanks to M. Guidal and S. Niccolai for useful discussions and for providing slides used for the preparation of this talk. [99]{} D. Müller, D. Robaschik, B. Geyer, F.-M. Dittes, and J. Horejsi, Fortschr. Phys. [**42**]{}, 101 (1994). X. Ji, Phys. Rev. Lett. [**78**]{}, 610 (1997); Phys. Rev. D [**55**]{}, 7114 (1997). A.V. Radyushkin, Phys. Lett. B [**380**]{}, 417 (1996); Phys. Rev. D [**56**]{}, 5524 (1997). J.C. Collins, L. Frankfurt and M. Strikman, Phys. Rev. D [**56**]{}, 2982 (1997). K. Goeke, M.V. Polyakov and M. Vanderhaeghen, Prog. Part. Nucl. Phys. [**47**]{}, 401 (2001). M. Diehl, Phys. Rept. [**388**]{}, 41 (2003). A.V. Belitsky, A.V. Radyushkin, Phys. Rept. [**418**]{}, 1 (2005). M. Guidal, Prog. Part. Nucl. Phys. [**61**]{}, 89 (2008). M. Vanderhaeghen, P.A.M.
Guichon, and M. Guidal, Phys. Rev. D [**60**]{}, 094017 (1999). M. Guidal, M.V. Polyakov, A.V. Radyushkin and M. Vanderhaeghen, Phys. Rev. D [**72**]{}, 054013 (2005). A. Belitsky, D. Müller and A. Kirchner, Nucl. Phys. B [**629**]{}, 323 (2002). S. Stepanyan [*et al.*]{} (CLAS Collaboration), Phys. Rev. Lett. [**87**]{}, 182002 (2001). G. Gavalian [*et al.*]{} (CLAS Collaboration), Phys. Rev. C [**80**]{}, 035206 (2009). S. Chen [*et al.*]{} (CLAS Collaboration), Phys. Rev. Lett. [**97**]{}, 072002 (2006). B. Mecking [*et al.*]{}, Nucl. Instrum. Meth. A [**503**]{}, 513 (2003). F.X. Girod [*et al.*]{} (CLAS Collaboration), Phys. Rev. Lett. [**100**]{}, 162002 (2008). J.M. Laget, Phys. Rev. C [**76**]{}, 052201(R) (2007). H.S. Jo, Ph.D. thesis, Université Paris-Sud, Orsay, France (2007). M. Guidal, Phys. Lett. B [**689**]{}, 156 (2010). C. Munoz Camacho [*et al.*]{} (JLab Hall A Collaboration), Phys. Rev. Lett. [**97**]{}, 262002 (2006). M. Guidal, Eur. Phys. J. A [**37**]{}, 319 (2008) \[Erratum-ibid. A [**40**]{}, 119 (2009)\]. H. Moutarde, Phys. Rev. D [**79**]{}, 094021 (2009). M. Guidal and H. Moutarde, Eur. Phys. J. A [**42**]{}, 71 (2009). M. Guidal, Phys. Lett. B [**693**]{}, 17 (2010). K. Kumeri[č]{}ki and D. Müller, Nucl. Phys. B [**841**]{}, 1 (2010). K. Kumeri[č]{}ki and D. Müller, arXiv:1008.2762 \[hep-ph\]. R. De Masi [*et al.*]{} (CLAS Collaboration), Phys. Rev. C [**77**]{}, 042201(R) (2008). K. Park [*et al.*]{} (CLAS Collaboration), Phys. Rev. C [**77**]{}, 015208 (2008). C. Hadjidakis [*et al.*]{} (CLAS Collaboration), Phys. Lett. B [**605**]{}, 256-264 (2005). S. Morrow [*et al.*]{} (CLAS Collaboration), Eur. Phys. J. A [**39**]{}, 5-31 (2009). L. Morand [*et al.*]{} (CLAS Collaboration), Eur. Phys. J. A [**24**]{}, 445-458 (2005). K. Lukashin [*et al.*]{} (CLAS Collaboration), Phys. Rev. C [**63**]{}, 065205 (2001). J. Santoro [*et al.*]{} (CLAS Collaboration), Phys. Rev. C [**78**]{}, 025210 (2008). A. Fradi, Ph.D.
thesis, Université Paris-Sud, Orsay, France (2009). S.V. Goloskokov and P. Kroll, Eur. Phys. J. C [**42**]{}, 281 (2005); Eur. Phys. J. C [**50**]{}, 829 (2007). S.V. Goloskokov, arXiv:0910.4308 \[hep-ph\].
--- abstract: 'We report the observation of simultaneous quantum degeneracy in a dilute gaseous Bose-Fermi mixture of metastable atoms. Sympathetic cooling of helium-3 (fermion) by helium-4 (boson), both in the lowest triplet state, allows us to produce ensembles containing more than $10^{6}$ atoms of each isotope at temperatures below 1 $\mu$K, and achieve a fermionic degeneracy parameter of $T/T_{F}=0.45$. Due to their high internal energy, the detection of individual metastable atoms with sub-nanosecond time resolution is possible, permitting the study of bosonic and fermionic quantum gases with unprecedented precision. This may lead to metastable helium becoming the mainstay of quantum atom optics.' author: - 'J. M. McNamara' - 'T. Jeltes' - 'A. S. Tychkov' - 'W. Hogervorst' - 'W. Vassen' bibliography: - 'mcbib.bib' title: 'A Degenerate Bose-Fermi Mixture of Metastable Atoms' --- The stable fermionic ($^3$He) and bosonic ($^4$He) isotopes of helium (in their ground state), as well as mixtures of the two, have long exhibited profound quantum properties in both the liquid and solid phases [@wilkj90]. More recently, the advent of laser cooling and trapping techniques heralded the production of Bose-Einstein condensates (BECs) [@ande95; @davi95] and the observation of Fermi degeneracy [@dema99; @trus01] in weakly interacting atomic gases. To date nine different atomic species have been Bose condensed, each exhibiting its own unique features besides many generic phenomena of importance to an increasing number of disciplines. We can expect that studies of degenerate fermions will have a similar impact, and indeed they have been the object of much study in recent years, culminating in the detection of Bardeen-Cooper-Schrieffer (BCS) pairs and their superfluidity [@zwie05]. However, only two fermionic species have so far been brought to degeneracy in the dilute gaseous phase: $^{40}$K [@dema99] and $^6$Li [@trus01].
Degenerate atomic Fermi gases have been difficult to realize for two reasons: firstly, evaporative cooling [@hess86] relies upon elastic rethermalizing collisions, which at the temperatures of interest ($<$ 1 mK) are primarily s-wave in nature and are forbidden for identical fermions; and secondly, the number of fermionic isotopes suitable for laser cooling and trapping is small. Sympathetic cooling [@lars86; @myat97] overcomes the limit to evaporative cooling by introducing a second component (spin-state, isotope or element) to the gas; thermalization between the two components then allows the mixture as a whole to be cooled. In 2001 a BEC of helium atoms in the metastable $2\;^{3}\textup{S}_{1}$ state (He\*) was realized [@robe01; @pere01]; more recently we reported the production of a He\* BEC containing a large ($>$ $10^7$) number of atoms [@tych06]. A quantum degenerate gas of He\* is unique in that the internal energy of the atoms (19.8 eV) is many orders of magnitude larger than their thermal energy ($10^{-10}$ eV per atom at 1 $\mu$K), allowing efficient single atom detection with a high temporal and spatial resolution in the plane of a microchannel plate (MCP) detector [@sche05]. In an unpolarized sample (as is the case in a magneto-optical trap (MOT)) the internal energy leads to large loss rates due to Penning ionization (PI) and associative ionization (AI) [@stas06]: $$\textup{He*}+\textup{He*}\rightarrow \textup{He}+\textup{He}^++e^- \quad ( \mbox{or}\quad \text{He}_{2}^{+}+e^-).$$ These losses are forbidden by angular momentum conservation in a spin-polarized He\* gas, in which all atoms have been transferred into a fully stretched magnetic substate. 
Spin-polarization suppresses ionizing losses by four orders of magnitude in the case of $^4$He\* [@shly94; @pere01]; it is only this suppression of the loss rate constant to an acceptable value of $\approx$ $10^{-14} \text{cm}^{3}/\text{s}$ [@tych06; @pere01] that has allowed the production of a BEC in this system [@robe01; @pere01; @tych06]. It has been shown that for $^4$He\* [@shly94] a very weak spin-dipole magnetic interaction can induce spin-flips and mediate PI/AI; far from being a hindrance, however, the ions produced during these inelastic collisions can allow us to monitor losses non-destructively and in real-time [@tych06; @seid04]. In $^3$He\* the hyperfine interaction splits the $2\;^{3}\textup{S}_{1}$ state into an inverted doublet ($F$=3/2 and $F$=1/2, where $F$ is the total angular momentum quantum number) separated by 6.7 GHz (Fig. \[fig1\]a). $\begin{array}{ccc} \scalebox{0.6}{\includegraphics{Fig1a}} & \scalebox{0.6}{\includegraphics{Fig1b}} & \scalebox{0.6}{\includegraphics{Fig1c}} \end{array}$ The only magnetically trappable state for which spin conservation should hold is the $|F,M_{F}\rangle =|3/2,+3/2\rangle$ substate (where $M_{F}$ is the projection of [*F*]{} on the quantization axis). Whether or not interactions would enhance spin-flip processes and the associated loss rate was unknown, and the prospect of a similarly acceptable level of suppression in the case of $^3$He\* (and indeed between $^3$He\* and $^4$He\*) was an open question before this work. Having realized the capability of producing large BECs of $^4$He\* [@tych06], and therefore large clouds of ultracold $^4$He\*, we use $^4$He\* to sympathetically cool a cloud of $^3$He\* into the quantum degenerate regime. In the manner demonstrated previously [@stas04], we have adapted our setup [@tych06] to allow the magneto-optical trapping of both He\* isotopes simultaneously.
The present configuration traps a mixture of $N_{^3\text{He*}}=7\times 10^8$ and $N_{^4\text{He*}}=1.5\times 10^9$ atoms simultaneously at a temperature of $\approx$ 1 mK; a complication here is the need for a repumper exciting the $^3$He\* C2 transition, due to the near (-811 MHz) coincidence of the $^4$He\* laser cooling and $^3$He\* C9 transitions (Fig. \[fig1\]a) [@stas04]. Unable to cool so much $^3$He\* [@carr04], we reduce the number of $^3$He\* atoms in the two-isotope MOT (TIMOT) to $\approx$ $10^7$ by either altering the ratio $^3$He:$^4$He in our helium reservoir or, more simply, by loading the TIMOT with $^3$He\* for a shorter period. Spin-polarization of the mixture to the $^3$He\* $|3/2,+3/2\rangle$ and $^4$He\* $|1,+1\rangle$ states prior to magnetic trapping not only suppresses PI and AI, but also enhances the transfer efficiency of the mixture into the magnetic trap. The application of 1D-Doppler cooling along the symmetry axis of our magnetic trap [@tych06] (Fig. \[fig1\]b) reduces the sample temperature to $T=0.13$ mK without loss of atoms, increasing the $^4$He\* phase space density by a factor of 600 to $\approx$ $10^{-4}$, greatly enhancing the initial conditions for evaporative cooling. We note at this point that the application of 1D-Doppler cooling to the $^4$He\* component already leads to sympathetic cooling of $^3$He\*, however the process appears to be more efficient if we actively cool both components simultaneously. During these experiments the lifetime of a pure sample of either $^3$He\* or $^4$He\* in the magnetic trap was limited by the background pressure in our ultra-high vacuum chamber to $\approx$ $110\;\mbox{s}$, whilst the lifetime of the mixture was only slightly shorter at $\approx$ $100\;\mbox{s}$, indicating that the suppression of PI and AI during $^3$He\*-$^3$He\* and $^3$He\*-$^4$He\* collisions works very well. 
In order to further increase the collision rate in our cloud, we adiabatically compress it during 200 ms by increasing the trap frequencies to their final radial and axial values: $\nu_{r}=273$ Hz and $\nu_{a}=54$ Hz for $^3$He\*, and $\nu_{r}=237$ Hz and $\nu_{a}=47$ Hz for $^4$He\* (the difference is due to their differing masses). We now perform forced evaporative cooling on the $^4$He\* component by driving radio-frequency (RF) transitions to the untrapped $M_{J}=0$ and -1 spin states (where $M_{J}$ is the projection of total electronic angular momentum quantum number J on the quantization axis), thereby sympathetically cooling $^3$He\*. The atoms couple only weakly to the magnetic field, and the energies of the various magnetic sub-states vary linearly with magnetic field: $E_{M_{F/J}}=g \mu_{B} M_{F/J}B$, where $g$ is the gyromagnetic ratio, $\mu_{B}$ the Bohr magneton, and $B$ the magnetic field strength (Fig. \[fig1\]c). Because of the differing $^3$He\* and $^4$He\* gyromagnetic ratios (4/3 and 2 respectively) the frequency, at any given B-field, for transitions between the magnetic substates in $^4$He\* is 3/2 times that of $^3$He\* (Fig. \[fig1\]c) and we only remove $^4$He\* during evaporative cooling (assuming that the mixture remains in thermal equilibrium). Furthermore, at the trap minimum ($B=3\;\mbox{G}$) the difference in transition frequencies is 2.8 MHz. Thus, when the temperature of the trapped sample is low enough ($<$ 20 $\mu$K) we may selectively remove either $^3$He\* or $^4$He\* from the trap by applying an appropriate RF sweep (despite having to drive two transitions in order to transfer $^3$He\* atoms into an untrapped magnetic substate). This allows us to perform measurements on the mixture as a whole, or on a single component. Upon release a time-of-flight (TOF) experiment is performed (Fig. \[fig1\]b); by fitting TOFs (Fig. 
\[fig2\]) with the applicable quantum statistical distribution functions we can extract the temperature of the gas and, having previously calibrated the MCP with absorption imaging [@tych06], the number of atoms. A single exponential ramp of the RF frequency to below 8.4 MHz removes all $^4$He\* atoms from the trap, and leads to the production of a pure degenerate Fermi gas of $^3$He\* (Fig. \[fig2\]a). An analysis of the TOF signals shows that we have achieved a maximum degeneracy ($N_{^3\text{He*}}=2.1\times 10^{6}$ at $T=0.8$ $\mu$K) of $T/T_{F}=0.45$ in a pure fermionic sample (Fig. \[fig2\]a), where the Fermi temperature is given by $k_{B}T_{F}=h(6N_{3}\nu_{r}^{2}\nu_{a})^{1/3}$, with $k_{B}$ Boltzmann’s constant and $h$ Planck’s constant. Alternatively we may halt the RF ramp just above 8.4 MHz and produce a quasi-pure BEC immersed in a degenerate Fermi gas (Fig. \[fig2\]b). Whilst recording a TOF we effectively integrate the density distribution of our sample over the two dimensions lying in the plane of our MCP detector, and the small difference between the non-gaussian distribution of our degenerate Fermi gas and the gaussian distribution of a classical gas becomes even less pronounced. It is therefore interesting to demonstrate the difference between the TOFs of classical gases and quantum gases explicitly, and confirm the result obtained above. As described by Schreck [@schreckthesis], we repeatedly fit a gaussian distribution to a single TOF; before each fit a varying fraction of the TOF peak center is removed. The population of low energy states is suppressed (enhanced) in a cloud displaying Fermi-Dirac (Bose-Einstein) statistics, and fitting a gaussian distribution to the whole TOF will lead to an overestimation (underestimation) of the cloud size and therefore the temperature of the sample. 
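Two of the numbers quoted above follow directly from the stated formulas and can be reproduced with a few lines of Python; this is only a consistency sketch using standard CODATA constants, not part of the original analysis.

```python
# Consistency check of quoted values: (i) RF transition frequencies at the
# trap minimum (B = 3 G), from the Zeeman splitting f = g * mu_B * B / h with
# g = 2 for 4He* and g = 4/3 for 3He*; (ii) the Fermi temperature from
# k_B * T_F = h * (6 N nu_r^2 nu_a)^(1/3).
h    = 6.62607015e-34     # Planck constant, J s
mu_B = 9.2740100783e-24   # Bohr magneton, J/T
k_B  = 1.380649e-23       # Boltzmann constant, J/K

B  = 3e-4                 # trap-bottom field, 3 G in tesla
f4 = 2.0 * mu_B * B / h           # ~8.4 MHz: the 4He* removal threshold
f3 = (4.0 / 3.0) * mu_B * B / h   # ~5.6 MHz, so f4 - f3 ~ 2.8 MHz

N, nu_r, nu_a = 2.1e6, 273.0, 54.0   # pure-3He* values quoted in the text
T_F = h * (6 * N * nu_r**2 * nu_a) ** (1.0 / 3.0) / k_B   # ~1.8 microkelvin
degeneracy = 0.8e-6 / T_F                                 # T/T_F ~ 0.45
```

The 3/2 frequency ratio between the isotopes and the 2.8 MHz difference at the trap bottom both drop out of the gyromagnetic ratios, consistent with the selective-removal scheme described above.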
By fitting only the more “classical” wings of a TOF these effects are negated, and the fitted temperature should either fall (fermions), rise (bosons), or stay constant in the case of a classical gas. The high signal-to-noise ratio of our TOF spectra allows us to see this behavior clearly (Fig. \[fig3\]). By taking the temperature from a TOF for which we have removed $1.75~\sigma_{0}$ (where $\sigma_{0}$ is the root-mean-square width of a gaussian fit to the entire TOF, see Fig. \[fig3\]a), and the number of atoms calculated by integrating the TOF, we again recover a degeneracy parameter of $T/T_{F}=0.5$. It is interesting to note that we have produced degenerate Fermi gases with evaporative cooling ramps as short as 2.5 s ($N_{^3\text{He*}}=4\times 10^{6}$ and $T/T_{F}=0.75$), signifying that the process of rethermalization is very efficient. At densities of $10^{10}-10^{12}$ atoms/cm$^3$ this indicates a large heteronuclear scattering length. Recently theory [@przy05] and experiment [@moal06] finally agreed upon the $^4$He\*-$^4$He\* scattering length ($a_{44}=7.64(20)$ and 7.512(5) nm respectively). An extension to the theory of Przybytek and Jeziorski [@przy05] suggests that the $^3$He\*-$^4$He\* scattering length should also be very large and positive ($a_{34}=+28.8^{+3.9}_{-3.3}$ nm) [@przyprivcomm]. Such a large heteronuclear scattering length leads us to expect that losses, in particular boson-boson-fermion (BBF) 3-body losses (which scale with $a^{4}$), will have a significant impact on the mixture. We can estimate (order of magnitude) the BBF 3-body loss rate constant by using $K^{BBF}_{3}=120 \hbar a_{34}^{4}\sqrt{d+2/d}/(m_{4} \sqrt{3})$ [@inca04], where $d$ is the $^3$He:$^4$He mass ratio, $m_{4}$ is the $^4$He mass and we assume the theoretical value of $a_{34}$ given above. 
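The loss-rate estimate above can be evaluated numerically; the short sketch below (standard atomic constants, and the theoretical value $a_{34}=28.8$ nm cited above) yields $\approx 1.4\times 10^{-24}$ cm$^6$/s.

```python
# Order-of-magnitude evaluation of the three-body loss-rate formula above:
# K3^BBF = 120 * hbar * a34^4 * sqrt(d + 2/d) / (m4 * sqrt(3)).
import math

hbar = 1.054571817e-34    # reduced Planck constant, J s
u    = 1.66053907e-27     # atomic mass unit, kg
m3   = 3.0160293 * u      # 3He mass
m4   = 4.0026032 * u      # 4He mass
d    = m3 / m4            # 3He:4He mass ratio, ~0.754
a34  = 28.8e-9            # theoretical 3He*-4He* scattering length, m

K3     = 120 * hbar * a34**4 * math.sqrt(d + 2 / d) / (m4 * math.sqrt(3))
K3_cm6 = K3 * 1e12        # convert m^6/s -> cm^6/s; ~1.4e-24
```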
This gives $K^{BBF}_{3}\approx 1.4\times 10^{-24}$ cm$^6$/s, indicating an atom loss rate that is 1-3 orders of magnitude larger than in the case of pure $^4$He\*, and a condensate lifetime ($\tau_{C}$) which is significantly shorter in the degenerate mixture than in the absence of $^3$He\*. These estimates are in agreement with initial observations that $\tau_{C}^{(3+4)}\sim 0.01$ s while $\tau_{C}^{(4)}\sim 1$ s [@tych06]. Given the large magnitude of $a_{34}$ and having seen no evidence for a collapse of the mixture, we may further suppose that $a_{34}$ is positive. This is then the first Bose-Fermi system to exhibit boson-fermion and boson-boson interactions which are both strong and repulsive. A possible disadvantage, however, may be that the system is only expected to be sufficiently stable against Penning ionization when the atoms are all in the fully stretched magnetic substates, hampering the exploitation of possible Feshbach or optical resonances. In conclusion we have successfully produced a degenerate Fermi gas of metastable $^3$He containing a large number of atoms with $T/T_{F}=0.45$ and have also seen that we can produce a degenerate Bose-Fermi mixture of $^3$He\* and $^4$He\*. This source of degenerate metastable fermions, bosons, or mixtures of the two, could form the basis of many sensitive experiments in the area of quantum atom optics and quantum gases. Of particular interest is the recently realized Hanbury Brown and Twiss experiment on an ultracold gas of $^4$He\* [@sche05], demonstrating two-body correlations (bunching) with neutral atoms (bosons); we now have the ideal source to study anti-bunching in an ultracold Fermi gas of neutral atoms. 
The extremely large and positive $^3$He\*-$^4$He\* scattering length lends itself to the hitherto unobserved phenomenon of phase separation in a Bose-Fermi mixture [@molm98] and, if the scattering lengths can be tuned only slightly, may allow a study of p-wave Cooper pairing of identical fermions mediated by density fluctuations in a bosonic component [@efre02]. Given the naturally large and positive scattering lengths, loading such a mixture into an optical lattice will provide a new playground for the study of exotic phases and phase transitions [@lewe04], including supersolidity [@kim04]. The possibility of ion detection as a real-time, non-destructive density determination tool will be very helpful in observing these and other phenomena. Finally, the ultralow temperatures to which we can now cool both isotopes will allow unprecedented accuracy in high resolution spectroscopy of the helium atom. This could improve the accuracy to which the fine structure constant is known and may allow, via isotope shift measurements, an accurate measurement of the difference in charge radius of the $^3$He and $^4$He nuclei [@drak99], challenging nuclear physics calculations. We thank J. Bouma for technical support. This work was supported by the Space Research Organization Netherlands (SRON), Grant No. MG-051, the “Cold Atoms” program of the Dutch Foundation for Fundamental Research on Matter (FOM) and the European Union, Grant No. HPRN-CT-2000-00125.
--- abstract: 'A method is presented for solving the discrete-time finite-horizon Linear Quadratic Regulator (LQR) problem subject to auxiliary linear equality constraints, such as fixed end-point constraints. The method explicitly determines an affine relationship between the control and state variables, as in standard Riccati recursion, giving rise to feedback control policies that account for constraints. Since the linearly-constrained LQR problem arises commonly in robotic trajectory optimization, having a method that can efficiently compute these solutions is important. We demonstrate some of the useful properties and interpretations of said control policies, and we compare the computation time of our method against existing methods.' author: - 'Forrest Laine$^{1}$, and Claire Tomlin$^{1}$[^1][^2]' bibliography: - 'root.bib' nocite: - '[@goebel2005continuous]' - '[@mare2007solution]' - '[@cannon2008efficient]' - '[@scokaert1998constrained]' - '[@dunn1989efficient]' title: '**Efficient Computation of Feedback Control for Constrained Systems** ' --- INTRODUCTION {#sec:intro} ============ Due to its mathematical elegance and wide-ranging usefulness, the Linear Quadratic Regulator has become perhaps the most widely studied problem in the field of control theory. Referring to both continuous and discrete-time systems, the LQR problem is that of finding an infinite or finite-length control sequence for a linear dynamical system that is optimal with respect to a quadratic cost function. Either as a stand-alone means for computing trajectories and controllers for linear systems, or as a method for computing successive approximate trajectories for nonlinear systems, it shows up in one way or another in the computation of nearly all finite-length trajectory optimization problems.
Because of the importance of trajectory optimization in controlling robotic systems, and because of the prevalence of the LQR problem in those optimizations, developing highly efficient methods capable of solving LQR-type problems is an important endeavor. The focus of this paper is on a particular instance of the discrete-time, finite-horizon variant of the LQR problem, namely the variant subject to linear constraints. These constrained problems are important in their own right, and arise in relatively common situations. As an example, suppose we want to plan a trajectory that minimizes the amount of energy needed to get a robot to some desired configuration. If the dynamics of the robot can be modeled as a linear system, this problem takes the form of linearly-constrained LQR. We can also imagine constraints appearing at multiple stages in the trajectory and having varying dimensions. Perhaps we require that the center of mass of the robot not move in the first half of the trajectory. Of course, many robots have non-linear dynamics. But even when planning constrained trajectories for non-linear systems, iterative solution methods such as Sequential Quadratic Programming make successive local approximations of the trajectory optimization problem, which result in a series of constrained LQR problems to be solved. We will discuss this relationship in more detail in a later section. Understanding that the linearly-constrained LQR problem is common, we provide some context surrounding methods for solving these types of problems. The property that any trajectory must satisfy linear dynamics can be thought of as a sequence of linear constraints on successive states in the trajectory. And since all auxiliary constraints we consider are also linear, these problems result in quadratic programs (QPs), just as unconstrained LQR problems are QPs [@boyd2004convex]. 
Under standard assumptions, the constrained problems are also strictly convex and have a unique solution. Unlike unconstrained LQR, however, the presence of additional constraints causes some computational difficulties. From a pure optimization standpoint, any of the approaches to solving convex QPs can be applied to the constrained LQR problem. However, using general methods naively fails to exploit the unique structure of the optimal control problem, and suffers a computational complexity which grows cubically with the time horizon being considered in the control problem (trajectory length). Due to the sparsity of the problem data in the time domain, the KKT conditions of optimality for optimal control problems have a banded nature, and linear algebra packages designed for such systems can be used to solve the problem with linear complexity with respect to the trajectory length [@wright1996applying]. However, these approaches result in what we will call *open-loop* trajectories, producing only numerical values of the state and control vectors making up the trajectory. It is well known that the unconstrained LQR problem admits a solution based on dynamic programming, sometimes referred to as the discrete-time Riccati recursion. This method can also solve unconstrained LQR problems in linear time complexity while *also* providing an affine relationship between the state and control variables. This relationship provides a feedback policy which can be used in control, and offers many advantages over the open-loop variants. It is because we would like to derive these policies for the constrained case that the aforementioned computational difficulties show up. The presence of auxiliary constraints has meant that, until now, a method for the constrained LQR problem analogous to Riccati recursion has not been developed. 
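For reference, the unconstrained discrete-time Riccati recursion mentioned above can be sketched in a few lines. This is illustrative code only (the function name and the time-invariant setup are ours, not taken from the paper or any library):

```python
import numpy as np

def riccati_gains(A, B, Q, R, Q_T, T):
    """Backward Riccati recursion for unconstrained, time-invariant,
    discrete-time LQR; returns feedback gains K_t so that u_t = K_t x_t."""
    P = Q_T.copy()
    gains = []
    for _ in range(T):
        # The u minimizing u'Ru + (Ax + Bu)' P (Ax + Bu) is linear in x.
        K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A + A.T @ P @ B @ K  # cost-to-go update
        gains.append(K)
    gains.reverse()  # gains[0] is K_0
    return gains
```

Rolling the resulting policies forward stabilizes the system; it is exactly this affine state-feedback structure that the constrained method below seeks to preserve.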
This is due to the fact that linear constraints of dimension exceeding that of the control cannot always be thought of as time-separable. This means that the choice of control at a particular time-point may not always be able to satisfy a constraint appearing at that time-point (for arbitrary values of the corresponding state at that time). We will see that this complication requires reasoning about future constraints yet to come when computing the control in the present. This is the very reason why, as we will see, existing methods either make restrictive assumptions on the dimension of constraints, or require a higher order of computational complexity to compute solutions. Because of this, if the problem does not satisfy the restrictive assumptions used by existing methods, solution approaches are currently limited to QP solvers that offer only open-loop trajectories, or suffer cubic time-complexity with respect to the trajectory length if control policies are desired. Given this context, we can now state the contribution of this work: We present a method for computing constraint-aware feedback control policies for discrete-time, time-varying, linear-dynamical systems which are optimal with respect to a quadratic cost function and subject to auxiliary linear equality constraints. We make no assumptions about the dimension of the constraints. In section \[sec:priorwork\] we discuss in more detail existing methods which have addressed the same problem and the limitations of those works. In section \[sec:method\] we formally define the problem and present our method. In section \[sec:analysis\] we discuss computational complexity, and present an alternative approach to solving the problem. We also demonstrate some of the advantages of the control policies derived from our method when compared to the open-loop solutions, and discuss applicability to SQP methods. 
PRIOR WORK {#sec:priorwork} ========== Consideration of the constrained linear-quadratic optimal control problem extends back to the early days of the field of control. Many authors have presented methods for constraining control systems to a time-invariant linear subspace. The author in [@johnson1973stabilization] studied this issue for continuous systems under the name subspace stabilization. In the works [@hemami1979modeling] and [@yu1996design] the same problem is addressed by designing pole-assignment controllers. More recently, [@posa2016optimization] utilized a very similar method to generate a time-varying controller for tracking existing trajectories. This method is also derived in continuous time, and hence requires the constraint dimension to be constant. The authors in [@ko2007optimal] developed a more comprehensive method for computing optimal control policies for discrete-time, time-varying objective functions, but consider only a single time-invariant constraint of constant dimension. In [@park2008lq] a method is presented for solving continuous- and discrete-time LQR problems with fixed terminal states. This method is able to reason about a constraint appearing at only a portion of the trajectory, namely the end, but does not account for additional constraints appearing at other times. Perhaps the most general method for computing linearly constrained LQR control policies was presented in [@sideris2011riccati]. However, that method suffers a computational complexity which scales cubically in the worst case, i.e. when many constraints whose dimension exceeds the control dimension are present. As a part of the method presented in [@xie2017differential], a technique for satisfying linear constraints at arbitrary times in the trajectory is presented, but that method assumes that the constraint dimension does not exceed that of the control. 
Most recently, [@giftthaler2017projection] presented a method for solving problems with time-varying constraints, but still require that the relative degree of these constraints not exceed 1. This is a slightly less restrictive condition than requiring the dimension of the constraints to be less than that of the control, but it still limits the applicability of that method: it cannot handle full-state constraints when the control dimension is less than the state dimension. As mentioned above, the problem can also be solved using numerical linear algebra techniques, as discussed for example in [@wright1999numerical] and, particularly for the optimal control problem, in [@wright1996applying]. Again, these methods are very general and efficient but fail to produce the desired feedback control policies. The method we present combines the desirable properties of all these methods into one. The contribution of this method is that it is capable of generating optimal feedback control policies for general, discrete-time, linearly-constrained LQR problems while maintaining a linear computational complexity with respect to the control horizon. To the best of our knowledge, the approach we present is the only existing method capable of this. 
PROBLEM AND METHOD {#sec:method} ================== The method we present here is a means of deriving optimal feedback control policies for the following problem: $$\begin{aligned} & \min_{x_0, u_0,...,u_{T-1}, x_T} cost_T(x_T) + \sum_{t=0}^{T-1} cost_t(x_t, u_t) \label{obj:globalobjwords} \\ & \text{s.t.} \ \ \ \ dynamics_t(x_{t+1}, x_t, u_t) = 0 \ \forall t \in \{0,...,T-1\} \label{const:dynamicswords} \\ & \ \ \ \ \ \ \ \ x_0 = x_{init} \label{eq:init} \\ & \ \ \ \ \ \ \ \ constraint_t(x_t, u_t) = 0, \ \forall t \in \{0,...,T-1\} \label{const:twords}\\ & \ \ \ \ \ \ \ \ constraint_T(x_T) = 0 \label{const:Twords}\end{aligned}$$ \[opt:globalwords\] Where $x_t \in \mathbb{R}^n$, $u_t \in \mathbb{R}^m$, and the functions $$\begin{aligned} &cost_t: \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R} & &cost_T: \mathbb{R}^n \to \mathbb{R} \\ &constraint_t: \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^{l_t} & &constraint_T: \mathbb{R}^n \to \mathbb{R}^{l_T} \\ &dynamics_t : \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n\end{aligned}$$ are defined as: $$\begin{aligned} &cost_t(x, u) = \frac{1}{2} \begin{pmatrix} 1 \\ x \\ u \end{pmatrix}^\intercal \begin{pmatrix} 0 & q_{x1_t}^\intercal & q_{u1_t}^\intercal \\ q_{x1_t} & Q_{xx_t} & Q_{ux_t}^\intercal \\ q_{u1_t} & Q_{ux_t} & Q_{uu_t} \end{pmatrix}\begin{pmatrix} 1 \\ x \\ u \end{pmatrix} \label{eq:costtform} \\ &cost_T(x) = \frac{1}{2} \begin{pmatrix} 1 \\ x \end{pmatrix}^\intercal \begin{pmatrix} 0 & q_{x1_T}^\intercal \\ q_{x1_T} & Q_{xx_T} \end{pmatrix}\begin{pmatrix} 1 \\ x \end{pmatrix} \\ &dynamics_t(x_{t+1}, x_t, u_t) = x_{t+1} - (F_{x_t} x_t + F_{u_t} u_t + f_{1_t}) \label{eq:dynamicsform}\\ &constraint_t(x_t, u_t) = G_{x_t} x_t + G_{u_t} u_t + g_{1_t} \label{eq:constrainttform} \\ &constraint_T(x_T) = G_{x_T} x_T + g_{1_T} \label{eq:constraintTform},\end{aligned}$$ where $l_t$ (for $0 \leq t < T$) and $l_T$ are the dimensions of the constraints at the corresponding times. 
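To make the homogeneous-coordinate notation concrete, a cost term of the form (\[eq:costtform\]) can be evaluated numerically as below (an illustrative sketch; the function name and array conventions are ours, not from the paper):

```python
import numpy as np

def stage_cost(x, u, Qxx, Qux, Quu, qx1, qu1):
    """Evaluate cost_t(x, u) = (1/2) [1; x; u]' M [1; x; u] with the
    block matrix M defined as in the text."""
    M = np.block([[np.zeros((1, 1)), qx1[None, :], qu1[None, :]],
                  [qx1[:, None],     Qxx,          Qux.T],
                  [qu1[:, None],     Qux,          Quu]])
    z = np.concatenate([[1.0], x, u])
    return 0.5 * z @ M @ z
```

Expanding the product shows this equals $\frac{1}{2}x^\intercal Q_{xx}x + \frac{1}{2}u^\intercal Q_{uu}u + u^\intercal Q_{ux}x + q_{x1}^\intercal x + q_{u1}^\intercal u$: the linear terms enter with unit weight despite the overall $\frac{1}{2}$, because they appear twice in the symmetric block matrix.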
In the above expressions, and in the rest of this paper, coefficients are assumed to have dimensions such that the expressions make sense. We assume for now that the coefficients $Q_{uu_t}$ of the quadratic functions $cost_t$ are positive-definite, and that $Q_{xx_t} - Q_{ux_t} Q_{uu_t}^{-1} Q_{ux_t}^\intercal $ is positive semi-definite. This assumption can be relaxed, as we discuss below. Constrained LQR {#constrained-lqr .unnumbered} --------------- The method for computing the constrained control policies will follow a dynamic programming approach. Starting from the end of the trajectory and working towards the beginning, a given control input $u_t$ will be chosen such that for any value of the resulting state $x_t$, the control will satisfy all constraints imposed at time $t$, as well as any constraints remaining to be satisfied in the remainder of the trajectory, if possible. If there are degrees of freedom in the control input that do not affect the constraint, the portion of the control lying in the null-space of the constraint will be chosen so as to minimize the cost in the remainder of the trajectory. If the constraint cannot be satisfied by the control for arbitrary states, the control will minimize the sum of squared residuals of the constraints. This has the effect of eliminating $r$ dimensions of the constraint, where $r$ is the rank of the constraint coefficient multiplying $u_t$. For a trajectory to satisfy the constraint in this case, the state $x_t$ must therefore be such that the constraint residuals will be zero. This can be enforced by passing on a residual linear constraint to the choice of control at the preceding time, $u_{t-1}$ (and controls preceding that, if necessary). To formalize this procedure, we introduce a time-varying quadratic function, $cost\_to\_go_t: \mathbb{R}^n \to \mathbb{R}$, representing the minimum possible cost remaining in the trajectory from stage $t$ onward as a function of state. 
Additionally, we introduce a linear function $constraint\_to\_go_t : \mathbb{R}^n \to \mathbb{R}^{p_t}$, which defines through a constraint on $x_{t}$ the subspace of admissible states such that the control $u_t$ will be able to satisfy the constraints in the remainder of the trajectory. Here $p_t$ is the dimension of constraints needed to enforce this condition. These functions are defined as follows: $$\begin{aligned} cost\_to\_go_t(x) &= \frac{1}{2} \begin{pmatrix} 1 \\ x \end{pmatrix}^\intercal \begin{pmatrix} 0 & v_{x1_t}^\intercal \\ v_{x1_t} & V_{xx_t} \end{pmatrix}\begin{pmatrix} 1 \\ x \end{pmatrix} \label{eq:costtogo} \\ constraint\_to\_go_t(x) &= H_{x_t} x + h_{1_t} . \label{eq:constrainttogo}\end{aligned}$$ We initialize these terms at time $T$: $$\begin{aligned} V_{xx_T} &= Q_{xx_T} & v_{x1_T} &= q_{x1_T} \\ H_{x_T} &= G_{x_T} & h_{1_T} &= g_{1_T}. \end{aligned}$$ Note that in the value function (\[eq:costtogo\]) we do not include any constant terms (which would appear in the $0$ block of (\[eq:costtogo\])). This is because the calculations we will derive do not depend on them, and so we omit them for clarity. 
Given the above definitions, starting at $T-1$ and working backwards to $0$, we solve the following optimization problem for each time $t$: $$\begin{aligned} u_t^*(x_t) = \ &\text{arg}\min_{u_t} \ cost_t(x_t, u_t) + cost\_to\_go_{t+1}(x_{t+1}) \\ \text{s.t.} \ \ \ & 0 = dynamics_t(x_{t+1}, x_t, u_t) \label{eq:dyndp} \\ & u_t \in \text{arg}\min_{u} \| \begin{bmatrix} constraint_t(x_t, u) \\ constraint\_to\_go_{t+1}(x_{t+1}) \end{bmatrix} \|_2 \label{eq:setdp}\end{aligned}$$ \[opt:words\] To see why we want to solve this problem, we first simplify it by using the form of (\[eq:dyndp\]) to eliminate $x_{t+1}$, and plug in coefficients: $$\begin{aligned} \label{eq:simple} u_t^*(x_t) &= \text{arg}\min_{u_t} \frac{1}{2} \begin{pmatrix} 1 \\ x_t \\ u_t \end{pmatrix}^\intercal \begin{pmatrix} 0 & m_{x1_t}^\intercal & m_{u1_t}^\intercal \\ m_{x1_t} & M_{xx_t} & M_{ux_t}^\intercal \\ m_{u1_t} & M_{ux_t} & M_{uu_t} \end{pmatrix}\begin{pmatrix} 1 \\ x_t \\ u_t \end{pmatrix} \\ &\text{s.t.} \ \ u_t \in \text{arg}\min_u \| N_{x_t} x_t + N_{u_t} u + n_{1_t} \|_2 .\label{const:simple}\end{aligned}$$ \[opt:simple\] Where the above terms are defined as: $$\begin{aligned} m_{x1_t} &= q_{x1_t} + F_{x_t}^\intercal v_{x1_{t+1}} & m_{u1_t} &= q_{u1_t} + F_{u_t}^\intercal v_{x1_{t+1}} \\ M_{xx_t} &= Q_{xx_t} + F_{x_t}^\intercal V_{xx_{t+1}} F_{x_t} & M_{uu_t} &= Q_{uu_t} + F_{u_t}^\intercal V_{xx_{t+1}} F_{u_t} \\ M_{ux_t} &= Q_{ux_t} + F_{u_t}^\intercal V_{xx_{t+1}} F_{x_t} & N_{x_t} &= \begin{pmatrix} G_{x_t} \\ H_{x_{t+1}}F_{x_t} \end{pmatrix} \\ N_{u_t} &= \begin{pmatrix} G_{u_t} \\ H_{x_{t+1}} F_{u_t} \end{pmatrix} & n_{1_t} &= \begin{pmatrix} g_{1_t} \\ H_{x_{t+1}}f_{1_t} + h_{1_{t+1}} \end{pmatrix}. \end{aligned}$$ If, for a particular $x_t$, the vector $N_{x_t}x_t + n_{1_t}$ is not in the range-space of $N_{u_t}$, then the minimizer of (\[const:simple\]) will have a non-zero constraint residual. 
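The term definitions above translate directly into code. The following sketch is a direct transcription of those formulas (function name and argument ordering are ours):

```python
import numpy as np

def assemble_terms(Qxx, Qux, Quu, qx1, qu1, Fx, Fu, f1,
                   Gx, Gu, g1, Vxx, vx1, Hx, h1):
    """Form the M-, m-, N-, and n-terms of (eq:simple) for one backward
    step, mirroring the definitions in the text."""
    mx1 = qx1 + Fx.T @ vx1
    mu1 = qu1 + Fu.T @ vx1
    Mxx = Qxx + Fx.T @ Vxx @ Fx
    Muu = Quu + Fu.T @ Vxx @ Fu
    Mux = Qux + Fu.T @ Vxx @ Fx
    Nx = np.vstack([Gx, Hx @ Fx])           # stage rows over to-go rows
    Nu = np.vstack([Gu, Hx @ Fu])
    n1 = np.concatenate([g1, Hx @ f1 + h1])
    return mx1, mu1, Mxx, Muu, Mux, Nx, Nu, n1
```

Note that $M_{xx_t}$ and $M_{uu_t}$ inherit symmetry from $Q$ and $V$, and the stacked constraint terms have $l_t + p_{t+1}$ rows.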
This is why we cannot enforce the conditions $constraint_t(x_t)=0$ and $constraint\_to\_go_{t+1}(x_{t+1})=0$ in (\[opt:words\]), since that would result in an infeasible problem in general. When $x_t$ is such that the constraints are not feasible, this implies that even with the best effort by the control to satisfy the constraints, the state $x_t$ cannot be arbitrary. We must therefore place the requirement on $x_t$ that $N_{x_t}x_t + n_{1_t}$ does in fact lie in the range-space of $N_{u_t}$. This constraint will then be passed on to the choice of control at the preceding time. We will derive explicitly what this constraint on $x_t$ looks like after determining a closed-form solution of $u_t^*(x_t)$. To do this, we first write an equivalent problem to (\[opt:simple\]): $$\begin{aligned} &\begin{aligned} y_t^*, w_t^* = \text{arg}\min_{y_t, w_t} \frac{1}{2} \| N_{x_t} x_t + N_{u_t} P_{y_t} y_t + n_{1_t} \|_2 + \ \ \ \ \ \ \ \\ \frac{1}{2} \begin{pmatrix} 1 \\ x_t \\ Z_{w_t} w_t \end{pmatrix}^\intercal \begin{pmatrix} 0 & m_{x1_t}^\intercal & m_{u1_t}^\intercal \\ m_{x1_t} & M_{xx_t} & M_{ux_t}^\intercal \\ m_{u1_t} & M_{ux_t} & M_{uu_t} \end{pmatrix}\begin{pmatrix} 1 \\ x_t \\ Z_{w_t} w_t \end{pmatrix} \label{eq:vwsoln} \end{aligned} \\ & u_t^* = P_{y_t} y_t^* + Z_{w_t} w_t^* \label{eq:udirectsum}\end{aligned}$$ \[opt:infeasible\] Here, $Z_{w_t}$ is chosen such that its columns form an orthonormal basis for the null-space of $N_{u_t}$, and $P_{y_t}$ is chosen such that its columns form an orthonormal basis for the range-space of $N_{u_t}^\intercal$. Hence $Z_{w_t}$ and $P_{y_t}$ are mutually orthogonal and their columns together span $\mathbb{R}^m$. Because of the orthogonality of these two sub-spaces, any control signal $u_t$ corresponds to a unique $y_t$ and $w_t$ [@callier2012linear]. 
The solution to (\[opt:infeasible\]), which is an unconstrained problem, is: $$\begin{aligned} y_t^* &= -(N_{u_t}P_{y_t})^{\dagger} (N_{x_t} x_t + n_{1_t}) \label{eq:controlvupdate} \\ w_t^* &= -(Z_{w_t}^\intercal M_{uu_t} Z_{w_t})^{-1} Z_{w_t}^\intercal (M_{ux_t} x_t + m_{u1_t}). \label{eq:controlwupdate} \end{aligned}$$ In the case that $P_{y_t}$ is a zero matrix (i.e. $\text{dim}(\text{null}(N_{u_t}))=m$), $Z_{w_t} = I_m$ (the identity matrix $\in \mathbb{R}^{m\times m}$), and $y_t$ has dimension 0. Correspondingly, when the nullity of $N_{u_t}$ is 0, $P_{y_t} = I_m$ and $w_t$ has dimension 0. Therefore, in these cases, we ignore the update (\[eq:controlvupdate\] or \[eq:controlwupdate\]) that is of size $0$. With this in mind, and combining terms, we can express the control $u_t$ in closed form as an affine function of the state $x_t$: $$\begin{aligned} u_t^* &= K_{x_t} x_t + k_{1_t} \label{eq:controlpolicy} \\ K_{x_t} &= -\big( P_{y_t}(N_{u_t}P_{y_t})^\dagger N_{x_t} + Z_{w_t} (Z_{w_t}^\intercal M_{uu_t} Z_{w_t})^{-1}Z_{w_t}^\intercal M_{ux_t} \big) \label{eq:K} \\ k_{1_t} &= -\big( P_{y_t}(N_{u_t}P_{y_t})^\dagger n_{1_t} + Z_{w_t} (Z_{w_t}^\intercal M_{uu_t} Z_{w_t})^{-1}Z_{w_t}^\intercal m_{u1_t} \big) \label{eq:k} \end{aligned}$$ Since the control is a function of the state, we can also express the constraint residual as a function of the state. We let the function $constraint\_to\_go_t$ be the value of the constraint residual (\[eq:setdp\]). 
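The bases $P_{y_t}$ and $Z_{w_t}$ and the gains (\[eq:K\])–(\[eq:k\]) can all be obtained from a single SVD of $N_{u_t}$. The following is a sketch of that computation (the rank tolerance and names are our own choices):

```python
import numpy as np

def policy_from_svd(Muu, Mux, mu1, Nx, Nu, n1, tol=1e-9):
    """Gains K, k with u* = K x + k.  P spans range(Nu'), Z spans
    null(Nu); both come from one SVD of Nu, as discussed in the text."""
    m = Nu.shape[1]
    _, s, Vt = np.linalg.svd(Nu)
    r = int(np.sum(s > tol))        # rank of Nu
    P, Z = Vt[:r].T, Vt[r:].T       # orthonormal bases
    K = np.zeros((m, Nx.shape[1]))
    k = np.zeros(m)
    if r > 0:                       # constraint-satisfying component
        pinv = np.linalg.pinv(Nu @ P)
        K -= P @ pinv @ Nx
        k -= P @ pinv @ n1
    if r < m:                       # cost-minimizing null-space component
        W = np.linalg.inv(Z.T @ Muu @ Z) @ Z.T
        K -= Z @ W @ Mux
        k -= Z @ W @ mu1
    return K, k, P, Z
```

When $N_{u_t}$ has full column rank the first branch reduces to the ordinary least-squares solution, and when $N_{u_t}=0$ the second branch reduces to the unconstrained LQR gain.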
We substitute (\[eq:controlpolicy\], \[eq:K\], \[eq:k\]) into (\[const:simple\]) to obtain: $$\begin{aligned} \begin{aligned} constraint\_to\_go_t(x_t) &= \\ N_{x_t}x_t - N_{u_t}P_{y_t}&(N_{u_t}P_{y_t})^\dagger(N_{x_t}x_t +n_{1_t}) + n_{1_t} \label{eq:constrainttogo} \end{aligned}\end{aligned}$$ This results in the update for the terms $H_{x_t}$ and $h_{1_t}$: $$\begin{aligned} H_{x_t} &= (I - N_{u_t}P_{y_t}(N_{u_t}P_{y_t})^\dagger)N_{x_t} \\ h_{1_t} &= (I - N_{u_t}P_{y_t}(N_{u_t}P_{y_t})^\dagger)n_{1_t}.\end{aligned}$$ Here $I$ is the identity matrix having the same leading dimension as $N_{x_t}$. By observing these updates, we see that the terms in (\[eq:constrainttogo\]) are computed by projecting $N_{x_t} x_t + n_{1_t}$ into the kernel of $(N_{u_t}P_{y_t})^\intercal$, as we would expect given the discussion above. Hence, the residual constraint will lie in a subspace of dimension no larger than the nullity of $(N_{u_t}P_{y_t})^\intercal$. We can therefore remove redundant constraints by removing linearly-dependent rows of the matrix $\begin{bmatrix} h_{1_t} & H_{x_t} \end{bmatrix}$, in order to maintain a minimal representation and keep computations small. Note that if the matrix $\begin{bmatrix} h_{1_t} & H_{x_t} \end{bmatrix}$ is full column rank at any time $t$, then there exists no $x_t$ that can satisfy the constraints, and we have detected that the trajectory optimization problem (\[opt:globalwords\]) is infeasible. 
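The update just derived amounts to a single projection. A minimal sketch (the redundant-row removal described in the text is omitted here for brevity; the helper name is ours):

```python
import numpy as np

def constraint_to_go(Nx, Nu, n1, P):
    """H_x, h_1 updates: apply I - (Nu P)(Nu P)^+ to N_x and n_1,
    projecting onto the orthogonal complement of range(Nu P)."""
    NuP = Nu @ P
    proj = np.eye(Nu.shape[0]) - NuP @ np.linalg.pinv(NuP)
    return proj @ Nx, proj @ n1
```

Rows of the constraint that the control can absorb are zeroed out; only the residual directions are passed back to time $t-1$.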
We now plug the expression for the control in to the objective function of our optimization problem (\[eq:simple\]) to obtain an update on our value function as a function of the state (again, omitting constant terms): $$\begin{aligned} cost\_to\_go_t(x_t) &= \frac{1}{2} \begin{pmatrix} 1 \\x_t\\ u_t^* \end{pmatrix} ^\intercal \begin{pmatrix} 0 & m_{x1_t}^\intercal & m_{u1_t}^\intercal \\ m_{x1_t} & M_{xx_t} & M_{ux_t}^\intercal \\ m_{u1_t} & M_{ux_t} & M_{uu_t}\end{pmatrix} \begin{pmatrix} 1 \\x_t\\ u_t^* \end{pmatrix} \\ &= \frac{1}{2} \begin{pmatrix} 1 \\x_t \end{pmatrix} ^\intercal \begin{pmatrix} 0 & v_{x1_t}^\intercal \\ v_{x1_t} & V_{xx_t} \end{pmatrix} \begin{pmatrix} 1 \\x_t \end{pmatrix},\end{aligned}$$ where terms are defined as $$\begin{aligned} V_{xx_t} &= M_{xx_t} + 2 M_{ux_t}^\intercal K_{x_t} + K_{x_t}^\intercal M_{uu_t} K_{x_t} \label{update:V} \\ v_{x1_t} &= m_{x1_t} + K_{x_t}^\intercal m_{u1_t} + (M_{ux_t}^\intercal + K_{x_t}^\intercal M_{uu_t}) k_{1_t} \label{update:v}.\end{aligned}$$ We have now presented updates for the terms $V_{xx_t}$, $v_{x1_t}$, $H_{x_t}$, and $h_{1_t}$, and computed control policy terms $K_{x_t}$ and $k_{1_t}$ in the process. The sequence of control policies $\{K_{x_t}, k_{1_t}\}_{t\in\{0,...,T-1\}}$ will produce a sequence of states and controls that are optimal for our original problem (\[opt:globalwords\]). Analysis {#sec:analysis} ======== In the preceding section, we have presented a method for computing control policies for the equality-constrained LQR problem (\[opt:globalwords\]). In this section we will analyze the method and the resulting policies by evaluating the computational complexity of the method and by relating the policies to those that are produced in standard, unconstrained LQR. 
Computation {#sec:computation} -----------

   n    m     T   constr. (%)   LAPACK (s)   CLQR (s)
  ---- ---- ----- ------------- ------------ ----------
   40   10   250        0          0.089       0.031
   40   10   250       90          0.095       0.040
   40   10   125       90          0.046       0.021
    9    2   250        0          0.002       0.004
                       50          0.002       0.007
                       50          0.001       0.003

  : Comparing computation times of constrained and unconstrained LQR problems between our constrained LQR method (CLQR) and a method using LAPACK to directly solve the KKT system of equations.[]{data-label="table:tab"}

We mentioned that one of the contributions of this method is its computational efficiency compared to existing results. Due to the dynamic-programming nature of this method, the computational dependence on trajectory length is linear, irrespective of the dimension of the auxiliary constraints. In each iteration of the dynamic programming backups, the heavy computations involve computing the null- and range-space representations of $N_{u_t}$ and $N_{u_t}^\intercal$, respectively, and then computing the pseudo-inverse of $N_{u_t}P_{y_t}$. These operations can all be done by making use of one singular value decomposition (SVD). The dimension of $N_{u_t}$ is no greater than $(2n+m) \times m$, where $n$ is the dimension of the state and $m$ is the dimension of the control signal. The computational complexity of the SVD is thus $O((2n+m)^2m + m^3)$ [@Golub89a]. We also make use of a decomposition of the terms $\begin{bmatrix} h_{1_t} & H_{x_t}\end{bmatrix}$ to remove redundant constraints, which requires computations on the order of $O(n^3)$. The remaining computations are numerous matrix-matrix products and matrix inversions with terms having dimension no larger than $(2n+m) \times n$. Thus, the overall complexity of the method presented here is $O(T(\kappa_1n^3 + \kappa_2n^2m + \kappa_3m^2n + \kappa_4m^3))$, where $\kappa_1,\kappa_2,\kappa_3,\kappa_4$ are some positive scalars. 
Therefore, the method we present here has computational complexity roughly equivalent to known solutions based on using a banded-matrix solver on the system of KKT conditions [@demmel1997applied] [@wright1993interior]. This is not surprising, since our method can be thought of as performing a specialized block-substitution on the system of KKT conditions, and hence as a specialized block-substitution solver for the particular structure arising in constrained optimal control problems. In table \[table:tab\] we show a comparison of computation times between our method and the method ’DGBTRS’ from the well-known linear algebra package LAPACK [@laug]. All times are taken as the minimum over 10 trials, run on a 2013 MacBook Air with a 2-core GHz Intel Core i7 processor. The LAPACK method performs Gaussian elimination on the banded KKT system of equations. We make this comparison for varying problem sizes and varying percentages of the number of independent constraints relative to the total number of degrees of freedom in the problem. For problems of relatively small size, we see that LAPACK offers superior speed, even in the standard unconstrained LQR case. However, as the problem size grows, we see that our method quickly becomes more efficient than the LAPACK solution. Infinite Penalty Perspective ---------------------------- We here consider an alternative way to solve (\[opt:globalwords\]): the quadratic penalty approach. It is known that we can solve equality-constrained quadratic programs by solving an unconstrained problem, where the linear constraint terms are penalized in the objective as an infinitely weighted cost on the sum of squared constraint residuals [@bertsekas1999nonlinear]. 
Therefore, in light of our original problem (\[opt:globalwords\]), we could penalize the constraints (\[const:twords\] and \[const:Twords\]) in this way, which would result in a standard (from a structural standpoint) LQR problem, where some of the cost terms are weighted infinitely high. The resulting problem would appear as $$\begin{aligned} & \min_{u_0,...,u_{T-1}} constraint\_penalized\_cost_T(x_T) \ + \\ & \ \ \ \ \ \ \ \ \ \ \sum_{t=0}^{T-1} constraint\_penalized\_cost_t(x_t, u_t) \\ & \text{s.t.} \ \ \ dynamics_t(x_{t+1}, x_t, u_t) = 0 \ \forall t \in \{0,...,T-1\} \\ & \ \ \ \ \ \ \ x_0 = x_{init} .\end{aligned}$$ \[opt:quadpenalty\] Here the penalized cost functions are defined as $$\begin{aligned} &\begin{aligned} constraint\_penalized\_cost_t(x, u) = \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ cost_t(x, u) + \frac{1}{\epsilon} \| constraint_t(x,u)\|^2_2 \end{aligned} \\ &\begin{aligned} constraint\_penalized\_cost_T(x) = \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ \ \ \ \ \ \ cost_T(x) + \frac{1}{\epsilon} \|constraint_T(x)\|^2_2. \end{aligned}\end{aligned}$$ In practice, we cannot penalize the constraint terms by infinity (by letting $\epsilon \to 0_+$), but it may suffice to penalize the constraints by some very large constant. Because the optimal control problems we typically solve are based on approximate models of systems, solving an ’approximately’ constrained system may be adequate, since extremely high precision in the solution is unnecessary. In these cases, one could consider solving the unconstrained penalized problem (\[opt:quadpenalty\]). The computation required for its solution is slightly less than that of the approach developed in section \[sec:method\], as was shown in Table \[table:tab\]. ![Limiting behavior of the penalty approach for problem (\[opt:doubleint\])[]{data-label="figure:doubleint"}](infinite_penalty_perspective-eps-converted-to.pdf) We illustrate this relationship between the two methods using a very simple example. 
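Folding the penalty into the stage cost only modifies the quadratic coefficients. A sketch of that coefficient update (we write the penalty as $(w/2)\|c\|^2$, so $w \approx 2/\epsilon$ matches the text's $(1/\epsilon)\|c\|^2$; the function name is ours):

```python
import numpy as np

def penalize(Qxx, Qux, Quu, qx1, qu1, Gx, Gu, g1, w):
    """Add (w/2)||Gx x + Gu u + g1||^2 to the stage-cost coefficients,
    yielding an unconstrained (penalized) LQR stage."""
    return (Qxx + w * Gx.T @ Gx,
            Qux + w * Gu.T @ Gx,
            Quu + w * Gu.T @ Gu,
            qx1 + w * Gx.T @ g1,
            qu1 + w * Gu.T @ g1)
```

The constant term $(w/2)g_1^\intercal g_1$ is dropped, just as constant terms are dropped in the value function above; it does not affect the minimizer.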
Consider the constrained LQR problem for a discrete-time double integrator below: $$\begin{aligned} \min_{u_0,...,u_{T-1}}& \ \sum_{t=0}^{T-1} \|u_t\|_2^2 \\ \text{s.t.} \ \ \ x_{t+1} &= \begin{bmatrix} 1 & dt \\ 0 & 1 \end{bmatrix} x_t + \begin{bmatrix} 0 \\ dt \end{bmatrix} u_t \\ x_0 &= \begin{bmatrix} 1& 1\end{bmatrix} ^\intercal \\ x_{T/2} &= \begin{bmatrix} -1& -1\end{bmatrix} ^\intercal \\ x_T &= \begin{bmatrix} 0 & 0 \end{bmatrix}^\intercal \end{aligned}$$ \[opt:doubleint\] For this example, we let $dt=0.01$ and $T=100$ to simulate a one-second trajectory. In figure \[figure:doubleint\], trajectories of the first element of $x_t$ can be seen for the solution to the explicitly constrained formulation, as well as for solutions computed using the penalty formulation (\[opt:quadpenalty\]) for varying values of $\epsilon$. As can be seen, as $\epsilon \to 0_+$, the solutions of the penalty method converge to that of the explicitly constrained method. While this simpler approach might seem an enticing alternative to the approach outlined in section \[sec:method\], we maintain that our method, which handles constraints explicitly, is still important. Our method ensures the optimal solution without guessing a sufficient value of $\epsilon$. In applications where correct solutions are needed, such as using this method in the context of an SQP approach (discussed more below), iteratively updating the penalty parameter until acceptable constraint satisfaction is achieved might be much slower than computing the analytic solution from the start. Disturbance Rejection --------------------- ![Robust constraint satisfaction for problem (\[opt:doubleint\]) subject to additive input noise. []{data-label="figure:disturbed"}](disturbance_rejection-eps-converted-to.pdf) Another benefit of the control policies we have generated is in robustly satisfying constraints. Consider again the example (\[opt:doubleint\]). 
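Because the cost in (\[opt:doubleint\]) is purely on the inputs, the open-loop optimum can be reproduced without the recursion by solving the minimum-norm problem directly, stacking the four constraint rows and applying a pseudo-inverse. This is a sketch of that check (it yields the open-loop trajectory only, not the feedback policies):

```python
import numpy as np

dt, T = 0.01, 100
A = np.array([[1., dt], [0., 1.]])
B = np.array([[0.], [dt]])
x0 = np.array([1., 1.])

def reach(t):
    """Rows mapping (u_0, ..., u_{T-1}) to x_t, i.e. x_t = A^t x0 + R u."""
    R = np.zeros((2, T))
    Ak = np.eye(2)
    for s in range(t - 1, -1, -1):   # column s carries A^{t-1-s} B
        R[:, s] = (Ak @ B).ravel()
        Ak = Ak @ A
    return R

C = np.vstack([reach(T // 2), reach(T)])
d = np.concatenate([
    np.array([-1., -1.]) - np.linalg.matrix_power(A, T // 2) @ x0,
    np.array([0., 0.]) - np.linalg.matrix_power(A, T) @ x0])
u = np.linalg.pinv(C) @ d   # min sum ||u_t||^2 subject to C u = d
```

Rolling these inputs forward hits $x_{T/2}=(-1,-1)^\intercal$ and $x_T=(0,0)^\intercal$; since the minimizer is unique, the explicitly constrained recursion should produce the same trajectory, but expressed as feedback policies.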
Let us compare the performance of executing the open-loop control signal, as would be generated when using a Gaussian elimination technique as discussed above, against executing the constrained feedback policies, in the presence of unforeseen disturbances. In figure \[figure:disturbed\], we compare the open-loop control signal to the feedback policy when executed on a ’true’ system in which the input $u_t$ is corrupted by Gaussian noise. We see (as would be expected) that the open-loop signal strays far from satisfying either of the equality constraints, whereas under the constrained feedback policies they are still nearly satisfied. This is a purely empirical argument, but it demonstrates a simple case in which the benefits of the generated control policies are seen. A more in-depth analysis of the robustness properties of constraint-aware feedback policies can be seen in [@ko2007optimal] for a time-invariant constraint; a similar analysis could be done for the general constraint policies presented here, but is left for future work. Application to Sequential Quadratic Programming ----------------------------------------------- Due to the generality and computational efficiency of our method, we have mentioned that it is well suited for algorithms for solving more complicated optimal control problems. In particular, consider the more general version of problem (\[opt:globalwords\]) where the cost functions might be non-quadratic or even non-convex, and the dynamic and auxiliary constraints might be non-linear. In this general form, computing solutions requires a non-convex optimization method. One prominent method for solving these types of problems is Sequential Quadratic Programming (SQP). An in-depth overview of SQP methods can be found in [@bertsekas1999nonlinear] or [@wright1999numerical]. 
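The qualitative gap between open-loop replay and feedback can be seen even on an unconstrained toy problem. The sketch below is our own setup (a standard LQR gain and a constant input disturbance, not the constrained policies or the noise model of the paper's figure):

```python
import numpy as np

A = np.array([[1., 1.], [0., 1.]])
B = np.array([[0.], [1.]])
T = 200

# near-steady-state LQR gain from a long backward Riccati pass
P = np.eye(2)
for _ in range(T):
    K = -np.linalg.solve(np.eye(1) + B.T @ P @ B, B.T @ P @ A)
    P = np.eye(2) + A.T @ P @ A + A.T @ P @ B @ K

def final_state(use_feedback, d=0.1):
    """Roll out under a constant input disturbance d; the open-loop
    variant replays the nominal (disturbance-free) input sequence."""
    xn = np.array([1., 0.]); u_nom = []
    for _ in range(T):
        u_nom.append(K @ xn)
        xn = A @ xn + B @ u_nom[-1]
    x = np.array([1., 0.])
    for t in range(T):
        u = K @ x if use_feedback else u_nom[t]
        x = A @ x + B @ (u + d)
    return x
```

The open-loop rollout drifts without bound, since the double integrator integrates the disturbance twice, while the feedback rollout settles near a small offset; the constrained feedback policies behave analogously with respect to constraint satisfaction.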
When using an SQP approach to solve a non-convex version of (\[opt:globalwords\]), Newton’s method is used to solve the KKT conditions of the problem [@wright1999numerical]. Each iteration of Newton’s method results in a linearly-constrained LQR problem, whose solution provides an update to the solution of the non-convex problem. Therefore, because this procedure requires solving many constrained LQR problems, having an efficient means of computing the solutions to those subproblems is critical for an efficient solution to the non-convex problem. If the solutions of the constrained LQR subproblems generated in an SQP are only used as updates in an iterative procedure for generating a trajectory, it may seem unnecessary to generate feedback policies, and an open-loop approach might suffice. However, there has been much research into the advantages of rollout-based methods for unconstrained variants of the nonlinear optimal control problem, such as Differential Dynamic Programming [@jacobson1970differential]. These methods generate iterates by applying the *open-loop* control updates to the nonlinear system dynamics, in effect projecting the iterate onto the manifold of dynamically feasible trajectories. A recent exploration into the benefits of these types of methods [@giftthaler2017family] has shown that generating iterates in this way can lead to an improved rate of convergence of trajectories to solutions of the non-convex problem, but can suffer instabilities when the underlying system dynamics are unstable. Using the feedback control *policies* to update the control signal as the nonlinear system trajectory diverges from the linear system trajectory, such as in [@sideris2005efficient] and [@li2004iterative], can mitigate this instability while maintaining the enhanced convergence properties. 
Because our method is highly efficient, and because it can handle arbitrary constraints without making any assumptions about linear dependence or dimension, it is an excellent candidate for use in SQP algorithms for trajectory optimization. Using our method to compute solutions to the subproblems would be no worse than using a direct method in terms of versatility and computation time, and the feedback policies could potentially improve convergence, as discussed in [@giftthaler2017family] and [@giftthaler2017projection]. An in-depth analysis of how and when these policies can aid convergence would be interesting, but is left for future work.

CONCLUSION
==========

In summary, we have presented a method for computing feedback control policies for the general linearly-constrained LQR problem. The method has a computational complexity that scales linearly with the trajectory length. We demonstrated that in practice the computation of such policies is on the order of the fastest existing methods. We also showed that the generated control policies are useful for robustly satisfying constraints, and offered perspective on the use of our method for solving general trajectory optimization problems.

[^1]: This research is supported by DARPA under the Assured Autonomy Program, by NSF under the CPS Frontier VeHICal project, and by the UC-Philippine-California Advanced Research Institute under project IIID-2016-005.

[^2]: $^{1}$Forrest Laine and Claire Tomlin are with the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley [forrest.laine@eecs.berkeley.edu]{}, tomlin@eecs.berkeley.edu
---
abstract: 'We show that a tetragonal lattice of weakly interacting cavities with uniaxial electromagnetic response is the photonic counterpart of topological crystalline insulators, a new topological phase of atomic band insulators. Namely, the frequency band structure stemming from the interaction of resonant modes of the individual cavities exhibits an omnidirectional band gap within which gapless surface states emerge for finite slabs of the lattice. Due to the equivalence of a topological crystalline insulator with its photonic-crystal analog, the frequency band structure of the latter can be characterized by a $Z_{2}$ topological invariant. Such a topological photonic crystal can be realized in the microwave regime as a three-dimensional lattice of dielectric particles embedded within a continuous network of thin metallic wires.'
author:
- Vassilios Yannopapas
title: 'Gapless surface states in a lattice of coupled cavities: a photonic analog of topological crystalline insulators'
---

Introduction
============

The frequency band structure of artificial periodic dielectrics, formally known as photonic crystals, is the electromagnetic (EM) counterpart of the electronic band structure in ordinary atomic solids. Recently, a new analogy between electron and photon states in periodic structures has been proposed by Raghu and Haldane, [@haldane] namely the one-way chiral edge states in two-dimensional (2D) photonic-crystal slabs, which are similar to the corresponding edge states in the quantum Hall effect. [@one_way] The photonic chiral edge states are a result of time-reversal (TR) symmetry breaking, which comes about through the inclusion of gyroelectric/gyromagnetic material components; these states are robust to disorder and structural imperfections as long as the corresponding topological invariant (the Chern number in this case) remains constant.
In certain atomic solids, TR symmetry breaking is not a prerequisite for the appearance of topological electron states, as it is in the quantum Hall effect. Namely, when spin-orbit interactions are included in a TR-symmetric graphene sheet, a bulk excitation gap and spin-filtered edge states emerge [@mele_2005] without the presence of an external magnetic field, a phenomenon known in the literature as the quantum spin Hall effect. Its generalization to three-dimensional (3D) atomic solids led to a new class of solids, namely, topological insulators. [@ti_papers] The latter possess a spin-orbit-induced energy gap and gapless surface states, exhibiting insulating behavior in the bulk and metallic behavior at their surfaces. Apart from topological insulators, where the spin-orbit band structure with TR symmetry defines the topological class of the corresponding electron states, other topological phases have been proposed, such as topological superconductors (band structure with particle-hole symmetry), [@ts_papers] magnetic insulators (band structure with magnetic translation symmetry), [@mi_papers] and, very recently, topological crystalline insulators. [@fu_prl] In the latter case the band structure respects TR symmetry as well as a certain point-group symmetry, leading to a bulk energy gap and gapless surface states. In this work, we propose a photonic analog of a topological crystalline insulator. Our model photonic system is a 3D crystal of weakly interacting resonators respecting TR symmetry and the point-symmetry group associated with a given crystal surface. As a result, the system possesses an omnidirectional band gap within which gapless surface states of the EM field are supported. It is shown that the corresponding photonic band structure is equivalent to the energy band structure of an atomic topological crystalline insulator and, as such, the corresponding states are topological states of the EM field classified by a $Z_{2}$ topological invariant.
The frequency band structure of photonic crystals whose (periodically repeated) constituent scattering elements interact weakly with each other can be calculated by a means similar to the tight-binding method employed for atomic insulators and semiconductors. Photonic bands amenable to a tight-binding-like description are, e.g., the bands stemming from the whispering-gallery modes of a lattice of high-index scatterers, [@lido] the defect bands of a sublattice of point defects within a photonic crystal with an absolute band gap, [@bayindir] and the plasmonic bands of a lattice of metallic spheres [@quinten] or of a lattice of dielectric cavities within a metallic host. [@stefanou_ssc] In the latter case, the frequency band structure stems from the weak interaction of the surface plasmons of each individual cavity, [@stefanou_ssc] wherein light propagates within the crystal volume by a hopping mechanism. Such a lattice constitutes the photonic analog of a topological crystalline insulator presented in this work, whose frequency band structure will be obtained from a photonic tight-binding treatment within the framework of the coupled-dipole method. [@cde] The latter is an exact means of solving Maxwell’s equations in the presence of nonmagnetic scatterers.

Tight-binding description of dielectric cavities in a plasmonic host
====================================================================

We consider a lattice of dielectric cavities within a lossless metallic host. The $i$-th cavity is represented by a dipole of moment ${\bf P}_{i}=(P_{i;x},P_{i;y},P_{i;z})$ which stems from an incident electric field ${\bf E}^{inc}$ and the field which is scattered by all the other cavities of the lattice. In this way, the dipole moments of all the cavities are coupled to each other and to the external field, leading to the coupled-dipole equation $${\bf P}_{i}= \boldsymbol\alpha_{i}(\omega) [{\bf E}^{inc} + \sum_{i' \neq i} {\bf G}_{i i'}(\omega) {\bf P}_{i'}].
\label{eq:cde}$$ ${\bf G}_{i i'}(\omega)$ is the electric part of the free-space Green’s tensor and ${\bf \boldsymbol\alpha}_{i}(\omega)$ is the $3 \times 3$ polarizability tensor of the $i$-th cavity. Eq. (\[eq:cde\]) is a $3N \times 3N$ linear system of equations, where $N$ is the number of cavities of the system. We assume that the cavities exhibit a uniaxial EM response, i.e., the corresponding polarizability tensor is diagonal with $\alpha_{x}=\alpha_{y}=\alpha_{\parallel}$ and $\alpha_{z}=\alpha_{\perp}$. For strong anisotropy, the cavity resonances within the $xy$-plane and along the $z$-axis can be spectrally distinct; thus, around, e.g., the cavity resonance $\omega_{\parallel}$ within the $xy$-plane, $\alpha_{\perp} \ll \alpha_{\parallel}$ (see appendix). In this case, one can separate the EM response within the $xy$-plane from that along the $z$-axis, and Eq. (\[eq:cde\]) becomes a $2N \times 2N$ system of equations, $${\bf P}_{i}= \alpha_{\parallel}(\omega) [\sum_{i' \neq i} {\bf G}_{i i'}(\omega) {\bf P}_{i'}], \label{eq:cde_no_field}$$ where we have set ${\bf E}^{inc}={\bf 0}$ since we are seeking the eigenmodes of the system of cavities. Note that now ${\bf P}_{i}=(P_{i;x},P_{i;y})$. For a particle/cavity of electric permittivity $\epsilon_{\parallel}$ embedded within a material host of permittivity $\epsilon_{h}$, the polarizability $\alpha_{\parallel}$ is given by the Clausius-Mossotti formula $$\alpha_{\parallel}=\frac{3 V}{4 \pi} \frac{\epsilon_{\parallel}-\epsilon_{h}}{\epsilon_{\parallel}+ 2\epsilon_{h}} \label{eq:cm}$$ where $V$ is the volume of the particle/cavity. For a lossless plasmonic (metallic) host, whose electric permittivity can be taken to be of the Drude type, i.e., $\epsilon_{h}=1-\omega_{p}^{2} / \omega^{2}$ (where $\omega_{p}$ is the bulk plasma frequency), the polarizability $\alpha_{\parallel}$ exhibits a pole at $\omega_{\parallel}=\omega_{p} \sqrt{2/ (\epsilon_{\parallel} +2)}$ (surface plasmon resonance).
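The location of the surface-plasmon pole can be verified directly from Eq. (\[eq:cm\]) with the Drude host permittivity. A minimal numerical sketch (the cavity permittivity is an arbitrary illustrative value; frequencies are in units of $\omega_{p}$ and $V=1$):

```python
import numpy as np

# Clausius-Mossotti polarizability, Eq. (cm), with a Drude host
# eps_h = 1 - 1/omega^2 (omega in units of omega_p, V = 1).
eps_par = 12.0                      # illustrative high-index cavity

def alpha_par(omega):
    eps_h = 1.0 - 1.0 / omega**2
    return (3.0 / (4.0 * np.pi)) * (eps_par - eps_h) / (eps_par + 2.0 * eps_h)

# Predicted pole: omega_par = omega_p * sqrt(2 / (eps_par + 2)).
omega_res = np.sqrt(2.0 / (eps_par + 2.0))

# The denominator eps_par + 2*eps_h vanishes exactly at the resonance,
# so |alpha| grows without bound as omega -> omega_res.
print(abs(eps_par + 2.0 * (1.0 - 1.0 / omega_res**2)))
print(abs(alpha_par(omega_res * 1.001)), abs(alpha_par(omega_res * 1.1)))
```

The polarizability magnitude grows as the frequency approaches $\omega_{\parallel}$ from either side, which is the pole behavior that the Laurent expansion below linearizes.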
By making a Laurent expansion of $\alpha_{\parallel}$ around $\omega_{\parallel}$ and keeping the leading term, we may write $$\alpha_{\parallel}= \frac{F} {\omega - \omega_{\parallel}} \equiv \frac{1} {\Omega} \label{eq:a_laurent}$$ where $F=(\omega_{\parallel}/2) (\epsilon_{\parallel} - \epsilon_{h})/ (\epsilon_{\parallel}+2)$. For a sufficiently high value of the permittivity of the dielectric cavity, i.e., $\epsilon_{\parallel} > 10$, the electric field of the surface plasmon is strongly localized at the surface of the cavity. As a result, in a periodic lattice of cavities, the interaction of neighboring surface plasmons is very weak, leading to very narrow frequency bands. By treating such a lattice in a tight-binding-like framework, we may assume that the Green’s tensor ${\bf G}_{i i'}(\omega)$ does not vary much with frequency and therefore, ${\bf G}_{i i'}(\omega) \simeq {\bf G}_{i i'}(\omega_{\parallel})$. In this case, Eq. (\[eq:cde\_no\_field\]) becomes an eigenvalue problem $$\sum_{i' \neq i} {\bf G}_{i i'}(\omega_{\parallel}) {\bf P}_{i'}= \Omega {\bf P}_{i} \label{eq:cde_eigen}$$ where $$\begin{aligned} {\bf G}_{i i'}(\omega_{\parallel})=q_{\parallel}^{3} \Bigl[ C(q_{\parallel} | r_{ii'}|) {\bf I}_{2} + J(q_{\parallel} | r_{ii'}|) \left(% \begin{array}{cc} \frac{x_{ii'}^2}{r_{ii'}^{2}} & \frac{x_{ii'}y_{ii'}}{r_{ii'}^{2}} \\ \frac{x_{ii'}y_{ii'}}{r_{ii'}^{2}} & \frac{y_{ii'}^2}{r_{ii'}^{2}} \\ \end{array}% \right) \Bigr]. \nonumber \\ \label{eq:g_tensor}\end{aligned}$$ with ${\bf r}_{ii'}={\bf r}_{i}-{\bf r}_{i'}$, $q_{\parallel}=\sqrt{\epsilon_{h}}\omega_{\parallel}/c$ and ${\bf I}_{2}$ the $2 \times 2$ unit matrix. The form of the functions $C(q_{\parallel} | r_{ii'}|)$, $J(q_{\parallel} | r_{ii'}|)$ generally depends on the type of medium hosting the cavities (isotropic, gyrotropic, bi-anisotropic, etc.). [@eroglu; @dmitriev] The Green’s tensor of Eq.
(\[eq:g\_tensor\]) describes the electric interactions between two point dipoles ${\bf P}_{i}$ and ${\bf P}_{i'}$, each of which corresponds to a single cavity. The first term of ${\bf G}_{i i'}$ describes an interaction which does not depend on the orientation of the two dipoles, whilst the second one is orientation dependent. ![(Color online) (a) Tetragonal crystal with two cavities within the unit cell. (b) The bulk Brillouin zone and (c) the surface Brillouin zone corresponding to the (001) surface.[]{data-label="fig1"}](Fig1.eps){width="8cm"} For an infinitely periodic system, i.e., a crystal of cavities, we assume the Bloch ansatz for the polarization field, i.e., $${\bf P}_{i}={\bf P}_{n \beta}=\exp (i {\bf k} \cdot {\bf R}_{n}) {\bf P}_{0 \beta} \label{eq:bloch}$$ The cavity index $i$ becomes composite, $i \equiv n \beta$, where $n$ enumerates the unit cell and $\beta$ the positions of inequivalent cavities in the unit cell. Also, ${\bf R}_{n}$ denotes the lattice vectors and ${\bf k}=(k_{x},k_{y},k_{z})$ is the Bloch wavevector. By substituting Eq. (\[eq:bloch\]) into Eq. (\[eq:cde\_eigen\]) we finally obtain $$\sum_{\beta'} \tilde{{\bf G}}_{\beta \beta'}(\omega_{\parallel},{\bf k}) {\bf P}_{0 \beta'}= \Omega {\bf P}_{0 \beta} \label{eq:cde_eigen_periodic}$$ where $$\tilde{{\bf G}}_{\beta \beta'}(\omega_{\parallel}, {\bf k}) = \sum_{n'} \exp [i {\bf k} \cdot ({\bf R}_{n}-{\bf R}_{n'})] {\bf G}_{n \beta; n' \beta'}(\omega_{\parallel}). \label{eq:green_fourier}$$ Solution of Eq. (\[eq:cde\_eigen\_periodic\]) provides the frequency band structure of a periodic system of cavities. ![Frequency band structure for the tetragonal lattice of resonant cavities within a plasmonic host (see Fig. \[fig1\]) corresponding to the Green’s tensor of Eq. (\[eq:G\_elem\]) with $s^{A}_{1}=-s^{B}_{1}=1.2, s^{A}_{2}=-s^{B}_{2}=0.5, s'_{1}=2.5,s'_{2}=0.5,s_{z}=2$.[]{data-label="fig2"}](Fig2.eps){width="8cm"}

Topological frequency bands
===========================

Since Eq.
(\[eq:cde\_eigen\_periodic\]) is equivalent to a Hamiltonian eigenvalue problem, we adopt the crystal structure of Ref. . Namely, a tetragonal lattice with a unit cell consisting of two same cavities at inequivalent positions $A$ and $B$ \[see Fig. \[fig1\](a)\] along the $c$-axis. In this case, the index $\beta$ in Eq. (\[eq:cde\_eigen\_periodic\]) assumes the values $\beta=A,B$ for each sublattice (layer) of the crystal. The above lattice is characterized by the $C_{4}$ point-symmetry group. In order to preserve the $C_{4}$ symmetry [@fu_prl] in the Green’s tensor matrix of Eq. (\[eq:cde\_eigen\_periodic\]) we assume that the interaction between two cavities within the same layer (either $A$ or $B$) depends on the relative orientation of the point dipole in each cavity whilst the interaction between cavities belonging to adjacent layers is orientation independent. Also, we take into account interactions up to second neighbors in both inter- and intra-layer interactions. Taking the above into account, the lattice Green’s tensor assumes the form $$\begin{aligned} \tilde{{\bf G}}({\bf k}) =\left(% \begin{array}{cc} \tilde{{\bf G}}^{AA}({\bf k}) & \tilde{{\bf G}}^{AB}({\bf k}) \\ \tilde{{\bf G}}^{AB \dagger}({\bf k}) & \tilde{{\bf G}}^{BB}({\bf k}) \\ \end{array}% \right) \nonumber \\\end{aligned}$$ where $$\begin{aligned} \tilde{{\bf G}}^{\beta \beta}({\bf k})= 2 s^{\beta}_{1} \left( \begin{array}{cc} \cos (k_{x} \alpha) & 0 \\ 0 & \cos (k_{y} \alpha) \\ \end{array} \right)+ \nonumber && \\ 2 s^{\beta}_{2}\left(% \begin{array}{cc} \cos (k_{x} \alpha)\cos (k_{y} \alpha) & -\sin (k_{x} \alpha)\sin (k_{y} \alpha) \\ -\sin (k_{x} \alpha)\sin (k_{y} \alpha) & \cos (k_{x} \alpha)\cos (k_{y} \alpha) \\ \end{array}% \right), %\ \ \beta=A,B \nonumber && \\ \tilde{{\bf G}}^{A B}({\bf k})=[s'_{1}+2 s'_{2} (\cos(k_{x} \alpha) + \cos (k_{y} \alpha)) +s'_{z} \exp( i k_{z} \alpha)] {\bf I}_{2}. 
\nonumber && \\ \label{eq:G_elem}\end{aligned}$$ ![Frequency band structure for a finite slab $ABAB \cdots ABB$ of the crystal of Fig. \[fig1\] made from 80 bilayers. []{data-label="fig3"}](Fig3.eps){width="8cm"} The lattice Green’s tensor of Eq. (\[eq:G\_elem\]) is completely equivalent to the lattice Hamiltonian of Ref. . The parameters $s^{\beta}_{1},s^{\beta}_{2},s'_{1},s'_{2},s_{z}$ in Eq. (\[eq:G\_elem\]) generally depend on $q_{\parallel}$, the lattice constant $a$, and the interlayer distance $c$, but hereafter will be used as independent parameters. Namely, we choose $s^{A}_{1}=-s^{B}_{1}=1.2, s^{A}_{2}=-s^{B}_{2}=0.5, s'_{1}=2.5,s'_{2}=0.5,s_{z}=2$. In Fig. \[fig2\] we show the (normalized) frequency band structure corresponding to Eq. (\[eq:G\_elem\]) along the symmetry lines of the Brillouin zone shown in Fig. \[fig1\](b). It is evident that an omnidirectional frequency band gap exists around $\Omega=0$, which is a prerequisite for the emergence of surface states. In order to investigate the occurrence of surface states, we find the eigenvalues of the Green’s tensor of Eq. (\[eq:G\_elem\]) in a form appropriate for a slab geometry. The emergence of surface states depends critically on the surface termination of the finite slab, i.e., for different slab terminations, different surface-state dispersions occur (if they occur at all). Namely, we assume a finite slab parallel to the (001) surface (characterized by the $C_{4}$ symmetry group) consisting of 80 alternating $AB$ bilayers, except the last bilayer, which is $BB$; i.e., the layer sequence is $ABAB \cdots ABB$. The corresponding frequency band structure along the symmetry lines of the surface Brillouin zone of the (001) surface \[see Fig. \[fig1\](c)\] is shown in Fig. \[fig3\]. It is evident that there exist gapless surface states within the band gap, exhibiting a quadratic degeneracy at the $\overline{M}$-point.
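Equation (\[eq:G\_elem\]) is explicit enough to diagonalize directly. The following numerical sketch (an illustration with the parameter values quoted above, taking $s^{B}_{1}=-s^{A}_{1}$ and $s^{B}_{2}=-s^{A}_{2}$, and setting the lattice constants $a=c=1$) builds the $4\times 4$ Bloch matrix, verifies the double degeneracy at the special momenta $\Gamma$, $M$, $A$, $Z$, and scans a coarse grid of the Brillouin zone to confirm the omnidirectional gap around $\Omega=0$:

```python
import numpy as np

# Parameters of Eq. (G_elem); lattice constants a = c = 1 for simplicity.
s1A, s2A = 1.2, 0.5
s1B, s2B = -1.2, -0.5
sp1, sp2, sz = 2.5, 0.5, 2.0

def G_intra(k, s1, s2):
    cx, cy = np.cos(k[0]), np.cos(k[1])
    sx, sy = np.sin(k[0]), np.sin(k[1])
    return (2 * s1 * np.diag([cx, cy])
            + 2 * s2 * np.array([[cx * cy, -sx * sy],
                                 [-sx * sy, cx * cy]]))

def G_bloch(k):
    gAB = (sp1 + 2 * sp2 * (np.cos(k[0]) + np.cos(k[1]))
           + sz * np.exp(1j * k[2])) * np.eye(2)
    return np.block([[G_intra(k, s1A, s2A), gAB],
                     [gAB.conj().T, G_intra(k, s1B, s2B)]])

def bands(k):
    return np.linalg.eigvalsh(G_bloch(np.asarray(k, dtype=float)))

# Double degeneracy of the bands at the special momenta Gamma, M, A, Z.
pi = np.pi
for k in [(0, 0, 0), (pi, pi, 0), (pi, pi, pi), (0, 0, pi)]:
    w = bands(k)
    assert abs(w[0] - w[1]) < 1e-9 and abs(w[2] - w[3]) < 1e-9

# Coarse Brillouin-zone scan: no band reaches Omega = 0 (omnidirectional gap).
ks = np.linspace(-pi, pi, 17)
gap = min(np.abs(bands((kx, ky, kz))).min()
          for kx in ks for ky in ks for kz in ks)
print("smallest |Omega| on the grid:", gap)
```

The Hermitian structure of the Bloch matrix, with the $B$-sublattice block equal to minus the $A$-sublattice block, forces the spectrum to be symmetric about $\Omega=0$, which is why the gap is centered there.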
In this case, the corresponding doublet of surface states can be described by an effective theory [@chong_prb] similarly to the doublet states at a point of linear degeneracy (Dirac point). [@sepkhanov] We note that the equivalence of the Green’s tensor ${\bf G}$ with the atomic Hamiltonian of Ref. , as well as the form of the time-reversal $T$ and (geometric) $C_{4}$-rotation $U$ operators for the EM problem, [@haldane] which are the same as for spinless electrons, allows us to describe the photonic band structure with the $Z_{2}$ topological invariant $\nu_{0}$ $$(-1)^{\nu_{0}}=(-1)^{\nu_{\Gamma M}} (-1)^{\nu_{A Z}} \label{eq:z2_def}$$ where for real eigenvectors of $\tilde{{\bf G}}({\bf k})$ we have [@fu_prl] $$(-1)^{{\bf k}_{1} {\bf k}_{2}} = {\rm Pf}[w({\bf k}_{2})]/ {\rm Pf}[w({\bf k}_{1})] \label{eq:pf_frac}$$ and $$w_{mn}({\bf k}_{i}) = \langle u_{m} ({\bf k}_{i}) | U | u_{n}({\bf k}_{i}) \rangle. \label{eq:w_def}$$ ${\rm Pf}$ stands for the Pfaffian of a skew-symmetric matrix, i.e., ${\rm Pf}[w]^{2}=\det(w)$. Due to the double degeneracy of the band structure at the four special momenta $\Gamma, M, A, Z$, the frequency bands come in doublets. Since frequency eigenvectors with different eigenfrequencies are orthogonal, all the inter-pair elements of the $w$-matrix are zero and the latter is written as: $$w({\bf k}_{i})=\left(% \begin{array}{cccc} w^{1}({\bf k}_{i}) & 0 & 0 & 0 \\ 0 & w^{2}({\bf k}_{i}) & 0 & 0 \\ 0 & 0 & \ddots & 0 \\ 0 & 0 & 0 & w^{N}({\bf k}_{i}) \\ \end{array}% \right) \label{eq:w_reduced_form}$$ where $w^{j}({\bf k}_{i})$ are anti-symmetric $SU(2)$ matrices, [@wang_njp] i.e., $w^{j}({\bf k}_{i})=A_1$ or $A_2$, where $$A_1= \left(% \begin{array}{cc} 0 & 1 \\ -1 & 0 \\ \end{array}% \right), A_2= \left(% \begin{array}{cc} 0 & -1 \\ 1 & 0 \\ \end{array}% \right). \label{eq:alpha_matr_def}$$ In this case, ${\rm Pf}[w({\bf k}_{i})]=w^{1}_{12} w^{2}_{12} \cdots w^{N}_{12}=\pm 1$.
Therefore, $(-1)^{{\bf k}_{1} {\bf k}_{2}} = \pm 1$ and $\nu_{0}=1$, which ensures the presence of gapless surface states. We note that the above analysis relies on the assumption of real frequency bands. The presence of losses in the constituent materials renders the frequency bands complex, i.e., the Bloch wavevector possesses both a real and an imaginary part. However, even in this case, one can still speak of real frequency bands if the imaginary part of the Bloch wavevector is at least a [*hundred*]{} times smaller than the corresponding real part. This is a common criterion used in calculations of the complex frequency band structure by on-shell electromagnetic solvers such as the layer-multiple-scattering method [@comphy] or the transfer-matrix method. [@tmm] ![(Color online) A possible realization of a photonic structure with gapless surface states: dielectric particles of square cross section, joined together with cylindrical coupling elements and embedded within a 3D network of metallic wires (artificial plasma).[]{data-label="fig4"}](Fig4.eps){width="6cm"}

Blueprint for a photonic topological insulator
==============================================

A possible laboratory realization of the photonic analog of a topological crystalline insulator is depicted in Fig. \[fig4\]. Since our model system requires dielectric cavities within a homogeneous plasma, a lattice of nano-cavities formed within a homogeneous Drude-type metal, e.g., a noble metal (Au, Ag, Cu), would be the obvious answer. [@stefanou_ssc] However, the plasmon bands are extremely lossy due to the intrinsic absorption of noble metals in the visible regime. A solution to this would be the use of an [*artificial*]{} plasmonic medium operating in the microwave regime, where metals are perfect conductors and losses are minimal. Artificial plasma can be created by a 3D network of thin metallic wires a few tens of $\mu$m in diameter and spaced by a few mm.
[@art_plasma] A lattice of dielectric particles within an artificial plasma can be modelled with the tight-binding Green’s tensor presented here. Since the interaction among first and second neighbors within the same bilayer ($A$ or $B$) should depend on the dipole orientation (in order to preserve the $C_{4}$ symmetry), the dielectric particles in each layer are connected with cylindrical waveguiding elements (different in each layer $A$ or $B$ - see Fig. \[fig4\]). In contrast, between two successive bilayers there are no such elements, since interactions between dipoles belonging to different layers should be independent of the dipole orientations ($s$ orbital-like). Another advantage of realizing the photonic analog in the microwave regime is the absence of nonlinearities in the EM response of the constituent materials, since photon-photon interactions may destroy the quadratic degeneracy of the surface bands, in analogy with fermionic systems. [@sun] Finally, we must stress that a photonic topological crystalline insulator can also be realized with purely dielectric materials if the host medium surrounding the cavities is not a plasmonic medium but a photonic crystal with an absolute band gap: the cavities would be point defects within the otherwise periodic photonic crystal, and the tight-binding description would still be appropriate. [@bayindir; @k_fang] In this case, Maxwell’s equations lack any kind of characteristic length, and the proposed analog could be realized at any length scale.

Conclusions
===========

In conclusion, a 3D lattice of weakly interacting cavities respecting TR symmetry and a certain point-group symmetry constitutes a photonic analog of a topological crystalline insulator, exhibiting a spectrum of gapless surface states. A possible experimental realization would be a 3D lattice of dielectric particles within a continuous network of thin metallic wires with a plasma frequency in the GHz regime.
This work has been supported by the European Community’s Seventh Framework Programme (FP7/2007-2013) under Grant Agreement No. 228455-NANOGOLD (Self-organized nanomaterials for tailored optical and electrical properties).

Appendix
========

The $z$-component of the polarizability need not be zero but can assume finite values, as long as the frequency band stemming from the surface-plasmon resonance corresponding to the $z$-component is spectrally distinct from (does not overlap with) the bands stemming from the $xy$-components of the polarizability. In this case, one can treat the two frequency bands (doublet) stemming from the $xy$-resonance separately from the band coming from the $z$-resonance (singlet). The above requirements can be quantified as follows: $$C_{\parallel} \ll \frac{\omega_{\parallel} - \omega_{\perp}}{\omega_{p}} \label{eq:app_1}$$ where $C_{\parallel}$ is the width of the $xy$-frequency bands (in dimensionless frequency units). $C_{\parallel}$ is obtained from the first term of the right-hand side of Eq. (\[eq:g\_tensor\]) of the paper. To a first approximation, it is given by [@cde] $$C_{\parallel} \sim \frac{\exp(-q_{\parallel} a_{\parallel})} {q_{\parallel} a_{\parallel}} \label{eq:app_2}$$ where $q_{\parallel}= \sqrt{|\epsilon_{h}|} \omega_{\parallel} / c$. Therefore, the condition (\[eq:app\_1\]) is written as $$\frac{\exp(-q_{\parallel} a_{\parallel})} {q_{\parallel} a_{\parallel}} \ll \frac{\omega_{\parallel} - \omega_{\perp}}{\omega_{p}} \label{eq:app_3}$$ where $a_{\parallel}$ is the lattice constant in the $xy$-plane. Given that $\omega_{\parallel} = \omega_{p} \sqrt{2/(\epsilon_{\parallel}+2)}$, Eq.
(\[eq:app\_3\]) becomes $$\frac{\exp({-\sqrt{|\epsilon_{h}|} \omega_{\parallel} a_{\parallel} /c})}{\sqrt{|\epsilon_{h}|} \omega_{\parallel} a_{\parallel} /c} \ll \sqrt{\frac{2 (\epsilon_{\perp}-\epsilon_{\parallel})} {(\epsilon_{\parallel}+2)(\epsilon_{\perp}+2)}} \label{eq:app_5}$$ From the above equation it is evident that for a given value of the dielectric anisotropy $\epsilon_{\perp} - \epsilon_{\parallel}$, one can always find a suitably large lattice constant $a_{\parallel}$ such that Eq. (\[eq:app\_5\]) is fulfilled. The latter allows easy engineering of the photonic analog of a crystalline topological insulator, since there is practically no restriction on the choice of the (uniaxial) material the cavities are made from. It can also be easily understood that if Eq. (\[eq:g\_tensor\]) holds, the same equation is true for the width $C_{\perp}$ of the singlet frequency band (resulting from the $z$-resonance). [*Numerical example*]{}. Suppose that the cavities are made from a nematic liquid crystal, which is a uniaxial material. Typical values of the permittivity tensor $\epsilon$ are, e.g., $\epsilon_{\parallel}=1.5$, $\epsilon_{\perp}=1.8$. In this case, $\omega_{\parallel} \approx 0.75 \omega_{p}$ and $\epsilon_{h}=1-\omega_{p}^{2} / \omega_{\parallel}^{2} \approx -0.777$. By choosing a large lattice constant, i.e., $a_{\parallel} = 4 c / \omega_{p}$, Eq. (\[eq:app\_5\]) is clearly fulfilled $$\frac{\exp{(-\sqrt{|\epsilon_{h}|} \omega_{\parallel} a_{\parallel} /c)}}{\sqrt{|\epsilon_{h}|} \omega_{\parallel} a_{\parallel} /c} \approx 0.026866 \ll 0.2124 \approx \sqrt{\frac{2 (\epsilon_{\perp}-\epsilon_{\parallel})} {(\epsilon_{\parallel}+2)(\epsilon_{\perp}+2)}}. \label{eq:app_6}$$ F. D. M. Haldane and S. Raghu, , 013904 (2008); [*ibid*]{}, , 033834 (2008). Z. Wang, Y. D. Chong, J. D. Joannopoulos, and M. Soljačić, , 013905 (2008); [*ibid*]{}, Nature (London) [**461**]{}, 772 (2009); Z. Yu [*et al.*]{}, , 023902 (2008); H. Takeda and S.
John, , 023804 (2008); D. Han [*et al.*]{}, , 123904 (2009); X. Ao, Z. Lin, and C. T. Chan, , 033105 (2009); M. Onoda and T. Ochiai, , 033903 (2009); T. Ochiai and M. Onoda, , 155103 (2009); R. Shen, L. B. Shao, B. Wang, and D. Y. Xing, , 041410(R) (2010); Y. Poo [*et al.*]{}, , 093903 (2011). C. L. Kane and E. J. Mele, , 226801 (2005); [*ibid*]{}, , 146802 (2005). L. Fu, C. L. Kane, and E. J. Mele, , 106803 (2007); L. Fu and C. L. Kane, , 045302 (2007); M. Z. Hasan and C. L. Kane, , 3045 (2010). A. P. Schnyder, S. Ryu, A. Furusaki, and A. W. W. Ludwig, , 195125 (2008); A. Kitaev, AIP Conf. Proc. [**1134**]{}, 22 (2009); Y. Ran, arXiv: 1006.5454; X.- L. Qi, T. L. Hughes, S. Raghu, and S.- C. Zhang, , 187001 (2009); R. Roy, arXiv: 0803.2868. R. S. K. Mong, A. M. Essin, and J. E. Moore, , 245209 (2010); ; R. Li, J. Wang, X.- L. Qi, and S.- C. Zhang, Nat. Phys. [**6**]{}, 284 (2010). L. Fu, , 106802 (2011). E. Lidorikis, M. M. Sigalas, E. N. Economou, and C. M. Soukoulis, , 1405 (1998). M. Bayindir, B. Temelkuran, and E. Ozbay, , 2140 (2000). M. Quinten, A. Leitner, J. R. Krenn, and F. R. Aussenegg, Opt. Lett. [**23**]{}, 1331 (1998). N. Stefanou, A. Modinos, and V. Yannopapas, Sol. Stat. Commun. [**118**]{}, 69 (2001). E. M. Purcell and C. R. Pennypacker, Astrophys. J. [**186**]{}, 705 (1973). For a detailed study see supplemental material at ... A. Eroglu, [*Wave Propagation and Radiation in Gyrotropic and Anisotropic Media*]{} (Springer, New York, 2010). V. Dmitriev, Prog. Electromag. Res. [**48**]{}, 145 (2004). Y. D. Chong, X.- G. Wen, and M. Soljacic, , 235125 (2008). R. A. Sepkhanov, Ya. B. Bazaliy, and C. W. J. Beenakker, , 063813 (2007). Z. Wang, X-. L.- Qi, and S.- C.- Zhang, New J. Phys. [**12**]{}, 065007 (2010). N. Stefanou, V. Yannopapas, and A. Modinos, Comput. Phys. Commun. [**113**]{}, 49 (1998). P. M. Bell [*et al.*]{}, Comput. Phys. Commun. [**85**]{}, 306 (1995). J. B. Pendry, A. J. Holden, W. J. Stewart, and I. Youngs, , 4773 (1996). K. Sun, H. 
Yao, E. Fradkin, and S. A. Kivelson, , 046811 (2009). K. Fang, Z. Yu, and S. Fan, , 075477 (2011).
---
abstract: 'NO$\nu$A is an accelerator-based neutrino oscillation experiment with great potential to measure the last unknown mixing angle $\theta_{13}$, the neutrino mass hierarchy, and the CP-violation phase in the lepton sector, thanks to 1) a 700 kW beam, 2) a location 14 mrad off the beam axis, and 3) an 810 km long baseline. The Near Detector on the Surface is fully functioning and taking both NuMI and Booster beam data. The far detector building achieved beneficial occupancy on April 13. This proceeding focuses on the DAQ software system.'
author:
- 'X. C. Tian, on behalf of the NO$\nu$A Collaboration'
title: 'NO$\nu$A Data Acquisition Software System'
---

Introduction
============

The next-generation long-baseline neutrino experiments [@NOvA; @T2K; @LBNE] aim to measure the third mixing angle $\theta_{13}$, determine whether CP is violated in the lepton sector, and resolve the neutrino mass hierarchy. The NuMI Off-axis electron-neutrino ($\nu_e$) Appearance (NO$\nu$A) experiment is the flagship experiment of the US domestic particle physics program, and it has the potential to address most of the fundamental questions in neutrino physics raised by the Particle Physics Project Prioritization Panel (P5). NO$\nu$A has two functionally identical detectors (Fig. \[detector\]): a 222 ton near detector located underground at Fermilab and a 14 kiloton far detector located in Ash River, Minnesota, giving a baseline of 810 km. The detectors are composed of extruded PVC cells loaded with titanium dioxide to enhance reflectivity. There are 16,416 and 356,352 cells in the near and far detector, respectively. Each cell measures 3.93 cm transverse to the beam direction and 6.12 cm along it, and is filled with liquid scintillator (mineral oil plus 5% pseudocumene). The corresponding radiation length is 0.15 $X_0$ and the Molière radius is 10 cm, ideal for the identification of electron-type neutrino events.
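As a quick consistency check on the numbers above, the far-detector cell count factorizes into planes times cells per plane (the resulting 384 cells per plane, and the implied transverse extent, are inferences from the quoted figures, not values stated in the text):

```python
# Far-detector geometry check from the numbers quoted above:
# 356,352 cells distributed over 928 planes.
far_cells, far_planes = 356_352, 928
cells_per_plane = far_cells // far_planes
print(cells_per_plane, far_cells % far_planes)   # cells per plane, remainder

# Transverse extent implied by the 3.93 cm cell width (an inference only;
# outer PVC dimensions would make the real detector slightly larger).
width_m = cells_per_plane * 3.93 / 100.0
print(f"{width_m:.2f} m")
```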
The “Neutrinos at the Main Injector” (NuMI) beamline will provide a 14 mrad off-axis neutrino beam to reduce neutral-current backgrounds; the flux peaks at 2 GeV, corresponding to the first oscillation maximum for this detector distance. The accelerator and NuMI upgrades will double the protons delivered to the detector, to $6\times 10^{20}$ protons per year. For further details on the current status of the NO$\nu$A experiment, please see Ref. [@Gavin]. ![The NO$\nu$A detectors. The far (near) detector has 29 (6) blocks, and each block is made of 32 scintillator PVC planes, giving 928 (192) planes in total. The near detector also has a muon catcher, which is composed of 13 scintillator PVC planes and 10 steel planes.[]{data-label="detector"}](detector){width="100.00000%"} Charged particles from neutrino interactions, as well as cosmic-ray muons, emit scintillation light in the scintillator. The scintillation light is collected by a loop of wavelength-shifting (WLS) fiber, and a 32-pixel Avalanche Photodiode (APD) attached to the fiber converts the light pulse into electrical signals. The Data Acquisition (DAQ) system, shown in Fig. \[daq-all\], concentrates the data from these APDs into a single stream that can be analyzed and archived. The DAQ can buffer the data while waiting for a trigger decision on whether the data should be recorded or rejected. Online trigger processors will be used to analyze the data stream, to correlate data with similar time stamps, and to look for clusters of hits indicating an interesting event. Additional functionality for dealing with flow control, monitoring, system operations, and alarms is also included [@NOvA-TDR].
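The 2 GeV peak of the 14 mrad off-axis beam quoted above follows from two-body $\pi \to \mu\nu$ decay kinematics, using the standard off-axis approximation $E_\nu \approx (1 - m_\mu^2/m_\pi^2)\,E_\pi / (1 + \gamma^2\theta^2)$ with $\gamma = E_\pi/m_\pi$ (this formula is not given in the text; it is quoted here only to illustrate the design choice):

```python
import numpy as np

# Off-axis neutrino energy from pi -> mu nu decay (standard approximation,
# not taken from the text): E_nu = (1 - (m_mu/m_pi)^2) E_pi / (1 + g^2 th^2).
m_pi, m_mu = 0.13957, 0.10566          # GeV
theta = 0.014                          # 14 mrad off-axis angle

def e_nu(e_pi):
    gamma = e_pi / m_pi
    return (1.0 - (m_mu / m_pi) ** 2) * e_pi / (1.0 + (gamma * theta) ** 2)

# At a fixed off-axis angle the neutrino energy saturates near a single
# value regardless of the parent pion energy -- a narrow-band beam.
e_pi = np.linspace(0.5, 40.0, 4000)
peak = e_nu(e_pi).max()
print(f"maximum off-axis neutrino energy: {peak:.2f} GeV")
```

The weak dependence of $E_\nu$ on $E_\pi$ at a fixed angle is precisely what makes the off-axis flux narrow-band and centered near the oscillation maximum.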
The event types that the NO$\nu$A DAQ will record include beam neutrino events, cosmic ray muons, and other physics events (supernova neutrinos, high energy neutrinos, [*etc.*]{}). Every 2.2 s (to be reduced to 1.3 s with the accelerator and NuMI upgrades), a 10 $\mu$s beam spill will be generated and time stamped by a GPS-based timing system at Fermilab. All hits that occur in a 30 $\mu$s window centered on the 10 $\mu$s spill are recorded for further processing. The event rates are 30 neutrino events per spill for the near detector and 1,400 $\nu_e$ beam events per year for the far detector. The randomly selected cosmic ray muons used for calibration and monitoring are taken to give 100 times the number of beam neutrino events. The cosmic ray muon rate is 50 Hz (200 kHz) for the near (far) detector. Other interesting physics processes, such as a supernova explosion at 10 kpc, would result in thousands of neutrinos within 10 seconds in the far detector. For the near detector, the data rates are 75 TB per year through the DAQ system and 1 TB per year written to disk. For the far detector, the data rates are 12,000 TB per year through the DAQ system and 25 TB per year written to disk. ![A schematic overview of the NO$\nu$A DAQ system. The data stream is from left to right.[]{data-label="daq-all"}](daq.png){width="\textwidth"} NO$\nu$A Data Acquisition System ================================ The primary task of the DAQ is to record the data from the APDs for further processing. The data flows through Front End Boards (FEBs), Data Concentrator Modules (DCMs), Buffer Nodes (BNs), and the DataLogger (DL), and is then archived on disk or tape as shown in Fig. \[daq-all\]. Each APD is digitized by an FEB continuously without dead time. The data from a group of up to 64 FEBs are consolidated by the DCM into 5 ms time slices which are routed to downstream Buffer Nodes. The data is buffered in the Buffer Nodes for a minimum of 20 s while waiting for the spill trigger. 
A spill signal is required to arrive within the buffering time so that the spill time can be correlated with the time-stamped data to determine whether the hits occurred in or out of spill. The triggered data from the Buffer Nodes are merged to form an event in the DataLogger, and the event is written to file for storage or to shared memory for monitoring. The power distribution system (PDS) provides power to FEBs, APDs, ThermoElectric Coolers (TECs) [^1], DCMs and Timing Distribution Units (TDUs). Run Control provides the overall control of the DAQ system. The following sections describe the key subsystems of the NO$\nu$A DAQ system. Front End Boards (FEBs) ----------------------- The front end electronics (Fig.  \[feb\]) is responsible for amplifying and integrating the signals from the APD arrays, determining the amplitude of the signals and their arrival time, and presenting that information to the DAQ. The FEBs are operated in trigger-less, continuous readout mode with no dead time, and the data is zero-suppressed based on Digital Signal Processing (DSP) algorithms. The suppression threshold is programmable at the channel level, allowing different thresholds to be set depending on the particular characteristics of a given channel. Data above that threshold is time-stamped and compared to a NuMI timing signal in the DAQ system to determine whether the event was in or out of spill. The major components of the FEB are the carrier board connector at the left, which brings the APD signals to the NO$\nu$A ASIC performing integration, shaping, and multiplexing; the ADC immediately to its right, which digitizes the signals; and an FPGA for control, signal processing, and communication. The ASIC is customized to maximize the sensitivity of the detector to small signals from the long fibers in the far detector. The average photoelectron (PE) yield at the far end of an extrusion module is 30, and the noise is 4 PEs. 
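The per-channel zero suppression described above can be sketched in a few lines. This is an illustrative reconstruction, not the FEB firmware: the data layout, function names, and parameters below are hypothetical; only the logic (each channel keeps samples above its own programmable threshold and time-stamps them) comes from the text.

```python
# Sketch of per-channel zero suppression: each of the 32 APD channels has its
# own programmable threshold; samples above threshold are kept and time-stamped.
# All names and structures here are illustrative, not the actual FEB firmware.

def zero_suppress(samples, thresholds, t0, dt):
    """samples: {channel: [adc0, adc1, ...]}, thresholds: {channel: adc_cut}.
    t0 is the timestamp of sample 0, dt the sampling period.
    Returns a list of (channel, timestamp, adc) hits above threshold."""
    hits = []
    for ch, waveform in samples.items():
        cut = thresholds[ch]
        for i, adc in enumerate(waveform):
            if adc > cut:
                hits.append((ch, t0 + i * dt, adc))
    return hits
```

With the sampling rates quoted below (8 MHz at the near detector, 2 MHz at the far detector), `dt` would be 125 ns and 500 ns respectively.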
The FPGA on the FEB uses a Digital Signal Processing algorithm to extract the time and amplitude of signals from the APD. Each FEB reads out 32 channels corresponding to the 32 pixels of one APD. Higher detector activity during beam spills at the near detector requires higher time resolution; therefore the FEBs sample the APD pixels at 8 MHz at the near detector and at 2 MHz at the far detector. The FEBs are capable of limited waveform digitization and waveform readout. ![Schematic of the APD module and the front-end electronics board showing the major components. []{data-label="feb"}](feb-apd.png){width="100.00000%"} Data Concentrator Modules (DCMs) -------------------------------- The DCM (Fig. \[dcm\]) is a custom component of the DAQ. Each DCM is responsible for consolidating the data received from up to 64 FEBs into 5 ms time slices in internal data buffers and for transferring the data out to the Event Builder Buffer Farm nodes over Gigabit Ethernet. The DCMs also pass timing and control information from the timing system to the FEBs. The DCM mainly consists of an FPGA, an embedded PowerPC processor, and connectors as shown in Fig. \[dcm\]. Data from an FEB consists of a header and hit information for a given timeslice. The mid-sized FPGA on the DCM concatenates and combines the hit information from all 64 FEBs into time slices on the order of 50 $\mu$s (MicroSlices). The data is then read from the FPGA by the embedded PowerPC processor running embedded Linux. The DCM application software running on the processor has the responsibility of concatenating and packaging the data received from the DCM FPGA into 5 ms time slices (MilliSlices) and distributing the resulting data packets to the Event Builder Buffer Nodes over Gigabit Ethernet. 
The application software also handles communication with Run Control and the DAQ Monitor System, provides the high-level interface for programming and configuring the DCM and FEB hardware components, and provides support for operating in simulated data input mode. Data is transferred from the DCMs to different Buffer Nodes in a round-robin fashion, with each DCM starting with a different Buffer Node. Timing information from the timing system is used to divide time into slices such that data packets are transmitted to a different Buffer Node in each slice. In this way, the DCMs stay synchronized in their rotation, and no two DCMs ever transmit to the same Buffer Node at once. Each Buffer Node receives data from only one DCM at a time, and all Buffer Nodes receive data on all time slices. The DCM also connects to the timing system, which passes a single common timing clock to all FEBs to maintain synchronization to common time better than $\pm1$ tick of the 64 MHz clock detector-wide. ![The block diagram of the Data Concentrator Module. The DCM FPGA concatenates and combines the data received from up to 64 FEBs on dedicated serial links into time slices on the order of 50 $\mu$s (MicroSlices). The DCM CPU runs kernel device drivers responsible for the programming of the DCM and FEB FPGAs and the readout of the MicroSlices and FEB status buffers prepared by the DCM FPGA. []{data-label="dcm"}](dcm.pdf){width="90.00000%"} Buffer Nodes (BNs) ------------------ The Buffer Nodes are used to buffer the raw hit data from the entire detector for a minimum of 20 s until a trigger is received from the Global Trigger system (Fig. \[bn\]). The trigger is a time window; the Buffer Nodes use the time window to search for all data within that frame of time and route it to the DataLogger process. All data for one time frame will be buffered in the same Buffer Node. 
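The round-robin rotation described above (each DCM starts at a different Buffer Node and advances one node per time slice, so no two DCMs ever hit the same node simultaneously) can be sketched with a simple modular rule. The rule below is an illustrative reconstruction of the collision-free property, not the actual NO$\nu$A implementation.

```python
# Illustrative round-robin DCM -> Buffer Node schedule: DCM d sends the data
# of time slice t to node (d + t) mod n_nodes. When n_dcms <= n_nodes this is
# collision-free within every slice; with n_dcms == n_nodes every node also
# receives data in every slice, matching the description in the text.

def target_node(dcm_id, time_slice, n_nodes):
    return (dcm_id + time_slice) % n_nodes

def check_schedule(n_dcms, n_nodes, n_slices):
    """Verify the two properties claimed above over n_slices slices."""
    for t in range(n_slices):
        targets = [target_node(d, t, n_nodes) for d in range(n_dcms)]
        assert len(set(targets)) == n_dcms            # no collisions in a slice
        if n_dcms == n_nodes:
            assert set(targets) == set(range(n_nodes))  # full coverage
    return True
```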
Each Buffer Node is independent and contains distinct detector data, such that when a given time window is requested, all the Buffer Nodes must perform a search on the available data and at least one should return the desired data. The Buffer Node needs to aggregate the data from each of the DCMs for the time frame and ensure that data from all DCMs is seen. This is a form of event building where the data for a large time frame is built as a single event. For any frame seen by a Buffer Node, data must be received from each DCM. This provides a system integrity check and also allows more efficient searching when a trigger is received. While all the data from a given time frame will be contained in one farm node, an actual 30 $\mu$s trigger, which defines an event, may span these large time frame boundaries and thus be split across two farm nodes. Each farm node needs to determine whether it has any data for the trigger window in its memory buffer and send it to the logging process. ![The diagram of the Buffer Node. []{data-label="bn"}](bn){width="100.00000%" height="4.5cm"} ![The internal structure of the Buffer Node. Data is distributed from the DCMs to the Buffer Nodes in a round-robin fashion with traffic shaping.[]{data-label="dcm-bn"}](dcm-bn){width="100.00000%" height="4.5cm"} DataLogger (DL) --------------- The DataLogger (DL) is responsible for merging the triggered data blocks from the Buffer Nodes to form events and writing them to local files or online monitoring systems through a Data Dispatcher process (Fig. \[dl\]). It receives input from the Global Trigger and the Buffer Nodes. It writes completed events to run/subrun output disk files and continuously populates a shared memory segment with events for online monitoring. The incomplete events, called DataAtoms, consist of a TriggerBlock coming from the Global Trigger and/or DataBlocks coming from the Buffer Nodes. 
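The event-building step in the DataLogger can be sketched as grouping DataAtoms by trigger number. The sketch below is a hypothetical reconstruction: the data types and the completeness criterion (a TriggerBlock plus at least one DataBlock) are our illustrative simplifications, not the actual DataLogger logic.

```python
# Illustrative event building: DataAtoms sharing a trigger number are merged
# into one event; events still missing their TriggerBlock or all DataBlocks
# are flagged as incomplete. Names and the completeness criterion are
# hypothetical simplifications of the scheme described in the text.

def build_events(data_atoms):
    """data_atoms: list of (trigger_number, kind, payload) with kind in
    {'trigger', 'data'}. Returns (complete, incomplete) dicts keyed by
    trigger number."""
    events = {}
    for trig, kind, payload in data_atoms:
        ev = events.setdefault(trig, {"trigger": None, "data": []})
        if kind == "trigger":
            ev["trigger"] = payload
        else:
            ev["data"].append(payload)
    complete = {t: ev for t, ev in events.items()
                if ev["trigger"] is not None and ev["data"]}
    incomplete = {t: ev for t, ev in events.items() if t not in complete}
    return complete, incomplete
```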
Each DataAtom, whether it is a TriggerBlock, DataBlocks, or a combination of both, has an associated Trigger Number which is used to build events. The complete events are written to disk: data streams are formed corresponding to each trigger type for the run, plus a stream for all events. Failure to complete an event indicates a DAQ problem and generates a warning message sent to Run Control, with the resulting incomplete event written to disk. These files consist of various headers, data blocks, tails, and checksum words. Events are also written to a shared memory segment which is created at run setup on the DataLogger node. This memory segment is overwritten at a constant rate with new events. An Event Viewer is available to view the data structure of these events, and an online Event Display can also be run on them. ![The diagram of DataLogger. []{data-label="dl"}](dl){width="100.00000%" height="4.5cm"} Trigger ------- The Global Trigger system is responsible for receiving a beam spill signal, or other triggering conditions such as the periodic calibration “Pulser” trigger, and for instructing the NO$\nu$A DAQ’s data buffer system to save a set of data rather than performing the default buffer action, which is to discard the oldest data in order to make room for current data. Among the variety of triggers, the most important is the beam spill trigger originating in the NuMI beam-line, which allows the beam-neutrino-induced interactions to be recorded. The beam spill signal will be generated at Fermilab in response to the signal firing the kicker magnet. The actual time will be logged and transmitted to the far detector Global Trigger via the Internet. Upon receipt of the beam spill time, the Global Trigger system will generate the time windows based on the signal and send that signal to the buffering processes on each of the buffer farm nodes. 
In the absence of actual spill triggers the system will generate random triggers to allow for tracking of calibration and monitoring of the detector. The calibration source for NO$\nu$A is the copious rate of cosmic rays hitting this large detector on the surface. While the data rate of these cosmic rays is too great to consider saving all of them, a subset is needed to calibrate, measure the background, and monitor the detector. Thus, a Calibration Trigger will be issued at configurable intervals to save blocks of time, as determined by operational requirements, containing a useful sample of cosmic ray muons. Additionally, Data Driven trigger processing information will be sent from the Buffer Nodes to the Global Trigger, which will use that information to decide whether the appropriate Data Driven trigger should be issued. Timing System ------------- The timing system consists of one Master Timing Distribution Unit (MTDU) and several Slave Timing Distribution Units (STDUs), and is used to synchronize both the near and far detector timing systems to a known time standard. The MTDU is cabled to its corresponding STDUs in a daisy-chained fashion, and similarly each STDU is cabled to the 12 DCMs on its double block in a daisy-chain configuration. The MTDU derives its clock from the Global Positioning System (GPS) and distributes it to the first STDU in the chain. All the FEBs/DCMs are synchronized to this common high precision 16 MHz clock reference distributed by the STDUs. The MTDU also generates command and sync signals as directed by Run Control, and synchronizes all detector DAQ components on both the near and far detectors. The Near Detector MTDU differs from the Far Detector MTDU in that it also connects to the Main Injector timing system and synchronizes itself to the Main Injector. The timing system is self-compensating for cable/transmission propagation delays between units (timing units and DCMs). 
Message Systems --------------- The message systems include message passing and the message facility. The message passing system, which is capable of high message bandwidth, will be used to transport control and monitoring messages between processes in the DAQ. It will support the sending of messages to individual processes as well as groups of processes, and it will provide support for reply messages that are generated in response to request messages. General message passing is handled using the FNAL “Responsive Message System (RMS)”, which uses OpenSplice DDS for low-level message transmission. The NO$\nu$A-specific layers provide ease of use. The message facility system will provide the infrastructure for all of the distributed processes in the DAQ to report status messages of various severities in a consistent manner to a central location, and it will provide the tools for displaying and archiving the messages for later analysis. The message facility system acts as a gate keeper to the Run Control clients with respect to error messages: it is the responsibility of this part of the system to determine which messages need to be seen by Run Control. The archiving of status messages for later analysis will also be valuable for diagnosing problems after they have occurred. Run Control ----------- The Run Control system provides a graphical interface for operators to control data taking, control logic to carry out the operators’ requests, and monitoring functionality to automatically react to exceptional conditions. It is written in C++ using Qt [@QT] and a client/server model. All DAQ components implement a well defined state model and, under the command of Run Control, make transitions between states. The Run Control application must communicate with other applications, including the run history and configuration databases, the trigger, the message systems, and the detector control system. 
For debugging and commissioning of a large detector composed of essentially one type of detector sub-system, Run Control will support partitioning of the resources. There will need to be one central resource manager that tracks assignments of DCMs and Buffer Nodes to partitions. The resource manager will provide the Run Control client the necessary information for using only the hardware and applications reserved for its partition. Run Control will interact with the configuration database to provide appropriate information to processes on how to configure themselves, download or receive calibration parameters, [*etc.*]{} In addition, it is responsible for saving all relevant information on the configuration of the run, including partition information, to a run history database for offline access. Any change in the configuration requires a new run to be started. Monitoring Systems ------------------ The DAQ is monitored at different levels by five different applications. The real time monitors include: 1) the DAQ Monitor, which monitors DAQ health and performance using Ganglia [@Ganglia]; 2) the Memory Viewer, which displays the bytes of raw data; 3) the Online Event Display, which displays reconstructed events; 4) the Online Monitor, which monitors the run metrics. The offline monitor is the data check, which assesses detector performance by looking at metrics over multi-run periods. Please refer to [@Susan] for details. DAQ Performance on Near Detector On the Surface (NDOS) ====================================================== The Near Detector On the Surface (NDOS) DAQ system has been up and running since the fall of 2010. The first neutrino candidate was observed on Dec. 15, 2010. The NDOS has recorded 8.4$\times10^{18}$ Protons On Target (POTs) from the NuMI neutrino beam and 5.6$\times10^{19}$ POTs from the NuMI anti-neutrino beam. NDOS has also accumulated 3.0$\times10^{19}$ POTs from the Booster neutrino beam. Please refer to [@Minerba] for details. 
Many DAQ performance gains and bug fixes have resulted from NDOS commissioning and running; for example, the DCM data throughput has improved by a factor of two as a result of optimizing the software and network (Fig. \[dcmgain\] left). The number of active channels (Fig. \[dcmgain\] right), live time, and quality of data continue to improve over time. ![The Data Concentrator Module throughput gain as a function of time (left) and the number of active channels as a function of time (right).[]{data-label="dcmgain"}](daqgain "fig:"){width="48.00000%"} ![The Data Concentrator Module throughput gain as a function of time (left) and the number of active channels as a function of time (right).[]{data-label="dcmgain"}](channel "fig:"){width="48.00000%"} Conclusion ========== Much of the DAQ system has been designed, implemented and deployed to the NO$\nu$A prototype detector, the Near Detector on the Surface (NDOS), which has been up and running since the Fall of 2010. The system is robust enough that it can run the readout for the NDOS at near 100% live readout to disk. Many performance gains and bug fixes have been made as a result of commissioning the prototype detector. Far detector construction will start in Jan. 2012, with readout of the first di-block expected early next year. The far detector DAQ system is under development and will be ready for the rate conditions that it will see at the far detector early next year. The author would like to thank the NO$\nu$A Collaboration for help in preparing the presentation and this proceeding. [9]{} D. S. Ayres [*et al.*]{} (NO$\nu$A Collaboration), arXiv:hep-ex/0503053. Y. Itow [*et al.*]{} (T2K Collaboration), arXiv:hep-ex/0106019. M. C. Sanchez [*et al.*]{} (LBNE DUSEL Collaboration), [*AIP Conf. Proc.*]{} [**1222**]{}, 479 (2010). G. S. Davies (NO$\nu$A Collaboration), DPF proceeding. “Data Acquisition System”, Chapter 15, NOvA Technical Design Report. http://qt.nokia.com/ http://ganglia.sourceforge.net/ S. 
Lein (NO$\nu$A Collaboration), DPF proceeding. M. Betancourt (NO$\nu$A Collaboration), DPF proceeding. [^1]: The TEC controller cools the APDs to -15$^\circ$C to keep the noise contribution from the photoconversion region small.
--- abstract: 'In global seismology, the Earth’s properties exhibit a fractal nature. Zygmund classes appear as the most appropriate and systematic way to measure this local fractality. For the purpose of seismic wave propagation, we model the Earth’s properties as Colombeau generalized functions. In one spatial dimension, we have a precise characterization of Zygmund regularity in Colombeau algebras. This is made possible via a relation between mollifiers and wavelets.' author: - | Günther Hörmann and Maarten V. de Hoop\ *Department of Mathematical and Computer Sciences*,\ *Colorado School of Mines, Golden CO 80401* title: 'Geophysical modelling with Colombeau functions: Microlocal properties and Zygmund regularity' --- Introduction ============ *Wave propagation in highly irregular media*. In global seismology one encounters (hyperbolic) partial differential equations whose coefficients have to be considered generalized functions; in addition, the source mechanisms in such applications are highly singular in nature. The coefficients model the (elastic) properties of the Earth, and their singularity structure arises from geological and physical processes. These processes are believed to reflect themselves in a multi-fractal behavior of the Earth’s properties. Zygmund classes appear as the most appropriate and systematic way to measure this local fractality (cf. [@Holschneider:95 Chap.4]). *The modelling process and Colombeau algebras*. In the seismic transmission problem, the diagonalization of the first order system of partial differential equations and the transformation to the second order wave equation requires differentiation of the coefficients. Therefore, highly discontinuous coefficients will appear naturally although the original model medium varies continuously. 
However, embedding the fractal coefficient first into the Colombeau algebra ensures the equivalence after transformation and yields unique solvability if the regularization scaling $\ga$ is chosen appropriately (cf. [@LO:91; @O:89; @HdH:01]). We use the framework and notation (in particular, $\G$ for the algebra and $\A_N$ for the mollifier sets) of Colombeau algebras as presented in [@O:92]. An interesting aspect of the use of Colombeau theory in wave propagation is that it leads to a natural control over, and understanding of, ‘scale’. In this paper, we focus on this modelling process. Basic definitions and constructions =================================== Review of Zygmund spaces ------------------------ We briefly review homogeneous and inhomogeneous Zygmund spaces, ${\ensuremath{\dot{C}_*}}^s(\R^m)$ and ${\ensuremath{C_*}}^s(\R^m)$, via a characterization in pseudodifferential operator style which follows essentially the presentation in [@Hoermander:97], Sect. 8.6. Alternatively, for practical and implementation issues one may prefer the characterization via growth properties of the discrete wavelet transform using orthonormal wavelets (cf. [@Meyer:92]). Classically, the Zygmund spaces were defined as extensions of Hölder spaces by boundedness properties of difference quotients. Within the systematic and unified approach of Triebel (cf. [@Triebel:I; @Triebel:II]) we can simply identify the Zygmund spaces in a scale of inhomogeneous and homogeneous (quasi) Banach spaces, $B^s_{p q}$ and $\dot{B}^s_{p q}$ ($s\in\R$, $0 < p, q \leq \infty$), by ${\ensuremath{C_*}}^s(\R^m) = B^s_{\infty \infty}(\R^m)$ and ${\ensuremath{\dot{C}_*}}^s(\R^m) = \dot{B}^s_{\infty \infty}(\R^m)$. Both ${\ensuremath{C_*}}^s(\R^m)$ and ${\ensuremath{\dot{C}_*}}^s(\R^m)$ are Banach spaces. To emphasize the close relation with mollifiers we describe a characterization of Zygmund spaces in pseudodifferential operator style in more detail. 
Let $0 < a < b$ and choose $\vphi_0\in\D(\R)$, $\vphi_0$ symmetric and positive, $\vphi_0(t) = 1$ if $|t| < a$, $\vphi_0(t) = 0$ if $|t| > b$, and $\vphi_0$ strictly decreasing in the interval $(a,b)$. Putting $\vphi(\xi) = \vphi_0(|\xi|)$ for $\xi\in\R^m$ then defines a function $\vphi\in\D(\R^m)$. Finally we set $$\psi(\xi) = - \inp{\xi}{\grad\vphi(\xi)}$$ and note that if $a < |\xi| < b$ then $\psi(\xi) = - \vphi_0'(|\xi|) |\xi| > 0$. We denote by ${\cal M}(\R^m)$ the set of all pairs $(\vphi,\psi)\in\D(\R^m)^2$ that are constructed as above (we usually suppress the dependence of ${\cal M}$ on $a$ and $b$ in the notation). We are now in a position to state the characterization theorem for the inhomogeneous Zygmund spaces as subspaces of $\S'(\R^m)$. It follows from [@Triebel:88], Sec. 2.3, Thm. 3 or, alternatively, from [@Hoermander:97], Sec. 8.6. Note that all pseudodifferential operators appearing in the following have $x$-independent symbols and are thus given simply by convolutions. \[inh\_Z\] Assume that $a \leq 1/4$ and $b \geq 4$ and choose $(\vphi,\psi)\in {\cal M}(\R^m)$ arbitrary. Let $s\in\R$; then $u\in\S'(\R^m)$ belongs to the inhomogeneous Zygmund space ${\ensuremath{C_*}}^s(\R^m)$ of order $s$ if and only if $${\ensuremath{|u|_{{\ensuremath{C_*}}^{s}}}} := \linf{\vphi(D)u} + \sup\limits_{0 < t < 1}\Big( t^{-s} \linf{\psi(tD)u}\Big) < \infty .$$ (Note that we made use of the modification for $q=\infty$ in [@Triebel:88], equ. (82).) \[Z\_rem\] 1. ${\ensuremath{|u|_{{\ensuremath{C_*}}^{s}}}}$ defines an equivalent norm on ${\ensuremath{C_*}}^s$. In fact, that all norms defined as above by some $(\vphi,\psi)\in{\cal M}(\R^m)$ are equivalent can be seen as in [@Hoermander:97], Lemma 8.6.5. 2. If $s\in \R_+ \setminus \N$ then $C^s_*(\R^m)$ is the classical Hölder space of regularity $s$. 
Denoting by ${\ensuremath{\lfloor s \rfloor}}$ the greatest integer less than $s$ it consists of all ${\ensuremath{\lfloor s \rfloor}}$ times continuously differentiable functions $f$ such that $\d^\al f$ is bounded when $|\al| \leq {\ensuremath{\lfloor s \rfloor}}$ and globally Hölder continuous with exponent $s-{\ensuremath{\lfloor s \rfloor}}$ if $|\al| = {\ensuremath{\lfloor s \rfloor}}$. 3. Due to the term $\linf{\vphi(D)u}$ the norm ${\ensuremath{|u|_{{\ensuremath{C_*}}^{s}}}}$ is not homogeneous with respect to a scale change in the argument of $u$. 4. If $u\in\L^\infty(\R^m)$ then (cf. [@Hoermander:97], Sect. 8.6) $$u(x) = \vphi(D)u(x) + \int\limits_1^\infty \psi(D/t)u(x) \frac{dt}{t} \qquad \text{ for almost all } x .$$ Using $\vphi(\xi) = \int_0^1 \psi(\xi/t)/t \,dt$ this can be rewritten in the form $u(x) = \int_0^\infty \psi(D/t)u(x)/t \,dt$ and resembles Calderon’s classical identity in terms of a continuous wavelet transform (cf. [@Meyer:92], Ch. 1, (5.9) and (5.10)). 5. In a similar way one can characterize the homogeneous Zygmund spaces as subspaces of $\S'(\R^m)$ modulo the polynomials ${\cal P}$. A proof can be found in [@Triebel:82], Sec. 3.1, Thm. 1. We may identify $\S'/{\cal P}$ with the dual space $\S_0'(\R^m)$ of $\S_0(\R^m) = \{ f\in\S(\R^m) \mid \d^\al \FT{f}(0) = 0 \, \forall \al\in\N_0^m \}$, the Schwartz functions with vanishing moments, by mapping the class $u+{\cal P}$ with representative $u\in\S'$ to $u\mid_{\S_0}$. Assume that $a \leq 1/4$ and $b \geq 4$ and choose $\psi\in\D(\R^m)$ as constructed above and let $s\in\R$ and $u\in\S'(\R^m)$. Then $u\!\!\mid_{\S_0}$ belongs to the homogeneous Zygmund space ${\ensuremath{\dot{C}_*}}^s(\R^m)$ of order $s$ if and only if $${\ensuremath{|u|_{{\ensuremath{\dot{C}_*}}^{s}}}} := \sup\limits_{0 < t < \infty}\Big( t^{-s} \linf{\psi(tD)u}\Big) < \infty .$$ (Note that we use the modification for $q=\infty$ in [@Triebel:82], equ. (16).) 
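The identity $\vphi(\xi) = \int_0^1 \psi(\xi/t)/t \,dt$ used in item 4 of remark \[Z\_rem\] is a routine consequence of the definition of $\psi$; we record the one-line verification here for completeness.

```latex
% Verification of \vphi(\xi) = \int_0^1 \psi(\xi/t)\,dt/t for \xi \neq 0:
% by the chain rule and the definition \psi(\xi) = -\inp{\xi}{\grad\vphi(\xi)},
\frac{d}{dt}\,\vphi\Big(\frac{\xi}{t}\Big)
  = -\frac{1}{t}\,\inp{\frac{\xi}{t}}{\grad\vphi\Big(\frac{\xi}{t}\Big)}
  = \frac{1}{t}\,\psi\Big(\frac{\xi}{t}\Big),
\qquad\text{hence}\qquad
\int_0^1 \psi\Big(\frac{\xi}{t}\Big)\,\frac{dt}{t}
  = \vphi(\xi) - \lim_{t\to 0^+}\vphi\Big(\frac{\xi}{t}\Big)
  = \vphi(\xi),
```

since $|\xi|/t \to \infty$ as $t \to 0^+$ and $\vphi$ has compact support. At $\xi = 0$ the identity fails ($\psi(0) = 0$ while $\vphi(0) = 1$), consistent with the “for almost all $x$” qualification in item 4.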
The continuous wavelet transform -------------------------------- Following [@Holschneider:95] we call a function $g\in\L^1(\R^m)\cap\L^\infty(\R^m)$ with $\int g = 0$ a *wavelet*. We shall say that it is a *wavelet of order $k$* ($k\in\N_0$) if the moments up to order $k$ vanish, i.e., $\int x^\al g(x) dx = 0$ for $|\al|\leq k$. The (continuous) wavelet transform is defined for $f\in\L^p(\R^m)$ ($1\leq p \leq \infty$) by ($\eps > 0$) $$\label{wf_trafo} W_g f(x,\eps) = \int\limits_{\R^m} f(y) \frac{1}{\eps^m} \bar{g}(\frac{y-x}{\eps}) \, dy = f * (\bar{\check{g}})_\eps (x)$$ where we have used the notation $\check{g}(y) = g(-y)$ and $g_\eps(y) = g(y/\eps)/\eps^m$. By Young’s inequality $W_g f(.,\eps)$ is in $\L^p$ for all $\eps > 0$ and $W_g$ defines a continuous operator on this space for each $\eps$. If $g\in C_c(\R^m)$ we can define $W_g f$ for $f\in\L^1_{\text{loc}}(\R^m)$ directly by the same formula (\[wf\_trafo\]). If $g\in\S_0(\R^m)$ then $W_g$ can be extended to $\S'(\R^m)$ as the adjoint of the wavelet synthesis (cf. [@Holschneider:95], Ch. 1, Sects. 24, 25, and 30) or directly by $\S'$-$\S$-convolution in formula (\[wf\_trafo\]). If $f$ is a polynomial and $g\in\S_0$ it is easy to see that $W_g f = 0$. In fact, $f$, $g$, and $W_g f$ are in $\S'$ and $\FT{(W_g f(.,\eps))} = \FT{f} \bar{\FT{g}}(\eps .)$. Since $g$ is in $\S_0$ the Fourier transform $\FT{g}(\eps .)$ is smooth and vanishes to infinite order at $0$. But $\FT{f}$ has to be a linear combination of derivatives of $\de_0$, implying $\FT{f} \bar{\FT{g}}(\eps .) = 0$. Therefore the wavelet transform ‘is blind to polynomial parts’ of the analyzed function (or distribution) $f$. In terms of geophysical modelling this means that a polynomially varying background medium is filtered out automatically. 
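This blindness to polynomial parts can be checked numerically. The sketch below applies formula (\[wf\_trafo\]) with the Mexican-hat wavelet $g(u) = (1-u^2)e^{-u^2/2}$ (a wavelet of order $1$: its zeroth and first moments vanish); the wavelet choice, grid, and tolerances are our own illustration, not part of the cited characterizations.

```python
# Numerical check that W_g f = 0 for affine f when g is the Mexican hat
# (order-1 wavelet), and that a non-smooth profile responds: for f(y) = |y|
# one computes exactly W_g f(0, eps) = -2*eps, an O(eps^1) decay consistent
# with Lipschitz (Hoelder exponent 1) regularity. Midpoint-rule quadrature
# on an illustrative grid; the integrand decays fast enough that truncation
# at |y| = 20 is negligible.
import math

def mexican_hat(u):
    return (1.0 - u * u) * math.exp(-u * u / 2.0)

def wavelet_transform(f, x0, eps, lo=-20.0, hi=20.0, dx=1e-3):
    # W_g f(x0, eps) = int f(y) g((y - x0)/eps) / eps dy  (g real and even)
    n = int(round((hi - lo) / dx))
    total = 0.0
    for i in range(n):
        y = lo + (i + 0.5) * dx
        total += f(y) * mexican_hat((y - x0) / eps) / eps
    return total * dx
```

For example, `wavelet_transform(lambda y: 2.0*y + 3.0, 1.0, 0.5)` vanishes up to quadrature error, while `wavelet_transform(abs, 0.0, 0.5)` is close to $-2\eps = -1$.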
Wavelets from mollifiers ------------------------ The Zygmund class characterization in Theorem \[inh\_Z\] (and remark \[Z\_rem\],(v)) used asymptotic estimates of scaled smoothings of the distribution, which resembles typical mollifier constructions in Colombeau theory. In this subsection we relate this in turn directly to the wavelet transform, obtaining the well-known wavelet characterization of Zygmund spaces. Let $\chi\in\S(\R^m)$ with $\int \chi = 1$ and define the function $\mu$ by $$\label{mo_to_wv} \ovl{\check{\mu}(x)} := m \chi(x) + \inp{x}{\grad\chi(x)} \, .$$ Then $\mu$ is in $\S(\R^m)$ and is a wavelet since a simple integration by parts shows that $$\begin{gathered} (-1)^{|\al|} \ovl{\int \mu(x) x^\al \, dx} = \int \bar{\check{\mu}}(x) (-x)^\al \, dx \\ = (-1)^{|\al|+1} \int x^\al \chi(x)\, dx \sum_{j=1}^m \al_j = (-1)^{|\al|+1} |\al| \int x^\al \chi(x)\, dx \; .\end{gathered}$$ Hence $\int \mu = 0$, and if $|\al| > 0$ we have $\int x^\al \mu(x)\,dx = 0$ if and only if $\int x^\al \chi(x) \,dx = 0$. Therefore $\mu$ defined by (\[mo\_to\_wv\]) is a wavelet of order $N$ if and only if the mollifier $\chi$ has vanishing moments of order $1 \leq |\al| \leq N$. Furthermore, by straightforward computation, we have $$\label{mo_to_wv_2} (\bar{\check{\mu}})_\eps(x) = -\eps \diff{\eps} \big(\chi_\eps (x)\big)$$ yielding an alternative version of (\[mo\_to\_wv\]): $\bar{\check{\mu}}(x) = - \diff{\eps}\big(\chi_\eps(x)\big)\mid_{\eps=1}$. If $(\vphi,\psi)\in{\cal M}(\R^m)$ is arbitrary and $\chi$, $\mu$ are the unique Schwartz functions such that $\FT{\chi} = \vphi$ and $\FT{\mu} = \psi$, then a straightforward computation shows that $\mu$ and $\chi$ satisfy the relation (\[mo\_to\_wv\]). 
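As a concrete illustration of (\[mo\_to\_wv\]) (our own choice of mollifier, not one taken from the cited references), consider $m = 1$ and the Gaussian $\chi(x) = (2\pi)^{-1/2} e^{-x^2/2}$:

```latex
% m = 1, Gaussian mollifier: since \chi'(x) = -x\,\chi(x),
\ovl{\check{\mu}(x)} = \chi(x) + x\,\chi'(x)
  = \frac{1}{\sqrt{2\pi}}\,\big(1 - x^{2}\big)\,e^{-x^{2}/2},
```

i.e. the classical Mexican-hat wavelet (real and even, so $\mu = \ovl{\check{\mu}}$). Consistent with the moment computation above, $\int x \chi(x)\,dx = 0$ but $\int x^2 \chi(x)\,dx = 1 \neq 0$, so this $\mu$ is a wavelet of order $1$ but not of order $2$.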
Therefore, since $\mu$ is then a real-valued and even wavelet, we have for $u\in\S'$ $$\psi(tD)u(x) = t^{-m} \bar{\check{\mu}}(\frac{.}{t})*u(x) = W_\mu u(x,t) \; .$$ Hence the distributions $u$ in the Zygmund class ${\ensuremath{C_*}}^s(\R^m)$ can be characterized in terms of a wavelet transform and a smoothing pseudodifferential operator by $ \linf{\vphi(D)u} < \infty$ and $ \sup_{0 < t < 1} \Big( t^{-s} \linf{W_\mu u(.,t)}\Big) < \infty$. We have shown: Let $(\FT{\chi},\FT{\mu})\in{\cal M}(\R^m)$. A distribution $u\in\S'(\R^m)$ belongs to the Zygmund class ${\ensuremath{C_*}}^s(\R^m)$ if and only if $$\label{Z_W_char} \linf{u * \chi} < \infty \quad \text{and} \quad \linf{W_\mu u(.,r)} = O(r^s) \;\; (r \to 0) .$$ \[W\_rem\] 1. Observe that the condition on $\FT{\chi}$ implies that $\chi$, and hence $\mu$, can never have compact support. If this characterization is to be used in a theory of Zygmund regularity detection within Colombeau algebras, one has to allow for mollifiers of this kind in the corresponding embedding procedures. This is the issue of the following subsection. Nevertheless we note here that, according to remarks in [@Jaffard:97a (2.2) and (3.1)] and, more precisely, in [@Meyer:98 Ch.3], the restrictions on the wavelet itself in a characterization of type (\[Z\_W\_char\]) may be considerably relaxed, depending on the generality one wishes to allow for the analyzed distribution $u$. However, in case $m=1$ and $u$ a function, a flexible and direct characterization (due to Holschneider and Tchamitchian) can be found in [@Daubechies:92], Sect. 2.9, or [@Holschneider:95], Sect. 4.2. 2. There are more refined results in the spirit of the above theorem describing local Hölder (Zygmund) regularity by growth properties of the wavelet transform (cf. in particular [@Holschneider:95], Sect. 4.2, [@JM:96], and [@Jaffard:97a]). 3. 
The counterpart of (\[Z\_W\_char\]) for $\L^1_{\text{loc}}$-functions in terms of (discrete) multiresolution approximations is [@Meyer:92], Sect. 6.4, Thm. 5.

Colombeau modelling and wavelet transform
=========================================

Embedding of temperate distributions
------------------------------------

We consider a variant of the Colombeau embedding $\iota_\chi^\ga : \D'(\R^m) \to \G(\R^m)$ that was discussed in [@HdH:01], subsect. 3.2. As indicated in remark \[W\_rem\],(i) we need to allow for mollifiers with noncompact support in order to gain the flexibility of using wavelet-type arguments for the extraction of regularity properties from asymptotic estimates. On the side of the embedded distributions this forces us to restrict to $\S'$, a space still large enough for the geophysically motivated coefficients in model PDEs. Recall ([@HdH:01], Def. 11) that an admissible scaling is defined to be a continuous function $\ga : (0,1) \to \R_+$ such that $\ga(r) = O(1/r)$, $\ga(r) \to\infty$, and $\ga(sr) = O(\ga(r))$ if $0<s<1$ (fixed) as $r\to 0$. Let $\ga$ be an admissible scaling and $\chi\in\S(\R^m)$ with $\int \chi = 1$; then we define $\iota_\chi^\ga : \S'(\R^m) \to \G(\R^m)$ by $$\iota_\chi^\ga(u) = \cl{(u * \chi^\ga(\phi,.))_{\phi\in\A_0(\R^m)}} \qquad u\in\S'(\R^m)$$ where $$\chi^\ga(\phi,x) = \ga(l(\phi_0))^m \chi(\ga(l(\phi_0)) x) \quad \text{ if } \phi = \phi_0\otimes \cdots \otimes\phi_0 \quad \text{ with } \phi_0\in\A_0 .$$ $\iota_\chi^\ga$ is well-defined since $(\phi,x) \to u*\chi^\ga(\phi,x)$ is clearly moderate and negligibility is preserved under this scaled convolution. By abuse of notation we will write $\iota_\chi^\ga(u)(\phi,x)$ for the standard representative of $\iota_\chi^\ga(u)$. The following statements describe properties of such a modelling procedure resembling the original properties used by M. Oberguggenberger in [@O:89], Prop. 1.5, to ensure unique solvability of symmetric hyperbolic systems of PDEs (cf. [@O:89; @LO:91]).
The definition of Colombeau functions of logarithmic and bounded type is given in [@O:92], Def. 19.2; the variation used below is an obvious extension.

1. $\iota_\chi^\ga : \S'(\R^m) \to \G(\R^m)$ is linear, injective, and commutes with partial derivatives.

2. $\forall u\in\S'(\R^m)$: $\iota_\chi^\ga(u) \approx u$.

3. If $u\in W^{-1}_\infty(\R^m)$ then $\iota_\chi^\ga(u)$ is of *$\ga$-type*, i.e., there is $N\in\N_0$ such that for all $\phi\in\A_N(\R^m)$ there exist $C > 0$ and $0 < \eta < 1$: $$\sup\limits_{y\in\R^m} | \iota_\chi^\ga(u)(\phi_\eps,y)| \leq N \ga(C\eps) \quad 0 < \eps < \eta .$$

4. If $u\in\L^\infty(\R^m)$ then $\iota_\chi^\ga(u)$ is of bounded type and its first order derivatives are of $\ga$-type.

*ad (i),(ii):* This is clear from $\chi_\eps := \chi^\ga(\phi_\eps,.) \to \de$ in $\S'$ as $\eps\to 0$ and the convolution formula.

*ad (iii):* Although this involves only marginal changes in the proof of [@O:89], Prop. 1.5(i), we recall it here to make the presentation more self-contained. Let $u = u_0 + \sum_{j=1}^m \d_j u_j$ with $u_j\in\L^\infty$ ($j=0,\ldots,m$); then, with $\ga_\eps := \ga(\eps l(\phi_0))$, $$\begin{gathered} |u*\chi_\eps(x)| \leq \linf{u_0 * \chi_\eps} + \sum_{j=1}^m \linf{u_j * \d_j(\chi_\eps)} \\ \leq \linf{u_0} \lone{\chi} + \ga_\eps \sum_{j=1}^m \linf{u_j} \lone{\d_j\chi} \\ = \ga_\eps \big( \frac{\linf{u_0} \lone{\chi}}{\ga_\eps} + \sum_{j=1}^m \linf{u_j} \lone{\d_j \chi} \big)\end{gathered}$$ where the expression within brackets on the r.h.s. is bounded by some constant $M$, depending on $u$ and $\chi$ only and independent of $\phi$, as soon as $\eps < \eta$ with $\eta$ chosen appropriately (and dependent on $M$, $u$, $\chi$, and $\phi$). Therefore the assertion is proved by putting $N \geq M$ and $C = l(\phi_0)$.
*ad (iv):* This is proved by similar reasoning.

In particular, we can model a fairly large class of distributions as Colombeau functions of logarithmic growth (or log-type), thereby ensuring unique solvability of hyperbolic PDEs incorporating such coefficients.

1. If $\ga(\eps) = \log(1/\eps)$ then $\iota_\chi^\ga(W^{-1,\infty}) \subseteq \{ U\in\G \mid U \text{ is of log-type } \}$ and $$\iota_\chi^\ga(\L^\infty) \subseteq \{ U\in\G \mid U \text{ of bounded type and } \d^\alpha U \text{ of log-type for } |\al| = 1 \} .$$

2. If $u\in W^{-k,\infty}(\R^m)$ for $k\in\N_0$ then $\iota_\chi^\ga(u)$ is of $\ga^k$-type. In particular, there is an admissible scaling $\ga$ such that $\iota_\chi^\ga(u)$ and all first order derivatives $\d_j \iota_\chi^\ga(u)$ ($j = 1,\ldots,m$) are of log-type.

Wave front sets under the embedding
-----------------------------------

One of the most important properties of the embedding procedure introduced in [@HdH:01] was its faithfulness with respect to the microlocal properties when ‘appropriately measured’ in terms of the set of $\ga$-regular Colombeau functions $\G_\ga^\infty(\R^m)$ ([@HdH:01], Def. 11). But there the proof of this microlocal invariance property heavily used the compact support of the standard mollifier $\chi$, which is no longer available in the current situation. In this subsection we show how to extend the invariance result to the new embedding procedure defined above.

Let $w\in\S'(\R^m)$, $\ga$ an admissible scaling, and $\chi\in\S(\R^m)$ with $\int \chi = 1$; then $$WF_g^\ga(\iota_\chi^\ga(w)) = WF(w) .$$

The necessary changes in the proof of [@HdH:01], Thm. 15, are minimal once we have established the following: If $\vphi\in\D(\R^m)$ and $v\in\S'(\R^m)$ with $\supp(\vphi) \cap \supp(v) = \emptyset$ then $\vphi\cdot\iota_\chi^\ga(v)\in\G_\ga^\infty$.
Using the short-hand notation $\chi_\eps = \chi^\ga(\phi_\eps,.)$ and $\ga_\eps = \ga(\eps l(\phi_0))$ we have $$\d^\be\big(\vphi (v*\chi_\eps)\big)(x) = \ga_\eps^m \sum_{\al\leq\be} \binom{\be}{\al} \d^{\be-\al}\vphi(x) \, \ga_\eps^{|\al|}\, \dis{v}{\d^{\al}\chi(\ga_\eps(x-.))} \; .$$ Hence we need to estimate terms of the form $\ga_\eps^{|\al|}\, \dis{v}{\d^{\al}\chi(\ga_\eps(x-.))}$ when $x\in\supp(\vphi) =: K$. Let $S$ be a closed set satisfying $\supp(v) \subset S \subset \R^m \setminus K$ and put $d = \text{dist}(S,K)>0$. Since $v$ is a temperate distribution there are $N\in\N$ and $C > 0$ such that $$\ga_\eps^{|\al|} |\dis{v}{\d^{\al}\chi(\ga_\eps(x-.))}| \leq C \ga_\eps^{|\al|} \sum_{|\sig|\leq N} \sup\limits_{y\in S} |\d^\sig\big( \d^\al \chi(\ga_\eps(x-y)) \big)| \, .$$ $\chi\in\S$ implies that each term in the sum on the right-hand side can be estimated for arbitrary $k\in\N$ by $$\ga_\eps^{|\sig|} \sup\limits_{y\in S} |\d^{\sig+\al}\chi(\ga_\eps(x-y))| \leq \ga_\eps^{|\sig|} \sup\limits_{y\in S} C_k (1+\ga_\eps|x-y|)^{-k} \leq C'_k \ga_\eps^{|\sig|-k}/ d^k$$ if $x$ varies in $K$. Since $|\al|+|\sig| \leq |\be|+ N$ we obtain $$\linf{\d^\be\big(\vphi (v*\chi_\eps)\big)} \leq C' \ga_\eps^{m+N +|\be|-k}$$ with a constant $C'$ depending on $k$, $v$, $\vphi$, $d$, and $\chi$, with $k$ still arbitrary. Choosing $k = |\be|$, for example, we conclude that $\vphi\cdot\iota_\chi^\ga(v)$ has growth of a uniform order in $\ga_\eps$ over all orders of derivatives. Hence it is a $\ga$-regular Colombeau function. Referring to the proof (and the notation) of [@HdH:01], Thm. 15, we may now finish the proof of the theorem simply by carrying out the following slight changes in the two steps of that proof.
*Ad step 1:* Choose $\psi\in\D$ such that $\psi = 1$ in a neighborhood of $\supp(\vphi)$ and write $$\vphi (w*\chi_\eps) = \vphi ((\psi w)*\chi_\eps) + \vphi \big(((1-\psi)w)*\chi_\eps\big) \; .$$ The first term on the right can be estimated by the same methods as in [@HdH:01] and the second term is $\ga$-regular by the lemma above.

*Ad step 2:* Rewrite $$\vphi w = \vphi \psi w = \vphi (\psi w - (\psi w)*\chi_\eps) + \vphi((\psi w)*\chi_\eps)$$ and observe that the reasoning of [@HdH:01] is applicable since $\Sigma_g^\ga(\vphi\, \iota_\chi^\ga(\psi w)) \subseteq \Sigma_g^\ga(\vphi\, \iota_\chi^\ga(w))$ by the above lemma.

The modelling procedure and wavelet transforms
----------------------------------------------

Simple wavelet-mollifier correspondences as in subsection 2.3 allow us to rewrite the Colombeau modelling procedure and hence prepare for the detection of the original Zygmund regularity in terms of growth properties in the scaling parameters. A first version describes $\iota_\chi^\ga$ directly but involves an additional nonhomogeneous term.

\[inhom\_lemma\] If $\chi\in\S(\R^m)$ has the properties $\int\chi = 1$ and $\int x^\al \chi(x) dx = 0$ ($0 < |\al| \leq N$) then $\bar{\check{\mu}} = -\diff{\eps}(\chi_\eps)\mid_{\eps=1}$ defines a wavelet of order $N$ and we have for any $f\in\S'(\R^m)$ $$\iota_\chi^\ga(f)(\phi,x) = f * \chi (x) + \!\!\!\! \int\limits_{1/\ga(l(\phi))}^{1}\!\!\!\! W_\mu f(x,r)\, \frac{dr}{r} \;.$$

Let $\eps > 0$; then eq. (\[mo\_to\_wv\_2\]) implies $W_\mu f(x,\eps) = f * (\bar{\check{\mu}})_\eps (x) = -\eps \diff{\eps}\big( f * \chi_\eps(x) \big)$ and integration with respect to $\eps$ from $1/\ga(l(\phi))$ to $1$ yields $$- \!\!\!\! \int\limits_{1/\ga(l(\phi))}^{1} \!\!\!\! W_\mu f(x,\eps) \frac{d\eps}{\eps} = f * \chi (x) - \iota_\chi^\ga(f)(\phi,x) \; .$$

A more direct mollifier-wavelet correspondence is possible via derivatives of $\iota_\chi^\ga$ instead.
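Before moving on, the integral identity of lemma \[inhom\_lemma\] can be checked numerically in one dimension. The choices $f(x)=\sin x$ and a Gaussian mollifier below are illustrative assumptions; the convolutions and the $r$-integral are evaluated by simple quadrature.

```python
import numpy as np

# 1d numerical check of lemma [inhom_lemma]: verify
#   iota(f) = f*chi + int_{1/gamma}^{1} W_mu f(x, r) dr / r
# for the illustrative choices f(x) = sin(x) and a Gaussian mollifier chi,
# with chi_t(y) = chi(y/t)/t the width-t mollifier and the wavelet kernel
# generated by  mubar = chi + y*chi'.
f = np.sin
y = np.linspace(-10.0, 10.0, 20001)
dy = y[1] - y[0]

def chi_t(t):                    # width-t Gaussian mollifier, integral 1
    return np.exp(-(y / t) ** 2) / (t * np.sqrt(np.pi))

def mubar_t(t):                  # (chi + u*chi')(y/t)/t
    u = y / t
    return (1.0 - 2.0 * u**2) * np.exp(-u**2) / (t * np.sqrt(np.pi))

def smooth(t, x0):               # (f * chi_t)(x0) by quadrature
    return float(np.sum(f(x0 - y) * chi_t(t)) * dy)

def wavelet(t, x0):              # W_mu f(x0, t) by quadrature
    return float(np.sum(f(x0 - y) * mubar_t(t)) * dy)

x0, gamma = 0.7, 8.0
r = np.linspace(1.0 / gamma, 1.0, 401)
vals = np.array([wavelet(t, x0) / t for t in r])
integral = float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(r)))

lhs = smooth(1.0 / gamma, x0)        # iota(f) at scaling parameter gamma
rhs = smooth(1.0, x0) + integral     # f*chi plus the wavelet-transform term
print(lhs, rhs)                      # the two sides agree
```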
\[hom\_lemma\] If $\chi\in\S(\R^m)$ with $\int \chi =1$ then for any $\al\in\N_0^m$ with $|\al| > 0$ $$\label{chi_al} \chi_\al(x) = \ovl{(\d^\al\chi)\check{\ }(x)}$$ is a wavelet of order $|\al|-1$ and for any $f\in\S'(\R^m)$ we have $$\d^\al \iota_\chi^\ga(f)(\phi,x) = \ga(l(\phi))^{|\al|}\, W_{\chi_\al} f (x,\frac{1}{\ga(l(\phi))}) \; .$$

Let $|\be| < |\al|$; then $\int x^\be D^\al\chi(x) \, dx = (-D)^\be(\xi^\al \FT{\chi}(\xi))\mid_{\xi=0} = 0$, which proves the first assertion. The second assertion follows from $$\d^\al \iota_\chi^\ga(f)(\phi) = \ga^{|\al|+m} f * \d^\al\chi(\ga .) = \ga^{|\al|} f * \ovl{\big(\ga^m (\d^\al\bar{\chi})\check{\ }(\ga .)\big)\check{\ }}$$ with the short-hand notation $\ga = \ga(l(\phi))$.

Both lemmas \[inhom\_lemma\] and \[hom\_lemma\] may be used to translate (global) Zygmund regularity of the modeled (embedded) distribution $f$ via Thm. \[Z\_W\_char\] into asymptotic growth properties with respect to the regularization parameter. To what extent this can be utilized to develop a faithful and completely intrinsic Zygmund regularity theory of Colombeau functions may be the subject of future research.

Zygmund regularity of Colombeau functions: the one-dimensional case
===================================================================

If we combine the basic ideas of the Zygmund class characterization in 2.3 with the simple observations in 3.3 we are naturally led to define a corresponding regularity notion intrinsically in Colombeau algebras as follows.

\[Z\_C\_def\] Let $\ga$ be an admissible scaling function and $s$ be a real number.
A Colombeau function $U\in\G(\R^m)$ is said to be *globally of $\ga$-Zygmund regularity $s$* if for all $\alpha\in\N_0^m$ there is $M\in\N_0$ such that for all $\phi\in\A_M(\R^m)$ we can find positive constants $C$ and $\eta$ such that $$\label{ZC_def} |\d^\al U(\phi_\eps,x)| \leq \begin{cases} C & \text{if $|\al| < s$}\\ C \;\ga_\eps^{|\al|-s} & \text{if $|\al| \geq s$} \end{cases} \qquad x\in\R^m, 0<\eps<\eta .$$ The set of all (globally) $\ga$-Zygmund regular Colombeau functions of order $s$ will be denoted by $\G_{*,\ga}^s(\R^m)$. A detailed analysis of $\G_{*,\ga}^s$ in arbitrary space dimensions and for not necessarily positive regularity $s$ will appear elsewhere. Here, as an illustration, we briefly study the case $m=1$ and $s>0$ in some detail. Concerning applications to PDEs this means that we are allowing for media of a typically fractal nature varying continuously in one space dimension. For example, one may think of a coefficient function $f$ in ${\ensuremath{C_*}}^s(\R)$ arising in the following ways.

1. Let $f$ be constant outside some interval $(-K,K)$ and equal to a typical trajectory of Brownian motion in $[-K,K]$; it is well known that with probability $1$ those trajectories are in ${\ensuremath{\dot{C}_*}}^s(\R)$ whenever $s < 1/2$. This is proved elegantly by wavelet transform methods in, e.g., [@Holschneider:95], Sect. 4.4.

2. We refer to [@Zygmund:68], Sect. V.3, for notions and notation in this example. Similarly to the above, one can set $f=0$ in $(-\infty,0]$, $f=1$ in $[2\pi,\infty)$ and in $[0,2\pi]$ let $f$ be Lebesgue’s singular function associated with a Cantor-type set of order $d\in\N$ with (constant) dissection ratio $0 < \xi < 1/2$. Then $f$ belongs to ${\ensuremath{C_*}}^s(\R)$ with $s = \log(d+1)/|\log(\xi)|$. (The classical triadic Cantor set corresponds to the case $d=2$ and $\xi = 1/3$.)
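Example 1 can be illustrated numerically: for a sampled Brownian path, the supremum of increments at scale $h$ grows roughly like $h^{1/2}$ (slightly worse, by the logarithmic correction of the law of the iterated logarithm), consistent with membership in ${\ensuremath{\dot{C}_*}}^s(\R)$ for every $s < 1/2$. Resolution, range of scales and the random seed below are illustrative assumptions.

```python
import numpy as np

# Numerical illustration of example 1: estimate the regularity exponent of
# a sampled Brownian path from the scaling of its increments,
#   sup_x |B(x+h) - B(x)| ~ h^s,  with s close to (slightly below) 1/2.
rng = np.random.default_rng(42)
n = 2**18
B = np.concatenate([[0.0], np.cumsum(rng.standard_normal(n))]) / np.sqrt(n)

lags = 2 ** np.arange(4, 13)                # lags in samples; h = lag / n
sup_inc = np.array([np.abs(B[L:] - B[:-L]).max() for L in lags])
h = lags / n

# least-squares slope of log(sup increment) against log(h)
slope = np.polyfit(np.log(h), np.log(sup_inc), 1)[0]
print(slope)   # a bit below 0.5, due to the logarithmic correction
```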
We have already seen that the Colombeau embedding does not change the microlocal structure (i.e., the $\ga$-wave front set) of the original distribution. We now show that the refined Zygmund regularity information is accurately preserved as well. If $n\in\N_0$ we denote by $C^n_b(\R)$ the set of all $n$ times continuously differentiable functions with the derivatives up to order $n$ bounded. Note that $C^n_b(\R)$ is a strict superset of ${\ensuremath{C_*}}^{n+1}(\R)$.

Let $\chi\in\A_0(\R)$ and $s>0$. Define $n\in\N_0$ such that $n < s \leq n+1$; then we have $$\iota_\chi^\ga(C^n_b(\R)) \cap \G_{*,\ga}^s(\R) = \iota_\chi^\ga({\ensuremath{C_*}}^s(\R)) .$$

In other words, in case $0< s <1$ we can precisely identify those Colombeau functions that arise from the Zygmund class of order $s$ within all embedded bounded continuous functions. We use the characterizations in [@Daubechies:92], Thms. 2.9.1 and 2.9.2, and the remarks on p. 48 following those; choosing a smooth compactly supported wavelet $g$ of order $n$ we may therefore state the following[^1]: $f\in C^n_b(\R)$ belongs to ${\ensuremath{C_*}}^s(\R)$ if and only if there is $C>0$ such that $$\label{Daub} |W_g f(x,r)| \leq C r^s \quad \text{ for all } x .$$ Now the proof is straightforward. First let $f\in{\ensuremath{C_*}}^s(\R)$. If $|\al| \leq n$ then $|\d^\al \iota_\chi^\ga(f)| = |\iota_\chi^\ga(\d^\al f)| \leq \linf{\d^\al f} \lone{\chi}$ by Young’s inequality. If $|\al| > n$ we use lemma \[hom\_lemma\] and set $\ga_\eps = \ga(\eps l(\phi))$ to obtain $$\d^\al \iota_\chi^\ga(f)(\phi_\eps,x) = \ga_\eps^{|\al|} W_{\chi_\al} f(x,\ga_\eps^{-1})$$ where $\chi_\al$ is a wavelet of order at least $n$. Hence (\[Daub\]) gives an upper bound $C \ga_\eps^{|\al| - s}$ uniformly in $x$, so that (\[ZC\_def\]) follows.
Finally, if we know that $f\in C^n_b(\R)$ and $\iota_\chi^\ga(f) \in \G_{*,\ga}^s(\R)$, then combining (\[ZC\_def\]) and lemma \[hom\_lemma\] gives, for $|\al| \geq s$, $$|\ga_\eps^{|\al|} W_{\chi_\al} f(x,\ga_\eps^{-1})| = |\d^\al \iota_\chi^\ga(f)(\phi_\eps,x)| \leq C \ga_\eps^{|\al| - s}$$ uniformly in $x$. Another application of (\[Daub\]) then proves the assertion.

I. Daubechies. Ten Lectures on Wavelets. Number 61 in CBMS. SIAM, Philadelphia, 1992.

M. Holschneider. Wavelets: An Analysis Tool. Oxford University Press, New York, 1995.

L. Hörmander. Lectures on Nonlinear Hyperbolic Differential Equations. Springer-Verlag, Berlin Heidelberg, 1997.

G. Hörmann and M. V. de Hoop. Microlocal analysis and global solutions of some hyperbolic equations with discontinuous coefficients, 2001.

S. Jaffard. Multifractal formalism for functions, part I: results valid for all functions. SIAM J. Math. Anal., 28(4):944–970, 1997.

S. Jaffard and Y. Meyer. Wavelet methods for pointwise regularity and local oscillations of functions. Mem. Amer. Math. Soc., 123(587), 1996.

F. Lafon and M. Oberguggenberger. Generalized solutions to symmetric hyperbolic systems with discontinuous coefficients: the multidimensional case. J. Math. Anal. Appl., 160:93–106, 1991.

Y. Meyer. Wavelets and Operators. Cambridge Studies in Advanced Mathematics 37. Cambridge University Press, Cambridge, 1992.

Y. Meyer. Wavelets, Vibrations and Scalings. CRM Monograph Series 9. American Mathematical Society, Providence, 1998.

M. Oberguggenberger. Hyperbolic systems with discontinuous coefficients: generalized solutions and a transmission problem in acoustics. J. Math. Anal. Appl., 142:452–467, 1989.

M. Oberguggenberger. Multiplication of Distributions and Applications to Partial Differential Equations. Pitman Research Notes in Mathematics 259. Longman Scientific & Technical, 1992.

H. Triebel. Characterizations of Besov-Hardy-Sobolev spaces via harmonic functions, temperatures, and related means. J. Approx. Theory, 35:275–297, 1982.

H. Triebel. Theory of Function Spaces. Akademische Verlagsgesellschaft Geest & Portig and Birkhäuser Verlag, Leipzig and Basel, 1983.

H. Triebel. Characterizations of Besov-Hardy-Sobolev spaces: a unified approach. J. Approx. Theory, 52:162–203, 1988.

H. Triebel. Theory of Function Spaces II. Birkhäuser Verlag, Basel, 1992.
A. Zygmund. Trigonometric Series. Cambridge University Press, London, New York, 1968.

[^1]: Note that we do not use the wavelet scaling convention adapted to $\L^2$-spaces here.
---
abstract: 'We investigate scalar perturbations from inflation in braneworld cosmologies with extra dimensions. For this we calculate scalar metric fluctuations around five dimensional warped geometry with four dimensional de Sitter slices. The background metric is determined self-consistently by the (arbitrary) bulk scalar field potential, supplemented by the boundary conditions at both orbifold branes. Assuming that the inflating branes are stabilized (by the brane scalar field potentials), we estimate the lowest eigenvalue of the scalar fluctuations – the radion mass. In the limit of flat branes, we reproduce well known estimates of the positive radion mass for stabilized branes. Surprisingly, however, we found that for de Sitter (inflating) branes the square of the radion mass is typically negative, which leads to a strong tachyonic instability. Thus, parameters of stabilized inflating braneworlds must be constrained to avoid this tachyonic instability. Instability of “stabilized” de Sitter branes is confirmed by the [BraneCode]{} numerical calculations in the accompanying paper [@branecode]. If the model’s parameters are such that the radion mass is smaller than the Hubble parameter, we encounter a new mechanism of generation of primordial scalar fluctuations, which have a scale-free spectrum and acceptable amplitude.'
author:
- 'Andrei V. Frolov'
- Lev Kofman
title: 'Can Inflating Braneworlds be Stabilized?'
---

Introduction
============

One of the most interesting recent developments in high energy physics has been the picture of braneworlds. Higher dimensional formulations of braneworld models in superstring/M theory, supergravity and phenomenological models of the mass hierarchy have the most obvious relevance to cosmology. In application to the very early universe this leads to braneworld cosmology, where our 3+1 dimensional universe is a 3d curved brane embedded in a higher-dimensional bulk [@review].
Early universe inflation in this picture corresponds to 3+1 (quasi) de Sitter brane geometry, so that the background geometry is simply described by the five dimensional warped metric with four dimensional de Sitter slices $$\label{warp} ds^2 = a^2(w)\left[dw^2 - dt^2 + e^{2Ht}d\vec{x}^2\right].$$ For simplicity we use spatially flat slicing of the de Sitter metric $ds^2_4$. The conformal warp factor $a(w)$ is determined self-consistently by the five-dimensional Einstein equations, supplemented by the boundary conditions at two orbifold branes. We assume the presence of a single bulk scalar field $\varphi$ with the potential $V(\varphi)$ and self-interaction potentials $U_\pm(\varphi)$ at the branes. The potentials can be pretty much arbitrary as long as the phenomenology of the braneworld is acceptable. The class of metrics (\[warp\]) with bulk scalars and two orbifold branes covers many interesting braneworld scenarios including the Hořava-Witten theory [@HW; @Lukas], the Randall-Sundrum model [@RS1; @RS2] with phenomenological stabilization of branes [@GW; @Dewolfe], supergravity with domain walls, and others [@FTW; @FFK]. We will consider models where by the choice of the bulk/brane potentials the inter-brane separation (the so-called radion) can be fixed, i.e. models in which branes could in principle be stabilized. The theory of scalar fluctuations around flat stabilized branes, involving bulk scalar field fluctuations $\delta\varphi$, scalar 5d metric fluctuations and brane displacements, is well understood [@Tanaka:2000er]. Similar to Kaluza-Klein (KK) theories, the extra-dimensional dependence can be separated out, and the problem is reduced to finding the eigenvalues of a second-order differential equation for the extra-dimensional ($w$-dependent) part of the fluctuation eigenfunctions subject to the boundary conditions at the branes. 
The lowest eigenvalue corresponds to the radion mass, which is positive $m^2>0$ and exceeds the TeV scale or so [@Csaki:1999mp]. Tensor fluctuations around flat stabilized branes are also stable. Brane inflation, like all inflationary models, generates long wavelength cosmological perturbations from the vacuum fluctuations of all light degrees of freedom (i.e. those with mass less than the Hubble parameter $H$). The theory of metric fluctuations around the background geometry (\[warp\]) with inflating (de Sitter) branes is more complicated than that for flat branes. For tensor fluctuations (gravitational waves), the lowest eigenvalue of the extra dimensional part of the tensor eigenfunction is zero, $m=0$, which corresponds to the usual 4d graviton. As was shown in [@LMW; @gw], massive KK gravitons have a gap in the spectrum; the universal lower bound on the mass is $m \ge \sqrt{3 \over 2}\, H$. This means that massive KK tensor modes are not generated from brane inflation. Massless scalar and vector projections of the bulk gravitons are absent, so only the massless 4d tensor mode is generated. Scalar cosmological fluctuations from inflation in the braneworld setting (\[warp\]) have been considered in many important works [@Mukohyama:2000ui; @Kodama:2000fa; @Langlois:2000ia; @vandeBruck:2000ju; @Koyama:2000cc; @Deruelle:2000yj; @Gen:2000nu; @Mukohyama:2001ks]. The theory of scalar perturbations in braneworld inflation with bulk scalars is even more complicated than for tensor perturbations. This is because one has to consider 5d scalar metric fluctuations and brane displacements induced not only by the bulk scalar field fluctuations $\delta\varphi$, but also by the fluctuations $\delta \chi$ of the inflaton scalar field $\chi$ living at the brane. In fact, most papers on scalar perturbations from brane inflation concentrated mainly on the inflaton fluctuations $\delta \chi$, while the bulk scalar fluctuations were not included.
This was partly because earlier papers on brane inflation considered a single brane embedded in an AdS background without a bulk scalar field, and partly because for braneworlds with two stabilized branes there was an expectation that the fluctuations of the bulk scalar would be massive and thus would not be excited during inflation. In this letter we focus on the bulk scalar field fluctuations, assuming for the sake of simplicity that the inflaton fluctuations $\delta \chi$ are subdominant. We consider a relatively simple problem of scalar fluctuations around curved (de Sitter) branes, involving only bulk scalar field fluctuations $\delta\varphi$. We find the extra-dimensional eigenvalues of the scalar fluctuations subject to boundary conditions at the branes, focusing especially on the radion mass $m^2$ for the inflating branes. In particular, we investigate the presence or absence of a gap in the KK spectrum of scalar fluctuations in view of the tensor mode result. Our results are a generalization of the known results for flat stabilized branes [@Tanaka:2000er], which we reproduce in the limit of flat branes, $H \to 0$.

Bulk Equations
==============

The five-dimensional braneworld models with a scalar field in the bulk are described by the action $$\begin{aligned} \label{eq:action} S &=& M_5^3 \int \sqrt{-g}\, d^5 x\, \left\{R - (\nabla\varphi)^2 - 2V(\varphi)\right\} \nonumber\\ && -2 M_5^3 \sum \int \sqrt{-q}\, d^4 x\, \left\{ [{{\cal K}}] + U(\varphi)\right\},\end{aligned}$$ where the first term corresponds to the bulk and the sum contains contributions from each brane. The jump of the extrinsic curvature $[{\cal K}]$ provides the junction conditions across the branes (see equation (\[eq:jc\]) below). Variation of this action gives the bulk Einstein $G_{AB}=T_{AB}(\varphi)$ and scalar field $\Box\varphi=V_{,\varphi}$ equations.
For the (stationary) warped geometry (\[warp\]) they are \[eq:bg\] $$\begin{aligned} &\displaystyle \varphi'' + 3\frac{a'}{a} \varphi' - a^2 V' = 0,&\label{eq:bg:phi}\\ &\displaystyle \frac{a''}{a} = 2\, \frac{a'^2}{a^2} - H^2 - \frac{\varphi'^2}{3},&\label{eq:bg:a}\\ &\displaystyle 6\left(\frac{a'^2}{a^2} - H^2\right) = \frac{\varphi'^2}{2} - a^2 V,&\label{eq:bg:c}\end{aligned}$$ where the prime denotes the derivative with respect to the extra dimension coordinate $w$. The first two equations are dynamical, and the last is a constraint. The solutions of equations (\[eq:bg\]) were investigated in detail in [@FFK]. Now we consider scalar fluctuations around the background (\[warp\]). The perturbed metric can be written in the longitudinal gauge as $$\label{eq:metric:pert} ds^2 = a(w)^2 \left[(1+2\Phi) dw^2 + (1+2\Psi)ds_4^2\right].$$ The linearized bulk Einstein equations and scalar field equation relate two gravitational potentials $\Phi(x^A)$, $\Psi(x^A)$ and bulk scalar field fluctuations $\delta\varphi(x^A)$. The off-diagonal Einstein equations require that $$\Psi = - \frac{\Phi}{2},$$ similar to four-dimensional cosmology, although the coefficient is different. The symmetry of the background guarantees separation of variables, so that perturbations can be decomposed with respect to four-dimensional scalar harmonics, e.g. $$\label{eq:sep} \Phi(x^A) = \sum\limits_m \Phi_m(w) Q_m(t, \vec x),$$ where the eigenvalues $m$ (constant of separation) appear as the four-dimensional masses ${^4}\Box Q_m = m^2 Q_m$, where ${^4}\Box$ is the D’Alembert operator on the 4d de Sitter slice. The four-dimensional massive scalar harmonics $Q_m$ can be further decomposed as $Q_m(t,\vec x) = \int f_k^{(m)}(t)\, e^{i \vec k \vec x}\, d^3k$. The temporal mode functions $f_k^{(m)}(t)$ obey the equation $$\label{eq:4} \ddot{f} + 3H\dot{f} + \left( e^{-2Ht}k^2 + m^2 \right) f = 0,$$ where dot denotes time derivative, and we dropped the labels $k$ and $m$ for brevity. 
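As a consistency check of the background system (\[eq:bg\]), note that the constraint (\[eq:bg:c\]) is preserved by the two dynamical equations (its $w$-derivative is proportional to the constraint itself). The sketch below integrates the system for a toy bulk potential and toy initial data, both assumptions made purely for illustration, and monitors the constraint violation along $w$.

```python
import numpy as np

# Integrate the background system (eq:bg) for an illustrative toy bulk
# potential and initial data (assumptions made for this sketch only), and
# monitor the constraint (eq:bg:c), which the dynamical equations preserve.
# State vector: y = (a, a', phi, phi').
H = 0.5
V = lambda p: -1.0 + 0.1 * p**2     # toy bulk potential
dV = lambda p: 0.2 * p

def rhs(y):
    a, da, p, dp = y
    return np.array([da,
                     2.0 * da**2 / a - a * H**2 - a * dp**2 / 3.0,
                     dp,
                     -3.0 * (da / a) * dp + a**2 * dV(p)])

def constraint(y):
    a, da, p, dp = y
    return 6.0 * ((da / a)**2 - H**2) - 0.5 * dp**2 + a**2 * V(p)

# initial data chosen so that the constraint holds exactly at w = 0
a0, p0, dp0 = 1.0, 0.0, 0.5
da0 = a0 * np.sqrt(H**2 + (0.5 * dp0**2 - a0**2 * V(p0)) / 6.0)
y = np.array([a0, da0, p0, dp0])

h, drift = 1e-3, 0.0
for _ in range(1000):               # RK4 from w = 0 to w = 1
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * h * k1)
    k3 = rhs(y + 0.5 * h * k2)
    k4 = rhs(y + h * k3)
    y = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    drift = max(drift, abs(constraint(y)))
print(y[0], drift)   # warp factor at w = 1; constraint violation stays tiny
```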
From the remaining linearized Einstein equations we get the following equations for the extra-dimensional mode functions $\Phi_m(w)$ and $\delta\varphi_m(w)$ \[eq:pert\] $$\begin{aligned} (a^2 \Phi)' &=& \frac{2}{3} a^2 \varphi'\, \delta\varphi,\\ \left(\frac{a}{\varphi'}\, \delta\varphi\right)' &=& \left(1 - \frac{3}{2} \frac{m^2+4H^2}{\varphi'^2}\right) a \Phi,\end{aligned}$$ where we again omitted the label $m$ for transparency. These are very similar to the scalar perturbation equations in four-dimensional cosmology with a scalar field [@mukhanov], except for some numerical coefficients and powers of $a(w)$ (because the spacetime dimensionality is higher), and up to the exchange of the time coordinate with the extra spatial dimension. Indeed, we can introduce the higher-dimensional analog of Mukhanov’s variable. However, in the presence of the curvature term $H^2$, the eigenvalue $m^2$ enters the second order equation for it in a complicated way, similar to that in the 4d problem with non-zero spatial curvature, see e.g. [@Garriga:1999vw]. We can introduce another convenient variable $u_m = \sqrt{\frac{3}{2}}\frac{a^{3/2}}{\varphi'}\, \Phi_m$. Then the two first order differential equations (\[eq:pert\]) can be combined into a single Schrödinger-type equation $$\label{v} u_m'' + \Big( m^2+4H^2 - V_{\text{eff}}(w) \Big) u_m = 0$$ with the effective potential $V_{\text{eff}} = \frac{z''}{z} + \frac{2}{3}\varphi'^2$, where we defined $z = \left(\frac{2}{3} a \varphi'^2\right)^{-\frac{1}{2}}$. There are two main differences relative to four-dimensional cosmology. First, in the latter case, FRW geometry with [*flat*]{} 3d spatial slices is usually considered, while the five dimensional brane inflation metric has [*curved*]{} 4d slices, which results in extra terms like $4H^2$ in equation (\[v\]). Second, here we are dealing not with an *initial* but a *boundary* value problem, with the associated boundary conditions for perturbations at the branes.
After we derive the boundary conditions, we will calculate the KK spectrum of the eigenvalues $m$.

Brane Embedding and Boundary Conditions
=======================================

The embedding of each brane is described by $w=w_{\pm}+\xi_\pm(x^a)$, where $\xi_\pm$ is the transverse displacement of the perturbed brane and $w_\pm$ is the position of the unperturbed brane. Holonomic basis vectors along the brane surface are $e_{(a)}^A \equiv \frac{\partial x^A}{\partial x^a} = \Big(\xi_{,a}, \delta_a^A\Big)$, while the unit normal to the brane is $n_{A} = a \Big(1+\Phi, -\xi_{,a} \delta_A^a\Big)$. The induced four-metric on the brane $d\sigma^2 = q_{ab} dx^a dx^b$ does not feel the brane displacement (to linear order) and is conformally flat $$\label{eq:induced} d\sigma^2 = a^2(1-\Phi)\left[-dt^2 + e^{2Ht}d\vec{x}^2\right].$$ The junction conditions for the metric and the scalar field at the brane are $$\label{eq:jc} [{{\cal K}}_{ab} - {{\cal K}}q_{ab}] = U(\varphi) q_{ab}, \hspace{1em} [n\cdot\nabla\varphi] = \frac{\partial U}{\partial \varphi},$$ where the extrinsic curvature is defined by ${{\cal K}}_{ab} = e_{(a)}^A e_{(b)}^B n_{A;B}$. We will only need its trace, which up to linear order in perturbations is $$\label{eq:k} {{\cal K}}= 4\frac{a'}{a^2} - 2 \frac{(a^2\Phi)'}{a^3} - \frac{{^4}\Box\xi}{a}.$$ For the background geometry (under the assumption of reflection symmetry across the branes), equations (\[eq:jc\]) reduce to $$\label{eq:jc:bg} \frac{a'}{a^2} = \mp \frac{U}{6}, \hspace{1em} \frac{\varphi'}{a} = \pm \frac{U'}{2}.$$ For the perturbed geometry, the traceless part of the extrinsic curvature must vanish in the absence of matter perturbations on the brane. Since it contains second cross-derivatives of $\xi$, the brane displacement $\xi$ is severely restricted. Basically, this means that the oscillatory modes of brane displacement are not excited without matter support at the brane.
While there could possibly be global displacements of the brane, they do not interest us, so in the following we set $\xi=0$. Of course, for the more complete problem which includes fluctuations $\delta \chi$ of the “inflaton” field on the brane, the displacement $\xi$ does not vanish. Using expression (\[eq:k\]) for the trace of the extrinsic curvature, the first of equations (\[eq:jc\]) gives us the junction condition for linearized perturbations at the two branes $(a^2 \Phi)'\big|_{w_{\pm}} = \pm \frac{1}{3}\, U' a^3\, \delta\varphi \big|_{w_{\pm}}$. However, this junction condition does not really place any further restrictions on the bulk field perturbations, as it identically follows from the bulk perturbation equations (\[eq:pert\]) and the background junction condition (\[eq:jc:bg\]). Rather, this junction condition would relate the brane displacement $\xi$ to the matter perturbations on the brane if they were not absent. The second of equations (\[eq:jc\]) gives us a physically relevant boundary condition for the bulk field perturbations $$(\delta\varphi' - \varphi' \Phi)\big|_{w_{\pm}} = \pm \frac{1}{2}\, U'' a\, \delta\varphi \big|_{w_{\pm}}.$$ Using the bulk equations (\[eq:pert\]), this can be rewritten in a more suggestive form $$\label{eq:bc} \left(\frac{a}{\varphi'}\, \delta\varphi\right)\Bigg|_{w_{\pm}} = \frac{3}{2} \frac{m^2+4H^2}{a\varphi'^2} \frac{a^2 \Phi}{\frac{a^2 V'}{\varphi'} - 4 \frac{a'}{a} \mp a U_\pm''} \Bigg|_{w_{\pm}}.$$ The eigenvalues $m^2$ of bulk perturbation equations subject to the boundary condition (\[eq:bc\]) form a KK spectrum, which we find numerically. We considered several examples of the potentials $V$ and $U_{\pm}$, and found no universal positive mass gap. Moreover, for the most interesting models we found negative $m^2$. 
To understand the KK spectrum of $m^2$, we make a simplification of the boundary condition (\[eq:bc\]) which will allow us to treat the eigenvalue problem analytically, and which is in keeping with the spirit of brane stabilization [@GW]. Indeed, *rigid stabilization* of branes is thought to be achieved by taking $U''$ (i.e. the brane mass of the field) very large, so that the scalar field gets pinned down at the positions of the branes. In this case, the right hand side of (\[eq:bc\]) becomes very small, which leads to the boundary condition $$\label{eq:stab} \delta\varphi\big|_{w_\pm} = 0.$$ This by itself *does not guarantee stability*, or vanishing of the metric perturbations on the brane for that matter, as perturbations live in the bulk and only need to satisfy (\[eq:stab\]) on the branes. This poses an eigenvalue problem for the mass spectrum of the perturbation modes, which we study next.

KK Mass Spectrum
================

Unlike the situation with gravitational waves [@gw], for the scalar perturbations there is no zero mode with $m=0$, nor is there a “supersymmetric” factorized form of the “Schrödinger”-like equation (\[v\]). To find the lowest mass eigenvalue, we have to use other ideas. Powerful methods for analyzing eigenvalue problems exist for normal self-adjoint systems [@kamke]. To use them, we transform our eigenvalue problem (\[eq:pert\]) and (\[eq:stab\]) into self-adjoint form. While the second order differential equation (\[v\]) is self-adjoint, the boundary conditions for $u$ are not. Therefore, we introduce a new variable $Y=u/z=a^2\Phi$ and impose the boundary conditions (\[eq:stab\]) to obtain the boundary value problem \[eq:evp\] $$\begin{aligned} \label{eq:evp:de} &{{\cal D}}Y \equiv -(gY')' + fY = \lambda gY,&\\ \label{eq:evp:bc} &Y'(w_{\pm}) = 0,&\end{aligned}$$ where we have introduced the short-hand notation $f = 1/a$, $g = z^2 = \left(\frac{2}{3} a \varphi'^2\right)^{-1}$, and $\lambda = m^2+4H^2$.
Since the boundary value problem (\[eq:evp\]) is self-adjoint, it is guaranteed that the eigenvalues $\lambda$ are real and non-negative, $\lambda \ge 0$. To estimate the lowest eigenvalue $\lambda_1$ of the eigenvalue problem (\[eq:evp\]), we apply Rayleigh’s formula [@kamke], which places a rigorous upper bound on $\lambda_1$ $$\lambda_1 \le \frac{\int F {{\cal D}}F\, dw}{\int g F^2\, dw},$$ where $F$ can be *any* function satisfying the boundary conditions (\[eq:evp:bc\]), and does not have to be a solution of (\[eq:evp:de\]). Taking a trial function $F=1$, we have $$\lambda_1 \le \frac{\int f\, dw}{\int g\, dw}.$$ This bound on the lowest mass eigenvalue is our main result: $$\label{eq:bound} m^2 \le -4H^2 + \frac{2}{3} \frac{\int \frac{dw}{a}}{\int \frac{dw}{a\varphi'^2}}.$$ In practice, $F=1$ is a good approximation to the lowest eigenfunction, so the bound (\[eq:bound\]) is usually close to saturation (up to a few percent accuracy in some cases), as we have observed in direct computations using a numerical eigenvalue finder. The right hand side of equation (\[eq:bound\]) has the structure $-4H^2 + m_0^2(H)$, where the second term is a functional of $H$ (including the implicit $H$-dependence of the warp factor $a$). In the limit of flat branes $H \to 0$ we have only the second, positive term. In this limit our expression agrees with estimates of the radion mass $m_0^2$ for flat branes, obtained in various approximations [@Csaki:1999mp; @Tanaka:2000er; @Mukohyama:2001ks]. A non-vanishing $H$ alters $m^2$ through both terms. The most drastic alteration of $m^2$ due to $H$ comes from the large negative term $-4H^2$. For the particular case of two de Sitter branes embedded in 5d AdS without a bulk scalar this negative term was noticed in [@Gen:2000nu].
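As a concrete illustration (ours, not part of the original argument), the right hand side of (\[eq:bound\]) is straightforward to evaluate numerically. The profiles below are purely illustrative (not a solved background) and are chosen only to show that a small bulk gradient, $\varphi'^2 \ll H^2$, drives the bound negative:

```python
import numpy as np

# Evaluate the Rayleigh-quotient bound, eq. (bound):
#   m^2 <= -4 H^2 + (2/3) * (integral dw / a) / (integral dw / (a phi'^2))
# for hypothetical profiles of the warp factor a(w) and the gradient phi'(w).
H = 0.1                                  # Hubble rate on the branes (arbitrary units)
w = np.linspace(0.1, 1.0, 2001)          # bulk coordinate between the branes w_- and w_+
a = np.exp(-2.0 * w)                     # toy warp factor
phi_p = 0.05 * np.ones_like(w)           # toy bulk-scalar gradient, phi'^2 << H^2

num = np.trapz(1.0 / a, w)               # integral of dw / a
den = np.trapz(1.0 / (a * phi_p**2), w)  # integral of dw / (a phi'^2)
m0_sq = (2.0 / 3.0) * num / den          # flat-brane ("m_0^2") contribution
m2_bound = -4.0 * H**2 + m0_sq

print(f"m_0^2 term: {m0_sq:.4e},  -4H^2 term: {-4 * H**2:.4e}")
print(f"upper bound on m^2: {m2_bound:.4e}")
```

With $\varphi' = 0.05$ and $H = 0.1$ the positive term is $\sim 1.7\times 10^{-3}$, overwhelmed by $-4H^2 = -4\times 10^{-2}$: the bound is negative, in line with the tachyonic case discussed below.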
Tachyonic Instability of the Radion for Inflating Branes
========================================================

The most striking feature of the mass bound (\[eq:bound\]) is that $m^2$ for de Sitter branes is typically negative. Trying, for instance, to do Goldberger-Wise stabilization of braneworlds with inflating branes while taking bulk gradients $\varphi'^2$ small enough to ignore their backreaction (as is commonly done for flat branes) is a sure way to get a tachyonic radion mass: an estimate of the integrals gives $m^2 \le -4H^2 + O(\varphi'^2)$, which goes negative if the bulk scalar gradient is negligible, $\varphi'^2 \ll H^2$. In what follows we consider two situations. In this section, we consider braneworld models where $m^2$ is negative and dominated by the first term $-4H^2$ in equation (\[eq:bound\]). In the next section, we consider the case where both terms in equation (\[eq:bound\]) are tuned to be comparable and the net radion mass is smaller than the Hubble parameter, $|m^2| \leq H^2$. In the last section we will discuss how these two cases may be dynamically connected. Suppose we start with a braneworld with curved de Sitter branes, and we find the mass squared of the radion to be negative. The extra-dimensional eigenfunction $\Phi_m(w)$ is regular in the interval $w_{-} \leq w \leq w_{+}$. Let us turn, however, to the four-dimensional eigenfunction $Q_m(t, \vec x)$. Bearing in mind the evolution of the quantum fluctuations of the bulk field, we choose the positive frequency vacuum-like initial conditions in the far past $t \to -\infty $, $f_k(t) \simeq \frac{1}{\sqrt{2k}} e^{ik\eta}$, $\eta=\int dt\, e^{-Ht}$. For the tachyonic mode $m^2 <0$ the solution to equation (\[eq:4\]) with this initial condition is given in terms of Hankel functions $f_k^{(m)}(\eta)=\frac{\sqrt{\pi}}{2} H |\eta|^{3/2} {\cal H}^{(1)}_{\mu} (k\eta)$, with the index $\mu=\sqrt{\frac{9}{4}+\frac{|m^2|}{H^2}}$.
The late-time asymptotic form of this solution diverges exponentially as $t \to \infty$ ($\eta \to 0$) $$\label{asym} f_k^{(m)}(t) \propto \exp \left[\left( {\sqrt{\frac{9}{4}+\frac{|m^2|}{H^2}} - \frac{3}{2}} \right) H t \right].$$ Thus a tachyonic radion mass $|m^2| \sim 4H^2$ leads to a strong exponential instability of scalar fluctuations, $\Phi \propto e^{Ht}$. This instability is observed using a completely different method in the accompanying [BraneCode]{} paper [@branecode], where we give a fully non-linear numerical treatment of inflating branes which were initially set to be stationary by the potentials $U_{\pm}(\varphi)$, and without any simplifications like approximating boundary condition (\[eq:bc\]) with (\[eq:stab\]). Tachyonic instability of the radion for inflating branes means that, in general, [*braneworlds with inflation are hard to stabilize.*]{} From the point of view of 4d effective theory one would expect brane stabilization at energies below the mass of the flat brane radion $m_0$, whose square is roughly equal to the second term in (\[eq:bound\]). If the energy scale of inflation $H$ is larger than $m_0$, $H^2 \gg m_0^2$, this expectation is incorrect. Successful inflation (lasting more than $65 H^{-1}$) requires the radion mass $m^2$ to be not too negative $$\label{criter} m^2 \gtrsim - \frac{H^2}{20}.$$ This is possible if both terms in (\[eq:bound\]) are of the same order. In the popular braneworld models the radion mass in the low energy limit, $m_0$, is of the order of a TeV. For these models the scale of “stable” inflation would be of the same order of magnitude, $H \sim \text{TeV}$. Although there is no evidence that this scale of inflation is too low, it is not a comfortable scale from the point of view of the theory of primordial perturbations from inflation. It is interesting to note that the system of curved branes may dynamically re-configure itself to reach a state where the condition (\[criter\]) is satisfied.
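To put numbers on (\[asym\]) and (\[criter\]), the sketch below (our illustration) evaluates the growth exponent and the amplification accumulated over 65 e-folds for a few values of $|m^2|/H^2$:

```python
import math

def growth_exponent(m2_over_H2_abs):
    # Exponent in eq. (asym): f grows as exp[(sqrt(9/4 + |m^2|/H^2) - 3/2) H t]
    return math.sqrt(9.0 / 4.0 + m2_over_H2_abs) - 1.5

# |m^2| = 4 H^2: sqrt(9/4 + 4) - 3/2 = 1, i.e. Phi grows as e^{Ht}
print(growth_exponent(4.0))

# Amplification accumulated over 65 e-folds for several tachyonic masses:
for m2 in (4.0, 1.0, 1.0 / 20.0):
    amp = math.exp(growth_exponent(m2) * 65.0)
    print(f"|m^2|/H^2 = {m2:5.2f}: growth factor over 65/H ~ {amp:.3g}")
```

For $|m^2| = H^2/20$ the growth factor stays of order unity, while already for $|m^2| = H^2$ it exceeds $10^8$, which is why the criterion (\[criter\]) is needed.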
In the case of the bulk scalar field $\varphi$ acting alone, for quadratic potentials $U_{\pm}$ suitable for brane stabilization, there may be two stationary warped geometry solutions (\[warp\]) with two different values of $H$. The solution with the larger Hubble parameter $H$ might be dynamically unstable due to the tachyonic instability of the radion, which we described above. The second solution with the lower $H$, which satisfies (\[criter\]), might be stable. A fully non-linear study of this model was performed numerically with the [BraneCode]{} and is reported in the accompanying paper [@branecode]. It shows that, indeed, the tachyonic instability violently re-configures the starting brane state with the larger $H$ into the stable brane state with the lower $H$. This re-configuration of the brane system is reminiscent of the Higgs mechanism. If we add an “inflaton” scalar field $\chi$ located at the brane, its slow roll contributes to the decrease of $H$. Thus, for the “stable” brane we have a radion mass satisfying (\[criter\]). This condition includes the case when the radion is lighter than $H$, $|m^2| < H^2$. Even if the radion tachyonic instability is avoided, the light radion leads us to the other side of the story: a new mechanism of generation of scalar fluctuations from inflation associated with the radion.

Induced Scalar Metric Perturbations at the Observable Brane
===========================================================

Suppose that the radion mass is smaller than $H$, $|m^2| \ll H^2$, so that from (\[asym\]) the amplitude of the temporal mode function $f_k^{(m)}(t)$ in the late time asymptotic is frozen at the level $f_k^{(m)}(t) \simeq \frac{H}{\sqrt{2}k^{3/2}}$. This is nothing but the familiar generation of inhomogeneities of a light scalar field from its quantum fluctuations during inflation. Therefore an observer at the observable brane will encounter long wavelength scalar metric fluctuations generated from braneworld inflation.
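The freeze-out at $f_k \simeq H/(\sqrt{2}k^{3/2})$ can be checked against the standard massless Bunch-Davies mode function in de Sitter space, $f_k = \frac{H}{\sqrt{2k^3}}(1+ik\eta)e^{-ik\eta}$ (a textbook limit of the Hankel-function solution above, quoted here for illustration):

```python
import numpy as np

H, k = 1.0, 2.0
eta = np.linspace(-40.0 / k, -1e-3 / k, 1000)  # conformal time, eta -> 0^- late in inflation
# Massless (m = 0) Bunch-Davies mode function in de Sitter space:
f = H / np.sqrt(2 * k**3) * (1 + 1j * k * eta) * np.exp(-1j * k * eta)
amp = np.abs(f) * np.sqrt(2) * k**1.5 / H      # |f_k| in units of H / (sqrt(2) k^{3/2})

print(f"|k eta| = 40:  amplitude ratio {amp[0]:.1f}  (sub-horizon, oscillating)")
print(f"|k eta| -> 0:  amplitude ratio {amp[-1]:.4f} (frozen)")
```

On sub-horizon scales the amplitude tracks $k|\eta| \gg 1$, while after horizon exit it freezes at exactly the level quoted in the text.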
The four dimensional metric describing scalar fluctuations around an inflating background is usually written as $$\label{eq:induced1} d\sigma^2 = - (1+2\widetilde{\Phi}) d\tilde{t}^2 + (1-2\widetilde{\Psi})e^{2\widetilde{H}\tilde{t}}d\tilde{x}^2,$$ where $\widetilde{\Phi}$ and $\widetilde{\Psi}$ are scalar metric fluctuations. The induced four-metric on the brane (\[eq:induced\]) in our problem can be rewritten in this standard form (\[eq:induced1\]) if we absorb the (constant) warp factor $a(w_{+})$ into the redefined time $\tilde{t} = at$ and spatial coordinates $\tilde{x} = a\vec{x}$ and rescale the Hubble parameter $\widetilde{H} = H/a$. Then we see that the induced scalar perturbations on the brane are $$\label{prop} \widetilde{\Psi} = -\widetilde{\Phi} = \frac{1}{2}\, \Phi.$$ The sign of the first equality here is opposite to what we usually have for $3+1$ dimensional inflation with a scalar field. It implies that the 4d Weyl tensor of the induced metric vanishes, as the induced fluctuations are conformally flat. The conformal structure of fluctuations (\[prop\]) is typical [@km87] for $R^2$ inflation in the Starobinsky model [@star]. This is not surprising, because for a scale of inflation comparable to the mass $m_0$ of the flat brane radion we expect higher derivative corrections to the 4d effective gravity on the brane. Indeed, the massive radion corresponds to a higher derivative 4d gravity [@Mukohyama:2001ks]. The amplitude and spectrum of induced fluctuations are determined by $\Phi$. From the mode decomposition (\[eq:sep\]) we get $$\label{ampl} k^{3/2}\, \widetilde{\Phi}_k \simeq \Phi_m (w_+) \, \frac{H}{M_4},$$ where $\Phi_m(w_+)$ is the amplitude of the extra-dimensional eigenmode at the observable brane, normalized in such a way that the fluctuations $\Phi(w, t, {\vec x})$ are canonically quantized on the 4d slice, namely $M_5^3 \int \frac{3}{2} \frac{a^3}{\varphi'^2}\, |\Phi_m(w)|^2 \, dw = 1$.
The normalization $M_4$ of the 4d mode functions follows from canonical quantization of the perturbed action (\[eq:action\]); the usual 4d Planck mass $M_p$ is expected to be recovered in the effective field theory on the observable brane [@Tanaka:2000er]. The scalar metric fluctuations induced by the bulk scalar field fluctuations are scale-free and have the amplitude $k^{3/2} \widetilde{\Phi}_k \propto \frac{H}{M_p}$, with the numerical coefficient depending on the details of the warped geometry. The nature of these fluctuations is very different from those in $(3+1)$-dimensional inflation, where the inflaton scalar field is time dependent. Induced scalar fluctuations do not require “slow-roll” properties of the potentials $V$ and $U_{\pm}$. The underlying background bulk scalar field has no time-dependence, but only $w$ dependence. Thus, generation of induced scalar metric fluctuations from braneworld inflation is a new mechanism for producing cosmological inhomogeneities. If we add another, “inflaton” field $\chi$ localized at the brane, we should expect fluctuations of both fields, the bulk scalar $\delta \varphi$ and the inflaton $\delta \chi$, to contribute to the metric perturbations. We can conjecture that the net fluctuations will be similar to those derived in the combined model with $R^2$ gravity and a scalar field [@kls].

Discussion
==========

Let us discuss the physical interpretation and the meaning of our result. Stabilization of flat branes is based on the balance between the gradient $\varphi'$ of the bulk scalar field and the brane potentials $U(\varphi)$ which keeps $\varphi$ pinned down to its values $\varphi_i$ at the branes. The interplay between the different forces becomes more delicate if the branes are curved. The lowest eigenvalue for scalar fluctuations around the warped configuration of curved branes is $$\label{crit} m^2=-4H^2+m_0^2(H) \ .$$ The term $m_0^2(H)$ is a functional of $H$, and depends on the parameters of the model.
If the parameters are such that $m^2$ becomes negative due to excessive curvature $\sim H^2$, the brane configuration becomes unstable. This is analogous to the instability of a simple elastic mechanical system supported by the balance of opposite forces, which arises for a certain range of the underlying parameters. Tachyonic instability of curved branes has serious implications for the theory of inflation in braneworlds. It may not be so easy to realize inflation in the braneworld picture without taking care of the parameters of the model. Inflation where $m^2$ in (\[crit\]) is negative and $|m^2|$ is larger than $H^2$ is a short-lived stage because of this instability. After inflation, the late time evolution should bring the brane configuration to (almost) flat stabilized branes in the low energy limit. This by itself requires fine tuning of the potentials $V$ and $U_{\pm}$ to provide stabilization. Stabilization at the inflation energy scale requires extra fine tuning to get rid of the tachyonic effect. Working with a single bulk scalar field, it is probably not easy to simultaneously achieve stabilization both at low energy and at the high energy scale of inflation, to ensure that $|m^2| \ll H^2$, and to provide a graceful exit from inflation. One may expect that the introduction of another scalar field $\chi$ on the brane can help to achieve stabilization both at the scale of inflation and in the low energy limit. If we can achieve brane stabilization during inflation by suppressing the tachyonic instability, we encounter a byproduct effect. Light modes of radion fluctuations inevitably contribute to the induced scalar metric perturbations. Therefore the theory of braneworld inflation has an additional mechanism of generation of primordial cosmological perturbations. This new mechanism is different from that of the usual 4d slow roll inflation.
It appears that one of the most interesting potential applications of our effect is a mechanism for reducing the 4d effective cosmological constant at the brane. Indeed, in terms of brane geometry, the 4d cosmological constant is related to the 4d curvature of the brane. Suppose we have two solutions of the background equations (\[eq:bg\]) with higher and lower values of the curvature of the de Sitter brane, which is proportional to $H^2$. (The existence of two solutions for certain choices of parameters of the Goldberger-Wise type potentials used for brane stabilization can be demonstrated, see [@FFK; @branecode].) Suppose that the solution with the larger value of brane curvature is unstable. Then the brane configuration will violently restructure into the other static configuration, which is characterized by the lower value of brane curvature and where the tachyonic instability is absent. The branes flatten, which for a 4d observer means a lowering of the cosmological constant. It will be interesting to investigate how this mechanism works for brane configurations with several scalar fields or potentials which can admit more than two static solutions. The problem of the cosmological constant from a braneworld perspective (as a flat brane) was discussed in the literature. There was a suggestion that the flat brane is a special solution of the bulk gravity/dilaton system with a single brane [@Arkani-Hamed:2000eg; @Kachru:2000hf], a claim which was later dismissed [@Forste:2000ft]. In our setup, we consider two branes in order to screen the naked bulk singularity, which was one of the factors spoiling the models [@Arkani-Hamed:2000eg; @Kachru:2000hf]. The new element which emerges from our study is the instability of the curved branes.

Acknowledgments {#acknowledgments .unnumbered}
===============

We are grateful to R. Brandenberger, J. Cline, C. Deffayet, J. Garriga, A. Linde, S. Mukohyama, D. Pogosyan and V. Rubakov for valuable discussions.
We are especially indebted to our collaborators on the [BraneCode]{} project, G. Felder, J. Martin and M. Peloso. This research was supported in part by the Natural Sciences and Engineering Research Council of Canada and CIAR. J. Martin, G. N. Felder, A. V. Frolov, M. Peloso and L. Kofman, [*Non-linear braneworld dynamics with the `BraneCode`*]{}, [hep-th/0309001]{}. D. Langlois, [*Brane cosmology: An introduction*]{}, Prog. Theor. Phys. Suppl.  [**148**]{}, 181 (2003) \[[hep-th/0209261]{}\]. P. Horava and E. Witten, [*Heterotic and type I string dynamics from eleven dimensions*]{}, Nucl. Phys. B [**460**]{}, 506 (1996) \[[hep-th/9510209]{}\]. A. Lukas, B. A. Ovrut, K. S. Stelle and D. Waldram, [*Heterotic M-theory in five dimensions*]{}, Nucl. Phys. B [**552**]{}, 246 (1999) \[[hep-th/9806051]{}\]. L. Randall and R. Sundrum, [*A large mass hierarchy from a small extra dimension*]{}, Phys. Rev. Lett.  [**83**]{}, 3370 (1999) \[[hep-ph/9905221]{}\]. L. Randall and R. Sundrum, [*An alternative to compactification*]{}, Phys. Rev. Lett.  [**83**]{}, 4690 (1999) \[[hep-th/9906064]{}\]. W. D. Goldberger and M. B. Wise, [*Modulus stabilization with bulk fields*]{}, Phys. Rev. Lett.  [**83**]{}, 4922 (1999) \[[hep-ph/9907447]{}\]. O. DeWolfe, D. Z. Freedman, S. S. Gubser and A. Karch, [*Modeling the fifth dimension with scalars and gravity*]{}, Phys. Rev. D [**62**]{}, 046008 (2000) \[[hep-th/9909134]{}\]. E. E. Flanagan, S. H. Tye and I. Wasserman, [*Brane world models with bulk scalar fields*]{}, Phys. Lett. B [**522**]{}, 155 (2001) \[[hep-th/0110070]{}\]. G. N. Felder, A. V. Frolov and L. Kofman, [*Warped geometry of brane worlds*]{}, Class. Quant. Grav.  [**19**]{}, 2983 (2002) \[[hep-th/0112165]{}\]. T. Tanaka and X. Montes, [*Gravity in the brane-world for two-branes model with stabilized modulus*]{}, Nucl. Phys. B [**582**]{}, 259 (2000) \[[hep-th/0001092]{}\]. C. Csaki, M. Graesser, L. Randall and J. 
Terning, [*Cosmology of brane models with radion stabilization*]{}, Phys. Rev. D [**62**]{}, 045015 (2000) \[[hep-ph/9911406]{}\]. S. Mukohyama, [*Gauge-invariant gravitational perturbations of maximally symmetric spacetimes*]{}, Phys. Rev. D [**62**]{}, 084015 (2000) \[[hep-th/0004067]{}\]. H. Kodama, A. Ishibashi and O. Seto, [*Brane world cosmology: Gauge-invariant formalism for perturbation*]{}, Phys. Rev. D [**62**]{}, 064022 (2000) \[[hep-th/0004160]{}\]. D. Langlois, [*Brane cosmological perturbations*]{}, Phys. Rev. D [**62**]{}, 126012 (2000) \[[hep-th/0005025]{}\]. C. van de Bruck, M. Dorca, R. H. Brandenberger and A. Lukas, [*Cosmological perturbations in brane-world theories: Formalism*]{}, Phys. Rev. D [**62**]{}, 123515 (2000) \[[hep-th/0005032]{}\]. K. Koyama and J. Soda, [*Evolution of cosmological perturbations in the brane world*]{}, Phys. Rev. D [**62**]{}, 123502 (2000) \[[hep-th/0005239]{}\]. N. Deruelle, T. Dolezel and J. Katz, [*Perturbations of brane worlds*]{}, Phys. Rev. D [**63**]{}, 083513 (2001) \[[hep-th/0010215]{}\]. U. Gen and M. Sasaki, [*Radion on the de Sitter brane*]{}, Prog. Theor. Phys.  [**105**]{}, 591 (2001) \[[gr-qc/0011078]{}\]. S. Mukohyama and L. Kofman, [*Brane gravity at low energy*]{}, Phys. Rev. D [**65**]{}, 124025 (2002) \[[hep-th/0112115]{}\]. D. Langlois, R. Maartens and D. Wands, [*Gravitational waves from inflation on the brane*]{}, Phys. Lett. B [**489**]{}, 259 (2000) \[[hep-th/0006007]{}\]. A. V. Frolov and L. Kofman, [*Gravitational waves from braneworld inflation*]{}, [hep-th/0209133]{}. V. F. Mukhanov, [*Gravitational instability of the universe filled with a scalar field*]{}, JETP Lett.  [**41**]{}, 493 (1985) \[Pisma Zh. Eksp. Teor. Fiz.  [**41**]{}, 402 (1985)\]. J. Garriga and V. F. Mukhanov, [*Perturbations in $k$-inflation*]{}, Phys. Lett. B [**458**]{}, 219 (1999) \[[hep-th/9904176]{}\]. von Dr. E. Kamke, [*Differentialgleichungen: Lösungsmethoden und Lösungen*]{}, Leipzig (1959). A. A. 
Starobinsky, [*A new type of isotropic cosmological models without singularity*]{}, Phys. Lett. B [**91**]{}, 99 (1980). L. A. Kofman, A. D. Linde and A. A. Starobinsky, [*Inflationary universe generated by the combined action of a scalar field and gravitational vacuum polarization*]{}, Phys. Lett. B [**157**]{}, 361 (1985). L. A. Kofman and V. F. Mukhanov, [*Evolution of perturbations in an inflationary universe*]{}, JETP Lett.  [**44**]{}, 619 (1986) \[Pisma Zh. Eksp. Teor. Fiz.  [**44**]{}, 481 (1986)\]. N. Arkani-Hamed, S. Dimopoulos, N. Kaloper and R. Sundrum, [*A small cosmological constant from a large extra dimension*]{}, Phys. Lett. B [**480**]{}, 193 (2000) \[[hep-th/0001197]{}\]. S. Kachru, M. B. Schulz and E. Silverstein, [*Self-tuning flat domain walls in 5d gravity and string theory*]{}, Phys. Rev. D [**62**]{}, 045021 (2000) \[[hep-th/0001206]{}\]. S. Forste, Z. Lalak, S. Lavignac and H. P. Nilles, [*The cosmological constant problem from a brane-world perspective*]{}, JHEP [**0009**]{}, 034 (2000) \[[hep-th/0006139]{}\].
---
abstract: 'In order to estimate the luminosity of the LHC in absolute terms, certain beam parameters have to be measured very accurately: in particular, the total beam current and the relative distribution of the charges around the ring, the transverse size of the beams at the interaction points, and the relative position of the beams at the interaction point. The experiments can themselves measure several of these parameters very accurately thanks to the versatility of their detectors; other parameters, however, need to be measured using the monitors installed on the machine. The beam instrumentation is usually built for the purpose of aiding the operation team in setting up and optimizing the beams; often this only requires precise relative measurements, and therefore the absolute scale is usually not very precisely calibrated. The luminosity calibration requires several machine-side instruments to be pushed beyond their initial scope.'
author:
- 'E. Bravin, CERN, Geneva, Switzerland'
title: 'Instrumentation2: Other instruments, ghost/satellite bunch monitoring, halo, emittance, new developments[^1]'
---

Colliding and non colliding charges
===================================

In general in colliders the particles circulating in opposite directions are kept separated and only allowed to encounter each other at the designated interaction points. This is even more true for the LHC, where the particles travel in different vacuum tubes for most of the accelerator length. Particles colliding outside of the experiments would provide no useful information and would only contribute to the background and reduce the lifetime of the beams. In order to estimate the luminosity it is therefore important to quantify the number of particles that can potentially collide at a given interaction point, rather than just the total current stored in the machine. The distribution of particles around the ring can be rather complicated.
In theory there should be only a well-known number of equal bunches spaced by well-known amounts of time, and in this situation it would be easy to calculate the colliding charges from the total current. In reality the bunches all have different currents and there can be charges also outside of these bunches. In the LHC the radio frequency (RF) system has a frequency of 400.8 MHz and only every 10th bucket at most is filled. This means that there are plenty of *wrong* RF buckets that can store particles in a stable way. It can happen that capture problems (also upstream in the injectors) create unwanted small-intensity bunches close to the main ones. These, named satellite bunches, typically have intensities of up to 1% of the main bunch and are only a few RF buckets away from the main bunch (usually a multiple of the RF period of one of the preceding accelerators). Other effects can lead to particles escaping from the main buckets and becoming un-captured; these particles are no longer synchronous and will just diffuse around the ring, where they can remain for a very long time. In case some RF gymnastics is performed (like inserting dips in the accelerating voltage in order to improve injection efficiency) it can happen that some un-captured beam is recaptured, forming a very large number of very faint bunches. These are called ghost bunches and typically have currents below a per mille of the main bunches. In the LHC ghost bunches have been observed, in particular during the heavy-ion run, due to the special RF manipulations used at injection. It is worth mentioning that un-captured particles will be lost if the energy of the machine is changed (e.g. during the ramp), since, not being synchronous with the RF, they cannot be properly accelerated.

Measuring the colliding charge
==============================

Fast current transformers should usually be sufficient to measure the relative current variations from bunch to bunch.
The dynamic range and speed of these detectors are, however, not sufficient to detect the satellites and the ghost bunches. Moreover, in the LHC the fast current transformers integrate the beam current over 25ns (10 RF buckets) bins, and it is not possible to know if and which satellites are included in the integration. Detectors with better time resolution and higher dynamic range are required. Candidates are:

- wall current monitor
- strip line pick-up
- fast light detector sampling the synchrotron light vs. time
- precise time stamping and counting of synchrotron light

Wall current monitor
--------------------

The wall current monitors can probably be used to estimate the satellites. This requires, however, averaging over many turns and correcting for quirks in the frequency response of the detector and the cables. It is in particular important to verify that reflections, noise or other effects are not limiting the potential of the averaging. For the moment the amount of charge in satellites is calculated by studying the frequency spectra of the acquired signals: as satellites are out of the nominal bunching pattern, it is possible to compare the expected spectrum with the measured one and estimate the amount of charge producing the distortion. One complication to this process arises from the fact that the bunches are not necessarily Gaussian and their shape is not precisely known. It is, however, difficult to get sufficient accuracy to take care of the ghosts. At the moment a continuous analysis of the spectrum of the wall current monitor is performed by the front-end software and provides an estimate, stored in the database, of the amount of charge outside of the correct buckets. Figure \[WCM\] shows the signal from a wall current monitor acquired with a 10 GSample/s scope. A long tail after the bunch can be observed; this arises from the frequency response of the detector and is corrected for in the analysis. ![Signal from a wall current monitor.
The top graph shows a zoom into a single bunch while the bottom graph shows the entire ring.[]{data-label="WCM"}](wall_current_monitor){width="1.0\linewidth"}

Strip line pick-ups
-------------------

The strip line pick-ups provide signals comparable to those of the wall current monitor, with the drawback of a perfectly reflected pulse shortly after the main one, intrinsic to the principle of the device; the delay depends on the strip length, 30cm for the devices installed in the LHC (see Fig. \[strip\_line\]). This reflection complicates the treatment of the signal, making it impossible to use this instrument for the identification of ghosts and satellites. ![Signal from a strip line pick-up.[]{data-label="strip_line"}](strip_line){width="1.0\linewidth"}

Synchrotron light detection
---------------------------

There are two possibilities for using the synchrotron light for longitudinal measurements. The first consists in simply using a fast optical detector connected to a fast sampler and recording the intensity of the synchrotron light as a function of time. The principle is simple and photo-diodes with bandwidths of the order of 50GHz are commercially available; there are, however, a few difficulties associated with this technique. As for the WCM, the transport of the high-frequency signals is not simple, and the cables' response will modify the pulses, requiring frequency-domain corrections. Another problem is introduced by the need for fast digitizers, implying a reduced dynamic range (typically only 8 bits), noise, etc. On the other hand, the response of the detector itself should be much more linear than that of the WCM and can in principle extend down to DC. This possibility is surely worth trying; however, it will be very difficult to measure the ghost bunches in this way. The other alternative is to count single SR photons with precise time stamping of the arrival time.
Detectors suitable for the task exist (avalanche photo diodes, APD), and time-to-digital converters with resolutions of a few tens of ps also exist. The only drawback of this technique is that the counting rate is limited and the light has to be attenuated such that the probability of detecting a photon during a bunch passage should be less than 60%. Such a detector has been operated during the last part of the 2010 run (mainly during the ions period) and has given very promising results; it is known as the longitudinal density monitor or LDM (see Fig. \[LDM\]).

Longitudinal density monitor LDM
--------------------------------

The LDM is based on avalanche photodiodes from either id-Quantique or Micro Photon Devices connected to a fast TDC from Agilent (formerly Acquiris). The detector can resolve single photons with a time resolution of the order of 50ps; the TDC has a resolution of 50ps as well. At the moment the temporal resolution of the system is limited to about 75ps (300ps pk-pk) due to the reference timing used (turn clock from the BST receiver, BOBR); in the future this limitation will be removed by using a dedicated RF timing signal [@ldm]. The avalanche photo diodes present a short dead-time used to quench the avalanche (tens of ns), and there is also a small probability that at the end of this dead-time trapped electrons or holes will trigger a new avalanche (the probability of this type of event is of the order of 3%). These effects, together with the dark count rate, although small, must be corrected for; a rather simple statistical algorithm is sufficient. The probability of an SL photon triggering an avalanche per bunch-crossing must be maintained below a certain level (60-70%), otherwise the error on these corrections becomes too large. This has an impact on the maximum counting rate and thus on the integration time required for acquiring a profile with sufficient resolution.
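The correction algorithm is not spelled out in the text; a minimal first-order version (our sketch, with hypothetical numbers) inverts the at-most-one-avalanche-per-passage pile-up under the assumption of Poisson photon statistics. Dark counts and the ~3% afterpulsing would be subtracted on top of this.

```python
import math

def mean_photons(p_detect):
    # With Poisson light and at most one avalanche per bunch passage,
    # P(detect) = 1 - exp(-mu)  =>  mu = -ln(1 - P).
    # Only reliable while P stays below the ~60-70% level quoted above.
    if not 0.0 <= p_detect < 1.0:
        raise ValueError("occupancy must be in [0, 1)")
    return -math.log(1.0 - p_detect)

# Hypothetical example: 500 s of integration at the ~11.2 kHz revolution
# frequency gives ~5.6e6 passages per bucket; suppose 3.0e6 detections.
n_passages = 500 * 11245
counts = 3_000_000
p = counts / n_passages
print(f"occupancy {p:.3f} -> mean detected photons per passage {mean_photons(p):.3f}")
```

The divergence of the correction as the occupancy approaches 1 is what drives the 60-70% ceiling, and hence the long integration times quoted next.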
The integration time required in fact depends on what is being observed: if the aim is just to measure the so-called core parameters of a bunch (mainly the bunch length), a few seconds are sufficient; if, on the other hand, the population of ghosts and satellites has to be measured, an integration of several minutes may be required. ![image](ldm1){width="0.2\linewidth"} ![image](ldm2){width="0.2\linewidth"} ![image](ldm3){width="0.2\linewidth"} ![image](ldm4){width="0.2\linewidth"} a) b) c) d) The dynamic range observed in 2010 was of the order of $10^6$ with an integration time of 500 s. The LDM is an extension of the already complex synchrotron light telescope, which means that there may be interference between the optimization of the LDM and of the other detectors present on the optical table (fast and slow cameras and abort gap monitor). In 2011 the LDM should become operational for both beams. ![Schematics of the BSRT optical system.[]{data-label="bsrt_layout"}](bsrt_layout){width="1.0\linewidth"} Bunch length ============ At the moment bunch lengths in the LHC are typically of the order of 0.8 ns FWHM; the nominal value is 250 ps one sigma. In order to measure this parameter a detector with high bandwidth (several GHz) is required, but even a limited dynamic range would be sufficient. The list of candidates for this measurement is similar to the one presented before for the measurement of the satellite/ghost bunches: the wall current monitor, the strip line pick-up, a fast light detector sampling the synchrotron light vs. time, and the LDM. Wall current monitor -------------------- This device measures the image current flowing on the beam pipe.
The WCMs installed in the LHC have an upper cut-off frequency of about 3 GHz and the signals are sampled using a scope at 10 GSample/s. These characteristics are sufficient for the measurement of the bunch length; however, the non-flat transfer function of the detectors introduces tails at the end of the bunch. By analyzing the signals in the frequency domain these artifacts can be removed; Fig. \[WCM\] shows the signal directly on the scope display before processing. Strip line pick-up ------------------ The main function of this device is to measure the position of the beam with high temporal accuracy; in particular it allows the study of the head-tail oscillations of the beam, which provide insights into the stability of the beams and also a way to measure the chromaticity (variation of the betatron tune vs. the momentum error of the particle). The device is composed of 4 electrodes, 30 cm long, mounted at 90${}^\circ$. The amplitude of the signal on each electrode depends on the instantaneous beam current as well as on the distance between the bunch and the electrode. By summing the signals of opposite electrodes one obtains a signal proportional only to the beam current, while subtracting the signals of opposite electrodes and dividing by the sum yields a signal proportional to the position only. The bandwidth is similar to that of the WCM, mainly limited by the characteristics of the feed-through and by resonances in the electrodes. The acquisition is in fact performed with the same type of scope used for the WCM. The advantage of the strip line is that its transfer function is almost flat. Another characteristic of the strip line detectors has already been mentioned: a second pulse, inverted in polarity, follows the first one, the distance between the two being determined by the length of the electrodes (to be precise, a delay of twice the length of the line divided by the speed of light); see Fig. \[strip\_line\].
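The sum/difference combination described above can be written compactly. This is a minimal sketch with our own function name; a real monitor applies a calibrated sensitivity factor to convert the normalized difference into millimetres.

```python
def stripline_signals(top, bottom):
    """Combine the signals of two opposite electrodes.

    The sum tracks the instantaneous beam current; the normalized
    difference tracks the transverse position, in units of the
    (uncalibrated) monitor sensitivity.
    """
    current = top + bottom
    position = (top - bottom) / (top + bottom)
    return current, position
```

Normalizing by the sum makes the position estimate independent of the bunch intensity, which is why the ratio, rather than the bare difference, is used.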
Of course both the WCM and the strip line can provide single-passage as well as averaged measurements. LDM --- As seen before, the LDM allows the sampling of the whole LHC ring with high time accuracy; with the present system a 50 ps resolution is possible. This temporal resolution is enough to measure the bunch length, provided the beam is stable over the integration time needed to acquire a profile, typically a few seconds. It has already been mentioned that the intensity of the synchrotron light could be acquired directly by a photon detector instead of performing single-photon counting. This technique has however not yet been used in the LHC, as it would carry all the problems of, for example, the strip line pick-ups (cable transfer functions, fast sampling) without adding substantial advantages. Transverse emittance ==================== Another important factor in the determination of the luminosity from the machine parameters is the transverse emittance. Several instruments have been installed in the LHC for this purpose. In particular, the instrument used to measure the beam size accurately, and thus the beam emittance, is the wire scanner. This instrument, however, only produces measurements on demand and cannot be used when the total beam intensity is above $2 \times 10^{13}$ protons. In order to cope with the limitations of the wire scanner, two different monitors capable of continuous monitoring have been installed: the synchrotron light telescope (BSRT) and the rest gas ionization monitor (BGI). All these devices only measure the transverse beam sizes; in order to calculate the emittance, knowledge of the optics of the machine at the location of the devices is needed, in particular the betatron function. Thanks to accurate modeling and precise measurements, the beta functions are known with an error between 5 and 10% all around the machine.
Wire scanner ------------ This is the reference device for emittance measurement, since the systematic errors of this technique can be controlled well. The principle is rather simple and consists of scanning a 30 $\mu$m diameter carbon wire across the beam at about 1 m/s. The interaction of the particles in the beam with the nuclei in the wire produces high-energy secondary particles that are detected by a scintillator-photomultiplier assembly some 10 m downstream of the scanner. The beam profile is inferred from the amplitude of the PMT signal as a function of the wire position. Because the wire scanner needs to intercept the beam in order to make a measurement, the range of beam intensities where it can be used is limited. Two situations need to be avoided: overheating the wire to the point where it breaks, and inducing secondary particles and beam losses of intensity sufficient to quench the neighboring superconducting magnets. At injection energy the first effect dominates, while at top energy it is the second. The intensities of these two limits are rather close, and for this reason only one value (the smaller) is used, imposing an upper limit of $2 \times 10^{13}$ particles per beam (about 200 nominal bunches) [@ws_limits]. The accuracy of the wire scanner in the LHC has not yet been studied; however, a detailed study on similar devices was carried out in the SPS a few years ago, yielding an error of the order of 1% on the beam emittance for beams of $\sigma=1$ mm transverse size [@ws_accuracy]. At the end of 2010 the bunch-by-bunch acquisition mode was also commissioned; the wire scanners can thus now be used either to measure the average over all bunches or the profile of individual bunches. BSRT ---- The two synchrotron light telescopes are installed at point 4 and take advantage of the D3 dipoles used to separate the beams around the RF cavities.
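Returning to the wire scanner: the extraction of a beam size from the measured profile can be sketched as a moment calculation on the PMT amplitude vs. wire position. This is illustrative only (function name ours); the operational analysis fits a Gaussian with background subtraction.

```python
import math

def profile_rms(positions, amplitudes):
    """Estimate beam centre and rms size from a wire-scanner profile.

    The profile is the PMT amplitude sampled as a function of the wire
    position; centre and width are its first and second moments.
    """
    total = sum(amplitudes)
    mean = sum(x * a for x, a in zip(positions, amplitudes)) / total
    var = sum(a * (x - mean) ** 2
              for x, a in zip(positions, amplitudes)) / total
    return mean, math.sqrt(var)
```

A moment-based width is more sensitive to tails and baseline offsets than a fit, which is one reason a Gaussian fit is preferred when the profile shape is known.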
Since at injection energy the spectrum from these dipoles is in the far infrared, two undulators have been developed and installed at the entrance of the D3. These special magnets provide sufficient radiation in the visible range up to about 1 TeV, where the radiation from the dipole magnet takes over; Fig. \[bsrt\_layout\] shows a simplified sketch of the BSRT setup. The imaging requires a complex mirror-based optical telescope, and since it cannot be accessed when beam is present, many components have to be adjusted remotely. The image acquisition is performed by an intensified camera whose image intensifier can be gated to a single bunch, allowing single-bunch, single-turn acquisition. By scanning the gate delay all the bunches can be scanned in turn; this process is however slow, since the acquisition system is limited to one image per second. This type of bunch scan was performed regularly at the end of the 2010 run. Another camera, the fast camera, also allows single-bunch, single-turn measurements, but in this case images can be acquired at over 11 kHz, allowing the acquisition of the full ring in a fraction of a second. The fast camera was not installed in 2010, but will be made operational during 2011. Due to the complexity of the optical system and the many constraints, the optical resolution of the telescope is intrinsically limited to a few hundred microns [@bsrt_alan]; this limit has not yet been achieved and the reasons are not entirely understood [@bsrt_first]. The point spread functions of the BSRTs have been determined by comparing the sizes measured by the BSRT and the wire scanners; these PSFs are then deconvoluted from the measured values: $$\label{eq:PSF} \sigma_{beam}=\sqrt{\sigma_{meas}^2-PSF^2}$$ Presently the PSF values are different for the two beams and for the two planes, but are all around 500 $\mu$m.
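Eq. \[eq:PSF\] is a subtraction in quadrature, which can be implemented directly (helper name and numerical values are ours; note that it is only defined when the measured size exceeds the PSF):

```python
import math

def deconvolve_psf(sigma_meas_um, psf_um):
    """Quadrature subtraction of the point spread function:
    sigma_beam = sqrt(sigma_meas^2 - PSF^2), all in micrometres."""
    if sigma_meas_um <= psf_um:
        raise ValueError("measured size not above the PSF: cannot deconvolve")
    return math.sqrt(sigma_meas_um**2 - psf_um**2)
```

With a PSF around 500 $\mu$m, a measured size of 700 $\mu$m corresponds to a true beam size near 490 $\mu$m, illustrating how sensitive the deconvolution is when the beam size approaches the PSF.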
![Evolution of the B1 beam emittance during a fill as measured by the BSRT and the wire scanner[]{data-label="BSRT_WS"}](bsrt_ws){width="1.0\linewidth"} ![Evolution of the B2 beam emittance during a fill as measured by the BGI, BSRT and the wire scanner. The vertical BGI follows the BSRT and the WS while the horizontal one is quite off.[]{data-label="BGI"}](bgi){width="1.0\linewidth"} Rest gas ionization monitor --------------------------- The BGI allows the measurement of the transverse projection of the beam in one direction (horizontal or vertical). The particles in the beam leave behind an ionized column where the ion (free-electron) density reproduces the density of the beam. An electric field drifts the electrons toward a multi-channel plate (MCP), while a magnetic field, parallel to the electric one, guides the electrons and avoids smearing due to the thermal velocity and beam space charge effects. The MCP multiplies the impinging electrons, which are imaged on a phosphor screen from where they can be acquired using an intensified camera [@bgi]. This device is very sensitive to many effects (beam space charge, electron cloud, stray fields and field inhomogeneities), but if all parameters are well controlled the accuracy can be high. The problem with the BGI is that in order to obtain sufficient signal either a large number of particles in the beam is needed or a local pressure bump must be created. A local pressure bump will also increase beam losses locally, imposing stringent controls and limits. Due to these constraints, the four BGIs installed in the LHC (one per beam and per plane) could not be fully commissioned. The results obtained so far show that for some devices the agreement between BGI, BSRT and WS is good, while for others it is quite off. The reasons for these discrepancies will be investigated in 2011.
Beam halo and tails =================== In the BSRT design it is foreseen to install an optical mask in order to block the core of the beam and allow the observation of the tails, a technique known in astronomy as a coronagraph, used in solar observations. At the moment the required hardware is not installed, as this functionality is not considered a high priority, but if really needed it could be developed in a reasonable amount of time. The overall performance of this halo monitor is ultimately limited by the amount of scattered light in the optical components and in general inside the telescope hatch. Beam position at the IP ======================= In order to monitor the beam position at the IP, dedicated beam position monitors are installed just outside the experiments and before the triplets. Around all four interaction regions strip line pick-ups are installed; the choice of this type of device is dictated by the fact that in multi-bunch mode an incoming and an outgoing bunch can pass through the detector with a very small time difference, making it impossible to disentangle the signals of the two beams. The strip line devices have the advantage that, although each strip has an upstream and a downstream port, the beam induces a pulse only in the upstream port, so the signals of the two beams can be read out independently from the two ports. The disadvantage of this method is that the electronic chains used to acquire the signals differ between the two beams, adding the possibility of an unknown electronic offset and making the overlap of the two beams more difficult. For this reason, around IP1 and IP5 additional button pick-ups have been installed; these devices have the advantage that the readout chain is the same for the incoming and the outgoing beam, so that any electronic offset cancels out. The disadvantage is that the bunch spacing must be larger than 150 ns.
In order to calculate the overlap one can use a simple ballistic model; the experiments, however, have strong magnetic fields which can complicate the situation, especially for LHCb and ALICE, where spectrometer magnets exist. The orbit-mode resolution for the strip line detectors is of the order of 1 $\mu$m, and for the buttons it is slightly worse, but the electronic offset can be substantially larger than this value. Conclusions =========== In order to compute the luminosity of the LHC beams, several parameters must be measured accurately. In particular, the distribution of charges around the machine needs to be precisely known in order to calculate the fraction of colliding charges. The wall current monitor and the longitudinal density monitor are both able to provide this information, with the LDM probably able to give better accuracy, also because it can measure the DC component as well, while the WCM is limited to AC. The other important parameter to measure and monitor is the transverse emittance, and for this purpose the wire scanners and the BSRT provide the required information. [9]{} A. Jeff et al., “Design for a Longitudinal Density Monitor for the LHC”, Proceedings of the IPAC Conference, Kyoto, Japan (2010), MOPE055, and CERN-ATS-2010-110 M. Sapinski, T. Kroyer, “Operational limits of wire scanners on LHC beam”, Proceedings of the Beam Instrumentation Workshop, Lake Tahoe, California (2008), p. 383 F. Roncarolo, B. Dehning, C. Fischer and J. Koopmann, “Accuracy of the SPS transverse emittance measurements”, CERN-AB-2005-081 A.S. Fisher, “Expected Performance of the LHC Synchrotron-Light Telescope (BSRT) and Abort-Gap Monitor (BSRA)”, LHC-Performance-Note-014 T. Lefevre et al., “First Beam Measurements with the LHC Synchrotron Light Monitors”, Proceedings of the IPAC Conference, Kyoto, Japan (2010), p. 1104 and CERN-ATS-2010-108 J.
Koopman et al., “Design and Tests of a New Rest Gas Ionisation Profile Monitor Installed in the SPS as a Prototype for the LHC”, AIP Conf. Proc. 732 (2004), pp. 133-140. [^1]: This contribution is presented by the author on behalf of the BE-BI group
--- abstract: 'The International Linear Collider (ILC) is the next large scale project in accelerator particle physics. Colliding electrons with positrons at energies from 0.3 TeV up to about 1 TeV, the ILC is expected to provide the accuracy needed to complement the LHC data and extend the sensitivity to new phenomena at the high energy frontier and answer some of the fundamental questions in particle physics and in its relation to Cosmology. This paper reviews some highlights of the ILC physics program and of the major challenges for the accelerator and detector design.' address: | Department of Physics, University of California at Berkeley and\ Lawrence Berkeley National Laboratory\ Berkeley, CA 94720, USA\ MBattaglia@lbl.gov author: - Marco Battaglia --- The International Linear Collider ================================= Introduction {#sec0} ------------ Accelerator particle physics is completing a successful cycle of precision tests of the Standard Model of electro-weak interactions (SM). After the discovery of the $W$ and $Z$ bosons at the $Sp\bar{p}S$ hadron collider at CERN, the concurrent operation of hadron and $e^+e^-$ colliders has provided a large set of precision data and new observations. Two $e^+e^-$ colliders, the SLAC Linear Collider (SLC) at the Stanford Linear Accelerator Center (SLAC) and the Large Electron Positron (LEP) collider at the European Organization for Nuclear Research (CERN), operated throughout the 1990’s and enabled the study of the properties of the $Z$ boson in great detail. Operation at LEP up to 209 GeV, the highest collision energy ever achieved in electron-positron collisions, provided detailed information on the properties of $W$ bosons and the strongest lower bounds on the mass of the Higgs boson and of several supersymmetric particles. 
The collision of point-like, elementary particles at a well-defined and tunable energy offers advantages for precision measurements, such as those conducted at LEP and SLC, over proton colliders. On the other hand, experiments at hadron machines, such as the Tevatron $p \bar p$ collider at Fermilab, have enjoyed higher constituent energies. The CDF and D0 experiments eventually observed the direct production of top quarks, whose mass had been predicted on the basis of precision data obtained at LEP and SLC. While we await the commissioning and operation of the LHC $pp$ collider at CERN, the next stage in experimentation at lepton colliders is actively under study. For more than two decades, studies for a high-luminosity accelerator able to collide electrons with positrons at energies of the order of 1 TeV have been carried out world-wide. The path towards the ILC {#sec1} ------------------------ The concept of an $e^+e^-$ linear collider dates back to a paper by Maury Tigner [@Tigner:1965] published in 1965, when the physics potential of $e^+e^-$ collisions had not yet been appreciated in full. This seminal paper envisaged collisions at 3-4 GeV with a luminosity competitive with that of the SPEAR ring at SLAC, i.e. $3 \times 10^{30}$ cm$^{-2}$ s$^{-1}$. [*A possible scheme to obtain $e^-e^-$ and $e^+e^-$ collisions at energies of hundreds of GeV*]{} is the title of a paper [@Amaldi:1976] by Ugo Amaldi, published a decade later in 1976, which sketches the linear collider concept with a design close to that now developed for the ILC. The parameters for a linear collider, clearly recognised as the successor of $e^+e^-$ storage rings on the way to high energies, were discussed by Burt Richter at the IEEE conference in San Francisco in 1979 [@Richter:1979cq], and soon after came the proposal for the [*Single Pass Collider Project*]{}, which would become the SLC at SLAC.
From 1985, the CERN Long Range Planning Committee considered an $e^+e^-$ linear collider, based on the CLIC [@Schnell:1986ig] design, able to deliver collisions at 2 TeV with $10^{33}$ cm$^{-2}$ s$^{-1}$ luminosity, [*vis-a-vis*]{} a hadron collider, with proton-proton collisions at 16 TeV and luminosity of $1.4 \times 10^{33}$ cm$^{-2}$ s$^{-1}$, as a candidate for the new CERN project after LEP. That review process eventually led to the decision to build the LHC, but it marked an important step to establish the potential of a high energy $e^+e^-$ collider. It is important to note that it was through the contributions of several theorists, including John Ellis, Michael Peskin, Gordon Kane and others, that the requirements in terms of energy and luminosity for a linear collider became clearer in the mid 1980’s [@Ahn:1988vj]. The SLC project gave an important proof of principle for a high energy linear collider and the experience gained has shaped the subsequent designs in quite a significant way. After a decade marked by important progress in the R&D of the basic components and the setup of advanced test facilities, designs of four different concepts emerged: TESLA, based on superconducting RF cavities, the NLC/JLC-X, based on high frequency (11.4 GHz) room-temperature copper cavities, JLC-C, based on lower frequency (5.7 GHz) conventional cavities and CLIC, a multi-TeV collider based on a different beam acceleration technique, the two-beam scheme with transfer structures operating at 30 GHz. Accelerator R&D had reached the maturity to assess the technical feasibility of a linear collider project and take an informed choice of the most advantageous RF technology. The designs were considered by the International Linear Collider Technical Review Committee (ILC-TRC), originally formed in 1994 and re-convened by the International Committee for Future Accelerators (ICFA) in 2001 under the chairmanship of Greg A. Loew. 
The ILC-TRC assessed their status using common criteria, identified outstanding items needing R&D effort and suggested areas of collaboration. The TRC report was released in February 2003 [@trc] and the committee found that there were [*no insurmountable show-stoppers to build TESLA, NLC/JLC-X or JLC-C in the next few years and CLIC in a more distant future, given enough resources*]{}. Nonetheless, significant R&D remained to be done. At this stage it became clear that, to make further progress, the international effort towards a linear collider had to be focused on a single design. ICFA gave a mandate to an International Technology Recommendation Panel (ITRP), chaired by Barry Barish, to make a definite recommendation for an RF technology that would be the basis of a global project. In August 2004 the ITRP made the recommendation in favour of superconducting RF cavities [@itrp]. The technology choice, which was promptly accepted by all laboratories and groups involved in the R&D process, is regarded as a major step towards the realization of the linear collider project. Soon after it, a truly world-wide, centrally managed design effort started: the Global Design Effort (GDE) [@gde], a team of more than 60 people, with the aim of producing an ILC Reference Design Report by the beginning of 2007 and an ILC Technical Design Report by the end of 2008. The GDE responsibility now covers the detailed design concept, performance assessments, reliable international costing, industrialization plan, siting analysis, as well as detector concepts and scope. A further important step was achieved with the release of the Reference Design Report in February 2007 [@rdr]. This report includes a preliminary value estimate of the cost of the ILC in its present design and at the present level of engineering and industrialisation.
The value estimate is structured in three parts: 1.78 Billion ILC Value Units for site-related costs, such as those of tunneling in a specific region, 4.87 Billion ILC Value Units for the value of the high technology and conventional components and 13,000 person-years for the required supporting manpower. For this estimate the conversion factor is 1 ILC Value Unit = 1 US Dollar = 0.83 Euro = 117 Yen. This estimate, which is comparable to the LHC cost, when the pre-existing facilities, such as the LEP tunnel, are included, provides guidance for optimisation of both the design and the R&D to be done during the engineering phase, due to start in Fall 2007. Technical progress was paralleled by increasing support for the ILC in the scientific community. At the 2001 APS workshop [*The Future of Physics*]{} held in Snowmass, CO, a consensus emerged for the ILC as the right project for the next large scale facility in particle physics. This consensus resonated and expanded in a number of statements by highly influential scientific advisory panels world-wide. The ILC role in the future of scientific research was recognised by the OECD Consultative Group on High Energy Physics [@oecd], while the DOE Office of Science ranked the ILC as its top mid-term project. More recently the EPP 2010 panel of the US National Academy of Sciences, in a report titled [*Elementary Particle Physics in the 21$^{st}$ Century*]{} has endorsed the ILC as the next major experimental facility to be built and its role in elucidating the physics at the high energy frontier, independently from the LHC findings [@epp2010]. Nowadays, the ILC is broadly regarded as the highest priority for a future large facility in particle physics, needed to extend and complement the LHC discoveries with the accuracy which is crucial to understand the nature of New Physics, test fundamental properties at the high energy scale and establish their relation to other fields in physical sciences, such as Cosmology. 
A matching program of physics studies and detector R&D efforts has been in place for the past decade; it is now developing new, accurate and cost-effective detector designs from proof of concept towards the stage of engineering readiness needed for adoption in the ILC experiments. ILC Accelerator Parameters {#sec2} -------------------------- ### ILC Energy {#sec2.1} The first question which emerges in defining the ILC parameters is the required centre-of-mass energy $\sqrt{s}$. It is here that we most need physics guidance to define the next thresholds at, and beyond, the electro-weak scale. The only threshold which, at present, is well defined numerically is that of top-quark pair production at $\sqrt{s} \simeq$ 350 GeV. Beyond it, there is a strong prejudice, supported by precision electro-weak and other data, that the Higgs boson should be light and new physics thresholds may exist between the electro-weak scale and approximately 1 TeV. If indeed the SM Higgs boson exists and the electro-weak data is not affected by new physics, its mass $M_H$ is expected to be below 200 GeV, as discussed in section \[sec3.1\]. Taking into account that the main Higgs production process is in association with a $Z^0$ boson, the maximum of the $e^+e^- \to H^0 Z^0$ cross section varies from $\sqrt{s}$ = 240 GeV to 350 GeV for 120 GeV $< M_H <$ 200 GeV. On the other hand, we know that the current SM needs to be extended by some New Physics. Models of electroweak symmetry breaking contain new particles in the energy domain below 1 TeV. More specifically, if Supersymmetry exists and is responsible for the dark matter observed in the Universe, we expect that a significant fraction of the supersymmetric spectrum would be accessible at $\sqrt{s}$ = 0.5-1.0 TeV. In particular, the ILC should be able to study in detail those particles determining the dark matter relic density in the Universe by operating at energies not exceeding 1 TeV, as discussed in section \[sec3.2\].
Another useful perspective on the ILC energy is an analysis of the mass-scale sensitivity for new physics vs. the $\sqrt{s}$ energy for lepton and hadron colliders, in view of their synergy. The study of electro-weak processes at the highest available energy offers a window on mass scales well beyond its kinematic reach. A comparison of the mass-scale sensitivity for various new physics scenarios as a function of the centre-of-mass energy for $e^+e^-$ and $pp$ collisions is given in section \[sec3.3\]. These and similar considerations, which emerged in the course of the world-wide studies on physics at the ILC, motivate the choice of $\sqrt{s}$ = 0.5 TeV as the reference energy parameter, while requiring the ILC to be able to operate, with substantial luminosity, at 0.3 TeV as well and to be upgradable up to approximately 1 TeV. It is useful to consider these energies in an historical perspective. In 1954 Enrico Fermi gave a talk at the American Physical Society, of which he was chair, titled [*What can we learn with high energy accelerators?*]{}. In that talk Fermi considered a proton accelerator with a radius equal to that of the Earth and 2 T bending magnets, thus reaching a beam energy of $5 \times 10^{15}$ eV [@Maiani:2001wi]. Stanley Livingstone, who had built with Ernest O. Lawrence the first circular accelerator at Berkeley in 1930, had formulated an empirical linear scaling law for the available centre-of-mass energy vs. construction year and cost. Using the Livingstone curve, Fermi predicted that such an accelerator could be built in 1994 at a cost of 170 billion \$. We have learned not only that such an accelerator could not be built, but that accelerator physics has irrevocably fallen off the Livingstone curve, even in its revised version, which includes data up to the 1980's. As horizons expanded, each step has involved more and more technical challenges and has required more resources. The future promises to be along this same path.
This underlines the need for coherent and responsible long-term planning while sustaining a rich R&D program in both accelerator and detector techniques. The accelerator envisaged by Enrico Fermi was a circular machine, as almost all machines operating at the high energy frontier still are. Now, as is well known, charged particles undergoing a centripetal acceleration $a = v^2/R$ radiate at a rate $P = \frac{1}{6 \pi \epsilon_0} \frac{e^2 a^2}{c^3} \gamma^4$. If the radius $R$ is kept constant, the energy loss per turn is the above rate $P$ times $t = 2 \pi R/v$, the time spent in the bending sections of the accelerator. The energy loss for electrons is $W = 8.85 \times 10^{-5} \frac{E^4 \mathrm{(GeV^4)}}{R \mathrm{(km)}}$ MeV per turn, while for protons it is $W = 7.8 \times 10^{-3} \frac{E^4 \mathrm{(TeV^4)}}{R \mathrm{(km)}}$ keV per turn. Since the energy transferred per turn by the RF cavities to the beam is constant, $G \times 2 \pi R \times F$, where $G$ is the cavity gradient and $F$ the tunnel fill factor, for each value of the accelerator ring radius $R$ there exists a maximum energy $E_{max}$ beyond which the energy loss exceeds the energy transferred. In practice, before this value of $E_{max}$ is reached, the real energy limit is set by the power dumped by the beam as synchrotron radiation. To give a quantitative example, in the case of the LEP ring, with a radius $R$ = 4.3 km, a beam of energy $E_{beam}$ = 250 GeV would lose 80 GeV per turn. Gunther Voss is thought to be the author of a plot comparing the guessed cost of a storage ring and of a linear collider as a function of the $e^+e^-$ centre-of-mass energy. A $\sqrt{s}$ = 500 GeV storage ring, which would have cost an estimated 14 billion CHF in the 1970's, is aptly labelled the [Crazytron]{} [@Treille:2002iu]. LEP filled the last window of opportunity for a storage ring at the high energy frontier.
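The two scaling laws above can be checked numerically against the LEP example quoted in the text (the function names are ours):

```python
def loss_per_turn_electron_MeV(E_GeV, R_km):
    """Synchrotron energy loss per turn for electrons:
    W = 8.85e-5 * E^4[GeV] / R[km], in MeV."""
    return 8.85e-5 * E_GeV**4 / R_km

def loss_per_turn_proton_keV(E_TeV, R_km):
    """Synchrotron energy loss per turn for protons:
    W = 7.8e-3 * E^4[TeV] / R[km], in keV."""
    return 7.8e-3 * E_TeV**4 / R_km
```

For $E_{beam}$ = 250 GeV and $R$ = 4.3 km the electron formula gives about 80 GeV per turn, as stated; the proton formula for a 7 TeV beam in a ring of the same radius gives only a few keV per turn, illustrating the $(m_e/m_p)^4$ suppression that keeps synchrotron radiation negligible for proton machines.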
Beyond LEP-2 energies the design must be a linear collider, where no bending is applied to the accelerated particles. Still, the accelerator length is limited by a number of constraints including costs, alignment and siting. Therefore technology still defines the maximum reachable energy at the ILC. The ILC design is based on superconducting (s.c.) radio-frequency (RF) cavities. While s.c. cavities had been considered already in the 1960's, it was Ugo Amaldi who first proposed a fully s.c. linear collider in 1975 [@Amaldi:1976]. By the early 1990's, s.c. cavities already equipped one accelerator, TRISTAN at KEK in Japan, while two further projects were in progress: CEBAF in Newport News and the LEP-2 upgrade at CERN. LEP-2 employed a total of 288 s.c. RF cavities, providing an average gradient of 7.2 MV/m. It was the visionary effort of Bjorn Wiik that promoted, from 1990, the TESLA collaboration, with the aim of developing s.c. RF cavities pushing the gradient higher by a factor of five and the production costs down by a factor of four, thus reducing the cost per MV by a factor of twenty. Such a reduction in cost was absolutely necessary to make a high energy collider based on s.c. cavities feasible. Within less than a decade, 1.3 GHz pure-niobium cavities achieved gradients in excess of 35 MV/m. This opened the way to their application to an $e^+e^-$ linear collider able to reach centre-of-mass energies of the order of 1 TeV, as presented in detail in the TESLA proposal published in 2001 [@Brinkmann:2001qn] and recommended for the ILC by the ITRP in 2004 [@itrp]. Today, the ILC baseline design aims at matching technical feasibility to cost optimisation. One of the major goals of the current effort in the ILC design is to understand enough about its costs to provide a reliable indication of the scale of funding required to carry out the ILC project.
Preparing a reliable cost estimate is a great challenge for a project to be carried out as a truly world-wide effort, at the stage of a conceptual design that still lacks much of the detailed engineering design, as well as agreements on responsibility and cost sharing between the partners and a precise industrialisation plan. Still, having good cost information as soon as possible, to initiate negotiations with the funding agencies, is of great importance. An interesting example of the details entering this process is the optimisation of cost vs. cavity gradient for a 0.5 TeV collider. The site length scales inversely with the gradient $G$, while the cost of the cryogenics scales as $G^2/Q_0$, resulting in a minimum cost for a gradient of 40 MV/m, corresponding to a tunnel length of 40 km, and a fractional cost increase of 10 % for gradients of 25 MV/m or 57 MV/m. The chosen gradient of 35 MV/m, which is matched by the average performance of the most recent prototypes after electro-polishing, gives a total tunnel length of 44 km with a cost increment from the minimum of just 1 %. Beyond 1 TeV, the extension of conventional RF technology is more speculative. In order to attain collisions at energies in excess of about 1 TeV, with high luminosity, significantly higher gradients are necessary. As the gradient of s.c. cavities is limited below $\sim$ 50 MV/m, other avenues must be explored. The CLIC technology [@Assmann:2000hg], currently being developed at CERN and elsewhere, may offer gradients of the order of 150 MV/m [@wuensch], allowing collision energies in the range 3-5 TeV with a luminosity of $10^{35}$ cm$^{-2}$ s$^{-1}$, which would support a compelling physics program [@Battaglia:2004mw]. While RF cavities are limited to accelerating fields of order 100-200 MV/m or below, laser-wakefield accelerators are capable, in principle, of producing fields of 10-100 GV/m. 
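The shallowness of the cost minimum can be illustrated with a toy model (this is not the actual ILC costing): take $C(G) = a/G + c\,G^2$, with the first term for the site/tunnel and the second for cryogenics at fixed $Q_0$. Minimising gives $G^* = (a/2c)^{1/3}$, and the cost relative to the minimum depends only on $x = G/G^*$. Note that the real costing includes gradient-independent terms, which flatten the curve further toward the $\sim$10 % penalties quoted in the text.

```python
# Toy cost model: C(G) = a/G + c*G^2, minimum at G* = (a/2c)^(1/3).
# Relative to the minimum, C(G)/C(G*) = 2/(3x) + x^2/3 with x = G/G*.

def rel_cost(g, g_star):
    """Cost relative to the minimum of C = a/G + c*G^2 located at g_star."""
    x = g / g_star
    return 2.0 / (3.0 * x) + x**2 / 3.0

g_star = 40.0  # MV/m, the optimum quoted in the text
for g in (25.0, 35.0, 57.0):
    print(f"G = {g:4.0f} MV/m -> cost / min = {rel_cost(g, g_star):.3f}")
```

Even in this crude model, operating at 35 MV/m instead of 40 MV/m costs only about 2 % extra, consistent in spirit with the 1 % increment quoted for the full costing.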
Recently a 1 GeV $e^-$ beam has been accelerated over just 3.3 cm using a 40 TW peak-power laser pulse [@Leemans:2006], thus opening a possible path towards ultra-high energies in $e^+e^-$ collisions in a more distant future.

### ILC Luminosity {#sec2.2}

The choice of a linear collider, rather than a circular storage ring, while solving the problem of the maximum reachable energy, introduces the challenge of achieving collisions with the required luminosity. The luminosity, $\cal{L}$, defined as the proportionality factor between the number of events produced and the process cross section $\sigma$, has requirements which depend on the typical values of $s$-channel cross sections and therefore scale as $1/s$. Luminosity requirements were first outlined in the 1980s [@Richter:1981uu; @Amaldi:1987xt] as ${\cal{L}} \simeq \frac{2 E_{beam}}{\mathrm{TeV}} \times 10^{33}$ cm$^{-2}$ s$^{-1}$, based on the estimated discovery potential. But in the present vision of the ILC role in probing the high-energy frontier, new requirements must be considered. One example is the precision study of electro-weak processes to look for deviations from the SM predictions due to the effect of new physics at high scales. The $e^+e^- \to b \bar b$ cross section at 1 TeV is just 96 fb, which corresponds to fewer than $10^3$ events per year at $10^{33}$ cm$^{-2}$ s$^{-1}$: certainly insufficient for the kind of precision measurements expected from the ILC. Another example is offered by one of the reactions most unique to the ILC: double-Higgs production, $e^+e^- \to HHZ$, sensitive to the Higgs self-coupling, which has a cross section of only about 0.2 fb at 0.5 TeV. Therefore a luminosity of $10^{34}$ cm$^{-2}$ s$^{-1}$ or more is required as a baseline parameter. 
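The event-rate arithmetic behind these luminosity requirements is a one-liner: events = cross section $\times$ instantaneous luminosity $\times$ live time. A minimal sketch, taking the conventional $10^7$ s accelerator year as an assumption:

```python
def events_per_year(sigma_fb, lumi_cm2_s, seconds=1e7):
    """Expected events: cross section x instantaneous luminosity x live time.

    1 fb = 1e-39 cm^2; 1e7 s is a conventional accelerator year.
    """
    return sigma_fb * 1e-39 * lumi_cm2_s * seconds

# e+e- -> b bbar at 1 TeV (96 fb, as quoted in the text):
print(events_per_year(96, 1e33))   # ~1e3 events/year at 1e33 cm^-2 s^-1
# double-Higgs production, ~0.2 fb at 0.5 TeV, at the ILC baseline luminosity:
print(events_per_year(0.2, 1e34))  # only a handful of events per year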
The luminosity can be expressed as a function of the accelerator parameters as: $${\cal{L}} = f_{rep} n_b \frac{N^2}{4 \pi \sigma_x \sigma_y}.$$ Since in a linear machine the beams are collided only once and then dumped, the collision frequency $f_{rep}$ is small, and high luminosity must be achieved by increasing the number of particles in a bunch $N$ and the number of bunches $n_b$, and by decreasing the transverse beam size $\sigma$. Viable values for $N$ are limited by wake-field effects, and the ILC parameters foresee the same number of electrons in a bunch as LEP had, though the ILC aims at a luminosity three orders of magnitude higher. Therefore, the increase must come from a larger number of bunches and a smaller transverse beam size. The generation of beams of small transverse size, their preservation during acceleration and their focusing to spots of nanometer size at the interaction region present formidable challenges which the ILC design must solve. A small beam size also induces beam-beam interactions. On one hand, the beam self-focusing due to the electrostatic attraction of particles of opposite charge enhances the luminosity. But beam-beam interactions also result in increased beamstrahlung, with a larger energy spread of the colliding particles, a degraded luminosity spectrum and higher backgrounds. Beamstrahlung is the energy loss due to radiation triggered by the bending of particle trajectories in the interaction with the charged particles of the incoming bunch [@Noble:1986yz]. The mean beamstrahlung energy loss, which has to be minimised, is given by: $$\delta_{BS} \simeq 0.86 \frac{r_e^3}{2 m_0 c^2} \frac{E_{cm}}{\sigma_z} \frac{N_b^2}{(\sigma_x+\sigma_y)^2}.$$ Since the luminosity scales as $\frac{1}{\sigma_x \sigma_y}$, while the beamstrahlung energy loss scales as $\frac{1}{(\sigma_x + \sigma_y)^2}$, it is advantageous to choose a large beam aspect ratio, with the vertical beam size much smaller than the horizontal one. 
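The luminosity formula above can be evaluated for ILC-like numbers. The parameter values below (5 Hz repetition rate, 2625 bunches per pulse, $2\times10^{10}$ particles per bunch, a 640 nm $\times$ 5.7 nm spot) are approximate, illustrative choices, quoted here only to show that the design lands at the $10^{34}$ cm$^{-2}$ s$^{-1}$ scale:

```python
import math

def luminosity(f_rep, n_b, N, sigma_x_nm, sigma_y_nm):
    """L = f_rep * n_b * N^2 / (4 pi sigma_x sigma_y), in cm^-2 s^-1."""
    sx = sigma_x_nm * 1e-7  # nm -> cm
    sy = sigma_y_nm * 1e-7
    return f_rep * n_b * N**2 / (4.0 * math.pi * sx * sy)

# Illustrative ILC-like parameters (approximate values, for scale only):
L = luminosity(5.0, 2625, 2e10, 640.0, 5.7)
print(f"L ~ {L:.1e} cm^-2 s^-1")  # of order 1e34
```

Note the large aspect ratio of the spot ($\sigma_x/\sigma_y \sim 100$), reflecting the flat-beam strategy discussed above: it keeps the product $\sigma_x\sigma_y$ small for luminosity while the sum $\sigma_x+\sigma_y$, which controls beamstrahlung, stays comparatively large.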
The parameter optimisation for luminosity can be further understood by expressing the luminosity in terms of the beam power $P = f_{rep} N E_{cm} = \eta P_{AC}$ and the beamstrahlung energy loss as: $${\cal{L}} \propto \frac{\eta P_{AC}}{E_{cm}} \sqrt{\frac{\delta_{BS}}{\epsilon_y}}H_D$$ which highlights the dependence on the cavity efficiency $\eta$ and the total power $P_{AC}$. The $H_D$ term is the pinch enhancement factor, which accounts for the bunch attraction in the collision of oppositely charged beams. In summary, since the amount of available power is necessarily limited, the main handles on luminosity are $\eta$ and $\epsilon_y$. The efficiency for transferring power from the plug to the beam is naturally higher for s.c. than for conventional copper cavities, so more relaxed collision parameters can be adopted for a s.c. linear collider delivering the same luminosity. The main beam parameters for the ILC baseline design are given in Table \[tab:params\].

ILC Physics Highlights {#sec3}
----------------------

The ILC physics program, as we can anticipate it at present, is broad and diverse, compelling and challenging. The ILC is being designed for operation at 0.5 TeV with the potential to span the largest range of collision energies, from the $Z^0$ peak at 0.091 TeV up to 1 TeV, to collide electrons with positrons, but optionally also electrons with electrons, photons with photons and photons with electrons, and to combine various polarization states of the electron and positron beams. Various reports discussing the linear collider physics case, including results of detailed physics studies, have been published in the last few years [@Ahn:1988vj; @Murayama:1996ec; @Aguilar-Saavedra:2001rg; @Abe:2001wn; @Abe:2001gc; @Dawson:2004xz; @Battaglia:2004mw]. 
Here, I shall focus on three of the main ILC physics themes: the detailed study of the Higgs boson profile, the determination of the neutralino dark matter density in the Universe from accelerator data, and the sensitivity to new phenomena beyond the ILC kinematic reach through the analysis of two-fermion production at the highest $\sqrt{s}$ energy. The results discussed in the following have been obtained mostly using realistic, yet parametric, simulation of the detector response. Only a few analyses have been carried out which include the full set of physics and machine-induced backgrounds on fully simulated and reconstructed events. With the progress of the detector concept activities and the definition of well-defined benchmark processes, this is becoming one of the priorities for the continuation of physics and detector studies.

### The Higgs Profile at the ILC {#sec3.1}

Explaining the origin of mass is one of the great scientific quests of our time. The SM addresses this question by the Higgs mechanism [@Higgs]. The first direct manifestation of the Higgs mechanism will be the existence of at least one Higgs boson. The observation of a new spin-0 particle would represent a first sign that the Higgs mechanism of mass generation is indeed realised in Nature. This has motivated a large experimental effort, from LEP-2 to the Tevatron and, soon, the LHC, actively backed up by new and more accurate theoretical predictions. After a Higgs discovery, which we anticipate will be possible at the LHC, full validation of the Higgs mechanism can only be established by an accurate study of the Higgs boson production and decay properties. It is here that the ILC potential in precision physics will be crucial, through a detailed study of the Higgs profile [@Heinemeyer:2005gs]. The details of this study depend on the Higgs boson mass, $M_H$. 
In the SM, $M_H = \sqrt{2 \lambda}\, v$, where the Higgs field expectation value $v$ is determined as $(\sqrt{2}G_F)^{-1/2} \approx 246$ GeV, while the Higgs self-coupling $\lambda$ is not specified, leaving the mass as a free parameter. However, we have strong indications that $M_H$ must be light. The Higgs self-coupling behaviour at high energies [@triv], the Higgs field contribution to precision electro-weak data [@ewwg:2005di] and the results of direct searches at LEP-2 [@Barate:2003sz] at $\sqrt{s} \ge$ 206 GeV all point towards a light Higgs boson. In particular, the study of precision electro-weak data, which are sensitive to the logarithmic contribution of the Higgs mass to radiative corrections, is based on several independent observables, including masses ($m_{top}$, $M_W$, $M_Z$), lepton and quark asymmetries at the $Z^0$ pole, and the $Z^0$ lineshape and partial decay widths. The fit to eighteen observables results in a 95% C.L. upper limit on the Higgs mass of 166 GeV, which becomes 199 GeV when the lower limit from the direct searches at LEP-2, $M_H >$ 114.4 GeV, is included. As a result, current data indicate that the Higgs boson mass should be in the range 114 GeV $< M_H <$ 199 GeV. It is encouraging to observe that if the same fit is repeated, but this time excluding $m_{top}$ or $M_W$, the results for their values, 178$^{+12}_{-9}$ GeV and 80.361$\pm$0.020 GeV respectively, are in very good agreement with those obtained from the direct determinations, $m_{top}$ = 171.4$\pm$2.1 GeV and $M_W$ = 80.392$\pm$0.029 GeV. At the ILC the Higgs boson can be observed in the Higgs-strahlung production process $e^+e^- \rightarrow HZ$ with $Z \rightarrow \ell^+\ell^-$, independently of its decay mode, through the distinctive peak in the di-lepton recoil mass distribution. A data set of 500 fb$^{-1}$ at $\sqrt{s}$ = 350 GeV, corresponding to four years of ILC running, provides a sample of 3500-2200 Higgs particles produced in the di-lepton $HZ$ channel, for $M_H$ = 120-200 GeV. 
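The recoil-mass technique mentioned here uses only the beam energy and the dilepton kinematics: in the CM frame, $M_{rec}^2 = s + M_{\ell\ell}^2 - 2\sqrt{s}\,E_{\ell\ell}$, so the Higgs mass peak appears without ever reconstructing the Higgs decay. A minimal sketch; the toy event below is constructed by hand so that the dilepton pair has the $Z$ mass and recoils against a 120 GeV system at $\sqrt{s}$ = 350 GeV:

```python
import math

def recoil_mass(sqrt_s, p_lplus, p_lminus):
    """Recoil mass against the dilepton system in e+e- -> Z(ll) + X.

    Four-vectors are (E, px, py, pz) in GeV, in the CM frame:
      M_rec^2 = s + M_ll^2 - 2 * sqrt(s) * E_ll
    """
    E = p_lplus[0] + p_lminus[0]
    px = p_lplus[1] + p_lminus[1]
    py = p_lplus[2] + p_lminus[2]
    pz = p_lplus[3] + p_lminus[3]
    m_ll_sq = E**2 - px**2 - py**2 - pz**2
    m_rec_sq = sqrt_s**2 + m_ll_sq - 2.0 * sqrt_s * E
    return math.sqrt(max(m_rec_sq, 0.0))

# Toy event at sqrt(s) = 350 GeV: a Z -> l+l- recoiling against a 120 GeV system.
lp = (83.154,  45.594, 0.0, 69.540)
lm = (83.154, -45.594, 0.0, 69.540)
print(recoil_mass(350.0, lp, lm))  # ~120 GeV
```

Because nothing about the recoiling system enters the formula, the method is decay-mode independent, which is the origin of the model-independent Higgs observability claimed in the text.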
Taking into account the SM backgrounds, dominated by $e^+e^- \rightarrow Z^0Z^0$ and $W^+W^-$ production, the observability of the Higgs boson is guaranteed up to the kinematical limit for its production, independently of its decays. This sets the ILC apart from the LHC, since the ILC sensitivity to the Higgs boson does not depend on its detailed properties. After the observation of a new particle with properties compatible with those of the Higgs boson, a significant experimental and theoretical effort will be needed to verify that this is indeed the boson of the scalar field responsible for electro-weak symmetry breaking and the generation of mass. Outlining the Higgs boson profile, through the determination of its mass, width, quantum numbers, couplings to gauge bosons and fermions, and the reconstruction of the Higgs potential, stands as a most challenging, yet compelling, physics program. The ILC, with its large data sets at different centre-of-mass energies and beam polarisation conditions, its high-resolution detectors providing unprecedented accuracy in the reconstruction of event properties, and the use of advanced analysis techniques developed from those successfully adopted at LEP and SLC, promises to promote Higgs physics into the domain of precision measurements. Since the Higgs mass $M_H$ is not predicted by theory, it is of great interest to measure it precisely. Once this mass, and thus $\lambda$, is fixed, the profile of the Higgs particle is uniquely determined in the SM. In most scenarios we expect the LHC to determine the Higgs mass with good accuracy. At the ILC, this measurement can be refined by exploiting the kinematical characteristics of the Higgs-strahlung production process $e^+e^- \rightarrow Z^* \rightarrow H^0 Z^0$, where the $Z^0$ can be reconstructed in both its leptonic and hadronic decay modes. 
The $\ell^+\ell^-$ recoil mass for leptonic $Z^0$ decays yields an accuracy of 110 MeV for 500 fb$^{-1}$ of data, without any requirement on the nature of the Higgs decays. Further improvement can be obtained by explicitly selecting $H \rightarrow b \bar b$ ($WW$) for $M_H \le$ ($>$) 140 GeV. Here a kinematical 5-C fit, imposing energy and momentum conservation and requiring the mass of a jet pair to correspond to $M_Z$, achieves an accuracy of 40 to 90 MeV for 120 GeV $< M_H <$ 180 GeV [@hmass1]. The total decay width of the Higgs boson is predicted to be too narrow to be resolved experimentally for Higgs boson masses below the $ZZ$ threshold. By contrast, above $\simeq$ 200 GeV, the total width can be measured directly from the reconstructed width of the recoil mass peak, as discussed below. For the lower mass range, indirect methods must be applied. In general, the total width is given by $\Gamma_{tot}=\Gamma_X/\mathrm{BR}(H\to X)$. Whenever $\Gamma_X$ can be determined independently of the corresponding branching fraction, a measurement of $\Gamma_{tot}$ can be carried out. The most convenient choice is the extraction of $\Gamma_{H}$ from the measurement of the $WW$ fusion cross section and the $H\to WW^*$ decay branching fraction. A relative precision of 6% to 13% on the width of the Higgs boson can be obtained at the ILC with this technique, for masses between 120 GeV and 160 GeV. The spin, parity and charge-conjugation quantum numbers $J^{PC}$ of Higgs bosons can be determined at the ILC in a model-independent way. Already the observation of either $\gamma \gamma \rightarrow H$ production or $H \rightarrow \gamma\gamma$ decay sets $J \ne 1$ and $C=+$. 
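The indirect width extraction is a simple ratio, and its precision follows from propagating the two input uncertainties. A minimal sketch, assuming uncorrelated inputs; the 5 % and 4 % input errors below are illustrative placeholders, not the actual ILC projections:

```python
import math

def gamma_total(gamma_x, br_x):
    """Gamma_tot = Gamma_X / BR(H -> X)."""
    return gamma_x / br_x

def rel_error_gamma_total(rel_err_gamma_x, rel_err_br):
    """Relative error on the ratio: the two (assumed uncorrelated)
    relative errors add in quadrature."""
    return math.hypot(rel_err_gamma_x, rel_err_br)

# Illustrative: a 5% determination of Gamma(H->WW) from the WW-fusion
# cross section combined with a 4% measurement of BR(H->WW*):
print(rel_error_gamma_total(0.05, 0.04))  # ~6.4% on Gamma_tot
```

This shows why the quoted 6-13 % width precision tracks the precision of the worse of the two inputs: the quadrature sum is dominated by the larger relative error.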
The angular dependence $\frac{d \sigma_{ZH}}{d \theta} \propto \sin^2 \theta$ and the rise of the Higgs-strahlung cross section at threshold: $$\sigma_{ZH} \propto~\beta \sim~\sqrt{s-(M_H+M_Z)^2}$$ make it possible to determine $J^P = 0^+$ and to distinguish the SM Higgs from a $CP$-odd $0^{-+}$ state $A^0$, or from a $CP$-violating mixture of the two [@Miller:2001bi; @Schumacher:2001ax]. But where the ILC has a truly unique potential is in verifying that the Higgs boson does its job of providing gauge bosons, quarks and leptons with their masses. This requires precisely testing the relation $g_{HXX} \propto {m_X}$ between the Yukawa couplings, $g_{HXX}$, and the corresponding particle masses, $m_X$. In fact, the SM Higgs couplings to fermion pairs, $g_{Hff} = m_f/v$, are fully determined by the fermion mass $m_f$. The corresponding decay partial widths depend only on these couplings and on the Higgs boson mass; QCD corrections do not represent a significant source of uncertainty [@Djouadi:1995gt]. Therefore, their accurate determination will represent a comprehensive test of the Higgs mechanism of mass generation [@Hildreth:1993dx]. Further, observing deviations of the measured values from the SM predictions will probe the structure of the Higgs sector and may reveal a non-minimal implementation of the Higgs model or the effect of new physics inducing a shift of the Higgs couplings [@Carena:2001bg; @Desch:2004cu; @Battaglia:2004js]. The accuracy of these measurements relies on the performance of jet flavour tagging, and thus mostly on the Vertex Tracker, making this analysis an important benchmark for optimising the detector design. It is important to ensure that the ILC sensitivity extends over a wide range of Higgs boson masses and that a significant accuracy is achieved for most particle species. Here, the ILC adds the precision which establishes the key elements of the Higgs mechanism. 
It is important to point out that these tests are becoming more stringent now that the $B$-factories have greatly improved the determination of the $b$- and $c$-quark masses. When one of these studies was first presented in 1999 [@Battaglia:1999re], the $b$-quark mass was known to $\pm 0.11$ GeV and the charm mass to $\pm 0.13$ GeV, with the expectation that $e^+e^-$ $B$-factory and LHC data could reduce these uncertainties by a factor of two by the time the ILC data would be analysed. Today, the analysis of a fraction of the BaBar data [@Aubert:2004aw] has already brought these uncertainties down to 0.07 GeV for $m_b$ and, more importantly, 0.09 GeV for $m_c$, using the spectral-moments technique in semi-leptonic $B$ decays, which had been pioneered on CLEO [@Bauer:2002sh] and DELPHI data [@Battaglia:2002tm]. Extrapolating to the anticipated total statistics to be collected at PEP-II and KEKB, we can now confidently expect that the $b$-quark mass should be known to better than $\pm 0.05$ GeV and the charm mass to better than $\pm 0.06$ GeV. This translates into less than $\pm 0.4$ % and $\pm 6.5$ % relative uncertainty in computing the SM Higgs couplings to $b$ and $c$ quarks, respectively, and motivates enhanced experimental precision in the determination of these couplings at the ILC. Detailed simulation shows that these accuracies can be matched by the ILC [@Kuhl:2004ri; @Barklow:2003hz]. While much of the emphasis on the ILC capabilities in the study of the Higgs profile is for a light Higgs scenario, preferred by the current electro-weak data and richer in decay modes, the ILC also has the potential of precisely mapping out the Higgs boson properties for heavier masses. If the Higgs boson turns out to have a mass of order 200 GeV, the 95% C.L. upper limit indicated by the electro-weak fits, or even heavier, the analysis of the recoil mass in $e^+e^- \to HZ$ at $\sqrt{s}$ = 0.5 TeV makes it possible to precisely determine $M_H$, $\Gamma_H$ and the Higgs-strahlung cross section. 
Even for $M_H$ = 240 GeV, the mass can be determined to a 10$^{-3}$ accuracy and, more importantly, the total width measured to about 10% accuracy. Decays of Higgs bosons produced in $e^+e^- \to H \nu \bar \nu$ give access to the Higgs couplings. The importance of the $WW$-fusion process $e^+e^- \to H^0 \nu \bar \nu$ for probing rare Higgs decays at higher energies emerged in the physics study for a multi-TeV linear collider [@Battaglia:2002gq]. Since this cross section increases as $\log \frac{s}{M_H^2}$, it becomes dominant around $\sqrt{s}$ = 1 TeV. Detailed studies have been performed and show that 1 ab$^{-1}$ of data at $\sqrt{s}$ = 1 TeV, corresponding to three to four years of ILC running, can significantly improve the determination of the Higgs couplings, especially for the larger values of $M_H$ [@Battaglia:2002av; @Barklow:2003hz]. The $WW$ and $ZZ$ couplings can be determined with relative accuracies of 3 % and 5 % respectively, while the coupling to $b \bar b$ pairs, a rare decay with a branching fraction of just $2 \times 10^{-3}$ at such large masses, can be determined to 4 % to 14 % for 180 GeV $< M_H <$ 220 GeV. This measurement is of great importance, since it would offer the only opportunity to learn about the fermion couplings of such a heavy Higgs boson, and it is unique to a linear collider. A most distinctive feature of the Higgs mechanism is the shape of the Higgs potential: $$V(\Phi) = - \frac{\mu^2}{2} \Phi^2 + \frac{\lambda}{4} \Phi^4$$ with $v = \sqrt{\frac{\mu^2}{\lambda}}$. In the SM, the triple Higgs coupling, $g_{HHH} = 3 \lambda v$, is related to the Higgs mass, $M_H$, through the relation $$g_{HHH} = \frac{3}{2} \frac{M_H^2}{v}.$$ By determining $g_{HHH}$, this relation can be tested. The ILC has access to the triple Higgs coupling through the double-Higgs production processes $e^+e^- \to H H Z$ and $e^+e^- \to H H \nu\nu$ [@Djouadi:1999gv]. 
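The two expressions for $g_{HHH}$ quoted above are equivalent via $M_H^2 = 2\lambda v^2$, which follows from the potential. A minimal numerical check (the function names are illustrative; $v$ = 246 GeV as in the text):

```python
def g_hhh(m_h, v=246.0):
    """SM triple Higgs coupling g_HHH = (3/2) * M_H^2 / v, in GeV."""
    return 1.5 * m_h**2 / v

def lam(m_h, v=246.0):
    """Quartic coupling from M_H^2 = 2 * lambda * v^2."""
    return m_h**2 / (2.0 * v**2)

m_h = 120.0
# Consistency of the two expressions in the text: g_HHH = 3 * lambda * v
assert abs(g_hhh(m_h) - 3.0 * lam(m_h) * 246.0) < 1e-9
print(g_hhh(m_h))  # ~88 GeV for M_H = 120 GeV
```

Measuring $g_{HHH}$ and comparing with $\frac{3}{2} M_H^2/v$ computed from the independently measured $M_H$ is therefore a direct test of the shape of the potential.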
Deviations from the SM relation for the strength of the Higgs self-coupling arise in models with an extended Higgs sector [@Kanemura:2004mg]. The extraction of $g_{HHH}$ is made difficult by the tiny cross sections of these processes and by a dilution effect, due to diagrams leading to the same double-Higgs final states but not sensitive to the triple Higgs vertex. This makes the determination of $g_{HHH}$ a genuine experimental [*tour de force*]{}. Other modes, such as $e^+e^- \to H H b \bar b$, have also been proposed recently [@Gutierrez-Rodriguez:2006qk], but the signal yields are too small to provide precise data. Operating at $\sqrt{s}$ = 0.5 TeV, the ILC can measure the $HHZ$ production cross section to about 15% accuracy, if the Higgs boson mass is 120 GeV, corresponding to a fractional accuracy on $g_{HHH}$ of 23% [@Castanier:2001sf]. Improvements can be obtained first by introducing observables sensitive to the presence of the triple Higgs vertex and then by performing the analysis at higher energies, where the $HH \nu \bar \nu$ channel contributes [@Battaglia:2001nn]. In the $HHZ$ process, events from diagrams containing the $HHH$ vertex exhibit a lower invariant mass of the $HH$ system compared to double-Higgs-strahlung events. When the $M_{HH}$ spectrum is fitted, a relative statistical accuracy of $\pm 0.20$ can be obtained with 1 ab$^{-1}$ at $\sqrt{s}$ = 0.5 TeV. The availability of beam polarization increases the $HHZ$ cross section by a factor of two and that for $HH \nu \bar \nu$ by a factor of four, thus offering a further significant improvement to the final accuracy. The ILC and, possibly, a multi-TeV $e^+e^-$ collider represent a unique opportunity for carrying out this fundamental measurement. 
In fact, preliminary studies show that the analysis of double-Higgs production at the LHC is only possible after a luminosity upgrade and, even then, beyond the observation of double-Higgs production, it would provide only very limited information on the triple-Higgs coupling [@Baur:2002qd; @Baur:2003gp].

### Understanding Dark Matter at the ILC {#sec3.2}

The search for new physics beyond the Standard Model has a central role in the science program of future colliders. It is instructive to contrast the LHC and the ILC in terms of their potential in such searches. Running at $\sqrt{s} \le$ 1 TeV, the ILC might appear to be limited in reach, somewhere within the energy domain being probed by the Tevatron and that to be accessed by the LHC. And yet its potential for fully understanding the new physics which the LHC might have manifested, and for probing the high-energy frontier beyond the boundaries explored in hadron collisions, is of paramount importance. There are several examples of how the ILC will be essential for understanding new physics. They address scenarios where signals of physics beyond the SM, as observed at the LHC, may be insufficient to decide on the nature of the new phenomena. One such example, which has been studied in some detail, is the case of Supersymmetry and Universal Extra Dimensions (UED), two very different models of new physics leading to the very same experimental signature: fermion pairs plus missing energy. Here, the limited analytical power of the LHC may leave us undecided [@Datta:2005zs; @Smillie:2005ar], while a single spin measurement performed at the ILC precisely identifies the nature of the observed particles [@Battaglia:2005zf]. But the ILC capability to fully understand the implications of new physics, through fundamental measurements performed with high accuracy, is manifest also in scenarios where the LHC could observe a significant fraction of the new particle spectrum. 
An especially compelling example, which can be studied quantitatively, is offered by Supersymmetry in relation to Dark Matter (DM). Dark Matter has been established as a major component of the Universe. We know from several independent observations, including the cosmic microwave background (CMB), supernovae and galaxy clusters, that DM is responsible for approximately 20 % of the energy density of the Universe. Yet none of the SM particles can be responsible for it, and the observation of DM is likely the first direct signal of new physics beyond the SM. Several particles and objects have been proposed as candidates for DM. They span a wide range of masses, from 10$^{-5}$ eV, in the case of axions, to 10$^{-5}$ solar masses, for primordial black holes. Cosmology tells us that a significant fraction of the mass of the Universe consists of DM, but does not provide clues on its nature. Particle physics tells us that new physics must exist at, or just beyond, the EW scale, and new symmetries may result in new, stable particles. Establishing the inter-relations between physics at the microscopic scale and phenomena at the cosmological scale will represent a major theme for physics in the next decades. The ILC will be able to play a key role in elucidating these inter-relations. Out of these many possibilities, there is a class of models which is especially attractive, since its existence is independently motivated and DM, at about the observed density, arises naturally. These are extensions of the SM which include an extra symmetry protecting the lightest particle in the new sector from decaying into ordinary SM states. The lightest particle then becomes stable and can be chosen to be neutral. Such a particle is called a weakly interacting massive particle (WIMP) and arises in Supersymmetry with conserved R-parity (SUSY), but also in Extra Dimensions with KK-parity (UED) [@Kong:2005hn]. 
Current cosmological data, mostly through the WMAP satellite measurements of the CMB, determine the DM density in the Universe with a 6 % relative accuracy [@wmap]. By the next decade, the PLANCK satellite will push this uncertainty to $\simeq$ 1 %, or below [@planck]. Additional astrophysical data show possible evidence of DM annihilation. The EGRET data show an excess of $\gamma$ emission in the inner galaxy, which has been interpreted as due to DM [@deBoer:2005tm], and the WMAP data itself may show a signal of synchrotron emission in the Galactic centre [@Finkbeiner:2004us]. These data, if confirmed, may be used to further constrain the DM properties. Ground-based DM searches are also approaching the stage where their sensitivity is at the level predicted by Supersymmetry for some combinations of parameters [@Akerib:2005kh]. The next decades promise to be a time when accelerator experiments will provide new breakthroughs and highly accurate data to gain new insights, not only on fundamental questions in particle physics, but also in cosmology, when studied alongside the observations from satellites and other experiments. The questions on the nature and the origin of DM offer a prime example of the synergies of new experiments at hadron and lepton colliders, satellites and ground-based DM experiments. It is essential to study, in well-defined, yet general enough, models, which properties of the new physics sector, such as masses and couplings, are most important in determining the resulting relic density of the DM particles. Models exist which make it possible to link the microscopic particle properties to the present DM density in the Universe, with mild assumptions. If DM consists of WIMPs, they were abundantly produced in the very early Universe, when the temperature obeyed $T \mathrm{(MeV)} \simeq (t \mathrm{(s)})^{-1/2}$, i.e. $T >$ 100 GeV for $t < 10^{-10}$ s, and their interaction cross section is large enough that they were in thermal equilibrium for some period in the early Universe. 
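The time-temperature relation used here is the standard radiation-dominated scaling with order-one factors dropped; inverting it gives the cosmic epoch corresponding to a given temperature. A minimal sketch under that assumption:

```python
def time_seconds(T_mev):
    """Radiation-dominated era: t(s) ~ (T/MeV)^-2, i.e. T(MeV) ~ t^-1/2,
    with O(1) prefactors dropped."""
    return T_mev**-2

# WIMP freeze-out temperatures of ~100 GeV = 1e5 MeV correspond to
print(time_seconds(100e3))  # t ~ 1e-10 s after the Big Bang
```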
The DM relic density can be determined by solving the Boltzmann equation governing the evolution of their phase-space number density [@Scherrer:1985zt]. It can be shown that, by taking the WMAP result for the DM relic density in units of the critical density of the Universe, $\Omega_{DM} h^2$, the thermally averaged DM annihilation cross section times the co-moving velocity, $<\sigma v>$, should be $\simeq$ 0.9 pb. From this result, the mass of the DM candidate can be estimated as: $$M_{DM} = \sqrt{\frac{\pi \alpha^2}{8 <\sigma v>}} \simeq 100~\mathrm{GeV}.$$ A particle with mass $M = {\cal{O}}$(100 GeV) and a weak-interaction cross section would naturally give the measured DM density. It is quite suggestive that new physics, responsible for the breaking of electro-weak symmetry, should also introduce a WIMP of about that mass. In fact, in essentially every model of electroweak symmetry breaking, it is possible to add a discrete symmetry that makes the lightest new particle stable. Often, this discrete symmetry is required for other reasons. For example, in Supersymmetry, the conserved $R$ parity is needed to eliminate rapid proton decay. In other cases, such as models with TeV-scale extra dimensions, the discrete symmetry is a natural consequence of the underlying geometry. Data on the DM density already set rather stringent constraints on the parameters of Supersymmetry, if the lightest neutralino $\chi^0_1$ is indeed responsible for saturating the amount of DM observed in the Universe. It is useful to discuss the different scenarios where the neutralino DM density is compatible with the WMAP result in terms of parameter choices in the context of the constrained MSSM (cMSSM), to understand how the measurements that the ILC provides can establish the relation between new physics and DM. 
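The order-of-magnitude estimate above can be checked numerically. A minimal sketch, assuming the quoted $<\sigma v> \simeq 0.9$ is in picobarns, taking $\alpha$ to be the fine-structure constant, and using the standard conversion 1 GeV$^{-2}$ = 0.3894 mb = $3.894\times10^{8}$ pb:

```python
import math

PB_TO_GEV2 = 1.0 / 3.894e8  # 1 GeV^-2 = 3.894e8 pb

def wimp_mass_gev(sigma_v_pb, alpha=1.0 / 137.0):
    """M_DM = sqrt(pi * alpha^2 / (8 * <sigma v>)), in natural units."""
    sv = sigma_v_pb * PB_TO_GEV2  # pb -> GeV^-2
    return math.sqrt(math.pi * alpha**2 / (8.0 * sv))

print(wimp_mass_gev(0.9))  # ~95 GeV, i.e. the weak scale
```

This is the "WIMP miracle": a generic weak-scale mass and coupling reproduce the observed relic abundance with no tuning.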
The cMSSM reduces the number of free parameters to just five: the common scalar mass, $m_0$, the common gaugino mass, $m_{1/2}$, the ratio of the vacuum expectation values of the two Higgs fields, $\tan\beta$, the sign of the Higgsino mass parameter, $\mu$, and the common trilinear coupling, $A_0$. It is a remarkable feature of this model that, as these parameters, defined at the unification scale, are evolved down to lower energies, the electroweak symmetry is broken spontaneously and masses for the $W^{\pm}$ and $Z^0$ bosons are generated automatically. As this model is simple and defined by a small number of parameters, it is well suited for phenomenological studies. The cosmologically interesting regions in the $m_0$ - $m_{1/2}$ parameter plane are shown in Figure \[fig10\]. As we move away from the bulk region, at small values of $m_0$ and $m_{1/2}$, which is already severely constrained by LEP-2 data, the masses of the supersymmetric particles increase and so does the dark matter density. It is therefore necessary to have an annihilation process which could efficiently remove neutralinos in the early Universe, to restore the DM density to the value measured by WMAP. Different processes define three main regions: i) the focus-point region, where the $\chi^0_1$ contains an admixture of the supersymmetric partner of a neutral Higgs boson and annihilates to $W^+W^-$ and $Z^0Z^0$; ii) the co-annihilation region, where the lightest slepton has a mass very close to $M_{\chi^0_1}$; iii) the $A$ annihilation funnel, where $M_{\chi^0_1}$ is approximately half the mass of the heavy $A^0$ Higgs boson, providing efficient $s$-channel annihilation, $\chi \chi \to A$. In each of these regions, researchers at the ILC will be confronted with several different measurements and significantly different event signatures. 
It is interesting to observe that the DM constraint reduces the dimensionality of the cMSSM plane by one unit, since the allowed regions are narrow strips in the $m_0$ - $m_{1/2}$ plane, which evolve with $\tan \beta$ and depend only very weakly on $A_0$ [@Battaglia:2003ab]. Representative benchmark points have been defined and their parameters are summarised in Table \[tab:bench\]. Even though these points have been defined in a specific supersymmetric model, their phenomenology is common to the more general supersymmetric solutions, and we shall soon discuss the extension of results derived in this constrained model to the general MSSM. There are several features which are common to all these regions. First, the relic density depends on the mass of the lightest neutralino and of a few additional particles close in mass to it. The heavier part of the SUSY spectrum decouples from the value of $\Omega_{\chi} h^2$. This is of particular importance for the ILC. Running at $\sqrt{s} \le$ 1 TeV, the ILC will not be able to study supersymmetric particles with masses exceeding $\simeq$ 450-490 GeV, in particular scalar quarks and heavy Higgs bosons in some regions of the parameter space. But, independently of the LHC results, the ILC will either observe and measure these particles, if they are relevant to determining the relic DM density, or it will set bounds that ensure their decoupling. A second important observation is that $\Omega_{\chi} h^2$ typically depends on SUSY parameters which can be fixed by accurate measurements of particle masses, particle mass splittings, decay branching fractions and production cross sections. In some instances the availability of polarised beams is advantageous. The LHC can often make precise measurements of some particles, but it is difficult for the LHC experiments to assemble the complete set of parameters needed to reconstruct the annihilation cross section. 
It is also typical of supersymmetric spectra to contain light particles that may be very difficult to observe in the hadron collider environment. The ILC, in contrast, provides just the right setting to obtain both types of measurements. Again, it is not necessary for the ILC to match the energy of the LHC, only that it provides enough energy to see the lightest charged particles of the new sector. Rather detailed ILC analyses of the relevant channels for each benchmark point have been performed [@Weiglein:2004hn; @Gray:2005ci; @Khotilovich:2005gb; @Battaglia:2004gk], based on parametric simulation, which includes realistic detector performances and the effects of the ILC beam characteristics. It has been assumed that the ILC will be able to provide collisions at centre-of-mass energies from 0.3 TeV to 0.5 TeV with an integrated luminosity of 500 fb$^{-1}$ in a first phase of operation, and that its collision energy can then be raised to 1 TeV to provide an additional data set of 1 ab$^{-1}$, corresponding to an additional three to four years of running. Results are summarised in terms of the estimated accuracies on masses and mass differences in Table \[tab:constraints\]. \[tab:constraints\] In order to estimate the implications of these ILC measurements for the determination of the neutralino dark matter density $\Omega_{\chi} h^2$, broad scans of the multi-parameter supersymmetric parameter space need to be performed. For each benchmark point, the soft parameters (masses and couplings) at the electroweak scale can be computed with the full 2-loop renormalization group equations and threshold corrections using [Isajet 7.69]{} [@ISAJET]. Supersymmetric loop corrections to the Yukawa couplings can also be included. The electroweak-scale MSSM parameters are extracted from the high-scale cMSSM parameters. The dark matter density $\Omega_{\chi} h^2$ can be estimated using the [DarkSUSY]{} [@Gondolo:2004sc] and [Micromegas]{} [@Belanger:2006is] programs. 
These programs use the same [Isajet]{} code to determine the particle spectrum and couplings, including the running Yukawa couplings, compute the thermally averaged cross section for neutralino annihilation, including co-annihilation, and solve the equation describing the evolution of the number density of the DM candidate. While the assumptions of the cMSSM are quite helpful for defining a set of benchmark points, the cMSSM is not representative of the generic MSSM, since it implies several mass relations, and its assumptions have no strong physics justification. Therefore, in studying the accuracy on $\Omega_{\chi} h^2$, the full set of MSSM parameters must be scanned in an uncorrelated way and the mass spectrum evaluated for each parameter set. A detailed study has recently been performed [@Baltz:2006fm]. I summarise here some of the findings. Table \[tab:dmsummary\] gives results for the neutralino relic density estimates in the MSSM for the LHC, the ILC at 0.5 TeV and the ILC at 1 TeV. The LCC1 point is in the bulk region and the model contains light sleptons, with masses just above that of the lightest neutralino. The most important annihilation reactions are those with $t$-channel slepton exchange. At the LHC, many of the SUSY spectrum parameters can be determined from kinematic constraints. At the ILC, masses can be determined both by the two-body decay kinematics of the pair-produced SUSY particles and by dedicated threshold scans. Let us consider the two-body decay of a scalar quark, $\tilde q \to q \chi^0_1$. If the scalar quarks are pair produced, $e^+e^- \to \tilde q \tilde q$, then $E_{\tilde q} = E_{beam}$ and, since the $\chi^0_1$ escapes undetected, only the $q$ (and the $\bar q$) are observed in the detector. In a 1994 paper, J. Feng and D. 
Finnell [@Feng:1993sd] pointed out that the minimum and maximum energy of the produced quark can be related to the masses of the scalar quark $\tilde q$ and the $\chi^0_1$: $$E_{max,~min} = \frac{E_{beam}}{2} \big( 1 \pm \sqrt{1 - \frac{m_{\tilde q}^2}{E_{beam}^2}} \big) \big( 1 - \frac{m_{\chi}^2}{m_{\tilde q}^2} \big).$$ The method can also be extended to slepton decays $\tilde \ell \to \ell \chi^0_1$, which share the same topology; it allows one to determine the slepton mass once that of the neutralino is known, or to determine a relation between the masses and extract $m_{\chi^0_1}$ if the slepton mass can be measured independently. The measurement requires a precise determination of the endpoint energies of the lepton momentum spectrum, $E_{min}$ and $E_{max}$. It can be shown that the accuracy is limited by beamstrahlung, affecting the knowledge of $E_{beam}$ in the equation above, more than by the finite momentum resolution, $\delta p/p$, of the detector. The ILC has a second, and even more precise, method for mass measurements. The possibility to precisely tune the collision energy allows one to perform scans of the onset of the cross section for a specific SUSY particle pair production process. The particle mass and width can be extracted from a fit to the signal event yield as a function of $\sqrt{s}$. The accuracy depends rather weakly on the number of points, $N$, adopted in the scan, and it appears that concentrating the total luminosity at two or three different energies close to the threshold is optimal [@Martyn:2000; @Blair:2001cz]. The mass accuracy, $\delta m$, can be parametrised as: $$\delta m \simeq \Delta E \frac{1+0.36/\sqrt{N}}{\sqrt{18 N L \sigma}}$$ for S-wave processes, where the cross section rises as $\beta$, and as $$\delta m \simeq \Delta E \frac{1}{N^{1/4}} \frac{1+0.38/\sqrt{N}}{\sqrt{2.6 N L \sigma}}$$ for P-wave processes, where the cross section rises as $\beta^3$. 
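As an illustration, the endpoint relations above can be inverted in closed form: the endpoint sum fixes $1 - m_{\chi}^2/m_{\tilde q}^2$ while the product fixes the squark mass. A minimal numerical sketch (the beam energy and masses below are illustrative, not benchmark values):

```python
import math

def endpoints(e_beam, m_sq, m_chi):
    """Min/max quark energies from squark pair production followed by
    ~q -> q chi decays, per the Feng-Finnell endpoint formula."""
    beta = math.sqrt(1.0 - (m_sq / e_beam) ** 2)
    f = 1.0 - (m_chi / m_sq) ** 2
    return 0.5 * e_beam * (1.0 - beta) * f, 0.5 * e_beam * (1.0 + beta) * f

def masses_from_endpoints(e_beam, e_min, e_max):
    """Invert the endpoint relations: the sum s = E_beam (1 - m_chi^2/m_sq^2)
    and the product p = (m_sq^2/4) (1 - m_chi^2/m_sq^2)^2 give both masses."""
    s = e_min + e_max
    p = e_min * e_max
    m_sq = 2.0 * math.sqrt(p) * e_beam / s
    m_chi = m_sq * math.sqrt(1.0 - s / e_beam)
    return m_sq, m_chi
```

The inversion makes explicit why both endpoints are needed: a single endpoint constrains only a combination of the two masses.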
The combination of these measurements allows the ILC to determine the $\chi^0_1$ mass to $\pm$0.05 GeV, which is two orders of magnitude better than the anticipated LHC accuracy, while the mass difference between the $\tilde \tau_1$ and the $\chi^0_1$ can be measured to $\pm$0.3 GeV, which is more than a factor of ten better. Extension of ILC operation to 1 TeV gives access to the $e^+e^- \to H^0A^0$ process. As a result of the precision of these measurements, the ILC data at 0.5 TeV will allow the neutralino relic density to be predicted to $\pm$2 %, and the addition of 1.0 TeV data will improve this to $\pm$0.25 %. Notably, this accuracy is comparable to, or better than, that expected from the improved CMB survey of the PLANCK mission. For comparison, the LHC data should provide a $\pm$7 % accuracy. This is already a remarkable result, owing to the fact that a large number of measurements will be available at the LHC and SUSY decay chains can be reconstructed. Still, the overall mass scale remains uncertain at the LHC. The direct mass measurements on the ILC data remove this uncertainty. The LCC1 point is characterised by a relatively low SUSY mass scale: most of the particles can be observed at the LHC and their masses accurately measured at the ILC. However, in more general scenarios, the information available from both colliders will be more limited. This is the case at the LCC2 point, located in the focus point region, where the masses of scalar quarks, sleptons and heavy Higgs bosons are very large, typically beyond the ILC and also the LHC reach, while gaugino masses are of the order of a few hundred GeV, thus within the kinematical domain of the ILC. In this specific scenario, the LHC will observe the SUSY process $\tilde g \to q \bar q \chi$ and the subsequent neutralino and chargino decays. 
Still, the neutralino relic density can only be constrained to within $\pm$40%, and the hypothesis $\Omega_{\chi} h^2$ = 0, namely that the neutralino does not contribute to the observed dark matter density in the universe, cannot be ruled out based only on LHC data. At a 0.5 TeV collider, the main SUSY reactions are $e^+e^- \to \chi^+_1 \chi^-_1$ and $e^+e^- \to \chi^0_2 \chi^0_3$. Operation at 1 TeV gives access also to $e^+e^- \to \chi^+_2 \chi^-_2$ and $e^+e^- \to \chi^0_3 \chi^0_4$. Not only the gaugino mass splittings but also the polarised neutralino and chargino production cross sections can be accurately determined at the ILC [@Gray:2005ci]. These measurements fix the gaugino-Higgsino mixing angles, which play a major role in determining the neutralino relic density. The decoupling of the heavier, inaccessible part of the SUSY spectrum can be ensured with the data at the highest energy. The combined ILC data at 0.5 TeV and 1 TeV provide an estimate of the neutralino relic density to $\pm$8 % accuracy, which matches the current WMAP precision. The characteristics featured by the LCC2 point persist as the SUSY masses increase, provided the gaugino-Higgsino mixing angle remains large enough. This DM-motivated region extends to SUSY masses which eventually exceed the LHC reach, highlighting an intriguing region of parameters where the ILC can still observe sizable production of supersymmetric particles, compatible with dark matter data, while the LHC may report no signals of New Physics [@Baer:2003ru]. Instead, the last two points considered, LCC3 and LCC4, are representative of those regions where the neutralino relic density is determined by accidental relationships between particle masses. Other such regions may also be motivated by baryogenesis constraints [@Balazs:2004bu]. 
The determination of the neutralino relic density, in such scenarios, depends crucially on the precision of spectroscopic measurements, due to the large sensitivity to masses and couplings. The conclusion of the current studies is that the LHC data do not provide quantitative constraints. On the contrary, the ILC can reach interesting precision, especially when high energy data are available. \[tab:dmsummary\] The LCC3 point is in the so-called $\tilde \tau$ co-annihilation region. Here, the mass difference between the lightest neutralino, $\chi^0_1$, and the lightest scalar tau, $\tilde \tau_1$, is small enough that $\tilde \tau_1 \chi^0_1 \to \tau \gamma$ can effectively remove neutralinos in the early universe. The density of $\tilde \tau$ particles relative to neutralinos scales as $e^{-\frac{m_{\tilde \tau} - m_{\chi}}{m_{\chi}}}$, so this scenario tightly constrains the $m_{\tilde \tau} - m_{\chi}$ mass difference. Here, the precise mass determinations characteristic of LCC1 will not be available: at 0.5 TeV, the ILC will observe a single final state, $\tau^+ \tau^- + E_{missing}$, from the two accessible SUSY processes [@Khotilovich:2005gb], $e^+e^- \to \tilde \tau_1 \tilde \tau_1$, $\tilde \tau \to \tau \chi^0_1$ and $e^+e^- \to \chi^0_1 \chi^0_2$, $\chi^0_2 \to \tilde \tau \tau \to \chi^0_1 \tau \tau$. The signal topology consists of two $\tau$-jets and missing energy. Background processes, such as $e^+e^- \to ZZ$, can be suppressed using cuts on event shape variables. The mass splitting can be determined by a study of the distribution of the invariant mass of the system made of the two $\tau$-jets and the missing energy vector, $M_{j_1 j_2 E_{missing}}$. In this variable, the remaining SM background is confined to low values, and the shape and upper endpoint of the $\tilde \tau_1 \tilde \tau_1$ contribution depend on the stau-neutralino mass difference, $\Delta M = M_{\tilde \tau_1} - M_{\chi^0_1}$. 
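The exponential suppression quoted above can be made quantitative. In the standard co-annihilation treatment the exponent is additionally multiplied by the freeze-out parameter $x_f = m_{\chi}/T_f \approx 20$-$25$; the sketch below includes that factor, with an assumed $x_f$ and an illustrative neutralino mass (neither number is taken from the benchmark tables):

```python
import math

def stau_suppression(delta_m, m_chi, x_f=25.0):
    """Boltzmann suppression of the stau abundance relative to the neutralino
    at freeze-out, exp(-x_f * delta_m / m_chi). x_f = m_chi / T_f ~ 20-25 is
    an assumed freeze-out parameter; m_chi and delta_m are in GeV."""
    return math.exp(-x_f * delta_m / m_chi)
```

Even a few-GeV change in $\Delta M$ changes the suppression factor substantially, which is why the relic density calculation demands a precise $\Delta M$ measurement.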
Template functions can be generated for different values of $\Delta M$, and the mass difference is extracted by a $\chi^2$ fit of these templates to the “data”. As the $\Delta M$ value decreases, the energy available to the $\tau$ leptons decreases. Since $\tau$ decays involve neutrinos, additional energy escapes detection. When the $\tau \tau$ system becomes soft, the four-fermion background process $ee \to ee \tau \tau$, the so-called $\gamma \gamma$ background, which has cross sections at the nb level, makes its detection increasingly difficult. What makes it possible to reject these $\gamma \gamma$ events is the presence of the two energetic primary electrons at small angle w.r.t. the beamline [@Bambade:2004tq]. This is a significant challenge for low-angle calorimetry, since the electron has to be detected in a hostile environment populated by a large number of other, lower-energy electrons arising from pairs created during the bunch collision [@Chen:1989ss; @Tauchi:1993tm]. A detailed study [@Khotilovich:2005gb], performed for an integrated luminosity of 500 fb$^{-1}$, shows that values of $\Delta M$ as small as 5 GeV can be measured at the ILC, provided the primary electrons can be vetoed down to 17 mrad. In the specific case of the LCC3 point, where the mass splitting, $\Delta M$, is 10.8 GeV, an accuracy of 1 GeV can be achieved. Heavier gauginos, as well as the $A^0$ boson, become accessible by operating the ILC at 1 TeV. These data constrain both the mixing angles and $\tan \beta$. As a result, the neutralino relic density can be estimated with an 18 % accuracy. Finally, the LCC4 point, chosen in the $A$ funnel, has the DM density controlled by the $\chi \chi \to A$ process. This point is rather instructive in terms of the discovery-driven evolution of a possible experimental program at the ILC. The ILC can obtain the neutralino and $\tilde \tau$ masses at 0.5 TeV, following the same technique as for LCC3. 
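The template-fit extraction of $\Delta M$ can be sketched with a toy spectrum. The flat shape, the linear endpoint law and the binning below are pure illustration assumptions, not the actual stau-pair kinematics:

```python
import numpy as np

def template(bins, delta_m, n_events):
    """Toy M(jj + Emiss) spectrum: flat up to an endpoint that grows with
    delta_m. The linear endpoint law is an assumed stand-in for the real
    dependence on the stau-neutralino mass splitting."""
    endpoint = 50.0 + 8.0 * delta_m
    shape = np.where(bins < endpoint, 1.0, 0.0)
    return n_events * shape / shape.sum()

def fit_delta_m(data, bins, scan):
    """Chi-square template fit: return the delta_m whose template best
    matches the observed spectrum, with Poisson-like per-bin errors."""
    def chi2(dm):
        t = template(bins, dm, data.sum())
        return float(np.sum((data - t) ** 2 / np.maximum(data, 1.0)))
    return min(scan, key=chi2)
```

A pseudo-experiment generated from one template and re-fitted over a grid of $\Delta M$ hypotheses recovers the input value within the endpoint granularity.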
We would also expect the LHC experiments to have observed the $A^0$ boson, but it is unlikely that $M_A$ could be determined accurately in $pp$ collisions, since the available observation mode is the decay into $\tau$ lepton pairs. At this stage, it would be apparent that the mass relation between the neutralino mass, accurately measured by the ILC at 0.5 TeV, and the $A$ boson mass, from the LHC data, is compatible with $M_A \simeq 2 M_{\chi}$, as required for the $s$-channel annihilation process to be effective. Three more measurements have to be performed at the ILC: the $A^0$ mass, $M_A$, and width, $\Gamma_A$, and the $\mu$ parameter, which is accessible through the mass splitting between the heavier neutralinos, $\chi^0_3$, $\chi^0_4$, and the lighter $\chi^0_1$, $\chi^0_2$. All these measurements are available by operating the ILC at 1 TeV. $M_A$ and $\Gamma_A$ can be determined by studying $A^0$ production in association with a $H^0$ boson, in the reaction $e^+e^- \to A^0 H^0 \to b \bar b b \bar b$. This process results in spectacular events with four $b$ jets, emitted almost symmetrically, due to the low energy carried by the heavy Higgs bosons (see Figure \[fig12\]a). The cross section, for the parameters of LCC4 corresponding to $M_A$ = 419 GeV, is just 0.9 fb, highlighting the need for large luminosity at the highest energy. Jet flavour tagging and event shape analysis significantly reduce the major multi-jet backgrounds, such as $WW$, $ZZ$ and $t \bar t$. The SM $b \bar b b \bar b$ electro-weak background has a cross section of $\sim$3 fb, but since it includes $Z^0$ or $h^0$ as intermediate states it can be efficiently removed by event shape and mass cuts. After event selection, the $A^0$ mass and width must be reconstructed from the measured di-jet invariant masses. 
This is achieved by pairing the jets in the way that minimises the resulting di-jet mass difference, since the masses of the $A$ and $H$ bosons are expected to be degenerate within a few GeV; the di-jet masses are computed by imposing energy and momentum conservation constraints to improve the achievable resolution and gain sensitivity to the boson natural width (see Figure \[fig12\]b). The result is a determination of the $A$ mass to 0.2 % and of its width to $\simeq$15 %, if a sample of 2 ab$^{-1}$ of data can be collected. The full set of ILC data provides a neutralino relic density evaluation with 19 % relative accuracy. The full details of how these numbers were obtained can be found in Ref. [@Baltz:2006fm]. SUSY offers a compelling example for investigating the complementarity of the LHC and ILC in the search for and discovery of new particles and in the study of their properties. The connection to cosmology, through the study of dark matter, brings precise requirements in terms of accuracy and completeness of the anticipated measurements, and puts emphasis on scenarios at the edges of the parameter space. The interplay of satellite, ground-based and collider experiments in cosmology and particle physics will be unique, and it will lead us to learn more about the structure of our Galaxy and of the Universe, as well as about the underlying fundamental laws of the elementary particles. This quest will represent a major effort for science in the next several decades. The scenarios discussed above highlight the essential role of the ILC in this context. Through precision spectroscopic measurements, it will test whether the particles observed at accelerators are responsible for making up a sizeable fraction of the mass of the Universe. The data obtained at the ILC will effectively remove most particle physics uncertainties and provide solid ground for studying dark matter in our galaxy through direct and indirect detection experiments [@Feng:2005nz]. 
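The jet-pairing step used in the $A^0 H^0 \to b \bar b b \bar b$ reconstruction, choosing the combination of four jets into two pairs that minimises the di-jet mass difference, can be sketched as follows (the four-vectors in the usage are illustrative):

```python
import math

def inv_mass(p1, p2):
    """Invariant mass of the sum of two four-vectors (E, px, py, pz)."""
    e, px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def pair_jets(jets):
    """Group four jets into the two di-jet pairs with the most similar masses,
    the natural criterion when A0 and H0 are nearly degenerate in mass."""
    pairings = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
    candidates = [(inv_mass(jets[a], jets[b]), inv_mass(jets[c], jets[d]))
                  for (a, b), (c, d) in pairings]
    return min(candidates, key=lambda m: abs(m[0] - m[1]))
```

Only three distinct pairings of four jets exist, so an exhaustive comparison is trivial; in a full analysis the kinematic-fit masses would be used in place of the raw di-jet masses.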
### Indirect Sensitivity to New Physics at the ILC {#sec3.3} Beyond Supersymmetry, there is a wide range of physics scenarios invoking new phenomena at, and beyond, the TeV scale. These may explain the origin of electro-weak symmetry breaking, if there is no light elementary Higgs boson, stabilise the SM, if SUSY is not realised in nature, or embed the SM in a theory of grand unification. The ILC, operating at high energy, represents an ideal laboratory for studying this New Physics in ways that are complementary to the LHC [@zp1; @dominici]. Not only may the ILC directly produce some of the new particles predicted by these theories, it also retains an indirect sensitivity, through precision measurements of virtual corrections to electro-weak observables, when the new particle masses exceed the available centre-of-mass energy. One of the simplest of such SM extensions consists of the introduction of an additional $U(1)$ gauge symmetry, as predicted in some grand unified theories [@Hewett:1993st; @Rizzo:2006nw]. The extra $Z'$ boson associated with the symmetry naturally mixes with the SM $Z^0$. The mixing angle is already strongly constrained by precision electroweak data and can be at most of the order of a few mrad, while direct searches at the Tevatron for a new $Z'$ boson set a lower limit on its mass around 800 GeV, which may reach 1 TeV by the time the LHC starts searching for such a state. The search for an extended gauge sector offers an interesting framework for studying the ILC sensitivity to scales beyond those directly accessible. It also raises the issue of the discrimination between different models, once a signal is detected. The main classes of models with additional $Z'$ bosons include $E_6$-inspired models and left-right (LR) models. In the $E_6$ models, the $Z'$ fermion couplings depend on the angle, $\theta_6$, defining the embedding of the extra $U(1)$ in the $E_6$ group. 
At the ILC, the indirect sensitivity to the mass of the new boson, $M_{Z'}$, can be parametrised in terms of the available integrated luminosity, ${\cal{L}}$, and centre-of-mass energy, $\sqrt{s}$. A scaling law for large values of $M_{Z'}$ can be obtained by considering the effect of the $Z'-\gamma$ interference in the two-fermion production cross section $\sigma {\mathrm{(}}e^+e^- \to f \bar f {\mathrm{)}}$ ($\sigma_{f \bar f}$ in the following). For $s \ll M_{Z'}^2$, and assuming the uncertainties $\delta \sigma$ to be statistically dominated, we obtain the following scaling for the difference between the SM cross section and that in the presence of the $Z'$, in units of the statistical accuracy: $$\frac{|\sigma^{SM}_{f \bar f} - \sigma^{SM+Z'}_{f \bar f}|}{\delta \sigma} \propto \frac{1}{M^2_{Z'}}\sqrt{s {\cal{L}}}$$ from which we can derive that the indirect sensitivity to the $Z'$ mass scales as the fourth root of the product of $s$ and the luminosity: $$M_{Z'} \propto (s {\cal{L}})^{1/4}. \label{resc}$$ In a full analysis, the observables sensitive to new physics contributions in two-fermion production are the cross section $\sigma_{f \bar f}$, the forward-backward asymmetries $A_{FB}^{f \bar f}$ and the left-right asymmetries $A_{LR}^{f \bar f}$. The ILC gives us the possibility to study a large number of reactions, $e^+_R e^-_L$, $e^+_R e^-_R \to (u \bar u~+~d \bar d),~s \bar s,~c \bar c,~b \bar b, ~t \bar t,~e^+e^-,~\mu^+\mu^-,~\tau^+\tau^-$, with final states of well-defined flavour and, in several cases, helicity. In order to achieve this, jet flavour tagging is essential to separate $b$ quarks from lighter quarks and $c$ quarks from both $b$ and light quarks. Jet-charge and vertex-charge reconstruction then allow one to distinguish the quark from the antiquark produced in the same event [@Ackerstaff:1997ke; @Abe:2004hx]. 
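The $(s {\cal{L}})^{1/4}$ scaling above can be used to rescale a reference mass reach to other running conditions; the reference numbers in the usage below are illustrative, not quoted sensitivities:

```python
def zprime_reach(m_ref, s_ref, lum_ref, s_new, lum_new):
    """Rescale an indirect Z' mass reach using M_Z' ∝ (s * L)^(1/4).
    s_* and lum_* need only be in consistent (arbitrary) units."""
    return m_ref * ((s_new * lum_new) / (s_ref * lum_ref)) ** 0.25
```

The fourth-root dependence makes the point of the text concrete: doubling both $s$ and ${\cal{L}}$ improves the reach only by a factor $\sqrt{2}$, so large gains in indirect sensitivity require large steps in energy or luminosity.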
Similarly to the LEP and SLC analyses, the forward-backward asymmetry can be obtained from a fit to the flow of the jet charge $Q^{jet}$, defined as $Q^{jet} = \frac{\sum_i q_i |\vec{p}_i \cdot \hat{T}|^k}{\sum_i |\vec{p}_i \cdot \hat{T}|^k}$, where $q_i$ is the particle charge, $\vec{p}_i$ its momentum, $\hat{T}$ the jet thrust axis, $k$ a tunable weighting exponent, and the sum extends over all the particles in a given jet. Another possible technique uses the charge of secondary particles to determine the vertex charge and thus the quark charge. The application of this technique to the ILC has been studied in some detail in relation to the optimisation of the Vertex Tracker [@Hillert:2005rp]. At ILC energies, the $e^+e^- \rightarrow f \bar f$ cross sections are significantly reduced compared to those at LEP and SLC: at 1 TeV the cross section $\sigma(e^+e^- \to b \bar b)$ is only 100 fb, so high luminosity is essential and new experimental issues emerge. At 1 TeV, the ILC beamstrahlung parameter doubles compared to 0.5 TeV, beam-beam effects become important, and the primary $e^+e^-$ collision is accompanied by $\gamma \gamma \rightarrow {\mathrm{hadrons}}$ interactions [@Chen:1993db]. Being mostly confined to the forward regions, this background may reduce the polar angle acceptance for quark flavour tagging and dilute the quark charge separation obtained with jet charge techniques. The statistical accuracy of the determination of $\sigma_{f \bar f}$, $A_{FB}^{f \bar f}$ and $A_{LR}^{f \bar f}$ has been studied, for $\mu^+\mu^-$ and $b \bar b$, taking the ILC parameters at $\sqrt{s}$ = 1 TeV. The additional particles from the $\gamma \gamma$ background cause a broadening of the $Q^{jet}$ distribution and thus a dilution of the quark charge separation. Detailed full simulation and reconstruction is needed to fully understand these effects. 
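The jet charge estimator defined above is straightforward to compute per jet; a minimal sketch (the weighting exponent $k$ is a tunable analysis choice):

```python
def jet_charge(particles, thrust_axis, k=0.5):
    """Momentum-weighted jet charge
        Q = sum_i q_i |p_i . T|^k / sum_i |p_i . T|^k
    over the particles of one jet. `particles` is a list of
    (charge, (px, py, pz)) pairs; `thrust_axis` is a unit 3-vector."""
    num = den = 0.0
    for q, p in particles:
        w = abs(sum(pi * ti for pi, ti in zip(p, thrust_axis))) ** k
        num += q * w
        den += w
    return num / den
```

With $k < 1$ the weighting flattens the hierarchy between leading and soft particles, which is one of the handles used to optimise the quark/antiquark separation.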
Despite these backgrounds, the anticipated experimental accuracy in the determination of the electro-weak observables in two-fermion processes at 1 TeV is of the order of a few percent, confirming the ILC's role as a precision machine. Several scenarios of new physics have been investigated [@Riemann:1997py; @Battaglia:2001fr]. The analysis of the cross section and asymmetries at 1 TeV would reveal the existence of an additional $Z'$ boson up to $\simeq$ 6-15 TeV, depending on its couplings. As a comparison, the LHC direct sensitivity extends up to approximately 4-5 TeV. The ILC indirect sensitivity also extends to different models of new physics, such as a 5-dimensional extension of the SM with fermions on the boundary, for which compactification scales up to about 30 TeV can be explored. Finally, fermion compositeness or the exchange of very heavy new particles can be described in terms of effective four-fermion contact interactions [@Eichten:1983hw]. The interaction depends on a scale $\Lambda = M_X/g$, where $M_X$ is the mass of the new particle and $g$ the coupling. Limits on this scale $\Lambda$ can be set up to $\simeq$ 100 TeV, which shows that the ILC sensitivity to new phenomena can exceed its centre-of-mass energy by a significant factor. In order to maximise this indirect sensitivity to new physics, the precision of the SM predictions should match the experimental accuracy. At TeV energies, well above the electroweak scale, the ILC will face the effects of large higher-order electroweak corrections. Large logarithms $\propto \alpha^n \log^{2n} (M^2/s)$ arise from the exchange of collinear, soft gauge bosons and are known as Sudakov logarithms [@Melles:2001ye]. At 1 TeV, the logarithmically enhanced $W$ corrections to $\sigma_{b \bar b}$, of the form $\alpha \log^2 (M_W^2/s)$ and $\alpha \log (M_W^2/s)$, amount to 19% and -4%, respectively. The effect of these large logarithmic corrections has been studied in some detail [@Ciafaloni:1999ub; @Battaglia:2004mw]. 
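The size of the Sudakov logarithms at ILC energies is easy to evaluate. The sketch below computes only the bare logarithm $L = \log(s/M_W^2)$ and its square; the process-dependent coupling and colour factors that turn these into the quoted 19% and -4% corrections are deliberately left out:

```python
import math

def sudakov_logs(sqrt_s, m_w=80.4):
    """Return (L^2, L) with L = log(s / M_W^2), the double- and single-log
    enhancement factors of one-loop electroweak Sudakov corrections.
    sqrt_s and m_w are in GeV; m_w defaults to the W mass."""
    L = math.log(sqrt_s ** 2 / m_w ** 2)
    return L * L, L
```

At $\sqrt{s} = 1$ TeV the double logarithm is already $\simeq 25$, which is why formally higher-order terms become numerically large and must be resummed or computed explicitly.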
It will be essential to promote a program of studies to reduce these theoretical uncertainties, in order to fully exploit the ILC potential in these studies. ### Run Plan Scenario {#sec3.4} One of the strengths of the ILC is its remarkable flexibility of running conditions. Not only can the centre-of-mass energy be changed over approximately an order of magnitude, but the beam particles and their polarization states can be varied to suit the needs of the physics processes under study. At the same time, the ILC program is highly diversified, and data taken at the same centre-of-mass energy may be used for very different analyses, such as precise top mass determination, Higgs boson studies and the reconstruction of SUSY decays. This has raised concerns as to whether the claimed ILC accuracies can all be achieved with a finite amount of data. A dedicated study was performed in 2001, under the guidance of Paul Grannis, taking two physics scenarios with Supersymmetry realised at relatively low mass, one being the LCC1 benchmark point, rich in pair-produced particles and requiring detailed threshold scans [@Battaglia:2002ey]. The study assumes a realistic profile for the delivered luminosity, which increases from 10 fb$^{-1}$ in the first year to 200 fb$^{-1}$ in the fifth year and 250 fb$^{-1}$ afterwards, for a total integrated equivalent luminosity $\int {\cal{L}}$ = 1 ab$^{-1}$. The proposed run plan starts at the assumed maximum energy of 0.5 TeV for a first determination of the sparticle masses through the end-point study, and then scans the relevant thresholds, including $t \bar t$, in short runs with tuned polarization states. A summary is given in Table \[tab:runplan\]. \[tab:runplan\] This plan devotes approximately two thirds of the total luminosity at, or near, the maximum energy, so the program will be sensitive to unexpected new phenomena at high energy, while providing accurate measurements of masses through dedicated scans. 
Sensors and Detectors for the ILC {#sec4} --------------------------------- The development of the ILC accelerator components and the definition of its physics case have been paralleled by a continuing effort in detector design and sensor R&D. This effort is motivated by the need to design and construct detectors which match the ILC promise to provide extremely accurate measurements over a broad range of collision energies and event topologies. It is important to stress that, despite more than a decade of detector R&D for the LHC experiments, much still needs to be done to obtain sensors matching the ILC requirements. While the focus of the LHC-motivated R&D has been on sensor radiation hardness and high trigger rates, the ILC, with its more benign background conditions and lower interaction cross sections, admits sensors of new technologies which, in turn, offer better granularity, smaller thickness and much improved resolution. Sensor R&D and detector design are being carried out world-wide, and groups are now starting to deploy prototype detector modules on test beamlines. ### Detector Concepts {#sec4.1} The conceptual design effort for an optimal detector for the ILC interaction region has probed a wide spectrum of options, which span from a spherical detector structure to improved versions of more orthodox barrel-shaped detectors. These studies have been influenced by the experience with SLD at the SLC, ALEPH, DELPHI and OPAL at LEP, but also with ATLAS and CMS at the LHC. The emphasis on accurate reconstruction of the particle flow in hadronic events, and thus of the energy of partons, is common to all designs. The main tracker technology drives the detector designs presently being studied. Four detector concepts have emerged, named GLD, LDC, SiD and 4$^{th}$ Concept [@concepts]. A large, continuous 3D tracking volume in a Time Projection Chamber (TPC) is the centerpiece of the GLD, the LDC and the so-called 4$^{th}$ Concept designs. 
The TPC is followed by a highly segmented electro-magnetic calorimeter, for which these three concepts are contemplating different technologies. A discrete tracker made of layers of high-precision silicon microstrip detectors, together with a larger solenoidal field, which allows the radius, and thus the size, of the calorimeter to be reduced, is being studied in the context of the SiD design. Dedicated detector design studies are being carried out internationally [@Behnke:2005re; @Abe:2006bz] to optimise, through physics benchmarks [@Battaglia:2006bv], the integrated detector concepts. Such design activities provide a bridge from physics studies to the assessment of priorities in detector R&D, and are evolving towards the completion of engineered design reports at the end of this decade, synchronously with that foreseen for the ILC accelerator. ### Vertexing and Tracking {#sec4.2} The vertex and main tracker detectors must provide jet flavour identification and track momentum determination with the accuracy which makes the ILC such a unique facility for particle physics. The resolution in extrapolating charged particle trajectories to their production point, the so-called impact parameter, is dictated by the need to distinguish Higgs boson decays to $c \bar c$ from those to $b \bar b$ pairs, but also from $\tau^+ \tau^-$ and gluon pairs, as discussed in section \[sec3.1\]. In addition, vertex charge measurements put emphasis on precise extrapolation of particle tracks down to very low momenta. Tagging of events with multiple $b$ jets, such as $e^+e^- \to H^0A^0 \to b \bar b b \bar b$, discussed in section \[sec3.2\], underscores the need for high tagging efficiency, $\epsilon_b$, since the overall efficiency scales as $\epsilon_b^N$, where $N$ is the number of jets to be tagged. This is best achieved by analysing the secondary vertex structures in hadronic jets. 
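The $\epsilon_b^N$ scaling mentioned above makes the premium on per-jet efficiency explicit; a two-line sketch:

```python
def multijet_tag_efficiency(eps_b, n_jets):
    """Probability to tag all N b-jets in an event: eps_b ** N."""
    return eps_b ** n_jets

def required_per_jet_efficiency(target, n_jets):
    """Per-jet efficiency needed to reach a given overall N-jet efficiency."""
    return target ** (1.0 / n_jets)
```

For a four $b$-jet final state such as $A^0 H^0 \to b \bar b b \bar b$, raising $\epsilon_b$ from 0.6 to 0.9 raises the all-jets efficiency from $\simeq 0.13$ to $\simeq 0.66$, a factor of five in usable signal for the same luminosity.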
A $B$ meson from a Higgs boson produced at 0.5 TeV has an average energy of $x_B \sqrt{s}/4 \simeq$ 100 GeV, where $x_B \simeq 0.7$ represents the average $b$ fragmentation function, or a $\gamma$ value of $\simeq$ 70. Since $c \tau \simeq$ 500 $\mu$m, the average decay distance $\beta \gamma c \tau$ is 3.5 mm and the average impact parameter, $\beta \gamma c \tau \sin \theta$, is 0.5 mm. In comparison, a $D$ meson from a $H \to c \bar c$ decay has a decay length of 1.3 mm. More importantly, the average charged decay multiplicity for a $B$ meson is 5.1, while for a $D$ meson it is 2.7. Turning these numbers into performance requirements sets the target accuracy for the asymptotic term $a$ and the multiple scattering term $b$ defining the track extrapolation resolution in the formula $$\sigma_{\mathrm{extrapolation}} = {\mathrm{a}}~\oplus \frac{{\mathrm{b}}}{p_t}$$ The ILC target values are compared to those achieved by the DELPHI experiment at LEP, those expected for ATLAS at the LHC and the best performance ever achieved at a collider experiment, that of SLD, in Table \[tab:ipres\]. \[tab:ipres\] This comparison shows that the improvement required for the ILC over state-of-the-art technology is a factor 2-5 on the asymptotic resolution and another factor 3-7 on the multiple scattering term. At the ILC, particle tracks in highly collimated jets contribute a local track density on the innermost layer of 0.2-1.0 hits mm$^{-2}$ at 0.5 TeV, rising to 0.4-1.5 hits mm$^{-2}$ at 1.0 TeV. Machine-induced backgrounds, mostly pairs, add about 3-4 hits mm$^{-2}$, assuming that the detector integrates 80 consecutive bunch crossings in a train. These values are comparable to, or even exceed, those expected on the innermost layer of the LHC detectors: 0.03 hits mm$^{-2}$ for proton collisions in ATLAS and 0.9 hits mm$^{-2}$ for heavy ion collisions in ALICE. Occupancy and point resolution set the pixel size to $20\times20~\mu$m$^2$ or less. 
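The extrapolation resolution formula can be evaluated directly; the values of $a$ and $b$ in the usage below are illustrative placeholders, not the actual ILC targets of Table \[tab:ipres\]:

```python
import math

def extrapolation_resolution(a, b, pt):
    """Track impact-parameter resolution sigma = a (+) b/pt, with the
    asymptotic point-resolution term a and the multiple-scattering term b
    added in quadrature. Units: a, b/pt in micron, pt in GeV."""
    return math.hypot(a, b / pt)
```

The quadrature sum shows why the two terms are quoted separately: at high $p_t$ the asymptotic term $a$ dominates and tests the sensor point resolution, while at low $p_t$ the $b/p_t$ term dominates and tests the material budget.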
The impact parameter accuracy sets the layer material budget to $\le 0.15\%~X_0$/layer. This motivates the development of thin monolithic pixel sensors. Charge-coupled devices (CCDs) have been a prototype architecture since the success of the SLD VXD3 [@Abe:1999ky]. However, to match the ILC requirements in terms of radiation hardness and read-out speed, significant R&D is needed. New technologies, such as CMOS active pixels [@Turchetta:2001dy], SOI [@Marczewski:2005vy] and DEPFET [@Richter:2003dn] sensors, are emerging as promising, competitive alternatives, supported by an intensive sensor R&D effort promoted for the ILC [@Battaglia:2003kn]. The process $e^+e^- \to H^0Z^0$, $H^0 \to X$, $Z^0 \to \ell^+ \ell^-$ gives access to Higgs production, irrespective of the Higgs decay properties. Lepton momenta must be measured very accurately for the recoil mass resolution to be limited by the irreducible smearing due to beamstrahlung. Since the centre-of-mass energy $\sqrt{s} = E_{H} + E_{Z}$ is known and the total momentum $p_{H} + p_{Z} = 0$, the Higgs mass, $M_H$, can be written as: $$M_H^2 = E_H^2 - p_H^2 = (\sqrt{s} - E_Z)^2 - p_Z^2 = s + E_Z^2 - 2\sqrt{s} E_Z - p_Z^2 = s - 2\sqrt{s} E_Z + M_Z^2$$ In the decay $Z^0 \to \mu^+ \mu^-$, $E_Z = E_{\mu^+} + E_{\mu^-}$, so that the resolution on $M_H$ depends on that on the muon momentum. In quantitative terms, the required resolution is $$\delta p / p^2 < 2 \times 10^{-5}$$ A comparison with the performance of trackers at LEP and the LHC is given in Table \[tab:pres\].

\[tab:pres\]

The ability to tag Higgs bosons independently of their decay mode is central to the ILC program in Higgs physics. A degraded momentum resolution would correspond to a larger background, mostly from $e^+e^- \to ZZ^*$, being accepted in the Higgs signal sample. This degrades the accuracy of the determination of the Higgs couplings, in terms of both statistical and systematic uncertainties.
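The recoil-mass relation above can be checked numerically. The sketch below assumes illustrative values ($\sqrt{s}=500$ GeV, $M_Z=91.19$ GeV, a hypothetical 120 GeV Higgs) and verifies that reconstructing $M_H$ from the measured $Z$ energy inverts the relation:

```python
import math

SQRT_S = 500.0   # GeV, assumed centre-of-mass energy (illustrative)
M_Z = 91.19      # GeV

def recoil_mass(e_z: float) -> float:
    """M_H from M_H^2 = s - 2*sqrt(s)*E_Z + M_Z^2."""
    m2 = SQRT_S**2 - 2.0 * SQRT_S * e_z + M_Z**2
    return math.sqrt(m2)

def z_energy(m_h: float) -> float:
    """Invert the relation: E_Z = (s + M_Z^2 - M_H^2) / (2*sqrt(s))."""
    return (SQRT_S**2 + M_Z**2 - m_h**2) / (2.0 * SQRT_S)

# Round trip for a hypothetical 120 GeV Higgs:
e_z = z_energy(120.0)
print(f"E_Z = {e_z:.2f} GeV -> recoil mass = {recoil_mass(e_z):.2f} GeV")
```

Because only $E_Z$ enters, the recoil mass is insensitive to how the Higgs itself decays, which is the point made in the text.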
The particle momentum is measured through its bending radius $R$ in the solenoidal magnetic field, $B$. The error on the curvature, $k=1/R$, for a particle track of high momentum, measured at $N$ equidistant points with an accuracy $\sigma$ over a length $L$, applying the constraint that it originates at the primary vertex (as for the leptons from the $Z^0$ in the Higgsstrahlung reaction), is given by [@pdg]: $$\delta k = \frac{\sigma}{L^2}\sqrt{\frac{320}{N+4}}$$ This shows that the same momentum resolution can be achieved either by a large number of measurements, each of moderate accuracy, as in the case of a continuous gaseous tracker, or by a small number of points measured with high accuracy, as in the case of a discrete Si tracker. Continuous tracking capability over a large volume, together with timing information, specific-ionization measurement and robust performance, makes the Time Projection Chamber an attractive option for precision tracking at the ILC. The introduction of Micro Pattern Gaseous Detectors [@Giomataris:1995fq; @Sauli:1997qp] (MPGD) offers significant improvements in terms of reduced $E \times B$ effects, larger gains, ion suppression and faster, narrower signals, providing better space resolution. Improving on the space resolution requires an optimal sampling of the collected charge, while the high solenoidal magnetic field reduces the diffusion effects. Several paths are presently being explored with small-size prototypes operated on beamlines and in large magnetic fields [@Kappler:2004cg; @Colas:2004ks]. A multi-layered Si strip tracker in a high $B$ field may offer a competitive $\delta p/p^2$ resolution with a reduced material budget and afford a smaller-radius ECAL, thus reducing the overall detector cost. This is the main rationale promoting the development of an all-Si concept for the main tracker, which follows the spirit of the design of the CMS detector at the LHC.
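The curvature-error formula makes this trade-off explicit. The sketch below (with hypothetical point resolutions and geometries, purely for illustration) converts $\delta k$ into $\delta p_t/p_t^2$ via $p_t\,[\mathrm{GeV}/c] = 0.3\,B\,[\mathrm{T}]\,R\,[\mathrm{m}]$, i.e. $\delta(1/p_t) = \delta k/(0.3B)$:

```python
import math

def delta_k(sigma_m: float, length_m: float, n_points: int) -> float:
    """Curvature error with the vertex constraint: dk = (sigma/L^2) * sqrt(320/(N+4))."""
    return (sigma_m / length_m**2) * math.sqrt(320.0 / (n_points + 4))

def dp_over_p2(sigma_m: float, length_m: float, n_points: int, b_tesla: float) -> float:
    """delta(pt)/pt^2 in (GeV/c)^-1, using k = 0.3*B/pt (pt in GeV/c, B in T, R in m)."""
    return delta_k(sigma_m, length_m, n_points) / (0.3 * b_tesla)

# Hypothetical configurations (not the actual ILC detector designs):
tpc = dp_over_p2(sigma_m=100e-6, length_m=1.5, n_points=200, b_tesla=4.0)
si  = dp_over_p2(sigma_m=7e-6,   length_m=1.2, n_points=5,   b_tesla=5.0)
print(f"TPC-like: {tpc:.2e} (GeV/c)^-1, Si-like: {si:.2e} (GeV/c)^-1")
```

With these placeholder numbers, many moderate-precision points and few high-precision points land in the same ballpark, which is the point of the comparison in the text.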
Dedicated conceptual design and module R&D are being carried out as a world-wide program [@Kroseberg:2005ue]. Considerable R&D is also required for the engineering of detector ladders, addressing such issues as mechanical stability and the integration of cooling and electrical services. These modules may also be considered as supplemental tracking devices in a TPC-based design, providing extra space points with high resolution, and in end-cap tracking planes. Assessing the required detector performance involves realistic simulation and reconstruction code accounting for inefficiencies, noise, overlaps and backgrounds.

### Calorimetry {#sec4.3}

The ILC physics program requires precise measurements of multi-jet hadronic events, in particular of di-jet invariant masses, to identify $W$, $Z$ and Higgs bosons through their hadronic decays. An especially demanding reaction is $e^+e^- \to Z^0 H^0 H^0$, which provides access to the triple Higgs coupling, as discussed in section \[sec3.1\]. The large background from $e^+e^- \to Z^0 Z^0 Z^0$ can be reduced only by an efficient $H^0$/$Z^0$ separation, based on their masses. This sets the requirement on the parton energy resolution obtained through the measurement of hadronic jets. Detailed simulation [@Castanier:2001sf] shows that a jet energy resolution $\frac{\sigma_{E_{jet}}}{E_{jet}} \simeq \frac{0.30}{\sqrt{E}}$ is required in order to achieve a useful accuracy on the $g_{HHH}$ coupling. The analysis of other processes, such as $e^+e^- \to W^+W^- \nu \bar \nu$ and Higgs hadronic decays, leads to similar conclusions [@Brient:2002gh]. In the case of the determination of the $H^0 \to W^+W^-$ branching fraction, the statistical accuracy degrades by 22% when changing the jet energy resolution from $\frac{0.30}{\sqrt{E}}$ to $\frac{0.60}{\sqrt{E}}$. Such performance is unprecedented and requires the development of an advanced calorimeter design as well as new reconstruction strategies.
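To give a feel for what a $0.30/\sqrt{E}$ jet energy resolution implies for boson mass separation, the sketch below propagates the jet resolution into a di-jet mass resolution under the simplifying assumptions of two equal-energy jets and a perfectly measured opening angle (these assumptions, like the chosen energies, are illustrative only):

```python
import math

def jet_sigma(e_jet: float, stochastic: float = 0.30) -> float:
    """Jet energy resolution sigma_E = stochastic * sqrt(E), energies in GeV."""
    return stochastic * math.sqrt(e_jet)

def dijet_mass_sigma(mass: float, e_jet: float, stochastic: float = 0.30) -> float:
    """For M^2 = 2 E1 E2 (1 - cos theta) with E1 = E2 and a fixed angle,
    sigma_M / M = (1/sqrt(2)) * sigma_E / E."""
    return mass * jet_sigma(e_jet, stochastic) / (math.sqrt(2.0) * e_jet)

# Illustrative: a Z (91.2 GeV) reconstructed from two 60 GeV jets
for stoch in (0.30, 0.60):
    print(f"{stoch}/sqrt(E): sigma_M = {dijet_mass_sigma(91.2, 60.0, stoch):.2f} GeV")
```

Halving the stochastic term halves the di-jet mass resolution, directly sharpening the mass-based $H^0$/$Z^0$ (and $W$/$Z$) separation.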
The most promising approach is based on the [*particle flow algorithm*]{} (PFA). The energy of each particle in a hadronic jet is determined using the detector that can measure it with the best accuracy. In the case of charged particles, this is achieved by measuring the particle bending in the solenoidal field with the main tracker. Electromagnetic neutrals ($\gamma$ and $\pi^0$) are measured in the electromagnetic calorimeter and hadronic neutrals ($K^0_L$, $n$) in the hadronic calorimeter. The jet energy is then obtained by summing these energies: $$E_{jet} = E_{charged} + E_{em~neutral} + E_{had~neutral}$$ each being measured in a specialised detector. The resolution is given by: $$\sigma^2_{E_{jet}} = \sigma^2_{charged} + \sigma^2_{em~neutral} + \sigma^2_{had~neutral} + \sigma^2_{confusion}.$$ Assuming the anticipated momentum resolution, $\sigma_{E} \simeq 0.11/\sqrt{E}$ for the e.m. calorimeter and $\sigma_E \simeq 0.40/\sqrt{E}$ for the hadronic calorimeter, together with the typical fractions of charged, e.m. neutral and hadronic neutral energy in a hadronic jet, we get: $$\sigma^2_{charged} \simeq (0.02 {\mathrm{GeV}})^2 \frac{1}{10} \sum \frac{E^4_{charged}}{(10 {\mathrm{GeV}})^4}$$ $$\sigma^2_{em~neutral} \simeq (0.6 {\mathrm{GeV}})^2 \frac{E_{jet}}{100 {\mathrm{GeV}}}$$ $$\sigma^2_{had~neutral} \simeq (1.3 {\mathrm{GeV}})^2 \frac{E_{jet}}{100 {\mathrm{GeV}}}$$ In the case of perfect energy-particle association this would correspond to a jet resolution of $\simeq 0.14/\sqrt{E}$. However, a major source of resolution loss turns out to be the confusion term, $\sigma_{confusion}$, which originates from inefficiencies, double-counting and fakes, and which needs to be minimised by efficient pattern recognition. This strategy was pioneered by the ALEPH experiment at LEP, where a resolution of $\simeq 0.60/\sqrt{E}$ was obtained, starting from stochastic resolutions of $\sigma_{E} \simeq 0.18/\sqrt{E}$ for the e.m.
calorimeter and $\sigma_E \simeq 0.85/\sqrt{E}$ for the hadronic calorimeter [@Buskulic:1994wz]. At hadron colliders, the possible improvement from using tracking information together with calorimetric measurements is limited by the underlying event and the size of the shower core. At the ILC, by contrast, these limitations can be overcome by developing an imaging calorimeter, in which spatial resolution becomes as important as energy resolution. The confusion rate can then be minimised by choosing a large solenoidal field, $B$, and calorimeter radius, $R$, to increase the separation between charged and neutral particles in dense jets; a small Moliere radius, $R_M$, for the e.m. calorimeter, to reduce the transverse shower spread; and small cells, $R_{pixel}$, with large longitudinal segmentation. The distance between a neutral and a charged particle of transverse momentum $p_t$, at the entrance of the e.m. calorimeter located at a radius $R$, is given by $0.15 B R^2 / p_t$, where $B$ is the solenoidal magnetic field. A useful figure of merit for the detector, in terms of its particle flow reconstruction capability, is then offered by: $$\frac{B R^2}{R_M^2 R_{pixel}^2}$$ which is a measure of the particle separation capability. The value of $BR^2$ is limited to about 60 Tm$^2$ by mechanical stability. An optimal material in terms of Moliere radius is Tungsten, with $R_M$ = 9 mm. In four-jet events at $\sqrt{s}$=0.8 TeV, there are on average 28 GeV per di-jet carried by photons which are deposited within 2.5 cm of a charged particle at the e.m. calorimeter radius. With pixel cells of order $1 \times 1$ cm$^2$, to ensure sufficient transverse segmentation, and 30 to 40 layers in depth, the e.m. calorimeter would consist of up to 30 M channels and 3000 m$^2$ of active Si. Due to the large number of channels and the wish to use an absorber with the smallest possible Moliere radius, the e.m.
calorimeter is the main cost-driver of the ILC detector, and its optimisation in terms of performance and cost requires a significant R&D effort. A Silicon-Tungsten calorimeter (SiW) was first proposed in the framework of the TESLA study [@Videau:2000es; @Behnke:2001qq] and is currently being pursued by large R&D collaborations in both Europe and the US. Alternative technologies are also being studied by the GLD and the 4$^{th}$ Concept. This R&D program involves design, prototyping and tests with high-energy particle beams, and is being carried out world-wide [@Strom:2005id; @Mavromanolakis:2005yh; @Strom:2005xt], supported by efforts on detailed simulation and reconstruction.

Epilogue
--------

The ILC promises to complement and expand the probe into the TeV scale beyond the LHC capabilities, matching and improving its energy reach while adding precision. Its physics program will address many of the fundamental questions of today’s physics, from the origin of mass to the nature of Dark Matter. After more than two decades of intense R&D carried out world-wide, the $e^+e^-$ linear collider, with centre-of-mass energies up to 1 TeV, has become technically feasible and a costed reference design is now available. Detectors matching the precision requirements of its anticipated physics program are being developed in an intense world-wide R&D effort. Theoretical predictions matching the anticipated experimental accuracies are now crucially needed, as well as further clues on what physics scenarios could be unveiled by signals that the LHC may soon be observing. These will contribute to further define the physics landscape for the ILC. A TeV-scale electron-positron linear collider is an essential component of the research program that will provide in the next decades new insights into the structure of space, time, matter and energy.
Thanks to the efforts of many groups from laboratories and universities around the world, the technology for achieving this goal is now in hand, and the prospects for the ILC success are extraordinarily bright.

Acknowledgments {#acknowledgments .unnumbered}
---------------

I am grateful to the TASI organisers, in particular to Sally Dawson and Rabindra N. Mohapatra, for their invitation and the excellent organization. I am indebted to many colleagues who have shared with me, over many years, both the excitement of the ILC physics studies and detector R&D, as well as many of the results included in this article. I would like to mention here Ugo Amaldi, Timothy Barklow, Genevieve Belanger, Devis Contarato, Stefania De Curtis, Jean-Pierre Delahaye, Albert De Roeck, Klaus Desch, Daniele Dominici, John Ellis, JoAnne Hewett, Konstantin Matchev, Michael Peskin and Tom Rizzo. I am also grateful to Barry Barish, JoAnne Hewett, Mark Oreglia and Michael Peskin for reviewing the manuscript and for their suggestions. This work was supported in part by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

[99]{} M. Tigner, *Nuovo Cim.* [**37**]{} 1228 (1965). U. Amaldi, *Phys. Lett.* [**B61**]{} 313 (1976). B. Richter, IEEE Trans. Nucl. Sci. [**26**]{}, 4261 (1979). W. Schnell, [*A Two Stage Rf Linear Collider Using A Superconducting Drive Linac*]{}, CERN-LEP-RF/86-06 (1986). C. Ahn [*et al.*]{}, [*Opportunities and Requirements for Experimentation at a Very High-Energy $e^+e^-$ Collider*]{}, SLAC-0329 (1988). G. Loew (editor), *International Linear Collider Technical Review Committee: Second Report*, SLAC-R-606 (2003). Report of the OECD Consultative Group on High-Energy Physics, June 2002 ([http://www.oecd.org/dataoecd/2/32/1944269.pdf]{}). L. Maiani, prepared for the [*9th International Symposium on Neutrino Telescopes*]{}, Venice, Italy, 6-9 March 2001. D. Treille, *Nucl. Phys. Proc. Suppl.* [**109B**]{}, 1 (2002). R.
Brinkmann, K. Flottmann, J. Rossbach, P. Schmueser, N. Walker and H. Weise (editors), *TESLA: The superconducting electron positron linear collider with an integrated X-ray laser laboratory. Technical design report.*, DESY-01-011B (2001). R. W. Assmann [*et al.*]{}, [*A 3-TeV $e^+e^-$ linear collider based on CLIC technology*]{}, CERN-2000-008 (2000). W. Wuensch, [*Progress in Understanding the High-Gradient Limitations of Accelerating Structures*]{}, CLIC-Note-706 (2007). M. Battaglia, A. De Roeck, J. Ellis and D. Schulte (editors), [*Physics at the CLIC multi-TeV linear collider: Report of the CLIC Physics Working Group*]{}, CERN-2004-005 (2004) and arXiv:hep-ph/0412251. W.P. Leemans [*et al.*]{}, *Nature Physics* [**2**]{} 696 (2006). B. Richter, SLAC-PUB-2854 (1981) U. Amaldi, [*Summary talk given at Workshop on Physics at Future Accelerators, La Thuile, Italy, Jan 7-13, 1987*]{}, CERN-EP/87-95 (1987). R. J. Noble, Nucl. Instrum. Meth.  A [**256**]{}, 427 (1987). H. Murayama and M. E. Peskin, Ann. Rev. Nucl. Part. Sci.  [**46**]{}, 533 (1996) \[arXiv:hep-ex/9606003\]. J. A. Aguilar-Saavedra [*et al.*]{} \[ECFA/DESY LC Physics Working Group\], *TESLA Technical Design Report Part III: Physics at an e+e- Linear Collider*, DESY-2001-011C (2001) and arXiv:hep-ph/0106315. T. Abe [*et al.*]{} \[American Linear Collider Working Group\], [*Linear collider physics resource book for Snowmass 2001*]{}, SLAC-R-570 (2001). K. Abe [*et al.*]{} \[ACFA Linear Collider Working Group\], [*Particle physics experiments at JLC*]{}, KEK-REPORT-2001-11 (2001) and arXiv:hep-ph/0109166. S. Dawson and M. Oreglia, Ann. Rev. Nucl. Part. Sci.  [**54**]{}, 269 (2004) \[arXiv:hep-ph/0403015\]. P.W. Higgs, *Phys. Rev. Lett.* [**12**]{} 132 (1964); [*idem*]{}, *Phys. Rev.* [**145**]{} 1156 (1966); F. Englert and R. Brout, [*Phys. Rev. Lett.*]{} [**13**]{} 321 (1964); G.S. Guralnik, C.R. Hagen and T.W. Kibble, *Phys. Rev. Lett.* [**13**]{} 585 (1964). A. Hasenfratz [*et al.*]{}, *Phys. 
Lett.* [**B199**]{} 531 (1987); M. Lüscher and P. Weisz, *Phys. Lett.* [**B212**]{} 472 (1988); M. Göckeler [*et al.*]{}, *Nucl. Phys.* [**B404**]{} 517 (1993). R. Barate [*et al.*]{} \[LEP Working Group for Higgs boson searches\], *Phys. Lett. B* [**565**]{}, 61 (2003) \[arXiv:hep-ex/0306033\]. LEP Electroweak Working Group, Report CERN-PH-EP-2006 (2006), arXiv:hep-ex/0612034 and subsequent updates available at [http://lepewwg.web.cern.ch/LEPEWWG/]{}. S. Heinemeyer [*et al.*]{}, arXiv:hep-ph/0511332. P. Garcia-Abia, W. Lohmann and A. Raspereza, Note LC-PHSM-2000-062 (2000). D. J. Miller, S. Y. Choi, B. Eberle, M. M. Muhlleitner and P. M. Zerwas, Phys. Lett.  B [**505**]{}, 149 (2001) \[arXiv:hep-ph/0102023\]. M. Schumacher, Note LC-PHSM-2001-003 (2001). A. Djouadi, M. Spira and P. M. Zerwas, Z. Phys.  C [**70**]{}, 427 (1996) \[arXiv:hep-ph/9511344\]. M. D. Hildreth, T. L. Barklow and D. L. Burke, Phys. Rev.  D [**49**]{}, 3441 (1994). M. Carena, H. E. Haber, H. E. Logan and S. Mrenna, Phys. Rev.  D [**65**]{}, 055005 (2002) \[Erratum-ibid.  D [**65**]{}, 099902 (2002)\] \[arXiv:hep-ph/0106116\]. K. Desch, E. Gross, S. Heinemeyer, G. Weiglein and L. Zivkovic, JHEP [**0409**]{}, 062 (2004) \[arXiv:hep-ph/0406322\]. M. Battaglia, D. Dominici, J. F. Gunion and J. D. Wells, arXiv:hep-ph/0402062. M. Battaglia, arXiv:hep-ph/9910271. B. Aubert [*et al.*]{} \[BABAR Collaboration\], *Phys. Rev. Lett. * [**93**]{}, 011803 (2004) \[arXiv:hep-ex/0404017\]. C. W. Bauer, Z. Ligeti, M. Luke and A. V. Manohar, *Phys. Rev.* [**D67**]{}, 054012 (2003) \[arXiv:hep-ph/0210027\]. M. Battaglia [*et al.*]{}, *Phys. Lett.* [**B556**]{}, 41 (2003) \[arXiv:hep-ph/0210319\]. T. Kuhl, prepared for the [*International Conference on Linear Colliders (LCWS 04)*]{}, Paris, France, 19-24 April 2004. M. Battaglia and A. De Roeck, arXiv:hep-ph/0211207. M. Battaglia, arXiv:hep-ph/0211461. T. L. Barklow, arXiv:hep-ph/0312268. A. Djouadi, W. Kilian, M. Muhlleitner and P. M. Zerwas, *Eur. Phys. 
J.* [**C10**]{} (1999) 27 \[arXiv:hep-ph/9903229\]. S. Kanemura, Y. Okada, E. Senaha and C. P. Yuan, Phys. Rev.  D [**70**]{}, 115002 (2004) \[arXiv:hep-ph/0408364\]. A. Gutierrez-Rodriguez, M. A. Hernandez-Ruiz and O. A. Sampayo, arXiv:hep-ph/0601238. C. Castanier, P. Gay, P. Lutz and J. Orloff, arXiv:hep-ex/0101028. M. Battaglia, E. Boos and W. M. Yao, in [*Proc. of the APS/DPF/DPB Summer Study on the Future of Particle Physics (Snowmass 2001)* ]{} ed. N. Graf, E3016, \[arXiv:hep-ph/0111276\]. U. Baur, T. Plehn and D. L. Rainwater, *Phys. Rev.* [**D67**]{}, 033003 (2003) \[arXiv:hep-ph/0211224\]. U. Baur, T. Plehn and D. L. Rainwater, *Phys. Rev.* [**D69**]{}, 053004 (2004) \[arXiv:hep-ph/0310056\]. T. L. Barklow, arXiv:hep-ph/0411221. A. Datta, K. Kong and K. T. Matchev, Phys. Rev. D [**72**]{}, 096006 (2005) \[Erratum-ibid. D [**72**]{}, 119901 (2005)\] \[arXiv:hep-ph/0509246\]. J. M. Smillie and B. R. Webber, JHEP [**0510**]{}, 069 (2005) \[arXiv:hep-ph/0507170\]. M. Battaglia, A. Datta, A. De Roeck, K. Kong and K. T. Matchev, JHEP [**0507**]{}, 033 (2005) \[arXiv:hep-ph/0502041\]. D. N. Spergel et al. \[WMAP Collaboration\], Astrophys. J. Suppl. [**148**]{}, 175 (2003) \[arXiv:astro-ph/0302209\] J. R. Bond, G. Efstathiou and M. Tegmark, Mon. Not. Roy. Astron. Soc. [**291**]{}, L33 (1997) \[arXiv:astro-ph/9702100\] W. de Boer, C. Sander, V. Zhukov, A. V. Gladyshev and D. I. Kazakov, Astron. Astrophys.  [**444**]{}, 51 (2005) \[arXiv:astro-ph/0508617\]. D. P. Finkbeiner, arXiv:astro-ph/0409027. D. S. Akerib [*et al.*]{} \[CDMS Collaboration\], Phys. Rev. Lett.  [**96**]{}, 011302 (2006) \[arXiv:astro-ph/0509259\]. R. J. Scherrer and M. S. Turner, Phys. Rev. D [**33**]{}, 1585 (1986) \[Erratum-ibid. D [**34**]{}, 3263 (1986)\]. M. Battaglia, A. De Roeck, J. R. Ellis, F. Gianotti, K. A. Olive and L. Pape, Eur. Phys. J. C [**33**]{}, 273 (2004) \[arXiv:hep-ph/0306219\]. K. Kong and K. T. Matchev, JHEP [**0601**]{}, 038 (2006) \[arXiv:hep-ph/0509119\]. G. 
Weiglein [*et al.*]{} \[LHC/LC Study Group\], arXiv:hep-ph/0410364. R. Gray [*et al.*]{}, arXiv:hep-ex/0507008. V. Khotilovich, R. Arnowitt, B. Dutta and T. Kamon, Phys. Lett. B [**618**]{}, 182 (2005) \[arXiv:hep-ph/0503165\]. M. Battaglia, arXiv:hep-ph/0410123. F. E. Paige, S. D. Protopescu, H. Baer and X. Tata, arXiv:hep-ph/0312045. P. Gondolo, J. Edsjo, P. Ullio, L. Bergstrom, M. Schelke and E. A. Baltz, JCAP [**0407**]{}, 008 (2004) \[arXiv:astro-ph/0406204\]. G. Belanger, F. Boudjema, A. Pukhov and A. Semenov, arXiv:hep-ph/0607059. E. A. Baltz, M. Battaglia, M. E. Peskin and T. Wizansky, Phys. Rev. D [**74**]{}, 103521 (2006) \[arXiv:hep-ph/0602187\]. J. L. Feng and D. E. Finnell, Phys. Rev. D [**49**]{}, 2369 (1994) \[arXiv:hep-ph/9310211\]. G. A. Moortgat-Pick [*et al.*]{}, arXiv:hep-ph/0507011, based on work of U. Nauenberg [*et al.*]{}. G. A. Blair, in [*Proc. of the APS/DPF/DPB Summer Study on the Future of Particle Physics (Snowmass 2001)*]{} ed. N. Graf, E3019. H. U. Martyn and G. A. Blair, Note LC-TH-2000-023. P. Bambade, M. Berggren, F. Richard and Z. Zhang, arXiv:hep-ph/0406010. P. Chen and V. I. Telnov, Phys. Rev. Lett. [**63**]{}, 1796 (1989). T. Tauchi, K. Yokoya and P. Chen, Part. Accel. [**41**]{}, 29 (1993). H. Baer, A. Belyaev, T. Krupovnickas and X. Tata, JHEP [**0402**]{}, 007 (2004) \[arXiv:hep-ph/0311351\]. C. Balazs, M. Carena and C. E. M. Wagner, Phys. Rev. D [**70**]{}, 015007 (2004) \[arXiv:hep-ph/0403224\]. J. L. Feng, in Proc. of the [*2005 Int. Linear Collider Workshop (LCWS 2005)*]{}, Stanford, California, 18-22 Mar 2005, pp 0013 and \[arXiv:hep-ph/0509309\]. M. Battaglia [*et al.*]{}, in [*Physics and Experiments with Future Linear $e^+e^-$ Colliders*]{}, (A. Para and H.E. Fisk editors), AIP Conference Proceedings, New York, 2001, 607 \[arXiv:hep-ph/0101114\]. D. Dominici, arXiv:hep-ph/0110084. J. L. Hewett, arXiv:hep-ph/9308321. T. G. Rizzo, arXiv:hep-ph/0610104. K. Ackerstaff [*et al.*]{} \[OPAL Collaboration\], Z. Phys.
C [**75**]{}, 385 (1997). K. Abe [*et al.*]{} \[SLD Collaboration\], Phys. Rev. Lett.  [**94**]{}, 091801 (2005) \[arXiv:hep-ex/0410042\]. S. Hillert \[LCFI Collaboration\], [*In the Proceedings of 2005 International Linear Collider Workshop (LCWS 2005), Stanford, California, 18-22 Mar 2005, pp 0313*]{}. P. Chen, T. L. Barklow and M. E. Peskin, Phys. Rev.  D [**49**]{}, 3209 (1994) \[arXiv:hep-ph/9305247\]. S. Riemann, arXiv:hep-ph/9710564. M. Battaglia, S. De Curtis, D. Dominici and S. Riemann, in [*Proc. of the APS/DPF/DPB Summer Study on the Future of Particle Physics (Snowmass 2001)* ]{} ed. N. Graf, E3020, \[arXiv:hep-ph/0112270\]. M. Melles, Phys. Rept.  [**375**]{}, 219 (2003) \[arXiv:hep-ph/0104232\]. E. Eichten, K. D. Lane and M. E. Peskin, Phys. Rev. Lett.  [**50**]{}, 811 (1983). P. Ciafaloni and D. Comelli, Phys. Lett. B [**476**]{}, 49 (2000) \[arXiv:hep-ph/9910278\]. M. Battaglia [*et al.*]{}, in [*Proc. of the APS/DPF/DPB Summer Study on the Future of Particle Physics (Snowmass 2001)* ]{} ed. N. Graf, E3006, \[arXiv:hep-ph/0201177\]. T. Behnke, [*In the Proceedings of 2005 International Linear Collider Workshop (LCWS 2005), Stanford, California, 18-22 Mar 2005, pp 0006*]{}. K. Abe [*et al.*]{} \[GLD Concept Study Group\], arXiv:physics/0607154. M. Battaglia, T. Barklow, M. Peskin, Y. Okada, S. Yamashita and P. Zerwas, [*In the Proceedings of 2005 International Linear Collider Workshop (LCWS 2005), Stanford, California, 18-22 Mar 2005, pp 1602*]{} \[arXiv:hep-ex/0603010\]. T. Abe \[SLD Collaboration\], Nucl. Instrum. Meth. A [**447**]{} (2000) 90 \[arXiv:hep-ex/9909048\]. R. Turchetta [*et al.*]{}, Nucl. Instrum. Meth. A [**458**]{} (2001) 677. J. Marczewski [*et al.*]{}, Nucl. Instrum. Meth. A [**549**]{} (2005) 112. R. H. Richter [*et al.*]{}, Nucl. Instrum. Meth. A [**511**]{} (2003) 250. M. Battaglia, Nucl. Instrum. Meth. A [**530**]{}, 33 (2004) \[arXiv:physics/0312039\]. W.-M. Yao [*et al*]{}, J. Phys. G [**33**]{}, 1 (2006) Y. Giomataris, P. 
Rebourgeard, J. P. Robert and G. Charpak, Nucl. Instrum. Meth. A [**376**]{}, 29 (1996). F. Sauli, Nucl. Instrum. Meth. A [**386**]{}, 531 (1997). S. Kappler [*et al.*]{}, IEEE Trans. Nucl. Sci.  [**51**]{}, 1039 (2004). P. Colas [*et al.*]{}, Nucl. Instrum. Meth. A [**535**]{}, 506 (2004). J. Kroseberg [*et al.*]{}, arXiv:physics/0511039. T. Behnke, S. Bertolucci, R. D. Heuer and R. Settles, *TESLA Technical design report. Pt. 4: A detector for TESLA* DESY-01-011 (2001). J. C. Brient and H. Videau, in [*Proc. of the APS/DPF/DPB Summer Study on the Future of Particle Physics (Snowmass 2001)* ]{} ed. N. Graf, E3047, \[arXiv:hep-ex/0202004\]. D. Buskulic [*et al.*]{} \[ALEPH Collaboration\], Nucl. Instrum. Meth.  A [**360**]{}, 481 (1995). H. Videau, [*Prepared for 5th International Linear Collider Workshop (LCWS 2000), Fermilab, Batavia, Illinois, 24-28 Oct 2000*]{} D. Strom [*et al.*]{}, IEEE Trans. Nucl. Sci.  [**52**]{}, 868 (2005). G. Mavromanolakis, [*In the Proceedings of 2005 International Linear Collider Workshop (LCWS 2005), Stanford, California, 18-22 Mar 2005, pp 0906*]{} \[arXiv:physics/0510181\]. D. Strom [*et al.*]{}, [*In the Proceedings of 2005 International Linear Collider Workshop (LCWS 2005), Stanford, California, 18-22 Mar 2005, pp 0908*]{}.
--- abstract: | The evolution of number density, size and intrinsic colour is determined for a volume-limited sample of visually classified early-type galaxies selected from the HST/ACS images of the GOODS North and South fields (version 2). The sample comprises $457$ galaxies over $320$ arcmin$^2$ with stellar masses above $3\cdot 10^{10}M_\odot$ in the redshift range 0.4$<$z$<$1.2. Our data allow a simultaneous study of number density, intrinsic colour distribution and size. We find that the most massive systems (${\lower.5ex\hbox{$\; \buildrel > \over \sim \;$}}3\cdot 10^{11}M_\odot$) do not show any appreciable change in comoving number density or size in our data. Furthermore, when including the results from 2dFGRS, we find that the number density of massive early-type galaxies is consistent with no evolution between z=1.2 and 0, i.e. over an epoch spanning more than half of the current age of the Universe. Massive galaxies show very homogeneous [*intrinsic*]{} colour distributions, featuring red cores with small scatter. The distribution of half-light radii – when compared to z$\sim$0 and z$>$1 samples – is compatible with the predictions of semi-analytic models relating size evolution to the amount of dissipation during major mergers. However, in a more speculative fashion, the observations can also be interpreted as weak or even no evolution in comoving number density [*and size*]{} between 0.4$<$z$<$1.2, thus pushing major mergers of the most massive galaxies towards lower redshifts. author: - | Ignacio Ferreras$^{1}$[^1], Thorsten Lisker$^2$, Anna Pasquali$^3$, Sadegh Khochfar$^4$, Sugata Kaviraj$^{1,5}$\ $^1$ Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey RH5 6NT\ $^2$ Astronomisches Rechen-Institut, Zentrum für Astronomie, Universität Heidelberg, Mönchhofstr.
12-14, D-69120 Heidelberg, Germany\ $^3$ Max-Planck-Institut für Astronomie, Koenigstuhl 17, D-69117 Heidelberg, Germany\ $^4$ Max-Planck-Institut für Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching, Germany\ $^5$ Astrophysics subdepartment, The Denys Wilkinson Building, Keble Road, Oxford OX1 3RH date: 'January 20, 2009: To be published in MNRAS' title: 'On the formation of massive galaxies: A simultaneous study of number density, size and intrinsic colour evolution in GOODS' --- \[firstpage\] galaxies: evolution — galaxies: formation — galaxies: luminosity function, mass function — galaxies: high redshift

Introduction {#sec:intro}
============

During the past decades the field of extragalactic astrophysics has undergone an impressive development, from simple models that were compared with small, relatively nearby samples to current surveys extending over millions of Mpc$^3$ at redshifts beyond z$\sim$1, along with numerical models that can probe cosmological volumes with the aid of large supercomputers. However, in the same period of time, our knowledge of the ’baryon physics’ relating the dark and luminous matter components has progressed much more slowly, mainly due to the highly non-linear processes that complicate any ab initio approach to this complex problem. The evolution of the most massive galaxies constitutes one of the best constraints one can impose on the modelling of galaxy formation. Within the current paradigm of galaxy growth in a $\Lambda$CDM cosmology, massive galaxies evolve from subsequent mergers of smaller structures. The most massive galaxies are early-type in morphology and are dominated by old stellar populations, with a tight mass-metallicity relation and abundance ratios suggesting a quick build-up of the stellar component [see e.g. @ren06]. On the other hand, semi-analytic models of galaxy formation predict a more extended assembly history (if not star formation) from major mergers.
By carefully adjusting these models, it has been possible to generate realizations that are compatible with the observed stellar populations in these galaxies [e.g. @kav06; @deluc06; @bow06]. In this paper we study the redshift evolution of a sample of the most massive early-type galaxies from the catalogue of @egds09, which were visually selected from the [*HST*]{}/ACS images of the GOODS North (HDFN) and South (CDFS) fields [@giav04]. Our data set complements recent work exploring the issue of size and stellar mass evolution [e.g. @Bun05; @McIn05; @fran06; @fon06; @Borch06; @brown07; @Truj07; @vdk08]. The coverage (320 arcmin$^2$), depth ($1\sigma$ surface brightness limit per pixel of $24.7$ AB mag/arcsec$^2$ in the $i$ band) and high resolution (FWHM$\sim 0.12$ arcsec) of these images allow us to perform a consistent analysis of the redshift evolution of the comoving number density, size and intrinsic colour of these galaxies.

![image](f1.eps){width="5in"}

The sample {#sec:sample}
==========

The [*HST*]{}/ACS images of the GOODS North and South fields (v2.0) were used to perform a visual classification of spheroidal galaxies. This is a continuation of @fer05, which was restricted to the CDFS field. However, notice that our sample does [*not*]{} apply the selection based on the Kormendy relation, i.e. the only constraint in this sample is visual classification. The analysis of the complete sample is presented in @egds09. Over the $320$ arcmin$^2$ field of view of the North and South GOODS/ACS fields, the total sample comprises $910$ galaxies down to $i_{\rm AB}=24$ mag (of which 533/377 are in HDFN/CDFS). The available photometric data – both space and ground-based – were combined with spectroscopic or photometric redshifts in order to determine the stellar mass content. Spectroscopic redshifts are available for 66% of the galaxies used in this paper. The photometric redshifts have an estimated accuracy of $\Delta (z)/(1+z)\sim 0.002\pm 0.09$ [@egds09].
Stellar masses are obtained by convolving the synthetic populations of @bc03 with a grid of exponentially decaying star formation histories [see appendix B of @egds09 for details]. A @chab03 Initial Mass Function is assumed. Even though the intrinsic properties of a stellar population (i.e. its age and metallicity distribution) cannot be accurately constrained with broadband photometry, the stellar mass content can be reliably determined to within $0.2-0.3$ dex, provided the adopted IMF gives an accurate representation of the true initial mass function [see e.g. @fsb08]. The sizes are computed using a non-parametric approach that measures the total flux within an ellipse with semimajor axis $a_{\rm TOT}<1.5a_{\rm Petro}$. The eccentricity of the ellipse is computed from the second-order moments of the surface brightness distribution. The half-light radius is defined as R$_{50}\equiv\sqrt{a_{50}\times b_{50}}$, where $a_{50}$ and $b_{50}$ are respectively the semimajor and semiminor axes of the ellipse that encloses 50% of the total flux. Those values need to be corrected for the loss of flux caused by the use of an aperture [see e.g. @gra05]. We used a synthetic catalogue of galaxies with Sersic profiles and the same noise and sampling properties as the original GOODS/ACS images to build fitting functions for the corrections in flux and size. The corrections depend mostly on R$_{50}$ and, to second order, on the Sersic index. Most of this correction is related to the ratio between the size of the object and the size of the Point Spread Function of the observations. The dependence on Sersic index (or, in general, surface brightness slope) is milder, and for this correction the concentration [as defined in @ber00] was used as a proxy. We compared our photometry with the GOODS-MUSIC data [@graz06] in the CDFS.
Our sample has 351 galaxies in common with that catalogue, and the difference between our total+corrected $i$-band magnitudes and the total magnitudes from GOODS-MUSIC is $\Delta i\equiv i_{\rm ours}- i_{\rm MUSIC}=-0.17\pm 0.16$ mag. This discrepancy is mostly due to our corrections of the total flux. A bootstrap method using synthetic images shows that our corrections are accurate with respect to the true total flux to within 0.05 mag, and to within 9% in half-light radius [see appendix A of @egds09]. Our estimates of size were also compared with the GALFIT-based parametric approach of @gems on the GEMS survey. Out of 133 galaxies in common, the median of the difference defined as $($R$_{50}^{\rm ours}-$R$_{50}^{\rm GEMS})/$R$_{50}^{\rm ours}$ is $-0.01 \pm 0.16$ (the error bar is defined as the semi-interquartile range).

![image](f2.eps){width="5in"}

We focus here on a volume-limited sample comprising early-type galaxies with stellar mass $M_s\gtrsim 3\times 10^{10}\,{\rm M}_\odot$. This sample is binned according to fixed steps in comoving volume (a standard $\Lambda$CDM cosmology with $\Omega_m=0.3$ and $h=0.7$ is used throughout). The complete sample of 910 galaxies from @egds09 is shown in figure \[fig:sample\]. Solid (open) circles represent early-type galaxies whose colours are compatible with an older (younger) stellar population. This simple age criterion is based on a comparison of the observed optical and NIR colours with the predictions from a set of templates with exponentially decaying star formation histories, all beginning at redshift z$_{\rm F}=3$, with solar metallicity. The “old” population is compatible with formation timescales $\tau\lesssim 1$ Gyr [see @egds09 for details]. The black dots in the figure correspond to the sample of $457$ galaxies used in this paper.
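The non-parametric size measurement described above can be sketched as follows (a simplified Python illustration of our own: the elliptical aperture is reduced to a fixed axis ratio with the position angle ignored, and the flux and size corrections discussed in the text are not applied; the function name is ours):

```python
import numpy as np

def half_light_radius(image, x0, y0, a_tot, axis_ratio=1.0):
    """Non-parametric half-light radius: grow elliptical apertures until
    half of the flux enclosed by the a_tot aperture is reached.
    R50 = sqrt(a50 * b50), with b50 = a50 * axis_ratio here."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    # elliptical "radius" of every pixel (position angle ignored)
    r = np.sqrt((xx - x0) ** 2 + ((yy - y0) / axis_ratio) ** 2)
    total = image[r <= a_tot].sum()
    for a in np.linspace(0.5, a_tot, 200):
        if image[r <= a].sum() >= 0.5 * total:
            return a * np.sqrt(axis_ratio)
    return a_tot * np.sqrt(axis_ratio)
```

For a circular Gaussian profile of width $\sigma$ this recovers the analytic half-light radius $\simeq 1.18\,\sigma$, up to the coarseness of the aperture grid.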
![image](f3.eps){width="5in"}

We further subdivide this sample into three mass bins, starting at $\log ({\rm M}_s/{\rm M}_\odot)=10.5$ with a width $\Delta\log ({\rm M}_s/{\rm M}_\odot)=0.5$ dex. For comparison, the characteristic stellar mass from the mass function of the GOODS-MUSIC sample is shown as a dashed line [@fon06], although we warn that the GOODS-MUSIC masses are calculated using a @salp55 IMF, which will give a systematic 0.25 dex overestimate in $\log$M$_s$ with respect to our choice of IMF. Our sample is safely away from the limit imposed by the cut in apparent magnitude ($i_{AB}\leq 24$). The curved solid lines give that limit for two extreme star formation histories, corresponding to the “old” and “young” populations as defined above. Notice that within our sample of massive early-type galaxies there are [*no*]{} galaxies whose colours are compatible with young stellar populations (i.e. open circles).

The evolution of massive galaxies {#sec:evol}
=================================

The redshift evolution of the comoving number density is shown in figure \[fig:logn\] (black dots). The ($1\sigma$) error bars include both Poisson noise and the effect of a 0.3 dex uncertainty in the stellar mass estimates. These uncertainties are computed using a Monte Carlo run of 10,000 realizations. The figure includes data from GOODS-MUSIC [@fon06], COMBO17 [@Bell04] and Pal/DEEP2 [@con07]. At z=0 we show an estimate from the segregated 2dFGRS luminosity function [@crot05]. We take their Schechter fits for early-type galaxies within an environment with a mean density defined by a contrast – measured inside a radius of $8h^{-1}$ Mpc – in the range $\delta_8=-0.43\cdots+0.32$ (black open circles). In order to illustrate possible systematic effects in 2dFGRS, we also include the result for their full volume sample as grey open circles. The 2dFGRS data are originally given as luminosity functions in the rest-frame $b_J$.
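The Monte Carlo error bars on the number densities can be sketched as follows (an illustrative Python version of our own, assuming a log-normal 0.3 dex scatter on each stellar mass and a fixed comoving volume; the function name and binning are ours):

```python
import numpy as np

def density_with_errors(log_masses, volume, bin_edges, sigma_dex=0.3,
                        n_mc=10_000, seed=0):
    """Comoving number density per mass bin, with 1-sigma errors that
    combine Poisson noise and a sigma_dex uncertainty on each mass."""
    rng = np.random.default_rng(seed)
    log_masses = np.asarray(log_masses, dtype=float)
    densities = np.empty((n_mc, len(bin_edges) - 1))
    for i in range(n_mc):
        # perturb each stellar mass by the assumed scatter (in dex)
        perturbed = log_masses + rng.normal(0.0, sigma_dex, log_masses.size)
        counts, _ = np.histogram(perturbed, bins=bin_edges)
        # Poisson-resample the bin counts
        densities[i] = rng.poisson(counts) / volume
    return densities.mean(axis=0), densities.std(axis=0)
```

The spread of the realizations gives the quoted $1\sigma$ error bars; the mass scatter lets objects migrate between adjacent bins, which matters most near a steep mass function.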
We took a range of stellar populations typical of early-type galaxies in order to translate those luminosities into stellar masses. The error bars shown for the 2dFGRS data represent the uncertainty caused by this translation from light into mass over a wide range of stellar populations (with typical M/L($b_J$) in the range $7\cdots 12\,{\rm M}_\odot/{\rm L}_\odot$). The black solid lines show semi-analytic model (SAM) predictions from @ks06a. Their SAM follows the merging history of dark matter halos generated by the Extended Press-Schechter formalism down to a mass resolution of $M_{\rm min}=5 \times 10^9\,{\rm M}_\odot$, and follows the baryonic physics within these halos using recipes laid out in @kb05 [and references therein]. The grey dashed lines are the predictions from the Millennium simulation [@deluc06]. This model is extracted from their web-based database[^2], and is not segregated with respect to galaxy morphology. This explains the excess number density in the low-mass bin (bottom panel). In the two higher mass bins most of the galaxies have an early-type morphology. The predictions of the Millennium simulation are in agreement with the middle bin – i.e. masses between $10^{11}$ and $3\cdot 10^{11}$M$_\odot$. However, for the most massive bin, the sharp decrease in density with redshift of the models is in remarkable disagreement with the observations. In contrast, @ks06a predict a nearly constant density in the highest mass bin out to z$<$1. The main reason for this discrepancy is that AGN feedback in the Millennium simulation prohibits the growth of massive galaxies by gas cooling and subsequent star formation, in order to reproduce the correct colour bimodality and the luminosity function at z=0. As shown in @ks08, the existence of a characteristic mass scale for the shut-off of star formation will lead to dry merging being the main mechanism for the growth of massive galaxies.
In that respect, the evolution of the number density of massive galaxies in the Millennium simulation is mainly driven by mergers. The difference between that model and @ks06a is probably due to the different merger rates in their models. The Millennium simulation predicts a major merger rate lower than that of @ks06a by almost a factor of 10 (Hopkins et al., in preparation).

![image](f4.eps){width="5in"}

Figure \[fig:Re\] shows the redshift evolution of the half-light radius. Our methodology follows a non-parametric approach, avoiding the degeneracies intrinsic to profile fitting. Nevertheless, we compared our size estimates with those from a parametric approach like GALFIT [@gems] and found good agreement (see §\[sec:sample\]). Our data (black dots) are compared with @Truj07 [grey triangles] and with a z$\sim$0 measurement from the SDSS [@Shen03 taking their early-type sample]. The error bars give the RMS scatter of the size distribution within each mass and redshift bin. The lines correspond to the models of @ks06b. These models associate size evolution with the amount of dissipation encountered during major mergers along the merging history of an early-type galaxy. The points at high redshift (z$>$1.2) correspond to [*individual*]{} measurements from the literature (see caption for details). In all the comparisons shown in this paper with work from the literature, we have checked that the initial mass functions used are similar, so that stellar masses are compared consistently. All results quoted either use a @chab03 IMF or functions very close to it in terms of the total mass expected per luminosity unit, which – for early-type systems – mainly reduces to the shape of the low-mass end of the IMF. Other functions used in the quoted data were @kro93 [@kro01] or @bg03.
The @salp55 IMF was used only for the GOODS-MUSIC data [@fon06]; it systematically overestimates stellar masses by $\sim 0.25$ dex with respect to the previous choices, given its (unphysical) extrapolation of the same power law down to the low stellar mass cutoff [see e.g. @bc03]. A single-law Salpeter IMF is an unlikely choice for the stellar populations in early-type galaxies, as shown by comparisons of photometry with kinematics [@cap06] or with gravitational lensing [@fsb08]. As for the density evolution, we apply a simple power-law fit to our data points only: R$_e\propto (1+z)^\beta$. The solid lines give those best fits, and the power-law index is given in each panel. Taking into account all data points between z$=$0 and z$\sim$2.5, one sees a clear trend of decreasing size with redshift for all three mass bins. However, our data suggest milder size evolution for the most massive early-type galaxies between z$=$1.2 and z$=$0.4, corresponding to a 4 Gyr interval of cosmic time. The depth and high spatial resolution of the ACS images also allow us to probe in detail the [*intrinsic*]{} colour distribution of the galaxies (i.e. the colour distribution within each galaxy). We follow the approach described in @fer05 which, in a nutshell, registers the images in the two bands considered for a given colour, degrades each by the Point Spread Function of the other passband, and performs an optimal Voronoi tessellation in order to achieve a S/N per bin of around $10$ while preserving spatial resolution. The final binned data are used to fit a linear relation between colour and $\log (R/R_e)$, from which we determine the slope and the scatter about the best fit (using a biweight estimator). Figure \[fig:clr\] shows the observer-frame V$-$i colour gradient ([*bottom*]{}) and scatter ([*top*]{}) as a function of stellar mass ([*left*]{}) and half-light radius ([*right*]{}).
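The power-law fit R$_e\propto (1+z)^\beta$ reduces to ordinary least squares in log space; a minimal sketch (the function name and the data values are ours, for illustration only):

```python
import numpy as np

def size_evolution_index(z, r_e):
    """Fit R_e = A * (1+z)**beta by least squares on
    log R_e = log A + beta * log(1+z); returns (A, beta)."""
    x = np.log(1.0 + np.asarray(z, dtype=float))
    y = np.log(np.asarray(r_e, dtype=float))
    beta, log_a = np.polyfit(x, y, 1)  # slope first, then intercept
    return np.exp(log_a), beta

# e.g. sizes shrinking as (1+z)**-1 yield beta close to -1
```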
The black dots correspond to binned data in stellar mass, showing the average and RMS value within each bin. Notice the significant trend with increasing stellar mass towards redder cores (i.e. more negative colour gradients) and smaller scatter. The colour gradient is in most cases nearly flat, and only for the lowest mass bin do we find significantly large gradients. For comparison, we also show as small grey dots a continuation of the original sample from @egds09 towards lower stellar masses. Blue cores (positive colour gradients) dominate in spheroidal galaxies below $10^{10}\,{\rm M}_\odot$. The homogeneous intrinsic colour distribution thus suggests no significant star formation, and a fast rearrangement of the stellar populations if mergers take place during the observed redshift range. Notice that this sample only targets objects visually classified as early-type galaxies. The early phases of major merging are therefore excluded from our sample. Nevertheless, the number density at the massive end (upper panel of figure \[fig:logn\]) does not change significantly between z=0 and z$\sim$1, already suggesting that major merger events must be rare over those redshifts.

Discussion and Conclusions {#sec:discussion}
==========================

Using a volume-limited sample of massive spheroidal galaxies from the v2.0 ACS/HST images of the GOODS North and South fields, we have consistently estimated the number density, size and intrinsic colour distribution over the redshift range 0.4$<$z$<$1.2. In combination with other samples, we find a significant difference in the redshift evolution according to stellar mass, in agreement with recent work based on other samples or different selection criteria [see e.g. @Bun05; @McIn05; @fran06; @fon06; @Borch06; @brown07; @Truj07; @vdk08]. The most massive galaxies – which impose the most stringent constraints on models of galaxy formation – keep a constant comoving number density between z$\sim$1 and 0 (i.e.
over half of the current age of the Universe) but present a significant size evolution, roughly a factor of 2 increase between z=1 and 0. Note, however, that within our sample there is no significant size evolution over the redshift range z=0.4$\cdots$1.2. It is by extending the analysis to higher redshifts that the size evolution shows up in the most massive bin [e.g. @vdk08; @bui08]. When velocity dispersion is added to the analysis, a significant difference is found in the $\sigma$-R$_e$ distribution between z=0 and z=1, suggesting an important change in the dynamics of these galaxies [@vdwel08]. Some of the semianalytic models of massive galaxy evolution [@ks06a; @ks06b] are in good agreement with these observations. These models follow the standard paradigm of early-type galaxy growth through major mergers, with the ansatz that size evolution is related to the amount of dissipation during major mergers. The weak evolution in the comoving number density at high masses is explained within the models by a balance between the ‘sink’ (loss due to mergers of massive galaxies generating more massive galaxies) and ‘source’ terms (gain from mergers at lower mass) over the redshifts considered. One could argue that the sink terms would generate a population of extremely massive galaxies (above a few $10^{12}\,{\rm M}_\odot$), possibly the central galaxies within massive groups or clusters. However, this population – with predicted comoving number densities below $10^{-6}$ Mpc$^{-3}$ – is very hard to study with current surveys. Furthermore, environmental effects in these systems will complicate the analysis of size evolution [e.g. @ko08]. It is important to note that the lack of evolution in the number density relates to the bright end of the luminosity function. @fab07 found a significant change in the number density of [*red*]{} galaxies with redshift. However, they also emphasize that this change does not refer to the most luminous galaxies.
If we include all mass bins in our sample, we do find a significant decrease in the number density with redshift, as the lower mass bins – which contribute the most in numbers – do have a rather steep decrease in density (see figure \[fig:logn\]). This difference suggests that the (various) mechanisms playing a role in the transition from blue cloud to red sequence must be strongly dependent on the stellar mass of the galaxies involved. In a more speculative fashion, our data are also suggestive of weak or even [*no evolution*]{} in the number density of the most massive early-type galaxies over a redshift range 0.4$<$z$<$1.2. This would imply a negligible role of major mergers at the most massive end for z$>$0.4, thereby pushing this stage of galaxy formation towards lower redshifts [@ks08]. Another speculative scenario for the evolution of massive spheroidal galaxies would involve negligible major mergers at these redshifts and a significant amount of minor mergers which will ’puff up’ the galaxy. Minor mergers are considered the cause of recent star formation observed in NUV studies of early-type galaxies [@kav07]. Larger surveys of Luminous Red Galaxies are needed to confirm or disprove this important issue. Baldry, I. K. & Glazebrook, K., 2003, ApJ, 593, 258 Bell, E. F. [[et al. ]{}]{} 2004, ApJ, 608, 752 Bershady, M. A., Jangren, A. & Conselice, C. J., 2000, AJ, 119, 2645 Borch, A., [[et al. ]{}]{}, 2006, A& A, 453, 869 Bower, R. G, Benson, A. J., Malbon, R., Helly, J. C., Frenk, C. S., Baugh, C. M., Cole, S. & Lacey, C. G. 2006, MNRAS, 370, 645 Brown, M. J. I., Dey, A., Jannuzi, B. T., Brand, K., Benson, A. J., Brodwin, M., Croton, D. J. & Eisenhardt, P. R., 2007, ApJ, 654, 858 Bruzual, G. & Charlot, S., 2003, MNRAS, 344, 1000 Buitrago, F., Trujillo, I., Conselice, C. J., Bouwens, R. J., Dickinson, M. & Yan, H., 2008, arXiv:0807.4141 Bundy, K., Ellis, R. S. & Conselice, C. J., 2005, ApJ, 625, 621 Cappellari, M., [[et al. 
]{}]{}, 2006, MNRAS, 366, 1126 Chabrier, G., 2003, PASP, 115, 763 Cimatti, A., [[et al. ]{}]{}, 2008, A& A, 482, 21 Conselice, C. J., [[et al. ]{}]{}, 2007, MNRAS, 381, 962 Croton, D. J., [[et al. ]{}]{}2005, MNRAS, 356, 1155 Damjanov, I., [[et al. ]{}]{}2008, arXiv:0807:1744 De Lucia, G., Springel, V., White, S. D. M., Croton, D. & Kauffmann, G.2006, MNRAS, 366, 499 Faber, S. M., [[et al. ]{}]{}, 2007, ApJ, 665, 265 Ferreras, I., Lisker, T., Carollo, C. M., Lilly, S. J. & Mobasher, B. 2005, ApJ, 635, 243 Ferreras, I., Saha, P. & Burles, S. 2008, MNRAS, 383, 857 Ferreras, I., Lisker, T., Pasquali, A. & Kaviraj, S. 2009, MNRAS submitted, arXiv:0901.2123 Fontana, A., [[et al. ]{}]{} 2006, A& A, 459, 745 Franceschini, A., [[et al. ]{}]{}, 2006, A& A, 453, 397 Giavalisco, M., [[et al. ]{}]{}, 2004, ApJ, 600, L93 Graham, A. W., Driver, S. P., Petrosian, V., Conselice, C. J., Bershady, M. A., Crawford, S. M. & Goto, T. 2005, AJ, 130, 1535 Grazian, A. [[et al. ]{}]{}, 2006, A& A, 449, 951. Häussler, B. [[et al. ]{}]{}, 2007, ApJS, 172, 615 Kaviraj, S., Devriendt, J. E. G., Ferreras, I., Yi, S. K. & Silk, J. 2006, arXiv:astro-ph/0602347 Kaviraj, S., Peirani, S., Khochfar, S., Silk, J. & Kay, S., 2007, MNRAS, in press, arXiv:0711.1493 Khochfar, S. & Burkert, A. 2005, MNRAS, 359, 1379 Khochfar, S. & Ostriker, J. P. 2008, ApJ, 680, 54 Khochfar, S. & Silk, J. 2006a, MNRAS, 370, 902 Khochfar, S. & Silk, J. 2006b, ApJ, 648, L21 Khochfar, S. & Silk, J. 2008, arXiv:0809.1734 Kroupa, P., Tout, C. A. & Gilmore, G., 1993, MNRAS, 262, 545 Kroupa, P., 2001, MNRAS, 322, 231 McIntosh, D., [[et al. ]{}]{}, 2005, ApJ, 632, 191 Renzini, A. 2006, ARA& A, 44, 141 Salpeter, E. E., 1955, ApJ, 121, 161 Shen, S., Mo, H. J., White, S. D. M., Blanton, M. R., Kauffmann, G., Voges, W., Brinkmann, J. & Csabai, I. 2003, MNRAS, 343, 978 Toft, S., 2007, ApJ, 671, 285 Trujillo, I., Conselice, C. J., Bundy, K., Cooper, M. C., Eisenhardt, P., & Ellis, R. S. 2007, MNRAS, 382, 109 van der Wel, A., Holden, B. 
P., Zirm, A. W., Franx, M., Rettura, A., Illingworth, G. D. & Ford, H. C., 2008, arXiv:0808.0077 van Dokkum, P. G., [[et al. ]{}]{}2008, ApJ, 677, L5 Zirm, A. W., [[et al. ]{}]{}2007, ApJ, 656, 66 [^1]: E-mail: ferreras@star.ucl.ac.uk [^2]: http://www.mpa-garching.mpg.de/millennium
---
abstract: 'Natural selection and random drift are competing phenomena for explaining the evolution of populations. Combining a highly fit mutant with a population structure that improves the odds that the mutant spreads through the whole population tips the balance in favor of natural selection. The probability that the spread occurs, known as the fixation probability, depends heavily on how the population is structured. Certain topologies that favor fixation have been shown to exist, albeit highly contrived ones. We introduce a randomized mechanism for network growth that is loosely inspired by some of these topologies’ key properties and demonstrate, through simulations, that it is capable of giving rise to structured populations for which the fixation probability significantly surpasses that of an unstructured population. This discovery provides important support to the notion that natural selection can be enhanced over random drift in naturally occurring population structures.'
author:
- 'Valmir C. Barbosa'
- Raul Donangelo
- 'Sergio R. Souza'
bibliography:
- 'fixprob.bib'
title: 'Network growth for enhanced natural selection'
---

Networks of agents that interact with one another underlie several important phenomena, including the spread of epidemics through populations [@bbpv04], the emergence of cooperation in biological and social systems [@sp05; @ohln06; @tdw07], the dynamics of evolution [@m58; @lhn05], and various others [@gh05; @sf07]. Typically, the dynamics of such interactions involves the propagation of information through the network as the agents contend to spread their influence and alter the states of other agents. In this letter, we focus on the dynamics of evolving populations, particularly on how network structure relates to the ability of a mutation to take over the entire network by spreading from its node of origin.
In evolutionary dynamics, the probability that a mutation occurring at one of a population’s individuals eventually spreads through the entire population is known as the mutation’s fixation probability, $\rho$. In an otherwise homogeneous population, the value of $\rho$ depends on the ratio $r$ of the mutant’s fitness to that of the other individuals, and it is the interplay between $\rho$ and $r$ that determines the effectiveness of natural selection on the evolution of the population, given its size. In essence, highly correlated $\rho$ and $r$ lead to a prominent role of natural selection in driving evolution; random drift takes primacy, otherwise [@n06]. Let $P$ be a population of $n$ individuals and, for individual $i$, let $P_i$ be any nonempty subset of $P$ that excludes $i$. We consider the evolution of $P$ according to a sequence of steps, each of which first selects $i\in P$ randomly in proportion to $i$’s fitness, then selects $j\in P_i$ randomly in proportion to some weighting function on $P_i$, and finally replaces $j$ by an offspring of $i$ having the same fitness as $i$. When $P$ is a homogeneous population of fitness $1$ (except for a randomly chosen mutant, whose fitness is initially set to $r\neq 1$), $P_i=P\setminus\{i\}$ [^1], and moreover the weighting function on every $P_i$ is a constant (thus choosing $j\in P_i$ occurs uniformly at random), this sequence of steps is known as the Moran process [@m58]. In this setting, evolution can be modeled by a simple discrete-time Markov chain, of states $0,1,\ldots,n$, in which state $s$ indicates the existence of $s$ individuals of fitness $r$, the others $n-s$ having fitness $1$. In this chain, states $0$ and $n$ are absorbing and all others are transient. If $s$ is a transient state, then it is possible either to move from $s$ to $s+1$ or $s-1$, with probabilities $p$ and $q$, respectively, such that $p/q=r$, or to remain at state $s$ with probability $1-p-q$. 
When $r>1$ (an advantageous mutation), the evolution of the system has a forward bias; when $r<1$ (a disadvantageous mutation), there is a backward bias. And given that the initial state is $1$, the probability that the system eventually reaches state $n$ is precisely the fixation probability, in this case denoted by $\rho_1$ and given by $$\rho_1=\frac{1-1/r}{1-1/r^n}$$ (cf. [@n06]). The probability that the mutation eventually becomes extinct (i.e., that the system eventually reaches state $0$) is $1-\rho_1$. Because $\rho_1<1$, extinction is a possibility even for advantageous mutations. Similarly, it is possible for disadvantageous mutations to spread through the entirety of $P$. In order to consider more complex possibilities for $P_i$, we introduce the directed graph $D$ of node set $P$ and edge set containing every ordered pair $(i,j)$ such that $j\in P_i$. The case of a completely connected $D$ (in which every node connects out to every other node) corresponds to the Moran process. But in the general case, even though it continues to make sense to set up a discrete-time Markov chain with $0$ and $n$ the only absorbing states, analysis becomes infeasible nearly always and $\rho$ must be calculated by computer simulation of the evolutionary steps. The founding work on this graph-theoretic perspective for the study of $\rho$ is [@lhn05], where it is shown that we continue to have $\rho=\rho_1$ for a much wider class of graphs. Specifically, the necessary and sufficient condition for $\rho=\rho_1$ to hold is that the weighting function be such that, for all nodes, the probabilities that result from the incoming weights sum up to $1$ (note that this already holds for the outgoing probabilities, thus characterizing a doubly stochastic process for out-neighbor selection). 
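The chain above can be checked by direct simulation of the Moran process (a sketch of our own; function names are ours, and since the estimate is statistical only loose agreement with $\rho_1$ should be expected for a finite number of trials):

```python
import random

def rho_1(r, n):
    """Closed-form fixation probability for the Moran process."""
    return (1.0 - 1.0 / r) / (1.0 - 1.0 / r ** n)

def simulate_moran(r, n, trials=20_000, seed=1):
    """Estimate the fixation probability of a single fitness-r mutant in
    a homogeneous population of size n (complete graph), tracking only
    the mutant count s."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        s = 1  # one initial mutant
        while 0 < s < n:
            # reproducing individual i chosen proportionally to fitness
            if rng.random() < (s * r) / (s * r + (n - s)):
                # mutant reproduces; offspring replaces a uniformly
                # random j != i, so s grows if j is a resident
                if rng.random() < (n - s) / (n - 1):
                    s += 1
            else:
                # resident reproduces; s shrinks if j is a mutant
                if rng.random() < s / (n - 1):
                    s -= 1
        if s == n:
            fixed += 1
    return fixed / trials
```

For $r=2$ and $n=10$, $\rho_1=512/1023\approx 0.50$, and the simulated fixation fraction converges to this value as the number of trials grows.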
In particular, if the weighting function is a constant for all nodes and a node’s in-degree (number of in-neighbors) and out-degree (the cardinality of $P_i$ for node $i$, its number of out-neighbors) are equal to each other and the same for all nodes, as in the Moran case, then $\rho=\rho_1$. Other interesting structures, such as scale-free graphs [@ba99], are also handled in [@lhn05], but the following two observations are especially important to the present study. The first one is that, if $D$ is not strongly connected (i.e., not all nodes are reachable from all others through directed paths), then $\rho>0$ if and only if all nodes are reachable from exactly one of $D$’s strongly connected components. Furthermore, when this is the case random drift may be a more important player than natural selection, since fixation depends crucially on whether the mutation arises in that one strongly connected component. If $D$ is strongly connected, then $\rho>0$ necessarily. The second important observation is that there do exist structures that suppress random drift in favor of natural selection. One of them is the $D$ that in [@lhn05] is called a $K$-funnel for $K\ge 2$ an integer. If $n$ is sufficiently large, the value of $\rho$ for the $K$-funnel, denoted by $\rho_K$, is $$\rho_K=\frac{1-1/r^K}{1-1/r^{Kn}}.$$ Thus, the $K$-funnel can be regarded as functionally equivalent to the Moran graph with $r^K$ substituting for the fitness $r$. Therefore, the fixation probability can be arbitrarily amplified by choosing $K$ appropriately, provided $r>1$. Noteworthy additions to the study of [@lhn05] can be found in [@ars06; @sar08]. In these works, analytical characterizations are obtained for the fixation probability on undirected scale-free graphs, both under the dynamics we have described (in which $j$ inherits $i$’s fitness) and the converse dynamics (in which it is $i$ that inherits $j$’s fitness). 
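The amplification implied by the formula for $\rho_K$ can be tabulated directly (a small sketch; the function name is ours):

```python
def rho_k(r, n, k):
    """Asymptotic fixation probability of a large K-funnel on n nodes:
    the Moran formula with r replaced by r**k (k = 1 recovers rho_1)."""
    return (1.0 - 1.0 / r ** k) / (1.0 - 1.0 / r ** (k * n))
```

For an advantageous mutation the probability increases monotonically with $k$: with $r=1.1$ and $n=100$, for instance, $\rho_1\approx 0.091$ while $\rho_3\approx 0.249$.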
The main finding is that the fixation probability is, respectively for each dynamics, inversely or directly proportional to the degree of the node where the advantageous mutation appears. In this letter, we depart from all previous studies of the fixation probability by considering the question of whether a mechanism exists for $D$ to be grown from some simple initial structure in such a way that, upon reaching a sufficiently large size, a value of $\rho$ can be attained that substantially surpasses the Moran value $\rho_1$ for an advantageous mutation. Such a $D$ might lack the sharp amplifying behavior of structures like the $K$-funnel, but being less artificial it might also relate more closely to naturally occurring processes. We respond affirmatively to the question, inspired by the observation discussed above on the strong connectedness of $D$, and using the $K$-funnel as a sieving mechanism to help in looking for promising structures. It should be noted, however, that since other amplifiers exist with capabilities similar to those of the $K$-funnel (e.g., the $K$-superstar [@lhn05]), alternatives to the strategy we introduce that are based on them may also be possible. In a $K$-funnel, nodes are organized into $K$ layers, of which layer $k$ contains $b^k$ nodes for some fixed integer $b\ge 2$ and $k=0,1,\ldots,K-1$. It follows that the $K$-funnel has $(b^K-1)/(b-1)$ nodes. A node in layer $k$ connects out to all nodes in layer $k-1$ (modulo $K$, so that an edge exists directed from the single node in layer $0$ to each of the $b^{K-1}$ nodes in layer $K-1$). A $K$-funnel is then, by construction, strongly connected. For a given value of $n$, our strategy for growing $D$ is to make it a layered graph like the $K$-funnel, but letting it generalize on the $K$-funnel by allowing each layer to have any size (number of nodes), provided no layer remains empty.
Graph $D$ is the graph with $n$ nodes in the sequence $D_0, D_1,\ldots$ of directed graphs described next. Graph $D_0$ has $K$ layers, numbered $0$ through $K-1$, each containing one node. The node in layer $k$ connects out to the node in layer $k-1$ (modulo $K$). For $t\ge 0$ an integer, $D_{t+1}$ is obtained from $D_t$ by adding one new node, call it $i$, to a randomly chosen layer, say layer $k$, according to a criterion to be discussed shortly. Node $i$ is then connected out to all nodes in layer $k-1$ (modulo $K$) and all nodes in layer $k+1$ (modulo $K$) are connected out to node $i$. Graph $D_t$ is then strongly connected for all $t$. We note that there are as many possibilities for the resulting $D$ as for partitioning $n$ indistinguishable objects into $K$ nonempty, distinguishable sets arranged circularly, provided we discount for equivalences under rotations of the sets. A lower bound on this number of possibilities is ${n\choose K}/n$, which for $K\ll n$ is roughly $n^{K-1}/K!$. Before we describe the rule we use to decide which layer is to receive the new node, $i$, it is important to realize that the double stochasticity mentioned earlier implies that $\rho=\rho_1$ for $D_0$. However, this ceases to hold already for $D_1$ and may not hold again as the graph is expanded. So, whatever the rule is, we are aiming at higher $\rho$ values by giving up on the doubly stochastic character of the process whereby fitness propagates through the graph. For $t\ge 0$ and $k$ any layer of $D_t$, if we consider the layers in the upstream direction from $k$, we call $k^+$ the first layer we find whose successor has at most as many nodes as itself. In particular, if the successor of layer $k$ does not have more nodes than $k$, then $k^+=k$. Now let $d(k^+,k)$ be the distance from layer $k^+$ to layer $k$ in $D_t$ (i.e., the number of edges on a shortest directed path from any node in $k^+$ to any node in $k$).
Layer $k$ is selected to receive node $i$ to yield $D_{t+1}$ with probability $$p_k\propto[K-d(k^+,k)]^a$$ for some $a\ge 1$. This criterion is loosely suggested by the topology of the $K$-funnel. It seeks to privilege first the growth of each layer $\ell$ such that $k^+=\ell$ for some $k$, then the growth of the layer $k$ that is immediately downstream from $\ell$, provided $k^+=\ell$, and so on through the other downstream layers. In our simulations we use $n\le 1\,000$ nearly exclusively and grow a large number of $D$ samples. The calculation of $\rho$ for a given $D$ involves performing several independent simulations (we use $10\,000$ in all cases), each one starting with the fitness-$r$ mutant substituting for any of the $n$ nodes and proceeding as explained earlier until the mutation has either spread through all of $D$’s nodes or died out (we use constant weighting throughout). The fraction of simulations ending in fixation is taken as the value of $\rho$ for that particular $D$. This calculation can be very time-consuming, so we have adopted a mechanism to decide whether to proceed with the calculation for a given $D$ or to discard it. Our mechanism is based on establishing a correlation threshold beyond which $D$ is declared sufficiently similar to the $K$-funnel to merit further investigation. The measure of correlation that we use is the Pearson correlation coefficient between two sequences of the same size, which lies in the interval $[-1,1]$ and indicates how closely the two sequences are to being linearly correlated (a coefficient of $1$ means a direct linear dependence). For sequences $X$ and $Y$, the coefficient, denoted by $C(X,Y)$, is given by $C(X,Y)=\mathrm{cov}(X,Y)/\sigma_X\sigma_Y$, where $\mathrm{cov}(X,Y)$ is the covariance of $X$ and $Y$, $\sigma_X$ and $\sigma_Y$ their respective standard deviations. In our case, $X$ and $Y$ are length-$K$ sequences. 
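The growth rule and the funnel-likeness statistics can be sketched together as follows (Python; a sketch of our own that tracks only the layer sizes, since the edge structure is determined by them; the layer renumbering places the largest layer at position $K-1$, with labels decreasing downstream; all names are ours):

```python
import math
import random

def grow_layers(n, k_layers, a, seed=0):
    """Grow the layered graph to n nodes: layer k receives the new node
    with probability proportional to (K - d(k+, k))**a, where k+ is the
    first layer, moving upstream from k, whose downstream successor is
    no larger than itself."""
    rng = random.Random(seed)
    sizes = [1] * k_layers
    while sum(sizes) < n:
        weights = []
        for k in range(k_layers):
            j, d = k, 0
            # successor of layer j is layer j-1 (mod K); move upstream
            # (j -> j+1) while the successor is strictly larger
            while sizes[(j - 1) % k_layers] > sizes[j]:
                j = (j + 1) % k_layers
                d += 1
            weights.append((k_layers - d) ** a)
        k = rng.choices(range(k_layers), weights=weights)[0]
        sizes[k] += 1
    return sizes

def funnel_stats(sizes):
    """Pearson correlation C(X, Y) and slope S(X, Y) for X_k = k and
    Y_k = ln(n_k), after renumbering so the largest layer is K-1."""
    k_count = len(sizes)
    m = sizes.index(max(sizes))
    y = [math.log(sizes[(m + 1 + i) % k_count]) for i in range(k_count)]
    x = list(range(k_count))
    mx, my = sum(x) / k_count, sum(y) / k_count
    cov = sum((u - mx) * (v - my) for u, v in zip(x, y)) / k_count
    var_x = sum((u - mx) ** 2 for u in x) / k_count
    var_y = sum((v - my) ** 2 for v in y) / k_count
    corr = cov / math.sqrt(var_x * var_y) if var_y > 0 else 0.0
    slope = cov / var_x
    return corr, slope
```

For an exact $K$-funnel with layer sizes $b^k$, `funnel_stats` returns $C(X,Y)=1$ and $S(X,Y)=\ln b$, so graphs passing the correlation threshold with a steep slope are close to a funnel with a large effective base.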
If we renumber the layers of $D$ so that the layer with the greatest number of nodes becomes layer $K-1$, the one immediately downstream from it layer $K-2$, and so on through layer $0$, then we let the sequences $X$ and $Y$ be such that $X_k=k$ and $Y_k=\ln n_k$, where $n_k$ is the number of nodes in layer $k$. Notice that, when $D$ is the $K$-funnel itself, then $n_k=b^k$ with $b\ge 2$, whence $Y_k=(\ln b)X_k$ and $C(X,Y)=1$. Every $D$ whose sequences $X$ and $Y$ lead $C(X,Y)$ to surpass the correlation threshold is as close to having $n_k$ given by some exponential of $k$ as the threshold allows. However, the near-linear dependence of the two sequences is not enough, since the base of such an exponential, which we wish to be as large as possible, can in principle be very small (only slightly above $1$), for very gently inclined straight lines. On the other hand, a steeper straight line indicates a faster reduction of layer sizes as we progressively move toward layer $0$ from layer $K-1$ through the other layers. In the analysis that follows, then, we also use the slope of the least-squares linear approximation of $Y$ as a function of $X$, denoted by $S(X,Y)$ and given by $S(X,Y)=\mathrm{cov}(X,Y)/\sigma_X^2$. For $C(X,Y)$ close to $1$, the base of the aforementioned exponential approaches $e^{S(X,Y)}$. Our simulation results are summarized in Fig. \[fig:5layers\], where $K=5$, $n=500,1\,000$, and $r=1.1,2.0$. For each combination and each of four $a$ values ($a=1,2,3,4$), a scatter plot is given representing each of the graphs generated by its fixation probability and the slope $S(X,Y)$ for its two sequences, provided $C(X,Y)>0.9$. We see that, in all cases, strengthening the layer-selection criterion by increasing $a$ has the effect of moving most of the resulting graphs away from the Moran probability ($\rho_1$) and also away from the near-$0$ slope. ![(Color online) Simulation results for $K=5$. 
Each graph $D$ for which $C(X,Y)>0.9$ is represented by its fixation probability and by the slope $S(X,Y)$. For each combination of $n$ and $r$, $500$ graphs are shown, corresponding roughly to $12\%$ of the number of graphs that were grown. Dashed lines mark $\rho_1$ through $\rho_3$ for $r=1.1$, $\rho_1$ for $r=2.0$.[]{data-label="fig:5layers"}](sp_5_all.eps) It is important to notice that, in the absence of the slope indicator for each graph, we would be left with a possibly wide range of fixation probabilities for the same value of $a$, unable to tell the true likeness of the best graphs to the $K$-funnel without examining their structures one by one. In a similar vein, the results shown in Fig. \[fig:5layers\] strongly emphasize the role of our particular rule for selecting layers, as opposed to merely proceeding uniformly at random. To see this, it suffices to realize that uniformly random choices correspond to setting $a=0$ in the expression for $p_k$, in which case we can expect the graphs that pass the correlation threshold to be clustered around $\rho\sim \rho_1$ and $S(X,Y)\sim 0$. We also note a sharp variation in how the fixation probabilities of the graphs relate to the asymptotic fixation probabilities of the $K$-funnel as a mutant's fitness is increased. For $r=1.1$, the graphs exhibiting the highest fixation probabilities, and also the highest slopes, are such that $\rho$ is somewhere between $\rho_2$ and $\rho_3$. For $r=2.0$, though, this happens between $\rho_1$ and $\rho_2$ ($=0.75$, not shown), therefore providing considerably less amplification. Part of the reason may simply be that the more potent amplifiers are harder to generate by our layer-selection mechanism as $r$ is increased. But it is also important to realize that, even for the $K$-funnel, achieving a fixation probability near $\rho_K$ requires progressively larger graphs as $r$ is increased. This is illustrated in Fig. 
\[fig:funnel\] for $K=3$ and the same two values of $r$. ![(Color online) Simulation results for the $3$-funnel. Dashed lines mark the values of $\rho_3$.[]{data-label="fig:funnel"}](funnel.eps) Additional simulation results, for the much larger case of $K=10$ and $n=10\,000$, are presented in Fig. \[fig:10layers\] for $r=1.1$ and $a=1,2,3,4$. Computationally, this case is much more demanding than those of Fig. \[fig:5layers\], owing mainly to the number of distinct networks that can occur, as discussed earlier (in fact, for $K=10$ and $n=10\,000$, this number is at least of the order of $10^{33}$). Consequently, many fewer graphs surpassing the $0.9$ correlation threshold were obtained. Even so, one possible reading is that results similar to those reported in Fig. \[fig:5layers\] can be expected, but this remains to be seen. In summary, we have demonstrated that strongly connected layered networks can be grown for which the fixation probability significantly surpasses that of the Moran process. The growth mechanism we use aggregates one new node at a time and chooses the layer to be enlarged by the addition of the new node as a function of how far layers are from those whose populations are the closest upstream local maxima. A great variety of networks can result from this process, but we have shown that correlating each resulting $K$-layer network with the $K$-funnel appropriately works as an effective filter to pinpoint those of distinguished fixation probability. Further work will concentrate on exploring other growth methods and on targeting the growth of more general structures. ![(Color online) Simulation results for $K=10$, $n=10\,000$, and $r=1.1$. Each graph $D$ having $C(X,Y)>0.9$ is represented by its fixation probability and by the slope $S(X,Y)$. There are $100$ graphs, corresponding roughly to $0.04\%$ of the graphs that were grown. 
Dashed lines mark $\rho_1$ through $\rho_3$.[]{data-label="fig:10layers"}](sp_10.eps) We acknowledge partial support from CNPq, CAPES, FAPERJ BBP grants, and the joint PRONEX initiative of CNPq and FAPERJ under contract 26.171.528.2006. [^1]: $\setminus$ denotes set difference.
--- abstract: 'We present a new concept for a multi-stage Zeeman decelerator that is optimized particularly for applications in molecular beam scattering experiments. The decelerator consists of a series of alternating hexapoles and solenoids, that effectively decouple the transverse focusing and longitudinal deceleration properties of the decelerator. It can be operated in a deceleration and acceleration mode, as well as in a hybrid mode that makes it possible to guide a particle beam through the decelerator at constant speed. The deceleration features phase stability, with a relatively large six-dimensional phase-space acceptance. The separated focusing and deceleration elements result in an unequal partitioning of this acceptance between the longitudinal and transverse directions. This is ideal in scattering experiments, which typically benefit from a large longitudinal acceptance combined with narrow transverse distributions. We demonstrate the successful experimental implementation of this concept using a Zeeman decelerator consisting of an array of 25 hexapoles and 24 solenoids. The performance of the decelerator in acceleration, deceleration and guiding modes is characterized using beams of metastable Helium ($^3S$) atoms. Up to 60% of the kinetic energy was removed for He atoms that have an initial velocity of 520 m/s. The hexapoles consist of permanent magnets, whereas the solenoids are produced from a single hollow copper capillary through which cooling liquid is passed. The solenoid design allows for excellent thermal properties, and enables the use of readily available and cheap electronics components to pulse high currents through the solenoids. The Zeeman decelerator demonstrated here is mechanically easy to build, can be operated with cost-effective electronics, and can run at repetition rates up to 10 Hz.' author: - Theo Cremers - Simon Chefdeville - Niek Janssen - Edwin Sweers - Sven Koot - Peter Claus - 'Sebastiaan Y.T. 
van de Meerakker' title: 'A new concept multi-stage Zeeman decelerator' --- Introduction {#sec:intro} ============ In the last two decades, tremendous progress has been made in manipulating the motion of molecules in a molecular beam. Using methods that are inspired by concepts from charged particle accelerator physics, complete control over the velocity of molecules in a beam can be achieved. In particular, Stark and Zeeman decelerators have been developed to control the motion of molecules that possess an electric and magnetic dipole moment using time-varying electric and magnetic fields, respectively. Since the first experimental demonstration of Stark deceleration in 1998 [@Bethlem:PRL83:1558], several decelerators ranging in size and complexity have been constructed [@Meerakker:CR112:4828; @Narevicius:ChemRev112:4879; @Hogan:PCCP13:18705]. Applications of these controlled molecular beams are found in high-resolution spectroscopy, the trapping of molecules at low temperature, and advanced scattering experiments that exploit the unprecedented state-purity and/or velocity control of the packets of molecules emerging from the decelerator [@Carr:NJP11:055049; @Bell:MolPhys107:99; @Jankunas:ARPC66:241; @Stuhl:ARPC65:501; @Brouard:CSR43:7279; @Krems:ColdMolecules]. Essential in any experiment that uses a Stark or Zeeman decelerator is a high particle density of the decelerated packet. For this, it is imperative that the molecules are decelerated with minimal losses, i.e., molecules within a certain volume in six-dimensional (6D) phase-space should be kept together throughout the deceleration process [@Bethlem:PRL84:5744]. It is a formidable challenge, however, to engineer decelerators that exhibit this so-called phase stability. The problem lies in the intrinsic field geometries that are used to manipulate the beam. 
In a multi-stage Zeeman (Stark) decelerator a series of solenoids (high-voltage electrodes) yields the deceleration force as well as the transverse focusing force. This can result in a strong coupling between the longitudinal (forward) and transverse oscillatory motions; parametric amplification of the molecular trajectories can occur, leading to losses of particle density [@Meerakker:PRA73:023401; @Sawyer:EPJD48:197]. For Stark decelerators, the occurrence of instabilities can be avoided without changing the electrode design. By operating the decelerator in the so-called $s=3$ mode [@Meerakker:PRA71:053409], in which only one third of the electrode pairs are used for deceleration while the remaining pairs are used for transverse focusing, instabilities are effectively eliminated [@Meerakker:PRA73:023401; @Scharfenberg:PRA79:023410]. The high particle densities afforded by this method have recently enabled a number of high-resolution crossed beam scattering experiments, for instance [@Gilijamse:Science313:1617; @Kirste:Sience338:1060; @Zastrow:NatChem6:216; @Vogels:SCIENCE350:787]. For multi-stage Zeeman decelerators, several advanced switching protocols have been proposed and tested to mitigate losses. Wiederkehr *et al.* extensively investigated phase stability in a Zeeman decelerator, particularly including the role of the nonzero rise and fall times of the current pulses, as well as the influence of the operation phase angle [@Wiederkehr:JCP135:214202; @Wiederkehr:PRA82:043428]. Evolutionary algorithms were developed to optimize the switching pulse sequence, significantly increasing the number of particles that exit from the decelerator. Furthermore, inspired by the $s=3$ mode of a Stark decelerator, alternative strategies for solenoid arrangements were investigated numerically [@Wiederkehr:PRA82:043428]. 
Dulitz *et al.* developed a model for the overall 6D phase-space acceptance of a Zeeman decelerator, from which optimal parameter sets can be derived to operate the decelerator at minimum loss [@Dulitz:PRA91:013409]. Dulitz *et al.* also proposed and implemented schemes to improve the transverse focusing properties of a Zeeman decelerator by applying reversed current pulses to selected solenoids [@Dulitz:JCP140:104201]. Yet, despite the substantial improvements these methods can offer, the phase-stable operation of a multi-stage Zeeman decelerator over a large range of velocities remains challenging. Recently, a very elegant approach emerged that can be used to overcome these intrinsic limitations of multi-stage decelerators. So-called traveling wave decelerators employ spatially moving electrostatic or magnetic traps to confine part of the molecular beam in one or multiple wells that start traveling at the speed of the molecular beam pulse and are subsequently gradually slowed down. In this approach the molecules are confined in genuine potential wells, and stay confined in these wells until the final velocity is reached. Consequently, these decelerators are inherently phase stable, and no losses occur due to couplings of motions during the deceleration process. The acceptances are almost equal in both the longitudinal and transverse directions, which appears to be particularly advantageous for experiments that are designed to spatially trap the molecules at the end of the decelerator. Both traveling wave Stark [@Osterwalder:PRA81:051401; @vandenBerg:JMS300:201422] and Zeeman [@Trimeche:EPJD65:263; @Lavert-Ofir:NJP13:103030; @Lavert-Ofir:PCCP13:18948; @Akerman:NJP17:065015] decelerators have been successfully demonstrated. Recently, first experiments in which the decelerated molecules are subsequently loaded into static traps have been conducted [@Quintero:PRL110:133003; @Jansen:PRA88:043424]. 
These traveling wave decelerators typically feature a large overall 6D acceptance. This acceptance is almost equally partitioned between the longitudinal and both transverse directions. For high-resolution scattering experiments, however, there are rather different requirements for the beam than for trapping. Certainly, phase-stable operation of the decelerator—and the resulting production of molecular packets with high number densities—is essential. In addition, tunability over a wide range of final velocities is important, but the ability to reach very low final velocities approaching zero meters per second is often inconsequential. More important is the shape of the emerging packet in phase-space, i.e., the spatial and velocity distributions in both the longitudinal and transverse directions. Ideally, for scattering experiments the longitudinal acceptance of the decelerator should be relatively large, whereas it should be small in the transverse directions. A broad longitudinal distribution—in the order of a few tens of mm spatially and 10–20 m/s in velocity—is typically required to yield sufficiently long interaction times with the target beam or sample, and to ensure the capture of a significant part of the molecular beam pulse that is available for scattering. In addition, a large longitudinal velocity acceptance allows for the application of advanced phase-space manipulation techniques such as bunch compression and longitudinal cooling to further improve the resolution of the experiment [@Crompvoets:PRL89:093004]. By contrast, much narrower distributions are desired in the transverse directions. Here, the spatial diameter of the beam should be matched to the size of the target beam and the detection volume; typically a diameter of several mm is sufficient. Finally, the transverse velocity distribution should be narrow to minimize the divergence of the beam. 
These desiderata on beam distributions are unfortunately not met by traveling wave decelerators, where the resulting longitudinal (spatial) distributions are smaller and the transverse distributions are larger than what may be considered ideal for scattering experiments. Here, we describe a new concept for a multi-stage Zeeman decelerator that is optimized for applications in scattering experiments. The decelerator consists of an array of alternating magnetic hexapoles and solenoids, used to effectively decouple the longitudinal and transverse motions of the molecules inside the decelerator. We analyze in detail the performance of the decelerator using numerical trajectory calculations, and we will show that the decelerator exhibits phase stability, with a spatial and velocity acceptance that is much larger in the longitudinal than in the transverse directions. We show that the decelerator is able to both decelerate and accelerate, as well as to guide a packet of molecules through the decelerator at constant speed. We present the successful experimental implementation of the concept, using a multi-stage Zeeman decelerator consisting of 24 solenoids and 25 hexapoles. The performance of the decelerator in acceleration, deceleration and guiding modes is characterized using a beam of metastable helium atoms. In the decelerator presented here, we use copper capillary material in a new type of solenoid that allows for direct contact of the solenoid material with cooling liquid. The solenoid is easily placed inside vacuum, it offers excellent thermal properties and it allows for the use of low-voltage electronic components that are readily available and cost effective. Together, this results in a multi-stage Zeeman decelerator that is relatively easy and cheap to build, and that can be operated at repetition rates up to 10 Hz. This paper is organized as follows. 
In section \[sec:concept\] we first describe the concept of the multi-stage Zeeman decelerator and characterize its inherent performance with numerical simulations. For this, we use decelerators of arbitrary length and the NH ($X\,^3 \Sigma^-$) radical as an example, as this molecule is one of our target molecules for future scattering experiments. In the simulations, we use the field geometry as induced by the experimentally proven solenoid used in the Zeeman decelerator at ETH Z[ü]{}rich [@Wiederkehr:JCP135:214202]. In section \[sec:experiment\], we describe in a proof-of-principle experiment the successful implementation of the concept. Here, we use metastable helium atoms, as this species can be decelerated significantly using the relatively short decelerator presently available. Zeeman decelerator concept and design {#sec:concept} ===================================== The multi-stage Zeeman decelerator we propose consists of a series of alternating hexapoles and solenoids, as is shown schematically in Figure \[fig:mode-explain\]. The length of the hexapoles and solenoids are almost identical. To simulate the magnetic field generated by the solenoids, we choose parameters that are similar to the ones used in the experiments by Wiederkehr *et al.* [@Wiederkehr:JCP135:214202]. We assume a solenoid with a length of 7.5 mm, an inner and outer diameter of 7 and 11 mm, respectively, through which we run maximum currents of 300 A. Furthermore, we set the inner diameter to 3 mm for molecules to pass through. These solenoids can, for instance, be produced by winding enameled wire in multiple layers, and the current through these solenoids can be switched using commercially available high-current switches. With these levels of current, this solenoid can create a magnetic field strength on the molecular beam axis as shown in Figure \[fig:coilhexafield\]*a*; the radial profiles of the field strength at a few positions $z$ along the beam axis are shown in panel *b*. 
It is shown that the solenoid creates a concave field distribution near the center of the solenoid, whereas a mildly convex shape is produced outside the solenoid. The hexapoles have a length of 8.0 mm, are separated by a distance $D=4$ mm from the solenoids, and produce a magnetic field that is zero on the molecular beam axis but that increases quadratically as a function of the radial off-axis position $r$ (see Figure \[fig:coilhexafield\]*c*). We assume that the maximum magnetic field strength amounts to 0.5 T at a radial distance $r=1.5$ mm from the beam axis. Such magnetic field strengths are readily produced by arrangements of current carrying wires, permanent magnets [@Watanabe:EPJD38:219; @Osterwalder:EPJ-TI2:10], or a combination of both [@Poel:NJP17:055012]. The key idea behind this Zeeman decelerator concept is to effectively decouple the longitudinal and transverse motions of the molecules inside the decelerator. The fields generated by the solenoids are used to decelerate or accelerate the beam, but their mild transverse focusing and defocusing forces are almost negligible compared to the strong focusing effects of the hexapoles. These hexapoles, in turn, hardly contribute to the longitudinal deceleration forces. As we will discuss more quantitatively in the next sections, this stabilizes molecular trajectories and results in phase stability. Decelerators in which dedicated and spatially separated elements are used for transverse focusing and longitudinal deceleration have been considered before [@Kalnins:RSI73:2557; @Sawyer:EPJD48:197]. In charged particle accelerators, such separation is common practice, and the detrimental effects of elements that affect simultaneously the longitudinal and transverse particle motions are well known [@Lee:AccPhys:2004]. The insertion of focusing elements between the mechanically coupled deceleration electrodes in a Stark decelerator appears technically impractical, however. 
By contrast, the relatively open structure of individually connected solenoids in a Zeeman decelerator allows for the easy addition of focusing elements. In addition, magnetic fields generated by adjacent elements are additive; shielding effects of nearby electrodes, a common problem when designing electric field geometries, do not occur. The insertion of hexapoles further opens up the possibility to operate the Zeeman decelerator in three distinct modes that allow for either deceleration, acceleration, or guiding the molecular packet through the decelerator at constant speed. These operation modes are schematically illustrated in the lower half of Figure \[fig:mode-explain\]. In the description of the decelerator, we use the concepts of an equilibrium phase angle $\phi_0$ and a synchronous molecule from the conventions used to describe Stark decelerators [@Bethlem:PRL83:1558; @Bethlem:PRA65:053416]. The definition of $\phi_0$ in each of the modes is illustrated in Figure \[fig:mode-explain\], where zero degrees is defined as the relative position along the beam axis where the magnetic field reaches half the strength it has at the solenoid center. In deceleration mode, the solenoids are switched on before the synchronous molecule arrives in the solenoid, and switched off when the synchronous molecule has reached the position corresponding to $\phi_0$. In acceleration mode, the solenoid is switched on when the synchronous molecule has reached the position corresponding to $\phi_0$, and it is only switched off when the synchronous molecule no longer experiences the field induced by the solenoid. In hybrid mode, two adjacent solenoids are simultaneously activated to create a symmetric potential in the longitudinal direction. For this, each solenoid is activated twice: once when the synchronous molecule approaches, and once when the synchronous molecule exits the solenoid. 
In this description we neglected the nonzero switching time of the current in the solenoids. In our decelerator, however, the current pulses feature a rise time of about 8 $\mu$s, as will be explained in more detail in section \[subsec:simulations\]. In the simulations, the full current profile is taken into account; we will adopt the convention that the current has reached half of the maximum value when the synchronous particle reaches the $\phi_0$ position. This switching protocol ensures that in hybrid mode with $\phi_0=0^{\circ}$, the molecules will receive an equal amount of acceleration and deceleration, in analogy with operation of a Stark decelerator with $\phi_0=0^{\circ}$. The kinetic energy change $\Delta K$ that the synchronous molecule experiences per stage is shown for each mode in Figure \[fig:acceptance-overview\]*a*. In this calculation we assume the NH radical in its electronic ground state, that has a 2-$\mu_B$ magnetic dipole moment (*vide infra*). In the deceleration and acceleration modes, the full range of $\phi_0$ ($-90^{\circ}$ to $90^{\circ}$) can be used to reduce and increase the kinetic energy, respectively. In hybrid mode, deceleration and acceleration are achieved for $0^{\circ} < \phi_0 \leq 90^{\circ}$ and $-90^{\circ}\leq \phi_0 < 0^{\circ}$, respectively, whereas the packet is transported through the decelerator at constant speed for $\phi_0=0^{\circ}$. The maximum value for $\Delta K$ that can be achieved amounts to approximately 1.5 cm$^{-1}$. Numerical trajectory simulations {#subsec:simulations} -------------------------------- The operation characteristics of the Zeeman decelerator are extensively tested using numerical trajectory simulations. In these simulations, it is essential to take the temporal profile of the current pulses into account. Unless stated otherwise, we assume single pulse profiles as illustrated in Figure \[fig:NHZeemanshift\]*a*. 
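The kinetic-energy change per stage discussed above follows directly from the linear Zeeman shift: for a low-field-seeking state with effective moment $g_{\mathrm{eff}}\,\mu_B$, the energy removed per stage is set by the field difference between switch-on and switch-off. A minimal sketch (the 1.6 T field value in the test below is an assumed round number chosen to reproduce the quoted $\sim$1.5 cm$^{-1}$, not a value taken from the text):

```python
# Bohr magneton expressed as mu_B / (h c), in cm^-1 per tesla
MU_B_CM = 0.46686

def delta_K_per_stage(B_on, B_off, g_eff=2.0):
    """Kinetic-energy change (cm^-1) per stage for a low-field-seeking
    state with a linear Zeeman shift E = g_eff * mu_B * B, such as
    NH (X3Sigma-, N=0, J=1, M=1) with g_eff = 2.  The molecule climbs
    the magnetic potential from field B_on (at switch-on) to B_off
    (at switch-off), so Delta K = -g_eff * mu_B * (B_off - B_on)."""
    return -g_eff * MU_B_CM * (B_off - B_on)
```

In deceleration mode $B_{\mathrm{off}}>B_{\mathrm{on}}$ and $\Delta K<0$; reversing the two fields yields the corresponding acceleration.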
The current pulses feature a rise time of approximately 8 $\mu$s, then a variable hold-on time during which the current has a constant value of 300 A. The current exponentially decays to a lingering current of 15 A with a characteristic decay time of 5 $\mu$s, as can be created by switching the current to a simple resistor in the electronic drive unit. This lingering current is only switched off at much later times, and is introduced to prevent Majorana transitions as will be explained in section \[subsec:Majorana\]. Furthermore, we assume that the hexapoles are always active when molecules are in their proximity. In these simulations, we use NH radicals in the $X\,^3\Sigma^-, N=0, J=1$ rotational ground state throughout. The Zeeman effect of this state is shown in Figure \[fig:NHZeemanshift\]*b*. NH radicals in the low-field seeking $M=1$ component possess a magnetic moment of 2 $\mu_B$, and experience a linear Zeeman shift. NH radicals in this state have a relatively small mass-to-magnetic moment ratio of 7.5 amu/$\mu_B$, making NH a prime candidate for Zeeman deceleration experiments. Our findings are easily translated to other species by appropriate scaling of this ratio, in particular for species that also have a linear Zeeman shift (such as metastable helium, for instance). The inherent 6D phase-space acceptance of the decelerator is investigated by uniformly filling a block-shaped area in 6D phase-space, and by propagating each molecule within this volume through a decelerator that consists of 100 solenoids and 100 hexapoles. In the range of negative $\phi_0$ in deceleration mode and positive $\phi_0$ for acceleration mode we instead used 200 pairs of solenoids and hexapoles to spatially separate the molecules within the phase stable area from the remainder of the distribution. This is explained in the appendix. The uniform distributions are produced using six unique Van der Corput sequences [@Corput:PAWA38:813]. 
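Two ingredients of these simulations are easy to sketch: the temporal current profile and the Van der Corput filling of the initial 6D block. In the sketch below the linear ramp and the choice of six coprime bases are our assumptions; only the rise time, hold current, decay constant, and lingering current are taken from the text.

```python
import math

def solenoid_current(t, t_on, t_off, i_max=300.0, i_linger=15.0,
                     t_rise=8e-6, tau=5e-6):
    """Idealized pulse: ramp to i_max over t_rise (a linear ramp is an
    assumption; only the ~8 us rise time is specified), hold at i_max,
    then decay exponentially toward the lingering current after t_off."""
    if t < t_on:
        return 0.0
    if t < t_on + t_rise:
        return i_max * (t - t_on) / t_rise
    if t < t_off:
        return i_max
    return i_linger + (i_max - i_linger) * math.exp(-(t - t_off) / tau)

def van_der_corput(i, base=2):
    """i-th element of the base-b Van der Corput sequence in [0, 1),
    obtained by mirroring the base-b digits of i about the radix point."""
    x, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        x += digit / denom
    return x

def block_samples(n, spans, bases=(2, 3, 5, 7, 11, 13)):
    """Uniformly fill a 6D block in phase space; spans is a list of six
    (lo, hi) pairs, and each coordinate uses its own base so that the
    six low-discrepancy sequences are mutually independent."""
    return [tuple(lo + (hi - lo) * van_der_corput(i + 1, b)
                  for (lo, hi), b in zip(spans, bases))
            for i in range(n)]
```

Unlike pseudo-random sampling, the Van der Corput points fill the block evenly for any sample size, so every propagated particle represents an equal share of the initial phase-space volume.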
For each of the three operation modes, the resulting longitudinal phase-space distributions of the molecules in the last solenoid of the decelerator are shown in Figure \[fig:phasespace3D\] for three different $\phi_0$. The separatrices that follow from the 1D model for phase stability that explicitly takes the temporal profiles of the currents into account, as described in detail by Dulitz *et al.* [@Dulitz:PRA91:013409], are given as a cyan overlay. In each simulation, the synchronous molecule has an initial velocity chosen such that the total flight time is approximately 4.8 ms. This results in velocity progressions of $[370 \rightarrow 625]$, $[390 \rightarrow 599]$ and $[421 \rightarrow 568]$ m/s in acceleration mode with $\phi_0=-60^{\circ}, -30^{\circ}$ and $0^{\circ}$, respectively; a progression of $[445 \rightarrow 550]$, $[500 \rightarrow 500]$ and $[550 \rightarrow 447]$ m/s in hybrid mode with $\phi_0=-30^{\circ}, 0^{\circ}$ and $30^{\circ}$; and finally a progression of $[570 \rightarrow 421]$, $[595 \rightarrow 399]$ and $[615 \rightarrow 383]$ m/s in deceleration mode corresponding to $\phi_0=0^{\circ}, 30^{\circ}$ and $60^{\circ}$. It is shown that in all operation modes and for all values of $\phi_0$, the separatrices accurately describe the longitudinal acceptances of the decelerator. For larger values of $|\phi_0|$, the sizes of the separatrices are reduced, reflecting the smaller size and depth of the effective time-averaged potential wells. Note the symmetric shape of the separatrix when the decelerator is operated in hybrid mode with $\phi_0 = 0^{\circ}$, corresponding to guiding of the packet through the decelerator at constant speed. The transmitted particle density is slightly less in hybrid mode than in other modes, which indicates that the transverse acceptance is not completely independent of the solenoid fields. 
However, in each mode of operation the regions in phase-space accepted by the decelerator are homogeneously filled; no regions with a significantly reduced number of molecules are found. This is a strong indication that the decelerator indeed features phase stability. The transverse acceptance is found to be rather independent of $\phi_0$, and is shown in Figure \[fig:transspace3D\] for $\phi_0=0^{\circ}$ only. It can be seen that the transverse acceptance is typically smaller than the longitudinal acceptance, in accordance with our desideratum for molecular beam scattering experiments. Note that the transverse (velocity) acceptance can be modified independently from the deceleration and acceleration properties of the decelerator, simply by adjusting the field strength of the hexapoles. Additionally, trajectory simulations can be used to quantify the overall 6D acceptance of the decelerator. Because of the uniform initial distribution, all particles that are propagated represent a small but equal volume in phase-space. At the end of the decelerator, the particles within a predefined range with respect to the synchronous particle are counted, yielding the volume in phase-space occupied by these particles. In the simulations, the initial “block” distribution is widened until the number of counted particles increases no further. We define the corresponding phase-space volume as the acceptance of the decelerator. The resulting 6D acceptance is shown for each operation mode in panel *b* of Figure \[fig:acceptance-overview\]. Operating in hybrid mode results in the typical triangle-shaped acceptance curve as a function of $\phi_0$ that is also found for Stark decelerators. A maximum 6D phase-space acceptance of approximately $1.2 \cdot 10^6$ mm$^3$ (m/s)$^3$ is found for $\phi_0=0^{\circ}$, and drops below $10^5$ mm$^3$ (m/s)$^3$ at large $|\phi_0|$. 
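Once the trajectories have been propagated, the acceptance bookkeeping described above amounts to a single weighted count. A sketch (the survival criterion itself, proximity to the synchronous particle at the exit, is determined by the trajectory code and is simply passed in here as a mask):

```python
def acceptance_volume(block_spans, survived):
    """6D phase-space acceptance from a uniform test ensemble: each
    particle stands for an equal share of the initial block volume, so
    the acceptance is the surviving fraction times that volume.
    block_spans holds six (lo, hi) pairs (positions in mm, velocities
    in m/s); survived is a boolean mask over the propagated particles."""
    vol = 1.0
    for lo, hi in block_spans:
        vol *= hi - lo
    return vol * sum(survived) / len(survived)
```

In practice the block is widened until the absolute number of surviving particles saturates, at which point the returned volume is the acceptance quoted in the text.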
A peculiar effect is seen in the deceleration and acceleration modes for $\phi_0<0^{\circ}$ and $\phi_0>0^{\circ}$, respectively. Here, the acceptance largely exceeds the acceptance for $\phi_0=0^{\circ}$, and approaches values of $6 \cdot 10^6$ mm$^3$ (m/s)$^3$. This is a special consequence of the continuously acting focusing forces of the hexapoles, and will be discussed in more detail in the Appendix. Although one has to be careful to derive the merits of a decelerator from the 6D phase-space acceptance alone, it is instructive to compare these numbers to the phase-space acceptances found in other decelerators. Conceptually, the hybrid mode of our Zeeman decelerator is compared best to the $s=3$ mode of a Stark decelerator. For the latter, Scharfenberg *et al.* found a maximum phase-space acceptance of $3 \cdot 10^5$ mm$^3$ (m/s)$^3$ for OH ($X\,^2\Pi_{3/2}, J=3/2$) radicals, with a similar partitioning of this acceptance between the longitudinal and transverse coordinates as found here [@Scharfenberg:PRA79:023410]. In comparison, for a multi-stage Zeeman decelerator without hexapoles, Wiederkehr *et al.* found that the 6D acceptance peaks at about $2 \cdot 10^3$ mm$^3$ (m/s)$^3$ for Ne ($^3P_2$) atoms when equilibrium phase angles in the range $30^{\circ}$–$45^{\circ}$ degrees are used [@Wiederkehr:JCP135:214202]. The acceptance of the multi-stage Zeeman decelerator developed by Raizen and coworkers, also referred to as a magnetic coilgun, was reported to have an upper limit of $10^5$ mm$^3$ (m/s)$^3$ [@Narevicius:ChemRev112:4879]. The highest 6D acceptances to date are found in traveling wave decelerators, mostly thanks to the large transverse acceptances of these decelerators. The maximum acceptance of the traveling wave Zeeman decelerator of Narevicius and coworkers, for instance, amounts to $2 \cdot 10^7$ mm$^3$ (m/s)$^3$ for Ne ($^3P_2$) atoms [@Lavert-Ofir:PCCP13:18948]. 
Phase stability --------------- The numerical trajectory simulations yield very strong indications that the molecules are transported through the Zeeman decelerator without loss, i.e., phase-stable operation is ensured. We support this conjecture further by considering the equation of motion for the transverse trajectories, using a model that was originally developed to investigate phase stability in Stark decelerators [@Meerakker:PRA73:023401]. In this model, we consider a (nonsynchronous) molecule with initial longitudinal position $z_i$ relative to the synchronous molecule, which oscillates in longitudinal phase-space around the synchronous molecule with longitudinal frequency $\omega_z$. In other words, during this motion the relative longitudinal coordinate $\phi$ oscillates around the synchronous value $\phi_0$. In the transverse direction, the molecule oscillates around the beam axis with transverse frequency $\omega_r$, which changes with $\phi$. In Figure \[fig:frequencies\], the longitudinal and transverse oscillation frequencies are shown that are found when the Zeeman decelerator is operated in hybrid mode with $\phi_0=0^{\circ}$. For deceleration and acceleration modes rather similar frequencies are found (data not shown). It can be seen that the transverse oscillation frequency largely exceeds the longitudinal oscillation frequency. As we will show below, this eliminates the instabilities that have deteriorated the phase-space acceptance of multi-stage Stark and Zeeman decelerators in the past [@Meerakker:PRA73:023401; @Sawyer:EPJD48:197; @Wiederkehr:JCP135:214202; @Wiederkehr:PRA82:043428]. During its motion, a molecule experiences a time-dependent transverse oscillation frequency whose square is given by [@Meerakker:PRA73:023401]: $$\omega_r^2(t) = \omega_0^2-A \cos(2\omega_z t), \label{eq:trans}$$ where $\omega_0$ and $A$ are constants that characterize the oscillatory function. 
The resulting transverse equation of motion is given by the Mathieu differential equation: $$\frac{d^2 r}{d\tau^2}+[a-2q\cos(2\tau)]r=0,$$ with: $$a=\left(\frac{\omega_0}{\omega_z}\right)^2, \qquad q=\frac{A}{2\omega^2_z}, \qquad \tau=\omega_z t. \label{eq:param}$$ Depending on the values of $a$ and $q$, the solution of this equation exhibits stable or unstable behavior. This is illustrated in Figure \[fig:stability\], which displays the Mathieu stability diagram. Stable and unstable solutions exist for combinations of $a$ and $q$ within the white and gray areas, respectively. For each operation mode of the decelerator, and for a given phase angle $\phi_0$, the values for the parameters $a$ and $q$ can be determined from the longitudinal and transverse oscillation frequencies of Figure \[fig:frequencies\]. The resulting values for the parameters $q$ and $a$ as a function of $z_i$ are shown in panels (*a*) and (*b*) of Figure \[fig:stability\], for the decelerator running in hybrid mode with $\phi_0=0^{\circ}$. The ($a,q$) combinations that govern the molecular trajectories for this operation mode are included as a solid red line in the stability diagram shown in panel (*c*). Clearly, the red line avoids all unstable regions, and only passes through the unavoidable “vertical tongues” where they have negligible width. These narrow strips do not cause unstable behavior for decelerators of realistic length. The unstable areas in the Mathieu diagram are avoided because of the high values of the parameter $a$. The same result was found for the other operation modes and equilibrium phase angles. We thus conclude that the insertion of hexapoles effectively decouples the transverse motion from the longitudinal motion; the Zeeman decelerator we propose is inherently phase stable, and can in principle be realized with arbitrary length. 
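The stability criterion can also be evaluated numerically. Since the coefficient of the Mathieu equation is periodic in $\tau$ with period $\pi$, Floquet theory applies: trajectories are bounded exactly when the monodromy matrix $M$ (the map of $(r, dr/d\tau)$ over one period) satisfies $|\mathrm{Tr}\,M| \leq 2$. The following sketch is purely illustrative and is not part of the analysis in the paper; the $(a,q)$ values used below are chosen by hand rather than taken from Figure \[fig:stability\].

```python
import math

def mathieu_stable(a, q, steps=2000):
    """Floquet stability test for r'' + [a - 2 q cos(2 tau)] r = 0.

    Integrates the equation over one period (pi) of the coefficient
    with RK4, builds the 2x2 monodromy matrix from two independent
    initial conditions, and applies the criterion |trace| <= 2.
    """
    h = math.pi / steps

    def accel(tau, r):
        return -(a - 2.0 * q * math.cos(2.0 * tau)) * r

    def propagate(r, v):
        tau = 0.0
        for _ in range(steps):
            # classic RK4 for the first-order system (r' = v, v' = accel)
            k1r, k1v = v, accel(tau, r)
            k2r, k2v = v + 0.5 * h * k1v, accel(tau + 0.5 * h, r + 0.5 * h * k1r)
            k3r, k3v = v + 0.5 * h * k2v, accel(tau + 0.5 * h, r + 0.5 * h * k2r)
            k4r, k4v = v + h * k3v, accel(tau + h, r + h * k3r)
            r += h * (k1r + 2 * k2r + 2 * k3r + k4r) / 6.0
            v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
            tau += h
        return r, v

    m11, m21 = propagate(1.0, 0.0)  # first column of the monodromy matrix
    m12, m22 = propagate(0.0, 1.0)  # second column of the monodromy matrix
    return abs(m11 + m22) <= 2.0
```

For large $a$ and modest $q$, as obtained here thanks to the strong hexapole confinement, the criterion is satisfied everywhere except in the negligibly narrow resonance tongues near $a = n^2$.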
Prevention of Majorana losses {#subsec:Majorana} ----------------------------- An important requirement for devices that manipulate the motion of molecules using externally applied fields is that the molecules remain in a given quantum state while they travel through the device. As the field strength approaches zero, different quantum states may become (almost) energetically degenerate, resulting in a possibility for nonadiabatic transitions. This may lead to loss of particles, which is often referred to as Majorana losses. The occurrence of nonadiabatic transitions has been studied extensively for neutral molecules in electric traps [@Kirste:PRA79:051401], as well as for miniaturized Stark decelerators integrated on a chip [@Meek:PRA83:033413]. Tarbutt and coworkers developed a theoretical model based on the time-dependent Hamiltonian for the field-molecule interaction, and quantitatively investigated the transition probability when the field strength comes close to zero, and/or when the field vector rotates quickly relative to the decelerated particles [@Wall:PRA81:033414]. In the multi-stage Zeeman decelerators that are currently operational, losses due to nonadiabatic transitions can play a significant role [@Hogan:PRA76:023412]. Specifically, when a solenoid is switched off just as the particle bunch is near the solenoid center, there will be a moment in time where no well-defined magnetic quantization field is present. In previous multi-stage Zeeman decelerator designs, this was compensated by introducing a temporal overlap between the current pulses of adjacent solenoids, effectively eliminating nonadiabatic transitions [@Hogan:PRA76:023412]. In the Zeeman decelerator concept presented in this manuscript, this solution is not available, since adjacent solenoids are separated by hexapole elements. The hexapoles induce only marginal fringe fields, and do not contribute any magnetic field strength on the molecular beam axis. 
Referring back to Figure \[fig:NHZeemanshift\]*a*, we introduce a quantization field throughout the hexapole-solenoid array by switching each solenoid to a low-level lingering current when the high current pulse is switched off. Since the fringe field of a solenoid extends beyond the geometric center of adjacent hexapoles, and since the maximum magnetic field per unit of current is created in the center of the solenoid, a lingering current of approximately 15 A is sufficient to provide a minimum quantization field of 0.1 T. The resulting sequences of current profiles through the solenoids with number $n$, $n+1$ and $n+2$ are shown in the upper half of Figure \[fig:Majorana-currents\] for the deceleration (panel *a*) and hybrid modes (panel *b*). The profiles for acceleration mode are not shown here; they feature the low-level current before the full-current pulse instead of after it. The lingering current decays exponentially to its final value, and lasts until the next solenoid is switched off. Panels *c* and *d* show the corresponding magnetic field strength experienced by the synchronous molecule as it propagates through the decelerator (blue curves), together with the field that would have resulted if the solenoid were switched off with a conventional ramp time (red curves). Clearly, the low-level current effectively eliminates the zero-field regions. From model calculations similar to the ones developed by Tarbutt and coworkers [@Wall:PRA81:033414], we expect that the magnetic field vector inside the solenoids will not rotate fast enough to induce nonadiabatic transitions, provided that all solenoid fields are oriented in the same direction. We therefore conclude that the probability for nonadiabatic transitions is negligible for the Zeeman decelerator concept proposed here. 
One may wonder how the addition of the slowly decaying lingering current affects the ability to efficiently accelerate or decelerate the molecules. This is illustrated in panels (*e*) and (*f*) of Figure \[fig:Majorana-currents\], which display the acceleration experienced by the synchronous molecule. The acceleration follows from $-(\vec{\nabla} U_{\textrm{Z}})/m$, where $U_{\textrm{Z}}$ is the Zeeman energy for NH ($X\,^3\Sigma^-, N=0, J=1, M=1$) induced by the time-varying magnetic field $B(t)$, and $m$ is the mass of the NH radical. It is seen that the lingering current only marginally affects the acceleration force; a slight additional deceleration at early times is compensated by a small acceleration when the synchronous molecule exits the solenoid. Overall, the resulting values for $\Delta K$ with or without lingering current, as obtained by integrating the curves in panels (*e*) and (*f*), are almost identical (data not shown). Excessive focusing at low velocities {#subsec:low-velocities} ------------------------------------ A common problem in multi-stage decelerators is the occurrence of losses due to excessive focusing at low forward velocities. This effect has been studied and observed in multi-stage Stark decelerators that operate in the $s=1$ or $s=3$ modes, where losses occur below approximately 50 or 150 m/s, respectively [@Sawyer:EPJD48:197; @Scharfenberg:PRA79:023410]. Our concept for a multi-stage Zeeman decelerator shares these over-focusing effects at low final velocities, which may be considered a disadvantage compared to traveling wave decelerators, which are phase stable down to near-zero velocities. At relatively high velocities, the hexapole focusing forces can be seen as a continuously acting averaged force, keeping the molecules confined to the beam axis. However, at low velocities this approximation is no longer valid, and the molecules can drift away from the beam axis between adjacent hexapoles. 
We investigate the expected losses using numerical trajectory simulations similar to those discussed in section \[subsec:simulations\], i.e., we again assume a Zeeman decelerator consisting of 100 hexapole-solenoid pairs. We assume packets of molecules with five different mean initial velocities ranging between $v_{\textrm{in}}=$ 350 m/s and 550 m/s, and these packets are subsequently propagated through the decelerator. The decelerator is operated in hybrid mode, using different values for $\phi_0$. Since we assume a 100-stage decelerator throughout, the packets emerge from the decelerator with different final velocities. In Figure \[fig:over-focusing\] we show the number of decelerated particles that are expected at the end of the decelerator as a function of $\phi_0$ (panel *a*), or as a function of the final velocity (panel *b*). For low values of $\phi_0$, the transmitted number of molecules is (almost) equal for all curves; the slightly higher transmission for higher values of $v_{\textrm{in}}$ is related to the shorter flight time of the molecules in the decelerator. As a consequence of this shorter flight time, molecules that are not within the inherent 6D phase-space acceptance of the decelerator can still make it to the end of the decelerator, and are counted in the simulations. For higher values of $\phi_0$, the transmitted number of molecules decreases, reflecting the reduction of the phase-space acceptance for these phase angles. This is particularly clear for the blue and green curves ($v_{\textrm{in}}=$ 550 and 500 m/s, respectively), which follow the 6D phase-space acceptance curve from Figure \[fig:acceptance-overview\]*b*. The three other curves feature a drop in transmission that occurs when the velocity drops below approximately 160 m/s, as is indicated by the dashed vertical lines. Obviously, for lower values of $v_{\textrm{in}}$, this velocity is reached at lower values of $\phi_0$ (see panel (*a*)). 
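How much this drop-off velocity matters for scattering applications depends on collision kinematics: the collision energy of two crossed beams is $E_{\textrm{col}} = \frac{1}{2}\mu v_{\textrm{rel}}^2$, with $v_{\textrm{rel}}^2 = v_1^2 + v_2^2 - 2 v_1 v_2 \cos\alpha$ for crossing angle $\alpha$ and reduced mass $\mu$. A minimal sketch of this relation follows; the NH + He masses and the 500 m/s beam speeds are illustrative choices, not parameters taken from this work.

```python
import math

AMU = 1.66054e-27    # atomic mass unit, kg
HC_CM = 1.98645e-23  # J per cm^-1 (h * c, with c in cm/s)

def collision_energy_cm(m1_amu, v1, m2_amu, v2, angle_deg):
    """Crossed-beam collision energy in cm^-1 for a given crossing angle."""
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * AMU  # reduced mass, kg
    v_rel_sq = v1**2 + v2**2 - 2.0 * v1 * v2 * math.cos(math.radians(angle_deg))
    return 0.5 * mu * v_rel_sq / HC_CM

# Two 500 m/s beams (e.g. NH, 15 amu, crossed with He, 4 amu):
for angle in (90, 45, 10):
    print(f"{angle:3d} deg: {collision_energy_cm(15.0, 500.0, 4.0, 500.0, angle):6.1f} cm^-1")
```

For equal beam speeds the collision energy falls steeply with decreasing crossing angle, reaching the 1 cm$^{-1}$ range near $\alpha = 10^{\circ}$ in this example, which is the point made in the following paragraph.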
The production of final velocities below this drop-off velocity is not a prime requirement in crossed beam scattering experiments, as the collision energy is determined by the velocities of both beams and the crossing angle between the beams. Very low collision energies can be reached using small crossing angles, relaxing the requirements on the final velocities of the reagent beams. For these applications we therefore see no direct need to combat these over-focusing effects. However, there are several promising options to mitigate these effects if desired. The first option is to employ hexapoles with a variable strength, such that the transverse oscillation frequency can be tuned along with the decreasing velocity of the molecular packet. Similarly, permanent hexapoles with different magnetization can be installed to modify the focusing properties. Finally, it appears possible to merge a hexapole and solenoid into a single element, by superimposing a hexapole arrangement on the outer diameter of the solenoid. Although technically more challenging, this approach will provide an almost continuously acting transverse focusing force, while keeping the possibility to apply current pulses to the solenoids. Preliminary trajectory simulations suggest that indeed a significant improvement can be achieved, but the validity of these approaches will need to be investigated further if near-zero final velocities are required. Experimental implementation {#sec:experiment} =========================== Multi-stage Zeeman decelerator ------------------------------ An overview of the experiment is shown in Figure \[fig:schematic\_setup\]. The generation and detection of the metastable helium beam will be discussed in section \[subsec:He\]; in this section we will first describe the decelerator itself, starting with a description of the solenoids and associated electronics. 
An essential aspect of a multi-stage Zeeman decelerator is the design of the deceleration solenoids, and the cooling strategy to remove the dissipated energy. A variety of solenoid designs have been implemented successfully in multi-stage Zeeman decelerators before. Merkt and coworkers utilized tightly-wound solenoids of insulated copper wire that were thermally connected to water-cooled ceramics [@Vanhaecke:PRA75:031402]. Later, similar solenoids were placed outside a vacuum tube, and submerged in a bath of cooling water [@Hogan:JPB41:081005]. This improved the cooling capacity, and enabled the experiment to operate at repetition rates of 10 Hz. Raizen and coworkers also developed a multi-stage Zeeman decelerator, referred to as the atomic or molecular coilgun, that is based on solenoids encased in high permeability material to increase the on-axis maximum magnetic field strength [@Narevicius:PRA77:051401; @Liu:PRA91:021403]. Recently, different types of traveling wave Zeeman decelerators have been developed, which consist of numerous spatially overlapping quadrupole solenoids [@Lavert-Ofir:PCCP13:18948], or a helical wire arrangement to produce the desired magnetic field [@Trimeche:EPJD65:263]. In the decelerator presented here, we use a new type of solenoid that is placed inside vacuum, but that allows for direct contact of the solenoid material with cooling liquid. The solenoids consist of 4 windings of a copper capillary that is wound around a bore with a diameter of 3 mm. The capillary has an inner diameter of 0.6 mm and an outer diameter of 1.5 mm, and cooling liquid is circulated directly through the capillary. The solenoid is wound such that the first and last windings end with a straight section of the capillary, as is shown in a photograph of a single solenoid in Figure \[fig:setup\_photo\]*b*. These straight sections are glued into an aluminum mounting flange, as will be discussed further below. 
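As a plausibility check on this geometry (our own estimate, not a calculation from the paper), the peak on-axis field of an ideal finite solenoid with $N$ turns, length $L$, winding radius $R$ and current $I$ is $B = \mu_0 N I / \sqrt{L^2 + 4R^2}$. Four turns of 1.5 mm capillary give $L \approx 6$ mm; the winding radius of $\approx 2.25$ mm (bore radius plus capillary radius) and the 4.5 kA peak current quoted below are assumptions of this sketch.

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, T m / A

def solenoid_center_field(n_turns, current, length, radius):
    """On-axis field at the center of an ideal finite solenoid (T)."""
    return MU_0 * n_turns * current / math.sqrt(length**2 + 4.0 * radius**2)

# 4 turns of 1.5 mm capillary -> length ~6 mm; winding radius ~2.25 mm (assumed)
b_ideal = solenoid_center_field(4, 4.5e3, 6e-3, 2.25e-3)
```

The ideal-winding result of roughly 3 T is consistent, allowing for the gaps and finite pitch of the real single-layer winding, with the 2.2 T maximum field quoted below for the actual solenoid.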
The inherent magnetic field profile generated by this solenoid is very similar to that of the solenoids used in the simulations presented in section \[sec:concept\]. The use of a single layer of rather thick copper capillary as solenoid material in a Zeeman decelerator is unconventional, but it has some definite advantages. Because of the low-resistance copper capillary, small operating voltages (24 V) are sufficient to generate currents of approximately 4.5 kA that produce a maximum field of 2.2 T on the solenoid axis. This in turn allows for the use of FET-based electronics components to switch these currents, which are considerably cheaper than their high voltage IGBT-based counterparts. The same holds for the power supplies that deliver the current. By running cooling liquid directly through the solenoid capillary, the solenoids are efficiently cooled. The low operation voltage ensures that the cooling liquid does not carry any significant current. The current pulses are provided by specially designed circuit boards; one such board is displayed in Figure \[fig:electronics\]*b*. Each solenoid is connected to a single board, which is mounted directly onto the solenoid-flange feedthroughs in order to minimize power loss between board and solenoid. Brass strips are used to mechanically clamp the board to the capillary material. The simplified electronic circuit is shown schematically in Figure \[fig:electronics\]*a*. The circuit board is mostly occupied by a parallel array of capacitors, with a total capacitance of 70 mF. The capacitors are charged by a 24 V power supply and then discharged through the connected solenoid. The solenoids have a very low resistance $R_C$ of about 1 m$\Omega$ and self-inductance $L_C$ of about 50 nH, low even compared to the resistance and inductance of the electronic circuit itself. The capacitors are discharged via two possible pathways, indicated in red and green, respectively, by activating the two independent gates S1 and S2. 
Closing gate S1 will allow electrons to flow through the solenoid, generating a maximum current of about 4.5 kA. Closing gate S2 will send the flow through both the solenoid and a 100 m$\Omega$ resistor that limits the current to about 150 A. When both gates are opened, any remaining energy in the solenoid will either dissipate in the electrical components along pathway 3 (in blue) or return to the capacitors. The electronic configuration is able to apply up to two consecutive pulses to each solenoid, as is required for the hybrid mode of operation. As an example, the current profiles for a single pulse or double pulse are shown in Figure \[fig:electronics\]*c* and \[fig:electronics\]*d*, respectively, together with the trigger pulses that activate gates S1 and S2. These profiles were obtained from the induced voltage across a miniature pickup solenoid that was placed inside the center of a decelerating solenoid [@Wiederkehr:JCP135:214202]. The current pulse is initiated by closing gate S1, after which the solenoid current shows a rapid rise to a maximum current of approximately 4.5 kA. After reopening gate S1, the current decreases exponentially with a time constant of 10 $\mu$s. Gate S2 is programmed to close automatically for a fixed duration of 50 $\mu$s, starting 30 $\mu$s after the reopening of S1. While gate S2 is closed, a low-level lingering current is maintained in the solenoid to prevent Majorana transitions (see section \[subsec:Majorana\]), providing a quantization field for atoms or molecules that are near the solenoid. The solenoids and electronics boards are actively cooled using a closed-cycle cooling system. An approximately 10-cm-long capillary section is soldered onto each electronics board, and each capillary is connected in series to its connecting solenoid using silicone tubes. All board-solenoid pairs are individually connected to a supply and return cooling line, using the same flexible silicone material. 
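The thermal load that this cooling circuit must handle can be estimated from the quoted circuit parameters. The sketch below is an order-of-magnitude estimate of our own; the 30 $\mu$s effective pulse duration is an assumed round number motivated by the measured rise and decay times, not a value from the paper.

```python
def average_coil_power(i_peak, r_coil, pulse_duration, rep_rate):
    """Time-averaged ohmic dissipation in one solenoid: <P> = I^2 R * duty cycle."""
    return i_peak**2 * r_coil * pulse_duration * rep_rate

# ~4.5 kA through ~1 mOhm for an effective ~30 us, repeated at 10 Hz
p_avg = average_coil_power(4.5e3, 1e-3, 30e-6, 10.0)  # a few watts per solenoid
```

A load on the order of a few watts per solenoid makes it plausible that direct liquid cooling through the capillary keeps the solenoids at the modest temperatures reported below for 10 Hz operation.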
Each electronics board is additionally cooled by a small fan. Using this cooling system, relatively low operation temperatures are maintained despite the high currents that are passed through the solenoid. In the experiments shown here, the Zeeman decelerator is routinely operated with a repetition frequency of 10 Hz, while the temperature of the solenoids is kept below 40 degrees Celsius. The solenoids are pulsed in a predefined time sequence designed to control the longitudinal velocity of a specific paramagnetic particle. This time sequence is calculated taking into account current profiles that are modeled after the measured profiles shown in Figures \[fig:electronics\]*c* and *d*. The resulting pulse sequence for gates S1 and S2 is programmed into a pattern generator (Spincore PulseBlaster PB24-100-4k), which sends pulse signals to each individual circuit board. The temperature of each solenoid is continuously monitored via a thermocouple on the connecting clamps of the circuit board. When the temperature of the solenoid exceeds a user-set threshold, operation of the decelerator is interrupted. The magnetic hexapoles consist of six wedge-shaped permanent magnets in a ring, as seen in Figure \[fig:setup\_photo\]*a*. Adjacent magnets in the ring have opposite radial remanence. The inner diameter of the hexapole is 3 mm and the length is 8 mm, such that these dimensions match approximately the corresponding solenoid dimensions. The magnets used in this experiment are based on NdFeB (grade N42SH) with a remanence of approximately 450 mT. The advantage of using hexapoles consisting of permanent magnets is twofold: first, the implementation is mechanically straightforward, and second, no additional electronics are needed to generate the focusing fields. However, this approach lacks any tunability of the field strength. 
This can in part be overcome by selectively removing hexapoles from the decelerator, or by exchanging the magnets for ones with a different magnetization. If required, electromagnetic hexapoles that allow for tunability of the field strength can be used instead. We have built and successfully operated hexapoles that are made of the solenoid capillary material, and could optimize their focusing strength by simply adjusting the duration for which these hexapoles are switched on. However, we found that similar beam densities were achieved using the permanent hexapoles, and experiments with electromagnetic hexapoles are not discussed further here. The decelerator contains 24 solenoids and 25 hexapoles that are placed with a center-to-center distance of 11 mm inside a vacuum chamber. The chamber consists of a hollow aluminum block of length 600 mm with a square cross section with sides of 40 mm. This chamber is made by machining the sides of standard aluminum pipe material with an inner diameter of 20 mm. Solenoids and hexapoles are mounted on separate flanges, as can be seen in Figure \[fig:setup\_photo\], such that each element can be installed or removed separately. The first and last elements of the decelerator are hexapoles, which provide transverse focusing forces at the entrance and exit of the decelerator, respectively. Openings for the individual flanges on the decelerator housing spiral along the sides between subsequent elements, with clockwise 90 degree rotations. In this way there is enough space on each side of the decelerator to accommodate the electronics boards of the solenoids, which are 42 mm high. In addition, since subsequent solenoids are rotated by 180 degrees in the decelerator, any asymmetry in the magnetic field due to the relatively coarse winding geometry is compensated. Vacuum inside the decelerator housing is maintained by a vacuum pump installed under the detection chamber, which has an open connection to the decelerator housing. 
Only a minor pressure increase in the chamber is observed when the solenoids are operational, reflecting the relatively low operational temperature of the solenoids. Although for long decelerators additional pumping capacity inside the decelerator is advantageous, we find that for the relatively short decelerator used here the beam density is hardly deteriorated by collisions with background gas, provided the repetition rate of the experiment is below 5 Hz. Under these conditions, the pressure in the decelerator remains below $5 \cdot 10^{-7}$ mbar. Metastable helium beam {#subsec:He} ---------------------- A beam of helium in the metastable (1s)(2s) $^3S$ ($m_S = 1$) state (hereafter He\*) was used to test the performance of the Zeeman decelerator. This species was chosen for two main reasons. First, He\* has a small mass-to-magnetic-moment ratio (2.0 amu/$\mu_B$), i.e., a large Zeeman shift for its mass, which allows for effective manipulation of the atom with magnetic fields. This allows us to significantly vary the mean velocity of the beam despite the relatively low number of solenoids. Second, He\* can be measured directly with a micro-channel plate (MCP) detector, without the need for an ionizing laser, such that full time-of-flight (TOF) profiles can be recorded in a single shot. This allows for a real-time view of TOF profiles when settings of the decelerator are changed, and greatly facilitates optimization procedures. The beam of He\* is generated by expanding a pulse of neat He atoms into vacuum using a modified Even-Lavie valve (ELV) [@Even:JCP112:8068] that is cooled to about 16 K using a commercially available cold-head (Oerlikon Leybold). At this temperature the mean thermal velocity of helium is about 460 m/s. The ELV nozzle is replaced by a discharge source consisting of alternating isolated and conducting plates, similar to the source described by Ploenes *et al.* [@Ploenes:RSI87:053305]. 
The discharge occurs between the conducting plates, where the front plate is kept at -600 V and the back plate is grounded. To ignite the discharge, a hot filament running 3 A of current is used. The voltage applied to the front plate is pulsed (20-30 $\mu$s duration) to reduce the total energy dissipation in the discharge. Under optimal conditions, a beam of He\* is formed, with a mean velocity just above 500 m/s. Unless stated otherwise, in the experiments presented here, the decelerator is programmed to select a packet of He\* with an initial velocity of 520 m/s. The beam of He\* passes through a 3 mm diameter skimmer (Beam Dynamics, model 50.8) into the decelerator housing. The first element (a hexapole) is positioned about 70 mm behind the skimmer orifice. The beam is detected by an MCP detector that is positioned 128 mm downstream from the exit of the decelerator. This MCP is used to directly record the integrated signal from the impinging He\* atoms. Results and Discussion {#sec:results} ====================== Longitudinal velocity control ----------------------------- As explained in section \[sec:concept\], the decelerator can be operated in three distinct modes of operation: in deceleration or acceleration modes, the atoms are most efficiently decelerated or accelerated, respectively, whereas in the so-called hybrid mode of operation, the beam can be transported or guided through the decelerator at constant speed (some mild deceleration or acceleration is in principle also possible in this mode). In this section, we present experimental results for all three modes of operation. We will start with the regular deceleration mode. In Figure \[fig:decelTOF\], TOF profiles for He\* atoms exiting the decelerator are shown that are obtained when the decelerator is operated in deceleration mode, using different values for the equilibrium phase angle $\phi_0$. 
In the corresponding pulse sequences, the synchronous atom is decelerated from 520 m/s to 365 m/s, 347 m/s and 333 m/s, corresponding to effective equilibrium phase angles of 30$^{\circ}$, 45$^{\circ}$ and 60$^{\circ}$, respectively. The corresponding loss of kinetic energy amounts to 23 cm$^{-1}$, 25 cm$^{-1}$ and 27 cm$^{-1}$. The arrival time of the synchronous atom in the graphs is indicated by the vertical green lines. Black traces show the measured profiles; the gray traces, shown as an overlay, are obtained when the decelerator is not operated, i.e., the solenoids are all inactive but the permanent hexapole magnets are still present to focus the beam transversely. The experimental TOF profiles are compared with profiles that result from three-dimensional trajectory simulations. In these simulations, an initial beam distribution is assumed that closely resembles the He\* pulse generated by the modified ELV. The resulting TOF profiles are shown in red, vertically offset from the measured profiles for clarity. The simulated profiles show good agreement with the experiment, both in relative intensity and arrival time of the peaks. However, it must be noted that the relative intensities are very sensitive to the chosen parameters of the initial He\* pulse. By virtue of the supersonic expansion and discharge processes, these distributions are often not precisely known, and may vary from day to day. Nevertheless, the agreement obtained here, in particular regarding the overall shape of the TOF profiles and the predicted arrival times of the decelerated beam, suggests that the trajectory simulations accurately describe the motion of atoms inside the decelerator. No indications are found for unexpected loss of atoms during the deceleration process, or for behavior that is not described by the simulations. The profiles presented in Figure \[fig:decelTOF\] show more features than the decelerated packets alone. 
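As an aside, the quoted kinetic energy losses can be verified directly from the velocities, and compared with the maximum Zeeman shift that a single solenoid can remove from He\* (magnetic moment $\approx 2\mu_B$ for $m_S = 1$, consistent with the 2.0 amu/$\mu_B$ ratio given earlier) at the 2.2 T peak field. This sketch is our own consistency check, not part of the paper's analysis.

```python
AMU = 1.66054e-27    # atomic mass unit, kg
HC_CM = 1.98645e-23  # J per cm^-1 (h * c, with c in cm/s)
MU_B_CM = 0.46686    # Bohr magneton in cm^-1 / T
M_HE = 4.0026 * AMU  # mass of He, kg

def delta_k_cm(v_in, v_out, mass=M_HE):
    """Kinetic energy removed from the synchronous atom, in cm^-1."""
    return 0.5 * mass * (v_in**2 - v_out**2) / HC_CM

losses = [round(delta_k_cm(520.0, v)) for v in (365.0, 347.0, 333.0)]

# 24 solenoids share the load; each stage can remove at most the full
# Zeeman shift of He* (moment ~2 mu_B) at the 2.2 T peak field:
per_stage = delta_k_cm(520.0, 365.0) / 24
max_per_stage = 2 * MU_B_CM * 2.2
```

This reproduces the 23, 25 and 27 cm$^{-1}$ quoted above, and the roughly 1 cm$^{-1}$ removed per stage sits comfortably below the $\approx 2$ cm$^{-1}$ Zeeman ceiling, as expected for the moderate phase angles used.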
In particular, there is an additional peak in each of the graphs that is more intense, but slightly faster, than the decelerated packet. We use the three-dimensional trajectory simulations to study the origin of this feature. In Figure \[fig:decelphase\], the longitudinal phase-space distributions are shown that result from these simulations at the entrance (upper panel), middle (central panel), and exit (lower panel) of the decelerator. The simulation pertains to the situation that results in the TOF profiles presented in Figure \[fig:decelTOF\]*a*, i.e., the decelerator is operated in deceleration mode with $\phi_0=30^{\circ}$. In these phase-space distributions, the grey contour lines depict the predicted trajectories considering the time-averaged Zeeman potential energy. The separatrix of the stable phase-space area is highlighted with a cyan overlay. From this evolution of the longitudinal phase-space distribution, we can understand the origin of various pronounced features in the TOF profiles. The first peak in each TOF profile is a collection of the fastest particles in the initial beam distribution. These particles are hardly affected by the solenoids, and propagate to the detector almost in free flight. However, the part of the beam that is initially slower than the synchronous molecule is strongly affected by the solenoids. This part eventually gains in velocity relative to the decelerated bunch, resulting in an ensemble of particles with a relatively high density. This part arrives at the detector just before the decelerated He\* atoms, resulting in the second intense peak in the TOF profiles of Figure \[fig:decelTOF\]. It is noted that this peak appears intense because our decelerator is rather short, leaving insufficient time for the decelerated bunch to fully separate from the initial beam distribution. 
For longer decelerators, the part of the beam that is not enclosed by the separatrix will gradually spread out, and its signature in the TOF profiles will weaken. The phase-space distributions that are found at the end of the decelerator may also be used to determine the velocity width of the decelerated packet of atoms. For the examples of Figure \[fig:decelphase\], these widths are about 25 m/s. For completeness, we also measure a TOF profile when the decelerator is operated in acceleration mode. Figure \[fig:accelTOF\]*a* shows the TOF profile for the acceleration of He\* atoms from an initial velocity of 560 m/s to a final velocity of 676 m/s. The simulated profile (red trace) shows good agreement with the experimental profile (black trace). Again, the vertical green line indicates the expected arrival time of the accelerated bunch. The sequence selects the fastest atoms in the beam, which is why no additional peaks are visible. Finally, we study the performance of the decelerator in hybrid mode. This mode of operation allows for guiding of the beam through the decelerator at constant speed. In Figure \[fig:accelTOF\]*b* a TOF profile is shown when the decelerator is operated in hybrid mode with $\phi_0=0^{\circ}$, selecting an initial velocity of 520 m/s. The simulated TOF profile (red trace) again shows good agreement with the experimental TOF profile (black trace), although the intensity ratios between the guided part and the wings of the distributions are slightly different in the simulations than in the experiment. This is attributed to the idealized initial atom distributions that are assumed in the simulations. Presence of metastable helium molecules --------------------------------------- While our experiment is designed to decelerate He\* atoms in the $^3S$ state, other types of particles may be created in the discharged beam as well. 
Specifically, formation of metastable He$_2$ molecules in the a$\,^3\Sigma_u^+$ state (hereafter He$_2$\*) is expected, as is also observed in the experiments by Motsch *et al.* and Jansen *et al.* that use a similar discharge source [@Motsch:PRA89:043420; @Jansen:PRL115:133202]. However, He$_2$\* is indistinguishable from He\* in our detection system. In order to probe both species separately, mass-selective detection using a non-resonant laser ionization detection scheme is used. Ultraviolet (UV) laser radiation with a wavelength of 243 nm is produced by doubling the light from an Nd:YAG-pumped pulsed dye laser running with Coumarin 480 dye, and focused into the molecular beam close to the exit of the decelerator. The resulting ions are extracted with an electric field of about 1 kV/cm and accelerated towards an MCP detector, where the arrival times of the ions reflect their mass-to-charge ratios. We used this detection scheme to investigate the chemical composition of the beam that exits the Zeeman decelerator. In Figure \[fig:Hespec\], ion TOF spectra (i.e., the arrival times of the ions at the MCP detector with respect to the laser pulse) are shown. The black trace shows the ion TOF spectrum when the beam of He\* atoms is passed through the decelerator without operating the solenoids. The UV laser is fired at the mean arrival time of the beam in the laser ionization region. Two peaks are clearly visible, corresponding to the expected arrival times of He$^+$ and He$_2^+$, confirming that He$_2$ molecules are indeed created in the discharge. He atoms and molecules are detected in an 8:1 ratio in the neutral beam. The green trace in Figure \[fig:Hespec\] shows the ion TOF spectrum that is recorded when the solenoids are operated for a typical deceleration sequence similar to the ones used to generate Figure \[fig:decelTOF\]. This trace was taken when the UV laser selectively detects the decelerated part of the He\* beam. 
Here, only He$^+$ is present in the ion TOF spectrum. Although He$_2$\* has the same magnetic moment as He\* and will thus experience the same force, the doubled mass of the molecule results in only half the acceleration. He$_2$\* is therefore not decelerated at the same rate as He\*, and will not exit the decelerator at the same time as the decelerated He\* atoms. In conclusion, the Zeeman decelerator is quite effective in separating He\* from He$_2$\*; the decelerated bunch only contains those species and/or particles in the quantum level for which the deceleration sequence was calculated. Referring back to Figures \[fig:decelTOF\] and \[fig:accelTOF\] that were recorded without laser-based mass spectroscopic detection, one may wonder how the presence of He$_2$ molecules in the beam affects the recorded TOF profiles. Figure \[fig:Hematch\] revisits the measurement from Figure \[fig:decelTOF\]*a*, but now also takes He$_2$ molecules into account, with the appropriate ratio, to generate the simulated TOF profile. The resulting TOF profile for He$_2$\* molecules is shown by the green trace, and is seen to fill the part of the TOF that was underrepresented by the original simulations (indicated by the vertical green arrow for the experimental trace). Conclusions and Outlook ======================= We have presented a new type of multi-stage Zeeman decelerator that is specifically optimized for scattering experiments. The decelerator consists of an array of alternating solenoids and hexapoles that effectively decouples the longitudinal deceleration and transverse focusing forces. This ensures that phase-stable operation of the decelerator is possible over a wide range of velocities. For applications in scattering experiments, this decelerator concept has a number of advantages over existing and experimentally demonstrated Zeeman decelerators. 
The decelerator can be operated in three distinct modes that make either acceleration, deceleration, or guiding at constant speed possible, enabling the production of molecular packets with a continuously tunable velocity over a wide range of final velocities. Phase stability ensures that molecules can be transported through the decelerator with minimal loss, resulting in a relatively high overall 6D phase-space acceptance. Most importantly, this acceptance is distributed unequally between the longitudinal and transverse directions. Both the spatial and velocity acceptances are much larger in the longitudinal than in the transverse directions, which meets the requirements for beam distributions in scattering experiments in an optimal way. At low final velocities, however, losses due to over-focusing occur. In crossed beam scattering experiments this appears inconsequential, but for trapping experiments—where low final velocities are essential—the use of the concept presented here should be carefully considered. We have discussed various promising options for combating these losses using alternative hexapole designs in the last section of the decelerator. Additionally, Zhang *et al.* recently proposed a new operation scheme for a Stark decelerator that optimizes the transmitted particle numbers and velocity distributions, which could potentially be translated to a Zeeman decelerator [@Zhang:PRA93:023408]. The validity of these approaches will need to be investigated further, especially if near-zero final velocities are required. In a proof-of-principle experiment, we demonstrated the successful experimental implementation of the new concept presented here, using a decelerator that consists of 24 solenoids and 25 hexapoles. The performance of the decelerator was experimentally tested using beams of metastable helium atoms. Deceleration, acceleration, and guiding of a beam at constant speed have all been demonstrated. 
The experimental TOF profiles of the atoms exiting the decelerator show excellent agreement with the profiles that result from numerical trajectory simulations. Although the decelerator presented here is relatively short, up to 60% of the kinetic energy of He\* atoms that travel with an initial velocity of about 520 m/s could be removed. In the Zeeman decelerator presented here, we utilize a rather unconventional solenoid design that uses a thick copper capillary through which cooling liquid is circulated. The solenoid design allows for the switching of high currents up to 4.5 kA, using readily available and inexpensive low-voltage electronic components. The design is mechanically simple, and can be built at relatively low cost. We are currently developing an improved version of the decelerator that is fully modular and can be extended to arbitrary length. The modules can be connected to each other without mechanically disrupting the solenoid-hexapole sequence, while the housing design will allow for the installation of sufficient pumping capacity to maintain excellent vacuum conditions throughout the decelerator. Operation of a Zeeman decelerator consisting of 100 solenoids and 100 hexapoles at repetition rates up to 30 Hz appears technically feasible. Acknowledgments =============== The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013/ERC grant agreement nr. 335646 MOLBIL). This work is part of the research program of the Netherlands Organization for Scientific Research (NWO). We thank Katrin Dulitz, Paul Janssen, Hansj[ü]{}rg Schmutz and Fr[é]{}d[é]{}ric Merkt for stimulating discussions on Zeeman deceleration, solenoid focusing properties and current switching protocols. We thank Rick Bethlem and Fr[é]{}d[é]{}ric Merkt for carefully reading the manuscript and for valuable suggestions for textual improvements. 
We thank Gerben Wulterkens for the design of prototypes. Appendix: Extreme equilibrium phase angles in deceleration mode {#Appendix-unbound} =============================================================== As can be seen in Figure \[fig:acceptance-overview\]*b*, the highest acceptance is found with $\phi_0 = -90^{\circ}$ in deceleration mode or $\phi_0 = 90^{\circ}$ in acceleration mode. This is a surprising result if we consider conventional multi-stage Zeeman decelerators. In these decelerators, the inherent transverse defocusing fields outside the solenoids prevent the effective use of these extreme values of $\phi_0$ [@Wiederkehr:JCP135:214202]. However, with the addition of magnetic hexapoles this limitation no longer exists. Indeed, the total acceptance changes almost solely with the longitudinal acceptance. This acceptance increases in deceleration and acceleration mode with lower and higher $\phi_0$, respectively. We show this for deceleration mode in Figure \[fig:unbound\]*a*. In the negative $\phi_0$ range of deceleration mode the solenoids are turned off early, resulting in only a small amount of deceleration per stage. This is reflected in the kinetic energy change for this mode shown in Figure \[fig:acceptance-overview\]*a*. With lower $\phi_0$, less of the slope of the solenoid field is used to decelerate, and more of it is available for longitudinal focusing of the particle beam. Moreover, the minimum value of $\phi_0 = -90^{\circ}$ is an arbitrary limit, as even lower values of $\phi_0$ would produce even less deceleration and more longitudinal acceptance. Nevertheless, it is important to remember that this is a theoretical prediction, with the key assumption that the decelerator is of sufficient length that the slower particles have sufficient time to catch up with the synchronous particle. With less deceleration of the synchronous particle per solenoid, this catch-up time will increase. 
This is reflected in the difference in longitudinal phase-space distributions after 100 and 200 stages in Figures \[fig:unbound\] (b) and (c), respectively. In these simulations (similar to those shown in Figure \[fig:phasespace3D\]) a block distribution of NH($X\,^3\Sigma^-, N=0, J=1$) particles was used that well exceeded the predicted longitudinal separatrix. After 100 stages, the (deformed) corners of the initial block distribution are still visible as they revolve around the synchronous particle in longitudinal phase-space. Only after 200 stages of deceleration have these unaccepted particles had enough time to spatially separate from the particles with stable trajectories. This graph also shows that the prediction of the separatrix is quite accurate, and the uniformity of the particle distribution within is evidence of transverse phase stability, even with these extreme values of $\phi_0$. In acceleration mode, a similar rise in acceptance can be found with increasing $\phi_0$, which is also visible in Figure \[fig:acceptance-overview\]*b*. 
P. Jansen, L. Semeria, L.E. Hofer, S. Scheidegger, J.A. Agner, H. Schmutz and F. Merkt, Phys. Rev. Lett. 115, 133202 (2015).
--- author: - 'Y. X. Wu' - 'W. Yu' - 'Z. Yan' - 'L. Sun' - 'T. P. Li' title: 'On the Relation of Hard X-ray Peak Flux and Outburst Waiting Time in the Black Hole Transient GX 339-4' --- [In this work we re-investigated the empirical relation between the hard X-ray peak flux and the outburst waiting time found previously in the black hole transient GX 339-4. We tested the relation using the observed hard X-ray peak flux of the 2007 outburst of GX 339-4, clarified issues about faint flares, and estimated the lower limit of hard X-ray peak flux for the next outburst. ]{} [We included Swift/BAT data obtained in the past four years. Together with the CGRO/BATSE and RXTE/HEXTE light curves, the observations used in this work cover a period of 18 years. ]{} [The observation of the 2007 outburst confirms the empirical relation discovered before. This strengthens the apparent link between the mass in the accretion disk and the peak luminosity of the brightest hard state that the black hole transient can reach. We also show that faint flares with peak fluxes smaller than about 0.12 crab do not affect the empirical relation. We predict that the hard X-ray peak flux of the next outburst should be larger than 0.65 crab, which will make it at least the second brightest in the hard X-ray since 1991.]{} INTRODUCTION ============ GX 339-4 is a black hole transient discovered more than 30 years ago. It has a mass function of $5.8~M_{\odot}$, a low mass companion star and a distance of $\gtrsim 7$ kpc [@Mar73; @Hyn03; @Sha01; @Zdz04]. It is one of the black hole transients with the most frequent outbursts [@Kon02; @Zdz04]. @Yu07 analyzed the long-term observations of GX 339-4 made by the Burst and Transient Source Experiment (BATSE) on board the [*Compton Gamma-Ray Observatory*]{} (CGRO) and the [*Rossi X-ray Timing Explorer*]{} (RXTE) from May 31, 1991 until May 23, 2005. 
They found a nearly linear relation between the peak flux of the low/hard (LH) spectral state that occurs at the beginning of an outburst and the outburst waiting time defined based on the hard X-ray flux peaks. The empirical relation indicates a link between the brightest LH state that the source can reach and the mass stored in the accretion disk before an outburst starts. Subsequently, the source underwent an outburst in 2007. The 2007 outburst and any future outbursts can be used to test and refine the empirical relation. Here we show that the hard X-ray peak flux of the 2007 outburst falls right on the empirical relation obtained by @Yu07, proving that the empirical relation indeed holds. By including the most recent monitoring observations with the Swift/BAT in the past four years, we re-examine the empirical relation and make a prediction for the hard X-ray peak flux of the next bright outburst for a given waiting time. We also clarify issues related to faint flares that have been seen in the recent past. OBSERVATION AND DATA ANALYSIS ============================= We made use of observations performed with BATSE (20–160 keV) covering May 31, 1991 to May 25, 2000, HEXTE (20–250 keV) covering January 6, 1996 to January 2, 2006, as in @Yu07, and recent monitoring results of Swift/BAT that are publicly available (15–50 keV) covering February 13, 2005 to August 31, 2009. The BATSE data were obtained in crab units. The fluxes of the Crab were 305 counts s$^{-1}$ and 0.228 counts s$^{-1}$ cm$^{-2}$ for HEXTE and BAT respectively. These values were used to convert the source fluxes into units of crab. Following the previous study [@Yu07], the light curves were rebinned to a time resolution of 10 days. It is worth noting that the X-ray fluxes quoted below all correspond to 10-day averages, including those obtained in the empirical relation and the predicted fluxes. The combined BATSE, HEXTE and BAT light curves are shown in Fig \[fig\_pkwt\]. 
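The two preprocessing steps described above, conversion of instrument count rates into crab units and 10-day rebinning, can be sketched as follows. This is a minimal illustration using the Crab count rates quoted in the text, not the actual analysis pipeline; the function names are ours.

```python
import numpy as np

# Crab count rates quoted in the text.
CRAB_HEXTE = 305.0   # counts/s (HEXTE, 20-250 keV)
CRAB_BAT = 0.228     # counts/s/cm^2 (Swift/BAT, 15-50 keV)

def to_crab(rate, crab_rate):
    """Convert an instrument count rate into crab units."""
    return np.asarray(rate) / crab_rate

def rebin_10day(time_mjd, flux):
    """Average a light curve into 10-day bins; returns bin centers and means."""
    time_mjd = np.asarray(time_mjd, dtype=float)
    flux = np.asarray(flux, dtype=float)
    nbins = int((time_mjd.max() - time_mjd.min()) // 10.0) + 1
    edges = time_mjd.min() + 10.0 * np.arange(nbins + 1)
    idx = np.digitize(time_mjd, edges) - 1
    keep = [k for k in range(nbins) if np.any(idx == k)]
    centers = np.array([time_mjd[idx == k].mean() for k in keep])
    means = np.array([flux[idx == k].mean() for k in keep])
    return centers, means
```

For example, `to_crab(305.0, CRAB_HEXTE)` returns a flux of 1 crab.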
The triangles marked with 1–8 indicate the initial hard X-ray peaks during the rising phases of outbursts 1–8, and those with $\rm 5_e$–$\rm 8_e$ indicate the ending hard X-ray peaks during the decay phases of outbursts 5–8. Outbursts 1–7 were studied in @Yu07. Outburst 8 is the 2007 outburst that occurred after the empirical relation was obtained. The waiting time of outburst 8 is determined in the same way as in the previous study, i.e., as the time separation between peaks $\rm 7_e$ and 8, where peak $\rm 7_e$ is the hard X-ray peak associated with the HS-to-LH transition. In order to show how the peaks are chosen, we also plotted the soft X-ray light curves obtained with the RXTE/ASM and the hardness ratios between the ASM and the BATSE or HEXTE or BAT fluxes in Fig \[fig\_hr\]. This explicitly shows that the hard X-ray peaks at the end of outbursts correspond to the HS-to-LH state transitions. The initial hard X-ray peak, on the other hand, is normally the first prominent one during the initial LH state. Due to the hysteresis effect of spectral state transitions [@Miy95], the source would have very low luminosity after the HS-to-LH transition during the outburst decay. We took the hard X-ray peak corresponding to the HS-to-LH state transition, such as peak $\rm 7_e$, as the end of the previous outburst, i.e., the starting time to calculate the waiting time of the following outburst (see the definition of waiting time in @Yu07). Due to the relatively low sensitivity of BATSE, flares with 10-day averaged peak flux at or below about 0.1 crab could not be identified as individual outbursts. It is therefore worth noting that the current empirical relation is determined based on outbursts with hard X-ray peak fluxes above about 0.2 crab. In recent years, with more sensitive observations of Swift/BAT, we have observed several faint flares in this source. 
These flares would not have been clearly seen in the BATSE 10-day averaged light curve and would not have been taken as individual outbursts if BATSE had still been operating. Therefore we ignored these flares although they were clearly seen with Swift/BAT. We will discuss the faint flares later on. We found that the data point of outburst 8 follows the empirical relation reported in @Yu07, as shown in the inset panel of Fig \[fig\_pkwt\]. The deviation from the empirical relation is only -0.034 crab. The linear Pearson’s correlation coefficient for all 7 data points is 0.997, again indicating a nearly linear relation between the hard X-ray peak flux $\rm F_p$ and the waiting time $\rm T_w$. A linear fit to this relation gives $\rm F_p=(9.25\pm0.06)\times 10^{-4}{\rm T_w}-(0.039\pm0.005)$, where $\rm F_p$ is in units of crab and $\rm T_w$ in units of days. This updated relation is almost identical to the one reported in @Yu07. The intrinsic scatter of the data is 0.014 crab, which defines a $\pm$0.014 crab bound on the linear relation. The intercept of the best-fitting linear model on the waiting time axis is $\rm T_w=42$ days when $\rm F_p=0$ crab. Considering the intrinsic scatter and the model uncertainty, we obtained an intercept $\rm T_w= 42\pm 20$ days. This means that the hard X-ray peak of any outburst should be at least $42\pm 20$ days after the end of the previous outburst, which is determined as the hard X-ray peak corresponding to the HS-to-LH transition. The refined empirical relation enables us to approximately estimate the hard X-ray peak flux (10-day average) for the next bright outburst in GX 339-4. The updated relation gives the peak flux of the next bright outburst as $\rm F_{p,n}=9.25\times10^{-4}~({\rm Day_{09}}+{\rm T_{rise}})+0.44$ crab, where $\rm Day_{09}$ is the number of days in 2009 when a future outburst starts and ${\rm T_{rise}}$ is the rise time in units of days for the next outburst to reach its initial hard X-ray peak. 
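The quoted linear fit can be reproduced with an ordinary least-squares fit. In the sketch below the $(\rm T_w, F_p)$ pairs are hypothetical values generated from the published relation itself, not the measured peak fluxes; the sketch simply verifies that the slope, intercept, and the roughly 42-day intercept on the waiting-time axis are mutually consistent.

```python
import numpy as np

# Hypothetical (T_w, F_p) pairs generated from the published relation
# F_p = 9.25e-4 * T_w - 0.039 (crab, days); NOT the measured peak fluxes.
T_w = np.array([250.0, 400.0, 550.0, 700.0, 850.0, 1000.0, 1150.0])
F_p = 9.25e-4 * T_w - 0.039

slope, intercept = np.polyfit(T_w, F_p, 1)   # least-squares linear fit
T_w_min = -intercept / slope                 # intercept on the T_w axis (F_p = 0)
# T_w_min comes out near 42 days, matching the value quoted in the text.
```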
The hard X-ray peak flux can be predicted almost as soon as the next outburst occurs because the rise time is a nearly constant small quantity compared with the waiting time. The source has remained inactive for about 750 days since the end of the 2007 outburst. This gives that the hard X-ray peak flux of the next outburst should be at least 0.65 crab (Fig \[fig\_pred\]), making it at least the second brightest outburst since 1991, brighter than all previous outbursts except outburst 6. Again, notice that such a prediction based on the empirical relation can only be made for an outburst brighter than about 0.12 crab. We have shown that the empirical relation holds if faint hard X-ray flares are ignored. For example, the flare of about 0.08 crab in March 2006 does not affect the peak flux of the 2007 outburst. This suggests that the flare of about 0.1 crab in March 2009 will not affect the hard X-ray peak flux of the next bright outburst significantly. The negligible effect of the faint flares on the empirical relation is also consistent with the consideration of the actual value range of $\rm T_w$. The intercept of the best-fitting linear empirical relation on the time axis indicates that the hard X-ray peak of a major, bright outburst must occur more than $42\pm20$ days after the hard X-ray peak during the decay phase of the previous outburst. However, as discussed in @Yu07, the sum of the decay time of the LH state in the previous outburst and the rise time of the LH state in the next outburst is normally about 100–150 days. Therefore in reality the minimal $\rm T_w$ for bright outbursts, as defined by @Yu07, would be 100–150 days. This corresponds to $\rm F_{p}$ in the range of $\sim0.04-0.12$ crab, which indicates a lower limit of $\rm F_{p}$ for any outburst that should be considered in the empirical relation. This might suggest that after an outburst, GX 339-4 can subsequently rise up to $\sim0.12$ crab without returning to quiescence. 
Because of their low luminosities, the faint flares correspond to only a small portion of the mass in the disk. This consistently explains why the empirical relation, without including the faint flares, can be used to estimate the hard X-ray peak flux of a bright outburst – the indicator of the disk mass. In order to get an idea of how good the estimation or prediction is, we also “predicted” the hard X-ray peak fluxes of the 2004 and 2007 outbursts with the data before the 2004 and the 2007 outburst respectively, and then compared the “predictions” with the observations (Fig \[fig\_pred\]). We then studied the deviations of the predicted values from the actual observed peak fluxes during the 2004 outburst and the 2007 outburst. The deviations are -0.012 crab and -0.034 crab, or 3.8% and 6.4%, respectively. Considering that the 10-day time binning would bring uncertainties, these predictions are extraordinarily good. The prediction made for the next bright outburst should have a similar accuracy. The hard X-ray peak of the next outburst should fall on the prediction in Fig \[fig\_pred\] with a lower limit around 0.65 crab, which is the predicted hard X-ray peak flux of an outburst if it happened at present (around MJD 55074). DISCUSSION ========== We included recent hard X-ray monitoring observations of GX 339-4 with Swift/BAT in addition to CGRO/BATSE and RXTE/HEXTE observations. We have analyzed the X-ray observations of GX 339-4 in the past 18 years following @Yu07 and re-examined the empirical relation between the hard X-ray peak flux and the outburst waiting time during bright outbursts found by @Yu07. We found that the hard X-ray peak flux of the 2007 outburst follows the empirical relation determined with observations before 2007 very well. We checked the potential influence of faint flares on the empirical relation. The empirical relation was determined based on the observations of bright outbursts, not including those faint flares below about 0.12 crab. 
The actual minimal waiting time required for an outburst to occur consistently explains why there exists a lower limit of peak flux for the outbursts studied here. A refined relation between the hard X-ray peak flux and the waiting time over the past 18 years has been obtained. Based on this relation, we can estimate the hard X-ray peak flux for the next bright outburst as soon as it starts. It has been 750 days since the end of the most recent bright outburst. Based on this, we predict that the hard X-ray peak flux should be no less than 0.65 crab. One may think that during different outbursts the properties of the accretion flow are different, such that the radiation efficiencies differ for different outbursts while the actual mass accreted is about the same. This is not the case. The correlation between the hard X-ray peak flux and the peak flux of the corresponding HS state is found to hold for individual black hole binaries and neutron star low mass X-ray binaries [@YKF04; @YD07; @YY09]. Given that the neutron star has a hard surface, the observed X-ray flux from a neutron star system should in general reflect the instantaneous mass accretion rate. Therefore outbursts with different flux amplitudes in neutron star systems should correspond to different mass accretion rates. Because the black hole systems fall on the same correlation track as the neutron star systems, the mass accretion rates should be different when GX 339-4 reaches the hard X-ray peaks during outbursts of different amplitudes. The empirical relation, confirmed by the BAT observations of the 2007 outburst, provides strong evidence that there is a link between the mass in the accretion disk and the brightest LH state that GX 339-4 can reach. The mechanism behind this link is not clear. But if the mass in the accretion disk is directly related to the production of the hard X-ray flux, then a major portion of the disk should be involved in generating the hard X-ray flux. 
Independent of such accretion geometry considerations, @YY09 have recently performed a comprehensive study of spectral state transitions in bright Galactic X-ray binaries. The results have confirmed the correlation between the LH-to-HS transition luminosity and the peak luminosity of the following soft state shown in previous studies [@YKF04; @Yu07; @YD07], and provided strong evidence that: a) non-stationary accretion plays a dominant role in generating a bright LH state and b) the rate-of-increase of the mass accretion rate can be the dominant parameter determining spectral state transitions. The empirical relation between the LH-to-HS state transition luminosity and the peak luminosity of the following HS state, together with the empirical relation studied in this paper, connects the mass in the accretion disk (the cause and initial condition) and the peak luminosity of the hard state (the result) to the rate-of-increase of the mass accretion rate, which could then be the indicator of the initial mass that influences the overall development of the hard state and the soft state and the transitions between the two. The empirical relation allows us to estimate the mass in the accretion disk before an outburst in the special source GX 339-4. The phenomenon is reminiscent of a storage mechanism working behind the scenes. This may be relevant to the phenomenon seen in solar flares, known as avalanche processes [e.g., @LH91; @Whe00], in which the magnetic field plays the major role. We would like to thank the CGRO/BATSE group at NASA Marshall Space Flight Center and the RXTE and the Swift Guest Observer Facilities at NASA Goddard Space Flight Center for providing monitoring results. WY would like to thank Robert Fender for stimulating discussions, hospitality and encouragement which sped up this work. WY also thanks Tomaso Belloni for a careful check of the BAT flux for the 2007 outburst of GX 339-4. 
This work was supported in part by the National Natural Science Foundation of China (10773023, 10833002), the One Hundred Talents project of the Chinese Academy of Sciences, the Shanghai Pujiang Program (08PJ14111), the National Basic Research Program of China (973 project 2009CB824800), and the starting funds at the Shanghai Astronomical Observatory. The study has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. Hynes, R. I., Steeghs, D., Casares, J., Charles, P. A., & O’Brien, K. 2003, , 583, L95 Kong, A. K. H., Charles, P. A., Kuulkers, E., & Kitamoto, S. 2002, , 329, 588 Lu, E. T., & Hamilton, R. J. 1991, , 380, L89 Markert, T. H., Canizares, C. R., Clark, G. W., Lewin, W. H. G., Schnopper, H. W., & Sprott, G. F. 1973, , 184, L67 Miyamoto, S., Kitamoto, S., Hayashida, K., & Egoshi, W. 1995, , 442, L13 Shahbaz, T., Fender, R., & Charles, P. A. 2001, , 376, L17 Wheatland, M. S. 2000, , 536, L109 Yu, W., & Dolence, J. 2007, , 667, 1043 Yu, W., Lamb, F. K., Fender, R., & van der Klis, M. 2007, , 663, 1309 Yu, W., van der Klis, M., & Fender, R. 2004, , 611, L121 Yu, W., & Yan, Z. 2009, accepted by Zdziarski, A. A., Gierli[ń]{}ski, M., Miko[ł]{}ajewska, J., Wardzi[ń]{}ski, G., Smith, D. M., Harmon, B. A., & Kitamoto, S. 2004, , 351, 791
--- abstract: 'In this work an iterative algorithm based on unsupervised learning is presented, specifically on a Restricted Boltzmann Machine (RBM), to solve a perfect matching problem on a bipartite weighted graph. The weights $ w_{ij} $ and the bias parameters $\theta = ( a_i, b_j) $ that maximize the energy function are calculated iteratively, assigning element $i$ to element $j$. An application to a real problem is presented to show the potential of this algorithm.' author: - | Francesco Curia\ Department of Statistical Science\ Sapienza, University of Rome\ Rome, 00185 Italy\ `francesco.curia@uniroma1.it`\ title: | Restricted Boltzmann Machine Assignment Algorithm:\ Application to solve many-to-one matching problems on weighted bipartite graph --- Introduction {#intro} ============ Assignment problems fall within the class of combinatorial optimization problems, and the problem of matching on a bipartite weighted graph is one of the major problems in this field. Numerous resolution methods and algorithms have been proposed in recent times and many have provided important results, among them, for example: constructive heuristics, meta-heuristics, approximation algorithms, hyper-heuristics, and other methods. Combinatorial optimization deals with finding the optimal solution among a finite collection of possibilities. The heart of the problem in combinatorial optimization lies in efficient algorithms, i.e., algorithms whose computation time is polynomial in the input size. Therefore, when dealing with certain combinatorial optimization problems, one must ask how quickly the optimal solution can be found and, if no such resolution method is available, which approximate methods with polynomial computation times lead to stable solutions. 
Solving this kind of problem in polynomial time has long been the focus of research in this area, until Edmonds \[1\] developed one of the most efficient methods. Over time other algorithms have been developed; for example, the fastest among them are the Micali and Vazirani algorithm \[2\], Blum \[3\], and Gabow and Tarjan \[4\]. The first of these methods is an improvement on that of Edmonds; the other algorithms use different logics, but all of them run in computational time $O(m \sqrt n)$. The problem is fundamentally the following: imagine a situation in which, based on characteristics observed for a given phenomenon, elements of one set are to be assigned to elements of another set, as in one of the best known problems, that of tasks and the workers to be assigned to them. A classical maximum cardinality matching algorithm takes the maximum-weight edge and assigns it; in a decision support system, with a domain expert in the loop, this could be acceptable, but in a totally automatic system, such as an artificial intelligence system that pairs elements on the basis of some characteristics, this approach would not be very reliable, as it totally removes the user’s control. Another problem related to this kind of situation is that of the features. Take as an example a classic flight-gate assignment problem at an airport: on the basis of the history we might have information about the flight, the gates, the time, the flight number and perhaps the airline. This is little information, and even the best feature engineering would lead to a machine learning model, specifically a classification model, that is very poor in information. Treating the same problem with classical optimization, as done so far, would lead to solving it with a maximum-weight perfect matching, and we would be back at the beginning. Matching problems {#sec:1} ================= Matching problems are among the fundamental problems in combinatorial optimization. 
In this work, we focus on the case when the underlying graph is bipartite. We start by introducing some basic graph terminology. A graph $G = (V, E)$ consists of a set $V = A \cup B$ of vertices and a set $E$ of pairs of vertices called edges. For an edge $e = (u, v)$, we say that the endpoints of $e$ are $u$ and $v$; we also say that $e$ is incident to $u$ and $v$. A graph $G = (V, E)$ is bipartite if the vertex set $V$ can be partitioned into two sets $A$ and $B$ (the bipartition) such that no edge in $E$ has both endpoints in the same set of the bipartition. A matching $M$ is a collection of edges such that every vertex of $V$ is incident to at most one edge of $M$. If a vertex $v$ has no edge of $M$ incident to it, then $v$ is said to be exposed (or unmatched). A matching is perfect if no vertex is exposed; in other words, a matching is perfect if its cardinality is equal to $|A| = |B|$. In the literature several examples from the real world have been treated, such as the assignment of children to certain schools \[5\], donors to patients \[6\] and workers to companies \[7\]. The weighted bipartite matching problem asks for the feasible matching with the maximum available weight. This problem has been applied in several areas, such as in the work of \[8\] on protein structure alignment, in computer vision as documented in the work of \[9\], or as in the paper by \[10\] in which the similarity of texts is estimated. Other works have addressed this problem in classification \[11\], \[12\] and \[13\], but not for many-to-one correspondence. The mathematical formulation can be solved by presenting it as a linear program. Each edge $(i,j)$, where $i$ is in $A$ and $j$ is in $B$, has a weight $w_{ij}$. 
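The definitions above can be made concrete with a small check. The sketch below (ours, not from the paper) verifies whether a given edge set is a perfect matching: every vertex must be covered, and none more than once.

```python
def is_perfect_matching(A, B, M):
    """Check that the edge set M is a perfect matching of the bipartite
    graph with parts A and B: every vertex is covered exactly once."""
    covered_A, covered_B = set(), set()
    for (u, v) in M:
        if u in covered_A or v in covered_B:
            return False                      # a vertex matched twice
        covered_A.add(u)
        covered_B.add(v)
    # No exposed vertices: every vertex of A and of B is covered.
    return covered_A == set(A) and covered_B == set(B)
```

For example, with `A = {1, 2}` and `B = {'a', 'b'}`, the edge set `[(1, 'a'), (2, 'b')]` is a perfect matching, while `[(1, 'a')]` leaves two vertices exposed.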
For each edge $(i,j)$ we have a decision variable $$x_{ij} =\begin{cases} 1 & \mbox{if the edge is contained in the matching} \\ 0 & \mbox{otherwise} \end{cases}$$ and we have the following LP: $$\begin{aligned} & \underset{x_{ij}}{\text{max}} \sum _{(i,j)\in A\times B}w_{ij}x_{ij} \end{aligned}$$ $$\begin{aligned} \sum _{j\in B}x_{ij}=1{\text{ for }}i\in A \end{aligned}$$ $$\begin{aligned} \sum _{i\in A}x_{ij}=1{\text{ for }}j\in B \end{aligned}$$ $$\begin{aligned} 0 \leq x_{ij} \leq 1{\text{ for }}i,j \in A,B \end{aligned}$$ $$\begin{aligned} x_{ij}\in \mathbb {Z} {\text{ for }}i,j\in A,B \end{aligned}$$ ![Bipartite Weighted Matching[]{data-label="fig:1"}](matching.PNG){width="55.00000%"} Motivation {#sec:2} ========== We consider a weighted bipartite graph $G = (V = A \cup B, W)$ in which the weights (historical data) $W = \left(w_{11}, w_{12}, \ldots, w_{ij} \right)$ connect the nodes of the set $A$ to the nodes of the set $B$. One of the most popular solutions is the Hungarian algorithm \[16\]. The maximum-weight assignment rule could therefore, in many real cases, be misleading and limiting, as well as unrealistic as a solution. Machine learning (ML) algorithms, both supervised and unsupervised, are increasingly gaining ground in applied sciences such as engineering, biology and medicine. The matching problem in this case can be seen as a set of inputs $x_1, \ldots, x_k$ (in our case the nodes of the set $A$) and a set of outputs $y_1, \ldots, y_k$ (the respective nodes of the set $B$), weighted by a series of weights $w_{11}, \ldots, w_{ij}$, which inevitably recall the structure of a classic neural network. The problem is that in this case there would be a number of classes (in the case of assignment) equal to the number of inputs.
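For small instances, the LP above can be solved exactly with off-the-shelf tools; the following is a minimal sketch using `scipy`, where the weight matrix is an illustrative made-up example rather than data from this work:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative weights w_ij for |A| = |B| = 3 (made-up values)
w = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])

# linear_sum_assignment minimises, so negate w to maximise total weight;
# the returned assignment satisfies the one-edge-per-vertex constraints
row_ind, col_ind = linear_sum_assignment(-w)
total_weight = w[row_ind, col_ind].sum()
print(list(zip(row_ind, col_ind)), total_weight)
```

This returns exactly the maximum-weight perfect matching that, as argued above, may be misleading in a fully automatic system.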
Considering it as a classic machine learning problem, the difficulty would lie on the one hand in the features and their engineering, and on the other in the very large number of classes to predict (assign). For example, in matching applicants to jobs, if we only had the name of a candidate we would have very little information to build a robust machine learning model, and even good feature engineering would not achieve much; if other information on the candidate were available, it could be extracted and used as a “weight” to build a neural network, but even in this case the constraints of a classical optimization model would not be maintained when solved with ML techniques; we would, so to speak, be forcing things a little. What we want to present in this work is instead the resolution of a classical matching (assignment) problem through the application of an ML model, in this case a neural network, which as already said maintains the mathematical structure of nodes (inputs) and arcs (weights); but instead of treating the target set $B$ as classification labels (assignments), we consider an unsupervised neural network, specifically a Restricted Boltzmann Machine. Contributions {#sec:3} ============= The contributions of this work are mainly of two types. The first is the ability to use an unsupervised machine learning model to solve a classical optimization problem which in turn has the mathematical structure of a neural network based on two layers, in our case of an RBM, a visible one and a hidden one. In this case the nodes of the set $B$ become the variables of the model, and the number of times that node $i$ has been assigned to node $j$ (for example in problems concerning historical data analysis) becomes the weight $w_{ij}$, which in turn becomes the value of the $i$-th variable in the RBM model.
The second is the ability to solve real problems, as we will see later in the article, in which a matching between elements of two sets must be carried out and the maximum-weight match is not necessarily the best assignment, especially when the problem is many-to-one, as in many real problems. Restricted Boltzmann Machine {#sec:3} ============================ The Restricted Boltzmann Machine is an unsupervised neural-network-based method \[14\]; the algorithm learns one layer of hidden features. When the number of hidden units is smaller than that of visible units, the hidden layer can deal with nonlinear complex dependency and structure of the data, capture deep relationships in the input data, and represent the input data more compactly. Assume there are $c$ visible units and $m$ hidden units in a Restricted Boltzmann Machine. Then $v_i$ for $i = 1,\ldots,c$ indicates the state of the $i$-th visible unit, where $$v_i =\begin{cases} 1 & \mbox{if the i-th term is annotated to the element } \\ 0 & \mbox{otherwise} \end{cases}$$ for $i=1,\ldots,c$; furthermore we have $$h_j =\begin{cases} 1 & \mbox{if the state of the hidden unit is active} \\ 0 & \mbox{otherwise} \end{cases}$$ for $j=1,\ldots,m$, where $w_{ij}$ is the weight associated with the connection between $v_i$ and $h_j$, and we also define the joint configuration $(v,h)$. The energy function that captures the interaction patterns between the visible layer and the hidden layer is defined as follows: $$E(v,h| \theta ) = - \sum_{i = 1}^{c} a_i v_i - \sum_{j = 1}^{m} b_j h_j - \sum_{i = 1}^{c} \sum_{j = 1}^{m} v_i h_j w_{ij}$$ where $\theta = \left( w_{ij}, a_i, b_j\right)$ are the parameters of the model: $a_i$ and $b_j$ are biases for the visible and hidden variables, respectively, and $w_{ij}$ are the weights of the connections between visible and hidden variables.
The joint probability is given by: $$p(v,h) = \frac{e^{-E(v,h)}}{Z}$$ where $$Z = \sum_{v,h} e^{-E(v,h)}$$ is a normalization constant, and the conditional distributions over the visible and hidden units are given by sigmoid functions as follows: $$p(v_i = 1 | h) = \sigma \left( \sum_{j = 1}^{m} w_{ij} h_j + a_i \right)$$ $$p(h_j = 1 | v) = \sigma \left( \sum_{i = 1}^{c} w_{ij} v_i + b_j \right)$$ where $\sigma(x) = \frac{1}{1 + e^{-x}}$. RBMs are trained to maximize the product of probabilities assigned to some training set $V$ (a matrix, each row of which is treated as a visible vector $v$): $$\begin{aligned} & \underset{w_{ij}}{\text{arg} \ \text{max}} \prod_{i=1}^c p(v_i) \end{aligned}$$ The RBM training takes place through the Contrastive Divergence algorithm (see Hinton \[15\]). Passing to the log-likelihood formulation, $$L_v = \log \sum_{h} e^{-E(v,h)} - \log \sum_ {v,h} e^{-E(v,h)}$$ and differentiating this quantity, $$\frac{\partial L_v}{\partial w_{ij}} = \sum_{h} p(h|v) \cdot v_i h_j - \sum_{v,h} p(v,h) \cdot v_i h_j$$ $$\frac{\partial L_v}{\partial w_{ij}} = \mathbb{E} [p(h|v)] - \mathbb{E}[p(v,h)]$$ In the above expression, the first term represents the expectation of $v_i \cdot h_j$ with respect to the conditional probability of the hidden states given the visible states, and the second term represents the expectation of $v_i \cdot h_j$ with respect to the joint probability of the visible and hidden states. Maximizing the likelihood above, which involves the log of a summation, admits no analytical solution, so we use stochastic gradient ascent. In order to compute the unknown weights $w_{ij}$ that maximize the likelihood, we update them by gradient ascent: $$w_{ij}^{k+1} = w_{ij}^{k} + \alpha \cdot \frac{\partial L_v}{\partial w_{ij}}$$ where $\alpha \in (0,1)$ is the learning rate.
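The conditional distributions and the gradient estimate above translate directly into one step of contrastive divergence (CD-1). The following sketch, with made-up dimensions and a toy input vector, is illustrative rather than the implementation used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy sizes: c visible units, m hidden units (illustrative)
c, m = 6, 3
W = 0.01 * rng.standard_normal((c, m))  # weights w_ij
a = np.zeros(c)                         # visible biases a_i
b = np.zeros(m)                         # hidden biases b_j
alpha = 0.1                             # learning rate

def cd1_step(v0, W, a, b):
    """One CD-1 update for a binary RBM."""
    ph0 = sigmoid(v0 @ W + b)                 # p(h_j = 1 | v)
    h0 = (rng.random(m) < ph0).astype(float)  # sample hidden states
    pv1 = sigmoid(h0 @ W.T + a)               # p(v_i = 1 | h)
    v1 = (rng.random(c) < pv1).astype(float)  # reconstruct visibles
    ph1 = sigmoid(v1 @ W + b)
    # Gradient estimate: <v_i h_j>_data - <v_i h_j>_model
    dW = np.outer(v0, ph0) - np.outer(v1, ph1)
    return W + alpha * dW, a + alpha * (v0 - v1), b + alpha * (ph0 - ph1)

v = (rng.random(c) < 0.5).astype(float)
W, a, b = cd1_step(v, W, a, b)
```

Each call approximates the stochastic gradient ascent update above by replacing the intractable joint expectation with a one-step Gibbs reconstruction.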
To this formulation we can add a penalty term and obtain the following: $$w_{ij}^{k+1} = w_{ij}^{k} + \alpha \cdot \frac{\partial L_v}{\partial w_{ij}} + \eta \cdot \frac{w_{ij}^{k-1}}{w_{ij}^k}$$ where $\eta > 0$ is a penalty parameter and $\eta \cdot \frac{w_{ij}^{k-1}}{w_{ij}^k}$ measures the contribution of the weight variation at the $k$-th update step. Restricted Boltzmann Machine Assignment Algorithm (RBMAA) {#sec:4} ========================================================= In this section the RBM-based algorithm is presented and its individual steps are explained. 1. The algorithm takes as input the matrix $W$ of assignment weights, where the weight $w_{ij}$ represents the number of times that element $i$ has been assigned to element $j$. 2. The matrix $W$ is binarized: each element becomes 1 if $w_{ij} > 0$ and 0 otherwise, producing a 0-1 matrix $\tilde W$. 3. The RBM is applied taking the binary matrix $\tilde W$ as input. The product of probabilities related to the visible units of the RBM is maximized (13). The weights $w_{ij}$ are updated according to (17) and the biases according to the RBM training rule. Once these updated values are obtained, each optimized value $\hat p (v_i)$ is set to 1 if it is greater than a threshold $\epsilon > 0$ and to 0 otherwise, and the RBM iteration is restarted until each row of the matrix contains a single value 1, which corresponds to an assignment. The quantile of level $\alpha = $ 0.99 was used to determine the threshold $\epsilon$. 4. The output of the algorithm is a 0-1 matrix in which each row contains a single value equal to 1, equivalent to the assignment of element $i$ to element $j$. The pseudocode is presented in the next section.
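The four steps can be sketched as a simple loop. In the sketch below, the `rbm_visible_probs` callable is a hypothetical placeholder standing in for one full RBM training pass (step 3), and the quantile handling is likewise only illustrative:

```python
import numpy as np

def rbmaa(W, rbm_visible_probs, alpha_q=0.99, max_iter=100):
    """Sketch of the RBMAA loop: binarise, train RBM, threshold, repeat.

    W: (n x m) matrix of assignment counts w_ij.
    rbm_visible_probs: callable returning the optimised probabilities
    p(v_i) per entry; it stands in for one RBM training pass (step 3).
    """
    # Step 2: binarise the weight matrix
    Wb = (W > 0).astype(float)
    for _ in range(max_iter):
        # Step 3: optimised visible-unit probabilities from the RBM
        p = rbm_visible_probs(Wb)
        # Threshold at the alpha_q quantile of the probabilities
        eps = np.quantile(p, alpha_q)
        Wb = (p > eps).astype(float)
        # Step 4: stop when every row has exactly one assignment
        if np.all(Wb.sum(axis=1) == 1):
            break
    return Wb
```

Any routine returning per-entry probabilities can be plugged in; the loop only encodes the binarise-threshold-repeat structure of the algorithm.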
**Input**:\ $\epsilon$, threshold value\ $b_j$, hidden units bias value\ $a_i$, visible units bias value\ $\alpha$, learning rate for visible bias updating\ $\beta$, learning rate for hidden bias updating\ Matrix $n \times m$, $W = \{ w_{ij} \}$, the number of times element $i$ has been assigned to element $j$ \ Application of the RBMAA to a real problem {#sec:5} ========================================== We now provide the results of the application of the algorithm (see Appendix). The problem instance has 351 elements in the set $A$ and 35 in the set $B$; the goal is to assign to each element of $A$ one and only one element of $B$, so as to have each row sum equal to 1, as in (3) and in step 4 of the algorithm. The weights $w_{ij}$ are the number of times the element $a_i \in A$ has been assigned to the element $b_j \in B$, based on a set of historical data on flight-gate assignments at a well-known international airport. The difficulty is the one discussed in the first part of the work: we want to obtain a robust machine learning algorithm that classifies and assigns the respective gate to each flight. Starting from the available features, the algorithm presented in this work was implemented. The computational results are very promising in terms of both calculation speed and assignment quality. Conclusions and discussion {#sec:6} ========================== This can be the starting point for more precise, fast and sophisticated algorithms that combine combinatorial optimization with machine learning on the basis of unsupervised learning, and not only the optimization of cost functions. J. Edmonds. “Paths, trees and flowers”. Canadian Journal of Mathematics, 17: 449-467, 1965. S. Micali and V. V. Vazirani. “An $O(\sqrt{|V|} \cdot |E|)$ algorithm for finding maximum matching in general graphs”. In Proceedings of the Twenty-First Annual IEEE Symposium on Foundations of Computer Science, 1980. N. Blum.
“A new approach to maximum matching in general graphs”. In Proc. 17th ICALP, volume 443 of Lecture Notes in Computer Science, pages 586-597. Springer-Verlag, 1990. H. N. Gabow and R. E. Tarjan. “Faster scaling algorithms for general graph matching problems”. J. ACM, 38(4):815-853, 1991. Ryoji Kurata, Masahiro Goto, Atsushi Iwasaki, and Makoto Yokoo. “Controlled school choice with soft bounds and overlapping types”. In AAAI Conference on Artificial Intelligence (AAAI), 2015. Dimitris Bertsimas, Vivek F. Farias, and Nikolaos Trichakis. “Fairness, efficiency, and flexibility in organ allocation for kidney transplantation”. Operations Research, 61(1):73–87, 2013. John Joseph Horton. “The effects of algorithmic labor market recommendations: evidence from a field experiment”. To appear, Journal of Labor Economics, 2017. E. Krissinel and K. Henrick. “Secondary-structure matching (SSM), a new tool for fast protein structure alignment in three dimensions”. Acta Crystallographica Section D: Biological Crystallography, 60(12):2256–2268, 2004. Serge Belongie, Jitendra Malik, and Jan Puzicha. “Shape matching and object recognition using shape contexts”. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(4):509–522, 2002. Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengxian Wan, and Xueqi Cheng. “Text matching as image recognition”. In AAAI Conference on Artificial Intelligence (AAAI), 2016. Gediminas Adomavicius and YoungOk Kwon. “Improving aggregate recommendation diversity using ranking-based techniques”. IEEE Transactions on Knowledge and Data Engineering (TKDE), 24(5):896–911, 2012. Chaofeng Sha, Xiaowei Wu, and Junyu Niu. “A framework for recommending relevant and diverse items”. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2016. Azin Ashkan, Branislav Kveton, Shlomo Berkovsky, and Zheng Wen. “Optimal greedy diversity for recommendation”.
In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 1742–1748, 2015. A. Fischer and C. Igel. “An Introduction to Restricted Boltzmann Machines”. In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, vol. 7441 of Lecture Notes in Computer Science, pp. 14–36, Springer Berlin Heidelberg, Berlin, Heidelberg, 2012. G. E. Hinton. “A Practical Guide to Training Restricted Boltzmann Machines”. Technical Report, Department of Computer Science, University of Toronto, 2010. H. W. Kuhn. “On the origin of the Hungarian method for the assignment problem”. In J. K. Lenstra, A. H. G. Rinnooy Kan, and A. Schrijver, editors, History of Mathematical Programming, Amsterdam, North-Holland, 1991, pp. 77-81. Appendix ========
--- abstract: 'The odd parity gravitational Quasi-Normal Mode spectrum of black holes with non-trivial scalar hair in Horndeski gravity is investigated. We study ‘almost’ Schwarzschild black holes such that any modifications to the spacetime geometry (including the scalar field profile) are treated perturbatively. A modified Regge-Wheeler style equation for the odd parity gravitational degree of freedom is presented to quadratic order in the scalar hair and spacetime modifications, and a parameterisation of the modified Quasi-Normal Mode spectrum is calculated. In addition, statistical error estimates for the new hairy parameters of the black hole and scalar field are given.' author: - 'Oliver J. Tattersall' bibliography: - 'RefModifiedGravity.bib' date: 'Received ; published – 00, 0000' title: 'Quasi-Normal Modes of Hairy Scalar Tensor Black Holes: Odd Parity' --- Introduction ============ Gravitational wave (GW) astronomy is now in full swing, thanks to numerous and frequent observations of compact object mergers by advanced LIGO and VIRGO [@LIGOScientific:2018mvr]. With next generation ground and space based GW detectors on the horizon, the prospect of performing black hole spectroscopy (BHS) [@Dreyer:2003bv; @Berti:2005ys; @Gossan:2011ha; @Meidam:2014jpa; @Berti:2015itd; @Berti:2016lat; @Berti:2018vdi; @Baibhav:2018rfk; @2019arXiv190209199B; @Giesler:2019uxc; @Bhagwat:2019dtm; @Bhagwat:2019bwv; @Maselli:2019mjd; @Ota:2019bzl; @Cabero:2019zyt] (the gravitational analog to atomic spectroscopy) is tantalisingly close. With BHS, one aims to discern multiple distinct frequencies of gravitational waves emitted during the ringdown of the highly perturbed remnant black hole of a merger event. These frequencies, known as Quasi-Normal Modes (QNMs), act as fingerprints for a black hole, being dependent on both the background properties of a black hole (e.g. 
its mass) and on the laws of gravity [@1975RSPSA.343..289C; @0264-9381-16-12-201; @Kokkotas:1999bd; @Berti:2009kk; @Konoplya:2011qq]. In General Relativity (GR), the QNM spectrum of a Kerr black hole is entirely determined by its mass and angular momentum, and the black hole is said to have no further ‘hairs’ [@Kerr:1963ud; @Israel:1967wq; @Israel:1967za; @Carter:1971zc; @1972CMaPh..25..152H; @PhysRevD.5.2403]. Thus the detection of multiple QNMs in the ringdown portion of a gravitational wave signal allows a consistency check between the inferred values of $M$ and $J$ from each frequency. In gravity theories other than GR, however, the situation can be markedly different. For example, black holes may not be described by the Kerr solution, and may have properties other than mass or angular momentum that affect its QNM spectrum. Such black holes are said to have ‘hair’ and, despite no-hair theorems existing for various facets of modified gravity, finding and studying hairy black hole solutions is at the forefront of strong gravity research [@Blazquez-Salcedo:2016enn; @Blazquez-Salcedo:2017txk; @Silva:2017uqg; @Antoniou:2017acq; @Antoniou:2017hxj; @Bakopoulos:2018nui; @Silva:2018qhn; @Minamitsuji:2018xde; @Sullivan:2019vyi; @Ripley:2019irj; @Macedo:2019sem; @Konoplya:2001ji; @Dong:2017toi; @Endlich:2017tqa; @Cardoso:2018ptl; @Brito:2018hjh; @Franciolini:2018uyq; @Okounkova:2019zjf]. On the other hand, even if black holes in modified gravity theories are described by the same background solution as in GR (i.e. they have no hair), their perturbations may obey modified equations of motion that alter the emitted gravitational wave signal [@Barausse:2008xv; @Molina:2010fb; @Tattersall:2017erk; @Tattersall:2018nve; @Tattersall:2019pvx]. In this paper we will investigate the first possibility, where modified gravity black holes are altered from their usual description in GR due to their interactions with new gravitational fields. 
We will, however, assume that black holes are (to first order at least) well described by the GR solutions, and any modifications to the background spacetime are treated perturbatively. As various observations appear to suggest that black holes are well described by the suite of GR solutions [@2016PhRvL.116v1101A; @Isi:2019aib], this approach seems sensible. In this way we can treat the new modified QNM spectrum of these hairy black holes as a small correction to the original GR spectrum, greatly simplifying the analytical and numerical analysis. We will specifically focus on the Horndeski family of scalar-tensor theories of gravity [@Horndeski:1974wa], where a new gravitational scalar field interacts non-minimally with the metric. Furthermore, for simplicity, we will restrict ourselves to looking only at the odd parity sector of perturbations to spherically symmetric black holes, i.e. we will assume that the black holes studied here are described by a slightly modified Schwarzschild metric. The extension of this work to the even parity sector of spherically symmetric black holes, and to include the effects of rotation, are left as future exercises. *Summary*: In section \[horndeskisection\] we will introduce the action for Horndeski gravity, the hairy black hole metric and scalar field profile that we are considering, and explore the odd parity gravitational perturbations of this system. In section \[QNMsection\] we will utilise the results of [@Cardoso:2019mqo] to calculate the modified QNM spectrum of the modified black hole, and provide observational error estimates for the new hairy parameters. We will then conclude with a discussion of the results presented here. Throughout we will use natural units with $G=c=1$, except where otherwise stated. The metric signature will be mostly positive. 
Horndeski Gravity {#horndeskisection} ================= Background ---------- A general action for scalar-tensor gravity with 2$^{nd}$ order-derivative equations of motion is given by the Horndeski action [@Horndeski:1974wa; @Kobayashi:2011nu]: $$\begin{aligned} S=\int d^4x\sqrt{-g}\sum_{n=2}^5L_n,\label{Shorndeski}\end{aligned}$$ where the component Horndeski Lagrangians are given by: $$\begin{aligned} L_2&=G_2(\phi,X)\nonumber\\ L_3&=-G_3(\phi,X)\Box \phi\nonumber\\ L_4&=G_4(\phi,X)R+G_{4X}(\phi,X)((\Box\phi)^2-\phi^{\alpha\beta}\phi_{\alpha\beta} )\nonumber\\ L_5&=G_5(\phi,X)G_{\alpha\beta}\phi^{\alpha\beta}-\frac{1}{6}G_{5X}(\phi,X)((\Box\phi)^3 \nonumber\\ & -3\phi^{\alpha\beta}\phi_{\alpha\beta}\Box\phi +2 \phi_{\alpha\beta}\phi^{\alpha\sigma}\phi^{\beta}_{\sigma}),\end{aligned}$$ where $\phi$ is the scalar field with kinetic term $X=-\phi_\alpha\phi^\alpha/2$, $\phi_\alpha=\nabla_\alpha\phi$, $\phi_{\alpha\beta}=\nabla_\alpha\nabla_\beta\phi$, and $G_{\alpha\beta}=R_{\alpha\beta}-\frac{1}{2}R\,g_{\alpha\beta}$ is the Einstein tensor. The $G_i$ are arbitrary functions of $\phi$ and $X$, with derivatives $G_{iX}$ with respect to $X$. GR is given by the choice $G_4=M_{P}^2/2$ with all other $G_i$ vanishing and $M_{P}$ being the reduced Planck mass. Note that eq. (\[Shorndeski\]) is *not* the most general action for scalar-tensor theories, and it has been shown that it can be extended to an arbitrary number of terms [@Zumalacarregui:2013pma; @Gleyzes:2014qga; @Gleyzes:2014dya; @Achour:2016rkg]. For a spherically symmetric black hole solution in Horndeski gravity we assume the following form for the metric $g$ and scalar field $\phi$ in ‘Schwarzschild-like’ coordinates: $$\begin{aligned} ds^2=&\;g_{\mu\nu}dx^\mu dx^\nu=\;-A(r)dt^2+B(r)^{-1}dr^2+C(r)d\Omega^2\\ \phi=&\;\phi(r)\end{aligned}$$ where $d\Omega^2$ is the metric on the unit 2-sphere. 
Our starting point will be a hairless Schwarzschild solution, as in GR, such that $A=B=1-2M/r$, $C=r^2$ and $\phi=\phi_0=const$, where $M$ is the mass of the black hole. We will now introduce perturbatively small ‘hair’ in both the spacetime geometry and in the scalar field profile, leading to a modified ‘almost’ Schwarzschild black hole. Using $\epsilon$ as a book keeping parameter to track the perturbative order of our expansion, we make the following ansatz to second order in $\epsilon$: \[ansatz\] $$\begin{aligned} A(r)=&\;B(r)=1-\frac{2M}{r}+\epsilon \delta A_1(r) + \epsilon^2 \delta A_2(r)+\mathcal{O}(\epsilon^3)\label{gbackground}\\ C(r)=&\;\left(1+\epsilon \delta C_1(r) + \epsilon^2 \delta C_2(r)\right) r^2 + \mathcal{O}(\epsilon^3)\\ \phi(r)=&\;\phi_0+\epsilon \delta \phi_1(r)+ \epsilon^2 \delta\phi_2(r)+\mathcal{O}(\epsilon^3),\label{phibackground}\end{aligned}$$ where we are remaining agnostic as to the exact form of the modifications, merely supposing that such perturbations could exist. We include second order effects to account for the possibility that a small first order modification to the scalar profile $\delta\phi_1$ may only back-react onto the metric at $\mathcal{O}(\epsilon^2)$ due to the effective energy momentum tensor sourcing the metric being quadratic in $\phi$. Nevertheless we leave open the possibility that the metric is also perturbed at first order in $\epsilon$ through some non-minimal coupling. Black Hole Perturbations {#perturbsec} ------------------------ We now consider odd parity perturbations to the ‘almost Schwarzschild’ black hole described by eq. (\[gbackground\]) - (\[phibackground\]). For simplicity we will only be considering odd parity perturbations, and as such we do not need to consider the coupling of the metric perturbations to the scalar degree of freedom, but rather only the odd parity metric degree of freedom. 
An analysis of the even parity sector for perturbatively hairy black holes in Horndeski gravity is left as a future extension to this work; the stability of generic spherically symmetric black holes in Horndeski gravity was studied in [@Kobayashi:2012kh; @Kobayashi:2014wsa], whilst [@Franciolini:2018uyq] builds an effective field theory for QNMs in scalar-tensor gravity in the unitary gauge. In the Regge-Wheeler gauge [@Regge:1957td], odd parity perturbations $h_{\mu\nu}$ to the metric $g_{\mu\nu}$ can be written in the following way: $$\begin{aligned} h_{\mu\nu,\ell m}^{odd}=& \begin{pmatrix} 0&0&h_0(r)B^{\ell m}_\theta&h_0(r)B^{\ell m}_\phi\\ 0&0&h_1(r)B^{\ell m}_\theta&h_1(r)B^{\ell m}_\phi\\ sym&sym&0&0\\ sym&sym&0&0 \end{pmatrix}e^{-i\omega t}\end{aligned}$$ where $sym$ indicates a symmetric entry, $B^{\ell m}_\mu$ is the odd parity vector spherical harmonic and $Y^{\ell m}$ is the standard scalar spherical harmonic: $$\begin{aligned} B^{\ell m}_\theta = -\frac{1}{\sin\theta}\frac{\partial}{\partial\phi}Y^{\ell m},\quad B^{\ell m}_\phi = \sin\theta\frac{\partial}{\partial\theta}Y^{\ell m}.\end{aligned}$$ Through manipulation of the perturbed Horndeski equations of motion one can show that $h_0$ becomes an auxiliary field, whilst a redefined field $Q(h_1)$ is shown in [@Ganguly:2017ort] to obey the following equation of motion: $$\begin{aligned} \left[\frac{d^2}{dr_\ast^2}+\frac{\mathcal{F}}{\mathcal{G}}\omega^2-\mathscr{V}\right]Q=0\label{RWgen}\end{aligned}$$ where $r_\ast$ is the tortoise coordinate defined by $dr=\sqrt{AB}dr_\ast$, and the potential $\mathscr{V}$ is given by: $$\begin{aligned} \mathscr{V}=&\;\ell(\ell+1)\frac{A}{C}\frac{\mathcal{F}}{\mathcal{H}}-\frac{C^2}{4C^\prime}\left(\frac{ABC^{\prime 2}}{C^3}\right)^\prime-\frac{C^2\mathcal{F}^2}{4\mathcal{F}^\prime}\left(\frac{AB\mathcal{F}^{\prime 2}}{C^2\mathcal{F}^3}\right)^\prime -\frac{2A\mathcal{F}}{C\mathcal{H}}.\label{Vgen}\end{aligned}$$ In the above a prime denotes a derivative with respect 
to $r$ and the functions $\mathcal{F}$, $\mathcal{G}$, and $\mathcal{H}$ are combinations of the Horndeski $G_i$ functions evaluated at the level of the background: $$\begin{aligned} \mathcal{F}=&\;2\left(G_4+\frac{1}{2}B\phi^\prime X^\prime G_{5X}-X G_{5\phi}\right)\\ \mathcal{G}=&\;2\left[G_4-2XG_{4X}+X\left(\frac{A^\prime}{2A}B\phi^\prime G_{5X}+G_{5\phi}\right)\right]\\ \mathcal{H}=&\;2\left[G_4-2XG_{4X}+X\left(\frac{C^\prime}{2C}B\phi^\prime G_{5X}+G_{5\phi}\right)\right].\end{aligned}$$ Furthermore note that we have suppressed spherical harmonic indices for compactness, but eq. (\[RWgen\]) is assumed to hold for each $\ell$. Eq. (\[RWgen\]) is the analog of the Regge-Wheeler equation [@Regge:1957td] for a generic spherically symmetric black hole in Horndeski gravity. Imposing the boundary conditions that gravitational radiation should be purely ‘ingoing’ at the black hole horizon, and purely ‘outgoing’ at spatial infinity, one can find the discrete spectrum of QNM frequencies $\omega$ that satisfies eq. (\[RWgen\]). We now Taylor expand all of the terms in eq. (\[RWgen\]) to $O(\epsilon^2)$ using eq. (\[gbackground\]) - (\[phibackground\]) to take into account the effects of the perturbative black hole hair that we introduced in eq.
(\[ansatz\]), resulting in the following: $$\begin{aligned} \left[\frac{d^2}{dr_\ast^2}+\omega^2\left(1+\epsilon^2\alpha_{T}(r)\right)-A(r)\left(\frac{\ell(\ell+1)}{r^2}-\frac{6M}{r^3}+\epsilon\delta V_1+\epsilon^2\delta V_2 \right)\right]Q=0\label{RWhairy}\end{aligned}$$ where $\alpha_T$ is the speed excess of gravitational waves [@DeFelice:2011bh; @Bellini:2014fua] given by, to $O(\epsilon^2)$: $$\begin{aligned} \alpha_T(r)=&\;-\left(1-\frac{2M}{r}\right)\frac{G_{4X}-G_{5\phi}}{G_4}\delta\phi_1^{\prime 2},\label{alphaT}\end{aligned}$$ whilst the potential perturbations are given by: \[deltaVs\] $$\begin{aligned} \delta V_1=&\;\frac{1}{2r^2}\left[4\delta A_1-2r\delta A_1^\prime-2(\ell+2)(\ell-1)\delta C_1+2(r-3M)\delta C_1^\prime-r(r-2M)\delta C_1^{\prime\prime}-\frac{G_{4\phi}}{G_4}\left(r\left(r-2M\right)\delta\phi_1^{\prime\prime}-2(r-3M)\delta\phi_1^\prime\right)\right]\label{deltaV1}\\ \delta V_2=&\;\frac{1}{4r^2}\left[8\delta A_2-4r\delta A_2^\prime+4(\ell+2)(\ell-1)\left(\delta C_1^2-\delta C_2\right)+3r(r-2M)\delta C_1^{\prime2}+4(r-3M)\delta C_2^\prime-2r(r-2M)\delta C_2^{\prime\prime}+4r\delta A_1\delta C_1^\prime\right.\nonumber\\ &\left.-2r^2\left(\delta A_1^\prime \delta C_1^\prime+\delta A_1\delta C_1^{\prime\prime}\right)-4(r-3M)\delta C_1\delta C_1^{\prime\prime}+2r(r-2M)\delta C_1 \delta C_1^{\prime\prime}\right]\nonumber\\ &-\frac{1}{2r^2}\frac{G_{4\phi}}{G_4}\left[-2(r-3M)\delta\phi_2^\prime + r \left( r\delta A_1^\prime\delta\phi_1^\prime -\delta A_1\left(2\delta \phi_1^\prime - r \delta\phi_1^{\prime\prime}\right)+(r-2M)\left(\delta\phi_2^{\prime\prime}-\delta C_1^\prime \delta\phi_1^\prime\right)\right)\right]\nonumber\\ & + \frac{1}{4r^2}\left(\frac{G_{4\phi}}{G_4}\right)^2\left[3r(r-2M)\delta\phi_1^{\prime 2}+2\delta\phi_1\left(r(r-2M)\delta\phi_1^{\prime\prime}-2(r-3M)\delta\phi_1^\prime\right)\right]\nonumber\\ & - \frac{1}{2r^2}\frac{G_{4\phi\phi}}{G_4}\left[r(r-2M)\delta\phi_1^{\prime 
2}+\delta\phi_1\left(r(r-2M)\delta\phi_1^{\prime\prime}-2(r-3M)\delta\phi_1^\prime\right)\right]\nonumber\\ & - \frac{\alpha_T(r)}{2r^3}\left[-5M+Mr(r-2M)^{-1}-2r(\ell+2)(\ell-1)+r^2(r-2M)\left(\frac{\delta\phi_1^{\prime\prime}}{\delta\phi_1^\prime}\right)^2 + r\left(r(r-2M)\frac{\delta\phi_1^{\prime\prime\prime}}{\delta\phi_1^\prime}-2(r-5M)\frac{\delta\phi_1^{\prime\prime}}{\delta\phi_1^\prime}\right)\right].\label{deltaV2}\end{aligned}$$ We emphasise that in the above expressions all of the $G_i$ Horndeski functions are evaluated at $\phi=\phi_0$ and $X=0$ (i.e. to zeroth order in the book-keeping parameter $\epsilon$), and as such are *constants*. Note that this approach assumes that the $G_i$ are amenable to an expansion around $\phi=\phi_0$ and $X=0$; this is not the case for Einstein-scalar-Gauss-Bonnet gravity, for example, where the $G_i$ include $\log |X|$ terms [@Kobayashi:2011nu]. As expected, to $\mathcal{O}(\epsilon^0)$ eq. (\[RWhairy\]) is simply the well-known Regge-Wheeler equation describing odd parity gravitational perturbations to a Schwarzschild black hole [@Regge:1957td]. At $\mathcal{O}(\epsilon)$ the effective potential of the Regge-Wheeler equation is modified by $\delta V_1$, which is linear in the first order modifications to the spacetime geometry and scalar profile (and their derivatives). Our expression for $\delta V_1$ with $\delta\phi_1=0$ matches that of eq. (5.9) in [@Franciolini:2018uyq], which concerns perturbations of hairy black holes in the unitary gauge (i.e. with $\delta\phi_1=0$). At $\mathcal{O}(\epsilon^2)$, the potential is further modified by $\delta V_2$, which is quadratic in first order ‘hairy’ terms, and linear in the second order modifications.
Furthermore, at second order in the perturbative expansion, we see that the frequency term $\omega^2$ is rescaled by a factor of $c_T=1+\epsilon^2\alpha_T$ where $c_T$ is the propagation speed of gravitational waves in Horndeski gravity [@DeFelice:2011bh; @Bellini:2014fua]. Eqs. (\[RWhairy\]) - (\[deltaVs\]) are the main results of this section. In the next section, we will explore how the modifications introduced to eq. (\[RWhairy\]) by our perturbative hair approach affect the spectrum of QNM frequencies $\omega$ of the black hole. A note of interest, however, is that in the $\omega=0$ limit, eq. (\[RWgen\]) could be used to study the tidal deformation of black holes in Horndeski gravity. Parameterised QNM Spectrum {#QNMsection} ========================== In [@Cardoso:2019mqo] (henceforth referred to as Cardoso et al) a formalism is developed such that, given a Schrödinger-style QNM equation: $$\begin{aligned} \left[f(r)\frac{d}{dr}\left(f(r)\frac{d}{dr}\right)+\omega^2-f(r)\tilde{V}\right]\psi=0\label{RWcardoso}\end{aligned}$$ where $f(r)=1-r_H/r$ with $r_H$ the horizon radius, and $\tilde{V}$ is a modified Regge-Wheeler potential in the following form: $$\begin{aligned} \tilde{V}=\frac{\ell(\ell+1)}{r^2}-\frac{6M}{r^3}+\frac{1}{r_H^2}\sum_{j=0}^{\infty}\alpha_j\left(\frac{r_H}{r}\right)^j,\end{aligned}$$ the spectrum of frequencies $\omega$ can be described in terms of corrections to the standard GR QNM spectrum. The new frequencies are given by: $$\begin{aligned} \omega = \; \omega_{\,0} + \sum_{j=0}^{\infty}\alpha_j e_j\end{aligned}$$ where $\omega_{\,0}$ is the unperturbed GR frequency and the $e_j$ are a ‘basis set’ of complex numbers which have been calculated using high precision direct integration of the equations of motion (the reader should consult [@Cardoso:2019mqo] for a detailed explanation of this formalism).
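Once the basis numbers $e_j$ are in hand, evaluating the corrected frequency is a small complex sum. In the sketch below the $\alpha_j$ and $e_j$ values are made-up placeholders (the genuine basis must be taken from the tabulated results of Cardoso et al), while $\omega_0$ is the familiar $\ell=2$ fundamental Schwarzschild mode in units $M=1$:

```python
# Sketch of omega = omega_0 + sum_j alpha_j * e_j.
# The alpha_j and e_j below are illustrative placeholders only.
omega_0 = 0.37367 - 0.08896j                   # l = 2, n = 0 Schwarzschild mode (M = 1)
alphas = {2: 1.0e-3, 3: -5.0e-4}               # hypothetical potential coefficients
e_basis = {2: 0.15 + 0.02j, 3: 0.09 - 0.01j}   # placeholder basis numbers

omega = omega_0 + sum(alphas[j] * e_basis[j] for j in alphas)
print(omega)
```

The real part shifts the ringdown frequency and the imaginary part the damping time, both linearly in the $\alpha_j$.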
We will now use this approach to calculate the modifications to the QNM spectrum induced by the perturbative black hole hair (with an appropriate power law ansatz for the $\delta (A,C,\phi)_i$). For simplicity and compactness we will present results to only first order in the book-keeping parameter $\epsilon$, such that we are seeking the leading order corrections to $\omega$ in the following form: $$\begin{aligned} \omega=\;\omega_{\,0} + \epsilon \omega_1.\end{aligned}$$ First, however, we must make sure that our eq. (\[RWhairy\]) is transformed into the same form as eq. (\[RWcardoso\]) so that we can correctly read off the $\alpha_j$ coefficients. Equation manipulation --------------------- To first order in $\epsilon$, the modified Regge-Wheeler equation is given by $$\begin{aligned} \left[A(r)\frac{d}{dr}\left(A(r)\frac{d}{dr}\right)+\omega^2-A(r)\overline{V}\right]Q=0\label{RWnat}\end{aligned}$$ where the effective potential $\overline{V}$ is given by: $$\begin{aligned} \overline{V}=\frac{\ell(\ell+1)}{r^2}-\frac{6M}{r^3}+\epsilon\delta V_1\label{Vbar}\end{aligned}$$ Following the procedure introduced in Cardoso et al, the first step to obtain an equation in the form of eq. (\[RWcardoso\]) is to write: $$\begin{aligned} A(r)=f(r)Z(r)\label{AtoF}\end{aligned}$$ where $f(r)=1-r_H/r$, and find appropriate expressions for $r_H$ and $Z$ to $O(\epsilon)$. The location of the horizon in our modified spacetime will not be exactly at $r=2M$, but will be corrected due to $\delta A_1$. We thus make the following expansion for the horizon radius: $$\begin{aligned} r_H=2M+\epsilon \delta r_{H,1}.\end{aligned}$$ To find the new position of the horizon, we require $A(r_H)=0$.
Solving order by order in $\epsilon$, we find the following for the location of the horizon: $$\begin{aligned} \delta r_{H,1}=&\;-2M\delta A_1(2M)\label{deltarH}\end{aligned}$$ with $Z$ thus given by: $$\begin{aligned} Z(r)=&\; 1+ \epsilon \delta Z_1\nonumber\\ =&\; 1+ \epsilon \frac{\delta A_1(r)-\frac{2M}{r}\delta A_1(2M)}{1-2M/r}\end{aligned}$$ in order to make eq. (\[AtoF\]) hold to $O(\epsilon)$. If we now define $\tilde{Q}=\sqrt{Z}Q$, we transform eq. (\[RWnat\]) into $$\begin{aligned} \left[f(r)\frac{d}{dr}\left(f(r)\frac{d}{dr}\right)+\frac{\omega^2}{Z^2}-f(r)V\right]\tilde{Q}=0\label{RWmod}\end{aligned}$$ where the new potential $V$ is given by: $$\begin{aligned} V=\frac{\overline{V}}{Z}-\frac{f\left(Z^\prime\right)^2-2Z(fZ^\prime)^\prime}{4Z^2}.\end{aligned}$$ and $\overline{V}$ is still given by eq. (\[Vbar\]). We can expand the $\omega^2$ term in eq. (\[RWmod\]) to $O(\epsilon)$ and write it in the following way: $$\begin{aligned} \frac{\omega^2}{Z^2}=&\;\omega^2(1-2\epsilon\delta Z_1(r_H))-2\epsilon\omega^2(\delta Z_1(r)-\delta Z_1(r_H))\label{omegamod}\end{aligned}$$ The first term on the right hand side of eq. (\[omegamod\]) can be seen as a (constant) rescaling of the frequencies, whilst the second term can be absorbed into the perturbed potential $V$ by setting $\omega=\omega_{\,0}$. The final form of the modified Regge Wheeler equation is now in the same form as eq. (\[RWcardoso\]): $$\begin{aligned} \left[f(r)\frac{d}{dr}\left(f(r)\frac{d}{dr}\right)+\tilde{\omega}^2-f(r)\tilde{V}\right]\tilde{Q}=0\end{aligned}$$ where $$\begin{aligned} \tilde{\omega}^2=&\;\omega^2(1-2\delta Z_1(r_H))\\ \tilde{V}=&\;V+\frac{2\omega_0^2}{f(r)}(\delta Z_1(r)-\delta Z_1(r_H))\label{Vomega}\end{aligned}$$ The final step before we are able to calculate numerically the modified QNM spectrum of our hairy Horndeski black holes is to assume an appropriate functional form for $\delta A_i$, $\delta C_i$ and $\delta \phi_i$. 
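The chain of substitutions above is easy to check numerically. The sketch below uses an illustrative choice of $\delta A_1$ (not a solution of the field equations) to verify that the horizon shift of eq. (\[deltarH\]) makes $A(r_H)$ vanish to first order, and that $f(r)Z(r)$ reproduces $A(r)$ up to $\mathcal{O}(\epsilon^2)$ terms:

```python
# Numerical sanity check of the O(epsilon) transformation pieces:
# delta r_{H,1} = -2M * deltaA_1(2M)   and   A(r) = f(r) Z(r) + O(eps^2).
M = 1.0
eps = 1e-4                        # book-keeping parameter, kept small

def deltaA1(r):
    return 0.3 * (2 * M / r) ** 2  # sample first-order metric modification

def A(r):
    return 1.0 - 2 * M / r + eps * deltaA1(r)

# First-order horizon shift, eq. (deltarH)
rH = 2 * M - 2 * M * eps * deltaA1(2 * M)
print(abs(A(rH)))                 # O(eps^2), not O(eps)

def f(r):
    return 1.0 - rH / r

def Z(r):
    dZ1 = (deltaA1(r) - (2 * M / r) * deltaA1(2 * M)) / (1.0 - 2 * M / r)
    return 1.0 + eps * dZ1

r = 3.7 * M                       # arbitrary point outside the horizon
print(abs(f(r) * Z(r) - A(r)))    # again O(eps^2)
```

Both residuals scale as $\epsilon^2$, confirming that the expansion is consistent at the order we work to.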
We will make the following simple power law choices: $$\begin{aligned} \delta \phi_1 = &\;Q_1\left(\frac{2M}{r}\right),\;\; \delta A_1 = \;a_1\left(\frac{2M}{r}\right)^{2},\;\;\delta C_1 = \; c_{1}\left(\frac{2M}{r}\right)\label{deltaansatz}\end{aligned}$$ so that, in addition to the Horndeski $G_i$ parameters, we have 3 ‘hairs’, $Q_{1}$, $a_{1}$, and $c_{1}$ that can affect our QNM spectrum. Of course the hairy parameters may be related when considering specific solutions, but for now we will assume that they are independent. With the above ansatz we find the non-zero $\alpha_j$ are given by (absorbing $\epsilon$ into the definitions of $(a,c,Q)_1$): \[alphas\] $$\begin{aligned} \alpha_0=&\ 8M^2\omega_0^2a_1\\ \alpha_3=&\;\ell(\ell+1)(a_1-c_1)-a_1-2Q_1\frac{G_{4\phi}}{G_4}\\ \alpha_4=&\;\frac{5}{2}\left(a_1+c_1+Q_1\frac{G_{4\phi}}{G_4}\right)\end{aligned}$$ leading to the following corrections to the QNM frequency spectrum for the $\ell=2,3$ modes (for example): \[qnmspec\] $$\begin{aligned} M\omega_1^{\ell=2}=&\;Q_1\frac{G_{4\phi}}{G_4}\left[-0.0126+0.0032i\right]+a_1\left[-0.0267+0.0621i\right]+c_1\left[-0.1296+0.0106i\right]\\ M\omega_1^{\ell=3}=&\;Q_1\frac{G_{4\phi}}{G_4}\left[-0.0075+0.0008i\right]+a_1\left[-0.1326+0.0677i\right]+c_1\left[-0.2040+0.0110i\right]\end{aligned}$$ The choices made in eq. (\[deltaansatz\]) were simply to give a concrete example of a modified QNM in terms of the Horndeski (and new ‘hairy’) parameters; one could of course make a different ansatz of one’s choosing to calculate $\omega_1$ (though it should be noted that the numerical results of Cardoso et al only apply for those potentials which can be expressed as a series in inverse integer powers of $r$ - for potentials that do not fit this form alternative methods of calculating the QNM spectrum will have to be deployed [@1985RSPSA.402..285L; @2009CQGra..26v5003D; @Pani:2012zz]).
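For concreteness, the corrections in eq. (\[qnmspec\]) can be packaged as a small helper that evaluates $M\omega_1$ for given values of the three hairs; a sketch, with the coefficients simply those quoted above and $\tilde{Q}_1=Q_1 G_{4\phi}/G_4$:

```python
# Leading-order QNM shift M*omega_1 for l = 2, 3 from eq. (qnmspec),
# as a linear combination of the three 'hairs'.
coeffs = {
    2: {"Qt": -0.0126 + 0.0032j, "a1": -0.0267 + 0.0621j, "c1": -0.1296 + 0.0106j},
    3: {"Qt": -0.0075 + 0.0008j, "a1": -0.1326 + 0.0677j, "c1": -0.2040 + 0.0110j},
}

def M_omega_1(l, Qt=0.0, a1=0.0, c1=0.0):
    c = coeffs[l]
    return Qt * c["Qt"] + a1 * c["a1"] + c1 * c["c1"]

# e.g. a purely scalar hair shifts the l = 2 mode by
print(M_omega_1(2, Qt=1.0))   # (-0.0126+0.0032j)
```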
Figure \[fig1\] shows the effect that each of $a_1$, $c_1$, and $\tilde{Q}_1=Q_1G_{4\phi}/G_4$ has on the form of the (real part of the) effective potential $f(r)\tilde{V}$ for $\ell=2$. We see that a non-zero $a_1$ leads to the most noticeable modification to the effective potential. This is unsurprising due to a non-zero $a_1$ leading to a shift in the position of the horizon through eq. (\[deltarH\]), as well as giving rise to an effective mass-squared term due to $\alpha_0$ in eq. (\[alphas\]). ![image](plots.pdf){width="90.00000%"}\ Parameter Estimation {#paramsec} -------------------- We will now follow the Fisher matrix approach of [@Berti:2005ys] (to which the reader should refer for an in-depth treatment of statistical errors and ringdown observations) for performing a parameter estimation analysis on the modified QNM spectrum calculated above. The ringdown signal observed at a gravitational wave detector from a black hole can be modelled as $h=h_+ F_+ + h_\times F_\times$, where $h$ is the total strain, $h_{+,\times}$ is the strain in each of the $+$ and $\times$ polarisations, and $F_{+,\times}$ are pattern functions which depend on the orientation of the detector with respect to the source, and on polarisation angle. In frequency space, the strain in each polarisation is given by: $$\begin{aligned} h_+=&\;\frac{A^+_{\ell m}}{\sqrt{2}}\left[e^{i\phi_+} S_{\ell m} b_+(f) + e^{-i\phi_+}S^\ast_{\ell m} b_-(f)\right]\\ h_\times=&\;\frac{A^+_{\ell m}N_\times}{\sqrt{2}}\left[e^{i\phi_\times} S_{\ell m} b_+(f) + e^{-i\phi_\times}S^\ast_{\ell m} b_-(f)\right]\end{aligned}$$ where the amplitude $A_+$, amplitude ratio $N_\times$, and phases $\phi_{+,\times}$ are real. The $S$ are complex spin weight 2 spheroidal harmonics, and $b_{\pm}$ are given by: $$\begin{aligned} b_{\pm}=&\;\frac{1/\tau_{\ell m}}{(1/\tau_{\ell m})^2+4\pi^2(f\pm f_{\ell m})^2}\end{aligned}$$ where for a given $(\ell, m)$, $\omega_{\ell m}=2\pi f_{\ell m} - i/\tau_{\ell m}$.
We are interested in calculating the statistical errors in determining the ‘hairy’ parameters that affect the QNM spectrum, and as such we assume that the mass $M$ of the black hole, and thus the unperturbed QNM frequency $\omega_{\,0}$, is known. Furthermore, we will assume that $N_\times=1$ and $\phi_+=\phi_\times=0$, and that $A_+$ is known (effectively resulting in us fixing a specific signal-to-noise ratio $\rho$). In [@Berti:2005ys] it is shown that the results for statistical errors are not strongly affected by the values of $N_\times$ or of the phases. For simplicity we will assume that $c_1=0$, thus we are effectively considering a Reissner-Nordstrom-like black hole with a $1/r$ scalar profile. Defining $\tilde{Q}_1=Q_1G_{4\phi}/G_4$ and setting our book-keeping parameter $\epsilon=1$, we can write the oscillation frequency and damping time of the perturbed $\ell=2$ QNM (for example) as follows: $$\begin{aligned} 2\pi f=&\;2\pi f_0-0.0126\tilde{Q}_1/M-0.0267a_1/M\label{f2}\\ \tau^{-1}=&\;\tau_0^{-1}-0.0032\tilde{Q}_1/M-0.0621a_1/M.\label{tau2}\end{aligned}$$ Using the Fisher matrix formalism laid out in [@Berti:2005ys], and remembering that we are assuming $M$ to be known exactly, we calculate the following errors for $\tilde{Q}_1$ and $a_1$: $$\begin{aligned} \sigma^2_{\tilde{Q}_1}=&\;\frac{1}{2\rho^2q^2}\frac{f^{\prime 2}q^2(1+4q^2)-2fqf^\prime q^\prime + f^2q^{\prime 2}}{\left(\dot{f}q^\prime-f^\prime \dot{q}\right)^2}\\ \sigma^2_{a_1}=&\;\frac{1}{2\rho^2q^2}\frac{\dot{f}^2q^2(1+4q^2)-2fq\dot{f}\dot{q} + f^2\dot{q}^2}{\left(\dot{f}q^\prime-f^\prime \dot{q}\right)^2}\end{aligned}$$ where $q=\pi f \tau$ is the ‘quality factor’ of a given oscillation mode, and we now use the notation $F^\prime\equiv\frac{\partial F}{\partial a_1}$ and $\dot{F}\equiv\frac{\partial F}{\partial \tilde{Q}_1}$ for a quantity $F$. Assuming a detection of the $\ell=2$ mode, we can use the expressions given in eq. (\[f2\]) and (\[tau2\]) to calculate the errors numerically.
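A sketch of this numerical evaluation follows, taking the standard value $M\omega_{\,0}\approx 0.3737-0.0890i$ for the fundamental $\ell=2$ Schwarzschild QNM and evaluating at $\tilde{Q}_1=a_1=0$:

```python
import math

# Statistical errors on (Qt, a1) from a single l = 2 ringdown detection,
# using eqs. (f2), (tau2) and the Fisher-matrix error formulas above.
# Unperturbed l = 2 fundamental Schwarzschild QNM: M*omega_0 ~ 0.3737 - 0.0890i.
M = 1.0
f = 0.3737 / (2 * math.pi * M)      # unperturbed oscillation frequency
tau = M / 0.0890                    # unperturbed damping time
q = math.pi * f * tau               # quality factor

# Partial derivatives from eqs. (f2) and (tau2)
fp = -0.0267 / (2 * math.pi * M)    # df/da1          (F' notation)
fd = -0.0126 / (2 * math.pi * M)    # df/dQt          (F-dot notation)
taup = 0.0621 * tau**2 / M          # dtau/da1 = -tau^2 d(1/tau)/da1
taud = 0.0032 * tau**2 / M          # dtau/dQt
qp = math.pi * (fp * tau + f * taup)
qd = math.pi * (fd * tau + f * taud)

den = (fd * qp - fp * qd) ** 2
rho2_sig2_Qt = (fp**2 * q**2 * (1 + 4 * q**2) - 2 * f * q * fp * qp
                + f**2 * qp**2) / (2 * q**2 * den)
rho2_sig2_a1 = (fd**2 * q**2 * (1 + 4 * q**2) - 2 * f * q * fd * qd
                + f**2 * qd**2) / (2 * q**2 * den)

print(math.sqrt(rho2_sig2_Qt))      # rho * sigma_Qt ~ 12
print(math.sqrt(rho2_sig2_a1))      # rho * sigma_a1 ~ 2.3
```

The two outputs reproduce the detectability limits quoted next.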
Additionally setting $\tilde{Q}_1=a_1=0$, we interpret the following errors as ‘detectability’ limits on the parameters: $$\begin{aligned} \rho\sigma_{\tilde{Q}_1}\approx&\;12,\quad\rho\sigma_{a_1}\approx\;2.\end{aligned}$$ With an SNR of $\rho\sim10^2$, which could be typical of LISA events, we thus have that $\sigma_{\tilde{Q}_1}\approx0.1$ whilst $\sigma_{a_1}\approx0.02$ (assuming that the mass of the black hole is known with absolute precision). Eqs. (\[f2\]) and (\[tau2\]) show that the metric Reissner-Nordstrom-like hair $a_1$ has a more significant effect on the frequency and damping time than the scalar hair $\tilde{Q}_1$, so it is unsurprising that we find it possible to constrain $a_1$ to a greater degree than the scalar hair. In fact, in general it perhaps makes intuitive sense that the odd parity QNMs are more affected by modifications to the spacetime geometry than to the scalar profile, given that the scalar perturbations only couple to the even parity sector of the gravitational perturbations. Discussion ========== In this paper we have studied the QNMs associated with odd parity gravitational perturbations of spherically symmetric black holes in Horndeski gravity. By assuming that the background solutions for the spacetime geometry and Horndeski scalar field are well described to first order by the hairless Schwarzschild solution, we can treat the effect of any black hole ‘hair’ perturbatively. Making use of the results for generic spherically symmetric black holes derived in [@Ganguly:2017ort], we present a modified Regge-Wheeler style equation, eq. (\[RWhairy\]), describing odd parity gravitational perturbations. Eq. (\[RWhairy\]) takes into account effects induced by *generic* modifications to both the spacetime geometry and to the background radial profile of the Horndeski scalar field. 
Labelling the background modifications by a book-keeping parameter $\epsilon$ to keep track of the order of ‘smallness’, we present results to $\mathcal{O}(\epsilon^2)$. We show that the odd parity perturbations are not only affected by changes to the background spacetime, but also by the scalar field profile, with the ‘nonminimal’ and ‘derivative’ couplings to curvature $G_4$ and $G_5$ in the Horndeski action playing a role in eq. (\[RWhairy\]). Through the formalism of [@Cardoso:2019mqo] the odd parity QNM spectrum of such perturbatively hairy black holes can be calculated (assuming an inverse power law ansatz for the modifications to both the spacetime and scalar profile). In eq. (\[qnmspec\]) the first order modifications to the QNM spectrum are presented for the $\ell=2,3$ modes. It is straightforward to calculate the modifications for other $\ell$ using the results of this paper combined with the numerical data provided in [@Cardoso:2019mqo]. We have thus presented a straightforward way to associate deviations from the expected GR QNM spectrum of black holes to not only modifications to the background spacetime, but also to fundamental parameters of a modified gravity theory (in this case, the $G_i$ of Horndeski gravity). In section \[paramsec\] we perform a simple parameter estimation exercise based on the hypothetical observation of the $\ell=2$ QNM of a black hole whose mass we are assuming to know. With SNRs typical of LISA detections we show that the ‘hairy’ black hole parameters introduced could potentially be well constrained. There are of course numerous ways to develop the work presented here. As mentioned briefly in section \[perturbsec\], eq. (\[RWgen\]) with $\omega=0$ could be used to study the tidal deformations of black holes in Horndeski gravity. Furthermore, one could attempt to find exact solutions for the $\delta(A,C,\phi)_i$ in different realisations of Horndeski gravity (through finding ‘order-by-order’ solutions, or otherwise).
The most natural extension to this work is of course to study the even parity sector of perturbations in Horndeski gravity. In general the even parity sector of gravitational perturbations is more complex than the odd parity sector, and in Horndeski gravity this is only further complicated through the coupling of scalar perturbations to the gravitational modes. The formalism of [@Cardoso:2019mqo] has, usefully, been expanded to apply to coupled QNM equations in [@McManus:2019ulj], thus calculating the modified even parity QNM spectrum should be relatively straightforward once the relevant equations have been derived. Such an analysis will then provide a complete description of ‘almost’ Schwarzschild QNMs in Horndeski theory. Perhaps the most important extension to this line of research is to include black hole spin, given that the black holes currently observed through merger events appear to possess non-negligible angular momentum [@LIGOScientific:2018mvr]. As a first step, one could consider studying slowly rotating ‘almost Kerr’ black holes in Horndeski gravity by introducing another ‘hairy’ function in the $g_{t\phi}$ component of the slowly rotating Kerr metric. More ambitiously, perhaps an ‘almost’ Teukolsky-like equation could be found by introducing perturbations to the full Kerr solution. Perturbations of a stealth Kerr black hole (i.e. a Kerr geometry endowed with a non-trivial scalar profile) in Degenerate Higher Order Scalar Tensor theories were studied in [@Charmousis:2019fre]. Acknowledgments {#acknowledgments .unnumbered} =============== OJT would like to thank Pedro Ferreira, Vitor Cardoso, and Adrien Kuntz for useful conversations whilst preparing this work. OJT acknowledges support from the European Research Council.
--- abstract: 'We review the stabilization of the radion in the Randall–Sundrum model through the Casimir energy due to a bulk conformally coupled field. We also show some exact self–consistent solutions taking into account the backreaction that this energy induces on the geometry.' address: |  IFAE, Departament de F[í]{}sica, Universitat Aut[ò]{}noma de Barcelona,\ 08193 Bellaterra $($Barcelona$)$, Spain author: - 'Oriol Pujol[à]{}s\' title: 'Effective potential in Brane-World scenarios' --- UAB-FT-504 Introduction ============ Recently, it has been suggested that theories with extra dimensions may provide a solution to the hierarchy problem [@gia; @RS1]. The idea is to introduce a $d$-dimensional internal space of large physical volume ${\cal V}$, so that the effective lower dimensional Planck mass $m_{pl}\sim {\cal V}^{1/2} M^{(d+2)/2}$ is much larger than $M \sim TeV$, the true fundamental scale of the theory. In the original scenarios, only gravity was allowed to propagate in the higher dimensional bulk, whereas all other matter fields were confined to live on a lower dimensional brane. Randall and Sundrum [@RS1] (RS) introduced a particularly attractive model where the gravitational field created by the branes is taken into account. Their background solution consists of two parallel flat branes, one with positive tension and another one with negative tension embedded in a five-dimensional Anti-de Sitter (AdS) bulk. In this model, the hierarchy problem is solved if the distance between branes is about $37$ times the AdS radius and we live on the negative tension brane. More recently, scenarios where additional fields propagate in the bulk have been considered [@alex1; @alex2; @alex3; @bagger]. In principle, the distance between branes is a massless degree of freedom, the radion field $\phi$.
However, in order to make the theory compatible with observations this radion must be stabilized [@gw1; @gw2; @gt; @cgr; @tm]. Clearly, all fields which propagate in the bulk will give Casimir-type contributions to the vacuum energy, and it seems natural to investigate whether these could provide the stabilizing force which is needed. Here, we shall calculate the radion one loop effective potential $V_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}(\phi)$ due to conformally coupled bulk scalar fields, although the result shares many features with other massless bulk fields, such as the graviton, which is addressed in [@gpt]. As we shall see, this effective potential has a rather non-trivial behaviour, which generically develops a local extremum. Depending on the detailed matter content, the extremum could be a maximum or a minimum, where the radion could sit. For the purposes of illustration, here we shall concentrate on the background geometry discussed by Randall and Sundrum, although our methods are also applicable to other geometries, such as the one introduced by Ovrut [*et al.*]{} in the context of eleven dimensional supergravity with one large extra dimension [@ovrut]. This report is based on a work done in collaboration with Jaume Garriga and Takahiro Tanaka [@gpt]. Related calculations of the Casimir interaction amongst branes have been presented in an interesting paper by Fabinger and Hořava [@FH]. In the concluding section we shall comment on the differences between their results and ours. The Randall-Sundrum model and the radion field ============================================== To be definite, we shall focus attention on the brane-world model introduced by Randall and Sundrum [@RS1]. In this model the metric in the bulk is anti-de Sitter space (AdS), whose (Euclidean) line element is given by $$ds^2=a^2(z)\eta_{ab}dx^{a}dx^{b}= a^2(z)\left[dz^2 +d{\bf x}^2\right] =dy^2+a^2(z)d{\bf x}^2. 
\label{rsmetric}$$ Here $a(z)=\ell/z$, where $\ell$ is the AdS radius. The branes are placed at arbitrary locations which we shall denote by $z_+$ and $z_-$, where the positive and negative signs refer to the positive and negative tension branes respectively ($z_+ < z_-$). The “canonically normalized” radion modulus $\phi$ - whose kinetic term contribution to the dimensionally reduced action on the positive tension brane is given by $${1\over 2}\int d^4 x \sqrt{g_+}\, g^{\mu\nu}_+\partial_{\mu}\phi \,\partial_{\nu}\phi, \label{kin}$$ is related to the proper distance $d= \Delta y$ between both branes in the following way [@gw1] $$\phi=(3M^3\ell/4\pi)^{1/2} e^{- d/\ell}.$$ Here, $M \sim TeV$ is the fundamental five-dimensional Planck mass. It is usually assumed that $\ell \sim M^{-1}$. Let us introduce the dimensionless radion $$\lambda \equiv \left({4\pi \over 3M^3\ell}\right)^{1/2} {\phi} = {z_+ \over z_-} = e^{-d/\ell},$$ which will also be referred to as [*the hierarchy*]{}. The effective four-dimensional Planck mass $m_{pl}$ from the point of view of the negative tension brane is given by $m_{pl}^2 = M^3 \ell (\lambda^{-2} - 1)$. With $d\sim 37 \ell$, $\lambda$ is the small number responsible for the discrepancy between $m_{pl}$ and $M$. At the classical level, the radion is massless. However, as we shall see, bulk fields give rise to a Casimir energy which depends on the interbrane separation. This induces an effective potential $V_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}(\phi)$ which by convention we take to be the energy density per unit physical volume on the positive tension brane, as a function of $\phi$. This potential must be added to the kinetic term (\[kin\]) in order to obtain the effective action for the radion: $$S_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}[\phi] =\int d^4x\, a_+^4 \left[{1\over 2}g_+^{\mu\nu}\partial_{\mu}\phi\, \partial_{\nu}\phi + V_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}(\lambda(\phi)) \right].
\label{effect}$$ In the following Section, we calculate the contributions to $V_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}$ from conformally invariant bulk fields. Massless scalar bulk fields =========================== The effective potential induced by scalar fields with arbitrary coupling to the curvature, bulk mass and boundary mass can be addressed. It reduces to a calculation similar to the minimally coupled massless field case, which is solved in [@gpt] and corresponds to bulk gravitons. However, for the sake of simplicity, we shall only consider below the contribution to $V_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}(\phi)$ from conformally coupled massless bulk fields. Technically, this is much simpler than finding the contribution from bulk gravitons, and the problem of backreaction of the Casimir energy onto the background can be taken into consideration in this case. Here we are considering generalizations of the original RS proposal [@alex1; @alex2; @alex3] which allow several fields other than the graviton only (contributing as a minimally coupled scalar field). A conformally coupled scalar $\chi$ obeys the equation of motion $$-\Box_g \chi + {D-2 \over 4 (D-1)}\ R\ \chi =0, \label{confin}$$ In terms of the conformally rescaled field $\hat\chi = a^{(D-2)/2}\chi$, this is equivalent to the flat space equation $$\Box^{(0)} \hat\chi =0. \label{fse}$$ Here $\Box^{(0)}$ is the [*flat space*]{} d’Alembertian. It is customary to impose $Z_2$ symmetry on the bulk fields, with some parity given. If we choose even parity for $\hat\chi$, this results in Neumann boundary conditions $$\partial_{z}\hat\chi = 0,$$ at $z_+$ and $z_-$. The eigenvalues of the d’Alembertian subject to these conditions are given by $$\label{flateigenvalues} \lambda^2_{n,k}=\left({n \pi \over L}\right)^2+k^2,$$ where $n$ is a positive integer, $L=z_{-}-z_+$ is the coordinate distance between both branes and $k$ is the coordinate momentum parallel to the branes. [^1] Similarly, we could consider the case of massless fermions in the RS background.
The Dirac equation,[^2] $$\gamma^{n}e^a_{\ n}\nabla_a\,\psi=0,$$ is conformally invariant [@bida], and the conformally rescaled components of the fermion obey the flat space equation (\[fse\]) with Neumann boundary conditions. Thus, the spectrum (\[flateigenvalues\]) is also valid for massless fermions.  Flat Spacetime --------------- Let us now consider the Casimir energy density in the conformally related flat space problem. We shall first look at the effective potential per unit area on the brane, ${\cal A}$. For bosons, this is given by $$V^b_0 = {1\over 2 {\cal A}} {\rm Tr}\ {\rm\ln} (-\bar\Box^{(0)}/\mu^2).$$ Here $\mu$ is an arbitrary renormalization scale. Using zeta function regularization (see e.g. [@ramond]), it is straightforward to show that $$V^b_0 (L)= {(-1)^{\eta-1} \over (4\pi)^{\eta} \eta!} \left({\pi\over L}\right)^{D-1} \zeta'_R(1-D). \label{vboson}$$ Here $\eta=(D-1)/2$, and $\zeta_R$ is the standard Riemann’s zeta function. The contribution of a massless fermion is given by the same expression but with opposite sign: $$V^{f}_0(L) = - V_0^b(L). \label{vfermion}$$ The expectation value of the energy momentum tensor is traceless in flat space for conformally invariant fields.
Moreover, because of the symmetries of our background, it must have the form [@bida] $$\langle T^z_{\ z}\rangle_{flat}= (D-1) \rho_0(z),\quad \langle T^i_{\ j}\rangle_{flat}= -{\rho_0(z)}\ \delta^i_{\ j}.$$ By the conservation of energy-momentum, $\rho_0$ must be a constant, given by $$\rho_0^{b,f} = {V_0^{b,f} \over 2 L} = \mp {A \over 2 L^D},$$ where the minus and plus signs refer to bosons and fermions respectively and we have introduced $$A\equiv{(-1)^{\eta} \over (4\pi)^{\eta} \eta!} \pi^{D-1} \zeta'_R(1-D) > 0.$$ This result [@adpq; @dpq] is a simple generalization to codimension-1 branes embedded in higher dimensional spacetimes of the usual Casimir energy calculation, and it reproduces the same kind of behaviour: the effective potential depends monotonically on the interbrane distance. So, depending on $D$ and the field’s spin, it induces an attractive or repulsive force, describing correspondingly the collapse or the indefinite separation of the branes, just as happened in the Appelquist and Chodos calculation [@ac]. In this case, then, the stabilization of the interbrane distance cannot be due to quantum fluctuations of fields propagating in the bulk.  AdS Spacetime -------------- Now, let us consider the curved space case. Since the bulk dimension is odd, there is no conformal anomaly [@bida] and the energy momentum tensor is traceless in the curved case too.[^3] This tensor is related to the flat space one by (see e.g. [@bida]) $$<T^{\mu}_{\ \nu}>_g = a^{-D} <T^{\mu}_{\ \nu}>_{flat}.$$ Hence, the energy density is given by $$\rho = a^{-D} \rho_0. \label{dilute}$$ The effective potential per unit physical volume on the positive tension brane is thus given by $$V_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}(\lambda) = 2\ a_+^{1-D} \int a^D(z) \rho\, dz = \mp \ell^{1-D}{A \lambda^{D-1} \over (1-\lambda)^{D-1}}. \label{ve1}$$ Note that the background solution $a(z)=\ell/z$ has only been used in the very last step.
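For $D=5$ the coefficient $A$ can be evaluated with elementary means, using the standard identity $\zeta'_R(-4)=3\zeta_R(5)/4\pi^4$; a short numerical sketch:

```python
import math

# Numerical value of the coefficient A for D = 5 (eta = 2), using the
# standard identity zeta'(-4) = 3 zeta(5) / (4 pi^4).
D, eta = 5, 2
zeta5 = sum(n ** -5 for n in range(1, 200000))   # Riemann zeta(5)
zeta_prime = 3 * zeta5 / (4 * math.pi ** 4)      # zeta'(1-D) = zeta'(-4) > 0
A = (-1) ** eta / ((4 * math.pi) ** eta * math.factorial(eta)) \
    * math.pi ** (D - 1) * zeta_prime
print(A)   # ~ 2.46e-3
```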
The previous expression for the effective potential takes into account the Casimir energy of the bulk, but it is not complete because in general the effective potential receives additional contributions from both branes. We can always add to $V_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}$ terms which correspond to finite renormalization of the tension on both branes. These are proportional to $\lambda^0$ and $\lambda^{D-1}$. The coefficients in front of these two powers of $\lambda$ cannot be determined from our calculation and can only be fixed by imposing suitable renormalization conditions which relate them to observables. Adding those terms and particularizing to the case of $D=5$, we have $$V_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}(\lambda) = \mp \ell^{-4}\left[{A\lambda^4 \over (1-\lambda)^4} + \alpha+\beta\lambda^4\right], \label{confveff}$$ where $A\approx 2.46 \cdot 10^{-3}$. The values $\alpha$ and $\beta$ can be obtained from the observed value of the “hierarchy”, $\lambda_{obs}$, and the observed value of the effective four-dimensional cosmological constant, which we take to be zero. Thus, we take as our renormalization conditions $$V_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}(\lambda_{obs}) ={dV_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}\over d\lambda}(\lambda_{obs})=0. \label{renc}$$ If there are other bulk fields, such as the graviton, which give additional classical or quantum mechanical contributions to the radion potential, then those should be included in $V_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}$. From the renormalization conditions (\[renc\]) the unknown coefficients $\alpha$ and $\beta$ can be found, and then the mass of the radion is calculable. In Fig. \[fig1\] we plot (\[confveff\]) for a fermionic field and a chosen value of $\lambda_{obs}$. From (\[renc\]), we have $$\beta = - A (1-\lambda_{obs})^{-5},\quad \alpha= -\beta \lambda_{obs}^5.
\label{consts}$$ These values correspond to changes $\delta \sigma_{\pm}$ on the positive and negative brane tensions which are related by the equation $$\delta\sigma_+ = -\lambda^5_{obs}\ \delta\sigma_-. \label{reltensions}$$ As we shall see below, Eq. (\[reltensions\]) is just what is needed in order to have a static solution according to the five dimensional equations of motion, once the Casimir energy is included. We can now calculate the mass of the radion field $m_\phi^{(-)}$ from the point of view of the negative tension brane. For $\lambda_{obs}\ll 1$ we have: $$m^{2\ (-)}_\phi = \lambda_{obs}^{-2}\ m^{2\ (+)}_\phi = \lambda_{obs}^{-2}\ {d^2 V_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}\over d\phi^2}\approx \mp \lambda_{obs} \left({5\pi^3 \zeta'_R(-4)\over 6 M^3 \ell^5}\right). \label{massconf}$$ The contribution to the radion mass squared is negative for bosons and positive for fermions. Thus, depending on the matter content of the bulk, it is clear that the radion may be stabilized due to this effect. Note, however, that if the “observed” interbrane separation is large, then the induced mass is small. So if we try to solve the hierarchy problem geometrically with a large internal volume, then $\lambda_{obs}$ is of order $TeV/m_{pl}$ and the mass (\[massconf\]) is much smaller than the $TeV$ scale. Such a light radion would seem to be in conflict with observations. In this case we must accept the existence of another stabilization mechanism (perhaps classical or nonperturbative) contributing a large mass to the radion. Of course, another possibility is to have $\lambda_{obs}$ of order one, with $M$ and $\ell$ of order $m_{pl}$, in which case the radion mass (\[massconf\]) would be very large, but then we must look for a different solution to the hierarchy problem.  Casimir Energy Backreaction ---------------------------- Due to conformal invariance, it is straightforward to take into account the backreaction of the Casimir energy on the geometry.
First of all, we note that the metric (\[rsmetric\]) is analogous to a Friedmann-Robertson-Walker metric, where the nontrivial direction is space-like instead of timelike. The dependence of $a$ on the transverse direction can be found from the Friedmann equation $$\left({a'\over a}\right)^2 = {16\pi G_5 \over 3} \rho - {\Lambda \over 6}. \label{friedmann}$$ Here a prime indicates derivative with respect to the proper coordinate $y$ \[see Eq. (\[rsmetric\])\], and $\Lambda<0$ is the background cosmological constant. Combined with (\[dilute\]), which relates the energy density $\rho$ to the scale factor $a$, Eq. (\[friedmann\]) becomes a first order ordinary differential equation for $a$. We should also take into account the matching conditions at the boundaries $$\left({a'\over a}\right)_{\pm}={\mp 8\pi G_5 \over 6} \sigma_{\pm}. \label{matching}$$ A static solution of Eqs. (\[friedmann\]) and (\[matching\]) can be found by a suitable adjustment of the brane tensions. Indeed, since the branes are flat, the value of the scale factor on the positive tension brane is conventional and we may take $a_+=1$. Now, the tension $\sigma_+$ can be chosen quite arbitrarily. Once this is done, Eq. (\[matching\]) determines the derivative $a'_+$, and Eq. (\[friedmann\]) determines the value of $\rho_0$. In turn, $\rho_0$ determines the co-moving interbrane distance $L$, and hence the location of the second brane. Finally, integrating (\[friedmann\]) up to the second brane, the tension $\sigma_-$ must be adjusted so that the matching condition (\[matching\]) is satisfied. Thus, as with other stabilization scenarios, a single fine-tuning is needed in order to obtain a vanishing four-dimensional cosmological constant. This is in fact the dynamics underlying our choice of renormalization conditions (\[renc\]) which we used in order to determine $\alpha$ and $\beta$. 
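This adjustment can be illustrated with a minimal numerical check that the coefficients (\[consts\]) indeed enforce the renormalization conditions (\[renc\]); a sketch with the fermionic sign, units where $\ell=1$, and a sample value of $\lambda_{obs}$:

```python
# Check that the coefficients of eq. (consts) enforce the renormalization
# conditions (renc): V_eff and its first derivative vanish at lambda_obs.
A = 2.46e-3                         # D = 5 coefficient quoted in the text
lam_obs = 1e-2                      # sample 'observed' hierarchy

beta = -A * (1 - lam_obs) ** -5
alpha = -beta * lam_obs ** 5

def V_eff(lam):                     # fermionic case, units where ell = 1
    return A * lam ** 4 / (1 - lam) ** 4 + alpha + beta * lam ** 4

h = 1e-7                            # central-difference step for dV/dlambda
dV = (V_eff(lam_obs + h) - V_eff(lam_obs - h)) / (2 * h)
print(abs(V_eff(lam_obs)), abs(dV))   # both vanish to machine precision
```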
Indeed, let us write $\sigma_+=\sigma_0 + \delta\sigma_+$ and $\sigma_- =-\sigma_0 +\delta\sigma_-$, where $\sigma_0=(3 / 4\pi \ell G_5)$ is the absolute value of the tension of the branes in the zeroth order background solution. Eliminating $a'/a$ from (\[matching\]) and (\[friedmann\]), we easily recover the relation (\[reltensions\]), which had previously been obtained by extremizing the effective potential and imposing zero effective four-dimensional cosmological constant (here, $\delta\sigma_{\pm}$ is treated as a small parameter, so that extremization of the effective action coincides with extremization of the effective potential on the background solution.) In that picture, the necessity of a single fine tuning is seen as follows. The tension on one of the walls can be chosen quite arbitrarily. For instance, we may freely pick a value for $\beta$, which renormalizes the tension of the brane located at $z_-$. Once this is given, the value of the interbrane distance $\lambda_{obs}$ is fixed by the first of Eqs. (\[consts\]). Then, the value of $\alpha$, which renormalizes the tension of the brane at $z_+$, must be fine-tuned to satisfy the second of Eqs. (\[consts\]). Eqs. (\[friedmann\]) and (\[matching\]) can of course be solved nonperturbatively. We may consider, for instance, the situation where there is no background cosmological constant ($\Lambda=0$). In this case we easily obtain $$a^3(z)={6\pi G A \over (z_- -z_+)^5}(C-z)^2 ={{3\over 4}\pi^3 \zeta'_R(-4) G_5 }{(z_0-z)^2\over(z_- -z_+)^5},$$ where the brane tensions are given by $$2\pi G \sigma_{\pm}=\pm (C-z_{\pm})^{-1}$$ and $C$ is a constant. This is a self–consistent solution where the warp in the extra dimension is entirely due to the Casimir energy. Of course, the conformal interbrane distance $(z_--z_+)$ is different from the physical $d$, although they are related.
For instance, imposing $a(z_+)=1$, which we can rewrite as $$6\pi A G_5 = \left({z_--z_+\over z_0-z_+}\right)^2 (z_--z_+)^3,$$ we get the relation $$d=(z_--z_+)\left[{3\over5}\sqrt{(z_--z_+)^3 \over 6\pi G_5 A}\left( 1-\biggl(1-\sqrt{6\pi G_5 A\over(z_--z_+)^3} \;\biggr)^{5/3}\right)\right].$$ Here we can see that the case of negligible Casimir energy, ${6\pi G_5 A/(z_--z_+)^3} \ll 1$, indeed corresponds to the flat case, in which the conformal and the physical distances coincide. We can also integrate Eq. (\[friedmann\]) in the general case [@tesina], and get $$a(y)=\left({16\pi A M^3 \over{-\Lambda (z_--z_+)^5}}\right)^{1/5}\sinh^{2/5}\left({5\over2}\sqrt{-\Lambda/6}\;(y_0-y)\right),$$ with brane tensions given by $$\sigma_{\pm}=\pm {3\over{4\pi}}{\sqrt{-\Lambda/6}\over G_5}\coth\left({5\over2}\sqrt{-\Lambda/6}\;(y_0-y_\pm)\right).$$ Here we are assuming $\Lambda<0$, and $y_0$ is an integration constant. Moreover, we can explicitly check how this reduces to the RS solution in the limit of small Casimir energy compared to the cosmological constant, [*i.e.*]{}, when $${16\pi G_5 \over 3} \rho_0 \ll {-\Lambda \over 6}.$$ Again fixing $a(z_+)=1$ we find $$y_0={2\over5}\sqrt{-6\over\Lambda}\; {\rm arcsinh}\left(\left({32\pi \rho_0\over-\Lambda M^3}\right)^{-1/2}\right) \gg 1,$$ since $(32\pi \rho_0/(-\Lambda M^3)) \ll 1$, so that we can write the warp factor as a power series in the parameter $(32\pi \rho_0/(-\Lambda M^3))^{1/5} \ll 1$: $$a(y)\approx e^{-\sqrt{-\Lambda/6}\;y}\left( 1 - {1\over5}\left({128\pi \rho_0 \over -\Lambda M^3}\right)^{2/5} e^{2\sqrt{-\Lambda/6}\;y} +\dots \right).$$ Conclusions and discussion ========================== We have shown that in brane-world scenarios with a warped extra dimension, it is in principle possible to stabilize the radion $\phi$ through the Casimir force induced by bulk fields. Specifically, conformally invariant fields induce an effective potential of the form (\[confveff\]) as measured from the positive tension brane. 
From the point of view of the negative tension brane, this corresponds to an energy density per unit physical volume of the order $$V_{\hbox{\footnotesize\it \hspace{-6pt} eff\,}}^{-}\sim m_{pl}^4 \left[{A\lambda^4 \over (1-\lambda)^4}+\alpha+\beta\lambda^4\right],$$ where $A$ is a calculable number (of order $10^{-3}$ per degree of freedom), and $\lambda \sim \phi/(M^3 \ell)^{1/2}$ is the dimensionless radion. Here $M$ is the higher-dimensional Planck mass, and $\ell$ is the AdS radius, which are both assumed to be of the same order, whereas $m_{pl}$ is the lower-dimensional Planck mass. In the absence of any fine-tuning, the potential will have an extremum at $\lambda \sim 1$, where the radion may be stabilized (at a mass of order $m_{pl}$). However, this stabilization scenario without fine-tuning would not explain the hierarchy between $m_{pl}$ and the $TeV$ scale. A hierarchy can be generated by adjusting $\beta$ according to (\[consts\]), with $\lambda_{obs}\sim (TeV/m_{pl}) \sim 10^{-16}$ (of course one must also adjust $\alpha$ in order to have a vanishing four-dimensional cosmological constant). But with these adjustments, the mass of the radion would be very small, of order $$m^{2\ (-)}_{\phi} \sim\lambda_{obs}\ M^{-3} \ell^{-5} \sim \lambda_{obs} (TeV)^2. \label{smallmass}$$ Therefore, in order to make the model compatible with observations, an alternative mechanism must be invoked in order to stabilize the radion, giving it a mass of order $TeV$. Goldberger and Wise [@gw1; @gw2], for instance, introduced a field $v$ with suitable classical potential terms in the bulk and on the branes. In this model, the potential terms on the branes are chosen so that the v.e.v. of the field on the positive tension brane, $v_+$, is different from the v.e.v. on the negative tension brane, $v_-$. Thus, there is a competition between the potential energy of the scalar field in the bulk and the gradient energy which is necessary to go from $v_+$ to $v_-$. 
The radion sits at the value where the sum of gradient and potential energies is minimized. This mechanism is perhaps somewhat [*ad hoc*]{}, but it has the virtue that a large hierarchy and an acceptable radion mass can be achieved without much fine-tuning. It is reassuring that in this case the Casimir contributions, given by (\[smallmass\]), would be very small and would not spoil the model. The graviton contribution to the radion effective potential can be computed as well. Each polarization of the graviton contributes as a minimally coupled massless bulk scalar field [@tamaproof]; since gravitons are thus not conformally invariant, the calculation is considerably more involved, and a suitable method has been developed for this purpose [@gpt]. The result is that gravitons contribute a negative term to the radion mass squared, but this term is even smaller than (\[smallmass\]), by an extra power of $\lambda_{obs}$. Moreover, this method also works in AdS space for scalar fields of any kind (massive, nonminimally coupled, …). In an interesting recent paper [@FH], Fabinger and Hořava have considered the Casimir force in a brane-world scenario similar to the one discussed here, where the internal space is topologically $S^1/Z_2$. In their case, however, the gravitational field of the branes is ignored and the extra dimension is not warped. As a result, their effective potential is monotonic and stabilization does not occur (at least in the regime where the one-loop calculation is reliable, just like in the original Kaluza-Klein compactification on a circle [@ac]). The question of the gravitational backreaction of the Casimir energy onto the background geometry is also discussed in [@FH]. Again, since the gravitational field of the branes is not considered, they do not find static solutions. This is in contrast with our case, where static solutions can be found by suitable adjustment of the brane tensions. 
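A quick numerical illustration may help here. The parameter values and the derivative formula below are ours, purely illustrative, and not taken from the paper: for the radion potential $V_{\rm eff}^{-}\sim m_{pl}^4\left[A\lambda^4/(1-\lambda)^4+\alpha+\beta\lambda^4\right]$ quoted earlier, a $\beta<0$ of order one together with $A\sim10^{-3}$ gives an interior stationary point at $(1-\lambda)^5=A/|\beta|$, i.e. at $\lambda$ somewhat below $1$, with no fine-tuning.

```python
import numpy as np

# Hypothetical order-of-magnitude parameters (ours, not the paper's):
# A ~ 1e-3 per degree of freedom, beta of order one and negative, alpha free.
A, alpha, beta = 1e-3, 0.0, -1.0

# Radion potential V(lam) ~ A*lam^4/(1-lam)^4 + alpha + beta*lam^4
# (in units of m_pl^4), sampled on a grid of the dimensionless radion.
lam = np.linspace(0.01, 0.90, 10_000)
V = A * lam**4 / (1.0 - lam)**4 + alpha + beta * lam**4

# dV/dlam = 4*lam^3 * [A/(1-lam)^5 + beta], so the stationary point solves
# (1-lam)^5 = A/|beta|: an interior extremum without any fine-tuning.
lam_star = 1.0 - (A / abs(beta)) ** 0.2
i_min = int(np.argmin(V))

print(lam[i_min], lam_star)  # the grid minimum sits at the analytic value
```

For $|\beta|\gg A$ the stationary point moves toward $\lambda=1$, consistent with the extremum at $\lambda\sim1$ mentioned above.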
Finally, it should be pointed out that the treatment of backreaction (here and in [@FH]) applies to conformally invariant fields but not to gravitons. Gravitons are similar to minimally coupled scalar fields, for which it is well known that the Casimir energy density diverges near the boundaries [@bida]. Therefore, a physical cut-off related to the brane effective width seems to be needed so that the energy density remains finite everywhere. Presumably, our conclusions will be unchanged provided that this cut-off length is small compared with the interbrane separation, but further investigation of this issue would be interesting. It also seems interesting to clarify whether the same stabilization mechanism works in other kinds of warped compactified brane-world models, such as some coming from M-theory [@ovrut]. In this case the bulk, instead of being a slice of AdS (which is maximally symmetric), has a power-law warp factor, and is consequently a less symmetric space. This complicates the calculation since, for instance, there are two 4-d massless moduli fields (apart from the 4-d gravitons) to stabilize. After the work reported here [@gpt; @tesina] was complete, Ref. [@muko] appeared with some overlapping results, and also [@bmno; @noz] with related topics. Acknowledgements ================ I devote special thanks to Jaume Garriga and Takahiro Tanaka for the entire realization of this work. I would like to thank Enric Verdaguer and Edgar Gunzig for their kind hospitality at the Peyresq-5 Meeting. I acknowledge support from CICYT, under grant AEN99-0766, and from grant 1998FI 00198. [99]{} N. Arkani-Hamed, S. Dimopoulos, G. Dvali, “The Hierarchy Problem and New Dimensions at a Millimeter,” Phys.Lett. [**B429**]{}, 263 (1998) \[hep-ph/9803315\]; “Phenomenology, Astrophysics and Cosmology of Theories with Submillimeter Dimensions and TeV Scale Quantum Gravity,” Phys.Rev. 
[**D59**]{} 086004 (1999) \[hep-ph/9807344\]; I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos, G. Dvali, “New Dimensions at a Millimeter to a Fermi and Superstrings at a TeV,” Phys.Lett. [**B436**]{}, 257 (1998) \[hep-ph/9804398\]. L. Randall and R. Sundrum, “A Large Mass Hierarchy from a Small Extra Dimension,” Phys. Rev. Lett.[**83**]{}, 3370 (1999) \[hep-ph/9905221\]. T. Gherghetta and A. Pomarol, “Bulk fields and supersymmetry in a slice of AdS,” hep-ph/0003129. A. Pomarol, “Gauge bosons in a five-dimensional theory with localized gravity,” hep-ph/9911294. A. Pomarol, “Grand unified theories without the desert,” hep-ph/0005293. R. Altendorfer, J. Bagger and D. Nemeschansky, “Supersymmetric Randall-Sundrum scenario,” hep-th/0003117. W. D. Goldberger and M. B. Wise, “Modulus stabilization with bulk fields,” Phys. Rev. Lett.  [**83**]{}, 4922 (1999) \[hep-ph/9907447\]. W. D. Goldberger and M. B. Wise, “Phenomenology of a stabilized modulus,” hep-ph/9911457. J. Garriga and T. Tanaka, “Gravity in the Randall-Sundrum Brane World,” Phys. Rev. Lett.  [**84**]{}, 2778 (1999) \[hep-th/9911055\]. C. Charmousis, R. Gregory, V. Rubakov “Wave function of the radion in a brane world”, hep-th/9912160. T. Tanaka and X. Montes, “Gravity in the brane-world for two-branes model with stabilized modulus,” hep-th/0001092. J. Garriga, O. Pujol[à]{}s and T. Tanaka, “Radion effective potential in the brane-world,” \[hep-th/0004109\]; A. Lukas, B. A. Ovrut, K. S. Stelle and D. Waldram, “The universe as a domain wall,” Phys. Rev.  [**D59**]{}, 086001 (1999) \[hep-th/9803235\]. M. Fabinger and P. Hořava, “Casimir effect between world-branes in heterotic M-theory,” hep-th/0002073. N. D. Birrell and P. C. Davies, “Quantum Fields In Curved Space,” [*Cambridge, Uk: Univ. Pr.*]{} (1982) 340p. P. Ramond, “Field Theory: A Modern Primer,” [*(Frontiers in Physics, 74), Redwood City, USA: Addison-Wesley*]{} (1989) 329p. I. Antoniadis, S. Dimopoulos, A. Pomarol and M. 
Quiros, “Soft masses in theories with supersymmetry breaking by TeV-compactification,” Nucl. Phys.  [**B544**]{}, 503 (1999) \[hep-ph/9810410\]. A. Delgado, A. Pomarol and M. Quiros, “Supersymmetry and electroweak breaking from extra dimensions at the TeV-scale,” Phys. Rev.  [**D60**]{}, 095008 (1999) \[hep-ph/9812489\]. T. Appelquist and A. Chodos, “Quantum Effects in Kaluza-Klein Theories,” Phys. Rev. Lett. [**50**]{}, 141 (1983); “The Quantum Dynamics of Kaluza-Klein Theories,” Phys. Rev. [**D28**]{}, 772 (1983). K. Kirsten, “The a(5) heat kernel coefficient on a manifold with boundary,” Class. Quant. Grav.  [**15**]{}, L5 (1998) \[hep-th/9708081\]. D.V. Vassilevich, “Vector fields on a disk with mixed boundary conditions,” J. Math. Phys. [**36**]{}, 3174 (1995). O. Pujol[à]{}s, [*Master Thesis*]{}, unpublished. T. Tanaka, in preparation. S. Mukohyama “Quantum effects, brane tension and large hierarchy in the brane-world, ” Phys. Rev.  [**D63**]{}, 044008 (2001) I. Brevik, K. A. Milton, S. Nojiri and S. D. Odintsov, “Quantum (in)stability of a brane-world AdS(5) universe at nonzero temperature,” hep-th/0010205. S. Nojiri, S. D. Odintsov and S. Zerbini, “Bulk versus boundary (gravitational Casimir) effects in quantum creation of inflationary brane world universe,” Class. Quant. Grav. [**17**]{}, 4855 (2000) \[hep-th/0006115\]. [^1]: If we considered an odd parity field, then we would impose Dirichlet boundary conditions, $\hat\chi(z_-)=\hat\chi(z_+)=0$, and the set of eigenvalues would be the same except for the zero mode, which only the even field has. [^2]: Here, $e^a_{\ n}$ is the f[ü]{}nfbein, $n,m,\dots$ are flat indices, $a,b,\dots$ are “world” indices, and $\gamma^n$ are the Dirac matrices. 
The covariant derivative can be expressed in terms of the spin connection $\omega_{an m}$ as $\nabla_a=\partial_a+{1\over 2} \omega_{anm} \Sigma^{nm}$, where $\Sigma^{nm}={1\over 4}[\gamma^n,\gamma^m]$ are the generators of the Lorentz transformations in the spin $1/2$ representation. [^3]: One can see that for the conformally coupled case, and for the spacetime we are considering, the anomaly vanishes even on the branes. Since the conformal anomaly is given by the Seeley-de Witt coefficient $a_{5/2}$, and this is a conformally invariant quantity, in the conformally coupled case $a_{5/2}$ is the same as in the related flat-spacetime problem, where it vanishes because the branes are flat. We can also use the expressions for $a_{5/2}$ presented in Refs. [@klaus; @vass] to show that in this case it vanishes.
--- abstract: 'It was conjectured by Lang that a complex projective manifold is Kobayashi hyperbolic if and only if it is of general type together with all of its subvarieties. We verify this conjecture for projective manifolds whose universal cover carries a bounded, strictly plurisubharmonic function. This includes in particular compact free quotients of bounded domains.' address: - | Sébastien Boucksom\ CNRS and CMLS\ École Polytechnique\ 91128 Palaiseau Cedex, France - | Simone Diverio\ Dipartimento di Matematica Guido Castelnuovo\ SAPIENZA Università di Roma\ Piazzale Aldo Moro 5\ I-00185 Roma. author: - Sébastien Boucksom - Simone Diverio bibliography: - 'bibliography.bib' title: 'A note on Lang’s conjecture for quotients of bounded domains' --- [^1] Introduction {#introduction .unnumbered} ============ For a compact complex space $X$, Kobayashi hyperbolicity is equivalent to the fact that every holomorphic map ${\mathbb{C}}\to X$ is constant, thanks to a classical result of Brody. When $X$ is moreover projective (or, more generally, compact Kähler), hyperbolicity is further expected to be completely characterized by (algebraic) positivity properties of $X$ and of its subvarieties. More precisely, we have the following conjecture, due to S. Lang. [@Lan86 Conjecture 5.6] A projective variety $X$ is hyperbolic if and only if every subvariety (including $X$ itself) is of general type. Recall that a projective variety $X$ is of general type if the canonical bundle of any smooth projective birational model of $X$ is big, *i.e.* has maximal Kodaira dimension. This is for instance the case when $X$ is smooth and *canonically polarized*, *i.e.* with an ample canonical bundle $K_X$. Note that Lang’s conjecture in fact implies that every smooth hyperbolic projective manifold $X$ is canonically polarized, as conjectured in 1970 by S. Kobayashi. 
It is indeed a well-known consequence of the Minimal Model Program that any projective manifold of general type without rational curves is canonically polarized (see for instance [@BBP Theorem A]). Besides the trivial case of curves and partial results for surfaces [@MM83; @DES79; @GG80; @McQ98], Lang’s conjecture is still almost completely open in higher dimension as of this writing. General projective hypersurfaces of high degree in projective space form a remarkable exception: they are known to be hyperbolic [@Bro17] (see also [@McQ99; @DEG00; @DT10; @Siu04; @Siu15; @RY18]), and they satisfy Lang’s conjecture [@Cle86; @Ein88; @Xu94; @Voi96; @Pac04]. It is natural to test Lang’s conjecture for the following two basic classes of manifolds, known to be hyperbolic since the very beginning of the theory: - compact Kähler manifolds $X$ with negative holomorphic sectional curvature; - compact, free quotients $X$ of bounded domains $\Omega\Subset{\mathbb{C}}^n$. In case (N), ampleness of $K_X$ was established in [@WY16a; @WY16b; @TY17] (see also [@DT16]). By curvature monotonicity, this implies that every smooth subvariety of $X$ also has ample canonical bundle. More generally, Guenancia recently showed [@Gue18] that each (possibly singular) subvariety of $X$ is of general type, thereby verifying Lang’s conjecture in that case. One might even more generally consider the case where $X$ carries an arbitrary Hermitian metric of negative holomorphic sectional curvature, which seems to be still open. In this note, we confirm Lang’s conjecture in case (B). While the case of quotients of bounded *symmetric* domains has been widely studied (see, just to cite a few, [@Nad89; @BKT13; @Bru16; @Cad16; @Rou16; @RT18]), the general case seems to have somehow passed unnoticed. 
Instead of bounded domains, we consider more generally the following class of manifolds, which comprises relatively compact domains in Stein manifolds, and has the virtue of being stable under passing to an étale cover or a submanifold. We say that a complex manifold $M$ is *of bounded type* if it carries a bounded, strictly plurisubharmonic function ${\varphi}$. By a well-known result of Richberg, any *continuous* bounded strictly psh function on a complex manifold $M$ can be written as a decreasing limit of smooth strictly psh functions, but this fails in general for discontinuous functions [@For p.66], and it is thus unclear to us whether every manifold of bounded type should carry also a *smooth* bounded strictly psh function. \[thm:main\] Let $X$ be a compact Kähler manifold admitting an étale (Galois) cover $\tilde X\to X$ of bounded type. Then: - $X$ is Kobayashi hyperbolic; - $X$ has large fundamental group; - $X$ is projective and canonically polarized; - every subvariety of $X$ is of general type. Note that $\tilde X$ can always be replaced with the universal cover of $X$, and hence can be assumed to be Galois. By [@Kob98 3.2.8], (i) holds iff $\tilde X$ is hyperbolic, which follows from the fact that manifolds of bounded type are Kobayashi hyperbolic [@Sib81 Theorem 3]. Alternatively, any entire curve $f:{\mathbb{C}}\to X$ lifts to $\tilde X$, and the pull-back to ${\mathbb{C}}$ of the bounded, strictly psh function carried by $\tilde X$ has to be constant, showing that $f$ itself is constant. By definition, (ii) means that the image in $\pi_1(X)$ of the fundamental group of any subvariety $Z\subseteq X$ is infinite [@Kol §4.1], and is a direct consequence of the fact that manifolds of bounded type do not contain nontrivial compact subvarieties. 
According to the Shafarevich conjecture, $\tilde X$ should in fact be Stein; in case $\tilde X$ is a bounded domain of ${\mathbb{C}}^n$, this is indeed a classical result of Siegel [@Sie50] (see also [@Kob59 Theorem 6.2]). By another classical result, this time due to Kodaira [@Kod], any compact complex manifold $X$ admitting a Galois étale cover $\tilde X\to X$ biholomorphic to a bounded domain in ${\mathbb{C}}^n$ is projective, with $K_X$ ample. Indeed, the Bergman metric of $\tilde X$ is non-degenerate, and it descends to a positively curved metric on $K_X$. Our proof of (iii) and (iv) is a simple variant of this idea, inspired by [@CZ02]. For each subvariety $Y\subseteq X$ with desingularization $Z\to Y$ and induced Galois étale cover $\tilde Z\to Z$, we use basic Hörmander–Andreotti–Vesentini–Demailly $L^2$-estimates for ${\overline{\partial}}$ to show that the Bergman metric of $\tilde Z$ is generically non-degenerate. It then descends to a psh metric on $K_Z$, smooth and strictly psh on a nonempty Zariski open set, which is enough to conclude that $K_Z$ is big, by [@Bou02]. As a final comment, note that Kähler hyperbolic manifolds, *i.e.* compact Kähler manifolds $X$ carrying a Kähler metric ${\omega}$ whose pull-back to the universal cover $\pi:\tilde X\to X$ satisfies $\pi^*{\omega}=d{\alpha}$ with ${\alpha}$ bounded, also satisfy (i)–(iii) in Theorem A [@Gro]. It would be interesting to check Lang’s conjecture for such manifolds as well. This work was started during the first-named author’s stay at SAPIENZA Università di Roma. He is very grateful to the mathematics department for its hospitality, and to INdAM for financial support. Both authors would also like to thank Stefano Trapani for helpful discussions, in particular for pointing out the reference [@For]. 
The Bergman metric and manifolds of general type ================================================ Non-degeneration of the Bergman metric -------------------------------------- Recall that the *Bergman space* of a complex manifold $M$ is the separable Hilbert space ${\mathcal{H}}={\mathcal{H}}(M)$ of holomorphic forms $\eta\in H^0(M,K_M)$ such that $$\|\eta\|_{\mathcal{H}}^2:=i^{n^2}\int_{M}\eta\wedge\bar\eta<\infty,$$ with $n=\dim M$. Assuming ${\mathcal{H}}\ne\{0\}$, we get an induced (possibly singular) psh metric $h_M$ on $K_M$, invariant under $\operatorname{Aut}(M)$, characterized pointwise by $$h/h_M=\sup_{\eta\in{\mathcal{H}}\setminus\{0\}}\frac{|\eta|^2_h}{\|\eta\|_{\mathcal{H}}^2}=\sum_j |\eta_j|^2_h,$$ for any choice of smooth metric $h$ on $K_M$ and orthonormal basis $(\eta_j)$ for ${\mathcal{H}}$ (see for instance [@Kob98 §4.10]). The curvature current of $h_M$ is classically called the Bergman metric of $M$; it is a *bona fide* Kähler form precisely on the Zariski open subset of $M$ consisting of points at which ${\mathcal{H}}$ generates $1$-jets [@Kob98 Proposition 4.10.11]. We shall say that a complex manifold $M$ has a *non-degenerate (resp. generically non-degenerate) Bergman metric* if its Bergman space ${\mathcal{H}}$ generates $1$-jets at each (resp. some) point of $M$. We next recall the following standard consequence of $L^2$-estimates for ${\overline{\partial}}$. \[lem:1jet\] Let $M$ be a complete Kähler manifold with a bounded psh function ${\varphi}$. If ${\varphi}$ is strictly psh on $M$ (resp. at some point of $M$), then the Bergman metric of $M$ is non-degenerate (resp. generically non-degenerate). Pick a complete Kähler metric $\omega$ on $M$. Assume ${\varphi}$ strictly psh at $p\in M$, and fix a coordinate ball $(U,z)$ centered at $p$ with ${\varphi}$ strictly psh near $\overline U$. Pick also $\chi\in C^\infty_c(U)$ with $\chi\equiv 1$ near $p$. 
Since $\chi\log|z|$ is strictly psh in an open neighbourhood $V$ of $p$, smooth on $U\setminus\overline V$, and compactly supported in $U$, we can then choose $A\gg 1$ such that $$\psi:=(n+1)\chi\log|z|+A{\varphi}$$ is psh on $M$, with $dd^c{\psi}\ge{\omega}$ on $U$. Note that $\psi$ is also bounded above on $M$, ${\varphi}$ being assumed to be bounded. For an appropriate choice of holomorphic function $f$ on $U$, the smooth $(n,0)$-form $\eta:=\chi f\,dz_1\wedge\dots\wedge dz_n$, which is compactly supported in $U$ and holomorphic in a neighborhood of $p$, will have any prescribed jet at $p$. The $(n,1)$-form $\bar\partial\eta$ is compactly supported in $U$, and identically zero in a neighborhood of $p$, so that $|{\overline{\partial}}\eta|_{\omega}e^{-\psi}\in L^2(U)$. Since $dd^c{\psi}\ge{\omega}$ on $U$, [@Dem82 Théorème 5.1] yields an $L^2_{\mathrm{loc}}$ $(n,0)$-form $u$ on $M$ such that ${\overline{\partial}}u={\overline{\partial}}\eta$ and $$\label{equ:l2} i^{n^2} \int_M u\wedge\bar u\,e^{-2\psi}\le\int_U|{\overline{\partial}}\eta|^2_{\omega}e^{-2\psi}dV_{\omega}.$$ As a result, $v:=\eta-u$ is a holomorphic $n$-form on $M$. Since $u=\eta-v$ is holomorphic at $p$ and $\psi$ has an isolated singularity of type $(n+1)\log|z|$ at $p$, (\[equ:l2\]) forces $u$ to vanish to order $2$ at $p$, so that $v$ and $\eta$ have the same $1$-jet at $p$. Finally, (\[equ:l2\]) and the fact that $\psi$ is bounded above on $M$ shows that $u$ is $L^2$. Since $\eta$ is clearly $L^2$ as well, $v$ belongs to the Bergman space ${\mathcal{H}}$, with given $1$-jet at $p$, and we are done. Manifolds of general type ------------------------- Let $X$ be a compact complex manifold, $\tilde X\to X$ a Galois étale cover, and assume that the Bergman metric of $\tilde X$ is non-degenerate, so that the canonical metric $h_{\tilde X}$ on $K_{\tilde X}$ defined by ${\mathcal{H}}(\tilde X)$ is smooth, strictly psh. 
Being invariant under automorphisms, this metric descends to a smooth, strictly psh metric on $K_X$, and the latter is thus ample by [@Kod]. This argument, which goes back to the same paper by Kodaira, admits the following variant. \[lem:big\] Let $X$ be a compact Kähler manifold admitting a Galois étale cover $\tilde X\to X$ with generically non-degenerate Bergman metric. Then $X$ is projective and of general type. The assumption now means that the psh metric $h_{\tilde X}$ on $K_{\tilde X}$ is smooth and strictly psh on a non-empty Zariski open subset. It descends again to a psh metric on $K_X$, smooth and strictly psh on a non-empty Zariski open subset, and we conclude that $K_X$ is big by [@Bou02 §2.3] (see also [@BEGZ10 §1.5]). Being both Moishezon and Kähler, $X$ is then projective. Proof of Theorem A ================== Let $X$ be a compact Kähler manifold with an étale cover $\pi:\tilde X\to X$ of bounded type, which may be assumed to be Galois after replacing $\tilde X$ by the universal cover of $X$. Since $\tilde X$ is also complete Kähler, its Bergman metric is non-degenerate by Lemma \[lem:1jet\], and $X$ is thus projective and canonically polarized by [@Kod]. Now let $Y\subseteq X$ be an irreducible subvariety. On the one hand, pick any connected component $\tilde Y$ of the preimage $\pi^{-1}(Y)\subset\tilde X$, so that $\pi$ induces a Galois étale cover $\pi|_{\tilde Y}\colon\tilde Y\to Y$. On the other hand, let $\mu\colon Z\to Y$ be a projective modification with $Z$ smooth and $\mu$ isomorphic over $Y_{\operatorname{reg}}$, whose existence is guaranteed by Hironaka. Since $Y$ is Kähler and $\mu$ is projective, $Z$ is then a compact Kähler manifold. The fiber product $\tilde Z=Z\times_{Y }\tilde Y$ sits in the following diagram $$\xymatrix{ \tilde Z \ar[dr]^{\tilde\mu}\ar[dd]_\nu & &\\ & \tilde Y \ar@{^{(}->}[r] \ar[dd]^{\pi|_{\tilde Y}}& \tilde X\ar[dd]^\pi \\ Z \ar[dr]_\mu & &\\ & Y \ar@{^{(}->}[r] & X. 
}$$ Being a base change of a Galois étale cover, $\nu$ is a Galois étale cover, and $\tilde\mu$ is a resolution of singularities of $\tilde Y$. Since $\pi$ is étale, we have $\tilde Y_{\operatorname{reg}}=\pi^{-1}(Y_{\operatorname{reg}})$, and $\tilde\mu$ is an isomorphism over $\tilde Y_{\operatorname{reg}}$. The pull-back of ${\varphi}$ to $\tilde Z$ is thus a bounded psh function, strictly psh at any point $p\in\tilde\mu^{-1}(\tilde Y_{\operatorname{reg}})$. Since $Z$ is compact Kähler, $\tilde Z$ is complete Kähler. By Lemma \[lem:1jet\], the Bergman metric of $\tilde Z$ is generically non-degenerate, and $Z$ is thus of general type, by Lemma \[lem:big\]. [^1]: Both authors are partially supported by the ANR Programme: Défi de tous les savoirs (DS10) 2015, GRACK, Project ID: ANR-15-CE40-0003ANR. The second named author is also partially supported by the ANR Programme: Défi de tous les savoirs (DS10) 2016, FOLIAGE, Project ID: ANR-16-CE40-0008
--- abstract: 'We predict a huge interference effect contributing to the conductance through large ultra-clean quantum dots of chaotic shape. When a double-dot structure is made such that the dots are the mirror-image of each other, constructive interference can make a tunnel barrier located on the symmetry axis effectively transparent. We show (via theoretical analysis and numerical simulation) that this effect can be orders of magnitude larger than the well-known universal conductance fluctuations and weak-localization (both less than a conductance quantum). A small magnetic field destroys the effect, massively reducing the double-dot conductance; thus a magnetic field detector is obtained, with a similar sensitivity to a SQUID, but requiring no superconductors.' author: - 'Robert S. Whitney' - 'P. Marconcini' - 'M. Macucci' title: ' Symmetry causes a huge conductance peak in double quantum dots.' --- In the 1990s, interference effects (universal conductance fluctuations and weak-localization) were observed for electrons flowing through clean quantum dots [@Alhassid-review; @Marcus-chaos]. The chaotic shape of such dots makes these effects analogous to speckle-patterns in optics rather than to the regular interference patterns observed with Young’s slits or Fabry-Perot etalons. While such interference phenomena are beautiful, they have only a small effect on the properties of quantum dots coupled to multi-mode leads. Here we provide a theoretical analysis and numerical simulations showing that a much larger interference effect occurs in systems which are mirror-symmetric but otherwise chaotic [@Baranger-Mello; @Gopar-et-al; @Kopp-Schomerus-Rotter; @Gopar-Rotter-Schomerus], see Fig. \[Fig:butter-path\]. We show that the mirror symmetry induces interference that greatly enhances tunneling through a barrier located on the symmetry axis; it can make the barrier become effectively transparent. 
Thus an open double-dot system with an almost opaque tunnel barrier between the two dots will exhibit a huge peak in conductance when the two dots are the mirror image of each other, see Fig. \[Fig:numerics\]. This effect could be used to detect anything which breaks the mirror symmetry. For example, current 2D electron gas (2DEG) technology [@best-ultraclean-samples] could be used to construct a device whose resistance changes by a factor of ten, when an applied magnetic flux changes from zero to a fraction of a flux quantum in the double dot. This is a sensitivity similar to that of a SQUID, but it is achieved without superconductivity, making it easy to integrate with other 2DEG circuitry. ![\[Fig:butter-path\] A mirror-symmetric double dot, where the classical dynamics is highly chaotic. We call it a “butterfly double dot” to emphasize the left-right symmetry. Every classical path from the left lead to the right lead (solid line) which hits the barrier more than once, is part of a family of paths which are related to it by the mirror symmetry (dashed line). ](fig1.eps){width="6.5cm"} [**Origin of the conductance peak.**]{} The origin of the effect can be intuitively understood by looking at Fig. \[Fig:butter-path\]. Assume that electrons only follow the two paths shown (instead of an infinite number of different paths). Path 1 does not tunnel the first time it hits the barrier, but does tunnel the second time it hits it. Path 2 tunnels the first time it hits the barrier, but not the second time. Quantum mechanics gives the probability to go from the left lead to the right lead as $|r(\theta) t(\theta'){\rm e}^{{\rm i} S_1/\hbar} + t(\theta) r(\theta'){\rm e}^{{\rm i} S_2/\hbar}|^2$, where the scattering matrix of the tunnel barrier has amplitudes $r(\theta)$ and $t(\theta)$ for reflection and transmission at angle $\theta$. 
If there is no correlation between the classical actions of the two paths ($S_1$ and $S_2$), then the cross-term cancels upon averaging over energy, leaving the probability as $|r(\theta) t(\theta')|^2+|t(\theta) r(\theta')|^2$. In contrast, if there is a perfect mirror symmetry, then $S_2=S_1$, and the probability is $|r(\theta) t(\theta')+t(\theta) r(\theta')|^2$, which is significantly greater than $|r(\theta) t(\theta')|^2+|t(\theta) r(\theta')|^2$. Indeed, if we could drop the $\theta$-dependence of $r$ and $t$, the probability would be doubled by the constructive interference induced by the mirror symmetry. A path that hits the barrier $(n+1)$ times has $2^n$ partners with the same classical action (each path segment that begins and ends on the barrier can be reflected with respect to the barrier axis). However the conductance is [*not*]{} thereby enhanced by $2^n$, because (due to the nature of the barrier scattering matrix) there is also destructive interference when one path tunnels $(4j-2)$ times more than another (for integer $j$). The effect looks superficially like resonant tunneling. However, that only occurs when dots are weakly coupled to the leads, so that each dot has a peak for each level of the closed dot and the current flow is enhanced when two peaks are aligned. Instead, in our case each dot is well coupled to a lead (with $N\!\gg\!1$ modes), so the density of states in each dot is featureless (the broadening of each level is about $N$ times the level-spacing). Furthermore, resonant tunneling occurs at discrete energies, while our effect is largely energy independent. Another similar effect, called “reflectionless tunneling”, occurs when electrons are [*retro-reflected*]{} as holes, due to Andreev reflection from a superconductor [@reflectionless-tunnel-expt91; @reflectionless-tunnel-review]. 
However, this retro-reflection transforms the classical dynamics in the dot from chaotic to integrable [@Kosztin-Maslov-Goldbart], and large interference effects in integrable systems are not uncommon (consider a Fabry-Perot etalon). Here, the mirror symmetry induces a large interference effect without any retro-reflection and without a change in the nature of the classical dynamics (chaotic motion remains chaotic). ![\[Fig:numerics\] Average conductance as (a) a function of applied $B$-field (with the barrier on the symmetry axis), and as (b) a function of the barrier position (for zero $B$-field). The latter mimics the effect of gates that reduce the size of one dot relative to the other. The data points come from simulations performed for the structures shown in the insets. The curve comes from the semiclassical theory; in (b) there is no fitting parameter, while in (a) an unknown parameter (or order one) is adjusted to fit the data. The conductance of the tunnel barrier alone is $G_{\rm tb}$.](fig2.eps){width="8.4cm"} ![\[Fig:cond-ratio\] Plot of the ratio $\langle G_{\rm sym}\rangle/\langle G_{\rm asym}\rangle$, given by Eqs. (\[eq:Gsym\],\[eq:Gasym\]). The ratio grows as $T_{\rm tb}\to 0$ for all $P$ (although $\langle G_{\rm sym,asym}\rangle$ shrink). For given $T_{\rm tb}$, the ratio is maximal at $P=(1 -2T_{\rm tb}^{1/2})/(1-4T_{\rm tb})$. ](fig3.eps){width="6.6cm"} [**Semiclassical theory.**]{} Our analysis uses the semiclassical theory of transport through clean chaotic quantum dots [@Bar93]. 
The conductance through a system whose dimensions are much greater than a Fermi wavelength can be written as a double sum over classical paths, $\gamma$ and $\gamma'$, which both start at a point $y_0$ on the cross-section of the left lead and end at $y$ on the right lead: $$\begin{aligned} G &=& (2\pi \hbar)^{-1} G_0\sum_{\gamma,\gamma'} A_{\gamma}A_{\gamma'}^* \exp \big[{\rm i}(S_\gamma-S_{\gamma'})/\hbar \big] , \label{eq:conductance}\end{aligned}$$ where $G_0= 2e^2/h$ is the quantum of conductance, and $S_\gamma$ is the classical action of path $\gamma$. A tunnel barrier with left-right symmetry must have the scattering matrix $$\begin{aligned} {\cal S}_{\rm tb}(\theta) = {\rm e}^{{\rm i} \phi_{r(\theta)}} \left(\begin{array}{cc} |r(\theta)| & \pm {\rm i} |t(\theta)| \\ \pm {\rm i} |t(\theta)| & |r(\theta)|\end{array} \right) \label{eq:Stb}\end{aligned}$$ where $r(\theta)$ and $t(\theta)$ are reflection and transmission amplitudes for a plane wave at angle of incidence $\theta$. Keeping only the upper sign in ${\cal S}_{\rm tb}(\theta)$ [@footnote:sign], the amplitudes in Eq. (\[eq:conductance\]) are $$\begin{aligned} \label{eq:A} A_\gamma &=& \left(\frac{{\rm d} p_{y_0}}{{\rm d} y}\right)^{1/2}_{\gamma}\ \prod_{j=1}^{m_{\rm T}(\gamma)} {\rm i}|t(\theta_{{\rm T}j})| \prod_{k=1}^{m_{\rm R}(\gamma)} |r(\theta_{{\rm R}k})| \ \ \end{aligned}$$ where path $\gamma$ starts with a momentum across the left lead, $p_{y_0}$, and a total momentum given by the Fermi momentum, $p_{\rm F}$. This path reflects off the barrier $m_{\rm R}(\gamma)$ times (with the $k$th reflection at angle $\theta_{{\rm R}k}$) and transmits $m_{\rm T}(\gamma)$ times (at angles $\theta_{{\rm T}k}$) before hitting the right lead at $y$. The factor $({\rm d} p_{y_0}/{\rm d} y)_\gamma$ is the stability of the path that would exist if the barrier were absent for each transmission and impenetrable for each reflection. For most pairs with $\gamma\neq\gamma'$, the exponent in Eq. 
(\[eq:conductance\]) varies fast with energy, so that averaging over energy removes such pairs from the double sum. We keep only the main contributions surviving such averaging: those where $\gamma'$ can be constructed from $\gamma$ by means of the reflection with respect to the barrier axis (symmetry axis) of any path segment that begins and ends on the barrier, for which $S_{\gamma'}=S_\gamma$ at all energies (the paths thereby have the same stability $({\rm d} p_{y_0}/{\rm d} y)_\gamma$). Dropping weak-localization effects [@Richter-Sieber; @Baranger-Mello], the average conductance reads $$\begin{aligned} \langle G \rangle &=& {\frac{G_0} {2\pi \hbar}} \!\int_{\rm L} \! \! {\rm d} y_0 \int_{\rm R} \! {\rm d} y \sum_\gamma \left|{\frac{{\rm d} p_{y_0}} {{\rm d} y}}\right|_\gamma \left[{\prod_{m=1}^{n(\gamma)}} {\mathbb C}_{\gamma,m} {\mathbb S} \right]_{41} \ \ \label{eq:conductance-diag}\end{aligned}$$ where the product is ordered, and $n(\gamma)$ is the number of times the path $\gamma$ hits the barrier. The four-by-four matrix ${\mathbb S} = {\cal S}_{\rm tb} \otimes {\cal S}_{\rm tb}^\dagger$ gives the scattering of the two paths at the barrier. Thus ${\mathbb S}_{ij}$ gives the weight to go from state $j$ to state $i$, where we define state 1 as both paths in the left dot; state 2 as path $\gamma$ in the left dot and path $\gamma'$ in the right dot; state 3 as path $\gamma$ in the right dot and path $\gamma'$ in the left dot; and state 4 as both paths in the right dot. The matrices ${\mathbb C}_{\gamma,m}$ are diagonal with the following non-zero elements: $[{\mathbb C}_{\gamma,m}]_{11}=[{\mathbb C}_{\gamma,m}]_{44}=1$ and $[{\mathbb C}_{\gamma,m}]_{22}=[{\mathbb C}_{\gamma,m}]_{33}^* =\exp[{\rm i} \delta S_{\gamma,m}/\hbar]$. The action difference $\delta S_{\gamma,m}$ is that between path $\gamma$ in the left dot and its mirror image in the right dot between the $(m-1)$th and $m$th collision with the barrier. 
For perfect symmetry ${\mathbb C}_m = {\mathbb I}$ and then the product equals $[{\mathbb S}^n]_{41}$. We assume that the classical dynamics is sufficiently mixing that paths uniformly explore the dot between subsequent collisions with the barrier (or leads). Defining $\delta S_0/\hbar$ as the phase difference acquired in one time of flight across the dot, we have ${\mathbb C}_{\gamma,m} \simeq \exp[-\Gamma t_{\gamma,m}]$ where $\Gamma$ is a complex number, with ${\rm Im}[\Gamma]\simeq \langle \delta S_0 \rangle/(\tau_0\hbar)$ and ${\rm Re}[\Gamma]\simeq {\rm var}[\delta S_0]/(\tau_0\hbar^2)$. The probability that a path survives in the dot for a time $t$ without hitting either the barrier or the lead is ${\rm e}^{-t/\tau'_{\rm D}}$. Using this, we replace ${\mathbb C}_{\gamma,m}$ by its time-average ${\mathbb C} = \langle {\mathbb C}_{\gamma,m}\rangle$; its only non-zero elements are ${\mathbb C}_{11} ={\mathbb C}_{44}=1$ and ${\mathbb C}_{22} ={\mathbb C}_{33}^*= [1+\Gamma \tau'_{\rm D}]^{-1}$. Thus the product in Eq. (\[eq:conductance-diag\]) reduces to $({\mathbb C} {\mathbb S})^n$. The sum is over all $\gamma$s that hit the barrier $n$ times, and is independent of $y_0,y$. To proceed, we define $\tilde{\mathbb S} \equiv {\mathbb C}^{1/2}{\mathbb S} {\mathbb C}^{1/2}$; it is simple to show that $\big[({\mathbb C}{\mathbb S})^n\big]_{41}= [\tilde{\mathbb S}^n]_{41}$ for all $n$. Then, defining $P= W_{\rm tb}/(W_{\rm tb}+W)$ as the probability for a path to hit the $W_{\rm tb}$-wide barrier before escaping into the $W$-wide lead, we find that $\langle G \rangle = G_0 N (1-P) \sum_{n=1}^\infty P^n \big[\tilde{\mathbb S}^n \big]_{41}$, where $N=p_{\rm F}W/( \pi \hbar)$ is the number of modes in a lead. Upon finding the matrix, ${\mathbb U}$, which diagonalizes $\tilde{\mathbb S}$, one can easily evaluate the geometric series in $n$. 
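The geometric-series construction can be checked numerically. The sketch below (a sketch, not the authors' code: a single angle-averaged transmission $T_{\rm tb}$ replaces the $\theta$-dependent amplitudes, and real $\Gamma$ is assumed) builds ${\mathbb S}$, forms $\tilde{\mathbb S}$, and sums the series with a matrix inverse; the 1-based element $[\cdot]_{41}$ is `[3, 0]` in the code. For $\Gamma=0$ and $\Gamma\to\infty$ it reproduces Eqs. (\[eq:Gsym\]) and (\[eq:Gasym\]) below.

```python
import numpy as np

def avg_G(P, T_tb, gamma_tau):
    """<G>/(G0 N) = (1-P) * sum_{n>=1} P^n [S~^n]_{41}, with
    S~ = C^{1/2} (S_tb x S_tb*) C^{1/2}, summed as a geometric series."""
    r, t = np.sqrt(1.0 - T_tb), np.sqrt(T_tb)
    s_tb = np.array([[r, 1j * t], [1j * t, r]])    # symmetric-barrier S-matrix
    S = np.kron(s_tb, s_tb.conj())                 # two-path states 1..4 of the text
    c = 1.0 / (1.0 + gamma_tau)                    # C_22 = C_33 = (1 + Gamma tau'_D)^-1
    C_half = np.diag([1.0, np.sqrt(c), np.sqrt(c), 1.0]).astype(complex)
    M = P * (C_half @ S @ C_half)
    series = M @ np.linalg.inv(np.eye(4) - M)      # sum_{n>=1} M^n (converges: P < 1)
    return ((1.0 - P) * series[3, 0]).real

P, T = 0.9, 1e-2
g_sym = P * (1 + P) * T / ((1 - P)**2 + 4 * P * T)   # Eq. (Gsym), Gamma = 0
g_asym = P * T / (1 - P + 2 * P * T)                 # Eq. (Gasym), large Gamma
assert np.isclose(avg_G(P, T, 0.0), g_sym)
assert np.isclose(avg_G(P, T, 1e12), g_asym)
```

The matrix inverse replaces the explicit diagonalization of $\tilde{\mathbb S}$, but evaluates the same series.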
This analysis gives the following average conductance of the symmetric double dot ($\Gamma=0$), $$\begin{aligned} \langle G_{\rm sym} \rangle &=& G_0N P (1+P)T_{\rm tb}/ [(1-P)^2 + 4PT_{\rm tb}],\ \label{eq:Gsym}\end{aligned}$$ where $T_{\rm tb}$ is the tunneling probability, $|t(\theta)|^2$, averaged over all $\theta$. For $T_{\rm tb} < (1-P)/2$ (i.e. for $G_{\rm tb}$, the conductance of a barrier with transmission $T_{\rm tb}$ in a waveguide of width $W_{\rm tb}$, less than P times the conductance of the series of the two constrictions), one finds that $\langle G_{\rm sym}\rangle$ is greater (often much greater) than the tunnel barrier conductance, $G_{\rm tb}$. Thus symmetrically placing constrictions on either side of the barrier can strongly [*enhance*]{} its conductance (this is a stark example of the fact that quantum conductances in series are not additive). In contrast, for the asymmetric double dot (large $\Gamma$) we have $$\begin{aligned} \langle G_{\rm asym} \rangle &=& G_0N PT_{\rm tb}/[1-P+2PT_{\rm tb}], \label{eq:Gasym}\end{aligned}$$ which is always less than $G_{\rm tb}$. The ratio $\langle G_{\rm sym} \rangle/\langle G_{\rm asym} \rangle$ is plotted in Fig. \[Fig:cond-ratio\]. For any finite $T_{\rm tb}$, the ratio is maximal at $P=(1 -2T_{\rm tb}^{1/2})/(1-4T_{\rm tb})$. This choice of $P$ gives $\langle G_{\rm sym} \rangle= G_0 N/4$ and (for small $T_{\rm tb}$) $\langle G_{\rm asym} \rangle\simeq T_{\rm tb}^{1/2} G_0 N/2$. Thus the conductance ratio can be arbitrarily large for a highly opaque tunnel barrier. [**Peak shape with symmetry-breaking.**]{} The effect of the mirror symmetry is suppressed by a perpendicular magnetic field, $B$, or by moving the boundary of one dot by a distance $\delta L$. It is also suppressed by disorder (defined by a mean free flight time between subsequent scatters from disorder, $\tau_{\rm mf}$) or decoherence (defined by a decoherence time, $\tau_\varphi$). 
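Both closed forms and the quoted optimum are easy to probe directly (a sketch with the illustrative value $T_{\rm tb}=10^{-3}$; conductances are in units of $G_0 N$):

```python
import numpy as np

T = 1e-3                                     # barrier transparency T_tb (illustrative)
P = np.linspace(1e-3, 1 - 1e-3, 500001)
g_sym = P * (1 + P) * T / ((1 - P)**2 + 4 * P * T)   # Eq. (Gsym)
g_asym = P * T / (1 - P + 2 * P * T)                 # Eq. (Gasym)
ratio = g_sym / g_asym

P_star = (1 - 2 * np.sqrt(T)) / (1 - 4 * T)          # optimum quoted in the text
i = np.argmax(ratio)
assert abs(P[i] - P_star) < 1e-3                     # numerical optimum matches
# at P = P_star: <G_sym> -> G0 N / 4 and <G_asym> -> sqrt(T) G0 N / 2 for small T
assert abs(g_sym[i] - 0.25) < 0.01
assert abs(g_asym[i] - np.sqrt(T) / 2) < 0.1 * np.sqrt(T) / 2
```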
The suppression can be quantified in terms of the following parameters: $$\begin{aligned} \Gamma_B &=& \eta(eB {\cal A}/h)^2/\tau_0 , \label{eq:Gamma_B} \\ \Gamma_{\rm boundary} &=& \tau_0^{-1} \big({\rm var}[\delta L]/\lambda_{\rm F}^2 +{\rm i} \langle \delta L \rangle/\lambda_{\rm F}\big) , \\ \Gamma_{\rm mf} &=& \tau_{\rm mf}^{-1} , \qquad \Gamma_{\rm \varphi} = \tau_{\varphi}^{-1} ,\end{aligned}$$ where $e$ is the electronic charge, ${\cal A}$ is the area of one dot, and $\tau_0$ is the time to cross the dot. In $\Gamma_B$, the constant $\eta$ is of order one, but is hard to estimate [@footnote:kappa]. For $\Gamma_{\rm boundary}$, we have $\langle\delta L\rangle \sim x\xi$ and ${\rm var}[\delta L] \sim x^2(\xi-\xi^2)$, if a fraction $\xi$ of the left dot is deformed outwards by a distance $x$. For multiple asymmetries, the total $\Gamma$ is the sum of the individual $\Gamma$s given above. For real $\Gamma$, $$\begin{aligned} \langle G(\Gamma) \rangle = \langle G_{\rm asym} \rangle + {\frac {\langle G_{\rm sym} \rangle - \langle G_{\rm asym} \rangle} {1+ F(P,T_{\rm tb}) \times\Gamma\tau'_{\rm D}}}, \label{eq:peak-shape-realGamma}\end{aligned}$$ where $F(P,T_{\rm tb}) =\langle G_{\rm sym} \rangle/[\langle G_{\rm asym} \rangle(1+P)] $, and $\tau'_{\rm D} \sim \pi L\tau_0/(W+W_{\rm tb})$ is the typical time a path spends in one dot before hitting a lead or the barrier. For the large conductance ratio (see below Eq. (\[eq:Gasym\])), $F(P,T_{\rm tb})\tau'_{\rm D}$ is about half the dwell time in the double-dot, $\tau_{\rm D}\sim (1-P)^{-1}\tau'_{\rm D}$. Thus the conductance is a Lorentzian function of the $B$-field, with similar width to the weak-localization dip in the same system with no barrier [@Richter-Sieber]. This makes the system an extremely sensitive detector of magnetic fields and deformations of the confining potential (for example due to the movement of charge near the double dot). 
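The Lorentzian crossover of Eq. (\[eq:peak-shape-realGamma\]) can be sanity-checked directly (a sketch with illustrative $P$ and $T_{\rm tb}$; by construction the conductance falls to the midpoint of $\langle G_{\rm sym}\rangle$ and $\langle G_{\rm asym}\rangle$ at $\Gamma\tau'_{\rm D}=1/F$):

```python
import numpy as np

def G_of_gamma(gamma_tau, g_sym, g_asym, P):
    """Peak shape for real Gamma: gamma_tau = Gamma * tau'_D."""
    F = g_sym / (g_asym * (1 + P))
    return g_asym + (g_sym - g_asym) / (1 + F * gamma_tau)

P, T = 0.9, 1e-2
g_sym = P * (1 + P) * T / ((1 - P)**2 + 4 * P * T)
g_asym = P * T / (1 - P + 2 * P * T)
F = g_sym / (g_asym * (1 + P))

assert np.isclose(G_of_gamma(0.0, g_sym, g_asym, P), g_sym)           # symmetric limit
assert np.isclose(G_of_gamma(1e9, g_sym, g_asym, P), g_asym, rtol=1e-6)  # asymmetric limit
mid = G_of_gamma(1.0 / F, g_sym, g_asym, P)                           # half-suppression
assert np.isclose(mid, (g_sym + g_asym) / 2)
```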
Intriguingly, the peak remains when the leads are at different positions on the two dots; it is simply suppressed with an asymmetry parameter $\Gamma_{\rm lead} =(1-P)/\tau'_{\rm D}$. For complex $\Gamma$, as in Fig. \[Fig:numerics\](b), we have no analytic result for $\langle G(\Gamma) \rangle$, but we can get it by numerically diagonalizing the 4-by-4 matrix, $\tilde{\mathbb S}$. In Fig. \[Fig:numerics\](b), the data and the theory curve drop below $\langle G_{\rm asym}\rangle=0.23G_0$. We will show elsewhere that this is due to destructive interference. The conductance rises back up to $\langle G_{\rm asym}\rangle$ when the barrier is moved a distance of order a wavelength. [**Proposal for experimental observation.**]{} Consider making such a double-dot in an ultra-clean two-dimensional electron gas (2DEG) at the lowest achievable temperatures [@best-ultraclean-samples]. A finger gate could define the barrier [@barrier-finger-gate], with split gates controlling the lead widths. To maximize the effect for a 2DEG with a mean free path [@best-ultraclean-samples] of order $500\,\mu$m, each dot (see Fig. 2) can have size $L=\,4\,\mu$m (circumference $\sim 3.6 L\sim 15\,\mu$m) with 12 mode leads ($W = 310\,{\rm nm} \sim 6\lambda_{\rm F}$). A barrier with $T_{\rm tb}= 1.48 \times 10^{-3}$ and width $W_{\rm tb} = L$ gives $P = 0.93$ and $\tau'_{\rm D} \sim 3.5\tau_0$. In this case, $\langle G_{\rm sym} \rangle \simeq 14\langle G_{\rm asym} \rangle \simeq 3.2 G_0$ (resistance $R_{\rm sym} \sim 5\,{\rm k}\Omega$). The crossover from $\langle G_{\rm sym} \rangle$ to $\langle G_{\rm asym} \rangle$ happens for $\Gamma \simeq 0.14/\tau'_{\rm D} \sim 0.04/\tau_0$. At low temperatures ($\tau_{\varphi} > \tau_{\rm mf}$), disorder will suppress the peak to about 83% of $\langle G_{\rm sym} \rangle$, since $F(P,T_{\rm tb}) \Gamma_{\rm mf}\tau'_{\rm D} \sim 0.2$. 
Thus the double-dot conductance will drop by an order of magnitude if $10\%$ of the boundary of one dot is moved by $\lambda_{\rm F}/2$, or if a B-field is applied such that a fifth of a flux-quantum threads each dot. The latter is a B-field sensitivity similar to that of a SQUID. The main experimental challenge will be to define dots that are mirror-symmetric on a scale significantly less than $\lambda_{\rm F} \sim 50\,{\rm nm}$. We suggest that each dot should be defined by means of multiple gates (made as symmetric as possible); their voltages can then be tuned to maximize the symmetry. We propose the following protocol for this maximization. Starting with very wide leads, so that $P$ is far from unity and the conductance peak is very broad, one scans the dot-defining gate voltages over a broad range to reveal the approximate symmetry point (maximal conductance). One then narrows the leads (increasing $P$), making the conductance peak higher and narrower, and adjusts the dot-defining gate voltages to again maximize the conductance. Repeating this procedure should give the symmetry point with increasing accuracy, until one reaches the limit imposed by inherent asymmetries (disorder, etc). [**Numerical simulations**]{}. For the above proposal we took $W_{\rm tb} = L$ and only 12 lead modes. This calls into question two assumptions in the theory. Firstly, we can no longer assume that paths in the dot will be well randomized between collisions with the barrier, since $\tau'_{\rm D} \sim 3.6\tau_0$. Secondly, we may not be able to neglect other interference effects (weak-localization, etc), since $\langle G \rangle$ is at most a few $G_0$. Thus, to verify that the effect is as expected in such a parameter regime, we numerically simulated a stadium billiard containing a barrier with $T_{\rm tb}=1.48 \times 10^{-3}$, see Fig. \[Fig:numerics\].
We use the recursive Green’s function technique [@papmio] working in real space for the direction of current propagation (cut into multiple slices) and in mode space for the transverse direction. Magnetic fields are in a Landau gauge where the vector potential is oriented in the transverse direction [@gvr]. The numbers of longitudinal slices and transverse modes were increased until the results converged. The data shown here are for 836 longitudinal slices (200 of which are in the outer leads) and 200 transverse modes. We mimic thermal smearing, at a temperature of 23 mK, by averaging over 44 energies uniformly distributed over an interval of 0.02 meV around the Fermi energy of 9.02 meV. We use the effective mass in GaAs of 0.067$m_0$. The simulation (data points in Fig. \[Fig:numerics\]) clearly shows that the effect exists in this regime. Indeed, despite the assumptions in its derivation, the theory (solid curve) agrees surprisingly well with the numerical data. [**Concluding comment.**]{} The conductance peak is [*not*]{} destroyed by bias voltages or temperatures greater than $\hbar/\tau_{\rm D}$, because the mirror symmetry is present at all energies and not just at the chemical potential (unlike the electron-hole symmetry for reflectionless tunneling into a superconductor). Large biases or temperatures should still be avoided, as they increase the decoherence. We thank M. Houzet and P. Brouwer for discussions. [9]{} Y. Alhassid, Rev. Mod. Phys. [**72**]{}, 895 (2000). C. M. Marcus [*et al.*]{}, Chaos, Solitons and Fractals [**8**]{}, 1261 (1997). H. U. Baranger and P. A. Mello, Phys. Rev. B [**54**]{}, R14297 (1996). V. A. Gopar, M. Martínez, P. A. Mello, and H. U. Baranger, J. Phys. A: Math. Gen. [**29**]{}, 881 (1996). V. A. Gopar, S. Rotter, and H. Schomerus, Phys. Rev. B [**73**]{}, 165308 (2006). M. Kopp, H. Schomerus, and S. Rotter, Phys. Rev. B [**78**]{}, 075312 (2008). I. P. Radu, J. B. Miller, C. M. Marcus, M. A. Kastner, L. N. Pfeiffer, and K. W.
West, arXiv:0803.3530 (2008). A. Kastalsky, A. W. Kleinsasser, L. H. Greene, R. Bhat, F. P. Milliken, and J. P. Harbison, Phys. Rev. Lett. [**67**]{}, 3026 (1991). C. W. J. Beenakker, Rev. Mod. Phys. [**69**]{}, 731 (1997). I. Kosztin, D. L. Maslov, and P. M. Goldbart, Phys. Rev. Lett. [**75**]{}, 1735 (1995). H. U. Baranger, R. A. Jalabert, and A. D. Stone, Phys. Rev. Lett. [**70**]{}, 3876 (1993); Chaos [**3**]{}, 665 (1993). R. S. Whitney, Phys. Rev. B [**75**]{}, 235404 (2007). It applies at all $\theta$ and all energies for barriers with smooth cross-sections (WKB analysis) or rectangular cross-sections. K. Richter and M. Sieber, Phys. Rev. Lett. [**89**]{}, 206801 (2002). In Fig. \[Fig:numerics\](a) we choose $\eta=0.34$ to best fit the data. O. A. Tkachenko, V. A. Tkachenko, D. G. Baksheyev, C.-T. Liang, M. Y. Simmons, C. G. Smith, D. A. Ritchie, G.-H. Kim, and M. Pepper, J. Phys.: Condens. Matter [**13**]{}, 9515 (2001). M. Macucci, A. Galick, and U. Ravaioli, Phys. Rev. B [**52**]{}, 5210 (1995). M. Governale and D. Boese, Appl. Phys. Lett. [**77**]{}, 3215 (2000). Appendix (only on arXiv version) ================================ [**Comment on the conductance ratio.**]{} It is instructive to consider particular limits of Eqs. (\[eq:Gsym\],\[eq:Gasym\]). At the symmetry point we observe that the more times a path returns to the barrier, the more transparent the interference makes the barrier. Thus if the path takes an infinite time to escape the double dot ($P=1$), then the barrier becomes completely transparent. However this does not generate a large conductance peak, since for $P \to 1$ the probability to go from the left lead to the right lead is a half for any finite barrier transparency, thus $\langle G_{\rm sym} \rangle/\langle G_{\rm asym} \rangle \to 1$. This is the reason for the maximum conductance peak occurring when $P$ is slightly less than one (as visible in Fig. \[Fig:cond-ratio\]). 
In the opposite limit, $P \to 0$, both $\langle G_{\rm sym} \rangle$ and $\langle G_{\rm asym} \rangle$ reduce to the conductance of the barrier alone. Since no path hits the barrier more than once, there can be no interference-induced enhancement of tunneling. [**Comment on fitting $B$-field dependence.**]{} To make the theory quantitative (for comparison with the numerical simulations) we assumed that the area enclosed by each straight path segment from one point on the boundary of the left dot to another is uncorrelated with the next. We define this directed area, $a$, as that of the triangle made by the two ends of the path segment and the mid-point of the barrier. Assuming $(eBa/\hbar) \ll 1$ we have $\Gamma_B= 2\kappa(eB{\cal A}/h)^2/\tau_0$, where the system-specific parameter $\kappa={\rm var}(a/{\cal A})$. If $a$ were uniformly distributed over the range from $-{\cal A}/2$ to ${\cal A}/2$, we would get $\kappa=1/12$. In contrast, if the distribution were strongly peaked at $-{\cal A}/2$ and ${\cal A}/2$, we would get $\kappa$ as big as $1/2$. We believe only a ray-tracing simulation of the cavity would yield an accurate value for $\kappa$, thus for the theory curve in Fig. \[Fig:numerics\]a we treat $\kappa$ as a fitting parameter. The Lorentzian width of $40\,\mu{\rm T}$ corresponds to $\kappa= 0.17$, which is within the range estimated above. [**Comment on fitting barrier-position dependence.**]{} In the case shown in Fig. \[Fig:numerics\]b, the mirror symmetry is only broken at the points where a path segment begins or ends at the tunnel barrier. Thus a path segment only acquires a phase difference from its mirror image at the places where it touches the barrier. Paths acquire more phase in the left dot than in the right dot (since the barrier is moved to the right). This phase difference has a very different form from that induced by a $B$-field (where the phase difference grows with the time a path spends in one of the dots).
Taking this into account, we get the solid curve in Fig. \[Fig:numerics\](b) without any fitting parameters.
--- abstract: 'Spinel structured compounds, $AB_2O_4$, are special because of their exotic multiferroic properties. In $ACr_2O_4$ ($A$=$Co$, $Mn$, $Fe$), a switchable polarization has been observed experimentally due to a non-collinear magnetic spin order. In this article, we demonstrate the microscopic origin behind such magnetic spin order, hysteresis, polarisation and the so-called magnetic compensation effect in $ACr_2O_4$ ($A$=$Co$, $Mn$, $Fe$, $Ni$) using Monte Carlo simulation. With a careful choice of the exchange interactions, we are able to explain various experimental findings such as the magnetization vs. temperature (T) behavior, conical stability, unique magnetic ordering and polarization in a representative compound, $CoCr_2O_4$, which is the best known multiferroic compound in the $AB_2O_4$ spinel family. We have also studied the effect of $Fe$-substitution in $CoCr_2O_4$, which gives rise to a few exotic phenomena such as magnetic compensation and a sign-reversible exchange bias effect. These effects are investigated using effective interactions mimicking the effect of substitution. Two other compounds in this family, $CoMn_2O_4$ and $CoFe_2O_4$, are also studied; no conical magnetic order or polarisation is observed in them, and hence they provide a distinct contrast. All polarisation values are calculated within the spin-current model. This model has certain limitations and works well only at low temperature and low magnetic field, but despite these limitations it reproduces sign-reversible exchange bias and magnetic-compensation-like phenomena quite well.'
author: - Debashish Das - Aftab Alam title: 'Exotic multiferroic properties of spinel structured $AB_2O_4$ compounds: A Monte Carlo Study' --- Introduction ============ $CoCr_2O_4$ is a classic example of a spinel which is observed to show a new kind of polarisation at very low temperature, whose origin lies in the formation of a conical magnetic order.[@Pol-org] The application of a magnetic field manipulates the cone angle and hence the coupling between the ferromagnetic and ferroelectric properties. Similar multiferroism has been reported for other spinel compounds such as $MnCr_2O_4$,[@Tomiyasu] $NiCr_2O_4$,[@ACr2O4-pol] and $FeCr_2O_4$.[@FeCr2O4-pol] These four spinels possess both polarisation and magnetism of spin origin. There are, however, several other compounds, $RMnO_3$ (R= Tb, Dy), in the perovskite family where the polarisation is due to a spin spiral developed in the plane.[@RMnO3_1; @RMnO3_2] Such a compound therefore does not have any net magnetization (M). The conical magnetic order in $ACr_2O_4$, by contrast, adds an extra magnetization along the cone axis and makes these compounds much more interesting. There have been several experiments on this class of $AB_2O_4$ compounds, which provide useful information about their novel properties. Yamasaki *et al.*[@Pol-org] reported the signature of polarisation in $CoCr_2O_4$ below $T_s$=27 K. They also showed how the polarisation can be controlled using a magnetic field. Neutron scattering experiments on $ACr_2O_4$ \[$A$=$Co$, $Mn$\] were first performed by Tomiyasu *et al.*,[@Tomiyasu] who estimated the cone angle by analyzing the experimental intensity of satellite reflections. They also proposed a unique concept of “Weak Magnetic Geometrical Frustration" (MGF) in spinel $AB_2O_4$, where both the $A$ and $B$ cations are magnetic. Such weak MGF is responsible for the short-range conical spiral.
Using neutron diffraction, Chang *et al.*[@incommensurate] predicted a transformation from an incommensurate conical spin order to a commensurate order in $CoCr_2O_4$ at the lowest temperatures. A complete understanding of such a transformation is lacking in the literature. The spin current model[@spin-current] is one simplistic approach which provides some conceptual advancement regarding the incommensurate conical spin order; however, a firm understanding of the incommensurate-to-commensurate transformation requires a better model. This class of compounds shows a few other phenomena such as negative magnetization, magnetic compensation and sign-reversible exchange bias at a critical temperature called the magnetic compensation temperature ($T_{comp}$).[@padam-Fe; @ram-Mn; @Junmoni-Mn-Fe; @Junmoni-Ni-Al; @Junmoni-Ni-Fe1; @Junmoni-Ni-Fe2] This is a temperature at which the different sublattice magnetizations cancel each other to fully compensate the net magnetization (M=0). Interestingly, the magnetization changes sign if one goes beyond this temperature. Depending on the substituting element, in some cases the magnetic compensation is associated with the exchange bias phenomenon. Such unique phenomena are very useful for magnetic storage devices, which require a fixed reference magnetization direction in space for the switching magnetic field. Compounds having exchange bias are highly suitable for such devices because their hysteresis is not centred at M=0, H=0, but rather shifted towards the +ve or -ve side. Although the phenomenon of exchange bias is well understood in various compounds including FM/AFM layered compounds,[@EB] the same is not true for the substituted spinel compounds which crystallize in a single phase. A deeper understanding of all these exotic phenomena is highly desired.
Using the generalized Luttinger-Tisza[@GLT] method, a conical ground state can be found theoretically,[@LKDM] by defining a parameter $u$ $$u=\frac{4J_{BB}S_B}{3J_{AB}S_A}$$ Here $S_A$ and $S_B$ are the A-site (tetrahedral) and B-site (octahedral) magnetic spins, and $J_{AB}$ and $J_{BB}$ represent the exchange interactions between first-nearest-neighbor $A$-$B$ and $B$-$B$ pairs respectively. According to the theory, a stable conical spin order is possible only if $u$ lies between $0.88$ and $1.298$. Yan *et al.*[@Yao-2009; @Yao-2009-2; @Yao-2010; @Yao-2011; @Yao-2013; @Yao-2017] have studied the conical spin order by performing simulations on a 3-dimensional spinel lattice. They show that $\hat{J}_{BB}$ and $\hat{J}_{AA}$ enhance the spin frustration, and that single-ion anisotropy helps to stabilize the cone state. Here $\hat{J}_{ij}=J_{ij}|\overrightarrow{S_i}|.|\overrightarrow{S_j}|$ is called the magnetic coupling constant. In this article, the conical spin order of $ACr_2O_4$ ($A$=$Mn$, $Fe$, $Co$ and $Ni$) along with $CoMn_2O_4$ and $CoFe_2O_4$ is studied using a combined Density Functional Theory (DFT) and Monte Carlo based Metropolis algorithm. The latter two compounds do not show conical spin order. For these six compounds, we have calculated the exchange interactions using self-consistent Density Functional Theory. We have then varied the interaction parameters and found a new set of exchange interactions which best fit the experimental magnetization and hysteresis curves. For the sake of comparison, the investigation of the magnetic ordering, magnetization, hysteresis curve, and ground state spin order was carried out using both sets of exchange interactions. We have also simulated the magnetic compensation and exchange bias behavior around $T_{comp}$.
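Since the couplings quoted in Table \[table1\] are the magnetic coupling constants $\hat{J}_{ij}=J_{ij}|\overrightarrow{S_i}|.|\overrightarrow{S_j}|$, the stability parameter reduces to $u=4\hat{J}_{BB}/(3\hat{J}_{AB})$, independent of the spin lengths. A minimal numerical check against the $CoCr_2O_4$ values of Table \[table1\] (a sketch):

```python
def u_param(J_BB_hat, J_AB_hat):
    """Stability parameter u = 4 J_BB S_B / (3 J_AB S_A).
    With J^hat_ij = J_ij |S_i||S_j| this reduces to 4 J^hat_BB / (3 J^hat_AB)."""
    return 4.0 * J_BB_hat / (3.0 * J_AB_hat)

def conical_stable(u):
    """LKDM criterion: stable conical order for 0.88 < u < 1.298."""
    return 0.88 < u < 1.298

# CoCr2O4 couplings (meV) from Table I: set-1 (DFT) and set-2 (fitted)
assert round(u_param(-3.01, -3.26), 2) == 1.23
assert round(u_param(-4.25, -2.83), 2) == 2.00
assert conical_stable(u_param(-3.01, -3.26))       # set-1 value lies in the range
assert not conical_stable(u_param(-4.25, -2.83))   # set-2 value lies outside it
```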
We found a set of effective exchange interaction pairs for the system $CoCr_2O_4$ for which the magnetization resembles that of $Fe$-substituted $CoCr_2O_4$, showing a magnetic compensation effect followed by a sign reversal of M. Using these sets of exchange interactions, we are able to predict the sign-reversible exchange bias around $T_{comp}$, as observed experimentally.[@padam-Fe]

Table \[table1\]: The two sets of magnetic coupling constants (set-1: derived from DFT; set-2: fitted to the experimental magnetization), the stability parameter $u$, the calculated and experimental site moments, and the calculated transition temperature $T_c$. For $CoMn_2O_4$, (I) and (O) denote the in-plane and out-of-plane $\hat{J}_{BB}$ couplings and the corresponding $u$ values; for $CoFe_2O_4$, the pair type is given in parentheses.

| System | Set | $\hat{J}_{BB}$ (meV) | $\hat{J}_{AB}$ (meV) | $\hat{J}_{AA}$ (meV) | $u$ | $M_A$ ($\mu_B$) | $M_B$ ($\mu_B$) | $M_A$ expt. ($\mu_B$) | $M_B$ expt. ($\mu_B$) | $T_c$ (K) |
|---|---|---|---|---|---|---|---|---|---|---|
| $MnCr_2O_4$ | set 1 | -1.74 | -1.28 | -1.58 | 1.81 | -4.50 | 3.01 | -5 | 3 | 40 |
| | set 2 | -0.97 | -0.85 | 0.00 | 1.52 | | | | | 42 |
| $FeCr_2O_4$ | set 1 | -2.88 | -2.83 | -0.67 | 1.35 | -3.69 | 2.95 | -4 | 3 | 117 |
| | set 2 | -1.38 | -1.94 | -0.67 | 0.95 | | | | | 103 |
| $CoCr_2O_4$ | set 1 | -3.01 | -3.26 | -0.56 | 1.23 | -2.60 | 3.04 | -3 | 3 | 145 |
| | set 2 | -4.25 | -2.83 | 0.00 | 2.00 | | | | | 94 |
| $NiCr_2O_4$ | set 1 | -5.36 | -3.94 | -1.64 | 1.81 | -1.69 | 2.93 | -2 | 3 | 24 |
| | set 2 | -3.75 | -2.38 | 0.00 | 2.10 | | | | | 80 |
| $CoMn_2O_4$ | set 1 | -9.46 (I), -1.05 (O) | -3.53 | -0.29 | 3.57 (I), 0.40 (O) | -2.68 | 3.81 | -3 | 4 | 52 |
| | set 2 | -5.46 (I), -3.05 (O) | -3.53 | -0.29 | 2.06 (I), 1.15 (O) | -2.68 | 3.81 | -3 | 4 | 60 |
| $CoFe_2O_4$ | set 1 | 0.08 (Co-Co), -4.77 (Fe-Fe), 0.84 (Fe-Co) | -10.43 (Fe-Co), -21.65 (Fe-Fe) | -2.06 | 0.29 | -3.98 | 2.66, 4.10 | -4 | 3, 4 | 870 |
| | set 2 | 0.08 (Co-Co), -4.77 (Fe-Fe), 0.84 (Fe-Co) | -10.00 (Fe-Co), -10.00 (Fe-Fe) | -2.06 | 0.63 | | | -4 | 3, 4 | 840 |

Methodology =========== For the calculations, we have generated a 3-dimensional spinel structure involving a 7$\times$7$\times$7 supercell of the 2-formula-unit cell, which contains a total of 2058 magnetic atoms. Oxygen atoms are removed while generating the supercell as they do not contribute to the magnetisation.
We define the energy equation of the form $$E=-\sum_{<i,j>} J_{ij} \overrightarrow{S_i}.\overrightarrow{S_j}-\overrightarrow{M}.\overrightarrow{h_m}-\overrightarrow{P}.\overrightarrow{h_e}$$ where $\overrightarrow{P}$ and $\overrightarrow{M}$ are the polarisation and magnetisation respectively, defined as $$\overrightarrow{P}=a.\sum_{<i,j>}\overrightarrow{e_{ij}}\times\left(\overrightarrow{S_i}\times\overrightarrow{S_j}\right)$$ and $$\overrightarrow{M}=\sum_i \left( \sqrt{(S_i^x)^2+(S_i^y)^2+(S_i^z)^2}\right).g.\overrightarrow{\mu_B}$$ where $\overrightarrow{e}_{ij}$ is the vector connecting $\overrightarrow{S}_i$ and $\overrightarrow{S}_j$, ’$a$’ is a proportionality constant and $g$ is the Landé $g$-factor, taken as 2. We solve this energy equation by Monte Carlo simulation, where the spins are treated as classical vectors that are updated by the Metropolis algorithm. 100,000 steps are taken for equilibration, and the average over the last 5000 steps is used to calculate the physical quantities. $\sum_{<i,j>}$ is a summation over the nearest $B$-$B$, $A$-$B$ and $A$-$A$ neighbors, while higher-order neighbors are neglected. For the calculation of the temperature dependence of the magnetization, we have taken 5000 Monte Carlo steps for each temperature, and the temperature is increased in steps of 1 K.
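A minimal sketch of this scheme for a 1D chain of classical spins (illustrative only: the actual simulation uses the 3D spinel lattice, three neighbor shells and $10^5$ equilibration steps; here $\overrightarrow{e}_{ij}$ is taken as a unit vector along the chain and $a=1$). It also illustrates that the spin-current polarisation of Eq. (3) is nonzero for a cycloidal spiral but vanishes for a proper screw:

```python
import numpy as np

def spin_current_P(spins, a=1.0):
    """P = a * sum_<ij> e_ij x (S_i x S_j) for a chain along x."""
    e = np.array([1.0, 0.0, 0.0])
    return a * sum(np.cross(e, np.cross(spins[i], spins[i + 1]))
                   for i in range(len(spins) - 1))

def metropolis_sweep(spins, J, h, beta, rng):
    """One Metropolis sweep of E = -J sum S_i.S_{i+1} - h . sum S_i (kB = 1)."""
    n = len(spins)
    for i in rng.permutation(n):
        new = rng.normal(size=3)
        new /= np.linalg.norm(new)                    # random trial unit vector
        nb = spins[(i - 1) % n] + spins[(i + 1) % n]  # periodic chain
        dE = -np.dot(new - spins[i], J * nb + h)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i] = new

# a cycloid (rotation plane contains e_ij) carries P != 0 ...
q = 2 * np.pi / 8
cycloid = np.array([[np.sin(q * i), 0.0, np.cos(q * i)] for i in range(32)])
# ... while a proper screw (rotation plane perpendicular to e_ij) gives P = 0
screw = np.array([[0.0, np.sin(q * i), np.cos(q * i)] for i in range(32)])

rng = np.random.default_rng(1)
spins = np.array([[0.0, 0.0, 1.0]] * 64)
for _ in range(200):   # short equilibration in a field along z, ferromagnetic J
    metropolis_sweep(spins, J=1.0, h=np.array([0.0, 0.0, 0.5]), beta=5.0, rng=rng)
```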
To reach the correct conical ground state, we applied a large electric field ($\sim$20000 kV/m along the \[110\] direction) and a magnetic field (20 Tesla along the \[001\] direction), as also used by Nehme *et al.*[@Nehme] Result and Discussion ===================== Exchange Interaction parameters & Magnetisation ----------------------------------------------- In order to simulate the various system properties, we have calculated two sets of exchange interaction parameters.\ (a) set-1: Interaction parameters derived from self-consistent first-principles-based DFT calculations.\ (b) set-2: A new set of interaction parameters which best fit the experimental magnetisation.[@padam-Fe]\ Table \[table1\] shows the above two sets of interaction parameters for six representative systems: $ACr_2O_4$ ($A$= $Mn$, $Fe$, $Co$, and $Ni$), $CoMn_2O_4$ and $CoFe_2O_4$. For $CoMn_2O_4$, the $Mn$-$Mn$ bonds in the $xy$-plane are shorter than those out of the plane; I and O represent the in-plane and out-of-plane $\hat{J}_{BB}$ interactions for CoMn$_2$O$_4$. In $CoFe_2O_4$, which crystallizes in the inverse spinel structure, half of the $B$-sites are filled with $Co$ and the rest by $Fe$. This geometry creates three types of $B$-$B$ interactions ($Co$-$Co$, $Fe$-$Fe$ & $Co$-$Fe$) and two types of $A$-$B$ interactions ($Fe$-$Co$ & $Fe$-$Fe$). ![Temperature dependence of the magnetization for the six spinel compounds. The black (red) line shows the calculated magnetization using the set-1 (set-2) interaction parameters. Plus symbols indicate experimental data. []{data-label="fig1"}](Fig1) Figure \[fig1\] shows a comparison of the theoretical and experimental temperature dependence of the magnetization for the six compounds. The black (red) line indicates the calculated magnetization using the set-1 (set-2) exchange parameters. Solid plus symbols show the experimental data, wherever available.
It is to be noted that for our prime compound $CoCr_2O_4$, the calculated magnetization using the set-2 exchange interactions matches fairly well with the experimental data.[@padam-Fe] Comparing the set-1 and set-2 parameters in this case, we find that the $Cr$-$Cr$ interactions are relatively stronger in set-2 than in set-1, while the $Co$-$Cr$ and $Co$-$Co$ interactions in set-2 are relatively weaker. In fact, the $Co$-$Co$ pairs are hardly interacting in set-2. Table \[table1\] also displays the stability parameter ($u$) for all six compounds. $u$ turns out to be 1.23 (2.00) using the set-1 (set-2) exchange parameters for $CoCr_2O_4$. In the case of $MnCr_2O_4$, all the interactions in set-2 are weaker compared to those in set-1, and the M vs. T data calculated using the set-1 parameters is grossly off compared to the experimental data. The value of $u$ calculated using the set-1 (set-2) parameters is 1.81 (1.52); both values lie beyond the stability range (0.88 $<$ $u$ $<$ 1.3). Interestingly, the average $<u>$ calculated by Tomiyasu[@Tomiyasu] using the neutron scattering data, within the generalized Luttinger-Tisza[@GLT] method, for $CoCr_2O_4$ and $MnCr_2O_4$ is 2.00 and 1.50 respectively, which matches exactly with our calculated $u$-values. In the case of $NiCr_2O_4$, the simulated magnetization which best matches the experimental values requires negligibly small $\hat{J}_{AA}$ interactions, as in the previous two cases. We do not have any experimental magnetization data for $FeCr_2O_4$. Interestingly, from Figure \[fig1\](b), the magnetization curve calculated from set-1 shows a magnetic compensation at around $T_{comp}$= 40 K, and the magnetization changes its sign at this temperature. As we do not have any experimental evidence for such magnetic compensation in the pure $FeCr_2O_4$ compound, we calculated another set of interaction parameters (set-2) which does not show such compensation.
In set-1, the values of $\hat{J}_{BB}$ and $\hat{J}_{AB}$ are close ($\hat{J}_{BB}$ is slightly higher than $\hat{J}_{AB}$). One way to remove the magnetic compensation effect is to choose $\hat{J}_{AB}$ $>$ $\hat{J}_{BB}$, which is what we have done in set-2. The calculated $u$ parameters for $FeCr_2O_4$ and $NiCr_2O_4$ using set-1 are 1.35 and 1.81, which become 0.95 and 2.10 when the set-2 parameters are used. For $FeCr_2O_4$ the set-2 value of $u$ lies within the stability range, while for $NiCr_2O_4$, $u$ is far beyond it. The calculated magnetic transition temperature ($T_c$) is also tabulated in Table \[table1\] along with the experimental values. It is to be noted that $T_c$ for $MnCr_2O_4$ is calculated to be 40 K (42 K) using the set-1 (set-2) exchange parameters, whereas the magnetizations of the different sub-lattices cancel each other out, compensating the net moment, for temperatures above 4 K; only at very low temperatures does a finite moment appear. Similar behavior is also observed for $FeCr_2O_4$, where the transition occurs at 103 K but just above 93 K the total magnetization drops to zero.

Magnetic order
--------------

  ------------- ------- -------------- ---------------- ---------------- ----------------------------------------- ----------------------- -------
  System                $\theta_{A}$   $\theta_{B_1}$   $\theta_{B_2}$   Type of spin order                        Polarisation            $T_s$
                        (Degree)       (Degree)         (Degree)                                                   ($\frac{\mu C}{m^2}$)   (K)
  $MnCr_2O_4$   set 2   132            85               77               Conical                                   4.9                     4
                Expt.   152            95               11               Conical                                   -                       16
  $FeCr_2O_4$   set 2   164            14               16               Conical                                   3.3                     0
  $CoCr_2O_4$   set 2   142            83               40               Conical                                   1.8                     16
                Expt.   132            109              28               Conical                                   -                       24
  $NiCr_2O_4$   set 2   144            84               37               Conical                                   0.9                     17
  $CoMn_2O_4$   set 2   90             141              38               $A$ is the resultant of $B_1$ and $B_2$   0.1                     0
                Expt.   90             151              61               $A$ is the resultant of $B_1$ and $B_2$   -                       0
  $CoFe_2O_4$   set 2   179            1                1                $A$ is antiparallel to $B_1$ and $B_2$    0.0                     0
                Expt.   180            0                0                $A$ is antiparallel to $B_1$ and $B_2$    -                       0
  ------------- ------- -------------- ---------------- ---------------- ----------------------------------------- ----------------------- -------

Table \[table2\] shows the calculated cone angles, types of spin order, polarisation, and transition temperature ($T_s$) for the six systems. These properties are calculated using the set-2 interaction parameters. Experimental data are shown wherever available. There are three cone angles, $\theta_{A}$, $\theta_{B_1}$ and $\theta_{B_2}$, based on the sites $A$, $B_1$, and $B_2$ respectively. Notably, the simulated cone angles match fairly well with those of experiments.[@Tomiyasu] The four systems $ACr_2O_4$ ($A$ = $Mn$, $Fe$, $Co$ and $Ni$) show conical spin order, as also observed experimentally. For $CoMn_2O_4$, the vector corresponding to $\theta_{A}$ is the resultant of those for $\theta_{B_1}$ and $\theta_{B_2}$. In the case of $CoFe_2O_4$, however, the vector for $\theta_{A}$ is antiparallel to those of $\theta_{B_1}$ and $\theta_{B_2}$. These magnetic orderings are in fair agreement with the experimental observations.[@CoMn2O4-ang]

![For the six compounds, simulated and experimental[@ACr2O4-pol; @CoMn2O4-exp; @New-Exp] hysteresis curves. The simulated data is calculated with the set-2 interaction parameters.[]{data-label="fig2"}](Fig2)

![image](Fig3)

For $FeCr_2O_4$, the ratio $\hat{J}_{BB}/\hat{J}_{AB}$ is nearly 1.02 for set-1, which reduces to 0.71 for set-2. This decreases the geometrical frustration, and therefore the cone angle at the $B$-site decreases. This in turn increases the magnetization along the positive z-direction, which uplifts the magnetization curve and removes the magnetic compensation. It is to be noted that the calculated polarisation ($\overrightarrow{P}$) and $T_s$ fall within a reasonable range.
Compounds having no conical order: $CoMn_2O_4$ and $CoFe_2O_4$
--------------------------------------------------------------

From Table \[table1\], the first-principles exchange interactions in $CoMn_2O_4$ are strongly anisotropic because it crystallizes in a tetragonal structure, whereas all the other compounds are cubic. Due to stretching along the z-direction and compression in the $xy$ plane, $\hat{J}_{BB}$ in the $xy$ plane becomes much stronger and the out-of-plane interactions become weaker. In Table \[table1\], (I) and (O) refer to the in-plane and out-of-plane interactions, respectively. Therefore, at very low temperatures all the spins lie in the $xy$ plane, and as the temperature crosses $T_c$ they become completely randomized. In Figure \[fig1\], the calculated magnetization is plotted along with the experimental curve. For $CoFe_2O_4$, the ground state is collinear, which corroborates the fact that $\hat{J}_{AB}$ is much stronger than $\hat{J}_{BB}$. Interestingly, because this compound crystallizes in the inverse spinel structure, which is not the case for the other five compounds, $Fe$ sits at both the $A$-site and the $B$-site with antiparallel alignment. This cancels out the magnetization from $Fe$, and the observed magnetization is mostly due to the magnetic moments of the collinear $Co$ spins. Figure \[Fig3\] shows a pictorial diagram of the calculated magnetic spin orders for all six spinel compounds.

Hysteresis
----------

Figure \[fig2\] shows the calculated hysteresis (red line) for all six compounds using the set-2 interaction parameters. Experimental data are shown by plus symbols (blue). It is clear that for the $ACr_2O_4$ compounds, the experimental curves reach saturation at a relatively smaller magnetization value than the calculated ones. This may be due to the conical spin spiral developed in these four compounds, which reduces their magnetization.
Another reason can be the neglect of higher-neighbor interactions in our Monte-Carlo simulation, which are probably not small enough to be ignored and can affect more sensitive results such as the hysteresis curve. In the case of $CoFe_2O_4$, the hysteresis curve is quite sensitive to the interaction parameters used, while the magnetization curve hardly changes. Figure \[fig2\](f) shows the hysteresis curve calculated from the set-2 interaction parameters, which matches fairly well with experiment. In contrast, both our calculated M vs. T and hysteresis for $CoMn_2O_4$ are somewhat different from experiment. This may be due to the fact that, in the experimental sample of $CoMn_2O_4$,[@CoMn2O4-exp] 21% of the $Co$ atoms are observed to interchange their positions with $Mn$. Such swapping is not considered in our calculations.

![Total magnetisation vs. $T$/$T_c$ at various values of ($\hat{J}_{BB}$/$\hat{J}_{AB}$) for $MnCr_2O_4$ (blue), $FeCr_2O_4$ (green), $CoCr_2O_4$ (red) and $NiCr_2O_4$ (black).[]{data-label="Fig4"}](Fig4)

Polarisation
------------

The polarisation ($\overrightarrow{P}$) for $ACr_2O_4$ is calculated using Eq. (2). The proportionality constant ‘$a$’ is taken to be 0.03 $\frac{\mu C}{m^2}$. $\overrightarrow{P}$ is calculated using the set-2 exchange parameters, which involve the $BB$, $AB$ and $AA$ types of first-neighbour interactions. Yao *et al.*[@Yao-2009; @Yao-2009-2; @Yao-2010; @Yao-2011; @Yao-2013; @Yao-2017] have also reported calculations of $\overrightarrow{P}$ using only the $BB$-type neighbour interaction. We observed that the inclusion of the $AB$ and $AA$ interactions (in addition to $BB$) helps to achieve the stable conical spin-spiral order more easily. Singh *et al.* measured the polarisation of both $CoCr_2O_4$ and $FeCr_2O_4$,[@FeCr2O4-pol] and found the magnitude of $\overrightarrow{P}$ for $FeCr_2O_4$ to be 10-12 times larger. This indicates that the choice of the ‘$a$’ value is crucial in the theoretical simulation of $\overrightarrow{P}$.
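For orientation, the spin-current (Katsura-Nagaosa-Balatsky [@Katsura]) prescription underlying Eq. (2), $\overrightarrow{P} = a \sum_{\langle ij\rangle} \vec{e}_{ij} \times (\vec{S}_i \times \vec{S}_j)$, can be sketched on a toy one-dimensional chain. The paper's actual calculation runs over the spinel lattice with the set-2 spin configuration; the chain geometry, spin patterns and function names below are illustrative only.

```python
import numpy as np

def knb_polarisation(positions, spins, a=0.03):
    """Spin-current (KNB) polarisation P = a * sum_<ij> e_ij x (S_i x S_j),
    summed over nearest-neighbour bonds of a chain (toy geometry)."""
    p_tot = np.zeros(3)
    for i in range(len(spins) - 1):
        e_ij = positions[i + 1] - positions[i]
        e_ij = e_ij / np.linalg.norm(e_ij)
        p_tot += np.cross(e_ij, np.cross(spins[i], spins[i + 1]))
    return a * p_tot

# A short chain along x
n = 10
pos = np.array([[float(i), 0.0, 0.0] for i in range(n)])

# Collinear spins: S_i x S_j vanishes on every bond, so P is the zero vector
collinear = np.tile([0.0, 0.0, 1.0], (n, 1))
print(knb_polarisation(pos, collinear))

# Cycloidal spiral rotating in the xz-plane (the plane containing the chain):
# each bond contributes a uniform polarisation along z
phase = 0.4 * np.arange(n)
cycloid = np.stack([np.sin(phase), np.zeros(n), np.cos(phase)], axis=1)
print(knb_polarisation(pos, cycloid))
```

Note the design point this illustrates: a proper-screw spiral (spins rotating in the plane perpendicular to the bond) would give $\vec{S}_i \times \vec{S}_j$ parallel to $\vec{e}_{ij}$ and hence zero polarisation, so only the cycloidal component of a conical order contributes.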
As we do not have much information for the rest of the compounds, for simplicity we have taken ‘$a$’ to be 0.03 $\frac{\mu C}{m^2}$ for all the compounds in the calculation of $\overrightarrow{P}$. It is to be noted that, as the magnitude of the $A$-site spin decreases, the polarisation also decreases. In $CoFe_2O_4$, the calculated polarization is nearly zero, as all the spins are collinear. For the compound $CoMn_2O_4$, the simulated polarisation is found to be quite small in magnitude, $\sim$0.1 $\frac{\mu C}{m^2}$. The critical temperature $T_s$, below which the polarisation can be measured, is also listed in Table \[table2\]. In all four compounds except $CoCr_2O_4$, the $T_s$ value calculated using the set-1 exchange parameters is higher than that obtained from set-2. This suggests that set-2 gives more accurate cone angles. It is important to note that even the set-2 parameters are only a set of effective interaction parameters, in which higher-order interactions can be considered to be included within a mean-field scheme. This may be one of the reasons for some of the discrepancies.

Magnetic compensation
---------------------

It has been observed that some ferrimagnets have a certain critical temperature below the ferri-para transition temperature ($T_c$), called the magnetic compensation temperature ($T_{comp}$), where the magnetization curve crosses the temperature axis. At $T=T_{comp}$, the antiferromagnetically aligned spins of the different sublattices just cancel each other out, giving a net zero magnetization. The magnetizations just below and just above $T_{comp}$ have opposite signs. Such compensation has not been reported in any of the pristine spinel compounds $MnCr_2O_4$, $CoCr_2O_4$ and $NiCr_2O_4$, but it is detected in some of their substituted counterparts. It is not easy to simulate the substituted systems, as we would need to evaluate a new set of exchange parameters between the substituting magnetic atom and the rest of the atoms of the pristine compound.
Also, the final result depends sensitively on the substituting sites chosen in the Monte Carlo simulation. We choose to address this problem in the future. However, to check the possibility of magnetic compensation, we have calculated the magnetization vs. T for various interaction strengths $\hat{J}_{BB}$/$\hat{J}_{AB}$ from 0.5 to 2.0. This is shown in Fig. \[Fig4\] for the four $ACr_2O_4$ compounds. These parameters can be thought of as effective interactions when the pristine compounds are substituted with a foreign element. For $CoCr_2O_4$ (red curve in Fig. \[Fig4\]), there is a clear indication of a magnetic compensation temperature at $T$/$T_c$ = 0.3 for $\hat{J}_{BB}$/$\hat{J}_{AB}$ = 1.4. Any interaction with $\hat{J}_{BB}$/$\hat{J}_{AB}$ $>$ 1.4 makes the system non-compensating. For $\hat{J}_{BB}$/$\hat{J}_{AB}$ $<$ 1.4, $T_{comp}$ moves towards higher temperatures, and the system again becomes non-compensating for $\hat{J}_{BB}$/$\hat{J}_{AB}$ $<$ 1.0. A similar trend is found for $MnCr_2O_4$ and $FeCr_2O_4$ as well, but with different $T_{comp}$. For $NiCr_2O_4$, we could not find any compensation temperature in the range 0.5 $\leq$ $\hat{J}_{BB}$/$\hat{J}_{AB}$ $\leq$ 2.0.

Origin of magnetic compensation
-------------------------------

The origin of the magnetic compensation lies in the cancellation of the magnetization between the A- and B-sites, which in turn depends on the exchange interactions. In Fig. \[Fig5\], the total and atom-projected magnetizations (for $\hat{J}_{BB}$/$\hat{J}_{AB}$=1.41) are plotted in the left and right panels, respectively, for $CoCr_2O_4$. This indicates that one can control the variation of $T_{comp}$ by tuning the magnetization of the different sublattices. Substitution/doping is a unique way to modify the magnetization of a given system.
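The mechanism behind a compensation point can be illustrated with a minimal two-sublattice toy model, not the Monte-Carlo model of the paper: the antiparallel $A$ and $B$ sublattice magnetizations are given different temperature profiles, so their difference changes sign at an intermediate $T_{comp} < T_c$. All numerical parameters below are illustrative.

```python
import numpy as np

# Toy ferrimagnet: two antiparallel sublattices vanish at the same T_c but
# with different shapes, so the net moment can cross zero below T_c.
def m_sub(t, m0, beta, tc=1.0):
    """Sublattice magnetization with a power-law temperature profile."""
    return m0 * np.maximum(1.0 - t / tc, 0.0) ** beta

def net_m(t, mA0=1.5, betaA=0.20, mB0=2.0, betaB=0.50):
    """Net moment of antiparallel A and B sublattices (their difference)."""
    return m_sub(t, mA0, betaA) - m_sub(t, mB0, betaB)

def t_comp(lo=1e-6, hi=0.999, tol=1e-8):
    """Bisection for the sign change of the net magnetization."""
    f_lo = net_m(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if net_m(mid) * f_lo > 0:
            lo, f_lo = mid, net_m(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"T_comp/T_c = {t_comp():.3f}")
```

With these profiles the B sublattice dominates at low temperature and the A sublattice near $T_c$, so the net moment changes sign once, just as in the $\hat{J}_{BB}/\hat{J}_{AB}$ scans of Fig. \[Fig4\].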
Substitution can affect the magnetization in two different ways: (i) the substituted magnetic atom manipulates the magnetization of that sublattice; (ii) the exchange interactions between the substituted atoms and the rest of the atoms change the spin alignment and hence the magnetization. By mimicking the substitution effect via an effective change in the exchange interactions, we found that as we increase $\hat{J}_{BB}$, the frustration in the $B$-sublattice increases and the magnetic spins of the $Cr$ atoms start to deviate from the collinear state. This reduces the magnetization of the $B$ sublattice and, as a result, the total magnetization increases. Since the $A$-site magnetization decreases in going from $MnCr_2O_4$ to $NiCr_2O_4$, the total magnetization increases in the direction of the magnetic orientation of the $B$ sublattice. Therefore, to obtain a compensation temperature in $NiCr_2O_4$, we need to increase the $\hat{J}_{BB}$ interaction, which creates more frustration in the $B$ sublattice, reducing its magnetization. This, in turn, helps the total magnetization to cross the temperature axis at some point.

![(Left) Calculated M vs. T curve (red) for $CoCr_2O_4$ with effective interaction parameters $\hat{J}_{BB}$/$\hat{J}_{AB}$=1.41, along with the experimental curve (blue) for $Co(Cr_{0.95}Fe_{0.05})_2O_4$. (Right) Calculated atom-projected M vs. T curve for $CoCr_2O_4$.[]{data-label="Fig5"}](Fig5)

Exchange Bias in $CoCr_2O_4$
----------------------------

Exchange bias is a phenomenon that shifts the origin of the hysteresis loop along the magnetic-field axis. Most memory and spintronics devices need a layer exhibiting exchange bias, which pins the magnetic state against surrounding magnetic fluctuations.
It has been reported that very close to $T_{comp}$, exchange bias is observed in $Fe$-substituted $CoCr_2O_4$.[@padam-Fe] With a similar motivation as before, we have studied the appearance of exchange bias by mimicking the effect of substitution via a change in the effective interactions. Figure \[Fig6\] shows the shift of the hysteresis loop as a function of temperature for $\hat{J}_{BB}$/$\hat{J}_{AB}$=1.41 ($\hat{J}_{BB}$=-4.00, $\hat{J}_{AB}$=-2.83). These parameters can only be taken in an average sense, representing a mean-field estimate of the exchange interactions of $Fe$-substituted $CoCr_2O_4$. Interestingly, at around 30.36 K, a sign-reversible exchange bias is observed. This transition temperature agrees fairly well with the magnetic compensation temperature, as observed experimentally. Experimentally, a magneto-structural correlation has been observed at around $T_{comp}$.[@ram-Mn; @padam-MS-corr1; @debashish-ram] Although we have not considered magneto-structural correlations in our calculation, we are still able to detect the exchange bias effect. We therefore conclude that the exchange bias created in these substituted compounds is purely due to the magnetic spin order developed at low temperature and is independent of magneto-structural correlations.

![Sign-reversible exchange bias effect (shift of the origin of the hysteresis loop with varying temperature) in $CoCr_2O_4$ with $\hat{J}_{BB}$/$\hat{J}_{AB}$=1.41.[]{data-label="Fig6"}](Fig6)

Conclusion
==========

In summary, we have investigated the possibility of conical magnetic order in a series of six $AB_2O_4$ spinel compounds using Monte-Carlo simulations. These calculations are done with a careful choice of two sets of interaction parameters: (i) set-1, obtained from self-consistent first-principles DFT simulations, and (ii) set-2, which closely reproduces the experimental magnetization.
The set-2 parameters are further used to evaluate the remaining magnetic properties such as hysteresis, magnetic order, and exchange bias. Considering $CoCr_2O_4$ as a representative system, we have been able to reproduce the correct angle of the conical order and the stability parameter $u$, as observed. The estimated polarisation and transition temperature agree fairly well with experiment. The effect of $Fe$ substitution in $CoCr_2O_4$ is simulated by mimicking it with a different set of exchange interactions. These parameters can be considered as the effective interactions, in a mean-field sense, representing the $Fe$-substituted system $Co(Cr_{0.95}Fe_{0.05})_2O_4$. We found that this compound indeed shows a sign-reversible exchange bias effect at around $T_{comp}$ = 30.4 K, as observed experimentally. Since the magneto-structural correlations observed around $T_{comp}$ in experiment are not included in our calculation, the exchange bias we obtain is of purely magnetic origin. We have also simulated $CoMn_2O_4$ and $CoFe_2O_4$, and found no conical magnetic order and no polarisation, in agreement with observation. The spin-current model used in our calculation works well only at very low magnetic fields; at high magnetic fields the magnetisation therefore does not saturate as observed in experiment. Similarly, the model is not thermally robust, and the polarisation drops quite fast compared to experiment. A better model is therefore needed for high magnetic fields and high temperatures. Nevertheless, the model shows its potential by reproducing nearly the same cone angles of the atomic spins as in experiment, and by mimicking the exchange bias phenomenon and the magnetic compensation quite well.

Acknowledgments
===============

We thank IIT Bombay for lab and computing facilities. AA acknowledges the IRCC early career research award project, IIT Bombay (RI/0217-10001338-001), for supporting this research.
DD acknowledges financial support provided by the Science and Engineering Research Board (SERB) under the National Post Doctoral Fellowship, sanction order number PDF/2017/002160.

[200]{}

Y. Yamasaki, S. Miyasaka, Y. Kaneko, J.-P. He, T. Arima, and Y. Tokura, [*Physical Review Letters*]{}, [**96**]{}, 207204 (2006)

K. Tomiyasu, J. Fukunaga, and H. Suzuki, [*Physical Review B*]{}, [**70**]{}, 214434 (2004)

Kiran Singh, Antoine Maignan, Charles Simon, and Christine Martin, [*Applied Physics Letters*]{}, [**99**]{}, 172903 (2011)

N. Mufti, A. A. Nugroho, G. R. Blake and T. T. M. Palstra, [*J. Phys.: Condens. Matter*]{}, [**22**]{}, 075902 (2010)

T. Goto, T. Kimura, G. Lawes, A. P. Ramirez, and Y. Tokura, [*Physical Review Letters*]{}, [**92**]{}, 257201 (2004)

T. Kimura, G. Lawes, T. Goto, Y. Tokura, and A. P. Ramirez, [*Physical Review B*]{}, [**71**]{}, 224425 (2005)

L. J. Chang, D. J. Huang, W.-H. Li, S.-W. Cheong, W. Ratcliff, and J. W. Lynn, [*J. Phys.: Condens. Matter*]{}, [**21**]{}, 456008 (2009)

H. Katsura, N. Nagaosa, and A. V. Balatsky, [*Physical Review Letters*]{}, [**95**]{}, 057205 (2005)

R. Padam, Swati Pandya, S. Ravi, A. K. Nigam, S. Ramakrishnan, A. K. Grover, and D. Pal, [*Applied Physics Letters*]{}, [**102**]{}, 112412 (2013)

Ram Kumar, S. Rayaprol, V. Siruguri, and D. Pal, [*Physica B: Physics of Condensed Matter*]{}, [**551**]{}, 98 (2017)

Junmoni Barman and S. Ravi, [*Journal of Magnetism and Magnetic Materials*]{}, [**437**]{}, 42 (2017)

Junmoni Barman and S. Ravi, [*Journal of Magnetism and Magnetic Materials*]{}, [**426**]{}, 82 (2017)

Junmoni Barman and S. Ravi, [*Solid State Communications*]{}, [**201**]{}, 59 (2015)

J. Barman, P. Babu, and S. Ravi, [*Journal of Magnetism and Magnetic Materials*]{}, [**418**]{}, 300 (2016)

B. Skubic, J. Hellsvik, L. Nordström and O. Eriksson, [*Acta Physica Polonica A*]{}, [**115**]{}, 1 (2009)

J. M. Luttinger and L. Tisza, [*Physical Review*]{}, [**70**]{}, 954 (1946)

D. H. Lyons, T. A. Kaplan, K. Dwight, and N. Menyuk, [*Physical Review*]{}, [**126**]{}, 540 (1962)

Claude Ederer and Matej Komelj, [*Physical Review B*]{}, [**76**]{}, 064409 (2007)

Xiaoyan Yao, Veng Cheong Lo, and Jun-Ming Liu, [*Journal of Applied Physics*]{}, [**106**]{}, 073901 (2009)

Xiaoyan Yao and Qichang Li, [*EPL*]{}, [**88**]{}, 47002 (2009)

Xiaoyan Yao, Veng Cheong Lo, and Jun-Ming Liu, [*Journal of Applied Physics*]{}, [**107**]{}, 093908 (2010)

Xiaoyan Yao, [*EPL*]{}, [**94**]{}, 67003 (2011)

Xiaoyan Yao, [*EPL*]{}, [**102**]{}, 67013 (2013)

Xiaoyan Yao and Li-Juan Yang, [*Front. Phys.*]{}, [**12(3)**]{}, 127501 (2017)

Debashish Das and Subhradip Ghosh, [*J. Phys. D: Appl. Phys.*]{}, [**48**]{}, 425001 (2015)

Debashish Das, Rajkumar Biswas and Subhradip Ghosh, [*J. Phys.: Condens. Matter*]{}, [**28**]{}, 446001 (2016)

Z. Nehme, Y. Labaye, R. Sayed Hassan, N. Yaacoub, and J. M. Greneche, [*AIP Advances*]{}, [**5**]{}, 127124 (2015)

Jelena Habjanic, Marijana Juric, Jasminka Popovic, Krešimir Molčanov, and Damir Pajic, [*Inorg. Chem.*]{}, [**53**]{}, 9633 (2014)

A. Maignan, C. Martin, K. Singh, Ch. Simon, O. I. Lebedev, and S. Turner, [*Journal of Solid State Chemistry*]{}, [**95**]{}, 41-49 (2012)

B. Boucher, R. Buhl, and M. Perrin, [*Journal of Applied Physics*]{}, [**39**]{}, 632 (1968)

Y. H. Hou, Y. J. Zhao, Z. W. Liu, H. Y. Yu, X. C. Zhong, W. Q. Qiu, D. C. Zeng and L. S. Wen, [*J. Phys. D: Appl. Phys.*]{}, [**43**]{}, 445003 (2010)

G. L. Bacchella and M. Pinot, [*J. Phys.*]{}, [**25**]{}, 528 (1964)

G. Lawes, B. Melot, K. Page, C. Ederer, M. A. Hayward, T. Proffen and R. Seshadri, [*Phys. Rev. B*]{}, [**74**]{}, 024413 (2006)

K. Tomiyasu and I. Kagomiya, [*J. Phys. Soc. Japan*]{}, [**73**]{}, 2539 (2004)

D. G. Wickham and W. J. Croft, [*J. Phys. Chem. Solids*]{}, [**7**]{}, 351 (1958)

F. J. Teillet and R. Krishnan, [*J. Magn. Magn. Mater.*]{}, [**123**]{}, 93-96 (1993)

R. Padam, Swati Pandya, S. Ravi, S. Ramakrishnan, A. K. Nigam, A. K. Grover and D. Pal, [*J. Phys.: Condens. Matter*]{}, [**29**]{}, 055803 (2017)

Ram Kumar, S. Rayaprol, V. Siruguri, Y. Xiao, W. Ji and D. Pal, [*Journal of Magnetism and Magnetic Materials*]{}, [**454**]{}, 342-348 (2018)

Ram Kumar, R. Padam, Debashish Das, S. Rayaprol, V. Siruguri and D. Pal, [*RSC Advances*]{}, [**6**]{}, 93511-93518 (2016)
--- abstract: 'We here discuss the emergence of Quasi Stationary States (QSS), a universal feature of systems with long-range interactions. With reference to the Hamiltonian Mean Field (HMF) model, numerical simulations are performed based on both the original $N$-body setting and the continuum Vlasov model which is supposed to hold in the thermodynamic limit. A detailed comparison unambiguously demonstrates that the Vlasov-wave system provides the correct framework to address the study of QSS. Further, analytical calculations based on Lynden-Bell’s theory of violent relaxation are shown to result in accurate predictions. Finally, in specific regions of parameter space, Vlasov numerical solutions are shown to be affected by small-scale fluctuations, a finding that points to the need for novel schemes able to account for particle correlations.' author: - | Andrea Antoniazzi$^{1}$[^1], Francesco Califano$^{2}$[^2], Duccio Fanelli$^{1,3}$[^3], Stefano Ruffo$^{1}$[^4] title: 'Exploring the thermodynamic limit of Hamiltonian models: convergence to the Vlasov equation.' --- The Vlasov equation constitutes a universal theoretical framework and plays a role of paramount importance in many branches of applied and fundamental physics. Structure formation in the universe is, for instance, a rich and fascinating problem of classical physics: the fossil radiation that permeates the cosmos is a relic of microfluctuations in the matter created by the Big Bang, and such small perturbations are believed to have evolved via gravitational instability into the pronounced agglomerations that we see nowadays on the galaxy-cluster scale. Within this scenario, gravity is hence the engine of growth, and the Vlasov equation governs the dynamics of the non-baryonic “dark matter” [@peebles].
Furthermore, the continuous Vlasov description is the reference model for several space and laboratory plasma applications, including many interesting regimes, among which the interpretation of coherent electrostatic structures observed in plasmas far from thermodynamic equilibrium. The Vlasov equation is obtained as the mean-field limit of the $N$-body Liouville equation, assuming that each particle interacts with an average field generated by all plasma particles (i.e. the mean electromagnetic field determined by the Poisson or Maxwell equations, where the charge and current densities are calculated from the particle distribution function) while inter-particle correlations are completely neglected. Numerical simulations are presently one of the most powerful resources with which to address the study of the Vlasov equation. In the plasma context, the Lagrangian Particle-In-Cell approach is by far the most popular, while Eulerian Vlasov codes are particularly suited to analyzing specific model problems, due to the associated low noise level, which is secured even in the non-linear regime [@mangeney]. However, any numerical scheme designed to integrate the continuous Vlasov system involves a discretization over a finite mesh. This is an unavoidable step which in turn affects numerical accuracy. A numerical (diffusive and dispersive) characteristic length is in fact introduced, at best comparable with the grid mesh size: as soon as the latter matches the typical length scale of the (dynamically generated) fluctuations, a violation of the continuous Hamiltonian character of the equations occurs (see Refs. [@califano]). It is important to emphasize that even if such [*non-Vlasov*]{} effects are strongly localized (in phase space), the induced large-scale topological changes will eventually affect the system globally.
Therefore, aiming at clarifying the validity of Vlasov numerical models, it is crucial to compare a continuous Vlasov, but numerically discretized, approach to a homologous $N$-body model. The Vlasov equation has also been invoked as a reference model in many interesting one-dimensional problems, and recurrently applied to the study of wave-particle interacting systems. The Hamiltonian Mean Field (HMF) model [@antoni-95], describing the coupled motion of $N$ rotators, is in particular assimilated to a Vlasov dynamics in the thermodynamic limit on the basis of rigorous results [@BraunHepp]. The HMF model was historically introduced as representing gravitational and charged sheet models and is quite extensively analyzed as a paradigmatic representative of the broader class of systems with long-range interactions [@Houches02]. A peculiar feature of the HMF model, shared also by other long-range interacting systems, is the presence of [*Quasi Stationary States*]{} (QSS). During its time evolution, the system gets trapped in such states, which are characterized by non-Gaussian velocity distributions, before relaxing to the final Boltzmann-Gibbs equilibrium [@ruffo_rapisarda]. An attempt has been made [@rapisarda_tsallis] to interpret the emergence of QSSs by invoking Tsallis statistics [@Tsallis]. This approach was later criticized in [@Yamaguchi], where QSSs were shown to correspond to stationary stable solutions of the Vlasov equation, for a particular choice of the initial condition. More recently, an approximate analytical theory, based on the Vlasov equation, which derives the QSSs of the HMF model using a maximum entropy principle, was developed in [@antoniazziPRL]. This theory is inspired by the pioneering work of Lynden-Bell [@LyndenBell68] and relies on previous work on 2D turbulence by Chavanis [@chava2D]. However, the underlying Vlasov ansatz has not been directly examined and has recently been debated [@EPN].
In this Letter, we shall discuss numerical simulations of the continuous Vlasov model, the kinetic counterpart of the discrete HMF model. By comparing these results to both direct N-body simulations and analytical predictions, we shall reach the following conclusions: (i) the Vlasov formulation is indeed ruling the dynamics of the QSS; (ii) the proposed analytical treatment of the Vlasov equation is surprisingly accurate, despite the approximations involved in the derivation; (iii) Vlasov simulations are to be handled with extreme caution when exploring specific regions of the parameters space. The HMF model is characterized by the following Hamiltonian $$\label{eq:ham} H = \frac{1}{2} \sum_{j=1}^N p_j^2 + \frac{1}{2 N} \sum_{i,j=1}^N \left[1 - \cos(\theta_j-\theta_i) \right]$$ where $\theta_j$ represents the orientation of the $j$-th rotor and $p_j$ is its conjugate momentum. To monitor the evolution of the system, it is customary to introduce the magnetization, a macroscopic order parameter defined as $M=|{\mathbf M}|=|\sum {\mathbf m_i}| /N$, where ${\mathbf m_i}=(\cos \theta_i,\sin \theta_i)$ stands for the microscopic magnetization vector. As previously reported [@antoni-95], after an initial transient, the system gets trapped into Quasi-Stationary States (QSSs), i.e. non-equilibrium dynamical regimes whose lifetime diverges when increasing the number of particles $N$. Importantly, when performing the mean-field limit ($N \rightarrow \infty$) [*before*]{} the infinite time limit, the system cannot relax towards Boltzmann–Gibbs equilibrium and remains permanently confined in the intermediate QSSs. As mentioned above, this phenomenology is widely observed for systems with long-range interactions, including galaxy dynamics [@Padmanabhan], free electron lasers [@Barre], 2D electron plasmas [@kawahara]. 
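As a concrete illustration of the $N$-body side of the comparison, the dynamics generated by Hamiltonian (\[eq:ham\]) can be integrated with a minimal leapfrog sketch: the equations of motion are $\dot{\theta}_j = p_j$ and $\dot{p}_j = -M_x \sin\theta_j + M_y \cos\theta_j$, so the force costs only $O(N)$ per step. The initial condition and step size below are illustrative, not the production settings used for the figures.

```python
import numpy as np

def hmf_step(theta, p, dt):
    """One leapfrog (velocity-Verlet) step of the HMF N-body dynamics."""
    def force(th):
        # Mean-field force -dV/dtheta; the self-term vanishes automatically.
        mx, my = np.cos(th).mean(), np.sin(th).mean()
        return -mx * np.sin(th) + my * np.cos(th)

    p_half = p + 0.5 * dt * force(theta)
    theta_new = (theta + dt * p_half + np.pi) % (2 * np.pi) - np.pi
    p_new = p_half + 0.5 * dt * force(theta_new)
    return theta_new, p_new

def magnetization(theta):
    """Order parameter M = |(1/N) sum_i (cos theta_i, sin theta_i)|."""
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

# Water-bag initial condition (illustrative widths)
rng = np.random.default_rng(0)
N = 10000
theta = rng.uniform(-0.5, 0.5, N)
p = rng.uniform(-1.0, 1.0, N)
for _ in range(500):
    theta, p = hmf_step(theta, p, 0.1)
print(f"M after t = 50: {magnetization(theta):.3f}")
```

Because the force is the exact gradient of the mean-field potential, the scheme is symplectic and the specific energy $h = \langle p^2 \rangle/2 + (1 - M^2)/2$ stays bounded close to its initial value over long runs.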
In the $N \to \infty$ limit the discrete HMF dynamics reduces to the Vlasov equation $$\partial f / \partial t + p \, \partial f / \partial \theta \,\, - (dV / d \theta ) \, \partial f / \partial p = 0 \, , \label{eq:VlasovHMF}$$ where $f(\theta,p,t)$ is the microscopic one-particle distribution function and $$\begin{aligned} V(\theta)[f] &=& 1 - M_x[f] \cos(\theta) - M_y[f] \sin(\theta) ~, \\ M_x[f] &=& \int_{-\pi}^{\pi} \int_{-\infty}^{\infty} f(\theta,p,t) \, \cos{\theta} {\mathrm d}\theta {\mathrm d}p\quad , \\ M_y[f] &=& \int_{-\pi}^{\pi} \int_{-\infty}^{\infty} f(\theta,p,t) \, \sin{\theta}{\mathrm d}\theta {\mathrm d}p\quad . \label{eq:pot_magn}\end{aligned}$$ The specific energy $h[f]=\int \int (p^2/{2}) f(\theta,p,t) {\mathrm d}\theta {\mathrm d}p - ({M_x^2+M_y^2 - 1})/{2}$ and momentum $P[f]=\int \int p f(\theta,p,t) {\mathrm d}\theta {\mathrm d}p$ functionals are conserved quantities. Homogeneous states are characterized by $M=0$, while non-homogeneous states correspond to $M \ne 0$. Rigorous mathematical results [@BraunHepp] demonstrate that the Vlasov framework indeed applies in the continuum description of mean-field type models. This observation corroborates the claim that any theoretical attempt to characterize the QSSs should resort to the above Vlasov-based interpretative picture. Despite this, the QSS non-Gaussian velocity distributions have been [*fitted*]{} [@rapisarda_tsallis] using Tsallis’ $q$-exponentials, and the Vlasov formalism assumed valid [*only*]{} for the limiting case of homogeneous initial conditions [@EPN]. In a recent paper [@antoniazziPRL], the aforementioned velocity distribution functions were instead reproduced with an analytical expression derived from the Vlasov scenario, with no adjustable parameters and for a large class of initial conditions, including inhomogeneous ones.
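As a concrete check of the functionals $M_x[f]$ and $M_y[f]$, one can evaluate them on a gridded water-bag: for a water-bag of half-widths $\Delta_\theta$, $\Delta_p$ centered at zero the integrals are analytic, $M_{x0} = \sin\Delta_\theta/\Delta_\theta$ and $M_{y0} = 0$. The grid sizes below are arbitrary.

```python
import numpy as np

# Water-bag: f = f0 inside |theta| < dth, |p| < dp, zero outside
dth, dp = 1.0, 0.5
f0 = 1.0 / (4 * dth * dp)

theta = np.linspace(-np.pi, np.pi, 1024, endpoint=False)
p = np.linspace(-2.0, 2.0, 512)
TH, P = np.meshgrid(theta, p, indexing="ij")
f = np.where((np.abs(TH) < dth) & (np.abs(P) < dp), f0, 0.0)

# Quadrature weights for the magnetization functionals
dtheta = theta[1] - theta[0]
dpp = p[1] - p[0]
mx = np.sum(f * np.cos(TH)) * dtheta * dpp
my = np.sum(f * np.sin(TH)) * dtheta * dpp
norm = np.sum(f) * dtheta * dpp   # should be close to 1

print(mx, np.sin(dth) / dth)      # numerical vs analytic M_x
print(my, norm)
```

The small residual difference between the two $M_x$ values comes entirely from the sharp water-bag edges crossing grid cells, which is exactly the kind of mesh-scale effect discussed above.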
The key idea dates back to the seminal work by Lynden-Bell [@LyndenBell68] (see also [@Chavanis06], [@Michel94]) and consists in coarse-graining the microscopic one-particle distribution function $f(\theta,p,t)$ by introducing a local average in phase space. It is then possible to associate an entropy to the coarse-grained distribution $\bar{f}$: The corresponding statistical equilibrium is hence determined by maximizing such an entropy, while imposing the conservation of the Vlasov dynamical invariants, namely energy, momentum and norm of the distribution. We shall here limit our discussion to the case of an initial single particle distribution which takes only two distinct values: $f_0=1/(4 \Delta_{\theta} \Delta_{p})$, if the angles (velocities) lie within an interval centered around zero and of half-width $\Delta_{\theta}$ ($\Delta_{p}$), and zero otherwise. This choice corresponds to the so-called “water-bag" distribution which is fully specified by energy $h[f]=e$, momentum $P[f]=\sigma$ and the initial magnetization ${\mathbf M_0}=(M_{x0}, M_{y0})$. The maximum entropy calculation is then performed analytically [@antoniazziPRL] and results in the following form of the QSS distribution $$\label{eq:barf} \bar{f}(\theta,p)= f_0\frac{e^{-\beta (p^2/2 - M_y[\bar{f}]\sin\theta - M_x[\bar{f}]\cos\theta)-\lambda p-\mu}} {1+e^{-\beta (p^2/2 - M_y[\bar{f}]\sin\theta - M_x[\bar{f}]\cos\theta)-\lambda p-\mu}}$$ where $\beta/f_0$, $\lambda/f_0$ and $\mu/f_0$ are rescaled Lagrange multipliers, respectively associated to the energy, momentum and normalization. Inserting expression (\[eq:barf\]) into the above constraints and recalling the definition of $M_x[\bar{f}]$, $M_y[\bar{f}]$, one obtains an implicit system which can be solved numerically to determine the Lagrange multipliers and the expected magnetization in the QSS. Note that the distribution (\[eq:barf\]) differs from the usual Boltzmann-Gibbs expression because of the “fermionic” denominator. 
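The “fermionic” structure of Eq. (\[eq:barf\]) is easy to verify numerically: whatever the multipliers, the coarse-grained level never exceeds $f_0$, and the distribution reduces to a Boltzmann form when the exponential factor is small. The multiplier and magnetization values below are placeholders, not solutions of the constraint equations, which must be obtained self-consistently.

```python
import numpy as np

def lynden_bell(theta, p, f0, beta, lam, mu, mx, my):
    """Lynden-Bell QSS distribution, Eq. (barf), at given rescaled
    multipliers (beta, lam, mu) and magnetization (mx, my)."""
    eps = 0.5 * p ** 2 - my * np.sin(theta) - mx * np.cos(theta)
    x = np.exp(-beta * eps - lam * p - mu)
    return f0 * x / (1.0 + x)

theta = np.linspace(-np.pi, np.pi, 201)
p = np.linspace(-3.0, 3.0, 201)
TH, P = np.meshgrid(theta, p)

# Placeholder multipliers (NOT the self-consistent values)
fbar = lynden_bell(TH, P, f0=0.25, beta=4.0, lam=0.0, mu=0.0, mx=0.5, my=0.0)

# Fermionic saturation: the coarse-grained level never exceeds f0
print(fbar.max(), "<=", 0.25)
```

In the dilute limit (large $\mu$) the denominator tends to one and $\bar{f} \to f_0 e^{-\beta(p^2/2 - M_y\sin\theta - M_x\cos\theta) - \lambda p - \mu}$, i.e. the usual Boltzmann-Gibbs form, which is why the difference only matters in the dense, violently relaxing regions of phase space.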
Numerically computed velocity distributions have been compared in [@antoniazziPRL] to the above theoretical predictions (where no free parameter is used), obtaining an overall good agreement. However, the central part of the distributions is modulated by the presence of two symmetric bumps, which are the signature of a collective dynamical phenomenon [@antoniazziPRL]. The presence of these bumps is not explained by our theory. Such discrepancies have recently been claimed to be an indirect proof that the Vlasov model holds only approximately. We shall here demonstrate that this claim is not correct and that the deviations between theory and numerical observation are uniquely due to the approximations built into the Lynden-Bell approach. A detailed analysis of the Lynden-Bell equilibrium (\[eq:barf\]) in the parameter plane $(M_{0},e)$ enabled us to unravel a rich phenomenology, including out-of-equilibrium phase transitions between homogeneous ($M_{QSS}=0$) and non-homogeneous ($M_{QSS} \ne 0$) QSS states. Second and first order transition lines are found that separate homogeneous and non-homogeneous states and merge into a tricritical point approximately located at $(M_{0},e)=(0.2,0.61)$. When the transition is second order, two extrema of the Lynden-Bell entropy are identified in the inhomogeneous phase: the solution $M=0$ corresponds to a saddle point, and is therefore unstable; the global maximum is instead associated with $M \neq 0$, which represents the equilibrium predicted by the theory. This argument is important for what will be discussed in the following. Let us now turn to direct simulations, with the aim of testing the above scenario, and focus first on the kinetic model (\[eq:VlasovHMF\])–(\[eq:pot\_magn\]). The algorithm solves the Vlasov equation in phase space and uses the so-called “splitting scheme", a widely adopted strategy in numerical fluid dynamics. 
Such a scheme, pioneered by Cheng and Knorr [@Cheng], was first applied to the study of the Vlasov-Poisson equations in the electrostatic limit and then employed for a wide spectrum of problems [@califano]. For different values of the pair $(M_{0},e)$, which sets the widths of the initial water-bag profile, we performed a direct integration of the Vlasov system (\[eq:VlasovHMF\])–(\[eq:pot\_magn\]). After a transient, the magnetization eventually attains a constant value, which corresponds to the QSS value observed in the discrete HMF framework. The asymptotic magnetizations are hence recorded when varying the initial condition. Results (stars) are reported in figure \[fig1\](a) where $M_{QSS}$ is plotted as a function of $e$. A comparison is drawn with the predictions of our theory (solid line) and with the outcome of N-body simulations (squares) based on the Hamiltonian (\[eq:ham\]), finding excellent agreement. This observation enables us to conclude that (i) the Vlasov equation governs the HMF dynamics for $N \to \infty$ [*both*]{} in the homogeneous and non-homogeneous case; (ii) Lynden-Bell’s violent relaxation theory allows for reliable predictions, including the transition from magnetized to non-magnetized states. Deviations from the theory are detected near the transition. This fact has a natural explanation and raises a number of fundamental questions related to the use of Vlasov simulations. As confirmed by the inspection of figure \[fig1\](b), close to the transition point, the entropy $S$ of the Lynden-Bell coarse-grained distribution takes almost the same value when evaluated on the global maximum (solid line) or on the saddle point (dashed line). The entropy is hence substantially flat in this region, which in turn implies that there exists an extended basin of states accessible to the system. 
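A single splitting step for the Vlasov-HMF system can be sketched as follows. This is a semi-Lagrangian sketch in the spirit of Cheng and Knorr, written for illustration only: the linear interpolation and the explicit loops are our own simplifications, not the production code.

```python
import numpy as np

def split_step(f, theta, p, dt):
    """One Strang-splitting step: half advection in theta, full kick in p
    with the self-consistent mean field, half advection in theta."""
    dth, dp = theta[1] - theta[0], p[1] - p[0]

    def advect_theta(f, tau):
        # f(theta, p) <- f(theta - p*tau, p), periodic in theta.
        out = np.empty_like(f)
        for j, pj in enumerate(p):
            out[:, j] = np.interp(theta - pj * tau, theta, f[:, j],
                                  period=2.0 * np.pi)
        return out

    f = advect_theta(f, dt / 2.0)
    # Self-consistent mean field and force -dV/dtheta:
    mx = (f * np.cos(theta)[:, None]).sum() * dth * dp
    my = (f * np.sin(theta)[:, None]).sum() * dth * dp
    force = -mx * np.sin(theta) + my * np.cos(theta)
    # Kick: f(theta, p) <- f(theta, p - force(theta)*dt).
    out = np.empty_like(f)
    for i in range(len(theta)):
        out[i, :] = np.interp(p - force[i] * dt, p, f[i, :],
                              left=0.0, right=0.0)
    return advect_theta(out, dt / 2.0)
```

A simple sanity check is that homogeneous states, for which the mean-field force vanishes, are left invariant by the step.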
This interpretation is further validated by the inset of figure \[fig1\](a), where we show the probability distribution of $M_{QSS}$ computed via N-body simulation. The bell-shaped profile presents a clear peak, close to the value predicted by our theory. Quite remarkably, the system can converge to final magnetizations which are significantly different from the expected value. Simulations based on the Vlasov code running at different resolutions (grid points) confirmed this scenario, highlighting a similar degree of variability. These findings point to the fact that in specific regions of the parameter space, Vlasov numerics needs to be carefully analyzed (see also Ref. [@Elskens]). Importantly, it is becoming crucial to step towards an “extended” Vlasov theoretical model which accounts for discreteness effects, by incorporating at least a two-particle correlation term. ![Panel (a): The magnetization in the QSS is plotted as a function of energy, $e$, at $M_0=0.24$. The solid line refers to the Lynden-Bell inspired theory. Stars (resp. squares) stand for Vlasov (resp. N-body) simulations. Inset: Probability distribution of $M_{QSS}$ computed via N-body simulation (the solid line is a Gaussian fit). Panel (b): Entropy $S$ at the stationary points, as a function of energy, $e$: magnetized solution (solid line) and non–magnetized one (dashed line).[]{data-label="fig1"}](fig1.eps){width="7cm"} ![Phase space snapshots for $(M_{0},e)=(0.5,0.69)$.[]{data-label="fig2"}](fig2.ps){width="7cm"} Qualitatively, one can track the evolution of the system in phase space, both for the homogeneous and non-homogeneous cases. Results of the Vlasov integration are displayed in figure \[fig2\] for $(M_{0},e)=(0.5,0.69)$, where the system is shown to evolve towards a non-magnetized QSS. 
The initial water-bag distribution splits into two large resonances, which persist asymptotically: the latter acquire constant opposite velocities which are maintained during the subsequent time evolution, in agreement with the findings of [@antoniazziPRL]. The two bumps are therefore an emergent property of the model, which is correctly reproduced by the Vlasov dynamics. For larger values of the initial magnetization ($M_{0}>0.89$), while keeping $e=0.69$, the system evolves towards an asymptotic magnetized state, in agreement with the theory. In this case several resonances rapidly develop and eventually coalesce, giving rise to complex patterns in phase space. More quantitatively, one can compare the velocity distributions resulting from, respectively, Vlasov and N-body simulations. The curves are displayed in figure \[fig3\] (a),(b),(c) for various choices of the initial conditions in the magnetized region. The agreement is excellent, thus reinforcing our former conclusion about the validity of the Vlasov model. Finally, let us stress that, when $e=0.69$, the two solutions (resp. magnetized and non-magnetized) [@antoniazziPRL] are associated with a practically indistinguishable entropy level (see figure \[fig3\] (d)). As previously discussed, the system explores an almost flat entropy landscape and can therefore be stuck in local traps, because of finite size effects. A pronounced variability of the measured $M_{QSS}$ is therefore to be expected. ![Symbols: velocity distributions computed via N-body simulations. Solid line: velocity distributions obtained through a direct integration of the Vlasov equation. Here $e=0.69$ and $M_0=0.3$ (a), $M_0=0.5$ (b), $M_0=0.7$ (c). 
Panel (d): Entropy at the stationary points as a function of the initial magnetization: the solid line refers to the global maximum, while the dotted line to the saddle point.[]{data-label="fig3"}](fig3.eps){width="7cm"} In this Letter, we have analyzed the emergence of QSSs, a universal feature that occurs in systems with long-range interactions, for the specific case of the HMF model. By comparing numerical simulations and analytical predictions, we have been able to unambiguously demonstrate that the Vlasov model provides an accurate framework to address the study of the QSS. Working within the Vlasov context one can develop a fully predictive theoretical approach, which is completely justified from first principles. Finally, and most importantly, results of conventional Vlasov codes are to be critically scrutinized, especially in specific regions of parameter space close to transitions from homogeneous to non-homogeneous states. We acknowledge financial support from the PRIN05-MIUR project [*Dynamics and thermodynamics of systems with long-range interactions*]{}. [99]{} P.J. Peebles, *The Large-scale Structure of the Universe*, Princeton, NJ: Princeton University Press (1980). A. Mangeney et al., J. Comp. Physics [**179**]{}, 495 (2002). L. Galeotti et al., Phys. Rev. Lett. [**95**]{}, 015002 (2005); F. Califano et al., Phys. Plasmas [**13**]{}, 082102 (2006). M. Antoni et al., Phys. Rev. E **52**, 2361 (1995). W. Braun et al., Comm. Math. Phys. **56**, 101 (1977). T. Dauxois et al., Lect. Not. Phys. [**602**]{}, Springer (2002). V. Latora et al., Phys. Rev. Lett. **80**, 692 (1998). V. Latora et al., Phys. Rev. E **64**, 056134 (2001). C. Tsallis, J. Stat. Phys. [**52**]{}, 479 (1988). Y.Y. Yamaguchi et al., Physica A [**337**]{}, 36 (2004). A. Antoniazzi et al., Phys. Rev. E **75**, 011112 (2007); P.H. Chavanis, Eur. Phys. J. B [**53**]{}, 487 (2006). D. Lynden-Bell et al., Mon. Not. R. Astron. Soc. [**138**]{}, 495 (1968). P.H. Chavanis, Ph.D. Thesis, ENS Lyon (1996). A. Rapisarda et al., Europhysics News [**36**]{}, 202 (2005); F. Bouchet et al., Europhysics News [**37**]{}, 2, 9-10 (2006). T. Padmanabhan, Phys. Rep. [**188**]{}, 285 (1990). J. Barr[é]{} et al., Phys. Rev. E [**69**]{}, 045501(R) (2004). R. Kawahara and H. Nakanishi, cond-mat/0611694. P.H. Chavanis, [Physica A]{} **359**, 177 (2006). J. Michel et al., Comm. Math. Phys. **159**, 195 (1994). C.G. Cheng and G. Knorr, J. Comp. Phys. [**22**]{}, 330 (1976). M.C. Firpo, Y. Elskens, Phys. Rev. Lett. [**84**]{}, 3318 (2000). [^1]: andrea.antoniazzi@unifi.it [^2]: francesco.califano@df.unipi.it [^3]: duccio.fanelli@ki.se [^4]: stefano.ruffo@unifi.it
--- abstract: - | This paper investigates the generalization of Principal Component Analysis (PCA) to Riemannian manifolds. We first propose a new and general type of family of subspaces in manifolds that we call barycentric subspaces. They are implicitly defined as the locus of points which are weighted means of $k+1$ reference points. As this definition relies on points and not on tangent vectors, it can also be extended to geodesic spaces which are not Riemannian. For instance, in stratified spaces, it naturally allows principal subspaces that span several strata, which is impossible in previous generalizations of PCA. We show that barycentric subspaces locally define a submanifold of dimension $k$ which generalizes geodesic subspaces. Second, we rephrase PCA in Euclidean spaces as an optimization on flags of linear subspaces (a hierarchy of properly embedded linear subspaces of increasing dimension). We show that the Euclidean PCA minimizes the Accumulated Unexplained Variances by all the subspaces of the flag (AUV). Barycentric subspaces are naturally nested, allowing the construction of hierarchically nested subspaces. Optimizing the AUV criterion to optimally approximate data points with flags of affine spans in Riemannian manifolds leads to a particularly appealing generalization of PCA on manifolds called Barycentric Subspaces Analysis (BSA). - 'This supplementary material details the notions of Riemannian geometry that are underlying the paper [*Barycentric Subspace Analysis on Manifolds*]{}. In particular, it investigates the Hessian of the Riemannian square distance whose definiteness controls the local regularity of the barycentric subspaces. This is exemplified on the sphere and the hyperbolic space.' - 'This supplementary material details at length the proof that the flag of linear subspaces found by PCA optimizes the Accumulated Unexplained Variances (AUV) criterion in a Euclidean space.' 
address: - | Asclepios team, Inria Sophia-Antipolis Méditerrannée\ 2004 Route des Lucioles, BP93\ F-06902 Sophia-Antipolis Cedex, France\ - | Asclepios team, Inria Sophia Antipolis\ 2004 Route des Lucioles, BP93\ F-06902 Sophia-Antipolis Cedex, France - | Asclepios team, Inria Sophia-Antipolis Méditerranée\ 2004 Route des Lucioles, BP93\ F-06902 Sophia-Antipolis Cedex, France\ author: - - - title: - Barycentric Subspace Analysis on Manifolds - | Supplementary Materials A:\ Hessian of the Riemannian Squared Distance - 'Supplementary Materials B: Euclidean PCA as an optimization in the flag space' --- Introduction ============ In a Euclidean space, the principal $k$-dimensional affine subspace of the Principal Component Analysis (PCA) procedure is equivalently defined by minimizing the variance of the residuals (the projection of the data point to the subspace) or by maximizing the explained variance within that affine subspace. This double interpretation is available through Pythagoras’ theorem, which does not hold in more general manifolds. A second important observation is that principal components of different orders are nested, enabling the forward or backward construction of nested principal components. Generalizing PCA to manifolds first requires the definition of the equivalent of affine subspaces in manifolds. For the zero-dimensional subspace, an intrinsic generalization of the mean on manifolds naturally comes into mind: the Fréchet mean is the set of global minima of the variance, as defined by [@frechet48] in general metric spaces. For simply connected Riemannian manifolds of non-positive curvature, the minimum is unique and is called the Riemannian center of mass. This fact was already known by Cartan in the 1920’s, but was not used for statistical purposes. [@karcher77; @buser_gromovs_1981] first established conditions on the support of the distribution to ensure the uniqueness of a local minimum in general Riemannian manifolds. 
This is now generally called Karcher mean, although there is a dispute on the naming [@karcher_riemannian_2014]. From a statistical point of view, [@Bhattacharya:2003; @Bhattacharya:2005] have studied in depth the asymptotic properties of the empirical Fréchet / Karcher means. The one-dimensional component can naturally be a geodesic passing through the mean point. Higher-order components are more difficult to define. The simplest generalization is tangent PCA (tPCA), which amounts to unfolding the whole distribution in the tangent space at the mean, and computing the principal components of the covariance matrix in the tangent space. The method is thus based on the maximization of the explained variance, which is consistent with the entropy maximization definition of a Gaussian on a manifold proposed by [@pennec:inria-00614994]. tPCA is actually implicitly used in most statistical works on shape spaces and Riemannian manifolds because of its simplicity and efficiency. However, while tPCA is good for analyzing data which are sufficiently centered around a central value (unimodal or Gaussian-like data), it is often not sufficient for distributions which are multimodal or supported on large compact subspaces (e.g. circles or spheres). Instead of an analysis of the covariance matrix, [@fletcher_principal_2004] proposed the minimization of squared distances to subspaces which are totally geodesic at a point, a procedure coined Principal Geodesic Analysis (PGA). These Geodesic Subspaces (GS) are spanned by the geodesics going through a point with tangent vector restricted to a linear subspace of the tangent space. However, the least-squares procedure is computationally expensive, so that the authors approximated it in practice with tPCA, which led to confusion between tPCA and PGA. A real implementation of the original PGA procedure was only recently provided by [@sommer_optimization_2013]. 
PGA allows the construction of a flag (a sequence of embedded subspaces) of principal geodesic subspaces, consistent with a forward component analysis approach. Components are built iteratively from the mean point by selecting the tangent direction that optimally reduces the square distance of data points to the geodesic subspace. In this procedure, the mean always belongs to geodesic subspaces even when it is outside of the distribution support. To alleviate this problem, [@huckemann_principal_2006], and later [@huckemann_intrinsic_2010], proposed to start at the first order component directly with the geodesic best fitting the data, which is not necessarily going through the mean. The second principal geodesic is chosen orthogonally to the first one, and higher order components are added orthogonally at the crossing point of the first two components. The method was named Geodesic PCA (GPCA). Further relaxing the assumption that second and higher order components should cross at a single point, [@sommer_horizontal_2013] proposed a parallel transport of the second direction along the first principal geodesic to define the second coordinates, and iteratively define higher order coordinates through horizontal development along the previous modes. These are all intrinsically forward methods that build successively larger approximation spaces for the data. A notable exception is the concept of Principal Nested Spheres (PNS), proposed by [@jung_analysis_2012] in the context of planar landmark shape spaces. A backward analysis approach determines a decreasing family of nested subspheres by slicing a higher dimensional sphere with affine hyperplanes. In this process, the nested subspheres are not of radius one, unless the hyperplanes pass through the origin. [@damon_backwards_2013] have recently generalized this approach to manifolds with the help of a “nested sequence of relations”. However, up to now, such a sequence was only known for spheres or Euclidean spaces. 
We first propose in this paper new types of families of subspaces in manifolds: barycentric subspaces generalize geodesic subspaces and can naturally be nested, allowing the construction of inductive forward or backward nested subspaces. We then rephrase PCA in Euclidean spaces as an optimization on flags of linear subspaces (a hierarchy of properly embedded linear subspaces of increasing dimension). To that end, we propose an extension of the unexplained variance criterion that generalizes nicely to flags of barycentric subspaces in Riemannian manifolds. This leads to a particularly appealing generalization of PCA on manifolds: Barycentric Subspaces Analysis (BSA). Paper Organization {#paper-organization .unnumbered} ------------------ We recall in Section \[Sec:Geom\] the notions and notations needed to define statistics on Riemannian manifolds, and we introduce the two running example manifolds of this paper: $n$-dimensional spheres and hyperbolic spaces. Exponential Barycentric Subspaces (EBS) are then defined in Section \[Sec:Bary\] as the locus of weighted exponential barycenters of $k+1$ affinely independent reference points. The closure of the EBS in the original manifold is called affine span (this differs from the preliminary definition of [@pennec:hal-01164463]). Equations of the EBS and affine span are exemplified on our running examples: the affine span of $k+1$ affinely independent reference points is the great subsphere (resp. sub-hyperbola) that contains the reference points. In fact, any other tuple of points of that subspace generates the same affine span, which is also a geodesic subspace. This coincidence is due to the very high symmetry of the constant curvature spaces. Section \[Sec:KBS\] defines the Karcher (resp. Fréchet) barycentric subspaces (KBS, resp. FBS) as the local (resp. global) minima of the weighted squared distance to the reference points. 
As the definitions rely on distances between points and not on tangent vectors, they are also valid in more general non-Riemannian geodesic spaces. For instance, in stratified spaces, barycentric subspaces may naturally span several strata. For Riemannian manifolds, we show that our three definitions are subsets of each other (except possibly at the cut locus of the reference points): the largest one, the EBS, is composed of the critical points of the weighted variance. It forms a cell complex according to the index of the critical points. Cells of positive index gather local minima to form the KBS. We explicitly compute the Hessian on our running spherical and hyperbolic examples. Numerical tests show that the index can be arbitrary, thus subdividing the EBS into several regions for both positively and negatively curved spaces. Thus, the KBS consistently covers only a small portion of the affine span in general and is a less interesting definition for subspace analysis purposes. For affinely independent points, we show in Section \[Sec:Prop\] that the regular part of a barycentric subspace is a stratified space which is locally a submanifold of dimension $k$. At the limit, points may coalesce along certain directions, defining non local jets[^1] instead of a regular $k+1$-tuple. Restricted geodesic subspaces, which are defined by $k$ vectors tangent at a point, correspond to the limit of the affine span when the $k$-tuple converges towards that jet. Finally, we discuss in Section \[Sec:BSA\] the use of these barycentric subspaces to generalize PCA on manifolds. Barycentric subspaces can be naturally nested by defining an ordering of the reference points. Like for PGA, this enables the construction of a forward nested sequence of subspaces which contains the Fréchet mean. In addition, BSA also provides backward nested sequences which may not contain the mean. 
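For intuition, on the sphere (anticipating the explicit exp and log maps given later for ${\cal S}_n$) a local minimum of the weighted squared distance to the reference points — hence a KBS point — can be located by the classical fixed-point iteration on the first moment. The sketch below is our own illustration, with arbitrary reference points and normalized weights:

```python
import numpy as np

def sphere_exp(x, v):
    nv = np.linalg.norm(v)
    return x if nv < 1e-14 else np.cos(nv) * x + np.sin(nv) * v / nv

def sphere_log(x, y):
    th = np.arccos(np.clip(x @ y, -1.0, 1.0))
    return np.zeros_like(x) if th < 1e-14 else \
        (th / np.sin(th)) * (y - np.cos(th) * x)

def weighted_karcher_point(X, lam, x0, iters=100):
    """Fixed-point iteration x <- exp_x(sum_i lam_i log_x(x_i)); converges
    to a local minimum of the weighted variance for concentrated reference
    points (columns of X) and normalized weights lam."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        m1 = sum(l * sphere_log(x, xi) for l, xi in zip(lam, X.T))
        x = sphere_exp(x, m1)
    return x
```

At convergence the first moment ${\mathfrak M}_1(x,\lambda)$ vanishes, i.e. the point is a critical point of the weighted variance and thus belongs to the EBS.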
However, the criterion on which these constructions are based can be optimized for each subspace independently but not consistently for the whole sequence of subspaces. In order to obtain a global criterion, we rephrase PCA in Euclidean spaces as an optimization on flags of linear subspaces (a hierarchy of properly embedded linear subspaces of increasing dimension). To that end, we propose an extension of the unexplained variance criterion (the Accumulated Unexplained Variance (AUV) criterion) that generalizes nicely to flags of affine spans in Riemannian manifolds. This results in a particularly appealing generalization of PCA on manifolds, that we call Barycentric Subspaces Analysis (BSA). Riemannian geometry {#Sec:Geom} =================== In Statistics, directional data occupy a place of choice [@dryden2005; @huckemann_principal_2006]. Hyperbolic spaces are also the simplest models of negatively curved spaces which model the space of isotropic Gaussian parameters with the Fisher-Rao metric in information geometry [@costa_fisher_2015]. As non-flat constant curvature spaces, both spherical and hyperbolic spaces are now considered in manifold learning for embedding data [@wilson_spherical_2014]. Thus, they are ideal examples to illustrate the theory throughout this paper. Tools for computing on Riemannian manifolds ------------------------------------------- We consider a differential manifold ${\ensuremath{{\cal M}}}$ endowed with a smooth scalar product ${\ensuremath{ \left< \:.\:\left|\:.\right.\right> }}_{x}$ called the Riemannian metric on each tangent space $T_{x}{\ensuremath{{\cal M}}}$ at point $x$ of ${\ensuremath{{\cal M}}}$. In a chart, the metric is specified by the dot product of the tangent vector to the coordinate curves: $g_{ij}(x) = {\ensuremath{ \left< \:\partial_i\:\left|\:\partial_j\right.\right> }}_x$. The Riemannian distance between two points is the infimum of the length of the curves joining these points. 
Geodesics, which are critical points of the energy functional, are parametrized by arc-length in addition to optimizing the length. We assume here that the manifold is geodesically complete, i.e. that the definition domain of all geodesics can be extended to ${\ensuremath{\mathbb{R}}}$. This means that the manifold has no boundary nor any singular point that we can reach in a finite time. As an important consequence, the Hopf-Rinow-De Rham theorem states that there always exists at least one minimizing geodesic between any two points of the manifold (i.e. whose length is the distance between the two points). #### Normal coordinate system From the theory of second order differential equations, we know that there exists one and only one geodesic $\gamma_{(x,v)}(t)$ starting from the point $x$ with the tangent vector $v \in T_{x}{\ensuremath{{\cal M}}}$. The exponential map at point $x$ maps each tangent vector $v \in T_{x}{\ensuremath{{\cal M}}}$ to the point of the manifold that is reached after a unit time by the geodesic: $ \exp_{x}(v) = \gamma_{(x,v)}(1)$. The exponential map is locally one-to-one around $0$: we denote by ${\ensuremath{\overrightarrow{xy}}}=\log_{x}(y)$ its inverse. The injectivity domain is the maximal domain $D(x) \subset T_{x}{\ensuremath{{\cal M}}}$ containing $0$ where the exponential map is a diffeomorphism. This is a connected star-shape domain limited by the tangential cut locus $\partial D(x) = C(x) \subset T_{x}{\ensuremath{{\cal M}}}$ (the set of vectors $t v$ where the geodesic $\gamma_{(x, v)}(t)$ ceases to be length minimizing). The cut locus ${\ensuremath{{\cal C}}}(x) = \exp_{x}(C(x)) \subset {\ensuremath{{\cal M}}}$ is the closure of the set of points where several minimizing geodesics starting from $x$ meet. The image of the domain $D(x)$ by the exponential map covers all the manifold except the cut locus, which has null measure. 
Provided with an orthonormal basis, exp and log maps realize a normal coordinate system at each point $x$. Such an atlas is the basis of programming on Riemannian manifolds as exemplified in [@pennec:inria-00614990]. #### Hessian of the squared Riemannian distance On ${\ensuremath{{\cal M}}}\setminus C(y)$, the Riemannian gradient $\nabla^a = g^{ab} \partial_b$ of the squared distance $d^2_y(x)={\ensuremath{\:\mbox{\rm dist}}}^2(x, y)$ with respect to the fixed point $y$ is $\nabla d^2_y(x) = -2 \log_x(y)$. The Hessian operator (or double covariant derivative) $\nabla^2$ is the covariant derivative of the gradient. In a normal coordinate at the point $x$, the Christoffel symbols vanish at $x$ so that the Hessian of the square distance can be expressed with the standard differential $D_x$ with respect to the footpoint $x$: $\nabla^2 d^2_y(x) = -2 (D_x \log_x(y))$. It can also be written in terms of the differentials of the exponential map as $ \nabla^2 d^2_y(x) = ( \left. D\exp_x \right|_{{\ensuremath{\overrightarrow{xy}}}})^{-1} \left. D_x \exp_x \right|_{{\ensuremath{\overrightarrow{xy}}}}$ to explicitly make the link with Jacobi fields. Following [@brewin_riemann_2009], we computed in \[suppA\] the Taylor expansion of this matrix in a normal coordinate system at $x$: $$\label{eq:Diff_log} \left[ D_x \log_x(y) \right]^a_b = -\delta^a_b + \frac{1}{3} R^a_{cbd} {\ensuremath{\overrightarrow{xy}}}^c {\ensuremath{\overrightarrow{xy}}}^d + \frac{1}{12} \nabla_c R^a_{dbe} {\ensuremath{\overrightarrow{xy}}}^c {\ensuremath{\overrightarrow{xy}}}^d {\ensuremath{\overrightarrow{xy}}}^e + O(\varepsilon^3).$$ Here, $R^a_{cbd}(x)$ are the coefficients of the curvature tensor at $x$ and Einstein summation convention implicitly sums upon each index that appear up and down in the formula. Since we are in a normal coordinate system, the zeroth order term is the identity matrix, like in Euclidean spaces, and the first order term vanishes. 
The Riemannian curvature tensor appears in the second order term and its covariant derivative in the third order term. Curvature is the leading term that makes this matrix depart from the identity (the Euclidean case) and may lead to the non-invertibility of the differential. #### Moments of point distributions Let $\{x_0,\ldots x_k\}$ be a set of $k+1$ points on a manifold provided with weights $(\lambda_0, \ldots \lambda_k)$ that do not sum to zero. We may see these weighted points as the weighted sum of Diracs $\mu(x) = \sum_i \lambda_i \delta_{x_i}(x)$. As this distribution is not normalized and weights can be negative, it is generally not a probability. It is also singular with respect to the Riemannian measure. Thus, we have to take care when defining its moments, as the Riemannian log and distance functions are not smooth at the cut-locus. \[$(k+1)$-pointed / punctured Riemannian manifold\] $ $\ Let $\{x_0, \ldots x_k\} \in {\ensuremath{{\cal M}}}^{k+1}$ be a set of $k+1$ reference points in the $n$-dimensional Riemannian manifold ${\ensuremath{{\cal M}}}$ and $C(x_0, \ldots x_k) = \cup_{i=0}^k C(x_i)$ be the union of the cut loci of these points. We call the object consisting of the smooth manifold ${\ensuremath{{\cal M}}}$ and the $k+1$ reference points a $(k+1)$-pointed manifold. Likewise, we call the submanifold $ {\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}= {\ensuremath{{\cal M}}}\setminus C(x_0, \ldots x_k)$ of the non-cut points of the $k+1$ reference points a $(k+1)$-punctured manifold. On $ {\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$, the distance to the points $\{x_0, \ldots x_k\}$ is smooth. The Riemannian log function ${\ensuremath{\overrightarrow{x x_i}}} = \log_x(x_i)$ is also well defined for all the points of ${\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$. 
Since the cut locus of each point is closed and has null measure, the punctured manifold ${\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$ is open and dense in ${\ensuremath{{\cal M}}}$, which means that it is a submanifold of ${\ensuremath{{\cal M}}}$. However, this submanifold is not necessarily connected. For instance in the flat torus $(S_1)^n$, the cut-locus of $k+1 \leq n$ points divides the torus into $k^n$ disconnected cells. \[Weighted moments of a $(k+1)$-pointed manifold\] $ $\ Let $(\lambda_0, \ldots \lambda_k) \in {\ensuremath{\mathbb{R}}}^{k+1}$ such that $\sum_i \lambda_i \not = 0$. We call ${{\underaccent{\bar}{\lambda}}}_i = \lambda_i / (\sum_{j=0}^k \lambda_j)$ the normalized weights. The weighted $p$-th order moment of a $(k+1)$-pointed Riemannian manifold is the $p$-contravariant tensor: $${\mathfrak M}_{p}(x,\lambda) = \sum_i \lambda_i \underbrace{{\ensuremath{\overrightarrow{xx_i}}} \otimes {\ensuremath{\overrightarrow{xx_i}}} \ldots \otimes {\ensuremath{\overrightarrow{xx_i}}}}_{\text{$p$ times}}.$$ The normalized $p$-th order moment is: $\underline{\mathfrak M}_p(x,\lambda) = {\mathfrak M}_p(x,{{\underaccent{\bar}{\lambda}}}) = {\mathfrak M}_p(x,\lambda)/ {\mathfrak M}_0(\lambda).$ Both tensors are smoothly defined on the punctured manifold ${\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$. The 0-th order moment ${\mathfrak M}_0(\lambda) = \sum_i \lambda_i = \mathds{1}{^{\text{\tiny T}}}\lambda$ is the mass. The $p$-th order moment is homogeneous of degree 1 in $\lambda$ while the normalized $p$-th order moment is naturally invariant by a change of scale of the weights. For a fixed weight $\lambda$, the first order moment ${\mathfrak M}_1(x,\lambda) = \sum_i \lambda_i {\ensuremath{\overrightarrow{xx_i}}}$ is a smooth vector field on the manifold ${\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$ whose zeros will be the subject of our interest. 
The second and higher order moments are smooth $(p,0)$ tensor fields that will be used in contraction with the Riemannian curvature tensor. #### Affinely independent points on a manifold In a Euclidean space, $k+1$ points are affinely independent if their affine combination generates a $k$-dimensional subspace, or equivalently if none of the points belongs to the affine span of the $k$ others. They define in that case a $k$-simplex. Extending these different definitions to manifolds leads to different notions. We chose a definition which rules out the singularities of constant curvature spaces and which guarantees the existence of barycentric subspaces around the reference points. In the sequel, we assume by default that the $k+1$ reference points of pointed manifolds are affinely independent (thus $k \leq n$). Except for a few examples, the study of singular configurations is left for future work. A set of $k+1$ points $\{x_0,\ldots x_k\}$ is affinely independent if no point is in the cut-locus of another and if all the sets of $k$ vectors $\{ \log_{x_i}(x_j) \}_{0 \leq j \not = i \leq k} \in T_{x_i}{\ensuremath{{\cal M}}}^k$ are linearly independent. \[def:AffineIndependence\] Example on the sphere ${\cal S}_n$ ---------------------------------- We consider the unit sphere in dimension $n \geq 1$ embedded in ${\ensuremath{\mathbb{R}}}^{n+1}$. The tangent space at $x$ is the space of vectors orthogonal to $x$: $T_x{\cal S}_n = \{ v \in {\ensuremath{\mathbb{R}}}^{n+1}, v{^{\text{\tiny T}}}x =0\}$ and the Riemannian metric is inherited from the Euclidean metric of the embedding space. With these conventions, the Riemannian distance is the arc-length $d(x,y) = \arccos( x{^{\text{\tiny T}}}y)= \theta \in [0,\pi]$. 
Using the smooth function $f(\theta) = { \theta}/{\sin\theta}$ from $]-\pi;\pi[$ to ${\ensuremath{\mathbb{R}}}$, which is always greater than one, the spherical exp and log maps are: $$\begin{aligned} \exp_x(v) & = & \cos(\| v\|) x + \sin(\| v\|) v / \| v\| \\ \log_x(y) & = & f(\theta) \left( y - \cos\theta\: x \right) \quad \text{with} \quad \theta = \arccos(x{^{\text{\tiny T}}}y).\end{aligned}$$ #### Hessian The orthogonal projection $v=({\ensuremath{\:\mathrm{Id}}}-x x{^{\text{\tiny T}}})w$ of a vector $w \in {\ensuremath{\mathbb{R}}}^{n+1}$ onto the tangent space $T_x{\cal S}_n$ provides a chart around a point $x\in {\cal S}_n$ where we can compute the gradient and Hessian of the squared Riemannian distance (detailed in \[suppA\]). Letting $u={({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}})y }/ {\sin \theta} = { \log_x(y) }/{\theta}$ be the unit tangent vector pointing from $x$ to $y$, we obtain: $$\begin{aligned} H_x(y) = \nabla^2 d^2_y(x) & = &2 u u{^{\text{\tiny T}}}+ 2 f( \theta )\cos\theta ({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}}- u u{^{\text{\tiny T}}}). \label{eq:HessDistSphere2}\end{aligned}$$ By construction, $x$ is an eigenvector with eigenvalue $0$. Then the vector $u$ (or equivalently $\log_x(y) = \theta u$) is an eigenvector with eigenvalue $1$. Finally, every vector which is orthogonal to these two vectors (i.e. orthogonal to the plane spanned by 0, $x$ and $y$) has eigenvalue $ f(\theta)\cos\theta = \theta \cot \theta$. This last eigenvalue is positive for $\theta \in [0,\pi/2[$, vanishes for $\theta = \pi/2$ and becomes negative for $\theta \in ]\pi/2, \pi[$. We retrieve here the results of [@buss_spherical_2001 lemma 2] expressed in a more general coordinate system. #### Moments of a $(k+1)$-pointed sphere We denote a set of $k+1$ points on the sphere and the matrix of their coordinates by $X=[x_0,\ldots x_k]$. 
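These closed forms are easy to check numerically. The sketch below (illustrative code, not from the paper) verifies that $\exp_x(\log_x(y)) = y$ and tests the eigenstructure of the displayed Hessian $\nabla^2 d^2_y(x)$, whose eigenvalues carry an overall factor of $2$ relative to those quoted for the normalized Hessian $\nabla^2 (d^2_y/2)$, since it differentiates $d^2$ rather than $\tfrac{1}{2}d^2$:

```python
import numpy as np

# Sketch: spherical exp/log maps on S^3 in R^4, plus a check of the Hessian
# formula H_x(y) = 2 u u^T + 2 theta*cot(theta) (Id - x x^T - u u^T).
def sphere_exp(x, v):
    nv = np.linalg.norm(v)
    return x.copy() if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * v / nv

def sphere_log(x, y):
    theta = np.arccos(np.clip(x @ y, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros_like(x)
    return (theta / np.sin(theta)) * (y - np.cos(theta) * x)

rng = np.random.default_rng(0)
x = rng.normal(size=4); x /= np.linalg.norm(x)   # random points on S^3
y = rng.normal(size=4); y /= np.linalg.norm(y)
assert np.allclose(sphere_exp(x, sphere_log(x, y)), y, atol=1e-10)

theta = np.arccos(np.clip(x @ y, -1.0, 1.0))
u = sphere_log(x, y) / theta
H = 2 * np.outer(u, u) + 2 * theta / np.tan(theta) * (
        np.eye(4) - np.outer(x, x) - np.outer(u, u))
w = rng.normal(size=4)                           # generic w orthogonal to x, u
w -= (w @ x) * x + (w @ u) * u; w /= np.linalg.norm(w)
assert np.allclose(H @ x, 0, atol=1e-10)         # eigenvalue 0 along x
assert np.allclose(H @ u, 2 * u, atol=1e-10)     # 2*1 along u
assert np.allclose(H @ w, 2 * theta / np.tan(theta) * w, atol=1e-10)
```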
The cut locus of $x_i$ is its antipodal point $-x_i$ so that the $(k+1)$-punctured manifold is ${\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}= {\cal S}_n \setminus -X$. Using the invertible diagonal matrix $F(X,x) = \mbox{Diag}( f( \arccos(x_i {^{\text{\tiny T}}}x) ) )$, the first weighted moment is: $${\mathfrak M}_1(x, \lambda) = \textstyle \sum_i \lambda_i {\ensuremath{\overrightarrow{x x_i}}} = ( {\ensuremath{\:\mathrm{Id}}}- x x{^{\text{\tiny T}}}) X F(X,x) \lambda. \label{eq:MomemtSphere}$$ #### Affine independence of the reference points Since no point is antipodal or identical to another, the plane generated by 0, $x_i$ and $x_j$ in the embedding space is also generated by 0, $x_i$ and the tangent vector $\log_{x_i}(x_j)$. This can be seen using a stereographic projection of pole $-x_i$ from ${\cal S}_n$ to $T_{x_i} {\cal S}_n$. Thus, 0, $x_i$ and the $k$ independent vectors $\log_{x_i}(x_j)$ ($j \not = i$) generate the same linear subspace of dimension $k+1$ in the embedding space as the points $\{0, x_0,\ldots x_k\}$. We conclude that $k+1$ points on the sphere are affinely independent if and only if the matrix $X=[x_0,\ldots x_k]$ has rank $k+1$. Example on the hyperbolic space ${\ensuremath{\mathbb{H}}}^n$ {#sec:hyperbolic} ------------------------------------------------------------- We now consider the hyperboloid of equation $-x_0^2 + x_1^2 + \ldots + x_n^2 = -1$ ($x_0 > 0$) embedded in ${\ensuremath{\mathbb{R}}}^{n+1}$ ($n \geq 2$). Using the notation $x=(x_0,\hat x)$ and the indefinite non-degenerate symmetric bilinear form ${\ensuremath{ \left< \:x\:\left|\:y\right.\right> }}_* = x{^{\text{\tiny T}}}J y= \hat x{^{\text{\tiny T}}}\hat y -x_0 y_0$ with $ J = \mbox{diag}(-1, {\ensuremath{\:\mathrm{Id}}}_n)$, the hyperbolic space ${\ensuremath{\mathbb{H}}}^n$ can be seen as the pseudo-sphere $\|x\|^2_* = \|\hat x\|^2 -x_0^2 = -1$ of squared radius $-1$ in the Minkowski space ${\ensuremath{\mathbb{R}}}^{1,n}$. 
A point can be parametrized by $x=(\sqrt{1+\|\hat x\|^2}, \hat x)$ for $\hat x \in {\ensuremath{\mathbb{R}}}^n$ (Weierstrass coordinates). The restriction of the Minkowski pseudo-metric of the embedding space ${\ensuremath{\mathbb{R}}}^{1,n}$ to the tangent space $T_x{\ensuremath{\mathbb{H}}}^n$ is positive definite. It defines the natural Riemannian metric on the hyperbolic space. With these conventions, geodesics are the trace of 2-planes passing through the origin and the Riemannian distance is the arc-length $d(x,y) = \operatorname{arccosh}( - {\ensuremath{ \left< \:x\:\left|\:y\right.\right> }}_* )$. Using the smooth positive function $f_*(\theta) = { \theta}/{\sinh(\theta)}$ from ${\ensuremath{\mathbb{R}}}$ to $]0,1]$, the hyperbolic exp and log maps are: $$\begin{aligned} \exp_x(v) & = & \cosh(\| v\|_* ) x + {\sinh(\| v\|_* )} v / {\| v\|_* } \\ \log_x(y) & = & f_*(\theta) \left( y - \cosh(\theta) x \right) \quad \text{with} \quad \theta = \operatorname{arccosh}( -{\ensuremath{ \left< \:x\:\left|\:y\right.\right> }}_* ).\end{aligned}$$ #### Hessian The orthogonal projection $v=w + {\ensuremath{ \left< \:w\:\left|\:x\right.\right> }}_* x = ({\ensuremath{\:\mathrm{Id}}}+ x x{^{\text{\tiny T}}}J) w$ of a vector $w\in {\ensuremath{\mathbb{R}}}^{1,n}$ onto the tangent space $T_x {\ensuremath{\mathbb{H}}}^n$ provides a chart around the point $x\in {\ensuremath{\mathbb{H}}}^n$ where we can compute the gradient and Hessian of the hyperbolic squared distance (detailed in \[suppA\]). Letting $u= { \log_x(y) }/{\theta}$ be the unit tangent vector pointing from $x$ to $y$, the Hessian is: $$H_x(y) = \nabla^2 d^2_y(x) = 2 J \left( u u{^{\text{\tiny T}}}+ \theta \coth \theta (J + x x{^{\text{\tiny T}}}-u u{^{\text{\tiny T}}}) \right) J \label{eq:GradDistHyper}$$ By construction, $x$ is an eigenvector with eigenvalue $0$. The vector $u$ (or equivalently $\log_x(y) = \theta u$) is an eigenvector with eigenvalue $1$. Every vector orthogonal to these two vectors (i.e. 
to the plane spanned by 0, $x$ and $y$) has eigenvalue $ \theta \coth \theta \geq 1$ (with equality only for $\theta=0$). Thus, the Hessian of the squared distance is always positive definite. As a consequence, the squared distance is a convex function and has a unique minimum. This was of course expected for a negatively curved space [@bishop_manifolds_1969]. #### Moments of a $(k+1)$-pointed hyperboloid We now pick $k+1$ points on the hyperboloid whose matrix of coordinates is denoted by $X=[x_0,\ldots x_k]$. Since there is no cut-locus, the $(k+1)$-punctured manifold is the manifold itself: ${\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}= {\ensuremath{{\cal M}}}= {{\ensuremath{\mathbb{H}}}}^n$. Using the invertible diagonal matrix $F_*(X,x) = \mbox{Diag}( f_*( \operatorname{arccosh}( -{\ensuremath{ \left< \:x_i\:\left|\:x\right.\right> }}_* ) ) )$, the first weighted moment is $${\mathfrak M}_1(x, \lambda) = \textstyle \sum_i \lambda_i \log_x(x_i) = ({\ensuremath{\:\mathrm{Id}}}+ x x{^{\text{\tiny T}}}J) X F_*(X,x) \lambda. \label{eq:MomemtHyperboloid}$$ #### Affine independence As for the sphere, the origin, the point $x_i$ and the $k$ independent vectors $\log_{x_i}(x_j) \in T_{x_i}{{\ensuremath{\mathbb{H}}}}^n$ ($j \not = i$) generate the same $k+1$ dimensional linear subspace of the embedding Minkowski space ${\ensuremath{\mathbb{R}}}^{1,n}$ as the points $\{x_0, \ldots x_k\}$. Thus, $k+1$ points on the hyperboloid are affinely independent if and only if the matrix $X$ has rank $k+1$. Exponential Barycentric Subspaces (EBS) and Affine Spans {#Sec:Bary} ======================================================== Affine subspaces in a Euclidean space ------------------------------------- In Euclidean PCA, a zero dimensional space is a point, a one-dimensional space is a line, and an affine subspace of dimension $k$ is generated by a point and $k \leq n$ linearly independent vectors. 
We can also generate such a subspace by taking the affine hull of $k+1$ affinely independent points: $\operatorname{Aff}(x_0,\ldots x_k) =\left\{ x = \sum_i \lambda_i x_i, \text{with} \sum_{i=0}^k \lambda_i = 1\right\}.$ These two definitions are equivalent in a Euclidean space, but turn out to have different generalizations in manifolds. When there exists a vector of coefficients $\lambda = (\lambda_0, \lambda_1, \ldots, \lambda_k) \in {\ensuremath{\mathbb{R}}}^{k+1}$ (which do not sum to zero) such that $\sum_{i=0}^k \lambda_i (x_i-x) =0,$ then $\lambda$ gives the barycentric coordinates of the point $x$ with respect to the $k$-simplex $\{x_0, \ldots x_k\}$. When the points are affinely dependent, some extra care has to be taken to show that the affine span is still well defined but with a lower dimensionality. Barycentric coordinates are homogeneous of degree one: \[def:Pk\] Barycentric coordinates of $k+1$ points live in the real projective space ${\ensuremath{\mathbb{R}}}P^k = ({\ensuremath{\mathbb{R}}}^{k+1} \setminus \{0\})/{\ensuremath{\mathbb{R}}}^*$ from which we remove the codimension 1 subspace $\mathds{1}^{\perp}$ orthogonal to the point $\mathds{1} = (1:1: \ldots 1)$: $$\textstyle {\ensuremath{{\cal P}^*_k}}= \left\{ \lambda=(\lambda_0 : \lambda_1 : \ldots : \lambda_k) \in {\ensuremath{\mathbb{R}}}P^k \text{ s.t. } \mathds{1}^{\top}\lambda \not = 0 \right\}.$$ ![image](Figures/ProjectiveSpaceWeightsP2_small){width="50.00000%"} Projective points are represented by lines through 0 in Fig.\[fig:P2\]. Standard representations are given by the intersection of the lines with the “upper” unit sphere $S_k$ of ${\ensuremath{\mathbb{R}}}^{k+1}$ with north pole $\mathds{1}/\sqrt{k+1}$ or by the affine $k$-plane of ${\ensuremath{\mathbb{R}}}^{k+1}$ passing through the point $\mathds{1}/(k+1)$ and orthogonal to this vector. 
This last representation gives the normalized weights $ \underline{\lambda}_i= \lambda_i / (\sum_{j=0}^k \lambda_j)$: the vertices of the simplex have homogeneous coordinates $(1 : 0 : ... : 0) \ldots (0 : 0 : ... : 1)$. To prevent the weights from summing to zero, we have to remove the codimension 1 subspace $\mathds{1}^{\perp}$ orthogonal to the projective point $\mathds{1} = (1:1: \ldots 1)$ (blue line in Fig.\[fig:P2\]). This excluded subspace corresponds to the equator of the pole $\mathds{1}/\sqrt{k+1}$ for the sphere representation (points $C$ and $-C$ identified in Fig.\[fig:P2\]), and to the projective completion (points at infinity) of the affine $k$-plane of normalized weights. EBS and Affine Span in Riemannian manifolds ------------------------------------------- \[Barycentric coordinates in a $(k+1)$-pointed manifold\] A point $x \in {\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$ has barycentric coordinates $\lambda \in {\ensuremath{{\cal P}^*_k}}$ with respect to $k+1$ affinely independent reference points if $$\label{eq:Bary} {\mathfrak M}_1(x,\lambda) = \textstyle \sum_{i=0}^k \lambda_i {\ensuremath{\overrightarrow{x x_i}}} = 0 .$$ Since the Riemannian log function ${\ensuremath{\overrightarrow{x x_i}}} = \log_x(x_i)$ is multiply defined on the cut locus of $x_i$, this definition cannot be extended to the union of all cut loci $C(x_0, \ldots x_k)$, which is why we restrict the definition to ${\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$. The EBS of the affinely independent points $(x_0,\ldots x_k) \in {\ensuremath{{\cal M}}}^{k+1}$ is the locus of weighted exponential barycenters of the reference points in ${\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$: $$\mbox{EBS}(x_0, \ldots x_k) = \{ x\in {\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}| \exists \lambda \in {\ensuremath{{\cal P}^*_k}}: {\mathfrak M}_1(x,\lambda) =0 \}.$$ The reference points could be seen as landmarks in the manifold. 
This definition is fully symmetric with respect to all of them, while one point is privileged in geodesic subspaces. We could draw a link with archetypal analysis [@Cutler:1994:AA] which searches for extreme data values such that all of the data can be well represented as convex mixtures of the archetypes. However, extremality is not mandatory in our framework. The subspace of barycentric coordinates $\Lambda(x) = \{ \lambda \in {\ensuremath{{\cal P}^*_k}}| {\mathfrak M}_1(x,\lambda) =0 \}$ at point $x \in {\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$ is either void, a point, or a linear subspace of ${\ensuremath{{\cal P}^*_k}}$. We see that a point belongs to $\operatorname{EBS}(x_0, \ldots x_k)$ if and only if $\Lambda(x) \not = \emptyset$. Moreover, any linear combination of weights that satisfies the equation is also a valid weight, so that $\Lambda(x)$ can only be a unique point (dimension 0) or a linear subspace of ${\ensuremath{{\cal P}^*_k}}$. The dimension of the dual space $\Lambda(x)$ actually controls the local dimension of the barycentric space, as we will see below. The discontinuity of the Riemannian log on the cut locus of the reference points may hide the continuity or discontinuities of the exponential barycentric subspace. In order to ensure the completeness and potentially reconnect different components, we consider the closure of this set. The affine span is the closure of the EBS in ${\ensuremath{{\cal M}}}$: $ \operatorname{Aff}(x_0, \ldots x_k) = \overline{\mbox{EBS}}(x_0, \ldots x_k).$ Because we assumed that ${\ensuremath{{\cal M}}}$ is geodesically complete, this is equivalent to the metric completion of the EBS. Characterizations of the EBS ---------------------------- Let $Z(x)= [{\ensuremath{\overrightarrow{x x_0}}},\ldots {\ensuremath{\overrightarrow{x x_k}}}]$ be the smooth field of $n\times (k+1)$ matrices of vectors pointing from any point $x \in {\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$ to the reference points. 
We can rewrite the constraint $\sum_i \lambda_i {\ensuremath{\overrightarrow{x x_i}}} =0$ in matrix form: ${\mathfrak M}_1(x,\lambda) = Z(x)\lambda =0,$ where $\lambda$ is the $(k+1)$-vector of homogeneous coordinates $\lambda_i$. \[THM1\] Let $Z(x)=U(x)\: S(x)\: V(x){^{\text{\tiny T}}}$ be a singular value decomposition of the $n\times (k+1)$ matrix field $Z(x)= [{\ensuremath{\overrightarrow{x x_0}}},\ldots {\ensuremath{\overrightarrow{x x_k}}}]$ on ${\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$ with singular values $\{s_i(x)\}_{0\leq i \leq k}$ sorted in decreasing order. $\mbox{EBS}(x_0, \ldots x_k)$ is the zero level-set of the smallest singular value $s_{k}(x)$ and the dual subspace of valid barycentric weights is spanned by the right singular vectors corresponding to the $l$ vanishing singular values: $\Lambda(x) = \operatorname{Span}(v_{k-l+1}, \ldots v_{k})$ (it is void if $l=0$). Since $U$ and $V$ are orthogonal matrices, $Z(x)\lambda=0$ if and only if at least one singular value (necessarily the smallest one $s_{k}$) is null, and $\lambda$ has to live in the corresponding right-singular space: $\Lambda(x) = Ker(Z(x))$. If we have only one zero singular value ($s_{k}=0$ and $s_{k-1}>0$), then $\lambda$ is proportional to $v_{k}$. If $l$ singular values vanish, then we have a higher dimensional linear subspace of solutions for $\lambda$. 
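This criterion is straightforward to test numerically. The sketch below (illustrative code, not from the paper) uses the spherical log map of the earlier example: on ${\cal S}_3$, the smallest singular value of $Z(x)$ vanishes exactly when $x$ lies in the Euclidean span of the reference points, and the corresponding right singular vector yields barycentric coordinates:

```python
import numpy as np

# Sketch: EBS membership on S^3 via the smallest singular value of
# Z(x) = [log_x(x_0), ..., log_x(x_k)], and recovery of the weights.
def sphere_log(x, y):
    theta = np.arccos(np.clip(x @ y, -1.0, 1.0))
    f = theta / np.sin(theta) if theta > 1e-12 else 1.0
    return f * (y - np.cos(theta) * x)

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 3))                 # k+1 = 3 reference points on S^3
X /= np.linalg.norm(X, axis=0)

def Z(x):
    return np.column_stack([sphere_log(x, X[:, i]) for i in range(3)])

x_in = X @ np.array([0.3, 0.5, 0.2])        # in the Euclidean span of X ...
x_in /= np.linalg.norm(x_in)                # ... renormalized onto the sphere
x_out = rng.normal(size=4); x_out /= np.linalg.norm(x_out)  # generic point

s_in = np.linalg.svd(Z(x_in), compute_uv=False)
s_out = np.linalg.svd(Z(x_out), compute_uv=False)
assert s_in[-1] < 1e-10 and s_out[-1] > 1e-6   # in / out of the EBS

lam = np.linalg.svd(Z(x_in))[2][-1]         # right singular vector v_k
assert np.allclose(Z(x_in) @ lam, 0, atol=1e-10)   # M1(x, lam) = 0
```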
\[THM5\] Let $G(x)$ be the matrix expression of the Riemannian metric in a local coordinate system and $\Omega(x) = Z(x){^{\text{\tiny T}}}G(x) Z(x)$ be the smooth $(k+1)\times (k+1)$ matrix field on ${\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$ with components $\Omega_{ij}(x) = {\ensuremath{ \left< \: {\ensuremath{\overrightarrow{x x_i}}} \:\left|\: {\ensuremath{\overrightarrow{x x_j}}} \right.\right> }}_x$ and $\Sigma(x) = {\mathfrak M}_2(x,\mathds{1} ) = \sum_{i=0}^k {\ensuremath{\overrightarrow{x x_i}}} \: {\ensuremath{\overrightarrow{x x_i}}}{^{\text{\tiny T}}}= Z(x) Z(x){^{\text{\tiny T}}}$ be the (scaled) $n \times n$ covariance matrix field of the reference points. $\operatorname{EBS}(x_0, \ldots x_k)$ is the zero level-set of: $\det(\Omega(x))$, the minimal eigenvalue $\sigma_{k+1}^2$ of $\Omega(x)$, and the $(k+1)$-th eigenvalue (in decreasing order) of the covariance $\Sigma(x)$. The constraint ${\mathfrak M}_1(x,\lambda)=0$ is satisfied if and only if: $$\| {\mathfrak M}_1(x,\lambda) \|^2_x = \left\| { \textstyle \sum_i \lambda_i {\ensuremath{\overrightarrow{x x_i}}}} \right\|^2_{x} = {\lambda{^{\text{\tiny T}}}\Omega(x) \lambda} =0.$$ As the function is homogeneous in $\lambda$, we can restrict to unit vectors. Adding this constraint to the cost function with a Lagrange multiplier, we end up with the Lagrangian ${\cal L}(x, \lambda, \alpha) = \lambda{^{\text{\tiny T}}}\Omega(x) \lambda +\alpha (\lambda{^{\text{\tiny T}}}\lambda -1)$. The minimum with respect to $\lambda$ is obtained for the eigenvector $\mu_{k+1}(x)$ associated to the smallest eigenvalue $\sigma^2_{k+1}(x)$ of $\Omega(x)$ (assuming that eigenvalues are sorted in decreasing order) and we have $\|{\mathfrak M}_1(x, \mu_{k+1}(x))\|^2_2 = \sigma^2_{k+1}(x)$, which is null if and only if the minimal eigenvalue is zero. 
Thus, the barycentric subspace of $k+1$ points is the locus of rank deficient matrices $\Omega(x)$: $$\operatorname{EBS}(x_0, \ldots x_k) = \phi{^{\text{\tiny (-1)}}}(0) \quad \mbox{where} \quad \phi(x) = \det(\Omega(x)).$$ One may want to relate the singular values of $Z(x)$ to the eigenvalues of $\Omega(x)$. The latter are the squares of the singular values of $G(x)^{1/2}Z(x)$. However, the left multiplication by the square root of the metric (a non singular but non orthogonal matrix) obviously changes the singular values in general except for vanishing ones: the (right) kernels of $G(x)^{1/2}Z(x)$ and $Z(x)$ are indeed the same. This shows that the EBS is an affine notion rather than a metric one, contrary to the Fréchet / Karcher barycentric subspace. To draw the link with the $n\times n$ covariance matrix of the reference points, let us notice first that the definition does not assume that the coordinate system is orthonormal. Thus, the eigenvalues of the covariance matrix depend on the chosen coordinate system, unless they vanish. In fact, only the joint eigenvalues of $\Sigma(x)$ and $G(x)$ really make sense, which is why this decomposition is called the proper orthogonal decomposition (POD). Now, the singular values of $Z(x)=U(x) S(x) V(x){^{\text{\tiny T}}}$ are also the square roots of the first $k+1$ eigenvalues of $\Sigma(x) = U(x) S^2(x) U(x){^{\text{\tiny T}}}$, the remaining $n-k-1$ eigenvalues being null. Similarly, the singular values of $G(x)^{1/2}Z(x)$ are the square roots of the first $k+1$ joint eigenvalues of $\Sigma(x)$ and $G(x)$. Thus, our barycentric subspace may also be characterized as the zero level-set of the $(k+1)$-th eigenvalue (sorted in decreasing order) of $\Sigma$, and this characterization is once again independent of the basis chosen. 
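The equivalence of these criteria can be checked numerically in the spherical setting, where the embedding coordinates make $G$ the identity on tangent vectors (an illustrative sketch, not from the paper):

```python
import numpy as np

# Sketch: on S^3 (embedding coordinates, G = Id on tangent vectors),
# det(Omega), the smallest eigenvalue of Omega = Z^T Z and the (k+1)-th
# eigenvalue of Sigma = Z Z^T all vanish at the same (EBS) points.
def sphere_log(x, y):
    theta = np.arccos(np.clip(x @ y, -1.0, 1.0))
    f = theta / np.sin(theta) if theta > 1e-12 else 1.0
    return f * (y - np.cos(theta) * x)

rng = np.random.default_rng(2)
X = rng.normal(size=(4, 3)); X /= np.linalg.norm(X, axis=0)  # 3 pts on S^3
x = X @ np.array([0.4, 0.4, 0.2]); x /= np.linalg.norm(x)    # EBS point

Zm = np.column_stack([sphere_log(x, X[:, i]) for i in range(3)])
Omega = Zm.T @ Zm                     # (k+1) x (k+1) Gram matrix
Sigma = Zm @ Zm.T                     # n x n (scaled) covariance

assert abs(np.linalg.det(Omega)) < 1e-10
assert np.linalg.eigvalsh(Omega)[0] < 1e-10          # smallest eigenvalue
eigS = np.sort(np.linalg.eigvalsh(Sigma))[::-1]      # decreasing order
assert eigS[2] < 1e-10 and eigS[1] > 1e-6            # 3rd eigenvalue vanishes
```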
Spherical EBS and affine span {#Sec:SphericalEBS} ----------------------------- From Eq.(\[eq:MomemtSphere\]) we identify the matrix: $Z(x) = ( {\ensuremath{\:\mathrm{Id}}}- x x{^{\text{\tiny T}}}) X F(X,x).$ Finding points $x$ and weights $\lambda$ such that $Z(x)\lambda=0$ is a classical matrix equation, except for the scaling matrix $F(X,x)$ acting on homogeneous projective weights, which is non-stationary and non-linear in both $X$ and $x$. However, since $F(X,x) = \mbox{Diag}( \theta_i /\sin \theta_i )$ is an invertible diagonal matrix, we can introduce [*renormalized weights*]{} $\tilde{\lambda} = F(X,x) \lambda, $ which leaves us with the equation $ ( {\ensuremath{\:\mathrm{Id}}}- x x{^{\text{\tiny T}}}) X \tilde \lambda=0$. The solutions under the constraint $\|x\|=1$ are given by $(x{^{\text{\tiny T}}}X \tilde{\lambda} ) x = X \tilde{\lambda}$ or more explicitly $x = \pm X \tilde{\lambda} / \| X \tilde{\lambda}\|$ whenever $X \tilde{\lambda} \not = 0$. This condition is ensured if $Ker(X)=\{0\}$. Thus, when the reference points are linearly independent, the point $x \in {\cal M}^*(X) $ has to belong to the Euclidean span of the reference vectors. Notice that for each barycentric coordinate we have two antipodal solution points. Conversely, any unit vector $x = X\tilde \lambda$ of the Euclidean span of $X$ satisfies the equation $( {\ensuremath{\:\mathrm{Id}}}- x x{^{\text{\tiny T}}}) X \tilde \lambda = (1-\|x\|^2) X \tilde \lambda =0$, and is thus a point of the EBS provided that it is not at the cut-locus of one of the reference points. This shows that $$\operatorname{EBS}(X) = \operatorname{Span}\{x_0, \ldots x_k\} \cap {\cal S}_n \setminus -X.$$ Using the renormalization principle, we can orthogonalize the reference points: let $X=U S V{^{\text{\tiny T}}}$ be a singular value decomposition of the matrix of reference vectors. All the singular values $s_i$ are positive since the reference vectors $x_i$ are assumed to be linearly independent. 
Thus, $\mu = S V{^{\text{\tiny T}}}\tilde{\lambda} = S V{^{\text{\tiny T}}}F(X,x) \lambda$ is an invertible change of coordinates, and we are left with solving $ ( {\ensuremath{\:\mathrm{Id}}}- x x{^{\text{\tiny T}}}) U\mu =0$. By definition of the singular value decomposition, the Euclidean spans of $X$ and $U$ are the same, so that $\operatorname{EBS}(U) = \operatorname{Span}\{x_0, \ldots x_k\} \cap {\cal S}_n \setminus -U$. This shows that the exponential barycentric subspaces generated by the original points $X=[x_0, \ldots x_k]$ and the orthogonalized points $U=[u_0, \ldots u_k]$ are the same, except at the cut locus of all these points, but with different barycentric coordinates. To obtain the affine span, we take the closure of the EBS, which incorporates the cut locus of the reference points: $\operatorname{Aff}(X) = \operatorname{Span}\{x_0, \ldots x_k\} \cap {\cal S}_n$. Thus, for spherical data as for Euclidean data, the affine span only depends on the reference points through the point of the Grassmannian they define. The affine span $\operatorname{Aff}(X)$ of $k+1$ linearly independent reference unit points $X=[x_0, \ldots x_k]$ on the $n$-dimensional sphere ${\cal S}_n$ endowed with the canonical metric is the great subsphere of dimension k that contains the reference points. \[THM7\] When the reference points are affinely dependent on the sphere, the matrix $X$ has one or more (say $l$) vanishing singular values. Any weight $\tilde{\lambda} \in \mbox{Ker}(X)$ is a barycentric coordinate vector for any point $x$ of the pointed sphere since the equation $( {\ensuremath{\:\mathrm{Id}}}- x x{^{\text{\tiny T}}}) X \tilde \lambda =0$ is satisfied. Thus, the EBS is ${\cal S}_n \setminus -X$ and the affine span is the full sphere. If we exclude the abnormal subspace of weights valid for all points, we find that $x$ should be in the span of the non-zero left singular vectors of $X$, i.e. 
in the subsphere of dimension $\operatorname{rank}(X)-1$ generated by the Euclidean span of the reference vectors. This can also be achieved by focusing on the locus of points where $Z(x)$ has two vanishing singular values. This more reasonable result suggests adapting the EBS and affine span definitions for singular point configurations. The case of two points on a 2-sphere is an interesting example that can be explicitly worked out. When the points are not antipodal, the rank of $X=[x_0,x_1]$ is 2, and the generated affine span is the one-dimensional geodesic joining the two points. When the reference points are antipodal, say north and south poles, X becomes rank one and one easily sees that all points of the 2-sphere are on one geodesic joining the poles with opposite log directions to the poles. This solution of the EBS definition corresponds to the renormalized weight $\tilde \lambda = (1/2 : 1/2) \in Ker(X)$ of the kernel of $X$. However, looking at the locus of points with two vanishing singular values of $Z(x)$ leads to restricting to the north and south poles only, which is a more natural and expected result. Hyperbolic EBS and affine span {#Sec:HyperbolicEBS} ------------------------------ The hyperbolic case closely follows the spherical one. From Eq.(\[eq:MomemtHyperboloid\]), we get the expression of the matrix $ Z(x) = ( {\ensuremath{\:\mathrm{Id}}}+ x x{^{\text{\tiny T}}}J) X F_*(X,x)$. Solving for $Z(x)\lambda=0$ can be done as previously by solving $ ( {\ensuremath{\:\mathrm{Id}}}+ x x{^{\text{\tiny T}}}J) X \tilde \lambda =0$ with the renormalized weights $\tilde{\lambda} = F_*(X,x) \lambda$. This equation rewrites as ${\ensuremath{ \left< \:x\:\left|\:X \tilde{\lambda}\right.\right> }}_* x = - X \tilde{\lambda}$, so that the solution has to be of the form $X \tilde \lambda =0$ or $x = \alpha X \tilde{\lambda}$. When the points are affinely independent, the first form is excluded since $Ker(X)=\{0\}$. 
In order to satisfy the constraint $\|x\|^2_*=-1$ in the second form, we need to have $\alpha^2 = - \|X \tilde{\lambda}\|_*^{-2} >0$ and the first coordinate $[X \tilde{\lambda}]_0$ of $X \tilde{\lambda}$ has to be positive. This defines a cone in the space of renormalized weights from which each line parametrizes a point $x = \text{sgn}( [X \tilde{\lambda}]_0)\, X \tilde \lambda / \sqrt{-\|X \tilde{\lambda}\|_*^2}$ of the Hyperbolic EBS. Thus, $\operatorname{Aff}(X)$ is the $k$-dimensional hyperboloid generated by the intersection of the Euclidean span of the reference vectors with the hyperboloid ${\ensuremath{\mathbb{H}}}^n$. Since it is complete, the completion does not add anything to the affine span: $$\operatorname{Aff}(X) = \operatorname{EBS}(X) = \operatorname{Span}\{x_0, \ldots x_k\} \cap {\ensuremath{\mathbb{H}}}^n.$$ As for spheres, we see that the hyperbolic affine span only depends on the reference points through the point of the Grassmannian they define. The affine span $\operatorname{Aff}(X) = \operatorname{EBS}(X)$ of $k+1$ affinely independent reference points $X=[x_0, \ldots x_k]$ on the $n$-dimensional hyperboloid ${\ensuremath{\mathbb{H}}}^n$ endowed with the canonical Minkowski pseudo-metric of the embedding space ${\ensuremath{\mathbb{R}}}^{1,n}$ is the hyperboloid of dimension $k$ generated by the intersection of the hyperboloid with the hyperplane containing the reference points. \[thm:HyperbolicSpan\] When the matrix $X$ has one or more vanishing singular values (affine dependence), all the points of the hyperboloid are solutions corresponding to weights from $Ker(X)$. Excluding these abnormal solutions and looking at the locus of points where $Z(x)$ has two vanishing singular values, we find that $x$ should be in the span of the non-zero left singular vectors of $X$, i.e. in the hyperboloid of dimension $\operatorname{rank}(X)-1$ generated by the Euclidean span of the reference vectors. 
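As a numerical check of Theorem \[thm:HyperbolicSpan\] (an illustrative sketch, not from the paper), a point built from renormalized weights and projected onto the hyperboloid is indeed a zero of the first weighted moment:

```python
import numpy as np

# Sketch: hyperbolic log map on H^3 in R^{1,3}, and a check that the point
# x = X*lt / sqrt(-<X*lt|X*lt>_*) satisfies M1(x, lam) = sum_i lam_i log_x(x_i)
# = 0 with lam = F_*(X, x)^{-1} lt (lt are the renormalized weights).
def mink(x, y):                       # <x|y>_* = x^T J y, J = diag(-1, Id)
    return -x[0] * y[0] + x[1:] @ y[1:]

def hyp_log(x, y):
    theta = np.arccosh(max(-mink(x, y), 1.0))
    f = theta / np.sinh(theta) if theta > 1e-12 else 1.0
    return f * (y - np.cosh(theta) * x)

def lift(xh):                         # Weierstrass coordinates
    return np.concatenate([[np.sqrt(1 + xh @ xh)], xh])

rng = np.random.default_rng(3)
X = np.column_stack([lift(rng.normal(size=3)) for _ in range(3)])  # on H^3
lt = np.array([0.2, 0.5, 0.3])                  # renormalized weights
y = X @ lt
x = y / np.sqrt(-mink(y, y))                    # project onto ||x||_*^2 = -1

theta = np.array([np.arccosh(max(-mink(x, X[:, i]), 1.0)) for i in range(3)])
lam = lt * np.sinh(theta) / theta               # lam = F_*^{-1} lt
m1 = sum(lam[i] * hyp_log(x, X[:, i]) for i in range(3))
assert np.allclose(m1, 0, atol=1e-8)
```

Since the weights are positive, $X\tilde\lambda$ is timelike with positive first coordinate, so the projection is well defined.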
Fréchet / Karcher Barycentric subspaces {#Sec:KBS} ======================================= The reformulation of the affine span as the weighted mean of $k+1$ points also suggests a definition using the Fréchet or the Karcher mean, valid in general metric spaces. Let $({\ensuremath{{\cal M}}}, {\ensuremath{\:\mbox{\rm dist}}})$ be a metric space of dimension $n$ and $(x_0,\ldots x_k) \in {\ensuremath{{\cal M}}}^{k+1}$ be $k+1\leq n+1$ distinct reference points. The (normalized) weighted variance at point $x$ with weight $\lambda \in {\ensuremath{{\cal P}^*_k}}$ is: $\sigma^2(x,\lambda) = \frac{1}{2}\sum_{i=0}^k {{\underaccent{\bar}{\lambda}}}_i {\ensuremath{\:\mbox{\rm dist}}}^2(x, x_i) = \frac{1}{2}\sum_{i=0}^k \lambda_i {\ensuremath{\:\mbox{\rm dist}}}^2(x, x_i) / (\sum_{j=0}^k \lambda_j).$ The Fréchet barycentric subspace of these points is the locus of weighted Fréchet means of these points, i.e. the set of absolute minima of the weighted variance: $$\operatorname{FBS}(x_0, \ldots x_k) = \left\{ \arg\min_{x\in {\ensuremath{{\cal M}}}} \sigma^2(x, \lambda), \: \lambda \in {\ensuremath{{\cal P}^*_k}}\right\}$$ The Karcher barycentric subspaces $\operatorname{KBS}(x_0, \ldots x_k)$ are defined similarly with local minima instead of global ones. In stratified metric spaces, for instance, the barycentric subspace spanned by points belonging to different strata naturally maps over several strata. This is a significant improvement over geodesic subspaces used in PGA which can only be defined within a regular stratum. In the sequel, we only deal with the KBS/FBS of affinely independent points in a Riemannian manifold. Link between the different barycentric subspaces ------------------------------------------------ In order to analyze the relationship between the Fréchet, Karcher and Exponential barycentric subspaces, we follow the seminal work of [@karcher77]. First, the locus of local minima (i.e. Karcher mean) is a superset of the global minima (Fréchet mean). 
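A weighted Karcher mean can be computed in practice by the classical fixed-point iteration $x \leftarrow \exp_x\big(\sum_i {{\underaccent{\bar}{\lambda}}}_i \log_x(x_i)\big)$, which follows the negative gradient of the weighted variance. A sketch on the sphere (illustrative code, not from the paper; convergence is assumed here for points in a small geodesic ball with positive weights):

```python
import numpy as np

# Sketch: weighted Karcher mean on S^3 by the fixed-point iteration
# x <- exp_x( sum_i lam_bar_i log_x(x_i) ).  At a fixed point the gradient of
# the weighted variance vanishes, i.e. x satisfies the EBS equation M1 = 0.
def sphere_exp(x, v):
    nv = np.linalg.norm(v)
    return x.copy() if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * v / nv

def sphere_log(x, y):
    theta = np.arccos(np.clip(x @ y, -1.0, 1.0))
    f = theta / np.sin(theta) if theta > 1e-12 else 1.0
    return f * (y - np.cos(theta) * x)

rng = np.random.default_rng(4)
p = np.array([0.0, 0.0, 0.0, 1.0])
def nearby():                             # random point at distance 0.5 from p
    v = rng.normal(size=4); v -= (v @ p) * p; v *= 0.5 / np.linalg.norm(v)
    return sphere_exp(p, v)

X = np.column_stack([nearby() for _ in range(3)])
lam = np.array([0.2, 0.3, 0.5])           # normalized positive weights

x = X[:, 0].copy()
for _ in range(100):
    x = sphere_exp(x, sum(lam[i] * sphere_log(x, X[:, i]) for i in range(3)))

grad = sum(lam[i] * sphere_log(x, X[:, i]) for i in range(3))
assert np.linalg.norm(grad) < 1e-8        # critical point of the variance
```

This makes concrete the inclusion discussed next: every such Karcher point is in particular an exponential barycenter.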
On the punctured manifold ${\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$, the squared distance $d^2_{x_i}(x) = {\ensuremath{\:\mbox{\rm dist}}}^2(x, x_i)$ is smooth and its gradient is $\nabla d^2_{x_i}(x) = -2 \log_x(x_i)$. Thus, one recognizes that the EBS equation $\sum_i {{\underaccent{\bar}{\lambda}}}_i \log_x(x_i) =0$ (Eq.(\[eq:Bary\])) defines nothing other than the critical points of the weighted variance: $$FBS\cap {\cal M}^* \subset KBS \cap {\cal M}^* \subset Aff \cap {\cal M}^* = EBS.$$ Among the critical points with a non-degenerate Hessian, local minima are characterized by a positive definite Hessian. When the Hessian is degenerate, we cannot conclude about local minimality without going to higher-order differentials. The goal of this section is to subdivide the EBS into a cell complex according to the index of the Hessian operator of the variance: $$\textstyle H(x,\lambda) = \nabla^2 \sigma^2(x,\lambda) = - \sum_{i=0}^k {{\underaccent{\bar}{\lambda}}}_i D_x \log_x(x_i). \label{eq:Hessian}$$ Plugging in the Taylor expansion of the differential of the log map of Eq.(\[eq:Diff\_log\]), we obtain the Taylor expansion: $$\label{eq:TaylorH} \left[ H(x, \lambda) \right]^a_b = \delta^a_b - \frac{1}{3} R^a_{cbd}(x) {\mathfrak M}^{cd}_2(x,{{\underaccent{\bar}{\lambda}}}) - \frac{1}{12} \nabla_c R^a_{dbe}(x) {\mathfrak M}_3^{cde}(x,{{\underaccent{\bar}{\lambda}}}) + O(\varepsilon^4).$$ The key factor in this expression is the contraction of the Riemannian curvature with the weighted covariance tensor of the reference points. This contraction is an extension of the Ricci curvature tensor. 
Exactly as the Ricci curvature tensor encodes how the volume of an isotropic geodesic ball in the manifold deviates from the volume of the standard ball in a Euclidean space (through its metric trace, the scalar curvature), the extended Ricci curvature encodes how the volume of the geodesic ellipsoid ${\ensuremath{\overrightarrow{xy}}}{^{\text{\tiny T}}}{\mathfrak M}_2(x,{{\underaccent{\bar}{\lambda}}}){^{\text{\tiny (-1)}}}{\ensuremath{\overrightarrow{xy}}} \leq \varepsilon $ deviates from the volume of the standard Euclidean ellipsoid. In locally symmetric affine spaces, the covariant derivative of the curvature is identically zero, which simplifies the formula. In the limit of null curvature, (e.g. for a locally Euclidean space like the torus), the Hessian matrix $H(x, \lambda)$ converges to the unit matrix and never vanishes. In general Riemannian manifolds, Eq.(\[eq:TaylorH\]) only gives a qualitative behavior but does not provide guarantees as it is a series involving higher order moments of the reference points. In order to obtain hard bounds on the spectrum of $H(x, \lambda)$, one has to investigate bounds on Jacobi fields using Riemannian comparison theorems, as for the proof of uniqueness of the Karcher and Fréchet means (see [@karcher77; @kendall90; @Le:2004; @Afsari:2010; @Yang:2011]). \[def:NonDegenerate\] An exponential barycenter $x \in \operatorname{EBS}(x_0,\ldots x_k)$ is degenerate (resp. non-degenerate or positive) if the Hessian matrix $H(x,\lambda)$ is singular (resp. invertible or positive definite) for all $\lambda$ in the dual space of barycentric coordinates $\Lambda(x)$. The set of degenerate exponential barycenters is denoted by $EBS^0(x_0,\ldots,x_k)$ (resp. non-degenerate by $EBS^*(x_0,\ldots,x_k)$ and positive by $EBS^+(x_0,\ldots x_k)$). 
The definition of non-degenerate and positive points could be generalized to non-critical points (outside the affine span) by considering for instance the right singular space of the smallest singular value of $Z(x)$. However, this would depend on the metric on the space of weights and a renormalization of the weights (such as for spheres) can change the smallest non-zero singular value. Positive points are obviously non-degenerate. In Euclidean spaces, all the points of an affine span are positive and non-degenerate. In positively curved manifolds, we may have degenerate points and non-positive points, as we will see with the sphere example. For negatively curved spaces, the intuition that points of the EBS should all be positive like in Euclidean spaces is also wrong, as we will see with the example of hyperbolic spaces. $ $\ \[THM2\] $EBS^+(x_0, \ldots x_k)$ is the set of non-degenerate points of the Karcher barycentric subspace $\operatorname{KBS}(x_0, \ldots x_k)$ on ${\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$. In other words, the KBS is the positive EBS plus potentially some degenerate points of the affine span and some points of the cut locus of the reference points. Spherical KBS {#Sec:SphericalKBS} ------------- In order to find the positive points of the EBS on the sphere, we compute the Hessian of the normalized variance. Using Eq.(\[eq:HessDistSphere2\]) and $u_i= { \log_x(x_i) }/{\theta_i}$, we obtain the Hessian of $\sigma^2(x,\lambda) = \frac{1}{2}\sum_{i=0}^k {{\underaccent{\bar}{\lambda}}}_i {\ensuremath{\:\mbox{\rm dist}}}^2(x, x_i)$: $$\textstyle H(x, \lambda) = \big(\sum_i {{\underaccent{\bar}{\lambda}}}_i \theta_i \cot\theta_i \big)({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}}) + \sum_i {{\underaccent{\bar}{\lambda}}}_i ( 1- \theta_i \cot\theta_i)u_i u_i{^{\text{\tiny T}}}.$$ As expected, $x$ is an eigenvector with eigenvalue $0$ due to the projection on the tangent space at $x$. 
Any vector $w$ of the tangent space at $x$ (thus orthogonal to $x$) which is orthogonal to the affine span (and thus to the vectors $u_i$) is an eigenvector with eigenvalue $ \sum_i {{\underaccent{\bar}{\lambda}}}_i \theta_i \cot \theta_i $. Since the Euclidean affine span $\operatorname{Aff}_{{\ensuremath{\mathbb{R}}}^{n+1}}(X)$ has $rank(X) \leq k+1$ dimensions, this eigenvalue has multiplicity $n+1-rank(X) \geq n - k$ when $x\in \operatorname{Aff}(X)$. The last $rank(X)-1$ eigenvalues have associated eigenvectors within $\operatorname{Aff}_{{\ensuremath{\mathbb{R}}}^{n+1}}(X)$. [@buss_spherical_2001] have shown that this Hessian matrix is positive definite for [*positive weights*]{} when the points lie within one hemisphere, with at least one point of non-zero weight not on the equator. In contrast, we are interested here in the positivity and definiteness of the Hessian $H(x,\lambda)$ for the positive and negative weights which live in the dual space of barycentric coordinates $\Lambda(x)$. This is actually a non-trivial algebraic geometry problem. Simulation tests with random reference points $X$ show that the eigenvalues of $H(x, {{\underaccent{\bar}{\lambda}}}(x))$ can be positive or negative at different points of the EBS. The number of positive eigenvalues (the index) of the Hessian is illustrated on Fig. (\[fig:signature\]) for a few configurations of 3 affinely independent reference points on the 2-sphere. This illustrates the subdivision of the EBS on spheres into a cell complex based on the index of the critical point: the positive points of the KBS do not in general cover the full subsphere containing the reference points. The KBS may even be disconnected, contrary to the affine span, which always covers the whole subsphere. For subspace definition purposes, this suggests that the affine span might be the most interesting definition. For affinely dependent points, the KBS/FBS behave similarly to the EBS.
For instance, the weighted variance of $X=[e_1,-e_1]$ on a 2-sphere is a function of the latitude only. The points of a parallel at any given latitude are global minima of the weighted variance for a choice of $\lambda =(\alpha : 1-\alpha), \: \alpha \in [0,1]$. Thus, all points of the sphere belong to the KBS, which is also the FBS and the affine span. However, the Hessian matrix has one positive eigenvalue along meridians and one zero eigenvalue along the parallels. This is a very non-generic case. ![Signature of the weighted Hessian matrix for different configurations of 3 reference points (in black, antipodal point in red) on the 2-sphere: the locus of local minima (KBS) in brown does not cover the whole sphere and can even be disconnected (first example).[]{data-label="fig:signature"}](Figures/SphereHessian_colorbar "fig:"){height="3.1cm"} ![](Figures/SphereHessian_5_sc "fig:"){height="3.1cm"} ![](Figures/SphereHessian_6_sc "fig:"){height="3.1cm"} ![](Figures/SphereHessian_9_sc "fig:"){height="3.1cm"} ![](Figures/SphereHessian_2_sc "fig:"){height="3.26cm"} Hyperbolic KBS / FBS {#Sec:HyperbolicKBS} -------------------- Let $x = X \tilde \lambda$ be a point of the hyperbolic affine span of $X=[x_0,\ldots x_k]$. The renormalized weights $\tilde \lambda$ are related to the original weights through $\lambda = F_*(X,x)^{-1} \tilde \lambda$ and satisfy $\|X \tilde{\lambda}\|_*^2 = -1$ and $\text{sgn}( [X \tilde{\lambda}]_0) >0$. The point $x$ is a critical point of the (normalized) weighted variance. In order to know if it is a local minimum (i.e. a point of the KBS), we compute the Hessian of this weighted variance. Denoting $ u_i = \log_x(x_i) / \theta_i$ with $\cosh \theta_i = -{\ensuremath{ \left< \:x\:\left|\:x_i\right.\right> }}_*$, and using the Hessian of the square distance derived in Eq., we obtain the following formula: $$\textstyle H(x,\lambda) = \sum_i {{\underaccent{\bar}{\lambda}}}_i \theta_i \coth \theta_i (J + J x x{^{\text{\tiny T}}}J) + \sum_i {{\underaccent{\bar}{\lambda}}}_i {(1 - \theta_i \coth \theta_i)} J u_i u_i{^{\text{\tiny T}}}J.$$ As expected, $x$ is an eigenvector with eigenvalue 0 due to the projection on the tangent space at $x$. Any vector $w$ of the tangent space at $x$ which is orthogonal to the affine span (and thus to the vectors $u_i$) is an eigenvector with eigenvalue $\sum_i {{\underaccent{\bar}{\lambda}}}_i \theta_i \coth \theta_i = 1/({\ensuremath{\mathds{1}}}{^{\text{\tiny T}}}\tilde \lambda)$ with multiplicity $n+1-rank(X)$. The last $rank(X)-1$ eigenvalues have associated eigenvectors within $\operatorname{Aff}_{{\ensuremath{\mathbb{R}}}^{n+1}}(X)$. Simulation tests with random reference points $X$ show that these eigenvalues can be positive or negative at different points of $Aff(X)$. The index of the Hessian is illustrated on Fig.
(\[fig:signatureHyp\]) for a few configurations of 3 affinely independent reference points on the 2-dimensional hyperbolic space. Contrary to the sphere, we observe only one or two positive eigenvalues, corresponding respectively to saddle points and local minima. This subdivision of the hyperbolic affine span into a cell complex shows that the hyperbolic KBS is in general a strict subset of the hyperbolic affine span. We conjecture that there is an exception for reference points at infinity, for which the barycentric subspaces could be generalized using Busemann functions [@busemann_geometry_1955]: it is likely that the FBS, KBS and the affine span are all equal in this case and cover the whole lower-dimensional hyperboloid. ![Signature of the weighted Hessian matrix for different configurations of 3 reference points on the 2-hyperboloid: the locus of local minima (KBS) in brown does not cover the whole hyperboloid and can be disconnected (last two examples).[]{data-label="fig:signatureHyp"}](Figures/HyperboloidHessian_Colorbar "fig:"){height="3cm"} ![](Figures/HyperboloidHessian_2_sc "fig:"){height="3cm"} ![](Figures/HyperboloidHessian_4_sc "fig:"){height="3cm"} ![](Figures/HyperboloidHessian_7_sc "fig:"){height="3cm"} ![](Figures/HyperboloidHessian_5_sc "fig:"){height="3cm"} ![](Figures/HyperboloidHessian_8_sc "fig:"){height="3cm"} Properties of the barycentric subspaces {#Sec:Prop} ======================================= The EBS exists at each reference point $x_i$, with weight 1 for this point and zero for the others. Moreover, when the points are affinely independent, the matrix $Z(x_i)$ has exactly one vanishing singular value, since column $i$ is $\log_{x_i}(x_i)=0$ and all the other column vectors are affinely independent. Finally, the weighted Hessian matrix boils down to $H(x_i,\lambda) = - \left. D_x \log_{x}(x_i)\right|_{x=x_i} = {\ensuremath{\:\mathrm{Id}}}$ (see e.g. Eq.(\[eq:Diff\_log\])). Thus the reference points are actually local minima of the weighted variance, and the KBS exists by continuity in their neighborhood. Barycentric simplex in a regular geodesic ball ---------------------------------------------- We call the subset of the FBS that has non-negative weights a barycentric simplex. It contains all the reference points, the geodesic segments between the reference points, and of course the Fréchet mean of the reference points. This is the generalization of a geodesic segment for 2 points, a triangle for 3 points, etc. The $(k-l)$-faces of a $k$-simplex are the simplices defined by the barycentric subspace of $k-l+1$ points among the $k+1$.
They are obtained by setting the $l$ remaining barycentric coordinates to zero. In parallel to this paper, [@weyenberg_statistics_2015] has investigated barycentric simplexes as extensions of principal subspaces in the negatively curved metric spaces of trees under the name Locus of Fréchet mean (LFM), with very interesting results. \[THM3\] Let $\kappa$ be an upper bound of the sectional curvatures of ${\ensuremath{{\cal M}}}$ and $\text{inj}({\ensuremath{{\cal M}}})$ be the injectivity radius (which can be infinite) of the Riemannian manifold. Let $X= \{ x_0,\ldots x_k\} \in {\ensuremath{{\cal M}}}^{(k+1)}$ be a set of $k+1\leq n$ affinely independent points included in a regular geodesic ball $B(x,\rho)$ with $\rho < \frac{1}{2}\min\{ \text{inj}({\ensuremath{{\cal M}}}), \frac{1}{2}\pi/\sqrt{\kappa} \} $ ($\pi/\sqrt{\kappa}$ being infinite if $\kappa < 0$). The barycentric simplex is the graph of a $k$-dimensional differentiable function from the non-negative quadrant of homogeneous coordinates $({\ensuremath{{\cal P}^*_k}})^+$ to $B(x,\rho)$ and is thus at most $k$-dimensional. The $(k-l)$-faces of the simplex are the simplices defined by the barycentric subspace of $k-l+1$ points among the $k+1$; they include the reference points themselves as vertices and the geodesics joining them as edges. The proof closely follows the one of [@karcher77] for the uniqueness of the Riemannian barycenter. The main argument is that $\mu_{(X, \lambda)}(x) = \sum {{\underaccent{\bar}{\lambda}}}_i \delta_{x_i}(x)$ is a probability distribution whose support is included in the strongly convex geodesic ball $B(x,\rho)$. The variance $\sigma^2(x, \lambda) = \frac{1}{2}\sum_i {{\underaccent{\bar}{\lambda}}}_i d^2(x, x_i)$ is strictly convex on that ball and has a unique minimum $x_{\lambda} \in B(x,\rho)$, necessarily the weighted Fréchet mean. This proof of the uniqueness of the weighted Fréchet mean with non-negative weights was actually already present in [@buser_gromovs_1981].
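The weighted Fréchet mean of this uniqueness argument is computed in practice by the classical Karcher fixed-point iteration $x \mapsto \exp_x\big(\sum_i \bar\lambda_i \log_x(x_i)\big)$. A minimal sketch on the unit sphere (our own illustration, assuming embedding coordinates; the helper names are ours), which converges to the unique minimum when the reference points lie in a regular geodesic ball as in the theorem above:

```python
import numpy as np

def sphere_log(x, y):
    """Riemannian logarithm log_x(y) on the unit sphere."""
    c = np.clip(np.dot(x, y), -1.0, 1.0)
    th = np.arccos(c)
    return np.zeros_like(x) if th < 1e-12 else th * (y - c * x) / np.sin(th)

def sphere_exp(x, v):
    """Riemannian exponential exp_x(v) on the unit sphere."""
    nv = np.linalg.norm(v)
    return x if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * v / nv

def weighted_frechet_mean(X, lam, iters=100):
    """Karcher fixed-point iteration x <- exp_x(sum_i lambda_i log_x(x_i))."""
    lam = np.asarray(lam, float)
    lam = lam / lam.sum()
    x = X[0]
    for _ in range(iters):
        g = sum(li * sphere_log(x, xi) for li, xi in zip(lam, X))
        x = sphere_exp(x, g)
    return x
```

At the fixed point, the gradient $-\sum_i \bar\lambda_i \log_x(x_i)$ vanishes, which is exactly the weighted exponential barycenter equation.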
We supplement the proof here by noting that since the Hessian $H(x_{\lambda}, \lambda) = \sum_i {{\underaccent{\bar}{\lambda}}}_i H_i(x_{\lambda})$ is a convex combination of positive matrices, it is positive definite for all $\lambda \in ({\ensuremath{{\cal P}^*_k}})^+$ in the positive quadrant. Thus the function $x_{\lambda}$ is differentiable thanks to the implicit function theorem: $ D_{\lambda} x_{\lambda} = H( x_{\lambda}, \lambda){^{\text{\tiny (-1)}}}Z(x_{\lambda}).$ The rank of this derivative is at most $k$ since $Z(x_{\lambda})\lambda=0$, which proves that the graph of the function $x_{\lambda}$ describes an at most $k$-dimensional subset of ${\ensuremath{{\cal M}}}$. Barycentric simplexes and convex hulls -------------------------------------- In a vector space, a point lies in the convex hull of a simplex if and only if its barycentric coordinates are all non-negative (and thus between 0 and 1 with the unit sum constraint). Consequently, barycentric coordinates are often thought to be related to convex hulls. However, in a general Riemannian manifold, the situation is quite different. When there are closed geodesics, the convex hull may have several disconnected components, unless one restricts to convex subsets of the manifold, as shown by [@Groisser:2003]. In metric spaces with negative curvature (CAT spaces), [@weyenberg_statistics_2015] displays explicit examples of convex hulls of 3 points which are 3-dimensional rather than 2-dimensional as expected. In fact, the relationship between barycentric simplexes and convex hulls cannot hold in general Riemannian manifolds: it would require the barycentric simplex to be totally geodesic at each point, which happens for constant curvature spaces but not for general Riemannian manifolds. Local dimension of the barycentric subspaces -------------------------------------------- Let $x$ be a point of the EBS with affinely independent reference points.
The EBS equation $Z(x)\lambda = 0$ for $\lambda \in \Lambda(x)$ is smooth in $x$ and $\lambda$, so that we can take a Taylor expansion: at first order, a variation of the barycentric coordinates $\delta \lambda$ induces a variation of position $\delta x$, linked through $H(x,\lambda) \delta x - Z(x) \delta \lambda =0.$ Thus, at regular points: $$\delta x = H(x,\lambda){^{\text{\tiny (-1)}}}Z(x) \delta \lambda.$$ Let $Z(x)=U(x)S(x)V(x){^{\text{\tiny T}}}$ be a singular value decomposition with singular values sorted in decreasing order. Since $x$ belongs to the EBS, there is at least one (say $m \geq 1$) singular value that vanishes and the dual space of barycentric coordinates is $\Lambda(x) = \operatorname{Span}(v_{k-m}, \ldots v_k)$. For a variation of weights $\delta \lambda$ in this subspace, there is no change of coordinates, while any variation of weights in $\operatorname{Span}(v_0, \ldots v_{k-m-1})$ induces a non-zero position variation. Thus, the tangent space of the EBS restricts to the $(k-m)$-dimensional linear space generated by $\{ \delta x_i' = H(x,\lambda){^{\text{\tiny (-1)}}}u_i\}_{0\leq i\leq k-m}$. Here, we see that the Hessian matrix $H(x, \lambda)$ encodes the distortion of the orthonormal frame fields $ u_1(x), \ldots u_k(x)$ to match the tangent space. Since the lower dimensional subspaces are included in the larger ones, we have a stratification of our $k$-dimensional submanifold into $k-1$, $k-2, \ldots 0$-dimensional subsets. \[THM4\] The non-degenerate exponential barycentric subspace $EBS^*(x_0,\ldots,x_k)$ of $k+1$ affinely independent points is a stratified space of dimension $k$ on ${\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$. On the $m$-dimensional strata, $Z(x)$ has exactly $k-m+1$ vanishing singular values.
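In practice, membership in the EBS and a candidate element of the dual space $\Lambda(x)$ can both be read off this SVD of $Z(x)$. A small sketch on the unit sphere (our own illustration; `sphere_log` is the standard spherical logarithm and the other names are ours):

```python
import numpy as np

def sphere_log(x, y):
    """Riemannian logarithm log_x(y) on the unit sphere."""
    c = np.clip(np.dot(x, y), -1.0, 1.0)
    th = np.arccos(c)
    return np.zeros_like(x) if th < 1e-12 else th * (y - c * x) / np.sin(th)

def ebs_test(x, X):
    """Assemble Z(x) = [log_x(x_0) ... log_x(x_k)] and return its singular
    values (decreasing) together with the right-singular vector of the
    smallest one, a candidate element of the dual space Lambda(x)."""
    Z = np.column_stack([sphere_log(x, xi) for xi in X])
    U, s, Vt = np.linalg.svd(Z)
    return s, Vt[-1]
```

A point on the geodesic between two reference points yields a vanishing smallest singular value with weights of identical sign, while a point off the great circle does not.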
At degenerate points, $H(x, \lambda)$ is not invertible and vectors living in its kernel are also authorized, which potentially raises the dimensionality of the tangent space, even if they do not change the barycentric coordinates. These pathologies do not appear in practice for the constant curvature spaces, as we have seen with spherical and hyperbolic spaces, and we conjecture that they do not appear for symmetric spaces either. Stability of the affine span with respect to the metric power ------------------------------------------------------------- The Fréchet (resp. Karcher) mean can be further generalized by taking a power $p$ of the metric to define the $p$-variance $\sigma^{p}(x) = \frac{1}{p} \sum_{i=0}^k {\ensuremath{\:\mbox{\rm dist}}}^{p}(x, x_i)$. The global (resp. local) minima of this $p$-variance define the median for $p =1$. This suggests to further generalize barycentric subspaces by taking the locus of the minima of the weighted $p$-variance $\sigma^{p}(x,\lambda) = \frac{1}{p} \sum_{i=0}^k {{\underaccent{\bar}{\lambda}}}_i {\ensuremath{\:\mbox{\rm dist}}}^{p}(x, x_i)$. In fact, it turns out that all these “$p$-subspaces” are necessarily included in the affine span, which shows that this notion is really central. To see this, we compute the gradient of the $p$-variance at a non-reference point of ${\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$: $$\textstyle \nabla_x \sigma^{p}(x,\lambda) = - \sum_{i=0}^k {{\underaccent{\bar}{\lambda}}}_i {\ensuremath{\:\mbox{\rm dist}}}^{p -2}(x, x_i) \log_{x}(x_i).$$ Critical points of the $p$-variance satisfy the equation $\sum_{i=0}^k \lambda'_i \log_{x}(x_i) =0$ for the new weights $\lambda'_i = \lambda_i {\ensuremath{\:\mbox{\rm dist}}}^{p -2}(x, x_i) $. Thus, they are still elements of the EBS, and changing the power of the metric just amounts to a reparametrization of the barycentric weights.
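The gradient formula, and hence the reparametrization argument, can be checked by finite differences. A minimal sketch on the unit sphere (our own illustration; we evaluate away from the reference points, where the factor $\mbox{dist}^{p-2}$ is smooth):

```python
import numpy as np

def sphere_exp(x, v):
    nv = np.linalg.norm(v)
    return x if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * v / nv

def sphere_log(x, y):
    c = np.clip(np.dot(x, y), -1.0, 1.0)
    th = np.arccos(c)
    return np.zeros_like(x) if th < 1e-12 else th * (y - c * x) / np.sin(th)

def p_variance(x, X, lam, p):
    """sigma^p(x, lambda) = (1/p) sum_i lambda_i dist^p(x, x_i)."""
    return sum(li * np.arccos(np.clip(np.dot(x, xi), -1.0, 1.0))**p
               for li, xi in zip(lam, X)) / p

def p_variance_grad(x, X, lam, p):
    """Riemannian gradient -sum_i lambda_i dist^(p-2)(x, x_i) log_x(x_i): a
    critical point satisfies the EBS equation with the reweighted
    lambda'_i = lambda_i dist^(p-2)(x, x_i)."""
    g = np.zeros_like(x)
    for li, xi in zip(lam, X):
        L = sphere_log(x, xi)
        d = np.linalg.norm(L)
        if d > 0:
            g -= li * d**(p - 2) * L
    return g
```

The directional derivative of `p_variance` along any tangent vector agrees with the inner product of this gradient with that vector, confirming that critical points only rescale the weights.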
Restricted geodesic submanifolds are limits of affine spans ---------------------------------------------------------- We investigate in this section what happens when all the points $\{x_i = \exp_{x_0}(\varepsilon w_i)\}_{1\leq i\leq k}$ converge to $x_0$ at first order along $k$ independent vectors $\{ w_i\}_{1\leq i\leq k}$. Here, we fix $w_0 =0$ to simplify the derivations, but the proof can easily be extended with a suitable change of coordinates provided that $\sum_{i=0}^k w_i =0$. In Euclidean spaces, a point of the affine span $y = \sum_{i=0}^k {{\underaccent{\bar}{\lambda}}}_i x_i$ may be written as the point $y = x_0 + \varepsilon \sum_{i=1}^k {{\underaccent{\bar}{\lambda}}}_i w_i$ of the “geodesic subspace” generated by the family of vectors $\{ w_i\}_{1\leq i\leq k}$. By analogy, we expect the exponential barycentric subspace $\operatorname{EBS}(x_0, \exp_{x_0}(\varepsilon w_1) \ldots \exp_{x_0}(\varepsilon w_k))$ to converge towards the totally geodesic subspace at $x$ generated by the $k$ independent vectors $w_1, \ldots w_k$ of $T_x{\ensuremath{{\cal M}}}$: $$\textstyle GS(x, w_1, \ldots w_k) = \left\{ \textstyle \exp_{x}\left( \sum_{i=1}^k \alpha_i w_i \right) \in {\ensuremath{{\cal M}}}\text{ for } \alpha \in {\ensuremath{\mathbb{R}}}^k \right\}.$$ In fact, the above definition of the geodesic subspaces (the one implicitly used in most of the works on PGA) is too large and may not define a $k$-dimensional submanifold when there is a cut locus. For instance, it is well known that a geodesic of a flat torus is either periodic or everywhere dense in a flat torus submanifold, depending on whether the components of its initial velocity have rational or irrational ratios. This means that the geodesic space generated by a single vector whose ratios of coordinates are all irrational (e.g. $w=(\pi, \pi^2,\ldots \pi^k)$) fills the full $k$-dimensional flat torus.
Thus all the 1-dimensional geodesic subspaces that have irrational ratios of all coordinates minimize the distance to any set of data points in a flat torus of any dimension. In order to have a more meaningful definition and to guarantee the dimensionality of the geodesic subspace, we need to restrict the definition to the points of the geodesics that are distance minimizing. \[def:RGS\] Let $x \in {\ensuremath{{\cal M}}}$ be a point of a Riemannian manifold and let $W_x = \{ \sum_{i=1}^k \alpha_i w_i, \alpha \in {\ensuremath{\mathbb{R}}}^k\}$ be the $k$-dimensional linear subspace of $T_x{\ensuremath{{\cal M}}}$ generated by a $k$-tuple $\{ w_i\}_{1\leq i\leq k} \in (T_x{\ensuremath{{\cal M}}})^k$ of independent tangent vectors at $x$. We consider the geodesics starting at $x$ with tangent vectors in $W_x$, but only up to the first cut point of $x$. This generates a submanifold of ${\ensuremath{{\cal M}}}$ called the restricted geodesic submanifold $GS^*(W_x)$: $$\textstyle GS^*(W_x) = GS^*(x, w_1, \ldots w_k) = \{ \exp_{x}\left( w \right), w\in W_x \cap D(x) \},$$ where $D(x) \subset T_x{\ensuremath{{\cal M}}}$ is the injectivity domain. It may not be immediately clear that the subspace defined this way is a submanifold of ${\ensuremath{{\cal M}}}$: since $\exp_x$ is a diffeomorphism from $D(x) \subset T_x{\ensuremath{{\cal M}}}$ to ${\ensuremath{{\cal M}}}\setminus {\ensuremath{{\cal C}}}(x)$ whose differential has full rank, its restriction to the open star-shaped subset $ W_x \cap D(x)$ of dimension $k$ is a diffeomorphism from that subset to the restricted geodesic subspace $GS^*(W_x)$, which is thus an open submanifold of dimension $k$ of ${\ensuremath{{\cal M}}}$. This submanifold is generally not geodesically complete.
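On the sphere the injectivity domain is the open ball of radius $\pi$ in $T_x{\cal M}$ (the cut locus of $x$ is its antipode), so the restricted geodesic submanifold is easy to sample. A small sketch (our own illustration; `W` is assumed to hold an orthonormal basis of $W_x$ as rows):

```python
import numpy as np

def sphere_exp(x, v):
    nv = np.linalg.norm(v)
    return x if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * v / nv

def sample_restricted_gs(x, W, n_samples=100, seed=0):
    """Sample exp_x(w) for w in W_x cap D(x), i.e. w in span(W) with |w| < pi."""
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(n_samples):
        d = rng.normal(size=W.shape[0])
        d /= np.linalg.norm(d)
        r = rng.uniform(0.0, np.pi)      # stay inside the injectivity domain
        pts.append(sphere_exp(x, r * (d @ W)))
    return np.array(pts)
```

All sampled points stay on the great subsphere spanned by $x$ and $W_x$, illustrating that on the sphere $GS^*(W_x)$ is a great subsphere minus the cut locus, as noted in the example below.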
\[THM6\] The restricted geodesic submanifold $GS^*(W_{x_0}) = \{ \exp_{x_0}\left( w \right), w\in W_{x_0} \cap D(x_0) \}$ is the limit of $EBS(x_0, x_1(\varepsilon), \ldots x_k(\varepsilon))$ when the points $x_i(\varepsilon) = \exp_{x_0}(\varepsilon w_i)$ converge to $x_0$ at first order in $\varepsilon$ along the tangent vectors $w_i$ defining the $k$-dimensional subspace $W_{x_0} \subset T_{x_0}{\ensuremath{{\cal M}}}$. These limit points are parametrized by barycentric coordinates at infinity in the codimension 1 subspace $\mathds{1}^{\perp}$, the projective completion of ${\ensuremath{{\cal P}^*_k}}$ in ${\ensuremath{\mathbb{R}}}P^k$, see Definition \[def:Pk\]. The proof is deferred to Appendix A because of its technicality. We conjecture that the construction can be generalized using techniques from sub-Riemannian geometry to higher order derivatives when the first order derivatives do not span a $k$-dimensional subspace. This would mean that we could also see some non-geodesic decomposition schemes as limit cases of barycentric subspaces, such as splines on manifolds [@crouch_dynamic_1995; @machado_higher-order_2010; @Gay-Balmaz:2012:10.1007/s00220-011-1313-y]. #### Example on spheres and hyperbolic spaces In spheres (resp. hyperbolic spaces), the restricted geodesic subspace $GS^*(W_{x})$ describes a great subsphere (resp. a great hyperbola), with the exception of the cut locus of the base point $x$ in spheres. Thus, points of $GS^*(W_{x})$ are also points of the affine span generated by $k+1$ affinely independent reference points of this subspace.
When all the reference points $x_i = \exp_{x}(\varepsilon w_i)$ coalesce to a single point $x$ along the tangent vectors $W = [w_0,\ldots w_k]$ (with $W {\ensuremath{\mathds{1}}}=0$), we find that the solutions of the EBS equation are of the form $y = x + W ( \varepsilon \tilde \lambda / {\ensuremath{\mathds{1}}}{^{\text{\tiny T}}}\tilde \lambda) + O(\varepsilon^2)$, which describes the affine hyperplane generated by $x$ and $W$ in the embedding Euclidean (resp. Minkowski) space. The weights $\mu = \varepsilon \tilde \lambda / {\ensuremath{\mathds{1}}}{^{\text{\tiny T}}}\tilde \lambda$ converge to points at infinity (${\ensuremath{\mathds{1}}}{^{\text{\tiny T}}}\mu =0$) of the affine $k$-plane of normalized weights. When the reference points coalesce with an additional second order acceleration orthogonal to the subspace $W_x$, we conjecture that the affine span is no longer a great subsphere but a smaller one. This would include the principal nested spheres (PNS) developed by [@jung_generalized_2010; @jung_analysis_2012] as a limit case of barycentric subspaces. It would be interesting to derive a similar procedure for hyperbolic spaces and to determine which types of subspaces could be obtained by such limits for more general non-local and higher order jets. Barycentric subspace analysis {#Sec:BSA} ============================= PCA can be viewed as the search for a sequence of nested linear spaces that best approximate the data at each level. In a Euclidean space, minimizing the variance of the residuals boils down to an independent optimization of orthogonal subspaces at each level of approximation, thanks to the Pythagorean theorem. This enables building each subspace of the sequence by adding (resp. subtracting) the optimal one-dimensional subspace iteratively in a forward (resp. backward) analysis. Of course, this property does not scale up to manifolds, for which the orthogonality of subspaces is not even well defined.
Flags of barycentric subspaces in manifolds ------------------------------------------- [@damon_backwards_2013] have argued that the nestedness of approximation spaces is one of the most important characteristics for generalizing PCA to more general spaces. Barycentric subspaces can easily be nested, for instance by adding or removing one or several points at a time, to obtain a family of embedded submanifolds which generalizes flags of vector spaces. A flag of a vector space $V$ is a filtration of subspaces (an increasing sequence of subspaces, where each subspace is a proper subspace of the next): $\{0\} = V_0 \subset V_1 \subset V_2 \subset \cdots \subset V_k = V$. Denoting $d_i = \dim(V_i)$ the dimension of the subspaces, we have $0 = d_0 < d_1 < d_2 < \cdots < d_k = n$, where $n$ is the dimension of $V$. Hence, we must have $k \leq n$. A flag is [*complete*]{} if $d_i = i$; otherwise it is a [*partial flag*]{}. Notice that a linear subspace $W$ of $V$ is identified with the partial flag $ \{0\} \subset W \subset V$. A flag can be generated by adding the successive eigenspaces of an SPD matrix with increasing eigenvalues. If all the eigenvalues have multiplicity one, the generated flag is complete and one can parametrize it by the ordered set of eigenvectors. If an eigenvalue has a larger multiplicity, then the corresponding eigenvectors might be considered as exchangeable in this parametrization, in the sense that we should only consider the subspace generated by all the eigenvectors of that eigenvalue. In an $n$-dimensional manifold ${\ensuremath{{\cal M}}}$, a strict ordering of $n+1$ independent points $x_0\prec x_1 \ldots \prec x_n$ defines a filtration of barycentric subspaces.
For instance: $\operatorname{EBS}(x_0) = \{ x_0 \} \subset \cdots \subset \operatorname{EBS}(x_0, x_1, \ldots x_k) \subset \cdots \subset \operatorname{EBS}(x_0, \ldots x_n).$ The 0-dimensional subspace is now a point in ${\ensuremath{{\cal M}}}$ instead of the null vector in flags of vector spaces, because we are in an affine setting. Grouping points together in the addition/removal process generates a partial flag of barycentric subspaces. Among the barycentric subspaces, the affine span seems to be the most interesting definition. Indeed, when the manifold ${\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$ is connected, the EBS of $n+1$ affinely independent points covers the full manifold ${\ensuremath{{\cal M}^*{(x_0, \ldots x_k)}}}$, and its completion covers the original manifold: ${\operatorname{Aff}}(x_0,\ldots x_n) = {\ensuremath{{\cal M}}}$. With the Fréchet or Karcher barycentric subspaces, we only generate a submanifold (the positive span) that does not cover the whole manifold in general, even in negatively curved spaces. Let $x_0\preceq x_1 \ldots \preceq x_k$ be $k+1 \leq n+1$ affinely independent ordered points of ${\ensuremath{{\cal M}}}$, where two successive points are either strictly ordered ($x_i \prec x_{i+1}$) or exchangeable ($x_i \sim x_{i+1}$). For a strictly ordered set of points, we call the sequence of properly nested subspaces $FL_i(x_0\prec x_1 \ldots \prec x_k) = {\operatorname{Aff}}(x_0, \ldots x_i)$ for $0 \leq i \leq k$ the flag of affine spans $FL(x_0\prec x_1 \ldots \prec x_k)$. For a flag comprising exchangeable points, the different subspaces of the sequence are only generated at strict ordering signs or at the end. A flag is said to be complete if it is strictly ordered with $k=n$. We call a flag of exchangeable points $FL(x_0\sim x_1 \ldots \sim x_k)$ a pure subspace, because the sequence is reduced to the unique subspace $FL_k(x_0\sim x_1 \ldots \sim x_k) = {\operatorname{Aff}}(x_0, \ldots x_k)$.
Forward and backward barycentric subspace analysis --------------------------------------------------- In Euclidean PCA, the flag of linear subspaces can be built in a forward way, by computing the best 0-th order approximation (the mean), then the best first order approximation (the first mode), etc. It can also be built backward, by removing the direction with the minimal residual from the current affine subspace. In a manifold, we can use similar forward and backward analyses, but they have no reason to give the same result. With a forward analysis, we compute iteratively the flag of affine spans by adding one point at a time, keeping the previous ones fixed. The barycentric subspace $\operatorname{Aff}(x_0) = \{ x_0 \}$ minimizing the unexplained variance is a Karcher mean. Adding a second point amounts to computing the geodesic passing through the mean that best approximates the data. Adding a third point now differs from PGA, unless the three points coalesce to a single one. With this procedure, the Fréchet mean always belongs to the barycentric subspace. The backward analysis consists in iteratively removing one dimension. One should theoretically start with a full set of points and choose which one to remove. However, as all the sets of $n+1$ affinely independent points generate the full manifold with the affine span, the optimization really begins with the set of $n$ points $x_0, \ldots x_{n-1}$. One should afterwards only test which of the $n$ points to remove. Since optimization is particularly inefficient in high dimensional spaces, we may run a forward analysis until we reach the noise level of the data, for a dimension $k \ll n$. In practice, the noise level is often unknown and a threshold at 5% of the data variance is sometimes chosen. More elaborate methods exist to determine the intrinsic dimension of the data in manifold learning techniques [@wang_scale-based_2008].
Point positions may be optimized at each step to find the optimal subspace, and a backward sweep reorders the points at the end. With this process, there is no reason for the Fréchet mean to belong to any of the barycentric subspaces. For instance, if we have clusters, one expects the reference points to localize within these clusters rather than at the Fréchet mean. Approximating data using a pure subspace ---------------------------------------- Let ${Y} = \{ \hat y_i \}_{i=1}^N \in {\ensuremath{{\cal M}}}^N$ be $N$ data points and $X=\{x_0,\ldots x_k\}$ be $k+1$ affinely independent reference points. We assume that each data point $\hat y_i$ has almost surely one unique closest point $y_i(X)$ on the barycentric subspace. This is the situation for Euclidean, hyperbolic and spherical spaces, and this should hold more generally for all the points outside the focal set of the barycentric subspace. This allows us to write the residual $r_i(X) = {\ensuremath{\:\mbox{\rm dist}}}( \hat y_i,y_i(X))$ and to consider the minimization of the unexplained variance $\sigma^2_{out}(X) = \sum_i r_i^2(X)$. This optimization problem on ${\ensuremath{{\cal M}}}^{k+1}$ can be addressed by standard techniques of optimization on manifolds (see e.g. [@OptimizationManifold:2008]). However, it is not obvious that the canonical product Riemannian metric is the right metric to use, especially close to coincident points. In this case, one would like to consider switching to the space of (non-local) jets to guarantee the numerical stability of the solution. In practice, though, we may constrain the distance between reference points to be larger than a threshold. A second potential problem is the lack of identifiability: the minimum of the unexplained variance may be reached by subspaces parametrized by several $(k+1)$-tuples of points. This is the case for constant curvature spaces, since every linearly independent $(k+1)$-tuple of points in a given subspace parametrizes the same barycentric subspace.
In constant curvature spaces, this can be accounted for using a suitable polar or QR matrix factorization (see e.g. \[suppB\]). In general manifolds, we expect that the absence of symmetries will break the multiplicity of this relationship (at least locally) thanks to the curvature. However, for small curvatures this can lead to very badly conditioned systems from a numerical point of view. A last problem is that the criterion we use here (the unexplained variance) is only valid for a pure subspace of fixed dimension, and considering a different dimension will in general lead to pure subspaces which cannot be described by a common subset of reference points. Thus, the forward and backward optimization of nested barycentric subspaces cannot lead to the simultaneous optimality of all the subspaces of a flag in general manifolds. A criterion for hierarchies of subspaces: AUV on flags of affine spans ---------------------------------------------------------------------- In order to obtain consistency across dimensions, it is necessary to define a criterion which depends on the whole flag of subspaces and not on each of the subspaces independently. In PCA, one often plots the unexplained variance as a function of the number of modes used to approximate the data. This curve should decrease as fast as possible from the variance of the data (for 0 modes) to 0 (for $n$ modes). A standard way to quantify the decrease consists in summing the values at all steps, giving the Accumulated Unexplained Variances (AUV), which is analogous to the Area-Under-the-Curve (AUC) in Receiver Operating Characteristic (ROC) curves. Given a strictly ordered flag of affine subspaces $Fl(x_0\prec x_1 \ldots \prec x_k)$, we thus propose to optimize the AUV criterion: $$\textstyle AUV(Fl(x_0\prec x_1 \ldots \prec x_k)) = \sum_{i=0}^k \sigma^2_{out}( Fl_i(x_0\prec x_1 \ldots \prec x_k ) )$$ instead of the unexplained variance at order $k$.
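In the Euclidean case, the AUV of the flag spanned by the top principal components has a closed form: the unexplained variance of the $i$-dimensional subspace through the mean is the tail sum $\sum_{j>i}\sigma_j^2$ of eigenvalues of the covariance matrix. This gives a quick numerical sanity check of the criterion (a sketch with our own helper names, using the mean squared residual as the unexplained variance):

```python
import numpy as np

rng = np.random.default_rng(0)
# anisotropic Gaussian data so that the eigenvalues are well separated
Y = rng.normal(size=(500, 5)) * np.array([3.0, 2.0, 1.0, 0.5, 0.2])

C = np.cov(Y.T, bias=True)                  # empirical covariance (1/N)
evals, evecs = np.linalg.eigh(C)
evals, evecs = evals[::-1], evecs[:, ::-1]  # decreasing eigenvalue order

def unexplained_variance(Y, basis):
    """Mean squared residual after projection onto the affine subspace
    through the mean spanned by the given column vectors."""
    Z = Y - Y.mean(axis=0)
    if basis.shape[1] > 0:
        Q, _ = np.linalg.qr(basis)
        Z = Z - Z @ Q @ Q.T
    return float((Z ** 2).sum(axis=1).mean())

def auv(Y, vectors, k):
    """Accumulated Unexplained Variance of the flag spanned by the first
    i columns of `vectors`, for i = 0 .. k."""
    return sum(unexplained_variance(Y, vectors[:, :i]) for i in range(k + 1))

k = 2
auv_pca = auv(Y, evecs, k)
# closed form for the PCA flag: sum over i of the tail sum of eigenvalues
auv_closed = sum(evals[i:].sum() for i in range(k + 1))
```

Any other ordering of the same eigenvectors (e.g. increasing eigenvalues) yields a larger AUV, which is the content of the theorem stated below in the Euclidean case.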
We could of course consider a complete flag but in practice it is often useful to stop at a dimension $k$ much smaller than the possibly very high dimension $n$. The criterion is extended to more general partial flags by weighting the unexplained variance of each subspace by the number of (exchangeable) points that are added at each step. With this global criterion, the point $x_i$ influences all the subspaces of the flag that are larger than $Fl_i(x_0\prec x_1 \ldots \prec x_k )$ but not the smaller subspaces. It turns out that optimizing this criterion results in the usual PCA up to mode $k$ in a Euclidean space. \[THM8\] Let ${\hat Y} = \{ \hat y_i \}_{i=1}^N$ be a set of $N$ data points in ${\ensuremath{\mathbb{R}}}^n$. We denote as usual the mean by $\bar y = \frac{1}{N} \sum_{i=1}^N \hat y_i$ and the empirical covariance matrix by $\Sigma = \frac{1}{N} \sum_{i=1}^N (\hat y_i -\bar y) (\hat y_i -\bar y){^{\text{\tiny T}}}$. Its spectral decomposition is denoted by $\Sigma = \sum_{j=1}^n \sigma_j^2 u_j u_j{^{\text{\tiny T}}}$ with the eigenvalues sorted in decreasing order. We assume that the first $k+1$ eigenvalues have multiplicity one, so that the order from $\sigma_1$ to $\sigma_{k+1}$ is strict. Then the partial flag of affine subspaces $Fl(x_0\prec x_1 \ldots \prec x_k)$ optimizing $$\textstyle AUV(Fl(x_0\prec x_1 \ldots \prec x_k)) = \sum_{i=0}^k \sigma^2_{out}( Fl_i(x_0\prec x_1 \ldots \prec x_k ) )$$ is strictly ordered and can be parametrized by $x_0 = \bar y$, $x_i = x_0 + u_i$ for $1 \leq i \leq k$. The parametrization by points is not unique, but the generated flag of subspaces is, and it coincides with the flag generated by the PCA modes up to mode $k$ included. The proof is detailed in \[suppB\]. The main idea is to parametrize the matrix of reference vectors by the product of an orthogonal matrix $Q$ with an upper triangular matrix with positive diagonal entries (QR decomposition).
The key property of this Gram-Schmidt orthogonalization is the stability of the columns of $Q$ when we add or remove columns (i.e. reference points) in $X$, which allows us to write the expression of the AUV explicitly. Critical points are found for columns of $Q$ which are eigenvectors of the data covariance matrix, and the expression of the AUV shows that we have to select them in decreasing order of the eigenvalues. Sample-limited barycentric subspace inference on spheres -------------------------------------------------------- In several domains, it has been proposed to limit the inference of the Fréchet mean to the data points only. In neuroimaging studies, for instance, the individual image minimizing the sum of square deformation distance to the other subject images has been argued to be a good alternative to the mean template (a Fréchet mean in deformation and intensity space) because it conserves the full definition and all the original characteristics of a real subject image [@lepore:inria-00616172]. Beyond the Fréchet mean, [@Feragen2013] proposed to define the first principal component mode as the geodesic going through two of the data points which minimizes the unexplained variance. The method, named [*set statistics*]{}, aimed at accelerating the computation of statistics on tree spaces. [@Zhai_2016] further explored this idea under the name of [*sample-limited geodesics*]{} in the context of PCA in phylogenetic tree space. However, in both cases, extending the method to higher order principal modes was considered a challenging research topic. With barycentric subspaces, sample-limited statistics naturally extend to any dimension by restricting the search to (flags of) affine spans that are parametrized by data points. Moreover, the implementation boils down to a very simple enumeration problem. An important advantage for interpreting the modes of variation is that reference points are never interpolated, as they are by definition sampled from the data.
Thus, we may go back to additional information about the samples, like the disease characteristics in medical image analysis. The main drawback is the combinatorial explosion of the computational complexity: the optimal order-$k$ flag of affine spans requires $O(N^{k+1})$ operations, where $N$ is the number of data points. In practice, the search can be done exhaustively for a small number of reference points, but an approximated optimum has to be sought for larger $k$ using a limited number of random tuples [@Feragen2013]. ![[**Left:**]{} Equi 30 simulated dataset. Data and reference points are projected from the 5-sphere to the expected 2-sphere in 3d to allow visualization. For each method (FBS in blue, 1-PBS in green and 1-BSA in red), the first reference point has a solid symbol. The 1d mode is the geodesic joining this point to the second reference point. The third reference point of FBS and 2-BSA (on the lower left part) is smaller. [**Middle:**]{} graph of the unexplained variance and AUV for the different methods on the Equi 30 dataset. [**Right:**]{} Mount Tom Dinosaur trackway 1 data with the same color code. 1-BSA (in red) and FBS (in blue) are superimposed.[]{data-label="Fig:Equi30"}](Figures/EquiTriangleBSA_30.png "fig:"){width="0.30\columnwidth"} ![](Figures/EquiTriangleBSA_curves.pdf "fig:"){width="0.37\columnwidth"} ![](Figures/DinoTrackBSA.png "fig:"){width="0.30\columnwidth"} In this section, we consider the exhaustive sample-limited version of the Forward Barycentric Subspace (FBS) decomposition, the optimal $k$-dimensional Pure Barycentric Subspace with backward ordering (k-PBS), and the Barycentric Subspace Analysis up to order k (k-BSA). In order to illustrate the differences, we consider a first synthetic dataset where we draw 30 random points uniformly on an equilateral triangle of side length $\pi/2$ on a 6-dimensional sphere. We add to each point a (wrapped) Gaussian noise of standard deviation $\sigma = 10^{\circ}$. In this example, the original data live on a 2-sphere: the ideal flag of subspaces is a pure 2d subspace spanning the first three coordinates. We illustrate in Fig.\[Fig:Equi30\] the different reference points that are found by the different methods. We can see that all methods end up with different results, contrary to the Euclidean case.
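The exhaustive sample-limited search used here is easy to sketch for the first mode on the sphere: enumerate all pairs of data points and keep the great circle (the 1d affine span of the pair) minimizing the unexplained variance, at a cost of $O(N^2)$ residual evaluations. The code below is a hypothetical minimal version with our own helper names, not the implementation used for the figures:

```python
import numpy as np
from itertools import combinations

def sphere_dist(a, b):
    """Arc-length (Riemannian) distance between unit vectors."""
    return float(np.arccos(np.clip(a @ b, -1.0, 1.0)))

def residual_to_great_circle(y, x0, x1):
    """Residual distance of y to the great circle through x0 and x1: the
    closest point is the renormalized projection of y onto span(x0, x1)
    (assumes y is not orthogonal to that plane, i.e. not on the focal set)."""
    Q, _ = np.linalg.qr(np.column_stack([x0, x1]))
    p = Q @ (Q.T @ y)
    return sphere_dist(y, p / np.linalg.norm(p))

def sample_limited_1pbs(data):
    """Exhaustive sample-limited 1-PBS: try all O(N^2) pairs of data points
    as reference points and keep the pair with minimal unexplained variance."""
    best_v, best_pair = np.inf, None
    for i, j in combinations(range(len(data)), 2):
        v = sum(residual_to_great_circle(y, data[i], data[j]) ** 2 for y in data)
        if v < best_v:
            best_v, best_pair = v, (i, j)
    return best_v, best_pair

# noisy samples around the equator of the 2-sphere
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2 * np.pi, size=20)
pts = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
pts += 0.01 * rng.normal(size=pts.shape)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
best_v, best_pair = sample_limited_1pbs(pts)
```

Since the data concentrate around a great circle, the winning pair spans a nearly equatorial circle and the residual variance stays at the noise level.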
The second observation is that the optimal pure subspace is not stable with the dimension: the reference points of the 0-PBS (the sample-limited Fréchet mean, represented by the large blue solid diamond), the 1-PBS (in green) and the 2-PBS (identical to the red points of the 2-BSA) are all different. BSA is more stable: the first reference points are the same from the 1-BSA to the 3-BSA. In terms of unexplained variance, the 2-BSA is the best for two modes (since it is identical to the optimal 2-PBS) and reaches the actual noise level. It remains better than the 3-PBS and the FBS with three modes in terms of AUV, even without adding a fourth point. As a second example, we take real data encoding the shape of three successive footprints of Mount Tom Dinosaur trackway 1, described in [@small96 p.181]. For planar triangles, the shape space (quotient of the triad by similarities) boils down to the sphere of radius $1/2$. These data are displayed on the right of Fig.\[Fig:Equi30\]. In this example, the reference points of the 0-BSA to the 3-BSA are stable and identical to those of the FBS. This is a behavior that we have observed in most of our simulations when the modes cannot be confused. It may not hold anymore if the reference points were optimized on the sphere rather than on the data points only. The optimal 1-PBS (the best geodesic approximation) picks different reference points. Discussion ========== In this paper, we investigated several notions of subspaces in manifolds generalizing the notion of affine span in a Euclidean space. The Fréchet / Karcher / exponential barycentric subspaces are the nested loci of weighted Fréchet / Karcher / exponential barycenters with positive or negative weights summing up to 1. The affine span is the metric completion of the largest of these (the EBS). It may be a non-connected manifold with boundaries.
The completeness of the affine span enables reconnecting parts of the subspace that arrive from different directions at the cut locus of the reference points if needed. It also ensures that there exists a closest point on the submanifold for data projection, which is fundamental for dimension reduction purposes. The fact that modifying the power of the metric does not change the affine span is an unexpected stability result which suggests that the notion is quite central. Moreover, we have shown that the affine span encompasses principal geodesic subspaces as limit cases. It would be interesting to show that we can obtain other types of subspaces, like principal nested subspheres, with higher order and non-local jets: some non-geodesic decomposition schemes such as loxodromes and splines could probably also be seen as limit cases of barycentric subspaces. Future work will address barycentric subspaces in interesting non-constant curvature spaces. For instance, [@eltzner_dimension_2015] adaptively deforms the flat torus seen as a product of spheres into a unique sphere to allow principal nested spheres (PNS) analysis. A quick look at the flat torus shows that the cut-locus of $k+1\leq n$ points in ${\cal S}_1^n$ divides the torus into $k^n$ cells in which the affine span is a $k$-dimensional linear subspace. The subspaces generated in each cell are generally disconnected, but when points coalesce with each other into a jet, the number of cells in the complex decreases and at the limit we recover a single cell that contains a connected affine span. For a first order jet, we recover as expected the restricted geodesic subspace (here a linear subspace limited to the cut locus of the jet base-point), but higher order jets may generate more interesting curved subspaces that may better describe the data geometry. The next practical step is obviously the implementation of generic algorithms to optimize barycentric subspaces in general Riemannian manifolds.
Example algorithms include: finding a point with given barycentric coordinates (there might be several, so this has to be a local search); finding the closest point (and its coordinates) on the barycentric subspace; optimizing the reference points to minimize the residual error after projection of the data points, etc. While such algorithms can be designed relatively simply for specific manifolds, as we have done here for constant curvature spaces, the generalization to general manifolds requires a study of the focal set of the barycentric subspaces, or at least guarantees on the correct behavior of the algorithms. We conjecture that this focal set is a stratified set of zero measure in generic cases. Another difficulty is linked to the non-identifiability of the subspace parameters. For constant curvature spaces, the right parameter space is actually the $k$-Grassmannian. In more general manifolds, the curvature and the interaction with the cut-locus break the symmetry of the barycentric subspaces, but lead to poor numerical conditioning of the system; good renormalization techniques need to be designed to guarantee numerical stability. Finding the subspace that best explains the data is an optimization problem on manifolds. This raises the question of which metric should be considered on the space of barycentric subspaces. In this paper, we mainly see this space as the configuration space of $k+1$ affinely independent points, with convergence to spaces of jets (including non-local jets) when several points coalesce. Such a construction was named Multispace by [@olver_geometric_2001] in the context of symmetry-preserving numerical approximations to differential invariants. It is likely that similar techniques could be investigated to construct numerically stable implementations of barycentric subspaces of higher order parametrized by non-local jets, which are needed for safe optimization.
Conversely, barycentric subspaces could help shed new light on the multispace construction for differential invariants. Barycentric subspaces could probably also be used to extend methods like the probabilistic PCA of [@tipping_probabilistic_1999], generalized to PGA by [@zhang_probabilistic_2013]. A first easy step in that direction is to replace the reference points by reference distributions on the manifold and to look at the locus of weighted expected means. Interestingly, this procedure softens the constraints that we had in this paper about the cut locus. Thus, following [@karcher77], reference distributions could be used in a mollifier smoothing approach to study the regularity of the barycentric subspaces. For applications where data live on Lie groups, generalizing barycentric subspaces to more general non-Riemannian spaces like affine connection manifolds is a particularly appealing extension. In computational anatomy, for instance, deformations of shapes are lifted to a group of diffeomorphisms for statistical purposes (see e.g. [@lorenzi:hal-00813835; @lorenzi:hal-01145728]). All Lie groups can be endowed with a bi-invariant symmetric Cartan-Schouten connection for which geodesics are the left and right translations of one-parameter subgroups. This provides the Lie group with an affine connection structure which may be metric or not. When the group is the direct product of compact and Abelian groups, it admits a bi-invariant metric for which the Cartan-Schouten connection is the natural Levi-Civita connection. Other groups do not admit any bi-invariant metric (this is the case for rigid transformations in more than 2 dimensions because of the semi-direct product), so that a Riemannian structure can only be left or right invariant, but not both. However, the bi-invariant Cartan-Schouten connection continues to exist, and one can design bi-invariant means using exponential barycenters as proposed by [@pennec:hal-00699361].
Thus, we may still define exponential barycentric subspaces and affine spans in these affine connection spaces, the main difference being that the derivative of the log is no longer the Hessian of a distance function. This might considerably complicate the analysis of the generated subspaces. The second topic of this paper concerns the generalization of PCA to manifolds using Barycentric Subspace Analysis (BSA). [@damon_backwards_2013] argued that an interesting generalization of PCA should rely on a “nested sequence of relations”, like embedded linear subspaces in the Euclidean space or embedded spheres in PNS. Barycentric subspaces can naturally be nested, by adding or removing points, or equivalently by setting the corresponding barycentric coordinate to zero. Thus we can easily generalize PCA to manifolds using a forward analysis by iteratively adding one or more points at a time. At the limit where points coalesce at the first order, this amounts to building a flag of (restricted) principal geodesic subspaces. Thus it generalizes the Principal Geodesic Analysis (PGA) of [@fletcher_principal_2004; @sommer_optimization_2013] when starting with a zeroth dimensional space (the Fréchet mean) and the Geodesic PCA (GPCA) of [@huckemann_principal_2006; @huckemann_intrinsic_2010] when starting directly with a first order jet defining a geodesic. One can also design a backward analysis by starting with a large subspace and iteratively removing one or more points to define embedded subspaces. However, the greedy optimization of these forward/backward methods generally leads to different solutions which are not optimal for all the subspaces jointly. The key idea is to consider PCA as a joint optimization of the whole flag of subspaces instead of each subspace independently.
In a Euclidean space, we showed that the Accumulated Unexplained Variances (AUV) with respect to all the subspaces of the hierarchy (the area under the curve of unexplained variance) is a proper criterion on the space of Euclidean flags. We proposed to extend this criterion to barycentric subspaces in manifolds, where an ordering of the reference points naturally defines a flag of nested barycentric subspaces. A similar idea could be used with other iterative least-squares methods like partial least-squares (PLS), which are also one-step-at-a-time minimization methods. Acknowledgments {#acknowledgments .unnumbered} =============== This work was partially supported by the Erwin Schrödinger Institute in Vienna through a three-week stay in February 2015 during the program Infinite-Dimensional Riemannian Geometry with Applications to Image Matching and Shape. It was also partially supported by the Inria Associated team GeomStats between Asclepios and Holmes’ lab at Stanford Statistics Dept. I would particularly like to thank Prof. Susan Holmes for fruitful discussions during the writing of the paper. Appendix A: Proof of Theorem \[THM6\] {#ProofTHM6 .unnumbered} ===================================== We first establish a useful formula exploiting the symmetry of the geodesics from $x$ to $y \not \in {\ensuremath{{\cal C}}}(x)$ with respect to time.
Reversing time along a geodesic, we have: $\gamma_{(x,{\ensuremath{\overrightarrow{xy}}})}(t) = \gamma_{(y,{\ensuremath{\overrightarrow{yx}}})}(1-t)$, which means in particular that $\dot \gamma_{(x,{\ensuremath{\overrightarrow{xy}}})}(1) = - \dot \gamma_{(y,{\ensuremath{\overrightarrow{yx}}})}(0) = -{\ensuremath{\overrightarrow{yx}}}$. Since $\gamma_{(x,{\ensuremath{\overrightarrow{xy}}})}(t) = \exp_x(t {\ensuremath{\overrightarrow{xy}}})$, we obtain $ {\ensuremath{\overrightarrow{yx}}} = - D \left. \exp_x \right|_{{\ensuremath{\overrightarrow{xy}}}} {\ensuremath{\overrightarrow{xy}}}.$ Now, we also have $ \left( D \left. \exp_x \right|_{{\ensuremath{\overrightarrow{xy}}}} \right). D \left. \log_x \right|_y = {\ensuremath{\:\mathrm{Id}}}$ because $\exp_x( \log_x(y)) = y$. Finally, $D\exp_x$ and $D\log_x$ have full rank on ${\ensuremath{{\cal M}}}/{\ensuremath{{\cal C}}}(x)$ since there is no conjugate point before the cut-locus, so that we can multiply by their inverses and we end up with: $$\label{eq:symgeo} \forall y \not \in {\ensuremath{{\cal C}}}(x), \quad {\ensuremath{\overrightarrow{xy}}} = - D \left. \log_x \right|_y {\ensuremath{\overrightarrow{yx}}}.$$ Let us first restrict to a convenient domain of ${\ensuremath{{\cal M}}}$: we consider an open geodesic ball $B(x_0, \zeta)$ of radius $\zeta$ centered at $x_0$ and we exclude all the points of ${\ensuremath{{\cal M}}}$ whose cut locus intersects this ball, or equivalently the cut-locus of all the points of this ball. We obtain an open domain ${\cal D}_{\zeta}(x_0) = {\ensuremath{{\cal M}}}\setminus {\ensuremath{{\cal C}}}(B(x_0, \zeta))$ in which $\log_x(y)$ is well defined and smooth for all $x \in B(x_0, \zeta)$ and all $y\in {\cal D}_{\zeta}(x_0)$. Thanks to the symmetry of the cut-locus, $\log_y(x)$ is also well defined and smooth under the same conditions, and Eq.
(\[eq:symgeo\]) can be rephrased: $$\label{eq:symgeo2} \forall x \in B(x_0, \zeta), y\in {\cal D}_{\zeta}(x_0), \quad {\ensuremath{\overrightarrow{xy}}} = - D \left. \log_x \right|_y {\ensuremath{\overrightarrow{yx}}}.$$ Let $\|w\|_{\infty} = \max_i \|w_i\|_{x_0}$ be the maximal length of the vectors $w_i$. For $\varepsilon < \zeta / \|w\|_{\infty}$, we have $\|\varepsilon w_i\|_{x_0} \leq \varepsilon \|w\|_{\infty} < \zeta$, so that all the points $x_i = \exp_{x_0}( \varepsilon w_i)$ belong to the open geodesic ball $B(x_0, \zeta)$. Thus, $\log_x(x_i)$ and $\log_{x_i}(x)$ are well defined and smooth for any $x \in {\cal D}_{\zeta}(x_0)$, and we can write the Taylor expansion in a normal coordinate system at $x_0$ using Eq.\[eq:symgeo2\]: $$\textstyle \log_x(x_i(\varepsilon)) = \log_x( x_0) + \varepsilon D\log_x|_{x_0} w_i + O(\varepsilon ^2) = D\log_x|_{x_0} \left( \varepsilon w_i - \log_{x_0}(x) \right) + O(\varepsilon ^2)$$ Any point $x \in {\cal D}_{\zeta}(x_0)$ can be defined by $\log_{x_0}(x) = \sum_{j=1}^k \alpha_j w_j + w_{\bot}$ with ${\ensuremath{ \left< \:w_{\bot}\:\left|\:w_i\right.\right> }} =0$ and suitable constraints on the $\alpha_j$ and $w_{\bot}$. Replacing $\log_{x_0}(x)$ by its value in the above formula, we get $$\textstyle \log_x(x_i(\varepsilon)) = D\log_x|_{x_0} \left( \varepsilon w_i - \sum_{j=1}^k \alpha_j w_j - w_{\bot} \right) + O(\varepsilon ^2).$$ Since the matrix $D\log_x|_{x_0}$ is invertible, the EBS equation $\mathfrak{M}_1(x, \lambda)= \sum_{i=0}^k \lambda_i {\ensuremath{\overrightarrow{x x_i}}} =0$ is equivalent to $\textstyle w_{\bot} + \sum_{j=1}^k \alpha_j w_j - \varepsilon\left(\sum_{i=1}^k {{\underaccent{\bar}{\lambda}}}_i w_i\right) = O(\varepsilon ^2).$ Projecting orthogonally to $W_{x_0}$, we get $w_{\bot} = O(\varepsilon^2)$: this means that any point of the limit EBS has to be of the form $x = \exp_{x_0}(\sum_{j=1}^k \alpha_j w_j)$.
In other words, only points of the restricted geodesic subspace $GS^*(W_{x_0})$ can be solutions of the limit EBS equation. Now, for a point of $GS^*(W_{x_0})$ to be a solution of the limit EBS equation, there should exist barycentric coordinates $\lambda$ such that $\sum_{j=1}^k (\alpha_j - \varepsilon {{\underaccent{\bar}{\lambda}}}_j) w_j = O(\varepsilon ^2)$. Choosing $\lambda = (\varepsilon - \sum_i \alpha_i: \alpha_1 : \ldots : \alpha_k)$, we obtain the normalized barycentric coordinates ${{\underaccent{\bar}{\lambda}}}_i = \alpha_i / \varepsilon$ for $1\leq i \leq k$ and ${{\underaccent{\bar}{\lambda}}}_0 = 1 - (\sum_i \alpha_i) / \varepsilon$ that satisfy this condition. Thus any point of $GS^*(W_{x_0}) \cap {\cal D}_{\zeta}(x_0)$ is a solution of the limit EBS equation with barycentric coordinates at infinity on ${\ensuremath{{\cal P}^*_k}}$. Taking $\zeta$ sufficiently small, we can include all the points of $GS^*(W_{x_0})$. Riemannian manifolds ==================== A Riemannian manifold is a differential manifold endowed with a smooth collection of scalar products ${\ensuremath{ \left< \:.\:\left|\:.\right.\right> }}_{x}$ on each tangent space $T_{x}{\ensuremath{{\cal M}}}$ at point $x$ of the manifold, called the Riemannian metric. In a chart, the metric is expressed by a symmetric positive definite matrix $G(x) = [ g_{ij}(x) ]$ where each element is given by the dot product of the tangent vectors to the coordinate curves: $g_{ij}(x) = {\ensuremath{ \left< \:\partial_i\:\left|\:\partial_j\right.\right> }}_x$.
This matrix is called the [*local representation of the Riemannian metric*]{} in the chart $x$ and the dot product of two vectors $v$ and $w$ in $T_{x}{\ensuremath{{\cal M}}}$ is then ${\ensuremath{ \left< \:v\:\left|\:w\right.\right> }}_x = v{^{\text{\tiny T}}}\: G(x)\: w = g_{ij}(x) v^i w^j$, using the Einstein summation convention which implicitly sums over the indices that appear both in upper position (components of \[contravariant\] vectors) and lower position (components of covariant vectors (co-vectors)). Riemannian distance and geodesics --------------------------------- If we consider a curve $\gamma(t)$ on the manifold, we can compute at each point its instantaneous speed vector $\dot{\gamma}(t)$ (this operation only involves the differential structure) and its norm $ \left\| \dot{\gamma}(t)\right\|_{\gamma(t)}$ to obtain the instantaneous speed (the Riemannian metric is needed for this operation). To compute the length of the curve, this value is integrated along the curve: $$\label{curve_length} {\cal L}_a^b (\gamma) = \int_a^b \left\| \dot{\gamma}(t)\right\|_{\gamma(t)} dt = \int_a^b \left( {\ensuremath{ \left< \: \dot{\gamma}(t)\:\left|\:\dot{\gamma}(t) \right.\right> }}_{\gamma(t)} \right)^{\frac{1}{2}}dt$$ The distance between two points of a connected Riemannian manifold is the minimum length among the curves joining these points. The curves realizing this minimum are called geodesics. Finding the curves realizing the minimum length is a difficult problem, since any time-reparameterization is allowed. Thus one rather defines the metric geodesics as the critical points of the energy functional ${\cal E}(\gamma) = \frac{1}{2}\int_0^1 \left\| \dot \gamma (t)\right\|^2\: dt$. It turns out that they also optimize the length functional and are moreover parameterized proportionally to arc length.
Let $[g^{ij}] = [g_{ij}]{^{\text{\tiny (-1)}}}$ be the inverse of the metric matrix (in a given coordinate system) and $\Gamma^i_{jk} = \frac{1}{2} g^{im}\left( \partial_k g_{mj} + \partial_j g_{mk} - \partial_m g_{jk} \right)$ the Christoffel symbols. The calculus of variations shows that the geodesics are the curves satisfying the following second order differential system: $$\ddot{\gamma}^i + \Gamma^i_{jk} \dot{\gamma}^j \dot{\gamma}^k = 0.$$ The fundamental theorem of Riemannian geometry states that on any Riemannian manifold there is a unique (torsion-free) connection which is compatible with the metric, called the Levi-Civita (or metric) connection. For that choice of connection, shortest paths (geodesics) are auto-parallel curves (“straight lines”). This connection is determined in a local coordinate system through the Christoffel symbols: $\nabla_{\partial_i}\partial_j = \Gamma_{ij}^k \partial_k$. With these conventions, the covariant derivative of the coordinates $v^i$ of a vector field is $v^i_{;j} = (\nabla_j v)^i = \partial_j v^i +\Gamma^i_{jk} v^k$. In the following, we only consider the Levi-Civita connection and we assume that the manifold is geodesically complete, i.e. that the definition domain of all geodesics can be extended to ${\ensuremath{\mathbb{R}}}$. This means that the manifold has no boundary nor any singular point that we can reach in a finite time. As an important consequence, the Hopf-Rinow-De Rham theorem states that there always exists at least one minimizing geodesic between any two points of the manifold (i.e. a geodesic whose length is the distance between the two points). Normal coordinate systems {#ExpMapIntro} ------------------------- Let $x$ be a point of the manifold that we consider as a local reference and $v$ a vector of the tangent space $T_{x}{\ensuremath{{\cal M}}}$ at that point.
From the theory of second order differential equations, we know that there exists one and only one geodesic $\gamma_{(x,v)}(t)$ starting from that point with this tangent vector. This allows us to wrap the tangent space onto the manifold, or equivalently to develop the manifold in the tangent space along the geodesics (think of rolling a sphere along its tangent plane at a given point). The mapping $ \exp_{x}(v) = \gamma_{(x,v)}(1)$ of each vector $v \in T_{x}{\ensuremath{{\cal M}}}$ to the point of the manifold that is reached after a unit time by the geodesic $\gamma_{(x,v)}(t)$ is called the [*exponential map*]{} at point $x$. Straight lines going through 0 in the tangent space are transformed into geodesics going through point $x$ on the manifold, and distances along these lines are conserved. The exponential map is defined in the whole tangent space $T_{x}{\ensuremath{{\cal M}}}$ (since the manifold is geodesically complete) but it is generally one-to-one only locally around 0 in the tangent space (i.e. around $x$ in the manifold). In the sequel, we denote by ${\ensuremath{\overrightarrow{xy}}}=\log_{x}(y)$ the inverse of the exponential map: this is the smallest vector (in norm) such that $y = \exp_{x}({\ensuremath{\overrightarrow{xy}}})$. It is natural to search for the maximal domain where the exponential map is a diffeomorphism. If we follow a geodesic $\gamma_{(x, v)}(t) = \exp_{x}(t\: v)$ from $t=0$ to infinity, it is either always minimizing, or it is minimizing up to a time $t_0 < \infty$ and no longer minimizing afterwards (thanks to the geodesic completeness). In the latter case, the point $ \gamma_{(x,v)}(t_0)$ is called a [*cut point*]{} and the corresponding tangent vector $t_0\: v$ a [*tangential cut point*]{}.
The set of tangential cut points at $x$ is called the [*tangential cut locus*]{} $C(x) \in T_{x}{\ensuremath{{\cal M}}}$, and the set of cut points of the geodesics starting from $x$ is the [*cut locus*]{} ${\ensuremath{{\cal C}}}(x) = \exp_{x}(C(x)) \in {\ensuremath{{\cal M}}}$. This is the closure of the set of points where several minimizing geodesics starting from $x$ meet. On the sphere ${\mathcal S}_2(1)$ for instance, the cut locus of a point $x$ is its antipodal point and the tangential cut locus is the circle of radius $\pi$. The maximal bijective domain of the exponential chart is the domain $D(x)$ containing 0 and delimited by the tangential cut locus ($\partial D(x) = C(x)$). This domain is connected and star-shaped with respect to the origin of $T_{x}{\ensuremath{{\cal M}}}$. Its image by the exponential map covers all the manifold except the cut locus, which has null measure. Moreover, the segment $[0,{\ensuremath{\overrightarrow{xy}}}]$ is mapped to the unique minimizing geodesic from $x$ to $y$: geodesics starting from $x$ are straight lines, and distances to the reference point are conserved. This chart is somehow the “most linear” chart of the manifold with respect to the reference point $x$. When the tangent space is provided with an orthonormal basis, this is called [*a normal coordinate system at $x$*]{}. A set of normal coordinate systems at each point of the manifold realizes an atlas which allows us to work very easily on the manifold. The implementation of the exponential and logarithmic maps (from now on $\exp$ and $\log$) is indeed the basis of programming on Riemannian manifolds, and we can express with them practically all the geometric operations needed for statistics [@A:pennec:inria-00614994] or image processing [@A:pennec:inria-00614990].
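On the unit sphere, for instance, both maps are explicit, which makes a convenient sanity check of these definitions (a minimal sketch with our own function names; the log is undefined at the cut locus $y = -x$):

```python
import numpy as np

def sphere_exp(x, v):
    """exp_x(v) on the unit sphere: follow the great circle from x with
    initial velocity v (v must be tangent at x, i.e. <x, v> = 0)."""
    nv = np.linalg.norm(v)
    if nv < 1e-15:
        return x.copy()
    return np.cos(nv) * x + np.sin(nv) * v / nv

def sphere_log(x, y):
    """log_x(y): smallest tangent vector at x with exp_x(log_x(y)) = y.
    Undefined at the cut locus of x, i.e. at the antipodal point y = -x."""
    c = np.clip(x @ y, -1.0, 1.0)
    th = np.arccos(c)                    # = dist(x, y)
    if th < 1e-15:
        return np.zeros_like(x)
    u = y - c * x                        # component of y orthogonal to x
    return th * u / np.linalg.norm(u)

# round trip: log_x(exp_x(v)) recovers v for tangent vectors shorter than pi
x = np.array([1.0, 0.0, 0.0])
v = 0.3 * np.array([0.0, 1.0, 0.0])
y = sphere_exp(x, v)
```

The round trip recovers $v$ exactly as long as $\|v\| < \pi$, i.e. inside the tangential cut locus, and $\|\log_x(y)\| = {\rm dist}(x,y)$ as stated above.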
The size of the maximal definition domain is quantified by the [*injectivity radius*]{} $\mbox{inj}({\ensuremath{{\cal M}}},x) = {\ensuremath{\:\mbox{\rm dist}}}(x,{\ensuremath{{\cal C}}}(x))$, which is the maximal radius of centered balls in $T_{x}{\ensuremath{{\cal M}}}$ on which the exponential map is one-to-one. The injectivity radius of the manifold $\mbox{inj}({\ensuremath{{\cal M}}})$ is the infimum of the injectivity over the manifold. It may be zero, in which case the manifold somehow tends towards a singularity (think e.g. to the surface $z=1/\sqrt{x^2+y^2}$ as a sub-manifold of ${\ensuremath{\mathbb{R}}}^3$). In a Euclidean space, normal coordinate systems are realized by orthonormal coordinates system translated at each point: we have in this case ${\ensuremath{\overrightarrow{xy}}} = \log_{x}(y) = y-x$ and $\exp_{x}({\ensuremath{\overrightarrow{v}}}) = x+{\ensuremath{\overrightarrow{v}}}$. This example is more than a simple coincidence. In fact, most of the usual operations using additions and subtractions may be reinterpreted in a Riemannian framework using the notion of [*bipoint*]{}, an antecedent of vector introduced during the 19th Century. Indeed, vectors are defined as equivalent classes of bipoints in a Euclidean space. This is possible because we have a canonical way (the translation) to compare what happens at two different points. In a Riemannian manifold, we can still compare things locally (by parallel transportation), but not any more globally. This means that each “vector” has to remember at which point of the manifold it is attached, which comes back to a bipoint. 
Hessian of the squared distance =============================== Computing the differential of the Riemannian log ------------------------------------------------ On ${\ensuremath{{\cal M}}}/ C(y)$, the Riemannian gradient $\nabla^a = g^{ab} \partial_b$ of the squared distance $d^2_y(x)={\ensuremath{\:\mbox{\rm dist}}}^2(x, y)$ with respect to the fixed point $y$ is well defined and is equal to $\nabla d^2_y(x) = -2 \log_x(y)$. The Hessian operator (or double covariant derivative) $\nabla^2 f(x)$ from $T_x{\ensuremath{{\cal M}}}$ to $T_x{\ensuremath{{\cal M}}}$ is the covariant derivative of the gradient, defined by the identity $\nabla^2 f(v) = \nabla_v(\nabla f)$. In a normal coordinate system at point $x$, the Christoffel symbols vanish at $x$, so that the Hessian operator of the squared distance can be expressed with the standard differential $D_x$ with respect to the point $x$: $$\nabla^2 d^2_y(x) = -2 (D_x \log_x(y)).$$ The points $x$ and $y=\exp_x(v)$ are called conjugate if $D\exp_x(v)$ is singular. It is known that the cut point (if it exists) occurs at or before the first conjugate point along any geodesic [@A:LeeCurvature:1997]. Thus, $D\exp_x(v)$ has full rank inside the tangential cut-locus of $x$. This is in essence why there is a well posed inverse function ${\ensuremath{\overrightarrow{x y}}} = \log_x(y)$, called the Riemannian log, which is continuous and differentiable everywhere except at the cut locus of $x$. Moreover, its differential can be computed easily: since $\exp_x(\log_x(y)) =y $, we have $\left. D\exp_x \right|_{{\ensuremath{\overrightarrow{xy}}}} D\log_x (y) = {\ensuremath{\:\mathrm{Id}}}$, so that $$D\log_x (y) = \left( \left. D\exp_x \right|_{{\ensuremath{\overrightarrow{xy}}}} \right)^{-1} \label{eq:Dylogxy}$$ is well defined and of full rank on ${\ensuremath{{\cal M}}}/C(x)$. 
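This inverse-Jacobian identity is easy to check numerically. The Python sketch below (an illustration with the standard closed-form exp and log maps of the unit sphere ${\cal S}_2$; the base point and tangent vector are arbitrary choices) builds finite-difference Jacobians of $\exp_x$ and $\log_x$ in orthonormal tangent bases and verifies that their product is the identity.

```python
import numpy as np

# Finite-difference check of D log_x(y) = (D exp_x|_{xy})^(-1) on the unit
# sphere S^2, using the standard closed-form spherical exp/log maps.
def exp_map(x, v):
    n = np.linalg.norm(v)
    return x if n < 1e-12 else np.cos(n) * x + np.sin(n) * v / n

def log_map(x, y):
    theta = np.arccos(np.clip(x @ y, -1.0, 1.0))
    return theta / np.sin(theta) * (y - np.cos(theta) * x)

x = np.array([1.0, 0.0, 0.0])
E = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])    # orthonormal basis of T_x S^2
v = 0.7 * E[0] + 0.4 * E[1]                         # tangent vector, |v| < pi
y = exp_map(x, v)
f1 = log_map(y, x) / np.linalg.norm(log_map(y, x))  # orthonormal basis of T_y S^2
F = np.array([f1, np.cross(y, f1)])

def step(p, w):
    """Perturb a point on the sphere (agrees with exp_p(w) up to O(|w|^3))."""
    return (p + w) / np.linalg.norm(p + w)

eps = 1e-5
# Jacobian of exp_x at v (basis E -> basis F), column by column
J_exp = np.array([F @ (exp_map(x, v + eps * e) - exp_map(x, v - eps * e))
                  for e in E]).T / (2 * eps)
# Jacobian of log_x at y (basis F -> basis E), perturbing y on the manifold
J_log = np.array([E @ (log_map(x, step(y, eps * f)) - log_map(x, step(y, -eps * f)))
                  for f in F]).T / (2 * eps)
# J_log @ J_exp is (numerically) the 2x2 identity
```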
We can also see the Riemannian log $\log_x(y) = {\ensuremath{\overrightarrow{x y}}}$ as a function of the foot-point $x$, and differentiating $\exp_x(\log_x(y))=y$ with respect to it gives: $ \left. D_x \exp_x \right|_{{\ensuremath{\overrightarrow{xy}}}} + \left. D\exp_x \right|_{{\ensuremath{\overrightarrow{xy}}}}.D_x\log_x (y) =0. $ Once again, we obtain a well-defined and full-rank differential for $x \in {\ensuremath{{\cal M}}}/C(y)$: $$D_x\log_x (y) = - \left( \left. D\exp_x \right|_{{\ensuremath{\overrightarrow{xy}}}}\right)^{-1} \left. D_x \exp_x \right|_{{\ensuremath{\overrightarrow{xy}}}}. \label{eq:Dxlogxy}$$ The Hessian of the squared distance can thus be written: $$\frac{1}{2}\nabla^2 d^2_y(x) = - D_x \log_x(y) = \left( \left. D\exp_x \right|_{{\ensuremath{\overrightarrow{xy}}}}\right)^{-1} \left. D_x \exp_x \right|_{{\ensuremath{\overrightarrow{xy}}}}.$$ If we notice that $J_0(t) = \left. D\exp_x\right|_{t {\ensuremath{\overrightarrow{xy}}}}$ (respectively $J_1(t) = \left. D_x \exp_x\right|_{t {\ensuremath{\overrightarrow{xy}}}}$) are actually matrix Jacobi fields, i.e. solutions of the Jacobi equation $\ddot J (t) + R(t) J(t) =0$ with $J_0(0)=0$ and $\dot J_0(0)={\ensuremath{\:\mathrm{Id}}}_n$ (respectively $J_1(0)={\ensuremath{\:\mathrm{Id}}}_n$ and $\dot J_1(0)=0$), we see that the above formulation of the Hessian operator is equivalent to the one of [@A:villani_regularity_2011] (Equation 4.2): $\frac{1}{2}\nabla^2 d^2_y(x) = J_0(1){^{\text{\tiny (-1)}}}J_1(1)$. Taylor expansion of the Riemannian log -------------------------------------- In order to better understand how the Hessian of the squared Riemannian distance depends on the curvature, we compute here the Taylor expansion of the Riemannian log function. Following [@A:brewin_riemann_2009], we consider a normal coordinate system centered at $x$ and $x_v = \exp_x(v)$ a variation of the point $x$. 
We denote by $R_{ihjk}(x)$ the coefficients of the curvature tensor at $x$ and by $\epsilon$ a conformal gauge scale that encodes the size of the path in terms of $\|v \|_x$ and $\| {\ensuremath{\overrightarrow{xy}}} \|_x$ normalized by the curvature (see [@A:brewin_riemann_2009] for details). In a normal coordinate system centered at $x$, we have the following Taylor expansion of the metric tensor coefficients: $$\begin{split} g_{ab}(v) = & g_{ab} - \frac{1}{3} R_{cabd}v^c v^d - \frac{1}{6} \nabla_e R_{cabd} v^e v^c v^d \\ & + \left( - \frac{1}{20} \nabla_e \nabla_f R_{cabd} + \frac{2}{45} R_{cad}^g R_{ebf}^h \delta_{gh} \right) v^c v^d v^e v^f + O(\epsilon^5). \end{split} \label{eq:TaylorMetric}$$ A geodesic joining point $z$ to point $z+\Delta z$ has tangent vector: $$\begin{aligned} \left[ \log_z(z+\Delta z) \right]^a &= &\Delta z^a +\frac{1}{3} z^b \Delta z^c \Delta z^d R^a_{cbd} + \frac{1}{12} z^b z^c \Delta z^d \Delta z^e \nabla_d R^a_{bce} \\ && + \frac{1}{6} z^b z^c \Delta z^d \Delta z^e \nabla_b R^a_{dce} + \frac{1}{24} z^b z^c \Delta z^d \Delta z^e \nabla^a R_{bdce} \\ && + \frac{1}{12} z^b \Delta z^c \Delta z^d \Delta z^e \nabla_c R^a_{dbe} + O(\epsilon^4).\end{aligned}$$ Using $ z= v$ and $z+\Delta z = {\ensuremath{\overrightarrow{xy}}}$ (i.e. $\Delta z = {\ensuremath{\overrightarrow{xy}}} -v$) in a normal coordinate system centered at $x$, and keeping only the first order terms in $v$, we obtain the first terms of the series expansion of the log: $$\label{eq:TaylorLog} \begin{split} \left[ \log_{x +v}(y) \right]^a & = {\ensuremath{\overrightarrow{xy}}}^a -v^a + \frac{1}{3} R^a_{cbd} v^b {\ensuremath{\overrightarrow{xy}}}^c {\ensuremath{\overrightarrow{xy}}}^d + \frac{1}{12} \nabla_c R^a_{dbe} v^b {\ensuremath{\overrightarrow{xy}}}^c {\ensuremath{\overrightarrow{xy}}}^d {\ensuremath{\overrightarrow{xy}}}^e + O(\epsilon^4). 
\end{split}$$ Thus, the differential of the log with respect to the foot point is: $$\label{eq:Diff_logSupp} - \left[ D_x \log_x(y) \right]^a_b = \delta^a_b - \frac{1}{3} R^a_{cbd} {\ensuremath{\overrightarrow{xy}}}^c {\ensuremath{\overrightarrow{xy}}}^d - \frac{1}{12} \nabla_c R^a_{dbe} {\ensuremath{\overrightarrow{xy}}}^c {\ensuremath{\overrightarrow{xy}}}^d {\ensuremath{\overrightarrow{xy}}}^e + O(\epsilon^3).$$ Since we are in a normal coordinate system, the zeroth order term is the identity matrix, as in the Euclidean space, and the first order term vanishes. The Riemannian curvature tensor appears in the second order term and its covariant derivative in the third order term. The important point here is to see that the curvature is the leading term that makes this matrix depart from the identity (i.e. from the Euclidean case) and that may lead to the non-invertibility of the differential. Example on spheres {#sec:sphere} ================== We consider the unit sphere in dimension $n \geq 2$ embedded in ${\ensuremath{\mathbb{R}}}^{n+1}$ and we represent points of ${\ensuremath{{\cal M}}}= {\cal S}_n$ as unit vectors in ${\ensuremath{\mathbb{R}}}^{n+1}$. The tangent space at $x$ is naturally represented by the linear space of vectors orthogonal to $x$: $T_x{\cal S}_n = \{ v \in {\ensuremath{\mathbb{R}}}^{n+1}, v{^{\text{\tiny T}}}x =0\}$. The natural Riemannian metric on the unit sphere is inherited from the Euclidean metric of the embedding space ${\ensuremath{\mathbb{R}}}^{n+1}$. With these conventions, the Riemannian distance is the arc-length $d(x,y) = \arccos( x{^{\text{\tiny T}}}y)= \theta \in [0,\pi]$. 
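The sphere provides a direct consistency check of the Taylor expansion above. The unit sphere has constant sectional curvature $+1$, with $R(u,v)w = \langle v, w\rangle u - \langle u, w\rangle v$, so the quadratic curvature term vanishes for $v$ parallel to ${\ensuremath{\overrightarrow{xy}}}$ and contributes $-\frac{\theta^2}{3}\, v$ for $v$ orthogonal to ${\ensuremath{\overrightarrow{xy}}}$, where $\theta = \|{\ensuremath{\overrightarrow{xy}}}\|$. The expansion thus predicts for $-D_x \log_x(y)$ the eigenvalues $$1 \quad \text{along } {\ensuremath{\overrightarrow{xy}}}, \qquad\qquad 1 - \frac{\theta^2}{3} + O(\theta^4) \quad \text{orthogonally to } {\ensuremath{\overrightarrow{xy}}},$$ in agreement with the exact eigenvalues $1$ and $\theta \cot\theta = 1 - \theta^2/3 + O(\theta^4)$ computed below for the sphere.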
Denoting $f(\theta) = 1/ \mbox{sinc}(\theta) = { \theta}/{\sin(\theta)}$, the spherical exp and log maps are: $$\begin{aligned} \exp_x(v) & = & \cos(\| v\|) x + \mbox{sinc}(\| v\|) v / \| v\| \\ \log_x(y) & = & f(\theta) \left( y - \cos(\theta) x \right) \quad \text{with} \quad \theta = \arccos(x{^{\text{\tiny T}}}y).\end{aligned}$$ Notice that $f(\theta)$ is a smooth function from $]-\pi;\pi[$ to ${\ensuremath{\mathbb{R}}}$ that is always greater than or equal to one and is locally quadratic at zero: $f(\theta) = 1 +\theta^2/6 + O(\theta^4)$. Hessian of the squared distance on the sphere --------------------------------------------- To compute the gradient and Hessian of functions on the sphere, we first need a chart in a neighborhood of a point $x\in {\cal S}_n$. We consider the unit vector $x_v = \exp_x(v)$ which is a variation of $x$ parametrized by the tangent vector $v \in T_x{\cal S}_n$ (i.e. verifying $x{^{\text{\tiny T}}}v=0$). In order to extend this mapping to the embedding space to simplify computations, we consider that $v$ is the orthogonal projection of an unconstrained vector $w \in {\ensuremath{\mathbb{R}}}^{n+1}$ onto the tangent space at $x$: $v=({\ensuremath{\:\mathrm{Id}}}-x x{^{\text{\tiny T}}})w$. Using the above formula for the exponential map, we get at first order $x_v = x + v + O(\|v\|^2)$ in the tangent space, or $x_w = x + ({\ensuremath{\:\mathrm{Id}}}-x x{^{\text{\tiny T}}})w + O(\|w\|^2)$ in the embedding space. It is worth verifying first that the gradient of the squared distance $\theta^2 = d^2_y(x) = \arccos^2\left( {x{^{\text{\tiny T}}}y} \right)$ is indeed $\nabla d^2_y(x) = -2 \log_x(y)$. We consider the variation $x_w = \exp_x( ({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}})w)= x +({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}})w + O(\|w\|^2)$. 
Because $D_x \arccos(y {^{\text{\tiny T}}}x) = -y {^{\text{\tiny T}}}/ \sqrt{ 1 - (y {^{\text{\tiny T}}}x)^2}$, we get: $$D_w \arccos^2\left( {x_w{^{\text{\tiny T}}}y} \right) = \frac{ -2 \theta}{\sin \theta} y{^{\text{\tiny T}}}({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}}) = -2 f(\theta) y{^{\text{\tiny T}}}({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}}),$$ and the gradient is as expected: $$\nabla d^2_y(x) = -2 f(\theta) ({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}})y = -2 \log_x(y). \label{eq:GradDistSphere}$$ To obtain the Hessian, we now compute the Taylor expansion of $\log_{x_w}(y)$. First, we have $$f(\theta_w) = f(\theta) - \frac{f'(\theta)}{\sin \theta} {y{^{\text{\tiny T}}}({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}})w} + O(\|w\|^2),$$ with $ f'( \theta ) = (1-f(\theta)\cos \theta)/\sin \theta$. Thus, the first order Taylor expansion of $\log_{x_w}(y) = f(\theta_w) ( y - \cos(\theta_w) x_w )$ is: $$\begin{split} \log_{x_w}(y) & = f(\theta_w) \left( {\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}}-({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}})w x{^{\text{\tiny T}}}- x w{^{\text{\tiny T}}}({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}}) \right)y + O(\|w\|^2) \\ \end{split}$$ so that $$\begin{split} - D_w \log_{x_w}(y) = \frac{f'(\theta)}{\sin \theta} ( {\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}})y y{^{\text{\tiny T}}}({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}}) + f(\theta) \left( x{^{\text{\tiny T}}}y {\ensuremath{\:\mathrm{Id}}}+ x y{^{\text{\tiny T}}}\right) ( {\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}}) \end{split}$$ Now, since we have computed the derivative in the embedding space, we have obtained (half) the Hessian with respect to the flat connection of the embedding space, which exhibits a non-zero normal component. In order to obtain the Hessian with respect to the connection of the sphere, we need to project back on $T_x{\cal S}_n$ (i.e. 
multiply by $({\ensuremath{\:\mathrm{Id}}}-x x{^{\text{\tiny T}}})$ on the left) and we obtain: $$\begin{split} \frac{1}{2} H_x(y) & = \left( \frac{1- f(\theta) \cos\theta}{\sin^2 \theta } \right) \left( {\ensuremath{\:\mathrm{Id}}}- x x{^{\text{\tiny T}}}\right) yy{^{\text{\tiny T}}}({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}}) + f( \theta )\cos \theta ({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}}) \\ & = \left( {\ensuremath{\:\mathrm{Id}}}- x x{^{\text{\tiny T}}}\right) \left( ( 1 - f(\theta) \cos\theta ) \frac{ yy{^{\text{\tiny T}}}}{ \sin^2\theta} + f( \theta )\cos\theta {\ensuremath{\:\mathrm{Id}}}\right) ({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}}). \end{split}$$ To simplify this expression, we note that $\|({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}})y\| = \sin \theta$, so that $u = \frac{({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}})y }{ \sin \theta} = \frac{ \log_x(y) }{\theta}$ is a unit vector of the tangent space at $x$ (for $y \not = x$, so that $\theta > 0$). Using this unit vector and the intrinsic parameters $\log_x(y)$ and $\theta = \| \log_x(y)\|$, we can rewrite the Hessian: $$\begin{aligned} \qquad \frac{1}{2} H_x(y) & = & f( \theta )\cos\theta ({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}}) + \left( \frac{ 1- f(\theta)\cos\theta}{\theta^2 } \right) \log_x(y) \log_x(y){^{\text{\tiny T}}}\\ & = & u u{^{\text{\tiny T}}}+ f( \theta )\cos\theta ({\ensuremath{\:\mathrm{Id}}}-xx{^{\text{\tiny T}}}- u u{^{\text{\tiny T}}}) \end{aligned}$$ The eigenvectors and eigenvalues of this matrix are now very easy to determine. By construction, $x$ is an eigenvector with eigenvalue $\mu_0=0$. Then the vector $u$ (or equivalently $\log_x(y) = f(\theta) ({\ensuremath{\:\mathrm{Id}}}-x x {^{\text{\tiny T}}}) y = \theta u$) is an eigenvector with eigenvalue $\mu_1=1$. Finally, every vector $w$ which is orthogonal to these two vectors (i.e. 
orthogonal to the plane spanned by 0, $x$ and $y$) has eigenvalue $\mu_2= f(\theta)\cos\theta = \theta \cot \theta$. This last eigenvalue is positive for $\theta \in [0,\pi/2[$, vanishes for $\theta = \pi/2$ and becomes negative for $\theta \in ]\pi/2, \pi[$. We retrieve here the results of [@A:buss_spherical_2001 lemma 2] expressed in a more general coordinate system. Example on the hyperbolic space ${\ensuremath{\mathbb{H}}}^n$ {#example-on-the-hyperbolic-space-ensuremathmathbbhn} ============================================================= We consider in this section the hyperboloid of equation $-x_0^2 + x_1^2 + \ldots + x_n^2 = -1$ (with $x_0 > 0$ and $n \geq 2$) embedded in ${\ensuremath{\mathbb{R}}}^{n+1}$. Using the notations $x=(x_0,\hat x)$ and the indefinite nondegenerate symmetric bilinear form ${\ensuremath{ \left< \:x\:\left|\:y\right.\right> }}_* = x{^{\text{\tiny T}}}J y= \hat x{^{\text{\tiny T}}}\hat y -x_0 y_0$ with $ J = \mbox{diag}(-1, {\ensuremath{\:\mathrm{Id}}}_n)$, the hyperbolic space can be seen as the pseudo-sphere $\|x\|^2_* =-1$ of squared radius $-1$ in the $(n+1)$-dimensional Minkowski space: $${\ensuremath{\mathbb{H}}}^n = \{ x \in {\ensuremath{\mathbb{R}}}^{n,1} / \|x\|^2_* = \|\hat x\|^2 -x_0^2 = -1 \}.$$ A point in ${\ensuremath{{\cal M}}}= {\ensuremath{\mathbb{H}}}^n \subset {\ensuremath{\mathbb{R}}}^{n,1}$ can be parametrized by $x=(\sqrt{1+\|\hat x\|^2}, \hat x)$ for $\hat x \in {\ensuremath{\mathbb{R}}}^n$ (Weierstrass coordinates). This happens in fact to be a global diffeomorphism, which provides a very convenient global chart of the hyperbolic space. We denote $\pi(x)=\hat x$ (resp. $\pi{^{\text{\tiny (-1)}}}(\hat x)= (\sqrt{1+\|\hat x\|^2}, \hat x)$) the coordinate map from ${\ensuremath{\mathbb{H}}}^n$ to ${\ensuremath{\mathbb{R}}}^n$ (resp. the parametrization map from ${\ensuremath{\mathbb{R}}}^n$ to ${\ensuremath{\mathbb{H}}}^n$). 
The Poincaré ball model is another classical model of the hyperbolic space ${\ensuremath{\mathbb{H}}}^n$, which can be obtained by a stereographic projection of the hyperboloid onto the hyperplane $x_0 = 0$ from the south pole $(-1, 0 \ldots, 0)$. A tangent vector $v=(v_0, \hat v)$ at point $x=(x_0,\hat x)$ satisfies ${\ensuremath{ \left< \:x\:\left|\:v\right.\right> }}_* = 0$, i.e. $x_0 v_0 = \hat x{^{\text{\tiny T}}}\hat v$, so that $$T_x {\ensuremath{\mathbb{H}}}^n = \left\{ \left( \frac{\hat x{^{\text{\tiny T}}}\hat v}{\sqrt{1+\|\hat x\|^2}}, \hat v\right),\quad \hat v\in {\ensuremath{\mathbb{R}}}^{n} \right\}.$$ The natural Riemannian metric on the hyperbolic space is inherited from the Minkowski metric of the embedding space ${\ensuremath{\mathbb{R}}}^{n,1}$: the scalar product of two vectors $u=(\hat x{^{\text{\tiny T}}}\hat u / \sqrt{1+\|\hat x\|^2},\hat u)$ and $v=(\hat x{^{\text{\tiny T}}}\hat v / \sqrt{1+\| \hat x\|^2}, \hat v)$ at $x=(\sqrt{1+\|\hat x\|^2}, \hat x)$ is $${\ensuremath{ \left< \:u\:\left|\:v\right.\right> }}_* = u{^{\text{\tiny T}}}J v = -u_0 v_0 + \hat u{^{\text{\tiny T}}}\hat v = \hat u{^{\text{\tiny T}}}\left( -\frac{\hat x \hat x{^{\text{\tiny T}}}}{1+\|\hat x\|^2} + {\ensuremath{\:\mathrm{Id}}}\right) \hat v$$ The metric matrix expressed in the coordinate chart, $G={\ensuremath{\:\mathrm{Id}}}- \frac{ \hat x \hat x{^{\text{\tiny T}}}}{1+\|\hat x\|^2}$, has eigenvalue 1, with multiplicity $n-1$, and $1/(1+\|\hat x\|^2)$ along the eigenvector $\hat x$. It is thus positive definite. 
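This structure of the metric matrix is easy to verify numerically; the sketch below (an illustration with arbitrary dimension and point) checks both the pullback identity and the announced eigenvalues.

```python
import numpy as np

# Sanity check of the metric matrix G = Id - xh xh^T / (1 + |xh|^2) in the
# Weierstrass chart of H^n (dimension and point are arbitrary choices).
n = 4
rng = np.random.default_rng(0)
xh = rng.normal(size=n)                  # coordinates of a point of H^n
s = 1.0 + xh @ xh
J = np.diag([-1.0] + [1.0] * n)          # Minkowski metric of R^{n,1}

G = np.eye(n) - np.outer(xh, xh) / s

# A coordinate velocity vh lifts to the tangent vector v = (xh.vh/sqrt(s), vh);
# its Minkowski squared norm equals vh^T G vh (pullback of the metric).
vh = rng.normal(size=n)
v = np.concatenate([[xh @ vh / np.sqrt(s)], vh])
lhs, rhs = v @ J @ v, vh @ G @ vh

# eigenvalues of G: 1 with multiplicity n-1, and 1/(1+|xh|^2) along xh
w = np.sort(np.linalg.eigvalsh(G))
```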
With these conventions, geodesics are the trace of 2-planes passing through the origin and the Riemannian distance is the arc-length: $$d(x,y) = \operatorname{arccosh}( - {\ensuremath{ \left< \:x\:\left|\:y\right.\right> }}_* ).$$ The hyperbolic exp and log maps are: $$\begin{aligned} \quad \exp_x(v) &=& \cosh(\| v\|_* ) x + {\sinh(\| v\|_* )} v / {\| v\|_* } \\ \log_x(y) &=& f_*(\theta) \left( y - \cosh(\theta) x \right) \quad \text{with} \quad \theta = \operatorname{arccosh}( -{\ensuremath{ \left< \:x\:\left|\:y\right.\right> }}_* ),\end{aligned}$$ where $f_*(\theta) = { \theta}/{\sinh(\theta)}$ is a smooth function from ${\ensuremath{\mathbb{R}}}$ to $(0,1]$ that is always positive and is locally quadratic at zero: $f_*(\theta) = 1 - \theta^2/6 + O(\theta^4)$. Hessian of the squared distance on the hyperbolic space ------------------------------------------------------- We first verify that the gradient of the squared distance $d^2_y(x) = \operatorname{arccosh}^2\left( -{\ensuremath{ \left< \:x\:\left|\:y\right.\right> }}_* \right)$ is indeed $\nabla d^2_y(x) = -2 \log_x(y)$. Let us consider a variation of the base-point along the tangent vector $v$ at $x$ verifying ${\ensuremath{ \left< \:v\:\left|\:x\right.\right> }}_*=0$: $$x_{v} = \exp_x(v) = \cosh(\| v\|_* ) x + \frac{\sinh( \| v\|_* )}{ \| v\|_* } v = x + v + O( \| v \|_*^2).$$ In order to extend this mapping to the embedding space around the hyperboloid, we consider that $v$ is the projection $v=w + {\ensuremath{ \left< \:w\:\left|\:x\right.\right> }}_* x$ of an unconstrained vector $w\in {\ensuremath{\mathbb{R}}}^{n,1}$ onto the tangent space $T_x {\ensuremath{\mathbb{H}}}^n$. 
Thus, the variation that we consider in the embedding space is $$x_w = x + \partial_w x_w + O(\|w\|_*^2) \quad \mbox{with} \quad \partial_w x_w = w + {\ensuremath{ \left< \:w\:\left|\:x\right.\right> }}_* x = ({\ensuremath{\:\mathrm{Id}}}+ x x{^{\text{\tiny T}}}J) w.$$ Now, we are interested in the impact of such a variation on $\theta_w = d_y(x_w) =\operatorname{arccosh}\left( - {\ensuremath{ \left< \:x_w\:\left|\:y\right.\right> }}_* \right)$. Since $\operatorname{arccosh}'(t) = \frac{1}{\sqrt{t^2 -1}}$, and $\sqrt{\cosh(\theta)^2 -1} = \sinh(\theta)$ for a positive $\theta$, we have: $${d}/{dt} \left. \operatorname{arccosh}(t) \right|_{t=\cosh(\theta)} = { 1}/{{\sqrt{\cosh(\theta)^2 -1}}} = {1}/{\sinh(\theta)},$$ so that $$\theta_w = \theta - \frac{1}{\sinh(\theta)} {\ensuremath{ \left< \:w + {\ensuremath{ \left< \:w\:\left|\:x\right.\right> }}_* x\:\left|\:y\right.\right> }}_* + O(\|w\|_*^2).$$ This means that the directional derivative is $$\partial_w \theta_w = - \frac{1}{\sinh(\theta)} {\ensuremath{ \left< \:w + {\ensuremath{ \left< \:w\:\left|\:x\right.\right> }}_* x\:\left|\:y\right.\right> }}_* = - \frac{1}{\sinh(\theta)} {\ensuremath{ \left< \:w\:\left|\:y -\cosh(\theta) x\right.\right> }}_*$$ so that $ \partial_w \theta_w^2 = -2 f_*(\theta) {\ensuremath{ \left< \:w\:\left|\:y - \cosh(\theta) x \right.\right> }}_*.$ Thus, the gradient in the embedding space, defined by ${\ensuremath{ \left< \:\nabla d^2_y(x)\:\left|\:w\right.\right> }}_* = \partial_w \theta_w^2$, is as expected: $$\nabla d^2_y(x) = - 2 f_*(\theta) (y- \cosh(\theta) x) = - 2 \log_x(y).$$ To obtain the Hessian, we now compute the Taylor expansion of $\log_{x_w}(y)$. 
First, we compute the variation of $f_*(\theta_w) = \theta_w / \sinh(\theta_w)$: $$\partial_w f_*(\theta_w) = {f_*'(\theta)} \: \partial_w \theta_w = - \frac{f_*'(\theta)}{\sinh(\theta)} {\ensuremath{ \left< \:w\:\left|\:y -\cosh(\theta) x\right.\right> }}_* = - \frac{f_*'(\theta)}{\theta} {\ensuremath{ \left< \:w\:\left|\:\log_x(y)\right.\right> }}_*$$ with $ f_*'( \theta ) = (1-f_*( \theta)\cosh \theta)/\sinh \theta = (1- \theta \coth \theta)/\sinh \theta$. The variation of $\cosh \theta_w$ is: $$\partial_w \cosh \theta_w = \sinh \theta \: \partial_w \theta_w = - {\ensuremath{ \left< \:w\:\left|\:y -\cosh(\theta) x\right.\right> }}_*.$$ Thus, the first order variation of $\log_{x_w}(y)$ is: $$\begin{split}\partial_w \log_{x_w}(y) &= \partial_w f_*(\theta_w) (y-\cosh \theta x ) - f_*(\theta) \left( \partial_w \cosh(\theta_w) x + \cosh(\theta) \partial_w x_w \right) \\ &= - \frac{f_*'(\theta)\sinh\theta}{\theta^2} {\ensuremath{ \left< \:w\:\left|\:\log_x(y)\right.\right> }}_* \log_x(y) \\ &\:\: + f_*(\theta) \left( {\ensuremath{ \left< \:w\:\left|\:y -\cosh(\theta) x\right.\right> }}_* x -\cosh(\theta) (w + {\ensuremath{ \left< \:w\:\left|\:x\right.\right> }}_* x)\right) \\ &= - \frac{(1- \theta \coth \theta)}{\theta^2} {\ensuremath{ \left< \:w\:\left|\:\log_x(y)\right.\right> }}_* \log_x(y) \\ &\:\: + {\ensuremath{ \left< \:w\:\left|\:\log_x(y)\right.\right> }}_* x - \theta \coth(\theta) (w + {\ensuremath{ \left< \:w\:\left|\:x\right.\right> }}_* x). 
\end{split}$$ This vector is a variation in the embedding space: it displays a normal component to the hyperboloid $ {\ensuremath{ \left< \:w\:\left|\:\log_x(y)\right.\right> }}_* x $ which reflects the extrinsic curvature of the hyperboloid in the Minkowski space (the mean curvature vector is $-x$), and a tangential component which measures the real variation in the tangent space: $$\begin{split} ({\ensuremath{\:\mathrm{Id}}}+ x x{^{\text{\tiny T}}}J) \partial_w \log_{x_w}(y) = & - \frac{(1- \theta \coth \theta)}{\theta^2} {\ensuremath{ \left< \:w\:\left|\:\log_x(y)\right.\right> }}_* \log_x(y) \\ & - \theta \coth(\theta) (J + x x{^{\text{\tiny T}}}) J w. \end{split}$$ Thus, the intrinsic differential is: $$D_x\log_x (y) = - \frac{(1- \theta \coth \theta)}{\theta^2} \log_x(y) \log_x(y){^{\text{\tiny T}}}J - \theta \coth(\theta) ({\ensuremath{\:\mathrm{Id}}}+ x x{^{\text{\tiny T}}}J).$$ Finally, the Hessian of the square distance, considered as an operator from $T_x{\ensuremath{\mathbb{H}}}^n$ to $T_x{\ensuremath{\mathbb{H}}}^n$, is $H_x(y)(w) = -2 D_x \log_{x}(y) w$. Denoting by $u= \log_x(y) / \theta$ the unit vector of the tangent space at $x$ pointing towards the point $y$, we get in matrix form: $$\frac{1}{2} H_x(y) = u u{^{\text{\tiny T}}}J + \theta \coth \theta (J + x x{^{\text{\tiny T}}}-u u{^{\text{\tiny T}}}) J$$ In order to see that the Hessian is symmetric, we have to lower an index (i.e. multiply on the left by $J$) to obtain the bilinear form: $$H_x(y) (v,w) = {\ensuremath{ \left< \:v\:\left|\:H_x(y)(w)\right.\right> }}_* = 2 v{^{\text{\tiny T}}}J \left( u u{^{\text{\tiny T}}}+ \theta \coth \theta (J + x x{^{\text{\tiny T}}}-u u{^{\text{\tiny T}}}) \right) J w.$$ The eigenvectors and eigenvalues of (half) the Hessian operator are now easy to determine. By construction, $x$ is an eigenvector with eigenvalue $0$ (restriction to the tangent space). 
Then, within the tangent space at $x$, the vector $u$ (or equivalently $\log_x(y) = \theta u$) is an eigenvector with eigenvalue $1$. Finally, every vector $v$ which is orthogonal to these two vectors (i.e. orthogonal to the plane spanned by 0, $x$ and $y$) has eigenvalue $\theta \coth \theta \geq 1$ (with equality only for $\theta=0$). Thus, we can conclude that the Hessian of the squared distance is always positive definite and never degenerates on the hyperbolic space. This was of course expected, since it is well known that the Hessian stays positive definite in negatively curved spaces [@A:bishop_manifolds_1969]. As a consequence, the squared distance is a convex function and has a unique minimum. A QR decomposition of the reference matrix ========================================== Let $X=[x_0, \ldots x_k]$ be a matrix of $k+1$ independent reference points in ${\ensuremath{\mathbb{R}}}^n$. Following the notations of the main paper, we write the reference matrix $$Z(x) = [x-x_0, \ldots x-x_k] = x\mathds{1}_{k+1} {^{\text{\tiny T}}}- X.$$ The affine span $\operatorname{Aff}(X)$ is the locus of points $x$ satisfying $Z(x)\lambda = 0$, i.e. $x = X \lambda / (\mathds{1}_{k+1}{^{\text{\tiny T}}}\lambda)$. Here, working with the barycentric weights is not so convenient, and in view of the principal component analysis, we prefer to work with a variant of the QR decomposition using the Gram-Schmidt orthogonalization process. Choosing $x_0$ as the pivot point, we iteratively decompose $X - x_0 {\ensuremath{\mathds{1}}}_{k+1}{^{\text{\tiny T}}}$ to find an orthonormal basis of the affine span of $X$. For convenience, we define the zeroth vectors $v_0= q_0 =0$. The first axis is defined by $v_1 = x_1-x_0$, or by the unit vector $q_1 = v_1 / \| v_1\|$. 
Next, we project the second direction $x_2-x_0$ onto $\operatorname{Aff}(x_0, x_1) = \operatorname{Aff}(x_0, x_0 + q_1)$: the orthogonal component $v_2 = ({\ensuremath{\:\mathrm{Id}}}- q_1 q_1{^{\text{\tiny T}}}) (x_2 -x_0)$ is described by the unit vector $q_2 = v_2 / \| v_2\|$. The general iteration is then (for $i\geq 1$): $$v_i = ({\ensuremath{\:\mathrm{Id}}}- \sum_{j=0}^{i-1} q_j q_j{^{\text{\tiny T}}}) (x_i - x_0), \qquad \text{and} \qquad q_i = v_i / \| v_i\|.$$ Thus, we obtain the decomposition: $$\begin{split} X & = x_0 {\ensuremath{\mathds{1}}}_{k+1}{^{\text{\tiny T}}}+ Q T \\ Q & = [q_0, q_1, \ldots q_k] \\ T & = \left[ \begin{array}{ccccc} q_0{^{\text{\tiny T}}}(x_0 -x_0) & q_0{^{\text{\tiny T}}}(x_1 -x_0) & q_0{^{\text{\tiny T}}}(x_2 -x_0) & \ldots & q_0{^{\text{\tiny T}}}(x_k -x_0) \\ 0 & q_1{^{\text{\tiny T}}}(x_1 -x_0) & q_1{^{\text{\tiny T}}}(x_2 -x_0) & \ldots & q_1{^{\text{\tiny T}}}(x_k -x_0) \\ 0 & 0 & q_2{^{\text{\tiny T}}}(x_2 -x_0) & \ldots & q_2{^{\text{\tiny T}}}(x_k -x_0) \\ 0 & 0 & \ldots & \ldots & \ldots \\ 0 & 0 & \ldots & \ldots & q_{k}{^{\text{\tiny T}}}(x_k -x_0) \end{array} \right] \end{split}$$ With this affine variant of the QR decomposition, the $(k+1)\times (k+1)$ matrix $T$ is upper triangular with vanishing first row and first column (since $q_0=0$). The $n\times (k+1)$ matrix $Q$ also has a first zero column before the usual $k$ orthonormal vectors in its $k+1$ columns. The decomposition into matrices of this form is unique when we assume that all the points $x_0, \ldots x_k$ are affinely independent. This means that we can parametrize the matrix $X$ by the orthogonal (apart from the first vanishing column) matrix $Q$ and the triangular (with vanishing first row and column) matrix $T$. In view of PCA, it is important to notice that the decomposition is stable under the addition/removal of reference points. 
Let $X_i=[x_0, \ldots x_{i}]$ be the matrix of the first $i+1$ reference points (we assume $i<k$ to simplify here) and $X_i = x_0 {\ensuremath{\mathds{1}}}_{i+1}{^{\text{\tiny T}}}+ Q_i T_i$ its QR factorization. Then, the matrix $Q_i$ is made of the first $i+1$ columns of $Q$ and the matrix $T_i$ is the upper $(i+1) \times (i+1)$ block of the upper triangular matrix $T$. Optimizing the $k$-dimensional subspace ======================================= With our decomposition, we can now write any point $x \in \operatorname{Aff}(X)$ as the base-point $x_0$ plus any linear combination of the vectors $q_i$: $ x = x_0 + Q \alpha$ with $\alpha \in {\ensuremath{\mathbb{R}}}^{k+1}$. The projection of a point $y$ on $\operatorname{Aff}(X)$ is thus parametrized by the $k+1$ dimensional vector $\alpha$ that minimizes the (squared) distance $d(x,y)^2 = \| x_0 + Q \alpha -y \|^2$. Notice that we have $Q{^{\text{\tiny T}}}Q = {\ensuremath{\:\mathrm{Id}}}_{k+1} -e_1 e_1{^{\text{\tiny T}}}$ (here $e_1$ is the first vector of the canonical basis of ${\ensuremath{\mathbb{R}}}^{k+1}$), so that $Q^{\dag} = Q{^{\text{\tiny T}}}$. The null gradient of this criterion implies that $\alpha$ solves $Q{^{\text{\tiny T}}}Q \alpha = Q{^{\text{\tiny T}}}(y-x_0)$, i.e. $\alpha = Q^{\dag} (y-x_0) = Q{^{\text{\tiny T}}}(y-x_0) $. Thus, the projection of $y$ on $\operatorname{Aff}(X)$ is $$Proj(y, \operatorname{Aff}(X)) = x_0 + Q Q{^{\text{\tiny T}}}(y-x_0),$$ and the residue is $$\begin{split} r^2(y) & = \| ({\ensuremath{\:\mathrm{Id}}}_{n} - Q Q{^{\text{\tiny T}}}) (y-x_0)\|^2 = {\mbox{\rm Tr}}\left( ({\ensuremath{\:\mathrm{Id}}}_{n} - Q Q{^{\text{\tiny T}}}) (y-x_0)(y-x_0){^{\text{\tiny T}}}\right). 
\end{split}$$ Accounting now for the $N$ data points ${ Y} = \{ y_i \}_{i=1}^N$, and denoting as usual $\bar y = \frac{1}{N} \sum_{i=1}^N y_i$ and $\Sigma = \frac{1}{N} \sum_{i=1}^N ( y_i -\bar y) ( y_i -\bar y){^{\text{\tiny T}}}$, the unexplained variance is: $$\sigma_{out}^2(X) = {\mbox{\rm Tr}}\left( ({\ensuremath{\:\mathrm{Id}}}_{n} - Q Q{^{\text{\tiny T}}}) ( \Sigma + (\bar y-x_0)(\bar y-x_0){^{\text{\tiny T}}}) \right) .$$ In this formula, we see that the value of the upper triangular matrix $T$ does not appear and can thus be chosen freely. The point $x_0$ that minimizes the unexplained variance is evidently $x_0 = \bar y$. To determine the matrix $Q$, we diagonalize the empirical covariance matrix to obtain the spectral decomposition $\Sigma = \sum_{j=1}^n \sigma_j^2 u_j u_j{^{\text{\tiny T}}}$ where, by convention, the eigenvalues are sorted in decreasing order. The remaining unexplained variance $\sigma_{out}^2(X) = {\mbox{\rm Tr}}\left( ({\ensuremath{\:\mathrm{Id}}}_{n} - (U{^{\text{\tiny T}}}Q) (U{^{\text{\tiny T}}}Q){^{\text{\tiny T}}}) \mbox{Diag}(\sigma_i^2) \right)$ reaches its minimal value $ \sum_{i=k+1}^n \sigma_i^2$ for $[q_1, \ldots q_k] = [u_1, \ldots u_k] R$, where $R$ is any $k\times k$ orthogonal matrix. Here, we see that the solution is unique in terms of subspaces (we have $\text{Span}(q_1, \ldots q_k) = \text{Span}(u_1, \ldots u_k)$ whichever orthogonal matrix $R$ we choose) but not in terms of the matrix $Q$. In particular, the matrix $X = [\bar y, \bar y + u_1,\ldots \bar y + u_k ]$ is one of the matrices describing the optimal subspace, but the order of the vectors is not prescribed. The AUV criterion ================= In PCA, one often plots the unexplained variance as a function of the number of modes used to approximate the data. This curve should decrease as fast as possible from the variance of the data (for 0 modes) to 0 (for $n$ modes). A standard way to quantify the decrease consists in summing the values at all steps. 
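The optimal-subspace result can be illustrated numerically with a short sketch (synthetic data, arbitrary dimensions): running the affine Gram-Schmidt decomposition on the reference points $x_0 = \bar y$, $x_i = \bar y + u_i$ leaves exactly the trailing eigenvalues unexplained.

```python
import numpy as np

# Check that x0 = ybar and span(q_1..q_k) = span(u_1..u_k) leave an unexplained
# variance equal to the sum of the n-k smallest eigenvalues (synthetic data).
rng = np.random.default_rng(1)
n, N, k = 5, 200, 2
Y = rng.normal(size=(N, n)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])
ybar = Y.mean(axis=0)
Sigma = (Y - ybar).T @ (Y - ybar) / N
sig2, U = np.linalg.eigh(Sigma)
sig2, U = sig2[::-1], U[:, ::-1]           # eigenvalues in decreasing order

# reference points x0 = ybar, x_i = ybar + u_i, then affine Gram-Schmidt
X = np.column_stack([ybar] + [ybar + U[:, i] for i in range(k)])
Q = [np.zeros(n)]                          # q_0 = 0 by convention
for i in range(1, k + 1):
    v = X[:, i] - X[:, 0]
    for q in Q:
        v = v - (q @ v) * q
    Q.append(v / np.linalg.norm(v))
Q = np.column_stack(Q)

P = np.eye(n) - Q @ Q.T                    # projector on the orthogonal complement
sigma_out = np.trace(P @ Sigma)            # x0 = ybar makes the mean term vanish
# sigma_out equals the sum of the n-k smallest eigenvalues of Sigma
```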
We show in this section that the optimal flag of subspaces (up to dimension $k$) that optimize this Accumulated Unexplained Variances (AUV) criterion is precisely the result of the PCA analysis. As previously, we consider $k+1$ points $x_i$ but they are now ordered. We denote by $X_i=[x_0, \ldots x_i]$ the matrix of the first $i+1$ columns of $X=[x_0, \ldots x_k]$. The flag generated by $X$ is thus $$Aff(X_0)=\{x_0\} \subset \ldots \subset Aff(X_i) \subset \ldots \subset Aff(X) \subset {\ensuremath{\mathbb{R}}}^n.$$ The QR decomposition of $X$ gives $k$ orthonormal unit vectors $q_1$ …$q_k$ which can be complemented by $n-k$ unit vector $q_{k+1}, \ldots q_n$ to constitute an orthonormal basis of ${\ensuremath{\mathbb{R}}}^n$. Using this extended basis, we can write: $$\sigma_{out}^2(X) = {\mbox{\rm Tr}}\left( W ( \Sigma - (\bar y-x_0)(\bar y-x_0){^{\text{\tiny T}}}) \right)$$ with $W= ({\ensuremath{\:\mathrm{Id}}}_{n} - Q Q{^{\text{\tiny T}}}) = \sum_{j=k+1}^n q_j q_j{^{\text{\tiny T}}}.$ Since the decomposition is stable under the removal of reference points, the QR factorization of $X_i$ is $X_i = x_0 {\ensuremath{\mathds{1}}}_{i+1}{^{\text{\tiny T}}}+ Q_i T_i$ with $Q_i=[q_0, \ldots q_i]$ and we can write the unexplained variance for the subspace $Aff(X_i)$ as: $$\sigma_{out}^2(X_i) = {\mbox{\rm Tr}}\left( W_i ( \Sigma - (\bar y-x_0)(\bar y-x_0){^{\text{\tiny T}}}) \right)$$ with $W_i= ({\ensuremath{\:\mathrm{Id}}}_{n} - Q_i Q_i{^{\text{\tiny T}}}) = \sum_{j=i+1}^n q_j q_j{^{\text{\tiny T}}}.$ Plugging this value into the criterion $AUV(X) = \sum_{i=0}^k \sigma^2_{out}( X_i )$, we get: $$AUV(X_k) = {\mbox{\rm Tr}}\left( \bar W ( \Sigma - (\bar y-x_0)(\bar y-x_0){^{\text{\tiny T}}}) \right) $$ with $$\bar W = \sum_{i=0}^k W_i = \sum_{i=0}^k ({\ensuremath{\:\mathrm{Id}}}_{n} - Q_i Q_i{^{\text{\tiny T}}}) = \sum_{i=0}^k \sum_{j=i+1}^n q_j q_j{^{\text{\tiny T}}}= \sum_{i=1}^k i q_i q_i{^{\text{\tiny T}}}+ (k+1) \sum_{i=k+1}^n q_i q_i{^{\text{\tiny T}}}.$$ PCA 
optimizes the AUV criterion =============================== The minimum over $x_0$ is achieved as before for $x_0= \bar y$ and the AUV for this value is now parametrized only by the matrix $Q$: $$AUV(Q) = {\mbox{\rm Tr}}\left( U{^{\text{\tiny T}}}\bar W U \mbox{Diag}(\sigma_i^2) \right) = \sum_{i=1}^k i q_i{^{\text{\tiny T}}}\Sigma q_i + (k+1) \sum_{i=k+1}^n q_i{^{\text{\tiny T}}}\Sigma q_i.$$ Assuming that the first $k+1$ eigenvalues $\sigma_i^2$ ($1\leq i \leq k+1$) of $\Sigma$ are all different (so that they can be sorted in a strict order), we claim that the optimal unit orthogonal vectors are $q_i = u_i$ for $1\leq i \leq k$ and $[q_{k+1}, \ldots q_n] = [u_{k+1}, \ldots u_n] R$ where $R \in O(n-k)$ is any orthogonal matrix. In order to simplify the proof, we start by assuming that all the eigenvalues have multiplicity one, and we optimize iteratively over each unit vector $q_i$. We start with $q_1$: augmenting the criterion with the constraint $\|q_1\|^2 =1$ using the Lagrange multiplier $\lambda_1$ and differentiating, we obtain: $$\nabla_{q_1} ( AUV(Q) + \lambda_1 \|q_1\|^2) = \Sigma q_1 + \lambda_1 q_1 =0.$$ This means that $q_1$ is a unit eigenvector of $\Sigma$. Denoting by $\pi(1)$ the index of this eigenvector, we have $q_1^* = u_{\pi(1)}$ and the eigenvalue is $-\lambda_1 = \sigma_{\pi(1)}^2$. The criterion for this partially optimal value is now $$AUV([q_1^*, q_2 \ldots q_n] ) = \sigma_{\pi(1)}^2 + \sum_{i=2}^k i q_i{^{\text{\tiny T}}}\Sigma q_i + (k+1) \sum_{i=k+1}^n q_i{^{\text{\tiny T}}}\Sigma q_i.$$ To take into account the orthogonality of the remaining vectors $q_i$ ($i > 1$) with $q_1^*$ in the optimization, we can project all the above quantities onto the orthogonal complement of $u_{\pi(1)}$. Optimizing now for $q_2$ under the constraint $\|q_2\|^2=1$, we find that $q_2$ is a unit eigenvector of $\Sigma - \sigma_{\pi(1)}^2 u_{\pi(1)} u_{\pi(1)}{^{\text{\tiny T}}}$ associated with a non-zero eigenvalue.
Denoting by $\pi(2)$ the index of this eigenvector (which is thus different from $\pi(1)$ because the eigenvalue has to be non-zero), we have $q_2^* = u_{\pi(2)}$ and the eigenvalue is $-\lambda_2 = 2 \sigma_{\pi(2)}^2$. Iterating the process, we conclude that $q_i^* = u_{\pi(i)}$ for some permutation $\pi$ of the indices $1, \ldots n$. Moreover, the value of the criterion for that permutation is $$AUV([q_1^*, q_2^* \ldots q_n^*] ) = \sum_{i=1}^k i \sigma_{\pi(i)}^2 + (k+1) \sum_{i=k+1}^n \sigma_{\pi(i)}^2.$$ In order to find the global minimum, we now have to compare the values of this criterion for all the possible permutations. Assuming that $i<j$, we now show that the permutation of the two indices $\pi(i)$ and $\pi(j)$ gives a lower (or equal) criterion when $\pi(i) < \pi(j)$. Suppose $\pi(i) < \pi(j)$: because the eigenvalues are sorted in strictly decreasing order, we have $\sigma_{\pi(i)}^2 > \sigma_{\pi(j)}^2$. Thus, $(\alpha-1) \sigma_{\pi(i)}^2 > (\alpha-1) \sigma_{\pi(j)}^2$ for any $\alpha > 1$ and adding $\sigma_{\pi(i)}^2 + \sigma_{\pi(j)}^2$ on both sides, we get $\alpha \sigma_{\pi(i)}^2 + \sigma_{\pi(j)}^2 > \sigma_{\pi(i)}^2 + \alpha \sigma_{\pi(j)}^2$. For the value of $\alpha$, we distinguish three cases: - $i<j\leq k$: we take $\alpha = j/i > 1$. Multiplying on both sides by the positive value $i$, we get: $i \sigma_{\pi(i)}^2 + j \sigma_{\pi(j)}^2 < i \sigma_{\pi(j)}^2 + j \sigma_{\pi(i)}^2$. The value of the criterion is thus strictly lower if $\pi(i) < \pi(j)$. - $i \leq k< j$: we take $\alpha = (k+1)/i > 1$ and we get: $i \sigma_{\pi(i)}^2 + (k+1) \sigma_{\pi(j)}^2 < i \sigma_{\pi(j)}^2 + (k+1) \sigma_{\pi(i)}^2$. Once again, the value of the criterion is thus strictly lower if $\pi(i) < \pi(j)$. - $k < i<j$: here permuting the indices does not change the criterion since $\sigma_{\pi(i)}^2$ and $\sigma_{\pi(j)}^2$ are both counted with the weight $(k+1)$. In all cases, the criterion is minimized by swapping indices in the permutation such that $\pi(i) < \pi(j)$ for $i<j$ and $i\leq k$.
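The swapping argument above can be checked by brute force on a small example. A minimal sketch (the helper `auv_value` is ours), evaluating the weighted sum $\sum_{i=1}^k i\,\sigma_{\pi(i)}^2 + (k+1)\sum_{i=k+1}^n \sigma_{\pi(i)}^2$ over all permutations of a strictly decreasing sequence of eigenvalues:

```python
import itertools
import numpy as np

def auv_value(sig2, perm, k):
    """Weighted AUV sum for eigenvalues sig2 reordered by perm (0-based)."""
    s = sig2[list(perm)]
    # modes 1..k carry weight i, the remaining modes carry weight k+1
    return sum((i + 1) * s[i] for i in range(k)) + (k + 1) * s[k:].sum()

rng = np.random.default_rng(1)
sig2 = np.sort(rng.random(5))[::-1]   # strictly decreasing (a.s.)
k = 3
best = min(itertools.permutations(range(5)),
           key=lambda p: auv_value(sig2, p, k))
```

Here `best[:k]` comes out as the identity ordering `(0, 1, 2)` of the first $k$ modes (0-based), while the order of the trailing indices is irrelevant, in line with the case analysis above.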
The global minimum is thus achieved for the identity permutation $\pi(i) = i$ for the indices $1 \leq i \leq k$. For the higher indices, any linear combination of the last $n-k$ eigenvectors of $\Sigma$ gives the same value of the criterion. Taking into account the orthonormality constraints, such a linear combination can be written as $[q_{k+1}, \ldots q_n] = [u_{k+1}, \ldots u_n] R$ for some orthogonal $(n-k)\times (n-k)$ matrix $R$. When some eigenvalues of $\Sigma$ have a multiplicity larger than one, the corresponding eigenvectors cannot be uniquely determined since they can be rotated within the eigenspace. With our assumptions, this can only occur within the last $n-k$ eigenvalues and in any case does not change the value of the criterion. We have thus proved the following theorem. $ $\ Let ${\hat Y} = \{ \hat y_i \}_{i=1}^N$ be a set of $N$ data points in ${\ensuremath{\mathbb{R}}}^n$. We denote as usual the mean by $\bar y = \frac{1}{N} \sum_{i=1}^N \hat y_i$ and the empirical covariance matrix by $\Sigma = \frac{1}{N} \sum_{i=1}^N (\hat y_i -\bar y) (\hat y_i -\bar y){^{\text{\tiny T}}}$. Its spectral decomposition is denoted $\Sigma = \sum_{j=1}^n \sigma_j^2 u_j u_j{^{\text{\tiny T}}}$ with the eigenvalues sorted in decreasing order. We assume that the first $k+1$ eigenvalues have multiplicity one, so that the order from $\sigma_1$ to $\sigma_{k+1}$ is strict. Then the partial flag of affine subspaces $Fl(x_0\prec x_1 \ldots \prec x_k)$ optimizing the AUV criterion: $$AUV(Fl(x_0\prec x_1 \ldots \prec x_k)) = \sum_{i=0}^k \sigma^2_{out}( Fl_i(x_0\prec x_1 \ldots \prec x_k ) )$$ is totally ordered and can be parameterized by $x_0 = \bar y$, $x_i = x_0 + u_i$ for $1 \leq i \leq k$. The parametrization by points is not unique, but the flag of subspaces it generates is unique and equal to the flag generated by the PCA modes up to mode $k$ included. [^1]: $p$-jets are equivalence classes of functions up to order $p$.
Thus, a $p$-jet specifies the Taylor expansion of a smooth function up to order $p$. Non-local jets, or multijets, generalize subspaces of the tangent spaces to higher differential orders with multiple base points.
--- abstract: 'We present a general framework to describe the simultaneous para-to-ferromagnetic and semiconductor-to-metal transition in electron-doped EuO. The theory correctly describes detailed experimental features of the conductivity and of the magnetization, in particular the doping dependence of the Curie temperature. The existence of correlation-induced local moments on the impurity sites is essential for this description.' author: - Michael Arnold and Johann Kroha title: ' Simultaneous ferromagnetic metal-semiconductor transition in electron-doped EuO' --- At room temperature stoichiometric europium oxide (EuO) is a paramagnetic semiconductor which undergoes a ferromagnetic (FM) transition at the Curie temperature of $T_C=69~{\rm K}$. Upon electron doping, either by O defects or by Gd impurities, this phase transition turns into a simultaneous ferromagnetic and semiconductor-metal (SM) transition with nearly 100 % of the itinerant charge carriers polarized and a sharp resistivity drop of 8 to 13 orders of magnitude, depending on sample quality [@oliver1; @oliver2; @penney; @steeneken]. Concomitant with this transition is a huge colossal magnetoresistance (CMR) effect [@shapira], much larger than in the intensely studied manganates [@tokura]. These extreme properties make electron-doped EuO interesting for spintronics applications. Known since the 1970s, these features have therefore recently stimulated more systematic experimental studies with modern techniques and improved sample quality [@steeneken; @ott; @schmehl] as well as theoretical calculations [@schiller; @sinjukow]. In pure EuO the FM ordering is driven by the Heisenberg exchange coupling between the localized Eu 4$f$ moments with spin $S_f=7/2$ [@lee].
Upon electron doping, above $T_C$, the extra electrons are bound in defect levels situated in the semiconducting gap, and the transition to a FM metal occurs when the majority states of the spin-split conduction band shift downward to overlap with the defect levels. Although this scenario is widely accepted, several questions of fundamental as well as applicational relevance have remained poorly understood. (1) Why does the magnetic ordering of the Eu 4$f$ system occur simultaneously [@steeneken] with the SM transition of the conduction electron system? (2) What is the order of the transition? While the magnetic ordering of the 4$f$ system should clearly be of 2nd order, the metallic transition requires a [*finite*]{} shift of the conduction band and, hence, seems to favor a 1st order transition. (3) How can the critical temperature $T_C$ be enhanced by doping for spintronics applications? While in the Eu-rich compound EuO$_{1-x}$ a systematic $T_C$ increase due to the O defects (i.e. missing O atoms) is not observed experimentally [@oliver1; @oliver2], a minute Gd doping concentration significantly enhances $T_C$ [@matsumoto; @ott]. An O defect in EuO$_{1-x}$ essentially binds the two excess electrons from the extra Eu 6s orbital and, therefore, should not carry a magnetic moment. As shown theoretically in Ref. [@sinjukow], the presence of O defects with two-fold electron occupancy does not enhance $T_C$, in agreement with experiments [@oliver1; @oliver2]. In the present work we focus on the Gd-doped system Eu$_{1-y}$Gd$_y$O and calculate the temperature and doping dependent magnetization and resistivity from a microscopic model. We find that the key feature for obtaining a $T_C$ enhancement is that the impurities not only donate electrons but also carry a local magnetic moment in the paramagnetic phase.
[*The model.*]{} — A Gd atom substituted for Eu does not alter the $S_f=7/2$ local moment in the Eu Heisenberg lattice but donates one dopant electron, which in the insulating high-temperature phase is bound in the Gd 5d level located in the gap. Therefore, the Gd impurities are Anderson impurities with a local level $E_d$ below the chemical potential $\mu$ and a [*strong*]{} on-site Coulomb repulsion $U>\mu - E_d$ which restricts their electron occupation essentially to one. The hybridization $V$ with the conduction band is taken to be site-diagonal because of the localized Gd 5d orbitals. The Hamiltonian for the Eu$_{1-y}$Gd$_y$O system then reads, $$\begin{aligned} \label{hamiltonian} H&=&\sum_{{\bf k}\sigma}\varepsilon_{{\bf k}} c_{{\bf k}\sigma}^{\dagger}c_{{\bf k}\sigma}^{\phantom{\dagger}}+H_{cd}+H_{cf}\\ \label{Hcd} H_{cd}&=&E_{d} \sum_{i=1 \dots N_I,\sigma} d_{i\sigma}^{\dagger}d_{i\sigma}^{\phantom{\dagger}} + V \sum_{i=1 \dots N_I,\sigma} (c_{i\sigma}^{\dagger} d_{i\sigma}^{\phantom{\dagger}} + H.c.)\nonumber\\ &+& U \sum_{i=1 \dots N_I} d_{i\uparrow}^{\dagger} d_{i\uparrow}^{\phantom{\dagger}} d_{i\downarrow}^{\dagger} d_{i\downarrow}^{\phantom{\dagger}} \\ \label{Hcf} H_{cf}&=&- \sum_{i,j} J_{ij} \vec S_{i}\cdot\vec S_{j} - J_{cf}\sum_{i}\vec \sigma_{i}\cdot\vec S_{i} \ ,\end{aligned}$$ where the first term in Eq. (\[hamiltonian\]) denotes conduction electrons with spin $\sigma$. The Eu 4$f$ moments $\vec S_i$ on the lattice sites $i=1,\dots, N$ are described in terms of a Heisenberg model $H_{cf}$ with FM nearest and next-nearest neighbor couplings $J_{ij}$ and an exchange coupling $J_{cf}$ to the conduction electron spin operators at site $i$, $\vec\sigma_{i}=(1/2)\sum_{\sigma\sigma'} c_{i\sigma}^{\dagger}\vec\tau_{\sigma\sigma'}c_{i\sigma'}^{\phantom{\dagger}}$, with $c_{i\sigma}=\sum_{\bf k} \exp(i{\bf k x_i})\,c_{{\bf k}\sigma}$ and $\vec \tau_{\sigma\sigma'}$ the vector of Pauli matrices. 
The Gd impurities at the random positions $i=1, ..., N_I$ are described by $H_{cd}$. For the numerical evaluations we take $U\to\infty$ for simplicity. For the present purpose of understanding the general form of the magnetization $m(T)$ and the systematic doping dependence of $T_C$ it is sufficient to treat the 4$f$ Heisenberg lattice, $H_{cf}$, at mean field level, although recent studies have shown that Coulomb correlations in the conduction band can soften the spin wave spectrum in similar systems [@golosov; @perakis]. The effect of the latter on $m(T)$ can be absorbed in the effective mean field coupling of the 4$f$ system, $J_{4f} \equiv \sum_{j}J_{ij}$. We therefore choose $J_{4f}$ such that for pure EuO it yields the experimental value of $T_C=69~{\rm K}$ [@oliver1; @oliver2; @shapira; @steeneken]. For simplicity, we do not consider a direct coupling $J_{df}$ between the 4$f$ and the impurity spins, since this would essentially renormalize $J_{cf}$ only. The indirect RKKY coupling will also be neglected, since for the small conduction band fillings relevant here it is FM, like $J_{ij}$, but much smaller than $J_{ij}$. In the evaluations we use a semi-elliptical bare conduction band density of states (DOS) with a half width $D_0=8\, {\rm eV}$ (consistent with experiment [@steeneken]), centered around $\Delta _0\approx 1.05\, D_0$ above the (bare) defect level $E_d$. The other parameters are taken as $J_{4f} \equiv \sum_{j}J_{ij} = 7\cdot 10^{-5} D_{0}$, $J_{cf}=0.05 D_{0}$, $E_{d}=-0.4 D_{0}$, and $\Gamma=\pi V^{2}=0.05 D_{0}^{2}$, where $J_{cf}\gg J_{4f}$ because $J_{4f}$ involves a non-local matrix element. [*Selfconsistent theory.*]{} — The averaging over the random defect positions is done within the single-site $T$-matrix approximation, sufficient for dilute impurities.
This yields for the retarded conduction electron Green's function $G_{c\sigma}({\bf k},\omega)$ in terms of its selfenergy $\Sigma _{c\sigma}(\omega)$, $$\begin{aligned} &&G_{c\sigma}({\bf k},\omega)=\left[\omega+\mu-\varepsilon_{\bf k}-\Sigma_{c\sigma}(\omega)\right]^{-1} \label{gc}\\ &&\Sigma_{c\sigma}(\omega)=n_{I} |V|^{2}G_{d\sigma}(\omega) -J_{cf}\langle S \rangle \sigma \label{se}\end{aligned}$$ where $G_{d\sigma}(\omega)$ is the defect electron propagator and $\langle S \rangle$ the average 4$f$–moment per site. In mean field theory it is obtained, together with the conduction electron magnetization $m$, as $$\begin{aligned} &&\langle S \rangle = \frac{\sum_{S} S e^{\beta(2J_{4f}\langle S \rangle + J_{cf}m)S}}{\sum_{S}e^{\beta(2J_{4f}\langle S \rangle + J_{cf}m)S}}\\ &&m=\frac{1}{2}\int d \omega f(\omega) [A_{c\uparrow}(\omega) - A_{c\downarrow}(\omega)]\label{magn}\end{aligned}$$ where $f(\omega)$ is the Fermi distribution function and $A_{c\sigma}(\omega)=- \sum_{{\bf k}} {\rm{Im}} G_{c\sigma}(k,\omega)/\pi$ the conduction electron DOS of the interacting system. \ In order to treat the strongly correlated spin and charge dynamics of the Anderson impurities without double occupancy beyond the static approximation, we use a slave particle representation and employ the non-crossing approximation (NCA) [@grewe]. For EuO the DOS at the Fermi level is so low, or even vanishing, that the Kondo temperature is well below $T_C$ and Kondo physics plays no role. In this high-energy regime the NCA has been shown to give quantitatively reliable results [@costi]. This remains true even for a finite magnetization, where the NCA would develop spurious potential scattering singularities near $T_K$ only [@kirchner].
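To illustrate the mean-field step in isolation, the following minimal sketch iterates the self-consistency equation for $\langle S \rangle$ with the conduction-electron feedback $J_{cf}m$ dropped. The numbers are illustrative only (units with $k_B=1$; $J_{4f}$ is fixed so that the linearized equation $k_B T_C = 2J_{4f}S(S+1)/3$ reproduces $T_C=69~{\rm K}$ for $S=7/2$) and are not the parameters used in the paper:

```python
import numpy as np

S = 3.5                             # Eu 4f spin S_f = 7/2
TC = 69.0                           # target Curie temperature in K (k_B = 1)
J4F = 3 * TC / (2 * S * (S + 1))    # from the linearized mean-field equation

def mean_spin(T, n_iter=2000):
    """Fixed-point iteration of <S> = sum_S S e^{2 beta J4F <S> S} / Z,
    i.e. the mean-field equation with the J_cf m term dropped."""
    Sz = np.arange(-S, S + 1)       # S_z = -7/2, ..., +7/2
    avg = S                         # start from the fully polarized state
    for _ in range(n_iter):
        w = np.exp(2 * J4F * avg * Sz / T)   # Boltzmann weights
        avg = (Sz * w).sum() / w.sum()
    return avg
```

Below $T_C$ the iteration converges to a finite spontaneous magnetization; above $T_C$ the linearized map contracts and the magnetization decays to zero.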
One obtains the following set of equations for $G_{d\sigma}(\omega)$ in terms of the auxiliary fermion and boson propagators $G_{f\sigma}$, $G_{b}$, their spectral functions $A_{f\sigma}$, $A_{b}$ and their selfenergies $\Sigma_{f\sigma}, \Sigma_{b}$, $$\begin{aligned} \Sigma_{f\sigma}(\omega)&=&\Gamma \int {d\varepsilon}\left[1-f(\varepsilon)\right] A_{c\sigma}(\varepsilon)G_{b}(\omega-\varepsilon )\label{sigmaf}\\ \Sigma_{b}(\omega)&=&\Gamma \sum_{\sigma}\int {d\varepsilon} f(\varepsilon) A_{c\sigma}(\varepsilon)G_{f\sigma}(\omega+\varepsilon )\label{sigmab}\\ \nonumber G_{d\sigma}(\omega)&=&\int \frac{d\varepsilon} {e^{\beta \varepsilon}} \left[ G_{f\sigma}(\omega+\varepsilon )A_{b}(\varepsilon)-A_{f\sigma}(\varepsilon)G^{*}_{b}(\varepsilon-\omega)\right] \\ \label{Gd}\end{aligned}$$ Note that in Eqs. (\[sigmaf\], \[sigmab\]) $A_{c\sigma}(\varepsilon)$ is the interacting DOS, renormalized by the dilute concentration of Anderson impurities and the 4$f$–spins according to Eq. (\[gc\]). For details of the NCA and its evaluation see [@costi]. The equations (\[gc\]-\[Gd\]) form a closed set of selfconsistent integral equations. They are solved iteratively, fixing the total electron number per lattice site in the system, $$\begin{aligned} n= \sum_{\sigma}\int \!d\omega f(\omega)\, \left[A_{c\sigma}(\omega)+n_I\,A_{d\sigma}(\omega)\right]=n_I \label{pnumber}\end{aligned}$$ by the chemical potential $\mu$ in each step. \ [*Electrical conductivity.*]{} — The current operator $\hat{\bf j}$ can be derived from the continuity equation, $\partial\hat\rho_i/\partial t + \nabla\ \cdot \hat{\bf j} =0$, and the Heisenberg equation of motion for the total local charge operator $\hat\rho_i$ at site $i$. 
Because the impurity Hamiltonians $H_{cd}$, $H_{cf}$ conserve $\hat\rho_i$, only $c$–electrons contribute to the current, and one obtains [@schweitzer], $ \hat{\bf j}=({e}/{\hbar}) \sum_{{\bf k}\sigma}{\partial \varepsilon_{\bf k}}/ {\partial {\bf k}} \ c_{{\bf k}\sigma}^{\dagger} c_{{\bf k}\sigma}^{\phantom{\dagger}} $. The linear response conductivity then reads for a local selfenergy [@schweitzer], $$\sigma=\frac{\pi e^{2}}{3 \hbar V} \sum_{{\bf k}\sigma} \int d\omega \left( -\frac{\partial f}{\partial \omega} \right) A_{c\sigma}^{2}({\bf k},\omega) \left( \frac{\partial \varepsilon_{\bf k}}{\partial {\bf k}} \right)^{2} \ . \label{cond1}$$ [*Results and discussion.*]{} — The results of the selfconsistent theory, Eqs. (\[gc\]–\[pnumber\]), and for the conductivity, Eq. (\[cond1\]), are presented in Figs. \[fig1\]–\[fig3\]. They allow us to draw a complete picture of the FM semiconductor-metal transition in Gd-doped EuO. The spectral densities per lattice site above and below the transition are shown in Fig. \[fig1\]. In the paramagnetic, insulating phase the hybridization between $d$– and $c$–electrons necessarily implies the appearance of a conduction electron sideband (Fig. \[fig1\], inset), situated below $\mu$ and at the same energies inside the semiconducting gap as the impurity $d$–band. The $d$-band (not shown) has a similar width and shape to the $c$-sideband. The combined weight of the $c$–sideband and the $d$-band adjusts itself selfconsistently such that it just accommodates the total electron number, $n=n_I$. Note that the weight of the $d$–band per impurity and spin is $\lesssim 1/2$, because the doubly occupied weight is shifted to $U\to \infty$ [@costi]. \ The $c$–4$f$ exchange coupling $J_{cf}$ induces an effective FM coupling between the electrons of the $c$–$d$ system. Hence, either the 4$f$– or the $c$–$d$–electron system can drive a FM transition, depending on which of the (coupled) subsystems has the higher $T_C$.
We have chosen $J_{cf}$ (see above) large enough that the transition is driven by the $c$–$d$–electrons, because this will yield detailed agreement with the experiments [@steeneken; @ott; @matsumoto]. In this case, $T_C$ is naturally expected to increase with the impurity density $n_I$. The results for the $T$-dependent conduction electron magnetization $m(T)$, Eq. (\[magn\]), and for the doping dependence of $T_C$ are shown in Fig. \[fig2\], lower panel, and in Fig. \[fig3\], right panel, respectively. It is seen that not only $T_C$ increases with the impurity concentration, in agreement with recent measurements on Eu$_{1-y}$Gd$_{y}$O$_{1-x}$ [@matsumoto; @ott], but also that $m(T)$ has a dome-like tail near $T_C$, before it increases to large values deep inside the FM phase. From our theory this feature is traced back to the mean-field-like 2nd order FM transition of the electron system, while the large dome in the magnetization further below $T_C$ is induced by the FM ordering of the 4$f$ system, whose magnetization is controlled by $J_{4f}$ and sets in at lower $T$. This distinct feature is again in agreement with the experimental findings [@matsumoto; @ott] and lends significant support for the present model for Eu$_{1-y}$Gd$_{y}$O. We note that the Eu-rich EuO$_{1-x}$ samples of Ref. [@matsumoto] also show a magnetization tail and a $T_C$ enhancement, suggesting (small) magnetic moments on the O defects. However, the nature of the O defects requires further experimental and theoretical studies. The conduction electron polarization $P(T)=m(T)/n_c(T)$ does not show this double-dome structure and below $T_C$ increases steeply to $P=1$ (not shown in Fig. \[fig2\]). The FM phase is connected with a spin splitting of the $c$– as well as the $d$–densities of states, as shown in Fig. \[fig1\]. The narrow $d$-band induces a Fano dip structure in the $c$ majority band and a small sideband in the $c$ minority band. 
Note that for the present scenario the existence of preformed local moments on the impurities, induced by strong Coulomb repulsion $U$, is essential. Without these moments the transition of the electron system would be purely Stoner-like, and, because of the extremely low conduction electron DOS at the Fermi level, its $T_C$ would be far below the Curie temperature of the 4$f$ system, so that no doping dependence would be expected [@sinjukow]. \ We now discuss the conductivity and the simultaneity of the FM and the SM transitions. In the paramagnetic phase, the system is weakly semiconducting, because $\mu$ lies in the gap (Fig. \[fig1\], inset). When the FM transition occurs, the impurity $d$–band must acquire a spin splitting in such a way that at least part of the minority $d$–spectral weight lies above the chemical potential $\mu$, in order to provide a finite magnetization. Since near the transition the spin splitting is small, the majority $d$–band must, therefore, also be shifted to have overlap with $\mu$ (Fig. \[fig1\]), and so must the hybridization-induced $c$-electron sideband (which eventually merges with the main conduction band for $T$ sufficiently below $T_C$). This immediately implies a transition to a metallic state, simultaneous with the FM transition, as seen in Fig. \[fig2\]. Because of the small, but finite thermal occupation of the states around $\mu$, we find that this shifting of spectral weight occurs continuously, which implies the FM semiconductor-metal transition to be of 2nd order (see Fig. \[fig2\]). The doping $n_I$ dependence of the conductivity is shown in Fig. \[fig3\], left panel. It is seen that the metallic transition can be driven by increasing $n_I$, if $T>T_C(n_I=0)$. As an alternative to Gd-doping the charge carrier concentration $n$ can be controlled independently of the impurity concentration $n_I$ by varying the chemical potential $\mu$, e.g. by applying a gate voltage to an EuO thin film.
The conductivity $\sigma$ and magnetization $m$ as a function of $\mu$ are shown in Fig. \[fig4\] for two temperatures. On both sides of the ungated system ($n=n_I$) $\sigma$ increases exponentially upon changing $\mu$, characteristic for semiconducting behavior. By increasing $\mu$, the FM-metallic transition is finally reached. That is, the magnetization can be switched, in principle, by a gate voltage. The non-monotonic behavior of $\sigma$ towards more negative $\mu$ reflects the energy dependence of the $c$ sideband. A more detailed study will be presented elsewhere. To conclude, our theory indicates that in Gd-doped EuO the existence of preformed local moments on the impurity levels inside the semiconducting gap is essential for understanding the distinct shape of the magnetization $m(T)$ near the ferromagnetic semiconductor-metal transition. The FM ordering is driven by these impurity moments which are superexchange coupled via the 4$f$ moments of the underlying Eu lattice. This scenario immediately implies an increase of the Curie temperature with the impurity concentration, in agreement with experiments. The double-dome shape of $m(T)$ arises because of the successive ordering of the dilute impurity and of the dense Eu 4$f$ systems, as $T$ is lowered. The dynamical accumulation of conduction spectral weight at the chemical potential, induced by the hybridization $V$ and the constraint of an emerging magnetization at the FM transition, implies the FM and the SM transition to be simultaneous and of 2nd order. The magnetization can be switched by applying a gate voltage. This might be relevant for spintronics applications. We wish to thank T. Haupricht, H. Ott, and H. Tjeng for useful discussions. J.K. is grateful to the Aspen Center for Physics where this work was completed. This work is supported by DFG through SFB 608. [10]{} M. Oliver [*et al.*]{}, Phys. Rev. Lett. [**[24]{}**]{}, 1064 (1970). M. Oliver [*et al.*]{}, Phys. Rev. B [**[5]{}**]{}, 1078 (1972).
T. Penney, M. W. Shafer, and J. B. Torrance, Phys. Rev. B [**[5]{}**]{}, 3669 (1972). P. B. Steeneken et al., Phys. Rev. Lett. [**[88]{}**]{},047201 (2002). Y. Shapira, T. Foner, and S. B. Reed, Phys. Rev. B [**[8]{}**]{}, 2299 (1973). For a review see, e.g., M. Imada, A. Fujimori, and Y. Tokura, Rev. Mod. Phys. [**70**]{}, 1039 (1998). H. Ott [*et al.*]{}, Phys. Rev. B [**[73]{}**]{}, 094407 (2006). A. Schmehl [*et al.*]{}, Nature Materials doi:10.1038/nmat2012 (2007). R. Schiller, W. Müller, and W. Nolting, Phys. Rev. B [**[64]{}**]{},134409 (2001). P. Sinjukow and W. Nolting, Phys. Rev. B [**[68]{}**]{}, 125107 (2003); Phys. Rev. B [**[69]{}**]{}, 214432 (2004). V.-C. Lee and L. Liu , Phys. Rev. B [**[30]{}**]{}, 2026 (1984). T.Matsumoto et al., [*[J.Phys.]{}*]{} [**[16]{}**]{},6017 (2004). D. I. Golosov, Phys. Rev. B [**71**]{}, 014428 (2005). M. D. Kapetanakis, A. Manousaki, and I. E. Perakis, Phys. Rev. B [**73**]{}, 174424 (2006); M. D. Kapetanakis and I. E. Perakis, Phys. Rev. B [**75**]{}, 140401(R) (2007). N. Grewe and H. Keiter, Phys. Rev. B [**[24]{}**]{},4420 (1981); Y. Kuramoto, Z. Phys. B [**[53]{}**]{}, 37 (1983). T. A. Costi, J. Kroha, and P. Wölfle, Phys. Rev. B [**53**]{}, 1850 (1996). S. Kirchner and J. Kroha, J. Low Temp. Phys. [**126**]{}, 1233 (2002); arXiv:cond-mat/0202351. H. Schweitzer and G. Czycholl, Phys. Rev. Lett. [**[67]{}**]{}, 3724 (1991); T. Pruschke, M. Jarrell and J. Freericks, Adv. Phys. [**[44]{}**]{}, 187 (1995).
--- abstract: 'In this contribution, the evaluation of the diversity of the MIMO MMSE receiver is addressed for finite rates in both flat fading channels and frequency selective fading channels with cyclic prefix. It has been observed recently that in contrast with the other MIMO receivers, the MMSE receiver has a diversity depending on the aimed finite rate, and that for sufficiently low rates the MMSE receiver reaches the full diversity - that is, the diversity of the ML receiver. This behavior has so far only been partially explained. The purpose of this paper is to provide complete proofs for flat fading MIMO channels, and to improve the partial existing results in frequency selective MIMO channels with cyclic prefix.' author: - | Florian Dupuy and Philippe Loubaton\ Thales Communication EDS/SPM, 92704 Colombes (France)\ Université Paris Est, IGM LabInfo, UMR-CNRS 8049, 77454 Marne-la-Vallée (France),\ Telephone: +33 146 132 109, Fax: +33 146 132 555, Email: fdupuy@univ-mlv.fr\ Telephone: +33 160 957 293, Fax: +33 160 957 755, Email: loubaton@univ-mlv.fr bibliography: - 'IEEEabrv.bib' - 'bibMarne.bib' title: Diversity of the MMSE receiver in flat fading and frequency selective MIMO channels at fixed rate --- Diversity, Flat fading MIMO channels, Frequency selective MIMO channels, Outage probability, MMSE receiver Introduction ============ The diversity-multiplexing trade-off (DMT) introduced by [@zheng2003diversity] studies the diversity as a function of the multiplexing gain in the high SNR regime. [@kumar2009asymptotic] showed that the MMSE linear receivers, widely used for their simplicity, exhibit a largely suboptimal DMT in flat fading MIMO channels. Nonetheless, for a finite data rate (i.e.
when the rate does not increase with the signal to noise ratio), the MMSE receivers take several diversity values, depending on the aimed rate, as noticed earlier in [@hedayat2005linear], and also in [@hedayat2004outage; @tajer2007diversity] for frequency-selective MIMO channels. In particular they achieve full diversity for sufficiently low data rates, hence their great interest. This behavior was partially explained in [@kumar2009asymptotic; @mehana2010diversity] for flat fading MIMO channels and in [@mehana2011diversity] for frequency-selective MIMO channels. Indeed the proof of the upper bound on the diversity order for the flat fading case given in [@mehana2010diversity] contains a gap, and the approach of [@mehana2010diversity] based on the Specht bound seems to be unsuccessful. As for MIMO frequency selective channels with cyclic prefix, [@mehana2011diversity] only derives the diversity in the particular case of a number of channel taps equal to the transmission data block length, and claims that this value provides an upper bound in more realistic cases, whose expression is however not explicitly given. In this paper we provide a rigorous proof of the diversity for MMSE receivers in flat fading MIMO channels for finite data rates. We also derive the diversity in MIMO frequency selective channels with cyclic prefix for finite data rates if the transmission data block length is large enough. Simulations corroborate our derived diversity in the frequency selective channel case. Problem statement ================= We consider a MIMO system with $M$ transmit and $N \geq M$ receive antennas, with coding and ideal interleaving at the transmitter, and with a MMSE linear equalizer at the receiver, followed by a de-interleaver and a decoder (see Fig. \[fig:scheme\]). We evaluate in the following sections the achieved diversity by studying the outage probability, that is the probability that the capacity does not support the target data rate, at high SNR regimes.
We denote by $\rho$ the SNR, by $I$ the capacity, and by $R$ the target data rate. We use the notation $\doteq$ for [*exponential equality*]{} [@zheng2003diversity], i.e. $$f(\rho) \doteq \rho^d \Leftrightarrow \lim_{\rho \to \infty} \frac{\log f(\rho)}{\log \rho}= d, \label{eq:exp-equ}$$ and the notations $\dot\leq$ and $\dot\geq$ for exponential inequalities, which are similarly defined. We denote by $\log$ the logarithm to base $2$. ![image](div_sch3){width="6.5in"} Flat fading MIMO channels ========================= In this section we consider a flat fading MIMO channel. The output of the MIMO channel is given by $$\y= \sqrt{\frac{\rho}{M}} \H \x + \n,$$ where $\n \sim \mathcal{CN}({\boldsymbol}{0},\I_N)$ is the additive white Gaussian noise and $\x$ the channel input vector, $\H$ the $N \times M$ channel matrix with i.i.d. entries $\sim \mathcal{CN}(0,1)$. For a rate $R$ such that $\log \frac{M}{m} < \frac{R}{M} < \log \frac{M}{m-1}$, with $m \in \{ 1, \ldots, M \}$, the outage probability verifies $$\PP(I<R) \doteq \rho^{-m(N-M+m)},$$ that is, a diversity of $m(N-M+m)$. Note that for a rate $R < M \log \frac{M}{M-1}$ (i.e. $m=M$) full diversity $MN$ is attained, while for a rate $R > M \log M$ the diversity corresponds to the one derived by the DMT approach. This result was stated by [@mehana2010diversity]. Nevertheless the proof of the outage lower bound in [@mehana2010diversity] overlooks the fact that the event denoted $\mathcal{B}_a$ is not independent of the eigenvalues of $\H^H\H$, hence questioning the validity of the given proof. We thus provide an alternative proof based on an approach suggested by the analysis of [@kumar2009asymptotic] in the case where $R = r \log \rho$ with $r > 0$.
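The exponential-equality notation can be illustrated numerically: the diversity order is the asymptotic slope of $\log f(\rho)$ versus $\log \rho$. A minimal sketch (the helper name is ours):

```python
import numpy as np

def diversity_order(f, rhos):
    """Estimate d in f(rho) =. rho^d as the slope of log f versus
    log rho between the two largest SNR points."""
    lr, lf = np.log(rhos), np.log(f(rhos))
    return (lf[-1] - lf[-2]) / (lr[-1] - lr[-2])
```

For an outage probability behaving as $\rho^{-d}$ the estimate returns $-d$; for instance $f(\rho)=5\rho^{-3}$ gives slope $-3$, since the constant factor drops out of the slope.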
The capacity $I$ of the considered MIMO MMSE system is given by $$I = \sum_{j=1}^M \log ( 1 + \beta_j),$$ where $\beta_j$ is the SINR for the $j$th stream: $$\beta_j= \frac{1}{\left( \left[ \I + \frac{\rho}{M} \H^*\H \right]^{-1} \right)_{jj} } - 1.$$ We first lower bound $\PP(I<R)$, and then prove that the bound is tight by upper bounding $\PP(I<R)$ with the same exponential order. Lower bound of the outage probability {#sec:lowB_flat} ------------------------------------- We here assume that $R/M>\log (M/m)$. In order to lower bound $\PP(I<R)$ we need to upper bound the capacity $I$. Using Jensen’s inequality on the function $x \mapsto \log x$ yields $$\begin{aligned} I &\leq M \log \Bigg[ \frac{1}{M} \sum_{j=1}^M \left( 1+\beta_j \right) \Bigg] \label{ineq:jensen1} \\ &= M \log \Bigg[ \frac{1}{M} \sum_{j=1}^M \bigg( \left[ \left( \I + \frac{\rho}{M} \H^*\H \right)^{-1} \right]_{jj} \bigg)^{-1} \Bigg]. \label{ineq:logconcave} \end{aligned}$$ We denote by $\H^*\H= \U^*\Lambda\U$ the spectral decomposition of $\H^*\H$ with $\Lambda=\mathrm{diag}(\lambda_1,\ldots,\lambda_M)$, $\lambda_1 \leq \lambda_2 \ldots \leq \lambda_M$. We recall that the $(\lambda_k)_{k=1, \ldots, M}$ are independent of the entries of matrix $\U$ and that $\U$ is a Haar distributed unitary random matrix, i.e. the probability distribution of $\U$ is invariant by left (or right) multiplication by deterministic matrices. Using this decomposition we can write $$\frac{1}{M} \sum_{j=1}^M \bigg( \left[ \left( \I + \frac{\rho}{M} \H^*\H \right)^{-1} \right]_{jj} \bigg)^{-1} = \frac{1}{M} \sum_{j=1}^M \bigg( \sum_{k=1}^M \frac{|\U_{kj}|^2}{1+ \frac{\rho}{M} \lambda_k } \bigg)^{-1}. \label{eq:sumSINR} $$ ### Case m=1 In order to better understand the outage probability behavior, we first consider the case $m=1$. In this case $R/M>\log M$.
We review the approach of [@kumar2009asymptotic III], which consists in upper bounding by $\left( 1+ \frac{\rho}{M} \lambda_1 \right) \frac{1}{M} \sum_{j=1}^M \frac{1}{|\U_{1j}|^2}$, as $\sum_{k=1}^M \frac{|\U_{kj}|^2}{1+ \frac{\rho}{M} \lambda_k } \geq \frac{|\U_{1j}|^2}{1+\frac{\rho}{M}\lambda_1}$. Using this bound in gives $$I \leq M \log \bigg[ \left( 1 + \frac{\rho}{M} \lambda_1 \right) \frac{1}{M} \sum_{j=1}^M \frac{1}{|\U_{1j}|^2} \bigg].$$ Therefore $$\Big( \left( 1 + \frac{\rho}{M} \lambda_1 \right) \frac{1}{M} \sum_{j=1}^M \frac{1}{|\U_{1j}|^2} < 2^{R/M} \Big) \subset (I<R).$$ In order to lower bound $\PP(I<R)$, [@kumar2009asymptotic] introduced the set $$\mathcal{A}_1 = \bigg\{ \frac{1}{M} \sum_{j=1}^M \frac{1}{|\U_{1j}|^2} < M + \eps \bigg\}$$ for $\eps > 0$. Then, $$\begin{aligned} \PP(I <R) & \geq \PP\left( (I<R) \cap \mathcal{A}_1 \right) \\ & \geq \PP \bigg[ \bigg( \left( 1 + \frac{\rho}{M} \lambda_1 \right) \frac{1}{M} \sum_{j=1}^M \frac{1}{|\U_{1j}|^2} < 2^{R/M} \bigg) \cap \mathcal{A}_1\bigg] \\ & \geq \PP \left[ \left( 1 + \frac{\rho}{M} \lambda_1 < \frac{2^{R/M}}{M+\eps} \right) \cap \mathcal{A}_1\right] \\ & = \PP(\mathcal{A}_1) \cdot \PP\left[ 1 + \frac{\rho}{M} \lambda_1 < \frac{2^{R/M}}{M+\eps} \right], \end{aligned}$$ where the last equality comes from the independence between eigenvectors and eigenvalues of Gaussian matrix $\H^*\H$. It is shown in [@kumar2009asymptotic Appendix A] that $\PP(\mathcal{A}_1) \neq 0$. Besides, as we supposed $2^{R/M}>M$, we can take $\eps$ such that $\frac{2^{R/M}}{M+\eps}>1$, ensuring that $\PP\Big[ \left( 1 + \frac{\rho}{M} \lambda_1 \right) < \frac{2^{R/M}}{M+\eps} \Big] \neq 0$. Hence there exists $\kappa>0$ such that $$\PP(I<R) \ \dot\geq \ \PP\left( \lambda_1 < \frac{\kappa}{\rho} \right),$$ which is asymptotically equivalent to $\rho^{-(N-M+1)}$ in the sense of (see, e.g., [@jiang2011performance Th. II.3]). 
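The chain of inequalities above can also be checked numerically. The following sketch (ours) draws random channels and verifies for every realization that $I \leq M \log\big[\big(1+\frac{\rho}{M}\lambda_1\big)\frac{1}{M}\sum_{j} |\U_{1j}|^{-2}\big]$; with `numpy`'s eigendecomposition $\H^*\H = \V\Lambda\V^*$ (ascending eigenvalues), the row of $\U$ attached to $\lambda_1$ corresponds to the first column of $\V$.

```python
import numpy as np

rng = np.random.default_rng(2)
M = N = 3
rho = 50.0
violations = 0
for _ in range(500):
    H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
    W = H.conj().T @ H
    G = np.linalg.inv(np.eye(M) + (rho / M) * W)
    I_cap = -np.sum(np.log2(np.real(np.diag(G))))   # I = sum_j log2(1 + beta_j)
    lam, V = np.linalg.eigh(W)                      # ascending; W = V diag(lam) V^*
    # with H^*H = U^* Lambda U, |U_{1j}|^2 = |V[j, 0]|^2
    bound = M * np.log2((1 + rho * lam[0] / M) * np.mean(1.0 / np.abs(V[:, 0]) ** 2))
    violations += I_cap > bound + 1e-9
print(violations)  # 0 expected: the bound holds for every realization
```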
### General case $1 \leq m \leq M$ By the same token as for $m=1$, we now consider the general case – we recall that we assumed $\log (M/m) < R/M$. We first lower bound $\sum_k\frac{|\U_{kj}|^2}{1+ \frac{\rho}{M} \lambda_k}$, which appears in , by the first $m$ terms of the sum, and then use Jensen’s inequality applied to $x \mapsto x^{-1}$, yielding $$\begin{aligned} \sum_{k=1}^M \frac{|\U_{kj}|^2}{1+ \frac{\rho}{M} \lambda_k} &\geq \sum_{k=1}^m \frac{|\U_{kj}|^2}{1+ \frac{\rho}{M} \lambda_k} \\ &\geq \frac{\left( \sum_{l=1}^m |\U_{lj}|^2 \right)^2}{\sum_{k=1}^m |\U_{kj}|^2 \left( 1+ \frac{\rho}{M} \lambda_k\right)}. \end{aligned}$$ Using this inequality in , we obtain that $$\begin{aligned} \frac{1}{M} \sum_{j=1}^M \bigg( \left[ \left( \I + \frac{\rho}{M} \H^*\H \right)^{-1} \right]_{jj} \bigg)^{-1} \leq \frac{1}{M} & \sum_{j=1}^M \frac{\sum_{k=1}^m |\U_{kj}|^2\left( 1+ \frac{\rho}{M} \lambda_k\right)}{\left( \sum_{l=1}^m |\U_{lj}|^2 \right)^2} \notag \\ & = \sum_{k=1}^m \left( 1+ \frac{\rho}{M} \lambda_k\right) \delta_k(\U), \label{ineq:intr_delta} \end{aligned}$$ where $\delta_k(\U) = \frac{1}{M} \sum_{j=1}^M \frac{|\U_{kj}|^2}{\left( \sum_{l=1}^m |\U_{lj}|^2 \right)^2}$. Equation , together with , yields the following inclusion: $$\begin{aligned} \Bigg( \sum_{k=1}^m \delta_k(\U) \left( 1+ \frac{\rho}{M} \lambda_k\right) < 2^{R/M} \Bigg) \subset (I<R). \end{aligned}$$ Similarly to the case $m=1$, we introduce the set $\mathcal{A}_m$ defined by $$\mathcal{A}_m = \left\{ \delta_k(\U) < \frac{M}{m^2} + \eps, \ k=1,\ldots,m \right\}$$ for $\eps > 0$. We now use this set to lower bound $\PP(I<R)$. 
$$\begin{aligned} \PP(I<R) & \geq \PP\left( (I<R) \cap \mathcal{A}_m \right) \\ & \geq \PP \bigg[ \Bigg( \sum_{k=1}^m \delta_k(\U) \left( 1+ \frac{\rho}{M} \lambda_k\right) < 2^{R/M} \Bigg) \cap \mathcal{A}_m\bigg] \\ & \geq \PP \left[ \Bigg( \sum_{k=1}^m \left( 1+ \frac{\rho}{M} \lambda_k\right) < \frac{2^{R/M}}{\frac{M}{m^2} + \eps} \Bigg) \cap \mathcal{A}_m\right] \\ &= \PP(\mathcal{A}_m) \cdot \PP\left[ \sum_{k=1}^m \left( 1 + \frac{\rho}{M} \lambda_k \right) < \frac{2^{R/M}}{\frac{M}{m^2}+\eps} \right]. \end{aligned}$$ The independence between the eigenvectors and the eigenvalues of the Gaussian matrix $\H^*\H$ justifies the last equality. As we assumed that $\log(M/m) < R/M$, that is $m < \frac{2^{R/M}}{M/m^2}$, we can choose $\eps$ such that $m < \frac{2^{R/M}}{M/m^2 +\eps}$. This ensures that $\PP\left[ \sum_{k=1}^m \left( 1 + \frac{\rho}{M} \lambda_k \right) < \frac{2^{R/M}}{ M/m^2 +\eps} \right] \neq 0$. We show in Appendix \[apx:prob\_sum\_first\_ev\] that this probability is asymptotically equivalent to $\rho^{-m(N-M+m)}$ in the sense of , leading to $$\PP(I<R) \ \dot\geq \ \frac{\PP(\mathcal{A}_m)}{\rho^{m(N-M+m)}}. \label{ineq:P_I<R}$$ We still need to prove that $\PP(\mathcal{A}_m) \neq 0$. Any Haar distributed random unitary matrix can be parameterized by $M^2$ independent angular random variables $(\alpha_1, \ldots, \alpha_{M^2})={\boldsymbol}\alpha$ whose probability distributions are almost surely positive (see [@dita2003factorization; @lundberg2004haar] and Appendix \[apx:ang\_par\]). We denote by $\Phi_m$ the function such that $\U=\Phi_m({\boldsymbol}\alpha)$. Consider a deterministic unitary matrix $\U_*$ such that $|(\U_*)_{ij}|^2 = \frac{1}{M} \ \forall i,j$, and denote by ${\boldsymbol}\alpha_*$ a corresponding $M^2$ dimensional vector. It is straightforward to check that $(\delta_k \circ \Phi_m) ({\boldsymbol}\alpha_*) = M / m^2$. 
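The choice of $\U_*$ can be made concrete: the normalized DFT matrix has $|(\U_*)_{ij}|^2 = 1/M$ for all $i,j$. The following sketch (ours) checks the value $\delta_k(\U_*) = M/m^2$ numerically:

```python
import numpy as np

M, m = 4, 2
idx = np.arange(M)
U = np.exp(-2j * np.pi * np.outer(idx, idx) / M) / np.sqrt(M)  # normalized DFT: |U_ij|^2 = 1/M
P = np.abs(U) ** 2

# delta_k(U) = (1/M) sum_j |U_kj|^2 / (sum_{l<=m} |U_lj|^2)^2
denom = P[:m, :].sum(axis=0) ** 2
delta = np.array([np.mean(P[k, :] / denom) for k in range(m)])
print(delta)  # each entry equals M / m^2 = 1.0 here
```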
Functions ${\boldsymbol}\alpha \mapsto (\delta_k \circ \Phi_m)({\boldsymbol}\alpha)$ are continuous at the point ${\boldsymbol}\alpha_*$ for $1 \leq k \leq m$, and therefore there exists $\eta>0$ such that the ball $\mathcal{B} \left( {\boldsymbol}\alpha_*, \eta \right)$ is included in the set $\left\{ {\boldsymbol}\alpha, \ (\delta_k \circ \Phi_m)({\boldsymbol}\alpha) < \frac{M}{m^2}+\eps, \ k=1, \ldots, m \right\}$. We therefore have $\PP(\mathcal{A}_m) \neq 0$, as $$\begin{aligned} \PP(\mathcal{A}_m) & = \int_{\left\{ (\delta_k \,\circ\, \Phi_m) ({\boldsymbol}\alpha) < \frac{M}{m^2} + \eps, \, k=1, \ldots, m \right\}} p({\boldsymbol}\alpha) d{\boldsymbol}\alpha \\ & > \int_{\mathcal{B} \left( {\boldsymbol}\alpha_*, \eta \right)} p({\boldsymbol}\alpha) d{\boldsymbol}\alpha > 0. \end{aligned}$$ Coming back to , we eventually have $$\PP(I<R) \ \dot\geq \ \frac{1}{\rho^{m(N-M+m)}},$$ that is, the diversity of the MMSE receiver is upper bounded by $m(N-M+m)$. Upper bound of the outage probability {#sec:upper-flat} ------------------------------------- We now conclude by studying the upper bound of the outage probability, showing that $m(N-M+m)$ is also a lower bound for the diversity. Note that this lower bound has been derived in [@kumar2009asymptotic; @mehana2010diversity] using, however, rather informal arguments; we provide a more rigorous proof here for the sake of completeness. We now assume that $R/M < \log (M/(m-1))$, i.e. $m-1 < M 2^{-R/M}$. Using Jensen’s inequality on the function $y \mapsto \log(1/y)$, the capacity $I$ can be lower bounded: $$\begin{aligned} I &= - \sum_{j=1}^M \log \left( \left[ \left( \I + \frac{\rho}{M} \H^*\H \right)^{-1} \right]_{jj} \right) \\ &\geq -M \log \left( \frac{1}{M} \Tr \left[ \left( \I + \frac{\rho}{M} \H^*\H \right)^{-1} \right] \right), \end{aligned}$$ which leads to an upper bound for the outage probability: $$\PP(I<R) \leq \PP \left[ \Tr \left[ \left( \I + \frac{\rho}{M} \H^*\H \right)^{-1} \right] > M 2^{-R/M} \right]. 
\label{ineq:PP-B0}$$ We need to derive the probability in the right-hand side of the above inequality. Noting $\mathcal{B}_0= \left\{ \lambda_1 \leq \lambda_2 \ldots \leq \lambda_M, \ \sum_{k=1}^M \left( 1 + \frac{\rho}{M} \lambda_k \right)^{-1} > M 2^{-R/M} \right\}$, $$\begin{aligned} \PP \bigg[ \Tr \Big[ \Big( \I + & \frac{\rho}{M} \H^*\H \Big)^{-1} \Big] > M 2^{-R/M} \bigg] \notag \\ & = \int_{\mathcal{B}_0} p(\lambda_1, \ldots, \lambda_M) d\lambda_1 \ldots d\lambda_M. \label{eq:int_B0_PP} \end{aligned}$$ We now introduce $\mu_m = \sup_{(\lambda_1, \ldots, \lambda_M) \in \mathcal{B}_0}\{ \rho \,\lambda_m \}$ and prove by contradiction that $\mu_m < +\infty$. If $\mu_m = +\infty$, there exists a sequence $(\lambda_1^{(n)}, \lambda_2^{(n)}, \ldots, \lambda_M^{(n)})_{n\in\mathbb{N}}$ such that $\lambda_k^{(n)} \rightarrow +\infty$ for any $k \geq m$. Besides, $$M 2^{-R/M} \mhs < \sum_{k=1}^M \mhs \Big( 1+\frac{\rho}{M}\lambda^{(n)}_k \Big)^{-1} \mhs \leq (m-1) + \sum_{k=m}^M \mhs \Big(1+\frac{\rho}{M}\lambda^{(n)}_k\Big)^{-1}.$$ In particular $M 2^{-R/M} < (m-1) + \sum_{k=m}^M \mhs \big(1+\frac{\rho}{M}\lambda^{(n)}_k\big)^{-1} $, which, taking the limit when $n \rightarrow +\infty$, leads to $m-1 \geq M 2^{-R/M}$, a contradiction with the assumption $m-1 < M 2^{-R/M}$. Hence, $\mu_m < +\infty$. We introduce the set $\mathcal{B}_1= \{ \lambda_1 \leq \lambda_2 \ldots \leq \lambda_M, \, 0 < \lambda_k \leq \frac{\mu_m}{\rho}, \, k=1,\ldots,m \}$, which verifies $\mathcal{B}_0 \subset \mathcal{B}_1$. Using and , this implies that $$\PP(I<R) \leq \int_{\mathcal{B}_1} p(\lambda_1, \ldots, \lambda_M) d\lambda_1 \ldots d\lambda_M,$$ which is shown to be asymptotically smaller than $\rho^{-m(N-M+m)}$ in the sense of in Appendix \[apx:prob\_first\_ev\_bounded\]. The diversity is thus lower bounded by $m(N-M+m)$, ending the proof. 
Frequency selective MIMO channels with cyclic prefix {#sec:freqsel} ==================================================== We consider a frequency selective MIMO channel with $L$ independent taps. We consider a block transmission cyclic prefix scheme, with a block length of $K$. The output of the MIMO channel at time $t$ is given by $$\begin{aligned} \y_t =\sqrt{\frac{\rho}{ML}}\,\sum_{l=0}^{L-1} \H_l \x_{t-l} +\n_t = \sqrt{\frac{\rho}{ML}}\,[\H(z)] \x_t +\n_t\end{aligned}$$ where $\x_t$ is the channel input vector at time $t$, $\n_t \sim \mathcal{CN}({\boldsymbol}{0}, \I_N)$ the additive white Gaussian noise, $\H_l$ is the $N \times M$ channel matrix associated with the $l^{\mathrm{th}}$ channel tap, for $l \in \{0,\ldots, L-1\}$, and $\H(z)$ denotes the transfer function of the discrete-time equivalent channel, defined by $$\H(z) = \sum_{l=0}^{L-1} \H_l \, z^{-l}.$$ We make the common assumption that the entries of $\H_l$ are i.i.d. and $\mathcal{CN}(0,1)$ distributed. We can now state the second diversity theorem of the paper. Assume that the non-restrictive condition $K > {M^{2}(L-1)}$ holds, ensuring that $\log \frac{M}{m} < -\log\big(\frac{m-1}{M}+\frac{(L-1)(M-(m-1))}{K}\big)$ for any $m=1, \ldots, M$. Then, for a rate $R$ satisfying $$\hspace{-11pt} \textstyle \log \frac{M}{m} < \frac{R}{M} < -\log \left(\frac{m-1}{M} + \frac{(L-1)(M-(m-1))}{K} \right), \label{eq:bounds_R_fsel}$$ $m \in \{ 1, \ldots, M \}$, the outage probability satisfies $$\PP(I<R) \doteq \rho^{-m(LN-M+m)},$$ that is, a diversity of $m(LN-M+m)$. The diversity of the MMSE receiver is thus $m(LN-M+m)$, corresponding to a flat fading MIMO channel with $M$ transmit antennas and $LN$ receive antennas. For a large block length $K$, the upper bound on the rate $R$ is close to the bound $\log \frac{M}{m-1}$ of the previous flat fading case. 
Concerning data rates satisfying $ -\log\big(\frac{m-1}{M} + \frac{L-1}{K} (M-(m-1))\big) < \frac{R}{M} < \log \frac{M}{m-1}, $ the $m(LN-M+m)$ diversity is only an upper bound; nevertheless, the diversity is also lower bounded by $(m-1)(LN-M+(m-1))$. Similarly to the previous section, the capacity of the MIMO MMSE system is written $ I = \sum_{j=1}^M \log ( 1 + \beta_j), $ where $\beta_j$ is the SINR for the $j$th stream of $\x_t$. It is standard material that, in a MIMO frequency selective channel with cyclic prefix, the SINR of the MMSE receiver is given by $$\beta_j = \frac{1}{ \frac{1}{K}\sum_{k=1}^K \left[ \left( \S\left(\frac{k-1}{K}\right) \right)^{-1} \right]_{jj} } -1, \label{eq:SINR_freqsel}$$ where $\S(\nu)= \I_M + \frac{\rho}{M} \H(e^{2 i \pi \nu})^*\H(e^{2 i \pi \nu}) $. Lower bound for the outage probability -------------------------------------- We assume that $R/M > \log(M/m)$. One can show that the function $\A \mapsto (\A^{-1})_{jj}$, defined over the set of positive-definite matrices, is convex. Using Jensen’s inequality then yields $$\begin{aligned} \frac{1}{K}\sum_{k=1}^K {\textstyle \left[ \left( \S\left(\frac{k-1}{K}\right) \right)^{-1} \right]_{jj} } & \geq \bigg( \bigg[ \frac{1}{K} \sum_{k=1}^K {\textstyle \S\left(\frac{k-1}{K}\right) }\bigg]^{-1} \bigg)_{jj} \\ & = \bigg( \bigg[ \I_M + \sum_{l=0}^{L-1} \frac{\rho}{M} \H_l^*\H_l \bigg]^{-1} \bigg)_{jj}. \end{aligned}$$ The last equality follows from the fact that $\frac{1}{K}\sum_{k=1}^K e^{2 i\pi \frac{k-1}{K}(l-n)}=\delta_{ln}$. 
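The orthogonality identity invoked in this last equality can be checked numerically: averaging $\H(e^{2 i\pi \frac{k-1}{K}})^*\H(e^{2 i\pi \frac{k-1}{K}})$ over the $K$ subcarriers cancels all cross terms between taps whenever $K \geq L$. A small sketch (ours, arbitrary dimensions):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, L, K = 2, 3, 4, 16  # K >= L so that (1/K) sum_k e^{2 i pi (k-1)(l-n)/K} = delta_{ln}
Hl = [(rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
      for _ in range(L)]

def H_of(nu):
    """Transfer function H(e^{2 i pi nu}) = sum_l H_l e^{-2 i pi nu l}."""
    return sum(Hl[l] * np.exp(-2j * np.pi * nu * l) for l in range(L))

avg = sum(H_of(k / K).conj().T @ H_of(k / K) for k in range(K)) / K
direct = sum(Hl[l].conj().T @ Hl[l] for l in range(L))
print(np.max(np.abs(avg - direct)))  # numerically zero
```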
Using this inequality in the SINR expression gives $$1+\beta_j \leq \bigg( \bigg( \bigg[ \I_M + \sum_{l=0}^{L-1} \frac{\rho}{M} \H_l^*\H_l \bigg]^{-1} \bigg)_{jj} \bigg)^{-1}.$$ We now come back to the capacity $I$ of the system; similarly to , using Jensen’s inequality yields $$\begin{aligned} I & \leq M \log \Bigg[ \frac{1}{M} \sum_{j=1}^M \left( 1+\beta_j \right) \Bigg] \\ & \leq M \log \Bigg[ \frac{1}{M} \sum_{j=1}^M \bigg( \bigg( \bigg[ \I_M + \frac{\rho}{M} \sum_{l=0}^{L-1} \H_l^*\H_l \bigg]^{-1} \bigg)_{jj} \bigg)^{-1} \,\Bigg]. \end{aligned}$$ We can now use the results of section \[sec:lowB\_flat\] by simply replacing the $N \times M$ matrix $\H$ in by the $LN \times M$ matrix $\Hb=[\H_0^T, \H_1^T, \ldots, \H_{L-1}^T]^T$. They lead to the following lower bound for the outage probability, for a rate $R$ satisfying $R/M > \log (M/m)$: $$\PP(I<R) \ \dot\geq \ \frac{1}{\rho^{m(LN-M+m)}}.$$ Upper bound for the outage probability -------------------------------------- We assume that $\frac{R}{M} < -\log\big(\frac{m-1}{M}+\frac{(L-1)(M-(m-1))}{K}\big)$, that is $2^{-R/M} < \frac{m-1}{M} \mhs + \mhs \frac{L-1}{K} (M \mhs - \mhs (m-1))$. We first derive a lower bound for the capacity $I$. $$\begin{aligned} I & = - \sum_{j=1}^M \log \left( \frac{1}{K}\sum_{k=1}^K \left( \left[ {\textstyle \S\left(\frac{k-1}{K}\right)} \right]^{-1} \right)_{jj} \right) \\ & \geq - M \log \left( \frac{1}{KM} \sum_{k=1}^K \Tr\left( \left[ {\textstyle \S\left(\frac{k-1}{K}\right)} \right]^{-1} \right) \right) \end{aligned}$$ The latter inequality follows once again from Jensen’s inequality on the function $x \mapsto \log x$. We now analyze $\Tr\left(\S(\nu)^{-1}\right)$. To that end, we write the $LN \times M$ matrix $\Hb=[\H_0^T, \ldots, \H_{L-1}^T]^T$ in the form $\Hb={\boldsymbol}\Theta (\Hb^*\Hb)^{1/2}$, where ${\boldsymbol}\Theta=[{\boldsymbol}\Theta_0^T, \ldots, {\boldsymbol}\Theta_{L-1}^T]^T$ and ${\boldsymbol}\Theta^*{\boldsymbol}\Theta=\I_M$. 
Besides, we note $\U^*\Lambda\U$ the SVD of $\Hb^*\Hb$ with $\Lambda=\mathrm{diag}(\lambda_1,\ldots,\lambda_M)$, $\lambda_1 \leq \ldots \leq \lambda_M$. Hence, $$\H(e^{2 i\pi \nu})={\boldsymbol}\Theta(e^{2 i\pi \nu})\U^*\Lambda^{1/2}\U,$$ where ${\boldsymbol}\Theta(z)=\sum_{l=0}^{L-1}{\boldsymbol}\Theta_l z^{-l}$. Using this parametrization, $$\begin{aligned} \Tr\left(\S(\nu)^{-1}\right) &= \Tr \left[ \left( \I + \frac{\rho}{M} \U {\boldsymbol}\Theta^*(e^{2 i \pi \nu}){\boldsymbol}\Theta(e^{2 i \pi \nu}) \U^* \Lambda \right)^{-1} \right] \\ &\leq \Tr \left[ \left( \I + \frac{\rho}{M} \gamma(e^{2 i \pi \nu}) \Lambda \right)^{-1} \right], \end{aligned}$$ where $\gamma(\nu)=\lambda_{\mathrm{min}}({\boldsymbol}\Theta^*(e^{2 i \pi \nu}){\boldsymbol}\Theta(e^{2 i \pi \nu}))$. Coming back to the outage probability, $$\begin{aligned} \PP(I<R) \leq & \PP \mhs \left[ \frac{1}{K} \sum_{k=0}^{K-1} \sum_{j=1}^M \mhs \left( \mhs 1 + \frac{\rho \lambda_j }{M} \gamma \mhs\left(\frac{k}{K}\right) \mhs \right)^{\mhs-1} \mhs > M2^{-R/M}\right] \notag \\ &= \PP \mhs \left[ \Hb \in \mathcal{B}_0 \right], \label{ineq:PP-fs} \end{aligned}$$ where $\mathcal{B}_0 = \big\{ \Hb, \frac{1}{K} \sum_{k=0}^{K-1} \sum_{j=1}^M \mhs \big( 1 + \frac{\rho \lambda_j }{M} \gamma \mhs\left(\frac{k}{K}\right) \big)^{-1} \mhs > M2^{-R/M} \big\}$. We now prove by contradiction that $\mu_m < +\infty$, where $\mu_m =\sup_{\Hb \in \mathcal{B}_0} \{ \rho \lambda_m \}$. If $\mu_m = +\infty$ there exists a sequence of matrices $\Hb^{(n)} \mhs \in \mhs \mathcal{B}_0$ such that $\rho \lambda_m^{(n)} \rightarrow +\infty$. 
Besides, $$\begin{aligned} M 2^{-\frac{R}{M}} & < \frac{1}{K} \sum_{k=0}^{K-1} \sum_{j=1}^M \bigg( 1 + \frac{\rho \lambda_j^{(n)} }{M} \gamma^{(n)} \mhs\left(\frac{k}{K}\right) \bigg)^{\mhs-1} \notag \\ & \leq (m-1) + \frac{1}{K} \sum_{k=0}^{K-1} \sum_{j=m}^M \bigg( 1 + \frac{\rho \lambda_j^{(n)} }{M} \gamma^{(n)} \mhs\left(\frac{k}{K}\right) \bigg)^{\mhs-1} \label{ineq:mat-seq} \end{aligned}$$ As ${\boldsymbol}\Theta^{(n)}$ belongs to a compact set, we can extract a subsequence ${\boldsymbol}\Theta^{(\psi(n))}$ which converges towards a matrix ${\boldsymbol}\Theta_\infty$. For this subsequence, inequality becomes $$M 2^{-\frac{R}{M}} \leq (m-1) + \frac{1}{K} \sum_{k=0}^{K-1} \sum_{j=m}^M \bigg( 1 + \frac{\rho \lambda_j^{(\psi(n))} }{M} \gamma^{(\psi(n))} \mhs\left(\frac{k}{K}\right) \bigg)^{-1}. \label{ineq:sumsum}$$ Let $\gamma_\infty$ be the function defined by $\gamma_\infty(\nu)=\lambda_{\mathrm{min}}({\boldsymbol}\Theta^*_\infty(e^{2 i \pi \nu}){\boldsymbol}\Theta_\infty(e^{2 i \pi \nu}))$, and let $k_1, \ldots, k_p$ be the integers $k$ for which $\gamma_{\infty}(k/K) = 0$. Then $\det {\boldsymbol}\Theta_\infty(z) = \det \big( \sum_{l=0}^{L-1} {\boldsymbol}\Theta_{\infty,l} z^{-l} \big)=0$ for all $z \in \big\{ e^{2 i \pi k_j/K}, j=1,\ldots,p \big\}$. Nevertheless, $z^{M(L-1)} \det {\boldsymbol}\Theta_\infty(z)$ is a polynomial in $z$ of degree at most $M(L-1)$, therefore $p \leq M(L-1)$. Inequality then leads to $$M 2^{-\frac{R}{M}} \leq (m-1) + \frac{M(L-1)}{K} +\frac{1}{K} \sum_{k\notin\{k_1, \ldots, k_p\}} \sum_{j=m}^M \bigg( 1 + \frac{\rho \lambda_j^{(\psi(n))} }{M} \gamma^{(\psi(n))} \left( \frac{k}{K} \right) \bigg)^{\mhs-1} \label{ineq:M2-RMp}$$ Moreover, if $k \notin \{k_1, \ldots, k_p\}$, $\lambda_j^{(\psi(n))} \gamma^{(\psi(n))}(\frac{k}{K}) \rightarrow +\infty$ for $j \geq m$, as $\gamma^{(\psi(n))} \mhs\left(\frac{k}{K}\right) \rightarrow \gamma_\infty \mhs\left(\frac{k}{K}\right) \neq 0$ for $k \notin \left\{ k_1, \ldots, k_p \right\}$. 
Therefore taking the limit of when $n \rightarrow +\infty$ gives $$M 2^{-\frac{R}{M}} \leq (m-1) + \frac{M(L-1)}{K},$$ which is in contradiction with the original assumption $2^{-R/M} < \mhs \frac{m-1}{M} \mhs + \mhs \frac{L-1}{K} (M \mhs - \mhs (m-1))$. Hence $\mu_m < +\infty$, and $\mathcal{B}_0 \subset \mathcal{B}_1= \{ \Hb, \rho \lambda_m(\Hb^*\Hb) < \mu_m \}$. Using , we thus have $$\PP(I<R) \leq \PP(\Hb \in \mathcal{B}_1),$$ which, by Appendix \[apx:prob\_first\_ev\_bounded\], is asymptotically smaller than $\rho^{-m(NL-M+m)}$ in the sense of , therefore ending the proof. Numerical Results ================= We now illustrate the derived diversity in the frequency selective case. In the conducted simulation we took a block length of $K=64$, $M=N=2$ transmit and receive antennas, $L=2$ channel taps and a target data rate of $R=3$ bits/s/Hz. The rate $R$ then satisfies with $m=1$, therefore the expected diversity is $LN-M+1=3$. The outage probability is displayed in Fig. \[fig:Pout\] as a function of the SNR. We observe that the outage probability decreases by three decades per decade of SNR, i.e. a slope of $-3$, hence a diversity of $3$, confirming the result stated in Section \[sec:freqsel\]. ![Outage probability of the MMSE receiver, L=2, K=64, M=N=2[]{data-label="fig:Pout"}](div_40M){width="3in"} Conclusion ========== In this paper we provided rigorous proofs regarding the diversity of the MMSE receiver at fixed rate, in both flat fading and frequency selective MIMO channels. The higher the target rate, the lower the achieved diversity; in particular, for sufficiently low rates, the MMSE receiver achieves full diversity in both MIMO channel cases, hence its great practical interest. Nonetheless, in frequency selective channels, the diversity bounds are not tight for some specific rates; this could probably be improved. Simulations corroborate our results. {#apx:prob_sum_first_ev} We prove in this appendix that, for $b>0$, $\PP(\sum_{k=1}^m \rho \lambda_k < b) \ \dot\geq \ \rho^{-m(N-M+m)}$. 
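For $m=1$ this claim reduces to $\PP(\rho\lambda_1 < b) \doteq \rho^{-(N-M+1)}$. The following Monte Carlo sketch (ours, with arbitrary $M=2$, $N=3$, $b=1$) illustrates this scaling of the smallest Wishart eigenvalue empirically:

```python
import numpy as np

rng = np.random.default_rng(4)
M, N, b = 2, 3, 1.0  # m = 1: predicted P(rho * lambda_1 < b) ~ rho^{-(N-M+1)} = rho^{-2}

def prob_small(rho, trials=20000):
    hits = 0
    for _ in range(trials):
        H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
        lam1 = np.linalg.eigvalsh(H.conj().T @ H)[0]  # smallest Wishart eigenvalue
        hits += rho * lam1 < b
    return hits / trials

p1, p2 = prob_small(5.0), prob_small(50.0)
slope = np.log10(p1 / p2)  # expected to be close to N - M + 1 = 2 over one decade of rho
print(p1, p2, slope)
```

The estimated slope is noisy at the larger SNR, where the event becomes rare; only its order of magnitude is meaningful.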
We denote by $\mathcal{C}_m$ the set defined by $\mathcal{C}_m = \{ \lambda_1, \ldots, \lambda_m: \ 0 < \lambda_1 \leq \ldots \leq \lambda_m, \ \sum_{k=1}^m \rho \lambda_k < b \}$. As the $\lambda_i$ satisfy $0 < \lambda_1 \leq \ldots \leq \lambda_M$, we can write $$\PP \Bigg( \sum_{k=1}^m \rho \lambda_k < b \Bigg) = \int_{(\lambda_1, \ldots, \lambda_m) \in \mathcal{C}_m} \int_{\lambda_m}^{+\infty} \bhs\ldots \int_{\lambda_{M-1}}^{+\infty} p_{M,N}(\lambda_1, \ldots, \lambda_M) \ d\lambda_1 \ldots d\lambda_M, \label{eq:prob_sum_first_ev}$$ where $p_{M,N}: \mathbb{R}^M \rightarrow \mathbb{R}$ is the joint probability density function of the ordered eigenvalues of an $M \times M$ Wishart matrix with scale matrix $\I_M$ and $N$ degrees of freedom, given by (see, e.g., [@zheng2003diversity]): $$p_{M,N}(\lambda_1, \ldots, \lambda_M) = K_{M,N}^{-1} \prod_{i=1}^M \left( \lambda_i^{N-M} e^{-\lambda_i} \right) \prod_{i<j}(\lambda_i-\lambda_j)^2, \label{eq:p-wish}$$ where $K_{M,N}$ is a normalizing constant. We now separate the integral in into two integrals, one over $\lambda_1, \ldots, \lambda_m$, the other over $\lambda_{m+1}, \ldots, \lambda_M$. As we have $(\lambda_1, \ldots, \lambda_m) \in \mathcal{C}_m$ in , $\lambda_m < b / \rho$ and thus $$\begin{split} \int_{\lambda_m \leq \lambda_{m+1} \leq \ldots \leq \lambda_M} & p_{M,N}(\lambda_1, \ldots, \lambda_M) \ d\lambda_{m+1} \ldots d\lambda_M \\ & \geq \int_{(\lambda_{m+1}, \ldots, \lambda_M) \in \mathcal{D}} p_{M,N}(\lambda_1, \ldots, \lambda_M) \ d\lambda_{m+1} \ldots d\lambda_M \end{split} \label{ineq:2intg}$$ where $\mathcal{D}=\{(\lambda_{m+1}, \ldots, \lambda_M) \in \mathbb{R}_+^{M-m}; \ b/\rho \leq \lambda_{m+1}\leq \ldots \leq \lambda_M \}$. This integral can be simplified by noticing that the explicit expression of $p_{M,N}(\lambda_1, \ldots, \lambda_M)$ is invariant under permutation of its arguments $\lambda_1, \ldots, \lambda_M$, in particular under permutation of $\lambda_{m+1}, \ldots, \lambda_M$. 
Therefore, noting $\mathcal{S}=\mathrm{Sym}(\{\lambda_{m+1}, \ldots, \lambda_M\})$ the group of permutations over the finite set $\{\lambda_{m+1}, \ldots, \lambda_M\}$, we get $$\begin{aligned} \int_{b/\rho}^{+\infty} \bhs\ldots \int_{b/\rho}^{+\infty} & p_{M,N}(\lambda_1, \ldots, \lambda_M) \ d\lambda_{m+1} \ldots d\lambda_M \notag \\ & = \sum_{s \in \mathcal{S}} \int_{s(\lambda_{m+1}, \ldots, \lambda_M) \in \mathcal{D}} p_{M,N}(\lambda_1, \ldots, \lambda_M) \ d\lambda_{m+1} \ldots d\lambda_M \notag \\ & = \mathrm{Card}(\mathcal{S}) \int_{ (\lambda_{m+1}, \ldots, \lambda_M) \in \mathcal{D}} p_{M,N}(\lambda_1, \ldots, \lambda_M) \ d\lambda_{m+1} \ldots d\lambda_M \notag \\ & = (M-m)! \int_{ (\lambda_{m+1}, \ldots, \lambda_M) \in \mathcal{D}} p_{M,N}(\lambda_1, \ldots, \lambda_M) \ d\lambda_{m+1} \ldots d\lambda_M. \label{eq:intg-symm}\end{aligned}$$ Using and in , we obtain $$\PP \Bigg( \sum_{k=1}^m \rho \lambda_k < b \Bigg) \geq \frac{1}{(M-m)!} \int_{\mathcal{C}_m} \int_{b/\rho}^{+\infty} \bhs\ldots \int_{b/\rho}^{+\infty} p_{M,N}(\lambda_1, \ldots, \lambda_M) \ d\lambda_1 \ldots d\lambda_M.$$ We now replace $p_{M,N}$ by its explicit expression and then try to separate the $m$ first eigenvalues from the others. Note that we can drop the constants $(M-m)!$ and $K_{M,N}$ as we only need an asymptotic lower bound. 
$$\begin{aligned} \PP \Bigg( \sum_{k=1}^m \rho \lambda_k < b \mhs \Bigg) \mhs \ \dot\geq & \int_{\mathcal{C}_m} \int_{b/\rho}^{+\infty} \bhs \ldots \int_{b/\rho}^{+\infty} \prod_{i=1}^M \left( \lambda_i^{N-M} e^{-\lambda_i} \right) \prod_{i<j}(\lambda_i-\lambda_j)^2 \, d\lambda_1 \ldots d\lambda_M \\ = & \ \int_{\mathcal{C}_m} \int_{b/\rho}^{+\infty} \bhs \ldots \int_{b/\rho}^{+\infty} \Bigg( \prod_{i=1}^m \left( \lambda_i^{N-M} e^{-\lambda_i} \right) \prod_{i<j\leq m}(\lambda_i-\lambda_j)^2 \, \Bigg) \\ & \cdot \Bigg( \prod_{i=m+1}^M \left( \lambda_i^{N-M} e^{-\lambda_i} \right) \prod_{i\leq m<j}(\lambda_i-\lambda_j)^2 \prod_{m<i<j}(\lambda_i-\lambda_j)^2 \, \Bigg) \ d\lambda_1 \ldots d\lambda_M \end{aligned}$$ For $i\leq m<j$, we have that $\lambda_i \leq b/\rho$ and thus $(\lambda_i-\lambda_j)^2 \geq \big(\lambda_j-\frac{b}{\rho}\big)^2$. Hence, $$\begin{aligned} \PP \Bigg( \sum_{k=1}^m \rho \lambda_k < b \mhs \Bigg) \dot\geq & \ \bigg( \int_{\mathcal{C}_m} \prod_{i=1}^m \left( \lambda_i^{N-M} e^{-\lambda_i} \right) \prod_{i<j\leq m}(\lambda_i-\lambda_j)^2 \ d\lambda_1 \ldots d\lambda_m \bigg) \label{eq:2sep-intg} \\ & \cdot \mhs \bigg( \mhs \int_{b/\rho}^{+\infty} \bhs\mhs\mhs \ldots \int_{b/\rho}^{+\infty} \mhs\mhs \prod_{i=m+1}^M \mhs\mhs\mhs \left( \lambda_i^{N-M}e^{-\lambda_i} \right) \mhs \prod_{j=m+1}^M \mhs\mhs\mhs \left(\lambda_j-\frac{b}{\rho}\right)^{\,\mhs\mhs 2m} \mhs\mhs \prod_{m<i<j} \mhs\mhs\mhs (\lambda_i-\lambda_j)^2 \ d\lambda_{m+1} \ldots d\lambda_M \mhs\mhs\bigg) \notag \end{aligned}$$ We now have two separate integrals. We first consider the second one, in which we make the substitution $\beta_i= \lambda_i - b/\rho$ for $i=m+1, \ldots, M$. 
$$\begin{aligned} \int_{b/\rho}^{+\infty} & \bhs \ldots \int_{b/\rho}^{+\infty} \prod_{i=m+1}^M \mhs\mhs\mhs \left( \lambda_i^{N-M}e^{-\lambda_i} \right) \mhs \prod_{j=m+1}^M \mhs\mhs\mhs \left(\lambda_j-\frac{b}{\rho}\right)^{\,\mhs\mhs 2m} \mhs\mhs \prod_{m<i<j} \mhs\mhs\mhs (\lambda_i-\lambda_j)^2 \ d\lambda_{m+1} \ldots d\lambda_M \notag \\ & = e^{- {(M-m)b/\rho}} \int_0^{+\infty} \bhs \ldots \int_0^{+\infty} \mhs\mhs \prod_{i=m+1}^M \mhs\mhs \left( \mhs \left( {\textstyle \beta_i + \frac{b}{\rho}} \right)^{N-M} e^{- \beta_i} \beta_i ^{2m} \mhs \right) \mhs \prod_{m<i<j}(\beta_i-\beta_j)^2 \ d\beta_{m+1} \ldots d\beta_M \notag \\ & \geq \frac{1}{2} \int_0^{+\infty} \bhs \ldots \int_0^{+\infty} \prod_{i=m+1}^M \left( \beta_i ^{N-M+2m} e^{- \beta_i} \right) \prod_{m<i<j}(\beta_i-\beta_j)^2 \ d\beta_{m+1} \ldots d\beta_M \label{ineq:2nd-intg} \end{aligned}$$ for $\rho$ large enough, i.e. such that $e^{- (M-m) b/\rho} > 1/2$. It is straightforward to see that the integral in is nonzero, finite, independent from $\rho$ and therefore asymptotically equivalent to $1$ in the sense of . Hence, we can drop the second integral in , leading to: $$\PP \Bigg( \sum_{k=1}^m \rho \lambda_k < b \mhs \Bigg) \dot\geq \ \int_{\mathcal{C}_m} \prod_{i=1}^m \left( \lambda_i^{N-M} e^{-\lambda_i} \right) \prod_{i<j\leq m}(\lambda_i-\lambda_j)^2 \ d\lambda_1 \ldots d\lambda_m. 
\label{ineq:only1-intg}$$ Making the substitution $\alpha_i= \rho \lambda_i$ for $i=1, \ldots, m$ in , and denoting $\mathcal{C}'_m=\{ \alpha_1, \ldots, \alpha_m: \ 0 < \alpha_1 \leq \ldots \leq \alpha_m, \ \sum_{k=1}^m \alpha_k < b \}$, we then have $$\begin{aligned} \PP \Bigg(\sum_{k=1}^m \rho \lambda_k < b \Bigg) & \, \dot\geq \, \bigg( \rho^{-m-m(N-M)-m(m-1)} \int_{\mathcal{C}'_m} \prod_{i=1}^m \left( \alpha_i^{N-M} e^{-\alpha_i/\rho} \right) \prod_{i<j\leq m}(\alpha_i-\alpha_j)^2 \ d\alpha_1 \ldots d\alpha_m \bigg) \notag \\ & \geq \rho^{-m(N-M+m)} \int_{\mathcal{C}'_m} \prod_{i=1}^m \left( \alpha_i^{N-M} e^{-\alpha_i} \right) \prod_{i<j\leq m}(\alpha_i-\alpha_j)^2 \ d\alpha_1 \ldots d\alpha_m \label{ineq:no-intg} \end{aligned}$$ for $\rho \geq 1$, since then $e^{- \alpha_i/\rho} \geq e^{- \alpha_i}$ for $i=1, \ldots, m$. As $b>0$, it is straightforward to see that the integral in is nonzero but also finite and independent of $\rho$; it is therefore asymptotically equivalent to $1$ in the sense of , yielding $$\PP \Bigg( \sum_{k=1}^m \rho \lambda_k < b \Bigg) \dot\geq \ \rho^{-m(N-M+m)},$$ which concludes the proof. {#apx:prob_first_ev_bounded} We prove in this section that $\PP\left( \mathcal{B}_1 \right) \dot\leq \, \rho^{-m(N-M+m)}$, where the set $\mathcal{B}_1$ is defined by $$\mathcal{B}_1= \{ \lambda_1 \leq \lambda_2 \ldots \leq \lambda_M, \, 0 < \lambda_k \leq b/\rho, \, k=1,\ldots,m \},$$ with $b>0$ and $\lambda_1, \ldots, \lambda_M$ the ordered eigenvalues of the Wishart matrix $\H^*\H$. We use the same approach as in Appendix \[apx:prob\_sum\_first\_ev\]. 
Denoting, as in Appendix \[apx:prob\_sum\_first\_ev\], by $p_{M,N}$ the joint probability density function of the ordered eigenvalues of an $M \times M$ Wishart matrix with scale matrix $\I_M$ and $N$ degrees of freedom, the probability $\PP(\mathcal{B}_1)$ can be written as $$\PP(\mathcal{B}_1) = \int_{(\lambda_1, \ldots, \lambda_M) \in \mathcal{B}_1} p_{M,N}(\lambda_1, \ldots, \lambda_M) \ d\lambda_1 \ldots d\lambda_M.$$ Similarly to Appendix \[apx:prob\_sum\_first\_ev\], we try to upper bound $\PP(\mathcal{B}_1)$ by the product of two integrals, one containing the first $m$ eigenvalues and the other the $M-m$ remaining eigenvalues. We first replace $p_{M,N}$ by its explicit expression : $$\begin{aligned} \PP(\mathcal{B}_1) & = K_{M,N}^{-1} \int_{(\lambda_1, \ldots, \lambda_M) \in \mathcal{B}_1} \prod_{i=1}^M \lambda_i^{N-M} e^{-\lambda_i} \prod_{i<j}(\lambda_i-\lambda_j)^2 \ d\lambda_1 \ldots d\lambda_M \\ & \doteq \int_{(\lambda_1, \ldots, \lambda_M) \in \mathcal{B}_1} \Bigg( \prod_{i=1}^m \left( \lambda_i^{N-M} e^{-\lambda_i} \right) \prod_{i<j\leq m}(\lambda_i-\lambda_j)^2 \Bigg) \\ & \fhs \cdot \Bigg( \prod_{i=m+1}^M \left( \lambda_i^{N-M} e^{-\lambda_i} \right) \prod_{i\leq m<j}(\lambda_i-\lambda_j)^2 \prod_{m<i<j}(\lambda_i-\lambda_j)^2 \Bigg) \ d\lambda_1 \ldots d\lambda_M. \end{aligned}$$ Note that we dropped the normalizing constant $K_{M,N}$, as $K_{M,N}^{-1} \doteq 1$. 
For $i \leq m < j$, we have $|\lambda_i - \lambda_j| \leq \lambda_j$ and thus $\prod_{i\leq m<j}(\lambda_i-\lambda_j)^2 \leq \prod_{j=m+1}^M \lambda_j^{2m}$, yielding $$\begin{aligned} \PP(\mathcal{B}_1) & \,\dot\leq\, \int_0^{b/\rho} \int_{\lambda_1}^{b/\rho} \bhs \ldots \int_{\lambda_{m-1}}^{b/\rho} \int_{\lambda_m}^{+\infty} \bhs \ldots \int_{\lambda_{M-1}}^{+\infty} \Bigg( \prod_{i=1}^m \left( \lambda_i^{N-M} e^{-\lambda_i} \right) \prod_{i<j\leq m}(\lambda_i-\lambda_j)^2 \Bigg) \\ & \fhs \cdot \Bigg( \prod_{i=m+1}^M \left( \lambda_i^{N+2m-M} e^{-\lambda_i} \right) \prod_{m<i<j}(\lambda_i-\lambda_j)^2 \Bigg) \ d\lambda_1 \ldots d\lambda_M \end{aligned}$$ In order to obtain two separate integrals we discard the $\lambda_m$ in the integral bound simply by noticing that $\lambda_m>0$, therefore $$\begin{aligned} \PP(\mathcal{B}_1) & \,\dot\leq\, \Bigg( \int_0^{b/\rho} \int_{\lambda_1}^{b/\rho} \bhs \ldots \int_{\lambda_{m-1}}^{b/\rho} \prod_{i=1}^m \left( \lambda_i^{N-M} e^{-\lambda_i} \right) \prod_{i<j\leq m}(\lambda_i-\lambda_j)^2 \ d\lambda_1 \ldots d\lambda_m \Bigg) \\ & \fhs \cdot \Bigg( \int_0^{+\infty} \int_{\lambda_{m+1}}^{+\infty} \bhs \ldots \int_{\lambda_{M-1}}^{+\infty} \prod_{i=m+1}^M \left( \lambda_i^{N+2m-M} e^{-\lambda_i} \right) \prod_{m<i<j}(\lambda_i-\lambda_j)^2 \ d\lambda_{m+1} \ldots d\lambda_M \Bigg) \end{aligned}$$ As the second integral (in $\lambda_{m+1}$, …, $\lambda_M$) is nonzero, finite and independent of $\rho$ it is asymptotically equivalent to $1$ in the sense of . Hence, $$\PP(\mathcal{B}_1) \,\dot\leq\, \int_0^{b/\rho} \int_{\lambda_1}^{b/\rho} \bhs \ldots \int_{\lambda_{m-1}}^{b/\rho} \prod_{i=1}^m \left( \lambda_i^{N-M} e^{-\lambda_i} \right) \prod_{i<j\leq m}(\lambda_i-\lambda_j)^2 \ d\lambda_1 \ldots d\lambda_m. \label{ineq:PB1-1intg}$$ We now make the substitutions $\alpha_i=\rho\lambda_i$ for $i=1, \ldots, m$ inside the remaining integral. 
$$\begin{aligned} & \int_0^{b/\rho} \int_{\lambda_1}^{b/\rho} \bhs \ldots \int_{\lambda_{m-1}}^{b/\rho} \prod_{i=1}^m \left( \lambda_i^{N-M} e^{-\lambda_i} \right) \prod_{i<j\leq m}(\lambda_i-\lambda_j)^2 \ d\lambda_1 \ldots d\lambda_m \notag \\ & \hspace{15pt} = \ \rho^{-m(N-M+m)} \int_0^{b} \int_{\alpha_1}^{b} \bhs \ldots \int_{\alpha_{m-1}}^{b} \prod_{i=1}^m \left( \alpha_i^{N-M} e^{-\alpha_i/\rho} \right) \prod_{i<j\leq m}(\alpha_i-\alpha_j)^2 \ d\alpha_1 \ldots d\alpha_m \notag \\ & \hspace{15pt} \leq \rho^{-m(N-M+m)} \mhs \int_0^{b} \int_{\alpha_1}^{b} \bhs \ldots \int_{\alpha_{m-1}}^{b} \prod_{i=1}^m \alpha_i^{N-M} \mhs \prod_{i<j\leq m} \mhs\mhs (\alpha_i-\alpha_j)^2 \ d\alpha_1 \ldots d\alpha_m , \label{ineq:rem_intg} \end{aligned}$$ as $e^{-\alpha_i/\rho} \leq 1$. The remaining integral in is nonzero ($b > 0$), finite and does not depend on $\rho$; the bound is therefore asymptotically equivalent to $\rho^{-m(N-M+m)}$ in the sense of . Coming back to we obtain $$\PP(\mathcal{B}_1) \ \dot\leq \ \rho^{-m(N-M+m)}.$$ {#apx:ang_par} In this appendix, we review the results of [@dita2003factorization; @lundberg2004haar] for the reader’s convenience. It has been shown in [@dita2003factorization] that any $n \times n$ unitary matrix $A_n$ can be written as $$A_n = d_n {\mathcal{O}}_n \begin{bmatrix} 1 & 0 \\ 0 & A_{n-1} \end{bmatrix}, \label{eq:unitary_dec}$$ with $A_{n-1}$ an $(n-1) \times (n-1)$ unitary matrix, $d_n$ a diagonal phase matrix, that is $d_n = \mathrm{diag}(e^{i\varphi_1}, \ldots, e^{i\varphi_n})$ with $\varphi_1, \ldots, \varphi_n \in [0, 2\pi]$, and $\mathcal{O}_n$ an orthogonal matrix (the matrix of angles). 
Matrix $\mathcal{O}_n$ can be written in terms of parameters $\theta_1, \ldots, \theta_{n-1} \in [0, \frac{\pi}{2}]$ thanks to the following decomposition: $\mathcal{O}_n = J_{n-1,n} J_{n-2,n-1} \ldots J_{1,2}$, where $$J_{i,i+1}= \begin{bmatrix} \I_{i-1} & 0 & 0 & 0 \\ 0 & \cos\theta_i & -\sin\theta_i & 0 \\ 0 & \sin\theta_i & \cos\theta_i & 0 \\ 0 & 0 & 0 & \I_{n-i-1} \end{bmatrix}.$$ Let $\U_M$ be an $M \times M$ unitary Haar distributed matrix. Then, using decomposition \eqref{eq:unitary_dec}, $$\U_M= \D_M({\boldsymbol}\varphi_1) \V_M({\boldsymbol}\theta_1) \begin{bmatrix} 1 & 0 \\ 0 & \U_{M-1} \end{bmatrix},$$ with ${\boldsymbol}\varphi_1=(\varphi_{1,1},\ldots,\varphi_{1,M}) \in [0,2\pi]^M$, ${\boldsymbol}\theta_1=(\theta_{1,1},\ldots,\theta_{1,M-1}) \in [0,\frac{\pi}{2}]^{M-1}$, $\D_M({\boldsymbol}\varphi_1)$ the diagonal matrix defined by $\D_M({\boldsymbol}\varphi_1)=\mathrm{diag}(e^{i\varphi_{1,1}},\ldots,e^{i\varphi_{1,M}})$, $\V_M({\boldsymbol}\theta_1)$ the orthogonal matrix defined by $\V_M({\boldsymbol}\theta_1)=J_{M-1,M} J_{M-2,M-1} \ldots J_{1,2}$ and $\U_{M-1}$ an $(M-1) \times (M-1)$ unitary matrix. Matrix $\U_{M-1}$ can naturally be similarly factorized. Similarly to [@lundberg2004haar], we can show that, in order for $\U_M$ to be a Haar matrix, it is sufficient that $(\varphi_{1,i})_{i=1,\ldots,M}$ are i.i.d. random variables uniformly distributed over the interval $[0, 2\pi[$, that $\theta_{1,1},\ldots,\theta_{1,M-1}$ are independent with densities respectively proportional to $(\sin \theta_{1,1})^{M-2}, (\sin \theta_{1,2})^{M-3}, \ldots, \sin \theta_{1,M-2}, 1$ and independent from ${\boldsymbol}\varphi_1$, and that $\U_{M-1}$ is Haar distributed and independent from ${\boldsymbol}\varphi_1$ and ${\boldsymbol}\theta_1$.
The proof consists in first showing, by a simple change of variables, that if the $(\varphi_{1,i})_{i=1,\ldots,M}$ and the $\theta_{1,1},\ldots,\theta_{1,M-1}$ follow the distributions above, then the first column of $\D_M({\boldsymbol}\varphi_1) \V_M({\boldsymbol}\theta_1)$ is uniformly distributed over the unit sphere of $\mathbb{C}^M$. The proof is then completed by showing that if $\U_{M-1}$ is a Haar matrix independent from ${\boldsymbol}\varphi_1$ and ${\boldsymbol}\theta_1$, then $\U_M$ is Haar distributed. Finally, one can parameterize a Haar matrix $\U_M$ by ${\boldsymbol}\varphi_1$, ${\boldsymbol}\theta_1$ and $\U_{M-1}$. Repeating the same parametrization for $\U_{M-1}$, we obtain that $\U_M$ can be parameterized by the following $M^2$ independent variables $$\begin{aligned} &(\varphi_{1,1}, \ldots, \varphi_{1,M}), (\theta_{1,1}, \ldots, \theta_{1,M-1}), (\varphi_{2,1}, \ldots, \varphi_{2,M-1}), (\theta_{2,1}, \ldots, \theta_{2,M-2}), \ldots, \\ &(\varphi_{M-2,1}, \varphi_{M-2,2}), \theta_{M-2,1}, \varphi_{M-1,1}, \end{aligned}$$ whose probability densities are almost everywhere positive.
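For readers who want to experiment numerically, a Haar-distributed unitary can also be sampled without the recursive angle/phase parametrization, via the QR decomposition of a complex Ginibre matrix with a phase correction; this is a standard alternative construction, not the one used in this appendix. The sketch below assumes NumPy, and the function name is ours.

```python
import numpy as np

def haar_unitary(M, rng=None):
    """Sample an M x M Haar-distributed unitary matrix.

    QR-factor a complex Ginibre matrix, then normalize so the diagonal
    of R is real and positive; without this phase correction, Q is not
    Haar distributed.
    """
    rng = np.random.default_rng() if rng is None else rng
    Z = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d / np.abs(d))  # multiply each column of Q by a unit phase
```

Both procedures target the same Haar distribution; the explicit parametrization above is what makes the independent-coordinate arguments of this appendix possible.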
--- bibliography: - 'sources.bib' ---

Ji Li, Dept. of Mathematics

Thomas Roby, Dept. of Mathematics, University of Connecticut

For my friends, who keep me sane.

Without Ira’s brilliant and understanding mentorship, this research would not have been possible. Without Susan’s friendship and guidance or the camaraderie of the members of my cohort, I never would have made it to the point of doing graduate research. Without the encouragement David, Margaret, and Kim gave me, I might never have discovered mathematical research at all. Without my parents’ patient and generous support or the forbearance of my many teachers from childhood into my graduate years, I would never even have discovered the academy. Without Janet’s affection, fellowship, and tolerance of my eccentricities, my progress in these last years—and my spirits—would have been greatly diminished. My life and this work are a gift from, and a testament to, everyone whom I have known. Thank you all.

The theory of $\Gamma$-species is developed to allow species-theoretic study of quotient structures in a categorically rigorous fashion. This new approach is then applied to two graph-enumeration problems which were previously unsolved in the unlabeled case—bipartite blocks and general $k$-trees.

Historically, the algebra of generating functions has been a valuable tool in enumerative combinatorics. The theory of combinatorial species uses category theory to justify and systematize this practice, making clear the connections between structural manipulations of some objects of interest and algebraic manipulations of their associated generating functions. The notion of ‘quotient’ enumeration (that is, of counting orbits under some group action) has been applied in species-theoretic contexts, but methods for doing so have largely been ad-hoc.
We will contribute a species-compatible way of keeping track of the way a group $\Gamma$ acts on structures of a species $F$, yielding what we term a $\Gamma$-species, which has the sort of synergy of algebraic and structural data that we expect from species. We will then show that it is possible to extract information about the $\Gamma$-orbits of such a $\Gamma$-species and harness this new method to attack several unsolved problems in graph enumeration—in particular, the isomorphism classes of nonseparable bipartite graphs and $k$-trees (that is, ‘unlabeled’ bipartite blocks and $k$-trees). It is assumed that the reader of this thesis is familiar with the classical theory of groups and that he has encountered at least the basic vocabularies of category theory and graph theory. Results in these fields which are not original to this thesis will either be referenced from the literature or simply assumed, depending on the degree to which they are part of the standard body of knowledge one acquires when studying those disciplines. In the first chapter, we outline the theory of species, develop several classical methods, and introduce the notion of a $\Gamma$-species. In the second chapter, we apply these techniques to the enumeration of unlabeled vertex-$2$-connected bipartite graphs, a historically open problem. In the third chapter, we apply these techniques to the more complex problem of the enumeration of unlabeled general $k$-trees, also historically unsolved. Finally, in an appendix we discuss algebraic and computational methods which allow species-theoretical insights to be translated into explicit algorithmic techniques for enumeration.

The theory of species {#c:species}
=====================

Introduction {#s:introspec}
------------

Many of the most important historical problems in enumerative combinatorics have concerned the difficulty of passing from ‘labeled’ to ‘unlabeled’ structures.
In many cases, the algebra of generating functions has proved a powerful tool in analyzing such problems. However, the general theory of the association between natural operations on classes of such structures and the algebra of their generating functions has been largely ad-hoc. André Joyal’s introduction of the theory of combinatorial species in [@joy:species] provided the groundwork to formalize and understand this connection. A full, pedagogical exposition of the theory of species is available in [@bll:species], so we here present only an outline, largely tracking that text. To begin, we wish to formalize the notion of a ‘construction’ of a structure of some given class from a set of ‘labels’, such as the construction of a graph from its vertex set or that of a linear order from its elements. The language of category theory will allow us to capture this behavior succinctly yet with full generality: \[def:species\] Let $\catname{FinBij}$ be the category of finite sets with bijections and $\catname{FinSet}$ be the category of finite sets with set maps. Then a *species* is a functor $F: \catname{FinBij} \to \catname{FinSet}$. For a set $A$ and a species $F$, an element of $F \sbrac{A}$ is an *$F$-structure on $A$*. Moreover, for a species $F$ and a bijection $\phi: A \to B$, the bijection $F \sbrac{\phi}: F \sbrac{A} \to F \sbrac{B}$ is the *$F$-transport of $\phi$*. A species functor $F$ simply associates to each set $A$ another set $F \sbrac{A}$ of its $F$-structures; for example, for $\specname{S}$ the species of permutations, we associate to some set $A$ the set $\specname{S} \sbrac{A} = \operatorname{Bij} \pbrac{A}$ of self-bijections (that is, permutations as maps) of $A$. This association of label set $A$ to the set $F \sbrac{A}$ of all $F$-structures over $A$ is fundamental throughout combinatorics, and functoriality is simply the requirement that we may carry maps on the label set through the construction.
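To make the functorial bookkeeping concrete, the following toy Python model (ours, not from the text) implements the species $\specname{S}$ of permutations: `perm_structures` plays the role of $A \mapsto \specname{S} \sbrac{A}$, and `perm_transport` implements the transport $\specname{S} \sbrac{\phi}$ by conjugation.

```python
from itertools import permutations

def perm_structures(A):
    """All S-structures on the finite label set A: permutations of A, as maps."""
    A = sorted(A)
    return [dict(zip(A, image)) for image in permutations(A)]

def perm_transport(phi):
    """The transport S[phi] of a bijection phi: A -> B (given as a dict).

    A permutation s of A is carried to phi . s . phi^{-1}, the same
    structure relabeled along phi.
    """
    def transport(s):
        return {phi[a]: phi[s[a]] for a in s}
    return transport

# Relabeling a 3-cycle on {1,2,3} along 1->a, 2->b, 3->c:
phi = {1: 'a', 2: 'b', 3: 'c'}
s = {1: 2, 2: 3, 3: 1}
assert perm_transport(phi)(s) == {'a': 'b', 'b': 'c', 'c': 'a'}
```

One can check directly that `perm_transport` respects identities and composition, which is exactly the functoriality requirement discussed above.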
\[ex:graphspecies\] Let $\specname{G}$ denote the species of simple graphs labeled at vertices. Then, for any finite set $A$ of labels, $G \sbrac{A}$ is the set of simple graphs with $\abs{A}$ vertices labeled by the elements of $A$. For example, for label set $A = \sbrac{3} = \cbrac{1, 2, 3}$, there are eight graphs in $\specname{G} \sbrac{A}$, since there are $\binom{3}{2} = 3$ possible edges and thus $2^{3} = 8$ ways to choose a subset of those edges: $$\specname{G} \sbrac{\cbrac{1, 2, 3}} = \cbrac{ \begin{array}{c} \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) {3}; } \end{aligned}, \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) {3}; \draw(1) to (2); } \end{aligned}, \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) {3}; \draw(2) to (3); } \end{aligned}, \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) {3}; \draw(1) to (3); } \end{aligned}, \\ \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) {3}; \draw(1) to (2); \draw(1) to (3); \draw(2) to (3); } \end{aligned}, \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) {3}; \draw(1) to (3); \draw(2) to (3); } \end{aligned}, \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) {3}; \draw(1) to (2); \draw(1) to (3); } \end{aligned}, \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) 
{3}; \draw(1) to (2); \draw(2) to (3); } \end{aligned} \end{array} }.$$ The symmetric group $\symgp{3}$ acts on the set $\sbrac{3}$ as permutations. Consider the permutation $\pmt{(23)}$ that interchanges $2$ and $3$ in $\sbrac{3}$. Then $\specname{G} \sbrac{\pmt{(23)}}$ is a permutation on the set $\specname{G} \sbrac{\cbrac{1, 2, 3}}$; for example, $$\specname{G} \sbrac{\pmt{(23)}} \pbrac{ \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) {3}; \draw(1) to (2); } \end{aligned} } = \begin{aligned} \tikz{ \node[style=graphnode](1) at (90:1) {1}; \node[style=graphnode](2) at (210:1) {2}; \node[style=graphnode](3) at (330:1) {3}; \draw(1) to (3); } \end{aligned}.$$ Since the image of a bijection under such a functor is necessarily itself a bijection, many authors instead simply define a species as a functor $F: \catname{FinBij} \to \catname{FinBij}$. Our motivation for using this definition instead will become clear in \[s:quot\]. Note that, having defined the species $F$ to be a functor, we have the following properties:

- for any two bijections $\alpha: A \to B$ and $\beta: B \to C$, we have $F \sbrac{\beta \circ \alpha} = F \sbrac{\beta} \circ F \sbrac{\alpha}$, and
- for any set $A$, we have $F \sbrac{\Id_{A}} = \Id_{F \sbrac{A}}$.

Accordingly, we (generally) need not concern ourselves with the details of the set $A$ of labels we consider, so we will often restrict our attention to a canonical label set $\sbrac{n} := \cbrac{1, 2, \dots, n}$ for each cardinality $n$. Moreover, the permutation group $\symgp{A}$ on any given set $A$ acts by self-bijections of $A$ and induces *automorphisms* of $F$-structures for a given species $F$. The orbits of $F$-structures on $A$ under the induced action of $\symgp{A}$ are then exactly the ‘unlabeled’ structures of the class $F$, such as unlabeled graphs.
Finally, we note that it is often natural to speak of maps between classes of combinatorial structures, and that these maps are sometimes combinatorially ‘natural’. For example, we might wish to map the species of trees into the species of general graphs by embedding; to map the species of connected bicolored graphs to the species of connected bipartite graphs by forgetting some color information; or to map the species of graphs to the species of sets of connected graphs by identification. These maps are all ‘natural’ in the sense that they are explicitly structural and do not reference labels; thus, at least at a conceptual level, they are compatible with the motivating ideas of species. We can formalize this notion in the language of categories: \[def:specmap\] Let $F$ and $G$ be species. A *species map* $\phi$ is a natural transformation $\phi: F \to G$ — that is, an association to each set $A \in \catname{FinBij}$ of a set map $\phi_{A} \in \catname{FinSet}$ such that, for every bijection $\sigma: A \to B$, the following diagram commutes: $$\begin{tikzpicture}[every node/.style={fill=white}] \matrix (m) [matrix of math nodes, row sep=4em, column sep=4em] { F \sbrac{A} & F \sbrac{B} \\ G \sbrac{A} & G \sbrac{B} \\ }; \path[->,font=\scriptsize] (m-1-1) edge node {$\phi_{A}$} (m-2-1) edge node {$F \sbrac{\sigma}$} (m-1-2) (m-2-1) edge node {$G \sbrac{\sigma}$} (m-2-2) (m-1-2) edge node {$\phi_{B}$} (m-2-2); \end{tikzpicture}$$ We call the set map $\phi_{A}$ the *$A$ component of $\phi$* or the *component of $\phi$ at $A$*. Such species maps may capture the idea that two species are essentially ‘the same’ or that one ‘contains’ or ‘sits inside’ another. \[def:specmaptypes\] Let $F$ and $G$ be species and $\phi: F \to G$ a species map between them. In the case that the components $\phi_{A}$ are all bijections, we say that $\phi$ is a *species isomorphism* and that $F$ and $G$ are *isomorphic*.
In the case that the components $\phi_{A}$ are all injections, we say that $\phi$ is a *species embedding* and that $F$ *embeds in* $G$ (denoted $\phi: F \hookrightarrow G$). Likewise, in the case that the components $\phi_{A}$ are all surjections, we say that $\phi$ is a *species covering* and that $F$ *covers* $G$ (denoted $\phi: F \twoheadrightarrow G$). With the full power of the language of categories, we may make the following more general observation: \[note:specmapcattheo\] Let $\catname{Spc}$ denote the functor category of species; that is, define $\catname{Spc} \defeq \catname{FinSet}^{\catname{FinBij}}$, the category of functors from $\catname{FinBij}$ to $\catname{FinSet}$. Species maps as defined in \[def:specmap\] are natural transformations of these functors and thus are exactly the morphisms of $\catname{Spc}$. It is a classical theorem of category theory (cf. [@mac:cftwm]) that the epi- and monomorphisms of a functor category are exactly those whose components are epi- and monomorphisms in the target category if the target category has pullbacks and pushouts. Since $\catname{FinSet}$ is such a category, species embeddings and species coverings are precisely the mono- and epimorphisms of the functor category $\catname{Spc}$. Species isomorphisms are of course the categorical isomorphisms in $\catname{Spc}$. In the case that $F$ and $G$ are isomorphic species, we will often simply write $F = G$, since they are combinatorially equivalent; some authors instead use $F \simeq G$, reserving the notation of equality for the much stricter case that additionally requires that $F \sbrac{A} = G \sbrac{A}$ as sets for all $A$. The notions of species embedding and species covering are original to this work. In the motivating examples from above:

- The species $\mathfrak{a}$ of trees *embeds* in the species $\specname{G}$ of graphs by the map which identifies each tree with itself as a graph, since any two distinct trees are distinct as graphs.
- The species $\specname{BC}$ of bicolored graphs *covers* the species $\specname{BP}$ of bipartite graphs by the map which sends each bicolored graph to its underlying bipartite graph, since every bipartite graph has at least one bicoloring.
- The species $\specname{G}$ of graphs is *isomorphic* with the species $\specname{E} \pbrac{\specname{G}^{\specname{C}}}$ of sets of connected graphs by the map which identifies each graph with its set of connected components, since this decomposition exists uniquely.

Cycle indices and species enumeration {#s:cycind}
-------------------------------------

In classical enumerative combinatorics, formal power series known as ‘generating functions’ are used extensively for keeping track of enumerative data. In this spirit, we now associate to each species a formal power series which counts structures with respect to their automorphisms, which will prove to be significantly more powerful: \[def:cycind\] For a species $F$, define its *cycle index series* to be the power series $$\label{eq:cycinddef} \civars{F}{p_{1}, p_{2}, \dots} := \sum_{n \geq 0} \frac{1}{n!} \big( \sum_{\sigma \in \symgp{n}} \fix \pbrac{F \sbrac{\sigma}} p_{1}^{\sigma_{1}} p_{2}^{\sigma_{2}} \dots \big) = \sum_{n \geq 0} \frac{1}{n!} \big( \sum_{\sigma \in \symgp{n}} \fix \pbrac{F \sbrac{\sigma}} p_{\sigma} \big)$$ where $\fix \pbrac{F \sbrac{\sigma}} := \abs{\cbrac{s \in F \sbrac{\sbrac{n}} : F \sbrac{\sigma} \pbrac{s} = s}}$, where $\sigma_{i}$ is the number of $i$-cycles of $\sigma$, and where $p_{i}$ are indeterminates. (That is, $\fix \pbrac{F \sbrac{\sigma}}$ is the *number* of $F$-structures fixed under the action of the transport of $\sigma$.) We will make extensive use of the compressed notation $p_{\sigma} = p_{1}^{\sigma_{1}} p_{2}^{\sigma_{2}} \dots$ hereafter. In fact, by functoriality, $\fix \pbrac{F \sbrac{\sigma}}$ is a class function[^1] on permutations $\sigma \in \symgp{n}$.
Accordingly, we can instead consider all permutations of a given cycle type at once. It is a classical theorem that conjugacy classes of permutations in $\symgp{n}$ are indexed by partitions $\lambda \vdash n$, which are defined as multisets of natural numbers whose sum is $n$. In particular, conjugacy classes are determined by their cycle type, the multiset of the lengths of the cycles, which may clearly be identified bijectively with partitions of $n$. For a given partition $\lambda \vdash n$, there are $n! / z_{\lambda}$ permutations of cycle type $\lambda$, where $z_{\lambda} := \prod_{i} i^{\lambda_{i}} \lambda_{i}!$ and $\lambda_{i}$ denotes the multiplicity of $i$ in $\lambda$. Thus, we can instead write $$\label{eq:cycinddefpart} \civars{F}{p_{1}, p_{2}, \dots} := \sum_{n \geq 0} \sum_{\lambda \vdash n} \fix \pbrac{F \sbrac{\lambda}} \frac{p_{1}^{\lambda_{1}} p_{2}^{\lambda_{2}} \dots}{z_{\lambda}} = \sum_{n \geq 0} \sum_{\lambda \vdash n} \fix \pbrac{F \sbrac{\lambda}} \frac{p_{\lambda}}{z_{\lambda}}$$ for $\fix F \sbrac{\lambda} := \fix F \sbrac{\sigma}$ for some choice of a permutation $\sigma$ of cycle type $\lambda$. Again, we will make extensive use of the notation $p_{\lambda} = p_{\sigma}$ hereafter. That the cycle index $\ci{F}$ usefully characterizes the enumerative structure of the species $F$ may not be clear. However, as the following theorems show, both labeled and unlabeled enumeration are immediately possible once the cycle index is in hand. Recall that, for a given sequence $a = \pbrac{a_{0}, a_{1}, a_{2}, \dots}$, the *ordinary generating function*[^2] of $a$ is the formal power series $\tilde{A} \pbrac{x} = \sum_{i = 0}^{\infty} a_{i} x^{i}$ and the *exponential generating function* is the formal power series $A \pbrac{x} = \sum_{i = 0}^{\infty} \frac{1}{i!} a_{i} x^{i}$.
The scaling factor of $\frac{1}{n!}$ in the exponential generating function is convenient in many contexts; for example, it makes differentiation of the generating function a combinatorially-significant operation. The cycle index of a species is then directly related to two important generating functions: \[thm:ciegf\] The exponential generating function $F \pbrac{x}$ of labeled $F$-structures is given by $$\label{eq:ciegf} F \pbrac{x} = \civars{F}{x, 0, 0, \dots}.$$ \[thm:ciogf\] The ordinary generating function $\tilde{F} \pbrac{x}$ of unlabeled $F$-structures is given by $$\label{eq:ciogf} \tilde{F} \pbrac{x} = \ci{F} \pbracs[big]{x, x^{2}, x^{3}, \dots}.$$ Proofs of both theorems are found in [@bll:species §1.2]. In essence, \[eq:ciegf\] counts each labeled structure exactly once (as a fixed point of the trivial automorphism on $\sbrac{n}$) with a factor of $1/n!$, while \[eq:ciogf\] simply counts orbits via Burnside’s Lemma. In cases where the unlabeled enumeration problem is interesting, it is generally more challenging than the labeled enumeration of the same structures, since the characterization of isomorphism in a species may be nontrivial to capture in a generating function. If, however, we can calculate the complete cycle index for a species, both labeled and unlabeled enumerations immediately follow. The use of $p_{i}$ for the variables instead of the more conventional $x_{i}$ alludes to the theory of symmetric functions, in which $p_{i}$ denotes the power-sum functions $p_{i} = \sum_{j} x_{j}^{i}$, which form an important basis for the ring $\Lambda$ of symmetric functions. When the $p_{i}$ are understood as symmetric functions rather than simply indeterminates, additional Pólya-theoretic enumerative information is exposed.
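Both specializations can be checked by brute force for the species $\specname{G}$ of simple graphs from \[ex:graphspecies\]: a permutation $\sigma$ fixes exactly $2^{c(\sigma)}$ graphs, where $c(\sigma)$ is the number of cycles of the action induced by $\sigma$ on unordered vertex pairs. The following Python sketch (function names are ours) computes both counts directly.

```python
from itertools import combinations, permutations
from math import factorial
from fractions import Fraction

def edge_cycles(perm):
    """Cycles of the action induced by perm (a tuple) on unordered vertex pairs."""
    seen, cycles = set(), 0
    for e in combinations(range(len(perm)), 2):
        if e in seen:
            continue
        cycles += 1
        while e not in seen:
            seen.add(e)
            i, j = e
            e = tuple(sorted((perm[i], perm[j])))
    return cycles

def graph_counts(n):
    """(labeled, unlabeled) numbers of simple graphs on n vertices.

    fix(G[sigma]) = 2^{edge_cycles(sigma)}, so specializing the cycle
    index gives 2^C(n,2) labeled graphs and the Burnside average for
    unlabeled graphs.
    """
    total = sum(Fraction(2 ** edge_cycles(p)) for p in permutations(range(n)))
    return 2 ** (n * (n - 1) // 2), int(total / factorial(n))

assert [graph_counts(n)[1] for n in range(1, 6)] == [1, 2, 4, 11, 34]
```

For $n = 3$ this reproduces the example above: eight labeled graphs but only four up to isomorphism.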
In particular, the symmetric function in $x$-variables underlying a cycle index in $p$-variables may be said to count *partially*-labeled structures of a given species, where the coefficient on a monomial $\prod x_{i}^{\alpha_{i}}$ counts structures with $\alpha_{i}$ labels of each sort $i$. This serves to explain why the coefficients of powers of $p_{1} = \sum_{i} x_{i}$ count labeled structures (where the labels must all be distinct) and why the automorphism types of structures are enumerated by $\civars{F}{x, x^{2}, x^{3}, \cdots}$, which allows clusters of labels to be the same. Another application of the theory of symmetric functions to the cycle indices of species may be found in [@gessel:laginvspec]. A more detailed exploration of the history of cycle index polynomials and their relationship to classical Pólya theory may be found in [@jili:pointdet]. Of course, it is not always obvious how to calculate the cycle index of a species directly. However, in cases where we can decompose a species as some combination of simpler ones, we can exploit these relationships algebraically to study the cycle indices, as we will see in the next section.

Algebra of species {#s:specalg}
------------------

It is often natural to describe a species in terms of combinations of other, simpler species—for example, ‘a permutation is a set of cycles’ or ‘a rooted tree is a single vertex together with a (possibly empty) set of rooted trees’. Several combinatorial operations on species of structures are commonly used to represent these kinds of combinations; that they have direct analogues in the algebra of cycle indices is in some sense the conceptual justification of the theory. In particular, for species $F$ and $G$, we will define species $F + G$, $F \cdot G$, $F \circ G$, $\pointed{F}$, and $F'$, and we will compute their cycle indices in terms of $\ci{F}$ and $\ci{G}$.
In what follows, we will not say explicitly what the effects of a given species operation are on bijections when those effects are obvious (as is usually the case). \[def:specsum\] For two species $F$ and $G$, define their *sum* to be the species $F + G$ given by $\pbracs[big]{F + G} \sbrac{A} = F \sbrac{A} \sqcup G \sbrac{A}$ (where $\sqcup$ denotes disjoint set union). In other words, an $\pbrac{F + G}$-structure is an $F$-structure *or* a $G$-structure. We use the disjoint union to avoid the complexities of dealing with cases where $F \sbrac{A}$ and $G \sbrac{A}$ overlap as sets. \[thm:specsumci\] For species $F$ and $G$, the cycle index of their sum is $$\label{eq:specsumci} \ci{F + G} = \ci{F} + \ci{G}.$$ In the case that $F = G_{1} + G_{2}$, we can simply invert the equation and write $F - G_{2} = G_{1}$. However, we may instead wish to study the species $F - G$ without first writing $F$ as a sum. In the spirit of the definition of species addition, we wish to define the species subtraction $F - G$ as the species of $F$-structures that ‘are not’ $G$-structures. For slightly more generality, we may apply the notions of \[def:specmaptypes\]: \[def:specdif\] For two species $F$ and $G$ with a species embedding $\phi: G \to F$, define their *difference with respect to $\phi$* to be the species $F \specsub{\phi} G$ given by $\pbracs[big]{F \specsub{\phi} G} \sbrac{A} \defeq F \sbrac{A} - \phi \pbrac{G \sbrac{A}}$. When there is no ambiguity about the choice of embedding $\phi$, especially in the case that $G$ has a combinatorially natural embedding in $F$, we may instead simply write $F - G$ and call this species their *difference*. For example, for $\specname{G}$ the species of graphs and $\mathfrak{a}$ the species of trees with the natural embedding, we have that $\specname{G} - \mathfrak{a}$ is the species of graphs that are not trees.
We note also that species addition is associative and commutative (up to species isomorphism), and furthermore the empty species $\numspecname{0}: A \mapsto \varnothing$ is an additive identity, so species with addition form an abelian monoid. This can be completed to create the abelian group of *virtual species*, in which the subtraction $F - G$ of arbitrary species is defined; the two definitions in fact agree where our definition applies. We will not delve into the details of virtual species theory here, directing the reader instead to [@bll:species §2.5]. \[thm:specprod\] For two species $F$ and $G$, define their *product* to be the species $F \cdot G$ given by $\pbrac{F \cdot G} \sbrac{A} = \sum_{A = B \sqcup C} F \sbrac{B} \times G \sbrac{C}$. In other words, an $\pbrac{F \cdot G}$-structure is a partition of $A$ into two sets $B$ and $C$, an $F$-structure on $B$, and a $G$-structure on $C$. This definition is partially motivated by the following result on cycle indices: \[thm:specprodci\] For species $F$ and $G$, the cycle index of their product is $$\label{eq:specprodci} \ci{F \cdot G} = \ci{F} \cdot \ci{G}.$$ Conceptually, the species product can be used to describe species that decompose uniquely into substructures of two specified species. For example, a permutation on a set $A$ decomposes uniquely into a (possibly empty) set of fixed points and a derangement of their complement in $A$. Thus, $\specname{S} = \specname{E} \cdot \operatorname{Der}$ for $\specname{S}$ the species of permutations, $\specname{E}$ the species of sets, and $\operatorname{Der}$ the species of derangements. 
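The decomposition $\specname{S} = \specname{E} \cdot \operatorname{Der}$ can be tested numerically: by \[thm:specprodci\] together with \[thm:ciegf\], a species product corresponds to multiplying exponential generating functions, i.e. to a binomial convolution of labeled counts. A small sketch, assuming SymPy is available and using the classical derangement EGF $e^{-x}/(1-x)$ (names and truncation order are ours):

```python
from math import comb, factorial
from sympy import symbols, series, exp

x = symbols('x')

# Labeled derangement counts D_n from the EGF Der(x) = e^{-x} / (1 - x).
poly = series(exp(-x) / (1 - x), x, 0, 8).removeO()
D = [int(poly.coeff(x, n) * factorial(n)) for n in range(8)]

# S = E . Der translates, on labeled counts, into the binomial convolution
# n! = sum_k C(n, k) * D_{n-k}: choose the fixed points, derange the rest.
for n in range(8):
    assert sum(comb(n, k) * D[n - k] for k in range(n + 1)) == factorial(n)
```

The convolution identity holds degree by degree, exactly as the product formula for cycle indices predicts.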
We note also that species multiplication is commutative (up to species isomorphism) and distributes over addition, so the class of species with addition and multiplication forms a commutative semiring, with the species $\numspecname{1}: \begin{cases} \varnothing \mapsto \cbrac{\varnothing} \\ A \neq \varnothing \mapsto \varnothing \end{cases}$ as a multiplicative identity; if addition is completed as previously described, the class of virtual species with addition and multiplication forms a true commutative ring. In addition, the question of which species can be decomposed as sums and products without resorting to virtual species is one of great interest; the notions of *molecular* and *atomic* species are directly derived from such decompositions, and represent the beginnings of the systematic study of the structure of the class of species as a whole. Further details on this topic are presented in [@bll:species §2.6]. \[def:speccomp\] For two species $F$ and $G$ with $G \sbrac{\varnothing} = \varnothing$, define their *composition* to be the species $F \circ G$ given by $\pbrac{F \circ G} \sbrac{A} = \sum_{\pi \in P \pbrac{A}} \pbrac{F \sbrac{\pi} \times \prod_{B \in \pi} G \sbrac{B}}$ where $P \pbrac{A}$ is the set of partitions of $A$. In other words, the composition $F \circ G$ produces the species of $F$-structures on collections of $G$-structures. The definition is, again, motivated by a correspondence with a certain operation on cycle indices: \[def:cipleth\] Let $f$ and $g$ be cycle indices. Then the *plethysm* $f \circ g$ is the cycle index $$\label{eq:cipleth} f \circ g = f \pbrac{g \pbrac{p_{1}, p_{2}, p_{3}, \dots}, g \pbrac{p_{2}, p_{4}, p_{6}, \dots}, \dots},$$ where $f \pbrac{a, b, \dots}$ denotes the cycle index $f$ with $a$ substituted for $p_{1}$, $b$ substituted for $p_{2}$, and so on.
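Anticipating the correspondence between composition and plethysm established below, the identity ‘a permutation is a set of cycles’, $\specname{S} = \specname{E} \circ \specname{C}$, gives a quick numerical check: specializing cycle indices to ordinary generating functions sends $p_{i} \mapsto \tilde{C}(x^{i})$ with $\tilde{C}(x) = x/(1-x)$, and the classical cycle index $\ci{\specname{E}} = \exp \pbrac{\sum_{i \geq 1} p_{i}/i}$ then predicts that unlabeled permutations of size $n$ are counted by the partitions of $n$. A sketch assuming SymPy (truncation order and names are ours):

```python
from sympy import symbols, series, exp, Rational

x = symbols('x')
N = 9  # truncation order; terms with i >= N only contribute O(x^N)

# Z_E = exp(sum_i p_i / i) with p_i -> C~(x^i) = x^i / (1 - x^i):
ogf = exp(sum(Rational(1, i) * x**i / (1 - x**i) for i in range(1, N)))
poly = series(ogf, x, 0, N).removeO()
counts = [int(poly.coeff(x, n)) for n in range(N)]

# An unlabeled permutation is determined by its cycle type, i.e. by a
# partition of n, so counts should be the partition numbers p(0),...,p(8).
assert counts == [1, 1, 2, 3, 5, 7, 11, 15, 22]
```

The resulting series is the familiar partition generating function $\prod_{i \geq 1} (1 - x^{i})^{-1}$, as expected.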
This definition is inherited directly from the theory of symmetric functions in infinitely many variables, where our $p_{i}$ are basis elements, as previously discussed. This operation on cycle indices then corresponds exactly to species composition: \[thm:speccompci\] For species $F$ and $G$ with $G \sbrac{\varnothing} = \varnothing$, the cycle index of their composition is $$\label{eq:speccompci} \ci{F \circ G} = \ci{F} \circ \ci{G}$$ where $\circ$ in the right-hand side is as in \[eq:cipleth\]. Many combinatorial structures admit natural descriptions as compositions of species. For example, every graph admits a unique decomposition as a (possibly empty) set of (nonempty) connected graphs, so we have the species identity $\specname{G} = \specname{E} \circ \specname{G}^{C}$ for $\specname{G}$ the species of graphs and $\specname{G}^{C}$ the species of nonempty connected graphs. Diligent readers may observe that the requirement that $G \sbrac{\varnothing} = \varnothing$ in \[def:speccomp\] is in fact logically vacuous, since the given construction would simply ignore the $\varnothing$-structures. However, the formula in \[thm:speccompci\] fails to be well-defined for any $\ci{G}$ with non-zero constant term (corresponding to species $G$ with nonempty $G \sbrac{\varnothing}$) unless $\ci{F}$ has finite degree (corresponding to species $F$ with support only in finitely many degrees). Consider the following example: Let $\specname{E}$ denote the species of sets, $\specname{E}_{3}$ its restriction to sets with three elements, $\numspecname{1}$ the species described above (which has one empty structure), and $X$ the species of singletons (which has one order-$1$ structure). If $\specname{E} \pbrac{\numspecname{1} + X}$ were well-defined, it would denote the species of ‘partially-labeled sets’.
However, for fixed cardinality $n$, there is an $\specname{E} \pbrac{\numspecname{1} + X}$-structure on $n$ labels *for each nonnegative integer $k$*—specifically, the set $\sbrac{n}$ together with $k$ unlabeled elements. Thus, there would be infinitely many structures of each cardinality for this ‘species’, so it is not in fact a species at all. However, the situation for $\specname{E}_{3} \pbrac{\numspecname{1} + X}$ is entirely different. A structure in this species is a $3$-set, some of whose elements are labeled. There are only four possible such structures: $\cbrac{*, *, *}$, $\cbrac{*, *, 1}$, $\cbrac{*, 1, 2}$, and $\cbrac{1, 2, 3}$, where $*$ denotes an unlabeled element and integers denote labeled elements. Moreover, by discarding the unlabeled elements, we can clearly see that $\specname{E}_{3} \pbrac{\numspecname{1} + X} = \sum_{i = 0}^{3} \specname{E}_{i}$. In our setting, we will not use this alternative notion of composition, so we will not develop it formally here. Several other binary operations on species are defined in the literature, including the Cartesian product $F \times G$, the functorial composition $F \square G$, and the inner plethysm $F \boxtimes G$ of [@travis:inpleth]. We will not use these here. However, we do introduce two unary operations: $\pointed{F}$ and $F'$. \[def:specderiv\] For a species $F$, define its *species derivative* to be the species $\deriv{F}$ given by $\deriv{F} \sbrac{A} = F \sbrac{A \cup \cbrac{*}}$ for $*$ an element chosen not in $A$ (say, the set $A$ itself). It is important to note that the label $*$ of an $\deriv{F}$-structure is *distinguished* from the other labels; the automorphisms of the species $\deriv{F}$ cannot interchange $*$ with another label. Thus, species differentiation is appropriate for cases where we want to remove one ‘position’ in a structure.
For example, for $\specname{L}$ the species of linear orders and $\specname{C}$ the species of cyclic orders, we have $\specname{L} = \deriv{\specname{C}}$; a cyclic order on the set $A \cup \cbrac{*}$ is naturally associated with the linear order on the set $A$ produced by removing $*$. Terming this operation ‘differentiation’ is justified by its effect on cycle indices: \[thm:specderivci\] For a species $F$, the cycle index of its derivative is given by $$\label{eq:specderivci} \civars{\deriv{F}}{p_{1}, p_{2}, \dots} = \frac{\partial}{\partial p_{1}} \civars{F}{p_{1}, p_{2}, \dots}.$$ We note that we cannot in general recover $\ci{F}$ from $\ci{\deriv{F}}$, since there may be terms in $\ci{F}$ which have no $p_{1}$-component (corresponding to permutations with no fixed points that nevertheless fix some $F$-structure). Finally, we introduce a variant of the species derivative which allows us to *label* the distinguished element $*$: \[def:specpoint\] For a species $F$, define its *pointed species* to be the species $\pointed{F}$ given by $\pointed{F} \sbrac{A} = F \sbrac{A} \times A$ (that is, pairs of the form $\pbrac{f, a}$ where $f$ is an $F$-structure on $A$ and $a \in A$) with transport $\pointed{F} \sbrac{\sigma} \pbrac{f, a} = \pbrac{F \sbrac{\sigma} \pbrac{f}, \sigma \pbrac{a}}$. We can also write $\pointed{F} = X \cdot \deriv{F}$ for $X$ the species of singletons. In other words, an $\pointed{F} \sbrac{A}$-structure is an $F \sbrac{A}$-structure with a distinguished element taken from the set $A$ (as opposed to $\deriv{F}$, where the distinguished element is new). Thus, species pointing is appropriate for cases such as those of rooted trees: for $\mathfrak{a}$ the species of trees and $\specname{A}$ the species of rooted trees, we have $\specname{A} = \pointed{\mathfrak{a}}$.
This definition leads directly to the following: \[thm:specpointci\] For a species $F$, the cycle index of its corresponding pointed species is given by $$\label{eq:specpointci} \ci{\pointed{F}} = \ci{X} \cdot \ci{\deriv{F}}.$$ Note that, again, we cannot in general recover $\ci{F}$ from $\ci{\pointed{F}}$, for the same reasons as in the case of $\ci{\deriv{F}}$. Multisort species {#s:mult} ----------------- A species $F$ as defined in \[def:species\] is a functor $F: \catname{FinBij} \to \catname{FinSet}$; an $F$-structure in $F \sbrac{A}$ takes its labels from the set $A$. The tool-set so produced is adequate to describe many classes of combinatorial structures. However, there is one particular structure type which it cannot effectively capture: the notion of distinct *sorts* of elements within a structure. Perhaps the most natural example of this is the case of $k$-colored graphs, where every vertex has one of $k$ colors with the requirement that no pair of adjacent vertices shares a color. Automorphisms of such a graph must preserve the colorings of the vertices, which is not a natural restriction to impose in the calculation of the classical cycle index in \[eq:cycinddef\]. We thus incorporate the notion of sorts directly into a new definition: \[def:ksortset\] For a fixed integer $k \geq 1$, define a *$k$-sort set* to be an ordered $k$-tuple of sets. Say that a $k$-sort set is *finite* if each component set is finite; in that case, its *$k$-sort cardinality* is the ordered tuple of its components’ set cardinalities. Further, define a *$k$-sort function* to be an ordered $k$-tuple of set functions which acts componentwise on $k$-sort sets. For two $k$-sort sets $U$ and $V$, a $k$-sort function $\sigma$ is a *$k$-sort bijection* if each component is a set bijection.
For $k$-sort sets of cardinality $\pbrac{c_{1}, c_{2}, \dots, c_{k}}$, denote by $\symgp{c_{1}, c_{2}, \dots, c_{k}} = \symgp{c_{1}} \times \symgp{c_{2}} \times \dots \times \symgp{c_{k}}$ the *$k$-sort symmetric group*, the elements of which are in natural bijection with $k$-sort bijections from a $k$-sort set to itself. Finally, denote by $\catname{FinBij}^{k}$ the category of finite $k$-sort sets with $k$-sort bijections. We can then define an extension of species to the context of $k$-sort sets: \[def:multisort\] A *$k$-sort species* $F$ is a functor $F: \catname{FinBij}^{k} \to \catname{FinSet}$ which associates to each $k$-sort set $U$ a set $F \sbrac{U}$ of *$k$-sort $F$-structures* and to each $k$-sort bijection $\sigma: U \to V$ a bijection $F \sbrac{\sigma}: F \sbrac{U} \to F \sbrac{V}$. Functoriality once again imposes naturality conditions on these associations. Just as in the theory of ordinary species, to each multisort species is associated a power series, its *cycle index*, which carries essential combinatorial data about the automorphism structure of the species. To keep track of the multiple sorts of labels, however, we require multiple sets of indeterminates. Where in ordinary cycle indices we simply used $p_{i}$ for the $i$th indeterminate, we now use $p_{i} \sbrac{j}$ for the $i$th indeterminate of the $j$th sort. In some contexts with small $k$, we will denote our sorts with letters (saying, for example, that we have ‘$X$ labels’ and ‘$Y$ labels’), in which case we will write $p_{i} \sbrac{x}$, $p_{i} \sbrac{y}$, and so forth. In natural analogy to \[def:cycind\], the formula for the cycle index of a $k$-sort species $F$ is given by $$\begin{gathered} \label{eq:multcycinddef} \civars{F}{p_{1} \sbrac{1}, p_{2} \sbrac{1}, \dots; p_{1} \sbrac{2}, p_{2} \sbrac{2}, \dots; \dots; p_{1} \sbrac{k}, p_{2} \sbrac{k}, \dots} = \\ \sum_{\substack{n \geq 0 \\ a_{1} + a_{2} + \dots + a_{k} = n}} \frac{1}{a_{1}! a_{2}!
\dots a_{k}!} \sum_{\sigma \in \symgp{a_{1}, a_{2}, \dots, a_{k}}} \fix \pbrac{F \sbrac{\sigma}} p^{\sigma_{1}}_{\sbrac{1}} p^{\sigma_{2}}_{\sbrac{2}} \dots p^{\sigma_{k}}_{\sbrac{k}}.\end{gathered}$$ where by $p^{\sigma_{i}}_{\sbrac{i}}$ we denote the product $\prod_{j} \pbrac{p_{j} \sbrac{i}}^{\pbrac{\sigma_{i}}_{j}}$ where $\pbrac{\sigma_{i}}_{j}$ is the number of $j$-cycles of $\sigma_{i}$. The operations of addition and multiplication extend to the multisort context naturally. To make sense of differentiation and pointing, we need only specify a sort from which to draw the element or label which is marked; we then write $\deriv[X]{F}$ and $\pointed[X]{F}$ for the derivative and pointing respectively of $F$ ‘in the sort $X$’, which is to say with its distinguished element drawn from that sort. When $F$ is a $1$-sort species and $G$ a $k$-sort species, the construction of the $k$-sort species $F \circ G$ is natural; in other settings, we will not define a general notion of composition of multisort species. $\Gamma$-species and quotient species {#s:quot} ------------------------------------- It is frequently the case that interesting combinatorial problems admit elegant descriptions in terms of quotients of a class of structures $F$ under the action of a group $\Gamma$. In some cases, this group action will be *structural* in the sense that it commutes with permutations of labels in the species $F$, or, informally, that it is independent of the choice of labelings on each $F$-structure. In such a case, we may also say that $\Gamma$ acts on ‘unlabeled structures’ of the class $F$. \[ex:graphcomp\] Let $\specname{G}$ denote the species of simple graphs. Let the group $\symgp{2}$ act on such graphs by letting the identity act trivially and letting the non-trivial element $\pmt{(12)}$ send each graph to its complement (that is, by replacing each edge with a non-edge and each non-edge with an edge). 
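Before proceeding, the key requirement on such an action can be checked mechanically. The following sketch (our own illustration, not part of the original development; the function names are ours) verifies on all graphs with four vertices that complementation commutes with transport of structures, i.e. that relabeling and then complementing agrees with complementing and then relabeling:

```python
from itertools import combinations, permutations

def complement(n, edges):
    """Replace each edge with a non-edge and each non-edge with an edge."""
    return frozenset(frozenset(p) for p in combinations(range(n), 2)) - edges

def relabel(edges, perm):
    """Transport of structures: push a graph forward along a relabeling."""
    return frozenset(frozenset(perm[v] for v in e) for e in edges)

n = 4
vertex_pairs = [frozenset(p) for p in combinations(range(n), 2)]
all_graphs = [frozenset(es) for r in range(len(vertex_pairs) + 1)
              for es in combinations(vertex_pairs, r)]

# The complementation action commutes with every relabeling.
assert all(relabel(complement(n, g), p) == complement(n, relabel(g, p))
           for g in all_graphs for p in permutations(range(n)))
```

The same check, run over any vertex set, is exactly the commuting-square condition imposed below.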
This ‘complementation action’ is structural in the sense described previously. We note that an action being structural is exactly the condition that each $\gamma \in \Gamma$ acts by a species isomorphism $\gamma: F \to F$ in the sense of \[def:specmaptypes\]. We now incorporate such species-compatible actions into a new definition: \[def:gspecies\] For $\Gamma$ a group, a *$\Gamma$-species* $F$ is a combinatorial species $F$ together with an action of $\Gamma$ on $F$-structures by species isomorphisms. Explicitly, for $F$ a $\Gamma$-species, the diagram $$\begin{tikzpicture}[every node/.style={fill=white}] \matrix (m) [matrix of math nodes, row sep=4em, column sep=4em, text height=1.5ex, text depth=0.25ex] { A & F \sbrac{A} & F \sbrac{A} \\ B & F \sbrac{B} & F \sbrac{B} \\ }; \path[->,font=\scriptsize] (m-1-1) edge node {$F$} (m-1-2) edge node {$\sigma$} (m-2-1) (m-1-2) edge node {$\gamma_{A}$} (m-1-3) edge node {$F \sbrac{\sigma}$} (m-2-2) (m-1-3) edge node {$F \sbrac{\sigma}$} (m-2-3) (m-2-1) edge node {$F$} (m-2-2) (m-2-2) edge node {$\gamma_{B}$} (m-2-3); \end{tikzpicture}$$ commutes for every $\gamma \in \Gamma$ and every set bijection $\sigma: A \to B$. (Note that commutativity of the left square is required for $F$ to be a species at all.) $\specname{G}$ is then a $\symgp{2}$-species with the action described in \[ex:graphcomp\]. For such a $\Gamma$-species, of course, it is then meaningful to pass to the quotient under the action by $\Gamma$: \[def:qspecies\] For $F$ a $\Gamma$-species, define $\nicefrac{F}{\Gamma}$, the *quotient species* of $F$ under the action of $\Gamma$, to be the species of $\Gamma$-orbits of $F$-structures. \[ex:graphcompquot\] Consider $\specname{G}$ as a $\symgp{2}$-species in light of the action defined in \[ex:graphcomp\]. The structures of the quotient species $\nicefrac{\specname{G}}{\symgp{2}}$ are then pairs of complementary graphs.
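This quotient can be made concrete computationally. The sketch below (our own code, not from the original text) forms the complementation orbits of the isomorphism classes of graphs on four vertices: the eleven classes collapse into six orbits, with the path $P_{4}$ forming the unique singleton orbit, since it is self-complementary.

```python
from itertools import combinations, permutations

n = 4
pairs = list(combinations(range(n), 2))

def canon(edges):
    """Canonical representative (lexicographically least relabeling) of a graph."""
    return min(tuple(sorted(tuple(sorted((p[u], p[v]))) for (u, v) in edges))
               for p in permutations(range(n)))

def complement(edges):
    return tuple(q for q in pairs if q not in edges)

classes = {canon(es) for r in range(len(pairs) + 1)
           for es in combinations(pairs, r)}
orbits = {frozenset({c, canon(complement(c))}) for c in classes}
self_complementary = [c for c in classes if canon(complement(c)) == c]

assert len(classes) == 11    # unlabeled graphs on 4 vertices
assert len(orbits) == 6      # quotient structures: pairs of complementary graphs
assert self_complementary == [canon(((0, 1), (1, 2), (2, 3)))]  # only the path P_4
```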
We may choose to interpret each such pair as representing a $2$-partition of the set of vertex pairs of the complete graph (that is, of edges of the complete graph). More natural examples of quotient structures will present themselves in later chapters. For each label set $A$, let $\quomap{\Gamma} \sbrac{A}: F \sbrac{A} \to \nicefrac{F}{\Gamma} \sbrac{A}$ denote the map sending each $F$-structure over $A$ to its quotient $\nicefrac{F}{\Gamma}$-structure over $A$. Then $\quomap{\Gamma} \sbrac{A}$ is a surjection for each $A$, and the requirement that $\Gamma$ acts by natural transformations implies that the induced functor map $\quomap{\Gamma}: F \to \nicefrac{F}{\Gamma}$ is a natural transformation. Thus, the passage from $F$ to $\nicefrac{F}{\Gamma}$ is a species cover in the sense of \[def:specmaptypes\]. A brief exposition of the notion of quotient species may be found in [@bll:species §3.6], and a more thorough exposition (in French) in [@bous:species]. Our motivation, of course, is that combinatorial structures of a given class are often ‘naturally’ identified with orbits of structures of another, larger class under the action of some group. Our goal will be to compute the cycle index of the species $\nicefrac{F}{\Gamma}$ in terms of that of $F$ and information about the $\Gamma$-action, so that enumerative data about the quotient species can be extracted. As an intermediate step to the computation of the cycle index associated to this quotient species, we associate a cycle index to a $\Gamma$-species $F$ that keeps track of the needed data about the $\Gamma$-action. \[def:gcycind\] For a $\Gamma$-species $F$, define the $\Gamma$-cycle index $\gci{\Gamma}{F}$ as in [@hend:specfield]: for each $\gamma \in \Gamma$, let $$\gcivars{\Gamma}{F}{\gamma} = \sum_{n \geq 0} \frac{1}{n!} \sum_{\sigma \in \symgp{n}} \fix \pbrac{\gamma \cdot F \sbrac{\sigma}} p_{\sigma} \label{eq:gcycinddef}$$ with $p_{\sigma}$ as in \[eq:cycinddef\].
We will call such an object (formally a map from $\Gamma$ to the ring $\ringname{Q} \sbrac{\sbrac{p_{1}, p_{2}, \dots}}$ of symmetric functions with rational coefficients in the $p$-basis) a *$\Gamma$-cycle index* even when it is not explicitly the $\Gamma$-cycle index of a $\Gamma$-species, and we will sometimes call $\gcielt{\Gamma}{F}{\gamma}$ the “$\gamma$ term of $\gci{\Gamma}{F}$”. So the coefficients in the power series count the fixed points of the *combined* action of a permutation and the group element $\gamma$. Note that, in particular, the classical (‘ordinary’) cycle index may be recovered as $\ci{F} = \gcielt{\Gamma}{F}{e}$ for any $\Gamma$-species $F$. The algebraic relationships between ordinary species and their cycle indices generally extend without modification to the $\Gamma$-species context, as long as appropriate allowances are made. The actions on cycle indices of $\Gamma$-species addition and multiplication are exactly as in the ordinary species case considered componentwise: \[def:gspecsumprod\] For two $\Gamma$-species $F$ and $G$, the $\Gamma$-cycle index of their sum $F + G$ is given by $$\label{eq:gspecsum} \gcielt{\Gamma}{F + G}{\gamma} = \gcielt{\Gamma}{F}{\gamma} + \gcielt{\Gamma}{G}{\gamma}$$ and the $\Gamma$-cycle index of their product $F \cdot G$ is given by $$\label{eq:gspecprod} \gcielt{\Gamma}{F \cdot G}{\gamma} = \gcielt{\Gamma}{F}{\gamma} \cdot \gcielt{\Gamma}{G}{\gamma}$$ The action of composition, which in ordinary species corresponds to plethysm of cycle indices, can also be extended: \[def:gspeccomp\] For two $\Gamma$-species $F$ and $G$, define their *composition* to be the $\Gamma$-species $F \circ G$ with structures given by $\pbrac{F \circ G} \sbrac{A} = \sum_{\pi \in P \pbrac{A}} \pbrac{F \sbrac{\pi} \times \prod_{B \in \pi} G \sbrac{B}}$ where $P \pbrac{A}$ is the set of partitions of $A$ and where $\gamma \in \Gamma$ acts on a $\pbrac{F \circ G}$-structure by acting on the $F$-structure and the $G$-structures
independently. The requirement in \[def:gspecies\] that the action of $\Gamma$ commutes with transport implies that this is well-defined. Informally, for $\Gamma$-species $F$ and $G$, we have defined the composition $F \circ G$ to be the $\Gamma$-species of $F$-structures of $G$-structures, where $\gamma \in \Gamma$ acts on an $\pbrac{F \circ G}$-structure by acting independently on the $F$-structure and each of its associated $G$-structures. A formula similar to that of \[thm:speccompci\] requires a definition of the plethysm of $\Gamma$-symmetric functions, here taken from [@hend:specfield §3]: \[def:gcipleth\] For two $\Gamma$-cycle indices $f$ and $g$, their *plethysm* $f \circ g$ is a $\Gamma$-cycle index defined by $$\pbrac{f \circ g} \pbrac{\gamma} = f \pbrac{\gamma} \pbrac{g \pbrac{\gamma} \pbrac{p_{1}, p_{2}, p_{3}, \dots}, g \pbracs[big]{\gamma^{2}} \pbrac{p_{2}, p_{4}, p_{6}, \dots}, \dots}. \label{eq:gcipleth}$$ This definition of $\Gamma$-cycle index plethysm is then indeed the correct operation to pair with the composition of $\Gamma$-species: \[thm:gspeccompci\] If $A$ and $B$ are $\Gamma$-species and $B \pbrac{\varnothing} = \varnothing$, then $$\label{eq:gspeccompci} \gci{\Gamma}{A \circ B} = \gci{\Gamma}{A} \circ \gci{\Gamma}{B}.$$ Thus, $\Gamma$-species admit the same sorts of ‘nice’ correspondences between structural descriptions (in terms of functorial algebra) and enumerative characterizations (in terms of cycle indices) that ordinary species do. However, to make use of this theory for enumerative purposes, we also need to be able to pass from the $\Gamma$-cycle index of a $\Gamma$-species to the ordinary cycle index of its associated quotient species under the action of $\Gamma$.
This will allow us to adopt a useful strategy: if we can characterize some difficult-to-enumerate combinatorial structure as quotients of more accessible structures, we will be able to apply the full force of species theory to the enumeration of the prequotient structures, *then* pass to the quotient when it is convenient. Exactly this approach will serve as the core of both of the following chapters. Since we intend to enumerate orbits under a group action, we apply a generalization of Burnside’s Lemma found in [@gessel:laginvspec Lemma 5]: \[lem:grouporbits\] If $\Gamma$ and $\Delta$ are finite groups and $S$ a set with a $\pbrac{\Gamma \times \Delta}$-action, for any $\delta \in \Delta$ the number of $\Gamma$-orbits fixed by $\delta$ is $\frac{1}{\abs{\Gamma}} \sum_{\gamma \in \Gamma} \fix \pbrac{\gamma, \delta}$. Recall from \[eq:cycinddef\] that, to compute the cycle index of a species, we need to enumerate the fixed points of each $\sigma \in \symgp{n}$. However, to do this in the quotient species $\nicefrac{F}{\Gamma}$ is by definition to count the fixed $\Gamma$-orbits of $\sigma$ in $F$ under commuting actions of $\symgp{n}$ and $\Gamma$ (that is, under an $\pbrac{\symgp{n} \times \Gamma}$-action). Thus, \[lem:grouporbits\] implies the following: \[thm:qsci\] For a $\Gamma$-species $F$, the ordinary cycle index of the quotient species $\nicefrac{F}{\Gamma}$ is given by $$\label{eq:quotcycind} \ci{F / \Gamma} = \qgci{\Gamma}{F} \defeq \frac{1}{\abs{\Gamma}} \sum_{\gamma \in \Gamma} \gcielt{\Gamma}{F}{\gamma} = \frac{1}{\abs{\Gamma}} \sum_{\substack{n \geq 0 \\ \sigma \in \symgp{n} \\ \gamma \in \Gamma}} \frac{1}{n!} \fix \pbrac{\gamma \cdot F \sbrac{\sigma}} p_{\sigma},$$ where we define $\qgci{\Gamma}{F} = \frac{1}{\abs{\Gamma}} \sum_{\gamma \in \Gamma} \gcielt{\Gamma}{F}{\gamma}$ for future convenience. Note that this same result on cycle indices is implicit in [@bous:species §2.2.3].
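The identity in \[eq:quotcycind\] can be exercised numerically on the complementation action of \[ex:graphcomp\]. The sketch below (our own code, not part of the text) counts unlabeled $\nicefrac{\specname{G}}{\symgp{2}}$-structures on four labels in two ways: as the Burnside-style average of fixed-point counts of the combined $\pbrac{\symgp{4} \times \symgp{2}}$-action, and directly as orbits of isomorphism classes.

```python
from itertools import combinations, permutations
from math import factorial

n = 4
pairs = [tuple(p) for p in combinations(range(n), 2)]
graphs = [frozenset(es) for r in range(len(pairs) + 1)
          for es in combinations(pairs, r)]

def relabel(edges, p):
    """Transport a graph along the relabeling p."""
    return frozenset(tuple(sorted((p[u], p[v]))) for (u, v) in edges)

def complement(edges):
    """The nontrivial element of S_2 acts by complementation."""
    return frozenset(q for q in pairs if q not in edges)

# Average number of fixed points of the combined (S_n x S_2)-action.
total = 0
for p in permutations(range(n)):
    for tau in (False, True):
        for g in graphs:
            h = relabel(g, p)
            if tau:
                h = complement(h)
            total += (h == g)
burnside = total // (factorial(n) * 2)

# Direct count of complementation-orbits of isomorphism classes.
def canon(edges):
    return min(tuple(sorted(relabel(edges, p))) for p in permutations(range(n)))

classes = {canon(g) for g in graphs}
orbits = {frozenset({c, canon(complement(c))}) for c in classes}

assert burnside == len(orbits) == 6  # 11 isomorphism classes collapse to 6 orbits
```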
With it, we can compute explicit enumerative data for a quotient species using cycle-index information of the original $\Gamma$-species with respect to the group action, as desired. Recall from \[thm:ciegf,thm:ciogf\] that the exponential generating function $F \pbrac{x}$ of labeled $F$-structures and the ordinary generating function $\tilde{F} \pbrac{x}$ of unlabeled $F$-structures may both be computed from the cycle index $\ci{F}$ of an ordinary species $F$ by simple substitutions. In the $\Gamma$-species context, we may perform similar substitutions to derive analogous generating functions. \[thm:gciegf\] The exponential generating function $F_{\gamma} \pbrac{x}$ of labeled $\gamma$-invariant $F$-structures is $$\label{eq:gciegf} F_{\gamma} \pbrac{x} = \gcieltvars{\Gamma}{F}{\gamma}{x, 0, 0, \dots}.$$ \[thm:gciogf\] The ordinary generating function $\tilde{F}_{\gamma} \pbrac{x}$ of unlabeled $\gamma$-invariant $F$-structures is $$\label{eq:gciogf} \tilde{F}_{\gamma} \pbrac{x} = \gcieltvars{\Gamma}{F}{\gamma}{x, x^{2}, x^{3}, \dots}.$$ These theorems follow directly from \[eq:ciegf,eq:ciogf\], thinking of $F_{\gamma} \pbrac{x}$ and $\tilde{F}_{\gamma} \pbrac{x}$ as enumerating the combinatorial class of $F$-structures which are invariant under $\gamma$. Note that the notion of ‘unlabeled $\gamma$-invariant $F$-structures’ is always well-defined precisely because \[def:gspecies\] requires that the action of $\Gamma$ commutes with transport of structures.
From these results and \[thm:qsci\], we can then conclude: \[qgciegf\] The exponential generating function $F \pbrac{x}$ of labeled $\nicefrac{F}{\Gamma}$-structures is $$\label{eq:qgciegf} F \pbrac{x} = \frac{1}{\abs{\Gamma}} \sum_{\gamma \in \Gamma} F_{\gamma} \pbrac{x}.$$ Similarly, \[qgciogf\] The ordinary generating function $\tilde{F} \pbrac{x}$ of unlabeled $\nicefrac{F}{\Gamma}$-structures is $$\label{eq:qgcogf} \tilde{F} \pbrac{x} = \frac{1}{\abs{\Gamma}} \sum_{\gamma \in \Gamma} \tilde{F}_{\gamma} \pbrac{x}.$$ Note also that all of the above extends naturally into the multisort species context. We will use this extensively in \[c:ktrees\]. It also extends naturally to weighted contexts, but we will not apply this extension here. The species of bipartite blocks {#c:bpblocks} =============================== Introduction {#s:bpintro} ------------ We first apply the theory of quotient species to the enumeration of bipartite blocks. \[def:bcgraph\] A *bicolored graph* is a graph $\Gamma$ each vertex of which has been assigned one of two colors (here, black and white) such that each edge connects vertices of different colors. A *bipartite graph* (sometimes called *bicolorable*) is a graph $\Gamma$ which admits such a coloring. There is an extensive literature about bicolored and bipartite graphs, including enumerative results for bicolored graphs [@har:bicolored], bipartite graphs both allowing [@han:bipartite] and prohibiting [@harprins:bipartite] isolated points, and bipartite blocks [@harrob:bipblocks]. However, this final enumeration was previously completed only in the labeled case. By considering the problem in light of the theory of $\Gamma$-species, we develop a more systematic understanding of the structural relationships between these various classes of graphs, which allows us to enumerate all of them in both labeled and unlabeled settings.
Throughout this chapter, we denote by $\specname{BC}$ the species of bicolored graphs and by $\specname{BP}$ the species of bipartite graphs. The prefix $\specname{C}$ will indicate the connected analogue of such a species. We are motivated by the graph-theoretic fact that each *connected* bipartite graph may be identified with exactly two bicolored graphs which are color-dual. In other words, a connected bipartite graph is (by definition or by easy exercise, depending on your approach) an orbit of connected bicolored graphs under the action of $\symgp{2}$ where the nontrivial element $\tau$ reverses all vertex colors. We will hereafter treat all the various species of bicolored graphs as $\symgp{2}$-species with respect to this action and use the theory developed in \[s:quot\] to pass to bipartite graphs. Although the theory of multisort species presented in \[s:mult\] is in general well-suited to the study of colored graphs, we will not need it here. The restrictions that vertex colorings place on automorphisms of bicolored graphs are simple enough that we can deal with them directly. Bicolored graphs {#s:bcgraph} ---------------- We begin our investigation by directly computing the $\symgp{2}$-cycle index for the species $\specname{BC}$ of bicolored graphs with the color-reversing $\symgp{2}$-action described previously. We will then use various methods from the species algebra of \[c:species\] to pass to various other species. ### Computing $\gcielt{\symgp{2}}{\specname{BC}}{e}$ {#ss:ecibc} We construct the cycle index for the species $\specname{BC}$ of bicolored graphs in the classical way, which in light of our $\symgp{2}$-action will give $\gcielt{\symgp{2}}{\specname{BC}}{e}$. 
Recall the formula for the cycle index of a $\Gamma$-species in \[eq:gcycinddef\]: $$\gcielt{\Gamma}{F}{\gamma} = \sum_{n \geq 0} \frac{1}{n!} \sum_{\sigma \in \symgp{n}} \fix \pbrac{\gamma \cdot F \sbrac{\sigma}} p_{\sigma}.$$ Thus, for each $n > 0$ and each permutation $\pi \in \symgp{n}$, we must count bicolored graphs on $\sbrac{n}$ for which $\pi$ is a color-preserving automorphism. To simplify some future calculations, we omit empty graphs and define $\specname{BC} \sbrac{\varnothing} = \varnothing$. We note that the *number* of such graphs in fact depends only on the cycle type $\lambda \vdash n$ of the permutation $\pi$, so we can use the cycle index formula in \[eq:cycinddefpart\] interpreted as a $\Gamma$-cycle index identity. Fix some $n > 0$ and let $\lambda \vdash n$. We wish to count bicolored graphs for which a chosen permutation $\pi$ of cycle type $\lambda$ is a color-preserving automorphism. Each cycle of the permutation must correspond to a monochromatic subset of the vertices, so we may construct graphs by drawing bicolored edges into a given colored vertex set. If we draw some particular bicolored edge, we must also draw every other edge in its orbit under $\pi$ if $\pi$ is to be an automorphism of the graph. Moreover, every bicolored graph for which $\pi$ is an automorphism may be constructed in this way. Therefore, we direct our attention first to counting these edge orbits for a fixed coloring; we will then count colorings with respect to these results to get our total cycle index. Consider an edge connecting two cycles of lengths $m$ and $n$; the length of its orbit under the permutation is $\lcm \pbrac{m, n}$, so the number of such orbits of edges between these two cycles is $mn / \lcm \pbrac{m, n} = \gcd \pbrac{m, n}$. For an example in the case $m = 4, n = 2$, see \[fig:exbcecycle\].
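This orbit count is easy to confirm by brute force. The following sketch (our own code, not part of the text) rotates an $m$-cycle and an $n$-cycle simultaneously and counts the orbits of the $mn$ edges between them, recovering $\gcd \pbrac{m, n}$:

```python
from math import gcd

def edge_orbits(m, n):
    """Orbits of edges between an m-cycle and an n-cycle under the
    permutation that rotates each cycle by one step."""
    def step(edge):
        a, b = edge
        return ((a + 1) % m, (b + 1) % n)
    remaining = {(a, b) for a in range(m) for b in range(n)}
    count = 0
    while remaining:
        count += 1
        e = next(iter(remaining))
        while e in remaining:
            remaining.remove(e)
            e = step(e)
    return count

# The m = 4, n = 2 case from the text, plus a sweep of small cases.
assert edge_orbits(4, 2) == gcd(4, 2) == 2
assert all(edge_orbits(m, n) == gcd(m, n)
           for m in range(1, 9) for n in range(1, 9))
```

Each orbit has length $\lcm \pbrac{m, n}$, since an edge returns to itself exactly when both rotations have completed whole cycles.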
The number of orbits for a fixed coloring is then $\sum \gcd \pbrac{m, n}$ where the sum is over the multiset of all cycle lengths $m$ of white cycles and $n$ of black cycles in the permutation $\pi$. We may then construct any possible graph fixed by our permutation by making a choice of a subset of these orbits to fill with edges, so the total number of such graphs is $\prod 2^{\gcd \pbrac{m, n}}$ for a fixed coloring. We now turn our attention to the possible colorings of the graph which are compatible with a permutation of specified cycle type $\lambda$. We split our partition into two subpartitions, writing $\lambda = \mu \cup \nu$, where partitions are treated as multisets and $\cup$ is the multiset union, and designate $\mu$ to represent the white cycles and $\nu$ the black. Then the total number of graphs fixed by such a permutation with a specified decomposition is $$\fix \pbrac{\mu, \nu} = \prod_{\substack{i \in \mu \\ j \in \nu}} 2^{\gcd \pbrac{i, j}}$$ where the product is over the elements of $\mu$ and $\nu$ taken as multisets. However, since $\mu$ and $\nu$ represent white and black cycles respectively, it is important to distinguish *which* cycles of $\lambda$ are taken into each. The $\lambda_{i}$ $i$-cycles of $\lambda$ can be distributed into $\mu$ and $\nu$ in $\binom{\lambda_{i}}{\mu_{i}} = \lambda_{i}! / \pbrac{\mu_{i}! \nu_{i}!}$ ways, so in total there are $\prod_{i} \lambda_{i}! / \pbrac{\mu_{i}! \nu_{i}!} = z_{\lambda} / \pbrac{z_{\mu} z_{\nu}}$ decompositions.
Thus, $$\fix \pbrac{\lambda} = \sum_{\mu \cup \nu = \lambda} \frac{z_{\lambda}}{z_{\mu} z_{\nu}} \fix \pbrac{\mu, \nu} = \sum_{\mu \cup \nu = \lambda} \frac{z_{\lambda}}{z_{\mu} z_{\nu}} \prod_{\substack{i \in \mu \\ j \in \nu}} 2^{\gcd \pbrac{i, j}}.$$ Therefore we conclude: $$\label{eq:ecibc} \gcielt{\symgp{2}}{\specname{BC}}{e} = \sum_{n > 0} \sum_{\substack{\mu, \nu \\ \mu \cup \nu \vdash n}} \frac{p_{\mu \cup \nu}}{z_{\mu} z_{\nu}} \prod_{i, j} 2^{\gcd \pbrac{\mu_{i}, \nu_{j}}}$$ Explicit formulas for the generating function for unlabeled bicolored graphs were obtained in [@har:bicolored] using conventional Pólya-theoretic methods. Conceptually, this enumeration in fact largely mirrors our own. Harary uses the algebra of the classical cycle index of the ‘line group[^3]’ of the complete bicolored graph of which any given bicolored graph is a spanning subgraph. He then enumerates orbits of edges under these groups using the Pólya enumeration theorem. This is clearly analogous to our procedure, which enumerates the orbits of edges under each specific permutation of vertices. ### Calculating $\gcielt{\symgp{2}}{\specname{BC}}{\tau}$ {#ss:tcibc} Recall that the nontrivial element $\tau \in \symgp{2}$ acts on bicolored graphs by reversing all colors. We again consider the cycles in the vertex set $\sbrac{n}$ induced by a permutation $\pi \in \symgp{n}$ and use the partition $\lambda$ corresponding to the cycle type of $\pi$ for bookkeeping. We then wish to count bicolored graphs on $\sbrac{n}$ for which $\tau \cdot \pi$ is an automorphism, which is to say that $\pi$ itself is a color-*reversing* automorphism. Once again, the number of bicolored graphs for which $\pi$ is a color-reversing automorphism is in fact dependent only on the cycle type $\lambda$. Each cycle of vertices must be color-alternating and hence of even length, so our partition $\lambda$ must have only even parts.
Once this condition is satisfied, edges may be drawn either within a single cycle or between two cycles, and as before if we draw in any edge we must draw in its entire orbit under $\pi$ (since $\pi$ is to be an automorphism of the underlying graph). Moreover, all graphs for which $\pi$ is a color-reversing automorphism and with a fixed coloring may be constructed in this way, so it suffices to count such edge orbits and then consider how colorings may be assigned. Consider a cycle of length $2n$; we hereafter describe such a cycle as having *semilength* $n$. There are exactly $n^{2}$ possible white-black edges in such a cycle. If $n$ is odd, diametrically opposed vertices have opposite colors, so we can have an edge of length $l = n$ (in the sense of connecting two vertices which are $l$ steps apart in the cycle), and in such a case the orbit length is exactly $n$ and there is exactly one orbit. See \[fig:exbctincycd\] for an example of this case. However, if $n$ is odd but $l \neq n$, the orbit length is $2n$, so the number of such orbits is $\frac{n^{2} - n}{2n}$. Hence, the total number of orbits for $n$ odd is $\frac{n^2 + n}{2n} = \ceil{\frac{n}{2}}$. Similarly, if $n$ is even, all orbits are of length $2n$, so the total number of orbits is $\frac{n^{2}}{2n} = \frac{n}{2} = \ceil{\frac{n}{2}}$ also. See \[fig:exbctincyce\] for an example of each of these cases. Now consider an edge to be drawn between two cycles of semilengths $m$ and $n$. The total number of possible white-black edges is $2mn$, each of which has an orbit length of $\lcm \pbrac{2m, 2n} = 2 \lcm \pbrac{m, n}$. Hence, the total number of orbits is $2mn / \pbrac{2 \lcm \pbrac{m, n}} = \gcd \pbrac{m, n}$. Altogether, then, the number of orbits for a fixed coloring of a permutation of cycle type $2 \lambda$ (denoting the partition obtained by doubling every part of $\lambda$) is $\sum_{i} \ceil{\frac{\lambda_{i}}{2}} + \sum_{i < j} \gcd \pbrac{\lambda_{i}, \lambda_{j}}$.
All valid bicolored graphs for a fixed coloring for which $\pi$ is a color-reversing automorphism may be obtained uniquely by making some choice of a subset of this collection of orbits, just as in \[ss:ecibc\]. Thus, the total number of possible graphs for a given vertex coloring is $$\prod_{i} 2^{\ceil{\frac{\lambda_{i}}{2}}} \prod_{i < j} 2^{\gcd \pbrac{\lambda_{i}, \lambda_{j}}},$$ which we note is independent of the choice of coloring. For a permutation of cycle type $2\lambda$, which has $l \pbrac{\lambda}$ cycles, there are then $2^{l \pbrac{\lambda}}$ colorings compatible with our requirement that each cycle is color-alternating, which we multiply by the previous count to obtain the total number of graphs for any permutation $\pi$ with cycle type $2 \lambda$. Therefore we conclude: $$\label{eq:tcibc} \gcielt{\symgp{2}}{\specname{BC}}{\tau} = \sum_{\substack{n > 0 \\ \text{$n$ even}}} \sum_{\lambda \vdash \frac{n}{2}} 2^{l \pbrac{\lambda}} \frac{p_{2 \lambda}}{z_{2 \lambda}} \prod_{i} 2^{\ceil{\frac{\lambda_{i}}{2}}} \prod_{i < j} 2^{\gcd \pbrac{\lambda_{i}, \lambda_{j}}}$$ Connected bicolored graphs {#s:cbc} -------------------------- As noted in the introduction of this section, we may pass from bicolored to bipartite graphs by taking a quotient under the color-reversing action of $\symgp{2}$ only in the connected case. Thus, we must pass from the species $\specname{BC}$ to the species $\specname{CBC}$ of connected bicolored graphs to continue. It is a standard principle of graph enumeration that a graph may be decomposed uniquely into (and thus species-theoretically identified with) the set of its connected components. We must, of course, require that the component structures are nonempty to ensure that the construction is well-defined, as discussed in \[s:specalg\]. This same relationship holds in the case of bicolored graphs.
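At the level of exponential generating functions, composition with the species of sets is $\exp$, so connected counts can be recovered with a formal logarithm. As a sanity check of this principle (our own sketch, stated for ordinary labeled graphs rather than bicolored ones), the coefficients of $\log \pbrac{1 + B \pbrac{x}}$ agree with a brute-force count of connected labeled graphs:

```python
from fractions import Fraction
from itertools import combinations
from math import comb, factorial

N = 5  # truncation order of the formal power series

def mul(a, b):
    """Multiply two truncated series given as coefficient lists."""
    c = [Fraction(0)] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c

# B(x): exponential generating function of nonempty labeled graphs.
B = [Fraction(0)] + [Fraction(2 ** comb(m, 2), factorial(m)) for m in range(1, N + 1)]

# C(x) = log(1 + B(x)) should enumerate connected labeled graphs.
C = [Fraction(0)] * (N + 1)
power = [Fraction(1)] + [Fraction(0)] * N
for k in range(1, N + 1):
    power = mul(power, B)
    C = [c + Fraction((-1) ** (k + 1), k) * p for c, p in zip(C, power)]
from_series = [int(C[m] * factorial(m)) for m in range(1, N + 1)]

def connected_count(m):
    """Brute-force count of connected labeled graphs on m vertices."""
    vertex_pairs = list(combinations(range(m), 2))
    total = 0
    for r in range(len(vertex_pairs) + 1):
        for es in combinations(vertex_pairs, r):
            adj = {v: set() for v in range(m)}
            for u, v in es:
                adj[u].add(v)
                adj[v].add(u)
            seen, stack = {0}, [0]
            while stack:
                for w in adj[stack.pop()]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            total += (len(seen) == m)
    return total

assert from_series == [connected_count(m) for m in range(1, N + 1)]
```

The cycle-index analogue of this logarithm is exactly the role played by $\ci{\con}$ below.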
Thus, the species $\specname{BC}$ of nonempty bicolored graphs is the composition of the species $\specname{CBC}$ of nonempty connected bicolored graphs into the species $\specname{E}^{+} = \specname{E} - 1$ of nonempty sets: $$\specname{BC} = \specname{E}^{+} \circ \specname{CBC} \label{eq:bcdecomp}$$ Reversing the colors of a bicolored graph is done simply by reversing the colors of each of its connected components independently; thus, once we trivially extend the species $\specname{E}^{+}$ to an $\symgp{2}$-species by applying the trivial action, \[eq:bcdecomp\] holds as an identity of $\symgp{2}$-species for the color-reversing $\symgp{2}$-action described previously. To use the decomposition in \[eq:bcdecomp\] to derive the $\symgp{2}$-cycle index for $\specname{CBC}$, we must invert the $\symgp{2}$-species composition into $\specname{E}^{+}$. In the context of the theory of virtual species, this is possible; we write $\con := \pbrac{\specname{E} - 1}^{\abrac{-1}}$ to denote this virtual species. We can derive from [@bll:species §2.5, eq. (58c)] that its cycle index is $$\label{eq:zgamma} \ci{\con} = \sum_{k \geq 1} \frac{\mu \pbrac{k}}{k} \log \pbrac{1 + p_{k}}$$ where $\mu$ is the Möbius function. We can then rewrite \[eq:bcdecomp\] as $$\specname{CBC} = \con \circ \specname{BC}$$ It then follows immediately from \[thm:gspeccompci\] that $$\gci{\symgp{2}}{\specname{CBC}} = \ci{\con} \circ \gci{\symgp{2}}{\specname{BC}} \label{eq:zcbcdecomp}$$ Bipartite graphs {#s:bp} ---------------- As we previously observed, connected bipartite graphs are naturally identified with orbits of connected bicolored graphs under the color-reversing action of $\symgp{2}$. 
Thus, $$\specname{CBP} = \faktor{\specname{CBC}}{\symgp{2}}.$$ By application of \[thm:qsci\], we can then directly compute the cycle index of $\specname{CBP}$ in terms of previous results: $$\ci{\specname{CBP}} = \qgci{\symgp{2}}{\specname{CBC}} = \frac{1}{2} \pbrac{\gcielt{\symgp{2}}{\specname{CBC}}{e} + \gcielt{\symgp{2}}{\specname{CBC}}{\tau}}.$$ Finally, to reach a result for the general bipartite case, we return to the graph-theoretic composition relationship previously considered in \[s:cbc\]: $$\specname{BP} = \specname{E} \circ \specname{CBP}.$$ This time, we need not invert the composition, so the cycle-index calculation is simple: $$\ci{\specname{BP}} = \ci{\specname{E}} \circ \ci{\specname{CBP}}.$$ A generating function for labeled bipartite graphs was obtained first in [@harprins:bipartite] and later in [@han:bipartite]; the latter uses Pólya-theoretic methods to calculate the cycle index of what in modern terminology would be the species of edge-labeled complete bipartite graphs.

Nonseparable graphs {#s:nbp}
-------------------

We now turn our attention to the notions of block decomposition and nonseparable graphs. A graph is said to be *nonseparable* if it is vertex-$2$-connected (that is, if there exists no vertex whose removal disconnects the graph); every connected graph then has a canonical ‘decomposition’[^4] into maximal nonseparable subgraphs, often shortened to *blocks*. In the spirit of our previous notation, we will denote by $\specname{NBP}$ the species of nonseparable bipartite graphs, our object of study. The basic principles of block enumeration in terms of automorphisms and cycle indices of permutation groups were first identified and exploited in [@rob:nonsep]. In [@bll:species §4.2], a theory relating a specified species $B$ of nonseparable graphs to the species $C_{B}$ of connected graphs whose blocks are in $B$ is developed using similar principles.
It is apparent that the class of nonseparable bipartite graphs is itself exactly the class of blocks that occur in block decompositions of connected bipartite graphs; hence, we apply that theory here to study the species $\specname{NBP}$. From [@bll:species eq. 4.2.27] we obtain \[eq:nbpexp\] $$\label{eq:nbpexpmain} \specname{NBP} = \specname{CBP} \pbrac{\specname{CBP}^{\bullet \abrac{-1}}} + X \cdot \deriv{\specname{NBP}} - X,$$ where by [@bll:species 4.2.26(a)] we have $$\label{eq:nbpexpsub} \deriv{\specname{NBP}} = \con \pbrac{\frac{X}{\specname{CBP}^{\bullet \abrac{-1}}}}.$$ We have already calculated the cycle index for the species $\specname{CBP}$, so the calculation of the cycle index of $\specname{NBP}$ is now simply a matter of algebraic expansion. A generating function for labeled bipartite blocks was given in [@harrob:bipblocks], where their analogue of \[eq:nbpexp\] for the labeled exponential generating function for blocks comes from [@forduhl:combprob1]. However, we could locate no corresponding unlabeled enumeration in the literature. The numbers of labeled and unlabeled nonseparable bipartite graphs for $n \leq 10$ as calculated using our method are given in \[tab:bpblocks\].

The species of $k$-trees {#c:ktrees}
========================

Introduction {#s:intro}
------------

### $k$-trees {#ss:ktrees}

Trees and their generalizations have played an important role in the literature of combinatorial graph theory throughout its history. The multi-dimensional generalization to so-called ‘$k$-trees’ has proved to be particularly fertile ground for both research problems and applications. The class $\kt{k}$ of $k$-trees (for $k \in \ringname{N}$) may be defined recursively:

\[def:ktree\] The complete graph on $k$ vertices ($K_{k}$) is a $k$-tree, and any graph formed by adding a single vertex to a $k$-tree and connecting that vertex by edges to some existing $k$-clique (that is, induced $k$-complete subgraph) of that $k$-tree is a $k$-tree.
The graph-theoretic notion of $k$-trees was first introduced in 1968 in [@harpalm:acycsimp]; vertex-labeled $k$-trees were quickly enumerated in the following year in both [@moon:lktrees] and [@beinpipp:lktrees]. The special case $k=2$ has been especially thoroughly studied; enumerations are available in the literature for edge- and triangle-labeled $2$-trees in [@palm:l2trees], for plane $2$-trees in [@palmread:p2trees], and for unlabeled $2$-trees in [@harpalm:acycsimp] and [@harpalm:graphenum]. In 2001, the theory of species was brought to bear on $2$-trees in [@gessel:spec2trees], resulting in more explicit formulas for the enumeration of unlabeled $2$-trees. An extensive literature on other properties of $k$-trees and their applications has also emerged; Beineke and Pippert claim in [@beinpipp:multidim] that “\[t\]here are now over 100 papers on various aspects of $k$-trees”. However, no general enumeration of unlabeled $k$-trees appears in the literature to date. To begin, we establish two definitions for substructures of $k$-trees which we will use extensively in our analysis. \[def:hedfront\] A *hedron* of a $k$-tree is a $\pbrac{k+1}$-clique and a *front* is a $k$-clique. We will frequently describe $k$-trees as assemblages of hedra attached along their fronts rather than using explicit graph-theoretic descriptions in terms of edges and vertices, keeping in mind that the structure of interest is graph-theoretic and not geometric. The recursive addition of a single vertex and its connection by edges to an existing $k$-clique in \[def:ktree\] is then interpreted as the attachment of a hedron to an existing one along some front, identifying the $k$ vertices they have in common. The analogy to the recursive definition of conventional trees is clear, and in fact the class $\mathfrak{a}$ of trees may be recovered by setting $k = 1$. 
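For concreteness, the vertex-labeled enumerations cited above admit a closed formula, which (assuming the standard form $\binom{n}{k} \pbrac{k \pbrac{n-k} + 1}^{n-k-2}$ for $n > k + 1$) can be transcribed directly; the function name in this sketch is ours.

```python
from math import comb

def labeled_k_trees(n, k):
    """Number of vertex-labeled k-trees on n vertices, via the closed
    formula C(n, k) * (k*(n-k) + 1)**(n - k - 2) for n > k + 1."""
    if n <= k + 1:
        return 1  # the complete graph K_n is the unique such k-tree
    return comb(n, k) * (k * (n - k) + 1) ** (n - k - 2)
```

Setting $k = 1$ recovers Cayley's formula $n^{n-2}$ for labeled trees, in line with the remark that $\mathfrak{a} = \kt{1}$; for $k = 2$ the first nontrivial values are $1$ (the triangle on $3$ vertices) and $6$ (the labeled copies of $K_{4}$ minus an edge).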
For higher $k$, the structures formed are still distinctively tree-like; for example, $2$-trees are formed by gluing triangles together along their edges without forming loops of triangles (see \[fig:ex2tree\]), while $3$-trees are formed by gluing tetrahedra together along their triangular faces without forming loops of tetrahedra.

In graph-theoretic contexts, it is conventional to label graphs on their vertices and possibly their edges. However, for our purposes, it will be more convenient to label hedra and fronts. Throughout, we will treat the species $\kt{k}$ of $k$-trees as a two-sort species, with $X$-labels on the hedra and $Y$-labels on their fronts; in diagrams, we will generally use capital letters for the hedron-labels and positive integers for the front-labels (see \[fig:exlab2tree\]).

The dissymmetry theorem for $k$-trees {#s:dissymk}
-------------------------------------

Studies of tree-like structures—especially those explicitly informed by the theory of species—often feature decompositions based on *dissymmetry*, which allow enumerations of unrooted structures to be recharacterized in terms of rooted structures. For example, as seen in [@bll:species §4.1], the species $\mathfrak{a}$ of trees and $\specname{A} = \pointed{\mathfrak{a}}$ of rooted trees are related by the equation $$\specname{A} + \specname{E}_{2} \pbrac{\specname{A}} = \mathfrak{a} + \specname{A}^{2}$$ where the proof hinges on a recursive structural decomposition of trees.
In this case, the species $\specname{A}$ is relatively easy to characterize explicitly, so this equation serves to characterize the species $\mathfrak{a}$, which would be difficult to do directly. A similar theorem holds for $k$-trees.

\[thm:dissymk\] The species $\ktx{k}$ and $\kty{k}$ of $k$-trees rooted at hedra and fronts respectively, $\ktxy{k}$ of $k$-trees rooted at a hedron with a designated front, and $\kt{k}$ of unrooted $k$-trees are related by the equation $$\label{eq:dissymk} \ktx{k} + \kty{k} = \kt{k} + \ktxy{k}$$ as an isomorphism of species.

We give a bijective, natural map from $\pbrac{\ktx{k} + \kty{k}}$-structures on the left side to $\pbrac{\kt{k} + \ktxy{k}}$-structures on the right side. Define a *$k$-path* in a $k$-tree to be a non-self-intersecting sequence of consecutively adjacent hedra and fronts, and define the *length* of a $k$-path to be the total number of hedra and fronts along it. Note that the ends of every maximal $k$-path in a $k$-tree are fronts. It is easily verified, as in [@kob:ktlogspace], that every $k$-tree has a unique *center* clique (either a hedron or a front) which is the midpoint of every longest $k$-path (or, equivalently, has the least $k$-eccentricity, defined appropriately). An $\pbrac{\ktx{k} + \kty{k}}$-structure on the left-hand side of the equation is a $k$-tree $T$ rooted at some clique $c$, which is either a hedron or a front. Suppose that $c$ is the center of $T$. We then map $T$ to its unrooted equivalent in $\kt{k}$ on the right-hand side. This map is a natural bijection from its preimage, the set of $k$-trees rooted at their centers, to $\kt{k}$, the set of unrooted $k$-trees. Now suppose that the root clique $c$ of the $k$-tree $T$ is *not* the center, which we denote $C$. Identify the clique $c'$ which is adjacent to $c$ along the $k$-path from $c$ to $C$. We then map the $k$-tree $T$ rooted at the clique $c$ to the same tree $T$ rooted at *both* $c$ and its neighbor $c'$.
This map is also a natural bijection, in this case from the set of $k$-trees rooted at cliques which are *not* their centers to the set $\ktxy{k}$ of $k$-trees rooted at an adjacent hedron-front pair. The combination of these two maps then gives the desired isomorphism of species in \[eq:dissymk\].

In general we will reformulate the dissymmetry theorem as follows:

\[cor:dissymkreform\] For the various forms of the species $\kt{k}$ as above, we have $$\label{eq:dissymkreform} \kt{k} = \ktx{k} + \kty{k} - \ktxy{k}$$ as an isomorphism of ordinary species.

This species subtraction is well-defined in the sense of \[def:specdif\], since the species $\ktxy{k}$ embeds in the species $\ktx{k} + \kty{k}$ by the centering map described in the proof of \[thm:dissymk\]. Essentially, \[eq:dissymkreform\] identifies each unrooted $k$-tree with itself rooted at its center simplex. \[thm:dissymk\] and the consequent \[eq:dissymkreform\] allow us to reframe enumerative questions about generic $k$-trees in terms of questions about $k$-trees rooted in various ways. However, the rich internal symmetries of large cliques obstruct direct analysis of these rooted structures. We need to break these symmetries to proceed.

Coherently-oriented $k$-trees
-----------------------------

### Symmetry-breaking {#ss:symbreak}

In the case of the species $\specname{A} = \pointed{\kt{1}}$ of rooted trees, we may obtain a simple recursive functional equation [@bll:species §1, eq. (9)]: $$\label{eq:rtrees} \specname{A} = X \cdot \specname{E} \pbrac{\specname{A}}.$$ This completely characterizes the combinatorial structure of the class of trees. However, in the more general case of $k$-trees, no such simple relationship obtains; attached to a given hedron is a collection of sets of hedra (one such set per front), but simply specifying which fronts to attach to which does not fully specify the attachings, and the structure of that collection of sets is complex.
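For $k = 1$, where hedra are edges and fronts are vertices, the labeled counting shadow of \[eq:dissymkreform\] can be checked directly against Cayley's formula; the sketch below (function names ours) encodes the four species by their labeled counts.

```python
def trees(n):
    """Cayley's formula: labeled trees on n >= 1 vertices."""
    return n ** (n - 2) if n >= 2 else 1

def hedron_rooted(n):
    """k = 1: hedra are edges, so this counts edge-rooted labeled trees."""
    return trees(n) * (n - 1)

def front_rooted(n):
    """k = 1: fronts are vertices, so this counts vertex-rooted labeled trees."""
    return trees(n) * n

def flag_rooted(n):
    """A distinguished edge together with a chosen endpoint (hedron-front pair)."""
    return 2 * trees(n) * (n - 1)
```

For every $n$, `hedron_rooted(n) + front_rooted(n) - flag_rooted(n)` collapses to $n^{n-2}$, the unrooted count, exactly as \[eq:dissymkreform\] predicts.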
We will break this symmetry by adding additional structure which we can later remove using the theory of quotient species.

\[def:mirrorfronts\] Let $h_{1}$ and $h_{2}$ be two hedra joined at a front $f$, hereafter said to be *adjacent*. Each other front of one of the hedra shares $k-1$ vertices with $f$; we say that two fronts $f_{1}$ of $h_{1}$ and $f_{2}$ of $h_{2}$ are *mirror with respect to $f$* if these shared vertices are the same, or equivalently if $f_{1} \cap f = f_{2} \cap f$.

\[obs:mirrorfronts\] Let $T$ be a $k$-tree with two hedra $h_{1}$ and $h_{2}$ joined at a front $f$. Then there is exactly one front of $h_{2}$ mirror to each front of $h_{1}$ with respect to their shared front $f$.

\[def:coktree\] Define an *orientation* of a hedron to be a cyclic ordering of the set of its fronts and an *orientation* of a $k$-tree to be a choice of orientation for each of its hedra. If two oriented hedra share a front, their orientations are *compatible* if they correspond under the mirror bijection. Then an orientation of a $k$-tree is *coherent* if every pair of adjacent hedra is compatibly-oriented. See \[fig:exco2tree\] for an example.

Note that every $k$-tree admits many coherent orientations—any one hedron of the $k$-tree may be oriented freely, and a unique orientation of the whole $k$-tree will result from each choice of such an orientation of one hedron. We will denote by $\ktco{k}$ the species of coherently-oriented $k$-trees. By shifting from the general $k$-tree setting to that of coherently-oriented $k$-trees, we break the symmetry described above. If we can now establish a group action on $\ktco{k}$ whose orbits are generic $k$-trees we can use the theory of quotient species to extract the generic species $\kt{k}$. First, however, we describe an encoding procedure which will make future work more convenient.
### Bicolored tree encoding {#ss:bctree}

Although $k$-trees are graphs (and hence made up simply of edges and vertices), their structure is more conveniently described in terms of their simplicial structure of hedra and fronts. Indeed, if each hedron has an orientation of its faces and we choose in advance which hedra to attach to which by what fronts, the requirement that the resulting $k$-tree be coherently oriented is strong enough to characterize the attaching completely. We thus pass from coherently-oriented $k$-trees to a surrogate structure which exposes the salient features of this attaching structure more clearly—structured bicolored trees in the spirit of the $R, S$-enriched bicolored trees of [@bll:species §3.2].

A $\pbrac{\specname{C}_{k+1}, \specname{E}}$-enriched bicolored tree is a bicolored tree each black vertex of which carries a $\specname{C}_{k+1}$-structure (that is, a cyclic ordering on $k+1$ elements) on its white neighbors. (The $\specname{E}$-structure on the black neighbors of each white vertex is already implicit in the bicolored tree itself.) For later convenience, we will sometimes call such objects *$k$-coding trees*, and we will denote by $\ct{k}$ the species of such $k$-coding trees. We now define a map $\beta: \ktco{k} \sbrac{n} \to \ct{k} \sbrac{n}$. For a given coherently-oriented $k$-tree $T$ with $n$ hedra:

- For every hedron of $T$ construct a black vertex and for every front a white vertex, assigning labels appropriately.

- For every black-white vertex pair, construct a connecting edge if the white vertex represents a front of the hedron represented by the black vertex.
- Finally, enrich the collection of neighbors of each black vertex with a $\specname{C}_{k+1}$-structure inherited directly from the orientation of the $k$-tree $T$.

The resulting object $\beta \pbrac{T}$ is clearly a $k$-coding tree with $n$ black vertices. We can recover $T$ from $\beta \pbrac{T}$ by following the reverse procedure. For an example, see \[fig:exbctree\], which shows the $2$-coding tree associated to the coherently-oriented $2$-tree of \[fig:exco2tree\]. Note that, for clarity, we have rendered the black vertices (corresponding to hedra) with squares.

\[thm:bctreeenc\] The map $\beta$ induces an isomorphism of species $\ktco{k} \simeq \ct{k}$.

It is clear that $\beta$ sends each coherently-oriented $k$-tree to a unique $k$-coding tree, and that this map commutes with permutations on the label sets (and thus is categorically natural). To show that $\beta$ induces a species isomorphism, then, we need only show that $\beta$ is a surjection onto $\ct{k} \sbrac{n}$ for each $n$. Throughout, we will say ‘$F$ and $G$ have contact of order $n$’ when the restrictions $F_{\leq n}$ and $G_{\leq n}$ of the species $F$ and $G$ to label sets of cardinality at most $n$ are naturally isomorphic. First, we note that there are exactly $k!$ coherently-oriented $k$-trees with one hedron—one for each cyclic ordering of the $k+1$ front labels. There are also $k!$ coding trees with one black vertex, and the encoding $\beta$ is clearly a natural bijection between these two sets. Thus, the species $\ktco{k}$ of coherently-oriented $k$-trees and $\ct{k}$ of $k$-coding trees have contact of order $1$. Now, by way of induction, suppose $\ktco{k}$ and $\ct{k}$ have contact of order $n \geq 1$. Let $C$ be a $k$-coding tree with $n+1$ black vertices.
Then let $C_{1}$ and $C_{2}$ be two distinct sub-$k$-coding trees of $C$, each obtained from $C$ by removing one black node which has only one white neighbor which is not a leaf. Then, by hypothesis, there exist coherently-oriented $k$-trees $T_{1}$ and $T_{2}$ with $n$ hedra such that $\beta \pbrac{T_{1}} = C_{1}$ and $\beta \pbrac{T_{2}} = C_{2}$. Moreover, $\beta \pbrac{T_{1} \cap T_{2}} = \beta \pbrac{T_{1}} \cap \beta \pbrac{T_{2}}$, and this $k$-coding tree has $n-1$ black vertices, so $T_{1} \cap T_{2}$ has $n-1$ hedra. Thus, $T = T_{1} \cup T_{2}$ is a coherently-oriented $k$-tree with $n+1$ hedra, and $\beta \pbrac{T} = C$ as desired. Thus, $\beta^{-1} \pbrac{\beta \pbrac{T_{1}} \cup \beta \pbrac{T_{2}}} = T_{1} \cup T_{2} = T$, and hence $\ktco{k}$ and $\ct{k}$ have contact of order $n+1$.

Thus, $\ktco{k}$ and $\ct{k}$ are isomorphic as species; however, $k$-coding trees are much simpler than coherently-oriented $k$-trees as graphs. Moreover, $k$-coding trees are doubly-enriched bicolored trees as in [@bll:species §3.2], for which the authors of that text develop a system of functional equations which fully characterizes the cycle index of such a species. We thus will proceed in the following sections with a study of the species $\ct{k}$, then lift our results to the $k$-tree context.

### Functional decomposition of $k$-coding trees {#ss:codecomp}

With the encoding $\beta: \ktco{k} \to \ct{k}$, we now have direct graph-theoretic access to the attaching structure of coherently-oriented $k$-trees. We therefore turn our attention to the $k$-coding trees themselves to produce a recursive decomposition. As with $k$-trees, we will study rooted versions of the species $\ct{k}$ of $k$-coding trees first, then use dissymmetry to apply the results to unrooted enumeration.
Let $\ctx{k}$ denote the species of $k$-coding trees rooted at black vertices, $\cty{k}$ denote the species of $k$-coding trees rooted at white vertices, and $\ctxy{k}$ denote the species of $k$-coding trees rooted at edges (that is, at adjacent black-white pairs). By construction, a $\ctx{k}$-structure consists of a single $X$-label and a cyclically-ordered $\pbrac{k+1}$-set of $\cty{k}$-structures. See \[fig:ctxconst\] for an example of this construction.

Similarly, a $\cty{k}$-structure essentially consists of a single $Y$-label and a (possibly empty) set of $\ctx{k}$-structures, but with some modification. Every white neighbor of the black root of a $\ctx{k}$-structure is labeled in the construction above, but the white parent of a $\ctx{k}$-structure in this recursive decomposition is already labeled. Thus, the structure around a black vertex which is a child of a white vertex consists of an $X$ label and a linearly-ordered $k$-set of $\cty{k}$-structures. Thus, a $\cty{k}$-structure consists of a $Y$-label and a set of pairs of an $X$ label and an $\specname{L}_{k}$-structure of $\cty{k}$-structures. We note here for conceptual consistency that in fact $\specname{L}_{k} = \deriv{\specname{C}}_{k+1}$ for $\specname{L}$ the species of linear orders and $\specname{C}$ the species of cyclic orders and that $\deriv{\specname{E}} = \specname{E}$ for $\specname{E}$ the species of sets; readers familiar with the $R, S$-enriched bicolored trees of [@bll:species §3.2] will recognize echoes of their decomposition in these facts.
Finally, a $\ctxy{k}$-structure is simply an $X \cdot \specname{L}_{k} \pbracs[big]{\cty{k}}$-structure as described above (corresponding to the black vertex) together with a $\cty{k}$-structure (corresponding to the white vertex). For reasons that will become clear later, we note that we can incorporate the root white vertex into the linear order by making it last, thus representing a $\ctxy{k}$-structure instead as an $X \cdot \specname{L}_{k+1} \pbracs[big]{\cty{k}}$-structure. See \[fig:ctxyconst\] for an example of this construction.

The various species of rooted $k$-coding trees are therefore related by a system of functional equations:

\[obs:funcdecompct\] For the (ordinary) species $\ctx{k}$ of $X$-rooted $k$-coding trees, $\cty{k}$ of $Y$-rooted $k$-coding trees, and $\ctxy{k}$ of edge-rooted $k$-coding trees, we have the functional relationships \[eq:ctfunc\] $$\begin{aligned} \ctx{k} &= X \cdot \specname{C}_{k+1} \pbracs[big]{\cty{k}} \label{eq:ctxfunc} \\ \cty{k} &= Y \cdot \specname{E} \pbrac{X \cdot \specname{L}_{k} \pbracs[big]{\cty{k}}} \label{eq:ctyfunc} \\ \ctxy{k} &= \cty{k} \cdot X \cdot \specname{L}_{k} \pbracs[big]{\cty{k}} = X \cdot \specname{L}_{k+1} \pbracs[big]{\cty{k}} \label{eq:ctxyfunc} \end{aligned}$$ as isomorphisms of ordinary species.

However, a recursive characterization of the various ordinary species of $k$-coding trees is insufficient to characterize the species of $k$-trees itself, since $k$-coding trees represent $k$-trees with coherent orientations.
Generic $k$-trees {#s:genkt}
-----------------

To remove the additional structure of coherent orientation imposed on $k$-trees before their conversion to $k$-coding trees, we now apply the theory of $\Gamma$-species developed in \[s:quot\]. In [@gessel:spec2trees], the orientation-reversing action of $\symgp{2}$ on $\cyc_{\sbrac{3}}$ is exploited to study $2$-trees species-theoretically. We might hope to develop an analogous group action under which general $k$-trees are naturally identified with orbits of coherently-oriented $k$-trees under an action of $\symgp{k}$. Unfortunately:

\[prop:notransac\] For $k \geq 3$, no transitive action of any group on the set $\cyc_{\sbrac{k+1}}$ of cyclic orders on $\sbrac{k+1}$ commutes with the action of $\symgp{k+1}$ that permutes labels.

We represent the elements of $\cyc_{\sbrac{k+1}}$ as cyclic permutations on the alphabet $\sbrac{k+1}$; then the action of $\symgp{k+1}$ that permutes labels is exactly the conjugation action on these permutations. Consider an action of a group $G$ on $\cyc_{\sbrac{k+1}}$ that commutes with this conjugation action. Then, for any $g \in G$ and any $c \in \cyc_{\sbrac{k+1}}$, we have that $$\label{eq:transaction} g \cdot c = g \cdot c c c^{-1} = c \pbrac{g \cdot c} c^{-1}$$ and so $c$ and $g \cdot c$ commute. Thus, $c$ commutes with every element of its orbit under the action of $G$. But, for $k \geq 3$, not all elements of $\cyc_{\sbrac{k+1}}$ commute, so the action is not transitive.

We thus cannot hope to attack the coherent orientations of $k$-trees by acting directly on the cyclic orderings of fronts. Accordingly, we cannot simply apply the results of \[ss:codecomp\] to compute a $\Gamma$-species $\ct{k}$ with respect to some hypothetical action of a group $\Gamma$ whose orbits correspond to generic $k$-trees.
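The obstruction in \[prop:notransac\] is easy to witness computationally: representing cyclic orders as cyclic permutations, the $2! = 2$ cyclic orders on $3$ labels all commute (they are powers of a single $3$-cycle, which is what permits the $\symgp{2}$-action of [@gessel:spec2trees] for $2$-trees), while the $3! = 6$ cyclic orders on $4$ labels already fail to commute. A small Python sketch (names ours):

```python
from itertools import permutations

def cyclic_orders(n):
    """All cyclic orders on {0, ..., n-1}, represented as n-cycles
    in one-line notation (each element mapped to its cyclic successor)."""
    orders = []
    for rest in permutations(range(1, n)):
        cycle = (0,) + rest
        succ = [0] * n
        for i in range(n):
            succ[cycle[i]] = cycle[(i + 1) % n]
        orders.append(tuple(succ))
    return orders

def compose(p, q):
    """Permutation composition (p after q) in one-line notation."""
    return tuple(p[q[i]] for i in range(len(p)))

def all_commute(perms):
    return all(compose(p, q) == compose(q, p) for p in perms for q in perms)
```

Here `all_commute(cyclic_orders(3))` holds but `all_commute(cyclic_orders(4))` does not, matching the dividing line $k \geq 3$ in the proposition.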
Instead, we will use the additional structure on *rooted* coherently-oriented $k$-trees; with rooting, the cyclic orders around black vertices are converted into linear orders, for which there is a natural action of $\symgp{k+1}$.

### Group actions on $k$-coding trees {#ss:actct}

We have noted previously that every labeled $k$-tree admits exactly $k!$ coherent orientations. Thus, there are $k!$ distinct $k$-coding trees associated to each labeled $k$-tree, which differ only in the $\specname{C}_{k+1}$-structures on their black vertices. Consider a rooted $k$-coding tree $T$ and a black vertex $v$ which is not the root vertex. Then one white neighbor of $v$ is the ‘parent’ of $v$ (in the sense that it lies on the path from $v$ to the root). We thus can convert the cyclic order on the $k+1$ white neighbors of $v$ to a linear order by choosing the parent white neighbor to be last. There is a natural, transitive, label-independent action of $\symgp{k+1}$ on the set of such linear orders which induces an action on the cyclic orders from which the linear orders are derived. However, only elements of $\symgp{k+1}$ which fix $k+1$ will respect the structure around the black vertex we have chosen, since its parent white vertex must remain last. In addition, if we simply apply the action of some $\sigma \in \symgp{k+1}$ to the order on white neighbors of $v$, we change the coherently-oriented $k$-tree $\beta^{-1} \pbrac{T}$ to which $T$ is associated in such a way that it no longer corresponds to the same unoriented $k$-tree. Let $t$ denote the unoriented $k$-tree associated to $\beta^{-1} \pbrac{T}$; then there exists a coherent orientation of $t$ which agrees with the orientation around $v$ induced by $\sigma$. The $k$-coding tree $T'$ corresponding to this new coherent orientation has the same underlying bicolored tree as $T$ but possibly different orders around its black vertices.
If we think of the $k$-coding tree $T'$ as the image of $T$ under a global action of $\sigma$, orbits under all of $\symgp{}$ will be precisely the classes of $k$-coding trees corresponding to all coherent orientations of specified $k$-trees, allowing us to study unoriented $k$-trees as quotients. The orientation of $T'$ will be that obtained by applying $\sigma$ at $v$ and then recursively adjusting the other cyclic orders so that fronts which were mirror are made mirror again. This will ensure that the combinatorial structure of the underlying $k$-tree $t$ is preserved. Therefore, when we apply some permutation $\sigma \in \symgp{k+1}$ to the white neighbors of a black vertex $v$, we must also permute the cyclic orders of the descendant black vertices of $v$. In particular, the permutation $\sigma'$ which must be applied to some immediate black descendant $v'$ of $v$ is precisely the permutation on the linear order of white neighbors of $v'$ induced by passing over the mirror bijection from $v'$ to $v$, applying $\sigma$, and then passing back. We can express this procedure in formulaic terms:

\[thm:rhodef\] If a permutation $\sigma \in \symgp{k+1}$ is applied to the linearized orientation of a black vertex $v$ in a rooted $k$-coding tree, the permutation which must be applied to the linearized orientation of a child black vertex $v'$ which was attached to the $i$th white child of $v$ (with respect to the linear ordering induced by the orientation) to preserve the mirror relation is $\rho_{i} \pbrac{\sigma}$, where $\rho_{i}$ is the map given by $$\label{eq:rhodef} \rho_{i} \pbrac{\sigma}: a \mapsto \sigma \pbrac{i + a} - \sigma \pbrac{i}$$ in which all sums and differences are reduced to their representatives modulo $k+1$ in $\cbrac{1, 2, \dots, k+1}$.

Let $v'$ denote a black vertex which is attached to $v$ by the white vertex $1$, which we suppose to be in position $i$ in the linear order induced by the original orientation of $v$.
Let $2$ denote the white child of $v'$ which is $a$th in the linear order induced by the original orientation around $v'$. It is mirror to the white child $3$ of $v$ which is $\pbrac{i+a}$th in the linear order induced by the original orientation around $v$. After the action of $\sigma$ is applied, vertex $3$ is $\sigma \pbrac{i+a}$th in the new linear order around $v$. We require that $2$ is still mirror to $3$, so we must move it to position $\sigma \pbrac{i + a} - \sigma \pbrac{i}$ when we create a new linear order around $v'$. This completes the proof. This procedure is depicted in \[fig:rhoapp\].

As an aside, we note that, although the construction $\rho$ depends on $k$, the value of $k$ will be fixed in any given context, so we suppress it in the notation. Any $\sigma$ which is to be applied to a non-root black vertex $v$ must of course fix $k+1$. We let $\Delta: \symgp{k} \to \symgp{k+1}$ denote the obvious embedding; then the image of $\Delta$ is exactly the set of $\sigma \in \symgp{k+1}$ which fix $k+1$.
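The map $\rho_{i}$ of \[eq:rhodef\] is straightforward to transcribe directly; the sketch below (the helper name `rho` is ours) makes it easy to confirm that each $\rho_{i} \pbrac{\sigma}$ is again a permutation of $\cbrac{1, \dots, k+1}$ and that it always fixes $k+1$ (taking $a = k+1$ gives $\sigma \pbrac{i} - \sigma \pbrac{i} \equiv k+1$), so it lies in the image of $\Delta$.

```python
def rho(i, sigma, k):
    """The induced permutation rho_i(sigma): a -> sigma(i + a) - sigma(i),
    with all sums and differences reduced mod k+1 into {1, ..., k+1}.
    sigma is given as a dict on {1, ..., k+1}."""
    def wrap(x):
        return (x - 1) % (k + 1) + 1
    return {a: wrap(sigma[wrap(i + a)] - sigma[wrap(i)])
            for a in range(1, k + 2)}
```

For example, with $k = 2$ and $\sigma$ the identity on $\cbrac{1, 2, 3}$, every $\rho_{i} \pbrac{\sigma}$ is again the identity, as expected: an unmoved parent order forces no adjustment at the children.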
We then have an action of $\symgp{k}$ on non-root black vertices induced by $\Delta$. (Equivalently, we can think of $\symgp{k}$ as the subgroup of $\symgp{k+1}$ of permutations fixing $k+1$, but the explicit notation $\Delta$ will be of use in later formulas.) In light of \[obs:funcdecompct\], we now wish to adapt these ideas into explicit $\symgp{k}$- and $\symgp{k+1}$-actions on $\ctx{k}$, $\cty{k}$, and $\ctxy{k}$ whose orbits correspond to the various coherent orientations of single underlying rooted $k$-trees. In the case of a $Y$-rooted $k$-coding tree $T$, if we declare that $\sigma \in \symgp{k}$ acts on $T$ by acting directly (as $\Delta \pbrac{\sigma}$) on each of the black vertices immediately adjacent to the root and then applying $\rho$-derived permutations recursively to their descendants, orbits behave as expected. The same $\symgp{k}$-action serves equally well for edge-rooted $k$-coding trees, where (for purposes of applying the action of some $\sigma$) we can simply ignore the black vertex in the root. However, if we begin with an $X$-rooted $k$-coding tree, the cyclic ordering of the white neighbors of the root black vertex has no canonical choice of linearization. If we make an arbitrary choice of one of the $k+1$ available linearizations, and thus convert to an edge-rooted $k$-coding tree, the full $\symgp{k+1}$-action defined previously can be applied directly to the root vertex. The orbit under this action of some edge-rooted $k$-coding tree $T$ with a choice of linearization at the root then includes all possible linearizations of the root orders of all possible $X$-rooted $k$-coding trees corresponding to the different coherent orientations of a single $k$-tree.

### $k$-trees as quotients {#ss:ktquot}

Since these actions are label-independent, we may now treat $\cty{k}$ and $\ctxy{k}$ as $\symgp{k}$-species and $\ctxy{k}$ as an $\symgp{k+1}$-species.
The $\symgp{k}$- and $\symgp{k+1}$-actions on $\ctxy{k}$ are compatible, but we will make explicit reference to $\ctxy{k}$ as an $\symgp{k}$- or $\symgp{k+1}$-species whenever it is important and not completely clear from context which we mean. In light of the above, we can then relate the rooted $\Gamma$-species forms of $\ct{k}$ to the various ordinary species forms of generic rooted $k$-trees in \[thm:dissymk\]: \[thm:arootquot\] For the various rooted forms of the ordinary species $\kt{k}$ as in \[thm:dissymk\] and the various rooted $\Gamma$-species forms of $\ct{k}$ as in \[obs:funcdecompct\] as $\symgp{k}$- and $\symgp{k+1}$-species, we have \[eq:arootquot\] $$\begin{aligned} \kty{k} &= \faktor{\cty{k}}{\symgp{k}} \label{eq:ayquot} \\ \ktxy{k} &= \faktor{\ctxy{k}}{\symgp{k}} \label{eq:axyquot} \\ \ktx{k} &= \faktor{\ctxy{k}}{\symgp{k+1}} \label{eq:axquot} \end{aligned}$$ as isomorphisms of ordinary species, where $\ctxy{k}$ is an $\symgp{k}$-species in \[eq:axyquot\] and an $\symgp{k+1}$-species in \[eq:axquot\]. As a result, we have explicit characterizations of all the rooted components of the original dissymmetry theorem, \[thm:dissymk\]. To compute the cycle indices of these components (and thus the cycle index of $\kt{k}$ itself), we need only compute the cycle indices of the various rooted $\ct{k}$ species, which we will do using a combination of the functional equations in \[eq:ctfunc\] and explicit consideration of automorphisms. Automorphisms and cycle indices {#s:ktcycind} ------------------------------- ### $k$-coding trees: $\cty{k}$ and $\ctxy{k}$ {#ss:ctcycind} The decomposition of the dissymmetry theorem for $k$-trees has a direct analogue in terms of cycle indices: \[thm:dissymkci\] For the various forms of the species $\kt{k}$ as in \[s:dissymk\], we have $$\label{eq:dissymkci} \ci{\kt{k}} = \ci{\ktx{k}} + \ci{\kty{k}} - \ci{\ktxy{k}}.$$ Thus, we need to calculate the cycle indices of the three rooted forms of $\kt{k}$.
From \[thm:arootquot\] and by \[thm:qsci\] we obtain: \[thm:aquotci\] For the various forms of the species $\kt{k}$ as in \[s:dissymk\] and the various $\symgp{k}$-species and $\symgp{k+1}$-species forms of $\ct{k}$ as in \[ss:actct\], we have \[eq:aquotci\] $$\begin{aligned} \ci{\kty{k}} &= \qgci{\symgp{k}}{\cty{k}} = \frac{1}{k!} \sum_{\sigma \in \symgp{k}} \gcielt{\symgp{k}}{\cty{k}}{\sigma} \label{eq:ayquotci} \\ \ci{\ctxy{k}} &= \qgci{\symgp{k}}{\ctxy{k}} = \frac{1}{k!} \sum_{\sigma \in \symgp{k}} \gcielt{\symgp{k}}{\ctxy{k}}{\sigma} \label{eq:axyquotci} \\ \ci{\ktx{k}} &= \qgci{\symgp{k+1}}{\ctxy{k}} = \frac{1}{\pbrac{k+1}!} \sum_{\sigma \in \symgp{k+1}} \gcielt{\symgp{k+1}}{\ctxy{k}}{\sigma} \label{eq:axquotci} \end{aligned}$$ We thus need only calculate the various $\Gamma$-cycle indices for the $\symgp{k}$-species and $\symgp{k+1}$-species forms of $\cty{k}$ and $\ctxy{k}$ to complete our enumeration of general $k$-trees. In \[obs:funcdecompct\], the functional equations for the ordinary species $\cty{k}$ and $\ctxy{k}$ both include terms of the form $\specname{L}_{k} \circ \cty{k}$. The plethysm of ordinary species does have a generalization to $\Gamma$-species, as given in \[def:gspeccomp\], but it does not correctly describe the manner in which $\symgp{k}$ acts on linear orders of $\cty{k}$-structures in these recursive decompositions. Recall from \[s:quot\] that, for two $\Gamma$-species $F$ and $G$, an element $\gamma \in \Gamma$ acts on an $\pbrac{F \circ G}$-structure (colloquially, ‘an $F$-structure of $G$-structures’) by acting on the $F$-structure and on each of the $G$-structures independently. In our action of $\symgp{k}$, however, the actions of $\sigma$ on the descendant $\cty{k}$-structures are *not* independent—they depend on the position of the structure in the linear ordering around the parent black vertex. 
In particular, if $\sigma$ acts on some non-root black vertex, then $\rho_{i} \pbrac{\sigma}$ acts on the white vertex in the $i$th place, where in general $\rho_{i} \pbrac{\sigma} \neq \sigma$. Thus, we consider automorphisms of these $\symgp{k}$-structures directly. First, we consider the component species $X \cdot \specname{L}_{k} \pbracs[big]{\cty{k}}$. \[lem:ctyinvar\] Let $B$ be a structure of the species $X \cdot \specname{L}_{k} \pbracs[big]{\cty{k}}$. Let $W_{i}$ be the $\cty{k}$-structure in the $i$th position in the linear order. Then some $\sigma \in \symgp{k}$ acts as an automorphism of $B$ if and only if, for each $i \in \sbrac{k+1}$, we have $\Delta^{-1} \pbrac{\rho_{i} \pbrac{\Delta \sigma}} W_{i} \cong W_{\sigma \pbrac{i}}$. Recall that the action of $\sigma \in \symgp{k}$ is in fact the action of $\Delta \sigma \in \symgp{k+1}$. The $X$-label on the black root of $B$ is not affected by the application of $\Delta \sigma$, so no conditions on $\sigma$ are necessary to accommodate it. However, the $\specname{L}_{k}$-structure on the white children of the root is permuted by $\Delta \sigma$, and we apply to each of the $W_{i}$’s the action of $\Delta^{-1} \pbrac{\rho_{i} \pbrac{\Delta \sigma}}$. Thus, $\sigma$ is an automorphism of $B$ if and only if the combination of applying $\Delta \sigma$ to the linear order and $\Delta^{-1} \pbrac{\rho_{i} \pbrac{\Delta \sigma}}$ to each $W_{i}$ is an automorphism. Since $\sigma$ ‘carries’ each $W_{i}$ onto $W_{\sigma \pbrac{i}}$, we must have that $\Delta^{-1} \pbrac{\rho_{i} \pbrac{\Delta \sigma}} W_{i} \cong W_{\sigma \pbrac{i}}$, as claimed. That this suffices is clear. Consider a structure $T$ of the $\symgp{k}$-species $\cty{k}$ and an element $\sigma \in \symgp{k}$. As discussed in \[ss:codecomp\], $T$ is composed of a $Y$-label and a set of $X \cdot \specname{L}_{k} \pbracs[big]{\cty{k}}$-structures. 
The permutation $\sigma$ acts trivially on $Y$ and $\specname{E}$ and acts on each of the component $X \cdot \specname{L}_{k} \pbracs[big]{\cty{k}}$-structures independently. For each of these component structures, by \[lem:ctyinvar\], we have that $\sigma$ is an automorphism if and only if $\Delta \sigma$ carries each $\cty{k}$-structure to its $\Delta^{-1} \pbrac{\rho_{i} \pbrac{\Delta \sigma}}$-image. Thus, when constructing $\sigma$-invariant $X \cdot \specname{L}_{k} \pbracs[big]{\cty{k}}$-structures, we must construct for each cycle of $\sigma$ a $\cty{k}$-structure which is invariant under the application of *all* the permutations $\Delta^{-1} \pbrac{\rho_{i} \pbrac{\Delta \sigma}}$ which will be applied to it along the cycle. For $c$ the chosen cycle of $\sigma$, this permutation is $\Delta^{-1} \pbrac{\prod_{i \in c} \rho_{i} \pbrac{\Delta \sigma}}$, where the product is taken over any chosen linearization of the cyclic order of the terms in the cycle. Once a choice of such a $\cty{k}$-structure for each cycle of $\sigma$ is made, we can simply insert the structures into the $\specname{L}_{k}$-structure to build the desired $\sigma$-invariant $X \cdot \specname{L}_{k} \pbracs[big]{\cty{k}}$-structure. Accordingly: \[thm:ctyfuncci\] The $\symgp{k}$-cycle index for the species $\cty{k}$ is characterized by the recursive functional equation $$\begin{gathered} \label{eq:ctyfuncci} \gcielt{\symgp{k}}{\cty{k}}{\sigma} = p_{1} \sbrac{y} \\ \times \ci{\specname{E}} \circ \pbracs[Big]{p_{1} \sbrac{x} \cdot \prod_{c \in C \pbrac{\sigma}} \gci{\symgp{k}}{\cty{k}} \pbracs[Big]{\Delta^{-1} \prod_{i \in c} \rho_{i} \pbrac{\Delta \sigma}} \pbrac{p_{\abs{c}} \sbrac{x}, p_{2 \abs{c}} \sbrac{x}, \dots; p_{\abs{c}} \sbrac{y}, p_{2 \abs{c}} \sbrac{y}, \dots}}. 
\end{gathered}$$ where $C \pbrac{\sigma}$ denotes the set of cycles of $\sigma$ (as a $k$-permutation) and the inner product is taken with respect to any choice of linearization of the cyclic order of the elements of $c$. The situation for the $\symgp{k+1}$-species $\ctxy{k}$ is almost identical. Recall from \[ss:actct\] that $\sigma \in \symgp{k+1}$ acts on a $\ctxy{k}$-structure $T$ by applying $\sigma$ directly to the linear order on the $k+1$ white neighbors of the root black vertex and applying $\rho$-variants of $\sigma$ recursively to their descendants. We once again need only require that, along each cycle of $\sigma$, the successive white-vertex structures are pairwise isomorphic under the action of the appropriate $\rho_{i} \pbrac{\sigma}$. Thus, we again need only choose for each cycle $c \in C \pbrac{\sigma}$ a $\cty{k}$-structure which is invariant under $\prod_{i \in c} \rho_{i} \pbrac{\sigma}$. Accordingly: \[thm:ctxyfuncci\] The $\symgp{k+1}$-cycle index for the species $\ctxy{k}$ is given by $$\begin{gathered} \label{eq:ctxyfuncci} \gcielt{\symgp{k+1}}{\ctxy{k}}{\sigma} = p_{1} \sbrac{x} \\ \times \prod_{c \in C \pbrac{\sigma}} \gci{\symgp{k}}{\cty{k}} \pbracs[Big]{\prod_{i \in c} \rho_{i} \pbrac{\sigma}} \pbrac{p_{\abs{c}} \sbrac{x}, p_{2 \abs{c}} \sbrac{x}, \dots; p_{\abs{c}} \sbrac{y}, p_{2 \abs{c}} \sbrac{y}, \dots}. \end{gathered}$$ under the same conditions as \[thm:ctyfuncci\]. Terms of the form $\prod_{i \in c} \rho_{i} \pbrac{\sigma}$ appear in \[eq:ctyfuncci,eq:ctxyfuncci\]. To simplify calculations, we note here two useful results about these products. First, we observe that certain $\rho$-maps preserve cycle structure: \[lem:rhofp\] Let $\sigma \in \symgp{k}$ be a permutation of which $i \in \sbrac{k}$ is a fixed point and let $\lambda$ be the map sending each permutation in $\symgp{k}$ to its cycle type as a partition of $k$. Then $\lambda \pbrac{\rho_{i} \pbrac{\sigma}} = \lambda \pbrac{\sigma}$.
Suppose $i + a \in \sbrac{k}$ is in an $l$-cycle of $\sigma$. Then $$\begin{aligned} \pbrac{\rho_{i} \pbrac{\sigma}}^{j} \pbrac{a} =& \pbrac{\rho_{i} \pbrac{\sigma}}^{j - 1} \pbrac{\sigma \pbrac{i + a} - \sigma \pbrac{i}} \\ =& \pbrac{\rho_{i} \pbrac{\sigma}}^{j - 2} \pbrac{\sigma \pbrac{i + \sigma \pbrac{i + a} - \sigma \pbrac{i}} - \sigma \pbrac{i}} \\ =& \pbrac{\rho_{i} \pbrac{\sigma}}^{j - 2} \pbrac{\sigma^{2} \pbrac{i + a} - \sigma^{2} \pbrac{i}} \\ &\vdots \\ =& \sigma^{j} \pbrac{i + a} - \sigma^{j} \pbrac{i} \end{aligned}$$ But the values of $\pbrac{\rho_{i} \pbrac{\sigma}}^{j} \pbrac{a} = \sigma^{j} \pbrac{i + a} - \sigma^{j} \pbrac{i}$ are all distinct for $j \leq l$, since $i + a$ is in an $l$-cycle and $i$ is a fixed point of $\sigma$. Furthermore, $\pbrac{\rho_{i} \pbrac{\sigma}}^{l} \pbrac{a} = \sigma^{l} \pbrac{i + a} - \sigma^{l} \pbrac{i} = \pbrac{i + a} - i = a$. Thus, $a$ is in an $l$-cycle of $\rho_{i} \pbrac{\sigma}$. This establishes a length-preserving bijection between cycles of $\rho_{i} \pbrac{\sigma}$ and cycles of $\sigma$, so their cycle types are equal. But then we note that the products in the above theorems are in fact permutations obtained by applying such $\rho$-maps: \[lem:rhoprod\] Let $\sigma \in \symgp{k}$ be a permutation with a cycle $c$. Then $\lambda \pbrac{\prod_{i \in c} \rho_{i} \pbrac{\sigma}}$ is determined by $\lambda \pbrac{\sigma}$ and $\abs{c}$. Let $c = \pbrac{c_{1}, c_{2}, \dots, c_{\abs{c}}}$.
First, we calculate: $$\begin{aligned} \prod_{i = 1}^{\abs{c}} \rho_{c_{i}} \pbrac{\sigma} =& \rho_{c_{\abs{c}}} \pbrac{\sigma} \circ \dots \circ \rho_{c_{2}} \pbrac{\sigma} \circ \rho_{c_{1}} \pbrac{\sigma} \\ =& \rho_{c_{\abs{c}}} \pbrac{\sigma} \circ \dots \circ \rho_{c_{2}} \pbrac{\sigma} \pbrac{a \mapsto \sigma \pbrac{c_{1} + a} - \sigma \pbrac{c_{1}}} \\ =& \rho_{c_{\abs{c}}} \pbrac{\sigma} \circ \dots \circ \rho_{c_{3}} \pbrac{\sigma} \pbrac{a \mapsto \sigma \pbrac{c_{2} + \sigma \pbrac{c_{1} + a} - \sigma \pbrac{c_{1}}} - \sigma \pbrac{c_{2}}} \\ =& \rho_{c_{\abs{c}}} \pbrac{\sigma} \circ \dots \circ \rho_{c_{3}} \pbrac{\sigma} \pbrac{a \mapsto \sigma^{2} \pbrac{c_{1} + a} - \sigma^{2} \pbrac{c_{1}}} \\ &\vdots \\ =& a \mapsto \sigma^{\abs{c}} \pbrac{c_{1} + a} - \sigma^{\abs{c}} \pbrac{c_{1}} \\ =& \rho_{c_{1}} \pbrac{\sigma^{\abs{c}}}. \end{aligned}$$ But $c_{1}$ is a fixed point of $\sigma^{\abs{c}}$, so by the result of \[lem:rhofp\], this has the same cycle structure as $\sigma^{\abs{c}}$, which in turn is determined by $\lambda \pbrac{\sigma}$ and $\abs{c}$ as desired. From this and the fact that the terms of $X$-degree $1$ in all $\gci{\symgp{k}}{\cty{k}}$ and $\gci{\symgp{k+1}}{\ctxy{k}}$ are equal (to $p_{1} \sbrac{x} p_{1} \sbrac{y}^{k+1}$), we can conclude that: \[thm:ctciclassfunc\] $\gcielt{\symgp{k}}{\cty{k}}{\sigma}$ and $\gcielt{\symgp{k+1}}{\ctxy{k}}{\sigma}$ are class functions of $\sigma$ (that is, they are constant over permutations of fixed cycle type). This will simplify computational enumeration of $k$-trees significantly, since the number of partitions of $k$ grows only subexponentially while the number of permutations of $\sbrac{k}$ grows factorially. ### $k$-trees: $\kt{k}$ {#ss:ktcycind} We now have all the pieces in hand to apply \[thm:dissymkci\] to compute the cycle index of the species $\kt{k}$ of general $k$-trees.
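Both \[lem:rhofp\] and \[lem:rhoprod\] above also lend themselves to brute-force confirmation for small cases. The following sketch checks them over all of $\symgp{5}$; as before, positions are taken as $\cbrac{0, \dots, m-1}$ with arithmetic mod $m$, an indexing convention of ours rather than of the text:

```python
from itertools import permutations

def rho(i, sigma):
    m = len(sigma)
    return [(sigma[(i + a) % m] - sigma[i]) % m for a in range(m)]

def cycles(p):
    # cycles of p, each listed so that successive entries are p-images
    seen, out = set(), []
    for s in range(len(p)):
        if s in seen:
            continue
        c = [s]
        seen.add(s)
        while p[c[-1]] not in seen:
            c.append(p[c[-1]])
            seen.add(c[-1])
        out.append(c)
    return out

def cycle_type(p):
    return sorted(len(c) for c in cycles(p))

def compose(p, q):
    # (p o q)(a) = p(q(a))
    return [p[q[a]] for a in range(len(p))]

m = 5
for perm in permutations(range(m)):
    sigma = list(perm)
    # lem:rhofp -- rho_i preserves cycle type whenever i is a fixed point
    for i in range(m):
        if sigma[i] == i:
            assert cycle_type(rho(i, sigma)) == cycle_type(sigma)
    # lem:rhoprod -- the rho-product along a cycle c is rho_{c_1}(sigma^{|c|})
    for c in cycles(sigma):
        prod = list(range(m))
        for i in c:                        # apply rho_{c_1} first
            prod = compose(rho(i, sigma), prod)
        power = list(range(m))
        for _ in range(len(c)):
            power = compose(sigma, power)  # builds sigma^{|c|}
        assert prod == rho(c[0], power)
```

The ordering of each cycle as $\pbrac{c_{1}, \sigma \pbrac{c_{1}}, \sigma^{2} \pbrac{c_{1}}, \dots}$ matters here; it is exactly the hypothesis used in the telescoping computation of the proof.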
\[thm:dissymkci\] characterizes the cycle index of the generic $k$-tree species $\kt{k}$ in terms of the cycle indices of the rooted species $\ktx{k}$, $\kty{k}$, and $\ctxy{k}$; \[thm:arootquot\] gives the cycle indices of these three rooted species in terms of the $\Gamma$-cycle indices $\gci{\symgp{k}}{\cty{k}}$, $\gci{\symgp{k}}{\ctxy{k}}$, and $\gci{\symgp{k+1}}{\ctxy{k}}$; and, finally, \[thm:ctyfuncci,thm:ctxyfuncci\] give these $\Gamma$-cycle indices explicitly. By tracing the formulas in \[eq:ctyfuncci,eq:ctxyfuncci\] back through this sequence of functional relationships, we can conclude: \[thm:akci\] For $\mathfrak{a}_{k}$ the species of general $k$-trees, $\gci{\symgp{k}}{\cty{k}}$ as in \[eq:ctyfuncci\], and $\gci{\symgp{k+1}}{\ctxy{k}}$ as in \[eq:ctxyfuncci\] we have: \[thm:ktreecyc\] \[eq:akci\] $$\begin{aligned} \ci{\kt{k}} &= \frac{1}{\pbrac{k+1}!} \sum_{\sigma \in \symgp{k+1}} \gcielt{\symgp{k+1}}{\ctxy{k}}{\sigma} + \frac{1}{k!} \sum_{\sigma \in \symgp{k}} \gcielt{\symgp{k}}{\cty{k}}{\sigma} - \frac{1}{k!} \sum_{\sigma \in \symgp{k}} \gcielt{\symgp{k}}{\ctxy{k}}{\sigma} \label{eq:akciexplicit} \\ &= \qgci{\symgp{k+1}}{\ctxy{k}} + \qgci{\symgp{k}}{\cty{k}} - \qgci{\symgp{k}}{\ctxy{k}}. \label{eq:akciquot} \end{aligned}$$ \[eq:akci\] in fact represents a recursive system of functional equations, since the formulas for the $\Gamma$-cycle indices of $\cty{k}$ and $\ctxy{k}$ are recursive. Computational methods can yield explicit enumerative results. However, a bit of care will allow us to reduce the computational complexity of this problem significantly. Unlabeled enumeration and the generating function $\tilde{\mathfrak{a}}_{k} \pbrac{x}$ {#ss:ktunlenum} -------------------------------------------------------------------------------------- The result in \[thm:akci\] gives a recursive formula for the cycle index of the ordinary species $\kt{k}$ of $k$-trees.
The number of unlabeled $k$-trees with $n$ hedra has historically been an open problem, but by application of \[thm:ciogf\] the ordinary generating function counting such structures can be extracted from the cycle index $\ci{\kt{k}}$. Actually computing terms of the cycle index in order to derive the coefficients of the generating function is, however, a computationally expensive process, since the cycle index is by construction a power series in two infinite sets of variables. The computational process can be simplified significantly by taking advantage of the relatively straightforward combinatorial structure of the structural decomposition used to derive the recursive formulas for the cycle index. Recall from \[thm:gciogf\] that, for a $\Gamma$-species $F$, the ordinary generating function $\tilde{F} \pbrac{\gamma} \pbrac{x}$ counting unlabeled $\gamma$-invariant $F$-structures is given by $$\tilde{F} \pbracs[big]{\gamma} \pbrac{x} = \gcieltvars{\Gamma}{F}{\gamma}{x, x^2, x^3, \dots}$$ and that the ordinary generating function for counting unlabeled $\nicefrac{F}{\Gamma}$-structures is given by $$\tilde{F} \pbrac{x} = \frac{1}{\abs{\Gamma}} \sum_{\gamma \in \Gamma} \tilde{F} \pbracs[big]{\gamma} \pbrac{x}.$$ These formulas admit an obvious multisort extension, but we in fact wish to count $k$-trees with respect to just one sort of label (the $X$-labels on hedra), so we will not deal with multisort here. Each of the two-sort cycle indices in this chapter can be converted to one-sort by substituting $y_{i} = 1$ for all $i$. For the rest of this section, we will deal directly with these one-sort versions of the cycle indices. We begin by considering the explicit recursive functional equations in \[thm:ctyfuncci,thm:ctxyfuncci\]. In each case, by the above, the ordinary generating function is exactly the result of substituting $p_{i} \sbrac{x} = x^{i}$ into the given formula.
Thus, we have: \[thm:ctrhoogf\] For $\cty{k}$ the $\symgp{k}$-species of $Y$-rooted $k$-coding trees and $\ctxy{k}$ the $\symgp{k+1}$-species of edge-rooted $k$-coding trees, the corresponding $\Gamma$-ordinary generating functions are given by \[eq:ctrhoogf\] $$\begin{aligned} \widetilde{\cty{k}} \pbrac{\sigma} \pbrac{x} &= \exp \pbracs[Big]{ \sum_{n \geq 1} \frac{x^{n}}{n} \cdot \prod_{c \in C \pbrac{\sigma^{n}}} \widetilde{\cty{k}} \pbracs[Big]{\Delta^{-1} \prod_{i \in c} \rho_{i} \pbracs[big]{\Delta \sigma^{n}}} \pbrac{x^{\abs{c}}}} \label{eq:ctyrhoogf} \\ \intertext{and} \widetilde{\ctxy{k}} \pbrac{\sigma} \pbrac{x} &= x \cdot \prod_{c \in C \pbrac{\sigma}} \widetilde{\cty{k}} \pbracs[Big]{\prod_{i \in c} \rho_{i} \pbrac{\sigma}} \pbracs[big]{x^{\abs{c}}}. \label{eq:ctxyrhoogf} \end{aligned}$$ where $\widetilde{\cty{k}}$ is an $\symgp{k}$-generating function and $\widetilde{\ctxy{k}}$ is an $\symgp{k+1}$-generating function. However, as a consequence of \[thm:ctciclassfunc\], we can simplify these expressions significantly: \[cor:ctogf\] For $\cty{k}$ the $\symgp{k}$-species of $Y$-rooted $k$-coding trees and $\ctxy{k}$ the $\symgp{k+1}$-species of edge-rooted $k$-coding trees, the corresponding $\Gamma$-ordinary generating functions are given by \[eq:ctogf\] $$\begin{aligned} \widetilde{\cty{k}} \pbrac{\lambda} \pbrac{x} &= \exp \pbracs[Big]{\sum_{n \geq 1} \frac{x^{n}}{n} \cdot \prod_{i \in \lambda^{n}} \widetilde{\cty{k}} \pbracs[big]{\lambda^{i}} \pbracs[big]{x^{i}}} \label{eq:ctyogf} \\ \intertext{and} \widetilde{\ctxy{k}} \pbrac{\lambda} \pbrac{x} &= x \cdot \prod_{i \in \lambda} \widetilde{\cty{k}} \pbracs[big]{\lambda^{i}} \pbracs[big]{x^{i}} \label{eq:ctxyogf} \end{aligned}$$ where $\lambda^{i}$ denotes the $i$th ‘partition power’ of $\lambda$ — that is, if $\sigma$ is any permutation of cycle type $\lambda$, then $\lambda^{i}$ denotes the cycle type of $\sigma^{i}$ — and where $f \pbrac{\lambda} \pbrac{x}$ denotes the value of $f \pbrac{\sigma} 
\pbrac{x}$ for every $\sigma$ of cycle type $\lambda$. As in \[thm:ctyfuncci\], we have recursively-defined functional equations, but these are recursions of power series in a single variable, so computing their terms is much less computationally expensive. Also, as an immediate consequence of \[thm:ctciclassfunc\], we have that $\widetilde{\cty{k}}$ and $\widetilde{\ctxy{k}}$ are class functions of $\sigma$, so we can restrict our computational attention to cycle-distinct permutations. Moreover, the cycle index of the species $\kt{k}$, as seen in \[eq:akci\], is given simply in terms of quotients of the cycle indices of the two $\Gamma$-species $\cty{k}$ and $\ctxy{k}$. Thus, we also have: \[thm:akrhoogf\] For $\kt{k}$ the species of $k$-trees and $\widetilde{\cty{k}}$ and $\widetilde{\ctxy{k}}$ as in \[thm:ctrhoogf\], we have $$\label{eq:akrhoogf} \tilde{\mathfrak{a}}_{k} \pbrac{x} = \frac{1}{\pbrac{k+1}!} \sum_{\sigma \in \symgp{k+1}} \widetilde{\ctxy{k}} \pbrac{\sigma} \pbrac{x} + \frac{1}{k!} \sum_{\sigma \in \symgp{k}} \widetilde{\cty{k}} \pbrac{\sigma} \pbrac{x} - \frac{1}{k!} \sum_{\sigma \in \symgp{k}} \widetilde{\ctxy{k}} \pbrac{\sigma} \pbrac{x}.$$ Again, as a consequence of \[thm:ctciclassfunc\] by way of \[cor:ctogf\], we can instead write: For $\kt{k}$ the species of $k$-trees and $\widetilde{\cty{k}}$ and $\widetilde{\ctxy{k}}$ as in \[cor:ctogf\], we have $$\label{eq:akogf} \tilde{\mathfrak{a}}_{k} \pbrac{x} = \sum_{\lambda \vdash k+1} \frac{1}{z_{\lambda}} \widetilde{\ctxy{k}} \pbrac{\lambda} \pbrac{x} + \sum_{\lambda \vdash k} \frac{1}{z_{\lambda}} \widetilde{\cty{k}} \pbrac{\lambda} \pbrac{x} - \sum_{\lambda \vdash k} \frac{1}{z_{\lambda}} \widetilde{\ctxy{k}} \pbrac{\lambda \cup \cbrac{1}} \pbrac{x}.$$ This direct characterization of the ordinary generating function of unlabeled $k$-trees, while still recursive, is much simpler computationally than the characterization of the full cycle index in \[eq:akci\].
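The two ingredients of \[eq:akogf\] that we have not yet made explicit, the constants $z_{\lambda}$ and the partition powers $\lambda^{i}$, are cheap to compute directly. A sketch (the helper names are ours):

```python
from math import gcd, factorial
from collections import Counter

def z(lam):
    # z_lambda = prod_i i^{m_i} m_i!, with m_i the multiplicity of the part i;
    # k!/z_lambda is the number of permutations of cycle type lambda
    out = 1
    for part, m in Counter(lam).items():
        out *= part ** m * factorial(m)
    return out

def partition_power(lam, i):
    # cycle type of sigma^i for any sigma of cycle type lam: each part l
    # splits into gcd(l, i) cycles of length l // gcd(l, i)
    out = []
    for l in lam:
        g = gcd(l, i)
        out.extend([l // g] * g)
    return sorted(out, reverse=True)

# sanity checks: the partitions of 3 account for all 3! permutations,
# and a 4-cycle squared gives two 2-cycles
parts3 = [[1, 1, 1], [2, 1], [3]]
assert sum(factorial(3) // z(p) for p in parts3) == factorial(3)
assert partition_power([4, 2], 2) == [2, 2, 1, 1]
```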
For computation of the number of unlabeled $k$-trees, it is therefore far preferable. Classical methods for working with recursively-defined power series suffice to extract the coefficients quickly and efficiently. The results of some such explicit calculations are presented in \[s:ktenum\]. Special-case behavior for small $k$ ----------------------------------- Many of the complexities of the preceding analysis apply only for $k$ sufficiently large. We note here some simplifications that are possible when $k$ is small. ### Ordinary trees ($k = 1$) When $k = 1$, an $\kt{k}$-structure is merely an ordinary tree with $X$-labels on its edges and $Y$-labels on its vertices. There is no internal symmetry of the form that the actions of $\symgp{k}$ are intended to break. $\symgp{2}$ acts on ordinary trees rooted at a *directed* edge, with the nontrivial element $\tau \in \symgp{2}$ acting by reversing this orientation. The resulting decomposition from the dissymmetry theorem in \[thm:dissymk\] and the recursive functional equations of \[obs:funcdecompct\] then clearly reduce to the classical dissymmetry analysis of ordinary trees. ### $2$-trees When $k=2$, there is a nontrivial symmetry at fronts (edges); two triangles may be joined at an edge in two distinct ways. The imposition of a coherent orientation on a $2$-tree by directing one of its edges breaks this symmetry; the action of $\symgp{2}$ by reversal of these orientations gives unoriented $2$-trees as its orbits. The defined action of $\symgp{3}$ on edge-rooted oriented triangles is simply the classical action of the dihedral group $D_{6}$ on a triangle, and its orbits are unoriented, unrooted triangles. We further note that $\rho_{i}$ is the trivial map on $\symgp{2}$ and that $\rho_{i} \pbrac{\sigma} = \pbrac{1\ 2}$ for $\sigma \in \symgp{3}$ if and only if $\sigma$ is an odd permutation, both regardless of $i$.
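The parity claim for $\symgp{3}$ is small enough to check exhaustively. In the sketch below, positions are $\cbrac{0, 1, 2}$ with arithmetic mod $3$ (our convention), so the transposition in question is the one swapping positions $1$ and $2$:

```python
from itertools import permutations

def rho(i, sigma):
    m = len(sigma)
    return tuple((sigma[(i + a) % m] - sigma[i]) % m for a in range(m))

def is_odd(p):
    # parity via cycle count: odd iff n minus the number of cycles is odd
    n, seen, ncyc = len(p), set(), 0
    for s in range(n):
        if s not in seen:
            ncyc += 1
            x = s
            while x not in seen:
                seen.add(x)
                x = p[x]
    return (n - ncyc) % 2 == 1

swap = (0, 2, 1)  # fixes position 0, exchanges positions 1 and 2
for sigma in permutations(range(3)):
    for i in range(3):
        r = rho(i, sigma)
        assert r in ((0, 1, 2), swap)        # rho_i(sigma) always fixes 0
        assert (r == swap) == is_odd(sigma)  # the swap occurs iff sigma is odd
```

Since $\rho_{i} \pbrac{\sigma}$ always fixes position $0$, its restriction to $\cbrac{1, 2}$ is either the identity or the transposition, and the loop confirms that the latter occurs exactly for odd $\sigma$, independently of $i$.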
We then have that: \[eq:rest2trees\] $$\begin{aligned} \gcielt{\symgp{2}}{\cty{2}}{\sigma} &= p_{1} \sbrac{y} \cdot \ci{\specname{E}} \circ \pbracs[Big]{p_{1} \sbrac{x} \cdot \prod_{c \in C \pbrac{\sigma}} \gcieltvars{\symgp{2}}{\cty{2}}{e}{p_{\abs{c}} \sbrac{x}, p_{2 \abs{c}} \sbrac{x}, \dots; p_{\abs{c}} \sbrac{y}, p_{2 \abs{c}} \sbrac{y}, \dots}} \label{eq:ctyfuncci2} \\ \gcielt{\symgp{3}}{\ctxy{2}}{\sigma} &= p_{1} \sbrac{x} \cdot \prod_{c \in C \pbrac{\sigma}} \gci{\symgp{2}}{\cty{2}} \pbracs[big]{\rho \pbrac{\sigma}^{\abs{c}}} \pbrac{p_{\abs{c}} \sbrac{x}, p_{2 \abs{c}} \sbrac{x}, \dots; p_{\abs{c}} \sbrac{y}, p_{2 \abs{c}} \sbrac{y}, \dots}. \label{eq:ctxyfuncci2} \end{aligned}$$ where, by abuse of notation, we let $\rho$ represent any $\rho_{i}$. By the preceding observations, the argument $\rho \pbrac{\sigma}^{\abs{c}}$ in \[eq:ctxyfuncci2\] is $\tau$ if and only if $\sigma$ is an odd permutation and $c$ is of odd length. This analysis and the resulting formulas for the cycle index $\ci{\kt{2}}$ are essentially equivalent to those derived in [@gessel:spec2trees]. Computation in species theory {#c:comp} ============================= Cycle indices of compositional inverse species {#s:compinv} ---------------------------------------------- In \[s:nbp\], our results included two references to the compositional inverse $\specname{CBP}^{\bullet \abrac{-1}}$ of the species $\specname{CBP}^{\bullet}$. Although we have not explored computational methods in depth here, the question of how to compute the cycle index of the compositional inverse of a specified species efficiently is worth some consideration. Several methods are available, including one developed in [@bll:species 4.2.19] as part of the proof that arbitrary species have compositional inverses, but our preferred method is one of iterated substitution.
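Since $\Psi \circ \Phi = X$ forces $\Phi = X - \Psi_{2} \pbrac{\Phi} - \Psi_{3} \pbrac{\Phi} - \dots$, the same iteration can be carried out on ordinary one-variable power series, which makes for a compact illustration of the method. A sketch, with series stored as coefficient lists and all helper names our own:

```python
def mul(a, b, n):
    # product of two series, truncated at degree n
    c = [0] * (n + 1)
    for i, ai in enumerate(a[: n + 1]):
        if ai:
            for j, bj in enumerate(b[: n + 1 - i]):
                c[i + j] += ai * bj
    return c

def subs(psi, phi, n):
    # psi(phi(x)) truncated at degree n; phi must have zero constant term
    out = [0] * (n + 1)
    power = [1] + [0] * n                # phi^0
    for coeff in psi[: n + 1]:
        if coeff:
            for d in range(n + 1):
                out[d] += coeff * power[d]
        power = mul(power, phi, n)
    return out

def inverse(psi, n):
    # psi = [0, 1, a2, a3, ...]; iterate Phi <- X - (Psi - X)(Phi),
    # gaining at least one correct degree per pass
    tail = [0, 0] + psi[2 : n + 1]
    phi = [0, 1] + [0] * (n - 1)
    for _ in range(n):
        corr = subs(tail, phi, n)
        phi = [0, 1] + [-corr[d] for d in range(2, n + 1)]
    return phi

# Psi(x) = x/(1-x) = x + x^2 + ... has inverse Phi(x) = x/(1+x) = x - x^2 + x^3 - ...
n = 8
psi = [0] + [1] * n
phi = inverse(psi, n)
assert phi == [0] + [(-1) ** (d + 1) for d in range(1, n + 1)]
assert subs(psi, phi, n) == [0, 1] + [0] * (n - 1)
```

The cycle-index computation described below works the same way, with multiplication and substitution of one-variable series replaced by the corresponding operations on cycle indices.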
Suppose that $\Psi$ is a species (with known cycle index) of the form $X + \Psi_{2} + \Psi_{3} + \dots$ where $\Psi_{i}$ is the restriction of $\Psi$ to structures on sets of cardinality $i$ and that $\Phi$ is the compositional inverse of $\Psi$. Then $\Psi \circ \Phi = X$ by definition, but by hypothesis $$X = \Psi \circ \Phi = \Phi + \Psi_{2} \pbrac{\Phi} + \Psi_{3} \pbrac{\Phi} + \dots$$ also. Thus $$\label{eq:compinv} \Phi = X - \Psi_{2} \pbrac{\Phi} - \Psi_{3} \pbrac{\Phi} - \dots.$$ This recursive equation is the key to our computational method. To compute the cycle index of $\Phi$ to degree $2$, we begin with the approximation $\Phi \approx X$ and then substitute it into the first two terms of \[eq:compinv\]: $\Phi \approx X - \Psi_{2} \pbrac{X}$ and thus $\ci{\Phi} \approx \ci{X} - \ci{\Psi_{2}} \circ \ci{X}$. All terms of degree up to two in this approximation will be correct. To compute the cycle index of $\Phi$ to degree $3$, we then take this new approximation $\Phi \approx X - \Psi_{2} \pbrac{X}$ and substitute it into the first three terms of \[eq:compinv\]. This process can be iterated as many times as are needed; to determine all terms of degree up to $n$ correctly, we need only iterate $n$ times. With appropriate optimizations (in particular, truncations), this method can run very quickly on a personal computer to reasonably high degrees; we were able to compute $\ci{\specname{CBP}^{\bullet \abrac{-1}}}$ to degree sixteen in thirteen seconds. Enumerative tables {#c:enum} ================== Bipartite blocks {#s:bpenum} ---------------- With the tools developed in \[c:bpblocks\], we can calculate the cycle indices of the species $\mathcal{NBP}$ of nonseparable bipartite graphs to any finite degree we choose using computational methods. This result can then be used to enumerate unlabeled bipartite blocks. We have done so here using Sage 1.7.4 [@sage] and code listed in \[s:bpbcode\]. The resulting values appear in \[tab:bpblocks\]. 
  $n$   Unlabeled
 ----- -----------
    1           1
    2           1
    3           0
    4           1
    5           1
    6           5
    7           8
    8          42
    9         146
   10         956

  : Enumerative data for unlabeled bipartite blocks with $n$ hedra[]{data-label="tab:bpblocks"}

$k$-trees {#s:ktenum} --------- With the recursive functional equations for cycle indices of \[s:ktcycind\], we can calculate the explicit cycle index for the species $\kt{k}$ to any finite degree we choose using computational methods; this cycle index can then be used to enumerate both unlabeled and labeled (at fronts, hedra, or both) $k$-trees up to a specified number $n$ of hedra (or, equivalently, $kn + 1$ fronts). We have done so here for $k \leq 7$ and $n \leq 30$ using Sage 1.7.4 [@sage] and code available in \[s:ktcode\]. The resulting values appear in \[tab:ktrees\]. We note that both unlabeled and hedron-labeled enumerations of $k$-trees stabilize: \[thm:ktreestab\] For $k \geq n + 2$, the numbers of unlabeled and hedron-labeled $k$-trees are independent of $k$. We show that the species $\kt{k}$ and $\kt{k+1}$ have contact up to order $k+2$ by explicitly constructing a natural bijection. We note that in a $\pbrac{k+1}$-tree with no more than $k+2$ hedra, there will exist at least one vertex which is common to *all* hedra. For any $k$-tree with no more than $k+2$ hedra, we can construct a $\pbrac{k+1}$-tree with the same number of hedra by adding a single vertex and connecting it by edges to every existing vertex; we can then pass labels up from the $\pbrac{k+1}$-cliques which are the hedra of the $k$-tree to the $\pbrac{k+2}$-cliques which now sit over them. The resulting graph will be a $\pbrac{k+1}$-tree whose $\pbrac{k+1}$-tree hedra are adjacent exactly when the $k$-tree hedra they came from were adjacent. Therefore, any two distinct $k$-trees will pass to distinct $\pbrac{k+1}$-trees.
Similarly, for any $\pbrac{k+1}$-tree with no more than $k+2$ hedra, choose one of the vertices common to all the hedra and remove it, passing the labels of $\pbrac{k+1}$-tree hedra down to the $k$-tree hedra constructed from them; again, adjacency of hedra is preserved. This of course creates a $k$-tree, and for distinct $\pbrac{k+1}$-trees the resulting $k$-trees will be distinct. Moreover, by symmetry the result is independent of the choice of common vertex, in the case there is more than one. However, thus far we have neither determined a direct method for computing these stabilization numbers nor identified a straightforward combinatorial characterization of the structures they represent. Code listing {#c:code} ============ Our results in \[c:bpblocks,c:ktrees\] provide a framework for enumerating bipartite blocks and general $k$-trees. However, there is significant work to be done adapting the theory into practical algorithms for computing the actual numbers of such structures. Using the computer algebra system Sage 1.7.4 [@sage], we have done exactly this. In each case, the script listed may be run with Sage by invoking > sage --python scriptname.py args on a computer with a functioning Sage installation. Alternatively, each code snippet may be executed in the Sage ‘notebook’ interface starting at the comment “`MATH BEGINS HERE`”; in this case, the final `print…` invocation should be replaced with one specifying the desired parameters. Bipartite blocks {#s:bpbcode} ---------------- The functional \[eq:nbpexp\] characterizes the cycle index of the species $\specname{NBP}$ of bipartite blocks. Python/Sage code to compute the coefficients of the ordinary generating function $\widetilde{\specname{NBP}} \pbrac{x}$ of unlabeled bipartite blocks explicitly follows in \[lst:bpcode\]. Specifically, the generating function may be computed to degree $n$ by invoking > sage --python bpblocks.py n on a computer with a functioning Sage installation. 
$k$-trees {#s:ktcode} --------- The recursive functional equations in \[eq:ctyogf,eq:ctxyogf,eq:akogf\] characterize the ordinary generating function $\tilde{\mathfrak{a}}_{k} \pbrac{x}$ for unlabeled general $k$-trees. Python/Sage code to compute the coefficients of this generating function explicitly follows in \[lst:ktcode\]. Specifically, the generating function for unlabeled $k$-trees may be computed to degree $n$ by invoking > sage --python ktrees.py k n on a computer with a functioning Sage installation. This code uses the class-function optimization of \[thm:ctciclassfunc\] extensively; as a result, it is able to compute the number of $k$-trees on up to $n$ hedra quickly even for relatively large $k$ and $n$. For example, the first thirty terms of the generating function for $8$-trees in \[tab:8trees\] were computed on a modern desktop-class computer in approximately two minutes. [^1]: That is, the value of $\fix \pbrac{F \sbrac{\sigma}}$ will be constant on conjugacy classes of permutations, which we note are exactly the sets of permutations of fixed cycle type. [^2]: Although these are called ‘functions’ for historical reasons, convergence of these formal power series is often not of immediate interest. [^3]: The *line group* of a graph is the group of permutations of edges induced by permutations of vertices. [^4]: Note that this decomposition does not actually partition the vertices, since many blocks may share a single cut-point, a detail which significantly complicates but does not entirely preclude species-theoretic analysis.
--- abstract: 'We propose a new type of hidden layer for a multilayer perceptron, and demonstrate that it obtains the best reported performance for an MLP on the MNIST dataset.' bibliography: - 'strings.bib' - 'strings-shorter.bib' - 'ml.bib' - 'aigaion-shorter.bib' --- The piecewise linear activation function ======================================== We propose to use a specific kind of piecewise linear function as the activation function for a multilayer perceptron. Specifically, suppose that the layer receives as input a vector $x \in \mathbb{R}^D$. The layer then computes presynaptic output $z = x^T W + b$ where $W \in \mathbb{R}^{D \times N}$ and $b \in \mathbb{R}^N$ are learnable parameters of the layer. We propose to have each layer produce output via the activation function $h(z)_i = \text{max}_{j \in S_i} z_j$ where $S_i$ is a different non-empty set of indices into $z$ for each $i$. This function provides several benefits: - It is similar to the rectified linear units [@Glorot+al-AI-2011] which have already proven useful for many classification tasks. - Unlike rectifier units, every unit is guaranteed to have some of its parameters receive some training signal at each update step. This is because the inputs $z_j$ are only compared to each other, and not to $0$, so one is always guaranteed to be the maximal element through which the gradient flows. In the case of rectified linear units, there is only a single element $z_j$ and it is compared against $0$. In the case when $0 > z_j$, $z_j$ receives no update signal. - Max pooling over groups of units allows the features of the network to easily become invariant to some aspects of their input. For example, if a unit $h_i$ pools (takes the max) over $z_1$, $z_2$, and $z_3$, and $z_1$, $z_2$ and $z_3$ respond to the same object in three different positions, then $h_i$ is invariant to these changes in the object's position.
  A layer consisting only of rectifier units can't take the max over features like this; it can only take their average.

- Max pooling can reduce the total number of parameters in the network. If we pool with non-overlapping receptive fields of size $k$, then $h$ has size $N / k$, and the next layer has its number of weight parameters reduced by a factor of $k$ relative to a network without max pooling. This makes the network not only cheaper to train and evaluate but also more statistically efficient.

- This kind of piecewise linear function can be seen as letting each unit $h_i$ learn its own activation function. Given large enough sets $S_i$, $h_i$ can implement increasingly complex convex functions of its input. This includes functions that are already used in other MLPs, such as the rectified linear function and absolute value rectification.

Experiments
===========

We used $S_i = \{ 5i, 5i + 1, \dots, 5i + 4 \}$ in our experiments. In other words, the activation function consists of max pooling over non-overlapping groups of five consecutive presynaptic inputs. We apply this activation function to the multilayer perceptron trained on MNIST by @Hinton-et-al-arxiv2012. This MLP uses two hidden layers of 1200 units each. In our setup, the presynaptic activation $z$ has size 1200, so the pooled output of each layer has size 240. The rest of our training setup remains unchanged apart from adjustments to the hyperparameters.

@Hinton-et-al-arxiv2012 report 110 errors on the test set. To our knowledge, this is the best published result on the MNIST dataset for a method that uses neither pretraining nor knowledge of the input geometry. It is not clear how @Hinton-et-al-arxiv2012 obtained a single test set number. We train on the first 50,000 training examples, using the last 10,000 as a validation set. We use the misclassification rate on the validation set to determine at what point to stop training.
We then record the log likelihood on the first 50,000 examples, and continue training, but using the full 60,000 example training set. When the log likelihood of the validation set first exceeds the recorded value of the training set log likelihood, we stop training the model and evaluate its test set error. Using this approach, our trained model made 94 mistakes on the test set. We believe this is the best result to date that uses neither pretraining nor knowledge of the input geometry.
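The pooled activation used in these experiments (non-overlapping max pooling with pool size $k = 5$ over 1200 presynaptic units, giving 240 outputs per hidden layer) can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the authors' implementation; the function name `maxout` and the random toy input are our own.

```python
import numpy as np

def maxout(z, k=5):
    """Pool presynaptic activations z (shape [..., N]) over
    non-overlapping groups of k consecutive units:
    h_i = max(z[k*i], ..., z[k*i + k - 1])."""
    *lead, n = z.shape
    assert n % k == 0, "N must be divisible by the pool size k"
    return z.reshape(*lead, n // k, k).max(axis=-1)

# Dimensions from the experiments above: 1200 presynaptic units
# pooled in groups of 5 yield 240 outputs per hidden layer.
rng = np.random.default_rng(0)
x = rng.standard_normal(3)          # toy input, D = 3
W = rng.standard_normal((3, 1200))  # learnable weights
b = np.zeros(1200)                  # learnable biases
h = maxout(x @ W + b)
print(h.shape)  # (240,)
```

Because only the maximal $z_j$ in each group propagates gradient, every group contributes a training signal at each update, in contrast to a rectifier unit whose input is below zero.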