---
abstract: '\[sec:abstract\] Reaction rate constants and cross sections are computed for the radiative association of carbon cations (C$^+$) and fluorine atoms (F) in their ground states. We consider reactions through the electronic transition $1^1\Pi \rightarrow X^1\Sigma^+$ and rovibrational transitions on the $X^1\Sigma^+$ and $a^3\Pi$ potentials. Semiclassical and classical methods are used for the direct contribution and Breit–Wigner theory for the resonance contribution. Quantum mechanical perturbation theory is used for comparison. A modified formulation of the classical method applicable to permanent dipoles of unequally charged reactants is implemented. The total rate constant is fitted to the Arrhenius–Kooij formula in five temperature intervals with a relative difference of $<3\:\%$. The fit parameters will be added to the online database KIDA. For temperatures of $10$ to $250\:$K, the rate constant is about $10^{-21}\:$cm$^3$s$^{-1}$, rising toward $10^{-16}\:$cm$^3$s$^{-1}$ at $30{,}000\:$K.'
author:
- Jonatan Öström
- 'Dmitry S. Bezrukov'
- Gunnar Nyman
- Magnus Gustafsson
bibliography:
- 'citation.bib'
title: Reaction Rate Constant for Radiative Association of CF$^+$
---
---
abstract: 'This document contains additional plots proving the validity of the results in the main paper and plots showing intermediate measurements used for the adaptive schedule. Furthermore different parametrization for nonlinear annealing schedules are tested.'
author:
- Daniel Herr
- Ethan Brown
- Bettina Heim
- Mario Könz
- Guglielmo Mazzola
- Matthias Troyer
bibliography:
- 'biblio.bib'
title: 'Supplementary Material for: Optimizing schedules for Quantum Annealing'
---
Methods used in the main paper {#methods-used-in-the-main-paper .unnumbered}
------------------------------
In the following, a more thorough description is given of the algorithms used to simulate both classical annealing (CA) and quantum annealing (QA).
### Classical Annealing {#classical-annealing .unnumbered}
In CA [@kirkpatrick1983] the system is first initialized to a random configuration at a high temperature and then gradually cooled. This allows the system to escape from local minima and relax towards lower-energy configurations [@sa_and_sa_conv_cond]. During each step the Metropolis Monte Carlo algorithm [@Metropolis] allows re-equilibration. Each step of this algorithm proposes a change to the current configuration. The change is always accepted if it is favorable, whereas if the cost increases it is only accepted with a probability that depends on the temperature of the system.
### Quantum Annealing {#quantum-annealing .unnumbered}
During quantum annealing (QA) [@Ray1989; @Finnila1994; @Kadowaki1998; @idea_of_qa; @qareview] the adiabatic theorem enables the system to evolve from a trivial ground state to the solution of the complex problem $\mathcal{H}_P$, provided the change in the Hamiltonian is sufficiently slow. For this one can state a new system Hamiltonian $$\mathcal{H}=s \mathcal{H}_P + \left(1-s\right) \mathcal{H}_D
\label{eq:generalhamil}$$ that introduces the control parameter $s \in \left[0 ,1\right]$, which enables a transition from an initial system $\mathcal{H}_D$, whose ground state can be easily obtained, to the problem Hamiltonian. In contrast to CA, the system is held at a constant inverse temperature $\beta=1/T$ while the control parameter is slowly increased to 1. Thus, the QA schedule takes the system from an easily preparable state to a state where quantum fluctuations are suppressed and the system remains frozen in its configuration. The problem Hamiltonian encodes the classical problem that is to be solved. In QA the classical spin values $s_i$ are replaced by the Pauli spin-z operators $\sigma_i^z$, and the driver Hamiltonian $\mathcal{H}_D$, which introduces quantum fluctuations, is given by $$\mathcal{H}_D = \Gamma_0 \sum_i \sigma^x_i.$$ Here $\sigma_i^x$ is the Pauli spin-x operator for the $i$th spin; it enables transitions between $\uparrow$ and $\downarrow$ for a nonzero transverse field $\Gamma_0$.
To simulate QA we used a discrete-time SQA algorithm. We use the path integral Monte Carlo (PIMC) method to map the three-dimensional system into $3+1$ dimensions by the introduction of an imaginary-time dimension [@suzuki_orig]. This extra dimension is discretized into $M$ Trotter slices, which are copies of the classical Ising system coupled to each other with a strength dependent on $\Gamma$ and $\beta$. As the transverse field decreases, the coupling between the Trotter slices gets stronger. In the classical limit $\Gamma = 0$ the quantum fluctuations are suppressed and the system remains frozen in its configuration. The simulation using a finite number of Trotter slices $M$ is called discrete-time SQA (DT-SQA) and comes with a discretization error of $O(\beta^3/M^2)$. The physical limit corresponds to the continuous-time limit $M\rightarrow \infty$. To compare the computational effort between QA and CA, the numbers of Monte Carlo sweeps of SQA and CA are compared. In SQA a single update consists of a Swendsen-Wang [@swendsenwang] cluster move in the imaginary-time direction only, as opposed to the single spin flip that CA performs. This has proven to be a reliable comparison for the individual runtimes [@Heimb15].
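The $\Gamma$-dependence of the inter-slice coupling can be made concrete. In one common convention (shown here for illustration; normalizations vary between implementations), the coupling between adjacent Trotter slices is $J_\perp = \tfrac{1}{2}\ln\coth\!\left(\beta\Gamma/M\right)$, which indeed grows as the transverse field decreases:

```python
import math

def trotter_coupling(gamma, beta, M):
    """Dimensionless coupling between adjacent Trotter slices,
        J_perp = (1/2) * ln( coth(beta * gamma / M) ),
    in one common DT-SQA convention (assumed here for illustration).
    It diverges as gamma -> 0 (slices lock together: classical limit)
    and vanishes as gamma -> infinity (slices decouple)."""
    x = beta * gamma / M
    return 0.5 * math.log(1.0 / math.tanh(x))
```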
Since recent work by some of the authors indicates that a PIMC technique with open boundary conditions in imaginary time is the best possible SQA algorithm and may better reproduce the scaling with system size of a coherent quantum annealer [@mazzola2017quantum; @Guglielmo], the simulations were run accordingly. The results serve as a best-case estimate for the performance of a physical quantum annealer, e.g. the D-Wave 2, since the coherence length in such devices may or may not extend across the entire system. Furthermore, all simulations were run in the physical limit, where the number of Trotter slices is required to be high enough to ensure convergence. This leads to $M=1024$.
Observables in PIMC {#observables-in-pimc .unnumbered}
-------------------
To evaluate the expectation value of the QA counterpart of the specific heat, $C_q$, we need to compute the value of $\left<\sigma_x\right>$. A derivation of how to calculate this observable using the PIMC method can be found in [@Krzakala08] and is reproduced in the following.\
Generally an observable can be evaluated as $$\left<O\right> = \frac{1}{Z} \text{tr}\left( O e^{-\beta \mathcal{H}}\right).$$ Thus the expectation value of $\sigma_x$ evaluates to $$\left<\sigma_i^x\right>=\frac{\sum_{\sigma_i} \bra{\sigma_i} \sigma_i^x e^{-\beta\mathcal{H}}\ket{\sigma_i}}{\sum_{\sigma_i} \bra{\sigma_i} e^{-\beta \mathcal{H}} \ket{\sigma_i}}.$$ Introducing the Trotter decomposition $e^{-\beta\mathcal{H}} = {\left( e^{-\frac{\beta}{M} \mathcal{H}}\right)}^M$ with $\tau = \frac{\beta}{M}$ and inserting identity operators between the factors gives $$\left<\sigma_i^x\right>\Big|_k = \frac{1}{Z}\sum_{\{\sigma\}} \bra{\sigma^1} e^{-\tau\mathcal{H}} \ket{\sigma^2} \cdots \bra{\sigma^k} \sigma_i^x e^{-\tau\mathcal{H}} \ket{\sigma^{k+1}} \cdots \bra{\sigma^M} e^{-\tau\mathcal{H}} \ket{\sigma^1},$$ where the operator $\sigma_i^x$ acts on the $k$th Trotter slice. But there is freedom to choose for which Trotter slice to evaluate $\sigma_i^x$, so one can also take the average of all these choices: $$\left<\sigma_i^x\right>= \frac{1}{M} \sum_{k=1}^{M} \left<\sigma_i^x\right>\Big|_k.$$ Now, with the introduction of a small error of order $O(\tau^2)$, one can split $e^{-\tau (\mathcal{H}_p + \Gamma\sum_i\sigma_i^x)}= e^{-\tau \mathcal{H}_p} e^{-\tau \Gamma \sum_i \sigma_i^x}$ and then find for the expectation value $$\left<\sigma_i^x\right> = \frac{1}{M}\sum_{k=1}^{M} \left( \frac{\bra{\sigma_i^{k+1}} \sigma_i^x e^{-\tau \Gamma \sum_i \sigma_i^x }\ket{\sigma_i^k}}{\bra{\sigma_i^{k+1}} e^{-\tau \Gamma \sum_i \sigma_i^x}\ket{\sigma_i^k}} \right).$$ Using $\bra{\uparrow} e^{a\sigma^x}\ket{\uparrow} = \cosh(a)$ and $\bra{\uparrow} e^{a\sigma^x}\ket{\downarrow} = \sinh(a)$ one obtains the expectation value of the Pauli spin-x operator: $$\left<\sigma_i^x\right>= \frac{1}{M} \sum_{k=1}^{M} {\tanh\left(-\tau \Gamma \right)}^{s_i^k s_i^{k+1}}$$
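The final estimator is straightforward to apply to sampled PIMC configurations. A minimal sketch, assuming worldlines stored as one list of $\pm1$ Trotter-slice spins per site (a hypothetical data layout):

```python
import math

def sigma_x_expectation(worldlines, gamma, beta):
    """Estimate <sigma^x>, averaged over sites, from PIMC worldlines.

    worldlines: one list per site, each holding the M Trotter-slice
    spins s_i^k in {+1, -1}. Implements the estimator derived above,
        <sigma_i^x> = (1/M) * sum_k tanh(-tau * gamma) ** (s_i^k * s_i^{k+1}),
    with tau = beta / M and periodic boundaries in imaginary time."""
    total = 0.0
    for spins in worldlines:
        M = len(spins)
        t = math.tanh(-beta / M * gamma)
        total += sum(t ** (spins[k] * spins[(k + 1) % M]) for k in range(M)) / M
    return total / len(worldlines)
```

For a fully aligned worldline every exponent is $+1$, so the estimator reduces to $\tanh(-\tau\Gamma)$; each kink contributes the inverse factor instead.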
![\[fig:sigmax\] The expectation value of $\sigma_x$ operator depending on $\Gamma$](sigmax.pdf){width="\linewidth"}
In Figure \[fig:sigmax\] the expectation value of $\sigma_x$ is plotted for different values of the transverse field. For the more specific schedule relying on the Hamiltonian $$\mathcal{H} = \mathcal{H}_p + \Gamma(t) \sum_{i=0}^N \sigma_i^x$$ one can obtain an expectation value for $C_q$ via $$C_q = \frac{1}{\beta} \frac{d^2}{d\Gamma^2} \log(Z)= \beta\Gamma\left(1 - \left<\sigma_x\right>^2\right).$$ Finally, one should note that any constant prefactor in the measurement of $C_q$ will cancel later on when $\lambda$ is determined by fixing the number of MCS.
Schedules {#schedules .unnumbered}
---------
For optimized parameters, an exponential schedule and a mixture of both classical and quantum annealing, where the transverse field was linearly decreased while the inverse temperature was linearly increased, were tested. The performance of these schedules is plotted in Figure \[fig:nonlin\_perf\]. Even the hybrid between CA and QA did not show a considerable performance increase, as can be seen in Figure \[fig:BG\_sched\]. The additional parameters that need to be optimized only seem to result in marginal changes. The residual energy seems to behave similarly to SQA runs with a fixed high temperature at few MCS, while still reaching the slightly lower final state that low-temperature runs attain for large numbers of MCS.
![\[fig:nonlin\_perf\]Performance of the different schedules for optimized start and end parameters](Ising_final.pdf){width="\linewidth"}
![\[fig:convergence\]Change in the number of Trotter slices to check whether DT-SQA has already converged](convergence.pdf){width="\linewidth"}
![\[fig:BG\_sched\]Median residual energy for a schedule with linear change in both $\beta$ and $\Gamma$](BG_results.pdf){width="\linewidth"}
Notes on the adaptive schedule {#notes-on-the-adaptive-schedule .unnumbered}
------------------------------
The adaptive schedule presented in this paper is derived using the system Hamiltonian given in Eq. \[eq:generalhamil\]. Yet a more common annealing procedure [@Heimb15; @santoro2] keeps the problem Hamiltonian at constant strength while the transverse field is decreased from $\Gamma_0$ to $0$. The simulations performed in this paper use this more specific schedule.
Validity of Results {#validity-of-results .unnumbered}
-------------------
In Figure \[fig:convergence\] one can see that at 1024 Trotter slices a change in discretization has almost no influence on the behavior of QA. Thus one can conclude that the algorithm is sufficiently converged.
Ferromagnet {#ferromagnet .unnumbered}
-----------
Additionally, the same analysis was conducted for a 3D ferromagnet whose ground-state degeneracy was lifted by a local field in the $\sigma_z$ direction. The indicator for the annealing speed is plotted in Figure \[fig:sched\_ferro\], and the performance for different unoptimized annealing parameters in Figure \[fig:Ferro\_perf\]. Again one can see that the adaptive schedule is better than the linear schedule for unoptimized parameters. But we noticed that optimized values can further improve the performance, such that a well-optimized starting value beats the unoptimized adaptive schedule in the case of a ferromagnet. Still, this schedule becomes more advantageous the farther from optimized start values the annealing run is.
![\[fig:sched\_ferro\]Measured indicator for the annealing speed of a 3D ferromagnet with $\beta = 32$](Ferro_CQ.pdf){width="\linewidth"}
![\[fig:Ferro\_perf\]Performance comparison between the adaptive and linear schedules for a 3D ferromagnet](Ferro_unoptimized.pdf){width="\linewidth"}
Comparison to 2D {#comparison-to-2d .unnumbered}
----------------
Simulations not in the converged limit show the exact same behavior as for 2D Ising spin glasses, cf. Figure \[fig:bettina\_B\] and the paper on 2D Ising spin glasses [@Heimb15]. The same can be observed for the converged case, that is presented in the main paper.
![\[fig:bettina\_B\]Temperature behavior of the DT-SQA algorithm without the requirement of convergence to the continuous limit](bettina_B.pdf){width="\linewidth"}
---
abstract: 'Ultralight scalars are an interesting dark matter candidate which may produce a mechanical signal by modulating the Bohr radius. Recently it has been proposed to search for this signal using resonant-mass antennae. Here, we extend that approach to a new class of existing and near term compact (gram to kilogram mass) acoustic resonators composed of superfluid helium or single crystal materials, producing displacements that are accessible with opto- or electromechanical readout techniques. We find that a large unprobed parameter space can be accessed using ultra-high-Q, cryogenically-cooled, cm-scale mechanical resonators operating at 100 Hz to 100 MHz frequencies, corresponding to $10^{-12}-10^{-6}$ eV scalar mass range.'
author:
- Jack Manley
- 'Dalziel J. Wilson'
- Russell Stump
- Daniel Grin
- Swati Singh
bibliography:
- 'DMsensor.bib'
title: Searching for scalar dark matter with compact mechanical resonators
---
*Introduction*.–The existence of dark matter (DM) is supported by numerous astrophysical observations [@1970ApJ...159..379R; @Tyson:1998vp; @Markevitch:2003at; @Hinshaw:2012aka; @Aghanim:2018eyx]. However, the Standard Model (SM) of particle physics provides no clear DM candidates, spurring searches for new (beyond the SM) particles like WIMPs (weakly interacting massive particles) [@Jungman:1995df; @Tan:2016zwf; @Akerib:2016vxi] and axions [@Peccei:1977hh; @Wilczek:1977pj; @Weinberg:1977ma; @Kim2010]. String theory suggests many new light particles, motivating the possibility of ultralight dark matter [@Witten:1984dg; @Damour:1994ya; @Damour:1994zq; @Svrcek:2006yi; @Conlon:2006tq; @Arvanitaki:2009fg].
For sufficiently low masses ($m_{\text{dm}}\lesssim 10^{-1}~{\rm eV}$), DM particles behave as a classical field, due to their large occupation numbers. DM would then be produced non-thermally through coherent oscillations of a cosmological scalar field [@Abbott:1982af; @Dine:1982ah; @PhysRevD.28.1243; @Preskill:1982cy]. Cosmic microwave background anisotropies, large-scale structure observations, and other measurements impose a lower limit of $m_{\text{dm}}\gtrsim 10^{-22}~{\rm eV}$ for ultralight DM (c.f. [@Hlozek:2014lca; @Marsh:2015xka; @Hlozek:2017zzf; @Poulin:2018dzj; @Irsic:2017yje; @Kobayashi:2017jcf; @Armengaud:2017nkf; @Gonzales-Morales:2016mkl]).
Under a parity transform, some ultralight DM particles (such as axions) transform as pseudoscalars, while others (e.g. dilatons and moduli) transform as scalars. The parameter space for new ultralight scalars has been constrained by stellar cooling bounds [@Hardy:2016kme; @Graham:2015ouw] and by torsion balance experiments [@adelberger2009torsion; @Wagner:2012ui]. Through couplings to the SM, scalar fields would modulate the fine-structure constant $\alpha$ and lepton masses (e.g. the electron mass $m_{e}$). [@Damour:2002mi; @Damour:2010rp]. If this scalar field is the dark matter, this modulation would occur at the DM Compton frequency, $\omega_{\text{dm}}=m_{\text{dm}}c^{2}/\hbar$, an effect detectable using atomic clocks, atom interferometry, laser interferometry, and other methods [@Arvanitaki:2014faa; @Stadnik:2014tta; @Stadnik:2015kia; @Stadnik:2015uka; @Stadnik:2016zkf; @Arvanitaki:2015iga; @Arvanitaki:2017nhi].
Modulation of $\alpha$ and $m_{\rm e}$ also produces a mechanical signal—an oscillating atomic strain—through modulation of the Bohr radius, $a_0=\hbar/\alpha c m_{\rm e}$ [@Arvanitaki:2015iga]. This strain can give rise to measurable displacement in a body composed of many atoms, and be resonantly enhanced in an elastic body with acoustic modes at $\omega_\text{dm}$. Recently it has been suggested to search for this *acoustic* DM signature using resonant-mass antennae [@Arvanitaki:2015iga]. Data from the AURIGA gravitational wave (GW) detector has already put bounds on scalar DM coupling [@Branca:2016rez]. In Ref. [@Arvanitaki:2015iga], new resonant DM detectors were proposed, including a frequency-tunable Cu-Si sphere coupled to a Fabry-Pérot cavity, and more compact quartz bulk acoustic wave (BAW) resonators [@2013NatSR...3E2132G]. A technique for broadband detection of low mass scalar DM was explored in Ref. [@Geraci:2018fax].
Here we propose extending the compact-resonator approach to a broader class of existing gram to kilogram-scale devices composed of superfluid He or single crystals. These devices (along with BAW resonators discussed earlier [@Arvanitaki:2015iga]) have been studied in the field of cavity optomechanics [@DeLorenzo2017; @Rowan2000; @Neuhaus2017cooling], and provide access to a broad frequency (mass) range from $100~{\rm Hz}\lesssim \omega_{\text{dm}}/2\pi\lesssim 100~{\rm MHz}$ ($10^{-12}~{\rm eV}\lesssim m_{\text{dm}}\lesssim 10^{-6}~{\rm eV}$). The key virtue of this approach is that, owing to their small dimensions and crystalline material, these devices can be operated at dilution refrigerator temperatures with quality factors as high as $10^{10}$ [@2013NatSR...3E2132G], thereby substantially reducing thermal noise. We present analytic expressions for thermal-noise-limited DM sensitivity for an arbitrary acoustic mode shape, and find that the minimum detectable scalar coupling can be orders of magnitude below current bounds.
![image](figure1v3.pdf){width="2.0\columnwidth"}
*Scalar DM field properties*–DM particles in the Milky Way have a Maxwellian velocity distribution about the virial velocity $v_{\text{vir}}\approx10^{-3} c$ [@Derevianko:2016vpm]. Given the local DM density ($\rho_{\text{dm}}\approx 0.3$ GeV/cm^3^ [@Lewin:1995rx]), ultralight DM particles behave as a classical field. We consider DM as a field with coherence time $\tau_{\text{c}}=\left(\frac{{v_{\text{vir}}}^2}{c^2} \omega_{\text{dm}}\right)^{-1}$ and coherence length $\lambda_\text{c}$ equal to the de Broglie wavelength $\lambda_\text{dm}$ [@Derevianko:2016vpm]. DM mass $m_{\text{dm}}\lesssim 10^{-6}~{\rm eV}$ corresponds to $\lambda_{\text{dm}}\gtrsim 1$ km, implying that the field is spatially uniform over laboratory scales.
Coupling of dark matter to $\alpha$ and $m_e$ leads to an oscillating strain given by [@Arvanitaki:2015iga] $$h(t)=-\frac{\delta \alpha\left(t\right)}{\alpha_0}-\frac{\delta m_e\left(t\right)}{m_{e,0}}=-h_0 \cos{\left(\omega_\text{dm}t\right)},$$ where $$h_0=d_\text{dm} \sqrt{\frac{8\pi G \rho_\text{dm}}{{\omega_\text{dm}}^2 c^2}}.$$ Here $d_{\text{dm}}=d_{m_e}+d_e$ is a dimensionless constant describing the strength of the DM coupling to the electron mass ($d_{m_e}$) and fine-structure constant ($d_e$) [@Damour:2010rp; @Arvanitaki:2014faa; @Arvanitaki:2015iga].
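For concreteness, the amplitude $h_0$ can be evaluated numerically. A sketch under the assumption (made here for dimensional consistency) that $\rho_\text{dm}$ enters as an energy density in SI units:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
GeV = 1.602e-10      # J
rho_dm_local = 0.3 * GeV / 1e-6   # 0.3 GeV/cm^3 as an energy density, J/m^3

def strain_amplitude(d_dm, nu_dm, rho_dm=rho_dm_local):
    """h0 = d_dm * sqrt(8 pi G rho_dm / (omega_dm^2 c^2)), with rho_dm
    an energy density (the reading that makes h0 dimensionless) and
    nu_dm the Compton frequency in Hz."""
    omega = 2.0 * math.pi * nu_dm
    return d_dm * math.sqrt(8.0 * math.pi * G * rho_dm / (omega**2 * c**2))
```

The $1/\omega_\text{dm}$ scaling of $h_0$ is explicit here: doubling the Compton frequency halves the strain amplitude at fixed coupling.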
*Resonant mass detection*.–A scalar DM field modulates the size of atoms (by $h$, fractionally) at the Compton frequency $\omega_{\text{dm}}$. This effect introduces an isotropic stress in a solid body (or indeed any form of condensed-phase matter). This stress is effectively spatially uniform over length scales much smaller than $\lambda_\text{c}$ [@Derevianko:2016vpm]. Such a periodic stress may excite acoustic vibrations in the body. Note that not every acoustic mode couples to DM; a point that we wish to emphasize is that a uniform stress couples only to *breathing modes*.
Mechanical resonators that operate in non-breathing modes are not sensitive to the scalar DM strain. Examples of modes that would not be excited are those of a rigidly clamped solid bar. In this case, a spatially uniform stress will not displace any of the atoms in the bar from their equilibrium positions, because the net force on each atom vanishes. Without rigid clamping to impose an equal and opposite force on the edges of the bar, the bar is free to expand and contract. We have found that by introducing at least one free acoustic boundary, a spatially uniform stress can couple to acoustic modes. It is for this reason that we specify that only breathing modes couple to scalar DM.
To quantify the effect of DM on an elastic body (the detector), we have adapted the analysis for continuous gravitational waves in Ref. [@Hirakawa1973]. We begin with the displacement field $u_i=\sum_n{\xi_n(t)u_{ni}(\boldsymbol{x})}$, where $u_{ni}$ is the normalized spatial distribution and $\xi_n$ is the time-dependent amplitude of the $n$^th^ acoustic mode; subscript $i$ denotes the spatial component {$x$,$y$,$z$}. This allows us to model the detector as a harmonic oscillator with effective mass $\mu_n=\int\rho\sum_i\left|u_{ni}\right|^2dV$. It is driven by thermal forces, $f_{\text{th}}(t)$, and a DM-induced force, $f_{\text{dm}}(t)=\ddot{h}(t)q_n$, where $q_n=\int\rho\sum_iu_{ni}x_idV$ is a parameter that determines the strength of the coupling between a scalar strain and the $n^\text{th}$ mode of the detector. By introducing dissipation in the form of velocity damping, the modes of the resonator obey damped harmonic motion $$\label{EoM}
\ddot{\xi}_n+\frac{\omega_n}{Q_n}\dot{\xi}_n+\omega_n^2\xi_n
=\frac{f_{\text{dm}}}{\mu_n}+\frac{f_{\text{th}}}{\mu_n},$$ where $\omega_n$ and $Q_n$ are, respectively, the resonance frequency and quality factor of the $n$^th^ mode.
Thus, the strategies developed for resonant detection of gravitational waves, originally proposed by Weber [@Weber1960], can also be applied to detecting DM [@Arvanitaki:2015iga]. Note that not all GW detectors double as scalar DM detectors. Broadband interferometric detectors, such as LIGO, are only sensitive to gradients in the DM strain field [@Arvanitaki:2014faa]. A spatially uniform isotropic strain would produce equal phase shifts in each arm of an interferometer. Moreover, scalar DM strains atoms, not free space—in this sense it is not equivalent to a scalar GW.
*DM Parameter Space*.–The parameter space for scalar couplings $d_{m_e}$ and $d_{e}$ is shown in Figs. \[fig:plot\] and \[fig:plotde\], respectively. Each plot includes sensitivity estimates for four candidate detectors (discussed below and in the caption). Overlaid are experimental constraints set by EP tests (the Eöt-Wash experiment) and gravitational wave searches (AURIGA), as well as the benchmark “natural $d_{\rm dm}$" line. Below we briefly review these constraints.
The Eöt-Wash experiment, a long-standing test of the weak equivalence principle using a torsion balance, has set the strongest existing constraints on $d_{m_e}$ and $d_e$. The orange exclusion region in Fig. \[fig:plot\](a) comes from the comparison of the differential accelerations of beryllium and titanium masses to $10^{-13}$ precision [@Wagner:2012ui].
AURIGA is a resonant-mass gravitational wave detector based on a $3$-m-long, $2200$ kg Al-alloy (Al5056) bar cooled to liquid-He temperatures [@Branca:2016rez]. The detector has collected $\sim \! 10$ years of data, one month of which has been analyzed to search for scalar DM [@Branca:2016rez]. Extrapolating to its full (10 year) run time, the DM sensitivity of AURIGA is $\left(d_{\text{dm}}\right)_{\text{min}}\approx 10^{-5}$ for $850~\text{Hz}\leq \nu_{\text{dm}} \leq 950~\text{Hz}$. This bandwidth is set by the frequency range over which the thermal motion of the Al bar can be resolved.
The naturalness criterion requires that quantum corrections to $m_{\text{dm}}$ be smaller than $m_{\text{dm}}$ itself [@Dimopoulos:1996kp]. Consistent with other work [@Dimopoulos:1996kp; @Arvanitaki:2016fyj; @Arvanitaki:2015iga], this cutoff is chosen as roughly the energy scale up to which the SM is believed to be valid. The blue region in Fig. \[fig:plot\] indicates where the naturalness criterion is satisfied for a cutoff of 10 TeV.
*Thermal noise and minimum detectable coupling*.–Mechanical strain sensors, like AURIGA, are fundamentally limited by thermal noise. We consider mm to cm-scale mechanical resonators operating at Hz to MHz frequencies, for which thermal motion is the dominant noise source but deep cryogenics and quantum-limited displacement readout are available. The expression for thermally-limited strain sensitivity was first applied to resonant-mass DM detection in Ref. [@Arvanitaki:2015iga]. Here, we summarize the derivation of strain sensitivity, arriving at general expressions for arbitrary resonator geometries.
Thermal noise is well-described by a white-noise force spectrum, $S_{ff}^{\text{th}}=\frac{4k_{\text{B}}T\mu_n\omega_n}{Q_n}$, which drives the mechanical resonator into Brownian motion [@Saulson:1990jc]. Following Eq. (\[EoM\]), this limits the sensitivity of a strain measurement to $$\label{strainsensitivity}
\sqrt{S_{hh}^{\rm th}}=\sqrt{\frac{4k_{\text{B}}T\mu_n}{Q_n{q_n}^2{\omega_n}^3}}.$$
Accounting for the DM field’s finite coherence time, the minimum detectable strain for $2\sigma$ detection of the signal over measurement duration $\tau_{\text{int}}\gg\tau_\text{c}$ is $$\label{hmin}
h_{\text{min}}\approx\sqrt{\frac{16{v_{\text{vir}}} k_{\text{B}}T\mu_n}{Q_n{q_n}^2{\omega_n}^{5/2}c}}{\tau_{\text{int}}}^{-\frac{1}{4}}.$$ The minimum detectable DM coupling is $$\label{dmin}
\left(d_{\text{dm}}\right)_{\text{min}}\approx \sqrt{\frac{2{v_{\text{vir}}} c}{\pi G \rho_{\text{dm}}}}\sqrt{\frac{k_{\text{B}}T\mu_n}{Q_n{q_n}^2\sqrt{\omega_n \tau_{\text{int}}}}},$$ which can also be expressed in terms of the minimum detectable strain as $$\label{dminhmin}
\left(d_{\text{dm}}\right)_{\text{min}}\approx\sqrt{\frac{c^2}{8\pi G \rho_{\text{dm}}}}\omega_n h_{\text{min}}.$$
Equations (\[strainsensitivity\])-(\[dminhmin\]) are analytical expressions, general to any mechanical detector of arbitrary elastic material and geometry. Equation (\[dmin\]) is used to generate the results for each detector in Fig. \[fig:plot\](a) for $\tau_{\text{int}}=1$ year.
Typical $h_{\text{min}}$ values derived for the devices in this work are $\sim10^{-24}-10^{-23}$. From Eq. (\[dminhmin\]) it is evident that higher frequency detectors require a lower $h_{\text{min}}$ in order to maintain the same minimum detectable coupling. This scaling arises from the inverse relationship between the DM field amplitude $h_0$ and Compton frequency $\omega_\text{dm}$.
Another challenge to high-frequency detection is that the DM signal’s coherence time $\tau_\text{c}$ is inversely proportional to the Compton frequency. Rearranging Eq. (\[hmin\]) gives (for $\tau_\text{int}\gg\tau_\text{c}$) $h_{\text{min}}=2\sqrt{S_{hh}^{\rm th}}\left(\tau_{\text{int}}\tau_{\text{c}}\right)^{-1/4}$. Thus, a shorter coherence time increases $(d_\text{dm})_\text{min}$.
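As a worked example of these relations (using the helium-bar figures quoted in the device section, and again treating $\rho_\text{dm}$ as an energy density), a one-year integration at $\sim120$ Hz gives $h_\text{min}\sim10^{-23}$ and $(d_\text{dm})_\text{min}\sim10^{-5}$:

```python
import math

G = 6.674e-11                       # m^3 kg^-1 s^-2
c = 2.998e8                         # m/s
rho_dm = 0.3 * 1.602e-10 / 1e-6     # local DM energy density, J/m^3
v_vir = 1e-3 * c                    # virial velocity

def h_min(sqrt_S_hh, nu, tau_int):
    """h_min = 2 * sqrt(S_hh) * (tau_int * tau_c)^(-1/4), valid for
    tau_int >> tau_c, with tau_c = (v_vir^2/c^2 * omega)^(-1)."""
    omega = 2.0 * math.pi * nu
    tau_c = 1.0 / ((v_vir / c) ** 2 * omega)
    return 2.0 * sqrt_S_hh * (tau_int * tau_c) ** -0.25

def d_min(h, nu):
    """(d_dm)_min ~ sqrt(c^2 / (8 pi G rho_dm)) * omega * h_min."""
    return math.sqrt(c**2 / (8.0 * math.pi * G * rho_dm)) * 2.0 * math.pi * nu * h

year = 3.156e7  # s
h_bar = h_min(2.5e-21, 120.0, year)  # helium-bar fundamental-mode sensitivity
```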
The detector geometry also introduces unfavorable frequency scaling, as higher frequency resonators are generally smaller, implying a reduced coupling factor $q_n$. Geometric considerations reduce $q_n$ for higher $n$ modes.
For the reasons explained above, $\left(d_{\text{dm}}\right)_{\text{min}}$ tends to scale as $\sim\omega_{\text{dm}}^{7/4}$ for simple, longitudinal modes. Thus, designing mechanical resonators to beat limits set by EP tests is difficult in the $\omega_\text{dm}\sim\,\text{GHz}$ range.
![Coupling strength $d_{\text{dm}}$ vs DM frequency $\nu_{\text{dm}}$ and mass $m_{\text{dm}}$ in $d_e$ parameter space. Point types and colors are as in Fig. 1. Higher sensitivities are needed to probe new parameter space for $d_e$ coupling than for $d_{m_e}$. []{data-label="fig:plotde"}](figure2v4.pdf){width="1.0\columnwidth"}
*Device parameters and results*.–We now consider several possible scalar dark matter detectors based on acoustic breathing mode resonators. Figure \[fig:plot\] highlights four resonators with gram to kilogram effective masses and Hz-MHz frequencies. Each detector behaves like a miniature Weber bar antenna [@Branca:2016rez]. To facilitate comparison, we assume a 10 mK operating temperature and mechanical Q-factors of $10^9$, unless otherwise constrained by experiment. Specific parameters are stated in the caption of Fig. \[fig:plot\]. Note that while the mode shapes in Fig. \[fig:plot\](b-e) are rendered numerically in COMSOL$^\circledR$ [@ComsolMultiphysics], the results plotted in Fig. \[fig:plot\](a) and Fig. \[fig:plotde\] are analytical.
For DM frequencies $100 \, \text{Hz}\lesssim \nu_{\text{dm}} \lesssim 25 \, \text{kHz}$, we consider the superfluid helium bar resonator probed optomechanically, as discussed in Ref. [@DeLorenzo2017] (Fig. \[fig:plot\](b)). To permit breathing modes, the helium container is designed to be only partially filled. The niobium shell supporting the container is assumed to be infinitely rigid due to its much greater bulk modulus. The resonant medium is the $2.7$ kg volume of superfluid. Assuming $T=10$ mK and $Q=10^9$ (limited by doping and clamping loss) [@DeLorenzo2017], $\left(d_{\text{dm}}\right)_{\text{min}}$ for the first 100 longitudinal modes is plotted in light blue in Fig. \[fig:plot\](a). For the fundamental mode ($\nu_1\approx120$ Hz), the strain sensitivity is $\sqrt{S_{hh}^{\rm th}}=2.5\cdot 10^{-21}$ Hz^-1/2^.
For DM frequencies $50\,\text{kHz}\lesssim \nu_{\text{dm}} \lesssim 2.5 \, \text{MHz}$, we consider a $0.3$ kg HEM$^\circledR$ sapphire cylinder intended for use as an end-mirror in future cryogenic GW detectors [@Rowan2000]. We note that an existing class of similar, promising devices are not considered in this work [@Locke1998; @Locke2000; @nand2013resonator; @Hirose:2014xga; @Bourhill2015]. We assume $T=10$ K as an experimental constraint due to the low thermal conductance of the test mass suspensions [@Khalaidovski:2014fqa]. A quality factor of $Q=10^9$ is assumed based on historical measurements of Braginsky *et al.* [@Bagdasarov1975; @Braginsky1985systems], though we note a more contemporary benchmark is $Q = 2.5\times 10^8$ at $T = 4 $ K [@Uchiyama:1999ne]. Green points in Fig. \[fig:plot\](a) are estimates of $\left(d_{\text{dm}}\right)_{\text{min}}$ for $25$ longitudinal modes with dimensions as shown in Fig. \[fig:plot\](c). For the fundamental mode ($\nu_1\approx54$ kHz) the strain sensitivity is $\sqrt{S_{hh}^{\rm th}}=2.4\cdot 10^{-22}$ Hz^-1/2^.
For DM frequencies $550\,\text{kHz}\lesssim \nu_{\text{dm}} \lesssim 27 \, \text{MHz}$, we consider a modification of the quartz micropillar resonator developed by Neuhaus *et al.* [@Neuhaus2017; @Neuhaus2017cooling] (see also Ref. [@Kuhn2011micropillar]) for cryogenic optomechanics experiments. The micropillar is assumed to be scaled up in size (Fig. \[fig:plot\](d)) and reconstructed of sapphire, whose higher density and sound velocity produce a larger strain coupling, allowing it to begin ruling out parameter space in the MHz regime with only $\sim 0.3$ grams of mass. Estimates of $\left(d_{\text{dm}}\right)_{\text{min}}$ for the first 25 odd-ordered longitudinal modes, with $Q=10^9$ and $T=10$ mK, are shown in blue in Fig. \[fig:plot\](a). For the fundamental mode ($\nu_1=550$ kHz), the strain sensitivity is $\sqrt{S_{hh}^{\rm th}}=7.7\cdot 10^{-23}$ Hz^-1/2^.
Finally, for DM frequencies $10\,\text{MHz}\lesssim \nu_{\text{dm}} \lesssim 350 \, \text{MHz}$, we consider two gram-scale quartz BAW resonators [@2013NatSR...3E2132G], initially proposed to search for scalar DM in Ref. [@Arvanitaki:2015iga]. Lavender points in Fig. \[fig:plot\](a) are for several longitudinal modes assuming an average quality factor of $10^{10}$ for Device 1 and $10^9$ for Device 2, with $Q$ adjusted for a few specific modes corresponding to measurements in Ref. [@2013NatSR...3E2132G]. Due to the unfavorable frequency scaling described above, these BAWs are predicted to surpass $d_{m_e}$ EP test constraints for only a few lower order modes, when operating at $T=10$ mK. The strain sensitivity for the mode at $\nu\approx10$ MHz is $\sqrt{S_{hh}^{\rm th}}\approx5\cdot 10^{-23}$ Hz^-1/2^.
Excluded from the figures are high frequency devices such as phononic crystals [@chan2012optimized; @maccabe2019phononic] and GHz BAWs [@renninger2018bulk]. We found them unable to compete with EP test constraints. In principle one could extend our work to lower frequency mechanical resonators. In this case sensitivity would ultimately be limited by strain noise due to Newtonian gravity gradients and seismic fluctuations [@Adhikari:2013kya].
*Detector readout requirements and bandwidth*.–We have considered the thermal limit to resonant-mass DM detection for various compact resonators. To reach this limit, the imprecision of the readout system $S_{hh}^\text{imp}$ must be smaller than thermal noise $S_{hh}^\text{th}$, yielding a fractional detection bandwidth of $\Delta \omega/\omega\approx Q^{-1}\sqrt{S_{hh}^\text{th}/S_{hh}^\text{imp}}$.
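As a quick numerical check of this relation, the short sketch below evaluates the fractional detection bandwidth; the specific $Q$ and noise values are illustrative assumptions, not numbers quoted in the text.

```python
import math

def fractional_bandwidth(Q, S_th, S_imp):
    """Fractional detection bandwidth over which thermal noise exceeds
    readout imprecision: delta_omega/omega ~ Q^-1 * sqrt(S_th / S_imp)."""
    return math.sqrt(S_th / S_imp) / Q

# Illustrative: Q = 1e8 and a readout PSD 100x below thermal noise
# give a fractional bandwidth of 1e-7.
print(fractional_bandwidth(1e8, 1e-44, 1e-46))
```

Note that a quieter readout (smaller $S_{hh}^\text{imp}$) widens the usable band around resonance, motivating the low-noise optical and microwave readout schemes discussed next.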
The resonators discussed permit high-sensitivity optomechanical readout. Sapphire cylinders and pillars can be mirror-coated (e.g. using crystalline coatings [@cole2013tenfold]) and coupled to a Fabry-Pérot cavity. For the devices in Fig. \[fig:plot\], thermal displacement of the end-face is on the order of $10^{-14}\,\text{m}/\sqrt{\text{Hz}}$ (cylinder) and $10^{-16}\,\text{m}/\sqrt{\text{Hz}}$ (pillar) near the fundamental resonance, implying a fractional bandwidth of $10^{-5}$ ($10^{-7}$) for a shot-noise-limited displacement sensitivity of $10^{-18}\,\text{m}/\sqrt{\text{Hz}}$ (achievable with mW of optical power for a cavity finesse of $1000$).
Superfluid-He and quartz BAW resonators have been probed non-invasively with low-noise microwave circuits. The piezoelectricity of quartz permits contact-free capacitive coupling of a BAW to a superconducting quantum interference device (SQUID) amplifier; this has enabled fractional bandwidths of $10^{-6}$ for a 10 mK, 10 MHz device with $Q\sim 10^8$ [@goryachev2014observation]. Helium bars have likewise been capacitively coupled to superconducting microwave cavities. For the bar considered in Fig. \[fig:plot\], a detailed roadmap to thermal-noise-limited readout is described in Ref. [@Singh:2016xwa].
Frequency tuning can also increase the effective detector bandwidth. The sound speeds of quartz and sapphire are both thermally tunable; however, ultra-cryogenic operation practically limits the utility of this approach. Superfluid He permits broadband mechanical tuning by pressurization (which has been used to change the sound speed of He by 50% [@Abraham:1969zz]). Another possible route is through dynamical coupling to the microwave or optical resonator used for readout. Though weak, such “optical spring" effects (well studied in cavity optomechanics [@Aspelmeyer:2013lha]) are noninvasive and might be used to trim the detector at the level of the fractional DM signal bandwidth, $\Delta \omega_\text{dm}/\omega_\text{dm}=(\omega_\text{dm}\tau_\text{c})^{-1}\sim 10^{-6}$.
Tradeoffs between bandwidth, sensitivity and tunability ultimately determine the search strategy for a given detector. For instance, while three of the detectors discussed above (based on helium bar, sapphire cylinder and sapphire micropillar resonators) can surpass the sensitivity of the Eöt-Wash experiment in under a minute, their bandwidth will likely be smaller than that of the DM signal $\Delta \omega_\text{dm}$. To widen the search space, a natural strategy (analogous to haloscope searches for axion DM) would be to scan the detector in steps of $\Delta\omega_\text{dm}$, each time integrating for a duration long enough to resolve thermal noise $\tau_\text{int} \gtrsim 4Q/\omega_\text{dm}\times S_{hh}^\text{imp}/S_{hh}^\text{th}$. The slow scaling of sensitivity with $\tau_\text{int}$ (Eq. \[hmin\]) allows this strategy to significantly enhance the effective detector bandwidth. The total run time of the experiment can be reduced (or bandwidth increased) by using more detectors, which is facilitated by the compactness of the devices proposed.
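The scan arithmetic described above can be sketched as follows; every number in this example (center frequency, $Q$, readout-to-thermal noise ratio, span) is an illustrative assumption rather than a value from the text.

```python
import math

def scan_plan(nu_dm, Q, imp_over_th, span_frac=1e-3):
    """Haloscope-style scan sketch.
    nu_dm: center frequency [Hz]; Q: mechanical quality factor;
    imp_over_th: readout-to-thermal noise ratio S_imp/S_th;
    span_frac: fractional frequency span to cover.
    Returns (number of bins, integration time per bin [s], total time [s])."""
    omega = 2.0 * math.pi * nu_dm
    dnu_dm = nu_dm * 1e-6                   # DM linewidth, delta_nu/nu ~ 1e-6
    n_bins = span_frac * nu_dm / dnu_dm     # steps of one DM linewidth
    tau_int = 4.0 * Q / omega * imp_over_th # time to resolve thermal noise
    return n_bins, tau_int, n_bins * tau_int

bins, t_bin, t_total = scan_plan(nu_dm=1e6, Q=1e8, imp_over_th=1e-2)
```

With these toy inputs, a 0.1% fractional span splits into $10^3$ bins of one DM linewidth each, and the per-bin dwell time is set by the $4Q/\omega_\text{dm}\times S^\text{imp}_{hh}/S^\text{th}_{hh}$ criterion quoted above.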
*Conclusion and outlook*.–Existing or near-term compact mechanical resonators with high quality-factor acoustic modes operating at cryogenic temperatures have the potential to beat constraints on DM-SM coupling strength set by tests for EP violations in the 100 Hz to 100 MHz range. Frequency tuning techniques, along with arrays of these compact resonators, can be used to enhance bandwidth and sensitivity, thereby enabling table-top experiments to cover a vast, unexplored region in the DM-SM coupling parameter space.
We thank Keith Schwab, David Moore, Andrew Geraci, Michael Tobar, and Eric Adelberger for helpful conversations. We thank Ken Van Tilburg, Asimina Arvanitaki, and Savas Dimopoulos for extensive feedback on the manuscript, as well as stimulating conversations. This work is supported by the National Science Foundation grant PHY-1912480, and the Provost’s Office at Haverford College.
Scalar DM coupling {#DMproperties}
==================
Here we review how scalar DM would interact with Standard Model fields through terms in which gauge-invariant operators of a SM field are coupled to operators containing DM fields [@Arvanitaki:2014faa; @Derevianko:2016vpm], following the notation of Ref. [@Derevianko:2016vpm].
We begin by considering only linear couplings, denoted by Lagrangian density $\mathcal{L}_{\rm lin}=\sqrt{\hbar c}\phi({\bf x},t) \sum_x \gamma_x \mathcal{O}_{\rm SM}$, where $\gamma_x$ is the coupling coefficient and $\mathcal{O}_{\rm SM}$ are terms from the SM Lagrangian density. For simplicity, we consider only coupling to the electron (denoted by fermionic field $\psi_e$) and electromagnetic field strength (denoted by Faraday tensor $F_{\mu\nu}$). Thus $$-\mathcal{L}_{\rm lin}=\sqrt{\hbar c}\phi({\bf x},t) \left[-\frac{\gamma_e}{4}F_{\mu\nu} F^{\mu\nu} + \gamma_{m_e}\bar{\psi}_e\psi_e\right].$$ Combining it with the SM Lagrangian, this coupling can be absorbed into variations of fundamental constants [@Damour:2010rp] $$\begin{aligned}
m_e ({\bf x},t)&=&m_{e,0}\left[1+\sqrt{\hbar c}\gamma_{m_e}\phi({\bf x},t)\right],\\
\alpha ({\bf x},t)&=&\alpha_{0}\left[1+\sqrt{\hbar c}\gamma_{e}\phi({\bf x},t)\right].\end{aligned}$$
One can introduce dimensionless couplings $d_{m_e}$ and $d_e$ and consider the fractional change of constants $$\begin{aligned}
\label{eq:DMoscillations}
\frac{\delta m_e ({\bf x},t)}{m_{e,0}}&=&d_{m_e}\sqrt{4\pi \hbar c} E_{\rm Pl}^{-1}\phi({\bf x},t),\\\label{eq:DMoscillations2}
\frac{\delta \alpha ({\bf x},t)}{\alpha_{0}}&=&d_{e}\sqrt{4\pi \hbar c} E_{\rm Pl}^{-1}\phi({\bf x},t),\end{aligned}$$ where $E_{\rm Pl}$ is the Planck energy ($E_{\rm Pl}=\sqrt{\hbar c^5/G}$) [@Geraci:2018fax].
The couplings $d_{m_e}, d_e$ are dimensionless dilaton-coupling coefficients [@Damour:2010rp; @Arvanitaki:2014faa; @Arvanitaki:2016fyj], with a natural parameter range defined by the inequality [@Arvanitaki:2015iga] $$m_{\rm dm}^{2}\geq \frac{1}{2\left(4\pi\right)^{4}}d_{m_{e}}y_{e}^{2}\frac{\Lambda^{4}}{M_{\rm pl}^{2}}
+\frac{1}{32\pi^{2}}d_{e}^{2}\frac{\Lambda^{4}}{M_{\rm pl}^{2}},
\label{eq:nat}$$ where $M_{\rm pl}$ is the reduced Planck mass and $y_{e}=2.94\times 10^{-6}$ is the electron Yukawa coupling. Eq. (\[eq:nat\]) imposes the requirement that quantum corrections to the scalar mass be well-controlled, assuming a $\Lambda=10~{\rm TeV}$ cutoff.
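As a hedged numerical illustration of Eq. (\[eq:nat\]), the sketch below evaluates the minimum natural DM mass for a pure $d_e$ coupling (setting $d_{m_e}=0$). The reduced-Planck-mass value is an assumed standard input, and the working is in natural units with all masses in GeV.

```python
import math

# Assumed constants (not quoted in the text):
M_PL = 2.435e18     # reduced Planck mass [GeV]
LAMBDA = 1e4        # cutoff Lambda = 10 TeV [GeV]

def m_dm_min_natural(d_e):
    """Minimum DM mass from Eq. (nat) with d_me = 0:
    m_dm >= |d_e| * Lambda^2 / (sqrt(32) * pi * M_pl)."""
    return abs(d_e) * LAMBDA**2 / (math.sqrt(32.0) * math.pi * M_PL)

# For d_e = 1 this gives a mass of order 1e-12 GeV ~ 1e-3 eV,
# i.e. well above the ueV-scale masses probed at audio frequencies,
# which is why sub-natural couplings are the relevant target there.
m_one = m_dm_min_natural(1.0)
```

Smaller couplings relax the bound linearly, so detectors probing lighter (lower-frequency) DM are necessarily probing $d_e$ below the naturalness line.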
Minimum detectable strain and integration time {#timescaling}
==============================================
Over a finite measurement time $\tau$, the power spectral density $S_{hh}\left(\omega\right)$ of a coherent signal $h(t)=h_0e^{-i\omega_n t}$ has an apparent magnitude $$\label{coherentsignalint}
S_{hh}^\tau(\omega_n)=\frac{1}{\tau}\left<\left|H^{\tau}(\omega_n)\right|^2\right>={h_0}^2\tau.$$
If $h$ is partially coherent with coherence time $\tau_{\text{c}}$, then (\[coherentsignalint\]) is only a valid approximation for $\tau<\tau_{\text{c}}$. For measurement times $\tau\gg\tau_{\text{c}}$, a better approximation can be obtained by breaking the measurement into $N$ segments of duration $\tau_{\text{c}}$ and adding up the contributions in quadrature [@Budker:2013hfa]. For a stationary process, this yields $$S_{hh}^{\tau\gg\tau_\text{c}}\approx\sqrt{\sum\limits^{N} \left(S_{hh}^{\tau_\text{c}}\right)^2}=\sqrt{\frac{\tau}{\tau_{\text{c}}}\left(S_{hh}^{\tau_\text{c}}\right)^2}= h_0^2\sqrt{\tau\tau_\text{c}},$$ from which a signal strength $$h_0=\sqrt{S_{hh}^\tau}\left(\tau\tau_{\text{c}}\right)^{-1/4}$$ can be inferred.
We define the minimum detectable strain $h_{\text{min}}$ as the minimum signal amplitude $h_0$ needed to produce $\text{SNR}=1$. For $2\sigma$ detection limited by thermal noise $S_{hh}^{\text{th}}$, $$h_{\text{min}}\approx2\sqrt{S_{hh}^{\text{th}}}\left(\tau\tau_{\text{c}}\right)^{-1/4}.$$
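This scaling is easy to check directly; because the sensitivity improves only as $\tau^{1/4}$ in the incoherent regime, a factor of 16 more integration time buys only a factor of 2 in strain. The noise value below is illustrative, not taken from the text.

```python
def h_min(S_th, tau, tau_c):
    """Minimum detectable strain amplitude (SNR = 1, 2-sigma,
    thermal-noise limited): h_min = 2 sqrt(S_th) (tau * tau_c)^(-1/4),
    valid for measurement times tau >> coherence time tau_c."""
    return 2.0 * S_th ** 0.5 * (tau * tau_c) ** -0.25

# Illustrative thermal noise PSD of 1e-44 /Hz, tau_c = 1 s:
a = h_min(1e-44, 1.0, 1.0)    # 2e-22
b = h_min(1e-44, 16.0, 1.0)   # 1e-22 -- 16x the time, 2x the sensitivity
```

The same quartic scaling drives the scan-strategy tradeoff discussed in the main text: long dwells per bin give diminishing returns, favoring many bins or many detectors.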
Effect of readout noise
=======================
The preceding analysis assumes that noise in the readout (of amplitude coordinate $\xi$) contributes negligibly to the apparent strain. In practice broadband readout noise $S_{\xi\xi}^\text{imp}(\omega)\approx S_{\xi\xi}^\text{imp}(\omega_n)$ contributes an apparent strain $$S_{hh}^\text{imp}(\omega) = |\chi(\omega)|^{-2}S_{\xi\xi}^\text{imp}(\omega_n)$$ where $$|\chi(\omega)|^{2} = \frac{\omega^4 q_n^2/\mu_n^2}{(\omega^2-\omega_n^2)^2+\omega_n^2\omega^2/Q_n^2}$$ is the mechanical susceptibility.
The effect of readout noise on a measurement of finite duration $\tau$ is obtained by integrating the readout signal $S_{\xi\xi}(\omega_n)$ over a bandwidth $\Delta \omega = 2\pi/\tau$. For times $\tau\gg\tau_c$, the contribution of thermal and readout noise is
$$\begin{aligned}
S_{\xi\xi}^\tau(\omega_n) &= \int^{\omega_n+\tfrac{\Delta \omega}{2}}_{\omega_n-\tfrac{\Delta \omega}{2}}(S_{\xi\xi}^\text{imp}(\omega)+|\chi|^2 S_{hh}^\text{th}(\omega))\frac{d\omega}{\Delta\omega}\\
&\approx S_{\xi\xi}^\text{imp}(\omega_n)+S_{\xi\xi}^\text{th}(\omega_n)\frac{\tan^{-1}\left(\tau_n/\tau\right)}{\tau_n/\tau}\label{eq:SNRvstime}
\end{aligned}$$
where $\tau_n\equiv2\pi Q_n/\omega_n$ is the mechanical coherence time.
According to Eq. \[eq:SNRvstime\], the relative fraction of readout noise is minimized for integration times long compared to the mechanical coherence time $\tau\gg\tau_n$. For integration times $\tau\ll\tau_n$, relevant for frequency scanning, the fraction is $S_{\xi\xi}^\text{imp}/S_{\xi\xi}^\text{th}\times 2\tau_n/(\pi\tau)$. We use this formula in the main text to define the time necessary to resolve thermal noise as $2\tau_n/\pi\times S_{\xi\xi}^\text{imp}/S_{\xi\xi}^\text{th} = 4Q/\omega_n\times S_{hh}^\text{imp}/S_{hh}^\text{th}$.
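The $\tan^{-1}$ factor in Eq. \[eq:SNRvstime\] interpolates between the two limits discussed above, as a short numerical check confirms (the time ratios here are arbitrary illustrative values).

```python
import math

def thermal_fraction(tau, tau_n):
    """Fraction of the full thermal PSD resolved after integrating for
    time tau (the arctan factor in Eq. SNRvstime): atan(x)/x, x = tau_n/tau."""
    x = tau_n / tau
    return math.atan(x) / x

# tau >> tau_n: the factor -> 1, all thermal noise is resolved.
long_run = thermal_fraction(1e6, 1.0)
# tau << tau_n: the factor -> (pi/2) * tau/tau_n, thermal noise is
# suppressed, so readout noise dominates by S_imp/S_th * 2 tau_n/(pi tau).
short_run = thermal_fraction(1.0, 1e6)
```

The short-time limit reproduces the $4Q/\omega_n\times S_{hh}^\text{imp}/S_{hh}^\text{th}$ resolution time used in the main text.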
As a specific example, a superfluid helium resonator with the dimensions discussed in main text, probed with a signal-to-noise ratio of $\sqrt{S_{hh}^\text{th}/S_{hh}^\text{imp}}=10$ for an integration time of $\tau_\text{int}\approx15$ hours, could in two years search a fractional frequency span of $\Delta\omega/\omega_\text{dm}\approx 0.1\%$ ($\sim 10^3$ distinct bins) with a sensitivity of $(d_\text{dm})_\text{min}\sim10^{-5}$, exceeding the current bound set by EP tests by more than 20 dB.
Equation of Motion {#EoMderivation}
==================
Dark matter modulates the size of atoms by $h\equiv h(t)$. In a linearly elastic medium, this effect is analogous to modulating the equilibrium position of each atom relative to the center of the medium (or an edge, if that edge is clamped in place). In an isotropic medium, the effect can be modeled as a perturbation, $-x_ih$, to the displacement field, $u_i\equiv u_i({\boldsymbol{x}},t)$. The treatment follows that of Ref. [@Hirakawa1973] for continuous gravitational waves.
The $i^\text{th}$ component of the perturbed displacement field is simply $$w_i\equiv w_i({\boldsymbol{x}},t)=u_i({\boldsymbol{x}},t)-x_ih(t).$$
It should here be noted that this model only strictly applies for elastic media with at least one free acoustic boundary. A bar, for example, that is rigidly clamped at one end needs to have zero displacement $w_i=0$ at the rigid boundary. The model still applies to this case, but only if the rigid boundary is positioned at the origin $x_i=0$.
Navier’s equations of motion [@ContinuumMechanics] for the perturbed displacement field become $$\label{navier}
\rho \ddot{u}_i-\mu\sum_j\frac{\partial^2 u_i}{\partial {x_j}^2} - (\lambda + \mu) \sum_j\frac{\partial^2u_j}{\partial x_i \partial x_j}=\rho\ddot{h} x_i,$$ where $\rho$ is the mass density of the detecting medium and $\mu$ and $\lambda$ are Lamé parameters.
The displacement field due to acoustic oscillations can be expanded in terms of its eigenmodes: $u_i({\boldsymbol{x}},t)=\sum_n \xi_n \! (t) \, u_{ni}({\boldsymbol{x}})$, where $\xi_n\equiv \xi_n \! (t)$ gives the amplitude and phase of the oscillation while $u_{ni}\equiv u_{ni}({\boldsymbol{x}})$ is the normalized spatial distribution. The normalization is such that $(u_{ni})_{\text{max}}=1$. Without loss of generality, we can restrict our analysis to just one of the eigenmodes $$u_i=\xi_nu_{ni}.$$
With this substitution into (\[navier\]), we recover the equation of motion for a driven harmonic oscillator $$\mu_n\left(\ddot{\xi}_n+\omega_{n}^{2}\xi_n\right)=\ddot{h}q_n,$$
where $\mu_n=\int\mathrm{d}V\rho\sum_i\left|u_{ni}\right|^2$ is the effective mass of the $n$^th^ mode and $q_n=\int\mathrm{d}V\rho\sum_iu_{ni}x_i$ characterizes coupling between scalar DM strain and the $n$^th^ mode. Not every mode will couple. We have found that only breathing modes couple to an isotropic, spatially uniform strain.
Finally, we include velocity-proportional damping $\frac{\omega_n}{Q_n}$, and random thermal noise, $f_{\text{th}}$, and the equation of motion for the $n$^th^ eigenmode of the medium is $$\ddot{\xi}_n+\frac{\omega_n}{Q_n}\dot{\xi}_n+\omega_n^2\xi_n
=\frac{q_n}{\mu_n}\ddot{h}+\frac{f_{\text{th}}}{\mu_n}.$$
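A minimal numerical sketch of this equation of motion confirms the expected steady-state response: for a resonant drive $h(t)=h_0\cos\omega_n t$, the amplitude rings up to $(q_n/\mu_n)\,h_0 Q_n$. The parameters below are toy values chosen only so the integration runs quickly; this is not a simulation of any device in the text.

```python
import math

def ring_up(omega_n=1.0, Q=10.0, q_over_mu=1.0, h0=1.0,
            dt=1e-3, t_end=200.0):
    """Integrate xi'' + (omega_n/Q) xi' + omega_n^2 xi = (q/mu) h''(t)
    for a resonant drive h(t) = h0 cos(omega_n t) with a semi-implicit
    Euler step. Returns the late-time oscillation amplitude, which
    should approach the steady-state value (q/mu) * h0 * Q."""
    xi, v, t, amp = 0.0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        hdd = -omega_n**2 * h0 * math.cos(omega_n * t)       # h''(t)
        a = q_over_mu * hdd - (omega_n / Q) * v - omega_n**2 * xi
        v += a * dt
        xi += v * dt
        t += dt
        if t > t_end - 20.0:           # record amplitude after ring-up
            amp = max(amp, abs(xi))
    return amp
```

The mechanical $Q$ thus acts as a resonant amplifier of the DM strain, which is why the thermal-noise-limited sensitivity formulas above favor high-$Q$ modes.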
Acoustic Analysis of Devices {#ModeDetails}
============================
Here we consider the geometries of the proposed detectors, showing the analytical values of the effective mass $\mu_n$ and acoustic coupling factor $q_n$.
The sapphire test mass and pillar (Fig. \[fig:plot\](c-d)) are simple bars with free acoustic boundaries. Consider such a bar with length $L$ and cross-sectional area $A$. Its ends are located at $z=0$ and $z=L$. The longitudinal displacement modes are [@kinsler1999fundamentals] $$\label{longmodes}
u_{nx}=u_{ny}=0; \,\,\,\,\,\,\, u_{nz}=\cos{\!\left[\frac{n\pi z}{L}\right]}.$$ Thus, for a bar with arbitrary cross-sectional geometry, the reduced mass is $$\mu_n=\rho A \int_0^L\mathrm{d}z \cos^2{\!\left[\frac{n\pi z}{L}\right]}=\frac{M}{2},$$ where $M$ is the total mass, and the acoustic coupling factor is $$\label{couplingfactorbar}
q_n=\rho A \int_0^L\mathrm{d}z \cos{\!\left[\frac{n\pi z}{L}\right]} z=\rho A L^2 \frac{\cos{\!\left(n\pi\right)}-1}{n^2 \pi^2}.$$ Equation (\[couplingfactorbar\]) illustrates that only the odd-ordered longitudinal modes couple to dark matter. Even-ordered modes are not breathing modes. In terms of the speed of sound in the material $v_s$, the resonance frequencies are $\nu_n=\frac{n v_s}{2L}$.
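The selection rule is easy to verify numerically: evaluating Eq. (\[couplingfactorbar\]) against direct quadrature of the defining integral (in units where $\rho=A=L=1$) shows that even-ordered modes decouple exactly.

```python
import math

def q_n_bar(n, rho=1.0, A=1.0, L=1.0):
    """Closed-form coupling factor for the n-th longitudinal mode of a
    free-free bar: q_n = rho*A*L^2 * (cos(n*pi) - 1) / (n*pi)^2."""
    return rho * A * L**2 * (math.cos(n * math.pi) - 1.0) / (n * math.pi) ** 2

def q_n_numeric(n, rho=1.0, A=1.0, L=1.0, steps=20000):
    """Midpoint-rule quadrature of q_n = rho*A * integral of cos(n*pi*z/L)*z dz."""
    dz = L / steps
    return rho * A * sum(
        math.cos(n * math.pi * (i + 0.5) * dz / L) * (i + 0.5) * dz
        for i in range(steps)
    ) * dz

# q_1 = -2 rho A L^2 / pi^2, q_2 = 0, q_3 = -2 rho A L^2 / (9 pi^2), ...
```

Only the odd modes are breathing modes with a net mass-weighted displacement, consistent with the statement above.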
The geometry of the proposed superfluid helium cylinder in Fig.\[fig:plot\](b) differs only in that it has a rigid acoustic boundary at $z=0$. For this geometry, the longitudinal displacement modes are $$u_{nx}=u_{ny}=0; \,\,\,\,\,\,\, u_{nz}=\sin{\!\left[\frac{\left(2n-1\right)\pi z}{2L}\right]}.$$ The effective mass is still $$\mu_n=\frac{M}{2},$$ and the acoustic coupling factor is now $$q_n=-4\rho A L^2 \frac{\cos{\!\left(n\pi\right)}}{\left(2n-1\right)^2 \pi^2}.$$ Modes of both even and odd $n$ couple to DM strain, and the frequency is $\nu_n=\frac{\left(2n-1\right) v_s}{4L}$.
To approximate the displacement field for the quartz BAW resonators, we assume the crystal to be only weakly anisotropic and consider only the dominant component $u_{nz}$ of the quasi-longitudinal modes. The displacement modes are given by $$\label{bawdisplacement}
u_{nz}\approx\sin{\!\left[\frac{n\pi z}{L}\right]}\exp{\left[\frac{-\alpha n \pi }{2}\left(x^2+y^2\right)\right]},$$ with frequency $$\nu_n\approx\sqrt{\frac{n^2 \hat{c}_z}{4 L^2 \rho}},$$ where $\alpha\approx\sqrt{\frac{2}{5RL^3}}$ and $\hat{c}_z$ is the effective elastic constant [@Goryachev:2014yra]. From Eq. (\[bawdisplacement\]), we calculate $\mu_n$ and $q_n$ for odd-ordered modes, finding that $$\mu_n\approx\frac{\rho L}{2\alpha n}$$ and $$\left|q_n\right|\approx\frac{4\rho L^2}{n^3 \pi^2 \alpha}.$$
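The closed-form BAW expressions above can be evaluated directly; the sketch below packages them into one function. The material and geometry inputs in the test are rough assumed values for a small quartz plate, not parameters taken from the text, and the checks only exercise the mode-number scalings $\nu_n\propto n$, $\mu_n\propto 1/n$, $q_n\propto 1/n^3$.

```python
import math

def baw_mode(n, L, R, rho, c_z):
    """Quasi-longitudinal BAW mode parameters from the approximate
    formulas above, for odd mode order n.
    L: plate thickness [m]; R: plano-convex radius of curvature [m];
    rho: density [kg/m^3]; c_z: effective elastic constant [Pa].
    Returns (frequency nu_n [Hz], effective mass density mu_n,
    coupling factor q_n) with alpha = sqrt(2/(5 R L^3))."""
    alpha = math.sqrt(2.0 / (5.0 * R * L**3))
    nu = math.sqrt(n**2 * c_z / (4.0 * L**2 * rho))
    mu = rho * L / (2.0 * alpha * n)
    q = 4.0 * rho * L**2 / (n**3 * math.pi**2 * alpha)
    return nu, mu, q
```

The steep $1/n^3$ falloff of $q_n$ is the unfavorable frequency scaling mentioned in the main text, which limits BAWs to a few low-order modes.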
---
abstract: |
The ground state geometries of some small clusters have been obtained via ab initio molecular dynamics simulations by employing density-based energy functionals. The approximate kinetic energy functionals that have been employed are the standard Thomas-Fermi $(T_{TF})$ along with the Weizsacker correction $T_W$ and a combination $F(N_e)T_{TF} + T_W$. It is shown that the functional involving $F(N_e)$ gives charge densities and bondlengths superior to the standard functional. Apart from dimers and trimers of Na, Mg, Al, Li, Si, equilibrium geometries for $Li_nAl, n=1,8$ and $Al_{13}$ clusters have also been reported. For all the clusters investigated, the method yields the ground state geometries with the correct symmetries and bondlengths within 5% when compared with the corresponding results obtained via the full orbital-based Kohn-Sham method. The method is fast and promising for studying the ground state geometries of large clusters.
PACS Numbers: 71.10, 31.20G, 02.70N, 36.40
address: 'Department of Physics, University of Poona, Pune 411 007, India'
author:
- 'Dinesh Nehete, Vaishali Shah and D. G. Kanhere [@email]'
title: '[*Ab initio*]{} molecular dynamics using density based energy functionals: application to ground state geometries of some small clusters.'
---
Introduction
============
During the last few years the technique of first principles molecular dynamics (MD), initiated by Car and Parrinello (CP) [@car; @rem], has emerged as a powerful tool for investigations of structural, electronic and thermodynamic properties of large scale systems. The standard implementation of this method which is based on density functional theory is via Kohn-Sham (KS) orbitals. Such orbital based algorithms scale as $N_a^3$, $N_a$ being the number of atoms in the system. Quite clearly, such methods turn out to be computationally expensive for system sizes over about 100 atoms [@footnote]. Recently, approaches based on total energy functionals, which depend on charge density only or orbital free density functionals have been proposed [@pear; @shah; @govind]. These methods are based on approximate representation of kinetic energy (KE) functionals and offer an attractive alternative for investigating large scale systems. Since the method is orbital free i.e there are no wavefunctions to handle, there is no computationally expensive orthogonality constraint and the methods scale linearly with system size. In addition, these methods are shown to yield stable dynamics even with large timesteps, a highly desirable feature for molecular dynamics simulations.
It is clear that the utility of these methods is critically dependent on their ability to investigate the systems of interest with acceptable accuracy, at least for a class of physical properties. Madden and coworkers have investigated structural and thermodynamic properties of some simple metals with considerable success. For example, the dynamic structure factor of liquid Sodium and the static structure factor, vacancy formation energy, free energies of point defects as well as phonon dispersion curves of Sodium [@pear; @smar] are well described by this method. The method has also been applied to the ground state configurations of c-Si and the H/Si (1 0 0) surface [@govind] and to geometries of some silicon clusters [@niranjan], and good agreement has been found with experiments as well as with other calculations. However, the majority of the calculations reported so far have been performed on extended systems.
In the present work, we focus our attention on studying the ground state and energetically low lying structures of clusters, a field of current interest. Obviously, due to the approximate nature of the KE functionals, the bondlengths and binding energies will not be obtained with the same level of accuracy as the KS orbital-based methods. However, it is of considerable interest to examine whether such a method is capable of yielding the correct shapes (i.e. the right symmetries) of clusters by employing Car-Parrinello simulated annealing methods. If desired, the KS method can then be used to search the local minimum around structures obtained by the Orbital Free Method (OFM) in ‘quenching’ mode. This can be a computationally tractable way to avoid the long and costly simulated annealing runs of the orbital-based KS molecular dynamics.
Towards this end we have carried out a number of calculations on a variety of representative small clusters of simple metals. Specifically, we have investigated dimers and trimers of Na, Mg, Al, Li, Si, small clusters of $Na_n$ $(n=6,8)$, $Li_nAl$ $(n=1,8)$, and $Al_{13}$. These systems are representative of the small metal atom clusters of current interest, and more accurate KS-based results have been reported for them. Hence, it is possible to make an assessment of the present method by comparing the bondlengths and geometries with the reported ones.
The question of the appropriate choice of kinetic energy functionals has been addressed by Smargiassi and Madden [@smarg]. They have investigated a family of kinetic energy functionals which incorporate exact linear response properties. All such KE functionals are based on the Thomas-Fermi (TF) term and the Weizsacker correction. Since our interest is in finite size systems, we have chosen to use simple KE functionals. These functionals have been previously used in the study of atoms and molecules. However, it must be mentioned that significant progress has been made towards improving the KE functionals, notably by DePristo and Kress, and Wang and Teter [@depristo; @wang].
In the next section we briefly discuss the method, the KE functionals used and give the relevant numerical details. This is then followed by the results and discussion.
Formalism and Computational details
===================================
Total Energy Calculation
-------------------------
The total energy of a system of $N_e$ interacting electrons and $N_a$ atoms, according to the Hohenberg-Kohn theorem [@hoh; @ksh], can be uniquely expressed as a functional of the electron density $\rho({\bf r})$ under an external potential due to the nuclear charges at coordinates ${\bf R}_n$, $$E\Bigl[\rho,\{{\bf R}_n\}\Bigr] = T[\rho]
+ E_{xc}[\rho]
+ E_c[\rho]
+ E_{ext}\Bigl[\rho,\{{\bf R}_n\}\Bigr]
+ E_{ii}\Bigl(\{{\bf R}_n\}\Bigr),$$ where $E_{xc}$ is the exchange-correlation energy, $E_c$ is the electron-electron Coulomb interaction energy. The electron-ion interaction energy $E_{ext}$ is given by $$E_{ext}\Bigl[\rho,\{{\bf R}_n\}\Bigr] = \int{V({\bf r})
\rho({\bf r}) d^3r}$$ where $V({\bf r})$ is the external potential, usually taken to be a convenient pseudopotential [@bach]. The last term in Eq. (1), $E_{ii}$, denotes the ion-ion interaction energy. The first term in Eq. (1), the KE functional, is usually approximated as $$T[\rho] = T_{TF}[\rho] + T_W[\rho]$$ where $T_{TF}[\rho]$ is the Thomas-Fermi term, exact in the limit of homogeneous density, and has the form $$T_{TF}[\rho] = \frac{3}{10}(3\pi^2)^{\frac{2}{3}}
\int{\rho({\bf r})^{5/3} d^3r}$$ and $T_W[\rho]$ is the gradient correction due to Weizsacker, given as $$T_W[\rho] = \frac{\lambda}{8} \int{\frac{\nabla\rho({\bf r})
\cdot\nabla\rho({\bf r}) d^3r}{\rho({\bf r})}}$$ which is believed to be the correct asymptotic behavior of $T[\rho]$ for rapidly varying densities. Instead of $\lambda = 1$, the original Weizsacker value, $\lambda = \frac{1}{9}$ and $\lambda = \frac{1}{5}$ [@parr] are also commonly used. It has been argued that for rapidly varying densities, which is the case for finite size clusters, a more appropriate kinetic energy functional would be the following combination of these two terms [@acharya] $$T[\rho] = F(N_e) T_{TF}[\rho] + T_W[\rho]$$ where the factor $F(N_e)$ [@gaz] is $$F(N_e) = \biggl(1-\frac{2}{N_e}\biggr)
\biggl(1-\frac{A_1}{N_e^{\frac{1}{3}}}
+\frac{A_2}{N_e^{\frac{2}{3}}}\biggr)$$ with optimized parameter values $A_1 = 1.314$ and $A_2 = 0.0021$ [@ghosh]. This functional, which includes the full contribution of the Weizsacker correction, describes the response properties of the electron gas well. It has been used for investigating atoms and molecules with reasonable success.
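A small numerical sketch of the $F(N_e)$ prefactor and of the Thomas-Fermi term evaluated for a uniform density may make the behavior concrete; the density and volume in the example are illustrative values in atomic units, not numbers from the calculations reported here.

```python
import math

def F(Ne, A1=1.314, A2=0.0021):
    """Empirical prefactor F(N_e) multiplying the Thomas-Fermi term,
    with the optimized parameters quoted in the text. Vanishes at
    N_e = 2 and approaches 1 from below for large electron number."""
    return (1.0 - 2.0 / Ne) * (1.0 - A1 / Ne ** (1.0 / 3.0)
                               + A2 / Ne ** (2.0 / 3.0))

def t_tf_uniform(rho, volume):
    """Thomas-Fermi kinetic energy of a uniform density rho over a box
    of the given volume (atomic units):
    (3/10) (3 pi^2)^(2/3) rho^(5/3) * V."""
    return 0.3 * (3.0 * math.pi**2) ** (2.0 / 3.0) * rho ** (5.0 / 3.0) * volume
```

Because $F(N_e)\to 1$ for large $N_e$, the functional smoothly recovers plain $T_{TF}+T_W$ in the large-electron-number limit, while suppressing the TF term for few-electron systems where it overestimates the kinetic energy.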
We briefly describe our procedure, details of which can be found in [@shah]. The total energy of the system (Eq. (1)) is minimized for fixed ionic positions using the conjugate gradient method [@press], which forms the starting point for the molecular dynamics. The trajectories of ions and the fictitious electron dynamics are then simulated using Lagrange’s equations of motion, which are solved by the Verlet algorithm [@car]. The stability of CP dynamics has been discussed in [@pear] in the context of density-based methods, and timesteps of the order of 50 a.u. have been successfully used. We have verified that by appropriate adjustment of the fictitious electron mass the CP dynamics remains very stable for over 10000 iterations with a timestep of 40 a.u. in the present calculations of clusters. Typically, for free dynamics the grand total energy, which is the sum of the kinetic energy of the ions, the kinetic energy of the electrons and the potential energy of the system, remains constant to within $10^{-5}$ a.u.
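The propagation step can be illustrated with a generic velocity-Verlet sketch. This is a schematic stand-in, not the actual CP code, which propagates both the ionic coordinates and the fictitious density degrees of freedom; the harmonic force used in the test is purely illustrative.

```python
def velocity_verlet(x, v, force, mass, dt, steps):
    """Minimal velocity-Verlet propagator of the kind used for the
    ionic (and fictitious electronic) degrees of freedom.
    `force` maps position -> force. Returns the (x, v) trajectory."""
    f = force(x)
    traj = []
    for _ in range(steps):
        v += 0.5 * dt * f / mass   # half-kick with old force
        x += dt * v                # drift
        f = force(x)               # new force at updated position
        v += 0.5 * dt * f / mass   # half-kick with new force
        traj.append((x, v))
    return traj
```

For a harmonic test force the total energy stays bounded over many thousands of steps, mirroring the conserved grand total energy reported for the CP runs, provided the timestep resolves the fastest oscillation in the system.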
For the calculations of the ground state structures for dimers and trimers of Na, Mg, Al, Li and Si, a periodically repeated unit cell of length 26 a.u. with a 54 $\times$ 54 $\times$ 54 mesh and a timestep $\Delta t \sim$ 10 to 20 a.u. was used. For the rest of the small clusters the calculations were done on a unit cell of length 30 a.u. with a 54 $\times$ 54 $\times$ 54 mesh. We have chosen to use the plane wave expansion on the entire fast Fourier transform mesh without any truncation, yielding an energy cutoff of 95 Rydberg. It must be mentioned that, because the calculations are orbital free, the number of fast Fourier transforms per iteration is constant irrespective of the number of electrons in the system. For clusters, the ground state configurations are obtained either by starting with different initial configurations and then quenching the structures, or by dynamical simulated annealing where the cluster is heated to $300-350^{\circ} K$ and then cooled very slowly. In all the cases the stability of the final ground state configurations has been tested by reheating the clusters, allowing them to span the configuration space for a few thousand iterations, and then cooling them to get the low energy configurations.
Results and Discussion
======================
In this section, we first discuss the results for the equilibrium bondlengths and binding energies of dimers and trimers of Na, Mg, Al, Li, Si along with their KS results. All the results presented here are obtained with energy convergence up to $10^{-13}$ for total energy minimization.
Table I shows the equilibrium bondlengths and binding energies for dimer and trimer systems using different kinetic energy functionals. These results have been compared with full nonlocal pseudopotential KS calculations. A few representative results using $\lambda = \frac{1}{5}$ have been given. It can be seen that for $\lambda = \frac{1}{5}$ the trend is similar to the $\lambda = \frac{1}{9}$ functional and there is no significant improvement in the results. Clearly, the results involving the $F(N_e)$ functional show significant improvement over $\lambda = \frac{1}{9}$ (with the exception of Mg) and are in reasonable agreement with the bondlengths obtained by the KS method. The error in the bondlengths is around 10%. It is known that such methods based on approximate KE functionals are not expected to give accurate binding energies. One notable feature of the binding energy comparison is the considerable improvement by $F(N_e)$ over $\lambda = \frac{1}{9}$ (excepting again the case of Mg). The results for the Na, Li, Si trimer binding energies are not given because these are Jahn-Teller distorted isosceles triangles and the present method yields equilateral triangle geometries. Clearly, such density-based methods are unable to reach the Jahn-Teller distorted geometries.
The quality of the charge densities obtained by this method can be gauged by comparing them with the KS charge density. In Fig. 1 we have plotted the self-consistent charge densities obtained using the functionals involving $F(N_e)$ (curve a) and $\lambda = \frac{1}{9}$ (curve b) with the KS charge density (curve c) for Al dimer along the axis joining the atoms. The ionic positions are marked by arrows on the plot. The KS charge density has been obtained using the identical pseudopotentials and the same cell size as in the case of OFM. Three prominent features can be observed.
1. Overall the $F(N_e)$ functional densities compare very well with the KS densities except at the origin where both the $F(N_e)$ and $\lambda = \frac{1}{9}$ self-consistent densities show overestimation.
2. At the atomic sites the $F(N_e)$ and KS based densities are very close and nonzero, whereas the $\lambda = \frac{1}{9}$ shows a disturbing feature of almost zero density.
3. At the peaks on either side of the origin, the KS and $F(N_e)$ charge densities again are close, but the charge density by $\lambda = \frac{1}{9}$ shows considerable overestimation.
In Fig. 2 we have plotted the superposed free atom charge density ($0^{th}$ iteration density) represented by the curve b and the self-consistent charge density for the functional involving $\lambda = \frac{1}{9}$ by the curve c and $F(N_e)$ by the curve a. The self-consistent charge density obtained using $\lambda = \frac{1}{9}$ shows improvement only at the origin. In contrast, the self-consistent charge density using $F(N_e)$ shows a significant overall improvement, both at the origin and at the peaks on either side of the peak at the origin. To get an idea of the nature of the forces obtained by the OFM and KS dynamics, we have given the results for the vibrational frequencies for Na, Mg and Li dimers in Table II. It is gratifying to note that the vibrational frequencies obtained by the OFM are in very good agreement with the KS ones.
To assess the utility and performance of this method, it has been applied to calculate the ground state geometries of a range of small clusters. We report here our calculations on heteronuclear clusters of $Li_nAl, n=1,8$ and a highly symmetric homonuclear cluster of $Al_{13}$ and clusters of $Na_n, n = 6,8$ using the $F(N_e)$ functional. The results are compared with the ones reported by KS method.
The geometries of the heteronuclear $Li_nAl$ clusters are shown in Fig. 3 and the bondlengths and symmetries in Table III, along with the KS results. Evidently, the present method not only reproduces the correct ground state geometry with bondlengths within 5% but also reproduces the two key features observed in the more accurate KS calculation [@cheng].
1. The $Li_nAl$ clusters for $n < 3$ are two-dimensional whereas from $n \geq 3$ the clusters become three-dimensional.
2. The Al atom gets trapped inside the Li atoms at $n = 6$.
It can also be noted that as the number of atoms in the cluster increases, the accuracy in the bondlengths appears to improve. However, for the case of $Li_3Al$ and $Li_8Al$ we obtain ideally symmetric ground state configurations rather than the slightly Jahn-Teller distorted geometries of the KS calculation.
We have also investigated the $Al_{13}$ cluster since it shows an interesting icosahedral geometry. The calculations were performed in two different ways. First, we started with a highly distorted icosahedron and applied dynamical quenching to get the equilibrium geometry. In the second one, we started by placing the Al atoms at the fcc lattice points, heated the cluster to $300^{\circ} K$, and let the system span the configuration space for a few thousand iterations. This was then followed by a slow cooling schedule. It is very gratifying to note that in both calculations the correct icosahedron is obtained with a bondlength of 4.88 a.u. as compared to the KS bondlength of 5.03 a.u. The error in the bondlengths is 3%. This strengthens our confidence in the ability of the method to reproduce the correct ground state geometries with acceptable bondlengths. In addition, we have also obtained the ground state geometries for Na$_6$, Na$_8$ and Na$_{20}$ and have verified that the geometries obtained are identical to those reported in [@urs], with the bondlengths differing by about 5%.
Conclusion
==========
In this work, we have presented results obtained using density based [*ab initio*]{} MD for a variety of small clusters and demonstrated that the method, using approximate KE functionals, is capable of yielding bondlengths within an accuracy of 5%. Our calculations indicate that the ground state geometries and symmetries of both homonuclear and heteronuclear clusters can be obtained with reasonable accuracy, and that timesteps of the order of 40 a.u. can be used successfully for stable dynamics. The $F(N_e)$ functional is shown to give considerable improvement over the standard $\lambda = \frac{1}{9}$ functional, both in terms of charge densities and bondlengths, and is thus recommended.
We believe the method to be a promising tool in the study of finite temperature and dynamical properties of clusters. So far, all the reported OFM calculations have been performed using local pseudopotentials only, and it would be interesting to implement nonlocal pseudopotentials and study the effect of nonlocality on the bonding and binding properties of such clusters. More work is required in this direction, and the implementation of nonlocality is under consideration. It may be possible to expand the applications of the OFM by incorporating nonlocal pseudopotentials and by employing more accurate KE functionals. It is hoped that problems of current interest in the field of clusters, such as fragmentation, dissociation, and interactions between clusters, which may involve a large number of atoms as well as more than one atomic species, will be amenable to the present technique.
Acknowledgements
================
Partial financial assistance from the Department of Science and Technology (DST), Government of India and the Centre for Development of Advanced Computing (C-DAC), Pune is gratefully acknowledged. Two of us (V. S. and D. N.) acknowledge financial assistance from C-DAC. One of us (D. G. K.) acknowledges P. Madden for a number of fruitful discussions on the OFM. We also acknowledge K. Hermansson and L. Ojamoe for the MOVIEMOL animation program.
Electronic Address: dinesh, vaishali, kanhere@unipune.ernet.in
R. Car, M. Parrinello, Phys. Rev. Lett., [**55**]{}, 685(1985)
D. K. Remler and P. A. Madden, Molecular Physics, [**70**]{}, 921(1990)
The algorithms which scale linearly with system size, based on density matrix formulation have been proposed see: W. Kohn, Chem. Phys. Lett. [**208**]{}, 167 (1993); M. S. Daw, Phys. Rev. B. [**47**]{}, 10895 (1993); X. P. Li, R. W. Nunes and D. Vanderbilt, Phys. Rev. B [**47**]{}, 10891 (1993)
M. Pearson, E. Smargiassi and P.A. Madden, J. Phys. Condens. Matter [**5**]{}, 3221 (1993)
V. Shah, D. Nehete and D. G. Kanhere, J. Phys: Condens. Matter. [**6**]{}, 10773 (1994)
N. Govind, J. Wang and H. Guo, Phys. Rev. B. [**50**]{}, 11175 (1994)
E. Smargiassi and P. A. Madden, Phys. Rev. B. [**51**]{}, 117 (1995); E. Smargiassi and P. A. Madden, Phys. Rev. B. [**51**]{}, 129 (1995)
N. Govind, J. L. Mozos and H. Guo, Phys. Rev. B. [**51**]{}, 7101 (1995)
E. Smargiassi and P. A. Madden, Phys. Rev. B. [**49**]{}, 5220 (1994)
A. E. DePristo and J. D. Kress, Phys. Rev. A. [**35**]{}, 438 (1987)
L. W. Wang and M. P. Teter, Phys. Rev. B. [**45**]{}, 13196 (1992)
P. Hohenberg and W. Kohn, Phys. Rev. [**136**]{}, B864(1964)
W. Kohn and L. J. Sham, Phys. Rev. [**140**]{}, A1133(1965)
G. B. Bachelet, D. R. Hamann and M. Schluter, Phys. Rev. B. [**26**]{}, 4199 (1982)
R. G. Parr and W. Yang, [*Density Functional Theory of Atoms and Molecules*]{} (O. U. P., Oxford, 1989)
P. K. Acharya, L. J. Bartolotti, S. B. Sears and R. G. Parr, Natl. Acad. Sci. U.S.A. [**77**]{}, 6978 (1980)
J. L. Gazquez, and J. Robles, J. Chem. Phys. [**76**]{}, 1467 (1982)
S. K. Ghosh, L. C. Basbas, J. Chem. Phys. [**83**]{}, 5778 (1985)
W. H. Press, B. P. Flannery, S. A. Teukolsky, W. T. Vetterling, [*Numerical Recipes*]{}, Cambridge University Press, Cambridge (1987); P. E. Gill, W. Murray, M. H. Wright, [*Practical Optimization*]{}, Academic Press, London (1988); B. N. Pshenichny and Y. M. Danilin, [*Numerical Methods in Extremal Problems*]{}, Mir publishers, Moscow (1978)
H. P. Cheng, R. N. Barnett and U. Landman, Phys. Rev. B. [**48**]{}, 1820 (1993)
U. Rothlisberger and W. Andreoni, J. Chem. Phys. [**94**]{}, 8129 (1991)
J. L. Martins, R. Car and J. Buttet, J. Chem. Phys. [**78**]{}, 5646 (1983)
V. Kumar and R. Car, Phys. Rev. B. [**44**]{}, 8243 (1991)
S. H. Yang, D. A. Drabold, J. B. Adams and A. Sachdev, Phys. Rev. B [**47**]{}, 1567 (1993)
B. P. Feuston, R. K. Kalia, and P. Vashishta, Phys. Rev. B. [**35**]{}, 6222 (1987)
D. Tomanek and M. A. Schluter, Phys. Rev. B. [**36**]{}, 1208 (1987)
TABLE I. Comparison of the equilibrium bondlengths (in a.u.) and binding energies (in eV/atom) obtained using the different kinetic energy functionals with the KS self-consistent method.
System   Bondlengths: $\lambda = \frac{1}{9}$ / $\lambda = \frac{1}{5}$ / $F(N_e)$ / KS   Binding energies: $\lambda = \frac{1}{9}$ / $\lambda = \frac{1}{5}$ / $F(N_e)$ / KS\
$Na_2$ 5.67 - 5.69 5.66$^a$ -0.116 - -0.867 -0.71$^a$\
$Na_3$ 5.81 5.99 5.75 6.00$^b$ -0.207 -0.281 -1.286 -\
$Mg_2$ 5.79 - 4.71 6.33$^c$ -0.195 - -1.432 -0.115$^c$\
$Mg_3$ 5.94 5.81 4.87 5.93$^c$ -0.355 -0.526 -2.096 -0.284$^c$\
$Al_2$ 5.74 - 4.14 4.66$^d$ -0.261 - -1.389 -1.06$^d$\
$Al_3$ 5.88 5.57 4.32 4.74$^d$ -0.483 -0.733 -2.074 -1.96$^d$\
$Li_2$ 5.87 - 5.51 5.15$^e$ -0.102 - -0.891 -\
$Li_3$ 6.03 6.11 5.58 5.3$^b$ -0.182 -0.256 -1.311 -\
$Si_2$ 5.35 - 3.74 4.29$^f$ -0.371 - -0.56 -0.6$^g$\
$Si_3$ 5.50 - 3.92 4.10$^f$ -0.651 - -0.938 -\
$^a$Reference[@urs] $^e$our own KS calculations
$^b$Reference[@martin] $^f$Reference[@feuston]
$^c$Reference[@kumar] $^g$Reference[@tomanek]
$^d$Reference[@yang]
TABLE II. The vibrational frequencies (in $cm^{-1}$) of the Na, Mg, and Li dimers obtained using the OFM and the KS self-consistent method.
Dimer   OFM   KS\
Na 167.4 168\
Mg 107.3 108.6\
Li 273.7 311\
TABLE III. The Li-Al bondlengths of $Li_nAl, n=1,8$ obtained using the OFM, compared with those obtained by the KS method [@cheng]. All bondlengths are in a.u.
System   OFM   KS   % error   Symmetry\
$LiAl $ 4.77 5.35 10.8 $C_{\infty v}$\
$Li_2Al$ 2 $\times$ 4.76 2 $\times$ 5.22 8.8 $C_{2v}$\
$Li_3Al$ 3 $\times$ 4.79 3 $\times$ 4.98 3.8 $C_{3v}$\
$Li_4Al$ 4 $\times$ 4.84 2 $\times$ 4.82 0.4 $C_{3v}$\
2 $\times$ 4.89 1\
$Li_5Al$ 4 $\times$ 4.84 4 $\times$ 4.74 2 $C_{4v}$\
4.95 5.13 3.3\
$Li_6Al$ 6 $\times$ 4.79 6 $\times$ 4.58 4.5 $O_h$\
$Li_7Al$ 4.97 4.70 5.7 $C_{1h}$\
2 $\times$ 4.92 2 $\times$ 4.85 1.4\
2 $\times$ 4.88 2 $\times$ 4.74 2.9\
2 $\times$ 4.89 2 $\times$ 4.81 1.6\
$Li_8Al$ 8 $\times$ 4.99 8 $\times$ 4.82 3.5 $D_{4d}$\
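As a quick arithmetic check of the % error column, a minimal sketch (bondlength pairs transcribed from three rows of the table above; the error is taken relative to the KS value):

```python
# (OFM, KS) Li-Al bondlengths in a.u., transcribed from Table III
pairs = {"LiAl": (4.77, 5.35), "Li2Al": (4.76, 5.22), "Li3Al": (4.79, 4.98)}

for name, (ofm, ks) in pairs.items():
    err = abs(ofm - ks) / ks * 100  # percent deviation from the KS bondlength
    print(f"{name}: {err:.1f}%")    # 10.8%, 8.8%, 3.8%, matching the table
```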
**Figure Captions**
1. The self-consistent charge densities of Al dimer. Curve a represents the $F(N_e)$ functional charge density, curve b represents the $\lambda = \frac{1}{9}$ charge density and curve c denotes the charge density obtained using the KS method.
2. Comparison of self-consistent charge densities by the $F(N_e)$ (curve a) and $\lambda = \frac{1}{9}$ (curve c) functional for Al dimer with the superposed ( $0^{th}$ iteration) free Al atom charge density (curve b).
3. The ground state geometries of the $Li_nAl$ clusters for $n = 1, 8$. The large spheres represent the Li atoms and the small sphere represents the Al atom.
---
author:
- Utpal Bora
- Santanu Das
- Pankaj Kukreja
- Saurabh Joshi
- Ramakrishna Upadrasta
- Sanjay Rajopadhye
bibliography:
- 'references.bib'
title: ': A Fast Static Data-Race Checker for OpenMP Programs'
---
---
abstract: 'For $R=Q/J$ with $Q$ a commutative graded algebra over a field and $J\ne0$, we relate the slopes of the minimal resolutions of $R$ over $Q$ and of $k=R/R_{+}$ over $R$. When $Q$ and $R$ are Koszul and $J_1=0$ we prove ${\operatorname{Tor}_{i}^{Q}(R,k){}}_j=0$ for $j>2i\ge0$, and also for $j=2i$ when $i>\dim Q-\dim R$ and ${\operatorname{pd}}_QR$ is finite.'
address:
- 'Department of Mathematics, University of Nebraska, Lincoln, NE 68588, U.S.A.'
- 'Dipartimento di Matematica, Universit«a di Genova, Via Dodecaneso 35, I-16146 Genova, Italy'
- 'Department of Mathematics, University of Nebraska, Lincoln, NE 68588, U.S.A.'
author:
- 'Luchezar L. Avramov'
- Aldo Conca
- 'Srikanth B. Iyengar'
date:
-
-
title: |
Free resolutions over commutative\
Koszul algebras
---
[^1]
Let $K$ be a field and $Q$ a commutative ${{\mathbb N}}$-graded $K$-algebra with $Q_0=K$. Each graded $Q$-module $M$ with $M_j=0$ for $j\ll0$ has a unique up to isomorphism minimal graded free resolution, $F^M$. The module $F^M_i$ has a basis element in degree $j$ if and only if ${\operatorname{Tor}_{i}^{Q}(k,M){}}_j\ne0$ holds, where $k=Q/Q_{{\!\scriptscriptstyle{+}}}$ for $Q_{{\!\scriptscriptstyle{+}}}=\bigoplus_{j{{{\scriptstyle}\geqslant}}1}Q_j$. Important structural information on $F^M$ is encoded in the sequence of numbers $${t_{i}^{Q}(M){}}=\sup\{j\in{{\mathbb Z}}\mid{\operatorname{Tor}_{i}^{Q}(k,M){}}_j\ne0\}\,.$$ It is distilled through the notion of *Castelnuovo-Mumford regularity*, defined by $${\operatorname{reg}}_QM=\sup_{i{{{\scriptstyle}\geqslant}}0}\{{t_{i}^{Q}(M){}}-i\}\,.$$ One has ${\operatorname{reg}}_Qk\ge0$, and equality means that $Q$ is *Koszul*; see, for instance, [@PP].
When the $K$-algebra $Q$ is finitely generated, every finitely generated graded $Q$-module $M$ has finite regularity if and only if $Q$ is a polynomial ring over some Koszul algebra, see [@AP]; by contrast, the *slope* of $M$ over $Q$, defined to be the real number $${\operatorname{slope}}_{Q}M=\sup_{i{{{\scriptstyle}\geqslant}}1}\left\{\frac{{t_{i}^{Q}(M){}}-{t_{0}^{Q}(M){}}}{i}\right\}\,,$$ is always finite; see Corollary \[cor:rate\]. Following Backelin [@Ba], we set ${\operatorname{Rate}}Q={\operatorname{slope}}_QQ_{{\!\scriptscriptstyle{+}}}$ and note that one has ${\operatorname{Rate}}Q\geq 1$, with equality if and only if $Q$ is Koszul.
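As a purely numerical illustration of this definition (the shift values below are hypothetical, not computed from any particular ring), the slope can be evaluated from a finite list of the numbers $t_i$:

```python
def slope(t):
    """slope = sup_{i>=1} (t_i - t_0)/i for a finite list t = [t_0, ..., t_n]."""
    return max((t[i] - t[0]) / i for i in range(1, len(t)))

# Hypothetical maximal shifts of a minimal graded free resolution:
# t_0 = 0 and t_i = 2i would arise if every syzygy module were generated
# in twice its homological degree.
print(slope([0, 2, 4, 6, 8]))  # 2.0
print(slope([1, 3, 4, 5]))     # (3-1)/1 = 2 dominates the later quotients
```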
\[thm:main\] If $Q$ is a finitely generated commutative Koszul $K$-algebra and $J$ a homogeneous ideal with $0\ne J\subseteq(Q_{{\!\scriptscriptstyle{+}}})^2$, then for $R=Q/J$ and $c={\operatorname{Rate}}R$ one has
1. $\max\{c,2\}\le{\operatorname{slope}}_QR\le c+1$, with $c<{\operatorname{slope}}_QR$ when ${\operatorname{pd}}_QR$ is finite.
2. ${t_{i}^{Q}(R){}}=(c+1)\cdot i$ for some $i\ge1$ implies the following conditions: ${t_{h}^{Q}(R){}}=(c+1)\cdot h$ for $1\le h\le i$ and $i\le{\operatorname{rank}}_k(J/Q_{{\!\scriptscriptstyle{+}}}J)_{c+1}$.
3. ${t_{i}^{Q}(R){}}<(c+1)\cdot i$ holds for all $i>\dim Q-\dim R$ when ${\operatorname{pd}}_QR$ is finite.
4. ${\operatorname{reg}}_QR\le c\cdot{\operatorname{pd}}_QR$; when $Q$ is a standard graded polynomial ring, equality holds if and only if $J$ is generated by a $Q$-regular sequence of forms of degree $c+1$.
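To illustrate the equality case in (4): when $J$ is generated by a $Q$-regular sequence $f_1,\dots,f_r$ of forms of degree $c+1$, the Koszul complex on $f_1,\dots,f_r$ is the minimal graded free resolution of $R$ over $Q$, with $i$th free module $Q(-i(c+1))^{\binom{r}{i}}$, so that

```latex
\[
  t_i^{Q}(R) = (c+1)\,i \quad\text{for}\quad 0\le i\le r = \operatorname{pd}_Q R\,,
  \qquad\text{and therefore}\qquad
  \operatorname{reg}_Q R
  = \max_{0\le i\le r}\{(c+1)i - i\}
  = c\cdot\operatorname{pd}_Q R\,.
\]
```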
The result is new even in the case of a polynomial ring $Q$, where a related statement was initially proved by using Gröbner bases; see \[rem:taylor\].
The theorem is proved in Section \[sec:koszul\]. Its assertions have very different underpinnings: the inequalities in (1) come from results in homological algebra, established in Section \[sec:rate\] with no finiteness hypotheses on $Q$. The remaining statements are deduced from results about small homomorphisms $Q\to R$, proved in Section \[sec:small\] by using delicate properties of commutative noetherian rings.
Much of the discussion in the body of the paper concerns the general problem of relating properties of the numbers ${\operatorname{slope}}_QM$, ${\operatorname{slope}}_QR$, and ${\operatorname{slope}}_RM$, when $Q\to R$ is a homomorphism of graded $K$-algebras and $M$ is a graded module defined over $R$.
The essence of our results is a comparison of two types of degrees, ones arising from homological considerations, the others induced by internal gradings of the objects under study. In constructions involving two or more gradings the index referring to an internal degree always appears last. When $y$ is a homogeneous element of a bigraded object, $|y|$ denotes the *homological degree* and $\deg(y)$ the *internal degree*.
The proofs presented below involve various homological constructions that are well documented in the case of commutative local rings and their local homomorphisms, but for which graded analogs may be difficult to find in the literature. When explicit information on the behavior of internal degrees is needed, we give the statements in the graded context with references to sources dealing with the local situation. We have verified—and invite readers to follow suit—that in these instances an internal degree can be factored in all the arguments involved.
Slopes of graded modules {#sec:rate}
========================
In this section ${{\varphi}}{\colon}Q\to R$ is a surjective homomorphism of graded $K$-algebras, and $M$ is a graded $R$-module with $M_j=0$ for all $j\ll0$; we set $J={\operatorname{Ker}}{{\varphi}}$.
We recall a classical change-of-rings spectral sequence of Cartan and Eilenberg.
\[ce\] By [@CE Ch.XVI, §5], there exists a spectral sequence of trigraded $k$-vector spaces $$\label{eq:cesequence}
{{}^{r}\!\operatorname{E}_{p,q,j}}\underset{p}{\implies}{\operatorname{Tor}_{p+q}^{Q}(k,M){}}_j \quad\text{for}\quad r\ge2\,,$$ with differentials acting according to the pattern $$\label{eq:cedifferential}
{{}^{r}\!\operatorname{d}_{p,q,j}}{\colon}{{}^{r}\!\operatorname{E}_{p,q,j}}\to{{}^{r}\!\operatorname{E}_{p-r,q+r-1,j}} \quad\text{for}\quad r\ge2\,,$$ with second page of the form $$\label{eq:ceE2}
{{}^{2}\!\operatorname{E}_{p,q,j}}\cong \bigoplus_{j_1+j_2=j}{\operatorname{Tor}_{p}^{R}(k,M){}}_{j_1}\otimes_{k}{\operatorname{Tor}_{q}^{Q}(k,R){}}_{j_2}\,,$$ and with edge homomorphisms $$\label{eq:ceedge}
{\operatorname{Tor}_{i}^{Q}(k,M){}}_{j}{\twoheadrightarrow}{{}^{\infty}\!\operatorname{E}_{i,0,j}}={{}^{i+1}\!\operatorname{E}_{i,0,j}}{\hookrightarrow}{{}^{2}\!\operatorname{E}_{i,0,j}}\cong{\operatorname{Tor}_{i}^{R}(k,M){}}_{j}$$ equal to the canonical homomorphisms of $k$-vector spaces $$\label{eq:cechange}
{\operatorname{Tor}_{i}^{{{\varphi}}}(k,M){}}_j{\colon}{\operatorname{Tor}_{i}^{Q}(k,M){}}_j\to{\operatorname{Tor}_{i}^{R}(k,M){}}_j\,.$$
For all $r$, $p$, and $q$ we set $\sup{{}^{r}\!\operatorname{E}_{p,q,*}}=\sup\{j\in{{\mathbb Z}}\mid {{}^{r}\!\operatorname{E}_{p,q,j}}\ne0\}$.
The proof of the next result is based on an analysis of the convergence of the preceding change-of-rings spectral sequence on the line $q=0$.
\[thm:ceub\] When $J\ne QJ_1$ holds there are inequalities $${\operatorname{slope}}_{R}M \leq \max\left\{{\operatorname{slope}}_{Q}M\,,
\sup_{i{{{\scriptstyle}\geqslant}}1}\left\{\frac{{t_{i}^{Q}(R){}}-1}{i}\right\}\right\}
\leq\max\{{\operatorname{slope}}_{Q}M,{\operatorname{slope}}_QR\}\,.$$
If ${t_{i}^{Q}(R){}}$ or ${t_{i}^{Q}(M){}}$ is infinite for some $i\ge0$, then so are both maxima above, hence there is nothing to prove. Thus, we may assume that ${t_{i}^{Q}(R){}}$ and ${t_{i}^{Q}(M){}}$ are finite for every $i\ge0$; in this case the second inequality is clear. Let $m$ denote the middle term in the inequalities above. Using the equality ${t_{0}^{Q}(M){}}={t_{0}^{R}(M){}}$, we get $$\begin{aligned}
\tag*{(\ref{thm:ceub}.1)${}_{i}$}
{t_{i}^{Q}(M){}}&\leq mi+{t_{0}^{R}(M){}}\,;
\\
\tag*{(\ref{thm:ceub}.2)${}_{i}$}
{t_{i}^{Q}(R){}}&\leq mi+1\,.
\end{aligned}$$
For $i\ge0$ and $r\geq 2$, from formulas and one gets exact sequences $$\label{eq:celimit}
0{\longrightarrow}{{}^{r+1}\!\operatorname{E}_{i,0,j}}{\longrightarrow}{{}^{r}\!\operatorname{E}_{i,0,j}}{\xrightarrow}{\ {{}^{r}\!\operatorname{d}_{i,0,j}} \ }{{}^{r}\!\operatorname{E}_{i-r,r-1,j}} \,.
\tag{\ref{thm:ceub}.3}$$ We set up a primary induction on $i$ and a secondary, descending one, on $r$ to prove $$\begin{aligned}
\tag*{(\ref{thm:ceub}.4)${}_{i,r}$}
\sup {{}^{r}\!\operatorname{E}_{i,0,*}} &\le mi + {t_{0}^{R}(M){}}
\quad\text{and}\quad i+1\ge r\ge2\,.
\end{aligned}$$ In view of , the validity of [(\[thm:ceub\].4)${}_{i,2}$]{} is the assertion of the proposition.
The basis of the primary induction, for $i=1$, comes from and (\[thm:ceub\].1)${}_{1}$.
Fix an integer $i\ge2$ and assume that (\[thm:ceub\].4)${}_{i',r}$ holds for $i'<i$. Formulas and (\[thm:ceub\].1)${}_{i}$ imply (\[thm:ceub\].4)${}_{i,i+1}$. Fix $r\in[2,i]$ and assume that (\[thm:ceub\].4)${}_{i,r'}$ holds for $i+1\ge r'>r$. The first relation in the following chain $$\begin{aligned}
\sup {{}^{r}\!\operatorname{E}_{i,0,*}}
&\leq \max\{\sup {{}^{r+1}\!\operatorname{E}_{i,0,*}} \,, \sup {{}^{r}\!\operatorname{E}_{i-r,r-1,*}} \}\\
&\leq \max\{mi+{t_{0}^{R}(M){}} \,, \sup {{}^{r}\!\operatorname{E}_{i-r,r-1,*}} \}\\
&\leq \max\{mi+{t_{0}^{R}(M){}} \,, \sup {{}^{2}\!\operatorname{E}_{i-r,r-1,*}} \}\\
&= \max\{mi+{t_{0}^{R}(M){}} \,, {t_{i-r}^{R}(M){}}+{t_{r-1}^{Q}(R){}} \}\\
&\leq \max\{mi+{t_{0}^{R}(M){}} \,,(m(i-r)+{t_{0}^{R}(M){}}) + (m(r-1)+1)\}\\
& = \max\{mi+{t_{0}^{R}(M){}} \,, mi+{t_{0}^{R}(M){}}-(m-1)\}\\
&\leq mi + {t_{0}^{R}(M){}}
\end{aligned}$$ comes from the exact sequence . The second one holds by (\[thm:ceub\].4)${}_{i,r+1}$, the third because ${{}^{r}\!\operatorname{E}_{i-r,r-1,*}}$ is a subfactor of ${{}^{2}\!\operatorname{E}_{i-r,r-1,*}}$, the fourth by , the fifth by (\[thm:ceub\].4)${}_{i-r,2}$ and (\[thm:ceub\].2)${}_{r-1}$, and the last one because $J\ne QJ_1$ implies $m\geq1$.
This completes the inductive proof of the inequality (\[thm:ceub\].4)${}_{i,r}$.
Variants of the proposition have been known for some time, at least when $M$ is finitely generated and $R$ is *standard graded*; that is, $R=K[R_1]$ with ${\operatorname{rank}}_KR_1$ finite. Thus, Aramova, Bărcănescu, and Herzog in [@ABH 1.3] established the corresponding result for a related invariant, ${\operatorname{rate}}_RM=\sup_{i{{{\scriptstyle}\geqslant}}1}\{{t_{i}^{R}(M){}}/i\}$. They used the same spectral sequence, extending an argument of Avramov for $M=k$, see [@Ba p. 97]; in the latter case, the corollary below was first proved by Anick in [@An 4.2].
\[cor:rate\] If $R$ is finitely generated over $K$, then for every finitely generated $R$-module $M$ one has ${\operatorname{slope}}_RM<\infty$.
One may choose $Q$ to be a polynomial ring in finitely many indeterminates over $K$. In this case ${\operatorname{Tor}_{i}^{Q}(k,R){}}_*$ and ${\operatorname{Tor}_{i}^{Q}(k,M){}}_*$ are finitely generated over $k$ for each $i\ge0$ and are zero for almost all $i$, so ${\operatorname{slope}}_QR$ and ${\operatorname{slope}}_QM$ are finite.
In the proof of the next result we again use the spectral sequence in \[ce\], this time analyzing its convergence on the line $p=0$. The hypothesis includes a condition on the maps ${\operatorname{Tor}_{i}^{{{\varphi}}}(k,M){}}_j$; see \[ch:small\] and Proposition \[prop:koszul\_small\] for situations where it is met.
\[thm:celb\] If $M\ne0$ and ${\operatorname{Tor}_{i}^{{{\varphi}}}(k,M){}}$ is injective for each $i$, then one has $$\begin{aligned}
{\operatorname{slope}}_{Q}R &\leq 1+ s
\quad\text{where}\quad
s=\sup_{i{{{\scriptstyle}\geqslant}}2}\left\{\frac{{t_{i}^{R}(M){}} - {t_{0}^{R}(M){}}-1}{i-1}\right\}\,.
\end{aligned}$$
The hypothesis implies ${t_{0}^{R}(M){}}>-\infty$. There is nothing to prove if ${t_{i}^{Q}(M){}}=\infty$ for some $i$, so we assume that ${t_{i}^{Q}(M){}}$ is finite for all $i\ge0$. By the definition of the number $s$, the following inequalities then hold: $$\label{eq:sup}
\tag*{(\ref{thm:celb}.1)${}_{i}$}
{t_{i}^{R}(M){}} \leq s(i-1)+1+{t_{0}^{R}(M){}}\quad \text{for all}\quad i\ge 2\,.$$
It follows from and that for $r\geq 2$ there exist exact sequences $$\label{eq:cecolimit}
{{}^{r}\!\operatorname{E}_{r,i-r+1,j}}{\xrightarrow}{\ {{}^{r}\!\operatorname{d}_{r,i-r+1,j}} \ }{{}^{r}\!\operatorname{E}_{0,i,j}}{\longrightarrow}{{}^{r+1}\!\operatorname{E}_{0,i,j}}{\longrightarrow}0
\tag*{(\ref{thm:celb}.2)}$$
By primary induction on $i$ and secondary, descending induction on $r$, we prove $$\begin{aligned}
\tag*{(\ref{thm:celb}.3)${}_{i,r}$}
\sup {{}^{r}\!\operatorname{E}_{0,i,*}}& \leq (s+1)i + {t_{0}^{R}(M){}}
\quad\text{for}\quad i+2\ge r\ge2\,.
\end{aligned}$$ In view of , the validity of [(\[thm:celb\].3)${}_{i,2}$]{} yields the assertion of the proposition.
The injectivity of ${\operatorname{Tor}_{}^{{{\varphi}}}(k,M){}}$ and imply ${{}^{\infty}\!\operatorname{E}_{p,q,*}}=0$ for $q\ge 1$ and all $p$. It follows from and that ${{}^{n+2}\!\operatorname{E}_{0,i,*}}$ is isomorphic to ${\operatorname{Tor}_{0}^{R}(k,M){}}_*$ for $i=0$ and to $0$ for $i\ge1$, so [(\[thm:celb\].3)${}_{i,i+2}$]{} holds for all $i\ge0$. This gives the basis of the primary induction for $i=0$ and that of the secondary induction for all $i\ge1$.
Fix an integer $i\ge1$ and assume that (\[thm:celb\].3)${}_{i',r'}$ holds for all pairs $(i',r')$ with $i'<i$ and $i+2\ge r'>r$. One then has a chain of relations $$\begin{aligned}
\sup {{}^{r}\!\operatorname{E}_{r,i-r+1,*}}
&\leq \sup {{}^{2}\!\operatorname{E}_{r,i-r+1,*}} \\
& = {t_{r}^{R}(M){}} + {t_{i-r+1}^{Q}(R){}}\\
&\leq {t_{r}^{R}(M){}} + (s+1)(i-r+1)\\
&\leq s(r-1)+1 + {t_{0}^{R}(M){}} + (s+1)(i-r+1) \\
&= (s+1)i + (2-r) + {t_{0}^{R}(M){}}\\
&\leq (s+1)i + {t_{0}^{R}(M){}}\,,\end{aligned}$$ where the first one holds because ${{}^{r}\!\operatorname{E}_{r,i-r+1,*}} $ is a subfactor of ${{}^{2}\!\operatorname{E}_{r,i-r+1,*}}$, the second by formula , the third by (\[thm:celb\].3)${}_{i-r+2,2}$ and , and the fourth by (\[thm:celb\].1)${}_{r}$. The exact sequence \[eq:cecolimit\], the preceding inequalities, and (\[thm:celb\].3)${}_{i,r+1}$ give $$\begin{aligned}
\sup {{}^{r}\!\operatorname{E}_{0,i,*}}
&\leq\max\{\sup{{}^{r+1}\!\operatorname{E}_{0,i,*}}\,,\sup{{}^{r}\!\operatorname{E}_{r,i-r+1,*}}\}
\\ &\leq (s+1)i + {t_{0}^{R}(M){}}\,.
\end{aligned}$$
Hereby, the inductive proof of the inequality (\[thm:celb\].3)${}_{i,r}$ is complete.
Regular elements {#sec:reg}
================
Not surprisingly, the bounds obtained in the preceding section can be sharpened in cases when the minimal free resolution of $R$ or of $M$ over $Q$ is particularly simple.
In this section we discuss a classical avatar of this phenomenon.
\[thm:reg\] If $R=Q/(f)$ for a non-zero divisor $f\in Q_{{\!\scriptscriptstyle{+}}}$, then one has: $$\begin{aligned}
{3}
\tag{1}
{\operatorname{slope}}_QM&\le\max\{{\operatorname{slope}}_RM,\deg(f)\}
&\quad&\text{with equality for } \quad &f&\notin (Q_{{\!\scriptscriptstyle{+}}})^2\,.
\\
\tag{2}
{\operatorname{slope}}_RM&\le\max\{{\operatorname{slope}}_QM,\deg(f)/2\}
&\quad&\text{with equality for} \quad &f&\in Q_{{\!\scriptscriptstyle{+}}}{\operatorname{Ann}}_QM\,.
\end{aligned}$$
We start by noting an elementary inequality that will be invoked a couple of times: All pairs of real numbers $(a_1,a_2)$ and $(b_1,b_2)$ with positive $b_1$ and $b_2$ satisfy $$\label{eq:short}
\frac {a_1+a_2}{b_1+b_2} \leq
\max\left\{\frac{a_1}{b_1}\,,\,\frac{a_2}{b_2}\right\}\,.$$
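For completeness, the verification is one line: set $m=\max\{a_1/b_1,\,a_2/b_2\}$; then $a_i\le m\,b_i$ for $i=1,2$, whence

```latex
\[
  \frac{a_1+a_2}{b_1+b_2}
  \le \frac{m\,b_1 + m\,b_2}{b_1+b_2}
  = m
  = \max\left\{\frac{a_1}{b_1}\,,\,\frac{a_2}{b_2}\right\}.
\]
```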
Set $d=\deg(f)$. The minimal graded free resolution of $R$ over $Q$ is $$0{\longrightarrow}Q(-d) {\xrightarrow}{\ f \ } Q{\longrightarrow}0$$ so ${\operatorname{Tor}_{q}^{Q}(R,k){}}$ vanishes for $q\ne0,1$, is isomorphic to $k$ for $q=0$, and to $k(-d)$ for $q=1$; hence for each pair $(i,j)$ the spectral sequence \[ce\] yields an exact sequence $$\label{eq:long}
\begin{gathered}
\xymatrixcolsep{1.3pc}
\xymatrixrowsep{.3pc}
\xymatrix {
&&{\operatorname{Tor}_{i+1}^{R}(k,M){}}_{j}\ar@{->}[rr]^-{\delta_{i+1,j}}
&&{\operatorname{Tor}_{i-1}^{R}(k,M){}}_{j-d}
\\
\ar@{->}[r]
&{\operatorname{Tor}_{i}^{Q}(k,M){}}_{j}\ar@{->}[r]
&{\operatorname{Tor}_{i}^{R}(k,M){}}_{j}\ar@{->}[rr]^-{\delta_{i,j}}
&&{\operatorname{Tor}_{i-2}^{R}(k,M){}}_{j-d}
}
\end{gathered}$$ The one for $i=0$ gives the following equality: $$\label{eq:zero}
{t_{0}^{Q}(M){}}={t_{0}^{R}(M){}}\,.$$
\(1) For $i\ge1$ the middle three terms of the exact sequences yield $$\label{eq:long1}
\begin{aligned}
{t_{i}^{Q}(M){}}
&\le\max\{{t_{i}^{R}(M){}},({t_{i-1}^{R}(M){}}+d)\}
\end{aligned}$$ From , , and we obtain the inequalities below: $$\begin{aligned}
{\operatorname{slope}}_QM
&=\sup_{i{{{\scriptstyle}\geqslant}}1}\left\{\frac{{t_{i}^{Q}(M){}}-{t_{0}^{Q}(M){}}}{i}\right\}
\\
&\le \sup_{i{{{\scriptstyle}\geqslant}}1}\left\{\max\left\{\frac{{t_{i}^{R}(M){}}-{t_{0}^{R}(M){}}}i\,,\,
\frac{({t_{i-1}^{R}(M){}}-{t_{0}^{R}(M){}})+d}{(i-1)+1}\right\}\right\}
\\
&\leq\sup_{i{{{\scriptstyle}\geqslant}}2}\left\{\max\left\{\frac{{t_{i}^{R}(M){}}-{t_{0}^{R}(M){}}}{i}\,,
\frac{{t_{i-1}^{R}(M){}}-{t_{0}^{R}(M){}}}{i-1},d\right\}\right\}
\\
&=\max\left\{\sup_{i{{{\scriptstyle}\geqslant}}1}
\left\{\frac{{t_{i}^{R}(M){}}-{t_{0}^{R}(M){}}}{i}\right\},d\right\}
\\
&=\max\left\{{\operatorname{slope}}_RM,d\right\}\,.
\end{aligned}$$
When $f\notin(Q_{{\!\scriptscriptstyle{+}}})^2$ holds, the proof in [@Av:barca 3.3.3(1)] of a result of Nagata implies $\delta_{i,j}=0$ in , so equalities hold in . This and give $$\begin{aligned}
{t_{1}^{Q}(M){}}-{t_{0}^{Q}(M){}}&=\max\{{t_{1}^{R}(M){}}-{t_{0}^{R}(M){}},d\}\,,
\\
{t_{i}^{Q}(M){}}-{t_{0}^{Q}(M){}}&\ge{t_{i}^{R}(M){}}-{t_{0}^{R}(M){}}
\quad\text{for}\quad i\ge2\,.
\end{aligned}$$ The preceding relations clearly imply ${\operatorname{slope}}_QM\ge\max\{{\operatorname{slope}}_RM,d\}$.
\(2) For $i\ge1$ the last three terms of the exact sequences yield $$\label{eq:long2}
\begin{aligned}
{t_{i}^{R}(M){}}
&\le\max\{{t_{i}^{Q}(M){}},({t_{i-2}^{R}(M){}}+d)\}\\
&\le\max\{{t_{i}^{Q}(M){}},({t_{i-2}^{Q}(M){}}+d),({t_{i-4}^{R}(M){}}+2d)\}
\le\cdots\\
&\le\max_{0{{{\scriptstyle}\leqslant}}2h{{{\scriptstyle}\leqslant}}i}\{{t_{i-2h}^{Q}(M){}}+hd\}\,.
\end{aligned}$$
From , , and we obtain the inequalities below: $$\begin{aligned}
{\operatorname{slope}}_RM
&=\sup_{i{{{\scriptstyle}\geqslant}}1}\left\{\frac{{t_{i}^{R}(M){}}-{t_{0}^{R}(M){}}}{i}\right\}
\\
&\le\sup_{i{{{\scriptstyle}\geqslant}}1}\left\{\max_{0{{{\scriptstyle}\leqslant}}2h{{{\scriptstyle}\leqslant}}i}
\left\{\frac{{t_{i-2h}^{Q}(M){}}-{t_{0}^{Q}(M){}}+hd}{(i-2h)+(2h)}\right\}\right\}
\\
&\leq\sup_{i{{{\scriptstyle}\geqslant}}1}\left\{\max_{0{{{\scriptstyle}\leqslant}}2h< i}
\left\{\frac{{t_{i-2h}^{Q}(M){}}-{t_{0}^{Q}(M){}}}{i-2h}\,,\,\frac{d}{2}\right\}\right\}
\\
&=\max\left\{\sup_{i{{{\scriptstyle}\geqslant}}1}
\left\{\frac{{t_{i}^{Q}(M){}}-{t_{0}^{Q}(M){}}}{i}\right\}\,,\,\frac{d}{2}\right\}
\\
&=\max\left\{{\operatorname{slope}}_QM\,,\,\frac{d}{2}\right\}\,.
\end{aligned}$$
For $f\in Q_{{\!\scriptscriptstyle{+}}}{\operatorname{Ann}}_QM$, the proof in [@Av:barca 3.3.3(2)] of a result of Shamash shows that $\delta_{i,*}$ in is surjective, so equalities hold in ; in view of one gets $$\begin{aligned}
{t_{1}^{R}(M){}}-{t_{0}^{R}(M){}}&={t_{1}^{Q}(M){}}-{t_{0}^{Q}(M){}}\,,
\\
{t_{2}^{R}(M){}}-{t_{0}^{R}(M){}}&=\max\{{t_{2}^{Q}(M){}}-{t_{0}^{Q}(M){}},d\}\,,
\\
{t_{i}^{R}(M){}}-{t_{0}^{R}(M){}}&\ge{t_{i}^{Q}(M){}}-{t_{0}^{Q}(M){}} \quad\text{for}\quad
i\ge3\,.
\end{aligned}$$ These relations clearly imply an inequality ${\operatorname{slope}}_RM\ge\max\{{\operatorname{slope}}_QM,d/2\}$.
Small homomorphisms of graded algebras {#sec:small}
======================================
A homomorphism ${{\varphi}}{\colon}Q\to R$ of graded $K$-algebras is called *small* if the map $${\operatorname{Tor}_{i}^{{{\varphi}}}(k,k){}}_j{\colon}{\operatorname{Tor}_{i}^{Q}(k,k){}}_j \to {\operatorname{Tor}_{i}^{R}(k,k){}}_j$$ is injective for each pair $(i,j)\in{{\mathbb N}}\times{{\mathbb Z}}$; see \[ch:small\] for examples. Recall that *homological products* turn ${\operatorname{Tor}_{}^{Q}(k,R){}}$ into a bigraded algebra; see [@CE Ch.XI, §4].
\[thm:small\] Let $Q$ be a standard graded $K$-algebra, ${{\varphi}}{\colon}Q\to R$ a surjective small homomorphism of graded $K$-algebras with ${\operatorname{Ker}}{{\varphi}}\ne0$, and set $c={\operatorname{Rate}}R$.
For every integer $i\ge1$ there are then inequalities $${{t_{i}^{Q}(R){}}} \le {\operatorname{slope}}_QR\cdot i \le (c+1)\cdot i\,,$$ and the following conditions are equivalent:
1. ${t_{i}^{Q}(R){}}=(c+1)\cdot i$.
2. ${t_{h}^{Q}(R){}}=(c+1)\cdot h$ for $1\le h\le i$.
3. ${t_{1}^{Q}(R){}}=c+1$ and ${\operatorname{Tor}_{i}^{Q}(k,R){}}_{i(c+1)}=({\operatorname{Tor}_{1}^{Q}(k,R){}}_{c+1})^i\ne0$.
Before starting on the proof of the theorem we present an application, followed by a couple of easily verifiable sufficient conditions for the smallness of ${{\varphi}}$.
\[cor:small\] With $J={\operatorname{Ker}}{{\varphi}}$, the following assertions hold:
1. ${t_{i}^{Q}(R){}}=(c+1)\cdot i$ for some $i\ge1$ implies the conditions ${t_{h}^{Q}(R){}}=(c+1)\cdot h$ for $1\le h\le i$ and $i\le{\operatorname{rank}}_k(J/Q_{{\!\scriptscriptstyle{+}}}J)_{c+1}$.
2. ${t_{i}^{Q}(R){}}<(c+1)\cdot i$ holds for all $i>\dim Q-\dim R$ when ${\operatorname{pd}}_QR$ is finite.
3. ${\operatorname{reg}}_QR\le c\cdot{\operatorname{pd}}_QR$.
Homological products are strictly skew-commutative for the homological degree, see [@CE Ch.XI, §4], so $({\operatorname{Tor}_{1}^{Q}(k,R){}}_*){}^i$ is the image of a canonical $k$-linear map $$\lambda_{i,*}{\colon}\textstyle{\bigwedge}^i_k(J/Q_{{\!\scriptscriptstyle{+}}}J)_*\cong
\textstyle{\bigwedge}^i_k {\operatorname{Tor}_{1}^{Q}(k,R){}}_* \to {\operatorname{Tor}_{i}^{Q}(k,R){}}_*\,.$$
\(1) This follows from the map above and the implication (i)$\implies$(ii) and (iii).
\(2) When ${\operatorname{pd}}_QR$ is finite one has ${\operatorname{grade}}_QR=\dim Q-\dim R$ by a theorem of Peskine and Szpiro [@PS], and $\lambda_{i,*}=0$ for $i>{\operatorname{grade}}_QR$ from a theorem of Bruns [@Br]. Thus, Theorem \[thm:small\] implies ${{\operatorname{Tor}_{i}^{Q}(k,R){}}}_j=0$ for $j\ge(c+1)i$.
\(3) The theorem gives ${t_{i}^{Q}(R){}}-i\le ci$ for each $i$, hence ${\operatorname{reg}}_QR\le c\cdot{\operatorname{pd}}_QR$.
A bit of notation comes in handy at this point.
\[canonical\] A standard graded $K$-algebra $R$ has a *canonical presentation* $R={\widetilde}R/I_R$ with ${\widetilde}R$ the symmetric $K$ algebra on $R_1$ and $I_R\subseteq({\widetilde}R_{{\!\scriptscriptstyle{+}}})^2$, obtained from the epimorphism of $K$-algebras ${\widetilde}R\to R$ extending the identity map on $R_1$.
If $Q$ is standard graded $K$-algebra and ${{\varphi}}{\colon}Q\to R$ is a surjective homomorphism with ${\operatorname{Ker}}{{\varphi}}\subseteq(Q_{{\!\scriptscriptstyle{+}}})^2$, then ${\widetilde}R\to R$ factors as ${{\widetilde}R}\cong{\widetilde}Q\to Q{\xrightarrow}{{{\varphi}}}R$.
\[ch:small\] A homomorphism ${{\varphi}}$ as on \[canonical\] is small if $J={\operatorname{Ker}}{{\varphi}}$ satisfies one of the conditions:
1. $J\subseteq(f_1,\dots,f_a)$, where $f_1,\dots,f_a$ is some $Q$-regular sequence in $Q_{{\!\scriptscriptstyle{+}}}$.
2. $J_j=0$ for $j\le{\operatorname{reg}}_{{\widetilde}Q}Q$, where $Q={{\widetilde}Q}/I_Q$ is the canonical presentation.
Indeed, see [@Av:small 4.3] for (a), and Şega [@Se 5.1, 9.2(2)] for (b).
*The hypotheses of Theorem *\[thm:small\]* are in force for the rest of this section.* The proof of the theorem utilizes free resolutions with additional structure.
A *model* of ${{\varphi}}$ is a differential bigraded $Q$-algebra $Q[X]$ with the following properties: For $n\ge1$ there exist homogeneous subsets $X_n=\{x\in X\mid
|x|=n\}$, linearly independent over $K$, such that the underlying bigraded algebra is isomorphic to $Q\otimes_K\bigotimes_{n=1}^{\infty}K[X_n]$, where $K[X_n]$ is the exterior algebra of the graded $K$-vector space $KX_n$ when $n$ is odd, and the symmetric algebra of that space when $n$ is even. The differential satisfies $\deg({\partial}(y))=\deg(y)$ for every element $y\in
Q[X]$, and the following sequence of homomorphisms of free graded $Q$-modules is a resolution of $R$: $$\cdots {\longrightarrow}Q[X]_{n,*}{\xrightarrow}{\,{\partial}\,} Q[X]_{n-1,*}{\longrightarrow}\cdots
{\longrightarrow}Q[X]_{0,*}{\longrightarrow}0$$
A $Q$-basis of $Q[X]$ is provided by the set consisting of $1$ and all the monomials $x_{1}^{d_1}\cdots x_{s}^{d_s}$ with $x_r\in X$, and with $d_r=1$ when $|x_r|$ is odd, respectively, $d_r\ge1$ when $|x_r|$ is even. The model $Q[X]$ is said to be *minimal* if for each $x\in
X$, the coefficient of every $x_i\in X$ in the expansion of ${\partial}(x)$ is contained in $Q_{{\!\scriptscriptstyle{+}}}$.
We summarize the properties of minimal models used in our arguments.
\[model:exist\] A minimal model $Q[X]$ of ${{\varphi}}$ always exists, and is unique up to non-canonical isomorphism of differential bigraded $Q$-algebras; see [@Av:barca 7.2.4]. In such a model ${\partial}(X_1)$ is a minimal set of homogeneous generators of ${\operatorname{Ker}}{{\varphi}}$, and $Q[X_1]$ is the Koszul complex on that set, with its standard bigrading, differential and multiplication.
\[model:omega\] Let ${{\widetilde}R}[Z]$ be a minimal model for the canonical presentation ${{\widetilde}R}\to R$, see \[canonical\]. Let $Z_0$ be a $K$-basis of ${{\widetilde}R}_1$, and choose a $k$–linearly independent set $$Z'=\{z'\mid |z'|=|z|+1\text{ and }\deg(z')=\deg(z)\}_{z\in Z_0\sqcup Z}\,.$$ By [@Av:barca 7.2.6], there exists an isomorphism of bigraded $k$-vector spaces $${\operatorname{Tor}_{}^{R}(k,k){}}\cong\bigotimes_{n=1}^{\infty}k\langle Z'_n\rangle\,,$$ where $k\langle Z'_n\rangle$ denotes the exterior algebra of the graded $k$-vector space $kZ'_n$ when $n$ is odd, and the divided powers algebra of that space when $n$ is even.
\[model:small\] Let $Q[X]$ be a minimal model for ${{\varphi}}$, and let ${{\widetilde}R}{\xrightarrow}{\psi}Q{\xrightarrow}{{{\varphi}}}R$ be a factorization of the canonical presentation ${\widetilde}R\to R$ as in \[canonical\]. If ${{\widetilde}R}[Y]$ is a minimal model for $\psi$, then there is a minimal model ${{\widetilde}R}[Z]$ of ${\widetilde}R\to R$ with $Z=Y\sqcup X$; see [@AI 4.11].
For every integer $i\ge2$ the following equality holds: $$\label{eq:c}
{t_{i-1}^{R}(R_{{\!\scriptscriptstyle{+}}}){}}-{t_{0}^{R}(R_{{\!\scriptscriptstyle{+}}}){}}={t_{i}^{R}(k){}}-1\,.$$ Thus, for $i\ge1$ the definition of slope and Proposition \[thm:celb\] applied with $M=k$ give $$\label{eq:t}
{{t_{i}^{Q}(R){}}} /i\le{\operatorname{slope}}_QR\le c+1\,.$$
It remains to establish the equivalence of the conditions in the theorem.
(iii)$\implies$(ii). The condition $({\operatorname{Tor}_{1}^{Q}(k,R){}}_{c+1})^i\ne0$ forces $({\operatorname{Tor}_{1}^{Q}(k,R){}}_{c+1})^h\ne0$ for $h=1,\dots,i$. As ${\operatorname{Tor}_{}^{Q}(k,R){}}$ is a bigraded algebra, one gets $${\operatorname{Tor}_{h}^{Q}(k,R){}}_{(c+1)h}\supseteq({\operatorname{Tor}_{1}^{Q}(k,R){}}_{c+1})^h\ne0\,.$$ This implies ${t_{h}^{Q}(R){}}\ge(c+1)h$, and \[eq:t\] provides the converse inequality.
(ii)$\implies$(i). This implication is a tautology.
(i)$\implies$(iii). The hypothesis means ${\operatorname{Tor}_{i}^{Q}(k,R){}}_{i(c+1)}\ne0$, so we have to prove $$\label{eq:kx0}
{\operatorname{Tor}_{i}^{Q}(k,R){}}_{i(c+1)}=({\operatorname{Tor}_{1}^{Q}(k,R){}}_{c+1})^i\,.$$
Let $Q[X]\to R$ be a minimal model and set $k[X]=k\otimes_QQ[X]$. The bigraded $k$-algebras ${\operatorname{H}(k[X])}$ and ${\operatorname{Tor}_{}^{Q}(k,R){}}$ are isomorphic, with $${\operatorname{Tor}_{i}^{Q}(k,R){}}_j\cong{\operatorname{H}_{i}(k[X])}_j\,.
\label{eq:hkx}$$
In view of \[model:small\] each $x\in X_n$ can be viewed as an indeterminate of a minimal model of ${{\widetilde}R}\to R$, and so by \[model:omega\] it defines an element $x'$ in ${\operatorname{Tor}_{n+1}^{R}(k,k){}}$ with $\deg(x) =\deg(x')$. From this equality and \[eq:c\] we obtain $$\label{eq:x}
\deg(x)
=\deg(x') \le{t_{n+1}^{R}(k){}} \le cn+1=c|x|+1\,.$$ The $k$-vector space $k[X]_{i,(c+1)i}$ has a basis of monomials $x_1^{d_1}\cdots x_s^{d_s}$ with $x_r\in X$ and $d_r\ge1$. The following relations hold, with the inequality coming from \[eq:x\]: $$\begin{aligned}
\sum_{r=1}^sd_r|x_r| &=\big|x_1^{d_1}\cdots x_s^{d_s}\big|=i=(c+1)i-ci\\
&=\deg\big(x_1^{d_1}\cdots x_s^{d_s}\big)-c\big|x_1^{d_1}\cdots
x_s^{d_s}\big|
\\
&=\sum_{r=1}^sd_r(\deg(x_r)-c|x_r|)
\\
&\le\sum_{r=1}^sd_r\,.
\end{aligned}$$ All $d_r$ and $|x_r|$ are positive integers, so for $1\le r\le s$ we get first $|x_r|=1$, then $|x_r|=\deg(x_r)-c|x_r|$; that is, $\deg(x_r)=c+1$. We have now proved $$k[X]_{i,(c+1)i}=k[X_1]_{i,(c+1)i}=(kX_{1,c+1})^i\,.$$ The isomorphism \[eq:hkx\] maps ${\operatorname{Tor}_{1}^{Q}(k,R){}}_{c+1}$ to $kX_{1,c+1}$ and ${\operatorname{Tor}_{i}^{Q}(k,R){}}_{(c+1)i}$ to a quotient of $k[X]_{i,(c+1)i}$, so the equalities above establish \[eq:kx0\].
Koszul algebras {#sec:koszul}
==============
In this section we prove and discuss the theorem stated in the introduction.
Here $Q$ is a standard graded $K$-algebra, ${{\varphi}}{\colon}Q\to R$ a surjective homomorphism of graded $K$-algebras, and $M$ a graded $R$-module. As in [@PP], we say that $M$ is *Koszul* over $Q$ if ${\operatorname{Tor}_{i}^{Q}(k,M){}}_j=0$ unless $i=j$. In the following proposition the Koszul hypotheses are related to the injectivity of ${\operatorname{Tor}_{}^{{{\varphi}}}(k,M){}}$.
\[prop:koszul\_small\] Assume that $J={\operatorname{Ker}}{{\varphi}}$ is contained in $(Q_{{\!\scriptscriptstyle{+}}})^2$.
1. If $Q$ is Koszul, then ${{\varphi}}$ is small.
2. If ${{\varphi}}$ is small and $M$ is Koszul over $Q$, then ${\operatorname{Tor}_{}^{{{\varphi}}}(k,M){}}$ is injective.
Forming vector space duals, one sees that the injectivity of ${\operatorname{Tor}_{}^{{{\varphi}}}(k,M){}}$ is equivalent to surjectivity of the homomorphism of bigraded $k$-vector spaces $${\operatorname{Ext}^{}_{{{\varphi}}}(M,k){}}{\colon}{\operatorname{Ext}^{}_{R}(M,k){}}\to{\operatorname{Ext}^{}_{Q}(M,k){}}\,.$$
\(1) For $M=k$ the map above is a homomorphism of $K$-algebras, with multiplication given by Yoneda products. The map ${\operatorname{Ext}^{1}_{{{\varphi}}}(k,k){}}_*$ is isomorphic to $${\operatorname{Hom}_{R}({{\varphi}}_1,k)}_*{\colon}{\operatorname{Hom}_{R}(R_1,k)}_*\to{\operatorname{Hom}_{Q}(Q_1,k)}_*\,,$$ which is bijective as $J\subseteq(Q_{{\!\scriptscriptstyle{+}}})^2$ holds. As $Q$ is Koszul, the $k$-algebra ${\operatorname{Ext}^{}_{Q}(k,k){}}$ is generated by ${\operatorname{Ext}^{1}_{Q}(k,k){}}$, see [@PP Ch.2, §1, Def.1], so ${\operatorname{Ext}^{}_{{{\varphi}}}(k,k){}}$ is surjective.
\(2) Yoneda products turn ${\operatorname{Ext}^{}_{{{\varphi}}}(M,k){}}$ into a homomorphism of bigraded left modules over ${\operatorname{Ext}^{}_{R}(k,k){}}$, with this algebra acting on ${\operatorname{Ext}^{}_{Q}(M,k){}}$ through ${\operatorname{Ext}^{}_{{{\varphi}}}(k,k){}}$. The bigraded module ${\operatorname{Ext}^{}_{Q}(M,k){}}$ is generated over ${\operatorname{Ext}^{}_{Q}(k,k){}}$ by ${\operatorname{Ext}^{0}_{Q}(M,k){}}$, because $M$ is Koszul over $Q$; see [@PP Ch.2, §1, Def.2]. Since ${{\varphi}}$ is small, ${\operatorname{Ext}^{}_{{{\varphi}}}(k,k){}}$ is surjective, and hence ${\operatorname{Ext}^{0}_{Q}(M,k){}}$ generates ${\operatorname{Ext}^{}_{Q}(M,k){}}$ as an ${\operatorname{Ext}^{}_{R}(k,k){}}$-module as well. The map ${\operatorname{Ext}^{0}_{{{\varphi}}}(M,k){}}_*$ is surjective, because it is canonically isomorphic to the identity map of ${\operatorname{Hom}_{k}(M_0,k)}_*$. It follows that ${\operatorname{Ext}^{}_{{{\varphi}}}(M,k){}}$ is surjective.
Recall that $Q$ is Koszul, $J$ is a non-zero ideal of $Q$ with $J_1=0$, and $c={\operatorname{slope}}_R{R_{{\!\scriptscriptstyle{+}}}}$. Note that ${{\varphi}}$ is small by Proposition \[prop:koszul\_small\](1).
\(1) The inequality ${\operatorname{slope}}_QR\le c+1$ was proved as part of Theorem \[thm:small\].
One has ${t_{i}^{Q}(k){}}=i$ for $1\le i<{\operatorname{pd}}_Qk+1$ by the Koszul hypothesis on $Q$, and ${t_{i}^{Q}(R){}}\ge i+1$ for $1\le i<{\operatorname{pd}}_QR+1$ by the condition $J_1=0$. The exact sequence $${\operatorname{Tor}_{i+1}^{Q}(k,k){}}\to{\operatorname{Tor}_{i}^{Q}(k,R_{{\!\scriptscriptstyle{+}}}){}}\to{\operatorname{Tor}_{i}^{Q}(k,R){}}$$ of graded vector spaces, which holds for every $i\ge1$, therefore implies $${t_{i}^{Q}(R_{{\!\scriptscriptstyle{+}}}){}}\le\max\{{t_{i+1}^{Q}(k){}},{t_{i}^{Q}(R){}}\}={t_{i}^{Q}(R){}}\,,$$ and hence ${\operatorname{slope}}_QR_{{\!\scriptscriptstyle{+}}}\le\sup_{i{{{\scriptstyle}\geqslant}}1}\{({t_{i}^{Q}(R){}}-1)/i\}$. Now Proposition \[thm:ceub\] gives $$c \le\max\left\{{\operatorname{slope}}_Q{R_{{\!\scriptscriptstyle{+}}}},\sup_{i{{{\scriptstyle}\geqslant}}1}\left\{\frac{{t_{i}^{Q}(R){}}-1}i\right\}\right\}
\le\sup_{i{{{\scriptstyle}\geqslant}}1}\left\{\frac{{t_{i}^{Q}(R){}}-1}i\right\}
\le{\operatorname{slope}}_QR\,.$$ When ${\operatorname{pd}}_QR$ is finite the last inequality is strict, so one has $c<{\operatorname{slope}}_QR$.
The inequalities in (2), (3), and (4) were proved as part of Corollary \[cor:small\].
Finally, assume that $Q$ is a standard graded polynomial ring and ${\operatorname{reg}}_QR=cp$ holds with $p={\operatorname{pd}}_QR$. Theorem \[thm:small\] then shows that $({\operatorname{Tor}_{1}^{Q}(k,R){}}_{c+1})^p$ is not zero, and so ${\operatorname{Ker}}{{\varphi}}$ needs at least $p$ minimal generators of degree $c+1$. As a bigraded $k$-algebra, ${\operatorname{Tor}_{}^{Q}(k,R){}}$ is isomorphic to the homology of the Koszul complex $E$ on some $K$-basis of $Q_1$, so one also has $({\operatorname{H}_{1}(E)})^p\ne0$. Now a theorem of Wiebe, see [@BH 2.3.15], implies that ${\operatorname{Ker}}{{\varphi}}$ is generated by a $Q$-regular sequence of $p$ elements.
\[prop:canonical\] For a Koszul $K$-algebra $Q$ and $R=Q/J$ with $J\subseteq(Q_{{\!\scriptscriptstyle{+}}})^2$ one has $$2\le{\operatorname{slope}}_QR\le{\operatorname{slope}}_{{\widetilde}R}R\,,$$ where $R={\widetilde}R/I_R$ is the canonical presentation. Equalities hold when $R$ is Koszul.
The canonical presentation factors as ${{\widetilde}R}\to Q{\xrightarrow}{{{\varphi}}}R$; see \[canonical\]. Part (1) of the main theorem, applied to the homomorphism ${{\widetilde}R}\to Q$ and the $Q$-module $R$, gives inequalities $2\le{\operatorname{slope}}_{{\widetilde}R}Q\le{\operatorname{Rate}}Q+1=2$, so Proposition \[thm:ceub\] yields $${\operatorname{slope}}_QR\le\max\{{\operatorname{slope}}_{{\widetilde}R}R,{\operatorname{slope}}_{{\widetilde}R}Q\}
=\max\{{\operatorname{slope}}_{{\widetilde}R}R,2\}={\operatorname{slope}}_{{\widetilde}R}R\,.$$ When $R$ is Koszul, the computation above gives $2\le{\operatorname{slope}}_{{\widetilde}R}R\le{\operatorname{Rate}}R+1=2$.
The last assertion of Proposition \[prop:canonical\] does not admit a converse. To demonstrate this we appeal to a family of graded algebras constructed by Roos [@Ro]. Recall that the formal power series $H_{M}(s)=\sum_{j\in{{\mathbb N}}}{\operatorname{rank}}_KM_js^j$ in ${{\mathbb Z}}[\![s]\!]$ is called the *Hilbert series* of $M$, and the formal Laurent series $P^{R}_M(s,t)=\sum_{i\in{{\mathbb N}},j\in{{\mathbb Z}}}\beta_{i,j}^R(M)\,s^jt^i $ in ${{\mathbb Z}}[s^{\pm1}][\![t]\!]$, where $\beta_{i,j}^R(M)={\operatorname{rank}}_k{\operatorname{Tor}_{i}^{R}(k,M){}}_j$, is known as its *graded Poincaré series*.
\[ch:roos\] Let $P=K[x_1,x_2,x_3,x_4,x_5,x_6]$ be a polynomial ring.
For each integer $a\ge2$ set $R{(a)}=P/I{(a)}$, where $I{(a)}$ is the ideal $$\big(\{x_i^2\}_{1\le i\le 6}\,,\,\{x_{i}x_{i+1}\}_{1\le i\le 5}\,,\,
x_1x_3+ax_3x_6-x_4x_6\,,\,x_1x_4+x_3x_6+(a-2)x_4x_6\big)\,.$$ When the characteristic of $K$ is zero, Roos [@Ro Thm.1$'$] proves the equalities $$H_{R(a)}(s)=1+6s+8s^2
\quad\text{and}\quad
P^{R{(a)}}_k(s,t)=\frac1{H_{R(a)}(-st)-(st)^{a+1}(s+st)}\,.$$
For each $a\ge2$ the graded $K$-algebra $R{(a)}$ from \[ch:roos\] satisfies $${\operatorname{slope}}_{P}R{(a)}-1=1<1+(1/a)\le{\operatorname{Rate}}R{(a)}\le1+(2/a)\,.$$
Indeed, one has ${t_{1}^{P}(R(a)){}}=2$ because $I(a)$ is generated by quadrics. The isomorphism ${\operatorname{Tor}_{i}^{P}(k,R(a)){}}_j\simeq{\operatorname{H}_{i}(E\otimes_P{R(a)})}_j$, where $E$ denotes the Koszul complex on some basis of ${P}_1$, and the equalities ${R(a)}_j=0$ for $j\ge3$ imply ${t_{i}^{P}(R(a)){}}\le i+2$ for $2\le i\le 6$. Comparing the numbers ${t_{i}^{P}(R(a)){}}/i$, one gets ${\operatorname{slope}}_{P}R{(a)}=2$.
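Spelling out the comparison in the last step (a routine check supplied here for the reader; note that ${\operatorname{pd}}_PR(a)\le6$ by the Hilbert syzygy theorem, since $P$ has six variables): $${\operatorname{slope}}_{P}R{(a)}=\sup_{i\ge1}\left\{\frac{{t_{i}^{P}(R(a)){}}}{i}\right\}\le\max\left\{\frac{2}{1}\,,\,\max_{2\le i\le 6}\frac{i+2}{i}\right\}=2\,,$$ with equality because the value $2$ is attained at $i=1$, while $(i+2)/i\le2$ for every $i\ge2$.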
Following [@ABH], for each $f(s,t)=\sum_{i,j{{{\scriptstyle}\geqslant}}0}b_{i,j}s^jt^i\in{{\mathbb R}}[s][\![t]\!]$ we set $${\operatorname{rate}}(f(s,t))=\sup_{i,j}\{j/i\mid i\ge1\text{ and }b_{i,j}\ne0\}\,.$$ Writing $h(s,t)=6-8st+s^{a+1}t^{a}+s^{a+1}t^{a+1}$, we obtain the expression $$P^{R(a)}_{R(a)_{{\!\scriptscriptstyle{+}}}}(s,t)=\frac{P^{R(a)}_k(s,t)-1}{t}
=\frac{sh(s,t)}{1-(st)h(s,t)}
=\sum_{i{{{\scriptstyle}\geqslant}}1}s^it^{i-1}h(s,t)^i\,.$$ The monomial $s^jt^i$ with least $i\ge1$ and largest $j$ that appears with a non-zero coefficient in the sum on the right is $s^{a+2}t^{a}$. This gives the first inequality below: $$\begin{aligned}
\frac{a+1}a
&\le{\operatorname{slope}}_{R(a)}(R(a)_{{\!\scriptscriptstyle{+}}})
={\operatorname{rate}}\left(\frac{s\cdot h(s,t)}{1-(st)h(s,t)}\right)\\
&\le\max\big\{{\operatorname{rate}}(s\cdot h(s,t))\,,{\operatorname{rate}}(1-(st)h(s,t))\big\}\\
&=\max\left\{\frac{a+2}a\,,\frac{a+2}{a+1}\right\}
=\frac{a+2}a\,.
\end{aligned}$$ The second inequality comes from [@ABH 1.1]. The desired inequalities follow.
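The series manipulation above is mechanical and can be double-checked by machine. The following short script (our own verification aid, not part of the paper) expands the partial sums of $\sum_{i\ge1}s^it^{i-1}h(s,t)^i$ in exact integer arithmetic and confirms that, for $a=2$, the monomial $s^{a+2}t^{a}$ appears with coefficient $1$:

```python
from collections import defaultdict

def pmul(p, q):
    """Multiply two bivariate polynomials stored as {(s_exp, t_exp): coeff}."""
    r = defaultdict(int)
    for (a1, b1), c1 in p.items():
        for (a2, b2), c2 in q.items():
            r[(a1 + a2, b1 + b2)] += c1 * c2
    return dict(r)

a = 2  # smallest member of the Roos family R(a)
# h(s,t) = 6 - 8st + s^{a+1}t^a + s^{a+1}t^{a+1}
h = {(0, 0): 6, (1, 1): -8, (a + 1, a): 1, (a + 1, a + 1): 1}

# Partial sum of sum_{i>=1} s^i t^{i-1} h(s,t)^i.  Every term of the i-th
# summand has t-exponent at least i - 1, so summands with i > a + 1 cannot
# contribute to the monomial s^{a+2} t^a; truncating at i = a + 2 is safe.
series = defaultdict(int)
hpow = {(0, 0): 1}
for i in range(1, a + 3):
    hpow = pmul(hpow, h)
    for (j, k), c in hpow.items():
        series[(j + i, k + i - 1)] += c

coeff = series[(a + 2, a)]
print(coeff)  # -> 1
```

The same check goes through for other small values of $a$ by changing the parameter at the top.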
Slopes and Gröbner bases
========================
Let $R$ be a standard graded $K$-algebra and $R={{\widetilde}R}/I_R$ its canonical presentation.
Let $T(R)$ denote the set of all term orders on all $K$-bases of ${{\widetilde}R}_1$. Letting ${\operatorname{in}}_\tau(I_R)$ denote the initial ideal corresponding to $\tau\in T(R)$, Eisenbud, Reeves, and Totaro [@ERT] set $$\Delta(R)=\inf_{\tau\in T(R)}\{ t^{{\widetilde}R}_1({{\widetilde}R}/{\operatorname{in}}_\tau(I_R)) \}\,.$$ In words: $\Delta(R)$ is the smallest number $a$ such that $I_R$ has a Gröbner basis of elements of degree $\leq a$ with respect to a term order on some coordinate system. Now we set $$\Delta^{\ell}(R)=\inf\{ \Delta(Q) \}\,,$$ where $Q$ ranges over the set of all graded $K$-algebras satisfying $Q/L\simeq R$ for some ideal $L$ generated by a $Q$-regular sequence of elements of degree $1$.
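For orientation, here is a standard illustration (our example, not from the text): if $I_R$ is generated by monomials in some basis of ${{\widetilde}R}_1$, then ${\operatorname{in}}_\tau(I_R)=I_R$ for every term order $\tau$ on that basis, so $\Delta(R)\le t^{{\widetilde}R}_1({{\widetilde}R}/I_R)$. Thus, for instance, $$\Delta\big(K[x,y]/(x^2,xy,y^2)\big)=2\,,$$ which is the smallest possible value when $R$ is not a polynomial ring, since any initial ideal of a non-zero $I_R\subseteq({{\widetilde}R}_{{\!\scriptscriptstyle{+}}})^2$ requires generators of degree at least $2$.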
\[prop:a1\] When $R$ is not a polynomial ring the following inequalities hold: $$2\le{\operatorname{Rate}}R+1\le\Delta^{\ell}(R)\,.
\qedhere$$
For $R\cong Q/(l)$ with $l$ a non-zero-divisor in $Q_1$, one has a chain $${\operatorname{Rate}}R={\operatorname{slope}}_{R}R_{{\!\scriptscriptstyle{+}}}={\operatorname{slope}}_{Q}R_{{\!\scriptscriptstyle{+}}}
={\operatorname{slope}}_{Q}Q_{{\!\scriptscriptstyle{+}}}={\operatorname{Rate}}Q\le\Delta(Q)-1\,.$$ where the first and third equalities hold by definition, the second one by Proposition \[thm:reg\](1), and the last one from the exact sequence $0\to Q(-1)\to Q_{{\!\scriptscriptstyle{+}}}\to R_{{\!\scriptscriptstyle{+}}}\to 0$; the inequality, announced without proof by Backelin [@Ba Claim, p.98], is established in [@ERT Prop. 3]. The second inequality in the proposition follows.
Combining the main theorem and the preceding proposition, one obtains:
\[cor:taylor\] The following inequalities hold.
1. ${\operatorname{slope}}_{{\widetilde}R}R\le\Delta^{\ell}(R)$.
2. $t_i^{{\widetilde}R}(R)< \Delta^{\ell}(R)\cdot i$ for all $i>({\operatorname{rank}}_KR_1-\dim R)$.
3. ${\operatorname{reg}}_{{\widetilde}R} R\leq(\Delta^{\ell}(R)-1)\cdot({\operatorname{rank}}_KR_1-{\operatorname{depth}}R)$.
The research reported in this paper was prompted by the inequalities above, which were initially obtained by a very different argument; we proceed to sketch it.
\[rem:taylor\] For any isomorphism $R\simeq Q/L$, with $L$ generated by a regular sequence of linear forms, and for each $\tau\in T(Q)$ and every pair of integers $(i,j)$ one has: $$\label{eq:betti}
\beta_{i,j}^{{\widetilde}R}(R)=\beta_{i,j}^{{\widetilde}Q}(Q)\le
\beta_{i,j}^{{\widetilde}Q}({\widetilde}Q/{\operatorname{in}}_\tau(I_Q))\,;$$ see, for instance, [@BC 3.13]. The Taylor resolution of the monomial ideal ${\operatorname{in}}_\tau(I_Q)$, see [@Fr §5], yields inequalities ${t_{i}^{{\widetilde}Q}({{\widetilde}Q}/{\operatorname{in}}_\tau(I_Q)){}}\leq
{t_{1}^{{\widetilde}Q}({{\widetilde}Q}/{\operatorname{in}}_\tau(I_Q)){}}\cdot i$, which are strict for $i>{\operatorname{rank}}_KQ_1-\dim Q$. From these observations one obtains: $${\operatorname{slope}}_{{\widetilde}R}R={\operatorname{slope}}_{{\widetilde}Q}{Q}=\sup_{i{{{\scriptstyle}\geqslant}}1}\{{t_{i}^{{\widetilde}Q}(Q){}}/i\}
\le\inf_{\tau\in T(Q)}\{{t_{1}^{{\widetilde}Q}({{\widetilde}Q}/{\operatorname{in}}_\tau(I_Q)){}}\}=\Delta(Q)\,.$$ These inequalities imply part (1) of Corollary \[cor:taylor\]; part (3) is a formal consequence.
In [@Co], algebras $R$ satisfying $\Delta(R)=2$ are called *G-quadratic*, and those with $\Delta^{\ell}(R)=2$ are called *LG-quadratic*. A G-quadratic algebra is LG-quadratic by definition, and an LG-quadratic one is Koszul, see Proposition \[prop:a1\].
The first one of the preceding implications is not invertible: By an observation of Caviglia, see [@Co 1.4], complete intersections of quadrics are LG-quadratic, while it is known that not all of them are G-quadratic, see [@ERT]. This leaves us with:
Is every Koszul algebra LG-quadratic?
The *Betti numbers* $\beta^{{\widetilde}R}_i(R)=\sum_{j\in{{\mathbb Z}}}{\operatorname{rank}}_k{\operatorname{Tor}_{i}^{{\widetilde}R}(k,R){}}_j$ might help separate the two notions. Indeed, when $R$ is LG-quadratic one has $R\cong Q/L$ and $Q={\widetilde}Q/I_Q$, where $Q$ is a standard graded $K$-algebra, $L$ is an ideal generated by a $Q$-regular sequence of linear forms, and the initial ideal ${\operatorname{in}}_\tau(I_Q)$ for some $\tau\in T(Q)$ is generated by quadrics. As a consequence, one has $\beta_1^{{\widetilde}Q}(Q)=\beta_1^{{\widetilde}Q}({\widetilde}Q/{\operatorname{in}}_\tau(I_Q))$, so we obtain $$\begin{aligned}
\beta_i^{{\widetilde}R}(R)
\le\beta_i^{{\widetilde}Q}({\widetilde}Q/{\operatorname{in}}_\tau(I_Q))
\le\binom{\beta^{{\widetilde}Q}_1({\widetilde}Q/{\operatorname{in}}_\tau(I_Q))}i
=\binom{\beta^{{\widetilde}R}_1(R)}i\,,
\end{aligned}$$ with inequalities coming from \[eq:betti\] and the Taylor resolution. Thus, we ask:
If $R$ is a Koszul algebra, does $\beta^{{\widetilde}R}_i(R)\le\displaystyle\binom{\beta^{{\widetilde}R}_1(R)}i$ hold for every $i$?
D. J. Anick, *On the homology of associative algebras*, Trans. Amer. Math. Soc. **296** (1986), 641–659.
A. Aramova, J. Herzog, Ş. Bărcănescu, *On the rate of relative Veronese submodules*, Rev. Roumaine Math. Pures Appl. **40** (1995), 243–251.
L. L. Avramov, *Small homomorphisms of local rings*, J. Algebra **50** (1978), 400–453.
L. L. Avramov, *Infinite free resolutions*, Six lectures on commutative algebra (Bellaterra, 1996), Progr. Math. **166**, Birkhäuser, Basel, 1998; 1–118.
L. L. Avramov, S. Iyengar, *André-Quillen homology of algebra retracts*, Ann. Sci. École Norm. Sup. (4) **36** (2003), 431–462.
L. L. Avramov, I. Peeva, *Finite regularity and Koszul algebras*, Amer. J. Math. **123** (2001), 275–281.
J. Backelin, *On the rates of growth of the homologies of Veronese subrings*, Algebra, algebraic topology, and their interactions (Stockholm, 1983), Lecture Notes in Math. **1183**, Springer, Berlin, 1986; 79–100.
W. Bruns, *On the Koszul algebra of a local ring*, Illinois J. Math. **37** (1993), 278–283.
W. Bruns, A. Conca, *Gröbner bases and determinantal ideals*, Commutative algebra, singularities and computer algebra (Sinaia, 2002), NATO Sci. Ser. II Math. Phys. Chem., **115**, Kluwer Acad. Publ., Dordrecht, 2003; 9–66.
W. Bruns, A. Conca, T. Römer, *Koszul homology and syzygies of Veronese subalgebras*, `arXiv:0902.2431`.
W. Bruns, J. Herzog, *Cohen-Macaulay rings*, Revised edition, Cambridge Studies Adv. Math. **39**, University Press, Cambridge, 1998.
H. Cartan, S. Eilenberg, *Homological algebra*, Princeton Univ. Press, Princeton, NJ, 1956.
A. Conca, *Koszul algebras and Gröbner bases of quadrics*, Proceedings of the 29th Symposium on Commutative Algebra in Japan, Nagoya, Japan, 2007; 127–133; `arXiv:0903.2397`.
D. Eisenbud, A. Reeves, B. Totaro, *Initial ideals, Veronese subrings, and rates of algebras*, Adv. Math. **109** (1994), 168–187.
R. Fröberg, *Some complex constructions with applications to Poincaré series*, Séminaire d’Algèbre Paul Dubreil, (Paris, 1977–1978), Lecture Notes in Math., **740**, Springer, Berlin, 1979; 272–284.
C. Peskine, L. Szpiro, *Syzygies et multiplicités*, C. R. Acad. Sci. Paris Sér. A **278** (1974), 1421–1424.
A. Polishchuk, L. Positselski, *Quadratic algebras*, Univ. Lecture Ser. **37**, Amer. Math. Soc., Providence, RI, 2005.
J.-E. Roos, *Commutative non Koszul algebras having a linear resolution of arbitrarily high order. Applications to torsion in loop space homology*, C. R. Acad. Sci. Paris **316** (1993), 1123–1128.
L. M. Şega, *Homological properties of powers of the maximal ideal of a local ring*, J. Algebra **241** (2001), 827–858.
[^1]: Research partly supported by NSF grants DMS 0803082 (LLA) and 0602498 (SBI)
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'In this universe, governed fundamentally by quantum mechanical laws, characterized by indeterminism and distributed probabilities, classical deterministic laws are applicable over a wide range of time, place, and scale. We review the origin of these deterministic laws in the context of the quantum mechanics of closed systems, most generally, the universe as a whole. In this formulation of quantum mechanics, probabilities are predicted for the individual members of sets of alternative histories of the universe that decohere, for which there is negligible interference between pairs of histories in the set as measured by a decoherence functional. An expansion of the decoherence functional in the separation between histories allows the form of the phenomenological, deterministic equations of motion to be derived for suitable coarse grainings of a class of non-relativistic systems, including ones with general non-linear interactions. More coarse graining is needed to achieve classical predictability than naive arguments based on the uncertainty principle would suggest. Coarse graining is needed to effect decoherence, and coarse graining beyond that to achieve the inertia necessary to resist the noise that mechanisms of decoherence produce. Sets of histories governed largely by deterministic laws constitute the quasiclassical realm of everyday experience which is an emergent feature of the closed system’s initial condition and Hamiltonian. We analyse the question of the sensitivity of the existence of a quasiclassical realm to the particular form of the initial condition. We find that almost any initial condition will exhibit a quasiclassical realm of some sort, but only a small fraction of the total number of possible initial states could reproduce the everyday quasiclassical realm of our universe.'
author:
- 'James B. Hartle'
title: 'Quasiclassical Realms In A Quantum Universe[^1]'
---
\#1\#2[[\#1 \#2]{}]{}
Introduction {#sec:I}
============
In cosmology we confront a problem which is fundamentally different from that encountered elsewhere in physics. This is the problem of providing a theory of the initial condition of the universe. The familiar laws of physics describe evolution in time. The evolution of a plasma is described by the classical laws of electrodynamics and mechanics and the evolution of an atomic state by Schrödinger’s equation. These dynamical laws require boundary conditions and the laws which govern the evolution of the universe — the classical Einstein equation, for instance — are no exception. There are no particular laws governing these boundary conditions; they summarize our observations of the universe outside the subsystem whose evolution we are studying. If we don’t see any radiation coming into a room, then we solve Maxwell’s equations inside with no-incoming-radiation boundary conditions. If we prepare an atom in a certain way, then we solve Schrödinger’s equation with the corresponding initial condition.
In cosmology, however, by definition, there is no rest of the universe to pass the specification of the boundary conditions off to. The boundary conditions must be part of the laws of physics themselves. Constructing a theory of the initial condition of the universe, effectively its initial quantum state, and examining its observational consequences is the province of that area of astrophysics that has come to be called quantum cosmology.[^2] This talk will consider one manifest feature of the quantum universe and its connection to the theory of the initial condition. This is the applicability of the deterministic laws of classical physics to a wide range of phenomena in the universe ranging from the cosmological expansion itself to the turbulent and viscous flow of water through a pipe. This quasiclassical realm[^3] is one of the most immediate facts of our experience. Yet what we know of the basic laws of physics suggests that we live in a quantum mechanical universe, characterized by indeterminacy and distributed probabilities, where classical laws can be but approximations to the unitary evolution of the Schrödinger equation and the reduction of the wave packet. What is the origin of this wide range of time, place, and scale on which classical determinism applies? How can we derive the form of the phenomenological classical laws, say the Navier-Stokes equations, from a distantly related fundamental quantum mechanical theory which might, after all, be heterotic, superstring theory? What features of these laws can be traced to their quantum mechanical origins? It is such old questions that will be examined anew in this lecture from the perspective of quantum cosmology, reporting largely on joint work with Murray Gell-Mann [@GH93a].
Standard derivations of classical behavior from the laws of quantum mechanics are available in many quantum mechanics texts. One popular approach is based on Ehrenfest’s theorem relating the acceleration of the expected value of position to the expected value of the force: $$m\ \frac{d^2\langle x\rangle}{dt^2} = - \left\langle\frac{\partial
V}{\partial x}\right\rangle\ ,
\label{oneone}$$ (written here for one-dimensional motion). Ehrenfest’s theorem is true in general, but for certain states, typically narrow wave packets, we may approximately replace the expected value of the force with the force evaluated at the expected value of position, thereby obtaining a classical equation of motion for that expected value: $$m\ \frac{d^2\langle x\rangle}{dt^2} = - \frac{\partial V(\langle x
\rangle)}{\partial x}\ .
\label{onetwo}$$ This equation shows that the center of a narrow wave packet moves on an orbit obeying Newton’s laws. More precisely, if we make a succession of position and momentum measurements that are crude enough not to disturb the approximation that allows (\[onetwo\]) to replace (\[oneone\]), the expected values of the results will be correlated by Newton’s deterministic law.
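The range of validity of this replacement can be made quantitative by a standard textbook estimate, added here for orientation (it is not part of the original argument). Expanding the force about the mean position and using $\langle x-\langle x\rangle\rangle=0$ gives $$\left\langle\frac{\partial V}{\partial x}\right\rangle
= \frac{\partial V(\langle x\rangle)}{\partial x}
+ \frac{1}{2}\,\frac{\partial^3V(\langle x\rangle)}{\partial x^3}\,
\bigl\langle(x - \langle x\rangle)^2\bigr\rangle + \cdots\ ,$$ so that (\[onetwo\]) is a good approximation to (\[oneone\]) as long as the width $\Delta x$ of the packet satisfies $\left|\partial^3V/\partial x^3\right|(\Delta x)^2 \ll \left|\partial V/\partial x\right|$ at $\langle x\rangle$. This is the precise sense in which the wave packet must be “narrow.”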
This kind of elementary derivation is inadequate for the type of classical behavior that we hope to discuss in quantum cosmology for the following reasons:
- The behavior of expected or average values is not enough to define classical behavior. In quantum mechanics, the statement that the moon moves on a classical orbit is properly the statement that, among a set of alternative histories of its position as a function of time, the probability is high for those histories exhibiting the correlations in time implied by Newton’s law of motion and near zero for all others. To discuss classical behavior, therefore, we should be dealing with the probabilities of individual time histories, not with expected or average values.
- The Ehrenfest theorem derivation deals with the results of “measurements” on an isolated system with a few degrees of freedom. However, in quantum cosmology we are interested in classical behavior in much more general situations, over cosmological stretches of space and time, and over a wide range of subsystems, [*independent*]{} of whether these subsystems are receiving attention from observers. Certainly we imagine that our observations of the moon’s orbit, or a bit of the universe’s expansion, have little to do with the classical behavior of those systems. Further, we are interested not just in classical behavior as exhibited in a few variables and at a few times of our choosing, but in as refined a description as possible, so that classical behavior becomes a feature of the systems themselves and not a choice of observers.
- The Ehrenfest theorem derivation relies on a close connection between the equations of motion of the fundamental action and the phenomenological deterministic laws that govern classical behavior. But when we speak of the classical behavior of the moon, or of the cosmological expansion, or even of water in a pipe, we are dealing with systems with many degrees of freedom whose phenomenological classical equations of motion may be only distantly related to the underlying fundamental theory, say superstring theory. We need a derivation which derives the [*form*]{} of the equations as well as the probabilities that they are satisfied.
- The Ehrenfest theorem derivation posits the variables — the position $x$ — in which classical behavior is exhibited. But, as mentioned above, classical behavior is most properly defined in terms of the probabilities and properties of histories. In a closed system we should be able to [*derive*]{} the variables that enter into the deterministic laws, especially because, for systems with many degrees of freedom, these may be only distantly related to the coördinates entering the fundamental action.
Despite these shortcomings, the elementary Ehrenfest analysis already exhibits two necessary requirements for classical behavior: Some coarseness is needed in the description of the system as well as some restriction on its initial condition. Not every initial wave function permits the replacement of (\[oneone\]) by (\[onetwo\]) and therefore leads to classical behavior; only for a certain class of wave functions will this be true. Even given such a suitable initial condition, if we follow the system too closely, say by measuring position exactly, thereby producing a completely delocalized state, we will invalidate the approximation that allows (\[onetwo\]) to replace (\[oneone\]) and classical behavior will not be expected. Some coarseness in the description of histories is therefore needed. For realistic systems we therefore have the important questions of [*how restricted*]{} is the class of initial conditions which lead to classical behavior and [*what*]{} and [*how large*]{} are the coarse grainings necessary to exhibit it.
Before pursuing these questions in the context of quantum cosmology I would like to review a derivation of classical equations of motion and the probabilities that they are satisfied in a simple class of model systems, but before doing [*that*]{} I must review, even more briefly, the essential elements of the quantum mechanics of closed systems [@Gri84; @Omnsum; @GH90a].
The Quantum Mechanics of Closed Systems {#sec:II}
=======================================
Most generally we aim at predicting the probabilities of alternative time histories of a closed system such as the universe as a whole. Alternatives at a moment of time are represented by an exhaustive set of orthogonal projection operators $\{P^k_{\alpha_k} (t_k)\}$. For example, these might be projections on a set of alternative intervals for the center of mass position of a collection of particles, or projections onto alternative ranges of their total momentum. The superscript denotes the set of alternatives (a certain set of position ranges or a certain set of momentum ranges), the discrete index $\alpha_k = 1,2,3 \cdots$ labels the particular alternative (a particular range of position), and $t_k$ is the time. A set of alternative histories is defined by giving a series of such alternatives at a sequence of times, say $t_1, \cdots, t_n$. An individual history is a sequence of alternatives $(\alpha_1, \cdots, \alpha_n)\equiv
\alpha$ and is represented by the corresponding chain of projections. $$C_\alpha \equiv P^n_{\alpha_n} (t_n) \cdots P^1_{\alpha_1} (t_1)\ .
\label{twoone}$$ Such a set is said to be “coarse-grained” because the $P$’s do not restrict all possible variables and because they do not occur at all possible times.
The decoherence functional $$D\left(\alpha^\prime, \alpha\right) = Tr\,\bigl[C_{\alpha^\prime} \rho
C^\dagger_\alpha\bigr]
\label{twotwo}$$ measures the amount of quantum mechanical interference between pairs of histories in a universe whose initial condition is represented by a density matrix $\rho$. When, for a given set, the interference between all pairs of distinct histories is sufficiently low, $$D\left(\alpha^\prime, \alpha\right) \approx 0\quad , \quad {\rm all}
\ \alpha^\prime \not=
\alpha
\label{twothree}$$ the set of alternative histories is said to [*decohere*]{}, and probabilities can be consistently assigned to its individual members. The probability of an individual history $\alpha$ is just the corresponding diagonal element of $D$, [*viz.*]{} $$p(\alpha) = D(\alpha, \alpha)\ .
\label{twofour}$$
Describe in terms of operators, check decoherence and evaluate probabilities — that is how predictions are made for a closed system, whether the alternatives are participants in a measurement situation or not.
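As a concrete illustration of this machinery (ours, not part of the original discussion), the following sketch builds the class operators (\[twoone\]) and the decoherence functional (\[twotwo\]) for a toy two-state system; the Hamiltonian, times, and initial state are arbitrary choices.

```python
import numpy as np

# Toy closed system: one qubit with H = (omega/2) sigma_z and a pure initial
# condition rho = |0><0|.  All numbers are arbitrary illustrative choices (hbar = 1).
omega, t1, t2 = 1.0, 0.4, 1.1

def U(t):
    # Time-evolution operator exp(-i H t) for the diagonal H above.
    return np.diag(np.exp(-0.5j * omega * np.array([1.0, -1.0]) * t))

# Alternatives at each time: projections onto the sigma_x eigenstates |+> and |->.
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
minus = np.array([1.0, -1.0]) / np.sqrt(2.0)
P = [np.outer(v, v) for v in (plus, minus)]        # exhaustive and orthogonal

def P_heis(alpha, t):
    # Heisenberg-picture projection P_alpha(t) = U(t)^dag P_alpha U(t).
    return U(t).conj().T @ P[alpha] @ U(t)

def C(alpha):
    # Class operator of eq. (2.1) for the history alpha = (alpha_1, alpha_2).
    a1, a2 = alpha
    return P_heis(a2, t2) @ P_heis(a1, t1)

rho = np.array([[1.0, 0.0], [0.0, 0.0]])

def D(ap, a):
    # Decoherence functional of eq. (2.2).
    return np.trace(C(ap) @ rho @ C(a).conj().T)

hists = [(a1, a2) for a1 in (0, 1) for a2 in (0, 1)]
total = sum(D(hp, h) for hp in hists for h in hists)
assert abs(total - 1.0) < 1e-12                    # sum over all pairs is Tr rho = 1
# The candidate probabilities D(alpha, alpha) are real and non-negative, but for
# this fine-grained qubit set the off-diagonal elements need not vanish:
assert all(abs(D(h, h).imag) < 1e-12 and D(h, h).real >= -1e-12 for h in hists)
```

For such a fine-grained set, decoherence has to be checked, not assumed; probabilities are assigned only when the off-diagonal elements of $D$ are negligible.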
When the projections at each time are onto the ranges $\{\Delta_\alpha\}$ of some generalized coördinates $q^i$ the decoherence functional can be written in a convenient path integral form
$$D\left(\alpha^\prime, \alpha\right) = \int_{\alpha^\prime}
\delta q^\prime \int_\alpha \delta q\, \delta \bigl(q^\prime_f
- q_f\bigr)
e^{i(S[q^\prime(\tau)] - S[q(\tau)])/\hbar} \rho \left(q^\prime_0,
q_0\right)
\label{twofive}$$
where the integral is over the paths that pass through the intervals defining the histories (Fig. 1). This form will be useful in what follows.
Classical Behavior in a Class of Model Quantum Systems {#sec:III}
======================================================
The class of models we shall discuss are defined by the following features:
- We restrict attention to coarse grainings that follow a fixed subset of the fundamental coördinates $q^i$, say the center of mass position of a massive body, and ignore the rest. We denote the followed variables by $x^a$ and the ignored ones by $Q^A$ so that $q^i=(x^a, Q^A)$. We thus posit, rather than derive, the variables exhibiting classical behavior, but we shall derive, rather than posit, the form of their phenomenological equations of motion.
- We suppose the action is the sum of an action for the $x$’s, an action for the $Q$’s, and an interaction between them that is the integral of a local Lagrangian free from time derivatives. That is, $$S[q(\tau)] = S_{\rm free} [x(\tau)] + S_0 [Q(\tau)] + S_{\rm int}
[x(\tau), Q(\tau)]
\label{threeone}$$ suppressing indices where clarity is not diminished.
- We suppose the initial density matrix factors into a product of one depending on the $x$’s and another depending on the ignored $Q$’s which are often called the “bath” or the “environment”.
$$\rho\left(q^\prime_0, q_0\right) = \bar\rho \left(x^\prime_0,
x_0\right)\, \rho_B \left(Q^\prime_0, Q_0\right)\ .
\label{threetwo}$$
Under these conditions the integral over the $Q$’s in (\[twofive\]) can be carried out to give a decoherence functional just for coarse-grained histories of the $x$’s of the form:
$$D\left(\alpha^\prime, \alpha\right) = \int_{\alpha^\prime} \delta x^\prime
\int_{\alpha} \delta x\, \delta\bigl(x^\prime_f - x_f\bigr)
\exp
\biggl\{i\Bigl(S_{\rm free} [x^\prime (\tau)]
- S_{\rm free} [x(\tau)]
+ W
\left[x^\prime (\tau), x(\tau)\right]\Bigr)/\hbar\biggr\}\,
\bar\rho\left(x^\prime_0, x_0\right)
\label{threethree}$$
where $W [x^\prime(\tau), x(\tau)]$, called the Feynman-Vernon influence phase, summarizes the results of integrations over the $Q$’s.
The influence phase $W$ generally possesses a positive imaginary part [@Bru93]. If that grows as $|x^\prime-x|$ increases, it will effect decoherence because there will then be negligible contribution to the integral (\[threethree\]) for $x^\prime \not= x$ or $\alpha^\prime \not= \alpha$. That, recall, is the definition of decoherence (\[twothree\]). Let us suppose this to be the case, as is true in many realistic examples. Then we can make an important approximation, which is a [*decoherence*]{} [*expansion*]{}. Specifically, introduce coördinates which measure the average and difference between $x^\prime$ and $x$ (Fig. 2) $$X = \half \left(x^\prime + x\right)\ , \quad \xi = x^\prime - x\ .
\label{threefour}$$
The integral defining the diagonal elements of $D$, which are the probabilities of the histories, receives a significant contribution only for small $\xi(t)$. We can thus expand the exponent of the integrand of (\[threethree\]) in powers of $\xi(t)$ and legitimately retain only the lowest, say up to quadratic, terms. The result for the exponent is
$$\begin{aligned}
S[x(\tau) & + & \xi(\tau)/2] - S[x(\tau) -\xi(\tau)/2] + W[x(\tau), \xi(\tau)]
\nonumber\\
& = & -\xi_0 P_0
+ \int^T_0 dt\, \xi(t)\ \left[\frac{\delta S}{\delta X(t)} +
\left(\frac{\delta W}{\delta\xi(t)}\right)_{\xi(t)=0}\right]
+ \half \int^T_0 dt' \int^T_0 dt\ \xi(t^\prime)
\left(\frac{\delta^2 W}{\delta \xi(t')
\delta \xi(t)}\right)_{\xi(t)=0}
\ \xi(t) + \cdots\ .
\label{threefive}\end{aligned}$$
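The structure of this expansion can be checked symbolically: in the difference of the two actions only odd powers of $\xi$ survive, and the linear term carries $\delta S/\delta X$. The sketch below (ours, not the paper's) uses a simple quartic function as a stand-in for the full action functional.

```python
import sympy as sp

# Stand-in for the action: a nonlinear function S(x) = a x**2 + b x**4.
X, xi, a, b = sp.symbols('X xi a b')
S = lambda x: a * x**2 + b * x**4

diff = sp.expand(S(X + xi / 2) - S(X - xi / 2))

# Only odd powers of xi survive in the difference ...
assert diff.coeff(xi, 0) == 0
assert diff.coeff(xi, 2) == 0
# ... and the linear term carries dS/dX, the structure appearing in eq. (3.5).
assert sp.simplify(diff.coeff(xi, 1) - sp.diff(S(X), X)) == 0
```

The same bookkeeping carries over to the functional case, where the coefficients become functional derivatives evaluated on the average path $X(t)$.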
The essentially unrestricted integrals over the $\xi(t)$ can then be carried out to give the following expression for the probabilities
$$p(\alpha) = \int_\alpha \delta X\, ({\rm det}\ K_I/4\pi)^{-\half}
\exp\Bigl[-\frac{1}{\hbar} \int^T_0 dt' \int^T_0 dt\ {\cal E}
(t', X(\tau)]\, K^{\rm inv}_I \left(t', t; X(\tau)\right]\ {\cal E}
(t, X(\tau)]\Bigr]\ \bar w \left(X_0, P_0\right)\ .
\label{threesix}$$
Here, $${\cal E}(t, X(\tau)] \equiv \frac{\delta S}{\delta X(t)} +
\left\langle F(t, X(\tau)]\right\rangle
\label{threeseven}$$ where $\langle F(t, X (\tau)]\rangle$ has been written for $(\delta
W/\delta\xi(t))_{\xi=0}$ because it can be shown to be the expected value of the force arising from the ignored variables in the state of the bath. $K^{\rm inv}_I(t', t; X(\tau)]$ is the inverse of $(2\hbar/i)(\delta^2 W/\delta\xi(t')\delta\xi(t))$ which turns out to be real and positive. Finally $\bar w(X,P)$ is the Wigner distribution for the density matrix $\bar\rho$: $$\bar w(X, P) = \frac{1}{2\pi} \int d\xi\, e^{i P\xi /\hbar}\bar\rho (X+\xi/2, X-\xi/2)\ .
\label{threeeight}$$ This expression shows that, when $K^{\rm inv}_I$ is sufficiently large, the probabilities for histories of $X(t)$ are peaked about those which satisfy the equation of motion $${\cal E} (t, X(\tau)] = \frac{\delta S}{\delta X(t)} + \langle F(t,
X(\tau)]\rangle = 0\ .
\label{threenine}$$ and the initial conditions of these histories are distributed according to the Wigner distribution. The Wigner distribution is not generally positive, but, up to the accuracy of the approximations, this integral of it must be positive [@Hal92].
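As a numerical aside (ours, not from the paper), the quadrature below evaluates the definition (\[threeeight\]) for a pure Gaussian state, for which the Wigner distribution is positive everywhere and so plays directly the role of a classical distribution of initial data; the width $s$ is an arbitrary choice and $\hbar = 1$.

```python
import numpy as np

# Pure Gaussian state psi(x) ~ exp(-x**2 / (4 s**2)); hbar = 1, width s arbitrary.
hbar, s = 1.0, 0.7

def rho_bar(xp, x):
    # Density matrix rho(x', x) = psi(x') psi(x)* of the Gaussian state.
    return np.exp(-(xp**2 + x**2) / (4.0 * s**2)) / np.sqrt(2.0 * np.pi * s**2)

def wigner(X, P, n=5001, L=10.0):
    # Direct quadrature of eq. (3.8): (1/2 pi) * integral of exp(i P xi) rho(X+xi/2, X-xi/2).
    xi = np.linspace(-L, L, n)
    f = np.exp(1j * P * xi / hbar) * rho_bar(X + xi / 2.0, X - xi / 2.0)
    return (np.sum(f) * (xi[1] - xi[0])).real / (2.0 * np.pi)

# The result is the everywhere-positive phase-space Gaussian
# (1/pi) exp(-X**2 / (2 s**2) - 2 s**2 P**2), a legitimate distribution of initial data.
analytic = lambda X, P: np.exp(-X**2 / (2.0 * s**2) - 2.0 * s**2 * P**2) / np.pi
for point in [(0.0, 0.0), (1.0, 0.5), (-0.5, 1.5)]:
    assert abs(wigner(*point) - analytic(*point)) < 1e-6
```

For non-Gaussian states the same quadrature produces regions where $\bar w < 0$, which is why only its integral over the coarse-grained cells, not $\bar w$ itself, need be positive.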
Thus we derive the form of the phenomenological equations of motion for this class of models. It is the equation of motion of the fundamental action $S[X(t)]$ corrected by phenomenological forces arising from the interaction with the bath. These depend not only on the form of the interaction Hamiltonian but also on the initial state of the bath, $\rho_B$. These forces are generally non-local in time, depending at a given instant on the whole trajectory $X(\tau)$. It can be shown that quantum mechanical causality implies that they depend only on part of path $X(\tau)$ to the past of $t$. Thus quantum mechanical causality implies classical causality.
It is important to stress that the expansion of the decoherence functional has enabled us to consider the equations of motion for fully non-linear systems, not just the linear oscillator models that have been widely studied.
The equation of motion (\[threenine\]) is not predicted to be satisfied [*exactly*]{}. The probabilities are [*peaked*]{} about ${\cal E}=0 $ but distributed about that value with a width that depends on the size of $K^{\rm inv}$. That is quantum noise whose spectrum and properties can be derived from (\[threethree\]). The fact that both the spectrum of fluctuations and the phenomenological forces can be derived from the same influence phase is the origin of the fluctuation dissipation theorem for linear systems.
Simple examples of this analysis are the linear oscillator models that have been studied using path integrals by Feynman and Vernon [@FV63], Caldeira and Leggett [@CL83], Unruh and Zurek [@UZ89], and many others. For these, the $x$’s describe a distinguished harmonic oscillator linearly coupled to a bath of many others. If the initial state of the bath is a thermal density matrix, then the decoherence expansion is exact. In the especially simple case of a cut-off continuum of bath oscillators and high bath temperature, there are the following results: The imaginary part of the influence phase is given by $$ImW[x'(\tau),x(\tau)]= \frac{2M\gamma kT_B}{\hbar} \int^T_0 dt
\left(x^\prime(t) -
x(t)\right)^2
\label{threeten}$$ where $M$ is the mass of the $x$-oscillator, $\gamma$ is a measure of the strength of its coupling to the bath, and $T_B$ is the temperature of the bath. The exponent of the expression (\[threesix\]) giving the probabilities for histories is $$-\frac{M}{8\gamma kT_B} \int^T_0 dt\, \left[\ddot X + \omega^2 X +
2\gamma \dot X\right]^2
\label{threeeleven}$$ where $\omega$ is the frequency of the $x$-oscillator renormalized by its interaction with the bath. The phenomenological force is friction, and the occurrence of $\gamma$, both in that force and the constant in front of (\[threeeleven\]), whose size governs the deviation from classical predictability, is a simple example of the fluctuation-dissipation theorem.
In this simple case, an analysis of the requirements for classical behavior is straightforward. To achieve decoherence we need high values of $\gamma kT_B$. That is, strong coupling is needed if interference phases are to be dissipated efficiently into the bath. However, the larger the value of $\gamma kT_B$ the smaller the coefficient in front of (\[threeeleven\]), decreasing the size of the exponent and [*increasing*]{} deviations from classical predictability. This is reasonable: the stronger the coupling to the bath the more noise is produced by the interactions that are carrying away the phases. To counteract that, and achieve a sharp peaking about the classical equation of motion, $M$ must be large so that $M/\gamma kT_B$ is large. That is, high inertia is needed to resist the noise that arises from the interactions with the bath.
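This tradeoff can be made quantitative in a small simulation (ours; the parameters are arbitrary and $k$ is Boltzmann's constant). The weight (\[threeeleven\]) is that of trajectories of the Langevin equation $\ddot X + \omega^2 X + 2\gamma\dot X = \xi(t)$ with white noise of spectral density $4\gamma kT_B/M$, whose stationary spread obeys equipartition, $\langle X^2\rangle = kT_B/M\omega^2$: larger $M$ at fixed $\gamma kT_B$ means smaller fluctuations about the classical path.

```python
import numpy as np

# Langevin dynamics carrying the weight (3.11):
#   Xdd + omega**2 X + 2 gamma Xd = xi(t),
#   <xi(t) xi(t')> = (4 gamma k T_B / M) delta(t - t').
# Arbitrary illustrative parameters, units with k = 1.
rng = np.random.default_rng(0)
M, omega, gamma, kTB = 1.0, 1.0, 0.2, 0.5
dt, nsteps = 2e-3, 2_000_000

noise = rng.normal(0.0, np.sqrt(4.0 * gamma * kTB / (M * dt)), nsteps)

x, v = 0.0, 0.0
xs = np.empty(nsteps)
for i in range(nsteps):          # semi-implicit Euler integration
    v += (-(omega**2) * x - 2.0 * gamma * v + noise[i]) * dt
    x += v * dt
    xs[i] = x

# Equipartition: once damping and noise balance, <X**2> -> k T_B / (M omega**2);
# raising M at fixed gamma * k * T_B shrinks this spread about the classical path.
target = kTB / (M * omega**2)
x2 = np.mean(xs[nsteps // 5:] ** 2)
assert abs(x2 - target) / target < 0.3
```

The tolerance is loose because the time average carries statistical error; the point is only the scaling of the spread with $kT_B/M\omega^2$, the fluctuation-dissipation relation in miniature.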
Thus, much more coarse graining is needed to ensure classical predictability than naive arguments based on the uncertainty principle would suggest. Coarse graining is needed to effect decoherence, and coarse graining beyond that to achieve the inertia necessary to resist the noise that the mechanisms of decoherence produce.
Quasiclassical Realms in Quantum Cosmology {#sec:IV}
==========================================
As observers of the universe, we deal every day with coarse-grained histories that exhibit classical correlations. Indeed, only by extending our direct perceptions with expensive and delicate instruments can we exhibit [*non*]{}-classical behavior. The coarse grainings that we use individually and collectively are, of course, characterized by a large amount of ignorance, for our observations determine only a very few of the variables that describe the universe and those only very imprecisely. Yet, we have the impression that the universe exhibits a much finer-grained set of histories, [*independent of our choice*]{}, defining an always decohering “quasiclassical realm”, to which our senses are adapted, but of which they deal with only a small part. If we are preparing for a journey to a yet unseen part of the universe, we do not believe that we need to equip our spacesuits with detectors, say sensitive to coherent superpositions of position or other unfamiliar quantum operators. We expect that histories of familiar quasiclassical operators will decohere and exhibit patterns of classical correlation there as well as here.
Roughly speaking, a quasiclassical realm is a set of decohering histories that is maximally refined with respect to decoherence, and whose individual histories exhibit as much as possible patterns of deterministic correlation. At present we lack satisfactory measures of maximality and classicality with which to make the existence of one or more quasiclassical realms into quantitative questions in quantum cosmology [@GH90a; @PZ93]. We therefore do not know whether the universe exhibits a [*unique*]{} class of roughly equivalent sets of histories with high levels of classicality constituting the quasiclassical realm of familiar experience, or whether there might be other essentially inequivalent quasiclassical realms [@GH94]. However, even in the absence of such measures and such analyses, we can make an argument for the form of at least some of the operators we expect to occur over and over again in histories defining one kind of quasiclassical realm — operators we might call “quasiclassical”. In the earliest instants of the history of the universe, the coarse grainings defining spacetime geometry on scales above the Planck scale must emerge as quasiclassical. Otherwise, our theory of the initial condition is simply inconsistent with observation in a manifest way. Then, when there is classical spacetime geometry we can consider the conservation of energy and momentum, and of other quantities which are conserved by virtue of the equations of quantum fields. Integrals of densities of conserved or nearly conserved quantities over suitable volumes are natural candidates for quasiclassical operators. Their approximate conservation allows them to resist deviations from predictability caused by “noise” arising from their interactions with the rest of the universe that accomplish decoherence. Such “hydrodynamic” variables [*are*]{} among the principal variables of classical theories.
This argument is not unrelated to a standard one in classical statistical mechanics that seeks to identify the variables in which a hydrodynamic description of non-equilibrium systems may be expected. All isolated systems approach equilibrium — that is statistics. With certain coarse grainings this approach to equilibrium may be approximately described by hydrodynamic equations, such as the Navier-Stokes equation, incorporating phenomenological descriptions of dissipation, viscosity, heat conduction, diffusion, etc. The variables that characterize such hydrodynamic descriptions are the local quantities which vary most [*slowly*]{} in time — that is, averages of densities of approximately conserved quantities over suitable volumes. The volumes must be large enough that statistical fluctuations in the values of the averages are small, but small enough that equilibrium is established within each volume in a time short compared to the dynamical times on which the variables vary. The constitutive relations defining coefficients of viscosity, diffusion, etc. are then defined and independent of the initial condition, permitting the closure of the set of hydrodynamic equations. Local equilibrium being established, the further equilibration of the volumes among themselves is described by the hydrodynamic equations. In the context of quantum cosmology, coarse grainings by averages of densities of approximately conserved quantities not only permit local equilibrium and resist gross statistical fluctuations leading to high probabilities for deterministic histories as in this argument, they also, as described above, resist the fluctuations arising from the mechanisms of decoherence necessary for predicting probabilities of any kind in quantum mechanics.
In this way we can sketch how a quasiclassical realm consisting of histories of ranges of values of quasiclassical operators, extended over cosmological dimensions both in space and in time, but highly refined with respect to those scales, is a feature of our universe and thus must be a prediction of its quantum initial condition. It may seem strange to attribute the classical behavior of everyday objects to the initial condition of the universe some 12 billion years ago, but, in this connection, two things should be noted: First, we are not just speaking of the classical behavior of a few objects described in a very coarse graining of our choosing, but of a much more refined feature of the universe extending over cosmological dimensions and indeed including the classical behavior of the cosmological geometry itself all the way back to the briefest of moments after the big bang. Second, at the most fundamental level the [*only*]{} ingredients entering into quantum mechanics are the theory of the initial condition and the theory of dynamics, so that [*any*]{} feature of the universe must be traceable to these two starting points and the accidents of our particular history. Put differently (neglecting quantum gravity) the possible classical behavior of a set of histories represented by strings of projection operators as in (\[twoone\]) does not depend on the operators alone except in trivial cases. Rather, like decoherence itself, classicality depends on the relation of those operators to the initial state $|\Psi\rangle$ through which we calculate the decoherence and probabilities of sets of histories by which classical behavior is defined.
Yet it is reasonable to ask — how sensitive is the existence of a quasiclassical realm to the particular form of the initial condition? In seeking to answer this question it is important to recognize that there are two things it might mean. First, we might ask whether [*given*]{} an initial state $|\Psi\rangle$, there is always a set of histories which decoheres and exhibits deterministic correlations. There is, trivially. Consider the set of histories which just consists of projections down on ranges $\{\Delta E_\alpha\}$ of the [*total*]{} energy (or any other conserved quantity) at a sequence of times $$C_\alpha = P^H_{\alpha_n} (t_n) \cdots P^H_{\alpha_1} (t_1)
\ .\label{fourone}$$ Since the energy is conserved these operators are independent of time, commute, and $C_\alpha$ is merely the projection onto the intersection of the intervals $\Delta E_{\alpha_1}, \cdots, \Delta
E_{\alpha_n}$. The set of histories represented by (\[fourone\]) thus [*exactly*]{} decoheres $$D\left(\alpha^\prime, \alpha\right) = Tr\bigl[C_{\alpha^\prime}
|\Psi\rangle\langle\Psi|C^\dagger_\alpha\bigr] =
\langle\Psi|C^\dagger_\alpha C_{\alpha^\prime} |\Psi \rangle \propto
\delta_{\alpha\alpha^\prime}\ ,
\label{fourtwo}$$ and exhibits deterministic correlations — the total energy today is the same as it was yesterday. Of course, such a set is far from maximal, but imagine subdividing the total volume again and again and considering the set of histories which results from following the values of the energy in each subvolume over the sequence of times. If the process of subdividing is followed until we begin to lose decoherence we might hope to retain some level of determinism while moving towards maximality. Thus, it seems likely that, for most initial $|\Psi\rangle$, we may find [*some*]{} sets of histories which constitute a quasiclassical realm.
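The exact decoherence of the histories (\[fourone\]) is easy to verify numerically. In the sketch below (ours), the Hamiltonian is a random Hermitian matrix and the two alternative "ranges" of the conserved energy are the lower and upper halves of its spectrum.

```python
import numpy as np

# A random Hermitian "Hamiltonian" and projections onto two ranges of its spectrum.
rng = np.random.default_rng(1)
dim = 6
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2.0
_, V = np.linalg.eigh(H)
P = [V[:, :dim // 2] @ V[:, :dim // 2].conj().T,   # lower half of the spectrum
     V[:, dim // 2:] @ V[:, dim // 2:].conj().T]   # upper half

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

# Because the P's conserve the energy they are time independent in the Heisenberg
# picture, so the chains (4.1) are plain products of projections.
hists = [(0, 0), (0, 1), (1, 0), (1, 1)]
C = {h: P[h[1]] @ P[h[0]] for h in hists}

D = np.array([[psi.conj() @ C[h].conj().T @ C[hp] @ psi for h in hists]
              for hp in hists])
assert np.allclose(D, np.diag(np.diag(D)))       # exact decoherence, eq. (4.2)
assert np.isclose(np.trace(D).real, 1.0)         # the probabilities sum to one
```

Only the two "constant" histories carry probability; the alternatives in which the energy jumps from one range to the other have vanishing class operators, which is the deterministic correlation in its simplest form.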
However, we might ask about the sensitivity of a quasiclassical realm to initial condition in a different way. We might fix the chains of projections that describe [*our*]{} highly refined quasiclassical realm and ask for how many [*other*]{} initial states does this set of histories decohere and exhibit the same classical correlations. This amounts to asking, for a given set of alternative histories $\{C_\alpha\}$, how many initial states $|\Psi\rangle$ will have the same decoherence functional? Expand $|\Psi\rangle$ in some generic basis in Hilbert space, $|i\rangle$: $$|\Psi\rangle = \sum\nolimits_i c_i | i\rangle\ .
\label{fourthree}$$ The condition that $|\Psi\rangle$ result in a given decoherence functional $D(\alpha^\prime, \alpha)$ is $$\sum\nolimits_{ij} c^*_i c_j\ \bigl\langle i|C^\dagger_{\alpha^\prime}
C_\alpha | j\bigr\rangle = D\left(\alpha^\prime, \alpha\right)\ .
\label{fourfour}$$ Unless the $C_\alpha$ are such that decoherence and correlations are trivially implied by the operators (as is the above example of chains of projections onto a total conserved energy), the matrix elements $\langle
i|C^\dagger_{\alpha^\prime}\, C_\alpha | j\rangle$ will not vanish identically. Equation (\[fourfour\]) is therefore (number of histories $\alpha$)$^2$ equations for (dimension of Hilbert space) coefficients. When that dimension is made finite, say by limiting the total volume and energy, we expect a solution only when $$\left({\rm number\ of\ histories}\atop{\rm in\ the\ quasiclassical
\ realm}\right)^2 \lesssim \left({\rm dim}
\ {\cal H}\right)\ .
\label{fourfive}$$ As the set of histories becomes increasingly refined, so that there are more and more alternative cases, the two sides may come closer to equality. The number of states $|\Psi\rangle$ which reproduce the [*particular*]{} maximal quasiclassical realm of our universe may thus be large but still small compared to the total number of states in Hilbert space.
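The counting in (\[fourfive\]) can be made concrete with toy numbers (ours, purely illustrative): following $k$ alternatives at each of $m$ times gives $k^m$ histories, so the condition reads $k^{2m} \lesssim {\rm dim}\,{\cal H}$.

```python
# Following k alternatives at each of m times gives k**m histories, so the
# condition (4.5) reads k**(2*m) <~ dim(H).  Toy numbers, purely illustrative:
k, m = 2, 20                   # binary alternatives at twenty moments of time
n_hist = k ** m                # about 10**6 histories in the set
dim_H = 2 ** 50                # a "universe" of fifty two-state systems

assert n_hist ** 2 <= dim_H    # 2**40 <= 2**50: such a realm leaves many states free
assert (k ** 30) ** 2 > dim_H  # refining to thirty moments already overshoots
```

As the realm is refined the left side grows exponentially in the number of followed alternatives, which is why the states reproducing a particular maximal realm are expected to be a small fraction of Hilbert space.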
The Main Points Again {#sec:V}
=====================
- Classical behavior of quantum systems is defined through the probabilities of deterministic correlations of individual time histories of a closed system.
- Classical predictability requires coarse graining to accomplish decoherence, and coarse graining beyond that to achieve the necessary inertia to resist the noise which mechanisms of decoherence produce.
- The maximally refined quasiclassical realm of familiar experience is an emergent feature, not of quantum evolution alone, but of that evolution, coupled to a specific theory of the universe’s initial condition. Whether the whole closed system exhibits a quasiclassical realm like ours, and indeed whether it exhibits more than one essentially inequivalent realm, are calculable questions in quantum cosmology if suitable measures of maximality and classicality can be supplied.
- A generic initial state will exhibit some sort of quasiclassical realm, but the maximally refined quasiclassical realm of familiar experience will be an emergent feature of only a small fraction of the total possible initial states of the universe.
Most of this paper reports joint work with M. Gell-Mann. The author’s research was supported in part by NSF grant PHY90-08502.
[99]{}
J. Halliwell, in [*Quantum Cosmology and Baby Universes: Proceedings of the 1989 Jerusalem Winter School for Theoretical Physics*]{}, edited by S. Coleman, J.B. Hartle, T. Piran, and S. Weinberg, World Scientific, Singapore (1991), pp. 65-157.
M. Gell-Mann and J.B. Hartle, [*Phys. Rev. D*]{} [**47**]{}, 3345 (1993).
R. Griffiths, [*J. Stat. Phys.*]{} [**36**]{} 219 (1984).
R. Omnès, [*J. Stat. Phys.*]{} [**53**]{}, 893 (1988); [*ibid*]{} [**53**]{}, 933 (1988); [*ibid*]{} [**53**]{}, 957 (1988); [*ibid*]{} [**57**]{}, 357 (1989); [*Rev. Mod. Phys.*]{} [**64**]{}, 339 (1992).
M. Gell-Mann and J.B. Hartle, in [*Complexity, Entropy, and the Physics of Information, SFI Studies in the Sciences of Complexity*]{}, Vol. VIII, edited by W. Zurek, Addison Wesley, Reading (1990), or in [*Proceedings of the 3rd International Symposium on the Foundations of Quantum Mechanics in the Light of New Technology*]{}, edited by S. Kobayashi, H. Ezawa, Y. Murayama, and S. Nomura, Physical Society of Japan, Tokyo (1990).
T. Brun, [*Phys. Rev. D*]{} [**47**]{} 3383 (1993).
J. Halliwell, [*Phys. Rev. D*]{} [**46**]{}, 1610 (1992).
R.P. Feynman and J.R. Vernon, [*Ann. Phys. (N.Y.)*]{} [**24**]{}, 118 (1963).
A. Caldeira and A. Leggett, [*Physica*]{} [**121A**]{}, 587 (1983).
W. Unruh and W. Zurek, [*Phys. Rev. D*]{} [**40**]{}, 1071 (1989).
J.P. Paz and W.H. Zurek, [*Phys. Rev. D*]{} [**48**]{}, 2728, (1993).
M. Gell-Mann and J.B. Hartle, [*Equivalent Sets of Histories and Multiple Quasiclassical Domains*]{}; gr-qc/9404013.
[^1]: Talk given at the Lanczos Centenary Meeting, North Carolina State University, December 15, 1993
[^2]: For a recent review see [@Hal91]
[^3]: Earlier work, e.g. [@GH90a], called this the ‘quasiclassical domain’, but this risks confusion with the usage in condensed matter physics.
---
abstract: 'X-ray polarimetry in astronomy has not been exploited well, despite its importance. The recent innovation of instruments is changing this situation. We focus on a complementary MOS (CMOS) pixel detector with small pixel size and employ it as an x-ray photoelectron tracking polarimeter. The CMOS detector we employ is developed by GPixel Inc., and has a pixel size of 2.5$\mathrm{\mu}$m $\times$ 2.5 $\mathrm{\mu}$m. Although it is designed for visible light, we succeed in detecting x-ray photons with an energy resolution of 176eV (FWHM) at 5.9keV at room temperature and the atmospheric condition. We measure the x-ray detection efficiency and polarimetry sensitivity by irradiating polarized monochromatic x-rays at BL20B2 in SPring-8, the synchrotron radiation facility in Japan. We obtain modulation factors of 7.63% $\pm$ 0.07% and 15.5% $\pm$ 0.4% at 12.4keV and 24.8keV, respectively. It demonstrates that this sensor can be used as an x-ray imaging spectrometer and polarimeter with the highest spatial resolution ever tested.'
author:
- Kazunori Asakura
- Kiyoshi Hayashida
- Takashi Hanasaka
- Tomoki Kawabata
- Tomokage Yoneyama
- Koki Okazaki
- Shuntaro Ide
- Hirofumi Noda
- Hironori Matsumoto
- Hiroshi Tsunemi
- Hisamitsu Awaki
- Hiroshi Nakajima
bibliography:
- 'GMAX0505\_paper.bib'
title: 'X-ray imaging polarimetry with a 2.5-$\mathrm{\mu}$m pixel CMOS sensor for visible light at room temperature'
---
[**\***Kazunori Asakura, asakura$\_$k@ess.sci.osaka-u.ac.jp]{}
Introduction {#sec:intro}
============
Polarimetry is a powerful diagnostic tool to measure magnetic fields and scattering processes of celestial objects. In fact, polarimetry has been widely employed for various targets and purposes in radio-to-ultraviolet astronomy. On the other hand, polarimetry in x-ray and $\gamma$-ray astronomy had long been an unexploited field after the first detection of polarization of the Crab Nebula with a sounding rocket instrument[@Novick1972] and succeeding satellite observations[@Weisskopf1978]. Polarimetry in hard x-ray and soft $\gamma$-ray astronomy has been markedly activated in recent years, starting from the detection of polarization in the Crab Nebula with instruments on-board INTEGRAL[@Forot2008; @Dean2008]. Hard x-ray to soft $\gamma$-ray polarimetry of the Crab Nebula has succeeded with other instruments, e.g., the CZT Imager on ASTROSAT[@Vadawale2018], PoGO+[@Chauvin2017; @Chauvin2018], and the Soft Gamma-ray Detector on Hitomi[@Hitomi2018]. Polarization of Cyg X-1 has also been measured with some of these instruments. Gamma-ray bursts are also the targets of extensive observations of their polarization, with GAP on IKAROS[@Yonetoku2011], the CZT Imager on ASTROSAT[@Chattopadhyay2017], and POLAR on-board Tiangong-2[@Zhang2019]. The imaging x-ray polarimetry explorer (IXPE)[@Weisskopf2016], a small satellite mission dedicated to soft x-ray polarimetry, is scheduled to be launched in 2021.
Various polarimetry instruments have been developed in x-ray and $\gamma$-ray astronomy, as reviewed by Weisskopf et al.[@Weisskopf2010] They can be classified into three main categories by their working principle: Bragg reflection, Compton/Thomson scattering, and photoelectron tracking. The first two were employed in real observations, whereas observation with the last one will be achieved with IXPE. Photoelectron tracking polarimetry utilizes the anisotropic distribution of the photoelectron emission in photoelectric absorption of x-rays. The anisotropy is most enhanced in the K-shell photoelectric absorption; its differential cross section is described as $$\begin{aligned}
\label{eq1}
\frac{d{\sigma}}{d{\Omega}} \propto \frac{\sin^2{\theta}\,\cos^2{\phi}}{(1-{\beta}\cos{\theta})^4},\end{aligned}$$ where $\theta$ and $\phi$ are the polar and azimuth angles of the emitted photoelectron with respect to the x-ray polarization vector, and $\beta$ is the speed of the photoelectron normalized by the speed of light. IXPE employs micropixel gas detectors with gas electron multiplier foils. Similar format detectors were developed by various groups[@Sakurai1996; @Tanimori1999; @Black2003]. Photoelectron tracking polarimeters can also be realized with solid-state detectors. The first demonstration with a CCD was presented by Tsunemi et al.[@Tsunemi1992] In that experiment, a CCD with a pixel size of 12$\mathrm{\mu}$m was employed. A CCD with a smaller pixel size of 6.8$\mathrm{\mu}$m was employed in the following experiment conducted by Buschhorn et al.[@Buschhorn1994] Polarimetric performance can be optimized with data reduction.[@Hayashida1999] Although x-ray CCDs were utilized in most of the x-ray astronomical satellites since ASCA, no positive detection of polarization has been reported. This is mainly due to the pixel size of the x-ray CCDs used in space, ranging from 24$\mathrm{\mu}$m (Chandra ACIS and Suzaku XIS) to 150$\mathrm{\mu}$m (XMM-Newton EPIC-pn), which is too large to measure photoelectron tracks.
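A small Monte Carlo (ours, not from the paper) illustrates how eq. (\[eq1\]) translates into polarimetric sensitivity. Integrated over the polar angle, the azimuth $\phi$ follows $dN/d\phi \propto \cos^2\phi$, so an ideal tracker has a modulation factor $\mu = (N_{\rm max}-N_{\rm min})/(N_{\rm max}+N_{\rm min}) = 1$; a Gaussian error of width $\sigma$ on the reconstructed azimuth (a crude stand-in for finite pixel size) reduces it to $\mu = e^{-2\sigma^2}$.

```python
import numpy as np

# Sample photoelectron azimuths from dN/dphi ~ cos^2(phi) by rejection sampling.
rng = np.random.default_rng(2)
n = 500_000
cand = rng.uniform(-np.pi, np.pi, 4 * n)
phi = cand[rng.uniform(0.0, 1.0, cand.size) < np.cos(cand) ** 2][:n]

def modulation(angles):
    # For N(phi) = A + B cos(2 phi), mu = B/A equals twice the mean of cos(2 phi).
    return 2.0 * np.mean(np.cos(2.0 * angles))

sigma = 0.5                                      # assumed azimuth error in radians
blurred = phi + rng.normal(0.0, sigma, phi.size)

assert abs(modulation(phi) - 1.0) < 0.01         # ideal tracker: mu = 1
assert abs(modulation(blurred) - np.exp(-2.0 * sigma**2)) < 0.01
```

The measured modulation factors of a few to fifteen percent quoted later correspond, in this crude picture, to large effective azimuth errors from the short photoelectron tracks in silicon.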
Comparing the two classes of photoelectron tracking polarimeters, gas detectors and solid-state detectors, the former has higher polarimetry sensitivity. This comes from the difference in the range of electrons; the range is longer in gas than in a solid. However, since the imaging and spectroscopic capability of solid-state detectors is superior to that of gas detectors, there can be a wide range of application fields in solid-state detectors if they have enough polarimetric capability. Such instruments will be suitable not only for a focal plane detector with an x-ray mirror but also for the newly proposed x-ray interferometer without a mirror[@Hayashida2016; @Hayashida2018].
We focus on complementary MOS (CMOS) pixel sensors designed for visible light with a pixel size of several micrometers, which are developed for numerous commercial applications. The noise performance of the latest CMOS pixel sensors is as good as or better than that of CCDs. In particular, the so-called scientific purpose CMOS sensors have noise plus dark current as small as a few electrons even at room temperature. It means that those sensors can in principle detect x-rays in photon-counting mode. In fact, such CMOS sensors are employed in the rocket experiment, FOXSI3 to observe solar x-rays[@Ishikawa2018], and are planned to be employed in the x-ray wide-field survey mission Einstein Probe[@Wang2018]. In this paper, we have employed a newly released 2.5-$\mathrm{\mu}$m pixel CMOS sensor designed for visible light to measure its x-ray spectroscopic and polarimetric performance. This is the smallest pixel size detector whose x-ray performance has been measured so far. When errors are quoted without further specification, these refer to 1$\mathrm{\sigma}$ uncertainties in this paper.
Detection of X-ray Events with GMAX0505
=======================================
CMOS Image Sensor, GMAX0505
---------------------------
GMAX0505 is a CMOS image sensor developed by GPixel Inc. and originally designed for visible light imaging; it is displayed in Fig. \[GMAX\_image\]. Its imaging area consists of $5120\times5120$ pixels, and the size of a pixel is 2.5$\mathrm{\mu}$m $\times$ 2.5 $\mathrm{\mu}$m, which provides high spatial resolution. Table \[GMAX\_table\] shows the specific properties of GMAX0505. The structure of GMAX0505 is designed for visible light. In particular, its advanced light pipe structure optimizes the response to visible light as described in Yokoyama et al.[@Yokoyama2018] We apply it to x-ray imaging and polarimetry for the first time. Although Yokoyama et al.[@Yokoyama2018] provide a schematic view of the pixel structure, the size of the photodiode, which is essential for the x-ray detection, is not provided. We thus estimate the thickness of the x-ray detection layer to be $\sim$5$\mathrm{\mu}$m in Sec. 3.3. Although the device is usually equipped with a cover glass, we employ the device without it. However, micro-lenses implemented in the sensor at the illumination side are not removed in our experiment. Because the sensor is very sensitive to visible light, we cover it with a dark curtain to block the visible light.
We adopt the evaluation board and software developed by GPixel Inc. to operate GMAX0505 and acquire data. GMAX0505 has 32 levels of gain, of which we utilize two: when the gain register is set to 0, GMAX0505 operates in a low-gain mode, and when it is set to 4, in a high-gain mode. GMAX0505 is operated at room temperature and under atmospheric conditions throughout the experiments in this paper. Readout noise is evaluated from data with an exposure time of 1ms. The average noise is 5.7e${}^{-}$ (RMS) in the low-gain mode and 2.7e${}^{-}$ (RMS) in the high-gain mode.
\[H\]
----------------------------------------------------------------------------------------
![The GMAX0505 CMOS image sensor.[]{data-label="GMAX_image"}](GMAX_image.eps "fig:"){width="7cm"}
----------------------------------------------------------------------------------------
------------------ ----------------------------------------------
chip size $15.85~\mathrm{mm} \times 16.88~\mathrm{mm}$
pixel size $2.5~\mu\mathrm{m} \times 2.5~\mu\mathrm{m}$
number of pixels $5120 \times5120$
Effective Area $12.8~\mathrm{mm}\times12.8~\mathrm{mm}$
Frame rate 40 frames per second @12 bit
Shutter Type Global shutter
Device Type Front Illuminated
------------------ ----------------------------------------------
: GMAX0505 properties.[]{data-label="GMAX_table"}
Detecting X-rays from ${}^{55}$Fe
---------------------------------
First, we used ${}^{55}$Fe as an x-ray source to confirm that GMAX0505 can detect x-ray photons. Both the high-gain and low-gain modes were adopted in the ${}^{55}$Fe measurement in order to check the properties of each gain. We took 500 frames in each of the high- and low-gain modes with an exposure time of 100ms. About 200 frames were also obtained with the same exposure but without the x-ray source as background data. We estimated the background level of each pixel by averaging all the frames in the background data and subtracted it from the level of each pixel in each frame taken with x-ray illumination.
Figure \[Fe\_image\] shows zoomed-in frame data obtained in the ${}^{55}$Fe irradiation test. We clearly detect x-ray events. Most of the events are confined within one pixel, which we call single-pixel events. However, we found some events spread over multiple neighboring pixels, as often seen in x-ray detection with x-ray CCDs. If an event spreads over two adjacent pixels, we call it a double-pixel event; events spreading over more than two pixels are defined as extended events. Following the standard event extraction scheme employed for x-ray CCDs, we employ two signal thresholds for event extraction: the event threshold for the event center and the split threshold for neighboring pixels. We set the event threshold to 10$\sigma$ and the split threshold to 3$\sigma$, where $\sigma$ is the standard deviation of the background level. In our analysis, we classify the events into single-pixel, double-pixel, and extended events.
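The two-threshold extraction above can be sketched as follows. This is a minimal illustration in Python/NumPy under our own assumptions (the actual readout pipeline is not described in this paper); `frame` is a background-subtracted image and `sigma` is the standard deviation of the background level:

```python
import numpy as np

def classify_events(frame, sigma, event_nsig=10, split_nsig=3):
    """Two-threshold event classification as used for x-ray CCDs.

    A pixel is an event centre if it exceeds the event threshold and is the
    maximum of its 3x3 neighbourhood; the number of neighbourhood pixels
    above the split threshold decides single / double / extended.
    """
    event_th, split_th = event_nsig * sigma, split_nsig * sigma
    events = []
    ny, nx = frame.shape
    for y in range(1, ny - 1):
        for x in range(1, nx - 1):
            patch = frame[y - 1:y + 2, x - 1:x + 2]
            if frame[y, x] < event_th or frame[y, x] < patch.max():
                continue
            n_above = int((patch > split_th).sum())    # includes the centre
            kind = {1: "single", 2: "double"}.get(n_above, "extended")
            ph = float(patch[patch > split_th].sum())  # summed pulse height
            events.append((y, x, ph, kind))
    return events

# synthetic background-subtracted frame with one event of each class
frame = np.zeros((10, 10))
frame[2, 2] = 50.0                                         # single-pixel event
frame[5, 5], frame[5, 6] = 40.0, 20.0                      # double-pixel event
frame[8, 2], frame[8, 3], frame[7, 2] = 30.0, 15.0, 10.0   # extended event
kinds = sorted(e[3] for e in classify_events(frame, sigma=1.0))
```

In a real analysis the frame would be the full $5120\times5120$ array and the summed pulse height of each event would feed the spectra.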
Figure \[Fe\_spec\] shows the spectra of single-pixel, double-pixel, and extended events obtained by GMAX0505 exposed to x-rays from ${}^{55}$Fe. These spectra show that GMAX0505 clearly detected x-rays from the source. For single-pixel events, the gain is calculated to be 4.27eV ch${}^{-1}$ in the high-gain mode and 12.7eV ch${}^{-1}$ in the low-gain mode. The energy resolution is obtained from the width of the Mn K$\mathrm{\alpha}$ peak: the FWHM at 5.9keV is 176eV in the high-gain mode and 196eV in the low-gain mode.
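As a quick consistency check (our own arithmetic, not taken from the paper), these gains place the Mn K lines at the channel positions quoted in Fig. \[Fe\_spec\]:

```python
# measured gains for single-pixel events (eV per channel)
gain_high, gain_low = 4.27, 12.7

ch_ka = 5900.0 / gain_high   # Mn K-alpha at 5.9 keV -> ~1382 ch ("around 1400 ch")
ch_kb = 6400.0 / gain_high   # Mn K-beta  at 6.4 keV -> ~1499 ch ("around 1500 ch")
```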
\[H\]
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Zoomed-in frame images of a single-pixel event (left), a double-pixel event (middle), and an extended event (right) obtained by GMAX0505 in the high-gain mode exposed to x-rays from ${}^{55}$Fe. More than 60% of the events are single-pixel events.[]{data-label="Fe_image"}](Fe_image.eps "fig:")
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
\[H\]
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![(Left) A spectrum obtained by GMAX0505 exposed to x-rays from ${}^{55}$Fe in the high-gain mode. The peaks around 1400 and 1500ch correspond to 5.9keV (Mn K$\mathrm{\alpha}$) and 6.4keV (Mn K$\mathrm{\beta}$), respectively. (Right) Same as left, but in the low-gain mode.[]{data-label="Fe_spec"}](Fe_spec.eps "fig:")
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
X-ray Beamline Experiment
==========================
SPring-8 BL20B2 and Calibration of the Beam Polarization
--------------------------------------------------------
To evaluate the x-ray detection efficiency and the x-ray polarimetry sensitivity of GMAX0505, we conducted an x-ray beam experiment at BL20B2 in SPring-8, the synchrotron radiation facility in Hyogo, Japan, in October 2018. The beamline was 215m in length, and the beam divergence was as small as sub-arcsecond. We fixed our CMOS detector in the experimental hatch at the downstream end of the beamline. The beam size was collimated by a slit at the entrance of the hatch to 10 mm $\times$ 10 mm. X-ray photons were monochromatized by a double-crystal monochromator set upstream of the beam. We used an x-ray energy of either 12.4 or 24.8keV and inserted attenuators to avoid event pile-up: a 0.15mm-thick Mo plate for the 12.4-keV beam, or a 0.1mm-thick Sn plus a 0.1mm-thick Cu plate for the 24.8-keV beam.
We prepared a scattering polarimeter system consisting of a Be target of 5-mm diameter and 20-mm length and a CdTe detector (XR-100CdTe, provided by AMPTEK, Inc.) to detect x-ray photons scattered by 90 deg. We applied the x-ray beam to this system to measure the x-ray beam polarization. The modulation of the count rate was measured to be 94.05% $\pm$ 0.03% at 12.4keV and 93.26% $\pm$ 0.08% at 24.8keV. We calculated the MF of our scattering polarimeter system to be 94.7%, simply from its geometry and the differential cross section of the Thomson scattering. Dividing the former by the latter, we obtained the polarization degree of the beam to be 99.31% $\pm$ 0.03% at 12.4keV and 98.48% $\pm$ 0.08% at 24.8keV, respectively. The polarization direction (electric vector) is parallel to the horizontal plane.
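The beam-polarization calibration above is a simple division; a short sketch with the numbers taken from the text:

```python
# count-rate modulations measured with the Be-scattering polarimeter
mod_12p4, mod_24p8 = 0.9405, 0.9326
mf_scat = 0.947    # MF of the scattering system (geometry + Thomson cross section)

p_12p4 = mod_12p4 / mf_scat    # beam polarization degree at 12.4 keV (~99.31 %)
p_24p8 = mod_24p8 / mf_scat    # beam polarization degree at 24.8 keV (~98.48 %)
```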
X-ray Beam Irradiation to GMAX0505
----------------------------------
The experimental setup is shown in Fig. \[setup\]. We defined the H and V directions in the GMAX0505 imaging area and the rotation angle $\phi$ of GMAX0505 as shown in Fig. \[setup\]. We took frame data at $\phi$ = 0 deg, 30 deg, 60 deg, and 90 deg. The number of obtained frames was 200 to 500 at each angle and each energy, and the exposure of every frame was 600ms at 12.4keV and 5ms at 24.8keV. We adopted the high-gain mode for the measurements at 12.4keV and the low-gain mode at 24.8keV. Background data were also obtained with the same exposure without irradiating the x-ray beam.
We show x-ray events detected in the raw frame data of GMAX0505 in Fig. \[beam\_image\]. The numbers of single-pixel, double-pixel, and extended events are almost the same at 12.4keV, whereas most of the 24.8-keV events extend over multiple pixels. This contrast is caused by the difference in the range of photoelectrons in silicon, 1.1$\mathrm{\mu}$m for the 12.4keV x-ray incidence and 4.3$\mathrm{\mu}$m for the 24.8keV x-ray incidence, according to the empirical formula $r~(\mathrm{{\mu}m})=\left[E_\mathrm{e}/10~(\mathrm{keV})\right]^{1.75}$, where $E_\mathrm{e}$ is the initial photoelectron energy[@Janesick1985]. The small pixel size of 2.5$\mathrm{\mu}$m of GMAX0505 enables us to “image” photoelectron tracks, at least for the 24.8keV x-ray incidence, as shown in Fig. \[beam\_image2\]. To our knowledge, this is the first time such images have been taken with a solid-state detector.
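The quoted ranges follow directly from the empirical formula; a small sketch (subtracting the Si K-shell binding energy of $\sim$1.84keV from the x-ray energy is our assumption, consistent with the quoted 1.1 and 4.3$\mathrm{\mu}$m):

```python
def photoelectron_range_um(e_xray_kev, si_k_binding_kev=1.84):
    """Empirical photoelectron range in Si: r = (E_e / 10 keV)^1.75 um
    (Janesick et al. 1985).  E_e is taken here as the x-ray energy minus
    the Si K-shell binding energy (~1.84 keV; our assumption)."""
    return ((e_xray_kev - si_k_binding_kev) / 10.0) ** 1.75

r12 = photoelectron_range_um(12.4)   # ~1.1 um
r25 = photoelectron_range_um(24.8)   # ~4.3 um
```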
The spectra measured with GMAX0505 are shown in Fig. \[beam\_spectra\]. Subtraction of background data and event selection are conducted in the same way as described in Sec. 2.2. Each spectrum has a primary peak corresponding to the incident x-ray beam energy 12.4 or 24.8keV. Other peaks are escape events and fluorescence x-ray events from surrounding materials.
\[H\]
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Experimental setup at SPring-8 BL20B2. The distance between GMAX0505 and the beamline window is set to $\sim$ 2.5m. The imaging area of GMAX0505 has H and V axes, and hence we define the rotation angle $\phi$ as shown in the figure legend.[]{data-label="setup"}](SP8_setup.eps "fig:")
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
\[H\]
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![(Left) Zoomed-in frame images obtained by GMAX0505 exposed to the x-ray beam at 12.4keV. These events are picked up at random from the frame data. (Right) Same as left, but the energy is 24.8keV. Almost all events split into neighboring pixels, whereas some events are confined in one pixel at 12.4keV. []{data-label="beam_image"}](beam_image.eps "fig:")
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
\[H\]
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Zoomed-in frame images of extended events at 24.8keV. We show events in which the track of a photoelectron is especially visible.[]{data-label="beam_image2"}](beam_image2.eps "fig:")
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
\[H\]
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![(Left) Spectra of GMAX0505 exposed to the 12.4-keV x-ray beam. Different colors show different types of events. We utilized a range of 2600 to 3200ch to extract events whose energy is around 12.4keV. (Right) Same as left, but the beam energy is 24.8keV. We utilize the events in 1900 to 2100ch.[]{data-label="beam_spectra"}](beam_spec.eps "fig:")
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
X-ray Detection Efficiency of GMAX0505
--------------------------------------
To measure the detection efficiency of GMAX0505 to x-rays, we also measured the 12.4keV x-ray beam with the CdTe detector used in Sec. 3.1. Exposures of the CdTe observations were set to 600s. The attenuation plates and the distance between the beamline window and the detector were the same as in the measurement with GMAX0505. The CdTe detector had a CdTe crystal with a thickness of 1mm, and its detection efficiency was more than 99.9% at 12.4keV. Hence, we regard its detection efficiency as 100% and compared the count rates per 1mm$^2$ derived by GMAX0505 and the CdTe sensor. The detection efficiency of GMAX0505 was calculated to be 1.9% at 12.4keV, from which the thickness of the detection layer was evaluated to be $\sim$5$\mathrm{\mu}$m. Note that we simplified the structure of the detection layer to a flat slab of Si in our estimation, so this value is not necessarily the physical size of the photodiode. In this calculation, we integrated all types of x-ray events in 600 to 3200ch. For the 24.8keV x-ray incidence, the integrated counts were mostly in the low-energy tail of the spectra, making an accurate evaluation difficult. However, we employ only the double-pixel events to detect the polarization of x-rays, as discussed in detail in Sec. 4. When we employ the events in the pulse height ranges specified in Fig. \[beam\_spectra\], we obtain detection efficiencies of double-pixel events at 12.4 and 24.8keV of 0.093% and 0.0011%, respectively.
X-ray Polarimetry with GMAX0505
===============================
Polarization Measurement with Double-pixel Events
-------------------------------------------------
Photoelectrons are preferentially emitted in the direction parallel to the electric vector of incident x-ray photons, i.e., x-ray events should spread horizontally if the beam is polarized horizontally. X-ray CCD image sensors show the same characteristics and can detect x-ray polarization by selecting double-pixel events[@Tsunemi1992]. We then focus on the double-pixel events and estimate the polarimetry sensitivity of our GMAX0505.
We calculate the number ratio of double-pixel events spreading along the H axis against the total double-pixel events in every rotation angle by the following equation: $$\begin{aligned}
\label{eq2}
r_\mathrm{H}(\phi)=\frac{N_\mathrm{H}(\phi)}{N_\mathrm{H}(\phi)+N_\mathrm{V}(\phi)} ,\end{aligned}$$ where $\phi$ is the rotation angle of GMAX0505, and $N_\mathrm{H}(\phi)$ and $N_\mathrm{V}(\phi)$ represent the numbers of double-pixel events of GMAX0505 along its H and V axes, respectively. We adopt spectral pulse height ranges of 2600 to 3200ch and 1900 to 2100ch at 12.4 and 24.8keV, respectively. As $\phi$ approaches 90 deg, i.e., as the GMAX0505 H axis approaches the beam polarization direction, $r_\mathrm{H}$ gradually increases as shown in Fig. \[ratio\], as anticipated. This indicates that we succeeded in detecting x-ray polarization with GMAX0505. We also calculate $r_\mathrm{H}$ for nonpolarized incident x-rays using the double-pixel events of the ${}^{55}$Fe data described in Sec. 2.2. We obtain an $r_\mathrm{H}$ of 46.6% $\pm$ 2.6%, which suggests that carriers may preferentially spread along the V axis in GMAX0505 even if the incident x-rays are nonpolarized.
\[hb\]
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![The fraction of double-pixel events spreading along the H axis among all double-pixel events, defined as $r_\mathrm{H}(\phi)$ in Eq. (\[eq2\]). Red shows $r_\mathrm{H}(\phi)$ at 12.4keV, and blue shows $r_\mathrm{H}(\phi)$ at 24.8keV. The errors on the 12.4keV $r_\mathrm{H}$ are negligible because of the sufficient statistics. The data are well fitted by the sinusoidal function that a modulation curve is expected to follow.[]{data-label="ratio"}](ratio_plot.eps "fig:")
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
To evaluate the polarimetry sensitivity, we calculate the MF, which is defined as $$\begin{aligned}
\label{eq3}
MF=\frac{1}{P}\frac{r_\mathrm{H}(90~\mathrm{deg})-r_\mathrm{H}(0~\mathrm{deg})}{r_\mathrm{H}(90~\mathrm{deg})+r_\mathrm{H}(0~\mathrm{deg})} ,\end{aligned}$$ where $P$ is the degree of polarization of the incident x-ray beam, for which we substitute the values derived in Sec. 3.1. Consequently, we obtain an MF of 7.63% $\pm$ 0.07% at 12.4keV and 15.5% $\pm$ 0.4% at 24.8keV.
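Eq. (\[eq3\]) can be evaluated directly. The fractions below are illustrative placeholders (the measured $r_\mathrm{H}$ values are shown only graphically in Fig. \[ratio\]), chosen so that the result reproduces the quoted MF at 12.4keV:

```python
def modulation_factor(r_h_0, r_h_90, p_beam):
    """Eq. (3): MF from the H-axis double-event fractions at phi = 0 and
    90 deg, corrected for the polarization degree p_beam of the beam."""
    return (r_h_90 - r_h_0) / (r_h_90 + r_h_0) / p_beam

# illustrative (not measured) fractions; p_beam from the Sec. 3.1 calibration
mf = modulation_factor(0.462, 0.538, 0.9931)   # ~0.076, i.e. MF ~ 7.6 %
```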
Comparison between Simulations and Measurements
-----------------------------------------------
We compare the $r_\mathrm{H}$ values obtained by GMAX0505 with numerical simulations performed with the Geant4 software[@Geant2003; @Geant2006; @Geant2016] in an ideal case where diffusion of carriers in silicon is neglected and electronic noise is zero. We choose Geant4 version 10.03.03 and adopt the Livermore model as the physics model. This model includes fluorescence x-rays, Auger electrons, and polarization effects. In our simulations, we trace electrons (not only the primary photoelectrons but also the secondary electrons generated by various processes) until their energy is reduced to 10eV. We construct $5\times5$ Si pixel arrays, where each pixel size is the same as that of GMAX0505. The thickness of the pixels is estimated to be 5$\mathrm{\mu}$m in Sec. 3.3, and here we adopt this value. In the simulations, $10^8$ x-ray photons with 100% polarization enter the pixels. We calculate the total energy deposit in each pixel for each event and compare the numbers of double-pixel events parallel and perpendicular to the polarization direction. The MF obtained from this simulation is 21.4% $\pm$ 0.6% at 12.4keV and 20.0% $\pm$ 0.3% at 24.8keV. These values are larger than the measured values by a factor of 1.3 to 3. This discrepancy is likely attributable to our simplified assumptions; for example, the weak electric field in the Si substrate between and under the photodiodes may affect the signal of double-pixel events. These values should therefore be regarded as theoretical upper limits of the MF in the ideal case.
Discussion
----------
A previous polarimetry measurement with a 12-$\mathrm{\mu}$m pixel size CCD detector was conducted at 27 and 43keV.[@Hayashida1999] Since the definition of their MF is different from ours, we convert their MF with Eq. (\[eq3\]), obtaining 7% at 27keV and 15% at 43keV. Our measurement thus performs polarimetry in a lower energy band and with a higher spatial resolution than the CCD experiment, owing to the smaller pixel size of GMAX0505. It is also remarkable that GMAX0505 is operated at room temperature, in contrast to x-ray CCDs, which need to be cooled down to about $-$100${}^{\circ}$C. Another advantage of the CMOS sensor over CCDs is pile-up tolerance; in fact, we would need to attenuate x-rays by orders of magnitude if we employed a CCD with a frame time of a few seconds.
Here, we evaluate the sensitivity of the x-ray polarimetry as minimum detectable polarization (MDP). MDP is calculated as $$\begin{aligned}
\label{eq4}
\mathrm{MDP}=\frac{4.29}{MF \times \sqrt{N_{\mathrm{H}}+N_{\mathrm{V}}} } ,\end{aligned}$$ where $N_\mathrm{H}+N_\mathrm{V}$ is the total number of double-pixel events[@Weisskopf2010]. MDP is the minimum polarization degree required to detect x-ray polarization at the 99% confidence level. We calculate the MDP for the case of observing the Crab Nebula in the 10 to 20keV band with GMAX0505. The spectrum of the Crab Nebula is approximated by a power-law continuum with a photon index of 2.12 and a normalization at 1keV of 9.42photons $\mathrm{keV}^{-1}~\mathrm{cm}^{-2}~\mathrm{s}^{-1}$[@Kirsch2005]. With this model, the photon flux in 10 to 20keV is 0.344$\mathrm{photons}~\mathrm{cm}^{-2}~\mathrm{s}^{-1}$. If the exposure time is $10^6$s and the effective area of the mirror is 400$\mathrm{cm}^2$, $1.38\times10^8$ x-ray photons enter the detector. Since the detection efficiency of double-pixel events is 0.093% $\pm$ 0.002% at 12.4keV (Sec. 3.3), we substitute 1.28$\times10^5$ for $N_\mathrm{H}+N_\mathrm{V}$. Adopting MF = 7.63% $\pm$ 0.07% at 12.4keV, we obtain an MDP of $\sim$16% in the 10 to 20keV band.
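The MDP estimate above can be reproduced step by step (all numbers taken from the text):

```python
import math

def mdp99(mf, n_events):
    """Eq. (4): minimum detectable polarization at the 99 % confidence level."""
    return 4.29 / (mf * math.sqrt(n_events))

# Crab Nebula estimate (10-20 keV, 1e6 s exposure, 400 cm^2 mirror)
n_photons = 0.344 * 400.0 * 1.0e6      # ~1.38e8 photons reaching the detector
n_double = n_photons * 9.3e-4          # 0.093 % double-pixel efficiency at 12.4 keV
mdp = mdp99(0.0763, n_double)          # ~0.16, i.e. MDP ~ 16 %
```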
Note that our data reduction procedure has not yet been optimized for polarimetry with GMAX0505: we employ only double-pixel events. As shown in Fig. \[beam\_image\], particularly for the 24.8-keV incidence, photoelectron track images extending over multiple pixels should hold valid information on the emission direction. Extensive studies have been made to analyze photoelectron track images obtained with gas micro-pixel detectors for x-ray polarimetry. We expect that those techniques are also applicable to the track images obtained with GMAX0505 and that they will improve the MDP significantly.
The MF of GMAX0505 is not as high as that of other x-ray polarimeters: 20% to 65% for the gas photoelectron tracking polarimeter at 2 to 8keV[@Weisskopf2016] and 30% to 50% for the hard x-ray and soft $\gamma$-ray scattering polarimeters above 20keV.[@Yonetoku2011; @Hayashida2016b; @Chauvin2017] However, the energy range of 10 to 20keV falls in the valley between these two classes of x-ray polarimeters, and there is room for small-pixel CMOS sensors such as GMAX0505. Combined with a future high-quality x-ray mirror with a 10-m focal length, we would obtain a spatial resolution of $\sim$0.05arcsec. This would enable us, for example, to detect spatially resolved scattered x-rays from the molecular tori surrounding supermassive black holes in nearby galactic nuclei. This type of small-pixel CMOS sensor is also essential for the newly proposed x-ray interferometer without mirrors, the Multi-Image X-ray Interferometer Module.[@Hayashida2016; @Hayashida2018]
Conclusion {#sec:sections}
==========
We applied GMAX0505, a CMOS image sensor designed for visible light with the smallest pixel size ever, 2.5$\mathrm{\mu}$m $\times$ 2.5$\mathrm{\mu}$m, to x-ray imaging, spectroscopy, and polarimetry. We first irradiated it with x-rays from ${}^{55}$Fe and obtained an energy resolution of 176eV (FWHM) at 5.9keV at room temperature. We then brought GMAX0505 to SPring-8 BL20B2 and measured its x-ray polarimetry sensitivity, obtaining an MF of 7.63% $\pm$ 0.07% at 12.4keV and 15.5% $\pm$ 0.4% at 24.8keV. These results show that GMAX0505 combines x-ray polarimetry sensitivity and spectroscopic performance with the highest spatial resolution ever.
The synchrotron radiation experiments were performed at BL20B2 in SPring-8 with the approval of the Japan Synchrotron Radiation Research Institute (JASRI) (Proposals Nos. 2018B1235, 2018A1368, 2017B1186, and 2017B1098). KH acknowledges the support from JSPS KAKENHI under Grant Nos. JP18K18767, JP16K13787, JP16H00949, and JP26109506. HM acknowledges the support from JSPS KAKENHI under Grant No. 15H02070. HA acknowledges the support from JSPS KAKENHI under Grant Nos. 15H02070 and 17K18782. HN acknowledges the support from JSPS KAKENHI under Grant Nos. 15H03641 and 18H01256. The authors have no relevant financial interests in the manuscript and no other potential conflicts of interest to disclose.
**Kazunori Asakura** is a graduate student at Osaka University in Japan. He received his BS degree in physics from Osaka University in 2018. His current research includes instrumentation for x-ray imaging spectroscopy and polarimetry (especially CCDs and CMOS sensors) and observational studies using archival data of x-ray astronomy satellites.
---
abstract: 'This paper presents new deep and wide narrow-band surveys undertaken with UKIRT, Subaru and the VLT; a unique combined effort to select large, robust samples of H$\alpha$ star-forming galaxies at $z=0.40$, $0.84$, $1.47$ and $2.23$ (corresponding to look-back times of 4.2, 7.0, 9.2 and 10.6 Gyrs) in a uniform manner over $\sim2$ deg$^2$ in the COSMOS and UDS fields. The deep multi-epoch H$\alpha$ surveys reach a matched 3$\sigma$ flux limit of $\approx3$M$_{\odot}$yr$^{-1}$ out to $z=2.2$ for the first time, while the wide area and the coverage over two independent fields allow us to greatly overcome cosmic variance and assemble by far the largest samples of H$\alpha$ emitters. Catalogues are presented for a total of 1742, 637, 515 and 807 H$\alpha$ emitters, robustly selected at $z=0.40$, $0.84$, $1.47$ and $2.23$, respectively, and used to determine the H$\alpha$ luminosity function and its evolution. The faint-end slope of the H$\alpha$ luminosity function is found to be $\alpha=-1.60\pm0.08$ over $z=0-2.23$, showing no significant evolution. The characteristic luminosity of SF galaxies, $L_{\rm H\alpha}^*$, evolves significantly as $\log\,L^*_{\rm H\alpha}(z)=0.45z+\log\,L^*_{z=0}$. This is the first time H$\alpha$ has been used to trace SF activity with a single homogeneous survey at $z=0.4-2.23$. Overall, the evolution seen with H$\alpha$ is in good agreement with the evolution seen using inhomogeneous compilations of other tracers of star formation, such as FIR and UV, jointly pointing towards the bulk of the evolution in the last 11Gyrs being driven by a statistically similar SF population across cosmic time, but with a strong luminosity increase from $z\sim0$ to $z\sim2.2$. Our uniform analysis allows us to derive the H$\alpha$ star formation history of the Universe (SFRH), showing a clear rise up to $z\sim2.2$, for which the simple parametrisation $\log_{10}\rho_{\rm SFR}=-2.1(1+z)^{-1}$ is valid over 80 per cent of the age of the Universe.
The results reveal that both the shape and normalisation of the H$\alpha$ SFRH are consistent with the measurements of the stellar mass density growth, confirming that our H$\alpha$ SFRH is tracing the bulk of the formation of stars in the Universe for $z<2.23$. The star formation activity over the last $\sim$11Gyrs is responsible for producing $\sim95$ per cent of the total stellar mass density observed locally, with half of that being assembled in 2Gyrs between $z=1.2$–$2.2$, and the other half in 8Gyrs (since $z<1.2$). If the star-formation rate density continues to decline with time in the same way as seen in the past $\sim11$Gyrs, then the stellar mass density of the Universe will reach a maximum which is only 5 per cent higher than the present-day value.'
author:
- |
David Sobral$^{1}$[^1], Ian Smail$^{2}$, Philip N. Best$^{3}$, James E. Geach$^{4}$, Yuichi Matsuda$^{5}$, John P. Stott$^{2}$, Michele Cirasuolo$^{4,6}$ & Jaron Kurk$^{7}$\
$^{1}$ Leiden Observatory, Leiden University, P.O. Box 9513, NL-2300 RA Leiden, The Netherlands\
$^{2}$ Institute for Computational Cosmology, Durham University, South Road, Durham, DH1 3LE, UK\
$^{3}$ SUPA, Institute for Astronomy, Royal Observatory of Edinburgh, Blackford Hill, Edinburgh, EH9 3HJ, UK\
$^{4}$ Department of Physics, McGill University, Ernest Rutherford Building, 3600 Rue University, Montréal, Québec, Canada, H3A 2T8\
$^5$ Cahill Center for Astronomy & Astrophysics, California Institute of Technology, MS 249-17, Pasadena, CA 91125, USA\
$^6$ UK Astronomy Technology Centre, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ\
$^7$ Max-Planck-Institut f[ü]{}r Astrophysik, Karl-Schwarzschild Strasse 1, D-85741 Garching, Germany
bibliography:
- 'bibliography.bib'
date: 'Accepted 2012 September 27. Received 2012 September 27; in original form 2012 February 15'
title: 'A large H$\alpha$ survey at $\bf z=2.23,1.47,0.84 \, \&\, 0.40$: the 11Gyr evolution of star-forming galaxies from HiZELS[^2] '
---
\[firstpage\]
galaxies: high-redshift, galaxies: luminosity function, cosmology: observations, galaxies: evolution.
Introduction {#intro}
============
Observational studies show that star formation activity in galaxies, as measured through the star formation rate density ($\rho_{\rm SFR}$) in the Universe, has been decreasing significantly with time [e.g. @Lilly96]. Nevertheless, while surveys reveal that $\rho_{\rm SFR}$ rises steeply out to at least $z\sim1$ [e.g. @Hopkins2006], determining the redshift where $\rho_{\rm SFR}$ might have peaked at $z>1$ is still an open problem. This is because the use of different techniques/indicators (affected by different biases, dust extinctions and with different sensitivities – and that can only be used over limited redshift windows) results in a very blurred and scattered understanding of the star formation history of the Universe. Other problems/limitations result from the difficulty of obtaining both large-area, large-sample, clean and deep observations (to overcome both cosmic variance, and avoid large extrapolations down to faint luminosities).
One way to make significant progress in our understanding of star formation at high redshifts is through the use of narrow-band imaging techniques. These can provide sensitive wide-field surveys to select star-forming galaxies through a single emission line and track it out to high redshift as it shifts from the optical into the near-IR. While there are a number of emission lines which are used to trace star formation, H$\alpha$ is by far the best at $z<3$,[^3] as it provides a sensitive census of star formation activity, is well-calibrated, and suffers only modest extinction in typical star-forming galaxies [e.g. @Gilbank; @Garn2010a; @Sobral11B], in contrast to shorter-wavelength emission lines. Furthermore, H$\alpha$ is also a much better estimator of the instantaneous star formation rate than other widely used tracers, such as UV, FIR, or radio, as it is sensitive only to the presence of the most massive stars, which are very short-lived. Even longer-wavelength star-formation-tracing emission lines, such as the Paschen series lines, are less affected by dust extinction, but they are intrinsically fainter than H$\alpha$ (e.g. Pa$\alpha$ is intrinsically $\sim10\times$ weaker than H$\alpha$ for a typical star-forming galaxy) and hence provide much less sensitive surveys out to lower redshifts.
H$\alpha$ surveys have been carried out by many authors [e.g. @Bunker95; @Malkan1996], but they initially resulted in a relatively low number of sources for $z>0.5$ surveys. Fortunately, the development of wide field near-IR detectors has recently allowed a significant increase in success: at $z\sim2$, narrow-band surveys such as [@Moorwood], which could only detect a handful of emitters, have been rapidly extended by others, such as [@G08], increasing the sample size by more than an order of magnitude. Substantial advances have also been obtained at $z\sim1$ [e.g. @Villar; @S09a; @CHU11]. Other H$\alpha$ surveys have used dispersion prisms on [hst]{} to make progress [e.g. @McCarthy; @Yan; @Hopkins2000; @Shim09], and there is promising work being conducted using the upgraded WFC3-grism [e.g. WISP or 3D-HST; @Atek2010; @HSTTHER; @vanDokkum].
------------------------------------------------------------------------------------- -- -- -- --
NB filter & $\lambda_{\rm c}$ & FWHM & $z$ H$\alpha$ & Volume (H$\alpha$)\
 & ($\umu$m) & (Å) & & ($10^4$Mpc$^3$deg$^{-2}$)\
NB921 & 0.9196 &132 & 0.401$\pm$0.010 & 5.13\
NB$_{\rm J}$ & 1.211 & 150 & 0.845$\pm$0.015 & 14.65\
NB$_{\rm H}$ &1.617 & 211 & 1.466$\pm$0.016 & 33.96\
NB$_{\rm K}$ & 2.121 & 210 & 2.231$\pm$0.016 & 38.31\
HAWK-I H$_2$ & 2.125 & 300 & 2.237$\pm$0.023 & 54.70\
------------------------------------------------------------------------------------- -- -- -- --
: Narrow-band filters used to conduct the multi-epoch surveys for H$\alpha$ emitters, indicating the central wavelength ($\umu$m), full width at half maximum (FWHM), the redshift range for which the H$\alpha$ line is detected over the filter FWHM, and the corresponding volume (per square degree) surveyed (for the H$\alpha$ line). Note that the NB921 filter provides an \[O[ii]{}\] survey which precisely matches the H$\alpha$ $z=1.47$ survey, and also an \[O[iii]{}\] survey which broadly matches the $z=0.84$ H$\alpha$ survey. The NB$_{\rm J}$ and NB$_{\rm H}$ filters also provide \[O[ii]{}\]3727 and \[O[iii]{}\]5007 surveys, respectively, which match the $z=2.23$ NB$_{\rm K}$ H$\alpha$ survey.
\[numbers\]
HiZELS, the High-redshift(Z) Emission Line Survey[^4] [@G08; @S09a; @Sobral11B hereafter S09 and S12] is a Campaign Project using the Wide Field CAMera (WFCAM) on the United Kingdom Infra-Red Telescope (UKIRT), as well as the Suprime-Cam on the Subaru Telescope and the HAWK-I camera on VLT. On UKIRT, HiZELS exploits specially-designed narrow-band filters in the $J$ and $H$ bands (NB$_{\rm J}$ and NB$_{\rm H}$), along with the H$_2$S(1) filter in the $K$ band (hereafter NB$_{\rm K}$), to undertake panoramic, deep surveys for line emitters. The Subaru observations provide a comparable survey at $z=0.40$ using the NB921 filter on Suprime-Cam, while the HAWK-I observations extend the UKIRT survey to fainter limits at $z=2.23$ over a smaller area. The combined elements of HiZELS primarily target the H$\alpha$ emission line [but also other lines e.g. @S09b] redshifted into the red or near-infrared at $z=0.40$, $z=0.84$, $z=1.47$ and $z=2.23$ [see @Best2010], while the NB$_{\rm J}$ and NB$_{\rm H}$ filters also detect \[O[ii]{}\]3727 and \[O[iii]{}\]5007 emitters at $z=2.23$, matching the NB$_{\rm K}$ H$\alpha$ coverage at the same redshift.
--------------------------------------------------------------------------------------------------------------------------- -- -- -- -- -- -- -- -- --
Field & Band & R.A. & Dec. & Int. time & FWHM & Dates & $m_{\rm lim}$(Vega)\
 & (filter) & (J2000) & (J2000) & (ks) & ($''$) & & (3$\sigma$)\
COSMOS-1 & NB921 & 095923 & $+$023029 & 2.9 & 0.9 & 2010 Dec 9 & 24.4\
COSMOS-2 & NB921 & 100134 & $+$023029 & 2.9 & 0.9 & 2010 Dec 9 &24.5\
COSMOS-3 & NB921 & 095923 & $+$020416 & 2.9 & 0.9 & 2010 Dec 9 & 24.5\
COSMOS-4 & NB921 & 100134 & $+$020416 & 2.9 & 0.9 & 2010 Dec 9 & 24.5\
UKIDSS-UDS C & NB921 & 021800 & $-$050000 & 30.0 & 0.8 & 2005 Oct 29, Nov 1, 2007 Oct 11$-$12 & 26.6\
UKIDSS-UDS N & NB921 & 021800 & $-$043500 & 37.8 & 0.9 & 2005 Oct 30,31, Nov 1, 2006 Nov 18, 2007 Oct 11,12 & 26.7\
UKIDSS-UDS S & NB921 & 021800 & $-$052500 & 37.1 & 0.8 & 2005 Aug 29, Oct 29, 2006 Nov 18, 2007 Oct 12 & 26.6\
UKIDSS-UDS E & NB921 & 021947 & $-$050000 & 29.3 & 0.8 & 2005 Oct 31, Nov 1, 2006 Nov 18, 2007 Oct 11,12 & 26.6\
UKIDSS-UDS W & NB921 & 021613 & $-$050000 & 28.1 & 0.8 & 2006 Nov 18, 2007 Oct 11,12 & 26.0\
COSMOS-NW(1) & NB$_{\rm J}$ & 100000 & +021030 & 19.7 & 0.8 & 2007 Jan 14–16 & 22.0\
COSMOS-NE(2) & NB$_{\rm J}$ & 100052 & +021030 & 23.8 & 0.9 & 2006 Nov 10; 2007 Jan 13–14 & 22.0\
COSMOS-SW(3) & NB$_{\rm J}$ & 100000 & +022344 & 18.9 & 0.9 & 2007 Jan 15–17 & 22.0\
COSMOS-SE(4) & NB$_{\rm J}$ & 100053 & +022344 & 17.1 & 1.0 & 2007 Jan 15, 17; Feb 13, 14, 16 & 21.9\
UKIDSS-UDS NE & NB$_{\rm J}$ & 021829 & $-$045220 & 20.9 & 0.8 & 2007 Oct 21–23 & 22.0\
UKIDSS-UDS NW & NB$_{\rm J}$ & 021736 & $-$045220 & 22.4 & 0.9 & 2007 Oct 20–21 & 22.1\
UKIDSS-UDS SE & NB$_{\rm J}$ & 021829 & $-$050553 & 19.6 & 0.9 & 2007 Oct 23, 24 & 22.0\
UKIDSS-UDS SW & NB$_{\rm J}$ & 021738 & $-$050534 & 22.4 & 0.8 & 2007 Oct 19, 21 & 22.0\
COSMOS-NW(1) & NB$_{\rm H}$ & 100000 & $+$021030 & 12.5 & 1.0 & 2009 Feb 27; Mar 1-2 & 21.1\
COSMOS-NE(2) DEEP & NB$_{\rm H}$ & 100052 & $+$021030 & 107.0 & 0.9 & 2009 Feb 28; Apr 19; May 22; 2011 Jan 26–30 & 22.2\
COSMOS-SW(3) & NB$_{\rm H}$ & 100000 & $+$022344 & 14.0 & 0.7 & 2010 Apr 2 & 21.0\
COSMOS-SE(4) & NB$_{\rm H}$ & 100053 & $+$022344 & 18.1 & 1.0 & 2009 Mar 2; Apr 30; May 22; 2010 Apr 3 & 20.8\
COSMOS-A & NB$_{\rm H}$ & 100001 & $+$023653 & 12.6 & 1.0 & 2010 Apr 9; 2011 Jan 25 & 21.1\
COSMOS-B & NB$_{\rm H}$ & 100054 & $+$023630 &12.6 & 0.9 & 2010 Apr 8-9 & 21.0\
COSMOS-C & NB$_{\rm H}$ & 100001 & $+$015710 & 13.0 & 0.8 & 2010 Apr 6-8 & 20.8\
COSMOS-D & NB$_{\rm H}$ & 100052 & $+$015715 & 14.2 & 0.8 & 2010 Apr 7-8 & 21.0\
COSMOS-E & NB$_{\rm H}$& 095907 & $+$022344 & 12.6 & 0.8 & 2010 Apr 3-6 & 20.9\
COSMOS-F & NB$_{\rm H}$ & 095907 & $+$021030 & 12.6 & 0.7 & 2010 Apr 4 & 21.1\
COSMOS-G & NB$_{\rm H}$ & 100146 & $+$022344 & 14.5 & 0.8 & 2010 Apr 4, 9 & 21.0\
COSMOS-H & NB$_{\rm H}$ & 100148 & $+$021051 & 14.0 & 0.8 & 2010 Apr 6, 9 & 20.8\
UKIDSS-UDS NE & NB$_{\rm H}$ & 021829 & $-$045220 & 18.2 & 0.9 & 2008 Sep 28-29; 2009 Aug 16-17; 2010 Jul 22 & 21.3\
UKIDSS-UDS NW & NB$_{\rm H}$ & 021736 & $-$045220 & 18.0 & 0.9 & 2008 Sep 25, 29; 2010 Jul 18, 22 & 20.9\
UKIDSS-UDS SE & NB$_{\rm H}$ & 021829 & $-$050553 & 25.2 & 0.8 & 2008 Sep 25, 28-29; 2009 Aug 16-17 & 21.5\
UKIDSS-UDS SW & NB$_{\rm H}$ & 021738 & $-$050534 & 19.6 & 0.9 & 2008 Oct-Nov; 2009 Aug 16-17; 2010 Jul 23 & 21.3\
COSMOS-NW(1) & NB$_{\rm K}$ & 100000 & $+$021030 & 24.3 & 0.9 & 2006 Dec 17, 19–20; 2008 May 11; 2009 Feb 27 & 21.0\
COSMOS-NE(2) DEEP & NB$_{\rm K}$ & 100052 & $+$021030 & 62.5 & 0.9 & 2006 May 20–21; Dec 20; 2008 Mar 6-9 & 21.3\
COSMOS-SW(3) & NB$_{\rm K}$ & 100000 & $+$022344 & 20.0 & 0.9 & 2006 May 22, 24, Dec 20; 2009 May 20 & 20.8\
COSMOS-SE(4) & NB$_{\rm K}$ & 100053 & $+$022344 & 19.5 & 0.9 & 2006 Nov 13–15, 30; Dec 16 & 20.9\
COSMOS-A & NB$_{\rm K}$ & 100001 & $+$023653 & 20.0 & 1.0 & 2011 Mar 19-26; 2012 Mar 20 & 21.0\
COSMOS-B & NB$_{\rm K}$ & 100054 & $+$023630 & 20.0 & 0.9 & 2011 Mar 25-26 & 20.8\
COSMOS-C & NB$_{\rm K}$ & 100001 & $+$015710 & 26.7 & 0.9 & 2011 Mar 27, 30; Apr 3-5, 16-18 & 20.9\
COSMOS-D & NB$_{\rm K}$ & 100052 & $+$015715 & 20.0 & 0.9 & 2011 Mar 30; Jun 6; 2012 Jan 5,18; Feb 26; Mar 2 & 20.9\
COSMOS-E & NB$_{\rm K}$ & 095907 & $+$022344 & 20.0 & 0.8 & 2011 Mar 30; May 18-23, 30 & 20.7\
COSMOS-F & NB$_{\rm K}$ & 095907 & $+$021030 & 20.0 & 0.9 & 2011 Dec 18; 2012 Mar 2, 17 & 20.8\
COSMOS-G & NB$_{\rm K}$ & 100146 & $+$022344 & 20.0 & 0.9 & 2012 Mar 18-19 & 20.9\
COSMOS-H & NB$_{\rm K}$ & 100148 & $+$021051 & 20.0 & 0.8 & 2012 Mar 19-20 & 20.7\
UKIDSS-UDS NE & NB$_{\rm K}$ & 021829 & $-$045220 & 19.2 & 0.8 & 2005 Oct 18; 2006 Nov 13-14 & 20.7\
UKIDSS-UDS NW & NB$_{\rm K}$ & 021736 & $-$045220 & 20.0 & 0.9 & 2006 Nov 11 & 20.8\
UKIDSS-UDS SE & NB$_{\rm K}$ & 021829 & $-$050553 & 18.7 & 0.8 & 2006 Nov 15-16 & 20.7\
UKIDSS-UDS SW & NB$_{\rm K}$ & 021738 & $-$050534 & 23.6 & 0.8 & 2007 Sep 30 & 20.9\
COSMOS-HAWK-I & H2 & 100000 & $+$021030 & 19.4 & 0.9 & 2009 Apr 10, 14-15, 18; May 13-14 & 21.5\
UKIDSS-UDS-HAWK-I & H2 & 021736 & $-$045220 & 19.1 & 1.0 & 2009 Aug 16, 19, 24, 27 & 21.7\
--------------------------------------------------------------------------------------------------------------------------- -- -- -- -- -- -- -- -- --
\[obs\]
One of the main aims of HiZELS is to provide measurements of the evolution of the H$\alpha$ luminosity function from $z=0.0$ to $z=2.23$ [but also other properties, such as clustering, environment and mass dependences; c.f. @SOBRAL10A; @SOBRAL10B; @GEACH12]. The first results [@G08 S09; S12] indicate that the H$\alpha$ luminosity function evolves significantly, mostly due to an increase of about one order of magnitude in L$_{\rm H\alpha}^*$, the characteristic H$\alpha$ luminosity, from the local Universe to $z=2.23$ (S09). In addition, [@SOBRAL10B] found that at $z=0.84$ the faint-end slope of the luminosity function ($\alpha$) is strongly dependent on the environment, with the H$\alpha$ luminosity function being much steeper in low density regions and much shallower in the group/cluster environments.
However, even though the progress has been quite remarkable, significant issues remain to be robustly addressed for a variety of reasons. For example, is the faint-end slope of the H$\alpha$ luminosity function ($\alpha$) becoming steeper from low to high redshift? Results from [@Hayes] point towards a steep faint-end slope at $z>2$. However, Hayes et al. did not sample the bright end, and have only targeted one single field over a relatively small area, and thus cosmic variance could play a huge role. [@Tadaki] find a much shallower $\alpha$ at $z\sim2$ using Subaru. Furthermore, measurements so far rely on different data, obtained down to different depths and using different selection criteria. Additionally, different ways of correcting for completeness [c.f. for example @CHU11], filter profiles or contamination by the \[N[ii]{}\]$_{\lambda\lambda6548,6583.6}$ lines can also lead to significant differences. How much of the evolution is in fact real, and how much is a result of different ways of estimating the H$\alpha$ luminosity function? This can only be fully quantified with a completely self-consistent multi-epoch selection and analysis. Another issue which still hampers the progress is overcoming cosmic variance and probing a very wide range of environments and stellar masses at $z>1$. Large samples of homogeneously selected star-forming galaxies at different epochs up to $z>2$ would certainly be ideal to provide strong tests on our understanding of how galaxies form and how they evolve.
![The broad- and narrow-band filter profiles used for the analysis. The narrow-band filters in the $z'$, $J$, $H$ and $K$ bands (typical FWHM of $\approx100-200$Å) trace the redshifted H$\alpha$ line at $z=0.4,0.84,1.47,2.23$ very effectively, while the (scaled) broad-band imaging is used to estimate and remove the contribution from the continuum. Note that because the filters are not necessarily located at the centre of the respective broad-band transmission profile, very red/blue sources can produce narrow-band excesses which mimic emission lines; this is corrected by estimating the continuum colour of each source and correcting for it. \[line\_frac\_fluxes\]](./figs/FILTERS.pdf){width="8.2cm"}
In order to clearly address the current shortcomings and provide the data that are required, we have undertaken by far the largest-area deep multi-epoch narrow-band H$\alpha$ surveys, over two different fields. By doing so, both faint and bright populations of homogeneously selected H$\alpha$ emitters at $z=0.4$, $0.84$, $1.47$ and $2.23$ have been obtained, using 4 narrow-band filters (see Figure 1 and Table 1). This paper presents the narrow-band imaging and results obtained with the NB921, NB$_{\rm J}$, NB$_{\rm H}$, NB$_{\rm K}$ and H$_2$ filters on Subaru, the United Kingdom InfraRed Telescope (UKIRT) and the Very Large Telescope (VLT), over a total of $\sim2\deg^2$ in the Cosmological Evolution Survey [COSMOS; @Scoville] and the SXDF Subaru-XMM–UKIDSS Ultra Deep Survey [UDS; @Lawrence] fields.
The paper is organised as follows: §2 describes the observations, data reduction, source extraction, catalogue production, selection of line emitters, and the samples of H$\alpha$ emitters. In §3, after estimating and applying the necessary corrections, the H$\alpha$ luminosity functions are derived at $z=0.4$, $0.84$, $1.47$ and $2.23$, together with an accurate measurement of their evolution at both bright and faint ends. In §4 the star formation rate density at each epoch is also evaluated and the star formation history of the Universe is presented. §4 also discusses the results in the context of galaxy formation and evolution in the last 11Gyrs, including the inferred stellar mass density growth. Finally, §5 presents the conclusions. An H$_0=70$kms$^{-1}$Mpc$^{-1}$, $\Omega_M=0.3$ and $\Omega_{\Lambda}=0.7$ cosmology is used. Narrow-band magnitudes in the near-infrared and the associated broad-band magnitudes are in the Vega system, except when noted otherwise (e.g. for colour-colour selections). NB921 and $z'$ magnitudes are given in the AB system (except in Table 2, where they are given in Vega for direct comparison).
DATA AND SAMPLES {#data_technique}
================
Optical NB921 imaging with Subaru {#observationes_NB921}
---------------------------------
Optical imaging data were obtained with Suprime-Cam using the NB921 narrow-band filter. Suprime-Cam consists of 10 CCDs with a combined field of view of $34'\times27'$ and with chip gaps of $\sim15''$. The NB921 filter is centered at 9196Å with a FWHM of 132Å. The COSMOS field was observed in service mode in December 2010 with four different pointings covering the central 1.1deg$^2$. Total exposure times were 2.9ks per pointing, composed of individual exposures of 360s dithered over 8 different positions. Observations are detailed in Table 2. The UDS field has also been observed with the NB921 filter [see @Ouchi10], and these data have been extracted from the archive. Full details of the data reduction and catalogue production of the UDS data were presented by S12 and the same approach was adopted for the COSMOS data. In brief, all the raw NB921 data were reduced with the Suprime-Cam Deep field REDuction package [[sdfred]{}, @Yagi2002; @Ouchi2004] and [iraf]{}. The combined images were aligned to the public $z'$-band images of Subaru-XMM Deep Survey or the COSMOS field and PSF matched (FWHM$=0.9''$). The NB921 zero points were determined using $z'$ data, so that the ($z'$-NB921) colours are consistent with a median of zero for $z'$ between 19 and 21.5 – where both NB921 and $z'$ images are unsaturated and have very high signal-to-noise ratios.
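The zero-point matching step described above (shifting the NB921 calibration so that the median $(z'-{\rm NB921})$ colour of bright, unsaturated sources is zero) can be sketched as follows. This is an illustrative reconstruction, not the actual reduction pipeline; the function interface and the use of NumPy are assumptions.

```python
import numpy as np

def match_zeropoint(z_mag, nb_mag, bright=19.0, faint=21.5):
    """Shift narrow-band magnitudes so that the median (z' - NB921)
    colour of calibration sources is zero.

    z_mag, nb_mag : arrays of matched z'-band and NB921 magnitudes.
    Only sources with 19 < z' < 21.5 (unsaturated, high S/N) are used,
    following the calibration range quoted in the text.
    """
    sel = (z_mag > bright) & (z_mag < faint)
    offset = np.median(z_mag[sel] - nb_mag[sel])
    return nb_mag + offset  # calibrated NB magnitudes

# Hypothetical example: NB magnitudes with a constant 0.3 mag
# zero-point error are corrected back onto the z' scale.
z = np.array([19.5, 20.0, 20.5, 21.0, 18.0, 22.5])
nb = z - 0.3                      # simulated zero-point error
nb_cal = match_zeropoint(z, nb)   # median (z' - NB) is now zero
```

The same median-colour matching is used later for the near-infrared frames (against 2MASS), so a single routine of this form would serve all bands.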
Source detection and photometry were performed using [SExtractor]{} [@SExtractor]. Sources were detected on each individual NB921 image and magnitudes measured with $2''$ and $3''$ diameter apertures. The $3''$ apertures are used to select and measure H$\alpha$ line fluxes: at $z=0.4$ the 3$''$ apertures measure the same physical area as 2$''$ apertures at $z=0.8,1.47,2.23$ ($\approx16$kpc), assuring full consistency. The $2''$ apertures are used to measure emission lines from sources at higher redshift (\[O[ii]{}\] at $z=1.47$, to match the NB$_{\rm H}$ H$\alpha$ measurement at the same redshift, and \[O[iii]{}\] at $z=0.84$ to match the NB$_{\rm J}$ H$\alpha$ survey). The average NB921 3$\sigma$ limiting magnitudes (in 2$''$ apertures) are given in Table \[obs\].
![The survey strategy used to cover the COSMOS field. The central pointings (1,2,3,4; see Table 2) were complemented with further pointings (A to H) to both increase the surveyed area, and increase the exposure time in the central area. The region delimited by the dashed line shows the NB921 coverage obtained in COSMOS. See Table 2 for details on the pointings, including exposure times. \[COSMOS\_SURVEY\]](./figs/COSMOS_ON_SKY.pdf){width="8.2cm"}
Near-infrared imaging with UKIRT {#observationes_UKIRT}
--------------------------------
The COSMOS and UKIDSS UDS fields were observed with WFCAM on UKIRT as summarised in Table 2, using the NB$_{\rm J}$, NB$_{\rm H}$ and NB$_{\rm K}$ narrow-band filters, with central wavelengths and FWHM given in Table 1. WFCAM has four $2048\times2048$ $0.4''$pixel$^{-1}$ detectors offset by $\sim20'$, resulting in a non-contiguous field of view of $\sim27'\times27'$ which can be macrostepped four times to cover a contiguous region of $\sim55'\times55'$. Observations were conducted over 2006–2012, covering $1.6$deg$^2$ (NB$_{\rm J}$) and $2.34$deg$^2$ (NB$_{\rm H}$ and NB$_{\rm K}$) over the COSMOS and the UDS fields (see Table 2).
The coverage over UDS is a simple mosaic obtained with 4 different pointings, covering a contiguous region of $\sim55'\times55'$. For the COSMOS field, an initial 0.8deg$^2$ coverage obtained with 4 WFCAM pointings was complemented in NB$_{\rm H}$ and NB$_{\rm K}$ by 8 further WFCAM pointings, macro-jittered to obtain a combined 1.6 deg$^2$ coverage with increasing exposure time per pixel towards the centre of the field (see Figure \[COSMOS\_SURVEY\]). Part of the central region ($\sim0.2$deg$^2$) benefits further from some significant extra deep data both in NB$_{\rm H}$ and NB$_{\rm K}$ (see Table 2), leading to a much higher total exposure time.
A dedicated pipeline has been developed for HiZELS (PfHiZELS, c.f. S09 for more details). The pipeline has been modified and updated since S09 mostly to 1) improve the flat-fielding[^5] and 2) provide more accurate astrometric solutions for each individual frame which result in a more accurate stacking[^6]. The updated version of the pipeline (PfHiZELS2012) has been used to reduce all UKIRT narrow band data (NB$_{\rm J}$, NB$_{\rm H}$ and NB$_{\rm K}$), including those already presented in previous papers. This approach guarantees a complete self-consistency and takes advantage of the improved reduction which, in some cases, is able to go deeper by $\approx0.2$mag when compared to the data reduced by the previous version of the pipeline (e.g. S09).
For the COSMOS field, in order to co-add frames taken with different WFCAM cameras (due to the survey strategy, see Figure \[COSMOS\_SURVEY\]), [scamp]{} is used (in combination with SDSS-DR7) to obtain accurate astrometry solutions which account for distortions in each stack (in addition to individual frames being corrected prior to combining) before co-adding different fields. The typical rms is $<0.1''$, using on average $\sim500$ sources per chip. By following this approach, even at the largest radial distances ($r>1000$ pix) from the centre of the images the PSF/ellipticity remains unchanged by the stacking, and the data over areas with double/triple the exposure time are found to be deeper by (on average) 0.3–0.4 mag, with no radial change in the PSF.
Narrow-band images were photometrically calibrated (independently) by matching $\sim100$ stars per frame with $J$, $H$ and $K$ between the 12th and 16th magnitudes from the 2MASS All-Sky catalogue of Point Sources [@2MASS] which are unsaturated in the narrow-band images. WFCAM images are affected by cross-talk and other artifacts caused by bright stars: accurate masks are produced in order to reject such regions. Sources were extracted using [SExtractor]{} [@SExtractor], making use of the masks. Photometry was measured in apertures of $2''$ diameter which at $z=0.8-2.2$ recover H$\alpha$ fluxes over $\approx16$kpc. The average 3$\sigma$ depths of the entire set of NB frames vary significantly, and are summarised in Table 2. The total numbers of sources detected with each filter are given in Table 3. Note that the central region of the COSMOS NB$_{\rm H}$ and NB$_{\rm K}$ coverage benefits from a much higher total exposure time per pixel, resulting in data that are deeper by 0.3–0.4mag (on average) than the outer regions.
Near-infrared H$_2$ imaging with HAWK-I {#observationes_HAWKI}
---------------------------------------
The UKIDSS UDS and COSMOS fields were observed with the HAWK-I instrument [@Pirard; @Casali06] on the VLT during 2009. A single dithered pointing was obtained in each of the fields using the H$_2$ filter, characterised by $\lambda_c = 2.124\,\umu$m and $\delta\lambda = 0.030\,\umu$m (note that the filter is slightly wider than that on WFCAM). Individual exposures were of 60s, and the total exposure time per field is 5 hours. Table 2 presents the details of the observations and depth reached.
Data were reduced using the HAWK-I ESO pipeline recipes, following an identical reduction scheme/procedure to the WFCAM data. The data have also been distortion corrected and astrometrically calibrated before combining, using the appropriate pipeline recipes. After combining all the individual reduced frames it is possible to obtain a contiguous image of $\approx7.5\times7.5$arcmin$^2$ in each of the fields. There are, nonetheless, small regions with slightly lower exposure time per pixel, associated with the chip gaps at certain positions. Because of the availability of the very wide WFCAM imaging, regions in the HAWK-I combined images for which the exposure time per pixel is $<80$% of the total are not considered. Frames are photometrically calibrated using 2MASS as a first pass, and then using the UDS and COSMOS $K_s$ calibrated images to guarantee a median zero ($K$-NB$_K$) colour for all magnitudes probed, as this procedure provides a larger number of sources. Similarly to the procedure used for the WFCAM data, sources were extracted using [SExtractor]{} and photometry was measured in apertures of $2''$ diameter.
Narrowband excess selection {#narrowB_exc_selection}
---------------------------
In order to select potential line emitters, broad-band ($BB$) imaging is used in the $z'$, $J$, $H$ and $K_s$ bands to match narrow-band ($NB$) imaging in the NB921, NB$_{\rm J}$, NB$_{\rm H}$ and NB$_{\rm K}$/H$_2$, respectively. Count levels on the broad-band images are scaled down to match the counts (of 2MASS sources) for each respective narrow-band image, in order to guarantee a median zero colour, and a common counts-to-magnitude zero point. Sources are extracted from $BB$ images using the same aperture sizes used for $NB$ images and matched to the $NB$ catalogue with a search radius of $<0.9''$. Note, however, that none of the narrow-band filters fall at the centre of the broad-band filters (see Figure 1). Thus, objects with significant continuum colours will not have $BB-NB=0$; this can be corrected with broad-band colours (c.f. S12), in order to guarantee that $BB-NB$ distribution is centred on 0 and has no dependence on continuum broad-band colours. Average colour corrections[^7] are given by:
$\rm(z'-NB921)_{AB}=(z'-NB921)_{0,AB}-0.05(J_{AB}-z'_{AB})-0.04$
$\rm(J-NB_{\rm J})=(J-NB_{\rm J})_0-0.09(z'_{AB}-J_{AB})+0.11$
$\rm(H-NB_{\rm H})=(H-NB_{\rm H})_0+0.07(J_{AB}-H_{AB})+0.06$
$\rm(K-NB_{\rm K})=(K-NB_{\rm K})_0-0.02(H_{AB}-K_{AB})+0.04$
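As an illustrative sketch, a linear correction of the form above can be applied to the raw narrow-band excess as follows. The coefficients are those of the relations quoted above; the function interface and the example source values are assumptions.

```python
def corrected_excess(bb_nb, colour, slope, const):
    """Apply a linear continuum-colour correction to a raw
    broad-band minus narrow-band excess (BB - NB)_0.

    bb_nb  : raw (BB - NB)_0 excess colour of a source
    colour : continuum broad-band colour of the source (AB)
    slope, const : coefficients of the relations given in the text
    """
    return bb_nb + slope * colour + const

# Example for the K/NB_K pair, following
#   (K - NB_K) = (K - NB_K)_0 - 0.02 (H - K)_AB + 0.04
k_nb0 = 0.50   # hypothetical raw excess
h_k = 0.8      # hypothetical (H - K)_AB continuum colour
k_nb = corrected_excess(k_nb0, h_k, -0.02, 0.04)
# 0.50 - 0.02*0.8 + 0.04 = 0.524
```

The corrected excess, rather than the raw one, then enters the selection criteria described next, removing the dependence of $BB-NB$ on continuum colour.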
Potential line emitters are then selected according to the significance of their $(BB-NB)$ colour, as they will have $(BB-NB)>0$. True emitters are distinguished from those with positive colours due to the scatter in the magnitude measurements by quantifying the significance of the narrow-band excess. The parameter $\Sigma$ [see @Bunker95] quantifies the excess compared to the random scatter expected for a source with zero colour, as a function of narrow-band magnitude (see e.g. S09), and is given by:
$$\Sigma=\frac{1-10^{-0.4(BB-NB)}}{10^{-0.4(ZP-NB)}\sqrt{\pi r_{\rm ap}^2(\sigma_{\rm NB}^2+\sigma_{\rm BB}^2)}},$$
where ZP is the zero point of the $NB$ (and $BB$, as those have been scaled to have the same ZP as $NB$ images), $r_{\rm ap}$ is the aperture radius (in pixels) used, and $\sigma_{\rm NB}$ and $\sigma_{\rm BB}$ are the rms (per pixel) of the $NB$ and $BB$ images, respectively.
Here, potential line emitters are selected if $\Sigma>3.0$ (see Figure 3). The spread at the brighter end (narrow-band magnitudes which are not saturated, but for which the scatter is not affected by errors in the magnitude, i.e., much brighter than the limit of the images) is quantified for each data-set and frame, and the minimum $(BB-NB)$ colour limit over bright magnitudes is set to $3\times$ the standard deviation of the excess colour over such magnitudes ($s_b$). A common rest-frame EW limit of EW$_0=$25Å is applied, guaranteeing a limit higher than the $3\times s_b$ dispersion over bright magnitudes in all bands. The combined selection criteria guarantee a clean selection of line emitters and, most importantly, ensure that the samples of H$\alpha$ emitters are selected down to the same rest-frame EW, allowing one to quantify the evolution across cosmic time. An example of this selection for the full COSMOS NB$_{\rm K}$ data is shown in Figure 3 and the reader is referred to e.g. S09 and S12 for further examples.
![Narrow-band excess as a function of narrow-band magnitude for NB$_{\rm K}$ (Vega magnitudes) data over the full COSMOS coverage (1.6deg$^2$). These show $>3\sigma$ detections in narrow-band imaging and the solid line presents the average 3.0$\Sigma$ colour significance for NB$_{\rm K}$ (for the average depth; note that the analysis is done individually for each frame, and that there are variations in depth of up to 0.6 mag). H$\alpha$ sources shown are narrow-band emitters which satisfy all the H$\alpha$ selection criteria fully described in Section 2.6. The horizontal dashed line presents the equivalent width cut used for NB$_{\rm K}$ data – corresponding to a $z=2.23$ rest-frame EW limit of 25Å for H$\alpha$+\[N[ii]{}\]; this is the same rest-frame equivalent width used at all redshifts. \[photoz\]](./figs/NBK_colour_mag_COSMOS.jpg){width="8.3cm"}
As a further check on the selection criteria, the original imaging data are used to produce $BB$ and $NB$ postage stamp images of all the sources. The $BB$ is subtracted from the $NB$ image leaving the residual flux. From visual inspection these residual images contain obvious narrow band sources and it is found that the remaining flux correlates well with the catalogue significance.
The samples of NB line emitters {#NB_emitters}
-------------------------------
Narrow-band detections below the estimated 3$\sigma$ detection threshold were not considered. By using colour-colour diagnostics (see S12), potential stars are identified in the sample and rejected as well (the small fraction varies from band to band; see Table 3). The sample of remaining potential emitters ($\Sigma>3$ & EW$_0>25$Å; see Table 3 for numbers) is visually checked to identify spurious sources, artifacts which might not have been masked, or sources being identified in very noisy regions (see Table 3). Sources classed as spurious/artifacts are removed from the sample of potential emitters. The final samples of line emitters are then derived.
As a further test of the reliability of the line emitter samples, it can be noted that since the HAWK-I observations are both deeper and obtained over a larger redshift slice (due to a wider filter profile) when compared to WFCAM, they should be able to confirm all NB$_{\rm K}$ emitters over the matched area. This is confirmed, as all 10 emitters which are detected with WFCAM in the matched area are recovered by HAWK-I data as well.
The catalogues, containing all narrow-band emitter candidates, are presented in Appendix A. The catalogues provide IDs, coordinates, narrow-band and broad-band magnitudes, estimated fluxes and observed EWs. Further details and information are available on the HiZELS website.
![image](./figs/PHOTOZ_ALL_NBs.pdf){width="16.8cm"}
The photometric redshift [Photo-$z$s from @Ilbert09; @Cirasuolo10] distributions of the sources selected with the 4 narrow-band filters are presented in Figure \[photoz\]. The photometric redshifts show clear peaks associated with H$\alpha$, H$\beta$/\[O[iii]{}\]$_{\lambda\lambda 4959,5007}$, and \[O[ii]{}\]$_{\lambda3727}$ (see Figure \[photoz\]), together with further emission lines such as Paschen lines and Ly$\alpha$. Spectroscopic redshifts are also available for a fraction of the selected line emitters [@Lilly09; @Yamada05; @Bart_Simpson06; @Geach008; @van_breu07; @Ouchi2008; @Smail08; @Ono09][^8] – these will be discussed in the following Sections.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -- -- -- -- -- -- -- -- -- -- -- --
Filter & Field & Detect & W/ Colours & Emitters & Stars & Artifacts & H$\alpha$ & $z$ H$\alpha$ conf. & Conf. 2 lines & Volume & H$\alpha$ SFR\
NB & C/U & 3$\sigma$ & \# & 3$\Sigma$ & \# & \# & \# & \# & \# & 10$^4$Mpc$^3$ & M$_\odot$yr$^{-1}$\
NB921 & C & 155542 & 148702 & 2819 & 247 & – & 521 & 38 & – & 5.1 & 0.03\
NB921 & U & 236718 & 198256 & 6957 & 775 & – & 1221 & 8 & – & 5.1 & 0.01\
NB$_{\rm J}$ & C & 32345 & 31661 & 700 & 40 & 46 & 425 & 81 & 158 & 7.9 & 1.5\
NB$_{\rm J}$ & U & 21233 & 19916 & 551 & 49 & 30 & 212 & 14 & 79 & 11.1 & 1.5\
NB$_{\rm H}$ & C & 65912 & 64453 & 723 & 60 & 63 & 327 & 28 & 158 & 49.1 & 3.0\
NB$_{\rm H}$ & U & 26084 & 23503 & 418 & 23 & 5 & 188 & 18 & 188 & 22.8 & 5.0\
NB$_{\rm K}$ & C & 99395 & 98085 & 1359 & 78 & 56 & 588 & 4 & 125 & 54.8 & 5.0\
NB$_{\rm K}$ & U & 28276 & 26062 & 399 & 28 & 10 & 184 & 2 & 30 & 22.4 & 10.0\
H$_2$ & C & 1054 & 940 & 52 & 3 & 2 & 31 & 0 & 3 & 0.9 & 3.5\
H$_2$ & U & 1193 & 1059 & 33 & 7 & 1 & 14 & 0 & 0 & 0.9 & 3.5\
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -- -- -- -- -- -- -- -- -- -- -- --
\[numbers\]
![image](./figs/BRiK_ALL.jpg){width="17.9cm"}
Selecting H$\alpha$ emitters {#selecting_Ha}
----------------------------
Samples of H$\alpha$ emitters at the various redshifts are selected using a combination of broad-band colours (colour–colour selections) and photometric redshifts (when available). The colour–colour separations of emitters are different for each redshift, and for some redshifts two sets of colour–colour separations are used to reduce contamination to a minimum. Additionally, spectroscopically confirmed sources are included, and sources confirmed to be other emission lines are removed from the samples – note that at all four redshifts the number of spectroscopically confirmed H$\alpha$ sources missed by the selection (and added back) is typically $<10$, as is the number of spectroscopically rejected sources. Therefore, the decrease in the availability of spectroscopic redshifts with increasing redshift does not introduce any bias.
Additionally, sources found to be line emitters in two (or three, for H$\alpha$ emitters at $z=2.23$) bands, making them robust H$\alpha$ candidates, are also included in the samples, even if they have been missed by the colour–colour and photometric selection (although it is found that only very few real H$\alpha$ sources are missed by the selection criteria). Table \[numbers\] provides the number of sources, including spectroscopically confirmed ones, for each field at each redshift. Within the samples of narrow-band excess sources, 20 (NB921), 54 (NB$_{\rm J}$), 49 (NB$_{\rm H}$) and 47 (NB$_{\rm K}$) per cent are H$\alpha$ emitters at redshifts $z=0.4$, 0.84, 1.47 and 2.23, respectively.
### H$\alpha$ emitters at $z=0.4$ {#selecting_Ha04}
The selection of H$\alpha$ emitters at $z=0.4$ is primarily done by selecting sources for which $0.35<z_{\rm phot}<0.45$. For further completeness, the $BRiK$ ($B-R$ vs $i-K$) colour-colour selection (see Figure \[BRIKS\] and S12) is then applied to recover real H$\alpha$ sources without photometric redshifts. The selection method can then be assessed using spectroscopic redshifts [from $z$COSMOS; @Lilly09], which are available for 38 sources. Thirty-six sources are confirmed to be at $z=0.391-0.412$, while 2 sources are \[N[ii]{}\] emitters. This implies a very high completeness of the sample, and a contamination of $\sim5$ per cent over the entire sample. Contaminants have been removed, and spectroscopic sources added. A total of 1742 H$\alpha$ emitters at $z=0.4$ are selected.
### H$\alpha$ emitters at $z=0.84$ {#selecting_Ha084}
Sources are selected to be H$\alpha$ emitters at $z=0.84$ if $0.75<z_{\rm phot}<0.95$ or if they satisfy the $BRiK$ (see Figure \[BRIKS\]; S09) colour-colour selection for $z\sim0.8$ sources. Additionally, sources with $1.3<z_{\rm phot}<1.7$ (likely H$\beta$/\[O[iii]{}\] $z\approx1.4$ emitters) and $2.0<z_{\rm phot}<2.5$ (likely $z=2.23$ \[O[ii]{}\] emitters) are removed to further reduce contamination from higher redshift emitters. Sources with spectroscopically confirmed redshifts are included and sources with other spectroscopically confirmed lines are removed. In practice, 6 H$\alpha$ spectroscopic sources missed by the selection criteria are introduced in the sample; 7 sources found in the sample are not H$\alpha$ – a mix of \[S[ii]{}\], \[N[ii]{}\] and \[O[iii]{}\] emitters. A total of 95 sources are spectroscopically confirmed as H$\alpha$, while 237 sources are confirmed as dual H$\alpha$-\[O[iii]{}\] emitters. A total of 637 H$\alpha$ emitters at $z=0.84$ are selected.
### H$\alpha$ emitters at $z=1.47$ {#selecting_Ha147}
Note that, as described in S12, the NB$_{\rm H}$ filter can be combined with NB921 (probing the \[O[ii]{}\] emission line), to provide very clean, complete surveys of $z=1.47$ line emitters, as the filter profiles are extremely well-matched for a dual H$\alpha$-\[O[ii]{}\] survey. By applying the dual narrow-band selection, a total of 346 H$\alpha$-\[O[ii]{}\] emitters are robustly identified in COSMOS and UDS. However, the dual narrow-band selection is only complete ($>98$% complete) if the NB921 survey probes down to \[O[ii]{}\]/H$\alpha$ $\sim0.1$ (c.f. S12), which is not the case for the deepest NB$_{\rm H}$ COSMOS coverage. Additionally, only the central 1.1deg$^2$ region of the COSMOS field has been targeted with the NB921 filter.
In order to select H$\alpha$ emitters in areas where the NB921 is not deep enough to provide a complete selection, or where NB921 data are not available, the following steps are taken. Sources are selected if $1.35<z_{\rm phot}<1.55$, or if they satisfy the $z\sim1.5$ $BzK$ ($B-z$ vs. $z-K$) criteria defined in Figure \[BRIKS\], which is able to recover the bulk of the dual narrow-band emitters and sources with high quality photometric redshifts of $z\sim1.5$. However, the $z\sim1.5$ $BzK$ selection, although highly complete, is still contaminated by higher redshift emitters. In order to exclude likely higher redshift sources an additional $ziK$ ($i-z$ vs. $z-K$; see S12) colour-colour separation is used (see Figure \[BRU\]), in combination with rejecting sources with $z_{\rm phot}>1.8$.
![$Top$: Colour-colour separation of $z=1.47$ H$\alpha$ emitters and those at higher redshift ($z\sim2.3$ H$\beta$/\[O[iii]{}\], $z\sim3.3$ \[O[ii]{}\]); separation is obtained by $(z-K)<5(i-z)-0.4$. $Bottom$: colour-colour separation of $z\sim3$ H$\beta$/\[O[iii]{}\] emitters from $z=2.23$ H$\alpha$ emitters, given by $(B-R)<-0.55(U-B)+1.25$. Using the B$z$K colour-colour separation only results in some contamination of the H$\alpha$ sample with higher redshift emitters – the $B-R$ vs $U-B$ colour-colour separation allows that contamination to be greatly reduced. The Figure also indicates the location of each final H$\alpha$ selected source in the colour-colour plot. Note that sources shown as $z_{phot}\sim1.5$ and $z_{phot}\sim2.2$ are those with photometric redshifts which are within $\pm0.2$ of those values. \[BRU\]](./figs/BRU.pdf){width="7.8cm"}
The selection leads to a total sample of 515 robust H$\alpha$ emitters at $z=1.47$, by far the largest sample of H$\alpha$ emitters at $z\sim1.5$. Comparing the double NB921 and NB$_{\rm H}$ analysis with the colour and photo-$z$ selection (for sources for which the NB921 data are deep enough to detect \[O[ii]{}\]) shows that the colour and photo-$z$ selection by itself results in a contamination of $\approx15$ per cent, and a completeness of $\approx85$ per cent. However, as the double NB921 and NB$_{\rm H}$ analysis has been used wherever the data are available and sufficiently deep, the contamination of the entire sample is estimated to be lower ($\approx5$ per cent), and the completeness higher $(\approx95$ per cent).
### H$\alpha$ emitters at $z=2.23$ {#selecting_Ha223}
As can be seen from the photometric redshift distribution in Figure \[photoz\], the high quality photo-$z$s in the COSMOS and UDS fields can provide a powerful tool to select $z=2.23$ sources. However, the sole use of the photometric redshifts cannot result in a clean, high-completeness sample of $z=2.23$ H$\alpha$ emitters, not only because reliable photometric redshifts are not available for 35 per cent of the NB$_{\rm K}$ emitters, at the faint end, but also because the errors in the photometric redshifts will be much higher at $z\sim2.2$ than at lower redshift (particularly as one is selecting star-forming galaxies). Nevertheless, although spectroscopy only exists for a few H$\alpha$ $z=2.23$ sources ($z$COSMOS and UDS compilation), double line detections between NB$_{\rm K}$ and one of NB$_{\rm H}$ (\[O[iii]{}\]) and/or NB$_{\rm J}$ (\[O[ii]{}\]) allow the identification of 155 secure H$\alpha$ emitters. These can be used to optimise the selection criteria and estimate the completeness and contamination of the sample.
The selection of H$\alpha$ emitters is done in the same way for both COSMOS and UDS, and for both WFCAM and HAWK-I data. An initial sample of $z=2.23$ H$\alpha$ emitters is obtained by selecting sources for which $1.7<z_{\rm phot}<2.8$, where the limits were determined using the distribution of photometric redshifts found for confirmed H$\alpha$ emitters at $z=2.23$ (this selects 525 sources, of which 3 are spectroscopically confirmed to be contaminants and 87 are double/triple line emitters and thus robust $z=2.23$ H$\alpha$ emitters). Because some sources lack reliable photometric redshifts, the colour selection $(z-K)>(B-z)$ is used to recover additional $z\sim2$ faint emitters. This colour-colour selection is a slightly modified version of the standard $BzK$ colour-colour separation [@Daddi04][^9]. It selects 274 additional H$\alpha$ candidates (and re-selects 90% of those selected through photometric redshifts), and guarantees a high completeness of the H$\alpha$ sample (see Figure \[BRIKS\]). However, the $BzK$ selection also selects $z\sim3.3$ H$\beta$/\[O[iii]{}\] emitters very effectively, and the contamination by such emitters needs to be minimised. In order to do this, sources with $z_{\rm phot}>3.0$ are excluded (121 sources). For sources for which a photometric redshift does not exist, a rest-frame UV colour-colour separation is used ($B-R$ vs. $U-B$; see Figure \[BRU\], probing the rest-frame UV), capable of broadly separating $z=2.23$ and $z\sim3.3$ emitters due to their different UV colours (see Figure \[BRU\]; this removes a further 27 sources). Three further sources are removed as they are confirmed contaminants (Pa$\beta$, \[S[iii]{}\] and \[O[iii]{}\] at $z=0.65$, $z=1.23$ and $z=3.23$, respectively).
Overall, the selection leads to a total sample of 807 H$\alpha$ emitters, by far the largest sample of $z=2.23$ H$\alpha$ emitters ever obtained, and an order of magnitude larger than the previous largest samples presented by [@G08] and [@Hayes]. With the limited spectroscopy available, it is difficult to accurately determine the completeness and contamination of the sample, but based on the double/triple-line detections (155) and the confirmed contaminants which have been removed (6), the completeness is estimated to be $>90$ per cent, and contamination is likely to be $<10$ per cent.
Analysis and Results: H$\alpha$ LF over 11 Gyrs {#LFs}
================================================
Removing the contamination by the \[N[ii]{}\] line {#cont_adj_lines}
--------------------------------------------------
Due to the width of all filters in detecting the H$\alpha$ line, the adjacent \[N[ii]{}\] lines can also be detected when the H$\alpha$ line is detected at the peak transmission of the filter. A correction for the \[N[ii]{}\] line contamination is therefore done, following the relation given in S12. The relation has been derived to reproduce the full SDSS relation between the average $\log$(\[N[ii]{}\]/H$\alpha)$, $f$, and $\log$\[EW$_0$(\[N[ii]{}\]+H$\alpha$)\], $E$: $f=-0.924+4.802E-8.892E^2+6.701E^3-2.27E^4+0.279E^5$. This relation is used to correct all H$\alpha$ fluxes at $z=0.4$, $0.84$, $1.47$ and $2.23$. The median correction (the median \[N[ii]{}\]/(\[N[ii]{}\]+H$\alpha$)) is $\approx0.25$.
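The correction above can be sketched in a few lines of code (a minimal illustration, not the survey pipeline; it assumes $E=\log_{10}$ of the rest-frame EW of \[N[ii]{}\]+H$\alpha$ in Å, and the function names are ours):

```python
import math

def nii_fraction(ew0_total):
    """Average f = log10([NII]/Halpha) from the SDSS-calibrated polynomial,
    with E = log10 of the rest-frame EW of [NII]+Halpha (Angstrom)."""
    E = math.log10(ew0_total)
    return (-0.924 + 4.802*E - 8.892*E**2 + 6.701*E**3
            - 2.27*E**4 + 0.279*E**5)

def halpha_flux(blended_flux, ew0_total):
    """Strip the [NII] contribution from a blended [NII]+Halpha line flux."""
    ratio = 10**nii_fraction(ew0_total)   # [NII]/Halpha
    return blended_flux / (1.0 + ratio)   # keep only the Halpha part
```

For a typical rest-frame EW of $\sim100\,$Å this removes of order 20 per cent of the blended flux, comparable to the median correction quoted above.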
![Average completeness of the various narrow-band surveys as a function of H$\alpha$ flux. Note that the completeness of individual fields/frames for each band can vary significantly due to the survey strategy (e.g. see difference between one of the deep pointings in COSMOS, NB$_{\rm H}$ D and the average over all NB$_{\rm H}$ fields), and thus the completeness corrections are computed for each individual field. \[line\_frac\_fluxes\]](./figs/NBH_NB921_completeness.jpg){width="8.2cm"}
Completeness corrections: detection and selection {#complet}
-------------------------------------------------
It is fundamental to understand how complete the samples are as a function of line flux. This is done using simulations, as described in S09 and further detailed in S12. The simulations consider two major components driving the incompleteness: i) the detection completeness (which depends on the actual imaging depth and the apertures used) and ii) the incompleteness resulting from the selection (both EW and colour significance).
The detection completeness is estimated by placing sources with a given magnitude at random positions on each individual narrow-band image, and studying the recovery rate as a function of the magnitude of the source. For the large Subaru frames, 2500 sources are added for each magnitude, for WFCAM images 500, and for HAWK-I frames 100 sources are added for each realisation.
The individual line completeness estimates are performed in the same way for the data at the four different redshifts. A set of galaxies is defined, which is consistent with being at the approximate redshift (applying the same photometric redshift + colour-colour selections to all NB detected sources with no significant excess) but not having emission lines above the detection limit. Emission lines are then added to the sources, and the study of the recovery fraction is undertaken. The average completeness corrections as a function of H$\alpha$ flux are presented in Figure \[line\_frac\_fluxes\]. Note that the simulations include the different EW/colour cuts used in selecting line emitters in all bands, and therefore take the EW limits and colour selection into account. Also note that because of the very different distributions of magnitudes of H$\alpha$ emitters from low to higher redshift, the EW/colour cut is a much more important source of incompleteness for low redshift H$\alpha$ emitters than for the highest redshift, $z=2.23$.
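A toy version of such a recovery-rate estimate can illustrate the principle (here a Gaussian flux-noise stand-in replaces the real image injection; the noise model and names are illustrative only — the survey corrections come from injecting sources into the actual frames):

```python
import random

def recovery_fraction(flux, flux_limit_3sigma, n_trials=10000, seed=42):
    """Monte-Carlo sketch of a detection-completeness estimate: add
    Gaussian noise to sources of a given line flux and count how often
    they still exceed the 3-sigma detection threshold."""
    rng = random.Random(seed)
    sigma = flux_limit_3sigma / 3.0   # 1-sigma flux noise
    detected = sum(
        1 for _ in range(n_trials)
        if flux + rng.gauss(0.0, sigma) > flux_limit_3sigma
    )
    return detected / n_trials
```

As expected, sources at the nominal 3$\sigma$ limit are recovered $\sim50$ per cent of the time, while sources well above it are recovered essentially always — the same qualitative shape as the curves in Figure \[line\_frac\_fluxes\].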
It should be noted that because of the differences in depth, simulations are conducted for each individual frame, and the appropriate completeness corrections applied accordingly when computing the luminosity function. For any given completeness correction applied, an uncertainty of 20% of the size of the applied correction is added in quadrature to the other uncertainties to account for the uncertainties in deriving such corrections.
Volume
------
At $z=0.4$, the total area surveyed is 1.68 deg$^2$. The NB921 filter, centred at 9196Å and with a FWHM of 132Å can probe the H$\alpha$ line (using the top hat approximation) from $z_{\rm min}=0.3907$ to $z_{\rm max}=0.4108$. This means that the narrow-band filter surveys an H$\alpha$ volume of $5.1\times10^4$Mpc$^3$deg$^{-2}$. The H$\alpha$ survey therefore probes a total volume of $8.8\times10^4$Mpc$^3$.
The NB$_{\rm J}$ filter (FWHM of 140Å) can be approximated by a top hat, probing $z_{\rm min}=0.8346$ to $z_{\rm max}=0.8559$ for H$\alpha$ line detections, resulting in surveying $1.5\times10^5$Mpc$^3$deg$^{-2}$. As the total survey has covered 1.3deg$^2$, it results in a total volume of $1.9\times10^5$Mpc$^3$. Assuming the top hat (TH) model for the NB$_{\rm H}$ filter (FWHM of 211.1Å, with $\lambda^{TH}_{min}=1.606\,\umu$m and $\lambda^{TH}_{max}=1.627\,\umu$m), the H$\alpha$ survey probes a (co-moving) volume of $3.3\times10^5$Mpc$^3$deg$^{-2}$. Volumes are computed on a field-by-field basis as each field reaches a different depth (although the difference in volume is only important at the faintest fluxes). The total volume of the survey is $7.4\times10^5$Mpc$^3$. The volume down to the deepest depth is $3.9\times10^4$Mpc$^3$ (see Table 4 for details). The NB$_{\rm K}$ filter is centred on $\lambda=2.121$$\umu$m, with a FWHM of 210Å. Using the top hat approximation for the filter, it can probe the H$\alpha$ emission line from $z_{\rm min}=2.2147$ to $z_{\rm max}=2.2467$, so with a $\Delta z=0.016$. The H$_2$ filter therefore probes a volume of $3.8\times10^5$Mpc$^3$deg$^{-2}$.
The HAWK-I survey uses a slightly different H$_2$ filter, centred on $\lambda=2.125$$\umu$m, with FWHM$=300$Å. A top hat is an even better approximation of the filter profile, with $z_{\rm min}=2.2139$ to $z_{\rm max}=2.2596$ for H$\alpha$ line detections. The filter effectively probes $5.5\times10^5$Mpc$^3$deg$^{-2}$. Each HAWK-I pointing covers only about 13.08arcmin$^2$, and so the complete HAWK-I survey (COSMOS and UDS, 0.0156deg$^2$) probes a total volume of $1.7\times10^4$Mpc$^3$. Note that the survey conducted by [@Hayes] (using a narrower NB filter), although deeper, only probed $5.0\times10^3$Mpc$^3$, so a factor of 3 smaller in volume and over a single field. Table 4 presents a summary of the volumes probed as a function of H$\alpha$ luminosity and the number of sources detected at each redshift.
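The top-hat volumes quoted above can be reproduced with a short numerical sketch (flat $\Lambda$CDM with $H_0=70\:$kms$^{-1}$Mpc$^{-1}$, $\Omega_{\rm m}=0.3$ is assumed here; the trapezoidal integrator and function names are ours):

```python
import math

C_KMS = 299792.458
H0, OM, OL = 70.0, 0.3, 0.7                         # assumed cosmology
FULL_SKY_DEG2 = 4 * math.pi * (180 / math.pi)**2    # ~41253 deg^2

def comoving_distance(z, n=1000):
    """Line-of-sight comoving distance (Mpc) in flat LambdaCDM,
    by trapezoidal integration of c/H(z')."""
    dz = z / n
    E = lambda zz: math.sqrt(OM * (1 + zz)**3 + OL)
    s = 0.5 * (1 / E(0.0) + 1 / E(z)) + sum(1 / E(i * dz) for i in range(1, n))
    return (C_KMS / H0) * s * dz

def tophat_volume_per_deg2(lam_min, lam_max, lam_rest=6562.8):
    """Comoving volume per square degree probed by a top-hat filter
    spanning lam_min..lam_max (Angstrom) for a line at lam_rest."""
    z_min = lam_min / lam_rest - 1.0
    z_max = lam_max / lam_rest - 1.0
    shell = (4 * math.pi / 3) * (comoving_distance(z_max)**3
                                 - comoving_distance(z_min)**3)
    return shell / FULL_SKY_DEG2
```

For the NB921 filter (9196Å centre, 132Å FWHM) this recovers $\approx5.1\times10^4\:$Mpc$^3$deg$^{-2}$, as quoted above.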
Filter Profiles: volume corrections {#filter_profiles}
-----------------------------------
None of the narrow band filters are perfect top hats (see Figure 1). In order to model the effect of this bias on estimating the volume (luminous emitters will be detectable over larger volumes – although, if seen in the filter wings, they will be detected as fainter emitters), a series of simulations is done, following S09 and S12. Briefly, a top hat volume selection is used to compute a first-pass (input) luminosity function and derive the best fit. The fit is used to generate a population of simulated H$\alpha$ emitters (assuming they are distributed uniformly across redshift); these are then folded through the true filter profile, from which a recovered luminosity function is determined. Studying the difference between the input and recovered luminosity functions shows that the number of bright emitters is underestimated, while faint emitters can be slightly overestimated (c.f. S09 for details), but the actual corrections are different for each filter and each input luminosity function. This allows correction factors to be estimated – these are then used to obtain the corrected luminosity function. Corrections are computed for each individual narrow-band filter.
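The folding step can be illustrated with a toy model (a Gaussian stand-in for the true measured transmission curve, with NB$_{\rm H}$-like numbers; the real corrections use the measured profiles of each filter):

```python
import math
import random

LAM_C, FWHM = 16165.0, 211.0   # NB_H-like centre/width (Angstrom), illustrative

def transmission(lam):
    """Gaussian stand-in for a measured narrow-band filter profile."""
    sigma = FWHM / 2.3548
    return math.exp(-0.5 * ((lam - LAM_C) / sigma) ** 2)

def fold_through_filter(true_fluxes, seed=1):
    """Place simulated emitters uniformly in wavelength (i.e. redshift)
    across the filter and return their attenuated, observed fluxes."""
    rng = random.Random(seed)
    return [f * transmission(LAM_C + rng.uniform(-1.5 * FWHM, 1.5 * FWHM))
            for f in true_fluxes]
```

Comparing the input fluxes to the folded ones shows directly why a bright emitter seen in the filter wings is recovered as a fainter source, which is the bias the volume corrections account for.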
Extinction Correction {#ext_corr}
---------------------
The H$\alpha$ emission line is not immune to dust extinction. Measuring the extinction for each source can in principle be done by several methods, one of which is the comparison between H$\alpha$ and far-infrared determined SFRs (see Ibar et al. in prep.), while the spectroscopic analysis of Balmer decrements also provides a very good estimate of the extinction. As shown in S12, the median \[O[ii]{}\]/H$\alpha$ line ratio of a large sample of galaxies can also be reasonably well calibrated (using Balmer decrement) as a dust extinction indicator (see Sobral et al. 2012 for more details). For the COSMOS $z=1.47$ sample, this results in $A_{\rm H\alpha}=0.8$mag (although there is a bias towards lower extinction due to the fact that the NB921 survey is not deep enough to recover sources with much higher extinctions). However, for UDS (where a sufficiently deep NB921 coverage is available) an A$_{\rm H\alpha}\approx1$mag of extinction at H$\alpha$ is shown to be an appropriate median correction at $z=1.47$ (see S12). That is also similar to what has been found at $z=0.84$ [@Garn2010a $A_{\rm H\alpha}\approx1.2$]. The dependence of extinction on observed luminosity is also relatively small (S12) at $z\sim1.5$ – therefore, for simplicity and for an easier comparison, a simple 1 mag of extinction is applied for the four redshifts and for all observed luminosities.
Note that S12 still find a relatively mild luminosity dependence, but one which is offset to the local Universe relation [e.g. @Hopkins] by 0.5 mag in $A_{\rm H\alpha}$. Nevertheless, one could interpret this differently, as a single relation that holds at both $z\sim1.5$ and $z\sim0$, provided that luminosities at both $z\sim0$ and $z\sim1.5$ are divided by $L^*_{\rm H\alpha}$ at the corresponding epochs; this would imply that the typical extinction does not depend on SFR or H$\alpha$ luminosity in an absolute manner, but rather that it depends on how star-forming or luminous a source is relative the normal star-forming galaxy at that epoch.
H$\alpha$ Luminosity Functions at $\bf z=0.40,0.84,1.47,2.23$ {#LF_Ha}
-------------------------------------------------------------
![image](./figs/LF_EVO.jpg){width="14.8cm"}
By taking all H$\alpha$ selected emitters at the four different redshifts, the H$\alpha$ luminosity function is computed at 4 very different cosmic times, reaching a common observed luminosity limit of $\approx10^{41.6}$ergs$^{-1}$ for the first time in a consistent way over $\sim11$Gyrs. As previously described, the method of S09 and S12 is applied to correct for the real profile (see Section \[filter\_profiles\]). Candidate H$\alpha$ emitters are assumed to be at $z=0.4$, $0.84$, $1.47$ and $2.23$ for luminosity distance calculations. Results can be found in Figure \[LF\_Halpha\_HALPHA\] and Table \[LF\_NUMBERS\]. Errors are Poissonian, but they include a further 20% of the total completeness corrections added in quadrature.
All derived luminosity functions are fitted with Schechter functions defined by three parameters $\alpha$, $\phi ^*$ and $L^*$:
$$\phi(L) \rm dL = \it \phi^* \left(\frac{L}{L^*}\right)^{\alpha} e^{-(L/L^*)} \rm d\it\left(\frac{L}{L^*}\right),$$
which are found to provide good fits to the data at all redshifts. In the $\log$ form, the Schechter function is given by:
$$\phi(L) \rm dL = \ln10 \, \it \phi^* \left(\frac{L}{L^*}\right)^{\alpha} e^{-(L/L^*)} \left(\frac{L}{L^*}\right)\rm d\log L.$$
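The two forms can be checked for consistency numerically (a minimal sketch; variable names are ours):

```python
import math

def schechter_linear(L, phi_star, L_star, alpha):
    """Schechter function per unit (L/L*): phi* (L/L*)^alpha exp(-L/L*)."""
    x = L / L_star
    return phi_star * x**alpha * math.exp(-x)

def schechter_log(logL, log_phi_star, log_L_star, alpha):
    """Schechter function per dex in luminosity:
    ln(10) phi* (L/L*)^(alpha+1) exp(-L/L*)."""
    x = 10**(logL - log_L_star)
    return math.log(10) * 10**log_phi_star * x**(alpha + 1) * math.exp(-x)
```

Since $\mathrm{d}(L/L^*)=\ln 10\,(L/L^*)\,\mathrm{d}\log L$, the log form is simply the linear form multiplied by $\ln 10\,(L/L^*)$, which the two functions reproduce.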
Schechter functions are fitted to each luminosity function. The best fits for the H$\alpha$ luminosity functions at $z=0.4-2.23$ are presented in Table \[lfs\_\_\], together with the uncertainties on the parameters (1$\sigma$). Uncertainties are obtained from either the 1$\sigma$ deviation from the best-fit, or the 1$\sigma$ variance of fits, obtained with a suite of multiple luminosity functions with different binning – whichever is higher (although they are typically comparable). The best-fit functions and their errors are also shown in Figure \[LF\_Halpha\_HALPHA\], together with the $z\approx0$ luminosity function determined by [@Ly2007] – which has extended the work by [@Gallego95] at $z\approx0$, for a local-Universe comparison. Deeper data from the literature are also presented for comparison; [@CHU11] for $z=0.8$, and [@Hayes] for $z=2.23$, after applying the small corrections to ensure the extinction corrections are consistent[^10].
![image](./figs/evo_PARAMS.pdf){width="17.cm"}
The results not only reveal a very clear evolution of the H$\alpha$ luminosity function from $z=0$ to $z=2.23$, but they also allow for a detailed investigation of exactly how the evolution occurs, in steps of $\sim2-3$Gyrs. The strongest evolutionary feature is the increase in $L_{\rm H\alpha}^*$ as a function of redshift from $z=0$ to $z=2.23$ (see Figure \[evo\_PARAMS\]), with the typical H$\alpha$ luminosity at $z\sim2$ ($L_{\rm H\alpha}^*$) being 10 times higher than locally. This is clearly demonstrated in Figure \[evo\_PARAMS\], which shows the evolution of the Schechter function parameters describing the H$\alpha$ luminosity function. The L$_{\rm H\alpha}^*$ evolution from $z\sim0$ to $z\sim2.2$ can be simply approximated as $\log\,L^*=0.45z+\log\,L^*_{z=0}$, with $\log\,L^*_{z=0}=41.87$ (see Figure \[evo\_PARAMS\]). At the very bright end ($L>4L^*$), and particularly at $z>1$, there seems to be a deviation from a Schechter function. Follow-up spectroscopy of such luminous H$\alpha$ sources has recently been obtained for a subset of the $z=1.47$ sample, and unveils a significant fraction of narrow and broad-line AGN (with strong \[N[ii]{}\] lines as well) which become dominant at the highest luminosities (Sobral et al. in prep). It is therefore likely that the deviation from a Schechter function is being mostly driven by the increase in the AGN activity fraction at such luminosities, particularly due to the detection of rare broad-line AGN and very strong \[N[ii]{}\] emission.
The normalisation of the H$\alpha$ luminosity function, $\phi^*$, is also found to evolve, but much more mildly. There is an increase of $\phi^*$ up to $z\sim1$ (by a factor $\sim4$)[^11], and then this decreases again for higher redshifts by a factor of $\sim2$ from $z\sim1$ to $z=2.23$ – see Figure \[evo\_PARAMS\]. By fitting a simple quadratic model to describe the data, one finds that the parametrisation: $\log\phi^*=-0.38z^2+z-3.18$ provides a good fit for $z=0-2.23$, but the current data can only exclude a model with a constant $\phi^*$ at a $<2$$\sigma$ level. The statistical significance for evolution in $\phi^*$ becomes even lower ($<1$$\sigma$) if one restricts the analysis to $z=0.4-2.23$.
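The two parametrisations can be written down directly (taking the quadratic model for $\phi^*$ described in the text as $\log\phi^*=-0.38z^2+z-3.18$, i.e. with the $z^2$ dependence written out explicitly):

```python
def log_L_star(z):
    """log L*_Halpha(z) = 0.45 z + 41.87, fit over z = 0 - 2.23."""
    return 0.45 * z + 41.87

def log_phi_star(z):
    """log phi*_Halpha(z) = -0.38 z^2 + z - 3.18 (quadratic fit);
    mild rise to z ~ 1, then a gentle decline."""
    return -0.38 * z**2 + z - 3.18
```

At $z=2.23$ the first relation returns $\log L^*\approx42.87$, matching the best-fit value in the table, while the $\phi^*$ fit tracks the tabulated values to within the quoted uncertainties.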
The faint-end slope, $\alpha$, is found to be relatively steep from $z\sim0$ up to $z=2.23$ (when compared to a canonical $\alpha=-1.35$), and it is not found to evolve. The median $\alpha$ over $0<z<2.23$ is $-1.60\pm0.08$. Very deep data from [@Hayes] and [@CHU11] not only agree well with such a faint-end slope, but even more importantly, their data at the faintest luminosities are also very well fitted by the best-fit $z=2.23$ and $z=0.84$ luminosity functions. If those data points are included and used to re-fit the luminosity functions at those 2 redshifts, the resulting best-fit faint-end slopes remain the same, but the error in $\alpha$ is reduced by $\sim10-15$ per cent.
It is therefore shown that by measuring the H$\alpha$ luminosity function in a consistent way, and using multiple fields, the faint-end slope can be very well approximated by a constant $\alpha=-1.6$ at least up to $z=2.23$. This shows that while the faint-end slope truly is steep at $z\sim2$, it does not become significantly steeper from $z\sim0$ to $z\sim2$, and rather has remained relatively constant for the last 11Gyrs (our data can not rule out weak evolution). The potential strong steepening of the faint-end slope, which has been previously reported [e.g. @Hayes] may in part be a result of comparing different data-sets which probe different ranges in luminosity, use different completeness corrections, different selection of emitters and probe a different parameter space. Furthermore, the results from [@SOBRAL10B] show that the faint-end slope depends relatively strongly on environment ($\alpha\sim-1.1$ for the densest clusters to $\alpha\sim-1.9$ for the poorest regions), which indicates that the changes in the faint-end slope measured before may also have resulted from the relatively small areas which can (by chance) probe different environments. Note that this is not the case for this paper because the multi-epoch H$\alpha$ surveys cover $\sim2$deg$^2$ areas over two independent fields and are able to cover a wide range of environments. Indeed, apart from the rich, dense structures presented in [@SOBRAL10B] at $z=0.84$, our H$\alpha$ survey is also able to probe significantly overdense regions even at $z=2.23$ [see @GEACH12 for details on a significant H$\alpha$-detected overdensity in the COSMOS field].
By splitting the sample in a similar way to [@SOBRAL10B] (isolating the over density in COSMOS and nearby regions), a variation of $\alpha$ with local density is clearly recovered, consistent with the results at $z=0.84$, i.e., overdense regions present a much shallower $\alpha$ ($\sim-1.3$), while the general field regions have a steeper ($\alpha\sim-1.7$) faint-end slope. The dependence of $\alpha$ on environment since $z=2.23$ will be carefully quantified in a forthcoming paper.
--------------------------------------------------------------------------------------------------- -- -- -- --
$\bf \log$ L$_{\rm H\alpha}$ & Sources & $\phi$ (obs) & $\phi$ (corr) & Volume\
$\bf z=0.40$ & \# & Mpc$^{-3}$ & Mpc$^{-3}$ & 10$^4$Mpc$^3$\
$40.50\pm0.05$ & $128$ & $-1.84\pm0.04$ & $-1.66\pm0.04$ & 8.8\
$40.60\pm0.05$ & $147$ & $-1.78\pm0.04$ & $-1.70\pm0.04$ & 8.8\
$40.70\pm0.05$ & $118$ & $-1.87\pm0.04$ & $-1.81\pm0.04$ & 8.8\
$40.80\pm0.05$ & $86$ & $-2.01\pm0.05$ & $-1.93\pm0.05$ & 8.8\
$40.90\pm0.05$ & $56$ & $-2.20\pm0.06$ & $-1.96\pm0.07$ & 8.8\
$41.00\pm0.05$ & $54$ & $-2.21\pm0.06$ & $-2.03\pm0.07$ & 8.8\
$41.10\pm0.05$ & $34$ & $-2.41\pm0.08$ & $-2.12\pm0.09$ & 8.8\
$41.20\pm0.05$ & $36$ & $-2.39\pm0.08$ & $-2.27\pm0.08$ & 8.8\
$41.30\pm0.05$ & $33$ & $-2.43\pm0.08$ & $-2.29\pm0.09$ & 8.8\
$41.40\pm0.05$ & $25$ & $-2.55\pm0.10$ & $-2.42\pm0.10$ & 8.8\
$41.50\pm0.05$ & $25$ & $-2.55\pm0.10$ & $-2.46\pm0.11$ & 8.8\
$41.60\pm0.05$ & $17$ & $-2.71\pm0.12$ & $-2.57\pm0.13$ & 8.8\
$41.70\pm0.05$ & $10$ & $-2.94\pm0.17$ & $-2.69\pm0.19$ & 8.8\
$41.80\pm0.05$ & $11$ & $-2.90\pm0.16$ & $-2.73\pm0.17$ & 8.8\
$41.90\pm0.05$ & $8$ & $-3.04\pm0.19$ & $-2.88\pm0.20$ & 8.8\
$42.00\pm0.05$ & $4$ & $-3.34\pm0.30$ & $-3.03\pm0.35$ & 8.8\
$42.20\pm0.10$ & $3$ & $-3.45\pm0.36$ & $-3.56\pm0.51$ & 8.8\
$42.50\pm0.15$ & $2$ & $-3.64\pm0.53$ & $-3.71\pm0.71$ & 8.8\
$\bf z=0.84$ & \# & Mpc$^{-3}$ & Mpc$^{-3}$ & 10$^4$Mpc$^3$\
$41.70\pm0.075$ & $218$ & $-2.12\pm0.03$ & $-1.93\pm0.03$ & 19.1\
$41.85\pm0.075$ & $222$ & $-2.11\pm0.03$ & $-2.02\pm0.03$ & 19.1\
$42.00\pm0.075$ & $107$ & $-2.43\pm0.04$ & $-2.18\pm0.04$ & 19.1\
$42.15\pm0.075$ & $54$ & $-2.72\pm0.06$ & $-2.43\pm0.06$ & 19.1\
$42.30\pm0.075$ & $12$ & $-3.38\pm0.15$ & $-2.73\pm0.17$ & 19.1\
$42.45\pm0.075$ & $10$ & $-3.46\pm0.17$ & $-3.01\pm0.17$ & 19.1\
$42.60\pm0.075$ & $7$ & $-3.61\pm0.21$ & $-3.27\pm0.21$ & 19.1\
$42.75\pm0.075$ & $2$ & $-4.16\pm0.53$ & $-3.79\pm0.55$ & 19.1\
$42.90\pm0.075$ & $1$ & $-4.46\pm0.90$ & $-4.13\pm1.51$ & 19.1\
$\bf z=1.47$ & \# & Mpc$^{-3}$ & Mpc$^{-3}$ & 10$^4$Mpc$^3$\
$42.10\pm0.05$ & $25$ & $-2.20\pm0.10$ & $-2.13\pm0.10$ & 4.0\
$42.20\pm0.05$ & $32$ & $-2.37\pm0.08$ & $-2.25\pm0.09$ & 7.5\
$42.30\pm0.05$ & $62$ & $-2.55\pm0.06$ & $-2.34\pm0.06$ & 22.1\
$42.40\pm0.05$ & $86$ & $-2.67\pm0.05$ & $-2.47\pm0.05$ & 40.2\
$42.50\pm0.05$ & $101$ & $-2.78\pm0.05$ & $-2.62\pm0.05$ & 60.4\
$42.60\pm0.05$ & $106$ & $-2.83\pm0.04$ & $-2.73\pm0.04$ & 71.4\
$42.70\pm0.05$ & $43$ & $-3.23\pm0.07$ & $-2.91\pm0.08$ & 73.6\
$42.80\pm0.05$ & $23$ & $-3.50\pm0.10$ & $-3.18\pm0.11$ & 73.6\
$42.90\pm0.05$ & $9$ & $-3.91\pm0.18$ & $-3.55\pm0.18$ & 73.6\
$43.00\pm0.05$ & $5$ & $-4.17\pm0.26$ & $-3.81\pm0.26$ & 73.6\
$43.10\pm0.05$ & $3$ & $-4.39\pm0.37$ & $-4.22\pm0.38$ & 73.6\
$43.20\pm0.05$ & $2$ & $-4.57\pm0.53$ & $-4.55\pm0.55$ & 73.6\
$43.40\pm0.15$ & $2$ & $-4.57\pm0.53$ & $-4.86\pm0.55$ & 73.6\
$\bf z=2.23$ & \# & Mpc$^{-3}$ & Mpc$^{-3}$ & 10$^4$Mpc$^3$\
$42.00\pm0.075$ & $8$ & $-2.18\pm0.19$ & $-1.93\pm0.19$ & 0.8\
$42.15\pm0.075$ & $11$ & $-2.34\pm0.16$ & $-2.07\pm0.16$ & 1.6\
$42.30\pm0.05$ & $47$ & $-2.24\pm0.07$ & $-2.19\pm0.07$ & 6.7\
$42.40\pm0.05$ & $91$ & $-2.36\pm0.05$ & $-2.31\pm0.05$ & 20.9\
$42.50\pm0.05$ & $107$ & $-2.48\pm0.04$ & $-2.41\pm0.05$ & 32.7\
$42.60\pm0.05$ & $158$ & $-2.60\pm0.04$ & $-2.50\pm0.04$ & 63.3\
$42.70\pm0.05$ & $163$ & $-2.68\pm0.04$ & $-2.59\pm0.05$ & 77.2\
$42.80\pm0.05$ & $100$ & $-2.89\pm0.05$ & $-2.73\pm0.06$ & 77.2\
$42.90\pm0.05$ & $51$ & $-3.18\pm0.07$ & $-2.88\pm0.14$ & 77.2\
$43.00\pm0.05$ & $30$ & $-3.41\pm0.09$ & $-3.09\pm0.17$ & 77.2\
$43.10\pm0.05$ & $16$ & $-3.68\pm0.12$ & $-3.33\pm0.22$ & 77.2\
$43.20\pm0.05$ & $7$ & $-4.04\pm0.21$ & $-3.67\pm0.31$ & 77.2\
$43.30\pm0.05$ & $3$ & $-4.41\pm0.37$ & $-4.01\pm0.51$ & 77.2\
$43.40\pm0.05$ & $2$ & $-4.59\pm0.53$ & $-4.22\pm0.68$ & 77.2\
$43.60\pm0.15$ & $3$ & $-4.41\pm0.37$ & $-4.63\pm0.41$ & 77.2\
--------------------------------------------------------------------------------------------------- -- -- -- --
: Luminosity Functions from HiZELS. L$_{\rm \bf H\alpha}$ has been corrected for both \[N[ii]{}\] contamination and for dust extinction (using A$_{\rm H\alpha}=1$mag). Volumes assuming top hat filters. $\phi$ corr has been corrected for both incompleteness and the fact that the filter profile is not a perfect top hat.
\[LF\_NUMBERS\]
The steep faint-end slope of the H$\alpha$ luminosity function is in very good agreement with the UV luminosity function at $z\sim2$ and above, and particularly consistent with a relatively non-evolving $\alpha\approx-1.6$. This can be seen by comparing the results in this paper with those presented by [@TREYER98], [@Arnouts05], and more recently, [@Oesch10]. It is also likely that (similarly to the H$\alpha$ luminosity function) the large scatter, and the different selections/corrections applied, have driven studies to assume/argue for a steepening of the UV luminosity function faint-end slope, just like for the H$\alpha$ luminosity function – see [@Oesch10].
Overall, the results imply that the bulk of the evolution of the star-forming population from $z=0$ to $z\sim2.2$ is occurring as a strong boost in luminosity of all galaxies. The UV luminosity results also show very similar trends to the H$\alpha$ LF, by revealing that the strongest evolution to $z\sim2$ is in the typical luminosity/break of the luminosity function, which evolves significantly. However, individual measurements for the UV luminosity function at $z<2$ are still significantly affected by cosmic variance, small sample sizes and much more uncertain dust corrections, and thus the H$\alpha$ analysis provides a much stronger constraint on the evolution of star-forming galaxies up to $z\sim2.2$.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -- -- -- -- -- -- -- --
Epoch & $\bf L^*_{\rm H\alpha}$ & $\bf \phi^*_{\rm H\alpha}$ & $\bf \alpha_{\rm H\alpha}$ & log$\rho_{L_{\rm H\alpha}}$ (41.6) & log$\rho_{L_{\rm H\alpha}}$ (All) & $\rho_{\rm SFR\,H\alpha}$ (41.6) & $\rho_{\rm SFR\,H\alpha}$ (All)\
($z$) & ergs$^{-1}$ & Mpc$^{-3}$ & & ergs$^{-1}$Mpc$^{-3}$ & ergs$^{-1}$Mpc$^{-3}$ & M$_{\odot}$yr$^{-1}$ Mpc$^{-3}$ & M$_{\odot}$yr$^{-1}$ Mpc$^{-3}$\
$z=0.40\pm0.01$ & $41.95^{+0.47}_{-0.12}$ & $-3.12^{+0.10}_{-0.34}$ & $-1.75^{+0.12}_{-0.08}$ & $38.99^{+0.19}_{-0.22}$ & $39.55^{+0.22}_{-0.22}$ & $0.008^{+0.002}_{-0.002}$ & $0.03^{+0.01}_{-0.01}$\
$z=0.84\pm0.02$& $42.25^{+0.07}_{-0.05}$ & $-2.47^{+0.07}_{-0.08}$ & $-1.56^{+0.13}_{-0.14}$ & $39.75^{+0.12}_{-0.05}$ & $40.13^{+0.24}_{-0.21}$ & $0.040^{+0.007}_{-0.006}$ & $0.10^{+0.01}_{-0.02}$\
$z=1.47\pm0.02$ & $42.56^{+0.06}_{-0.05}$ & $-2.61^{+0.08}_{-0.09}$ & $-1.62^{+0.25}_{-0.29}$ & $40.03^{+0.08}_{-0.07}$ & $40.29^{+0.16}_{-0.14}$ & $0.07^{+0.01}_{-0.01}$ & $0.13^{+0.02}_{-0.02}$\
$z=2.23\pm0.02$ & $42.87^{+0.08}_{-0.06}$ & $-2.78^{+0.08}_{-0.09}$ & $-1.59^{+0.12}_{-0.13}$ & $40.26^{+0.01}_{-0.02}$ & $40.44^{+0.03}_{-0.03}$ & $0.13^{+0.01}_{-0.01}$ & $0.21^{+0.02}_{-0.03}$\
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -- -- -- -- -- -- -- --
\[lfs\_\_\]
The Star formation history of the Universe: the last 11 Gyrs with H$\alpha$ {#SFRD_z147}
===========================================================================
Unveiling the star formation history of the Universe is fundamental to understand how, when and at what pace galaxies assembled their stellar masses. The best-fit Schechter function fit to the H$\alpha$ luminosity functions at $z=0.4$, $0.84$, $1.47$ and $2.23$ can be used to estimate the star formation rate density at the four epochs, corresponding to look-back times of 4.2, 7.0, 9.2 and 10.6 Gyrs. The standard calibration of Kennicutt (1998) is used to convert the extinction-corrected H$\alpha$ luminosity to a star formation rate: $${\rm SFR}({\rm M}_{\odot} {\rm yr^{-1}})= 7.9\times 10^{-42} ~{\rm L}_{\rm H\alpha} ~ ({\rm erg\,s}^{-1}),$$ which assumes continuous star formation, Case B recombination at $T_e = 10^4$K and a Salpeter initial mass function ranging from 0.1–100M$_{\odot}$.
As detailed before, a constant 1 magnitude of extinction at H$\alpha$ is assumed for the analysis, which is likely to be a good approach for the entire integrated star formation.
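As a minimal sketch, the calibration above combined with the adopted 1 mag of H$\alpha$ extinction can be written as follows (the function name and default value are illustrative, not from the paper):

```python
def halpha_to_sfr(l_halpha_obs, a_halpha=1.0):
    """Observed H-alpha luminosity (erg/s) -> SFR (Msun/yr).

    Corrects for `a_halpha` magnitudes of dust extinction, then applies
    the Kennicutt (1998) calibration for a Salpeter IMF.
    """
    l_intrinsic = l_halpha_obs * 10 ** (0.4 * a_halpha)  # undo the extinction
    return 7.9e-42 * l_intrinsic

# an observed luminosity of 10^42 erg/s corresponds to ~20 Msun/yr
print(round(halpha_to_sfr(1e42), 1))
```

The same helper with `a_halpha=0` recovers the uncorrected Kennicutt conversion.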
Removal of the AGN contribution {#AGN}
-------------------------------
Interpreting the integral of the H$\alpha$ luminosity function as a star formation rate density requires a good estimation of the possible contribution of AGN to that quantity. For the $z=0.84$ sample, [@Garn2010a] conducted a detailed search for potential AGN, finding a fraction of $8\pm3$% within the H$\alpha$ population at $z=0.84$. Similar (i.e., $\sim10$%) AGN contaminations at lower redshift have also been found by other studies, and therefore assuming a 10% contribution from AGN up to $z\sim1$ is likely to be a good approximation. At higher redshifts, and particularly for the sample at $z=1.47$ and $z=2.23$, the AGN activity could in principle be different. By looking at a range of AGN indicators – X-rays, radio and [irac]{} colours (and emission lines ratios for sources with such information[^12]), it is found that $\sim15$% of the sources are potentially AGN at $z=1.47$. Similar results are found at $z=2.23$. Therefore, when converting integrated luminosities to star formation rate densities at each epoch, it is assumed that AGNs contribute 10% of that up to $z\sim1$ and 15% above that redshift. While this correction may be uncertain, the actual correction will likely be within 5% of what is assumed, and in order to guarantee the robustness of the measurements, the final measurements include the error introduced by the AGN correction – this is done by adding 20% of the AGN correction in quadrature to the other errors. The AGN contribution/contamination will be studied in detail in Sobral et al. (in prep.).
![image](./figs/FIGI.jpg){width="12.2cm"}
![image](./figs/SM_assembly.pdf){width="12.5cm"}
The H$\alpha$ Star Formation History of the Universe {#SFHISTORY}
----------------------------------------------------
The results are shown in Table 5, both down to the approximate common survey limits, and by fully integrating the luminosity function. Figure \[SFRD\_HISTORY\] also presents the results (fully integrating down the luminosity functions), and includes a comparison between the consistent view on the H$\alpha$ star formation history of the Universe derived in this paper with the various measurements from the literature [@Hopkins2006; @CHU11], showing a good agreement.
The improvement when compared to other studies is driven by: i) the completely self-consistent determinations, ii) the significantly larger samples, and iii) the fact that the faint-end slope is accurately measured from $z\sim0$ to $z\sim2.23$ and luminosity functions determined down to a much lower common luminosity limit than ever done before. A comparison with all other previous measurements (which show a large scatter) reveals a good agreement with the H$\alpha$ measurements. However, the homogeneous H$\alpha$ analysis provides, for the first time, a much clearer and cleaner view of the evolution. The results presented in Figure \[SFRD\_HISTORY\] reveal the H$\alpha$ star formation history of the Universe for the last $\sim11$Gyrs. The evolution is particularly steep up to about $z\sim1$. While the evolution is then milder, $\rho_{\rm SFR}$ continues to rise, up to at least $z\sim2$.
Up to $z\sim1$, the H$\alpha$ star formation history is well fitted by $\log\rho_{\rm SFR}=4\log(z+1)-2.08$. However, such a parameterisation is not a good fit at higher redshifts. The entire H$\alpha$ star formation history since $z\sim2.2$, i.e. over the last 11Gyrs, is instead well described by the simple parameterisation $\log\rho_{\rm SFR}=-0.14T-0.23$, with $T$ being the time since the Big Bang in Gyrs (see Figure 10). A power-law parameterisation of $\log\rho_{\rm SFR}$ as a function of redshift ($a\times(1+z)^{\beta}$) yields $\beta=-1.0$, and thus the H$\alpha$ star formation history can also be simply parameterised by $\log\rho_{\rm SFR}=\frac{-2.1}{(z+1)}$, clearly revealing that $\rho_{\rm SFR}$ has been declining for the last $\sim11$Gyrs. This parameterisation also provides a very good fit to the results of Karim et al. (2011), obtained from radio stacking over a similar redshift range in the COSMOS field.
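As a quick numerical cross-check, the time and redshift parameterisations can be compared at the four HiZELS epochs (a sketch assuming the look-back times quoted above and an age of the Universe of $\approx13.7$ Gyr; both fits agree to better than 0.1 dex):

```python
# (z, T) pairs, with T = 13.7 - look-back time (4.2, 7.0, 9.2, 10.6 Gyr)
epochs = [(0.40, 9.5), (0.84, 6.7), (1.47, 4.5), (2.23, 3.1)]

for z, t in epochs:
    log_rho_time = -0.14 * t - 0.23   # fit in cosmic time T (Gyr)
    log_rho_z = -2.1 / (z + 1.0)      # fit in redshift
    print(f"z={z:.2f}: {log_rho_time:.2f} vs {log_rho_z:.2f} dex")
```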
The Stellar mass assembled in the last 11 Gyrs {#MASS_ASSEMBLED}
----------------------------------------------
The results presented in this paper can be used to provide an estimate of the stellar mass (density) which has been assembled by H$\alpha$ star-forming galaxies over the last 11Gyrs. This is done in a similar way to [@Hopkins2006] or [@Glazebrook04], taking into account that a significant part of the mass of newborn stars at each redshift is recycled, and can be used in subsequent star formation episodes. The fraction of recycled mass depends on the IMF used. For a Salpeter IMF, which has been used for the H$\alpha$ calibration, the recycling fraction is 30%. Note, however, that changing the IMF does not change the qualitative results presented in this paper, in particular the agreement between the predicted and the measured stellar mass density growth. Nevertheless, changing the IMF changes both the normalisation of the star formation history and the stellar mass density growth.
Here, the following approach is taken: the measured stellar mass density already in place at $z\sim2.2$ [many determinations exist, e.g. @Hopkins2006; @PerezGonz; @Ilbert09] is assumed to be $\log_{10}\,M=7.45$M$_{\odot}$Mpc$^{-3}$. By using the measured H$\alpha$ star formation history derived in this paper ($\log\rho_{\rm SFR}=-0.14T-0.23$), a prediction of the evolution of the stellar mass density of the Universe is computed, using the recycling fraction of the Salpeter IMF (30%).
The results are presented in Figure \[Mass\_Assembly\], and compared with various measurements of the stellar mass density at different redshifts available from the literature [@Hopkins2006; @PerezGonz; @Elsner08; @Marchesini09]. All literature results have been converted to a Salpeter IMF if derived with a different IMF – including those with a modified Salpeter IMF (SalA; resulting in masses a factor of 0.77 lower than Salpeter; see e.g. Hopkins & Beacom).
The results reveal a very good agreement between the predictions based on the H$\alpha$ star formation history of the Universe presented in this paper since $z=2.23$ and the stellar mass density evolution of the Universe, measured directly by many authors. The results therefore indicate that at least since $z=2.23$ the H$\alpha$ star formation history of the Universe is a very good representation of the total star formation history of the Universe. It is possible to reconcile the observed evolution of the stellar mass density with that produced from the observed star formation history with very simple assumptions, without the need to modify the IMF or have it evolve as a function of time. The H$\alpha$ analysis reveals that star formation since $z=2.23$ is responsible for 95% of the total stellar mass density observed today, with about half of that being assembled from $z\sim2.2$ to $z\sim1.2$, and the other half since $z\approx1.2$. Note that the same conclusion is reached if the stellar mass density at $z=0$ is adopted for the normalisation (instead of that at $z=2.23$), and the measured H$\alpha$ star formation history is used (with appropriate recycling factor) to evolve this stellar mass density back to earlier epochs. Moreover, if the star formation rate density continues to decline with time in the same way as in the last $\sim11$Gyrs, the stellar mass density growth will become increasingly slower, with the stellar mass density of the Universe reaching a maximum which is only 5% higher than currently.
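The bookkeeping described above can be reproduced with a short numerical integration (a sketch: the age of the Universe, $\approx13.7$ Gyr, and the cosmic time at $z=2.23$, $\approx3.1$ Gyr, are assumptions of this illustration; it recovers, to within rounding, the $\sim95$% quoted above):

```python
import math

AGE, T_INIT = 13.7, 3.1      # Gyr; cosmic times today and at z = 2.23
RECYCLED = 0.30              # recycling fraction for a Salpeter IMF
SEED = 10 ** 7.45            # Msun/Mpc^3 already in place at z ~ 2.2

def rho_sfr(t):              # fitted H-alpha SFH, Msun/yr/Mpc^3
    return 10 ** (-0.14 * t - 0.23)

# trapezoidal integration of the SFH, converted from Gyr to yr
n = 2000
dt = (AGE - T_INIT) / n
formed = (1.0 - RECYCLED) * 1e9 * sum(
    0.5 * (rho_sfr(T_INIT + i * dt) + rho_sfr(T_INIT + (i + 1) * dt)) * dt
    for i in range(n)
)
total = SEED + formed

print(f"log10 stellar mass density today: {math.log10(total):.2f}")
print(f"fraction formed since z=2.23: {formed / total:.2f}")
```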
Conclusions
===========
This paper presents new results from a unique combination of wide and deep narrow-band H$\alpha$ surveys using UKIRT, Subaru and the VLT. It has resulted in robust, uniformly selected samples of several hundreds of H$\alpha$ emitters in narrow redshift slices, allowing us to study and parameterise in a completely self-consistent way the evolution of the H$\alpha$ luminosity function over the last 11 Gyrs of the Universe. The main results are:
- We robustly select a total of 1742, 637, 515 and 807 H$\alpha$ emitters ($\Sigma$$>$3, EW$_{0\rm (H\alpha)}$$>$25Å) across the COSMOS and the UDS fields at $z=0.40$, $0.84$, $1.47$ and $2.23$, respectively. These are by far the largest samples of homogeneously selected H$\alpha$ emitters, while the wide area and the coverage over two independent fields allow us to greatly overcome cosmic variance and also to assemble large samples of more luminous galaxies.
- We find that the H$\alpha$ luminosity function evolves significantly from $z\sim0$ to $z\sim2.2$, with the bulk of the evolution being driven by the continuous rise in $L^*_{\rm H\alpha}$ by a factor of 10 from the local Universe to $z\sim2.2$, which is well described by $\log\,L^*_{\rm H\alpha}(z)=0.45z+41.87$
- By obtaining very deep data over a wide range of epochs, it is found that the faint-end slope, $\alpha$, does not evolve with redshift up to $z\sim2.3$, remaining at $\alpha=-1.60\pm0.08$ for the last 11Gyrs ($0<z<2.2$), contrary to previous claims (based on heterogeneous samples) which argued for a steepening with redshift.
- The evolution seen in the H$\alpha$ luminosity function is in good agreement with the evolution seen using inhomogeneous compilations of other tracers of star formation, such as FIR and UV, jointly pointing towards the bulk of the evolution in the last 11Gyrs being driven by a similar star-forming population across cosmic time, but with a strong luminosity increase from $z\sim0$ to $z\sim2.2$.
- This is the first time H$\alpha$ has been used to trace SF activity with a single homogeneous survey at $z=0.4-2.23$. The simple parametrisations $\log\rho_{\rm SFR}=-0.14T-0.23$ (with $T$ being the age of the Universe in Gyrs) or $\log\rho_{\rm SFR}=\frac{-2.1}{(z+1)}$ are good approximations for the last 11Gyrs, showing that $\rho_{\rm SFR}$ has been declining since $z\sim2.2$.
- The results reveal that both the shape and normalisation of the H$\alpha$ star formation history are consistent with the measurements of the stellar mass density growth, confirming that the H$\alpha$ cosmic star formation history is tracing the bulk of the formation of stars in the Universe for $z<2.3$.
- The star formation activity over the last $\approx$11Gyrs is responsible for producing $\sim95$% of the total stellar mass density observed locally today, with about half of that being assembled from $z\sim2.2$ to $z\sim1.2$, and the other half at $z<1.2$.
The results presented in this paper provide a self-consistent view that improves our understanding of the evolution of star-forming galaxies. Particularly, it shows that the evolution of the star-forming population in the last 11 Gyrs has been mostly driven by a change in the typical star formation rate of the population ($L^*_{\rm H\alpha}$), while the faint-end slope of the H$\alpha$ LF has remained constant ($\alpha=-1.6$), and the change in the normalisation has been much more moderate. The strong evolution in $L^*_{\rm H\alpha}$ (or SFR$^*$) may well be unveiling something very fundamental about the evolution of star-forming galaxies, as it seems to mark a transition between disk and mergers (e.g. S09) in the last 9-10Gyrs. Also, scaling SFRs by the SFR$^*$ at each epoch (or H$\alpha$ luminosities by $L^*_{\rm H\alpha}$ at each epoch) seems to recover relatively non-evolving relations between scaled SFRs/luminosities and e.g. dust extinction (S12), morphological class (S09), merger rates (Stott et al., 2012), or the typical dark matter halo in which the star-forming galaxies are likely to reside [@SOBRAL10A].
The results presented in this paper also complement the current view on the evolution of the stellar mass function over the last 11Gyrs [e.g. @Ilbert09; @Peng; @Marchesini12], which also reveal a non-evolving faint-end slope (of the stellar mass function) at least for $z<2$, but shallower, $\alpha=-1.3$. However, the typical mass of the stellar mass function, $M^*$, is found to be roughly constant in the last 11Gyrs, with the main change being $\phi^*$, which continuously increases in the last $\sim11$Gyrs. Combining the results of the evolution of the H$\alpha$ luminosity function with those of the evolution of the stellar mass function point towards the existence of a star-forming population which is mostly evolving by an overall decrease in their SFRs/luminosity, while the overall population of galaxies evolves by a change in number density, but with a rather non-evolving typical mass ($M^*$), a rather simple evolution scenario which is consistent with that proposed by [@Peng].
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors would like to thank the reviewer, James Colbert, for many comments and suggestions which improved the paper significantly. DS is supported by a NOVA fellowship. IRS acknowledges a Leverhulme Senior Fellowship. PNB acknowledges support from the Leverhulme Trust. YM, JPS and IRS thank the U.K. Science and Technology Facility Council (STFC). JEG is supported by a Banting Fellowship, administered by the Natural Science and Engineering Research Council of Canada. We would like to thank Richard Ellis, Simon Lilly, Peter Capak, Adam Muzzin, Taddy Kodama, Masao Hayashi, Andy Lawrence, Joop Schaye, Marijn Franx, Huub Röttgering, Rychard Bouwens and Renske Smit for many interesting and helpful discussions. We would also like to thank Chun Ly for helpful comments. The authors wish to thank all the JAC staff for their help conducting the observations at the UKIRT telescope, and their continuous support and we are also extremely grateful to all the Subaru staff. We also acknowledge ESO and Subaru for service observations. Finally, the authors fully acknowledge the tremendous work that has been done by both COSMOS and UKIDSS UDS/SXDF teams in assembling such large, state-of-the-art multi-wavelength data-sets over such wide areas, as those have been crucial for the results presented in this paper.
Catalogues of candidate HiZELS narrow-band emitters {#Cats}
===================================================
The catalogues of potential narrow-band emitters over the COSMOS and UDS fields are presented in Tables A.1 (NB921), A.2 (NB$_{\rm J}$), A.3 (NB$_{\rm H}$) and A.4 (NB$_{\rm K}$). Each catalogue contains IDs (including field and observing band), Right Ascension (RA), Declination (Dec), narrow-band magnitude (NB), broad-band magnitude (BB), the significance of the narrow-band excess ($\Sigma$), estimated flux ($\log10$), estimated observed EW, and a flag for those that are classified as H$\alpha$. Note that only the online version contains the full catalogues – here only five entries of each table are shown as examples of the entire catalogues.
------------------------------ ------------- ------------- ---------------- ---------------- ---------- ------------- ---------------- --------------------- -- --
ID R.A. Dec. NB BB $\Sigma$ log Flux EW$_{\rm obs}$ Class. as H$\alpha$
[(J2000)]{} (J2000) (AB) (AB) ergs$^{-1}$ Å
HiZELS-COSMOS-NB921-S12-14 095813.57 $+$021715.6 23.86$\pm0.11$ 24.85$\pm0.08$ 13.6 $-16.255$ 418.4 No
HiZELS-COSMOS-NB921-S12-105 095957.96 $+$021741.5 23.14$\pm0.03$ 23.39$\pm0.02$ 9.9 $-16.132$ 119.8 No
HiZELS-COSMOS-NB921-S12-136 095857.55 $+$021745.0 21.64$\pm0.01$ 22.11$\pm0.01$ 56.5 $-15.636$ 105.9 Yes
HiZELS-COSMOS-NB921-S12-4064 095920.49 $+$022054.5 22.06$\pm0.01$ 22.21$\pm0.01$ 14.3 $-16.084$ 39.13 No
HiZELS-UDS-NB921-S12-220582 021619.37 $-$051354.2 22.37$\pm0.01$ 22.65$\pm0.01$ 55.6 $-16.287$ 38.22 Yes
------------------------------ ------------- ------------- ---------------- ---------------- ---------- ------------- ---------------- --------------------- -- --
\[NB921\_CAT\]
--------------------------- ------------- ------------- ---------------- ---------------- ---------- ------------- ---------------- --------------------- -- --
ID R.A. Dec. NB BB $\Sigma$ log Flux EW$_{\rm obs}$ Class. as H$\alpha$
[(J2000)]{} (J2000) (Vega) (Vega) ergs$^{-1}$ Å
HiZELS-COSMOS-NBJ-S12-206 095842.11 $+$015840.5 21.58$\pm0.20$ 22.83$\pm0.23$ 3.5 $-16.11$ 463.5 No
HiZELS-COSMOS-NBJ-S12-292 095842.65 $+$022724.2 20.51$\pm0.08$ 20.85$\pm0.05$ 3.7 $-16.08$ 65.4 Yes
HiZELS-COSMOS-NBJ-S12-293 095842.63 $+$021953.2 20.84$\pm0.10$ 21.46$\pm0.07$ 4.4 $-16.01$ 139.3 Yes
HiZELS-UDS-NBJ-S12-5 021616.81 $-$050909.2 20.51$\pm0.08$ 20.85$\pm0.05$ 5.4 $-15.89$ 194.6 Yes
HiZELS-UDS-NBJ-S12-26 021617.15 $-$050742.4 20.84$\pm0.10$ 21.46$\pm0.07$ 4.6 $-15.96$ 79.8 Yes
--------------------------- ------------- ------------- ---------------- ---------------- ---------- ------------- ---------------- --------------------- -- --
\[NBJ\_CAT\]
--------------------------- ------------- ------------- ---------------- ---------------- ---------- ------------- ---------------- --------------------- -- --
ID R.A. Dec. NB BB $\Sigma$ log Flux EW$_{\rm obs}$ Class. as H$\alpha$
[(J2000)]{} (J2000) (Vega) (Vega) ergs$^{-1}$ Å
HiZELS-COSMOS-NBH-S12-206 095747.31 $+$020208.1 21.58$\pm0.14$ 20.99$\pm0.05$ 3.1 $-15.98$ 251.4 No
HiZELS-COSMOS-NBH-S12-292 095747.33 $+$015153.1 20.51$\pm0.02$ 18.33$\pm0.01$ 12.1 $-15.40$ 77.3 No
HiZELS-COSMOS-NBH-S12-293 095747.65 $+$021619.8 20.84$\pm0.08$ 19.80$\pm0.02$ 3.3 $-15.87$ 101.8 No
HiZELS-UDS-NBH-S12-4012 021644.17 $-$044453.0 20.04$\pm0.18$ 20.93$\pm0.04$ 3.0 $-15.87$ 308 Yes
HiZELS-UDS-NBH-S12-4136 021645.35 $-$040726.8 19.15$\pm0.08$ 19.76$\pm0.02$ 5.2 $-15.64$ 171 Yes
--------------------------- ------------- ------------- ---------------- ---------------- ---------- ------------- ---------------- --------------------- -- --
\[NBH\_CAT\]
----------------------------- ------------- ------------- ---------------- ---------------- ---------- ------------- ---------------- --------------------- -- --
ID R.A. Dec. NB BB $\Sigma$ log Flux EW$_{\rm obs}$ Class. as H$\alpha$
[(J2000)]{} (J2000) (Vega) (Vega) ergs$^{-1}$ Å
HiZELS-COSMOS-NBK-S12-2227 095814.05 $+$023405.1 20.26$\pm0.23$ 21.68$\pm0.16$ 3.1 $-16.31$ 728 Yes
HiZELS-COSMOS-NBK-S12-45390 100059.58 $+$024435.9 19.24$\pm0.09$ 19.92$\pm0.04$ 5.4 $-16.09$ 208 No
HiZELS-COSMOS-NBK-S12-44193   100055.39   $+$015955.1   19.02$\pm0.06$   19.68$\pm0.02$   7.6        $-16.01$     198              Yes
HiZELS-UDS-NBK-S12-15961 021809.03 $-$044749.9 17.47$\pm0.02$ 18.27$\pm0.01$ 30.6 $-15.33$ 262 Yes
HiZELS-UDS-NBK-S12-22618 021853.02 $-$050107.7 19.36$\pm0.10$ 20.02$\pm0.02$ 4.5 $-16.16$ 196 No
----------------------------- ------------- ------------- ---------------- ---------------- ---------- ------------- ---------------- --------------------- -- --
\[NBK\_CAT\]
\[lastpage\]
[^1]: NOVA Fellow; E-mail: sobral@strw.leidenuniv.nl
[^2]: This work is based on observations obtained using the Wide Field CAMera (WFCAM) on the 3.8m United Kingdom Infrared Telescope (UKIRT), as part of the High-redshift(Z) Emission Line Survey (HiZELS; U/CMP/3 and U/10B/07). It also relies on observations conducted with HAWK-I on the ESO Very Large Telescope (VLT), program 086.7878.A, and observations obtained with Suprime-Cam on the Subaru telescope (S10B-144S).
[^3]: It may be possible to extend H$\alpha$ studies to even higher redshifts: [@Shim] suggest that [*Spitzer*]{} IRAC mid-IR fluxes can be used to detect strong H$\alpha$ emission at even higher redshifts ($z\sim4$). NIRCAM and NIRISS on the [*James Webb Space Telescope*]{} will obviously significantly expand the exploration of H$\alpha$ emission at such high redshifts.
[^4]: For more details on the survey, progress and data releases, see http://www.roe.ac.uk/ifa/HiZELS/
[^5]: The improvements in the flat-fielding are obtained by stacking all second-pass flattened frames per field and producing source masks on the stacked images. Masks are then used to produce third pass flats using all the frames in the jitter sequence except the frame being flattened. The third-pass flattened frames are then stacked again, and the procedure is repeated another time. This procedure is able to both mask many sources which are undetected in individual frames out of the flats, but particularly to mask bright sources much more effectively, as the stacking of all images reveals a wider distribution of flux from those sources.
[^6]: [iraf]{} and [scamp]{} (Bertin et al. 2000) is used to distort correct the frames and obtain a very accurate ($rms\approx0.1-0.2''$) astrometric solution for each frame (using 2MASS), always assuring that the flux is conserved.
[^7]: Sources for which one of the broad-band colours is not available (typically 5 per cent of the sources) are assigned the median correction. The median corrections are: $-0.04$, $+0.07$, $+0.05$ and $+0.03$ for NB921, NB$_{\rm J}$, NB$_{\rm H}$ and NB$_{\rm K}$ respectively.
[^8]: See UKIDSS UDS website (http://www.nottingham.ac.uk/astronomy/UDS) for a redshift compilation by O. Almaini and the COSMOS data archive (http://irsa.ipac.caltech.edu/data/COSMOS) for the catalogues, spectra and information on the various instruments and spectroscopic programmes.
[^9]: The selection was modified because the Daddi et al. cut was designed to select $z>1.4$ sources, while here $z=2.23$ emitters are targeted. The precise location of the new cut, which is 0.2 magnitudes higher/redder in $z-K$ than that of Daddi et al., is motivated by the confirmed H$\alpha$ emitters and by the need to minimise contamination from $z<2$ sources.
[^10]: The correction is applied to obtain data points corrected for extinction by 1 mag at H$\alpha$.
[^11]: Note that the difference in $\phi^*$ to the Sobral et al. (2009) H$\alpha$ luminosity function is mostly driven by $\phi^*_{S09}$ reported there being $\phi^*_{S09}=\phi^*\times \ln10$ (due to the fitting to dLogL without taking the ln10 factor into account – see Sobral et al. 2012), which accounts for a factor $\approx2.23$. The remaining difference (a factor $\sim1.5$) is an actual difference driven by the improved data reduction, selection of emitters (3$\Sigma$ instead of 2.5$\Sigma$), completeness and cleanness of the catalogues of H$\alpha$ emitters (particularly due to the significantly improved photometric redshifts and a larger number of spectroscopic redshifts).
[^12]: Follow up spectroscopy of luminous H$\alpha$ sources unveil a significant fraction of narrow and broad-line AGN which becomes dominant at the highest luminosities (Sobral et al. in prep), but is consistent with an overall AGN contribution to the H$\alpha$ luminosity density of 15 per cent.
---
abstract: 'The goal of the present work is to revisit the cranking formula for the vibrational parameters, especially its well-known drawbacks. The latter can be summarized as spurious resonances or singularities in the behavior of the mass parameters in the limit of unpaired systems. It is found that these problems are induced simply by the presence of two derivatives in the formula. In effect, this formula rests on the hypothesis that the excited states contribute only through two-quasiparticle excitations, but it turns out that this is not the case for the derivatives. We deduce therefore that the derivatives are not well founded in the formula, and we propose simply to suppress these terms from it. Although this solution may seem simplistic, it definitively solves all the inherent problems of the formula.'
author:
- 'B. Mohammed-Azizi'
title: 'The Inglis-Belyaev formula and the hypothesis of the two-quasiparticle excitations'
---
Introduction
============
Collective low-lying levels of the nucleus are often deduced numerically from the Interacting Boson Model (IBM) [@01] or the Generalized Bohr Hamiltonian (GBH) [@1]-[@1a]. Restricting ourselves to the latter, we can say that it is built on the basis of seven functions: the collective potential energy of deformation of the nucleus and, for its kinetic-energy part, three mass parameters (also called vibrational parameters) and three moments of inertia. All these functions depend on the deformation of the nuclear surface. Usually, the deformation energy can be evaluated in the framework of the constrained Hartree-Fock theory (CHF) or by the phenomenological shell correction method. The mass parameters and the moments of inertia are often approximated by the cranking formula [@2]-[@2a] or, in the self-consistent approaches, by other models [@3]-[@5]. Most of the self-consistent formulations are based on the adiabatic time-dependent Hartree-Fock-Bogoliubov approximation (ATDHFB), which leads to constrained Hartree-Fock-Bogoliubov (CHFB) calculations [@5a]-[@5b] in which the so-called Thouless-Valatin corrections are neglected. It is to be noted that there are several self-consistent formulations for the mass parameters, in each of which some approximations are made (not always the same ones). Other types of approaches to the mass parameters use the so-called Generator Coordinate Method combined with the Gaussian Overlap Approximation (GCM+GOA) [@5b1]. Recently, new methods have again been developed [@5bb]-[@5b2]. This leads to a certain confusion, and the evaluation of the mass parameters remains (up to now) a controversial question, as already noticed in Ref. [@5b2].
In this paper we will focus exclusively on the mass parameters, especially on the problems induced by the cranking formula, i.e. the "classical" Inglis-Belyaev formula for the vibrational parameters. Indeed, it is well known that this formula sometimes leads to inextricable problems when the pairing correlations are taken into account (by means of the BCS model). The transition between the normal phase $\left( \Delta=0\right) $ and the superfluid phase $\left( \Delta>\Delta_{0}\approx0.3\:MeV\right) $ generally affects the magic nuclei near the spherical shape as the deformation changes [@11]. The problem occurs sometimes (not always) precisely in these cases, for an unpaired system $\Delta\sim0$: the mass parameters then take anomalously large values near a "critical" deformation close to the spherical shape.
This singular behaviour is well known and undoubtedly constitutes an unphysical effect. It was found early on that these problems are due simply to the presence of the derivatives of $\Delta$ (pairing gap) and $\lambda$ (Fermi level) in the formula. They have been reported many times [@1], [@10]-[@14] in the literature, but no solution has been proposed. The authors of Refs. [@1] and [@11] claim that for sufficiently large pairing gaps $\Delta$ the total mass parameter is essentially given by the diagonal part without the derivatives, whereas those of Ref. [@14] affirm that the role of the derivatives is by no means small in the fission process, which leads to contradictory conclusions. Other studies [@13] neglect the derivatives without any justification. Some self-consistent calculations have also met the same difficulties. For example, in Ref. [@5c], resonances in the mass parameters have already been noticed; as in the present work, they were attributed to the derivative of the gap parameter $\Delta$ near the pairing phase transition. In short, up to now the problem has remained unclear. Curiously, one must point out that, contrary to the case of the vibrational parameters, the same (Inglis-Belyaev) formulation for the moments of inertia does not exhibit any explicit dependence on $\Delta$ and $\lambda$, and this explains why the I-B formula for the moments of inertia does not meet such problems. This difference appears unnatural and is part of the motivation for this work. All these problems, as well as intensive numerical calculations, led us to ask ourselves whether the presence of these derivatives is well founded. If this is not the case, their removal should be justified. In fact, the Inglis-Belyaev formula is based on the fundamental hypothesis that the contributing excitations are two-quasiparticle excitations.
Rigorously, it turns out that the derivatives of $\Delta$ and $\lambda$ do not belong to this type of excitation, and this justifies their rejection from the formula. The object of this paper is not so much to tell whether this model is good or to specify its domain of validity, etc.; this study is simply and wholly devoted to a correction of the Inglis-Belyaev formula in the light of its fundamental hypothesis.
Hypothesis of the two-quasiparticle excitations or the cranking Inglis-Belyaev formula.\[section hyp\]
======================================================================================================
Without pairing correlations
----------------------------
The mass (or vibrational) parameters are given by the Inglis formula [@1], [@2]: $$D_{ij}\left\{ \beta_{1},.,\beta_{n}\right\} =2\hbar^{2}\sum_{M\neq0}\frac{\left\langle O\right\vert \partial/\partial\beta_{i}\left\vert M\right\rangle \left\langle M\right\vert \partial/\partial\beta_{j}\left\vert O\right\rangle }{E_{M}-E_{O}} \label{massparameters}$$ where $\left\vert O\right\rangle $ and $\left\vert M\right\rangle $ are respectively the ground state and the excited states of the nucleus, and $E_{O}$, $E_{M}$ are the associated eigenenergies. In the independent-particle model, whenever the state of the nucleus is assumed to be a Slater determinant (built on single-particle states of the nucleons), the ground state $\left\vert O\right\rangle $ is of course the one where all the particles occupy the lowest states. The excited states $\left\vert M\right\rangle $ are approximated by the one-particle-one-hole configurations. In that case, Eq. (\[massparameters\]) becomes: $$D_{ij}\left\{ \beta_{1},.,\beta_{n}\right\} =2\hbar^{2}\sum\limits_{l>\lambda,k<\lambda}\frac{\left\langle k\right\vert \dfrac{\partial}{\partial\beta_{i}}\left\vert l\right\rangle \left\langle l\right\vert \dfrac{\partial}{\partial\beta_{j}}\left\vert k\right\rangle }{\left( \epsilon_{l}-\epsilon_{k}\right) } \label{unoe}$$ where $\left\{ \beta_{1},.,\beta_{n}\right\} $, or in short $\left\{ \beta\right\} $, is a set of deformation parameters. The single-particle states $\left\vert l\right\rangle ,\left\vert k\right\rangle $ and single-particle energies $\epsilon_{l},\epsilon_{k}$ are given by the Schrodinger equation of the independent-particle model [@8], i.e. $H_{sp}\left\vert \nu\right\rangle =\epsilon_{\nu}\left\vert \nu\right\rangle $ (where $H_{sp}$ is the single-particle Hamiltonian). At last, $\lambda$ is the Fermi level. Using the properties $\left\langle \nu\right\vert \partial/\partial\beta\left\vert \mu\right\rangle =\left\langle \nu\right\vert \left[ \partial/\partial\beta,H_{sp}\right] \left\vert \mu\right\rangle /\left( \epsilon_{\nu}-\epsilon_{\mu}\right) $ and $\left[ \partial/\partial\beta,H_{sp}\right] =\partial H_{sp}/\partial\beta$, Eq. (\[unoe\]) becomes $$D_{ij}\left\{ \beta_{1},.,\beta_{n}\right\} =2\hbar^{2}\sum\limits_{l>\lambda,k<\lambda}\dfrac{\left\langle k\right\vert \dfrac{\partial H_{sp}}{\partial\beta_{i}}\left\vert l\right\rangle \left\langle l\right\vert \dfrac{\partial H_{sp}}{\partial\beta_{j}}\left\vert k\right\rangle }{\left( \epsilon_{l}-\epsilon_{k}\right) ^{3}} \label{massparameterssingle2}$$ where $H_{sp}$ is the single-particle Hamiltonian and $\lambda$ is the Fermi level.
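The equivalence of Eqs. (\[unoe\]) and (\[massparameterssingle2\]) is easy to verify numerically. The sketch below does so for a toy two-level $\beta$-dependent Hamiltonian (illustrative only, not the realistic single-particle problem; $\hbar=1$ and the diagonal $i=j$ element is taken in squared magnitude), obtaining the eigenvector derivative needed in Eq. (\[unoe\]) by finite differences:

```python
import numpy as np

def h_sp(beta):
    # toy two-level "single-particle" Hamiltonian (illustrative only)
    return np.array([[beta, 0.5], [0.5, -beta]])

beta, h = 0.3, 1e-6
e, v = np.linalg.eigh(h_sp(beta))       # e[0]: occupied |k>, e[1]: empty |l>

# cranking form: matrix element of dH/dbeta over (eps_l - eps_k)^3
dh = (h_sp(beta + h) - h_sp(beta - h)) / (2 * h)
me = v[:, 1] @ dh @ v[:, 0]
d3 = 2 * me**2 / (e[1] - e[0]) ** 3

# direct form: <l| d/dbeta |k> from finite differences of the eigenvectors
_, vp = np.linalg.eigh(h_sp(beta + h))
_, vm = np.linalg.eigh(h_sp(beta - h))
for w in (vp, vm):                      # fix the arbitrary eigenvector signs
    for j in range(2):
        if w[:, j] @ v[:, j] < 0:
            w[:, j] *= -1
dk = (vp[:, 0] - vm[:, 0]) / (2 * h)
d2 = 2 * (v[:, 1] @ dk) ** 2 / (e[1] - e[0])

print(d3, d2)                           # the two evaluations coincide
```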
With pairing correlations, hypothesis of the two-quasiparticle excitations states
---------------------------------------------------------------------------------
It must be noted that in Eq. (\[massparameterssingle2\]) the denominator $\epsilon_{l}-\epsilon_{k}$ vanishes in the case where the Fermi level coincides with two or more degenerate levels. This is the major drawback of the formula. It is possible to overcome this difficulty by taking the pairing correlations into account. This can be achieved through the BCS approximation by the following replacements in Eq. (\[massparameters\]): i) the ground state $\left\vert O\right\rangle $ by the BCS state $\left\vert BCS\right\rangle $; ii) the excited states $\left\vert M\right\rangle $ by the two-quasiparticle excitation states $\left\vert \nu,\mu\right\rangle =\alpha_{\nu}^{+}\alpha_{\mu}^{+}\left\vert BCS\right\rangle $ (here we consider only even-even nuclei); iii) the energy $E_{O}$ by $E_{BCS}$ and $E_{M}$ by the energy of the two quasiparticles, i.e., by $E_{\nu}+E_{\mu}+E_{BCS}$. The BCS state is defined from the “true” vacuum $\left\vert 0\right\rangle $ by $\left\vert BCS\right\rangle =\Pi_{k}\left( u_{k}+\upsilon_{k}a_{k}^{+}a_{\overline{k}}^{+}\right) \left\vert 0\right\rangle $.$$D_{ij}\left\{ \beta_{1},.,\beta_{n}\right\} =2\hbar^{2}\sum_{\nu,\mu}%
\frac{\left\langle BCS\right\vert \partial\text{ }/\partial\beta_{i}\left\vert
\nu,\mu\right\rangle \left\langle \nu,\mu\right\vert \partial\text{ }%
/\partial\beta_{j}\left\vert BCS\right\rangle }{E_{\nu}+E_{\mu}}
\label{bcsformula}%$$ where $\left( u_{\nu},\upsilon_{\nu}\right) $ are the usual probability amplitudes and $$E_{\nu}=\sqrt{\left( \epsilon_{\nu}-\lambda\right) ^{2}+\Delta^{2}}
\label{qp}%$$ is the so-called quasiparticle energy. As shown by Belyaev [@6], or as detailed in the appendix, the above formula can be written in another form:$$D_{ij}\left\{ \beta_{1},.,\beta_{n}\right\} =2\hbar^{2}%
%TCIMACRO{\dsum _{\nu}}%
%BeginExpansion
{\displaystyle\sum_{\nu}}
%EndExpansion%
%TCIMACRO{\dsum _{\mu\neq\nu}}%
%BeginExpansion
{\displaystyle\sum_{\mu\neq\nu}}
%EndExpansion
\left( u_{\nu}\upsilon_{\mu}+u_{\mu}\upsilon_{\nu}\right) ^{2}%
\dfrac{\left\langle \nu\right\vert \dfrac{\partial}{\partial\beta_{i}%
}\left\vert \mu\right\rangle \left\langle \mu\right\vert \dfrac{\partial
}{\partial\beta_{j}}\left\vert \nu\right\rangle }{E_{\nu}+E_{\mu}}+2\hbar
^{2}\underset{\nu}{%
%TCIMACRO{\dsum }%
%BeginExpansion
{\displaystyle\sum}
%EndExpansion
}\frac{1}{2E_{\nu}}\frac{1}{\upsilon_{\nu}^{2}}\frac{\partial u_{\nu}%
}{\partial\beta_{i}}\frac{\partial u_{\nu}}{\partial\beta_{j}} \label{dudu}%$$ Besides this formula, there is another, more convenient formulation due to Bes [@7], modified slightly by the authors of Ref. [@1], in which the derivatives $\partial u_{\nu}/\partial\beta_{i},\partial u_{\nu}/\partial\beta_{j}$ of Eq. (\[dudu\]) are explicitly performed (see also the details in the appendix of the present paper): $$D_{ij}\left\{ \beta_{1},.,\beta_{n}\right\} =2\hbar^{2}%
%TCIMACRO{\dsum _{\nu}}%
%BeginExpansion
{\displaystyle\sum_{\nu}}
%EndExpansion%
%TCIMACRO{\dsum _{\mu\neq\nu}}%
%BeginExpansion
{\displaystyle\sum_{\mu\neq\nu}}
%EndExpansion
\frac{\left( u_{\nu}\upsilon_{\mu}+u_{\mu}\upsilon_{\nu}\right) ^{2}%
}{\left( E_{\nu}+E_{\mu}\right) ^{3}}\left\langle \nu\right\vert
\dfrac{\partial H_{sp}}{\partial\beta_{i}}\left\vert \mu\right\rangle
\left\langle \mu\right\vert \dfrac{\partial H_{sp}}{\partial\beta_{j}%
}\left\vert \nu\right\rangle +2\hbar^{2}\underset{\nu}{%
%TCIMACRO{\dsum }%
%BeginExpansion
{\displaystyle\sum}
%EndExpansion
}\dfrac{\Delta^{2}}{8E_{\nu}^{5}}R_{i}^{\nu}R_{j}^{\nu} \label{with}%$$ Here the most important quantity for the subject of this paper is $R_{i}^{\nu}$ (once again, see formula (\[rik\]) in the appendix for how this quantity is obtained):$$R_{i}^{\nu}=-\left\langle \nu\right\vert \frac{\partial H_{sp}}{\partial
\beta_{i}}\left\vert \nu\right\rangle +\dfrac{\partial\lambda}{\partial
\beta_{i}}+\frac{\left( \epsilon_{\nu}-\lambda\right) }{\Delta}%
\dfrac{\partial\Delta}{\partial\beta_{i}} \label{mumu}%$$ The two quantities on the r.h.s. of Eqs. (\[dudu\]) and (\[with\]) are, in that order, the so-called “non-diagonal” and “diagonal” parts of the mass parameters. The derivatives are contained in the above diagonal term $R_{i}^{\nu}$. In other papers the cranking formula is usually cast in a slightly different form. All these formulae, (\[bcsformula\]), (\[dudu\]), (\[with\]) and others, are equivalent. The derivatives contained in Eq. (\[mumu\]) can then be calculated, as in Refs. [@11], [@1], with the help of the following formulae. $$\begin{gathered}
\frac{\partial\lambda}{\partial\beta_{i}}=\frac{ac_{\beta_{i}}+bd_{\beta_{i}}%
}{a^{2}+b^{2}}\label{difflamda}\\
\frac{\partial\Delta}{\partial\beta_{i}}=\frac{bc_{\beta_{i}}-ad_{\beta_{i}}%
}{a^{2}+b^{2}} \label{diffdelta}%\end{gathered}$$ with$$\begin{gathered}
a=\underset{\nu}{%
%TCIMACRO{\dsum }%
%BeginExpansion
{\displaystyle\sum}
%EndExpansion
}\Delta E_{\nu}^{-3},\text{ \ }b=\underset{\nu}{%
%TCIMACRO{\dsum }%
%BeginExpansion
{\displaystyle\sum}
%EndExpansion
}(\epsilon_{\nu}-\lambda)E_{\nu}^{-3},\label{ab}\\
c_{\beta_{i}}=\underset{\nu}{%
%TCIMACRO{\dsum }%
%BeginExpansion
{\displaystyle\sum}
%EndExpansion
}\Delta\left\langle \nu\right\vert \dfrac{\partial H_{sp}}{\partial\beta_{i}%
}\left\vert \nu\right\rangle E_{\nu}^{-3},\text{ \ }d_{\beta_{i}}%
=\underset{\nu}{%
%TCIMACRO{\dsum }%
%BeginExpansion
{\displaystyle\sum}
%EndExpansion
}(\epsilon_{\nu}-\lambda)\left\langle \nu\right\vert \dfrac{\partial H_{sp}%
}{\partial\beta_{i}}\left\vert \nu\right\rangle E_{\nu}^{-3} \label{cgdg}%\end{gathered}$$ These equations can be easily derived through the well-known properties of implicit functions. In the following, the expression “the derivatives” simply means both derivatives given by Eqs. (\[difflamda\]) and (\[diffdelta\]).
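A toy numerical illustration of Eqs. (\[difflamda\])-(\[cgdg\]) (all inputs below are assumed for the sketch): with a spectrum symmetric about $\lambda$ the quantity $b$ of Eq. (\[ab\]) vanishes, mimicking the accidental cancellation discussed in a later subsection, and $\partial\Delta/\partial\beta$ then grows roughly like $1/\Delta$ as the pairing collapses:

```python
import math

def derivatives(eps, dH_diag, lam, delta):
    """Eqs. (difflamda)-(diffdelta): (dlambda/dbeta, ddelta/dbeta) built from
    the diagonal elements dH_diag[nu] = <nu| dH_sp/dbeta |nu> via Eqs. (ab)-(cgdg)."""
    w = [((e - lam) ** 2 + delta ** 2) ** -1.5 for e in eps]   # E_nu^{-3}
    a = sum(delta * wi for wi in w)
    b = sum((e - lam) * wi for e, wi in zip(eps, w))
    c = sum(delta * h * wi for h, wi in zip(dH_diag, w))
    d = sum((e - lam) * h * wi for e, h, wi in zip(eps, dH_diag, w))
    denom = a * a + b * b               # the denominator that can cancel
    return (a * c + b * d) / denom, (b * c - a * d) / denom

eps = [0.0, 1.0, 2.0, 3.0]      # symmetric about lam = 1.5, so b = 0 here
dH = [0.3, -0.1, 0.2, -0.4]     # assumed diagonal matrix elements
grad_delta = {d: derivatives(eps, dH, 1.5, d)[1] for d in (1.0, 0.1, 0.01)}
# |ddelta/dbeta| grows without bound as Delta -> 0, while dlambda/dbeta stays finite
```

Here $a\propto\Delta$ and $b=0$, so the common denominator $a^{2}+b^{2}$ collapses with the gap, which is exactly the mechanism behind the singularity discussed below.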
In the simple BCS theory the gap parameters $\Delta$ and the Fermi level $\lambda$ are solved from the following BCS equations (\[bcs1\]) and (\[bcs2\]) as soon as the single-particle spectrum $\left\{ \epsilon_{\nu
}\right\} $ is known.$$\dfrac{2}{G}=\underset{\nu=1}{\overset{N_{P}}{\sum}}\frac{1}{\sqrt{\left(
\epsilon_{\nu}-\lambda\right) ^{2}+\Delta^{2}}} \label{bcs1}%$$$$N\text{ or }Z=\underset{\nu=1}{\overset{N_{P}}{\sum}}\left( 1-\frac
{\epsilon_{\nu}-\lambda}{\sqrt{\left( \epsilon_{\nu}-\lambda\right)
^{2}+\Delta^{2}}}\right) \label{bcs2}%$$ ($N_{P}$ is the number of pairs of particles in numerical calculations). Of course, through equations (\[bcs1\]) and (\[bcs2\]) the deformation dependence of the eigenenergies $\epsilon_{\nu}(\beta)$ induces that of $\Delta$ and $\lambda$. Formally, solving Eqs. (\[bcs1\]) and (\[bcs2\]) amounts to expressing $\Delta$ and $\lambda$ as functions of the set of energy levels $\left\{ \epsilon_{\nu}\right\} $.
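As a sketch of how Eqs. (\[bcs1\])-(\[bcs2\]) can be solved in practice (the toy spectrum, strength $G$ and particle number below are assumed, not taken from the paper), one may bisect the number equation for $\lambda$ at fixed $\Delta$ and iterate the gap equation as a fixed point:

```python
import math

def qp_energy(e, lam, delta):
    """Quasiparticle energy of Eq. (qp)."""
    return math.sqrt((e - lam) ** 2 + delta ** 2)

def solve_lambda(eps, delta, n_particles):
    """Bisect the number equation (bcs2), N = sum(1 - (e - lam)/E), for lambda;
    its left-hand side is monotonically increasing in lambda."""
    lo, hi = min(eps) - 10.0, max(eps) + 10.0
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        n = sum(1.0 - (e - lam) / qp_energy(e, lam, delta) for e in eps)
        if n < n_particles:
            lo = lam
        else:
            hi = lam
    return 0.5 * (lo + hi)

def solve_bcs(eps, G, n_particles, iters=400):
    """Fixed-point iteration Delta <- (G/2) sum(Delta/E) for the gap equation
    (bcs1), re-solving lambda at every step."""
    delta = 1.0
    for _ in range(iters):
        lam = solve_lambda(eps, delta, n_particles)
        delta = max(0.5 * G * sum(delta / qp_energy(e, lam, delta) for e in eps),
                    1e-12)               # keep away from the trivial root Delta = 0
    return delta, lam

# assumed toy input: equidistant pair levels at half filling
eps = [0.5 * k for k in range(12)]
delta, lam = solve_bcs(eps, G=0.4, n_particles=12)
```

For this symmetric half-filled spectrum $\lambda$ sits midway between the two central levels, and a non-trivial gap exists because $G$ is chosen well above its critical value.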
Paradox of the formula in an unpaired system\[paradox\]
-------------------------------------------------------
It is well known that the BCS equations have non-trivial solutions only above a critical value of the strength $G$ of the pairing interaction. The trivial solution corresponds theoretically to the value $\Delta=0$ of an unpaired system. In this case, the mass parameters given by (\[dudu\]) or (\[with\]) must reduce to those of formula (\[massparameterssingle2\]), i.e. the cranking formula of the independent-particle model. Indeed, when $\Delta=0$ it is quite clear that:
$E_{\nu}=\sqrt{\left( \epsilon_{\nu}-\lambda\right) ^{2}+\Delta^{2}%
}\rightarrow E_{\nu}=\left\vert \epsilon_{\nu}-\lambda\right\vert $
$u_{\nu},\upsilon_{\nu}\rightarrow0$ or $1$, therefore in Eq. (\[with\]) $\left( u_{\nu}\upsilon_{\mu}+u_{\mu}\upsilon_{\nu}\right) ^{2}\rightarrow0$ or $1$.
In accordance with the above, since $\left( u_{\nu}\upsilon_{\mu}+u_{\mu}\upsilon_{\nu}\right) ^{2}=0$ or $1$, we can choose $\nu$ and $\mu$ in such a way that $\epsilon_{\nu}>\lambda$ and $\epsilon_{\mu}<\lambda$; therefore $E_{\nu}+E_{\mu}=\left\vert \epsilon_{\nu}-\lambda\right\vert +\left\vert \epsilon_{\mu}-\lambda\right\vert =\epsilon_{\nu}-\lambda+\lambda-\epsilon_{\mu}=\epsilon_{\nu}-\epsilon_{\mu}$, so that it is easy to see that the non-diagonal part of the right-hand side of Eq. (\[with\]) reduces effectively to Eq. (\[massparameterssingle2\]), i.e.:
$2\hbar^{2}%
%TCIMACRO{\dsum _{\nu}}%
%BeginExpansion
{\displaystyle\sum_{\nu}}
%EndExpansion%
%TCIMACRO{\dsum _{\mu\neq\nu}}%
%BeginExpansion
{\displaystyle\sum_{\mu\neq\nu}}
%EndExpansion
\frac{\left( u_{\nu}\upsilon_{\mu}+u_{\mu}\upsilon_{\nu}\right) ^{2}%
}{\left( E_{\nu}+E_{\mu}\right) ^{3}}\left\langle \nu\right\vert
\tfrac{\partial H_{sp}}{\partial\beta_{i}}\left\vert \mu\right\rangle
\left\langle \mu\right\vert \tfrac{\partial H_{sp}}{\partial\beta_{j}%
}\left\vert \nu\right\rangle \rightarrow2\hbar^{2}%
%TCIMACRO{\tsum \limits_{\nu>\lambda,\mu<\lambda}}%
%BeginExpansion
{\textstyle\sum\limits_{\nu>\lambda,\mu<\lambda}}
%EndExpansion
\tfrac{\left\langle \nu\right\vert \tfrac{\partial H_{sp}}{\partial\beta_{i}%
}\left\vert \mu\right\rangle \left\langle \mu\right\vert \tfrac{\partial
H_{sp}}{\partial\beta_{j}}\left\vert \nu\right\rangle }{\left( \epsilon_{\nu
}-\epsilon_{\mu}\right) ^{3}}$ This implies the important fact that in this limit ($\Delta\rightarrow0$), *the diagonal part (i.e. the second term) of the r.h.s. of Eq. (\[with\]) must vanish*, i.e. in other words:
$2\hbar^{2}\underset{\nu}{%
%TCIMACRO{\dsum }%
%BeginExpansion
{\displaystyle\sum}
%EndExpansion
}\tfrac{\Delta^{2}}{8E_{\nu}^{5}}R_{i}^{\nu}R_{j}^{\nu}$ $\rightarrow0$ when $\Delta\rightarrow0$. However, in practice, in some rare cases of the pairing phase transition this does not occur, because this term happens to diverge near the breakdown of the pairing correlations, i.e., in practice for very small values of $\Delta(\sim0)$ (see the numerical example in the text below). This really constitutes a contradiction and a paradox in this formula. In the quantity $R_{i}^{\nu}$ of Eq. (\[mumu\]) the diagonal matrix elements $\left\langle \nu\right\vert \partial H_{sp}/\partial\beta_{i}\left\vert \nu\right\rangle $ are finite and relatively small; it is then clear that it is the derivatives $\partial\Delta/\partial\beta_{i}$ and $\partial\lambda/\partial\beta_{i}$ which cause the problem. These features have been checked in numerical calculations. In this respect, the formulae (\[difflamda\]) and (\[diffdelta\]) which give these derivatives are subject to a major drawback because their common denominator, i.e. $a^{2}+b^{2}$, can accidentally cancel. Let us study this situation briefly. In effect, this can be easily explained: in the unpaired situation we must have $\Delta\sim0$, implying $a\sim0$ in Eq. (\[ab\]). In addition, $b$ is defined as a sum of positive and negative values, depending on whether the terms are below or above the Fermi level. Therefore it can happen accidentally that $b\sim0$ in Eq. (\[ab\]), entailing serious drawbacks or at least numerical instabilities.
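The $\Delta\rightarrow0$ reduction just described can also be verified numerically. The sketch below (toy spectrum and matrix elements assumed, $\hbar=1$) evaluates the non-diagonal part of Eq. (\[with\]) for a small gap and compares it with Eq. (\[massparameterssingle2\]); note that the ordered double sum coded here counts each particle-hole pair twice, hence the explicit factor $1/2$:

```python
import math

def u_v_E(e, lam, delta):
    """BCS amplitudes and quasiparticle energy for one level."""
    E = math.sqrt((e - lam) ** 2 + delta ** 2)
    u = math.sqrt(0.5 * (1.0 + (e - lam) / E))
    v = math.sqrt(0.5 * (1.0 - (e - lam) / E))
    return u, v, E

def nondiag_paired(eps, M, lam, delta):
    """Non-diagonal part of Eq. (with): ordered double sum nu != mu (hbar = 1)."""
    total = 0.0
    for i, ei in enumerate(eps):
        ui, vi, Ei = u_v_E(ei, lam, delta)
        for j, ej in enumerate(eps):
            if j == i:
                continue
            uj, vj, Ej = u_v_E(ej, lam, delta)
            total += (ui * vj + uj * vi) ** 2 * M[i][j] * M[j][i] / (Ei + Ej) ** 3
    return 2.0 * total

def unpaired(eps, M, lam):
    """Eq. (massparameterssingle2): particles above, holes below the Fermi level."""
    total = 0.0
    for i, ei in enumerate(eps):
        if ei < lam:
            continue
        for j, ej in enumerate(eps):
            if ej > lam:
                continue
            total += M[i][j] * M[j][i] / (ei - ej) ** 3
    return 2.0 * total

eps = [0.0, 1.0, 2.0, 3.0]                                           # toy spectrum
M = [[0.0 if i == j else 1.0 for j in range(4)] for i in range(4)]   # assumed elements
limit = nondiag_paired(eps, M, 1.5, 1e-6) / 2.0   # factor 2: ordered pairs counted twice
target = unpaired(eps, M, 1.5)
```

In this limit the pairs with both states on the same side of the Fermi level drop out, since $u_{\nu}\upsilon_{\mu}+u_{\mu}\upsilon_{\nu}\rightarrow0$ there, and only the particle-hole combinations survive.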
Quantities such as $\Delta$ and $\lambda$ are not consistent with the hypothesis of the Inglis-Belyaev formula. \[correction\]
==============================================================================================================================
Basic hypothesis of the Inglis-Belyaev formula and simplification of the formula
--------------------------------------------------------------------------------
In the independent-particle approximation the contributions to the mass parameters are simply due to one-particle, one-hole excitations. Thus in the formulae (\[unoe\]) or (\[massparameterssingle2\]) the particle-hole excitations are denoted by the single-particle states $k$ and $l$. When the pairing correlations are taken into account, the contributions are supposed to be due only to two-quasiparticle excitation states $\left( \nu,\overline{\mu}\right) $ $\left\{ \mu\neq\nu\right\} $, which give rise to the first term of Eq. (\[with\]). The second term of this formula is due to the derivatives of the probability amplitudes and has to be interpreted as two-quasiparticle excitations of the type $\left( \nu,\overline{\nu}\right) $. However, this is not true for all the terms entering into the product of the quantities $R_{i}^{\nu},R_{j}^{\nu}$. Let us refocus on the formula (\[with\]), in which we replace in the second sum the quantity $\Delta$ by its equivalent from the identity $\Delta=2u_{\nu}\upsilon_{\nu}E_{\nu}$. After simplification of the coefficient of $R_{i}^{\nu}R_{j}^{\nu}$ we obtain: $$D_{ij}\left\{ \beta_{1},.,\beta_{n}\right\} =2\hbar^{2}%
%TCIMACRO{\dsum _{\nu}}%
%BeginExpansion
{\displaystyle\sum_{\nu}}
%EndExpansion%
%TCIMACRO{\dsum _{\mu\neq\nu}}%
%BeginExpansion
{\displaystyle\sum_{\mu\neq\nu}}
%EndExpansion
\frac{\left( u_{\nu}\upsilon_{\mu}+u_{\mu}\upsilon_{\nu}\right) ^{2}%
}{\left( E_{\nu}+E_{\mu}\right) ^{3}}\left\langle \nu\right\vert
\dfrac{\partial H_{sp}}{\partial\beta_{i}}\left\vert \mu\right\rangle
\left\langle \mu\right\vert \dfrac{\partial H_{sp}}{\partial\beta_{j}%
}\left\vert \nu\right\rangle +2\hbar^{2}\underset{\nu}{%
%TCIMACRO{\dsum }%
%BeginExpansion
{\displaystyle\sum}
%EndExpansion
}\dfrac{\left( 2u_{\nu}\upsilon_{\nu}\right) ^{2}}{8E_{\nu}^{3}}R_{i}^{\nu
}R_{j}^{\nu} \label{mp}%$$ The fundamental point is that, written in this way, it is clear that all the quantities in Eq. (\[mp\]) are associated with the quasiparticle states $\nu$ and $\mu$, except the derivatives of $\Delta$ and $\lambda$. Quantities such as $\Delta$ and $\lambda$ appearing in $\left( R_{i}^{\nu},R_{j}^{\nu}\right) $ (see Eq. (\[mumu\])), which are deduced from Eqs. (\[bcs1\])-(\[bcs2\]), are due to the whole spectrum; they are clearly not specifically linked to these two particular states (otherwise the indices $\nu$ and $\mu$ should appear with these quantities). Therefore they cannot really be considered as contributions due to two-quasiparticle excitation states, which is the basic hypothesis of the Inglis-Belyaev formula, and they cannot be taken into account. With this additional assumption, the element $R_{i}^{\nu}$ must reduce to nothing but a simple matrix element: $$R_{i}^{\nu}=-\left\langle \nu\right\vert \frac{\partial H_{sp}}{\partial
\beta_{i}}\left\vert \nu\right\rangle$$ Consequently this greatly simplifies the formula (\[mp\]), which becomes:$$D_{ij}\left\{ \beta_{1},.,\beta_{n}\right\} =2\hbar^{2}%
%TCIMACRO{\dsum _{\nu}}%
%BeginExpansion
{\displaystyle\sum_{\nu}}
%EndExpansion%
%TCIMACRO{\dsum _{\mu\neq\nu}}%
%BeginExpansion
{\displaystyle\sum_{\mu\neq\nu}}
%EndExpansion
\left( u_{\nu}\upsilon_{\mu}+u_{\mu}\upsilon_{\nu}\right) ^{2}%
\frac{\left\langle \nu\right\vert \dfrac{\partial H_{sp}}{\partial\beta_{i}%
}\left\vert \mu\right\rangle \left\langle \mu\right\vert \dfrac{\partial
H_{sp}}{\partial\beta_{j}}\left\vert \nu\right\rangle }{\left( E_{\nu}%
+E_{\mu}\right) ^{3}}+2\hbar^{2}\underset{\nu}{%
%TCIMACRO{\dsum }%
%BeginExpansion
{\displaystyle\sum}
%EndExpansion
}\left( 2u_{\nu}\upsilon_{\nu}\right) ^{2}\frac{\left\langle \nu\right\vert
\dfrac{\partial H_{sp}}{\partial\beta_{i}}\left\vert \nu\right\rangle
\left\langle \nu\right\vert \dfrac{\partial H_{sp}}{\partial\beta_{j}}\left\vert \nu\right\rangle }{\left( 2E_{\nu}\right) ^{3}} \label{mpp}%$$ It is to be noted that the missing term $(\mu=\nu)$ in the double sum is precisely the contribution of the single sum on the r.h.s. of Eq. (\[mpp\]). Therefore the formula (\[mpp\]) can be reformulated in a compact form: $$D_{ij}\left\{ \beta_{1},.,\beta_{n}\right\} =2\hbar^{2}%
%TCIMACRO{\dsum _{\nu}}%
%BeginExpansion
{\displaystyle\sum_{\nu}}
%EndExpansion%
%TCIMACRO{\dsum _{\mu}}%
%BeginExpansion
{\displaystyle\sum_{\mu}}
%EndExpansion
\left( u_{\nu}\upsilon_{\mu}+u_{\mu}\upsilon_{\nu}\right) ^{2}%
\frac{\left\langle \nu\right\vert \dfrac{\partial H_{sp}}{\partial\beta_{i}%
}\left\vert \mu\right\rangle \left\langle \mu\right\vert \dfrac{\partial
H_{sp}}{\partial\beta_{j}}\left\vert \nu\right\rangle }{\left( E_{\nu}%
+E_{\mu}\right) ^{3}}%$$ In this more “symmetric” form, this formula resembles more naturally the Inglis-Belyaev formula for the moments of inertia, which contains no dependence on the derivatives of $\Delta$ and $\lambda$.
The “new” formula corrects the previous paradox because, in the case of the phase transition $\Delta\rightarrow0$, the diagonal terms $\nu=\mu$ satisfy $u_{\nu}\upsilon_{\nu}+u_{\nu}\upsilon_{\nu}\rightarrow0$ for any $\nu$ in this limit; since the corresponding matrix element $\left\langle \nu\right\vert \partial H_{sp}/\partial\beta_{i}\left\vert \nu\right\rangle $ is finite, the second term of Eq. (\[mpp\]) tends uniformly toward zero, so that Eq. (\[mpp\]) reduces in this case to the equation of the unpaired system (\[massparameterssingle2\]) without any problem.
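The opposite behaviours of the old and new diagonal terms can be made concrete in a toy model (all inputs below are assumed for the sketch; the spectrum is chosen symmetric about $\lambda$ so that $b$ of Eq. (\[ab\]) vanishes and the denominator $a^{2}+b^{2}$ collapses with $\Delta$). For the diagonal case $i=j$, the second term of Eq. (\[with\]) then blows up as $\Delta\rightarrow0$, while that of Eq. (\[mpp\]) vanishes:

```python
import math

def qpE(e, lam, d):
    return math.sqrt((e - lam) ** 2 + d ** 2)

def lam_del_derivs(eps, h, lam, d):
    """Eqs. (difflamda)-(diffdelta) built from h[nu] = <nu| dH_sp/dbeta |nu>."""
    w = [qpE(e, lam, d) ** -3 for e in eps]
    a = sum(d * wi for wi in w)
    b = sum((e - lam) * wi for e, wi in zip(eps, w))
    c = sum(d * hi * wi for hi, wi in zip(h, w))
    dd = sum((e - lam) * hi * wi for e, hi, wi in zip(eps, h, w))
    den = a * a + b * b
    return (a * c + b * dd) / den, (b * c - a * dd) / den

def diag_old(eps, h, lam, d):
    """Second term of Eq. (with) for i = j: 2 sum Delta^2/(8 E^5) (R^nu)^2."""
    dlam, ddel = lam_del_derivs(eps, h, lam, d)
    s = 0.0
    for e, hi in zip(eps, h):
        E = qpE(e, lam, d)
        R = -hi + dlam + (e - lam) / d * ddel      # Eq. (mumu)
        s += d * d / (8.0 * E ** 5) * R * R
    return 2.0 * s

def diag_new(eps, h, lam, d):
    """Second term of Eq. (mpp) for i = j, using 2 u v = Delta / E."""
    return 2.0 * sum((d / qpE(e, lam, d)) ** 2 * hi * hi
                     / (8.0 * qpE(e, lam, d) ** 3) for e, hi in zip(eps, h))

eps = [0.0, 1.0, 2.0, 3.0]      # symmetric about lam = 1.5, so b = 0
h = [0.3, -0.1, 0.2, -0.4]      # assumed diagonal matrix elements
old = {d: diag_old(eps, h, 1.5, d) for d in (1.0, 0.1, 0.01)}
new = {d: diag_new(eps, h, 1.5, d) for d in (1.0, 0.1, 0.01)}
```

The divergence of `old` is driven entirely by the $\left( \epsilon_{\nu}-\lambda\right) \left( \partial\Delta/\partial\beta\right) /\Delta$ piece of $R_{i}^{\nu}$, i.e. by the derivatives that the present paper argues should be removed.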
Illustration of the application of the Inglis Belyaev formula in the case where the singularity occurs
======================================================================================================
This is illustrated in Fig. \[fig1\] by the behaviour of the vibrational parameter $B_{\beta\beta}(\beta,\gamma=0)$ as a function of the Bohr parameter in the case of the magic nucleus $_{54}^{136}$Xe$_{82}$. These calculations have been performed for both formulae (\[with\]) and (\[mpp\]), i.e., respectively with and without the derivatives $\partial\lambda/\partial\beta$ and $\partial\Delta/\partial\beta$. The resonance (singularity) $B_{\beta\beta}\sim7\times10^{6}\hbar^{2}MeV^{-1}$ occurs near the deformation $\beta=0.09$ for the formula with the derivatives. This happens even though $\Delta$ is very close to $0$. Between $\beta=0$ and $\beta=0.15$ the formula without derivatives gives small (finite) values ($B_{\beta\beta}\sim25\hbar^{2}MeV^{-1}$). These very small values of the independent-particle model are due to the collapse of the pairing correlations. In addition, during the phase transition, i.e., for $0.1\lesssim\Delta\lesssim0.2$, the vibrational parameters increase up to the important value $B_{\beta\beta}\sim500\hbar^{2}MeV^{-1}$. We have checked that this is due to a pseudo-crossing of levels near the Fermi level. However, we have furthermore checked carefully that there is absolutely no level crossing near the singularity. Thus the singularity is not a consequence of a level crossing, as is often claimed [@10]. As said before, the explanation comes from the fact that in Eqs. (\[difflamda\]) and (\[diffdelta\]) the denominator simply cancels. This demonstrates the weakness of the old formula (\[with\]) with respect to that proposed in this paper, that is, Eq. (\[mpp\]).
![Neutron contribution to the mass parameter $B_{\beta\beta}$ for the magic nucleus $_{54}^{136}Xe_{82}$. The calculations are performed with the cranking formula including the derivatives and with the same formula without the derivatives. Note the quasi-divergence (singularity) of the version with derivatives near the deformation $\beta=0.1$[]{data-label="fig1"}](Graph1.eps){width="140mm"}
Conclusion
==========
In some rare but important illustrative cases the application of the Inglis-Belyaev formula to the mass parameters reveals incontestable weaknesses in the limit of unpaired systems, $\Delta\rightarrow0$. In effect, this formula leads straightforwardly to a major contradiction: not only does it not reduce to the formula of the unpaired system in the case $\Delta=0$ (which is already a contradictory fact), it even gives unphysical (singular) values. It has been reported in the literature that self-consistent calculations also meet the same kind of problems (see text). After extensive calculations with the Inglis-Belyaev formula, we realized that these problems are inherent to a spurious presence of the derivatives of $\Delta$ and $\lambda$ in the formula. This led us to “revise” the conception of this formula simply by removing the derivatives, which are not consistent with the basic hypothesis of the formula, that is to say with two-quasiparticle excitation states. This is the reason why our proposal cannot be considered as a simple recipe for the limit $\Delta=0$ but as a well-founded rectification of the formula, which is thus no longer subject to the cited problems and reduces naturally to that of the unpaired system in the limit $\Delta\rightarrow0$.
The cranking formula with pairing correlations
==============================================
We have to calculate the matrix element of the type $\left\langle
n,m\right\vert \partial$ $/\partial\beta_{i}\left\vert BCS\right\rangle $ which appears in Eq. (\[bcsformula\]) of the text, i.e.:
$D_{ij}\left\{ \beta_{1},.,\beta_{n}\right\} =2\hbar^{2}\sum_{\nu,\mu}%
\frac{\left\langle BCS\right\vert \partial\text{ }/\partial\beta_{i}\left\vert
\nu,\mu\right\rangle \left\langle \nu,\mu\right\vert \partial\text{ }%
/\partial\beta_{j}\left\vert BCS\right\rangle }{E_{\nu}+E_{\mu}}$ keeping in mind, however, that the differential operator acts not only on the wave functions of the BCS state but also on the occupation probabilities $u_{k},\upsilon_{k}$ (of the BCS state), which also depend on the deformation parameter $\beta_{i}$; we therefore have to write
$\dfrac{\partial}{\partial\beta_{i}}=\left( \dfrac{\partial}{\partial
\beta_{i}}\right) _{wave\text{ }func}+\left( \dfrac{\partial}{\partial
\beta_{i}}\right) _{occup.prob}$ We must therefore successively evaluate two types of matrix elements.
Calculation of the first type of matrix elements\[first type\]
--------------------------------------------------------------
For a one-particle operator we have, in the second-quantization representation: $\left( \dfrac{\partial}{\partial\beta_{i}}\right)
_{wave\text{ }func}=\sum_{\nu,\mu}\left\langle \nu\right\vert \dfrac{\partial
}{\partial\beta_{i}}\left\vert \mu\right\rangle a_{\nu}^{+}a_{\mu}^{{}}%
$ Applying this operator to the paired system and using the inverse of the Bogoliubov-Valatin transformation: $a_{\nu}=(u_{\nu}\alpha_{\nu
}+\upsilon_{\nu}\alpha_{\overline{\nu}}^{+}),a_{\nu}^{+}=(u_{\nu}\alpha_{\nu
}^{+}+\upsilon_{\nu}\alpha_{\overline{\nu}})$We find:$$\left( \dfrac{\partial}{\partial\beta_{i}}\right) _{wave\text{ }%
func}\left\vert BCS\right\rangle =\sum_{\nu,\mu}\left\langle \nu\right\vert
\dfrac{\partial}{\partial\beta_{i}}\left\vert \mu\right\rangle a_{\nu}%
^{+}a_{\mu}^{{}}\left\vert BCS\right\rangle \label{twoqp}%$$ $=\sum_{\nu,\mu}\left\langle \nu\right\vert \dfrac{\partial}{\partial\beta
_{i}}\left\vert \mu\right\rangle (u_{\nu}\alpha_{\nu}^{+}+\upsilon_{\nu}%
\alpha_{\overline{\nu}})(u_{\mu}\alpha_{\mu}+\upsilon_{\mu}\alpha
_{\overline{\mu}}^{+})\left\vert BCS\right\rangle =\sum_{\nu,\mu}\left\langle
\nu\right\vert \dfrac{\partial}{\partial\beta_{i}}\left\vert \mu\right\rangle
(u_{\nu}\alpha_{\nu}^{+}+\upsilon_{\nu}\alpha_{\overline{\nu}})\upsilon_{\mu
}\alpha_{\overline{\mu}}^{+}\left\vert BCS\right\rangle $ because $\alpha
_{\mu}\left\vert BCS\right\rangle =0$. Therefore $$\left( \dfrac{\partial}{\partial\beta_{i}}\right) _{wave\text{ }%
func}\left\vert BCS\right\rangle =\sum_{\nu,\mu}\left\langle \nu\right\vert
\dfrac{\partial}{\partial\beta_{i}}\left\vert \mu\right\rangle \left\{
u_{\nu}\alpha_{\nu}^{+}\upsilon_{\mu}\alpha_{\overline{\mu}}^{+}\left\vert
BCS\right\rangle +\upsilon_{\nu}\alpha_{\overline{\nu}}\upsilon_{\mu}%
\alpha_{\overline{\mu}}^{+}\left\vert BCS\right\rangle \right\} \label{vnsh}%$$ We must notice that for the term $\nu=\mu$ we will have the contribution $\left\langle \nu\right\vert \dfrac{\partial}{\partial\beta_{i}%
}\left\vert \nu\right\rangle \left\{ u_{\nu}\upsilon_{\nu}\alpha_{\nu}%
^{+}\alpha_{\overline{\nu}}^{+}\left\vert BCS\right\rangle +\upsilon_{\nu}%
^{2}\left\vert BCS\right\rangle \right\} $ which is a mixture of a two-quasiparticle state with a BCS state. Because the state given by Eq. (\[twoqp\]) must represent only two-quasiparticle excitations, we have to exclude the contribution due to the term $\nu=\mu$ from the sum of this equation. This restriction leads to the following formula: $$\left( \dfrac{\partial}{\partial\beta_{i}}\right) _{wave\text{ }%
func}\left\vert BCS\right\rangle =\sum_{\nu\neq\mu}\left\langle \nu\right\vert
\dfrac{\partial}{\partial\beta_{i}}\left\vert \mu\right\rangle (u_{\nu
}\upsilon_{\mu}\alpha_{\nu}^{+}\alpha_{\overline{\mu}}^{+})\left\vert
BCS\right\rangle \label{op}%$$ It will be noted that the term $\upsilon_{\nu}\alpha_{\overline{\nu}}%
\upsilon_{\mu}\alpha_{\overline{\mu}}^{+}\left\vert BCS\right\rangle $ vanishes for $\nu\neq\mu$ in the r.h.s. of Eq. (\[vnsh\]). We then calculate the first type of matrix elements:$$I_{1}=\left\langle n,m\right\vert \sum_{\nu\neq\mu}\left\langle \nu\right\vert
\dfrac{\partial}{\partial\beta_{i}}\left\vert \mu\right\rangle u_{\nu}%
\upsilon_{\mu}\alpha_{\nu}^{+}\alpha_{\overline{\mu}}^{+}\left\vert
BCS\right\rangle \label{firsttype}%$$ The above form of the formula suggests that the excited states must be of the form $\left\vert n,m\right\rangle =\alpha_{k}^{+}\alpha
_{\overline{l}}^{+}\left\vert BCS\right\rangle =\left\vert k,\overline
{l}\right\rangle $. We then obtain: $I_{1}=\left\langle
BCS\right\vert \alpha_{\overline{l}}^{{}}\alpha_{k}^{{}}\sum_{\nu\neq\mu
}\left\langle \nu\right\vert \dfrac{\partial}{\partial\beta_{i}}\left\vert
\mu\right\rangle u_{\nu}\upsilon_{\mu}\alpha_{\nu}^{+}\alpha_{\overline{\mu}%
}^{+}\left\vert BCS\right\rangle $$=\sum_{\nu\neq\mu}\left\langle
\nu\right\vert \dfrac{\partial}{\partial\beta_{i}}\left\vert \mu\right\rangle
u_{\nu}\upsilon_{\mu}\left\langle BCS\right\vert \alpha_{\overline{l}}^{{}%
}\alpha_{k}^{{}}\alpha_{\nu}^{+}\alpha_{\overline{\mu}}^{+}\left\vert
BCS\right\rangle $ We use the following usual fermion anticommutation relations: $\left\{ \alpha_{k}^{{}},\alpha_{l}^{{}}\right\} =\left\{
\alpha_{k}^{+},\alpha_{l}^{+}\right\} =0,$ $\left\{ \alpha_{k}^{{}%
},\alpha_{l}^{+}\right\} =\delta_{kl}$ Thus the quantity sandwiched between the BCS states gives: $\left\langle BCS\right\vert
\alpha_{\overline{l}}^{{}}\alpha_{k}^{{}}\alpha_{\nu}^{+}\alpha_{\overline
{\mu}}^{+}\left\vert BCS\right\rangle =\left( \delta_{l\mu}\delta_{\nu
k}-\delta_{\overline{\mu}k}\delta_{\nu\overline{l}}\right) $ We obtain: $I_{1}=\sum_{\nu\neq\mu}\left\langle \nu\right\vert
\dfrac{\partial}{\partial\beta_{i}}\left\vert \mu\right\rangle u_{\nu}%
\upsilon_{\mu}\left( \delta_{l\mu}\delta_{\nu k}-\delta_{\overline{\mu}%
k}\delta_{\nu\overline{l}}\right) =\left\langle k\right\vert \dfrac{\partial
}{\partial\beta_{i}}\left\vert l\right\rangle u_{k}\upsilon_{l}-\left\langle
\overline{l}\right\vert \dfrac{\partial}{\partial\beta_{i}}\left\vert
\overline{k}\right\rangle u_{\overline{l}}\upsilon_{\overline{k}}$ with $k\neq l$, because the indices of the brackets must be different in Eq. (\[firsttype\]). Noting that if $\widehat{T}$ is the time-reversal conjugation operator, we must have for any operator $\hat{O}$$\left\langle p\right\vert \hat{O}\left\vert q\right\rangle =\left\langle
\widehat{T}p\right\vert \widehat{T}\hat{O}\widehat{T}^{-1}\left\vert
\widehat{T}q\right\rangle ^{\ast}$ Applying this result to our case and assuming that $\partial/\partial\beta_{i}$ is time-even, i.e. $\widehat
{T}\left( \partial/\partial\beta_{i}\right) \widehat{T}^{-1}=\partial
/\partial\beta_{i}$, we get:$\left\langle \overline{l}\right\vert
\dfrac{\partial}{\partial\beta_{i}}\left\vert \overline{k}\right\rangle
=\left\langle \widehat{T}\overline{l}\right\vert \widehat{T}\dfrac{\partial
}{\partial\beta_{i}}\widehat{T}^{-1}\left\vert \widehat{T}\overline
{k}\right\rangle ^{\ast}=\left\langle k\right\vert \dfrac{\partial}%
{\partial\beta_{i}}\left\vert l\right\rangle $ Moreover, using the usual phase convention $u_{\overline{l}}=u_{l}$, $\upsilon
_{\overline{k}}=-\upsilon_{k}$, we deduce: $I_{1}=\left\langle
k\right\vert \dfrac{\partial}{\partial\beta_{i}}\left\vert l\right\rangle
u_{k}\upsilon_{l}+\left\langle k\right\vert \dfrac{\partial}{\partial\beta
_{i}}\left\vert l\right\rangle u_{l}\upsilon_{k}=\left( u_{k}\upsilon
_{l}+u_{l}\upsilon_{k}\right) \left\langle k\right\vert \dfrac{\partial
}{\partial\beta_{i}}\left\vert l\right\rangle $ Taking into account that the bra and ket states in Eq. (\[op\]) must be different, the final result for $I_{1}$ given by (\[firsttype\]) will take the following form: $$I_{1}=\left\langle k,\overline{l}\right\vert \left( \frac{\partial}%
{\partial\beta_{i}}\right) _{wave\text{ }func}\left\vert BCS\right\rangle
=\left( u_{k}\upsilon_{l}+u_{l}\upsilon_{k}\right) \left\langle k\right\vert
\frac{\partial}{\partial\beta_{i}}\left\vert l\right\rangle \ \ with\ \ k\neq
l \label{a3}%$$ Let $H_{sp}$ be the single-particle Hamiltonian and $$H^{\prime}=%
%TCIMACRO{\tsum _{\nu,\mu}}%
%BeginExpansion
{\textstyle\sum_{\nu,\mu}}
%EndExpansion
\left\langle \nu\right\vert \left( H_{sp}-\lambda\right) \left\vert
\mu\right\rangle a_{\nu}^{+}a_{\mu}-G%
%TCIMACRO{\tsum _{\nu,\mu>0}}%
%BeginExpansion
{\textstyle\sum_{\nu,\mu>0}}
%EndExpansion
a_{\nu}^{+}a_{\overline{\nu}}^{+}a_{\overline{\mu}}a_{\mu}%$$ the nuclear paired BCS Hamiltonian with the constraint on the particle number. Writing this Hamiltonian in the well-known quasiparticle representation $H^{\prime}=E_{BCS}+%
%TCIMACRO{\tsum _{\nu}}%
%BeginExpansion
{\textstyle\sum_{\nu}}
%EndExpansion
E_{\nu}\alpha_{\nu}^{+}\alpha_{\nu}+\text{residual qp interaction}$, neglecting (as usual) the latter term and using Eq. (\[op\]), it is quite easy to establish the following identity$$\left\langle k,\overline{l}\right\vert \left[ H^{\prime},\left(
\dfrac{\partial}{\partial\beta_{i}}\right) _{wave\text{ }func}\right]
\left\vert BCS\right\rangle =-\left\langle k,\overline{l}\right\vert \left(
\dfrac{\partial H^{\prime}}{\partial\beta_{i}}\right) _{wave\text{ }%
func}\left\vert BCS\right\rangle =\left( E_{k,\overline{l}}-E_{BCS}\right)
\left\langle k,\overline{l}\right\vert \left( \dfrac{\partial}{\partial
\beta_{i}}\right) _{wave\text{ }func}\left\vert BCS\right\rangle$$ where the eigenenergies $E_{k,\overline{l}}$ corresponding to the excited states $\left\vert k,\overline{l}\right\rangle $ are given by $E_{k,\overline
{l}}=E_{BCS}+E_{k}+E_{l}$, so that:$$\left\langle k,\overline{l}\right\vert \left( \dfrac{\partial}{\partial
\beta_{i}}\right) _{wave\text{ }func}\left\vert BCS\right\rangle
=-\dfrac{\left\langle k,\overline{l}\right\vert \left( \dfrac{\partial
H^{\prime}}{\partial\beta_{i}}\right) _{wave\text{ }func}\left\vert
BCS\right\rangle }{E_{k,\overline{l}}-E_{BCS}}=-\dfrac{\left\langle
k,\overline{l}\right\vert \left( \dfrac{\partial H^{\prime}}{\partial
\beta_{i}}\right) _{wave\text{ }func}\left\vert BCS\right\rangle }%
{E_{k}+E_{l}}%$$ Due to the fact that the pairing strength $G$ does not depend on the nuclear deformation, it is clear from the expression of $H^{\prime}$ (in the particle representation) that $\partial H^{\prime}/\partial\beta_{i}%
=\partial\left( H_{sp}-\lambda\right) /\partial\beta_{i}$. Therefore $\left\langle k,\overline{l}\right\vert \left( \partial H^{\prime}%
/\partial\beta_{i}\right) _{wave\text{ }func}\left\vert BCS\right\rangle
=\left\langle k,\overline{l}\right\vert \left( \partial H_{sp}/\partial
\beta_{i}\right) \left\vert BCS\right\rangle -\partial\lambda/\partial
\beta_{i}\langle k,\overline{l}\left\vert BCS\right\rangle =\left\langle
k,\overline{l}\right\vert \left( \partial H_{sp}/\partial\beta_{i}\right)
\left\vert BCS\right\rangle $Here we have $\langle k,\overline
{l}\left\vert BCS\right\rangle =0$ because excited states and bcs state are supposed orthogonal.Again using the second quantization formalism $\left( \partial H_{sp}/\partial\beta_{i}\right) =\sum_{\nu\neq\mu
}\left\langle \nu\right\vert \left( \partial H_{sp}/\partial\beta_{i}\right)
\left\vert \mu\right\rangle a_{\nu}^{+}a_{\mu}^{{}}$ and performing then exactly the same transformations as before for $\sum_{\nu\neq\mu}\left\langle
\nu\right\vert \partial/\partial\beta_{i}\left\vert \mu\right\rangle a_{\nu
}^{+}a_{\mu}$ but this time with $\sum_{\nu\neq\mu}\left\langle \nu\right\vert
\partial H_{sp}/\partial\beta_{i}\left\vert \mu\right\rangle a_{\nu}^{+}%
a_{\mu}$ we will obtain in the same manner a new form for Eq. (\[a3\]): $$I_{1}=I_{1}(k,l)=\left\langle k,\overline{l}\right\vert \left( \frac
{\partial}{\partial\beta_{i}}\right) _{wave\text{ }func}\left\vert
BCS\right\rangle =-\frac{\left( u_{k}\upsilon_{l}+u_{l}\upsilon_{k}\right)
}{E_{k}+E_{l}}\left\langle k\right\vert \frac{\partial H_{sp}}{\partial
\beta_{i}}\left\vert l\right\rangle \ \ with\ \ k\neq l \label{aa}%$$
Calculation of the second type of matrix elements\[second type\]
----------------------------------------------------------------
Recalling that the BCS state is given by $$\left\vert BCS\right\rangle =\Pi_{k}\left( u_{k}+\upsilon_{k}a_{k}^{+}a_{\overline{k}}^{+}\right) \left\vert 0\right\rangle ,$$ differentiating this state with respect to the probability amplitudes gives $$\left( \frac{\partial}{\partial\beta_{i}}\right) _{occup.prob}\left\vert BCS\right\rangle =\sum_{\tau}\left( \frac{\partial u_{\tau}}{\partial\beta_{i}}+\frac{\partial\upsilon_{\tau}}{\partial\beta_{i}}a_{\tau}^{+}a_{\overline{\tau}}^{+}\right) \prod_{k\neq\tau}\left( u_{k}+\upsilon_{k}a_{k}^{+}a_{\overline{k}}^{+}\right) \left\vert 0\right\rangle .$$ We use the evident property $$\prod_{k\neq\tau}\left( u_{k}+\upsilon_{k}a_{k}^{+}a_{\overline{k}}^{+}\right) \left\vert 0\right\rangle =\left( u_{\tau}+\upsilon_{\tau}a_{\tau}^{+}a_{\overline{\tau}}^{+}\right) ^{-1}\left\vert BCS\right\rangle .$$ Therefore $$\left( \frac{\partial}{\partial\beta_{i}}\right) _{occup.prob}\left\vert BCS\right\rangle =\sum_{\tau}\left[ \left( \frac{\partial u_{\tau}}{\partial\beta_{i}}+\frac{\partial\upsilon_{\tau}}{\partial\beta_{i}}a_{\tau}^{+}a_{\overline{\tau}}^{+}\right) \left( u_{\tau}+\upsilon_{\tau}a_{\tau}^{+}a_{\overline{\tau}}^{+}\right) ^{-1}\right] \left\vert BCS\right\rangle .$$ Expanding the inverse operator in powers of $a_{\tau}^{+}a_{\overline{\tau}}^{+}$: $$\left( \frac{\partial}{\partial\beta_{i}}\right) _{occup.prob}\left\vert BCS\right\rangle =\sum_{\tau}\left[ \left( \frac{\partial u_{\tau}}{\partial\beta_{i}}+\frac{\partial\upsilon_{\tau}}{\partial\beta_{i}}a_{\tau}^{+}a_{\overline{\tau}}^{+}\right) u_{\tau}^{-1}\left( 1-\upsilon_{\tau}u_{\tau}^{-1}a_{\tau}^{+}a_{\overline{\tau}}^{+}+\left( \upsilon_{\tau}u_{\tau}^{-1}a_{\tau}^{+}a_{\overline{\tau}}^{+}\right) ^{2}+\dots\right) \right] \left\vert BCS\right\rangle .$$ Using the inverse of the Bogoliubov–Valatin transformation, $a_{\tau}^{+}=u_{\tau}\alpha_{\tau}^{+}+\upsilon_{\tau}\alpha_{\overline{\tau}}$, we find for the pair-creation operator $$a_{\tau}^{+}a_{\overline{\tau}}^{+}=\left( u_{\tau}\alpha_{\tau}^{+}+\upsilon_{\tau}\alpha_{\overline{\tau}}\right) \left( u_{\overline{\tau}}\alpha_{\overline{\tau}}^{+}+\upsilon_{\overline{\tau}}\alpha_{\tau}\right) =u_{\tau}u_{\overline{\tau}}\alpha_{\tau}^{+}\alpha_{\overline{\tau}}^{+}+u_{\tau}\upsilon_{\overline{\tau}}\alpha_{\tau}^{+}\alpha_{\tau}+\upsilon_{\tau}u_{\overline{\tau}}\alpha_{\overline{\tau}}\alpha_{\overline{\tau}}^{+}+\upsilon_{\tau}\upsilon_{\overline{\tau}}\alpha_{\overline{\tau}}\alpha_{\tau}.$$ Substituting this into the expression above and retaining only terms that create two quasiparticles, with at most products of two probability amplitudes, we get $$\left( \frac{\partial}{\partial\beta_{i}}\right) _{occup.prob}\left\vert BCS\right\rangle =\sum_{\tau}\left[ \frac{\partial u_{\tau}}{\partial\beta_{i}}u_{\tau}^{-1}\left( -\upsilon_{\tau}u_{\tau}^{-1}u_{\tau}u_{\overline{\tau}}\alpha_{\tau}^{+}\alpha_{\overline{\tau}}^{+}\right) +u_{\tau}^{-1}\frac{\partial\upsilon_{\tau}}{\partial\beta_{i}}u_{\tau}u_{\overline{\tau}}\alpha_{\tau}^{+}\alpha_{\overline{\tau}}^{+}\right] \left\vert BCS\right\rangle .$$ Noting that $u_{\overline{\tau}}=u_{\tau}$ and $\upsilon_{\overline{\tau}}=-\upsilon_{\tau}$, we find $$\left( \frac{\partial}{\partial\beta_{i}}\right) _{occup.prob}\left\vert BCS\right\rangle =\sum_{\tau}\left[ u_{\tau}\frac{\partial\upsilon_{\tau}}{\partial\beta_{i}}-\upsilon_{\tau}\frac{\partial u_{\tau}}{\partial\beta_{i}}\right] \alpha_{\tau}^{+}\alpha_{\overline{\tau}}^{+}\left\vert BCS\right\rangle .$$ The excited states are necessarily of the form $$\left\vert M\right\rangle =\alpha_{m}^{+}\alpha_{\overline{m}}^{+}\left\vert BCS\right\rangle =\left\vert m,\overline{m}\right\rangle ,$$ so we have to calculate $$I_{2}=\left\langle BCS\right\vert \alpha_{\overline{m}}\alpha_{m}\left( u_{m}\frac{\partial\upsilon_{m}}{\partial\beta_{i}}-\upsilon_{m}\frac{\partial u_{m}}{\partial\beta_{i}}\right) \alpha_{m}^{+}\alpha_{\overline{m}}^{+}\left\vert BCS\right\rangle .$$ Owing to the normalization of the excited states, this reduces to $$I_{2}=u_{m}\frac{\partial\upsilon_{m}}{\partial\beta_{i}}-\upsilon_{m}\frac{\partial u_{m}}{\partial\beta_{i}}.$$ Knowing that the normalization condition of the probability amplitudes is $u_{m}^{2}+\upsilon_{m}^{2}=1$, we find by differentiation $$2u_{m}\frac{\partial u_{m}}{\partial\beta_{i}}+2\upsilon_{m}\frac{\partial\upsilon_{m}}{\partial\beta_{i}}=0.$$ Combining these two relations, we obtain $$I_{2}=-\frac{1}{\upsilon_{m}}\frac{\partial u_{m}}{\partial\beta_{i}}.$$ The second term therefore reads $$I_{2}=\left\langle m,\overline{m}\right\vert \left( \frac{\partial}{\partial\beta_{i}}\right) _{occup.prob}\left\vert BCS\right\rangle =-\frac{1}{\upsilon_{m}}\frac{\partial u_{m}}{\partial\beta_{i}},$$ which can be cast as follows: $$I_{2}=I_{2}(k,l)=\left\langle k,\overline{l}\right\vert \left( \frac{\partial}{\partial\beta_{i}}\right) _{occup.prob}\left\vert BCS\right\rangle =-\frac{1}{\upsilon_{k}}\frac{\partial u_{k}}{\partial\beta_{i}}\ \ with\ \ k=l \label{a4}$$

The two matrix elements $I_{1}$, given by Eq. (\[aa\]), and $I_{2}$, given by Eq. (\[a4\]), correspond respectively to the non-diagonal ($k\neq l$) and diagonal ($k=l$) parts of the total contribution. Reassembling the two parts in a single formula, we get $$I_{1}+I_{2}=\left\langle k,\overline{l}\right\vert \frac{\partial}{\partial\beta_{i}}\left\vert BCS\right\rangle =-\frac{\left( u_{k}\upsilon_{l}+u_{l}\upsilon_{k}\right) }{E_{k}+E_{l}}\left\langle k\right\vert \frac{\partial H_{sp}}{\partial\beta_{i}}\left\vert l\right\rangle \left( 1-\delta_{k,l}\right) -\frac{1}{\upsilon_{k}}\frac{\partial u_{k}}{\partial\beta_{i}}\delta_{kl}.$$ Replacing this quantity in Eq. (\[bcsformula\]) of section \[section hyp\], and noticing that the crossed terms ($I_{1}I_{2}$ and $I_{2}I_{1}$) cancel in the product, we find $$D_{ij}\left\{ \beta_{1},\dots,\beta_{n}\right\} =2\hbar^{2}\sum_{k,l}\frac{\left( u_{k}\upsilon_{l}+u_{l}\upsilon_{k}\right) ^{2}}{\left( E_{k}+E_{l}\right) ^{3}}\left\langle l\right\vert \frac{\partial H_{sp}}{\partial\beta_{i}}\left\vert k\right\rangle \left\langle k\right\vert \frac{\partial H_{sp}}{\partial\beta_{j}}\left\vert l\right\rangle \left( 1-\delta_{k,l}\right) +2\hbar^{2}\sum_{k}\frac{1}{2E_{k}}\frac{1}{\upsilon_{k}^{2}}\frac{\partial u_{k}}{\partial\beta_{i}}\frac{\partial u_{k}}{\partial\beta_{j}} \label{a5}$$ The sum $$\sum_{k}\frac{1}{2E_{k}}\frac{1}{\upsilon_{k}^{2}}\frac{\partial u_{k}}{\partial\beta_{i}}\frac{\partial u_{k}}{\partial\beta_{j}} \label{prod}$$ appearing in the second part of the r.h.s. of formula (\[a5\]) can be further clarified. Recalling that the probability amplitudes are $$u_{k}=\frac{1}{\sqrt{2}}\left( 1+\frac{\varepsilon_{k}}{\sqrt{\varepsilon_{k}^{2}+\Delta^{2}}}\right) ^{1/2},\qquad\upsilon_{k}=\frac{1}{\sqrt{2}}\left( 1-\frac{\varepsilon_{k}}{\sqrt{\varepsilon_{k}^{2}+\Delta^{2}}}\right) ^{1/2},$$ where $\varepsilon_{k}=\epsilon_{k}-\lambda$ is the single-particle energy $\epsilon_{k}$ measured with respect to the Fermi level $\lambda$. Since the deformation dependence of $u_{k}$ enters through $\epsilon_{k}$, $\Delta$ and $\lambda$, a simple differentiation of $u_{k}$ with respect to $\beta_{i}$ leads to $$\frac{\partial u_{k}}{\partial\beta_{i}}=\frac{1}{2\sqrt{2}}\left( 1+\frac{\varepsilon_{k}}{\sqrt{\varepsilon_{k}^{2}+\Delta^{2}}}\right) ^{-1/2}\left[ \frac{\partial\varepsilon_{k}}{\partial\beta_{i}}\left( \varepsilon_{k}^{2}+\Delta^{2}\right) ^{-1/2}-\varepsilon_{k}\left( \varepsilon_{k}^{2}+\Delta^{2}\right) ^{-3/2}\left( \varepsilon_{k}\frac{\partial\varepsilon_{k}}{\partial\beta_{i}}+\Delta\frac{\partial\Delta}{\partial\beta_{i}}\right) \right] .$$ Multiplying by $1/\upsilon_{k}$ and simplifying, we get $$\frac{1}{\upsilon_{k}}\frac{\partial u_{k}}{\partial\beta_{i}}=\frac{1}{2\left( \varepsilon_{k}^{2}+\Delta^{2}\right) }\left\{ \Delta\frac{\partial\varepsilon_{k}}{\partial\beta_{i}}-\varepsilon_{k}\frac{\partial\Delta}{\partial\beta_{i}}\right\} .$$ Using $\varepsilon_{k}=\epsilon_{k}-\lambda$, we obtain explicitly $$\frac{1}{\upsilon_{k}}\frac{\partial u_{k}}{\partial\beta_{i}}=\frac{1}{2\left( \varepsilon_{k}^{2}+\Delta^{2}\right) }\left\{ \Delta\frac{\partial\epsilon_{k}}{\partial\beta_{i}}-\Delta\frac{\partial\lambda}{\partial\beta_{i}}-\left( \epsilon_{k}-\lambda\right) \frac{\partial\Delta}{\partial\beta_{i}}\right\} .$$ Moreover, noting that $\partial\epsilon_{k}/\partial\beta_{i}=\left\langle k\right\vert \partial H_{sp}/\partial\beta_{i}\left\vert k\right\rangle$, we find $$\frac{1}{\upsilon_{k}}\frac{\partial u_{k}}{\partial\beta_{i}}=\frac{\Delta}{2\left( \varepsilon_{k}^{2}+\Delta^{2}\right) }\left\{ \left\langle k\right\vert \frac{\partial H_{sp}}{\partial\beta_{i}}\left\vert k\right\rangle -\frac{\partial\lambda}{\partial\beta_{i}}-\frac{\left( \epsilon_{k}-\lambda\right) }{\Delta}\frac{\partial\Delta}{\partial\beta_{i}}\right\} .$$ The quasiparticle energy is $E_{k}=\left( \varepsilon_{k}^{2}+\Delta^{2}\right) ^{1/2}$, so that $$\frac{1}{\upsilon_{k}}\frac{\partial u_{k}}{\partial\beta_{i}}=\frac{\Delta}{2E_{k}^{2}}\left\{ \left\langle k\right\vert \frac{\partial H_{sp}}{\partial\beta_{i}}\left\vert k\right\rangle -\frac{\partial\lambda}{\partial\beta_{i}}-\frac{\left( \epsilon_{k}-\lambda\right) }{\Delta}\frac{\partial\Delta}{\partial\beta_{i}}\right\} =-\frac{\Delta}{2E_{k}^{2}}R_{i}^{k},$$ where we have put $$R_{i}^{k}=-\left\langle k\right\vert \frac{\partial H_{sp}}{\partial\beta_{i}}\left\vert k\right\rangle +\frac{\partial\lambda}{\partial\beta_{i}}+\frac{\left( \epsilon_{k}-\lambda\right) }{\Delta}\frac{\partial\Delta}{\partial\beta_{i}} \label{rik}$$ Using the result $\frac{1}{\upsilon_{k}}\frac{\partial u_{k}}{\partial\beta_{i}}=-\frac{\Delta}{2E_{k}^{2}}R_{i}^{k}$, the product of similar terms in Eq. (\[prod\]) finally gives $$\sum_{k}\frac{1}{2E_{k}}\frac{1}{\upsilon_{k}^{2}}\frac{\partial u_{k}}{\partial\beta_{i}}\frac{\partial u_{k}}{\partial\beta_{j}}=\sum_{k}\frac{1}{2E_{k}}\left( -\Delta\frac{R_{i}^{k}}{2E_{k}^{2}}\right) \left( -\Delta\frac{R_{j}^{k}}{2E_{k}^{2}}\right) =\sum_{k}\frac{\Delta^{2}}{8E_{k}^{5}}R_{i}^{k}R_{j}^{k}.$$ The cranking formula for the mass parameters finally becomes $$D_{ij}\left\{ \beta_{1},\dots,\beta_{n}\right\} =2\hbar^{2}\sum_{k,l}\frac{\left( u_{k}\upsilon_{l}+u_{l}\upsilon_{k}\right) ^{2}}{\left( E_{k}+E_{l}\right) ^{3}}\left\langle l\right\vert \frac{\partial H_{sp}}{\partial\beta_{i}}\left\vert k\right\rangle \left\langle k\right\vert \frac{\partial H_{sp}}{\partial\beta_{j}}\left\vert l\right\rangle \left( 1-\delta_{k,l}\right) +2\hbar^{2}\sum_{k}\frac{\Delta^{2}}{8E_{k}^{5}}R_{i}^{k}R_{j}^{k} \label{a8}$$
[99]{}
A. Arima and F. Iachello, Phys. Rev. Lett. 35, 1069–1072 (1975)
M. Brack, J. Damgaard, A. S. Jensen, H. C. Pauli, V. M. Strutinsky and C. Y. Wong, Rev Mod. Phys. 44 (1972) 320
L. Prochniak, K. Zajac, K. Pomorski, S. G. Rohozinski, J. Srebrny, Nucl. Phys. A648, 181 (1999)
D. R. Inglis, Phys. Rev. 96 (1954) 1059, 97 (1955) 701
A. K. Kerman, Ann. Phys. (New York),12(1961)300, 222, 523
M. Baranger, M. Vénéroni, Ann. Phys. (NY) 114, 123 (1978).
M.J. Giannoni, P. Quentin, Phys. Rev. C21, 2060 (1980).
M.J. Giannoni, P. Quentin, Phys. Rev. C21, 2076 (1980).
J. Decharge and D. Gogny, Phys. Rev., C21 (1980) 1568
M. Girod and B. Grammaticos, Phys. Rev., C27 (1983) 2317
B. Giraud and B. Grammaticos, Nucl. Phys. A233, 373 (1974)
M. Mirea and R. C. Bobulescu, J. Phys. G: Nucl. Part. Phys. 37 (2010) 055106
N. Hinohara, T. Nakatsukasa, M. Matsuo and K. Matsuyanagi, Prog. Theor. Phys. Vol. 115 No. 3 (2006) pp. 567-599
J. J. Griffin, Nucl. Phys. A170 (1971) 395
T. Ledergerber and H. C. Pauli, Nucl. Phys. A207 (1973) 1–32
P.-G. Reinhard, Nucl. Phys. A281 (1977) 221–239
V. Schneider, J. Maruhn, and W. Greiner, Z. Phys. A 323 (1986) 111
D. N. Poenaru, R. A. Gherghescu, W. Greiner, Rom. Journ. Phys., Vol. 50, Nos. 1–2, P. 187–197, Bucharest, 2005
B. Mohammed-Azizi, and D.E. Medjadi, Computer physics Comm. 156(2004) 241-282.
S. T. Belyaev, Mat. Fys. Medd. Dan. Vid. Selsk. 31 (1959) No. 11
D. Bes, Mat. Fys. Medd. Dan. Vid. Selsk. 33 (1961) no. 2
E. Kh. Yuldashbaeva, J. Libert, P. Quentin, M. Girod , B. K. Poluanov, Ukr. J. Phys. 2001. V. 46, N 1
---
abstract: 'The high velocity gradient observed in the compact cloud CO-0.40-0.22, at a projected distance of 60 pc from the centre of the Milky Way, has led its discoverers to identify the nearby mm continuum emitter, CO-0.40-0.22\*, as an intermediate-mass black hole (IMBH) candidate. We describe the interaction between CO-0.40-0.22 and the IMBH by means of a simple analytical model and of hydrodynamical simulations. Through this calculation, we obtain a lower limit to the mass of CO-0.40-0.22\* of a few $10^4 \; M_{\odot}$. This result tends to exclude the formation of such a massive black hole in the proximity of the Galactic Centre. On the other hand, CO-0.40-0.22\* might have been brought to such distances on cosmological timescales, if it was born in a dark matter halo or globular cluster around the Milky Way.'
author:
- |
A. Ballone$^{1}$, M. Mapelli$^{1,2,3}$, M. Pasquato$^{1}$\
$^{1}$INAF, Osservatorio Astronomico di Padova, vicolo dell’Osservatorio 5, I-35122 Padova, Italy\
$^{2}$Institute für Astro- und Teilchen Physik, Universität Innsbruck, Technikerstrasse 25/8, A-6020 Innsbruck, Austria\
$^{3}$INFN, Milano Bicocca, Piazza della Scienza 3, I-20126 Milano, Italy
bibliography:
- 'mylit.bib'
date: 'Accepted XXX. Received YYY; in original form ZZZ'
title: 'Weighing the IMBH candidate CO-0.40-0.22\* in the Galactic Centre'
---
\[firstpage\]
black hole physics – Galaxy: centre – ISM: clouds
Introduction {#intro}
============
Intermediate-mass black holes (IMBHs), with masses $M_{BH}=10^2\div 10^5 \; M_{\odot}$, represent a “hollow” in the mass distribution of detected black holes. Yet, they might even be the missing link between stellar-mass and supermassive black holes.
The LIGO and Virgo interferometers have indeed detected the formation of dark objects at the very low end of the IMBH mass range, through mergers of binary stellar-mass black holes [@Abbott16a; @Abbott16b; @Abbott17a; @Abbott17b; @Abbott17c]. IMBHs might form by runaway merging of massive stars in the very dense centre of young massive star clusters [e.g., @Colgate67; @Ebisuzaki01; @PortegiesZwart04; @Freitag06; @Giersz15; @Mapelli16], by runaway tidal capture of stars by stellar-mass black holes [@Miller12; @Stone17], through repeated mergers of stellar black holes in globular clusters [@Miller02; @Giersz15], or by accretion of stars and compact objects in the disks of active galactic nuclei [@McKernan12]. Other possible origins involve the direct collapse of high-mass Population III stars [e.g., @Fryer01; @Madau01; @Schneider02; @Spera17] or the collapse of pristine, metal-free gas in high-redshift dark matter halos [e.g., @Bromm03; @Begelman06; @Lodato07; @Shang10; @Agarwal12].
Several possible additional hints of the existence of IMBHs come from the study of kinematics of the central stars and millisecond pulsars of globular clusters [e.g., @Gebhardt02; @Gebhardt05; @Noyola08; @Ibata09; @Noyola10; @Lutzgendorf11; @Lutzgendorf13; @FeldmeierKrause13; @Lutzgendorf15; @Askar17; @Kiziltan17; @Perera17], but these are often disputed [e.g., @Baumgardt03; @Anderson10; @Lanzoni13; @Zocchi17; @Gieles18].
IMBHs are also invoked to explain a fraction of the ultraluminous X-ray sources [ULXs; see, e.g., @Miller03; @Casella08; @Godet09; @Sutton12; @Mezcua15]. Among these, probably the best candidate is HLX-1 in the S0 galaxy ESO 243-49 [@Farrell09], with black hole mass greater than 500 $M_{\odot}$. Nonetheless, also in the case of ULXs, there is room for alternative theoretical interpretations [e.g., @Goad06; @Feng07; @Gladstone09; @Zampieri09; @Mapelli09; @Mapelli10; @Feng11; @Liu13; @King14].
Detections of IMBHs at the upper end of their mass distribution have also been claimed in dwarf galaxies [see @Kormendy13; @Reines15; @Mezcua17 and references therein].
@Tanaka14, @Oka16 and @Oka17 have reported the detection, by means of molecular emission lines, of a high velocity compact cloud, CO-0.40-0.22, with an internal velocity gradient of several tens of $\mathrm{km \; s^{-1}}$. In particular, through high resolution ALMA observations, @Oka17 set its size, $\lesssim 1$ pc, and its internal line-of-sight velocity, possibly ranging from -80 to -40 $\mathrm{km \; s^{-1}}$ with respect to us. In addition, these authors discovered an unresolved object, CO-0.40-0.22\*, very close to CO-0.40-0.22 and emitting in continuum at 231 and 266 GHz. This has been interpreted as an IMBH with an extremely low accretion rate [but see @Ravi18 for an alternative interpretation]. @Oka16 and @Oka17 also showed, by means of pure gravity simulations, that the high internal velocity dispersion of the observed cloud can be qualitatively explained by the interaction with a $10^5 \; M_{\odot}$ IMBH. Other physical explanations are possible [@Tanaka14; @Ravi18; @Yalinewich17; @Tanaka18], and @Tanaka18 even disputed the correctness of the reduction of the ALMA data by @Oka17 [even though a velocity gradient of 20-40 $\mathrm{km\; s^{-1}}$ has been confirmed; see Fig. 11 in @Tanaka18].
Nonetheless, even if the result is still highly debated, the possibility that CO-0.40-0.22\* is associated with an IMBH makes this object extremely intriguing, because it could be one of the closest IMBHs to us and because of its vicinity to SgrA\*. Thus, the aim of this paper is to gain insight into the possible mass of CO-0.40-0.22\*, in the IMBH scenario. In particular, we built a simple analytical model to describe the interaction between the observed cloud and this putative IMBH, which we also tested through hydrodynamical simulations. This allows us to put, for the first time, a lower limit on the mass of this object, which might help in understanding its possible nature and origin.
Analytical model {#anmod}
================
As already mentioned, @Oka16 and @Oka17 speculate that the high velocity gradient observed in the cloud CO-0.40-0.22 is generated by the interaction between this cloud and a putative IMBH, located at the position of CO-0.40-0.22\*, a nearby mm [and possibly IR, @Ravi18] continuum emitter. In @Oka17, these authors provided a single N-body simulation to try to fit their observations, with an IMBH mass of $10^5 \; M_{\odot}$. In this section we present a simple analytical calculation to model this interaction; its main purpose is to identify a lower limit for the mass of CO-0.40-0.22\*.
In our simple model, we assume that the cloud is radially infalling towards the IMBH. Fig. \[sketch\] shows a sketch of this configuration. Such an assumption is based on a few considerations:
1. The observed position-velocity diagrams show evidence of an “oblique” occupation of the cloud in this phase space. In this case, **the large internal velocity gradient is to be attributed to a difference in bulk velocity between different parts of the cloud**, rather than to a homogeneous internal velocity dispersion.
2. **The cloud and the putative IMBH CO-0.40-0.22\* are clearly separated** from each other, both in the ALMA emission map and in the corresponding position-velocity diagrams. Hence, the cloud is not currently experiencing its pericentre passage around the IMBH, and the velocity gradient along its elongation can only be explained by tidal stretching towards the IMBH.
3. **The highest tidal velocity gradient**, for a fixed IMBH mass, **is obtained in the case of a radial cloud trajectory**. In other words, a radial trajectory is the one providing the lowest IMBH mass needed to produce the observed velocity gradient.
The “orbital” velocity of a radially infalling object is
$$\label{vff}
v_{orb}=\sqrt{\frac{2GM_{BH}}{d_{BH}}+2E_{orb}},$$
where $G$ is the gravitational constant, $M_{BH}$ is the mass of the IMBH, $d_{BH}$ is the distance from the IMBH and $E_{orb}$ is the specific orbital energy of the cloud.
$E_{orb}$ can be positive, negative or null, leading to velocities along the orbit that are larger than, smaller than, or equal to the escape velocity. At the observed distance from CO-0.40-0.22\*, it is fair to assume that the IMBH is mainly responsible for the velocity difference between different parts of the cloud, and we restrict ourselves to the case $E_{orb}=0$. Nonetheless, in section \[ingred\] and in Appendix \[eorb\] we will discuss the impact of a non-null $E_{orb}$ on our estimate of the mass of CO-0.40-0.22\*.
$\Delta v_{LOS}=20 \;km s^{-1}$ $\Delta v_{LOS}=30 \;km s^{-1}$ $\Delta v_{LOS}=40 \;km s^{-1}$
----------------------- --------------------------------- --------------------------------- ---------------------------------
$(D,S)=(1,15)$ arcsec $9.0\times 10^3 M_{\odot}$ $2.0 \times 10^4 M_{\odot}$ $3.6\times 10^4 M_{\odot}$
$(D,S)=(2,14)$ arcsec $2.6\times 10^4 M_{\odot}$ $5.7 \times 10^4 M_{\odot}$ $1.0 \times 10^5 M_{\odot}$
$(D,S)=(3,13)$ arcsec $5.5\times 10^4 M_{\odot}$ $1.2\times 10^5 M_{\odot}$ $2.2\times 10^5 M_{\odot}$
So, in this case, the observed velocity gradient between the head and the tail of the cloud is
$$\label{dvlos}
\begin{split}
\Delta v_{LOS} & = \sqrt{2GM_{BH}}\left[\frac{1}{\sqrt{d_{BH,h}}}-\frac{1}{\sqrt{d_{BH,t}}}\right]\cos\theta \\
& = \sqrt{2G M_{BH} \sin\theta}\cos\theta\left[\frac{1}{\sqrt{D}}-\frac{1}{\sqrt{S+D}}\right],
\end{split}$$
where $D$ is the observed (projected) distance between the closest point of the cloud and the IMBH, $S$ the observed (projected) size of the cloud, $\theta$ is the angle between the cloud trajectory and the line of sight (LOS), while $d_{BH,h}$ and $d_{BH,t}$ are the intrinsic distance between the IMBH and the head and the tail of the cloud, respectively (see Fig. \[sketch\]).
From Eq. \[dvlos\] we can directly infer that
$$\label{lowlim}
M_{BH}>\frac{3\sqrt{3}}{4}\frac{(\Delta v_{LOS})^2}{G}\frac{D(S+D)}{(\sqrt{S+D}-\sqrt{D})^2},$$
with $2/(3\sqrt{3})=\max(\sin\theta\cos^2\theta)$. This corresponds to $\theta \simeq 35^{\circ}$.
The values of $S$, $D$ and $\Delta v_{LOS}$, from the observed position-velocity diagrams in @Oka17, come with an uncertainty given by the instrumental spread function and by the interpretation of the actual limits of the cloud. We chose optimistic, pessimistic and intermediate contours for the cloud; thus, we calculated the lower limit for $M_{BH}$ for $(D,S)$=$(1,15)$,$(2,14)$,$(3,13)$ arcsec [^1] and $\Delta v_{LOS}$=20,30,40 $\mathrm{km\; s^{-1}}$. The results are summarized in Table \[mbh\]. We stress that the value of $M_{BH}$ obtained with Eq. \[lowlim\] is already the smallest possible mass for the putative IMBH CO-0.40-0.22\*, due to our extreme assumption of a radial orbit for the gas cloud CO-0.40-0.22.
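Equation \[lowlim\] is straightforward to evaluate numerically. The short sketch below is our own check, not code from @Oka17: the arcsec-to-pc conversion assumes a Galactic Centre distance of 8.3 kpc (our assumption), so it reproduces the entries of Table \[mbh\] only to within $\sim 10\:\%$. It also recovers the optimal angle $\theta\simeq 35^{\circ}$ numerically.

```python
import numpy as np

G = 4.30091e-3                       # G in pc (km/s)^2 / Msun
PC_PER_ARCSEC = 8.3e3 / 206265.0     # assumed Galactic Centre distance: 8.3 kpc

def mbh_lower_limit(D_arcsec, S_arcsec, dv_los):
    """Lower limit on M_BH [Msun] from Eq. (lowlim)."""
    D = D_arcsec * PC_PER_ARCSEC
    SpD = (S_arcsec + D_arcsec) * PC_PER_ARCSEC
    geom = D * SpD / (np.sqrt(SpD) - np.sqrt(D))**2
    return 3.0 * np.sqrt(3.0) / 4.0 * dv_los**2 / G * geom

# angle maximising sin(theta) cos^2(theta): tan^2(theta) = 1/2, ~35.26 deg
theta = np.linspace(1e-3, np.pi / 2.0 - 1e-3, 100_000)
theta_opt = np.degrees(theta[np.argmax(np.sin(theta) * np.cos(theta)**2)])

M_mid = mbh_lower_limit(2.0, 14.0, 30.0)   # central case of Table [mbh]
M_low = mbh_lower_limit(1.0, 15.0, 20.0)   # most optimistic case
```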
Impact of other physical ingredients {#ingred}
------------------------------------
As already mentioned in section \[anmod\], we restricted our calculation to $E_{orb}=0$, which is the easiest case and a relatively good approximation at those distances from the IMBH.
In the context of a radial orbit, $E_{orb}>0$ corresponds to a non-null velocity at infinity. This is certainly possible: e.g., stars can have velocity dispersions of up to 100 $\mathrm{km \;s^{-1}}$ at those distances from SgrA\* [e.g., @Fritz16]. However, this case would further increase the IMBH mass needed to perturb the cloud (see also Appendix \[eorb\]); hence it does not change our results, which are lower limits.
On the other hand, $E_{orb}<0$ is equivalent to a cloud starting its plunge towards the IMBH with zero velocity at a finite distance. This might happen if the cloud was previously strongly scattered by some other close-by massive object, being left with zero velocity, relative to the IMBH. However, in order to really affect the lower limit on the mass of the IMBH, this should happen at distances comparable with the current position of the cloud. Even assuming that the cloud started its plunge at the current position of its tail, our result on the limit for the mass of CO-0.40-0.22\* would be reduced by around a factor of 2 (see Appendix \[eorb\]).
In the present calculation we neglect the possible effect of the self-gravity of the cloud. Such an assumption is based on the fact that self-gravity would oppose the tidal stretching by the IMBH, hence requiring an even higher mass for CO-0.40-0.22\*. Nonetheless, the cloud size and its distance from the putative IMBH are of the same order. In this case, from a simple “Roche limit” argument, the self-gravity of the cloud would become important only if the cloud mass were comparable to the mass of the IMBH, which is not realistic.
We also neglect the impact on the cloud of the tidal force of the central supermassive black hole (SMBH) of the Milky Way [$M_{SMBH}\simeq 4 \times 10^6 \; M_{\odot}$; @Boehle16; @Gillessen17] and of the Galactic Centre stars [$M_*(<60\; \mathrm{pc})\simeq 10^8\; M_{\odot}$; @Launhardt02; @Fritz16; @FeldmeierKrause17; @GallegoCano18]. In fact, applying Eq. \[dvlos\], the velocity gradient produced on CO-0.40-0.22 by the SMBH and the stellar component at a distance of 60 pc from the Galactic Centre would approximately be 0.5 and 3 $\mathrm{km \;s^{-1}}$, respectively. Both values are significantly smaller than the observed velocity gradient.
Thermodynamics, turbulence and magnetic fields can be important in typical molecular clouds. However, to matter in this case, the thermal, turbulent and magnetic pressures in the cloud would have to be comparable to the tidal “pressure” of the IMBH.
In particular, concerning thermodynamics, we can exclude that the observed velocity gradient is produced by a strong unbalance between the internal cloud pressure and the external pressure of the surrounding interstellar medium. In fact, such a high velocity gradient would mean either an “explosion” or an “implosion” of the cloud and requires another physical process [such as those invoked by @Tanaka14; @Tanaka18; @Ravi18; @Yalinewich17] to explain such an out-of-equilibrium configuration. However, we can easily test whether internal pressure gradients induced by radiation and hydrodynamical cooling/heating can become as important as tides. Hence, we compare the acceleration driven by a pressure difference $\Delta P$ over the cloud size $S/\sin \theta$,
$$a_{th}=\frac{1}{\rho_{cl}}\frac{\Delta P}{S}\sin\theta$$
to the tidal acceleration[^2]
$$a_{tid}=\frac{S}{\sin\theta} \frac{G M_{BH}}{d^3_{BH}}.$$
In order for these to be comparable,
$$\frac{\Delta P}{P} = \left(\frac{S}{\sin\theta}\right)^2\frac{ GM_{BH}}{d^3_{BH}}\frac{\mu m_H}{k_B T_{cl}},$$
where $\mu$ is the mean molecular weight, $m_H$ is the hydrogen mass and $k_B$ is the Boltzmann constant. For the case $(D,S)$ = (2,14) arcsec and $\Delta v=30\; \mathrm{km \; s^{-1}}$, assuming $d_{BH}=(S+D)/(2\sin\theta)$, $\theta=35^{\circ}$, $\mu=2.46$ and $T_{cl}=60 \;K$, we get $\Delta P/P\simeq 5 \times 10^3$. This ratio means that, for thermodynamical evolution to matter, the cloud should either get extremely compressed, which does not happen for pure tidal evolution, or get heated to temperatures above $10^4$ K, implying a transition to an atomic or even ionized state. The gas would then not shine in molecular lines at all.
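The ratio $\Delta P/P$ quoted above can be reproduced in a few lines. The sketch below is our own check; the angular-to-physical conversion assumes a Galactic Centre distance of 8.3 kpc, and recovers a value of several $10^{3}$, consistent with the estimate in the text.

```python
import numpy as np

G = 4.30091e-3                             # pc (km/s)^2 / Msun
K_B, M_H = 1.380649e-23, 1.6726e-27        # SI units
ARCSEC = 8.3e3 / 206265.0                  # pc per arcsec at 8.3 kpc (assumed)

S, D = 14.0 * ARCSEC, 2.0 * ARCSEC         # the (D, S) = (2, 14) arcsec case
theta, M_BH, mu, T_cl = np.radians(35.0), 5.7e4, 2.46, 60.0
d_BH = (S + D) / (2.0 * np.sin(theta))     # assumed cloud-IMBH distance

tidal = (S / np.sin(theta))**2 * G * M_BH / d_BH**3   # (km/s)^2
thermal = K_B * T_cl / (mu * M_H) / 1.0e6             # k_B T/(mu m_H) in (km/s)^2
dP_over_P = tidal / thermal                           # of order 5e3
```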
We can also try to estimate whether we expect turbulence to have an impact on the cloud comparable to that of the tidal field. In fact, a cloud of 40 $M_{\odot}$ in virial equilibrium requires a turbulent $\sigma_v\approx\sqrt{GM_{cl}/R_{cl}}\approx\sqrt{2GM_{cl}\sin\theta/S} \approx 0.5\; \mathrm{km \; s^{-1}}$ for $S=14$ arcsec. In our model, the observed velocity gradient (20-40 $\mathrm{km \; s^{-1}}$) is produced by the tidal stretching of the IMBH. In order for turbulence to compete with the tidal stretching, the cloud should be in an extremely supervirial configuration, i.e., we would require, again, some other physical phenomenon to explain such an out-of-equilibrium state.
To test the impact of magnetic fields, we can use the magnetic stability criterion derived by @Mouschovias76. In this case, a uniform magnetic field B is able to support the cloud against gravitational collapse if
$$\label{magcoll}
B\geq\frac{M_{cl}}{73\; M_{\odot}} \left(\frac{R_{cl}}{0.1 \;\mathrm{pc}}\right)^{-2} \mathrm{mG}$$
We also mentioned before that the Roche mass for the cloud should be of the order of the mass of the IMBH. Hence, if we use $M_{cl}=M_{BH}$ in Eq. \[magcoll\], we can also get an estimate of the magnitude of the magnetic field needed to have an effect comparable to that of the tidal force. If we approximate again $R_{cl}$ in Eq. \[magcoll\] with $S/(2\sin\theta)$ and consider the case with $(D,S)$ = (2,14) arcsec and $\Delta v=30\; \mathrm{km \; s^{-1}}$, we get $B\gtrsim 40$ mG, which is much higher than the typical magnetic field in molecular clouds, even in the Galactic Centre [where it is estimated to be at most a few mG; e.g., @Ferriere09]. Furthermore, given the estimated cloud mass, $M_{cl}=40 M_{\odot}$, we rather expect a magnetic field of the order of 30 $\mu$G, if the cloud were in a critical state.
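The same estimate can be checked numerically; again the angular-to-physical conversion below assumes a Galactic Centre distance of 8.3 kpc (our assumption), so the numbers match the text only at the tens-of-mG and tens-of-$\mu$G level.

```python
import numpy as np

def critical_b_mG(m_cloud_msun, r_cloud_pc):
    """Minimum uniform field [mG] able to support the cloud, Eq. (magcoll)."""
    return m_cloud_msun / 73.0 * (r_cloud_pc / 0.1)**-2

ARCSEC = 8.3e3 / 206265.0                                 # pc per arcsec (8.3 kpc)
R_cl = 14.0 * ARCSEC / (2.0 * np.sin(np.radians(35.0)))   # R_cl = S / (2 sin theta)

B_roche = critical_b_mG(5.7e4, R_cl)        # M_cl ~ M_BH ("Roche mass") case, mG
B_cloud = critical_b_mG(40.0, R_cl) * 1e3   # actual cloud mass, in micro-Gauss
```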
Simulations
===========
Initial conditions and methods {#ic}
------------------------------
In order to test our analytical calculation, we ran hydrodynamical simulations with the Eulerian Adaptive Mesh Refinement (AMR) code RAMSES [@Teyssier02]. We adopted Cartesian coordinates and chose a cubic box with $x,y,z=[-5.5:5.5]$ pc.
The IMBH is initially put at $(x,y,z)=0$ as a sink particle, with Bondi-Hoyle accretion [@Bleuler14]. Even though no significant motion of the IMBH is expected (given the small cloud mass), we allow it to move and integrate its motion by direct force summation.
For the cloud, we chose the simplest case of a radial trajectory with $E_{orb}=0$. In this case, the time needed to reach the IMBH from a certain distance is
$$\label{tfall}
t_{fall}=\sqrt{\frac{2d^3_{BH}}{9GM_{BH}}},$$
whenever the mass of the IMBH is significantly larger than the mass of the cloud (which is the case in our model, see section \[anmod\]). Eq. \[tfall\] then allowed us to set the initial conditions of our simulation. Specifically, we can get the initial distance of the head and tail of the cloud as
$$\label{distini}
d_{BH,0}=\left[ 4.5\, G M_{BH} \left( t_{fall}+t_{arb}\right) ^{2}\right] ^{1/3},$$
where $t_{arb}$ is any arbitrary time, needed by the head and tail of the cloud to reach their current positions.
In particular, we tested the case with $(D,S)=(2,14)$ arcsec and $\Delta v_{LOS}=30\; \mathrm{ km \;s^{-1}}$, with $\theta \simeq 35^{\circ}$ and corresponding $M_{BH}= 5.7 \times 10^4 M_{\odot}$ (see Table \[mbh\]). For this model, $d_{BH,h}\simeq 0.14$ pc and $d_{BH,t}\simeq 0.99$ pc. We adopted $t_{arb}=0.3$ Myr, which gives $d_{BH,h,0}\simeq 4.73$ pc and $d_{BH,t,0}\simeq 5.01$ pc. We hence put a spherical cloud on the x-axis, with radius $R_{cl}=(d_{BH,t,0}-d_{BH,h,0})/2\simeq 0.14$ pc, at a distance of $d_{BH,cl,0}=(d_{BH,h,0}+d_{BH,t,0})/2\simeq 4.87$ pc from the IMBH and with a velocity towards the IMBH of $\simeq 10\;\mathrm{ km \;s^{-1}}$ (see Eq. \[vff\]). We adopted a cloud mass $M_{cl} = 40 M_{\odot}$ (which corresponds to a density of $\rho_{cl}\simeq 2.3 \times 10^{-19} \;\mathrm{g \; cm^{-3}}$), a cloud temperature of $T_{cl}=60$ K (see section \[anmod\]) and put it in pressure equilibrium with a rarefied and hot background medium with $\rho_{bg}=\rho_{cl}/10^5$.
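These initial conditions follow from inverting Eq. \[tfall\]. The sketch below is our own check, with $G$ in units of pc (km/s)$^{2}\,M_{\odot}^{-1}$; it reproduces the quoted initial distances and infall velocity to within a per cent.

```python
import numpy as np

G = 4.30091e-3          # pc (km/s)^2 / Msun
PC_KMS_TO_MYR = 0.9778  # 1 pc/(km/s) expressed in Myr
M_BH = 5.7e4            # Msun

def t_fall(d):
    """Radial free-fall time (E_orb = 0) from distance d [pc]; Eq. (tfall)."""
    return np.sqrt(2.0 * d**3 / (9.0 * G * M_BH))   # in pc/(km/s)

def d_initial(d_now, t_arb_myr=0.3):
    """Initial distance [pc] such that the gas reaches d_now after t_arb."""
    t = t_fall(d_now) + t_arb_myr / PC_KMS_TO_MYR
    return (4.5 * G * M_BH * t**2)**(1.0 / 3.0)

d_head0 = d_initial(0.14)     # head of the cloud, starts at ~4.7 pc
d_tail0 = d_initial(0.99)     # tail of the cloud, starts at ~5.0 pc
v0 = np.sqrt(2.0 * G * M_BH / (0.5 * (d_head0 + d_tail0)))   # ~10 km/s
```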
Compared to the test-particle and N-body simulations in @Oka16 and @Oka17, we considered completely different initial conditions for the cloud. In fact, as discussed in section \[anmod\], we chose the value of the mass estimated by these authors from its emission, which leads to a cloud that is not bound by its self-gravity and does not require strong turbulence to support it. This value is much smaller than that of their simulated cloud, with $M_{sim}=10^3 M_{\odot}$. In particular, in @Oka17 the authors distribute their particles with a Gaussian radial mass distribution with dispersion of $\sigma_r = 0.2$ pc and internal velocity dispersion of $\sigma_v = 1.43 \mathrm{km \; s^{-1}}$, claiming that this leads to an initially virialized cloud. However, in this case, $GM_{sim}/\sigma_r \approx 10 \sigma^2_v$, hence this setup is actually strongly subvirial.[^3] A virial equilibrium would require $\sigma_v \approx 5\; \mathrm{km \; s^{-1}}$. If the molecular gas has a temperature of 60 K, as estimated by @Oka17, this means that the turbulence has a Mach number $\mathcal{M}\approx 10$, which is probably too high for clouds of that size [e.g., @Larson81].
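The virial bookkeeping of this paragraph can be verified directly; the sketch below uses the same rough estimator $GM/\sigma_{r}$ for the specific gravitational energy as the text.

```python
import numpy as np

G = 4.30091e-3                     # pc (km/s)^2 / Msun
K_B, M_H = 1.380649e-23, 1.6726e-27

M_sim, sigma_r = 1.0e3, 0.2        # simulated cloud of Oka et al. (2017)
sigma_v = 1.43                     # their internal velocity dispersion [km/s]

ratio = G * M_sim / sigma_r / sigma_v**2         # ~10: strongly subvirial
sigma_vir = np.sqrt(G * M_sim / sigma_r)         # ~5 km/s needed for equilibrium
c_s = np.sqrt(K_B * 60.0 / (2.46 * M_H)) / 1e3   # isothermal sound speed [km/s]
mach = sigma_vir / c_s                           # ~10
```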
So, as discussed here and in Section 2.1, we do not expect self-gravity and turbulence to significantly affect the results of our analytical calculation. Nonetheless, we decided to test the influence of self-gravity and turbulence by running two simulations of the same cloud: in the first simulation, the cloud is assumed to have no initial turbulence and no self-gravity, while in the second setup we include gas self-gravity and turbulence.
In particular, for the turbulent cloud we generated a random Gaussian, divergence-free turbulent velocity field, with power spectrum $\|\delta_v\|^2 \propto k^{-4}$. Such a power spectrum is usually chosen to reproduce the observed trend of the velocity dispersion, in molecular clouds, with the cloud size and the size of its subregions [@Larson81]. The ratio between kinetic and gravitational energy is set to 1, i.e., the cloud is marginally self-bound.
The minimum and maximum refinement levels in our simulations are 4 and 10, respectively, which correspond to $(\Delta x,y,z)_{max}= 11/2^4\;\mathrm{pc}=0.6875$ pc and $(\Delta x,y,z)_{min}= 11/2^{10}\; \mathrm{pc}\simeq 0.0107$ pc, respectively. The AMR strategy we used is “quasi-Lagrangian”, i.e., we refine every cell with mass higher than a certain value $m_{res}$. We chose $m_{res}\simeq 9\times 10^{-4} \; M_{\odot}$, so as to ensure maximum resolution over the whole cloud, while keeping the background medium at low resolution.
This test simulation is run with an isothermal equation of state for the gas. Such a simplification is justified by the fact that at the cloud densities, the molecular gas is kept at a roughly constant “equilibrium” temperature by heating and cooling processes [e.g., @Larson85; @Larson05; @Koyama00].
In terms of numerics, we adopted an “exact” Riemann solver with 10 iterations and a MonCen slope limiter for the piecewise linear reconstruction [e.g., @Toro09].
![Projected density maps for the run with no self-gravity and no turbulence. The figure shows the evolution of the cloud on its radial orbit at $t=$ 0, 0.17 and 0.30 Myr, in the simulation domain. The white dot shows the position of the IMBH. []{data-label="intmap"}](time_series3.pdf)
![image](map.pdf) ![image](pv.pdf)
Results
-------
In Fig. \[intmap\] we show the free-fall of the cloud towards the IMBH in our simulation. As the cloud approaches its attractor, the tidal force progressively distorts its shape, leading to an elongation in the direction of the motion and a perpendicular compression. In the lower panel of Fig. \[intmap\] the distance of the head of the cloud from the IMBH is much smaller than the cloud size. For this reason, the simplified treatment of tidal effects based on a Maclaurin expansion of the gravitational force is no longer valid: it would have predicted a tidal force acting symmetrically onto the head and tail of the cloud, with respect to the cloud barycentre, whereas the cloud assumes a drop-like shape instead. We must stress here that the positions of the head and tail of the cloud and, consequently, the cloud elongation are a direct consequence of our imposition of Eq. \[distini\], once $t_{arb}$ is chosen. On the other hand, the cloud thickness in the direction perpendicular to the orbit depends on what shape is initially given to the cloud, i.e., on its initial thickness.
![image](map_turb.pdf) ![image](pv_turb.pdf)
The left panel of Fig. \[obspl\] shows the cloud as projected onto the sky plane. In order to produce this plot, we simply assumed that the line-of-sight forms an angle $\theta \simeq 35^{\circ}$ with the x-axis of our simulation (see Fig. \[sketch\] and section \[ic\]) and an angle $\alpha=45^{\circ}$ with the Galactic plane (the arrow shows the direction of the x-axis of the simulation). Interestingly enough, the observed cloud CO-0.40-0.22 also shows a sort of drop-like shape, vaguely similar to the simulated cloud shown in Fig. \[obspl\].
The right panel of Fig. \[obspl\] shows mock position-velocity diagrams for the simulated cloud. The distance from the IMBH is simply $d_{proj}=d_{BH}\cos\theta=x\cos\theta$, while the line-of-sight velocity in this plot is obtained by adding a fixed velocity of $-90 \; \mathrm{km \; s^{-1}}$ to the gas velocity. The latter should represent the velocity of the centre of mass of the whole IMBH+cloud system and it is needed to match the observed velocity values. Such centre of mass velocity is nonetheless compatible with the typical orbital velocities at those distances from the Galactic centre. However, we must point out that we would obtain a different centre of mass velocity by simply assuming that the cloud is falling towards the IMBH on an orbit with $E_{orb}\neq 0$ in Eq. \[vff\].
For these simple diagrams we plotted the mass in any position-velocity bin, rather than the emission. This is a 0th-order approximation, based on the assumption that every molecular species has a uniform abundance over the whole cloud and that the observed molecular lines are optically thin (which is probably the case, given the cloud properties). Panel (a) is the direct result of the simulation, while panel (b) is obtained from panel (a) by applying a Gaussian smoothing with FWHM equal to 1.2 arcsec and 11 $\mathrm{km\; s^{-1}}$ in distance from the IMBH and line-of-sight velocity, respectively. Applying a Gaussian smoothing is needed to reproduce the observed velocity dispersion at any fixed distance from the IMBH. Such a velocity dispersion might be explained by internal supersonic turbulence. However, a velocity dispersion of 11 $\mathrm{km\; s^{-1}}$ in a gas at 60 K implies a turbulence with Mach number $\mathcal{M}\gtrsim 20$, which seems very unlikely. Thus, the most likely explanation is that the observed velocity dispersion is due to the instrumental spread function. Indeed, the position-velocity diagrams in Fig. 2 of @Oka17 show large velocity dispersion for all the noise/background patches surrounding CO-0.40-0.22.
The head of the cloud appears more rarefied (i.e., less visible) in the simulated position-velocity diagrams. This is simply because the leading part of the cloud occupies a portion of the orbit with larger velocity gradient (see the dotted line in the right panels of Fig. \[obspl\]). Hence, its emission (or, for our simplified comparison, its mass) will be spread over more velocity bins, compared to the trailing part. In addition, as discussed before, the cloud assumes a drop-like shape close to the IMBH, thus leading to a non-uniform mass distribution along the length of the cloud. The observations by @Oka17 show a large ($\simeq 20-40\; \mathrm{km \; s^{-1}}$) velocity gradient in the brightest gas and some possible emission from very high (relative) velocity gas. In our model, this would possibly imply an “effective” larger velocity gradient between the head and the tail of the cloud and, consequently, an even higher black hole mass.
Fig. \[obsplturb\] shows the density map and the position-velocity diagrams of the simulation in which the effects of self-gravity and turbulence have been included. As is visible, the turbulence leads to a non-uniform distribution of the gas. This is indeed more consistent with the observed cloud, which shows some subpeaks in its elongation. The main point of this work is still confirmed in the right panel of Fig. \[obsplturb\], where the cloud has basically the same extent in the position-velocity space as in Fig. \[obspl\]. Nonetheless, the turbulence gives a slightly larger velocity extent close to the head of the cloud, compared to our simpler uniform model.
So, including turbulence gives results that are slightly closer to the observations. However, turbulence does not significantly affect the overall velocity gradient produced across the whole cloud length by the tidal field of the IMBH.
Discussion and conclusions
==========================
In this paper, we assumed that the large velocity gradient observed in the very compact molecular cloud CO-0.40-0.22 [@Oka16; @Oka17] is the result of the infall of this cloud onto the putative IMBH CO-0.40-0.22\*. Our extreme assumptions (e.g., radial infall, best possible inclination angle between the cloud orbit and the sky plane, etc.) gave us a strong lower limit to the mass of such an IMBH of a few $\times 10^4 M_{\odot}$. We must again stress that the lower limits in Table \[mbh\] are obtained assuming the most favourable conditions and higher masses are to be expected. This is the first paper where a robust lower limit is given. However, we cannot exclude that other phenomena explain the observed velocity gradient inside CO-0.40-0.22, such as collisions with other clouds, bipolar outflows from young stellar objects or supernova explosions [@Tanaka14; @Ravi18; @Yalinewich17].
Is it reasonable to find such a massive black hole at that distance from the Galactic Centre?
An estimate of its dynamical friction timescale ($t_{df}$) can help us answer this question. In fact, if $t_{df}$ were short, it would be very unlikely to find it at its current position. At 60 pc from SgrA\* (i.e., the projected distance of CO-0.40-0.22\*), the IMBH would interact with the stars of the Milky Way nuclear star cluster and the inner parts of the nuclear stellar disc [see @BlandHawthorn16 and references therein]. At those distances, the enclosed stellar mass can be described by a power-law $M(r)=2\times M_0 (r/R_0)^\alpha$, with $M_0=2 \times 10^8 \; M_{\odot}$, $R_0=60$ pc and $\alpha=1.2$ [e.g., @Fritz16]. Using Eq. 16 in @McMillan03,
$$t_{df}=\frac{\alpha+1}{\alpha(\alpha+3)}\frac{1}{\chi\ln\Lambda}\left(\frac{M_0}{G}\right)^{1/2}\frac{R^{3/2}_0}{m_{BH}},$$
and assuming $\chi=0.34$ (if the IMBH moves at the circular velocity and the stars are in dynamical equilibrium) and $\ln\Lambda=1-10$, we get that $t_{df, BH}\simeq 1.3-13$ Gyr for $m_{BH}=5\times 10^4 \; M_{\odot}$. This means that this IMBH has probably spent quite some time at its current position and it is not expected to get much closer in the next few Gyr. This calculation holds as long as the IMBH has reached its current position in isolation, i.e., if it is currently not surrounded by a host stellar cluster (see later).
Concerning its origin, similar IMBH masses are obtained by theoretical estimates of BHs forming in high-$\sigma$ density fluctuations in a $\Lambda$CDM cosmological context [e.g., @Volonteri03]. By means of N-body cosmological simulations, @Diemand05 have shown that 10-100 of such IMBHs are expected to be found in the inner kpc of the Milky Way. A similar result has also been obtained by the semi-analytic work of @Volonteri05, also including dynamical friction of the IMBH host halo. Unfortunately, such studies barely reach (or do not reach at all) distances from the centre of the galaxy smaller than 100 pc.
The lower limit to the IMBH mass found in the present study is, instead, only marginally compatible with (at the upper end of) the IMBH mass distribution of putative IMBHs in globular clusters [e.g., @Lutzgendorf13; @Mezcua17] and it is also in tension with theoretical estimates (e.g., [@Miller02; @PortegiesZwart04; @Freitag06; @Mapelli16]; but see also [@Giersz15]).
The infall of globular clusters by dynamical friction has been theorized to be responsible for the formation of nuclear star clusters [@Tremaine75; @Capuzzo93; @Capuzzo08; @Agarwal11], also in the case of the Milky Way [@Antonini12; @Gnedin14], and perhaps for the formation/growth of supermassive black holes [e.g., @Ebisuzaki01; @PortegiesZwart06].
In particular, @Mastrobuono14 predicted that IMBHs hosted by globular clusters are expected to inspiral down to the inner pc of the Galaxy. Their simulations, though, assume that massive ($\approx 10^6 M_{\odot}$) globular clusters can survive up to distances of 20 pc from SgrA\* [in this regard, see @Miocchi06].
@PortegiesZwart06 also predicted a population of around 50 IMBHs in the inner 10 pc of the Milky Way. However, these are expected to have masses of the order of $10^3 M_{\odot}$, since they should be born in lower mass ($\approx 10^5 M_{\odot}$) star clusters, forming closer to the Galactic Center, such as the Arches [@Nagata95; @Cotera96; @Serabyn98] and the Quintuplet [@Nagata90; @Figer95]. Finally, @Fragione18 modeled the fate of IMBHs born in globular clusters in the halo of Milky-Way-like galaxies and found that the most massive ($\gtrsim 10^6 M_{\odot}$) globular clusters might have delivered few massive ($\gtrsim 10^4 M_{\odot}$) IMBHs at distances smaller than 100 pc from the centre of the galaxy. In contrast with the assumptions of @Mastrobuono14, @McMillan03 and @Fragione18 expect the parent globular cluster to have dissolved before reaching the observed position of CO-0.40-0.22\* and have left a “naked” black hole. From the IR observations of CO-0.40-0.22 [@Ravi18], it is almost impossible to understand whether the IMBH candidate is naked or surrounded by a star cluster, because of the high absorption along the line of sight.
These arguments show that it is very unlikely that such a massive IMBH formed on the spot: Arches/Quintuplet-like star clusters would not be massive enough to produce IMBHs with mass $>10^4 M_{\odot}$. A more massive local parent cluster, instead, should have been dragged, along with its IMBH, to much smaller distances from SgrA\*. Thus, the most plausible scenario is that this object might have formed in the halo of our Galaxy and was successively brought at its current position by its original host, which dissolved on the way.
Hence, assuming that CO-0.40-0.22\* is an IMBH with high mass, as those resulting from our calculation, is not in tension with current theoretical estimates.
As already mentioned, different explanations for the high velocity gradient of CO-0.40-0.22 are possible. A supernova explosion inside the cloud would provide enough energy to generate it [@Tanaka14; @Yalinewich17] and it is a feasible alternative. Indeed, @Ravi18 reported that the cloud might be associated with an HII region. On the other hand, as already mentioned by @Oka17, CO-0.40-0.22 does not clearly show a cavity at its center. A bipolar outflow could be another possibility, but it does not seem to be energetic enough [@Tanaka14]. A cloud-cloud collision seems to be the most promising alternative explanation, as discussed by @Tanaka14 and @Tanaka18. Indeed, CO-0.40-0.22 seems to lie on the rim of two large molecular shells. Concerning CO-0.40-0.22\*, @Ravi18 showed that its spectral energy distribution can be due to synchrotron emission by an advection dominated accretion flow or by a relativistic jet/outflow (similarly to the case of SgrA\*). However, these authors also show that thermal black-body emission from a massive protostellar disc around a young star can be a viable alternative explanation.
In conclusion, the interpretation of the nature of CO-0.40-0.22 and CO-0.40-0.22\* is still highly debated, so further attention should be given to this exotic object, particularly in the light of the parallel claim of another IMBH with mass $\gtrsim 10^4 \; M_{\odot}$ in the IRS13E complex at $\approx 0.13$ pc from SgrA\* [@Schodel05; @Fritz10; @Tsuboi17].
Acknowledgements {#acknowledgements .unnumbered}
================
We thank the referee, Prof. Mirosław Giersz, for his stimulating comments. AB and MM acknowledge financial support from the MERAC Foundation, through grant ‘The physics of gas and protoplanetary discs in the Galactic centre’, and from INAF, through PRIN-SKA ‘Opening a new era in pulsars and compact objects science with MeerKat’. MP acknowledges support from the European Union’s Horizon $2020$ research and innovation programme under the Marie Skłodowska-Curie grant agreement No. $664931$. AB would like to thank the whole ForDyS group for useful discussions. Most of the simulation post-processing was carried out with the yt toolkit [@Turk11].
Impact of $E_{orb}$ on the mass of the IMBH {#eorb}
===========================================
In this Appendix we estimate the impact of a non-zero orbital energy of the radial orbit of the cloud towards the IMBH.
For the $E_{orb}<0$ case, the most extreme configuration is that the cloud had zero velocity at $d_{BH,0}$ equal to the current position of the tail of the cloud. This simply means that Eq. \[dvlos\] becomes
$$\Delta v_{LOS,new}= \sqrt{2G M_{BH} \sin\theta\left[\frac{1}{D}-\frac{1}{S+D}\right]}\cos\theta.$$
Thus the mass limit would become
$$M_{BH,new}=M_{BH}(E_{orb}=0)\frac{[1-\sqrt{D/(S+D)}]^2}{1-D/(S+D)}$$
For the case $(D,S)$ = (2,14) arcsec, the new lower limit is 0.5 times the value we get for the $E_{orb}=0$ case.
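The quoted factor follows directly from the previous expression; a minimal numerical check (with $D$ and $S$ in arcsec, as in the text):

```python
import math

def mass_ratio(D, S):
    """M_BH,new / M_BH(E_orb = 0) for a cloud starting at rest at the
    current position of its tail (most extreme E_orb < 0 case)."""
    x = D / (S + D)
    return (1.0 - math.sqrt(x)) ** 2 / (1.0 - x)

print(mass_ratio(2.0, 14.0))  # ~0.48, i.e. about half the E_orb = 0 lower limit
```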
In general, under the assumption of $d_{BH,h}<d_{BH,t}\ll d_{BH,0}$, a 1st-order Taylor expansion of Eq. \[vff\] gives

$$M_{BH,new}\approx M_{BH}(E_{orb}=0)\left(1+\frac{d_{BH,t}-d_{BH,h}}{2d_{BH,0}}\right)^{-2}.$$
The case of $E_{orb}>0$ does not affect our main conclusions, since we provided lower limits on the mass of CO-0.40-0.22\*, but this case always provides higher IMBH masses.
\[lastpage\]
[^1]: 22 arcsec = 0.9 pc [@Oka17].
[^2]: We must note that this is a first order approximation, not fully correct at the observed distance between the IMBH and the cloud, but still useful for such a back-of-the-envelope estimate.
[^3]: In fact, calculating the free-fall time $t_{ff}$ of such a cloud and comparing it to the turbulence crossing time $t_{cross}$ and to the time $t_{ca}$ that it takes for their cloud to reach the closest approach to the IMBH, we get $t_{ff}\approx t_{cross}/2 \approx t_{ca}/10$. Hence, the final result of the N-body simulations presented in @Oka17 might suffer from a strong imprint of these unstable initial conditions.
---
abstract: |
Different types of data skew can result in load imbalance in the context of parallel joins under the shared nothing architecture. We study one important type of skew, join product skew (JPS). A static approach based on frequency classes is proposed, which assumes that the distribution of the join attribute values is known in advance. It stems from the observation that the join selectivity can be expressed as a sum of products of frequencies of the join attribute values. As a consequence, an appropriate assignment of join sub-tasks that takes into consideration the magnitude of the frequency products can alleviate the join product skew. Motivated by the aforementioned remark, we propose an algorithm, called HJPS (Handling Join Product Skew), to handle join product skew.
DBMS, join operation, data distribution, data skew, load imbalance, shared nothing architecture
author:
- 'Foto Afrati, Victor Kyritsis, Paraskevas Lekeas, Dora Souliou'
title: A New Framework for Join Product Skew
---
Introduction
============
The limited potential of centralized database systems in terms of storing and processing large volumes of data has led to the advent of parallel database management systems (PDBMS) that adopt the shared-nothing architecture. According to this architecture, each computational node (database processor) has its own memory and CPU and independently accesses its local disks, while being able to perform relational operations locally. By definition, the aforementioned architecture favors the deployment of data intensive scale computing applications [@mapreduce] by reducing the complexity of the underlying infrastructure as well as the overall cost.
Within the scope of the parallel evaluation of the relational operators by splitting them into many independent operators (*partitioned parallelism*), sort-merge join and hash-join constitute the main algorithms for the computation of the equijoin. Equijoin is a common special case of the join operation $R \Join S$, where the join condition consists solely of equalities of the form $R.X=S.Y$ ($X$ and $Y$ are assumed to be attributes of the relations $R$ and $S$ respectively). Both algorithms are subject to parallel execution. However, the hash-based algorithm has prevailed since it has linear execution cost, and it performs better in the presence of data skew as well [@ParallelDatabaseSystemsThefutureofhighperformanceDatabaseSystems].
The parallel hash-based join processing is separated into three phases. In the first phase, each relation is fully declustered horizontally across the database processors by applying a partition function on the declustering attribute, which in general is different from the join attribute. Next, at the redistribution phase, each database processor applies a common hash function $h$ on the join attribute value for its local fragments of relations $R$ and $S$. The hash function $h$ ships any tuple belonging to either relation $R$ or $S$ with join attribute value $b_i$ to the $h(b_i)$-th database processor. At the end of the redistribution process both relations are fully partitioned into disjoint fragments. Lastly, each database processor $p$ locally performs, in the most cost-effective way, an equijoin operation between its fragments of relations $R$ and $S$, denoted by $R^p$ and $S^p$ respectively. The joined tuples may be kept locally in each database processor instead of being merged with other output tuples into a single stream.
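The redistribution and local-join phases described above can be sketched in a few lines. The code below is a minimal single-process simulation (not a distributed implementation): Python's built-in `hash` stands in for the common hash function $h$, and tuples are reduced to (join key, payload) pairs.

```python
def redistribute(tuples, n_procs, h=hash):
    """Ship each (key, payload) tuple to processor h(key) mod n_procs."""
    partitions = [[] for _ in range(n_procs)]
    for key, payload in tuples:
        partitions[h(key) % n_procs].append((key, payload))
    return partitions

def local_join(r_part, s_part):
    """Classic build/probe hash join executed locally on one processor."""
    table = {}
    for key, payload in r_part:
        table.setdefault(key, []).append(payload)
    return [(key, rp, sp) for key, sp in s_part for rp in table.get(key, [])]

# Toy fragments of R(A, B) and S(B, C), keyed on the join attribute B.
R = [(1, "r1"), (1, "r2"), (2, "r3")]
S = [(1, "s1"), (2, "s2"), (2, "s3")]
n = 4  # number of database processors
joined = []
for Rp, Sp in zip(redistribute(R, n), redistribute(S, n)):
    joined.extend(local_join(Rp, Sp))
print(sorted(joined))  # 4 joined tuples, matching keys only
```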
Skewness, perceived as the variance in the response times of the database processors involved in the previously described computation, is identified as one of the major factors that affect the effectiveness of the hash-based parallel join [@EffectivenessParallelJoins]. [@TaxonomyAndPerformanceModelDataSkewEffectsInParallelJoins] defines four types of the data skew effect: tuple placement skew, selectivity skew, redistribution skew and join product skew. Query load balancing in terms of the join operation is very sensitive to the existence of redistribution skew and/or join product skew. Redistribution skew can be observed after the end of the redistribution phase. It happens when at least one database processor has received a large number of tuples belonging to a specific relation, say $R$, in comparison to the other processors. This imbalance in the number of redistributed tuples is due to the existence of naturally skewed values in the join attribute. Redistribution skew can be experienced in a subset of database processors. It may also concern both the relations $R$ and $S$ (double redistribution skew). Join product skew occurs when there is an imbalance in the number of joined tuples produced by each database processor. [@DataPlacementSharedNothingParallelDatabaseSystems] points out the impact of this type of skewness on the response time of the join query. In particular, join product skew deteriorates the performance of subsequent join operations, since this type of data skew is propagated up the query tree.
In this paper we address the issue of join product skew. Various techniques and algorithms have been proposed in the literature to handle this type of skew ([@SkewInsensitiveParallelAlgorithmsForRelationalJoin], [@PracticalSkewHandlingParallelJoins], [@HandlingDataSkewParallelJoinsInSharedNothingSystems], [@FrequencyAdaptiveJoinForSharedNothingMachines], [@DynamicJoinProductSkewHandlingHashJoinsSharedNothingDatabaseSystems], [@HandlingDataSkewInParallelHashJoin]). We introduce the notion of frequency classes, whose definition is based on the product of frequencies of the join attribute values. Under this perspective we examine the cases of homogeneous and heterogeneous input relations.
We also propose a new static algorithm, called HJPS (Handling Join Product Skew), to improve the performance of parallel joins in the presence of this specific type of skewness. The algorithm is based on the intuition that join product skew comes into play when the tuples produced for a specific value overload a processor. The HJPS algorithm constitutes a refinement of the PRPD algorithm [@HandlingDataSkewParallelJoinsInSharedNothingSystems] in the sense that the exact number of processors needed for each skewed value is determined, instead of duplicating or redistributing the tuples across all the database processors. Additionally, HJPS is advantageous in the case where join product skew occurs without redistribution skew.
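The exact assignment rule of HJPS is given in Section \[AlgorithmSection\]. Purely to illustrate the intuition stated above, the sketch below estimates, for hypothetical per-value output sizes (all numbers made up), how many of the available processors a skewed value would need so that its output does not exceed an even share of the total; it is an illustration of the idea, not the algorithm itself.

```python
import math

def processors_per_skewed_value(output_per_value, total_output, n_procs):
    """For each skewed join attribute value, estimate how many of the
    n_procs processors its output requires so that no single processor
    exceeds an even share (total_output / n_procs) of the join output."""
    even_share = total_output / n_procs
    return {v: max(1, math.ceil(out / even_share))
            for v, out in output_per_value.items()}

# Hypothetical joined-tuple counts for two skewed join attribute values.
skewed = {"b1": 500_000, "b2": 100_000}
print(processors_per_skewed_value(skewed, total_output=1_000_000, n_procs=8))
# {'b1': 4, 'b2': 1}
```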
The rest of this paper is organized as follows. Section \[RelatedWorkSection\] discusses the related work. In section \[ExamplesSection\] we illustrate the notion of division of join attribute values into classes of frequencies by means of two generic cases. In section \[AlgorithmSection\] an algorithm that helps in reducing join product skew effect is proposed and section \[ConclusionSection\] concludes the paper.
Related Work {#RelatedWorkSection}
============
The achievement of load balancing in the presence of redistribution and join product skew is related to the development of static and dynamic algorithms. In static algorithms it is assumed that adequate information on skewed data is known before the application of the algorithm. [@SkewInsensitiveParallelAlgorithmsForRelationalJoin], [@PracticalSkewHandlingParallelJoins] and [@HandlingDataSkewParallelJoinsInSharedNothingSystems] expose static algorithms. On the contrary, [@FrequencyAdaptiveJoinForSharedNothingMachines], [@DynamicJoinProductSkewHandlingHashJoinsSharedNothingDatabaseSystems] and [@HandlingDataSkewInParallelHashJoin] propose techniques and algorithms according to which data skew is detected and encountered dynamically at run time.
[@FrequencyAdaptiveJoinForSharedNothingMachines], [@HandlingDataSkewInParallelHashJoin] address the issue of the join product skew following a dynamic approach. A dynamic parallel join algorithm that employs a two-phase scheduling procedure is proposed in [@HandlingDataSkewInParallelHashJoin]. The authors of [@FrequencyAdaptiveJoinForSharedNothingMachines] present an hybrid frequency-adaptive algorithm which dynamically combines histogram-based balancing with standard hashing methods. The main idea is that the processing of each sub-relation, stored in a processor, depends on the join attribute value frequencies which are determined by its volume and the hashing distribution.
[@SkewInsensitiveParallelAlgorithmsForRelationalJoin], [@PracticalSkewHandlingParallelJoins] and [@HandlingDataSkewParallelJoinsInSharedNothingSystems] deal with join product skew in a static manner. In [@HandlingDataSkewParallelJoinsInSharedNothingSystems], the authors address the issue of redistribution skew by proposing the PRPD algorithm. However, except for redistribution skew, their approach handles the join product skew that results from the former. In the PRPD algorithm, the redistribution phase of the hash-join has been modified to some degree. Specifically, for the equijoin operation $R_1 \Join R_2$, the tuples of each sub-relation of $R_1$ with skewed join attribute values occurring in $R_1$ are kept locally in the database processor. On the other hand, the tuples that have skewed values occurring in $R_2$ are broadcast to all the database processors. The remaining tuples of the sub-relation are hash redistributed. The tuples of each sub-relation of $R_2$ are treated in the corresponding way. The algorithm efficiently captures the case where some values are skewed in both relations. Using the notion of splitting values stored in a split vector, virtual processor partitioning [@PracticalSkewHandlingParallelJoins] assigns multiple range partitions instead of one to each processor. Finally, the authors of [@SkewInsensitiveParallelAlgorithmsForRelationalJoin] assign a work weight function to each join attribute value in order to generate partitions of nearly equal weight.
Finally, the OJSO algorithm [@EfficientOuterJoinDataSkewHandlingParallelDBMS] handles the data skew effect in an outer join, which is a variant of the equijoin operation.
Two Motivating Examples {#ExamplesSection}
=======================
\[ExamplesSection\] We will assume the simple case of a binary join operation $R_1(A,B) \Join R_2(B,C)$, in which the join predicate is of the form $R_1.B=R_2.B$. The $m$ discrete values $b_1, b_2, \ldots, b_m$ define the domain $D$ of the join attribute $B$. Let $f_i(b_j)$ denote the relative frequency of join attribute value $b_j$ in relation $R_i$. Given the relative frequencies of the join attribute values $b_1, b_2, \ldots, b_m$, the join selectivity of $R_1 \Join R_2$ is equal to [@OnTheRelativeCostOfSamplingForJoinSelectivityEstimation]
$$\label{eq1}
\mu = \sum_{b_j \in D}\prod_{i=1}^{2} f_i(b_j) = \sum_{b_j \in D} f_1(b_j)f_2(b_j)$$
Since $\mu=\frac{|R_1 \Join R_2|}{|R_1 \times R_2|}$ and the size of the result set of the cross product $R_1 \times R_2$ is equal to the product $|R_1| |R_2|$, the cardinality of the result set associated with the join operation $R_1 \Join R_2$ is determined by the magnitude of the join selectivity.
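Eq. \[eq1\] can be checked directly on a toy instance (the relation contents below are made up for illustration): computing $\mu$ from the relative frequencies and multiplying by $|R_1||R_2|$ reproduces the size of the explicitly materialized join.

```python
from collections import Counter

def join_selectivity(col1, col2):
    """Eq. (1): mu = sum_b f1(b) * f2(b), where f_i(b) is the relative
    frequency of join attribute value b in relation R_i."""
    c1, c2 = Counter(col1), Counter(col2)
    return sum((c1[b] / len(col1)) * (c2[b] / len(col2))
               for b in c1.keys() & c2.keys())

# Made-up join columns (attribute B) of R1 and R2.
R1_B = ["b1", "b1", "b1", "b2"]
R2_B = ["b1", "b2", "b2"]

mu = join_selectivity(R1_B, R2_B)  # 3/4 * 1/3 + 1/4 * 2/3 = 5/12
join_size = sum(1 for x in R1_B for y in R2_B if x == y)
# mu * |R1| * |R2| matches the materialized join size (5 tuples).
print(mu * len(R1_B) * len(R2_B), join_size)
```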
By extending the previous analysis, the join selectivity $\mu$ can be considered as the probability of the event that two randomly picked tuples, belonging to the relations $R_1$ and $R_2$ respectively, join on the same join attribute value. Based on this observation, an analytical formula concerning the size of the result set of the chain join (which is one of the most common forms of the join operation) is proven. Specifically, we state that the join selectivity of the chain join $R=\Join_{i=1}^{k}R_{i}(A_{i-1}, A_{i})$ is equal to the product of the selectivities $\mu_{i,i+1}$ of the constituent binary operations $R_{i}(A_{i-1}, A_{i}) \Join R_{i+1}(A_{i}, A_{i+1})$, under a certain condition of independence. In our notation, we omit attributes of the relations that do not participate in the join process. Formally, we have the following\
**Lemma** *Given that the values of the join attributes $A_{i}$ in a chain join of $k$ relations are independent of each other, the overall join selectivity of the chain join, denoted by $\mu$, is equal to the product of the selectivities of the constituent binary join operations, i.e., $\mu = \prod_{i=1}^{k-1}\mu_{i,i+1}$.*\
We define a pair of random variables $(\mathsf{X}_i, \mathsf{Y}_i)$ for every relation $R_i$, where $i=2,\ldots,k-1$. Specifically, the random variable $\mathsf{X}_i$ corresponds to the join attribute $R_i.A_{i-1}$ and it is defined as the function $\mathsf{X}_i(t) : \Omega_{i} \rightarrow \mathbb{N}_{\mathsf{X}_i}$, where $\Omega_{i}$ is the set of the tuples in the relation $R_i$. $\mathbb{N}_{\mathsf{X}_i}$ stands for the set $\{0, 1, \ldots ,|D_{A_{i-1}}|-1\}$, where $D_{A_{i-1}}$ is the domain of the join attribute $A_{i-1}$. In other words, $\mathbb{N}_{\mathsf{X}_i}$ defines an enumeration of the values of the join attribute $A_{i-1}$, in such a way that there is a one-to-one correspondence between the values of the set $D_{A_{i-1}}$ and $\mathbb{N}_{\mathsf{X}_i}$. Similarly, the random variable $\mathsf{Y}_i(t) : \Omega_{i} \rightarrow \mathbb{N}_{\mathsf{Y}_i}$ corresponds to the join attribute $A_{i}$, where $\mathbb{N}_{\mathsf{Y}_i}$ represents the set $\{0, 1, \ldots ,|D_{A_{i}}|-1\}$.
As for the edge relations $R_1$ and $R_k$, only the random variables $\mathsf{Y}_1$ and $\mathsf{X}_k$ are defined, since the attributes $R_1.A_0$ and $R_k.A_k$ do not participate in the join process.
Let $\mathcal{R}$ denote the event of the join process. Then we have that $$p(\mathcal{R}) = p \bigl( \mathsf{Y}_1 = \mathsf{X}_2 \wedge \mathsf{Y}_2 = \mathsf{X}_3 \wedge \ldots \wedge \mathsf{Y}_{k-1} = \mathsf{X}_k \bigr)$$
By assumption, the random variables are independent of each other. Thus, it is valid to say that $$p(\mathcal{R}) = \prod_{i=1}^{k-1}p( \mathsf{Y}_i = \mathsf{X}_{i+1} )$$ Moreover, $p(\mathsf{Y}_i = \mathsf{X}_{i+1})$ represents the probability of the event that two randomly picked tuples from relations $R_i$ and $R_{i+1}$ agree on their values of the join attribute $A_i$. Since it holds that $p( \mathsf{Y}_i = \mathsf{X}_{i+1} ) = \mu_{i,i+1}$, the lemma follows.
As a direct consequence of the previous lemma, the cardinality of the result set associated with the join operation $R=\Join_{i=1}^{k}R_{i}(A_{i-1}, A_{i})$ is given by the formula $$|R|=\bigl( \prod_{i=1}^{k-1}\mu_{i,i+1} \bigr) \cdot \bigl( \prod_{j=1}^{k} |R_j|\bigr)$$
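This relation is easy to verify numerically on a toy example. In the sketch below (relation contents made up for illustration), $R_2$ is built as the full cross product of its attribute domains, so that $R_2.A_1$ and $R_2.A_2$ are independent, as the lemma requires.

```python
from collections import Counter
from itertools import product

def selectivity(col_a, col_b):
    """Binary join selectivity from the relative frequencies (Eq. (1))."""
    ca, cb = Counter(col_a), Counter(col_b)
    return sum((ca[v] / len(col_a)) * (cb[v] / len(col_b))
               for v in ca.keys() & cb.keys())

# Made-up relations R1(A0, A1), R2(A1, A2), R3(A2, A3); R2 is the full
# cross product of its attribute domains (independence condition).
R1 = [(10, 0), (11, 0), (12, 1)]
R2 = [(a1, a2) for a1 in (0, 1) for a2 in (0, 1, 2)]
R3 = [(0, 20), (1, 21), (1, 22), (2, 23)]

mu12 = selectivity([t[1] for t in R1], [t[0] for t in R2])  # 1/2
mu23 = selectivity([t[1] for t in R2], [t[0] for t in R3])  # 1/3

chain = [(t1, t2, t3) for t1, t2, t3 in product(R1, R2, R3)
         if t1[1] == t2[0] and t2[1] == t3[0]]

predicted = mu12 * mu23 * len(R1) * len(R2) * len(R3)
print(len(chain), predicted)  # 12 joined triples, as predicted
```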
Homogeneous Input Relations {#HomogeneousInputRelations}
---------------------------
Firstly, we examine the natural join of two homogeneous relations $R_1(A,B) \Join R_2(B,C)$ in the context of the join product skew effect. In the case of homogeneous relations, the distribution of the join attribute values $b_i$ is the same for both input relations $R_1$ and $R_2$. That is, there exists a distribution $f$ such that $f_1(b)=f_2(b)=f(b)$ for any $b \in D$. In this setting, the distribution $f$ is skewed when there are join attribute values $b_i,b_j \in D$ such that $f(b_i) \gg f(b_j)$.
The join attribute values with the same relative frequency $f_k$ define the *frequency class* $C_k=\{b \in D \; | \; f(b) = f_k\}$. Thus, the domain $D$ of the join attribute $B$ is partitioned into disjoint classes of different frequencies. This partition can be represented by a two-level tree, called a *frequency tree*. The nodes of the first level correspond to the classes of different frequencies; the $k^{th}$ node of the first level is labeled with $C_k$. The descendant leaves of the node labeled $C_k$ correspond to the join attribute values belonging to class $C_k$; each leaf is labeled with one of the join attribute values of the class corresponding to its parent node. The following picture depicts the structure of a simple frequency tree for the join operation $R_1 \Join R_2$, assuming that $D = \{b_1, \ldots, b_6\}$ is partitioned into four frequency classes $C_1, \ldots, C_4$.
The number of produced joined tuples for a given class $C_k$ is equal to $|C_k|f_{k}^{2}|R_1||R_2|$, since $f_{k}|R_1|$ tuples of relation $R_1$ match $f_{k}|R_2|$ tuples of relation $R_2$ on any join attribute value $b \in C_k$. Let $N$ be the number of the database processors participating in the computation of the join operation. Since only the join product skew effect is considered, the workload associated with each processor is determined by the size of the partial result set that is computed locally. In order for the workload of the join operation to be evenly apportioned among the $N$ database processors, each processor should produce approximately $\bigl(\frac{\sum_{k=1}^{K}|C_k|f_{k}^{2}}{N} \bigr)|R_1||R_2|$ joined tuples, where $K$ denotes the number of frequency classes. In terms of the frequency classes, this amounts to assigning either entire frequency classes or subsets of them to each database processor so as to achieve a nearly even distribution of the workload. This assignment can be represented by the selection of internal nodes and leaves in the frequency tree. By construction, the selection of an internal node in the frequency tree amounts to the exclusive assignment of the corresponding frequency class to some database processor; this database processor will then join the tuples from the relations $R_1$ and $R_2$ whose join attribute value belongs to the selected class. Finally, to guarantee the integrity of the final result set, the sequence of selections must span all the leaves of the frequency tree.
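The even apportioning described above can be sketched as a greedy walk over the leaves of the frequency tree (the greedy rule, the function name, and the concrete class sizes below are our own illustration; the paper prescribes only the selection of nodes and leaves):

```python
def balanced_assignment(classes, n1, n2, N):
    """classes: list of (num_values, f) pairs -- |C_k| values, each with
    relative frequency f in both homogeneous relations R1 and R2.
    Greedily assigns whole leaves of the frequency tree to N processors."""
    total = sum(c * f * f for c, f in classes) * n1 * n2
    target = total / N                 # ideal per-processor workload
    loads = [0.0] * N
    p = 0
    for c, f in classes:
        per_value = f * f * n1 * n2    # joined tuples for one value of C_k
        for _ in range(c):             # one leaf per join attribute value
            # move to the next processor once the target load is reached
            if loads[p] >= target and p < N - 1:
                p += 1
            loads[p] += per_value
    return loads

loads = balanced_assignment([(2, 0.2), (4, 0.1), (20, 0.01)], 1000, 1000, 4)
print([round(x) for x in loads])   # → [40000, 40000, 40000, 2000]
```

A single value whose product $f_k^2|R_1||R_2|$ already exceeds the target cannot be split by this scheme; handling such values is exactly what the HJPS algorithm of the next section addresses.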
Heterogeneous Input Relations
-----------------------------
We extend the previous analysis to the case of heterogeneous input relations. The join attribute values are distributed in the input relations $R_1(A,B)$ and $R_2(B,C)$ according to the data distributions $f_1$ and $f_2$, respectively. In general, the relative frequencies of a join attribute value $b \in D$ differ between the relations $R_1$ and $R_2$, i.e., $f_1(b) \neq f_2(b)$ for any $b \in D$. The above are depicted in table \[generalbinaryjoin\].
The number of joined tuples corresponding to the join attribute value $b \in D$ is proportional to the product $f_1(b) f_2(b)$. Thus, join product skew occurs when $f_1(b_i) f_2(b_i) \gg f_1(b_j) f_2(b_j)$ for some $b_i, b_j \in D$. This means that the workload of the join process for the database processor to which the tuples with join attribute value equal to $b_i$ have been shipped at the redistribution phase will be disproportionate to the respective workload of another database processor. Similarly to section \[HomogeneousInputRelations\], the classes $C_k=\{b \in D \; | \; f_1(b) f_2(b) = f_k\}$ disjointly partition the join attribute values.
Alternatively, classes of ranges of frequencies can be defined according to the scheme $C_k=\{b \in D \; | \; f_{k-1} \leq f_1(b)f_2(b) < f_k\}$ (range partitioning at the frequency level).
The “primary-key-to-foreign-key” join constitutes a special case of heterogeneity in which, in one of the two relations, say $R_1$, two different tuples always have different values of the attribute $B$. This attribute is called the primary key, and each of its values $b \in D$ uniquely identifies a tuple in relation $R_1$. As to relation $R_2$, attribute $B$, called the foreign key, matches the primary key of the referenced relation $R_1$. In this setting, which is very common in practice, we have that $f_1(b_i) = \frac{1}{m}$ for any $b_i \in D$, and in general $f_2(b_i) \neq \frac{1}{m}$ with $f_2(b_i)>0$. Join product skew occurs when $f_2(b_i) \gg f_2(b_j)$ for some $b_i,b_j \in D$, since $f_1(b_i)=f_1(b_j)$. Thus, the separation of the join attribute values into disjoint frequency classes can be defined with respect to the data distribution $f_2$, i.e., $C_k=\{b \in D \; | \; f_2(b) = f_k\}$.
Algorithm HJPS {#AlgorithmSection}
==============
In this section, we propose an algorithm, called HJPS, that alleviates the join product skew effect. The algorithm deals with the case of the binary join operation $R(A,B) \Join S(B,C)$ in which the join predicate is $R.B = S.B$.
Let $D = \{b_1, b_2, ..., b_m\}$ be the domain of values associated with the join attribute $B$. We denote by $|R_{b_{i}}|$ (respectively $|S_{b_{i}}|$) the number of tuples of the relation $R$ (respectively $S$) with join attribute value equal to $b_i$, where $b_i \in D$. The algorithm assumes that the quantities $|R_{b_{i}}|$ and $|S_{b_{i}}|$ for every $b_i \in D$ are known in advance from either previously collected or sampled statistics. We also denote by $n$ the number of the database processors. In our setting, all the database processors are assumed to have identical configurations.
As mentioned earlier, the number of computations needed for the evaluation of the join operation, which determines the total processing cost ($TPC$), is given by the sum of the products of the numbers of tuples in both relations that have the same join attribute value. This means that $TPC$ is expressed by the equation $$TPC = \sum_{b_i \in D }|R_{b_{i}}|*|S_{b_{i}}|$$ In the context of the parallel execution of the join operator, the ideal workload assigned to each processor, denoted by $pwl$, is defined as the approximate number of joined tuples that it should produce in order not to experience the join product skew effect. Obviously, it holds that $pwl = TPC/n$.
HJPS determines whether or not a join attribute value $b_i \in D$ is skewed by the number of the processors dedicated to the production of the joined tuples corresponding to this value. To be more specific, the quotient of the division of the number of joined tuples associated with the join attribute value $b_i$ (which is equal to $|R_{b_{i}}|*|S_{b_{i}}|$) by $pwl$ gives the number of the processors needed to handle this attribute value. In the case that the result of the division, denoted by $vwl_{b_i}$, exceeds the value of two, the algorithm considers the join attribute value as skewed. The latter is inserted into a set of values, denoted by $SK$.
Let $SK=\{b_{a_{1}}, b_{a_{2}}, b_{a_{3}}, ..., b_{a_{l}}\}$ be the set of the skewed values. The algorithm iterates over the set $SK$. In particular, for the value $b_{a_{1}}$, suppose that the number of the needed processors is equal to $vwl_{b_{a_{1}}}$. The algorithm takes a decision based on the numbers of tuples with join attribute value $b_{a_{1}}$ in relations $R$ and $S$. If $|R_{b_{a_{1}}}|>|S_{b_{a_{1}}}|$, the tuples of the relation $R$ are redistributed among the first $vwl_{b_{a_{1}}}$ processors, while all the tuples from the second relation are duplicated to all of the $vwl_{b_{a_{1}}}$ processors. In order to decide which of the $vwl_{b_{a_{1}}}$ processors is going to receive a tuple of the relation $R$ with join attribute value $b_{a_{1}}$, the algorithm applies a hash function to a set of attributes. On the contrary, if it holds that $|R_{b_{a_{1}}}|<|S_{b_{a_{1}}}|$, all the tuples from the relation $R$ with join attribute value equal to $b_{a_{1}}$ are duplicated to all of the $vwl_{b_{a_{1}}}$ processors, while the tuples of the relation $S$ are redistributed among the $vwl_{b_{a_{1}}}$ processors according to a hash function. The same procedure takes place for the remaining skewed values. The remaining tuples are redistributed to the remaining processors according to a hash function on the join attribute. Pseudocode of the algorithm is given below.
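The planning phase of HJPS can be sketched as follows (the function name, the rounding rule used for $vwl_{b_i}$, and the example counts are our own illustration; the paper prescribes only the skew test and the partition/replicate decision):

```python
def hjps_plan(R_counts, S_counts, n):
    """Sketch of the HJPS redistribution plan described above.
    R_counts / S_counts: dict mapping a join value b to |R_b| / |S_b|.
    Returns (plan for skewed values, processors left for the rest)."""
    D = set(R_counts) | set(S_counts)
    TPC = sum(R_counts.get(b, 0) * S_counts.get(b, 0) for b in D)
    pwl = TPC / n                        # ideal per-processor workload
    plan, used = {}, 0
    for b in sorted(D):
        vwl = R_counts.get(b, 0) * S_counts.get(b, 0) / pwl
        if vwl >= 2:                     # b is treated as skewed
            k = int(round(vwl))          # processors dedicated to b
            # hash-partition the larger side, replicate the smaller one
            part = 'R' if R_counts.get(b, 0) > S_counts.get(b, 0) else 'S'
            plan[b] = {'processors': k, 'partitioned': part}
            used += k
    return plan, n - used   # the rest handle the non-skewed values

plan, rest = hjps_plan({'a': 400, 'b': 300, 'c': 300},
                       {'a': 500, 'b': 50, 'c': 50}, 8)
print(plan, rest)
```

Here the value `'a'` produces $400 \cdot 500 = 200{,}000$ of the $230{,}000$ joined tuples, so it is assigned seven dedicated processors, with relation $S$ (the larger side on `'a'`) hash-partitioned among them and $R$'s matching tuples replicated.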
Conclusion and Future Work {#ConclusionSection}
==========================
We address the problem of join product skew in the context of PDBMSs. In our analysis, the a priori knowledge of the distribution of the join attribute values has been taken for granted. We concentrated on the case of partitioned parallelism, according to which the join operator to be parallelized is split into many independent operators, each working on a part of the data. We introduced the notion of frequency classes and examined its application in the general cases of homogeneous and heterogeneous input relations. Furthermore, a heuristic algorithm called HJPS is proposed to handle join product skew. The proposed algorithm identifies the skewed elements and assigns a specific number of processors to each of them. Given a skewed join attribute value, the number of dedicated processors is determined by the processing cost of computing the join for this attribute value and by the workload that a processor can afford.
We are looking at generalizing our analysis with frequency classes to multiway joins. In this direction, we have proven the lemma of section \[ExamplesSection\], which concerns the chain join of $k$ relations. Furthermore, other types of multiway join operations, e.g., star join, cyclic join, are going to be studied from the perspective of the data skew effect and in the context of frequency classes. Finally, in future work we will examine the case of multiway joins supposing that no statistical information about the distribution of the join attribute values is given in advance.
---
abstract: |
A long-standing problem in out-of-equilibrium thermal field theories is that of pinching singularities. We find that the expressions suspected of pinching require the loop particles to be on the mass shell. This fact, with the help of a threshold effect and a similar effect due to spin, leads to the elimination of pinching, in the single self-energy insertion approximation, in all propagators appearing in QED and QCD under very mild restrictions on the particle densities.
This, together with the cancellation of collinear singularities, allows the extraction of useful physical information contained in the imaginary parts of the two-loop diagrams.
In some cases of interest ($\pi-\rho $ interaction, electro-weak interaction, decay of Higgs particle, ...) none of the mentioned mechanisms works and one has to resort to the resummed Schwinger-Dyson series. These cases are more sensitive to the limitations related to the finite time range.
---
[**Cancellation of pinching singularities in out-of-equilibrium thermal field theory**]{}\
$^1$ Ruder Bošković Institute, Zagreb, Croatia\
$^2$ Fakultät für Physik, Universität Bielefeld , Germany\
Introduction
=============
Out of equilibrium thermal field theories have recently attracted much interest. From the experimental point of view, various aspects of heavy-ion collisions and the related hot QCD plasma are of considerable interest, in particular the supposedly gluon-dominated stage.
Contrary to the equilibrium case$^{(\cite{landsman,mleb})}$, where pinch, collinear, and infrared problems have been successfully controlled, the out-of-equilibrium theory$^{(\cite{schwinger,keldysh,rammer})}$ has suffered from them to this day. However, progress has been made in this field, too.
Weldon$^{\cite{weldon11}}$ has observed that the out of equilibrium pinch singularity does not cancel; hence it spoils analyticity and causality. The problem gets worse with more than one self-energy insertions.
Bedaque has argued that in an out-of-equilibrium theory the time extension should be finite. Thus, the time integration limits from $-\infty $ to $+\infty $, which are responsible for the appearance of pinches, have to be abandoned as unphysical$^{\cite{bedaque}}$. A similar argument, referring to Fermi’s “golden rule”, is given by Greiner and Leupold$^{\cite{gl}}$.
Le Bellac and Mabilat$^{\cite{lebellac}}$ have shown that pinching singularity gives a contribution of order $g^2\delta n$, where $\delta n$ is a deviation from equilibrium. They have also found that collinear singularities cancel in scalar theory, and in QCD using physical gauges$^{\cite{rl}}$, but not in the case of covariant gauges. Niégawa$^{\cite{niegawacom}}$ has found that the pinch-like term contains a divergent part that cancels collinear singularities in the covariant gauge.
Altherr and Seibert have found that in massive $g^2\phi^3$ theory pinch singularity does not occur owing to the kinematical constraint$^{\cite{as}}$.
Altherr has suggested a regularization method in which the propagator is modified by the width $\gamma $ which is an arbitrary function of momentum to be calculated in a self-consistent way. In $g^2\phi^4$ theory, for small deviations from equilibrium, $\gamma$ was found to be just the usual equilibrium damping rate$^{\cite{altherr}}$.
This recipe has been justified in the resummed Schwinger-Dyson series in various problems with pinching$^{\cite{bdr,bdrs,bdrk,carrington,niegawa}}$.
Baier, Dirks, and Redlich$^{\cite{bdr}}$ have calculated the $\pi-\rho $ self-energy contribution to the pion propagator, regulating pinch contributions by the damping rate. In subsequent papers with Schiff$^{\cite{bdrs,bdrk}}$ they have calculated the quark propagator within the HTL approximation$^{\cite{p,ebp,ft}}$; in the resummed Schwinger-Dyson series, the pinch is naturally regulated by $Im\Sigma_R$.
Carrington, Defu, and Thoma$^{\cite{carrington}}$ have found that no pinch singularities appear in the HTL approximation to the resummed photon propagator .
Niégawa$^{\cite{niegawa}}$ has introduced the notion of renormalized particle-number density. He has found that, in the appropriately redefined calculation scheme, the amplitudes and reaction rates are free from pinch singularities.
By pinching singularity we understand the contour passing between two infinitely close poles: $$\label{pinch}
\int {dx\over (x+i\epsilon)(x-i\epsilon)}.$$ When $\epsilon $ tends to zero, the integration path is “pinched” between the two poles, and the expression is ill-defined. Integration gives an $\epsilon^{-1} $ contribution plus regular terms. Decomposition of $(x\pm i\epsilon)^{-1}$ into $PP(1/x)\mp i\pi\delta (x)$, gives the related ill-defined $\delta^2$ expression.
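The $\epsilon^{-1}$ behavior is easy to exhibit numerically (a small stand-alone illustration of ours, not part of the original derivation): on a finite interval $[-a,a]$ the integral equals $(2/\epsilon)\arctan(a/\epsilon)$, which approaches $\pi/\epsilon$ as $\epsilon\to 0$.

```python
import math

def pinch_integral(eps, a=1.0):
    # ∫_{-a}^{a} dx / ((x+iε)(x-iε)) = ∫_{-a}^{a} dx / (x² + ε²)
    #                                = (2/ε) · arctan(a/ε)  →  π/ε
    return (2.0 / eps) * math.atan(a / eps)

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, pinch_integral(eps), math.pi / eps)
```

Each tenfold decrease of $\epsilon$ multiplies the result by roughly ten, which is the divergence discussed in the text.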
The following expression, which is similar to (\[pinch\]), corresponds to the resummed Schwinger-Dyson series: $$\label{pinchlikesd}
\int dx{\omega(x)\over (x-\Sigma_R(x)+i\epsilon)
(x-\Sigma^*_R(x)-i\epsilon)},$$ where $\omega(x)$ and $\bar \omega(x)$ (which appears in (\[pinchlike1\])) are, respectively, proportional to $\Omega(x)$ and $\bar \Omega(x)$ where $\Omega(x)$, $\Sigma_R(x)$, and $\bar \Omega(x)$ are the components of the self-energy matrix.
In expression (\[pinchlikesd\]), pinching is absent$^{\cite{bdr,bdrs,bdrk,carrington,niegawa}}$ if $Im\Sigma_R(x_o)\neq0$ at a value of $x_o$ satisfying $x_o-Re\Sigma_R(x_o)=0$.
The expression corresponding to the single self-energy insertion approximation to the propagator is similar to (\[pinchlikesd\]): $$\label{pinchlike1}
\int dx{\bar \omega(x)\over (x+i\epsilon)(x-i\epsilon)}.$$ One can rewrite the integral as $$\label{rpinchlike1}
\int {dx\over 2}\biggl({1\over x+i\epsilon}
+{1\over x-i\epsilon}\biggr)
{\bar \omega(x)\over x}.$$ If it happens that $$\label{rpinchlike2}
\lim_{x \rightarrow 0} {\bar \omega(x)\over x}=K<\infty ,$$ then the integral (\[rpinchlike1\]) decomposes into two pieces that, although possibly divergent, do not suffer from pinching.
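Numerically this is easy to see (an illustration of ours, with two hypothetical weight functions): if $\bar\omega(0)\neq 0$ the integral grows like $\pi\bar\omega(0)/\epsilon$, while a weight vanishing linearly at $x=0$ gives an $\epsilon$-independent limit.

```python
import math

def integral(omega_bar, eps, a=5.0, n=200001):
    # midpoint rule for ∫_{-a}^{a} ω̄(x) dx / (x² + ε²)
    h = 2 * a / n
    return h * sum(omega_bar(-a + (i + 0.5) * h) /
                   ((-a + (i + 0.5) * h) ** 2 + eps ** 2) for i in range(n))

def smooth(x):  # ω̄(x)/x finite at x = 0 → no pinch
    return x * math.exp(-(x - 1) ** 2)

def bad(x):     # ω̄(0) ≠ 0 → pinch, integral grows like π ω̄(0)/ε
    return math.exp(-(x - 1) ** 2)

results = {eps: (integral(smooth, eps), integral(bad, eps))
           for eps in (1e-1, 1e-2, 1e-3)}
for eps, (s, b) in results.items():
    print(eps, s, b)
```

The first column of results converges as $\epsilon\to 0$; the second scales as $1/\epsilon$.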
There are two cases in which the function $\bar \omega(x)$ is even identically zero in the vicinity of the $x=0$ point: in thermal equilibrium, because of detailed balance relations; in massive $g^2 \phi^3$ theory out of equilibrium, owing to the mass shell condition$^{\cite{as}}$. The latter mechanism also works in out of equilibrium QED if a small photon mass $m_{\gamma} $ is introduced. However, this elimination of pinching can be misleading: the domain of $x$, where $\bar \omega(x)=0 $, shrinks to a point as $m_{\gamma} \rightarrow 0$. We shall show that the elimination of pinching also occurs in the $m_{\gamma}=0$ case.
In this paper$^{\cite{id}}$ we identify two mechanisms leading to relation (\[rpinchlike2\]). They are based on the observation that in the pinch-like contribution loop particles have to be on mass shell.
The first mechanism is effective in out of equilibrium QED: in the pinch-like contribution to the electron propagator, phase space vanishes linearly as $x \rightarrow 0$. In the pinch-like contribution to the photon propagator, the domain of integration is shifted to infinity as $x \rightarrow 0$. For distributions disappearing rapidly enough at large energies, the contribution again vanishes linearly in the $x \rightarrow 0$ limit. This mechanism is also valid in QCD in the cases with massive quarks.
In out of equilibrium massless QCD, phase space does not vanish, but there is an alternative mechanism: the spinor/tensor structure in all cases leads to relation (\[rpinchlike2\]).
In a few cases, none of the mentioned mechanisms works and one has to sum the Schwinger-Dyson series. This is the case of the $\pi-\rho $ loop in the $\pi $ self-energy. Even in the limit of zero pion mass, $\bar \omega(x)$ vanishes only as $|x|^{1/2}$ and relation (\[rpinchlike2\]) is not fulfilled. A similar problem appears in electroweak interactions involving decays of $Z$ and $W$ bosons, decay of Higgs particles, etc. Another important case is massless $g^2\phi^3$ theory. In contrast to massless QCD, massless $g^2\phi^3$ theory contains no spin factor to provide a $q^2$ factor necessary to obtain (\[rpinchlike2\]).
The densities are restricted only mildly: they should be cut off at high energies, at least as $|k_o|^{-3-\delta}$, in order to obtain a finite total particle density; for nonzero $k_o$, they should be finite; for $k_o$ near zero, they should not diverge more rapidly than $|k_o|^{-1}$, the electron (positron) distribution should have a finite derivative.
Furthermore, we were unable to eliminate pinches related to the double, triple, etc., self-energy insertion contributions to the propagator.
The resummed Schwinger-Dyson series must be free from pinching as “for any system which moves towards thermal equilibrium and thus behaves dissipatively, the full propagator must have some finite width”$^{\cite{gl}}$.
Propagators and the Schwinger-Dyson equation
=============================================
We start$^{\cite{cshy,niemi}}$ by defining out of equilibrium thermal propagators for bosons, in the case when we can ignore the variations of slow variables in Wigner functions$^{\cite{lebellac,bio}}$: $$\begin{aligned}
\label{D11}
&&D_{11}(k)=D^*_{22}(k)\cr
\nonumber\\
&&={i \over k^2-m^2+2i\epsilon|k_o|}+
2\pi \sinh^2\theta\delta(k^2-m^2),\end{aligned}$$ $$\begin{aligned}
\label{D12}
&&D_{12}(k)=2\pi \delta(k^2-m^2)\cr
\nonumber\\
&&
(\cosh^2 \theta \Theta(k_o)+\sinh^2 \theta \Theta(-k_o)),\end{aligned}$$ $$\begin{aligned}
\label{D21}
&&D_{21}(k)=2\pi \delta(k^2-m^2)\cr
\nonumber\\
&&
(\cosh^2 \theta \Theta(-k_o)+\sinh^2 \theta \Theta(k_o)).\end{aligned}$$ The propagator satisfies the important condition $$\label{sumD}
0=D_{11}-D_{12}-D_{21}+D_{22}.$$ To obtain the corresponding relations for fermions, we only need to make the substitution $$\label{b-f}
\sinh^2\theta(k_o) \rightarrow -\sin^2\bar\theta(k_o).$$ In the case of equilibrium, we have $$\label{eqB}
\sinh^2\theta(k_o)=n_B(k_o)={1 \over \exp(\beta |k_o|)-1},$$ and similarly for fermions. Out of equilibrium, $n_B(k_o)$ and $n_F(k_o)$ will be some given functions of $k_o$.
We transform to the Keldysh components $$\label{DR}
D_R(k)=-D_{11}+D_{21}={-i \over k^2-m^2+2i\epsilon k_o},$$ $$\label{DA}
D_A(k)=-D_R^*(k)=D_R(-k),$$ $$\begin{aligned}
\label{barD}
&&D_K(k)=D_{11}+D_{22}=h(k_o)(D_R-D_A)\cr
\nonumber\\
&&=2\pi \delta(k^2-m^2)(1+2\sinh^2\theta),\cr
\nonumber\\
&&
h(k_o)=-\epsilon(k_o)(1+2\sinh^2\theta).\end{aligned}$$ Again for fermions $$\begin{aligned}
\label{barDF}
D_K(k)=2\pi \delta(k^2-m^2)(1-2\sin^2\bar\theta).\end{aligned}$$ The proper self-energy satisfies the condition $$\label{sumSigma}
0=\Sigma_{11}+\Sigma_{12}+\Sigma_{21}+\Sigma_{22}.$$ It is also transformed into the Keldysh form: $$\label{SigmaR}
\Sigma_R=-(\Sigma_{11}+\Sigma_{21}),~\Sigma_A=\Sigma_R^*,$$ $$\label{Omega}
\Omega=\Sigma_{11}+\Sigma_{22}.$$ The “cutting rules” (refs.[@weldoncr; @ksem], see also ref.[@gelis] for application of the rules out of equilibrium) will convince us that only on-shell loop-particle momenta contribute to $Im\Sigma_R$ and $\Omega $.
The Schwinger-Dyson equation $$\label{Schwinger-Dyson}
{\cal G}=G+iG \Sigma {\cal G},$$ can be written in terms of Keldysh components as $$\label{KeldishR}
{\cal G}_R=G_R+iG_R \Sigma_R {\cal G}_R,$$ $$\begin{aligned}
\label{Keldysh}
&&{\cal G}_K=G_K\cr
\nonumber\\
&&+i\left(G_A\Omega {\cal G}_R+G_K\Sigma_R{\cal G}_R +
G_A\Sigma_A{\cal G}_K\right).\end{aligned}$$ By expanding (\[KeldishR\]), we deduce the contribution from the single self-energy insertion to be of the form $$\label{psol2GRA}
{\cal G}_R\approx G_R+iG_R\Sigma_RG_R,$$ which is evidently well defined, and the Keldysh component suspected for pinching: $$\begin{aligned}
\label{psol2barG}
&&{\cal G}_K\approx G_K\cr
\nonumber\\
&&+iG_A\Omega G_R+iG_K\Sigma_RG_R
+iG_A\Sigma_AG_K.\end{aligned}$$ The equation for ${\cal G}_R$ is simple and the solution is straightforward: $$\label{sol1GR}
{\cal G}_R={1 \over G_R^{-1}-i\Sigma_R}=-{\cal G}_A^*.$$ To calculate ${\cal G}_K$, we can use the solution (\[sol1GR\]): $$\label{sol1barG}
{\cal G}_K={\cal G}_A\left(h(q_o)(G_A^{-1}-G_R^{-1})
+i\Omega\right){\cal G}_R.$$ The first term in (\[sol1barG\]) is not always zero, but it does not contain pinching singularities! The second term in (\[sol1barG\]) is potentially ill-defined (or pinch-like). The pinch-like contribution appears only in this equation; thus it is the key to the whole problem of pinch singularities. In the one-loop approximation, it requires loop particles to be on mass shell.
We start with (\[psol2barG\]). After substituting (\[barD\]) into (\[psol2barG\]), we obtain the regular term plus the pinch-like contribution: $$\label{pert0}
{\cal G}_K\approx {\cal G}_{Kr}+{\cal G}_{Kp},$$ $$\begin{aligned}
\label{pinch0r}
&&{\cal G}_{Kr}=h(q_o)\cr
\nonumber\\
&&\left(G_R-G_A+iG_R\Sigma_RG_R
-iG_A\Sigma_AG_A\right),\end{aligned}$$ $$\label{pinch0p}
{\cal G}_{Kp}= iG_A\bar \Omega G_R,~~~
\bar \Omega=\Omega -h(q_o)(\Sigma_R-\Sigma_A).$$ For equilibrium densities, we have $\Sigma_{21}=e^{-\beta q_o}\Sigma_{12}$, and expression (\[pinch0p\]) vanishes identically.
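For completeness, this vanishing can be checked directly (a short derivation of ours for the bosonic case with $q_o>0$, using $\Sigma_A=-(\Sigma_{11}+\Sigma_{12})$, which is consistent with (\[sumSigma\]) and $\Sigma_A=\Sigma_R^*$, together with the KMS relation $\Sigma_{21}=e^{-\beta q_o}\Sigma_{12}$ and $h(q_o)=-\coth(\beta q_o/2)$ from (\[barD\])):

```latex
% From (sumSigma): \Omega = \Sigma_{11}+\Sigma_{22} = -(\Sigma_{12}+\Sigma_{21}),
% and \Sigma_R-\Sigma_A = -(\Sigma_{11}+\Sigma_{21})+(\Sigma_{11}+\Sigma_{12})
%                       = \Sigma_{12}-\Sigma_{21}.  Hence
\bar\Omega
 = -(\Sigma_{12}+\Sigma_{21}) - h(q_o)\,(\Sigma_{12}-\Sigma_{21})
 = \Sigma_{12}\Bigl[-(1+e^{-\beta q_o})
   + \coth\tfrac{\beta q_o}{2}\,\bigl(1-e^{-\beta q_o}\bigr)\Bigr] = 0,
```

since $\coth(\beta q_o/2)=(1+e^{-\beta q_o})/(1-e^{-\beta q_o})$.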
Expression (\[pinch0p\]) is the only one suspected of pinch singularities at the single self-energy insertion level. The function $\bar\Omega $ in (\[pinch0p\]) belongs to the type of functions characterized by the fact that both loop particles have to be on mass shell. It is analyzed in detail in Secs. III and IV (for threshold effect) and in Sec. V (for spin effect). With the help of this analysis we show that relation (\[pinch0p\]) transforms into $$\begin{aligned}
\label{pinch0pd}
&&{\cal G}_{Kp}= -i{K(q^2,q_o)\over 2}
\cr
\nonumber\\
&&\left({1\over q^2-m^2+2i\epsilon q_o}
+{1\over q^2-m^2-2i\epsilon q_o}\right),\end{aligned}$$ where $K(q^2,q_o)$ is $\bar\Omega/(q^2-m^2)$ multiplied by spinor/tensor factors included in the definition of $G_{R,A}$. The finiteness of the limit $$\label{pinch0pdk}
\lim_{q^2\rightarrow m^2\mp 0}K(q^2,q_o) = K_{\mp}(q_o) < \infty,$$ is important for cancellation of pinches. The index $\mp $ indicates that the limiting value $m^2$ is approached from either below or above, and these two values are generally different. To isolate the potentially divergent terms, we express the function $K(q^2,q_o)$ in terms of functions that are symmetric ($K_{1}(q^2,q_o)$) and antisymmetric ($K_2(q^2,q_o)$) around the value $q^2=m^2$: $$\begin{aligned}
\label{Kby12}
&&K(q^2,q_o)
=K_1(q^2,q_o)\cr
\nonumber\\
&&+\epsilon(q^2-m^2)K_2(q^2,q_o).\end{aligned}$$ These functions are given by $$\begin{aligned}
\label{K12}
&&K_{1,2}(q^2,q_o)\cr
\nonumber\\
&&={1\over 2}\big(K(q^2,q_o)\pm K(2m^2-q^2,q_o)\big).\end{aligned}$$ Locally (around the value $q^2=m^2$), these functions are related to the limits $K_{\pm}(q_o)$ by $$\label{K12pm}
K_{1,2}(q^2,q_o)={1\over 2}\big(K_{+}(q_o)\pm K_{-}(q_o)\big).$$ As a consequence, the right-hand side of expression (\[pinch0pd\]) behaves locally as $$\begin{aligned}
\label{pinchs}
&&{\cal G}_{Kp}(q^2,q_o)
\cr
\nonumber\\
&&\approx-{i\over 2}\left(K_1(q_o)+
\epsilon(q^2-m^2)K_2(q_o)\right)
\cr
\nonumber\\
&&\left({1\over q^2-m^2+2i\epsilon q_o}
+{1\over q^2-m^2-2i\epsilon q_o}\right),\end{aligned}$$ and the term proportional to $K_2$ is capable of producing logarithmic singularity.
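This can be illustrated numerically (a demonstration of ours, using the fact that for real $x=q^2-m^2$ and $\delta=2\epsilon q_o$ the bracketed sum equals $2x/(x^2+\delta^2)$): a constant symmetric part $K_1$ gives a vanishing contribution on a symmetric interval, while the antisymmetric $\epsilon(q^2-m^2)K_2$ part grows logarithmically as $\delta\to 0$.

```python
import math

def contribution(K1, K2, delta, a=1.0, n=200001):
    # midpoint rule for ∫_{-a}^{a} [K1 + sgn(x) K2] · 2x/(x² + δ²) dx
    h = 2 * a / n
    s = 0.0
    for i in range(n):
        x = -a + (i + 0.5) * h
        s += (K1 + math.copysign(K2, x)) * 2 * x / (x * x + delta * delta)
    return s * h

for d in (1e-2, 1e-3, 1e-4):
    # analytic value of the K2 piece: 2·ln((a² + δ²)/δ²) → -4 ln δ
    print(d, contribution(1.0, 0.0, d), contribution(0.0, 1.0, d),
          2 * math.log((1 + d * d) / (d * d)))
```

The $K_1$ column stays at zero (an odd integrand on a symmetric interval), while the $K_2$ column tracks $-4\ln\delta$ — the logarithmic singularity mentioned above.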
Threshold factor
================
In this section we analyze the phase space of the loop integral with both loop particles on mass shell. Special care is devoted to the behavior of this integral near thresholds. The expressions are written for the case where all particles are bosons, and spins are not specified; the change to fermions is elementary.
Now, starting from (\[SigmaR\]) to (\[Omega\]), we calculate $\Omega $ and $Im\Sigma_R$: $$\begin{aligned}
\label{Omegai2}
\Omega=2iIm\Sigma_{11}
=2{ig^2 \over 2}\int d\mu N_{\Omega }(k_o,k_o-q_o)F,\end{aligned}$$ where $$\begin{aligned}
\label{dmu}
d\mu= {d^4k \over (2\pi)^4} 4\pi^2
\delta(k^2-m_D^2)\delta((k-q)^2-m_S^2),\end{aligned}$$ and $$\begin{aligned}
\label{nomega}
&&N_{\Omega}(k_o,k_o-q_o)=
-.5\epsilon(k_o(k_o-q_o))\cr
\nonumber\\
& &+(.5+\sinh^2\theta_D(k_o))(.5+\sinh^2\theta_S(k_o-q_o)),\end{aligned}$$ $$\begin{aligned}
\label{SigmaRi2}
&&Im\Sigma_R={g^2 \over 2}\int d\mu
N_R(k_o,k_o-q_o)F,\end{aligned}$$ and $$\begin{aligned}
\label{nr}
& &N_R(k_o,k_o-q_o)\cr
\nonumber\\
& &=\sinh^2\theta_D(k_o)\epsilon(k_o-q_o)
\cr
\nonumber\\
&&+\sinh^2\theta_S(k_o-q_o)\epsilon(-k_o)\cr
\nonumber\\
& &+\Theta(-k_o)\Theta(k_o-q_o)-\Theta(k_o)\Theta(q_o-k_o).\end{aligned}$$ $F$ is the factor dependent on spin and internal degrees of freedom.
It is useful to define $N_{\bar \Omega}(k_o,k_o-q_o)$ as $$\label{nbaromega}
N_{\bar \Omega}=
N_{\Omega}-h(q_o)N_R.$$ After integrating over $\delta$’s, one obtains $$\begin{aligned}
\label{dmu1}
d\mu={1 \over 4|\vec q|}
{|k_o|dk_o \over |\vec k|}d\phi\Theta(1-z_o^2),\end{aligned}$$ and expressions for $\Omega $ and $Im\Sigma_R$ take general form $$\begin{aligned}
\label{trint1}
{\cal I}=\int d\mu N(k_o,k_o-q_o)
F(q,k_o,|\vec k|,\vec q\vec k),\end{aligned}$$ where $|\vec k|=(k_o^2-m_D^2)^{1/2}$, $$\label{trint2}
\vec q\vec k=|\vec q||\vec k|z_o,$$ $$\label{trint3}
z_o={\vec q^2+\vec k^2-(\vec q-\vec k)^2 \over 2|\vec k||\vec q|}.$$ $\phi\epsilon (0,2\pi)$ is the angle between vector $\vec k_T$ and $x$ axes.
Let us start with the $q^2>0$ case. Solution of $\Theta(1-z_o^2)$ gives the integration limits $$\begin{aligned}
\label{ko12}
&&k_{o1,2}={1 \over 2q^2}q_o(q^2+m_D^2-m_S^2)\cr
\nonumber\\
&&\mp{1 \over 2q^2}
|\vec q|((q^2-q^2_{+tr})(q^2-q^2_{-tr}))^{1/2},\end{aligned}$$ $$\label{qmp}
q_{\pm tr}=|m_D \pm m_S|.$$ Assume now that $q_{tr}\neq 0$. In this case, at the threshold, the limits shrink to the value $$\begin{aligned}
\label{koko}
k_{o~tr}={q_o(q^2_{tr}+m_D^2-m_S^2) \over 2q^2_{tr}}.\end{aligned}$$ We define the coefficient $c_1$ by $$\begin{aligned}
\label{c1}
c_1={1 \over 4|\vec q|}\int d\phi N(k_{otr},k_{otr}-q_{o})F.\end{aligned}$$ Now the expression (\[trint1\]) can be approximated by $$\begin{aligned}
\label{2}
& &{\cal I}\approx c_1(|\vec k|_{2}-|\vec k|_{1})\cr
\nonumber\\
& &\approx c_1(\Theta(q^2-q_{+tr}^2)+ \Theta(-q^2+q_{-tr}^2))\cr
\nonumber\\
& &
{q_o((q^2-q^2_{+tr})(q^2-q^2_{-tr}))^{1/2} \over q^2}.\end{aligned}$$ Relation (\[2\]) is the key to further discussion of the threshold effect.
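These limits are easy to check numerically (a stand-alone verification of ours: the boundary condition $z_o^2=1$, with both loop momenta on shell, reproduces eq. (\[ko12\]), and the discriminant becomes negative between the two thresholds, closing the phase space):

```python
import math

def limits(qo, qv, mD, mS):
    """Integration limits k_{o1,2} from eq. (ko12); returns None between
    the thresholds q_{-tr}² < q² < q_{+tr}², where the discriminant < 0."""
    q2 = qo * qo - qv * qv
    disc = (q2 - (mD + mS) ** 2) * (q2 - (mD - mS) ** 2)
    if disc < 0:
        return None
    r = qv * math.sqrt(disc)
    base = qo * (q2 + mD * mD - mS * mS)
    return (base - r) / (2 * q2), (base + r) / (2 * q2)

def z0(ko, qo, qv, mD, mS):
    """cos of the angle between q and k with both particles on shell."""
    kv2 = ko * ko - mD * mD
    if kv2 <= 0:
        return None
    kq2 = (ko - qo) ** 2 - mS * mS      # |q - k|² from the mass shell
    if kq2 < 0:
        return None
    return (qv * qv + kv2 - kq2) / (2 * math.sqrt(kv2) * qv)

qo, qv, mD, mS = 3.0, 1.0, 0.8, 0.5     # q² = 8 > (mD+mS)²: allowed region
k1, k2 = limits(qo, qv, mD, mS)
inside = z0(0.5 * (k1 + k2), qo, qv, mD, mS)   # |z0| ≤ 1 inside [k1, k2]
outside = z0(k2 + 0.05, qo, qv, mD, mS)        # |z0| > 1 just outside
print(k1, k2, inside, outside)
# between the thresholds the phase space closes:
print(limits(2.0, 1.0, 1.0, 0.8))       # q² = 3, and 0.04 < 3 < 3.24
```

The boundary values $k_{o1,2}$ indeed satisfy $|z_o|=1$, confirming that the $\Theta(1-z_o^2)$ constraint is saturated exactly at the limits (\[ko12\]).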
We obtain this result also in higher dimensions (D=6, for example).
Owing to (\[qmp\]) and (\[2\]), the function ${\cal I}(q^2,m_D^2,m_S^2)$ has the following properties important for cancellation of pinches.
It vanishes between the thresholds, i.e., the domain $(m_D-m_S)^2<q^2<(m_D+m_S)^2$ is forbidden (${\cal I}=0$). If it happens that the bare mass $m^2$ belongs to this domain, the single self-energy insertion will be free of pinching. In this case, multiple (double, triple, etc.) self-energy insertions will also be free of pinching. Massive $g^2\phi^3$ theory$^{\cite{as}}$ is a good example of this case.
It is (in principle) different from zero in the allowed domain $q^2<(m_D-m_S)^2$ and $(m_D+m_S)^2<q^2$. In this case, one cannot get rid of pinching. This situation appears in the $\pi-\rho $ interaction$^{\cite{bdr}}$.
The behavior at the boundaries (i.e., in the allowed region near the threshold) depends on the masses $m_D$ and $m_S$ and there are a few possibilities.
If both masses are nonzero and different ($0\neq m_D\neq m_S\neq 0$), then there are two thresholds and ${\cal I}$ behaves as $(q^2-q_{\pm tr}^2)^{1/2}$ in the allowed region near the threshold $q_{\pm tr}^2$. For $m^2=q_{tr}^2$, the power $1/2$ is not large enough to suppress pinching.
If one of the masses is zero ($m_D\neq0, m_S=0$ or $m_D=0, m_S\neq 0$), then (\[2\]) shows that the thresholds are identical (i.e., the forbidden domain shrinks to zero) and one obtains the $(q^2-m_D^2)^1$ behavior near $m_D^2$. This case (for $m^2=m_D^2$) is promising. The elimination of pinching in the electron propagator, considered in Sec.IV, is one of the important examples.
If the masses are equal but different from zero ($m_D=m_S\neq 0$), then there are two thresholds with different behavior. The function ${\cal I}$ behaves as $(q^2-q_{+tr}^2)^{1/2}$ in the allowed region near the threshold $q_{+tr}^2=4m_D^2$, and this behavior cannot eliminate pinching in the supposed case $m^2=4m_D^2$.
However, at the other threshold, namely at $q^2_{-tr}=0$, the physical region is determined by $q^2<0$ and the above discussion does not apply. In fact, the integration limits (\[ko12\]) are valid, but the region between $k_{o~1}$ and $k_{o~2}$ is now excluded from integration. One has to integrate over the domain $(-\infty,k_{o~1})\bigcup (k_{o~2},+\infty)$. This leads to the limitation in the high-energy behavior of the density functions. An important example of such behavior, elimination of pinching in the photon propagator ($m_{\gamma}$), is discussed in Sec.IV.
If both masses vanish ($m_D=m_S=0$), the thresholds coincide, there is no forbidden region and no threshold behavior. The behavior depends on the spin of the particles involved. For scalars, the leading term in the expansion of ${\cal I}$ does not vanish. Pinching is not eliminated.
The case of vanishing masses ($m_D=m_S=0$) for particles with spin exhibits a peculiar behavior. In all studied examples (see Sec.V for details), ${\cal I}$ behaves as $q^2$ as $q^2\rightarrow 0$, which promises the elimination of pinching.
Pinch Singularities in QED
===========================
Pinch Singularities in the Electron Propagator
-----------------------------------------------
In this subsection we apply the results of the preceding section to cancel the pinching singularity appearing in a single self-energy insertion approximation to the electron propagator. To do so, we have to substitute $m_D=m$, $m_S=0$, $\sinh^2\Theta_D(k_o)\rightarrow -n_e(k_o)$, $\sinh^2\Theta_S(k_o-q_o) \rightarrow n_\gamma(k_o-q_o)$, and $h(k_o)=-\epsilon(k_o)(1-2n_e(k_o))$, where $n_e$ and $n_\gamma$ are the given non-equilibrium distributions of electrons and photons in relations (\[nomega\]), (\[nr\]), (\[nbaromega\]), and (\[barD\]). The thresholds are now identical ($q^2_{\pm tr}=m^2$), and the integration limits satisfy $$\label{ke12}
|\vec k|_{2}-|\vec k|_1={q_o \over q^2}(q^2-m^2).$$ At threshold the limits shrink to the value $k_{o~tr}=q_o,~|\vec k|_{tr}=|\vec q|$.
Then, with the help of (\[c1\]), we define $$\begin{aligned}
\label{KB}
& &\KB(q^2,q_o)={(\qB+m)\barOmegaB(\qB+m)\over (q^2-m^2)}\cr
\nonumber\\
& &\approx
{1 \over 16\pi^2|\vec q|(q^2-m^2)}\int d\phi N_{\bar \Omega}
(k_{otr},k_{otr}-q_o)
\cr
\nonumber\\
& &
(\qB+m)\FB(\qB+m)(|\vec k|_{2}-|\vec k|_{1}).\end{aligned}$$ For $q^2\neq 0$, we can decompose the vector $k$ as $$\begin{aligned}
\label{kl}
&&k={(k.q)\over q^2}q+{(k.\tilde q)\over \tilde q^2}\tilde q+k_T
\cr
\nonumber\\
& &=(q-{q_o\over |\vec q|}\tilde q){-m_\gamma^2+m^2+q^2\over 2q^2}+
{k_o\over |\vec q|}\tilde q+k_T,\end{aligned}$$ where, in the heat-bath frame we have $$\begin{aligned}
\label{tildeq}
&&q=(q_o,0,0,|\vec q|),~~
\tilde q=(|\vec q|, 0,0,q_o),
\cr
\nonumber\\
& &q\tilde q=0,~~\tilde q^2=-q^2.\end{aligned}$$ In calculating the term proportional to $(1-a)$, where $a$ is the gauge parameter, we have to use the trick $$\begin{aligned}
\label{mgamma}
&&((k-q)^2\pm i\epsilon)^{-2}
\cr
\nonumber\\
& &=\lim_{m_\gamma \rightarrow 0}\left[
{\partial \over \partial m_\gamma^2}
((k-q)^2-m_\gamma^2\pm i\epsilon)^{-1}\right].\end{aligned}$$ Finally, we obtain the “sandwiched” trace factor $\FB$, calculated with loop particles on mass shell: $$\begin{aligned}
\label{qfq}
& &(\qB+m)\FB(\qB+m)=2m(q^2+m^2+2m\qB)\cr
\nonumber\\
& &
+(q^2-m^2)\biggl(-{q^2-m^2\over q^2}\qB
\cr
\nonumber\\
& &+(-{q_o(q^2+m^2)\over q^2|\vec q|}+2{k_o\over |\vec q|})
\tilde \qB+2\kB_T
\cr
\nonumber\\
& &-(1-a){(q^2-m^2)\over 2q^2}(-\qB+{q_o\over |\vec q|}
\tilde \qB)\biggr).\end{aligned}$$ Now we can study the limit $$\begin{aligned}
\label{KkB}
&&\KB(q_o)=\lim_{q^2\rightarrow m^2}\KB(q^2,m^2,q_o)
\cr
\nonumber\\
&&=
(\qB+m){q_o \over 2\pi|\vec q|m^2}N_{\bar \Omega}
(k_{o~tr},k_{o~tr}-q_{o}).\end{aligned}$$ It is easy to find that $\KB(q_o) $ is finite provided that $m^2\neq 0$ and $N_{\bar \Omega}(q_o,0)<\infty $. The last condition is easy to investigate using the limiting procedure: $$\begin{aligned}
\label{egamma}
& &N_{\bar \Omega}(q_o,0)=\lim_{k_o\rightarrow q_o}
N_{\bar \Omega}(k_o,k_o-q_o)
\cr
\nonumber\\
& &=\lim_{k_o\rightarrow q_o}2n_\gamma(k_o-q_o)(n_e(q_o)-n_e(k_o))
\cr
\nonumber\\
& &
& &+\lim_{k_o\rightarrow q_o}(n_e(q_o)-n_e(k_o))
-\epsilon(q_o)\epsilon(k_o-q_o)\cr
\nonumber\\
& &\lim_{k_o\rightarrow q_o}(n_e(q_o)+n_e(k_o)-2n_e(q_o)n_e(k_o))
.\end{aligned}$$ The integration limits imply that the limit $k_o\rightarrow q_o$ is taken from below for $q^2>m^2$, and from above for $q^2<m^2$. The two limits lead to different values of $N_{\bar \Omega}(q_o,0)$. This leads to the discontinuity of $\KB(q^2,m^2,q_o)$ at the point $q^2=m^2$.
Only the first term in (\[egamma\]) can give rise to problems. We rewrite it as $\lim_{k_o\rightarrow 0}\left(2k_on_\gamma(k_o)
{\partial n_e(k_o+q_o)\over \partial k_o}\right)$. As relation (\[KkB\]) should be valid at any $q_o$ we find two conditions: $$\label{cgamma}
\lim_{k_o\rightarrow 0}k_on_\gamma(k_o)<\infty,$$ $$\label{cel}
|{\partial n_e(q_o)\over \partial q_o}|<\infty.$$ Under the very reasonable conditions (\[cgamma\]) and (\[cel\]) the electron propagator is free from pinching.
It is worth observing that $\KB(q_o) $ is gauge independent, at least within the class of covariant gauges.
Pinch Singularities in the Photon Propagator
---------------------------------------------
To consider the pinching singularity appearing in a single self-energy insertion approximation to the photon propagator, we have to make the substitutions $m_D=m=m_S$, $\sinh^2\Theta_D(k_o)\rightarrow -n_e(k_o)$, $\sinh^2\Theta_S(k_o-q_o) \rightarrow -n_e(k_o-q_o)$, and $h(k_o)=-\epsilon(k_o)(1+2n_\gamma(k_o))$. There are two thresholds, but only $q^2_{1, tr}=0$ and the domain where $q^2<0$ are relevant to a massless photon. The integration limits are given by the same expression (\[ko12\]), but now we have to integrate over the domain $(-\infty,k_{o~1})\bigcup (k_{o~2},+\infty)$. As $q^2 \rightarrow -0$, we find $(k_{o~1}\rightarrow -\infty)$ and $(k_{o~2}\rightarrow +\infty)$. The integration domain is still infinite but is shifted toward $\pm \infty$ where one expects that the particle distribution vanishes: $$\begin{aligned}
\label{Kmn}
& &K_{\mu\nu}(q^2,q^o)=
\left(g_{\mu\rho}-(1-a)
{q_{\mu}q_{\rho}\over q^2-2iq_o\epsilon}\right )
\cr
\nonumber\\
& &{\bar\Omega^{\rho\sigma}\over q^2}
\left(g_{\sigma \nu}-(1-a)
{q_{\sigma}q_{\nu}\over q^2+2iq_o\epsilon}\right )
\cr
\nonumber\\
& &=
{1 \over 16\pi^2|\vec q|q^2}
\left(\int_{-\infty}^{k_{o1}}+\int_{k_{o2}}^{\infty}
\right){k_odk_o\over |\vec k|}
\int d\phi
\cr
\nonumber\\
& &N_{\bar \Omega}(k_o,k_o-q_o)
\left(g_{\mu\rho}-(1-a)
{q_{\mu}q_{\rho}\over q^2-2iq_o\epsilon}\right )
\cr
\nonumber\\
& &
F^{\rho\sigma}\left(g_{\sigma \nu}-(1-a)
{q_{\sigma}q_{\nu}\over q^2+2iq_o\epsilon}\right ).\end{aligned}$$ To calculate $F^{\mu\nu}$ for the $e-\bar e$ loop, we parameterize the loop momentum $k$ by introducing an intermediary variable $l$ perpendicular to $q$. $m$ is the mass of loop particles: $$\begin{aligned}
\label{lm}
&&k=\alpha q+l,~q.l=0,~
k^2=(k-q)^2=m^2,~
\cr
\nonumber\\
&&l^2=m^2-\alpha^2q^2,~\alpha={k^2+q^2-(k-q)^2 \over 2q^2}.\end{aligned}$$ After all possible singular denominators are canceled, one can set $\alpha=1/2$. $$\begin{aligned}
\label{fmn}
& &F_{e\bar e}^{\mu \nu}=
-Tr(\kB +m)\gamma^{\mu}(\kB-\qB+m)\gamma^{\nu}\cr
\nonumber\\
& &=\biggl({4m^2q_o^2\over \vec q^2}A^{\mu \nu}(q)\cr
\nonumber\\
& &
+{q^2 \over \vec q^2}\biggl((4k_o(k_o-q_o)-4m^2-q^2)A^{\mu \nu}(q)
\cr
\nonumber\\
& &+(-8(k_o-{q_o\over 2})^2+2\vec q^2) B^{\mu \nu}(q)\biggr)\biggr).\end{aligned}$$ For the projection operators $A$, $B$, $C$, and $D$, see (\[A\])-(\[Dmunu\]). Now we obtain $$\begin{aligned}
\label{Kmns}
& &K_{\mu\nu}(q^2,q_o)=
{1 \over 16\pi^2|\vec q|q^2}\cr
\nonumber\\
& &
\left(\int_{-\infty}^{k_{o1}}+\int_{k_{o2}}^{\infty} \right)
{k_odk_o\over |\vec k|}
\int d\phi N_{\bar \Omega}(k_o,k_o-q_o)
\cr
\nonumber\\
& &
\biggl({4m^2q_o^2\over \vec q^2}A_{\mu \nu}(q)\cr
\nonumber\\
& &
+{q^2 \over \vec q^2}\biggl((4k_o(k_o-q_o)-4m^2-q^2)A_{\mu \nu}(q)
\cr
\nonumber\\
& &+(-8(k_o-{q_o\over 2})^2+2\vec q^2) B_{\mu \nu}(q)\biggr)\biggr).\end{aligned}$$ In the integration over $k_o$ the terms proportional to $(k_o^2q^2)^n$ dominate and $\lim_{q^2\rightarrow 0}|K_{\mu\nu}(q^2,q_o)|<\infty$ if $$\begin{aligned}
\label{Kmna}
& &|{1 \over 16\pi^2|\vec q|q^2}
\left(\int_{-\infty}^{k_{o1}}+\int_{k_{o2}}^{\infty} \right)
{k_odk_o\over |\vec k|}\cr
\nonumber\\
& &(\alpha+\beta k_o^2 q^2)
\int d\phi N_{\bar \Omega}(k_o,k_o-q_o)|<\infty.\end{aligned}$$ Here $N_{\bar \Omega}(k_o,k_o-q_o)$ is given by $$\begin{aligned}
\label{ebare}
& &N_{\bar \Omega}(k_o,k_o-q_o)=
-2n_{e}(k_o-q_o)\cr
\nonumber\\
& &(-n_\gamma(q_o)-n_e(k_o))
-n_\gamma(q_o)-n_e(k_o)
\cr
\nonumber\\
& &-\epsilon(q_o)\epsilon(k_o-q_o)(-n_\gamma(q_o)
+n_e(k_o)
\cr
\nonumber\\
& &+2n_\gamma(q_o)n_e(k_o)).\end{aligned}$$ Assuming that the distributions obey the inverse-power law at large energies $n_{e}(k_o)\propto |k_o|^{-\delta_e}$ and $n_{\bar e}(k_o)\propto |k_o|^{-\delta_{\bar e}}$, we find that the terms linear in densities dominate. Thus, for $n=0,1$, one finds $$\begin{aligned}
\label{ltsa}
&&{-1\over q^2}\biggl(\int_{-\infty}^{k_{o~1}}
+\int_{k_{o~2}}^{+\infty}\biggr)
{|k_o|dk_o\over |\vec k|}|k_o|^{2n-\delta} (-q^2)^n
\cr
\nonumber\\
&&\propto (\delta-1-2n)^{-1}(|\vec q|m)^{1+2n-\delta}
(-q^2)^{(\delta-3)/2}.\end{aligned}$$ It follows that (\[Kmna\]) is finite (in fact, it vanishes) if $\delta_e,\delta_{\bar e}>3$. Similar analysis for electron propagator at $q^2 < 0$ (thus outside of our analysis of pinch singularities) leads to $\delta_{\gamma}>3$. This is exactly the condition $$\label{fine}
\int d^3k\,n_{\gamma,e,\bar e}(k_o)<\infty.$$ Thus the pinching singularity is canceled in the photon propagator under the condition that the electron and positron distributions are such that the total number of particles is finite.
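The role of the exponent condition can be made concrete with a small numeric sketch (the unit lower cutoff and pure power-law tail below are illustrative assumptions): for $n(k_o)\propto k_o^{-\delta}$ at large energies, the radial part of (\[fine\]) behaves as $\int_1^K k^{2-\delta}\,dk$, which saturates for $\delta>3$ and grows without bound otherwise.

```python
import math

def tail_integral(delta, K):
    """int_1^K k^2 * k^(-delta) dk: the large-k radial part of
    int d^3k n(k) for a tail n(k) ~ k^(-delta) (angular factor dropped)."""
    if delta == 3.0:
        return math.log(K)
    return (K**(3.0 - delta) - 1.0) / (3.0 - delta)

# delta = 4 > 3: the integral saturates as the cutoff K grows ...
assert abs(tail_integral(4.0, 1e6) - 1.0) < 1e-3
# ... while delta = 2.5 < 3 keeps growing like K^(1/2).
assert tail_integral(2.5, 1e6) > 30 * tail_integral(2.5, 1e3)
```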
Also, in the photon propagator, the quantity $\lim_{q^2 \rightarrow 0}K_{\mu\nu}(q^2,q_o)$ does not depend on the gauge parameter.
Expression (\[ltsa\]) is not valid for $m=0$.
Pinch Singularities in Massless QCD
====================================
In this section we consider the case of massless QCD. One should observe that massless quarks and gluons are an idealization, appropriate at best at lowest order. In the nonequilibrium HTL resummation scheme both quarks and gluons acquire dynamical mass$^{\cite{carrington}}$. Pinching singularities, related to massive quarks, are eliminated by the methods used in the preceding section.
Attention is turned to the spin degrees of freedom, i.e., to the function $F$ of the integrand in (\[Omegai2\]) to (\[trint1\]). In the calculation of $F$ it has been anticipated that the loop particles have to be on mass shell. In this case, $F$ provides an extra $q^2$ factor which suffices for the elimination of pinching singularities.
The integration limits are now $$\label{ko12qg}
k_{o1,2}={1\over 2}\left(q_o\mp|\vec q|\right).$$ The difference $|\vec k|_2-|\vec k|_1$ is finite and there is no threshold effect.
It is worth observing that for $q^2>0$, we have to integrate between $k_{o1}$ and $k_{o2}$, whereas for $q^2<0$, the integration domain is $(-\infty,k_{o~1})\bigcup (k_{o~2},+\infty)$. This leads to two limits, $\lim_{q^2\rightarrow \pm0}K(q^2,q_o)=K_{\pm}(q_o)$, in all cases of massless QCD.
By inspection of the final results (\[qqse\]), (\[ghghse\]), and (\[ggse\]), we find that the case $q^2<0$ requires integrability of the function $k_o^2N_{\bar\Omega}(k_o,k_o-q_o)$, leading to the condition (\[fine\]) on the quark, gluon, and ghost distribution functions.
The function $K_{\mu \nu}(q^2,q_o)$ related to the gluon propagator is the sum of the contributions from various loops, where the terms in the sum are defined as $$\begin{aligned}
\label{VCbarOS}
& &K_{\mu \nu}(q^2,q_o)=
(g_{\mu\rho}-(1-a)D_{R\mu\rho})\cr
\nonumber\\
& &
{\bar\Omega^{\rho \sigma} \over q^2}
(g_{\sigma\nu}-(1-a)D_{A\sigma\nu}).\end{aligned}$$ The tensor $F$ related to the massless quark-antiquark contribution to the gluon self-energy is $$\begin{aligned}
\label{qqse}
& &F_{q \bar q}^{\mu \nu}=
-{\delta_{a b} \over 6}Tr\kB\gamma^{\mu}(\kB-\qB)\gamma^{\nu}
\cr
\nonumber\\
& &={\delta_{a b} \over 6}
\biggl({q^2 \over \vec q^2}\biggl((4k_o(k_o-q_o)-q^2)A^{\mu \nu}(q)
\cr
\nonumber\\
& &+(-8(k_o-{q_o\over 2})^2+2\vec q^2) B^{\mu \nu}(q)\biggr)
+O^{\mu\nu}(\vec k_T)\biggr).\end{aligned}$$ As $F_{\mu\nu}$ contains only $A$ and $B$ projectors, the result does not depend on the gauge parameter.
Relation (\[qqse\]) contains only terms proportional to $q^2$, and $\lim_{q^2\rightarrow 0}K_{\mu\nu}(q^2,q_o)$ is finite.
For the ghost-ghost contribution to the gluon self-energy, the tensor $F$ is given by $$\begin{aligned}
\label{ghghse}
& &F_{gh gh}^{\mu \nu}=-\delta_{a b}N_ck^{\mu}(k-q)^{\nu}
\cr
\nonumber\\
& &=
-\delta_{a b}N_c{q^2 \over \vec q^2}
\biggl({4k_o(k_o-q_o)+q^2 \over 8}A^{\mu \nu}(q)
\cr
\nonumber\\
& &-(k_o-{q_o \over 2})^2 B^{\mu \nu}(q)
-{\vec q^2 \over 4}D^{\mu \nu}(q)+O^{\mu\nu}(\vec k_T)
\biggr).\end{aligned}$$ The tensor $F$ for the gluon-gluon contribution to the gluon self-energy is $$\begin{aligned}
\label{ggse}
& &F_{gg}^{\mu \nu}=
{\delta_{ab}N_c \over 2}\cr
\nonumber\\
& &\left(g^{\mu \sigma}(q+k)^{\tau}-
g^{\sigma \tau}(2k-q)^{\mu}+
g^{\tau \mu}(k-2q)^{\sigma}\right)\cr
\nonumber\\
& &
\left(g_{\sigma \rho}-(1-a){(k-q)_{\sigma}(k-q)_{\rho}
\over (k-q)^2\pm 2i(k_o-q_o)\epsilon}\right)\cr
\nonumber\\
& &\left(g^{\nu \rho}(q+k)^{\eta}-
g^{\rho\eta}(2k-q)^{\nu}+g^{\eta\nu}(k-2q)^{\rho}\right)
\cr\nonumber\\& &
\left(g_{\tau \eta}-(1-a){k_{\tau}k_{\eta}
\over k^2\pm 2ik_o\epsilon}\right)\cr
\nonumber\\
& &
\rightarrow{\delta_{ab}N_c q^2\over 2}\biggl({1\over \vec q^2}
\biggl((10(k_o-{q_o \over 2})^2+{3 \over 2}\vec q^2 )A^{\mu \nu}(q)
\cr
\nonumber\\
& &
+(-10(k_o-{q_o\over 2})^2+4\vec q^2) B^{\mu \nu}(q)
-{\vec q^2 \over 2}D^{\mu \nu}(q)\biggr)
\cr
\nonumber\\
& &
-(1-a)\biggl({1\over 2}A^{\mu\nu}-B^{\mu\nu}
-{q_o\over |\vec q|}C^{\mu\nu} \biggr)
\cr
\nonumber\\
& &
+(1-a)^2\biggl(-{q^2\over \vec q^2}A^{\mu\nu}
+2{q_o^2\over \vec q^2}B^{\mu\nu}
-2{q_o\over |\vec q|}C^{\mu\nu}\cr
\nonumber\\
& &-2D^{\mu\nu}\biggr)
+O^{\mu\nu}(\vec k_T)\biggr).\end{aligned}$$ Expressions (\[qqse\]), (\[ghghse\]), and (\[ggse\]) for the ghost-ghost, quark-antiquark, and gluon-gluon contributions to the gluon self-energy contain only terms proportional to $q^2$. The function $K_{\mu\nu}(q^2,q_o)$ approaches the finite value $K_{\mu\nu}(\pm,q_o)$.
Thus we have shown that the single self-energy contribution to the gluon propagator is free from pinching under the condition (\[fine\]).
The $K$ spinor for the quark-gluon contribution to the massless quark propagator is defined as $$\label{SKbarOS}
\KB(q^2,q_o)=\qB{\barOmegaB\over q^2}\qB.$$ In the self-energy of a massless quark coupled to a gluon, the “sandwiched” spin factor $\qB\FB\qB$ is given by the following (the term proportional to $\kB_T$ vanishes after integration, so we drop it): $$\begin{aligned}
\label{qtrl}
&&\qB\FB_{qg}\qB=\delta_{a b}{N_c^2-1 \over 2N_c}
\cr
\nonumber\\
&&\left (g_{\mu \nu}-{(1-a)(k-q)_{\mu}(k-q)_{\nu}
\over (k-q)^2\pm 2i(k_o-q_o)\epsilon}\right )
\qB\gamma^\mu \kB\gamma^\nu\qB\cr
\nonumber\\
&&=\delta_{a b}{N_c^2-1 \over 2N_c}q^2
\cr
\nonumber\\
&&\left(-\qB-{q_o\over |\vec q|}\tilde\qB+2{k_o\over |\vec q|}
\tilde\qB
-{1-a\over 2}(-\qB+{q_o\over |\vec q|}\tilde\qB)\right),\end{aligned}$$ which contains the damping factor $q^2$.
By inserting (\[qtrl\]) into (\[SKbarOS\]), we obtain (\[pinch0pdk\]) free from pinches.
To calculate $\KB(q_o)$, we need the limit $$\label{qtrll}
\lim_{q^2\rightarrow 0}{\qB \FB_{qg}\qB\over q^2}
=\delta_{a b}{N_c^2-1 \over 2N_c}{2(k_o-q_o)\over q_o}\qB.$$ From (\[qtrll\]) we conclude that $\KB(q_o)$ does not depend on the gauge parameter.
Omitting details, we observe that pinching is absent from the quark propagator, also in the Coulomb gauge, with the same limit (\[qtrll\]).
The $K$ factor for the ghost-gluon self-energy contribution to the ghost propagator is defined as $$\label{SgKbarOS}
K(q^2,q_o)={\bar\Omega\over q^2}.$$ The $F$ factor for the ghost-gluon contribution is $$\begin{aligned}
\label{ghgse}
&&F_{gh g}=\delta_{a b}N_ck^{\mu}q^{\nu}\cr
\nonumber\\
&&\left(g_{\mu \nu}-{(1-a)(k_{\mu}-q_{\mu})(k_{\nu}-q_{\nu})
\over (k-q)^2\pm 2i(k_o-q_o)\epsilon}\right)\cr
\nonumber\\
&&
\rightarrow \delta_{a b}N_c{q^2 \over 2}.\end{aligned}$$ The factor $q^2$ ensures the absence of a pinch singularity and a well-defined perturbative result.
The $K$ factor for the scalar-photon self-energy contribution to the scalar propagator is defined as $$\label{SbKbarOS}
K(q^2,q_o)={\bar\Omega\over q^2}.$$ The $F$ factor for the massless scalar-photon contribution to the scalar self-energy, $$\begin{aligned}
\label{scphot}
&&F_{s\gamma}=(q+k)^{\mu}(q+k)^{\nu}\cr
\nonumber\\
&&\left(g_{\mu \nu}-{(1-a)(k-q)_{\mu}(k-q)_{\nu} \over
(k-q)^2\pm 2i(k_o-q_o)\epsilon}\right)\rightarrow 2q^2,\end{aligned}$$ clearly exhibits the $q^2$ damping factor!
Conclusion
===========
Studying the out-of-equilibrium Schwinger-Dyson equation, we have found that ill-defined pinch-like expressions appear exclusively in the Keldysh component (${\cal G}_K$) of the resummed propagator (\[sol1barG\]), or in the single self-energy insertion approximation to it (\[pinch0p\]). This component is nonvanishing only in expressions containing the Keldysh component (\[Omega\]) ($\Omega $ or $\bar\Omega $ for the single self-energy approximation) of the self-energy matrix. This then requires that loop particles be on mass shell, which is the crucial point for the elimination of pinch singularities.
We have identified two basic mechanisms for the elimination of pinching: the threshold and the spin effects.
For a massive electron and a massless photon (or quark and gluon) it is the threshold effect in the phase space integration that produces, respectively, the critical $q^2-m^2$ or $q^2$ damping factors.
In the case of a massless quark, ghost, and gluon, this mechanism fails, but the spinor/tensor structure of the self-energy provides an extra $q^2$ damping factor.
We have found that, in QED, the pinching singularities appearing in the single self-energy insertion approximation to the electron and the photon propagators are absent under very reasonable conditions: the distribution function should be finite, exceptionally the photon distribution is allowed to diverge as $k_o^{-1}$ as $k_o \rightarrow 0$; the derivative of the electron distribution should be finite; the total density of electrons should be finite.
For QCD, identical conditions are imposed on the distribution of massive quarks and the distribution of gluons; the distributions of massless quarks and ghosts (observe here that in the covariant gauge, the ghost distribution is not required to be identically zero) should be integrable functions; they are limited by the finiteness of the total density.
In the preceding sections we have shown that all pinch-like expressions appearing in QED and QCD (with massless and massive quarks!) at the single self-energy insertion level do transform into well-defined expressions. Many other theories behave in such a way. However, there are important exceptions: all theories in which lowest-order processes are kinematically allowed do not acquire well-defined expressions at this level. These are electroweak interactions, processes involving a Higgs and two light particles, a $\rho $ meson and two $\pi $ mesons, $Z$, $W$, and other heavy particles decaying into a pair of light particles, etc. The second important exception is massless $g^2\phi^3$ theory. This theory, in contrast to massless QCD, contains no spin factors to provide (\[rpinchlike2\]). In these cases, one has to resort to the resummed Schwinger-Dyson series. One can also expect that, in these cases, higher order contributions become more important and provide a natural cutoff which reduces the contribution of pinch-like terms. Ultimately, this points to the limitations of the method.
The main result of the present paper is the cancellation of pinching singularities at the single self-energy insertion level in QED- and QCD-like theories. This, together with the reported$^{\cite{lebellac,niegawacom}}$ cancellation of collinear singularities, allows the extraction of useful physical information contained in the imaginary parts of the two-loop diagrams. This is not the case with three-loop diagrams, because some of them contain double self-energy insertions. In this case, one again has to resort to the sophistication of resummed propagators.
Appendix
=========
We start$^{\cite{landsman}}$ by defining a heat-bath four-velocity $U_{\mu}$, normalized to unity, and define the orthogonal projector $$\label{Delta}
\Delta_{\mu \nu}=g_{\mu \nu}-U_\mu U_\nu.$$ We further define spacelike vectors in the heat-bath frame: $$\label{kappa}
\kappa_{\mu}=\Delta_{\mu \nu}q^\nu,~~~~\kappa_\mu \kappa^\mu
=\kappa^2=-\vec q^2.$$ There are four independent symmetric tensors (we distinguish retarded from advanced tensors by the usual modification of the $i\epsilon $ prescription) $A$, $B$, and $D$ (which are mutually orthogonal projectors), and $C$: $$\label{A}
A_{\mu \nu}(q)=
\Delta_{\mu \nu}-{\kappa_\mu \kappa_\nu \over \kappa^2},$$ $$\label{B}
B_{R~\mu \nu}(q)=U_\mu U_\nu +{\kappa_\mu \kappa_\nu
\over \kappa^2}-{q_\mu q_\nu \over (q^2+2iq_o\epsilon)},$$ $$\begin{aligned}
\label{Cmunu}
& &C_{R~\mu \nu}(q)
={(-\kappa^2)^{1/2} \over U.q}\cr
\nonumber\\
& &
\left ({(U.q)^2 \over \kappa^2}U_\mu U_\nu -{\kappa_\mu \kappa_\nu
\over \kappa^2}+{(q_o^2+\vec q^2)
q_\mu q_\nu \over \vec q^2(q^2+2iq_o\epsilon)}\right ),\end{aligned}$$ $$\label{Dmunu}
D_{R~\mu \nu}(q)={q_\mu q_\nu \over q^2+2iq_o\epsilon}.$$ In addition to the known multiplication$^{\cite{landsman}}$ properties (for convenience we drop $q$-dependence) $$\label{AA}
A A =A ,~B_{R,A} B_{R,A} =B_{R,A} ,$$ $$\begin{aligned}
\label{CRCR}
& &C_{R,A} C_{R,A} =-(B_{R,A} +D_{R,A} ),
\cr
\nonumber\\
& &D_{R,A} D_{R,A} =D_{R,A} ,\end{aligned}$$ $$\begin{aligned}
\label{AB}
& &A B =B A =A C =C A =0,
\cr
\nonumber\\
& &
A D =D A =B D =D B =0,\end{aligned}$$ $$\begin{aligned}
\label{BC}
& &(B_{R,A} C_{R,A} )_{\mu\nu}=(C_{R,A} D_{R,A} )_{\mu\nu}
\cr
\nonumber\\
& &
=(C_{R,A} B_{R,A} )_{\nu\mu}=(D_{R,A} C_{R,A} )_{\nu\mu}
\cr
\nonumber\\
& &
=
{\tilde q_\mu q_\nu\over q^2\pm 2iq_o\epsilon},\end{aligned}$$ we need mixed products $$\label{BRBA}
B_{R,A} B_{A,R} ={1 \over 2}(B_R +B_A ),$$ $$\begin{aligned}
\label{CRCA}
C_{R,A} C_{A,R}=-{1 \over 2}(B_R +B_A+D_R +D_A ),\end{aligned}$$ $$\label{DRDA}
D_{R,A} D_{A,R} ={1 \over 2}(D_R +D_A ),$$ $$\begin{aligned}
\label{BCRA}
& &(B_{R,A} C_{A,R} )_{\mu\nu}=(C_{R,A} D_{A,R} )_{\mu\nu}
\cr
\nonumber\\
& &
=(C_{R,A} B_{A,R} )_{\nu\mu}=(D_{R,A} C_{A,R} )_{\nu\mu}
\cr
\nonumber\\
& &
={1\over 2}({\tilde q_\mu q_\nu\over q^2+2iq_o\epsilon}
+{\tilde q_\mu q_\nu\over q^2-2iq_o\epsilon}).\end{aligned}$$
[99]{} N. P. Landsman and Ch. G. van Weert, [[Phys. Rep. ]{}]{}145, 141 (1987). M. Le Bellac, “Thermal Field Theory”, (Cambridge University Press, Cambridge, 1996). J. Schwinger, [[J. Math. Phys. ]{}]{}2, 407 (1961). L. V. Keldysh, Zh. Eksp. Teor. Fiz. 47, 1515 (1964) \[Sov. Phys.-JETP 20, 1018 (1965)\]. J. Rammer and H. Smith, [[Rev. Mod. Phys. ]{}]{}58, 323 (1986). H. A. Weldon, [[Phys. Rev. ]{}]{}D45, 352 (1992). P. F. Bedaque, [[Phys. Lett. ]{}]{}B344, 23 (1995). C. Greiner and S. Leupold, Preprint hep-ph/9802312 and hep-ph/9804239. M. Le Bellac and H. Mabilat, [[Phys. Rev. ]{}]{}D55, 3215 (1997). P. V. Landshoff and A. Rebhan, [[Nucl. Phys. ]{}]{}B393, 607 (1993). A. Niégawa, [[Eur. Phys. J. ]{}]{}C5, 345 (1998). T. Altherr and D. Seibert, [[Phys. Lett. ]{}]{}B333, 149 (1994). T. Altherr, [[Phys. Lett. ]{}]{}B341, 325 (1995). R. Baier, M. Dirks, and K. Redlich, [[Phys. Rev. ]{}]{}D55, 4344 (1997). R. Baier, M. Dirks, K. Redlich, and D. Schiff, Preprint hep-ph/9704262. R. Baier, M. Dirks, and K. Redlich, Contribution to the XXXLII Cracow School, [[Acta. Phys. Pol. ]{}]{}B28, 2873 (1997). M. E. Carrington, H. Defu, and M. H. Thoma, Preprint hep-ph/9708363. A. Niégawa, Preprint hep-th/9709140. I. Dadić, Preprint hep-ph/9801399. R. D. Pisarski, [[Phys. Rev. Lett. ]{}]{}63, 1129 (1989). E. Braaten and R. D. Pisarski, [[Nucl. Phys. ]{}]{}B337, 569 (1990). J. Frenkel and J. C. Taylor, [[Nucl. Phys. ]{}]{}B334, 199 (1990). K.-C. Chou, Z.-B. Su, B.-L. Hao, and L. Yu, [[Phys. Rep. ]{}]{}118, 1 (1985). A. J. Niemi, [[Phys. Lett. ]{}]{}B203, 425 (1988). J-P. Blaizot, E. Iancu, and J-Y. Ollitrault, in Quark-Gluon Plasma II, edited by R. Hwa (World Scientific, Singapore, 1995). H. A. Weldon, [[Phys. Rev. ]{}]{}D28, 2007 (1983). R. L. Kobes and G. W. Semenoff, [[Nucl. Phys. ]{}]{}B260, 714 (1985). F. Gelis, [[Nucl. Phys. ]{}]{}B508, 483 (1997).
[^1]: Talk given at the 5th International Workshop “Thermal Field Theories and their Applications”, August 10-14, 1998, Regensburg, Germany
---
author:
- 'Michael Beck, Sebastian Henningsen'
bibliography:
- 'references.bib'
date: 'February, 2017'
title: |
Technical Report\
The Stochastic Network Calculator
---
Introduction and Overview
=========================
In this technical report, we provide an in-depth description of the Stochastic Network Calculator tool, henceforth just called “Calculator”. This tool is designed to compute and automatically optimize performance bounds in queueing networks using the methodology of stochastic network calculus (SNC). For a detailed introduction to SNC, see the publicly available thesis of Beck [@Beck:thesis], which shares the notations and definitions used in this document. Other introductory texts are the books [@Chang:book; @Jiang:book] and the surveys [@Fidler:survey; @Fidler:guide].
The Stochastic Network Calculator is an open source project and can be found on github at <https://github.com/scriptkitty/SNC>; it was also presented in [@Beck:SNCalc; @Beck:SNCalc2]. We structure this report as follows:
- We give the essential notations, definitions, and results of SNC in Sections \[sec:Essentials\] and \[sec:Theoretical Results\]. Section \[sec:Modeling with SNC\] focuses on modeling queueing systems, service elements, and arrivals within the language of SNC.
- Section \[sec:Code Structure\] gives an overview on the calculator’s code structure and its workflow.
- In Section \[sec:Code Representation\] we give more detail on how the concepts of SNC are represented in the code.
- How to access and extend the Calculator is topic of Section \[sec:APIs and Extensions\].
- We wrap things up with a full example in Section \[sec:full\_example\]. It describes the modeling steps we have undertaken to produce the results for our presentation at the IEEE Infocom conference 2017 [@Beck:SNCalc2].
We refrain from displaying larger chunks of code in this report for two reasons: (1) The Calculator is under development and the code-base might change at any time. (2) The code is extensively commented; hence, instead of enlarging this report to unbearable lengths, it will be more useful for developers to read about the code’s details in their own context.
Essentials of Stochastic Network Calculus {#sec:Essentials}
=========================================
This Section orients itself on the notations and definitions made in [@Beck:thesis].
We partition time into time-slots of unit length and consider a fluid data model. In this scenario we define a *flow* as a cumulative function of data:
A *flow* is a cumulative function $$\begin{aligned}
A\,:\,\mathbb{N}&\rightarrow \mathbb{R}_0^+ \\
t&\mapsto A(t)\end{aligned}$$
The interpretation of $A(t)$ is the (cumulative) amount of data arriving up to time $t$. Correspondingly the doubly-indexed function $A(s,t)$ describes the amount of data arriving in the interval $(s,t]$.
In SNC a stochastic bound on the amount of arrivals is needed. Without such a bound the total number of arrivals in some interval could be arbitrarily large, thus making an analysis of the system impossible. The Calculator is based on the MGF (moment-generating function) approach of SNC: flows and other stochastic processes are represented by their respective MGFs, which are bounded from above.
\[def:Arrival-Bound\] The MGF of some quantity $A(t)$ at $\theta$ is defined by $$\phi_{A(t)}(\theta) = \mathbb{E}(e^{\theta A(t)}),$$ where $\mathbb{E}$ denotes the expectation of a random variable.
We have an MGF-bound for a flow $A$ and the value $\theta>0$, if $$\phi_{A(s,t)}(\theta) \leq e^{\theta \rho(\theta)(t-s) + \theta \sigma(\theta)}$$ holds for all time pairs $s\leq t$.
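For illustration (this is not part of the Calculator's API), the pair $(\sigma(\theta),\rho(\theta))$ of such a bound is easy to obtain for a flow with i.i.d. increments: the MGF factorizes over time-slots, so $\sigma(\theta)=0$ and $\rho(\theta)=\ln\phi_X(\theta)/\theta$, where $\phi_X$ is the MGF of a single increment. A minimal Python sketch, assuming exponentially distributed increments:

```python
import math

def iid_envelope(theta, mgf_increment):
    """(sigma, rho) for a flow with i.i.d. increments:
    E[exp(theta * A(s,t))] = mgf(theta)**(t-s), hence
    sigma(theta) = 0 and rho(theta) = ln(mgf(theta)) / theta."""
    return 0.0, math.log(mgf_increment(theta)) / theta

lam, theta = 2.0, 0.5                 # exponential(lam) increments, theta < lam
mgf = lambda th: lam / (lam - th)     # MGF of a single increment
sigma, rho = iid_envelope(theta, mgf)

# The bound is tight here: it reproduces the exact MGF for any interval length.
for n in (1, 5, 20):                  # interval length t - s
    exact = mgf(theta) ** n
    bound = math.exp(theta * rho * n + theta * sigma)
    assert abs(exact - bound) < 1e-9 * exact
```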
\[Figure: a single service element $U$ with input flow $A$ and output flow $B$.\]
The second basic quantity in a queueing system is the amount of service per time-slot. The relationship between these two and the resulting output (see Figure \[fig:single-queue\]) is given by $$B(t) \geq A\otimes U(0,t) = \min_{0\leq s\leq t}\{A(0,s)+U(s,t)\}.$$ Here $U$ is a bivariate function (or stochastic process) that describes the service process’s behavior. For example, a constant rate server takes the form $U(s,t) = r_U(t-s)$, where $r_U$ is the service element’s rate.
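In discrete time, the min-plus convolution can be evaluated directly by scanning all split points $s$. A minimal Python sketch (the arrival numbers and the rate $r_U$ are made-up illustration values), computing the output lower bound for a constant rate server:

```python
def min_plus_conv(A, U, t):
    """A otimes U (0, t) = min over 0 <= s <= t of A(0, s) + U(s, t)."""
    return min(A(0, s) + U(s, t) for s in range(t + 1))

r_U = 2.0                            # constant service rate
U = lambda s, t: r_U * (t - s)       # U(s, t) = r_U * (t - s)
arrivals = [0, 5, 5, 5, 8]           # cumulative arrivals A(t)
A = lambda s, t: arrivals[t] - arrivals[s]

lower = [min_plus_conv(A, U, t) for t in range(5)]
print(lower)  # [0.0, 2.0, 4.0, 5.0, 7.0] -- a lower bound on the output B(t)
```

Note how the burst of 5 units at $t=1$ drains at rate $r_U=2$ until the arrivals themselves become the binding constraint.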
\[def:Service-Bound\] A service element is a dynamic $U$-server, if it fulfills for any input-output pair $A$ and $B$: $$B(t)\geq A\otimes U(0,t).$$
Such a server is MGF-bounded for some $\theta>0$, if $U$ fulfills $$\phi_{U(s,t)}(-\theta)\leq e^{\theta\rho(\theta)(t-s) + \theta\sigma(\theta)}$$ for all pairs $s\leq t$.
Note the minus sign in the above definition; it indicates that the service is bounded from below, whereas the arrivals are bounded from above.
We are particularly interested in two performance measures that put a system’s arrivals and departures into context.
The *backlog* at time $t$ for an input-output pair $A$ and $B$ is defined by $$\mathfrak{b}(t) = A(t) - B(t).$$ The *virtual delay* at time $t$ is defined by $$\mathfrak{d}(t) = \min\{s\geq 0 \mid A(t) \leq B(t+s)\}.$$
Note that in these definitions we make two assumptions about the queueing system: (1) As the backlog is defined by the difference of $A$ and $B$, we assume the system to be loss-free – all data that has not yet departed from the system must still be queued in it. (2) We only consider the *virtual* delay. This is the time until the accumulated departures match the accumulated arrivals up to time $t$. In FIFO systems the virtual delay coincides with the actual delay; in non-FIFO systems, however, this need not be the case.
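Both measures are easy to read off from sample paths of the cumulative arrivals and departures. A minimal sketch, with made-up sample paths:

```python
def backlog(A, B, t):
    """b(t) = A(t) - B(t) in a loss-free system."""
    return A[t] - B[t]

def virtual_delay(A, B, t):
    """d(t) = min{s >= 0 | A(t) <= B(t+s)}; None if the departures
    do not catch up within the observed horizon."""
    for s in range(len(B) - t):
        if A[t] <= B[t + s]:
            return s
    return None

A = [0, 4, 6, 6, 7]   # cumulative arrivals (illustration values)
B = [0, 2, 4, 6, 7]   # cumulative departures
print(backlog(A, B, 1))         # 2
print(virtual_delay(A, B, 1))   # 1
```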
Modeling with Stochastic Network Calculus {#sec:Modeling with SNC}
=========================================
Modeling the Network
--------------------
(Figure \[fig:graph-representation\]: (a) a physical network in which two sources feed queued servers whose outputs share a third server before departing; (b) its graph representation with a virtual source node $e$, service nodes $U$, $V$ and $W$, and a virtual sink node $e^\prime$.)
In SNC we consider a queueing system such as a communication network as a collection of flows and service elements. These can be represented as nodes and edges as shown in Figure \[fig:graph-representation\]. In this transformation we replace each flow’s route by a sequence of directed edges, such that each hop of the flow through the network is mapped to one edge; furthermore, we introduce two extra nodes. They represent the “outside” of the network. Each flow originates from $e$ and leaves the network via $e^\prime$. Note that in such a scenario there does not need to be a one-to-one correspondence between nodes and physical entities. In the graph representation one node with several inputs just represents one resource that is expended by several flows of arrivals.
For example: A router can have several interfaces each leading to another router. In this scenario data packets do not queue up in the router, but rather in each of its interfaces; hence, the nodes in the corresponding graph represent a single interface only and not the entire router. This leads to different topologies between the physical network (in terms of routers and their connections) and the graph of service elements and flows.
Modeling the Arrival Processes
------------------------------
Now, we give more details on the modeling of data flows and present some of the arrival bounds currently implemented in the tool. As introduced in the previous section, we use a fluid model with discrete time-slots in the Calculator. This means we are interested in the arrivals’ distribution per time-slot; or more precisely: in their moment generating function (MGF). The easiest (and bland) example is a stream of data with constant rate:
Assume a source sends $r$ data per time-slot (for example 100 Mb/sec.). It immediately follows: $A(s,t)=r(t-s)$ and as there is no randomness involved it also holds $\phi_{A(s,t)}(\theta)=e^{\theta r(t-s)}$ for the MGF. To achieve a bound conforming to Definition \[def:Arrival-Bound\] we define $\rho(\theta)=r$ and $\sigma(\theta)=0$.
We can construct a simple random model by assuming that the arrivals per time-slot are stochastically independent and follow the same distribution (an i.i.d. assumption).
\[ex:Exponential-Increments\] Assume a source sends in time-slot $t$ an amount of data equal to $a_t$. Here the $a_t$ are stochastically independent and exponentially distributed with a common parameter $\lambda$. The exponential distribution has density $$f(x) = \lambda e^{-\lambda x} \qquad \text{ for all } x\geq 0$$ and $f(x)=0$ for all $x < 0$. From this we can derive the MGF for a single increment as $\phi_{a(t)}(\theta)=\frac{\lambda}{\lambda-\theta}$ for $\theta < \lambda$. Due to the stochastic independence of the increments we have: $$\phi_{A(s,t)}(\theta)=\prod_{r=s+1}^t \phi_{a(r)}(\theta)=\left(\frac{\lambda}{\lambda - \theta}\right)^{t-s};$$ hence, an MGF-bound for this type of arrivals is given by $\rho(\theta) = 1/\theta \log(\frac{\lambda}{\lambda - \theta})$ and $\sigma(\theta) = 0$.
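As a sanity check, this bound can be compared against a Monte Carlo estimate of the MGF. The Python sketch below (parameter values are arbitrary illustrations) does so for $\lambda=2$, $\theta=0.5$ and an interval of five time-slots; for i.i.d. exponential increments the bound is in fact exact, so the estimate should land close to it:

```python
import math, random

def mgf_bound(theta, lam, interval):
    # bound with rho(theta) = 1/theta * log(lam/(lam - theta)) and sigma(theta) = 0
    rho = math.log(lam / (lam - theta)) / theta
    return math.exp(theta * rho * interval)

def empirical_mgf(theta, lam, interval, runs=20000, seed=1):
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(runs):
        a = sum(rng.expovariate(lam) for _ in range(interval))  # one sample of A(s,t)
        acc += math.exp(theta * a)
    return acc / runs

theta, lam, interval = 0.5, 2.0, 5
bound = mgf_bound(theta, lam, interval)
estimate = empirical_mgf(theta, lam, interval)
```

Note that the estimator has finite variance only for $2\theta < \lambda$, which holds for the chosen parameters.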
The above example can be easily generalized to increments with other distributions, as long as their MGF can be computed.
As of today we can roughly divide the methodology of SNC by the way of how to bound the involved stochastic processes (for details refer to [@Fidler:survey]). The Calculator uses MGF-bounds as in Definitions \[def:Arrival-Bound\] and \[def:Service-Bound\]. In the next example we show how to convert bounds from the “other” branch of SNC to MGF-bounds.
We say a flow has exponentially bounded burstiness (follows the EBB-model), if $$\mathbb{P}(A(s,t)>\rho(t-s)+\varepsilon)\leq Me^{-d \varepsilon}$$ holds for all pairs $s\leq t$ and $\varepsilon>0$. In this model we call $M$ the prefactor and $d$ the bound’s decay; the parameter $\rho$ represents the arrival’s long-term rate. We can convert such a bound to an MGF-bound (see for example [@Li_eff_bandwidth_2007] or Lemma 10.1 in [@Beck:thesis]) via $$\phi_{A(s,t)}(\theta) \leq \int_0^1 e^{\theta (\rho(t-s)+\varepsilon)}\mathrm{d}\varepsilon^\prime.$$ Here $\varepsilon = -1/d \log(\tfrac{\varepsilon^\prime}{M})$. Solving the above integral leads to $$\phi_{A(s,t)}(\theta) \leq e^{\theta \rho(t-s)} \frac{M^{\nicefrac{\theta}{d}}}{1 - \nicefrac{\theta}{d}}$$ for $\theta < d$, and we can define an MGF-bound for flows following the EBB-model by $\rho(\theta) = \rho$ and $\sigma(\theta) = 1/\theta \log\bigl(\frac{M^{\nicefrac{\theta}{d}}}{1 - \nicefrac{\theta}{d}}\bigr)$.
The above example is important as it allows us to use results from the tailbounded branch of SNC inside the Calculator. The EBB-model contains important traffic classes such as Markov-modulated On-Off processes (see [@Li_eff_bandwidth_2007]).
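The conversion can be checked numerically: with the rate term factored out, the remaining integral is $\int_0^1 (\nicefrac{\varepsilon^\prime}{M})^{-\nicefrac{\theta}{d}}\,\mathrm{d}\varepsilon^\prime$, whose closed form is $M^{\nicefrac{\theta}{d}}/(1-\nicefrac{\theta}{d})$. The following Python sketch (function names are ours) evaluates both:

```python
import math

def ebb_sigma(theta, M, d):
    # sigma(theta) = 1/theta * log(M^(theta/d) / (1 - theta/d)), needs 0 < theta < d
    assert 0 < theta < d
    return math.log(M ** (theta / d) / (1.0 - theta / d)) / theta

def ebb_integral(theta, M, d, n=100000):
    # midpoint rule for int_0^1 (eps'/M)^(-theta/d) d(eps')
    h = 1.0 / n
    return sum(((i + 0.5) * h / M) ** (-theta / d) for i in range(n)) * h

theta, M, d = 0.5, 2.0, 1.5
closed_form = M ** (theta / d) / (1.0 - theta / d)
```

The midpoint rule handles the integrable singularity at $\varepsilon^\prime=0$ well enough for a sanity check.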
The next example is influenced by classical queueing theory and is a way to handle the underlying flow being defined in a continuous time setting.
Assume a Poisson jump process on $\mathbb{R}$, meaning the interarrival times between any two jumps are independent and exponentially distributed for some intensity parameter $\mu$. At each jump a packet arrives and the sequence of packets forms the increment process $a_i$ with $i \in \mathbb{N}$. The total number of arrivals in a (discrete-timed) interval $(s,t]$ is given by $$A(s,t) = \sum_{i \in N(s,t)}a_i,$$ where $N(s,t)$ is the set of jumps which occur in the interval $(s,t]$; now, assume that the increment process $a_i$ is i.i.d. (for stochastically independent exponential distributions we have the traditional M/M/1-model of queueing theory); then, we can calculate the MGF as $$\begin{aligned}
\phi_{A(s,t)}(\theta) & = \sum_{k=0}^\infty \mathbb{E}\left(e^{\theta A(s,t)} \mid N(s,t) = k\right)\mathbb{P}(N(s,t) = k) \\
& = e^{\mu(t-s)(\phi_{a_i}(\theta) - 1)};\end{aligned}$$ hence, the flow is MGF-bounded with $\rho(\theta) = \mu/\theta (\phi_{a_i}(\theta) - 1)$ and $\sigma(\theta)=0$.
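A small simulation can illustrate this bound as well. The Python sketch below (parameter values and the simple Poisson sampler are our own choices) draws compound-Poisson arrivals with exponential packet sizes and compares the empirical MGF with $e^{\theta\rho(\theta)t}$, which is again exact in this case:

```python
import math, random

def poisson_sample(rng, mean):
    # Knuth's multiplication method; adequate for small means
    limit, k, prod = math.exp(-mean), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def rho_compound_poisson(theta, mu, lam):
    # rho(theta) = mu/theta * (phi_a(theta) - 1) for exponential packets with rate lam
    return mu / theta * (lam / (lam - theta) - 1.0)

rng = random.Random(7)
theta, mu, lam, t = 0.4, 1.5, 2.0, 6
bound = math.exp(theta * rho_compound_poisson(theta, mu, lam) * t)
runs, acc = 20000, 0.0
for _ in range(runs):
    n = poisson_sample(rng, mu * t)  # number of packets in (0, t]
    acc += math.exp(theta * sum(rng.expovariate(lam) for _ in range(n)))
estimate = acc / runs
```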
In our last example for modeling arrivals we only assume that their distribution are stationary. Instead of having detailed information on their distribution, we model them as the aggregate of sub-flows that (each by their own) have passed through a token bucket shaper.
Assume a subflow $A_i$. We say that $A_i$ has passed through a token bucket shaper, if for all pairs $s\leq t$ it holds $A(s,t)\leq \rho_i(t-s) + \sigma_i$. The rate $\rho_i$ is the shaper’s token refreshing rate and $\sigma_i$ is its bucket size; now, assume that the stochastic processes $A_i$ are stationary, meaning $A_i(s,t)$ is equal in distribution to any shift performed to the interval $(s,t)$. For the aggregate $\sum_i A_i =: A$ the following bound ([@Massoulie:tokenbuckets]) holds: $$\phi_{A(s,t)}(\theta) \leq e^{\theta \sum_i \rho_i(t-s)}\left(1/2 e^{\theta \sum_i \sigma_i} + 1/2 e^{-\theta \sum_i \sigma_i}\right).$$ By defining $\rho(\theta) = \sum_i \rho_i$ and $\sigma(\theta) = 1/\theta \log(1/2 e^{\theta \sum_i \sigma_i} + 1/2 e^{-\theta \sum_i \sigma_i})$ we have an MGF-bound in the sense of Definition \[def:Arrival-Bound\].
With the above examples we see how to derive MGF-bounds for several models: We covered stochastically independent increments, a conversion from tailbounds, the traditional M/M/1-model, and the aggregation of shaped traffic. All these bounds are implemented and available in the Calculator.
Modeling the Service Process
----------------------------
The modeling of service elements is usually much easier, as the randomness of service times mostly stems from flows interfering with the service element; in fact, the Calculator currently implements only one kind of service element: the constant rate server.
We can model a service process $U$ by a constant rate server, i.e., $U(s,t) = r(t-s)$ for some rate $r$. Similarly to the constant rate arrivals the MGF simply is $$\phi_{U(s,t)}(-\theta) = e^{-\theta r(t-s)}$$ and we achieve an MGF-bound as in Definition \[def:Service-Bound\] by defining $\rho(\theta)=-r$ and $\sigma(\theta)=0$.
We want to briefly point out that, when we deal with a wired system, service elements can usually be modeled as constant rate servers. One should bear in mind, however, that the situation becomes fundamentally different in wireless scenarios. In wireless scenarios, the channel characteristics and properties of wireless nodes have to be taken into account. More details on the latter can be found in [@jiang:servermodel] and [@Jiang:IWQoS2010], where a router’s service is parametrized via statistical methods and measurements. This method can likely be applied as a general approach to get a more detailed service description of real-world systems. The modeling of fading channels is addressed in [@Fidler:fading-channels].
In the next section we reason why this simple service model still allows to analyze a wide variety of networks.
Theoretical Results {#sec:Theoretical Results}
===================
Performance Bounds
------------------
For a system with a single node and a single arrival as in Figure \[fig:single-queue\], we have the following performance bounds:
\[thm:Fundamental-Theorem\] Consider the system in Figure \[fig:single-queue\] and assume that the MGF-bounds $$\begin{aligned}
\phi_{A(s,t)}(\theta) &\leq e^{\theta\rho_A(\theta)(t-s) + \theta\sigma_A(\theta)}\\
\phi_{U(s,t)}(-\theta) &\leq e^{\theta\rho_U(\theta)(t-s) + \theta\sigma_U(\theta)}\end{aligned}$$ hold for $A$ and $U$ and some $\theta > 0$. If $A$ and $U$ are stochastically independent, then for all $t>0$ the following bounds hold: $$\begin{aligned}
\mathbb{P}(\mathfrak{b}(t)>N) &\leq e^{-\theta N}e^{\theta \sigma_A(\theta) + \theta \sigma_U(\theta)} \cdot \frac{1}{1 - e^{\theta (\rho_A(\theta)+\rho_U(\theta))}} \\
\mathbb{P}(\mathfrak{d}(t)>T) &\leq e^{\theta \rho_U(\theta)T}e^{\theta \sigma_A(\theta) + \theta \sigma_U(\theta)} \cdot \frac{1}{1 - e^{\theta (\rho_A(\theta)+\rho_U(\theta))}},\end{aligned}$$ if $\rho_A(\theta)+\rho_U(\theta) < 0$.
Proofs for the above theorem can for example be found in [@Beck:thesis; @fidler-iwqos06].
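For a concrete feeling of these bounds, the following Python sketch evaluates both of them for exponential-increment arrivals (Example \[ex:Exponential-Increments\] with $\lambda=2$) served at constant rate $r=1$; all numbers are illustrative:

```python
import math

def backlog_violation(theta, rho_a, sig_a, rho_u, sig_u, N):
    # P(b(t) > N) <= e^{-theta N} e^{theta(sig_a + sig_u)} / (1 - e^{theta(rho_a + rho_u)})
    drift = rho_a + rho_u
    assert drift < 0, "stability condition violated"
    return math.exp(-theta * N + theta * (sig_a + sig_u)) / (1.0 - math.exp(theta * drift))

def delay_violation(theta, rho_a, sig_a, rho_u, sig_u, T):
    # P(d(t) > T) <= e^{theta rho_u T} e^{theta(sig_a + sig_u)} / (1 - e^{theta(rho_a + rho_u)})
    drift = rho_a + rho_u
    assert drift < 0, "stability condition violated"
    return math.exp(theta * rho_u * T + theta * (sig_a + sig_u)) / (1.0 - math.exp(theta * drift))

# exponential increments with lam = 2 at a unit-rate server, theta = 0.5
lam, theta = 2.0, 0.5
rho_a = math.log(lam / (lam - theta)) / theta
p_backlog = backlog_violation(theta, rho_a, 0.0, -1.0, 0.0, N=20.0)
p_delay = delay_violation(theta, rho_a, 0.0, -1.0, 0.0, T=25.0)
```

Both violation probabilities decay exponentially in the backlog level $N$ and delay level $T$, respectively.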
This theorem gives us a method to achieve stochastic bounds on the virtual delay and backlog of a single server with a single input. This raises the question on how to achieve performance bounds on more complex networks. The idea here is to reduce a network to the single-flow-single-node case. To illustrate this we give an example of a slightly more complex network.
Assume the same network as above, but instead of a single flow entering the service element we have two flows $A_1$ and $A_2$ as input, each with their own bounding functions $\rho_i(\theta)$ and $\sigma_i(\theta)$ ($i\in\{1,2\}$). In this scenario we might be interested in the total backlog which can accumulate at the service element. For using the above result, we make an important observation: If $A_1$ and $A_2$ are stochastically independent, we can derive from their MGF-bounds a new bound for the aggregated arrivals: $$\begin{aligned}
\phi_{A_1(s,t) + A_2(s,t)}(\theta) & = \mathbb{E}(e^{\theta(A_1(s,t) + A_2(s,t))})=\phi_{A_1(s,t)}(\theta)\phi_{A_2(s,t)}(\theta) \\
& \leq e^{\theta \rho_1(\theta)(t-s) + \theta\sigma_1(\theta)}e^{\theta \rho_2(\theta)(t-s) + \theta\sigma_2(\theta)} \\
& = e^{\theta (\rho_1(\theta) + \rho_2(\theta))(t-s) + \theta(\sigma_1(\theta) + \sigma_2(\theta))}. \\\end{aligned}$$ By defining $\rho_A(\theta) = \rho_1(\theta) + \rho_2(\theta)$ and $\sigma_A(\theta) = \sigma_1(\theta) + \sigma_2(\theta)$ we can use Theorem \[thm:Fundamental-Theorem\] again and calculate the aggregated flow’s backlog.
It is important to describe exactly what happened in the above example: We have reduced a network consisting of two flows and a service element to a network with only a single flow and a service element. For this we calculated a new MGF-bound consisting of MGF-bounds we have known before. This idea of reducing networks makes SNC a powerful theory.
Reduction of Networks
---------------------
In this subsection we generalize the above result and show four methods for reducing a network’s complexity. The first of these network operations is a repetition of the above example. Proofs and many more details for the following results can for example be found in [@Beck:thesis].
\[lem:Multiplexing\] Assume a service element has two stochastically independent input flows $A_1$ and $A_2$ with MGF-bounds $\rho_i(\theta)$ and $\sigma_i(\theta)$; then, the aggregate has an MGF-bound with bounding functions $\rho(\theta) = \rho_1(\theta)+\rho_2(\theta)$ and $\sigma(\theta) = \sigma_1(\theta) + \sigma_2(\theta)$.
The next lemma simplifies a tandem of two service elements into a single service element which describes the end-to-end service.
(Figure \[fig:Tandem-Network\]: a flow $A$ traverses two service elements $U_1$ and $U_2$ in tandem.)
\[lem:Convolution\] Assume a tandem network as in Figure \[fig:Tandem-Network\], where the processes $U_1$ and $U_2$ are stochastically independent and MGF-bounded by the functions $\rho_i(\theta)$ and $\sigma_i(\theta)$; then, we can merge the two service elements into a single service element with input $A$ and output $C$, representing the end-to-end behavior. It has an MGF-bound with bounding functions $\rho(\theta) = \max(\rho_1(\theta),\rho_2(\theta))$ and $$\sigma(\theta) = \sigma_1(\theta) + \sigma_2(\theta) - 1/\theta \log(1 - e^{-\theta |\rho_1(\theta) - \rho_2(\theta)|}).$$
The next lemma ties in with a service element’s scheduling discipline. Consider again the situation in Figure \[fig:single-queue\], but with two input flows $A_1$ and $A_2$; now, instead of the system’s performance with respect to the aggregated flow we might be interested in the performance for a particular flow. Considering a subflow raises the question of how the flows’ arrivals are scheduled inside the service element. From the perspective of SNC the easiest scheduling policy is the strict priority policy (or arbitrary multiplexing): In this policy the flow with lower priority only receives service, if there are no arrivals of the higher priority flow enqueued.
\[lem:Demultiplexing\] Assume the above described scenario and that $A_1$ and $U$ are stochastically independent with bounding functions $\rho_A(\theta),\sigma_A(\theta)$ and $\rho_U(\theta),\sigma_U(\theta)$, respectively. This system can be reduced to a single-flow-single-node system for flow $A_2$ and a service element $U_l$ with MGF-bound $$\phi_{U_l(s,t)}(-\theta) \leq e^{\theta(\rho_A(\theta)+\rho_U(\theta))(t-s) + \theta(\sigma_A(\theta)+\sigma_U(\theta))}.$$
More elaborate scheduling policies have been analyzed in SNC. At this stage, however, the Calculator has only implemented the above method for calculating leftover service. Note that this is a worst case view with respect to the scheduling algorithm. By this we mean that any other scheduling, like FIFO or WFQ, gives more service to $A_2$ than arbitrary multiplexing does; therefore, the result of the above lemma can always be used as a lower bound for the service $A_2$ receives.
The next result is needed to produce bounds for intermediate nodes or flows. It gives an MGF-bound for a service element’s output.
\[lem:Deconvolution\] Assume the scenario as in Figure \[fig:single-queue\] again. If $A$ and $U$ are stochastically independent and MGF-bounded by bounding functions $\rho_A(\theta),\sigma_A(\theta)$ and $\rho_U(\theta),\sigma_U(\theta)$, respectively, we have for the output flow $B$: $$\phi_{B(s,t)}(\theta) \leq e^{\theta\rho_A(\theta)(t-s) + \theta(\sigma_A(\theta)+\sigma_U(\theta))}\cdot \frac{1}{1 - e^{\theta(\rho_A(\theta)+\rho_U(\theta))}},$$ if $\rho_A(\theta)+\rho_U(\theta) < 0$. By this $B$ is MGF-bounded with $\rho_B(\theta) = \rho_A(\theta)$ and $$\sigma_B(\theta) =\sigma_A(\theta) + \sigma_U(\theta) - 1/\theta \log(1 - e^{\theta(\rho_A(\theta) + \rho_U(\theta))}).$$
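The four network operations above can be sketched compactly by representing each MGF-bound as a pair of bounding functions $(\rho,\sigma)$ of $\theta$. The following Python sketch (our own illustration of the independent-case lemmata, not the Calculator’s code) composes such pairs:

```python
import math

# an MGF-bound is represented as a pair (rho, sigma) of functions of theta

def multiplex(b1, b2):            # aggregation of two independent flows
    (r1, s1), (r2, s2) = b1, b2
    return (lambda th: r1(th) + r2(th), lambda th: s1(th) + s2(th))

def convolve(b1, b2):             # tandem of two independent servers
    (r1, s1), (r2, s2) = b1, b2
    def sig(th):
        gap = abs(r1(th) - r2(th))
        assert gap > 0, "equal rates: the lemma is not applicable"
        return s1(th) + s2(th) - math.log(1.0 - math.exp(-th * gap)) / th
    return (lambda th: max(r1(th), r2(th)), sig)

def leftover(service, cross):     # strict-priority leftover service
    (ru, su), (ra, sa) = service, cross
    return (lambda th: ra(th) + ru(th), lambda th: sa(th) + su(th))

def output(arrival, service):     # bound on a server's departures
    (ra, sa), (ru, su) = arrival, service
    def sig(th):
        drift = ra(th) + ru(th)
        assert drift < 0, "stability condition violated"
        return sa(th) + su(th) - math.log(1.0 - math.exp(th * drift)) / th
    return (ra, sig)

def const_rate(r):
    return (lambda th: -r, lambda th: 0.0)

def exp_flow(lam):
    return (lambda th: math.log(lam / (lam - th)) / th, lambda th: 0.0)

# a priority flow A crosses server U; we bound its departures and the leftover service
A, U = exp_flow(2.0), const_rate(3.0)
A_out, U_left = output(A, U), leftover(U, A)
```

The resulting pairs can be composed further or plugged into the performance bounds of Theorem \[thm:Fundamental-Theorem\].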
All of the above results required some independence assumption between the analyzed objects. For the analysis of stochastically dependent objects, we use Hölder’s inequality:
Let $X$ and $Y$ be two random variables. It holds $$\mathbb{E}(XY) \leq (\mathbb{E}(X^p))^{\nicefrac{1}{p}}(\mathbb{E}(Y^q))^{\nicefrac{1}{q}}$$ for all pairs $p,q>1$ such that $\tfrac{1}{p} + \tfrac{1}{q} = 1$. Applying this to $e^{\theta X}$ and $e^{\theta Y}$ yields in particular $$\phi_{X+Y}(\theta) \leq (\phi_{X}(p\theta))^{\tfrac{1}{p}}(\phi_{Y}(q\theta))^{\tfrac{1}{q}}.$$
When we apply this inequality to the above results we get a modified set of network operations. For more details, we refer again to [@Beck:thesis].
In the case of stochastic dependence the bounding functions in Lemma \[lem:Multiplexing\] change to $\rho(\theta) = \rho_1(p\theta) + \rho_2(q\theta)$ and $\sigma(\theta) = \sigma_1(p\theta) + \sigma_2(q\theta)$.
\[lem:Dependent-Convolution\] In the case of stochastic dependence the bounding functions in Lemma \[lem:Convolution\] change to $\rho(\theta) = \max(\rho_1(p\theta), \rho_2(q\theta))$ and $$\sigma(\theta) = \sigma_1(p\theta) + \sigma_2(q\theta) - 1/\theta \log(1 - e^{-\theta |\rho_1(p\theta) - \rho_2(q\theta)|}).$$
In the case of stochastic dependence the bounding functions in Lemma \[lem:Demultiplexing\] change to $\rho(\theta) = \rho_A(p\theta) + \rho_U(q\theta)$ and $\sigma(\theta) = \sigma_A(p\theta) + \sigma_U(q\theta)$.
In the case of stochastic dependence the bounding functions in Lemma \[lem:Deconvolution\] change to $\rho_B(\theta) = \rho_A(p\theta) $ and $$\sigma_B(\theta) =\sigma_A(p\theta) + \sigma_U(q\theta) - 1/\theta \log(1 - e^{\theta(\rho_A(p\theta) + \rho_U(q\theta))}).$$
Now, we show how these network operations work together to reduce a complex network to the single-node-single-flow case. These examples are taken directly from Chapter 1 of [@Beck:thesis] lifted to MGF-bounded calculus.
(Figure \[fig:2-Nodes-2-Flows\]: the graph $\mathcal{G}$, in which the flows $A_1$ and $A_2$ traverse the tandem of service elements $U$ and $V$.)
We consider the network of Figure \[fig:2-Nodes-2-Flows\] and assume that the following MGF-bounds on the involved elements hold: $$\begin{aligned}
\phi_{A_1(s,t)}(\theta) & \leq e^{\theta\rho_{A_1}(\theta)(t-s) + \theta\sigma_{A_1}(\theta)} \\
\phi_{A_2(s,t)}(\theta) & \leq e^{\theta\rho_{A_2}(\theta)(t-s) + \theta\sigma_{A_2}(\theta)} \\
\phi_{U(s,t)}(-\theta) & \leq e^{\theta\rho_{U}(\theta)(t-s) + \theta\sigma_{U}(\theta)} \\
\phi_{V(s,t)}(-\theta) & \leq e^{\theta\rho_{V}(\theta)(t-s) + \theta\sigma_{V}(\theta)}.\end{aligned}$$
We present three examples for reducing the network using the operations defined in the lemmata above.
Consider the graph $\mathcal{G}$ given in Figure \[fig:2-Nodes-2-Flows\]. After merging both arrivals the graph can be simplified in two ways: Either apply Lemma \[lem:Convolution\] to the two service elements (resulting in graph $\mathcal{G}_{1}$ in Figure \[fig:Reduction-Example-Aggregate-First\]) or calculate an output bound for the first node’s departures (resulting in graph $\mathcal{G}_{1}^{\prime}$ in Figure \[fig:Reduction-Example-Aggregate-First\]). The graphs $\mathcal{G}_{1}$ and $\mathcal{G}_{1}^{\prime}$ describe the system for both arrivals aggregated and as such, can also be used to calculate performance bounds for only one of the flows. The difference between these two methods is that the graph $\mathcal{G}_{1}$ describes the system’s end-to-end behavior, whereas $\mathcal{G}_{1}^{\prime}$ describes the behavior at the service element $V$. The MGF-bounds of the quantities appearing in Figure \[fig:Reduction-Example-Aggregate-First\] can be calculated using Lemmas \[lem:Multiplexing\]-\[lem:Deconvolution\]. To show how these work together we derive here the bounding functions for the MGF-bound on $A^\prime := (A_1\oplus A_2)\oslash U$: First we combine the MGF-bounds of $A_1$ and $A_2$ into the MGF-bound $$\phi_{A_1(s,t)+A_2(s,t)}(\theta) \leq e^{\theta(\rho_{A_1}(\theta)+\rho_{A_2}(\theta))(t-s) + \theta(\sigma_{A_1}(\theta)+\sigma_{A_2}(\theta))}.$$ Next, we apply Lemma \[lem:Deconvolution\] to the aggregate and the service process $U$, resulting in $$\phi_{A^\prime(s,t)}(\theta) \leq e^{\theta(\rho_{A_1}(\theta)+\rho_{A_2}(\theta))(t-s) + \theta(\sigma_{A_1}(\theta)+\sigma_{A_2}(\theta)+\sigma_{U}(\theta))} \frac{1}{1 - e^{\theta(\rho_{A_1}(\theta)+\rho_{A_2}(\theta)+\rho_U(\theta))}},$$ if $\rho_{A_1}(\theta)+\rho_{A_2}(\theta)+\rho_U(\theta)<0$.
(Figure \[fig:Reduction-Example-Aggregate-First\]: (a) convolution after multiplexing – graph $\mathcal{G}_1$: the aggregate $A_1 \oplus A_2$ enters $U\otimes V$; (b) deconvolution after multiplexing – graph $\mathcal{G}_1^\prime$: the output $(A_1 \oplus A_2)\oslash U$ enters $V$.)
\[ex:Subtract-First\] Another method to reduce $\mathcal{G}$ is to subtract one of the flows – say $A_{2}$ – first. Afterwards either Lemma \[lem:Convolution\] or Lemma \[lem:Deconvolution\] can be applied, leading to the graphs $\mathcal{G}_{2}$ and $\mathcal{G}_{2}^{\prime}$ in Figure \[fig:Reduction-Example-Subtract-First\]. The graph $\mathcal{G}_{2}$ describes an end-to-end behavior, whereas $\mathcal{G}_{2}^{\prime}$ is the local analysis at the second node. In contrast to the previous example, the flows are considered separately throughout the whole analysis. This approach proves to be better in general topologies in which flows interfere only locally. Note that following this approach a stochastic dependency occurs in graph $\mathcal{G}_2$ when we use Lemma \[lem:Convolution\]: The process $A_2$ appears in both service descriptions. As a consequence we need to use its variation formulated in Lemma \[lem:Dependent-Convolution\], which introduces a set of Hölder parameters; similarly, in graph $\mathcal{G}^\prime_2$ we have to employ a variation of Theorem \[thm:Fundamental-Theorem\] when we want to calculate performance bounds (the process $A_2$ appears in the arrivals and in the service description).
(Figure \[fig:Reduction-Example-Subtract-First\]: (a) convolution after subtraction – graph $\mathcal{G}_2$: $A_1$ enters $[U\ominus{A}_2]^+\otimes[V\ominus({A}_2\oslash U)]^+$; (b) deconvolution after subtraction – graph $\mathcal{G}_2^\prime$: $A_1 \oslash [U\ominus {A}_2]^+$ enters $[V\ominus({A}_2\oslash U)]^+$.)
Instead of handling one of the flows first, one can also use Lemma \[lem:Convolution\] to merge the two service elements first. The resulting node is labeled by $U\otimes V$. The graph $\mathcal{G}_{3}$ in Figure \[fig:Reduction-Example-Convolve-First\](a) equals $\mathcal{G}_{1}$; indeed, just the order of aggregation and convolution was switched. Subtracting a crossflow from the convoluted service element, instead, would lead to Figure \[fig:Reduction-Example-Convolve-First\](b). This last graph $\mathcal{G}^\prime_3$ is generally assumed to yield the best end-to-end bounds for the flow $A_1$; however, this strategy of convolving before calculating leftover services cannot be applied in general feedforward networks.
(Figure \[fig:Reduction-Example-Convolve-First\]: (a) multiplexing after convolution – graph $\mathcal{G}_3$: $A_1 \oplus A_2$ enters $U\otimes V$; (b) subtraction after convolution – graph $\mathcal{G}_3^\prime$: $A_1$ enters $[(U\otimes V)\ominus A_2]^+$.)
We see there are several ways of reducing even this simple example of a network. The results differ in quality and also in what exactly we want to analyze (the performance with respect to a single flow vs. the aggregate and the end-to-end performance vs. the performance at the network’s second node). Note also that the choice of network operations applied may or may not result in Hölder parameters appearing in the resulting performance bounds; therefore any automatic process that performs these actions must keep track whether stochastic dependencies occur and where exactly Hölder parameters must be introduced.
End-to-end Results
------------------
Now, we discuss SNC’s capabilities to perform an end-to-end analysis of a queueing system. Often one is interested in the end-to-end delay of a tandem of servers as in Figure \[fig:Tandem-Network\], but with $n$ service elements instead of two. A typical scenario would be the end-to-end delay between a client and a server with several routers or switches in between.
Given such a network we could theoretically calculate an end-to-end delay bound in two ways: (1) We could start by reducing the network to the first service element and calculate a delay bound for this element in isolation; next, we reduce the original network to the second service element and calculate another local delay bound, and so on. All these single-node delay bounds can be combined into an end-to-end delay bound by “adding them up”. While this approach works in theory, we know the resulting bounds to be very loose in general. (2) The other approach is to use Lemma \[lem:Convolution\] to get an end-to-end description of the system and use it to derive a delay bound directly. From the theory of network calculus we know that this approach is beneficial.
Still, this second course of action has a problem: Inspecting Lemma \[lem:Convolution\] reveals that with each application of it a term of the form $\frac{1}{1 - e^{-\theta |\rho_i(\theta) - \rho_{i+1}(\theta)|}}$ enters the equations. These terms worsen the delay bounds, especially when the quantities $\rho_i(\theta)$ and $\rho_{i+1}(\theta)$ are similarly sized (in fact, if they are equal the lemma cannot deliver a result at all). The next theorem shows a method for avoiding these terms completely (see Theorem 3.1 in [@Beck:thesis] and also [@fidler-iwqos06]). This can be seen as an end-to-end convolution, whereas successively applying Lemma \[lem:Convolution\] corresponds to a node-by-node convolution.
\[thm:End-to-End\] Fix some $\theta>0$ and consider a sequence of two service elements as in Lemma \[lem:Convolution\]; further, let $A$ be MGF-bounded with functions $\rho_A$ and $\sigma_A$. Let $A$, $U$, and $V$ be stochastically independent. Under the stability condition $\rho_{A}(\theta)<-\rho_{U}(\theta)\wedge-\rho_{V}(\theta)$, the end-to-end performance bounds $$\begin{aligned}
\mathbb{P}(\mathfrak{d}(t)>T)\leq e^{-\theta \rho_A(\theta)T} \frac{e^{\theta(\sigma_{A}(\theta)+\sigma_{U}(\theta)+\sigma_{V}(\theta))}} {(1-e^{\theta(\rho_{U}(\theta)+\rho_{A}(\theta))})(1-e^{\theta(\rho_{V}(\theta)+\rho_{A}(\theta))})}\end{aligned}$$ holds for all $t$ and $T$.
The above theorem easily generalizes to $N$ hops. Denoting the bounding functions of the $i$-th server by $\rho_{i}$ and $\sigma_i$ we have $$\mathbb{P}(\mathfrak{d}(t)>T)\leq e^{-\theta \rho_A(\theta)T} \frac{e^{\theta \sigma_A(\theta) + \sum_i \theta \sigma_i(\theta)}} {\prod_i 1 - e^{\theta(\rho_i(\theta) +\rho_A(\theta))}}$$ under the stability condition $\rho_A(\theta) < \bigwedge_i - \rho_i(\theta)$.
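Numerically, the $N$-hop bound contributes one factor per hop. The sketch below (Python, illustrative parameters) evaluates it for a flow with exponential increments crossing three unit-rate servers:

```python
import math

def e2e_delay_violation(theta, rho_a, sig_a, servers, T):
    # servers: list of (rho_i, sigma_i) pairs with rho_i < 0
    prob = math.exp(-theta * rho_a * T + theta * (sig_a + sum(s for _, s in servers)))
    for rho_i, _ in servers:
        assert rho_a < -rho_i, "stability condition violated"
        prob /= 1.0 - math.exp(theta * (rho_i + rho_a))  # one factor per hop
    return prob

theta = 0.5
rho_a = math.log(2.0 / 1.5) / theta      # exponential increments, lam = 2
servers = [(-1.0, 0.0)] * 3              # three unit-rate hops
p = e2e_delay_violation(theta, rho_a, 0.0, servers, T=40.0)
```

The bound decays exponentially in the delay level $T$ regardless of the number of hops.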
For stochastically dependent services or arrivals, the introduction of Hölder parameters is needed similarly to the previous subsection.
Note that by using lemmata \[lem:Multiplexing\] - \[lem:Deconvolution\] (or their respective variants for stochastically dependent cases) we can reduce any feedforward network to a tandem of $N$ service elements for any flow of interest with $N$ hops. In doing so, however, the exact sequence of performed network operations will determine the number of Hölder parameters. The optimal way of reducing the network to the tandem is not known and subject to current research.
Code Structure: An Overview on the Calculator {#sec:Code Structure}
=============================================
Now, we give an overview on the Calculator and its code structure. The work-flow with the program is the following.
1.  The network must be modeled and given to the Calculator by the user. This requires deriving MGF-bounds for all input flows and service elements. If pre-existing stochastic dependencies are known, they must be given to the program; otherwise the program will assume stochastic independence. This only concerns the initial stochastic processes given to the program; for all intermediate results the tool keeps track of stochastic dependencies by itself. For example, the stochastic dependencies occurring in Example \[ex:Subtract-First\] will be recognized by the tool. Networks can either be entered through the GUI or by loading a file holding the description.
2. After giving the network to the tool it can perform an *analysis* of it for any given flow and node of interest (or flow and path of interest). This translates into using Lemmata \[lem:Multiplexing\] - \[lem:Deconvolution\] (and their variants) until the network has been reduced to one on which Theorem \[thm:Fundamental-Theorem\] can be applied. This step is performed entirely on a symbolic level, meaning: The Calculator works on the level of functions and composes these as defined by Lemmata \[lem:Multiplexing\] - \[lem:Deconvolution\]. As a last action of this analysis step the tool applies Theorem \[thm:Fundamental-Theorem\] (again on the level of functions) to compute the function that describes the delay- or backlog bound.
3. In the *optimization*-step the tool takes the function from the analysis-step and optimizes it by all the parameters it includes. The parameters to be optimized include at least $\theta$ and might include any number of additional Hölder parameters; consequently, it is important that any optimization-method implemented to the tool is flexible to the actual number of optimization parameters occurring. This step is usually the one that takes the most computational time.
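A simple optimization strategy is a grid search over $\theta$ (shown here without additional Hölder parameters). The Python sketch below (our own illustration, not the tool’s optimizer) minimizes the single-node delay bound of Theorem \[thm:Fundamental-Theorem\] for exponential-increment arrivals at a constant-rate server:

```python
import math

def delay_bound(theta, lam, rate, T):
    # single node: exponential increments (parameter lam), constant-rate server
    rho_a = math.log(lam / (lam - theta)) / theta
    drift = rho_a - rate
    if drift >= 0:
        return float("inf")              # this theta violates stability
    return math.exp(-theta * rate * T) / (1.0 - math.exp(theta * drift))

def optimize_theta(lam, rate, T, steps=1000):
    best_theta, best = None, float("inf")
    for i in range(1, steps):
        theta = lam * i / steps          # theta must stay below lam
        value = delay_bound(theta, lam, rate, T)
        if value < best:
            best_theta, best = theta, value
    return best_theta, best

theta_star, p_opt = optimize_theta(lam=2.0, rate=1.0, T=25.0)
```

Already in this one-dimensional case the optimized $\theta$ improves the bound by many orders of magnitude compared to an arbitrary choice; with each Hölder parameter the search space grows by one dimension, which is why this step dominates the computation time.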
The tool reflects the above roadmap by consisting of different exchangeable parts. Figure \[fig:Calculator-Modules\] presents these modules.
- The GUI is the interface between the program’s core and the user. We have implemented a simple GUI for the tool, which allows the user to construct and manipulate the network by hand. It also gives access to the program’s analysis-part and optimization-part. Note that the GUI is not necessary to use the Calculator; instead, the provided packages can mostly be used like a library.
- The Calculator is the core of the program. It is the interface between the other modules and relays commands and information as needed.
- The Network stores all the needed topology. This includes the flows and nodes with their parameters, but also MGF-bounds on service processes and flows and Hölder parameters that are created during the analysis-step.
- The Analysis is responsible for performing the algebraic part. It is coded entirely on a symbolic level.
- The Optimizer has the task to “fill” the functions given by the Analysis with numerical values. Following an optimization strategy (or heuristic) it will find a near optimal set of parameters and calculate the corresponding performance bound.
(Figure \[fig:Calculator-Modules\]: module overview. The User interacts with the GUI, which communicates with the Calculator core; the Calculator connects to the Network (containing the Flows, the Hoelder parameters, and the Vertices), to the Analysis, and to the Optimizer.)
The Calculator’s main class is the [SNC]{} class, which is a singleton bridging the communication between GUI and backend. Alternatively, the [SNC]{} class can be used directly. It provides a command-pattern-style interface; accordingly, the underlying network is altered by sending commands to the [SNC]{} class; moreover, these commands are stored in an [undoRedoStack]{}, which is used to track, undo, and redo changes. Note that loosening connections and facilitating the use of the backend without a GUI is still work in progress.
When exceptions occur in the backend, the control flow will try to recover as well as possible. If this is not possible, a generic runtime exception is thrown. These exceptions are currently specified in [misc]{}. The design choice of generic exceptions extending the Java built-in [RuntimeException]{} stems from the fact that we wanted to avoid methods with a variety of checked exceptions. As a result, the code is less cluttered and more readable. We are aware that this topic is under debate, especially in the Java community, and that this practice needs thorough documentation.
Currently the code is organized into the following packages. All packages start with [unikl.disco.]{}, which we omit in this list:
- [calculator]{} This package contains only the main class, called [SNC]{}. It is the core of the program.
- [calculator.commands]{} This package contains the various commands by which the network is manipulated (adding a flow, removing a vertex, etc.)
- [calculator.gui]{} This package includes all classes needed to generate, display, and interact with the tool’s GUI. Except for the [FlowEditor]{} class, the GUI is modular, easy to change and extend since actions are separated from markup.
- [calculator.network]{} This package includes all classes related to topological information like [Flow]{} and [Vertex]{}. It further contains the classes needed to perform an analysis.
- [calculator.optimization]{} This package contains all classes needed to evaluate and optimize performance bounds. Note that the result of an analysis is given as an object of type [Flow]{} and belongs to the [calculator.network]{} package. The bound’s information must be “translated” into backlog- and delay bounds first, before numerical values can be provided. This is why the [optimization]{}-package contains classes like [BacklogBound]{} or [BoundType]{}.
- [calculator.symbolic\_math]{} This is a collection of algebraic manipulations. We find for example an [AdditiveComposition]{} here, which just combines two symbolic functions into a new one. When evaluated with a set of parameters this returns the sum of the two atom-functions evaluated with the same set of parameters; furthermore, we find in this package symbolic representations of MGF-bounds for flows ([Arrival]{}) and service elements ([Service]{}).
- [calculator.symbolic\_math.functions]{} Some arrival classes or specific manipulations (such as Lemma \[lem:Deconvolution\]) require the repeated usage of very specific algebraic manipulations. This package collects these operations.
- [misc]{} A package containing miscellaneous classes not fitting well anywhere else and generic runtime exceptions.
Input & Output {#sec:input_output}
==============
There are several different methods to input and output data when using the calculator:
- Using the functions provided by the GUI
- Writing/reading networks from/to text files
- Using custom defined functions when the calculator is used as a library
The GUI provides methods for adding, removing, subtracting and convoluting flows and vertices. This is useful for quick tests and experimentation but cumbersome for larger networks. To that end, we implemented a simple text-file based interface for saving and loading networks. Note that at the moment only networks without dependencies can be loaded/saved! Extending these methods to the general case is subject to future work.
The file format is specified as follows:
- A line starting with “\#” is a comment and ignored
- First, the interfaces (vertices) are specified, each in its own line
- After the last interface definition, an “EOI” ends the interface block
- Then, the flows are specified, again, each in its own line, until “EOF” ends the document
An interface line has the following syntax (without the linebreak):
I <vertexname>, <scheduling policy>, <type of service>,
<parameters> ...
At the moment only FIFO scheduling and constant rate (“CR”) service are supported. A flow line has the following syntax (again, without the linebreak):
F <flowname>, <number of vertices on route>,
<name of first hop>:<priority at this hop>, ...,
<type of arrival at first hop>, <parameters> ...
The priority is a natural number with 0 being the highest. At the moment, the following arrival types are possible (with parameters):
- CONSTANT – rate ($\geq 0$!)
- EXPONENTIAL – mean
- EBB (exponentially bounded burstiness) – rate, decay, prefactor
- STATIONARYTB – rate, bucket, \[maxTheta\]
where [maxTheta]{} is optional. In the following we see a sample network with three vertices and one exponentially distributed flow.
# Configuration of a simple network
# Interface configuration. Unit: Mbps
I v1, FIFO, CR, 1
I v2, FIFO, CR, 3
I v3, FIFO, CR, 4
EOI
# Traffic configuration. Unit Mbps or Mb
# One flow with the route v1->v2->v3
F F1, 3, v1:1, v2:1, v3:2, EXPONENTIAL, 2
EOF
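As an illustration of this line syntax, the following stand-alone Java sketch tokenizes an interface line into its fields. This is our own illustration, not the Calculator's actual parser; the class and method names are made up.

```java
import java.util.Arrays;

// Hypothetical sketch of parsing one interface line of the text-file
// format described above. Not part of the Calculator's code base.
public class ConfigLineParser {
    public static String[] parseInterfaceLine(String line) {
        if (!line.startsWith("I "))
            throw new IllegalArgumentException("not an interface line: " + line);
        // Strip the leading "I " marker and split the remainder on commas.
        String[] fields = line.substring(2).split(",");
        for (int i = 0; i < fields.length; i++) fields[i] = fields[i].trim();
        return fields; // {vertex name, scheduling policy, service type, parameters...}
    }

    public static void main(String[] args) {
        // One line of the sample network above.
        System.out.println(Arrays.toString(parseInterfaceLine("I v2, FIFO, CR, 3")));
    }
}
```

A flow line could be tokenized the same way, with the route entries split once more on the colon to separate hop name and priority.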
Code Representation of SNC Results and Concepts {#sec:Code Representation}
===============================================
Now we elaborate on some of the most important classes of the Calculator and how they represent core-concepts of SNC.
The [Network]{}-class
---------------------
This class is responsible for storing a network’s topology and manipulating its elements. Its key members are three [Map]{}s:
- [flows]{} is of type [Map<Integer,Flow>]{} and is a collection of flows, each with a unique ID. Each flow represents one flow’s entire path through the network. See also the subsection about [flows]{} below.
- [vertices]{} is of type [Map<Integer,Vertex>]{} and is a collection of vertices, each with a unique ID. Each vertex represents one service element of the network. It does not matter how many flows this service element has to process, it will always be modeled by a single [Vertex]{}. See also the below subsection about [vertices]{}.
- [hoelders]{} is of type [Map<Integer,Hoelder>]{}. Each newly introduced Hölder parameter (actually the pair of parameters $p$ and $q$, which is defined by $1/p + 1/q = 1$ and can hence be represented by a single variable) is collected in this object and receives a unique ID. This data-structure is needed to keep track of and distinguish the introduced parameters.
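Since $q$ is fully determined by $p$ through $1/p + 1/q = 1$, i.e. $q = p/(p-1)$, a single stored value suffices. A minimal sketch of such a class follows; the tool's actual [Hoelder]{} implementation may differ in detail.

```java
// Minimal sketch: one stored value p represents the Hölder pair (p, q)
// with 1/p + 1/q = 1. Class layout here is an assumption.
public class Hoelder {
    private final double p; // must satisfy p > 1 so that q is positive

    public Hoelder(double p) {
        if (p <= 1.0) throw new IllegalArgumentException("need p > 1");
        this.p = p;
    }

    public double getP() { return p; }

    // q is fully determined by p: q = p / (p - 1).
    public double getQ() { return p / (p - 1.0); }
}
```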
These [Map]{}s are created and manipulated by various methods of the [Network]{}-class. Some of these methods are straightforward, such as [addVertex]{}, [addFlow]{}, and [setServiceAt]{}. Other methods are more involved and directly reflect core concepts of SNC: The method [computeLeftover]{}, for example, manipulates the network as follows: For a specific node it identifies the flow that has priority at this service element. It then calculates the leftover service description after serving this flow (Lemma \[lem:Demultiplexing\]) and gives this description to this [Vertex]{}; furthermore, the method gives as output an object of type [Arrival]{}, which encodes the MGF bound on this node’s output for the just served flow (Lemma \[lem:Deconvolution\]).
The [Flow]{}-class and the [Arrival]{}-class
--------------------------------------------
These two classes are closely related: The [Flow]{} class can be thought of as the topological information of a flow through the network. It contains a [List]{} of integer-IDs that describes the flow’s path through the network and a [List]{} of corresponding priorities. It further has a [List]{} of [Arrival]{}-objects. These objects describe the flow’s MGF-bounds at a given node. Usually a flow added to the network only has a single [Arrival]{}-object in this list, which is the MGF-bound at that flow’s ingress node. Every [Flow]{}-object keeps track of how many hops arrival-bounds are known for in the integer variable [established\_arrivals]{}.
An [Arrival]{}-object’s most important members are the two [SymbolicFunction]{}s [rho]{} and [sigma]{}. These directly represent the bounding-functions $\rho$ and $\sigma$ of an MGF-bound (see Definition \[def:Arrival-Bound\]); further important members are two [Set<Integer>]{}s, which keep track of the IDs of the flows and services, respectively, on which this arrival is stochastically dependent.
The [Vertex]{}-class and the [Service]{}-class
----------------------------------------------
Similarly to [Flow]{} and [Arrival]{} these two classes are closely connected to each other. Each [Vertex]{}-object has a member of type [Service]{}, which describes its service via an MGF-bound (Definition \[def:Service-Bound\]); furthermore, a [Vertex]{}-object has members [priorities]{} (of type [Map<Integer,Integer>]{}) and [incoming]{} (of type [Map<Integer, Arrival>]{}) to identify which flow receives the full service and which flows are incoming to this node.
A [Service]{}-object is the equivalent of an [Arrival]{}-object on the service side. It also contains two [SymbolicFunction]{}s called [rho]{} and [sigma]{} and two [Set<Integer>]{}s to keep track of stochastic dependencies.
The [SymbolicFunction]{}-interface
----------------------------------
This interface lies at the core of the symbolic computations made to analyze a network. Each MGF-bound is represented by two functions $\rho$ and $\sigma$, which find their representation as [SymbolicFunction]{}s in the code. This interface’s most important method is the [getValue]{}-method. It takes a [double]{} (the $\theta$) and a [Map<Integer,Hoelder>]{} (the possibly empty set of Hölder parameters) as input and evaluates the function at this point. A simple example is the [ConstantFunction]{}-class, which implements this interface. When the method [getValue]{} is called, an object of this kind just returns a constant value, provided that the [Map<Integer,Hoelder>]{} is empty. Mathematically, such an object just represents $f(\theta) = c$, which can for example be found in the MGF-bound of constant rate arrivals or service elements.
The modeling power here lies in the composition of [SymbolicFunction]{}s; for example, when we want to merge two constant rate arrivals, their MGF-bound would contain $\rho_{agg}(\theta) = r_1 + r_2$ with $\rho_1(\theta) = r_1$ and $\rho_2(\theta)= r_2$ being the subflows’ rates, respectively. The class [AdditiveComposition]{} implements the [SymbolicFunction]{}-interface itself and has two members of type [SymbolicFunction]{}. These are called atom-functions; in this scenario the atom-functions would be two [ConstantFunction]{}s with rates $r_1$ and $r_2$. When the [getValue]{}-method of [AdditiveComposition]{} is called it will relay the given parameters to its atom-functions, get their values ($r_1$ and $r_2$), and return their sum to the caller; indeed, [AdditiveComposition]{} is just a representation of the plus-sign in $r_1 + r_2 = \rho_1(\theta) + \rho_2(\theta)$.
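The composition idea can be illustrated with a simplified, self-contained sketch. This is our own simplification: Hölder parameters are reduced to a plain map of doubles, and the signatures deviate from the tool's real interfaces.

```java
import java.util.Map;

// Simplified sketch of the SymbolicFunction idea described above.
interface SymbolicFunction {
    double getValue(double theta, Map<Integer, Double> hoelders);
}

// f(theta) = c, e.g. the rho of a constant rate arrival.
class ConstantFunction implements SymbolicFunction {
    private final double c;
    ConstantFunction(double c) { this.c = c; }
    public double getValue(double theta, Map<Integer, Double> hoelders) {
        return c;
    }
}

// Represents the plus sign in rho_1(theta) + rho_2(theta).
class AdditiveComposition implements SymbolicFunction {
    private final SymbolicFunction left, right; // the atom-functions
    AdditiveComposition(SymbolicFunction l, SymbolicFunction r) { left = l; right = r; }
    public double getValue(double theta, Map<Integer, Double> hoelders) {
        // Relay the parameters to both atoms and return the sum.
        return left.getValue(theta, hoelders) + right.getValue(theta, hoelders);
    }
}

public class SymbolicDemo {
    public static void main(String[] args) {
        SymbolicFunction agg = new AdditiveComposition(
                new ConstantFunction(1.5), new ConstantFunction(2.5));
        System.out.println(agg.getValue(0.1, Map.of())); // sum of the two rates
    }
}
```

Deeper compositions arise naturally: an [AdditiveComposition]{} can itself be an atom of another composition, which is how the analysis step builds up complex bounding functions symbolically.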
The [AbstractAnalysis]{}-class
------------------------------
The abstract class [AbstractAnalysis]{} defines the methods and members an analysis of a network needs. It serves as a starting point for concrete analysis classes. Its members include a network’s topological information together with the indices of the flow of interest and the service of interest, so the analysis knows what performance the caller is interested in. The important method here is the [analyze]{}-method of the [Analyzer]{} interface, which every analysis has to implement. This method (to be defined by any realization of this abstract class) gives as output an object of type [Arrival]{}, which represents a performance bound; in fact, remembering the bounds from Theorem \[thm:Fundamental-Theorem\] $$\begin{aligned}
\mathbb{P}(\mathfrak{b}(t)>N) &\leq e^{-\theta N}e^{\theta \sigma_A(\theta) + \theta \sigma_U(\theta)} \cdot \frac{1}{1 - e^{\theta (\rho_A(\theta)+\rho_U(\theta))}} \\
\mathbb{P}(\mathfrak{d}(t)>T) &\leq e^{\theta \rho_U(\theta)T}e^{\theta \sigma_A(\theta) + \theta \sigma_U(\theta)} \cdot \frac{1}{1 - e^{\theta (\rho_A(\theta)+\rho_U(\theta))}}.\end{aligned}$$ we see that we can split these bounds into a part that depends on the bound’s value ($N$ or $T$, respectively) and a factor that does not depend on the bound’s value. So, we can also write: $$\mathbb{P}(\mathfrak{b}(t)>N) \leq e^{\theta \rho_\mathfrak{b}(\theta) N + \theta \sigma_\mathfrak{b}(\theta)}$$ with $\rho_\mathfrak{b}(\theta) := -1$ and $\sigma_\mathfrak{b}(\theta) := \sigma_A(\theta) + \sigma_U(\theta) - \tfrac{1}{\theta} \log(1 - e^{\theta (\rho_A(\theta) + \rho_U(\theta))})$. And: $$\mathbb{P}(\mathfrak{d}(t)>T) \leq e^{\theta \rho_\mathfrak{d}(\theta) T + \theta \sigma_\mathfrak{d}(\theta)}$$ with $\rho_\mathfrak{d}(\theta) = \rho_U(\theta)$ and $\sigma_\mathfrak{d}(\theta) = \sigma_\mathfrak{b}(\theta)$. Note that $\rho_U(\theta)$ is negative for a service description, so both representations decay in $N$ and $T$, respectively. This representation has the advantage that we can use the already implemented operations for MGF-bounds for our performance bounds; for this reason, the output of the [analyze]{}-method is an object of type [Arrival]{}, which is how the code represents an MGF-bound.
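The backlog bound can be checked numerically. The following is our own sketch, not part of the tool, evaluating the bound of Theorem \[thm:Fundamental-Theorem\] for a fixed $\theta$ (using the decaying $e^{-\theta N}$ form and a negative $\rho_U$ for the service); all numerical values are illustrative.

```java
// Numerical sketch of the backlog bound for fixed theta.
// rho_U and sigma_U are negative for a service description.
public class BacklogBound {
    public static double backlogBound(double N, double theta,
            double rhoA, double sigmaA, double rhoU, double sigmaU) {
        double stability = theta * (rhoA + rhoU);
        // The geometric sum behind the bound only converges if the
        // arrival rate is below the service rate (stability < 0).
        if (stability >= 0) return Double.POSITIVE_INFINITY;
        return Math.exp(-theta * N)
             * Math.exp(theta * (sigmaA + sigmaU))
             / (1.0 - Math.exp(stability));
    }

    public static void main(String[] args) {
        // Arrival rate 2, service rate 8 (rho_U = -8), no burst terms.
        System.out.println(backlogBound(10.0, 0.5, 2.0, 0.0, -8.0, 0.0));
    }
}
```

As expected, the bound decreases exponentially in $N$ and diverges when the stability condition $\rho_A(\theta) + \rho_U(\theta) < 0$ is violated.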
The [AbstractOptimizer]{}-class
-------------------------------
Similar to the [AbstractAnalysis]{} class, this abstract class serves as a starting point for implementing optimizers. It implements the [minimize]{}-method of the [Optimizer]{} interface, which every optimizer has to implement. This method takes as input the granularity to which the continuous space of optimization parameters should be discretized and returns the minimal value found by the optimization algorithm. Its most important member is [bound]{}, which is basically the MGF-bound presented in the previous subsection. Together with the class [BoundType]{} and the interface [Optimizable]{} the function to be optimized is defined. This can either be a backlog- or delay bound as defined in the previous subsection, or it can be their inverse functions, i.e., the smallest bound $N$ or $T$ that can be found for a given violation probability $\varepsilon$. For this the bounds from Theorem \[thm:Fundamental-Theorem\] must be solved for $N$ and $T$.
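The simplest conceivable realization of such a [minimize]{}-method is a plain grid search over $\theta$. The sketch below is our own illustration (not the tool's optimizer) for a bound that depends on $\theta$ only, i.e., without additional Hölder parameters.

```java
import java.util.function.DoubleUnaryOperator;

// Illustrative grid-search optimizer: scan theta with a given
// granularity and keep the smallest bound value encountered.
public class GridSearchOptimizer {
    public static double minimize(DoubleUnaryOperator bound,
                                  double thetaMax, double granularity) {
        double best = Double.POSITIVE_INFINITY;
        for (double theta = granularity; theta < thetaMax; theta += granularity) {
            best = Math.min(best, bound.applyAsDouble(theta));
        }
        return best;
    }

    public static void main(String[] args) {
        // Toy bound with a unique minimum of 0.25 at theta = 1.
        DoubleUnaryOperator f = t -> (t - 1.0) * (t - 1.0) + 0.25;
        System.out.println(minimize(f, 3.0, 0.01));
    }
}
```

With Hölder parameters present, the search space becomes multi-dimensional, which is why the number of optimization parameters must be handled flexibly.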
APIs and Extending the Calculator {#sec:APIs and Extensions}
=================================
The interfaces and abstract classes provide a good starting point when extending the calculator. The backend makes heavy use of the factory pattern. As long as new classes implement the necessary interfaces, extending the behavior is easy. The only exception is the [FlowEditor]{} of the GUI, which we plan to rewrite as soon as possible.
In this section we describe in more detail how users can implement their own models. For this we cover four cases; we describe how users can implement their own
- arrival model to the Calculator and its GUI, given some known MGF-bounds.
- service model to the Calculator and its GUI, given some known MGF-bounds.
- method of analysis to the Calculator and its GUI.
- method of parameter optimization to the Calculator and its GUI.
These descriptions are subject to changes of the code and we strongly recommend to pay attention to the code’s documentation before implementing any of the above.
Adding Arrival Models
---------------------
To add a new arrival model to the calculator we need to be able to write the arrivals in an MGF-bounded form as in Definition \[def:Arrival-Bound\]. As an example for this documentation we consider an arrival with an exponentially distributed amount of data arriving in each time step, with rate parameter $\lambda$ (see Example \[ex:Exponential-Increments\]): $$\mathbb{E}(e^{\theta(A(t)-A(s))})\leq\left(\frac{\lambda}{\lambda-\theta}\right)^{t-s}\qquad\text{for all }\theta<\lambda.$$ In this case $\rho(\theta)=\tfrac{1}{\theta}\log(\tfrac{\lambda}{\lambda-\theta})$ and $\sigma(\theta)=0$.
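A quick numerical sanity check of this $\rho$ (our own sketch, not part of the tool): for $\theta \to 0$ the bounding rate approaches the mean rate $1/\lambda$, and it grows monotonically in $\theta$.

```java
// rho(theta) = log(lambda / (lambda - theta)) / theta for arrivals with
// i.i.d. exponential increments; valid only for 0 < theta < lambda.
public class ExponentialArrival {
    public static double rho(double theta, double lambda) {
        if (theta <= 0 || theta >= lambda)
            throw new IllegalArgumentException("need 0 < theta < lambda");
        return Math.log(lambda / (lambda - theta)) / theta;
    }

    public static void main(String[] args) {
        // For lambda = 0.5 the mean rate is 1/lambda = 2 MB per slot;
        // rho approaches it as theta -> 0 and exceeds it for larger theta.
        System.out.println(rho(1e-6, 0.5));
        System.out.println(rho(0.25, 0.5));
    }
}
```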
When we have appropriate $\sigma$ and $\rho$ we can implement the arrival model, by performing changes in the following classes:
1. In [ArrivalFactory]{} we write a new method [buildMyModel(parameter1,...)]{} with any input parameters needed for the model (like a rate-parameter, etc.). In this method we construct $\sigma$ and $\rho$ as symbolic functions. For this we might have to write our own new symbolic functions. These go into the package [unikl.disco.calculator.symbolic\_math.functions]{}. See the other already implemented arrival models for examples.
2. We add our new model in the list of arrival types in the class [ArrivalType]{}.
3. To make our new model available in the GUI we need to change the class [FlowEditor]{}
1. First we need to prepare the dialog so it can collect the parameters from users’ input. Under the comment-line “Adds the cards for the arrival” we can find one card for each already implemented arrival model. We add our arrival model here appropriately.
2. We add our newly created card to [topCardContainer]{} in the directly subsequent lines.
3. A bit further down the code we can find the action the dialog should perform after the [APPROVE\_OPTION]{}. We add our own [if]{}-clause and follow the examples of the already implemented flows for how to generate the [Arrival]{}-object from the parameters entered by the user. Make sure to use [return;]{} to jump out of the [if]{}-clause whenever a parameter could not be read from the input-fields or was initialized incorrectly (e.g., a negative rate was given).
Adding Service Models
---------------------
Adding new service models is completely parallel to how to add new arrival models. Again MGF-bounds must be available to implement a new model (see Definition \[def:Service-Bound\]). Changes to the code must be made in the classes: [ServiceFactory]{}, [ServiceType]{}, and [VertexEditor]{}. For exact details, compare to the changes being performed for adding new arrival models.
Adding a new Analysis
---------------------
To add a new method for analyzing a network we follow these steps:
- We construct a new class extending the [AbstractAnalysis]{}-class. We must make sure that the output for the [analyze]{}-method produces the required performance measure in an MGF-bound format and is an [Arrival]{}-object.
- Next we add the new analysis in the class [AnalysisType]{}.
- We add a new case in the class [AnalysisFactory]{}. Should our analysis require more parameters than the ones offered by the [getAnalyzer]{}-method of [AnalysisFactory]{} we must make corresponding changes to the factory. When doing so these changes must be propagated to the [AnalysisDialog]{} class and the [analyzeNetwork]{}-method of the [SNC]{}-class. In this case, however, we would recommend to switch to a builder pattern instead.
Adding a new Optimization
-------------------------
To add a new method for optimizing a performance bound we follow these steps:
- We construct a new class extending the [AbstractOptimizer]{}-class. The new optimizer must define the [minimize]{}-method: The code for how to find a near optimal value goes in here.
- Afterwards we add the new optimizer to the [OptimizationType]{}-class and as a new case in the [OptimizationFactory]{}. As with adding new analyses there might be more parameters needed than the optimization factory currently offers. In this case changes must be propagated to the [OptimizationDialog]{} and to the method called [optimizeSymbolicFunction]{} in the [SNC]{}-class. Again, when existing methods have to be changed anyway, it would be advisable to use a more general approach, such as a builder pattern.
A Full Example {#sec:full_example}
==============
(Figure \[fig:ladder-topology\]: the ladder topology. The flow of interest $A$ and the crossflow $A_x$ traverse the service elements $U_1, U_2, U_3, U_4$ in tandem, while each of the rung-flows $A^1, \ldots, A^4$ enters and leaves the ladder at a single service element.)
In this section we give a full walk-through of our modeling steps for the results presented in [@Beck:SNCalc2]. In this scenario we consider the topology in Figure \[fig:ladder-topology\] with 2, 3, or 4 service elements in tandem. In this network we consider the flow of interest as having a low priority under the crossing flow $A_x$. This can be interpreted as our flow of interest lying in the “background” traffic that flows from end-to-end. The rung-flows $A^1,\ldots, A^4$ interfere with the service elements in a FIFO- or WFQ-fashion. We also conducted NS3 simulations for the same scenario to make the analytical results comparable.
Arrival Model
-------------
We used the following approach for producing arrivals in the NS3-simulations: Each arrival consists of a constant stream of data with $x$ MB/s, where $x$ is a value that changes every 0.1 seconds and is exponentially distributed. The subsequent values of $x$ are stochastically independent of each other for all flows and each time-slot. The parameters of the exponential distributions are chosen such that the expected data rate of the flow of interest $A$ is equal to 20 MB/s, the crossflow’s expected data rate is 40 MB/s, and each rung-flow’s expected rate is 20 MB/s.
To model these arrivals in the Calculator we use the MGF-bounds as derived in Example \[ex:Exponential-Increments\]. Notice that this model is slightly different from the simulations, since the model assumes that the complete bulk of arrivals of one time-slot (with length of 0.1 seconds) arrive in an instant, whereas our simulation streams these arrivals with a constant rate over each single time-slot. We will make up for this difference when modeling service elements.
Service Model
-------------
In the NS3-simulation we use a 100 MB/s link-speed between each node. The natural method to model these is to define a constant rate server with rate $r = 100$ MB/s; however, we want to account for the differences in the model and the simulation when it comes to the flow’s burstiness. Note that in the simulations the service elements start working on the data “as it comes in”, meaning the processing starts from simulation time zero onwards; instead, in our SNC model we would wait one full time-slot and consider all the arrivals of the first 0.1 seconds to arrive in one batch at time $t= 0.1$ s. As the service rate is constant there is basically a shift of service by one time-slot between the simulation and the model. For this reason we define the service’s MGF-bound by the functions $$\begin{aligned}
\rho_{S^\prime}(\theta) & = - 10 \\
\sigma_{S^\prime}(\theta) & = - 10 \end{aligned}$$ The unit chosen here is MB per time-slot (100 MB/sec = 10 MB/0.1 sec). The above MGF-bound differs from a constant rate MGF-bound by having one additional time-slot of service in $\sigma_S$, which is available right at the start of the model. We have implemented a corresponding service model into the Calculator as described in the previous Section.
So far we have not discussed how the service elements schedule the flows. Our simulations work either with FIFO- or WFQ-scheduling when it comes to deciding whether a packet from the flow of interest or another flow will be processed. So far the Calculator does not have leftover service descriptions implemented for these scheduling disciplines; however, using a generic leftover service description would lead to overly pessimistic results; instead, we have decided to neglect the rung-flows’ burstiness entirely and subtract the expected amount of the rung-flows’ arrivals from the constant rate server. This means we have to subtract the value 2 from our service rates, leading us to the bounding functions $\rho_S(\theta) = -8$ and $\sigma_S(\theta) = -8$.
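The arithmetic behind these numbers can be sketched in a few lines (our own illustration): converting the 100 MB/s link and the rung-flows' expected 20 MB/s into the model's 0.1-second time-slot units leaves 8 MB per slot.

```java
// Unit conversion behind the leftover rate used above:
// (link rate - expected cross traffic) in MB/s, times slot length in s,
// gives the leftover service in MB per time-slot.
public class LeftoverRate {
    public static double leftoverPerSlot(double linkRateMBps,
                                         double crossRateMBps,
                                         double slotSeconds) {
        return (linkRateMBps - crossRateMBps) * slotSeconds;
    }

    public static void main(String[] args) {
        // 100 MB/s link, 20 MB/s expected rung traffic, 0.1 s slots.
        System.out.println(leftoverPerSlot(100.0, 20.0, 0.1)); // 8 MB per slot
    }
}
```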
End-to-End Analysis
-------------------
As a last step we need to account for the crossflow $A_x$, which joins our flow of interest for the entire path. We need to take this crossflow into account when we want to use Theorem \[thm:End-to-End\]; in fact, this slightly modifies the proof of this theorem in a straightforward manner. Having this result at hand we implemented a new [Analysis]{} to the Calculator, which uses this end-to-end result. There was no need to implement a new optimization method, as the ones implemented can already cope with this scenario.
With the analysis and the service model implemented, the results about end-to-end delay can be produced using the Calculator. Note that the analysis must be called repeatedly to produce the graphs in [@Beck:SNCalc2]. Since the GUI does not support such repeated calling we accessed the Calculator’s methods and classes directly instead. This allowed us to automatically loop through increasing violation probabilities.
---
abstract: 'We report a comprehensive approach to analysing continuous-output photon detectors. We employ principal component analysis to maximise the information extracted, followed by a novel noise-tolerant parameterised approach to the tomography of PNRDs. We further propose a measure for rigorously quantifying a detector’s photon-number-resolving capability. Our approach applies to all detectors with continuous-output signals. We illustrate our methods by applying them to experimental data obtained from a transition-edge sensor (TES) detector.'
author:
- 'Peter C. Humphreys'
- 'Benjamin J. Metcalf'
- Thomas Gerrits
- Thomas Hiemstra
- 'Adriana E. Lita'
- Joshua Nunn
- Sae Woo Nam
- Animesh Datta
- 'W. Steven Kolthammer'
- 'Ian A. Walmsley'
bibliography:
- 'TESpapers.bib'
title: 'Tomography of photon-number resolving continuous-output detectors'
---
The continuing development of highly efficient photon detectors has significant impact across a broad range of fields, from quantum information [@Calkins2013] to astronomy [@Day2003] and biomedical imaging [@art2006photon]. The physics underlying the operation of different photon detectors is rich and varied, but their outputs typically fall into two categories. Those such as photomultiplier tubes, avalanche photodiodes [@Eisaman2011] and superconducting nanowires [@Natarajan2012; @Marsili2013a] are often based on avalanche phenomena and lead to discrete ‘click’ outcomes, while others, such as transition-edge sensors [@Lita2008], kinetic-inductance detectors [@Day2003] and superconducting tunnel junctions [@Peacock1996] rely on smooth transitions leading to continuous ‘trace’ outputs (avalanche photodiodes can also give continuous-valued outputs under appropriate conditions [@Kardyna2008]). Some of these, including TES detectors, are highly sensitive single-photon detectors with quantum efficiencies of up to 98% [@Lita2008; @Fukuda2011] and true photon-number sensitivity [@Lita2008]. Others, such as microwave kinetic inductance detectors, allow an unprecedented level of integration into large arrays [@Day2003]. These advances over traditional discrete-output detectors will enable new applications in wide-ranging fields.
With these novel applications and regimes of performance come additional challenges in detector characterisation. Unlike discrete-output detectors, many photon-number resolving detectors (PNRD) produce a complex time-varying signal from which the input state must be inferred. Efficiently extracting information from these signals is therefore necessary to realise the full capability of such detectors [@Avella2011; @Brida2012a; @Levine2012].
The signal produced by a continuous-output detector is typically a time-dependent voltage with some dependence on photon number which may in general be nonlinear, as shown in Fig. \[fig:SVDtraces\]a. A set of such output signals $\v{V} = \{v_i(t)\}$, arising from a set of input states of the incident light beam, can be represented using a set of basis functions $\{w_j (t)\}$, such that $$\begin{aligned}
v_i(t) = \sum_{j=1}^{n} s_{ij} w_j(t). \notag\end{aligned}$$ In general, this implies that, in order to capture the full output of the detector, it is necessary to determine the weighting components $s_{ij}$ for all of the $n$ basis functions for each signal to be measured. For a truly continuous signal, $n$ is in principle infinite, but of course for any real experiment the upper limit to $n$ is set by the temporal and voltage resolution of the detector. However, this finite signal still spans a space of high dimension; in our work a signal consists of 1024 16-bit numbers. Directly analysing this signal is therefore impractical. This is particularly the case for detector tomography, necessary to rigorously characterise the relationship between input states and output signals [@Lundeen2008; @Zhang2012a]. Detector tomography requires a sufficiently small space of outputs that the probability of a given outcome can be estimated precisely from the measured data. For the full output space of our detector signal, we estimate the probability of the same trace occurring twice (to within the resolution of the analogue-to-digital converter) in a data set of $10^5$ traces to be on the order of $10^{-4}$, rendering tomography in this full space infeasible. This motivates the development of an approach to the characterization of continuous-output detectors that enables accurate and precise signal analysis and detector tomography.
Detector tomography has been previously carried out for continuous-output PNRDs with 5% quantum efficiencies [@Brida2012a], in which the continuous-output problem was circumvented by ‘binning’ the detector output based on the maximum amplitude of the signal. This approach does not make optimal use of the information available. Furthermore, as we will discuss, the numerical techniques for detector tomography used in the study are not effective in the high detection-efficiency regime, which is now accessible with TES detectors. Another recent work has explored algorithmic methods of interpreting the response of high detection-efficiency PNRDs based on cluster analysis [@Levine2012]. Although this may prove useful for rapid characterisation of a detector, it is not a tomographic technique and is therefore unable to provide a rigorous characterisation of the detector response.
![a) Representative TES traces $v_i(t)$ from a data set of 180,000 total signals. b) Truncated representation of the same traces using only the first two principal components $w_1(t)$ and $w_2(t)$. c) Variance of the set of principal component scores $\{ s_{ij} \}$ as a function of the principal component number $j$. d) Principal components $w_1(t)$ (blue) and $w_2(t)$ (green). e) Probability density function $p(s_{i1} | \alpha)$ over the signal scores $s_{i1}$ for a coherent state input $\ket{\alpha}\!\!\bra{\alpha}$ with a mean of 3.1 photons per pulse. Note that these values can be negative as the mean signal is subtracted from each signal during the calculation of $s_{i1}$.[]{data-label="fig:SVDtraces"}](tracesAndTruncatedSVD.pdf){width="8.0cm"}
*Principal component analysis:* We first consider the problem of efficiently extracting information from a high-dimensional detector signal data set. We achieve this by employing a standard technique from multi-variate statistics, namely principal component analysis [@Abdi2010]. For a given data set, this approach determines the optimal set of ‘principal component’ basis functions $\{w_j (t)\}$ such that each successive basis function captures the maximum amount of information possible from the data set (as measured by the variance of the ‘principal component scores’ $s_{ij}$), while maintaining orthogonality with the previous components. Crucially, this implies that if the principal component basis is truncated to compress the data, the maximum amount of the variance of the original data set will still be captured. In other words, the truncated principal component basis will provide the most faithful reconstruction of the data for a given number of components.
In an actual experiment, the signals $v_i(t)$ and therefore the basis functions $w_j (t)$ are necessarily discretised due to the finite temporal resolution of the detector. In this case the set of signals $\v{V}$ can be expressed as a matrix. It can be shown that the problem of determining $\{w_j (t)\}$ for $\v{V}$ is equivalent to finding the eigenvectors of the matrix $\v{\tilde{V}}^T\v{\tilde{V}}$, where $\v{\tilde{V}}$ is the data set with the mean signal subtracted [@Abdi2010]. These eigenvectors can be efficiently determined using singular value decomposition. Once the $w_j(t)$ are known, the scores can be calculated from the detector signals by $s_{ij} = \int v_i(t) w_j(t) \, \mathrm{d}t$.
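As an illustrative sketch (not the analysis code used in this work), the decomposition described above maps directly onto a standard SVD of the mean-subtracted data matrix; the trace model and function names here are our own, synthetic stand-ins:

```python
import numpy as np

def principal_components(V, n_keep=2):
    """PCA of a set of detector traces via singular value decomposition.

    V : (num_traces, num_samples) array, one discretised signal per row.
    Returns the first n_keep basis functions w_j, the scores s_ij, and the
    per-component variances of the scores.
    """
    V_tilde = V - V.mean(axis=0)              # subtract the mean signal
    # Rows of Wt are the eigenvectors of V_tilde^T V_tilde, i.e. the
    # principal components, ordered by decreasing captured variance.
    U, svals, Wt = np.linalg.svd(V_tilde, full_matrices=False)
    w = Wt[:n_keep]                           # basis functions w_j(t)
    scores = V_tilde @ w.T                    # s_ij = integral v_i(t) w_j(t) dt
    return w, scores, svals**2 / (len(V) - 1)
```

Note that an SVD fixes each component only up to an overall sign, so the scores of a given component may come out globally negated.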
We applied principal component analysis to a data set of 180,000 TES traces, taken with a range of 300 different coherent state inputs with average photon numbers spanning from 0 to approximately 15 photons per pulse. In Fig. \[fig:SVDtraces\]a & b, example TES traces from this data set are plotted both in their original form, and in a reduced form using only the first two principal components $w_1(t)$ and $w_2(t)$. As can be seen, with just these two components, most of the structure of the traces has been reproduced. This can be shown more formally by comparing the variance of $\{ s_{ij} \}$ for different principal component numbers $j$, as plotted in Fig. \[fig:SVDtraces\]c. The variance of $\{ s_{i1} \}$ is two orders of magnitude greater than $\{ s_{i2} \}$, and this trend continues, with the variance rapidly decreasing as a function of $j$.
Interestingly, as Fig. \[fig:SVDtraces\]d shows, $w_1(t)$ is very close to the mean shape of the TES traces. This would be expected theoretically in the small-signal limit, in which the TES trace height simply scales linearly with the photon number [@Miller2001]. This confirms that projecting onto the mean trace shape, as used by [@Levine2012], is a useful approach for distinguishing TES signals in the few-photon limit using only a single parameter. Beyond providing a justification for this choice of processing method, the higher order principal components that are revealed by our analysis can provide additional data with which to characterise the response of a detector, particularly for higher photon numbers. For example, $w_2(t)$ captures the increase in the pulse length with photon number due to an increase in thermal recovery time [^1]. However, since the dominant contribution to the data variance is from $w_1(t)$, particularly for the low photon numbers considered here, we choose to solely focus on this component for the remainder of our analysis.
*Detector tomography:* We now seek to determine the correspondence between the reduced detector signals and the input number of photons by carrying out detector tomography [@Lundeen2008]. The goal of detector tomography is to determine the positive-operator-valued measure (POVM) $\{ \pi(s) \}$ that fully characterises the detector response; this is parameterised by the outcome $s$ in the space of $s_{i1}$. Once the POVM is known, the probability density for detector outcome $s$, given input state $\rho$, is determined by the Born rule $$p(s | {\rho}) = \mathrm{Tr}\left [\rho \, \pi(s) \right ] \label{eqn:born}.$$ The standard approach to tomography consists of experimentally estimating the outcome probability densities $p(s | \rho_k)$ for a set of known probe basis states $\left \{ \rho_k \right \}$. Using these estimated probabilities, equation (\[eqn:born\]) can then, in principle, be inverted to find $\pi(s)$.
The set of probe states $\left \{ \rho_k \right \}$ must provide a sufficient basis for the operator space of the POVM $\{ \pi(s) \}$; in other words, it must be tomographically complete. We satisfy this constraint by using a well established method [@Lundeen2008] for tomography of PNRDs based on coherent state probes $\ket{\alpha}$. It is well known that coherent states form an over-complete basis for an optical mode. Coherent states are also straightforward to generate in the lab and are insensitive in form to experimental losses during preparation, making them ideal probe states. Additionally, as TES detectors are phase insensitive, their response depends only on the magnitude of the coherent state parameter $\alpha$, and not its phase. This significantly reduces the number of probe states needed to form a tomographically complete set of basis operators and removes the need for any phase reference in the experiment.
A phase insensitive detector will have POVM elements diagonal in the photon-number basis; these can therefore be expressed as $$\pi(s) = \sum_{n=0}^{\infty} \theta_{n}(s) \ket{n}\!\!\bra{n}. \label{eqn:fockStateBasis}$$ Coherent-state probes are given in this basis by [^2] $$\ket{\alpha} = \exp(-\abs{\alpha}^2/2)\sum_{n=0}^{\infty} \frac{\alpha^{n}}{\sqrt{n!}} \ket{n}.\label{eqn:coherentState}$$ Inserting equations (\[eqn:fockStateBasis\]) & (\[eqn:coherentState\]) into the Born rule (equation (\[eqn:born\])), we find that the probability density for a given outcome is $$p(s | \alpha) = \sum_{n=0}^{\infty} \, F_{\alpha,n} \,\theta_{n}(s), \label{eqn:prob}$$ where $F_{\alpha,n} = \abs{\alpha}^{2n} \frac{\exp(-\abs{\alpha}^2)}{n!}$.
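The weights $F_{\alpha,n}$ decay factorially, so in a numerical implementation they are most safely evaluated in log space. A minimal sketch of this bookkeeping (the helper names and the toy $\theta_n(s)$ in the test are ours, not the reconstructed POVM):

```python
import numpy as np

def poisson_weights(alpha_sq, n_max):
    """F_{alpha,n} = |alpha|^{2n} exp(-|alpha|^2) / n!  for n = 0..n_max."""
    n = np.arange(n_max + 1)
    if alpha_sq == 0:                       # vacuum probe: only n = 0 contributes
        return (n == 0).astype(float)
    # log(n!) built by cumulative summation to avoid overflow for large n
    log_fact = np.concatenate([[0.0], np.cumsum(np.log(np.arange(1, n_max + 1)))])
    return np.exp(n * np.log(alpha_sq) - alpha_sq - log_fact)

def outcome_density(theta, alpha_sq):
    """p(s|alpha) = sum_n F_{alpha,n} theta_n(s); theta is (n_max+1, len(s))."""
    return poisson_weights(alpha_sq, theta.shape[0] - 1) @ theta
```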
Using the set of probability density functions $p(s | \alpha_k)$ associated with the input probe states $\{ \ket{\alpha_k} \}$, this relation can be numerically inverted to find the best solution for $\theta_{n}(s)$ consistent with the physicality constraints $$\theta_{n}(s) \geq 0, \qquad \text{and} \qquad \int \theta_{n}(s) \, \mathrm{d}s \leq 1. \notag$$
It is necessary to use a calibrated light source in order to produce coherent-state probes with known energies for detector tomography. Since we do not have access to a source calibrated to a radiometric standard, we built our own calibrated source by using a Newport 918D-IG-OD3R power meter, which provides a specified calibration accuracy of 2% of absolute power and a linearity of better than 0.5%. This power meter was used to calibrate a series of fixed attenuators to reduce the output from a pulsed laser to the single-photon level with a known mean-photon number per pulse [^3].
We measured the detector response to a set of 300 different probe energies equally spaced between 0 and 15 photons per pulse. For each probe energy, we ran 49152 trials, and used the measured signals to estimate the probability density function for the outcomes in the space of $s_{i1}$ [^4]. Fig. \[fig:SVDtraces\]e shows an example measured probability density function for a probe state with a mean of 3.1 photons per pulse. It is well known that the problem of inverting equation (\[eqn:born\]) to obtain $\pi(s) $ is ill-conditioned [@Lundeen2008]. We found that published methods of performing this numerical inversion based on constrained least squares techniques [@Brida2012a] did not give satisfactory results [^5]. This may be in part due to the reduced overlap between the POVM elements for different photon numbers as compared to previous studies because of our much higher system detection efficiency. This means that regularisation techniques designed to promote this overlap [@Lundeen2008; @Zhang2012a] do not work as effectively.
We used insights from our collected data to develop a novel detector tomography routine that is effective for high quantum efficiencies. We adopted a model in which the detector response to photon number $n$ (in the space of $s_{i1}$) is given by the sum of $n + 1$ Gaussians, with widths, heights, and positions as free variables. This Gaussian-mixture model [@Bishop2006] is consistent with detectors for which several different sources of noise contribute to the response of the detector to a given photon number, leading to an overall Gaussian error as might be expected from the central limit theorem. We employed a maximum likelihood routine [^6] to find the parameterised POVM that was most consistent with the full coherent state tomography data set.
![a) Fock state POVM elements determined from our parameterised detector tomography routine. Note that these solutions are continuous functions in the space of $s_{i1}$, and have not been arbitrarily binned into different ‘photon-number’ outcomes. b) Fock state POVM elements after incorporating the uncertainty in the probe state energies.[]{data-label="fig:EMsolutionAndCalib"}](ExpectMaxSolutionAndCalibUncertainty.pdf){width="7.5cm"}
The results of this inversion are shown in Fig. \[fig:EMsolutionAndCalib\]a. The efficacy of this model-based routine can be estimated by using the calculated POVM to reconstruct the original data set. The $L_1$ difference between this reconstruction and the original data (normalised by the $L_1$ norm of the original data set) is 0.054 as compared to 0.047 for the unphysical reconstruction given by a least-squares approach, showing that this model equally effectively captures the detector response while being significantly more robust to noise. The model-based approach also allows us to estimate the system detection efficiency from the tomography data, giving an efficiency of $0.98 \, ({+0.02}/{-0.08})$ [^7].
As a final step, it is necessary to incorporate the uncertainty in the coherent-state probe energies [^8] to give the POVM elements shown in Fig. \[fig:EMsolutionAndCalib\]b. The higher photon-number POVM elements are particularly sensitive to this uncertainty, and show correspondingly large deviations from their ideal values. This highlights the crucial importance of an accurately calibrated probe state source for detector tomography. Our setup has a high calibration uncertainty of 8%; however, calibration uncertainties of less than 1% are achievable [@Lunghi2014; @Miller2011]. Since this shortcoming is not intrinsic to our detector, in the following analysis we will assume such a 1% calibration uncertainty, as this allows us to better demonstrate the information that our protocol can provide.
*Characterising photon-number resolution:* The above tomography procedure gives the probability density $p(s | n)$ for a specific outcome $s$ given an $n$-photon input to the detector. However, in typical experiments, we are actually interested in the complementary probability density $p(n | s)$ that the input contained $n$ photons given that the detector measured outcome $s$. Determining this requires Bayes’ theorem $p(n | s) = p(s | n) p(n) / p(s)$ and thus depends on our prior probability $p(n)$ of an $n$-photon input [^9].
Closely linked to determining $p(n | s)$ is the problem of finding a quantitative measure of the ‘photon-number resolution’ of the detector. Since $p(n | s)$ only gives information on the confidence with which a specific outcome $s$ can determine the photon-number input, we propose a measure that represents an average of this confidence, weighted by the probability density for $s$ given $n$ input photons, $$\begin{aligned}
C_n &= \int^\infty_{-\infty} p(n | s) p(s | n) \, \mathrm{d}s= \int^\infty_{-\infty} \frac{p(s | n)^2 p(n)}{p(s)} \, \mathrm{d}s \nonumber \\
&= \int^\infty_{-\infty} \frac{p(s | n)^2 p(n)}{\sum_k p(s | k) p(k)} \, \mathrm{d}s. \notag\end{aligned}$$ Given an input of $n$ photons, this confidence $C_n$ represents the average probability ascribed to the $n$ photon component of the inferred state $\rho(s) = \sum_n p(n | s) \ket{n}\!\!\bra{n}$. More loosely, it represents the probability that the detector gives the correct photon number. Additionally, $C_n = \int \bra{n} \rho(s) \ket{n} p(s | n) \mathrm{d}s$, the average squared fidelity between the inferred detected state and an $n$ photon number state $\ket{n}$, weighted by the probability $p(s|n)$. For the detection of a heralding state from a spontaneous parametric down-conversion (SPDC) source [@Ramelow2012], this will therefore also be the fidelity of the heralded state with $\ket{n}$. Note that the detector has no information on the specific input photon number $n$, so a prior distribution must be specified. The confidence is therefore a function of the distribution chosen. Fig. \[fig:Confidence\]a shows the confidence for different photon numbers as a function of the SPDC source thermal prior distribution parameter $\lambda^2$, where $p(\ket{n,n} | \, \lambda) = (1 - \lambda^2) \lambda^{2 n}$.
![a) Calculated confidence $C_n$ for different photon numbers as a function of the thermal prior distribution parameter $\lambda^2$. b) Calculated confidence for our detector given a flat prior, as a function of photon number $n$ (blue). Confidence for outcomes at the centres of the peaks in $p(s | n)$ (dashed yellow). Confidence for a time-multiplexed pseudo-number-resolving detector (dashed green).[]{data-label="fig:Confidence"}](confidencePlots.pdf){width="8cm"}
In order to facilitate comparison between different detectors, it may be useful to determine this confidence given a flat prior for the photon number, $$C_n = \int^\infty_{-\infty} \frac{p(s | n)^2}{\sum_k p(s | k)} \, \mathrm{d}s. \notag$$ This is plotted in Fig. \[fig:Confidence\]b. As would be expected, our detector is extremely effective at resolving vacuum and lower photon numbers, while for higher photon numbers, the increasing effect of the detection inefficiency and gradual saturation of the detector leads to a reduced confidence in the outcomes. As an example of the additional information given by our continuous-output analysis, we also plot the confidence for a post-selected case, in which only outcomes at the centres of the peaks in $p(s | n)$ (Fig. \[fig:EMsolutionAndCalib\]) are accepted [^10]. This could be employed to boost the fidelity of the heralded Fock states produced by SPDC sources. In order to demonstrate that this measure is widely applicable to different PNRDs, the confidence for the time-multiplexed pseudo-number-resolving detector with 8 time bins presented in [@Lundeen2008] is also shown.
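The confidence integral can be evaluated numerically on a grid. The sketch below does this for a toy detector whose $p(s|n)$ are Gaussians; the spacings and widths are stand-ins for illustration, not our measured response:

```python
import numpy as np

def confidence(p_s_given_n, prior, ds):
    """C_n = int p(s|n)^2 p(n) / (sum_k p(s|k) p(k)) ds on a uniform s grid.

    p_s_given_n : (N, S) conditional densities;  prior : (N,) with p(n).
    For the flat-prior variant, pass prior = np.ones(N) (constants cancel).
    """
    p_s = (prior[:, None] * p_s_given_n).sum(axis=0)          # marginal p(s)
    integrand = prior[:, None] * p_s_given_n**2 / np.maximum(p_s, 1e-300)
    return integrand.sum(axis=1) * ds
```

In the two limiting cases this behaves as expected: perfectly separated responses give $C_n \to 1$, while $N$ identical responses give $C_n = 1/N$.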
Acknowledgements
================
This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC EP/K034480/1) and the European Office of Aerospace Research & Development (AFOSR EOARD; FA8655-09-1-3020). WSK is supported by EC Marie Curie fellowship (PIEF-GA-2012-331859). AD is supported by the EPSRC (EP/K04057X/1). We thank Alvaro Feito for kindly helping us to obtain the data from his previous paper and Tim Bartley for his help in installing the TES detectors. Contribution of NIST, an agency of the U.S. Government, not subject to copyright.
Principal component analysis
============================
The main text of the paper focuses on only the first principal component $w_1(t)$. Although, particularly for lower photon numbers, this component provides most of the distinguishing information available (as measured through the data covariance), it is interesting to note that higher order components can contribute additional information. In Fig. \[fig:exampleSVDPDF\] we plot example probability density functions in the space of $s_{i1}$ and $s_{i2}$ for different coherent state probes. Structure along $s_{i2}$ is visible, and could be incorporated into a detector tomography analysis to further distinguish input states.
![Example probability density functions in the space of the first and second principal component scores for coherent state probes with varying average photon numbers.[]{data-label="fig:exampleSVDPDF"}](exampleSVDPDF.pdf){width="8.5cm"}
Gaussian-mixture maximum likelihood estimation
==============================================
We found that published methods of performing detector tomography based on constrained least squares techniques [@Brida2012a] did not give satisfactory results, with the resulting POVM clearly showing unphysical noise features (Fig. \[fig:SOCPsolution\]). This may be because the reduced overlap between the POVM elements for different photon numbers as compared to previous studies means that regularisation techniques designed to promote this overlap [@Lundeen2008] do not work as effectively.
![Constrained least squares solutions for the Fock state POVM elements $\Pi_{n,s}$ showing the responses to vacuum and up to 17 photons. The solution has unphysical noise features.[]{data-label="fig:SOCPsolution"}](SOCPsolution.pdf){width="8.5cm"}
As introduced in the main text, we have developed a novel detector tomography routine that is effective for high quantum efficiencies. In this approach, we parameterise the detector response in $s_{i1}$ as a series of overlapping Gaussian distributions. Specifically, we model the response of our detector to a photon number $n$ as composed of a sum of $n+1$ Gaussians, with widths, heights, and positions as free variables. Our approach is readily extendable to higher order components ($s_{i2}, s_{i3} \dots$); however, as in the main text, we choose to focus on $s_{i1}$ here. This model gives the following expression for the POVM coefficients for photon number $n$, $$\theta_{n,s} = \sum_j \beta_{n,j} \, \mathcal{N} (s | \mu_{n,j}, \sigma_{n,j}),\label{eqn:modelPOVMelem}$$ where $\beta_{n,j}$ is a weighting factor for the Gaussian probability distribution $\mathcal{N} (s | \mu_{n,j}, \sigma_{n,j})$ in the outcome space $s$, with mean $\mu_{n,j}$, and standard deviation $\sigma_{n,j}$.
We imposed the constraint that $\mu_{n,j} = \mu_{n+1,j}$, i.e. that the Gaussians from different photon numbers should be aligned. This is physically motivated by the fact that the detector cannot distinguish between cases where $n$ photons were input and cases where $n+1$ photons were input and one photon was lost. Removing this constraint does not alter the solution significantly, beyond leading to a slight jitter in the location of the peaks for each photon number. However, this jitter complicates the additional analysis that we carry out, particularly with regard to compensating for the uncertainty in the probe state energies (as discussed in Section \[sec:CalibLightSource\]). No constraint is placed on $\sigma_{n,j}$.
Substituting equation (\[eqn:modelPOVMelem\]) into equation (4) of the main text, we find that $$p(s | \alpha, \v{\chi}) = \sum_{n,j} F_{\alpha,n} \beta_{n,j} \mathcal{N} (s | \mu_{n,j}, \sigma_{n,j}),$$ where $\v{\chi}$ is used as shorthand to denote the set of all the parameters $\beta_{n,j}, \mu_{n,j}, \sigma_{n,j}$, in order to make the dependence on the model explicit.
This expression gives the posterior probability density for the TES detector producing an outcome $s$ in our model, given an input coherent-state probe $\ket{\alpha}\!\!\bra{\alpha}$. We wish to maximise this posterior probability for the data that we measure. Typically, maximum likelihood estimation [@Bishop2006] is carried out based on a set of observed outcomes $\{ s_{i1} \}$. In this case, the quantity to be maximised is the log-likelihood $$\begin{aligned}
\mathcal{L} &= \log \left( \prod_i p(s_{i1} | \alpha_i, \v{\chi}) \right)\notag\\
&= \sum_i \log \left(\, p(s_{i1} | \alpha_i, \v{\chi}) \,\right).\label{eqn:datasetll}\end{aligned}$$
However, due to the large number of data points that we sample, evaluation of this sum becomes impractical. Instead, we used our data set $\{ s_{i1} \}$ to estimate the outcome probability density $q(s | \alpha_k)$ for each $\ket{\alpha_k}\!\!\bra{\alpha_k}$. We can use this distribution to rewrite equation (\[eqn:datasetll\]) as $$\mathcal{L} = \sum_{k} \int N_k\, q(s | \alpha_k) \log \left(\, p(s | \alpha_k, \v{\chi}) \, \right) \mathrm{d}s,\notag$$ where $N_k$ is the total number of samples measured at each value of $\alpha_k$. Since we measured the same number of samples per coherent state value, we will neglect this constant factor that has no impact on the maximum likelihood estimation.
The full expression for the log-likelihood therefore becomes $$\begin{aligned}
\mathcal{L} &= \sum_{k} \int \mathrm{d}s \, q(s | \alpha_k) \dots \notag\\
& \quad \qquad \log \left(\, \sum_{n,j} F_{\alpha_k,n} \, \beta_{n,j} \, \mathcal{N} (s | \mu_{n,j}, \sigma_{n,j}) \,\right).\notag\end{aligned}$$
In order to maximise this log-likelihood, we follow the standard approach [@Bishop2006] of taking derivatives with respect to each parameter in the model. For example, differentiating with respect to $\mu_{n,j}$ gives $$\pd{\mathcal{L}}{\mu_{n,j}} = \sum_{k} \int \, q(s | \alpha_k) \, \gamma_{s,k,n,j} \, \sigma_{n,j}^{-2} (s - \mu_{n,j}) \, \mathrm{d}s \notag$$ in which we have defined $$\gamma_{s,k,n,j} = \frac{F_{\alpha_k,n} \, \beta_{n,j} \, \mathcal{N} (s | \mu_{n,j}, \sigma_{n,j})}{\sum_{n,j} F_{\alpha_k,n} \, \beta_{n,j} \, \mathcal{N} (s | \mu_{n,j}, \sigma_{n,j})}. \notag$$
Rearranging leads to the following expression for $\mu_{n,j}$ $$\mu_{n,j} = \frac{1}{N_{n,j}} \sum_{k} \int \, q(s | \alpha_k) \, \gamma_{s,k,n,j} \, s \, \mathrm{d}s \label{eqn:muSoln}$$ where $$N_{n,j} = \sum_{k} \int \, q(s | \alpha_k) \, \gamma_{s,k,n,j} \, \mathrm{d}s. \notag$$
Similarly we find that $$\begin{aligned}
\sigma_{n,j}^2 &= \frac{1}{N_{n,j}} \sum_{k} \int \, q(s | \alpha_k) \, \gamma_{s,k,n,j} \, (s - \mu_{n,j})^2 \, \mathrm{d}s \label{eqn:sigSoln}\end{aligned}$$ and $$\begin{aligned}
\beta_{n,j} &= \frac{N_{n,j}}{N_n} \text{, where } N_n = \sum_j N_{n,j} \label{eqn:betaSoln}.\end{aligned}$$
Note that these expressions for the parameters are dependent on $\gamma_{s,k,n,j}$, and therefore do not form a closed-form solution. This means that the optimal solution cannot be found analytically. However, it can be shown that a simple routine consisting of the repeated application of two steps will converge to a solution [@Bishop2006]. In the first step, the current values of the parameters are used to calculate $\gamma_{s,k,n,j}$. This is then used in the second step to re-estimate the optimal values of the parameters using equations (\[eqn:muSoln\]), (\[eqn:sigSoln\]) & (\[eqn:betaSoln\]).
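The two-step routine can be sketched compactly on a discretised outcome grid. For brevity, this sketch (our own simplification, not the routine used in the paper) assigns a single Gaussian to each photon number, so the $\beta_{n,j}$ weights drop out; extending it to the full mixture follows the same update pattern:

```python
import numpy as np

def em_step(q, s, F, mu, sig):
    """One E/M iteration of the simplified Gaussian-mixture fit.

    q : (K, S) measured outcome densities for each probe |alpha_k>
    s : (S,) uniform outcome grid;  F : (K, N) Poissonian weights F_{alpha_k,n}
    mu, sig : (N,) current Gaussian means and widths, one per photon number.
    """
    ds = s[1] - s[0]
    # E-step: responsibilities gamma_{s,k,n}
    gauss = np.exp(-0.5 * ((s[None, :] - mu[:, None]) / sig[:, None])**2) \
        / (np.sqrt(2 * np.pi) * sig[:, None])                     # (N, S)
    num = F[:, :, None] * gauss[None, :, :]                        # (K, N, S)
    gamma = num / np.maximum(num.sum(axis=1, keepdims=True), 1e-300)
    # M-step: weighted means and variances, pooled over all probe states
    w = (q[:, None, :] * gamma).sum(axis=0) * ds                   # (N, S)
    Nn = np.maximum(w.sum(axis=1), 1e-300)
    mu_new = (w * s[None, :]).sum(axis=1) / Nn
    var = (w * (s[None, :] - mu_new[:, None])**2).sum(axis=1) / Nn
    return mu_new, np.sqrt(np.maximum(var, 1e-12))
```

Iterating this step on densities generated from known parameters recovers those parameters, which is a useful sanity check before applying it to measured data.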
Calibrated light source {#sec:CalibLightSource}
=======================
![Calibrated light source. Because the dynamic range of our power meter is insufficient to span the attenuation required to reduce the coherent-state energy to the single-photon level, we perform the calibration in two steps at the expense of increased error. We use the laser diode running in CW mode to calibrate the attenuators, since the power meter is most accurate in this mode. a) We first take a series of measurement of the output powers $P_{1A}$ and $P_{1B}$ for a range of input powers to the fibre beam splitter. This lets us calibrate the output power in port $1B$ if we know the power in port $1A$. b) We connect a second fibre beam splitter to the first one before we calibrate it to make sure the effect of the FC/FC connection is properly accounted for. We then make a series of measurements of $P_{2A}$ and $P_{1A}$ for a range of input powers. Concatenating these results we now know the output power in port $1B$ given the recorded power in $2A$. c) For the detector characterisation we switch the laser to pulsed operation and attenuate the input light to the nanowatt level. []{data-label="fig:calibratedLightSource"}](calibratedLightSource.pdf){width="8.5cm"}
We built a calibrated coherent state source based on a Newport 918D-IG-OD3R power meter, which provides a specified calibration accuracy of 2% of absolute power and a linearity of better than 0.5%. This power meter was used to calibrate a series of fixed attenuators to reduce the output from a pulsed laser to the single-photon level with a known mean-photon number per pulse [@Miller2011].
Our method uses a fibre beam splitter with a fixed fibre attenuator connected on one of the output ports, as shown in Fig. \[fig:calibratedLightSource\]a. As long as this attenuation is well within the linear dynamic range of the power meter, we can obtain a calibration curve for the combined splitter-attenuator device relating the power measured at $P_{1A}$ to the power at $P_{1B}$. In our case, the attenuation required to reach the single photon level is much greater than the dynamic range of the power meter. This forces us to use a second, calibrated splitter-attenuator device in series with the first (Fig. \[fig:calibratedLightSource\]b). A weighted total least-squares algorithm [@Krystek2007] was used to find the total attenuation taking account of the absolute power errors in both variables. The total attenuation is given by the product of the two attenuators, but the errors in the measurements add linearly since they are not independent. Thus our final calibrated attenuation is found to be $$\eta_{\mathrm{att}} = (2.10 \pm 0.16) \times 10^{-6}, \notag$$ which relates the power measured at the monitor port $2A$ to the power at port $2B$ (Fig \[fig:calibratedLightSource\]c). A variable attenuator is used to set the input power level before the calibrated attenuator so that we can probe our detector with a variety of coherent state amplitudes. We monitor the input power to the attenuator using port $2A$ and calculate the average photon number per pulse in port $1B$ which is coupled to the TES. The value of $\eta_\mathrm{att}$ also includes a correction to account for the Fresnel reflection from the unterminated fibre when plugged into the monitor power meter, which leads us to underestimate the total power that will be input when this fibre is instead directly coupled to the fibre leading to the TES. Fibre specifications put this loss at about 3.3%, but there is a 1% uncertainty in this figure [@DeCusatis2013].
As we discuss in the Methods section of the main text, the POVM element coefficient $\theta_{n}(s)$ gives the probability density $p(s | n)$ that we will measure outcome $s$ given $n$ input photons. This probability is actually $p(s | n, \eta_{\mathrm{att}})$ since $\eta_{\mathrm{att}}$ is a variable in our tomography calculations. Our uncertainty in $\eta_{\mathrm{att}}$ must therefore be accounted for. Based on our error analysis (and assuming normally distributed errors), we can estimate the probability density $p(\eta_{\mathrm{att}})$ for $\eta_{\mathrm{att}}$. Additionally, we can calculate $p(s | n, \eta_{\mathrm{att}})$ for different $\eta_{\mathrm{att}}$. Combining these, we can incorporate this statistical uncertainty into our POVM using$$p(s | n) = \int p(s | n, \eta_{\mathrm{att}}) p(\eta_{\mathrm{att}}) \mathrm{d}\eta_{\mathrm{att}}. \notag$$ The results of this analysis are shown in Fig. 2b of the main text.
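Numerically, this marginalisation amounts to a weighted average of the conditional densities over a Gaussian distribution for $\eta_{\mathrm{att}}$. The sketch below assumes, for illustration only, that the tomography output is available as a callable returning $p(s | n, \eta_{\mathrm{att}})$; this interface and the toy conditional model in the test are ours:

```python
import numpy as np

def marginalise_eta(p_s_given_n_eta, s, n, eta_mean, eta_err, n_samples=201):
    """p(s|n) = int p(s|n, eta) p(eta) d eta, with Gaussian p(eta).

    p_s_given_n_eta : callable (s, n, eta) -> density array; a stand-in for
    the POVM recomputed at each attenuation value (an assumed interface).
    """
    etas = np.linspace(eta_mean - 4 * eta_err, eta_mean + 4 * eta_err, n_samples)
    weights = np.exp(-0.5 * ((etas - eta_mean) / eta_err)**2)
    weights /= weights.sum()                  # discretised, truncated Gaussian
    return sum(wt * p_s_given_n_eta(s, n, e) for wt, e in zip(weights, etas))
```

As expected, the marginal density stays normalised but is broader than any single conditional density, reflecting the added calibration uncertainty.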
Estimating the system detection efficiency
==========================================
![image](binomialProbs){width="12.5cm"}
The POVM elements that we calculate using detector tomography completely characterise the detector response. Model-free detector tomography treats the detector as a black box, and so in principle yields no information on the system detection efficiency, i.e. the loss that occurs between the input and the detector.
However, the less general, but physically motivated model-based detector tomography approach that we have adopted can allow us to make an estimate of this efficiency. As noted above, we assume that the response of the detector to each photon number is composed of several Gaussian elements. We can make the further assumption that these different Gaussian elements occur due to the action of loss on an initial Fock state, leading to a statistical mixture of photon numbers at the detector. Therefore the heights of these elements should follow a binomial distribution within each Fock state POVM element. For a given system detection efficiency, it is then possible to calculate the expected height of these Gaussian elements and compare them to the actual tomography output. We used a numerical routine to find the loss level that minimised the $L_2$ norm between this predicted output and the tomography data.
This analysis suggests that our system detection efficiency is $0.98 \, ({+0.02}/{-0.08})$. The asymmetric uncertainty arises as the efficiency is upper bounded at 1.0. Additionally, we find a strong agreement between the predicted photon number distribution and the tomography data, as shown in Fig. \[fig:binomial\], suggesting that our initial assumption is correct.
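A minimal version of this fit is a one-dimensional grid search over $\eta$, comparing measured peak weights $\beta_{n,j}$ ($j$ photons detected out of $n$ input) against the binomial prediction $\binom{n}{j}\eta^j(1-\eta)^{n-j}$; the function names and the synthetic weights in the test are ours:

```python
import numpy as np
from math import comb

def fit_efficiency(beta, eta_grid=None):
    """Grid-search the efficiency eta minimising the L2 distance between the
    measured Gaussian-peak weights beta[n][j] and the binomial prediction."""
    if eta_grid is None:
        eta_grid = np.linspace(0.5, 1.0, 501)
    def cost(eta):
        c = 0.0
        for n, b in enumerate(beta):
            pred = np.array([comb(n, j) * eta**j * (1 - eta)**(n - j)
                             for j in range(n + 1)])
            c += np.sum((np.asarray(b) - pred)**2)
        return c
    return min(eta_grid, key=cost)
```

A finer grid, or a derivative-based optimiser seeded by the grid minimum, can be substituted when more precision is required.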
Impact of photon-number prior probabilities
===========================================
As we discuss in the main text, detector tomography gives the probability $p(s | n)$ that a specific outcome $s$ will occur given an $n$-photon input to the detector. However, in typical experiments, we are actually interested in the complementary probability $p(n | s)$ that the input contained $n$ photons given that the detector measured outcome $s$ (as calculated from the detector signal $v(t)$ by $s = \int v(t) w_1(t) \mathrm{d}t$).
Calculating this requires Bayes’ theorem $p(n | s) = p(s | n) p(n) / p(s)$ and thus depends on our prior probability $p(n)$ of an $n$-photon input. Here, as an example we consider two distinct priors which might arise in applications. First, we consider a Poisson distribution $p(n | \alpha) = e^{- \abs{\alpha}^2} \abs{\alpha}^{2 n} / n!$ which would result from a coherent state input. We also consider a thermal distribution $ p(n | \lambda) = (1 - \lambda^2) \lambda^{2 n} $ which describes a thermal state input and, importantly, the single-mode marginal statistics of a spontaneous parametric down-conversion source. If one mode of such a source is sent to a detector, $p(n | s, \lambda)$ represents the statistical mixture of photon numbers onto which the other mode is projected. Such information is extremely important for quantum information and metrology applications.
Two example probability distributions $p(n | s, \alpha)$ and $p(n | s, \lambda)$ are plotted in Figs. \[fig:BayesCalculations\] a & b. As can be seen, the two priors lead to significant differences in the distributions. For the thermal distribution, the thermal prior suppresses the overlap between the outcomes associated with neighbouring photon numbers. This is because, for small $\lambda$, $n+1$ input photons will occur much less frequently than $n$ photons. Therefore the predominant overlap contribution, due to an $n+1$-photon input being detected in the space of outcomes most associated with $n$ input photons, occurs correspondingly less frequently than genuine $n$-photon inputs.
The Poissonian prior plotted in Fig. \[fig:BayesCalculations\]b has the opposite effect as the thermal prior, since in this case an input of $n+1$ photons is more probable than an input of $n$ photons, and therefore the overlap is promoted. It should be noted that in both cases, due to the truncation of our detector tomography at 17 input photons, the distributions $p(n | s)$ become inaccurate in regions in which significant contributions would be expected from photon numbers greater than this. In practice, this simply translates to an operational requirement that detector tomography must be extended to include all photon numbers that are expected to contribute in any given experiment.
![Example distributions $p(n | s_{i1})$ from our tomography data, which give the probability that the input contained $n$ photons given that the detector measured outcome $s$. The effect of the prior input photon number probabilities can be seen in the difference between a) a thermal distribution with $\lambda^2 = 0.1,$ and b) a Poisson distribution with $\abs{\alpha}^2 = 5$.[]{data-label="fig:BayesCalculations"}](calculatedBayesianProbs.pdf){width="8.0cm"}
Post-selecting outcomes to improve confidence
=============================================
For certain applications [@Datta2011], it is important to maximise the fidelity of the inferred detected state with a photon number state ($C_n$). In these cases, the fidelity can be improved using post-selection strategies in which only a subset of outcomes are accepted. This is possible to explore using our detector tomography data since our treatment has explicitly avoided any binning of outcomes.
![Post-selecting on outcomes within windows centred on the peak maxima can be employed to boost the confidence of detected photon states.[]{data-label="fig:PostSelect"}](UsingOnlyPeakPositions.pdf){width="8.0cm"}
One strategy is to only consider outcomes within windows centred on the peak maxima (Fig. \[fig:PostSelect\]). As would be expected, the highest confidence is obtained in the limit of the window width tending to zero, in which case the number of accepted outcomes would also tend to zero. This limit therefore upper bounds the performance of this strategy, and is plotted in Fig 3b of the main text. For our detector, the increase in confidence as compared to using the full space of outcomes is comparatively modest, since the overlap between different photon number POVM elements is dominated by the detection efficiency. However, as the detection efficiency of detectors improves, the intrinsic overlap between neighbouring Gaussian peaks is expected to become increasingly important. In this case, this post-selection strategy should become more effective.
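As a concrete illustration of this windowing strategy, the sketch below computes per-window confidences from labelled outcome samples. The data layout and the assumption of non-overlapping windows are conveniences of this sketch, not details of our actual analysis.

```python
import numpy as np

def window_confidence(samples_by_n, centres, width):
    """Confidence of each peak window under post-selection: the fraction
    of accepted outcomes in window n that came from n-photon inputs.
    Assumes non-overlapping windows and len(centres) == len(samples_by_n).

    samples_by_n : list of 1-D arrays of outcomes s, one per true photon number
    centres      : peak position for each photon number
    width        : full width of the acceptance window
    """
    half = width / 2.0
    counts = np.zeros((len(centres), len(samples_by_n)))
    for n, s in enumerate(samples_by_n):
        s = np.asarray(s, dtype=float)
        for w, c in enumerate(centres):
            counts[w, n] = np.sum(np.abs(s - c) <= half)
    totals = counts.sum(axis=1)
    confidence = counts.diagonal() / np.maximum(totals, 1.0)
    accepted = counts.sum() / sum(len(s) for s in samples_by_n)
    return confidence, accepted
```

Shrinking `width` toward zero raises the confidences while driving the acceptance fraction toward zero, which is the limiting behaviour discussed above.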
[^1]: See Supplementary Information: Section I for further discussion on the second principal component scores for the coherent probe state data.
[^2]: This assumes that the input state is a pure state, however this can easily be extended to mixed state inputs [@Lundeen2008] (due to classical uncertainties in $\alpha$ for example).
[^3]: See Supplementary Information: Section III for further details on our calibrated coherent state source.
[^4]: The probability density functions were calculated using Gaussian-kernel density estimation [@Botev2010]. This technique is better suited to this problem than using histograms, as it is not necessary to choose an arbitrary binning of the data. Instead, this approach directly gives continuous-valued estimates of the functions.
[^5]: See Supplementary Information: Section II for our model-free tomography results.
[^6]: See Supplementary Information: Section II for more details on our maximum likelihood detector tomography routine.
[^7]: The system detection efficiency is defined as the efficiency with which a photon in the fiber connected to the detector is detected [@Marsili2013a]. See Supplementary Information: Section IV for details on estimating the system detection efficiency.
[^8]: See Supplementary Information: Section III for additional details on our the calibration factor uncertainty analysis.
[^9]: See Supplementary Information: Section V for a detailed discussion of the impact of photon-number prior probabilities.
[^10]: See Supplementary Information: Section VI for a further discussion on the use of post-selection to increase the detector confidence.
---
abstract: 'Aspect Term Extraction (ATE), a key sub-task in Aspect-Based Sentiment Analysis, aims to extract explicit aspect expressions from online user reviews. We present a new framework for tackling ATE. It can exploit two useful clues, namely opinion summary and aspect detection history. Opinion summary is distilled from the whole input sentence, conditioned on each current token for aspect prediction, and thus the tailor-made summary can help aspect prediction on this token. Another clue is the information of aspect detection history, and it is distilled from the previous aspect predictions so as to leverage the coordinate structure and tagging schema constraints to upgrade the aspect prediction. Experimental results over four benchmark datasets clearly demonstrate that our framework can outperform all state-of-the-art methods.[^1]'
author:
- |
Xin Li$^1$, Lidong Bing$^2$, Piji Li$^1$, Wai Lam$^1$, Zhimou Yang$^3$\
$^1$Key Laboratory of High Confidence Software Technologies, Ministry of Education (CUHK Sub-Lab),\
Department of Systems Engineering and Engineering Management,\
The Chinese University of Hong Kong, Hong Kong\
$^2$Tencent AI Lab, Shenzhen, China\
$^3$College of Information Science and Engineering, Northeastern University, China\
{lixin, wlam, pjli}@se.cuhk.edu.hk, lyndonbing@tencent.com, yangzhimou@stumail.neu.edu.cn\
bibliography:
- 'ijcai18.bib'
title: 'Aspect Term Extraction with History Attention and Selective Transformation[^2]'
---
Introduction
============
Aspect-Based Sentiment Analysis (ABSA) involves detecting opinion targets and locating opinion indicators in sentences in product review texts [@liu2012sentiment]. The first sub-task, called Aspect Term Extraction (ATE), is to identify the phrases targeted by opinion indicators in review sentences. For example, in the sentence “*I love the operating system and preloaded software*”, the words “operating system” and “preloaded software” should be extracted as aspect terms, and the sentiment on them is conveyed by the opinion word “love”. According to the task definition, for a term/phrase being regarded as an aspect, it should co-occur with some “opinion words” that indicate a sentiment polarity on it [@pontiki-EtAl:2014:SemEval].
Many researchers formulated ATE as a sequence labeling problem or a token-level classification problem. Traditional sequence models such as Conditional Random Fields (CRFs) [@chernyshevich:2014:SemEval; @toh-wang:2014:SemEval; @toh-su:2016:SemEval; @yin2016unsupervised], Long Short-Term Memory Networks (LSTMs) [@liu-joty-meng:2015:EMNLP] and classification models such as Support Vector Machine (SVM) [@manek2016aspect] have been applied to tackle the ATE task, and achieved reasonable performance. One drawback of these existing works is that they do not exploit the fact that, according to the task definition, aspect terms should co-occur with opinion-indicating words. Thus, the above methods tend to output false positives on those frequently used aspect terms in non-opinionated sentences, e.g., the word “restaurant” in “*the restaurant was packed at first, so we waited for 20 minutes*”, which should not be extracted because the sentence does not convey any opinion on it.
There are a few works that consider opinion terms when tackling the ATE task. [@wang-EtAl:2016:EMNLP20164] proposed Recursive Neural Conditional Random Fields (RNCRF) to explicitly extract aspects and opinions in a single framework. Aspect-opinion relation is modeled via joint extraction and dependency-based representation learning. One assumption of RNCRF is that dependency parsing will capture the relation between aspect terms and opinion words in the same sentence so that the joint extraction can benefit. Such assumption is usually valid for simple sentences, but rather fragile for some complicated structures, such as clauses and parenthesis. Moreover, RNCRF suffers from errors of dependency parsing because its network construction hinges on the dependency tree of inputs. CMLA [@wang2017coupled] models aspect-opinion relation without using syntactic information. Instead, it enables the two tasks to share information via attention mechanism. For example, it exploits the global opinion information by directly computing the association score between the aspect prototype and individual opinion hidden representations and then performing weighted aggregation. However, such aggregation may introduce noise. To some extent, this drawback is inherited from the attention mechanism, as also observed in machine translation [@luong-pham-manning:2015:EMNLP] and image captioning [@xu2015show].
To make better use of opinion information to assist aspect term extraction, we distill the opinion information of the whole input sentence into opinion summary[^3], and such distillation is conditioned on a particular current token for aspect prediction. Then, the opinion summary is employed as part of features for the current aspect prediction. Taking the sentence “*the restaurant is cute but not upscale*” as an example, when our model performs the prediction for the word “restaurant”, it first generates an opinion summary of the entire sentence conditioned on “restaurant”. Due to the strong correlation between “restaurant’ and “upscale” (an opinion word), the opinion summary will convey more information of “upscale” so that it will help predict “restaurant” as an aspect with high probability. Note that the opinion summary is built on the initial opinion features coming from an auxiliary opinion detection task, and such initial features already distinguish opinion words to some extent. Moreover, we propose a novel transformation network that helps strengthen the favorable correlations, e.g. between “restaurant’ and “upscale”, so that the produced opinion summary involves less noise.
Besides the opinion summary, another useful clue we explore is the aspect prediction history due to the inspiration of two observations: (1) In sequential labeling, the predictions at the previous time steps are useful clues for reducing the error space of the current prediction. For example, in the B-I-O tagging (refer to Section \[sec:task\]), if the previous prediction is “O”, then the current prediction cannot be “I”; (2) It is observed that some sentences contain multiple aspect terms. For example, *“Apple is unmatched in product quality, aesthetics, craftmanship, and customer service”* has a coordinate structure of aspects. Under this structure, the previously predicted commonly-used aspect terms (e.g., “product quality”) can guide the model to find the infrequent aspect terms (e.g., “craftmanship”). To capture the above clues, our model distills the information of the previous aspect detection for making a better prediction on the current state.
Concretely, we propose a framework for more accurate aspect term extraction by exploiting the opinion summary and the aspect detection history. Firstly, we employ two standard Long-Short Term Memory Networks (LSTMs) for building the initial aspect and opinion representations recording the sequential information. To encode the historical information into the initial aspect representations at each time step, we propose truncated history attention to distill useful features from the most recent aspect predictions and generate the history-aware aspect representations. We also design a selective transformation network to obtain the opinion summary at each time step. Specifically, we apply the aspect information to transform the initial opinion representations and apply attention over the transformed representations to generate the opinion summary. Experimental results show that our framework can outperform state-of-the-art methods.
![image](fig/model_times.pdf){width="80.00000%"}
The Proposed Model
==================
The ATE Task {#sec:task}
------------
Given a sequence $X = \{x_1,...,x_{T}\}$ of $T$ words, the ATE task can be formulated as a token/word-level sequence labeling problem of predicting an aspect label sequence $Y = \{y_1,...,y_{T}\}$, where each $y_i$ comes from a finite label set $\mathcal{Y}= \{B, I, O\}$ describing the possible aspect labels: $B$, $I$, and $O$ denote the beginning of, the inside of, and the outside of an aspect span, respectively. Note that in commonly-used datasets such as [@pontiki-EtAl:2016:SemEval], the gold-standard opinions are usually not annotated.
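Under this schema, aspect terms are recovered from a predicted tag sequence by collecting maximal $B$–$I$ runs. A minimal decoder, illustrated on the example sentence from the introduction, might look like:

```python
def extract_aspect_spans(tokens, tags):
    """Collect aspect terms from a B/I/O tag sequence. An 'I' that does
    not continue an open span is invalid under the schema and is ignored."""
    spans, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":
            if current:
                spans.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:
            current.append(token)
        else:  # "O", or an "I" with no open span
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

tokens = "I love the operating system and preloaded software".split()
tags = ["O", "O", "O", "B", "I", "O", "B", "I"]
# extracts ["operating system", "preloaded software"]
```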
Model Description
-----------------
As shown in Figure \[fig:architecture\], our model contains two key components, namely **T**runcated **H**istory-**A**ttention (THA) and **S**elective **T**ransformation **N**etwork (STN), for capturing aspect detection history and opinion summary respectively. THA and STN are built on two LSTMs that generate the initial word representations for the primary ATE task and the auxiliary opinion detection task respectively. THA is designed to integrate the information of aspect detection history into the current aspect feature to generate a new history-aware aspect representation. STN first calculates a new opinion representation conditioned on the current aspect candidate. Then, we employ a bi-linear attention network to calculate the opinion summary as the weighted sum of the new opinion representations, according to their associations with the current aspect representation. Finally, the history-aware aspect representation and the opinion summary are concatenated as features for aspect prediction of the current time step.
### Building Memory
As Recurrent Neural Networks can record the sequential information [@graves2012supervised], we employ two vanilla LSTMs to build the initial token-level contextualized representations for sequence labeling of the ATE task and the auxiliary opinion word detection task respectively. For simplicity, let $\text{LSTM}^{\mathcal{T}}(x_t)$ denote an LSTM unit where $\mathcal{T} \in \{A, O\}$ is the task indicator. In the following sections, without specification, the symbols with superscript $A$ and $O$ are the notations used in the ATE task and the opinion detection task respectively. We use Bi-Directional LSTM to generate the initial token-level representations $h^{\mathcal{T}}_t \in \mathbb{R}^{2\mathrm{dim}^{\mathcal{T}}_h}$ ($\mathrm{dim}^{\mathcal{T}}_h$ is the dimension of hidden states):
$$h^{\mathcal{T}}_t = [\overrightarrow{\text{LSTM}}^{\mathcal{T}}(x_t); \overleftarrow{\text{LSTM}}^{\mathcal{T}}(x_t)], \quad t \in [1, T].$$
### Capturing Aspect History
In principle, RNN can memorize the entire history of the predictions [@graves2012supervised], but there is no mechanism to exploit the relation between previous predictions and the current prediction. As discussed above, such relation could be useful because of two reasons: (1) reducing the model’s error space in predicting the current label by considering the definition of B-I-O schema, (2) improving the prediction accuracy for multiple aspects in one coordinate structure.
We propose a Truncated History-Attention (THA) component (the **THA** block in Figure \[fig:architecture\]) to explicitly model the aspect-aspect relation. Specifically, THA caches the most recent $N^{A}$ hidden states. At the current prediction time step $t$, THA calculates the normalized importance score $s^t_i$ of each cached state $h^A_i$ ($i \in [t-N^A, t-1]$) as follows:
$$a^t_i = \mathrm{\bf v}^\top \mathrm{tanh}(\mathrm{\bf W}_1 h^A_i + \mathrm{\bf W}_2 h^A_t + \mathrm{\bf W}_3 \tilde{h}^A_i),$$
$$s^t_i = \mathrm{Softmax}(a^t_i).$$
$\tilde{h}^A_i$ denotes the previous history-aware aspect representation (refer to Eq. \[eq:haar\]). $\mathrm{\bf v} \in \mathbb{R}^{2\mathrm{dim}^A_h}$ can be learned during training. $\mathrm{\bf W}_{1,2,3} \in \mathbb{R}^{2\mathrm{dim}^A_h \times 2\mathrm{dim}^A_h}$ are parameters associated with previous aspect representations, current aspect representation and previous history-aware aspect representations respectively. Then, the aspect history $\hat{h}^A_t$ is obtained as follows:
$$\hat{h}^A_t =\sum^{t-1}_{i=t-N^A} s^t_i \times \tilde{h}^A_i.$$
To benefit from the previous aspect detection, we consolidate the hidden aspect representation with the distilled aspect history to generate features for the current prediction. Specifically, we adopt a way similar to the residual block [@he2016deep], which is shown to be useful in refining word-level features in Machine Translation [@wu2016google] and Part-Of-Speech tagging [@bjerva2016semantic], to calculate the history-aware aspect representations $\tilde{h}^A_t$ at the time step $t$: $$\label{eq:haar}
\tilde{h}^A_t = h^A_t + \mathrm{ReLU}(\hat{h}^A_t),$$ where ReLU is the relu activation function.
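As an illustration, one THA step can be sketched in NumPy as follows. The weight matrices here are placeholders for learned parameters, and the sketch covers only a single prediction step over the cached states.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def truncated_history_attention(h_t, cached_h, cached_h_tilde, params):
    """One THA step. h_t: current aspect state, shape (d,);
    cached_h / cached_h_tilde: the N^A most recent hidden and
    history-aware states, shape (N, d); params: W1, W2, W3 (d x d), v (d,)."""
    W1, W2, W3, v = params["W1"], params["W2"], params["W3"], params["v"]
    # a_i^t = v^T tanh(W1 h_i^A + W2 h_t^A + W3 h~_i^A)
    a = np.tanh(cached_h @ W1.T + h_t @ W2.T + cached_h_tilde @ W3.T) @ v
    s = softmax(a)                # normalised importance scores s_i^t
    h_hat = s @ cached_h_tilde    # distilled aspect history
    # residual combination: h~_t^A = h_t^A + ReLU(aspect history)
    return h_t + np.maximum(h_hat, 0.0)
```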
### Capturing Opinion Summary
Previous works show that modeling aspect-opinion association is helpful to improve the accuracy of ATE, as exemplified in employing attention mechanism for calculating the opinion information [@wang2017coupled; @li-lam:2017:EMNLP2017]. MIN [@li-lam:2017:EMNLP2017] focuses on a few surrounding opinion representations and computes their importance scores according to the proximity and the opinion salience derived from a given opinion lexicon. However, it is unable to capture the long-range association between aspects and opinions. Besides, the association is not strong because only the distance information is modeled. Although CMLA [@wang2017coupled] can exploit global opinion information for aspect extraction, it may suffer from the noise brought in by attention-based feature aggregation. Taking the aspect term “fish” in *“Furthermore, while the **fish** is unquestionably fresh, **rolls** tend to be inexplicably bland.”* as an example, it might be enough to tell “fish” is an aspect given the appearance of the strongly related opinion “fresh”. However, CMLA employs conventional attention and does not have a mechanism to suppress the noise caused by other terms such as “rolls”. Dependency parsing seems to be a good solution for finding the most related opinion and indeed it was utilized in [@wang-EtAl:2016:EMNLP20164], but the parser is prone to generating mistakes when processing the informal online reviews, as discussed in [@li-lam:2017:EMNLP2017].
To make use of opinion information and suppress the possible noise, we propose a novel Selective Transformation Network (STN) (the **STN** block in Figure \[fig:architecture\]), and insert it before attending to global opinion features so that more important features with respect to a given aspect candidate will be highlighted. Specifically, STN first calculates a new opinion representation $\Hat{h}^O_{i,t}$ given the current aspect feature $\tilde{h}^A_t$ as follows: $$\label{eq:new_op_re}
\Hat{h}^O_{i,t} = h^O_i + \mathrm{ReLU}(\mathrm{\bf W}_4 \tilde{h}^A_t + \mathrm{\bf W}_5 h^O_i),$$ where $\mathrm{\bf W}_{4}$ and $\mathrm{\bf W}_{5} \in \mathbb{R}^{2\mathrm{dim}^O_h \times 2\mathrm{dim}^O_h}$ are parameters for history-aware aspect representations and opinion representations respectively. They map $\tilde{h}^A_t$ and $h^O_i$ to the same subspace. Here the aspect feature $\tilde{h}^A_t$ acts as a “filter” to keep more important opinion features. Equation \[eq:new\_op\_re\] also introduces a residual block to obtain a better opinion representation $\Hat{h}^O_{i,t}$, which is conditioned on the current aspect feature $\tilde{h}^A_t$.
For distilling the global opinion summary, we introduce a bi-linear term to calculate the association score between $\tilde{h}^A_t$ and each $\Hat{h}^O_{i,t}$: $$\label{eq:w_i_t}
w_{i,t} = \mathrm{Softmax}(\mathrm{tanh}(\tilde{h}^A_t \mathrm{\bf W}_{bi} \Hat{h}^O_{i,t} + \mathrm{\bf b}_{bi})),$$ where $\mathrm{\bf W}_{bi}$ and $\mathrm{\bf b}_{bi}$ are parameters of the Bi-Linear Attention layer. The improved opinion summary $\Hat{h}^O_t$ at the time $t$ is obtained via the weighted sum of the opinion representations: $$\Hat{h}^O_t = \sum^{T}_{i=1} w_{i,t} \times \Hat{h}^O_{i, t}.$$ Finally, we concatenate the opinion summary $\Hat{h}^O_t$ and the history-aware aspect representation $\tilde{h}^A_t$ and feed it into the top-most fully-connected (FC) layer for aspect prediction: $$f^A_t = [\tilde{h}^A_t : \Hat{h}^O_t],$$ $$P(y^A_{t}|x_t) = \mathrm{Softmax}(\mathrm{\bf W}^A_{f} f^A_t + \mathrm{\bf b}^A_f).$$ Note that our framework actually performs a multi-task learning, i.e. predicting both aspects and opinions. We regard the initial token-level representations $h^O_i$ as the features for opinion prediction: $$P(y^O_i|x_i) = \mathrm{Softmax}(\mathrm{\bf W}^O_{f} h^O_i + \mathrm{\bf b}^O_f).$$ $\mathrm{\bf W}^{\mathcal{T}}_{f}$ and $\mathrm{\bf b}^{\mathcal{T}}_f$ are parameters of the FC layers.
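A NumPy sketch of STN together with the bi-linear attention may make the computation concrete; the weights are again placeholders standing in for learned parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def opinion_summary(h_tilde_t, H_op, W4, W5, W_bi, b_bi):
    """Selective transformation plus bi-linear attention for one step t.
    h_tilde_t: history-aware aspect feature, shape (d,);
    H_op: initial opinion states h_i^O for the sentence, shape (T, d)."""
    # selective transformation: h^O_{i,t} = h^O_i + ReLU(W4 h~^A_t + W5 h^O_i)
    H_new = H_op + np.maximum(h_tilde_t @ W4.T + H_op @ W5.T, 0.0)
    # bi-linear scores: w_{i,t} = Softmax(tanh(h~^A_t W_bi h^O_{i,t} + b_bi))
    scores = np.tanh(H_new @ (W_bi.T @ h_tilde_t) + b_bi)
    w = softmax(scores)
    # opinion summary is the weighted sum of transformed opinion states
    return w @ H_new
```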
Joint Training
--------------
All the components in the proposed framework are differentiable. Thus, our framework can be efficiently trained with gradient methods. We use the token-level cross-entropy error between the predicted distribution $P(y^{\mathcal{T}}_t|x_t)$ ($\mathcal{T} \in \{A, O\}$) and the gold distribution $P(y^{\mathcal{T}, g}_t|x_t)$ as the loss function: $$\mathcal{L}_{\mathcal{T}} = -\frac{1}{T}\sum^{T}_{t=1} P(y^{\mathcal{T}, g}_{t}|x_{t}) \odot \log [P(y^{\mathcal{T}}_{t}|x_{t})].$$ Then, the losses from both tasks are combined to form the training objective of the entire model: $$\mathcal{J}(\theta)=\mathcal{L}_A+\mathcal{L}_O,$$ where $\mathcal{L}_A$ and $\mathcal{L}_O$ represent the loss functions for aspect and opinion extraction respectively.
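The objective is straightforward to express in code. A minimal sketch, assuming predicted and gold label distributions are given as `(T, num_labels)` arrays for one sentence:

```python
import numpy as np

def token_cross_entropy(P_pred, P_gold):
    """Token-level cross-entropy, averaged over the T time steps.
    Both arguments have shape (T, num_labels)."""
    return -np.mean(np.sum(P_gold * np.log(P_pred), axis=1))

def joint_loss(P_aspect, G_aspect, P_opinion, G_opinion):
    """Training objective J(theta) = L_A + L_O for one sentence."""
    return (token_cross_entropy(P_aspect, G_aspect)
            + token_cross_entropy(P_opinion, G_opinion))
```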
Experiment
==========
Datasets
--------
To evaluate the effectiveness of the proposed framework for the ATE task, we conduct experiments over four benchmark datasets from the SemEval ABSA challenge [@pontiki-EtAl:2014:SemEval; @pontiki-EtAl:2015:SemEval; @pontiki-EtAl:2016:SemEval]. Table \[dataset\_statistic\] shows their statistics. $D_1$ (SemEval 2014) contains reviews of the laptop domain and those of $D_2$ (SemEval 2014), $D_3$ (SemEval 2015) and $D_4$ (SemEval 2016) are for the restaurant domain. In these datasets, aspect terms have been labeled by the task organizer.
Gold standard annotations for opinion words are not provided. Thus, we choose words with strong subjectivity from MPQA[^4] to provide distant supervision [@Mintz_distantIE]. To compare with the best SemEval systems and the current state-of-the-art methods, we use the standard train-test split of the SemEval challenge as shown in Table \[dataset\_statistic\].
Comparisons
-----------
We compare our framework with the following methods:
- **CRF-1**: Conditional Random Fields with basic feature templates[^5].
- **CRF-2**: Conditional Random Fields with basic feature templates and word embeddings.
- **Semi-CRF**: First-order Semi-Markov Conditional Random Fields [@sarawagi2004semi] and the feature templates in @cuong2014conditional are adopted.
- **LSTM**: Vanilla bi-directional LSTM with pre-trained word embeddings.
- **IHS\_RD** [@chernyshevich:2014:SemEval], **DLIREC** [@toh-wang:2014:SemEval], **EliXa** [@sanvicente-saralegi-agerri:2015:SemEval], **NLANGP** [@toh-su:2016:SemEval]: The winning systems in the ATE subtask in SemEval ABSA challenge [@pontiki-EtAl:2014:SemEval; @pontiki-EtAl:2015:SemEval; @pontiki-EtAl:2016:SemEval].
- **WDEmb** [@yin2016unsupervised]: Enhanced CRF with word embeddings, dependency path embeddings and linear context embeddings.
- **MIN** [@li-lam:2017:EMNLP2017]: MIN consists of three LSTMs. Two LSTMs are employed to model the memory interactions between ATE and opinion detection. The last one is a vanilla LSTM used to predict the subjectivity of the sentence as additional guidance.
- **RNCRF** [@wang-EtAl:2016:EMNLP20164]: CRF with high-level representations learned from Dependency Tree based Recursive Neural Network.
- **CMLA** [@wang2017coupled]: CMLA is a multi-layer architecture where each layer consists of two coupled GRUs to model the relation between aspect terms and opinion words.
To clarify, our framework aims at extracting aspect terms, with the opinion information employed as an auxiliary signal, while RNCRF and CMLA perform joint extraction of aspects and opinions. Nevertheless, the comparison between our framework and RNCRF/CMLA is still fair, because we do not use the manually annotated opinions used by RNCRF and CMLA; instead, we employ an existing opinion lexicon to provide weak opinion supervision.
Settings
--------
We pre-processed each dataset by lowercasing all words and replace all punctuations with `PUNCT`. We use pre-trained GloVe 840B vectors[^6] [@pennington2014glove] to initialize the word embeddings and the dimension (i.e., $\mathrm{dim}_w$) is 300. For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution $\mathcal{U}(-0.25, 0.25)$ as done in [@kim:2014:EMNLP2014]. All of the weight matrices except those in LSTMs are initialized from the uniform distribution $\mathcal{U}(-0.2, 0.2)$. For the initialization of the matrices in LSTMs, we adopt Glorot Uniform strategy [@glorot2010understanding]. Besides, all biases are initialized as 0’s.
The model is trained with SGD. We apply dropout over the ultimate aspect/opinion features and the input word embeddings of LSTMs. The dropout rates are empirically set as 0.5. With 5-fold cross-validation on the training data of $D_2$, other hyper-parameters are set as follows: $dim^A_h=100$, $dim^O_h=30$; the number of cached historical aspect representations $N^A$ is 5; the learning rate of SGD is 0.07.
Main Results
------------
As shown in Table \[tab:main\_results\], the proposed framework consistently obtains the best scores on all of the four datasets. Compared with the winning systems of SemEval ABSA, our framework achieves 5.0%, 1.6%, 1.4%, 1.3% absolute gains on $D_1$, $D_2$, $D_3$ and $D_4$ respectively.
Our framework can outperform RNCRF, a state-of-the-art model based on dependency parsing, on all datasets. We also notice that RNCRF does not perform well on $D_3$ and $D_4$ (3.7% and 3.9% inferior than ours). We find that $D_3$ and $D_4$ contain many informal reviews, thus RNCRF’s performance degradation is probably due to the errors from the dependency parser when processing such informal texts.
CMLA and MIN do not rely on dependency parsing, instead, they employ attention mechanism to distill opinion information to help aspect extraction. Our framework consistently performs better than them. The gains presumably come from two perspectives: (1) In our model, the opinion summary is exploited after performing the selective transformation conditioned on the current aspect features, thus the summary can to some extent avoid the noise due to directly applying conventional attention. (2) Our model can discover some uncommon aspects under the guidance of some commonly-used aspects in coordinate structures by the history attention.
CRF with the basic feature template is not strong; therefore, we add CRF-2 as another baseline. As shown in Table \[tab:main\_results\], CRF-2 with word embeddings achieves much better results than CRF-1 on all datasets. WDEmb, which is also an enhanced CRF-based method using additional dependency context embeddings, obtains superior performance to CRF-2. Therefore, the above comparison shows that word embeddings are useful and that embeddings incorporating structure information can further improve the performance.
![image](fig/3-1_times.pdf){height="27mm" width="100.00000%"} \[fig:1a\]
![image](fig/3-2_times.pdf){height="27mm" width="100.00000%"} \[fig:1b\]
![image](fig/2-1_times.pdf){height="25mm" width="100.00000%"} \[fig:1c\]
![image](fig/2-2_times.pdf){height="25mm" width="100.00000%"} \[fig:1d\]
Ablation Study
--------------
To further investigate the efficacy of the key components in our framework, namely, **THA** and **STN**, we perform an ablation study as shown in the second block of Table \[tab:main\_results\]. The results show that both THA and STN help to improve the performance, with STN contributing slightly more than THA. “OURS w/o THA & STN” only keeps the basic bi-linear attention. Although it performs reasonably well, it is still less competitive than the strongest baseline (i.e., CMLA), suggesting that only using an attention mechanism to distill the opinion summary is not enough. After inserting the STN component before the bi-linear attention, i.e. “OURS w/o THA”, we obtain about 1% absolute gain on each dataset, making the performance comparable to CMLA. By adding THA, i.e. “OURS”, the performance is further improved, and all state-of-the-art methods are surpassed.
Attention Visualization and Case Study
--------------------------------------
In Figure \[fig:my\_label\], we visualize the opinion attention scores of the words in two example sentences with the candidate aspects “maitre-D” and “bathroom”. The scores in Figures \[fig:1a\] and \[fig:1c\] show that our full model captures the related opinion words very accurately with significantly larger scores, i.e. “incredibly”, “unwelcoming” and “arrogant” for “maitre-D”, and “unfriendly” and “filthy” for “bathroom”. “OURS w/o STN” directly applies attention over the opinion hidden states $h_i^O$’s, similar to what CMLA does. As shown in Figure \[fig:1b\], it captures some unrelated opinion words (e.g. “fine”) and even some non-opinionated words. As a result, it brings in some noise into the global opinion summary, and consequently the final prediction accuracy will be affected. This example demonstrates that the proposed STN works pretty well to help attend to more related opinion words given a particular aspect.
Some predictions of our model and those of LSTM and OURS w/o THA & STN are given in Table \[tab:predictions\]. The models incorporating attention-based opinion summary (i.e., OURS and OURS w/o THA & STN) can better determine if the commonly-used nouns are aspect terms or not (e.g. “device” in the first input), since they make decisions based on the global opinion information. Besides, they are able to extract some infrequent or even misspelled aspect terms (e.g. “survice” in the second input) based on the indicative clues provided by opinion words. For the last three cases, having aspects in coordinate structures (i.e. the third and the fourth) or long aspects (i.e. the fifth), our model can give precise predictions owing to the previous detection clues captured by THA. Without using these clues, the baseline models fail.
Related Work
============
Some initial works [@hu2004miningA] developed a bootstrapping framework for tackling Aspect Term Extraction (ATE) based on the observation that opinion words are usually located around the aspects. [@popescu-etzioni:2005:HLTEMNLP] and [@qiu2011opinion] performed co-extraction of aspect terms and opinion words based on sophisticated syntactic patterns. However, relying on syntactic patterns suffers from parsing errors when processing informal online reviews. To avoid this drawback, [@liu-xu-zhao:2012:EMNLP-CoNLL; @liu2013opinion] employed word-based translation models. Specifically, these models formulated the ATE task as a monolingual word alignment process and aspect-opinion relation is captured by alignment links rather than word dependencies. The ATE task can also be formulated as a token-level sequence labeling problem. The winning systems [@chernyshevich:2014:SemEval; @sanvicente-saralegi-agerri:2015:SemEval; @toh-su:2016:SemEval] of SemEval ABSA challenges employed traditional sequence models, such as Conditional Random Fields (CRFs) and Maximum Entropy (ME), to detect aspects. Besides heavy feature engineering, they also ignored the consideration of opinions.
Recently, neural network based models, such as LSTM-based [@liu-joty-meng:2015:EMNLP] and CNN-based [@poria2016aspect] methods, have become the mainstream approach. Later on, some neural models jointly extracting aspects and opinions were proposed. [@wang-EtAl:2016:EMNLP20164] performs the two tasks in a single Tree-Based Recursive Neural Network. Their network structure depends on dependency parsing, which is prone to error on informal reviews. CMLA [@wang2017coupled] consists of multiple attention layers on top of standard GRUs to extract the aspects and opinion words. Similarly, MIN [@li-lam:2017:EMNLP2017] employs multiple LSTMs to interactively perform aspect term extraction and opinion word extraction in a multi-task learning framework. Our framework differs from them in two respects: (1) it filters the opinion summary by incorporating the aspect features at each time step into the original opinion representations; (2) it exploits the history information of aspect detection to capture coordinate structures and previous aspect features.
Concluding Discussions
======================
For more accurate aspect term extraction, we explored two important types of information, namely aspect detection history and opinion summary, and designed two corresponding components: the truncated history attention and the selective transformation network. Experimental results show that our model outperforms joint extraction works such as RNCRF and CMLA on ATE. This suggests that joint extraction sacrifices the accuracy of aspect prediction, even though the ground-truth opinion words used by those models were annotated by their authors. Moreover, one should notice that those joint extraction methods do not model the correspondence between the extracted aspect terms and opinion words. Therefore, the necessity of such joint extraction should be questioned, given the experimental findings in this paper.
[^1]: Codes and datasets are available at <https://github.com/lixin4ever/HAST>.
[^2]: The work was done when Xin Li was an intern at Tencent AI Lab. The project is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414).
[^3]: Technically, opinion summary is the linear combination of the opinion representations generated from LSTM.
[^4]: http://mpqa.cs.pitt.edu/
[^5]: http://sklearn-crfsuite.readthedocs.io/en/latest/
[^6]: https://nlp.stanford.edu/projects/glove/
| {
"pile_set_name": "ArXiv"
} |
---
author:
- 'A. H. Zemanian'
title: THE GALAXIES OF NONSTANDARD ENLARGEMENTS OF INFINITE AND TRANSFINITE GRAPHS
---
Abstract — The galaxies of the nonstandard enlargements of conventionally infinite graphs, as well as of transfinite graphs, are defined, analyzed, and illustrated by some examples. It is then shown that any such enlargement either has exactly one galaxy, its principal one, or it has infinitely many galaxies. In the latter case, the galaxies are partially ordered by their “closeness” to the principal galaxy. If an enlargement has a galaxy different from its principal galaxy, then it has a two-way infinite sequence of galaxies that are totally ordered according to that “closeness” property. There may be many such totally ordered sequences.
Key Words: Nonstandard graphs, enlargements of graphs, transfinite graphs, galaxies in nonstandard graphs, graphical galaxies.
Introduction
============
In this work we extend the idea of galaxies in the hyperreal line $^{*}\!{I \kern -4.5pt R}$ to nonstandard enlargements of conventionally infinite graphs and also of transfinite graphs. Since graphs have structures much different from that of the real line ${I \kern -4.5pt R}$, the enlargements of graphs have properties not possessed by $^{*}\! {I \kern -4.5pt R}$. The graphical galaxies of those enlargements comprise one aspect of that distinctive complexity. We will show that any such enlargement has either one galaxy or infinitely many of them. Moreover, just as $^{*}\! {I \kern -4.5pt R}$ contains images of the real numbers, called the standard hyperreals, as well as hyperreals that are nonstandard, so too may the enlargement $^{*}\!G$ of a graph $G$ contain “hypernodes,” some of which are images of nodes of $G$ and others of which are nonstandard hypernodes. In addition, there are “hyperbranches” incident to pairs of hypernodes; some of these hyperbranches are images of branches of $G$, but there may be others that are not.
The galaxies graphically partition $^{*}\!G$ in the sense that every hypernode belongs to exactly one galaxy, and so too does every hyperbranch. There is a unique galaxy, which we refer to as the “principal galaxy,” that contains the standard hypernodes and possibly nonstandard hypernodes as well. In the event that there are infinitely many galaxies, those galaxies are partially ordered according to how “close” they are to the principal galaxy. In fact, if there is a galaxy different from the principal galaxy, then there is a two-way infinite sequence of galaxies that are totally ordered according to their “closeness” to the principal galaxy. There may be many such totally ordered sequences, but a galaxy in one such sequence may not be comparable to a galaxy in another sequence according to that “closeness” property.
We speak of “conventionally infinite” graphs to distinguish them from transfinite graphs of ranks 1 or higher [@tgen Chapter 2], [@gn Chapter 2]. Sections 2 through 4 herein are devoted to the enlargements of conventionally infinite graphs. The results for such enlargements extend to enlargements of transfinite graphs, but in more complicated ways. We show this in Sections 5 through 11, but only for transfinite graphs of rank 1. Results for transfinite graphs of still higher ranks are obtained similarly but in still more complicated ways and with additional complexity in the symbols. For the sake of brevity, the latter results are not included herein, but they may be found in [@gal2] as well as in the archive www.arxiv.org in the category “mathematics” under “A.H. Zemanian.”
Our notations and terminology follow the usual conventions of nonstandard analysis. ${I \kern -4.5pt N}= \{0,1,2,\ldots\}$ is the set of natural numbers, and $^{*}\! {I \kern -4.5pt N}$ is the set of hypernaturals. The standard hypernaturals are (i.e., can be identified with) the natural numbers. Also, $\langle a_{n}\rangle$ or $\langle a_{n}\!: n\in{I \kern -4.5pt N}\rangle$ or $\langle a_{0},a_{1},
a_{2},\ldots\rangle$ denotes a sequence whose elements can be members of any set, such as the set $X$ of nodes in a conventional graph $G=\{X,B\}$, where $B$ is the set of branches, a branch being a two-element set of nodes. On the other hand, $[a_{n}]$ denotes an equivalence class of sequences, where two sequences $\langle a_{n}\rangle$ and $\langle b_{n}\rangle$ are taken to be equivalent if $\{n\!:a_{n}=b_{n}\}\in {\cal F}$, where ${\cal F}$ is any chosen and fixed free ultrafilter.[^1] The $a_{n}$ appearing in $[a_{n}]$ are understood to be the elements of any one of the sequences in the equivalence class. At times, we will use the more specific notation $[\langle a_{0},a_{1},a_{2},\ldots\rangle]$. More generally, we adhere to the notations and terminology appearing in [@go].
The ordinals are denoted in the usual way: $\omega$ is the first transfinite ordinal. With $\tau\in{I \kern -4.5pt N}$, the product $\omega\cdot \tau$ is the sum of $\tau$ summands, each being $\omega$.
The Nonstandard Enlargement of a Graph
======================================
Throughout Sections 2 to 4, we assume that the conventionally infinite graph $G$ is connected and has infinitely many nodes. The definition of a nonstandard graph that we use herein is given in [@gn Section 8.1], a special case of which is the “enlargement” of a graph $G$.
Let us define the [*enlargement*]{} $^{*}\!G$ of $G$ here as well in order to remove any need for referring to [@gn]. $G=\{X,B\}$ is now taken to be a conventional connected graph having an infinite set $X$ of nodes and therefore an infinite set of branches as well, each branch being a two-element set of nodes. Thus, there are no parallel branches (i.e., multiple branches). ${\cal F}$ will denote a chosen and fixed free ultrafilter. ${\bf x}=[x_{n}]$ denotes an equivalence class of sequences of nodes as stated in the Introduction. ${\bf x}$ will be called a [ *hypernode*]{}.[^2] Thus, the set of all sequences of nodes from $G$ is partitioned into hypernodes. $^{*}\!X$ denotes the set of hypernodes. If all the elements of one of the representative sequences $\langle x_{n}\rangle$ for a hypernode ${\bf x}=[x_{n}]$ are the same node (i.e., $x_{n}=x$ for all $n$), then ${\bf x}=[x]$ can be identified with $x$; in this case, ${\bf x}$ is called a [*standard hypernode*]{}. Otherwise, ${\bf x}=[x_{n}]$ is called a [*nonstandard hypernode*]{}.
We turn now to the definition of a “hyperbranch.” Let ${\bf x}=[x_{n}]$ and ${\bf y}=[y_{n}]$ be two hypernodes. Also, let ${\bf b}=[\{x_{n},y_{n}\}]$, where $\langle \{x_{n},y_{n}\}\rangle$ is a sequence of pairs of nodes from $G$ such that, for almost all $n$, $\{ x_{n},y_{n}\}$ is a branch in $G$; that is, $\{n\!: \{x_{n},y_{n}\}\in B\}\in {\cal F}$. It can be shown [@gn page 155] that this definition is independent of the representative sequences $\langle x_{n}\rangle$ and $\langle y_{n}\rangle$ chosen for ${\bf x}$ and ${\bf y}$, respectively, and that we truly have an equivalence relation on the set of all sequences of branches from $G$. We let ${\bf b}=[\{x_{n},y_{n}\}]$ denote such an equivalence class and will call it a [*hyperbranch*]{}; we write ${\bf b}=\{{\bf x},{\bf y}\}$. Also, $^{*}\!B$ will denote the set of all hyperbranches. If ${\bf x}=[x_{n}]$ and ${\bf y}=[y_{n}]$ are standard hypernodes, then ${\bf b}=[\{x,y\}]$ is called a [*standard hyperbranch*]{}. Otherwise, ${\bf b}$ is called a [*nonstandard hyperbranch*]{}.
Finally, the pair $^{*}\!G=\{^{*}\!X,\,^{*}\!B\}$ denotes the [*enlargement*]{} of $G$. It is a special case of a nonstandard graph, as defined in [@gn page 155].[^3]
Distances and Galaxies in Enlarged Graphs
=========================================
The [*length*]{} $|P_{x,y}|$ of any path $P_{x,y}$ connecting two nodes $x$ and $y$ in a graph $G$ is the number of branches in $P_{x,y}$. The [*distance*]{} $d(x,y)$ between $x$ and $y$ is $d(x,y)=\min\{|P_{x,y}|\}$, where the minimum is taken over all paths terminating at $x$ and $y$. In the trivial case, $d(x,x)=0$. $d$ satisfies the triangle inequality, namely, for any three nodes $x$, $y$, and $z$ in $G$, $d(x,y)\leq d(x,z)+d(z,y)$. In fact, $d$ satisfies the other metric axioms, too, and the set $X$ of nodes in $G$ along with $d$ is a metric space.
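This path-length metric can be computed concretely by breadth-first search. The following minimal sketch (our illustration; the adjacency-map representation and node names are assumptions, not from the source) computes $d$ on a small finite graph and checks the triangle inequality:

```python
from collections import deque

def bfs_distances(adj, source):
    """Shortest path-lengths (branch counts) from source to every node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# A small connected graph given as an adjacency map (a 4-cycle).
adj = {
    "a": {"b", "d"},
    "b": {"a", "c"},
    "c": {"b", "d"},
    "d": {"a", "c"},
}
dist = {x: bfs_distances(adj, x) for x in adj}
# d is a metric; in particular d(x,y) <= d(x,z) + d(z,y) for all x, y, z.
for x in adj:
    for y in adj:
        for z in adj:
            assert dist[x][y] <= dist[x][z] + dist[z][y]
```

The same check passes on any connected graph, since any shortest-path metric satisfies the metric axioms.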
The metric $d$ can be extended into an internal function $\bf d$ mapping the Cartesian product $^{*}\! X\,\times \,^{*}\! X$ into the set of hypernaturals $^{*}\! {I \kern -4.5pt N}$ as follows: For any ${\bf x}=[ x_{n}]$ and ${\bf y}=[y_{n}]$ in $^{*}\!X$, ${\bf d}$ is defined by $${\bf d}({\bf x},{\bf y})\;=\;[d(x_{n},y_{n})]\,\in\,^{*}\!{I \kern -4.5pt N}.$$ By the transfer principle, we have, for any three hypernodes ${\bf x}$, ${\bf y}$, and ${\bf z}$, $${\bf d}({\bf x},{\bf z})\;\leq\; {\bf d}({\bf x},{\bf y})\,+\,{\bf d}({\bf y},{\bf z}). \label{3.1}$$ From the point of view of an ultrapower construction, this means that $$\{n\!: d(x_{n},z_{n})\;\leq\;d(x_{n},y_{n})\,
+\,d(y_{n},z_{n})\}\;\in\;{\cal F}.$$ The other metric axioms, such as ${\bf d}({\bf x},{\bf x})=0$, are obviously satisfied by ${\bf d}$.
We define the “galaxies” of $^{*}\!G$ as nonstandard subgraphs of $^{*}\!G$ by first defining the “nodal galaxies.” Two hypernodes ${\bf x}=[x_{n}]$ and ${\bf y}=[y_{n}]$ are taken to be in the same [*nodal galaxy*]{} $\dot{\Gamma}$ of $^{*}\!G$ if ${\bf d}({\bf x},{\bf y})$ is no greater than a standard hypernatural $\bf k$, that is, if there exists a natural number $k\in{I \kern -4.5pt N}$ such that $\{n\!: d(x_{n},y_{n})\,\leq\,k\}\;\in\; {\cal F}$. In this case, we say that ${\bf x}$ and ${\bf y}$ are [*limitedly distant*]{}, and we write ${\bf d}({\bf x},{\bf y})\leq {\bf k}$.
Let $N_{{\bf x},{\bf y}}$ be the set of all standard hypernaturals that are no less than ${\bf d}({\bf x},{\bf y})$. $N_{{\bf x},{\bf y}}$ is a well-ordered set, and therefore it has a minimum ${\bf k}_{{\bf x},{\bf y}}$. So, we can say that ${\bf x}$ and ${\bf y}$ are in the same nodal galaxy $\dot{\Gamma}$ if ${\bf d}({\bf x},{\bf y})={\bf k}_{{\bf x},{\bf y}}$.
[**Lemma 3.1.**]{} [*The nodal galaxies partition the set $^{*}\!X$ of all hypernodes in $^{*}\!G$.*]{}
[**Proof.**]{} The property of two hypernodes being limitedly distant is a binary relation on $^{*}\!X$ that is obviously reflexive and symmetric. Its transitivity follows directly from (\[3.1\]). Alternatively, we can use an ultrapower argument. Assume that ${\bf x}=[x_{n}]$ and ${\bf y}=[y_{n}]$ are in some nodal galaxy and that ${\bf y}$ and ${\bf z}=[z_{n}]$ are in some nodal galaxy; we want to show that those galaxies are the same. There exist two standard natural numbers $k_{1}$ and $k_{2}$ such that $N_{{\bf x},{\bf y}}=\{n\!:d(x_{n},y_{n})\leq k_{1}\}\in{\cal F}$ and $N_{{\bf y},{\bf z}}=\{n\!:d(y_{n},z_{n})\leq k_{2}\}\in{\cal F}$. Since $d(x_{n},z_{n})\,\leq\,d(x_{n},y_{n})+d(y_{n},z_{n})$, $$\{n\!: d(x_{n},z_{n})\leq k_{1}+k_{2}\}\;\supseteq\;
N_{{\bf x},{\bf y}}\cap N_{{\bf y},{\bf z}}\;\in\;{\cal F}.$$ So, the left-hand side is a set in ${\cal F}$. Thus, ${\bf x}$ and ${\bf z}$ are limitedly distant, too, and ${\bf x}$, ${\bf y}$, and ${\bf z}$ are all in the same nodal galaxy. $\Box$
We define a [*galaxy*]{} $\Gamma$ of $^{*}\!G$ as a maximal nonstandard subgraph of $^{*}\!G$ whose hypernodes are all in the same nodal galaxy $\dot{\Gamma}$; that is, the hyperbranches of $\Gamma$ corresponding to $\dot{\Gamma}$ are all those pairs $\{{\bf x},{\bf y}\}$ such that ${\bf x},{\bf y} \in \dot{\Gamma}$. We will say that a hypernode ${\bf x}$ is [*in*]{} $\Gamma$ when ${\bf x}\in\dot{\Gamma}$ and that a hyperbranch $\{{\bf x},{\bf y}\}$ is [*in*]{} $\Gamma$ when ${\bf x},{\bf y}\in\dot{\Gamma}$. It follows from Lemma 3.1 that the galaxies of $^{*}\!G$ partition $^{*}\!G$ in the sense of graphical partitioning (i.e., each hyperbranch is in one and only one galaxy).
The [*principal galaxy*]{} $\Gamma_{0}$ of $^{*}\!G$ is that unique galaxy, each of whose hypernodes is limitedly distant from some standard hypernode (and therefore from all standard hypernodes). All the nodes in $G$ will be (i.e., can be identified with) standard hypernodes in $\Gamma_{0}$, but there may be nonstandard hypernodes in $\Gamma_{0}$ as well. The following examples illustrate this point.
[**Example 3.2.**]{} Consider the endless (i.e., two-way infinite) path: $$P\;=\;\langle\ldots,x_{-1},b_{-1},x_{0},b_{0},x_{1},b_{1},\ldots\rangle$$ with nodes $x_{k}$ and branches $b_{k}$, $k\in{Z \kern -7.5pt Z}$, ${Z \kern -7.5pt Z}$ being the set of integers. The enlargement $^{*}\!P$ of $P$ has hypernodes, each being represented by $[x_{k_{n}}]$, where $\langle k_{n}\rangle$ is some sequence of integers. Each hyperbranch is represented by $[\{x_{k_{n}},x_{k_{n}+1}\}]$. There are infinitely many nodal galaxies because they correspond bijectively with the galaxies of the enlargement $^{*}\!{Z \kern -7.5pt Z}$ of ${Z \kern -7.5pt Z}$. Moreover, the principal galaxy $\Gamma_{0}$ of $^{*}\!P$ has only standard hypernodes and in fact is (i.e., can be identified with) $P$ itself. Also, every galaxy is graphically isomorphic to $\Gamma_{0}$ and therefore to every other galaxy. $\Box$
[**Example 3.3.**]{} Now, consider a one-ended path: $$T\;=\;\langle x_{0},b_{0},x_{1},b_{1},x_{2},b_{2},\ldots\rangle$$ Each hypernode in the enlargement $^{*}\!T$ of $T$ is represented by $[x_{k_{n}}]$, where $\langle k_{n}\rangle$ is some sequence of natural numbers. Thus, $^{*}\!T$ has a hypernode set $^{*}\!X$ that can be identified with the set $^{*}\!{I \kern -4.5pt N}$ of hypernaturals. Hence, $^{*}\!T$ has infinitely many galaxies, too. The principal galaxy $\Gamma_{0}$ of $^{*}\!T$ is the one-ended path $T$. However, any hypernode ${\bf x}=[x_{k_{n}}]$ in a galaxy $\Gamma$ different from $\Gamma_{0}$ will be such that, for every $m\in{I \kern -4.5pt N}$, $\{n\!: k_{n}>m\}\in{\cal F}$. Such a hypernode is adjacent both to $[x_{k_{n}+1}]$ and to $[x_{k_{n}-1}]$, where we are free to replace $x_{k_{n}-1}$ by, say, $x_{0}$ whenever $k_{n}=0$. (The set $\{n\!: k_{n}=0\}$ will not be a member of ${\cal F}$ when ${\bf x}=[x_{k_{n}}]$ is in $\Gamma$.) Thus, ${\bf x}=[x_{k_{n}}]\in\Gamma$ has both a predecessor and a successor, which implies that $\Gamma$ is graphically isomorphic to an endless path. In fact, all the galaxies other than $\Gamma_{0}$ are isomorphic to each other, being identifiable with an endless path. $\Box$
[**Example 3.4.**]{} Consider next the grounded, one-way infinite ladder $L$ of Figure 1. Now, for every $k\in{I \kern -4.5pt N}$, $d(x_{k},x_{g})=d(x_{k},x_{k+1})=1$, and, for every $k,l\in{I \kern -4.5pt N}$ with $|k-l|>1$, $d(x_{k},x_{l})=2$. In this case, for every two hypernodes ${\bf x}$ and ${\bf y}$, ${\bf d}({\bf x},{\bf y})\leq[2]=2$. Thus, every two hypernodes are limitedly distant from each other, which means that $^{*}\!L$ has only one galaxy, its principal galaxy $\Gamma_{0}$. Now, $\Gamma_{0}$ has both standard and nonstandard hypernodes. $\Box$
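The distance bound in Example 3.4 can be checked on any finite truncation of the grounded ladder. The following sketch is our illustration (the truncation size and node labels are assumptions): it builds $n$ nodes $x_{0},\ldots,x_{n-1}$, each adjacent to the ground node and to its successor, and verifies that all pairwise distances are at most $2$.

```python
from collections import deque

def dist_from(adj, s):
    """Breadth-first-search distances from s in an undirected graph."""
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

n = 50  # truncation size (illustrative)
adj = {k: set() for k in range(n)}
adj["g"] = set()
for k in range(n):
    adj[k].add("g"); adj["g"].add(k)          # every x_k is adjacent to the ground node
    if k + 1 < n:
        adj[k].add(k + 1); adj[k + 1].add(k)  # rail branch between x_k and x_{k+1}

# Every pair of nodes is at distance at most 2 (via the ground node),
# which is why the enlargement has only its principal galaxy.
assert all(max(dist_from(adj, s).values()) <= 2 for s in adj)
```

Since every hypernode of $^{*}\!L$ is then within standard distance $2$ of the standard hypernode $[x_{g}]$, the single-galaxy conclusion follows.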
[**Example 3.5.**]{} Furthermore, consider the graph $G$ obtained from $L$ by appending a one-ended path $P$ starting at $x_{g}$, but otherwise isolated from $L$, as shown in Figure 2. In this case, we again have an infinity of galaxies by virtue of the isolation of $P$ from $L$. The principal galaxy $\Gamma_{0}$ has both standard and nonstandard hypernodes, its nonstandard hypernodes being due to $L$. All the other galaxies are graphically isomorphic to an endless path (as in Example 3.3) and thus to each other, but not to $G$ and not to $\Gamma_{0}$. $\Box$
A subgraph $G_{s}$ of $G$ with the property that there exists a natural number $k$ such that $d(x,y)\leq k$ for all pairs of nodes $x,y$ in $G_{s}$ will be called a [*finitely dispersed*]{} subgraph of $G$. Example 3.5 suggests that the structures of the galaxies other than $\Gamma_{0}$ do not depend upon any finitely dispersed subgraph of $G$. This is true in general because the nodes $x_{n}$ in any representative $\langle x_{n}\rangle$ of any hypernode in a galaxy other than $\Gamma_{0}$ must lie outside any finitely dispersed subgraph of $G$ for almost all $n$ whatever be the choice of that finitely dispersed subgraph.
For instance, consider
[**Example 3.6.**]{} Let $D_{2}$ be the 2-dimensional grid; that is, we can represent $D_{2}$ by having its nodes at the lattice points $(k,l)$ of the 2-dimensional plane, where $k,l\in {Z \kern -7.5pt Z}$ and with its branches being $\{(k,l),(k+1,l)\}$ and $\{(k,l),(k,l+1)\}$. So, the hypernodes of $^{*}\!D_{2}$ occur at $^{*}\!{Z \kern -7.5pt Z}\,\times\,^{*}\!{Z \kern -7.5pt Z}$. Under this representation, the principal nodal galaxy of $^{*}\!D_{2}$ will have its nodes at the lattice points of ${Z \kern -7.5pt Z}\times{Z \kern -7.5pt Z}$.
Next, let $G$ be a connected graph obtained from $D_{2}$ by deleting or appending finitely many branches to $D_{2}$. So, outside a finitely dispersed subgraph of $G$, $G$ is identical to $D_{2}$. Then the principal galaxy $\Gamma_{0}$ of $^{*}\!G$ is the same as (i.e., is graphically isomorphic to) $G$, but every other galaxy is the same as $D_{2}$. $\Box$
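On the grid $D_{2}$ itself, the path metric coincides with the taxicab distance $|k|+|l|$ from the origin, which can be verified on a finite window of the grid. The sketch below is our illustration (the window size is an assumption); the taxicab identity holds throughout the window because a monotone staircase path realizing it stays inside the window.

```python
from collections import deque

def dist_from(adj, s):
    """Breadth-first-search distances from s in an undirected graph."""
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

m = 7  # half-width of a finite window of the grid (illustrative)
nodes = [(k, l) for k in range(-m, m + 1) for l in range(-m, m + 1)]
adj = {p: set() for p in nodes}
for (k, l) in nodes:
    for nbr in ((k + 1, l), (k, l + 1)):  # branches {(k,l),(k+1,l)} and {(k,l),(k,l+1)}
        if nbr in adj:
            adj[(k, l)].add(nbr)
            adj[nbr].add((k, l))

d0 = dist_from(adj, (0, 0))
# The grid distance from the origin is the taxicab distance |k| + |l|.
assert all(d0[(k, l)] == abs(k) + abs(l) for (k, l) in nodes)
```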
In view of Examples 3.3 and 3.4, the following theorem is pertinent. As always, we assume that $G$ is connected and has an infinite node set $X$.
[**Theorem 3.7.**]{} [*Let $G$ be locally finite. Then, $^{*}\!G$ has at least one hypernode not in its principal galaxy $\Gamma_{0}$ and thus at least one galaxy $\Gamma_{1}$ different from $\Gamma_{0}$.*]{}
[**Proof.**]{} Choose any $x_{0}\in X$. By connectedness and local finiteness, for each $n\in{I \kern -4.5pt N}$, the set $X_{n}$ of nodes that are at a distance of $n$ from $x_{0}$ is nonempty and finite. Also, $\cup X_{n}\,=\,X$ by the connectedness of $G$. By König’s Lemma [@wi page 40], there is a one-ended path $P$ starting at $x_{0}$. $P$ must pass through every $X_{n}$. Thus, there is a subsequence $\langle x_{0},x_{1},x_{2},\ldots\rangle$ of the sequence of nodes of $P$ such that $x_{n}\in X_{n}$; that is, $d(x_{n},x_{0})=n$ for every $n$. Set ${\bf x}=[x_{n}]$. Then, ${\bf x}$ must be in a galaxy $\Gamma_{1}$ that is different from the principal galaxy $\Gamma_{0}$. $\Box$
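The layers $X_{n}$ appearing in this proof can be made concrete on a finite truncation of a locally finite graph. The sketch below is our illustration (the binary-tree example and the depth are assumptions): it groups nodes by their distance from $x_{0}$ and picks one representative $x_{n}$ per layer, so that $d(x_{n},x_{0})=n$.

```python
from collections import deque

def layers(adj, x0):
    """Group nodes into X_n = {x : d(x, x0) = n} via breadth-first search."""
    d = {x0: 0}
    q = deque([x0])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    X = {}
    for x, n in d.items():
        X.setdefault(n, []).append(x)
    return X

# Finite truncation of the infinite binary tree: node i has children 2i+1, 2i+2.
depth = 6
N = 2 ** (depth + 1) - 1
adj = {i: set() for i in range(N)}
for i in range(N):
    for c in (2 * i + 1, 2 * i + 2):
        if c < N:
            adj[i].add(c); adj[c].add(i)

X = layers(adj, 0)
# Each layer X_n is nonempty and finite; choosing one x_n from each layer
# gives d(x_n, x_0) = n, as in the proof of Theorem 3.7.
reps = [X[n][0] for n in range(depth + 1)]
assert all(len(X[n]) == 2 ** n for n in range(depth + 1))
```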
When $^{*}\!G$ Has a Hypernode Not in Its Principal Galaxy
==========================================================
In this section, $G$ is connected and infinite but not necessarily locally finite. Let $\Gamma_{a}$ and $\Gamma_{b}$ be two galaxies that are different from the principal galaxy $\Gamma_{0}$ of $^{*}\!G$. We shall say that $\Gamma_{a}$ [*is closer to $\Gamma_{0}$ than is*]{} $\Gamma_{b}$ and that $\Gamma_{b}$ [*is further away from $\Gamma_{0}$ than is*]{} $\Gamma_{a}$ if there are a ${\bf y}=[y_{n}]$ in $\Gamma_{a}$ and a ${\bf z}=[z_{n}]$ in $\Gamma_{b}$ such that, for some ${\bf x}=[x_{n}]$ in $\Gamma_{0}$ and for every $m_{0}\in{I \kern -4.5pt N}$, we have $$N_{0}(m_{0})\;=\;\{n\!:d(z_{n},x_{n})-d(y_{n},x_{n})
\,\geq\, m_{0}\}\;\in\;{\cal F}.$$ Any set of galaxies for which every two of them, say, $\Gamma_{a}$ and $\Gamma_{b}$ satisfy this condition will be said to be [*totally ordered according to their closeness to*]{} $\Gamma_{0}$. With Lemma 3.1 in hand, the conditions for a total ordering (reflexivity, antisymmetry, transitivity, and connectedness) are readily shown. For instance, the proof of Theorem 4.3 below establishes transitivity.
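A worked instance of this ordering (our illustration, not from the source) can be read off from the one-ended path $T$ of Example 3.3. Take the standard hypernode ${\bf x}=[x_{0}]$ in $\Gamma_{0}$, ${\bf y}=[x_{n}]$, and ${\bf z}=[x_{2n}]$; since $d(x_{k},x_{0})=k$ in $T$,

```latex
d(z_{n},x_{0})-d(y_{n},x_{0}) \;=\; 2n-n \;=\; n,
\qquad\text{so, for every } m_{0}\in{I \kern -4.5pt N},\quad
\{\,n\!: d(z_{n},x_{0})-d(y_{n},x_{0})\geq m_{0}\,\}
\;\supseteq\;\{\,n\!: n\geq m_{0}\,\}\;\in\;{\cal F},
```

because a free ultrafilter contains every cofinite set. Hence the galaxy containing $[x_{2n}]$ is further away from $\Gamma_{0}$ than is the galaxy containing $[x_{n}]$.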
[**Lemma 4.1.**]{} [*These definitions are independent of the representative sequences $\langle x_{n}\rangle$, $\langle y_{n}\rangle$, and $\langle z_{n}\rangle$ chosen for ${\bf x}$, ${\bf y}$, and ${\bf z}$.*]{}
[**Proof.**]{} Let $\langle x_{n}'\rangle$, $\langle y_{n}'\rangle$, and $\langle z_{n}'\rangle$ be any other such representative sequences. Then, $$d(z_{n},x_{n})\;\leq\;d(z_{n},z_{n}')+d(z_{n}',x_{n}')+d(x_{n}',x_{n}).$$ So, $$d(z_{n}',x_{n}')\;\geq\;d(z_{n},x_{n})-d(z_{n},z_{n}')-d(x_{n}',x_{n})
\;\geq\; d(z_{n},x_{n})-m_{1}$$ for some $m_{1}\in{I \kern -4.5pt N}$ and for all $n$ in some $N_{1}(m_{1})\in{\cal F}$. Also, $$d(y_{n}',x_{n}')\;\leq\;
d(y_{n}',y_{n})+d(y_{n},x_{n})+d(x_{n},x_{n}')\;\leq\; d(y_{n},x_{n})+m_{2}$$ for some $m_{2}\in {I \kern -4.5pt N}$ and for all $n$ in some $N_{2}(m_{2})\in{\cal F}$. Therefore, $$d(z_{n}',x_{n}')-d(y_{n}',x_{n}')\;\geq\; d(z_{n},x_{n})-d(y_{n},x_{n})-m_{1}-m_{2}$$ for all $n$ in $N_{1}(m_{1})\cap N_{2}(m_{2})\,\in\, {\cal F}$. So, for $N_{0}(m_{0})$ as defined above and for each $m_{0}$ no matter how large, $$\{n\!: d(z_{n}',x_{n}')-d(y_{n}',x_{n}')\;\geq\; m_{0}-m_{1}-m_{2}\}
\;\supseteq\; N_{0}(m_{0})\cap N_{1}(m_{1})\cap N_{2}(m_{2})\;\in\; {\cal F}.$$ This proves Lemma 4.1. $\Box$
We will say that a set $A$ is a [*totally ordered, two-way infinite sequence*]{} if there is a bijection from the set ${Z \kern -7.5pt Z}$ of integers to the set $A$ that preserves the total ordering of ${Z \kern -7.5pt Z}$.
[**Theorem 4.2.**]{} [*If $^{*}\!G$ has a hypernode that is not in its principal galaxy $\Gamma_{0}$, then there exists a two-way infinite sequence of galaxies totally ordered according to their closeness to $\Gamma_{0}$.*]{}
[**Note.**]{} There may be many such sequences, and a galaxy in one sequence and a galaxy in another sequence may not be comparable according to their closeness to $\Gamma_{0}$.
[**Proof.**]{} Let ${\bf x}=[\langle x,x,x,\ldots\rangle ]$ be a standard hypernode in $\Gamma_{0}$. Also, let ${\bf v}=[v_{n}]$ be the asserted hypernode not in $\Gamma_{0}$. Thus, for each $m\in{I \kern -4.5pt N}$, $\{n\!: d(v_{n},x)>m\}\,\in\,{\cal F}$. We can choose a subsequence $\langle y_{n}\rangle$ of $\langle v_{n}\rangle$ such that $\langle d(y_{n},x)\rangle$ is a monotonically increasing sequence of natural numbers that tends to $\infty$ as $n\rightarrow\infty$. Thus, ${\bf y}=[y_{n}]$ is a hypernode in a galaxy $\Gamma_{b}$ different from $\Gamma_{0}$.
There will be a smallest $n_{1}\in{I \kern -4.5pt N}$ such that $d(y_{n},x)-d(y_{0},x)>1$ for all $n\geq n_{1}$. Set $w_{n}=y_{0}$ for $0\leq n < n_{1}$. Thus, for $0\leq n<n_{1}$, we have that $d(y_{n},x)-d(w_{n},x)\geq 0$ and $d(w_{n},x)\geq 0$.
Again, there will be a smallest $n_{2}\in{I \kern -4.5pt N}$ such that $d(y_{n},x)-d(y_{n_{1}},x)>2$ for all $n\geq n_{2}$. Set $w_{n}=y_{0}$ for $n_{1}\leq n<n_{2}$. Thus, for $n_{1}\leq n<n_{2}$, we have that $d(y_{n},x)-d(w_{n},x)>1$ and $d(w_{n},x)\geq 0$.
Once again, there will be a smallest $n_{3}\in{I \kern -4.5pt N}$ such that $d(y_{n},x)-d(y_{n_{2}},x)>3$ for all $n\geq n_{3}$. Set $w_{n}=y_{n_{1}}$ for $n_{2}\leq n<n_{3}$. Thus, for $n_{2}\leq n<n_{3}$, we have that $d(y_{n},x)-d(w_{n},x)>2$ and $d(w_{n},x)>1$. The last inequality follows from $d(y_{n_{1}},x)>d(y_{0},x)+1\geq 1$ for all $n\geq n_{1}$.
Continuing this way, we will have a smallest $n_{k}\in{I \kern -4.5pt N}$ such that $d(y_{n},x)-d(y_{n_{k-1}},x)>k$ for all $n\geq n_{k}$. Set $w_{n}=y_{n_{k-2}}$ for $n_{k-1}\leq n<n_{k}$. In this general case for $n_{k-1}\leq n<n_{k}$, we have that $d(y_{n},x)-d(w_{n},x)>k-1$ and $d(w_{n},x)> k-2$. The last inequality occurs because $d(y_{n_{k-2}},x)>d(y_{n_{k-3}},x)+k-2> k-2$ for all $n\geq n_{k-2}$.
Altogether then, $w_{n}$ is defined for all $n$. Moreover, $d(w_{n},x)$ increases monotonically, eventually becoming larger than $m$ for every $m\in{I \kern -4.5pt N}$. Therefore, ${\bf w}=[w_{n}]$ is in a galaxy $\Gamma_{a}$ different from the principal galaxy $\Gamma_{0}$. Furthermore, $d(y_{n},x)-d(w_{n},x)$ also increases monotonically in the same way. Consequently, the galaxy $\Gamma_{a}$ containing ${\bf w}=[w_{n}]$ is closer to $\Gamma_{0}$ than is the galaxy $\Gamma_{b }$ containing ${\bf y}=[y_{n}]$.
We can now repeat this argument with $\Gamma_{b}$ replaced by $\Gamma_{a}$ and with ${\bf w}=[w_{n}]$ playing the role that ${\bf y}=[y_{n}]$ played to find still another galaxy $\Gamma_{a}'$ different from $\Gamma_{0}$ and closer to $\Gamma_{0}$ than is $\Gamma_{a}$. Continual repetitions yield an infinite sequence of galaxies indexed by, say, the negative integers and totally ordered by their closeness to $\Gamma_{0}$.
The conclusion that there is an infinite sequence of galaxies progressively further away from $\Gamma_{0}$ than is $\Gamma_{b}$ is easier to prove. With ${\bf y}\in \Gamma_{b}$ as before, we have that, for every $m\in{I \kern -4.5pt N}$, $\{n\!: d(y_{n},x)>m\}\in {\cal F}$. Therefore, for each $n\in{I \kern -4.5pt N}$, we can choose $z_{n}$ as an element of $\langle y_{n}\rangle$ such that $d(z_{n},x)\geq d(y_{n},x)+n$ and also such that $d(z_{n},x)$ monotonically increases with $n$. Clearly, $d(z_{n},x)\rightarrow\infty$ as $n\rightarrow\infty$. This implies that ${\bf z}=[z_{n}]$ must be in a galaxy $\Gamma_{c}$ that is further away from $\Gamma_{0}$ than is $\Gamma_{b}$.
We can repeat the argument of the last paragraph with $\Gamma_{c}$ in place of $\Gamma_{b}$ to find still another galaxy $\Gamma_{c}'$ further away from $\Gamma_{0}$ than is $\Gamma_{c}$. Repetitions of this argument show that there is an infinite sequence of galaxies indexed by, say, the positive integers and totally ordered by their closeness to $\Gamma_{0}$. The union of the two infinite sequences yields the conclusion of the theorem. $\Box$
By virtue of Theorem 3.7, the conclusion of Theorem 4.2 holds whenever $G$ is locally finite.
In general, the hypothesis of Theorem 4.2 may or may not hold. Thus, $^{*}\!G$ either has exactly one galaxy, its principal one $\Gamma_{0}$, or has infinitely many galaxies.
A more general result may be true: Namely, for every two galaxies $\Gamma_{1}$ and $\Gamma_{3}$ different from $\Gamma_{0}$ with $\Gamma_{1}$ closer to $\Gamma_{0}$ than is $\Gamma_{3}$, there is another galaxy $\Gamma_{2}$ with $\Gamma_{2}$ further away from (resp. closer to) $\Gamma_{0}$ than is $\Gamma_{1}$ (resp. $\Gamma_{3})$. This has yet to be proven.
Instead of the idea of “totally ordered according to closeness to $\Gamma_{0}$,” we can define the idea of “partially ordered according to closeness to $\Gamma_{0}$” in much the same way. Just drop the connectedness axiom for a total ordering.
[**Theorem 4.3.**]{} [*Under the hypothesis of Theorem 4.2, the set of galaxies of $^{*}\!G$ is partially ordered according to the closeness of the galaxies to the principal galaxy $\Gamma_{0}$.*]{}
[**Proof.**]{} Reflexivity and antisymmetry are obvious. Consider transitivity: Let $\Gamma_{a}$, $\Gamma_{b}$, and $\Gamma_{c}$ be galaxies different from $\Gamma_{0}$. (The case where $\Gamma_{a}=\Gamma_{0}$ can be argued similarly.) Assume that $\Gamma_{a}$ is closer to $\Gamma_{0}$ than is $\Gamma_{b}$ and that $\Gamma_{b}$ is closer to $\Gamma_{0}$ than is $\Gamma_{c}$. Thus, for any ${\bf x}$ in $\Gamma_{0}$, ${\bf u}$ in $\Gamma_{a}$, ${\bf v}$ in $\Gamma_{b}$, and ${\bf w}$ in $\Gamma_{c}$ and for every $m\in{I \kern -4.5pt N}$, we have $$N_{uv}\;=\;\{n\!: d(v_{n},x_{n})-d(u_{n},x_{n})\geq m\}\,\in\,{\cal F}$$ and $$N_{vw}\;=\;\{n\!: d(w_{n},x_{n})-d(v_{n},x_{n})\geq m\}\,\in\,{\cal F}.$$ We also have $$d(w_{n},x_{n})-d(u_{n},x_{n})\;=\;d(w_{n},x_{n})-d(v_{n},x_{n})
+d(v_{n},x_{n})-d(u_{n},x_{n}).$$ So, $$N_{uw}\;=\;\{n\!: d(w_{n},x_{n})-d(u_{n},x_{n})\geq 2m\}\,
\supseteq\,N_{uv}\cap N_{vw}\;\in\;{\cal F}.$$ Thus, $N_{uw}\in {\cal F}$. Since $m$ can be chosen arbitrarily, we can conclude that $\Gamma_{a}$ is closer to $\Gamma_{0}$ than is $\Gamma_{c}$. $\Box$
The Hyperordinals
=================
In the following sections, we shall extend the results obtained so far to enlargements of transfinite graphs of rank 1, that is, to enlargements of 1-graphs. For this purpose, we need to replace the set $^{*}\!{I \kern -4.5pt N}$ of hypernaturals by a set of “hyperordinals”; these are defined as follows. A hyperordinal $\underline{\alpha}$ is an equivalence class of sequences of ordinals where two such sequences $\langle \alpha_{n}\rangle$ and $\langle \beta_{n}\rangle$ are taken to be equivalent if $\{n\!: \alpha_{n}=\beta_{n}\}\in {\cal F}$. We denote $\underline{\alpha}$ also by $[\alpha_{n}]$, where again the $\alpha_{n}$ are the elements of one (any one) of the sequences in the equivalence class. Any set of hyperordinals is totally ordered by the inequality relation. That is, given any hyperordinals $\underline{\alpha}=[\alpha_{n}]$ and $\underline{\beta}=[\beta_{n}]$, exactly one of the sets: $$\{n\!: \alpha_{n}<\beta_{n}\},\;\; \{n\!:\alpha_{n}=\beta_{n}\},\;\; \{n\!:\alpha_{n}>\beta_{n}\}$$ will be in ${\cal F}$. So, exactly one of the expressions: $$\underline{\alpha}<\underline{\beta},\;\;\underline{\alpha}=\underline{\beta},\;\;\underline{\alpha}>\underline{\beta}$$ holds.
Walks in 1-Graphs
=================
1-graphs arise when conventionally infinite graphs are connected at their infinite extremities through 1-nodes, the latter being a generalization of the idea of a node. Such 1-nodes and the resulting 1-graphs are defined in [@tgen Section 2.1] and also in [@gn Section 2.3]. Let us restate the needed definitions concisely.
We will be dealing with two kinds of nodes and two kinds of graphs. A conventionally infinite graph $G^{0}$ will now be called a 0-[*graph*]{} and the nodes in $G^{0}$ will be called 0-[*nodes*]{} in order to distinguish these ideas from those pertaining to transfinite graphs of rank 1. Similarly, what we called a “hypernode” previously will henceforth be called a 0-[*hypernode*]{}, and what we called a “galaxy” in the enlargement of a 0-graph will now be called a 0-[*galaxy*]{}.
An [*infinite extremity*]{} of a 0-graph $G^{0}$ is defined as an equivalence class of one-ended paths in $G^{0}$, where two such paths are considered to be [*equivalent*]{} if they are eventually identical. Such an equivalence class is called a 0-[*tip*]{} of $G^{0}$. $G^{0}$ may have one or more 0-tips (or possibly none at all). To obtain the “1-nodes,” the set of 0-tips is partitioned in some fashion into subsets, and to each subset a single 0-node may (or may not) be added under the proviso that, if a 0-node is added to one subset, it is not added to any other subset. Then, each subset (possibly augmented with a 0-node) is called a 1-[*node*]{}. With $X^{1}$ denoting the set of 1-nodes and $X^{0}$ the set of 0-nodes of $G^{0}$, the 1-[*graph*]{} $G^{1}$ is defined as the triplet: $$G^{1}\;=\;\{X^{0},B,X^{1}\},$$ and $G^{0}=\{X^{0},B\}$ is now called the 0-[*graph of*]{} $G^{1}$. Furthermore, a path in $G^{0}$ is now called a 0-[*path*]{}, and connectedness in $G^{0}$ is now called 0-[*connectedness*]{}. We will consistently append the superscript 0 to the symbols and the prefix 0- to the terminology for concepts from Sections 2 through 4 regarding 0-graphs.
In order to define the “1-galaxies,” we need the idea of distances in a 1-graph $G^{1}$. But now, we must make a significant choice. The distances between two nodes (0-nodes or 1-nodes) can be defined as the minimum length of all paths—or, alternatively, of all walks—connecting the two nodes. It turns out that a path need not exist between two nodes in a 1-graph $G^{1}$, but a walk always will exist between them. To ensure the existence of at least one path between every two nodes, additional conditions must be imposed on $G^{1}$ (see [@tgen Conditions 3.2-1 and 3.5-1] or [@gn Condition 3.1-2]), and this leads to a more restrictive and yet more complicated theory involving distances. Such can be done, but it is more general and simpler to use walk-based distance ideas. This we now do.
A [*nontrivial 0-walk*]{} $W^{0}$ in a 0-graph is the conventional concept. It is a (finite or one-way infinite or two-way infinite) alternating sequence: $$W^{0}\;=\;\langle \ldots,x_{-1}^{0},b_{-1},x_{0}^{0},b_{0},x_{1}^{0},b_{1},\ldots\rangle \label{6.1}$$ of 0-nodes $x_{m}^{0}$ and branches $b_{m}$, where each branch $b_{m}$ is incident to the two 0-nodes $x_{m}^{0}$ and $x_{m+1}^{0}$ adjacent to it in the sequence. If the sequence terminates at either side, it is required to terminate at a 0-node. The 0-walk is called [*two-ended*]{} or [*finite*]{} if it terminates on both sides, [*one-ended*]{} if it terminates on just one side, and [*endless*]{} if it terminates on neither side.
A [*trivial 0-walk*]{} is a singleton set whose sole element is a 0-node.
A one-ended 0-walk $W^{0}$ will be called [*extended*]{} if its 0-nodes are eventually distinct, that is, if it is eventually identical to a one-ended path. We say that $W^{0}$ [*traverses*]{} a 0-tip if it is extended and eventually identical to a representative of that 0-tip. Finally, $W^{0}$ is said to [*reach*]{} a 1-node $x^{1}$ if $W^{0}$ traverses a 0-tip contained in $x^{1}$. In the same way, an endless 0-walk can [*reach*]{} two 1-nodes (or possibly reach the same 1-node) by traversing two 0-tips, one toward the left and the other toward the right. When this is so, we say that the endless 0-walk is [*extended*]{}. On the other hand, if a 0-walk terminates at a 0-node contained in a 1-node, we again say that the 0-walk [*reaches*]{} both of those nodes and does so [*through*]{} a branch incident to that 0-node.
Every two-ended 0-walk contains a 0-path that terminates at the two 0-nodes at which the 0-walk terminates, so there is no need to employ 0-walks when defining distances in a 0-graph. On the other hand, such a need arises for 1-graphs. To meet this need, we first define a 0-[*section*]{} $S^{0}$ in a 1-graph $G^{1}$ as a subgraph $S^{0}$ of the 0-graph $G^{0}$ of $G^{1}$ induced by a maximal set of branches that are pairwise 0-connected in $G^{0}$. A 1-node $x^{1}$ is said to be [*incident to*]{} $S^{0}$ if either it contains a 0-node incident to a branch of $S^{0}$ or it contains a 0-tip having a representative one-ended path lying entirely within $S^{0}$. In this case, we also say that that 0-tip [*belongs to*]{} $S^{0}$. Given two 1-nodes $x^{1}$ and $y^{1}$ incident to $S^{0}$, there will be a 0-walk $W^{0}$ in $S^{0}$ that reaches each of $x^{1}$ and $y^{1}$ through a 0-tip belonging to $S^{0}$ or through a branch in $S^{0}$.[^4] Moreover, there may also be a 0-walk $W^{0}$ in $S^{0}$ that reaches the same 1-node at both extremities of $W^{0}$. To be more specific, let us state
[**Lemma 6.1.**]{} [*Let $S^{0}$ be a 0-section in $G^{1}$, and let $x^{1}$ and $y^{1}$ be two 1-nodes incident to $S^{0}$. Then, there exists a 0-walk in $S^{0}$ that reaches $x^{1}$ and $y^{1}$.*]{}
[**Proof.**]{} That $x^{1}$ is incident to $S^{0}$ means that there is a 0-path $P_{x}^{0}$ in $S^{0}$ that either reaches $x^{1}$ through a 0-tip of $x^{1}$ or reaches $x^{1}$ through a branch. Similarly, there is such a 0-path $P_{y}^{0}$ reaching $y^{1}$. Let $u^{0}$ be a node of $P_{x}^{0}$, and let $v^{0}$ be a node of $P_{y}^{0}$. Since $S^{0}$ is 0-connected, there is a 0-path $P_{uv}^{0}$ in $S^{0}$ terminating at $u^{0}$ and $v^{0}$ (possibly a trivial 0-path if $u^{0}=v^{0}$). Then, $P_{x}^{0}\cup P_{uv}^{0}\cup P_{y}^{0}$ is a 0-walk in $S^{0}$, as asserted. $\Box$
A [*nontrivial, two-ended 1-walk*]{} $W^{1}$ is a finite sequence: $$W^{1}\;=\;\langle x_{0},W^{0}_{0},x_{1}^{1},W_{1}^{0},\ldots,x_{m-1}^{1},W_{m-1}^{0},x_{m}\rangle \label{6.2}$$ with $m\geq 1$ that satisfies the following conditions.
[1.]{} $x_{1}^{1},\ldots,x_{m-1}^{1}$ are 1-nodes, while $x_{0}$ and $x_{m}$ may be either 0-nodes or 1-nodes.
[2.]{} For each $k=0,\ldots,m-1$, $W_{k}^{0}$ is a nontrivial 0-walk that reaches the two nodes adjacent to it in the sequence.
[3.]{} For each $k=1,\ldots,m-1$, at least one of $W_{k-1}^{0}$ and $W_{k}^{0}$ reaches $x_{k}^{1}$ through a 0-tip, not through a branch.
A [*one-ended*]{} 1-walk is a sequence like (\[6.2\]) except that it extends infinitely to the right. An [*endless*]{} 1-walk extends infinitely on both sides. A [*trivial 1-walk*]{} is a singleton set whose sole element is either a 0-node or a 1-node.
We now define a more general kind of connectedness (called “1-wconnectedness” to distinguish it from path-based 1-connectedness). Two branches (resp. two nodes—either 0-nodes or 1-nodes) will be said to be 1-[*wconnected*]{} if there exists a 0-walk or 1-walk that terminates at a 0-node of each branch (resp. that terminates at those two nodes). If a terminal node of a walk is the same as, or contains, or is contained in the terminal node of another walk, the two walks taken together form another walk. We call this the [*conjunction*]{} of the two walks. It follows that 1-wconnectedness is a transitive binary relation for the branch set $B$ of the 1-graph $G^{1}$ and is in fact an equivalence relation. If every two branches of $G^{1}$ are 1-wconnected, we will say that $G^{1}$ is 1-[*wconnected*]{}.
Walk-Based Distances in a 1-Graph
=================================
The length $|W^{0}|$ of a 0-walk $W^{0}$ is defined as follows: If $W^{0}$ is two-ended, $|W^{0}|$ is the number $\tau_{0}$ of branch traversals in it; that is, each branch is counted as many times as it appears in $W^{0}$. If $W^{0}$ is one-ended and extended, we set $|W^{0}|=\omega$, the first transfinite ordinal. If $W^{0}$ is endless and extended in both directions, we set $|W^{0}|=\omega\cdot 2$.
As for a nontrivial two-ended 1-walk $W^{1}$, its length $|W^{1}|$ is taken to be $|W^{1}|=\sum_{k=0}^{m-1}|W_{k}^{0}|$, where the sum is over the finitely many 0-walks $W_{k}^{0}$ in (\[6.2\]). Thus, $$|W^{1}|\;=\;\omega\cdot \tau_{1}+\tau_{0} \label{7.1}$$ where $\tau_{1}$ is the number of traversals of 0-tips performed by $W^{1}$ and $\tau_{0}$ is the number of traversals of branches in all the two-ended (i.e., finite) 0-walks appearing as terms in (\[6.2\]). We take $\sum_{k=0}^{m-1} |W^{0}_{k}|$ to be the natural sum of ordinals; this yields a normal expansion of an ordinal [@ab pages 354-355]. $\tau_{1}$ is not 0 because $W^{1}$ is a nontrivial, two-ended 1-walk. However, $\tau_{0}$ may be 0, this occurring when every $W_{k}^{0}$ in (\[6.2\]) is one-ended or endless.
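As a quick illustration (our own example, not part of the original text), suppose $x_{0}$ is a 1-node, $W_{0}^{0}$ is an endless extended 0-walk reaching $x_{0}$ and $x_{1}^{1}$ through 0-tips, and $W_{1}^{0}$ is a two-ended 0-walk of three branch traversals reaching $x_{1}^{1}$ through a branch and terminating at a 0-node $x_{2}$. Then

```latex
% Hypothetical 1-walk W^1 = <x_0, W_0^0, x_1^1, W_1^0, x_2>:
% |W_0^0| = omega * 2   (endless and extended: two 0-tip traversals)
% |W_1^0| = 3           (three branch traversals)
|W^{1}| \;=\; \omega\cdot 2 \,+\, 3
% i.e., tau_1 = 2 and tau_0 = 3 in the form |W^1| = omega*tau_1 + tau_0.
```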
A 0-node is called [*maximal*]{} if it is not contained in a 1-node, and [*nonmaximal*]{} otherwise. A distance measured from a nonmaximal 0-node is the same as that measured from the 1-node containing it. Given two nodes $x$ and $y$ (of ranks 0 or 1), we define the [*wdistance*]{}[^5] $d(x,y)$ between them as $$d(x,y)\;=\;\min|W_{x,y}| \label{7.2}$$ where the minimum is taken over all two-ended walks (0-walks or 1-walks) terminating at $x$ and $y$. That minimum exists because any set of ordinals is a well-ordered set. In view of (\[7.1\]), $d(x,y)<\omega^{2}$. If $x=y$, we set $d(x,x)=0$.
Clearly, if $x\neq y$, $d(x,y)>0$ and $d(x,y)=d(y,x)$. Furthermore, the conjunction of two two-ended walks is again a two-ended walk, whose length is the natural sum of the ordinal lengths of the two walks. So, by taking minimums appropriately, we obtain the triangle inequality: $$d(x,z)\;\leq\;d(x,y)\,+\,d(y,z) \label{7.3}$$ where again the natural sum of ordinals is understood. Altogether then, we have
[**Lemma 7.1.**]{} [*The ordinal-valued wdistances between the maximal nodes of a 1-graph satisfy the metric axioms.*]{}
Enlargements of 1-Graphs and Hyperdistances in Them
===================================================
In [@gn pages 163-164], a nonstandard 1-node was defined as an equivalence class of sequences of sets of tips shorted together, with the tips taken from sequences of possibly differing 1-graphs. But, since each set of tips shorted together is a 1-node, that definition of a nonstandard 1-node can also be stated as an equivalence class of sequences of 1-nodes. Specializing to the case where all the 1-graphs are the same, we have the following definition of a nonstandard 1-node, which we now call a “1-hypernode.”
Consider a given 1-graph along with a chosen free ultrafilter ${\cal F}$. Two sequences $\langle x_{n}^{1}\rangle$ and $\langle y_{n}^{1}\rangle$ of 1-nodes in $G^{1}$ are taken to be [*equivalent*]{} if $\{n\!: x_{n}^{1}=y_{n}^{1}\}\in{\cal F}$. It is easy to show that this is truly an equivalence relation. Then, ${\bf x}^{1}=[x_{n}^{1}]$ denotes one such equivalence class, where the $x_{n}^{1}$ are the elements of any one of the sequences in that class. ${\bf x}^{1}$ will be called a 1-[*hypernode*]{}.
The [*enlargement*]{} of the 1-graph $G^{1}=\{X^{0},B,X^{1}\}$ is the nonstandard 1-graph $$^{*}\!G^{1}\;=\;\{\,^{*}\!X^{0},\,^{*}\!B,\,^{*}\!X^{1}\,\}$$ where $^{*}\!X^{0}$ and $^{*}\!B$ are respectively the set of 0-hypernodes and branches in the enlargement of the 0-graph $G^{0}=\{X^{0},B\}$ of $G^{1}$ and $^{*}\!X^{1}$ is the set of 1-hypernodes defined above, that is, the set of all equivalence classes of sequences of 1-nodes taken from $X^{1}$.
We define the [*hyperdistance*]{} ${\bf d}$ between any two hypernodes ${\bf x}$ and ${\bf y}$ of $^{*}\!G^{1}$ (of ranks 0 and/or 1) to be the internal function $${\bf d}({\bf x},{\bf y})\;=\;[d(x_{n},y_{n})]. \label{8.1}$$ Since distances in $G^{1}$ are less than $\omega^{2}$, ${\bf d}({\bf x},{\bf y})$ is a hyperordinal less than $\underline{\omega}^{2}$. We say that a 0-hypernode ${\bf x}^{0}=[x_{n}^{0}]$ is [*maximal*]{} if the set of $n$ for which $x_{n}^{0}$ is not contained in a 1-node is a member of $\cal F$. All the 1-nodes in this work are perforce maximal because there are no nodes of higher rank. ${\bf d}$, when restricted to the maximal hypernodes, also satisfies the metric axioms, in particular, the triangle inequality: $${\bf d}({\bf x},{\bf z})\;\leq\;{\bf d}({\bf x},{\bf y})\,+\,{\bf d}({\bf y},{\bf z}) \label{8.2}$$ But, now $\bf d$ is hyperordinal-valued.
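For example (an illustration we add here under the definitions above), if the representative distances grow without bound, the hyperdistance is an unlimited hyperordinal:

```latex
% Suppose d(x_n, y_n) = omega * n for every n. Then
{\bf d}({\bf x},{\bf y}) \;=\; [\,d(x_{n},y_{n})\,] \;=\; [\,\omega\cdot n\,] \;<\; \underline{\omega}^{2},
% and yet, for each standard k, the set {n : d(x_n,y_n) > omega*k} is
% cofinite and hence a member of F, so d(x,y) exceeds [omega*k] for every k.
```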
The Galaxies of $^{*}\!G^{1}$
=============================
The 0-[*galaxies*]{} of $^{*}\!G^{1}$ are defined just as they are for the enlargement $^{*}\!G^{0}$ of a 0-graph; see Section 3. However, we henceforth write “0-galaxy” in place of “galaxy” and “0-limitedly distant” in place of “limitedly distant.”
As was mentioned above, each 0-section of $G^{1}$ is the subgraph of the 0-graph $G^{0}=\{X^{0},B\}$ of $G^{1}$ induced by a maximal set of branches that are 0-connected. A 0-section is a 0-graph by itself. So, within the enlargement $^{*}\!G^{1}$, each 0-section $S^{0}$ enlarges into $^{*}\!S^{0}$ as defined in Section 2. Within each enlarged 0-section there may be one or more 0-galaxies. As a special case, a particular 0-section may have only finitely many 0-nodes, and so its enlargement is itself—all its 0-hypernodes are standard. On the other hand, there may be infinitely many 0-galaxies in some enlarged 0-section. Moreover, the enlarged 0-sections do not, in general, comprise all of the enlarged 0-graph $^{*}\!G^{0}=\{\,^{*}\!X^{0},\,^{*}\!B\,\}$ of $^{*}\!G^{1}$. Indeed, there can be a 0-hypernode ${\bf x}^{0}=[x_{n}^{0}]$ where each $x_{n}^{0}$ resides in a different 0-section; in this case ${\bf x}^{0}$ will reside in a 0-galaxy that is not in an enlargement of a 0-section.
Something more can happen with regard to the 0-galaxies in $^{*}\!G^{1}$. 0-galaxies can now contain 1-hypernodes. For example, this occurs when a 1-node $x^{1}$ is incident to a 0-section $S^{0}$ through a branch. Then, the standard 1-hypernode ${\bf x}^{1}$ corresponding to $x^{1}$ is 0-limitedly distant from the standard 0-hypernodes in $^{*}\! S^{0}$. So, there is a 0-galaxy containing not only $^{*}\!S^{0}$ but ${\bf x}^{1}$ as well. See Example 9.3 below in this regard. In general, the nodal 0-galaxies partition the set $^{*}\!X^{0}\,\cup\,^{*}\!X^{1}$ of all the hypernodes in $^{*}\!G^{1}$. As we shall see in Examples 9.1 and 9.2 below, there may be a singleton 0-galaxy containing a 1-hypernode only.
Let us now turn to the “1-galaxies” of $^{*}\!G^{1}$. Two hypernodes ${\bf x}=[x_{n}]$ and ${\bf y}=[y_{n}]$ (of ranks 0 and/or 1) in $^{*}\!G^{1}$ will be said to be in the same [*nodal 1-galaxy*]{} $\dot{\Gamma}^{1}$ if there exists a natural number $k\in{I \kern -4.5pt N}$ such that $\{n\!:d(x_{n},y_{n})\leq \omega\cdot k\}\in{\cal F}$. In this case, we say that ${\bf x}$ and ${\bf y}$ are [*1-limitedly distant*]{}, and we write ${\bf d}({\bf x},{\bf y})\leq [\omega\cdot k]$, where $[\omega\cdot k]$ denotes the standard hyperordinal corresponding to $\omega\cdot k$. This defines an equivalence relation on the set $^{*}\!X^{0}\,\cup\, ^{*}\!X^{1}$ of all the hypernodes in $^{*}\!G^{1}$. Indeed, reflexivity and symmetry are obvious. For transitivity, assume that ${\bf x}$ and ${\bf y}$ are 1-limitedly distant and that ${\bf y}$ and ${\bf z}$ are 1-limitedly distant, too. Then, there are natural numbers $k_{1}$ and $k_{2}$ such that $$N_{xy}\;=\;\{n\!: d(x_{n},y_{n})\leq \omega\cdot k_{1}\}\,\in\,{\cal F}$$ and $$N_{yz}\;=\;\{n\!: d(y_{n},z_{n})\leq \omega\cdot k_{2}\}\,\in\,{\cal F}.$$ By the triangle inequality (\[7.3\]), $$N_{xz}\;=\;\{n\!:d(x_{n},z_{n})\leq\omega\cdot (k_{1}+k_{2})\}\;\supseteq\;N_{xy}\cap N_{yz}\;\in\;{\cal F}.$$ So, $N_{xz}\in {\cal F}$ and therefore ${\bf x}$ and ${\bf z}$ are 1-limitedly distant. We can conclude that the set $^{*}\!X^{0}\,\cup\,^{*}\!X^{1}$ of all hypernodes in $^{*}\! G^{1}$ is partitioned into nodal 1-galaxies by this equivalence relation.
Corresponding to each nodal 1-galaxy $\dot{\Gamma}^{1}$, we define a 1-[*galaxy*]{} $\Gamma^{1}$ as a nonstandard subgraph of $^{*}\!G^{1}$ consisting of all the hypernodes in $\dot{\Gamma}^{1}$ along with all the hyperbranches both of whose 0-hypernodes are in $\dot{\Gamma}^{1}$.
No hyperbranch can have its two incident 0-hypernodes in two different 0-galaxies or two different 1-galaxies because the distance between their 0-hypernodes is 1. Thus, the hyperbranch set $^{*}\!B$ is also partitioned by the 0-galaxies and more coarsely by the 1-galaxies.
The [*principal 1-galaxy*]{} $\Gamma_{0}^{1}$ of $^{*}\!G^{1}$ is the 1-galaxy whose hypernodes are 1-limitedly distant from a standard hypernode in $^{*}\!G^{1}$ (i.e., from a node of $G^{1}$).
Note that the enlargement $^{*}\!S^{0}$ of each 0-section $S^{0}$ of $G^{1}$ has its own principal 0-galaxy $\Gamma_{0}^{0}(S^{0})$. Moreover, every $^{*}\!S^{0}$ lies within the principal 1-galaxy $\Gamma_{0}^{1}$. Indeed, any standard hypernode ${\bf x}$ by which $\Gamma_{0}^{1}$ may be defined and any standard 0-hypernode ${\bf y}^{0}$ by which $\Gamma_{0}^{0}(S^{0})$ may be defined are 1-limitedly distant. Also, the hyperdistance ${\bf d}({\bf y}^{0},{\bf z}^{0})$ between any two 0-hypernodes ${\bf y}^{0}$ and ${\bf z}^{0}$ of $^{*}\!S^{0}$ is no larger than a hypernatural ${\bf k}$. So, by the triangle inequality (\[8.2\]), every 0-hypernode of $^{*}\!S^{0}$ is 1-limitedly distant from ${\bf x}$. Whence our assertion.
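To illustrate how 1-limited distance is coarser than 0-limited distance (again our own sketch, not in the original), take a one-ended 0-path starting at a 0-node $x^{0}$ and let $x_{n}^{0}$ be the 0-node $n$ branches along it, so that $d(x^{0},x_{n}^{0})=n$:

```latex
% For the 0-hypernode built from these receding 0-nodes, [x_n^0]:
\{\,n : d(x^{0},x_{n}^{0})\leq k\,\}\ \mbox{is finite for each } k\in{I \kern -4.5pt N}
  % so it is not a member of the free ultrafilter F, and [x_n^0]
  % lies outside the principal 0-galaxy of the enlarged path;
\{\,n : d(x^{0},x_{n}^{0})\leq \omega\cdot 1\,\} = {I \kern -4.5pt N}\,\in\,{\cal F}
  % so [x_n^0] is 1-limitedly distant from x^0 and lies inside
  % the principal 1-galaxy.
```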
[**Example 9.1.**]{} Consider an endless 1-path $P^{1}$ having an endless 0-path between every consecutive pair of 1-nodes in $P^{1}$. The 0-sections of $P^{1}$ are those endless 0-paths, and each of their enlargements has an infinity of 0-galaxies in $^{*}\!P^{1}$. However, there are other 0-galaxies in $^{*}\!P^{1}$, infinitely many of them. Indeed, consider a 0-hypernode ${\bf x}^{0}=[x_{n}^{0}]$, where each 0-node $x_{n}^{0}$ lies in a different 0-section of $P^{1}$; ${\bf x}^{0}$ will lie in a 0-galaxy $\Gamma_{1}^{0}$ different from all the 0-galaxies in any enlargement of a 0-section of $P^{1}$. The 0-nodes of $\Gamma_{1}^{0}$ will be all the 0-hypernodes that are 0-limitedly distant from ${\bf x}^{0}$. Furthermore, there are still other 0-galaxies now. Each 1-hypernode ${\bf x}^{1}=[x_{n}^{1}]$ is the sole member of a 0-galaxy. In fact, the nodal 0-galaxies partition the set of all the 0-hypernodes and 1-hypernodes.
On the other hand, the principal 1-galaxy of $^{*}\!P^{1}$ consists of all the standard 1-hypernodes corresponding to the 1-nodes of $P^{1}$ along with the enlargements of the 0-sections of $P^{1}$. Also, there will be infinitely many 1-galaxies, each of which contains infinitely many 0-galaxies along with 1-hypernodes. In this particular case, each of the 1-galaxies is graphically isomorphic to the principal 1-galaxy, but this is not true in general. $\Box$
[**Example 9.2.**]{} An example of a nonstandard 1-graph $^{*}\!G^{1}$ having exactly one 1-galaxy (its principal one) and infinitely many 0-galaxies is provided by the enlargement of the 1-graph $G^{1}$ obtained from the 0-graph of Figure 1 by replacing each branch by an endless 0-path, thereby converting each 0-node into a 1-node. Again each endless path of that 1-graph $G^{1}$ is a 0-section, and its enlargement is like that of Example 3.2. There are infinitely many such 0-galaxies in the enlargement $^{*}\!G^{1}$ of $G^{1}$. Also, there are infinitely many 0-galaxies, each consisting of a single 1-hypernode. With regard to the 1-galaxies, the enlargement $^{*}\!G^{1}$ of $G^{1}$ mimics that of Example 3.4, except that now the rank 0 is replaced by the rank 1. The hyperdistance between every two 1-hypernodes (resp. 0-hypernodes) is no larger than $\omega\cdot 4$ (resp. $\omega\cdot 6$). Hence, $^{*}\!G^{1}$ has only one 1-galaxy, its principal one. $\Box$
[**Example 9.3.**]{} Here is an example where the 1-hypernodes are not isolated within 0-galaxies. Replace each of the horizontal branches in Figure 1 by an endless 0-path, but do not alter the branches incident to $x_{g}$. Now, the nodes $x_{k}$ $(k=0,1,2,\ldots)$ become 1-nodes $x_{k}^{1}$, each containing a 0-node of the branch incident to $x_{k}^{1}$ and $x_{g}$. The corresponding standard 1-hypernodes along with the standard 0-hypernode for $x_{g}$ and the standard hyperbranches connecting them all comprise a single 0-galaxy. Moreover, there will be other 0-galaxies obtained through equivalence classes of sequences of these nodes and branches. The endless paths that replace the horizontal branches lead to still other 0-galaxies. Again, the nodal 0-galaxies partition the set of all the hypernodes in $^{*}\!G^{1}$.
On the other hand, there is again only one 1-galaxy for $^{*}\!G^{1}$. $\Box$
[**Example 9.4.**]{} The distances in the three preceding examples can be fully defined by paths. So, let us now present an example where walks are needed. The 1-graph $G^{1}$ of Figure 3 illustrates one such case. It consists of an infinite sequence of 0-subgraphs, each of which is an infinite series connection of four-branch subgraphs, each in a diamond configuration, as shown. To save words, we shall refer to such an infinite series connection as a “chain.” The chain starting at the 0-node $x_{k}^{0}$ will be denoted by $C_{k}$ $(k=0,1,2,\ldots)$. Each $C_{k}$ is a 0-graph; it does not contain any 1-node. Each $C_{k}$ has uncountably many 0-tips. One 0-tip has a representative 0-path starting at $x_{k}^{0}$, proceeding along the left-hand sides of the diamond configurations, and reaching the 1-node $x_{k}^{1}$. Another 0-tip has a representative 0-path that proceeds along the right-hand sides and reaches the 1-node $x_{k+1}^{1}$. Still other 0-tips of $C_{k}$ (uncountably many of them) have representatives that pass back and forth between the two sides infinitely often to reach singleton 1-nodes; these are not shown in that figure. The chain $C_{k}$ is connected to $C_{k+1}$ through the 1-node $x_{k+1}^{1}$, as shown. Note that there is no path connecting, say, $x_{k}^{0}$ to $x_{m}^{0}$ when $m-k\geq 2$, but there is such a walk.
Each $C_{k}$ is a 0-section, and its enlargement $^{*}\!C_{k}$ has infinitely many 0-galaxies. Also, the 1-nodes $x_{k}^{1}$ together produce infinitely many 0-galaxies, each being a single 1-hypernode. As before, the nodal 0-galaxies comprise a partition of $^{*}\!X^{0}\,\cup\,^{*}\!X^{1}$.
On the other hand, the enlargement $^{*}\!G^{1}$ of the 1-graph $G^{1}$ of Figure 3 has infinitely many 1-galaxies. Its principal one is a copy of $G^{1}$. Each of the other 1-galaxies is also a copy of $G^{1}$ except that it extends infinitely in both directions—infinitely to the left and infinitely to the right. Here, too, the nodal 1-galaxies comprise a partitioning of $^{*}\!X^{0}\,\cup\,^{*}\!X^{1}$, but a coarser one. $\Box$
These examples indicate that the enlargements of 1-graphs can have rather complicated structures.
Locally 1-Finite 1-Graphs and a Property of Their Enlargements
==============================================================
In general, $^{*}\!G^{1}$ has 1-galaxies other than its principal 1-galaxy. One circumstance where this occurs is when $G^{1}$ is locally finite in a certain way, which we will explicate below.
We need some more definitions. Two 1-nodes of $G^{1}$ are said to be 1-[*adjacent*]{} if they are incident to the same 0-section. A 1-node will be called a [*boundary 1-node*]{} if it is incident to two or more 0-sections. $G^{1}$ will be called [*locally 1-finite*]{} if each of its 0-sections has only finitely many incident boundary 1-nodes.[^6]
[**Lemma 10.1.**]{} [*Let $x^{1}$ be a boundary 1-node. Then, any 1-walk that passes through $x^{1}$ from any 0-section $S^{0}_{1}$ incident to $x^{1}$ to any other 0-section $S^{0}_{2}$ incident to $x^{1}$ must have a length no less than $\omega$.*]{}
[**Proof.**]{} The only way such a walk can have a length less than $\omega$ (i.e., a length equal to a natural number) is if it avoids traversing a 0-tip in $x^{1}$. But, this means that it passes through two branches incident to a 0-node in $x^{1}$. But, that in turn means that $S^{0}_{1}$ and $S^{0}_{2}$ cannot be different 0-sections. $\Box$
Remember that $G^{1}$ is called 1-wconnected if, for every two nodes of $G^{1}$, there is a 0-walk or 1-walk that reaches those two nodes.
[**Lemma 10.2.**]{} [*Any two 1-nodes $x^{1}$ and $y^{1}$ that are 1-wconnected but are not 1-adjacent must satisfy $d(x^{1},y^{1})\geq \omega$.*]{}
[**Proof.**]{} Any walk 1-wconnecting $x^{1}$ and $y^{1}$ must pass through at least one boundary 1-node different from $x^{1}$ and $y^{1}$ while passing from one 0-section to another 0-section. Therefore, that walk must be a 1-walk. By Lemma 10.1, its length is no less than $\omega$. Since this is true for every such walk, our conclusion follows. $\Box$
The next theorem mimics Theorem 3.7 but at the rank 1.
[**Theorem 10.3.**]{} [*Let $G^{1}$ be locally 1-finite and 1-wconnected and have infinitely many boundary 1-nodes. Then, given any 1-node $x_{0}^{1}$ of $G^{1}$, there is a one-ended 1-walk $W^{1}$ starting at $x_{0}^{1}$: $$W^{1}\;=\;\langle x_{0}^{1},W_{0}^{0},x_{1}^{1},W_{1}^{0},
\ldots,x_{m}^{1},W_{m}^{0},\ldots\rangle$$ such that there is a subsequence of 1-nodes $x_{m_{k}}^{1}$, $k=1,2,3,\ldots$, satisfying $d(x_{0}^{1},x_{m_{k}}^{1})\,\geq\,\omega\cdot k$.*]{}
[**Proof.**]{} $x^{1}_{0}$ need not be a boundary 1-node, but it will be 1-adjacent to only finitely many boundary 1-nodes because of local 1-finiteness and 1-wconnectedness. Let $X_{0}$ be the nonempty finite set of those boundary 1-nodes. For the same reasons, there is a nonempty finite set $X_{1}$ of boundary 1-nodes, each being 1-adjacent to some 1-node in $X_{0}$ but not 1-adjacent to $x_{0}^{1}$. By Lemma 10.2, for each $x^{1}\in X_{1}$, we have $d(x_{0}^{1},x^{1})\geq\omega$. In general, for each $k\in{I \kern -4.5pt N}$, $k\geq 2$, there is a nonempty finite set $X_{k}$ of boundary 1-nodes, each being 1-adjacent to some 1-node in $X_{k-1}$ but not 1-adjacent to any of the 1-nodes in $\cup_{l=0}^{k-2} X_{l}$. By Lemma 10.2 again, for any such $x^{1}\in X_{k}$, we have $d(x_{0}^{1},x^{1})\geq \omega\cdot k$.
We now adapt the proof of König’s lemma: From each of the infinitely many boundary 1-nodes in $G^{1}$, there is a 1-walk reaching that boundary 1-node and also reaching $x_{0}^{1}$. Thus, there are infinitely many 1-walks starting at $x_{0}^{1}$ and passing through one of the 1-nodes in $X_{0}$, say, $x_{m_{0}}^{1}$. Among those 1-walks, there are again infinitely many 1-walks passing through one of the 1-nodes in $X_{1}$, say, $x_{m_{1}}^{1}$. Continuing in this way, we find an infinite sequence $\langle x_{m_{1}}^{1}, x_{m_{2}}^{1},x_{m_{3}}^{1},\ldots\rangle$ of 1-nodes occurring in a one-ended 1-walk starting at $x_{0}^{1}$ and such that $d(x_{0}^{1},x_{m_{k}}^{1})\geq\omega\cdot k$. $\Box$
[**Corollary 10.4.**]{} [*Under the hypothesis of Theorem 10.3, the enlargement $^{*}\!G^{1}$ of $G^{1}$ has at least one 1-hypernode not in its principal galaxy $\Gamma_{0}^{1}$ and thus at least one 1-galaxy $\Gamma^{1}$ different from its principal 1-galaxy $\Gamma_{0}^{1}$.*]{}
[**Proof.**]{} Set ${\bf x}^{1} =[\langle x_{0}^{1},x_{m_{0}}^{1},x_{m_{1}}^{1},\ldots\rangle]$, where the $x_{m_{k}}^{1}$ are the 1-nodes specified in the preceding proof. With ${\bf x}_{0}^{1}$ being the standard 1-hypernode corresponding to $x_{0}^{1}$, we have by Theorem 10.3 that ${\bf d}({\bf x}_{0}^{1},{\bf x}^{1})\geq [\omega\cdot k]$ for every $k\in{I \kern -4.5pt N}$. Hence, ${\bf x}^{1}$ is not 1-limitedly distant from ${\bf x}_{0}^{1}$ and thus must reside in a 1-galaxy $\Gamma^{1}$ different from $\Gamma_{0}^{1}$. $\Box$
When $^{*}\!G^{1}$ Has a 1-Hypernode Not in Its Principal Galaxy
================================================================
We are at last ready to extend the results of Section 4 to the rank 1 of transfiniteness. The arguments are much the same as those of Section 4, and so we shall now simply state definitions and results while at times indicating what modifications are needed.
In this section $G^{1}$ is 1-wconnected and has an infinity of boundary 1-nodes, but $G^{1}$ need not be locally finite. Let $\Gamma^{1}_{a}$ and $\Gamma^{1}_{b}$ be two 1-galaxies of $^{*}\!G^{1}$ that are different from the principal 1-galaxy $\Gamma^{1}_{0}$. We say that $\Gamma^{1}_{a}$ [*is closer to $\Gamma^{1}_{0}$ than is $\Gamma^{1}_{b}$*]{} and that $\Gamma^{1}_{b}$ [*is further away from $\Gamma^{1}_{0}$ than is $\Gamma^{1}_{a}$*]{} if there are a ${\bf y}=[y_{n}]$ in $\Gamma^{1}_{a}$ and a ${\bf z}=[z_{n}]$ in $\Gamma^{1}_{b}$ such that, for some ${\bf x}=[x_{n}]$ in $\Gamma^{1}_{0}$ and for every $m\in{I \kern -4.5pt N}$, $$\{n\!: d(z_{n},x_{n})-d(y_{n},x_{n})\;\geq\omega\cdot m\}\;\in\;{\cal F}.$$ (The ranks of ${\bf x}$, ${\bf y}$, and ${\bf z}$ may now be either 0 or 1.)
Any set of 1-galaxies for which every two of them, say $\Gamma^{1}_{a}$ and $\Gamma^{1}_{b}$, satisfy these conditions will be said to be [*totally ordered according to their closeness to*]{} $\Gamma^{1}_{0}$. Here, too, the conditions for a total ordering are readily verified.
[**Lemma 11.1.**]{} [*These definitions are independent of the representative sequences $\langle x_{n}\rangle$, $\langle y_{n}\rangle$, and $\langle z_{n}\rangle$ chosen for ${\bf x}$, ${\bf y}$, and ${\bf z}$.*]{}
The proof of this lemma is the same as that of Lemma 4.1 except that the rank 0 is replaced by the transfinite rank 1. For instance, the natural numbers $m_{0}$, $m_{1}$, and $m_{2}$ are now replaced by $\omega\cdot m_{0}$, $\;\omega\cdot m_{1}$, and $\omega\cdot m_{2}$.
[**Theorem 11.2.**]{} [*If $^{*}\!G^{1}$ has a hypernode (of either rank 0 or rank 1) that is not in its principal 1-galaxy $\Gamma^{1}_{0}$, then there exists a two-way infinite sequence of 1-galaxies totally ordered according to their closeness to $\Gamma^{1}_{0}$.*]{}
Here, too, the proof of this is much like that of Theorem 4.2. For instance, the natural number $k$ is replaced by the ordinal $\omega\cdot k$. Also, galaxies (that is, 0-galaxies) are replaced by 1-galaxies.
Similarly, by mimicking the proof of Theorem 4.3, we can prove
[**Theorem 11.3.**]{} [*Under the hypothesis of Theorem 11.2, the set of 1-galaxies of $^{*}\!G^{1}$ is partially ordered according to the closeness of the 1-galaxies to $\Gamma^{1}_{0}$.*]{}
Extensions to Higher Ranks of Transfiniteness
=============================================
The extension of these results to the enlargements of transfinite graphs of any natural-number rank is quite similar to what we have presented. The ideas are the same, but the notations and the details of the arguments are somewhat more complicated. Moreover, further complications arise with the extension to the arrow rank $\vec{\omega}$ of transfiniteness. Extensions to still higher ranks then proceed in much the same way. All this is explicated in the technical report [@gal2], which can also be found in the internet archive www.arxiv.org.
[99]{}
A. Abian, [*The Theory of Sets and Transfinite Arithmetic*]{}, W.B. Saunders Company, Philadelphia, Pennsylvania, 1965. C. Berge, [*Graphs and Hypergraphs*]{}, North Holland Publishing Co., Amsterdam, 1973. R. Goldblatt, [*Lectures on the Hyperreals*]{}, Springer, New York, 1998. R.J. Wilson, [*Introduction to Graph Theory*]{}, Academic Press, New York, 1972. A.H. Zemanian, [*Transfiniteness for Graphs, Electrical Networks, and Random Walks*]{}, Birkhauser-Boston, Cambridge, Massachusetts, 1996. A.H. Zemanian, [*Graphs and Networks: Transfinite and Nonstandard*]{}, Birkhauser-Boston, Cambridge, Massachusetts, 2004. A.H. Zemanian, [*The Galaxies of Nonstandard Enlargements of Transfinite Graphs of Higher Ranks*]{}, CEAS Technical Report 814, University at Stony Brook, Stony Brook, NY 11794, September 2004.
[^1]: Also called a nonprincipal ultrafilter.
[^2]: Our terminology should not be confused with that of a hypergraph—an entirely different concept [@be].
[^3]: If $G$ were a finite graph, then every hypernode (resp. hyperbranch) could be identified with a node (resp. branch) in $G$, and $^{*}\!G$ would be identified with $G$.
[^4]: For examples of when a 0-walk is needed because a 0-path won’t do, see Figures 3.1 and 3.2 of [@tgen] and Figures 4.1, 5.1, 5.2, and 5.3 of [@gn].
[^5]: We write “wdistance” to distinguish this walk-based idea from a distance based on paths.
[^6]: Note that a 0-section in a locally 1-finite 1-graph may have infinitely many incident 1-nodes that are not boundary 1-nodes. Also, this definition of locally 1-finiteness does not prohibit 0-nodes of infinite degree.
---
abstract: 'We report the discovery of a light echo (LE) from the Type Ia supernova (SN) 2006X in the nearby galaxy M100. The presence of the LE is supported by analysis of both the [*Hubble Space Telescope (HST)*]{} Advanced Camera for Surveys (ACS) images and the Keck optical spectrum that we obtained at $\sim$300 d after maximum brightness. In the image procedure, both the radial-profile analysis and the point-spread function (PSF) subtraction method resolve significant excess emission at 2–5 ACS pixels ($\sim0.05''''-0.13''''$) from the center. In particular, the PSF-subtracted ACS images distinctly appear to have an extended, ring-like echo. Due to limitations of the image resolution, we cannot confirm any structure or flux within 2 ACS pixels from the SN. The late-time spectrum of SN 2006X can be reasonably fit with two components: a nebular spectrum of a normal SN Ia and a synthetic LE spectrum. Both image and spectral analysis show a rather blue color for the emission of the LE, suggestive of a small average grain size for the scattering dust. Using the Cepheid distance to M100 of 15.2 Mpc, we find that the dust illuminated by the resolved LE is $\sim$27–170 pc from the SN. The echo inferred from the nebular spectrum appears to be more luminous than that resolved in the images (at the $\sim$2$\sigma$ level), perhaps suggesting the presence of an inner echo at $<$2 ACS pixels ($\sim0.05''''$). It is not clear, however, whether this possible local echo was produced by a distinct dust component (i.e., the local circumstellar dust) or by a continuous, larger distribution of dust as with the outer component. Nevertheless, our detection of a significant echo in SN 2006X confirms that this supernova was produced in a dusty environment having small dust particles.'
author:
- |
Xiaofeng Wang, Weidong Li, Alexei V. Filippenko, Ryan J. Foley,\
Nathan Smith, and Lifan Wang
title: |
The Detection of a Light Echo from the Type Ia\
Supernova 2006X in M100
---
Introduction
============
Light echoes (LEs) are produced when light emitted by the explosive outburst of some objects is scattered toward the observer by the foreground or surrounding dust, with delayed arrival time due to the longer light path. This phenomenon is rare, having been observed only around a few variable stars in the Galaxy, and around several extragalactic supernovae (SNe). The best-studied events are SN 1987A (Schaefer 1987; Gouiffes et al. 1988; Chevalier & Emmering 1988; Crotts 1988; Crotts, Kunkel, & McCarthy 1989; Bond et al. 1990; Xu et al. 1995) and the peculiar star V838 Mon (Bond et al. 2003). Other SNe with LEs include the Type II SNe 1993J (Liu et al. 2002; Sugerman 2003), 2002hh (Meikle et al. 2006; Welch et al. 2007), and 2003gd (Sugerman 2005; Van Dyk et al. 2006), as well as the Type Ia SNe 1991T (Schmidt et al. 1994; Sparks et al. 1999), 1998bu (Cappellaro et al. 2001; Garnavich et al. 2001), and possibly 1995E (Quinn et al. 2006). Besides their spectacular appearance, LEs offer a unique means to diagnose the composition, distribution, and particle size of the scattering dust. In particular, LEs from the circumstellar environments might provide constraints on SN progenitors.
The Type Ia SN 2006X was discovered on 2006 February 7.10 (UT dates are used throughout this paper) by S. Suzuki and M. Migliardi (IAUC 8667, CBET 393) in the nearby spiral galaxy NGC 4321 (M100). Extensive photometric and spectroscopic coverage is presented by Wang et al. (2007, hereafter W07). They suggest that SN 2006X is highly reddened \[$E(B - V)_{\rm host} = 1.42 \pm 0.04$ mag\] by abnormal dust with $\Re_{V} = 1.48 \pm 0.06$. Its early-epoch spectra are characterized by strong, high-velocity features of both intermediate-mass and iron-group elements. In addition to the anomalous extinction and the very rapid expansion, SN 2006X exhibits a continuum bluer than that of normal SNe Ia. Moreover, its late-time decline rate in the $B$ band is slow, $\beta = 0.92 \pm 0.05$ mag (100 d)$^{-1}$, significantly below the 1.4 mag (100 d)$^{-1}$ rate observed in normal SNe Ia and comparable to the decay rate of 1.0 mag (100 d)$^{-1}$ expected from $^{56}$Co $\rightarrow$ $^{56}$Fe decay. This may suggest additional energy sources besides radioactive decay, such as the interaction of the supernova ejecta with circumstellar material (CSM) and/or a LE.
Attempts to detect the CSM in SNe Ia in different wavebands were unsuccessful before SN 2006X, and only some upper limits could be placed (see Patat et al. 2007a, and references therein) except for the peculiar SNe Ia/IIn 2002ic (Hamuy et al. 2003; Deng et al. 2004; Wang et al. 2004; Wood-Vasey et al. 2004) and 2005gj (Aldering et al. 2006; Prieto et al. 2007). Recent progress in this respect was made from high-resolution spectroscopy by Patat et al. (2007b, hereafter P07), who find time-variable Na I D absorption lines in spectra of SN 2006X. This has been interpreted as the detection of CSM within a few 10$^{16}$ cm ($\sim 0.01$ pc) from the explosion site of the supernova. With the inferred velocity, density, and location of the CSM, P07 proposed that the companion star of the progenitor of SN 2006X is most likely to be a red giant (but see Hachisu et al. 2007, who present a main-sequence star model with mass stripping). Note, however, that SN 2006X exhibited somewhat abnormal features in spectra and photometry; it may not represent a typical SN Ia. Multi-epoch, high-resolution spectral observations of SN 2007af, a normal SN Ia, do not reveal any significant signature of CSM absorption (Simon et al. 2007).
In this paper we report the discovery of an optical LE around SN 2006X, with evidence from [*Hubble Space Telescope (HST)*]{} Advanced Camera for Surveys (ACS) images and Keck optical spectra. The paper is organized as follows. In §2 we briefly describe the late-epoch data available for SN 2006X, while the data analysis and the interpretation are presented in §3. We discuss the properties of the light echo and the underlying dust in §4. Our conclusions are given in §5.
Observations
============
[*HST*]{} Archival Images
-------------------------
Several epochs of [*HST*]{} data covering the site of SN 2006X are publicly available in the MAST archive. The pre-discovery images were taken on 1993 December 31 (Proposal ID 5195: PI, Sparks) by the Wide Field Planetary Camera 2 (WFPC2) in the F439W and F555W filters, with the same integration time of 1800 s. The images taken prior to the SN explosion allow us to examine the immediate environment of the progenitor, whereas the post-explosion observations enable us to search for a possible LE. The most recent, post-discovery images of SN 2006X were obtained with the High Resolution Channel (HRC, with a mean spatial resolution of $0.026''$ pixel$^{-1}$) of [*HST*]{}/ACS on 2006 May 21 (90 d after $B$ maximum) and on 2006 December 25 (308 d after $B$ maximum), respectively (GO–10991; PI, Arlin Crotts). At $t = 90$ d, the SN was imaged in F435W (1480 s), F555W (1080 s), and F775W (1080 s), while at $t = 308$ d, the SN was again observed in the same three bandpasses, with the exposure times of 920 s, 520 s, and 520 s, respectively.
The standard [*HST*]{} pipeline was employed to pre-process the images and remove cosmic-ray hits. In Figure 1 we show the pre- and post-explosion [*HST*]{} images of SN 2006X in the F555W filter. This pre-discovery image does not reveal any significant source brighter than 24.0 mag in F555W, excluding the possibility of a significant star cluster at the location of SN 2006X. Neither of the two post-discovery images exhibits any resolved LE arcs or rings around SN 2006X. The magnitudes of SN 2006X were measured from the [*HST*]{} ACS images with both the Dolphot method (Dolphin 2000) and the Sirianni procedure (Sirianni et al. 2005), and the mean photometry is given in Table 1.
Keck Optical Spectrum
---------------------
Observations of nebular-phase spectra provide an alternative way to explore the possibility of an LE around SNe, as the scattered, early-phase light will leave a noticeable imprint on the nebular spectra (e.g., Schmidt et al. 1994; Cappellaro et al. 2001) when the SN becomes dimmer. Two very late-time spectra of SN 2006X taken at t $\approx$ 277 and 307 days after $B$ maximum were published by W07 (see their Fig. 19), which were obtained by the Keck telescopes at the W. M. Keck Observatory: one with the Low Resolution Imaging Spectrometer (LRIS; Oke et al. 1995) mounted on the 10 m Keck I telescope, and the other with the Deep Extragalactic Imaging Multi-Object Spectrograph (DEIMOS) mounted on the 10 m Keck II telescope. In the following analysis we focus on the LRIS spectrum taken at $t \approx 277$ d because of its wider wavelength coverage.
Data Analysis
=============
Late-Time Light Curves
----------------------
Figure 2 shows the absolute $B$, $V$, and $I$ light curves of SN 2006X and SN Ia 1996X (Salvo et al. 2001). The former were obtained using the Cepheid distance $\mu = 30.91 \pm 0.14$ mag (Freedman et al. 2001), and corrected for extinction in the Milky Way ($A_{V}({\rm MW})$ = 0.08 mag; Schlegel et al. 1998) and in the host galaxy ($A_{V}({\rm host})$ = 2.10 mag; W07). The distance modulus and host-galaxy extinction for SN 1996X are derived by Wang et al. (2006) and Jha et al. (2007) using independent methods, and we adopted the mean values $\mu = 32.11 \pm 0.15$ mag ($H_{0} = 72$ km s$^{-1}$ Mpc$^{-1}$ and $A_{V}({\rm host}) = 0.08$ mag are assumed throughout this paper). The absolute magnitudes of these two SNe are similar near maximum, except in the $I$ band where SN 2006X is $\sim$0.4 mag fainter than SN 1996X.
Noticeable differences between the two SNe emerge in $B$ one month after maximum light, when SN 2006X begins to decline slowly at a rate of $0.92 \pm 0.05$ mag (100 d)$^{-1}$. The discrepancy reaches about 0.9 mag in $B$ at $t = 308$ d, while it is $\sim$0.7 mag in $V$ and $\sim$0.2 mag in $I$. This suggests that the emission of SN 2006X is 130% $\pm$ 60% higher in $B$, 90% $\pm$ 40% higher in $V$, and 20% $\pm$ 20% higher in $I$ with respect to SN 1996X. The large error bars primarily reflect the uncertainty in the distances.
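The quoted percentage excesses follow directly from the magnitude differences via the standard relation $f_2/f_1 = 10^{0.4\,\Delta m}$. As a quick illustrative check in plain Python (the helper names are ours; the input values are those quoted in the text):

```python
def absolute_mag(m_apparent, mu, extinction):
    """Absolute magnitude from apparent magnitude, distance modulus, and extinction."""
    return m_apparent - mu - extinction

def flux_excess(delta_mag):
    """Fractional flux excess implied by a magnitude difference delta_mag."""
    return 10.0 ** (0.4 * delta_mag) - 1.0

# The ~0.9, ~0.7, and ~0.2 mag discrepancies at t = 308 d correspond to
# flux excesses of roughly 130%, 90%, and 20%, as quoted in the text:
excesses = [flux_excess(dm) for dm in (0.9, 0.7, 0.2)]
```

The distance-modulus and extinction values for SN 2006X ($\mu = 30.91$ mag, $A_V \approx 2.18$ mag in total) would enter through `absolute_mag` when building the absolute light curves of Figure 2.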
The apparently overluminous behavior seen in SN 2006X in the tail phase is possibly linked to light scattering by the surrounding dust, though the interaction of the SN ejecta with the CSM produced by the progenitor system and/or the excess trapping of photons and positrons (created in $^{56}$Co $\longrightarrow$ $^{56}$Fe decays within the ejecta) cannot be ruled out. The resultant LE, if present, may not be directly resolved even in the [*HST*]{}/ACS images at a distance of $\sim$15 Mpc due to the limited angular resolution. To examine this conjecture, in §3.2 we compare the PSFs of the SN and local stars, and in §3.3 we apply the image-subtraction technique to analyze the SN images.
Radial Brightness Profile
-------------------------
The radial brightness profiles of the images of the SN and local stars in the same field (see Fig. 1) are compared in Figure 3. These were obtained by extracting the flux using different apertures, ranging from 0.1 to 10 pixels with a resolution of 0.1 pixel. The fluxes of the four local stars labeled in Figure 1 are scaled so that the integrated flux within the 10-pixel aperture is the same. Based on the distribution of the radial profiles of these four stars, we derived a mean radial profile with the same integrated flux through Monte Carlo simulations. The radial profile of the four stars was thus normalized by the peak flux of the simulated radial profile.
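A coarser annular version of this profile extraction can be sketched with NumPy (the function, bin width, and synthetic Gaussian "star" below are illustrative assumptions, not the actual IRAF/ACS measurement, which used 0.1-pixel aperture steps):

```python
import numpy as np

def radial_profile(image, center, r_max=10.0, dr=1.0):
    """Azimuthally averaged radial brightness profile about `center` = (x, y)."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1])
    edges = np.arange(0.0, r_max + dr, dr)
    profile = []
    for r_in, r_out in zip(edges[:-1], edges[1:]):
        mask = (r >= r_in) & (r < r_out)
        profile.append(image[mask].mean() if mask.any() else 0.0)
    return edges[:-1] + dr / 2, np.array(profile)

# Synthetic Gaussian point source, peak-normalized as in the text
yy, xx = np.mgrid[0:21, 0:21]
star = np.exp(-((xx - 10) ** 2 + (yy - 10) ** 2) / (2 * 1.5 ** 2))
radii, prof = radial_profile(star, (10, 10))
```

Comparing such profiles of the SN against the scaled mean profile of field stars is what isolates the excess at 2–5 pixels described below.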
One can see that the star profiles are uniform at large radius but show noticeable scatter within 2 pixels from the center. For comparison, the central flux of SN 2006X is scaled to be 1.0, with the assumption that the central region of the SN image was not affected by any LE. At $t = 90$ d the SN profile does not show a significant difference from that of the local stars. At $t = 308$ d the SN profiles appear distinctly broader at radii of 2–4 pixels, especially in the F435W and F555W images. Note that the SN data are quite steep at around 1 pixel, probably due to noise.
The inset plot of Figure 3 shows the residual of the radial profile between SN 2006X and the local stars. This was obtained by subtracting the simulated radial profile of the stars from that of the SN. Also plotted is the scatter of the simulated profile of the local stars. At $t = 308$ d, the SN shows significant extra flux in the F435W, F555W, and F775W filters at radii of $\sim$2 to 5 pixels, suggesting the presence of a LE. Such a residual flux was not present at $t = 90$ d.
One can see some structure (peaks and valleys) at $<$2 pixels in the inset residual plot of Figure 3. These alternating negative/positive residuals clearly show that the substructure within the inner 2 pixels cannot be trusted, and could result from the misalignment of the peak surface brightness of the images. Of course, it is possible that part of the LE is so close to the SN, but the above analysis cannot definitively reveal it.
Integrating the overall residual emission in the range 2–10 pixels, we find that the observed LE brightness is $\sim$22.8 mag in F435W, $\sim$22.0 mag in F555W, and $\sim$22.1 mag in F775W. Its contribution to the total flux of the SN + LE is $\sim$29% in F435W, $\sim$27% in F555W, and $\sim$11% in F775W. In view of potential additional LE emission at radii $<$2 pixels, these values are probably lower limits to the true brightness of the LE.
Although Star 1 and SN 2006X show some diffraction spikes in Figure 1, the spikes have the same shape and orientation for all stars in the field. Thus, (a) they should affect the radial surface brightness profiles of all stars in the same way, and not affect the excess light from an echo, and (b) they should be adequately removed by the image subtraction procedure (§3.3).
The difference between the radial profile of SN 2006X and other stars can also be demonstrated by their measured full width at half-maximum intensity (FWHM). Table 2 lists the FWHM of SN 2006X and the average value of several local stars, obtained by running the IRAF[^1] “imexamine” task in three modes: $r$ (radial profile Gaussian fit), $j$ (line 1D Gaussian fit), and $k$ (column 1D Gaussian fit). At $t = 90$ d, the PSF of SN 2006X is comparable to that of the average values of the local stars, while at $t = 308$ d, the SN exhibits a significantly broader profile. The FWHM increases by about 0.3 pixel in the $r$-profile and by $\sim$1.0 pixel in the $j$-profile and $k$-profile with respect to the local stars. The reasonable interpretation is that the PSF is broadened by scattered radiation (that is, the LE).
Light Echo Images
-----------------
The radial-profile study suggests the presence of a LE in SN 2006X. In this section, we apply image subtraction to provide further evidence for the LE, and study its two-dimensional (2D) structure.
We extract a small section ($20 \times 20$ pixels) centered on SN 2006X and Star 1 (the brightest star in the field), and align their peak pixels to high precision (0.01 pixel). We then scale Star 1 so that its peak has the same counts as that of SN 2006X, and subtract it from the SN 2006X image. The underlying assumption is the same as in our radial-profile study: the central peak of SN 2006X is not affected by any LE.
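The align-scale-subtract step can be sketched as follows (a simplified stand-in for the actual sub-pixel alignment; `psf_subtract` and the offset argument are illustrative, with the offset assumed to be measured elsewhere, e.g. by centroiding to 0.01-pixel precision):

```python
import numpy as np
from scipy.ndimage import shift

def psf_subtract(sn_img, star_img, star_offset):
    """Shift a field-star image onto the SN peak, scale it to matching
    peak counts, and subtract.  `star_offset` is the (dy, dx) shift that
    moves the star's peak onto the SN's peak."""
    aligned = shift(star_img, star_offset, order=3)  # cubic-spline interpolation
    scale = sn_img.max() / aligned.max()             # match the central peaks
    return sn_img - scale * aligned
```

Any residual flux left by this subtraction is then interpreted as echo emission, under the same assumption as in the radial-profile study (no echo contribution at the central peak).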
Figure 4 shows the PSF-subtracted images of SN 2006X. The left panel shows the subtracted images at the original [*HST*]{}/ACS resolution. To bring out more details, the middle panel shows subsampled images by using a cubic spline function to interpolate one pixel into $8 \times 8$ pixels. The right panel has three circles (with radii of 2, 4, and 6 pixels, respectively) overplotted. The residual images all show an extended, bright, ring-like feature around the supernova, consistent with the general expected appearance of a LE. These features emerge primarily at radii of 2–4 pixels (or $0.05''$–$0.11''$) in the images, consistent with those derived above from the radial profiles. The central structures seen within a circle of radius 2 pixels (e.g., the asymmetric feature in F435W, the double features in F775W, and the arc in F555W) are not to be trusted; due to the limited spatial resolution, the images used for the image subtraction may not be perfectly aligned (in terms of the geometry and/or the flux of the central regions), and some artifacts could be introduced at the center of the subtracted images. Similarly, the apparent clumps within the echo ring are not reliable, generally being only a few pixels in size.
The integrated flux, measured from the PSF-subtracted images at 2–10 pixels from the SN site, contributes to the total flux of SN + LE by $\sim$33% in F435W, $\sim$29% in F555W, and $\sim$9% in F775W. This is fully consistent with the above estimate from the radial-profile analysis, taking into account the uncertainty in the PSF subtraction. The brightness of the LE component is estimated to be $\sim$22.7 mag in F435W, $\sim$21.9 mag in F555W, and $\sim$22.3 mag in F775W.
As with the radial-profile analysis, the PSF-subtraction method might remove some fraction of flux from the LE itself; it had been assumed that none of the flux in the central 2-pixel radius is produced by the LE, but this might be incorrect. Thus, our estimate of the echo flux from the image analysis may be only a lower limit of the true LE emission. In view of the image analysis, we cannot verify or rule out that the LE may be distributed continuously from the SN site to an angular radius of $\sim$6 pixels ($0.15''$).
Light Echo Spectrum
-------------------
A consistency check for the existence of a LE around a source can also be obtained by comparing the observed supernova spectrum and the synthetic spectrum using an echo model. The observed spectrum should be a combination of the intrinsic late-time SN spectrum and the early-time scattered SN spectrum. Inspection of the late-epoch Keck spectrum (see Fig. 19 of W07) clearly reveals that SN 2006X behaves unlike a normal SN Ia, showing a rather blue continuum at short wavelengths and a broad absorption feature near 6100 Å (probably due to Si II $\lambda$6355).
To construct the composite spectrum containing the echo component, we use the nebular-phase spectrum of SN 1996X to approximate that of SN 2006X. SN 1996X is a normal SN Ia in the elliptical galaxy NGC 5061 (Salvo et al. 2001), with $\Delta m_{15} = 1.30 \pm 0.05$ mag, similar to that of SN 2006X (W07). Late-time optical spectra with wide wavelength coverage and high signal-to-noise ratio (S/N) are available on day 298 for SN 1996X (Salvo et al. 2001; http://bruford.nhn.ou.edu/$^{\thicksim}$suspect/) and on day 277 for SN 2006X (W07). Comparing the spectrum of SN 2006X obtained at $t = 277$ d with that taken at $t = 307$ d, we found that the overall spectral slope changed little during this period. We thus extrapolated the original nebular spectra to $t = 308$ d, a phase when both SNe have relatively good multicolor photometry. To completely match the spectrum of SN 2006X, the spectral flux of SN 1996X was multiplied by a factor of 3.0 to account for the difference in distances. Extinction corrections have also been applied to the nebular spectra of these two SNe (W07; Wang et al. 2006).
We considered the cases of both SN 2006X and SN 1996X as the central pulse source when deriving the echo spectrum. The observed spectra of SN 2006X are available at eleven different epochs from about $-$1 d to 75 d after $B$ maximum, while 14 spectra of SN 1996X are available from about $-$4 d to 87 d after $B$ maximum (Salvo et al. 2001). The above spectra were properly dereddened[^2] and interpolated to achieve uniform phase coverage. Regardless of the original flux calibration, all of the input spectra have been recalibrated according to their light curves at comparable phases (W07; Salvo et al. 2001) and corrected for the effects of scattering using a simple power-law function, $S(\lambda) \propto \lambda^{-\alpha}$ (e.g., Suntzeff et al. 1988; Cappellaro et al. 2001). These corrected spectra were then coadded and scaled, together with the nebular spectrum of SN 1996X, to match the nebular spectrum of SN 2006X.
The best-fit $\alpha$ values obtained for the combinations of SN 2006X (near $B$ maximum) + SN 1996X (nebular) and SN 1996X (near $B$ maximum) + SN 1996X (nebular) are $3.0 \pm 0.3$ and $3.3 \pm 0.5$, respectively. One can see that the combination of SN 2006X + SN 1996X gives a somewhat better fit to the observed spectrum of SN 2006X. This is not surprising; the spectrum of SN 2006X differs from that of a normal SN Ia at early times, showing extremely broad and blueshifted absorption minima (W07). The large value of $\alpha$ may indicate a small grain size for the scattering dust. The composite nebular spectrum and the underlying echo spectrum are compared with the observed spectrum of SN 2006X in Figure 5 (upper and middle panels). Given the simple assumption of the scattering function, incomplete spectral coverage, and intrinsic spectral difference between SN 1996X and SN 2006X, the agreement between the observation and the model is satisfactory, with major features in the spectrum well matched. This provides independent, strong evidence for the LE scenario. However, the broad emission peak seen at $\lambda \approx
4300$–4500 Å cannot be reasonably fit by the echo model (see Fig. 5); this mismatch is probably produced by intrinsic features in the nebular spectrum of SN 2006X.
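The fit for the scattering index $\alpha$ amounts to a least-squares decomposition of the observed nebular spectrum into a nebular component plus a power-law-weighted early-time component. A schematic version on synthetic spectra (the function `fit_alpha` and the Gaussian component shapes are illustrative assumptions, not the actual data or fitting code):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_alpha(wave, observed, early_flux, nebular_flux, p0=(3.0, 0.5, 1.0)):
    """Fit observed = a_neb * nebular + a_echo * (wave/<wave>)^(-alpha) * early."""
    w0 = wave.mean()

    def model(w, alpha, a_echo, a_neb):
        return a_neb * nebular_flux + a_echo * (w / w0) ** (-alpha) * early_flux

    (alpha, a_echo, a_neb), _ = curve_fit(model, wave, observed, p0=p0)
    return alpha, a_echo, a_neb

# Synthetic check: build a composite with alpha = 3 and recover it.
wave = np.linspace(4000.0, 9000.0, 200)
early = np.exp(-((wave - 5000.0) / 1500.0) ** 2)    # early-time spectrum shape
nebular = np.exp(-((wave - 6500.0) / 2000.0) ** 2)  # nebular spectrum shape
observed = 0.8 * nebular + 0.3 * (wave / wave.mean()) ** (-3.0) * early
alpha, a_echo, a_neb = fit_alpha(wave, observed, early, nebular)
```

In the noise-free synthetic case the recovered $\alpha$ equals the input value; with real spectra the uncertainty quoted in the text ($\pm 0.3$–0.5) comes from the fit covariance and the choice of input spectra.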
Light-Echo Luminosity and Color
-------------------------------
We can constrain the properties of the LE and the underlying dust through the luminosity and colors of the LE. The LE luminosity of SN 2006X has been estimated by analyzing the [*HST*]{} SN images; it can also be obtained by integrating the echo spectrum shown in Figure 5. The magnitudes of the echo given by different methods are listed in Table 3. For the image-based measurement, the error accounts only for the scatter of the stellar PSF. On the other hand, for the spectrum-based measurement, the error primarily consists of the uncertainties in extinction correction (i.e., $\sim$0.2 mag for SN 2006X and $\sim$0.1 mag for SN 1996X in the $B$ band) and distance modulus (i.e., $\sim$0.14 mag for SN 2006X and $\sim$0.15 mag for SN 1996X).
We note that the echo inferred from the spectral fitting seems somewhat brighter than that revealed by the image analysis: $\delta
m_{F435W} = -0.6 \pm 0.3$ mag. This difference is also demonstrated in the bottom panel of Figure 5, where the flux ratio of the inferred echo spectrum and the observed spectrum of SN 2006X is plotted as a function of wavelength. Overplotted are the ratios yielded for the photometry of the echo image (circles) and the spectrophotometry of the echo spectrum (squares) in F435W and F555W, respectively. Such a discrepancy, at a confidence level of only $\sim2\sigma$, may suggest that there is some echo emission within a radius of 2 pixels (21% $\pm$ 12% of the total flux of SN + LE in F435W and 17% $\pm$ 11% in F555W) that was not resolved by the image analysis. Despite this possibility, we must point out that the echo luminosity derived from the echo spectrum may have an error that is actually larger than our estimate, since we did not consider possible uncertainties associated with the spectrum itself and the simple scattering model adopted in our analysis (see §3.4).
Assuming that all of the observed differences between the light curves and spectra of SN 2006X and SN 1996X at $t = 308$ d are entirely due to the LE around SN 2006X, we can place an upper limit on the LE brightness as $21.9 \pm 0.3$ mag in F435W and $21.3 \pm
0.3$ mag in F555W. The magnitudes and the resulting color are not inconsistent with those presented in Table 3, especially in the case of the spectral fit which likely takes into account most of the echo emission. This leaves little room for other possible mechanisms for the extra emission, suggesting that the echo is the primary cause of the abnormal overluminosity of SN 2006X at $t = 308$ d.
We find, from analysis of both the [*HST*]{} images and the nebular spectrum (see Table 3), that the LE has an average color (F435W–F555W)$_{\rm echo}$ = $0.8 \pm 0.3$ mag (this roughly equals $(B - V)_{\rm echo}$ = $0.8 \pm 0.3$ mag), which is much bluer than the SN color at maximum brightness. The LE is clearly brighter in bluer passbands than at redder wavelengths (see Fig. 5). Comparing the colors of the echo and the underlying SN light helps us interpret the dust, as the color shift depends on the scattering coefficient and hence on the dimensions of the dust grains (Sugerman 2003a).
Integrating over the entire SN light curve (W07) from about $-$11 d to 116 d after $B$ maximum yields $(B - V)_{\rm SN} = 1.70$ mag for the overall emission of SN 2006X. The observed change in color, $\Delta(B - V) = -0.9 \pm 0.3$ mag, is much larger than the color shift derived for Galactic dust but is comparable to the change derived for Rayleighan dust[^3], $\Delta(B - V)_{\rm max} = -0.96$ mag (Sugerman et al. 2003b). This is consistent with constraints from the direct spectral fit, which suggests that the dust has a scattering efficiency proportional to $\lambda^{-3.0}$. We thus propose that the dust surrounding SN 2006X is different from that of the Galaxy and may have small-size grains, perhaps with diameter $\lesssim 0.01~\mu$m, reflecting the shorter wavelengths of light more effectively. Smaller dust particles are also consistent with the low value of $\Re_{V} \approx 1.5$ derived by W07.
Dust Distance
-------------
Of interest is the distribution of the dust producing the echo; for example, it may be a plane-parallel dust slab or a spherical dust shell. Couderc (1939) was the first to correctly interpret the LE ring observed around Nova Persei 1901. Detailed descriptions of LE geometries can also be found in more recent papers (e.g., Sugerman 2003a; Tylenda et al. 2004; Patat 2005). In general, the analytical treatment shows that both a dust slab and a dust shell could produce an echo that is a circular ring containing the source. Assuming that the SN light is an instantaneous pulse, then the geometry of an LE is straightforward: the distance of the illuminated dust material lying on the paraboloid can be approximated as $$R \approx \frac{D^{2}\theta^{2} \mp (ct)^{2}}{2ct},$$ where $D$ is the distance from the SN to the observer, $\theta$ is the angular radius of the echo, $c$ is the speed of light, and $t$ is the time since the outburst. The equation with a minus sign corresponds to the single dust slab, while the plus sign represents the case for a dust shell.
As suggested by the analysis of the radial profile and the PSF-subtracted image of the SN, there is a confirmed LE ring $\sim0.08''$ away from the SN, with a possible width of $\sim0.03''$. For this echo of SN 2006X, $ct$ = 0.27 pc, which leads to $R$ $\approx$ 27–120 pc, consistent with the scale of an ISM dust cloud. As the dust cloud in front of SN 2006X seems to be very extended, we do not give the thickness of the dust along the line of sight. Considering the possible echo emission within 2.0 pixels ($\sim0.05''$) inferred from the echo luminosity (see discussion in §4.1) and that extending up to 5 pixels ($\sim0.13''$; see Fig. 3), the actual distribution of the dust may be from $<$27 pc to $\sim$170 pc from the SN.
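Equation (1) can be evaluated directly. The sketch below (plain Python; rounded unit-conversion constants, and our own function name) approximately reproduces the quoted distance range; small differences from the quoted 27–120 pc come from the exact epoch used for $ct$:

```python
def dust_distance_pc(D_mpc, theta_arcsec, t_days, shell=False):
    """Dust distance R of Eq. (1): minus sign for a foreground dust slab,
    plus sign for a spherical dust shell around the SN."""
    D_pc = D_mpc * 1.0e6
    rho = D_pc * theta_arcsec / 206265.0   # linear ring radius in pc
    ct = (t_days / 365.25) / 3.2616        # light-travel distance in pc
    sign = 1.0 if shell else -1.0
    return (rho ** 2 + sign * ct ** 2) / (2.0 * ct)

# Resolved ring at ~0.05"-0.11" for D = 15.2 Mpc, t = 308 d:
inner = dust_distance_pc(15.2, 0.05, 308.0)   # ~26 pc
outer = dust_distance_pc(15.2, 0.11, 308.0)   # ~127 pc
```

Because $ct \ll D\theta$ here, the slab and shell solutions differ only slightly; at much later epochs the two cases diverge, which is the basis of the monitoring test described in §4.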
In principle, one can also estimate the distance of the dust itself from the SN through a fit to the observed echo luminosity using the light-echo model (e.g., Cappellaro et al. 2001), as the actual echo flux is related to the light emitted by the SN, the physical nature of the dust, and the dust geometry. However, current analytical treatments for the LE model must assume some idealized configuration, which may not apply to the dust surrounding SN 2006X that is found to probably have smaller dust grains with $\Re_{V}
\approx 1.5$ and a relatively extended distribution. Moreover, multiple scattering processes rather than a single scattering should be considered in the echo model due to the large optical depth measured from the dust: $\tau^{V}_{d} \approx 2.0$ for SN 2006X. Detailed modelling of the LE emission seen in SN 2006X is beyond the scope of this paper.
Discussion
==========
Analysis of both the late-time [*HST*]{} images and the late-time Keck optical spectrum favors the presence of a LE in SN 2006X, the fourth non-historical SN Ia with a detection of echo emission. Comparison of the SN 2006X echo with the other three known events, SNe 1991T, 1995E, and 1998bu, shows that the Type Ia echoes may have a wide range of dust distances from $\lesssim$ 10 pc to $\sim$ 210 pc. The echo detected in SN 1991T is consistent with being a dust cloud of radius 50 pc (Sparks et al. 1999), while the echo speculated from SN 1995E probably corresponds to a dust sheet at a distance of $207 \pm 35$ pc (Quinn et al. 2006). Garnavich et al. (2001) proposed from the [*HST*]{} WFPC2 imaging that SN 1998bu may have two echoes, caused by dust at $120 \pm 15$ pc and $<10$ pc away from the SN; the outer echo is consistent with an ISM dust sheet, while the inner component is likely from the CSM dust. On the other hand, the resolved echo image of SN 2006X appears quite extended in the direction perpendicular to the line of sight. This yields a dust distance spanning from $\sim$ 27 pc to $\sim$ 170 pc away from the site of the SN, indicating that the dust causing the LE may not be a thin dust sheet but could be a cloud or shell distribution of the dust around the progenitor or a more complicated dust system.
The echo from SN 2006X is found to be brighter than that of the other three Type Ia echo events. Assuming the echo magnitude listed in Table 3 and the SN peak magnitude derived in W07, one finds that the echo is $\sim$9.6 mag fainter than the extinction-corrected peak magnitude of SN 2006X in $V$. Quinn et al. (2006) proposed that all of the other three Type Ia echoes (SNe 1991T, 1995E, 1998bu) show a striking similarity in their echo brightness relative to the extinction-corrected peak SN brightness, $\Delta V \approx 10.7$ mag. According to the analytical expression for dust scattering (e.g., Patat 2005), the $\sim$1 mag excess in the echo brightness of SN 2006X perhaps suggests a dust distribution closer to the SN, given the similar optical depths of SNe 2006X and 1995E. The SN 2006X echo emission also shows a prominent wavelength dependence, with more light from the shorter wavelengths, suggestive of smaller-size dust around SN 2006X. This is also demonstrated by the difference of the scattering coefficient $\alpha$ required to fit the observed nebular spectrum, which is $\sim$3.0 for SN 2006X, $\sim$2.0 for 1991T (Schmidt et al. 1994), and $\sim$1.0 for SN 1998bu (Cappellaro et al. 2001).
In fitting the nebular spectrum, the echo brightness is found to be $\sim$ 60% brighter than that from the echo image at the $\sim$2$\sigma$ level, likely suggesting the presence of a local echo that was not resolved at the regions close to the SN site. Regarding the location of the echo emission in SN 2006X, one may naturally tie the distribution of the dust underlying the echo to a combination of local CSM dust and distant ISM dust, given the quite extended dust distribution and the small dust grains that were not typical for the ISM dust. Detection of the CSM dust is of particular importance for understanding SN Ia progenitor models. P07 recently reported the detection of CSM in SN 2006X from variable Na I D lines, and they estimate that the absorbing dust is a few $10^{16}$ cm from the SN. It is hence expected that an echo very close to the SN ($<$0.01 pc away) should be produced, although the SN UV radiation field could destroy or change the distribution of the surrounding dust particles out to a radius of a few $10^{17}$ cm (Dwek 1983). However, it is not possible for us to detect the emission of such a close CSM echo at $t = 308$ d, since the maximum delayed travel time of the light for this echo is $<$0.1 yr and the SN radiation decreases with time.
As noted by W07, the spectrum of SN 2006X probably showed a UV excess at $t \approx 30$ d. This may be a signature of the nearby CSM claimed by P07, but the S/N of the spectrum is quite low below 4000 Å. In this case, the possible echo emission at $<$27 pc inferred from the nebular spectrum at $t = 308$ d could result from a dust shell that is farther out than that claimed by P07. This is possible if the CSM dust around SN 2006X has multiple shells, such as the dust ring (or shell) of a planetary nebula (Wang 2005) and nova-like shells.
The presence of a local echo helps explain the slow decline of the $B$-band light curve of SN 2006X at early phases. Nevertheless, the local echo (if present) is not necessarily from the CSM dust, as forward scattering from the distant dust cloud in front of the SN could also produce an echo of very small angular size. To further distinguish between the two possible cases of distant ISM plus local dust and single ISM dust, future [*HST*]{} observations of SN 2006X are necessary. More late-phase [*HST*]{}/ACS images would help constrain the evolution of the LE. Using equation (1), we can predict the evolution of the echo ring with time. If the dust formed as a result of past mass loss from the central source, the echo will be more symmetric and the expansion will slow down after the initially rapid phase; with time, its size will eventually shrink to zero. On the other hand, if the dust is of interstellar origin, the echo should expand continuously with slowly decreasing brightness as more-distant regions are illuminated. Assuming that the inner component of the echo within 2 pixels is caused by a CSM dust shell $\sim$1 pc from the SN, then the emission within 2 pixels will finally decrease to zero at $t \approx 6.5$ yr. In contrast, the local echo from distant ISM dust should remain nearly constant for a longer time.
It is worth pointing out that the recent nearby SNe Ia, SN 2007gi (CBET 1017, CBET 1021), SN 2007le (CBET 1100, CBET 1101), and probably SN 2007sr (CBET 1172,1174, ATEL 1343), may exhibit high-velocity features in their spectra similar to those of SN 2006X. If the SN 2006X-like events preferentially occur in environments with abundant ISM dust or CSM dust (Wang et al. 2008, in prep.), then we might expect to detect late-time echo emission in the above three SNe Ia. Thus, it would be interesting to obtain future high-resolution [*HST*]{}/ACS images of these SNe.
Conclusions
===========
The emergence of a LE in SN 2006X has been confirmed with PSF-subtracted [*HST*]{} ACS images which show a ring-like, but rather extended, echo 2–5 pixels ($0.05''$–$0.13''$) from the SN site at $t = 308$ d past maximum brightness. A Keck nebular spectrum of the SN taken at a similar phase provides additional evidence for the LE scenario; it can be decomposed into a nebular spectrum of a normal SN Ia and a reflection spectrum consisting of the SN light emitted at early phases.
From the resolved echo image, we derive that the intervening dust is $\sim$27–170 pc from the supernova. Based on the quite blue color of the echo, we suggest that the mean grain size of the scattering dust is substantially smaller than Galactic dust. Smaller dust particles are also consistent with the low $\Re_{V}$ value obtained from the SN photometry. Our detection of a LE in SN 2006X confirms that this SN Ia occurred in a dusty environment with atypical dust properties, as suggested by the photometry (W07).
Analysis of the nebular spectrum might also suggest a local echo at $<$27 pc (or at $<$2 pixels) that is not resolved in the PSF-subtracted image. This possible local echo is likely associated with the CSM dust produced by the progenitors, though detailed modeling of the echo spectrum and/or further high-resolution imaging are required to test for the other possibilities, such as very forward scattering by a distant cloud or CSM-ejecta interaction.
Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration (NASA). The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. This research was supported by NASA/[*HST*]{} grants AR–10952 and AR–11248 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5–26555. We also received financial assistance from NSF grant AST–0607485, the TABASGO Foundation, the National Natural Science Foundation of China (grant 10673007), and the Basic Research Funding at Tsinghua University (JCqn2005036).
Aldering, G., et al. 2006, , 650, 510 Bloom, J. S., et al. 2007, ATEL, 1343 Bond, H. E., Gilmozzi, R., Meakes, M. G., & Panagia, N. 1990, , 354, L49 Bond, H. E., et al. 2003, Nature, 422, 405 Cappellaro, E., et al. 2001, , 549, L215 Chevalier, R. A., & Emmering, R. T. 1988, , 338, 388 Couderc, P. 1939, Ann. Astrophys., 2, 271 Crotts, A. 1988, , 333, L51 Crotts, A. P. A., Kunkel, W. E., & McCarthy, P. J. 1989, , 347, L61 Deng, J. S., et al. 2004, , 605, L37 Dolphin, A. E. 2000, , 112, 1383 Drake, A. J., et al. 2007, CBET, 1172 Dwek, E. 1983, , 274, 175 , 558, 323 Hachisu, I., Kato, M., & Nomoto, K. 2007, ApJ, submitted (arXiv:0710.0319) Harutyunyan, A., Benetti, S., & Cappellaro, E. 2007, CBET 1021 1941, , 93, 70 Garnavich, P., et al. 2001, AAS, 199, 4701 Gouiffes, C., et al. 1988, , 198, L9 Filippenko, A. V., Silverman, J. M., Foley, R. J., & Modjaz, M. 2007, CBET 1101 Freedman, W. L., et al. 2001, , 553, 47 Jha, S., Riess, A. G., & Kirshner, R. P. 2007, , 659, 122 Kotak, R., et al. 2004, , 354, L13 Liu, J. F., Bregman, J. N., & Seitzer, P. 2003, , 582, 919 Mathis, J. S., Rumpl, W., & Nordsieck, K. H. 1977, , 217, 425 Meikle, W. P. S., et al. 2006, , 649, 332 Monard, L. A. G. 2007, CBET 1100 Nakano, S. 2007, CBET 1017 Meixner, M. 2004, , 128, 2339 Patat, F. 2005, , 357, 1161 Patat, F., et al. 2007a, A&A, 474, 931 Patat, F. 2007b, Science, 317, 924 (P07) Prieto, J. L., et al. 2007, ApJ, submitted (arXiv:0706.4088) Quinn, J. L., et al. 2006, , 652, 512 Salvo, M. E., et al. 2001, , 321, 254 Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, , 500, 525 Schaefer, B. E. 1987, , 323, L47 Schmidt, B. P., et al. 1994, , 434, L19 Sirianni, M., et al. 2005, , 117, 1049 Simon, J. D., et al. 2007, , 671, L25 Suzuki, S., & Migliardi, M. 2006, IAUC 8667 Sparks, W. B., Macchetto, F., Panagia, N., Boffi, F. R., Branch, D., Hazen, M. L., & Della Valle, M. 1999, , 523, 585 Suntzeff, N. B., et al. 1988, Nature, 334, 135 Sugerman, B. E. K., & Crotts A. P. S. 
2003a, , 632, L17 Sugerman, B. E. K. 2003b, , 126, 1939 Sugerman, B. E. K. 2005, , 632, L17 Tylenda, R. 2004, , 414, 223 Umbriaco, G., et al. 2007, CBET, 1174 Van Dyk, S. D., Li, W., & Filippenko, A. V. 2006, , 118, 351 Wang, L. F., et al. 2004, , 604, L53 Wang, L. F. 2005, , 635, L33 Wang, X. F., et al. 2006, , 645, 488 Wang, X. F., et al. 2007, , in press (arXiv:0708.0140)(W07) Welch, D. L., et al. 2007, , 669, 525 Wood-Vasey, W. M., et al. 2004, , 616, 339 Xu, J., et al. 1994, , 435, 274
[lllccc]{} UT Date& JD$-$2,450,000 & Phase (d) &F435W & F555W & F775W\
05/21/2006&3876.0 & +90.0 &18.71(04) &17.36(02) &16.46(02)\
12/25/2006&4094.0 & +308.0&21.48(06) &20.56(09) &19.69(02)\
[llllc]{} Object & $r$(pixel) & $j$(pixel) & $k$(pixel) & bandpass\
& &$t = 90$ d & &\
SN & 2.21 & 2.61 & 2.51 & F435W\
star & 2.17$\pm$0.02 & 2.60$\pm$0.03 & 2.60$\pm$0.05 & F435W\
SN & 2.38 & 2.70 & 2.68 & F555W\
star & 2.39$\pm$0.05 & 2.81$\pm$0.04 &2.78$\pm$0.03& F555W\
SN & 2.79 & 2.84 &2.85 &F775W\
star & 2.82$\pm$0.05 & 2.89$\pm$0.08& 2.89$\pm$0.03&F775W\
& &$t = 308$ d & &\
SN & 2.50 & 3.85 & 3.36 & F435W\
star& 2.21$\pm$0.03 & 2.81$\pm$0.03& 2.51$\pm$0.02 & F435W\
SN & 2.62 & 4.00 & 3.69 & F555W\
star& 2.33$\pm$0.03 & 2.74$\pm$0.02& 2.64$\pm$0.05&F555W\
SN & 2.95 & 3.37 & 3.33 & F775W\
star& 2.79$\pm$0.01 & 2.94$\pm$0.02 &2.83$\pm$0.02 & F775W\
[lclc]{} Method & F435W (mag) & F555W (mag) & F775W (mag)\
Residual radial profile (2–10 pixels) &22.8$\pm$0.1 & 22.0$\pm$0.3 & 22.1$\pm$0.7\
PSF-subtracted image (2–10 pixels) &22.7$\pm$0.1 & 21.9$\pm$0.3 & 22.3$\pm$0.9\
Synthetic echo spectrum (SN 1996X) &22.2$\pm$0.3 & 21.4$\pm$0.3 &\
Synthetic echo spectrum (SN 2006X) &22.1$\pm$0.3&21.5$\pm$0.3 &\
[^1]: IRAF, the Image Reduction and Analysis Facility, is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc. (AURA) under cooperative agreement with the National Science Foundation (NSF).
[^2]: Here we assume that the dust surrounding SN 2006X is a plane-parallel slab and/or shell, so that both the SN and the LE were affected by roughly the same amount of extinction.
[^3]: The Rayleighan dust consists of only small particles with grain size $<0.01~\mu$m, and hence has a scattering efficiency proportional to $\lambda^{-4}$ (Sugerman 2003a).
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We consider the problem of predicting a response from a set of covariates when the test distribution differs from the training distribution. Here, we consider robustness against distributions that emerge as intervention distributions. Causal models that regress the response variable on all of its causal parents have been suggested for the above task since they remain valid under arbitrary interventions on any subset of covariates. However, in linear models, for a set of interventions with bounded strength, alternative approaches have been shown to be minimax prediction optimal. In this work, we analyze minimax solutions in nonlinear models for both direct and indirect interventions on the covariates. We prove that the causal function is minimax optimal for a large class of interventions. We introduce the notion of distribution generalization, which is motivated by the fact that, in practice, minimax solutions need to be identified from observational data. We prove sufficient conditions for distribution generalization and present corresponding impossibility results. To illustrate the above findings, we propose a practical method, called NILE, that achieves distribution generalization in a nonlinear instrumental variable setting with linear extrapolation. We prove consistency, present empirical results and provide code.'
author:
- |
Rune Christiansen$^\flat$ Niklas Pfister$^\flat$ Martin Emil Jakobsen$^{\flat}$\
Nicola Gnecco$^{\sharp}$ Jonas Peters$^\flat$
bibliography:
- 'ref.bib'
title: ' **The Difficult Task of Distribution Generalization in Nonlinear Models** '
---
Introduction {#sec:intro}
============
Large-scale learning systems, particularly those focusing on prediction tasks, have been successfully applied in various domains of application. Since inference is usually done during training time, any difference between training and test distribution poses a challenge for prediction methods [@Quionero2009; @Pan2010; @Csurka2017; @Arjovsky2019]. Dealing with differences in training and test distribution is of great importance in fields such as many environmental sciences, where methods need to extrapolate both in space and time. Tackling this task requires restrictions on how the distributions may differ, since, clearly, generalization becomes impossible if the test distribution may be arbitrary. Given a response $Y$ and some covariates $X$, existing procedures often aim to find a function $f$ which minimizes the worst-case risk $\sup_{P \in \mathcal{N}} \mathbb{E}_P [(Y - f(X))^2]$ across distributions contained in a small neighborhood $\mathcal{N}$ of the training distribution. The neighborhood $\mathcal{N}$ should be representative of the difference between the training and test distributions, and often mathematical tractability is taken into account, too [@abadeh2015distributionally; @sinha2017certifying]. A typical approach is to define a $\rho$-ball of distributions $\cN_\rho(P_0) := \{P: D(P, P_0) \leq \rho\}$ around the training distribution $P_0$, with respect to some divergence measure $D$, such as the Kullback-Leibler divergence or the $\chi^2$ divergence [@hu2013kullback; @ben2013robust; @bertsimas2018data; @lam2019recovering; @duchi2016statistics]. While these divergence functions only consider distributions with the same support as $P_0$, the Wasserstein distance allows to define a neighborhood of distributions around $P_0$ with possibly different supports [@abadeh2015distributionally; @sinha2017certifying; @esfahani2018data; @blanchet2019data]. 
In our analysis, we do not start from a divergence measure, but we construct a neighborhood of distributional changes by using the concept of interventions [@pearl2009causality; @Peters2017book].
We will see that, depending on the considered setup, one can find models that perform well under interventions which yield distributions that are considered far away from the observational distribution in any commonly used metric. Using causal concepts for the above problem has been motivated by the following observation. A causal prediction model, which uses only the direct causes of the response $Y$ as covariates, is known to be invariant under interventions on variables other than $Y$: the conditional distribution of $Y$ given its causes does not change (this principle is known as invariance, autonomy or modularity) [@Aldrich1989; @Haavelmo1944; @pearl2009causality]. Such a model yields the minimal worst-case prediction error when considering all interventions on variables other than $Y$ [e.g., @Rojas2016 Theorem 1, Appendix]. It has therefore been suggested to use causal models in problems of domain generalization or distributional shifts [@Scholkopf2012; @Rojas2016; @HeinzeDeml17; @Magliacane2018; @Meinshausen2018; @Arjovsky2019; @pfister2019stabilizing]. One may argue, however, that causal methods are too conservative in that the interventions which induce the test distributions may not be arbitrarily strong. As a result, methods which focus on a trade-off between predictability and causality have been proposed for linear models [@rothenhausler2018anchor; @Pfister2019pnas], see also Section \[sec:existingmethods\]. In this work, we consider the problem of characterizing and finding minimax optimal models in a more general, nonlinear framework.
Contribution {#sec:contr}
------------
We assume that the true data generating process can be described by a model $M$ that belongs to a class of models ${\mathcal{M}}$ and induces an observational distribution ${\mathbb{P}}_{M}$. We then consider the risk of a prediction function ${f_{\diamond}}$ from a function class ${\mathcal{F}}$ under a modified model $M(i)$ that is obtained from $M$ by an intervention $i$, which belongs to a set of interventions ${\mathcal{I}}$. Here, interventions can either act directly on $X$ or indirectly, via an exogenous variable $A$, if the latter exists (precise definitions are provided in Section \[sec:setup\] below). Our work has four main contributions. (1) We analyze the relation between the causal function (defined formally in Section \[sec:setup\]) and the minimizer of $\sup_{i \in {\mathcal{I}}} {\mathbb{E}}_{{M}(i)} [ (Y - {f_{\diamond}}(X))^2 ]$. Our findings go beyond existing results in that the causal function is shown to be minimax optimal already for relatively small intervention classes. We further prove that, in general, the difference between a minimax solution and the causal function can be bounded and that any minimax solution different from the causal function is not robust with respect to misspecification of the intervention class. (2) In practice, we usually have to learn the minimax solution from an observational distribution, in the absence of causal background knowledge. We therefore introduce the concept of distribution generalization, which requires the existence of a prediction model $f^*$ which (approximately) solves the minimax problem $\operatorname*{argmin}_{{f_{\diamond}}\in {\mathcal{F}}} \sup_{i \in {\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}[ (Y - {f_{\diamond}}(X))^2]$ for all $\tilde{M}$ with ${\mathbb{P}}_M = {\mathbb{P}}_{\tilde{M}}$. To the best of our knowledge, the considered setup is novel. 
(3) We then investigate explicit conditions on $\mathcal{M}$, $\mathcal{I}$ and ${\mathbb{P}}_M$ that allow us to use the observational distribution of $(X, Y, A)$ to identify a function $f^*:{\mathbb{R}}^d\rightarrow{\mathbb{R}}$ that generalizes to ${\mathcal{I}}$, i.e., it (approximately) solves the above minimax problem. We prove several results. E.g., if the interventions are such that the support of $X$ does not increase with respect to the training distribution, then identifiability of the causal function — a well-studied problem in causality — is in general sufficient for generalization. We furthermore give sufficient conditions for generalization to interventions on either $A$ or $X$ that extend the support of $X$. Table \[tab:generalizability\] summarizes some of these results.
We also prove that, without these assumptions, generalization is impossible; (4) In Section \[sec:learning\], we discuss how minimax functions can be learned from finitely many data and explain how existing methodology fits into our framework. We propose a novel estimator, the NILE, that is applicable in a nonlinear instrumental variables (IV) setting and achieves distribution generalization with linear extensions. We prove consistency and provide empirical results. Our code is available as an `R`-package at <https://runesen.github.io/NILE>. Scripts generating all our figures and results can be found at the same url.
Further related work {#sec:related_work}
--------------------
That the causal function is minimax optimal under the set of all interventions on the covariates has been shown by [@Rojas2016], for example, where the additional assumption of no hidden variables is made. In Section \[sec:setup\], we extend this result in various ways. The field of distributional robustness, sometimes also referred to as out-of-distribution generalization, aims to develop procedures that are robust to changes between the training and the test distribution. Empirically, this problem is often studied using adversarial attacks, where small digital [@goodfellow2014explaining] or physical [@evtimov2017robust] perturbations of pictures can degrade the performance of a model; arguably, these procedures are not yet fully understood theoretically. Unlike the procedures mentioned in Section \[sec:contr\], which aim to minimize the worst-case risk across distributions contained in a neighborhood of the training distribution, e.g., in the Wasserstein metric [@sinha2017certifying], we assume these neighborhoods to be generated by interventions. To the best of our knowledge, the characterization of distribution generalization that we consider in Section \[sec:generalizability\] is novel.
In settings of covariate shift, one usually assumes that the training and test distributions of the covariates differ, while the conditional distribution of the response given the covariates remains invariant [@daume2006domain; @bickel2009discriminative; @David10; @muandet2013domain]. Sometimes, it is additionally assumed that the support of the training distribution covers that of the test distribution [@shimodaira2000improving]. In this work, the conditional distribution of the response given the covariates is allowed to change between interventions, due to the existence of a hidden confounder, and we consider settings where the test observations lie outside the training support. Data augmentation methods increase the diversity of the training dataset by changing the geometry and the color of the images (e.g., by rotation, cropping or changing saturation) [@zhang2017mixup; @shorten2019survey]. This allows the user to create models that generalize better to unseen environments [e.g., @volpi2018adversarial]. We view these approaches as a way to enlarge the support of the covariates, which comes with theoretical advantages; see Section \[sec:generalizability\].
Minimizing the worst-case prediction error can also be formulated in terms of minimizing the regret in a multi-armed bandit problem [@lai1985asymptotically; @auer2002finite; @bartlett2008high]. In that setting, the agent can choose the distribution which generates the data. In our setting, though, we do not assume to have control on the interventions and hence on the distribution of the sampled data.
Structure of this work
----------------------
We introduce our framework for generating a collection of intervention distributions in Section \[sec:setup\]. In Section \[sec:robustness\], we formalize the problem considered in this work, namely to find a model that predicts well under a set of intervention distributions. We prove that for a wide range of intervention classes, this is achieved by the causal function. In reality, we are not given the full causal model, but only the observational distribution. This problem is considered in Section \[sec:generalizability\], where we provide sufficient conditions under which distribution generalization is possible and prove corresponding impossibility results. The condition whether the intervened $X$ values are inside the support of the training distribution will play an important role. Section \[sec:learning\] considers the problem of learning models from a finite amount of data. In particular, we propose a method, called NILE, that learns a generalizing model in a nonlinear IV setting. We prove consistency and compare our method to state-of-the art approaches empirically. In Appendix \[sec:causal\_relations\_X\], we comment on the different model classes that are contained in our framework. Appendix \[sec:IVconditions\] summarizes existing results on identifiability in IV models and Appendix \[sec:test\_statistic\] provides details on the test statistic that we use in NILE. Appendix \[sec:additional\_experiments\] contains an additional experiment and all proofs are provided in Appendix \[app:proofs\].
Modeling intervention induced distributions {#sec:setup}
===========================================
We now specify the statistical model used throughout this paper. For a real-valued response variable $Y\in{\mathbb{R}}$ and predictors $X \in {\mathbb{R}}^d$, we consider the problem of estimating a regression function that works well not only on the training data, but also under distributions that we will model by interventions. We require a model that is able to model an observational distribution of $(X,Y)$ (training) and the distribution of $(X,Y)$ under a class of interventions on (parts of) $X$ (testing). We will do so by means of a structural causal model (SCM) [@Bollen1989; @pearl2009causality]. More precisely, denoting by $H \in {\mathbb{R}}^q$ some additional (unobserved) variables, we consider the SCM
$$\begin{aligned}
H &\coloneqq {{\varepsilon}}_H \; &{ \text{\scriptsize $q$ assignments} } \\
X &\coloneqq h_2(H, {\varepsilon}_X)\; &{\text{\scriptsize $d$ assignments} }\\
Y &\coloneqq f(X) + h_1(H, {\varepsilon}_Y)\; &{\text{\scriptsize $1$ assignment} }
\end{aligned}$$
Here, $f$, $h_1$ and $h_2$ are measurable functions, the innovation terms ${{\varepsilon}}_X$, ${{\varepsilon}}_Y$ and ${{\varepsilon}}_H$ are independent vectors with possibly dependent coordinates. Two comments are in order. The joint distribution of $(X, Y)$ is constrained only by requiring that $X$ and $h_1({{\varepsilon}}_Y, H)$ enter the equation of $Y$ additively. This constraint affects the allowed conditional distributions of $Y$ given $X$ but does not make any restriction on the marginal distributions of either $X$ or $Y$. Furthermore, we do not assume that the above SCM represents the true causal relationships between the random variables. We do not assume any causal background knowledge of the system. Instead, the SCM is used only to construct the test distributions (by considering interventions on $X$) for which we are analyzing the predictive performance of different methods – similar to how one could have considered a ball around the training distribution. If causal background knowledge exists, however, e.g., in the form of an SCM over variables $X$ and $Y$, it can be made to fit into the above framework. As such, our framework includes a large variety of models, including SCMs in which some of the $X$ are not ancestors but descendants of $Y$ (this requires adapting the set of interventions appropriately), see Appendix \[sec:causal\_relations\_X\] for details. The following remark shows such an example, and may be interesting to readers with a special interest in causality. It can be skipped at first reading.
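To make the role of the hidden variable $H$ concrete, the following simulation sketches one instance of the SCM above; the particular choices $f(x) = x^2$, $h_1(H, {\varepsilon}_Y) = H + {\varepsilon}_Y$ and $h_2(H, {\varepsilon}_X) = H + {\varepsilon}_X$ are ours, chosen only for illustration. Because $H$ enters both assignments, the regression function $x \mapsto {\mathbb{E}}[Y \mid X = x]$ differs from the causal function $f$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Illustrative instance of the SCM above (these choices are ours, not the
# paper's): H := eps_H,  X := H + eps_X,  Y := f(X) + H + eps_Y,
# with causal function f(x) = x**2 and standard-normal noise.
eps_H, eps_X, eps_Y = rng.standard_normal((3, n))
H = eps_H
X = H + eps_X
Y = X**2 + H + eps_Y

# Since H enters both X and Y, E[H | X = x] = x / 2 here, so the
# regression function is E[Y | X = x] = x**2 + x / 2.  A least-squares
# fit on the basis (1, X, X**2) recovers this confounded function, not f.
design = np.column_stack([np.ones(n), X, X**2])
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
print(coef)   # roughly [0, 0.5, 1]: linear term 0.5, while f has none
```

The fitted linear coefficient of about $0.5$ is an artifact of the confounding; regressing $Y$ on $X$ in the observational distribution does not recover the causal function.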
\[rem:model\] If a priori causal background knowledge is available, e.g., in the form of an SCM, our framework is still applicable after an appropriate transformation. The following example shows a reformulation of an SCM over the variables $X_1$, $X_2$ and $Y$.
[ $$\begin{aligned}
X_1 &\coloneqq {{\varepsilon}}_1 \\
X_2 &\coloneqq k(Y) + {\varepsilon}_2\\
Y &\coloneqq f(X_1) + {\varepsilon}_3,
\end{aligned}$$ with $({\varepsilon}_1,{\varepsilon}_2, {{\varepsilon}}_3)\sim Q$. ]{}
[ $$\xrightarrow{\text{rewrite}}$$]{}
[ $$\begin{aligned}
H &\coloneqq {{\varepsilon}}_3 \\
X &\coloneqq h_2(H,({{\varepsilon}}_1,{{\varepsilon}}_2))\\
Y &\coloneqq f(X_1) + H,
\end{aligned}$$ with $({\varepsilon}_1,{\varepsilon}_2, {{\varepsilon}}_3)\sim Q$]{}.
Here, $h_2(H,({{\varepsilon}}_1, {{\varepsilon}}_2))\coloneqq({{\varepsilon}}_1, k(f({{\varepsilon}}_1)+H) + {{\varepsilon}}_2)$. Both SCMs induce the same observational distribution over $(X_1,X_2, Y)$ and any intervention on the covariates in the SCM on the left-hand side can be rewritten as an intervention on the covariates in the SCM on the right-hand side. Details and a more general treatment are provided in Appendix \[sec:causal\_relations\_X\].
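The claimed equivalence can be checked numerically: feeding the same noise draws $({\varepsilon}_1, {\varepsilon}_2, {\varepsilon}_3)$ through both SCMs yields identical variables, so in particular the two constructions induce the same distribution over $(X_1, X_2, Y)$. The concrete choices $f = \sin$ and $k(y) = y^3$ below are ours, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
e1, e2, e3 = rng.standard_normal((3, n))

# Illustrative choices for the remark's functions (ours, for concreteness):
f = lambda x: np.sin(x)
k = lambda y: y**3

# Left-hand SCM: X1 := e1,  Y := f(X1) + e3,  X2 := k(Y) + e2.
X1_l = e1
Y_l = f(X1_l) + e3
X2_l = k(Y_l) + e2

# Right-hand SCM with H := e3 and
# h2(H, (e1, e2)) := (e1, k(f(e1) + H) + e2),  Y := f(X1) + H.
H = e3
X1_r = e1
X2_r = k(f(e1) + H) + e2
Y_r = f(X1_r) + H

# With the same noise draws, the two constructions are pointwise identical.
assert np.allclose(X1_l, X1_r) and np.allclose(X2_l, X2_r) and np.allclose(Y_l, Y_r)
```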
Sometimes, the vector $X$ contains variables that are independent of $H$ and that enter additively into the assignments of the other covariates. If such covariates exist, it can be useful to explicitly distinguish them from the other covariates. We will denote them by $A$ and call them exogenous variables. Such variables are interesting for two reasons. (i) Under additional assumptions, they can be used as instrumental variables [e.g., @Bowden1985; @greene2003econometric], a well-established tool for ensuring that $f$ can be uniquely recovered from the observational distribution of $(X, Y)$. And (ii), we will see below that in general, interventions on such variables lead to intervention distributions with desirable properties. In the remainder of this article, we will therefore consider a slightly larger class of SCMs that also includes exogenous variables $A$. It contains the SCM presented at the beginning of Section \[sec:setup\] as a special case.[^1]
Model
-----
Formally, we consider a response $Y \in {\mathbb{R}}^1$, covariates $X\in {\mathbb{R}}^d$, exogenous variables $A\in {\mathbb{R}}^r$, and unobserved variables $H \in {\mathbb{R}}^q$. Let further ${\mathcal{F}}\subseteq\{f:{\mathbb{R}}^d\rightarrow{\mathbb{R}}\}$, ${\mathcal{G}}\subseteq\{g:{\mathbb{R}}^r\rightarrow{\mathbb{R}}^d\}$, ${\mathcal{H}}_1\subseteq\{h_1:{\mathbb{R}}^{q+1}\rightarrow{\mathbb{R}}\}$ and ${\mathcal{H}}_2\subseteq\{h_2:{\mathbb{R}}^{q+d}\rightarrow{\mathbb{R}}^d\}$ be fixed sets of measurable functions. Moreover, let $\mathcal{Q}$ be a collection of probability distributions on ${\mathbb{R}}^{d+1+r+q}$, such that for all $Q\in\mathcal{Q}$ it holds that if $({\varepsilon}_X,{\varepsilon}_Y, {{\varepsilon}}_A, {{\varepsilon}}_H)\sim Q$, then ${\varepsilon}_X,{\varepsilon}_Y, {{\varepsilon}}_A$ and ${{\varepsilon}}_H$ are jointly independent, and for all $h_1\in{\mathcal{H}}_1$ and $h_2\in{\mathcal{H}}_2$ it holds that $\xi_Y := h_1({{\varepsilon}}_H, {\varepsilon}_Y)$ and $\xi_X := h_2({{\varepsilon}}_H, {\varepsilon}_X)$ have mean zero.[^2] Let $\mathcal{M}\coloneqq
{\mathcal{F}}\times{\mathcal{G}}\times{\mathcal{H}}_1\times{\mathcal{H}}_2\times\mathcal{Q}$ denote the model class. Every model $M=(f, g, h_1, h_2, Q)\in \cM$ then specifies an SCM by[^3]
$$\begin{aligned}
A &\coloneqq {{\varepsilon}}_A \; &{\text{\scriptsize $r$ assignments} } \\
H &\coloneqq {{\varepsilon}}_H \; &{\text{\scriptsize $q$ assignments} } \\
X &\coloneqq g(A) + h_2(H, {\varepsilon}_X)\; &{\text{\scriptsize $d$ assignments} }\\
Y &\coloneqq f(X) + h_1(H, {\varepsilon}_Y)\; &{\text{\scriptsize $1$ assignment} }
\end{aligned}$$
with $({\varepsilon}_X,{\varepsilon}_Y, {{\varepsilon}}_A, {{\varepsilon}}_H)\sim Q$. For each model $M = (f,g,h_1,h_2,Q) \in\mathcal{M}$, we refer to $f$ as the *causal function* (for the pair $(X,Y)$) and assume that the entailed distribution has finite second moments. Furthermore, we denote by ${\mathbb{P}}_M$ the joint distribution over the observed variables $(X, Y, A)$ induced by the SCM specified by $M$. If no exogenous variables $A$ exist, one can think of the function $g$ as being a constant function.
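As a sketch of point (i) above, consider a linear toy instance of this model class (all concrete choices are ours, for illustration): ordinary least squares is biased by the hidden $H$, while the exogenous $A$, used as an instrument, recovers the causal coefficient:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000

# Linear toy instance (illustrative choices, not the paper's):
# f(x) = beta * x with beta = 1, g(a) = a, and H confounding X and Y.
beta = 1.0
A = rng.standard_normal(n)
H = rng.standard_normal(n)
X = A + H + rng.standard_normal(n)          # g(A) + h2(H, eps_X)
Y = beta * X + H + rng.standard_normal(n)   # f(X) + h1(H, eps_Y)

# Ordinary least squares is biased by the hidden confounder H ...
beta_ols = np.cov(X, Y)[0, 1] / np.var(X)
# ... while the exogenous A can serve as an instrument, as noted above:
beta_iv = np.cov(A, Y)[0, 1] / np.cov(A, X)[0, 1]

print(round(beta_ols, 2), round(beta_iv, 2))   # roughly 1.33 and 1.0
```

In population terms, $\operatorname{Cov}(X,Y)/\operatorname{Var}(X) = 4/3 \neq 1 = \operatorname{Cov}(A,Y)/\operatorname{Cov}(A,X)$ here, which is what the simulation reproduces.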
Interventions {#sec:interventions}
-------------
Each SCM $M\in\mathcal{M}$ can now be modified by the concept of interventions [e.g., @pearl2009causality; @Peters2017book]. An intervention corresponds to replacing one or more of the structural assignments of the SCM. For example, we intervene on all covariates $X$ by replacing the $d$ assignments with, e.g., a random variable, which is independent of the other noise variables and has a multivariate Gaussian distribution. Importantly, an intervention on some of the variables does not change the assignment of any other variable. In particular, an intervention on $X$ does not change the conditional distribution of $Y$, given $X$ and $H$ (this is an instance of the invariance property mentioned in Section \[sec:intro\]). More generally, we denote by $M(i)$ the intervened SCM over the variables $(X^i, A^i, Y^i, H^i)$, obtained by performing the intervention $i$ in model $M$. We do not require that the intervened model $M(i)$ belong to the model class $\cM$, but we require that $M(i)$ induces a joint distribution over $(X^i,Y^i,A^i,H^i)$, which has finite second moments. We use ${\mathcal{I}}$ to denote a collection of interventions.
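To preview why minimax solutions under interventions can differ from standard regression, the following sketch (a linear toy example of our own choosing, with mean-shift interventions on $A$ of the kind formalized in this section) compares the causal function $f(x) = x$ with the observational regression function $x \mapsto {\mathbb{E}}[Y \mid X = x] = 4x/3$. The regression function has the smaller risk in the training distribution, but its risk grows with the intervention strength, while the risk of the causal function is invariant:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000

def sample(shift):
    """Linear toy model (illustrative, ours): intervene on A by a mean shift."""
    A = shift + rng.standard_normal(n)
    H = rng.standard_normal(n)
    X = A + H + rng.standard_normal(n)
    Y = X + H + rng.standard_normal(n)      # causal function f(x) = x
    return X, Y

def risk(predict, shift):
    X, Y = sample(shift)
    return np.mean((Y - predict(X))**2)

f_causal = lambda x: x             # the causal function f
f_regr = lambda x: 4.0 * x / 3.0   # E[Y | X = x] in the training distribution

# Without interventions (shift 0) the regression function predicts best,
# but its risk grows with the intervention strength, while the causal
# function's risk stays constant -- the invariance property discussed above.
for s in [0.0, 2.0, 10.0]:
    print(s, risk(f_causal, s), risk(f_regr, s))
```

In this example, the population risks are $2$ for the causal function at every shift $s$, and $5/3 + s^2/9$ for the regression function, so the two curves cross already at moderate intervention strength.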
In this work, we only consider interventions on the covariates $X$ and $A$. More specifically, for a given model $M=(f,g,h_1,h_2,Q)\in \cM$ and an intervention $i\in{\mathcal{I}}$, the intervened SCM $M(i)$ takes one of two forms. First, for an intervention on $X$ it is given by $$\begin{aligned}
A^i := {{\varepsilon}}_A^i, \quad H^i := {{\varepsilon}}_H^i, \quad X^i :=
\psi^i(g, h_2, A^i,H^i, {{\varepsilon}}^i_X ,I^i), \quad Y^i:= f(X^i) + h_1(H^i,{{\varepsilon}}_Y^i),\end{aligned}$$ and, second, for an intervention on $A$ it is given by $$A^i := \psi^i(I^i, {{\varepsilon}}_A^i), \quad H^i := {{\varepsilon}}_H^i, \quad X^i := g(A^i) + h_2(H^i, {{\varepsilon}}_X^i), \quad Y^i:= f(X^i) + h_1(H^i,{{\varepsilon}}_Y^i).$$ In both cases, $({{\varepsilon}}_X^i, {{\varepsilon}}_Y^i, {{\varepsilon}}_A^i,{{\varepsilon}}_H^i) \sim Q$, the (possibly degenerate) random vector $I^i$ is independent of $({{\varepsilon}}_X^i, {{\varepsilon}}_Y^i, {{\varepsilon}}_A^i,{{\varepsilon}}_H^i)$, and $\psi^i$ is a measurable function, whose arguments are all part of the structural assignment of the intervened variable in model $M$. We will see below that this class of interventions is rather flexible. It does, however, not allow for arbitrary manipulations of $M$. For example, the noise variable ${{\varepsilon}}_Y$ is not allowed to enter the structural assignment of the intervened variable. Interventions on $A$ will generally be easier to analyze than interventions on $X$. We therefore distinguish between the following different types of interventions on $X$. Let $i$ be an intervention on $X$ with intervention map $\psi^i$. The intervention is then called $$\begin{array}{ll}
&\quad\textit{confounding-preserving}\qquad
\begin{array}{ll}
\text{if
there exists a map ${\varphi}^i$, such that}\\
\psi^i(g, h_2, A^i ,H^i, {{\varepsilon}}^i_X ,I^i) = {\varphi}^i(A^i,g(A^i),h_2(H^i,
{{\varepsilon}}^i_X) ,I^i)
\end{array}\\
&\text{and it is called}\\
&\quad\textit{confounding-removing}\phantom{ii}\qquad
\begin{array}{ll}
\text{if for all models $M \in {\mathcal{M}}$,}\\
\psi^i(g, h_2, A^i ,H^i, {{\varepsilon}}^i_X ,I^i) {\perp \!\!\! \perp}H^i\quad\text{under }M(i).
\end{array}
\end{array}$$ Furthermore, we call a set of interventions ${\mathcal{I}}$ *well-behaved* either if it consists only of confounding-preserving interventions or if it contains at least one confounding-removing intervention. Confounding-preserving interventions contain, e.g., *shift interventions* on $X$, which linearly shift the original assignment by $I^i$, that is, $\psi^i(g, h_2, A^i,H^i, {{\varepsilon}}^i_X ,I^i) = g(A^i) + h_2(H^i, {{\varepsilon}}_X^i) +
I^i$. The name ‘confounding-preserving’ stems from the fact that the unobserved (confounding) variables $H$ only enter the intervened structural assignment of $X$ via the term $h_2(H^i, {{\varepsilon}}^i_X)$, which is the same as in the original model. Some interventions are both confounding-removing and confounding-preserving, but not every confounding-removing intervention is confounding-preserving. For example, the intervention $\psi^i(g, h_2, A^i ,H^i, {{\varepsilon}}^i_X ,I^i)={{\varepsilon}}^i_X$ is confounding-removing but, in general, not confounding-preserving. Similarly, not all confounding-preserving interventions are confounding-removing.
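To make the two intervention types concrete, here is a minimal simulation sketch with made-up linear assignments (the coefficients and noise distributions are our own choices, not part of the model class above): a shift intervention preserves the confounding bias of least squares, while a confounding-removing intervention eliminates it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 200_000, 2.0  # beta: causal coefficient, chosen for illustration

def sample(intervention=None, shift=0.0):
    """Simulate A -> X -> Y with a hidden confounder H entering X and Y."""
    A, H, eps_X, eps_Y = (rng.normal(size=n) for _ in range(4))
    if intervention is None:
        X = A + H + eps_X                    # observational assignment
    elif intervention == "shift":
        X = A + H + eps_X + shift            # confounding-preserving
    elif intervention == "removing":
        X = rng.normal(size=n)               # X independent of H
    Y = beta * X + H + eps_Y                 # assignment of Y never changes
    return X, Y

X, Y = sample()
b_obs = np.cov(X, Y)[0, 1] / np.var(X)       # biased by the confounder H
Xs, Ys = sample("shift", shift=5.0)
b_shift = np.cov(Xs, Ys)[0, 1] / np.var(Xs)  # confounding bias persists
Xr, Yr = sample("removing")
b_rem = np.cov(Xr, Yr)[0, 1] / np.var(Xr)    # recovers beta
print(round(b_obs, 2), round(b_shift, 2), round(b_rem, 2))
```

The regression slope equals $\beta + {\operatorname{Cov}}(X, H)/{\operatorname{Var}}(X)$ observationally and under the shift, but equals $\beta$ once the intervention removes the confounding.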
If the context does not allow for any ambiguity, we omit the superscript $i$ and write expressions such as ${\mathbb{E}}_{M(i)}[(Y-f(X))^2]$. The support of random variables under interventions will play an important role for the analysis of distribution generalization. Throughout this paper, ${\mathrm{supp}}^{M}(Z)$ denotes the support of the random variable $Z\in\{A, X, H, Y\}$ under the distribution ${\mathbb{P}}_M$, which is induced by the SCM $M\in\mathcal{M}$. Moreover, ${\mathrm{supp}}_{{\mathcal{I}}}^{M}(Z)$ denotes the union of ${\mathrm{supp}}^{M(i)}(Z)$ over all interventions $i\in{\mathcal{I}}$. We call a collection of interventions on $Z$ *support-reducing* (w.r.t. $M$) if ${\mathrm{supp}}_\cI^M(Z)\subseteq {\mathrm{supp}}^M(Z)$ and *support-extending* (w.r.t. $M$) if ${\mathrm{supp}}_\cI^M(Z)\not \subseteq {\mathrm{supp}}^M(Z)$. Whenever it is clear from the context which model is considered, we may drop the indication of $M$ altogether and simply write ${\mathrm{supp}}(Z)$.
Interventional robustness and the causal function {#sec:robustness}
=================================================
Let ${\mathcal{M}}$ be a fixed model class, let $M = (f,g,h_1, h_2, Q) \in {\mathcal{M}}$ be the true data generating model, and let ${\mathcal{I}}$ be a class of interventions. In this work, we aim to find a function $f^*:{\mathbb{R}}^d\rightarrow{\mathbb{R}}$, such that the predictive model $\hat Y = f^*(X)$ has low worst-case risk over all distributions induced by the interventions in $\cI$. We therefore consider the optimization problem $$\label{eq:minimax_problem}
\operatorname*{argmin}_{{f_{\diamond}}\in {\mathcal{F}}} \sup_{i \in {\mathcal{I}}} {\mathbb{E}}_{M(i)} \big[ (Y - {f_{\diamond}}(X))^2 \big],$$ where ${\mathbb{E}}_{M(i)}$ is the expectation in the intervened model $M(i)$. In general, this optimization problem is neither guaranteed to have a solution, nor is the solution, if it exists, ensured to be unique. Whenever a solution $f^*$ exists, we refer to it as a *minimax solution* (for model $M$ w.r.t. ($\mathcal{F},{\mathcal{I}}$)).
If, for example, ${\mathcal{I}}$ consists only of the trivial intervention, that is, ${\mathbb{P}}_M = {\mathbb{P}}_{M(i)}$, we are looking for the best predictor on the observational distribution. In that case, the minimax solution is obtained by any conditional mean function, $f^*: x\mapsto{\mathbb{E}}[Y\vert X=x]$ (provided that $f^*\in\mathcal{F}$). For larger classes of interventions, however, the conditional mean may become sub-optimal in terms of prediction. To see this, it is instructive to decompose the risk under an intervention. Since the structural assignment for $Y$ remains unchanged for all interventions that we consider in this work, it holds for all ${f_{\diamond}}\in {\mathcal{F}}$ and all interventions $i$ on either $A$ or $X$ that $${\mathbb{E}}_{M(i)}[(Y - {f_{\diamond}}(X))^2]
= {\mathbb{E}}_{M(i)}[(f(X) - {f_{\diamond}}(X))^2]+{\mathbb{E}}_{M}[\xi_Y^2]+2{\mathbb{E}}_{M(i)}[\xi_Y(f(X)-{f_{\diamond}}(X))].$$ Here, the middle term does not depend on $i$ since $\xi_Y = h_1(H, {{\varepsilon}}_Y)$ remains fixed. If $i$ is a confounding-removing intervention, then $\xi_Y{\perp \!\!\! \perp}X$ under ${\mathbb{P}}_{M(i)}$, and, because of ${\mathbb{E}}_{M}[\xi_Y] = 0$, the last term in the above equation vanishes. Therefore, if ${\mathcal{I}}$ consists only of confounding-removing interventions, the causal function is a solution to the minimax problem. The following proposition shows that an even stronger statement holds: The causal function is already a minimax solution if ${\mathcal{I}}$ contains at least one confounding-removing intervention on $X$.
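The role of the vanishing cross term can be checked numerically. The following sketch (again with an assumed linear SCM, chosen only for illustration) compares the interventional risk of the causal coefficient with that of the observationally optimal regression coefficient under a confounding-removing intervention:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 300_000, 1.5  # beta: causal coefficient, assumed for illustration

# Observational data: X := H + eps_X,  Y := beta*X + H + eps_Y
H, eX, eY = (rng.normal(size=n) for _ in range(3))
X = H + eX
Y = beta * X + H + eY
b_ols = np.cov(X, Y)[0, 1] / np.var(X)   # best predictor observationally

# Confounding-removing intervention: X drawn independently of H
Hi, eYi = rng.normal(size=n), rng.normal(size=n)
Xi = 3.0 * rng.normal(size=n)
Yi = beta * Xi + Hi + eYi
mse = lambda b: np.mean((Yi - b * Xi) ** 2)

# xi_Y is independent of X under the intervention, the cross term vanishes,
# and the causal coefficient attains the smaller interventional risk:
print(mse(beta) < mse(b_ols))
```

Observationally, `b_ols` beats `beta` in mean squared error; under the confounding-removing intervention the ordering flips, exactly as the decomposition predicts.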
\[prop:minimax\_equal\_causal\] If ${\mathcal{I}}$ is a set of interventions on $X$ or $A$ and at least one of these is a confounding-removing intervention, then the causal function $f$ is a minimax solution.
One step in the proof of this proposition is to show that the minimal worst-case loss is attained at a confounding-removing intervention. That is, $$\inf_{{f_{\diamond}}\in{\mathcal{F}}}\sup_{i \in {\mathcal{I}}} {\mathbb{E}}_{M(i)} \big[ (Y - {f_{\diamond}}(X))^2 \big]=\inf_{{f_{\diamond}}\in{\mathcal{F}}}\sup_{i \in {\mathcal{I}}_{\text{cr}}} {\mathbb{E}}_{M(i)} \big[ (Y - {f_{\diamond}}(X))^2 \big],$$ where ${\mathcal{I}}_{\text{cr}}{\subseteq}{\mathcal{I}}$ denotes the non-empty subset of confounding-removing interventions. This observation will also be used in the proofs of some of the results that follow below.
We now prove that when restricting ourselves to linear functions only, the causal function is also a minimax solution with respect to the set of all shift interventions on $X$ – interventions that appear in linear IV models and recently gained further attention in the causal community [@rothenhausler2018anchor; @Sani2020]. The proposition below also makes precise in which sense shift interventions are related to linear model classes. Intuitively, when the causal relation between $X$ and $Y$ is linear, shift interventions are sufficient to create unbounded variability in all directions of the covariance matrix of $X$ (more precisely, the unbounded eigenvalue condition below is satisfied if ${\mathcal{I}}$ is the set of all shift interventions on $X$). As the following proposition shows, under this condition, the causal function is a minimax solution.
\[prop:shift\_interventions\] Let ${\mathcal{F}}$ be the class of all linear functions, and let ${\mathcal{I}}$ be a set of interventions on $X$ or $A$ s.t. $\sup_{i\in \cI} \lambda_{\min}\big({\mathbb{E}}_{M(i)}\big[XX^\top\big]\big) =\infty$, where $\lambda_{\min}$ denotes the smallest eigenvalue (assuming that the considered moments exist). Then, the causal function $f$ is the unique minimax solution.
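For a toy linear SCM $X := A + H + {\varepsilon}_X + v$ (shift $v$), $Y := \beta X + H + {\varepsilon}_Y$, with i.i.d. standard normal noise (a hypothetical instance, not taken from the text), the interventional risk of a linear predictor $b$ is available in closed form, $(\beta-b)^2(3+v^2)+2(\beta-b)+2$, so the worst-case risk of any $b\neq\beta$ grows quadratically in the shift. A grid search recovers the causal coefficient as the minimax solution:

```python
import numpy as np

beta = 1.0  # assumed causal coefficient

# Closed-form interventional risk for the toy SCM
#   X := A + H + eps_X + v,  Y := beta*X + H + eps_Y,  noises iid N(0,1):
#   E[(Y - bX)^2] = (beta - b)^2 (3 + v^2) + 2 (beta - b) + 2.
def risk(b, v):
    return (beta - b) ** 2 * (3 + v ** 2) + 2 * (beta - b) + 2

def worst_case(b, shifts):
    return max(risk(b, v) for v in shifts)

shifts = np.linspace(-100, 100, 2001)            # large shift interventions
bs = np.linspace(0.0, 2.0, 201)                  # candidate linear predictors
b_minimax = bs[np.argmin([worst_case(b, shifts) for b in bs])]
print(b_minimax)
```

As the maximal shift grows, the $(\beta-b)^2 v^2$ term dominates and the minimax coefficient converges to $\beta$, matching the unbounded-eigenvalue condition of the proposition.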
Even if the causal function $f$ does not solve the minimax problem , the difference between the minimax solution and the causal function cannot be arbitrarily large. The following proposition shows that the worst-case $L_2$-distance between $f$ and any function ${f_{\diamond}}$ that performs better than $f$ (in terms of worst-case risk) can be bounded by a term which is related to the strength of the confounding.
\[prop:difference\_to\_causal\_function\] Let ${\mathcal{I}}$ be a set of interventions on $X$ or $A$. Then, for any function ${f_{\diamond}}\in{\mathcal{F}}$ which satisfies that $$\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}[(Y-{f_{\diamond}}(X))^2]\leq\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}[(Y-f(X))^2],$$ it holds that $$\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}[(f(X)-{f_{\diamond}}(X))^2]\leq 4{\operatorname{Var}}_M(\xi_Y).$$
Even though the difference can be bounded, it may be non-zero, and one may benefit from choosing a function that differs from the causal function $f$. This choice, however, comes at a cost: it relies on the fact that we know the class of interventions $\mathcal{I}$. In general, being a minimax solution is not entirely robust with respect to misspecification of $\mathcal{I}$. In particular, if the set $\cI_2$ of interventions describing the test distributions is misspecified by a set $\cI_1\neq\cI_2$, then the considered minimax solution with respect to $\cI_1$ may perform worse than the causal function on the test distributions.
\[prop:misspecification\_minimax\] Let ${\mathcal{I}}_1$ and ${\mathcal{I}}_2$ be any two sets of interventions on $X$, and let $f_1^*\in\mathcal{F}$ be a minimax solution w.r.t. ${\mathcal{I}}_1$. Then, if ${\mathcal{I}}_2\subseteq{\mathcal{I}}_1$ it holds that $$\sup_{i\in{\mathcal{I}}_2}{\mathbb{E}}_{M(i)}\big[(Y-f_1^*(X))^2\big] \leq \sup_{i\in{\mathcal{I}}_2}{\mathbb{E}}_{M(i)}\big[(Y-f(X))^2\big].$$ If ${\mathcal{I}}_2\not\subseteq{\mathcal{I}}_1$, however, it can happen (even if $\mathcal{F}$ is linear) that $$\sup_{i\in{\mathcal{I}}_2}{\mathbb{E}}_{M(i)}\big[(Y-f_1^*(X))^2\big] > \sup_{i\in{\mathcal{I}}_2}{\mathbb{E}}_{M(i)}\big[(Y-f(X))^2\big].$$
The second part of the proposition should be understood as a non-robustness property of non-causal minimax solutions. Improvements on the causal function are possible in situations, where one has reasons to believe that the test distributions do not stem from a set of interventions that is much larger than the specified set.
So far, the optimizer of the minimax problem depends on the true model $M$. In practice, however, we do not have access to the true model $M$, but only to its observational distribution ${\mathbb{P}}_M$. This motivates the definition of distribution generalization.
Distribution generalization {#sec:generalizability}
===========================
Throughout this section, let ${\mathcal{M}}$ denote a fixed model class, let $M = (f,g,h_1,h_2,Q) \in {\mathcal{M}}$ be the true (but unknown) data generating model, with observational distribution ${\mathbb{P}}_M$, and let ${\mathcal{I}}$ be a set of interventions on $X$ or $A$. Depending on the model class ${\mathcal{M}}$, there may be several models $\tilde{M} \in {\mathcal{M}}$ that induce the observational distribution ${\mathbb{P}}_M$ but do not agree with $M$ on all intervention distributions induced by $\cI$. Each such model induces a potentially different minimax problem, with a potentially different set of solutions. Given knowledge only of ${\mathbb{P}}_M$, it is therefore generally not possible to identify a solution to . In this section, we study conditions on ${\mathcal{M}}$, ${\mathbb{P}}_M$ and ${\mathcal{I}}$, under which this becomes possible. More precisely, we aim to characterize under which conditions $({\mathbb{P}}_M, \mathcal{M})$ generalizes to $\mathcal{I}$.
\[defi:general\] $({\mathbb{P}}_{M},\mathcal{M})$ is said to generalize to ${\mathcal{I}}$ if for every ${\varepsilon}> 0$ there exists a function $f^{*}\in\mathcal{F}$ such that, for all models $\tilde{M}\in\mathcal{M}$ with ${\mathbb{P}}_{\tilde{M}}={\mathbb{P}}_{M}$, it holds that $$\label{eq:def_generalization}
\left \vert \sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-f^*(X))^2\big] - \inf_{{f_{\diamond}}\in{\mathcal{F}}}\,
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big] \right \vert \leq {\varepsilon}.$$
Distribution generalization does not require the existence of a minimax solution in ${\mathcal{F}}$ (which would require further assumptions on the function class ${\mathcal{F}}$) and instead focuses on whether an approximate solution can be identified based only on the observational distribution ${\mathbb{P}}_{M}$. If, however, there exists a function $f^* \in {\mathcal{F}}$ which, for every $\tilde{M}\in{\mathcal{M}}$ with ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_M$, is a minimax solution for $\tilde{M}$ w.r.t. $({\mathcal{F}}, {\mathcal{I}})$, then, in particular, $({\mathbb{P}}_{M},\mathcal{M})$ generalizes to ${\mathcal{I}}$. As the next proposition shows, generalization is closely linked to the ability of identifying the joint intervention distributions of $(X, Y)$ from the observational distribution.
Definition \[defi:general\] says that the observational distribution ${\mathbb{P}}_{M}$ is sufficient for identifying a solution $f^*$ that for all $\tilde{M}\in\mathcal{M}$ with ${\mathbb{P}}_{\tilde{M}}={\mathbb{P}}_{M}$ solves (up to ${\varepsilon}$) the minimax problem corresponding to $\tilde{M}$. The ability to generalize, in particular, ensures that there exists a well-defined mapping $$F:\{{\mathbb{P}}_{\tilde{M}}\,\vert\,\tilde{M}\in\mathcal{M}\}\to {\mathcal{F}}$$ such that $F({\mathbb{P}})$ solves the minimax problem (up to ${\varepsilon}$) for all observationally equivalent models (i.e., all models $\tilde{M}\in\mathcal{M}$ with ${\mathbb{P}}_{\tilde{M}}={\mathbb{P}}$). That is, the observational distribution of the variables $(X, Y, A)$ is sufficient to identify a solution that minimizes the worst-case intervention mean squared error under all models with the same observational distribution.
\[prop:suff\_general\] Assume that for all $\tilde{M} \in {\mathcal{M}}$ it holds that[^6] $${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_M \quad \Rightarrow \quad {\mathbb{P}}^{(X,Y)}_{\tilde{M}(i)} = {\mathbb{P}}^{(X,Y)}_{M(i)} \quad \forall i \in {\mathcal{I}},$$ where ${\mathbb{P}}^{(X,Y)}_{M(i)}$ is the joint distribution of $(X, Y)$ under $M(i)$. Then, $({\mathbb{P}}_M, {\mathcal{M}})$ generalizes to ${\mathcal{I}}$.
Proposition \[prop:suff\_general\] provides verifiable conditions for distribution generalization, and is a useful result for proving possibility statements. It is, however, not a necessary condition. In Propositions \[prop:genX\_intra\] and \[prop:genX\_extra\], we give further conditions under which distribution generalization is possible for all well-behaved sets of interventions. In particular, if the set of interventions ${\mathcal{I}}$ contains at least one confounding-removing intervention it can be shown that the causal function always generalizes, even in cases where the interventional marginal of $X$ is not identified. We will see that distribution generalization is closely linked to the relation between the support of ${\mathbb{P}}_M$ and the support of the intervention distributions. Below, we therefore distinguish between support-reducing interventions (Section \[sec:supp\_reducing\_onX\]) and support-extending interventions (Section \[sec:support\_extending\_onX\]) on $X$. In Section \[sec:int\_onA\], we consider interventions on $A$. We will see that parts of the analysis carry over from the interventions on $X$.
Support-reducing interventions on $X$ {#sec:supp_reducing_onX}
-------------------------------------
In order to simplify the following analysis, we will constrain ourselves to cases in which the causal function is identified on the support of $X$. This condition is made precise in the following assumption.
\[ass:identify\_f\] For all $\tilde{M}=(\tilde{f}, \dots) \in \mathcal{M}$ with ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_{M}$, it holds that $\tilde{f}(x) = f(x)$ for all $x \in {\mathrm{supp}}(X)$.
Assumption \[ass:identify\_f\] concerns identifiability of the causal function from the observational distribution on the support of $X$. This question has received a lot of attention in literature. In linear instrumental variable settings, for example, one assumes that the functions $f$ and $g$ are linear and the product moment between $A$ and $X$ has rank at least the dimension of $X$ [e.g., @wooldridge2010econometric]. In linear non-Gaussian models, one can identify the function $f$ even if there are no instruments [@Hoyer2008b]. For nonlinear models, restricted structural causal models can be exploited, too. In that case, Assumption \[ass:identify\_f\] holds under regularity conditions if $h_1(H, {\varepsilon}_Y)$ is independent of $X$ [@Zhang2009; @Peters2014jmlr; @Peters2017book] and first attempts have been made to extend such results to non-trivial confounding cases [@Janzing2009uai]. The nonlinear IV setting [e.g., @amemiya1974nonlinear; @newey2013nonparametric; @newey2003instrumental] is discussed in more detail in Appendix \[sec:IVconditions\], where we give a brief overview of identification results for linear, parametric and non-parametric function classes. There is also a technical aspect regarding identifiability: Assumption \[ass:identify\_f\] states that $f$ is identifiable, even on ${\mathbb{P}}_M$-null sets, which is usually achieved by placing further constraints on the function class, such as smoothness. Even though this issue seems technical, it becomes important when considering hard interventions that set $X$ to a fixed value, for example.
Assumption \[ass:identify\_f\] is not necessary for generalization. [@rothenhausler2018anchor] show, for example, that if ${\mathcal{F}}$ and ${\mathcal{G}}$ consist of linear functions it is possible to generalize to a set of bounded interventions on $A$ – even if Assumption \[ass:identify\_f\] does not hold. If, however, Assumption \[ass:identify\_f\] holds, then distribution generalization is possible even in nonlinear settings, under a large class of interventions if these are support-reducing.
\[prop:genX\_intra\] Let ${\mathcal{I}}$ be a well-behaved set of interventions on $X$, and assume that ${\mathrm{supp}}_{{\mathcal{I}}}(X)\subseteq{\mathrm{supp}}(X)$. Then, under Assumption \[ass:identify\_f\], $({\mathbb{P}}_{M}, \cM)$ generalizes to the interventions ${\mathcal{I}}$.
Proposition \[prop:genX\_intra\] states that Assumption \[ass:identify\_f\] is a sufficient condition for generalization when ${\mathcal{I}}$ is a well-behaved set of support-reducing interventions. However, for an arbitrary set of interventions, generalization can become impossible, even if Assumption \[ass:identify\_f\] is satisfied and all interventions are support-reducing.
### Impossibility of generalization under changes in confounding
Consider, for example, a one-dimensional linear instrumental variable setting. Let therefore $\mathcal{Q}$ be a class of product distributions on ${\mathbb{R}}^4$, such that for all $Q \in \mathcal{Q}$, the coordinates of $Q$ are non-degenerate, zero-mean with finite second moment. Let ${\mathcal{M}}$ be the class of all models of the form $$\label{eq:modelmc}
A\coloneqq {\varepsilon}_A, \quad
H\coloneqq\sigma{\varepsilon}_H, \quad
X\coloneqq \gamma A + {\varepsilon}_X + \tfrac{1}{\sigma}H, \quad
Y\coloneqq \beta X + {\varepsilon}_Y + \tfrac{1}{\sigma}H,$$ with $\gamma, \beta \in {\mathbb{R}}$, $\sigma > 0$ and $({\varepsilon}_A, {\varepsilon}_X,{\varepsilon}_Y,{\varepsilon}_H) \sim Q \in \mathcal{Q}$. Assume that ${\mathbb{P}}_M$ is induced by some (unknown) model $M = M(\gamma, \beta, \sigma, Q)$ from the above model class (here, we slightly adapt the notation from Section \[sec:setup\]). The following proposition shows that if the set of interventions ${\mathcal{I}}$ is not well-behaved, distribution generalization is not always ensured.
\[prop:impossibility\_interpolation\] Assume that ${\mathcal{M}}$ is given as defined above, and let ${\mathcal{I}}\subseteq {\mathbb{R}}_{>0}$ be a compact set of interventions on $X$ defined by $\psi^i(g, h_2, A^i, H^i, {\varepsilon}_X^i, I^i) = iH$, for $i \in {\mathcal{I}}$ (this set of interventions is not well-behaved). Then, $({\mathbb{P}}_M, \cM)$ does not generalize to the interventions in ${\mathcal{I}}$ (even if Assumption \[ass:identify\_f\] is satisfied). In addition, any prediction model other than the causal model may perform arbitrarily bad under the interventions ${\mathcal{I}}$. That is, for any $b \neq \beta$ and any $c > 0$, there exists a model $\tilde{M}\in\cM$ with ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_{M}$, such that $$\abs[\Big]{\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-bX)^2\big]
- \inf_{{b_{\diamond}}\in {\mathbb{R}}}\,
\sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{b_{\diamond}}X)^2\big]} \geq c.$$
Support-extending interventions on $X$ {#sec:support_extending_onX}
--------------------------------------
If the interventions in ${\mathcal{I}}$ extend the support of $X$, i.e., ${\mathrm{supp}}_{\mathcal{I}}(X) \not {\subseteq}{\mathrm{supp}}(X)$, Assumption \[ass:identify\_f\] is not sufficient for ensuring distribution generalization. This is because there may exist models $\tilde{M} \in {\mathcal{M}}$ which agree with $M$ on the observational distribution, but whose corresponding causal function $\tilde{f}$ differs from $f$ outside of the support of $X$. In that case, a support-extending intervention on $X$ may result in different dependencies between $X$ and $Y$ in the two models, and therefore induce a different set of minimax solutions. The following assumption on the function class ${\mathcal{F}}$ ensures that any $f\in \cF$ is uniquely determined by its values on ${\mathrm{supp}}(X)$.
\[ass:gen\_f\] For all $\tilde{f}, \bar{f} \in {\mathcal{F}}$ with $\tilde{f}(x) = \bar{f}(x)$ for all $x \in {\mathrm{supp}}(X)$, it holds that $\tilde{f} \equiv \bar{f}$.
We will see that this assumption is sufficient (Proposition \[prop:genX\_extra\]) for generalization with respect to well-behaved interventions on $X$. Furthermore, it is also necessary (Proposition \[prop:impossibility\_extrapolation\]) if $\cF$ is sufficiently flexible. The following proposition can be seen as an extension of Proposition \[prop:genX\_intra\].
\[prop:genX\_extra\] Let ${\mathcal{I}}$ be a well-behaved set of interventions on $X$. Then, under Assumptions \[ass:identify\_f\] and \[ass:gen\_f\], $({\mathbb{P}}_{M}, \cM)$ generalizes to ${\mathcal{I}}$.
Because the interventions may change the marginal distribution of $X$, the preceding proposition includes examples, in which distribution generalization is possible even if some of the considered joint (test) distributions are arbitrarily far from the training distribution, in terms of any reasonable divergence measure over distributions, such as Wasserstein distance or $f$-divergence.
The proposition relies on Assumption \[ass:gen\_f\]. Even though this assumption is restrictive, it is satisfied by several reasonable function classes, which therefore allow for generalization under any set of well-behaved interventions. Below, we give two examples of such function classes.
### Sufficient conditions for generalization {#sec:suff_genX_extrap}
Assumption \[ass:gen\_f\] states that every function in ${\mathcal{F}}$ is globally identified by its values on ${\mathrm{supp}}(X)$. This is, for example, satisfied if $\mathcal{F}$ is a linear space of functions with domain $\mathcal{D} {\subseteq}{\mathbb{R}}^d$ which are linearly independent on ${\mathrm{supp}}(X)$. More precisely, $$\begin{aligned}
\mathcal{F} \text{ is linearly closed}: &\quad
f_1, f_2 \in \mathcal{F}, c \in {\mathbb{R}}, \implies
f_1 + f_2 \in \mathcal{F}, cf_1 \in \mathcal{F}, \text{ and} \label{eq:linear}\\
\mathcal{F} \text{ is lin.\ ind.\ on }{\mathrm{supp}}(X):
&\quad f_1(x) = 0 \quad \forall x \in {\mathrm{supp}}(X) \;\implies \;f_1(x) = 0 \quad \forall x \in \mathcal{D}. \label{eq:linearind}\end{aligned}$$ Examples of such classes include (i) globally linear parametric function classes, i.e., ${\mathcal{F}}$ is of the form $$\mathcal{F}^1 \coloneqq\{{f_{\diamond}}:\mathcal{D}\rightarrow{\mathbb{R}}{\, \vert \,}\text{ there exists } \gamma \in {\mathbb{R}}^k \text{ s.t. }
\forall
x\in\mathcal{D} {\, : \,}{f_{\diamond}}(x)=\gamma^\top \nu (x) \},$$ where $\nu = (\nu_1, \dots, \nu_k)$ consists of real-valued, linearly independent functions satisfying that ${\mathbb{E}}_M[\nu(X) \nu(X)^\top]$ is strictly positive definite, and (ii) the class of differentiable functions that extend linearly outside of ${\mathrm{supp}}(X)$, that is, ${\mathcal{F}}$ is of the form $$\mathcal{F}^2 \coloneqq\{{f_{\diamond}}:\mathcal{D}\rightarrow{\mathbb{R}}\,\vert\,
{f_{\diamond}}\in C^1 \text{ and } \forall x\in\mathcal{D} \setminus
{\mathrm{supp}}(X):\, {f_{\diamond}}(x)={f_{\diamond}}(x_b)+\nabla{f_{\diamond}}(x_b)(x-x_b)\},
$$ where $x_b\coloneqq\operatorname*{argmin}_{z\in{\mathrm{supp}}(X)}\norm{x-z}$ and ${\mathrm{supp}}(X)$ is assumed to be closed with non-empty interior. Clearly, both of the above function classes are linearly closed. To see that ${\mathcal{F}}^1$ satisfies , let $\gamma \in {\mathbb{R}}^k$ be s.t. $\gamma^\top \nu (x) = 0$ for all $x \in {\mathrm{supp}}(X)$. Then, it follows that $0 = {\mathbb{E}}_M[(\gamma^\top \nu(X))^2] = \gamma^\top {\mathbb{E}}_M[\nu(X)
\nu(X)^\top] \gamma$ and hence that $\gamma = 0$. To see that ${\mathcal{F}}^2$ satisfies , let ${f_{\diamond}}\in {\mathcal{F}}^2$ and assume that ${f_{\diamond}}(x) = 0$ for all $x \in {\mathrm{supp}}(X)$. Then, ${f_{\diamond}}(x) = 0$ for all $x \in \mathcal{D}$ and thus ${\mathcal{F}}^2$ uniquely defines the function on the entire domain $\mathcal{D}$. By Proposition \[prop:genX\_extra\], generalization with respect to these model classes is possible for any well-behaved set of interventions. In practice, it may often be more realistic to impose bounds on the higher order derivatives of the functions in ${\mathcal{F}}$. We now prove that this still allows for approximate distribution generalization, see Propositions \[prop:extrapolation\_bounded\_deriv\_cr\] and \[prop:extrapolation\_bounded\_deriv\].
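The linear extension defining $\mathcal{F}^2$ is easy to make concrete in one dimension. The sketch below fixes a hypothetical quadratic function on ${\mathrm{supp}}(X)=[-1,2]$ (our own choice of $f$ and support, purely for illustration) and continues it linearly outside:

```python
import numpy as np

# 1D sketch of the class F^2: functions determined on supp(X) = [lo, hi]
# and extended linearly outside it via f(x_b) + f'(x_b)(x - x_b).
lo, hi = -1.0, 2.0

def f_inside(x):       # values identified on supp(X); quadratic for illustration
    return x ** 2

def f_inside_grad(x):
    return 2 * x

def f_extended(x):
    # nearest boundary point x_b = argmin_{z in supp(X)} |x - z|
    xb = np.clip(x, lo, hi)
    inside = (x >= lo) & (x <= hi)
    return np.where(inside, f_inside(x),
                    f_inside(xb) + f_inside_grad(xb) * (x - xb))

xs = np.array([-3.0, 0.5, 4.0])
print(f_extended(xs))   # linear continuation left of -1 and right of 2
```

Inside the support the function is the identified quadratic; outside, its value is pinned down by the boundary value and gradient, which is exactly why $\mathcal{F}^2$ satisfies Assumption \[ass:gen\_f\].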
### Sufficient conditions for approximate distribution generalization {#sec:suffapprox}
For differentiable functions, exact generalization cannot always be achieved. Bounding the first derivative, however, allows us to achieve approximate generalization. We therefore consider the following function class $$\label{eq:boundedder}
\mathcal{F}^3\coloneqq\{{f_{\diamond}}:\mathcal{D}\rightarrow{\mathbb{R}}\,\vert\,
{f_{\diamond}}\text{ is continuously differentiable with } \norm{\nabla{f_{\diamond}}}_{\infty}\leq K\},$$ for some fixed $K<\infty$, where $\nabla{f_{\diamond}}$ denotes the gradient and $\mathcal{D}\subseteq{\mathbb{R}}^d$. We then have the following result.
\[prop:extrapolation\_bounded\_deriv\_cr\] Let $\mathcal{F}$ be as defined in . Let ${\mathcal{I}}$ be a set of interventions on $X$ containing at least one confounding-removing intervention, and assume that Assumption \[ass:identify\_f\] holds true. (In this case, the causal function $f$ is a minimax solution.) Then, for all $f^*$ with $f^*=f$ on ${\mathrm{supp}}(X)$ and all $\tilde{M}\in\mathcal{M}$ with ${\mathbb{P}}_{\tilde{M}}={\mathbb{P}}_{M}$, it holds that $$\label{eq:gen_cond}
\abs[\Big]{\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-f^*(X))^2\big] - \inf_{{f_{\diamond}}\in{\mathcal{F}}}\,
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big]}\leq
4\delta^2K^2+4\delta K \sqrt{{\operatorname{Var}}_M(\xi_Y)},$$ where $\delta :=
\sup_{x\in{\mathrm{supp}}_{\cI}^{M}(X)}\inf_{z\in{\mathrm{supp}}^{M}(X)}\norm{x-z}$. If ${\mathcal{I}}$ consists only of confounding-removing interventions, the same statement holds when replacing the bound by $4\delta^2K^2$.
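Both $\delta$ and the bound $4\delta^2K^2$ are straightforward to evaluate once the two supports are approximated by finite grids; the supports and the Lipschitz constant below are hypothetical choices, not taken from the text:

```python
import numpy as np

# Sketch: delta = sup_{x in supp_I(X)} inf_{z in supp(X)} ||x - z||
# for grid approximations of the two supports (1D, support-extending).
obs_support = np.linspace(-1.0, 1.0, 400)    # supp^M(X)
int_support = np.linspace(-1.5, 1.5, 400)    # supp_I^M(X)
K = 2.0                                      # bound on the gradient in F^3

dists = np.abs(int_support[:, None] - obs_support[None, :]).min(axis=1)
delta = dists.max()
bound = 4 * delta ** 2 * K ** 2   # bound for confounding-removing interventions
print(round(delta, 3), round(bound, 3))
```

Here the interventions extend the support by $0.5$ on each side, so $\delta = 0.5$, and the approximation error of the causal function is bounded by $4\delta^2K^2 = 4$.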
Proposition \[prop:extrapolation\_bounded\_deriv\_cr\] states that the deviation of the worst-case generalization error from the best possible value is bounded by a term that grows with the square of $\delta$. Intuitively, this means that under the function class defined in , approximate generalization is reasonable only for interventions that are close to the support of $X$. We now prove a similar result for cases in which the minimax solution is not necessarily the causal function. The following proposition bounds the worst-case generalization error for arbitrary confounding-preserving interventions. Here, the bound additionally accounts for the approximation to the minimax solution.
\[prop:extrapolation\_bounded\_deriv\] Let $\mathcal{F}$ be as defined in . Let ${\mathcal{I}}$ be a set of confounding-preserving interventions on $X$, and assume that Assumption \[ass:identify\_f\] is satisfied. Let ${\varepsilon}> 0$ and let $f^{*}\in\mathcal{F}$ be such that $$\abs[\Big]{\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-f^*(X))^2\big] - \inf_{{f_{\diamond}}\in{\mathcal{F}}}\,
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-{f_{\diamond}}(X))^2\big]}\leq {\varepsilon}.$$ Then, for all $\tilde{M}\in\mathcal{M}$ with ${\mathbb{P}}_{\tilde{M}}={\mathbb{P}}_{M}$, it holds that $$\begin{aligned}
&\abs[\Big]{\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-f^*(X))^2\big]
- \inf_{{f_{\diamond}}\in{\mathcal{F}}}\,
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big]} \\
&\quad
\leq {{\varepsilon}}+ 12 \delta^2 K^2 + 32 \delta K \sqrt{{\operatorname{Var}}_M(\xi_Y)} +
4 \sqrt{2} \delta K \sqrt{{{\varepsilon}}}
\end{aligned}$$ where $\delta :=
\sup_{x\in{\mathrm{supp}}_{\cI}^{M}(X)}\inf_{z\in{\mathrm{supp}}^{M}(X)}\norm{x-z}$.
We can take $f^*$ to be the minimax solution if it exists. In that case, the terms involving ${{\varepsilon}}$ disappear from the bound, which then becomes more similar to the one in Proposition \[prop:extrapolation\_bounded\_deriv\_cr\].
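To make the role of $\delta$ concrete, the following sketch (not part of the paper; all numbers are hypothetical) approximates $\delta$ for one-dimensional interval supports and evaluates the bound $4\delta^2K^2 + 4\delta K \sqrt{{\operatorname{Var}}_M(\xi_Y)}$ from Proposition \[prop:extrapolation\_bounded\_deriv\_cr\].

```python
import numpy as np

def delta_interval(supp_int, supp_obs, grid=10_001):
    """Approximate delta = sup_{x in supp_I(X)} inf_{z in supp(X)} |x - z|
    for one-dimensional interval supports, on an equidistant grid."""
    xs = np.linspace(supp_int[0], supp_int[1], grid)
    lo, hi = supp_obs
    # distance of each grid point to the observational support [lo, hi]
    dist = np.maximum(np.maximum(lo - xs, xs - hi), 0.0)
    return dist.max()

# supp(X) = [-1, 1]; interventions extend the support of X to [-2, 2]
d = delta_interval((-2.0, 2.0), (-1.0, 1.0))

# hypothetical gradient bound K and noise variance Var(xi_Y)
K, var_xi = 2.0, 0.5
bound = 4 * d**2 * K**2 + 4 * d * K * np.sqrt(var_xi)
```

With these made-up numbers, $\delta = 1$; if ${\mathcal{I}}$ consisted only of confounding-removing interventions, the bound would reduce to $4\delta^2K^2 = 16$.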
### Impossibility of generalization without restrictions on ${\mathcal{F}}$
If we do not constrain the function class ${\mathcal{F}}$, generalization is impossible. Even if we consider the set of all continuous functions ${\mathcal{F}}$, we cannot generalize to interventions outside the support of $X$. This statement holds even if Assumption \[ass:identify\_f\] is satisfied.
\[prop:impossibility\_extrapolation\] Assume that ${\mathcal{F}}= \{{f_{\diamond}}: {\mathbb{R}}^d \to {\mathbb{R}}\mid {f_{\diamond}}\text{ is continuous}\}$. Let ${\mathcal{I}}$ be a well-behaved set of support-extending interventions on $X$, such that ${\mathrm{supp}}_{{\mathcal{I}}}(X) \setminus {\mathrm{supp}}(X)$ has non-empty interior. Then, $({\mathbb{P}}_{M}, \cM)$ does not generalize to the interventions in ${\mathcal{I}}$, even if Assumption \[ass:identify\_f\] is satisfied. In particular, for any function $\bar{f} \in{\mathcal{F}}$ and any $c > 0$, there exists a model $\tilde{M} \in {\mathcal{M}}$, with ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_{M}$, such that $$\abs[\Big]{\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-\bar{f}(X))^2\big]
- \inf_{{f_{\diamond}}\in {\mathcal{F}}}\,
\sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big]} \geq c.$$
Interventions on $A$ {#sec:int_onA}
--------------------
We can now derive corresponding results for interventions on $A$, for which, as we will see, parts of the analysis simplify. We will be able to employ several of the above results by realizing that any intervention on $A$ can be written as an intervention on $X$, in which the structural assignment of $X$ is altered in a way that depends on the functional relationship $g$ between $X$ and $A$. The effect of such an intervention on the prediction model is propagated by $g$. More formally, under such an intervention, a model $\tilde{M} = (\tilde{f}, \tilde{g}, \tilde{h}_1, \tilde{h}_2, \tilde{Q})$ with $\tilde{g} \neq g$ may induce a distribution over $(X,Y)$ that differs from the one induced by $M$. Without further restrictions on the function class ${\mathcal{G}}$, this may happen even in cases where $\tilde{M}$ and $M$ agree on the observational distribution. This motivates an assumption on the identifiability of $g$.
\[ass:identify\_g\] For all $\tilde{M} = (\tilde{f}, \tilde{g}, \tilde{h}_1, \tilde{h}_2, \tilde{Q}) \in {\mathcal{M}}$ with ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_M$, it holds that $\tilde{g}(a) = g(a)$ for all $a \in {\mathrm{supp}}(A) \cup {\mathrm{supp}}_{\mathcal{I}}(A)$.
Since $g(A)$ is a conditional mean for $X$ given $A$, the values of $g$ are identified from ${\mathbb{P}}_M$ for ${\mathbb{P}}_M$-almost all $a$. If ${\mathrm{supp}}_{\mathcal{I}}(A) {\subseteq}{\mathrm{supp}}(A)$, Assumption \[ass:identify\_g\] therefore holds if, for example, ${\mathcal{G}}$ contains continuous functions only. The point-wise identifiability of $g$ is necessary, for example, if some of the test distributions are induced by hard interventions on $A$, which set $A$ to some fixed value $a \in {\mathbb{R}}^r$. In the case where the interventions ${\mathcal{I}}$ extend the support of $A$, we additionally require the function class ${\mathcal{G}}$ to extrapolate from ${\mathrm{supp}}(A)$ to ${\mathrm{supp}}(A) \cup {\mathrm{supp}}_{\mathcal{I}}(A)$; this is similar to the conditions on ${\mathcal{F}}$ which we made in Section \[sec:support\_extending\_onX\] and requires further restrictions on ${\mathcal{G}}$. Under Assumption \[ass:identify\_g\], we obtain a result corresponding to Propositions \[prop:genX\_intra\] and \[prop:genX\_extra\].
\[prop:genA\] Let ${\mathcal{I}}$ be a set of interventions on $A$ and assume Assumption \[ass:identify\_g\] is satisfied. Then, $({\mathbb{P}}_{M}, \cM)$ generalizes to ${\mathcal{I}}$ if either ${\mathrm{supp}}_{{\mathcal{I}}}(X)\subseteq {\mathrm{supp}}(X)$ and Assumption \[ass:identify\_f\] is satisfied or if both Assumptions \[ass:identify\_f\] and \[ass:gen\_f\] are satisfied.
### Impossibility of generalization without constraints on $\mathcal{G}$
Without restrictions on the model class ${\mathcal{G}}$, generalization to interventions on $A$ is impossible. This holds true even under strong assumptions on the true causal function (such as $f$ is known to be linear). Below, we give a formal impossibility result for hard interventions on $A$, which set $A$ to some fixed value, where ${\mathcal{G}}$ is the set of all continuous functions.
\[prop:impossibility\_intA\] Assume that ${\mathcal{F}}= \{{f_{\diamond}}: {\mathbb{R}}^d \to {\mathbb{R}}{\, \vert \,}{f_{\diamond}}\text{ is linear} \}$ and ${\mathcal{G}}= \{{g_{\diamond}}: {\mathbb{R}}^r \to {\mathbb{R}}^d {\, \vert \,}{g_{\diamond}}\text{ is continuous} \}$. Let $\mathcal{A} {\subseteq}{\mathbb{R}}^r$ be bounded, and let ${\mathcal{I}}$ denote the set of all hard interventions which set $A$ to some fixed value from $\mathcal{A}$. Assume that $\mathcal{A} \setminus {\mathrm{supp}}(A)$ has nonempty interior. Assume further that ${\mathbb{E}}_M[\xi_X \xi_Y] \neq 0$ (this excludes the case of no hidden confounding). Then, ${\mathbb{P}}_M$ does not generalize to the interventions in ${\mathcal{I}}$. In addition, any function other than $f$ may perform arbitrarily badly under the interventions in ${\mathcal{I}}$. That is, for any $\bar{f} \neq f$ and $c > 0$, there exists a model $\tilde{M} \in {\mathcal{M}}$ with ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_M$ such that $$\abs[\Big]{\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-\bar{f}(X))^2\big] - \inf_{{f_{\diamond}}\in {\mathcal{F}}}\,
\sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big]} \geq c.$$
![ The left plot illustrates the straightforward idea behind the impossibility result in Proposition \[prop:impossibility\_extrapolation\]. The plots in the middle and on the right-hand side illustrate the impossibility result in Proposition \[prop:impossibility\_intA\]. All plots visualize the case of univariate variables. Under well-behaved interventions on $X$ (left; here using confounding-removing interventions) which extend the support of $X$, generalization is impossible without further restrictions on the function class ${\mathcal{F}}$. This holds true even if Assumption \[ass:identify\_f\] is satisfied. Indeed, although the candidate model (blue line) coincides with the causal model (green dashed curve) on the support of $X$, it may perform arbitrarily badly on test data generated under support-extending interventions. Under interventions on $A$ (right and middle), generalization is impossible even under strong assumptions on the function class ${\mathcal{F}}$ (here, ${\mathcal{F}}$ is the class of all linear functions). Any support-extending intervention on $A$ shifts the marginal distribution of $X$ by an amount which depends on the (unknown) function $g$, resulting in a distribution of $(X,Y)$ which cannot be identified from the observational distribution. Without further restrictions on the function class ${\mathcal{G}}$, any candidate model apart from the causal model may result in arbitrarily large worst-case prediction risk. []{data-label="fig:impossibility"}](impossibility_all){width="\linewidth"}
Learning generalizing models from data {#sec:learning}
======================================
So far, our focus has been on the possibility to generalize, that is, we have investigated under which conditions it is possible to identify generalizing models from the observational distribution. In practice, generalizing models need to be estimated from finitely many data. This task is challenging for several reasons. First, analytical solutions to the minimax problem are only known in few cases. Even if generalization is possible, the inferential target thus often remains a complicated object, given as a well-defined but unknown function of the observational distribution. Second, we have seen that the ability to generalize depends strongly on whether the interventions extend the support of $X$, see Propositions \[prop:genX\_extra\] and \[prop:impossibility\_extrapolation\]. In a setting with a finite amount of data, the empirical support of the data lies within some bounded region, and suitable constraints on the function class ${\mathcal{F}}$ are necessary when aiming to achieve empirical generalization outside this region, even if $X$ comes from a distribution with full support. As we show in our simulations in Section \[sec:experiments\], constraining the function class can also improve the prediction performance at the boundary of the support.
In Section \[sec:existingmethods\], we survey existing methods for learning generalizing models. Often, these methods assume either a globally linear model class ${\mathcal{F}}$ or are completely non-parametric and therefore do not generalize outside the empirical support of the data. Motivated by this observation, we introduce in Section \[sec:nile\] a novel estimator, which exploits an instrumental variable setup and a particular extrapolation assumption to learn a globally generalizing model.
Existing methods {#sec:existingmethods}
----------------
As discussed in Section \[sec:related\_work\], a wide range of methods have been proposed to guard against various types of distributional changes. Here, we review methods that fit into the causal framework in the sense that the distributions over which the supremum in the minimax formulation is taken are induced by interventions.
| model class | interventions | ${\mathrm{supp}}_{{\mathcal{I}}}(X)$ | assumptions | algorithm |
|---|---|---|---|---|
| $\mathcal{F}$ linear | on $X$ or $A$, of which at least one is confounding-removing | – | Ass. \[ass:identify\_f\] | linear IV (e.g., two-stage least squares, K-class or PULSE [@Theil1958; @jakobsen2020distributional]) |
| ${\mathcal{F}}, {\mathcal{G}}$ linear | on $A$ | bounded strength | – | anchor regression [@rothenhausler2018anchor], see also [@Theil1958] |
| $\mathcal{F}$ smooth | on $X$ or $A$, of which at least one is confounding-removing | support-reducing | Ass. \[ass:identify\_f\] | nonlinear IV (e.g., NPREGIV [@NPREGIV-CRAN]) |
| $\mathcal{F}$ smooth, linearly extrapolating | on $X$ or $A$, of which at least one is confounding-removing | – | Ass. \[ass:identify\_f\] | **NILE**, Section \[sec:nile\] |
For well-behaved interventions on $X$ which contain at least one confounding-removing intervention, estimating minimax solutions reduces to the well-studied problem of estimating causal relationships. One class of algorithms for this task is given by linear instrumental variable (IV) approaches. They assume that $\mathcal{F}$ is linear and require identifiability of the causal function (Assumption \[ass:identify\_f\]) via a rank condition on the observational distribution, see Appendix \[sec:IVconditions\]. Their target of inference is to estimate the causal function, which by Proposition \[prop:minimax\_equal\_causal\] will coincide with the minimax solution if the set ${\mathcal{I}}$ consists of well-behaved interventions with at least one of them being confounding-removing. A basic estimator for linear IV models is the two-stage least squares (TSLS) estimator, which minimizes the norm of the prediction residuals projected onto the subspace spanned by the observed instruments (TSLS objective). TSLS estimators are consistent but do not come with strong finite sample guarantees; e.g., they do not have finite moments in a just-identified setup [e.g., @mariano2001simultaneous]. K-class estimators [@Theil1958] have been proposed to overcome some of these issues. They minimize a linear combination of the residual sum of squares (OLS objective) and the TSLS objective. K-class estimators can be seen as utilizing a bias-variance trade-off. For fixed and non-trivial relative weights, they have, in a Gaussian setting, finite moments up to a certain order that depends on the sample-size and the number of predictors used. If the weights are such that the OLS objective is ignored asymptotically, they consistently estimate the causal parameter [e.g., @mariano2001simultaneous]. 
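To illustrate the bias–variance trade-off behind K-class estimators, here is a small simulation sketch (the data-generating model and all coefficients are invented for illustration): OLS is biased by the hidden confounding, TSLS recovers the causal coefficient, and intermediate $\kappa$ interpolates between the two.

```python
import numpy as np

# Simulated univariate linear IV model with hidden confounding (all numbers
# invented): instrument A, hidden confounder H, true causal coefficient 1.5.
rng = np.random.default_rng(0)
n = 100_000
H = rng.normal(size=n)
A = rng.normal(size=n)
X = 2.0 * A + H + rng.normal(size=n)
Y = 1.5 * X - 2.0 * H + rng.normal(size=n)

def k_class(Y, X, A, kappa):
    """Univariate K-class estimator: kappa = 0 gives OLS, kappa = 1 gives TSLS.
    Computes theta = [X^T((1-kappa)I + kappa P)Y] / [X^T((1-kappa)I + kappa P)X],
    where P is the orthogonal projection onto the instrument A."""
    XtPX = (X @ A) ** 2 / (A @ A)
    XtPY = (X @ A) * (A @ Y) / (A @ A)
    num = (1 - kappa) * (X @ Y) + kappa * XtPY
    den = (1 - kappa) * (X @ X) + kappa * XtPX
    return num / den

ols = k_class(Y, X, A, 0.0)    # pulled towards the confounded association
tsls = k_class(Y, X, A, 1.0)   # consistent for the causal coefficient 1.5
```

In this model, OLS converges to $\operatorname{cov}(X,Y)/\operatorname{var}(X) = 7/6 \approx 1.17$ rather than $1.5$, while TSLS is consistent; K-class estimators with $\kappa$ strictly between $0$ and $1$ trade this bias against variance.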
More recently, PULSE [@jakobsen2020distributional] has been proposed, a data-driven procedure for choosing the relative weights such that the prediction residuals ‘just’ pass a test for simultaneous uncorrelatedness with the instruments. In cases where the minimax solution does not coincide with the causal function, only a few algorithms exist. Anchor regression [@rothenhausler2018anchor] is a procedure that can be used when $\mathcal{F}$ and $\mathcal{G}$ are linear and $h_1$ is additive in the noise component. It finds the minimax solution if the set ${\mathcal{I}}$ consists of all interventions on $A$ up to a fixed intervention strength, and is applicable even if Assumption \[ass:identify\_f\] is not necessarily satisfied.
In a linear setting, where the regression coefficients differ between different environments, it is also possible to minimize the worst-case risk among the observed environments [@meinshausen2015maximin]. In its current formulation, this approach does not quite fit into the above framework, as it does not allow for changing distributions of the covariates. A summary of the mentioned methods and their assumptions is given in Table \[tab:learnability\].
If $\mathcal{F}$ is a nonlinear or non-parametric class of functions, the task of finding minimax solutions becomes more difficult. In cases where the causal function is among such solutions, this problem has been studied in the econometrics community. For example, [@newey2013nonparametric; @newey2003instrumental] treat the identifiability and estimation of causal functions in non-parametric function classes. Several non-parametric IV procedures exists, e.g., NPREGIV [@NPREGIV-CRAN] contains modified implementations of [@darolles2011nonparametric] and [@horowitz2011applied]. Identifiability and estimation of the causal function using nonlinear IV methods in parametric function classes is discussed in Appendix \[sec:IVconditions\]. Unlike in the linear case, most of the methods do not aim to extrapolate and only recover the causal function inside the support of $X$, that is, they cannot be used to predict interventions outside of this domain. In the following section, we propose a procedure that is able to extrapolate when $\mathcal{F}$ consists of functions which extend linearly outside of the support of $X$. In our simulations, we show that such an assumption can improve the prediction performance on the boundary of the support.
NILE {#sec:nile}
----
We have seen in Proposition \[prop:impossibility\_extrapolation\] that in order to generalize to interventions which extend the support of $X$, we require additional assumptions on the function class ${\mathcal{F}}$. In this section, we start from such assumptions and verify both theoretically and practically that they allow us to perform distribution generalization in the considered setup. Along the way, several choices have to be made, and usually more than one option is possible. We will see that our choices yield a method with competitive performance, but we do not claim optimality of our procedure. Several of our choices were partially made to keep the theoretical exposition simple and the method computationally efficient. We first consider the univariate case (i.e., $X$ and $A$ are real-valued) and comment later on the possibility to extend the methodology to higher dimensions. Unless specific background knowledge is given, it might be reasonable to assume that the causal function extends linearly outside a fixed interval $[a,b]$. By additionally imposing differentiability on ${\mathcal{F}}$, any function from ${\mathcal{F}}$ is uniquely defined by its values within $[a,b]$, see also Section \[sec:suff\_genX\_extrap\]. Given an estimate of $f$ on $[a,b]$, the linear extrapolation property then yields a global estimate on the whole of $\mathbb{R}$. In principle, any class of differentiable functions can be used. Here, we assume that, on the interval $[a,b]$, the causal function $f$ is contained in the linear span of a B-spline basis. More formally, let $B = (B_1, ..., B_k)$ be a fixed B-spline basis on $[a,b]$, and define $\eta := (a,b,B)$. Our procedure assumes that the true causal function $f$ belongs to the function class ${\mathcal{F}}_{\eta} := \{f_\eta(\cdot; \theta) {\, : \,}\theta \in {\mathbb{R}}^k\}$, where for every $x \in {\mathbb{R}}$ and $\theta \in {\mathbb{R}}^k$, $f_\eta(x; \theta)$ is given as $$\label{eq:f_theta}
f_{\eta}(x;\theta) :=
\begin{cases}
B(a)^\top \theta + B^\prime (a)^\top \theta (x - a) & \text{ if } x < a \\
B(x)^\top \theta & \text{ if } x \in [a, b] \\
B(b)^\top \theta + B^\prime (b)^\top \theta (x - b) & \text{ if } x >b,\\
\end{cases}$$ where $B^\prime := (B_1^\prime, \dots, B_k^\prime)$ denotes the component-wise derivative of $B$. In our algorithm, $\eta = (a, b, B)$ is a hyper-parameter, which can be set manually, or be chosen from data.
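A numerical sketch of this function class, with an arbitrary knot placement and coefficient vector (these choices are ours, not prescribed by the method), can be written using SciPy's B-spline routines:

```python
import numpy as np
from scipy.interpolate import BSpline

a, b, degree = -1.0, 1.0, 3
inner_knots = np.linspace(a, b, 6)
knots = np.r_[[a] * degree, inner_knots, [b] * degree]  # clamped knot vector
k = len(knots) - degree - 1                             # number of basis functions B_1, ..., B_k

def f_eta(x, theta):
    """Evaluate the B-spline on [a, b] and extend it linearly outside,
    matching the boundary value and first derivative."""
    spl = BSpline(knots, theta, degree)
    dspl = spl.derivative()
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = spl(np.clip(x, a, b))
    left, right = x < a, x > b
    out[left] = spl(a) + dspl(a) * (x[left] - a)    # linear extension below a
    out[right] = spl(b) + dspl(b) * (x[right] - b)  # linear extension above b
    return out

theta = np.sin(np.arange(k))       # arbitrary coefficient vector
vals = f_eta([-2.0, 0.0, 2.0], theta)
```

Inside $[a,b]$ the function agrees with the spline; outside, it continues with the boundary value and slope, so it is continuously differentiable on all of $\mathbb{R}$.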
Let $({\mathbf{X}}, {\mathbf{Y}}, {\mathbf{A}}) \in {\mathbb{R}}^{n \times 3}$ be the observed data. In its most general form, the Nonlinear Intervention-robust Linear Extrapolator (NILE) solves an optimization problem of the form $$\label{eq:nile_general}
\operatorname*{argmin}_{{f_{\diamond}}\in {\mathcal{F}}} \underbrace{\norm{{\mathbf{Y}} - {f_{\diamond}}({\mathbf{X}})}_2^2}_{\text{prediction}} +
\lambda \underbrace{\norm{{\mathbf{P}}({\mathbf{Y}} - {f_{\diamond}}({\mathbf{X}}))}_2^2}_{\text{invariance}} +
\gamma \underbrace{\int ({f_{\diamond}}^{\prime \prime}(x))^2 dx}_{\text{smoothness + lin. extrap.}},$$ where ${\mathcal{F}}$ is some suitable function class, ${\mathbf{P}}$ is the (square) “hat-matrix” for a nonlinear regression of the residuals ${\mathbf{Y}} - {f_{\diamond}}({\mathbf{X}})$ onto ${\mathbf{A}}$, and $\lambda, \gamma > 0$ are some tuning parameters. This estimator may be seen as a nonlinear version of the PULSE (see Section \[sec:existingmethods\]), with an additional constraint that enforces linear extrapolation. By choosing ${\mathcal{F}}$ as the linear span of a B-spline basis, the smoothness penalty in becomes a quadratic function of the spline coefficients [e.g., @fahrmeir2013regression], and the resulting optimization problem becomes strictly convex.
### Estimation procedure {#sec:estimation}
We now introduce our estimation procedure for fixed choices of all hyper-parameters. Section \[sec:algorithm\] describes how these can be chosen from data in practice. Let $({\mathbf{X}}, {\mathbf{Y}}, {\mathbf{A}}) \in {\mathbb{R}}^{n \times 3}$ be $n$ i.i.d. realizations sampled from a distribution over $(X,Y,A)$, let $\eta = (a,b,B)$ be fixed and assume that ${\mathrm{supp}}(X){\subseteq}[a,b]$. Our algorithm aims to learn the causal function $f_\eta(\cdot ; \theta^0) \in {\mathcal{F}}_\eta$, which is determined by the linear causal parameter $\theta^0$ of a $k$-dimensional vector of covariates $(B_1(X), \dots, B_k(X))$. From standard linear IV theory, it is known that at least $k$ instrumental variables are required to identify the $k$ causal parameters, see Appendix \[sec:IVconditions\]. We therefore artificially generate such instruments by nonlinearly transforming $A$, by using another B-spline basis $C = (C_1, \dots, C_k)$. The parameter $\theta^0$ can then be identified from the observational distribution under appropriate rank conditions, see Section \[sec:consistency\]. In that case, the hypothesis $H_0(\theta) : \theta= \theta^0$ is equivalent to the hypothesis $\tilde H_0(\theta) : {\mathbb{E}}[C(A)(Y - B(X)^\top \theta)] =
0$. Let ${\mathbf{B}} \in {\mathbb{R}}^{n \times k}$ and ${\mathbf{C}} \in {\mathbb{R}}^{n \times k}$ be the associated design matrices, for each $i \in \{1, \dots, n\}$, $j \in \{1, \dots, k\}$ given as ${\mathbf{B}}_{ij} = B_j(X_i)$ and ${\mathbf{C}}_{ij} = C_{j}(A_i)$. A straightforward choice would be to construct the standard TSLS estimator, i.e., $\hat \theta$ as the minimizer of $\theta \mapsto \norm{{\mathbf{P}} ({\mathbf{Y}} - {\mathbf{B}} \theta)}_2^2$, where ${\mathbf{P}}$ is the projection matrix onto the columns of ${\mathbf{C}}$, see also [@hall2005generalized]. Even though this procedure may result in an asymptotically consistent estimator, there are several reasons why it may be suboptimal in a finite sample setting. First, the above estimator can have large finite sample bias, in particular if $k$ is large. Indeed, in the extreme case where $k = n$, and assuming that all columns in ${\mathbf{C}}$ are linearly independent, ${\mathbf{P}}$ is equal to the identity matrix, and $\hat \theta$ coincides with the OLS estimator. Second, since $\theta$ corresponds to the linear parameter of a spline basis, it seems reasonable to impose constraints on $\theta$ which enforce smoothness of the resulting spline function. Both of these points can be addressed by introducing additional penalties into the estimation procedure. Let therefore ${\mathbf{K}} \in {\mathbb{R}}^{k \times k}$ and ${\mathbf{M}} \in {\mathbb{R}}^{k \times k}$ be the matrices that are, for each $i,j \in \{1, \dots, k\}$, defined as ${\mathbf{K}}_{ij} = \int B^{\prime \prime}_i(x) B^{\prime \prime}_j(x) dx$ and ${\mathbf{M}}_{i j} = \int C^{\prime \prime}_{i}(a) C^{\prime \prime}_{j}(a) da$, and let $\gamma, \delta > 0$ be the respective penalties associated with ${\mathbf{K}}$ and ${\mathbf{M}}$. For $\lambda \geq 0$ and with $\mu := (\gamma, \delta, C)$, we then define the estimator $$\label{eq:thetahat}
\hat \theta^n_{\lambda, \eta, \mu}
:= \operatorname*{argmin}_{\theta \in {\mathbb{R}}^{k}}
\norm{{\mathbf{Y}} - {\mathbf{B}} \theta }_2^2 + \lambda \norm{{\mathbf{P}}_\delta({\mathbf{Y}} - {\mathbf{B}} \theta)}_2^2 + \gamma \theta^\top {\mathbf{K}} \theta,$$ where ${\mathbf{P}}_\delta := {\mathbf{C}} ({\mathbf{C}}^\top {\mathbf{C}} + \delta {\mathbf{M}})^{-1} {\mathbf{C}}^\top$ is the ‘hat’-matrix for a penalized regression onto the columns of ${\mathbf{C}}$. By choice of ${\mathbf{K}}$, the term $\theta^\top {\mathbf{K}} \theta$ is equal to the integrated squared curvature of the spline function parametrized by $\theta$. The above may thus be seen as a nonlinear extension of K-class estimators [@Theil1958], with an additional penalty term which enforces linear extrapolation. In principle, the above approach extends to situations where $X$ and $A$ are higher-dimensional, in which case $B$ and $C$ consist of multivariate functions. For example, [@fahrmeir2013regression] propose the use of tensor product splines, and introduce multivariate smoothness penalties based on pairwise first- or second order parameter differences of basis functions which are close-by with respect to some suitably chosen metric. Similarly to , such penalties result in a convex optimization problem. However, due to the large number of involved variables, the optimization procedure becomes computationally burdensome already in small dimensions. Within the function class ${\mathcal{F}}_\eta$, the above defines the global estimate $f_\eta(\cdot; \hat \theta^n_{\lambda, \eta, \mu})$, for every $x \in {\mathbb{R}}$ given by $$\label{eq:fhat_theta}
f_\eta(x; \hat \theta^n_{\lambda, \eta, \mu}) :=
\begin{cases}
B(a)^\top \hat \theta^n_{\lambda, \eta, \mu} + B^\prime (a)^\top \hat \theta^n_{\lambda, \eta, \mu} (x - a) & \text{ if } x < a \\
B(x)^\top \hat \theta^n_{\lambda, \eta, \mu} & \text{ if } x \in [a,b] \\
B(b)^\top \hat \theta^n_{\lambda, \eta, \mu} + B^\prime (b)^\top \hat \theta^n_{\lambda, \eta, \mu} (x - b) & \text{ if } x > b. \\
\end{cases}$$ We deliberately distinguish between three different groups of hyper-parameters $\eta$, $\mu$ and $\lambda$. The parameter $\eta = (a,b,B)$ defines the function class to which the causal function $f$ is assumed to belong. To prove consistency of our estimator, we require this function class to be correctly specified. In turn, the parameters $\lambda$ and $\mu=(\gamma, \delta, C)$ are algorithmic parameters that do not describe the statistical model. Their values only affect the finite sample behavior of our algorithm, whereas consistency is ensured as long as $C$ satisfies certain rank conditions, see Assumption \[ass:RankCondition\] in Section \[sec:consistency\]. In practice, $\gamma$ and $\delta$ are chosen via a cross-validation procedure, see Section \[sec:algorithm\]. The parameter $\lambda$ determines the relative contribution of the OLS and TSLS losses to the objective function. To choose $\lambda$ from data, we use an idea similar to the PULSE [@jakobsen2020distributional].
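Since the objective defining $\hat \theta^n_{\lambda, \eta, \mu}$ is quadratic in $\theta$, its minimizer is available in closed form via the normal equations. The sketch below illustrates only this algebra; the matrices ${\mathbf{B}}$, ${\mathbf{C}}$, ${\mathbf{K}}$ and ${\mathbf{M}}$ are random or identity stand-ins rather than actual spline design and curvature-penalty matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 6
B = rng.normal(size=(n, k))   # stand-in for the spline design matrix of B(X)
C = rng.normal(size=(n, k))   # stand-in for the instrument design matrix of C(A)
Y = rng.normal(size=n)
K = np.eye(k)                 # stand-in curvature penalty (any PSD matrix works)
M = np.eye(k)
lam, gamma, delta = 2.0, 0.1, 0.5

# penalized 'hat'-matrix P_delta = C (C^T C + delta M)^{-1} C^T
P = C @ np.linalg.solve(C.T @ C + delta * M, C.T)
PB, PY = P @ B, P @ Y

# normal equations of the quadratic objective:
# (B^T B + lam (P B)^T (P B) + gamma K) theta = B^T Y + lam (P B)^T (P Y)
theta_hat = np.linalg.solve(B.T @ B + lam * PB.T @ PB + gamma * K,
                            B.T @ Y + lam * PB.T @ PY)
```

Setting $\lambda = 0$ recovers a penalized least-squares fit, while letting $\lambda \to \infty$ approaches the (penalized) TSLS solution, mirroring the K-class interpretation above.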
### Algorithm {#sec:algorithm}
Let for now $\eta, \mu$ be fixed. In the limit $\lambda \to \infty$, our estimation procedure becomes equivalent to minimizing the TSLS loss $\theta \mapsto \norm{{\mathbf{P}}_\delta ({\mathbf{Y}} - {\mathbf{B}}\theta)}_2^2$, which may be interpreted as searching for the parameter $\theta$ which complies ‘best’ with the hypothesis $\tilde{H}_0(\theta) : {\mathbb{E}}[C(A)(Y - B(X)^\top \theta)] = 0$. For finitely many data, following the idea introduced in [@jakobsen2020distributional], we propose to choose the value for $\lambda$ such that $\tilde{H}_0(\hat \theta^n_{\lambda, \eta, \mu})$ is just accepted (e.g., at a significance level $\alpha = 0.05$). That is, among all $\lambda \geq 0$ which result in an estimator that is not rejected as a candidate for the causal parameter, we choose the one which yields maximal contribution of the OLS loss to the objective function. More formally, let for every $\theta \in {\mathbb{R}}^k$, $T(\theta) = (T_n(\theta))_{n \in {\mathbb{N}}}$ be a statistical test at (asymptotic) level $\alpha$ for $\tilde{H}_0(\theta)$ with rejection threshold $q(\alpha)$. That is, $T_n(\theta)$ does not reject $\tilde{H}_0(\theta)$ if and only if $T_n(\theta) \leq q(\alpha)$. The penalty $\lambda^\star_n$ is then chosen in the following data-driven way $$\begin{aligned}
\lambda^\star_n := \inf \{\lambda \geq 0 : T_n(\hat \theta^n_{\lambda, \eta, \mu})\leq q(\alpha)\}.\end{aligned}$$ In general, $\lambda^\star_n$ is not guaranteed to be finite for an arbitrary test statistic $T_n$. Even for a reasonable test statistic it might happen that $T_n(\hat \theta^n_{\lambda, \eta, \mu} ) > q(\alpha)$ for all $\lambda \geq 0$; see [@jakobsen2020distributional] for further details. We can remedy the problem by reverting to another well-defined and consistent estimator, such as the TSLS (which minimizes the TSLS loss above) if $\lambda^\star_n$ is not finite. Furthermore, if $\lambda \mapsto T_n(\hat \theta^n_{\lambda, \eta, \mu})$ is monotonic, $\lambda^\star_n$ can be computed efficiently by a binary search procedure. In our algorithm, the test statistic $T$ and rejection threshold $q$ can be supplied by the user. Conditions on $T$ that are sufficient to yield a consistent estimator $f_\eta(\cdot , \hat \theta_{\lambda_n^\star, \mu, \eta})$, given that $\mathcal{F}_\eta$ is correctly specified, are presented in Section \[sec:consistency\]. Two choices of test statistics which are implemented in our code package can be found in Appendix \[sec:test\_statistic\]. For every $\gamma \geq 0$, let ${\mathbf{Q}}_\gamma = {\mathbf{B}} ({\mathbf{B}}^\top {\mathbf{B}} + \gamma {\mathbf{K}})^{-1} {\mathbf{B}}^\top$ be the ‘hat’-matrix for the penalized regression onto ${\mathbf{B}}$. Our algorithm then proceeds as follows.\
**input**: data $({\mathbf{X}}, {\mathbf{Y}}, {\mathbf{A}}) \in {\mathbb{R}}^{n \times 3}$ **options**: $k$, $T$, $q$, $\alpha$ **output**: $\hat{f}^n_{\text{NILE}} := f_{\hat \eta}(\, \cdot \, ; \hat \theta^n_{\lambda_n^\star, \mu^n_{\text{CV}}, \hat \eta})$
The penalty parameter $\gamma^n_{\text{CV}}$ is chosen to minimize the out-of-sample mean squared error of the prediction model $\hat{{\mathbf{Y}}} = {\mathbf{Q}}_{\gamma} {\mathbf{Y}}$, which corresponds to the solution of for $\lambda = 0$. After choosing $\lambda_n^\star$, the objective function in increases by the term $\lambda_n^\star \norm{{\mathbf{P}}_{\delta_{\text{CV}}^n}({\mathbf{Y}} - {\mathbf{B}} \theta)}_2^2$. In order for the penalty term $\gamma \theta^\top {\mathbf{K}} \theta$ to impose the same degree of smoothness in the altered optimization problem, the penalty parameter $\gamma$ needs to be adjusted accordingly. The heuristic update in our algorithm is motivated by the simple observation that for all $\delta, \lambda \geq 0$, $\norm{{\mathbf{Y}} - {\mathbf{B}} \theta}_2^2 + \lambda \norm{{\mathbf{P}}_\delta ({\mathbf{Y}} - {\mathbf{B}} \theta)}_2^2 \leq (1 + \lambda) \norm{{\mathbf{Y}} - {\mathbf{B}} \theta}_2^2$.
### Asymptotic generalization (consistency) {#sec:consistency}
We now prove consistency of our estimator in the case where the hyper-parameters $(\eta, \mu)$ are fixed (rather than data-driven), and the function class ${\mathcal{F}}_{\eta}$ is correctly specified. Fix any $a<b$ and a basis $B=(B_1, \dots, B_k)$. Let $\eta_0 = (a,b,B)$ and let the model class be given by ${\mathcal{M}}= {\mathcal{F}}_{\eta_0} \times {\mathcal{G}}\times {\mathcal{H}}_1 \times {\mathcal{H}}_2 \times
\mathcal{Q}$, where ${\mathcal{F}}_{\eta_0}$ is as described in Section \[sec:nile\]. Assume that the data-generating model $M = ( f_{\eta_0}(\, \cdot \,; \theta^0),g,h_1,h_2,Q) \in {\mathcal{M}}$ induces an observational distribution ${\mathbb{P}}_M$ such that ${\mathrm{supp}}^M(X) \subseteq (a,b)$. Let further ${\mathcal{I}}$ be a set of interventions on $X$ or $A$, and let $\alpha \in (0,1)$ be a fixed significance level.
We prove asymptotic generalization (consistency) for an idealized version of the NILE estimator which utilizes $\eta_0$, rather than the data-driven values. Choose any $\delta,\gamma \geq 0$ and basis $C=(C_1,...,C_k)$ and let $\mu=(\delta,\gamma,C)$. We will make use of the following assumptions.
1. $\forall \tilde{M}\in \cM$ s.t. $\bP_M = \bP_{\tilde M}$ : $\sup_{i\in\cI}{\mathbb{E}}_{\tilde{M}(i)} [X^2], \, \sup_{i\in\cI} \lambda_{\max}({\mathbb{E}}_{\tilde{M}(i)} [B(X)B(X)^\top ])<\infty$. \[ass:MaximumEigenValueBounded\]
2. ${\mathbb{E}}_M[ B(X)B(X)^\top ]$, ${\mathbb{E}}_M[ C(A)C(A)^\top ]$ and ${\mathbb{E}}_M[ C(A)B(X)^\top ]$ are of full rank.\[ass:RankCondition\]
<!-- -->
1. $T(\theta)$ has uniform asymptotic power on any compact set of alternatives. \[ass:ConsistentTestStatistic\]
2. $\lambda^\star_n := \inf\{\lambda\geq 0 : T_n(\hat \theta^n _{\lambda,\eta_{0},\mu}) \leq q(\alpha)\}$ is almost surely finite. \[ass:LambdaStarAlmostSurelyFinite\]
3. $\lambda \mapsto T_n(\hat \theta^n _{\lambda,\eta_0,\mu})$ is weakly decreasing and $\theta \mapsto T_n(\theta)$ is continuous. \[ass:MonotonicityAndContinuityOfTest\]
Assumptions \[ass:MaximumEigenValueBounded\]–\[ass:RankCondition\] ensure consistency of the estimator as long as $\lambda^\star_n$ tends to infinity. Intuitively, in this case, we can apply arguments similar to those that prove consistency of the TSLS estimator. Assumptions \[ass:ConsistentTestStatistic\]–\[ass:MonotonicityAndContinuityOfTest\] ensure that consistency is achieved when choosing $\lambda^\star_n$ in the data-driven fashion described in Section \[sec:algorithm\]. In Assumption \[ass:MaximumEigenValueBounded\], $\lambda_{\max}$ denotes the largest eigenvalue. In words, the assumption states that, under each model $\tilde M\in \cM$ with $\bP_M = \bP_{\tilde M}$, there exists a finite upper bound on the variance of any linear combination of the basis functions $B(X)$, uniformly over all distributions induced by ${\mathcal{I}}$. The first two rank conditions of \[ass:RankCondition\] enable certain limiting arguments to be valid and guarantee that the estimators are asymptotically well-defined. The last rank condition of \[ass:RankCondition\] is the so-called rank condition for identification. It guarantees that $\theta^0$ is identified from the observational distribution in the sense that the hypothesis $H_0(\theta):\theta=\theta^0$ becomes equivalent to $\tilde{H}_0(\theta) : {\mathbb{E}}_M[C(A)(Y-B(X)^\top \theta)]=0$. \[ass:ConsistentTestStatistic\] means that for any compact set $K\subseteq {\mathbb{R}}^k$ with $\theta^0 \not \in K$, it holds that $\lim_{n\to \infty} P(\inf_{\theta\in K} T_n(\theta) \leq q(\alpha)) =0$. If the considered test has, in addition, a level guarantee, such as pointwise asymptotic level, the interpretation of the finite sample estimator discussed in Section \[sec:algorithm\] remains valid (such a level guarantee may potentially yield improved finite sample performance, too). \[ass:LambdaStarAlmostSurelyFinite\] is made to simplify the consistency proof.
As previously discussed in Section \[sec:algorithm\], if \[ass:LambdaStarAlmostSurelyFinite\] is not satisfied, we can output another well-defined and consistent estimator on the event $(\lambda^\star_n=\infty)$, ensuring that consistency still holds. Under these conditions, we have the following asymptotic generalization guarantee.
\[thm:consis\] Let $\cI$ be a set of interventions on $X$ or $A$ of which at least one is confounding-removing. If assumptions \[ass:MaximumEigenValueBounded\]–\[ass:RankCondition\] and \[ass:ConsistentTestStatistic\]–\[ass:MonotonicityAndContinuityOfTest\] hold true, then, for any $\tilde{M} \in {\mathcal{M}}$ with ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_M$, and any ${\varepsilon}> 0$, it holds that $${\mathbb{P}}_{M} \left( \big\vert \sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-f_{\eta_0}(X;\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu}))^2\big] - \inf_{{f_{\diamond}}\in{\mathcal{F}}_{\eta_0}}\,
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big] \big\vert \leq {\varepsilon}\right) \to 1,$$ as $n\to \infty$. In the above event, only $\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu}$ is stochastic.
### Experiments {#sec:experiments}
We now investigate the empirical performance of our proposed estimator, the NILE, with $k = 50$ spline basis functions. To choose $\lambda_n^\star$, we use the test statistic $T_n^2$, which tests the slightly stronger hypothesis $\bar{H}_0$, see Appendix \[sec:test\_statistic\]. In all experiments, we use the significance level $\alpha = 0.05$. We include two other approaches as baselines: (i) the method NPREGIV (using its default options) introduced in Section \[sec:existingmethods\], and (ii) a linearly extrapolating estimator of the ordinary regression of $Y$ on $X$ (which corresponds to the NILE with $\lambda^\star \equiv 0$). In all experiments, we generate data sets of size $n=200$ as independent replications from $$\label{eq:sim_model}
A:= {\varepsilon}_A, \quad H:= {\varepsilon}_H, \quad X := \alpha_A A + \alpha_H H + \alpha_{\varepsilon}{\varepsilon}_X, \quad Y := f (X) + 0.3 H + 0.2 {\varepsilon}_Y,$$ where $({\varepsilon}_A, {\varepsilon}_H, {\varepsilon}_X, {\varepsilon}_Y)$ are jointly independent with $\mathcal{U}(-1,1)$ marginals. To make results comparable across different parameter settings, we impose the constraint $\alpha_A^2 + \alpha_H^2 +\alpha_{\varepsilon}^2 = 1$, which ensures that in all models, $X$ has variance $1/3$. The function $f$ is drawn from the linear span of a basis of four natural cubic splines with knots placed equidistantly within the $90\%$ inner quantile range of $X$. By well-known properties of natural splines, any such function extends linearly outside the boundary knots. Figure \[fig:overlay\_estimates\_and\_varying\_confounding\] (left) shows an example data set from , where the causal function is indicated in green. We additionally display estimates obtained by each of the considered methods, based on 20 i.i.d. datasets. Due to the confounding variable $H$, the OLS estimator is clearly biased. NPREGIV exploits $A$ as an instrumental variable and obtains good results within the support of the observed data. Due to its non-parametric nature, however, it cannot extrapolate outside this domain. The NILE estimator exploits the linear extrapolation assumption on $f$ to produce global estimates.
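Sampling from the model above can be sketched as follows. This is our own illustration: the causal function is a fixed stand-in rather than a random draw from the natural-spline span, and all function and variable names are ours.

```python
import numpy as np

def simulate(n, alpha_A, alpha_H, alpha_eps, f, rng):
    """Draw n i.i.d. samples from
    A := eps_A, H := eps_H, X := a_A*A + a_H*H + a_e*eps_X,
    Y := f(X) + 0.3*H + 0.2*eps_Y,
    with jointly independent Uniform(-1, 1) noise variables."""
    # the constraint a_A^2 + a_H^2 + a_e^2 = 1 ensures Var(X) = 1/3
    assert abs(alpha_A**2 + alpha_H**2 + alpha_eps**2 - 1.0) < 1e-8
    eps_A, eps_H, eps_X, eps_Y = rng.uniform(-1.0, 1.0, size=(4, n))
    A, H = eps_A, eps_H
    X = alpha_A * A + alpha_H * H + alpha_eps * eps_X
    Y = f(X) + 0.3 * H + 0.2 * eps_Y
    return A, H, X, Y

rng = np.random.default_rng(0)
f = lambda x: np.sin(3.0 * x)  # stand-in for the sampled spline function
A, H, X, Y = simulate(200, np.sqrt(1/3), np.sqrt(2/3), 0.0, f, rng)
```

The normalization of $(\alpha_A, \alpha_H, \alpha_{\varepsilon})$ is what makes results comparable across parameter settings: $X$ always has variance $1/3$.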
![A sample dataset from the model with $\alpha_A = \sqrt{1/3}$, $\alpha_H = \sqrt{2/3}$, $\alpha_{\varepsilon}= 0$. The true causal function is indicated by a green dashed line. For each method, we show 20 estimates of this function, each based on an independent sample from . For values within the support of the training data (vertical dashed lines mark the inner 90% quantile range), NPREGIV correctly estimates the causal function well. As expected, when moving outside the support of $X$, the estimates become unreliable, and we gain an increasing advantage by exploiting the linear extrapolation assumed by the NILE.[]{data-label="fig:overlay_estimates_and_varying_confounding"}](overlay_estimates){width="\linewidth"}
We further investigate the empirical worst-case mean squared error across several different models of the form . That is, for a fixed set of parameters $(\alpha_A, \alpha_H, \alpha_{{\varepsilon}})$, we construct several models $M_1, \dots, M_N$ of the form by randomly sampling causal functions $f_1, \dots, f_N$ (see Appendix \[sec:additional\_experiments\] for further details on the sampling procedure). For every $x \in [0,2]$, let ${\mathcal{I}}_x$ denote the set of hard interventions which set $X$ to some fixed value in $[-x,x]$. We then characterize the performance of each method using the average (across different models) worst-case mean squared error (across the interventions in ${\mathcal{I}}_x$), i.e., for each estimator $\hat{f}$, we consider $$\label{eq:experiments_risk}
\frac{1}{N} \sum_{j=1}^N \sup_{i \in {\mathcal{I}}_x} {\mathbb{E}}_{M_j(i)} \big[ (Y - \hat{f}(X))^2 \big]
= {\mathbb{E}}[\xi_Y^2] + \frac{1}{N} \sum_{j=1}^N \sup_{\tilde{x} \in [-x,x]} (f_j(\tilde{x}) - \hat{f}(\tilde{x}))^2,$$ where $\xi_Y := 0.3 H + 0.2 {\varepsilon}_Y$ is the noise term for $Y$ (which is fixed across all experiments). In practice, we evaluate the functions $\hat{f}$, $f_1, \dots, f_N$ on a fine grid on $[-x,x]$ to approximate the above supremum. Figure \[fig:overlay\_estimates\_and\_varying\_confounding2\] plots the average worst-case mean squared error versus intervention strength for different parameter settings. The optimal worst-case mean squared error ${\mathbb{E}}[\xi_Y^2]$ is indicated by a green dashed line. The results show that the linear extrapolation property of the NILE estimator is beneficial in particular for strong interventions. In the case of no confounding ($\alpha_H = 0$), the minimax solution coincides with the regression of $Y$ on $X$, hence even the OLS estimator yields good predictive performance. In this case, the hypothesis $\bar{H}_0(\hat \theta^n_{\lambda, \delta^n_{\text{CV}}, \gamma^n_{\text{CV}}})$ is accepted already for small values of $\lambda$ (in this experiment, the empirical average of $\lambda^\star_n$ equals 0.015), and the NILE estimator becomes indistinguishable from the OLS. As the confounding strength increases, the OLS becomes increasingly biased, and the NILE objective function differs more notably from the OLS (average $\lambda^\star_n$ of 2.412 and 5.136, respectively). The method NPREGIV slightly outperforms the NILE inside the support of the observed data, but drops in performance for stronger interventions. We believe that the increase in extrapolation performance of the NILE for stronger confounding (increasing $\alpha_H$) might stem from the fact that, as the $\lambda_n^\star$ increases, also the smoothness penalty $\gamma$ increases, see Algorithm \[alg:nile\]. 
While this results in slightly worse in-sample prediction, it seems beneficial for extrapolation (at least for the particular function class that we consider). We do not claim that our algorithm has theoretical guarantees which explain this increase in performance.
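The grid approximation of the worst-case risk above can be sketched as follows (our own minimal code; `f_true` and `f_hat` are placeholders for a sampled causal function $f_j$ and an estimate $\hat f$):

```python
import numpy as np

def worst_case_mse(f_true, f_hat, x, noise_var, grid_size=1001):
    """Approximate the worst-case risk over hard interventions setting X
    to a value in [-x, x]:
        noise_var + sup_{|t| <= x} (f_true(t) - f_hat(t))^2,
    with the supremum evaluated on a fine grid."""
    grid = np.linspace(-x, x, grid_size)
    return noise_var + np.max((f_true(grid) - f_hat(grid)) ** 2)

# xi_Y = 0.3*H + 0.2*eps_Y with independent U(-1,1) terms, so
# E[xi_Y^2] = (0.3^2 + 0.2^2) / 3
noise_var = 0.13 / 3

f_true = lambda t: np.sin(3.0 * t)  # placeholder causal function
f_hat = lambda t: 0.5 * t           # placeholder estimate
risk = worst_case_mse(f_true, f_hat, x=2.0, noise_var=noise_var)
```

The average over models is then obtained by repeating this computation for each sampled $f_j$ and averaging.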
![ Predictive performance under confounding-removing interventions on $X$ for different confounding- and intervention strengths (see alpha values in the grey panel on top). The right panel corresponds to the same parameter setting as in Figure \[fig:overlay\_estimates\_and\_varying\_confounding\]. The plots in each panel are based on data sets of size $n=200$, generated from $N = 100$ different models of the form . For each model, we draw a different function $f$, resulting in a different minimax solution (see Appendix \[sec:additional\_experiments\] for details on the sampling procedure). The performances under individual models are shown by thin lines; the average performance across all models is indicated by thick lines. In all considered models, the optimal prediction error is equal to ${\mathbb{E}}[\xi_Y^2]$ (green dashed line). The grey area indicates the inner 90 % quantile range of $X$ in the training distribution; the white area can be seen as an area of generalization. []{data-label="fig:overlay_estimates_and_varying_confounding2"}](varying_confounding_with_variability){width="\linewidth"}
In the case where all exogenous noise comes from the unobserved variable ${\varepsilon}_X$ (i.e., $\alpha_A = 0$), the NILE coincides with the OLS estimator. In such settings, standard IV methods are known to perform poorly, although the NPREGIV method seems robust to such scenarios. As the instrument strength increases, the NILE clearly outperforms OLS and NPREGIV for interventions on $X$ which include values outside the training data.
Discussion and future work
==========================
In many real-world problems, the test distribution may differ from the training distribution. This requires statistical methods that come with a provable guarantee in such a setting. It is possible to characterize robustness by considering predictive performance for distributions that are close to the training distribution in terms of standard divergences or metrics, such as the KL divergence or the Wasserstein distance. As an alternative viewpoint, we have introduced a novel framework that formalizes the task of distribution generalization when considering distributions that are induced by a set of interventions. Based on the concept of modularity, interventions modify parts of the joint distribution and leave other parts invariant. Thereby, they impose constraints on the changes of the distributions that are qualitatively different from considering balls in a certain metric. As such, we see them as a useful language to describe realistic changes between training and test distributions. Our framework is general in that it allows us to model a wide range of causal models and interventions, which do not need to be known beforehand. We have proved several generalization guarantees, some of which establish robustness for distributions that are far from the training distribution in terms of almost any of the standard metrics. We have further proved impossibility results that indicate the limits of what can be learned from the training distribution. In particular, in nonlinear models, strong assumptions are required for distribution generalization to a different support of the covariates. As such, methods such as anchor regression cannot be expected to work in nonlinear models, unless strong restrictions are placed on the function class ${\mathcal{G}}$.
![Predictive performance for varying instrument strength. If the instruments have no influence on $X$ ($\alpha_A = 0$), the second term in the objective function is effectively constant in $\theta$, and the NILE therefore coincides with the OLS estimator (which uses $\lambda=0$). This guards the NILE against the large variance which most IV estimators suffer from in a weak instrument setting. For increasing influence of $A$, it clearly outperforms both alternative methods for large intervention strengths. []{data-label="fig:weak_instruments"}](varying_instrument_with_variability){width="\linewidth"}
Our work can be extended in several directions. It may, for example, be worthwhile to investigate the sharpness of the bounds we provide in Section \[sec:suffapprox\], as well as other extrapolation assumptions on $\mathcal{F}$. While our results can be applied to situations where causal background knowledge is available, via a transformation of SCMs, our analysis is deliberately agnostic about such information. It would be interesting to see whether stronger theoretical results can be obtained by including causal background information. Finally, it could be worthwhile to investigate whether the NILE, which outperforms existing approaches with respect to extrapolation, can be combined with non-parametric methods. This could yield an even better performance in estimating the causal function within the support of the covariates.
We view our work as a step towards understanding the problem of distribution generalization. We hope that considering the concepts of interventions may help to shed further light into the question under which assumptions it is possible to generalize knowledge that was acquired during training to a different test distribution.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank Thomas Kneib for helpful discussions. RC and JP were supported by a research grant (18968) from VILLUM FONDEN.
Transforming causal models {#sec:causal_relations_X}
==========================
As illustrated in Remark \[rem:model\], our framework is able to model cases where causal relations between the observed variables are given explicitly, e.g., by an SCM. The key insight is that most of these causal relations can be absorbed by the hidden confounding $H$ on which we make few restrictions. To show how this can be done in a general setting, let us consider the following SCM $$\begin{aligned}
\label{eq:SCM_full_causal}
\begin{split}
A\coloneqq {{\varepsilon}}_A\qquad\qquad
&X\coloneqq w(X, Y) + g(A) + h_2(H, {{\varepsilon}}_X)\\
H\coloneqq {{\varepsilon}}_H\qquad\qquad
&Y\coloneqq f(X) + h_1(H, {{\varepsilon}}_Y).
\end{split}\end{aligned}$$ Assume that this SCM is uniquely solvable in the sense that there exists a unique function $F$ such that $(A, H, X, Y)=F({{\varepsilon}}_A,{{\varepsilon}}_H,{{\varepsilon}}_X,{{\varepsilon}}_Y)$ almost surely, see [@Bongers2016b] for more details. Denote by $F_X$ the coordinates of $F$ that correspond to the $X$ variable (i.e., the coordinates from $r+q+1$ to $r+q+d$). Assume further that there exist functions $\tilde{g}$ and $\tilde{h}_2$ such that $$\label{eq:linear_decomposition_inverse}
F_X({{\varepsilon}}_A,{{\varepsilon}}_H,{{\varepsilon}}_X,{{\varepsilon}}_Y)=\tilde{g}({{\varepsilon}}_A) + \tilde{h}_2(({{\varepsilon}}_H, {{\varepsilon}}_Y), {{\varepsilon}}_X).$$ This decomposition is not always possible, but it exists in the following settings, for example: (i) *There are no $A$ variables.* As discussed in Section \[sec:setup\] our framework also works if no $A$ variables exist. In these cases, the additive decomposition becomes trivial. (ii) *There are further constraints on the full SCM.* The additive decomposition holds if, for example, $w$ is a linear function or $A$ only enters the structural assignments of covariates $X$ which have at most $Y$ as a descendant.
Using the decomposition in , we can define the following SCM $$\begin{aligned}
\label{eq:SCM_full_causal_simple}
\begin{split}
A\coloneqq {{\varepsilon}}_A\qquad\qquad
&X\coloneqq \tilde{g}(A) + \tilde{h}_2(\tilde{H}, {{\varepsilon}}_X)\\
\tilde{H}\coloneqq {{\varepsilon}}_{\tilde{H}}\qquad\qquad
&Y\coloneqq f(X) + h_1(\tilde{H}),
\end{split}\end{aligned}$$ where ${{\varepsilon}}_{\tilde{H}}$ has the same distribution as $({{\varepsilon}}_H, {{\varepsilon}}_Y)$ in the previous model. This model fits the framework described in Section \[sec:setup\], where the noise term in $Y$ is now taken to be constantly zero. Both SCMs and induce the same observational distribution and the same function $f$ appears in the assignments of $Y$.
It is further possible to express the set of interventions on the covariates $X$ in the original SCM as a set of interventions on the covariates in the reduced SCM . The description of a class of interventions in the full SCM may, however, become more complex if we consider them in the reduced SCM . In particular, to apply the developed methodology, one needs to check whether the interventions in the reduced SCM form a well-behaved set of interventions (this is not necessarily the case) and how the support of all $X$ variables behaves under that specific intervention. We now discuss the case that the causal graph induced by the full SCM is a directed acyclic graph (DAG).
*Intervention type.* First, we consider which types of interventions in translate to well-behaved interventions in . Importantly, interventions on $A$ in the full SCM reduce to regular interventions on $A$ also in the reduced SCM. Similarly, performing hard interventions on all components of $X$ in the full SCM leads to the same intervention in the reduced SCM, which is in particular both confounding-removing and confounding-preserving. For interventions on subsets of the $X$, this is not always the case. To see this, consider the following example.
$$\begin{aligned}
\label{eq:ex_causal_1}
\begin{split}
A&\coloneqq {{\varepsilon}}_A\\
X_1 &\coloneqq {{\varepsilon}}_1, \quad
X_2 \coloneqq Y + {\varepsilon}_2\\
Y &\coloneqq X_1 + {\varepsilon}_Y
\end{split}
\end{aligned}$$
$$\begin{aligned}
\label{eq:ex_causal_simple_1}
\begin{split}
A &\coloneqq {{\varepsilon}}_A, \quad
H \coloneqq {{\varepsilon}}_Y \\
X &\coloneqq ({{\varepsilon}}_1, H + {{\varepsilon}}_1 + {{\varepsilon}}_2)\\
Y &\coloneqq X_1 + H
\end{split}
\end{aligned}$$
with ${{\varepsilon}}_A, {\varepsilon}_1, {\varepsilon}_2, {{\varepsilon}}_Y\overset{i.i.d.}{\sim}
\mathcal{N}(0,1)$, where represents the full SCM and corresponds to the reduced SCM using our framework. Consider now, in the full SCM, the intervention $X_1 \coloneqq i$, for some $i\in{\mathbb{R}}$. In the reduced SCM, this intervention corresponds to the intervention $X = (X_1, X_2) \coloneqq (i, H + i + {{\varepsilon}}_2)$, which is neither confounding-preserving nor confounding-removing.[^7] On the other hand, any intervention on $X_2$ or $A$ in the full SCM model corresponds to the same intervention in the reduced SCM. We can generalize these observations to the following statements:
- *Interventions on A:* If we intervene on $A$ in the full SCM (i.e., by replacing the structural assignment of $A$ with $\psi^i(I^i, {{\varepsilon}}_A^i)$), then this translates to an equivalent intervention in the reduced SCM .
- *Hard interventions on all X:* If we intervene on all $X$ in the full SCM by replacing the structural assignment of $X$ with an independent random variable $I\in{\mathbb{R}}^d$, then this translates to the same intervention in the reduced SCM which is confounding-removing.
- *No X is a descendant of $Y$ and there is no unobserved confounding $H$:* If we intervene on $X$ in the full SCM (i.e., by replacing the structural assignment of $X$ with $\psi^i(g, A^i, {{\varepsilon}}^i_X ,I^i)$), then this translates to a potentially different but confounding-removing intervention in the reduced SCM . This is because the reduced SCM does not include unobserved variables $H$ in this case.
- *Hard interventions on a variable $X_j$ which has at most $Y$ as a descendant:* If we intervene on $X_j$ in the full SCM by replacing the structural assignment of $X_j$ with an independent random variable $I$, then this intervention translates to a potentially different but confounding-preserving intervention.
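As a sanity check on the example above, one can drive both SCMs with identical noise draws; the induced samples of $(A, X_1, X_2, Y)$ then coincide exactly (a minimal sketch; variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
eps_A, eps_1, eps_2, eps_Y = rng.standard_normal((4, n))

# full SCM: X2 is a descendant of Y
A_full = eps_A
X1_full = eps_1
Y_full = X1_full + eps_Y
X2_full = Y_full + eps_2

# reduced SCM: the noise of Y is absorbed into the hidden variable H
A_red = eps_A
H = eps_Y
X1_red, X2_red = eps_1, H + eps_1 + eps_2
Y_red = X1_red + H
```

Since both systems are the same deterministic map of the noise vector, the observational distributions agree, illustrating the general reduction.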
Other settings may yield well-behaved interventions, too, but may require more assumptions on the full SCM or further restrictions on the intervention classes.

*Intervention support.* A support-reducing intervention in the full SCM can translate to a support-extending intervention in the reduced SCM. Consider the following example.
$$\begin{aligned}
\label{eq:ex_causal_2}
\begin{split}
X_1 &\coloneqq {{\varepsilon}}_1 \\
X_2 &\coloneqq X_1 + \mathbf{1}\{X_1 = 0.5\}\\
Y &\coloneqq X_2 + {\varepsilon}_Y
\end{split}
\end{aligned}$$
$$\begin{aligned}
\label{eq:ex_causal_simple_2}
\begin{split}
X &\coloneqq ({{\varepsilon}}_1, {{\varepsilon}}_1 + \mathbf{1}\{{{\varepsilon}}_1 = 0.5\})\\
Y &\coloneqq X_2 + {{\varepsilon}}_Y,
\end{split}
\end{aligned}$$
with ${{\varepsilon}}_1, {{\varepsilon}}_Y\overset{i.i.d.}{\sim} \mathcal{U}(0,1)$. As before, represents the full SCM, whereas corresponds to the reduced SCM converted to fit our framework. Under the observational distribution, the support of $X_1$ and $X_2$ is equal to the open interval $(0, 1)$. Consider now the support-reducing intervention $X_1:= 0.5$ in . Within our framework, such an intervention would correspond to the intervention $X = (X_1, X_2) := (0.5, 1.5)$, which is support-extending. This example is rather special in that the SCM consists of a function that changes on a null set of the observational distribution. With appropriate assumptions to exclude similar degenerate cases, it is possible to show that support-reducing interventions in correspond to support-reducing interventions within our framework .
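Numerically, the degeneracy in this example is easy to see: under sampling, the event $\{X_1 = 0.5\}$ (almost surely) never occurs, while the intervention $X_1 := 0.5$ produces a value of $X_2$ outside the observational support (our own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
eps_1 = rng.uniform(0.0, 1.0, size=100_000)

# observational distribution: the event {X1 = 0.5} has probability zero,
# so the indicator (almost surely) never fires and X2 = X1 in (0, 1)
X1_obs = eps_1
X2_obs = X1_obs + (X1_obs == 0.5).astype(float)

# hard intervention X1 := 0.5 in the full SCM: the indicator fires
X1_int = 0.5
X2_int = X1_int + 1.0  # = 1.5, outside the observational support (0, 1)
```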
Sufficient conditions for Assumption 1 in IV settings {#sec:IVconditions}
=====================================================
Assumption \[ass:identify\_f\] states that $f$ is identified on the support of $X$ from the observational distribution of $(Y,X,A)$. Whether this assumption is satisfied depends on the structure of $\cF$ but also on the other function classes $\cG,\cH_1,\cH_2$ and $\mathcal{Q}$ that make up the model class $\cM$ from which we assume that the distribution of $(Y,X,A)$ is generated.
Identifiability of the causal function in the presence of instrumental variables is a well-studied problem in econometrics literature. Most prominent is the literature on identification in linear SCMs [e.g., @fisher1966identification; @greene2003econometric]. However, identification has also been studied for various other parametric function classes. We say that $\mathcal{F}$ is a parametric function class if it can be parametrized by some finite dimensional parameter set $\Theta \subseteq {\mathbb{R}}^p$. We here consider classes of the form $$\begin{aligned}
\mathcal{F} := \{ f(\cdot ,\theta):{\mathbb{R}}^d \to {\mathbb{R}}\,\vert\, \theta :\Theta \rightarrow {\mathbb{R}}, \theta \mapsto f(x,\theta) \text{ is } C^2 \text{ for all } x\in {\mathbb{R}}^d \}.\end{aligned}$$ Consistent estimation of the parameter $\theta_0$ using instrumental variables in such function classes has been studied extensively in the econometric literature [e.g., @amemiya1974nonlinear; @jorgenson1974efficient; @kelejian1971two]. These works also contain rigorous results on how instrumental variable estimators of $\theta_0$ are constructed and under which conditions consistency (and thus identifiability) holds. Here, we give an argument on why the presence of the exogenous variables $A$ yields identifiability under certain regularity conditions. Assume that ${\mathbb{E}}[h_1(H, {\varepsilon}_Y)|A]=0$, which implies that the true causal function $f(\cdot,\theta_0)$ satisfies the population orthogonality condition $$\begin{aligned}
\label{Eq:PopOrthCondNonLinearIV}
{\mathbb{E}}[l(A)^\top (Y-f(X,\theta_0))] = {\mathbb{E}}\big[l(A)^\top {\mathbb{E}}[h_1(H, {\varepsilon}_Y)|A]\big]= 0,\end{aligned}$$ for some measurable mapping $l:{\mathbb{R}}^q\to {\mathbb{R}}^g$, for some $ g \in \mathbb{N}_{>0}$. Clearly, $\theta_0$ is identified from the observational distribution if the map $\theta \mapsto {\mathbb{E}}[l(A)^\top (Y-f(X,\theta))]$ is zero if and only if $\theta=\theta_0$. Furthermore, since $\theta\mapsto f(x,\theta)$ is differentiable for all $x\in {\mathbb{R}}^d$, the mean value theorem yields that, for any $\theta\in \Theta$ and $x\in {\mathbb{R}}^d$, there exists an intermediate point $\tilde{\theta}(x,\theta,\theta_0)$ on the line segment between $\theta$ and $\theta_0$ such that $$f(x,\theta) - f(x,\theta_0) = D_\theta f(x,\tilde{\theta}(x,\theta,\theta_0))(\theta-\theta_0),$$ where, for each $x\in {\mathbb{R}}^d$, $D_\theta f(x,\theta)\in{\mathbb{R}}^{1\times p}$ is the derivative of $\theta\mapsto f(x,\theta)$ evaluated in $\theta$. Composing the above expression with the random vector $X$, multiplying with $l(A)$ and taking expectations yields that $${\mathbb{E}}[l(A)(Y-f(X,\theta_0))] - {\mathbb{E}}[l(A)(Y-f(X,\theta))]= {\mathbb{E}}[l(A)D_\theta f(X,\tilde{\theta}(X,\theta,\theta_0))](\theta_0-\theta).$$ Hence, if $ {\mathbb{E}}[l(A)D_\theta f(X,\tilde{\theta}(X,\theta,\theta_0))]\in
{\mathbb{R}}^{g\times p}$ is of rank $p$ for all $\theta\in\Theta$ (which implies $g \geq p$), then $\theta_0$ is identifiable as it is the only parameter that satisfies the population orthogonality condition of . As $\theta_0$ uniquely determines the entire function, we get identifiability of $f\equiv f(\cdot,\theta_0)$, not only on the support of $X$ but on the entire domain ${\mathbb{R}}^d$, i.e., both Assumptions \[ass:identify\_f\] and \[ass:gen\_f\] are satisfied. In the case that $\theta \mapsto f(x,\theta)$ is linear, i.e., $f(x,\theta) = f(x)^T \theta$ for all $x\in {\mathbb{R}}^d$, the above rank condition reduces to ${\mathbb{E}}[l(A)f(X)^T]\in {\mathbb{R}}^{g\times p}$ having rank $p$ (again, implying that $g \geq p$). Furthermore, when $(x,\theta)\mapsto f(x,\theta)$ is bilinear, a reparametrization of the parameter space ensures that $f(x,\theta)= x^T \theta$ for $\theta\in\Theta \subseteq {\mathbb{R}}^d$. In this case, the rank condition can be reduced to the well-known rank condition for identification in a linear SCM, namely that ${\mathbb{E}}[AX^T] \in {\mathbb{R}}^{q\times d}$ is of rank $d$.
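In the just-identified scalar case, the orthogonality condition can be solved in closed form, $\theta_0 = {\mathbb{E}}[AX]^{-1}\,{\mathbb{E}}[AY]$. The following toy sketch (the model and all numbers are ours) illustrates both the rank condition and the resulting estimator under hidden confounding:

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta_0 = 200_000, 2.0
A = rng.normal(size=n)                       # instrument
H = rng.normal(size=n)                       # hidden confounder
X = A + H + rng.normal(size=n)
Y = theta_0 * X + 2.0 * H + rng.normal(size=n)

# rank condition: E[A X^T] (a scalar here) must be non-zero
E_AX = np.mean(A * X)

# solve the orthogonality condition E[A (Y - X theta)] = 0
theta_iv = np.mean(A * Y) / E_AX
theta_ols = np.mean(X * Y) / np.mean(X * X)  # biased by the confounder H
```

Because $A$ is independent of $H$, the moment estimator recovers $\theta_0$, while the least-squares coefficient is shifted by the confounding term.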
Finally, identifiability and methods of consistent estimation of the causal function have also been studied for non-parametric function classes. The conditions for identification are rather technical, however, and we refer the reader to [@newey2013nonparametric; @newey2003instrumental] for further details.
Choice of test statistic {#sec:test_statistic}
========================
By considering the variables $B(X) = (B_1(X), \dots, B_k(X))$ and $C(A) = (C_1(A), \dots, C_k(A))$ as vectors of covariates and instruments, respectively, our setting in Section \[sec:nile\] reduces to the classical (just-identified) linear IV setting. We could therefore use a test statistic similar to the one proposed by the PULSE [@jakobsen2020distributional]. With a notation that is slightly adapted to our setting, this estimator tests $\tilde{H}_0(\theta)$ using the test statistic $$\begin{aligned}
T^1_n(\theta) = c(n) \frac{\norm{{\mathbf{P}} ({\mathbf{Y}} - {\mathbf{B}}\theta)}_2^2}{\norm{{\mathbf{Y}} - {\mathbf{B}}\theta}_2^2},\end{aligned}$$ where ${\mathbf{P}}$ is the projection onto the columns of ${\mathbf{C}}$, and $c(n)$ is some function with $c(n) \sim n$ as $n\to\infty$. Under the null hypothesis, $T^1_n$ converges in distribution to the $\chi^2_{k}$ distribution, and diverges to infinity in probability under the general alternative. Using this test statistic, $\tilde{H}_0(\theta)$ is rejected if and only if $T^1_n(\theta)> q(\alpha)$, where $q(\alpha)$ is the $(1-\alpha)$-quantile of the $\chi^2_{k}$ distribution. The acceptance region of this test statistic is asymptotically equivalent with the confidence region of the Anderson-Rubin test [@anderson1949estimation] for the causal parameter $\theta^0$. Using the above test results in a consistent estimator for $\theta^0$ [@jakobsen2020distributional Theorem 3.12]; the proof exploits the particular form of $T^1_n$ without explicitly imposing that assumptions \[ass:ConsistentTestStatistic\] and \[ass:LambdaStarAlmostSurelyFinite\] hold.
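A minimal implementation of $T^1_n$ with $c(n) = n$ might look as follows (the toy data, dimensions, and names are ours; the $\chi^2$ quantile is hard-coded approximately):

```python
import numpy as np

def T1(theta, Y, B, C):
    """T^1_n(theta) = n * ||P (Y - B theta)||^2 / ||Y - B theta||^2,
    where P is the orthogonal projection onto the columns of C."""
    r = Y - B @ theta
    Pr = C @ np.linalg.lstsq(C, r, rcond=None)[0]  # P r
    return len(Y) * (Pr @ Pr) / (r @ r)

rng = np.random.default_rng(0)
n, k = 2_000, 3
C = rng.normal(size=(n, k))               # evaluated instrument basis C(A)
B = C + 0.5 * rng.normal(size=(n, k))     # covariate basis, correlated with C
theta_0 = np.array([1.0, -1.0, 0.5])
Y = B @ theta_0 + rng.normal(size=n)      # no confounding: H_0(theta_0) holds

q_alpha = 7.815  # approximate 0.95-quantile of the chi^2_3 distribution
reject = T1(theta_0, Y, B, C) > q_alpha
```

Under the null, the statistic is approximately $\chi^2_k$-distributed; at a misspecified $\theta$ it grows linearly in $n$, which is what drives the consistency argument.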
If the number $k$ of basis functions is large, however, numerical experiments suggest that the above test has low power in finite sample settings. As default, our algorithm therefore uses a different test based on a penalized regression approach. This test has been proposed in [@chen2014note] for inference in nonparametric regression models. We now introduce this procedure with a notation that is adapted to our setting. For every $\theta \in {\mathbb{R}}^k$, let $R_\theta = Y - B(X)^\top \theta$ be the residual associated with $\theta$. We then test the slightly stronger hypothesis $$\bar{H}_0(\theta): \text{ there exists } \sigma_\theta^2>0 \text{ such that } {\mathbb{E}}[R_\theta {\, \vert \,}A] {\ensuremath{\stackrel{\text{a.s.}}{=}}}0 \text{ and } \text{Var}[R_\theta {\, \vert \,}A] = \sigma_\theta^2$$ against the alternative that ${\mathbb{E}}[R_\theta {\, \vert \,}A] = m(A)$ for some smooth function $m$. To see that the above hypothesis implies $\tilde{H}_0(\theta)$ (and therefore $H_0(\theta)$, see Section \[sec:estimation\]), let $\theta \in {\mathbb{R}}^k$ be such that $\bar{H}_0(\theta)$ holds true. Then, $${\mathbb{E}}[C(A)(Y - B(X)^\top \theta)] = {\mathbb{E}}[C(A) R_\theta] = {\mathbb{E}}[{\mathbb{E}}[C(A) R_\theta {\, \vert \,}A]] = {\mathbb{E}}[C(A) {\mathbb{E}}[R_\theta {\, \vert \,}A]] = 0,$$ showing that also $\tilde{H}_0(\theta)$ holds true. Thus, if $\tilde{H}_0(\theta)$ is false, then also $\bar H_0(\theta)$ is false. As a test statistic $T^2_n(\theta)$ for $\bar{H}_0(\theta)$, we use (up to a normalization) the squared norm of a penalized regression estimate of $m$, evaluated at the data ${\mathbf{A}}$, i.e., the TSLS loss $\norm{{\mathbf{P}}_\delta ({\mathbf{Y}} - {\mathbf{B}}\theta)}_2^2$. 
In the fixed design case, where ${\mathbf{A}}$ is non-random, it has been shown that, under $\bar{H}_0(\theta)$ and certain additional regularity conditions, it holds that $$\frac{\norm{{\mathbf{P}}_\delta ({\mathbf{Y}} - {\mathbf{B}}\theta)}_2^2 - \sigma_\theta^2 c_n}{\sigma_\theta^2 d_n} \stackrel{\text{d}}{\longrightarrow} \mathcal{N}(0,1),$$ where $c_n$ and $d_n$ are known functions of ${\mathbf{C}}$, ${\mathbf{M}}$ and $\delta$ [@chen2014note Theorem 1]. The authors further state that the above convergence is unaffected by exchanging $\sigma_\theta^2$ with a consistent estimator $\hat{\sigma}_\theta^2$, which motivates our use of the test statistic $$T^2_n(\theta) := \frac{\norm{{\mathbf{P}}_\delta ({\mathbf{Y}} - {\mathbf{B}}\theta)}_2^2 - \hat \sigma_{\theta,n}^2 c_n}{\hat \sigma_{\theta,n}^2 d_n},$$ where $\hat \sigma_{\theta,n}^2 := \frac{1}{n-1} \sum_{i=1}^n \norm{({\mathbf{I}}_n - {\mathbf{P}}_\delta)({\mathbf{Y}} - {\mathbf{B}} \theta)}_2^2$. As a rejection threshold $q(\alpha)$ we use the $1-\alpha$ quantile of a standard normal distribution. For results on the asymptotic power of the test defined by $T^2$, we refer to Section 2.3 in [@chen2014note].
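The building blocks of $T^2_n$ can be sketched as follows. Note that the normalizers $c_n$ and $d_n$ below are our own simplified choices, taken from the Gaussian moments of the quadratic form $\|{\mathbf{P}}_\delta \xi\|^2$; the exact expressions are given in [@chen2014note]:

```python
import numpy as np

def smoother(C, M, delta):
    """Penalized projection P_delta = C (C^T C + delta * M)^{-1} C^T."""
    return C @ np.linalg.solve(C.T @ C + delta * M, C.T)

def T2(theta, Y, B, C, M, delta):
    """Standardized penalized-regression statistic. Here c_n and d_n are
    the mean and standard deviation of ||P_delta xi||^2 for Gaussian xi
    (a simplifying assumption; the exact normalizers may differ)."""
    n = len(Y)
    P = smoother(C, M, delta)
    r = Y - B @ theta
    loss = np.sum((P @ r) ** 2)                           # TSLS-type loss
    sigma2 = np.sum(((np.eye(n) - P) @ r) ** 2) / (n - 1)
    P2 = P.T @ P
    c_n = np.trace(P2)
    d_n = np.sqrt(2.0 * np.trace(P2 @ P2))
    return (loss - sigma2 * c_n) / (sigma2 * d_n)
```

The hypothesis $\bar{H}_0(\theta)$ is then rejected when the statistic exceeds the $1-\alpha$ standard normal quantile. Since ${\mathbf{P}}_\delta$ is a symmetric ridge-type smoother, its eigenvalues lie in $[0,1]$.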
In our software package, both of the above tests are available options.
Their result is formalized in a fixed design setup. With a slight change of notation to suit our setting, they consider several instances of a response variable $R_1, \dots, R_n$ and fixed values of a covariate $a_1, \dots, a_n \in [c,d]$. They assume the model $$R_i = m(a_i) + \xi_i, \qquad {\mathbb{E}}[\xi_i] = 0, \qquad \text{Var}(\xi_i) = \sigma^2, \qquad i = 1, \dots, n,$$ for some unspecified smooth function $m$, and consider the null hypothesis $$\tilde{H}_0: m(a) = 0 \qquad \text{for all} \qquad a \in [c,d].$$ To test the above hypothesis, they use a B-spline estimate of $m$ to derive a test statistic which has an asymptotic standard normal distribution under $\tilde{H}_0$. We let $C_1, \dots, C_k$ denote the B-spline basis, and use ${\mathbf{M}}$ to denote the corresponding penalty matrix, defined analogously as in Section \[sec:estimation\].
$$\tilde{H}_0(\theta):
\begin{cases}
\text{there exists } \xi {\perp \!\!\! \perp}A \text{ with } {\mathbb{E}}[\xi] = 0, \text{Var}(\xi) = \sigma^2 \text{ and } \\
\text{a smooth function } m \text{ such that } R_\theta = m(A) + \xi
\end{cases}$$
For every $\theta \in {\mathbb{R}}^k$, let $R_\theta = Y - B(X)^\top \theta$ be the residual associated with $\theta$, and consider the slightly stronger hypothesis
To see that $\tilde{H}_0$ implies $H_0$, let $\theta \in {\mathbb{R}}^k$ be such that $\tilde{H}_0(\theta)$ holds true. Then, $${\mathbb{E}}[C(A)(Y - B(X)^\top \theta)] = {\mathbb{E}}[C(A) R_\theta] = {\mathbb{E}}[{\mathbb{E}}[C(A) R_\theta {\, \vert \,}A]] = {\mathbb{E}}[C(A) {\mathbb{E}}[R_\theta {\, \vert \,}A]] = 0,$$ showing that also $H_0(\theta)$ holds true.
the class of distributions defined by $${\mathcal{P}}:=
\left \lbrace
\begin{array}{l}
\text{distr. over } (R,A) \in {\mathbb{R}}^2
\text{ s.t. there exists } \xi {\perp \!\!\! \perp}A \text{ with } {\mathbb{E}}[\xi] = 0, \text{Var}(\xi) = \sigma^2 \\
\text{and a smooth function } m \text{ such that } R = m(A) + \xi
\end{array}
\right \rbrace.$$ Within this class of distributions, we can define $\tilde{H}_0(\theta): (R_\theta, A) \in {\mathcal{P}}$ with $m = 0$
, and let $m_\theta$ denote a conditional mean function of $R_\theta$ given $A$. The true causal coefficient $\theta^0$ yields the residual $R_{\theta^0} = \xi_Y$, which is independent of $A$. Since $\xi_Y$ has zero mean, it holds that $m_{\theta^0} = 0$ almost surely. For every $\theta$, let $$\tilde{H}_0(\theta) : m_{\theta} = 0 \text{ almost surely.}$$
We use a test for $H_0$ designed for inference in penalized spline regression models. [@chen2014note] propose a test statistic $T_n$ which, up to a normalization depending on $n$, ${\mathbf{C}}$ and ${\mathbf{M}}$, is equal to the TSLS loss $\norm{{\mathbf{P}}_\delta({\mathbf{Y}} - {\mathbf{B}} \theta)}_2^2$, and show that, under $H_0(\theta)$, this test statistic converges to a standard normal distribution [Theorem 1 @chen2014note].
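As a minimal numerical sketch (not the implementation in our software package), the statistic $T^2_n(\theta)$ can be computed as follows. We assume here the common ridge-type form ${\mathbf{P}}_\delta = {\mathbf{C}}({\mathbf{C}}^\top {\mathbf{C}} + \delta {\mathbf{M}})^{-1}{\mathbf{C}}^\top$ for the penalized projection, and the normalizing constants $c_n$ and $d_n$ are taken as inputs:

```python
import numpy as np

def penalized_projection(C, M, delta):
    # Assumed ridge-type smoother matrix P_delta = C (C'C + delta * M)^{-1} C'.
    return C @ np.linalg.solve(C.T @ C + delta * M, C.T)

def t_stat(Y, B, C, M, theta, delta, c_n, d_n):
    # Normalized statistic T^2_n(theta); c_n and d_n depend on C, M and delta
    # (Chen et al., 2014) and must be supplied by the caller.
    n = len(Y)
    P = penalized_projection(C, M, delta)
    resid = Y - B @ theta
    sigma2_hat = np.sum(((np.eye(n) - P) @ resid) ** 2) / (n - 1)
    return (np.sum((P @ resid) ** 2) - sigma2_hat * c_n) / (sigma2_hat * d_n)
```

The value of $T^2_n(\theta)$ is then compared with the $1-\alpha$ quantile of a standard normal distribution.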
Additional experiments {#sec:additional_experiments}
=======================
Sampling of the causal function {#sec:exp_sampling}
-------------------------------
To ensure linear extrapolation of the causal function, we have chosen a function class consisting of natural cubic splines, which, by construction, extrapolate linearly outside the boundary knots. We now describe in detail how we sample functions from this class for the experiments in Section \[sec:experiments\]. Let $q_{\min}$ and $q_{\max}$ be the respective $5\%$- and $95\%$ quantiles of $X$, and let $B_1, \dots, B_4$ be a basis of natural cubic splines corresponding to 5 knots placed equidistantly between $q_{\min}$ and $q_{\max}$. We then sample coefficients $\beta_i {\overset{\text{iid}}{\sim}}\mathcal{U}(-1,1)$, $i = 1, \dots, 4$, and construct $f$ as $f = \sum_{i=1}^4 \beta_i B_i$. For illustration, we have included 18 realizations in Figure \[fig:f\_samples\].
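For concreteness, the sampling scheme above can be sketched as follows; this is a hypothetical re-implementation using the standard truncated-power construction of a natural cubic spline basis, not the code used for the experiments:

```python
import numpy as np

def natural_cubic_basis(x, knots):
    # Non-constant natural cubic spline basis functions; by construction they
    # are linear beyond the boundary knots (K knots give K - 1 functions).
    x = np.asarray(x, float)
    K = len(knots)
    def d(j):
        return (np.maximum(x - knots[j], 0) ** 3
                - np.maximum(x - knots[-1], 0) ** 3) / (knots[-1] - knots[j])
    cols = [x] + [d(j) - d(K - 2) for j in range(K - 2)]
    return np.column_stack(cols)

def sample_causal_function(q_min, q_max, rng):
    # Draw f = sum_i beta_i B_i with beta_i ~ U(-1, 1) and 5 equidistant
    # knots between q_min and q_max, as described in the text.
    knots = np.linspace(q_min, q_max, 5)
    beta = rng.uniform(-1, 1, size=4)
    return lambda x: natural_cubic_basis(x, knots) @ beta
```

The linear extrapolation outside the boundary knots can be verified by checking that second differences of a sampled function vanish there.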
![ The plots show independent realizations of the causal function that is used in all our experiments. These are sampled from a linear space of natural cubic splines, as described in Appendix \[sec:exp\_sampling\]. To ensure a fair comparison with the alternative method, NPREGIV, the true causal function is chosen from a model class different from the one assumed by the NILE.[]{data-label="fig:f_samples"}](f_samples){width="\linewidth"}
Violations of the linear extrapolation assumption {#sec:exp_violation_lin_extrap}
-------------------------------------------------
We have assumed that the true causal function extrapolates linearly outside the 90% quantile range of $X$. We now investigate the performance of our method for violations of this assumption. To do so, we again sample from the model , with $\alpha_A = \alpha_H = \alpha_{\varepsilon}= 1/\sqrt{3}$. For each data set, the causal function is sampled as follows. Let $q_{\min}$ and $q_{\max}$ be the $5\%$- and $95\%$ quantiles of $X$. We first generate a function $\tilde{f}$ that linearly extrapolates outside $[q_{\min}, q_{\max}]$ as described in Section \[sec:exp\_sampling\]. For a given threshold $\kappa$, we then draw $k_1, k_2 {\overset{\text{iid}}{\sim}}\mathcal{U}(-\kappa, \kappa)$ and construct $f$ for every $x \in {\mathbb{R}}$ by $$f(x) = \tilde{f}(x) + \frac{1}{2} k_1 ((x-q_{\min})_{-})^2 + \frac{1}{2} k_2 ((x-q_{\max})_{+})^2,$$ such that the curvature of $f$ on $(-\infty, q_{\min}]$ and $[q_{\max}, \infty)$ is $k_1$ and $k_2$, respectively. Figure \[fig:violation\_lin\_extrap\] shows results for $\kappa = 0,1,2,3,4$. As the curvature increases, the ability to generalize decreases.
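The curvature modification can be sketched as follows (a hypothetical helper mirroring the displayed formula; `f_tilde` is any function that extrapolates linearly):

```python
import numpy as np

def bend(f_tilde, q_min, q_max, k1, k2):
    # Add curvature k1 below q_min and k2 above q_max, leaving the function
    # unchanged on [q_min, q_max], as in the displayed formula.
    def f(x):
        x = np.asarray(x, float)
        below = np.minimum(x - q_min, 0.0)   # (x - q_min)_-
        above = np.maximum(x - q_max, 0.0)   # (x - q_max)_+
        return f_tilde(x) + 0.5 * k1 * below ** 2 + 0.5 * k2 * above ** 2
    return f
```

Second differences recover the prescribed curvatures $k_1$ and $k_2$ outside the quantile range and zero additional curvature inside it.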
![Worst-case mean squared error for increasingly strong violations of the linear extrapolation assumption. The grey area marks the inner 90 % quantile range of $X$ in the training distribution. As the curvature of $f$ outside the domain of the observed data increases, it becomes difficult to predict the interventional behavior of $Y$ for strong interventions. However, even in situations where the linear extrapolation assumption is strongly violated, it remains beneficial to extrapolate linearly. []{data-label="fig:violation_lin_extrap"}](varying_extrapolation_curvature_with_variability){width="\linewidth"}
Proofs {#app:proofs}
======
Proof of Proposition \[prop:minimax\_equal\_causal\] {#sec:prop:minimax_equal_causal}
----------------------------------------------------
Assume that $\cI$ is a set of interventions on $X$ with at least one confounding-removing intervention. Let $i \in {\mathcal{I}}$ and ${f_{\diamond}}\in {\mathcal{F}}$. Then, we have the following expansion $$\label{eq:decomp_equation}
{\mathbb{E}}_{M(i)}[(Y - {f_{\diamond}}(X))^2]
= {\mathbb{E}}_{M(i)}[(f(X) - {f_{\diamond}}(X))^2]+{\mathbb{E}}_{M(i)}[\xi_Y^2]+2{\mathbb{E}}_{M(i)}[\xi_Y(f(X)-{f_{\diamond}}(X))],$$ where $\xi_Y=h_1(H, {\varepsilon}_Y)$. For any intervention $i\in \cI$ the causal function always yields an identical loss. In particular, it holds that $$\begin{aligned}
\label{eq:SupLossCausalFunction}
\sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{M(i)}[(Y - f(X))^2] =\sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{M(i)}[\xi_Y^2] = {\mathbb{E}}_{M}[\xi_Y^2],
\end{aligned}$$ where we used that the distribution of $\xi_Y$ is not affected by an intervention on $X$. The loss of the causal function can never be better than the minimax loss, that is, $$\begin{aligned}
\label{eq:lowerbdd_prop1}
\inf_{{f_{\diamond}}\in{\mathcal{F}}}\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}[(Y - {f_{\diamond}}(X))^2]
&\leq\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}[(Y - f(X))^2] ={\mathbb{E}}_{M}[\xi_Y^2].
\end{aligned}$$ In other words, the minimax solution (if it exists) is always better than or equal to the causal function. We will now show that when $\cI$ contains at least one confounding-removing intervention, then the minimax loss is dominated by any such intervention.
Fix $i_0\in \cI$ to be a confounding-removing intervention and let $(X,Y,H,A)$ be generated by the SCM $M(i_0)$. Recall that there exists a map $\psi^{i_0}$ such that $X:= \psi^{i_0}(g, h_2, A, H, {{\varepsilon}}_X ,I^{i_0})$ and that $X{\perp \!\!\! \perp}H$ as $i_0$ is a confounding-removing intervention. Furthermore, since the vectors $A$, $H$, ${{\varepsilon}}_X$, ${{\varepsilon}}_Y$ and $I^{i_0}$ are mutually independent, we have that $(X,H){\perp \!\!\! \perp}{{\varepsilon}}_Y$ which together with $X{\perp \!\!\! \perp}H$ implies $X, H$ and ${{\varepsilon}}_Y$ are mutually independent, and hence $X {\perp \!\!\! \perp}h_1(H,{{\varepsilon}}_Y)$. Using this independence we get that ${\mathbb{E}}_{M(i_0)}[\xi_Y(f(X)-{f_{\diamond}}(X))]={\mathbb{E}}_{M}[\xi_Y]{\mathbb{E}}_{M(i_0)}[(f(X)-{f_{\diamond}}(X))]$. Hence, for the intervention $i_0$ together with the modeling assumption ${\mathbb{E}}_{M}[\xi_Y]=0$ implies that for all ${f_{\diamond}}\in {\mathcal{F}}$, $${\mathbb{E}}_{M(i_0)}[(Y - {f_{\diamond}}(X))^2]
= {\mathbb{E}}_{M(i_0)}[(f(X) - {f_{\diamond}}(X))^2]+{\mathbb{E}}_{M}[\xi_Y^2]\geq{\mathbb{E}}_{M}[\xi_Y^2].$$ This proves that the smallest loss at a confounding-removing intervention is achieved by the causal function. Denoting the non-empty subset of confounding-removing interventions by $\cI_{\text{cr}}\subseteq \cI$, this implies $$\begin{aligned}
\label{eq:upperbdd_prop1}
\inf_{{f_{\diamond}}\in{\mathcal{F}}} \sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}[(Y - {f_{\diamond}}(X))^2] & \geq
\inf_{{f_{\diamond}}\in{\mathcal{F}}} \sup_{i\in \cI_{\text{cr}}} {\mathbb{E}}_{M(i)}[(Y - {f_{\diamond}}(X))^2] \\ \notag
&\geq
\inf_{{f_{\diamond}}\in{\mathcal{F}}} {\mathbb{E}}_{M(i_0)}[(Y - {f_{\diamond}}(X))^2] \\ \notag
&= {\mathbb{E}}_{M}[\xi_Y^2].
\end{aligned}$$ Combining \[eq:lowerbdd\_prop1\] and \[eq:upperbdd\_prop1\], it immediately follows that $$\begin{aligned}
\inf_{{f_{\diamond}}\in{\mathcal{F}}} \sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}[(Y - {f_{\diamond}}(X))^2] = \sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{M(i)}[(Y - f(X))^2],
\end{aligned}$$ and hence $$f\in\operatorname*{argmin}_{{f_{\diamond}}\in {\mathcal{F}}} \sup_{i \in {\mathcal{I}}} {\mathbb{E}}_{M(i)}[(Y-{f_{\diamond}}(X))^2],$$ which completes the proof of Proposition \[prop:minimax\_equal\_causal\].
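As a sanity check of Proposition \[prop:minimax\_equal\_causal\], the following simulation (a sketch with an assumed toy linear SCM, not part of the paper's experiments) compares losses with and without a confounding-removing intervention; observationally, a non-causal coefficient can outperform the causal one, but under the intervention the causal coefficient is optimal:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
b_causal = 1.0

def loss(b, confounding_removed):
    # Monte-Carlo estimate of E[(Y - b X)^2] in a toy linear SCM with
    # hidden confounder H; here xi_Y = H + eps_Y has mean zero.
    H = rng.normal(size=n)
    if confounding_removed:
        X = rng.normal(size=n)        # do-style: X no longer depends on H
    else:
        X = H + rng.normal(size=n)    # observational: X confounded by H
    Y = b_causal * X + H + rng.normal(size=n)
    return np.mean((Y - b * X) ** 2)
```

Under the confounding-removing intervention the loss of `b_causal` is close to ${\mathbb{E}}[\xi_Y^2] = 2$, and no other coefficient does better.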
Proof of Proposition \[prop:shift\_interventions\]
--------------------------------------------------
Let ${\mathcal{F}}$ be the class of all linear functions and let ${\mathcal{I}}$ denote the set of interventions on $X$ that satisfy $$\sup_{i\in \cI} \lambda_{\min}\big({\mathbb{E}}_{M(i)}\big[XX^\top\big]\big) =\infty.$$ We claim that the causal function $f(x)=b^\top x$ is the unique minimax solution of . We prove the result by contradiction. Let $\bar{f}\in\mathcal{F}$ (with $\bar{f}(x)=\bar{b}^{\top}x$) be such that $$\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}[(Y-\bar{b}^\top X)^2]\leq\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}[(Y-b^\top X)^2],$$ and assume that $\norm{\bar{b}-b}_2>0$. For a fixed $i\in \cI$, we get the following bound $$\begin{aligned}
{\mathbb{E}}_{M(i)}[(b^\top X-\bar{b}^\top X)^2]
=(b-\bar{b})^\top {\mathbb{E}}_{M(i)}[XX^{\top}](b-\bar{b})
\geq \lambda_{\min}({\mathbb{E}}_{M(i)}[XX^{\top}]) \|b-\bar{b} \|_2^2.
\end{aligned}$$ Since we assumed that the minimal eigenvalue is unbounded, this means that we can choose $i\in{\mathcal{I}}$ such that ${\mathbb{E}}_{M(i)}[(b^\top X-\bar{b}^\top X)^2]$ can be arbitrarily large. However, applying Proposition \[prop:difference\_to\_causal\_function\], this leads to a contradiction since $\sup_{i \in {\mathcal{I}}} {\mathbb{E}}_{M(i)}[(b^\top X-\bar{b}^\top X)^2]\leq 4\operatorname{Var}_{M}(\xi_Y)$ cannot be satisfied. Therefore, it must hold that $\bar{b}=b$, which moreover implies that $f$ is indeed a solution to the minimax problem $\operatorname*{argmin}_{{f_{\diamond}}\in\mathcal{F}}\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}[(Y-{f_{\diamond}}(X))^2]$, as it achieves the lowest possible objective value. This completes the proof of Proposition \[prop:shift\_interventions\].
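The eigenvalue argument can be illustrated numerically (a one-dimensional sketch with assumed Gaussian noise): under increasingly strong shift interventions, ${\mathbb{E}}[X^2]$ grows without bound, and so does the loss of any non-causal coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, b = 1.0, 1.2   # causal coefficient and a slightly wrong alternative
losses = []
for shift in [0.0, 5.0, 10.0, 20.0]:
    X = rng.normal(size=100_000) + shift     # shift intervention on X
    Y = beta * X + rng.normal(size=100_000)  # unconfounded, for simplicity
    losses.append(np.mean((Y - b * X) ** 2))
# losses grow like (beta - b)^2 * (1 + shift^2) + 1
```

The causal coefficient, in contrast, incurs the constant loss ${\mathbb{E}}[\xi_Y^2]$ for every shift.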
Proof of Proposition \[prop:difference\_to\_causal\_function\] {#sec:prop:difference_to_causal_function}
--------------------------------------------------------------
Let ${\mathcal{I}}$ be a set of interventions on $X$ or $A$ and let ${f_{\diamond}}\in{\mathcal{F}}$ with $$\label{eq:better_than_causal_cond}
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}[(Y-{f_{\diamond}}(X))^2]\leq\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}[(Y-f(X))^2].$$ For any $i\in{\mathcal{I}}$, the Cauchy-Schwartz inequality implies that $$\begin{aligned}
&{\mathbb{E}}_{M(i)}[(Y-{f_{\diamond}}(X))^2]
={\mathbb{E}}_{M(i)}[(f(X)+\xi_Y-{f_{\diamond}}(X))^2]\\
&\qquad={\mathbb{E}}_{M(i)}[(f(X)-{f_{\diamond}}(X))^2]+{\mathbb{E}}_{M(i)}[\xi_Y^2]+2{\mathbb{E}}_{M(i)}[\xi_Y(f(X)-{f_{\diamond}}(X))]\\
&\qquad\geq{\mathbb{E}}_{M(i)}[(f(X)-{f_{\diamond}}(X))^2]+{\mathbb{E}}_{M}[\xi_Y^2]-2\left({\mathbb{E}}_{M(i)}[(f(X)-{f_{\diamond}}(X))^2]{\mathbb{E}}_{M}[\xi_Y^2]\right)^{\frac{1}{2}}.
\end{aligned}$$ A similar computation shows that the causal function $f$ satisfies $${\mathbb{E}}_{M(i)}[(Y-f(X))^2]={\mathbb{E}}_{M}[\xi_Y^2].$$ So by condition \[eq:better\_than\_causal\_cond\], this implies for any $i\in{\mathcal{I}}$ that $${\mathbb{E}}_{M(i)}[(f(X)-{f_{\diamond}}(X))^2]+{\mathbb{E}}_{M}[\xi_Y^2]-2\left({\mathbb{E}}_{M(i)}[(f(X)-{f_{\diamond}}(X))^2]{\mathbb{E}}_{M}[\xi_Y^2]\right)^{\frac{1}{2}}\leq{\mathbb{E}}_{M}[\xi_Y^2],$$ which is equivalent to $$\begin{aligned}
&{\mathbb{E}}_{M(i)}[(f(X)-{f_{\diamond}}(X))^2]\leq 2\sqrt{{\mathbb{E}}_{M(i)}[(f(X)-{f_{\diamond}}(X))^2]{\mathbb{E}}_{M}[\xi_Y^2]}\\
&\iff\quad
{\mathbb{E}}_{M(i)}[(f(X)-{f_{\diamond}}(X))^2]\leq 4{\mathbb{E}}_{M}[\xi_Y^2].
\end{aligned}$$ As this inequality holds for all $i\in{\mathcal{I}}$, we can take the supremum over all $i\in{\mathcal{I}}$, which completes the proof of Proposition \[prop:difference\_to\_causal\_function\].
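The final equivalence in the proof, $x \leq 2\sqrt{xv} \iff x \leq 4v$ for $x, v \geq 0$, can also be checked mechanically:

```python
import numpy as np

def equiv_holds(x, v):
    # For x, v >= 0: squaring x <= 2*sqrt(x*v) and dividing by x (when
    # x > 0) gives x <= 4*v; both directions are checked numerically here.
    return (x <= 2 * np.sqrt(x * v)) == (x <= 4 * v)
```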
Proof of Proposition \[prop:misspecification\_minimax\]
-------------------------------------------------------
As argued before, we have that for all $i \in {\mathcal{I}}_1$, $${\mathbb{E}}_{M(i)}\big[(Y-f(X))^2\big]={\mathbb{E}}_{M(i)}\big[\xi_Y^2\big]={\mathbb{E}}_{M}\big[\xi_Y^2\big].$$ Let now $f_1^*\in\mathcal{F}$ be a minimax solution w.r.t. ${\mathcal{I}}_1$. Then, using that the causal function $f$ lies in ${\mathcal{F}}$, it holds that $$\sup_{i\in{\mathcal{I}}_1}{\mathbb{E}}_{M(i)}\big[(Y-f_1^*(X))^2\big] \leq \sup_{i\in{\mathcal{I}}_1}{\mathbb{E}}_{M(i)}\big[(Y-f(X))^2\big] \ = {\mathbb{E}}_{M}\big[\xi_Y^2\big].$$ Moreover, if ${\mathcal{I}}_2\subseteq{\mathcal{I}}_1$, then it must also hold that $$\sup_{i\in{\mathcal{I}}_2}{\mathbb{E}}_{M(i)}\big[(Y-f_1^*(X))^2\big] \leq {\mathbb{E}}_{M}\big[\xi_Y^2\big]=\sup_{i\in{\mathcal{I}}_2}{\mathbb{E}}_{M(i)}\big[(Y-f(X))^2\big].$$ To prove the second part, we give a one-dimensional example. Let $\mathcal{F}$ be linear (i.e., $f(x)=b x$) and let ${\mathcal{I}}_1$ consist of shift interventions on $X$ of the form $$X^i\coloneqq g(A^i) + h_2(H^i, {\varepsilon}_X^i)+ c,$$ with $c\in [0, K]$. Then, the minimax solution $f^*_1$ (where $f^*_1(x)=b^*_1 x$) with respect to ${\mathcal{I}}_1$ is not equal to the causal function $f$ as long as ${\operatorname{Cov}}(X, \xi_Y)$ is strictly positive. This can be seen by explicitly computing the OLS estimator for a fixed shift $c$ and observing that the worst-case loss is attained at $c=K$. Now let $\cI_2$ be a set of interventions of the same form as $\cI_1$ but including shifts with $c>K$ such that $\cI_2 \not \subseteq \cI_1$. Since $\cF$ consists of linear functions, we know that the loss ${\mathbb{E}}_{M(i)}\big[(Y-f_1^*(X))^2\big]$ can become arbitrarily large, since $$\begin{aligned}
&{\mathbb{E}}_{M(i)}\big[(Y-f_1^*(X))^2\big]\\
&\quad=(b-b^*_1)^2{\mathbb{E}}_{M(i)}[X^2]+{\mathbb{E}}_{M}[\xi_Y^2]+2(b-b^*_1){\mathbb{E}}_{M(i)}[\xi_Y X]\\
&\quad=(b-b^*_1)^2(c^2+{\mathbb{E}}_{M}[X^2]+2c{\mathbb{E}}_{M}[X])+{\mathbb{E}}_{M}[\xi_Y^2]+2(b-b^*_1)({\mathbb{E}}_{M}[\xi_Y X]+{\mathbb{E}}_{M}[\xi_Y]c),
\end{aligned}$$ and $(b-b^*_1)^2>0$. In contrast, the loss for the causal function is always ${\mathbb{E}}_{M}[\xi_Y^2]$, so the worst-case loss of $f^*_1$ becomes arbitrarily worse than that of $f$. This completes the proof of Proposition \[prop:misspecification\_minimax\].
Proof of Proposition \[prop:suff\_general\]
-------------------------------------------
Let ${\varepsilon}> 0$. By definition of the infimum, we can find $f^* \in {\mathcal{F}}$ such that $$\left \vert \sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-f^*(X))^2\big] - \inf_{{f_{\diamond}}\in{\mathcal{F}}}\,
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-{f_{\diamond}}(X))^2\big] \right \vert \leq {\varepsilon}.$$ Let now $\tilde{M} \in {\mathcal{M}}$ be such that ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_M$. By assumption, the left-hand side of the above inequality is unaffected by substituting $\tilde{M}$ for $M$, and the result thus follows.
Proof of Proposition \[prop:genX\_intra\]
-----------------------------------------
Let ${\mathcal{I}}$ be a well-behaved set of interventions on $X$. We consider two cases: (A) all interventions in ${\mathcal{I}}$ are confounding-preserving, and (B) there is at least one intervention in ${\mathcal{I}}$ that is confounding-removing.
**Case (A):** In this case, we prove the result in two steps: (i) We show that $(A, \xi_X, \xi_Y)$ is identified from the observational distribution ${\mathbb{P}}_M$. (ii) We show that this implies that the intervention distributions $(X^i, Y^i)$, $i \in {\mathcal{I}}$, are also identified from the observational distribution, and conclude by using Proposition \[prop:suff\_general\]. Some of the details will be slightly technical because we allow for a large class of distributions (e.g., there is no assumption on the existence of densities).
We begin with step (i). In this case, ${\mathcal{I}}$ is a set of confounding-preserving interventions on $X$, and we have that ${\mathrm{supp}}_{{\mathcal{I}}}(X)\subseteq{\mathrm{supp}}(X)$. Fix $\tilde{M} =(\tilde{f},\tilde{g},\tilde{h}_1,\tilde{h}_2,\tilde{Q})
\in {\mathcal{M}}$ such that ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_M$ and let $(\tilde{X}, \tilde{Y},\tilde{H},\tilde{A})$ be generated by the SCM of $\tilde{M}$. We have that $(X,Y,A) {\ensuremath{\stackrel{\text{d}}{=}}}(\tilde{X}, \tilde{Y},\tilde{A})$ and by Assumption \[ass:identify\_f\], we have that $f \equiv \tilde{f}$ on ${\mathrm{supp}}(X)$, hence $f(X) {\ensuremath{\stackrel{\text{a.s.}}{=}}}\tilde{f}(X)$. Further, fix any $B\in \cB({\mathbb{R}}^p)$ (i.e., in the Borel sigma-algebra on ${\mathbb{R}}^p$) and note that $$\begin{aligned}
\bE_M[\mathbbm{1}_{B}(A)X|A]
&=\bE_M[\mathbbm{1}_{B}(A)g(A)+\mathbbm{1}_{B}(A)h_2(H,{{\varepsilon}}_X)|A] \\
&= \bE_M[\mathbbm{1}_{B}(A)g(A)|A] + \mathbbm{1}_{B}(A)\bE[h_2(H,{{\varepsilon}}_X)] = \mathbbm{1}_{B}(A)g(A),
\end{aligned}$$ almost surely. Here, we have used our modeling assumption ${\mathbb{E}}[h_2(H, {\varepsilon}_X)] = 0$. Hence, by similar arguments for $\bE_{\tilde{M}}(\mathbbm{1}_{B}(\tilde{A})\tilde{X}|\tilde{A})$ and the fact that $(X,Y,A) {\ensuremath{\stackrel{\text{d}}{=}}}(\tilde{X}, \tilde{Y},\tilde{A})$ we have that $$\begin{aligned}
\mathbbm{1}_{B}(A)g(A) {\ensuremath{\stackrel{\text{a.s.}}{=}}}\bE_M (\mathbbm{1}_{B}(A)X|A) {\ensuremath{\stackrel{\text{d}}{=}}}\bE_{\tilde{M}}(\mathbbm{1}_{B}(\tilde{A})\tilde{X}|\tilde{A}) {\ensuremath{\stackrel{\text{a.s.}}{=}}}\mathbbm{1}_{B}(\tilde{A})\tilde{g}(\tilde{A}).
\end{aligned}$$ We conclude that $\mathbbm{1}_{B}(A)g(A){\ensuremath{\stackrel{\text{d}}{=}}}\mathbbm{1}_{B}(\tilde{A})\tilde{g}(\tilde{A})$ for any $B\in \cB({\mathbb{R}}^p)$. Let $\bP$ and $\tilde{\bP}$ denote the respective background probability measures on which the random elements $(X,Y,H,A)$ and $(\tilde{X},\tilde{Y},\tilde{H},\tilde{A})$ are defined. Fix any $F\in \sigma(A)$ (i.e., in the sigma-algebra generated by $A$) and note that there exists a $B\in \cB({\mathbb{R}}^p)$ such that $F=\{A\in B\}$. Since $A {\ensuremath{\stackrel{\text{d}}{=}}}\tilde{A}$, we have that, $$\begin{aligned}
\int_F g(A) \, \mathrm{d} \bP = \int \mathbbm{1}_{B}(A) g(A) \, \mathrm{d} \bP = \int \mathbbm{1}_{B}(\tilde{A})\tilde{g}(\tilde{A}) \, \mathrm{d} \tilde{\bP} = \int \mathbbm{1}_{B}(A)\tilde{g}(A) \, \mathrm{d}\bP = \int_F \tilde{g}(A) \, \mathrm{d}\bP.
\end{aligned}$$ Both $g(A)$ and $\tilde{g}(A)$ are $\sigma(A)$-measurable and they agree integral-wise over every set $F\in \sigma(A)$, so we must have that $g(A) {\ensuremath{\stackrel{\text{a.s.}}{=}}}\tilde{g}(A)$. With $\eta(a,b,c)= (a,c-\tilde{f}(b),b-\tilde{g}(a))$ we have that $$\begin{aligned}
(A,\xi_Y,\xi_X) {\ensuremath{\stackrel{\text{a.s.}}{=}}}(A,Y-\tilde{f}(X),X-\tilde{g}(A))
=\eta(A,X,Y)
{\ensuremath{\stackrel{\text{d}}{=}}}\eta(\tilde{A},\tilde{X},\tilde{Y})
= (\tilde{A},\tilde{\xi}_Y,\tilde{\xi}_X),
\end{aligned}$$ so $(A,\xi_Y,\xi_X)
{\ensuremath{\stackrel{\text{d}}{=}}}(\tilde{A},\tilde{\xi}_Y,\tilde{\xi}_X)$. This completes step (i).
Next, we proceed with step (ii). Take an arbitrary intervention $i \in {\mathcal{I}}$ and let ${\varphi}^i, I^i, \tilde{I}^i$ with $I^i {\ensuremath{\stackrel{\text{d}}{=}}}\tilde{I}^i$, $I^i{\perp \!\!\! \perp}({{\varepsilon}}_X^i,{{\varepsilon}}_Y^i,{{\varepsilon}}_H^i,{{\varepsilon}}_A^i) \sim Q$ and $\tilde{I}^i {\perp \!\!\! \perp}(\tilde{{{\varepsilon}}}^i_X,\tilde{{{\varepsilon}}}^i_Y,\tilde{{{\varepsilon}}}^i_H,\tilde{{{\varepsilon}}}^i_A)
\sim \tilde{Q}$ be such that the structural assignments for $X^i$ and $\tilde{X}^i$ in $M(i)$ and $\tilde{M}(i)$, respectively, are given as $$X^i := {\varphi}^i(A^i,g(A^i), h_2(H^i, {\varepsilon}_X^i), I^i) \quad \text{ and } \quad \tilde{X}^i := {\varphi}^i(\tilde{A}^i,\tilde{g}(\tilde{A}^i), \tilde{h}_2(\tilde{H}^i, \tilde{{\varepsilon}}_X^i), \tilde{I}^i).$$ Define $\xi_X^i := h_2(H^i, {\varepsilon}_X^i)$, $\xi_Y^i := h_1(H^i, {\varepsilon}_Y^i)$, $\tilde{\xi}_X^i := \tilde{h}_2(\tilde{H}^i, \tilde{{\varepsilon}}_X^i)$ and $\tilde{\xi}_Y^i := \tilde{h}_1(\tilde{H}^i,
\tilde{{\varepsilon}}_Y^i)$. Then, it holds that $$(A^i, \xi_X^i, \xi_Y^i) {\ensuremath{\stackrel{\text{d}}{=}}}(A,\xi_X, \xi_Y) {\ensuremath{\stackrel{\text{d}}{=}}}(\tilde{A}, \tilde{\xi}_X, \tilde{\xi}_Y) {\ensuremath{\stackrel{\text{d}}{=}}}(\tilde{A}^i, \tilde{\xi}_X^i, \tilde{\xi}_Y^i),$$ where we used step (i), that $(A^i, \xi_X^i, \xi_Y^i)$ and $(A,\xi_X, \xi_Y)$ are generated by identical functions of the noise innovations and that $ ({{\varepsilon}}_X,{{\varepsilon}}_Y,{{\varepsilon}}_H,{{\varepsilon}}_A) $ and $({{\varepsilon}}_X^i,{{\varepsilon}}_Y^i,{{\varepsilon}}_H^i,{{\varepsilon}}_A^i)$ have identical distributions. Adding a random variable with the same distribution, that is mutually independent with all other variables, on both sides does not change the distribution of the bundle, hence $$\begin{aligned}
(A^i, \xi_X^i, \xi_Y^i,I^i) {\ensuremath{\stackrel{\text{d}}{=}}}(\tilde{A}^i, \tilde{\xi}_X^i, \tilde{\xi}_Y^i, \tilde{I}^i).
\end{aligned}$$ Define $\kappa(a,b,c,d) :=
({\varphi}^i(a,\tilde{g}(a),b,d),\tilde{f}({\varphi}^i(a,\tilde{g}(a),b,d))+c)$. As shown in step (i) above, we have that $g(A^i){\ensuremath{\stackrel{\text{a.s.}}{=}}}\tilde{g}(A^i)$. Furthermore, since ${\mathrm{supp}}(X^i) \subseteq{\mathrm{supp}}(X)$ we have that $f(X^i) {\ensuremath{\stackrel{\text{a.s.}}{=}}}\tilde{f}(X^i)$, and hence $$\begin{aligned}
(X^i, Y^i) &{\ensuremath{\stackrel{\text{a.s.}}{=}}}(X^i,\tilde{f}(X^i)+\xi_Y^i) \\
&= ({\varphi}^i(A^i,g(A^i), \xi_X^i , I^i) ,\tilde{f}({\varphi}^i(A^i,g(A^i), \xi_X^i, I^i))+\xi_Y^i) \\
&{\ensuremath{\stackrel{\text{a.s.}}{=}}}({\varphi}^i(A^i,\tilde{g}(A^i), \xi_X^i , I^i) ,\tilde{f}({\varphi}^i(A^i,\tilde{g}(A^i), \xi_X^i, I^i))+\xi_Y^i) \\
&=\kappa(A^i, \xi_X^i, \xi_Y^i,I^i)
{\ensuremath{\stackrel{\text{d}}{=}}}\kappa(\tilde{A}^i, \tilde{\xi}_X^i, \tilde{\xi}_Y^i, \tilde{I}^i)
= (\tilde{X}^i, \tilde{Y}^i).
\end{aligned}$$ Thus, $\bP_{M(i)}^{(X,Y)} = \bP_{\tilde{M}(i)}^{(X,Y)}$, which completes step (ii). Since $i \in {\mathcal{I}}$ was arbitrary, the result now follows from Proposition \[prop:suff\_general\].
**Case (B):** Assume that the set of interventions ${\mathcal{I}}$ contains at least one confounding-removing intervention. Let $\tilde{M} =(\tilde{f},\tilde{g},\tilde{h}_1,\tilde{h}_2,\tilde{Q})
\in {\mathcal{M}}$ be such that ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_M$. Then, by Proposition \[prop:minimax\_equal\_causal\], it follows that the causal function $\tilde{f}$ is a minimax solution w.r.t. $(\tilde{M}, {\mathcal{I}})$. By Assumption \[ass:identify\_f\], we further have that $\tilde{f}$ and $f$ coincide on ${\mathrm{supp}}(X) \supseteq {\mathrm{supp}}_{{\mathcal{I}}}(X)$. Hence, it follows that $$\inf_{{f_{\diamond}}\in {\mathcal{F}}} \sup_{i \in {\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}[(Y - {f_{\diamond}}(X))^2] = \sup_{i \in {\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}[(Y - \tilde{f}(X))^2] = \sup_{i \in {\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}[(Y - f(X))^2],$$ showing that also $f$ is a minimax solution w.r.t. $(\tilde{M}, {\mathcal{I}})$. This completes the proof of Proposition \[prop:genX\_intra\].
Proof of Proposition \[prop:impossibility\_interpolation\]
----------------------------------------------------------
We first show that the causal parameter $\beta$ is not a minimax solution. Since ${\mathcal{I}}$ is bounded, we have $u := \sup {\mathcal{I}}< \infty$; take $b = \beta + 1/(\sigma u)$. By an explicit computation we get that $$\begin{aligned}
\inf_{{b_{\diamond}}\in {\mathbb{R}}}\,
\sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{M(i)}\big[(Y-{b_{\diamond}}X)^2\big] & \leq \sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{M(i)}\big[(Y-b X)^2\big]
= \sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{M(i)}\big[({{\varepsilon}}_Y + \tfrac{1}{\sigma}H -
\tfrac{1}{\sigma u}iH)^2\big] \\
&= \sup_{i\in{\mathcal{I}}} \left[1 + \left(1 - \tfrac{i}{u}\right)^2\right]
< 2
= \sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-\beta X)^2\big],
\end{aligned}$$ where the last inequality holds because $0 < 1 + (1 - i/u)^2 < 2$ for all $i \in {\mathcal{I}}$, and since ${\mathcal{I}}{\subseteq}{\mathbb{R}}_{>0}$ is compact with upper bound $u$. Hence, $$\begin{aligned}
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-\beta X)^2\big]
- \inf_{{b_{\diamond}}\in {\mathbb{R}}}\,
\sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{M(i)}\big[(Y-{b_{\diamond}}X)^2\big] > 0,
\end{aligned}$$ proving that the causal parameter is not a minimax solution for model $M$ w.r.t. $(\cF, \cI)$. Recall that in order to prove that $(\bP_{M},\cM)$ does not generalize with respect to $\cI$ we have to show that there exists an ${{\varepsilon}}>0$ such that for all $b\in {\mathbb{R}}$ it holds that $$\begin{aligned}
\sup_{\tilde M: \bP_{\tilde{M}}= \bP_M}\big| \sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-b X)^2\big]
- \inf_{{b_{\diamond}}\in {\mathbb{R}}}\,
\sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{b_{\diamond}}X)^2\big]\big| \geq {{\varepsilon}}.\end{aligned}$$ Thus, it remains to show that for all $b \not = \beta$ there exists a model $\tilde M \in \cM$ with $\bP_M = \bP_{\tilde M}$ such that the generalization loss is bounded below uniformly by a positive constant. We will show the stronger statement that for any $b \neq \beta$, there exists a model $\tilde{M}$ with ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_M$, such that under $\tilde{M}$, $b$ results in arbitrarily large generalization error. Let $c > 0$ and $i_0 \in {\mathcal{I}}$. Define $$\begin{aligned}
\tilde\sigma \coloneqq \frac{\operatorname{sign}{((\beta - b)i_0)}\sqrt{1 + c} - 1}{
(\beta - b)i_0} > 0,
\end{aligned}$$ and let $\tilde M \coloneqq M(\gamma, \beta, \tilde{\sigma}, Q)$. By construction of the model class ${\mathcal{M}}$, it holds that ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_{M}$. Furthermore, by an explicit computation we get that $$\begin{aligned}
\label{eq:proof_imp_a1}
\begin{split}
\sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}\big[(Y-bX)^2\big]
&\geq {\mathbb{E}}_{\tilde{M}(i_0)}\big[(Y-bX)^2\big]
= {\mathbb{E}}_{\tilde{M}(i_0)}\big[((\beta-b)i_0H+{{\varepsilon}}_Y+
\tfrac{1}{\tilde\sigma}H)^2\big]\\
&={\mathbb{E}}_{\tilde{M}(i_0)}\big[([(\beta-b)i_0\tilde{\sigma}+1] {{\varepsilon}}_H+{{\varepsilon}}_Y )^2\big]
= [(\beta-b)i_0\tilde{\sigma}+1]^2 + 1 \\
&= ((\beta - b)i_0\tilde\sigma)^2 + 2(\beta - b)i_0\tilde\sigma +
2\\
&= (\operatorname{sign}{((\beta-b)i_0)}\sqrt{1+c}-1)^2 + 2\operatorname{sign}{((\beta-b)i_0)}\sqrt{1+c}\\
&= c + 2.
\end{split}
\end{aligned}$$ Finally, by definition of the infimum, it holds that $$\begin{aligned}
\label{eq:proof_imp_b1}
\begin{split}
\inf_{{b_{\diamond}}\in {\mathbb{R}}}\,\sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{b_{\diamond}}X)^2\big] \leq \sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-\beta
X)^2\big] = 2.
\end{split}
\end{aligned}$$ Combining \[eq:proof\_imp\_a1\] and \[eq:proof\_imp\_b1\] yields that the generalization error is bounded below by $c$. That is, $$\begin{aligned}
\big| \sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-b X)^2\big]
- \inf_{{b_{\diamond}}\in {\mathbb{R}}}\,
\sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{b_{\diamond}}X)^2\big]\big| \geq c.
\end{aligned}$$ The above results make no assumptions on $\gamma$, and hold true, in particular, if $\gamma \neq 0$ (in which case Assumption \[ass:identify\_f\] is satisfied, see Appendix \[sec:IVconditions\]). This completes the proof of Proposition \[prop:impossibility\_interpolation\].
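The algebra behind the construction of $\tilde\sigma$ can be verified directly; the following sketch checks that $\tilde\sigma > 0$ and that $[(\beta-b)i_0\tilde\sigma+1]^2 + 1 = c + 2$ for arbitrary choices of $\beta$, $b$, $i_0$ and $c$:

```python
import math

def worst_case_loss(beta, b, i0, c):
    # sigma_t is the proof's choice of noise scale; the returned loss is
    # [(beta - b) * i0 * sigma_t + 1]^2 + 1, which should equal c + 2.
    s = math.copysign(1.0, (beta - b) * i0)
    sigma_t = (s * math.sqrt(1 + c) - 1) / ((beta - b) * i0)
    return sigma_t, ((beta - b) * i0 * sigma_t + 1) ** 2 + 1
```

Both sign cases of $(\beta - b)i_0$ yield a positive $\tilde\sigma$ and a loss of exactly $c + 2$.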
Proof of Proposition \[prop:genX\_extra\]
-----------------------------------------
Let $\tilde{M} \in {\mathcal{M}}$ be such that ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_M$. By Assumptions \[ass:identify\_f\] and \[ass:gen\_f\], it holds that $f \equiv \tilde{f}$. The proof now proceeds analogously to that of Proposition \[prop:genX\_intra\].
Proof of Proposition \[prop:extrapolation\_bounded\_deriv\_cr\] {#sec:prop:extrapolation_bounded_deriv_cr}
---------------------------------------------------------------
By Assumption \[ass:identify\_f\], $f$ is identified on ${\mathrm{supp}}^{M}(X)$ by the observational distribution $\bP_{M}$. Let $\cI$ be a set of interventions containing at least one confounding-removing intervention. For any $\tilde{M}=(\tilde{f},\tilde{g},\tilde{h}_1,\tilde{h}_2,\tilde{Q})\in
\cM$, Proposition \[prop:minimax\_equal\_causal\] yields that the causal function is a minimax solution. That is, $$\begin{aligned}
\notag
\inf_{{f_{\diamond}}\in\mathcal{F}}\sup_{i\in\cI}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big]
&= \sup_{i\in\cI}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-\tilde{f}(X))^2\big] = \sup_{i\in \cI }{\mathbb{E}}_{\tilde{M}(i)}[\xi_Y^2] \\
&={\mathbb{E}}_{\tilde{M}}[\xi_Y^2], \label{eq:propboundedderiv_causalfunctionsolvesminimax}
\end{aligned}$$ where we used that any intervention $i\in \cI$ does not affect the distribution of $\xi_Y=\tilde{h}_1(H,{{\varepsilon}}_Y)$. Now, assume that $\tilde{M}=(\tilde{f},\tilde{g},\tilde{h}_1,\tilde{h}_2,\tilde{Q})\in
\cM$ satisfies $\bP_{\tilde{M}} = \bP_M$. Since $({\mathbb{P}}_M,\cM)$ satisfies Assumption \[ass:identify\_f\], we have that $f \equiv \tilde{f}$ on ${\mathrm{supp}}^M(X)={\mathrm{supp}}^{\tilde{M}}(X)$. Let $f^*$ be any function in ${\mathcal{F}}$ such that $f^*=f$ on ${\mathrm{supp}}^M(X)$. We first show that $\norm{\tilde{f} - f^*}_{\cI,\infty} \leq 2\delta K$, where $\|f\|_{\cI,\infty} := \sup_{x\in{\mathrm{supp}}_{\cI}^M(X)}\|f(x)\|$. By the mean value theorem, for all ${f_{\diamond}}\in{\mathcal{F}}$ it holds that $\abs{{f_{\diamond}}(x) - {f_{\diamond}}(y)} \leq K\norm{x - y}$, for all $x, y\in \mathcal D$. For any $x \in {\mathrm{supp}}^M_\cI(X)$ and $y \in {\mathrm{supp}}^M(X)$ we have $$\begin{aligned}
\abs[\big]{\tilde{f}(x) - f^*(x)}
&= \abs[\big]{\tilde{f}(x) - \tilde{f}(y) + f^*(y) - f^*(x)}\\
&\leq \abs[\big]{\tilde{f}(x) - \tilde{f}(y)} + \abs[\big]{f^*(y) - f^*(x)}\\
&\leq 2 K \norm{x - y},
\end{aligned}$$ where we used the fact that $\tilde{f}(y)= f(y) =f^*(y)$, for all $y\in {\mathrm{supp}}^M(X)$. In particular, it holds that $$\begin{aligned}
\label{eq:unif_norm}
\begin{split}
\norm{\tilde{f} - f^*}_{\cI,\infty}
&=\sup_{x\in {\mathrm{supp}}^M_\cI(X)} \abs[\big]{\tilde{f}(x) - f^*(x)}\\
&\leq 2K\sup_{x\in {\mathrm{supp}}^M_\cI(X)} \inf_{y\in{\mathrm{supp}}^M(X)}\norm{x - y}\\
&= 2\delta K.
\end{split}
\end{aligned}$$ For any $i\in{\mathcal{I}}$ we have that $$\begin{aligned}
{\mathbb{E}}_{\tilde{M}(i)}\big[(Y- f^*(X))^2\big]
&={\mathbb{E}}_{\tilde{M}(i)}\big[(\tilde{f}(X)+\xi_Y-f^*(X))^2\big]\nonumber\\
&={\mathbb{E}}_{\tilde{M}}\big[\xi_Y^2\big] +
{\mathbb{E}}_{\tilde{M}(i)}\big[(\tilde{f}(X)-f^*(X))^2\big]\nonumber\\
&\qquad\qquad + 2{\mathbb{E}}_{\tilde{M}(i)}\big[\xi_Y(\tilde{f}(X)-f^*(X))\big].\label{eq:starting_pt_split}
\end{aligned}$$ Next, we can use Cauchy-Schwarz, and in to get that $$\begin{aligned}
&\left|\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-f^*(X))^2\big]-\inf_{{f_{\diamond}}\in{\mathcal{F}}}\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big]\nonumber \right|\\
&\quad= \sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}\big[(Y-f^*(X))^2\big]-{\mathbb{E}}_{\tilde{M}}[\xi_Y^2]\nonumber \\
&\quad = \sup_{i\in{\mathcal{I}}} \left(
{\mathbb{E}}_{\tilde{M}(i)}\big[(\tilde{f}(X)-f^*(X))^2\big]+2{\mathbb{E}}_{\tilde{M}(i)}\big[\xi_Y(\tilde{f}(X)-f^*(X))\big] \right) \nonumber\\
&\quad\leq 4\delta^2K^2+4\delta K \sqrt{{\operatorname{Var}}_M(\xi_Y)},\label{eq:ineq_prt}
\end{aligned}$$ proving the first statement. Finally, if ${\mathcal{I}}$ consists only of confounding-removing interventions, then the bound in \[eq:ineq\_prt\] can be improved by using that ${\mathbb{E}}[\xi_Y]=0$ together with $H{\perp \!\!\! \perp}X$. In that case, we get that ${\mathbb{E}}_{\tilde{M}(i)}\big[\xi_Y(\tilde{f}(X)-f^*(X))\big]=0$ and hence the bound becomes $4\delta^2 K^2$. This completes the proof of Proposition \[prop:extrapolation\_bounded\_deriv\_cr\].
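For completeness, the vanishing of the cross term can be written out explicitly: under a confounding-removing intervention $i$, the term $\xi_Y=h_1(H,{{\varepsilon}}_Y)$ is independent of $X$ in $\tilde{M}(i)$, so that, using ${\mathbb{E}}[\xi_Y]=0$, $$\begin{aligned}
{\mathbb{E}}_{\tilde{M}(i)}\big[\xi_Y(\tilde{f}(X)-f^*(X))\big]
={\mathbb{E}}_{\tilde{M}(i)}\big[\xi_Y\big]\,{\mathbb{E}}_{\tilde{M}(i)}\big[\tilde{f}(X)-f^*(X)\big]=0,
\end{aligned}$$ leaving only the squared term $4\delta^2K^2$ in the bound.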
Proof of Proposition \[prop:extrapolation\_bounded\_deriv\] {#sec:prop:extrapolation_bounded_deriv}
-----------------------------------------------------------
By Assumption \[ass:identify\_f\], $f$ is identified on ${\mathrm{supp}}^{M}(X)$ by the observational distribution $\bP_{M}$. Let $\cI$ be a set of confounding-preserving interventions. For a fixed ${\varepsilon}>0$, let $f^*\in\mathcal{F}$ be a function satisfying $$\label{eq:fstarDiffEpsilon}
\abs{\sup_{i\in\cI}{\mathbb{E}}_{M(i)}\big[(Y-f^*(X))^2\big]-\inf_{{f_{\diamond}}\in\mathcal{F}}\sup_{i\in\cI}{\mathbb{E}}_{M(i)}\big[(Y-{f_{\diamond}}(X))^2\big]}\leq{\varepsilon}.$$ Fix any secondary model $\tilde{M}=(\tilde{f},\tilde{g},\tilde{h}_1,\tilde{h}_2,\tilde{Q})\in
\cM$ with $\bP_{\tilde{M}} = \bP_M$. The general idea is to derive an upper bound for $\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}[(Y-f^*(X))^2]$ and a lower bound for $\inf_{{f_{\diamond}}\in{\mathcal{F}}}\,
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}[(Y-{f_{\diamond}}(X))^2]$ which will allow us to bound the absolute difference of interest.
Since $({\mathbb{P}}_M,\cM)$ satisfies Assumption \[ass:identify\_f\], we have that $f \equiv \tilde{f}$ on ${\mathrm{supp}}^M(X)={\mathrm{supp}}^{\tilde{M}}(X)$. We first show that $\norm{\tilde{f} - f}_{\cI,\infty} \leq 2\delta K$, where $\|f\|_{\cI,\infty} := \sup_{x\in{\mathrm{supp}}_{\cI}^M(X)}\|f(x)\|$. By the mean value theorem, for all ${f_{\diamond}}\in{\mathcal{F}}$ it holds that $\abs{{f_{\diamond}}(x) - {f_{\diamond}}(y)} \leq K\norm{x - y}$, for all $x, y\in \mathcal D$. For any $x \in {\mathrm{supp}}^M_\cI(X)$ and $y \in {\mathrm{supp}}^M(X)$ we have $$\begin{aligned}
\abs[\big]{\tilde{f}(x) - f(x)}
&= \abs[\big]{\tilde{f}(x) - \tilde{f}(y) + f(y) - f(x)}\\
&\leq \abs[\big]{\tilde{f}(x) - \tilde{f}(y)} + \abs[\big]{f(y) - f(x)}\\
&\leq 2 K \norm{x - y},
\end{aligned}$$ where we used the fact that $\tilde{f}(y)=f(y)$, for all $y\in {\mathrm{supp}}^M(X)$. In particular, it holds that $$\begin{aligned}
\label{eq:unif_norm2}
\begin{split}
\norm{\tilde{f} - f}_{\cI,\infty}
&= \sup_{x\in {\mathrm{supp}}^M_\cI(X)} \abs[\big]{\tilde{f}(x) - f(x)}\\
&\leq 2K\sup_{x\in {\mathrm{supp}}^M_\cI(X)} \inf_{y\in{\mathrm{supp}}^M(X)}\norm{x - y}\\
&= 2\delta K.
\end{split}
\end{aligned}$$ Let now $i\in{\mathcal{I}}$ be fixed. The term $\xi_Y = h_1(H, {\varepsilon}_Y)$ is not affected by the intervention $i$. Furthermore, ${\mathbb{P}}^{(X,\xi_Y)}_{M(i)}={\mathbb{P}}^{(X,\xi_Y)}_{\tilde{M}(i)}$ since $i$ is confounding-preserving (this can be seen by a slight modification to the arguments from case (A) in the proof of Proposition \[prop:genX\_intra\]). Thus, for any ${f_{\diamond}}\in \cF$ we have that $$\begin{aligned}
&{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big] \nonumber\\
&\qquad={\mathbb{E}}_{\tilde{M}(i)}\big[(\tilde{f}(X)+\xi_Y-{f_{\diamond}}(X)+f(X)-f(X))^2\big]\nonumber\\
&\qquad={\mathbb{E}}_{\tilde{M}(i)}\big[\xi_Y^2\big] +
{\mathbb{E}}_{\tilde{M}(i)}\big[(f(X)-{f_{\diamond}}(X))^2\big]+
{\mathbb{E}}_{\tilde{M}(i)}\big[(\tilde{f}(X)-f(X))^2\big]\nonumber\\
&\qquad\qquad\qquad + 2{\mathbb{E}}_{\tilde{M}(i)}\big[\xi_Y(f(X)-{f_{\diamond}}(X))\big]\nonumber\\
&\qquad\qquad\qquad +
2{\mathbb{E}}_{\tilde{M}(i)}\big[(\tilde{f}(X)-f(X))(f(X)-{f_{\diamond}}(X))\big]\nonumber\\
&\qquad\qquad\qquad +
2{\mathbb{E}}_{\tilde{M}(i)}\big[\xi_Y(\tilde{f}(X)-f(X))\big]\nonumber\\
&\qquad={\mathbb{E}}_{M(i)}\big[\xi_Y^2\big] +
{\mathbb{E}}_{M(i)}\big[(f(X)-{f_{\diamond}}(X))^2\big]+
{\mathbb{E}}_{M(i)}\big[(\tilde{f}(X)-f(X))^2\big]\nonumber\\
&\qquad\qquad\qquad +2{\mathbb{E}}_{M(i)}\big[\xi_Y(f(X)-{f_{\diamond}}(X))\big]\nonumber\\
&\qquad\qquad\qquad +
2{\mathbb{E}}_{M(i)}\big[(\tilde{f}(X)-f(X))(f(X)-{f_{\diamond}}(X))\big]\nonumber\\
&\qquad\qquad\qquad + 2{\mathbb{E}}_{M(i)}\big[\xi_Y(\tilde{f}(X)-f(X))\big]\nonumber\\
&\qquad={\mathbb{E}}_{M(i)}\big[(Y-{f_{\diamond}}(X))^2\big] + L_1^i(\tilde{f}) +
L_2^i(\tilde{f},{f_{\diamond}}) + L_3^i(\tilde{f}),\label{eq:decomposition_intoLs}
\end{aligned}$$ where we have made the following definitions $$\begin{aligned}
L_1^i(\tilde{f})&\coloneqq {\mathbb{E}}_{M(i)}\big[(\tilde{f}(X)-f(X))^2\big],\\
L_2^i(\tilde{f},{f_{\diamond}})&\coloneqq
2{\mathbb{E}}_{M(i)}\big[(\tilde{f}(X)-f(X))(f(X)-{f_{\diamond}}(X))\big],\\
L_3^i(\tilde{f})&\coloneqq 2{\mathbb{E}}_{M(i)}\big[\xi_Y(\tilde{f}(X)-f(X))\big].
\end{aligned}$$ Using \[eq:unif\_norm2\] it follows that $$\label{eq:estimate_L1}
0\leq L_1^i(\tilde{f}) \leq 4\delta^2 K^2,$$ and by the Cauchy-Schwarz inequality it follows that $$\begin{aligned}
\abs[\big]{L_3^i(\tilde{f})} &\leq 2 \sqrt{{\operatorname{Var}}_M(\xi_Y)4\delta^2K^2} =4\delta K \sqrt{{\operatorname{Var}}_M(\xi_Y)}. \label{eq:estimate_L3}
\end{aligned}$$ Let now ${f_{\diamond}}\in{\mathcal{F}}$ be any function such that $$\label{eq:ConditionBetterThanTildeCausal}
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde M(i)}\big[(Y-{f_{\diamond}}(X))^2\big]\leq\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde M(i)}\big[(Y-\tilde{f}(X))^2\big],$$ then by \[eq:unif\_norm2\], the Cauchy-Schwarz inequality and Proposition \[prop:difference\_to\_causal\_function\], it holds for all $i\in \cI$ that $$\begin{aligned}
\nonumber
L_2^i(\tilde{f},{f_{\diamond}}) &= 2{\mathbb{E}}_{M(i)}\big[(\tilde{f}(X)-f(X))(f(X)-{f_{\diamond}}(X))\big]\\
&=2{\mathbb{E}}_{\tilde{M}(i)}\big[(\tilde{f}(X)-f(X))(f(X)-{f_{\diamond}}(X))\big]\nonumber\\
&= -2{\mathbb{E}}_{\tilde{M}(i)}\big[(\tilde{f}(X)-f(X))^2\big]+2{\mathbb{E}}_{\tilde{M}(i)}\big[(\tilde{f}(X)-f(X))(\tilde{f}(X)-{f_{\diamond}}(X))\big]\nonumber\\
&\geq -8\delta^2K^2 -
2\sqrt{4\delta^2K^2}\sqrt{4{\operatorname{Var}}_M(\xi_Y)}\nonumber\\
&=-8\delta^2K^2- 8\delta K \sqrt{{\operatorname{Var}}_M(\xi_Y)}, \label{eq:estimate_L2_BetterThanTildeCausal}
\end{aligned}$$ where, in the third equality, we have added and subtracted the term $2{\mathbb{E}}_{\tilde{M}(i)}\big[(\tilde{f}(X)-f(X))\tilde{f}(X)\big]$. Now let $\cS := \{{f_{\diamond}}\in \cF :
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big] \leq
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-\tilde{f}(X))^2\big]
\}$ be the set of all functions satisfying \[eq:ConditionBetterThanTildeCausal\]. Due to \[eq:decomposition\_intoLs\], \[eq:estimate\_L1\], \[eq:estimate\_L3\] and \[eq:estimate\_L2\_BetterThanTildeCausal\], we have the following lower bound of interest $$\begin{aligned}
\notag
&\inf_{{f_{\diamond}}\in{\mathcal{F}}}\, \sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big] \\\notag
=& \inf_{{f_{\diamond}}\in\cS}\, \sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big] \\\notag
=& \inf_{{f_{\diamond}}\in \cS}\,
\sup_{i\in{\mathcal{I}}}\big\{ {\mathbb{E}}_{M(i)}\big[(Y-{f_{\diamond}}(X))^2\big] + L_1^i(\tilde{f}) +
L_2^i(\tilde{f},{f_{\diamond}}) + L_3^i(\tilde{f}) \big\} \\\notag
\geq& \inf_{{f_{\diamond}}\in \cS}\,
\sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{M(i)}\big[(Y-{f_{\diamond}}(X))^2\big] -8\delta^2K^2- 8\delta K \sqrt{{\operatorname{Var}}_M(\xi_Y)} - 4\delta K \sqrt{{\operatorname{Var}}_M(\xi_Y)} \\ \label{eq:lowerboundoninf}
\geq & \inf_{{f_{\diamond}}\in \cF}\,
\sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{M(i)}\big[(Y-{f_{\diamond}}(X))^2\big] - 8 \delta^2 K^2 - 12 \delta K \sqrt{{\operatorname{Var}}_M(\xi_Y)}.
\end{aligned}$$ Next, we construct the aforementioned upper bound of interest. To that end, note that $$\begin{aligned}
\notag
&\sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}\big[(Y-f^*(X))^2\big] \\ \label{eq:QuantityForUpperBound}
&\quad = \sup_{i\in{\mathcal{I}}} \left\{ {\mathbb{E}}_{M(i)}\big[(Y-f^*(X))^2\big] + L_1^i(\tilde{f}) +
L_2^i(\tilde{f},f^*) + L_3^i(\tilde{f}) \right\},
\end{aligned}$$ by \[eq:decomposition\_intoLs\]. We have already established upper bounds for $L_1^i(\tilde f)$ and $L_3^i(\tilde{f})$ in \[eq:estimate\_L1\] and \[eq:estimate\_L3\], respectively. In order to control $L_2^i(\tilde{f},f^*)$ we introduce an auxiliary function. Let $\bar{f}^*\in \cF$ satisfy $$\label{eq:BarfStarBetterThanCausal}
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-\bar{f}^*(X))^2\big]\leq\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-f(X))^2\big],$$ and $$\label{eq:epsilonineqforBarfStar}
\abs[\Big]{\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-\bar{f}^*(X))^2\big] - \inf_{{f_{\diamond}}\in{\mathcal{F}}}\,
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-{f_{\diamond}}(X))^2\big]}\leq {\varepsilon}.$$ Choosing such a $\bar f^*\in \cF$ is always possible. If $f$ is an ${{\varepsilon}}$-minimax solution, i.e., it satisfies \[eq:epsilonineqforBarfStar\], then choose $\bar
f^*=f$. Otherwise, if $f$ is not a ${{\varepsilon}}$-minimax solution, then choose any $\bar f^*\in \cF$ that is an ${{\varepsilon}}$-minimax solution (which is always possible). In this case we have that $$\begin{aligned}
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-\bar{f}^*(X))^2\big] - \inf_{{f_{\diamond}}\in{\mathcal{F}}}\,
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-{f_{\diamond}}(X))^2\big] \leq {{\varepsilon}},
\end{aligned}$$ and $$\begin{aligned}
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-f(X))^2\big] - \inf_{{f_{\diamond}}\in{\mathcal{F}}}\,
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-{f_{\diamond}}(X))^2\big] \geq {{\varepsilon}},
\end{aligned}$$ which implies that \[eq:BarfStarBetterThanCausal\] is satisfied. We can now construct an upper bound on $L_2^i(\tilde{f},f^*)$ in terms of $L_2^i(\tilde{f},\bar f^*)$ by noting that for all $i\in\cI$ $$\begin{aligned}
\notag
\abs[\big]{L_2^i(\tilde{f},f^*)}
&= 2\big|{\mathbb{E}}_{M(i)}\big[(\tilde{f}(X)-f(X))(f(X)-f^*(X))\big] \big| \\
\notag
&\leq 2\big|{\mathbb{E}}_{M(i)}\big[(\tilde{f}(X)-f(X))(f(X)-\bar{f}^*(X))\big]\big| \\ \notag
&\qquad \qquad +2{\mathbb{E}}_{M(i)}\big|(\tilde{f}(X)-f(X))(\bar{f}^*(X)-f^*(X))\big| \\ \notag
&= \abs[\big]{L_2^i(\tilde{f},\bar f^*)} +2{\mathbb{E}}_{M(i)}\big|(\tilde{f}(X)-f(X))(\bar{f}^*(X)-f^*(X))\big| \\ \notag
&\leq \abs[\big]{L_2^i(\tilde{f},\bar f^*)} +2\sqrt{{\mathbb{E}}_{M(i)}\left[(\tilde{f}(X)-f(X))^2\right]{\mathbb{E}}_{M(i)}\left[(\bar{f}^*(X)-f^*(X))^2 \right]} \\ \label{eq:UpperboundOfL2TildefStarf}
&\leq \abs[\big]{L_2^i(\tilde{f},\bar f^*)} +4\delta K\sqrt{{\mathbb{E}}_{M(i)}\left[(\bar{f}^*(X)-f^*(X))^2 \right]},
\end{aligned}$$ where we used the triangle inequality, the Cauchy-Schwarz inequality and \[eq:unif\_norm2\]. Furthermore, \[eq:unif\_norm2\] and \[eq:BarfStarBetterThanCausal\] together with Proposition \[prop:difference\_to\_causal\_function\] yield the following bound $$\begin{aligned}
|L_2^i(\tilde{f},\bar f^*)|
&=2\big|{\mathbb{E}}_{M(i)}\big[(\tilde{f}(X)-f(X))(f(X)-\bar f^*(X))\big] \big|\nonumber\\
&\leq 2 \sqrt{{\mathbb{E}}_{M(i)}\big[(\tilde{f}(X)-f(X))^2\big] {\mathbb{E}}_{M(i)}\big[(f(X)-\bar f^*(X))^2\big]}\nonumber\\
&\leq 2\sqrt{4\delta^2K^2} \sqrt{4{\operatorname{Var}}_M(\xi_Y)}\nonumber \\
&= 8\delta K \sqrt{{\operatorname{Var}}_M(\xi_Y)}, \label{eq:estimate_L2}
\end{aligned}$$ for any $i\in\cI$. Thus, it suffices to construct an upper bound on the second term in the final expression in \[eq:UpperboundOfL2TildefStarf\]. Direct computation leads to $$\begin{aligned}
{\mathbb{E}}_{M(i)}\big[(Y-f^*(X))^2\big] &={\mathbb{E}}_{M(i)}\big[(Y-\bar{f}^*(X))^2\big] \\
&\quad \quad+{\mathbb{E}}_{M(i)}\big[(\bar{f}^*(X)-f^*(X))^2\big] \\
&\quad \quad +2{\mathbb{E}}_{M(i)}\big[(Y-\bar{f}^*(X))(\bar{f}^*(X)-f^*(X))\big].
\end{aligned}$$ Rearranging the terms and applying the triangle inequality and Cauchy-Schwarz results in $$\begin{aligned}
&{\mathbb{E}}_{M(i)}\big[(\bar{f}^*(X)-f^*(X))^2\big] \\
&\quad ={\mathbb{E}}_{M(i)}\big[(Y-f^*(X))^2\big]-{\mathbb{E}}_{M(i)}\big[(Y-\bar{f}^*(X))^2\big]\\
&\quad \quad \quad -2{\mathbb{E}}_{M(i)}\big[(Y-\bar{f}^*(X))(\bar{f}^*(X)-f^*(X))\big]\\
&\quad\leq \big|{\mathbb{E}}_{M(i)}\big[(Y-f^*(X))^2\big]- \inf_{{f_{\diamond}}\in{\mathcal{F}}}\,
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-{f_{\diamond}}(X))^2\big]\big| \\
&\quad \quad \quad+ \big| \inf_{{f_{\diamond}}\in{\mathcal{F}}}\,
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-{f_{\diamond}}(X))^2\big]-{\mathbb{E}}_{M(i)}\big[(Y-\bar{f}^*(X))^2\big] \big| \\
& \quad \quad\quad + 2{\mathbb{E}}_{M(i)}\big|(Y-\bar{f}^*(X))(\bar{f}^*(X)-f^*(X))\big|\\
&\quad\leq 2{\varepsilon}+2\sqrt{{\mathbb{E}}_{M(i)}\big[(Y-\bar{f}^*(X))^2\big]}\sqrt{{\mathbb{E}}_{M(i)}\big[(\bar{f}^*(X)-f^*(X))^2\big]}\\
&\quad\leq 2{\varepsilon}+2\sqrt{{\operatorname{Var}}_{M}(\xi_Y)}\sqrt{{\mathbb{E}}_{M(i)}\big[(\bar{f}^*(X)-f^*(X))^2\big]},
\end{aligned}$$ for any $i\in \cI$. Here, we used that both $f^*$ and $\bar f^*$ are ${{\varepsilon}}$-minimax solutions with respect to $M$ and that $\bar f^*$ satisfies \[eq:BarfStarBetterThanCausal\], which implies that $$\begin{aligned}
{\mathbb{E}}_{M(i)}\big[(Y-\bar{f}^*(X))^2\big]\leq\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-f(X))^2\big] = \sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{M(i)}\big[\xi_Y^2\big] = {\operatorname{Var}}_{M}(\xi_Y),
\end{aligned}$$ for any $i\in \cI$, as $\xi_Y$ is unaffected by an intervention on $X$. Thus, ${\mathbb{E}}_{M(i)}\big[(\bar{f}^*(X)-f^*(X))^2\big]$ must satisfy $\ell({\mathbb{E}}_{M(i)}\big[(\bar{f}^*(X)-f^*(X))^2\big])\leq 0$, where $\ell:[0,\infty)\to {\mathbb{R}}$ is given by $\ell(z)=z-2{{\varepsilon}}-2\sqrt{{\operatorname{Var}}_{M}(\xi_Y)} \sqrt{z}$. The linear term of $\ell$ grows faster than the square root term, so the largest allowed value of ${\mathbb{E}}_{M(i)}\big[(\bar{f}^*(X)-f^*(X))^2\big]$ coincides with the largest root of $\ell(z)$. The largest root is given by $$\begin{aligned}
C^2:=2{{\varepsilon}}+2{\operatorname{Var}}_{M}(\xi_Y) + 2\sqrt{{\operatorname{Var}}_{M}(\xi_Y)^2+2{{\varepsilon}}{\operatorname{Var}}_{M}(\xi_Y)},
\end{aligned}$$ where $C$ denotes the positive square root of the right-hand side. Hence, for any $i\in \cI$ it holds that $$\label{eq:LastFactorUpperBound}
{\mathbb{E}}_{M(i)}\big[(\bar{f}^*(X)-f^*(X))^2\big]\leq C^2.$$ Hence by \[eq:UpperboundOfL2TildefStarf\], \[eq:estimate\_L2\] and \[eq:LastFactorUpperBound\], we have that the following upper bound is valid for any $i\in \cI$: $$\begin{aligned}
\abs[\big]{L_2^i(\tilde{f},f^*)} &\leq 8\delta K \sqrt{{\operatorname{Var}}_M(\xi_Y)} + 4\delta K C \label{eq:estimate_L2_fstar}.
\end{aligned}$$ Thus, using \[eq:QuantityForUpperBound\] with \[eq:estimate\_L1\], \[eq:estimate\_L3\] and \[eq:estimate\_L2\_fstar\], we get the following upper bound $$\begin{aligned}
\notag
&\sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}\big[(Y-f^*(X))^2\big] \\
&\quad \leq \sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-f^*(X))^2\big] + 4\delta^2K^2+4\delta K C+ 12\delta K \sqrt{{\operatorname{Var}}_M(\xi_Y)}.\label{eq:upperboundsup}
\end{aligned}$$ Finally, by combining the bounds \[eq:lowerboundoninf\] and \[eq:upperboundsup\] together with \[eq:fstarDiffEpsilon\] we get that $$\begin{aligned}
&\abs[\Big]{\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-f^*(X))^2\big]
-
\inf_{{f_{\diamond}}\in{\mathcal{F}}}\,
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big]}\nonumber\\ \nonumber
&\quad\leq\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-f^*(X))^2\big]
-
\inf_{{f_{\diamond}}\in{\mathcal{F}}}\,
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{M(i)}\big[(Y-{f_{\diamond}}(X))^2\big]\nonumber\\ \nonumber
&\qquad\qquad+ 4\delta^2K^2+4\delta K C+ 12\delta K \sqrt{{\operatorname{Var}}_M(\xi_Y)}\\ \nonumber
&\qquad \qquad + 8 \delta^2 K^2 + 12 \delta K \sqrt{{\operatorname{Var}}_M(\xi_Y)} \\
& \quad \leq {{\varepsilon}}+ 12 \delta^2 K^2 + 24 \delta K \sqrt{{\operatorname{Var}}_M(\xi_Y)} + 4 \delta K C. \label{eq:CorrectUpperBound}
\end{aligned}$$ Using that all terms are positive, we get that $$C = \sqrt{{\operatorname{Var}}_{M}(\xi_Y)} + \sqrt{{\operatorname{Var}}_{M}(\xi_Y) + 2 {{\varepsilon}}}
\leq 2\sqrt{{\operatorname{Var}}_{M}(\xi_Y)} + \sqrt{2 {{\varepsilon}}}.$$ Hence, \[eq:CorrectUpperBound\] is bounded above by $$\begin{aligned}
{{\varepsilon}}+ 12 \delta^2 K^2 + 32 \delta K \sqrt{{\operatorname{Var}}_M(\xi_Y)} +
4 \sqrt{2} \delta K \sqrt{{{\varepsilon}}}.
\end{aligned}$$ This completes the proof of Proposition \[prop:extrapolation\_bounded\_deriv\].
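The closed form of $C$ used in the final steps of the above proof can be verified by squaring: $$\begin{aligned}
\Big(\sqrt{{\operatorname{Var}}_{M}(\xi_Y)} + \sqrt{{\operatorname{Var}}_{M}(\xi_Y) + 2 {{\varepsilon}}}\Big)^2
= 2{\operatorname{Var}}_{M}(\xi_Y) + 2{{\varepsilon}}+ 2\sqrt{{\operatorname{Var}}_{M}(\xi_Y)^2+2{{\varepsilon}}{\operatorname{Var}}_{M}(\xi_Y)} = C^2,
\end{aligned}$$ and the subsequent inequality is the subadditivity of the square root, $\sqrt{{\operatorname{Var}}_{M}(\xi_Y)+2{{\varepsilon}}}\leq \sqrt{{\operatorname{Var}}_{M}(\xi_Y)}+\sqrt{2{{\varepsilon}}}$.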
Proof of Proposition \[prop:impossibility\_extrapolation\]
----------------------------------------------------------
Let $\bar{f}\in {\mathcal{F}}$ and $c > 0$. By assumption, ${\mathcal{I}}$ is a well-behaved set of support-extending interventions on $X$. Since ${\mathrm{supp}}_{{\mathcal{I}}}^{M}(X) \setminus {\mathrm{supp}}^{M}(X)$ has non-empty interior, there exists an intervention $i_0 \in {\mathcal{I}}$ and ${{\varepsilon}}>0$ such that ${\mathbb{P}}_{M(i_0)}(X\in B) \geq {\varepsilon}$, for some open subset $B \subsetneq \bar{B}$, such that $\mathrm{dist}(B,{\mathbb{R}}^d \setminus \bar{B})>0$, where $\bar{B}:={\mathrm{supp}}_{{\mathcal{I}}}^{M}(X) \setminus {\mathrm{supp}}^{M}(X)$. Let $\tilde{f}$ be any continuous function satisfying that, for all $x \in B\cup({\mathbb{R}}^d\setminus\bar B)$, $$\begin{aligned}
\tilde{f}(x) =
\begin{cases}
\bar{f}(x) + \gamma, \quad & x \in B\\
f(x), \quad & x \in {\mathbb{R}}^d \setminus \bar{B},
\end{cases}
\end{aligned}$$ where $\gamma := {\varepsilon}^{-1/2} \left\{(2 {\mathbb{E}}_{\tilde{M}}[\xi_{Y}^2] +
c)^{1/2} + ({\mathbb{E}}_{\tilde{M}}[\xi_{Y}^2])^{1/2}\right\}$. Consider a secondary model $\tilde{M} = (\tilde{f}, g, h_1, h_2, Q)\in\cM$. Then, by Assumption \[ass:identify\_f\], it holds that ${\mathbb{P}}_{M} = {\mathbb{P}}_{\tilde{M}}$. Since ${\mathcal{I}}$ only consists of interventions on $X$, it holds that ${\mathbb{P}}_{M(i_0)}(X\in B) = {\mathbb{P}}_{\tilde{M}(i_0)}(X\in B)$ (this holds since all components of $\tilde{M}$ and $M$ are equal, except for the function $f$, which is not allowed to enter in the intervention on $X$). Therefore, $$\begin{aligned}
\label{eq:proof_imp_a}
{\mathbb{E}}_{\tilde{M}(i_0)}\big[(Y-\bar{f}(X))^2\big]
&\geq {\mathbb{E}}_{\tilde{M}(i_0)}\big[(Y-\bar{f}(X))^2 \mathbbm{1}_{B}(X)\big] \nonumber\\
&= {\mathbb{E}}_{\tilde{M}(i_0)}\big[(\gamma + \xi_Y)^2 \mathbbm{1}_{B}(X)\big] \nonumber\\
&\geq \gamma^2{\varepsilon}+ 2 \gamma {\mathbb{E}}_{\tilde{M}(i_0)}\big[\xi_Y
\mathbbm{1}_{B}(X) \big]\nonumber\\
&\geq \gamma^2{\varepsilon}- 2 \gamma \left({\mathbb{E}}_{\tilde{M}}\big[\xi_Y^2\big]
{\varepsilon}\right)^{1/2} \nonumber\\
&= c + {\mathbb{E}}_{\tilde{M}}[\xi_{Y}^2],
\end{aligned}$$ where the third inequality follows from Cauchy–Schwarz. Further, by the definition of the infimum it holds that $$\begin{aligned}
\label{eq:proof_imp_b}
\begin{split}
\inf_{{f_{\diamond}}\in {\mathcal{F}}}\,\sup_{i\in{\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big]
\leq \sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-\tilde{f}(X))^2\big]
= {\mathbb{E}}_{\tilde{M}}[\xi_{Y}^2].
\end{split}
\end{aligned}$$ Therefore, combining \[eq:proof\_imp\_a\] and \[eq:proof\_imp\_b\], the claim follows.
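The specific form of $\gamma$ in the above proof is chosen so that the lower bound collapses to $c+{\mathbb{E}}_{\tilde{M}}[\xi_Y^2]$. Writing $E:={\mathbb{E}}_{\tilde{M}}[\xi_{Y}^2]$, so that $\gamma\sqrt{{\varepsilon}}=(2E+c)^{1/2}+E^{1/2}$, completing the square gives $$\begin{aligned}
\gamma^2{\varepsilon}- 2\gamma\left(E{\varepsilon}\right)^{1/2}
=\big(\gamma\sqrt{{\varepsilon}}-\sqrt{E}\big)^2-E
=(2E+c)-E
=c+E,
\end{aligned}$$ which is the final equality in \[eq:proof\_imp\_a\].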
Proof of Proposition \[prop:genA\]
----------------------------------
We prove the result by showing that under Assumption \[ass:identify\_g\] it is possible to express interventions on $A$ as confounding-preserving interventions on $X$ and applying Propositions \[prop:genX\_intra\] and \[prop:genX\_extra\]. To avoid confusion, we will throughout this proof denote the true model by $M^0 = (f^0, g^0, h_1^0, h_2^0, Q^0)$. Fix an intervention $i\in{\mathcal{I}}$. Since it is an intervention on $A$, there exist $\psi^i$ and $I^i$ such that for any $M = (f,g,h_1, h_2, Q) \in {\mathcal{M}}$, the intervened SCM $M(i)$ is of the form $$A^i := \psi^i(I^i, {{\varepsilon}}_A^i), \quad H^i := {{\varepsilon}}_H^i, \quad X^i := g(A^i) + h_2(H^i, {{\varepsilon}}_X^i), \quad Y^i:= f(X^i) + h_1(H^i,{{\varepsilon}}_Y^i),$$ where $({{\varepsilon}}^i_X, {{\varepsilon}}^i_Y, {{\varepsilon}}^i_A, {{\varepsilon}}^i_H) \sim Q$. We now define a confounding-preserving intervention $j$ on $X$, such that, for all models $\tilde{M}$ with ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_M$, the distribution of $(X,Y)$ under $\tilde{M}(j)$ coincides with that under $\tilde{M}(i)$. To that end, define the intervention function $$\bar{\psi}^j(h_2, A^j, H^j, {{\varepsilon}}^j_X ,I^j) \coloneqq g^0(\psi^i(I^j, A^j)) +
h_2(H^j, {{\varepsilon}}_X^j),$$ where $g^0$ is the fixed function corresponding to the true model $M^0$, and therefore not an argument of $\bar{\psi}^j$. Let now $j$ be the intervention on $X$ satisfying that, for all $M = (f,g,h_1, h_2, Q) \in {\mathcal{M}}$, the intervened model $M(j)$ is given as $$A^{j} := {{\varepsilon}}_A^{j}, \quad H^{j} := {{\varepsilon}}_H^{j}, \quad X^{j} := \bar{\psi}^j(h_2, A^{j}, H^{j}, {{\varepsilon}}^{j}_X ,I^{j}), \quad Y^{j}:= f(X^{j}) + h_1(H^{j},{{\varepsilon}}_Y^{j}),$$ where $({{\varepsilon}}^j_X, {{\varepsilon}}^j_Y, {{\varepsilon}}^j_A, {{\varepsilon}}^j_H) \sim Q$ and where $I^j$ is chosen such that $I^j{\ensuremath{\stackrel{\text{d}}{=}}}I^i$. By definition, $j$ is a confounding-preserving intervention. Let now $\tilde{M} = (\tilde{f}, \tilde{g}, \tilde{h}_1, \tilde{h}_2,
\tilde{Q})$ be such that ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_M$, and let $(\tilde{X}^i, \tilde{Y}^i)$ and $(\tilde{X}^j, \tilde{Y}^j)$ be generated under $\tilde{M}(i)$ and $\tilde{M}(j)$, respectively. By Assumption \[ass:identify\_g\], it holds for all $a \in {\mathrm{supp}}(A) \cup {\mathrm{supp}}_{{\mathcal{I}}}(A)$ that $\tilde{g}(a) = g^0(a)$. Hence, we get that $$\begin{aligned}
(\tilde{X}^i, \tilde{Y}^i)
&{\ensuremath{\stackrel{\text{d}}{=}}}(\tilde{g}(\psi^i(I^i, \tilde{{{\varepsilon}}}_A^i)) + \tilde{h}_2(\tilde{{{\varepsilon}}}^i_H, \tilde{{{\varepsilon}}}_X^i),
\tilde{f}(\tilde{g}(\psi^i(I^i, \tilde{{{\varepsilon}}}_A^i)) + \tilde{h}_2(\tilde{{{\varepsilon}}}^i_H, \tilde{{{\varepsilon}}}_X^i)) + \tilde{h}_1(\tilde{{{\varepsilon}}}_H^i, \tilde{{{\varepsilon}}}^i_Y)) \\
&=
(g^0(\psi^i(I^i, \tilde{{{\varepsilon}}}_A^i)) + \tilde{h}_2(\tilde{{{\varepsilon}}}^i_H, \tilde{{{\varepsilon}}}_X^i),
\tilde{f}(g^0(\psi^i(I^i, \tilde{{{\varepsilon}}}_A^i)) + \tilde{h}_2(\tilde{{{\varepsilon}}}^i_H, \tilde{{{\varepsilon}}}_X^i)) + \tilde{h}_1(\tilde{{{\varepsilon}}}_H^i, \tilde{{{\varepsilon}}}^i_Y)) \\
&{\ensuremath{\stackrel{\text{d}}{=}}}(g^0(\psi^i(I^j, \tilde{{{\varepsilon}}}_A^j)) + \tilde{h}_2(\tilde{{{\varepsilon}}}^j_H, \tilde{{{\varepsilon}}}_X^j),
\tilde{f}(g^0(\psi^i(I^j, \tilde{{{\varepsilon}}}_A^j)) + \tilde{h}_2(\tilde{{{\varepsilon}}}^j_H, \tilde{{{\varepsilon}}}_X^j)) + \tilde{h}_1(\tilde{{{\varepsilon}}}_H^j, \tilde{{{\varepsilon}}}^j_Y)) \\
&{\ensuremath{\stackrel{\text{d}}{=}}}(\bar{\psi}^j(\tilde{h}_2, \tilde{{{\varepsilon}}}_A^j, \tilde{{{\varepsilon}}}_H^j, \tilde{{{\varepsilon}}}^j_X ,I^j),
\tilde{f}(\bar{\psi}^j(\tilde{h}_2, \tilde{{{\varepsilon}}}_A^j, \tilde{{{\varepsilon}}}_H^j, \tilde{{{\varepsilon}}}^j_X ,I^j)) + \tilde{h}_1(\tilde{{{\varepsilon}}}_H^j, \tilde{{{\varepsilon}}}^j_Y)) \\
&{\ensuremath{\stackrel{\text{d}}{=}}}(\tilde{X}^j, \tilde{Y}^j),
\end{aligned}$$ as desired. Since $i \in {\mathcal{I}}$ was arbitrary, we have now shown that there exists a mapping $\pi$ from ${\mathcal{I}}$ into a set $\mathcal{J}$ of confounding-preserving (and hence a well-behaved set) of interventions on $X$, such that for all $\tilde{M}$ with ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_M$, ${\mathbb{P}}^{(X,Y)}_{\tilde{M}(i)} = {\mathbb{P}}_{\tilde{M}(\pi(i))}^{(X,Y)}$. Hence, we can rewrite Equation in Definition \[defi:general\] in terms of the set $\mathcal{J}$. The result now follows from Propositions \[prop:genX\_intra\] and \[prop:genX\_extra\].
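As a concrete illustration of the above construction, consider a hard intervention $i$ on $A$ that sets $A:=a_0$ for some fixed $a_0$, i.e., $\psi^i(I^i,{{\varepsilon}}_A^i)\equiv a_0$. The corresponding intervention $j$ on $X$ then reads $$X^{j} := g^0(a_0) + h_2(H^{j}, {{\varepsilon}}_X^{j}),$$ a deterministic shift of $X$ by $g^0(a_0)$ that leaves the confounding term $h_2(H,{{\varepsilon}}_X)$, and hence the dependence between $X$ and $Y$ induced by $H$, unchanged; this is precisely what makes $j$ confounding-preserving.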
Proof of Proposition \[prop:impossibility\_intA\]
-------------------------------------------------
Let $b \in {\mathbb{R}}^d$ be such that $f(x) = b^\top x$ for all $x \in {\mathbb{R}}^d$. We start by characterizing the error ${\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big]$. Let us consider models of the form $\tilde{M} = (f, \tilde{g}, h_1, h_2, Q) \in {\mathcal{M}}$ for some function $\tilde{g} \in {\mathcal{G}}$ with $\tilde{g}(a) = g(a)$ for all $a \in {\mathrm{supp}}_M(A)$. Clearly, any such model satisfies that ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_M$. For every $a \in \mathcal{A}$, let $i_a \in {\mathcal{I}}$ denote the corresponding hard intervention on $A$. For every $a \in \mathcal{A}$ and ${b_{\diamond}}\in {\mathbb{R}}^d$, we then have $$\label{eq:linear_mse}
\begin{aligned}
&\ {\mathbb{E}}_{\tilde{M}(i_a)}\big[(Y - {b_{\diamond}}^\top X)^2\big] \\
= &\ {\mathbb{E}}_{\tilde{M}(i_a)}\big[(b^\top X + \xi_Y - {b_{\diamond}}^\top X)^2\big] \\
= &\ (b - {b_{\diamond}})^\top {\mathbb{E}}_{\tilde{M}(i_a)}[X X^\top] (b - {b_{\diamond}}) + 2(b - {b_{\diamond}})^\top {\mathbb{E}}_{\tilde{M}(i_a)}[X \xi_Y] + {\mathbb{E}}_{\tilde{M}(i_a)}\big[\xi_Y^2] \\
= &\ (b - {b_{\diamond}})^\top \underbrace{(\tilde{g}(a) \tilde{g}(a)^\top + {\mathbb{E}}_{M}[\xi_X \xi_X^\top])}_{=: K_{\tilde{M}}(a)} (b - {b_{\diamond}}) + 2(b - {b_{\diamond}})^\top {\mathbb{E}}_{M}[\xi_X \xi_Y] + {\mathbb{E}}_{M}\big[\xi_Y^2],
\end{aligned}$$ where we have used that, under $i_a$, the distribution of $(\xi_X, \xi_Y)$ is unaffected. We now show that, for any $\tilde{M}$ with the above form, the causal function $f$ does not minimize the worst-case mean squared error across interventions in ${\mathcal{I}}$. The idea is to show that the worst-case mean squared error strictly decreases at ${b_{\diamond}}= b$ in the direction $u := {\mathbb{E}}_{M}[\xi_X \xi_Y] / \norm{{\mathbb{E}}_{M}[\xi_X \xi_Y]}_2$. For every $a \in \mathcal{A}$ and $s \in {\mathbb{R}}$, define $$\ell_{\tilde{M},a}(s) := {\mathbb{E}}_{\tilde{M}(i_a)}\big[(Y-(b + s u)^\top X)^2\big] = u^\top K_{\tilde{M}}(a) u \cdot s^2 - 2 u^\top {\mathbb{E}}_{M}[\xi_X \xi_Y] \cdot s + {\mathbb{E}}_{M}\big[\xi_Y^2].$$ For every $a$, $\ell_{\tilde{M},a}^\prime (0) = -2 \norm{{\mathbb{E}}_{M}[\xi_X \xi_Y]}_2 < 0$, showing that $\ell_{\tilde{M},a}$ is strictly decreasing at $s=0$ (with a derivative that is bounded away from 0 across all $a \in \mathcal{A}$). By boundedness of $\mathcal{A}$ and by the continuity of $a \mapsto \ell_{\tilde{M},a}^{\prime \prime } (0) = 2 u^\top K_{\tilde{M}}(a) u$, it further follows that $\sup_{a \in \mathcal{A}} \card{\ell^{\prime \prime}_{\tilde{M},a} (0)} < \infty$. Hence, we can find $s_0 > 0$ such that for all $a \in \mathcal{A}$, $\ell_{\tilde{M}, a}(0) > \ell_{\tilde{M}, a}(s_0)$. It now follows by continuity of $(a, s) \mapsto \ell_{\tilde{M},a}(s)$ that $$\sup_{i \in {\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}\big[(Y-b^\top X)^2\big] = \sup_{a \in \mathcal{A}} \ell_{\tilde{M}, a}(0) > \sup_{a \in \mathcal{A}} \ell_{\tilde{M}, a}(s_0) = \sup_{i \in {\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}\big[(Y-(b+s_0 u)^\top X)^2\big],$$ showing that $b+s_0 u$ attains a lower worst-case mean squared error than $b$.
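An explicit admissible choice in the above argument is, assuming $\kappa:=\sup_{a \in \mathcal{A}} u^\top K_{\tilde{M}}(a) u>0$, any $s_0$ with $0<s_0<2\norm{{\mathbb{E}}_{M}[\xi_X \xi_Y]}_2/\kappa$, since then, for every $a\in\mathcal{A}$, $$\ell_{\tilde{M},a}(s_0)-\ell_{\tilde{M},a}(0) = u^\top K_{\tilde{M}}(a) u \cdot s_0^2 - 2\norm{{\mathbb{E}}_{M}[\xi_X \xi_Y]}_2 s_0 \leq s_0\big(\kappa s_0 - 2\norm{{\mathbb{E}}_{M}[\xi_X \xi_Y]}_2\big) < 0;$$ if $\kappa=0$, any $s_0>0$ works.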
We now show that all functions other than $f$ may result in an arbitrarily large error. Let $\bar{b} \in {\mathbb{R}}^d \setminus \{ b \}$ be given, and let $j \in \{1, \dots, d\}$ be such that $b_j \neq \bar{b}_j$. The idea is to construct a function $\tilde{g} \in {\mathcal{G}}$ such that, under the corresponding model $\tilde{M} = (f, \tilde{g}, h_1, h_2, Q) \in {\mathcal{M}}$, some hard interventions on $A$ result in strong shifts of the $j$th coordinate of $X$. Let $a \in \mathcal{A}$. Let $e_j \in {\mathbb{R}}^d$ denote the $j$th unit vector, and assume that $\tilde{g}(a) = n e_j$ for some $n \in {\mathbb{N}}$. Using \[eq:linear\_mse\], it follows that $$\begin{aligned}
&{\mathbb{E}}_{\tilde{M}(i_a)}\big[(Y-\bar{b}^\top X)^2\big] \\
&\qquad = n^2 (\bar{b}_j - b_j)^2 + (\bar{b} - b)^\top {\mathbb{E}}_{M}[\xi_X \xi_X^\top] (\bar{b} - b) + 2(\bar{b} - b)^\top {\mathbb{E}}_{M}[\xi_X \xi_Y] + {\mathbb{E}}_{M}\big[\xi_Y^2].\end{aligned}$$ By letting $n \to \infty$, we see that the above error may become arbitrarily large. Given any $c > 0$, we can therefore construct $\tilde{g}$ such that ${\mathbb{E}}_{\tilde{M}(i_a)}\big[(Y-\bar{b}^\top X)^2\big] \geq c
+{\mathbb{E}}_{M}\big[\xi_Y^2]$. By carefully choosing $a \in \text{int}(\mathcal{A} \setminus {\mathrm{supp}}_M(A))$, this can be done such that $\tilde{g}$ is continuous and $\tilde{g}(a) = g(a)$ for all $a \in {\mathrm{supp}}_M(A)$, ensuring that $\bP_{\tilde M}= \bP_M$. It follows that $$\begin{aligned}
c &\leq {\mathbb{E}}_{\tilde{M}(i_a)}\big[(Y-{\bar{b}}^\top X)^2\big] - {\mathbb{E}}_{M}\big[\xi_Y^2] \\
&= {\mathbb{E}}_{\tilde{M}(i_a)}\big[(Y-{\bar{b}}^\top X)^2\big] - \sup_{i \in {\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)} \big[(Y- b^\top X)^2] \\
&\leq {\mathbb{E}}_{\tilde{M}(i_a)}\big[(Y-{\bar{b}}^\top X)^2\big] - \inf_{{b_{\diamond}}\in {\mathbb{R}}^d} \sup_{i \in {\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)} \big[(Y-{b_{\diamond}}^\top X)^2] \\
&\leq \sup_{i \in {\mathcal{I}}} {\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{\bar{b}}^\top X)^2\big] - \inf_{{b_{\diamond}}\in {\mathbb{R}}^d} \sup_{i \in {\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)} \big[(Y-{b_{\diamond}}^\top X)^2],\end{aligned}$$ which completes the proof of Proposition \[prop:impossibility\_intA\].
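In the construction above, the required magnitude of the shift can be made explicit: any $n\in{\mathbb{N}}$ with $$n^2 \geq \frac{c+\abs[\big]{(\bar{b} - b)^\top {\mathbb{E}}_{M}[\xi_X \xi_X^\top] (\bar{b} - b) + 2(\bar{b} - b)^\top {\mathbb{E}}_{M}[\xi_X \xi_Y]}}{(\bar{b}_j - b_j)^2}$$ guarantees ${\mathbb{E}}_{\tilde{M}(i_a)}\big[(Y-\bar{b}^\top X)^2\big] \geq c + {\mathbb{E}}_{M}\big[\xi_Y^2\big]$.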
Proof of Proposition \[thm:consis\]
-----------------------------------
By assumption, $ \cI$ is a set of interventions on $X$ or $A$ of which at least one is confounding-removing. Now fix any $$\tilde M=(f_{\eta_0}(x;\tilde{\theta}),\tilde g,\tilde h_1,\tilde h_2,\tilde Q)\in \cM,$$ with $\bP_{M}=\bP_{\tilde M}$. By Proposition \[prop:minimax\_equal\_causal\], we have that a minimax solution is given by the causal function. That is, $$\begin{aligned}
\inf_{{f_{\diamond}}\in{\mathcal{F}}_{\eta_0}}\,
\sup_{i\in {\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big] = \sup_{i\in {\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-
f_{\eta_0}(X;\tilde{\theta})
)^2\big] = {\mathbb{E}}_{M}[\xi^2_Y],\end{aligned}$$ where we used that $\xi_Y$ is unaffected by an intervention on $X$. By the support restriction ${\mathrm{supp}}^M(X) \subseteq (a,b)$ we know that $$\begin{aligned}
f_{\eta_0}(x;\theta^0)=B(x)^\top \theta^0, \quad
f_{\eta_0}(x;\tilde \theta)=B(x)^\top \tilde \theta, \quad
f_{\eta_0}(x;\hat \theta_{\lambda^\star_n,\eta_0,\mu}^n)=B(x)^\top \hat \theta_{\lambda^\star_n,\eta_0,\mu}^n,\end{aligned}$$ for all $x\in {\mathrm{supp}}^M(X)$. Furthermore, as $Y=B(X)^\top \theta^0+\xi_Y$ $\bP_{M}$-almost surely, we have that $$\begin{aligned}
\label{eq:ExpansionOfECY_0}
{\mathbb{E}}_M\left[ C(A) Y \right] = {\mathbb{E}}_M\left[ C(A) B(X)^\top \theta^0\right] + {\mathbb{E}}_M\left[C(A) \xi_Y \right] = {\mathbb{E}}_M\left[ C(A) B(X)^\top \right]\theta^0,\end{aligned}$$ where we used the assumptions that ${\mathbb{E}}\left[ \xi_Y \right] =0$ and $A {\perp \!\!\! \perp}\xi_Y$ by the exogeneity of $A$. Similarly, $$\begin{aligned}
{\mathbb{E}}_{\tilde M}\left[ C(A) Y \right] = {\mathbb{E}}_{\tilde M}\left[ C(A) B(X)^\top \right]\tilde\theta.\end{aligned}$$ As $\bP_M = \bP_{\tilde M}$, it holds that $
{\mathbb{E}}_M[ C(A) Y ] = {\mathbb{E}}_{\tilde M}[ C(A) Y ]$ and ${\mathbb{E}}_M[ C(A) B(X)^\top ]= {\mathbb{E}}_{\tilde{M}}[ C(A) B(X)^\top ]$, hence $$\begin{aligned}
{\mathbb{E}}_{ M}\left[ C(A) B(X)^\top \right]\tilde\theta = {\mathbb{E}}_{M}\left[ C(A) B(X)^\top \right]\theta^0 \iff \tilde \theta = \theta^0,\end{aligned}$$ by assumption \[ass:RankCondition\], which states that $ {\mathbb{E}}[ C(A) B(X)^\top ]$ is of full rank (bijective). In other words, the causal function parameterized by $\theta^0$ is identified from the observational distribution. Assumptions \[ass:identify\_f\] and \[ass:gen\_f\] are therefore satisfied. Furthermore, we also have that $$\begin{aligned}
&\sup_{i\in {\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-f_{\eta_0}(X;\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu}))^2\big] \\
&\quad = \sup_{i\in\cI} \big\{ {\mathbb{E}}_{\tilde{M}(i)} \big[(f_{\eta_0}(X;\theta^0)-f_{\eta_0}(X;\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu}))^2\big] + {\mathbb{E}}_{\tilde{M}(i)}\big[\xi_Y^2\big] \\
& \qquad \qquad + 2 {\mathbb{E}}_{\tilde{M}(i)}\big[\xi_Y (f_{\eta_0}(X;\theta^0)-f_{\eta_0}(X;\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu})) \big] \big\} \\
&\quad \leq \sup_{i\in\cI} \big\{ {\mathbb{E}}_{\tilde{M}(i)} \big[(f_{\eta_0}(X;\theta^0)-f_{\eta_0}(X;\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu}))^2\big] + {\mathbb{E}}_{\tilde{M}(i)}\big[\xi_Y^2\big] \\
& \qquad \qquad + 2 \sqrt{ {\mathbb{E}}_{\tilde{M}(i)}\big[\xi_Y^2 \big] {\mathbb{E}}_{\tilde{M}(i)} \big[(f_{\eta_0}(X;\theta^0)-f_{\eta_0}(X;\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu}))^2 \big] } \big\} \\
&\quad \leq \sup_{i\in\cI} {\mathbb{E}}_{\tilde{M}(i)} \big[(f_{\eta_0}(X;\theta^0)-f_{\eta_0}(X;\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu}))^2\big] + {\mathbb{E}}_{M}\big[\xi_Y^2\big] \\
& \qquad \qquad + 2 \sqrt{ {\mathbb{E}}_{M}\big[\xi_Y^2 \big] \sup_{i\in\cI} {\mathbb{E}}_{\tilde{M}(i)} \big[(f_{\eta_0}(X;\theta^0)-f_{\eta_0}(X;\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu}))^2 \big] },
\end{aligned}$$ by the Cauchy-Schwarz inequality, where we additionally used that $ {\mathbb{E}}_{\tilde M(i)} [ \xi_Y^2 ] = {\mathbb{E}}_{M} [ \xi_Y^2 ]$ as $\xi_Y$ is unaffected by interventions on $X$. Thus, $$\begin{aligned}
&\big\vert \sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-f_{\eta_0}(X;\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu}))^2\big] - \inf_{{f_{\diamond}}\in{\mathcal{F}}_{\eta_0}}\,
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big] \big\vert \\
&\quad \leq \sup_{i\in\cI} {\mathbb{E}}_{\tilde{M}(i)} \big[(f_{\eta_0}(X;\theta^0)-f_{\eta_0}(X;\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu}))^2\big] \\
& \qquad \qquad + 2 \sqrt{ {\mathbb{E}}_{M}\big[\xi_Y^2 \big] \sup_{i\in\cI} {\mathbb{E}}_{\tilde{M}(i)} \big[(f_{\eta_0}(X;\theta^0)-f_{\eta_0}(X;\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu}))^2 \big] } .
\end{aligned}$$ For the next few derivations let $\hat \theta =\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu}$ for notational simplicity. Note that, for all $x \in {\mathbb{R}}$, $$\begin{aligned}
(f_{\eta_0}(x;\theta^0)-f_{\eta_0}(x;\hat \theta ))^2 &\leq (\theta^0 -\hat \theta)^\top B(x)B(x)^\top (\theta^0 -\hat \theta) \\
& \quad + (B(a)^\top (\theta^0-\hat \theta ) + B'(a)^\top (\theta^0-\hat \theta )(x-a))^2 \\
& \quad + (B(b)^\top (\theta^0-\hat \theta ) + B'(b)^\top (\theta^0-\hat \theta )(x-b))^2.\end{aligned}$$ The second term has the following upper bound $$\begin{aligned}
& (B(a)^\top (\theta^0-\hat \theta ) + B'(a)^\top (\theta^0-\hat \theta )(x-a))^2 \\
&\quad = (\theta^0 -\hat \theta)^\top B(a)B(a)^\top (\theta^0 -\hat \theta) \\
& \qquad + (x-a)^2(\theta^0 -\hat \theta)^\top B'(a)B'(a)^\top (\theta^0 -\hat \theta) \\
& \qquad + 2(x-a) (\theta^0 -\hat \theta)^\top B'(a)B(a)^\top (\theta^0 -\hat \theta) \\
&\quad \leq \lambda_{\max}(B(a)B(a)^\top) \|\theta^0 - \hat \theta\|_2^2 \\
& \qquad + (x-a)^2 \lambda_{\max}(B'(a)B'(a)^\top ) \|\theta^0 - \hat \theta\|_2^2 \\
& \qquad + 2(x-a) \lambda_{\max}((B'(a)B(a)^\top+B(a)B'(a)^\top )/2 ) \|\theta^0 - \hat \theta\|_2^2,\end{aligned}$$ where $\lambda_{\max}$ denotes the maximum eigenvalue. An analogous upper bound can be constructed for the third term. Thus, by combining these two upper bounds with a similar upper bound for the first term, we arrive at $$\begin{aligned}
&{\mathbb{E}}_{\tilde{M}(i)} \big[(f_{\eta_0}(X;\theta^0)-f_{\eta_0}(X;\hat{\theta}))^2\big] \\
&\quad \leq \lambda_{\max}({\mathbb{E}}_{\tilde{M}(i)} [B(X)B(X)^\top])\|\theta^0 -\hat \theta\|_2^2 \\
&\qquad + \lambda_{\max}(B(a)B(a)^\top) \|\theta^0 - \hat \theta\|_2^2 \\
& \qquad + {\mathbb{E}}_{\tilde{M}(i)} [(X-a)^2] \lambda_{\max}(B'(a)B'(a)^\top ) \|\theta^0 - \hat \theta\|_2^2 \\
& \qquad + 2{\mathbb{E}}_{\tilde{M}(i)} [X-a] \lambda_{\max}((B'(a)B(a)^\top+B(a)B'(a)^\top )/2 ) \|\theta^0 - \hat \theta\|_2^2\\
&\qquad + \lambda_{\max}(B(b)B(b)^\top) \|\theta^0 - \hat \theta\|_2^2 \\
& \qquad + {\mathbb{E}}_{\tilde{M}(i)} [(X-b)^2] \lambda_{\max}(B'(b)B'(b)^\top ) \|\theta^0 - \hat \theta\|_2^2 \\
& \qquad + 2{\mathbb{E}}_{\tilde{M}(i)} [X-b] \lambda_{\max}((B'(b)B(b)^\top+B(b)B'(b)^\top )/2 ) \|\theta^0 - \hat \theta\|_2^2.\end{aligned}$$ Assumption \[ass:MaximumEigenValueBounded\] imposes that $\sup_{i\in\cI}{\mathbb{E}}_{\tilde{M}(i)} [X^2]$ and $\sup_{i\in\cI}\lambda_{\max}({\mathbb{E}}_{\tilde{M}(i)} [B(X)B(X)^\top])$ are finite. Hence, the supremum of each of the above terms is finite. That is, there exists a constant $c>0$ such that $$\begin{aligned}
&\big\vert \sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-f_{\eta_0}(X;\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu}))^2\big] - \inf_{{f_{\diamond}}\in{\mathcal{F}}_{\eta_0}}\,
\sup_{i\in{\mathcal{I}}}{\mathbb{E}}_{\tilde{M}(i)}\big[(Y-{f_{\diamond}}(X))^2\big] \big\vert \\
&\quad \leq c \|\theta^0 - \hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu} \|_2^2 + 2 \sqrt{ {\mathbb{E}}_{M}\big[\xi_Y^2 \big] c }\|\theta^0 - \hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu} \|_2 .
\end{aligned}$$
It therefore suffices to show that $$\begin{aligned}
\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu} \underset{n\to\infty}{\stackrel{P}{\longrightarrow}} \theta^0,\end{aligned}$$ with respect to the distribution induced by $M$. To simplify notation, we henceforth drop the $M$ subscript in the expectations and probabilities. Note that by the rank conditions in \[ass:RankCondition\], and the law of large numbers, we may assume that the corresponding sample product moments satisfy the same conditions. That is, for the purpose of the following arguments, it suffices that the sample product moment only satisfies these rank conditions asymptotically with probability one.
Let $B:= B(X)$, $C:= C(A)$, let $\fB$ and $\fC$ be row-wise stacked i.i.d. copies of $B(X)^\top$ and $C(A)^\top $, and recall the definition $\fP_\delta := \fC \left( \fC^\top \fC + \delta \fM \right)^{-1}
\fC^\top$. By convexity of the objective function we can find a closed form expression for our estimator of $\theta^0$ by solving the corresponding normal equations. The closed form expression is given by $$\begin{aligned}
\hat \theta^n_{\lambda, \eta, \mu}
: &= \operatorname*{argmin}_{\theta \in {\mathbb{R}}^{k}}
\norm{{\mathbf{Y}} - {\mathbf{B}} \theta }_2^2 + \lambda \norm{{\mathbf{P}}_\delta({\mathbf{Y}} - {\mathbf{B}} \theta)}_2^2 + \gamma \theta^\top {\mathbf{K}} \theta, \\
&= \left( \frac{\fB^\top \fB}{n} + \lambda^\star_n \frac{\fB^\top \fP_\delta \fP_\delta \fB}{n} + \frac{\gamma \fK}{n} \right)^{-1} \left( \frac{\fB^\top \fY }{n} + \lambda^\star_n \frac{\fB^\top \fP_\delta \fP_\delta \fY}{n} \right),\end{aligned}$$ where we used that $\lambda^\star_n \in [0,\infty)$ almost surely by \[ass:LambdaStarAlmostSurelyFinite\]. Consequently (using standard convergence arguments and that $n^{-1} \gamma \fK $ and $n^{-1} \delta \fM$ converge to zero in probability), if $\lambda^\star_n$ diverges to infinity in probability as $n$ tends to infinity, then $$\begin{aligned}
\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu} &\stackrel{P}{\to} \left( {\mathbb{E}}\left[ BC^\top \right]{\mathbb{E}}\left[ CC^\top \right]^{-1}{\mathbb{E}}\left[ CB^\top \right] \right)^{-1} {\mathbb{E}}\left[ BC^\top \right] {\mathbb{E}}\left[ CC^\top \right]^{-1} {\mathbb{E}}\left[ C Y \right] \\
&= \theta^0.\end{aligned}$$ Here, we also used that the terms multiplied by $\lambda^\star_n$ are the only asymptotically relevant terms. These are the standard arguments that the K-class estimator (with minor penalized regression modifications) is consistent as long as the parameter $\lambda^\star_n$ converges to infinity, or, equivalently, $\kappa_n^\star= \lambda^\star_n/(1+\lambda^\star_n)$ converges to one in probability.
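The closed-form expression above is straightforward to evaluate numerically. The following sketch is a minimal illustration only: the toy data-generating process, the choice of identity matrices for $\fM$ and $\fK$, and all tuning constants are ours, not the paper's. It computes the penalized K-class estimate for a linear basis and exhibits the two regimes discussed in the text: $\lambda=0$ gives the (confounded) penalized OLS solution, while a very large $\lambda$ approaches the TSLS-type limit.

```python
import numpy as np

def k_class_estimator(B, C, Y, lam, delta=1e-3, gamma=1e-3):
    """Penalized K-class estimator: argmin_theta
    ||Y - B theta||^2 + lam ||P_delta (Y - B theta)||^2 + gamma theta' K theta,
    with M and K taken to be identity matrices for simplicity."""
    n, k = B.shape
    # compute P_delta B and P_delta Y without forming the n x n matrix P_delta
    G = np.linalg.solve(C.T @ C + delta * np.eye(C.shape[1]),
                        C.T @ np.column_stack([B, Y]))
    PB, PY = C @ G[:, :k], C @ G[:, k]
    A_mat = B.T @ B / n + lam * PB.T @ PB / n + gamma * np.eye(k) / n
    b_vec = B.T @ Y / n + lam * PB.T @ PY / n
    return np.linalg.solve(A_mat, b_vec)

# toy confounded linear SEM (ours, for illustration): A -> X -> Y, hidden H
rng = np.random.default_rng(0)
n = 20_000
A = rng.normal(size=n)                       # exogenous variable, C(A) = A
H = rng.normal(size=n)                       # hidden confounder
X = A + H + 0.3 * rng.normal(size=n)
Y = 2.0 * X + H + 0.3 * rng.normal(size=n)   # causal coefficient theta0 = 2

B, C = X[:, None], A[:, None]                # linear basis B(X) = x
ols = k_class_estimator(B, C, Y, lam=0.0)        # biased by H
tsls_like = k_class_estimator(B, C, Y, lam=1e6)  # lam -> infinity: TSLS-type limit
```

Note that $\fP_\delta \fB$ and $\fP_\delta \fY$ are obtained by solving a small system in the dimension of $C(A)$ rather than materializing $\fP_\delta$ itself.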
We now consider two cases: *(i)* ${\mathbb{E}}[B\xi_Y]\not =0$ and *(ii)* ${\mathbb{E}}[B\xi_Y]=0$, corresponding to the case with unmeasured confounding and without, respectively. For *(i)* we show that $\lambda^\star_n$ converges to infinity in probability and for *(ii)* we show consistency by other means (as $\lambda^\star_n$ might not converge to infinity in this case).
**Case (i):** The confounded case ${\mathbb{E}}[B\xi_Y]\not =0$. It suffices to show that $$\lambda^\star_n := \inf\{\lambda\geq 0 : T_n(\hat \theta^n
_{\lambda,\eta_{0},\mu})\leq q(\alpha)\}\underset{n\to\infty}{\stackrel{P}{\longrightarrow}} \infty.$$ To that end, note that for fixed $\lambda \geq 0$ we have that $$\begin{aligned}
\label{eq:ThetaLambdaConsistentEstimator}
\hat{\theta}^n_{\lambda,\eta_0,\mu} & \underset{n\to\infty}{\stackrel{P}{\longrightarrow}} \theta_\lambda,\end{aligned}$$ where $$\begin{aligned}
\label{eq:ThetaLambdaFullRepresentation}
\theta_\lambda \, &:= \left( {\mathbb{E}}\left[ BB^\top \right] + \lambda {\mathbb{E}}\left[ BC^\top \right]{\mathbb{E}}\left[ CC^\top \right]^{-1}{\mathbb{E}}\left[ CB^\top \right] \right)^{-1} \\ \notag
& \qquad \qquad \times \left( {\mathbb{E}}\left[ B Y \right] + \lambda {\mathbb{E}}\left[ BC^\top \right] {\mathbb{E}}\left[ CC^\top \right]^{-1} {\mathbb{E}}\left[ C Y \right] \right). \end{aligned}$$ Recall that \[eq:ExpansionOfECY\_0\] states that $
{\mathbb{E}}\left[ C Y \right] = {\mathbb{E}}\left[ C B^\top \right]\theta^0$. Using this and that $Y=B^\top \theta^0 + \xi_Y$ $\bP_{M}$-almost surely, we have that the latter factor of \[eq:ThetaLambdaFullRepresentation\] is given by $$\begin{aligned}
&{\mathbb{E}}\left[ B Y \right] + \lambda {\mathbb{E}}\left[ BC^\top \right] {\mathbb{E}}\left[ CC^\top \right]^{-1} {\mathbb{E}}\left[ C Y \right] \\
& \quad = {\mathbb{E}}\left[ B B^\top \right]\theta^0 + {\mathbb{E}}\left[ B \xi_Y \right] + \lambda {\mathbb{E}}\left[ BC^\top \right] {\mathbb{E}}\left[ CC^\top \right]^{-1} {\mathbb{E}}\left[ C B^\top \right] \theta^0 \\
&\quad = \left( {\mathbb{E}}\left[ B B^\top \right] + \lambda {\mathbb{E}}\left[ BC^\top \right] {\mathbb{E}}\left[ CC^\top \right]^{-1} {\mathbb{E}}\left[ C B^\top \right] \right)\theta^0 + {\mathbb{E}}\left[ B \xi_Y \right].\end{aligned}$$ Inserting this into \[eq:ThetaLambdaFullRepresentation\] we arrive at the following representation of $\theta_\lambda$: $$\begin{aligned}
\label{eq:ThetaLambdaInTermsOfTrueTheta}
\theta_\lambda &= \theta^0 + \left( {\mathbb{E}}\left[ BB^\top \right] + \lambda {\mathbb{E}}\left[ BC^\top \right]{\mathbb{E}}\left[ CC^\top \right]^{-1}{\mathbb{E}}\left[ CB^\top \right] \right)^{-1} {\mathbb{E}}\left[ B \xi_Y \right].\end{aligned}$$ Since ${\mathbb{E}}\left[ B \xi_Y \right]\not =0$ by assumption, the above yields that $$\begin{aligned}
\label{eq:ThetaTrueNotEqualThetaLambda}
\forall \lambda \geq 0 : \quad \quad \theta^0 \not = \theta_\lambda.\end{aligned}$$ Now we prove that $\lambda^\star_n$ diverges to infinity in probability as $n$ tends to infinity. That is, for any $\lambda \geq 0$ we will prove that $$\begin{aligned}
\lim_{n \to \infty }\bP (\lambda^\star_n \leq \lambda ) =0. \end{aligned}$$ We fix an arbitrary $\lambda\geq 0$. By \[eq:ThetaTrueNotEqualThetaLambda\] we have that $\theta^0 \not = \theta_{\lambda}$. This implies that there exists an ${{\varepsilon}}>0$ such that $\theta^0\not \in \overline{B(\theta_{\lambda},{{\varepsilon}})}$, where $\overline{B(\theta_{\lambda},{{\varepsilon}})}$ is the closed ball in ${\mathbb{R}}^k$ with center $\theta_{\lambda}$ and radius ${{\varepsilon}}$. By the consistency result \[eq:ThetaLambdaConsistentEstimator\], we know that the sequence of events $(A_n)_{n\in {\mathbb{N}}}$, for every $n \in {\mathbb{N}}$, given by $$A_n:= (|\hat \theta_{\lambda,\eta_0,\mu}^n -\theta_{\lambda}|\leq {{\varepsilon}}) = (\hat \theta_{\lambda,\eta_0,\mu}^n \in\overline{B(\theta_{\lambda},{{\varepsilon}})}),$$ satisfies $\bP(A_n)\to 1$ as $n\to \infty$. By assumption \[ass:MonotonicityAndContinuityOfTest\] we have that $$\begin{aligned}
\tilde \lambda\mapsto T_n(\theta^n_{\tilde \lambda, \eta_0,\mu} ), \qquad \text{and} \qquad \theta \mapsto T_n(\theta ),\end{aligned}$$ are weakly decreasing and continuous, respectively. Together with the continuity of $\tilde \lambda \mapsto \hat \theta_{\tilde \lambda,\eta_0,\mu }^n$, this implies that also the mapping $\tilde \lambda \mapsto T_n(\hat \theta_{\tilde \lambda,\eta_0,\mu }^n)$ is continuous. It now follows from Assumption \[ass:LambdaStarAlmostSurelyFinite\] (stating that $\lambda^\star_n$ is almost surely finite) that for all $n \in {\mathbb{N}}$, $\bP(T_{n}( \hat \theta^{n}_{\lambda^\star_{n},\eta_0,\mu} ) \leq q(\alpha))=1$. Furthermore, since $\tilde \lambda\mapsto T_n(\theta^n_{\tilde \lambda, \eta_0,\mu} )$ is weakly decreasing, it follows that $$\begin{aligned}
\bP (\lambda^\star_{n} \leq \lambda ) &= \bP( \{\lambda^\star_{n} \leq \lambda \}\cap \{T_{n}( \hat \theta^{n}_{\lambda^\star_{n}, \eta_0,\mu} ) \leq q(\alpha)\} ) \\
&\leq \bP( \{\lambda^\star_{n} \leq \lambda \} \cap \{T_{n}( \hat \theta^{n}_{\lambda, \eta_0,\mu} ) \leq q(\alpha) \}) \\
& =\bP( \{\lambda^\star_{n} \leq \lambda \}\cap \{T_{n}( \hat \theta^{n}_{\lambda, \eta_0,\mu} ) \leq q(\alpha)\} \cap A_{n}) \\
& \qquad \qquad + \bP( \{\lambda^\star_{n} \leq \lambda \} \cap \{T_{n}( \hat \theta^{n}_{\lambda, \eta_0,\mu} ) \leq q(\alpha)\} \cap A_{n}^c) \\
&\leq \bP( \{\lambda^\star_{n} \leq \lambda \} \cap \{ T_{n}( \hat \theta^{n}_{\lambda, \eta_0,\mu} ) \leq q(\alpha)\} \cap \{ |\hat \theta_{\lambda,\eta_0,\mu}^n -\theta_{\lambda}|\leq {{\varepsilon}}\} ) + \bP(A_n^c).\end{aligned}$$ It now suffices to show that the first term converges to zero, since $\bP(A_{n}^c)\to 0$ as $n\to \infty$. We have $$\begin{aligned}
&\bP( \{\lambda^\star_{n} \leq \lambda \} \cap \{ T_{n}( \hat \theta^{n}_{\lambda, \eta_0,\mu} ) \leq q(\alpha)\} \cap \{ |\hat \theta_{\lambda,\eta_0,\mu}^n -\theta_{\lambda}|\leq{{\varepsilon}}\} ) \\
& \quad \leq \bP \Big( \{ \lambda^\star_{n} \leq \lambda \} \cap \Big\{ \inf_{\theta\in \overline{B(\theta_{\lambda},{{\varepsilon}})}}T_{n}( \theta ) \leq q(\alpha) \Big\} \cap \{ |\hat \theta_{\lambda,\eta_0,\mu}^n -\theta_{\lambda}|\leq{{\varepsilon}}\} \Big) \\
& \quad \leq \bP \Big( \inf_{\theta\in \overline{B(\theta_{\lambda},{{\varepsilon}})}}T_{n}( \theta ) \leq q(\alpha) \Big)\\
& \quad \to 0, \end{aligned}$$ as $n\to \infty$, since $\overline{B(\theta_{\lambda},{{\varepsilon}})}$ is a compact set not containing $\theta^0$. Here, we used that the test statistic $(T_n)$ is assumed to have compact uniform power \[ass:ConsistentTestStatistic\]. Hence, $\lim_{n\to \infty} \bP (\lambda^\star_{n} \leq \lambda ) = 0 $ for any $\lambda \geq 0$, proving that $\lambda^\star_n$ diverges to infinity in probability, which ensures consistency.
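Since $\tilde \lambda \mapsto T_n(\hat \theta^n_{\tilde \lambda,\eta_0,\mu})$ is weakly decreasing and continuous, the infimum defining $\lambda^\star_n$ can be located numerically by a doubling-plus-bisection search. The sketch below illustrates this; the test-statistic path is a purely hypothetical stand-in (the paper's $T_n$ is abstract), and the cap `lam_max` is our own safeguard for the case where the test is never accepted.

```python
def lambda_star(T_of_lambda, q_alpha, lam_max=1e12, tol=1e-6):
    """inf{lam >= 0 : T(theta_hat_lambda) <= q_alpha}, assuming the map
    lam -> T(theta_hat_lambda) is weakly decreasing and continuous."""
    if T_of_lambda(0.0) <= q_alpha:
        return 0.0
    lo, hi = 0.0, 1.0
    while T_of_lambda(hi) > q_alpha:     # doubling phase: bracket the crossing
        lo, hi = hi, 2.0 * hi
        if hi > lam_max:
            return float("inf")          # test never accepted up to the cap
    while hi - lo > tol * max(1.0, hi):  # bisection phase
        mid = 0.5 * (lo + hi)
        if T_of_lambda(mid) <= q_alpha:
            hi = mid
        else:
            lo = mid
    return hi

# toy weakly decreasing test-statistic path (hypothetical, for illustration);
# T(lam) = 10/(1+lam) crosses q_alpha = 1 exactly at lam = 9
T = lambda lam: 10.0 / (1.0 + lam)
lam_star = lambda_star(T, q_alpha=1.0)
```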
**Case (ii):** The unconfounded case ${\mathbb{E}}[B(X)\xi_Y]=0$. Recall that $$\begin{aligned}
\notag
\hat{\theta}^n_{\lambda,\eta_0,\mu} \,:&= \operatorname*{argmin}_{\theta \in {\mathbb{R}}^{k}}
\norm{{\mathbf{Y}} - {\mathbf{B}} \theta }_2^2 + \lambda \norm{{\mathbf{P}}_\delta({\mathbf{Y}} - {\mathbf{B}} \theta)}_2^2 + \gamma \theta^\top {\mathbf{K}} \theta \\ \label{eq:ThetaLambdaMinimizesObjectiveFunction}
&=\operatorname*{argmin}_{\theta \in {\mathbb{R}}^{k}} l_{\text{OLS}}^n(\theta) + \lambda l_{\text{TSLS}}^n(\theta) + \gamma l_{\text{PEN}}(\theta) ,\end{aligned}$$ where we defined $l_{\text{OLS}}^n(\theta):= n^{-1}\norm{{\mathbf{Y}} - {\mathbf{B}} \theta }_2^2$, $l_{\text{TSLS}}^n(\theta) := n^{-1}\norm{{\mathbf{P}}_\delta({\mathbf{Y}} - {\mathbf{B}} \theta)}_2^2$, and $l_{\text{PEN}}(\theta) := n^{-1} \theta^\top {\mathbf{K}} \theta$. For any $0\leq \lambda_1 < \lambda_2$ we have $$\begin{aligned}
&l_{\text{OLS}}^n(\hat{\theta}^n_{\lambda_1,\eta_0,\mu}) + \lambda_1 l_{\text{TSLS}}^n(\hat{\theta}^n_{\lambda_1,\eta_0,\mu}) + \gamma l_{\text{PEN}}(\hat{\theta}^n_{\lambda_1,\eta_0,\mu}) \\
&\quad \leq l_{\text{OLS}}^n(\hat{\theta}^n_{\lambda_2,\eta_0,\mu}) + \lambda_1 l_{\text{TSLS}}^n(\hat{\theta}^n_{\lambda_2,\eta_0,\mu}) + \gamma l_{\text{PEN}}(\hat{\theta}^n_{\lambda_2,\eta_0,\mu}) \\
&\quad = l_{\text{OLS}}^n(\hat{\theta}^n_{\lambda_2,\eta_0,\mu}) + \lambda_2 l_{\text{TSLS}}^n(\hat{\theta}^n_{\lambda_2,\eta_0,\mu}) + \gamma l_{\text{PEN}}(\hat{\theta}^n_{\lambda_2,\eta_0,\mu}) + (\lambda_1-\lambda_2) l_{\text{TSLS}}^n(\hat{\theta}^n_{\lambda_2,\eta_0,\mu}) \\
&\quad \leq l_{\text{OLS}}^n(\hat{\theta}^n_{\lambda_1,\eta_0,\mu}) + \lambda_2 l_{\text{TSLS}}^n(\hat{\theta}^n_{\lambda_1,\eta_0,\mu}) + \gamma l_{\text{PEN}}(\hat{\theta}^n_{\lambda_1,\eta_0,\mu}) + (\lambda_1-\lambda_2) l_{\text{TSLS}}^n(\hat{\theta}^n_{\lambda_2,\eta_0,\mu}),\end{aligned}$$ where we used \[eq:ThetaLambdaMinimizesObjectiveFunction\]. Rearranging this inequality and dividing by $(\lambda_1 - \lambda_2)<0$ (which reverses the inequality) yields $$\begin{aligned}
l_{\text{TSLS}}^n(\hat{\theta}^n_{\lambda_1,\eta_0,\mu}) \geq l_{\text{TSLS}}^n(\hat{\theta}^n_{\lambda_2,\eta_0,\mu}),\end{aligned}$$ proving that $\lambda \mapsto l_{\text{TSLS}}^n(\hat{\theta}^n_{\lambda,\eta_0,\mu})$ is weakly decreasing. Thus, since $\lambda^\star_n \geq 0$ almost surely, we have that $$\begin{aligned}
\label{eq:unconfounded_IVinOLSboundedByConvergenceToZero}
l_{\text{TSLS}}^n(\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu}) \leq l_{\text{TSLS}}^n(\hat{\theta}^n_{0,\eta_0,\mu}) =n^{-1} ({\mathbf{Y}} - {\mathbf{B}} \hat{\theta}^n_{0,\eta_0,\mu})^{\top } {\mathbf{P}}_\delta{\mathbf{P}}_\delta({\mathbf{Y}} - {\mathbf{B}} \hat{\theta}^n_{0,\eta_0,\mu}).\end{aligned}$$ Furthermore, recall from \[eq:ThetaLambdaConsistentEstimator\] that $$\begin{aligned}
\label{eq:unconfoundedOLSconvergence}
\hat{\theta}^n_{0,\eta_0,\mu} \underset{n\to\infty}{\stackrel{P}{\longrightarrow}} \theta_0 = \theta^0,\end{aligned}$$ where the last equality follows from \[eq:ThetaLambdaInTermsOfTrueTheta\] using that we are in the unconfounded case ${\mathbb{E}}[B(X)\xi_Y]=0$. By expanding the right-hand side of \[eq:unconfounded\_IVinOLSboundedByConvergenceToZero\] and deriving convergence statements for each term, we get $$\begin{aligned}
\notag
&({\mathbf{Y}} - {\mathbf{B}} \hat{\theta}^n_{0,\eta_0,\mu})^{\top } {\mathbf{P}}_\delta{\mathbf{P}}_\delta({\mathbf{Y}} - {\mathbf{B}} \hat{\theta}^n_{0,\eta_0,\mu}) \\ \notag
&\quad \underset{n\to\infty}{\stackrel{P}{\longrightarrow}} ({\mathbb{E}}[ YC^\top ] - \theta_0^\top{\mathbb{E}}[B C^\top ]) {\mathbb{E}}[CC^\top]^{-1} (
{\mathbb{E}}[CY]
- {\mathbb{E}}[CB^\top] \theta_0 ) \\
& \quad = 0, \label{eq:unconfoundedTSLSinOLSConvpZero}\end{aligned}$$ where we used Slutsky’s theorem, the weak law of large numbers, and \[eq:unconfoundedOLSconvergence\]. Thus, by \[eq:unconfounded\_IVinOLSboundedByConvergenceToZero\] and \[eq:unconfoundedTSLSinOLSConvpZero\] it holds that $$\begin{aligned}
l_{\text{TSLS}}^n(\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu}) = n^{-1}\| {\mathbf{P}}_\delta({\mathbf{Y}} - {\mathbf{B}} \hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu}) \|_2^2 \underset{n\to\infty}{\stackrel{P}{\longrightarrow}} 0.\end{aligned}$$ For any $z\in {\mathbb{R}}^n$ we have that $$\begin{aligned}
\| {\mathbf{P}}_\delta z \|_2^2 & =
z^\top \fC ( \fC^\top \fC + \delta \fM )^{-1}
\fC^\top\fC ( \fC^\top \fC + \delta \fM )^{-1}
\fC^\top z \\
& =
z^\top \fC ( \fC^\top \fC + \delta \fM )^{-1}
(\fC^\top\fC)^{1/2}(\fC^\top\fC)^{1/2} ( \fC^\top \fC + \delta \fM )^{-1}
\fC^\top z \\
&= \| (\fC^\top\fC)^{1/2} ( \fC^\top \fC + \delta \fM )^{-1}
\fC^\top z \|_2^2,\end{aligned}$$ hence $$\begin{aligned}
\label{eq:consitUconfoundedDifferenceConvPtoZero}
\norm{H_n-G_n \hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu} }_2^2 &= \| n^{-1/2}(\fC^\top\fC)^{1/2} ( \fC^\top \fC + \delta \fM )^{-1}
\fC^\top ({\mathbf{Y}} - {\mathbf{B}} \hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu})\|_2^2 \\ \notag
&\stackrel{P}{\to } 0,\end{aligned}$$ where for each $n \in {\mathbb{N}}$, $G_n\in {\mathbb{R}}^{k\times k}$ and $H_n \in {\mathbb{R}}^{k\times 1}$ are defined as $$\begin{aligned}
G_n &:= n^{-1/2}(\fC^\top\fC)^{1/2} ( \fC^\top \fC + \delta \fM )^{-1} \fC^\top \fB, \text{ and } \\
H_n &:= n^{-1/2}(\fC^\top\fC)^{1/2} ( \fC^\top \fC + \delta \fM )^{-1} \fC^\top \fY.\end{aligned}$$ Using the weak law of large numbers, the continuous mapping theorem and Slutsky’s theorem, it follows that, as $n \to \infty$, $$\begin{aligned}
G_n \stackrel{P}{\to} G &:= {\mathbb{E}}[CC^\top]^{1/2} {\mathbb{E}}[CC^\top]^{-1} {\mathbb{E}}[CB^\top], \text{ and }\\
H_n \stackrel{P}{\to} H &:= {\mathbb{E}}[CC^\top]^{1/2} {\mathbb{E}}[CC^\top]^{-1} {\mathbb{E}}[CY] \\
& = {\mathbb{E}}[CC^\top]^{1/2} {\mathbb{E}}[CC^\top]^{-1} {\mathbb{E}}[CB^\top ]\theta^0 \\
&= G\theta^0,\end{aligned}$$ where the second to last equality follows from \[eq:ExpansionOfECY\_0\]. Together with \[eq:consitUconfoundedDifferenceConvPtoZero\], we now have that $$\begin{aligned}
\norm{G_n \hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu} - G \theta^0}_2^2
&\leq \norm{G_n \hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu} - H_n}_2^2 +
\norm{H_n - G \theta^0}_2^2 \underset{n\to\infty}{\stackrel{P}{\longrightarrow}} 0.\end{aligned}$$ Furthermore, by the rank assumptions in \[ass:RankCondition\] we have that $G_n\in {\mathbb{R}}^{k\times k}$ is of full rank (with probability tending to one), hence $$\begin{aligned}
\|\hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu} -\theta^0\|_2^2 &= \|G_n^{-1}G_n( \hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu} -\theta^0)\|_2^2 \\
& \leq \|G_n^{-1}\|_{\text{op}}^2 \|G_n( \hat{\theta}^n_{\lambda^\star_n,\eta_0,\mu} -\theta^0) \|_2^2 \\
&\stackrel{P}{\to}\|G^{-1}\|_{\text{op}}^2 \cdot 0 \\
&=0,\end{aligned}$$ as $n\to \infty$, proving the proposition.
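The monotonicity of $\lambda \mapsto l_{\text{TSLS}}^n(\hat{\theta}^n_{\lambda,\eta_0,\mu})$ established in Case (ii) can also be checked numerically. The sketch below does so on a toy confounded linear model; the data-generating process and all constants are ours, purely for illustration, with $\fM$ and $\fK$ taken as identity matrices.

```python
import numpy as np

# toy confounded linear SEM (ours, purely illustrative)
rng = np.random.default_rng(1)
n, delta, gamma = 2_000, 1e-3, 1e-3
A = rng.normal(size=n)
H = rng.normal(size=n)
X = A + H + rng.normal(size=n)
Y = 2.0 * X + H + rng.normal(size=n)
B, C = X[:, None], A[:, None]

# P_delta B and P_delta Y, computed without forming the n x n projection
G = np.linalg.solve(C.T @ C + delta * np.eye(1), C.T @ np.column_stack([B, Y]))
PB, PY = C @ G[:, :1], C @ G[:, 1]

def theta_hat(lam):
    """Exact minimizer of l_OLS + lam * l_TSLS + gamma * l_PEN (K = identity)."""
    A_mat = B.T @ B + lam * PB.T @ PB + gamma * np.eye(1)
    b_vec = B.T @ Y + lam * PB.T @ PY
    return np.linalg.solve(A_mat, b_vec)

def l_tsls(lam):
    """l_TSLS^n at theta_hat(lam): n^{-1} ||P_delta (Y - B theta)||^2."""
    r = Y - (B @ theta_hat(lam)).ravel()
    Pr = C @ np.linalg.solve(C.T @ C + delta * np.eye(1), C.T @ r)
    return float(Pr @ Pr) / n

losses = [l_tsls(lam) for lam in (0.0, 0.1, 1.0, 10.0, 100.0, 1e4)]
# the sequence of losses is weakly decreasing in lambda, as the proof shows
```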
[^1]: This follows from choosing $A$ as an independent noise variable and a constant $g$.
[^2]: This can be assumed without loss of generality if ${\mathcal{F}}$ and ${\mathcal{G}}$ are closed under addition and scalar multiplication, and contain the constant function.
[^3]: For appropriate choices of $h_2$, the model includes settings in which (some of) the $A$ directly influence $Y$.
[^4]: This is without loss of generality if ${\mathcal{F}}$ and ${\mathcal{G}}$ are closed under addition and scalar multiplication, and contain the constant function.
[^5]: For an appropriate choice of $h_2$, the model includes settings in which (some of) the $A$ directly influence $Y$.
[^6]: It is in fact sufficient if the marginal distribution of $X$, ${\mathbb{E}}_{\tilde{M}(i)}[Y\,\vert\, X]$ and ${\mathbb{E}}_{\tilde{M}(i)}[Y^2\,\vert\, X]$ remain fixed for all $\tilde{M}\in\mathcal{M}$ with ${\mathbb{P}}_{\tilde{M}} = {\mathbb{P}}_M$.
[^7]: This may not come as a surprise since without the help of an instrument, it is impossible to distinguish whether a covariate is an ancestor or a descendant of $Y$.
---
author:
-
title: Dense matter with eXTP
---
Introduction
============
The *enhanced X-ray Timing and Polarimetry mission* (*eXTP*) is a mission concept proposed by a consortium led by the Institute of High-Energy Physics of the Chinese Academy of Sciences, envisaged for a launch in the mid 2020s. *eXTP* would carry 4 instrument packages for the 0.5–30 keV bandpass, its primary purpose being to study conditions of extreme density (this paper), gravity [@WP_SG] and magnetism [@WP_SM] in and around compact objects in the Universe. It would also be a powerful observatory for a wider range of astrophysical phenomena since it combines high throughput, good spectral and timing resolution, polarimetric capability and wide sky coverage [@WP_OS].
A detailed description of eXTP’s instrumentation can be found in @WPinstrumentation, but we summarize briefly here. The scientific payload of eXTP consists of the Spectroscopic Focusing Array ([SFA]{}), the Polarimetry Focusing Array ([PFA]{}), the Large Area Detector (LAD), and the Wide Field Monitor ([WFM]{}). The [SFA]{} is an array of nine identical X-ray telescopes covering the energy range 0.5–10 keV with a spectral resolution of better than 180 eV (full width at half maximum, FWHM) at 6 keV, and featuring a total effective area from $\sim$0.7 m$^2$ at 2 keV to $\sim 0.5$ m$^2$ at 6 keV. The [SFA]{} angular resolution is required to be less than 1 arcmin (HPD). In the current baseline, the [SFA]{} focal plane detectors are silicon-drift detectors (SDDs) that combine CCD-like spectral resolutions with very small dead times, and therefore are excellently suited for studies of the brightest cosmic X-ray sources at the smallest time scales. The [PFA]{} consists of four identical X-ray telescopes that are sensitive between 2 and 8 keV with a spectral resolution of 1.1 keV at 6 keV (FWHM), have an angular resolution better than $\sim30$ arcsec (HPD) and a total effective area of $\sim 900$ cm$^2$ at 2 keV (including the detector efficiency). The [PFA]{} features Gas Pixel Detectors (GPDs) to allow polarization measurements in the X-rays. It reaches a minimum detectable polarization (MDP) of 5% in 100 ks for a source with a Crab-like spectrum of flux $3\times10^{-11}$ erg s$^{-1}$ cm$^{-2}$ (i.e. about 1 milliCrab). The [LAD]{} has a very large effective area of $\sim 3.4$ m$^2$ at 8 keV, obtained with non-imaging SDDs, active between 2 and 30 keV with a spectral resolution of about 260 eV and collimated to a field of view of 1$^\circ$ FWHM. The [LAD]{} and the [SFA]{} together reach an unprecedented total effective area of more than 4 m$^{2}$.
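The quoted MDP can be related to source and background count rates through the standard minimum-detectable-polarization formula, $\mathrm{MDP}_{99} = 4.29/(\mu R_S)\,\sqrt{(R_S+R_B)/T}$, where $\mu$ is the modulation factor of the polarimeter. A sketch follows; the modulation factor and count rates used are illustrative assumptions on our part, not official eXTP performance numbers.

```python
from math import sqrt

def mdp99(mu, rate_src, rate_bkg, t_exp):
    """Minimum detectable polarization at 99% confidence (standard formula):
    MDP99 = 4.29 / (mu * R_S) * sqrt((R_S + R_B) / T)."""
    return 4.29 / (mu * rate_src) * sqrt((rate_src + rate_bkg) / t_exp)

# illustrative inputs only (assumed, not official eXTP values):
# a ~1 mCrab source giving ~2 counts/s, a small background, a 100 ks exposure
mdp = mdp99(mu=0.3, rate_src=2.0, rate_bkg=0.1, t_exp=100e3)
```

The $1/\sqrt{T}$ scaling is what makes deep exposures pay off: quadrupling the exposure halves the MDP.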
The science payload is completed by the [WFM]{}, consisting of 6 coded-mask cameras covering about 4 sr of the sky at an expected sensitivity of 2.1 mCrab for an exposure time of 50 ks in the 2 to 50 keV energy range, and reaching a typical sensitivity of 0.2 mCrab when combining 1 yr of observations outside the Galactic plane. The instrument will feature an angular resolution of a few arcminutes and an energy resolution of about 300 eV at 6 keV (FWHM).
The nature of matter under conditions of extreme density and stability, found only in the cores of neutron stars (NSs), remains an open question. eXTP’s capabilities will allow us to statistically infer global properties of NSs (such as their mass and radius) to within a few percent. This information can then be used to infer the equation of state of the matter in the NS interior, and the nature of the forces between fundamental particles under such extreme conditions. This White Paper outlines the current state of our understanding of dense matter physics, the techniques that eXTP will exploit, and the advances that we expect.
The nature of dense matter
==========================
![image](NScutsimple3.pdf){width="100.00000%"}
One of the overarching goals of modern physics is to understand the nature of the fundamental interactions. Here we focus on the strong interaction, which controls the properties of both atomic nuclei and NSs, where gravity compresses the material in the core of the star to extreme nuclear densities (Figure \[fig:nscut\]). NSs are remarkable natural laboratories that allow us to investigate the constituents of matter and their fundamental interactions under conditions that cannot be reproduced in any terrestrial laboratory, and to explore the phase diagram of quantum chromodynamics (QCD) in a region which is presently inaccessible to numerical calculations [@Fukushima11].
![image](Trhoasym3.pdf){width="100.00000%"}
The quest to test the state of matter under the most extreme conditions and to determine the equation of state (EOS) encompasses both laboratory experiments and astronomical observations of stars (Fig. \[fig:trho\]). Heavy-ion collision experiments currently under way at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven and at the Large Hadron Collider (LHC) at CERN can probe the high temperature and low density region of the strongly interacting matter phase diagram. The next generation of heavy-ion colliders such as the Facility for Antiproton and Ion Research (FAIR) at GSI in Darmstadt, and the Nuclotron-based Ion Collider fAcility (NICA) at JINR in Dubna will be able to probe high temperature and dense matter (up to $\sim 4$ [$\rho_\mathrm{sat}$]{}, see Figure \[fig:nscut\]) and to search for the possible existence of a critical endpoint of a first-order quark deconfinement phase transition. Laboratory EOS constraints through heavy ion collisions will also be pursued at rare isotope facilities such as RIKEN/RIBF and FRIB (where the collisions will have less energy but more neutron richness).
Neutron stars, by contrast, access a unique region of the QCD phase diagram at low temperature ($T \ll 1$ MeV, reached a few minutes after the NS is born) and high density (up to $\sim 10$ [$\rho_\mathrm{sat}$]{}) which cannot be explored in the laboratory. In the simplest picture the core of a NS is modeled as a uniform charge-neutral fluid of neutrons $n$, protons $p$, electrons $e^-$ and muons $\mu^-$ in equilibrium with respect to the weak interaction ($\beta$-stable nuclear matter). Even in this simplified picture, the determination of the EOS from the underlying nuclear interactions is a formidable theoretical problem. One has to calculate the EOS under extreme conditions of high density and high neutron-proton asymmetry [see for example @Hebeler15 and Figure \[fig:trho\]], in a regime where the properties of nuclear forces are poorly constrained by nuclear data and experiments.
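The $\beta$-equilibrium condition $\mu_n = \mu_p + \mu_e$, combined with charge neutrality $n_p = n_e$, fixes the composition at each baryon density. As a deliberately oversimplified illustration (real EOS calculations must include the poorly constrained interactions discussed above, and muons are neglected here), the sketch below solves these two conditions for noninteracting relativistic Fermi gases; the bisection solver and all tolerances are our own choices, and the free-gas proton fraction near saturation density comes out at the percent level or below.

```python
from math import pi

HBARC = 197.327                            # MeV fm
M_N, M_P, M_E = 939.565, 938.272, 0.511    # masses in MeV

def mu(n_dens, mass):
    """Chemical potential of a free relativistic Fermi gas at density n (fm^-3)."""
    pf = HBARC * (3.0 * pi**2 * n_dens) ** (1.0 / 3.0)   # Fermi momentum (times c)
    return (pf**2 + mass**2) ** 0.5

def proton_fraction(n_b):
    """Solve mu_n = mu_p + mu_e with n_p = n_e by bisection on x_p = n_p/n_b."""
    def f(x):
        return mu((1.0 - x) * n_b, M_N) - mu(x * n_b, M_P) - mu(x * n_b, M_E)
    lo, hi = 1e-8, 0.5                     # f(lo) > 0 > f(hi) brackets the root
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

x_p = proton_fraction(0.16)                # at nuclear saturation density
```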
Due to the large central densities, additional constituents, such as hyperons (@Glendenning85, @Chatterjee16) or a quark deconfined phase of matter (see for example @Glendenning96 and @Bombaci2016), may also form. The reason for hyperon formation is simple: the stellar constituents $npe^-\mu^-$ are different species of fermions, so due to the Pauli principle their Fermi energies (chemical potentials) are very rapidly increasing functions of the density. Above some threshold density ($\sim$ 2–4 [$\rho_\mathrm{sat}$]{}) it is energetically favorable to form hyperons via the strangeness-changing weak interaction. This means that there may be different types of compact stars - nucleonic, hyperonic, hybrid or quark - the latter two containing deconfined up-down-strange quark matter in their cores.
Various superfluid states produced through Cooper pairing (caused by an attractive component of the baryon-baryon interaction) are also expected. For example, a neutron superfluid (due to neutron-neutron pairing in the $^1S_0$ channel) is expected in the NS inner crust. Many possible color superconducting phases of quark matter are also expected [@Alford08] in quark deconfined matter. Matter may also be characterized by the formation of different crystalline structures [@Anglani14; @Buballa15]. These superfluid, color superconducting and crystalline phases of matter are of crucial importance for modeling NS cooling and pulsar glitches.
![image](eos.png){width="48.00000%"} ![image](MR.png){width="48.00000%"}
NS parameters can be connected to strong-interaction physics because the forces between the nuclear particles set the stiffness of NS matter [@Lattimer16]. This is encoded in the EOS, the thermodynamical relation between pressure, energy density and temperature. The EOS of dense matter is a basic ingredient for modeling various astrophysical phenomena related to NSs, including core-collapse supernovae and binary neutron star mergers (note that for most neutron star scenarios – except immediately after formation or merger – we can consider the temperature to be effectively zero). The EOS and NS rotation rate set the gravitational mass M and equatorial radius R via the stellar structure equations. By measuring and then inverting the M-R relation, we can thus recover the EOS [@Lindblom92; @Ozel09; @Riley18 and Figure \[fig:eostomr\]]. To distinguish the models shown in Figure \[fig:eostomr\], one needs to measure M and R to precisions of a few percent, for multiple sources with different masses.
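For a nonrotating star, the stellar structure equations referred to here are the Tolman–Oppenheimer–Volkoff (TOV) equations, $dm/dr = 4\pi r^2 \epsilon$ and $dp/dr = -(\epsilon+p)(m+4\pi r^3 p)/[r(r-2m)]$ in units $G=c=1$. The sketch below integrates them for a $\Gamma=2$ polytrope, a standard numerical test case; the EOS parameters and central density are illustrative choices, not one of the EOS models in the figure.

```python
from math import pi

# Geometric units G = c = M_sun = 1; the length unit is then G M_sun / c^2 = 1.4766 km.
K_POLY, GAMMA = 100.0, 2.0      # polytropic EOS p = K rho^Gamma (classic test-case values)
KM_PER_UNIT = 1.4766

def tov_rhs(r, m, p):
    """TOV right-hand sides dm/dr, dp/dr for the polytropic EOS."""
    rho = (max(p, 0.0) / K_POLY) ** (1.0 / GAMMA)   # rest-mass density
    eps = rho + max(p, 0.0) / (GAMMA - 1.0)         # total energy density
    dm = 4.0 * pi * r**2 * eps
    dp = -(eps + p) * (m + 4.0 * pi * r**3 * p) / (r * (r - 2.0 * m))
    return dm, dp

def solve_tov(rho_c, dr=1e-3):
    """Integrate outwards from the centre until the pressure vanishes."""
    p = K_POLY * rho_c**GAMMA
    eps_c = rho_c + p / (GAMMA - 1.0)
    r = dr
    m = 4.0 / 3.0 * pi * r**3 * eps_c               # start just off the centre
    while p > 1e-12:
        dm1, dp1 = tov_rhs(r, m, p)                 # midpoint (RK2) step
        dm2, dp2 = tov_rhs(r + 0.5 * dr, m + 0.5 * dr * dm1, p + 0.5 * dr * dp1)
        m, p, r = m + dr * dm2, p + dr * dp2, r + dr
    return m, r * KM_PER_UNIT                       # mass (M_sun), radius (km)

mass, radius_km = solve_tov(rho_c=1.28e-3)          # yields an NS-scale star
```

Repeating this for a sequence of central densities traces out the M-R curve of the chosen EOS, which is exactly the forward map that must be inverted to recover the EOS from data.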
Most efforts to date to measure the M-R relation have involved modelling the spectra of thermonuclear X-ray bursts and quiescent low-mass X-ray binaries (see for example @Suleimanov11, @Ozel13, @Steiner13, @Guillot14, @Nattila16, @Nattila17, @Steiner17). The constraints obtained so far are weak. The technique also suffers from systematic errors of at least 10% in absolute flux calibration, and uncertainties in atmospheric composition, residual accretion, non-uniform emission, distance, and identification of photospheric touchdown point for bright bursts that exhibit Photospheric Radius Expansion (PRE) (see the discussions in @Miller13a, @Heinke14, @Poutanen14). The planned ESA L-class mission Athena has the right energy band to exploit this technique [@Motch13]: however the systematic uncertainties will remain. The X-ray timing instrument NICER (the Neutron Star Interior Composition Explorer, see @Arzoumanian14), which was installed on the International Space Station in 2017 and which will instead use the pulse-profile modelling technique, is discussed in more detail in Section \[rpp\].
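At the heart of the spectral-fitting technique is the blackbody relation $F = \sigma T_\infty^4 (R_\infty/d)^2$, with apparent radius $R_\infty = R/\sqrt{1-2GM/(Rc^2)}$, so the distance and calibration uncertainties listed above propagate directly into the inferred radius. A round-trip sketch with illustrative numbers (all values assumed, not fits to real data; the fixed-point solver is our own):

```python
from math import sqrt

SIGMA_SB = 5.6704e-5                               # erg cm^-2 s^-1 K^-4
G, C_LIGHT, M_SUN = 6.674e-8, 2.998e10, 1.989e33   # cgs
KPC = 3.086e21                                     # cm

def apparent_radius_km(flux, t_inf, dist_cm):
    """R_inf from the blackbody relation F = sigma T_inf^4 (R_inf / d)^2."""
    return dist_cm * sqrt(flux / (SIGMA_SB * t_inf**4)) / 1e5

def true_radius_km(r_inf_km, mass_msun):
    """Undo the gravitational redshift: R_inf = R / sqrt(1 - 2GM/(R c^2))."""
    r_inf = r_inf_km * 1e5
    rs = 2.0 * G * mass_msun * M_SUN / C_LIGHT**2  # Schwarzschild radius, cm
    r = r_inf                                      # fixed-point iteration for R
    for _ in range(200):
        r = r_inf * sqrt(1.0 - rs / r)
    return r / 1e5

# round trip with assumed values: R = 12 km, M = 1.4 M_sun, d = 8 kpc, T_inf = 1.5e7 K
rs_km = 2.0 * G * 1.4 * M_SUN / C_LIGHT**2 / 1e5
r_inf_true = 12.0 / sqrt(1.0 - rs_km / 12.0)
flux = SIGMA_SB * (1.5e7) ** 4 * (r_inf_true * 1e5 / (8.0 * KPC)) ** 2
r_inf_fit = apparent_radius_km(flux, 1.5e7, 8.0 * KPC)
r_fit = true_radius_km(r_inf_fit, 1.4)
```

The quadratic dependence on $d$ and quartic dependence on $T_\infty$ make clear why distance and calibration systematics dominate the error budget of this method.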
Constraints have also been derived via radio pulsar timing, where the masses of NSs in compact binaries can be measured very precisely: high mass stars yield the strongest EOS constraints. However, even the discovery of pulsars with masses $\approx 2$ M$_\odot$ (@Demorest10, @Antoniadis13, @Fonseca16) has left a broad range of EOS viable, producing radii ranging from 10–14 km for a typical 1.4 M$_\odot$ NS [@Hebeler13]. The next generation of radio telescopes (the Square Kilometer Array and its precursors) will deliver improved mass measurements. Precision radius measurements, however, will be more challenging: there is only one system, the Double Pulsar, for which we expect a radius measurement with $\sim$ 5–10% accuracy (via its moment of inertia) within the next 20 years (@Lattimer05, @Kramer09, @Watts15).
![image](eXTP_whitePaper_figure_a.png){width="90.00000%"}
The gravitational wave telescopes Advanced LIGO [@LIGO15] and Advanced VIRGO [@Acernese15] have now made the first direct detection of a binary NS merger [@Abbott17]. Gravitational waves from the late inspirals of binary NSs are sensitive to the EOS, with departures from the point particle waveform due to tidal deformation encoding information about the EOS [@Read09]. The statistical constraints from the first detection are comparable to and in agreement with those obtained from X-ray spectral fitting. In the event of a very high signal to noise event, Advanced LIGO/VIRGO may be able to constrain R to $\sim 10$% (@Read13, @Hotokezaka16). More realistic estimates indicate a few tens of detections are likely to be required to reach this level of accuracy (@Delpozzo13, @Agathos15, @Lackey15, @Chatziioannou15). There may also be systematic errors of comparable size due to approximations made or higher-order terms neglected in the templates [@Favata14; @Lackey15]. The coalescence can also excite post-ringdown oscillations in the hypermassive NS remnant that may exist very briefly before collapse to a black hole. These oscillations are sensitive to the finite temperature EOS [@Bauswein12; @Bauswein14; @Takami14], but detection will be difficult because there are no complete waveform models for the pre- and post-merger signal [@Clark16]. The eventual detection of NS-black hole binary mergers may also yield EOS constraints (see for example @Lackey14). See @WP_OS for other aspects of compact object merger astrophysics where eXTP can provide information on electromagnetic counterparts.
The large area and spectral-timing-polarimetric capabilities of eXTP open up new techniques and different sources to constrain the dense matter EOS, which should allow us to measure M and R to within a few percent. In the Sections that follow, we outline the various techniques that eXTP will use to measure the dense matter EOS, and explore its expected performance in more detail.
Pulse profile modelling {#ppm}
=======================
Basic principles of pulse profile modelling {#ppmb}
--------------------------------------------
Pulse profile modelling exploits localised, radiatively intense regions (hereafter ‘hotspots’, specific examples of which will be discussed in subsequent sections) that can develop on the NS. As the star rotates, a hotspot generates an observable pulsation in X-rays. Prior to observation, the photons propagate through the curved exterior spacetime of the spinning compact star. Extensive work on propagation of electromagnetic radiation through such spacetimes has now quantified fully the relativistic effects on the photons, and thus on the pulse profile (@Pechenick83, @Miller98, @Poutanen03, @PoutanenBeloborodov06, @Cadeau07, @Morsink07, @Baubock13, @Psaltis14a, @NattilaPihajoki17); the simulations in Figure \[fig:gr1\] illustrate such observables, using a realistic Schwarzschild exterior spacetime.
![image](eXTP_whitePaper_figure_b.png){width="90.00000%"}
Strictly, the Schwarzschild exterior spacetime is exact only for spherically symmetric (stress-isotropic, non-rotating) stars. However, the mathematical structure of both the interior and exterior spacetime of a spinning NS is well understood in general relativistic gravity; high-accuracy spacetimes for rapidly spinning NSs can be computed numerically, albeit expensively (for a review, see @Stergioulas03). For families of EOS, there exist (numerically computed) approximate universalities relating first-order (and higher) spacetime structure to the lowest-order properties – specifically, the mass monopole moment, the (circumferential) equatorial radius, and the spin frequency (see for example the review of @Yagi16). In order to simulate observable radiation for the application of statistical inference, various approximations are employed which demonstrably reduce computation time. Given universal relations, one typically embeds an oblate surface – from which radiation emanates – in an ambient (exterior) spacetime, and either: (i) exploits spherical symmetry of the exterior (Schwarzschild) solution (see for example @Morsink07); or (ii) permits axisymmetry, but neglects structure beyond second-order in a metric expansion in terms of a natural variable (see for example @Baubock12 or @NattilaPihajoki17 and references therein). The accuracy of these approximations is well understood (see the discussion in @Watts16); embedding an oblate star in an ambient Schwarzschild spacetime introduces negligible systematic errors in the best-fit masses and radii at spin rates typical for observed millisecond pulsars. We expect the statistical uncertainty incurred due to noise in eXTP observations to dominate systematic biases which would arise from low-order exterior spacetime approximation. In practice this should be proven for each relevant generative model via blind parameter estimation studies, given synthetic data generated using a higher-order exterior spacetime.
Nevertheless, in the coming years, algorithmic advances which improve both numerical likelihood evaluation speeds (via, e.g., extensive GPU exploitation) and Bayesian posterior sampling efficiencies may permit us to condition on generative models using higher-order exterior spacetimes.
![image](extp_amps_polarization.pdf){width="90.00000%"}
We now describe in a simplified manner how relativistic effects encode information on M and R. General relativistic (hereafter GR) light-bending, which is highly sensitive to compactness M/R in the near vicinity of the NS, directly affects both the amplitude of the pulsations and photon time-delays from distinct points on the NS surface. Gravitational redshifting of photons is also entirely dependent on the compactness, and manifests principally in the energy-dependent normalisation of the pulse profile. Relativistic beaming introduces asymmetry (harmonic content) in the pulse profile; locally, beaming depends on the projected velocity of the (relativistically moving) hotspot along a light-ray connecting a point on the local NS surface to the observer. The functional form of the local speed contains R and the (asymptotic) spin frequency of the uniformly rotating star; these two parameters are degenerate with respect to influence on local beaming. However, the spin frequency can be accurately measured from the observed pulse frequency, thus breaking this degeneracy and increasing the statistical constraining power on R. Figure \[fig:gr2\] demonstrates the sensitivity of the observable to changing R alone, with all other model parameters fixed. The beaming is also sensitive to local time dilation at the NS surface, which is in turn sensitive to the compactness (M/R). The pulse profile model enters in a generative model for telescope photon data, and thus — via existing fitting algorithms — yields a statistical constraint on M and R.
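As a simple illustration of how strongly the compactness enters these observables, the surface gravitational redshift $1+z = (1 - 2GM/Rc^2)^{-1/2}$, which scales the observed pulse-profile normalisation, can be evaluated directly; the masses and radii below are illustrative values, not measurements.

```python
import math

G, C, MSUN = 6.674e-11, 2.998e8, 1.989e30   # SI units

def surface_redshift(m_msun, r_km):
    """Gravitational redshift z at the surface of a (non-rotating) NS."""
    compactness = 2.0 * G * m_msun * MSUN / (r_km * 1e3 * C**2)   # 2GM/(Rc^2)
    return 1.0 / math.sqrt(1.0 - compactness) - 1.0

# A 40% change in radius at fixed mass changes z by more than 50%:
print(surface_redshift(1.4, 10.0))   # ~0.31
print(surface_redshift(1.4, 14.0))   # ~0.19
```

This sensitivity of the redshift (and of the closely related light-bending and time-dilation factors) to M/R is what makes pulse profile modelling a viable route to the compactness.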
Naturally, there are additional model parameters affecting the pulse profile, which must be properly marginalised over when statistically inferring M and R. These include the specific details of the photospheric comoving radiation field (as a function of surface coordinates, emission direction, and energy), and *a priori* unknown geometrical factors (the hotspot size, shape, and colatitude $\theta$; observer inclination $i$), and emission from the rest of the star and disk, which may also exhibit pulsations [@Poutanen08]. However, the resulting degeneracies can be broken, allowing successful recovery of M and R (@Lo13, @Psaltis14b, @Miller15, @Stevens16). Knowledge of the geometrical factors (enabled by the polarimetry capabilities of eXTP) further improves statistical constraining power via degeneracy breaking: M, R, and the nuisance parameters all enter in generative models for additional observable quantities.
Radiation emitted by hotspots is expected to be linearly polarised because the opacity is dominated by electron scattering [@Viironen04]. Both the observed polarization degree (PD) and polarization angle (PA) change with the rotational phase $\phi$ following variations of the angle between the spot normal and the line-of-sight and of the position angle of the projection of the hotspot normal on the sky (see Fig. \[fig:amps\_polarization\]). The variation of PA $\chi$ can be well described by the rotating vector model [@Radhakrishnan69]: $$\label{eq:PA_RVM}
\tan\chi =-\frac{\sin \theta\ \sin \phi}
{\sin i\ \cos \theta - \cos i\ \sin \theta\ \cos \phi }.$$ This formula can be corrected for rapid rotation [@Ferguson73; @Ferguson76] and gravitational light bending, but these effects are non-negligible only for spins in excess of 500 Hz [@Viironen04]. The phase-dependence of the PA allows us to constrain both angles $i$ and $\theta$.
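Equation \[eq:PA\_RVM\] can be evaluated directly. A minimal sketch follows (the inclination and colatitude used are arbitrary example values); `arctan2` is used rather than `arctan` so that the PA swing is tracked through all four quadrants:

```python
import numpy as np

def pa_rvm(phi, i, theta):
    """Polarization angle chi(phi) from the rotating vector model.

    phi: rotational phase (rad); i: observer inclination; theta: spot colatitude.
    """
    num = -np.sin(theta) * np.sin(phi)
    den = np.sin(i) * np.cos(theta) - np.cos(i) * np.sin(theta) * np.cos(phi)
    return np.arctan2(num, den)   # resolves the quadrant ambiguity of tan(chi)

# Example geometry (assumed values): i = 60 deg, theta = 10 deg.
phi = np.linspace(0.0, 2.0 * np.pi, 200)
chi = pa_rvm(phi, np.radians(60.0), np.radians(10.0))
print(f"PA swing over one rotation: {np.degrees(chi.max() - chi.min()):.1f} deg")
```

Fitting a curve of this form to the phase-resolved PA is what yields the constraints on $i$ and $\theta$; rapid-rotation and light-bending corrections, relevant above $\sim$500 Hz, are omitted here.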
![image](mr_error2.png){width="90.00000%"}
Accretion-powered millisecond pulsars {#sec:AMPS}
-------------------------------------
Accretion-powered millisecond pulsars (AMPs) contain weakly magnetised NSs (with $B\sim 10^8$–$10^9$ G) accreting matter from a typically rather small companion star [@Patruno12]. We now know of 16, all transients that go into outbursts every few years. NSs in these systems have been spun up by accretion up to millisecond periods. Close to the NS, the accreting matter follows the magnetic field lines hitting the surface close to the magnetic poles. The resulting shockwave heats the electrons to $\sim$30–60 keV producing X-ray radiation by thermal Comptonization in a slab of Thomson optical depth of order unity [@Poutanen06]. Rotation of the hotspot causes modulation of the observed flux with the pulsar phase because of the evolving solid angle subtended on the observer’s sky, as well as Doppler boosting. As the observed pulsations indicate that the shock covers only a small part of the NS surface, the scattered radiation should be linearly polarized up to 20%, depending on the pulse phase, the photon energy and the geometry of the system. In addition to the emission from the shock, pulsating thermal emission from the heated NS surface is seen at lower energies. In the peaks of the outbursts, when the accretion rate is high, the pulse profiles are usually very stable and rather sinusoidal, with a harmonic content growing towards higher energies as a result of the stronger contribution of the Comptonized emission, which has a more anisotropic emission pattern. The pulse shape implies that only a single hotspot is seen, while the secondary pole is blocked by the accretion disk. The pulse stability allows the collection of millions of photons under constant conditions.
One of the challenges for modelling pulse profiles from AMPs is the absence of first-principles models that predict the emission pattern from the shock. The angular dependence, therefore, has to be parametrized, based on models of radiation transfer in an optically thin slab of hot electrons. Degeneracies among the model parameters did not allow strong constraints on M and R using existing data from the [*Rossi X-ray Timing Explorer*]{} (RXTE) (see @Poutanen03, @Leahy08 [@Leahy09; @Leahy11]). The LAD on eXTP would allow the collection of many more photons, and significant improvement in the constraints on M and R. Furthermore, in a 100 ks observation of a bright AMP such as SAX J1808.4–3658 or XTE J1751–305 the X-ray polarimeter onboard eXTP can measure polarisation in 10 phase bins at the 3$\sigma$ level and thus determine the basic geometrical parameters such as spot colatitude and observer inclination (Fig. \[fig:amps\_polarization\]). This not only improves the constraints on M and R (see solid orange contour in Fig. \[fig:extp\_eos\]), but allows an independent check of the fitting procedure based on the pulse profile alone.
Observations with the LAD of the PRE bursts from the AMPs and analysis of their spectral evolution in the cooling tail give independent M-R constraints (see for example @Suleimanov11, @Poutanen14, @Nattila16, @Nattila17). Using the currently most accurate method to directly fit atmosphere spectral models to the data [@Nattila17], one would be able to reduce the error in radius to just a few %, allowing us to put strong constraints on the EOS of cold dense matter (see dotted orange contour in Fig. \[fig:extp\_eos\]).
Burst oscillation sources
-------------------------
Hotspots that form during thermonuclear explosions on accreting NSs give rise to pulsations known as burst oscillations (@Strohmayer06b, @Galloway08). The mechanism responsible for burst oscillations remains unknown: flame spread, uneven cooling, or even surface modes may play a role [see @Watts12 for a review]. However burst oscillation sources are particularly attractive for M-R measurement in that they are numerous (increasing the odds of sampling a range of masses), have a well-understood thermal spectrum (@Suleimanov11b, @Miller13b), and offer multiple opportunities for independent cross-checks using complementary constraints [@Bhattacharyya05b; @Chang05; @Lo13], thereby reducing systematic errors. Detailed studies have shown that accuracies of a few % in M and R can be obtained with 10$^6$ pulsed photons (@Lo13, @Psaltis14b, @Miller15). In addition the technique is robust, with clear flags if any of the assumptions made during the fitting process are breached.
To estimate the observing time that eXTP would require to obtain measurements of M, R at the few % level for known sources we can scale from the burst fluxes, burst oscillation amplitudes, burst recurrence times and the percentage of bursts with oscillations observed by RXTE. For the persistent burst oscillation sources 4U 1636–536 and 4U 1728–34 we would require 350 ks and 375 ks respectively. For burst oscillations from the transient AMPs SAX J1808.4–3658 and XTE J1814–338 we would require 490 ks and 275 ks respectively. These observing times are substantial, but feasible. Burst oscillations from AMPs are particularly useful since the M-R measurements they generate can be compared to the results obtained from pulse profile fitting of accretion powered pulsations from the same sources (Sect. \[sec:AMPS\]). In addition the constraints on system geometry (inclination) acquired from the phase-dependence of the polarization of the persistent emission can also be used in fitting the burst oscillations, reducing uncertainties on M and R. Additional constraints for burst oscillation sources will also come from spectral fitting of strong bursts showing PRE (see Sect. \[sec:AMPS\] and @WP_OS).
![image](RPP.png){width="90.00000%"}
Rotation-powered pulsars {#rpp}
------------------------
*NICER* is a NASA Explorer Mission of Opportunity carrying a soft X-ray timing instrument [@Arzoumanian14] that was installed on the International Space Station in June of 2017. *NICER* applies the pulse profile modelling technique to X-ray emitting rotation-powered millisecond pulsars (MSPs) [@Bogdanov08]. Since *NICER*’s targets rotate relatively slowly ($\sim$200 Hz), the measurements cannot rely on well-understood Doppler effects to break degeneracies between M and R. Nevertheless, if the surface radiation field and mass of the neutron star are known *a priori*, *NICER* could in principle achieve an accuracy of $\sim$2% in R (@Gendreau12, @Bogdanov13). The mass is now known to 5% accuracy for *NICER*’s main target, PSR J0437$-$4715 [@Reardon16], but is not yet known for its other top targets. The surface radiation field depends on the pulsar mechanism and is at present not well constrained, although theoretical work to address this topic is underway.
*NICER* has a peak effective area at 1 keV of 1800 cm$^2$. *eXTP* will be a factor of 4–5 larger in the soft waveband, enabling it to measure energy-resolved pulse waveforms of the nearest pulsars such as PSR J0437$-$4715 [@Bogdanov13] and J0030$+$0451 [@Bogdanov09] more efficiently than *NICER*, thus producing improved constraints on M-R. Perhaps more importantly, the larger collecting area and significantly lower background of the *eXTP* SFA will enable studies of fainter MSPs that are not accessible with *NICER*. Of great interest are nearby MSP binaries with precise measurements of the NS mass from radio pulse timing. These include PSR J1614$-$2230 with M$=1.928\pm0.017$ M$_{\odot}$ [@Fonseca16], PSR J2222$-$0137 [M$=1.20\pm0.14$ M$_{\odot}$; @Kaplan14], PSR J0751$+$1807 [M$=1.64\pm0.15$ M$_{\odot}$; @Desvignes16], PSR J1909$-$3744 [M$=1.54\pm0.027$ M$_{\odot}$; @Desvignes16]. The broad range of masses spanned by these systems is particularly beneficial for mapping out the dependence of R on M. Figure \[fig:rpp\] shows the level of constraints achievable within $\sim$ 1 Ms exposure times with eXTP for these sources.
Spin measurement {#Spinmeas}
================
NSs with the fastest spins constrain the EOS since the limiting spin rate, at which the equatorial surface velocity is comparable to the local orbital velocity and mass-shedding occurs, is a function of M and R (Figure \[fig:spincons\]). Softer EOS have smaller R for a given M, and hence have higher limiting spin rates. More rapidly spinning NSs place increasingly stringent constraints on the EOS. The current record holder (the MSP PSR J1748–2446ad in the Globular Cluster Terzan 5), which spins at 716 Hz [@Hessels06], does not rotate rapidly enough to rule out any EOS models. However the discovery of a NS with a sub-millisecond spin period would place a strong and clean constraint on the EOS. There are prospects for finding more rapidly spinning NSs in future radio surveys [@Watts15], however since the standard formation route for the MSPs is via spin-up due to accretion (@Alpar82, @Radhakrishnan82, @Bhattacharya91), it is clear that we should look in the X-ray as well as the radio, and theory has long suggested that accretion could spin stars up close to the break-up limit [@Cook94b]. Interestingly the drop-off in spin distribution at high spin rates seen in the MSP sample is not seen in the current (albeit much more limited) sample of accreting NSs [@Watts16].
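The mass-shedding argument can be sketched with a Newtonian estimate: the equatorial Keplerian frequency $\nu_{\rm K} = (1/2\pi)\sqrt{GM/R^3}$ bounds the spin. Fully relativistic structure calculations for rotating (oblate) stars lower this bound substantially; the empirical coefficient of $\sim$1.08 kHz used below is a representative literature value, quoted here as an assumption.

```python
import math

G, MSUN = 6.674e-11, 1.989e30   # SI units

def nu_kepler_newtonian(m_msun, r_km):
    """Newtonian equatorial Keplerian (mass-shedding) frequency in Hz."""
    return math.sqrt(G * m_msun * MSUN / (r_km * 1e3) ** 3) / (2.0 * math.pi)

def nu_max_empirical(m_msun, r_km):
    """Empirical mass-shedding limit for relativistic, oblate stars.

    Coefficient ~1.08 kHz assumed from published structure calculations;
    M and R refer to the non-rotating configuration.
    """
    return 1080.0 * math.sqrt(m_msun) * (r_km / 10.0) ** -1.5

# A 716 Hz spin (PSR J1748-2446ad) only weakly constrains R for M ~ 1.4 Msun:
print(nu_kepler_newtonian(1.4, 10.0))   # ~2170 Hz
print(nu_max_empirical(1.4, 10.0))      # ~1280 Hz
```

Inverting the empirical relation shows why a sub-millisecond spin would be so constraining: the faster the spin, the smaller the maximum radius allowed at a given mass.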
![image](MR_maxspin.png){width="80.00000%"}
Since eXTP would have a larger effective area than all preceding X-ray timing missions [see @WPinstrumentation for a comparison], it is well suited to discover many more NS spins, using both burst oscillations and accretion-powered pulsations. We know from RXTE that the latter can be highly intermittent (@Galloway07, @Casella08, @Altamirano08), perhaps due to the way that accretion flows are channeled onto weakly magnetized NSs [@Romanova08], or because these systems are close to alignment [@Lamb09b]. In addition, weak persistent pulsations are expected in systems where magnetic field evolution as accretion progresses has driven the system towards alignment [@Ruderman91]. Searches for weak pulsations can exploit the sophisticated semi-coherent techniques being used for the *Fermi* pulsar surveys (@Atwood06, @Abdo09, @Messenger11, @Pletsch12), which compensate for orbital Doppler smearing.
eXTP will be able to detect burst oscillations in individual Type I X-ray bursts to amplitudes of 0.4% (1.3%) rms in the burst tail (rise) assuming a 10s (1s) integration time; by stacking bursts, sensitivity improves. In estimating detectability of accretion-powered pulsations with eXTP we consider three source classes: bright (e.g. Sco X-1), moderate (e.g. Aql X-1) and faint (e.g. XTE J1807–294)[^1]. We consider both coherent and semi-coherent searches. Coherent searches use a simple FFT in a short data segment so that we do not lose coherence of the signal as a consequence of Doppler shifts induced by the orbital motion. We consider a duration of 128s, comparable to the duration of intermittent pulsation episodes seen in Aql X-1 [@Casella08]. Under these assumptions, eXTP will be able to perform a coherent search for intermittent pulsations down to amplitudes of 0.04% rms (bright), 0.3% rms (moderate), 1.9% rms (faint) (5 $\sigma$ single trial limits).
For semi-coherent searches, we assume a 10 ks long observation, which need not be continuous, and coherence lengths (the segment over which we can search for individual trains of coherent pulsations) of 256 s and 512 s. These assumptions are extremely conservative, and we would expect to be able to do better than this for many of our target sources, for which we know orbital parameters, reducing the number of templates to be searched. For this type of search eXTP would be sensitive down to amplitudes of 0.01% rms (bright), 0.1% rms (moderate), and 0.6% rms (faint) (5 $\sigma$ single trial limits).
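Sensitivities of this kind follow from standard FFT search statistics: for Leahy-normalised powers, pure noise follows a $\chi^2$ distribution with 2 degrees of freedom, while a sinusoidal pulsation of fractional rms amplitude $a$ in $N$ photons yields an expected signal power $\approx N a^2$. A minimal sketch of the single-segment coherent case (the count rate inserted is an assumed round number, not a value from the text):

```python
import math

def leahy_threshold(n_sigma=5.0):
    """Single-trial detection threshold: Prob(P_noise > P0) = exp(-P0/2)."""
    p_false = 0.5 * math.erfc(n_sigma / math.sqrt(2.0))   # Gaussian tail probability
    return -2.0 * math.log(p_false)

def min_detectable_rms(count_rate, t_seg, n_sigma=5.0):
    """Minimum fractional rms amplitude for a coherent FFT of one segment."""
    n_photons = count_rate * t_seg
    return math.sqrt(leahy_threshold(n_sigma) / n_photons)

# e.g. an assumed 3e5 counts/s source searched in a 128 s coherent segment:
print(f"{100 * min_detectable_rms(3e5, 128.0):.3f}% rms")
```

Semi-coherent searches stack many such segments incoherently, trading a higher effective threshold against the much larger total photon count, which is why the quoted semi-coherent limits are deeper than the 128 s coherent ones.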
eXTP can also conduct blind searches of nearby (less than 3 kpc) *Fermi* LAT sources that are suspected eclipsing redback and transitional MSP binaries similar to the canonical “missing link” PSR J1023+0038 [@Archibald15] and XSS J2124–3358 [@Bassa14]. There are a handful of candidates that seem to be undetectable even in deep radio pulsation searches, but are by all other accounts strong redback MSP candidates. The 716 Hz MSP in Terzan 5 is actually one of these eclipsing redback binaries, so conceivably some of these *Fermi* sources may be harboring even faster MSPs.
Constraints from accretion flows in the disks of NS Low Mass X-ray Binaries
===========================================================================
The advanced timing and polarimetry capabilities of eXTP will also enable other methods that could constrain the EOS for accreting NSs. The methods outlined in this section are derived from phenomena associated with the inner parts of the accretion disk. Compared to the spin rate constraint described in Section \[Spinmeas\] they are more model-dependent. However, they are nonetheless powerful as they provide additional complementary cross-checks and allow us to calibrate different techniques to extend our reach to a wider range of sources. See the eXTP White Papers on Strong Gravity [@WP_SG] and Observatory Science [@WP_OS] for further discussion of accretion flows.
Kilohertz Quasi-Periodic Oscillations (QPOs)
--------------------------------------------
Kilohertz QPOs are rapid variations in the intensity of NS Low Mass X-ray Binaries (LMXBs), both persistent and transient [see @vanderKlis00 for a review]. RXTE observed this phenomenon in a few tens of sources. The corresponding millisecond time scale is so short that the QPOs must be associated with dynamical time scales in the accretion flow in the vicinity of NSs. In many cases, these QPOs are seen as twin peaks in the Fourier power spectra. If one of the twin peaks is an indicator of the orbital motion in the accretion flow, it would put a constraint on NS mass and radius: a stable orbit must lie outside the NS, so the smallest possible orbital radius is set by either the NS radius or the innermost stable circular orbit (ISCO) [@Miller98b].
In addition to the association of the kHz QPOs with orbital motion in the innermost accretion flow onto NSs based on the millisecond time scales, there is increasing observational evidence that the kHz QPOs do indeed indicate the orbital frequency in the accretion flow (or boundary layer) surrounding the NS. The frequency of the lower kHz QPOs is anti-correlated with the mHz QPO flux in 4U 1608–52, which is consistent with a modulation of the orbital frequency under radiation force from the NS [@Yu02]. The pulse amplitude changes significantly when the upper kHz QPO passes the spin frequency in the accretion-powered millisecond pulsar SAX J1808.4–3658, strongly suggesting that the QPO is produced by azimuthal motion at the inner edge of the accretion disk, most likely orbital motion [@Bult15].
The behaviour of the QPOs as they approach their highest frequencies was difficult to resolve with RXTE as both amplitude and coherence drop at this point, although the behaviour is consistent with that expected near the ISCO [@Barret06]. eXTP will make breakthroughs by being able to track the QPOs to higher frequencies where the amplitudes are weaker, and to investigate QPO variability on timescales a factor $\sim 10$ shorter. The latter is very important: QPOs in Sco X-1, for example, have been observed to drift by more than 22 Hz in 0.08 s [@Yu01].
The QPO coherence drop and rapid frequency drifts may be due to radiation force effects on the orbital frequency in the accretion flow, since an anti-correlation between kHz QPO frequency and X-ray flux was detected on the time scales of lower frequency QPOs (where the flux probably originates from the NS, see @Yu01, @Yu02). In sources with the most detections of kHz QPOs such as 4U 1636–536, the maximum QPO frequency seems to be anti-correlated with the X-ray flux [@Barret05]. Both this anti-correlation and the QPO coherence variation can be explained by radiation force effects. The rate at which the QPO frequencies change as a function of the QPO frequencies themselves also supports a scenario in which the inner part of the accretion disc is truncated at a radius that is set by the combined effect of viscosity and radiation drag [@Sanna12]. This in turn can put constraints on the NS EOS by measurements of the maximum kHz QPO frequency and the X-ray flux [@Yu08], although relativistic magnetohydrodynamical simulations with radiation will be needed to create models of sufficient accuracy.
The energy-dependent time lags of the kHz QPOs [@deAvellar13] offer an independent constraint on the physical size of the accretion disc, and hence the NS. Together with the time-averaged spectrum of the source, a combination of the frequency, amplitude and time lag of these variability features over very short time scales (see for example @Lee01, @Zhang17, @Ribeiro17) will provide the transfer function of the system. This depends upon the physical size of the accretion disc and the corona, and hence can be used to further constrain the radius of the NS. With eXTP, the maximum kHz QPO frequency measured in bright sources on short time scales, and in sources at lower flux levels, would increase by about 50 Hz (or 5%). Using the ISCO model of @Miller98b, this would lower the upper limit on the NS radius by $\sim$0.5 km or the mass by $\sim 0.1$ M$_\odot$ ($\sim 5$% and $\sim$ 7% respectively for a 10 km 1.4M$_\odot$ NS). Corrections for radiation force effects would modify these estimates somewhat.
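The ISCO argument of @Miller98b can be made quantitative: if the highest observed kHz QPO frequency is an orbital frequency, the orbit must lie at or outside the ISCO, giving (for a slowly rotating star) $M \le c^3/(2\pi\,6^{3/2}\,G\,\nu)$, while the stellar radius must be smaller than the orbital radius itself. A sketch, neglecting spin corrections:

```python
import math

G, C, MSUN = 6.674e-11, 2.998e8, 1.989e30   # SI units

def isco_mass_bound(nu_qpo_hz):
    """Upper bound on M (in Msun) if nu_qpo is an orbital frequency at or
    outside the ISCO of a non-rotating star (spin corrections neglected)."""
    return C**3 / (2.0 * math.pi * 6.0**1.5 * G * nu_qpo_hz) / MSUN

def orbit_radius_km(m_msun, nu_hz):
    """Radius of a circular orbit of frequency nu, from nu = sqrt(GM/r^3)/(2*pi)
    (exact in Schwarzschild coordinates); the NS radius must be smaller."""
    return (G * m_msun * MSUN / (2.0 * math.pi * nu_hz) ** 2) ** (1.0 / 3.0) / 1e3

print(isco_mass_bound(1200.0))        # ~1.8 Msun
print(orbit_radius_km(1.4, 1200.0))   # ~15 km
```

This scaling makes clear why pushing the maximum detected QPO frequency up by even $\sim$50 Hz tightens the mass and radius limits by several percent.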
![image](Fefigc.png){width="80.00000%"}
Constraints from relativistic Fe line modelling
-----------------------------------------------
A broad relativistic Fe K$\alpha$ spectral emission line is observed from many stellar-mass and supermassive black hole systems (@Fabian00, @Reynolds03, @MillerJ07). Such a fluorescent line near 6 keV is believed to be generated by the reflection of hard X-rays from the accretion disk, and is shaped by various physical effects, such as the Doppler effect, special relativistic beaming, gravitational redshifting and GR light-bending. The properties of this line can be used to measure $r_{\rm in}c^2$/GM, i.e., the inner-edge radius $r_{\rm in}$ of the accretion disk in units of the gravitational radius GM/$c^2$. By considering the disk inner-edge to be the innermost stable circular orbit (ISCO), which may be a reasonable assumption for black holes, one can also infer the black hole angular momentum parameter for the Kerr spacetime.
A broad relativistic spectral line has also been observed from a number of NS LMXBs (@Bhattacharyya07, @Cackett08, @Pandel08, @Dai09, @Cackett10, @MillerJ13, @Chiang16). As for black holes, one can infer $r_{\rm in}c^2$/GM for NSs from the relativistic Fe line. Since the disk inner edge radius $r_{\rm in} \ge$ R, the inferred $r_{\rm in}c^2$/GM provides an upper limit on R$c^2$/GM. One can therefore use M-$r_{\rm in}c^2$/GM space (instead of M-R space) for a known spin to constrain EOS models (Figure \[fig:Fefig\] and @Bhattacharyya11). This method requires computations of $r_{\rm in}c^2$/GM for given M, spin and EOS models. Note that, while $r_{\rm in} = r_{\rm ISCO}$ (i.e., ISCO radius) for a black hole, $r_{\rm in}$ is either $r_{\rm ISCO}$ or R, whichever is greater, for a NS. For a spinning (Kerr) black hole, $r_{\rm ISCO}$ can be analytically computed as a function of M and the dimensionless spin parameter $a$. For a NS in an LMXB, one must compute $r_{\rm ISCO}$ and R values numerically for various EOS models and NS configurations, using an appropriate rapidly spinning stellar spacetime. Simulations for the eXTP LAD show that a statistical error of less than 0.1 in $r_{\rm in}c^2/GM$, sufficient to distinguish models, is achievable with a 30 ks exposure (see @Bhattacharyya17 and Figure \[fig:Fefig\]).
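For reference, the analytic Kerr ISCO radius used in the black hole case (and as a point of comparison in the slowly rotating NS limit) is given by the Bardeen–Press–Teukolsky expression; a sketch:

```python
import math

def r_isco_over_m(a):
    """Prograde ISCO radius in units of GM/c^2 for a Kerr spacetime with
    dimensionless spin a (Bardeen, Press & Teukolsky 1972)."""
    z1 = 1.0 + (1.0 - a * a) ** (1.0 / 3.0) * (
        (1.0 + a) ** (1.0 / 3.0) + (1.0 - a) ** (1.0 / 3.0))
    z2 = math.sqrt(3.0 * a * a + z1 * z1)
    return 3.0 + z2 - math.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))

print(r_isco_over_m(0.0))   # 6.0 (Schwarzschild)
print(r_isco_over_m(0.3))   # shrinks for prograde orbits around a spinning source
```

For a NS one then takes $r_{\rm in} = \max(r_{\rm ISCO}, {\rm R})$ as stated above, with both quantities evaluated numerically in an appropriate rapidly spinning stellar spacetime rather than from this Kerr formula.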
Summary
=======
eXTP offers unprecedented discovery space for the EOS of cold supranuclear density matter. eXTP’s large area will enable the most sensitive searches for accretion-powered pulsations and burst oscillations ever undertaken. Both yield the spin frequency of the NS; a single measurement of a sub-millisecond spin period would provide a clean and extremely robust constraint on the EOS.
However, eXTP will also deliver high precision measurements of M and R. The combination of large effective area and polarimeter will enable us to deploy multiple independent techniques: pulse profile modelling of accretion-powered pulsations, burst oscillations, and rotation-powered pulsations; spectral modelling of bursts, and using phenomena related to the accretion disc such as kHz QPOs and the relativistic Fe line. Many sources show several of these phenomena, allowing us to make completely independent measurements for a single source, to reduce systematic errors. Examples of targets in this class include the accretion-powered millisecond pulsar SAX J1808.4–3658, which goes into regular outburst, and the persistently accreting burster 4U 1636–536. We anticipate that eXTP could deliver precision constraints on M and R, at the few percent level, for of order 10 sources for a reasonable observing plan and given the anticipated mission lifetime. This would be unprecedented in terms of mapping the EOS and expanding the frontiers of dense matter physics.
[**Acknowledgments**]{}: ALW and TER acknowledge support from ERC Starting Grant 639217 CSINEUTRONSTAR. AP acknowledges support from a Netherlands Organization for Scientific Research (NWO) Vidi Fellowship. YC is supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Global Fellowship grant agreement No 703916. SKG, KH and AS are supported in part by the DFG through Grant SFB 1245 and the ERC Grant No. 307986 STRONGINT.
[**Author contributions**]{}: This paper is an initiative of eXTP’s Science Working Group 1 on Dense Matter, whose members are representatives of the astronomical community at large with a scientific interest in pursuing the successful implementation of eXTP. The paper was primarily written by Anna Watts, Wenfei Yu, Juri Poutanen, and Shu Zhang with major contributions by Sudip Bhattacharyya (Fe lines), Slavko Bogdanov (rotation powered pulsars), Long Ji (spin measurements), Alessandro Patruno (spin measurements) and Thomas Riley (pulse profile modelling technique). Contributions were edited by Anna Watts. Other co-authors provided input to refine the paper.
[99]{}
J., [Abbott]{} B.P., et al., 2015, Classical and Quantum Gravity 32, 074001
B.P., [Abbott]{} R., [Abbott]{} T.D., et al., 2017, Phys. Rev. Lett. 119, 161101
A.A., [Ackermann]{} M., [Ajello]{} M., et al., 2009, Science 325, 840
F., [Agathos]{} M., [Agatsuma]{} K., et al., 2015, Classical and Quantum Gravity 32, 024001
M., [Meidam]{} J., [Del Pozzo]{} W., et al., 2015, Physical Review D 92, 023012
A., [Pandharipande]{} V.R., 1997, Phys. Rev. C 56, 2261
M.G., [Schmitt]{} A., [Rajagopal]{} K., [Sch[ä]{}fer]{} T., 2008, Rev. Mod. Phys. 80, 1455
M.A., [Cheng]{} A.F., [Ruderman]{} M.A., [Shaham]{} J., 1982, [Nature]{}300, 728
D., [Casella]{} P., [Patruno]{} A., et al., 2008, [ApJ]{}674, L45
R., [Casalbuoni]{} R., [Ciminale]{} M., et al., 2014, Rev. Mod. Phys. 86, 509
J., [Freire]{} P.C.C., [Wex]{} N., et al., 2013, Science 340, 448
A.M., [Bogdanov]{} S., [Patruno]{} A., et al., 2015, [ApJ]{}, 807, 62
Z., [Gendreau]{} K.C., [Baker]{} C.L., et al., 2014, In: Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9144, p. 20
W.B., [Ziegler]{} M., [Johnson]{} R.P., [Baughman]{} B.M., 2006, [ApJ]{}652, L49
D., [Olive]{} J.F., [Miller]{} M.C., 2005, [MNRAS]{}361, 855
D., [Olive]{} J.F., [Miller]{} M.C., 2006, [MNRAS]{}370, 1140
C.G., [Patruno]{} A., [Hessels]{} J.W.T., 2014, [MNRAS]{}441, 1825
M., [Berti]{} E., [Psaltis]{} D., [[Ö]{}zel]{} F., 2013, [ApJ]{}777, 68
M., [Psaltis]{} D., [[Ö]{}zel]{} F., [Johannsen]{} T., 2012, [ApJ]{}753, 175
A., [Janka]{} H.T., [Hebeler]{} K., [Schwenk]{} A., 2012, Phys. Rev. D 86, 063001
A., [Stergioulas]{} N., [Janka]{} H.T., 2014, Phys. Rev. D 90, 023002
I., [Haensel]{} P., [Zdunik]{} J.L., et al., 2012, [A&A]{}543, A157
D., [van den Heuvel]{} E.P.J., 1991, Physics Reports 203, 1
S., 2011, [MNRAS]{}415, 3247
S., 2017, JApA, 38, 38
S., [Strohmayer]{} T.E., [Miller]{} M.C., [Markwardt]{}, C.B. 2005, [ApJ]{}, 619, 483
S., 2007, [ApJ]{}664, L103
S., [Bombaci]{} I., [Logoteta]{} D., [Thampan]{}, A.V. 2016, MNRAS, 457, 3101
S., [Grindlay]{} J.E., [Rybicki]{} G.B., 2008, [ApJ]{}689, 407
S., [Grindlay]{} J.E., 2009, [ApJ]{}703, 1557
S., 2013, [ApJ]{}762, 96
I., [Logoteta]{} D., [Vidaña]{} I., & [Provid[ê]{}ncia]{} C., 2016, Eur. Phys. J. A, 52, 58
M., [Carignano]{} S., 2015, Prog. Part. Nucl. Phys. 81, 39
P., [van der Klis]{} M., 2015, [ApJ]{}806, 90
E.M., [Miller]{} J.M., [Ballantyne]{} D.R., et al., 2010, [ApJ]{}720, 205
E.M., [Miller]{} J.M., [Bhattacharyya]{} S., et al., 2008, [ApJ]{}674, 415
C., [Morsink]{} S.M., [Leahy]{} D., [Campbell]{} S.S., 2007, [ApJ]{}654, 458
S., [Ravasio]{} M., [Israel]{} G. L., [Mangano]{} V., [Belloni]{}, T. 2003, [ApJ]{}, 594, L39
P., [Altamirano]{} D., [Patruno]{} A., et al., 2008, [ApJ]{}674, L41
P., [Bildsten]{} L., [Wasserman]{} I., 2005, [ApJ]{}629, 998
D., [Vida[ñ]{}a]{} I., 2016, EJPA, 52, 29
K., [Yagi]{} K., [Klein]{} A., et al., 2015, Phys. Rev. D 92, 104008
C.Y., [Cackett]{} E.M., [Miller]{} J.M., et al., 2016, [ApJ]{}821, 105
J.A., [Bauswein]{} A., [Stergioulas]{} N., [Shoemaker]{} D., 2016, Classical and Quantum Gravity 33, 085003
G.B., [Shapiro]{} S.L., [Teukolsky]{} S.A., 1994, [ApJ]{}423, L117
A, [[Ż]{}ycki]{} P., [Di Salvo]{} T., [Iaria]{} R., [Lavagetto]{} G., [Robba]{} N. R., 2007, [ApJ]{}, 667, 411
A., [Iaria]{} R., [Di Salvo]{} T., et al., 2009, [ApJ]{}693, L1
M.G.B., [M[é]{}ndez]{} M., [Sanna]{} A., [Horvath]{}, J.E., 2013, MNRAS 433, 3453
W., [Li]{} T.G.F., [Agathos]{} M., et al., 2013, Phys. Rev. Lett. 111, 071101
P.B., [Pennucci]{} T., [Ransom]{} S.M., et al., 2010, [Nature]{}467, 1081
A., [Uttley]{}, P., [Gou]{}, L., et al., 2017, Science China Physics, Mechanics & Astronomy, this issue (eXTP White Paper on Strong Gravity)
G., [Caballero]{} R.N., [Lentati]{} L., et al., 2016, [MNRAS]{}458, 3341
A.C., [Iwasawa]{} K., [Reynolds]{} C.S., [Young]{} A.J., 2000, PASP 112, 1145
M., 2014, Phys. Rev. Lett. 112, 101101
H., [Santangelo]{}, A., [Zane]{}, S., et al., 2017, Science China, to be submitted (eXTP White Paper on Strong Magnetism)
D.C., 1973, [ApJ]{}183, 977
D.C., 1976, [ApJ]{}205, 247
, E., [Pennucci]{} T.T., [Ellis J.A.,]{} et al., 2016, [ApJ]{}, 832, 167
K., [Hatsuda]{} T., 2011, Rept. Prog. Phys., 74, 014001
D.K., [Morgan]{} E.H., [Krauss]{} M.I., et al., 2007, [ApJ]{}654, L73
D.K., [Muno]{} M.P., [Hartman]{} J.M., et al., 2008, [ApJ Supp.]{}179, 360
K.C., [Arzoumanian]{} Z., [Okajima]{} T., 2012, In: Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 8443, p. 13
N.K., 1985, [ApJ]{}293, 470
N.K., 1996, Compact Stars: Nuclear Physics, Particle Physics, and General Relativity, Springer Verlag
S., [Rutledge]{} R.E., 2014, [ApJ]{}796, L3
P., [Zdunik]{} J.L., [Bejger]{} M., [Lattimer]{} J.M., 2009, [A&A]{}502, 605
K., [Lattimer]{} J.M., [Pethick]{} C.J., [Schwenk]{} A., 2013, [ApJ]{}773, 11
K., [Holt]{} J.-D., [Men[é]{}ndez]{} J., [Schwenk]{} A., 2015, Annual Review of Nuclear and Particle Science 65, 457
C.O., [Cohn]{} H.N., [Lugger]{} N.A. et al., 2014, [MNRAS]{}, 444, 443
J.W.T., [Ransom]{} S.M., [Stairs]{} I.H., et al., 2006, Science 311, 1901
K., [Kyutoku]{} K., [Sekiguchi]{} Y.i., [Shibata]{} M., 2016, Phys. Rev. D 93, 064082
J., [Bozzo]{} E., [Qu]{} J., et al., 2017, Science China Physics, Mechanics & Astronomy, this issue (eXTP White Paper on Observatory Science)
, D. L. [Boyles]{} J., [Dunlap]{} B. H., et al., 2014, [ApJ]{}789, 119
M., [Wex]{} N., 2009, Classical and Quantum Gravity 26, 073001
A., [Fraga]{} E.S., [Schaffner-Bielich]{} J., [Vuorinen]{} A., 2014, [ApJ]{} 789, 127
B.D., [Kyutoku]{} K., [Shibata]{} M., et al., 2014, Phys. Rev. D 89, 043009
B.D., [Wade]{} L., 2015, Physical Review D 91, 043002
F.K., [Boutloukos]{} S., [Van Wassenhove]{} S., et al., 2009, [ApJ]{}705, L36
J.M., [Prakash]{} M., 2001, [ApJ]{}550, 426
J.M., [Schutz]{} B.F., 2005, [ApJ]{}629, 979
J.M., [Prakash]{} M., 2016, Physics Reports 621, 127
D.A., [Morsink]{} S.M., [Cadeau]{} C., 2008, [ApJ]{}672, 1119
D.A., [Morsink]{} S.M., [Chou]{} Y., 2011, [ApJ]{}742, 17
D.A., [Morsink]{} S.M., [Chung]{} Y.Y., [Chou]{} Y., 2009, [ApJ]{}691, 1235
H.C., [Misra]{} R., [Taam]{} R.E., 2001, [ApJ]{}549, L229
A., [Zhang]{} B., [Zhang]{} N.-B., et al., 2016, Phys. Rev. D 94, 083010
L., 1992, [ApJ]{}398, 569
K.H., [Miller]{} M.C., [Bhattacharyya]{} S., [Lamb]{} F.K., 2013, [ApJ]{}776, 19
C., 2011, Phys. Rev. D 84, 083003
J.M., 2007, ARAA 45, 441
J.M., [Parker]{} M.L., [Fuerst]{} F., et al., 2013a, [ApJ]{}779, L2
M.C., 2013, arXiv:1312.0029
M.C., [Boutloukos]{} S., [Lo]{} K.H., [Lamb]{} F.K., 2013b, In: [Zhang]{} C.M., [Belloni]{} T., [M[é]{}ndez]{} M., [Zhang]{} S.N. (eds.) IAU Symposium, Vol. 290, p.101
M.C., [Lamb]{} F.K., 1998, [ApJ]{}499, L37
M.C., [Lamb]{} F.K., 2015, [ApJ]{}808, 31
M.C., [Lamb]{} F.K., [Psaltis]{} D., 1998, [ApJ]{}508, 791
S.M., [Leahy]{} D.A., [Cadeau]{} C., [Braga]{} J., 2007, [ApJ]{}663, 1244
C., [Wilms]{} J., [Barret]{} D., et al., 2013, arXiv:1306.2334
J., [Steiner]{} A.W., [Kajava]{} J.J.E., et al., 2016, [A&A]{}591, A25
J., [Miller]{}, M.C., [Steiner]{} A.W., et al., 2017, [A&A]{}608, A31
J., [Pihajoki]{}, P., 2017, arXiv:1709.07292
F., 2013, Reports on Progress in Physics 76, 016901
F., [Psaltis]{} D., 2009, Phys. Rev. D 80, 103003
D., [Kaaret]{} P., [Corbel]{} S., 2008, [ApJ]{}688, 1288
A., [Watts]{} A.L., 2012, arXiv:1206.2727
K.R., [Ftaclas]{} C., [Cohen]{} J.M., 1983, [ApJ]{}274, 846
H.J., [Guillemot]{} L., [Allen]{} B., et al., 2012, [ApJ]{}744, 105
J., 2006, Adv. Sp. Res. 38, 2697
J., 2008, In: [Wijnands]{} R., [Altamirano]{} D., [Soleri]{} P., [Degenaar]{} N., [Rea]{} N., [Casella]{} P., [Patruno]{} A., [Linares]{} M. (eds.) A decade of accreting millisecond X-ray pulsars, AIP Conf. Ser., Vol. 1068, p.77
J., [Beloborodov]{} A.M., 2006, [MNRAS]{}373, 836
J., [Gierli[ń]{}ski]{} M., 2003, [MNRAS]{}343, 1301
J., [N[ä]{}ttil[ä]{}]{} J., [Kajava]{} J.J.E., et al., 2014, [MNRAS]{}442, 3777
D., [[Ö]{}zel]{} F., 2014, [ApJ]{}792, 87
D., [[Ö]{}zel]{} F., [Chakrabarty]{} D., 2014, [ApJ]{}787, 136
V., [Cooke]{} D.J., 1969, Astrophysics Letters 3, 225
V., [Srinivasan]{} G., 1982, Current Science 51, 1096
H., [Misra]{}, R., [Dewangan]{}, G. 2011, MNRAS, 416, 637
J.S., [Baiotti]{} L., [Creighton]{} J.D.E., et al., 2013, Phys. Rev. D 88, 044042
J.S., [Lackey]{} B.D., [Owen]{} B.J., [Friedman]{} J.L., 2009, Phys. Rev. D 79, 124032
D.J., [Hobbs]{} G., [Coles]{} W., et al., 2016, [MNRAS]{}455, 1751
C.S., [Nowak]{} M.A., 2003, Physics Reports 377, 389
E.M., [M[é]{}ndez]{} M., [Zhang]{} G., [Sanna]{} A., 2017, MNRAS 471, 1208
T.E., [Raaijmakers]{} G., [Watts]{} A.L., 2018, MNRAS 478, 1093
M.M., [Kulkarni]{} A.K., [Lovelace]{} R.V.E., 2008, [ApJ]{}673, L171
M., 1991, [ApJ]{}366, 261
T., [N[ä]{}ttil[ä]{}]{} J., [Poutanen]{}, J., 2018, arXiv:1805.01149
A., [M[é]{}ndez]{} M., [Belloni]{} T., [Altamirano]{} D., 2012, MNRAS 424, 2936
A.W., [Heinke]{} C.O., [Bogdanov]{} S., et al., 2017, arXiv:1709.05013
A.W., [Lattimer]{} J.M., [Brown]{} E.F., 2013, [ApJ]{}765, L5
N., 2003, Living Reviews in Relativity 6, 3
A.L., [Fiege]{} J.D., [Leahy]{} D.A., [Morsink]{} S.M. 2016, [ApJ]{}, 833, 244
T., [Bildsten]{} L., 2006, In: [Lewin]{} W., [van der Klis]{} M. (eds.) Compact stellar X-ray sources, Cambridge Astrophysics Series, Vol. 39, p.113
V., [Poutanen]{} J., [Revnivtsev]{} M., [Werner]{} K., 2011a, [ApJ]{}742, 122
V., [Poutanen]{} J., [Werner]{} K., 2011b, [A&A]{}527, A139
K., [Rezzolla]{} L., [Baiotti]{} L., 2014, Physical Review Letters 113, 091104
M., 2000, [ARAA]{}38, 717
K., [Poutanen]{} J., 2004, [A&A]{}426, 985
A., [Espinoza]{} C.M., [Xu]{} R., et al., 2015, in Advancing Astrophysics with the Square Kilometre Array (AASKA14), p. 43
A.L., 2012, ARAA 50, 609
A.L., [Andersson]{} N., [Chakrabarty]{} D., et al., 2016, Reviews of Modern Physics 88, 021001
K., [Yunes]{} N., 2016, Class. Quant. Grav. 33, 095005
W., 2008, In: [Yuan]{} Y.F., [Li]{} X.D., [Lai]{} D. (eds.) Astrophysics of Compact Objects, AIP Conf. Ser. Vol. 968, p.215
W., [van der Klis]{} M., 2002, [ApJ]{}567, L67
W., [van der Klis]{} M., [Jonker]{} P.G., 2001, [ApJ]{}559, L29
J.L., [Haensel]{} P., 2013, [A&A]{}551, A61
, G., [M[é]{}ndez]{} M., [Sanna]{}, A., [Ribeiro]{}, E.M., [Gelfand]{}, J.D., 2017, MNRAS 465, 5003
S.N., et al., 2017, Science China Physics, Mechanics & Astronomy, this issue (eXTP White Paper on Instrumentation)
[^1]: The assumed fluxes are as follows: Sco X-1, 0.5-10 keV flux of $1.6 \times 10^{-7}$ [erg/cm$^2$/s]{}[@Dai07]; Aql X-1, 0.5-10 keV flux of $3.3 \times 10^{-9}$ [erg/cm$^2$/s]{}[@Raichur11]; XTE J1807-294, 0.5-10 keV flux of $1.7 \times 10^{-10}$ [erg/cm$^2$/s]{}[@Campana03].
---
abstract: 'A generalization of Dirac’s canonical quantization theory for a system with second-class constraints is proposed, in which the fundamental commutation relations are constituted by all commutators between the positions, momenta and Hamiltonian, so that these are simultaneously quantized in a self-consistent manner, rather than by the commutators between the positions and momenta alone, with which the theory either contains redundant freedoms or conflicts with experiments. The application of the generalized theory to quantum motion on a torus leads to two remarkable results: i) The theory formulated purely on the torus, i.e., based on the so-called purely intrinsic geometry, conflicts with itself. It thus provides an explanation of why an intrinsic examination of quantum motion on the torus within the Schrödinger formalism is improper. ii) An extrinsic examination of the torus as a submanifold in three-dimensional flat space turns out to be self-consistent, and the resultant momenta and Hamiltonian are satisfactory all around.'
author:
- 'D. M. Xun'
- 'Q. H. Liu'
- 'X. M. Zhu'
title: 'Quantum motion on a torus as a submanifold problem in a generalized Dirac’s theory of second-class constraints'
---
Introduction
============
The embedding problem of quantum motion of a particle on a two-dimensional curved surface $\Sigma ^{2}$ in the flat space $R^{3}$ has attracted much attention, including theoretical explorations [jk,dacosta,CB,FC,liu07,liu11,japan1990,japan1992,japan1993]{} and experimental investigations [@Szameit; @onoe]. Fundamentally, there are two formalisms to investigate the quantum motion on $\Sigma ^{2}$. One is within the Schrödinger formalism that needs a wave function and another is within the Dirac one that purely deals with operators, but they usually give different predictions. In this section, we will mainly review these two formalisms, and present a generalization of the Dirac’s canonical quantization theory for a system of the second-class constraints.
Schrödinger and Dirac formalism: discrepancies in curvature dependent quantum potentials
----------------------------------------------------------------------------------------
By the *Schrödinger formalism* we mean that the Schrödinger equation is first formulated in $R^{3}$, actually in a curved shell of an equal and finite thickness $\delta $ whose intermediate surface coincides with the prescribed one $\Sigma ^{2}$ (or equivalently, the particle moves within the range of the same width $\delta $ due to a confining potential around the surface), and an effective Schrödinger equation on the curved surface $\Sigma ^{2}$ is then derived by taking the squeezing limit $\delta
\rightarrow 0$ to confine the particle to the $\Sigma ^{2}$ [jk,dacosta,CB,liu11]{}. It leads to a unique form of the so-called geometric potential [@Szameit; @liu11] $$V_{g}=-\frac{\hbar ^{2}}{2m}\left( M^{2}-K\right) \label{gp}$$that depends on both the mean and the gaussian curvature $M$ and $K$, which are, respectively, the extrinsic and the intrinsic curvature. This amounts to an extrinsic examination of the quantum motion on $\Sigma ^{2}$ within the Schrödinger formalism. The potential (\[gp\]) has been experimentally confirmed [@Szameit; @onoe]. Note that the extrinsic curvature $M$ is a geometric consequence of embedding the system on $\Sigma ^{2}$ in $R^{3}$ and is inaccessible within a purely intrinsic description. However, for this formalism, we do not know why the Schrödinger equation cannot be entirely formulated on $\Sigma ^{2}$ without considering any embedding. We are familiar with the fact that an intrinsic examination of the quantum motion on $\Sigma ^{2}$ within the Schrödinger formalism predicts no curvature dependent quantum potential, which is contrary to the experiments [@Szameit; @onoe].
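As an aside, the curvature invariants entering $V_{g}$ can be checked mechanically from the fundamental forms of the embedded surface. The sketch below (Python with SymPy; an illustrative verification, not part of the original derivation) computes $M$ and $K$ for the torus treated in Sec. II and confirms the closed forms quoted there:

```python
import sympy as sp

th, ph = sp.symbols('theta varphi', real=True)
a, b = sp.symbols('a b', positive=True)

w = a + b*sp.sin(th)                       # a + b sin(theta) > 0 since a > b
r = sp.Matrix([w*sp.cos(ph), w*sp.sin(ph), b*sp.cos(th)])

r_th, r_ph = r.diff(th), r.diff(ph)

# First fundamental form
E = sp.simplify(r_th.dot(r_th))            # = b**2
F = sp.simplify(r_th.dot(r_ph))            # = 0
G = sp.simplify(r_ph.dot(r_ph))            # = w**2

# Outward unit normal; |r_th x r_ph| = b*w because a > b
n_raw = r_th.cross(r_ph)
assert sp.simplify(n_raw.dot(n_raw) - (b*w)**2) == 0
n = n_raw/(b*w)

# Second fundamental form
L = sp.simplify(r_th.diff(th).dot(n))
Mf = sp.simplify(r_th.diff(ph).dot(n))
N = sp.simplify(r_ph.diff(ph).dot(n))

# Mean and Gaussian curvatures
M = sp.simplify((E*N - 2*F*Mf + G*L)/(2*(E*G - F**2)))
K = sp.simplify((L*N - Mf**2)/(E*G - F**2))

# Closed forms quoted in the paper for the torus
M_paper = -(a + 2*b*sp.sin(th))/(2*b*w)
K_paper = sp.sin(th)/(b*w)
assert sp.simplify(M - M_paper) == 0
assert sp.simplify(K - K_paper) == 0
# The geometric potential is then V_g = -(hbar**2/(2*m))*(M**2 - K)
```

Since $M^{2}-K=\left( (\kappa _{1}-\kappa _{2})/2\right) ^{2}$ for principal curvatures $\kappa _{1,2}$, the potential $V_{g}$ is always attractive.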
By the *Dirac formalism* we mean to use Dirac’s canonical quantization theory on systems with second-class constraints [dirac1,dirac2]{}, with the understanding that the Dirac formalism can also be applied to a system that is considered either within the purely intrinsic geometry on $\Sigma ^{2}$ or as a submanifold in $R^{3}$, predicting a curvature dependent potential $V_{D}$ with two real parameters $\alpha $ and $\beta $ [@japan1990; @japan1992], $$V_{D}=-\frac{\hbar ^{2}}{2m}\left( \alpha M^{2}-\beta K\right) . \label{vd}$$This form of the potential (\[vd\]) can also be easily constructed by dimensional analysis, for the two geometric invariants $M$ and $K$ have dimensions of *length*$^{-1}$ and *length*$^{-2}$, respectively. In comparison with the Schrödinger formalism, we have one more unknown associated with the Dirac one: once taking $\Sigma ^{2}$ as a submanifold in $R^{3}$, we do not know what form of the potential can be singled out from the family (\[vd\]). However, Schrödinger’s theory gives an unambiguous choice with $\alpha =\beta =1$ [jk,dacosta,FC,liu11]{}.
So far, we find that both formalisms suffer from shortcomings. Since the extrinsic examination of the torus within the Schrödinger formalism has experimental support, an immediate question is whether there is a theoretical framework within which we can fix the parameters $\alpha $ and $\beta $ in a possibly generalized Dirac’s theory, rendering it compatible with Schrödinger’s and also with the experimental results. This question will be partially answered in this paper.
Schrödinger and Dirac formalism: discrepancies in momentum operators
--------------------------------------------------------------------
In addition to the unique form of the geometric potential $V_{g}=-\hbar
^{2}\left( M^{2}-K\right) /2m$, Schrödinger’s theory also leads to a unique definition of the geometric momentum $\mathbf{p}$ [@liu07; @liu11], $$\mathbf{p}=-i\hbar (\mathbf{r}^{\mu }\partial _{\mu }+M\mathbf{n}),
\label{gm}$$where $\mathbf{r=(}x(x^{1},x^{2}),y(x^{1},x^{2}),z(x^{1},x^{2})\mathbf{)}$ is the position vector in $R^{3}$ on the surface $\Sigma ^{2}$ whose local coordinates are $x^{\mu }\equiv (x^{1},x^{2})$ and $\mathbf{r}^{\mu }=g^{\mu
\nu }\mathbf{r}_{\nu }=g^{\mu \nu }\partial \mathbf{r}/\partial x^{\nu }$, and at this point $\mathbf{r}$, $\mathbf{n=(}n_{x},n_{y},n_{z}\mathbf{)}$ denotes the normal and $M\mathbf{n}$ symbolizes the mean curvature vector field, another geometric invariant. Throughout the paper, the Einstein summation convention over repeated indices is used.
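A quick way to see that $M\mathbf{n}$ in (\[gm\]) is the mean curvature vector field is the classical surface-theory identity $\Delta \mathbf{r}=2M\mathbf{n}$, where $\Delta $ is the Laplace–Beltrami operator of the induced metric and the sign convention for $M$ is the one adopted in Sec. II. The sketch below (Python with SymPy; an illustrative aside, using the torus of Sec. II as the test surface) verifies this identity:

```python
import sympy as sp

th, ph = sp.symbols('theta varphi', real=True)
a, b = sp.symbols('a b', positive=True)

w = a + b*sp.sin(th)
r = sp.Matrix([w*sp.cos(ph), w*sp.sin(ph), b*sp.cos(th)])

# Induced metric of the torus: g = diag(b**2, w**2), sqrt(g) = b*w
sqrtg = b*w

def laplace_beltrami(f):
    """(1/sqrt g) d_mu (sqrt g g^{mu nu} d_nu f) on the torus."""
    return (sp.diff(sqrtg*sp.diff(f, th)/b**2, th)
            + sp.diff(sqrtg*sp.diff(f, ph)/w**2, ph))/sqrtg

# Mean curvature and outward unit normal of the torus
M = -(a + 2*b*sp.sin(th))/(2*b*w)
n = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])

# Identity from surface theory: Delta r = 2 M n (mean curvature vector)
delta_r = r.applyfunc(laplace_beltrami)
assert sp.simplify(delta_r - 2*M*n) == sp.zeros(3, 1)
```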
However, the present formulation of Dirac’s theory opens a wide door to permit various definitions of the generalized momenta, including i) the well-known generalized ones $p_{\mu }=-i\hbar (\partial _{\mu }+\Gamma _{\mu
}/2)$ which satisfy quantum commutator $[x^{\nu },p_{\mu }]=i\hbar \delta
_{\mu }^{\nu }$, where $\Gamma _{\mu }$ is the once-contracted Christoffel symbol $\Gamma _{\mu \nu }^{\sigma }$ constructed with Riemannian metric $%
g^{\mu \nu }$ [@japan1990], where greek letters $\mu $, $\nu $, $\sigma $, etc. run from $1$ to $2$, and ii) the geometric momentum (\[gm\]), etc. [@liu11; @japan1992]. *It is very important to note that in the extrinsic examination of quantum motion on $\Sigma ^{2}$ in $R^{3}$, the local coordinates $x^{\mu }\equiv (x^{1},x^{2})$ are no longer position operators but parameters; the position operators are $\mathbf{r}=(x(x^{1},x^{2}),y(x^{1},x^{2}),z(x^{1},x^{2}))$.*
A framework based on the purely intrinsic geometry implies that every quantity solely relies on the Riemannian metric $g^{\mu \nu }$ and its various constructions such as Christoffel symbol $\Gamma _{\mu \nu }^{\sigma
}$ and the gaussian curvature $K$. Consequently, neither the momentum nor the Hamiltonian in quantum mechanics depends on the extrinsic curvature. When the curvature dependent potential (\[vd\]) with $\alpha \neq 0$ and the geometric momentum (\[gm\]) appear in a formulation of quantum mechanics for a system on $\Sigma ^{2}$, we in fact take the system under study to be embedded in $R^{3}$, which is beyond the purely intrinsic geometry.
A generalization Dirac’s theory for a system of the second-class constraints
----------------------------------------------------------------------------
We are deeply impressed by the very success of Schrödinger’s theory, which produces the unique results of the geometric potential (\[gp\]) and momentum (\[gm\]), and also by the disturbing arbitrariness associated with Dirac’s theory of the second-class constraints. As we know, Dirac’s theory postulates that the quantum commutator $[A,B]$ of two variables $A$ and $B$ in quantum mechanics is achieved by direct correspondence with the Dirac bracket $\{A,B\}_{D}$ as $\{A,B\}_{D}\rightarrow \lbrack A,B]$, which is defined by $[A,B]=i\hbar O(\{A,B\}_{D})$, where $O(F)$ is used to emphasize the operator form of the classical quantity $F$ in order to avoid possible confusion. When all constraints are removed, the Dirac bracket $\{A,B\}_{D}$ assumes its usual form, the Poisson bracket $\{A,B\}$. However, Dirac himself states that *fundamental commutation relations involve only those between canonical positions $x_{i}$ and canonical momenta $p_{i}$* [@dirac1; @dirac2].
One can ask a curious question: when there is no constraint, why is there no *fundamental* canonical quantization rule between $f$ $(=x_{i}$, $p_{i})$ and the Hamiltonian $H$ of the form $[f,H]=i\hbar O(\{f,H\})$? This is because the direct quantization $[f,H]=i\hbar O(\{f,H\})$ might be redundant, or meaningless, or practically useless, etc. For instance, when the system has a classical analogue, the Hamiltonian is the same function of the positions and momenta in the quantum theory as in the classical theory, provided that the Cartesian system of axes is used [dirac2,dirac3,Greiner]{}. In this case the rule $[f,H]=i\hbar O(\{f,H\})$ turns out to be redundant. When a quantum Hamiltonian has no classical analogue, the canonical quantization rule $[f,H]=i\hbar O(\{f,H\})$ is meaningless. In many other cases, e.g., to quantize a classical Hamiltonian $H=\gamma x^{3}p^{3}$ with $\gamma $ being a real parameter, the rule should be imposed but is practically useless. Thus, it appears unacceptable to include the canonical quantization rule $[f,H]=i\hbar O(\{f,H\})$ as a fundamental element of a theory.
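To illustrate the redundancy in the unconstrained case, consider a free particle on a line. The sketch below (Python with SymPy, an illustrative aside rather than part of the paper) shows that the rule $[x,H]=i\hbar O(\{x,H\})$ follows automatically once $H=p^{2}/2m$ is built from the canonical $x$ and $p$:

```python
import sympy as sp

x = sp.Symbol('x', real=True)
hb, m = sp.symbols('hbar m', positive=True)
u = sp.Function('u')(x)

H = lambda f: -hb**2/(2*m)*f.diff(x, 2)      # free-particle Hamiltonian
p = lambda f: -sp.I*hb*f.diff(x)             # canonical momentum

# [x, H]u computed directly ...
comm = x*H(u) - H(x*u)
# ... coincides with i*hbar*O({x, H}) u = i*hbar*(p/m) u
assert sp.simplify(comm - sp.I*hb*p(u)/m) == 0
```

No separate postulate is needed here; the rule $[x,H]=i\hbar O(\{x,H\})$ is a consequence of $[x,p]=i\hbar $.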
For systems with second-class constraints, the situation is totally different. Discrepancies in either the curvature dependent quantum potentials or the momentum operators are present when different formalisms, or different geometric viewpoints, are utilized. This strongly implies that, while the quantization of the system is performed, the proper operator forms of the positions, momenta and Hamiltonian are simultaneously determined in a self-consistent way. Therefore we have attempted to generalize the Dirac theory so as to add $[f,H]$ into the category of the *fundamental commutation relations*, which should also be directly achieved via the following quantization rule [@liu11], $$\lbrack f,H]=i\hbar O(\{f,H\}_{D}),\text{ }f=x_{i}\text{ and }p_{j}.
\label{generalized}$$In the rest of the paper, we adopt the convention $O(F)=F$ in quantum mechanics without further emphasizing the operator symbol $O$. These commutation relations (\[generalized\]) may not be applicable when the system has no constraint. So we would like to call them the second category of fundamental ones [@liu11], whereas the existing ones between positions and momenta constitute the first.
This generalized Dirac theory reproduces the usual form for a system that has a classical analogue but no constraint, together with the necessary utilization of the Cartesian system of axes, and therefore enriches the Dirac formalism of quantum mechanics. We will call it the *general theory of the canonical quantization* (GTCQ).
Purpose and organization of the paper
-------------------------------------
As an application of the GTCQ to quantum motion on a sphere [@liu11], we find that, on one hand, an attempt at a proper description within the purely intrinsic geometry proves problematic, and on the other hand, an account based on embedding the sphere in three-dimensional space is very coherent. Notice that the classification theorem for compact surfaces [class]{} states that every compact orientable surface is homeomorphic either to a sphere or to a connected sum of tori, implying that if there is any difficulty associated with quantum mechanics for a particle constrained on a sphere or a torus, enormous theoretical problems would arise from dealing with an arbitrary two-dimensional curved surface in quantum mechanics. This is one of the reasons that the sphere [@liu11] and the torus [torus1,torus2,torus3,torus4]{} are used to test various theories. The main purpose of the present study is to use the torus to show that the Dirac formalism is complementary to the Schrödinger one. The former eliminates the purely intrinsic description, and the latter gives the unique form of the geometric potential, while both define the identical form of the geometric momentum.
This paper is organized as follows. In the following Section II, we present the GTCQ for quantum motion on the torus within the purely intrinsic geometry. Results show that the theory can never be consistently set up. In Section III, we revisit the same problem as a submanifold in the flat space $R^{3}$ with the GTCQ. Results show that the theory turns out to be self-consistent all around, and the obtained geometric momentum (\[gm\]) and potential (\[gp\]) are also satisfactory. Section IV briefly remarks on and concludes this study.
GTCQ for a torus within intrinsic geometry
==========================================
The toroidal surface is with two local coordinates $\theta \in \lbrack
0,2\pi ),\varphi \in \lbrack 0,2\pi )$$$\mathbf{r}=((a+r\sin \theta )\cos \varphi ,(a+r\sin \theta )\sin \varphi
,r\cos \theta ),\text{ }a>r\neq 0, \label{rr}$$where $\varphi $ is the azimuthal angle and $\theta $ the polar angle, and $%
a $ and $r$ are the outer and inner radii of the torus, respectively. The constraint is $r=b\neq 0$. In this section, we first give the classical mechanics for motion on the torus, and then turn to the Dirac formalism of quantum mechanics. In classical mechanics the theory contains nothing surprising, but after the transition to quantum mechanics it becomes contradictory to itself.
Classical mechanical treatment
------------------------------
The Lagrangian $L$ in the toric coordinate system is, $$L=\frac{m}{2}(\dot{r}^{2}+r^{2}\dot{\theta}^{2}+(a+r\sin \theta )^{2}\dot{%
\varphi}^{2})-\lambda (r-b), \label{lag}$$where $\lambda $ is the Lagrangian multiplier enforcing the constrained of motion on the surface. The Lagrangian is singular because it does not contain the “velocity” $\dot{\lambda}$. Hence we need the Dirac formalism of the classical mechanics for a system with the second-class constraints, which gives the canonical momenta conjugate to $r,\theta ,\varphi $ and $%
\lambda $ in the following,$$\begin{aligned}
p_{r} &=&\frac{\partial L}{\partial \dot{r}}=m\dot{r}, \\
p_{\theta } &=&\frac{\partial L}{\partial \dot{\theta}}=mr^{2}\dot{\theta},
\\
p_{\varphi } &=&\frac{\partial L}{\partial \dot{\varphi}}=m(a+r\sin \theta
)^{2}\dot{\varphi}, \\
p_{\lambda } &=&\frac{\partial L}{\partial \dot{\lambda}}=0. \label{plamb}\end{aligned}$$Eq. (\[plamb\]) represents the primary constraint:$$\varphi _{1}\equiv p_{\lambda }\approx 0, \label{prim}$$hereafter symbol “$\approx $” implies a weak equality [@dirac2]. After all calculations are finished, the weak equality takes back the strong one. By the Legendre transformation, the primary Hamiltonian $H_{p}$ is [dirac2]{},$$H_{p}=\frac{1}{2m}(p_{r}^{2}+\frac{p_{\theta }^{2}}{r^{2}}+\frac{p_{\varphi
}^{2}}{(a+r\sin \theta )^{2}})+\lambda \left( r-b\right) +\dot{\lambda}%
p_{\lambda }, \label{hami}$$where $\dot{\lambda}$ is also a Lagrangian multiplier guaranteeing that this Hamiltonian is defined on the symplectic manifold. The secondary constraints (not to be confused with the second-class constraints) are generated successively and determined by the conservation condition [@dirac2],$$\varphi _{i+1}\equiv \left\{ \varphi _{i},H_{p}\right\} \approx 0,\text{\ }%
(i=1,2,....),$$where $\left\{ f,g\right\} $ is the Poisson bracket with $%
q_{1}=r,q_{2}=\theta ,q_{3}=\varphi $, and $p_{1}=p_{r},p_{2}=p_{\theta
},p_{3}=p_{\varphi }$, $$\left\{ f,g\right\} \equiv \frac{\partial f}{\partial q_{k}}\frac{\partial g%
}{\partial p_{k}}+\frac{\partial f}{\partial \lambda }\frac{\partial g}{%
\partial p_{\lambda }}-(\frac{\partial f}{\partial p_{k}}\frac{\partial g}{%
\partial q_{k}}+\frac{\partial f}{\partial p_{\lambda }}\frac{\partial g}{%
\partial \lambda }). \label{possi}$$The complete set of the secondary constraints is, $$\begin{aligned}
\varphi _{2} &\equiv &\left\{ \varphi _{1},H_{p}\right\} =-(r-b)\approx 0,
\label{db1} \\
\varphi _{3} &\equiv &\left\{ \varphi _{2},H_{p}\right\} =-\frac{p_{r}}{m}%
\approx 0, \label{db2} \\
\varphi _{4} &\equiv &\left\{ \varphi _{3},H_{p}\right\} =\frac{\lambda }{m}-%
\frac{1}{m^{2}}(\frac{p_{\theta }^{2}}{r^{3}}+\frac{p_{\varphi }^{2}\sin
\theta }{(a+r\sin \theta )^{3}})\approx 0, \label{thi} \\
\varphi _{5} &\equiv &\left\{ \varphi _{4},H_{p}\right\} =\frac{\dot{\lambda}%
}{m}-\frac{3ap_{\theta }p_{\varphi }^{2}\cos \theta }{m^{3}r^{2}(a+r\sin
\theta )^{4}}\approx 0. \label{for}\end{aligned}$$Eqs. (\[db1\]) and (\[db2\]) show, respectively, that on the surface of the torus ($r=b$) no motion along the normal direction is possible ($p_{r}=0$), while Eqs. (\[thi\]) and (\[for\]) determine, respectively, the Lagrangian multipliers $\lambda $ and $\dot{\lambda}$.
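The chain of constraints above is generated mechanically from the Poisson bracket (\[possi\]). As a sketch (Python with SymPy; an illustrative check, not part of the paper), one can reproduce Eqs. (\[db1\])-(\[for\]), the last one weakly, i.e. after imposing $p_{r}=0$:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta varphi', real=True)
pr, pth, pph = sp.symbols('p_r p_theta p_varphi')
lam, plam, ldot = sp.symbols('lambda p_lambda lambdadot')
m, a, b = sp.symbols('m a b', positive=True)

pairs = [(r, pr), (th, pth), (ph, pph), (lam, plam)]

def poisson(f, g):
    """Poisson bracket of Eq. (possi)."""
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in pairs)

# Primary Hamiltonian, Eq. (hami)
w = a + r*sp.sin(th)
Hp = (pr**2 + pth**2/r**2 + pph**2/w**2)/(2*m) + lam*(r - b) + ldot*plam

# Generate the chain of secondary constraints phi_{i+1} = {phi_i, Hp}
phi1 = plam
phi2 = poisson(phi1, Hp)
phi3 = poisson(phi2, Hp)
phi4 = poisson(phi3, Hp)
phi5 = poisson(phi4, Hp)

assert sp.simplify(phi2 + (r - b)) == 0                 # Eq. (db1)
assert sp.simplify(phi3 + pr/m) == 0                    # Eq. (db2)

phi4_paper = lam/m - (pth**2/r**3 + pph**2*sp.sin(th)/w**3)/m**2
assert sp.simplify(phi4 - phi4_paper) == 0              # Eq. (thi)

# phi_5 agrees with Eq. (for) weakly, i.e. after imposing p_r = 0
phi5_paper = ldot/m - 3*a*pth*pph**2*sp.cos(th)/(m**3*r**2*w**4)
assert sp.simplify(phi5.subs(pr, 0) - phi5_paper) == 0
```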
The Dirac bracket instead of the Poisson one for two variables $A$ and $B$ is defined by,$$\left\{ A,B\right\} _{D}\equiv \left\{ A,B\right\} -\left\{ A,\varphi
_{u}\right\} C_{uv}^{-1}\left\{ \varphi _{v},B\right\} ,$$where $C\equiv \left\{ C_{uv}\right\} $ is the $4\times 4$ matrix whose elements are defined by $C_{uv}\equiv \left\{ \varphi
_{v}\right\} $ with $u,v=1,2,3,4$ from Eqs. (\[prim\]) and (\[db1\])-(\[thi\]). The inverse matrix $C^{-1}$ is,$$C^{-1}=\left\{
\begin{array}{cccc}
0 & C_{12}^{-1} & 0 & m \\
-C_{12}^{-1} & 0 & -m & 0 \\
0 & m & 0 & 0 \\
-m & 0 & 0 & 0%
\end{array}%
\right\} ,$$where$$C_{12}^{-1}=\frac{3}{m}\left( \frac{p_{\theta }^{2}}{b^{4}}+\frac{p_{\varphi
}^{2}\sin ^{2}\theta }{\left( a+b\sin \theta \right) ^{4}}\right) .$$Thus, the generalized positions $q^{\mu }$ $(=\theta ,\varphi )$ and momenta $p_{\mu }$ satisfy the following Dirac brackets,$$\{q^{\mu },q^{\nu }\}_{D}=0,\text{ }\{p_{\mu },p_{\nu }\}_{D}=0,\text{ }%
\{q^{\mu },p_{\nu }\}_{D}=\delta _{\nu }^{\mu }. \label{xp1}$$By use of the equation of motion,$$\dot{f}=\left\{ f,H_{c}\right\} _{D},$$we obtain those for the positions $\theta $, $\varphi $ and the momenta $%
p_{\theta }$, $p_{\varphi }$, respectively,$$\begin{aligned}
\dot{\theta} &\equiv &\left\{ \theta ,H_{c}\right\} _{D}=\frac{p_{\theta }}{%
mb^{2}},\text{\ \ }\dot{\varphi}\equiv \left\{ \varphi ,H_{c}\right\} _{D}=%
\frac{p_{\varphi }}{m(a+b\sin \theta )^{2}}, \label{xh} \\
\dot{p}_{\theta } &\equiv &\left\{ p_{\theta },H_{c}\right\} _{D}=\frac{%
b\cos \theta p_{\varphi }^{2}}{m(a+b\sin \theta )^{3}},\text{ \ }\dot{p}%
_{\varphi }\equiv \left\{ p_{\varphi },H_{c}\right\} _{D}=0. \label{ph}\end{aligned}$$In these calculations (\[xh\]) and (\[ph\]), we in fact need only the usual form of Hamiltonian, $H_{p}\rightarrow H_{c}$,$$H_{c}=\frac{1}{2m}\left( \frac{p_{\theta }^{2}}{b^{2}}+\frac{p_{\varphi }^{2}%
}{\left( a+b\sin \theta \right) ^{2}}\right) .$$
So far, the classical mechanics for the motion on the torus is complete and coherent in itself.
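The matrix $C$ and its inverse can likewise be checked mechanically. The sketch below (Python with SymPy; an illustrative check, not part of the paper) rebuilds $C_{uv}=\{\varphi _{u},\varphi _{v}\}$ on the constraint surface and confirms the quoted $C^{-1}$:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta varphi', real=True)
pr, pth, pph = sp.symbols('p_r p_theta p_varphi')
lam, plam = sp.symbols('lambda p_lambda')
m, a, b = sp.symbols('m a b', positive=True)

pairs = [(r, pr), (th, pth), (ph, pph), (lam, plam)]

def poisson(f, g):
    """Poisson bracket of Eq. (possi)."""
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in pairs)

w = a + r*sp.sin(th)
phis = [plam,                                           # phi_1
        -(r - b),                                       # phi_2
        -pr/m,                                          # phi_3
        lam/m - (pth**2/r**3
                 + pph**2*sp.sin(th)/w**3)/m**2]        # phi_4

# C_uv = {phi_u, phi_v}, evaluated on the constraint surface r = b, p_r = 0
C = sp.Matrix(4, 4, lambda u, v: poisson(phis[u], phis[v]))
C = C.subs({r: b, pr: 0})

Cinv = sp.simplify(C.inv())

# The C^{-1} quoted in the paper
C12inv = 3*(pth**2/b**4 + pph**2*sp.sin(th)**2/(a + b*sp.sin(th))**4)/m
Cinv_paper = sp.Matrix([[0,       C12inv,  0, m],
                        [-C12inv, 0,      -m, 0],
                        [0,       m,       0, 0],
                        [-m,      0,       0, 0]])
assert sp.simplify(Cinv - Cinv_paper) == sp.zeros(4, 4)
```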
Quantum mechanical treatment
----------------------------
In quantum mechanics, we assume that the Hamiltonian takes the following general form,$$\begin{aligned}
H &=&-\frac{\hbar ^{2}}{2m}\left[ \nabla ^{2}+\left( \alpha M^{2}-\beta
K\right) \right] \notag \\
&=&-\frac{\hbar ^{2}}{2m}\left[ \frac{1}{b^{2}}\frac{\partial ^{2}}{\partial
\theta ^{2}}+\frac{\cos \theta }{b\left( a+b\sin \theta \right) }\frac{%
\partial }{\partial \theta }+\frac{1}{\left( a+b\sin \theta \right) ^{2}}%
\frac{\partial ^{2}}{\partial \varphi ^{2}}\right. \notag \\
&&+\left. \alpha \frac{1}{4}\left( \frac{a+2b\sin \theta }{ab+b^{2}\sin
\theta }\right) ^{2}-\beta \frac{\sin \theta }{ab+b^{2}\sin \theta }\right] ,
\label{h}\end{aligned}$$where, $$M=-\frac{1}{2}\frac{a+2b\sin \theta }{ab+b^{2}\sin \theta },\text{ }K=\frac{%
\sin \theta }{ab+b^{2}\sin \theta }.$$We are ready to construct the commutator $[A,B]$ of two variables $A$ and $B$ in quantum mechanics, which can be straightforwardly realized by a direct correspondence of the Dirac brackets as $\{A,B\}_{D}\rightarrow \left[ A,B%
\right] /i\hbar $. From the Dirac brackets (\[xp1\]), the first category of the fundamental commutators between the operators $q^{\mu }$ and $p_{\nu }$ is given by,$$\lbrack q^{\mu },q^{\nu }]=0,\text{ }[p_{\mu },p_{\nu }]=0,\text{ }[q^{\mu
},p_{\nu }]=i\hbar \delta _{\nu }^{\mu }. \label{xp2}$$In light of the GTCQ, we have the second category of fundamental commutators between $q^{\mu }$ and $H$ from Eq. (\[xh\]),$$\begin{aligned}
\left[ \theta ,H\right] &=&\frac{\hbar ^{2}}{mb^{2}}\left( \frac{\partial }{%
\partial \theta }+\frac{b\cos \theta }{2\left( a+b\sin \theta \right) }%
\right) =i\hbar \frac{p_{\theta }}{mb^{2}}, \label{qxh1} \\
\left[ \varphi ,H\right] &=&\frac{\hbar ^{2}}{m(a+b\sin \theta )^{2}}\frac{%
\partial }{\partial \varphi }=i\hbar \frac{p_{\varphi }}{m(a+b\sin \theta
)^{2}}. \label{qxh2}\end{aligned}$$From these quantum commutators, the operators $p_{\theta }$ and $p_{\varphi
} $ are, respectively,$$p_{\theta }=-i\hbar \left[ \frac{\partial }{\partial \theta }+\frac{b\cos
\theta }{2\left( a+b\sin \theta \right) }\right] ,\text{ }p_{\varphi
}=-i\hbar \frac{\partial }{\partial \varphi }\text{.} \label{cmom}$$Using these operators, we can directly calculate two quantum commutators $%
\left[ p_{\theta },H\right] $ and $\left[ p_{\varphi },H\right] $ with quantum Hamiltonian (\[h\]), and the results are, respectively,$$\begin{aligned}
\left[ p_{\theta },H\right] &=&i\hbar \frac{b\cos \theta }{m(a+b\sin \theta
)^{3}}p_{\varphi }^{2}+i\hbar \frac{\hbar ^{2}\cos \theta \left(
a^{2}(\alpha -2\beta +1)+2ab(\alpha -\beta )\sin \theta -b^{2}\right) }{%
4bm(a+b\sin \theta )^{3}}, \label{qph1} \\
\left[ p_{\varphi },H\right] &=&0. \label{qph2}\end{aligned}$$The second equation (\[qph2\]) is satisfactory, whereas the first one (\[qph1\]) can hardly hold true. In the GTCQ, the quantum commutator $\left[
p_{\theta },H\right] $ (\[qph1\]) must be the canonical quantization of the Dirac bracket (\[ph\]). We get, noting the mutual commutability between the two observables $p_{\varphi }$ and $\theta $, $$i\hbar \left\{ p_{\theta },H\right\} _{D}=\frac{i\hbar b\cos \theta
p_{\varphi }^{2}}{m(a+b\sin \theta )^{3}}. \label{qph1-1}$$In comparison with the right-hand sides of Eqs. (\[qph1\]) and (\[qph1-1\]), we obtain a unique solution, $$\alpha =\beta =\frac{a^{2}-b^{2}}{a^{2}}(\neq 1),$$which leads to an unacceptable curvature dependent quantum potential that includes the extrinsic curvature $M$,$$V_{D}=-\frac{\hbar ^{2}}{2m}\frac{a^{2}-b^{2}}{a^{2}}\left( M^{2}-K\right) =-%
\frac{\hbar ^{2}}{2m}\frac{a^{2}-b^{2}}{4b^{2}\left( a+b\sin \theta \right)
^{2}}.$$However, no matter what other values of $\alpha $ and $\beta $ are chosen, there is a manifest breakdown of the canonical quantization rule between Dirac bracket $\left\{ p_{\theta },H\right\} _{D}$ (\[ph\]) and the quantum commutator $\left[ p_{\theta },H\right] $ (\[qph1\]). So we see that the intrinsic geometry is insufficient for the GTCQ to be self-consistent.
If we use the original form of Dirac's theory instead, we still obtain the results (\[cmom\])-(\[qph2\]), but we can never interpret them as the canonical quantization of the relevant Dirac brackets (\[xh\])-(\[ph\]). This is quite unsatisfactory: we are neither able to exclude the extrinsic curvature $M$, nor able to give an unambiguous prediction of the curvature dependent potential that would be testable by experiment.
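For reference, the quantities $M$ and $K$ defined below Eq. (\[h\]) are, up to sign conventions, the mean and the Gaussian curvature of the torus. The following sympy sketch recovers both from the first and second fundamental forms of the embedding; the parametrization $x=(a+b\sin\theta)\cos\varphi$, $y=(a+b\sin\theta)\sin\varphi$, $z=b\cos\theta$ and the sample radii $a=2$, $b=1$ are our assumptions for the check, not part of the original derivation.

```python
import sympy as sp

a, b = sp.Rational(2), sp.Rational(1)      # assumed radii, a > b
th, ph = sp.symbols('theta varphi', real=True)

# embedding consistent with the intrinsic metric b^2 dtheta^2 + (a+b sin theta)^2 dvarphi^2
r = sp.Matrix([(a + b*sp.sin(th))*sp.cos(ph),
               (a + b*sp.sin(th))*sp.sin(ph),
               b*sp.cos(th)])
r_t, r_p = r.diff(th), r.diff(ph)
E, F, G = r_t.dot(r_t), r_t.dot(r_p), r_p.dot(r_p)   # first fundamental form
n = r_t.cross(r_p)
n = n / sp.sqrt(n.dot(n))                            # unit normal (orientation-dependent)
L, Mf, N = n.dot(r.diff(th, 2)), n.dot(r.diff(th, ph)), n.dot(r.diff(ph, 2))

K_num = (L*N - Mf**2) / (E*G - F**2)                 # Gaussian curvature
H_num = (E*N - 2*F*Mf + G*L) / (2*(E*G - F**2))      # mean curvature (up to sign)

# closed forms quoted in the text: K = sin(theta)/(b(a+b sin theta)),
# |M| = (a + 2b sin theta)/(2b(a + b sin theta))
pt = {th: 0.7, ph: 1.1}
K_ref = sp.sin(th)/(b*(a + b*sp.sin(th)))
M_ref = (a + 2*b*sp.sin(th))/(2*b*(a + b*sp.sin(th)))
assert abs(float((K_num - K_ref).subs(pt))) < 1e-9
assert abs(abs(float(H_num.subs(pt))) - float(M_ref.subs(pt))) < 1e-9
print("Gaussian and mean curvature match the closed forms")
```

The Gaussian curvature is orientation-independent, while the sign of the mean curvature depends on the choice of the normal, so only its modulus is compared.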
It should be noted that we have not introduced additional assumptions, such as the “dummy factor” techniques [@Kleinert], in passing from the Dirac brackets to the quantum commutators; such assumptions would amount to further generalizations of Dirac's theory.
In the classical limit $\hbar \rightarrow 0$, all the inconsistencies vanish, as expected.
Summary
-------
From the studies in this section, we see that the GTCQ of second-class constraints for quantum motion on the torus cannot be consistently formulated in purely intrinsic terms. We therefore need to invoke an extrinsic examination of the same problem, as will be done in the next section.
GTCQ for a torus as a submanifold
=================================
The surface equation of the torus (\[rr\]) in Cartesian coordinates $%
\left( x,y,z\right) $ is given by,$$f\left( \mathbf{x}\right) \equiv a^{2}-b^{2}+(x^{2}+y^{2}+z^{2})-2a\sqrt{%
x^{2}+y^{2}}=0.$$In this section, we will again first give the classical mechanics for motion on the torus within the Dirac formalism for a system with second-class constraints, and then turn to quantum mechanics. The GTCQ proves to be self-consistent all around, and the resultant momenta and Hamiltonian are exactly those given by Eqs. (\[gm\]) and (\[gp\]), respectively.
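As a quick sanity check, the parametrization of the torus that is consistent with the intrinsic metric $b^{2}d\theta^{2}+(a+b\sin\theta)^{2}d\varphi^{2}$ used in the previous section indeed satisfies the surface equation $f(\mathbf{x})=0$; a minimal numerical sketch (the sample radii are arbitrary, and the parametrization itself is our assumption):

```python
import math, random

# Torus parametrization consistent with the intrinsic metric used above:
# x = (a + b sin(theta)) cos(phi), y = (a + b sin(theta)) sin(phi), z = b cos(theta)
a, b = 2.0, 1.0  # arbitrary radii with a > b
random.seed(0)
for _ in range(100):
    th = random.uniform(0.0, 2.0 * math.pi)
    ph = random.uniform(0.0, 2.0 * math.pi)
    x = (a + b * math.sin(th)) * math.cos(ph)
    y = (a + b * math.sin(th)) * math.sin(ph)
    z = b * math.cos(th)
    # surface equation f(x) = a^2 - b^2 + (x^2 + y^2 + z^2) - 2a sqrt(x^2 + y^2)
    f = a**2 - b**2 + (x*x + y*y + z*z) - 2.0 * a * math.hypot(x, y)
    assert abs(f) < 1e-9
print("surface equation satisfied on 100 random points")
```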
Classical mechanical treatment
------------------------------
The Lagrangian $L$ in the Cartesian coordinate system is,$$L=\frac{m}{2}\left( \dot{x}^{2}+\dot{y}^{2}+\dot{z}^{2}\right) -\lambda
f\left( \mathbf{x}\right) . \label{lagca}$$The generalized momentum $\mathbf{p}$ whose three components $p_{i}$ $%
(i=x,y,z)$ and $p_{\lambda }$ canonically conjugate to variables $x_{i}$ $%
(x_{1}=x,x_{2}=y,x_{3}=z,)$ and $\lambda $, are given by, respectively,$$\begin{aligned}
p_{i} &=&\frac{\partial L}{\partial \dot{x}_{i}}=m\dot{x}_{i},(i=1,2,3), \\
p_{\lambda } &=&\frac{\partial L}{\partial \dot{\lambda}}=0. \label{plambca}\end{aligned}$$Eq. (\[plambca\]) represents the primary constraint,$$\varphi _{1}\equiv p_{\lambda }\approx 0. \label{prim2}$$By the Legendre transformation, the primary Hamiltonian $H_{p}$ is,$$H_{p}=\frac{1}{2m}p_{i}^{2}+\lambda f\left( \mathbf{x}\right) +\dot{\lambda}%
p_{\lambda }.$$The secondary constraints are determined by successive use of the Poisson brackets,$$\begin{aligned}
\varphi _{2} &\equiv &\left\{ \varphi _{1},H_{p}\right\}
=-(a^{2}-b^{2}+x_{i}^{2}-2a\sqrt{x^{2}+y^{2}})\approx 0, \label{1st2} \\
\varphi _{3} &\equiv &\left\{ \varphi _{2},H_{p}\right\} =-\frac{2\left(
\sqrt{x^{2}+y^{2}}(p_{x}x+p_{y}y+p_{z}z)-a(p_{x}x+p_{y}y)\right) }{m\sqrt{%
x^{2}+y^{2}}}\approx 0, \\
\varphi _{4} &\equiv &\left\{ \varphi _{3},H_{p}\right\} =\frac{4\lambda
\left( a^{2}-2a\sqrt{x^{2}+y^{2}}+x_{i}^{2}\right) }{m}+\frac{%
2a(p_{y}x-p_{x}y)^{2}}{m^{2}\left( x^{2}+y^{2}\right) ^{3/2}}-\frac{%
2p_{i}^{2}}{m^{2}}\approx 0, \label{thica} \\
\varphi _{5} &\equiv &\left\{ \varphi _{4},H_{p}\right\} =\frac{4\dot{\lambda%
}\left( a^{2}-2a\sqrt{x^{2}+y^{2}}+x_{i}^{2}\right) }{m}-\frac{%
6a(p_{x}x+p_{y}y)(p_{y}x-p_{x}y)^{2}}{m^{3}\left( x^{2}+y^{2}\right) ^{5/2}}%
\approx 0. \label{forca}\end{aligned}$$Similarly, the Dirac bracket between two variables $A$ and $B$ is defined by,$$\left\{ A,B\right\} _{D}=\left\{ A,B\right\} -\left\{ A,\varphi _{u}\right\}
D_{uv}^{-1}\left\{ \varphi _{v},B\right\} ,$$where the $4\times 4$ matrix $D\equiv \left\{ D_{uv}\right\} $ whose elements are defined by $D_{uv}\equiv \left\{ \varphi _{u},\varphi
_{v}\right\} $ with $u,v=1,2,3,4$ from Eqs. (\[prim2\]) and (\[1st2\])-(\[thica\]). The inverse matrix $D^{-1}$ is easily carried out,$$D^{-1}=\left(
\begin{array}{cccc}
0 & D_{12}^{-1} & 0 & \kappa \\
-D_{12}^{-1} & 0 & -\kappa & 0 \\
0 & \kappa & 0 & 0 \\
-\kappa & 0 & 0 & 0%
\end{array}%
\right) ,$$where,$$D_{12}^{-1}=\frac{\left( 3a^{2}-7a\sqrt{x^{2}+y^{2}}\right)
(p_{y}x-p_{x}y)^{2}+4\left( x^{2}+y^{2}\right) ^{2}p_{i}^{2}}{4b^{4}m\left(
x^{2}+y^{2}\right) ^{2}},\text{ }\kappa =\frac{m}{4b^{2}}.$$Then primary Hamiltonian $H_{p}$ assumes its usual one: $H_{p}\rightarrow
H_{c},$$$H_{c}=\frac{p_{x}^{2}+p_{y}^{2}+p_{z}^{2}}{2m}. \label{HP}$$All fundamental Dirac’s brackets are as follows,$$\begin{aligned}
\{x_{i},x_{j}\}_{D} &=&0, \label{xxca1} \\
\{x_{i},p_{j}\}_{D} &=&\delta _{ij}-\frac{1}{b^{2}}f_{i}f_{j}, \label{xpca1}
\\
\{p_{i},p_{j}\}_{D} &=&-\frac{1}{b^{2}}\left[ f_{i}\left( p_{j}+\frac{%
a\left( xp_{y}-yp_{x}\right) }{\left( x^{2}+y^{2}\right) ^{3/2}}\left(
y\delta _{1j}-x\delta _{2j}\right) \right) -f_{j}\left( p_{i}+\frac{a\left(
xp_{y}-yp_{x}\right) }{\left( x^{2}+y^{2}\right) ^{3/2}}\left( y\delta
_{1i}-x\delta _{2i}\right) \right) \right] , \label{xhca1} \\
\{x_{i},H_{c}\}_{D} &=&\frac{p_{i}}{m}=\dot{x}_{i}, \\
\{p_{i},H_{c}\}_{D} &=&-\frac{1}{mb^{2}}\left[ f_{i}\left(
p_{x}^{2}+p_{y}^{2}+p_{z}^{2}-\frac{a\left( xp_{y}-yp_{x}\right) ^{2}}{%
\left( x^{2}+y^{2}\right) ^{3/2}}\right) \right] =\dot{p}_{i}, \label{phca1}\end{aligned}$$ where $f_{i}=x_{i}-a\left( x\delta _{1i}+y\delta _{2i}\right) /\sqrt{%
x^{2}+y^{2}}$.
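The chain of constraints (\[prim2\])-(\[forca\]) can be reproduced mechanically from the Poisson brackets; a sympy sketch of the first steps follows (the helper `pb` and the symbol names are ours, and only the first two secondary constraints are checked):

```python
import sympy as sp

m, a, b = sp.symbols('m a b', positive=True)
x, y, z, px, py, pz, lam, plam, lamdot = sp.symbols(
    'x y z p_x p_y p_z lambda p_lambda lambdadot', real=True)

qs, ps = [x, y, z, lam], [px, py, pz, plam]

def pb(A, B):
    """Poisson bracket {A, B} over the canonical pairs (x_i, p_i) and (lambda, p_lambda)."""
    return sum(sp.diff(A, q)*sp.diff(B, p) - sp.diff(A, p)*sp.diff(B, q)
               for q, p in zip(qs, ps))

f  = a**2 - b**2 + x**2 + y**2 + z**2 - 2*a*sp.sqrt(x**2 + y**2)
Hp = (px**2 + py**2 + pz**2)/(2*m) + lam*f + lamdot*plam

phi1 = plam
phi2 = pb(phi1, Hp)
assert sp.simplify(phi2 + f) == 0          # phi2 = -f, as in the text

phi3 = pb(phi2, Hp)
s = sp.sqrt(x**2 + y**2)
phi3_text = -2*(s*(px*x + py*y + pz*z) - a*(px*x + py*y))/(m*s)
assert sp.simplify(phi3 - phi3_text) == 0  # matches the expression in the text
print("constraint chain reproduced")
```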
Quantum mechanical treatment
----------------------------
Now let us turn to quantum mechanics. The first category of the fundamental commutators between operators $x_{i}$ and $p_{i}$ are, by quantization of (\[xxca1\])-(\[xhca1\]),$$\begin{aligned}
\left[ x_{i},x_{j}\right] &=&0,\text{ \ }\left[ x_{i},p_{j}\right] =i\hbar
\left( \delta _{ij}-\frac{1}{b^{2}}f_{i}f_{j}\right) , \label{xx-xp} \\
\left[ p_{i},p_{j}\right] &=&-\frac{i\hbar }{b^{2}}\left[ f_{i}\left( p_{j}+a%
\frac{L_{z}\left( y\delta _{1j}-x\delta _{2j}\right) +\left( y\delta
_{1j}-x\delta _{2j}\right) L_{z}}{2\left( x^{2}+y^{2}\right) ^{3/2}}\right)
\right. \notag \\
&&\left. -f_{j}\left( p_{i}+a\frac{L_{z}\left( y\delta _{1i}-x\delta
_{2i}\right) +\left( y\delta _{1i}-x\delta _{2i}\right) L_{z}}{2\left(
x^{2}+y^{2}\right) ^{3/2}}\right) \right] , \label{ppca2}\end{aligned}$$where $L_{z}=xp_{y}-yp_{x}$. It seems that we have a complicated operator-ordering problem in passing from the Dirac bracket (\[xhca1\]) to the quantum commutator (\[ppca2\]). In fact, only one pair of the noncommuting observables $x_{i}$ (precisely, $\left( y\delta
_{1j}-x\delta _{2j}\right) $) and $L_{z}$ matters, and the product of $%
\left( y\delta _{1j}-x\delta _{2j}\right) $ and $L_{z}$ can be made Hermitian by a symmetric construction, $\left( \left( y\delta _{1j}-x\delta
_{2j}\right) L_{z}+L_{z}\left( y\delta _{1j}-x\delta _{2j}\right) \right) /2$. Other products of factors $f_{i}$ (or $f_{j}$) and $L_{z}$ impose no operator-ordering problem because of the Jacobi identity.
There is a family of momenta $p_{i}$, all of which are solutions to Eq. (\[ppca2\]), as explicitly shown in [@japan1992]. With these momenta $p_{i}$ at hand alone, we do not know the correct form of the quantum Hamiltonian suggested by Eq. (\[HP\]). It is therefore understandable that the quantum Hamiltonian would contain arbitrary parameters.
However, the GTCQ requires the second category of fundamental commutators, $\left[ x_{i},H\right] $ and $\left[ p_{i},H\right] $. We immediately find the momenta $p_{i}$ from the following commutators, $$\left[ x_{i},H\right] =i\hbar \frac{p_{i}}{m}. \label{xhca2}$$The obtained momenta $p_{i}$ are nothing but the three components of the geometric momentum (\[gm\]) on the torus [@torus4], $$\begin{aligned}
p_{x} &=&-i\hbar \left( \frac{\cos \theta \cos \varphi }{b}\frac{\partial }{%
\partial \theta }-\frac{\sin \varphi }{a+b\sin \theta }\frac{\partial }{%
\partial \varphi }-\frac{a+2b\sin \theta }{2b(a+b\sin \theta )}\sin \theta
\cos \varphi \right) , \\
p_{y} &=&-i\hbar \left( \frac{\cos \theta \sin \varphi }{b}\frac{\partial }{%
\partial \theta }+\frac{\cos \varphi }{a+b\sin \theta }\frac{\partial }{%
\partial \varphi }-\frac{a+2b\sin \theta }{2b(a+b\sin \theta )}\sin \theta
\sin \varphi \right) , \\
p_{z} &=&i\hbar \left( \frac{\sin \theta }{b}\frac{\partial }{\partial
\theta }+\frac{a+2b\sin \theta }{2b\left( a+b\sin \theta \right) }\cos
\theta \right) .\end{aligned}$$
As to the form of quantum Hamiltonian, we also start from the general form (\[h\]), and now resort to the following complicated operator-ordering arrangement with $w_{\pm }=(x\pm iy)^{3/2}$, $$\begin{aligned}
\left[ p_{i},H\right] &=&-\frac{i\hbar }{mb^{2}}\{mHf_{i}+mf_{i}H \notag \\
&&-\frac{a}{4}\alpha _{1}[f_{i}(L_{z}\frac{1}{w_{+}}L_{z}\frac{1}{w_{-}}+%
\frac{1}{w_{-}}L_{z}\frac{1}{w_{+}}L_{z})+(L_{z}\frac{1}{w_{+}}L_{z}\frac{1}{%
w_{-}}+\frac{1}{w_{-}}L_{z}\frac{1}{w_{+}}L_{z})f_{i}] \notag \\
&&-\frac{a}{4}\alpha _{2}[f_{i}(L_{z}\frac{1}{w_{+}}\frac{1}{w_{-}}L_{z}+%
\frac{1}{w_{-}}L_{z}L_{z}\frac{1}{w_{+}})+(L_{z}\frac{1}{w_{+}}\frac{1}{w_{-}%
}L_{z}+\frac{1}{w_{-}}L_{z}L_{z}\frac{1}{w_{+}})f_{i}] \notag \\
&&-\frac{a}{4}\alpha _{3}[f_{i}(\frac{1}{w_{+}}L_{z}L_{z}\frac{1}{w_{-}}%
+L_{z}\frac{1}{w_{-}}\frac{1}{w_{+}}L_{z})+(\frac{1}{w_{+}}L_{z}L_{z}\frac{1%
}{w_{-}}+L_{z}\frac{1}{w_{-}}\frac{1}{w_{+}}L_{z})f_{i}] \notag \\
&&-\frac{a}{4}\alpha _{4}[f_{i}(\frac{1}{w_{+}}L_{z}\frac{1}{w_{-}}%
L_{z}+L_{z}\frac{1}{w_{-}}L_{z}\frac{1}{w_{+}})+(\frac{1}{w_{+}}L_{z}\frac{1%
}{w_{-}}L_{z}+L_{z}\frac{1}{w_{-}}L_{z}\frac{1}{w_{+}})f_{i}] \notag \\
&&-\frac{a}{2}\alpha _{5}\frac{1}{w_{+}w_{-}}(f_{i}L_{z}^{2}+L_{z}^{2}f_{i})%
\}, \label{op-ord}\end{aligned}$$where $\alpha _{k}$, $(k=1,2,...5)$ are five real parameters satisfying $%
\sum \alpha _{k}=1$. Comparing both sides of this equation, we find the solution $\alpha =\beta =1$, while two of the five real parameters $\alpha _{k}$ are free to be specified, $$\alpha _{1}=\frac{11}{9}-\alpha _{4}-\alpha _{5},\alpha _{2}=\alpha _{3}=-%
\frac{1}{9}.$$We see that free parameters remain, but they are irrelevant to observable quantities such as momentum and potential. In fact, with $\alpha =\beta =1$ in (\[vd\]), a much simpler choice of the operator-ordering without free parameters is possible,$$\begin{aligned}
\left[ p_{i},H\right] &=&-\frac{i\hbar }{mb^{2}}\{mHf_{i}+mf_{i}H+\frac{1}{9}%
\frac{a}{4}[(\frac{1}{w_{+}}f_{i}L_{z}^{2}\frac{1}{w_{-}}+\frac{1}{w_{-}}%
f_{i}L_{z}^{2}\frac{1}{w_{+}}) \notag \\
&&+(\frac{1}{w_{+}}L_{z}^{2}\frac{1}{w_{-}}f_{i}+\frac{1}{w_{-}}L_{z}^{2}%
\frac{1}{w_{+}}f_{i})]-\frac{10}{9}\frac{a}{2}\frac{1}{w_{+}w_{-}}%
(f_{i}L_{z}^{2}+L_{z}^{2}f_{i})\}. \label{ph2}\end{aligned}$$Even though we can by no means exhaust all possible forms of the operator-ordering, from Eqs. (\[op-ord\]) and (\[ph2\]) we can at least conclude that the curvature dependent potential (\[vd\]) given by the Dirac formalism converges to the geometric potential (\[gp\]) given by the Schrödinger one.
Summary
-------
An examination of the motion on the torus, treated as a submanifold, shows that the GTCQ gives a highly self-consistent description, and that this formalism is compatible with the Schrödinger one.
Remarks and conclusions
=======================
It has long been known that Dirac’s theory of second-class constraints, in which the fundamental commutation relations involve only those between canonical positions and canonical momenta, contains redundant freedoms and sometimes causes difficulties. To overcome these problems, we recently put forward a proposal that the commutators between the positions, the momenta, and the Hamiltonian form a full set of fundamental commutation relations with which to construct a self-consistent quantum theory, the so-called GTCQ. The GTCQ then produces a unique form of the geometric momentum, and imposes an additional requirement on the form of the Hamiltonian via the curvature dependent potential, which has no direct classical analogue. We see that the geometric potential comes as a consequence of the extrinsic examination of the constrained motion.
Through a careful analysis of the quantum motion on a torus, we demonstrate that the purely intrinsic geometry does not suffice for the GTCQ to be self-consistently formulated, but an extrinsic examination of the torus in three dimensional flat space does. Our study implies that the Dirac formalism is complementary to the Schrödinger one. The former can be helpful to eliminate the intrinsic description, and the latter gives the unique form of the geometric potential, while both define the identical form of the geometric momentum.
This work is financially supported by National Natural Science Foundation of China under Grant No. 11175063.
[99]{} H. Jensen and H. Koppe, *Ann. Phys*. **63**, 586(1971).
R. C. T. da Costa, *Phys. Rev. A* **23**, 1982(1981).
A. V. Chaplik and R. H. Blick, *New J. Phys.* **6**, 33(2004).
G. Ferrari and G. Cuoghi, *Phys. Rev. Lett.* **100,** 230403 (2008).
Q. H. Liu, C. L. Tong and M. M. Lai, *J. Phys. A: Math. and Theor.* **40,** 4161(2007).
Q. H. Liu, L. H. Tang and D. M. Xun, *Phys. Rev. A* **84**, 042101(2011).
T. Homma, T. Inamoto, T. Miyazaki, Phys. Rev. D **42**, 2049(1990); *Z. Phys. C* **48**, 105(1990).
M. Ikegami and Y. Nagaoka, S. Takagi and T. Tanzawa, Prog. Theoret. Phys. **88**, 229(1992). This paper utilizes two forms of the constraint $f\left( \mathbf{x}\right) =0$ that is defined in real-space, and $df\left( \mathbf{x}\right) /dt=0$ that is defined in phase-space, together with a resort to some special arrangements of the operator-ordering, to give their results. They claim that the geometric potential of form (\[vd\]) with $\alpha \neq 0$ is only available with $%
df\left( \mathbf{x}\right) /dt=0$. Since there is no way to exhaust all possible forms of the operator-ordering, their results are stimulating but far from complete.
S. Matsutani, J. Phys. A: Math. Gen. **26,** 5133-5143(1993).
A. Szameit, F. Dreisow, M. Heinrich, R. Keil, S. Nolte, A. Tünnermann and S. Longhi, *Phys. Rev. Lett.* **104**, 150403(2010).
J. Onoe, T. Ito, H. Shima, H. Yoshioka and S. Kimura, *Europhys. Lett.* **98**, 27001(2012).
P. A. M. Dirac, *Lectures on quantum mechanics* (Yeshiva University, New York, 1964); Can. J. Math. **2**, 129(1950).
P. A. M. Dirac, *The Principles of Quantum Mechanics*, 4th ed. (Oxford University Press, Oxford, 1967).
P. A. M. Dirac, *Proc. R. Soc. Lond. A* **109**, 642-653(1925). In this paper, Dirac scrutinized the problem whether $%
i\hbar \{f,H\}\rightarrow \lbrack f,H]$ is necessary for a system without constraints, and wrote his thoughtful observations as follows: *These equations (i.e.* $i\hbar \{f,H\}\rightarrow
\lbrack f,H]$*) will be true on the quantum theory for systems for which the orders of the factors of the products occurring in the equations of motion are unimportant. They may be taken to be true for systems for which these orders are important if one can decide upon the orders of the factors in* $H$*.*
W. Greiner, *Quantum Mechanics: An Introduction*, 4th. ed. (Springer, Berlin, 2001) pp. 193–196. The author elaborated on how it would not be successful unless the Cartesian system is used when performing the canonical quantization of the classical system, as noted by Dirac [@dirac2].
W. S. Massey, *Algebraic Topology: An Introduction* (New York: Springer, 1977).
M. Encinosa and L. Mott, *Phys. Rev. A* **68**, 014102 (2003).
S. Ishikawa, T. Miyazaki, K. Yamamoto, M. Yamanobe, *Int. J. Mod. Phys. A*, **11**, 3363(1996).
K. Kowalski and J. Rembieliński, *Phys. Rev. A* **75**, 052102(2007).
Q. H. Liu, J. X. Hou, Y. P. Xiao, and L. X. Li, Int. J. Theor. Phys. **43**, 1011(2004).
H. Kleinert, *Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets*, 5th ed., (World Scientific, Singapore, 2009).
---
abstract: 'We focus on the possibility of measuring the gravitomagnetic effects due to the rotation of the Earth, by means of a space-based experiment that exploits satellites in geostationary orbits. Due to the rotation of the Earth, there is an asymmetry in the propagation of electromagnetic signals in opposite directions along a closed path around the Earth. We work out the delays between the two counter-propagating beams for a simple configuration, and suggest that accurate time measurements could allow, in principle, to detect the gravitomagnetic effect of the Earth.'
author:
- Matteo Luca Ruggiero
- Angelo Tartaglia
title: Test of gravitomagnetism with satellites around the Earth
---
Introduction {#sec:intro}
============
Among the many predictions of General Relativity (GR), gravitomagnetic effects require, still today, an exceptional observational effort to be detected within a reasonable accuracy level. Indeed, the term *gravitomagnetism* refers to the part of the gravitational field originating from *mass currents*; actually, it is a well known fact (see e.g. [@ncb]) that Einstein equations, in weak-field approximation (small masses, low velocities), can be written in analogy with Maxwell equations for the electromagnetic field, where the mass density and current play the role of the charge density and current, respectively.
There have been many proposals to test these effects, both in the past (see the review paper [@ncb]) and more recently; among the recent attempts to measure gravitomagnetic effects, it is worth mentioning the LAGEOS tests around the Earth [@Ciufolini:2004rq; @ciufolini2010], the MGS tests around Mars [@iorio2006; @iorio2010a] and other tests around the Sun and the planets [@iorio2012a]. In 2012 the LARES mission [@LARES] was launched to measure the Lense-Thirring effect of the Earth: results and comments about the LAGEOS/LARES missions can be found in the papers [@Ciufolini:2016ntr; @Iorio:2017uew; @ciufolini18]. Moreover, the Gravity Probe B mission [@GP.B] was launched to measure the precession of orbiting gyroscopes [@pugh; @schiff]. LAGRANGE [@Tartaglia:2017fri] is another proposed space-based experiment, which suggests the possibility of exploiting spacecraft located at the Lagrangian points of the Sun-Earth system to measure some relativistic effects, among which the gravitomagnetic effect of the Sun; moreover, the satellites can be used to build a relativistic positioning system [@Tartaglia:2010sw]. GINGER is a proposal which investigates the possibility of measuring gravitomagnetic effects in a terrestrial laboratory, using an array of ring lasers [@GINGER11; @ruggierogalaxies; @GINGER14; @Tartaglia:2016jfo].
Indeed, the main problem in detecting gravitomagnetic effects is that they are very small compared to the *gravitoelectric* ones (i.e. Newtonian-like), due to the masses and not to their currents: in fact, one of the most difficult challenges is modelling with adequate accuracy the dominant effects, which are several orders of magnitude greater.
In this paper we discuss a new proposal to measure an observable quantity which is purely gravitomagnetic, since it is related to the angular momentum of the source of the gravitational field and not to its mass alone. Actually, the idea of measuring the propagation times of electromagnetic signals in order to measure the curvature of space-time was already discussed, with a more general approach, by Synge [@synge]. The experimental setup consists of satellites orbiting the Earth, sending electromagnetic signals to each other in two opposite directions along a closed path: in particular, we suppose that two signals are contemporarily emitted from one satellite in opposite directions; the two signals reach the other satellites, where they are re-transmitted, and eventually arrive at the satellite which emitted them. If signals are emitted in flat space-time, it is intuitively expected that the signal propagating in the same direction as the satellites' rotation takes a longer time than the signal propagating in the opposite direction, and this can be seen as a special relativistic (SR) time delay. Indeed, in curved space-time there is an additional time delay, due to the rotation of the Earth, i.e. to its angular momentum, and this can be seen as a gravitomagnetic effect. We calculate the time difference for satellites on a geostationary orbit and evaluate the magnitude of the effect for a simple configuration. In order to assess the magnitude of the effects we are dealing with, we remember that the propagation time for a complete round trip along a geostationary orbit is of the order of a second: in these conditions, the SR time delay between the two propagation times is of the order of microseconds, while the gravitomagnetic contribution is about ten orders of magnitude smaller than the former.
The time delay {#sec:thesetup}
==============
In this Section, we aim at calculating the difference in the propagation times of two electromagnetic signals moving in opposite directions, along a closed path around the Earth. The closed path is determined by a constellation of satellites. More in detail, we suppose that two signals are contemporarily emitted from one satellite in opposite directions; the two signals reach the other satellites, where they are re-transmitted, and eventually arrive at the satellite which emitted them. The delay between the arrival times of the two signals, as measured by a clock in the emitting/receiving satellite, is the observable quantity that we want to measure.
![We use polar coordinates around the Earth: the position vector $\vec r$ is identified by its length $r$, $\vartheta$ (the angle with the $z$ axis) and $\varphi$ (the angle between the projection of $\vec r$ on the $xy$ plane and the $x$ axis). The $z$ axis is aligned with the Earth rotation axis; because of the axial symmetry, it is not important for our purposes to define the orientation of the $x$ and $y$ axes. []{data-label="fig:coord"}](coordinate.eps)
To begin with, we describe the space-time around the Earth by the following approximated line element: $$ds^2 = -\left(1- \frac{2GM_{E}}{c^{2}r}\right)c^2dt^2+\left(1+ \frac{2GM_{E}}{c^{2}r}\right)dr^{2}+r^2 \left(d\vartheta^2+\sin^2 \vartheta \, d\varphi^2 \right) -\frac{4GJ_{E}}{c^{2}r}\sin^2 \vartheta \, d\varphi \, dt, \label{eq:wf1}$$
In the above equation, $M_{E}$ is the Earth mass, while $\vec{J}_{E}$ is its angular momentum, $G$ is the gravitational constant and $c$ is the speed of light; we use the Schwarzschild-like coordinates $(t,r,\vartheta,\varphi)$ (see Figure \[fig:coord\]) and assume that the angular momentum is orthogonal to the equatorial plane $\vartheta=\pi/2$. The Earth is assumed to be spherical, and the lowest-order gravitomagnetic correction is given by the term containing $J_{E}$, which in the terrestrial environment is indeed six orders of magnitude smaller than the mass terms. In order to give a preliminary evaluation of the effect, for the sake of simplicity we consider satellites in a geostationary orbit in the equatorial plane. To this end, we remember that the radius of the geostationary orbit is $r_{{geo}} \simeq 4.2 \times 10^{7}$ m, with respect to the centre of the Earth, and that the satellites are moving with a period of 1 day, which corresponds to an angular speed $\displaystyle \omega_{E}=\sqrt{\frac{GM_{E}}{r^{3}_{{geo}}}} \simeq 7.3 \times 10^{-5}$ rad/s.
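These orbital numbers can be recovered from Kepler's third law; a quick sketch (the value of $GM_{E}$ and the length of the sidereal day used below are standard inputs, not taken from the text):

```python
import math

GM_E = 3.986004e14        # Earth's GM [m^3/s^2] (standard value, an input here)
T_sid = 86164.1           # sidereal day [s]

omega_E = 2.0 * math.pi / T_sid
r_geo = (GM_E / omega_E**2) ** (1.0 / 3.0)   # from omega = sqrt(GM/r^3)

print(f"omega_E = {omega_E:.3e} rad/s")      # ~7.3e-5 rad/s
print(f"r_geo   = {r_geo:.3e} m")            # ~4.2e7 m
assert abs(omega_E - 7.3e-5) < 1e-6
assert abs(r_geo - 4.2e7) < 1e6
```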
If we set $\vartheta=\pi/2$, Eq. (\[eq:wf1\]) becomes $$ds^2 = -\left(1- \frac{2GM_{E}}{c^{2}r}\right)c^2dt^2+\left(1+ \frac{2GM_{E}}{c^{2}r}\right)dr^{2}+r^2 d\varphi^2 - \frac{4GJ_{E}}{c^{2}r}\, d\varphi \, dt, \label{eq:wf2}$$
Then, we perform the transformation $\varphi=\phi+\omega_{ E}t$ to the reference frame co-rotating with the satellites, since measurements are performed in this frame; accordingly, on taking into account that $\omega_{E}$ is constant, the line element becomes
$$ds^{2}=-\left( 1-\frac{2GM_{E}}{c^{2}r}-\frac{\omega_{E}^{2}r^{2}}{c^{2}}-\frac{4GJ_{E}\,\omega_{E}}{c^{4}r}\right)c^2dt^{2}-2\left(\frac{2GJ_{E}}{c^{2}r}+\omega_{E}r^{2} \right)d\phi \, dt+\left(1+ \frac{2GM_{E}}{c^{2}r}\right)dr^{2}+r^{2}d\phi^{2} \label{eq:wf3}$$
We remember that a line-element (in general coordinates) in the form $$ds^{2}=g_{00}dt^{2}+2g_{0i}\,dt\,dx^{i}+g_{ij}dx^{i}dx^{j} \label{eq:nonto}$$is said to be *non time-orthogonal*, because $g_{0i} \neq 0$. In our case, the indices $i,j$ correspond to the coordinates $r,\phi$; as we see, $\displaystyle g_{0i} \rightarrow g_{0\phi}=-\frac{2GJ_{E}}{c^2r}-\omega_{E}r^{2}$, and this term depends both on the rotation of the source of the gravitational field, through its angular momentum, and on the rotational features of the reference frame, through the angular velocity.
As described in [@Ruggiero:2014aya], given a line element in the form (\[eq:nonto\]), in order to calculate the propagation times of electromagnetic signals it is possible to proceed as follows. First of all we set $ds^{2}=0$ and, hence, we are able to solve for the infinitesimal coordinate time interval along the world line of a light ray: $$dt= \frac{-g_{0i}\,dx^{i}-\sqrt{\left(g_{0i}\,g_{0j}-g_{00}\,g_{ij}\right)dx^{i}dx^{j}}}{g_{00}} \label{eq:1rev}$$We choose $dt > 0$, since we are interested in solutions in the future. Equation (\[eq:1rev\]) allows us to evaluate the coordinate time of flight of an electromagnetic signal between two successive events in a vacuum. If we consider a closed path (in space) and integrate over the path in two opposite directions from the emission to the absorption events, two different results for the times of flight are obtained because of the off diagonal $g_{0i}$ components of the metric tensor, say $t_{+}$, $t_{-}$, where “$+$” refers to the signal co-rotating with the satellites reference frame, while “$-$” stands for the counter-rotating signal. Consequently, the difference between the times of flight turns out to be $$\Delta t= t_{+}-t_{-} = -2\oint_{L} \frac{g_{0i}}{g_{00}}\, dx^{i} \label{eq:sagnac1}$$where $L$ is the spatial trajectory of the signals; in obtaining the above result, we have used the time independence of the metric coefficients, as well as the fact that emission and absorption happen at the same position in the rotating frame.
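Eq. (\[eq:1rev\]) is simply the future-pointing root of the null condition $ds^{2}=0$; the following sketch checks this numerically on a toy one-dimensional metric (all numbers are arbitrary choices of ours):

```python
import math

# toy metric values with g00 < 0 and a positive spatial part
g00, g0x, gxx = -1.0, 0.1, 1.0
dx = 0.5

B = g0x * dx                         # g_{0i} dx^i
disc = (g0x*g0x - g00*gxx) * dx*dx   # (g_{0i} g_{0j} - g00 g_{ij}) dx^i dx^j
dt = (-B - math.sqrt(disc)) / g00    # the root with dt > 0

assert dt > 0.0
# the null condition g00 dt^2 + 2 g0x dt dx + gxx dx^2 = 0 is satisfied
assert abs(g00*dt*dt + 2.0*g0x*dt*dx + gxx*dx*dx) < 1e-12
print(f"dt = {dt:.6f}")
```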
In our case and to lowest approximation order, we distinguish two contributions to the time difference $\Delta t$ $$\Delta t = \Delta t^{SR}+\Delta t^{GR} \label{eq:sagnac2}$$where $$\Delta t^{SR} \doteq \frac{2}{c^{2}} \oint_{L} \omega_{E}\,r^{2}\,d\phi \label{eq:deltatSR1}$$depends on the rotation of the reference frame without an appreciable contribution from the mass $M_{E}$ and, consequently, is a SR term, while $$\Delta t^{GR} \doteq 2 \oint_{L} \frac{2GJ_{E}}{c^{4}\,r}\,d\phi \label{eq:deltatGR1}$$is related to the angular momentum of the Earth and it is a *gravitomagnetic* GR contribution.
![Satellites 1, 2, 3 are at the vertices of a triangle, and moving along a geostationary orbit. Signals propagate in opposite directions starting from satellite 1; after a complete round trip along the triangular path, they reach again satellite 1.[]{data-label="fig:sat"}](sat.eps)
In order to calculate the above contributions, we consider a simple and symmetrical configuration, made of three satellites at the vertices of an equilateral triangle. The situation is depicted in Figure \[fig:sat\]: the two electromagnetic signals are emitted from satellite 1 and, after a complete round trip, reach again the location of the emitting satellite; the signal moving in the direction co-rotating with the Earth takes more time than the other one. This is easily understood in the rest frame of the Earth, since the path of the co-rotating signal is longer than the path of the counter-rotating one; on the other hand, in the rotating frame of the satellites, this time difference is explained in terms of the synchronization gap along a closed path in non-time-orthogonal frames (see e.g. [@RRinRRF]).
We neglect the gravitational deflection of the signals, hence we assume that they propagate along straight lines, with impact parameter $b=r_{geo}/2.$
The special relativistic contribution {#ssec:SR}
-------------------------------------
It is possible to apply the Stokes theorem to the line integral (\[eq:sagnac1\]); to this end, we define the vector field $\vec h$ such that $h_{i} =\frac{g_{0i}}{g_{00}}$ (see e.g. [@Ruggiero:2014aya]). The Stokes theorem states that $$\oint_{L} \vec h \cdot d\vec x = \int_{S} \left(\vec \nabla \wedge \vec h\right) \cdot d\vec S \label{eq:stokes1}$$where $\vec S$ is the area vector of the surface $S$ enclosed by the contour line $L$. In our case, the surface $S$ is in the geostationary orbits plane and, as a consequence, the vector $\vec S$ is parallel to the rotation axis of the Earth.
If we apply the above result to the SR contribution, since $h_{\phi}\simeq \omega_{E}r/c^2$, we get $\vec \nabla \wedge \vec h = 2 \vec \omega_{E}/c^2$, where $\vec \omega_{E}$ is the angular velocity vector of the Earth and it is a constant vector. As a consequence, we obtain $$\Delta t^{SR}= 2 \oint_{L} \vec h \cdot d\vec x = 2\int_{S} \left(\vec \nabla \wedge \vec h\right) \cdot d\vec S = \frac{4}{c^{2}}\, \vec \omega_{E} \cdot \vec S \label{eq:SR1}$$
The general relativistic contribution {#ssec:GR}
-------------------------------------
![Straight line: $b=|\overline{OH}|$ is the closest approach distance, and $\phi_{H}$ is the polar angle of the closest approach point $H$.[]{data-label="fig:retta_pol"}](retta_pol.eps)
In order to calculate the GR time delay for light propagating along the sides of a triangle or, more generally, a polygon, we use polar coordinates $(r,\phi)$ in the equatorial plane. Let $O$ be the origin of the polar coordinate system; then we may write the straight line equation in the form $$r(\phi)= \frac{b}{\cos \left(\phi-\phi_{H}\right)} \label{eq:retta_pol}$$where $b=|\overline{OH}|$ is the closest approach distance to the origin, $\phi_{H}$ is the polar angle of the closest approach point $H$ (see Figure \[fig:retta\_pol\]) and $\displaystyle -\pi/2 < \phi-{\phi_{H}} <\pi/2$.
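The polar form (\[eq:retta\_pol\]) can be checked numerically: for any straight segment, $r\cos(\phi-\phi_{H})$ is the projection of the position vector on the direction of the closest approach point, and is therefore constant along the line. A sketch with arbitrary endpoints of our choosing:

```python
import math

# a generic straight segment between two points P1, P2 (arbitrary numbers)
P1, P2 = (3.0, 1.0), (-1.0, 4.0)

# foot of the perpendicular H dropped from the origin onto the line P1-P2
dx, dy = P2[0]-P1[0], P2[1]-P1[1]
t = -(P1[0]*dx + P1[1]*dy) / (dx*dx + dy*dy)
H = (P1[0] + t*dx, P1[1] + t*dy)
b = math.hypot(*H)                 # closest-approach distance |OH|
phi_H = math.atan2(H[1], H[0])     # polar angle of H

# every point of the line satisfies r cos(phi - phi_H) = b
for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    px, py = P1[0] + s*dx, P1[1] + s*dy
    r, phi = math.hypot(px, py), math.atan2(py, px)
    assert abs(r * math.cos(phi - phi_H) - b) < 1e-12
print(f"b = {b:.6f}, phi_H = {phi_H:.6f}")
```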
![A triangle with vertices $P_{1}, P_{2}, P_{3}$; $H_{12}, H_{23}, H_{31}$ are the closest approach points, and we set $b_{12}=|\overline{OH}_{12}|$, $b_{23}=|\overline{OH}_{23}|$, $b_{31}=|\overline{OH}_{31}|$ for the corresponding distances from the origin $O$.[]{data-label="fig:tria"}](tria.eps)
Consider for instance the triangle with vertices $P_{1}, P_{2}, P_{3}$ described in Figure \[fig:tria\]; we suppose to know the polar coordinates $(r_{1},\phi_{1})$, $(r_{2},\phi_{2})$, $(r_{3},\phi_{3})$ of the vertices and the polar coordinates $(b_{12},\phi_{12})$, $(b_{23},\phi_{23})$, $(b_{31},\phi_{31})$ of the closest approach points along the straight lines.
For calculating the time delay for light propagating along the sides of the triangle, we proceed as follows, starting from the general expression $\displaystyle \Delta t^{GR}=-\frac{2}{c^2} \oint_{L} g^{GR}_{0i}\, dx^{i}$ with $\displaystyle g^{GR}_{0i} \rightarrow g^{GR}_{0\phi}=-\frac{2GJ_{E}}{c^2r}$, where $r$ is the distance from the source of the gravitomagnetic field, which is supposed to be located at $O$. We may write: $$\Delta t^{GR} = -\frac{2}{c^{2}}\left[\int_{P_{1}}^{P_{2}} g^{GR}_{0i}\,dx^{i}+\int_{P_{2}}^{P_{3}} g^{GR}_{0i}\,dx^{i}+\int_{P_{3}}^{P_{1}} g^{GR}_{0i}\,dx^{i}\right] \label{eq:timedelay2}$$For instance, the first integral in (\[eq:timedelay2\]) turns out to be $$\int_{P_{1}}^{P_{2}} g^{GR}_{0i}\,dx^{i}= \int_{\phi_{1}}^{\phi_{2}} g^{GR}_{0\phi}\,d\phi=-\frac{2GJ_{E}}{c^{2}} \int_{\phi_{1}}^{\phi_{2}} \frac{d\phi}{r}= -\frac{2GJ_{E}}{c^{2}b_{12}}\left[ \sin \left( \phi_{2}-\phi_{12}\right)-\sin \left( \phi_{1}-\phi_{12}\right)\right] \label{eq:int1}$$Notice that the difference between the sine functions can be written as $$\sin \left( \phi_{2}-\phi_{12}\right)-\sin \left( \phi_{1}-\phi_{12}\right)=2 \cos \left( \frac{\phi_{1}+\phi_{2}}{2}-\phi_{12} \right)\sin \left( \frac{\phi_{2}-\phi_{1}}{2}\right) \label{eq:pfrs}$$On setting $\displaystyle \Delta_{12} \doteq \frac{\phi_{2}-\phi_{1}}{2}$, Eq. (\[eq:int1\]) can be written as $$\int_{P_{1}}^{P_{2}} g^{GR}_{0i}\,dx^{i} = -\frac{4GJ_{E}}{c^{2}b_{12}}\cos \left( \frac{\phi_{1}+\phi_{2}}{2}-\phi_{12}\right)\sin \Delta_{12}$$
As a consequence, the time delay (\[eq:timedelay2\]) becomes $$\Delta t^{GR}= \frac{8GJ_{E}}{c^{4}}\left[\frac{\cos\left(\bar\phi_{12}-\phi_{12}\right)\sin\Delta_{12}}{b_{12}}+\frac{\cos\left(\bar\phi_{23}-\phi_{23}\right)\sin\Delta_{23}}{b_{23}}+\frac{\cos\left(\bar\phi_{31}-\phi_{31}\right)\sin\Delta_{31}}{b_{31}}\right]$$ \[eq:timedelay3\] The above results can be generalised to an arbitrary polygon with $n$ vertices, to obtain $$\Delta t^{GR}= \frac{8GJ_{E}}{c^{4}}\sum_{i=1}^{n}\frac{\cos\left(\bar\phi_{i,i+1}-\phi_{i,i+1}\right)\sin\Delta_{i,i+1}}{b_{i,i+1}}$$ \[eq:timedelay4\] where the index $i+1$ is understood modulo $n$.
If we confine ourselves to considering an equilateral triangle, by symmetry the three contributions in eq. (\[eq:timedelay3\]) are equal, so, on setting $b=b_{12}=b_{23}=b_{31}$, we may write $$\Delta t^{GR}= \frac{24GJ_{E}}{c^{4}b}\cos\left(\bar\phi-\phi_{H}\right)\sin\Delta$$ \[eq:timedelay5\] It is $\Delta=\pi/3$ and, again by symmetry, $\bar\phi=\phi_{H}$ for each side, since the closest approach point bisects it. Accordingly, we obtain $$\Delta t^{GR}=12\sqrt{3}\,\frac{GJ_{E}}{c^{4}b}$$ \[eq:timedelay6\]
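The closed-form result $\Delta t^{GR}=12\sqrt{3}\,GJ_{E}/(c^{4}b)$ for the equilateral triangle can be cross-checked by integrating $\oint d\phi/r$ numerically along the sides. The sketch below works in units where $GJ_{E}/c^{4}=1$, for a triangle inscribed in the unit circle (so $b=1/2$); the parametrization and step count are illustrative choices.

```python
import numpy as np

def loop_integral(verts, n_steps=200000):
    """Trapezoid-rule evaluation of the loop integral of dphi/r along the
    polygon sides, using dphi = (x dy - y dx)/r^2 so the integrand is
    (x y' - y x')/r^3 along each straight side."""
    total = 0.0
    for i in range(len(verts)):
        p1, p2 = verts[i], verts[(i + 1) % len(verts)]
        t = np.linspace(0.0, 1.0, n_steps + 1)
        x = p1[0] + t * (p2[0] - p1[0])
        y = p1[1] + t * (p2[1] - p1[1])
        dxdt, dydt = p2[0] - p1[0], p2[1] - p1[1]
        f = (x * dydt - y * dxdt) / (x * x + y * y) ** 1.5
        dt = t[1] - t[0]
        total += dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
    return total

# counter-clockwise equilateral triangle inscribed in the unit circle
angles = np.deg2rad([90.0, 210.0, 330.0])
verts = np.stack([np.cos(angles), np.sin(angles)], axis=1)

delta_t_numeric = 4.0 * loop_integral(verts)   # Delta t = (4 G J_E/c^4) loop of dphi/r
delta_t_closed = 12.0 * np.sqrt(3.0) / 0.5     # 12*sqrt(3)/b with b = r/2, r = 1
print(delta_t_numeric, delta_t_closed)         # both ~ 41.569
```

The two numbers agree to the quadrature accuracy, confirming the sign and the factor $12\sqrt{3}$.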
In particular, since $b=r_{geo}/2$, we obtain $\displaystyle \Delta t^{GR}=24 \sqrt 3 \frac{GJ_{E}}{c^{4}r_{geo}}$.
Discussion {#sec:disconc}
==========
We obtained the expression of the total time difference $\Delta t= \Delta t^{SR}+\Delta t^{GR}$, so that we can give numerical estimates of the two terms. As for the SR contribution, we obtain $\displaystyle \Delta t^{SR}=\frac{\sqrt 3}{4} \frac{1}{c^{2}}r^{2}_{geo} \omega_{E} \simeq 6.2 \times 10^{-7}$ s. On the other hand, in order to evaluate the GR contribution, we model the Earth as a rotating rigid sphere to evaluate its angular momentum: even if this is an oversimplified model, it is sufficient to estimate the order of magnitude of the contribution. Accordingly, we get $\displaystyle \Delta t^{GR}=24\sqrt 3\frac{GJ_{E}}{c^{4}r_{geo}} \simeq 5.2 \times 10^{-17}$ s. Since we used a toy model to calculate the time delay, the estimates are meant to be evaluations of the order of magnitude; indeed, different geometric configurations would give different coefficients in the above formulae. However, we could say that $\displaystyle \Delta t^{SR} \sim \frac{1}{c^{2}}r^{2}_{geo} \omega_{E} $ and $\displaystyle \Delta t^{GR} \sim \frac{GJ_{E}}{c^{4}r_{geo}}$. It is worth mentioning that, on the experimental side, the above numbers acquire different weight according to the type of electromagnetic waves we would be able to employ, because of their different periods: if we could use light, $\Delta t^{GR}$ would be in the order of a hundredth of a typical period, well within the range of interference measurements, even though we should be able to measure the SR contribution with an accuracy of at least one part in $10^{10}$ in order to discriminate its contribution from that of GR.
We see that, in any case, the gravitomagnetic effect is expected to be very small in the terrestrial gravitational field. However, remember that these are time differences after *one* complete round trip of the two signals; the Euclidean distance travelled by each signal is $L= 3\sqrt{3}r_{geo}$, which corresponds to a propagation time of $t_{L} \simeq 0.73 $ s. For comparison, in this time each geostationary satellite travels about 2 kilometers. As a consequence, in one day there could be about $10^{5}$ round trips, so that the overall effect would be increased by the corresponding factor. One approach to measure the effect could be to consider a series of round trips; however, in doing so, both the SR and the GR effect will increase and, to measure the GR effect, it is important to accurately model the dominant SR effect.
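The estimates above can be reproduced in a few lines. The sketch below takes the Earth's angular momentum to be that of a uniform rigid sphere, $J_{E}=(2/5)M_{E}R_{E}^{2}\omega_{E}$, as assumed in the text; this yields $\Delta t^{GR}\approx 6\times 10^{-17}$ s, of the same order as the quoted $5.2\times 10^{-17}$ s (which presumably uses a slightly different value of $J_{E}$). The input constants are standard SI values, not taken from the paper.

```python
import math

# SI inputs (standard values; illustrative, not quoted from the paper)
G, c = 6.674e-11, 2.998e8
M_E, R_E = 5.972e24, 6.371e6
omega_E = 2.0 * math.pi / 86164.0              # sidereal rotation rate, rad/s
r_geo = 4.2164e7                               # geostationary orbital radius, m

# Uniform-sphere angular momentum, (2/5) M R^2 omega
J_E = 0.4 * M_E * R_E**2 * omega_E

dt_SR = math.sqrt(3.0) / 4.0 * r_geo**2 * omega_E / c**2     # ~ 6.2e-7 s
dt_GR = 24.0 * math.sqrt(3.0) * G * J_E / (c**4 * r_geo)     # ~ 5.8e-17 s here

# One round trip covers L = 3*sqrt(3)*r_geo, i.e. ~0.73 s of light travel time
round_trips_per_day = 86400.0 / (3.0 * math.sqrt(3.0) * r_geo / c)  # ~ 1.2e5
print(dt_SR, dt_GR, round_trips_per_day)
```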
Because of the preliminary character of this proposal, we have used a simplified toy model, with the purpose of emphasising the underlying relativistic physics. To this end, we have not mentioned the perturbations that may arise in a more realistic situation. For instance, we supposed stable geostationary orbits; however, this is not the case because of the influence of the gravitational fields of the Sun and the Moon, the non-sphericity of the Earth, and so on. While these effects are important for observation times on the order of one year, we may guess that they could be negligible for operation times of some days; however, a careful analysis is needed. Similarly, in our model we neglected the gravitational field of other objects in the Solar System, and we assumed perfect sphericity of the Earth, a constant rotation rate and a constant angular momentum: again, the impact of all these elements should be considered and evaluated, taking into account the observation times. Furthermore, in order to evaluate the feasibility of such an experiment, it is important to assess the technical details of signal transmission and detection, which involve, for instance, the characteristics of the relay delay mechanism and the accuracy and stability of clocks (because of the magnitude of the effect, atomic clocks will be needed). Finally, it is important to emphasise that both the SR and GR contributions are obtained as *differences* between propagation times: so, the average effect (over a long enough time-span) of systematic noise and perturbations that are independent of the propagation direction should not influence the result of measurements.
Conclusions {#sec:conc}
===========
In this paper, we have suggested that the gravitomagnetic effect of the Earth can be measured by exploiting the propagation times of electromagnetic signals emitted, transmitted and received by satellites around the Earth. To emphasise the underlying physical idea, we used a toy model, which enabled us to obtain reasonable estimates of the effect. The actual feasibility of the idea needs further analysis, as we have briefly discussed above but, at least in principle, we have shown that the gravitomagnetic effect of the rotating Earth is not far from the range of measurements of satellites equipped with accurate clocks.
[200]{}
M. L. Ruggiero and A. Tartaglia, Nuovo Cim. B [**117**]{} (2002) 743 \[gr-qc/0207065\].
I. Ciufolini and E. C. Pavlis, Nature [**431**]{} (2004) 958. I. Ciufolini et al., Gravitomagnetism and Its Measurement with Laser Ranging to the LAGEOS Satellites and GRACE Earth Gravity Models, in General Relativity and John Archibald Wheeler, Astrophysics and Space Science Library 367 (Springer, The Netherlands) (2010). L. Iorio, Class. Quant. Grav. **23** (2006) 5451.
L. Iorio, Central European Journal of Physics **8** (2010) 509.
L. Iorio, Solar Physics **281** (2012) 815.
I. Ciufolini, A. Paolozzi, E. Pavlis, J. Ries, V. Gurzadyan, R. Koenig, R. Matzner and R. Penrose [*et al.*]{}, Eur. Phys. J. Plus [**127**]{} (2012) 133 \[arXiv:1211.1374 \[gr-qc\]\].
I. Ciufolini [*et al.*]{}, Eur. Phys. J. C [**76**]{} (2016) no.3, 120 doi:10.1140/epjc/s10052-016-3961-8 \[arXiv:1603.09674 \[gr-qc\]\]. L. Iorio, Eur. Phys. J. C [**77**]{} (2017) no.2, 73 doi:10.1140/epjc/s10052-017-4607-1 \[arXiv:1701.06474 \[gr-qc\]\].
I. Ciufolini *et al.*, Eur. Phys. J. C [**78**]{} (2018) no.11, 880 doi:10.1140/epjc/s10052-018-6303-1
C. W. F. Everitt, D. B. DeBra, B. W. Parkinson, J. P. Turneaure, J. W. Conklin, M. I. Heifetz, G. M. Keiser and A. S. Silbergleit [*et al.*]{}, Phys. Rev. Lett. [**106**]{} (2011) 221101 \[arXiv:1105.3456 \[gr-qc\]\]. G.E. Pugh, Proposal for a Satellite Test of the Coriolis Prediction of General Relativity. Weapons Systems Evaluation Group. Research Memorandum No. 11; The Pentagon, Washington DC (1959).
L.I. Schiff, Phys. Rev. Lett. **4** (1960) 215.
A. Tartaglia, D. Lucchesi, M. L. Ruggiero and P. Valko, Gen. Rel. Grav. [**50**]{} (2018) no.1, 9 doi:10.1007/s10714-017-2332-6 \[arXiv:1701.08217 \[gr-qc\]\]. A. Tartaglia, M. L. Ruggiero and E. Capolongo, Adv. Space Res. [**47**]{} (2011) 645 doi:10.1016/j.asr.2010.10.023 \[arXiv:1001.1068 \[gr-qc\]\]. F. Bosi, G. Cella, A. Di Virgilio, A. Ortolan, A. Porzio, S. Solimeno, M. Cerdonio and J. P. Zendri [*et al.*]{}, Phys. Rev. D [**84**]{} (2011) 122002 \[arXiv:1106.5072 \[gr-qc\]\]. [ M.L. Ruggiero, Galaxies **2015** (2015) 84-102]{}
A. Di Virgilio, M. Allegrini, A. Beghi, J. Belfi, N. Beverini, F. Bosi, B. Bouhadef and M. Calamai [*et al.*]{}, Comptes rendus - Physique [**15**]{} (2014) 866 \[arXiv:1412.6901 \[gr-qc\]\]. A. Tartaglia, A. Di Virgilio, J. Belfi, N. Beverini and M. L. Ruggiero, Eur. Phys. J. Plus [**132**]{} (2017) no.2, 73 doi:10.1140/epjp/i2017-11372-5 \[arXiv:1612.09099 \[gr-qc\]\]. J.L. Synge, Relativity: The General Theory (North Holland, Amsterdam) (1964)
M. L. Ruggiero and A. Tartaglia, Eur. Phys. J. Plus [**129**]{} (2014) 126 doi:10.1140/epjp/i2014-14126-y \[arXiv:1403.6341 \[gr-qc\]\]. G. Rizzi and M.L. Ruggiero, Chapter 10 in *Relativity in Rotating Frames*, eds. G. Rizzi and M.L. Ruggiero, in the series “Fundamental Theories of Physics", (Kluwer Academic Publishers, Dordrecht) (2003)
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We report an improved low-energy extrapolation of the cross section for the process $^7\mathrm{Be}(p,\gamma)^8\mathrm{B}$, which determines the $^8$B neutrino flux from the Sun. Our extrapolant is derived from Halo Effective Field Theory (EFT) at next-to-leading order. We apply Bayesian methods to determine the EFT parameters and the low-energy $S$-factor, using measured cross sections and scattering lengths as inputs. Asymptotic normalization coefficients of $^8$B are tightly constrained by existing radiative capture data, and contributions to the cross section beyond external direct capture are detected in the data at $E < 0.5$ MeV. Most importantly, the $S$-factor at zero energy is constrained to be $S(0)= 21.3\pm 0.7$ eV b, which is an uncertainty smaller by a factor of two than previously recommended. That recommendation was based on the full range for $S(0)$ obtained among a discrete set of models judged to be reasonable. In contrast, Halo EFT subsumes all models into a controlled low-energy approximant, where they are characterized by nine parameters at next-to-leading order. These are fit to data, and marginalized over via Monte Carlo integration to produce the improved prediction for $S(E)$.'
author:
- Xilin Zhang
- 'Kenneth M. Nollett'
- 'D. R. Phillips'
bibliography:
- 'nuclear\_reaction.bib'
date: July 2015
title: 'Halo effective field theory constrains the solar ${}^7{\rm Be} + p \rightarrow {}^8{\rm B} + \gamma$ rate'
---
[*Introduction—*]{} A persistent challenge in modeling the Sun and other stars is the need for nuclear cross sections at very low energies [@Adelberger:2010qa; @rolfs-rodney]. Recent years have seen a few measurements at or near the crucial “Gamow peak” energy range for the Sun [@luna; @Adelberger:2010qa], but cross sections at these energies are so small that data almost always lie at higher energies, where experimental count rates are larger. The bulk of the data must be extrapolated to the energies of stellar interiors using nuclear reaction models.
The models available for extrapolation also have limitations. Qualitatively correct models of nonresonant radiative capture reactions, with reacting nuclei treated as interacting particles, have been available since the mid-1960s [@christyduck]. However, these models suffer from weak input constraints and dependence on *ad hoc* assumptions like the shapes of potentials. Developing models with realistically interacting nucleons as their fundamental degrees of freedom is currently a priority for the theoretical community, but progress is slow, and models remain incomplete [@Descouvemont:2004hh; @Navratil:2011sa]. [*Ab initio*]{} calculations employing modern nuclear forces may yield tight constraints in the future.
For the $^7\mathrm{Be}(p,\gamma)^8\mathrm{B}$ reaction – which determines the detected flux of $^8$B decay neutrinos from the Sun – the precision of the astrophysical $S$-factor at solar energies ($\sim
20$ keV) is limited by extrapolation from laboratory energies of typically 0.1–0.5 MeV. A recent evaluation [@Adelberger:2010qa] found the low-energy limit $S(0) = 20.8 \pm 0.7\pm 1.4$ eV b, with the first error reflecting the uncertainties of the measurements. The second accounts for uncertainties in extrapolating those data. It was chosen to cover the full variation among a few extrapolation models thought to be plausible. Since the differences among $S(E)$ shapes for different models were neither well-understood nor represented by continuous parameters, no goodness-of-fit test was used for model selection.
Halo EFT [@vanKolck:1998bw; @Kaplan:1998tg; @Kaplan:1998we; @Bertulani:2002sz; @Bedaque:2003wa; @Hammer:2011ye; @Rupak:2011nk; @Canham:2008jd; @Higa:2008dn; @Ryberg:2013iga], provides a simple, transparent, and systematic way to organize the reaction theory needed for the low-energy extrapolation. The $^7\mathrm{Be}+p$ system is modeled as two interacting particles and described by a Lagrangian expanded in powers of their relative momentum, which is small compared with other momentum scales in the problem. The point-Coulomb part of the interaction can be treated exactly, and the form of the strong interaction is fully determined by the order at which the Lagrangian is truncated [@Ryberg:2013iga; @Zhang:2014zsa; @Zhang:2015; @Ryberg:2014exa]. The coupling constants of the Lagrangian are determined by matching to experiment. This is similar in spirit and in many quantitative details to traditional potential model or $R$-matrix approaches. However, it avoids some arbitrary choices (like Woods-Saxon shapes or matching radii) of these models, is organized explicitly as a low-momentum power series, and allows quantitative estimates of the error arising from model truncation.
The low-energy $S$-factor for $^7\mathrm{Be}(p,\gamma)^8\mathrm{B}$ consists entirely of electric-dipole ($E1$) capture from $s$- and $d$-wave initial states to $p$-wave final states (which dominate $^7\mathrm{Be}+p$ configurations within $^8$B). All models are dominated by “external direct capture,” the part of the $E1$ matrix element arising in the tails of the wave function (out to 100 fm and beyond) [@christyduck; @jennings98]. Models differ in how they combine the tails of the final state with phase shift information and in how they model the non-negligible contribution from short-range, non-asymptotic regions of the wave functions.
Halo EFT includes these mechanisms, and can describe $S(E)$ over the low-energy region (LER) at $E < 0.5$ MeV. Beyond 0.5 MeV, higher-order terms could be important, and resonances unrelated to the $S$-factor in the Gamow peak appear. Compared with a potential model, the EFT has about twice as many adjusted parameters, too many to determine uniquely with existing data. However, calculations of the solar neutrino flux do not require that all parameters be known: it is enough to determine $S(18\pm 6~\mathrm{keV})$. We fit the amplitudes of recently computed next-to-leading-order (NLO) terms [@Zhang:2015] in $E1$ capture to the experimental $S(E)$ data in the LER. We then use Bayesian methods to propagate the (theory and experimental) uncertainties and obtain a rather precise result for $S(20~\mathrm{keV})$.
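The quoted Gamow-peak energy can be checked with the standard narrow-peak formulas (e.g. Rolfs & Rodney), $E_0 = 1.22\,(Z_1^2 Z_2^2 \mu T_6^2)^{1/3}$ keV and full width $\Delta = 0.749\,(Z_1^2 Z_2^2 \mu T_6^5)^{1/6}$ keV, with the reduced mass $\mu$ in amu and $T_6$ the temperature in units of $10^6$ K. These formulas and the solar-core temperature below are illustrative inputs, not taken from the paper.

```python
import math

# 7Be + p: charges and reduced mass (mass numbers are sufficient here)
Z1, Z2 = 4, 1
mu = 7.0 * 1.0 / (7.0 + 1.0)     # amu
T6 = 16.0                        # ~1.6e7 K, solar core

E0 = 1.22 * (Z1**2 * Z2**2 * mu * T6**2) ** (1.0 / 3.0)        # keV
width = 0.749 * (Z1**2 * Z2**2 * mu * T6**5) ** (1.0 / 6.0)    # keV
print(f"E0 = {E0:.1f} keV, full width = {width:.1f} keV")
```

The result, $E_0\approx 18.7$ keV with a full width of $\approx 12$ keV, is consistent with the $18\pm 6$ keV window quoted above.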
[*EFT at NLO—*]{} The EFT amplitude for $E1$ capture is organized in an expansion in the ratio of low-momentum and high-momentum scales, $k/\Lambda$. $\Lambda$ is set by the ${{}^{7}\mathrm{Be}}$ binding energy relative to the ${}^3\mathrm{He}+{}^4\mathrm{He}$ threshold, $1.59$ MeV, so $\Lambda \approx 70$ MeV, corresponding to a co-ordinate space cutoff of $\approx 3$ fm. Physics at distances shorter than this is subsumed into contact operators in the Lagrangian. The ${{}^{8}\mathrm{B}}$ ground state, which is 0.1364(10) MeV below the ${{}^{7}\mathrm{Be}}$-p scattering continuum [@AME2012I; @AME2012II], is a shallow p-wave bound state in our EFT: it is bound by contact operators but the wave-function tail should be accurately represented. To ensure this we also include the $J^\pi=\frac{1}{2}^-$ bound excited state of ${{}^{7}\mathrm{Be}}$ in the theory. ${{}^{7}\mathrm{Be}}^*$ is 0.4291 MeV above the ground state; the configuration containing it and the proton is significant in the ${{}^{8}\mathrm{B}}$ ground state [@Zhang:2014zsa]. The large ($\sim 10$ fm) ${{}^{7}\mathrm{Be}}$-p scattering lengths play a key role in the low-energy dynamics; s-wave rescattering in the incoming channels must be accurately described. This also requires that the Coulomb potential be iterated to all orders when computing the scattering and bound state wave functions [@Zhang:2014zsa; @Ryberg:2014exa]. Indeed $Z_{{{}^{7}\mathrm{Be}}} Z_p \alpha_{em} M_{p} \approx k_C = 27$ MeV while the binding momentum of ${{}^{8}\mathrm{B}}$ is 15 MeV, so these low-momentum scales are well separated from $\Lambda$. We generically denote them by $k$, and anticipate that $k/\Lambda \approx 0.2$. 
Since the EFT incorporates all dynamics at momentum scales $< \Lambda$ its radius of convergence is larger than other efforts at systematic expansions of this $S$-factor [@WilliamsKoonin81; @Baye:2000ig; @Baye:2000gi; @Baye:2005; @Jennings:1998qm; @Jennings:1998ky; @Jennings:1999in; @Cyburt:2004jp; @Mukhamedzhanov:2002du].
The leading-order (LO) amplitude includes only external direct capture. As the ${{}^{7}\mathrm{Be}}$ ground state is $\frac{3}{2}^-$ there are two possible total spin channels, denoted here by $s=1,2$. They correspond, respectively, to ${{}^{3}S_{1}}$ and ${{}^{5}S_{2}}$ components in the incoming scattering state, and ${{}^{3}P_{2}}$ and ${{}^{5}P_{2}}$ configurations in ${{}^{8}\mathrm{B}}$. The parameters that appear at LO are the two asymptotic normalization coefficients (ANCs), $C_{s}$, for the ${{}^{7}\mathrm{Be}}$-p configuration in ${{}^{8}\mathrm{B}}$ in each of the spin channels, together with the corresponding s-wave scattering lengths, $a_{s}$ [@Zhang:2013kja; @Zhang:2014zsa; @Ryberg:2014exa]. The NLO result for $S(E)$, full details of which will be given elsewhere [@Zhang:2015], can be written as:
$$\begin{aligned}
S(E)&=&f(E) \sum_{s} C_{s}^2
\bigg[ \big\vert \mathcal{S}_\mathrm{EC} \left(E;\delta_s(E)\right)
+ \overline{L}_{s} \mathcal{S}_\mathrm{SD} \left(E;\delta_s(E)\right)
+ \epsilon_{s} \mathcal{S}_\mathrm{CX}\left(E;\delta_s(E)\right) \big\vert^2
+|\mathcal{D}_\mathrm{EC}(E)|^2 \bigg] \ . \end{aligned}$$
Here, $f(E)$ is an overall normalization composed of final-state phase space over incoming flux ratio, dipole radiation coupling strength, and a factor related to Coulomb-barrier penetration [@Zhang:2014zsa]. $\mathcal{S}_\mathrm{EC}$ is proportional to the spin-$s$ $E1$ [@Walkecka1995book; @Zhang:2014zsa; @Zhang:2013kja] external direct-capture matrix element between continuum ${{}^{7}\mathrm{Be}}$–p s-wave and ${{}^{8}\mathrm{B}}$ ground-state wave functions. $\mathcal{S}_\mathrm{CX}$ is the contribution from capture with core excitation, i.e. into the ${{}^{7}\mathrm{Be}}^*$-p component of the ground state. Its strength is parameterized by $\epsilon_s$. Since ${{}^{7}\mathrm{Be}}^*$ is spin-half this component only occurs for $s=1$, so $\epsilon_2=0$. Because the inelasticity in ${{}^{7}\mathrm{Be}}$-p s-wave scattering is small [@Navratil:2010jn; @Navratil:2011sa] it is an NLO effect.
Short-distance contributions, $\mathcal{S}_\mathrm{SD}$, are also NLO. They originate from NLO contact terms in the EFT Lagrangian [@Zhang:2015] and account for corrections to the LO result arising from the $E1$ transition at distances $\lsim 3$ fm. The size of these is set by the parameters $\overline{L}_s$, which must be fit to data. $\mathcal{S}_\mathrm{EC}$, $\mathcal{S}_\mathrm{SD}$, and $\mathcal{S}_\mathrm{CX}$ are each functions of energy, $E$, but initial-state interactions mean they also depend on the s-wave phase shifts $\delta_s$. At NLO we parametrize $\delta_s(E)$ by the Coulomb-modified effective-range expansion up to second order in $k^2$, i.e., we include the term proportional to $r_s k^2$, with $r_s$ the effective range (see supplemental material) [@Higa:2008dn; @Koenig:2012bv; @GoldbergerQM]. Finally, $\mathcal{D}_\mathrm{EC}$ is the $E1$ matrix element between the d-wave scattering state and the ${{}^{8}\mathrm{B}}$ bound-state wave function. It is not affected by initial-state interactions up to NLO, and hence is the same for $s=1,\,2$ channels and introduces no new parameters. This leaves us with $9$ parameters in all: $C_{1,2}^2$, $a_{1,2}$ at LO and five more at NLO: $r_{1,2}$, $\overline{L}_{1,2}$, and $\epsilon_1$ [@Zhang:2015].
[*Data—*]{} The 42 data points in our analysis come from all modern experiments with more than one data point for the direct-capture $S$-factor in the LER: Junghans [*et.al.,* ]{} (experiments “BE1” and “BE3”) [@Junghans:2010zz], Filippone [*et.al.,*]{} [@Filippone:1984us], Baby [*et.al.,*]{} [@Baby:2002hj; @Baby:2002ju], and Hammache [*et.al.,*]{} (two measurements published in 1998 and 2001) [@Hammache:1997rz; @Hammache:2001tg]. Ref. [@Adelberger:2010qa] summarizes these experiments, and the common-mode errors (CMEs) we assign are given in the supplemental material. All data are for energies above $0.1$ MeV. We subtracted the $M1$ contribution of the ${{}^{8}\mathrm{B}}$ $1^{+}$ resonance from the data using the resonance parameters of Ref. [@Filippone:1984us]. This has negligible impact for $E \leq 0.5$ MeV, so we retain only points in this region, thus eliminating the resonance’s effects.
[*Bayesian analysis—*]{} To extrapolate $S(E)$ we must use these data to constrain the EFT parameters. We compute the posterior probability distribution function (PDF) of the parameter vector ${\boldsymbol}{g}$ given data, $D$, our theory, $T$, and prior information, $I$. To account for the common-mode errors in the data we introduce data-normalization corrections, $\xi_i$. We then employ Bayes’ theorem to write the desired PDF as: $${\rm pr} \left({\boldsymbol}{g},\{\xi_i\} \vert D;T; I \right)
=
{\rm pr} \left(D \vert {\boldsymbol}{g},\{\xi_i\};T; I \right) {\rm pr} \left({\boldsymbol}{g},\{\xi_i\} \vert I \right) , \label{eqn:bayesian1}$$ with the first factor proportional to the likelihood: $$\ln {\rm pr} \left(D \vert {\boldsymbol}{g},\{\xi_i\};T;I \right) = c - \sum_{j=1}^N \frac{\left[ (1-\xi_j)S({\boldsymbol}{g}; E_j)-D_j\right]^2}{2 \sigma_j^{2}},$$ where $S({\boldsymbol}{g};E_j)$ is the NLO EFT $S$-factor at the energy $E_j$ of the $j$th data point $D_j$, whose statistical uncertainty is $\sigma_j$. The constant $c$ ensures ${\rm pr} \left({\boldsymbol}{g},\{\xi_i\} \vert D;T; I \right)$ is normalized. Since the CME affects all data from a particular experiment in a correlated way there are only five parameters $\xi_i$: one for each experiment.
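The likelihood above can be sketched directly in code. The model, data, and experiment labels below are placeholders standing in for the EFT $S$-factor and the five data sets; the key point is that a single normalization correction $\xi$ is shared by all points from one experiment.

```python
import numpy as np

def log_likelihood(S_model, energies, data, sigma, xi, expt_index):
    """Chi-square-style log-likelihood with common-mode corrections.
    xi: one normalization correction per experiment;
    expt_index: maps each data point to the experiment it came from."""
    prediction = (1.0 - xi[expt_index]) * S_model(energies)
    return -0.5 * np.sum(((prediction - data) / sigma) ** 2)

# toy usage: two mock "experiments" and a linear mock S-factor (eV b)
E = np.array([0.15, 0.25, 0.35, 0.45])   # MeV
expt = np.array([0, 0, 1, 1])
S = lambda E: 20.0 - 2.0 * E
D = S(E)                                  # mock data lying exactly on the model
ll = log_likelihood(S, E, D, 0.5 * np.ones(4), np.zeros(2), expt)
print(ll)                                 # -0.0: perfect agreement with xi = 0
```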
In Eq. (\[eqn:bayesian1\]) ${\rm pr}\left( {\boldsymbol}{g}, \{ \xi_i\},\vert
I \right)$ is the prior for ${\boldsymbol}{g}$ and $\{\xi_i\}$. We choose independent Gaussian priors for each data set’s $\xi_i$, all centered at $0$ and with width equal to the assigned CMEs. We also choose Gaussian priors for the s-wave scattering lengths $\left(a_{1},\,
a_2\right)$, with centers at the experimental values of Ref. [@Angulo2003], $\left(25,\, -7\right)$ fm, and widths equal to their errors, $\left(9,\, 3\right)$ fm. All the other EFT parameters are assigned flat priors over ranges that correspond to, or exceed, natural values: $0.001 \leq C^2_{1,2} \leq
1\,\mathrm{fm}^{-1}$, $0\leq r_{1,2} \leq 10\, \mathrm{fm} $ [@Phillips:1996ae; @Wigner:1955zz], $-1\leq \epsilon_1\leq 1$, $-10\,\mathrm{fm}\leq \overline{L}_{1,2} \leq 10\, \mathrm{fm}$. We do, though, restrict the parameter space by the requirement that there is no s-wave resonance in ${{}^{7}\mathrm{Be}}$-p scattering below $0.6$ MeV.
To determine ${\rm pr}\left({\boldsymbol}{g},\{\xi_i\} \vert D;T;I \right)$, we use a Markov chain Monte Carlo algorithm [@SiviaBayesian96] with Metropolis-Hastings sampling [@Metropolis:1953am], generating $2\times 10^4$ uncorrelated samples in the $14$-dimensional (14d) ${\boldsymbol}{g}$ $\bigoplus$ $\{\xi_i\}$ space. Making histograms, e.g., over two parameters $g_1$ and $g_2$, produces the marginalized distribution, in that case: ${\rm pr} \left(g_{1}, g_{2} \vert D;T;I \right)=$ $\int {\rm pr} \left({\boldsymbol}{g},\{\xi_i\} \vert D;T;I \right)\,$ $d\xi_1 \ldots d\xi_5 dg_3 \ldots dg_9$. Similarly, to compute the PDF of a quantity $F({\boldsymbol}{g})$, e.g., $S(E; {\boldsymbol}{g})$, we construct ${\rm pr}\left(\bar{F}\vert D; T; I\right)$ $\equiv$ $\int {\rm pr} \left({\boldsymbol}{g},\{\xi_i\} \vert D;T;I \right)$ $\delta(\bar{F}-F({\boldsymbol}{g})) d\xi_1 \ldots d \xi_5 d{\boldsymbol}{g}$, and histogramming again suffices.
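The sampling-and-histogramming procedure can be illustrated on a toy problem. The target below is a stand-in correlated 2d Gaussian (not the actual 14-dimensional posterior), but the Metropolis accept/reject step and the marginalization-by-histogram are exactly as described above.

```python
import numpy as np

rng = np.random.default_rng(1)
cov_inv = np.linalg.inv(np.array([[1.0, 0.8],
                                  [0.8, 1.0]]))

def log_post(g):
    # toy stand-in for log pr(g | D; T; I): zero-mean correlated Gaussian
    return -0.5 * g @ cov_inv @ g

g = np.zeros(2)
samples = []
for _ in range(40000):
    proposal = g + 0.8 * rng.standard_normal(2)
    # Metropolis-Hastings acceptance test (symmetric proposal)
    if np.log(rng.random()) < log_post(proposal) - log_post(g):
        g = proposal
    samples.append(g.copy())
samples = np.asarray(samples[5000:])        # discard burn-in

# marginalizing over g2 amounts to histogramming the g1 coordinate
pdf_g1, edges = np.histogram(samples[:, 0], bins=40, density=True)
print(np.mean(samples[:, 0]), np.std(samples[:, 0]))   # ~ 0 and ~ 1
```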
[*Constraints on parameters and the S-factor—*]{} The tightest parameter constraint we find is on the sum $C_1^2+C_2^2=0.564(23)~\mathrm{fm}^{-1}$, which sets the overall scale of $S(E)$ [^1]. Fig. \[fig:results1\] shows contours of 68% and 95% probability for the 2d joint PDF of the ANCs. Neither ANC is strongly constrained by itself, but they are strongly anticorrelated; the 1d PDF of $C_1^2+ C_2^2$ is shown in the inset. The ellipses in Fig. \[fig:results1\] show ANCs from an *ab initio* variational Monte Carlo calculation (the smaller ellipse) [@Nollett:2011qf] [^2] and inferred from transfer reactions by Tabacaru *et al.* (larger ellipse) [@Tabacaru:2005hv]. These are also shown as error bars in the inset. The *ab initio* ANCs shown compare well with the present results. (The *ab initio* ANCs of Ref. [@Navratil:2011sa] sum to $0.509~\mathrm{fm}^{-1}$ and appear to be in moderate conflict.) Tabacaru *et al.* recognized that their result was $1\sigma$ to $2\sigma$ below existing analyses of $S$-factor data; a $1.8\sigma$ conflict remains in our analysis. We suggest that for $^8$B the combination of simpler reaction mechanism, fewer assumptions, and more precise cross sections makes the capture reaction a better probe of ANCs than transfer reactions.
![(Color online.) 2d distribution for $C_1^2$ (x-axis) and $C_2^2$ (y-axis). Shading represents the 68% and 95% regions. The small circle and ellipse are the $1\sigma$ contours of an [*ab initio*]{} calculation [@Nollett:2011qf] and empirical results [@Tabacaru:2005hv], with their best values marked as red squares. The inset is the histogram and the corresponding smoothed 1d PDF of the quantity $[C_1^2+C_2^2]\times \mathrm{fm}$; the larger and smaller error bars show the empirical and [*ab initio*]{} values.[]{data-label="fig:results1"}](paper_ANC.pdf)
![(Color online.) 2d distribution for $\epsilon_1$ (x-axis) and $\bar{L}_1$ (y-axis). The shaded area is the 68% region. The inset is the histogram and corresponding smoothed 1d PDF of the quantity $0.33\, \bar{L}_1/\mathrm{fm} - \epsilon_1$.[]{data-label="fig:results2"}](paper_LEps.pdf)
Fig. \[fig:results2\] depicts the 2d distribution of $\bar{L}_1$ and $\epsilon_1$. There is a positive correlation: in $S(E)$ below the ${{}^{7}\mathrm{Be}}$-p inelastic threshold, the effect of core excitation, here parameterized by $\epsilon_1$, can be traded against the short-distance contribution to the spin-1 $E1$ matrix element. The inset shows the $1$d distribution of the quantity $0.33\, \bar{L}_1/\mathrm{fm} - \epsilon_1$, for which there is a slight signal of a non-zero value. In contrast, the data prefers a positive $\bar{L}_2$: its 1d pdf yields a 68% interval $-0.58~{\rm fm} < \bar{L}_2 < 7.94~{\rm fm}$.
We now compute the PDF of $S$ at many energies, and extract each median value (the thin solid blue line in Fig. \[fig:results3\]), and 68% interval (shaded region in Fig. \[fig:results3\]). The PDFs for $S$ at $E=0$ and $20~\mathrm{keV}$ are singled out and shown on the left of the figure: the blue line and histogram are for $E=0$ and the red-dashed line is for $E=20$ keV. We found choices of the EFT-parameter vector ${\boldsymbol}{g}$ (given in the supplemental material) that correspond to natural coefficients, produce curves close to the median $S(E)$ curve of Fig. \[fig:results3\], and have large values of the posterior probability.
![(Color online.) The right panel shows the NLO $S$-factor at different energies, including the median values (solid blue curve). Shading indicates the 68% interval. The dashed line is the LO result. The data used for parameter determination are shown, but have not been rescaled in accord with our fitted $\{\xi_i\}$. They are: Junghans [*et.al.*]{}, BE1 and BE3 [@Junghans:2010zz] (filled black circle and filled grey circle), Filippone [*et.al.,*]{} [@Filippone:1984us] (open circle), Baby [*et.al.,*]{} [@Baby:2002hj; @Baby:2002ju] (filled purple diamond), and Hammache [*et.al.,*]{} [@Hammache:1997rz; @Hammache:2001tg] (filled red box). The left panel shows 1d PDFs for $S(0)$ (blue line and histogram) and $S(20~\mathrm{keV})$ (red-dashed line). []{data-label="fig:results3"}](paper_S.pdf)
$S$ (eV b) $S'/S$ ($\mathrm{MeV}^{-1}$) $S''/S$ ($\mathrm{MeV}^{-2} $)
----------- ----------------- ------------------------------ --------------------------------
Median 21.33 \[20.67\] $-1.82$ \[$-1.34$\] 31.96 \[22.30\]
$+\sigma$ 0.66 \[0.60\] 0.12 \[0.12\] 0.33 \[0.34\]
$-\sigma$ 0.69 \[0.63\] 0.12 \[0.12\] 0.37 \[0.38\]
: The median values of $S$, $S'/S$, and $S''/S$ at $E=0$ keV \[$E=20$ keV\], as well as the upper and lower limits of the (asymmetric) 68% interval. The sampling errors are $0.02\%$, $0.07\%$, $0.01\%$ for median values, as estimated from $\left<X^2-\left<X\right>^2\right>^{1/2}/\sqrt{N}$ with $N=2 \times 10^4$.[]{data-label="tab:SdSddS"}
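As an internal consistency check of the table, a quadratic expansion of $S(E)$ about $E=0$ using the tabulated $S'(0)/S$ and $S''(0)/S$ should roughly reproduce the tabulated value at $20$ keV; the residual difference reflects the higher derivatives left out of the truncation.

```python
# Median values from the E = 0 column of the table
S0 = 21.33            # eV b
Sp_over_S = -1.82     # MeV^-1
Spp_over_S = 31.96    # MeV^-2
E = 0.020             # MeV

# Quadratic Taylor expansion about E = 0
S20 = S0 * (1.0 + Sp_over_S * E + 0.5 * Spp_over_S * E**2)
print(f"{S20:.2f} eV b")   # ~ 20.69, vs the tabulated S(20 keV) = 20.67 eV b
```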
[*$S(20~keV)$ and the thermal reaction rate—*]{}Table \[tab:SdSddS\] compiles median values and 68% intervals for the $S$-factor and its first two derivatives, $S^\prime/S$ and $S^{\prime\prime}/S$, at $E=0$ and $20$ keV. Ref. [@Adelberger:2010qa] recommends $S(0)=20.8\pm 1.6$ eV b (quadrature sum of theory and experimental uncertainties). Our $S(0)$ is consistent with this, but the uncertainty is more than a factor of two smaller. Ref. [@Adelberger:2010qa] also provides effective values of $S^\prime/S=-1.5\pm 0.1~\mathrm{MeV}^{-1}$ and $S^{\prime\prime}/S=11\pm 4~\mathrm{MeV}^{-2}$. These are not literal derivatives but results of quadratic fits to several plausible models over $0 < E < 50~\mathrm{keV}$, useful for applications. Our values are consistent, considering the large higher derivatives (rapidly changing $S^{\prime\prime}$) left out of quadratic fits.
The important quantity for astrophysics is in fact not $S(E)$ but the thermal reaction rate; derivatives of $S(E)$ are used mainly in a customary approximation to the rate integral [@caughlan62; @rolfs-rodney; @Adelberger:2010qa]. By using our $S^\prime$ and $S^{\prime\prime}$ in a Taylor series for $S(E)$ about $20$ keV, then regrouping terms and applying the approximation formula, we find a rate (given numerically in the supplemental material) that differs from numerical integration of our median $S(E)$ by only 0.01% at temperature $T_9 \equiv T /( 10^9~\mathrm{K}) =
0.016$ (characteristic of the Sun), and 1% at $T_9 = 0.1$ (relevant for novae).
[*How accurate is NLO?—*]{}Our improved precision for $S(0)$ is achieved because, by appropriate choices of its nine parameters, NLO Halo EFT can represent all the models whose disagreement constitutes the 1.4 eV b uncertainty quoted in Ref. [@Adelberger:2010qa]—including the microscopic calculation of Ref. [@Descouvemont:2004hh]. Halo EFT matches their $S(E)$ and phase-shift curves with a precision of 1% or better for $E < 0.5$ MeV, and thus spans the space of models of $E1$ capture in the LER [@Zhang:2015].
The LO curve shown in Fig. \[fig:results3\] employs values of $C_1$, $C_2$, $a_1$, and $a_2$ from the NLO fit. It differs from the NLO curve by $<2$% at $E=0$, and by $< 10$% at $E=0.5$ MeV. This rapid convergence suggests that the naive estimate of N2LO effects in the amplitude, $(k/\Lambda)^2\approx 4 \%$, is conservative. And indeed, we added a term with this $k$-dependence to the model, allowing a natural coefficient that was then marginalized over, and found that it shifted the median and error bars from the NLO result by at most $0.2\%$ in the LER. Finally, we estimate that direct $E2$ and $M1$ contributions to $S$ in the LER are less than $0.01\%$, and radiative corrections are around $0.01\%$.
[*Summary—*]{} We used Halo EFT at next-to-leading order to determine precisely the $^7\mathrm{Be}(p,\gamma)^8\mathrm{B}$ $S$-factor at solar energies. Halo EFT connects all low-energy models by a family of continuous parameters, and marginalization over those parameters represents marginalization over all reasonable models of low-energy physics. Many of the individual EFT parameters are poorly determined by existing $S$-factor data, at $E > 0.1$ MeV, but these data constrain their combinations sufficiently that the extrapolated $S(20~\mathrm{keV})$ is determined to 3%. We estimate that the impact of neglected higher-order terms in the EFT on $S(0)$ is an order of magnitude smaller than this.
Extension of the EFT to higher order and inclusion of couplings between s- and d-wave scattering states is not expected to reduce the uncertainty, although it would provide slightly greater generality in matching possible reaction mechanisms. There is, however, no indication in the literature that coupling to $d$-waves is important for $S(E)$ [@Descouvemont:2004hh] in the LER. Our analysis could perhaps be extended to higher energies, but for $E > 0.5$ MeV, accurate representation of $M1$ resonances is at least as important as reliable calculations of the $E1$ transition.
The most significant source of uncertainty in our extrapolant is, in fact, the $1$ keV uncertainty in the ${{}^{8}\mathrm{B}}$ proton-separation energy, which can shift $S(20~\mathrm{keV})$ by approximately $0.75$%. This could be eliminated by better mass measurements. Further significant improvement in $S(20~\mathrm{keV})$ for ${{}^{7}\mathrm{Be}}(p,\gamma){{}^{8}\mathrm{B}}$ requires stronger constraints on EFT parameters. Better determinations of $s$-wave scattering parameters seem to be of limited utility. The ANCs affect the very-low-energy $S$-factor the most, and so more information on them, from either *ab initio* theory or capture/transfer data, would be useful.
A number of other radiative capture processes whose physics parallels ${{}^{7}\mathrm{Be}}(p,\gamma){{}^{8}\mathrm{B}}$ are important in astrophysics. The formalism developed herein should be applicable to many of them.
[*Acknowledgments—*]{} We thank Carl Brune for several useful discussions, and Barry Davids, Pierre Descouvemont, and Stefan Typel for sharing details of their calculations with us. We are grateful to the Institute for Nuclear Theory for support under Program INT-14-1, “Universality in few-body systems: theoretical challenges and new directions", and Workshop INT-15-58W, “Reactions and structure of exotic nuclei". During both we made significant progress on this project. X.Z. and D.R.P. acknowledge support from the US Department of Energy under grant DE-FG02-93ER-40756. K.M.N. acknowledges support from the Institute of Nuclear and Particle Physics at Ohio University, and from U.S. Department of Energy Awards No. DE-SC 0010 300 and No. DE-FG02-09ER41621 at the University of South Carolina.
Supplemental material
=====================
Common-mode errors for experimental data
----------------------------------------
The quoted common-mode errors for Junghans [*et al.*]{}, sets BE1 and BE3, [@Junghans:2010zz], Filippone [*et al.*]{} [@Filippone:1984us], Baby [*et al.*]{} [@Baby:2002hj; @Baby:2002ju], and the Hammache [*et al.*]{} 1998 data set [@Hammache:1997rz] are $2.7\%$, $2.3\%$, $11.25\%$, $5\%$, and $2.2\%$, respectively. The data of Ref. [@Hammache:2001tg] are a measurement of the absolute $S(186~\mathrm{keV})$ and of the ratios $S(135~\mathrm{keV})/S(186~\mathrm{keV})$ and $S(112~\mathrm{keV})/S(186~\mathrm{keV})$. We treat each of these three quantities as one data point, so they do not need a CME.
EFT details
-----------
The modified-effective-range expansion for s-wave ${{}^{7}\mathrm{Be}}$-p scattering is: $$p\left(\cot \delta_s(E) - i\right)\, \frac{2\pi \eta}{e^{2\pi\eta}-1} =-\frac{1}{a_{s}} + \frac{1}{2} r_{s} p^{2} -2{k_{C}}H(\eta).$$ Here $H(\eta) =\psi(i\eta)+{1}/{(2i\eta)}-\ln(i\eta)$, $\eta \equiv k_C/p$, $k_C \equiv Z_{{{}^{7}\mathrm{Be}}} Z_p \alpha_{em} m_R$ with $m_R$ the ${{}^{7}\mathrm{Be}}$-p reduced mass, $p=\sqrt{2 m_R E}$, and $\psi$ the digamma function [@MathHandBook1].
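As a quick numerical check of this expression, $H(\eta)$ can be evaluated directly. The sketch below is illustrative only and is not part of the paper's analysis; it hand-rolls a complex digamma function via the recurrence $\psi(z)=\psi(z+1)-1/z$ and the standard asymptotic series, rather than relying on any particular library's complex support.

```python
import cmath
import math

def digamma(z):
    """Complex digamma psi(z), valid away from the nonpositive real axis:
    apply the recurrence psi(z) = psi(z+1) - 1/z until |z| is large,
    then use the asymptotic (Stirling-type) series."""
    z = complex(z)
    res = 0.0 + 0.0j
    while abs(z) < 6.0:          # push the argument into the asymptotic region
        res -= 1.0 / z
        z += 1.0
    return (res + cmath.log(z) - 1.0 / (2.0 * z)
            - 1.0 / (12.0 * z**2) + 1.0 / (120.0 * z**4) - 1.0 / (252.0 * z**6))

def H(eta):
    """H(eta) = psi(i eta) + 1/(2 i eta) - ln(i eta), as defined in the text."""
    ieta = 1j * eta
    return digamma(ieta) + 1.0 / (2.0 * ieta) - cmath.log(ieta)

# H(eta) -> 0 as eta grows; its real part behaves like 1/(12 eta^2)
print(H(1.0), H(10.0))
```

For real $\eta > 0$ the imaginary part of $H(\eta)$ equals $(\pi/2)(\coth\pi\eta - 1)$, which provides a convenient self-check of the implementation.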
An EFT parameter set that gives a good fit—as mentioned in the main text—is listed in Table \[tab:aEFTfit\].
$C_{({{}^{3}P_{2}})}^{2}$ ($\mathrm{fm}^{-1}$) $a_{({{}^{3}S_{1}})}$ (fm) $r_{({{}^{3}S_{1}})}$ (fm) $\epsilon_{1} $ $ \overline{L}_{1}$ (fm) $C_{({{}^{5}P_{2}})}^{2}$ ($\mathrm{fm}^{-1}$) $a_{({{}^{5}S_{2}})}$ (fm) $r_{({{}^{5}S_{2}})}$ (fm) $ \overline{L}_{2}$ (fm)
------------------------------------------------ ---------------------------- ---------------------------- ----------------- -------------------------- ------------------------------------------------ ---------------------------- ---------------------------- --------------------------
0.2336 24.44 3.774 -0.04022 1.641 0.3269 -7.680 3.713 0.1612
: A representative EFT parameter set that gives a curve almost on the top of the median value curve (solid blue) in Fig. \[fig:results3\]. The LO curve (dashed black) uses the LO parameters listed here, with the strictly NLO parameters set to zero. Because the parameter space is very degenerate, many such parameter sets could be given that have similar $S(E)$ curves but very different parameter values.[]{data-label="tab:aEFTfit"}
Results for $S$-factor and thermal reaction rate
------------------------------------------------
$E$ (MeV) Median (eV b) $-\sigma$ (eV b) $+\sigma$ (eV b)
----------- --------------- ------------------ ------------------
0. 21.33 0.69 0.66
0.01 20.97 0.65 0.63
0.02 20.67 0.63 0.60
0.03 20.42 0.60 0.58
0.04 20.20 0.57 0.55
0.05 20.02 0.55 0.53
0.1 19.46 0.45 0.44
0.2 19.27 0.34 0.34
0.3 19.65 0.32 0.30
0.4 20.32 0.35 0.31
0.5 21.16 0.42 0.41
: The median values and 68% interval bounds for $S$ in the energy range from 0 to 0.5 MeV. At each energy point, the histogram of $S$ is drawn from the Monte-Carlo simulated ensemble and then is used to compute the median and the bounds. []{data-label="tab:s0to0.5MeV"}
The median values and 68% interval bounds for $S$ in 10 keV intervals to 50 keV and then in 100 keV intervals to 500 keV is listed in Table \[tab:s0to0.5MeV\].
Regrouping the Taylor series for $S(E)$ about $20$ keV into a quadratic and applying the approximations of Refs. [@caughlan62; @rolfs-rodney] yields $$N_A \langle \sigma v \rangle=\frac{2.7648 \times10^5}{T_9^{2/3}} \exp\left(\frac{-10.26256}{T_9^{1/3}}\right) \times \left(1 + 0.0406 T_9^{1/3} - 0.5099 T_9^{2/3} - 0.1449 T_9 + 0.9397 T_9^{4/3} + 0.6791 T_9^{5/3}\right),
\label{eq:thermalreactionrate}$$ in units of $\mathrm{cm^3\,s^{-1}\,mol^{-1}}$, where $N_A$ is Avogadro’s number. Up to $T_9=0.6$, the lower and upper limits of the 68% interval for $S(E)$ produce a numerically integrated rate that is $0.969\,(1+0.0576 T_9-0.0593 T_9^2)$ and $1.030\,(1-0.05 T_9 +0.0511 T_9^2)$ times that of Eq. (\[eq:thermalreactionrate\]). At $T_9 \gtrsim 0.7$ energies beyond the LER, and hence resonances, come into play and so these results no longer hold. We know of no astrophysical environment with such high $T_9$ where $^7\mathrm{Be}(p,\gamma)^8\mathrm{B}$ matters.
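For readers who want to evaluate the rate directly, the short Python sketch below implements Eq. (\[eq:thermalreactionrate\]) verbatim (constant and polynomial coefficients copied from the fit; valid only up to $T_9 \approx 0.6$, as stated above).

```python
import math

def rate_7Be_p_gamma(T9):
    """N_A <sigma v> for 7Be(p,gamma)8B in cm^3 s^-1 mol^-1,
    from Eq. (thermalreactionrate); valid for T9 <~ 0.6."""
    poly = (1.0 + 0.0406 * T9**(1 / 3) - 0.5099 * T9**(2 / 3) - 0.1449 * T9
            + 0.9397 * T9**(4 / 3) + 0.6791 * T9**(5 / 3))
    return 2.7648e5 / T9**(2 / 3) * math.exp(-10.26256 / T9**(1 / 3)) * poly

# e.g. the solar-core value at T9 = 0.016 and a novae-like value at T9 = 0.1
print(rate_7Be_p_gamma(0.016), rate_7Be_p_gamma(0.1))
```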
[^1]: The second moments of the MCMC sample distribution imply that $C_1^2+ 0.94 C_2^2$ is best constrained, but we consider $C_1^2+
C_2^2$ for simplicity.
[^2]: We recomputed the sampling errors of Ref. [@Nollett:2011qf] in the basis of good $s$, taking more careful account of correlations between ANCs.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'A growing trend for information technology is to not just react to changes, but anticipate them as much as possible. This paradigm made modern solutions, such as recommendation systems, a ubiquitous presence in today’s digital transactions. Anticipatory networking extends the idea to communication technologies by studying patterns and periodicity in human behavior and network dynamics to optimize network performance. This survey collects and analyzes recent papers leveraging context information to forecast the evolution of network conditions and, in turn, to improve network performance. In particular, we identify the main prediction and optimization tools adopted in this body of work and link them with objectives and constraints of the typical applications and scenarios. Finally, we consider open challenges and research directions to make anticipatory networking part of next generation networks.'
author:
- 'Nicola Bui, Matteo Cesana, S. Amir Hosseini, Qi Liao, Ilaria Malanchini, and Joerg Widmer, [^1]'
bibliography:
- 'main.bib'
title: '[A Survey of Anticipatory Mobile Networking: Context-Based Classification, Prediction Methodologies, and Optimization Techniques]{}'
---
Anticipatory, Prediction, Optimization, 5G, Mobile Networks.
Introduction {#sec:introduction}
============
Evolving from one generation to the next, wireless networks have been constantly increasing their performance in many different ways and for diverse purposes. Among them, communication efficiency has always been paramount to increase the network capabilities without updating the entire infrastructure. This survey investigates anticipatory networking, a recent research direction that supports network optimization through system state prediction.
The core concept of anticipatory networking is that, nowadays, tools exist to make reliable prediction about network status and performance. Moreover, information availability is increasing every day as human behavior is becoming more socially and digitally interconnected. In addition, data centers are becoming more and more important in providing services and tools to access and analyze huge amounts of data.
As a consequence, not only can researchers tailor their solutions to specific places and users, but also they can anticipate the sequence of locations a user is going to visit or to forecast whether connectivity might be worsening, and to exploit the forecast information to take action before the event happens. This enables the possibility to take full advantage of good future conditions (such as getting closer to a base station or entering a less loaded cell) and to mitigate the impact of negative events (e.g., entering a tunnel).
This survey covers a body of recent works on anticipatory networking, which share two common aspects:
- [*Anticipation*]{}: they either explore prediction techniques directly or consider some future knowledge as given.
- [*Networking*]{}: they aim to optimize communications in mobile networks.
In addition, this survey delves into the following questions: How can prediction support wireless networks? Which type of information is possible to predict and which applications can take advantage of it? Which tools are the best for a given scenario or application? Which scenarios, among the ones envisioned for 5G networks, can benefit the most from anticipatory networking? What is yet to be studied in order for anticipatory networking to be implemented in 5G networks?
The main contributions of this survey are the following:
- A thorough [**context-based analysis**]{} of the literature classified according to the information exploited in the predictive framework.
- Two [**handbooks on the prediction and optimization**]{} techniques used in the literature, which allow the reader to get familiar with them and critically assess the different approaches.
- [An analysis of the applicability of anticipatory networking techniques to different [**types of wireless networks**]{} and at different layers of the [**protocol stack**]{}.]{}
- Summaries of all the main parts of the survey, highlighting [**most popular choices and best practices**]{}.
- A final section analyzing [**open challenges and potential issues**]{} to the adoption of anticipatory networking solutions in future generation mobile networks.
Background and Guidelines {#sec:guidelines}
-------------------------
  Context      Prediction                                                                  Optimization
  ------------ --------------------------------------------------------------------------- -----------------------------------------------
  Geographic   *Ideal:* \[31, 42, 43, 45\]                                                 *ConvOpt$^a$:* \[43\]
               *Time series:* \[13, 28, 29, 32, 37, 38, 41\]                               *MPC$^b$/MDP$^c$:* \[24, 26\]
               *Regression, classification:* \[14, 15, 22, 33-35, 44, 46\]                 *Game theory:* \[131\]
               *Probabilistic:* \[11, 12, 16-21, 23-26\]                                   *Heuristic:* \[25, 32, 41, 42, 44-46\]
  Link         *Ideal:* \[56, 57, 65-70, 72-79\]                                           *ConvOpt:* \[64-70, 72-79\]
               *Time series:* \[54, 58, 59, 63\]                                           *MPC/MDP:* \[50, 60, 62, 158\]
               *Regression, classification:* \[47-49, 51, 52, 55, 64\]                     *Game theory:* \[129\]
               *Probabilistic:* \[30, 50, 53, 60-62, 158\]                                 *Heuristic:* \[30, 47, 51, 54, 58, 59, 61, 63\]
  Traffic      *Ideal:* \[95-97, 111, 112, 115, 118, 138\]                                 *ConvOpt:* \[103-107, 111, 118-120, 138\]
               *Time series:* \[100, 108-110, 113, 119, 145, 165\]                         *MPC/MDP:* \[100, 115, 116, 165\]
               *Regression, classification:* \[92-94, 98, 99, 101, 104-107, 114, 117, 156\] *Game theory:* \[117\]
               *Probabilistic:* \[93, 102, 116\]                                           *Heuristic:* \[96-99, 101, 112, 117\]
  Social       *Ideal:* \[121, 124, 137, 140\]                                             *ConvOpt:* \[126, 127, 137, 140, 159\]
               *Time series:* \[40\]                                                      *MPC/MDP:* \[157\]
               *Regression, classification:* \[122, 123, 134, 139, 148, 149, 154\]         *Game theory:* \[128-131, 133\]
               *Probabilistic:* \[125-127, 129, 130, 132, 135, 136, 157, 159\]             *Heuristic:* \[40, 121-125, 132, 148, 149\]

  : Mapping between the context categories (rows) and the prediction and optimization techniques (columns) adopted in the surveyed literature. $^a$ convex optimization; $^b$ model predictive control; $^c$ Markov decision process.[]{data-label="tab:class"}
Anticipatory networking is the engineering branch that focuses on communication solutions that leverage the knowledge of the future evolution of a system to improve its operation. For instance, while a standard networking solution would answer the question *“which is the best user to be served?”*, an anticipatory equivalent would answer *“which are the best users to be served in the next time frames given the predicted evolution of their channel condition and service requirements?”*
A typical anticipatory networking solution is usually characterized by the following three attributes, which also determine the structure of this survey:
- [*Context*]{} defines the type of information considered to forecast the system evolution.
- [*Prediction*]{} specifies how the system evolution is forecast from the current and past context.
- [*Optimization*]{} describes how prediction is exploited to meet the application objectives.
To continue with the access selection example, the anticipatory networking solution might exploit the history of channel state information (the *context*) to train a predictive model (the *prediction*) that forecasts the future positions of the users and their channel conditions, and then solve an optimization problem (the *optimization*) that maximizes their quality of service.
The main body of the anticipatory networking literature can be split into four categories based on the context used to characterize the system state and to determine its evolution: [*geographic*]{}, such as human mobility patterns derived from location-based information; [*link*]{}, such as channel gain, noise and interference levels obtained from reference signal feedback; [*traffic*]{}, such as network load, throughput, and occupied physical resource blocks based on higher-layer performance indicators; [*social*]{}, such as user’s behavior, profile, and information derived from user-generated contents and social networks. In order to determine which techniques are the most suitable to solve a given problem, it is important to analyze the following:
- [*Properties*]{} of the context:\
1) [*Dimension*]{} describes the number of variables predicted by the model, which can be uni- or multivariate.\
2) [*Granularity and precision*]{} define the smallest variation of the parameter considered by the context and the accuracy of the data: the lower the granularity, the higher the precision and vice versa. Temporal and spatial granularities are crucial to strike a balance between efficiency and accuracy.\
3) [*Range*]{} characterizes the distance (usually time or space) between known data samples and the farthest predicted sample. It is also known as prediction (or optimization) horizon.
- [*Constraints*]{} of the prediction or optimization model:\
1) [*Availability of physical model*]{} states whether a closed-form expression exists to describe the phenomenon.\
2) [*Linearity*]{} expresses the quality of the functions linking inputs and outputs of a problem.\
3) [*Side information*]{} determines whether the main context can be supported by auxiliary information.\
4) [*Reliability and validity of information*]{} specifies the noisiness of the data set, depending on which the prediction robustness should be calibrated.
  Topic                 [**Content**]{}
  --------------------- ---------------------------------------------------------------------------------------------------------------------------------------
  Big Data              [@zheng2016big] studies big data analytics for network optimization.
  Context awareness     [@makris2013survey; @pejovic2015anticipatory] discuss acquisition, modeling, exchange and usage of contextual information for different scenarios.
  Data Classification   [@boucheron2005theory] surveys a variety of classifiers and uses them to predict unknown data.
  Traffic prediction    [@liu2015empirical] uses trace-driven simulation to compare prediction errors obtained using different techniques.
                        [@nguyen2008survey] uses real network traffic to evaluate prediction techniques and to discuss their practical challenges.
  Social networks       [@jin2013understanding] uses social network information to study traffic patterns.
  QoE                   [@barakovic2013survey] investigates the impact of prediction on QoE.
  Spectrum              [@hoyhtya2016spectrum] investigates spectrum occupancy models and their reliability.
                        [@chen2016survey] focuses on spectrum occupancy and channel status prediction.

  : Overview of related surveys and their main content.[]{data-label="tab:related"}
The classification section will help the reader to understand the link between the different contexts and the solutions adopted to satisfy the given application requirements. Also, it is meant to provide a complete panorama of anticipatory networking. The two handbooks have the twofold objective of providing the reader with a short overview of the tools adopted in the literature and to analyze them in terms of variables of interest and constraints of the models. [We believe that not only will this survey help researchers studying anticipatory networking, but also it will ease its adoption in future generation networks by providing a comprehensive overview of research directions, available solutions and application scenarios.]{}
Table \[tab:class\] provides a mapping between the techniques described in Section \[sec:prediction\] and \[sec:optimization\] (columns) and the context discussed in Section \[sec:classification\] (rows). Each main category is further split into subcategories according to its internal structure. Namely, the prediction category is subdivided into ideal (perfect prediction is assumed to be available), time series predictive modeling, similarity-based classification and regression analysis, and probabilistic methods. The optimization category is split into convex optimization, model predictive control (MPC) and Markov decision process (MDP) approaches, game theoretic approaches, and heuristic approaches. [The rest of the survey consists of a quick overview of other surveys on related topics in Section \[sec:related\_work\], a context-based classification of the anticipatory networking literature in Section \[sec:classification\], two handbooks on prediction and optimization techniques in Section \[sec:prediction\] and Section \[sec:optimization\], respectively. [Section \[sec:network\] and \[sec:protocol\] discuss how the anticipatory networking paradigm can be applied in a variety of network types and at different layers of the protocol stack.]{} Section \[sec:challenges\] and \[sec:conclusions\] conclude the survey reporting the impact of anticipatory networking on future networks, the envisioned hindrances to its implementation and the open challenges.]{}
Related Work {#sec:related_work}
============
This section discusses a few recent surveys on topics close to anticipatory networking; their content is summarized in Table \[tab:related\].
[Applying big data analytics for network optimization is studied in [@zheng2016big]. Based on the papers they reviewed, the authors propose a generic framework to support big data based optimization of mobile networks. Using traffic patterns derived from case studies, they argue that their framework can be used to optimize resource allocation, base station deployment, and interference coordination in such networks. In [@makris2013survey; @pejovic2015anticipatory], the ability to extract and process contextual information by entities in a network is identified as a key factor in improving network performance. In [@makris2013survey], the procedure of using context information in wireless networks is broken down into acquisition, modeling, exchanging and evaluating stages, where the first two deal with gathering information and predicting the future behavior, and the latter two perform self-optimization and decision making. A similar taxonomy is provided in [@pejovic2015anticipatory] and various examples of different techniques are reviewed for each phase. In addition to that, the authors provide a thorough survey on potential use cases of anticipatory networks and their respective challenges.]{}
[Predicting future states of network attributes is an essential task in designing anticipatory networks. Data classification, a popular prediction technique, has been thoroughly surveyed in [@boucheron2005theory]. Among other attributes, the prediction of data traffic and throughput has been the subject of [@liu2015empirical; @nguyen2008survey]. In [@liu2015empirical], the authors consider seven algorithms for throughput prediction, ranging from mean-based and linear regression methods to more sophisticated learning-based techniques, and compare their performance using a trace-driven simulator. Furthermore, they develop an information theoretic lower bound for the prediction error. In a similar attempt, [@nguyen2008survey] reviews real time Internet traffic classification. Here, the authors not only review prediction algorithms, but also try to shed light on practical challenges in deploying different kinds of techniques under different network scenarios. For instance, they argue that algorithms that require packet inspection either in the form of port number or payload, might have limited applicability due to potential encryption compared to methods that rely on statistical traffic properties. ]{}
[The capability to extract user behavior in online social networks and use it to learn the evolution of traffic patterns in mobile networks is the subject of another survey [@jin2013understanding]. The general approach of the papers included in that review is to use social graphs and classify different types of interactions between users on social networks in order to monitor the corresponding network traffic. Another important attribute for network performance is modeling the Quality of Experience (QoE) or how the service is perceived by the user. The authors of [@barakovic2013survey] provide a thorough survey including various methods for modeling QoE for different applications and also discuss tools for estimating and predicting QoE values by probing network parameters.]{}
[Spectrum sensing and cognitive radio are two very important technologies to measure, estimate and predict spectrum availability and occupancy. For instance, [@hoyhtya2016spectrum; @chen2016survey] provide two independent taxonomies of measurement methodologies, campaigns and models. In addition, they review the reliability of these types of measurements [@hoyhtya2016spectrum] and they illustrate how to predict the system evolution thanks to available information and regression analysis [@chen2016survey].]{}
[To the best of our knowledge, this survey is the first to specifically address anticipatory techniques for mobile networks. We believe that, while the topic is undeniably hot, an overarching review of the body of work is still missing and greatly needed to facilitate the adoption of such a promising direction.]{}
A Context-Based Classification of Anticipatory Networking Solutions {#sec:classification}
===================================================================
In this section, we classify the different types of context that can be predicted and exploited. For each one, we highlight the most popular prediction techniques as well as the applications for which an anticipatory optimization is performed.
Geographic Context
------------------
Geographic context refers to the geographic area associated to a specific event or information. In wireless communications, it refers to the location of the mobile users, often enriched with speed information as well as past and future trajectories. Understanding human mobility is an emergent research field that especially in the last few years has significantly benefited from the rapid proliferation of wireless devices that frequently report status and location updates. Fig. \[fig:trajectory\] illustrates an example of estimated trajectories of 6 mobile users.
The potential predictability in user mobility can be as high as $93\%$ [@song2010limits][^2]. Along the same line, [@lu2013approaching] investigates both the maximal predictability and how close to this value practical algorithms can come when applied to a large mobile phone dataset. Those results indicate that human mobility is very far from being random. Therefore, collecting, predicting and exploiting geographic context is of crucial importance. In the rest of this section we organize the papers dealing with geographic context according to their main focus: the majority of them deals with pure geographical prediction and differs on secondary aspects such as whether they predict a single future location, a sequence of places or a trajectory. The second largest group of papers deals with multimedia streaming optimization.
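The $93\%$ figure of [@song2010limits] comes from an entropy argument: given the estimated entropy $S$ of a user's location sequence over $N$ distinct locations, a Fano-type inequality bounds the achievable prediction accuracy $\Pi^{\max}$ through $S = H(\Pi^{\max}) + (1-\Pi^{\max})\log_2(N-1)$, with $H$ the binary entropy. A minimal sketch of solving this bound by bisection follows; the specific $S$ and $N$ values are illustrative, not taken from the cited dataset.

```python
import math

def binary_entropy(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def max_predictability(S, N):
    """Solve S = H(pi) + (1 - pi) log2(N - 1) for pi on (1/N, 1) by bisection.
    The left-hand side is decreasing in pi on this interval, so the root is unique."""
    f = lambda p: binary_entropy(p) + (1.0 - p) * math.log2(N - 1) - S
    lo, hi = 1.0 / N, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative numbers: a low-entropy trace over 64 candidate locations
print(max_predictability(0.8, 64))
```

A small entropy yields a bound close to 1, while $S = \log_2 N$ (a maximally random trace) drives the bound down toward the random-guess accuracy $1/N$.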
### Next location prediction
The simplest approach is to forecast where a given user will be at a predetermined instant of time in the future. The authors of [@jiang2013tracking] propose to track mobile nodes using topological coordinates and topology preserving maps. Nodes’ location is identified with a vector of distances (in hops) from a set of nodes called anchors and a linear predictor is used to estimate the mobile nodes’ future positions. Evaluation is performed on synthetic data and nodes are assumed to move at constant speed. Results show that the proposed method approaches an accuracy above $90\%$ for a prediction horizon of some tens of seconds.
![Geographic context example: an example of estimated trajectories of 6 mobile users.[]{data-label="fig:trajectory"}](./fig/TrajectorySample){width="1\columnwidth"}
A more general approach, which exploits extreme learning machines (ELMs), is discussed in [@ghouti2013mobility]. ELMs, which do not require any parameter tuning, are used to speed up the learning process. The method is evaluated using synthetic data over different mobility models. To extend the prediction horizon [@chen2013predicting] exploits users’ locations and short-term trajectories to predict the next handover. The authors use location information and handover history to solve a classification problem via supervised learning, i.e., employing a multi-class support vector machine (SVM). In particular, each classifier corresponds to a possible previous cell and predicts the next cell. A real-time prediction scheme is proposed and the feedback is used to improve the accuracy over time. Simulation results have been derived using both synthetic and real datasets. The longer a user moves along a given path, the higher the accuracy of forecasting the rest of it. Location information can be extracted from cellular network records. In this way the granularity of the prediction is coarser, but positioning can be obtained with little extra energy. In particular, [@xiong2014mpaas] aims at predicting a given user location from those of similar users. *Collective behavioral patterns* and a Markovian predictor are used to compute the next six locations of a user with a one-hour granularity, i.e., a six-hour prediction horizon. Evaluation is done using a real dataset and shows that an accuracy of about $70\%$ can be achieved in the first hour, decreasing to $40-50\%$ for the sixth hour of prediction.
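The Markovian next-location predictors mentioned above can be sketched in a few lines: count observed cell-to-cell transitions and predict the historically most frequent successor of the current cell. This is a generic illustration of the idea, not the exact classifier of any of the cited works, which use richer features and learning machinery.

```python
from collections import Counter, defaultdict

class NextCellPredictor:
    """Order-1 Markov predictor over handover traces: the predicted next
    cell is the most frequent successor of the current cell so far."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, trace):
        for prev_cell, next_cell in zip(trace, trace[1:]):
            self.transitions[prev_cell][next_cell] += 1

    def predict(self, cell):
        successors = self.transitions.get(cell)
        if not successors:
            return None          # never seen this cell before
        return successors.most_common(1)[0][0]

predictor = NextCellPredictor()
predictor.observe(["A", "B", "A", "B", "A", "C", "A", "B"])
print(predictor.predict("A"))    # "B" is A's most frequent successor here
```

Higher-order variants simply key the counter on tuples of the last $k$ visited cells instead of a single cell.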
### Space and time prediction
Prediction of mobility in a combined space-time domain is often modeled using statistical methods. In [@lee2006modeling], the idea is to predict not only the future location a user will reach, but also *when* and for *how long* the user will stay there. To incorporate the *sojourn* time during which a user remains in a certain location, mobility is modeled as a semi-Markov process. In particular, the transition probability matrix and the sojourn time distribution are derived from the previous association history. Evaluation is done on a real dataset and shows approximately $80\%$ accuracy. A similar approach is presented in [@abu2010application], where the prediction is extended from single to multi-transitions (estimating the likelihood of the future event after an arbitrary number of transitions). Both papers provide also some preliminary results on the benefits of the prediction on resource allocation and balancing.
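The semi-Markov model of [@lee2006modeling] couples a next-location transition matrix with a per-location sojourn-time distribution. A toy simulator of such a process is sketched below; the two-place example and the fixed dwell times are hypothetical and chosen only for clarity.

```python
import random

def simulate_semi_markov(P, sojourn, start, steps, rng=None):
    """P: {state: {next_state: prob}}; sojourn: {state: fn(rng) -> dwell time}.
    Returns [(state, arrival_time, dwell), ...] for `steps` visited states."""
    rng = rng or random.Random(0)
    t, state, path = 0.0, start, []
    for _ in range(steps):
        dwell = sojourn[state](rng)       # how long the user stays here
        path.append((state, t, dwell))
        t += dwell
        nxt, probs = zip(*P[state].items())
        state = rng.choices(nxt, weights=probs)[0]
    return path

# Hypothetical two-place user alternating between 'home' and 'work'
P = {"home": {"work": 1.0}, "work": {"home": 1.0}}
sojourn = {"home": lambda r: 12.0, "work": lambda r: 8.0}
print(simulate_semi_markov(P, sojourn, "home", 4))
```

In a real deployment the transition probabilities and sojourn distributions would be estimated from the association history, as done in the cited papers.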
In [@barth2011mobility], the authors represent the network coverage and movements using graph theory. The user mobility is modeled using a Markov process where the prediction of the next node to be visited depends not only on the current node but also on the previous one (i.e., a second-order Markovian predictor). Considering both local as well as global users’ profiles, [@barth2012combining] extends the previous Markovian predictor and improves accuracy by about $30\%$. As pointed out in [@gidofalvi2012and], sojourn times and transition probabilities are inhomogeneous. Thus, an inhomogeneous Markov process is exploited to predict user mobility. Evaluation on a real dataset shows an accuracy of $67\%$ for long time scale prediction.
The interdependence between time and space is investigated also in [@chon2013understanding] by examining real data collected from smartphones during a two-month deployment. Furthermore, [@chon2012evaluating] shows the benefit of using a location-dependent Markov predictor with respect to a location-independent model based on nonlinear time series analysis. Additionally, it is shown that information on arrival times and periodicity of location visits is needed to provide accurate prediction. A system design, named SmartDC, is presented in [@chon2014smartdc; @chon2014adaptive; @chon2011mobility]. SmartDC comprises a mobility learner, a mobility predictor and an adaptive duty cycling. [The proposed location monitoring scheme optimizes the sensing interval for a given energy budget.]{} The system has been implemented and tested in a real environment. Notably, this is also one of the few papers that takes into account the *cost* of prediction, which in this case is evaluated in terms of energy. Namely, the authors detect approximately $90\%$ of location changes, while reducing energy consumption at the expense of higher detection delay.
### Location sequences and trajectories
A natural extension of the spatio-temporal perspective is the prediction of the location patterns and trajectories of the users. [User mobility profiles have been introduced in [@akyildiz2004predictive] to optimize call admission control, resource management and location updates. Statistical predictors are used to forecast the next cell to which a mobile phone is going to connect. The validation of the solution is done via simulation.]{} In [@scellato2011nextplace], an approach for location prediction based on nonlinear time series analysis is presented. The framework focuses on the *temporal* predictability of users’ location, considering their arrival and dwell time in relevant places. The evaluation is done considering four different real datasets. The authors evaluate first the predictability of the considered data and then show that the proposed nonlinear predictor outperforms both linear and Markov-based predictors. Precision approaches $70-90\% $ for medium scale prediction ($5$ minutes) and decreases to $20-40\%$ for long scale (up to $8$ hours).
In order to improve the accuracy of time series techniques, in [@de2013interdependence] the authors exploit the movement of friends, people, and, in general, entities, with correlated mobility patterns. By means of multivariate nonlinear time series prediction techniques, they show that forecasting accuracy approaches $95\% $ for medium time scale prediction ($5$ to $10$ minutes) and is approximately $50 \%$ for $3$ hour prediction. Confidence bands show a significant improvement when prediction exploits patterns with high correlation. Evaluation is done considering two different real datasets.
[Trajectory analysis and prediction also benefit from exploiting specific constraints such as streets, roads, traffic lights and public transportation routes. In [@fazio2016pattern] the authors adapt the local Markovian prediction model for a specific coverage area in terms of a set of roads, moving directions, and traffic densities. When applying Markov prediction schemes, the authors consider a road compression approach to avoid dealing with a large number of locations, reduce the size of the state space, and minimize the approximation error. A more attractive candidate for trajectory prediction is the public transportation system, because of known routes and stops, and the large amount of generated mobile data traffic. In [@abouzeid2015evaluating], the authors investigate the predictability of mobility and signal variations along public transportation routes, to examine the viability of predictive content delivery. The analysis on a real dataset of a bus route, covering both urban and sub-urban areas, shows that modeling prediction uncertainty is paramount due to the high variability observed, which depends on combined effects of geographical area, time, forecasting window and contextual factors such as signal lights and bus stops.]{}
Moving from discrete to continuous trajectories, Kalman filtering is used to predict the future velocity and moving trends of vehicles and to improve the performance of broadcasting [@yang2013broadcasting]. The main idea is that each node should send the message to be broadcast to the fastest candidate based on its neighbors’ future mobility. Simulation results show modest gains, in terms of percentage of packet delivery and end-to-end delay, with respect to non-predictive methods.
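The Kalman filtering idea above can be made concrete with a minimal one-dimensional constant-velocity tracker. This is a generic sketch, not the scheme of [@yang2013broadcasting]: the noise parameters `q` and `r` and the noiseless trace are illustrative assumptions.

```python
# One-dimensional constant-velocity Kalman filter: track [position, velocity]
# from position fixes, then extrapolate the state to anticipate future
# positions. Noise parameters q, r and the trace are illustrative assumptions.

def kalman_track(fixes, dt=1.0, q=0.01, r=1.0):
    """Filter a sequence of position fixes; return the final [pos, vel] estimate."""
    x = [fixes[0], 0.0]                      # state: position, velocity
    P = [[1.0, 0.0], [0.0, 1.0]]             # state covariance
    for z in fixes[1:]:
        # Predict step: constant-velocity motion model F = [[1, dt], [0, 1]].
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update step: only position is measured (H = [1, 0]).
        S = P[0][0] + r                      # innovation variance
        K = [P[0][0] / S, P[1][0] / S]       # Kalman gain
        y = z - x[0]                         # innovation
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x

def predict_ahead(x, steps, dt=1.0):
    """Extrapolate the filtered state `steps` time steps into the future."""
    return x[0] + steps * dt * x[1]

fixes = [2.0 * t for t in range(20)]         # vehicle moving at 2 m/s (noiseless demo)
est = kalman_track(fixes)                    # converges near [38.0, 2.0]
```

Once the filter has converged, `predict_ahead` extrapolates the estimated state, which is the basic building block behind choosing a relay based on where a neighbor will be rather than where it is.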
An alternative to Kalman filters is the use of regression techniques [@sridaran2013location], which analyze observations of past trips. A systematic methodology, based on geometrical structures and data-mining techniques, is proposed to extract meaningful information for location patterns. This work characterizes the location patterns, i.e., the set of locations visited, for several million users using nationwide call data records. The analysis highlights statistical properties of the typical covered area and route, such as its size, average length and spatial correlation.
Along the same line, [@froehlich2008route] shows how the regularity of driver’s behavior can be exploited to predict the current end-to-end route. The prediction is done by exploiting clustering techniques and is evaluated on a real dataset. A similar approach, named *WhereNext*, is proposed in [@monreale2009wherenext]. This method predicts the next location of a moving object using past movement patterns that are based on both spatial and temporal information. The prediction is done by building a decision tree, whose nodes are the regions frequently visited. It is then used to predict the future location of a moving object. Results are shown using a real dataset provided by the GeoPKDD project [@geopkdd]. The authors show the trade-off between the fraction of predicted trajectories and the accuracy. Both [@froehlich2008route] and [@monreale2009wherenext] show similar performance with an accuracy of approximately $40 \%$ and medium time scale prediction (order of minutes).
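A minimal version of the Markov-based predictors that the above approaches are built on or compared against can be sketched as a first-order transition model over discrete locations; the trace below is purely illustrative.

```python
from collections import Counter, defaultdict

# First-order Markov next-location predictor: count observed cell-to-cell
# transitions, then predict the most frequent successor of the current cell.
# Cell labels and the trace are illustrative, not from any cited dataset.

def train_markov(trace):
    counts = defaultdict(Counter)
    for a, b in zip(trace, trace[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, current):
    if current not in counts:
        return None  # unseen cell: no prediction possible
    return counts[current].most_common(1)[0][0]

trace = ["home", "road", "office", "road", "home",
         "road", "office", "road", "home", "road", "office"]
model = train_markov(trace)
```

Higher-order variants condition on the last $n$ visited cells instead of one, trading data requirements for accuracy, which is the trade-off the temporal predictability studies above quantify.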
### Dealing with errors
The impact of estimation and prediction errors is modeled in [@bui2014model]. The authors propose a comprehensive overview of several mobility predictors and associated errors and investigate the main error sources and their impact on prediction. Based on this, they propose a stochastic model to predict user throughput that accounts for uncertainty. The method is evaluated using synthetic data while assuming that prediction errors have a truncated Gaussian distribution. The joint analysis on the predictability of location and signal strength, which in this case is simply quantified by the standard deviation of the random variable, shown in [@abouzeid2015evaluating] indicates that location-awareness is a key factor to enable accurate signal strength predictions. Location errors are also considered in [@liao2015channel] where both temporal and spatial correlation are exploited to predict the average channel gain. The proposed method combines an model with functional linear regression and relies on location information. Results are derived using real data taken from the MOMENTUM project [@momentum] and show that the proposed method outperforms and processes.
### Mobility-assisted handover optimization
Seamless mobility requires efficient resource reservation and context transfer procedures during handover, which should not be sensitive to randomness in user movement patterns. To guarantee the service continuity for mobile users, the conventional in-advance resource reservation schemes make a bandwidth reservation over all the cells that a mobile host will visit during its active connection. With mobility pattern prediction, it is possible to prepare resources in the most probable cells for the moving users. Using a Markov chain-based pattern prediction scheme, the authors in [@fazio2016pattern] propose a statistical bandwidth management algorithm to handle proactive resource reservations to reduce bandwidth waste. Along similar lines, [@barth2011mobility; @wanalertlak2011behavior] investigate mobility prediction schemes, considering not only location information but also user profiles, time-of-day, and duration characteristics, to improve the handover performance in terms of resource utilization, handover accuracy, call dropping and call blocking probabilities.
### Geographically-assisted video optimization
One of the main applications that has been used to show the benefits of geographic context is video streaming. A pioneering work showing the benefit of long-term location-based scheduling for streaming is [@riiser2012video]. The authors propose a system for bandwidth prediction based on geographic location and past network conditions. Specifically, the streaming device can use a -based bandwidth-lookup service in order to predict the expected bandwidth availability and to optimally schedule the video playout. The authors present simulation as well as experimental results, where the prediction is performed for the upcoming $100$ meters. The predictive algorithm reduces the number of buffer underruns and provides stable video quality.
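The bandwidth-lookup idea above can be sketched as a table of past throughput samples bucketed by position along a route, with the expected bandwidth for an upcoming segment taken as the average of the history. The segment size, fallback value and samples are illustrative assumptions, not details of [@riiser2012video].

```python
from collections import defaultdict
from statistics import mean

SEGMENT_M = 100  # bucket positions into 100 m road segments (illustrative)

class BandwidthMap:
    """Geo-binned bandwidth history: record past samples, look up the mean."""

    def __init__(self):
        self.history = defaultdict(list)

    def record(self, position_m, kbps):
        self.history[position_m // SEGMENT_M].append(kbps)

    def predict(self, position_m, default=500):
        # Fall back to a conservative default for segments never visited.
        samples = self.history.get(position_m // SEGMENT_M)
        return mean(samples) if samples else default

bwmap = BandwidthMap()
for pos, kbps in [(40, 4000), (80, 3600), (150, 900), (190, 1100)]:
    bwmap.record(pos, kbps)
```

A streaming client driving along the route can then query `predict` for the positions it is about to traverse and schedule segment downloads before entering the low-bandwidth stretch.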
Application-layer video optimization, based on prediction of user’s mobility and expected capacity, is also proposed in [@lu2013optimizing; @abouzeid2013optimal; @margolies2014exploiting]. In [@lu2013optimizing], the authors minimize a utility function based on system utilization and rebuffering time. For the single user case they propose an online scheme based on partial knowledge, whereas the multiuser case is studied assuming complete future knowledge. In [@abouzeid2013optimal], different types of traffic are considered: full buffer, file download and buffered video. Prediction is assumed to be available and accurate over a limited time window. Three different utility functions are compared: maximization of the network throughput, maximization of the minimum user throughput, and minimization of the degradations of buffered video streams. Both works show results using synthetic data and assuming perfect prediction of the future wireless capacity variations over a time window with size ranging from tens to hundreds of seconds. In contrast, [@margolies2014exploiting] introduces a data rate prediction mechanism that exploits mobility information and is used by an enhanced scheduler. The performance gain is evaluated using a real dataset and shows a throughput increase of $15$%-$55$%.
Delay tolerant traffic can also benefit from offloading and prefetching as shown in [@siris2013enhancing]. The authors propose methods to minimize the data transfer over a mobile network by increasing the traffic offloaded to WiFi hotspots. Three different algorithms are proposed for both delay tolerant and delay sensitive traffic. They are evaluated using empirical measurements and assuming errors in the prediction. Results show that offloaded traffic is maximized when using prediction, even when this is affected by errors.
A *geo-predictive streaming system* called GTube is presented in [@hao2014gtube]. The application obtains the user’s locations and informs a server which provides the expected connection quality for future locations. The streaming parameters are adjusted accordingly. In particular, two quality adaptation algorithms are presented, where the video quality level is adapted for the upcoming 1 and $n$ steps, respectively, based on the estimated bandwidth. The system is tested using a real dataset and shows that accuracy reaches almost $90\%$ for very short time scale prediction (a few seconds), but it decreases rapidly, approaching zero for medium time scale prediction (a few minutes). However, the proposed $n$-step algorithm improves the stability of the video quality and increases bandwidth utilization.
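A hedged sketch of an $n$-step quality adaptation in the spirit of GTube: given bandwidth estimates for the next $n$ segments, keep the highest constant quality whose segment fetches never outrun the playout buffer. The stall model, segment duration and quality ladder are illustrative assumptions, not the algorithm of [@hao2014gtube].

```python
# n-step lookahead quality selection: simulate the playout buffer over the
# forecast horizon and return the highest quality that never stalls.

def n_step_quality(forecast_kbps, qualities_kbps, seg_s=2.0, start_buffer_s=4.0):
    for q in sorted(qualities_kbps, reverse=True):
        buffer_s, stalled = start_buffer_s, False
        for bw in forecast_kbps:
            download_s = q * seg_s / bw      # time to fetch the next segment
            if download_s > buffer_s:        # playback would drain the buffer
                stalled = True
                break
            buffer_s += seg_s - download_s   # gain one segment, spend fetch time
        if not stalled:
            return q
    return min(qualities_kbps)               # worst case: lowest quality

# Forecast with a bandwidth dip in the middle segment (values illustrative).
choice = n_step_quality([4000, 1000, 4000], (500, 1500, 3000))
```

A 1-step variant would pick the top quality during the first segment and then be forced to drop sharply at the dip; the lookahead trades some quality for stability, which matches the behavior reported above.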
Link Context
------------
Link context refers to the prediction of the evolution of the physical wireless channel, i.e., the channel quality and its specific parameters, so that it is possible either to take advantage of future link improvements or to counter bad conditions before they impact the system. As an example of link context, Fig. \[fig:map\] shows a pathloss map of the center of Berlin realized with the data of the MOMENTUM [@momentum] project.
### Channel parameter prediction
One possible approach to anticipate the evolution of the physical channel state is to predict the specific parameters that characterize it. In general, the variations of the physical channel can be caused by large-scale and small-scale fading. While predicting small-scale fading is quite challenging, if not impossible, several papers focus on predicting path loss and shadowing effects. In [@tie2011anticipatory], the time-varying nonlinear wireless channel model is adopted to predict the channel quality variation, anticipating distance and pathloss exponent. The performance evaluation is done using both an indoor and an outdoor testbed. The goodput obtained with the proposed bitrate control scheme can be almost doubled compared to other approaches.
Pathloss prediction in urban environments is investigated in [@piacentini2010path]. The authors propose a two-step approach that combines machine learning and dimensional reduction techniques. Specifically, they propose a new model for generating the input vector, the dimension of which is reduced by applying linear and nonlinear principal component analysis. The reduced vector is then given to a trained learning machine. The authors compare and using real measurements and conclude that slightly better results can be achieved using the regressors.
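Most pathloss predictors, including the learning-based ones above, start from the classical log-distance model $PL(d) = PL_0 + 10\,n\,\log_{10}(d/d_0)$, which is linear in $\log_{10} d$ and can therefore be fitted by ordinary least squares. The sketch below recovers the pathloss exponent $n$ from synthetic measurements; the sample values are illustrative.

```python
import math

# Least-squares fit of the log-distance pathloss model
#   PL(d) = PL0 + 10 * n * log10(d / d0)
# from (distance, measured pathloss in dB) pairs.

def fit_pathloss(samples, d0=1.0):
    xs = [10 * math.log10(d / d0) for d, _ in samples]
    ys = [pl for _, pl in samples]
    m = len(samples)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope    # (PL0 in dB, pathloss exponent n)

# Synthetic noiseless measurements generated with PL0 = 40 dB, n = 3.5.
samples = [(d, 40 + 10 * 3.5 * math.log10(d)) for d in (10, 50, 100, 500, 1000)]
pl0, exponent = fit_pathloss(samples)
```

With real measurements the residuals around this fit are the shadowing term, which is exactly what the kernel and Gaussian-process methods discussed below try to capture spatially.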
Supporting the temporal prediction with spatial information is proposed in, e.g., [@dallanese2011channel] to study the evolution of shadow fading. The authors suggest implementing a to track the time-varying shadowing using a network of . The prediction is used to anticipate the position of the primary users and the expected interference and, consequently, to maximize the transmission rate of networks. Errors with the proposed model approach $2$ dB (compared to $10$ dB obtained with the pathloss based model). Targeting the same objective, but using a different methodology, [@yin2011prediction] formulates the throughput optimization problem as an . In particular, the predicted channel availability is used to maximize the throughput and to reduce the time overhead of channel sensing. Predictors robust to channel variations are investigated also in [@tarsa2015taming]. A clustering method with supervised classification is proposed. The performance is shown for bulk data transport via and it is also shown that the predictive approach outperforms non-predictive ones.
Finally, maps can be used to summarize predicted information; for instance, algorithms to build pathloss maps are proposed in [@kasparick2015kernel]. In this paper, the authors propose two kernel-based adaptive algorithms, namely the adaptive projected subgradient method and the multikernel approach with adaptive model selection. Numerical evaluation is done for both an urban scenario and a campus network scenario, using real measurements. The performance of the algorithms is evaluated assuming perfect knowledge of the users’ trajectories.
### Combined channel and mobility context
Channel quality and mobility information are jointly predicted in [@nicholson2008breadcrumbs]. The authors combine information on visited locations and corresponding achieved link quality to provide *connectivity forecast*. A Markov model is implemented in order to forecast future channel conditions. Location prediction accuracy is approximately $70\%$ for a prediction window of $20$ seconds. However, the location information has quite a coarse granularity (of about $100$ m). In terms of bandwidth, the proposed model, evaluated on a real dataset, shows an accuracy within $10$ KB/s for over $50\%$ of the evaluation period, and within $50$ KB/s for over $80\%$ of the time. In [@naimi2014anticipation], prediction is employed to adjust the routing metrics in ad hoc wireless networks. In particular, the metrics considered in the paper are the average number of retransmissions needed and the time expected to transmit a data packet. The solution anticipates the future signal strength using linear regression on the history of the link quality measurements. Simulations show that the packet delivery ratio is close to $100\%$, whereas it drops to $20\%$ with classical methods.
When the information used to drive the prediction is affected by errors, it is important to account for the magnitude of the error. This has been considered, for instance, in [@muppirisetty2015spatial] and [@muppirisetty2016channel], where the impact of location uncertainties is taken into account. Namely, the authors of [@muppirisetty2015spatial] show that classical wrongly predicts the channel gain in presence of errors, while uncertain , which explicitly accounts for location uncertainty, outperforms the former in both learning and predicting the received power. Gains are shown also for a simple proactive resource allocation scenario. Similarly, the same authors propose in [@muppirisetty2015proactive] a proactive scheduling mechanism that exploits the statistical properties of user demand and channel conditions. Furthermore, the model captures the impact of prediction uncertainties and assesses the optimal gain obtained by the proactive resource scheduler. The authors also propose an asymptotically optimal policy that attains the optimal gain rapidly as the prediction window size increases. Uncertainties are also dealt with in [@bui2015mobile], where a resource allocation algorithm for mobile networks that leverages link quality prediction is proposed. Time series filtering techniques () are used to predict near term link quality, whereas medium to long term prediction is based on statistical models. The authors propose a resource allocation optimization framework under imperfect prediction of future available capacity. Simulations are done using a real dataset and show that the proposed solution outperforms the limited horizon optimizer (i.e., when the prediction is done only for the upcoming few seconds) by $10-15\%$. Resource allocation is also addressed in [@margolies2014exploiting], which extends the standard scheduler of 4G networks to account for data rate prediction obtained through adaptive radio maps.
### Channel-assisted video optimization
In [@wang2013ames], the authors propose an adaptive mobile video streaming framework, which stores video in the cloud and offers to each user a continuous video streaming adapted to the fluctuations of the link quality. The paper proposes a mechanism to predict the potential available bandwidth in the next time window (of a duration of a few seconds) based on the measurements of the link quality done in the previous time window. A prototype implementation of the proposed framework is used to evaluate the performance. This shows that the prediction has a relative error of about $10\%$ for very short time windows (a couple of seconds) but becomes relatively poor for larger time windows. The video performance is evaluated in terms of “click-to-play” delay, which is halved with the proposed approach. A Markov model is used in [@bao2015bitrate], where information on both channel and buffer states is combined to optimize mobile video streaming. Both an optimal policy as well as a fast heuristic are proposed. A drive test was conducted to evaluate the performance of the proposed solution. In particular, the authors show the proportional dependency between utility and buffer size, as well as the complexity of the two algorithms. Furthermore, a Markov model is adopted to represent different user’s achievable rates [@seetharam2015managing] and channel states [@hosseini2015not]. The transition matrix is derived empirically to minimize the number of video stalls and their duration over a $10$-second horizon.
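The empirically derived transition matrices used above can be sketched by quantizing a rate trace into discrete states and counting transitions; the bin edges and the trace below are illustrative assumptions.

```python
from collections import Counter, defaultdict

BINS = [0, 1, 2, 4, 8]  # Mbit/s thresholds defining rate states 0..4 (illustrative)

def state(rate):
    # Index of the highest threshold not exceeding the rate.
    return sum(rate >= b for b in BINS) - 1

def transition_matrix(trace):
    """Estimate P[s][t] = Pr(next state = t | current state = s) from a trace."""
    counts = defaultdict(Counter)
    for a, b in zip(trace, trace[1:]):
        counts[state(a)][state(b)] += 1
    return {s: {t: c / sum(row.values()) for t, c in row.items()}
            for s, row in counts.items()}

trace = [0.5, 1.5, 3.0, 5.0, 3.0, 1.5, 0.5, 1.5, 3.0, 5.0]  # measured rates
P = transition_matrix(trace)
```

Given the current state, the row `P[s]` is the one-step forecast distribution; chaining it over a 10-second horizon, as in the works above, amounts to repeatedly multiplying by the transition matrix.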
Video calls are considered in [@kurdoglu2016realtime]. Namely, a cross-layer design for proactive congestion control, named Rebera, is proposed. The system measures the real-time available bandwidth and uses a linear adaptive filter to estimate the future capacity. Furthermore, it ensures that the video sending rate never exceeds the predicted values, thereby preventing self-congestion and reducing delays. Performance results with respect to today’s solutions are given for both a testbed and a real cellular network. In [@liu2016hop], the authors propose a hop-by-hop video quality adaptation scheme at the router level to improve the performance of adaptive video streaming in . In this context, the routers monitor network conditions by estimating the end-to-end bandwidth and proactively decrease the video quality when network congestion occurs. Performance is evaluated considering a realistic large-scale network topology and it is shown that the proposed solution outperforms state of the art schemes in terms of both playback quality and average delay.
### Video optimization under uncertainty
For the video optimization use case, some works also assess the impact of uncertain predictions. In [@blobel2015anticipatory], the authors propose a stochastic model of prediction errors, based on [@bui2014model], and introduce an online scheduler that is aware of prediction errors. Namely, based on the expected prediction accuracy, the algorithm determines whether to consider or discard the predicted data rate. A similar model for prediction errors is introduced in [@tsilimantos2016anticipatory]. In this case, a formulation is proposed to trade off spectral efficiency and stalling time. The proposed solution shows good gains with respect to the case without prediction, even when errors occur. is used also in [@atawia2014robust] to minimize the base station airtime with the constraint of no video interruption. In this case, uncertainties are modeled by using a fuzzy approach. Furthermore, in order to keep track of the previous values of the error, a Kalman filter is used. Simulations are run using synthetic data and show the effect of channel variability on video degradation and average airtime. In [@mangla2016video], bandwidth prediction is exploited to increase the quality of video streaming. Both perfect and uncertain prediction are considered and a robust heuristic is proposed to mitigate the effect of prediction errors when adapting the video bitrate. In [@atawia2015chance; @atawia2016joint], a predictive resource allocation robust to rate uncertainties is proposed. The authors propose a framework that provides quality guarantees with the objective of minimizing energy consumption. Both optimal gradient-based and real-time guided heuristic solutions are presented. In [@atawia2015chance] both Gaussian and Bernstein approximation are used to model rate uncertainties, whereas [@atawia2016joint] considers only the former one. 
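The decision of whether to consider or discard a predicted data rate, as in [@blobel2015anticipatory], can be sketched with a simple reliability test; the relative-error threshold below is an illustrative assumption, not a value from the paper.

```python
# Error-aware use of a rate prediction: trust the prediction only when its
# expected standard deviation is small relative to the prediction itself,
# otherwise fall back to the last measured rate.

def effective_rate(predicted, pred_std, last_measured, max_rel_err=0.3):
    if predicted > 0 and pred_std / predicted <= max_rel_err:
        return predicted          # prediction deemed reliable
    return last_measured          # discard uncertain prediction
```

A scheduler built on `effective_rate` degrades gracefully: with accurate forecasts it behaves like a predictive scheduler, and with noisy forecasts it reverts to a reactive one, which is the qualitative behavior the robust schemes above aim for.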
Similarly, [@hossain2004link] provides predictive over wireless networks: given the TDMA nature of these networks, these schemes optimize the number of allocated time slots depending on the characteristics of the traffic stream and the wireless link.
### Efficiency bounds and approximations for multimedia streaming applications
A few papers ([@abouzeid2014energy; @abouzeid2014efficient; @bui2015anticipatory; @bui2015anticipatoryb; @draxler2013cross; @draxler2015smarterphones; @valentin2014anticipatory; @zou2015can]) investigate resource allocation optimization assuming that the future channel state is perfectly known. While addressing different objectives, these papers share similar methods: they first devise a problem formulation from which an optimal solution can be obtained (using standard optimization techniques), then they propose sub-optimal approaches and on-line algorithms to obtain an approximation of the optimal solution. Furthermore, all these papers leverage a buffer to counteract the randomness of the channel. For instance, in case a given amount of information has to be gathered within a deadline, the buffer allows the system to optimize (for a given objective function) the resource allocation while meeting the deadline.
In this regard, energy-efficiency is the primary objective in [@abouzeid2014energy; @abouzeid2014efficient], which is optimized by allowing the network base stations to be switched off once the users’ streaming requirements have been satisfied. Simulations show that an energy saving up to $80 \%$ with respect to the baseline approach can be achieved and that the performance of the heuristic solution is quite close to the optimal (but impractical) approach. Buffer size is investigated in [@valentin2014anticipatory], where the author introduces a linear formulation that minimizes the amount for resources assigned to non-real time video streaming with constraints on the user’s playout buffer. Results are shown for a scenario with both video and best effort users and highlight the gain in terms of required resources to serve the video users as well as data rate for the best effort users.
The trade-off between streaming interruption time and average quality is investigated in [@draxler2013cross; @draxler2015smarterphones] by devising a mixed-integer quadratically constrained problem which computes the optimal download time and quality for video segments. Then, the authors propose a set of heuristics tailored to greedily optimize segment scheduling according to a specific objective function, e.g., maximum quality, minimum streaming interruption, or fairness. Similar objectives are tackled in [@bui2015anticipatory; @bui2015anticipatoryb] in a lexicographic approach, so that streaming continuity is always prioritized over quality. They first propose a heuristic for the lateness-quality problem that performs almost as well as the formulation. Then, they extend the formulation to include guarantees and they introduce an iterative approximation based on a simpler formulation. A further heuristic approach is devised in [@zou2015can] and accounts for the buffer and channel state prediction. The proposed approach maximizes the streaming quality while guaranteeing that there are no interruptions.
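The common structure of these formulations — exploit perfectly known future rates and a playout buffer to serve a demand before a deadline with minimal resource usage — can be sketched with a greedy toy version. The real problems are (mixed-)integer programs with buffer and fairness constraints; the rates and demand below are illustrative.

```python
# Given a perfect per-slot rate forecast and an amount of data due by the
# deadline (end of the forecast), transmit only in the best slots so that
# active airtime is minimized, in the spirit of the energy-saving schemes above.

def min_airtime_schedule(rates, demand):
    """Pick the fewest slots (by descending rate) whose capacity covers `demand`."""
    chosen, served = [], 0
    for slot in sorted(range(len(rates)), key=lambda i: -rates[i]):
        if served >= demand:
            break
        chosen.append(slot)
        served += rates[slot]
    if served < demand:
        raise ValueError("deadline infeasible even with all slots active")
    return sorted(chosen)

rates = [2, 8, 1, 6, 3]        # deliverable units per slot (illustrative forecast)
schedule = min_airtime_schedule(rates, 13)
```

The base station can sleep in every slot outside `schedule`; with full future knowledge this greedy choice is what makes the reported energy savings possible, while online variants must hedge against forecast errors.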
### Cognitive radio maps
are context-aware wireless devices that adapt their functionalities to changes in the environment. They have been recently used [@xing2013spectrum; @wei2013construction; @yilmaz2013radio] to obtain the so-called : a multi-dimensional database containing a wide set of information ranging from regulations to spectrum usage. For instance, are used to predict spectrum availability in [@xing2013spectrum]: the paper exploits cognitive maps to provide contextual information for predictive machine learning approaches such as , and regression techniques. The construction of these maps is discussed in [@wei2013construction] and the references therein, while their use as enabler for networks is analyzed in [@yilmaz2013radio]. In the context of anticipatory networking, are often used as a source of contextual information for the actual prediction technique adopted, rather than as prediction tools themselves. [@hoyhtya2016spectrum; @chen2016survey] present two surveys of methodologies and measurement campaigns of spectrum occupancy. In particular, [@hoyhtya2016spectrum] proposes a conservative approach to account for measurement uncertainty, while [@chen2016survey] exploits predictors to provide the future channel status. In addition, prediction through machine learning approaches is addressed in [@thilina2013machine], where different techniques are compared to assess future channel availability. Imperfect measurements are dealt with in [@khan2016opportunistic], which models the problem as a repeated game and maximizes the total network payoff. However, in cognitive networks, the channel status depends on the activity of primary users. [@saleem2014primary] surveys the models proposed so far to describe primary users activity that can be used to drive prediction in this area.
Once the activity of primary users is available or predicted, it is possible to control the activity of secondary users in order to guarantee the agreed to the former [@monemi2015characterizing; @monemi2016characterization]. These papers compute the feasible cognitive interference region in order to allow secondary users’ communication respecting primary users’ rights. The utilization of spectrum opportunity describes the probability of a secondary user to exploit a free communication slot [@ozger2016utilization]. A similar form of opportunistic spectrum usage goes under the name of white space [@akhtar2016white]: i.e., channels that are unused at specific location and time. s can take advantage of these frequencies thanks to dynamic spectrum access. Finally, [@khan2016cognitive] describes how to exploit to realize a complete smart grid scenario; [@bukhari2016survey] describes how to exploit channel bonding to increase the bandwidth and decrease the delay of .
Traffic Context
---------------
This section overviews some of the approaches that focus on traffic and throughput prediction. Although related to the previous context, the papers discussed in this section leverage information collected from higher layers of the protocol stack. For instance, solutions falling in this category try to predict, among other parameters, the number of active users in the network and the amount of traffic they are going to produce. Similarly, but from the perspective of a single user, the prediction can target the data rate that a streaming application is going to achieve in the near term.
We grouped these papers in three main classes: pure analysis of mobile traffic; traffic prediction for networking optimization; and direct throughput prediction.
### Traffic analysis and characterization {#subsub:traffic-analysis}
The analysis of mobile traffic is fundamental for long-term network optimization and re-configuration. To this end, several pieces of work have addressed such research topics in the recent past.
The work in [@paul2011understanding] targets the creation of regressors for different performance indicators at different spatio-temporal granularity for mobile cellular networks. Namely, the authors focus on the characterization of per-device throughput, base station throughput and device mobility. A one-week nation-wide cellular network dataset is collected through proprietary traffic inspection tools placed in the operator network and is used to characterize the per-user traffic, cell-aggregate traffic and to perform further spatio-temporal correlation analysis. A similar scope is addressed by [@shafiq2011characterizing] which, on the other hand, focuses more on core network measurements. Flow level mobile device traffic data are collected from a cellular operator’s core network and are used to characterize the IP traffic patterns of mobile cellular devices. More recently, the authors of [@sayeed2015cloud] studied traffic prediction in cloud analytics and prove that optimizing the choice of metrics and parameters can lead to accurate prediction even under high latency. This prediction is exploited at the application/ layer to improve the performance of the application, avoiding buffer overflows and/or congestion.
### Traffic prediction {#subsub:traffic-prediction}
Several applications can benefit from the prediction of traffic performance features. For instance, a predictive framework that anticipates the arrival of upcoming requests is used in [@tadrous2013proactive] to prefetch the needed content at the mobile terminal. The authors propose a theoretical framework to assess how the outage probability scales with the prediction horizon. The theoretical framework accounts for prediction errors and multicast delivery. Along the same line, queue modeling [@huang2014backpressure] and analysis [@abedini2014content] is used to predict the upcoming workloads in a lookahead time window. Leveraging the workload prediction, a multi-slot joint power control and scheduling problem is formulated to find the optimal assignment that minimizes the total cost [@huang2014backpressure] or maximizes the [@abedini2014content]. Multimedia optimization is the focus in [@xu2013proteus]. By predicting throughput, packet loss and transmission delay half a second in advance, the authors propose to dynamically adjust application-level parameters of the reference video streaming or video conferencing services including the compression ratio of the video codec, the forward error correction code rate and the size of the de-jittering buffer. Traffic prediction is also addressed in [@samulevicius2015most], where the authors propose to use a database of events (concerts, gatherings, etc.) to improve the quality of the traffic prediction in case of unexpected traffic patterns and in [@lee2013generalized], where a general predictive control framework along with Kalman filter is proposed to counteract the impact of network delay and packet loss. The objective of [@sekar2013developing] is to build a model for user engagement as a function of performance metrics in the context of video streaming services. The authors use a supervised learning approach based on average bitrate, join time, buffering ratio and buffering to estimate the user engagement. 
Finally, inter-download time can be modeled [@beister2014predicting] and subsequently predicted for quality optimization.
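Many of the predictors above build on lightweight time series baselines. A one-step forecast via simple exponential smoothing is shown below; the smoothing factor and the load trace are illustrative assumptions.

```python
# One-step traffic forecast via simple exponential smoothing: the forecast
# for the next interval is a geometrically weighted average of past samples.

def ses_forecast(series, alpha=0.5):
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level   # forecast for the next interval

load = [100, 120, 110, 130, 125]   # traffic volume per interval (illustrative)
```

Higher `alpha` makes the forecast track recent bursts more aggressively; lower `alpha` smooths them out, which is the basic knob that more elaborate predictive controllers tune or replace with full models.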
The work in [@pollakis2016anticipatory] targets energy-efficient resource scheduling in mobile radio networks. The authors introduce a which returns on a slot basis the optimal allocation of resources to users and the optimal users-cell association pattern. The proposed model leverages optimal traffic predictors to obtain the expected traffic conditions in the following slots. Radio resource allocation in mobile radio networks is addressed also in [@yu2014predictive] and later by the same authors in [@yu2016power]; the target is to design a predictive framework to optimally orchestrate the resource allocation and network selection in case one operator owns multiple access networks. The predictive framework aims at minimizing the expected time average power consumption while keeping the network (user queues) stable. The core contribution of [@du2016traffic; @du2016resource] is the use of deep learning techniques to predict the upcoming video traffic sessions; the prediction outcome is then used to proactively allocate the resources of video servers to these future traffic demands.
### Throughput prediction {#subsub:throughput-prediction}
Rather than predicting the expected traffic or optimizing the network based on traffic prediction, the work in this section targets the prediction/optimization based on the expected throughput. A common characteristic of the work described here is that the spatio-temporal correlation is exploited in the prediction phase of the expected throughput.
Quite a few early works studied how to effectively predict the obtainable data rate. In particular, long term prediction [@papagiannaki2003long] with 12-hour granularity makes it possible to estimate aggregate demands up to 6 months in advance. Shorter and variable time scales are studied in [@sadek2004multi; @zhou2005network] adopting and techniques.
In [@abouzeid2013predictive], the authors propose a dynamic framework to allocate downlink radio resources across multiple cells of 4G systems. The proposed framework leverages context information of three types: radio maps, user’s location and mobility, as well as application-related information. The authors assume that a forecast of this information is available and can be used to optimize the resource allocation in the network. The performance of the proposed solution is evaluated through simulation for the specific use case of video streaming. Geo-localized radio maps are also exploited in [@yao2012improving]. Here the optimization is performed at the application layer by letting adaptive video streaming clients and servers dynamically change the streaming rate on the basis of the current bandwidth prediction from the bandwidth maps. The empirical collection of geo-localized data rate measures is also addressed in [@riiser2013commute] which introduces a dataset of adaptive sessions performed by mobile users.
[The work in [@millan2015tracking] considers the problem of predicting end-to-end quality of multi-hop paths in community WiFi networks. The end-to-end quality is measured by a linear combination of the expected transmission count across all the links composing the multi-hop path. The authors resort to a real data set of a WiFi community network and test several predictors for the end-to-end quality. ]{}
[The anticipation of the upcoming throughput values is often applied to the optimization of adaptive video streaming services. In this context, Yin *et al.* [@yin2015control] leverage throughput prediction to optimally adapt the bit rate of video encoders; here, prediction is based on the harmonic mean of the last $k$ throughput samples.]{}
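As an illustration of this simple estimator, the sketch below (not code from the cited paper; the sample values are invented) computes the harmonic mean of the last $k$ throughput samples in Python:

```python
def harmonic_mean_prediction(samples, k=5):
    """Predict the next throughput value as the harmonic mean of the
    last k samples, mirroring the predictor described above."""
    window = samples[-k:]
    if not window or any(s <= 0 for s in window):
        raise ValueError("need positive throughput samples")
    return len(window) / sum(1.0 / s for s in window)

# Invented recent throughput measurements, e.g., in Mbit/s
history = [4.0, 5.0, 1.0, 4.0, 5.0]
print(round(harmonic_mean_prediction(history, k=5), 3))  # → 2.632
```

The harmonic mean is dominated by the smallest recent samples, which makes it a deliberately conservative estimate for bit rate selection.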
[In [@yi2016cs2p; @jiang2016cfa] the authors build on the conjecture that video sessions sharing the same critical features have similar quality of experience (e.g., re-buffering, startup latency, etc.). Consequently, clustering techniques are first applied to group similar video sessions, and then throughput predictors are applied to each cluster to dynamically adapt the bit rate of the video encoder to the predicted throughput samples.]{}
[The work in [@zahran2016oscar] resorts to a model-based throughput predictor in which the throughput of an adaptive video streaming service is assumed to be a random variable with a Beta-like distribution whose parameters are empirically estimated within an observation time window. Building on this estimate, the authors propose an optimization program with a concave objective function and linear constraints. The program is implemented as a multiple choice knapsack problem and solved using commercial solvers. Along the same lines, the optimization of an adaptive video streaming service is addressed in [@wang2016squad], where the authors propose an adaptive video streaming framework based on a smoothed rate estimate for the video sessions.]{}
[The work in [@miller2015control] considers the scenario where a small cell is used to deliver video content to a highly dense set of users. The video delivery can also be supported in a distributed way by end-user devices storing content locally. A control-theoretic framework is proposed to dynamically set the video quality of the downloaded content while enforcing stability of the system.]{}
Social Context
--------------
The work on anticipatory networking leveraging social context exploits *ex ante* or *ex post* information on social-type relationships between agents in the networking environment. Such information may include the network of social ties and connections, users’ preferences on contents, measures of a user’s centrality in a social network, and measures of users’ mobility habits. The aforementioned context information is leveraged in three main application scenarios: caching at the edge of mobile networks, mobility prediction, and downlink resource allocation in mobile networks.
### Social-assisted caching
Motivated by the need to limit the load in the backhaul of 5G networks, references [@bastug2013proactive; @bastug2014living; @bastug2014anticipatory] propose two schemes to proactively move contents closer to the end users. In [@bastug2013proactive], caching happens at the small cells, whereas in [@bastug2014living; @bastug2014anticipatory] contents can be proactively downloaded by a subset of end users which then re-distribute them via device-to-device communication. The authors first define two optimization problems which target the load reduction in the backhaul (caching at small cells) and in the small cell (caching at end users), respectively; then heuristic algorithms based on machine learning tools are proposed to obtain sub-optimal solutions in reasonable processing time. The heuristic first collects users’ content ratings/preferences to predict the popularity matrix ${\boldsymbol{\mathbf{P}}}_m$. Then, content is placed at each small cell in a greedy way, starting from the most popular items until a storage budget is hit. The first algorithmic step of caching at the end users is to identify the $K$ most connected users and to cluster the remaining ones into communities. It is then possible to characterize the content preference distributions within each community and greedily place contents at the cluster heads. In [@bastug2014anticipatory], the prediction leverages additional information on the underlying structure of content popularity within the communities of users. [Joint mobility and popularity prediction for content caching at small cell base stations is studied in [@siris2016exploiting]. Here, the authors propose a heuristic caching scheme that determines whether a particular content item should be cached at a particular base station by jointly predicting the mobility pattern of users that request that item as well as its popularity, where popularity prediction is performed using the inter-arrival times of consecutive requests for that object. 
They conclude that the joint scheme outperforms caching with only mobility and only popularity models.]{} A similar problem is addressed in [@golrezaei2012femtocaching]: the authors consider a distributed network of femto base stations, which can be leveraged to cache videos. The authors study where to cache videos such that the average sum delay across all the end users is minimized for a given video content popularity distribution, a given storage capacity and an arbitrary model for the wireless link. A greedy heuristic is then proposed to reduce the computational complexity.
[In [@tadrous2015optimal; @tadrous2015joint], it is argued that proactive caching of delay intolerant content based on user preferences is subject to prediction uncertainties that affect the performance of any caching scheme. In [@tadrous2015optimal], these uncertainties are modeled as probability distributions of content requests over a given time period. The authors provide lower bounds on the content delivery cost given that the probability distribution for the requests is available. They also derive caching policies that achieve this lower bound asymptotically. It is shown that under uniform uncertainty, the proposed policy reduces to equally spreading the amount of predicted content data over the horizon of the prediction window. Another approach to solve the same problem is used in [@tadrous2015joint], where personalized content pricing schemes are deployed by the service provider based on user preferences in order to enhance the certainty about future demand. The authors model the pricing problem as an optimization problem. Due to the non-convex nature of their model, they use an iterative sub-optimal solution that separates price allocation and proactive download decisions.]{}
### Social-assisted matching game theory
Matching game theory [@gu2015matching] can be used to allocate network resources between users and base stations when social attributes are used to profile users. For instance, by letting users and base stations rank one another to capture users’ similarities in terms of interests, activities and interactions, it is possible to create social utility functions controlling a distributed matching game. In [@semiari2015context], a self-organizing, context-aware framework for resource allocation is proposed that exploits the likelihood of strongly connected users to request similar contents. The solution is shown to be computationally feasible and to offer substantial benefits when users’ social similarities are present. A similar approach is used in [@semiari2016context] to deal with joint millimeter-wave and microwave resource allocation at dual-band base stations, in [@namvar2014context] for user-to-base-station association in small cell networks, and in [@zhang2015social] to optimize offloading techniques. Caching in small cell networks can also be addressed as a many-to-many matching game [@hamidouche2014many]: by matching video popularity among users most frequently served by a given server, it is possible to devise caching policies that minimize end-users’ delays. Simulations show the approach is effective in small cell networks.
### Social-assisted mobility prediction
Motivated by the need to reduce the active scanning overhead in IEEE 802.11 networks, the authors of [@wanalertlak2011behavior] propose a mobility prediction tool to anticipate the next access point a WiFi user is moving to. The proposed solution is based on context information about the handoffs performed in the past; specifically, the system centrally stores a time-varying handoff table, which is then fed into an ARIMA predictor returning the likelihood that a given user hands off to a specific access point. The quality of the predictor is measured in terms of signaling reduction due to active scanning.
The prediction of user mobility is also addressed in [@noulas2012mining]. The authors leverage information coming from the social platform Foursquare to predict user mobility on coarse granularity. The *next check-in problem* is formulated to determine the next place in an urban environment which will be most likely visited by a user. The authors build a time-stamped dataset of “check-ins” performed by Foursquare users over a period of one month across several venues worldwide. A set of features is then defined to represent user mobility including user mobility features (e.g., number of historical visits to specific venues or categories of venues, number of historical visits that friends have done to specific venues), global mobility features (e.g., popularity of venues, distance between venues, transition frequency between couples of venues), and temporal features which measures the historical check-ins over specific time periods. Such a feature set is then used to train a supervised classification problem to predict the next check-in venue. Linear regression and M5 decision trees are used in this regard. The work is mostly speculative and does not address directly any specific application/use of the proposed mobility prediction tool.
Along the same lines, the mobility of users in urban environments is characterized in [@calabrese2010human]. Different from the previous work which only exploits social information, the authors also leverage physical information about the current position of moving users. A probabilistic model of the mobile users’ behavior is built and trained on a real life dataset of user mobility traces. A social-assisted mobility prediction model is proposed in [@bapierre2011variable], where a variable-order Markov model is developed and trained on both temporal features (i.e., when users were at specific locations) and social ones (i.e., when friends of specific users were at a given location). The accuracy of the proposed model is cross-validated on two user-mobility datasets.
| **Context** | **Applications** | **Prediction[^3]** | **Optimization** | **Remarks** |
|---|---|---|---|---|
| Geographic | Mobility prediction; multimedia streaming; broadcast; resource allocation; duty cycling | 1$^{\rm st}$ probabilistic; 2$^{\rm nd}$ regression; 3$^{\rm rd}$ time series; 4$^{\rm th}$ classification | 1) Prediction to define convex optimization problems. 2) Prediction as the optimization objective. | 1) Prediction accuracy is inversely proportional to the time scale and granularity. 2) High prediction accuracy can be obtained on long time scales if periodicity and/or trends are present. 3) Prediction is more effectively used in delay-tolerant applications. |
| Link | Channel forecast; resource allocation; network mapping; routing; multimedia streaming | 1$^{\rm st}$ regression; 2$^{\rm nd}$ time series; 3$^{\rm rd}$ probabilistic; 4$^{\rm th}$ classification | 1) Markov decision processes are used when statistical knowledge of the system is available. 2) Convex optimization is preferred when accurate forecasts are possible. | 1) Channel quality maps can be effectively used to improve networking. 2) Mobility dynamics affect the prediction effectiveness. 3) The channel is most often predicted by means of functional regression or Markovian models. |
| Traffic | Traffic analysis; resource allocation; multimedia streaming | 1$^{\rm st}$ regression; 2$^{\rm nd}$ classification; 3$^{\rm rd}$ probabilistic | 1) Maps are used to deterministically guide the optimization. 2) Convex optimization problems can be formulated to obtain bounds. | 1) Improved long-term network optimization and reconfiguration. 2) Traffic distribution is skewed with regard to both users and locations. 3) Traffic has a strong time periodicity. 4) Geo-localized information can be used as input for optimization. |
| Social | Network caching; mobility prediction; resource allocation; multimedia streaming | 1$^{\rm st}$ classification; 2$^{\rm nd}$ regression; 3$^{\rm rd}$ time series; 4$^{\rm th}$ probabilistic | 1) Formal optimization problems can be defined, but they are usually impractical to solve. 2) Game theory and heuristics are the preferable online solutions. | 1) A fraction of social information can be accurately predicted. 2) Prediction obtained from social information is usually coarse. 3) Social information prediction can effectively improve application performance. |
\[tab:prediction\_class\]
### Social-assisted radio resource allocation
The optimization of elastic traffic in the downlink of mobile radio networks is addressed in [@proebster2012context; @proebster2011context]. The key tenet is to provide the downlink scheduler with “richer” context to make better decisions in the allocation of the radio resources. Besides classical network-side context, including the cell load and the current channel quality indicator, which are widely used in the literature to steer the scheduling, the authors propose to include user-side features which generically capture the satisfaction degree of the user for the reference application. Namely, the authors introduce the concept of a *transaction*, which represents the atomic data download requested by the end user (e.g., the download of a web page, an object or a file). For each transaction and for each application, a utility function is defined capturing the user’s sensitivity with respect to the transmission delay and the expected completion time. The functional form of this utility function depends on the type of application which “generated” the transaction; as an example, the authors make the distinction between transactions from applications which are running in the foreground and the background on the user’s terminal. For the sake of presentation, a parametric logistic function is used to represent the aforementioned utility. The authors then formulate an optimization problem to maximize the sum utility across all the users and transactions in a given mobile radio cell and design a greedy heuristic to obtain a sub-optimal solution in reasonable computing time. The proposed algorithm is validated against state-of-the-art scheduling solutions (e.g., weighted scheduling) through simulation on synthetic data mimicking realistic user distributions, mobility patterns and traffic patterns.
[In order to predict the spatial traffic of base stations in a cellular network, [@yi2016spatial] applies the idea of social networks to base stations: a social graph is created between base stations based on the spatial correlation of their traffic, computed using the Pearson coefficient. Based on the topology of the social graph, the most important base stations are identified and their traces are used to predict the traffic of the entire network. The authors conclude that with the traffic data of less than 10% of the base stations, effective prediction with less than 20% mean error can be achieved.]{}
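A minimal sketch of the correlation step is given below: pairwise Pearson coefficients between per-station traffic traces define the edges of the base stations’ social graph. The traces and the threshold value are invented for illustration and are not taken from the cited paper.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equally long traffic traces."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented hourly traffic traces of three base stations
traces = {
    "bs1": [10, 12, 30, 28, 11],
    "bs2": [11, 13, 29, 30, 10],  # follows bs1 closely
    "bs3": [30, 28, 10, 11, 29],  # roughly the opposite pattern
}

# Draw an edge of the "social" graph when correlation exceeds a threshold
THRESHOLD = 0.8  # illustrative value
edges = [(a, b) for a in traces for b in traces
         if a < b and pearson(traces[a], traces[b]) > THRESHOLD]
print(edges)  # → [('bs1', 'bs2')]
```

Only the strongly correlated pair is connected; anti-correlated stations (such as `bs1` and `bs3` above) stay disconnected.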
Social-oriented techniques related to the popularity of the end users are also leveraged in [@tsiropoulos2011impact], where the authors target the performance optimization of downlink resource allocation in future generation networks. The utility maximization problem is formulated with the utility being a combination (product) of a network-oriented term (available bandwidth) and a social-oriented term (social distance). The social-oriented term is defined to be the degree centrality measure [@jackson2008social] of a specific user. The proposed problem is sub-optimally solved through a heuristic which is finally validated using synthetic data.
Summary {#sec:chal:cont}
-------
Hereafter, we summarize the main takeaways of the section in terms of application and objective for which different context types can be used. Table \[tab:prediction\_class\] provides a synthesis of the main considerations: each context is associated with its typical applications, prediction methodologies (ordered by decreasing popularity), optimization approaches and general remarks.
### Mobility prediction
It has been shown that the predictability of user mobility can potentially be very high [(93% potential predictability in user mobility as stated in [@song2010limits]), despite significant differences in travel patterns]{}. As a matter of fact, many papers study how to forecast users’ mobility by means of a variety of techniques. [For predicting trajectories, characterized by sequences of discretized locations indicated by cell or road segments, fixed-order Markov models or variable-order Markov models are the most promising tools, while for continuous trajectories, regression techniques are widely used. To enhance the prediction accuracy,]{} the most popular techniques leverage geographic information: GPS data, cell records and received signal strength are used to obtain precise and frequent data samples to locate users on a map. However, the movements of an individual are largely influenced by those of other individuals via social relations. Several papers analyze social information and location check-ins to find recurrent patterns. In this second case, a sparser dataset is usually available, which may limit the accuracy of the prediction.
### Network efficiency
Predicting and optimizing network efficiency (i.e., increasing the performance of the network while using the same amount of resources) is the most frequent objective in anticipatory networking. We found papers exploiting all four types of context to achieve this. As such, objectives and constraints cover the whole attribute space. Improving network efficiency is likely to become the main driver for including anticipatory networking solutions in next generation networks.
### Multimedia streaming
The main source of data traffic in 4G networks has been multimedia streaming and, in particular, video on demand. 5G networks are expected to continue and even increase this trend. As a consequence, several anticipatory networking solutions focus on the optimization of this service. All the context types have been used to this end and each has a different merit: social information is needed to predict when a given user is going to request a given content, combined geographic and social information allows the network to cache that content closer to where it will be required and physical channel information can be used to optimize the resource assignment.
### Network offloading
Mobility prediction can be used to handover communications between different technologies to decrease network congestion, improve user experience, reduce users’ costs and increase energy efficiency.
### Cognitive networking
Physical channel prediction can be exploited for cognitive networking and for network mapping. The former application allows secondary users to access a shared medium when primary subscribers leave resources unused; thus, predicting when this will happen greatly improves the effectiveness of the solution. The latter, instead, exploits link information to build networking maps that can provide other applications with an estimate of communication quality at a given time and place.
### Throughput- and traffic-based applications
Traffic information is usually first modeled and subsequently predicted. Traffic models and predictors are then used to improve networking efficiency by means of resource allocation, traffic shaping and network planning.
Prediction Methodologies for Anticipatory Networking {#sec:prediction}
====================================================
In this section, we present some selected prediction methods for the types of context introduced in Section \[sec:guidelines\]. The selected methods are classified into four main categories: [*time series methods*]{}, [*similarity-based classification*]{}, [*regression analysis*]{}, and [*statistical methods for probabilistic modeling*]{}. Their mathematical principles and the application to inferring and predicting the aforementioned contextual information are introduced in Sections \[subsec:Prediction\_TimeSeries\], \[subsec:Classification\], \[subsec:Regression\], and \[subsec:Probabilistic\], respectively.
The goal of this prediction handbook is to show *which methods work in which situation*. In fact, selecting the appropriate prediction method requires analyzing the prediction variables and the model constraints with respect to the application scenario (see Section \[sec:guidelines\]). This section concludes with a series of takeaways that summarize some general principles for the selection of prediction methods based on the scenario analysis.
Time Series Predictive Modeling {#subsec:Prediction_TimeSeries}
-------------------------------
A time series is a set of time-stamped data entries which allows a natural association of data collected on a regular or irregular time basis. In wireless networks, large volumes of data are stored as time series and frequently show temporal correlation. For example, the trajectory of the mobile device can be characterized by successive time-stamped locations obtained from geographical measurements; individual social behavior can be expressed through time-evolving events; traffic loads modeled in time series can be leveraged for network planning and controlling. Fig. \[fig:TS\_CellLoad\] and \[fig:TS\_AggrLoad\] illustrate two time series of per-cell and per-city aggregated uplink and downlink data traffic, where temporal correlation is clearly recognizable.
In the following, we introduce the two most widely used time series models based on linear dynamic systems: 1) autoregressive and moving average models, and 2) Kalman filters. Examples of context prediction in wireless networks are given and their extensions to nonlinear systems are briefly discussed.
### Autoregressive and moving average models {#subsubsec:TS_Stationary_Linear}
Consider a univariate time series $\{X_t: t\in{\mathcal{T}}\}$, where ${\mathcal{T}}$ denotes the set of time indices. The general model, denoted by ${{\mathrm{ARMA}}}(p,q)$, has $p$ autoregressive terms and $q$ moving-average terms, given by $$X_t = Z_t + \sum_{i = 1}^p\phi_i X_{t-i} + \sum_{j=1}^q \theta_j Z_{t-j}
\label{eqn:TS_ARMA}$$ where $Z_t$ is the white noise error process, and $\{\phi_i\}_{i=1}^p$ and $\{\theta_j\}_{j=1}^q$ are the model parameters. The model is a generalization of the simpler autoregressive (AR) and moving average (MA) models, which are obtained for $q = 0$ and $p=0$, respectively. Using the [*lag operator*]{} $L^i X_t := X_{t-i}$ the model becomes $$\phi(L)X_t = \theta(L) Z_t
\label{eqn:TS_ARMA_Lag}$$ where $\phi(L):=1-\sum_{i=1}^p\phi_i L^i$ and $\theta(L):=1 + \sum_{j=1}^q \theta_jL^j$.
The fitting procedure of such processes assumes [*stationarity*]{}. However, this property is seldom verified in practice and [*non-stationary*]{} time series need to be stationarized through differencing and logging. The ARIMA model generalizes ARMA models to the case of non-stationary time series: a non-seasonal model ${{\mathrm{ARIMA}}}(p,d,q)$ reduces after $d$ differentiations to an ${{\mathrm{ARMA}}}(p,q)$ of the form $$\phi(L)\Delta^d X_t = \theta(L) Z_t,
\label{eqn:ARIMA}$$ where $\Delta^d = (1-L)^d$ denotes the $d$th difference operator.
Numerous studies have been done on prediction of traffic load in wireless or IP backbone networks using autoregressive models. The stationarity analysis often provides important clues for selecting the appropriate model. For instance, in [@papagiannaki2003long] a low-order ARIMA model is applied to capture the non-stationary short memory process of traffic load, while in [@sadek2004multi] a Gegenbauer ARMA model is used to specify long memory processes under the assumption of stationarity. Similar models are applied to mobility- or channel-related contexts. In [@wanalertlak2011behavior], an exponentially weighted moving average, equivalent to ${{\mathrm{ARIMA}}}(0,1,1)$, is used to forecast handoffs. In [@tie2011anticipatory; @jiang2013tracking], time series models are applied to predict future signal-to-noise ratio values and user positions, respectively. If the variance of the data varies with time, as in [@zhou2005network] for data traffic, and can itself be described by an ARMA model, then the whole model is referred to as GARCH.
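As a minimal, self-contained illustration of autoregressive fitting (deliberately simpler than the models used in the cited papers), the following sketch estimates the coefficient of a zero-mean AR(1) model by least squares and produces $h$-step-ahead forecasts; the traffic trace is synthetic:

```python
import random

def fit_ar1(series):
    """Least-squares estimate of phi in the AR(1) model X_t = phi * X_{t-1} + Z_t,
    i.e., the p = 1, q = 0 special case of the ARMA model."""
    mean = sum(series) / len(series)
    x = [v - mean for v in series]  # demean the series before fitting
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return mean, num / den

def forecast_ar1(series, mean, phi, steps=1):
    """h-step-ahead forecasts: deviations from the mean shrink by phi each step."""
    last = series[-1] - mean
    return [mean + phi ** h * last for h in range(1, steps + 1)]

# Synthetic stationary trace generated with true coefficient phi = 0.8
random.seed(0)
series = [0.0]
for _ in range(2000):
    series.append(0.8 * series[-1] + random.gauss(0.0, 1.0))

mean, phi = fit_ar1(series)
print(round(phi, 2))                       # close to the true value 0.8
print(forecast_ar1(series, mean, phi, 3))  # forecasts decay toward the mean
```

In practice, model orders $(p,d,q)$ and parameters are selected with standard estimation packages rather than this toy fit, but the same principle (regressing the series on its own past) carries over.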
### Kalman filter {#subsubsec:Kalman}
Kalman filters are widely applied in time series analysis for linear dynamic systems, which track the estimated system state and its uncertainty variance. In the anticipatory networking literature, Kalman filters have been mainly adopted to model the linear dependence of the system states based on historical data.
Consider a multivariate time series $\{{\boldsymbol{\mathbf{x}}}_t\in{\mathbb{R}}^n: t\in{\mathcal{T}}\}$, the Kalman filter addresses the problem of estimating state ${\boldsymbol{\mathbf{x}}}_t$ that is governed by the linear stochastic difference equation $${\boldsymbol{\mathbf{x}}}_t={\boldsymbol{\mathbf{A}}}_t{\boldsymbol{\mathbf{x}}}_{t-1} + {\boldsymbol{\mathbf{B}}}_t{\boldsymbol{\mathbf{u}}}_{t}+{\boldsymbol{\mathbf{w}}}_{t}, \ t = 0,1,\ldots,
\label{eqn:Kalman_state}$$ where ${\boldsymbol{\mathbf{A}}}_t\in{\mathbb{R}}^{n\times n}$ expresses the state transition, and ${\boldsymbol{\mathbf{B}}}_t \in{\mathbb{R}}^{n\times l}$ relates the optional control input ${\boldsymbol{\mathbf{u}}}_t\in{\mathbb{R}}^l$ to the state ${\boldsymbol{\mathbf{x}}}_t\in{\mathbb{R}}^n$. The random variable ${\boldsymbol{\mathbf{w}}}_t\sim {\mathcal{N}}({\boldsymbol{\mathbf{0}}}, {\boldsymbol{\mathbf{Q}}}_t)$ represents a multivariate normal noise process with covariance matrix ${\boldsymbol{\mathbf{Q}}}_t\in{\mathbb{R}}^{n\times n}$. The observation ${\boldsymbol{\mathbf{z}}}_t\in{\mathbb{R}}^m$ of the true state ${\boldsymbol{\mathbf{x}}}_t$ is given by $${\boldsymbol{\mathbf{z}}}_{t}={\boldsymbol{\mathbf{H}}}_t{\boldsymbol{\mathbf{x}}}_t + {\boldsymbol{\mathbf{v}}}_t,
\label{eqn:Kalman_measure}$$ where ${\boldsymbol{\mathbf{H}}}_t\in{\mathbb{R}}^{m \times n}$ maps the true state space into the observed space. The random variable ${\boldsymbol{\mathbf{v}}}_t$ is the observation noise process following ${\boldsymbol{\mathbf{v}}}_t\sim {\mathcal{N}}({\boldsymbol{\mathbf{0}}}, {\boldsymbol{\mathbf{R}}}_t)$ with covariance ${\boldsymbol{\mathbf{R}}}_t\in{\mathbb{R}}^{m\times m}$. Kalman filters iterate between 1) predicting the system state with Eq. (\[eqn:Kalman\_state\]) and 2) updating the model according to Eq. (\[eqn:Kalman\_measure\]) to refine the previous prediction. The interested reader is referred to [@harvey1990forecasting] for more details.
In [@zaidi2005real; @yang2013broadcasting], Kalman filters are used to study users’ mobility. Wireless channel gains are tracked in [@dallanese2011channel] with kriged Kalman filtering, while the authors of [@okutani1984dynamic] adopt the technique to predict short-term traffic volume. The extended Kalman filter adapts the standard model to nonlinear systems via online Taylor expansion; according to [@pappas2014extended], this improves shadow/fading estimation.
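A scalar version of the predict/update recursion in Eqs. (\[eqn:Kalman\_state\])–(\[eqn:Kalman\_measure\]) can be sketched as follows; the noise variances and the measurement sequence are invented for illustration:

```python
def kalman_1d(zs, a=1.0, h=1.0, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_t = a*x_{t-1} + w_t, z_t = h*x_t + v_t,
    with process noise variance q and observation noise variance r."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        # 1) prediction step (state equation)
        x = a * x
        p = a * p * a + q
        # 2) update step (observation equation)
        k = p * h / (h * p * h + r)   # Kalman gain
        x = x + k * (z - h * x)
        p = (1.0 - k * h) * p
        estimates.append(x)
    return estimates

# Invented noisy observations of a channel gain that is constant at 1.0
zs = [1.2, 0.8, 1.1, 0.9, 1.05, 0.95, 1.0, 1.02]
est = kalman_1d(zs)
print(round(est[-1], 2))  # the estimate settles near the true value 1.0
```

The gain $k$ automatically balances trust in the model against trust in each new measurement, which is what makes the filter attractive for tracking slowly varying channel or mobility states.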
Similarity-based Classification {#subsec:Classification}
-------------------------------
Similarity-based classification aims to find inherent structures within a dataset. The core rationale is that similarity patterns in a dataset can be used to predict unknown data or missing features. Recommendation systems are a typical application where users give a score to items and the system tries to infer similarities among users and scores to predict the missing entries.
These techniques are unsupervised learning methods, since categories are not predetermined, but are inferred from the data. They are applied to datasets exhibiting one or more of the following properties: 1) entries of the dataset have many attributes, 2) no law is known to link the different features, and 3) no classification is available to manually label the dataset.
In what follows, we briefly review the similarity-based classification tools that have been used in the anticipatory networking literature accounted for in this survey.
### Collaborative filtering {#sec:predotherCF}
Recommendation systems usually adopt collaborative filtering to predict unknown opinions according to users’ and/or contents’ similarities. While a thorough survey is available in [@lee2012comparative], here we just introduce the main concepts related to anticipatory networking.
Collaborative filtering predicts the missing entries of an $n_c \times n_u$ matrix ${\boldsymbol{\mathbf{Y}}} \in \mathcal{A}^{n_c \times n_u}$, mapping $n_c$ contents to $n_u$ users through their opinions, which are taken from an alphabet $\mathcal{A}$ of possible ratings. Thus, the entry $y_{ik}, i\in\{1,\dots,n_c\}, k\in\{1,\dots,n_u\}$ expresses how much user $k$ likes content $i$. An auxiliary matrix ${\boldsymbol{\mathbf{R}}} \in \{0,1\}^{n_c \times n_u}$ expresses whether user $k$ evaluated content $i$ ($r_{ik}=1$) or not ($r_{ik}=0$).
To predict the missing entries of ${\boldsymbol{\mathbf{Y}}}$ the feature learning approach exploits a set of $n_f$ features to represent contents’ and users’ similarities and defines two matrices ${\boldsymbol{\mathbf{X}}} \in [0,1]^{n_c \times n_f}$ and ${\boldsymbol{\mathbf{\Theta}}}\in\mathcal{A}^{n_u \times n_f}$, whose entries $x_{ij}$ and $\theta_{kj}$ represent how much content $i$ is represented by feature $j$ and how high user $k$ would rate a content completely defined by feature $j$, respectively. The new matrices aim to map ${\boldsymbol{\mathbf{Y}}}$ in the feature space and they can be computed by: $$\begin{aligned}
\label{eq:cfopt}
& \underset{{\boldsymbol{\mathbf{X}}},{\boldsymbol{\mathbf{\Theta}}}}{\textrm{argmin}} & \sum_{i,k:r_{ik}=1} ( {\boldsymbol{\mathbf{x}}}_{i \ast} {\boldsymbol{\mathbf{\theta}}}_{k \ast}^T - y_{ik})^2,\end{aligned}
$$ where ${\boldsymbol{\mathbf{x}}}_{i \ast}:= ({{\mathrm{col}}}_i {\boldsymbol{\mathbf{X}}}^T)^T$ denotes the $i$-th row of matrix ${\boldsymbol{\mathbf{X}}}$. Note that in Eq. (\[eq:cfopt\]) the regularization terms are omitted. Solving Eq. (\[eq:cfopt\]) amounts to obtaining a matrix $\tilde{{\boldsymbol{\mathbf{Y}}}} = {\boldsymbol{\mathbf{X}}}{\boldsymbol{\mathbf{\Theta}}}^T$ which best approximates ${\boldsymbol{\mathbf{Y}}}$ according to the available information ($i,k:r_{ik}=1$). Finally, $\tilde{y}_{ik} = {\boldsymbol{\mathbf{x}}}_{i\ast}{\boldsymbol{\mathbf{\theta}}}_{k\ast}^T$ predicts how user $k$ with parameters ${\boldsymbol{\mathbf{\theta}}}_{k\ast}$ rates content $i$ having feature vector ${\boldsymbol{\mathbf{x}}}_{i\ast}$.
Other applications of collaborative filtering are, for instance, network caching optimization [@bastug2014think; @dutta2015predictive], where communication efficiency is optimized by storing contents where and when they are predicted to be consumed. Similarly, location-based services [@noulas2012mining] predict where and what to serve to a given user.
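A toy instance of the feature-learning objective in Eq. (\[eq:cfopt\]) can be solved by plain gradient descent, as sketched below; the rating matrix, learning rate and feature count are illustrative choices, and real systems add regularization terms:

```python
import random

def factorize(Y, R, n_f=2, lr=0.02, iters=5000, seed=1):
    """Gradient descent on sum over observed (i, k) of (x_i . theta_k - y_ik)^2."""
    rng = random.Random(seed)
    n_c, n_u = len(Y), len(Y[0])
    X = [[rng.uniform(-0.5, 0.5) for _ in range(n_f)] for _ in range(n_c)]
    T = [[rng.uniform(-0.5, 0.5) for _ in range(n_f)] for _ in range(n_u)]
    for _ in range(iters):
        for i in range(n_c):
            for k in range(n_u):
                if R[i][k]:  # only observed ratings contribute to the loss
                    err = sum(X[i][j] * T[k][j] for j in range(n_f)) - Y[i][k]
                    for j in range(n_f):
                        X[i][j], T[k][j] = (X[i][j] - lr * err * T[k][j],
                                            T[k][j] - lr * err * X[i][j])
    return X, T

def predict(X, T, i, k):
    """Predicted rating of content i by user k."""
    return sum(a * b for a, b in zip(X[i], T[k]))

# 3 contents x 3 users; user 2's rating of content 0 is unobserved (r_02 = 0).
# Content 0 behaves like twice content 1 on the observed ratings.
Y = [[4.0, 5.0, 0.0],
     [2.0, 2.5, 1.0],
     [1.0, 1.0, 1.0]]
R = [[1, 1, 0],
     [1, 1, 1],
     [1, 1, 1]]
X, T = factorize(Y, R)
print(round(predict(X, T, 0, 2), 1))  # filled-in rating, close to 2.0
```

The learned feature vectors play the role of the rows of ${\boldsymbol{\mathbf{X}}}$ and ${\boldsymbol{\mathbf{\Theta}}}$, and the missing entry is reconstructed from their inner product.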
### Clustering {#sec:predotherClus}
Clustering techniques are meant to group elements that share similar characteristics. The following provides an introduction to $K$-means, which is among the most commonly-used clustering techniques in anticipatory networking. The interested reader is referred to [@xu2005survey] for a complete review.
$K$-means splits a given dataset into $K$ groups without any prior information about the group structure. The basic idea is to associate each observation point from a dataset $\mathcal{X} := \{{\boldsymbol{\mathbf{x}}}_i\in \mathbb{R}^n: i = 1, \ldots, M\}$ to one of the centroids in the set ${\mathcal{M}} := \{{\boldsymbol{\mathbf{\mu}}}_j \in \mathbb{R}^n: j=1,\dots,K\}$. The centroids are optimized by minimizing the intra-cluster sum of squares (the sum of squared distances of each point to the centroid of its cluster), given by $$\begin{aligned}
\label{eq:clusteropt}
& \underset{{\mathcal{C}},{\mathcal{M}}}{\textrm{minimize}} & \sum_{j=1}^K \sum_{i=1}^M c_{ij}\|{\boldsymbol{\mathbf{x}}}_i - {\boldsymbol{\mathbf{\mu}}}_j \|^2,\end{aligned}$$ where ${\mathcal{C}} := \{c_{ij} \in \{0,1\}: i = 1,\dots,M, j=1,\dots,K\}$ associates entry ${\boldsymbol{\mathbf{x}}}_i$ to centroid ${\boldsymbol{\mathbf{\mu}}}_j$. No entry can be associated to multiple centroids ($\sum_{j=1}^K c_{ij} = 1$ for all $i = 1,\dots,M$). Clustering is applied in anticipatory networking to build a data-driven link model [@tarsa2015taming], to find similarities within vehicular paths [@froehlich2008route], to identify social events [@samulevicius2015most] that might impact network performance, and to identify device types [@shafiq2011characterizing].
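The alternating assignment/update iteration that minimizes the objective above (Lloyd's algorithm) can be sketched as follows; the two-blob toy dataset and the deterministic initialization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# two well-separated 2-D blobs (100 points total)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])

K = 2
mu = X[[0, -1]].copy()                    # deterministic init: one point per blob
for _ in range(10):                       # Lloyd's iterations
    d = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # squared distances
    c = d.argmin(1)                       # assignment step: nearest centroid
    mu = np.array([X[c == j].mean(0) for j in range(K)])  # update step: cluster means
```

After convergence the centroids sit near the blob centers at the origin and at $(5,5)$.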
### Decision Trees {#sec:predotherDT}
A supervised version of clustering is [*decision tree learning*]{} (the interested reader is referred to [@murthy1998automatic] for a survey on the topic). Assuming that each input observation is mapped to a consequence on its target value (such as reward, utility, cost, etc.), the goal of decision tree learning is to build a set of rules to map the observations to their target values. Each decision branches the tree into different paths that lead to leaves representing the class labels. With prior knowledge, decision trees can be exploited for location-based services [@noulas2012mining], for identifying trajectory similarities [@monreale2009wherenext], and for quality prediction of multimedia streams [@sekar2013developing]. For continuous target variables, regression trees can be used to learn trends in network performance [@xu2013proteus].
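As a minimal illustration of the splitting criterion behind regression trees, the sketch below fits a one-level tree (a stump) by exhaustively searching the threshold that minimizes the intra-leaf sum of squares; the piecewise-constant target is a made-up example, and a full tree would apply this search recursively to each leaf.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 200)                # single continuous feature
y = np.where(x < 0.4, 1.0, 3.0)           # piecewise-constant target

best = None
for s in np.unique(x):                    # candidate thresholds = observed values
    left, right = y[x < s], y[x >= s]
    if len(left) == 0 or len(right) == 0:
        continue
    # sum of squared errors of the two children around their means
    sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
    if best is None or sse < best[0]:
        best = (sse, s, left.mean(), right.mean())

sse, split, pred_left, pred_right = best
```

The search recovers a split near the true breakpoint at $0.4$ with zero residual error.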
![Example of a functional dataset: WiFi traffic in Rome depending on hour of the day. Data source from Telecom Italia’s Big Data Challenge [@TelecomItalia].[]{data-label="fig:FDA_wifiload"}](./fig/FDA_Aggr_UplinkDownlinkLoad_inGB){width=".9\columnwidth"}
Regression Analysis {#subsec:Regression}
-------------------
When the interest lies in understanding the relationship between different variables, regression analysis is used to predict dependent variables from a number of independent variables by means of so-called regression functions. In the following, we introduce three regression techniques, which are able to capture complex nonlinear relationships, namely [*functional regression*]{}, [*support vector machines*]{} and [*artificial neural networks*]{}.
### Functional regression {#subsubsec:functional}
Functional data often arise from measurements, where each point is expressed as a function over a physical continuum (e.g., Fig. \[fig:FDA\_wifiload\] illustrates the example of aggregated WiFi traffic as a function of the hour of the day). Functional regression has two interesting properties: smoothness allows to study derivatives, which may reveal important aspects of the processes generating the data, and the mapping between original data and the functional space may reduce the dimensionality of the problem and, as a consequence, the computational complexity [@ramsay2006functional]. The commonly encountered form of function prediction regression model (scalar-on-function) is given by [@ramsay1991some]: $$Y_i = B_0 + \int X_i(z)B(z)dz + E_i
\label{eqn:FLM_2}$$ where $Y_i, i = 1, \ldots, M$ is a continuous response, $X_i(z)$ is a functional predictor over the variable $z$, $B(z)$ is the functional coefficient, $B_0$ is the intercept, and $E_i$ is the residual error.
Functional regression methods are applied in [@sayeed2015cloud] to predict traffic-related metrics (e.g., throughput, modulation and coding scheme, and used resources) showing that cloud analytics of short-term metrics is feasible. In [@mozer2000predicting], functional regression is used to study churn rate of mobile subscribers to maximize the carrier profitability.
### Support vector machines {#sec:svm}
The support vector machine (SVM) is a supervised learning technique that constructs a hyperplane or set of hyperplanes (linear or nonlinear) in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks. In this survey we introduce the SVM for classification; the same principle is used for regression. Consider a training dataset $\{({\boldsymbol{\mathbf{x}}}_i,y_i): {\boldsymbol{\mathbf{x}}}_i\in \mathbb{R}^n,y_i\in\{-1,1\}, i = 1, \ldots, M\}$, where $\mathbf{x}_i$ is the $i$-th training vector and $y_i$ the label of its class. First, let us assume that the data is linearly separable and define the linear separating hyperplane as ${\boldsymbol{\mathbf{w}}}\cdot{\boldsymbol{\mathbf{x}}} - b = 0$, where ${\boldsymbol{\mathbf{w}}}\cdot{\boldsymbol{\mathbf{x}}}$ is the Euclidean inner product. The optimal hyperplane is the one that maximizes the [*margin*]{} (i.e., distance from the hyperplane to the instances closest to it on either side), which can be found by solving the following optimization problem: $$\begin{aligned}
\label{eq:svm}
\nonumber
&\text{minimize}& \frac{1}{2}||\mathbf{w}||^2 \\
&\text{subject to}&y_i(\mathbf{x}_i\cdot\mathbf{w} + b) - 1\geq0~\forall i \in \{1,\dots,M\}.\end{aligned}$$ Fig. \[fig:svm:lin\] shows an example of linear classifier separating two classes in $\mathbb{R}^2$.
If the data is not linearly separable, the training points are projected to a high-dimensional space $\mathcal{H}$ through a nonlinear transformation ${\boldsymbol{\mathbf{\phi}}}:R^n\rightarrow\mathcal{H}$. Then, a linear model in the new space is built, which corresponds to a nonlinear model in the original space. Since the solution of (\[eq:svm\]) consists of inner products of training data ${\boldsymbol{\mathbf{x}}}_i\cdot{\boldsymbol{\mathbf{x}}}_j$, for all $i,j$, in the new space the solution is in the form of ${\boldsymbol{\mathbf{\phi}}}({\boldsymbol{\mathbf{x}}}_i)\cdot {\boldsymbol{\mathbf{\phi}}}({\boldsymbol{\mathbf{x}}}_j)$. The [*kernel trick*]{} is applied to replace the inner product of basis functions by a [*kernel function*]{} $K({\boldsymbol{\mathbf{x}}}_i,{\boldsymbol{\mathbf{x}}}_j) = {\boldsymbol{\mathbf{\phi}}}({\boldsymbol{\mathbf{x}}}_i)\cdot {\boldsymbol{\mathbf{\phi}}}({\boldsymbol{\mathbf{x}}}_j)$ between instances in the original input space, without explicitly building the transformation ${\boldsymbol{\mathbf{\phi}}}$.
The Gaussian kernel $K({\boldsymbol{\mathbf{x}}},{\boldsymbol{\mathbf{y}}}) := \exp(-\gamma||{\boldsymbol{\mathbf{x}}}-{\boldsymbol{\mathbf{y}}}||^2)$ is one of the most widely used kernels in the literature. For example, it is used in [@chen2013predicting] to predict user mobility. In [@kasparick2015kernel], the authors propose an algorithm for reconstructing coverage maps from path-loss measurements using a kernel method. Nevertheless, choosing an appropriate kernel for a given prediction task remains one of the main challenges.
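The kernel trick can be verified directly for a kernel whose feature map is finite-dimensional. The sketch below checks that the degree-2 polynomial kernel on $\mathbb{R}^2$ equals an explicit inner product ${\boldsymbol{\mathbf{\phi}}}({\boldsymbol{\mathbf{x}}})\cdot{\boldsymbol{\mathbf{\phi}}}({\boldsymbol{\mathbf{y}}})$, and also evaluates the Gaussian kernel (whose feature space is infinite-dimensional); this demonstrates the identity only, not a full SVM solver.

```python
import numpy as np

rng = np.random.default_rng(4)
x, y = rng.normal(size=2), rng.normal(size=2)

def phi(v):
    # explicit feature map of the degree-2 polynomial kernel on R^2:
    # (x . y)^2 = phi(x) . phi(y)
    a, b = v
    return np.array([a * a, b * b, np.sqrt(2) * a * b])

K_direct = (x @ y) ** 2                   # kernel evaluated in input space
K_feature = phi(x) @ phi(y)               # inner product in feature space

def gaussian_kernel(u, v, gamma=1.0):
    # Gaussian (RBF) kernel; note the negative exponent
    return np.exp(-gamma * np.sum((u - v) ** 2))
```

The two evaluations of the polynomial kernel agree to machine precision, which is exactly what allows an SVM to work in the high-dimensional space without ever constructing ${\boldsymbol{\mathbf{\phi}}}$.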
### Artificial neural networks {#subsubsec:neuralNetworks}
An artificial neural network (ANN) is a supervised machine learning solution for both regression and classification. It is a network of nodes, or *neurons*, grouped into three layers (input, hidden and output), which allows for nonlinear classification. Ideally, it can achieve zero training error.
Consider a training dataset $\{({\boldsymbol{\mathbf{x}}}_i,y_i): {\boldsymbol{\mathbf{x}}}_i\in \mathbb{R}^n, i = 1, \ldots, M\}$. Each hidden node $h_l$ computes a so-called logistic function of the form $h_l = 1/(1+\exp(-{\boldsymbol{\mathbf{\omega}}}_l\cdot\mathbf{x}))$, where ${\boldsymbol{\mathbf{\omega}}}_l$ is a weight vector. The outputs of the hidden nodes are processed by the output nodes to approximate ${\boldsymbol{\mathbf{y}}}$. These nodes use linear and logistic functions for regression and classification, respectively. In the linear case, the approximated output is represented as: $${\boldsymbol{\mathbf{\hat{y}}}}=\sum_{l=1}^Lh_lv_l =\sum_{l=1}^L\frac{1}{1+\exp(-{\boldsymbol{\mathbf{\omega}}}_l\cdot\mathbf{x})}v_l,$$ where $L$ is the number of hidden nodes and $v_l$ is the $l$-th weight of the output layer. The training of an ANN can be performed by means of the *backpropagation* method that finds weights for both layers to minimize the mean squared error between the training labels $y$ and their approximations $\hat{y}$. In the anticipatory networking literature, ANNs have been used for example to predict mobility in mobile ad-hoc networks [@ghouti2013mobility; @kaaniche2010mobility].
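A minimal numpy sketch of the single-hidden-layer network above, trained by backpropagation (plain gradient descent on the mean squared error); the dataset, layer width, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 2))
y = np.tanh(X @ np.array([1.0, -2.0]))    # nonlinear target to regress

L = 8                                     # number of hidden nodes
W = rng.normal(scale=0.5, size=(L, 2))    # hidden weights omega_l
v = rng.normal(scale=0.5, size=L)         # output weights v_l

def forward(X):
    H = 1.0 / (1.0 + np.exp(-X @ W.T))    # logistic hidden units h_l
    return H, H @ v                       # y_hat = sum_l h_l v_l

losses, lr = [], 0.2
for _ in range(300):                      # backpropagation via gradient descent
    H, y_hat = forward(X)
    e = y_hat - y
    losses.append(float((e ** 2).mean()))
    grad_v = 2 * H.T @ e / len(X)
    grad_W = 2 * ((e[:, None] * v) * H * (1 - H)).T @ X / len(X)
    v -= lr * grad_v
    W -= lr * grad_W
```

The recorded training loss decreases over the iterations, illustrating (not proving) convergence of the fit.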
For both SVMs and ANNs, as for other supervised learning approaches, no prior knowledge about the system is required, but a large training set has to be acquired for parameter setting in the predictive model. A careful analysis needs to be performed while processing the training data in order to avoid both overfitting and underfitting.
Statistical Methods for Probabilistic Forecasting {#subsec:Probabilistic}
-------------------------------------------------
Probabilistic forecasting involves the use of information at hand to make statements about the likely course of future events. In the following subsections, we introduce two probabilistic forecasting techniques: [*Markovian models*]{} and [*Bayesian inference*]{}.
### Markovian models {#subsubsec:Markovian}
These models can be applied to any system for which state transitions only depend on the current state. In the following we briefly discuss the basic concepts of discrete and continuous time Markov chains (DTMCs and CTMCs) and their respective applications to anticipatory networking.
A DTMC is a discrete time stochastic process $X_n (n\in\mathbb{N})$, where a state $X_n$ takes a finite number of values from a set $\mathcal{X}$ in each time slot. The Markovian property for a transition from any time slot $k$ to $k+1$ is expressed as follows: $$P(X_{k+1} = j|X_{k}=i) = p_{ij}(k).
\label{eq:markov}$$
For a stationary DTMC, the subscript $k$ is omitted and the transition matrix $\mathbf{P}$, where $p_{ij}$ represents the transition probability from state $i$ to state $j$, completely describes the model. Empirical measurements on mobility and traffic evolution can be accurately predicted using a DTMC with low computational complexity [@chon2011mobility; @barth2011mobility; @bapierre2011variable; @chon2012evaluating; @shafiq2011characterizing]. However, obtaining the transition probabilities of the system requires a variable training period, which depends on the prediction goal. In practice, the data collection period can be in the order of one week [@shafiq2011characterizing] or even multiple weeks [@nicholson2008breadcrumbs; @barth2012combining].
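Estimating the transition matrix $\mathbf{P}$ from measurements reduces to counting observed transitions and normalizing each row. A sketch, with a made-up state sequence standing in for, e.g., discretized user locations:

```python
import numpy as np

# toy observed state sequence (e.g., discretized user locations over time)
seq = [0, 1, 1, 2, 0, 1, 2, 2, 0, 1]
n = 3                                     # number of states

counts = np.zeros((n, n))
for i, j in zip(seq[:-1], seq[1:]):       # count observed transitions i -> j
    counts[i, j] += 1
P = counts / counts.sum(axis=1, keepdims=True)   # row-normalize to get p_ij

next_state = int(P[0].argmax())           # most likely successor of state 0
```

Each row of the estimated matrix sums to one, and one-step prediction amounts to reading off the largest entry of the current state's row.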
A DTMC assumes that the time the system spends in each state is equal for all states. This time depends on the prediction application and can range from a few hundred milliseconds to predict wireless channel quality [@hosseini2015not], to tens of seconds for user mobility prediction [@nicholson2008breadcrumbs; @barth2011mobility], to hours for Internet traffic [@shafiq2011characterizing]. For tractability reason, the state space is often compressed by means of simple heuristics [@barth2012combining; @beister2014predicting; @nicholson2008breadcrumbs], $K$-means clustering [@hosseini2015not; @bapierre2011variable], equal probability classification [@beister2014predicting], and density-based clustering [@bapierre2011variable].
Eq. (\[eq:markov\]) defines a first order DTMC and can be extended to the $l$-th order (i.e., transition probabilities depend on the $l$ previous states). By using higher orders, DTMCs can increase the accuracy of the prediction at the expense of a longer training time and an increased computational complexity [@chon2012evaluating; @bapierre2011variable; @barth2011mobility].
If the sojourn time of each state is relevant to the prediction, the system can be modeled as a continuous time Markov chain (CTMC). The Markovian property is preserved in CTMCs when the sojourn time is exponentially distributed, as in [@gidofalvi2012and]. When the sojourn time has an arbitrary distribution, the process becomes a Markov renewal process as described in [@lee2006modeling; @abu2010application].
If the transition probabilities cannot be directly measured, but only the output of the system is quantifiable (dependent on the state), hidden Markov models make it possible to map the output state space to the unobservable model that governs the system. As an example, the inter-download times of video segments are predicted in [@beister2014predicting], where the output sequences are the inter-download times of the already downloaded segments and the states are the instants of the next download request.
[|p[1.8cm]{}|p[2cm]{}V[2.5]{}p[1.65cm]{}|p[1.65cm]{}|p[1.65cm]{}V[2.5]{}p[1.1cm]{}|p[1.4cm]{}|p[1.2cm]{}|p[1.2cm]{}|]{} & &\
Class & Methodology & Dimension & Granularity & Range & Type & Linearity & Side Info. & Quality\
& ARIMA & univariate & M/L & S & data & Y & N & weak\
& Kalman filter & multivariate & M/L & S & data & Y & N & weak\
& References &\
& Collab. filtering & multivariate & L & M/L & data & Y & both & robust\
& Clustering & multivariate & L & M/L & data & both & both & robust\
& Decision trees & multivariate & L & any & data & both & Y & robust\
& References &\
& Functional & multivariate & any & M/L & models & both & Y & robust\
& SVM & multivariate & any & any & both & both & both & weak\
& ANN & multivariate & any & any & data & both & both & weak\
& References &\
& Markovian & multivariate & M/L & any & both & both & both & weak\
& Bayesian & multivariate & any & any & both & both & Y & weak\
& &\
& &\
\[tab:ObjectiveAndConstaints\_predict\]
### Bayesian inference {#subsubsec:Bayesian}
This approach allows one to make statements about what is unknown, by conditioning on what is known. Bayesian prediction can be summarized in the following steps: 1) define a [*model*]{} that expresses qualitative aspects of our knowledge but has unknown parameters, 2) specify a [*prior*]{} probability distribution for the unknown parameters, 3) compute the [*posterior*]{} probability distribution for the parameters, given the observed data, and 4) make predictions by averaging over the posterior distribution.
Given a set of observed data ${\mathcal{D}}:=\{({\boldsymbol{\mathbf{x}}}_i, {\boldsymbol{\mathbf{y}}}_i): i = 1, \ldots, M\}$ consisting of a set of input samples ${\mathcal{X}}: = \{{\boldsymbol{\mathbf{x}}}_i\in{\mathbb{R}}^p: i = 1, \ldots, M\}$ and a set of output samples ${\mathcal{Y}} := \{{\boldsymbol{\mathbf{y}}}_i\in{\mathbb{R}}^q: i = 1, \ldots, M\}$, inference in Bayesian models is based on the [*posterior distribution*]{} over the parameters, given by the [*Bayes’ rule*]{}: $$\begin{aligned}
p({\boldsymbol{\mathbf{\theta}}}|{\mathcal{D}}) & = \frac{p({\mathcal{Y}}|{\mathcal{X}}, {\boldsymbol{\mathbf{\theta}}})p({\boldsymbol{\mathbf{\theta}}})}{p({\mathcal{Y}}|{\mathcal{X}})}\propto p({\mathcal{Y}}|{\mathcal{X}}, {\boldsymbol{\mathbf{\theta}}})p({\boldsymbol{\mathbf{\theta}}}), \label{eqn:Bayes}
\end{aligned}$$ where ${\boldsymbol{\mathbf{\theta}}}$ is the unknown parameter vector. Two recent works adopting the Bayesian framework are [@muppirisetty2015spatial] and [@liao2015channel]. The former focuses on spatial prediction of the wireless channel, building a $2$D non-stationary random field accounting for pathloss, shadowing and multipath. The latter exploits spatial and temporal correlation to develop a general prediction model for the channel gain of mobile users.
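The four steps above can be sketched with the simplest conjugate pair, a Beta prior on a Bernoulli parameter (e.g., the probability that a user requests a given content item); the prior and the observation sequence are made up for illustration.

```python
# Beta-Bernoulli conjugate update: infer the probability p that a user
# requests a given content item (all numbers invented for illustration)
alpha, beta = 1.0, 1.0                    # step 2: uniform Beta(1, 1) prior on p
observations = [1, 1, 0, 1, 1, 0, 1, 1]   # step 1: request (1) / no request (0)

for obs in observations:                  # step 3: posterior is Beta(alpha, beta)
    alpha += obs
    beta += 1 - obs

# step 4: predict by averaging over the posterior (here, its mean)
posterior_mean = alpha / (alpha + beta)
```

After six requests out of eight observations the predictive request probability is $7/10$, and the full posterior also quantifies the uncertainty of that prediction.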
Summary {#subsec:GeneralPrinciple}
-------
Hereafter, we provide some guidelines for selecting the appropriate prediction methods depending on the application scenario or context of interest.
### Applications and data
The predicted context is the most important information that drives decision making in anticipatory optimization problems (see Section \[sec:optimization\]). Thus, the selection of the prediction method shall take into consideration the objectives of the application and the constraints imposed by the available data.
#### Choosing the outputs
Applications define the properties of the predicted variables, such as dimension, granularity, accuracy, and range. For example, large granularity or high data aggregation (such as frequently visited locations or social behavior patterns) is best dealt with by similarity-based classification methods, which provide sufficiently accurate prediction without the complexity of other model-based regression techniques.
#### System model and data
The application environment is as important as the outputs, since it determines the constraints of modeling. Often, an accurate analysis of the scenario might highlight linearity, deterministic and/or causal laws among the variables that can further improve the prediction accuracy. Moreover, the quality of the dataset heavily affects the prediction accuracy. Different methods exhibit different levels of robustness to noisy data.
### Guidelines for selecting methods
To choose the correct tool among the aforementioned set, we study the rationale for adopting each of them in the literature and derive the following practical guidelines.
#### Model-based methods
When a physical model exists, model-based regression techniques based on closed-form expressions can be used to obtain an accurate prediction. They are usually preferable for long-term forecast and exhibit good resilience to poor data quality.
#### Time series-based methods
These are the most convenient tools when the information is abundant and shows strong temporal correlation. Under these conditions, time series methods provide simple means to obtain multiple scale prediction of moderate to high precision.
#### Causal methods
If the data exhibits large and fast variations, causality laws can be key to obtain robust predictions. In particular, if a causal relationship can be observed between the variables of interest and the other observable data, causal models usually outperform pure data-driven models.
#### Probabilistic models
If the physical model of the prediction variable is either unavailable or too complex to be used, probabilistic models offer robust prediction based on the observation of a sufficient amount of data. In addition, probabilistic methods are capable of quantifying the uncertainty of the prediction, based on the probability density function of the predicted state.
### Prediction summary
Table \[tab:ObjectiveAndConstaints\_predict\] characterizes each prediction method with respect to [*properties of the context*]{} and [*constraints*]{} presented in Section \[sec:guidelines\]. Note that the methods for predicting a multivariate process can be applied to univariate processes without loss of generality. The granularity of variables and the prediction range are described using qualitative attributes such as [**S**]{}hort, [**M**]{}edium, [**L**]{}arge, and [**any**]{} instead of explicit values. For example, for the time series of traffic load per cell, S, M and L time scales are generally defined by minutes, tens of minutes and hours, respectively, while for the time series of channel gain, they can be seen as milliseconds, hundreds of milliseconds and seconds, respectively. The sixth column reports the prediction type, that can be driven by [**data**]{}, [**models**]{} or [**both**]{}. Linearity indicates whether it is required ([**Y**]{}) or not ([**N**]{}) or applicable in [**both**]{} cases. The side information column states whether out-of-band information can ([**both**]{}), cannot ([**N**]{}) or must ([**Y**]{}) be used to build the model. Finally, the quality column reports whether the predictor is [**weak**]{} or [**robust**]{} against insufficient or unreliable datasets.
Optimization Techniques for Anticipatory Networking {#sec:optimization}
===================================================
This section identifies the main optimization techniques adopted by anticipatory networking solutions to achieve their objectives. Disregarding the particular domain of each work, the common denominator is to leverage some future knowledge obtained by means of prediction to drive the system optimization. How this optimization is performed depends both on the ultimate objectives and how data are predicted and stored.
In general, we found two main strategies for optimization: (1) adopting a well-known optimization framework to model the problem and (2) designing a novel solution (most often) based on heuristic considerations about the problem. The two strategies are not mutually exclusive and often, when known approaches lead to too complex or impractical solutions, they are mixed in order to provide feasible approximation of the original problem.
Heuristic approaches usually consist of (1) algorithms that allow for fast computation of an approximation of the solution of a more complex problem (e.g., convex optimization) and (2) greedy approaches that can be proven optimal under some set of assumptions. Both approaches trade optimality for complexity and most often are able to obtain performance quite close to the optimal one. However, heuristic approaches are tailored to the specific application and are usually difficult to be generalized or to be adapted for different scenarios, thus they cannot be directly applied to new applications if the new requirements do not match those of the original scenario.
In what follows, we focus on optimization methods only and we will provide some introductory descriptions of the most relevant ones used for anticipatory networking. The objective is to provide the reader with a minimum set of tools to understand the methodologies and to highlight the main properties and applications.
Convex Optimization
-------------------
Convex optimization is a field that studies the problem of minimizing a convex function over convex sets. The interested reader can refer to [@boyd2004convex] for convex optimization theory and algorithms. Hereafter, we will adopt Boyd’s notation [@boyd2004convex] to introduce definitions and formulations that frequently appear in anticipatory networking papers. The inputs are often referred to as the optimization variables of the problem and defined as the vector ${\boldsymbol{\mathbf{x}}} = (x_1,\dots,x_n)$. In order to compute the best configuration or, more precisely, to optimize the variables, an objective is defined: this usually corresponds to minimizing a function of the optimization variables, $f_0:\mathbb{R}^n \rightarrow \mathbb{R}$. The feasible set of input configurations is usually defined through a set of $m$ constraints $f_i(x) \leq b_i$, $i = 1,\dots,m$, with $f_i:\mathbb{R}^n \rightarrow \mathbb{R}$. The general formulation of the problem is $$\begin{aligned}
\label{eq:genopt}
& \textrm{minimize} & f_0({\boldsymbol{\mathbf{x}}}) \nonumber \\
& \textrm{subject to} & f_i({\boldsymbol{\mathbf{x}}}) \leq b_i, \;\; i = 1,\dots,m.\end{aligned}$$
The solution to the optimization problem is an optimal vector ${\boldsymbol{\mathbf{x}}}^*$ that provides the smallest value of the objective function, while satisfying all the constraints. The convexity property (i.e., objective and constraint functions satisfy $f_i(a{\boldsymbol{\mathbf{x}}} + (1-a){\boldsymbol{\mathbf{y}}}) \leq af_i({\boldsymbol{\mathbf{x}}}) + (1-a)f_i({\boldsymbol{\mathbf{y}}})$ for all ${\boldsymbol{\mathbf{x}}},{\boldsymbol{\mathbf{y}}} \in \mathbb{R}^n$ and $a \in [0,1]$) can be exploited in order to derive efficient algorithms that allow for fast computation of the optimal solution. Furthermore, if the optimization function and the constraints are linear, i.e., $f_i(a{\boldsymbol{\mathbf{x}}} + b{\boldsymbol{\mathbf{y}}}) = af_i({\boldsymbol{\mathbf{x}}}) + bf_i({\boldsymbol{\mathbf{y}}})$ for all ${\boldsymbol{\mathbf{x}}},{\boldsymbol{\mathbf{y}}} \in \mathbb{R}^n$ and $a,b \in \mathbb{R}$, the problem belongs to the class of *linear optimization*. For this class, highly efficient solvers exist, thanks to their inherently simple structure. Within the linear optimization class, three subclasses are of particular interest for anticipatory networking: least-squares problems, linear programs and mixed-integer linear programs.
*Least-squares* problems can be thought of as distance minimization problems. They have no constraints ($m=0$) and their general formulation is: $$\begin{aligned}
\label{eq:ls}
\textrm{minimize} & f_0({\boldsymbol{\mathbf{x}}}) = ||{\boldsymbol{\mathbf{A}}}{\boldsymbol{\mathbf{x}}} - {\boldsymbol{\mathbf{b}}}||_2^2,\end{aligned}$$ where ${\boldsymbol{\mathbf{A}}} \in \mathbb{R}^{k \times n}$, with $k \geq n$, and $||{\boldsymbol{\mathbf{x}}}||_2$ is the Euclidean norm. Notably, problems of this class have an analytical solution ${\boldsymbol{\mathbf{x}}}=({\boldsymbol{\mathbf{A}}}^T{\boldsymbol{\mathbf{A}}})^{-1}{\boldsymbol{\mathbf{A}}}^T{\boldsymbol{\mathbf{b}}}$ (where superscript $^T$ denotes the transpose) derived from reducing the problem to the set of linear equations ${\boldsymbol{\mathbf{A}}}^T{\boldsymbol{\mathbf{A}}}{\boldsymbol{\mathbf{x}}} = {\boldsymbol{\mathbf{A}}}^T{\boldsymbol{\mathbf{b}}}$.
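The closed-form normal-equations solution can be checked numerically against a library least-squares solver; the problem sizes and the true coefficient vector are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(20, 3))              # k = 20 >= n = 3
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                            # consistent right-hand side

# closed form from the normal equations  A^T A x = A^T b
x_ne = np.linalg.solve(A.T @ A, A.T @ b)
# library least-squares solver for comparison
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Both routes recover the same minimizer; in practice `lstsq` (based on an orthogonal factorization) is preferred for ill-conditioned ${\boldsymbol{\mathbf{A}}}$.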
*Linear programming* (LP) problems are characterized by linear objective function and constraints and are written as $$\begin{aligned}
\label{eq:lpgen}
& \textrm{minimize} & {\boldsymbol{\mathbf{c}}}^T{\boldsymbol{\mathbf{x}}} \nonumber \\
& \textrm{subject to} & {\boldsymbol{\mathbf{A}}}^T{\boldsymbol{\mathbf{x}}} \leq {\boldsymbol{\mathbf{b}}},\end{aligned}$$ where ${\boldsymbol{\mathbf{c}}} \in \mathbb{R}^n$, ${\boldsymbol{\mathbf{A}}} \in \mathbb{R}^{n \times m}$ and ${\boldsymbol{\mathbf{b}}} \in \mathbb{R}^m$ are the parameters of the problem. Although there is no analytical closed-form solution to LP problems, a variety of efficient algorithms are available to compute the optimal vector ${\boldsymbol{\mathbf{x}}}^*$. When the optimization variable is a vector of integers $x \in \mathbb{Z}^n$, the class of problems is called *integer linear programming* (ILP), while the class of *mixed-integer linear programming* (MILP) allows both integer and real variables to co-exist. These last two classes of problems can be shown to be NP-hard (while LP is P-complete) and their solution often implies combinatorial aspects. See [@schrijver1998theory] for more details on integer optimization.
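For a toy two-variable LP the optimum lies at a vertex of the feasible polygon, so it can be found by enumerating intersections of constraint pairs. This is a pedagogical sketch of LP geometry, not how production solvers (simplex or interior-point methods) work, and the resource-allocation numbers are invented.

```python
import numpy as np
from itertools import combinations

# toy 2-variable LP: maximize x1 + 2*x2 (i.e., minimize c^T x) subject to Ax <= b
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0],                 # x1 + x2 <= 4
              [1.0, 0.0],                 # x1 <= 3
              [-1.0, 0.0],                # x1 >= 0
              [0.0, -1.0]])               # x2 >= 0
b = np.array([4.0, 3.0, 0.0, 0.0])

best_x, best_val = None, np.inf
for i, j in combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:     # skip parallel constraint pairs
        continue
    v = np.linalg.solve(M, b[[i, j]])     # candidate vertex of the polygon
    if np.all(A @ v <= b + 1e-9) and c @ v < best_val:
        best_x, best_val = v, float(c @ v)
```

The search returns the vertex $(0, 4)$ with objective value $-8$, i.e., the maximum of $x_1 + 2x_2$ over the feasible region is $8$.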
In anticipatory networking, we find that resource allocation problems are often modeled as LP, ILP, or MILP problems, by setting the amount of resources to be allocated as the optimization variable and accounting for prediction in the constraints of the problem. In [@abouzeid2014energy], prediction of the channel gain is exploited to optimize the energy efficiency of the network. Time is modeled as a finite number of slots corresponding to the look-ahead time of the prediction. When dealing with multimedia streaming, the data buffer is usually modeled in the constraints of the problem by linking the state at a given time slot to the previous slot. The solver will then choose whether to use resources in the current slot or use what has been accumulated in the buffer, as in, e.g., [@draxler2015smarterphones]. Admission control is often used to enforce quality-of-service, e.g., [@bui2015anticipatory; @chen2015rate], with the drawback of introducing integer variables in the optimization function. In these cases, the optimal ILP/MILP formulation is followed by a fast heuristic that enables the implementation of real-time algorithms.
Model Predictive Control {#sec:mpc}
------------------------
Model Predictive Control (MPC) is a control theoretic approach that optimizes the sequence of actions in a dynamic system by using the process model of that system within a finite time horizon. Therefore, the process model, i.e., the process that turns the system from one state to the next, should be known. In each time slot $t$, the system state, ${\boldsymbol{\mathbf{x}}}(t)$, is defined as a vector of attributes that define the relevant properties of the system. At each state, the control action, ${\boldsymbol{\mathbf{u}}}(t)$, turns the system to the next state ${\boldsymbol{\mathbf{x}}}(t+1)$ and results in the output ${\boldsymbol{\mathbf{y}}}(t+1)$. In case the system is linear, both the next state and the output can be determined as follows: $$\begin{aligned}
{\boldsymbol{\mathbf{x}}}(t+1) & = & {\boldsymbol{\mathbf{A}}}{\boldsymbol{\mathbf{x}}}(t) + {\boldsymbol{\mathbf{B}}}{\boldsymbol{\mathbf{u}}}(t) + {\boldsymbol{\mathbf{\psi}}}(t) \\
{\boldsymbol{\mathbf{y}}}(t) & = & {\boldsymbol{\mathbf{C}}}{\boldsymbol{\mathbf{x}}}(t) + {\boldsymbol{\mathbf{\epsilon}}}(t),\end{aligned}$$ where ${\boldsymbol{\mathbf{\psi}}}(t)$ and ${\boldsymbol{\mathbf{\epsilon}}}(t)$ are usually zero mean random variables used to model the effect of disturbances on the input and output, respectively, and ${\boldsymbol{\mathbf{A}}}$, ${\boldsymbol{\mathbf{B}}}$, and ${\boldsymbol{\mathbf{C}}}$ are matrices determined by the system model.
At each time slot, the next $N$ states and their respective outputs are predicted and a cost function $J(\cdot)$ is minimized to determine the optimal control action ${\boldsymbol{\mathbf{u}}}^*(t)$ at $t = t_0$: $$\label{eq:mpc_opt}
{\boldsymbol{\mathbf{u}}}^*(t_0)=\arg\underset{{\boldsymbol{\mathbf{u}}}(t_0)}\min J({\boldsymbol{\mathbf{\hat x}}}(t_0),{\boldsymbol{\mathbf{u}}}(t_0)),$$ where ${\boldsymbol{\mathbf{\hat x}}}(t_0)$ is the set of all the predicted states from $t = t_0 + 1$ to $t = t_0 + N$, including the observed state at $t = t_0$. The expression in (\[eq:mpc\_opt\]) essentially states that the optimal action of the current time slot is computed based on the predicted states of a finite time horizon in the future. In other words, in each time slot the MPC sequentially performs an $N$-step lookahead open loop optimization of which only the first step is implemented [@qin2003survey].
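A receding-horizon sketch for a scalar linear system: at each slot the controller searches a grid of candidate inputs (held constant across the horizon, a simplification), applies only the first action, and re-optimizes. The system parameters, cost weights, grid, and horizon are illustrative assumptions, and the grid search stands in for a proper quadratic-program solver.

```python
import numpy as np

# scalar linear system x(t+1) = a x(t) + b u(t); open-loop unstable (a > 1)
a, b = 1.2, 1.0
N = 3                                      # prediction horizon
u_grid = np.linspace(-2, 2, 81)            # candidate control actions

def cost(x0, u_seq):
    # quadratic stage cost accumulated over the horizon: sum of x^2 + 0.1 u^2
    x, J = x0, 0.0
    for u in u_seq:
        x = a * x + b * u
        J += x * x + 0.1 * u * u
    return J

x, traj = 3.0, [3.0]
for t in range(8):                         # receding horizon: re-optimize each slot
    # crude search over inputs held constant across the horizon
    u_star = min(u_grid, key=lambda u: cost(x, [u] * N))
    x = a * x + b * u_star                 # apply only the first action
    traj.append(x)
```

Despite the unstable open-loop dynamics, the repeated lookahead-and-apply-first-step loop drives the state toward zero (up to the grid quantization).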
This approach has been adopted for on-line prediction and optimization of wireless networks [@bianchi2013networked; @lee2013generalized]. Since the process model (for the prediction of future states and outputs) is available in this kind of systems, autoregressive methods can be used along with Kalman filtering [@lee2013generalized], or max-min MPC formulation [@witheephanich2014min]. In [@bianchi2013networked], Kalman filtering is compared to other methods such as mean and median value estimation, Markov chains, and exponential averaging filters.
Optimization based on MPC relies on a finite horizon. The length of the horizon determines the trade-off between complexity and accuracy. Longer horizons need further look ahead and more complex prediction but in turn result in a more foresighted control action [@witheephanich2014min]. Reducing the horizon reduces the complexity while resulting in a more myopic action. This trade-off is examined in [@bianchi2013networked] by proposing an algorithm that adaptively adjusts the horizon length. In general, the prediction horizon is kept to a fairly low number (1 step in [@witheephanich2014min] and 6 steps in [@lee2013generalized]) to avoid high computation overhead.
It is worth noting that MPC methods can be extended to the nonlinear case. In this case, the prediction accuracy and control optimality increase at the cost of more complex algorithms to find the solution [@qin2003survey]. Another benefit of these approaches is their applicability to non-stationary problems.
Markov Decision Process {#sec:mdp}
-----------------------
Markov Decision Process (MDP) is an efficient tool for optimizing sequential decision making in stochastic environments. Unlike MPCs, MDPs can only be applied to stationary systems where a priori information about the dynamics of the system as well as the state-action space is available.
An MDP consists of a four-tuple $({\mathcal{X}},{\mathcal{U}},{\boldsymbol{\mathbf{P}}},r)$, where ${\mathcal{X}}$ and ${\mathcal{U}}$ represent the set of all achievable states in the system and the set of all actions that can be performed in each of the states, respectively. Time is assumed to be slotted and in any time slot $t$, the system is in state $x_t \in {\mathcal{X}}$ from which it can take an action $u_t$ from the set $U_{x_t} \subseteq {\mathcal{U}}$. Due to the assumption of stationarity, we can omit the time subscript for states and actions. Upon taking action $u$ in state $x$, the system moves to the next state $x'\in{\mathcal{X}}$ with transition probability $\mathbf{P}(x'|x,u)$ and receives a reward equal to $r(x,u,x')$. The transition probabilities are predicted and modeled as a Markov Chain prior to solving the MDP and preserve the Markovian behavior of the system.
The goal is to find the optimal policy $\pi^*: {\mathcal{X}}\rightarrow{\mathcal{U}}$ (i.e., the optimal sequence of actions that must be taken from any initial state) in order to maximize the long term discounted average reward $\mathbb{E}\left(\sum_{t = 0}^\infty \gamma^t r(x_t,u_t,x_{t+1})\right)$, where $0 \leq \gamma < 1$ is called the *discount factor* and determines how myopic (if closer to zero) or foresighted (if closer to 1) the decision process should be. In order to derive the optimal policy, each state is assigned a value function $V^\pi(x)$, which is defined as the long term discounted sum of rewards obtained by following policy $\pi$ from state $x$ onwards. The goal of MDP algorithms is to find $V^{\pi^*}(x)\;(\forall x\in{\mathcal{X}})$. Given that the Markovian property holds, it has been proved that the optimal value functions follow the Bellman optimality criterion described below [@puterman2014markov]: $$V^{\pi^*}(x) = \max_{u\in{\mathcal{U}}}\sum_{x'\in{\mathcal{X'}}}\mathbf{P}(x'|x,u)\left(r(x,u,x') + \gamma V^{\pi^*}(x')\right), \quad \forall x\in{\mathcal{X}},$$ where ${\mathcal{X'}}\subset{\mathcal{X}}$ is the set of states for which $\mathbf{P}(x'|x,u)>0$. In order to solve the above equation set, linear programming or dynamic programming techniques can be used, in which the optimal policy is derived by simple iterative algorithms such as policy iteration and value iteration [@puterman2014markov].
MDPs are very efficient for several problems, especially in the framework of anticipatory networking, due to their wide applicability and ease of implementation. MDP-based optimized download policies for adaptive video transmission under varying channel and network conditions are presented in [@hosseini2015not; @bao2015bitrate; @chen2013markov].
|  | **Properties of context** | **Modeling constraints** |
|---|---|---|
| ConvOpt | Can support any context property, but larger system states slow the solver performance. The solution accuracy is linked to the context precision. | Linearity can be exploited to improve the solver efficiency, while data reliability impacts the solution optimality. |
| MPC | Usually offers the highest precision by coupling prediction and optimization. | The most computationally intensive technique. |
| MDP | Limited range and precision. | The most robust approach to low data reliability. Although the system setup can be computationally intensive, it allows for lightweight policies to be implemented. |
| Game theory | Limited granularity to allow the system to converge to an equilibrium. | Very low computational complexity. Fast dynamics hinder the system convergence. |

\[Tab:Optimization\_Class\]
In order to avoid large state spaces (which limit the applicability of MDPs), there are cases where the accuracy of the model must be compromised for simplicity. In [@chen2013markov], a large video receiver buffer is modeled for storing video on demand but only a small portion of the buffer is used in the optimization, while the rest of the buffer follows a heuristic download policy. [@hosseini2015not; @bao2015bitrate] solve this problem by increasing the duration of the time slot such that more video can be downloaded in each slot and, therefore, the buffer is filled entirely based on the optimal policy. This, in turn, comes at the cost of lower accuracy, since the assumption is that the system is static within the duration of a time slot. Heuristic approaches are also adopted for on-line applications. For instance, creating decision trees with low depth from the MDP outputs is proposed in [@hosseini2015not]. Simpler heuristics are also applied to the MDP outputs in [@bao2015bitrate; @chen2013markov; @dutta2015predictive]. If any of the assumptions discussed above does not hold, or if the state space of the system is too large, MDPs and their respective dynamic programming solution algorithms fail. However, there are alternative techniques to solve this kind of problem. For instance, if the system dynamics follow a Markov Renewal Process instead of a Markov Chain, a semi-MDP is solved instead of the regular MDP [@puterman2014markov]. In non-stationary systems, for which the dynamics cannot be predicted a priori or the reward function is not known beforehand, reinforcement learning [@sutton1998reinforcement] can be applied and the optimization turns into an on-line unsupervised learning problem. Large state spaces can be dealt with using value function approximation, where the value function of the MDP is approximated as a linear function, a neural network, or a decision tree [@sutton1998reinforcement].
If different subsets of state attributes have independent effects on the overall reward, e.g., in multi-user resource allocation, the problem can be modeled as a weakly coupled MDP [@fu2010systematic] and can be decomposed into smaller and more tractable MDPs.
Game theoretic approaches
-------------------------
Although small in number, the papers adopting a game theoretic framework offer an alternative approach to optimization. In fact, while the approaches described in the previous subsections strive to compute the optimal solution of an often complex problem formulation, game theory defines policies that allow the system to converge towards a so-called equilibrium, where no player can modify her action to improve her utility. In mobile networks, game theory is applied in the form of matching games [@gu2015matching], where system players (e.g. users) have to be matched with network resources (e.g. base stations or resource blocks). Three types of matching games can be used depending on the application scenario: 1) one-to-one matching, where each user can be matched with at most one resource (as in [@semiari2015context], which optimizes communication in small cell scenarios); 2) many-to-one matching, where either multiple resources can be assigned to a single user (as in [@semiari2016context] for small cell resource allocation), or multiple users can be matched to a single resource (as in [@namvar2014context] for user-cell association); 3) many-to-many matching, where multiple users can be matched with multiple resources (as in [@hamidouche2014many], where videos are associated to caching servers).
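A one-to-one matching of this kind can be computed with the classic deferred-acceptance (Gale–Shapley) procedure, sketched below for a hypothetical user/base-station association; the preference lists are invented for illustration and are not taken from the cited works.

```python
def deferred_acceptance(user_prefs, bs_prefs):
    """One-to-one matching (Gale-Shapley): users propose to base
    stations; each BS keeps the proposer it ranks highest.
    Preference lists are most-preferred-first."""
    rank = {b: {u: r for r, u in enumerate(p)} for b, p in bs_prefs.items()}
    free = list(user_prefs)                 # users not yet matched
    nxt = {u: 0 for u in user_prefs}        # next BS index each user proposes to
    match = {}                              # BS -> user
    while free:
        u = free.pop()
        b = user_prefs[u][nxt[u]]
        nxt[u] += 1
        if b not in match:
            match[b] = u
        elif rank[b][u] < rank[b][match[b]]:  # BS prefers the new proposer
            free.append(match[b])
            match[b] = u
        else:
            free.append(u)                    # rejected, will propose elsewhere
    return match

# Hypothetical instance: both users prefer b1, but b1 prefers u2
users = {'u1': ['b1', 'b2'], 'u2': ['b1', 'b2']}
cells = {'b1': ['u2', 'u1'], 'b2': ['u1', 'u2']}
print(deferred_acceptance(users, cells))  # {'b1': 'u2', 'b2': 'u1'}
```

The resulting matching is stable in the game-theoretic sense: no user/BS pair would both prefer each other over their assigned partners, which is exactly the equilibrium notion the matching-game literature targets.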
Summary {#sec:chal:opt}
-------
This section (and Table \[tab:class\_opt\_summary\]) summarizes the main takeaways of this optimization handbook.
### Convex Optimization methods
These methods are often combined with time series analysis or ideal prediction. The main reason is that they are used to determine performance bounds when the solving time is not a system constraint. Thus, convex optimization is suggested as a benchmark for large scale prediction. This may have to be replaced by fast heuristics in case the optimization tool needs to work in real time. An exception to this is linear programming, for which very efficient algorithms exist that can compute a solution in polynomial time. In contrast, convex optimization methods should be preferred when dealing with high precision and continuous output. They require the complete dataset and show a reliability comparable to that of the predictor used.
### Model Predictive Control {#model-predictive-control}
MPC combines prediction and optimization to minimize the control error by tuning both the prediction and the control parameters. Therefore, it can be coupled with any predictor. The main drawback of this approach is that, by definition, prediction and optimization cannot be decoupled and must be evaluated at each iteration. This makes the solution computationally very heavy and it is generally difficult to obtain real-time algorithms based on MPC. The close coupling between prediction and optimization makes it possible to adopt the method for any application for which a predictor can be designed, with the only additional constraint being the execution time. Objectives and constraints are usually those imposed by the predictor used.
### Markov Decision Processes
MDPs are characterized by a statistical description of the system state and they usually model the system evolution through probabilistic predictors. As such, they best fit scenarios that show objective functions and constraints similar to those of probabilistic predictors. Thus, MDPs are the ideal choice when the optimization objective aims at obtaining stationary policies (i.e., policies that can be applied independently of the system time). This translates to low precision and high reliability. Moreover, even though they require a computationally heavy phase to optimize the policies, once the policies are obtained, fast algorithms can easily be applied.
### Game theory
Matching games prove to be effective solutions that, without struggling to compute an overly complex optimal configuration, let the system converge towards a stable equilibrium which satisfies all the players (i.e., no action can be taken to improve the utility of any player). These are the preferable solutions for those applications where the computational capability is a stringent constraint and where fairness is important for the system quality.
Applicability of Anticipatory Networking to other Wireless Networks {#sec:network}
===================================================================
|  | **Features** | **Advantages** | **Challenges** |
|---|---|---|---|
| *5G Cellular* | mm-waves; massive MIMO; Cloud-RAN | Localization and tracking prediction; load space-time distribution; resource management | Channel models; amount of data |
| *MANETs* | Variable topology; multi-hop communication; self-management | Routing improvement; load balancing | Infrastructure absence; distributed optimization; variable topology |
| *Cognitive* | Primary/secondary users; sensing capabilities | Spectrum availability prediction; load prediction and management; transmission/sensing ratio | Impact on models |
| *D2D* | Complex topology; multi-RAN | Interference management; resource allocation | Model complexity; interference |
| *IoT* | Mostly deterministic traffic; high overhead; sparse communication; low-latency control loops | Prediction for compression; models for anomaly detection; overhead decrease | Amount of data and devices; scalability; constrained devices |

\[tab:network\]
So far, this survey has mainly focused on current cellular networks. In this section, we analyze how different types of mobile wireless networks can take advantage of anticipatory networking solutions. Although each type would deserve a dedicated survey, in what follows we provide brief summaries of the distinctive features, the application scenarios, the expected benefits and the challenges related to the implementation of anticipatory networking for each of them. Table \[tab:network\] summarizes the discussion of this section.
5G Cellular Networks
--------------------
LTE and LTE-advanced represent the fourth generation of mobile cellular networks and, as it emerged from the analyses of the previous sections, they can already benefit from predictive optimization. Since the fifth generation is expected to improve on its predecessors in every aspect [@hossain20155g], not only is anticipatory networking applicable, but it will also provide even greater benefits.
### Characteristics
The next generation of mobile cellular networks will provide faster communications, improved user quality of experience, shorter communication delays, higher reliability and improved energy savings. Among the solutions envisioned to realize these improvements, cell densification, mm-wave bands, massive MIMO, a unified multi-technology frame structure and architecture, and network function virtualization are the ones that are going to have a substantial impact on existing and future use case scenarios. In fact, a denser infrastructure is going to decrease the average time mobile users spend in a specific cell; the directionality of communications in higher portions of the spectrum will increase the importance of localization and tracking functionalities; while the increase of communicating elements and the de-localization of radio access functionalities are going to impact channel models and network resource management.
### Advantages
The performance of 5G cellular networks will strongly depend on their knowledge of the exact user positions (e.g., localization for mm-wave, resource management for network function virtualization). As a consequence, predictive solutions that provide the system with accurate information about users’ current and future positions, trajectories, traffic profiles and content request probabilities are likely to be the most desirable aspects of anticipatory solutions. As far as 5G applications are concerned, we believe network caching and cloud services will also greatly benefit from this. In fact, the former can exploit prediction to decide which content to store in which specific part of the network to serve a given user profile, while the latter can, for instance, forecast when to instantiate a number of virtual machines to face an increase of the network traffic.
### Challenges
The upcoming 5G technologies will also bring new challenges to the basic mechanisms of anticipatory networking. In particular, we see mm-wave, massive MIMO and cell densification as disruptive technologies for the current methods used for predictive optimization. In this regard, the mm-wave channel model is going to impact how to forecast future signal quality and achievable data rates, while network densification and massive MIMO will challenge the scalability of prediction techniques due to the sheer amount of information that needs to be described and exchanged.
Mobile ad hoc networks
----------------------
MANETs consist of mobile wireless devices connected to one another without a fixed infrastructure [@giordano2002mobile]. As a consequence, they share some characteristics with cellular networks but have some unique features due to their variable topology. These networks are the most practical form of communication when an infrastructure is absent or has been compromised by a disruptive event.
### Characteristics
The dynamic nature of MANETs causes the path between any two nodes to vary over time and requires adaptive routing mechanisms that allow, on one hand, to maintain the connectivity among all the network nodes and, on the other hand, to balance the load in the different areas of the network. In addition, adaptive discovery and management functionalities are needed to allow new devices and services to be added to an existing network and to report problems and missing links/nodes. When a MANET extends over an area larger than the communication range of the devices, transmissions must be relayed from one node to another in order to allow messages to reach their destinations.
### Advantages
Knowing nodes’ positions in advance and being able to track their trajectories enable advanced routing functionalities: in fact, additional paths can be created before a missing link interrupts a route, without waiting for a new discovery procedure to be performed. Also, routing tables can be readily adapted when shorter routes appear. In a similar way, management procedures can be enhanced by knowing in advance the traffic being produced by a given node or area of the network, or by forecasting which service is going to be needed in a given part of the network.
### Challenges
The absence of a fixed infrastructure is the main source of challenges that are distinctive of MANETs. For instance, it is not possible to have known databases collecting users’ and devices’ information to build prediction models, nor can centralized optimization services be provided; where they exist, they may suffer from delays in delivering solutions and/or information to the whole network. Moreover, the topology variability makes map-based prediction techniques difficult or impossible to apply.
Cognitive Radio Networks
------------------------
Cognitive radio (CR) networks consist of devices that exploit channels that are unused at specific locations and times [@chen2016survey], but that are usually allocated to primary users (i.e., users that can legitimately communicate using a given channel). CR devices are usually referred to as secondary users, as their operations must not interfere with those performed by the primary users.
### Characteristics
The main distinctive feature of CR devices is that they need to scan for primary users’ activity before attempting any communication in order not to disrupt legitimate transmissions. This scanning/sensing activity decreases the amount of time secondary users can spend on actual communications and, thus, it reduces their throughput. On the other hand, a CR network is usually able to build accurate spectrum occupancy models by fusing the information coming from different devices.
### Advantages
Prediction capabilities are already envisioned for CR networks; in fact, it is easily understandable that being able to predict when primary users are going to occupy their channel will decrease the amount of sensing needed to decide when a secondary user is allowed to transmit. Not only can spectrum occupancy maps be used to predict the upcoming channel state, but also content information and predictive models available to primary users can be exploited by secondary users to reduce their interference probability. Therefore, allowing secondary users to access primary user information is profitable for both: if secondary users are able to improve their throughput by more precisely picking spectrum holes, primary users will be more protected from secondary interference.
### Challenges
Although anticipatory CRs can be seen as symbiotic to primary users, their operations introduce a non-trivial feedback in the resulting system. In fact, the models that are valid when only primary users operate may no longer be valid when secondary users contribute. However, given that those models are usually built using information about primary users only, it will be impossible with the current techniques to create or modify prediction and optimization solutions that take into consideration secondary users. As such, the whole anticipatory infrastructure needs to account for secondary users in order to allow prediction-based schemes to work for both primary and secondary users.
Device-to-Device
----------------
D2D communication refers to the use of direct communication between mobile phones to support the operations of a cellular network [@asadi2014survey]. In addition, since D2D communications must not interfere with the regular cellular network operations, D2D devices can be seen as secondary users with respect to the main communications. Therefore, they share characteristics that are specific to MANETs and CR networks.
### Characteristics
D2D communications are characterized by a complex topology where the usual star network overlies a mesh network. Also, the devices may use different radio access technologies in the mesh network: for instance, they can exploit the same cellular technology (inband) or other wireless solutions such as direct WiFi.
### Advantages
Given the similarities to MANETs and CR networks, D2D communications can take advantage of anticipatory networking mostly to mitigate interference-related problems and to improve resource and power allocation.
### Challenges
While we do not expect D2D communications to pose distinctive challenges to the implementation of anticipatory networking beyond those listed in the previous sections, some aspects will make the adoption of current prediction models less straightforward. In fact, prediction-based optimization and other anticipatory schemes will be made more complex by the possible coexistence of multiple technologies and by the primary/secondary interference and interactions, which will require predicting D2D channels in addition to primary ones.
Internet of Things
------------------
Nowadays, thanks to miniaturization and to progressively cheaper computational and communication chipsets, more and more ordinary objects are being equipped with micro-CPUs and are connected to the Internet [@alfuqaha2015internet; @zanella2014internet; @xu2014internet]: in such a way, smart cities and smart industries, among a variety of other enhanced scenarios, can be realized. The typical device in the IoT is capable of performing one or a set of measurements and/or actuations on the real world. IoT devices are usually constrained in their capabilities: for instance, they can be battery powered or equipped with low data rate radios, or their computational power may be limited.
### Characteristics
Due to the wide definition of the entities that populate the IoT, many of its features have been already described in the preceding subsections. For instance, IoT communications often involve D2D aspects, IoT devices can be cognitive if they are able to sense the spectrum, and they can be considered part of a MANET if they are mobile. However, the most unique features, which are only present in IoT devices, are that they involve machine-type communication and that the devices are typically constrained. Moreover, although the number of smart things is expected to grow exponentially in the next decade, their traffic is not going to grow as fast as that generated by, e.g., mobile cellular networks. In fact, IoT traffic is expected to be mainly due to monitoring, control and detection activities, which are characterized by limited throughput and almost deterministic transmission frequency.
### Advantages
Anticipatory networking and prediction-based optimization can be applied to many aspects of the IoT. For instance, IoT devices that harvest their energy from renewable sources may predict the source availability and optimize their operations accordingly. Furthermore, data prediction models can be used to compress the data produced by IoT devices by sending only the difference from the forecast, and the same models can be used to identify anomalies or prevent disruptive events before they can cause serious problems. Finally, due to the almost deterministic periodicity of data production, IoT communications can be easily modeled and accounted for to mitigate their impact on the overall system.
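As an example of the compression idea above, the sketch below implements a simple send-on-delta scheme: a sensor transmits a sample only when it deviates from the last reported value (the receiver's implicit prediction) by more than a threshold. The temperature trace and threshold are invented for illustration.

```python
def predictive_report(samples, threshold=0.5):
    """Send a sample only when it deviates from the last reported
    value by more than the threshold (send-on-delta); the receiver
    reconstructs the skipped samples from its own prediction."""
    sent, last = [], None
    for t, v in enumerate(samples):
        if last is None or abs(v - last) > threshold:
            sent.append((t, v))   # transmit (timestamp, value)
            last = v              # receiver-side prediction update
    return sent

# Hypothetical temperature readings from a constrained sensor node
temps = [20.0, 20.1, 20.2, 21.0, 21.1, 21.0, 22.5, 22.6]
msgs = predictive_report(temps)
print(len(msgs), 'of', len(temps), 'samples transmitted')  # 3 of 8
```

Richer predictors (linear extrapolation, AR models) shrink the number of transmissions further, at the cost of keeping the predictor state synchronized on both ends.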
### Challenges
Scalability is one of the main challenges of the IoT. In fact, due to the variety of device types and the differences in their capabilities, requirements and applications, the amount of information needed to represent and model the IoT is huge, and the obtained benefits must more than compensate for the cost related to its realization. Moreover, the IoT is impacted by most of the challenges and problems discussed above for the other network types.
On the impact of Anticipatory Networking on the Protocol Stack {#sec:protocol}
==============================================================
In this section, we address another important aspect of anticipatory networking solutions: where to implement them in the ISO/OSI protocol stack [@zimmermann1980osi] and which layers contribute to their realization.
Physical
--------
We do not expect anticipatory networking solutions to modify how the physical layer is designed and managed. In fact, in order to apply prediction-based schemes, some form of interaction is required between two or more entities of the system. As a consequence, the physical layer, which defines how information is mapped to bits and waveforms [@zimmermann1980osi], might provide different profiles to allow predictive techniques to be applied in the higher layers, but will not directly implement any of them.
Data Link
---------
The data link layer is the first entry point for predictive solutions. In particular, this layer implements MAC functionalities. Therefore, resource management [@lu2013optimizing] and admission control [@bui2015anticipatoryb] procedures are likely to greatly benefit from anticipatory optimization. Also, we envision anticipatory networking to be even more important in next generation networks: in particular, channel estimation and beam steering solutions are going to be key for the success of mm-wave and massive MIMO communications [@hossain20155g].
Network
-------
The network layer contains two of the functionalities that can benefit the most from prediction: routing and caching [@bastug2014living; @naimi2014anticipation]. In fact, by knowing users’ mobility and traffic in advance it is possible to optimize routes and caching location to maximize network performance and save resources. For instance, it is possible to build alternative paths before the existing ones deteriorate and break and popular contents may be moved across the network according to where they will be requested with higher probability.
Transport
---------
This layer is mainly concerned with end-to-end message delivery and the two most popular protocols are TCP and UDP: the former guarantees reliable communications, while the latter is a lightweight best-effort solution. Anticipatory networking solutions are easily implemented here [@calabrese2010human; @abouzeid2015evaluating], in particular when error correction and retransmissions are driven by network metrics such as, among others, round-trip time and packet loss. Prediction models can be used to react to changes in the network conditions before they reach a disruptive state and recovery actions have to be taken. In addition, modern transport solutions, such as multipath-TCP, can exploit predictive optimization to manage the traffic flows along the different routes and improve the QoE.
Session, Presentation and Application
-------------------------------------
Since these layers are concerned with connection management between end-points (session), syntax mapping between different protocols (presentation) and interaction with users and software (application), they are the least preferable places to implement anticipatory networking solutions. However, in order to allow applications to exploit predictive mechanisms, these three layers will act as a connection point to provide applications with the needed context information and to allow them to configure the services and parameters needed for the application requirements. For instance, in Section \[sec:classification\].A.6 we described geographically-assisted video optimization [@draxler2015smarterphones; @hosseini2015not], where mobile phone applications modulate the requested video bit rate to optimize the playback of the video itself, and geo-assisted applications [@noulas2012mining] that exploit social and contextual information to enhance their services.
Issues, Challenges, and Research Directions {#sec:challenges}
===========================================
We conclude the paper by providing some insights on how anticipatory optimization will enable new 5G use cases and by detailing the open challenges of anticipatory networking in order to be successfully applied in 5G.
Context related analyses
------------------------
### Geographic context
Geographic context is essential to achieve seamless service. Depending on the optimization objective, a mobility state can be defined with different granularity in multiple dimensions (location, time, speed, etc.). For example, for handover optimization it is sufficient to predict the staying time in the current serving cell and the next serving cell of the user. Medium to large spatial granularity, such as a cell or a cell coverage area, can be considered as a state, and a trajectory can be characterized by a discrete sequence of cells over time. State-space models such as Markov chains and Kalman filters fit the system modeling, while requiring large training samples and considerable insight to make the model compact and tractable. An alternative is the variable-order Markov models, including a variety of lossless compression algorithms (some of the most used belong to the Lempel–Ziv family), where Shannon’s entropy measure is identified as a basis for comparing user mobility models. Such an information-theoretic approach enables adaptive online learning of the model, to reduce update paging cost. Moving from discrete to continuous models, which are applied to assist the prediction of other system metrics with high granularity, e.g., link gain or capacity, regression techniques are widely used. To enhance the prediction accuracy, a priori knowledge can be exploited to provide additional constraints on the content and form of the model, based on street layouts, traffic density, user profiles, etc. However, finding the right trade-off between the model accuracy and complexity is challenging. An effective solution is to decompose the state space and to introduce localized models, e.g., to use distinct models for weekdays and weekends, or urban and rural areas.
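A first-order version of such a cell-sequence model can be fitted directly from transition counts, as in the sketch below; the trajectory is a made-up daily commute, not real measurement data.

```python
from collections import Counter, defaultdict

def fit_markov(trajectory, order=1):
    """Estimate next-cell transition counts from a cell-ID sequence."""
    counts = defaultdict(Counter)
    for i in range(order, len(trajectory)):
        ctx = tuple(trajectory[i - order:i])   # recent cells as context
        counts[ctx][trajectory[i]] += 1
    return counts

def predict_next(counts, ctx):
    """Most likely next cell given the recent context, or None."""
    c = counts.get(tuple(ctx))
    return c.most_common(1)[0][0] if c else None

# Hypothetical commute home -> road -> office, repeated daily
traj = ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B']
model = fit_markov(traj, order=1)
print(predict_next(model, ['B']))  # most likely successor of cell B
```

Raising `order` gives the variable-context models mentioned above; in practice the context length is grown adaptively (as Lempel–Ziv-style schemes do) to balance accuracy against data sparsity.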
Although mobility prediction has been shown to be viable, it has not been widely adopted in practical systems. This is because, unlike location-aware applications with users’ permission to use their location information, mobile service providers must not violate the privacy and security of mobile users. To facilitate the next generation of user-centric networks, new interaction protocols and platforms need to be developed to enable more user-friendly agreements on data usage between the service providers and the mobile users.

Furthermore, next generation wireless networks introduce ultra-dense small cells and high frequencies such as mmWaves. The transmission range gets shorter and transmission often occurs in line-of-sight conditions. Thus, 2D geographic context with a coarse level of accuracy is not sufficient to fully utilize the future radio techniques and resources. This trend opens the door for new research directions in the inference and prediction of 3D geographic context, by utilizing advanced feedback from sensors in user equipment such as accelerometers, magnetometers, and gyroscopes.
### Link context
When predicting link context, i.e., channel quality and its parameters, linear time series models have the potential to provide the best tradeoff between performance and complexity. When the channel changes slowly, e.g., because users are static or pedestrian, it is convenient to exploit the temporal correlation of historic measurements of the users’ channel and implement linear auto-regressive prediction. This can be quite accurate for very short prediction horizons and at the same time simple enough to be implemented in real time systems. Kalman filters can also be used to track errors and their variance, based on previous measurements, thus handling uncertainties. However, time series and linear models are not robust to fast changes. Therefore, in high mobility scenarios, more complex models are needed. One possible approach is to exploit the spatio-temporal correlation between location and channel quality. By combining the prediction of the channel qualities with the prediction of the user’s trajectory, regression analysis can be employed to build accurate radio maps to estimate the long term average channel quality, which accounts for pathloss and slow fading, but neglects fast fading variations. Ideally, one should have two predictions available: a very accurate short term prediction and an approximate long term prediction.
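The auto-regressive predictor mentioned above can be fitted with ordinary least squares, as in the sketch below. The AR(2) process generating the synthetic channel-quality deviations, its coefficients, and the noise level are illustrative assumptions.

```python
import numpy as np

def ar_fit(x, p):
    """Least-squares fit of AR(p) coefficients: x_t ~ sum_k c_k x_{t-k}."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

def ar_predict(x, coef):
    """One-step-ahead prediction from the last len(coef) samples."""
    p = len(coef)
    return float(np.dot(coef, x[-1:-p - 1:-1]))

# Synthetic channel-quality deviations around the mean (dB); the AR(2)
# generating process (0.8, 0.15) and the noise level are made up.
rng = np.random.default_rng(0)
x = [1.0, 0.8]
for _ in range(500):
    x.append(0.8 * x[-1] + 0.15 * x[-2] + 0.1 * rng.standard_normal())
x = np.array(x)

coef = ar_fit(x, p=2)
print(coef, ar_predict(x, coef))
```

With a slowly varying channel the fitted coefficients recover the generating process closely; under fast fading the residual grows and the model must be replaced or augmented, as the text notes.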
Usually, such prediction is exploited to optimize the scheduling, i.e., resource allocation over time or frequency. Convex and linear optimization are often used when prediction is assumed to be perfect. In contrast, Markov models are applied when a probabilistic forecasting is available. Despite the great benefits that link context can potentially bring to resource (and more generally network) optimization, today’s networks do not yet have the proper infrastructure to collect, share, process and distribute link context. Furthermore, proper methods are needed not only to gather data from users, but also to discard irrelevant or redundant measurements as well as to handle sparsity or gaps in the collected data.
### Traffic context
Traffic and throughput prediction has a concrete impact on the optimization of different services of different networks at different time scales.
[Network-wide and for long time scales, linear time series models are already used to predict the macroscopic traffic patterns of mobile radio cells for medium/long-term management and optimization of the radio resources. At faster time scales and for specific radio cells or groups of radio cells, the probabilistic forecasting of the upcoming traffic, e.g., by using Markovian models, can be exploited to solve short-term problems including the radio resource allocation among users and the cell assignment problem. ]{}
[Throughput prediction tools are then naturally coupled with video streaming services in mobile radio networks which have embedded rate adaptation capabilities. In this context, a good practice is to use simple yet effective look-ahead video throughput predictors based on time windows which are often coupled with clustering approaches to group similar video sessions. Deep learning techniques are also proposed to predict the throughput of video sessions, which offer improved performance at the price of a much higher complexity. ]{}
[The data coming from traffic/throughput prediction can be effectively coupled with application/scenario-specific optimization frameworks. When targeting network-wide efficiency, centralized optimization approaches seem to be superior and more widely used. As an example, the problem of radio resource allocation in mobile radio networks is effectively representable and solvable through convex optimization techniques in semi-real-time scenarios. In contrast, when the optimization has to be performed with the granularity of the technology-specific time slot, sub-optimal heuristics are preferable. Besides resorting to optimization approaches, control theoretic modeling is extremely powerful in all those cases where the optimization objective includes traffic (and queue) stability. ]{}
### Social context
We can conclude that leveraging the social context of data transmission results in gains for proactive caching of multimedia content and can improve resource allocation by predicting the social behavior of users. For the former, determining the popularity of content plays a crucial role. Collaborative filtering is a well-known approach for this purpose. However, due to the heavy-tailed nature of content popularity, trying to use this kind of model for a broad class of content will usually not lead to good results. In contrast, for more specific and limited classes of content, e.g., localized advertisement, where a particular item is likely to be requested by a large number of users, popularity prediction is an appealing solution. In general, proactive caching requires that content is stored on caches close to the edge network in order not to put excessive load on the core network. For optimizing resource allocation using social behavior, the social interactions of different users can be used to create social graphs that determine the level of activity of each user and thereby make it possible to predict the amount of resources each user will need. Network utility maximization and heuristic methods are the most popular techniques for this context. Due to the complexity of modeling the social behavior of users, they are useful for wireless networks that either expose a great deal of measurable social interaction (device-to-device communication, dense cellular networks with small cells, local wireless networks in a sports stadium), or when resources are very scarce.
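The popularity-driven proactive caching idea can be sketched in its simplest form: rank items by observed request counts and pre-cache the top-$k$ at the edge. The request log, cache size and hit-ratio evaluation below are all hypothetical.

```python
def cache_top_k(request_log, k):
    """Rank content items by observed request count; cache the k most popular."""
    counts = {}
    for item in request_log:
        counts[item] = counts.get(item, 0) + 1
    return sorted(counts, key=lambda c: (-counts[c], c))[:k]

def hit_ratio(cache, future_requests):
    """Fraction of upcoming requests served from the edge cache."""
    return sum(1 for r in future_requests if r in cache) / len(future_requests)

log = ["a", "b", "a", "c", "a", "b", "d", "a", "b", "e"]
cache = cache_top_k(log, k=2)
print(cache, hit_ratio(cache, ["a", "b", "c", "a"]))
```

This count-based ranking works for the head of the popularity distribution; for the heavy tail, as noted above, per-item prediction degrades and collaborative filtering over narrower content classes becomes preferable.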
Anticipation-enabled use cases {#sec:chal:opp}
------------------------------
Future networks are envisioned to cater to a large variety of new services and applications. Broadband access in dense areas, massive sensor networks, tactile Internet and ultra-reliable communications are only a few of the use cases detailed in [@NGMN]. The network capabilities of today’s systems (i.e., 4G systems) are not able to support such requirements. Therefore, 5G systems will be designed to guarantee an efficient and flexible use (and sharing) of wireless resources, supported by a native software defined network and/or network function virtualization architecture [@NGMN]. Big data analysis and context awareness are not only enablers for new value added services but, combined with the power of anticipatory optimization, can play a role in the 5G technology.
### Mobility management
Network densification will be used in 5G systems in order to cope with the tremendous growth of traffic volume. As a drawback, mobility management will become more difficult. Additionally, it is foreseen that mobility in 5G will be on-demand [@NGMN], i.e., provided for and customized to the specific service that needs it. In this sense, being able to predict the user’s context (e.g., requested service) and his mobility behavior can be extremely useful in order to speed up handover procedures and to enable seamless connectivity. Furthermore, since individual mobility is highly social, social context and mobility information will be jointly used to perform predictions for a group of socially related individuals.
### Network sharing
5G systems will support resource and network sharing among different stakeholders, e.g., operators, infrastructure providers, service providers. The effectiveness of such sharing mechanisms relies on the ability of each player to predict the evolution of his own network, e.g., expected network load, anticipated user’s link quality and prediction of the requested services. Wireless sharing mechanisms can strongly benefit from the added value provided by anticipation, especially when prediction is available at fine granularity, e.g., in a multi-operator scheduler [@malanchini2016wireless].
### Extreme real-time communications
Tactile Internet is only one of the applications that will require a very low latency (i.e., in the order of some milliseconds). Allocating resources and guaranteeing such low end-to-end delay will be very challenging. 5G systems will support such requirements by means of a new physical layer (e.g., a new air interface). However, this will not be enough if not combined with context information used to prioritize control information (e.g., used to move virtual or real objects in real time) over content [@fettweis2014tactile]. Knowledge about the information that is transmitted and its specific requirements will be crucial in order to assign priorities and meet the expected quality-of-experience in a combined effort of physical and higher layers.
### Ultra-reliable communications
Reliability is mentioned in several 5G white papers, e.g. in [@NGMN], as necessary prerequisite for lifeline communications and e-health services, e.g., remote surgery. A recent work [@suryaprakash2016reliability] proposed a quantified definition of reliability in wireless access networks. As outlined here, a posteriori evaluation of the achieved reliability is not enough in order to meet the expected target, which in some cases is as high as $99.999\%$. To this end, it is mandatory to design resource allocation mechanisms that account for (and are able to anticipate the impact on) reliability in advance.
Open challenges {#sec:chal:supp}
---------------
While the literature surveyed so far clearly points out how anticipatory networking can enhance current networks, this section discusses several problems that need to be solved for its wider adoption.
In particular, we identified four functionalities that are going to play an important role in the adoption of anticipatory networking in 5G networks:
- [**Measurements and information collection:**]{} in order to provide means to obtain and share context information, future networks need to provide trusted mechanisms to manage the information exchange.
- [**Data analysis and prediction:**]{} information databases need interoperable procedures to make sure that processing and forecasting tools are usable with many possible information sources.
- [**Optimization and decision making:**]{} data and procedures are then exploited to derive system management policies.
- [**Execution:**]{} finally, in contrast to current procedures, anticipatory execution engines need to take into account the impact of the decisions made in the past and re-evaluate their costs and rewards in hindsight of the actual evolution of the system.
For instance, scheduling and load balancing are two processes that greatly profit from anticipatory networking and cannot be realized without a comprehensive integration of the four aforementioned functionalities in future generation networks. The realization of these functionalities poses the following important challenges.
### Privacy and security
In our opinion, one of the main hindrances for anticipatory networking to become part of next generation networks is related to how users feel about sharing data and being profiled. While voluntarily sharing personal information has become a daily habit, many disapprove that companies create profiles using their data [@singer2015sharing]. In a similar way, there might be a strong resistance against a new technology that, even though in an anonymous way, collects and analyzes users’ behavior to anticipate users’ decisions. Standards and procedures need to be studied to enforce users’ privacy, data anonymity and an adequate security level for information storage. [In addition, data ownership and control need to be defined and regulated in order to allow users and providers to interact in a trusted environment, where the former can decide the level of information disclosure and the latter can operate within shared agreements.]{}
### Network functions and interfaces
Many of the applications that are likely to benefit from anticipatory networking capabilities (i.e. decision making and execution) require unprecedented interactions among information producers, analyzers and consumers. A simple example is provided by predictive media streaming optimizers, which need to obtain content information from the related database and user streaming information from the user and/or the network operator. This information is then analyzed and fed to a streaming provider that optimizes its service accordingly. While ad hoc services can be realized exploiting the current networking functionalities, next generation applications, such as the extreme real-time communications mentioned above, will greatly benefit from a tighter coupling between context information and communication interfaces. [We believe that the potential of anticipatory functionalities extends beyond communication systems, and that they could be applied to other domains, such as public transportation and smart city management.]{}
### Next generation architecture
5G networks are currently being discussed and, while much attention is paid to increasing the network capacity and virtualizing the network functions, we believe that the current infrastructure should be enhanced with repositories for context information and application profiles [@wan2014context] to assist the realization of novel predictive applications. As per the previous concerns above, sharing sensible information, even in an anonymized way, will require particular care in terms of users’ privacy and database accessibility. [We believe that anticipatory networking can potentially improve every kind of mobile networks: cellular networks will likely be the first to exploit this paradigm, because they already own the information needed to enable the predictive frameworks and it is only a matter of time and regulations to make it a reality. Once it will be integrated in cellular networks, other systems, such as public WiFi deployments, device-to-device solutions and the Internet of Things, will be able to participate in the infrastructure to exploit forecasting functionalities; in particular, we believe this will be applied to smart cities and multi-modal transportation.]{}
### Impact of prediction errors
When making and using predictions, one should carefully estimate its accuracy, which is itself a challenge. It might be potentially more harmful to use a wrong prediction than not using prediction at all. Usually, a good accuracy can be obtained for a short prediction horizon, which, however, should not be too short, otherwise the optimization algorithms cannot benefit from it. Therefore, a good balance between prediction horizon and accuracy must be found in order to provide gains. In contrast, over medium/long term periods, metrics can usually be predicted in terms of statistical behavior only. Furthermore, to build robust algorithms that are able to deal with uncertainties, proper prediction error models should be derived. In the existing literature, uncertainties are mainly modeled as Gaussian random variables. Despite the practicability of such an assumption, more complex error models should be derived to take into account the source (e.g., location and/or channel quality) as well as the cause (e.g., GPS accuracy and/or fast fading effect) of errors.
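As an illustration of the Gaussian error modeling mentioned above, a common robust-allocation recipe is to back off the predicted rate by a quantile of the error distribution (a chance constraint). The numbers below are illustrative assumptions, not drawn from any surveyed work.

```python
from statistics import NormalDist

def robust_rate(mean_rate, sigma, outage_prob):
    """Largest rate met with probability 1 - outage_prob under a Gaussian
    prediction-error model: r = mu - z_{1-eps} * sigma (chance constraint)."""
    z = NormalDist().inv_cdf(1.0 - outage_prob)
    return mean_rate - z * sigma

# Hypothetical numbers: predicted 10 Mbit/s, 2 Mbit/s error std, 5% outage target.
print(round(robust_rate(10.0, 2.0, 0.05), 2))   # backs off to about 6.71
```

The back-off grows with both the error variance and the reliability target, which quantifies the trade-off between prediction accuracy and usable gain discussed above.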
Conclusions
===========
\[sec:conclusions\]
[This survey analyzed the literature on anticipatory networking for mobile networks. We provided a thorough analysis of application scenarios categorized by the contextual information used to build the predictive framework. The most relevant prediction and optimization techniques adopted in the literature have been described and commented on in two handbooks that have the twofold objective of supporting researchers in advancing the field and providing standardization and regulation bodies with a common ground on anticipatory networking solutions. While the core of this survey is devoted to mobile cellular networks, we also analyzed the applicability and advantages of anticipatory networking solutions for other types of wireless networks and at the different layers of the protocol stack. Finally, we analyzed benefits and disadvantages of the proposed solutions, the most promising application scenarios for 5G networks, and the challenges that are yet to be faced to adopt anticipatory networking paradigms.]{}
[To conclude, while the literature reviewed in this work suggests that anticipatory networking is a quite mature approach to improve the performance of mobile networks, we believe that issues (mainly at the system level) still need to be solved to realize its potential. In particular, most of the work which has been evaluated in this survey tends to focus on the benefit of anticipation, while overlooking possible problems and disadvantages in the anticipatory networking framework.]{}
[All the main components of anticipatory networking, the context database and the prediction/anticipation intelligence, must be effectively integrated into the mobile network architecture which poses challenges at different levels. First, new interfaces and communication paradigms must be defined for data collection from both end users and sources external to the mobile network itself; second, the management of the context databases brings an additional burden in terms of required bandwidth and processing power for several network elements which may lead to scalability issues as well as security and privacy concerns. ]{} [To this extent, a thorough and comprehensive cost-benefit analysis for specific anticipatory networking scenarios is, in our opinion, a required next step for the research in the field. ]{}
List of Acronyms
================
[^1]: Nicola Bui and Joerg Widmer are with IMDEA Networks Institute, Madrid, Spain. email:{nicola.bui, joerg.widmer}@imdea.org. Matteo Cesana is with Politecnico di Milano, Italy. email:matteo.cesana@polimi.it. S. Amir Hosseini is with NYU Tandon School of Engineering, US. email:amirhs.hosseini@nyu.edu. Qi Liao and Ilaria Malanchini are with Nokia Bell Labs, Stuttgart, Germany. email:{qi.liao, ilaria.malanchini}@nokia-bell-labs.com. This work has been has been supported in part by the European Union H2020-ICT grant 644399 (MONROE), European Union H2020-MSCA-ITN grant 643002 (ACT5G), by the Madrid Regional Government through the TIGRE5-CM program (S2013/ICE-2919), the Ramon y Cajal grant from the Spanish Ministry of Economy and Competitiveness RYC-2012-10788 and grant TEC2014-55713-R.
[^2]: Value obtained for a high-income country with stable social conditions. The percentage can decrease for different countries, e.g., low-income country or natural disaster situation.
[^3]: Ranking based on the number of papers reviewed in this survey using the predictor.
[**Abstract**]{}
The paper presents a $QCD$ description of hard and semihard processes in the framework of the Wilson operator product expansion. A smooth transition between the cases of the soft and hard Pomerons is obtained.
The recent measurements of the deep-inelastic (DIS) structure function (SF) $F_2$ by the $H1$ [@1] and $ZEUS$ [@2] collaborations open a new kinematical range for studying the proton structure. The new $HERA$ data show a strong increase of $F_2$ with decreasing $x$. However, the data of the $NMC$ [@3] and $E665$ [@4] collaborations at small $x$ and smaller $Q^2$ are in good agreement with the standard Pomeron, or with the Donnachie-Landshoff picture, where the Pomeron intercept $\alpha_p = 1.08$ is very close to the standard one. An interpretation of the rapid change of the intercept in the region of $Q^2$ between $Q^2=1GeV^2$ and $Q^2=10GeV^2$ (see Fig.3 in [@5]) is still lacking. There are arguments in favour of a single intercept (see [@6]) as well as of a superposition of two different Pomeron trajectories, one having an intercept of $1.08$ and the other one of $1.5$ (see Fig.4 in [@5]).
The aim of this article is a possible “solution” of this problem in the framework of the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) equation [@6.5]. It is well known (see, for example, [@7]) that in the double-logarithmic approximation the solution of the DGLAP equation is a Bessel function, or $\exp{\sqrt{\phi (Q^2) ln(1/x)}}$, where $\phi (Q^2)$ is a known $Q^2$-dependent function[^1]. However, we will seek the “solution”[^2] of the DGLAP equation in the Regge form (we use the parton distributions (PD) multiplied by $x$ and neglect the nonsinglet quark distribution at small $x$): $$\begin{aligned}
f_a (x,Q^2) \sim x^{-\delta} \tilde
f_a(x,Q^2), ~~~~(a=q,g)~~(\alpha_p \equiv 1+\delta ) \label{1}
\end{aligned}$$ where $\tilde f_a(x,Q^2)$ is nonsingular at $x \to 0$ and $\tilde
f_a(x,Q^2) \sim (1-x)^{\nu}$ at $x \to 1$[^3]. Similar investigations have already been performed and the results are well known (see [@8], [@11]-[@15])[^4]. The aim of this letter is to extend these results to the range where $\delta \sim 0$ (and $Q^2$ is not large), following the method observed earlier (see [@12; @13])[^5] of replacing the Mellin convolution by a simple product. Of course, we understand that the Regge behaviour (\[1\]) is not in agreement with the double-logarithmic solution; however, the range where $\delta \sim 0$ and the $Q^2$ values are not large is really the Regge regime, and a “solution” of the DGLAP equation in the form (\[1\]) would be worthwhile. This “solution” may be understood as the solution of the DGLAP equation together with the condition of its Regge asymptotics at $x \to 0$.
Consider the DGLAP equation and apply the method from [@13] to the Mellin convolution in its r.h.s. (in contrast with the standard case, we use below $\alpha(Q^2)=\alpha_s(Q^2)/(4\pi)$): $$\begin{aligned}
{&&\hspace*{-1cm}}\frac{d}{dt}f_a (x,t)~=~- \frac{1}{2} \sum_{i=a,b}
\hat \gamma_{ai}(\alpha,x) \otimes f_a(x,t)~~~(a,b)=(q,g) \nonumber \\
{&&\hspace*{-1cm}}=~- \frac{1}{2} \sum_{i=a,b}
\tilde \gamma_{ai}(\alpha,1+\delta)f_a(x,t)~+~O(x^{1-\delta})~~~
\Bigl( \gamma_{ab}(\alpha,n)=\alpha \gamma_{ab}^{(0)}(n)+\alpha^2
\gamma_{ab}^{(1)}(n)+...\Bigr)
, \label{2}
\end{aligned}$$ where $t=ln(Q^2/\Lambda ^2)$. The $\hat \gamma_{ab}(\alpha,x)$ are the splitting functions corresponding to the anomalous dimensions (AD) $\gamma_{ab}(\alpha,n) = \int^1_0 dx x^{n-1} \hat \gamma_{ab}(\alpha,x)$. Here the functions $\gamma_{ab}(\alpha,1+\delta)$ are the AD $\gamma_{ab}(\alpha,n)$ continued from the integer argument “$n$” to the noninteger one “$1+\delta$”. The functions $\tilde \gamma_{ab}(\alpha,1+\delta)$ (referred to below as AD, too) can be obtained from the functions $\gamma_{ab}(\alpha,1+\delta)$ by replacing the term $1/\delta$ by $1/\tilde \delta$:
$$\begin{aligned}
\frac{1}{\delta} \to \frac{1}{\tilde \delta}~=~\frac{1}{\delta}
\Bigl( 1 - \varphi(x,\delta)x^{\delta} \Bigr)
\label{3}
\end{aligned}$$
This replacement (\[3\]) appears very naturally from the consideration of the Mellin convolution at $x \to 0$ (see [@13]) and preserves a smooth and nonsingular transition to the case $\delta =0$, where
$$\begin{aligned}
\frac{1}{\tilde \delta}~=~ln\frac{1}{x} - \varrho(x)
\label{4}
\end{aligned}$$
The concrete form of the functions $\varphi(x,\delta)$ and $\varrho(x)$ depends strongly on the type of behaviour of the PD $f_a(x,Q^2)$ at $x \to 0$; in the case of the Regge regime (\[1\]) they are (see [@12; @13]):
$$\begin{aligned}
\varphi(x,\delta)~=~ \frac{\Gamma(\nu +1)\Gamma(1-\delta)}{\Gamma(\nu
+1-\delta)}~~ \mbox{ and }~~ \varrho(x)~=~\Psi(\nu+1)-\Psi(1),
\label{5}
\end{aligned}$$
where $\Gamma(\nu+1)$ and $\Psi(\nu+1)$ are the Eulerian $\Gamma$- and $\Psi$-functions, respectively. As can be seen, there is a correlation with the PD behaviour at large $x$.
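The $\delta \to 0$ limit (\[4\]) can be checked numerically against the definition (\[3\]) with $\varphi$ and $\varrho$ taken from (\[5\]). The sketch below does so for the illustrative values $\nu=4$, $x=10^{-3}$, $\delta=10^{-5}$, approximating $\Psi$ by a finite difference of $\ln\Gamma$.

```python
from math import gamma, lgamma, log

def digamma(z, h=1e-6):
    """Numerical Psi(z) via a central difference of log-Gamma."""
    return (lgamma(z + h) - lgamma(z - h)) / (2 * h)

def inv_delta_tilde(x, delta, nu):
    """1/delta-tilde of Eq. (3), with phi(x, delta) from Eq. (5)."""
    phi = gamma(nu + 1) * gamma(1 - delta) / gamma(nu + 1 - delta)
    return (1.0 - phi * x ** delta) / delta

x, nu, delta = 1e-3, 4.0, 1e-5
lhs = inv_delta_tilde(x, delta, nu)
rhs = log(1.0 / x) - (digamma(nu + 1) - digamma(1.0))   # Eq. (4)
print(lhs, rhs)
```

For small $\delta$ the two expressions agree to $O(\delta)$, confirming the nonsingular matching claimed in the text.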
If $\delta$ is not small (i.e. $x^{\delta}>>1$), we can replace $1/ \tilde \delta$ by $1/\delta$ in the r.h.s. of Eq.(\[2\]) and obtain its solution in the form (hereafter $t_0=t(Q^2=Q^2_0)$):
$$\begin{aligned}
\frac{f_a(x,t)}{f_a(x,t_0)}~=~ \frac{M_a(1+\delta,t)}{M_a(1+\delta,t_0)},
\label{6}
\end{aligned}$$
where $M_a(1+\delta,t)$ is the analytic continuation of the PD moments $M_a(n,t) = \int^1_0 dx x^{n-1} f_a(x,t)$ to the noninteger value “$n=1+\delta$”.
This is a well-known solution (see [@12] for the first two orders of perturbation theory, [@14] for the first three orders, and [@15], which contains a resummation of all orders). Note that recently a fit of $HERA$ data was performed in [@17] with a formula for the PD $f_q(x,t)$ very close [^6] to (\[6\]), and very good agreement (the $\chi^2$ per degree of freedom is $0.85$) was found at $\delta = 0.40 \pm 0.03$. There are also the fits [@17.5] of another group using equations which are similar to (\[6\]) in the LO approximation.
The new points of our investigation are as follows. Note that the $Q^2$-evolution of $M_a(1+\delta,t)$ contains two components, “+” and “$-$”, i.e. $M_a(1+\delta,t)= \sum_{i= \pm} M_a^i(1+\delta,t)$; in principle each component evolves separately and may have an independent (and not equal) intercept. Here, for simplicity, we restrict ourselves to the LO analysis and give the NLO formulae below without lengthy intermediate equations.
[**1.**]{} Consider the DGLAP equation for the “+” and “$-$” parts (hereafter $s=ln(t/t_0)$):
$$\begin{aligned}
\frac{d}{ds} f_a^{\pm}(x,t)~=~- \frac{1}{2\beta_0}
\tilde
\gamma_{\pm}(\alpha,1+\delta_{\pm})f_a^{\pm}(x,t)~+~O(x^{1-\delta}),
\label{7} \end{aligned}$$
where $$\gamma_{\pm}~=~ \frac{1}{2}
\biggl[
\Bigl(\gamma_{gg}+\gamma_{qq} \Bigr)~\pm ~
\sqrt{ {\Bigl( \gamma_{gg}- \gamma_{qq} \Bigr)}^2
~+~4\gamma_{qg}\gamma_{gq}}
\biggr]$$ are the AD of the “$\pm $” components (see, for example, [@18]).
The “$-$” component $\tilde \gamma_{-}(\alpha,1+\delta_-)$ does not contain the singular term (see [@12; @14] and below) and its solution has the form:
$$\begin{aligned}
\frac{f_a^-(x,t)}{f_a^-(x,t_0)}~=~e^{-d_-(1+\delta_-)s}, \mbox{ where }
d_{\pm}=\frac{\gamma_{\pm}(1+\delta_{\pm})}{2\beta_0}
\label{8}
\end{aligned}$$
The “+” component $\tilde \gamma_{+}(\alpha,1+\delta_+)$ contains the singular term and $f_a^+(x,t)$ has a solution similar to (\[8\]) only for $x^{\delta_+}>>1$:
$$\begin{aligned}
\frac{f_a^+(x,t)}{f_a^+(x,t_0)}~=~e^{-d_+(1+\delta_+)s}, \mbox{ if }
x^{\delta_+}>>1
\label{9}
\end{aligned}$$
Both intercepts $1+\delta_+$ and $1+\delta_-$ are unknown and should, in principle, be found from an analysis of the experimental data. However, there is another way. From the small-$Q^2$ (and small-$x$) data of the $NMC$ [@3] and $E665$ [@4] collaborations we can conclude that the SF $F_2$, and hence the PD $f_a(x,Q^2)$, have flat asymptotics for $x \to 0$ and $Q^2 \sim (1\div2)GeV^2$. Thus we know that the values of $\delta_+$ and $\delta_-$ are approximately zero at $Q^2 \sim 1GeV^2$.
Consider Eqs.(\[7\]) with $\delta_{\pm}=0$ and with the boundary condition $f_a(x,Q^2_0)=A_a$ at $Q^2_0=1GeV^2$. For the “$-$” component we already have the solution: Eq.(\[8\]) with $\delta_-=0$ and $d_-(1)=16f/(27\beta_0)$, where $f$ is the number of active quarks and $\beta_i$ are the coefficients in the $\alpha$-expansion of the QCD $\beta$-function. For the “+” component, Eq.(\[7\]) can be rewritten in the form (hereafter the index $1+\delta $ will be omitted in the case $\delta \to 0$):
$$\begin{aligned}
ln(\frac{1}{x})\frac{d}{ds}\delta_+(s)~+~
\frac{d}{ds} ln(A_a^+) ~=~- \frac{1}{2\beta_0}
\biggr[ \hat
\gamma_{+}
\Bigl( ln(\frac{1}{x}) -\varrho(\nu) \Bigr) ~+~ \overline \gamma_+
\biggl]
\label{10} \end{aligned}$$
where $\hat\gamma_{+}$ and $\overline \gamma_+$ are the coefficients of the singular and regular parts at $\delta \to 0$ of AD $\gamma^+(1+\delta)$: $$\gamma^+(1+\delta)~=~\hat\gamma^+ \frac{1}{\delta} ~+~
\overline\gamma^+,~~~\hat\gamma^+=-24,~\overline\gamma^+=22+
\frac{4f}{27}$$
The solution of Eq.(\[10\]) is
$$\begin{aligned}
f_a^+(x,t)~=~A^+_a~x^{\hat d_+s}e^{-\overline d_+s},
\label{11}
\end{aligned}$$
where $$\hat d_+ \equiv \frac{\hat \gamma^+}{2\beta_0} \simeq -
\frac{4}{3},~~
\overline d_+ \equiv \frac{1}{2\beta_0}
\Bigl( \overline \gamma_+ ~-~ \hat \gamma_+ \varrho(\nu)\Bigr)
\simeq \frac{4}{3} \varrho(\nu) + \frac{101}{81}$$ Herefter the symbol $\simeq $ marks the case $f=3$.
As can be seen from (\[11\]), the flat form $\delta_+=0$ of the “+” component of the PD is very unstable from the (perturbative) viewpoint, because $d(\delta_+)/ds \neq 0$, and for $Q^2 > Q_0^2$ we already have a nonzero power of $x$ (i.e. a Pomeron intercept $\alpha_p >1$). This is in agreement with the experimental data. Let us note that the power of $x$ is positive for $Q^2<Q^2_0$, which is in principle also supported by the $NMC$ [@3] data, but the applicability of this analysis at $Q^2<1GeV^2$ is an open question.
Thus, we have the solution of the DGLAP equation for the “+” component at $Q^2$ close to $Q^2_0=1GeV^2$, where the Pomeron starts its movement toward the subcritical (or Lipatov [@19.5; @19.6]) regime, and also for large $Q^2$, where the Pomeron has a $Q^2$-independent intercept. In principle, the general solution of (\[7\]) should contain a smooth transition between these pictures, but such a solution is absent [^7]. We introduce a “critical” value of $Q^2$: $Q^2_c$, at which the solution (\[11\]) is matched onto the solution (\[9\]). The exact value of $Q^2_c$ may be obtained from a fit of the experimental data. Thus, we have in the LO of perturbation theory:
$$\begin{aligned}
{&&\hspace*{-1cm}}f_a(x,t)~=~ f_a^-(x,t)~+~ f_a^+(x,t) \nonumber \\
{&&\hspace*{-1cm}}f_a^-(x,t)~=~A^-_a~\exp{(- d_-s)} \nonumber \\
{&&\hspace*{-1cm}}f_a^+(x,t)~=~
\left\{
\begin{array}{ll} A^+_a
x^{\hat d_+s}\exp{(-\overline d_+s)}, & \mbox{ if } Q^2 \leq Q^2_c \\
f_a^+(x,t_c)
\exp{\Bigl(-d_+(1+\delta_c)(s-s_c)\Bigr)},
& \mbox{ if } Q^2>Q^2_c
\end{array} \right.
\label{12}
\end{aligned}$$
where $$\begin{aligned}
{&&\hspace*{-1cm}}t_c~=~t(Q^2_c),~~s_c~=~s(Q^2_c) \nonumber \\
{&&\hspace*{-1cm}}A^+_q~=~(1- \overline \alpha )A_q ~+~ \tilde \alpha A_g,~~
A^+_g~=~ \overline \alpha A_g ~-~ \varepsilon A_q \nonumber \\
{&&\hspace*{-1cm}}\mbox{and } A_a^-~=~A_a ~-~ A_a^+
\label{13}
\end{aligned}$$ and the values of the coefficients $\overline \alpha$, $\tilde \alpha$ and $\varepsilon$ may be found, for example, in [@18].
Using the concrete AD values at $\delta =0$ and $f=3$, we have
$$\begin{aligned}
{&&\hspace*{-1cm}}A^+_q~ \approx ~\frac{1}{27}\frac{4A_q+9A_g}{ln(\frac{1}{x})-\varrho
(\nu) - \frac{85}{108}} \nonumber \\
{&&\hspace*{-1cm}}A^+_g~ \approx ~A_g~+~\frac{4}{9}A_q ~-~
\frac{4}{27}\frac{9A_g-A_q}{ln(\frac{1}{x})-\varrho
(\nu) - \frac{85}{108}}
\label{14}
\end{aligned}$$
Thus, the value of the “+” component of the quark PD is suppressed logarithmically, which is in qualitative agreement with the $HERA$ parametrizations of the SF $F_2$ (see [@20.5; @21]) (in the LO $F_2(x,Q^2)~=~(2/9)f_q(x,Q^2)$ for $f=3$), where the magnitude connected with the factor $x^{-\delta}$ is $5 \div 10 \%$ of the flat (for $x \to 0$) magnitude.
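The size of this logarithmic suppression can be checked directly from Eq.(\[14\]): for the illustrative values $x=10^{-4}$, $\nu=4$ and flat inputs $A_q=A_g=1$, the ratio $A^+_q/A_q$ indeed falls in the quoted $5 \div 10 \%$ range.

```python
from math import log

def rho(nu):
    """psi(nu + 1) - psi(1): the harmonic number H_nu for integer nu."""
    return sum(1.0 / k for k in range(1, int(nu) + 1))

def a_plus_q(a_q, a_g, x, nu):
    """LO '+' quark component of Eq. (14), f = 3 active flavours."""
    return (4 * a_q + 9 * a_g) / 27.0 / (log(1.0 / x) - rho(nu) - 85.0 / 108.0)

frac = a_plus_q(1.0, 1.0, x=1e-4, nu=4)   # ratio to the flat input A_q = 1
print(round(frac, 3))
```

The suppression weakens only logarithmically as $x$ decreases, consistent with the $1/[\ln(1/x)-\varrho(\nu)]$ structure of Eq.(\[14\]).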
[**2.**]{} By analogy with subsection [**1**]{}, and knowing the NLO $Q^2$-dependence of the PD moments, we obtain the following equations for the NLO $Q^2$-evolution of both the “+” and “$-$” PD components (hereafter $\tilde s=ln(\alpha(Q^2_0)/\alpha(Q^2)),~p=\alpha(Q^2_0)-\alpha(Q^2)$):
$$\begin{aligned}
{&&\hspace*{-1cm}}f_a(x,t)~=~ f_a^-(x,t)~+~ f_a^+(x,t) \nonumber \\
{&&\hspace*{-1cm}}f_a^-(x,t)=~\tilde A^-_a~\exp{(- d_-\tilde s -d_{--}^ap)}
\nonumber \\
{&&\hspace*{-1cm}}f_a^+(x,t)=
\left\{
\begin{array}{ll} \tilde A^+_a
x^{(\hat d_+\tilde s + \hat d_{++}^a p)}\exp{(-\overline d_+\tilde s
-\overline d_{++}^ap)}, & \mbox{if } Q^2 \leq Q^2_c \\
f_a^+(x,t_c)
\exp{\Bigl(-d_+(1+\delta_c)(\tilde s-\tilde
s_c)-d_{++}^a(1+\delta_c)(p-p_c) \Bigr) },
& \mbox{if } Q^2>Q^2_c
\end{array} \right.
\label{15}
\end{aligned}$$
where $$\begin{aligned}
{&&\hspace*{-1cm}}\tilde s_c ~=~ \tilde s(Q^2_c),~
p_c~=~p(Q^2_c),~\alpha_0~=~\alpha(Q^2_0)
,~\alpha_c~=~\alpha(Q^2_c) \nonumber \\ {&&\hspace*{-1cm}}\tilde A^{\pm}_a~=~\Bigl(1~-~\alpha_0 K^a_{\pm} \Bigr)
A^{\pm}_a ~+~ \alpha_0 K^a_{\pm} A^{\mp}_a \nonumber \\ {&&\hspace*{-1cm}}d_{++}^a~=~ \hat d_{++}^a
\Bigl( ln(\frac{1}{x}) - \varrho(\nu) \Bigl) ~+~ \overline d_{++}^a, ~~
d^a_{++}~=~ \frac{\gamma_{\pm \pm}}{2\beta_0} ~-~
\frac{\gamma_{\pm} \beta_1}{2\beta^2_0} ~-~ K^a_{\pm} \nonumber \\
{&&\hspace*{-1cm}}\mbox{and }~~ K^q_{\pm}~=~ \frac{\gamma_{\pm \mp}}{2\beta_0 +
\gamma_{\pm} - \gamma_{\mp}},~~ K^g_{\pm}~=~ K^q_{\pm}
\frac{\gamma_{\pm}-\gamma^{(0)}_{qq}}{\gamma_{\mp}-\gamma^{(0)}_{qq}}
\label{16}
\end{aligned}$$
The NLO AD of the “$\pm$” components are connected with the NLO AD $\gamma^{(1)}_{ab}$. The corresponding formulae can be found in [@18].
Using the concrete values of the LO and NLO AD at $\delta =0$ and $f=3$, we obtain the following values for the NLO components from (\[15\]),(\[16\]) (note that we retain only the terms $\sim O(1)$ in the NLO terms):
$$\begin{aligned}
{&&\hspace*{-1cm}}d^q_{--}~=~ \frac{16}{81} \Big[ 2\zeta (3) + 9 \zeta (2) -
\frac{779}{108} \Big] \approx 1.97, ~~
d^g_{--}~=~ d^q_{--}~+~ \frac{28}{81} \approx 2.32 \nonumber \\ {&&\hspace*{-1cm}}\hat d^q_{++}~=~ \frac{2800}{81} , ~~
\overline d^q_{++}~=~ 32 \Big[ \zeta (3) + \frac{263}{216}\zeta (2) -
\frac{372607}{69984} \Big] \approx -67.82 \nonumber \\ {&&\hspace*{-1cm}}\hat d^g_{++}~=~ \frac{1180}{81} , ~~
\overline d^g_{++}~=~ \overline d^q_{++}~+~ \frac{953}{27} -12\zeta
(2) \approx -52.26
\label{17}
\end{aligned}$$
and
$$\begin{aligned}
{&&\hspace*{-1cm}}\tilde A^+_q~ \simeq ~\frac{20}{3} \alpha_0
\Bigl[ A_g + \frac{4}{9} A_q \Bigr] ~+~
\frac{1}{27}\frac{4A_q(1-7.67 \alpha_0)+9A_g(1-8.71
\alpha_0)}{ln(\frac{1}{x})-\varrho
(\nu) - \frac{85}{108}} \nonumber \\
{&&\hspace*{-1cm}}\tilde A^+_g~ \simeq ~ \Bigl(A_g ~+~\frac{4}{9}A_q \Bigr)
\Bigl(1-\frac{80}{9}\alpha_0 \Bigr)
~-~
\frac{4}{27}\frac{9A_g-A_q}{ln(\frac{1}{x})-\varrho
(\nu) - \frac{85}{108}} \Bigl( 1+ \frac{692}{81}\alpha_0)
\nonumber \\
{&&\hspace*{-1cm}}\mbox{and } \tilde A_a^-~=~A_a ~-~ \tilde A_a^+
\label{18}
\end{aligned}$$
It is useful to change in Eqs.(\[15\])-(\[18\]) from the quark PD to the SF $F_2(x,Q^2)$, which is connected with the PD in the NLO approximation in the following way (see [@18]):
$$\begin{aligned}
F_2(x,Q^2)~=~ \Bigl( 1+\alpha(Q^2)B_q(1+\delta) \Bigr) \delta^2_s
f_q(x,Q^2) ~+~ \alpha(Q^2)B_g(1+\delta) \delta^2_s f_g(x,Q^2),
\label{19}
\end{aligned}$$
where $\delta ^2_s = \sum_{i=1}^f e_i^2/f \equiv <e_f^2>$ is the average charge square of the active quarks: $\delta ^2_s$ = (2/9 and 5/18) for $f$ = (3 and 4), respectively. The NLO corrections lead to the appearance in the r.h.s. of Eqs.(\[15\]) of the additional terms $\Bigl( 1+\alpha B_{\pm} \Bigr)/\Bigl( 1+\alpha_0 B_{\pm} \Bigr)$ and to the necessity of transforming $\tilde A^{\pm}_q$ into the input quantities $C^{\pm} \equiv F_2^{\pm}(x,Q^2_0)$. The final results for $F_2(x,Q^2)$ are of the form:
$$\begin{aligned}
{&&\hspace*{-1cm}}F_2(x,t)~=~ F_2^-(x,t)~+~ F_2^+(x,t) \nonumber \\
{&&\hspace*{-1cm}}F_2^-(x,t)~=~ C^-~\exp{(- d_-\tilde s -d_{--}^qp)}
(1+\alpha B^-)/(1+\alpha_0 B^-)
\nonumber \\
{&&\hspace*{-1cm}}F_2^+(x,t)~=~
\left\{
\begin{array}{ll} C^+
x^{(\hat d_+\tilde s + \hat d_{++}^q p)}\exp{(-\overline d_+\tilde s
-\overline d_{++}^qp)}(1+\alpha B^+)/(1+\alpha_0 B^+)
, & \mbox{ if } Q^2 \leq Q^2_c \\
F_2^+(x,t_c)
\exp{\Bigl(-d_+(1+\delta_c)(\tilde s-\tilde
s_c)-d_{++}^q(1+\delta_c)(p-p_c) \Bigr) } & \\
\biggl(1+
\alpha B^+(1+\delta_c) \biggr)/
\biggl(1+
\alpha_c B^+(1+\delta_c) \biggr),
& \mbox{ if } Q^2>Q^2_c
\end{array} \right.
\label{20}
\end{aligned}$$
where $$B^{\pm}~=~B_q ~+~ \frac{\gamma_{\pm}}{\gamma^{(0)}_{qg}}B_g,~~
C^{\pm}~=~\tilde A^{\pm}_q (1+\alpha_0 B^{\pm})$$ with the substitution of $A_q$ by $C \equiv F_2(x,Q^2_0)$ in Eq.(\[18\]) for $\tilde A^{\pm}_q$, according to
$$\begin{aligned}
{&&\hspace*{-1cm}}C~=~ \Bigl( 1+\alpha_0 B_q \Bigr) \delta^2_s
A_q ~+~ \alpha_0 B_g \delta^2_s A_g,
\label{21}
\end{aligned}$$
For the gluon PD the situation is simpler: in Eq.(\[18\]) it is only necessary to replace $A_q$ by $C$ according to (\[21\]).
Using the concrete values of the LO and NLO AD at $\delta =0$ and $f=3$, we have for the $Q^2$-evolution of $F_2(x,Q^2)$ and the gluon PD:
$$\begin{aligned}
{&&\hspace*{-1cm}}F_2(x,t)~=~ F_2^-(x,t)~+~ F_2^+(x,t),~~
f_g(x,t)~=~ f_g^-(x,t)~+~ f_g^+(x,t) \nonumber \\
{&&\hspace*{-1cm}}F_2^-(x,t)~=~ C^-~\exp{(- \frac{32}{81} \tilde s
-1.97p)}(1-\frac{8}{9} \alpha )/(1-\frac{8}{9} \alpha_0 )
\nonumber \\
{&&\hspace*{-1cm}}F_2^+(x,t)~=~
\left\{
\begin{array}{ll} C^+
x^{(-\frac{4}{3} \tilde s + \frac{2800}{81}
p)}
\exp{\Bigl(- \frac{4}{3}(\varrho(\nu)+\frac{101}{108}) \tilde s
+(\frac{2800}{81} \varrho(\nu)-67.82)p \Bigr)} & \\
\Bigl(1+6[ln(\frac{1}{x})-\varrho(\nu)-\frac{101}{108}] \alpha \Bigr)/
\Bigl(1+6[ln(\frac{1}{x})-\varrho(\nu)-\frac{101}{108}] \alpha_0 \Bigr)
, & \mbox{if } Q^2 \leq Q^2_c \\
F_2^+(x,t_c)
\exp{\Bigl(-d_+(1+\delta_c)(\tilde s-\tilde
s_c)-d_{++}^q(1+\delta_c)(p-p_c) \Bigr) } & \\
\biggl(1+
\alpha B^+(1+\delta_c) \biggr)/
\biggl(1+
\alpha_c B^+(1+\delta_c) \biggr),
& \mbox{if } Q^2>Q^2_c
\end{array} \right.
\label{22} \\
{&&\hspace*{-1cm}}f_g^-(x,t)~=~ A_g^-~\exp{(- \frac{32}{81} \tilde s
-2.32p)}(1-\frac{8}{9} \alpha )/(1-\frac{8}{9} \alpha_0 )
\nonumber \\
{&&\hspace*{-1cm}}f_g^+(x,t)~=~
\left\{
\begin{array}{ll} A_g^+
x^{(-\frac{4}{3} \tilde s + \frac{1180}{81}
p)}\exp{\Bigl(- \frac{4}{3}(\varrho(\nu)+\frac{101}{108}) \tilde s
+(\frac{1180}{81} \varrho(\nu)-52.26)p \Bigr)} & \\
\Bigl(1+6[ln(\frac{1}{x})-\varrho(\nu)-\frac{101}{108}] \alpha \Bigr)/
\Bigl(1+6[ln(\frac{1}{x})-\varrho(\nu)-\frac{101}{108}] \alpha_0 \Bigr)
, & \mbox{if } Q^2 \leq Q^2_c \\
f_g^+(x,t_c)
\exp{\Bigl(-d_+(1+\delta_c)(\tilde s-\tilde
s_c)+d_{++}^a(1+\delta_c)(p-p_c) \Bigr) } & \\
\biggl(1+
\alpha B^+(1+\delta_c) \biggr)/
\biggl(1+
\alpha_c B^+(1+\delta_c) \biggr),
& \mbox{ if } Q^2>Q^2_c
\end{array} \right.
\label{23}
\end{aligned}$$
where
$$\begin{aligned}
{&&\hspace*{-1cm}}\tilde C^+~ \simeq ~\frac{2}{27}
\Biggl( 26\alpha_0
\Bigl[ A_g + 2C \Bigr] ~+~
\frac{A_g(1-9.74 \alpha_0)+2C(1-7.82
\alpha_0)}{ln(\frac{1}{x})-\varrho
(\nu) - \frac{85}{108}} \Biggr) \nonumber \\
{&&\hspace*{-1cm}}\mbox{and } C^-~=~C
{}~-~ C^+
\label{24} \\
{&&\hspace*{-1cm}}\tilde A^+_g~ \simeq ~A_g \Bigl(1-\frac{28}{3}\alpha_0 \Bigr)
{}~+~2C ~-~
\frac{2}{27}\frac{2A_g(1+ \frac{590}{81}\alpha_0)-C(1+
\frac{572}{81}\alpha_0)}{ln(\frac{1}{x})-\varrho
(\nu) - \frac{85}{108}}
\nonumber \\
{&&\hspace*{-1cm}}\mbox{and } \tilde A_g^-~=~A_g ~-~ \tilde A_g^+
\label{25}
\end{aligned}$$
Let us draw some conclusions following from Eqs.(\[24\])-(\[25\]). It is clearly seen that the NLO corrections reduce the LO contributions. Indeed, the value of the subcritical Pomeron intercept, which increases as $ln(\alpha_0/\alpha)$ in the LO, acquires an additional term $ \sim (\alpha_0 - \alpha)$ with a large numerical coefficient opposite in sign to the LO term. Note that this coefficient is different for the quark and gluon PD, which agrees with the recent $MRS(G)$ fit in [@19] and the data analysis by the $ZEUS$ group (see [@20]). The intercept of the gluon PD is larger than that of the quark PD (see also [@19; @20]). However, the effective reduction of the quark PD is smaller (in agreement with the W.-K. Tung analysis in [@21]), because the part of the quark PD that increases at small $x$ acquires an additional term ($ \sim
\alpha_0$ but not $ \sim 1/lnx $), which is important at very small $x$.
Note that there is the fourth quark threshold at $Q^2_{th} \sim 10
GeV^2$, and the value of $Q^2_{th}$ may be larger or smaller than $Q^2_c$. Then either the solution in the r.h.s. of Eqs. (\[20\],\[22\],\[23\]) before the critical point $Q^2_c$ or the one for $Q^2 > Q^2_c$ contains the threshold transition, where the values of all variables change from those at $f=3$ to those at $f=4$. The coupling $\alpha(Q^2)$ is smooth because $\Lambda^{f=3}_{\overline{MS}} \to \Lambda^{f=4}_{\overline{MS}}$ (see also the recent experimental test of the flavour independence of strong interactions in [@22]).
For simplicity we suppose here that $Q^2_{th} = Q^2_c$, so that all changes initiated by the threshold are made automatically: the first solutions (at $Q^2
\leq Q^2_c$) have $f=3$ and the second ones (at $Q^2 > Q^2_c$) have $f=4$, respectively. For the “$-$” component we should use $Q^2_{th}=Q^2_c$, too.
Note only that the Pomeron intercept $\alpha_p = 1~-~(d_+ \tilde s +
\hat d^q_{++}p)$ increases at $Q^2=Q^2_{th}$, because
$$\alpha_p ~-~1 ~=~
\left\{
\begin{array}{ll}
\frac{4}{3} \tilde s(Q^2_{th},Q^2_0)~-~ \frac{2800}{81} p(Q^2_{th},Q^2_0)
, & \mbox{ if } Q^2 \leq Q^2_c \\
1.44 \tilde s(Q^2_{th},Q^2_0)~-~ 38.11 p(Q^2_{th},Q^2_0)
,& \mbox{ if } Q^2>Q^2_c
\end{array} \right.$$ which agrees with the results of [@23] obtained in the framework of the dual parton model. The difference $$\bigtriangleup \alpha_p ~=~ 0.11 \tilde s(Q^2_{th},Q^2_0) - 3.55
p(Q^2_{th},Q^2_0)$$ depends on the values of $Q^2_{th}$ and $Q^2_0$. For $Q^2_{th}=10GeV^2$ and $Q^2_0=1GeV^2$ it is very small: $$\bigtriangleup \alpha_p ~=~ 0.012$$
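The coefficients $0.11$ and $3.55$ in $\bigtriangleup \alpha_p$ are simply the differences between the $f=4$ and $f=3$ values of the two coefficients above; the following lines (illustrative only) confirm this.

```python
d_plus_f3, d_pp_f3 = 4/3, 2800/81    # coefficients for f = 3 (Q^2 <= Q^2_c)
d_plus_f4, d_pp_f4 = 1.44, 38.11     # coefficients for f = 4 (Q^2 > Q^2_c)

ds_coef = d_plus_f4 - d_plus_f3      # coefficient of s~ in Delta(alpha_p), ~ 0.11
dp_coef = d_pp_f4 - d_pp_f3          # coefficient of p  in Delta(alpha_p), ~ 3.55
```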
[**3.**]{} Let us summarize the obtained results. We have obtained the DGLAP equation “solution” having the Regge form (\[1\]) for two cases: at small $Q^2$ ($Q^2 \sim 1GeV^2$), where the SF and PD have a flat behaviour at small $x$, and at large $Q^2$, where the SF $F_2(x,Q^2)$ increases rapidly as $x \to 0$. The behaviour in the flat case is unstable from the perturbative viewpoint, because it leads to the production of a subcritical value of the Pomeron intercept at larger $Q^2$, which increases (like $4/3~ ln(\alpha (Q^2_0)/\alpha(Q^2))$ in LO) as the $Q^2$ value increases[^8]. The solution in the Lipatov Pomeron case corresponds to the well-known results (see [@12; @14; @17]) with a $Q^2$-independent Pomeron intercept. The general “solution” should contain a smooth transition between these pictures. Unfortunately, it is impossible to obtain it in the case of the simple approximation (\[1\]), because the r.h.s. of the DGLAP equation (\[7\]) contains both $\sim x^{-\delta}$ and $\sim
Const$ terms. As a result, we used the two above “solutions” glued together at some point $Q^2_c$.
Note that our “solution” is a generalization (or an application) of the solution of the DGLAP equation in moment space. The latter has two components: “+” and “$-$”. The above conclusions relate to the “+” component, which gives the basic Regge asymptotic. The Pomeron intercept corresponding to the “$-$” component is $Q^2$-independent, and this component is subasymptotic at large $Q^2$. However, the magnitude of the “+” component is suppressed like $1/ln(1/x)$ and $\alpha (Q^2_0)$, so the subasymptotic “$-$” component may be important. Indeed, it is observed experimentally (see [@20.5; @20]). Note, however, that the suppression $\sim
\alpha(Q^2_0)$ is really very slight if we choose a small value of $Q^2_0$.
Our “solution” in the form of Eqs.(\[22\])-(\[25\]) is in very good agreement with the recent $MRS(G)$ fit [@19] and with the results of [@17] at $Q^2=15GeV^2$. As can be seen from Eqs.(\[22\]),(\[23\]), our formulae depend on the PD behaviour at large $x$. Following [@25] we choose $\nu =5$, which in the gluon case agrees with the quark counting rule [@26]. This $\nu$ value is also close to the values obtained by the $CCFR$ group [@27] ($\nu = 4$) and in the last $MRS(G)$ analysis [@19] ($\nu =6$). Note that this dependence is strongly reduced for the gluon PD of the form
$$f_g(x,Q^2_0)~=~A_g(\nu)(1-x)^{\nu},$$ if we suppose that the fraction of the proton’s momentum carried by gluons is $\nu$-independent. We used $A_g(5)=2.1$ and $F_2(x,Q^2_0)=0.3$ when $x \to 0$.
For the quark PD the choice $\nu =3$ is preferable; however, the use of two different $\nu$ values complicates the analysis. Because the quark contribution to the “+” component is not large, we put $\nu =5$ in both the quark and gluon cases. Note also that the variable $\nu (Q^2)$ has (see [@28]) a $Q^2$-dependence determined by the LO AD $\gamma^{(0)}_{NS}$. However, this $Q^2$-dependence is proportional to $s$ and is not important in our analysis.
Starting from $Q^2_0=1GeV^2$ (by analogy with [@23.6]) and from $Q^2_0=2GeV^2$, and using two values of the QCD parameter $\Lambda$: a more standard one ($\Lambda^{f=4}_{\overline {MS}}$ = 200 $MeV$) and the one ($\Lambda^{f=4}_{\overline {MS}}$ = 255 $MeV$) obtained in [@19], we have the following values of the quark and gluon PD “intercepts” $\delta_a ~=~-(d_+ \tilde s +\hat d^a_{++}p)$ (here $\Lambda^{f=4}_{\overline {MS}}$ is denoted as $\Lambda$):
if $Q^2_0$ = 1 $GeV^2$:

  ------- ------------------- ------------------- ------------------- -------------------
  $Q^2$   $\delta_q(Q^2)$     $\delta_g(Q^2)$     $\delta_q(Q^2)$     $\delta_g(Q^2)$
          $\Lambda =200MeV$   $\Lambda =200MeV$   $\Lambda =255MeV$   $\Lambda =255MeV$
  4       0.191               0.389               0.165               0.447
  10      0.318               0.583               0.295               0.659
  15      0.367               0.652               0.345               0.734
  ------- ------------------- ------------------- ------------------- -------------------
if $Q^2_0$ = 2 $GeV^2$:

  ------- ------------------- ------------------- ------------------- -------------------
  $Q^2$   $\delta_q(Q^2)$     $\delta_g(Q^2)$     $\delta_q(Q^2)$     $\delta_g(Q^2)$
          $\Lambda =200MeV$   $\Lambda =200MeV$   $\Lambda =255MeV$   $\Lambda =255MeV$
  4       0.099               0.175               0.097               0.198
  10      0.226               0.368               0.227               0.410
  15      0.275               0.438               0.278               0.486
  ------- ------------------- ------------------- ------------------- -------------------
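The qualitative growth of the “intercepts” with $Q^2$ in the tables can be reproduced with a simple LO-style estimate. The sketch below assumes the standard definitions $\tilde s = \ln[\ln(Q^2/\Lambda^2)/\ln(Q^2_0/\Lambda^2)]$, $p = \alpha(Q^2_0)-\alpha(Q^2)$ and $\alpha = 1/(\beta_0\ln(Q^2/\Lambda^2))$ with $\beta_0 = 11-2f/3$. These definitions are our assumption (the tables were obtained with the full NLO expressions), so only the monotonic rise with $Q^2$, not the exact numbers, should be expected to match.

```python
from math import log

def alpha(Q2, Lam2=0.2**2, f=3):
    """LO coupling alpha = alpha_s/(4 pi) = 1/(beta_0 ln(Q^2/Lambda^2)) (assumed form)."""
    return 1.0 / ((11 - 2*f/3) * log(Q2 / Lam2))

def delta_q(Q2, Q02=1.0, Lam2=0.2**2, f=3):
    """Quark 'intercept' delta_q = (4/3) s~ - (2800/81) p for f = 3."""
    s_tilde = log(log(Q2 / Lam2) / log(Q02 / Lam2))
    p = alpha(Q02, Lam2, f) - alpha(Q2, Lam2, f)
    return (4/3) * s_tilde - (2800/81) * p
```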
Note that these values of $\delta_a$ are above those from [@19]. Because we have the second (subasymptotic) part, our effective “intercepts” have smaller values.
As a conclusion, we note that the BFKL equation (and thus the value of the Lipatov Pomeron intercept) was obtained in [@19.5] in the framework of perturbative QCD. The large-$Q^2$ $HERA$ experimental data are in good agreement with Lipatov’s trajectory and thus with perturbative QCD. The small-$Q^2$ data agree with the standard Pomeron intercept $\alpha_p=1$ or with the Donnachie-Landshoff picture: $\alpha_p=1.08$. Perhaps this range already requires the knowledge of nonperturbative QCD dynamics, and perturbative solutions (including the BFKL one) should not be applied here directly but should be corrected by some nonperturbative contributions.
In our analysis Eq.(\[1\]) can be considered as the nonperturbative (Regge-type) input at $Q^2_0 \sim 1GeV^2$. Above $Q^2_0$ the PD behaviour obeys the DGLAP equation, the Pomeron moves to the subcritical regime and tends to its perturbative value. After some $Q^2_c$, where the perturbative value has already been attained, the Pomeron intercept keeps a constant value. The application of this approach to the analysis of small-$x$ data invites further investigation.
[99]{} H1 Collab.: T.Ahmed et al., $DESY$ preprint 95-006 (1995). ZEUS Collab.: M.Derrick et al., $DESY$ preprint 94-143 (1994). NM Collab.: P.Amadrus et al., [*Phys.Lett.*]{} [**B295**]{}, (1992) 159, [**B309**]{}, (1993) 222. E665 Collab.: in the B.Badelek’s report “Low $Q^2$, low $x$ in electroproduction. An overview.”. In Proceeding de Moriond on QCD and high energy hadron interactions (1995) Les Arc. A.Levy, $DESY$ preprint 95-003 (1995). J.D. Bjorken, In Proceeding of the International Workshop on DIS, Eilat, Izrael, Feb.1994. V.N.Gribov and L.N.Lipatov, [*Sov.J.Nucl.Phys.*]{} [**18**]{}, (1972) 438; L.N.Lipatov, [*Yad.Fiz.*]{} [**20**]{}, (1974) 181; G.Altarelli and G.Parisi, [*Nucl.Phys.*]{} [**B126**]{}, (1977) 298; Yu.L.Dokshitzer, [*ZHETF*]{} [**46**]{} (1977) 641. V.N.Gribov, E.M.Levin and M.G.Ryskin, [*Phys.Rep.*]{} [**100**]{} (1983) 1; E.M.Levin and M.G.Ryskin, [*Phys.Rep.*]{} [**189**]{} (1990) 267. V.I.Vovk, A.V.Kotikov and S.J.Maximov, [*Teor.Mat.Fiz.*]{} [**84**]{} (1990) 101; A.V.Kotikov, S.I.Maximov and I.S. Parobij, Preprint ITP-93-21E (1993) Kiev, [*Teor.Mat.Fiz.*]{} (1995) in press. M.Virchaux and A.Milsztain, [*Phys.Lett.*]{} [**B274**]{} (1992) 221. A.V.Kotikov, work in progress L.L.Enkovszky, A.V.Kotikov and F.Paccanoni, [ *Yad.Fiz.*]{} [**55**]{} (1993) 2205. A.V.Kotikov, [*Yad.Fiz.*]{} [**56**]{} (1993) N9, 217. A.V.Kotikov, [*Yad.Fiz.*]{} [**57**]{} (1994) 142; [*Phys.Rev.*]{} [**D49**]{} (1994) 5746. R.K.Ellis, E.Levin and Z.Kunst, [*Nucl.Phys.*]{} [**420B**]{} (1994) 514. R.K.Ellis, F.Hautmann and B.R.Webber, [*Phys.Lett.*]{} [**B348**]{} (1995) 582. N.N.Nikolaev, B.G.Zakharov and V.R.Zoller, [ *Phys.Lett.*]{} [**B328**]{}, (1994) 486; N.N.Nikolaev and B.G.Zakharov, [*Phys.Lett.*]{} [**B327**]{}, (1994) 149. 
C.Lopez and F.J.Yndurain, [*Nucl.Phys.*]{} [**171B**]{} (1980) 231; [**183B**]{} (1981) 157; A.M.Cooper-Sarkar, G.Ingelman, K.R.Long, R.G.Roberts and D.H.Saxon, [*Z.Phys.*]{} [**C39**]{} (1988) 281; A.V.Kotikov, $JINR$ preprints P2-88-139, E2-88-422 (1988) Dubna (unpublished). G.M.Frichter, D.W.McKay and J.P.Ralston, [*Phys.Rev.Lett.*]{} [**74**]{} (1995) 1508. M.Bertini, P.Desgrolard, M.Giffon, L.Jenkovszky and F.Paccanoni, Preprint LYCEN/9366 (1993). A.J.Buras, [*Rev.Mod.Phys.*]{} [**52**]{} (1980) 149. E.A.Kuraev, L.N.Lipatov and V.S.Fadin, [*ZHETF*]{} [**53**]{} (1976) 2018, [**54**]{} (1977) 128; Ya.Ya.Balitzki and L.N.Lipatov, [*Yad.Fiz.*]{} [**28**]{} (1978) 822; L.N.Lipatov, [*ZHETF*]{} [**63**]{} (1986) 904. M.Giafaloni, [*Nucl.Phys.*]{} [**B296**]{}, (1987) 249; S.Catani, F.Fiorani and G.Marchesini, [*Phys.Lett.*]{} [**B234**]{} (1990) 389, [*Nucl.Phys.*]{} [**B336**]{} (1990) 18; S.Catani, F.Fiorani, G.Marchesini and G.Oriani, [*Nucl.Phys.*]{} [**B361**]{} (1991) 645; G.Wolf, $DESY$ preprint 94-022 (1994). ZEUS Collab.: M.Derrick et al., [*Phys.Lett.*]{} [**B345**]{}, (1995) 576. A.D.Martin, W.S.Stirling and R.G.Roberts, Preprint RAL-95-021, DTP/95/14 (1995). W.K.Tung, [*Nucl.Phys.*]{} [**B315**]{} (1989) 378. SLD Collab.: K.Abe et al., preprint $SLAC$-PUB-6687 (1995), submitted to [*Phys.Rev.Lett.*]{}. A.Capella, U.Sukhatme,C.-I.Tan and J.Tran Thanh Van, [*Phys.Rep.*]{} [**236**]{} (1993) 225; [*Phys.Rev.*]{} [**D36**]{} (1987) 109. A.Capella, A.Kaidalov, C.Merino,and J.Tran Thanh Van, [*Phys.Lett.*]{} [**B337**]{} (1994) 358; M.Bertini, M.Giffon and E.Predazzi, Preprint LYCEN/9504 (1995). R.D.Ball and S.Forte, [*Phys.Lett.*]{} [**B336**]{} (1994) 77; preprints CERN-TH-7422-94 (1994), CERN-TH-95-1(1995). A.Donnachie and P.V.Landshoff, [*Nucl.Phys.*]{} [**B303**]{} (1988) 634. S.Brodsky and G.Farrar, [*Phys.Rev.Lett.*]{} [**31**]{} (1973) 1153; V.Matveev, R.Muradyan and A.Tavkhelidze, [ *Lett. Nouvo Cim.*]{} [**7**]{} (1973) D654.. 
CCFR Collab.: R.Z.Quintas et al., [*Phys.Rev.Lett.*]{} [**71**]{} (1993) 1307. D.J.Gross, [*Phys.Rev.Lett.*]{} [**32**]{} (1974) 1071.
[^1]: More correctly, $\phi$ is $Q^2$-dependent for the solution of the DGLAP equation with the boundary condition $f_a(x,Q^2_0)= Const$ at $x \to 0$. In the case of the boundary condition $f_a(x,Q^2_0) \sim
\exp{\sqrt{ ln(1/x)}}$, $\phi$ loses (see [@8]) its $Q^2$-dependence
[^2]: We use the term “solution” because we work in the leading twist approximation in the range $Q^2>1GeV^2$, where the higher twist terms may give a sizeable contribution (see, for example, [@9]). Moreover, our “solution” is the Regge asymptotic with unknown parameters rather than the solution of the DGLAP equation. The parameters are found from the agreement of the r.h.s. and l.h.s. of the equation.
[^3]: Consideration of the more complicated behaviour of the form $x^{-\delta}(ln(1/x))^b
I_{2g}(\sqrt{\phi ln(1/x)})$ is given in [@8] and will be considered in this context in the forthcoming article [@10]
[^4]: In the double-logarithmic approximation similar results were obtained in [@15.5]
[^5]: The method is based on earlier results [@16]
[^6]: The formula used (Eq.(2) from [@17]) coincides with (\[6\]) in the leading order (LO) approximation if we keep only $f_g(x,Q^2)$ in the r.h.s. of (\[2\]) (or formally put $\gamma_{qq}=0$ and $\gamma_{qg}=0$). Eq.(\[6\]) and Eq.(2) from [@17] have some differences at next-to-leading order (NLO), which are not very important because they are corrections to the $\alpha$-correction.
[^7]: The form $\exp \Bigl({ -s \tilde
\gamma_+(1+\delta)/(2\beta_0)} \Bigr)$ coincides with both solutions: Eq.(\[9\]) if $x^{\hat d_+} >>1$ and Eq.(\[11\]) when $\delta
=0$, but it is not a solution of the DGLAP equation.
[^8]: A Pomeron intercept value increasing with $Q^2$ was also obtained in [@23.5].
| {
"pile_set_name": "ArXiv"
} |
---
bibliography:
- 'Biblio42.bib'
---
Introduction
============
In a world where data-acquisition technologies are growing rapidly, the exploratory analysis of large, heterogeneous databases remains a little-studied field. A fundamental unsupervised analysis technique is clustering, whose objective is to discover the underlying structure of the data by grouping *similar* individuals into homogeneous groups. However, in many exploratory data-analysis contexts, this object-grouping technique remains insufficient to discover the most relevant patterns. Co-clustering [@hartigan1975], which emerged as an extension of clustering, is an unsupervised technique whose objective is to jointly group the two dimensions of the same data table, exploiting the interdependence between the two entities (instances and variables) represented by these two dimensions to extract the underlying structure of the data. This technique is the best suited, for example, in contexts such as market-basket analysis, where the objective is to identify subsets of customers who tend to buy the same subsets of products, rather than simply grouping customers (or products) according to purchase/sale patterns.
Several co-clustering approaches have been developed in the literature. In particular, some co-clustering algorithms optimize a function measuring the deviation between the data matrix and the co-cluster matrix [@church2000]. Other techniques are based on information theory [@dhillon2003], on mixture models defining latent block models [@govaert2008], on Bayesian parameter estimation [@banerjee2008], on matrix approximation [@seung2001], or on graph partitioning [@dhillon2001]. However, these methods naturally apply to data of a single type.
In [@bouchareb2017], we proposed a methodology extending the use of co-clustering to data tables containing both numerical and categorical variables. The approach is based on an equal-frequency discretization of all variables, controlled by a user parameter, followed by the application of a co-clustering method on the discretized data. In this paper, we propose a new family of models that formalizes this methodology. The model proposed here requires no user parameter and enables automatic inference of the optimal discretizations of the variables through a regularized approach, as opposed to the user-defined discretization proposed in [@bouchareb2017]. A new criterion, measuring the ability of the model to represent the data, and new algorithms are presented.
The rest of this paper is organized as follows. Section \[ModelCriterion\] presents the proposed model, the selection criterion, and the implemented optimization strategy. Section \[Experiments\] presents experimental results on real data, and Section \[Conclusion\] gives conclusions and perspectives.
A co-clustering model for mixed data {#ModelCriterion}
====================================
Before presenting the proposed model, let us describe the data as seen by the model. The data consist of a set of instances (row identifiers of the matrix) and a set of variables that can be numerical or categorical. We define the notion of an observation, which represents a ’log’ of an interaction between an instance and a variable. This representation allows us to handle missing values in the data, as well as multiple observations per (instance, variable) pair, as in time series. A simple example illustrating this representation is given by:
This example contains $4$ instances ($i_1,\ldots,\,i_4$), 3 numerical variables ($X_1, X_2, X_3$), 2 categorical variables ($X_4, X_5$), and a total of $21$ observations.
The model parameters
--------------------
The co-clustering model is defined by a hierarchy of parameters. At each level of the hierarchy, the parameters are chosen according to the preceding parameters.
\[definitionModel\] The co-clustering model for mixed data is defined by:
- the size of the partition of each variable, where a partition is a grouping of values in the case of a categorical variable and a discretization into intervals in the case of a numerical variable,

- the partition of the values of each categorical variable into groups of values,

- the numbers of instance clusters and of clusters of variable parts; these choices define the size of the co-cluster matrix,

- the partition of the instances and of the variable parts according to the chosen numbers of clusters,

- the distribution of the observations over the cells of the co-cluster matrix,

- the distribution of the observations associated with each instance cluster (resp. cluster of variable parts) over the instances (resp. variable parts) of the cluster,

- the distribution of the observations of each part of a categorical variable over the values of the part.
#### Notation.

To formalize this model, we use the following notation:
- $N$: the total number of observations (known),

- $K_n$: the number of numerical variables (known),

- $K_c$: the number of categorical variables (known); $\mathbf{X}_c$ denotes the set of these variables,

- $V_k$: the number of unique values of the categorical variable $X_k$ (known),

- $J_k$: the number of parts of variable $X_k$ (**unknown**),

- $I$: the total number of instances (known),

- $J=\sum_k{J_k}$: the total number of variable parts (deduced),

- $G_u$: the number of instance clusters (**unknown**),

- $G_p$: the number of clusters of variable parts (**unknown**),

- $G = G_u\times G_p$: the number of co-clusters (deduced),

- $N_{g_u, g_p}$: the number of observations in the co-cluster formed by the instance cluster $g_u$ and the variable-part cluster $g_p$ (**unknown**),

- $N^{(u)}_{g_u}$: the number of observations in the instance cluster $g_u$ (deduced),

- $N^{(p)}_{g_p}$: the number of observations in the variable-part cluster $g_p$ (deduced),

- $m^{(u)}_{g_u}$: the number of instances in the instance cluster $g_u$ (deduced),

- $m^{(p)}_{g_p}$: the number of parts in the variable-part cluster $g_p$ (deduced),

- $m^{(k)}_{j_k}$: the number of values in part $j_k$ of variable $X_k$ (deduced),

- $n_{i.}$: the number of observations associated with the $i^{th}$ instance (**unknown**),

- $n_{.kj_k}$: the number of observations associated with part $j_k$ of variable $X_k$ (**unknown**),

- $n_{v_{k}}$: the number of observations associated with value $v_k$ of the categorical variable $X_k$ (**unknown**).
A model from Definition \[definitionModel\] is completely defined by the choice of the parameters marked **unknown** above.
Bayesian criterion for selecting the best model
-----------------------------------------------
We assume a prior distribution over the parameters that is as uninformative as possible, exploiting the parameter hierarchy with a uniform prior at each level.
Given the parameters, the conditional likelihood $P(\mathcal{D}|\mathcal{M})$ of the data given the model can be defined by a multinomial distribution at each level of the hierarchy. The product of the prior probability of the model and the likelihood allows an exact computation of the posterior probability of the model given the data, $P(\mathcal{M}|\mathcal{D})$. From this probability, we define a model selection criterion $\mathcal{C}(\mathcal{M}) = -\log P(\mathcal{M}|\mathcal{D})$, given by Theorem \[criterion\].
\[criterion\] Among the models of Definition \[definitionModel\], a model with a uniform hierarchical prior is optimal if it minimizes the criterion:
[ $$\label{CriterionEquation}
\begin{split}
\mathcal{C}(\mathcal{M}) =& \sum\limits_{X_k\in \mathbf{X}_c}\log V_k +K_n \log N+\sum\limits_{X_k\in \mathbf{X}_c}{\log B(V_k, J_k)} +\log I +\log J \\
& + \log B(I, G_u)+ \log B(J, G_p)+ \log \binom{
N+G-1}{G-1}+ \sum\limits_{g_u=1}^{G_u}\log \binom{
N_{g_u}^{(u)}+m_{g_u}^{(u)}-1}{m_{g_u}^{(u)}-1}\\
& +\sum\limits_{g_p=1}^{G_p} \log \binom{
N_{g_p}^{(p)}+m_{g_p}^{(p)}-1}{m_{g_p}^{(p)}-1} + \sum\limits_{X_k\in \mathbf{X}_c}\sum\limits_{j_k=1}^{J_k} \log \binom{
n_{.kj_k}+m_{j_k}^{(k)}-1}{m_{j_k}^{(k)}-1} \\
& + \log N! -\sum\limits_{g_u=1}^{G_u}
\sum\limits_{g_p=1}^{G_p} {\log
N_{g_u, g_p}!} + \sum\limits_{g_u=1}^{G_u}{\log N_{g_u}^{(u)}!} - \sum\limits_{i=1}^{I}{\log
n_{i.}! } \\
&
+\sum\limits_{g_p=1}^{G_p}{\log N_{g_p}^{(p)}! } - \sum\limits_{X_k\in \mathbf{X}_c}
\sum\limits_{v_k =1}^{V_k}{\log
n_{v_k}!}
\end{split}$$ ]{}
where $B(A, B)=\sum\limits_{b=1}^{B}{S(A, b)}$, built from the Stirling numbers of the second kind, is the number of ways of partitioning $A$ values into at most $B$ groups.
The first three lines represent the prior cost of the model, while the last two represent the likelihood cost. For lack of space, the proof of this theorem is not presented in this paper.
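The combinatorial terms of the criterion are straightforward to evaluate numerically. The sketch below (an illustration, not the authors' implementation) computes $\log B(A,B)$ from the second-kind Stirling numbers via their standard recurrence.

```python
from functools import lru_cache
from math import log

@lru_cache(maxsize=None)
def stirling2(a, b):
    """Stirling number of the second kind S(a, b): number of partitions of
    a values into exactly b non-empty groups."""
    if b == 0:
        return 1 if a == 0 else 0
    if b > a:
        return 0
    # standard recurrence: S(a, b) = b*S(a-1, b) + S(a-1, b-1)
    return b * stirling2(a - 1, b) + stirling2(a - 1, b - 1)

def log_B(a, b):
    """log of B(a, b) = sum_{k=1..b} S(a, k): number of partitions of a
    values into at most b groups, as used in the criterion."""
    return log(sum(stirling2(a, k) for k in range(1, b + 1)))
```

For instance, $B(4, 2) = S(4,1) + S(4,2) = 1 + 7 = 8$.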
Optimization algorithm {#OptimizationStrategy}
----------------------
Because of their high expressiveness, co-clustering models for mixed data are hard to optimize. In this paper, we propose a two-stage optimization heuristic. In the first stage, we partition the variables into equal frequencies using a predefined set of partition sizes, and we apply the methodology proposed in [@bouchareb2017] to find initial co-clusters. Among the tested sizes, we keep as a starting point the initial solution corresponding to the minimal value of the criterion. Starting from this initial solution, the second stage is a post-optimization that performs the cluster merges, variable-part merges, moves of variable parts between clusters, and moves of values between parts that best decrease the criterion. This post-optimization selects the best model among a large subset of tested models while improving interpretability, since the optimized model is often very compact compared with the initial solution.
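The equal-frequency partitioning used in the first stage can be sketched as follows; this is illustrative code under our own conventions (the paper does not describe the implementation at this level of detail).

```python
def equal_freq_bounds(values, n_parts):
    """Equal-frequency discretization: return the n_parts - 1 cut points such
    that each resulting interval receives (nearly) the same number of
    observations."""
    v = sorted(values)
    n = len(v)
    # the i-th bound is the first value of the (i+1)-th part
    return [v[(i * n) // n_parts] for i in range(1, n_parts)]
```

For instance, `equal_freq_bounds(range(1, 11), 2)` yields `[6]`, splitting the ten values into two parts of five observations each.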
Experiments {#Experiments}
===========
To validate the contribution of the proposed model to the exploratory analysis of mixed data, we applied it to the Iris and CensusIncome databases [@datasets].
The Iris database contains $150$ instances, $750$ observations, $4$ numerical variables, and $1$ categorical variable. The initial partition sizes range from 2 to 10 parts per variable.
Figure \[IrisCoclusters\] shows the best model for the Iris database. This model results from an initial equal-frequency discretization into $3$ parts per variable, followed by a post-optimization that merges two parts, yielding $14$ parts in total. The color of a co-cluster shows the mutual information between the instances and the variable parts forming the co-cluster. Red indicates an over-representation of observations with respect to the case of independence, blue an under-representation, and white an empty co-cluster. For confirmation, the number of observations per co-cluster is shown in Figure \[IrisCoclusters\].
The optimized model contains 3 instance clusters and 7 clusters of variable parts. The composition of the best-represented clusters of variable parts explains the instance clusters. In particular, we distinguish:
- a cluster ($C_1^u$) of 50 instances containing the small setosa flowers, characterized by $C_1^p$ (i.e. *Class*$\{setosa\}$, *PetalLength*$]-inf;2.4]$, and *PetalWidth*$]-inf;0.8]$),

- a cluster ($C_2^u$) of 51 instances containing the large virginica flowers, characterized by $C_4^p$ (i.e. *PetalLength*$]4.85;+inf[$, *PetalWidth*$]1.65;+inf[$, and *Class*$\{virginica\}$),

- a cluster ($C_3^u$) of 49 instances containing the medium versicolor flowers, characterized by $C_6^p$ (i.e. *PetalLength*$]2.4;4.85]$, *PetalWidth*$]0.8;1.65]$, and *Class*$\{versicolor\}$).
We note that the variables *Class*, *PetalLength*, and *PetalWidth* are strongly correlated and the most informative with respect to the instance clusters.
For the CensusIncome database, which contains $299{,}285$ instances, $11{,}945{,}874$ observations, $8$ numerical variables, and $34$ categorical variables, the initial partition sizes range from $2$ to $128$, by powers of $2$. The best model is found from the initial solution with $64$ parts per variable. The post-optimized model contains $256$ variable parts, $607$ instance clusters, and $97$ clusters of variable parts. At a first level of analysis, our co-clustering model globally distinguishes two families of instances (Figure \[CoclustersCensus2\]): active individuals (tax payers, aged $27$ to $64$, earning more than $50K$ per year, $\ldots$) and inactive individuals (non tax payers, aged under $15$, earning less than $50K$ per year, $\ldots$).
Overall, the obtained model provides a summary of the database that is very rich in information and can be exploited at several levels of granularity to drive exploratory analysis.
Conclusion {#Conclusion}
==========
In this paper, we proposed a co-clustering model for mixed data, a criterion for selecting the best model, and an optimization algorithm. We showed the effectiveness of this model in extracting interesting patterns both from small, simple databases such as Iris and from large, complex databases such as CensusIncome.
However, when the data are voluminous and highly complex, our model captures this complexity and provides a very detailed co-clustering, at the expense of interpretability. In future work, we will aim to develop a methodology for interpreting the results at different levels of granularity and for identifying the instances and variable parts that are most representative of each cluster, in order to ease the interpretation of the model.
---
author:
- 'O. Absil[^1]'
- 'V. Coudé du Foresto'
- 'M. Barillot'
- 'M. R. Swain'
bibliography:
- '7582.bib'
date: 'Received 2 April 2007 / Accepted 28 August 2007'
title: 'Nulling interferometry: performance comparison between Antarctica and other ground-based sites'
---
[Detecting the presence of circumstellar dust around nearby solar-type main sequence stars is an important pre-requisite for the design of future life-finding space missions such as ESA’s Darwin or NASA’s Terrestrial Planet Finder (TPF). The high Antarctic plateau may provide appropriate conditions to perform such a survey from the ground.]{} [We investigate the performance of a nulling interferometer optimised for the detection of exozodiacal discs at Dome C, on the high Antarctic plateau, and compare it to the expected performance of similar instruments at temperate sites.]{} [Based on the currently available measurements of the turbulence characteristics at Dome C, we adapt the GENIEsim software (Absil et al. 2006, A&A 448) to simulate the performance of a nulling interferometer on the high Antarctic plateau. To feed a realistic instrumental configuration into the simulator, we propose a conceptual design for ALADDIN, the Antarctic $L$-band Astrophysics Discovery Demonstrator for Interferometric Nulling. We assume that this instrument can be placed above the 30-m high boundary layer, where most of the atmospheric turbulence originates.]{} [We show that an optimised nulling interferometer operating on a pair of 1-m class telescopes located 30 m above the ground could achieve a better sensitivity than a similar instrument working with two 8-m class telescopes at a temperate site such as Cerro Paranal. The detection of circumstellar discs about 20 times as dense as our local zodiacal cloud seems within reach for typical Darwin/TPF targets in an integration time of a few hours. Moreover, the exceptional turbulence conditions significantly relax the requirements on real-time control loops, which has favourable consequences on the feasibility of the nulling instrument.]{} [The perspectives for high dynamic range, high angular resolution infrared astronomy on the high Antarctic plateau look very promising.]{}
Introduction
============
Nulling interferometry is considered to be the technique that can enable the spectroscopic characterisation of the atmosphere of habitable extrasolar planets in the thermal infrared, where markers of biological activity have been identified [@Kaltenegger07]. This is actually the objective of the Darwin and TPF missions studied by ESA and NASA, respectively [@Fridlund04; @Beichman99]. While the spectral domain ($6 - 20\,\mu$m, where the atmosphere is mostly opaque) and required dynamic range ($\sim 10^7$) mandate a space interferometer to achieve this goal, a ground-based pathfinder might be needed to demonstrate the technique in an operational context, carry out precursor science, and therefore pave the way for space missions.
One of the main limitations of ground-based nulling interferometers is related to the influence of atmospheric turbulence. Active compensation of the harmful effects of turbulence requires real-time control systems to be designed with challenging requirements [@Absil06a hereafter Paper I]. The choice of a good astronomical site with (s)low turbulence is therefore of critical importance. In this respect, recent studies suggest that the high Antarctic plateau might be the best place on Earth to perform high-resolution observations in the infrared domain, thanks to its very stable atmospheric conditions.
The Antarctic plateau has long been recognised as a high-quality site for observational astronomy, mainly in the context of sub-millimetric and infrared applications, for which the low temperature and low water vapour content bring a substantial gain in sensitivity. However, the only site that has been extensively used for astronomy so far is the South Pole, where high wind velocity causes poor turbulence conditions and thereby prevents high-resolution applications in the near-infrared. The construction of the French-Italian Concordia station at Dome C (75$^{\circ}$S, 123$^{\circ}$E) has recently opened the path to new and exciting astronomical studies [@Candidi03]. Its main peculiarity with respect to the South Pole station is that it resides on a local summit of the plateau (3250 m), where katabatic winds have not yet acquired a significant velocity nor a large thickness by flowing down the slope of the plateau. For this reason, it is expected that Dome C could become the best accessible site on the continent, and, given its promising environmental characteristics, it is worthwhile to investigate its potential for a ground-based nulling interferometer.
Mission definition
==================
In order to provide a valid comparison with respect to a temperate site, we chose to study the potential of the Antarctic plateau in the context of a well specified mission, i.e., the ground-based Darwin demonstrator that has been identified and studied by ESA at the phase A level [@Gondoin04]. In its original version, GENIE (Ground-based European Nulling Interferometer Experiment) is conceived as a focal instrument of the VLTI and its science objective is the study of the exozodiacal dust around nearby solar-type stars like the Darwin targets. Indeed, our knowledge of the dust distribution in the first few AUs around solar-type stars is currently mostly limited to the observation of the solar zodiacal disc, a sparse structure of warm silicate grains 10 to 100$\mu$m in diameter, which is the most luminous component of the solar system after the Sun. The presence of similar discs around the Darwin targets (exozodiacal discs) may present a severe limitation to the Earth-like planet detection capabilities of this mission, as warm exozodiacal dust becomes the main source of noise if it is more than 20 times as dense as in the solar zodiacal disc [@Beichman06]. On-going interferometric studies are indeed suggesting that dense exozodiacal discs may be more common than anticipated [@Absil06b; @DiFolco07]. The prevalence of dust in the habitable zone around nearby solar-type stars must therefore be assessed before finalising the design of the Darwin mission.
Besides its scientific goals, the demonstrator also serves as a technology test bench to validate the operation of nulling interferometry on the sky.
Atmospheric parameters at Dome C
================================
Several locations on the Antarctic plateau are expected to provide excellent atmospheric conditions for high-angular resolution astronomy. Because Dome C has been extensively characterised during the past years, it is taken as a reference site for the present study. It might well turn out in the future that other sites, such as Dome A or Dome F, are better suited than Dome C for the considered mission. On the one hand, the turbulent ground layer might be thinner than at Dome C [@Swain06], while on the other hand, free air seeing[^2] could be somewhat smaller at Antarctic sites located closer to the centre of the polar vortex [@Marks02].
Atmospheric turbulence
----------------------
Intensive site characterisation at Dome C has been carried out since the austral summer 2002–03, with the deployment of several instruments [@Aristidi03; @Lawrence03]. First, daytime seeing measurements with a Differential Image Motion Monitor (DIMM) were used to derive a median seeing value of $1\farcs2$ [@Aristidi03]. Later on, using a Multi-Aperture Scintillation Sensor (MASS) and a Sonic radar (SODAR) in automated mode during wintertime, @Lawrence04 reported a median seeing of $0\farcs27$. The isoplanatic angle $\theta_0$ and coherence time $\tau_0$ were also derived from MASS measurements, with average values of $5\farcs7$ and 7.9 msec respectively. For comparison, the corresponding values at Cerro Paranal are $\theta_0=2\farcs5$ and $\tau_0=3.3$ msec. These outstanding atmospheric conditions are however valid only above 30 m, as the SODAR measures the distribution of turbulence in an atmospheric layer between 30 and 900 m above the ground, while the MASS is insensitive to seeing below about 500 m.
These first results suggest that most of the atmospheric turbulence is concentrated in a thin boundary layer, about 30 m thick. The simultaneous use of two DIMMs at different heights (3 m and 8 m above the ice surface) further confirms this fact, showing that half of the turbulence is concentrated into the first 5 m above the surface [@Aristidi05]. A similar behaviour had already been reported at the South Pole, where SODAR measurements showed that turbulence was mostly confined to a boundary layer sitting below 270 m [@Travouillon03]. This behaviour can be explained by the horizontal katabatic wind, whose altitude profile closely matches the turbulence profile.
In 2005, the first winter-over mission at Dome C has allowed DIMM measurements and balloon-borne thermal measurements to be obtained during the long Antarctic night. Preliminary results reported by @Agabi06 confirm the two-layered structure of atmospheric turbulence at Dome C. A 36-m thick surface layer is responsible for 87% of the turbulence, resulting in a total seeing of $1\farcs9 \pm 0\farcs5$, while the very stable free atmosphere has a median seeing of $0\farcs36 \pm 0\farcs19$ above 30 m. This value is remarkably similar to the median free air seeing of $0\farcs32$ reported at South Pole by @Marks99.
Water vapour seeing
-------------------
Another critical parameter for infrared observations is the water vapour content of the atmosphere. On one hand, it strongly influences the sky transparency as a function of wavelength, and on the other hand, its temporal fluctuations are an important source of noise for infrared observations. The water vapour content of the Antarctic atmosphere has been measured at South Pole by radiosonde, giving an exceptionally low average value of 250 $\mu$m during austral winter [@Chamberlin97; @Bussmann05], where temperate sites typically have a few millimetres of precipitable water vapour (PWV). This is mainly due to the extreme coldness of the air, with a ground-level temperature of about $-61^{\circ}$C (212 K) at the South Pole during winter,[^3] which induces a low saturation pressure for water vapour. The winter time PWV at Dome C is estimated to be between 160 $\mu$m [@Lawrence04b] and 350 $\mu$m [@Swain06b].
  Site            $r_0$     $\langle {\rm PWV} \rangle$   $\sigma_{\rm PWV}$   References
  --------------- --------- ----------------------------- -------------------- -------------
  Cerro Paranal   14.5 cm   3 mm                          27 $\mu$m            @Meisner02
  Mauna Kea       17.8 cm   1.6 mm                        11 $\mu$m            @Colavita04
  Dome C          38.2 cm   0.25 mm                       1 $\mu$m             @Bussmann05

  : Mean precipitable water vapour content and rms PWV fluctuation at Dome C and at two temperate sites.[]{data-label="tab:pwv"}
The very low water vapour content of the Dome C atmosphere has an important advantage in the context of high-precision infrared interferometry: longitudinal dispersion, created by the fluctuations of the water vapour column density above the telescopes [@Colavita04], is greatly reduced with respect to temperate sites. The standard deviation of PWV ($\sigma_{\rm PWV}$) can be estimated at Dome C assuming that water vapour seeing follows the same statistics as the piston, as suggested by @Lay97. In that case, the standard deviation of the PWV fluctuations depends on the Fried parameter $r_0$ as $\sigma_{\rm PWV} \propto r_0^{-5/6}$ [@Roddier81]. Assuming that $\sigma_{\rm PWV}$ is also proportional to $\langle {\rm PWV} \rangle$, the average PWV content, its value at Dome C can then be obtained by means of a comparison with the data obtained at temperate sites: $$\sigma_{\rm PWV} ({\rm DC}) = \frac{\langle {\rm PWV} \rangle_{\rm DC}}{\langle {\rm PWV} \rangle_i} \left( \frac{r_{0,\,{\rm DC}}}{r_{0,\,i}} \right)^{-5/6} \sigma_{\rm PWV} (i) \; ,
\label{eq:pwv}$$ where $i$ represents a (well-characterised) temperate site. The application of this relation with the atmospheric parameters of either Cerro Paranal or Mauna Kea taken as a reference gives very similar estimates of 1.0 and 0.91 $\mu$m for $\sigma_{\rm PWV}$ at Dome C (see Table \[tab:pwv\]), using an average PWV content of 250 $\mu$m for Dome C. For this calculation, we assumed a seeing of $0\farcs27$, which is valid only above 30 m [@Lawrence04]. Using this value is recommended in the present case for two reasons: on one hand, telescopes are contemplated to be placed above the turbulent ground layer, and on the other hand, the study of @Bussmann05 shows that most of the PWV is concentrated between 200 m and 2 km above the ground, so that water vapour seeing is suspected to be only weakly affected by the ground layer.
Atmospheric transmission and sky brightness
-------------------------------------------
Another benefit from the low water vapour content is to widen and improve the overall transmission of the infrared atmospheric windows. @Lawrence04b shows that the $K$ band is extended up to 2.5 $\mu$m and the $L$ band from 2.9 to 4.2 $\mu$m. The transmission of the $M$ band around 5 $\mu$m is also significantly improved.
The infrared sky brightness is also partially determined by the water vapour content, which affects its wavelength-dependent emissivity factor. The other parameter influencing the sky emission is its effective temperature, which depends on the altitude of the main opacity layer at a given wavelength. The effective temperature above South Pole has been measured by @Chamberlain00 in the mid-infrared, with values ranging from 210 K to 239 K depending on wavelength. Most of the winter sky background emission is in fact assumed to emanate from an atmospheric layer just above the temperature inversion layer, located between 50 and 200 m at Dome C [@Chamberlain00; @Lawrence04b]. The atmospheric temperature at this altitude is about 230 K in wintertime [@Agabi06]. As a result of both low temperature and low emissivity, the sky background is exceptionally low in Antarctica. The measurements obtained at South Pole show that it is reduced by a factor ranging between 10 and 100 in the infrared domain with respect to temperate sites. The largest gain in sensitivity for astronomical observations is expected to arise in the $K$, $L$ and $M$ bands. It is estimated that 1-m class telescopes at Dome C would reach almost the same sensitivity as 8-m class telescopes at a temperate site at these wavelengths.
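The temperature part of this gain can be illustrated with Planck's law. The sketch below compares the $L$-band spectral radiance of a 230 K sky with that of a warmer temperate-site sky; the 285 K effective temperature and the 3.8 $\mu$m wavelength are illustrative assumptions, and the additional gain from the lower Antarctic emissivity is not included:

```python
import math

def planck_radiance(wavelength_m, temp_k):
    """Black-body spectral radiance B_lambda [W m^-3 sr^-1] (Planck's law)."""
    h, c, k = 6.62607e-34, 2.99792e8, 1.380649e-23
    x = h * c / (wavelength_m * k * temp_k)
    return 2.0 * h * c**2 / wavelength_m**5 / math.expm1(x)

lam = 3.8e-6    # representative L-band wavelength (assumed)
ratio = planck_radiance(lam, 285.0) / planck_radiance(lam, 230.0)
# temperature alone already gives a factor of ~24 in this band
```

The temperature difference alone thus accounts for a large part of the factor 10–100 quoted above; the lower sky emissivity provides the rest.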
The atmospheric parameters discussed in this section are summarised in Table \[tab:atmoparam\].
  Parameter                         Value
  --------------------------------- -------------
  Fried parameter $r_0$ at 500 nm   38 cm
  Equivalent seeing                 $0\farcs27$
  Coherence time $\tau_0$           7.9 msec
  Equivalent wind speed             15 m/s
  Outer scale $L_{\rm out}$         100 m
  Sky temperature                   230 K
  Ambient temperature at $h=30$ m   230 K
  Mean PWV                          250 $\mu$m
  rms PWV                           1 $\mu$m
  Pressure                          640 mbar

  : Atmospheric parameters assumed at Dome C for the performance simulations.[]{data-label="tab:atmoparam"}
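The atmospheric parameters listed above are mutually consistent under the usual Kolmogorov-turbulence relations, seeing $\simeq 0.98\,\lambda/r_0$ and $\tau_0 \simeq 0.314\,r_0/v$; the sketch below checks this (the 0.98 and 0.314 prefactors are the standard point-source values, not taken from this paper):

```python
import math

ARCSEC = 180.0 / math.pi * 3600.0   # radians -> arcseconds

r0 = 0.38     # Fried parameter at 500 nm [m]
wind = 15.0   # equivalent wind speed [m/s]

seeing = 0.98 * 500e-9 / r0 * ARCSEC   # FWHM seeing, ~0.27 arcsec
tau0_ms = 0.314 * r0 / wind * 1e3      # coherence time, ~7.9 ms
```

Both derived values match the tabulated seeing of $0\farcs27$ and coherence time of 7.9 msec.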
The ALADDIN nulling interferometer concept
==========================================
To provide plausible inputs in terms of instrumental parameters for the performance simulation, a conceptual design is needed for our Antarctic nulling interferometer. The main design guidelines that have been adopted are the minimisation of the number of open air reflections, the preservation of the full symmetry between the two beams and the optimisation of the range of baselines for typical Darwin target stars. Another critical guideline to benefit from the outstanding free air seeing is to place the instrument above the boundary layer (i.e., about 30 m above the ground at Dome C). The following sections describe a practical concept of a nulling interferometer dedicated to exozodiacal disc detection, which follows these recommendations without pretending to be optimal. This concept is referred to as ALADDIN, the Antarctic $L$-band Astrophysics Discovery Demonstrator for Interferometric Nulling.
The interferometric infrastructure
----------------------------------
![image](7582fig1.eps){width="18cm"}
The concept proposed here consists of a 40 m long rotating truss installed on top of a 30 m tower, and on which are placed two moveable siderostats feeding off-axis telescopes (Fig. \[fig:aladdin\]). Such a design has two main advantages: first, thanks to the moveable siderostats, the baseline length can be optimised to the observed target and second, thanks to the rotating truss, the baseline can always be chosen perpendicular to the line of sight so that neither long delay lines nor dispersion correctors are needed. Moreover, polarisation issues, which are especially harmful in nulling interferometry [@Serabyn01], are mitigated by this fully symmetric design. The available baseline lengths range from 4 to 30 m and provide a maximum angular resolution of 10 mas in the $L$ band. This is largely sufficient to study the habitable zones around Darwin/TPF-I candidate targets, since they are typically separated by a few tens of milliarcseconds from their parent star [@Kaltenegger06].
  Parameter                   Value
  --------------------------- --------------------------
  Baselines                   $4-30$ m
  Telescope diameter          1 m
  Number of warm optics       5
  Warm optics temperature     230 K
  Warm throughput             80%
  Warm emissivity             20%
  Number of cold optics       15
  Cryogenic temperature       77 K
  Cold throughput             10%
  Science waveband            $3.1-4.1$ $\mu$m ($L$)
  Fringe sensing waveband     $2.0-2.4$ $\mu$m ($K$)
  Tip-tilt sensing waveband   $1.15-1.3$ $\mu$m ($J$)

  : Instrumental parameters of the ALADDIN concept.[]{data-label="tab:instruparam"}
For the baseline version of the ALADDIN design shown in Fig. \[fig:aladdin\], the diameter of the siderostats has been set to 1 m, which is expected to provide similar performance to 8-m class telescopes at a temperate site. Only five reflections are required to lead the light from the sky down to the instrument, which is accommodated under the rotating truss, at the rotation centre. All relay mirrors are at ambient temperature, i.e., about 230 K at that altitude during wintertime [@Agabi06]. Note that an alternative design, where the instrument is placed on the ground, was introduced earlier [@Barillot06]. In the latter version the cryostat does not need to be rotated with the truss and remains fully static, at the cost of a more complex optical train which enables symmetric de-rotation of the beams and preservation of the polarisation. The harmful influence of ground-layer seeing is then mitigated by propagating compressed beams about 40 mm in diameter, i.e., smaller than the typical Fried parameter in the ground layer.
The nulling instrument
----------------------
The ALADDIN interferometer feeds a nulling instrument whose design is directly inherited from GENIE, a nulling instrument originally designed to be installed at ESO’s Very Large Telescope Interferometer (VLTI) on top of Cerro Paranal. Using a common base design has the advantage of improving the comparative value of the performance simulations. Indeed, ALADDIN is foreseen to operate in the same wavelength regime, the $L$ band (ranging from 2.8 to 4.2 $\mu$m at Dome C), which is very appropriate to investigate the inner region of extrasolar zodiacal discs. The whole nulling instrument is assumed to be enclosed in a cryostat, in order to improve its overall stability and to mitigate the influence of temperature variations between seasons at the ground level (the mean temperature during the austral summer is about 40$^{\circ}$C higher than during winter, while the instrument should be usable during the whole year). The lower temperature of the optics inside the cryostat (77 K) also further decreases the background emission produced by the instrument. Since (as is shown below) two subsystems needed for GENIE are no longer needed for the Antarctic version, the instrument is expected to be smaller and therefore easier to enclose into a cryostat.
The ALADDIN instrument comprises the same basic functionalities as GENIE (fringe tracking, tip-tilt correction, phase shifting, beam combination, modal filtering, spectral dispersion and detection), except for two critical control loops that are not needed any more. As demonstrated in Section \[sec:aladdinperfo\], ALADDIN can on the one hand be operated without any dispersion correction thanks to the rotating baseline and to the very low water vapour seeing, provided that the observing waveband is restricted to the $3.1 - 4.1$ $\mu$m region, while on the other hand, real-time intensity control is not required any more since the size of the collectors is significantly smaller than the Fried parameter in the $L$ band ($r_{0, L} \simeq 4$ m). A block diagram of the optical path and control system of ALADDIN is shown in Fig. \[fig:block\]. Most optical functions are kept at low temperature inside a vacuum enclosure. The optical arrangement has been significantly simplified with respect to the original VLTI/GENIE design:
- The two-mirror afocal telescopes are off-axis. Thanks to the narrow field-of-view, high wavefront quality is expected.
- Tip-tilt correction is performed at the level of the collecting telescopes assemblies, so that the optical paths downstream are kept identical whatever the baseline and orientations of the siderostats and structural beam.
- The achromatic $\pi$ phase-shift is achieved geometrically, by means of opposite periscopes.
- The beam splitters shown in Fig. \[fig:block\] are actually dichroic beam splitters, which separate the signal between the science wave band and the tip-tilt and OPD sensing wave bands.
- Optical delay lines are of the short stroke/high accuracy kind, since long stroke is not necessary in the rotating beam architecture. Their design is expected to be greatly simplified with respect to usual delay lines: one-stage actuators based on linear piezoelectric motors translating a small and light plane mirror are expected to be sufficient.
- The preferred beam combiner arrangement is the Modified Mach-Zehnder [MMZ, @Serabyn01].
- The modal filter is a single-mode optical fibre. Fluoride glass fibres are appropriate for ALADDIN’s science wavelengths.
- OPD detection may be achieved downstream of the beam combiner, by means of an ABCD algorithm, provided that one of the two nulled outputs of the MMZ receives a $\pi/2$ phase shift. Alternatively, the separation between the OPD sensing and science bands may be implemented upstream of the beam combiner, with a second beam combiner accommodated for the OPD measurement. The latter option, which was the baseline for the GENIE instrument, has been used for the performance estimation to provide a fair comparison with GENIE.
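The figure $r_{0,L} \simeq 4$ m quoted above for the Fried parameter in the $L$ band follows from the free-air value at 500 nm through the Kolmogorov wavelength scaling $r_0 \propto \lambda^{6/5}$; a minimal check (the 3.6 $\mu$m wavelength is a representative choice, not a value from the paper):

```python
r0_500 = 0.38   # free-air Fried parameter at 500 nm [m] (Table [tab:atmoparam])
lam_L = 3.6e-6  # representative L-band wavelength [m] (assumed)

r0_L = r0_500 * (lam_L / 500e-9) ** (6.0 / 5.0)   # Kolmogorov scaling, ~4.1 m
telescope_ok = r0_L > 1.0   # the 1 m collectors sit well below r0_L
```

Since the 1 m collectors are much smaller than $r_{0,L}$, Strehl (and hence intensity) fluctuations stay small, which is why no real-time intensity control loop is needed.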
The control system involves three control loops only, respectively dedicated to the stabilisation of one OPD and two tip-tilt parameters. They are expected to be operated at lower repetition frequencies than at a temperate site thanks to the slowness of atmospheric turbulence, which represents a significant simplification. The control loops are based on conventional and separated PID controllers involving separated sensors and actuators. The location of the tip-tilt mirrors in the output pupil of the telescopes ensures proper uncoupling between tip-tilt actuation and OPD.
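As an illustration of the kind of controller involved, the sketch below closes a single loop with a textbook discrete PID. The gains, the 3 kHz sample rate, and the static 100 nm disturbance are invented for the sketch and are not instrument values; the derivative gain is set to zero here:

```python
# Toy closed loop: a static piston offset is driven to zero by a PID servo
# acting on an accumulating delay-line actuator.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 1.0 / 3000.0                      # illustrative 3 kHz repetition frequency
pid = PID(kp=0.3, ki=30.0, kd=0.0, dt=dt)

disturbance = 100.0                    # static piston offset [nm] (assumed)
actuator = 0.0                         # delay-line position [nm]
for _ in range(3000):                  # one second of closed-loop operation
    error = disturbance - actuator     # residual OPD seen by the sensor
    actuator += pid.update(error)      # delay line accumulates the command

residual = disturbance - actuator      # driven to ~0 by the loop
```

Because the actuator accumulates the controller output, even the proportional term acts integrally on the error, so the static offset is removed without steady-state bias.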
Performance study at Dome C {#sec:aladdinperfo}
===========================
In order to evaluate the performance of ALADDIN, we use the GENIE simulation software (GENIEsim), which performs end-to-end simulations of ground-based nulling interferometers with a system-based architecture. All the building blocks and physical processes included in GENIEsim are described in detail in . They include the simulation of astronomical sources (star, circumstellar disc, planets, background emission), atmospheric turbulence (piston, longitudinal dispersion, wavefront errors, scintillation), as well as a realistic implementation of closed-loop compensation of atmospheric effects by means of a fringe tracking system and of a wavefront correction system. The output of the simulator basically consists of time series of photo-electrons recorded by the detector at the two outputs of the nulling beam combiner (constructive and destructive outputs). Various diagnostic information on the sub-systems is also available on output. Routines dedicated to the post-processing of nulling data are also included, as described in . GENIEsim is written in the IDL language. It was originally designed to simulate the GENIE instrument at the VLTI interferometer, and has been extensively validated in that context either by comparison with on-sky data when available (e.g., MACAO and STRAP for adaptive optics, FINITO for fringe tracking) or by comparison with performance estimations carried out by industrial partners during the GENIE phase A study.
  -------------------- ----------------------- ----------------------- ----------------------- ----------------------- -----------------
                       GENIE worst case        GENIE best case         ALADDIN worst case      ALADDIN best case       Goal
  Piston               17 nm @ 20 kHz          6.2 nm @ 13 kHz         14 nm @ 3 kHz           10 nm @ 2 kHz           $<4$ nm
  Inter-band disp.     17 nm @ 200 Hz          4.4 nm @ 300 Hz         7.0 nm @ 0 Hz           7.0 nm @ 0 Hz           $<4$ nm
  Intra-band disp.     4.1 nm @ 200 Hz         1.0 nm @ 300 Hz         7.4 nm @ 0 Hz           7.4 nm @ 0 Hz           $<4$ nm
  Tip-tilt             11 mas @ 1 kHz          11 mas @ 1 kHz          9 mas @ 1 kHz           9 mas @ 1 kHz           (see intensity)
  Intensity mismatch   4% @ 1 kHz              4% @ 1 kHz              1.2% @ 0 Hz             1.2% @ 0 Hz             $<1$%
  Total null           $9.7{\times 10^{-4}}$   $6.2{\times 10^{-4}}$   $2.9{\times 10^{-4}}$   $2.2{\times 10^{-4}}$   $f$(baseline)
  Instrumental null    $5.0{\times 10^{-4}}$   $1.5{\times 10^{-4}}$   $2.0{\times 10^{-4}}$   $1.3{\times 10^{-4}}$   $10^{-5}$
  rms null             $4.5{\times 10^{-6}}$   $2.0{\times 10^{-6}}$   $5.0{\times 10^{-6}}$   $3.5{\times 10^{-6}}$   $10^{-5}$
  -------------------- ----------------------- ----------------------- ----------------------- ----------------------- -----------------

  : Residual turbulence effects and nulling ratios for GENIE and for ALADDIN, in the worst-case and best-case turbulence scenarios. The repetition frequency of each control loop is indicated; 0 Hz means that no real-time correction is applied.[]{data-label="tab:loopperf"}
Thanks to the versatility of the simulator, only a few input parameters have to be changed to switch from the original configuration (GENIE at Cerro Paranal) to ALADDIN at Dome C. These changes include the atmospheric transmission [@Lawrence04b], as well as the atmospheric and instrumental parameters listed in Tables \[tab:atmoparam\] and \[tab:instruparam\]. It must be noted that the ALADDIN performance can be modelled with greater confidence than in the case of GENIE as it does not rely on the nominal performance of an external system such as the VLTI. Furthermore, the performance should remain similar across most of the Antarctic plateau, as free air seeing is not expected to change drastically for sites located within the polar vortex. The only requirement is then to adapt the height of the structure on which the instrument is placed. In this regard, Dome C might not be the best possible site, as the boundary layer is suspected to be about 10 m thinner at Dome F [@Swain06].
As in the case of GENIE, the performance is measured in terms of sensitivity to faint exozodiacal dust clouds. We assume that these dust clouds follow the same density and temperature distribution as in the solar system [@Kelsall98], except for a global density scaling factor. To account for this, we introduce the unit [*zodi*]{}, which corresponds to the global dust density in our local zodiacal cloud.
Control loop performance
------------------------
Because dispersion and intensity control loops are not expected to be required in the case of ALADDIN, we have disabled these two loops in the GENIEsim software when simulating the residual atmospheric turbulence at beam combination. The simulation results are presented in Table \[tab:loopperf\], where the absence of dispersion and intensity control is represented by a 0 Hz repetition frequency. For these simulations, we have used two different assumptions on the atmospheric turbulence characteristics. The [*worst case scenario*]{} does not take into account the effect of pupil averaging, which is expected to reduce the power spectral density (PSD) of piston and dispersion at high frequencies [@Conan95]. This scenario thereby assumes a logarithmic slope of $-8/3$ at high frequencies for the PSD of these two quantities. Conversely, the [*best case scenario*]{} takes into account the effect of pupil averaging at high frequencies where it produces a $-17/3$ logarithmic slope. The rationale for introducing the worst-case scenario is that the $-17/3$ slope has never been observed to the best of our knowledge (most probably due to instrumental limitations), while spurious instrumental effects might potentially increase the high-frequency content of piston. It must be noted that the PSDs of higher order Zernike modes (tilt and above) remain the same in both scenarios and take into account pupil averaging.
The results listed in Table \[tab:loopperf\] confirm that two critical control loops (dispersion and intensity control) are not required any more: the input atmospheric perturbations for these two quantities are either well below other contributions (e.g., piston) or marginally compliant with the goal performance taken from .[^4] A second important conclusion is that, in order to reach a residual piston similar to that of GENIE, fringe tracking can be carried out at a much lower frequency (about 3 kHz instead of 20 kHz). The technical feasibility of the instrument directly benefits from these two features. Finally, it must be noted that the two models for atmospheric turbulence provide similar results. There are two reasons for this: the actual shape of the power spectral density has no influence on the global fluctuation of the quantities that are not subject to real-time control, and the cut-off frequency at which the effect of pupil averaging becomes important is significantly higher than in the case of GENIE due to the reduced pupil diameter .
Despite its smaller collecting area, the overall performance of ALADDIN in terms of instrumental null is slightly improved with respect to GENIE’s, by a factor up to 2.5 in the worst-case scenario. However, the mean instrumental nulling ratio achieved by ALADDIN is still a factor $\sim 10$ above the performance required to detect 20-zodi discs without calibrating the instrumental response. This shows that, as in the case of GENIE, the calibration of instrumental stellar leakage will be mandatory to approach the goal sensitivity of 20 zodi.
Estimated sensitivity
---------------------
Using the parameters of Tables \[tab:atmoparam\] and \[tab:instruparam\], we have simulated the detection performance of ALADDIN for exozodiacal discs. The simulations take into account the same calibration procedures as discussed in in the context of GENIE, i.e., background subtraction, geometric leakage calibration and instrumental leakage calibration. Four hypothetic targets, representative of the Darwin star catalogue, have been chosen for this performance study: a K0V at 5 pc, a G5V at 10 pc, a G0V at 20 pc and a G0V at 30 pc. The integration time has been fixed to 30 min as in the case of GENIE. Unless specified, we have assumed a typical uncertainty $\Delta\theta_{\ast}$ of 1% on the diameters of the target stars and we have used the worst case scenario for atmospheric turbulence with the $-8/3$ logarithmic slope of the power spectra at high frequencies. As demonstrated in Table \[tab:loopperf\], using the best case scenario would not significantly change the final results.
In Fig. \[fig:perfbase\], we present the results of the simulations in terms of detectable exozodiacal density level as a function of baseline length. As in , the threshold for detection is set at a global signal-to-noise of 5, including the residuals from background subtraction and from geometric and instrumental stellar leakage calibration. Fig. \[fig:perfbase\] shows that the optimum baseline for studying typical Darwin target stars lies between about 4 and 40 m, which closely matches the baseline range offered by ALADDIN. With its 1 m class telescopes, ALADDIN significantly outperforms GENIE for the same integration time in the case of nearby targets (see Table \[tab:aladdingenie\] for a thorough comparison). This fact is not only due to the exceptional atmospheric conditions, but also to the optimisation of ALADDIN both regarding the available baselines and the instrumental design.
To check the relevance of ALADDIN in the context of the Darwin preparatory science, it is useful to compare the angular resolution provided by the optimum baseline length with respect to the position of the habitable zone for the various targets, because Darwin will be most sensitive to dust located in that particular zone where it will search for Earth-like planets. According to @Kasting93, the position of the habitable zone expressed in AU is given in good approximation by the following equation: $$r_{\rm HZ} = \left( \frac{T_{\star}}{T_{\odot}} \right)^2 \frac{R_{\star}}{R_{\odot}} \; ,$$ which yields 0.68, 0.85 and 1.16 AU for a K0V, a G5V and a G0V star respectively. The angular distance of the habitable zone to its parent star is compared to the angular resolution of ALADDIN in Table \[tab:optbase\]. The first bright fringe of the optimised nulling interferometer always falls between the star and the habitable zone, and the associated angular resolution is compatible with the study of this most important region of the exozodiacal disc. This also validates [*a posteriori*]{} the choice of the $L$ band for the study of exozodiacal dust around the Darwin target stars.
| Star      | Optimum baseline | Ang. resol. $(\lambda/2b)$ | Position HZ $(r_{\rm HZ}/d)$ |
|-----------|------------------|----------------------------|------------------------------|
| K0V 5 pc  | 4 m              | 93 mas                     | 135 mas                      |
| G5V 10 pc | 10 m             | 37 mas                     | 85 mas                       |
| G0V 20 pc | 24 m             | 15 mas                     | 58 mas                       |
| G0V 30 pc | 30 m             | 12 mas                     | 39 mas                       |
: Comparison of the angular resolution provided by the optimum ALADDIN baseline length ($4
\le b \le 30$m) with the characteristic position of the habitable zone of the target systems.[]{data-label="tab:optbase"}
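The habitable-zone formula above and the $\lambda/2b$ resolution criterion are easy to cross-check numerically. The sketch below is illustrative: the effective wavelength of 3.6 $\mu$m (mid $L$ band) and any stellar temperatures and radii fed to it are our assumptions, not values taken from the instrument study.

```python
import math

T_SUN = 5777.0  # solar effective temperature in K

def habitable_zone_au(t_eff_k, radius_rsun):
    """Habitable-zone distance r_HZ = (T*/T_sun)^2 (R*/R_sun), in AU."""
    return (t_eff_k / T_SUN) ** 2 * radius_rsun

def angular_resolution_mas(wavelength_m, baseline_m):
    """Angular resolution lambda/(2b), converted from radians to mas."""
    return wavelength_m / (2.0 * baseline_m) * (180.0 / math.pi) * 3.6e6

# Reproduces the angular resolutions of Table [tab:optbase] for an
# assumed 3.6-um effective wavelength: 93, 37, 15 and 12 mas.
for baseline in (4.0, 10.0, 24.0, 30.0):
    print(round(angular_resolution_mas(3.6e-6, baseline)))
```

With nominal main-sequence parameters (e.g., about 5940 K and 1.05 $R_{\odot}$ for a G0V star), the habitable-zone formula returns roughly 1.1 AU, consistent with the 1.16 AU quoted in the text.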
Calibration of stellar angular diameters
----------------------------------------
An important parameter influencing the performance of a nulling interferometer is the uncertainty on the angular diameter of the target star ($\Delta \theta_{\star}$). It is the main contributor to the quality of calibration not only for geometric stellar leakage but also for instrumental stellar leakage, which relies on the estimation of the instrumental nulling ratio on a well-known calibration star. In Fig. \[fig:perfang\], we investigate the influence of this $\Delta
\theta_{\star}$ parameter on the ALADDIN sensitivity. The baseline length is optimised in each case within the specified range ($4-30$ m). This simulation shows that, similarly to the GENIE case, an improved accuracy on stellar diameters would largely improve the detection capabilities of ALADDIN.
The very good sensitivity obtained for a perfect knowledge of the stellar diameter gives an idea of the gain that could be achieved by using more elaborate nulling configurations that are almost insensitive to stellar leakage. An example of such a configuration is the Degenerate Angel Cross [@Mennesson05], which uses three aligned telescopes to provide a central transmission proportional to the fourth power of the angular distance to the optical axis ($\theta^4$) instead of the second power ($\theta^2$) for a two-telescope Bracewell interferometer. The use of phase chopping with multi-telescope configurations would have almost the same effect, as geometric stellar leakage would then be removed by the chopping process. Fig. \[fig:perfang\] shows that an advanced nulling interferometer at Dome C should be capable of reaching a sensitivity ranging between 10 and 20 zodi around most of the Darwin targets. Multi-telescope configurations are however not contemplated in the context of ALADDIN, for which simplicity is strongly advocated.
| Star       | 0.25% | 0.5% | 1%  | 1.5% | Instrument |
|------------|-------|------|-----|------|------------|
| K0V – 5pc  | 72    | 90   | 125 | 154  | GENIE – AT |
|            | 114   | 227  | 455 | 682  | GENIE – UT |
|            | 20    | 33   | 55  | 79   | ALADDIN    |
| G5V – 10pc | 111   | 130  | 154 | 176  | GENIE – AT |
|            | 30    | 59   | 117 | 176  | GENIE – UT |
|            | 15    | 24   | 37  | 51   | ALADDIN    |
| G0V – 20pc | 255   | 261  | 278 | 297  | GENIE – AT |
|            | 21    | 29   | 50  | 73   | GENIE – UT |
|            | 19    | 25   | 37  | 48   | ALADDIN    |
| G0V – 30pc | 575   | 585  | 604 | 615  | GENIE – AT |
|            | 36    | 46   | 59  | 71   | GENIE – UT |
|            | 62    | 63   | 67  | 72   | ALADDIN    |
: Comparison of the GENIE and ALADDIN performance expressed in detectable exozodiacal disc densities as compared to the solar zodiacal disc. Four different levels of uncertainty have been assumed on the angular diameter of the target stars. The simulations are performed in the $L$ band, which extends from 3.5 to 4.1 $\mu$m in the case of GENIE and from 3.1 to 4.1 $\mu$m in the case of ALADDIN. An integration time of 30 min is assumed in all cases.[]{data-label="tab:aladdingenie"}
Table \[tab:aladdingenie\] compares the expected sensitivity of ALADDIN, operated on 1-m telescopes, with that of GENIE on either 8-m Unit Telescopes or 1.8-m Auxiliary Telescopes at the VLTI, using various assumptions on the stellar diameter knowledge. A significant gain (up to a factor 4) is obtained with ALADDIN, except in the case of the G0V at 30 pc where the 8-m telescopes are providing a more suited collecting area. The gain with ALADDIN is all the larger when the target star is closer, because the use of short baselines is crucial for stars with relatively large angular diameters ($\gtrsim$1 mas). As obvious from Table \[tab:aladdingenie\], an accurate knowledge of the stellar angular diameter ($<0.5$%) at the observing wavelength is mandatory to reach our goal sensitivity of 20 zodi.
Angular diameters in the $L$ band are however currently not well constrained, due to the lack of actual measurements. Furthermore, it is not guaranteed that an interferometer will operate in this band in the near future to provide angular diameter measurements with the required accuracy, while extrapolating stellar models from the visible or near-infrared ($H$, $K$ bands) towards the $L$ band is not straightforward. An integrated concept such as ALADDIN presents a significant advantage in this respect, as the continuous range of available baselines can be used to fit the stellar angular diameter simultaneously with the exozodiacal disc parameters. This procedure is illustrated in Fig. \[fig:fit\], where we have simulated ten 30-min observations of a K0V star at 5 pc surrounded by a 50-zodi disc, using ten baseline lengths ranging between 4 and 30 m. All standard calibrations have been applied in these observations, except for the calibration of the geometric stellar leakage (which is actually the main contributor to the observed nulling ratio). The simultaneous fit of the stellar radius and the exozodiacal dust density level provides encouraging results, and confirms that the [*a priori*]{} knowledge of the stellar radius is not required if a sufficient baseline coverage is used.
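The simultaneous fit described above can be sketched as a linear least-squares problem: if the observed null depth is modelled as the sum of the Bracewell geometric leakage, which scales as $\pi^2/16\,(\theta_{\star} b/\lambda)^2$ (a standard textbook expression, not taken from this study), and a baseline-independent floor attributed to the exozodiacal disc, then the depth is linear in $b^2$ and both parameters follow from a straight-line fit. Treating the transmitted disc signal as independent of baseline is our simplifying assumption.

```python
import math

def fit_diameter_and_disc(baselines_m, null_depths, wavelength_m):
    """Fit N(b) = (pi^2/16) (theta* b / lambda)^2 + c by least squares in b^2.

    Returns the stellar angular diameter theta* (radians) and the
    baseline-independent floor c, here attributed to the disc."""
    xs = [b * b for b in baselines_m]
    mx = sum(xs) / len(xs)
    my = sum(null_depths) / len(null_depths)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, null_depths))
             / sum((x - mx) ** 2 for x in xs))
    floor = my - slope * mx
    theta = math.sqrt(16.0 * slope / math.pi ** 2) * wavelength_m
    return theta, floor
```

On noise-free synthetic depths the fit recovers the input diameter exactly; with realistic measurement noise, a wider baseline coverage tightens the constraint, which is the point made in the text.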
Influence of integration time
-----------------------------
Another advantage of the ALADDIN concept is its ability to perform very long on-source integrations. The interferometer is assumed to be continuously operated during the long winter night, but also during the equinox twilight and the summer day thanks to the low sky temperature in all seasons and to the very low aerosol and dust content in the atmosphere (coronal sky). The summertime performance will of course be somewhat degraded due to the unavoidable stray light in the instrument and to the higher temperature of the sky and optical train, which produces a larger background emission. Long integrations are also enabled by the fact that ALADDIN would be dedicated to the survey of exozodiacal discs, while an instrument like GENIE would have to compete with other instruments at the VLT (especially when using the 8-m Unit Telescopes). Therefore, it makes sense to investigate the gain in sensitivity that can be achieved by longer integrations. The computation of this gain is not trivial, as all the noise sources do not have the same temporal behaviour. For instance, shot noise, detector noise and instability noise (to the first order) have the classical $t^{1/2}$ dependence, while the imperfect calibration of geometric and instrumental stellar leakage is proportional to time (it actually acts as a bias).
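The two time dependences can be combined into a toy signal-to-noise model. The rates below are illustrative assumptions, not numbers from the ALADDIN noise budget: random noise terms integrate down as $t^{1/2}$, while the residual calibration bias grows linearly with $t$, so the achievable signal-to-noise saturates.

```python
import math

def snr(t_s, signal_rate=1.0, random_rms_1s=5.0, bias_rate=0.02):
    """Toy SNR model: signal grows as t, random noise (shot, detector,
    instability) as sqrt(t), and an uncalibrated leakage bias as t."""
    signal = signal_rate * t_s
    noise = math.hypot(random_rms_1s * math.sqrt(t_s), bias_rate * t_s)
    return signal / noise

# SNR grows roughly as sqrt(t) at first, then saturates at
# signal_rate / bias_rate once the calibration bias dominates.
for t in (60.0, 1800.0, 28800.0, 1.0e7):
    print(t, round(snr(t), 2))
```

This saturation is why longer exposures stop paying off once the leakage-calibration residual, which acts as a bias, dominates the error budget.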
In Fig. \[fig:perftime\], we simulate the sensitivity of ALADDIN as a function of integration time. Because increasing the integration time does not improve the accuracy of either the geometric or the instrumental stellar leakage calibration, which are among the main contributors to the noise budget (especially for very nearby stars), the overall performance does not improve much for long exposures (except for the fainter targets, for which sensitivity is background-limited). It must still be noted that the goal sensitivity of 20 zodi is within reach after 8 hours of integration for G0V stars located closer than 20 pc.
A side effect of increasing the integration time is that the optimum baseline is decreased. Indeed, shorter baselines allow for less exozodiacal light to make it through the transmission pattern, but also for a better cancellation of the stellar light. The result is an improved signal-to-noise ratio regarding stellar leakage calibration, while the relative increase of the shot noise contribution with respect to the transmitted exozodiacal signal is compensated by the longer integration time. For instance, in the case of a G0V star at 20 pc, the optimum baseline decreases from 24 m for a 30 min integration to 12 m for an 8 h integration. Reducing the optimum baseline is favourable to the global feasibility of the concept, as it reduces the required size of the truss supporting the siderostats.
Influence of pupil diameter
---------------------------
Finally, in order to choose the most appropriate diameter for the ALADDIN siderostats, we study the influence of the collecting area on the sensitivity of the instrument. To keep the system architecture unchanged, we restrict the pupil diameter to 2 m at most, since larger pupils would become comparable to the size of turbulent cells above the boundary layer (about 4 m in the $L$ band) and would therefore require either adaptive optics or additional intensity control to be implemented.
Fig. \[fig:perfdiam\] shows the simulated performance of ALADDIN for three different sizes of the siderostats. By increasing the diameter from 0.5 m to 1 m, the performance improves by 25% to 75%, while a typical gain between 25% and 50% is observed when increasing the telescope size from 1 m to 2 m. The G0V star at 30 pc shows the most significant improvement as a function of pupil size, due to its faintness (shot noise from the background emission is dominant for such a faint star). It must be noted that the performance of ALADDIN with 50-cm collectors is still better than that of GENIE at the VLTI for the two closest targets. Reducing the size of the siderostats could thus make sense if the feasibility of the project was found to be jeopardised by the requirement to put 1-m siderostats on a 40-m truss located 30 m above the ground. Increasing the integration times by a factor of about 4 would then be required to achieve similar performance as with 1-m collectors. A beneficial side-effect of increasing the integration time will be to reduce the optimum baselines down to an acceptable length, because 50-cm siderostats are associated with optimum baselines typically twice as large as for the original 1-m collectors. In practice, the final choice of the pupil diameter will result from a trade-off between feasibility, performance, integration time and available baselines.
Site impact on performance
==========================
In this section, we estimate the gain in performance that is actually related to the outstanding observing conditions above the boundary layer on the Antarctic plateau (and not to the optimised instrumental design). For that purpose, we simulate the performance of ALADDIN at two other locations: first on the ground at Dome C (below the boundary layer) and then at Cerro Paranal. In the first case, we use the ground-level wintertime seeing conditions recently reported at Dome C by @Agabi06: a median seeing of $1\farcs9$ (i.e., a Fried parameter of 5.4 cm at 500 nm) and a coherence time of about 2.9 msec (i.e., equivalent wind speed of 5.8 m/s). In the second case, we use the standard atmospheric conditions of Cerro Paranal already presented in .
Ground-level performance at Dome C
----------------------------------
One of the main limitations of ground-level observations comes from the fact that the Fried parameter in the $L$ band ($\sim 57$ cm) becomes smaller than the size of the apertures, so that multiple speckles are formed in the image plane. Assuming only tip-tilt control at 1 kHz, which provides a residual tip-tilt of about 15 mas, the typical fluctuations of the relative intensity mismatch between the two beams after modal filtering would be about 18%. This is much too large to ensure a high and stable instrumental nulling ratio, and the use of adaptive optics (or of an intensity matching device) is therefore required to stabilise the injection into the single mode waveguides. Another limitation comes from the increased strength of the piston effect. Assuming fringe tracking to be performed at a maximum frequency of 10 kHz, the residual OPD would range between 15 and 35 nm rms depending on the target star. Here again, the stability of the nulling ratio would be significantly degraded with respect to the baseline ALADDIN concept. On the contrary, longitudinal dispersion is not expected to increase very significantly since the precipitable water vapour content of the first 30 m of the atmosphere is relatively small due to the very low temperature right above the ice.
Taking all these effects into account, the instability of the nulling ratio ([*instability noise*]{}) would become the main source of noise in the budget of a ground-level ALADDIN. The simulations performed with GENIEsim show that the sensitivity in the case of a G0V star located at 20 pc would only be about 200 zodi instead of 37 zodi for the original ALADDIN concept on top of a 30-m tower. In order to match the baseline ALADDIN performance with a ground-level instrument, higher repetition frequencies would be required for piston and tip-tilt control (both about 6 kHz), while adaptive optics (or intensity control) should be used to stabilise the injection efficiency into the waveguides. Dispersion control might also be required. Preliminary estimations show that deformable mirrors using $20 \times 20$ actuators at a repetition frequency around 1 kHz would be required to reduce the intensity fluctuations down to 1%. In that case, a sensitivity around 50 zodi would be reachable for a G0V star at 20 pc.
Obviously, placing the instrument above the ground layer is recommended to obtain a significant gain on both the performance and feasibility aspects with respect to an instrument installed at a temperate site such as Cerro Paranal.
Ground-level performance at Cerro Paranal
-----------------------------------------
To better emphasise the attractiveness of Antarctic sites in the context of high dynamic range interferometry, let us now virtually move the ALADDIN experiment to Cerro Paranal while keeping the design unchanged. Because the Fried parameter is larger at Paranal ($r_0 \sim 1.2$ m in the $L$ band) than at the ground level at Dome C, while the coherence time is of the same order of magnitude ($\tau_0 \sim 3$ msec), the performance should be somewhat better than on the ground at Dome C. Simulations indeed show that the residual OPD is slightly improved (now between 10 and 30 nm), while the residual intensity fluctuation is significantly reduced (now about 7%, but still well above the goal of 1%).
However, two other parameters significantly degrade the situation: the large background emission and the increased PWV content in the atmosphere. The main effect of the former is to increase the integration time to reach a given sensitivity limit, while the fluctuations of the latter produce large variations of longitudinal dispersion, which can reach about 0.7 radian if they are not reduced by a real-time control loop as in the case of GENIE. This corresponds to an additional OPD error of about 400 nm at the edges of the observing waveband (ranging from 3.5 to 4.1 $\mu$m in the case of Cerro Paranal). All in all, a sensitivity of about 3000 zodi is expected for a replica of ALADDIN installed at Cerro Paranal. By introducing a dispersion control loop similar to that described in and operating it at a typical frequency of 50 Hz, longitudinal dispersion could be reduced down to about 0.05 radian (30 nm), in which case the sensitivity would reach about 250 zodi. This would however significantly increase the technical complexity of the instrument, which is not desired.
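The correspondence quoted above between dispersion phase and optical path difference follows from $\phi = 2\pi\,\mathrm{OPD}/\lambda$; a quick check at the 3.5-$\mu$m band edge:

```python
import math

def opd_to_phase_rad(opd_m, wavelength_m):
    """Phase error corresponding to an optical path difference."""
    return 2.0 * math.pi * opd_m / wavelength_m

# 400 nm of OPD at 3.5 um gives ~0.72 rad (the ~0.7 rad quoted above),
# and 30 nm gives ~0.054 rad (the ~0.05 rad quoted for the control loop).
print(opd_to_phase_rad(400e-9, 3.5e-6))
print(opd_to_phase_rad(30e-9, 3.5e-6))
```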
Conclusion {#sec:conclusion}
==========
In this paper, we have investigated a potential solution to a well-defined scientific need, viz. characterising the dusty environment of candidate target stars for future life-finding missions such as Darwin or TPF. In a previous study, we have shown that an infrared nulling interferometer installed on a temperate site, such as the GENIE project at Cerro Paranal, would provide useful information on candidate targets, but (1) that its technical feasibility could be jeopardised by the requirement to design complicated control loops for mitigating the effects of atmospheric turbulence, and (2) that its sensitivity would not reach the desired level of 20 times the density of our local zodiacal cloud.
To overcome these two limitations, we propose in this paper a conceptual design for a nulling interferometer (ALADDIN) to be installed at Dome C, on the high Antarctic plateau. Based on the atmospheric turbulence measurements obtained so far at Dome C, we have updated the GENIEsim software to simulate the performance of such an instrument. These simulations show that, using 1-m collectors, this instrument would have an improved sensitivity with respect to GENIE working on 8-m telescopes, provided that it is placed above the turbulence boundary layer, which is about 30 m thick at Dome C. In particular, the 20-zodi sensitivity goal seems within reach for typical Darwin/TPF target stars. Moreover, the exceptional turbulence conditions above the boundary layer significantly relax the requirements on the real-time compensation of atmospheric effects, improving the feasibility of the instrument. It must also be noted that, thanks to the optimised range of adjustable baselines, the harmful influence of our imperfect knowledge of stellar angular diameters can be largely mitigated by simultaneously fitting a photospheric model and an exozodiacal disc model to the collected data, yet at the price of an increased observing time.
While we assumed the instrument would be deployed above the boundary layer at Dome C, site of the Concordia station, it might turn out that other sites on the Antarctic plateau provide simultaneously a thinner boundary layer and an improved free air seeing, and hence better performance. The final choice for the site will have to trade off the practical advantages of feasibility and performance vs. logistical support.
This paper illustrates the potential of Antarctic sites for high-angular, high-dynamic range astrophysics in the infrared domain. In the particular, well-specified case of a nulling interferometer, we were able to realistically quantify the relative gain with respect to a temperate site, showing that a pair of 1-m telescopes on the plateau will perform better than a pair of 8-m telescopes at Cerro Paranal. Other applications would result in different gains, but it is clear that there are niches where the Antarctic plateau enables observations that would otherwise require access to space.
The authors are indebted to R. den Hartog and D. Defrère for their major contributions to the development of the GENIEsim software, which has been used throughout this paper. The authors also wish to thank the engineers at Thales Alenia Space that have contributed to the preliminary design of the ALADDIN instrument, as well as T. Fusco for the simulation of adaptive optics performance at Dome C. O.A. acknowledges the financial support of the Belgian National Fund for Scientific Research (FNRS) while at IAGL and of a Marie Curie Intra-European Fellowship (EIF) while at LAOG.
[^1]: Marie-Curie EIF Postdoctoral Fellow
[^2]: seeing above the turbulent ground layer
[^3]: Dome C is even colder during winter, with an average ground temperature of $-65^{\circ}$C (208 K).
[^4]: Note that the strength of dispersion decreases for shorter baselines, and is only about 3nm rms for a 4-m baseline.
---
abstract: 'The growing scale of face recognition datasets empowers us to train strong convolutional networks for face recognition. While a variety of architectures and loss functions have been devised, we still have a limited understanding of the source and consequence of label noise inherent in existing datasets. We make the following contributions: 1) We contribute cleaned subsets of popular face databases, i.e., the MegaFace and MS-Celeb-1M datasets, and build a new large-scale noise-controlled IMDb-Face dataset. 2) With the original datasets and cleaned subsets, we profile and analyze label noise properties of MegaFace and MS-Celeb-1M. We show that a few orders of magnitude more samples are needed to achieve the same accuracy yielded by a clean subset. 3) We study the association between different types of noise, i.e., label flips and outliers, and the accuracy of face recognition models. 4) We investigate ways to improve data cleanliness, including a comprehensive user study on the influence of data labeling strategies on annotation accuracy. The IMDb-Face dataset has been released on <https://github.com/fwang91/IMDb-Face>.'
author:
- 'Fei Wang [^1]'
- Liren Chen
- Cheng Li
- Shiyao Huang
- Yanjie Chen
- Chen Qian
- Chen Change Loy
bibliography:
- '1201.bib'
title: The Devil of Face Recognition is in the Noise
---
Introduction {#sec:introduction}
============
Datasets are pivotal to the development of face recognition. From the early FERET dataset [@phillips1998feret] to the more recent LFW [@huang2007labeled], MegaFace [@kemelmacher2016megaface; @nech2017level], and MS-Celeb-1M [@guo2016ms], face recognition datasets play a major role in driving the development of new techniques. Not only are the datasets becoming more diverse, the scale of the data is also growing tremendously. For instance, MS-Celeb-1M [@guo2016ms] contains around 10M images for 100K celebrities, far exceeding FERET [@phillips1998feret], which only has 14,126 images from 1,199 individuals. Large-scale datasets together with the emergence of deep learning have led to the immense success of face recognition in recent years.
Large-scale datasets are inevitably affected by label noise. The problem is pervasive since well-annotated datasets at large scale are prohibitively expensive and time-consuming to collect. That motivates researchers to resort to cheap but imperfect alternatives. A common method is to query celebrities’ images by their names on search engines, and subsequently clean the labels with automatic or semi-automatic approaches [@parkhi2015deep; @li2016robust; @deng2017marginal]. Other methods apply clustering with constraints to photos from social photo-sharing sites. The aforementioned methods offer a viable way to scale the training samples conveniently but also bring label noise that adversely affects the training and performance of a model. We show some samples with label noise in Figure \[fig:overview\]. As can be seen, MegaFace [@nech2017level] and MS-Celeb-1M [@guo2016ms] contain a considerable number of incorrect identity labels. Some noisy labels are easy to remove while many of them are hard to clean. In MegaFace, there are a number of redundant images too (shown in the last row).
![Label noise in MegaFace [@nech2017level] and MS-Celeb-1M [@guo2016ms]. Each row depicts images that are labeled with the same identity. Some incorrect labels are easy to spot, while many of them are hard.[]{data-label="fig:overview"}](figure1_eccv_low_res.pdf){width="\linewidth"}
The first goal of this paper is to develop an understanding of the source of label noise and its consequences for face recognition by deep convolutional neural networks (CNN) [@sun2014deep; @Schroff_2015_CVPR; @wen2016discriminative; @huang2018deep; @cao2018pose; @zhan2018consensus]. We seek answers to questions like: How many noisy samples are needed to achieve an effect tantamount to that of clean data? What is the relationship between noise and final performance? What is the best strategy to annotate face identities? A better understanding of the aforementioned questions would help us to design a better data collection and cleaning strategy, avoid pitfalls in training, and formulate stronger algorithms to cope with real-world problems. To facilitate our research, we manually clean subsets of the two most popular face recognition databases, namely, MegaFace [@nech2017level] and MS-Celeb-1M [@guo2016ms]. We observe that a model trained with only $32\%$ of the MegaFace or $20\%$ of the MS-Celeb-1M cleaned subsets can already achieve performance comparable with models that are trained on the respective full dataset. The experiments suggest that a few orders of magnitude more samples are needed for face recognition model training if noisy samples are used.
The second goal of our study is to build a clean face recognition dataset for the community. The dataset could help train better models and facilitate further understanding of the relationship between noise and face recognition performance. To this end, we build a clean dataset called **IMDb-Face**. The dataset consists of 1.7M images of 59K celebrities collected from movie screenshots and posters from the IMDb website[^2]. Due to the nature of the data source, the images exhibit large variations in scale, pose, lighting, and occlusion. We carefully clean the dataset and simulate corruption by injecting noise into the training labels. The experiments show that the accuracy of face recognition decreases rapidly and nonlinearly as label noise increases. In particular, we confirm the common belief that the performance of face recognition is more sensitive to label flips (an example has erroneously been given the label of another class within the dataset) than outliers (an image does not belong to any of the classes under consideration, but mistakenly has one of their labels). We also conduct an interesting experiment to analyze the reliability of different ways of annotating a face recognition dataset. We found that label accuracy correlates with time spent on annotation. The study helps us to find the source of erroneous labels and thereafter design better strategies to balance annotation cost and accuracy.
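The two noise types discussed above can be injected into a labelled set as follows. This is only a sketch of such a corruption protocol, not the exact procedure used in our experiments; the sentinel class for outliers and the noise fractions are illustrative choices.

```python
import random

def corrupt_labels(labels, num_classes, flip_frac=0.1, outlier_frac=0.1, seed=0):
    """Inject label flips (a wrong in-set identity) and outliers (faces from
    outside the identity set, marked with a hypothetical sentinel class)."""
    rng = random.Random(seed)
    OUTLIER = -1  # illustrative sentinel for out-of-set faces
    noisy = list(labels)
    n = len(noisy)
    picked = rng.sample(range(n), int((flip_frac + outlier_frac) * n))
    n_flip = int(flip_frac * n)
    for i in picked[:n_flip]:
        wrong = rng.randrange(num_classes - 1)
        noisy[i] = wrong if wrong < noisy[i] else wrong + 1  # never the true class
    for i in picked[n_flip:]:
        noisy[i] = OUTLIER
    return noisy
```

Sweeping `flip_frac` and `outlier_frac` separately while training the same model is the kind of experiment design that isolates the effect of each noise type.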
We hope that this paper could shed light on the influence of data noise on the face recognition task, and point to potential labelling strategies to mitigate some of the problems. We contribute the new **IMDb-Face** dataset to the community. It could serve as a relatively clean dataset to facilitate future studies of noise in large-scale face recognition. It can also be used as a training data source to boost the performance of existing methods, as we will show in the experiments.
How Noisy is Existing Data? {#sec:datasets}
===========================
We first introduce some popular datasets used in face recognition study and then approximate their respective signal-to-noise ratio.
Face Recognition Datasets
-------------------------
Table \[tab:valueExperiment\] provides a summary of representative datasets used in face recognition research.
\[tab:valueExperiment\]
**LFW:** Labeled Faces in the Wild (LFW) [@huang2007labeled] is perhaps the most popular dataset to date for benchmarking face recognition approaches. The database consists of $13,000$ facial images of $1,680$ celebrities. Images are collected from Yahoo News by running the Viola-Jones face detector. Limited by the detector, most of the faces in LFW are frontal. The dataset is considered sufficiently clean, although some incorrectly labeled matched pairs have been reported. Errata of LFW are provided in <http://vis-www.cs.umass.edu/lfw/>.
**CelebFaces:** CelebFaces [@sun2014deep; @celebface] is one of the early face recognition training databases that were made publicly available. Its first version contains $5,436$ celebrities and $87,628$ images, and it was upgraded to $10,177$ identities and $202,599$ images a year later. Images in CelebFaces were collected from search engines and manually cleaned by workers.
**VGG-Face:** VGG-Face [@parkhi2015deep] contains 2,622 identities and 2.6M photos. More than 2,000 images per celebrity were downloaded from search engines. The authors treat the top 50 images as positive samples and train a linear SVM to select the top 1,000 faces. To avoid extensive manual annotation, the dataset was ‘block-wise’ verified, i.e., ranked images of each identity are displayed in blocks and annotators are asked to validate blocks as a whole. In this study we did not focus on VGG-Face [@parkhi2015deep] since it should have a similar ‘search-engine bias’ problem to MS-Celeb-1M [@guo2016ms].
**CASIA-WebFace:** The images in CASIA-WebFace [@yi2014learning] were collected from the IMDb website. The dataset contains 500K photos of 10K celebrities and it is semi-automatically cleaned via tag-constrained similarity clustering. The authors start with each celebrity’s main photo and those photos that contain only one face. Then faces are gradually added to the dataset, constrained by feature similarity and name tags. CASIA-WebFace uses the same source as the proposed IMDb-Face dataset. However, limited by the feature and clustering steps, CASIA-WebFace may fail to recall many challenging faces.
**MS-Celeb-1M:** MS-Celeb-1M [@guo2016ms] contains 100K celebrities, selected from a 1M celebrity list according to their popularity. Public search engines are then leveraged to provide approximately 100 images for each celebrity, resulting in about 10M web images. The data is deliberately left uncleaned for several reasons. Specifically, collecting a dataset of this scale requires tremendous effort to clean. Perhaps more importantly, leaving the data in this form encourages researchers to devise new learning methods that can naturally deal with the inherent noise.
**MegaFace:** Kemelmacher-Shlizerman [@nech2017level] clean a massive number of images published on Flickr by proposing algorithms to cluster and filter face data from the YFCC100M dataset. For each user’s albums, the authors merge face pairs with a distance closer than $\beta$ times the average distance. Clusters that contain more than three faces are kept. Then they drop ‘garbage’ groups and clean potential outliers in each group. A total of 672K identities and 4.7M images were collected. MegaFace2 avoids the ‘search-engine’ bias seen in VGG-Face [@parkhi2015deep] and MS-Celeb-1M [@guo2016ms]. However, we found this cluster-based approach introduces new bias. MegaFace prefers small groups with highly duplicated images, e.g., faces captured from the same video. Limited by the base model for clustering, a considerable number of groups in MegaFace contain noise, or sometimes mix multiple people in the same group.
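The grouping heuristic described above (merge face pairs closer than $\beta$ times the mean pair distance, then keep clusters with more than three faces) can be sketched with a union-find pass. Everything beyond those two rules, including the data layout and the default $\beta$, is our illustrative choice.

```python
def cluster_faces(pair_distances, beta=0.7, min_size=4):
    """pair_distances maps (face_a, face_b) to a distance within one album.
    Pairs below beta * mean distance are merged; clusters with at least
    min_size faces (i.e. more than three) are kept."""
    mean = sum(pair_distances.values()) / len(pair_distances)
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (a, b), dist in pair_distances.items():
        root_a, root_b = find(a), find(b)
        if dist < beta * mean:
            parent[root_a] = root_b
    groups = {}
    for node in list(parent):
        groups.setdefault(find(node), set()).add(node)
    return [g for g in groups.values() if len(g) >= min_size]
```

Faces whose pairs are all above the threshold end up as small groups and are dropped, which is one way identities get split across clusters, as noted above.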
![(a) A visualization of size and estimated noise percentage of datasets. (b) Noise distribution of MS-Celeb-1M(v1) [@guo2016ms]. (c) Noise distribution of MegaFace [@nech2017level]. The two horizontal lines in each bar represent the lower- and upper-bounds of noise, respectively. See Sec. \[subsec:noise\_ratio\] for details.[]{data-label="fig:noiseDistribution"}](noiseDistribution_eccv.pdf){width="0.7\linewidth"}
An Approximation of Signal-to-Noise Ratio {#subsec:noise_ratio}
-----------------------------------------
Owing to the source of data and cleaning strategies, existing large-scale datasets invariably contain label noises. In this study, we aim to profile the noise distribution in existing datasets. Our analysis may provide a hint to future research on how one should exploit the distribution of these data.
It is infeasible to obtain the exact amount of noise due to the scale of the datasets. We bypass this difficulty by randomly selecting a subset of a dataset and manually categorizing its images into three groups – ‘correct identity assigned’, ‘doubtful’, and ‘wrong identity assigned’. We select a subset of 2.7M images from MegaFace [@nech2017level] and 3.7M images from MS-Celeb-1M [@guo2016ms]. For CASIA-WebFace [@yi2014learning] and CelebFaces [@sun2014deep; @celebface], we sample 30 identities to estimate their signal-to-noise ratio. The final statistics are visualized in Figure \[fig:noiseDistribution\](a). Due to the difficulty in estimating the exact ratio, we approximate an upper and a lower bound on the amount of noisy data. The lower bound is more optimistic, counting doubtful labels as clean data; the upper bound is more pessimistic, counting all doubtful cases as badly labeled. We provide more details on the estimation in the supplementary material. As observed in Figure \[fig:noiseDistribution\](a), the noise percentage increases dramatically with the scale of the data. This is not surprising given the difficulty of data annotation. It is noteworthy that the proposed IMDb-Face pushes the envelope of large-scale data with a very high signal-to-noise ratio (noise is under 10% of the full data).
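Concretely, the lower- and upper-bound estimation above reduces to simple counting over the annotated sample. A minimal sketch, with made-up annotation counts (the actual per-dataset counts are given in the supplementary material):

```python
# Sketch of the noise-bound estimation described above. Each annotated
# image falls into one of three groups; the lower bound optimistically
# counts 'doubtful' labels as clean, the upper bound counts them as noise.

def noise_bounds(n_correct, n_doubtful, n_wrong):
    total = n_correct + n_doubtful + n_wrong
    lower = n_wrong / total                   # doubtful treated as clean
    upper = (n_wrong + n_doubtful) / total    # doubtful treated as noise
    return lower, upper

# Illustrative counts only (not the real annotation statistics):
lo, hi = noise_bounds(n_correct=820, n_doubtful=80, n_wrong=100)
print(lo, hi)  # 0.1 0.18
```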
We investigate further the noise distribution of the two largest public datasets to date, MS-Celeb-1M [@guo2016ms] and MegaFace [@nech2017level]. We first categorize identities in a dataset based on their number of images. A total of six groups/bins are established. We then plot a histogram showing the signal-to-noise ratio of each bin along the noise lower- and upper-bounds. As can be seen in Figure \[fig:noiseDistribution\](b,c), both datasets exhibit a long-tailed distribution, , most identities have very few images. This phenomenon is especially obvious on the MegaFace [@nech2017level] dataset since it uses automatically formed clusters for determining identities, therefore, the same identity may be distributed in different clusters. Noises across all groups in MegaFace [@nech2017level] are less in comparison to MS-Celeb-1M [@guo2016ms]. However, we found that many images in the clean portion of MegaFace [@nech2017level] are duplicated images. In Sec. \[sec:damage\_noise\], we will perform experiments on the MegaFace and MS-Celeb-1M datasets to quantify the effect of noise on the face recognition task.
Building a Noise-Controlled Face Dataset
========================================
As shown in the previous section, face recognition datasets beyond the million scale typically have a noise ratio higher than 30%. This motivates us to build a large-scale noise-controlled face dataset. Such a dataset can be used to train better face recognition algorithms and, more importantly, to further understand the relationship between noise and face recognition performance. To this end, we seek not only a cleaner and more diverse source from which to collect face data, but also an effective way to label the data.
![The second row depicts the raw data from the IMDb website. As a comparison, we show the images of the same identity queried from the Google search engine in the first row.[]{data-label="fig:rawDataSample"}](rawDataSample_eccv_low_res.pdf){width="0.8\linewidth"}
Celebrity Faces from IMDb {#sec:collecting}
-------------------------
Search engines are an important source from which one can quickly construct a large-scale dataset. The widely used ImageNet [@deng2009imagenet] was built by querying images from Google Image. Most face recognition datasets were built in the same way (except MegaFace [@nech2017level]). While querying search engines offers convenience of data collection, it also introduces data bias. Search engines usually operate in a high-precision regime [@chen2014enriching]. As the queried images in Figure \[fig:rawDataSample\] illustrate, they tend to have a simple background with sufficient illumination, and the subjects are often in a near-frontal posture. These data, to a certain extent, are more restricted than what we observe in reality, e.g., faces in videos (IJB-A [@klare2015pushing] and YTF [@wolf2011face]) and selfie photos (millions of distractors in MegaFace). Another pitfall of crawling images from search engines is the low recall rate. We performed a simple analysis and found that, on average, the recall rate is only 40% for the first 200 photos we query for a particular name.
In this study, we turn our data collection source to the IMDb website. IMDb is more structured. It includes a diverse range of photos under each celebrity’s profile, including official photos, lifestyle photos, and movie snapshots. Movie snapshots, we believe, provide essential data samples for training a robust face recognition model; such screenshots are rarely returned by querying a search engine. In addition, the recall rate is much higher when we query a name on IMDb (90% on average, versus 40% from search engines). The IMDb website lists about 300K celebrities who have official and gallery photos. By crawling the IMDb website, we collected 2M raw images from 59K celebrities, which were subsequently cleaned into 1.7M images.
Data Distribution {#sec:dataDistribution}
-----------------
Figure \[fig:yawDistribution\]-a presents the distribution of yaw angle in our dataset compared with MS-Celeb-1M and MegaFace. Figures \[fig:yawDistribution\]-c, -d and -e present the age, gender and race distributions. As can be observed, images in IMDb-Face exhibit larger pose variations, and they also show diversity in age, gender and race.
![image](dataDistribution_cr.pdf){width="\linewidth"}
How Well Can Humans Label Identity? {#sec:userStudy}
----------------------------------
The data downloaded from IMDb are noisy, as multiple celebrities may co-exist in the same image. We still need to clean the dataset before it can be used for training. We take this opportunity to study how human annotators would clean face data. The study helps us identify the sources of noise during annotation and design a better data cleaning strategy for the full dataset.
For the purpose of the user study, we extract a small subset of 30 identities from the IMDb raw data. We carefully select three images with confirmed identity to serve as gallery images. The remaining images of these 30 identities are treated as query images. To make the user study more challenging and statistically more meaningful, we inject 20% outliers into the query set. Next, we prepare three annotation schemes as follows. The interface of each scheme is depicted in Figure \[fig:annotateInterface\].
![Interfaces for user study: (a) Scheme I - volunteers were asked to draw a box on the target’s face. (b) Scheme II - given three query faces, volunteers were asked to select the face that belongs to the target person. (c) Scheme III - volunteers were asked to select the face that belongs to the target.[]{data-label="fig:annotateInterface"}](annotateInterface.pdf){width="\linewidth"}
**Scheme I - Draw the box:** We present the target person to a volunteer by showing the three gallery faces. We then show a query image selected from the query set. The image may contain multiple persons. If the target appears in the query image, the volunteer is asked to draw a bounding box on the target. The volunteer can either confirm the selection or assign a ‘doubt’ flag on the box if he/she is not confident about the choice. ‘No target’ is selected when he/she cannot find the target person.
**Scheme II - Choose 1 in 3:** Similar to Scheme I, we present the target person to a volunteer by showing the gallery images. We then randomly sample three faces detected from the query set, from which the volunteer selects a single image as the target face. We ensure that all query faces have the same gender as the target person. Again, the volunteer can choose a ‘doubt’ flag if he/she is not confident about the selection, or choose ‘no target’.
**Scheme III - Yes or No:** A binary query is perhaps the most natural and popular way to clean a face recognition set. We first rank all faces based on their similarity to probe faces in the gallery, and then ask a volunteer to decide whether each face belongs to the target person. The volunteer is allowed to answer ‘doubt’.
**Which scheme to choose?**: Before we can quantify the effectiveness of the different schemes, we first need to generate the ground truth for these 30 identities. We use a ‘consensus’ approach. Specifically, each of the aforementioned schemes was conducted by three different volunteers. We ensure that each query face was annotated nine times across the three schemes. If four of the annotations consistently point to the same identity, we assign the query face to that identity. With this ground truth, we can measure the effectiveness of each annotation scheme.
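The consensus rule can be sketched as a small voting function. The nine-annotation setup and the four-vote threshold follow the text; the function name and data layout are our own:

```python
# Consensus ground truth: a query face annotated nine times is assigned
# to an identity only if at least four annotations agree on it.
from collections import Counter

def consensus_label(annotations, min_votes=4):
    """annotations: nine identity labels, with None meaning 'no target'."""
    votes = Counter(a for a in annotations if a is not None)
    if not votes:
        return None
    label, count = votes.most_common(1)[0]
    return label if count >= min_votes else None

print(consensus_label(["A"] * 5 + ["B"] * 2 + [None] * 2))  # A
print(consensus_label(["A"] * 3 + ["B"] * 3 + [None] * 3))  # None
```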
![A ROC comparison between three different annotating schemes; volunteers were allowed to select ‘doubt’, so two data points can be obtained depending on whether doubtful data are counted as positive or negative.[]{data-label="fig:humanROC"}](HumanROC_eccv.pdf){width="1.1\linewidth"}
Figure \[fig:humanROC\] shows the receiver operating characteristic (ROC) curve of each of the three schemes[^3]. Scheme I achieves the highest $F_1$ score. It recalls more than 90% of the faces with under 10% false positive samples. Finding a face and drawing a box seems to make annotators more focused on finding the right face. Scheme II provides a high true positive rate when the false positive rate is low. The existence of distractors forces annotators to work harder to match the faces. Scheme III yields the worst true positive rate when the false positive rate is low. This is not surprising since this task is much easier than Schemes I and II: annotators tend to make mistakes on such a relaxed task, especially after a prolonged annotation process. We observe an interesting phenomenon: *the longer a volunteer spends on annotating a sample, the more accurate the annotation is*. At full speed, in one hour each volunteer can draw 180-300 faces in Scheme I, finish around 600 selections in Scheme II, or answer over 1000 binary questions in Scheme III. We believe the most reliable way to clean a face recognition dataset is to leverage both Schemes I and II to achieve high precision and recall. Limited by our budget, we only conducted Scheme I to clean the IMDb-Face dataset.

During the cleaning of IMDb-Face, since multiple identities may co-exist in the same image, we first annotated gallery images to confirm the queried identity. The gallery images come from the official gallery provided by the IMDb website, most of which contain the true identity. We ask volunteers to look through the 10 gallery images back and forth and draw a bounding box on the face that occurs most frequently. Then, annotators label the rest of the queried images, guided by the three largest labeled faces as galleries. For identities having fewer than three gallery images, the queried images may contain too much noise; to save labor, we did not annotate their images.
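The two ROC points per scheme in Figure \[fig:humanROC\] come from counting ‘doubt’ answers once as positives and once as negatives. A sketch with made-up annotation records:

```python
# Each record is (ground_truth_is_target, answer), with answer in
# {"yes", "no", "doubt"}. Depending on how 'doubt' is counted, each
# scheme yields two (TPR, FPR) points, as in Fig. [fig:humanROC].

def roc_point(records, doubt_as_positive):
    tp = fp = pos = neg = 0
    for is_target, answer in records:
        predicted = answer == "yes" or (doubt_as_positive and answer == "doubt")
        pos += is_target
        neg += not is_target
        tp += predicted and is_target
        fp += predicted and not is_target
    return tp / pos, fp / neg   # (true-positive rate, false-positive rate)

# Illustrative records only:
records = [(True, "yes")] * 8 + [(True, "doubt")] + [(True, "no")] \
        + [(False, "no")] * 8 + [(False, "doubt")] + [(False, "yes")]
print(roc_point(records, doubt_as_positive=True))   # (0.9, 0.2)
print(roc_point(records, doubt_as_positive=False))  # (0.8, 0.1)
```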
It took 50 annotators one month to clean the IMDb-Face dataset. Finally, we obtained 1.7M clean facial images from 2M raw images. We believe that the cleaning is of high quality. We estimate the noise level of IMDb-Face as the product of the approximated noise level in the IMDb raw data ($2.7 \pm 4.5$%) and the false positive rate (8.7%) of Scheme I. The noise level is thus controlled under 2%. The quality of IMDb-Face is validated in our experiments.
Experiments {#experiment}
===========
We divide our experiments into a few sections. First, we conduct ablation studies by simulating noise on our proposed dataset. The studies help us observe the deterioration of performance in the presence of increasing noise, or when a fixed amount of clean data is diluted with noise. Second, we perform experiments on two existing datasets to further demonstrate the effect of noise. Third, we examine the effectiveness of our dataset by comparing it to other datasets under the same training conditions. Finally, we compare the model trained on our dataset with other state-of-the-art methods. Next, we describe the experimental settings.
**Evaluation Metric:** We report rank-1 identification accuracy on the MegaFace benchmark [@kemelmacher2016megaface]. Evaluating the performance of face recognition methods at the scale of a million distractors is a very challenging task. The MegaFace benchmark consists of one gallery set and one probe set. The gallery set contains more than 1 million images, and the probe set consists of two existing datasets: Facescrub [@ng2014data] and FGNet. We use Facescrub [@ng2014data] as the MegaFace probe dataset in our experiments. Verification performance on MegaFace (reported as TPR at FPR$=10^{-6}$) is included in the supplementary material due to the page limit. We also test on LFW [@huang2007labeled] and YTF [@wolf2011face] in Section \[sec:state-of-art\].
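The rank-1 identification protocol boils down to nearest-neighbor search with cosine similarity over the gallery (enrolled faces plus distractors). A toy sketch with 2-dimensional features (real features are 256-dimensional):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def rank1_accuracy(probes, gallery):
    """probes/gallery: lists of (identity, feature); distractors have identity None."""
    hits = 0
    for pid, pfeat in probes:
        best_id, _ = max(gallery, key=lambda g: cosine(pfeat, g[1]))
        hits += best_id == pid
    return hits / len(probes)

gallery = [("a", [1.0, 0.1]), ("b", [0.1, 1.0]), (None, [0.7, 0.7])]  # last: distractor
probes = [("a", [0.9, 0.2]), ("b", [0.2, 0.9])]
print(rank1_accuracy(probes, gallery))  # 1.0
```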
**Architecture:** To better examine the effect of noise, we use the same architecture in all experiments. After a comparison among ResNet-50, ResNet-101 and Attention-56 [@Wang_2017_CVPR], we chose Attention-56, which achieves a good balance between computation and accuracy. As a reference, the model converges in 80 hours on an 8-GPU server with a batch size of 256. The output of Attention-56 is a 256-dimensional feature for each input image. We use cosine similarity to compute scores between image pairs.
**Pre-processing:** We cropped and aligned faces, rigidly transferring them onto a mean shape. We then resized the cropped images to $224\times256$ and subtracted the mean value of each RGB channel.
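The per-channel mean subtraction can be sketched as follows; a real pipeline would operate on arrays after cropping, alignment and resizing, but the tiny nested-list ‘image’ below shows the arithmetic:

```python
# Subtract the mean of each RGB channel, as in the pre-processing above.

def subtract_channel_means(img):
    h, w = len(img), len(img[0])
    means = [sum(img[i][j][c] for i in range(h) for j in range(w)) / (h * w)
             for c in range(3)]
    return [[[img[i][j][c] - means[c] for c in range(3)]
             for j in range(w)] for i in range(h)]

img = [[[10.0, 20.0, 30.0], [30.0, 40.0, 50.0]]]  # a 1x2 "image"
print(subtract_channel_means(img))  # channel means 20, 30, 40 removed
```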
**Loss:** We apply three losses: SoftMax [@celebface], Center Loss [@wen2016discriminative] and A-Softmax [@liu2017sphereface]. Our implementation is based on the public implementation of these losses:
*Softmax:* Softmax loss is the most commonly used loss, either for model initialization or establishing a baseline.
*Center Loss:* Wen et al. [@wen2016discriminative] propose the center loss, which minimizes the intra-class distance to enhance the features’ discriminative power. The authors jointly train the CNN with the center loss and the softmax loss.
*A-Softmax:* Liu et al. [@liu2017sphereface] formulate A-Softmax to explicitly enforce an angular margin between different identities. The weight vector of each category is restricted to a hypersphere.
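For reference, the plain softmax (cross-entropy) baseline above can be computed as follows; Center Loss adds an intra-class distance penalty on top of this quantity, and A-Softmax imposes an angular margin instead. The logits below are made up:

```python
import math

def softmax_cross_entropy(logits, target):
    m = max(logits)                      # subtract the max for stability
    exps = [math.exp(z - m) for z in logits]
    probs = [e / sum(exps) for e in exps]
    return -math.log(probs[target])

loss = softmax_cross_entropy([2.0, 1.0, 0.1], target=0)
print(round(loss, 4))  # 0.417
```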
Investigating the Effect of Noise on IMDb-Face
----------------------------------------------
The proposed IMDb-Face dataset enables us to investigate the effect of noise. There are two common types of noise in large-scale face recognition datasets: 1) *label flips*: an example has erroneously been given the label of another class within the dataset; 2) *outliers*: an image does not belong to any of the classes under consideration, but mistakenly has one of their labels. Sometimes even non-faces may be mistakenly included. To simulate the first type of noise, we randomly perturb faces into incorrect categories. For the second type, we randomly replace faces in IMDb-Face with images from MegaFace.
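Simulating label flips amounts to reassigning a chosen fraction of labels to a different class; outliers would instead be simulated by swapping in images from another source (MegaFace in our case). A minimal sketch with a hypothetical 10-class label set:

```python
import random

def inject_label_flips(labels, n_classes, noise_ratio, seed=0):
    """Return a copy of `labels` where a `noise_ratio` fraction of entries
    is reassigned to a different, randomly chosen class (a 'label flip')."""
    rng = random.Random(seed)
    noisy = list(labels)
    for i in rng.sample(range(len(labels)), int(noise_ratio * len(labels))):
        noisy[i] = rng.choice([c for c in range(n_classes) if c != labels[i]])
    return noisy

labels = [i % 10 for i in range(1000)]
noisy = inject_label_flips(labels, n_classes=10, noise_ratio=0.2)
print(sum(a != b for a, b in zip(labels, noisy)))  # 200
```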
![1:1M rank-1 identification results on MegaFace benchmark: (a) introducing label flips to IMDb-Face, (b) introducing outliers to IMDb-Face, and (c) fixing the size of clean data and dilute it with different ratios of label flips.[]{data-label="fig:dilute"}](noiseExperiment_eccv.pdf){width="0.98\linewidth"}
Here we perform two experiments: 1) We gradually contaminate our dataset with the two types of noise, increasing the noise ratio to 10%, 20% and 50%. 2) We fix the size of the clean data and ‘dilute’ it with label flips. We do not use ensemble models in these experiments.
Figure \[fig:dilute\](a) and (b) summarize the results of our first experiment. 1) Label flips severely deteriorate the performance of a model, more so than outliers. 2) A-Softmax, which used to achieve a better result on a clean dataset, becomes worse than Center loss and Softmax in the high-noise region. 3) Outliers seem to have a less abrupt effect on the performance across all losses, matching the observation in [@krause2016unreasonable] and [@rolnick2017deep].
The second experiment was inspired by a recent work of Rolnick et al. [@rolnick2017deep]. They found that if a dataset contains sufficient clean data, a deep learning model can still be properly trained on it even when the data is diluted by a large amount of noise. They show that a model can still achieve a reasonable accuracy on CIFAR-10 even when the ratio of noise to clean data is increased to $20:1$. Can we transfer their conclusion to face recognition? Here we sample four subsets from IMDb-Face with $1E5$, $2E5$, $5E5$ and $1E6$ images, and dilute them with an equal number, double, and five times the amount of label-flip noise. Figure \[fig:dilute\](c) shows that a large performance gap remains against the completely clean baseline, even when we maintain the same number of clean samples. We conjecture two reasons why the cleanliness of data still plays a key role in face recognition: 1) current datasets, even when clean, are still far from sufficient to address the challenging face recognition problem, and thus noise matters; 2) noise is more lethal on a 10,000-class problem than on a 10-class problem.
The Effect of Noise on MegaFace and MS-Celeb-1M {#sec:damage_noise}
-----------------------------------------------
To further demonstrate the effect of noise, we perform experiments on two public datasets: MegaFace and MS-Celeb-1M. In order to quantify the effect of noise on face recognition, we sampled subsets from the two datasets and manually cleaned them. This provides us with a noisy sampled subset and a clean subset for each dataset. For a fair comparison, each noisy subset was sampled to have the same distribution of image numbers per identity as the original dataset. We also control the scale of the noisy subsets so that the resulting clean subsets are of nearly the same size. Because of the large size of the sampled subsets, we chose the third labeling scheme mentioned in Sec. \[sec:userStudy\], which is the fastest.
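Sampling a subset that preserves the images-per-identity distribution can be done by sampling the same fraction of identities within each image-count bucket. A simplified sketch (real data would be binned into coarser groups, as in Figure \[fig:noiseDistribution\]):

```python
import random
from collections import Counter

def sample_matching_distribution(ident_sizes, fraction, seed=0):
    """ident_sizes: {identity: number of images}. Sample `fraction` of the
    identities within every image-count bucket, so the images-per-identity
    histogram of the subset matches the full dataset."""
    rng = random.Random(seed)
    buckets = {}
    for ident, size in ident_sizes.items():
        buckets.setdefault(size, []).append(ident)
    subset = []
    for idents in buckets.values():
        subset += rng.sample(idents, max(1, round(fraction * len(idents))))
    return subset

# Toy dataset: 80 identities with 5 images each, 20 with 50 images each.
sizes = {f"id{i}": (5 if i < 80 else 50) for i in range(100)}
subset = sample_matching_distribution(sizes, fraction=0.5)
print(Counter(sizes[i] for i in subset))  # Counter({5: 40, 50: 10})
```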
-------------- ------- -------- --------- -------- -----------
Dataset        \#IDs   \#Imgs   Softmax   Center   A-softmax
MSV1-raw       96k     8.6M     71.70     73.82    73.99
-sampled       46k     3.7M     66.15     69.81    70.56
-clean         46k     1.76M    70.66     73.15    73.53
MegaFace-raw   670k    4.7M     64.32     64.71    66.95
-sampled       270k    2.7M     59.68     62.55    63.12
-clean         270k    1.5M     62.86     67.64    68.88
-------------- ------- -------- --------- -------- -----------
: Noisy data vs. Clean data. The results are obtained from rank-1 identification test on the MegaFace benchmark [@kemelmacher2016megaface]. Abbreviation MSV1 = MS-Celeb-1M(v1).[]{data-label="tab:test"}
Three different losses, namely SoftMax, Center Loss and A-Softmax, are respectively applied to the original datasets and the sampled and cleaned subsets. Table \[tab:test\] summarizes the results on the MegaFace recognition challenge [@kemelmacher2016megaface]. The effect of clean datasets is tremendous. Comparing the results between cleaned and sampled subsets, the average improvement in accuracy is as large as $4.14\%$. The accuracies on clean subsets even surpass those on the raw datasets, which are on average 4 times larger. The results suggest the effectiveness of reducing noise for large-scale datasets. As a matter of fact, the result of this experiment is part of our motivation to collect the IMDb-Face dataset.
It is worth pointing out that recent metric-learning-based methods such as A-Softmax [@liu2017sphereface] and Center-loss [@wen2016discriminative] also benefit from learning on clean datasets, although they already perform much better than Softmax [@celebface]. As shown in Table \[tab:test\], the improvements in accuracy on MegaFace using A-Softmax and Center-loss are over $5\%$. The results suggest that reducing dataset noise is still helpful, especially when metric learning is performed. Removing noisy samples could help an algorithm focus more on learning hard examples, rather than picking up meaningless noise.
Comparing IMDb-Face with other Face Datasets
--------------------------------------------
In the third experiment, we wish to show the competitiveness of IMDb-Face against several well-established face recognition training datasets, including: 1) CelebFaces [@sun2014deep; @celebface], 2) CASIA-WebFace [@yi2014learning], 3) MS-Celeb-1M(v1) [@guo2016ms], and 4) MegaFace [@nech2017level]. The latter two datasets are a few times larger than the proposed IMDb-Face. Note that MS-Celeb-1M has a larger subset (v2) containing 900,000 identities; limited by our computational resources, we did not conduct experiments on it. We do not use ensemble models in this experiment. Table \[tab:mainExperiment\] summarizes the results of using different datasets as the training source across three losses. We observe that the proposed noise-controlled IMDb-Face dataset is competitive as a training source despite its smaller size, validating the effectiveness of the IMDb data source and the cleanliness of IMDb-Face.
----------------- ------- ------- ----------- ------------- -----------
Dataset           \#IDs   \#Imgs  Softmax     Center Loss   A-Softmax
CelebFaces        10k     0.20M   36.15       42.54         43.72
CASIA-WebFace     10.5k   0.49M   65.17       68.09         70.89
MS-Celeb-1M(v1)   96k     8.6M    71.70       73.82         73.99
MegaFace          670k    4.7M    64.32       64.71         66.95
IMDb-Face         59k     1.7M    **74.75**   **79.41**     **84.06**
----------------- ------- ------- ----------- ------------- -----------
: Comparative results on using different face recognition datasets for training. Rank-1 identification accuracy on MegaFace benchmark is reported.[]{data-label="tab:mainExperiment"}
Comparisons with State-of-the-Arts {#sec:state-of-art}
----------------------------------
We are interested in comparing the performance of the model trained on IMDb-Face with state-of-the-art methods. Evaluation is conducted on MegaFace [@kemelmacher2016megaface], LFW [@huang2007labeled], and YTF [@wolf2011face] following the standard protocols. For LFW [@huang2007labeled] we compute the equal error rate (EER). For YTF [@wolf2011face] we report recognition accuracy. To highlight the effect of training data, we do not adopt model ensembles. The comparative results are shown in Table \[tab:finalResult\]. Our single model trained on IMDb-Face (A-Softmax$^\sharp$, IMDb-Face) achieves state-of-the-art performance on LFW, MegaFace, and YTF against published methods. It is noteworthy that the performance of our final model is also comparable to a few private methods on MegaFace.
------------------------------------------------------- ----------- --------------- -----------
Method, Dataset                                         LFW         Mega (Ident.)   YTF
Vocord-deep V3$^\dagger$, Private                       -           **91.76**       -
YouTu Lab$^\dagger$, Private                            -           83.29           -
DeepSense V2$^\dagger$, Private                         -           81.23           -
Marginal Loss$^\sharp$ [@deng2017marginal], MS-Celeb-1M 99.48       80.278          95.98
SphereFace [@liu2017sphereface], CASIA-WebFace          99.42       75.77           95.00
Center Loss [@wen2016discriminative], CASIA-WebFace     99.28       65.24           94.90
A-Softmax$^\sharp$, MS-Celeb-1M                         99.58       73.99           97.45
A-Softmax$^\sharp$, IMDb-Face                           **99.79**   **84.06**       **97.67**
------------------------------------------------------- ----------- --------------- -----------

: Comparisons with state-of-the-art methods on LFW, MegaFace (rank-1 identification) and YTF.[]{data-label="tab:finalResult"}
Conclusion
==========
Beyond existing efforts to develop sophisticated losses and CNN architectures, our study has investigated the problem of face recognition from the data perspective. Specifically, we developed an understanding of the sources of label noise and its consequences. We also collected a new large-scale dataset from the IMDb website, which is naturally a cleaner and wilder source than search engines. Through user studies, we have discovered an effective yet accurate way to clean our data. Extensive experiments have demonstrated that both the data source and the cleaning effectively improve the accuracy of face recognition. As a result of our study, we have presented the noise-controlled IMDb-Face dataset, and a state-of-the-art model trained on it. A clean dataset is important, as the face recognition community has been looking for large-scale clean datasets for two practical reasons: 1) to better study the training performance of contemporary deep networks as a function of the noise level in the data; without a clean dataset, one cannot induce controllable noise to support a systematic study; 2) to benchmark large-scale automatic data cleaning methods; although one can use the final performance of a deep network as a yardstick, this measure can be affected by many uncontrollable factors, e.g., network hyperparameter settings. A clean and large-scale dataset enables unbiased analysis.
[^1]: = equal contribution
[^2]: [www.IMDb.com](www.IMDb.com)
[^3]: We should emphasize that the curves in Figure \[fig:humanROC\] are different from actual human’s performance on verifying arbitrary face pairs. This is because in our study the faces from a query set are very likely to belong to the same person. The ROC thus represents human’s accuracy on ‘verifying face pairs that likely belong to the same identity’.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate and, second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major problem that the use of RTEs poses in practice is the question of setting the regularization parameter $\rho$. While a high value of $\rho$ is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations $n$ and/or their size $N$ grow large. First asymptotic results have recently been obtained under the assumption that $N$ and $n$ are large and commensurable. Interestingly, no results exist concerning the regime of $n$ going to infinity with $N$ fixed, even though the investigation of this assumption has usually predated the analysis of the more difficult case of $N$ and $n$ large. This motivates our work. In particular, we prove in the present paper that RTEs converge to a deterministic matrix when $n\to\infty$ with $N$ fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the parameter $\rho$.'
author:
- 'Abla Kammoun, Romain Couillet, Frédéric Pascal, Mohamed-Slim Alouini [^1] [^2] [^3]'
bibliography:
- 'IEEEabrv.bib'
- 'IEEEconf.bib'
- './tutorial\_RMT.bib'
title: |
Convergence and Fluctuations\
of Regularized Tyler Estimators
---
Introduction
============
The estimation of covariance matrices is at the heart of many applications in signal processing and wireless communications. The most frequently used estimator is the well-known sample covariance matrix (SCM). Its popularity owes to its low complexity and, in general, to a good understanding of its behavior. However, the use of the SCM in practice is hindered by its poor performance when samples contain outliers or have an impulsive nature. This is especially the case in radar detection applications, in which the noise is often modeled by heavy-tailed distributions [@Ward81; @Watts85; @Nohara91; @Billingsley93]. One of the reasons why the SCM performs poorly in such scenarios is that, as opposed to the case of Gaussian observations, the SCM is not the maximum likelihood estimator (MLE) of the covariance matrix. This is for instance the case for complex elliptical distributions, originally introduced by Kelker [@kelker] and widely used in radar applications, for which the MLE takes a strikingly different form. In order to achieve better robustness against outliers, a class of covariance estimators termed robust estimators of scatter was proposed by Huber, Hampel and Maronna [@huber1964robust; @Huber72; @Maronna76], and extended more recently to the complex case [@esa-12; @mahot-13; @pascal2008covariance]. This class of estimators can be viewed as a generalization of MLEs, in that they are derived from the optimization of a meaningful cost function [@ollila-tyler; @Palomar-14]. Aside from robustness to the presence of outliers, a second feature whose importance should not be underestimated is the conditioning of the covariance matrix estimate. This feature becomes all the more central when the quantity of interest coincides with the inverse of the population covariance matrix.
In order to guarantee an acceptable conditioning, regularized robust estimators, which find their roots in the diagonal loading technique due to Abramovich and Carlson [@abramovich-81; @carlson-88], were proposed in [@ollila-tyler]. The idea is to force by construction all the eigenvalues of the robust-scatter estimator to be greater than a regularization coefficient $\rho$.
The most popular regularized estimators, which are today receiving increasing interest, are the regularized Tyler estimators (RTEs), i.e., regularized versions of the robust Tyler estimator [@tyler]. In addition to achieving the desired robustness, RTEs present the advantage of being well-suited to scenarios where the number of observations is insufficient or the population covariance matrix is ill-conditioned, whereas their non-regularized counterparts are ill-conditioned or even undefined if the number of observations $n$ is less than their size $N$. Motivated by these interesting features, several works have recently considered the use of RTEs in radar detection applications [@chen-11; @Pascal-2013; @kammoun-15; @ollila-tyler; @couillet-kammoun-14]. While existence and uniqueness of the robust-scatter estimator seem to be sufficiently studied [@Pascal-2013; @ollila-tyler], the impact of the regularization parameter on the behavior of the RTE has remained less understood. Answering this question is essential in order to come up with appropriate designs of the RTE in practice. It poses, however, major technical challenges, mainly because it necessitates a profound analysis of the behavior of the RTE, which is far from being an easy task. As a matter of fact, the main difficulty in studying the behavior of the RTE fundamentally lies in its non-linear relation to the observations, thus rendering the analysis for fixed $n$ and $N$ likely out of reach. In light of this observation, recent works have considered asymptotic regimes where $n$ and/or $N$ are allowed to grow to infinity. Two regimes can be distinguished: the regime of fixed $N$ with $n$ growing to infinity, and the regime of $n$ and $N$ growing large simultaneously.
While the former regime, coined the large-$n$ regime, is standard in that it is by far the most considered in the literature, the second one, which we will refer to as the large-$n,N$ regime, is very recent and is particularly driven by recent advances in the spectral analysis of large dimensional random matrices. Interestingly, contrary to what one would imagine, very little seems to be known about the behavior of the RTE in the standard regime, whereas very recent results regarding its behavior in the large-$n,N$ regime have been obtained in [@couillet-kammoun-14; @couillet-13]. One major advantage of the large-$n,N$ regime is that, although requiring the use of advanced tools from random matrix theory, it often leads to less involved results that lend themselves to simple interpretation. This interesting feature fundamentally inheres in the double averaging effect that leads to more compact results in which only prevailing quantities remain. However, when $N$ is not so large, the same averaging effect is no longer valid and thus cannot be leveraged. A priori, assuming that $N$ is fixed entails major changes in the behavior of RTEs that have not thus far been grasped. Understanding what really happens in the large-$n$ regime, besides its own theoretical interest, should lead to alternative results that might be more accurate for not-so-large-$N$ scenarios. A second motivation for working in the large-$n$ regime is that covariance matrix estimators usually converge in this case to deterministic matrices, which opens up possibilities for easier handling of the RTE. Encouraged by these interesting practical and theoretical aspects, we study in this paper the asymptotic behavior of the RTE in the large-$n$ regime.
In particular, we prove in section \[sec:first\_order\] that the RTE converges to a deterministic matrix which depends on the theoretical covariance matrix and the regularization parameter, before presenting its fluctuations around this asymptotic limit in section \[sec:second\_order\]. Numerical results are finally provided in order to support the accuracy of the derived results.
[**Notation**]{}. In this paper, the following notations are used. Vectors are defined as column vectors and designated with bold lower case, while matrices are given in bold upper case. The norm notation $\|.\|$ refers to the spectral norm for matrices and the Euclidean norm for vectors, while $\|.\|_{\rm Fro}$ refers to the Frobenius norm of matrices. The notations $(.)^{\mbox{\tiny T}}$, $(.)^*$ and $\overline{(.)}$ denote, respectively, transpose, Hermitian (i.e. complex conjugate transpose) and pointwise conjugate. Besides, ${\bf I}_N$ denotes the $N\times N$ identity matrix; for a matrix ${\bf A}$, $\lambda_{\rm min}({\bf A})$ and $\lambda_{\rm max}({\bf A})$ denote respectively the smallest and largest eigenvalues of ${\bf A}$, while the notation ${\rm vec}({\bf A})$ refers to the vector obtained by stacking the columns of ${\bf A}$. For ${\bf A}$, ${\bf B}$ two positive semi-definite matrices, ${\bf A}\preceq {\bf B}$ means that ${\bf B}-{\bf A}$ is positive semi-definite. $X_n=o_p(1)$ denotes convergence in probability to zero of $X_n$ as $n$ goes to infinity, and $X_n=\mathcal{O}_p(1)$ means that $X_n$ is bounded in probability. The arrow “${\overset{\rm a.s.}{\longrightarrow}}$” designates almost sure convergence, while the arrow “$\xrightarrow[]{\mathcal{D}}$” refers to convergence in distribution.
Convergence of the regularized M-estimator of scatter matrix {#sec:first_order}
============================================================
Consider ${\bf x}_1,\cdots,{\bf x}_n$, $n$ observations of size $N$ defined as: $${\bf x}_i=\boldsymbol{\Sigma}_N^{\frac{1}{2}}{\bf w}_i,$$ where ${\bf w}_i\in\mathbb{C}^{N}$ are Gaussian zero-mean random vectors with covariance ${\bf I}_N$ and $\boldsymbol{\Sigma}_N\succeq 0$ is the population covariance matrix. The regularized robust-scatter estimator that will be considered in this work is the one defined in [@Pascal-2013] as the unique solution $\hat{\bf C}_N(\rho)$ to: $$\hat{\bf C}_N(\rho)=(1-\rho)\frac{1}{n}\sum_{i=1}^n \frac{{\bf x}_i{\bf x}_i^*}{\frac{1}{N}{\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i} + \rho {\bf I}_N,
\label{eq:hatc}$$ with $\rho\in\left(\max(0,1-\frac{n}{N}),1\right]$.[^4] Chen’s estimator is more involved and will thus not be considered in this work. The estimator $\hat{\bf C}_N(\rho)$ can be thought of as a hybrid robust-shrinkage estimator, reminiscent of Tyler’s M-estimator of scale [@tyler] and of the Ledoit–Wolf shrinkage estimator [@wolf]. It will thus be coined the regularized Tyler estimator (RTE), and it defines a class of regularized robust-scatter estimators indexed by the regularization parameter $\rho$. When $n>N$, by varying $\rho$ from $0$ to $1$, one can move from the unbiased Tyler estimator [@Pascal-08] ($\rho=0$) to the identity matrix ($\rho=1$), which corresponds to a trivial estimate of the unknown covariance matrix $\boldsymbol{\Sigma}_N$.
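As a practical aside (not part of the original development): the fixed-point equation above is typically evaluated by plain repeated substitution, starting from the identity and iterating until the update stabilizes. The sketch below is our own minimal implementation (function and parameter names are ours), not the authors' code:

```python
import numpy as np

def rte(X, rho, n_iter=200, tol=1e-10):
    """Regularized Tyler estimator: repeated substitution on
    C <- (1-rho) * (1/n) * sum_i x_i x_i^* / ((1/N) x_i^* C^{-1} x_i) + rho * I_N,
    starting from C = I_N and stopping once the update stabilizes."""
    N, n = X.shape
    C = np.eye(N, dtype=X.dtype)
    for _ in range(n_iter):
        Cinv = np.linalg.inv(C)
        # quadratic forms (1/N) x_i^* C^{-1} x_i for all i at once
        q = np.einsum('ji,jk,ki->i', X.conj(), Cinv, X).real / N
        C_next = (1 - rho) * (X / q) @ X.conj().T / n + rho * np.eye(N, dtype=X.dtype)
        if np.linalg.norm(C_next - C) < tol:
            return C_next
        C = C_next
    return C
```

By construction, the output is Hermitian and, for $\rho>0$, positive definite, which is precisely the conditioning benefit of the regularization term.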
Review of the results obtained in the large-$n,N$ regime
--------------------------------------------------------
Letting $c_N=\frac{N}{n}$, the large-$n,N$ regime will refer in the sequel to the one where $n\to\infty$ and $N\to\infty$ with $c_N\to c\in(0,\infty)$.
As mentioned earlier, without particular assumptions on $\boldsymbol{\Sigma}_N$, the RTE cannot be proven to converge (in any usual matrix norm) to some deterministic matrix in the large-$n,N$ regime. Instead, the approach pursued in [@couillet-kammoun-14] consists in determining a random equivalent for the RTE which corresponds to a standard matrix model. This finding is of utmost importance, since it allows one to replace the RTE, whose direct analysis is overly difficult, by another random object for which a wealth of results is available. The meaning of the equivalence between the RTE and the new object will be specified below.
Prior to presenting the results of [@couillet-kammoun-14], we shall, for the reader’s convenience, gather all the observations’ properties in the following assumption:
For $i\in\left\{1,\cdots,n\right\}$, ${\bf x}_i=\boldsymbol{\Sigma}_N^{\frac{1}{2}}{\bf w}_i$, with:
- ${\bf w}_1,\cdots,{\bf w}_n$ are $N\times1$ independent Gaussian random vectors with zero mean and covariance ${\bf I}_N$,
- $\boldsymbol{\Sigma}_N\in\mathbb{C}^{N\times N}\succeq 0$ is such that $\frac{1}{N}\operatorname{tr}\boldsymbol{\Sigma}_N=1$.
\[ass:model\]
It is worth noticing that the normalization $\frac{1}{N}\operatorname{tr}\boldsymbol{\Sigma}_N=1$ is considered for ease of exposition and is not limiting since the RTE is invariant to any scaling of $\boldsymbol{\Sigma}_N$. Denote by $\hat{\bf S}_N(\rho)$ the matrix given by: $$\hat{\bf S}_N(\rho)=\frac{1}{\gamma_N(\rho)}\frac{1-\rho}{1-(1-\rho)c_N}\frac{1}{n}\sum_{i=1}^n {\bf x}_i{\bf x}_i^* +\rho {\bf I}_N,$$ where $\gamma_N(\rho)$ is the unique positive solution to: $$1=\frac{1}{N}\operatorname{tr}\boldsymbol{\Sigma}_N\left(\rho\gamma_N(\rho){\bf I}_N+(1-\rho)\boldsymbol{\Sigma}_N\right)^{-1}.$$ Then $\hat{\bf S}_N(\rho)$ is equivalent to the RTE $\hat{\bf C}_N(\rho)$ in the sense of the following theorem.
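A side note on computing $\gamma_N(\rho)$ (our own sketch, not from the paper): in the eigenbasis of $\boldsymbol{\Sigma}_N$ the defining equation reads $1=\frac{1}{N}\sum_i \lambda_i/(\rho\gamma+(1-\rho)\lambda_i)$, whose right-hand side is strictly decreasing in $\gamma$, exceeds $1$ near $\gamma=0$ and vanishes at infinity, so the unique positive root is found by bisection:

```python
import numpy as np

def gamma_N(eigvals, rho, tol=1e-12):
    """Unique positive root g of 1 = (1/N) sum_i lam_i / (rho*g + (1-rho)*lam_i),
    found by bisection (the left-hand side minus one is decreasing in g)."""
    lam = np.asarray(eigvals, dtype=float)
    f = lambda g: np.mean(lam / (rho * g + (1 - rho) * lam)) - 1.0
    lo, hi = 0.0, 1.0
    while f(hi) > 0:          # expand the bracket until the sign changes
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With the normalization $\frac{1}{N}\operatorname{tr}\boldsymbol{\Sigma}_N=1$, one checks for instance that $\boldsymbol{\Sigma}_N={\bf I}_N$ gives $\gamma_N(\rho)=1$ for every $\rho$.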
For any $\kappa >0$ small, define $\mathcal{R}_\kappa\triangleq\left[\kappa+\max(0,1-c^{-1}),1\right]$. Then, as $N,n\to\infty$ with $\frac{N}{n}\to c\in\left(0,\infty\right)$ and assuming $\lim\sup\|\boldsymbol{\Sigma}_N\|<\infty$, we have: $$\sup_{\rho\in\mathcal{R}_\kappa}\left\|\hat{\bf C}_N(\rho)-\hat{\bf S}_N(\rho)\right\|{\overset{\rm a.s.}{\longrightarrow}}0.$$ \[th:large\_nN\]
Convergence of the RTE in the large-$n$ regime
----------------------------------------------
In this section, we consider the regime wherein $N$ is fixed and $n$ tends to infinity. A standard tool to handle this regime is the strong law of large numbers (SLLN), which suggests replacing an average of independent and identically distributed random variables by its expected value. This result should particularly serve to treat the term $$\frac{1}{n}\sum_{i=1}^n \frac{{\bf x}_i{\bf x}_i^*}{{\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i}$$ in the expression of the RTE. Nevertheless, because of the dependence of $\hat{\bf C}_N(\rho)$ on the observations ${\bf x}_i$, the SLLN cannot be directly applied to this quantity. As we expect $\hat{\bf C}_N(\rho)$ to converge to some deterministic matrix, say $\boldsymbol{\Sigma}_0(\rho)$, it is sensible to substitute $\frac{1}{n}\sum_{i=1}^n \frac{{\bf x}_i{\bf x}_i^*}{{\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i}$ by $\frac{1}{n}\sum_{i=1}^n \frac{{\bf x}_i{\bf x}_i^*}{{\bf x}_i^*\boldsymbol{\Sigma}_0^{-1}(\rho){\bf x}_i}$. By the SLLN, the latter quantity is in turn asymptotically equivalent to $\mathbb{E}\left[\frac{{\bf x}{\bf x}^*}{{\bf x}^*\boldsymbol{\Sigma}_0^{-1}(\rho){\bf x}}\right]$, where the expectation is taken over the distribution of the random vectors ${\bf x}_i$. Based on these heuristic arguments, a plausible guess is that $\hat{\bf C}_N(\rho)$ converges to $\boldsymbol{\Sigma}_0(\rho)$, the solution to the following equation: $$\boldsymbol{\Sigma}_0(\rho)=N(1-\rho)\mathbb{E}\left[\frac{{\bf x}{\bf x}^*}{{\bf x}^*\boldsymbol{\Sigma}_0^{-1}(\rho){\bf x}}\right]+\rho {\bf I}_N.
\label{eq:sigma_0}$$ The main goal of this section is to establish the convergence of $\hat{\bf C}_N(\rho)$ to $\boldsymbol{\Sigma}_0(\rho)$. We will assume that $\boldsymbol{\Sigma}_0(\rho)$ exists for each $\rho\in\left(0,1\right]$. The existence and uniqueness of $\boldsymbol{\Sigma}_0(\rho)$ will be discussed later on in this section. Similar to the large-$n,N$ regime, we need to introduce a random equivalent for $\hat{\bf C}_N(\rho)$ that is easier to handle. Naturally, an intuitive random equivalent is obtained by replacing, in the right-hand side of (\[eq:hatc\]), $\hat{\bf C}_N(\rho)$ by $\boldsymbol{\Sigma}_0(\rho)$, thus yielding: $$\tilde{\boldsymbol{\Sigma}}(\rho)=N(1-\rho)\frac{1}{n}\sum_{i=1}^n \frac{{\bf x}_i{\bf x}_i^*}{{\bf x}_i^*\boldsymbol{\Sigma}_0^{-1}(\rho){\bf x}_i}+\rho {\bf I}_N.
\label{eq:tilde_sigma}$$ Unlike $\hat{\bf C}_N(\rho)$, $\tilde{\boldsymbol{\Sigma}}(\rho)$ is more tractable, being an explicit function of the observations’ vectors. Being an average of independent and identically distributed terms, $\tilde{\boldsymbol{\Sigma}}(\rho)$ is an unbiased estimate of $\boldsymbol{\Sigma}_0(\rho)$ that satisfies: $$\boldsymbol{\Sigma}_0(\rho)=\tilde{\boldsymbol{\Sigma}}(\rho)+\boldsymbol{\epsilon}_n(\rho),$$ where $\boldsymbol{\epsilon}_n(\rho)$ is an $N\times N$ matrix whose elements converge almost surely to zero (by the SLLN) and are bounded in probability at the rate $\frac{1}{\sqrt{n}}$, i.e., $$\left[\boldsymbol{\epsilon}_n(\rho)\right]_{i,j}=\mathcal{O}_p\left(\frac{1}{\sqrt{n}}\right).$$
For the above convergence to hold uniformly in $\rho$, one needs to check that the second absolute moment of the entries of $\frac{{\bf x}{\bf x}^*}{{\bf x}^*\boldsymbol{\Sigma}_0^{-1}(\rho){\bf x}}$ is uniformly bounded in $\rho$. To this end, we shall additionally assume that:
Matrix $\boldsymbol{\Sigma}_N$ is non-singular, i.e., the smallest eigenvalue of $\boldsymbol{\Sigma}_N$, $\lambda_{\rm min}(\boldsymbol{\Sigma}_N)$ satisfies: $$\lambda_{\rm min}(\boldsymbol{\Sigma}_N) >0.$$ \[ass:min\]
Under Assumption \[ass:min\], the spectral norm of $\boldsymbol{\Sigma}_0(\rho)$ can be bounded as:
Let $\boldsymbol{\Sigma}_0$ be the solution to (\[eq:sigma\_0\]), whenever it exists. Then, $$\sup_{\rho\in\left[\kappa,1\right]}\left\|\boldsymbol{\Sigma}_0(\rho)\right\| \leq \frac{\|\boldsymbol{\Sigma}_N\|}{\lambda_{\rm min}(\boldsymbol{\Sigma}_N)},$$ where $\kappa>0$ is some positive scalar. \[lemma:bounded\_spectral\]
See Appendix \[app:bounded\_spectral\]
Equipped with the bound provided by Lemma \[lemma:bounded\_spectral\], we can claim that: $$\sup_{\rho\in \left[\kappa,1\right]}\left|\left[\boldsymbol{\epsilon}_n(\rho)\right]_{i,j}\right|=\mathcal{O}_p\left(\frac{1}{\sqrt{n}}\right)$$ or equivalently: $$\sup_{\rho\in \left[\kappa,1\right]} \left\|\tilde{\boldsymbol{\Sigma}}(\rho)-\boldsymbol{\Sigma}_0(\rho)\right\| =\mathcal{O}_p\left(\frac{1}{\sqrt{n}}\right).$$ Characterizing the rate of convergence of $\tilde{\boldsymbol{\Sigma}}(\rho)$ to $\boldsymbol{\Sigma}_0(\rho)$ is of fundamental importance and will later help in the derivation of the second-order statistics for $\tilde{\boldsymbol{\Sigma}}(\rho)$ and then for $\hat{\bf C}_N(\rho)$.
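To get a feel for this concentration, note that when $\boldsymbol{\Sigma}_N={\bf I}_N$ one has $\boldsymbol{\Sigma}_0(\rho)={\bf I}_N$ (plug ${\bf I}_N$ into (\[eq:sigma\_0\]) and use $\mathbb{E}\left[\frac{{\bf w}{\bf w}^*}{{\bf w}^*{\bf w}}\right]=\frac{1}{N}{\bf I}_N$), so $\tilde{\boldsymbol{\Sigma}}(\rho)$ is fully explicit. The sketch below is ours (the values $N=4$, $\rho=0.5$ are arbitrary) and simply watches the deviation shrink as $n$ grows:

```python
import numpy as np

def tilde_sigma_identity(n, N, rho, rng):
    """tilde{Sigma}(rho) in the special case Sigma_N = I_N (so Sigma_0(rho) = I_N):
    N(1-rho)/n * sum_i x_i x_i^* / (x_i^* x_i) + rho * I_N."""
    X = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
    q = np.sum(np.abs(X) ** 2, axis=0)        # x_i^* Sigma_0^{-1} x_i = ||x_i||^2
    return N * (1 - rho) * (X / q) @ X.conj().T / n + rho * np.eye(N)

rng = np.random.default_rng(0)
N, rho = 4, 0.5
errs = {n: np.linalg.norm(tilde_sigma_identity(n, N, rho, rng) - np.eye(N))
        for n in (100, 10_000)}
# empirically the deviation decays at a 1/sqrt(n)-type rate
```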
Before stating our first main result, we would like to stress the fact that Assumption \[ass:min\] is not limiting. To see this, consider $\boldsymbol{\Sigma}_N={\bf U}\boldsymbol{\Lambda}{\bf U}^*$ the eigenvalue decomposition of $\boldsymbol{\Sigma}_N$, wherein the diagonal elements of $\boldsymbol{\Lambda}$, $\lambda_1,\cdots,\lambda_N$, correspond to the eigenvalues of $\boldsymbol{\Sigma}_N$ arranged in decreasing order, i.e., $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_N$. Denoting by $r$ the rank of $\boldsymbol{\Sigma}_N$, we have $\lambda_{r+1}=\cdots=\lambda_N=0$. Write ${\bf U}$ as ${\bf U}=\left[{\bf U}_{r},{\bf U}_{N-r}\right]$, ${\bf U}_r\in\mathbb{C}^{N\times r}$. Then, it is easy to see that: $$\hat{\bf C}_N(\rho){\bf U}_{N-r}=\rho{\bf U}_{N-r}$$ while: $${\bf U}_r^*\hat{\bf C}_N(\rho){\bf U}_r=(1-\rho)\frac{1}{n}\sum_{i=1}^n \frac{\boldsymbol{\Lambda}_r^{\frac{1}{2}}\tilde{\bf w}_i\tilde{\bf w}_i^*\boldsymbol{\Lambda}_r^{\frac{1}{2}}}{\frac{1}{N}\tilde{\bf w}_i^*\boldsymbol{\Lambda}_r^{\frac{1}{2}}{\bf U}_r^*\hat{\bf C}_N^{-1}(\rho){\bf U}_r\boldsymbol{\Lambda}_r^{\frac{1}{2}}\tilde{\bf w}_i} +\rho {\bf I}_r,
\label{eq:C_N}$$ where $\tilde{\bf w}_i={\bf U}_r^*{\bf w}_i$ follows a Gaussian distribution with zero mean and covariance ${\bf I}_r$. Since $\left({\bf U}_r^*\hat{\bf C}_N(\rho){\bf U}_r\right)^{-1}={\bf U}_r^*\hat{\bf C}_N^{-1}(\rho){\bf U}_r$, instead of using $\hat{\bf C}_N(\rho)$, it thus suffices to work with ${\bf U}_r^*\hat{\bf C}_N(\rho){\bf U}_r$, for which Assumption \[ass:min\] can be used.
The following theorem establishes the convergence of $\hat{\bf C}_N(\rho)$ to $\boldsymbol{\Sigma}_0(\rho)$, the hypothetical solution to (\[eq:sigma\_0\]):
Assume that there exists a unique solution $\boldsymbol{\Sigma}_0(\rho)$ to (\[eq:sigma\_0\]). Let $\kappa>0$ be some small positive real scalar. Then, assuming that Assumptions \[ass:model\] and \[ass:min\] hold true, one has under the large-$n$ regime: $$\sup_{\rho\in\left[\kappa,1\right]}\left\|\hat{\bf C}_N(\rho)- \boldsymbol{\Sigma}_0(\rho)\right\|{\overset{\rm a.s.}{\longrightarrow}}0.$$ Moreover, $$\sup_{\rho\in\left[\kappa,1\right]} \left\|\hat{\bf C}_N(\rho)-\boldsymbol{\Sigma}_0(\rho)\right\|=\mathcal{O}_p\left(\frac{1}{\sqrt{n}}\right).$$ \[th:first\_order\]
See Appendix \[app:first\_order\]
In Theorem \[th:first\_order\], we establish the convergence of $\hat{\bf C}_N(\rho)$ to some limiting matrix $\boldsymbol{\Sigma}_0(\rho)$ that solves the fixed-point equation (\[eq:sigma\_0\]). While (\[eq:sigma\_0\]) seems to fully characterize $\boldsymbol{\Sigma}_0(\rho)$, it does not clearly unveil its relationship with the observations’ covariance matrix $\boldsymbol{\Sigma}_N$. The major intricacy stems from the expectation operator in the term $\mathbb{E}\left[\frac{{\bf x}{\bf x}^*}{{\bf x}^*\boldsymbol{\Sigma}_0^{-1}(\rho){\bf x}}\right]$. A close look at this quantity reveals that it can be further developed by leveraging some interesting features of Gaussian distributed vectors. Note first that (\[eq:sigma\_0\]) is also equivalent to: $$N(1-\rho)\mathbb{E}\left[\frac{{\bf w}{\bf w}^*}{{\bf w}^*\boldsymbol{\Sigma}_N^{\frac{1}{2}}\boldsymbol{\Sigma}_0^{-1}(\rho)\boldsymbol{\Sigma}_N^{\frac{1}{2}}{\bf w}}\right]+\rho\boldsymbol{\Sigma}_N^{-1}=\boldsymbol{\Sigma}_N^{-\frac{1}{2}}\boldsymbol{\Sigma}_0(\rho)\boldsymbol{\Sigma}_N^{-\frac{1}{2}},
\label{eq:sigma_0_transform}$$ where ${\bf w}\sim\mathcal{CN}({\bf 0},{\bf I}_N)$. Let $\boldsymbol{\Sigma}_N^{\frac{1}{2}}\boldsymbol{\Sigma}_0^{-1}(\rho)\boldsymbol{\Sigma}_N^{\frac{1}{2}}={\bf V}{\bf D}{\bf V}^*$ be an eigenvalue decomposition of $\boldsymbol{\Sigma}_N^{\frac{1}{2}}\boldsymbol{\Sigma}_0^{-1}(\rho)\boldsymbol{\Sigma}_N^{\frac{1}{2}}$, where ${\bf D}$ is a diagonal matrix with diagonal elements $d_1,d_2, \cdots,d_N$. Notice that the $d_i$’s of course depend on $\rho$; for simplicity, this dependence is omitted from the notation. Since the Gaussian distribution is invariant under unitary transformations, (\[eq:sigma\_0\_transform\]) is also equivalent to: $$N(1-\rho)\mathbb{E}\left[\frac{{\bf w}{\bf w}^*}{{\bf w}^*{\bf D}{\bf w}}\right] +\rho {\bf V}^*\boldsymbol{\Sigma}_N^{-1}{\bf V}={\bf D}^{-1}.
\label{eq:equivalent}$$ It is not difficult to see that the off-diagonal elements of $\mathbb{E}\left[\frac{{\bf w}{\bf w}^*}{{\bf w}^*{\bf D}{\bf w}}\right]$ are equal to zero. Indeed, for $i\neq j$, writing $w_i$ as $r_ie^{\jmath \theta_i}$ with $r_i$ Rayleigh distributed and $\theta_i$ independent of $r_i$ and uniformly distributed over $[-\pi,\pi]$, one has $\mathbb{E}\left[\left[\frac{{\bf w}{\bf w}^*}{{\bf w}^*{\bf D}{\bf w}}\right]_{i,j}\right]=\mathbb{E}\left[\frac{r_ir_je^{\jmath(\theta_i-\theta_j)}}{\sum_{k=1}^N d_k r_k^2}\right]$, which can be shown to be zero by taking the expectation over the phase difference $\theta_i-\theta_j$. Therefore, $\mathbb{E}\left[\frac{{\bf w}{\bf w}^*}{{\bf w}^*{\bf D}{\bf w}}\right]$ is diagonal, with diagonal elements $\left(\alpha_i\right)_{i=1,\cdots,N}$ given by: $$\alpha_i({\bf D})=\mathbb{E}\left[\frac{|w_i|^2}{{\bf w}^*{\bf D}{\bf w}}\right].$$ Hence, ${\bf V}^*\boldsymbol{\Sigma}_N^{-1}{\bf V}$ is also diagonal, thus implying that $\boldsymbol{\Sigma}_N$ and $\boldsymbol{\Sigma}_0(\rho)$ share the same eigenvector matrix ${\bf U}$. In order to prove the existence of $\boldsymbol{\Sigma}_0(\rho)$, it suffices to check that there exist $d_1,\cdots,d_N$ solving the following equation: $$N(1-\rho)\alpha_i({\bf D}) +\frac{\rho}{\lambda_i} =\frac{1}{d_i}.
\label{eq:system}$$ To this end, consider $$\begin{aligned}
&h:\mathbb{R}_{+}^{N}\to \mathbb{R}_{+}^{N}\\
& \left(x_1,\cdots,x_N\right) \mapsto \left(N(1-\rho)\mathbb{E}\left[\frac{|w_1|^2}{\sum_{j=1}^N \frac{1}{x_j}|w_j|^2}\right]+\frac{\rho}{\lambda_1},\cdots,\right.\\
&\left. N(1-\rho)\mathbb{E}\left[\frac{|w_N|^2}{\sum_{j=1}^N \frac{1}{x_j}|w_j|^2}\right]+\frac{\rho}{\lambda_N}\right).
\end{aligned}$$ Proving that $d_1,\cdots,d_N$ are the unique solutions of (\[eq:system\]) is equivalent to showing that: $${\bf x}=h\left(x_1,\cdots,x_N\right)
\label{eq:fixed}$$ admits a unique positive solution. For this, we show that $h$ satisfies the following properties:
- Nonnegativity: For each $x_1,\cdots,x_N\geq 0$, the vector $h(x_1,\cdots,x_N)$ has positive elements.
- Monotonicity: For each $x_1\geq x_1^{'},\cdots,x_N\geq x_N^{'}$, $h(x_1,\cdots,x_N)\geq h(x_1^{'},\cdots,x_N^{'})$ where $\geq$ holds element-wise.
- Scalability: For each $\alpha>1$, $\alpha h(x_1,\cdots,x_N) > h(\alpha x_1,\cdots,\alpha x_N)$.
The first item is trivial. The second one follows from the fact that $h$ is an increasing function of each $x_i$. As for the last item, it follows by noticing that, as $\rho>0$, $$N(1-\rho)\mathbb{E}\left[\frac{|w_i|^2}{\sum_{j=1}^N \frac{1}{\alpha x_j}|w_j|^2}\right]+\frac{\rho}{\lambda_i} <\alpha\left(N(1-\rho)\mathbb{E}\left[\frac{|w_i|^2}{\sum_{j=1}^N \frac{1}{ x_j}|w_j|^2}\right]+\frac{\rho}{\lambda_i}\right).$$ According to [@YAT95], $h$ is a standard interference function, and if there exist $q_1,\cdots,q_N$ such that ${\bf q} > h(q_1,\cdots,q_N)$ where $>$ holds element-wise, then there is a unique ${\bf x}_{\infty}=\left(x_{1,\infty},\cdots,x_{N,\infty}\right)$ such that: $${\bf x}_{\infty}=h(x_{1,\infty},\cdots,x_{N,\infty}).$$ Moreover, ${\bf x}_{\infty}=\lim_{t\to\infty}{\bf x}^{(t)}$ with ${\bf x}^{(0)}> 0$ arbitrary and, for $t\geq 0$, ${\bf x}^{(t+1)}=h(x_1^{(t)},\cdots,x_N^{(t)})$. To prove the feasibility condition, take ${\bf q}=\left(q,\cdots,q\right)$. Then, $h(q,\cdots,q)$ has entries $(1-\rho)q+\frac{\rho}{\lambda_i}$. Setting $q> \frac{1}{\lambda_{\rm min}(\boldsymbol{\Sigma}_N)}$, we get that $h(q,\cdots,q)<{\bf q}$ element-wise, thereby establishing the desired inequality.
The interest of the framework of Yates [@YAT95] is that, in addition to being a useful tool for proving existence and uniqueness of the fixed point of a standard interference function, it shows that the solution can be numerically approximated by computing iteratively ${\bf x}^{(t+1)}=h(x_1^{(t)},\cdots,x_N^{(t)})$. However, in order to implement this algorithm, one needs to further develop the terms $\alpha_i({\bf D})$. This is in particular the goal of the following lemma, the proof of which is deferred to Appendix \[app:di\].
Let ${\bf w}=\left[w_1,\cdots,w_N\right]^{\mbox{\tiny T}}$ be a standard complex Gaussian vector and ${\bf D}={\rm diag}\left(d_1,\cdots,d_N\right)$ be a diagonal matrix with positive diagonal elements. Consider $\alpha_1,\cdots,\alpha_N$, the set of scalars given by: $$\alpha_i({\bf D})=\mathbb{E}\left[\frac{|w_i|^2}{\sum_{j=1}^N d_j |w_j|^2}\right].$$ Then $$\begin{aligned}
&\alpha_i({\bf D})=\frac{1}{2^NN}\frac{1}{d_i\prod_{j=1}^N d_j}\\
&\times F_D^{(N)}\left(N,1,\cdots,\!\underset{\substack{\uparrow \\ i\textnormal{-th} \\ \textnormal{position}}}{2},\!1,\cdots\!,1,N+1,\frac{d_1-\frac{1}{2}}{d_1},\cdots,\!\frac{d_N-\frac{1}{2}}{d_N}\right),\end{aligned}$$ where $F_D^{(N)}$ is Lauricella’s type-$D$ hypergeometric function.[^5] \[lemma:di\]
Equipped with the result of Lemma \[lemma:di\], we will now show how one can in practice approximate $\boldsymbol{\Sigma}_0(\rho)$. First, one needs to approximate the solution of (\[eq:system\]). Let ${\bf d}^{(0)}=\left[d_1^{(0)},\cdots,d_N^{(0)}\right]^{\mbox{\tiny T}}$ be an arbitrary vector with positive elements. We then define the sequence ${\bf d}^{(t)}=\left[d_1^{(t)},\cdots,d_N^{(t)}\right]^{\mbox{\tiny T}}$ by: $$d_i^{(t+1)}= \frac{1}{\frac{\rho }{\lambda_i}+N(1-\rho)\alpha_i({\rm diag}({\bf d}^{(t)}))}$$ where the expression of $\alpha_i({\rm diag}({\bf d}^{(t)}))$ is given by Lemma \[lemma:di\]. As $t\to\infty$, ${\bf d}^{(t)}$ tends to ${\bf d}$, the vector of eigenvalues of $\boldsymbol{\Sigma}_N^{\frac{1}{2}}\boldsymbol{\Sigma}_0^{-1}(\rho)\boldsymbol{\Sigma}_N^{\frac{1}{2}}$, which is the solution of (\[eq:system\]). Since $\boldsymbol{\Sigma}_N$ and $\boldsymbol{\Sigma}_0(\rho)$ share the same eigenvectors, the eigenvalues $s_{1,\infty},\cdots,s_{N,\infty}$ of $\boldsymbol{\Sigma}_0(\rho)$ are given by $s_{i,\infty}=\frac{\lambda_i}{d_{i}}$. The matrix $\boldsymbol{\Sigma}_0(\rho)$ is finally given by: $$\boldsymbol{\Sigma}_0(\rho)={\bf U}\hspace{0.05cm}{\rm diag}(\left[s_{1,\infty},\cdots,s_{N,\infty}\right]){\bf U}^*.$$ While the above characterization of $\boldsymbol{\Sigma}_0(\rho)$ provides few direct insights in most cases, it shows that, except for the particular case $\boldsymbol{\Sigma}_N={\bf I}_N$, the RTE $\hat{\bf C}_N(\rho)$ is biased for $\rho\in\left[\kappa,1\right)$ in that: $$\boldsymbol{\Sigma}_0(\rho) \neq \boldsymbol{\Sigma}_N.$$ To see this, notice that $\boldsymbol{\Sigma}_0(\rho) = \boldsymbol{\Sigma}_N$ implies that ${\bf D}={\bf I}_N$. Replacing ${\bf D}$ by the identity matrix in (\[eq:equivalent\]) and using the fact that $\mathbb{E}\left[\frac{{\bf w}{\bf w}^*}{{\bf w}^*{\bf w}}\right]=\frac{1}{N}{\bf I}_N$ shows that only $\boldsymbol{\Sigma}_N={\bf I}_N$ yields a null bias. Hence, it appears that improving the conditioning of the RTE by using a non-zero regularization coefficient comes in general at the cost of a higher bias.
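To make the above procedure concrete without implementing Lauricella functions, the $\alpha_i({\bf D})$ of Lemma \[lemma:di\] can simply be estimated by Monte Carlo. The sketch below is our own (sample size, iteration count and starting point are arbitrary choices); it runs the iteration on ${\bf d}^{(t)}$ and returns the eigenvalues $s_{i,\infty}=\lambda_i/d_i$ of $\boldsymbol{\Sigma}_0(\rho)$:

```python
import numpy as np

def sigma0_eigs(lam, rho, n_mc=200_000, n_iter=50, seed=0):
    """Eigenvalues s_i = lam_i / d_i of Sigma_0(rho), with d solving
    1/d_i = rho/lam_i + N(1-rho)*alpha_i(D) by fixed-point iteration;
    alpha_i(D) = E[|w_i|^2 / (w^* D w)] is estimated by Monte Carlo."""
    rng = np.random.default_rng(seed)
    lam = np.asarray(lam, dtype=float)
    N = lam.size
    # |w_i|^2 for a standard complex Gaussian is Exp(1): real and imaginary
    # parts are N(0, 1/2), so |w_i|^2 = 0.5*(z1^2 + z2^2) with z ~ N(0, 1)
    W2 = 0.5 * (rng.standard_normal((n_mc, N)) ** 2
                + rng.standard_normal((n_mc, N)) ** 2)
    d = np.ones(N)
    for _ in range(n_iter):
        denom = W2 @ d                       # samples of w^* D w
        alpha = (W2 / denom[:, None]).mean(axis=0)
        d = 1.0 / (rho / lam + N * (1 - rho) * alpha)
    return lam / d
```

For $\boldsymbol{\Sigma}_N={\bf I}_N$ the output is, up to Monte Carlo noise, the all-ones vector; any other spectrum is visibly distorted, illustrating the bias just discussed.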
Second order statistics in the large-$n$ regime {#sec:second_order}
===============================================
The previous section establishes the convergence of the RTE to the limiting deterministic matrix $\boldsymbol{\Sigma}_0(\rho)$. In the following, for readability, $\boldsymbol{\Sigma}_0(\rho)$ will simply be denoted $\boldsymbol{\Sigma}_0$. The convergence holds in the almost sure sense, and can help infer the asymptotic limit of any functional of the RTE. More formally, for any functional $f$ continuous around $\boldsymbol{\Sigma}_0$, $f(\hat{\bf C}_N(\rho))$ converges almost surely to $f(\boldsymbol{\Sigma}_0)$. While this result can be used to understand the convergence of inference methods using RTEs, it becomes of little help when one needs a deeper understanding of their fluctuations, a prerequisite that arises in many detection applications. This motivates the present section, which aims at establishing a Central Limit Theorem (CLT) for the RTE.
It is worth noticing that the scope of applicability of the results obtained in the large-$n$ regime is much wider than that of the large-$n,N$ regime. As a matter of fact, using the delta method [@vaart], our result can help obtain the CLT for any smooth functional of the RTE. We believe that this can facilitate the design of inference methods using RTEs. Although the treatments of the two regimes take different directions, they share the common denominator of relying on an intermediate random equivalent for $\hat{\bf C}_N(\rho)$, be it $\tilde{\boldsymbol{\Sigma}}(\rho)$ or $\hat{\bf S}_N(\rho)$ (see Theorem \[th:large\_nN\]). It is thus easy to convince oneself that, in order to derive the CLT for $\hat{\bf C}_N(\rho)$, a CLT for $\tilde{\boldsymbol{\Sigma}}(\rho)$ is required.
We denote in the sequel by $\boldsymbol{\delta}$ and $\tilde{\boldsymbol{\delta}}$ the quantities $\boldsymbol{\delta}={\rm vec}(\hat{\bf C}_N(\rho))-{\rm vec}(\boldsymbol{\Sigma}_0)$ and $\tilde{\boldsymbol{\delta}}={\rm vec}(\tilde{\boldsymbol{\Sigma}}(\rho))-{\rm vec}(\boldsymbol{\Sigma}_0)$, and consider the derivation of the CLT first for $\tilde{\boldsymbol{\delta}}$ and then for $\boldsymbol{\delta}$. We will particularly prove that $\boldsymbol{\delta}$ and $\tilde{\boldsymbol{\delta}}$ behave in the large-$n$ regime as Gaussian random vectors that can be fully characterized by their covariance matrices $\mathbb{E}\left[\boldsymbol{\delta}\boldsymbol{\delta}^*\right]$ and $\mathbb{E}[\boldsymbol{\tilde{\delta}}\boldsymbol{\tilde{\delta}}^*]$. Since in many signal processing applications the focus might be put on the second-order statistics of the real and imaginary parts of $\boldsymbol{\delta}$ and $\tilde{\boldsymbol{\delta}}$, we additionally provide expressions for the pseudo-covariance matrices $\mathbb{E}\left[\boldsymbol{\delta}\boldsymbol{\delta}^{\mbox{\tiny T}}\right]$ and $\mathbb{E}[\boldsymbol{\tilde{\delta}}\boldsymbol{\tilde{\delta}}^{\mbox{\tiny T}}]$ which, together with the covariance matrices, suffice to fully characterize the fluctuations of the vectors $\left[\Re \boldsymbol{\delta}^{\mbox{\tiny T}},\Im \boldsymbol{\delta}^{\mbox{\tiny T}}\right]^{\mbox{\tiny T}}$ and $[\Re \tilde{\boldsymbol{\delta}}^{\mbox{\tiny T}},\Im \tilde{\boldsymbol{\delta}}^{\mbox{\tiny T}}]^{\mbox{\tiny T}}$.
We will start by handling the fluctuations of $\tilde{\boldsymbol{\delta}}$. To this end, we need first to work out the expression of $\tilde{\boldsymbol{\Sigma}}(\rho)$. Recall that $\tilde{\boldsymbol{\Sigma}}(\rho)$ is given by: $$\tilde{\boldsymbol{\Sigma}}(\rho)=\frac{N(1-\rho)}{n}\sum_{i=1}^n \frac{{\bf x}_i{\bf x}_i^*}{{\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_i}+\rho {\bf I}_N.$$ Therefore, $$\begin{aligned}
\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\tilde{\boldsymbol{\Sigma}}(\rho)\boldsymbol{\Sigma}_0^{-\frac{1}{2}}-{\bf I}_N&=\frac{N(1-\rho)}{n}\sum_{i=1}^n \frac{\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\boldsymbol{\Sigma}_N^{\frac{1}{2}}{\bf w}_i{\bf w}_i^*\boldsymbol{\Sigma}_N^{\frac{1}{2}}\boldsymbol{\Sigma}_0^{-\frac{1}{2}}}{{\bf w}_i^*\boldsymbol{\Sigma}_N^{\frac{1}{2}}{\boldsymbol{\Sigma}_0^{-1}}\boldsymbol{\Sigma}_N^{\frac{1}{2}}{\bf w}_i}\\
&+\rho \boldsymbol{\Sigma}_0^{-1}-{\bf I}_N\end{aligned}$$ Using the eigenvalue decomposition of $\boldsymbol{\Sigma}_N^{\frac{1}{2}}\boldsymbol{\Sigma}_0^{-1}\boldsymbol{\Sigma}_N^{\frac{1}{2}}={\bf U}{\bf D}{\bf U}^*$ and denoting $\tilde{\bf w}_i={\bf U}^*{\bf w}_i$, we thus obtain: $$\begin{aligned}
{\bf U}^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\tilde{\boldsymbol{\Sigma}}(\rho)\boldsymbol{\Sigma}_0^{-\frac{1}{2}}{\bf U}-{\bf I}_N&=\frac{N(1-\rho)}{n}\sum_{i=1}^n \frac{{\bf D}^{\frac{1}{2}}\tilde{\bf w}_i\tilde{\bf w}_i^*{\bf D}^{\frac{1}{2}}}{\tilde{\bf w}_i^*{\bf D}\tilde{\bf w}_i}\\
&+\rho {\bf U}^*\boldsymbol{\Sigma}_0^{-1}{\bf U}-{\bf I}_N.\end{aligned}$$ From the characterization of ${\boldsymbol{\Sigma}_0}$ provided in the previous section, we can easily check that: $$N(1-\rho) \mathbb{E}\left[\frac{{\bf D}^{\frac{1}{2}}\tilde{\bf w}\tilde{\bf w}^*{\bf D}^{\frac{1}{2}}}{\tilde{\bf w}^*{\bf D}\tilde{\bf w}}\right] ={\bf I}_N-\rho {\bf U}^*\boldsymbol{\Sigma}_0^{-1}{\bf U}$$ Therefore, $$\begin{aligned}
&{\bf U}^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\tilde{\boldsymbol{\Sigma}}(\rho)\boldsymbol{\Sigma}_0^{-\frac{1}{2}}{\bf U}-{\bf I}_N\\
&=\frac{N(1-\rho)}{n}\sum_{i=1}^n \left[\frac{{\bf D}^{\frac{1}{2}}\tilde{\bf w}_i\tilde{\bf w}_i^*{\bf D}^{\frac{1}{2}}}{\tilde{\bf w}_i^*{\bf D}\tilde{\bf w}_i}
-\mathbb{E}\left[\frac{{\bf D}^{\frac{1}{2}}\tilde{\bf w}\tilde{\bf w}^*{\bf D}^{\frac{1}{2}}}{\tilde{\bf w}^*{\bf D}\tilde{\bf w}}\right]\right].\label{eq:latter}\end{aligned}$$ From (\[eq:latter\]), it appears that the asymptotic distribution of $[\Re\boldsymbol{\tilde{\delta}}^{\mbox{\tiny T}},\Im\boldsymbol{\tilde{\delta}}^{\mbox{\tiny T}}]^{\mbox{\tiny T}}$ is Gaussian and can thus be fully characterized by its asymptotic covariance and pseudo-covariance matrices. Using (\[eq:latter\]), it is easy to see that it suffices to determine the covariance and pseudo-covariance matrices of: $$\frac{1}{n}\sum_{i=1}^n \frac{{\rm vec}(\tilde{\bf w}_i\tilde{\bf w}_i^*)}{\tilde{\bf w}_i^*{\bf D}\tilde{\bf w}_i}-\mathbb{E}\left[\frac{{\rm vec}(\tilde{\bf w}\tilde{\bf w}^*)}{\tilde{\bf w}^*{\bf D}\tilde{\bf w}}\right].$$ These quantities involve the following set of scalars: $$\beta_{i,j}=\mathbb{E}\left[\frac{|w_i|^2|w_j|^2}{\left({\bf w}^*{\bf D}{\bf w}\right)^2}\right], \hspace{0.2cm} i,j=1,\cdots,N,$$ for which closed-form expressions need to be derived. This is the objective of the following technical lemma, which is of independent interest:
\[lemma:beta\] Let ${\bf w}=\left[w_1,\cdots,w_N\right]^{\mbox{\tiny T}}$ be a standard complex Gaussian vector and ${\bf D}={\rm diag}(d_1,\cdots,d_N)$ be a diagonal matrix with positive diagonal elements. Consider $\beta_{i,j}$ as above. Then the $\beta_{i,j}$ are given, for $i=j$ and $i\neq j$, by the expressions (\[eq:betaii\]), (\[eq:betaij\]) and (\[eq:betaji\]) at the top of the next page.
$$\begin{aligned}
\beta_{i,i}&=\frac{1}{2^{N-1}N(N+1)}\frac{1}{d_i^2\prod_{k=1}^N d_k} F_{D}^{(N)}\left(N,1\cdots,1,\underset{\substack{\uparrow \\ i\textnormal{-th} \\ \textnormal{position}}}{3},1,\cdots,1,N+2,\frac{d_1-\frac{1}{2}}{d_1},\cdots,\frac{d_N-\frac{1}{2}}{d_N}\right) \label{eq:betaii}\\
\beta_{i,j}&=\frac{1}{2^{N}N(N+1)}\frac{1}{d_i d_j\prod_{k=1}^N d_k}F_{D}^{(N)}\left(N,1\cdots,1,\underset{\substack{\uparrow \\ i\textnormal{-th} \\ \textnormal{position}}}{2},1,\cdots,1,\underset{\substack{\uparrow \\ j\textnormal{-th} \\ \textnormal{position}}}{2},1\cdots,1,N+2,\frac{d_1-\frac{1}{2}}{d_1},\cdots,\frac{d_N-\frac{1}{2}}{d_N}\right), \hspace{0.1cm} i<j\label{eq:betaij}\\
\beta_{i,j}&=\beta_{j,i}, \hspace{0.1cm} i>j\label{eq:betaji}\end{aligned}$$
------------------------------------------------------------------------
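These closed-form expressions can be cross-checked by plain Monte Carlo (our own sketch below, not from the paper). Two handy sanity checks: $\sum_{i,j}d_id_j\beta_{i,j}=1$ holds exactly, since the numerators then sum to $({\bf w}^*{\bf D}{\bf w})^2$; and for ${\bf D}={\bf I}_N$ the weights $|w_i|^2/\|{\bf w}\|^2$ are Dirichlet$(1,\cdots,1)$ distributed, so that $\beta_{i,i}=\frac{2}{N(N+1)}$ and $\beta_{i,j}=\frac{1}{N(N+1)}$ for $i\neq j$.

```python
import numpy as np

def beta_mc(d, n_mc=200_000, seed=1):
    """Monte Carlo estimate of beta_{ij} = E[|w_i|^2 |w_j|^2 / (w^* D w)^2]
    for a standard complex Gaussian w and D = diag(d)."""
    rng = np.random.default_rng(seed)
    d = np.asarray(d, dtype=float)
    N = d.size
    W2 = 0.5 * (rng.standard_normal((n_mc, N)) ** 2
                + rng.standard_normal((n_mc, N)) ** 2)   # |w_i|^2 ~ Exp(1)
    denom = (W2 @ d) ** 2                                # samples of (w^* D w)^2
    return np.einsum('ti,tj->ij', W2 / denom[:, None], W2) / n_mc
```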
With this result at hand, the next Lemma follows immediately:
Let ${\bf D}$ be an $N\times N$ diagonal matrix with positive diagonal elements. Consider $\tilde{\bf w}_1,\cdots,\tilde{\bf w}_n$, $n$ independent complex Gaussian random vectors with zero mean and covariance ${\bf I}_N$. Then, $\sqrt{n}\left(\frac{1}{n}\sum_{i=1}^n \frac{{\rm vec}(\tilde{\bf w}_i\tilde{\bf w}_i^*)}{\tilde{\bf w}_i^*{\bf D}\tilde{\bf w}_i}-\mathbb{E}\left[\frac{{\rm vec}(\tilde{\bf w}\tilde{\bf w}^*)}{\tilde{\bf w}^*{\bf D}\tilde{\bf w}}\right]\right)$ converges to a multivariate Gaussian distribution with covariance ${\bf B}({\bf D})$ and pseudo-covariance ${\bf G}({\bf D})$ given by: $$\begin{aligned}
{\bf B}({\bf D})&=\tilde{\bf B}({\bf D})-{\rm vec}(\boldsymbol{\Xi}){\rm vec}(\boldsymbol{\Xi})^{\mbox{\tiny T}} \label{eq:B}\\
{\bf G}({\bf D})&=\tilde{\bf G}({\bf D})-{\rm vec}(\boldsymbol{\Xi}){\rm vec}(\boldsymbol{\Xi})^{\mbox{\tiny T}} \label{eq:G}\end{aligned}$$ where $$\begin{aligned}
\tilde{\bf B}({\bf D})&=\mathbb{E}\left[\frac{{\rm vec}(\tilde{\bf w}\tilde{\bf w}^*)\left({\rm vec}(\tilde{\bf w}\tilde{\bf w}^*)\right)^{*}}{(\tilde{\bf w}^*{\bf D}\tilde{\bf w})^2}\right]\\
\tilde{\bf G}({\bf D})&=\mathbb{E}\left[\frac{{\rm vec}(\tilde{\bf w}\tilde{\bf w}^*)\left({\rm vec}(\tilde{\bf w}\tilde{\bf w}^*)\right)^{\mbox{\tiny T}}}{(\tilde{\bf w}^*{\bf D}\tilde{\bf w})^2}\right]\\
\boldsymbol{\Xi}({\bf D})&={\rm diag}\left(\alpha_1({\bf D}),\cdots,\alpha_N({\bf D})\right)\end{aligned}$$ Furthermore, $\tilde{\bf B}$ and $\tilde{\bf G}$ are composed of $N^2$ blocks of $N\times N$ matrices, i.e., $\tilde{\bf B}({\bf D})=\begin{bmatrix}
\tilde{\bf B}_{1,1} &\cdots& \tilde{\bf B}_{1,N}\\
&\ddots & \\
\tilde{\bf B}_{N,1}&\cdots & \tilde{\bf B}_{N,N}
\end{bmatrix}
$, $\tilde{\bf G}({\bf D})=\begin{bmatrix}
\tilde{\bf G}_{1,1} &\cdots& \tilde{\bf G}_{1,N}\\
&\ddots & \\
\tilde{\bf G}_{N,1}&\cdots & \tilde{\bf G}_{N,N}
\end{bmatrix}
$ where: $$\begin{aligned}
\tilde{{\bf B}}_{i,i}&={\rm diag}\left(\beta_{i,1},\cdots,\beta_{i,N}\right)\\
\left[\tilde{{\bf B}}_{i,j}\right]_{k,\ell}&=1_{\left\{k=i,\ell=j\right\}}\beta_{i,j}, \hspace{0.1cm} i\neq j\\
\left[\tilde{{\bf G}}_{i,j}\right]_{k,\ell}&=1_{\left\{k=i,\ell=j\right\}}\beta_{i,j}+1_{\left\{k=j,\ell=i\right\}}\beta_{i,j}.
\end{aligned}$$ \[lemma:covariance\]
Equipped with Lemma \[lemma:covariance\], we are now in a position to state the CLT for $\tilde{\boldsymbol{\Sigma}}(\rho)$; its proof is omitted, being a direct consequence of Lemma \[lemma:covariance\]:
Let $\tilde{\boldsymbol{\Sigma}}(\rho)$ be given by wherein observations ${\bf x}_1,\cdots,{\bf x}_n$ are drawn according to Assumption \[ass:model\]. Consider $\boldsymbol{\Sigma}_N={\bf U}\boldsymbol{\Lambda}_N{\bf U}^*$ the eigenvalue decomposition of $\boldsymbol{\Sigma}_N$. Denote by ${\bf D}$ the diagonal matrix whose diagonal elements are solutions to the system of equations . Then, in the asymptotic large-$n$ regime, $\sqrt{n}\tilde{\boldsymbol{\delta}}=\sqrt{n}\left({\rm vec}(\tilde{\boldsymbol{\Sigma}}{(\rho)})-{\rm vec}(\boldsymbol{\Sigma}_0)\right)$ behaves as a zero-mean Gaussian distributed vector with covariance: $$\tilde{\bf M}_1=N^2(1-\rho)^2\left(\overline{\bf U}\boldsymbol{\Lambda}_N^{\frac{1}{2}}\otimes {\bf U}\boldsymbol{\Lambda}_N^{\frac{1}{2}}\right){\bf B}({\bf D})\left(\boldsymbol{\Lambda}_N^{\frac{1}{2}}{\bf U}^{\mbox{\tiny T}}\otimes \boldsymbol{\Lambda}_N^{\frac{1}{2}}{\bf U}^*\right)$$ and pseudo-covariance: $$\tilde{\bf M}_2=N^2(1-\rho)^2\left(\overline{\bf U}\boldsymbol{\Lambda}_N^{\frac{1}{2}}\otimes {\bf U}\boldsymbol{\Lambda}_N^{\frac{1}{2}}\right)\boldsymbol{\bf G}({\bf D})\left(\boldsymbol{\Lambda}_N^{\frac{1}{2}}{\bf U}^*\otimes \boldsymbol{\Lambda}_N^{\frac{1}{2}}{\bf U}^{\mbox{\tiny T}}\right).$$ where ${\bf B}({\bf D})$ and ${\bf G}({\bf D})$ are given by and of Lemma \[lemma:covariance\]. \[th:clt\_tilde\]
Now that the fluctuations of $\tilde{\boldsymbol{\Sigma}}(\rho)$ have been determined, we are in a position to derive the asymptotic distribution of ${\rm vec}(\hat{\bf C}_N(\rho))$. The very recent results in [@couillet-kammoun-14], which establish equality between the fluctuations of bilinear forms of $\hat{\bf C}_N(\rho)$ and those of its random equivalent $\hat{\bf S}_N(\rho)$ in the large-$n,N$ regime, might lead us to expect a similar result to hold in the large-$n$ regime. As we will show in the following theorem, contrary to this first intuition, the asymptotic distribution of ${\rm vec}(\hat{\bf C}_N(\rho))$ is different from that of ${\rm vec}(\tilde{\boldsymbol{\Sigma}}(\rho))$, even though the latter plays a central role in facilitating its analytical derivation.
Under the same setting of Theorem \[th:clt\_tilde\], define $\tilde{\bf F}$ the $N^2\times N^2$ matrix: $$\tilde{\bf F}=N(1-\rho)\left(\overline{\bf U}{\bf D}^{\frac{1}{2}}\otimes {\bf U}{\bf D}^{\frac{1}{2}}\right)\tilde{\bf B}({\bf D})\left({\bf D}^{\frac{1}{2}}{\bf U}^{\mbox{\tiny T}}\otimes {\bf D}^{\frac{1}{2}}{\bf U}^*\right)$$ with $\tilde{\bf B}({\bf D})$ defined in Lemma \[lemma:covariance\]. Consider $\hat{\bf C}_N(\rho)$ the robust scatter estimator in . Then, in the large-$n$ asymptotic regime, $\sqrt{n}\boldsymbol{\delta}=\sqrt{n}\left({\rm vec}(\hat{\bf C}_N(\rho))-{\rm vec}(\boldsymbol{\Sigma}_0)\right)$ behaves as a zero-mean Gaussian-distributed vector with covariance: $$\begin{aligned}
{\bf M}_1&=\left(\left(\boldsymbol{\Sigma}_0^{\frac{1}{2}}\right)^{\mbox{\tiny T}}\otimes \boldsymbol{\Sigma}_0^{\frac{1}{2}}\right)({\bf I}_{N^2}-\tilde{\bf F})^{-1}\left(\left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)^{\mbox{\tiny T}}\otimes \boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)\tilde{\bf M}_1\\
&\times\left(\left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)^{\mbox{\tiny T}}\otimes \boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)({\bf I}_{N^2}-\tilde{\bf F})^{-1}\left(\left(\boldsymbol{\Sigma}_0^{\frac{1}{2}}\right)^{\mbox{\tiny T}}\otimes \boldsymbol{\Sigma}_0^{\frac{1}{2}}\right)\end{aligned}$$ and pseudo-covariance: $$\begin{aligned}
{\bf M}_2&=\left(\left(\boldsymbol{\Sigma}_0^{\frac{1}{2}}\right)^{\mbox{\tiny T}}\otimes \boldsymbol{\Sigma}_0^{\frac{1}{2}}\right)({\bf I}_{N^2}-\tilde{\bf F})^{-1}\left(\left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)^{\mbox{\tiny T}}\otimes \boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)\tilde{\bf M}_2\\
&\times\left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\otimes \left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)^{\mbox{\tiny T}}\right)({\bf I}_{N^2}-\tilde{\bf F}^{\mbox{\tiny T}})^{-1}\left(\boldsymbol{\Sigma}_0^{\frac{1}{2}}\otimes \left(\boldsymbol{\Sigma}_0^{\frac{1}{2}}\right)^{\mbox{\tiny T}}\right).\end{aligned}$$ \[th:clt\]
The proof is deferred to Appendix \[app:clt\].
Numerical results
=================
In all our simulations, we consider the case where ${\bf x}_1,\cdots,{\bf x}_n$ are independent zero-mean Gaussian random vectors with covariance matrix $\boldsymbol{\Sigma}_N$ of Toeplitz form: $$\left[\boldsymbol{\Sigma}_N\right]_{i,j}=\left\{\begin{array}{ll}b^{j-i} &\hspace{0.1cm} i\leq j\\
\left(b^{i-j}\right)^* &\hspace{0.1cm}i>j
\end{array}\right., \hspace{0.9cm}|b|\in\left]0,1\right[.
\label{eq:CN}$$
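For reference, this covariance model is straightforward to instantiate numerically. The sketch below (Python; the helper names are ours and not part of the paper's simulation code) builds the Toeplitz matrix and draws $n$ zero-mean complex Gaussian observations with that covariance.

```python
import numpy as np

def toeplitz_cov(N, b):
    """Hermitian Toeplitz matrix with [Sigma]_{ij} = b^(j-i) for i <= j."""
    c = np.asarray([complex(b) ** k for k in range(N)])
    S = np.empty((N, N), dtype=complex)
    for i in range(N):
        for j in range(N):
            S[i, j] = c[j - i] if j >= i else np.conj(c[i - j])
    return S

def sample_gaussian(Sigma, n, rng):
    """Columns are n i.i.d. CN(0, Sigma) vectors, drawn via a Cholesky factor."""
    N = Sigma.shape[0]
    L = np.linalg.cholesky(Sigma)
    w = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
    return L @ w
```

Since $|b|<1$, the matrix is positive definite, so the Cholesky factorization always exists.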
Which regime is expected to be more accurate
--------------------------------------------
In order to study the behavior of the RTE, assumptions letting the number of observations and/or their size grow to infinity are essential for tractability. The behavior of the RTE has been studied under two concurrent asymptotic regimes, namely the large-$n$ regime, which underlies all the derivations of this paper, and the large-$n,N$ regime recently considered in [@couillet-kammoun-14]. Given that the scope of the results derived in the large-$n,N$ regime has thus far been limited to the handling of bilinear forms, practitioners might wonder whether, for their specific scenario, further investigation of this regime would produce more accurate results. In this first experiment, we attempt to answer this question by noticing that both regimes share the common feature of producing random matrices that act as equivalents to the robust scatter estimator. The accuracy of each regime is thus evaluated by measuring the closeness of the robust scatter estimator to the random equivalent proposed by each regime. This closeness is measured using the following metrics: $$\mathcal{E}_{n}\triangleq\frac{1}{N}\mathbb{E}\left\|\hat{\bf C}(\rho)-\tilde{\boldsymbol{\Sigma}}(\rho)\right\|_{{\rm Fro}}^2$$ and $$\mathcal{E}_{n,N}\triangleq \frac{1}{N}\mathbb{E}\left\|\hat{\bf C}(\rho)-\hat{\bf S}_N(\rho)\right\|_{{\rm Fro}}^2.$$ Figures \[fig:N4\], \[fig:N16\] and \[fig:N32\] represent these metrics with respect to the ratio $\frac{n}{N}$ for $N=4,16,32$, with $b=0.7$ and $\rho$ set to $0.5$. The region over which the use of the large-$n$ regime is recommended corresponds to the values of $\frac{n}{N}$ for which the $\mathcal{E}_n$ curve lies below the $\mathcal{E}_{n,N}$ one.
From these figures, it appears that, as $N$ increases, the region over which the results derived under the large-$n$ regime are more accurate corresponds to larger values of the ratio $\frac{n}{N}$.
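To reproduce experiments of this kind, the RTE itself can be computed by straightforward fixed-point iteration of its defining equation. The sketch below is a minimal illustration (the function name and stopping rule are ours, and convergence diagnostics are omitted); it assumes an $N\times n$ data matrix whose columns are the observations.

```python
import numpy as np

def rte(X, rho, iters=200, tol=1e-10):
    """Regularized Tyler estimator: iterate
    C <- (1-rho) * (1/n) * sum_i N x_i x_i^* / (x_i^* C^{-1} x_i) + rho I."""
    N, n = X.shape
    C = np.eye(N, dtype=complex)
    for _ in range(iters):
        # x_i^* C^{-1} x_i for all columns at once
        q = np.real(np.sum(np.conj(X) * np.linalg.solve(C, X), axis=0))
        C_new = (1 - rho) * (N / n) * (X / q) @ X.conj().T + rho * np.eye(N)
        done = np.linalg.norm(C_new - C) < tol
        C = C_new
        if done:
            break
    return C
```

With the iterate at hand, a metric such as $\mathcal{E}_n$ is estimated by averaging $\frac{1}{N}\|\hat{\bf C}(\rho)-\tilde{\boldsymbol{\Sigma}}(\rho)\|_{\rm Fro}^2$ over independent draws of the observations.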
![Accuracy of the random equivalent when $N=4$[]{data-label="fig:N4"}](fig_N4_crop.pdf)
![Accuracy of the random equivalent when $N=16$[]{data-label="fig:N16"}](fig_N16_crop.pdf)
![Accuracy of the random equivalent when $N=32$[]{data-label="fig:N32"}](fig_N32_crop.pdf)
Asymptotic bias
---------------
In this section, we assess the bias of the RTE with respect to the population covariance matrix. Since in many radar detection applications the covariance matrix only needs to be estimated up to a scale factor, we define the bias as: $${\rm Bias}=\left\|\mathbb{E}\left[\frac{N}{\operatorname{tr}\left(\boldsymbol{\Sigma}_N^{-1}\hat{\bf C}_N\right)}\boldsymbol{\Sigma}_N^{-1}\hat{\bf C}_N\right]-{\bf I}_N\right\|_{\rm Fro}^2.$$ Since $\frac{N}{\operatorname{tr}\left(\boldsymbol{\Sigma}_N^{-1}\hat{\bf C}_N\right)}\boldsymbol{\Sigma}_N^{-1}\hat{\bf C}_N$ has a bounded spectral norm, the dominated convergence theorem implies that: $${\rm Bias}\xrightarrow[n\to+\infty]{} \left\|\left[\frac{N}{\operatorname{tr}\left(\boldsymbol{\Sigma}_N^{-1}\boldsymbol{\Sigma}_0\right)}\boldsymbol{\Sigma}_N^{-1}\boldsymbol{\Sigma}_0\right]-{\bf I}_N\right\|_{\rm Fro}^2.$$ Figure \[fig:bias\] displays the asymptotic and empirical bias with respect to the Toeplitz coefficient $b$ for $\rho=0.2,0.5,0.9$. We note that the bias is an increasing function of $b$. This is expected since, for small values of $b$, the covariance matrix is close to the identity matrix; the RTE, viewed as a version of the Tyler estimator shrunk toward the identity matrix, then incurs only a small bias.
[pgfplots figure (fig:bias): asymptotic and empirical bias versus the Toeplitz coefficient $b$, one pair of near-coincident curves for each of $\rho=0.2,0.5,0.9$.]
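The asymptotic bias can be approximated numerically: since $\boldsymbol{\Sigma}_0$ is the large-$n$ limit of the RTE, running the fixed-point iteration of the defining equation on a very large sample yields a proxy for $\boldsymbol{\Sigma}_0$, from which the limiting bias expression is evaluated. The sketch below is our own illustration (helper names are ours; the sample size controls the Monte-Carlo approximation error).

```python
import numpy as np

def tyler_limit(Sigma_N, rho, n=50000, seed=0):
    """Monte-Carlo proxy for Sigma_0(rho): RTE fixed point on a large sample."""
    rng = np.random.default_rng(seed)
    N = Sigma_N.shape[0]
    L = np.linalg.cholesky(Sigma_N)
    X = L @ (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
    C = np.eye(N, dtype=complex)
    for _ in range(100):  # plain fixed-point iteration; converges quickly here
        q = np.real(np.sum(np.conj(X) * np.linalg.solve(C, X), axis=0))
        C = (1 - rho) * (N / n) * (X / q) @ X.conj().T + rho * np.eye(N)
    return C

def asymptotic_bias(Sigma_N, Sigma_0):
    """|| N / tr(Sigma_N^{-1} Sigma_0) * Sigma_N^{-1} Sigma_0 - I ||_Fro^2."""
    N = Sigma_N.shape[0]
    R = np.linalg.solve(Sigma_N, Sigma_0)
    R = N / np.real(np.trace(R)) * R
    return float(np.linalg.norm(R - np.eye(N)) ** 2)
```

When $\boldsymbol{\Sigma}_N={\bf I}_N$, the fixed point is ${\bf I}_N$ itself and the bias vanishes; it grows with the Toeplitz coefficient $b$, in line with Figure \[fig:bias\].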
Central Limit Theorem
---------------------
The central limit theorem provided in this paper can help determine the fluctuations of any continuous functional of ${\rm vec}(\hat{\bf C}_N)$. As an application, we consider in this section quadratic forms of the type $\frac{1}{N}{\bf p}^{*}\hat{\bf C}_N^{-1}(\rho){\bf p}$ with $\|{\bf p}\|=1$ (used for instance for detection in array processing problems [@vtree2002oap]), for which the large-$n$ and the large-$n,N$ regimes predict different kinds of fluctuations. As a matter of fact, applying the Delta method [@vaart], one can easily prove that under the large-$n$ regime, $$\begin{aligned}
&{T}_n\triangleq\frac{\sqrt{n}\left(\frac{1}{N}{\bf p}^{*}\hat{\bf C}_N^{-1}(\rho){\bf p}-\frac{1}{N}{\bf p}^{*}\boldsymbol{\Sigma}_0^{-1}(\rho){\bf p}\right)}{\sqrt{\frac{1}{N^2}\left((\boldsymbol{\Sigma}_0^{-1})^{\mbox{\tiny T}}\overline{\bf p}\otimes \boldsymbol{\Sigma}_0^{-1}{\bf p}\right)^{*}{\bf M}_1\left((\boldsymbol{\Sigma}_0^{-1})^{\mbox{ \tiny T}}\overline{\bf p}\otimes \boldsymbol{\Sigma}_0^{-1}{\bf p}\right)}}\\
&\xrightarrow[]{\mathcal{D}}\mathcal{N}(0,1).\end{aligned}$$ On the other hand, using results from [@couillet-kammoun-14], one can prove that under the large-$n,N$ regime, $\frac{\sqrt{n}}{N}{\bf p}^{*}\hat{\bf C}_N^{-1}(\rho){\bf p}$ satisfies: $$\begin{aligned}
&T_{n,N}\triangleq\sqrt{\frac{n}{\sigma_N^2}}\left(\frac{1}{N}{\bf p}^{*}\hat{\bf C}_N^{-1}(\rho){\bf p}-\frac{1}{N}{\bf p}^{*}{\bf Q}_N(\underline{\rho}){\bf p}\right)\xrightarrow[]{\mathcal{D}}\mathcal{N}(0,1)\end{aligned}$$ where: $$\sigma_N^2=\frac{m(-\underline{\rho})^2(1-\underline{\rho})^2\left(\frac{1}{N}{\bf p}^*\boldsymbol{\Sigma}_N{\bf Q}_N^2{\bf p}\right)^2}{{\rho}^2\left(1-cm(-\underline{\rho})^2(1-\underline{\rho})^2\frac{1}{N}\operatorname{tr}\left(\boldsymbol{\Sigma}_N^2{\bf Q}_N^2(\underline{\rho})\right)\right)}$$ where $\underline{\rho}$, $m(-\underline{\rho})$ and ${\bf Q}(\underline{\rho})$ have the same expressions as in [@couillet-kammoun-14], with ${\bf C}_N$ there replaced by $\boldsymbol{\Sigma}_N$. A natural question that arises is which of the two competing results is more reliable for a particular set of values $N$ and $n$. To answer this question, we plot in Figures \[fig:clt\_4\], \[fig:clt\_16\] and \[fig:clt\_32\] the Kolmogorov-Smirnov distance between the empirical distribution functions of $T_n$ and $T_{n,N}$ (obtained over $50\,000$ realizations) and the standard normal distribution, with respect to the ratio $\frac{n}{N}$ when $b=0.7\jmath, \rho=0.5, {\bf p}=\left[1,\cdots,1\right]$ and for $N=4,16,32$. We note that for values of $N$ up to $16$, the results derived under the large-$n$ regime are more accurate for a large range of $n$, while the use of the results from the large-$n,N$ regime seems to be recommended for $N=32$.
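The Kolmogorov-Smirnov distance used in Figures \[fig:clt\_4\]-\[fig:clt\_32\] is simply the supremum gap between the empirical distribution function of the standardized statistic and the standard normal CDF. A self-contained sketch (generic; in our setting the input would be the $50\,000$ realizations of $T_n$ or $T_{n,N}$):

```python
import numpy as np
from math import erf

def ks_to_standard_normal(t):
    """sup_x |F_n(x) - Phi(x)| for a 1-D sample t."""
    t = np.sort(np.asarray(t, dtype=float))
    n = t.size
    # standard normal CDF evaluated at the order statistics
    phi = np.array([0.5 * (1.0 + erf(x / np.sqrt(2.0))) for x in t])
    d_plus = np.max(np.arange(1, n + 1) / n - phi)
    d_minus = np.max(phi - np.arange(0, n) / n)
    return float(max(d_plus, d_minus))
```

For a correctly standardized statistic, this distance shrinks at rate $O(1/\sqrt{m})$ in the number $m$ of realizations, which is the behavior the figures compare across the two regimes.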
![Analysis of the accuracy of the CLT results for $N=4$[]{data-label="fig:clt_4"}](clt_4_crop.pdf)
![Analysis of the accuracy of the CLT results for $N=16$[]{data-label="fig:clt_16"}](clt_16_crop.pdf)
![Analysis of the accuracy of the CLT results for $N=32$[]{data-label="fig:clt_32"}](clt_32_crop.pdf)
Conclusions
===========
This paper focuses on the statistical behavior of the RTE. It is worth noticing that, despite the popularity of the RTE, characterizing its statistical properties had remained unclear until the work in [@couillet-kammoun-14], which shed light on its behavior in the large-$n,N$ regime (the number of observations $n$ and their size $N$ growing simultaneously to infinity). Interestingly, no results were available for the standard large-$n$ regime, in which $N$ is fixed while $n$ goes to infinity; this motivated our work. In particular, we established that the RTE converges, under the large-$n$ regime, to a deterministic matrix which, as expected, differs from the true population covariance matrix. An important feature of this result is that it allows for the computation of the asymptotic bias incurred by the use of the RTE. We also studied the fluctuations of the RTE around its limit and proved that they converge to a multivariate Gaussian distribution with zero mean and a covariance matrix depending on the true population covariance and the regularization parameter. The characterization of these fluctuations is paramount to radar detection applications in which RTEs are used. Finally, numerical simulations were carried out to validate the theoretical results and to assess their accuracy against their counterparts obtained under the large-$n,N$ regime.
Proof of Lemma \[lemma:bounded\_spectral\] {#app:bounded_spectral}
==========================================
[In the following appendices, for readability purposes, the notation $\boldsymbol{\Sigma}_0(\rho)$ (resp. $\tilde{\boldsymbol{\Sigma}}(\rho)$) is simply replaced by $\boldsymbol{\Sigma}_0$ (resp. $\tilde{\boldsymbol{\Sigma}}$). Of course, the dependence of $\boldsymbol{\Sigma}_0$ to $\rho$ is not omitted.]{}
Multiplying both sides of by $\boldsymbol{\Sigma}_N^{-\frac{1}{2}}$ on the left and on the right, we see that $\boldsymbol{\Sigma}_0$ satisfies: $$(1-\rho)\mathbb{E}\left[\frac{{\bf w}{\bf w}^*}{\frac{1}{N}{\bf w}^*\boldsymbol{\Sigma}_N^{\frac{1}{2}}\boldsymbol{\Sigma}_0^{-1}\boldsymbol{\Sigma}_N^{\frac{1}{2}}{\bf w}}\right]+\rho \boldsymbol{\Sigma}_N^{-1}=\boldsymbol{\Sigma}_N^{-\frac{1}{2}}\boldsymbol{\Sigma}_0\boldsymbol{\Sigma}_N^{-\frac{1}{2}},$$ where ${\bf w}$ is zero-mean with covariance matrix ${\bf I}_N$. Define ${\bf A}=\boldsymbol{\Sigma}_N^{-\frac{1}{2}}\boldsymbol{\Sigma}_0\boldsymbol{\Sigma}_N^{-\frac{1}{2}}$. Then, $${\bf A}=(1-\rho)\mathbb{E}\left[\frac{{\bf w}{\bf w}^*}{\frac{1}{N}{\bf w}^*{\bf A}^{-1}{\bf w}}\right]+\rho\boldsymbol{\Sigma}_N^{-1}$$ which, since ${\bf w}^*{\bf A}^{-1}{\bf w}\geq \frac{{\bf w}^*{\bf w}}{\|{\bf A}\|}$ and $\mathbb{E}\left[\frac{N{\bf w}{\bf w}^*}{{\bf w}^*{\bf w}}\right]={\bf I}_N$, yields the following bound for $\|{\bf A}\|$: $$\|{\bf A}\|\leq (1-\rho)\|{\bf A}\|+\frac{\rho}{\lambda_{\rm min}(\boldsymbol{\Sigma}_N)}.$$ Hence, $$\|{\bf A}\| \leq \frac{1}{\lambda_{\rm min}(\boldsymbol{\Sigma}_N)}.
\label{eq:upper_bound}$$ Now, $\|{\bf A}\|$ can be lower-bounded by: $$\begin{aligned}
\|{\bf A}\|&=\max_{\|{\bf x}\|=1} {\bf x}^*\boldsymbol{\Sigma}_N^{-\frac{1}{2}}\boldsymbol{\Sigma}_0\boldsymbol{\Sigma}_N^{-\frac{1}{2}}{\bf x} \nonumber\\
&\stackrel{(a)}{\geq} \|\boldsymbol{\Sigma}_0\| \max_{\|{\bf x}\|=1} {\bf x}^*\boldsymbol{\Sigma}_N^{-\frac{1}{2}}{\bf u}{\bf u}^*\boldsymbol{\Sigma}_N^{-\frac{1}{2}}{\bf x}\nonumber\\
&\geq \|\boldsymbol{\Sigma}_0\| {\bf u}^* \boldsymbol{\Sigma}_N^{-\frac{1}{2}}{\bf u}{\bf u}^*\boldsymbol{\Sigma}_N^{-\frac{1}{2}}{\bf u} \nonumber\\
&\geq \frac{\|\boldsymbol{\Sigma}_0\|}{\|\boldsymbol{\Sigma}_N\|}, \label{eq:lower_bound}\end{aligned}$$ where in $(a)$ ${\bf u}$ is the eigenvector corresponding to the maximum eigenvalue of $\boldsymbol{\Sigma}_0$. Combining and , we thus obtain: $$\|{\boldsymbol{\Sigma}_0}\|\leq \frac{\|\boldsymbol{\Sigma}_N\|}{\lambda_{\rm min}(\boldsymbol{\Sigma}_N)}.$$
Proof of Theorem \[th:first\_order\] {#app:first_order}
====================================
The proof is based on controlling the random elements $d_i(\rho)$ given by: $$d_i(\rho)=\frac{{\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i-{\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_i}{\sqrt{{\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_i}\sqrt{{\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i}}.$$ Recall that, by the SLLN, under the large-$n$ regime, ${\boldsymbol{\Sigma}_0}$ satisfies: $${\boldsymbol{\Sigma}_0}=N(1-\rho)\frac{1}{n}\sum_{i=1}^n \frac{{\bf x}_i{\bf x}_i^*}{{\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_i}+\rho{\bf I}_N+\boldsymbol{\epsilon}_n(\rho),$$ where $\boldsymbol{\epsilon}_n$ is an $N\times N$ matrix whose elements converge almost surely to zero and satisfy $\left[\boldsymbol{\epsilon}_n(\rho)\right]_{i,j}=\mathcal{O}_p(\frac{1}{n})$.
In the sequel, we prove that for any $\kappa>0$, $$\sup_{\rho\in\left[\kappa,1\right]}\max_{1\leq i\leq n}|d_i(\rho)|{\overset{\rm a.s.}{\longrightarrow}}0.$$ For that, we need to work out the differences ${\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i-{\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_i$ for $i=1,\cdots,n$. Using the resolvent identity ${\bf A}^{-1}-{\bf B}^{-1}={\bf A}^{-1}\left({\bf B}-{\bf A}\right){\bf B}^{-1}$ for any $N\times N$ invertible matrices, we obtain: $$\begin{aligned}
&{\bf x}_j^*\hat{\bf C}_N^{-1}(\rho){\bf x}_j-{\bf x}_j^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_j\\
&={\bf x}_j^*\hat{\bf C}_N^{-1}\left[\frac{1-\rho}{n}\sum_{i=1}^n\frac{{\bf x}_i{\bf x}_i^*\left(\frac{1}{N}{\bf x}_i^*\boldsymbol{\Sigma}_0^{-1}{\bf x}_i-\frac{1}{N}{\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i\right)}{\frac{1}{N}{\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_i \frac{1}{N}{\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i}\right.\\
&\left.+\boldsymbol{\epsilon}_n\right.\Bigg] \boldsymbol{\Sigma}_0^{-1}{\bf x}_j\\
&=\frac{1-\rho}{n}\sum_{i=1}^n \frac{{\bf x}_j^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i{\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_jd_i(\rho)}{\sqrt{\frac{1}{N}{\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i\frac{1}{N}{\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_i}}\\
&+{\bf x}_j^*\hat{\bf C}_N^{-1}(\rho)\boldsymbol{\epsilon}_n{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_j.\end{aligned}$$ Hence, $$\begin{aligned}
d_j(\rho)&=\frac{\frac{1-\rho}{n}\sum_{i=1}^n \frac{{\bf x}_j^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i{\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_j d_i(\rho)}{\sqrt{\frac{1}{N}{\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_i\frac{1}{N}{\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i}}}{\sqrt{{\bf x}_j^*\hat{\bf C}_N^{-1}(\rho){\bf x}_j{\bf x}_j^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_j }}\\
&+\frac{{\bf x}_j^*\hat{\bf C}_N^{-1}(\rho)\boldsymbol{\epsilon}_n{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_j}{\sqrt{{\bf x}_j^*\hat{\bf C}_N^{-1}(\rho){\bf x}_j{\bf x}_j^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_j }}.\end{aligned}$$ Let $d_{\rm max}(\rho)=\max_{1\leq j\leq n}|d_j(\rho)|$. By the Cauchy-Schwartz inequality, we thus obtain: $$\begin{aligned}
d_{\rm max}(\rho)&\leq \frac{d_{\rm max}(\rho)}{\sqrt{{\bf x}_j^*\hat{\bf C}_N^{-1}(\rho){\bf x}_j{\bf x}_j^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_j}}\\
&\times \sqrt{\frac{1-\rho}{n}\sum_{i=1}^n\frac{{\bf x}_j^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i{\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_j}{\frac{1}{N}{\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i}}\\
&\times\sqrt{\frac{1-\rho}{n}\sum_{i=1}^n\frac{{\bf x}_j^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_i{\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_j}{\frac{1}{N}{\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_i}}\\
&+\|\hat{\bf C}_N^{-\frac{1}{2}}(\rho)\boldsymbol{\epsilon}_n\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\|.\end{aligned}$$ Therefore, $$\begin{aligned}
d_{\rm max}(\rho)&\leq \frac{d_{\rm max}(\rho)}{\sqrt{{\bf x}_j^*\hat{\bf C}_N^{-1}(\rho){\bf x}_j{\bf x}_j^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_j}}\\
&\times \sqrt{{\bf x}_j^*\hat{\bf C}_N^{-\frac{1}{2}}\left({\bf I}_N-\rho\hat{\bf C}_N^{-1}(\rho)\right)\hat{\bf C}_N^{-\frac{1}{2}}{\bf x}_j}\\
&\times\sqrt{{\bf x}_j^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\left({\bf I}_N-\rho{\boldsymbol{\Sigma}_0^{-1}}\right)\boldsymbol{\Sigma}_0^{-\frac{1}{2}}{\bf x}_j-{\bf x}_j^*\boldsymbol{\Sigma}_0^{-1}\boldsymbol{\epsilon}_n\boldsymbol{\Sigma}_0^{-1}{\bf x}_j}\\
&+\|\hat{\bf C}_N^{-\frac{1}{2}}(\rho)\boldsymbol{\epsilon}_n\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\|.\end{aligned}$$ Using the relation $\left|{\bf x}^*{\bf A}{\bf y}\right|\leq \|{\bf x}\|\|{\bf A}\|\|{\bf y}\|$, we thus obtain: $$\begin{aligned}
&d_{\rm max}(\rho)\leq d_{\rm max}(\rho) \sqrt{\|{\bf I}_N-\rho\hat{\bf C}_N^{-1}(\rho)\|}\\
&\left(\|{\bf I}_N-\rho{\boldsymbol{\Sigma}_0^{-1}}\|-\frac{{\bf x}_j^*\boldsymbol{\Sigma}_0^{-1}\boldsymbol{\epsilon}_n\boldsymbol{\Sigma}_0^{-1}{\bf x}_j}{{\bf x}_j^*\boldsymbol{\Sigma}_0^{-1}{\bf x}_j}\right)^{\frac{1}{2}}+\|\hat{\bf C}_N^{-\frac{1}{2}}(\rho)\boldsymbol{\epsilon}_n\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\|.\end{aligned}$$ Since $\sup_{\rho\in\left[\kappa,1\right)}\|\hat{\bf C}_N^{-\frac{1}{2}}\boldsymbol{\epsilon}_n(\rho)\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\|\leq \frac{1}{\kappa}\sup_{\rho\in\left[\kappa,1\right)}\|\boldsymbol{\epsilon}_n(\rho)\|$ and using the fact that $\|{\bf I}_N-\rho\hat{\bf C}_N^{-1}(\rho)\|\leq 1$, we get: $$\begin{aligned}
d_{\rm max}(\rho) &\leq d_{\rm max}(\rho)\left(\sqrt{\|{\bf I}_N-\rho{\boldsymbol{\Sigma}_0^{-1}}\|}\right.\\
&\left.+\sqrt{\|\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\boldsymbol{\epsilon}_n\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\|}\right) +\frac{1}{\kappa}\|\boldsymbol{\epsilon}_n\|.\end{aligned}$$ Again, as $\|\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\boldsymbol{\epsilon}_n\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\| \leq \frac{\left\|\boldsymbol{\epsilon}_n\right\|}{\kappa}$, we have: $$d_{\rm max}(\rho)\left(1-\sqrt{\|{\bf I}_N-\rho{\boldsymbol{\Sigma}_0^{-1}}\|}-\sqrt{\frac{1}{\kappa}\|\boldsymbol{\epsilon}_n\|}\right) \leq \frac{1}{\kappa}\|\boldsymbol{\epsilon}_n\|.$$ From Lemma \[lemma:bounded\_spectral\], $\left\|{\boldsymbol{\Sigma}_0}\right\| \leq \frac{\|\boldsymbol{\Sigma}_N\|}{\lambda_{\rm min}(\boldsymbol{\Sigma}_N)}$. Therefore, for $n$ large enough (large enough, that is, for the parenthesized factor on the left-hand side above to be positive), $$d_{\rm max}(\rho) \leq \frac{\frac{1}{\kappa}\|\boldsymbol{\epsilon}_n\|}{1-\sqrt{1-\rho\frac{\lambda_{\rm min}(\boldsymbol{\Sigma}_N)}{\|\boldsymbol{\Sigma}_N\|}}-\sqrt{\frac{1}{\kappa}\|\boldsymbol{\epsilon}_n\|}}.$$ Taking the supremum over $\rho\in\left[\kappa,1\right)$, we finally obtain: $$\sup_{\rho\in\left[\kappa,1\right)} d_{\rm max}(\rho) \leq \frac{\frac{1}{\kappa}\|\boldsymbol{\epsilon}_n\|}{1-\sqrt{1-\kappa\frac{\lambda_{\rm min}(\boldsymbol{\Sigma}_N)}{\|\boldsymbol{\Sigma}_N\|}}-\sqrt{\frac{1}{\kappa}\|\boldsymbol{\epsilon}_n\|}},$$ thereby showing that $d_{\rm max}(\rho){\overset{\rm a.s.}{\longrightarrow}}0$ and $d_{\rm max}(\rho)=\mathcal{O}_p\left(\frac{1}{n}\right)$. Now that the control of $d_{\rm max}(\rho)$ is complete, we are in a position to handle the difference $\hat{\bf C}_N(\rho)-{\boldsymbol{\Sigma}_0}$. We have: $$\begin{aligned}
\hat{\bf C}_N(\rho)-{\boldsymbol{\Sigma}_0}&=\frac{1-\rho}{n}\sum_{i=1}^n \frac{{\bf x}_i{\bf x}_i^*\left({\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_i-{\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i\right)}{{\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i\frac{1}{N}{\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_i}\\
&-\boldsymbol{\epsilon}_n(\rho)\\
&=\frac{1-\rho}{n}\sum_{i=1}^n \frac{-{\bf x}_i{\bf x}_i^*d_i(\rho)}{\sqrt{\frac{1}{N}{\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i}\sqrt{\frac{1}{N}{\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_i}}\\
&-\boldsymbol{\epsilon}_n(\rho).\end{aligned}$$ Therefore, $$\begin{aligned}
&\|\hat{\bf C}_N(\rho)-{\boldsymbol{\Sigma}_0}\|\\
&\leq d_{\rm max}(\rho)\left\|\frac{1-\rho}{n}\sum_{i=1}^n\frac{{\bf x}_i{\bf x}_i^*}{\sqrt{\frac{1}{N}{\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i}\sqrt{\frac{1}{N}{\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_i}}\right\|\\
&+\left\|\boldsymbol{\epsilon}_n(\rho)\right\|.\end{aligned}$$ By the Cauchy-Schwartz inequality, we get: $$\begin{aligned}
\|\hat{\bf C}_N(\rho)-{\boldsymbol{\Sigma}_0}\|&\leq d_{\rm max}(\rho) \left\|\frac{1-\rho}{n}\sum_{i=1}^n \frac{{\bf x}_i{\bf x}_i^*}{\frac{1}{N}{\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i}\right\|^{\frac{1}{2}}\\
&\times \left\|\frac{1-\rho}{n}\sum_{i=1}^n \frac{{\bf x}_i{\bf x}_i^*}{\frac{1}{N}{\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_i}\right\|^{\frac{1}{2}}+\left\|\boldsymbol{\epsilon}_n(\rho)\right\|\end{aligned}$$ or equivalently: $$\begin{aligned}
&\|\hat{\bf C}_N(\rho)-{\boldsymbol{\Sigma}_0}\|\leq d_{\rm max}(\rho) \left\|\hat{\bf C}_N-\rho{\bf I}_N\right\|^{\frac{1}{2}}\left\|\boldsymbol{\Sigma}_0-\rho{\bf I}_N-\boldsymbol{\epsilon}_n\right\|^{\frac{1}{2}}\\
& + \left\|\boldsymbol{\epsilon}_n(\rho)\right\|.\end{aligned}$$ Since $d_{\rm max}(\rho){\overset{\rm a.s.}{\longrightarrow}}0$, to conclude, we need to check that the spectral norm of $\hat{\bf C}_N$ is almost surely bounded. The proof is almost the same as that proposed in Lemma \[lemma:bounded\_spectral\] to control the spectral norm of $\boldsymbol{\Sigma}_0$ with the slight difference that the expectation operator is replaced by the empirical average, and using additionally the fact that $\frac{1}{n}\sum_{i=1}^n \frac{{\bf w}_i{\bf w}_i^*}{{\bf w}_i^*{\bf w}_i}{\overset{\rm a.s.}{\longrightarrow}}\frac{1}{N}{\bf I}_N$. Details are thus omitted.
Proof of Lemma \[lemma:di\] {#app:di}
===========================
The proof of Lemma \[lemma:di\] is based on the same technique as in [@provost-94]. Using the relation $\frac{1}{\alpha}=\int_0^{+\infty} e^{-\alpha t} dt$, we write $\mathbb{E}\left[\frac{|w_i|^2}{{\bf w}^*{\bf D}{\bf w}}\right]$ as: $$\begin{aligned}
&\mathbb{E}\left[\frac{|w_i|^2}{{\bf w}^*{\bf D}{\bf w}}\right]=\mathbb{E}\left[|w_i|^2\int_0^{+\infty}e^{-t\left(d_i |w_i|^2+\sum_{j=1,j\neq i}^N |w_j|^2 d_j\right)}dt\right] \\
&=\int_0^{+\infty}\int_0^{+\infty}\frac{1}{2^N}e^{-t d_i u} u \exp({-u/2})\int_0^{+\infty}\cdots\int_0^{+\infty} \\
&\times \exp\left({-t\displaystyle{\sum_{j=1,j\neq i} u_j d_j}}\right)\prod_{j=1,j\neq i}^Ne^{-u_j/2}du_1\cdots du_{N-1} du dt\\
&=\int_0^{\infty} \frac{1}{2^N}\frac{1}{(\frac{1}{2}+td_i)} \prod_{j=1}^N \frac{1}{\frac{1}{2}+td_j}dt.\end{aligned}$$ Conducting the change of variable $t=\frac{1}{v}-1$, we eventually obtain: $$\mathbb{E}\left[\frac{|w_i|^2}{{\bf w}^*{\bf D}{\bf w}}\right]=\int_0^1 \frac{1}{2^N}\frac{v^{N-1}}{d_i\prod_{j=1}^N d_j (1-v\frac{d_i-\frac{1}{2}}{d_i})}\prod_{j=1}^N \frac{1}{1-v\frac{d_j-\frac{1}{2}}{d_j}}dv.$$ We finally end the proof by using the integral representation of the Lauricella’s type $D$ hypergeometric function.
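The closed form of Lemma \[lemma:di\] admits a quick numerical sanity check. Since the ratio $|w_i|^2/({\bf w}^*{\bf D}{\bf w})$ is scale invariant, the integral above is equivalent (after a linear change of variable) to $\int_0^{\infty}\frac{1}{1+td_i}\prod_{j=1}^N\frac{1}{1+td_j}\,dt$, which the sketch below (helper names are ours) compares against a Monte-Carlo estimate of $\mathbb{E}\left[|w_i|^2/({\bf w}^*{\bf D}{\bf w})\right]$.

```python
import numpy as np

def alpha_i(d, i, grid=200000):
    """1-D quadrature of int_0^inf (1+t d_i)^{-1} prod_j (1+t d_j)^{-1} dt,
    mapped to (0,1) via t = v/(1-v)."""
    v = np.linspace(0.0, 1.0, grid + 1)[1:-1]
    t = v / (1.0 - v)
    f = 1.0 / ((1.0 - v) ** 2 * (1.0 + t * d[i]))  # Jacobian times extra factor
    for dj in d:
        f = f / (1.0 + t * dj)
    return float(np.trapz(f, v))

# Monte-Carlo counterpart with i.i.d. complex Gaussian entries
d = np.array([0.5, 1.0, 2.0])
rng = np.random.default_rng(3)
W = rng.standard_normal((3, 400000)) + 1j * rng.standard_normal((3, 400000))
denom = np.real(np.sum(np.conj(W) * (d[:, None] * W), axis=0))  # w^* D w
alphas_mc = [float(np.mean(np.abs(W[i]) ** 2 / denom)) for i in range(3)]
```

A useful by-product is the exact identity $\sum_i d_i\,\alpha_i({\bf D})=1$, which follows by integrating the derivative of $\prod_j(1+td_j)^{-1}$.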
Proof of Lemma \[lemma:beta\]
=============================
Again, the proof of the results in Lemma \[lemma:beta\] follows the same lines as in Appendix \[app:di\]. We will only detail the derivations for the expressions of $\beta_{i,i},i=1,\cdots,N$. The same kind of calculations can be used to derive those of $\beta_{i,j}, i\neq j$. Using the relation $\frac{1}{\alpha^2}=\int_0^{\infty}te^{-\alpha t}dt$, we write $\beta_{i,i}=\mathbb{E}\left[\frac{|w_i|^4}{({\bf w}^*{\bf D}{\bf w})^2}\right]$ as: $$\begin{aligned}
\beta_{i,i}& = \mathbb{E}\left[|w_i|^4\int_0^{\infty} te^{-t\left(d_i|w_i|^2+\sum_{j=1,j\neq i}^N d_j|w_j|^2\right)}dt\right]\\
&=\int_0^{\infty}\int_0^{\infty}\frac{t}{2^N}u^2e^{-td_iu}\exp(-u/2)\int_0^{\infty}\cdots \int_0^{\infty} \\
&\times \exp\left(-t\sum_{j=1,j\neq i}^N u_j d_j\right)\prod_{j=1,j\neq i}^N e^{-u_j/2}du_1\cdots du_{N-1}du dt\\
&=\frac{1}{2^{N-1}}\int_0^{\infty}\frac{t}{\left(\frac{1}{2}+td_i\right)^2} \prod_{k=1}^N \frac{1}{\frac{1}{2}+td_k}dt.\end{aligned}$$ Conducting the change of variable $t=\frac{1}{v}-1$, we obtain: $$\begin{aligned}
\beta_{i,i}&=\frac{1}{2^{N-1}}\int_0^1\frac{(1-v)v^{N-1}dv}{d_i^2\prod_{k=1}^N d_k\left(1-\frac{v(d_i-\frac{1}{2})}{d_i}\right)^2\prod_{k=1}^N(\frac{v(\frac{1}{2}-d_k)}{d_k}+1)}.\end{aligned}$$
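As for Lemma \[lemma:di\], this expression can be sanity-checked numerically: by scale invariance of the ratio, working in the equivalent normalization $|w_i|^2\sim\mathrm{Exp}(1)$ gives $\beta_{i,i}=2\int_0^{\infty}\frac{t}{(1+td_i)^{2}}\prod_{k=1}^N\frac{1}{1+td_k}\,dt$, which the sketch below (helper names are ours) compares with a Monte-Carlo estimate of $\mathbb{E}\left[|w_i|^4/({\bf w}^*{\bf D}{\bf w})^2\right]$.

```python
import numpy as np

def beta_ii(d, i, grid=200000):
    """Quadrature of 2 * int_0^inf t (1+t d_i)^{-2} prod_k (1+t d_k)^{-1} dt,
    mapped to (0,1) via t = v/(1-v)."""
    v = np.linspace(0.0, 1.0, grid + 1)[1:-1]
    t = v / (1.0 - v)
    f = 2.0 * t / ((1.0 - v) ** 2 * (1.0 + t * d[i]) ** 2)
    for dk in d:
        f = f / (1.0 + t * dk)
    return float(np.trapz(f, v))

# Monte-Carlo counterpart with i.i.d. complex Gaussian entries
d = np.array([0.5, 1.0, 2.0])
rng = np.random.default_rng(4)
W = rng.standard_normal((3, 400000)) + 1j * rng.standard_normal((3, 400000))
denom = np.real(np.sum(np.conj(W) * (d[:, None] * W), axis=0))  # w^* D w
betas_mc = [float(np.mean(np.abs(W[k]) ** 4 / denom ** 2)) for k in range(3)]
```

For equal weights $d_1=\cdots=d_N=1$ the integral reduces to $2\int_0^\infty t(1+t)^{-(N+2)}dt=\frac{2}{N(N+1)}$, which provides an additional closed-form check.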
Proof of Theorem \[th:clt\]
===========================
Our approach is based on a perturbation analysis of ${\rm vec}(\hat{\bf C}_N(\rho))$ in the vicinity of its asymptotic limit ${\boldsymbol{\Sigma}_0}$, coupled with the use of Slutsky's theorem [@vaart], which allows us to discard terms converging to zero in probability.
Set $\boldsymbol{\Delta}=\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\left(\hat{\bf C}_N(\rho)-\boldsymbol{\Sigma}_0\right)\boldsymbol{\Sigma}_0^{-\frac{1}{2}}$. Then, $$\boldsymbol{\Delta}=\frac{N(1-\rho)}{n}\sum_{i=1}^n \frac{\boldsymbol{\Sigma}_0^{-\frac{1}{2}}{\bf x}_i{\bf x}_i^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}}{{\bf x}_i^*\hat{\bf C}_N^{-1}(\rho){\bf x}_i} +\rho\boldsymbol{\Sigma}_0^{-1}-{\bf I}_N.$$ Writing $\hat{\bf C}_N^{-1}$ as: $$\begin{aligned}
\hat{\bf C}_N^{-1}&=\left(\hat{\bf C}_N-\boldsymbol{\Sigma}_0+\boldsymbol{\Sigma}_0\right)^{-1}\\
&=\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\left({\bf I}_N+\boldsymbol{\Delta}\right)^{-1}\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\\
&=\boldsymbol{\Sigma}_0^{-1}-\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\boldsymbol{\Delta}\boldsymbol{\Sigma}_0^{-\frac{1}{2}}+o_p(\|\boldsymbol{\Delta}\|)\end{aligned}$$ we obtain: $$\begin{aligned}
\boldsymbol{\Delta}&=\frac{N(1-\rho)}{n}\sum_{i=1}^n \frac{\boldsymbol{\Sigma}_0^{-\frac{1}{2}}{\bf x}_i{\bf x}_i^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}}{{\bf x}_i^*{\boldsymbol{\Sigma}_0^{-1}}{\bf x}_i-{\bf x}_i^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\boldsymbol{\Delta}\boldsymbol{\Sigma}_0^{-\frac{1}{2}}{\bf x}_i+o_p(\|\boldsymbol{\Delta}\|)} \\
&+\rho\boldsymbol{\Sigma}_0^{-1}-{\bf I}_N.\end{aligned}$$ From [@vaart Lemma 2.12], $\boldsymbol{\Delta}$ writes finally as: $$\begin{aligned}
\boldsymbol{\Delta}&=\frac{N(1-\rho)}{n}\sum_{i=1}^n\frac{\boldsymbol{\Sigma}_0^{-\frac{1}{2}}{\bf x}_i{\bf x}_i^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}}{{\bf x}_i^*\boldsymbol{\Sigma}_0^{-1}{\bf x}_i}\left(1+\frac{{\bf x}_i^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\boldsymbol{\Delta}\boldsymbol{\Sigma}_0^{-\frac{1}{2}}{\bf x}_i}{{\bf x}_i^*\boldsymbol{\Sigma}_0^{-1}{\bf x}_i}\right)\\
&+\rho\boldsymbol{\Sigma}_0^{-1}-{\bf I}_N+o_p(\|\boldsymbol{\Delta}\|)\\
&=\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\tilde{\boldsymbol{\Sigma}}\boldsymbol{\Sigma}_0^{-\frac{1}{2}}-{\bf I}_N\\
&+\frac{N(1-\rho)}{n}\sum_{i=1}^n \frac{\boldsymbol{\Sigma}_0^{-\frac{1}{2}}{\bf x}_i{\bf x}_i^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}{\bf x}_i^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\boldsymbol{\Delta}\boldsymbol{\Sigma}_0^{-\frac{1}{2}}{\bf x}_i}{\left({\bf x}_i^*\boldsymbol{\Sigma}_0^{-1}{\bf x}_i\right)^2}+o_p(\|\boldsymbol{\Delta}\|)\\
&=\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\tilde{\boldsymbol{\Sigma}}\boldsymbol{\Sigma}_0^{-\frac{1}{2}}-{\bf I}_N\\
&+\frac{N(1-\rho)}{n}\sum_{i=1}^n \frac{\boldsymbol{\Sigma}_0^{-\frac{1}{2}}{\bf x}_i{\bf x}_i^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\left({\bf x}_i^{T}(\boldsymbol{\Sigma}_0^{-\frac{1}{2}})^{T}\otimes {\bf x}_i^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right) {\rm vec}(\boldsymbol{\Delta})}{\left({\bf x}_i^*\boldsymbol{\Sigma}_0^{-1}{\bf x}_i\right)^2}\\
&+o_p(\|\boldsymbol{\Delta}\|).\end{aligned}$$ Let ${\bf F}$ be the $N^2\times N^2$ matrix given by: $${\bf F}=\frac{N(1-\rho)}{n}\sum_{i=1}^n \frac{{\rm vec}\left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}{\bf x}_i{\bf x}_i^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)\left({\bf x}_i^{T}(\boldsymbol{\Sigma}_0^{-\frac{1}{2}})^{T}\otimes {\bf x}_i^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)}{\left({\bf x}_i^*\boldsymbol{\Sigma}_0^{-1}{\bf x}_i\right)^2}.$$ Then, ${\rm vec}({\boldsymbol{\Delta}})$ satisfies the following system of equations: $$\begin{aligned}
{\rm vec}(\boldsymbol{\Delta})&={\rm vec}\left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\tilde{\boldsymbol{\Sigma}}\boldsymbol{\Sigma}_0^{-\frac{1}{2}}-{\bf I}_N\right)+\mathbb{E}\left({\bf F}\right){\rm vec}(\boldsymbol{\Delta})\nonumber\\
&+\left({\bf F}-\mathbb{E}({\bf F})\right)\boldsymbol{\delta}+o_p(\|\boldsymbol{\delta}\|).
\label{eq:delta}\end{aligned}$$ Given that the last two terms on the right-hand side of the above equation converge to zero at a rate faster than $\frac{1}{\sqrt{n}}$, we have: $$\begin{aligned}
&\sqrt{n}{\rm vec}(\boldsymbol{\Delta})=\sqrt{n}\left(\left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)^{\mbox{\tiny T}}\otimes\boldsymbol{\Sigma}_0^{-\frac{1}{2}} \right)\tilde{\boldsymbol{\delta}}+\sqrt{n}\mathbb{E}({\bf F}){\rm vec}(\boldsymbol{\Delta})\nonumber\\
&+o_p(1).
\label{eq:delta_bis}\end{aligned}$$ It thus remains to compute $\mathbb{E}({\bf F})$ and to check that its spectral norm is less than $1$. We start by controlling the spectral norm of $\mathbb{E}({\bf F})$. Recall that $\mathbb{E}({\bf F})$ is given by: $$\begin{aligned}
&\mathbb{E}({\bf F})=N(1-\rho)\\
&\times\mathbb{E}\left[\frac{{\rm vec}\left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}{\bf x}{\bf x}^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)\left({\bf x}^{\mbox{\tiny T}}(\boldsymbol{\Sigma}_0^{-\frac{1}{2}})^{\mbox{\tiny T}}\otimes {\bf x}^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)}{({\bf x}^*\boldsymbol{\Sigma}_0^{-1}{\bf x})^2}\right]\\
&=N(1-\rho)\mathbb{E}\left[\frac{\left((\boldsymbol{\Sigma}_0^{-\frac{1}{2}})^{\mbox{\tiny T}}\overline{\bf x}\otimes \boldsymbol{\Sigma}_0^{-\frac{1}{2}}{\bf x}\right)\left({\bf x}^{\mbox{\tiny T}}(\boldsymbol{\Sigma}_0^{-\frac{1}{2}})^{\mbox{\tiny T}}\otimes {\bf x}^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)}{\left({\bf x}^*\boldsymbol{\Sigma}_0^{-1}{\bf x}\right)^2}\right]\\
&=N(1-\rho)\mathbb{E}\left[\frac{\left((\boldsymbol{\Sigma}_0^{-\frac{1}{2}})^{\mbox{\tiny T}}\overline{\bf x}{\bf x}^{\mbox{\tiny T}}(\boldsymbol{\Sigma}_0^{-\frac{1}{2}})^{\mbox{\tiny T}}\right)\otimes\left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}{\bf x}{\bf x}^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)}{\left({\bf x}^*\boldsymbol{\Sigma}_0^{-1}{\bf x}\right)^2}\right].\end{aligned}$$ It can be easily noticed that: $\frac{(\boldsymbol{\Sigma}_0^{-\frac{1}{2}})^{\mbox{\tiny T}}\overline{\bf x}{\bf x}^{\mbox{\tiny T}}(\boldsymbol{\Sigma}_0^{-\frac{1}{2}})^{\mbox{\tiny T}}}{{\bf x}^*\boldsymbol{\Sigma}_0^{-1}{\bf x}}\preceq {\bf I}_N$. Therefore, $$\begin{aligned}
\mathbb{E}({\bf F}) &\preceq N(1-\rho){\bf I}_N\otimes \mathbb{E}\left[\frac{\boldsymbol{\Sigma}_0^{-\frac{1}{2}}{\bf x}{\bf x}^*\boldsymbol{\Sigma}_0^{-\frac{1}{2}}}{{\bf x}^*\boldsymbol{\Sigma}_0^{-1}{\bf x}}\right]\\
&={\bf I}_N\otimes\left({\bf I}_N-\rho\boldsymbol{\Sigma}_0^{-1}\right)\end{aligned}$$ thus implying $$\left\|\mathbb{E}({\bf F})\right\|\leq \left\|{\bf I}_N-\rho\boldsymbol{\Sigma}_0^{-1}\right\| <1.$$ We will now provide a closed-form expression for $\mathbb{E}({\bf F})$. To this end, we will use the eigenvalue decomposition of $\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\boldsymbol{\Sigma}_N^{\frac{1}{2}}={\bf U}{\bf D}^{\frac{1}{2}}{\bf U}^*$. Then, letting $\tilde{\bf w}={\bf U}^*{\bf w}$ with ${\bf w}=\boldsymbol{\Sigma}_N^{-\frac{1}{2}}{\bf x}$, we obtain: $$\mathbb{E}({\bf F})=\mathbb{E}\left[\frac{N(1-\rho)\overline{\bf U}{\bf D}^{\frac{1}{2}}\overline{(\tilde{\bf w})}\tilde{\bf w}^{\mbox{\tiny T}}{\bf D}^{\frac{1}{2}}{\bf U}^{\mbox{\tiny T}}\otimes {\bf U}{\bf D}^{\frac{1}{2}}\tilde{\bf w}\tilde{\bf w}^*{\bf D}^{\frac{1}{2}}{\bf U}^*}{\left(\tilde{\bf w}^*{\bf D}\tilde{\bf w}\right)^2}\right].$$ Therefore, $$\begin{aligned}
&\left({\bf D}^{-\frac{1}{2}}{\bf U}^{\mbox{\tiny T}}\otimes {\bf D}^{-\frac{1}{2}}{\bf U}^*\right)\mathbb{E}({{\bf F}}) \left(\overline{\bf U}{\bf D}^{-\frac{1}{2}}\otimes {\bf U}{\bf D}^{-\frac{1}{2}}\right)\\
&=N(1-\rho)\mathbb{E}\left[\frac{\overline{(\tilde{\bf w})}\tilde{\bf w}^{\mbox{\tiny T}}\otimes \tilde{\bf w}\tilde{\bf w}^*}{\left(\tilde{\bf w}^*{\bf D}\tilde{\bf w}\right)^2}\right]\\
&=N(1-\rho)\mathbb{E}\left[\frac{\left(\overline{(\tilde{\bf w})}\otimes \tilde{\bf w}\right)(\tilde{\bf w}\otimes \tilde{\bf w}^*)}{\left(\tilde{\bf w}^*{\bf D}\tilde{\bf w}\right)^2}\right]\\
&=N(1-\rho)\mathbb{E}\left[\frac{{\rm vec}(\tilde{\bf w}\tilde{\bf w}^*)\left({\rm vec}(\tilde{\bf w}\tilde{\bf w}^*)\right)^*}{\left(\tilde{\bf w}^*{\bf D}\tilde{\bf w}\right)^2}\right]\\
&=N(1-\rho)\tilde{\bf B}({\bf D}),\end{aligned}$$ where $\tilde{\bf B}({\bf D})$ is provided by Lemma \[lemma:di\]. A closed-form expression for $\tilde{\bf F}\triangleq\mathbb{E}({\bf F})$ is thus given by: $$\tilde{\bf F}= N(1-\rho)\left(\overline{\bf U}{\bf D}^{\frac{1}{2}}\otimes {\bf U}{\bf D}^{\frac{1}{2}}\right)\tilde{\bf B}({\bf D})\left({\bf D}^{\frac{1}{2}}{\bf U}^{\mbox{\tiny T}}\otimes {\bf D}^{\frac{1}{2}}{\bf U}^*\right).$$ The linear system of equations then becomes: $$\sqrt{n}{\rm vec}(\boldsymbol{\Delta})=\sqrt{n}({\bf I}_{N^2}-\tilde{\bf F})^{-1}\left(\left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)^{\mbox{\tiny T}}\otimes \boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)\tilde{\boldsymbol{\delta}}+o_p(1).$$ Writing ${\rm vec}(\boldsymbol{\Delta})=\left(\left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)^{\mbox{\tiny T}}\otimes \boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)\boldsymbol{\delta}$, we finally obtain: $$\begin{aligned}
\sqrt{n}\boldsymbol{\delta}&=\left(\left(\boldsymbol{\Sigma}_0^{\frac{1}{2}}\right)^{\mbox{\tiny T}}\otimes \boldsymbol{\Sigma}_0^{\frac{1}{2}}\right)({\bf I}_{N^2}-\tilde{\bf F})^{-1}\left(\left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)^{\mbox{\tiny T}}\otimes \boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)\sqrt{n}\tilde{\boldsymbol{\delta}}\\
&+o_p(1).\end{aligned}$$ Thus, $\sqrt{n}\boldsymbol{\delta}$ behaves as a zero-mean Gaussian distributed vector with covariance: $$\begin{aligned}
{\bf M}_1&=\left(\left(\boldsymbol{\Sigma}_0^{\frac{1}{2}}\right)^{\mbox{\tiny T}}\otimes \boldsymbol{\Sigma}_0^{\frac{1}{2}}\right)({\bf I}_{N^2}-\tilde{\bf F})^{-1}\left(\left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)^{\mbox{\tiny T}}\otimes \boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)\tilde{\bf M}_1\\
&\times\left(\left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)^{\mbox{\tiny T}}\otimes \boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)({\bf I}_{N^2}-\tilde{\bf F})^{-1}\left(\left(\boldsymbol{\Sigma}_0^{\frac{1}{2}}\right)^{\mbox{\tiny T}}\otimes \boldsymbol{\Sigma}_0^{\frac{1}{2}}\right)\end{aligned}$$ and pseudo-covariance: $$\begin{aligned}
{\bf M}_2&=\left(\left(\boldsymbol{\Sigma}_0^{\frac{1}{2}}\right)^{\mbox{\tiny T}}\otimes \boldsymbol{\Sigma}_0^{\frac{1}{2}}\right)({\bf I}_{N^2}-\tilde{\bf F})^{-1}\left(\left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)^{\mbox{\tiny T}}\otimes \boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)\tilde{\bf M}_2\\
&\times\left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\otimes \left(\boldsymbol{\Sigma}_0^{-\frac{1}{2}}\right)^{\mbox{\tiny T}}\right)({\bf I}_{N^2}-\tilde{\bf F}^{\mbox{\tiny T}})^{-1}\left(\boldsymbol{\Sigma}_0^{\frac{1}{2}}\otimes \left(\boldsymbol{\Sigma}_0^{\frac{1}{2}}\right)^{\mbox{\tiny T}}\right).\end{aligned}$$ This completes the proof. \[app:clt\]
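The key quantitative ingredients of the proof — the population fixed point for $\boldsymbol{\Sigma}_0$ and the spectral-norm condition $\|\mathbb{E}({\bf F})\|\leq\|{\bf I}_N-\rho\boldsymbol{\Sigma}_0^{-1}\|<1$ — can be probed by a quick Monte Carlo experiment. The sketch below (not part of the proof; the scatter matrix, $N$, $\rho$, and the sample size are arbitrary illustrative choices) iterates the population fixed point $\boldsymbol{\Sigma}_0 = N(1-\rho)\mathbb{E}[{\bf x}{\bf x}^*/({\bf x}^*\boldsymbol{\Sigma}_0^{-1}{\bf x})]+\rho{\bf I}_N$ and then estimates $\mathbb{E}({\bf F})$ from the same samples:

```python
import numpy as np

# Monte Carlo sketch: solve the population fixed point for Sigma_0, estimate
# E(F), and check ||E(F)|| <= ||I_N - rho * Sigma_0^{-1}|| < 1.
# N, rho, the scatter matrix and the sample size are illustrative choices.
rng = np.random.default_rng(0)
N, rho, n_mc = 4, 0.5, 100_000

Sigma = np.diag([4.0, 2.0, 1.0, 0.5]).astype(complex)        # true scatter
wg = rng.standard_normal((n_mc, N)) + 1j * rng.standard_normal((n_mc, N))
x = (wg / np.sqrt(2)) @ np.linalg.cholesky(Sigma).conj().T   # x = Sigma^{1/2} w

Sigma0 = np.eye(N, dtype=complex)
for _ in range(100):        # fixed-point iteration for the population RTE
    q = np.einsum('ij,jk,ik->i', x.conj(), np.linalg.inv(Sigma0), x).real
    Sigma0 = N * (1 - rho) / n_mc * (x.T * (1.0 / q)) @ x.conj() \
        + rho * np.eye(N)

evals, evecs = np.linalg.eigh(Sigma0)
S = evecs @ np.diag(evals**-0.5) @ evecs.conj().T            # Sigma_0^{-1/2}
y = x @ S.T                                                  # rows: Sigma_0^{-1/2} x_i
q = np.einsum('ij,ij->i', y.conj(), y).real                  # x* Sigma_0^{-1} x

# per-sample vec(y y*) (column-major) and row vector y^T kron y^*
vecs = np.einsum('ia,ib->iba', y, y.conj()).reshape(n_mc, N * N)
rows = np.einsum('ia,ib->iab', y, y.conj()).reshape(n_mc, N * N)
F = N * (1 - rho) / n_mc * (vecs / q[:, None]**2).T @ rows

bound = np.linalg.norm(np.eye(N) - rho * np.linalg.inv(Sigma0), 2)
print(np.linalg.norm(F, 2), bound)   # both spectral norms should be below 1
```

With these choices the estimated spectral norm of $\mathbb{E}({\bf F})$ stays below the bound, consistent with the invertibility of ${\bf I}_{N^2}-\tilde{\bf F}$ used above.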
[^1]: A. Kammoun and M.S. Alouini are with the Computer, Electrical, and Mathematical Sciences and Engineering (CEMSE) Division, KAUST, Thuwal, Makkah Province, Saudi Arabia (e-mail: abla.kammoun@kaust.edu.sa, slim.alouini@kaust.edu.sa)
[^2]: R. Couillet and F. Pascal are with Laboratoire des Signaux et Systèmes (L2S, UMR CNRS 8506) CentraleSupélec-CNRS-Université Paris-Sud, 91192 Gif-sur-Yvette, France (e-mail: romain.couillet@centralesupelec.fr, frederic.pascal@centralesupelec.fr)
[^3]: Couillet’s work is supported by the ERC MORE EC–120133
[^4]: Another concurrent RTE is that of Chen *et al.* [@chen-11], which is given as the unique solution of $$\check{\bf C}_N(\rho)=\frac{\check{\bf B}_N(\rho)}{\frac{1}{N}\operatorname{tr}\check{\bf B}_N(\rho)}$$ where $$\check{\bf B}_N(\rho)=(1-\rho)\frac{1}{n}\sum_{i=1}^n \frac{{\bf x}_i{\bf x}_i^*}{\frac{1}{N}{\bf x}_i^*\check{\bf C}_N(\rho)^{-1}{\bf x}_i} +\rho {\bf I}_N.$$
[^5]: The evaluation of the Lauricella’s type $D$ hypergeometric function is performed numerically using its integral representation $$\begin{aligned}
&F_D^{(N)}(a,b_1,\cdots,b_n,c;x_1,\cdots,x_n)\\
&=\frac{\Gamma(c)}{\Gamma(a)\Gamma(c-a)}\int_0^1 t^{a-1}(1-t)^{c-a-1}\prod_{i=1}^n (1-x_it)^{-b_i}\,dt, \qquad \Re c > \Re a >0.\end{aligned}$$
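As a quick numerical check of this representation (a sketch with arbitrary test values, not part of the paper): for $n=1$ the Lauricella function reduces to the Gauss hypergeometric function, $F_D^{(1)}(a,b,c;x)={}_2F_1(a,b;c;x)$, which SciPy evaluates directly:

```python
import math
from scipy.integrate import quad
from scipy.special import hyp2f1

# For n = 1, F_D^{(1)}(a, b, c; x) = 2F1(a, b; c; x); compare the integral
# representation with scipy's direct evaluation. The test values are
# arbitrary, chosen with Re(c) > Re(a) > 0 and |x| < 1.
a, b, c, x = 1.5, 0.7, 3.2, 0.4

coeff = math.gamma(c) / (math.gamma(a) * math.gamma(c - a))
integral, _ = quad(lambda t: t**(a-1) * (1-t)**(c-a-1) * (1 - x*t)**(-b), 0, 1)
print(coeff * integral, hyp2f1(a, b, c, x))   # the two values should agree
```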
---
abstract: 'We investigated the velocity of an asymmetric camphor boat moving on aqueous glycerol solutions. The viscosity was controlled through the glycerol concentration of the solution. The velocity decreased with an increase in the glycerol concentration. We proposed a phenomenological model and showed that the velocity decreases with an increase in the viscosity according to a power law. Our experimental results agreed with those obtained from the model. The results supported the approximation that the characteristic decay length of the camphor concentration profile at the front of the boat is much shorter than that at the rear of the boat, a quantity which is difficult to measure directly.'
author:
- Michiko Shimokawa
- Masashi Oho
- Kengo Tokuda
- Hiroyuki Kitahata
title: Power law observed in the motion of an asymmetric camphor boat under viscous conditions
---
Introduction
============
We can observe a wide variety of patterns formed by living things moving as self-propelled objects, such as a traffic jam [@Kikumako; @Bando; @Sugiyama], a large-scale ordering of swimming bacteria [@Peng; @Nishi], a swarm of mosquitoes, a parliament of birds, and a school of fish [@Vicsek; @Vicsek2; @Toner]. Understanding the pattern formation induced by such collective motions is a challenging problem.
Similar behaviors also emerge in chemical systems, such as microtubes [@Sumino], droplets [@Thutupalli; @Ohmura; @Tanaka], Janus particles [@Nishi2; @janus] and camphor systems [@Suematsu; @Suematsu3; @Nishimori; @Soh; @Soh2; @Ikura; @Suematsu; @Kohira; @Suematsu2; @Nakata; @Nagayama; @Eric; @Kitahata2; @Lauga; @Suematsu2; @Yui; @Koyano; @Heisler; @NishiWakai]. Self-propelled objects transform chemical energy into kinetic energy in non-equilibrium systems, and move spontaneously as if they were alive. Recently, many studies have reported on camphor boats as self-propelled particles in chemical systems [@Suematsu; @Nishimori; @Kohira; @Suematsu2; @Nakata; @Yui]. A camphor boat is made of a plastic sheet attached to a camphor disk. When the camphor boat is put on an aqueous surface, the camphor molecules dissolve from the disk under the boat and spread over the surface. As the camphor molecules decrease the surface tension of the aqueous phase, the camphor boat moves spontaneously on the aqueous phase due to the difference in surface tension around the boat. There have been many experimental studies, as well as numerical ones, on the camphor boat. Some of the numerical models are based on reaction-diffusion dynamics of the camphor concentration [@Nakata; @Nagayama; @Eric; @Heisler; @NishiWakai], and the others are based on fluid dynamics [@Soh; @Soh2; @Lauga]. These models could explain the experimental behaviors in a qualitative manner, but basic physical quantities were necessary to establish a quantitative correspondence. It had been difficult to measure the driving force of the camphor boat motion, the surface tension difference between the front and the back of the boat, the diffusion coefficient, the supply rate of camphor molecules from the camphor disk to the water surface, and the relaxation rate, until Suematsu [*et al.*]{} measured these quantities in experiments [@Suematsu2].
The results have allowed us to compare experimental results with theoretical ones quantitatively, and have provided a deeper understanding of the phenomena of the camphor boat. However, only pure water was investigated as the aqueous phase. Thus, we focused on the viscosity dependence of the camphor boat motion.
As methods to change the viscosity of the aqueous solution under the camphor boat, controlling the temperature of the solution or using solutions with different compositions can be considered. We adopted the latter; we used aqueous glycerol solutions with several glycerol concentrations [@Nagayama; @Koyano] to change the viscosity of the base solution.
In this paper, we investigated the velocity $v$ of the camphor boat for several glycerol concentrations $p$, and found that $v$ decreased with an increase in $p$. In order to understand the $p$ dependence of $v$, we proposed a mathematical model. The model showed a power law $v\sim\mu^{-1/2}$, where $\mu$ is the viscosity of the base solution. Our experimental results satisfied the scaling relation obtained from the model. The agreement between the experimental and theoretical results for the viscosity dependence of $v$ provides an estimate of the concentration field around the camphor boat, which is difficult to measure directly in experiments.
Experimental procedure
======================
A round-shaped boat as shown in Figs. \[fig:method\](a) and (b) was used to measure the velocity of the camphor boat. The boat was composed of a plastic plate (thickness: 0.1 mm) and a camphor disk, which was prepared by pressing camphor powder ((+)-Camphor, Wako, Japan) using a pellet die set for the preparation of samples for Fourier transform-infrared (FT-IR) spectroscopy analysis. The diameter and the thickness of the camphor disk were 3.0 mm and 1.0 mm, respectively. The plastic plate was cut in a circle with a diameter of 6.0 mm, and the camphor disk was attached to the edge of the flat circular plastic plate using an adhesive (Bath bond Q, KONISHI, Japan), so that half of the camphor disk protruded beyond the plastic sheet. This round-shaped camphor boat moved in the direction of the plastic sheet.
An annular glass chamber was used, which was composed of two petri dishes with different diameters as shown in Figs. \[fig:method\](c) and (d). The inner and outer diameters were 128.5 mm and 145.8 mm, so the channel width of the chamber was 8.7 mm. As it is known that the velocity is sensitive to the depth of water [@Yui], the chamber was put on a clear horizontal plate. The solution, a mixture of glycerol (Glycerol, Wako, Japan) and water at several mass ratios $p$ (i.e., $p$ is the mass percentage of glycerol in the mixture), was poured into the chamber so that the depth of the solution was 4.7 mm. We investigated physical properties of the solution, such as the viscosity, the surface tension, and the camphor solubility, against the glycerol concentration $p$. The detailed results are shown in Appendix A. The camphor boat was put on the surface of the solution in the glass chamber, and then it started to move spontaneously. For visualization of the motion, an LED board was placed under the horizontal plate. The motion of the boat was captured with a digital video camera (HDR-FX1, SONY, Japan) from the top of the chamber. The obtained movies were analyzed using an image-processing system (ImageJ, National Institutes of Health, USA).
![\[fig:method\](Color online) Schematic drawings of (a) top view and (b) side view of a camphor boat for the measurements of velocities, (c) top view and (d) side view on the annular chamber.](method.eps){width="7cm"}
Experimental Results
====================
We investigated the velocity of the camphor boat on solutions of various glycerol concentrations $p$. The position of the camphor boat is described by a radial angle $\theta$ in the annular chamber, as shown in Fig. \[fig:velo\](a). Analyses of the videos captured by the digital video camera provided the position $\theta$ at time $t$, where $t=0$ corresponds to the time when the boat finished three laps along the chamber after it had been put on the surface of the solution. In Fig. \[fig:velo\](b), $\theta$ has a constant gradient in time, that is to say, the camphor boat moved with a constant velocity. Figure \[fig:velo\](c) shows a time series of the angular velocity $\omega = \Delta\theta/\Delta t$, where $\Delta t =1/30$ s corresponds to one frame of the video camera and $\Delta\theta$ is the angular difference between $t$ and $t+\Delta t$. In Fig. \[fig:velo\](b), the expanded plot is shown for the time region corresponding to the gray region in Fig. \[fig:velo\](c). The angular velocity $\omega$ in this region fluctuated around the average value of 1.08 rad/s. A similar tendency was observed for 50 s $\lesssim t \lesssim$ 200 s. In contrast, $\omega$ increased with time and was noisy before $t\sim10$ s, and $\omega$ began to decrease after $t\sim250$ s. Therefore, we investigated $\omega$ at 60 s $\lesssim t \lesssim$ 180 s, during which $\omega$ had an almost constant value in time. Next, we investigated the angular velocity as a function of $p$, as shown in Fig. \[fig:velo\](d). The vertical and horizontal axes in Fig. \[fig:velo\](d) show the angular velocity $\overline{\omega}$ and the concentration $p$. The $\overline{\omega}$ was obtained from the linear fitting of the time series as shown in Fig. \[fig:velo\](b). The errors for each $\overline{\omega}$ were lower than $10^{-3}$ rad/s. As shown in Fig. \[fig:velo\](d), $\overline{\omega}$ decreased with an increase in $p$.
![image](velo.eps){width="14cm"}
Mathematical Model \[sec:model\]
================================
The glycerol concentration $p$ of the solution was controlled in our experiments, which led to a change in the viscosity $\mu$, as shown in Appendix A. In this section, we consider the viscosity dependence of the camphor boat velocity. Here, the annular glass chamber used in our experiments is regarded as a one-dimensional channel of infinite length.
The time evolution equation of the camphor boat in a one-dimensional system (the spatial coordinate is represented as $x$) is given as $$\begin{aligned}
m\frac{d^2X}{dt^2} = -h\frac{dX}{dt}+F,
\label{eq:motion}
\end{aligned}$$ where $m$, $X$, $h$ and $F$ are the mass, the center of mass, the friction coefficient of the camphor boat, and the driving force exerted on the moving camphor boat, respectively. We assume that $h$ is proportional to the viscosity $\mu$, i.e., $h=K\mu$, where $K$ is a constant ($K > 0$). This assumption has been used in many previous papers [@Eric; @Koyano; @Nagayama; @Nakata; @Nishimori; @Suematsu; @Suematsu2; @Kohira; @Soh; @NishiWakai; @Heisler], and it was also reported that the viscous drag on a thin film moving in a Newtonian fluid obeys a linear relationship with the fluid viscosity [@stone]. Therefore, we considered the assumption $h=K\mu$ to be natural [@viscous; @drag]. The driving force $F$ is described as
F = w[\gamma(c(X+r+\ell))-\gamma(c(X-r))],
\label{eq:driving}
\end{aligned}$$ where $w$ is the width of the camphor disk. Here, we consider that the positions of the front and the back of the boat are shown as $x=X+r+\ell$ and $x=X-r$, where $r$ and $\ell$ are the radius of the disk and the size of the boat as defined in Fig. \[fig:model\]. The surface tension $\gamma$ depends on the concentration $c$ of the camphor molecules at the surface of the solution, and we assume the linear relation as $$\begin{aligned}
\gamma=\gamma_0-\Gamma c,
\label{eq:surface}
\end{aligned}$$ where $\gamma_0$ is the surface tension of the base solution without camphor and $\Gamma$ is a positive constant.
![\[fig:model\] Illustration of side view of a camphor boat.](model.eps){width="7cm"}
The time evolution on the camphor concentration $c$ is shown as $$\begin{aligned}
\frac{\partial c}{\partial t} = D \frac{\partial^2 c}{\partial x^2}-ac+f(x-X),
\label{eq:concentration}
\end{aligned}$$ where $a$ is the sum of sublimation rate and dissolution rate of the camphor molecules on solution surface, $D$ is the diffusion coefficient of the camphor molecule, and $f$ denotes the dissolution rate of the camphor molecules from the camphor disk to the aqueous solution surface. As for the term $f(z)$, we apply the following description, $$\begin{aligned}
f(z) = \begin{cases}
f_0, & ({-r < z <r}),\\
0, & ({\rm otherwise}).
\end{cases}
\label{eq:provide2}
\end{aligned}$$ That is to say, the dissolution of camphor molecules from the disk occurs at $-r < z < r$. The above equation does not include the Marangoni effect directly, although the flow has an influence on the camphor concentration. A previous paper [@Kitahata2] showed that Eq. (\[eq:concentration\]) is reasonable if $D$ is interpreted as a spatially uniform effective diffusion coefficient of the camphor that includes transport by the flow. In addition, this spatially uniform effective diffusion coefficient is supported by the experimental result that the diffusion length is proportional to the square root of the elapsed time [@Suematsu].
Theoretical analysis
====================
Our experimental results showed that the camphor boat moved with a constant velocity in time, as shown in Fig. \[fig:velo\]. Thus, we consider solutions for the motion of the camphor boat with a constant velocity $v$ in the $x$-direction, i.e., $X = vt$. From this condition, Eq. (\[eq:motion\]) leads to $$\begin{aligned}
-hv+F=0.
\label{eq:motion1}
\end{aligned}$$ By setting $\xi=x-vt$ and $c=c(\xi)$, Eq. (\[eq:concentration\]) provides $$\begin{aligned}
-v\frac{dc}{d\xi} = D \frac{d^2 c}{d\xi^2}-ac+f(\xi).
\label{eq:concentration1}
\end{aligned}$$ Equation (\[eq:concentration1\]) leads to the following solutions $$\begin{aligned}
c(\xi) = \begin{cases}
\beta_1 \exp \big(\lambda_-(\xi-r)\big), & ({\xi > r}),\\
\dfrac{f_0}{a}+\alpha_2 \exp\big(\lambda_+\xi)+\beta_2
\exp\big(\lambda_-\xi\big), & ({-r <\xi < r}),\\
\alpha_3 \exp \big(\lambda_+(\xi+r)\big), & ({\xi < -r}),
\end{cases}
\label{eq:provide}
\end{aligned}$$ where $$\begin{aligned}
\lambda_\pm= -\frac{v}{2D}\pm\frac{\sqrt{v^2+4Da}}{2D},
\label{eq:lambda_pm}
\end{aligned}$$ $$\begin{aligned}
\beta_1 = \frac{f_0\lambda_+}{a(\lambda_+-\lambda_-)}(1-\exp(2\lambda_-r)),
\label{eq:beta1}
\end{aligned}$$ $$\begin{aligned}
\alpha_2= \frac{f_0\lambda_-\exp(-\lambda_+r)}{a(\lambda_+-\lambda_-)},
\label{eq:alpha2}
\end{aligned}$$ $$\begin{aligned}
\beta_2 = -\frac{f_0\lambda_+\exp(\lambda_-r)}{a(\lambda_+-\lambda_-)},
\label{eq:beta2}
\end{aligned}$$ $$\begin{aligned}
\alpha_3 = -\frac{f_0\lambda_-}{a(\lambda_+-\lambda_-)}(1-\exp(-2\lambda_+r)).
\label{eq:alpha1}
\end{aligned}$$
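The matching conditions behind these coefficients (continuity of $c$ and $dc/d\xi$ at $\xi=\pm r$) can be checked numerically. The sketch below, with illustrative parameter values of our own choosing, solves the conditions as a linear system and compares the result with the closed forms for $\beta_1$ and $\alpha_3$, the two coefficients that enter the driving force:

```python
import numpy as np

# Sketch: verify the piecewise comoving profile by solving the continuity
# conditions for (beta_1, alpha_2, beta_2, alpha_3) as a 4x4 linear system.
# Parameter values are illustrative (v = D = a = r = f0 = 1).
v, D, a, r, f0 = 1.0, 1.0, 1.0, 1.0, 1.0
lp = (-v + np.sqrt(v**2 + 4*D*a)) / (2*D)     # lambda_+
lm = (-v - np.sqrt(v**2 + 4*D*a)) / (2*D)     # lambda_-

# rows: continuity of c and of dc/dxi at xi = +r, then at xi = -r;
# unknowns ordered as (beta_1, alpha_2, beta_2, alpha_3)
A = np.array([
    [1.0, -np.exp(lp*r),     -np.exp(lm*r),     0.0],
    [lm,  -lp*np.exp(lp*r),  -lm*np.exp(lm*r),  0.0],
    [0.0, -np.exp(-lp*r),    -np.exp(-lm*r),    1.0],
    [0.0, -lp*np.exp(-lp*r), -lm*np.exp(-lm*r), lp ],
])
rhs = np.array([f0/a, 0.0, f0/a, 0.0])
beta1, alpha2, beta2, alpha3 = np.linalg.solve(A, rhs)

beta1_cf = f0*lp/(a*(lp - lm)) * (1 - np.exp(2*lm*r))      # closed form
alpha3_cf = -f0*lm/(a*(lp - lm)) * (1 - np.exp(-2*lp*r))   # closed form
print(beta1, beta1_cf, alpha3, alpha3_cf)
```

The numerically solved $\beta_1$ and $\alpha_3$ reproduce the closed-form expressions above.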
Equations (\[eq:provide\])-(\[eq:alpha1\]) provide $$\begin{aligned}
F =& -\Gamma w \left[\beta_1 \exp \left(\lambda_-\ell \right)-\alpha_3
\right]
\nonumber \\
=&-\frac{\Gamma w f_0}{a \left(\lambda_+-\lambda_-\right)}
\left[\lambda_+ \left(1-\exp \left(2\lambda_-r \right) \right) \exp
\left(\lambda_-\ell \right) \right. \nonumber\\
& \left. + \lambda_- \left(1-\exp \left(-2\lambda_+r\right) \right) \right].
\end{aligned}$$
As $v$ is sufficiently large in our experiments, we assume $r\ll1/\lambda_+$ and $\ell\gg1/\left|\lambda_-\right|$. Then, $\lambda_+\sim a/v$ and $\lambda_-\sim -v/D$, which lead to $$\begin{aligned}
F = & -\frac{\Gamma w f_0}{a(v/D)} \left[\frac{a}{v} \left(1-\exp
\left(-\frac{2vr}{D} \right)\right)\exp\left(-\frac{v}{D}\ell\right)
\right. \nonumber \\ &
\left.-\frac{v}{D}
\left(1-\exp\left(-\frac{2ar}{v}\right)\right)\right]
\nonumber\\
\simeq & -\frac{\Gamma wf_0D}{av}\left(-\frac{v}{D}
\right)\left(\frac{2ar}{v} \right)
\nonumber\\
= & \frac{2\Gamma wf_0r}{v}.
\label{eq:F}
\end{aligned}$$ As $F = K\mu v$ from Eq. (\[eq:motion1\]), $$\begin{aligned}
K\mu v = \frac{2\Gamma w f_0 r}{v}.
\label{eq:F1}
\end{aligned}$$ From Eq. (\[eq:F1\]), we obtain $$\begin{aligned}
v=\sqrt{\frac{2\Gamma wf_0r}{K\mu}}.
\label{eq:v}
\end{aligned}$$
Equation (\[eq:v\]) shows a power law $v\propto\mu^{-1/2}$, provided that the other parameters such as $\Gamma$, $w$ and $f_0$ are independent of $\mu$. The power law with the exponent $-1/2$ is an interesting result, since the Stokes relation naturally suggests another scaling, $v \propto \mu^{-1}$ [@fluid].
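The quality of the large-$v$ approximation behind this power law can be probed numerically. The sketch below, with illustrative parameter values ($h=K\mu$ plays the role of the friction coefficient), solves the exact force balance $hv = F(v)$, with $F(v)$ assembled from $\lambda_\pm$, $\beta_1$, and $\alpha_3$, and compares the root with $v=\sqrt{2\Gamma w f_0 r/h}$:

```python
import numpy as np

# Solve h*v = F(v) exactly by bisection and compare with the approximation
# v = sqrt(2*Gamma*w*f0*r/h). All parameter values are illustrative.
Gamma, w, f0, r, ell, D, a = 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0

def drive(v):
    """Exact driving force F(v) built from lambda_pm, beta_1 and alpha_3."""
    lp = (-v + np.sqrt(v**2 + 4*D*a)) / (2*D)
    lm = (-v - np.sqrt(v**2 + 4*D*a)) / (2*D)
    beta1 = f0*lp/(a*(lp - lm)) * (1 - np.exp(2*lm*r))
    alpha3 = -f0*lm/(a*(lp - lm)) * (1 - np.exp(-2*lp*r))
    return -Gamma*w*(beta1*np.exp(lm*ell) - alpha3)

def steady_speed(h, lo=1e-6, hi=1e3):
    # the residual h*v - F(v) increases with v, so plain bisection suffices
    for _ in range(100):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if drive(mid) - h*mid > 0 else (lo, mid)
    return 0.5*(lo + hi)

hs = [0.01, 0.03, 0.1]
v_exact = [steady_speed(h) for h in hs]
v_approx = [np.sqrt(2*Gamma*w*f0*r/h) for h in hs]
print(v_exact)
print(v_approx)   # agreement improves as h (hence a*D/v^2) decreases
```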
Numerical results
=================
In the theoretical analysis, we have assumed a solution depending on $\xi = x - vt$. However, the supposed mathematical model has other symmetries, and whether the considered solution depending on $\xi$ is an attractor or not should be checked. Therefore, we performed numerical calculations based on the equations in Sec. \[sec:model\]. For the numerical calculation, we considered a one-dimensional array with a spatial step of $\Delta x = 0.1$. The spatial size of the considered system was 1000 with a periodic boundary condition, and we adopted the Euler method with a time step $\Delta t = 10^{-3}$. For the spatial derivative, we used an explicit method. The parameters were set to be $m = 0.1$, $w = 1$, $\Gamma = 1$, $r=1$, $\ell = 1$, $D = 1$, $a = 1$, and $f_0 = 1$. In the discretization process, first-order interpolation was adopted for Eqs. and . The parameter $h$, corresponding to the viscosity $\mu$, was varied, and we investigated the time development of the camphor boat position and the camphor concentration profile.
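The scheme described above can be sketched as follows (a sketch of ours, not the authors' code). The model parameters are those listed above, but the domain size, the run time, and the value of $h$ are reduced choices of ours so that the run stays short; the boat spontaneously starts moving in the $+x$ direction and its speed saturates.

```python
import numpy as np

# Minimal explicit-Euler sketch of the 1D camphor-boat model: diffusion,
# evaporation/dissolution, a moving source of half-width r, and a boat driven
# by the surface-tension difference between its front and rear edges.
L, dx = 100.0, 0.1               # reduced domain (the paper uses 1000)
n = int(L / dx)
xg = (np.arange(n) + 0.5) * dx   # cell centers
dt = 1e-3
m, w, Gamma, r, ell, D, a, f0 = 0.1, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
h = 0.1                          # friction coefficient (h = K*mu)

c = np.zeros(n)                  # surface camphor concentration
X, V = 0.5 * L, 0.0              # boat position and velocity

def sample(c, pos):
    """First-order (linear) interpolation of c at a periodic position."""
    s = (pos % L) / dx - 0.5
    i0 = int(np.floor(s))
    frac = s - np.floor(s)
    return (1.0 - frac) * c[i0 % n] + frac * c[(i0 + 1) % n]

for _ in range(int(30.0 / dt)):
    # fractional coverage of each cell by the source interval (X - r, X + r)
    d = (xg - X + 0.5 * L) % L - 0.5 * L           # signed periodic distance
    cover = np.clip(np.minimum(d + 0.5 * dx, r)
                    - np.maximum(d - 0.5 * dx, -r), 0.0, dx) / dx
    lap = (np.roll(c, 1) - 2.0 * c + np.roll(c, -1)) / dx**2
    c += dt * (D * lap - a * c + f0 * cover)       # concentration update
    # driving force: surface tension at the front edge minus the rear edge
    F = -Gamma * w * (sample(c, X + r + ell) - sample(c, X - r))
    V += dt * (-h * V + F) / m                     # equation of motion
    X = (X + dt * V) % L

print(V)   # saturated speed, of order sqrt(2*Gamma*w*f0*r/h) by the analysis
```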
![\[fig:sim\]Numerical results. (a) Time course of camphor boat velocity $dX/dt$ for $h=0.01$. (b) Camphor concentration profile $c(x)$ for $h=0.01$ at $t=1000$, when the camphor boat velocity reached a constant value. The position of the camphor boat was $X \simeq 188.6$. (c) Final velocity ($t = 1000$) depending on $h$, which is proportional to viscosity. The power law $v \propto h^{-1/2}$ holds for smaller $h$.](fig_sim.eps){width="7cm"}
In Fig. \[fig:sim\], the numerical results are shown. In Fig. \[fig:sim\](a), the time development of the camphor boat velocity is shown. The camphor boat velocity saturates to a constant value. The camphor concentration profile after the velocity became constant ($t = 1000$) is shown in Fig. \[fig:sim\](b). The camphor concentration profile was asymmetric with respect to the camphor boat position $x = X \simeq 188.6$. After reaching a constant velocity, the concentration profile did not change its shape but shifted in the positive $x$-direction. Thus, we can infer that the solution depending on $\xi = x - vt$ is an attractor of this system. We have also confirmed that the solution converged to this attractor from other initial conditions (data not shown). The mathematical analysis of this convergence to the solution depending on $\xi$ remains open, and it may be possible to approach such a mathematical problem by considering Lie group symmetry [@Olver].
The final velocity against $h$ is shown in Fig. \[fig:sim\](c). In the regime of $h$ smaller than 0.1, the power law $v \propto h^{-1/2}$ held, where $h$ is proportional to the viscosity $\mu$ in the present framework. In the theoretical analysis, we assumed $r \ll 1/\lambda_+$ and $\ell \gg 1/\left|\lambda_- \right|$, which is equivalent to $aD/v^2 \ll 1$, as will be discussed in detail in the following section. Since the final velocity is nearly equal to 5 for $h \sim 0.1$, and $a = D = 1$, the deviation from the power law originates from the breakdown of the assumption in the analysis.
Discussion
==========
Our model showed a power law $v\sim\mu^{-1/2}$ under the assumptions that $r\ll1/\lambda_+$ and $\ell\gg1/\left|\lambda_-\right|$. In this section, we compare the experimental results with the theoretical result in Eq. (\[eq:v\]) in order to check whether our model is reasonable. Equation (\[eq:v\]) has several parameters: $\Gamma$, $w$, $f_0$, $r$, $K$, and $\mu$. Since similar camphor boats were used, $w$, $r$, and $K$ were constant in our experiments. We investigated the dependence of the other parameters, i.e., $\Gamma$, $f_0$, and $\mu$, on the glycerol concentration $p$ in Appendix A. Equation (\[eq:surface\]) gives $\Gamma=(\gamma_0 - \gamma)/c$. As $(\gamma_0 - \gamma)$ was independent of $p$ in our measurements, we considered $\Gamma$ to be constant. The supply rate $f_0$ corresponds to $\Delta M$, the mass loss of the camphor disk per unit time in our experiments, and we found that $\Delta M$ decreased with an increase in $p$. The viscosity $\mu$ of the base solution increased with $p$. Thus, $f_0$ and $\mu$ in Eq. (\[eq:v\]) are functions of $p$. In addition, the angular velocity is proportional to the camphor boat velocity in our experiments.
From the above discussion, Eq. leads to $$\begin{aligned}
\overline{\omega}(p) \propto\sqrt{\frac{\Delta M(p)}{\mu(p)}}.
\label{eq:v2}
\end{aligned}$$ Figure \[power\] shows a relationship between $\Delta M/\mu$ and $\overline{\omega}$ obtained from our experiments. The result almost agrees with the solid line in Eq. (\[eq:v2\]) [@Delta_M].
![\[power\](Color online) Relationship between $\Delta M/\mu$ and $\overline{\omega}$, where $\Delta M$, $\mu$, and $\overline{\omega}$ are a weight loss of a camphor disk per one second, the viscosity of the base solution, and the angular velocity of the camphor boat, respectively. The solid line shows the numerical result; $\overline{\omega}\sim\sqrt{\Delta M/\mu}$ in Eq. .](power.eps){width="7cm"}
The power law was obtained under the assumptions that $r\ll1/\lambda_+$ and $\ell\gg1/\left|\lambda_-\right|$, which are equivalent to $aD/v^2\ll1$. Since $\sqrt{D/a}$ corresponds to a characteristic decay length of the camphor concentration profile, and $v/a$ is the distance the camphor boat travels during the characteristic time over which the concentration field keeps its memory, the assumption means that the characteristic length of the camphor concentration profile is sufficiently smaller than the characteristic length of the camphor boat motion. In such a case, the camphor concentration profile should be asymmetric with respect to the camphor particle position.
Here, we confirm the acceptability of the assumptions for our experiments. We needed the values of the parameters $a$, $D$, and $v$ appearing in the assumption. To measure $D$, we used a rectangular camphor boat and chalk powder. The boat was put on the solution surface covered by the chalk powder, and the camphor diffused over the solution; the diffusion was visualized by the chalk powder. We analyzed videos of the powder motion and estimated $D$. The measurement method is similar to that in a previous study [@Suematsu2]. The effective diffusion coefficient $D$ against $p$ is shown in Appendix B, which shows that $D$ decreases with an increase in $p$. For $a$, we used $a=1.8\times10^{-2}$ s$^{-1}$, based on the experimental observation reported in previous work [@Suematsu2]. Using these data, the relationship between $p$ and $aD/{v^2}$ was obtained as shown in Fig. \[fig:compare\]. The result shows that the values of $aD/{v^2}$ were sufficiently smaller than 1 for all $p$, which suggests that our assumption is reasonable. This leads to the following picture: the camphor concentration around the boat is quite asymmetric, and the decay length of the concentration field at the back of the boat is much greater than that at the front.
![\[fig:compare\] Relationship between $p$ and $aD/v^2$, where $a$, $D$, and $v$ correspond to the sum of sublimation rate and dissolution rate of camphor molecules on an aqueous surface, effective diffusion coefficient, and velocity of a camphor boat, respectively. $aD/v^2$ was much smaller than 1, which suggests our approximation is valid.](compare.eps){width="6cm"}
There have been many analytical studies on the collective motion of symmetric camphor disks, in both experiments and theoretical analyses [@Nishimori; @Eric; @Ikura; @NishiWakai]. There have also been some studies on asymmetric camphor boats, in which numerical calculations for both the concentration field and the camphor boat positions were performed, together with analytical approaches under the assumption of slow velocity [@Suematsu; @Heisler]. In contrast to these studies, we carried out the analysis under the assumption of fast velocity, and this assumption was justified by the experimental observations. It would enable an analytical approach to the collective motion of camphor boats with fast velocities. Therefore, our model would provide a deeper understanding of the collective motion of not only camphor boats but also living things.
Conclusion
==========
We investigated the velocity $v$ of an asymmetric camphor boat for several glycerol concentrations $p$ of the glycerol aqueous solution. In order to understand the dependence of the camphor boat velocity $v$ on the glycerol concentration $p$, we discussed a numerical model based on a diffusion-reaction equation. When it is assumed that the characteristic length of the camphor concentration at the front of the boat is shorter than that at the rear, $v$ should obey the power law $v\sim\mu^{-1/2}$, where $\mu$ is the viscosity of the base solution. This power law agreed with the experimental results, and a comparison with our measurements also confirmed that the assumption in our model is reasonable. Using our proposed model, we can discuss the profile of the camphor concentration, which is difficult to measure directly in experiments. Thus, our experiment has profound significance for estimating the concentration through measurements of the velocity.
As a future topic, it would be worth investigating whether the power law $v\sim\mu^{-1/2}$ persists at smaller values of $v$ in experiments, for example with an increased boat size. In addition, in this paper we considered the hydrodynamic effect to be included in the effective diffusion coefficient. It would, however, also be important to consider the fluid flow around the boat when studying the behavior of two or more camphor boats as collective motion. As future work, it would also be interesting to consider the hydrodynamic interaction in a multi-camphor-particle system.
The authors thank Y. Koyano. MS would like to thank Samantha Hawkins of Fukuoka Institute of Technology for proofreading this manuscript. This work was supported by JSPS KAKENHI Grant Numbers JP18K11338, JP18K03572, JP25103008 and JP15K05199.
Physical properties of a glycerol-water solution as a base solution
===================================================================
Figure \[fig:solution\](a) shows the viscosity of the aqueous solution for various glycerol concentrations $p$; i.e., $p$ denotes the mass percentage of glycerol in the aqueous solution. The viscosity $\mu$ was measured using a viscometer (SV-10A, A$\&$D, Japan). As shown in Fig. \[fig:solution\] (a), the viscosity $\mu$ increased with $p$.
Figure \[fig:solution\](b) shows the surface tension difference $\gamma_0-\gamma$ of the solution for each $p$, where $\gamma_0$ and $\gamma$ correspond to the surface tension of the glycerol-water solution without camphor and that of the solution with $6.8 \times 10^{-3}$ g of camphor dissolved per 1500 ml, respectively. The camphor concentration was set to be close to that in the measurements of the angular velocity $\omega$. The surface tension was measured using a surface tensiometer (DMs-401, Kyowa Interface Science Co., Ltd., Japan). The surface tension $\gamma$ with camphor was lower than $\gamma_0$ without camphor, and both $\gamma$ and $\gamma_0$ decreased with an increase in the glycerol concentration $p$. The difference $\gamma_0-\gamma$, however, remained almost constant for different values of $p$, as shown in Fig. \[fig:solution\] (b). The average value of $\gamma_0-\gamma$ was 0.29 mN/m.
Next, we investigated the dependence of the camphor solubility on the glycerol concentration $p$ of the base solution. We measured the mass of the camphor disk before and after the disk moved for $\Delta t=50$ min, and denoted the mass change by $\Delta m$. From $\Delta m$, we obtained the weight loss rate $\Delta M = \Delta m/\Delta t$. As shown in Fig. \[fig:solution\] (c), $\Delta M$ decreased with an increase in $p$.
![image](solution.eps){width="14cm"}
Effective diffusion coefficient of camphor on solution with several glycerol concentrations
===========================================================================================
The effective diffusion coefficient $D$ of camphor is included in our assumption, and the value of $D$ was necessary for checking whether the assumption was reasonable. Thus, we measured $D$ for various glycerol concentrations $p$.
The rectangular boat in Figs. \[rectangle\](a)-(d) was used for the measurements of the effective diffusion coefficient of the camphor molecules on the solution, and its shape was different from the round-shaped boat in the measurements of the velocity. The rectangular boat was made by bending both sides of a rectangular plastic plate that was 8.0 mm in width and 10.0 mm in height at 2.0 mm from the edge. The camphor disk was attached at the center of the plastic plate, where the shortest distance from the edge was 3.5 mm. The shape was similar to the one reported in the previous study [@Suematsu2].
![\[rectangle\] Schematic drawings of (a) three-dimensional view, (b) upside down three-dimensional view, (c) top view, and (d) side view of a camphor boat used for the measurements of effective diffusion coefficients.](sup.2_1.1.eps){width="6cm"}
Figures \[diffusion\](a)-(f) are snapshots captured from the top at time $t$, where (a) $t=0$ s, (b) 0.03 s, (c) 0.07 s, (d) 0.13 s, (e) 0.20 s and (f) 0.30 s, respectively. In Fig. \[diffusion\], $t=0$ corresponds to the time at which the chalk powders started moving on the water. The diffusion of camphor molecules under the rectangular boat leads to the motion of chalk powders on the water surface. As shown in Fig. \[diffusion\](a), all regions of the surface were covered by chalk powders with a gray color at $t=0$. The chalk powders started moving at $t=0.03$ s, and the water surface without powders was observed as a white region around the boat in Fig. \[diffusion\](b). The area of the white region grew with time (Figs. \[diffusion\](b)-(d)). The boat stayed at the same position before $t\sim0.2$ s (Figs. \[diffusion\] (a)-(e)), although the powders moved. The camphor boat, then, started to move after $t\sim0.2$ s (Fig. \[diffusion\](f)). In this process, the chalk powders were carried by not only the camphor diffusion but also fluid flow induced by the motion of the boat.
![\[diffusion\] Snapshots on the expansion of the camphor molecular layer at (a) $t=0$ s, (b) 0.03 s, (c) 0.07 s, (d) 0.13 s, (e) 0.20 s and (f) 0.30 s, respectively. Chalk powders were dispersed on the solution surface for visualization of the camphor layer. The white and gray regions indicate the camphor layer and the region rich in floating chalk powders, respectively.](sup.2_1.2.eps){width="7cm"}
![\[diffusion2\] (a) Relationship between time $t$ and $r^{2}$, where $r$ is the longest distance between the edge and the center of the area from which chalk powders were swept out. $t=0$ corresponds to the time at which chalk powders on the solution started moving. Closed circles, open squares, and closed triangles show the data for glycerol concentrations $p=0~\%$ ($\mu=$ 0.92 mPa$\cdot$s), $p=40~\%$ ($\mu=4.03$ mPa$\cdot$s), and $p=70~\%$ ($\mu=25.80$ mPa$\cdot$s), respectively. (b) An expanded one for 0 s $<t<0.8$ s in (a), and solid lines are the results of the linear fittings for time before the boat started moving.](sup.2_2.eps){width="7cm"}
Next, we investigated $r^2$ at time $t$, where $r$ was the longest distance between the edge and the center of the region with the camphor layer, shown as the white region in Fig. \[diffusion\]. The closed circles, open squares and closed triangles in Fig. \[diffusion2\](a) show the data for $p=0~\%$ ($\mu=$ 0.92 mPa$\cdot$s) for water, and $p=40~\%$ ($\mu=4.03$ mPa$\cdot$s) and $p=70~\%$ ($\mu=25.80$ mPa$\cdot$s) for the glycerol-water solution, respectively. Let us focus on the data for $p=0$. The trend of the data changed at around $t\sim0.2$ s, which almost corresponded to the time when the camphor boat began to move, as shown in Fig. \[diffusion\]. As we needed the effective diffusion coefficient of the camphor, we measured $r^2$ in the time range in which the camphor boat did not move. Figure \[diffusion2\](b) is an expanded figure for small $t$, i.e., times without boat motion. When the camphor boat stayed at a certain position, $r^2$ increased linearly with time. Linear fittings are shown as solid lines, where the fitting was executed over the region 0 s $ <t<$ 0.13 s for the closed circles. The gradients of these solid lines provide the effective diffusion coefficients $D$ of the camphor molecules on the glycerol-water solution. The effective diffusion coefficient on water was obtained as $180~(\pm~20)$ mm$^{2}$/s. A previous paper [@Kitahata2] reported that the effective diffusion coefficient $D$ in a numerical study almost agreed with the value of $D$ measured with this method. Thus, we consider this method reasonable for the measurement of $D$. The gradient of the solid line decreases with an increase in the glycerol concentration. Figure \[diffusion3\] shows the relationship between $p$ and $D$, confirming the tendency that $D$ decreases with an increase in $p$.
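The slope extraction described above amounts to a short least-squares fit of $r^2$ against $t$ over the early-time window. The sketch below uses synthetic, noise-free data with a hypothetical slope of 180 mm$^2$/s, purely to illustrate the procedure.

```python
import numpy as np

def estimate_D(t, r2):
    """Least-squares slope of r^2 versus t, read as the effective D."""
    slope, _intercept = np.polyfit(t, r2, 1)
    return slope

# Synthetic, noise-free data mimicking the early-time window (boat at rest):
t = np.linspace(0.0, 0.13, 14)    # time [s]
r2 = 180.0 * t + 2.0              # hypothetical r^2(t) [mm^2], slope 180
D_hat = estimate_D(t, r2)         # recovers 180 up to rounding
```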
![\[diffusion3\] Effective diffusion coefficient $D$ against glycerol concentrations $p$ of the base solutions. Error bars denote standard deviations.](diffusion.eps){width="7cm"}
[99]{} A. Nakayama, [*et al.*]{}, New J. Phys. [**11**]{}, 083025 (2009).
M. Bando, [*et al.*]{}, Phys. Rev. E [**58**]{}, 5429 (1998).
Y. Sugiyama, [*et al.*]{}, New J. Phys. [**10**]{}, 033001 (2008).
C. Peng, [*et al.*]{}, Science [**354**]{}, 882 (2016).
D. Nishiguchi, [*et al.*]{}, Phys. Rev. E [**95**]{}, 020601(R) (2017).
T. Vicsek, [*et al.*]{}, Phys. Rev. Lett. [**75**]{}, 1226 (1995).
T. Vicsek and A. Zafeiris, Phys. Rep. [**517**]{}, 71 (2012).
J. Toner and Y. Tu, Phys. Rev. Lett. [**75**]{}, 4326 (1995).
Y. Sumino, [*et al.*]{}, Nature [**483**]{}, 448 (2012).
S. Thutupalli, R. Seemann, and S. Herminghaus, New J. Phys. [**13**]{}, 073021 (2011).
T. Ohmura, [*et al.*]{}, Appl. Phys. Lett. [**107**]{}, 074102 (2015).
S. Tanaka, S. Nakata, and T. Kano, J. Phys. Soc. Jpn. [**86**]{}, 101004 (2017).
D. Nishiguchi, [*et al.*]{} New J. Phys. [**20**]{}, 015002 (2018).
J. Hu, [*et al.*]{} Chem. Soc. Rev. [**41**]{}, 4356 (2012).
N. J. Suematsu, [*et al.*]{}, Phys. Rev. E [**81**]{}, 056210 (2010).
H. Nishimori, N. J. Suematsu, and S. Nakata, J. Phys. Soc. Jpn. [**86**]{}, 101012 (2017).
M. I. Kohira, [*et al.*]{}, Langmuir [**17**]{}, 7124 (2001).
N. J. Suematsu, [*et al.*]{}, Langmuir [**30**]{}, 8101 (2014).
Y. Matsuda, [*et al.*]{}, Chem. Phys. Lett. [**654**]{}, 92 (2016).
S. Nakata, [*et al.*]{}, Phys. Chem. Chem. Phys. [**17**]{}, 10326 (2015).
M. Nagayama, [*et al.*]{}, Physica D [**194**]{}, 151 (2004).
E. Heisler, [*et al.*]{}, J. Phys. Soc. Jpn. [**81**]{}, 074605 (2012).
K. Nishi [*et al.*]{}, Phys. Rev. E [**92**]{}, 022910 (2015).
E. Heisler, [*et al.*]{}, Phys. Rev. E [**85**]{}, 055201 (2012).
S. Soh, K. J. M. Bishop, and B. A. Grzybowski, J. Phys. Chem. B [**112**]{}, 10848 (2008).
S. Soh, M. Branicki, and B. A. Grzybowski, J. Phys. Chem. Lett. [**2**]{}, 770 (2011).
E. Lauga and A. M. J. Davis, J. Fluid Mech. [**705**]{}, 120 (2011).
Y. Koyano, H. Kitahata, and T. Sakurai, Phys. Rev. E [**94**]{}, 042215 (2016).
N. J. Suematsu, [*et al.*]{}, J. Phys. Soc. Jpn. [**84**]{}, 034802 (2015).
Y. S. Ikura, [*et al.*]{}, Phys. Rev. E [**88**]{}, 012911 (2013).
H. Kitahata and N. Yoshinaga, J. Chem. Phys. [**148**]{}, 134906 (2018).
H. A. Stone and H. Masoud, J. Fluid Mech. [**781**]{}, 494 (2015).
It would be interesting to investigate the viscous dependence for a viscous drag directly in a similar experiment to Ref. [@Suematsu2].
J. Happel and H. Brenner, Low Reynolds number hydrodynamics: with special applications to particulate media (Springer, 1983).
P. J. Olver, Applications of Lie groups to differential equations (Springer, 1993).
The experimental data are scattered since the values of $\Delta M$ are noisy, as shown in Fig. \[fig:solution\](c) of Appendix A. The values of $\Delta M$, investigated in the motion of the camphor disk, are small enough to be susceptible to external factors.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'This article gives a proof of the Langlands-Shelstad fundamental lemma for the spherical Hecke algebra for every unramified $p$-adic reductive group $G$ in large positive characteristic. The proof is based on the transfer principle for constructible motivic integration. To carry this out, we introduce a general family of partition functions attached to the complex $L$-group of the unramified $p$-adic group $G$. Our partition functions specialize to Kostant’s $q$-partition function for complex connected groups and also specialize to the Langlands $L$-function of a spherical representation. These partition functions are used to extend numerous results that were previously known only when the $L$-group is connected (that is, when the $p$-adic group is split). We give explicit formulas for branching rules, the inverse of the weight multiplicity matrix, the Kato-Lusztig formula for the inverse Satake transform, the Plancherel measure, and Macdonald’s formula for the spherical Hecke algebra on a non-connected complex group (that is, non-split unramified $p$-adic group).'
author:
- 'William Casselman, Jorge E. Cely, and Thomas Hales'
bibliography:
- 'hecke.bib'
title: 'The Spherical Hecke algebra, partition functions, and motivic integration'
---
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We consider asymptotic distributions of maximum deviations of sample covariance matrices, a fundamental problem in high-dimensional inference of covariances. Under mild dependence conditions on the entries of the data matrices, we establish the Gumbel convergence of the maximum deviations. Our result substantially generalizes earlier ones where the entries are assumed to be independent and identically distributed, and it provides a theoretical foundation for high-dimensional simultaneous inference of covariances.'
address:
- |
501 Hill Center\
110 Frelinghuysen Road\
Piscataway, NJ 08854\
- |
Department of Statistics\
5734 S. University Ave\
Chicago, IL 60637\
author:
-
-
bibliography:
- 'mybib.bib'
title: Simultaneous Inference of Covariances
---
Introduction
============
Let $\boldsymbol{X}_n=\left(X_{ij}\right)_{1\leq i \leq n, 1\leq j
\leq m}$ be a data matrix whose $n$ rows form independent samples from some population distribution with mean vector ${\boldsymbol{\mu}}_n$ and covariance matrix $\Sigma_n$. High dimensional data increasingly occur in modern statistical applications in biology, finance and wireless communication, where the dimension $m$ may be comparable to the number of observations $n$, or even much larger than $n$. Therefore, it is necessary to study the asymptotic behavior of statistics of $\boldsymbol{X}_n$ under the setting that $m=m_n$ grows to infinity as $n$ goes to infinity.
In many empirical examples, it is often assumed that $\Sigma_n=I_m$, where $I_m$ is the $m\times m$ identity matrix, so it is important to perform the test $$\label{eq:1.test}
H_0:\; \Sigma_n=I_m$$ before carrying out further estimation or inference procedures. Due to high dimensionality, conventional tests often do not work well or cannot be implemented. For example, when $m>n$, the likelihood ratio test (LRT) cannot be used because the sample covariance matrix is singular; and even when $m<n$, the LRT statistic drifts to infinity and leads to many false rejections if $m$ is also large [@bai:2009]. [@ledoit:2003] found that the empirical distance test [@nagao:1973] is not consistent when both $m$ and $n$ are large. The problem has been studied by several authors under the “large $n$, large $m$” paradigm. [@bai:2009] and [@ledoit:2003] proposed corrections to the LRT and the empirical distance test respectively. Assuming that the population distribution is Gaussian with ${\boldsymbol{\mu}}_n=0$, [@johnstone:2001] used the largest eigenvalue of the sample covariance matrix $\boldsymbol{X}_n^\top\boldsymbol{X}_n$ as the test statistic, and proved that its limiting distribution follows the Tracy-Widom law [@tracy:1994]. His work was extended to the non-Gaussian case by [@soshnikov:2002] and [@peche:2009], where they assumed the entries of $\boldsymbol{X}_n$ are independent and identically distributed (i.i.d.) with sub-Gaussian tails.
Let $x_{1},x_{2},\ldots,x_{m}$ be the $m$ columns of $\boldsymbol{X}_n$. In practice, the entries of the mean vector ${\boldsymbol{\mu}}_n$ are often unknown, and are estimated by $\bar x_i =
(1/n)\sum_{k=1}^n X_{ki}$. Write $x_i-\bar x_i$ for the vector $x_i-\bar x_i {\boldsymbol{1}}_n$, where ${\boldsymbol{1}}_n$ is the $n$-dimensional vector with all entries being one. Let $\sigma_{ij} = {\rm Cov}(X_{1 i},
X_{1 j})$, $1 \le i, j \le m$, be the covariance function, namely, the $(i, j)$th entry of $\Sigma_n$. The sample covariance between columns $x_{i}$ and $x_{j}$ is defined as $$ \hat\sigma_{ij} = \frac{1}{n}{(x_{i}-\bar x_{i})^\top(x_{j}-\bar x_{j})}.$$ In high-dimensional covariance inference, a fundamental problem is to establish an asymptotic distributional theory for the maximum deviation $$M_n = \max_{1\leq i<j\leq m} |\hat\sigma_{ij}-\sigma_{ij}|.$$ With such a distributional theory, one can perform statistical inference for structures of covariance matrices. For example, one can use $M_n$ to test the null hypothesis $H_0:\;\Sigma_n = \Sigma^{(0)}$, where $\Sigma^{(0)}$ is a pre-specified matrix. Here the null hypothesis can be that the population distribution is a stationary process so that $\Sigma_n$ is Toeplitz, or that $\Sigma_n$ has a banded structure.
It is very challenging to derive an asymptotic theory for $M_n$ if we allow dependence among $X_{11}, \ldots, X_{1 m}$. Many of the earlier results assume that the entries of the data matrix $\boldsymbol{X}_n$ are i.i.d.. In this case $\sigma_{i j} = 0$ if $i \not= j$. [@jiang:2004] derived the asymptotic distribution of $$L_n = \max_{1\leq i<j\leq m} |\hat\sigma_{ij}|.$$
\[thm:jiang\] Suppose $X_{i,j},\;i,j=1,2,\ldots$ are independent and identically distributed as $\xi$ which has variance one. Suppose ${{\mathbb{E}}}|\xi|^{30-\epsilon}<\infty$ for any $\epsilon>0$. If $n/m\rightarrow c\in(0,\infty)$, then for any $y\in{{\mathbb{R}}}$, $$\lim_{n\rightarrow\infty}P\left(nL_n^2 - 4\log m + \log(\log m) + \log(8\pi) \leq y\right) = \exp\left(-e^{-y/2}\right).$$
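A rough Monte Carlo sketch of this convergence is possible with modest sample sizes. Since convergence in Jiang-type results is known to be slow, only coarse agreement with the Gumbel law $\exp(-e^{-y/2})$ should be expected; the sizes $n=400$, $m=40$ and the number of replications below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def jiang_statistic(X):
    """n*L_n^2 - 4*log(m) + log(log m) + log(8*pi) for a data matrix X."""
    n, m = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                               # sample covariances
    L = np.max(np.abs(S[np.triu_indices(m, k=1)]))  # L_n
    return n * L**2 - 4*np.log(m) + np.log(np.log(m)) + np.log(8*np.pi)

# 200 replications with i.i.d. N(0,1) entries:
T = np.array([jiang_statistic(rng.standard_normal((400, 40)))
              for _ in range(200)])
gumbel_median = -2 * np.log(np.log(2.0))   # median of the limit law (~0.733)
frac_below = np.mean(T <= gumbel_median)   # should be roughly 1/2
```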
Jiang’s work has attracted considerable attention, and been followed by [@li:2010], [@liu:2008], [@zhou:2007] and [@li:2006]. Under the same setup that $\boldsymbol{X}_n$ consists of i.i.d. entries, these works focus on three directions: (i) reduce the moment condition; (ii) allow a wider range of $m$; and (iii) show that some moment condition is necessary. In a recent article, [@cai:2011a] extended those results in two ways: (i) the dimension $m$ can grow exponentially in the sample size $n$ under exponential moment conditions; and (ii) they showed that the test statistic $\max_{|i-j|>s_n} |\hat \sigma_{ij}|$ also converges to the Gumbel distribution if each row of $\boldsymbol{X}_n$ is Gaussian and is $s_n$-dependent. The latter generalization is important since it is one of the very few results that allow dependent entries.
In this paper we shall show that a self-normalized version of $M_n$ converges to the Gumbel distribution under mild dependence conditions on the vector $(X_{11}, \ldots, X_{1 m})$. Thus our result provides a theoretical foundation for high-dimensional simultaneous inference of covariances.
The rest of this article is organized as follows. We present the main result in Section \[sec:main\]. In Section \[sec:example\], we use two examples on linear processes and nonlinear processes to demonstrate that the technical conditions are easily satisfied. We discuss three tests for the covariance structure using our main result in Section \[sec:cov\_structure\]. The proof is given in Section \[sec:proof\], and some auxiliary results are collected in Section \[sec:auxiliary\].
Main result {#sec:main}
===========
We consider a slightly more general situation where population distribution can depend on $n$. Let ${\boldsymbol{X}}_n =
(X_{n,k,i})_{1\leq k\leq n, 1\leq i \leq m}$ be a data matrix whose $n$ rows are i.i.d. $m$-dimensional random vectors with mean ${\boldsymbol{\mu}}_n=(\mu_{n,i})_{1\leq i \leq m}$ and covariance matrix $\Sigma_n=(\sigma_{n,i,j})_{1\leq i,j \leq m}$. Let $x_{1},x_{2},\ldots,x_{m}$ be the $m$ columns of $\boldsymbol{X}_n$. Let $\bar x_i = (1/n)\sum_{k=1}^n X_{n,k,i}$, and write $x_i-\bar x_i$ for the vector $x_i-\bar x_i {\boldsymbol{1}}_n$. The sample covariance between $x_{i}$ and $x_{j}$ is defined as $$ \hat\sigma_{n,i,j}
= \frac{1}{n}{(x_{i}-\bar x_{i})^\top(x_{j}-\bar x_{j})}.$$ It is unnatural to study the maximum of a collection of random variables which are on different scales, so we consider the normalized version $|\hat\sigma_{n,i,j}-\sigma_{n,i,j}|/\tau_{n,i,j}$, where $$\tau_{n,i,j}
= \operatorname{Var}\left[(X_{n,1,i}-\mu_{n,i})(X_{n,1,j}-\mu_{n,j})\right].$$ In practice, $\tau_{n,i,j}$ are usually unknown, and can be estimated by $$\hat\tau_{n,i,j}
= \frac{1}{n} \left|(x_{i}-\bar x_{i})\circ(x_{j}-\bar x_{j})
-\hat\sigma_{n,i,j}\cdot {\boldsymbol{1}}_n\right|^2.$$ where $\circ$ denotes the Hadamard product defined as $A\circ
B:=(a_{ij}b_{ij})$ for two matrices $A=(a_{ij})$ and $B=(b_{ij})$ with the same dimensions. We thus consider $$\label{eq:max_cov}
M_n = \max_{1 \leq i<j \leq m} \frac{|\hat\sigma_{n,i,j}-\sigma_{n,i,j}|}{\sqrt{\hat\tau_{n,i,j}}}.$$ Due to the normalization procedure, we can assume without loss of generality that $\sigma_{n,i,i}=1$ and $\mu_{n,i}=0$ for each $1\leq i
\leq m$. Define the index set $\mathcal{I}_n=\{(i,j):\,1\leq i<j \leq m\}$, and for $\alpha=(i,j)\in\mathcal{I}_n$, let $X_{n,\alpha}:=X_{n,1,i}X_{n,1,j}$. Define $$\begin{aligned}
& \mathcal{K}_n(t,p)
= \sup_{1\leq i \leq m} {{\mathbb{E}}}\exp\left(t |X_{n,1,i}|^p\right), \\
& \mathcal{M}_n(p)
= \sup_{1\leq i \leq m} {{\mathbb{E}}}(|X_{n,1,i}|^{p}), \\
& \tau_n = \inf_{1 \leq i<j \leq m} \tau_{n,i,j}, \\
& \gamma_n = \sup_{\alpha, \beta \in\mathcal{I}_n
\hbox{ {\tiny and} } \alpha\neq\beta}
\left|\operatorname{Cor}(X_{n,\alpha},X_{n,\beta})\right|, \\
& \gamma_n(b)=\sup_{\alpha\in\mathcal{I}_n}
\sup_{\mathcal{A}\subset\mathcal{I}_n,|\mathcal{A}|=b}
\inf_{\beta \in \mathcal{A}}
\left|\operatorname{Cor}(X_{n,\alpha},X_{n,\beta})\right|.\end{aligned}$$ We need the following technical conditions. $$\begin{aligned}
& ({\boldsymbol{\mathrm{A1}}}).\quad \liminf_{n \rightarrow\infty} \tau_n > 0. \\
& ({\boldsymbol{\rm A2}}).\quad \limsup_{n} \gamma_n<1. \\
& ({\boldsymbol{\rm A3}}).\quad \gamma_n(b_n) \cdot (\log b_n) = o(1)
\hbox{ for any sequence $(b_n)$ such that $b_n\rightarrow\infty$.}\\
& ({\boldsymbol{\rm A3'}}).\quad \gamma_n(b_n) = o(1)
\hbox{ for any sequence $(b_n)$ such that $b_n\rightarrow\infty$, and}\\
& \qquad\qquad \sum_{\alpha,\beta\in\mathcal{I}_n}
\left[\operatorname{Cov}(X_{n,\alpha},X_{n,\beta})\right]^2 = O(m^{4-\epsilon})
\hbox{for some constant $\epsilon>0$}. \\
& ({\boldsymbol{\rm A4}}).\quad \log m = o\left(n^{p/(4+2p)}\right) \hbox{ and
}
\limsup_{n\rightarrow\infty}\mathcal{K}_n(t,p) < \infty \hbox{ for some constants} \\
& \qquad \quad\;\;\;\hbox{$t>0$ and $0<p\leq 4$}. \\
& ({\boldsymbol{\rm A4'}}).\quad m=O(n^q) \hbox{ and }
\limsup_{n\rightarrow\infty}\mathcal{M}_n(4q+4+\delta)<\infty \hbox{
for some constants}\\
& \qquad \quad\;\;\;\hbox{$q>0$ and $\delta>0$}.\end{aligned}$$ The two conditions ($\mathrm{A3}$) and ($\mathrm{A3'}$) require that the dependence among $X_{n,\alpha},\;\alpha\in\mathcal{I}_n$, is not too strong. They are translations of (B1) and (B2) in Section \[sec:normal\_comparison\] (see Remark \[rk:empirical\] for some equivalent versions), and either of them makes our results valid. We use (A2) to rule out the case where there may be many pairs $(\alpha,\beta)\in\mathcal{I}_n$ such that $X_{n,\alpha}$ and $X_{n,\beta}$ are perfectly correlated. Assumptions ($\mathrm{A4}$) and ($\mathrm{A4'}$) connect the growth speed of $m$ relative to $n$ with the moment conditions. They are typical in the context of high dimensional covariance matrix estimation. Condition (A1) excludes the case that $X_{n,\alpha}$ is a constant.
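The statistic $M_n$ in (\[eq:max\_cov\]) is straightforward to compute from a data matrix. A minimal NumPy sketch follows; the function name and the identity default for the hypothesized covariance matrix are our own illustrative choices.

```python
import numpy as np

def max_self_normalized_deviation(X, Sigma0=None):
    """M_n = max over i<j of |sigma_hat_ij - sigma_ij| / sqrt(tau_hat_ij)."""
    n, m = X.shape
    if Sigma0 is None:
        Sigma0 = np.eye(m)                   # hypothesized covariances
    Xc = X - X.mean(axis=0)                  # columns x_i - xbar_i
    S = Xc.T @ Xc / n                        # sigma_hat_{n,i,j}
    # tau_hat_{n,i,j} = (1/n)||(x_i-xbar_i)o(x_j-xbar_j) - sigma_hat_ij*1||^2
    P = Xc[:, :, None] * Xc[:, None, :]      # n x m x m Hadamard products
    Tau = np.mean((P - S[None, :, :])**2, axis=0)
    iu = np.triu_indices(m, k=1)             # pairs with i < j
    return np.max(np.abs(S - Sigma0)[iu] / np.sqrt(Tau[iu]))

# Example on simulated data satisfying H_0: Sigma_n = I_m:
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 8))
M = max_self_normalized_deviation(X)
```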
\[thm:max\_cov\] Suppose that ${\boldsymbol{X}}_n = (X_{n,k,i})_{1\leq k\leq n, 1\leq i \leq m}$ is a data matrix whose $n$ rows are i.i.d. $m$-dimensional random vectors, and whose entries have mean zero and variance one. Assume (A1), (A2), either of ($\mathrm{A3}$) and ($\mathrm{A3'}$), and either of ($\mathrm{A4}$) and ($\mathrm{A4'}$), then for any $y \in {{\mathbb{R}}}$, $$\lim_{n\rightarrow\infty}
P\left(nM_n^2 - 4\log m + \log(\log m)
+ \log(8\pi) \leq y\right)
= \exp\left(-e^{-y/2}\right).$$
Examples {#sec:example}
========
Except for ($\mathrm{A4}$) and ($\mathrm{A4'}$), which put conditions on every single entry of the random vector $(X_{n,1,i})_{1\leq i \leq m}$, all the other conditions of Theorem \[thm:max\_cov\] are related to the dependence among these entries, which can be arbitrarily complicated. In this section we shall provide examples which satisfy the four conditions (A1), (A2), ($\mathrm{A3}$) and ($\mathrm{A3'}$). Observe that if each row of ${\boldsymbol{X}}_n$ is a random vector with uncorrelated entries (in particular, if the entries are independent), then all these conditions are automatically satisfied. They are also satisfied if the number of non-zero covariances is bounded.
Stationary Processes
--------------------
Suppose $(X_{n,k,i})=(X_{k,i})$, and each row of $(X_{k,i})_{1\leq i
\leq m}$ is distributed as a stationary process $(X_i)_{1\leq i \leq
m}$ of the form $$\begin{aligned}
X_i=g(\epsilon_i,\epsilon_{i-1},\ldots)\end{aligned}$$ where $\epsilon_i$’s are i.i.d. random variables, and $g$ is a measurable function such that $X_i$ is well-defined. Let $(\epsilon_i')_{i\in{{\mathbb{Z}}}}$ be an i.i.d. copy of $(\epsilon_i)_{i\in{{\mathbb{Z}}}}$, and $X_i'=g(\epsilon_i,\ldots,\epsilon_1,\epsilon_0',\epsilon_{-1},\epsilon_{-2},\ldots)$. Following [@wu:2005], define the [*physical dependence measure*]{} of order $p$ by $$\begin{aligned}
\delta_p(i)=\|X_i-X_i'\|_p.\end{aligned}$$ Define the squared tail sum $$\begin{aligned}
\Psi_p(k)=\left[\sum_{j=k}^\infty (\delta_p(j))^2\right]^{1/2},\end{aligned}$$ and use $\Psi_p$ as a shorthand for $\Psi_p(0)$.
We give sufficient conditions for (A1), (A2), ($\mathrm{A3}$) and ($\mathrm{A3'}$) in the following lemma and leave its proof to the supplementary file.
\[thm:stationary\]
- If $0<\Psi_4<\infty$ and $\operatorname{Var}(X_iX_j)>0$ for all $i,j\in{{\mathbb{Z}}}$, then (A1) holds.
- If in addition, $|\operatorname{Cor}(X_iX_j,X_kX_l)|<1$ for all $i,j,k,l$ such that they are not all the same, then (A2) holds.
- Assume that the conditions of (i) and (ii) hold. If $\Psi_p(k)=o(1/\log k)$ as $k\rightarrow\infty$, then ($\mathrm{A3}$) holds. If $\sum_{j=0}^m (\Psi_4(j))^2 =
O(m^{1-\delta})$ for some $\delta>0$, then ($\mathrm{A3'}$) holds.
Let $g$ be a linear function with $g(\epsilon_i, \epsilon_{i-1},
\ldots) = \sum_{j=0}^\infty a_j \epsilon_{i-j}$, where $\epsilon_j$ are i.i.d. with mean $0$ and ${{\mathbb{E}}}(|\epsilon_j|^p) <
\infty$ and $a_j$ are real coefficients with $\sum_{j=0}^\infty
a_j^2 < \infty$. Then the physical dependence measure $\delta_p(i)
= |a_i| \|\epsilon_0 - \epsilon_0'\|_p$. If $a_i = i^{-\beta}
\ell(i)$, where $1/2 < \beta < 1$ and $\ell$ is a slowly varying function, then $(X_i)$ is a long memory process. Smaller $\beta$ indicates stronger dependence. Condition (iii) holds for all $\beta \in (1/2, 1)$. Moreover, if $a_i = i^{-1/2}
(\log(i))^{-2}$, $i \ge 2$, which corresponds to the extremal case with very strong dependence $\beta = 1/2$, we also have $\Psi_p(k)
= O( (\log k)^{-3/2} ) = o(1/\log k)$. So our dependence conditions are actually quite mild.
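The claimed rate $\Psi_p(k)=O((\log k)^{-3/2})$ for $a_i = i^{-1/2}(\log i)^{-2}$ can be checked numerically. In the sketch below the constant $\|\epsilon_0-\epsilon_0'\|_p$ is set to 1 and the infinite tail sum is truncated at $10^6$, so the computed values slightly underestimate $\Psi_p(k)$; the decay of $\Psi_p(k)\log k$ is nevertheless visible.

```python
import numpy as np

i = np.arange(2, 10**6 + 1, dtype=float)
a = i**-0.5 * np.log(i)**-2.0                # coefficients a_i, i >= 2
tail2 = np.cumsum((a**2)[::-1])[::-1]        # truncated sum_{j >= k} a_j^2
Psi = np.sqrt(tail2)                         # Psi_p(k) for k = 2, 3, ...

def psi_times_log(k):
    """Psi_p(k) * log(k); the theory predicts decay like (log k)^(-1/2)."""
    return Psi[k - 2] * np.log(k)

decay = [psi_times_log(k) for k in (10, 1_000, 100_000)]
```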
If $(X_i)$ is a linear process which is not identically zero, then the following regularity conditions are automatically satisfied: $\Psi_4>0$, $\operatorname{Var}(X_iX_j)>0$ for all $i,j\in{{\mathbb{Z}}}$, and $|\operatorname{Cor}(X_i
X_j, X_k X_l)|<1$ for all $i,j,k,l$ such that they are not all the same.
Non-stationary Linear Processes
-------------------------------
Assume that each row of $(X_{n,k,i})$ is distributed as $(X_{n,i})_{1\leq i
\leq m}$, which is of the form $$\begin{aligned}
X_{n,i}=\sum_{t\in{{\mathbb{Z}}}} f_{n,i,t}\epsilon_{i-t},\end{aligned}$$ where $\epsilon_i,\,i\in{{\mathbb{Z}}}$ are i.i.d. random variables with mean zero, variance one and finite fourth moment, and the sequence $(f_{n,i,t})$ satisfies $\sum_{t\in {{\mathbb{Z}}}} f_{n,i,t}^2=1$. Denote by $\kappa_4$ the fourth cumulant of $\epsilon_0$. For $1\leq i,j,k,l
\leq m$, we have $$\begin{aligned}
\sigma_{n,i,j} & = \sum_{t\in{{\mathbb{Z}}}} f_{n,i,i-t}f_{n,j,j-t}, \\
\operatorname{Cov}(X_{n,i}X_{n,j},X_{n,k}X_{n,l})
& = \mathrm{Cum}(X_{n,i},X_{n,j},X_{n,k},X_{n,l})
+ \sigma_{n,i,k}\sigma_{n,j,l}+\sigma_{n,i,l}\sigma_{n,j,k},\end{aligned}$$ where $\mathrm{Cum}(X_{n,i},X_{n,j},X_{n,k},X_{n,l})$ is the fourth order joint cumulant of the random vector $(X_{n,i},X_{n,j},X_{n,k},X_{n,l})^\top$, which can be expressed as $$\begin{aligned}
\mathrm{Cum}(X_{n,i},X_{n,j},X_{n,k},X_{n,l})
= \sum_{t\in{{\mathbb{Z}}}} f_{n,i,i-t}f_{n,j,j-t}f_{n,k,k-t}f_{n,l,l-t}\kappa_4,\end{aligned}$$ by the multilinearity of cumulants. In particular, we have $$\begin{aligned}
\operatorname{Var}(X_{n,i}X_{n,j}) = 1 + \sigma_{n,i,j}^2
+ \kappa_4\cdot\sum_{t \in {{\mathbb{Z}}}} f_{n,i,t}^2f_{n,j,t}^2.\end{aligned}$$ Since $\kappa_4 = \operatorname{Var}(\epsilon_0^2) - 2 \left({{\mathbb{E}}}\epsilon_0^2\right)^2 \geq -2$, the condition $$\begin{aligned}
\label{eq:cumulant}
\kappa_4>-2\end{aligned}$$ guarantees ($\mathrm{A1}$) in view of $$\begin{aligned}
\operatorname{Var}(X_{n,i}X_{n,j}) \geq (1 + \sigma_{n,i,j}^2)(1+\min\{\kappa_4/2,0\})
\geq \min\{1,1+\kappa_4/2\}>0.\end{aligned}$$
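The lower bound $\kappa_4 \geq -2$ is attained: for a standardized variable $\kappa_4 = {{\mathbb{E}}}\epsilon_0^4 - 3$, and a Rademacher variable ($\pm 1$ with probability $1/2$) has ${{\mathbb{E}}}\epsilon_0^4 = 1$, hence $\kappa_4 = -2$ exactly. A trivial check using exact fourth moments:

```python
def kappa4(fourth_moment):
    """Fourth cumulant of a mean-zero, variance-one random variable."""
    return fourth_moment - 3.0

k_rademacher = kappa4(1.0)        # E[eps^4] = 1          -> kappa_4 = -2
k_uniform    = kappa4(9.0 / 5.0)  # uniform(-sqrt3, sqrt3) -> kappa_4 = -1.2
k_normal     = kappa4(3.0)        # N(0,1)                -> kappa_4 = 0
```

So the condition (\[eq:cumulant\]) excludes only distributions at the Rademacher-type boundary.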
To ensure the validity of ($\mathrm{A2}$), it is natural to assume that no pairs $X_{n,i}$ and $X_{n,j}$ are strongly correlated, [ *i.e.*]{} $$\begin{aligned}
\label{eq:corr}
\limsup_{n\rightarrow\infty}\sup_{1\leq i<j\leq m}\left|\sum_{t\in{{\mathbb{Z}}}}f_{n,i,i-t}f_{n,j,j-t}\right|<1.\end{aligned}$$ We need the following lemma, whose proof is elementary and will be given in the supplementary file.
\[thm:normal\] The condition (\[eq:corr\]) suffices for ($\mathrm{A2}$) if $\epsilon_i$’s are i.i.d. $N(0,1)$.
As an immediate consequence, when $\epsilon_i$’s are i.i.d. $N(0,1)$, we have $$\begin{aligned}
\ell:=\liminf_{n\rightarrow\infty} \inf_{\ast}\inf_{\rho\in{{\mathbb{R}}}}\operatorname{Var}\left(X_{n,i}X_{n,j}-\rho X_{n,k}X_{n,l}\right) > 0,\end{aligned}$$ where $\inf_{\ast}$ is taken over all $1\leq i,j,k,l \leq m$ such that $i<j$, $k<l$ and $(i,j)\neq(k,l)$. Observe that when $\epsilon_i$’s are i.i.d. $N(0,1)$, $$\begin{aligned}
\label{eq:var}
\operatorname{Var}\left(X_{n,i}X_{n,j}-\rho X_{n,k}X_{n,l}\right) & =
2\cdot\sum_{t\in{{\mathbb{Z}}}}(f_{n,i,i-t}f_{n,j,j-t}-\rho f_{n,k,k-t}f_{n,l,l-t})^2\\ \nonumber
& + \sum_{s<t}
\left(f_{n,i,i-t}f_{n,j,j-s}+f_{n,i,i-s}f_{n,j,j-t} \right.\\ \nonumber
& \quad\;
\left. -\rho f_{n,k,k-t}f_{n,l,l-s}-\rho f_{n,k,k-s}f_{n,l,l-t}\right)^2;\end{aligned}$$ and when $\epsilon_i$’s are arbitrary variables, the variance is given by the same formula with the number 2 in (\[eq:var\]) being replaced by $2+\kappa_4$. Therefore, if (\[eq:cumulant\]) holds, then $$\begin{aligned}
\liminf_{n\rightarrow\infty} \inf_{\ast}\inf_{\rho\in{{\mathbb{R}}}}\operatorname{Var}\left(X_{n,i}X_{n,j}-\rho X_{n,k}X_{n,l}\right)
\geq \min\{1,1+\kappa_4/2\}\cdot\ell>0,\end{aligned}$$ which implies $(\mathrm{A2})$ holds. To summarize, we have shown that (\[eq:cumulant\]) and (\[eq:corr\]) suffice for $(\mathrm{A2})$.
Now we turn to Conditions ($\mathrm{A3}$) and ($\mathrm{A3'}$). Set $$\begin{aligned}
h_n(k)=\sup_{1\leq i \leq m}\left(\sum_{|t|={\lfloor k/2 \rfloor}}^\infty f_{n,i,t}^2\right)^{1/2},\end{aligned}$$ where ${\lfloor x \rfloor}=\max\{y\in{{\mathbb{Z}}}\,:\,y \leq x\}$ for any $x\in{{\mathbb{R}}}$; then we have $$\begin{aligned}
|\sigma_{n,i,j}| \leq 2h_n(0)h_n(|i-j|) = 2h_n(|i-j|).\end{aligned}$$ Fixing a subset $\{i,j\}$, for any integer $b>0$, there are at most $8b^2$ subsets $\{k,l\}$ such that $\{k,l\}\subset B(i;b)\cup B(j;b)$, where $B(x;r)$ is the open ball $\{y:|x-y|<r\}$. For all other subsets $\{k,l\}$, we have $$\begin{aligned}
|\operatorname{Cov}(X_{n,i}X_{n,j},X_{n,k}X_{n,l})| \leq (4+2\kappa_4)h_n(b),\end{aligned}$$ and hence ($\mathrm{A3}$) holds if we assume $h_n(k_n)\log k_n = o(1)$ for any positive sequence $(k_n)$ such that $k_n\rightarrow\infty$. ($\mathrm{A3'}$) holds if we assume $$\begin{aligned}
\sum_{k=1}^m [h_n(k)]^2 = O\left(m^{1-\delta}\right).\end{aligned}$$ for some $\delta>0$, because $$\begin{aligned}
\left|\operatorname{Cov}(X_{n,i}X_{n,j},X_{n,k}X_{n,l})\right| \leq 2\kappa_4h_n(|i-j|) + 2h_n(|i-k|) + 2h_n(|i-l|).\end{aligned}$$
Testing for covariance structures {#sec:cov_structure}
=================================
The asymptotic distribution given in Theorem \[thm:max\_cov\] has several statistical applications. One of them is in high dimensional covariance matrix regularization, because Theorem \[thm:max\_cov\] implies a uniform convergence rate for all sample covariances. Recently, [@cai:2011b] explored this direction, and proposed a thresholding procedure for sparse covariance matrix estimation, which is adaptive to the variability of each individual entry. Their method is superior to the uniform thresholding approach studied by [@bickel:2008b].
Testing structures of covariance matrices is also a very important statistical problem. As mentioned in the introduction, when the data dimension is high, conventional tests often cannot be implemented or do not work well. Let $\Sigma_n$ and $R_n$ be the covariance matrix and correlation matrix of the random vector $(X_{n,1,i})_{1\leq i \leq m}$ respectively. Two types of tests have been studied under the large $n$, large $m$ paradigm. [@chen:2010], [@bai:2009], [@ledoit:2003] and [@johnstone:2001] considered the test $$\label{eq:3.identity}
H_0:\;\Sigma_n=I_m;$$ and [@liu:2008], [@schott:2005], [@srivastava:2005] and [@jiang:2004] studied the problem of testing for complete independence $$\label{eq:3.sphericity}
H_0:\;R_n=I_m.$$ Their testing procedures are all based on the critical assumption that the entries of the data matrix ${\boldsymbol{X}}_n$ are i.i.d., while the hypotheses themselves only require the entries of $(X_{n,1,i})_{1\leq
i \leq m}$ to be uncorrelated. Evidently, we can use $M_n$ in (\[eq:max\_cov\]) to test (\[eq:3.sphericity\]), and we only require the uncorrelatedness for the validity of the limiting distribution established in Theorem \[thm:max\_cov\], as long as the mild conditions of the theorem are satisfied. On the other hand, we can also take the sample variances into consideration, and use the following test statistic $$M_n' = \max_{1 \leq i\leq j \leq m} \frac{|\hat\sigma_{n,i,j}-\sigma_{n,i,j}|}{\sqrt{\hat\tau_{n,i,j}}}.$$ to test the identity hypothesis (\[eq:3.identity\]), where $\sigma_{n,i,j}=I\{i=j\}$. It is not difficult to verify that $M_n'$ has the same asymptotic distribution as $M_n$ under the same conditions with the only difference being that we now have to take sample variances into account as well, namely, the index set $\mathcal{I}_n$ in Section \[sec:main\] is redefined as $\mathcal{I}_n = \{(i,j):\,1\leq i\leq j\leq m\}$. Clearly, we can also use $M_n'$ to test $H_0:\;\Sigma_n=\Sigma^0$ for some known covariance matrix $\Sigma^0$.
By checking the proof of Theorem \[thm:max\_cov\], it can be seen that if instead of taking the maximum over the set $\mathcal{I}_n=\{(i,j):\,1\leq i<j \leq m\}$, we only take the maximum over some subset $A_n\subset\mathcal{I}_n$ whose cardinality $|A_n|$ converges to infinity, then the maximum also has the Gumbel type convergence with normalization constants which are functions of the cardinality of the set $A_n$. Based on this observation, we are able to consider three more testing problems.
Test for stationarity
---------------------
Suppose we want to test whether the population is a stationary time series. Under the null hypothesis, each row of the data matrix ${\boldsymbol{X}}_n$ is distributed as a stationary process $(X_i)_{1\leq i \leq
m}$. Let $\gamma_l=\operatorname{Cov}(X_0,X_l)$ be the autocovariance at lag $l$. In principle, we can use the following test statistic $$\tilde T_n = \max_{1 \leq i\leq j \leq m} \frac{|\hat\sigma_{n,i,j}-\gamma_{i-j}|}{\sqrt{\hat\tau_{n,i,j}}}.$$ The problem is that the $\gamma_l$ are unknown. Fortunately, they can be estimated, and with higher accuracy than the individual sample covariances, by $$\hat{\gamma}_{n,l} = \frac{1}{nm}\sum_{k=1}^n\sum_{i=|l|+1}^m (X_{n,k,i-|l|}-\hat\mu_n)(X_{n,k,i}-\hat\mu_n),$$ where $\hat\mu_n = (1/nm) \sum_{k=1}^n\sum_{i=1}^m X_{n,k,i}$, and we are led to the test statistic $$T_n = \max_{1 \leq i\leq j \leq m} \frac{|\hat\sigma_{n,i,j}-\hat\gamma_{n,i-j}|}{\sqrt{\hat\tau_{n,i,j}}}.$$ Using arguments similar to those in the proof of Theorem 2 of [@wu:2011a], under suitable conditions, we have $$\max_{0 \leq l \leq m-1} |\hat\gamma_{n,l}-\gamma_l| = O_P(\sqrt{\log m/nm}).$$ Therefore, the limiting distribution for $M_n$ in Theorem \[thm:max\_cov\] also holds for $T_n$.
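In code, the pooled autocovariance estimates and the statistic $T_n$ might be computed as follows. This is a hedged NumPy sketch: the function name is ours, and the diagonal-averaging form of $\hat\gamma_{n,l}$ used below is an asymptotically equivalent variant of the display above.

```python
import numpy as np

def stationarity_stat(X):
    """Studentized maximum deviation of the sample covariances from
    pooled autocovariance estimates -- a sketch of T_n above."""
    n, m = X.shape
    Xc = X - X.mean()                        # pooled mean over all entries
    S = Xc.T @ Xc / n                        # sample covariances
    prod = Xc[:, :, None] * Xc[:, None, :]
    tau = ((prod - S) ** 2).mean(axis=0)     # variance estimates tau_hat
    # pooled autocovariance at lag l: average of the l-th diagonal of S
    # (an asymptotically equivalent variant of gamma_hat_{n,l} above)
    gamma = np.array([np.diagonal(S, offset=l).mean() for l in range(m)])
    lags = np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
    T = np.abs(S - gamma[lags]) / np.sqrt(tau)
    return T[np.triu_indices(m)].max()
```

Because the centering uses the pooled mean $\hat\mu_n$, the statistic is exactly invariant to adding a constant to all entries of the data matrix.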
Test for bandedness
-------------------
In time series and longitudinal data analysis, it can be of interest to test whether $\Sigma_m$ has the banded structure. The hypothesis to be tested is $$\label{eq:3.bandedness}
H_0:\; \sigma_{n,i,j}=0 \hbox{ if } |i-j|>B,$$ where $B=B_n$ may depend on $n$. [@cai:2011a] studied this problem under the assumption that each row of the data matrix ${\boldsymbol{X}}_n$ is a Gaussian random vector. They proposed to use the maximum sample correlation outside the band $$\tilde T_n = \max_{|i-j|>B} \frac{\hat\sigma_{n,i,j}}{\sqrt{\hat\sigma_{n,i,i}\hat\sigma_{n,j,j}}}$$ as the test statistic, and proved that $\tilde T_n$ also has Gumbel-type convergence provided that $B_n=o(m)$ and several other technical conditions hold.
Clearly, our Theorem \[thm:max\_cov\] can be employed to test (\[eq:3.bandedness\]). If all the conditions of the theorem are satisfied, the test statistic $$T_n = \max_{|i-j|>B_n} \frac{|\hat\sigma_{n,i,j}|}{\sqrt{\hat\tau_{n,i,j}}}$$ has the same asymptotic distribution as $M_n$ as long as $B_n=o(m)$. Our theory does not require the normality assumption.
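The corresponding computation is a small variation on the previous sketches: take the maximum studentized sample covariance only over the entries outside the band (function name ours):

```python
import numpy as np

def bandedness_stat(X, B):
    """Maximum studentized sample covariance over |i-j| > B;
    a sketch of the statistic T_n for testing bandedness."""
    n, m = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n
    prod = Xc[:, :, None] * Xc[:, None, :]
    tau = ((prod - S) ** 2).mean(axis=0)
    T = np.abs(S) / np.sqrt(tau)
    outside = np.abs(np.subtract.outer(np.arange(m), np.arange(m))) > B
    return T[outside].max()
```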
Assess the tapering procedure
-----------------------------
Banding and tapering are commonly used regularization procedures in high dimensional covariance matrix estimation. Convergence rates were first obtained by [@bickel:2008a], and later on improved by [@cai:2010]. Let us introduce a weaker version of the latter result. Suppose each row of ${\boldsymbol{X}}_n$ is distributed as the random vector $X=(X_i)_{1\leq i \leq m}$ with mean $\mu$ and covariance matrix $\Sigma=(\sigma_{ij})$. Let $K_0,K$ and $t$ be positive constants, and $\mathscr{C}_{\eta}(K_0, K, t)$ be the class of $m$-dimensional distributions which satisfy the following conditions $$\begin{aligned}
& \max_{|i-j|=k} |\sigma_{ij}| \leq Kk^{-(1+\eta)} \quad\hbox{for all } k; \label{eq:3.decay} \\
& \lambda_{\max}(\Sigma) \leq K_0; \cr
& P\left[|v^\top(X-\mu)|>x\right] \leq e^{-tx^2/2} \quad \hbox{for all $x>0$ and $\|v\|=1$}; \nonumber\end{aligned}$$ where $\lambda_{\max}(\Sigma)$ is the largest eigenvalue of $\Sigma$. For a given even integer $1\leq B \leq m$, define the tapered estimate of the covariance matrix $\Sigma$ $$\begin{aligned}
\hat\Sigma_{n,B_n} = \left(w_{ij}\hat\sigma_{n,i,j}\right),\end{aligned}$$ where the weights correspond to a flat top kernel and are given by $$\begin{aligned}
w_{ij} = \left\{
\begin{array}{ll}
1, & \hbox{when } |i-j| \leq B_n/2, \\
2-2|i-j|/B_n, & \hbox{when } B_n/2<|i-j|\leq B_n, \\
0, & \hbox{otherwise}.
\end{array}\right.\end{aligned}$$
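The flat-top weights and the tapered estimate vectorize directly; a minimal NumPy sketch (function names ours):

```python
import numpy as np

def taper_weights(m, B):
    """Flat-top taper weights w_ij from the display above; B should be even."""
    d = np.abs(np.subtract.outer(np.arange(m), np.arange(m)))  # |i - j|
    # 1 on |i-j| <= B/2, linear decay on B/2 < |i-j| <= B, 0 beyond
    return np.clip(2.0 - 2.0 * d / B, 0.0, 1.0)

def tapered_cov(X, B):
    """Tapered covariance estimate: elementwise product of the weights
    with the sample covariance matrix."""
    n, m = X.shape
    Xc = X - X.mean(axis=0)
    return taper_weights(m, B) * (Xc.T @ Xc / n)
```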
\[thm:cai\] If $m \geq n^{1/(2\eta+1)}$, $\log m = o(n)$ and $B_n=n^{1/(2\eta+1)}$, then there exists a constant $C>0$ such that $$\begin{aligned}
\sup_{\mathscr{C}_\eta} {{\mathbb{E}}}\left[{\lambda({\hat{\Sigma}_{n,B_n}-\Sigma})}\right]^2
\leq Cn^{-2\eta/(2\eta+1)} + C\frac{\log m}{n}.
\end{aligned}$$
We see that it is the parameter $\eta$ that determines the convergence rate under the operator norm. After such a tapering procedure has been applied, it is important to ask whether it is appropriate, and in particular, whether (\[eq:3.decay\]) is satisfied. We propose to use $$\begin{aligned}
T_n = \max_{|i-j|>B_n} \frac{|\hat\sigma_{n,i,j}|}{\sqrt{\hat\tau_{n,i,j}}}\end{aligned}$$ as the test statistic. According to the observation made at the beginning of Section \[sec:cov\_structure\], if the conditions of Theorem \[thm:max\_cov\] are satisfied, then $$\begin{aligned}
T_n'= \max_{|i-j|>B_n} \frac{|\hat\sigma_{n,i,j}-\sigma_{i,j}|}{\sqrt{\hat\tau_{n,i,j}}}\end{aligned}$$ has the same limiting law as $M_n$. On the other hand, (\[eq:3.decay\]) implies that $$\begin{aligned}
\max_{|i-j|>B_n} |\sigma_{i,j}| = O\left(n^{-(1+\eta)/(2\eta+1)}\right),\end{aligned}$$ so $T_n$ has the same limiting distribution as $T_n'$ if we further assume $\log m = o\left(n^{2/(4\eta+2)}\right)$.
Proof {#sec:proof}
=====
The proofs of Theorem \[thm:max\_cov\] under ($\mathrm{A4}$) and ($\mathrm{A4'}$) are very similar, and they share a common Poisson approximation step, which we will formulate in Section \[sec:max\_mean\] in a more general context, where the limiting distribution of the maximum of sample means is obtained. Since the proof under ($\mathrm{A4'}$) is more involved, we provide the detailed proof under this assumption in Section \[sec:a4’\], and point out in Section \[sec:a4\] how it can be adapted to give a proof under ($\mathrm{A4}$).
Maximum of Sample Means: An Intermediate Step {#sec:max_mean}
---------------------------------------------
In this section we provide a general result on the maximum of sample means. Let ${\boldsymbol{Y}}_n=(Y_{n,k,i})_{1\leq k \leq n,\,i\in\mathcal{I}_n}$ be a data matrix whose $n$ rows are independent and identically distributed, and whose entries have mean zero and variance one, where $\mathcal{I}_n$ is an index set with cardinality $|\mathcal{I}_n|=s_n$. For each $i \in \mathcal{I}_n$, let $y_i$ be the $i$-th column of ${\boldsymbol{Y}}_n$, $\bar y_i = (1/n)\sum_{k=1}^n
Y_{n,k,i}$. Define $$\label{eq:max_mean}
W_n = \max_{i \in \mathcal{I}_n} {|\bar y_i|}.$$ Let $\Sigma_n$ be the covariance matrix of the $s_n$-dimensional random vector $(Y_{n,1,i})_{i \in \mathcal{I}_n}$.
\[thm:max\_mean\] Assume $\Sigma_n$ satisfies either (B1) or (B2) of Section \[sec:normal\_comparison\] and $\log s_n
=o(n^{1/3})$. Suppose there is a constant $C>0$ such that $Y_{n,k,i}
\in \mathscr{B}(1,Ct_n)$ for each $1\leq k \leq
n,\;i\in\mathcal{I}_n$, with $$t_n = \frac{\sqrt{n}\delta_n}{(\log s_n)^{3/2}}, $$ where $(\delta_n)$ is a sequence of positive numbers such that $\delta_n=o(1)$ and $(\log s_n)^3/n=o(\delta_n)$, and the definition of the collection $\mathscr{B}(d,\tau)$ is given in (\[eq:bernstein\]). Then $$\label{eq:max_mean_convergence}
\lim_{n\rightarrow\infty}P\left(nW_n^2 - 2\log s_n + \log (\log s_n) + \log \pi \leq z \right)
=\exp\left(-e^{-z/2}\right).$$
For each $z\in{{\mathbb{R}}}$, let $z_n=a_{2s_n}z/2 + b_{2s_n}$. Let $(Z_{n,i})_{i\in\mathcal{I}_n}$ be a mean zero normal random vector with covariance matrix $\Sigma_n$. For any subset $A=\{i_1,i_2,\ldots,i_d\} \subset \mathcal{I}_n$, let $y_A=\sqrt{n}(\bar y_{i_1}, \bar y_{i_2}, \ldots, \bar
y_{i_d})^\top$ and $Z_{A}=(Z_{i_1}, Z_{i_2}, \ldots, Z_{i_d})$. By Lemma \[thm:zaitsev\], we have for $\theta_n =
\delta_n^{1/2}/\sqrt{\log s_n}$ that $$\begin{aligned}
P\left(|y_{A}|_\bullet > z_n\right) & \leq P(|Z_A|_{\bullet}>z_n - \theta_n)
+ C_d\exp\left\{-\frac{\theta_n}{C_d \delta_n(\log s_n)^{-3/2}}\right\} \\
& \leq P(|Z_A|_{\bullet}>z_n - \theta_n) + C_d\exp\left\{-(\log s_n)\delta_n^{-1/2}\right\}
\end{aligned}$$ Therefore, $$\begin{aligned}
\sum_{A \subset \mathcal{I}_n, |A|=d} & P\left(|y_{A}|_\bullet > z_n\right) \\
& \leq \sum_{A \subset \mathcal{I}_n, |A|=d} P(|Z_A|_{\bullet}>z_n - \theta_n)
+ C_ds_n^d \exp\left\{-(\log s_n)\delta_n^{-1/2}\right\}.
\end{aligned}$$ Similarly, we have $$\begin{aligned}
\sum_{A \subset \mathcal{I}_n, |A|=d} & P\left(|y_{A}|_\bullet > z_n\right) \\
& \geq \sum_{A \subset \mathcal{I}_n, |A|=d} P(|Z_A|_{\bullet}>z_n + \theta_n)
- C_ds_n^d \exp\left\{-(\log s_n)\delta_n^{-1/2}\right\}.
\end{aligned}$$ Since $(z_n\pm\theta_n)^2 = 2\log s_n - \log (\log s_n) - \log \pi +
z + o(1)$, by Lemma \[thm:normal\_comparison\], we know $$\begin{aligned}
\lim_{n\rightarrow\infty} \sum_{A \subset \mathcal{I}_n, |A|=d} P(|Z_A|_{\bullet}>z_n \pm \theta_n)
= \frac{e^{-dz/2}}{d\,!},
\end{aligned}$$ and hence $$\begin{aligned}
\lim_{n\rightarrow\infty}\sum_{A \subset \mathcal{I}_n, |A|=d} P\left(|y_{A}|_\bullet > z_n\right)
= \frac{e^{-dz/2}}{d\,!}.
\end{aligned}$$ The proof is complete in view of Lemma \[thm:poisson\].
Proof under ($\mathrm{A4'}$) {#sec:a4'}
----------------------------
We divide the proof into three steps. The first one is a truncation step, which will make the Gaussian approximation result Lemma \[thm:zaitsev\] and the Bernstein inequality applicable, so that we can prove Theorem \[thm:max\_cov\] under the assumption that all the involved mean and variance parameters are known. In the next two steps we show that plugging in estimated mean and variance parameters does not change the limiting distribution.
####
For notational simplicity we let $q=p/(4+2p)$. Define $$\begin{aligned}
\label{eq:3.truncation}
\tilde X_{n,k,i} = X_{n,k,i}I\left\{|X_{n,k,i}|\leq n^{1/(4+2p)}\right\},\end{aligned}$$ and define $\tilde M_n$ similarly as $M_n$ with $X_{n,k,i}$ being replaced by its truncated version $\tilde X_{n,k,i}$. Since $\log m =
o (n^{q})$, we have $$\begin{aligned}
P\left(\tilde M_n \neq M_n\right) & \leq \sum_{k=1}^n\sum_{i=1}^m
P\left[|X_{n,k,i}| > n^{1/(4+2p)}\right] \\
& \leq nm \mathcal{K}_n(t,p) \exp\left\{-t n^{p/(4+2p)}\right\} \\
& = \mathcal{K}_n(t,p) \exp\left\{-tn^{q} + \log m + \log n\right\}
= o(1).\end{aligned}$$ Therefore, in the rest of the proof, it suffices to consider $\tilde
X_{n,k,i}$. For notational simplicity, we still use $\tilde X_{n,k,i}$ to denote its centered version with mean zero.
Define $\tilde \sigma_{n,i,j}={{\mathbb{E}}}\left(\tilde X_{n,1,i}\tilde
X_{n,1,j}\right)$, and $\tilde \tau_{n,i,j} = \operatorname{Var}\left(\tilde
X_{n,1,i}\tilde X_{n,1,j}\right)$. Set $$\begin{aligned}
M_{n,1} & = \max_{1\leq i<j \leq m} \frac{1}{\sqrt{\tilde \tau_{n,i,j}}}
\left|\frac{1}{n} \sum_{k=1}^n \tilde X_{n,k,i}\tilde X_{n,k,j} -\tilde \sigma_{n,i,j} \right|; \\
M_{n,2} & = \max_{1\leq i<j \leq m} \frac{1}{\sqrt{\tilde \tau_{n,i,j}}}
\left|\frac{1}{n} \sum_{k=1}^n \tilde X_{n,k,i}\tilde X_{n,k,j} - \sigma_{n,i,j} \right|. \\\end{aligned}$$ Elementary calculations show that $$\begin{aligned}
\label{eq:3.2}
\max_{1\leq i \leq j \leq m} |\tilde \sigma_{n,i,j} - \sigma_{n,i,j}| & \leq C \exp \left\{-tn^{q}/2
\right\},\quad\hbox{and} \\ \label{eq:3.3}
\max_{\alpha, \beta \in \mathcal{I}_n} \left|\operatorname{Cov}(\tilde X_{n,\alpha}, \tilde X_{n,\beta} ) -
\operatorname{Cov}(X_{n,\alpha}, X_{n,\beta})\right| & \leq C \exp \left\{-tn^{q}/2
\right\}.\end{aligned}$$ By (\[eq:3.3\]), we know the covariance matrix of $(\tilde
X_{n,\alpha})_{\alpha\in\mathcal{I}_n}$ satisfies either (B1) or (B2) if $\Sigma_n$ satisfies (B1) or (B2) correspondingly. On the other hand, we have by elementary calculation that there exists a constant $C_{p}>0$ such that $$\begin{aligned}
\limsup_{n\rightarrow\infty}\max_{\alpha \in \mathcal{I}_n} {{\mathbb{E}}}\exp\{C_p t |\tilde X_{n,\alpha}|^{p/2}\} < \infty.\end{aligned}$$ It follows that when $0<p<2$, for each integer $r \geq 3$ $$\begin{aligned}
{{\mathbb{E}}}|\tilde X_{n,\alpha}|^{r} & \leq {{\mathbb{E}}}|\tilde X_{n,\alpha}|^{rp/2} \cdot \left(4 n^{2/(4+2p)}\right)^{r(1-p/2)} \\
& \leq \left(4 n^{2/(4+2p)}\right)^{r(1-p/2)} r! (C_pt)^{-r} {{\mathbb{E}}}\exp\{C_p t |X_{n,\alpha}|^{p/2}\}.\end{aligned}$$ Therefore, $$\begin{aligned}
{{\mathbb{E}}}_0\tilde X_{n,\alpha} \in \mathscr{B}\left[1,C\frac{\sqrt{n}}{n^{2p/(4+2p)}}\right].\end{aligned}$$ When $2 \leq p \leq 4$, it is easily seen that ${{\mathbb{E}}}_0\tilde X_{n,\alpha}
\in \mathscr{B}(1,C)$. Since $\log m = o (n^q)$, we know all the conditions of Lemma \[thm:max\_mean\] are satisfied, and hence $$\label{eq:3.4}
\lim_{n\rightarrow\infty}P\left(nM_{n,1}^2 - 4\log m + \log(\log m) + \log(8\pi) \leq y\right)
= \exp\left(-e^{-y/2}\right).$$ Combining (\[eq:3.2\]) and (\[eq:3.3\]), we know the preceding equation (\[eq:3.4\]) also holds with $M_{n,1}$ being replaced by $M_{n,2}$.
####
Set $\bar X_{n,i} = (1/n)\sum_{k=1}^n \tilde X_{n,k,i}$. Define $$\begin{aligned}
M_{n,3} = \max_{1\leq i<j \leq m} \frac{1}{\sqrt{\tilde \tau_{n,i,j}}}
\left|\frac{1}{n} \sum_{k=1}^n (\tilde X_{n,k,i}-\bar X_{n,i})(\tilde X_{n,k,j}-\bar X_{n,j}) - \sigma_{n,i,j} \right|.\end{aligned}$$ In this step we show that (\[eq:3.4\]) also holds for $M_{n,3}$. Observe that $$\begin{aligned}
\left|M_{n,3} - M_{n,2}\right| \leq \max_{1\leq i<j \leq m} \frac{|\bar X_{n,i}\bar{X}_{n,j}|}{\sqrt{\tilde \tau_{n,i,j}}}
\leq \max_{1 \leq i \leq m}|\bar X_{n,i}|^2 \cdot \left(\min_{1 \leq i<j \leq m} \tilde \tau_{n,i,j}\right)^{-1/2}.\end{aligned}$$ Since each $\tilde X_{n,k,i}$ is bounded by $2n^{1/(4+2p)}$, by Bernstein’s inequality we have for any constant $K>0$, $$\begin{aligned}
\max_{1\leq i \leq m}P\left(|\bar X_{n,i}| > 2K\sqrt{\log m \over n}\right)
& \leq C \exp\left\{ -\frac{2K^2 n \log m}{C n + 2K \sqrt{n\log m}
\cdot 2n^{1/(4+2p)}} \right\} \\
& \leq C m^{-K^2/C},\end{aligned}$$ and hence $$\begin{aligned}
\label{eq:3.1}
\max_{1 \leq i \leq m}|\bar X_{n,i}| = O_P\left(\sqrt{\frac{\log m}{n}}\right),\end{aligned}$$ which together with (\[eq:3.3\]) implies that $$\begin{aligned}
\left|M_{n,3} - M_{n,2}\right| = O_P\left(\frac{\log m}{n}\right) = o_P\left(\sqrt{\frac{1}{n \log m}}\right).\end{aligned}$$ Therefore, (\[eq:3.4\]) also holds for $M_{n,3}$.
####
Denote by $\check{\sigma}_{n,i,j}$ the estimate of $\tilde\sigma_{n,i,j}$ $$\begin{aligned}
{\check\sigma_{n,i,j}} = \frac{1}{n} \sum_{k=1}^n (\tilde X_{n,k,i}-\bar X_{n,i})(\tilde X_{n,k,j}-\bar X_{n,j}).\end{aligned}$$ In the definition of $\tilde M_n$, $\tilde\tau_{n,i,j}$ is unknown, and is estimated by $$\begin{aligned}
{\check\tau_{n,i,j}} = \frac{1}{n}\sum_{k=1}^n
\left[(\tilde X_{n,k,i} - \bar X_{n,i}) (\tilde X_{n,k,j} - \bar X_{n,j}) - {\check\sigma_{n,i,j}}\right]^2.\end{aligned}$$ In this step we show that (\[eq:3.4\]) holds for $\tilde M_{n}$. Since $$\begin{aligned}
n\left|M_{n,3}^2-\tilde M_n^2\right| \leq nM_{n,3}^2 \cdot \max_{1\leq i<j\leq m}|1-\tilde\tau_{n,i,j}/\check\tau_{n,i,j}|,\end{aligned}$$ it suffices to show that $$\begin{aligned}
\label{eq:3.5}
\max_{1\leq i<j\leq m} \left|\check\tau_{n,i,j}-\tilde\tau_{n,i,j}\right| = o_P(1/\log m).\end{aligned}$$ Set $$\begin{aligned}
\check\tau_{n,i,j,1} & = \frac{1}{n}\sum_{k=1}^n
\left[(\tilde X_{n,k,i} - \bar X_{n,i}) (\tilde X_{n,k,j} - \bar X_{n,j}) - {\tilde\sigma_{n,i,j}}\right]^2 \\
\check\tau_{n,i,j,2} & = \frac{1}{n}\sum_{k=1}^n
\left(\tilde X_{n,k,i} \tilde X_{n,k,j} - {\tilde\sigma_{n,i,j}}\right)^2.\end{aligned}$$ Observe that $$\begin{aligned}
\check\tau_{n,i,j,1} - \check\tau_{n,i,j} = (\check\sigma_{n,i,j} - \tilde\sigma_{n,i,j})^2,\end{aligned}$$ which together with (\[eq:3.4\]) implies that $$\begin{aligned}
\label{eq:3.6}
\max_{1\leq i<j\leq m} \left|\check\tau_{n,i,j,1}-\check\tau_{n,i,j}\right| = O_P \left(\log m/n\right).\end{aligned}$$ Note that the $\tilde X_{n,k,i}$ are uniformly bounded according to the truncation (\[eq:3.truncation\]), so $$\begin{aligned}
\left(\tilde X_{n,k,i} \tilde X_{n,k,j} - {\tilde\sigma_{n,i,j}}\right)^2 \leq 64 n^{4/(4+2p)}.\end{aligned}$$ By Bernstein’s inequality, we have $$\begin{aligned}
\max_{1\leq i<j\leq m} P\left(|\check\tau_{n,i,j,2}-\tilde\tau_{n,i,j}| \geq 2n^{-q}\right)
& \leq \exp\left\{-\frac{2 n^{2(1-q)}}{Cn + 2n^{1-q} \cdot 128n^{4/(4+2p)}/3}\right\} \\
& \leq \exp\left(-n^q/100\right),\end{aligned}$$ and it follows that $$\begin{aligned}
\label{eq:3.7}
\max_{1\leq i<j\leq m} \left|\check\tau_{n,i,j,2}-\tilde\tau_{n,i,j}\right|
= O_P(n^{-q}).\end{aligned}$$ In view of (\[eq:3.6\]), (\[eq:3.7\]), and the assumption $\log
m=o(n^q)$, we know to show (\[eq:3.5\]), it remains to prove $$\begin{aligned}
\label{eq:3.8}
\max_{1\leq i<j\leq m} \left|\check\tau_{n,i,j,1}-\check\tau_{n,i,j,2}\right| = o_P(1/\log m).\end{aligned}$$ Elementary calculations show that $$\begin{aligned}
\max_{1\leq i<j\leq m} \left|\check\tau_{n,i,j,1}-\check\tau_{n,i,j,2}\right|
\leq 4h_{n,1}^2h_{n,2} + 3h_{n,1}^4 + 4h_{n,4}^{1/2}h_{n,2}^{1/2}h_{n,1}
+ 2h_{n,3}h_{n,1}^2,\end{aligned}$$ where $$\begin{aligned}
h_{n,1} & =\max_{1\leq i \leq m} |\bar X_{n,i}| \\
h_{n,2} & = \max_{1\leq i \leq m} \frac{1}{n}\sum_{k=1}^n \tilde X_{n,k,i}^2 \\
h_{n,3} & = \max_{1\leq i\leq j \leq m}
\left|\frac{1}{n}\sum_{k=1}^n \tilde X_{n,k,i} \tilde X_{n,k,j} - \tilde\sigma_{n,i,j}\right|\\
h_{n,4} & = \max_{1\leq i<j \leq m} \check\tau_{n,i,j,2}.\end{aligned}$$ By (\[eq:3.1\]), we know $h_{n,1}=O_P(\sqrt{\log m/n})$. By (\[eq:3.7\]) we have $h_{n,4}=O_P(1)$. Combining (\[eq:3.truncation\]) and Bernstein’s inequality, we can show that $$\begin{aligned}
h_{n,3}= O_P\left(\sqrt{{\log m}/{n}}\right).\end{aligned}$$ As an immediate consequence, we know $h_{n,2}=O_P(1)$. Therefore, $$\begin{aligned}
\max_{1\leq i<j\leq m} \left|\check\tau_{n,i,j,1}-\check\tau_{n,i,j,2}\right| = O_P\left(\sqrt{{\log m}/{n}}\right),\end{aligned}$$ and (\[eq:3.8\]) holds by using the assumption $\log m
=o(n^{q})=o(n^{1/3})$. The proof of Theorem \[thm:max\_cov\] under ($\mathrm{A4'}$) is now complete.
Proof under (A4) {#sec:a4}
----------------
We follow the proof in Section \[sec:a4’\], and point out the necessary modifications to make it work under (A4). Unless specified otherwise, all notations have the same meaning as in Section \[sec:a4’\]. For notational simplicity, we let $p=4(1+q)+\delta$.
####
We truncate $X_{n,k,i}$ by $$\begin{aligned}
\tilde X_{n,k,i} = X_{n,k,i}I\left\{|X_{n,k,i}|\leq n^{1/4}/\log n\right\},\end{aligned}$$ then $$\begin{aligned}
P\left(\tilde M_n \neq M_n\right) \leq nm \mathcal{M}_n(p) n^{-p/4}(\log n)^p
\leq C \mathcal{M}_n(p) n^{-\delta/4}(\log n)^p = o(1).\end{aligned}$$ Therefore, in the rest of the proof, it suffices to consider $\tilde
X_{n,k,i}$. For notational simplicity, we still use $\tilde X_{n,k,i}$ to denote its centered version with mean zero.
Elementary calculations show that $$\begin{aligned}
\label{eq:3.11}
\max_{1\leq i \leq j \leq m} |\tilde \sigma_{n,i,j} - \sigma_{n,i,j}| & \leq C n^{-(p-2)/4} (\log n)^{p-2},
\quad\hbox{and} \\ \label{eq:3.12}
\max_{\alpha, \beta \in \mathcal{I}_n} \left|\operatorname{Cov}(\tilde X_{n,\alpha}, \tilde X_{n,\beta} ) -
\operatorname{Cov}(X_{n,\alpha}, X_{n,\beta})\right| & \leq C n^{-(p-4)/4} (\log n)^{p-4}.\end{aligned}$$ By (\[eq:3.12\]), we know the covariance matrix of $(\tilde
X_{n,\alpha})_{\alpha\in\mathcal{I}_n}$ satisfies either (B1) or (B2) if $\Sigma_n$ satisfies (B1) or (B2) correspondingly. Since $${{\mathbb{E}}}_0\tilde X_{n,\alpha} \in \mathscr{B}\left[1,8\sqrt{n}/(\log
n)^2\right],$$ we know all the conditions of Lemma \[thm:max\_mean\] are satisfied, and hence (\[eq:3.4\]) holds for $M_{n,1}$. Combining (\[eq:3.11\]) and (\[eq:3.12\]), we know (\[eq:3.4\]) also holds if we replace $M_{n,1}$ by $M_{n,2}$.
####
Using Bernstein’s inequality, we can show $$\begin{aligned}
\max_{1 \leq i \leq m}|\bar X_{n,i}| = O_P\left(\sqrt{\frac{\log n}{n}}\right),\end{aligned}$$ which implies that $$\begin{aligned}
\left|M_{n,3} - M_{n,2}\right| = O_P\left(\frac{\log n}{n}\right),\end{aligned}$$ and hence (\[eq:3.4\]) also holds for $M_{n,3}$.
####
It suffices to show that $$\begin{aligned}
\label{eq:3.13}
\max_{1\leq i<j\leq m} \left|\check\tau_{n,i,j}-\tilde\tau_{n,i,j}\right| = o_P(1/\log n).\end{aligned}$$ Using (\[eq:3.4\]), we know $$\begin{aligned}
\label{eq:3.15}
\max_{1\leq i<j\leq m} \left|\check\tau_{n,i,j,1}-\check\tau_{n,i,j}\right| = O_P \left(\log n/n\right).\end{aligned}$$ Note that $$\begin{aligned}
\left(\tilde X_{n,k,i} \tilde X_{n,k,j} - {\tilde\sigma_{n,i,j}}\right)^2 \leq 64 n/(\log n)^4.\end{aligned}$$ By Corollary 1.6 of [@nagaev:1979] (with $x=n/(\log n)^2$ and $y=n/[2(\log n)^3]$ in their inequality (1.22)), we have $$\begin{aligned}
\max_{1\leq i<j\leq m}
P\left(|\check\tau_{n,i,j,2}-\tilde\tau_{n,i,j}| \geq (\log
n)^{-2}\right) & \leq \left[\frac{Cn}{n(\log n)^{-2} \cdot [n(\log
n)^{-3}/2]^{q\wedge 1}}\right]^{\log n} \\
& \leq \left[\frac{C(\log n)^5}{n^{q\wedge 1}}\right]^{\log n},\end{aligned}$$ and it follows that $$\begin{aligned}
\label{eq:3.16}
\max_{1\leq i<j\leq m} \left|\check\tau_{n,i,j,2}
-\tilde\tau_{n,i,j}\right| = O_P\left[(\log n)^{-2}\right].\end{aligned}$$ In view of (\[eq:3.15\]), (\[eq:3.16\]), we know to show (\[eq:3.13\]), it remains to prove $$\begin{aligned}
\label{eq:3.17}
\max_{1\leq i<j\leq m} \left|\check\tau_{n,i,j,1}-\check\tau_{n,i,j,2}\right| = o_P(1/\log n).\end{aligned}$$ We know $h_{n,1}=O_P(\sqrt{\log n/n})$ and $h_{n,4}=O_P(1)$. Using Bernstein’s inequality, we can show that $$\begin{aligned}
h_{n,3}= O_P\left(\sqrt{{\log n}/{n}}\right),\end{aligned}$$ and it follows that $h_{n,2}=O_P(1)$. Therefore, $$\begin{aligned}
\max_{1\leq i<j\leq m} \left|\check\tau_{n,i,j,1}-\check\tau_{n,i,j,2}\right| = O_P\left(\sqrt{{\log n}/{n}}\right),\end{aligned}$$ and (\[eq:3.17\]) holds. The proof of Theorem \[thm:max\_cov\] under ($\mathrm{A4}$) is now complete.
Some auxiliary results {#sec:auxiliary}
======================
In this section we provide a normal comparison principle, a Gaussian approximation result, and a Poisson convergence theorem.
A normal comparison principle {#sec:normal_comparison}
-----------------------------
Suppose for each $n\geq 1$, $(X_{n,i})_{i\in\mathcal{I}_n}$ is a Gaussian random vector whose entries have mean zero and variance one, where $\mathcal{I}_n$ is an index set with cardinality $|\mathcal{I}_n|=s_n$. Let $\Sigma_n=(r_{n,i,j})_{i,j\in\mathcal{I}_n}$ be the covariance matrix of $(X_{n,i})_{i\in\mathcal{I}_n}$. Assume that $s_n\rightarrow\infty$ as $n\rightarrow\infty$.
We impose either of the following two conditions. $$ \begin{aligned}
\hbox{({\bf B1}) } & \hbox{For any sequence $(b_n)$ such that $b_n\rightarrow\infty$, }
\gamma(n,b_n)= o\left({1}/{\log b_n}\right);\\
& \hbox{and } \limsup_{n\rightarrow\infty}\gamma_n<1. \\
\hbox{({\bf B2}) } & \hbox{For any sequence $(b_n)$ such that $b_n\rightarrow\infty$, }
\gamma(n,b_n)=o(1);\\
& \sum_{i\neq j \in \mathcal{I}_n} r_{n,i,j}^2=O\left(s_n^{2-\delta}\right)
\hbox{ for some } \delta>0; \hbox{ and }
\limsup_{n\rightarrow\infty}\gamma_n<1.
\end{aligned}$$ where $$\begin{aligned}
& \gamma(n,b_n):=\sup_{i\in\mathcal{I}_n}
\sup_{\mathcal{A}\subset\mathcal{I}_n,|\mathcal{A}|=b_n}
\inf_{j\in \mathcal{A}}
\left|r_{n,i,j}\right|\\
& \hbox{and} \quad
\gamma_n:=\sup_{i,j\in\mathcal{I}_n;\; i\neq j}|r_{n,i,j}|.\end{aligned}$$
\[thm:normal\_comparison\] Assume either (B1) or (B2). For a positive real number $z_n$, define $$A_{n,i}'=\{|X_{n,i}| > z_n\} \quad\hbox{and}\quad
Q_{n,d}' = \sum_{\mathcal{A}\subset\mathcal{I}_n,|\mathcal{A}|=d}
P\left(\bigcap_{i\in\mathcal{A}} A_{n,i}'\right).$$ If $z_n$ satisfies $z_n^2 = 2\log s_n - \log\log s_n -\log
\pi + 2z + o(1)$, then for all $d \geq 1$, $$\lim_{n \to \infty} Q_{n,d}' = \frac{e^{-dz}}{d\,!}.$$
Lemma \[thm:normal\_comparison\] is a refined version of Lemma 20 in [@wu:2011a], so we omit the proof and put the details in a supplementary file.
\[rk:empirical\] The conditions imposed on $\gamma(n,b_n)$ seem a little involved. We have the following equivalent versions. Define $$\begin{aligned}
G_n(t) = \max_{i \in \mathcal{I}_n} \sum_{j\in\mathcal{I}_n}
I\{|r_{n,i,j}|>t\}.
\end{aligned}$$ Then (i) $\gamma(n,b_n)=o(1)$ for any sequence $b_n \to \infty$ if and only if the sequence $[G_n(t)]_{n\geq 1}$ is bounded for all $t>0$; and (ii) $\gamma(n,b_n)(\log b_n)=o(1)$ for any sequence $b_n \to \infty$ if and only if $G_n(t_n) = \exp \{o(1/t_n)\}$ for any positive sequence $(t_n)$ converging to zero.
A Gaussian approximation result
-------------------------------
For a positive integer $d$, let $\mathfrak{B}_d$ be the Borel $\sigma$-field on the Euclidean space $\mathbb{R}^d$. For two probability measures $P$ and $Q$ on $\left(\mathbb{R}^d,
\mathfrak{B}_d\right)$ and $\lambda>0$, define the quantity $$ \pi(P,Q;\lambda) =
\sup_{A\in\mathfrak{B}_d}\left\{\max \left[P(A) - Q\left(A^\lambda\right),Q(A) - P\left(A^\lambda\right)\right]\right\},$$ where $A^{\lambda}$ is the $\lambda$-neighborhood of $A$ $$A^\lambda := \left\{x \in \mathbb{R}^d:\;\inf_{y\in A}|x-y|<\lambda\right\}.$$ For $\tau>0$, let $\mathscr{B}(d,\tau)$ be the collection of $d$-dimensional random variables which satisfy the multivariate analogue of the Bernstein’s condition. Denote by $(x,y)$ the inner product of two vectors $x$ and $y$. $$\label{eq:bernstein}
\begin{aligned}
\mathscr{B}(d,\tau)=& \left\{
\xi \hbox{ is a random variable}:\;{{\mathbb{E}}}\xi=0, \hbox{ and }
\phantom{\langle\xi,t\rangle^2}\right.\\
& \left| {{\mathbb{E}}}\left[(\xi,t)^2(\xi,u)^{m-2}\right] \right|
\leq \frac{1}{2}m!\tau^{m-2}\|u\|^{m-2}{{\mathbb{E}}}\left[(\xi,t)^2\right] \\
& \left.\hbox{for every }
m=3,4,\ldots \hbox{ and for all } t,u\in\mathbb{R}^d
\right\}.
\end{aligned}$$ The following Lemma on the Gaussian approximation is taken from [@zaitsev:1987].
\[thm:zaitsev\] Let $\tau>0$, and $\xi_1,\xi_2,\ldots,\xi_n \in {{\mathbb{R}}}^d$ be independent random vectors such that $\xi_i\in\mathscr{B}(d,\tau)$ for $i=1,2,\ldots,n$. Let $S=\xi_1+\xi_2+\ldots+\xi_n$, and $\mathscr{L}(S)$ be the induced distribution on ${{\mathbb{R}}}^d$. Let $\Phi$ be the Gaussian distribution with the zero mean and the same covariance matrix as that of $S$. Then for all $\lambda>0$ $$\pi[\mathscr{L}(S),\Phi;\lambda]
\leq c_{1,d} \exp\left(-\frac{\lambda}{c_{2,d}\tau}\right),$$ where the constants $c_{j,d},\;j=1,2$ may be taken in the form $c_{j,d} = c_j d^{5/2}$.
Poisson approximation: moment method
------------------------------------
\[thm:poisson\] Suppose for each $n\geq 1$, $(A_{n,i})_{i\in\mathcal{I}_n}$ is a finite collection of events. Let $I_{A_{n,i}}$ be the indicator function of $A_{n,i}$, and $W_n=\sum_{i\in\cal I}I_{A_{n,i}}$. For each $d \geq 1$, define $$Q_{n,d} = \sum_{\mathcal{A}\subset\mathcal{I}_n,|\mathcal{A}|=d}
P\left(\bigcap_{i\in\mathcal{A}} A_{n,i}\right).$$ Suppose there exists a $\lambda>0$ such that $$\lim_{n\rightarrow\infty} Q_{n,d} = {\lambda^d}/{d\,!} \hbox{ for each } d\geq 1.$$ Then $$\lim_{n\rightarrow\infty} P(W_n = k) = \lambda^ke^{-\lambda}/k\,! \hbox{ for each } k\geq 0.$$
Observe that for each $d\geq 1$, the $d$-th factorial moment of $W_n$ is given by $${{\mathbb{E}}}\left[W_n(W_n-1)\cdots(W_n-d+1)\right] = d\,! \cdot Q_{n,d},$$ so Lemma \[thm:poisson\] is essentially the moment method. The proof is elementary, and we omit details.
---
abstract: 'Complete suppression of the native n-type Schottky barrier is demonstrated in Al/InGaAs(001) junctions grown by molecular-beam-epitaxy. This result was achieved by the insertion of Si bilayers at the metal-semiconductor interface allowing the realization of truly Ohmic non-alloyed contacts in low-doped and low-In content InGaAs/Si/Al junctions. It is shown that this technique is ideally suited for the fabrication of high-transparency superconductor-semiconductor junctions. To this end magnetotransport characterization of Al/Si/InGaAs low-n-doped single junctions below the Al critical temperature is presented. Our measurements show Andreev-reflection dominated transport corresponding to junction transparency close to the theoretical limit due to Fermi-velocity mismatch.'
address:
- 'Scuola Normale Superiore and INFM, I-56126 Pisa, Italy'
- 'Laboratorio Nazionale TASC-INFM, Area di Ricerca, Padriciano 99, I-34012 Trieste, Italy'
author:
- 'Silvano De Franceschi, Francesco Giazotto, and Fabio Beltram'
- 'Lucia Sorba, Marco Lazzarino, and Alfonso Franciosi$^{a)}$'
- 'To be published in Phyl. Mag. B'
title: 'Andreev reflection in engineered Al/Si/InGaAs(001) junctions '
---
In the last few years there has been an increasing interest in the study of semiconductor-superconductor (Sm-S) hybrid systems [@been; @klein; @lamb]. These allow the investigation of exotic coherent-transport effects and have great potential for device applications. The characteristic physical phenomenon driving electron transport at a S-Sm junction is Andreev reflection [@andreev]. In this process (originally observed in normal metal-superconductor junctions) an electron incident from the Sm side on the superconductor may be transmitted as a Cooper pair if a hole is retroreflected along the time-reversed path of the incoming particle. High junction transparency is a crucial property for the observation of Andreev-reflection dominated transport. Different techniques have been explored to meet this requirement including metal deposition immediately after As-decapping [@kast], Ar$^+$ back-sputtering [@nguy], and [*in situ*]{} metallization in the molecular-beam epitaxy (MBE) chamber [@akaz]. All these tests were performed in InAs-based Sm-S devices where the main transmittance-limiting factor is interface contamination. On the contrary, for semiconductor materials such as those grown on either GaAs or InP, the strongest limitation arises from the presence of a native Schottky barrier. In this case, in order to enhance junction transparency penetrating contacts [@gao; @williams] and heavily doped surface layers [@kast; @tabo] were used. Recently we have reported on a new technique [@silv], alternative to doping, to obtain Schottky-barrier-free Al/n-In$_x$Ga$_{1-x}$As(001) junctions ($x \agt 0.3$) by MBE growth. This is based on the inclusion of an ultrathin Si interface layer under As flux which changes the pinning position of the Fermi level at the metal-semiconductor junction and leads to the total suppression of the Schottky barrier.
In this work we present the behavior of Ohmic contacts realized in this way and furthermore demonstrate how this method can be successfully exploited to obtain high-transparency Sm-S hybrid junctions [@franz]. Notably, these are based on low-doped and low-In-content InGaAs alloys, which are ideal candidates for the implementation of ballistic-transport structures.
Al/n-In$_{0.38}$Ga$_{0.62}$As junctions incorporating Si interface layers were grown by MBE. Their schematic structure is shown in Fig. 1. The semiconductor portion consists of a 300-nm-thick GaAs buffer layer grown at 600 $^\circ$C on n-type GaAs(001) and Si-doped at $n \sim 10^{18}$ cm$^{-3}$ followed by a 2-$\mu$m-thick n-In$_{0.38}$Ga$_{0.62}$As layer grown at 500 $^\circ$C with an inhomogeneous doping profile. The top 1.5-$\mu$m-thick region was doped at $n=6.5 \cdot 10^{16}$ cm$^{-3}$, the bottom buffer region (0.5 $\mu$m thick) was heavily doped at $n \sim 10^{18}$ cm$^{-3}$. After In$_{0.38}$Ga$_{0.62}$As growth the substrate temperature was lowered to 300$^\circ$C and a Si atomic bilayer was deposited under As flux [@silv]. Al deposition ($\simeq 150$ nm) was carried out [*in situ*]{} at room temperature. During Al deposition the pressure in the MBE chamber was below $5 \cdot 10^{-10}$ Torr. Reference Al/n-In$_{0.38}$Ga$_{0.62}$As junctions were also grown with the same semiconductor part but without the Si interface layer.
In order to compare the current-voltage ($I$–$V$) behavior of Si-engineered and reference junctions, circular contacts were defined on the top surface with various diameters in the 75–150 $\mu$m range. Standard photolithographic techniques and wet chemical etching were used to this aim. Back contacting was provided for electrical characterization by metallizing the whole substrate bottom. $I$–$V$ characterization was performed in the 20–300 K temperature range using a closed-cycle cryostat equipped with microprobes. Typical room-temperature (dashed lines) and low-temperature (solid lines) $I$–$V$ characteristics for both Si-engineered and reference diodes are shown in Fig. 2.
The reference diode exhibits a marked rectifying behavior which is enhanced at low temperatures. We have measured the corresponding barrier height by different techniques: thermionic-emission $I$–$V$ measurements in the 270–300 K temperature range, and linear fit in the forward bias region of log($I$)–$V$ characteristics measured at $\sim 200$ K. These two approaches yielded barrier heights of $0.22 \pm 0.05$ eV and $0.23 \pm 0.02$ eV respectively. These values include corrections for image-charge and thermionic-field-emission effects [@S]. The quoted uncertainties reflect diode to diode fluctuations and uncertainties in the barrier height determination.
The engineered diode shows no rectifying behavior even at low temperatures (20 K in Fig. 2). Its $I$–$V$ characteristics bear no trace of a SB and are linear over the whole 20–300 K temperature range. Their slope is only weakly affected by temperature. To investigate the possible existence of a residual SB whose rectifying effect might be hidden by the series resistance arising from the InGaAs bulk and the back contact, we modeled the low-temperature $I$–$V$ behavior of the engineered diode in terms of a residual barrier height $\phi_n$ and a series resistance $R$ [@PS]. We were able to reproduce the experimental $I$–$V$ curves only with $\phi_n < 0.03$ eV. As will be apparent from what follows, this value represents only an upper limit for the barrier height.
Doping effects do not play any significant role in the barrier suppression. In order to verify this, we annealed the engineered diode at 420 $^\circ$C for 5 seconds. Following this we observed a marked rectifying behavior analogous to that of the reference sample. This result is in line with the findings reported in Ref. [@S6B] on the thermal stability of Si-engineered SBs in Al/GaAs junctions and reflects Si redistribution at the interface. Wear-out tests on engineered diodes were also performed in order to verify the persistence of the ohmic behavior against prolonged high-current stress. To this end we monitored the $I$–$V$ characteristics during 24 hours of continuous operation at current densities of 200 A/cm$^2$. No changes were detected.
In order to demonstrate the applicability of this technique to the realization of high transparency Sm-S hybrid devices, rectangular 100$\times$160 $\mu$m$^2$ Al/n-In$_{0.38}$Ga$_{0.62}$As junctions were patterned on the sample surface using standard photolithographic techniques and wet chemical etching. Two additional 100$\times$50 $\mu$m$^2$-wide and 200-nm-thick Au pads were electron-beam evaporated on top of every Al pattern in order to allow four-wire electrical measurements. Samples were mounted on non-magnetic dual-in-line sample holders, and 25-$\mu$m-thick gold wires were bonded to the gold pads. $I$–$V$ characterizations as a function of temperature ($T$) and static magnetic field ($H$) were performed in a $^3$He closed-cycle cryostat.
The critical temperature ($T_c$) of the Al film was 1.1 K (which corresponds to a gap $\Delta \approx 0.16$ meV). The normal-state resistance $R_N$ of our devices was 0.2 $\Omega$, including the series-resistance contribution ($\approx 0.1 \Omega$) of the semiconductor. At $H=0$ and below $T_c$, dc $I$–$V$ characteristics exhibited important non-linearities around zero bias that can be visualized by plotting the differential conductance ($G$) as a function of the applied bias ($V$). In Fig. 3(a) we show a typical set of $G$–$V$ curves obtained at different temperatures in the 0.33–1.03 K range. Notably, even at $T=0.33$ K, i.e. well below $T_c$, a high value of $G$ is observed at zero bias. At low temperature and bias (i.e., when the voltage drop across the junction is lower than $\Delta /e$ [@nota]), transport is dominated by Andreev reflection. The observation of such pronounced Andreev reflection demonstrates high junction transparency. The latter can be quantified in terms of a dimensionless parameter $Z$ according to the Blonder-Tinkham-Klapwijk (BTK) model [@btk; @z]. To analyze the data of Fig. 3(a) we followed the model by Chaudhuri and Bagwell [@chau], which is the three-dimensional generalization of the BTK model. For our S-Sm junction we found $Z \approx 1$ corresponding to a $\sim$50 % normal-state transmission coefficient. We note that without the aid of the Si-interface-layer technique, doping concentrations over two orders of magnitude greater than that employed here would be necessary to achieve comparable transmissivity (see e.g. Refs. [@kast; @gao; @tabo; @tabo2]). This drastic reduction in the impurity concentration is a very attractive feature for the fabrication of ballistic structures. It should also be noted that our reported $Z$ value is close to the intrinsic transmissivity limit related to the Fermi-velocity mismatch between Al and InGaAs [@BT].
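To illustrate the quoted relation between the barrier parameter $Z$ and junction transparency, the short Python sketch below (ours, not part of the original analysis) evaluates the normal-state transmission $\Gamma=(1+Z^2)^{-1}$ together with the zero-temperature, zero-bias NS/N conductance ratio $2(1+Z^2)/(1+2Z^2)^2$; the latter is the textbook BTK expression and should be treated here as an assumption rather than a result of the paper.

```python
def transparency(Z):
    # Normal-state transmission coefficient in the BTK model.
    return 1.0 / (1.0 + Z**2)

def zero_bias_ratio(Z):
    # Zero-temperature, zero-bias NS/N conductance ratio
    # (standard BTK expression, quoted as an assumption).
    return 2.0 * (1.0 + Z**2) / (1.0 + 2.0 * Z**2) ** 2

for Z in (0.0, 0.5, 1.0):
    print(Z, transparency(Z), zero_bias_ratio(Z))
```

For $Z=0$ the ratio reaches 2 (perfect Andreev doubling), while $Z\approx 1$ gives the $\sim 50\,\%$ transmission quoted in the text.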
We should also like to comment on the homogeneity of our junctions. By applying the BTK formalism, $Z\approx 1$ leads to a calculated value of the normal-state resistance ($R_N^{th}$) much smaller than the experimental value $R_N^{exp}$: $R_N^{th}/R_N^{exp}=0.003$. This would indicate that only a small fraction ($R_N^{th}/R_N^{exp}$) of the contact area has the high transparency and determines the transport properties of the junction, as already reported for different fabrication techniques [@gao; @van]. Values of $R_N^{th}/R_N^{exp}$ ranging from $\sim 10^{-4}$ to $\sim 10^{-2}$ can be found in the literature (see, e.g., Refs. [@kast; @gao; @van]). Such estimates, however, should be taken with much caution. Experimentally, no inhomogeneities were observed on the lateral length scale of our contacts and we did observe a high uniformity in the transport properties of all junctions studied.
The superconducting nature of the conductance dip for $|V|<\Delta/e$ is proved by its pronounced dependence on temperature and magnetic field. Figure 3(a) shows how the zero-bias differential-conductance dip observed at $T=0.33$ K progressively weakens for $T$ approaching $T_c$. This fact is consistent with the well-known temperature-induced suppression of the superconducting energy gap $\Delta$. Far from $V=0$ the conductance is only marginally affected by temperature as expected for a S-Sm junction when $|V|$ is significantly larger than $\Delta/e$ [@btk]. A small depression in the zero-bias conductance is still observed at $T \simeq T_c$. This, together with the slight asymmetry in the $G$–$V$ curves, can be linked to a residual barrier at the buried InGaAs/GaAs heterojunction.
In Fig. 3(b) we show how the conductance can be strongly modified by very weak magnetic fields ($H$). The $G$–$V$ curves shown in Fig. 3(b) were taken at $T=0.33$ K for different values of $H$ applied perpendicularly to the plane of the junction in the 0–5 mT range. The superconducting gap vanishes for $H$ approaching the critical field ($H_c$) of the Al film ($H_c \simeq 10$ mT at $T=0.33$ K). Consequently, the zero-bias conductance dip is less and less pronounced and at the same time shrinks with increasing magnetic field. The latter effect was not as noticeable in Fig. 3(a) owing to the temperature-induced broadening of the single-particle Fermi distribution function [@btk].
In conclusion, we have reported on Ohmic behavior and Andreev-reflection dominated transport in MBE-grown Si-engineered Al/n-In$_{0.38}$Ga$_{0.62}$As junctions. Transport properties were studied as a function of temperature and magnetic field and showed junction transmissivity close to the theoretical limit for the S-Sm combination. The present study demonstrates that the Si-interface-layer technique is a promising tool to obtain high-transparency S-Sm junctions involving InGaAs alloys with low In content and low doping concentration. This technique yields Schottky-barrier-free junctions without using InAs-based heterostructures and can be exploited in the most widespread MBE systems. It is particularly suitable for the realization of low-dimensional S-InGaAs hybrid systems grown on GaAs or InP substrates. We should finally like to stress that its application in principle is not limited to Al metallizations and other superconductors could be equivalently used. In fact, to date the most convincing interpretation of the silicon-assisted Schottky-barrier engineering is based upon the heterovalency-induced IV/III-V local interface dipole [@bin]. Within this description Schottky-barrier tuning is a metal-independent effect.
The present work was supported by INFM under the PAIS project Eterostrutture Ibride Semiconduttore-Superconduttore and the TUSBAR program. One of us (F. G.) would like to acknowledge Europa Metalli S.p.A. for financial support.
C. W. J. Beenakker, Rev. Mod. Phys. [**69**]{}, 731 (1997).
A. W. Kleinsasser and W. L. Gallagher, [*Superconducting Devices*]{}, edited by S. Ruggiero and D. Rudman (Academic, Boston, 1990), p. 325.
C. J. Lambert and R. Raimondi, J. Phys. Condens. Matter [**10**]{}, 901 (1998).
A. F. Andreev, Zh. Eksp. Teor. Fiz. [**46**]{}, 1823 (1964) \[Sov.Phys.–JETP [**19**]{}, 1228 (1964)\].
A. Kastalsky, A. W. Kleinsasser, L. H. Greene, R. Bhat, F. P. Milliken, and J. P. Harbison, Phys. Rev. Lett. [**67**]{}, 3026 (1991).
C. Nguyen, H. Kroemer, and E. L. Hu, Appl. Phys. Lett. [**65**]{}, 103 (1994).
T. Akazaki, J. Nitta, and H. Takayanagi, Appl. Phys. Lett. [**59**]{}, 2037 (1991).
J. R. Gao, J. P. Heida, B. J. van Wees, S. Bakker, and T. M. Klapwijk, Appl. Phys. Lett. [**63**]{}, 334 (1993).
A. M. Marsh, and D. A. Williams, J. Vac. Sci. Technol. A [**14**]{}, 2577 (1996).
R. Taboryski, T. Clausen, J. Bindslev Hansen, J. L. Skov, J. Kutchinsky, C.B. S[ø]{}rensen, and P. E. Lindelof, Appl. Phys. Lett. [**69**]{}, 656 (1996).
S. De Franceschi, F. Beltram, C. Marinelli, L. Sorba, M. Lazzarino, B. Müller, and A. Franciosi, Appl. Phys. Lett. [**72**]{}, 1996 (1998).
S. De Franceschi, F. Giazotto, F. Beltram, L. Sorba, M. Lazzarino, and A. Franciosi, Appl. Phys. Lett. [**73**]{}, 3890 (1998).
J. M. Shannon, Solid State Electron. [**19**]{}, 537 (1976).
F. A. Padovani and R. Stratton, Solid State Electron. [**9**]{}, 695 (1966).
The values used for $m^*$ and $\epsilon$ were obtained by linear interpolation between the corresponding InAs and GaAs parameters.
L. Sorba, S. Yildirim, M. Lazzarino, A. Franciosi, D. Chiola, and F. Beltram, Appl. Phys. Lett. [**69**]{}, 1927 (1996).
The voltage drop across the junction amounts to about half of the applied bias, due to the series-resistance contribution.
G. E. Blonder, M. Tinkham, and T. M. Klapwijk, Phys. Rev. B [**25**]{}, 4515 (1982).
In this model $Z$ is related to the normal-state transmission coefficient $\Gamma$ by $\Gamma=(1+Z^2)^{-1}$. This approach was developed in the context of ballistic systems, but has been widely applied to diffusive systems (like the present one) in order to gain an estimate of the junction transmissivity [@gao; @tabo; @van].
W. M. van Huffelen, T. M. Klapwijk, D. R. Heslinga, M. J. de Boer, and N. van der Post, Phys. Rev. B [**47**]{}, 5170 (1993).
S. Chaudhuri and P. F. Bagwell, Phys. Rev. B [**51**]{}, 16936 (1995).
J. Kutchinsky, R. Taboryski, T. Clausen, C. B. S[ø]{}rensen, A. Kristensen, P. E. Lindelof, J. Bindslev Hansen, C. Schelde Jacobsen, and J. L. Skov, Phys. Rev. Lett. [**78**]{}, 931 (1997).
G. E. Blonder and M. Tinkham, Phys. Rev. B [**27**]{}, 112 (1983).
C. Berthod, J. Bardi, N. Binggeli, A. Baldereschi, J. Vac. Sci. Technol. B [**14**]{}, 3000 (1996); C. Berthod, [*et al.*]{}, Phys. Rev. B [**57**]{}, 9757 (1998).
**FLOQUET THEORY FOR SECOND ORDER LINEAR\
HOMOGENEOUS DIFFERENCE EQUATIONS**
------------------------------------------------------------------------
Introduction and Preliminaries
==============================
The important role played by linear homogeneous difference equations in several problems of engineering and science is well known. However, whereas the expression of the solutions is widely known when the coefficients of the equations are constant, the same does not happen for variable coefficients, except for the simplest case of first order equations. In [@M97] a complete closed-form solution of a second order linear homogeneous difference equation with variable coefficients was presented. The solutions are then expressed solely in terms of the given coefficients.
We are here interested in the Floquet theory for second order linear homogeneous difference equations; that is, for difference equations whose coefficients are periodic sequences. The main question in this framework is to find the conditions under which the given equation has periodic solutions with the same period as the coefficients, see for instance [@A00]. The strategy we follow in this work is to extend the equivalence between second order equations with constant coefficients and Chebyshev equations to the case of periodic coefficients. Then, the characterization of the existence of periodic solutions can be reduced to the same question about Chebyshev equations. We remark that whereas the equivalence between a second order equation with periodic coefficients and a Chebyshev equation can be established, more or less easily, by induction, the determination of the parameter of the equivalent Chebyshev equation is more difficult since it involves a highly non-linear recurrence. In this work we solve this recurrence, obtaining a nice closed formula for this parameter in terms of the coefficients of the considered second order equation. As a straightforward consequence, we obtain the necessary and sufficient condition for the existence of periodic solutions.
Throughout the paper, $\KK$ denotes either $\RR$ or $\CC$, $\ZZ$ the set of integers, $\NN$ the set of nonnegative integers, $\KK^*=\KK\setminus \{0\}$ and $\NN^*=\NN\setminus\{0\}$. A sequence of elements of $\KK$ is a function $z\func{\ZZ}{\KK}$ and we denote by $\ell(\KK)$ the space of all sequences of elements of $\KK$ and by $\ell(\KK^*)\subset \ell(\KK)$ the subset of the sequences $z\in \ell(\KK)$ such that $z(k)\not=0$ for all $k\in \ZZ$. The null sequence is denoted by $\0$.
Given $z\in \ell(\KK)$ and $p\in \NN^*$, for any $m\in \ZZ$ we denote by $z_{p,m}\in \ell(\KK)$ the subsequence of $z$ defined as $$z_{p,m}(k)=z(kp+m),\hspace{.25cm}k\in \ZZ.$$ Clearly, any sequence $z\in \ell(\KK)$ is completely determined by the values of the sequences $z_{p,j}$, for $0\le j\le p-1$. In particular, $z_{1,0}=z$, whereas $z_{2,0}$ and $z_{2,1}$ are the subsequences of even or odd indexes, respectively. Moreover, the sequences $z_{1,m}$ are the [*shift subsequences*]{} of $z$, since $z_{1,m}(k)=z(k+m)$ for any $k\in \ZZ$. Notice that if we also allow $p=-1$, then $z_{-1,m}$ are the [*flipped shift subsequences*]{} of $z$, since $z_{-1,m}(k)=z(m-k)$ for any $k\in \ZZ$.
The sequence $z\in \ell(\KK)$ is called [*quasi-periodic with period $p\in \NN^*$ and ratio $r\in \KK^*$*]{} if it satisfies that $$z(p+k)=r\, z(k), \hspace{.25cm}k\in \ZZ,$$ which also implies that $z(kp+m)=r^kz(m)$ for any $k,m\in \ZZ$.
Clearly a sequence $z\in \ell(\KK)$ is periodic with period $p$ iff it is quasi-periodic with period $p$ and ratio $r=1$. The set of quasi-periodic sequences with period $p$ and ratio $r$ is denoted by $\ell(\KK;p,r)$ and we define $\ell(\KK^*;p,r)=\ell(\KK;p,r)\cap \ell(\KK^*)$. Then, $\ell(\KK;p,1)$ consists of the periodic sequences with period $p$, whereas $\ell(\KK;1,r)$ consists of the geometric sequences with common ratio $r$; that is, if $z\in \ell(\KK;1,r)$, then $z(k)=z(0)r^k$. In particular $\ell(\KK;1,1)$ consists of all constant sequences and it is identified with $\KK$.
In the sequel we omit the parameter $r$ when it equals $1$. Therefore, the space of periodic sequences with period $p$ is denoted simply by $\ell(\KK;p)$ and hence, $\ell(\KK;1)$ consists of the constant sequences.
If $z\in \ell(\KK;p,r)$ is not the null sequence, then $r=z(k_0)^{-1}z(k_0+p)$, where $k_0=\min\{k\in \NN: z(k)\not=0\}$. Therefore, if $z$ is a non-null quasi-periodic sequence of period $p$, then $z$ is determined by the $p+1$ values $z(j)$, $j=0,\ldots,p-1$ and $r$ or equivalently by the values $z(j)$, $j=0,\ldots,p$.
\[qp:cha\] Given $p\in \NN^*$ and $r\in \KK^*$, then $z\in \ell(\KK;p,r)$ iff $z_{p,m}\in \ell(\KK;1,r)$ for any $m\in \ZZ$. Moreover, $\ell(\KK;p,r)\subset \ell(\KK;np,r^n)$ for any $n\in \NN^*$.
Given three sequences $a,c\in \ell(\KK^*)$ and $b\in \ell(\KK)$, we can consider the irreducible homogeneous linear second order difference equation $$\label{equation}
a(k)z(k+1)-b(k)z(k)+c(k-1)z(k-1)=0,\hspace{.5cm}k\in \ZZ.$$ The sequences $a,b$ and $c$ are called the [*coefficients of the Equation* ]{} and any sequence $z\in \ell(\KK)$ satisfying the Identity is called a [*solution of the equation*]{}. It is well-known that for any $z_0,z_1\in \KK$ and any $m\in \ZZ$, there exists a unique solution of Equation satisfying $z(m)=z_0$ and $z(m+1)=z_1$. In addition, when $a,b,c\in \ell(\RR)$, then a solution of Equation satisfies that $z\in \ell(\RR)$ iff $z(m),z(m+1)\in \RR$ for some $m\in \ZZ$.
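For numerical experimentation it is convenient to generate solutions directly from the recurrence. The Python sketch below (our illustration, not part of the paper) propagates a solution forward from the initial values $z(0)$ and $z(1)$.

```python
def solve_forward(a, b, c, z0, z1, n):
    """Iterate a(k)z(k+1) - b(k)z(k) + c(k-1)z(k-1) = 0 for k = 1..n-1.

    a, b, c are callables k -> coefficient; returns [z(0), ..., z(n)].
    """
    z = [z0, z1]
    for k in range(1, n):
        z.append((b(k) * z[k] - c(k - 1) * z[k - 1]) / a(k))
    return z

# Constant coefficients a = c = 1, b = 2q with q = 1: the solution with
# z(0) = 1, z(1) = 2 is the Chebyshev sequence U_k(1) = k + 1.
z = solve_forward(lambda k: 1, lambda k: 2, lambda k: 1, 1, 2, 8)
print(z)  # z(k) = k + 1 for k = 0..8
```

The uniqueness statement above corresponds to the fact that the iteration is fully determined by the two initial values.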
The Equation has [*constant coefficients*]{} when $a,c\in \KK^*$ and $b\in \KK$. Linear difference equations with constant coefficients can be characterized as those satisfying that $z\in \ell(\KK)$ is a solution iff any shift of $z$ is also a solution. Moreover, $a=c$ iff, in addition, any flipped shift of a solution $z\in \ell(\KK)$ is also a solution. We conclude this section with a result about quasi-periodic solutions of equations with constant coefficients.
\[sol:per\] Given $a,c\in \KK^*$, $b\in \KK$, and $z\in \ell(\KK)$ a solution of the difference equation $$az(k+1)-bz(k)+cz(k-1)=0,\hspace{.5cm}k\in \ZZ,$$ then $z\in \ell(\KK;p,r)$ iff $z(p)=r\,z(0)$ and $z(p+1)=r\,z(1)$.
Chebyshev sequences
===================
Among equations with constant coefficients, the so-called [*Chebyshev equations*]{} play a central role in the study of this kind of difference equations, see [@ABD05 Theorem 3.1]. Given $q\in \KK$, the second order difference equation with constant coefficients $$\label{Chebyshev:equation}
z(k+1)-2qz(k)+z(k-1)=0,\hspace{.25cm} k\in \ZZ,$$ is called [*Chebyshev equation with parameter $q$*]{} and its solutions are called [*Chebyshev sequences with parameter $q$*]{}. Clearly any shift of any flipped shift of a Chebyshev sequence is also a Chebyshev sequence. Moreover, a non null Chebyshev sequence determines its parameter, since if $z\in \ell(\KK)$ is a Chebyshev sequence with parameters $q$ and $\hat q$, then $$2qz(k)=z(k+1)+z(k-1)=2\hat qz(k), \hspace{.25cm}k\in \ZZ,$$ and hence, $2(q-\hat q)z=\0$, which implies that $q=\hat q$.
Recall that a polynomial sequence $\{P_k(x)\}_{k\in \ZZ}\subset \CC[x]$ is a [*sequence of Chebyshev polynomials*]{} if it satisfies the following three-term recurrence $$\label{Ch:poly}
P_{k+1}(x)=2xP_k(x)-P_{k-1}(x),\hspace{.25cm}k\in \ZZ.$$ Therefore, any sequence of Chebyshev polynomials is completely determined by the choice of the two polynomials $P_{m}(x)$ and $P_{m+1}(x)$ for some $m\in \ZZ$. In particular, the choice $U_{-1}(x)=0$ and $U_0(x)=1$, determines the sequence $\{U_k(x)\}_{k\in \ZZ}$, where for any $k\in \ZZ$, $U_k(x)$ is the [*$k$-th Chebyshev polynomial of second kind*]{}, see [@MH03]. Then $U_{-k}(x)=-U_{k-2}(x)$, $U_k(-x)=(-1)^kU_k(x)$ for any $k\in \ZZ$ and any $x\in \CC$ and moreover $$\label{ch:second}
U_k(x)=\sum\limits_{j=0}^{\lfloor \frac{k}{2}\rfloor}(-1)^j{k-j\choose j}(2x)^{k-2j},\hspace{.25cm}k\in \NN.$$
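The closed-form sum can be checked against the three-term recurrence; the Python sketch below (our illustration) does so at a sample point.

```python
from math import comb

def U_rec(k, x):
    """Chebyshev U_k via the three-term recurrence, for k >= 0."""
    um1, u = 0.0, 1.0          # U_{-1}, U_0
    for _ in range(k):
        um1, u = u, 2 * x * u - um1
    return u

def U_sum(k, x):
    """Closed-form sum for U_k, k >= 0."""
    return sum((-1) ** j * comb(k - j, j) * (2 * x) ** (k - 2 * j)
               for j in range(k // 2 + 1))

x = 0.7
print(all(abs(U_rec(k, x) - U_sum(k, x)) < 1e-9 for k in range(12)))  # True
```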
In addition, for any sequence of Chebyshev polynomials $\{P_k(x)\}_{k\in \ZZ}$ we have that $$\label{comb}
P_k(x)=P_0(x)U_k(x)-P_{-1}(x)U_{k-1}(x),\hspace{.25cm}\hbox{for any}\hspace{.25cm}k\in \ZZ.$$ In particular, $\{P_k(x)\}_{k\in \ZZ}\subset \RR[x]$ iff $P_{-1}(x),P_0(x)\in \RR[x]$, since $U_k(x)\in \RR[x]$ for any $k\in \ZZ$. Moreover, if given $p\in \NN^*$ we apply to the flipped shift sequence $\{U_{p-k}(x)\}_{k\in \ZZ}$ we obtain the well-known identity $$\label{fs}
U_{p-k}(x)=U_{p}(x)U_k(x)-U_{p+1}(x)U_{k-1}(x),\hspace{.25cm}\hbox{for any}\hspace{.25cm}k\in \ZZ.$$
The sequences $\{T_k(x)\}_{k\in \ZZ}$, $\{V_k(x)\}_{k\in \ZZ}$ and $\{W_k(x)\}_{k\in \ZZ}$ defined by taking $T_{-1}(x)=x$, $V_{-1}(x)=1$, $W_{-1}(x)=-1$ and $T_0(x)=V_0(x)=W_0(x)=1$ are known as [*Chebyshev polynomials of first, third and fourth order*]{}, respectively. From the Identity we obtain that $V_k(x)=U_k(x)-U_{k-1}(x)$, $W_k(x)=U_k(x)+U_{k-1}(x)$ $T_k(x)=U_k(x)-xU_{k-1}(x)=\frac{1}{2}\big[U_k(x)-U_{k-2}(x)\big]$ for any $k\in \ZZ$, which implies that $T_{-k}(x)=T_{k}(x)$, $T_k(-x)=(-1)^kT_k(x)$ for any $k\in \ZZ$ and any $x\in \CC$ and moreover $$\label{ch:first}
T_k(x)=\dfrac{k}{2}\sum\limits_{j=0}^{\lfloor \frac{k}{2}\rfloor}\dfrac{(-1)^j}{k-j}{k-j\choose j}(2x)^{k-2j},\hspace{.25cm}k\in \NN^*.$$
Returning to Chebyshev sequences, it is clear that any Chebyshev sequence with parameter $q$ is of the form $\{P_k(q)\}_{k\in \ZZ}$, where $\{P_k(x)\}_{k\in \ZZ}$ is a sequence of Chebyshev polynomials. Therefore, many properties of Chebyshev sequences are consequences of properties of Chebyshev polynomials and conversely.
\[Cheb:periodic\] Given $q\in \KK$, then for any $p\in \NN^*$ the Chebyshev equation with parameter $q$ has non-null solutions belonging to $\ell(\KK;p,r)$ iff $$r=T_p(q)\pm \sqrt{T_p(q)^2-1}.$$ In particular, the following results hold:
- If $p\in \NN^*$, the Chebyshev equation with parameter $q$ has non-null solutions belonging to $\ell(\KK;p)$ iff ${q=\cos\big(\frac{2j\pi}{p}\big)}$, $j=0,\ldots,\lceil\frac{p-1}{2}\rceil$.
- If $r\in \KK^*$ the Chebyshev equation with parameter $q$ has non-null solutions belonging to $\ell(\KK;1,r)$ iff $q=\dfrac{1}{2}(r+r^{-1})$.
- The Chebyshev equation with parameter $q$ has constant solutions iff $q=1$.
Given $z(k)=AU_k(q)+BU_{k-1}(q)$ a non-null Chebyshev sequence with parameter $q$, then from Lemma \[sol:per\] $z\in \ell(\KK;p,r)$ iff $z(p)=rz(0)$ and $z(p+1)=rz(1)$; that is iff $$\begin{bmatrix}U_p(q)-r &U_{p-1}(q)\\[1ex]
U_{p+1}(q)-2qr & U_p(q)-r\end{bmatrix}\begin{bmatrix}A\\[1ex]B\end{bmatrix}=\begin{bmatrix}0\\[1ex]0\end{bmatrix}$$ Therefore, the Chebyshev equation with parameter $q$ has non-null solutions belonging to $\ell(\KK;p,r)$ iff the determinant of the above matrix equals $0$; that is, applying , iff $$0= r^2+U_p^2(q)-U_{p+1}(q)U_{p-1}(q)-2r\big[U_{p}(q)-qU_{p-1}(q)\big]+1=r^2-2rT_p(q)+1.$$
\(i) When $z\in \ell(\KK;p)$; that is, when $r=1$, the above equation becomes $T_p(q)=1$, which implies that $q=\cos\Big(\frac{2j\pi}{p}\Big)$, $j=0,\ldots,\lceil\frac{p-1}{2}\rceil$, see [@MH03]. (ii) When $p=1$, the above equation becomes $r^2-2qr+1=0$ and hence $q=\dfrac{1}{2}(r+r^{-1})$. (iii) This is a straightforward consequence of either (i) or (ii).
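Case (i) of the proposition is easy to check numerically: taking $q = \cos(2\pi/p)$, every solution of the Chebyshev recurrence returns to its initial values after $p$ steps. A minimal Python sketch (our illustration, not part of the paper):

```python
import math

def chebyshev_seq(q, z0, z1, n):
    """First n+1 terms of the Chebyshev sequence with parameter q."""
    z = [z0, z1]
    for k in range(1, n):
        z.append(2 * q * z[k] - z[k - 1])
    return z

p = 5
q = math.cos(2 * math.pi / p)
z = chebyshev_seq(q, 1.0, 0.3, p + 1)   # arbitrary initial values
print(abs(z[p] - z[0]) < 1e-12, abs(z[p + 1] - z[1]) < 1e-12)
```

By Lemma (sol:per) the two printed conditions are exactly what periodicity of period $p$ requires.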
The role of Chebyshev equations among constant-coefficient equations is described by the following results. We start with an easy-to-prove result involving first order linear difference equations with constant coefficients.
\[order:1\] Let $r\in \KK^*$ and consider $q=\frac{1}{2}(r+r^{-1})$. Then, a sequence $z\in
\ell(\KK)$ is a solution of the first order difference equation $z(k+1)=r z(k)$, $k\in \ZZ$, iff it is a solution of the Chebyshev equation $z(k+1)-2q z(k)+z(k-1)=0$, $k\in \ZZ$, satisfying that $z(1)=r z(0)$; equivalently, iff $z$ is a multiple of $r U_{k-1}(q)-U_{k-2}(q)$.
The second result concerns to the even and odd subsequences of a given Chebyshev sequence.
\[Ch:odd-even\] Let $q\in \KK$ and $z\in \ell(\KK)$ be a solution of the Chebyshev equation $$z(k+1)-2qz(k)+z(k-1)=0,\hspace{.25cm} k\in \ZZ.$$ Then for any $m\in \ZZ$, the subsequence $z_{2,m}$ is a Chebyshev sequence with parameter $2q^2-1$.
When $q=0$, then $z(k)=-z(k-2)$ for any $k\in \ZZ$; which implies that $z_{2,m}(k)=-z_{2,m}(k-1)$, for any $k\in \ZZ$. Applying Lemma \[order:1\] we obtain that $z_{2,m}$ is solution of the Chebyshev equation with parameter $-1=2q^2-1$.
When $q\in \KK^*$, for any $k\in \ZZ$ we have that $z(k)=\dfrac{1}{2q}\big[z(k+1)+z(k-1)\big]$ and hence, $$\begin{array}{rl}
0=&\hspace{-.25cm}z(2k+m+1)-2qz(2k+m)+z(2k+m-1)\\[1ex]
=&\hspace{-.25cm}\dfrac{1}{2q}\big[z(2k+m+2)+2z(2k+m)+z(2k+m-2)\big]-2qz(2k+m)\\[2ex]
=&\hspace{-.25cm}\dfrac{1}{2q}\big[z(2k+m+2)-2(2q^2-1)z(2k+m)+z(2k+m-2)\big]\\[2ex]
=&\hspace{-.25cm}\dfrac{1}{2q}\big[z_{2,m}(k+1)-2(2q^2-1)z_{2,m}(k)+z_{2,m}(k-1)\big]\end{array}$$ and the claim follows.
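The lemma can also be verified numerically: the Python sketch below (ours, for illustration) generates a Chebyshev sequence with parameter $q$ and checks that its even-index subsequence satisfies the Chebyshev recurrence with parameter $2q^2-1$.

```python
q = 0.4
z = [1.0, 0.8]                      # arbitrary initial values
for k in range(1, 40):
    z.append(2 * q * z[k] - z[k - 1])

qhat = 2 * q**2 - 1
even = z[0::2]                      # the subsequence z_{2,0}
ok = all(abs(even[k + 1] - 2 * qhat * even[k] + even[k - 1]) < 1e-9
         for k in range(1, len(even) - 1))
print(ok)  # True
```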
Many properties and identities involving Chebyshev polynomials are consequences of the fact that they are solutions of Chebyshev equations. For instance, from the above lemma we have the following classical identities, see $(1.14)$ and $(1.15)$ in [@MH03], $$\label{doubling:second}
U_{2k}(x)=W_k(2x^2-1)\hspace{.25cm}\hbox{and}\hspace{.25cm}U_{2k+1}(x)=2x U_k(2x^2-1), \hspace{.25cm}k\in \ZZ,$$ which in turn implies that $$\label{doubling:first}
T_{2k}(x)=T_k(2x^2-1)\hspace{.25cm}\hbox{and}\hspace{.25cm}T_{2k+1}(x)=xV_k(2x^2-1), \hspace{.25cm}k\in \ZZ.$$
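These doubling identities are easy to confirm numerically. The sketch below (our illustration) builds $T_k$ and $U_k$ through their three-term recurrences and checks $T_{2k}(x)=T_k(2x^2-1)$ and $U_{2k+1}(x)=2xU_k(2x^2-1)$ at a sample point.

```python
def cheb_TU(x, n):
    """Return the lists [T_0..T_n] and [U_0..U_n] via the recurrences."""
    T = [1.0, x]
    U = [1.0, 2 * x]
    for k in range(1, n):
        T.append(2 * x * T[k] - T[k - 1])
        U.append(2 * x * U[k] - U[k - 1])
    return T, U

x = 0.6
y = 2 * x * x - 1
T, U = cheb_TU(x, 20)
Ty, Uy = cheb_TU(y, 10)
print(all(abs(T[2 * k] - Ty[k]) < 1e-9 for k in range(10)))              # True
print(all(abs(U[2 * k + 1] - 2 * x * Uy[k]) < 1e-9 for k in range(10)))  # True
```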
The next result shows that any difference equation with constant coefficients is equivalent to a Chebyshev equation. Although it is a known result, see [@ABD05 Theorem 3.1], we reproduce its proof here for the sake of completeness.
\[Ch:complex\] Consider $a,c\in \KK^*$, $b\in \KK$ and $z\in \ell(\KK)$ a solution of the second order difference equation with constant coefficients $$az(k+1)-bz(k)+cz(k-1)=0,\hspace{.25cm} k\in \ZZ.$$ Then $z(k)=(\sqrt{a^{-1}c})^{k}v(k)$, $k\in \ZZ$, where $v\in \ell(\CC)$ is a solution of the Chebyshev equation $$v(k+1)-2qv(k)+v(k-1)=0,\hspace{.25cm} k\in \ZZ$$ whose parameter is $q=\dfrac{b}{2\sqrt{ac}}$. Moreover, if $\KK=\RR$ and $ac>0$ then $v \in\ell(\RR)$, whereas when $ac<0$ then, for any $m\in \ZZ$, $z_{2,m}(k)=( a^{-1}c )^{k}w(k)$, where $w\in \ell(\RR)$ is a solution of the Chebyshev equation $$w(k+2)-2\hat qw(k+1)+w(k)=0,\hspace{.25cm} k\in \ZZ$$ whose parameter is $\hat q=\dfrac{b^2}{2ac}-1\in \RR$.
Clearly, for any $k\in \ZZ$ we have that $$0=az(k+1)-bz(k)+cz(k-1)=(\sqrt{a^{-1}c})^{k-1}\Big[cv(k+1)-b\sqrt{a^{-1}c}\,v(k)+cv(k-1)\Big].$$ Therefore, $v\in \ell(\CC)$ is a solution of the difference equation with complex coefficients $$0=cv(k+1)-b\sqrt{a^{-1}c}v(k)+cv(k-1)$$ or, equivalently of the Chebyshev equation with parameter $q=\dfrac{b\sqrt{a^{-1}c}}{2c}=
\dfrac{b}{2\sqrt{ac}}$. When $\KK=\RR$, if $ac>0$, then $q\in \RR$ and moreover $v\in \ell(\RR)$; whereas if $ac<0$, then $\hat q=2q^2-1=\dfrac{b^2}{2ac}-1\in \RR$. Moreover, given $m\in \ZZ$, $(\sqrt{a^{-1}c})^mv$ is also a solution of the Chebyshev equation with parameter $q$ and hence applying Lemma \[Ch:odd-even\], $w=(\sqrt{a^{-1}c})^mv_{2,m}$ is a solution of the Chebyshev equation with parameter $\hat q\in \RR$. Therefore, for any $k\in \ZZ$ we have $$w(k)= (\sqrt{a^{-1}c})^mv(2k+m)=(\sqrt{a^{-1}c})^m (\sqrt{a^{-1}c})^{-m-2k}z(2k+m)=(a^{-1}c)^{-k}z_{2,m}(k)$$ which in particular implies that $w\in \ell(\RR)$ and hence the result.
Now we can derive a Floquet-type theorem for equations with constant coefficients.
\[floquet:constant\] Given $a,c\in \KK^*$ and $b\in \KK$, the equation with constant coefficients $$az(k+1)-bz(k)+cz(k-1)=0,\hspace{.5cm}k\in \ZZ$$ has quasi-periodic solutions of period $p\in \NN^*$ and ratio $r\in\KK^*$ iff $$r=\sqrt{\dfrac{c^p}{a^p}}\left[T_p(q)\pm \sqrt{T_p(q)^2-1}\right],\hspace{.25cm}\hbox{where}\hspace{.25cm}q=\dfrac{b}{2\sqrt{ac}}.$$ Therefore, the equation has geometric solutions with ratio $r$ iff $r=\dfrac{b\pm \sqrt{b^2-4ac}}{2a}$ and, in particular, it has constant solutions iff $b=a+c$.
According to Theorem \[Ch:complex\], a sequence $z\in \ell(\KK)$ is a solution of the above equation iff the sequence defined for any $k\in \ZZ$ as $v(k)=\alpha^kz(k)$, where $\alpha=\sqrt{ac^{-1}}$, is a Chebyshev sequence with parameter $q=\dfrac{b}{2\sqrt{ac}}$. Therefore, $z\in \ell(\KK;p,r)$ iff $v\in \ell(\KK;p,r\alpha^p)$ and, from Proposition \[Cheb:periodic\], this happens iff $$r\alpha^p=T_p(q)\pm \sqrt{T_p(q)^2-1}.$$
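The geometric-solution criterion in the corollary reduces to the characteristic equation $ar^2-br+c=0$; the quick Python check below (ours, with arbitrary sample coefficients) verifies that both roots indeed generate solutions $z(k)=r^k$.

```python
import cmath

a, b, c = 2.0, 3.0, 5.0            # arbitrary nonzero sample coefficients
disc = cmath.sqrt(b * b - 4 * a * c)
for r in ((b + disc) / (2 * a), (b - disc) / (2 * a)):
    # z(k) = r**k solves a z(k+1) - b z(k) + c z(k-1) = 0
    # iff a r^2 - b r + c = 0.
    residual = max(abs(a * r ** (k + 1) - b * r ** k + c * r ** (k - 1))
                   for k in range(1, 6))
    print(residual < 1e-9)  # True
```

Here $b^2-4ac<0$, so the two ratios are complex conjugates; `cmath` handles this case transparently.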
Our aim in this paper is to extend the above results to a wider class of linear difference equations.
Second Order Difference Equations with Quasi-Periodic Coefficients
==================================================================
We say that the Equation has [*quasi-periodic coefficients with period $p\in \NN^*$ and ratio $r\in \KK^*$*]{} if $a,c\in \ell(\KK^*;p,r)$ and $b\in\ell(\KK;p,r)$. In particular, we say that the Equation has [*constant coefficients*]{} when $a,c\in \ell(\KK^*;1,1)\equiv \KK^*$ and $b\in \ell(\KK;1,1)\equiv \KK$.
The Equation is called [*symmetric*]{} when $a=c$. When $a,b,c\in \ell(\RR)$, symmetric equations are also called [*self-adjoint equations*]{}. It is well-known that any irreducible second order linear difference equation is equivalent to a symmetric one, and to a self-adjoint one when $a,b,c\in \ell(\RR)$. With this end, we consider the function $\phi\func{\ell(\KK^*)\times \ell(\KK^*)}{\ell(\KK^*)}$ defined as $$\label{sym}\phi(a,c)(0)=1,\hspace{.25cm}\phi(a,c)(k)=\displaystyle
\prod\limits_{j=0}^{k-1}\dfrac{a(j)}{c(j)},\hspace{.25cm}\hbox{when $k>0$}\hspace{.25cm}\hbox{and}\hspace{.25cm}\phi(a,c)(k)=\displaystyle
\prod\limits_{j=k}^{-1}\dfrac{c(j)}{a(j)},\hspace{.25cm}\hbox{when $k<0$}.$$
\[equivalence\] Given $a,c\in \ell(\KK^*)$, the following properties hold:
- $\phi(a,c)(k)=1$ for all $k\in \ZZ$ iff $a=c$.
- $\phi(a,c)(k-1)a(k-1)=\phi(a,c)(k)c(k-1)$, $k\in \ZZ.$ Therefore, $z\in \ell(\KK)$ is a solution of the difference equation whose coefficients are $a,b$ and $c$ iff it is a solution of the symmetric difference equation whose coefficients are $\phi(a,c) a$ and $\phi(a,c) b$.
- If $a,c\in \ell(\KK^*;p,r)$, then $\phi(a,c)\in \ell\big(\KK^*;p,\phi(a;c)(p)\big)$. Therefore, if $b\in \ell(\KK;p,r)$, then $\phi(a,c)b\in \ell\big(\KK^*;p,r\phi(a;c)(p)\big)$.
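The defining identity in part (ii) of Lemma \[equivalence\] is easy to check numerically. The sketch below (the nonzero integer sequences $a$ and $c$ are arbitrary choices of ours) verifies $\phi(a,c)(k-1)a(k-1)=\phi(a,c)(k)c(k-1)$ over positive and negative indices in exact rational arithmetic.

```python
# Check of phi(a,c)(k-1)*a(k-1) == phi(a,c)(k)*c(k-1) for the factor phi
# defined in (sym), including the k < 0 branch of the definition.
from fractions import Fraction

def phi(a, c, k):
    """phi(a,c)(k): product of a(j)/c(j) for 0 <= j < k, of c(j)/a(j) for k <= j < 0."""
    out = Fraction(1)
    if k > 0:
        for j in range(k):
            out *= Fraction(a(j), c(j))
    else:
        for j in range(k, 0):
            out *= Fraction(c(j), a(j))
    return out

a = lambda k: 2 + (k % 3)    # arbitrary nonzero integer sequences
c = lambda k: 1 + (k % 2)

checks = [phi(a, c, k - 1) * a(k - 1) == phi(a, c, k) * c(k - 1)
          for k in range(-5, 6)]
print(all(checks))           # True
```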
The next result shows the role that Chebyshev equations play in solving certain linear systems of difference equations with constant coefficients. It is the key to solving general second order linear difference equations with quasi-periodic coefficients.
\[complex:system\] Given $p\in \NN^*$, $a_j\in \KK^*$ and $b_j\in \KK$, $j=0,\ldots,p-1$, consider the sequences $v_j\in \ell(\KK)$, $j=0,\ldots,p-1$, satisfying the equalities $$\left\{\begin{array}{rll}
b_0v_0(k)=&\hspace{-.25cm} a_0v_1(k)+ a_{p-1}v_{p-1}(k-1),& \\[1ex]
b_jv_j(k)=&\hspace{-.25cm} a_jv_{j+1}(k)+ a_{j-1}v_{j-1}(k), & j=1,\ldots,p-2,\\[1ex]
b_{p-1}v_{p-1}(k)=&\hspace{-.25cm} a_{p-1}v_0(k+1)+ a_{p-2}v_{p-2}(k),&
\end{array}\right.$$ where $v_1(k)=v_0(k+1)$, $k\in \ZZ$, when $p=1$. Then, there exists $q_p(a_0,\ldots,a_{p-1};b_0,\ldots,b_{p-1})\in \KK$ such that for any $j=0,\ldots,p-1$, $v_j$ is a solution of the Chebyshev equation $$z(k+1)-2q_p(a_0,\ldots,a_{p-1};b_0,\ldots,b_{p-1})z(k)+z(k-1)=0,\hspace{.25cm}k\in \ZZ.$$ Moreover, if $a_j,b_j\in \RR$, $j=0,\ldots,p-1$, then for any $j=1,\ldots,p-1$ it is verified that $$iq_p(a_0,\ldots,\pm i a_j,\ldots,a_{p-1};b_0,\ldots,b_{p-1})\in \RR.$$
We prove the claim by induction on $p$.
If $p=1$, the system is reduced to the equation $b_0v_0(k)=a_0v_0(k+1)+ a_0v_0(k-1)$ and hence it suffices to take $q_1(a_0;b_0)=\dfrac{b_0}{2a_0}\in \KK$. Moreover, if $a_0,b_0\in \RR$, then $iq_1(\pm ia_0;b_0)=\pm q_1(a_0;b_0)\in \RR$.
If $p=2$, then the system becomes $$\left\{\begin{array}{rl}
b_0v_0(k)=&\hspace{-.25cm}a_0v_1(k)+ a_1v_1(k-1), \\[1ex]
b_1v_1(k)=&\hspace{-.25cm}a_1v_0(k+1)+a_0v_0(k).
\end{array}\right.$$
If $b_1\not=0$, obtaining $v_1$ from the second equation and substituting its value at the first one, we get $$b_0v_0(k)=\dfrac{1}{b_1}\Big[a_0a_1v_0(k+1)+(a_0^2+ a_1^2)v_0(k)+ a_0a_1v_0(k-1)\Big],$$ which implies that $q_2(a_0,a_1;b_0,b_1)=\dfrac{1}{2a_0a_1}\big[b_0b_1-a_0^2- a_1^2\big]$. Moreover, when $a_0,a_1,b_0,b_1\in \RR$, we have that $$iq_2(\pm ia_0,a_1;b_0,b_1)=\dfrac{\pm1}{2a_0a_1}\big[b_0b_1+a_0^2-a_1^2\big]\in \RR\hspace{.25cm}\hbox{and}\hspace{.25cm}iq_2(a_0,\pm ia_1;b_0,b_1)= \dfrac{\pm 1}{2a_0a_1}\big[b_0b_1-a_0^2+a_1^2\big]\in \RR.$$
As the above Chebyshev equation with parameter $q_2$ has constant coefficients, the sequence $v\in \ell(\CC)$ defined for any $k\in \ZZ$ as $v(k)=v_0(k+1)$ is also a solution of the same equation. Therefore, $v_1$ is also a solution since, from the second equation of the system, it is a linear combination of the sequences $v$ and $v_0$.
If $b_0\not=0$, then obtaining $v_0$ from the first equation and substituting its value at the second one, we get $$b_1v_1(k)=\dfrac{1}{b_0}\Big[a_0a_1v_1(k+1)+(a_0^2+ a_1^2)v_1(k)+ a_0a_1v_1(k-1)\Big],$$ which newly implies the same conclusions than above.
If $b_0=b_1=0$, then $v_1(k)= r v_1(k-1)$ and $v_0(k+1)=r^{-1}v_0(k)$, where $r=-\dfrac{a_1}{a_0}$. Therefore, applying Lemma \[order:1\], we get that both $v_0$ and $v_1$ are solutions of the Chebyshev equation with parameter $\frac{1}{2}(r+r^{-1})=q_2(a_0,a_1;0,0)$.
Suppose now that $p\ge 3$ and that the claims are true for any $1\le\ell \le p-1$.
If $b_{p-1}\not=0$, then from the last equation we have $$v_{p-1}(k)=b_{p-1}^{-1}a_{p-1}v_0(k+1)+b_{p-1}^{-1}a_{p-2}v_{p-2}(k),\hspace{.5cm}\hbox{for any $k\in \ZZ$}$$ and substituting the value of $v_{p-1}(k-1)$ and of $v_{p-1}(k)$ at the first and at the penultimate equations of the system, we get $$\left\{\begin{array}{rll}
b_{p-1}^{-1}(b_0b_{p-1}- a_{p-1}^2)v_0(k)=&\hspace{-.25cm}a_0v_1(k)+ b_{p-1}^{-1}a_{p-1}a_{p-2}v_{p-2}(k-1), & \\[1ex]
b_jv_j(k)=&\hspace{-.25cm}a_jv_{j+1}(k)+a_{j-1}v_{j-1}(k),& j=1,\ldots,p-3,\\[1ex]
b_{p-1}^{-1}(b_{p-2}b_{p-1}-a_{p-2}^2)v_{p-2}(k)=&\hspace{-.25cm}b_{p-1}^{-1}a_{p-2}a_{p-1}v_0(k+1)+a_{p-3}v_{p-3}(k).&
\end{array}\right.$$ Applying the induction hypothesis and taking $$q_p=
q_{p-1}\left(a_0,a_1,\ldots,a_{p-3},b_{p-1}^{-1}a_{p-2}a_{p-1};b_{p-1}^{-1}(b_0b_{p-1}-a_{p-1}^2),b_1,\ldots,b_{p-3},b_{p-1}^{-1}(b_{p-2}b_{p-1}-a_{p-2}^2)\right)\in \KK,$$ for any $j=0,\ldots,p-2$ it is satisfied that $$2q_pv_j(k)= v_j(k+1)+ v_j(k-1),\hspace{.25cm}\hbox{for any $k\in \ZZ$}.$$ Moreover, since $v_{p-1}$ is a linear combination of two solutions of the same Chebyshev equation, it is also a solution of it. Furthermore, applying the induction hypothesis, when $a_j,b_j\in \RR$, $j=0,\ldots,p-1$, then $b_{p-1}^{-1}a_{p-2}a_{p-1},b_{p-1}^{-1}(b_0b_{p-1}-a_{p-1}^2),b_{p-1}^{-1}(b_{p-2}b_{p-1}-a_{p-2}^2)\in \RR$ and we can also conclude that $$i q_p(a_0,\ldots,\pm i a_j,\ldots,a_{p-1};b_0,\ldots,b_{p-1})\in \RR,$$ for any $j=0,\ldots,p-1$.
When $b_0\not=0$, obtaining $v_0$ from the first equation and applying the same reasoning as above, for any $j=0,\ldots,p-1$ we get $$2q_pv_j(k)= v_j(k+1)+v_j(k-1),\hspace{.25cm}\hbox{for any $k\in \ZZ$},$$ where $$q_p=q_{p-1}\left(a_1,\ldots,a_{p-2},b_{0}^{-1}a_{0}a_{p-1};b_{0}^{-1}(b_{0}b_{1}-a_{0}^2),b_2,\ldots,b_{p-2},b_{0}^{-1}(b_0b_{p-1}-a_{p-1}^2)\right)\in \KK$$ and the remaining properties for $q_p$ also hold.
If $b_0=b_{p-1}=0$, then $v_{0}(k)=-a_{p-1}^{-1}a_{p-2}v_{p-2}(k-1)$ and $v_{p-1}(k)=-a_{p-1}^{-1} a_{0} v_{1}(k+1)$, and hence substituting its values at the second and at the penultimate equations, we obtain $$\left\{\begin{array}{rll}
b_1v_1(k)=&\hspace{-.25cm}a_1v_2(k)-a_{p-1}^{-1}a_0a_{p-2} v_{p-2}(k-1), & \\[1ex]
b_jv_j(k)=&\hspace{-.25cm}a_jv_{j+1}(k)+a_{j-1}v_{j-1}(k), & j=2,\ldots,p-3,\\[1ex]
b_{p-2}v_{p-2}(k)=&\hspace{-.25cm}-a_{p-1}^{-1}
a_{0}a_{p-2}v_{1}(k+1)+a_{p-3}v_{p-3}(k).&
\end{array}\right.$$
Applying the induction hypothesis once more and taking $$q_p=q_{p-2}\left(a_1,\ldots,a_{p-3},-a_{p-1}^{-1} a_0a_{p-2};b_1,\ldots,b_{p-2}\right)\in \KK,$$ then for any $j=1,\ldots,p-2$ it is satisfied that $$2q_pv_j(k)= v_j(k+1)+v_j(k-1),\hspace{.25cm}\hbox{for any $k\in \ZZ$}.$$ Finally, as the sequences $v_0$ and $v_{p-1}$ are both multiples of solutions of the above Chebyshev equation, they are also solutions of it. Moreover, when $a_j,b_j\in \RR$, $j=0,\ldots,p-1$, it is also clear that $$i q_p(a_0,\ldots,\pm i a_j,\ldots,a_{p-1};b_0,\ldots,b_{p-1})\in \RR,\hspace{.25cm}\hbox{for any $j=1,\ldots,p-1$,}$$ since then $-a_{p-1}^{-1} a_0a_{p-2}\in \RR$.
\[Ch:sys\] Consider $p\in \NN^*$, $r\in \KK^*$, $a,c\in \ell(\KK^*;p,r)$, $b\in \ell(\KK;p,r)$, $s=\displaystyle \prod\limits_{j=0}^{p-1}\dfrac{a(j)}{c(j)}$, $\gamma=(\sqrt{rs})^{-1}$ and $z\in \ell(\KK)$ a solution of the equation with quasi-periodic coefficients $$a(k)z(k+1)-b(k)z(k)+c(k-1)z(k-1)=0,\hspace{.25cm}k\in \ZZ.$$ Then, there exists $q_{p,r}(a;b;c) \in \CC$ such that for any $m\in \ZZ$, $z_{p,m}(k)=\gamma^{k}v(k)$, $k\in \ZZ$, where the sequence $v\in \ell(\CC)$ is a solution of the Chebyshev equation $$v(k+1)-2 q_{p,r}(a;b;c)v(k)+v(k-1)=0,\hspace{.25cm}k\in \ZZ.$$ Moreover, if $\KK=\RR$ we have the following results:
- If $rs>0$, then $q_{p,r}(a;b;c)\in \RR$ and $v\in \ell(\RR)$.
- If $rs<0$, then $q_{p,r}(a;b;c)^2\in \RR$ and $z_{2p,m}(k)=(rs)^{-k}u(k)$, $k\in \ZZ$, where $u\in \ell(\RR)$ is a solution of the Chebyshev equation $$u(k+1)-2 \big(2q_{p,r}(a;b;c)^2-1\big)u(k)+u(k-1)=0,\hspace{.25cm}k\in \ZZ.$$
Given $m\in \ZZ$, then $m=k_0p+j$, where $0\le j\le p-1$. Therefore, $z_{p,m}(k)=z_{p,j}(k+k_0)$ and hence it suffices to prove the claims for $0\le j\le p-1$ and take into account that if a sequence is a solution of a difference equation with constant coefficients, then any shift is also a solution of the same equation.
From Part (ii) of Lemma \[equivalence\], we know that $z\in \ell(\KK)$ is a solution of the given equation iff it is a solution of the symmetric equation $$\phi(a,c)(k)a(k) z(k+1)-\phi(a,c)(k)b(k)z(k)+\phi(a,c)(k-1)a(k-1)z(k-1)=0,\hspace{.25cm}k\in \ZZ$$ and moreover, part (iii) of Lemma \[equivalence\] implies that $\phi(a,c)a,\phi(a,c) b\in \ell(\KK;p,rs)$.
Since the coefficients of this last equation are quasi-periodic with period $p$ and ratio $rs$, $z\in \ell(\KK)$ is a solution iff the subsequences $z_{p,j}$, $j=0,\ldots,p-1$ satisfy the equalities $$\left\{\begin{array}{rl}
\phi(a,c)(0)b(0)z_{p,0}(k)=&\hspace{-.25cm}\phi(a,c)(0)a(0)z_{p,1}(k)+\gamma^{2}\phi(a,c)(p-1) a(p-1)z_{p,p-1}(k-1), \\[1ex]
\phi(a,c)(j)b(j)z_{p,j}(k)=&\hspace{-.25cm}\phi(a,c)(j)a(j)z_{p,j+1}(k)+\phi(a,c)(j-1)a(j-1)z_{p,j-1}(k) ,\hspace{.15cm} 1\le j\le p-2,\\[1ex]
\phi(a,c)(p-1)b(p-1)z_{p,p-1}(k)=&\hspace{-.25cm}\phi(a,c)(p-1) a(p-1)z_{p,0}(k+1)+\phi(a,c)(p-2)a(p-2)z_{p,p-2}(k).
\end{array}\right.$$
Defining $v_j(k)=\gamma^{-k}z_{p,j}(k)$, $j=0,\ldots,p-1$, we get that $$\left\{\begin{array}{rl}
\phi(a,c)(0)b(0)v_0(k)=&\hspace{-.25cm}\phi(a,c)(0)a(0)v_1(k)+\gamma\phi(a,c)(p-1) a(p-1)v_{p-1}(k-1) \\[1ex]
\phi(a,c)(j)b(j)v_j(k)=&\hspace{-.25cm}\phi(a,c)(j)a(j)v_{j+1}(k)+\phi(a,c)(j-1)a(j-1)v_{j-1}(k) ,\hspace{.15cm} 1\le j\le p-2,\\[1ex]
\phi(a,c)(p-1)b(p-1)v_{p-1}(k)=&\hspace{-.25cm}\gamma\phi(a,c)(p-1) a(p-1)v_0(k+1)+ \phi(a,c)(p-2)a(p-2)v_{p-2}(k)
\end{array}\right.$$ We obtain the result applying Proposition \[complex:system\] and taking $$q_{p,r}(a;b;c)=q_{p}\big(\phi(a,c)(0)a(0),\ldots,\gamma\phi(a,c)(p-1)a(p-1); \phi(a,c)(0)b(0),\ldots,\phi(a,c)(p-1)b(p-1)\big).$$
Moreover, it is clear that when $a,b,c\in \ell(\RR)$ and $rs>0$, then $\gamma\in \RR$. Applying Proposition \[complex:system\] again, we obtain that $q_{p,r}(a;b;c)\in\RR$ and $v\in \ell(\RR)$, which proves (i).
\(ii) When $rs<0$, then $\gamma=-i(\sqrt{|rs|})^{-1}$. Therefore, consider $\hat a\in \ell(\RR)$ defined for any $k\in \ZZ$ and $j=0,\ldots,p-1$ as $\hat a(pk+j)=a(pk+j)$ if $j\not=p-1$ and as $\hat a(pk+p-1)=(\sqrt{|rs|})^{-1}\,a(pk+p-1)$. Clearly, $$q_{p,r}(a;b;c)=q_p\big(\phi(a,c)(0)\hat a(0),\ldots,-i\phi(a,c)(p-1)\hat a(p-1); \phi(a,c)(0)b(0),\ldots,\phi(a,c)(p-1)b(p-1)\big),$$ which from Proposition \[complex:system\] implies that $iq_{p,r}(a;b;c)\in \RR$ and hence $q_{p,r}(a;b;c)^2\in \RR$. The conclusion follows by the same reasoning as in the last part of Theorem \[Ch:complex\], taking into account that $z_{p,m}(2k)=z(2pk+m)=z_{2p,m}(k)$, whereas $z_{p,m}(2k+1)=z(2pk+p+m)=z_{2p,p+m}(k)$ for any $k\in \ZZ$.
The Floquet Functions
=====================
Given $p\in \NN^*$, $r\in \CC^*$, $a,c\in \ell(\CC^*;p,r)$ and $b\in \ell(\CC;p,r)$, Theorem \[Ch:sys\] establishes that there exists $q_{p,r}(a;b;c)\in \CC$ such that the difference equation $$a(k)z(k+1)-b(k)z(k)+c(k-1)z(k-1)=0,\hspace{.25cm}k\in \ZZ,$$ is equivalent to the Chebyshev equation with parameter $q_{p,r}(a;b;c)$. The aim of this section is to obtain the expression of $q_{p,r}(a;b;c)$. Therefore, given $p\in \NN^*$ and $r\in \CC^*$, we call [*Floquet function of order $p$ and ratio $r$*]{} the function $q_{p,r}\func{\ell(\CC^*;p,r)\times \ell(\CC;p,r)\times \ell(\CC^*;p,r)}{\CC}$ such that for any $a,c\in \ell(\CC^*;p,r)$ and $b\in \ell(\CC;p,r)$, if $z\in \ell(\CC)$ is a solution of the equation $$a(k)z(k+1)-b(k)z(k)+c(k-1)z(k-1)=0,\hspace{.25cm}k\in \ZZ,$$ then for any $m\in \ZZ$, $v(k)=\gamma^{-k}z_{p,m}(k)$ is a solution of the Chebyshev equation $$v(k+1)-2q_{p,r}(a;b;c)v(k)+v(k-1)=0,\hspace{.25cm}k\in \ZZ.$$
Notice that it suffices to determine the expression of the parameter for the symmetric case and for periodic coefficients. Specifically, given $a,c\in \ell(\CC^*;p,r)$ and $b\in \ell(\CC;p,r)$, we consider $\gamma=\left(r\prod\limits_{j=0}^{p-1}\dfrac{a(j)}{c(j)}\right)^{-\frac{1}{2}}$ and the pair $(a_\phi,b_\phi)\in \ell(\CC^*;p)\times \ell(\CC;p)$ defined as the periodic extension of $$\begin{array}{rlrl}
a_\phi(k)=\phi(a,c)(k)a(k),& k=0,\ldots,p-2; & a_\phi(p-1)=\gamma\phi(a,c)(p-1)a(p-1),\\[1ex]
b_\phi(k)=\phi(a,c)(k)b(k),& k=0,\ldots,p-1,&\end{array}$$ then in the proof of Floquet’s Theorem we have shown that $q_{p,r}(a;b;c)=q_{p,1}(a_\phi;b_\phi;a_\phi)$. Moreover, if we consider the function $Q_p\func{\ell(\CC^*;p)\times \ell(\CC;p)}{\CC}$ given by $Q_p(a;b)=q_{p,1}(a;b;a)$, then $Q_p$ is determined by the following non-linear recurrence $$\label{p=1,2}
Q_1(a;b)=\dfrac{b(0)}{2a(0)}, \hspace{.5cm}Q_2(a;b)=\dfrac{1}{2a(0)a(1)}\Big(b(0)b(1)- a(0)^2-a(1)^2\Big),$$ and $$\label{recurrence}
Q_{p+1}(a;b)=
\left\{\begin{array}{cl}
Q_{p}(\hat a;\hat b), & b(p)\not=0,\\[1ex]
Q_{p}(\check a;\check b), & b(0)\not=0,\\[1ex]
Q_{p-1}(\tilde a;\tilde b), & b(p)=b(0)=0,
\end{array}\right.$$ for $p\ge 2$, where the periodic sequences $\hat a,\hat b,\check a,\check b,\tilde a$ and $\tilde b$ are defined as the periodic extension of $$\begin{array}{llll}
\hat a(k)=a(k),& \hspace{-.15cm}k=0,\ldots,p-2, & \hat a(p-1)=\dfrac{a(p-1)a( p)}{b( p)},& \\[3ex]
\hat b(k)=b(k), &\hspace{-.15cm} k=1,\ldots,p-2, & \hat b(p-1)=\dfrac{b(p-1)b( p)-a(p-1)^2}{b( p)},& \hat b(0)=\dfrac{b(0)b( p)-a( p)^2}{b( p)},\\[3ex]
\check a(k)=a(k+1),&\hspace{-.15cm} k=0,\ldots,p-2, & \check a(p-1)=\dfrac{a(0)a( p)}{b(0)},& \\[3ex]
\check b(k)=b(k+1), & \hspace{-.15cm}k=1,\ldots,p-2, & \check b(p-1)=\dfrac{b(0)b( p)-a( p)^2}{b(0)},& \check b(0)=\dfrac{b(0)b(1)-a(0)^2}{b(0)},\\[3ex]
\tilde a(k)=a(k+1),&\hspace{-.15cm} k=0,\ldots,p-3, & \tilde a(p-2)=-\dfrac{a(0)a(p-1)}{a( p)},&
\\[3ex]
\tilde b(k)=b(k+1), & \hspace{-.15cm}k=0,\ldots,p-2. & &
\end{array}$$ Notice that when $p=2$, $\tilde a(0)=-\dfrac{a(0)a(1)}{a( 2)}$. In addition, the properties of Chebyshev equations and their solutions establish that when $b(0),b(p)\not=0$, necessarily $Q_{p}(\hat a;\hat b)=Q_{p}(\check a;\check b)$.
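The above recurrence for $Q_p$ can be implemented directly. The Python sketch below is only an illustration: it covers the generic branch in which every denominator $b(p)$ that appears is nonzero, takes one period of $a$ and $b$ as lists, and works over exact rationals; the helper name `Q` and the sample data are ours.

```python
# Sketch of the recurrence for Q_p (generic branch b(p) != 0 throughout),
# with one period of a and b given as lists of equal length p >= 1.
from fractions import Fraction

def Q(a, b):
    a = [Fraction(x) for x in a]
    b = [Fraction(x) for x in b]
    p = len(a)
    if p == 1:
        return b[0] / (2 * a[0])
    if p == 2:
        return (b[0] * b[1] - a[0]**2 - a[1]**2) / (2 * a[0] * a[1])
    assert b[-1] != 0, "this sketch only covers the branch b(p) != 0"
    # reduction step: build the hatted period of length p - 1
    a_hat = a[:-2] + [a[-2] * a[-1] / b[-1]]
    b_hat = ([(b[0] * b[-1] - a[-1]**2) / b[-1]]
             + b[1:-2]
             + [(b[-2] * b[-1] - a[-2]**2) / b[-1]])
    return Q(a_hat, b_hat)

# Q_3 compared with the expression obtained by direct elimination:
A, B = [1, 2, 3], [5, 7, 11]
direct = Fraction(B[0]*B[1]*B[2] - A[1]**2*B[0] - A[2]**2*B[1] - A[0]**2*B[2],
                  2 * A[0] * A[1] * A[2])
print(Q(A, B) == direct)     # True; both equal 97/4
```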
Our next aim is to obtain a closed expression for the Floquet functions and with this end, we introduce some concepts and notations.
A [*binary multi-index of order $p$*]{} is a $p$-tuple $\alpha=(\alpha_0,\ldots,\alpha_{p-1})\in \{0,1\}^p$ and its [*length*]{} is defined as $|\alpha|=\sum\limits_{j=0}^{p-1}\alpha_j\le p$. So $|\alpha|=m$ iff exactly $m$ components of $\alpha$ are equal to $1$ and exactly $p-m$ components of $\alpha$ are equal to $0$. The only binary multi-index of order $p$ whose length equals $p$ is $\pi_p=(1,\ldots,1)$. Moreover, given $\alpha\in \{0,1\}^p$ with $|\alpha|=m$, we consider $0\le i_1<\cdots<i_m\le p-1$ such that $\alpha_{i_1}=\cdots=\alpha_{i_m}=1$.
Given $\alpha\in \{0,1\}^p$ and a sequence $a\in \ell(\KK)$, we consider the following values $$\label{exp}
a^\alpha=\prod\limits_{j=0}^{p-1}a(j)^{\alpha_j}\hspace{.5cm}\hbox{and}\hspace{.5cm}a^{2\alpha}=\prod\limits_{j=0}^{p-1}a(j)^{2\alpha_j}$$ respectively, where we assume $0^0=1$. Observe that $a^{\pi_p}=\prod\limits_{j=0}^{p-1}a(j)$ for any $a\in \ell(\KK;p)$. In the sequel we also assume the usual convention that empty sums and empty products are defined as $0$ and $1$, respectively.
Given $p\in \NN^*$, we define $\Lambda_p^0=\{(0,\ldots,0)\}$ and for $p\ge 2$, $\Lambda_p^1=\big\{\alpha\in \{0,1\}^p: |\alpha|=1\big\}$. Moreover, when $p\ge 4$, for any $m=2,\ldots,\lfloor\frac{p}{2}\rfloor$, we define $$\label{floor}
\Lambda_p^m=\Big\{\alpha\in \{0,1\}^p: |\alpha|=m,\hspace{.15cm}\hbox{and $i_j+2\le i_{j+1}\le p-2(m-j)+\min\{i_1,1\}$, \hspace{.15cm}$j=1,\ldots,m-1$}\Big\},$$ or, equivalently, $$\Lambda_p^m=\Big\{\alpha\in \{0,1\}^p: |\alpha|=m,\hspace{.15cm}\hbox{ $i_{j+1}-i_j\ge 2$, \hspace{.15cm}$j=1,\ldots,m-1$ and $i_m\le p-2$ when $i_1=0$}\Big\}.$$
Given $p\ge 2$, $m=1,\ldots,\lfloor\frac{p}{2}\rfloor$ and $\alpha\in \Lambda_p^m$, let $0\le i_1<\cdots<i_m\le p-1$ be the indexes such that $\alpha_{i_1}=\cdots=\alpha_{i_m}=1$. Then, we define the binary multi-index $\bar \alpha$ of order $p$ as $$\bar \alpha_{i_j}=\bar \alpha_{i_j+1}=0, \hspace{.15cm}j=1,\ldots,m,\hspace{.25cm}\hbox{and}\hspace{.25cm}\bar \alpha_i=1\hspace{.25cm}\hbox{otherwise},$$ where if $i_m=p-1$, then $\bar \alpha_{p-1}=\bar \alpha_0=0$. Moreover, if $\alpha\in\Lambda_p^0$; that is, if $\alpha=(0,\ldots,0)$, then we define $\bar \alpha=\pi_p$. It is clear that, in any case, $|\bar \alpha|=p-2m$.
Given $p\ge 2$ and $0\le j\le\lfloor\frac{p}{2}\rfloor$, we define the following sets of binary multi-indices of order $p$ $$\begin{array}{rlrl}
A^{j,1}_p=&\hspace{-.25cm}\big\{\alpha \in \Lambda_{p}^{j}: \bar \alpha_0=\bar\alpha_{p-1}=1\big\},&\\[1ex]
A^{j,2}_p=&\hspace{-.25cm} \big\{\alpha \in \Lambda_{p}^{j}: \bar \alpha_0=0,\,\,\bar\alpha_{p-1}=1\big\},& A^{j,3}_p=&\hspace{-.25cm}\big\{\alpha \in \Lambda_{p}^{j}: \bar \alpha_0=1,\,\,\bar\alpha_{p-1}=0\big\}, \\[1ex]
A^{j,4}_p=&\hspace{-.25cm}\big\{\alpha \in \Lambda_{p}^{j}: \bar \alpha_0=\bar\alpha_{p-1}=0\hspace{.15cm}\hbox{and}\hspace{.15cm}\alpha_0=1\big\}, & A^{j,5}_p=&\hspace{-.25cm}\big\{\alpha \in \Lambda_{p}^{j}: \bar \alpha_0=\bar\alpha_{p-1}=0\hspace{.15cm}\hbox{and}\hspace{.15cm}\alpha_0=0\big\},\end{array}$$ that clearly determine a partition of $\Lambda_p^j$. Moreover, $A^{0,1}_p=\{(0,\ldots,0)\}$ and $A^{0,2}_{p}=A^{0,3}_{p}=A^{0,4}_{p}=A^{0,5}_{p}=\emptyset$.
\[partition\] Given $p\ge 4$ and $2\le j\le\lfloor\frac{p}{2}\rfloor$, then $A^{j,3}_{p+1}=A^{j,5}_p\times \{0\}$ and moreover $$\begin{array}{rlrl}
A^{j,1}_{p+1}=&\hspace{-.25cm}\big(A^{j,1}_p\times \{0\}\big)\cup \big(A^{j,3}_p\times \{0\}\big), & \hspace{.5cm}
A^{j,2}_{p+1}=&\hspace{-.25cm} \big(A^{j,2}_p\times \{0\}\big)\cup \big(A^{j,4}_p\times \{0\}\big), \\[1ex]
A^{j,4}_{p+1}=&\hspace{-.25cm}\big(A^{j-1,2}_{p-1}\times \{(1,0)\}\big)\cup \big(A^{j-1,4}_{p-1}\times \{(1,0)\}\big), & \hspace{.5cm}
A^{j,5}_{p+1}=&\hspace{-.25cm}\big(A^{j-1,1}_p\times \{1\}\big)\cup \big(A^{j-1,3}_p\times \{1\}\big).\end{array}$$
\[cardinal\] Given $p\in \NN^*$ and $0\le j\le\lfloor\frac{p}{2}\rfloor$, then $\displaystyle |\Lambda^{j}_p|=\frac{p}{p-j}{p-j\choose j}$. Therefore, for any $m\in \NN^*$ we have that $\displaystyle\sum\limits_{j=0}^m|\Lambda^{j}_{2m}|=2T_{m}\Big(\frac{3}{2}\Big)$ and $\displaystyle \sum\limits_{j=0}^m|\Lambda^{j}_{2m+1}|=W_{m}\Big(\frac{3}{2}\Big)$.
We know that $|\Lambda_p^0|=1$, for any $p\in \NN^*$ and that $|\Lambda_p^1|=\big|\big\{\alpha\in \{0,1\}^p: |\alpha|=1\big\}\big|=p$, for any $p\ge 2$.
If $\alpha\in \Lambda_{2m}^m$, $m\in \NN^*$, and $0\le i_1<\cdots<i_m\le 2m-1$ are such that $\alpha_{i_1}=\cdots=\alpha_{i_m}=1$, then $0\le i_1\le 1$ and $$2+i_j\le i_{j+1}\le 2j+\min\{i_1,1\},\hspace{.25cm}j=1,\ldots,m-1.$$
If $i_1=0$, then $i_j=2(j-1)$, $j=1,\ldots,m$, whereas when $i_1=1$, then $i_j=2(j-1)+1$, $j=1,\ldots,m$. In both cases $\bar \alpha=(0,\ldots,0)$ and moreover $|\Lambda_{2m}^m|=2$.
If $\alpha\in \Lambda_{2m+1}^m$ and $0\le i_1<\cdots<i_m\le 2m$ are such that $\alpha_{i_1}=\cdots=\alpha_{i_m}=1$, then $0\le i_1\le 2$ and $$2+i_j\le i_{j+1}\le 2j+1+\min\{i_1,1\},\hspace{.25cm}j=1,\ldots,m-1.$$
If $i_1=0$, then either $i_j=2(j-1)$, $j=1,\ldots,m$, which implies that $\bar \alpha=(0,\ldots,0,1)$; or there exists $2\le \ell\le m$ such that $i_j=2(j-1)$ when $1\le j< \ell$ and $i_\ell=2\ell-1$. Then $i_j=2j-1$, $j=\ell,\ldots,m$ and hence, $\bar \alpha_{2\ell-2}=1$ and $\bar \alpha_i=0$, otherwise. Then, $|\{\alpha \in \Lambda_{2m+1}^m:\alpha_0=1\}|=m$.
If $i_1=1$, then either $i_j=2j-1$, $j=1,\ldots,m$, which implies that $\bar \alpha=(1,0,\ldots,0)$; or there exists $2\le \ell\le m$ such that $i_j=2j-1$ when $1\le j< \ell$ and $i_\ell=2\ell$. Then $i_j=2j$, $j=\ell,\ldots,m$ and hence, $\bar \alpha_{2\ell-1}=1$ and $\bar \alpha_i=0$, otherwise. Moreover, $|\{\alpha \in \Lambda_{2m+1}^m:\alpha_0=0,\,\alpha_1=1\}|=m$.
If $i_1=2$, then $i_j=2j$, $j=1,\ldots,m$ and hence, $\bar \alpha_{1}=1$ and $\bar \alpha_i=0$, otherwise; which in turn implies that $|\{\alpha \in \Lambda_{2m+1}^m:\alpha_0=\alpha_1=0,\,\alpha_2=1\}|=1$.
Therefore, $|\Lambda_{2m+1}^m|=m+m+1=2m+1$ and hence we have obtained that, given $p\in \NN^*$, the claimed formula for $|\Lambda^j_p|$ is true for $j=0,1$ and for $j=\lfloor\frac{p}{2}\rfloor$.
Assume now that the formula is true for $p\ge 2$ and $0\le j\le\lfloor\frac{p}{2}\rfloor$. Then given $1\le j\le\lfloor\frac{p+1}{2}\rfloor-1$ and applying Lemma \[partition\], we get that $$\begin{array}{rl}
|\Lambda^{j+1}_{p+1}|=&\hspace{-.25cm}\displaystyle \sum\limits_{i=1}^3|A^{j+1,i}_{p+1}|+\sum\limits_{i=4}^5|A^{j+1,i}_{p+1}|=\sum\limits_{i=1}^5|A^{j+1,i}_{p}|+ |A^{j,2}_{p-1}|+|A^{j,4}_{p-1}|+ |A^{j,1}_{p}|+ |A^{j,3}_{p}|\\[3ex]
=&\hspace{-.25cm}\displaystyle|\Lambda^{j+1}_{p}|+ |A^{j,2}_{p-1}|+|A^{j,4}_{p-1}|+ |A^{j,1}_{p-1}|+ |A^{j,3}_{p-1}|+ |A^{j,5}_{p-1}|=|\Lambda^{j+1}_{p}|+|\Lambda^{j}_{p-1}|\\[2ex]
=&\hspace{-.25cm}\displaystyle\frac{p}{p-j-1}{p -j-1\choose j+1}+\frac{p-1}{p-1-j}{p-1-j\choose j}=\dfrac{p+1}{p-j}{p-j\choose j+1}.\end{array}$$
Finally, from the Identity we obtain that $T_p\big(\frac{i}{2}\big)=\dfrac{(i)^p}{2}\sum\limits_{j=0}^{\lfloor \frac{p}{2}\rfloor}|\Lambda^{j}_{p}|$ for any $p\in \NN^*$; this, combined with the identities and the facts that $T_m(-x)=(-1)^mT_m(x)$ and $V_m(-x)=(-1)^mW_m(x)$ for any $m\in \NN^*$, implies the last claims.
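Since $\Lambda_p^j$ consists of the binary multi-indices whose $j$ ones are pairwise non-adjacent on the cycle $\ZZ/p\ZZ$, the count $|\Lambda^{j}_p|=\frac{p}{p-j}{p-j\choose j}$ of Lemma \[cardinal\] can also be confirmed by brute-force enumeration for small $p$; the following sketch is purely illustrative.

```python
# Brute-force count of Lambda_p^j: j indices in {0,...,p-1}, pairwise gaps >= 2,
# and positions 0 and p-1 not both chosen (the cyclic non-adjacency condition).
from itertools import combinations
from math import comb

def lambda_count(p, j):
    if j == 0:
        return 1
    count = 0
    for idx in combinations(range(p), j):
        gaps_ok = all(idx[t + 1] - idx[t] >= 2 for t in range(j - 1))
        wrap_ok = not (idx[0] == 0 and idx[-1] == p - 1)
        count += gaps_ok and wrap_ok
    return count

# compare count*(p - j) with p*C(p - j, j) to stay in integer arithmetic
ok = all(lambda_count(p, j) * (p - j) == p * comb(p - j, j)
         for p in range(2, 11) for j in range(p // 2 + 1))
print(ok)   # True: the formula holds for all p <= 10
```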
\[even:odd\] Given $p,m\in \NN^*$, then the following identities hold $$\begin{array}{rl}
\displaystyle \sum\limits_{\alpha\in \Lambda^0_{p}}a^{2\alpha}b^{\bar \alpha}=&\hspace{-.25cm}\displaystyle b^{\pi_p}=\prod\limits_{j=0}^{p-1}b(j)\\[3ex]
\displaystyle \sum\limits_{\alpha\in \Lambda^1_{p}}a^{2\alpha}b^{\bar \alpha}=&\hspace{-.25cm}\displaystyle
\sum\limits_{i=0}^{p-2}a(i)^2\prod\limits_{j=0\atop j\not=i,i+1}^{p-1}b(j)+a(p-1)^2\prod\limits_{j=1}^{p-2}b(j)\\[3ex]
\displaystyle \sum\limits_{\alpha\in \Lambda^m_{2m}}a^{2\alpha}b^{\bar \alpha}=&\hspace{-.25cm}\displaystyle \prod\limits_{j=0}^{m-1}a(2j)^2+\prod\limits_{j=1}^{m}a(2j-1)^2\\[3ex]
\displaystyle \sum\limits_{\alpha\in \Lambda^m_{2m+1}}a^{2\alpha}b^{\bar \alpha}=&\hspace{-.25cm}\displaystyle \sum\limits_{i=0}^{m}b(2i)\prod\limits_{j=0}^{i-1}a(2j)^2\prod\limits_{j=i+1}^{m}a(2j-1)^2+\sum\limits_{i=1}^{m}b(2i-1)\prod\limits_{j=1}^{i-1}a(2j-1)^2\prod\limits_{j=i}^{m}a(2j)^2\end{array}$$
\[Ff\] Given $p\in \NN^*$ and $r\in \CC^*$ then for any $a,c\in \ell(\CC^*;p,r)$ and $b\in \ell(\CC;p,r)$, we have that $$q_{p,r}(a;b;c)=\dfrac{1}{2 }\sqrt{\dfrac{r}{a^{\pi_p} c^{\pi_p}}}
\sum\limits_{j=0}^{\lfloor\frac{p}{2}\rfloor}(-1)^j\sum\limits_{\alpha\in \Lambda_p^j}r^{-\alpha_{p-1}}a^{\alpha} b^{\bar \alpha} c^\alpha.$$
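Before turning to the proof, the closed formula can be tested numerically in the periodic symmetric case ($r=1$, $c=a$), where it reduces to $Q_p(a;b)=\frac{1}{2a^{\pi_p}}\sum_j(-1)^j\sum_{\alpha\in\Lambda_p^j}a^{2\alpha}b^{\bar\alpha}$. The sketch below enumerates $\Lambda_p^j$ by brute force and compares, for an arbitrary period of length four, with the expression for $Q_4$ obtained by repeated elimination; all names and data are illustrative.

```python
# Direct evaluation of (1/(2 a^{pi_p})) * sum_j (-1)^j sum_{alpha in Lambda_p^j}
# a^{2 alpha} b^{bar alpha}, with Lambda_p^j enumerated by brute force.
from fractions import Fraction
from itertools import combinations

def Q_closed(a, b):
    p = len(a)
    total = Fraction(0)
    for j in range(p // 2 + 1):
        for idx in combinations(range(p), j):
            if any(idx[t + 1] - idx[t] < 2 for t in range(j - 1)):
                continue                       # ones must have gaps >= 2
            if j >= 1 and idx[0] == 0 and idx[-1] == p - 1:
                continue                       # cyclic non-adjacency
            killed = set(idx) | {(i + 1) % p for i in idx}
            term = Fraction(1)
            for i in idx:
                term *= Fraction(a[i])**2      # factor a^{2 alpha}
            for i in range(p):
                if i not in killed:
                    term *= b[i]               # factor b^{bar alpha}
            total += (-1)**j * term
    denom = Fraction(2)
    for x in a:
        denom *= x
    return total / denom

A, B = [1, 2, 3, 4], [5, 7, 11, 13]
direct = Fraction(B[0]*B[1]*B[2]*B[3] - A[2]**2*B[0]*B[1] - A[3]**2*B[1]*B[2]
                  - A[1]**2*B[0]*B[3] + A[1]**2*A[3]**2 - A[0]**2*B[2]*B[3]
                  + A[0]**2*A[2]**2, 2 * A[0] * A[1] * A[2] * A[3])
print(Q_closed(A, B) == direct)   # True; both equal 391/6
```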
We first prove, by induction on $p$, that for any $a\in \ell(\CC^*,p)$ and $b\in \ell(\CC;p)$ we have $$Q_p(a;b)=q_{p,1}(a;b;a)=\dfrac{1}{2a^{\pi_p}}
\sum\limits_{j=0}^{\lfloor\frac{p}{2}\rfloor}(-1)^j\sum\limits_{\alpha\in \Lambda_p^j}a^{2\alpha} b^{\bar \alpha}.$$
From Corollary \[even:odd\], for $p=1$ the claimed formula gives the value $\dfrac{b(0)}{2a(0)}$, whereas for $p=2$ it gives the value $\dfrac{1}{2a(0)a(1)}\Big(b(0)b(1)-a(0)^2-a(1)^2\Big)$. Therefore, taking into account the identities, the proposed formula coincides with the expression for $Q_p$ for $p=1,2$. Assume now that it is true for $p\ge 2$ and consider $a\in \ell(\KK^*;p+1)$ and $b\in \ell(\KK;p+1)$. Since the hypotheses $b(0)\not=0$ or $b(0)=b(p)=0$ lead to reasoning analogous to the case $b(p)\not=0$, in the sequel we always assume that $b(p)\not=0$ and hence our aim is to prove that $Q_p(\hat a;\hat b)=Q_{p+1}(a;b)$ for any $p\ge 2$.
When $p=2$, then $$\begin{array}{rl}
Q_2(\hat a;\hat b)=&\hspace{-.25cm}\dfrac{b(2)}{2a(0)a(1)a(2)}\left(\dfrac{\big(b(0)b(2)-a(2)^2)\big(b(1)b(2)-a(1)^2\big)}{b(2)^2}-a(0)^2-\dfrac{a(1)^2a(2)^2}{b(2)^2}\right)\\[3ex]
=&\hspace{-.25cm}\dfrac{1}{2a(0)a(1)a(2)}\Big(b(0)b(1)b(2)-a(1)^2b(0)-a(2)^2b(1)-a(0)^2b(2)\Big)=Q_3(a;b).\end{array}$$
When $p=3$, then $$\begin{array}{rl}
Q_3(\hat a;\hat b)=&\hspace{-.25cm}\dfrac{b(3)}{2a(0)a(1)a(2)a(3)}\left(\dfrac{b(1)\big(b(0)b(3)-a(3)^2)\big(b(2)b(3)-a(2)^2\big)}{b(3)^2}-\dfrac{a(1)^2\big(b(0)b(3)-a(3)^2)}{b(3)}\right.\\[3ex]
&\hspace{3cm}\left.-\dfrac{a(2)^2a(3)^2b(1)}{b(3)^2}-\dfrac{a(0)^2\big(b(2)b(3)-a(2)^2)}{b(3)}\right)\\[3ex]
=&\hspace{-.25cm}\dfrac{1}{2a(0)a(1)a(2)a(3)}\Big(b(0)b(1)b(2)b(3)-a(2)^2b(0)b(1)-a(3)^2b(1)b(2)-a(1)^2b(0)b(3)\\[3ex]
& \hspace{3cm} +a(1)^2a(3)^2-a(0)^2b(2)b(3)+a(0)^2a(2)^2\Big)=Q_4(a;b).\end{array}$$
When $p\ge 4$, taking into account that $$b(p) \hat a^{\pi_p}= a^{\pi_{p+1}}\hspace{.25cm}\hbox{and}\hspace{.25cm}b(p)\hat b^{\pi_p}=b^{\pi_{ p+1}}- a(p-1)^2\prod\limits_{j=0}^{p-2}b(j)-a(p)^2\prod\limits_{j=1}^{p-1}b(j)+\dfrac{a(p-1)^2a(p)^2}{b(p)}\prod\limits_{j=1}^{p-2}b(j),$$ the first two identities of Corollary \[even:odd\] imply that $$Q_p(\hat a;\hat b)=\dfrac{1}{2a^{\pi_{p+1}}}\left[ \sum\limits_{\alpha \in \Lambda_{p+1}^0}a^{2\alpha}b^{\bar \alpha}-\sum\limits_{\alpha \in \Lambda_{p+1}^1}a^{2\alpha}b^{\bar \alpha}+b(p)\sum\limits_{j=2}^{\lfloor\frac{p}{2}\rfloor}(-1)^j\sum\limits_{\alpha\in \Lambda_p^j}\hat a^{2\alpha} \hat b^{\bar \alpha} +R_p(a;b) \right]$$ where $$\begin{array}{rl}
R_p(a;b)=&\hspace{-.25cm}\displaystyle a(p-1)^2\sum\limits_{i=0}^{p-3} a(i)^2\prod\limits_{j=0\atop j\not=i,i+1}^{p-2} b(j)+a(p)^2\sum\limits_{i=1}^{p-2} a(i)^2\prod\limits_{j=1\atop j\not=i,i+1}^{p-1} b(j)-\dfrac{a(p-1)^2a(p)^2}{b(p)}\sum\limits_{i=1}^{p-3} a(i)^2\!\!\!\prod\limits_{j=1\atop j\not=i,i+1}^{p-2} b(j)\\[3ex]
=&\hspace{-.25cm}\displaystyle \sum\limits_{\alpha \in A^{2,3}_{p+1}}a^{2\alpha}b^{\bar \alpha}+\sum\limits_{\alpha \in A^{2,4}_{p+1}}a^{2\alpha}b^{\bar \alpha}+\sum\limits_{\alpha \in A^{2,5}_{p+1}}a^{2\alpha}b^{\bar \alpha}-\dfrac{a(p)^2}{b(p)}\sum\limits_{\alpha \in A^{2,5}_p}a^{2\alpha}b^{\bar \alpha}.\end{array}$$
On the other hand, from the two last identities of Corollary \[even:odd\], when $p$ is even then $$\begin{array}{rl}
\displaystyle b(p)\sum\limits_{\alpha\in \Lambda^{\lfloor\frac{p}{2}\rfloor}_{p}}\hat a^{2\alpha}\hat b^{\bar \alpha}=&\hspace{-.25cm}\displaystyle b(p)\prod\limits_{j=0}^{\lfloor\frac{p}{2}\rfloor-1}a(2j)^2+\dfrac{a(p)^2}{b(p)}\prod\limits_{j=1}^{\lfloor\frac{p}{2}\rfloor}a(2j-1)^2\\[3ex]
=&\hspace{-.25cm}\displaystyle \sum\limits_{\alpha \in A^{\lfloor\frac{p}{2}\rfloor,1}_{p+1}}a^{2\alpha}b^{\bar \alpha}+\sum\limits_{\alpha \in A^{\lfloor\frac{p}{2}\rfloor,2}_{p+1}}a^{2\alpha}b^{\bar \alpha}+\dfrac{a(p)^2}{b(p)}\sum\limits_{\alpha \in A^{\lfloor\frac{p}{2}\rfloor,5}_{p}}a^{2\alpha}b^{\bar \alpha}\end{array}$$ since $A^{\lfloor\frac{p}{2}\rfloor,1}_{p+1}=\emptyset$; whereas when $p$ is odd, then $\lfloor\frac{p+1}{2}\rfloor=\lfloor\frac{p}{2}\rfloor+1$ and $$\begin{array}{rl}
\displaystyle b(p)\sum\limits_{\alpha\in \Lambda^{\lfloor\frac{p}{2}\rfloor}_{p}}\hat a^{2\alpha}\hat b^{\bar \alpha}=&\hspace{-.25cm}\displaystyle
\big(b(0)b(p)-a(p)^2\big)\prod\limits_{j=1}^{\lfloor\frac{p}{2}\rfloor}a(2j-1)^2+\big(b(p-1)b(p)-a(p-1)^2\big)\prod\limits_{j=0}^{\lfloor\frac{p}{2}\rfloor-1} a(2j)^2\\[3ex]
+&\hspace{-.25cm}\displaystyle b(p)\sum\limits_{i=1}^{\lfloor\frac{p}{2}\rfloor-1} b(2i)\prod\limits_{j=0}^{i-1} a(2j)^2\prod\limits_{j=i+1}^{\lfloor\frac{p}{2}\rfloor}a(2j-1)^2+\dfrac{a(p)^2}{b(p)}\sum\limits_{i=1}^{\lfloor\frac{p}{2}\rfloor} b(2i-1)\prod\limits_{j=1}^{i-1} a(2j-1)^2\prod\limits_{j=i}^{\lfloor\frac{p}{2}\rfloor} a(2j)^2\\[3ex]
=&\hspace{-.25cm}\displaystyle
-\prod\limits_{j=1}^{\lfloor\frac{p+1}{2}\rfloor}a(2j-1)^2-\prod\limits_{j=0}^{\lfloor\frac{p}{2}\rfloor} a(2j)^2\\[3ex]
+&\hspace{-.25cm}\displaystyle b(p)\sum\limits_{i=0}^{\lfloor\frac{p}{2}\rfloor} b(2i)\prod\limits_{j=0}^{i-1} a(2j)^2\prod\limits_{j=i+1}^{\lfloor\frac{p}{2}\rfloor}a(2j-1)^2+\dfrac{a(p)^2}{b(p)}\sum\limits_{i=1}^{\lfloor\frac{p}{2}\rfloor} b(2i-1)\prod\limits_{j=1}^{i-1} a(2j-1)^2\prod\limits_{j=i}^{\lfloor\frac{p}{2}\rfloor} a(2j)^2\\[3ex]
=&\hspace{-.25cm}\displaystyle
-\sum\limits_{\alpha\in \Lambda^{\lfloor\frac{p+1}{2}\rfloor}_{p+1}}a^{2\alpha}b^{\bar \alpha}+\sum\limits_{\alpha\in A^{\lfloor\frac{p}{2}\rfloor,1}_{p+1}}a^{2\alpha}b^{\bar \alpha}+\sum\limits_{\alpha\in A^{\lfloor\frac{p}{2}\rfloor,2}_{p+1}}a^{2\alpha}b^{\bar \alpha}+\dfrac{a(p)^2}{b(p)}\sum\limits_{\alpha \in A^{\lfloor\frac{p}{2}\rfloor,5}_p}a^{2\alpha}b^{\bar \alpha}.\end{array}$$
In particular, when $p=4,5$, then $\lfloor\frac{p}{2}\rfloor=2$ and we obtain that $$b(4)\sum\limits_{\alpha\in \Lambda_4^2}\hat a^{2\alpha} \hat b^{\bar \alpha} +R_4(a;b) =\sum\limits_{\alpha\in \Lambda_{5}^2}a^{2\alpha} b^{\bar \alpha}\hspace{.25cm}\hbox{and}\hspace{.25cm}b(5)\sum\limits_{\alpha\in \Lambda_5^2}\hat a^{2\alpha} \hat b^{\bar \alpha} +R_5(a;b) =-\sum\limits_{\alpha\in \Lambda_{6}^3}a^{2\alpha} b^{\bar \alpha}+\sum\limits_{\alpha\in \Lambda_{6}^2}a^{2\alpha} b^{\bar \alpha}$$ and hence $Q_p(\hat a;\hat b)=Q_{p+1}(a;b)$.
Consider now $p\ge 6$ and $2\le j\le \lfloor\frac{p}{2}\rfloor-1$. Then, $b(p)\sum\limits_{\alpha \in \Lambda^j_p}\hat a^{2\alpha}\hat b^{\bar \alpha}=b(p)\sum\limits_{i=1}^5\sum\limits_{\alpha \in A^{j,i}_{p}}\hat a^{2\alpha}\hat b^{\bar \alpha}$ and moreover we have $$\begin{array}{rl}
\displaystyle b(p)\sum\limits_{\alpha \in A^{j,1}_p}\hat a^{2\alpha}\hat b^{\bar \alpha}=&\hspace{-.25cm}\displaystyle
b(p)\sum\limits_{\alpha \in A^{j,1}_p} a^{2\alpha}b^
{\bar \alpha}-a(p-1)^2 \sum\limits_{\alpha \in A^{j,1}_p} a^{2\alpha}\prod\limits_{i=0}^{p-2} b^{\bar \alpha_i}-a(p)^2 \sum\limits_{\alpha \in A^{j,1}_p} a^{2\alpha}\prod\limits_{i=1}^{p-1} b^{\bar \alpha_i}\\[4ex]
+ &\hspace{-.25cm}\displaystyle \dfrac{a(p-1)^2a(p)^2}{b(p)}\sum\limits_{\alpha \in A^{j,1}_p} a^{2\alpha}\prod\limits_{i=1}^{p-2} b^{\bar \alpha_i},\\[4ex]
=&\hspace{-.25cm}\displaystyle
\sum\limits_{\alpha \in A^{j,1}_p\times\{0\}} a^{2\alpha}b^
{\bar \alpha}-\sum\limits_{\alpha \in A^{j+1,5}_{p}\times \{0\}} a^{2\alpha} b^{\bar \alpha}- \sum\limits_{\alpha \in A^{j,1}_p\times \{1\}} a^{2\alpha}b^{\bar \alpha}+\dfrac{a(p)^2}{b(p)}\sum\limits_{\alpha \in A^{j+1,5}_p} a^{2\alpha}b^{\bar \alpha},\\[4ex]
\displaystyle b(p)\sum\limits_{\alpha \in A^{j,2}_p}\hat a^{2\alpha}\hat b^{\bar \alpha}=&\hspace{-.25cm}\displaystyle
b(p)\sum\limits_{\alpha \in A^{j,2}_p} a^{2\alpha}b^
{\bar \alpha}-a(p-1)^2 \sum\limits_{\alpha \in A^{j,2}_p} a^{2\alpha}\prod\limits_{i=0}^{p-2} b^{\bar \alpha_i}=\sum\limits_{\alpha \in A^{j,2}_p\times \{0\}} a^{2\alpha}b^
{\bar \alpha}- \sum\limits_{\alpha \in A^{j+1,4}_{p+1}} a^{2\alpha} b^{\bar \alpha },\\[4ex]
\displaystyle b(p)\sum\limits_{\alpha \in A^{j,3}_p}\hat a^{2\alpha}\hat b^{\bar \alpha}=&\hspace{-.25cm}\displaystyle
b(p)\sum\limits_{\alpha \in A^{j,3}_p} a^{2\alpha}b^
{\bar \alpha}-a(p)^2 \sum\limits_{\alpha \in A^{j,3}_p} a^{2\alpha}\prod\limits_{i=1}^{p-1} b^{\bar \alpha_i}=\sum\limits_{\alpha \in A^{j,3}_p\times \{0\}} a^{2\alpha}b^
{\bar \alpha}-\sum\limits_{\alpha \in A^{j,3}_p\times \{1\}} a^{2\alpha}b^{\bar \alpha},\\[4ex]
\displaystyle b(p)\sum\limits_{\alpha \in A^{j,4}_p}\hat a^{2\alpha}\hat b^{\bar \alpha}=&\hspace{-.25cm}\displaystyle b(p)\sum\limits_{\alpha \in A^{j,4}_p} a^{2\alpha} b^{\bar \alpha}=\sum\limits_{\alpha \in A^{j,4}_p\times \{0\}} a^{2\alpha} b^{\bar \alpha},\\[4ex]
\displaystyle b(p)\sum\limits_{\alpha \in A^{j,5}_p}\hat a^{2\alpha}\hat b^{\bar \alpha}=&\hspace{-.25cm}\displaystyle \dfrac{a(p)^2}{b(p)}\sum\limits_{\alpha \in A^{j,5}_p} a^{2\alpha} b^{\bar \alpha}\end{array}$$ Therefore, from Lemma \[partition\] we obtain that $$\begin{array}{rl}
\displaystyle b(p)\sum\limits_{j=2}^{\lfloor\frac{p}{2}\rfloor-1}(-1)^j\sum\limits_{\alpha\in \Lambda_p^j}\hat a^{2\alpha} \hat b^{\bar \alpha} =&\hspace{-.25cm}\displaystyle \sum\limits_{j=2}^{\lfloor\frac{p}{2}\rfloor-1}(-1)^j\Bigg[\sum\limits_{\alpha \in A^{j,1}_{p+1}} a^{2\alpha}b^
{\bar \alpha}+\sum\limits_{\alpha \in A^{j,2}_{p+1}} a^{2\alpha}b^
{\bar \alpha}\Bigg]\\[4ex]
-&\hspace{-.25cm}\displaystyle\sum\limits_{j=2}^{\lfloor\frac{p}{2}\rfloor-1}(-1)^j\Bigg[\sum\limits_{\alpha \in A^{j+1,3}_{p+1}} a^{2\alpha}b^
{\bar \alpha}+\sum\limits_{\alpha \in A^{j+1,4}_{p+1}} a^{2\alpha}b^
{\bar \alpha}+\sum\limits_{\alpha \in A^{j+1,5}_{p+1}} a^{2\alpha}b^
{\bar \alpha}\Bigg]\\[4ex]
+&\hspace{-.25cm}\displaystyle \dfrac{a(p)^2}{b(p)}\sum\limits_{j=2}^{\lfloor\frac{p}{2}\rfloor-1}(-1)^j\Bigg[\sum\limits_{\alpha \in A^{j+1,5}_{p}}a^{2\alpha}b^{\bar \alpha}+
\sum\limits_{\alpha \in A^{j,5}_{p}}a^{2\alpha}b^{\bar \alpha}\Bigg]\\[4ex]
=&\hspace{-.25cm}\displaystyle \sum\limits_{\alpha \in A^{2,1}_{p+1}} a^{2\alpha}b^
{\bar \alpha}+\sum\limits_{\alpha \in A^{2,2}_{p+1}} a^{2\alpha}b^
{\bar \alpha}+\sum\limits_{j=3}^{\lfloor\frac{p}{2}\rfloor-1}(-1)^j \sum\limits_{\alpha \in \Lambda^{j}_{p+1}} a^{2\alpha}b^
{\bar \alpha}\\[4ex]
+&\hspace{-.25cm}\displaystyle (-1)^{\lfloor\frac{p}{2}\rfloor}\Bigg[\sum\limits_{\alpha \in A^{{\lfloor\frac{p}{2}\rfloor},3}_{p+1}} a^{2\alpha}b^
{\bar \alpha}+\sum\limits_{\alpha \in A^{{\lfloor\frac{p}{2}\rfloor},4}_{p+1}} a^{2\alpha}b^
{\bar \alpha}+\sum\limits_{\alpha \in A^{{\lfloor\frac{p}{2}\rfloor},5}_{p+1}} a^{2\alpha}b^
{\bar \alpha}\Bigg]\\[4ex]
+&\hspace{-.25cm}\displaystyle \dfrac{a(p)^2}{b(p)}\Bigg[
\sum\limits_{\alpha \in A^{2,5}_{p}}a^{2\alpha}b^{\bar \alpha}-(-1)^{\lfloor\frac{p}{2}\rfloor}\sum\limits_{\alpha \in A^{\lfloor\frac{p}{2}\rfloor,5}_{p}}a^{2\alpha}b^{\bar \alpha}\Bigg],
\end{array}$$ $$\begin{array}{rl}
R_p(a;b)=&\hspace{-.25cm}\displaystyle a(p-1)^2\sum\limits_{i=0}^{p-3} a(i)^2\prod\limits_{j=0\atop j\not=i,i+1}^{p-2} b(j)+a(p)^2\sum\limits_{i=1}^{p-2} a(i)^2\prod\limits_{j=1\atop j\not=i,i+1}^{p-1} b(j)-\dfrac{a(p-1)^2a(p)^2}{b(p)}\sum\limits_{i=1}^{p-3} a(i)^2\!\!\!\prod\limits_{j=1\atop j\not=i,i+1}^{p-2} b(j)\\[3ex]
=&\hspace{-.25cm}\displaystyle \sum\limits_{\alpha \in A^{2,3}_{p+1}}a^{2\alpha}b^{\bar \alpha}+\sum\limits_{\alpha \in A^{2,4}_{p+1}}a^{2\alpha}b^{\bar \alpha}+\sum\limits_{\alpha \in A^{2,5}_{p+1}}a^{2\alpha}b^{\bar \alpha}-\dfrac{a(p)^2}{b(p)}\sum\limits_{\alpha \in A^{2,5}_p}a^{2\alpha}b^{\bar \alpha},\end{array}$$ which implies that $$\begin{array}{rl}
\displaystyle b(p)\sum\limits_{j=2}^{\lfloor\frac{p}{2}\rfloor}(-1)^j\sum\limits_{\alpha\in \Lambda_p^j}\hat a^{2\alpha} \hat b^{\bar \alpha} +R_p(a;b)=&\hspace{-.25cm}\displaystyle \sum\limits_{j=2}^{\lfloor\frac{p}{2}\rfloor-1}(-1)^j \sum\limits_{\alpha \in \Lambda^{j}_{p+1}} a^{2\alpha}b^
{\bar \alpha}\\[4ex]
+&\hspace{-.25cm}\displaystyle (-1)^{\lfloor\frac{p}{2}\rfloor}\Bigg[\sum\limits_{\alpha \in A^{{\lfloor\frac{p}{2}\rfloor},3}_{p+1}} a^{2\alpha}b^
{\bar \alpha}+\sum\limits_{\alpha \in A^{{\lfloor\frac{p}{2}\rfloor},4}_{p+1}} a^{2\alpha}b^
{\bar \alpha}+\sum\limits_{\alpha \in A^{{\lfloor\frac{p}{2}\rfloor},5}_{p+1}} a^{2\alpha}b^
{\bar \alpha}\Bigg]\\[4ex]
+&\hspace{-.25cm}\displaystyle (-1)^{\lfloor\frac{p}{2}\rfloor}\Bigg[b(p)\sum\limits_{\alpha\in \Lambda^{\lfloor\frac{p}{2}\rfloor}_{p}}\hat a^{2\alpha}\hat b^{\bar \alpha}-\dfrac{a(p)^2}{b(p)}\sum\limits_{\alpha \in A^{\lfloor\frac{p}{2}\rfloor,5}_{p}}a^{2\alpha}b^{\bar \alpha}\Bigg].
\end{array}$$
Finally, using the expression for $\displaystyle b(p)\sum\limits_{\alpha\in \Lambda_p^{\lfloor\frac{p}{2}\rfloor}}\hat a^{2\alpha} \hat b^{\bar \alpha}$ in both cases, $p$ even or odd, we obtain that $$b(p)\sum\limits_{j=2}^{\lfloor\frac{p}{2}\rfloor}(-1)^j\sum\limits_{\alpha\in \Lambda_p^j}\hat a^{2\alpha} \hat b^{\bar \alpha} +R_p(a;b)=
\sum\limits_{j=2}^{\lfloor\frac{p+1}{2}\rfloor}(-1)^j \sum\limits_{\alpha \in \Lambda^{j}_{p+1}} a^{2\alpha}b^
{\bar \alpha}$$ and hence that $Q_p(\hat a;\hat b)=Q_{p+1}(a;b)$.
If we consider now $a,c\in \ell(\KK^*;p,r)$, $b\in \ell(\KK;p,r)$, then $\gamma=\sqrt{\dfrac{c^{\pi_p}}{ra^{\pi_p}}}$ and hence, $$\begin{array}{rl}
q_{p,r}(a;b;c)=Q_p(a_\phi;b_\phi)=&\hspace{-.25cm}\displaystyle \dfrac{1}{2a_\phi^{\pi_p}}
\sum\limits_{j=0}^{\lfloor\frac{p}{2}\rfloor}(-1)^j\sum\limits_{\alpha\in \Lambda_p^j}a_\phi^{2\alpha} b_\phi^{\bar \alpha}\\[3ex]
=&\hspace{-.25cm}\displaystyle \dfrac{1}{2 a^{\pi_p}\phi(a,c)^{\pi_p}}
\sum\limits_{j=0}^{\lfloor\frac{p}{2}\rfloor}(-1)^j\sum\limits_{\alpha\in \Lambda_p^j}\gamma^{2\alpha_{p-1}-1}a^{2\alpha} b^{\bar \alpha}\phi(a,c)^{2\alpha+\bar \alpha}\\[3ex]
=&\hspace{-.25cm}\displaystyle \dfrac{1}{2 \phi(a,c)^{\pi_p}}\sqrt{\dfrac{r}{a^{\pi_p} c^{\pi_p}}}
\sum\limits_{j=0}^{\lfloor\frac{p}{2}\rfloor}(-1)^j\sum\limits_{\alpha\in \Lambda_p^j}\left(\dfrac{c^{\pi_p}}{ra^{\pi_p}}\right)^{\alpha_{p-1}}a^{2\alpha} b^{\bar \alpha}\phi(a,c)^{2\alpha+\bar \alpha}.\end{array}$$ The result follows taking into account that for any $\alpha\in \Lambda_p^m$, $m=0,\ldots,\lfloor\frac{p}{2}\rfloor$ we get $$\left(\dfrac{c^{\pi_p}}{a^{\pi_p}}\right)^{\alpha_{p-1}}\phi(a,c)^{2\alpha+\bar \alpha}=\phi(a;c)^{\pi_p}\dfrac{c^\alpha}{a^\alpha}.$$
Taking into account that from Lemma \[qp:cha\], $\ell(\CC;p,r)\subset \ell(\CC;np,r^n)$ for any $p\in \NN^*$, any $r\in \KK^*$ and any $n\in \NN^*$, the quantity $q_{np,r^n}(a;b;c)$ makes sense when $a,c\in \ell(\CC^*;p,r)$ and $b\in \ell(\CC;p,r)$. In fact, we have the following relation between the Floquet functions $q_{np,r^n}$ and $q_{p,r}$.
Given $p\in \NN^*$ and $r\in \CC^*$, for any $a,c\in \ell(\CC^*;p,r)$, any $b\in \ell(\CC;p,r)$ and any $n\in \NN^*$ we have that $$q_{np,r^n}(a;b;c)=T_n\big(q_{p,r}(a;b;c)\big).$$
First, suppose that $a,c\in \CC^*$ and $b\in \CC$; that is, that the equation has constant coefficients. Then, $p=r=1$ and from Theorem \[Ff\] and Proposition \[cardinal\], for any $n\in \NN^*$ we have $$q_{n,1}(a;b;c)=\dfrac{1}{2 }\sqrt{\dfrac{1}{a^nc^n}}
\sum\limits_{j=0}^{\lfloor\frac{n}{2}\rfloor}(-1)^ja^jb^{n-2j}c^j|\Lambda_n^j|=\dfrac{1}{2 }
\sum\limits_{j=0}^{\lfloor\frac{n}{2}\rfloor}(-1)^j\dfrac{n}{n-j}{n-j\choose j}\Big(\dfrac{b}{\sqrt{ac}}\Big)^{n-2j}=T_n(q),$$ where $q=\dfrac{b}{2\sqrt{ac}}=q_{1,1}(a;b;c)$.
Assume now that $a,c\in \ell(\CC^*;p,r)$ and $b\in \ell(\CC;p,r)$ and consider $z\in \ell(\KK)$ a solution of the Equation $$a(k)z(k+1)-b(k)z(k)+c(k-1)z(k-1)=0,\hspace{.25cm}k\in \ZZ.$$
If $\gamma=\big(r\phi(a,c)(p)\big)^{-\frac{1}{2}}$, then from Theorem \[Ch:sys\], for any $m\in \ZZ$, the sequence $v(k)=\gamma^{-k}z_{p,m}(k)$ is a solution of the Chebyshev equation with parameter $q_{p,r}(a;b;c)$.
On the other hand, since $\phi(a,c)(np)=\phi(a,c)(p)^n$ we have that $\big(r^n\phi(a,c)(np)\big)^{-\frac{1}{2}}=\gamma^n$ and hence, Theorem \[Ch:sys\] also implies that for any $m\in \ZZ$, the sequence $w(k)=\gamma^{-nk}z_{np,m}(k)$ is a solution of the Chebyshev equation with parameter $q_{np,r^n}(a;b;c)$.
Finally, since $z_{np,m}(k)=z(knp+m)=z_{p,m}(nk)$, we have that $w(k)=v(nk)=v_{n,0}(k)$. Therefore, the result follows by applying the first part of this proof.
The particularization of the above Proposition to Chebyshev equations, leads to the following generalization of Lemma \[Ch:odd-even\].
\[Fc\] Given $z\in \ell(\CC)$ a Chebyshev sequence with parameter $q\in \CC$, then for any $n,m\in \NN^*$, the subsequence $z_{n,m}$ is a Chebyshev sequence with parameter $T_n(q)$.
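As a quick numerical sanity check of this corollary, the short sketch below (not part of the original text; it assumes the Chebyshev equation in the form $z(k+1)-2qz(k)+z(k-1)=0$) builds a Chebyshev sequence, extracts a subsequence $z_{n,m}$, and verifies that it satisfies the recurrence with parameter $T_n(q)$:

```python
# Numerical check: if z satisfies z(k+1) - 2q z(k) + z(k-1) = 0, then the
# subsequence z(kn + m) satisfies the same recurrence with parameter T_n(q).

def cheb_T(n, x):
    """First-kind Chebyshev polynomial T_n(x) via the three-term recurrence."""
    if n == 0:
        return 1.0
    t_prev, t = 1.0, x
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

def cheb_sequence(q, z0, z1, length):
    """Iterate z(k+1) = 2q z(k) - z(k-1)."""
    z = [z0, z1]
    while len(z) < length:
        z.append(2.0 * q * z[-1] - z[-2])
    return z

q, n, m = 0.37, 3, 2
z = cheb_sequence(q, 1.0, 0.2, 200)
w = z[m::n]                               # the subsequence z_{n,m}
qn = cheb_T(n, q)                         # expected parameter T_n(q)
err = max(abs(w[k + 1] - 2.0 * qn * w[k] + w[k - 1])
          for k in range(1, len(w) - 1))
print(err)                                # ~1e-15: the recurrence holds
```

Since $|q|<1$ here, the sequence stays bounded and the residuals are at the level of floating-point rounding.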
Applying the above result together with the identity , for any $n,m\in \NN^*$ and any $k\in \ZZ$ we have the following relations between Chebyshev polynomials of first and second kind: $$\begin{array}{rl}
T_{kn+m}(x)=&\hspace{-.25cm}T_m(x)U_k\big(T_n(x)\big)-T_{m-n}(x)U_{k-1}\big(T_n(x)\big),\\[1ex]
U_{kn+m}(x)=&\hspace{-.25cm}U_m(x)U_k\big(T_n(x)\big)-U_{m-n}(x)U_{k-1}\big(T_n(x)\big).
\end{array}$$ Taking $k=1$ at both identities we obtain the well known relations, see [@MH03] $$\label{com:first}
2T_m(x)T_n(x)=T_{n+m}(x)+T_{m-n}(x)\hspace{.25cm}\hbox{and}\hspace{.25cm}2U_m(x)T_n(x)=U_{n+m}(x)+U_{m-n}(x).$$
Taking $m=0$ in the first equation and $n=2m+2$ in the second one, we obtain the following generalizations of the first identity in both and $$\label{com:second}
T_{kn}(x)=T_k\big(T_n(x)\big)\hspace{.25cm}\hbox{and}\hspace{.25cm}U_{2k(m+1)+m}(x)=U_{m}(x)W_k\big(T_{2(m+1)}(x)\big).$$ Finally, taking $n=2m$ in the first equation and $m=n-1$ in the second one, we obtain the following generalizations of the second identity in and in , respectively $$\label{doubling:general}
T_{m(2k+1)}(x)= T_m(x)V_k\big(T_{2m}(x)\big)\hspace{.25cm}\hbox{and}\hspace{.25cm}U_{(k+1)n-1}(x)= U_{n-1}(x)U_k\big(T_n(x)\big).$$
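The displayed identities are easy to spot-check numerically. The sketch below (illustrative only) evaluates $T_{kn}(x)=T_k\big(T_n(x)\big)$ and $U_{(k+1)n-1}(x)=U_{n-1}(x)U_k\big(T_n(x)\big)$ at a sample point, using the standard three-term recurrences for both kinds of Chebyshev polynomials:

```python
# Spot check of T_{kn}(x) = T_k(T_n(x)) and
# U_{(k+1)n-1}(x) = U_{n-1}(x) U_k(T_n(x)) at a sample point.

def cheb_T(n, x):
    """First-kind Chebyshev polynomial T_n(x)."""
    if n == 0:
        return 1.0
    t_prev, t = 1.0, x
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

def cheb_U(n, x):
    """Second-kind Chebyshev polynomial U_n(x); U_{-1} = 0, U_0 = 1."""
    if n == -1:
        return 0.0
    u_prev, u = 0.0, 1.0
    for _ in range(n):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

x, k, n = 0.3, 4, 3
lhs_T = cheb_T(k * n, x)
rhs_T = cheb_T(k, cheb_T(n, x))
lhs_U = cheb_U((k + 1) * n - 1, x)
rhs_U = cheb_U(n - 1, x) * cheb_U(k, cheb_T(n, x))
print(abs(lhs_T - rhs_T), abs(lhs_U - rhs_U))   # both ~0
```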
We end this paper by analyzing when a second order difference equation with quasi-periodic coefficients with period $p$ also has quasi-periodic solutions with the same period. So, given $p\in \NN^*$, $r\in \KK^*$ and the sequences $a,c\in \ell(\KK^*;p,r)$, $b\in \ell(\KK;p,r)$, if $z\in \ell(\KK;p,\hat r)$ is a solution of Equation (\[equation\]), from Lemma \[qp:cha\] we know that $z_{p,m}\in \ell(\KK;1,\hat r)$. Moreover, Theorem \[Ch:sys\] establishes that $z_{p,m}(k)=\gamma^k v(k)$, where $\gamma=\sqrt{\dfrac{c^{\pi_p}}{ra^{\pi_p}}}$ and $v(k)$ is a solution of the Chebyshev equation with parameter $q_{p,r}(a;b;c)$. Therefore, $z\in \ell(\KK;p,\hat r)$ iff $v\in \ell(\KK;1,\hat r\gamma^{-1})$. So, applying Proposition \[Cheb:periodic\] we have the following characterization.
\[qperiodic:solutions\] Given $p\in \NN^*$ and $r\in \KK^*$ then for any $a,c\in \ell(\KK^*;p,r)$ and $b\in \ell(\KK;p,r)$, the difference equation $$a(k)z(k+1)-b(k)z(k)+c(k-1)z(k-1)=0,\hspace{.25cm}k\in \ZZ,$$ has quasi-periodic solutions with period $p$ and ratio $\hat r\in \KK^*$ iff $$\hat r a^{\pi_p}+(r\hat r)^{-1}c^{\pi_p}=
\sum\limits_{j=0}^{\lfloor\frac{p}{2}\rfloor}(-1)^j\sum\limits_{\alpha\in \Lambda_p^j}r^{-\alpha_{p-1}}a^{\alpha} b^{\bar \alpha}c ^{ \alpha}.$$
When $r=\hat r=1$, the above result establishes the necessary and sufficient condition for the existence of periodic solutions for difference equations with periodic coefficients, which represents a full generalization of Corollary \[floquet:constant\].
If $a,c\in \ell(\KK^*;p)$ and $b\in \ell(\KK;p)$, the difference equation $$a(k)z(k+1)-b(k)z(k)+c(k-1)z(k-1)=0,\hspace{.25cm}k\in \ZZ,$$ has periodic solutions with period $p$ iff $$a^{\pi_p}+c^{\pi_p}=
\sum\limits_{j=0}^{\lfloor\frac{p}{2}\rfloor}(-1)^j\sum\limits_{\alpha\in \Lambda_p^j}a^{\alpha} b^{\bar \alpha}c ^{ \alpha}.$$
The reader could compare the condition given in the above corollary with the same result in [@A00 Corollary 2.9.2].
[**Acknowledgments.**]{} This work has been partly supported by the Spanish Research Council (Comisión Interministerial de Ciencia y Tecnología) under project MTM2011-28800-C02-02.
[99]{}
Agarwal, R.P., [*Difference equations and inequalities*]{}, Marcel Dekker, 2000.
Aharonov, D., Beardon, A., Driver, K. Fibonacci, Chebyshev, and Orthogonal Polynomials. *Amer. Math. Monthly*, [**112**]{} (2005), 612-630.
Mallik, R.K. On the Solution of a Second Order Linear Homogeneous Difference Equation with Variable Coefficients. *J. Math. Anal. Appl.*, [**215**]{} (1997), 32-47.
Mason, J.C., Handscomb, D.C. *Chebyshev Polynomials*. Chapman & Hall/CRC, 2003.
---
address: 'Centre for Particle Physics, University of Alberta, Edmonton, T6G2E1, CANADA'
author:
- 'P. Gorel, for the DEAP Collaboration'
title: 'Search for Dark Matter with Liquid Argon and Pulse Shape Discrimination: Results from DEAP-1 and Status of DEAP-3600'
---
Dark matter detection using liquid argon
========================================
For decades, liquid argon (LAr) has been a candidate of choice for large scintillating detectors. It is easy to purify and boasts a high light yield, which makes it ideal for low background, low threshold experiments. The scintillation process involves the creation of excimers, either directly or through ionization/recombination. Ultra-violet photons are emitted when the excited state decays to the ground state, breaking the molecule. Since these photons are not energetic enough to recreate the excimer, argon is transparent to its own scintillation light, making it suitable for use in very large detectors.
The two excimer states are populated depending on the ionizing particle's Linear Energy Transfer. Since they have very different time constants (see Table \[tab:ScintProp\]), Pulse Shape Discrimination (PSD) is possible between ionization due to electrons ($\beta$ or $\gamma$ radiation) and that due to heavier particles (nuclear recoils or $\alpha$ particles). Since the expected WIMP signal is a nuclear recoil, it can be discriminated from the electromagnetic background, especially the high-rate $\beta$-decay of $^{39}$Ar, a naturally occurring isotope of argon (1 Bq/kg in natural argon)[@PSD].
                                           Singlet      Triplet
  ---------------------------------------- ------------ ------------------
  Time constant                            $\sim$7 ns   $\sim$1.6 $\mu$s
  Population ratio for electron ionizing   $33\%$       $67\%$
  Population ratio for nucleus ionizing    $75\%$       $25\%$

  : Scintillation properties of argon[]{data-label="tab:ScintProp"}
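To illustrate how the time constants in Table \[tab:ScintProp\] enable PSD, the sketch below computes an analytic prompt-fraction discriminator for a pulse modeled as a mix of two exponential decays. The 90 ns prompt window and the 1600 ns triplet lifetime used here are illustrative assumptions, not values taken from this text:

```python
import math

def prompt_fraction(f_singlet, window_ns, tau_s=7.0, tau_t=1600.0):
    """Fraction of scintillation light arriving before `window_ns`, for a
    pulse modeled as f_singlet * exp(-t/tau_s) + (1-f_singlet) * exp(-t/tau_t)
    (normalized two-exponential decay)."""
    return (f_singlet * (1.0 - math.exp(-window_ns / tau_s))
            + (1.0 - f_singlet) * (1.0 - math.exp(-window_ns / tau_t)))

f_electron = prompt_fraction(0.33, 90.0)   # beta/gamma-like singlet fraction
f_nuclear  = prompt_fraction(0.75, 90.0)   # nuclear-recoil-like singlet fraction
print(round(f_electron, 3), round(f_nuclear, 3))   # ~0.367 vs ~0.764
```

The clear gap between the two prompt fractions is what allows event-by-event discrimination of nuclear recoils from the $^{39}$Ar $\beta$ background.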
DEAP-1
======
In order to study the background of a LAr dark matter detector and characterize the PSD, the DEAP collaboration built a small scale prototype, DEAP-1, with 7 kg of target material, 2 photomultipliers (PMTs), and a simple geometry (Fig. \[fig:DEAP1Det\]). This detector took data between 2007 and 2011, with several iterations mainly concerned with reducing detector backgrounds.
A more detailed review of these results can be found in two papers under publication [@DEAP_Bckgd] [@DEAP_PSD]. The conclusion of this study was twofold: the collaboration acquired a good understanding of the backgrounds in DEAP-1 and confirmed that the key to improving the PSD performance is maximizing the light collection.
The remaining background can be fully explained by the leakage of the PSD at low energy and by the surface $\alpha$ events at higher energies. The main source of background, coming from the radon decay chains, was reduced to $\sim$16 $\mu$Bq/kg for $^{222}$Rn and $\sim$2 $\mu$Bq/kg for $^{220}$Rn. Other contributions to the $\alpha$ backgrounds are below 3.5$\times$10$^{-5}$ Hz.
The data confirmed that the light collection is the main parameter driving the PSD performance, and that a light collection of 8 photo-electrons/keV would improve the PSD down to $10^{-10}$ for a 60 keV threshold on the nuclear recoil. These results suggest that a tonne-scale detector would have less than one leakage event in the WIMP region of interest over 3 years of data.
DEAP-3600: Status
=================
DEAP-3600 is a tonne-scale LAr dark matter detector with a total mass of 3,600 kg of target material, and a fiducial mass of 1,000 kg (Fig. \[fig:DEAP3600Det\]). The goal is to reach a sensitivity for the WIMP elastic scattering cross section of 10$^{-46}$ cm$^2$ for a WIMP mass of 100 GeV [@TAUP2011]. The detector specifications directly follow the guidelines defined by the DEAP-1 results: firstly the mitigation of background and secondly the maximization of the light collection.
The first requirement is fulfilled by several layers of shielding coupled with a careful choice of all the materials used for the construction. The detector is being built at SNOLAB [@SNOLAB] (Canada), where the rock overburden is equivalent to 6010 m of water and the cosmic muon flux was measured to be 0.23 muon/m$^2$/day. It is located in a 7 m diameter by 7 m high ultra-pure water tank to shield against the rock radioactivity. Between the target material and the light detectors, 0.5 m of acrylic light guides and filler blocks (layers of polyethylene and styrofoam) act both as thermal insulation and as a shield against neutrons emitted by $(\alpha,n)$ reactions in the PMT glass. The acrylic vessel, containing the LAr, has been cast out of pure, distilled monomer, ensuring a high radiopurity. It should be noted that prior to the evaporation of the purified TPB onto the inner surface, a 1 mm thick layer of acrylic will be sanded off by a custom-made robotic arm in order to remove the radon daughters embedded on and under the surface during the construction.
The second requirement is achieved through the choice of a single phase design which maximizes the photocathode coverage ($\sim75\%$) with 255 8" PMTs. The efficiency is improved by the choice of high quantum efficiency Hamamatsu R5912 PMTs. In order to gather as much light as possible, the light guides have been glued to the acrylic vessel and wrapped with specular reflectors.
The construction of the detector is almost finished. The acrylic vessel (AV) is complete, most of the PMTs are mounted, and the collaboration is getting ready to place everything inside the steel shell. The electronics and cooling systems are being tested. Commissioning is expected to happen this summer, with first data scheduled this fall.
References {#references .unnumbered}
==========
[99]{}

M.G. Boulay and A. Hime, Technique for direct detection of weakly interacting massive particles using scintillation time discrimination in liquid argon, [*Astropart. Phys.*]{} [**25**]{} (2006), 179-182.

P.-A. Amaudruz [*et al*]{}, Radon backgrounds in the DEAP-1 liquid-argon-based Dark Matter detector, submitted to [*Astropart. Phys.*]{} ([*arXiv:1211.0909v3*]{}).

P.-A. Amaudruz [*et al*]{}, Measurement of the scintillation time spectra and pulse-shape discrimination of low-energy beta and nuclear recoils in liquid argon with DEAP-1, submitted to [*Astropart. Phys.*]{}

M. Boulay for the DEAP Collaboration, Proceedings of the 12th International Conference on Topics in Astroparticle and Underground Physics (TAUP 2011), arXiv:1203.0604, 2012.

http://www.snolab.ca
---
abstract: 'The contemporary society has become more dependent on telecommunication networks. Novel services and technologies supported by such networks, such as cloud computing or e-Health, hold a vital role in modern day living. Large-scale failures are prone to occur, thus being a constant threat to business organizations and individuals. To the best of our knowledge, there are no publicly available reports regarding failure propagation in core transport networks. Furthermore, Software Defined Networking (SDN) is becoming more prevalent in our society and we can envision more SDN-controlled Transport Networks (SDNTNs) in the future. For this reason, we investigate the main motivations that could lead to epidemic-like failures in Backbone Transport Networks (BTNs) and SDNTNs. To do so, we enlist the expertise of several research groups with significant background in epidemics, network resiliency, and security. In addition, we consider the experiences of three network providers. Our results illustrate that Dynamic Transport Networks (DTNs) are prone to epidemic-like failures. Moreover, we propose different situations in which a failure can propagate in SDNTNs. We believe that the key findings will aid network engineers and the scientific community to predict this type of disastrous failure scenario and plan adequate survivability strategies.'
author:
- '[^1]'
bibliography:
- 'epidemicsOTN.bib'
title: Unveiling Potential Failure Propagation Scenarios in Core Transport Networks
---
Backbone Transport Networks; SDN-controlled Transport Networks; Failure Propagation; Epidemics.
Introduction\[sec:introduction\]
================================
In the last decade, the Information and Communication Technology sector has substantially increased its dependency on communication networks for both business and pleasure. Additionally, this dependency has increased even more with the inception of the myriad of new emerging technologies and services such as smart-cities, cloud computing, e-Health and the Internet of the Things. Backbone Transport Networks (BTNs) constitute the foundations of the aforementioned network-dependent applications and services. Traditionally, BTNs have been classified in two types:
- Static Transport Networks (STNs); and

- Dynamic Transport Networks (DTNs).
Typically, BTNs are divided into three differentiated layers: Data Plane (DP), Control Plane (CP), and Network Management Plane (NMP). STNs are centrally controlled by a network management system (NMS) such that the operation is manual and predetermined. On the contrary, DTNs are architectures under an Automatically Switched Optical Network (ASON)/Generalized Multi-Protocol Label Switching (GMPLS) CP (or similar), where the management plane is a facilitator, but the actual network control is automated (to some extent) and services are configured and managed via distributed intelligence.
The advent of Software-Defined Networking (SDN) technologies such as OpenFlow might contribute to changing BTNs as we know them today, realizing the concept of SDN-controlled Transport Networks (SDNTNs) [@McKeown2008Openflow; @mcdysan2013SDTNS]. Currently, the Open Networking Foundation (ONF) has a dedicated working group defining guidelines for applying SDN standards to transport networks. Following the work in this Standards Developing Organization (SDO), major manufacturers of transport network equipment currently offer SDN-based products [@infinera2013ots]. In addition, large-scale SDNTNs such as the B4 Google network have been deployed [@googleB4]. As with traditional DTNs, SDNTNs consist of a data plane and a control plane, where the CP can be centralized or distributed [@Gringeri2013SDNTN; @distributed2013sdtns]. The SDN controller is the main component of the CP, and the applications that run at the controller provide the CP functionalities. Since extensive work regarding the development of standards for SDNTNs is ongoing, we strongly believe that next-generation transport networks will be SDN-enabled.
Although BTNs play a pivotal role in ensuring the performance and integrity of the aforementioned novel services, their ubiquity is often taken for granted. Recently, network failures of great significance have occurred, reinforcing the need to take the possibility of such large and potentially catastrophic failures into consideration in the underlying network design [@Habib2013630]. According to the European Network and Information Security Agency (ENISA), in 2011 at least 51 severe outages of communication networks were reported in Europe, each affecting about 400,000 users of fixed and mobile Internet [@CCC2011].
Many different protection and recovery techniques for single failures in communication networks have been extensively analyzed in the last decades. Consequently, in this work we focus on multiple failures, which can be broadly classified as depicted in Fig. \[fig:multfail\]. Multiple failures can be either *static* or *dynamic*. Static multiple failures are essentially one-off failures that affect nodes or links simultaneously at any given point (e.g., an earthquake). Dynamic failures have a temporal dimension. From all possible multiple failures, we focus on the dynamic scenarios and, more specifically, on epidemic-like failure scenarios, where the failure of one or more nodes might be propagated through the network, possibly resulting in an outbreak.
The aim of this work is to shed light on the following questions: *“Are epidemic-like failures or attacks likely in BTNs?”* and *“Will the upgrade to SDNTNs increase the vulnerability with respect to these type of failures or attacks?”* To do so, we present the state of the art of epidemic-like failure models in Section \[soa\]. Then, we review the main failure propagation model that has been proposed for transport networks in Section \[epidemicsontelecom\]. In Section \[vulnerability\] we discuss whether a failure propagation could occur in BTNs and SDNTNs. Section \[providers\] presents feedback from three network providers based on their experience in addition to the results of our research. Finally, Section \[sec:conclusions\] reviews the main contributions and findings of this work.
![Multiple failures broad classification. An epidemic-like failure is a process where a temporary failure propagates to physical neighbors. A cascading failure that triggers failures in physical neighbors is also considered as an epidemic-like failure.[]{data-label="fig:multfail"}](multfail2.pdf)
What is an epidemic-like failure propagation?\[soa\]
====================================================
Epidemics theory has been used to describe and predict the propagation of diseases, human interactions, natural phenomena, and failures in a wide range of networks. An epidemic-like failure is a dynamic process where a partial/temporary failure propagates to physical neighbors. The spreading of the aforementioned events is formally represented by epidemic models, and can be generally classified in one of the following three families:
- The *Susceptible-Infected* (SI) considers individuals as being either susceptible (S) or infected (I). This family assumes that the infected individuals will remain infected forever, and so can be used for worst-case propagation scenarios ($S\rightarrow I$).
- The *Susceptible-Infected-Susceptible* (SIS) considers that a susceptible individual can become infected on contact with another infected individual, then recovers with some probability of becoming susceptible again. Therefore, individuals might change their state from susceptible to infected, and vice versa, repeatedly ($S\leftrightarrows I$).
- The *Susceptible-Infected-Removed* (SIR), which extends the SI model to take into account a removed state. Here, an individual can be infected just once because when the infected individual recovers, it becomes immune and will no longer pass the infection onto others ($S\rightarrow I \rightarrow R$).
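For the SIS family, a widely used mean-field result (not discussed in this text) states that an infection with per-contact rate $\beta$ and recovery rate $\delta$ dies out when $\beta/\delta < 1/\lambda_{max}(A)$, where $\lambda_{max}(A)$ is the spectral radius of the network's adjacency matrix. A minimal sketch on a 5-node star graph:

```python
import numpy as np

# Adjacency matrix of a 5-node star (hub = node 0, leaves = nodes 1..4).
A = np.zeros((5, 5))
A[0, 1:] = 1.0
A[1:, 0] = 1.0

lam_max = float(np.linalg.eigvalsh(A).max())   # spectral radius (= 2 here)
beta, delta = 0.3, 0.5                         # infection / recovery rates
above_threshold = (beta / delta) > 1.0 / lam_max

print(lam_max, above_threshold)   # above threshold -> epidemic can persist
```

Under this criterion, denser topologies (larger $\lambda_{max}$) sustain an epidemic at lower infection rates, which is one reason topology matters when assessing the vulnerability of a network to epidemic-like failures.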
As shown in Fig. \[fig:multfail\], the subset of cascading failures intersects the subset of epidemic failures. Cascading failures are common in most critical infrastructures such as telecommunications, electrical power, rail, and fuel distribution networks [@Strogatz2001]. In telecommunication networks, we consider cascading failures as an epidemic when it occurs due to a malfunctioning in one node of a network which eventually triggers a failure in its neighbors. Real cascading failures in telecommunication networks have been observed in the IP layer of the Internet and in the physical layer of BTNs [@wrap32818]. The propagation of a cascading failure happens gradually in phases: after the initial failure (e.g., a massive broadcast of a routing message with a bug), some of the neighboring nodes get overloaded and fail. This first step leads to further overloading of more nodes and their collapse, constituting the second step and so on. In this way, networks go through multiple stages of cascading failures before they finally stabilize and there are no more failures. It is worth noting that cascading failures in other critical infrastructures, such as power grids, do not necessarily propagate by the physical contact of nodes or links, but by the load balancing in the global network. In such cases, cascading failures are not similar to epidemics, and thus are out of the scope of this work.
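The phased behavior described above can be reproduced with a toy load-redistribution model. The sketch below is purely illustrative (the ring topology, unit loads, and the capacity value are assumptions, not taken from the text): when a node fails, its load is split equally among its still-working neighbors, and any node pushed above capacity fails in the next phase:

```python
# Toy cascading failure on a 4-node ring seeded at node 0.
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
load = {v: 1.0 for v in neighbours}
capacity = 1.4
failed, wave, phases = set(), {0}, 0

while wave:
    phases += 1
    failed |= wave
    for v in wave:                          # redistribute load of new failures
        alive = [u for u in neighbours[v] if u not in failed]
        for u in alive:
            load[u] += load[v] / len(alive)
        load[v] = 0.0                       # load is shed if no alive neighbour
    wave = {u for u in load if u not in failed and load[u] > capacity}

print(phases, sorted(failed))               # 3 [0, 1, 2, 3]: full collapse
```

Even this tiny example shows the stepwise progression: one seed failure overloads its neighbors in phase two, and the last node collapses in phase three.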
A Failure Propagation Model for Telecommunication Networks\[epidemicsontelecom\]
================================================================================
Calle et al. pioneered the research in this field by presenting a new epidemic model called *Susceptible-Infected-Disabled* (SID), which relates each state with a specific functionality of a node in the network [@calle2010multiple]. The state diagram of the SID model (*Susceptible$\leftrightarrows$Infected$\rightarrow$Disabled$\rightarrow$Susceptible*), as seen from a single node, is shown in Fig. \[fig:sid\]. Each node, at each moment of time, can be either susceptible (S), infected (I) or disabled (D). A susceptible node can be infected with probability $\beta$ by receiving the infection from a neighbor (e.g., a bug in the routing or signaling protocol). An infected node can be repaired with probability $\delta_1$ (e.g., the network operator might manually reboot the CP). Finally, the disabled state takes into account the fact that the CP failure eventually affects the DP of the node with probability $\tau$ (e.g., the forwarding tables of the DP become unaccessible). After that, the model states that a repairing time, such as the mean time to repair (MTTR) of the node, determines when it becomes susceptible again ($\gamma$) (e.g., the required time to replace the node).
No operations can be performed during control plane node failures. However, as long as the data plane of the node does not fail, established connections should not fail or be re-routed as a result of control plane node failures. The routing protocol is assumed to be capable of detecting the failure of the control plane and informing all other nodes. Once the routing protocol has converged with this new information, a new connection will not be routed through this node. This same behavior is taken into account by the SID epidemic model.
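A homogeneous mean-field version of the SID dynamics can be sketched as follows; all parameter values are illustrative assumptions and are not taken from the paper:

```python
# Mean-field sketch of the SID model: s, i, d are the network-wide fractions
# of Susceptible, Infected and Disabled nodes; k_mean is the mean node degree.

def sid_step(s, i, d, beta, delta1, tau, gamma, k_mean, dt):
    new_inf = beta * k_mean * s * i    # S -> I: infection by a neighbour
    repair  = delta1 * i               # I -> S: control-plane repair
    disable = tau * i                  # I -> D: data plane finally fails
    replace = gamma * d                # D -> S: node replacement (MTTR)
    return (s + dt * (repair + replace - new_inf),
            i + dt * (new_inf - repair - disable),
            d + dt * (disable - replace))

s, i, d = 0.99, 0.01, 0.0              # start with 1% of nodes infected
for _ in range(5000):                  # Euler integration, 50 time units
    s, i, d = sid_step(s, i, d, beta=0.4, delta1=0.2, tau=0.1,
                       gamma=0.05, k_mean=3.0, dt=0.01)

print(round(s, 3), round(i, 3), round(d, 3))   # endemic steady state
```

With these (assumed) rates the infection persists, and a sizeable fraction of nodes ends up disabled — the qualitative outcome the SID model is designed to capture.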
![State diagram of the SID model and its relationship with the STN and DTN planes.[]{data-label="fig:sid"}](sidstates.pdf)
\
According to the SID model, Fig. \[fig:failurepropagation\] illustrates how a failure can propagate in a BTN. In Fig. \[fig:prop1\], the network operates properly and thus all nodes are in the susceptible state. At a given time $t$, the network management system updates a module of a controller with software that contains a bug, as shown in Fig. \[fig:prop2\]. As a result, this node becomes infected and propagates the failure (e.g., the bug) to its neighbors, as observed in Fig. \[fig:prop3\]. The epidemic continues to spread while the CP of an infected node eventually affects its DP operation. In this case, the incapability to resolve the problem at the CP might necessitate a complete node replacement (e.g., shutdown), thus impacting the operation of the DP. Consequently, this node becomes disabled as shown in Fig. \[fig:prop4\].
Throughout the last decades, several failures have spread through communication networks. In the early 90s, a rapidly spreading malfunction collapsed AT&T's long distance network[^2], causing the loss of more than \$60 million in terms of unconnected calls. In 2002, a failure propagation in the IP layer of the Internet was caused by a vulnerability of the BGP protocol. More recently, a BGP update bug which propagated through Juniper routers caused a major Internet outage in 2011[^3]. In this latter case, routers were resetting and re-establishing their functioning state after five minutes.
Although there are no commercial references or reports regarding the occurrence of failure propagation in BTNs, in the following sections we identify several failure scenarios that could be modelled as epidemic-like spreadings.
Failure Propagation Scenarios in Transport Networks\[vulnerability\]
====================================================================
This section describes BTNs and SDNTNs, showing that both contemporary and future networks might be predisposed to enduring epidemic-like failures.
Backbone Transport Networks
---------------------------
As explained in Section \[sec:introduction\], a BTN is divided into Data Plane (DP), Control Plane (CP) and Network Management Plane (NMP), which have the following functionalities:
- Data plane (or transport plane): responsible for user data transport, usually called data-path.
- Control plane: responsible for connection and resource management, which can be either associated with or separated from the managed DP.
- Management plane: responsible for supervision and management of the whole system (including transport and control planes).
As mentioned, there are two types of BTNs: static and dynamic. In STNs, the control plane does not act autonomously, but takes orders and updates from the NMP, which is eventually driven by a Network Management System (NMS). Thus, the intelligence of STNs is centralized. On the contrary, in DTN architectures such as GMPLS, the control plane dynamically reconfigures the network according to its current state. In this case, the NMP is only used as a facilitator (e.g., initializing or updating the software of the CP). In DTNs, the control plane exists in each physical router and is hence distributed. For instance, GMPLS-based DTNs are networks where the control plane runs over an IP/Ethernet network, while the data plane runs over a wavelength-routed WDM network.
Failures occurring in BTNs can be classified in two broad groups:
- Faults: They include component failures as a result of natural exhaustion, human errors, and natural disasters.
- Attacks: They are intentional component failures. In fact, components may be selected to maximize the resulting impact of the attack. Furthermore, the identified targets may depend on various criteria such as the number of potentially affected users and additional socio-political and economic considerations.
There are three possible types of failures in BTNs, namely: (a) link, (b) node, and (c) software failures. In 2011, 41% of failures were attributed to hardware and software, 12% to human errors, 12% to natural disasters, and 6% to malicious attacks [@CCC2011]. Although the percentage of malicious attacks was the smallest, they resulted in an average of 31 outage hours, as opposed to 17 outage hours for other failures. Therefore, despite the fact that targeted failures are not frequent, they are of paramount importance because they cause major disruptions. Control plane modules may fail due to software or hardware bugs, or protocol logic errors. Recovery from such failures may involve switching to a hot standby, if a redundant software process has been implemented for that module. If the routing and signaling modules are implemented as separate processes, then either one of them may fail independently. If the signaling module fails, then new connections cannot be established through this node, and existing connections cannot be deleted.
In STNs, epidemic-like spreading of failures is rare because CP nodes do not interact with each other (i.e., they only communicate with the NMS). Therefore, if the NMS were compromised, a massive network failure affecting first the CP nodes and then the DP ones could easily occur (e.g., all nodes could be shut down). Nonetheless, this would not be an epidemic-like failure scenario, but a catastrophic static one. On the contrary, in DTNs an error during a software update, or a failure produced by the software itself, could induce a CP node failure, which could spread through the network depending on the type of failure (e.g., a message with wrong objects in its fields). For instance, this initial failure, just like an infection, could be caused by the NMS. The spreading itself would be carried out between all nodes of the CP. This scenario is illustrated in Fig. \[fig:failurepropagation\].
SDN-controlled Transport Networks
---------------------------------
As with DTNs, SDNTNs require a DP and a CP. The SDN-controllers constitute the CP of a SDNTN and the network nodes constitute the DP. Applications that run at the controller provide the CP functionalities. Therefore, the SDN-controller can host different applications that support various carrier-class transport technologies such as SDH, DWDM, Ethernet or the suite of protocols of GMPLS.
Fig. \[fig:sdntn\] illustrates a possible scenario with three network providers operating SDNTNs. As observed, the CP can be either centralized (providers A and B) or distributed (provider C) [@Gringeri2013SDNTN; @distributed2013sdtns]. As shown in the bottom left, the SDN-controller might host different applications that run on a Network Operating System (NOS). In addition, there is a component called FlowVisor that acts as a transparent proxy between the DP nodes and the SDN-controller. FlowVisor is a network virtualization layer and its main objective is to make sure that each user of the SDN-controller controls its own virtual network [@sherwood2009flowvisor].
![Possible scenario of SDN transport networks.[]{data-label="fig:sdntn"}](sdntn.pdf)
The emergence of SDNTNs implies an increased reliance on software. On the one hand, this aspect brings many benefits such as network programmability and control logic centralization. On the other hand, however, it is also the source of major security concerns. It has been argued that SDN introduces new threat vectors with respect to traditional networks [@vectorsvulnerable2013SDN]. For instance, software faults hold a pivotal role in the reliability of SDNTNs, and significant efforts are made to define tools able to detect SW bugs or errors [@Canini2012NICE]. Additionally, other types of failures could occur due to policy (also called rule) conflicts, given that several users might work on the same physical network, each one on its own virtualized network. In such cases, FlowVisor is in charge of ensuring compatibility among all policies.
There are several cases in which epidemic-like failures could occur in SDNTNs. We classify these scenarios according to the initial event that triggers the propagation, which can be either a fault or an attack. First, in order to illustrate some potential scenarios in SDNTNs, we assume that a DDoS attack can be launched, for instance, by following the method provided in [@attacking2013SDN]. Consequently, we propose two propagation scenarios:
- *Vertical propagation*, which can be bottom-top or top-bottom: as shown in Fig. \[fig:sdntn\_prop2\], an attacker launches a DoS attack against an OpenFlow switch, which is connected to a primary SDN-controller. In order to increase the resilience of the system, the switch has been assigned a secondary SDN-controller. The switch needs to contact the SDN-controller for every packet. The SDN-controller is not able to handle all the new flow requests and fails. Then, the switch redirects its traffic to the secondary SDN-controller, where the same situation can happen. If the failing SDN-controller is the only controller available for other OpenFlow switches, then the switches lose their connection with the controller and can be considered as failed.
- *Horizontal DP propagation*: as observed in Fig. \[fig:sdntn\_prop1\], an attacker injects huge quantities of traffic into an edge OpenFlow switch (e.g., a DoS attack). The destination of the injected traffic is outside the SDNTN, and to reach it, the traffic has to be routed through core switches and exit via another edge switch. Since SDN allows heterogeneous network scenarios (e.g., hardware from different vendors), we assume that edge OpenFlow switches have high-performance capabilities, while core OpenFlow switches are commodity devices. Therefore, the huge amount of injected traffic can eventually overflow the buffers of the core switches, causing successive failures in the DP.
Second, it is possible to observe the same scenario shown in Fig. \[fig:sdntn\_prop1\] caused by a fault. In such a case, instead of being caused by a DoS attack, the switch failures would occur due to, for instance, a software bug in the routing protocol of the SDN-controller. Lastly, it is worth noting that the proposed horizontal DP propagation, as well as the vertical propagation, are closer to cascading failures. However, in the case of a distributed CP with several SDN-controllers, an epidemic-like spreading could happen if, for instance, the SDN-controllers were infected by sophisticated worms such as Stuxnet[^4].
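The horizontal DP propagation discussed above can be rough-sketched as a capacity-based load-redistribution cascade. This is only an illustrative abstraction, not a model proposed in this work: the path topology, loads, and capacities below are invented, and real buffer overflows depend on queueing dynamics that this sketch ignores.

```python
def cascade(load, capacity, adj, seeds):
    """Fail the seed nodes, redistribute each failed node's load evenly
    among its surviving neighbors, and repeat while new nodes exceed
    their capacity. Load reaching a node with no surviving neighbors is
    simply dropped. Returns the set of failed nodes."""
    failed = set()
    frontier = set(seeds)
    while frontier:
        failed |= frontier
        for node in frontier:
            alive = [nb for nb in adj[node] if nb not in failed]
            for nb in alive:
                load[nb] += load[node] / len(alive)
        frontier = {n for n, l in load.items()
                    if n not in failed and l > capacity[n]}
    return failed

# Path topology: edge A -- core c1 -- core c2 -- edge B.
adj = {"A": ["c1"], "c1": ["A", "c2"], "c2": ["c1", "B"], "B": ["c2"]}
load = {"A": 5.0, "c1": 3.0, "c2": 3.0, "B": 1.0}
capacity = {"A": 10.0, "c1": 4.0, "c2": 4.0, "B": 10.0}  # commodity cores

load["c1"] += 3.0  # injected DoS traffic overloads the first core switch
failed = cascade(load, capacity, adj,
                 seeds=[n for n, l in load.items() if l > capacity[n]])
```

In this toy run the overload on the first commodity core spills onto the second core and takes it down too, while the high-performance edge switches survive, mirroring the successive DP failures described in the scenario.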
Transport Network Providers Experience with Large-Scale Failures\[providers\]
=============================================================================
With the purpose of validating some of the failure scenarios mentioned in the previous section, we have approached three network operators asking them about their experience with epidemic and cascading failure occurrence. The three operators were willing to discuss network vulnerabilities and specific spreading patterns within their networks. However, they were only willing to provide this information under a non-disclosure agreement. Consequently, we refer to the three operators as FO, SO and TO, which stand for first, second and third operator, respectively.
According to the experience of the FO with their STN, the most typical errors of hardware (HW) components are closely related to software (SW) bugs. Nonetheless, these failures have no connection to the outside world, i.e., no spreading is possible. The FO stated that their current procedures to operate their STN, which are based on the NMP, could potentially be vulnerable if the management server were compromised. This is highly unlikely, since the NMS has no access to the global Internet and strong firewall protection is provided. However, assuming a system breach, the damage to the network could be devastating, i.e., the entire network could fail simultaneously. As to epidemic-like failures, the only situation that the FO could see as problematic would be the SW update process. The SW update happens step-wise, where portions of the network are updated. Furthermore, when an element is updated, it has 2 banks of SW (i.e., an active one and a hot-standby). Typically, the server updates the hot-standby, then switches the operation of the element to that bank of SW, and then updates the primary one. If SW with a bug were loaded onto a node, the node could stop operating. Since in STNs the node only communicates with the management server, no direct “infection” of the neighbors can occur. Nevertheless, considering that an entire region is updated (possibly with the same erroneous SW), several elements can potentially be hit.
The SO uses equipment from a different vendor for their STN, and their experience with that equipment differs from that of the FO. In general, node HW failures are negligible in comparison with fiber cuts. The SO also suggested that SW updates might cause an unresponsive management entity on a node. Furthermore, according to the SO, there is another potential vulnerability: the Digital Communication Network (DCN) interconnecting all nodes in a STN for the purpose of management and configuration information exchange. Typically, this case is related to standard packet-switched network problems, which are well-known from the IP world. Nonetheless, the SO outlined one possible problem: if buggy SW or mis-configurations were installed in the main routers (the designated router, for example), nodes could start flapping the routes they knew from one port to the other. If one node became confused, it could advertise wrong routes to the rest, and if flapping occurred, blocking of the controller entities would potentially be possible. This same scenario could spread to the rest of the nodes, since they would also need to readjust their router interfaces.
The TO has extensive experience with GMPLS control plane technologies. This expert asserts that in STNs an epidemic spreading of failures is possible only if SW elements are failing, and if some type of distributed network (such as a DCN) is deployed between nodes. On the contrary, in DTNs, where GMPLS is assumed to be one of the control plane solutions for automatic provisioning and operation, this picture might change. The TO stated that there are numerous examples of spreading of failures in contemporary distributed IP networks (e.g., based on BGP update bugs). Moreover, since the GMPLS CP is mostly a distributed IP-based network, epidemic-like failures can reasonably be expected. Three examples were outlined:
1. When a node is affected by a CP failure and it attempts to recover its RSVP-TE state, synchronization becomes very difficult. This process involves synchronization with the neighbors, with the DP and with the NMP. Any of these three steps might fail for different reasons and this can directly impact the knowledge and states of the neighboring nodes. Such a process pertains mainly to the CP and possibly would affect only some of the connections. However, the fact is that an unsuccessful process of synchronization might lead to nodes operating in a state which is erroneous and potentially influencing the node’s service delivery capability.
2. If a node fails, the neighbors of the node must update their state and the state of the network. In time, the neighbors will have moved towards a state they are not “used to”, i.e., an atypical state (the typical state is when everything is properly functioning). In this situation, the neighbors might move closer to an unstable state of operation (e.g., if a node switches from a functioning to a non-functioning state and vice versa), and this can potentially bring these nodes into a dysfunctional state.
3. Carrier-class components typically undergo very stringent tests, and the requirements for robustness are extensive. Nevertheless, it is still possible to observe implementations where a failure in one single process might cause other processes in the same node to stop functioning. For example, this can occur if processes (routing and signaling) share CPUs (which might suffer HW/SW problems) or if they share memory (buffer overflows).
Finally, the TO indicated that RSVP-TE is highly vulnerable to problems, and when problems occur, recovering the functional state in the network is much more difficult (due to the synchronization process described earlier). OSPF (Open Shortest Path First) failures are simpler to fix and synchronization between nodes might be established in a shorter period of time. Nevertheless, mis-configured routing is one of the main sources of large-scale failures in networks, as stated in the previous sections of this work. Thus, OSPF is also considered as a potential vulnerability.
Conclusions\[sec:conclusions\]
==============================
In this paper, we have studied the possibility of observing epidemic-like failures in Backbone Transport Networks and in SDN-controlled Transport Networks. We have analyzed both architectures and related each of them to several failure scenarios. Finally, to reinforce our study, three network operators have revealed several vulnerabilities in STNs and DTNs that could eventually lead to epidemic-like failures.
One of the most significant differences between traditional BTNs and SDNTNs is that BTNs are closed systems, so the surface of potential risks that might initiate the propagation of a failure is smaller than that of SDNTNs, where heterogeneous scenarios prevail. In any case, human-induced errors are the major issue, which could be either deliberate or unintentional. For this reason, the robustness of the CP of both DTNs and SDNTNs depends highly on the applied SW engineering principles, and on the way the processes and protocols are configured. Poor network management, mis-configured controllers, or inconsistent protocol implementations might become the root cause of epidemic-like failures in the CP. Additionally, cyber-security plays a pivotal role in DTNs and SDNTNs, given that in such networks the operation of each node relies exclusively on software. Since one of the potential threats is to compromise either the network management plane or the SDN-controller, it will be a matter of how to hack a closed system to gain unauthorized access. To conclude, we anticipate that this paper will inspire researchers and professionals to design and implement security mechanisms to enhance the resilience of transport networks under dynamic multiple failure scenarios.
Acknowledgements {#acknowledgements .unnumbered}
================
This work is partially supported by Spanish Ministry of Science and Innovation project TEC 2012-32336, and by the Generalitat de Catalunya research support program SGR-1202. This work is also partially supported by the Secretariat for Universities and Research (SUR) and the Ministry of Economy and Knowledge through AGAUR FI-DGR 2012 and BE-DGR 2012 grants (M. M.)
[^1]: Marc Manzano and Eusebi Calle are with University of Girona, Spain. Anna Manolova Fagertun and Sarah Ruepp are with Technical University of Denmark, Denmark. Caterina Scoglio is with Kansas State University, USA. Ali Sydney is with Raytheon BBN Technologies, USA. Antonio de la Oliva and Alfonso Muñoz are with University Carlos III of Madrid, Spain. Corresponding author: Marc Manzano (email: mmanzano@eia.udg.edu - marcmanzano@ksu.edu).
[^2]: <http://users.csc.calpoly.edu/~jdalbey/SWE/Papers/att_collapse.html>
[^3]: <http://www.zdnet.com/juniper-fail-seen-as-culprit-in-site-outages-4010024743/>
[^4]: <http://spectrum.ieee.org/telecom/security/the-real-story-of-stuxnet>
---
abstract: 'Protein-stabilised emulsions can be seen as mixtures of unadsorbed proteins and of protein-stabilised droplets. To identify the contributions of these two components to the overall viscosity of sodium caseinate o/w emulsions, the rheological behaviour of pure suspensions of proteins and droplets were characterised, and their properties used to model the behaviour of their mixtures. These materials are conveniently studied in the framework developed for soft colloids. Here, the use of viscosity models for the two types of pure suspensions facilitates the development of a semi-empirical model that relates the viscosity of protein-stabilised emulsions to their composition.'
author:
- Marion Roullet
- 'Paul S. Clegg'
- 'William J. Frith'
bibliography:
- 'References.bib'
title: 'Viscosity of protein-stabilised emulsions: contributions of components and development of a semi-predictive model'
---
Introduction
============
Despite their complexity, food products can be conveniently studied from the perspective of colloid science [@mezzenga:2005]. In the last three decades, research in the field of food colloids has led to major advances in understanding their structure over a wide range of lengthscales [@dickinson:2011], which has proved key to developing a good control of their flavour and texture properties [@vilgis:2015].
Many food products such as mayonnaise, ice cream, and yogurt involve protein-stabilised emulsions either during their fabrication or as the final product. Proteins have particularly favourable properties as emulsifiers because of their ability to strongly adsorb at oil/water interfaces and to stabilise oil droplets by steric and electrostatic repulsion. However, proteins do not completely adsorb at the interface, leaving a residual fraction of protein suspended in the continuous phase after emulsification [@srinivasan:1996; @srinivasan:1999]. Protein-stabilised emulsions are thus mixtures of protein-stabilised droplets and suspended proteins, as illustrated in Figure \[Fig:CartoonProtDrop\]. Understanding the contributions of these two components to the properties of the final emulsion remains a challenge.
![Illustration of a protein assembly, protein-stabilised droplet, and protein-stabilised emulsion seen as a mixture of droplets and un-adsorbed proteins. []{data-label="Fig:CartoonProtDrop"}](Fig1){width="80.00000%"}
When considered separately, the droplets in protein-stabilised emulsions can be treated as colloidal particles with some degree of softness. It is thus possible to compare the rheological properties of protein-stabilised emulsions to other types of soft particle suspension and to model their behaviour. From a theoretical point of view, particles, colloidal or not, can be described as soft if they have the ability to change size and shape at high concentration [@vlassopoulos:2014]. Such a definition covers a striking variety of systems, including gel microparticles [@adams:2004; @shewan:2015], microgels [@cloitre:2003; @tan:2005], star polymers [@roovers:1994; @winkler:2014] and block co-polymer micelles [@lyklema:2005:4]. These systems have been the focus of many studies in the last two decades; however, one major challenge to comparing the behaviour of such diverse materials is the lack of a well-defined volume fraction $\phi$ for the suspensions.
To overcome the challenge of defining the volume fraction of soft colloids, a common approach is to use an effective volume fraction $\phi_{eff}$ proportional to the concentration $c$, $\phi_{eff}=k_0\times c$, where $k_0$ is a constant indicating the voluminosity of the soft particle of interest, usually determined in the dilute or semi-dilute regime. Such a definition of $\phi_{eff}$ does not take into account the deformation or shrinking of the particle at high concentrations, so high values ($\phi_{eff}>1$) can be reached. $k_0$ can be estimated using osmometry [@farrer:1999], light scattering [@vlassopoulos:2001] or viscosimetry [@tan:2005; @roovers:1994; @boulet:1998]. In this study, $k_0$ was estimated, for each individual component of the emulsions, by modelling the relative zero-shear viscosity $\eta_0/\eta_s$ of the pure suspensions in the semi-dilute regime with the Batchelor equation for hard spheres [@batchelor:1977]: $$\label{Eq:Batchelor}
\frac{\eta_0}{\eta_s} = 1+2.5 \phi_{eff}+6.2 \phi_{eff}^2$$
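For reference, Equation \[Eq:Batchelor\] is straightforward to evaluate numerically; the short sketch below simply tabulates the relative viscosity at a few illustrative effective volume fractions (the chosen $\phi_{eff}$ values are arbitrary).

```python
def batchelor_relative_viscosity(phi_eff):
    """Relative zero-shear viscosity eta_0/eta_s of a hard-sphere
    suspension in the semi-dilute regime (Batchelor, 1977)."""
    return 1.0 + 2.5 * phi_eff + 6.2 * phi_eff ** 2

for phi in (0.0, 0.05, 0.10):
    print(f"phi_eff = {phi:.2f}  ->  eta_0/eta_s = "
          f"{batchelor_relative_viscosity(phi):.3f}")
```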
Sodium caseinate is used here to stabilise emulsions as a case-study, because of its outstanding properties as a surface-active agent and stabiliser, and because it is widely used in industry. Sodium caseinate is produced by replacing the calcium in native milk casein micelles, with sodium, to increase its solubility [@dalgleish:1988], a process which also leads to the disruption of the micelles. It has been established that sodium caseinate is not present as a monomer in suspension, but rather in the form of small aggregates [@lucey:2000]. The exact nature of the interactions in play in the formation of these aggregates is not well-known but they have been characterised as elongated and their size estimated to be around $\SI{20}{\nano\metre}$ [@farrer:1999; @lucey:2000; @huppertz:2017]. Some larger aggregates can also form in presence of residual traces of calcium or oil from the original milk, however these only represent a small fraction of the protein [@dalgleish:1988]. The viscosity behaviour of sodium caseinate as a function of concentration shows similarities with hard-sphere suspensions at relatively small concentrations, but at higher concentrations, over $c>\SI{130}{\gram\per\liter}$, the viscosity continues to increase with a power-law rather than diverging [@farrer:1999; @pitowski:2008] as would be expected for a hard sphere suspension [@faroughi:2014].
In this study, the rheology of protein-stabilised emulsions is examined within the framework of soft colloidal particles. Modeling proteins in this way ignores protein-specific elements, such as surface hydration, conformation changes, association, and surface charge distribution [@sarangapani:2013; @sarangapani:2015], but it provides a convenient theoretical framework to separate and discuss the contributions of both sodium caseinate and the droplets to the viscosity of emulsions. Similarly, protein-stabilised droplets can be seen as comprising an oil core and a soft protein shell [@bressy:2003], allowing for a unifying approach for both components of the emulsions.
The aim of this study is to present a predictive model of the viscosity of protein-stabilised emulsions, which takes into account the presence and behaviour of both the protein-stabilised droplets and the unadsorbed protein. A first step is to characterise separately the flow behaviour and viscosity of suspensions of purified protein-stabilised droplets, and of protein suspensions over a wide range of concentrations. This also allows a critical assessment of the soft colloidal approach. These components are then combined to form mixtures of well-characterised composition and their viscosity is compared to a semi-empirical model. Because they are well dispersed, most of the suspensions and emulsions display a Newtonian behaviour at low shear, with shear thinning at higher shear-rates. In this context, we model the concentration dependence of zero-shear viscosity and the shear-thinning behaviour separately to confirm the apparent colloidal nature of the components of the emulsions and protein suspensions.
Materials & Methods
===================
Preparation of protein suspensions
----------------------------------
Because of its excellent ability to stabilise emulsions, sodium caseinate (Excellion S grade, spray-dried, kindly provided by DMV, Friesland Campina, Netherlands) was used in this study. It was further purified by first suspending it in deionised water, at $5-9 \%$ (w/w), and then by mixing thoroughly with a magnetic stirrer for $\SI{16}{\hour}$. After complete dispersion, a turbid suspension was obtained, which was centrifuged at $\num{40000}\times$g (Evolution RC, Sorvall with rotor SA 600, Sorvall and clear $\SI{50}{\milli\liter}$ tubes, Beckmann) for $\SI{4}{\hour}$ at $\SI{21}{\degreeCelsius}$. Subsequently, the supernatant, made of residual fat contamination, and the sediment were separated from the suspension, which was now clearer. The solution was then filtered using a $\SI{50}{\milli\liter}$ stirred ultra-filtration cell (Micon, Millipore) with a $\SI{0.45}{\micro\meter}$ membrane (Sartolon Polyamid, Sartorius). In order to avoid spoilage of the protein solution, 0.05% of ProClin 50 (Sigma Aldrich) was added. The suspension at $5\%$ (w/w) was then diluted to the required concentration. Concentrated suspensions of sodium caseinate were prepared by evaporating a stock solution of sodium caseinate at $\SI{5}{\percent}$(w/w), prepared following the previous protocol, using a rotary evaporator (Rotavapor R-210, Buchi). Mild conditions were used to avoid changing the structure of the proteins: the water bath was set at $\SI{40}{\degreeCelsius}$ and a vacuum of $\SI{45}{\milli\bar}$ was used to evaporate water. The concentration of all the suspensions after purification was estimated by refractometry, using a refractometer RM 50 (Mettler Toledo), LED at $\SI{589.3}{\nano\metre}$ and a refractive index increment of $dn/dc =\SI[separate-uncertainty=true]{0.1888(00033)}{\milli\liter\per\gram}$ [@zhao:2011].
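The refractometric concentration estimate used here amounts to dividing the measured refractive index excess by $dn/dc$. A minimal sketch follows; the two refractive index values in the example are illustrative, not measurements from this work.

```python
DN_DC = 0.1888  # mL/g, refractive index increment at 589.3 nm (from the text)

def concentration_from_refractometry(n_solution, n_solvent):
    """Protein concentration in g/mL from the measured refractive index
    excess, c = (n - n_0) / (dn/dc)."""
    return (n_solution - n_solvent) / DN_DC

# Illustrative values only: an excess of about 0.0094 over the solvent
# corresponds to roughly 0.05 g/mL (~5% w/v) protein.
c = concentration_from_refractometry(1.3424, 1.3330)
```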
![Size distributions of sodium caseinate after the purification protocol. The sample was fractionated by Asymmetric Flow Field-Flow Fractionation (kindly performed by PostNova Analytics Ltd), and the sizes were measured online by Dynamic Light Scattering (dotted line, red) and by Multi Angle Light Scattering (dash-dotted line, orange). The relative percentage of each class is weighted by the intensity of the scattered light. The inset is a zoom of the small fraction of proteins that are present as larger aggregates.[]{data-label="Fig:SizeProtFFF"}](Fig2){width="90.00000%"}
Size analysis by Flow Field Fractionation (kindly performed by PostNova Analytics Ltd) showed that the resulting suspensions of sodium caseinate were composed of small aggregates of a hydrodynamic radius of $\SI{11}{\nano\metre}$ at $\SI{96}{\percent}$, while the remaining $\SI{4}{\percent}$ formed larger aggregates with a wide range of sizes (hydrodynamic radii from $\SI{40}{\nano\metre}$ to $\SI{120}{\nano\metre}$) as shown in Figure \[Fig:SizeProtFFF\].
Preparation of emulsions
------------------------
Nano-sized caseinate-stabilised droplets were prepared in two steps. First, the pre-emulsion was produced by mixing $\SI{45}{\milli\gram\per\milli\liter}$ sodium caseinate solution (prepared as detailed previously) with glyceryl trioctanoate ($\rho=\SI{0.956}{\gram\per\milli\liter}$, Sigma Aldrich) at a weight ratio 4:1 using a rotor stator system (L4R, Silverson). This pre-emulsion was then stored at $\SI{4}{\degreeCelsius}$ for $\SI{4}{\hour}$ to reduce the amount of foam. It was then passed through a high-pressure homogeniser (Microfluidizer, Microfluidics) with an input pressure of $\SI{5}{\bar}$, equivalent to a pressure of $\approx\SI{1000}{\bar}$ in the micro-chamber, three times consecutively. After 3 passes, a stationary regime was reached where the size of droplets could not be reduced any further. This protocol for emulsification produced droplets of radius around $\SI{110}{\nano\metre}$ as measured by Dynamic Light Scattering (Zetasizer, Malvern) and $\SI{65}{\nano\metre}$ by Static Light Scattering (Mastersizer, Malvern).
Because not all the protein content was adsorbed at the interface, an additional centrifugation step was required to separate the droplets from the continuous phase of protein suspension. This separation was performed by spinning the emulsion at $\num{235000}\times$g with an ultra-centrifuge (Discovery SE, Sorvall, with fixed-angle rotor $45$Ti, Beckmann Coulter) for $\SI{16}{\hour}$ at $\SI{21}{\degreeCelsius}$. The concentrated droplets then formed a solid layer at the top of the subnatant that could be carefully removed with a spatula. The subnatant containing proteins and some residual droplets was discarded. The drying of a small fraction of the concentrated droplet layer and the weighing of its dry content yielded a concentration of the droplet paste of $\SI[separate-uncertainty=true]{0.519(0008)}{\gram\per\milli\liter}$, so the droplet concentrations of all the suspensions were derived from the dilution parameters. Only one centrifugation step was employed to separate the droplets from the proteins, as it was felt that further steps might lead to protein desorption and coalescence. The pure nano-sized droplets were then re-dispersed at the required concentration, in the range $\num{0.008}$ to $\SI{0.39}{\gram\per\milli\liter}$ in deionised water for $1$ to $\SI{30}{\minute}$ with a magnetic stirrer.
Preparation of mixtures
-----------------------
To prepare emulsions with a controlled concentration of proteins in suspension, the concentrated droplets were re-suspended in a protein suspension at the desired concentration using a magnetic stirrer and a stirring plate for $\SI{5}{\minute}$ to $\SI{2}{\hour}$.
Viscosity measurements
----------------------
Rotational rheology measurements were performed using a stress-controlled MCR 502 rheometer (Anton Paar) and a Couette geometry (smooth bob and smooth cup, $\SI{17}{\milli\liter}$ radius) at $\SI{25}{\degreeCelsius}$. For each sample, three measurements are performed and averaged to obtain the flow curve. The values of viscosity on the plateau at low shear are averaged to determine the zero-shear viscosity. Viscosity measurements were performed at different concentrations for protein suspensions, protein-stabilised droplet suspensions, and mixtures.
Results & Discussion {#S:1}
====================
In order to study the rheological behaviour of protein-stabilised emulsions, the approach used here is to separate the original emulsion into its two components, namely un-adsorbed protein assemblies and protein-coated droplets, and to characterise the suspensions of each of these components. Despite their intrinsic complexity due to their biological nature, random coil proteins such as sodium caseinate can conveniently be considered as colloidal suspensions, as we demonstrate in the discussion below.
Viscosity of suspensions in the semi-dilute regime: determination of volume fraction
------------------------------------------------------------------------------------
The weight concentration (in $\SI{}{\gram\per\milli\liter}$) is a sufficient parameter to describe the composition in the case of one suspension, but only the use of the volume fraction of the suspended particles allows meaningful comparisons between protein assemblies and droplets. In the framework of soft colloids, the effective volume fraction $\phi_{eff}$ of a colloidal suspension can be determined by modelling the viscosity in the semi-dilute regime with a hard-sphere model.
![Relative viscosity of sodium caseinate suspensions ($\square$, navy blue) and sodium caseinate-stabilised droplets ($\bigcirc$, cyan) as a function of the concentration of dispersed material. The lines denote Batchelor model for hard spheres in the dilute regime, Equation\[Eq:Batchelor\].[]{data-label="Fig:DiluteRegimeConc"}](Fig3){width="90.00000%"}
The relative zero-shear viscosities of semi-dilute samples are displayed in Figure \[Fig:DiluteRegimeConc\] as a function of the mass concentration of protein or droplets (viscosity data over the full range of concentrations can be found in Figure S2 in the supplementary material). As can be seen, protein suspensions reach a higher viscosity at a lower weight fraction than droplet suspensions. This is because the protein is highly hydrated and swollen, and so occupies a greater volume per unit mass than do the droplets, for which the main contributor to the occupied volume is the oil core.
The viscosity behaviour of each type of suspension in the semi-dilute regime can be described by a theoretical model such as Batchelor’s equation [@batchelor:1977], Equation \[Eq:Batchelor\] as a function of the volume fraction $\phi$. This involves assuming that the particles in the suspension of interest do not have specific interparticle interactions or liquid interfaces in this regime, and can be accurately described as hard spheres.
In addition, as a first approximation, the effective volume fraction $\phi_{eff}$ of soft particles in suspension is assumed to be proportional to the weight concentration $c$: $$\label{Eq:EffPhi_proportional}
\phi_{eff}=k_0 \times c$$ where $k_0$ is a constant expressed in $\SI{}{\milli\liter\per\gram}$. This equation is combined with Equation \[Eq:Batchelor\] in order to obtain an expression for the viscosity as a function of the concentration. When applied to experimental viscosity values for suspensions of protein or droplets at concentrations in the semi-dilute regime, such an expression allows estimation of $k_0$. The effective volume fraction $\phi_{eff}$ of the suspensions can then be calculated using Equation \[Eq:EffPhi\_proportional\].
When fitted to the viscosity data for pure sodium caseinate and pure droplets, as described above, Equation \[Eq:Batchelor\] gives satisfactory fits as shown in Figure \[Fig:DiluteRegimeConc\]. The resulting values for $k_0$ are, for protein suspensions, $k_{0,prot}=\SI[separate-uncertainty=true]{8.53(23)}{\milli\liter\per\gram}$, and for droplet suspensions, $k_{0,drop}=\SI[separate-uncertainty=true]{2.16(13)}{\milli\liter\per\gram}$.
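As a sketch of how $k_0$ can be extracted from semi-dilute viscosity data, the snippet below inverts Batchelor's equation point by point, assuming the standard coefficients for Brownian hard spheres ($\eta_0/\eta_s = 1 + 2.5\phi + 6.2\phi^2$). The function name and the illustrative concentrations are our own; a full least-squares fit, as used for the values quoted above, would weight all points simultaneously.

```python
import math

# Batchelor coefficients for Brownian hard spheres: eta_r = 1 + 2.5*phi + 6.2*phi^2
A, B = 2.5, 6.2

def k0_from_batchelor(concs, eta_rel):
    """Estimate the voluminosity k0 (mL/g) with phi_eff = k0 * c.

    Each data point is inverted for phi (positive root of the quadratic),
    and the per-point estimates k0 = phi / c are averaged.
    """
    ks = []
    for c, eta in zip(concs, eta_rel):
        phi = (-A + math.sqrt(A ** 2 + 4 * B * (eta - 1.0))) / (2 * B)
        ks.append(phi / c)
    return sum(ks) / len(ks)

# illustrative data generated from the model itself with k0 = 8.53 mL/g
c = [0.005, 0.010, 0.020, 0.030]          # g/mL
eta = [1 + A * (8.53 * ci) + B * (8.53 * ci) ** 2 for ci in c]
print(round(k0_from_batchelor(c, eta), 2))  # recovers 8.53
```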
The protein result is in reasonable agreement with previous results, where determinations of the volume fraction using the intrinsic viscosity gave $\phi_{eff,prot} = \num[separate-uncertainty=true]{6.4}~c$ [@pitowski:2008] and $\phi_{eff,prot} = \num[separate-uncertainty=true]{6.5(5)}~c$ [@huppertz:2017], while osmometry measurements (at a higher temperature) gave $\phi_{eff,prot}=\num{4.47}~c$ [@farrer:1999]. For droplet suspensions, $k_{0,drop}$ corresponds to the voluminosity of the whole droplets. If these were purely made of a hard oil core, their voluminosity would be $1/\rho_{oil}=\SI{1.05}{\milli\liter\per\gram}$. The higher value observed can be attributed to the layer of adsorbed proteins at the surface of the droplets. This is an indication that the nano-sized droplets can be modelled as core-shell particles.
These results make it possible to calculate the effective volume fractions $\phi_{eff}$ of both types of suspensions, which is a necessary step to allowing their comparison. It is however important to keep in mind that $\phi_{eff}$ is an estimate of the volume fraction using the hard sphere-assumption, which is likely to break down as the concentration is increased, where deswelling, deformation and interpenetration of the particles may occur [@vlassopoulos:2014].
Modelling the viscosity behaviours of colloidal suspensions
-----------------------------------------------------------
In order to identify the contributions of the components to the viscosity of the mixture, it is important to characterise the viscosity behaviours of the pure suspensions of caseinate-stabilised nano-sized droplets and of sodium caseinate. This is achieved by modelling the volume fraction dependence of the viscosity with equations for hard and soft colloidal particles.
### Suspensions of protein-stabilised droplets {#Sec:ViscoDropModel}
The viscosity of protein-stabilised droplet suspensions is displayed in Figure \[Fig:ViscoDropQuemada\]. A sharp divergence is observed at high volume fraction and this behaviour is typical of hard-sphere suspensions [@dekruif:1985]. It is thus appropriate to use one of the relationships derived for such systems to model the viscosity behaviour of droplet suspensions.
![Relative viscosity of sodium caseinate-stabilised droplets ($\circ$, cyan) as a function of the effective volume fraction. The red dashed line denotes Quemada equation for hard spheres, Equation \[Eq:QuemadaHS\] with $\phi_m =\num{0.79}$[]{data-label="Fig:ViscoDropQuemada"}](Fig4){width="80.00000%"}
Amongst the multiple models for the viscosity of hard-sphere suspensions that have been proposed over time, the theoretical model developed by Quemada [@quemada:1977] is used in this work: $$\label{Eq:QuemadaHS}
\frac{\eta_0}{\eta_s} = \left(1-\frac{\phi}{\phi_m}\right)^{-2}$$ where the parameter $\phi_m$ is the maximum volume fraction at which the viscosity of the suspension diverges: $$\label{Eq:DivergenceVisco}
\lim_{\phi\to\phi_m} \frac{\eta_0}{\eta_s}=\infty$$
The Quemada model fits the experimental data for the relative viscosity $\frac{\eta_0}{\eta_s}$ of droplet suspensions remarkably well. The value for the maximum volume fraction is found to be $\phi_m =\num[separate-uncertainty=true]{0.79(2)}$. Despite the similarity in viscosity behaviour between the droplet suspensions and hard-sphere suspensions, the maximum volume fraction found here is considerably higher than the theoretical value of $\phi_m=\phi_{rcp}=\num{0.64}$ for randomly close-packed hard spheres.
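A minimal sketch of this extraction, assuming the Quemada form: Equation \[Eq:QuemadaHS\] can be inverted point by point as $\phi_m = \phi/(1-(\eta_0/\eta_s)^{-1/2})$, and averaging the per-point estimates gives a quick estimate of the packing fraction (the value above comes from a full fit).

```python
def quemada_eta(phi, phi_m):
    """Quemada relative viscosity for hard spheres, Eq. (QuemadaHS)."""
    return (1.0 - phi / phi_m) ** -2

def estimate_phi_m(phis, eta_rel):
    """Invert Quemada's equation per point: phi_m = phi / (1 - eta_r**-0.5)."""
    est = [phi / (1.0 - eta ** -0.5) for phi, eta in zip(phis, eta_rel)]
    return sum(est) / len(est)

# synthetic droplet data generated with the fitted value phi_m = 0.79
phis = [0.2, 0.4, 0.6, 0.7]
etas = [quemada_eta(p, 0.79) for p in phis]
print(round(estimate_phi_m(phis, etas), 2))  # 0.79
```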
A possible explanation for this discrepancy is the polydispersity of the droplet suspension. Indeed, random close-packing is highly affected by the size distribution of the particles, as smaller particles can occupy the gaps between larger particles [@farris:1968]. In a recent study, Shewan and Stokes modelled the viscosity of hard sphere suspensions using a maximum volume fraction predicted by a numerical model developed by Farr and Groot [@shewan:2015b; @farr:2009], which allows the maximum volume fraction of multiple hard-sphere suspensions to be predicted from their size distribution.
Here, the same approach is used with the size distributions of the protein-stabilised droplets obtained from both the Mastersizer and the Zetasizer. The numerically estimated random close-packing volume fraction $\phi_{rcp}$ is similar for both size distributions, with a value of $\phi_{rcp}=\num{0.68}$. Although this is a higher maximum volume fraction than for a monodisperse hard-sphere suspension, it is still considerably lower than the experimental value, $\phi_m=\num{0.79}$. Such a high random close-packing fraction can be achieved numerically only if a fraction of much smaller droplets is added to the distribution obtained by light scattering. The hypothesis of the presence of small droplets, undetectable by light scattering without prior fractionation, is supported by the observation of such droplets upon fractionation of a very similar emulsion in a previous study [@dalgleish:1997].
It is also possible that mechanisms other than polydispersity come into play at high volume fractions of droplets. Although it would be hard to quantify, it is likely that the soft layer of adsorbed proteins undergoes some changes at high volume fraction, such as deswelling or interpenetration.
### Protein suspensions
Sodium caseinate is known to aggregate in solution to form clusters or micelles [@farrer:1999; @pitowski:2008; @huppertz:2017]. These differ from protein-stabilised droplets because of their swollen structure, and likely dynamic nature. The viscosity behaviour of the suspensions they form is displayed in Figure \[Fig:ViscoProtSoftQuemada\].
![Relative viscosity of sodium caseinate suspensions ($\square$, navy) as a function of the effective volume fraction. The red dashed line denotes the modified Quemada equation, Equation \[Eq:ModifQuemadaSoft\], the values for $n$ and $\phi_m$ are listed in table \[Tab:ModifQuemadaProtParameters\].[]{data-label="Fig:ViscoProtSoftQuemada"}](Fig5){width="80.00000%"}
At high concentrations, the viscosity does not diverge as quickly as for the suspensions of droplets. This result is in agreement with previous studies on sodium caseinate, in which suspensions at higher concentrations were studied [@farrer:1999; @pitowski:2008; @loveday:2010]. In these works, it was shown that the viscosity does not diverge but follows a power law $\eta_0/\eta_s\propto(\phi_{eff,prot})^{12}$.
The behaviour displayed by sodium caseinate resembles that of core-shell microgels [@tan:2005] and soft spherical brushes [@vlassopoulos:2001], hence a soft colloid framework (as reviewed e.g. in Ref. [@vlassopoulos:2014]) seems suitable for the study of these suspensions.
A general feature of the viscosity behaviour of soft colloidal suspensions is the oblique asymptote at high concentrations. This behaviour is believed to arise because, as the concentration increases, the effective volume occupied by each particle decreases, by de-swelling or interpenetration. Thus, the strong viscosity divergence of hard-sphere suspensions is absent for soft colloids. To describe the behaviour of such suspensions, a model is thus required that takes this distinctive limit at high concentrations into account while retaining the hard-sphere behaviour at lower concentrations.
A semi-empirical modification that fulfills the above criteria is the substitution of the maximum volume fraction $\phi_m$ by a $\phi$-dependent parameter $\phi_m^*$ that takes the form: $\phi_m^* = \left({\phi_m}^n+\phi^n\right)^{1/n}$.
As a result, a modified version of Equation \[Eq:QuemadaHS\] can be derived, that takes into account the softness of the particles via a concentration-dependent maximum volume fraction $\phi_m^*$. This semi-empirical viscosity model is expressed: $$\label{Eq:ModifQuemadaSoft}
\frac{\eta_0}{\eta_s} = \left(1-\frac{\phi}{\phi_m^*}\right)^{-2}$$ where: $$\phi_m^* = \phi_m\left(1+\left(\frac{\phi}{\phi_m}\right)^n\right)^{1/n}$$ The addition of the exponent $n$ as a parameter expresses the discrepancy from the hard-sphere model. The smaller $n$, the lower the volume fraction $\phi$ at which $\phi^*_m$ diverges from $\phi_m$, and the less sharp the divergence in viscosity.
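The modified model is straightforward to evaluate. The snippet below, with hypothetical parameter values standing in for the fitted ones listed in Table \[Tab:ModifQuemadaProtParameters\], illustrates that the viscosity remains finite even above $\phi_m$, unlike the hard-sphere form.

```python
def phi_m_star(phi, phi_m, n):
    """Concentration-dependent maximum packing fraction."""
    return phi_m * (1.0 + (phi / phi_m) ** n) ** (1.0 / n)

def eta_r_soft(phi, phi_m, n):
    """Modified Quemada model for soft colloids, Eq. (ModifQuemadaSoft)."""
    return (1.0 - phi / phi_m_star(phi, phi_m, n)) ** -2

PHI_M, N = 0.8, 6.0  # hypothetical values for illustration
for phi in (0.2, 0.8, 2.0):  # note: phi = 2.0 exceeds PHI_M yet eta stays finite
    print(round(phi, 1), round(eta_r_soft(phi, PHI_M, N), 1))
```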
The model in Equation \[Eq:ModifQuemadaSoft\] was applied to fit the experimental data displayed in Figure \[Fig:ViscoProtSoftQuemada\], and the resulting fitting parameters are listed in Table \[Tab:ModifQuemadaProtParameters\].
Parameter Value Standard Error
----------- ------- ----------------
$\phi_m$
$n$
: Parameters for modified Quemada model for soft colloids, Equation \[Eq:ModifQuemadaSoft\], applied to sodium caseinate suspensions
\[Tab:ModifQuemadaProtParameters\]
The use of this approach gives a good fit of the viscosity behaviour of sodium caseinate in the range of concentrations used here. In addition, this semi-empirical model also satisfactorily describes the viscosity of sodium caseinate suspensions at higher concentrations from Refs. [@farrer:1999; @pitowski:2008]. It is worth noting that the inflection of the viscosity is slightly sharper for the model than for the experimental data.
The power law towards which the relative viscosity $\eta_0/\eta_s$ described by Equation \[Eq:ModifQuemadaSoft\] tends at high concentration (i.e. $\phi>\phi_m$) can be calculated by expanding $\phi_m^*$. Indeed, at high concentration $\phi_m^*$ converges towards $\phi\times\left(1+\frac{1}{n} \times \left(\frac{\phi_m}{\phi}\right)^n\right)$, so $\eta_0/\eta_s$ converges towards $\left(1+n\left(\frac{\phi_{eff}}{\phi_m}\right)^{n}\right)^{2}\propto(\phi_{eff})^{2n}$ (detailed calculations are provided in the supplementary material). Using the value in Table \[Tab:ModifQuemadaProtParameters\], the relative viscosity of sodium caseinate suspensions is found to follow the power law $\eta_0/\eta_s\propto(\phi_{eff,prot})^{\num[separate-uncertainty=true]{12.2(8)}}$. This value is in good agreement with the literature, where $\eta_0/\eta_s\propto(\phi_{eff,prot})^{12}$ in the concentrated regime [@farrer:1999; @pitowski:2008; @loveday:2010].
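This asymptote can also be checked numerically: far above $\phi_m$ the log–log slope of the modified Quemada viscosity tends to $2n$. The sketch below uses hypothetical parameters ($\phi_m=0.5$, $n=6$), for which the limiting slope is $12$.

```python
import math

def eta_r_soft(phi, phi_m, n):
    """Modified Quemada viscosity, Eq. (ModifQuemadaSoft)."""
    phi_m_star = phi_m * (1.0 + (phi / phi_m) ** n) ** (1.0 / n)
    return (1.0 - phi / phi_m_star) ** -2

def log_slope(phi, phi_m, n, h=1.05):
    """Finite-difference estimate of d(ln eta_r)/d(ln phi)."""
    return math.log(eta_r_soft(phi * h, phi_m, n) /
                    eta_r_soft(phi, phi_m, n)) / math.log(h)

# at phi = 10 * phi_m the slope has essentially reached its limit 2n
print(round(log_slope(5.0, 0.5, 6.0), 1))  # 12.0
```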
It is interesting to note that Equation \[Eq:ModifQuemadaSoft\] provides a good model for the behaviour of particle suspensions and emulsions whose particles have a wide range of softness, as will be detailed elsewhere. Within this context the concentration behaviour of sodium caseinate suspensions seems to indicate that they can also be regarded as suspensions of soft particles. This interpretation of the behaviour can be further tested by considering the shear-rate dependent response of both the emulsions and sodium caseinate suspensions.
Shear thinning behaviour of protein and droplet suspensions
-----------------------------------------------------------
Over most of the concentration range studied here, the protein suspensions display Newtonian behaviour. However, at high protein concentrations, shear thinning is observed at high shear-rates (flow curves in supplementary material). By comparison, the droplet suspensions display shear thinning over a much broader range of concentrations. This behaviour is common in colloidal suspensions [@dekruif:1985; @helgeson:2007], as well as in polymer and surfactant solutions, and arises from a variety of mechanisms [@cross:1965; @cross:1970]. In non-aggregated suspensions of Brownian particles, shear thinning arises from the competition between Brownian motion (which increases the effective diameter of the particles) and the hydrodynamic forces arising from shear. Shear thinning then occurs over a range where the two types of forces balance, as characterised by the dimensionless reduced shear stress ($\sigma_{r}$) being of order unity. $\sigma_{r}$ is given by $$\sigma_{r} = \frac{\sigma R^3}{k T}
\label{Eq:SigmaRC}$$ where $R$ is the radius of the colloidal particle, $k$ is the Boltzmann constant and $T$ is the temperature of the suspension (here $T = \SI{298}{\kelvin}$).
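The $R^3$ scaling in Equation \[Eq:SigmaRC\] has a strong practical consequence: at $\sigma_r \sim 1$, ten-fold smaller particles require an absolute stress three orders of magnitude larger. The sketch below illustrates this, using the hydrodynamic radii quoted in the following paragraphs.

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 298.0           # temperature, K

def reduced_stress(sigma, radius):
    """Dimensionless reduced shear stress, Eq. (SigmaRC)."""
    return sigma * radius ** 3 / (K_B * T)

def critical_stress(sigma_rc, radius):
    """Absolute stress (Pa) corresponding to a given reduced stress."""
    return sigma_rc * K_B * T / radius ** 3

R_DROP = 110e-9  # m, hydrodynamic radius of the droplets
R_PROT = 11e-9   # m, hydrodynamic radius of the caseinate assemblies

# same reduced stress, absolute stresses differ by (R_DROP / R_PROT)^3 = 1000
print(critical_stress(1.0, R_DROP))  # Pa, droplets
print(critical_stress(1.0, R_PROT))  # Pa, proteins
```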
In such suspensions, the flow curve can be described using the following equation for the viscosity as a function of shear stress [@woods:1970; @krieger:1972; @frith:1987]: $$\frac{\eta}{\eta_s} = \eta_\infty + \frac{\eta_0-\eta_\infty}{1+\left(\sigma_r/\sigma_{r,c}\right)^m}
\label{Eq:ShearThin}$$ where $\eta_0$ is the zero-shear viscosity, $\eta_\infty$ is the high-shear limit of the viscosity, $m$ is an exponent that describes the sharpness of the transition between $\eta_0$ and $\eta_\infty$, and $\sigma_{r,c}$ is the reduced critical shear stress.
Because shear thinning arises from the competition between Brownian motion and the applied external flow, the use of a dimensionless stress that takes into account the size of the colloidal particles allows meaningful comparisons between the different suspensions[@woods:1970; @frith:1987]. Here, we use this approach to compare the flow behaviour of the protein and droplet suspensions, and to test further the hypothesis that the protein suspensions can be considered to behave as though they are suspensions of soft particles.
![Shear thinning behaviour of concentrated suspensions of sodium caseinate ($\square$, navy), and sodium-caseinate stabilised droplets ($\circ$, cyan) as characterised by the critical shear stress for shear-thinning. (a) Critical shear stress $\sigma_c$ as a function of the zero-shear relative viscosity $\eta_0/\eta_s$ for several concentrated suspensions. $\sigma_c$ and $\eta_0$ were estimated by fitting the flow curves (Figure S1 in supplementary material) with Equation \[Eq:ShearThin\]. (b) Reduced critical shear stress $\sigma_{r,c}$ \[Eq:SigmaRC\] as a function of the zero-shear relative viscosity $\eta_0/\eta_s$. The error bars indicate the uncertainty of the fitting parameters (more details are provided in the supplementary material), and the lines are guides for the eye.[]{data-label="Fig:SigmaCShearThin"}](Fig6){width="90.00000%"}
Fitting the flow curves with Equation \[Eq:ShearThin\] allows the extraction of the critical stress $\sigma_c$. The behaviour of this parameter as a function of the zero-shear relative viscosity (as a proxy for concentration) is shown in Figure \[Fig:SigmaCShearThin\](a). The corresponding values of $\sigma_{r,c}$, calculated using $R_{drop} \equiv R_{h,drop} = \SI{110}{\nano\metre}$ and $R_{prot}\equiv R_{h,prot} = \SI{11}{\nano\metre}$, are displayed in Figure \[Fig:SigmaCShearThin\](b).
As can be observed, the protein suspensions require a much higher stress to produce a decrease in viscosity than do the droplet suspensions, as $\sigma_c$ is more than two orders of magnitude higher. However, this difference largely disappears when the reduced critical shear stress is used, indicating that the main difference between the two systems is the size of the particles, and that there are no differences in interparticle interactions at high concentrations, notably no further extensive aggregation of sodium caseinate.
Shear thinning is thus another aspect of the rheology of sodium caseinate that shows an apparently colloidal, rather than polymeric, behaviour. This result reinforces the relevance of the soft colloidal framework as an approach for studying the viscosity of sodium caseinate and sodium caseinate-stabilised droplets.
Viscosity of mixtures
=====================
After having studied separately the components of protein-stabilised emulsions, the next logical step is to investigate mixtures of both with well-characterised compositions by combining purified droplets and protein suspensions. In addition, the soft colloidal framework developed above provides a basis for the development of a predictive approach to the viscosity of mixtures of proteins and droplets, as formed upon emulsification of oil in a sodium caseinate suspension. These topics are the subject of the current section.
![Composition of suspensions of sodium caseinate ($\square$, navy), sodium-caseinate stabilised droplets ($\circ$, cyan), and of mixtures ($\triangle$, colour-coded as a function of $\chi_{prot}$ defined in Equation \[Eq:ChiProt\]). []{data-label="Fig:SamplesVisco"}](Fig7){width="70.00000%"}
These mixtures are composed of water and of two types of colloidal particles (droplets and protein aggregates), hence they are conveniently represented as a ternary mixture, as displayed in Figure \[Fig:SamplesVisco\]. This representation is limited by the high volume fractions reached by proteins in suspension, hence some data points lie outside the diagram. The two-dimensional space of composition for the mixtures can be described by the total effective volume fraction $\phi_{eff,tot} = \phi_{eff,prot}+\phi_{eff,drop}$ and the ratio of their different components $\chi_{prot}$: $$\label{Eq:ChiProt}
\chi_{prot}=\frac{\phi_{eff,prot}}{\phi_{eff,prot}+\phi_{eff,drop}}$$ $\chi_{prot}$ describes the relative percentage of protein in the emulsion compared to the droplets: $\chi_{prot}=1$ for samples containing only proteins, $\chi_{prot}=0$ for samples containing only protein-stabilised droplets, and $\chi_{prot}=0.5$ for mixtures containing an equal volume fraction of proteins and protein-stabilised droplets.
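To make the compositional variables concrete, the indices can be computed directly from the weight concentrations using the $k_0$ values obtained earlier ($k_{0,prot}=\SI{8.53}{\milli\liter\per\gram}$, $k_{0,drop}=\SI{2.16}{\milli\liter\per\gram}$); the example concentrations below are hypothetical.

```python
K0_PROT = 8.53  # mL/g, voluminosity of the caseinate assemblies
K0_DROP = 2.16  # mL/g, voluminosity of the protein-coated droplets

def composition(c_prot, c_drop):
    """Return (phi_eff_tot, chi_prot) from weight concentrations in g/mL,
    using Eqs. (EffPhi_proportional) and (ChiProt)."""
    phi_prot = K0_PROT * c_prot
    phi_drop = K0_DROP * c_drop
    phi_tot = phi_prot + phi_drop
    chi_prot = phi_prot / phi_tot if phi_tot > 0 else 0.0
    return phi_tot, chi_prot

# hypothetical mixture: 20 mg/mL protein and 100 mg/mL droplets
phi_tot, chi = composition(0.020, 0.100)
print(round(phi_tot, 3), round(chi, 3))
```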
The viscosity of the mixtures containing both proteins and protein-stabilised droplets was measured as for the pure suspensions. The values can be compared with the pure suspensions using the total volume fraction for the mixtures $\phi_{eff,tot}$, and are displayed in Figure \[Fig:RawViscoMix\].
![Relative viscosities $\eta_0/\eta_s$ of suspensions as a function of the effective volume fraction $\phi_{eff}$: sodium caseinate suspensions ($\square$, navy), sodium-caseinate stabilised droplets suspensions ($\circ$, cyan), and suspensions of mixtures ($\triangle$, colour-coded as a function of $\chi_{prot}$ defined in Equation \[Eq:ChiProt\]).[]{data-label="Fig:RawViscoMix"}](Fig8){width="90.00000%"}
The mixtures all display viscosities between those of the pure droplets and of the pure proteins at a given volume fraction, their exact value depending on their compositional index $\chi_{prot}$. Notably, no phase separation is observed in the emulsion samples on the timescale of the experiments. This is an unusual result, as sodium caseinate-stabilised emulsions are notoriously prone to depletion-induced flocculation caused by the presence of unadsorbed sodium caseinate [@bressy:2003; @srinivasan:1996; @dickinson:1997; @dickinson:2010; @dickinson:1999]. Presumably, this unusual behaviour is due to the small size of the droplets, which are only one order of magnitude larger than the naturally-occurring caseinate structures.
The knowledge and models introduced for the suspensions of proteins and droplets in the previous sections can be used to develop a semi-empirical model to describe the viscosity of mixtures.
Semi-empirical predictive model
-------------------------------
Models have been developed previously to predict the viscosity of suspensions of multi-modal particles, for example in references [@mendoza:2017] and [@mwasame:2016a]; the latter was later extended to mixtures of components with different viscosity behaviours [@mwasame:2016b]. However, these models are mathematically complex and do not accurately describe our experimental results.
Instead, a simple and useful approach is to consider that each component of the mixture is independent of the other, as in the early model for multi-modal suspensions described in [@farris:1968]. In this case, the protein suspension acts as a viscous suspending medium for the droplets, whose viscosity behaviour was previously characterised and modelled by Equation \[Eq:QuemadaHS\]. Because the viscosity behaviour of the protein suspension is also known, it can be combined with the droplet behaviour to determine the viscosity of the mixture. This approach is illustrated in Figure \[Fig:MixViscoSchema\].
![Development of a semi-empirical model to predict the viscosity of emulsions. The contribution of the proteins in suspension to the viscosity of the emulsion is modelled by an increase of viscosity of the continuous medium.[]{data-label="Fig:MixViscoSchema"}](Fig9){width="90.00000%"}
### Development of the model
Considering the suspending medium alone first, it is useful to consider the protein content of the aqueous phase residing in the interstices between the droplets, $\phi_{prot}^i$: $$\label{Eq:PhiProtInter}
\phi_{prot}^i=\frac{V_{prot}}{V_{prot}+V_{water}}=\frac{\phi_{prot}}{\phi_{prot}+\phi_{water}}=\frac{\phi_{prot}}{1-\phi_{droplet}}$$ where it is assumed that $\phi_{prot}\simeq \phi_{eff,prot} = k_{0,prot}\times c_{prot}$ and $\phi_{droplet} \simeq \phi_{eff,drop} = k_{0,drop}\times c_{drop}$ according to Eq. \[Eq:EffPhi\_proportional\], with $k_{0,prot}$ and $k_{0,drop}$ determined previously using the Batchelor equation fitted to the viscosities of semi-dilute suspensions of pure proteins and pure droplets.
The study of the pure suspensions of protein-stabilised droplets and of proteins makes it possible to model the viscosity behaviour of both suspensions:
- The relative viscosity of a suspension of protein-stabilised droplets $\eta_{r,drop}(\phi)$ is described by Equation \[Eq:QuemadaHS\] with the parameter $\phi_{m}=\num[separate-uncertainty=true]{0.79(2)}$ (Quemada model for hard spheres [@quemada:1977])
- The relative viscosity of a suspension of sodium caseinate $\eta_{r,prot}(\phi)$ is described by Equation \[Eq:ModifQuemadaSoft\] with the parameters listed in Table \[Tab:ModifQuemadaProtParameters\] (modified Quemada model) and using $\phi_{prot}^i$ as described above.
These elements are then combined to predict the relative viscosity of the mixture $\eta_{r,mix}^p$, in the absence of specific interactions between the droplets and the proteins, thus: $$\label{Eq:ViscoMixModel}
\eta_{r,mix}^p(\phi_{eff,prot}, \phi_{eff,drop}) = \eta_{r,prot}\left(\phi_{prot}^i\right) \times \eta_{r,drop}\left(\phi_{eff,drop}\right)$$
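A sketch of this prediction, combining the two fitted models: the droplet parameter $\phi_m=0.79$ is the value reported above, while the protein parameters are placeholders for the fitted values in Table \[Tab:ModifQuemadaProtParameters\].

```python
def eta_r_drop(phi, phi_m=0.79):
    """Quemada hard-sphere model for the droplet contribution, Eq. (QuemadaHS)."""
    return (1.0 - phi / phi_m) ** -2

def eta_r_prot(phi, phi_m, n):
    """Modified Quemada model for the protein contribution, Eq. (ModifQuemadaSoft)."""
    phi_m_star = phi_m * (1.0 + (phi / phi_m) ** n) ** (1.0 / n)
    return (1.0 - phi / phi_m_star) ** -2

def eta_r_mix(phi_prot, phi_drop, phi_m_prot, n_prot):
    """Predicted mixture viscosity, Eq. (ViscoMixModel): the un-adsorbed
    proteins thicken the continuous phase at their interstitial fraction."""
    phi_prot_i = phi_prot / (1.0 - phi_drop)  # Eq. (PhiProtInter)
    return eta_r_prot(phi_prot_i, phi_m_prot, n_prot) * eta_r_drop(phi_drop)

# hypothetical protein parameters and composition, for illustration only
print(round(eta_r_mix(0.10, 0.30, phi_m_prot=0.8, n_prot=6.0), 2))
```

Because the model multiplies two independent contributions, it reduces exactly to the pure-droplet model when $\phi_{eff,prot}=0$ and to the pure-protein model when $\phi_{eff,drop}=0$.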
### Application of the model
The values of the relative viscosity calculated for each mixture using Equation \[Eq:ViscoMixModel\] are compared to the experimentally measured relative viscosities $\eta_{r,mix}^m$ from Figure \[Fig:RawViscoMix\]. Details of the estimated viscosity of the continuous phase of the mixture can be found in the supplementary material (Figure S3).
![Predicted relative viscosity of mixture suspensions $\eta_{r,mix}^p$, calculated with Equation \[Eq:ViscoMixModel\], as a function of the measured viscosity $\eta_{r,mix}^m$ from Figure \[Fig:RawViscoMix\]. Each point is a mixture of different composition, and its colour indicates the value of the compositional index $\chi_{prot}$ defined by Equation \[Eq:ChiProt\]. The straight line represents y=x. The error bars indicate the uncertainty arising from the calculations (more details are provided in the supplementary material).](Fig10){width="80.00000%"}
Despite the simplicity of this model, it provides a reasonably accurate prediction of the viscosity of protein-stabilised emulsions. This result seems to indicate that there are no specific interactions between the proteins and the droplets, neither at a molecular scale between un-adsorbed and adsorbed proteins, nor at a larger length scale where depletion interactions could occur. This is likely to be related to the small size of the droplets in this specific system, and increasing the droplet size may result in a decreased accuracy of this simple model.
The small inaccuracies in the predicted viscosities probably arise from the slightly imperfect fits of Equations \[Eq:QuemadaHS\] and \[Eq:ModifQuemadaSoft\]. First, at moderate viscosity ($\eta_r<10$), the slight discrepancy between predicted and measured viscosity of the samples with a high $\chi_{prot}$ probably reflects the modest underestimation of the viscosity of protein suspensions for $2<\phi_{eff}<10$ by Equation \[Eq:ModifQuemadaSoft\].
At higher concentrations, the effective volume fraction approximation may break down. Indeed, as observed previously for pure suspensions, $\phi_{eff}$ can reach high values and may not correspond exactly to the volume fraction actually occupied by the particles, especially in the case of $\phi_{eff,prot}$. A natural consequence is that the relationship $\phi_{eff,prot} + \phi_{eff,drop} + \phi_{eff,water} = 1$ may not be verified, leading to an overestimation of $\phi_{prot}^i$ when calculated by Equation \[Eq:PhiProtInter\]. It should be noted that the lack of a unifying definition of the volume fraction for soft colloids is a particularly relevant challenge when dealing with mixtures. An approach to address this problem could be to take the viscosity behaviour of one of the two components as a reference, and map the volume fraction of the other component to follow this reference viscosity [@mwasame:2016b], but this would considerably increase the complexity of the model.
Finally, another possible source of discrepancy is the assumption that the proteins in the interstices will reach the same random close packing fraction as for proteins in bulk $\phi_{rcp,prot}$. However, at high droplet volume fraction, there are geometrical arguments to support the hypothesis of a different random close packing volume fraction due to excluded volume effects. Therefore, this assumption may lead to a decreased accuracy of the model at high concentrations.
To summarize, in this section we have shown that the preliminary study of the individual components of a mixture allows the subsequent prediction of the viscosity of mixtures of these components with reasonable accuracy, providing that the composition of the mixtures is known.
### Reversal of the model: estimation of the composition of emulsions
A common challenge when formulating protein-stabilised emulsions is to estimate the amount of protein adsorbed at the interface as opposed to the protein suspended in the aqueous phase. Here we suggest that reversing the semi-empirical model developed in the previous section allows estimation of the amount of proteins in suspension after emulsification with a simple viscosity measurement, which can be performed on-line in advanced industrial processing lines. The calculation process is illustrated in Figure \[Fig:PredictViscoReverse\].
![Reversal of semi-predictive model for the viscosity of protein-stabilised emulsions. The measurement of the emulsion viscosity $\eta_{r,mix}$ makes possible the calculation of the volume fraction of un-adsorbed proteins $\phi_{eff,prot}$, given that the volume fraction of droplets $\phi_{eff,drop}$ is known from the preparation protocol.[]{data-label="Fig:PredictViscoReverse"}](Fig11){width="90.00000%"}
To assess the accuracy of the suggested method, a case in point is the emulsion used to prepare the sodium caseinate droplets in this study after microfluidisation. It is composed of $\num{20}\%$(wt) oil and $\num{4.0}\%$(wt) sodium caseinate, and its relative viscosity was measured to be $\eta_{r,mix}^m=\num{10}$.
The first step is to calculate the contribution of the oil droplets to the viscosity of the mixture, in order to isolate the protein contribution. A $\num{20}$(wt)$\%$ content in oil corresponds to $\phi_{eff,drop}=\num{0.40}$, so $\eta_{r,drop}=\left(1-\phi_{eff,drop}/\phi_m\right)^{-2}=\num{4.1}$.
It is then possible, using the Equation \[Eq:ViscoMixModel\], to calculate the viscosity of the continuous phase $\eta_{r,prot}\left(\phi_{prot}^i\right)=\eta_{r,mix}^m/\eta_{r,drop}=2.4$, assumed to arise from the presence of un-adsorbed proteins. In order to estimate the volume fraction of proteins in the interstices $\phi_{prot}^i$, the equation below has to be solved: $$\label{Eq:MixReverse}
\left(1+\left(\frac{\phi_m}{\phi_{prot}^i}\right)^n\right)^{-1/n}=1-\frac{1}{\sqrt{\eta_{r,prot}}}$$
Finally, numerically solving Equation \[Eq:MixReverse\] with the values for $n$ and $\phi_m$ from Table \[Tab:ModifQuemadaProtParameters\] gives $\phi_{prot}^i=\num{0.33}$. This result corresponds to a volume fraction of un-adsorbed proteins in the overall emulsion $\phi_{eff,prot} = \phi_{prot}^i (1-\phi_{eff,drop})=\num{0.20}$, or, expressed as a concentration in the emulsion, $c=\SI{23}{\milli\gram\per\milli\liter}$. This has to be compared with the initial concentration of $\SI{45}{\milli\gram\per\milli\liter}$ of protein before emulsification. Thus, only about half of the protein adsorbs at the interface, while the other half remains in suspension.
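The numerical step above can be sketched with a simple bisection on Equation \[Eq:MixReverse\], whose left-hand side increases monotonically with $\phi_{prot}^i$. The protein parameters $\phi_m$ and $n$ below are placeholders for the fitted values in Table \[Tab:ModifQuemadaProtParameters\], so the printed numbers are illustrative rather than a reproduction of the values quoted in the text.

```python
def interstitial_phi(eta_r_prot, phi_m, n, iters=100):
    """Solve Eq. (MixReverse) for phi_prot^i by bisection."""
    target = 1.0 - eta_r_prot ** -0.5

    def lhs(phi):
        return (1.0 + (phi_m / phi) ** n) ** (-1.0 / n)

    lo, hi = 1e-6, 10.0  # lhs is increasing in phi on this bracket
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if lhs(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# worked example from the text: eta_r_mix = 10 with phi_eff_drop = 0.40
eta_r_drop = (1.0 - 0.40 / 0.79) ** -2               # ~ 4.1
eta_prot = 10.0 / eta_r_drop                         # ~ 2.4, continuous phase
phi_i = interstitial_phi(eta_prot, phi_m=0.9, n=6.1) # hypothetical phi_m, n
phi_prot = phi_i * (1.0 - 0.40)                      # overall un-adsorbed fraction
print(round(phi_i, 2), round(phi_prot, 2))
```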
This result can be converted into a surface coverage to be compared with studies on sodium caseinate-stabilised emulsions with micron-sized droplets. It is estimated that $\SI{1}{\liter}$ of emulsion containing $\num{20}$(wt)$\%$ of oil, with a droplet size of $R_{opt,c}=\SI{65}{\nano\metre}$, presents an oil surface area of $\SI{920}{\metre\squared}$, and the viscosity analysis indicates that $\SI{22}{\gram}$ of sodium caseinate is adsorbed at the interfaces. Thus, the surface coverage is around $\SI{24}{\milli\gram\per\meter\squared}$. This result is in good correspondence with studies on similar emulsions at larger droplet sizes [@srinivasan:1996; @srinivasan:1999], and thus validates the use of viscosity measurements as a tool to estimate the amount of unadsorbed protein present in emulsions.
The semi-empirical model for the viscosity of emulsions developed in this study, once calibrated, can thus be used not only as a predictive tool for mixtures of droplets and proteins of known composition, but also as a method to estimate the amount of adsorbed proteins without the need for further separation of the components.
Conclusion
==========
Previous studies have attempted to compare the rheological properties of sodium caseinate to those of a suspension of hard spheres, and found that agreement at high concentrations is poor[@farrer:1999; @pitowski:2008]. As a result it was concluded that a colloidal model is inadequate to describe the observed behaviour. Here we argue that this is mainly due to the choice of hard spheres as colloidal reference. We have shown that using the framework developed for soft colloidal particles, such as microgels and block co-polymer micelles [@vlassopoulos:2014], helps toward a better description of the viscosity behaviour of the protein dispersions. Although this approach neglects the additional layer of complexity due to the biological nature of the sodium caseinate, such as inhomogeneous charge distribution and dynamic aggregation [@sarangapani:2013; @sarangapani:2015], it gives a satisfactory model that can be used to build a better description of protein-stabilised emulsions. Interestingly, the soft colloidal approach can also be successfully applied to the rheology of non-colloidal food particles, such as fruit purees [@leverrier:2017].
In addition, a protocol was developed for preparing pure suspensions of protein-stabilised droplets rather than emulsions containing unadsorbed proteins. The viscosity behaviour of the nano-sized droplets appeared to be very similar to the hard sphere model. The main discrepancy is the high effective volume fraction at which the viscosity diverges, which may be due to the size distribution of droplets or arise from the softness of the layer of adsorbed proteins.
Finally, examining protein-stabilised emulsions as ternary mixtures of water, unadsorbed proteins and droplets has allowed us to develop a semi-empirical model for their viscosity. The contributions of each component to the overall viscosity of the emulsions are quantified by the analysis of the properties of the pure suspensions of droplets or proteins. The model can also be reversed to estimate the composition, after emulsification, of a protein-stabilised emulsion given its viscosity. It should be noted, however, that the droplet size is likely to be critical to the success of the model, as it is expected that flocculation of droplets will occur for larger droplets [@bressy:2003; @srinivasan:1996; @dickinson:1997; @dickinson:2010; @dickinson:1999]. This is due to the depletion interaction generated by the proteins in the mixture, which is not taken into account in the present model. For this reason, it would be interesting to explore further the influence of the droplet size on the viscosity behaviour of emulsions. In addition, increasing the droplet size would change the hardness of the droplets by decreasing the internal pressure as well as the influence of the soft layer of proteins, adding further complexity to the system.
Supplementary material {#supplementary-material .unnumbered}
======================
The Supplementary material contains information on the calculation of the error bars, the viscosity as a function of the concentration, calculations of the asymptotic behaviour of Equation \[Eq:ModifQuemadaSoft\], flow curves of the shear-thinning samples and the contributions to the viscosity of mixtures by the dispersed and continuous phases.
This project forms part of the Marie Curie European Training Network COLLDENSE that has received funding from the European Union’s Horizon 2020 research and innovation programme Marie Skłodowska-Curie Actions under the grant agreement No. 642774. The authors wish to acknowledge DMV for graciously providing the sodium caseinate sample used in this study, and PostNova Analytics Ltd for graciously performing the field flow fractionation measurement of sodium caseinate.
| {
"pile_set_name": "ArXiv"
} |
With the continually decreasing size of electronic and micromechanical devices, there is an increasing interest in materials that conduct heat efficiently, thus preventing structural damage. The stiff $sp^3$ bonds, resulting in a high speed of sound, make monocrystalline diamond one of the best thermal conductors [@diamond-condmax]. An unusually high thermal conductance should also be expected in carbon nanotubes, which are held together by even stronger $sp^2$ bonds. These systems, consisting of seamless and atomically perfect graphitic cylinders a few nanometers in diameter, are self-supporting. The rigidity of these systems, combined with the virtual absence of atomic defects or coupling to soft phonon modes of the embedding medium, should make isolated nanotubes very good candidates for efficient thermal conductors. This conjecture has been confirmed by experimental data that are consistent with a very high thermal conductivity for nanotubes [@Zettl].
In the following, we will present results of molecular dynamics simulations using the Tersoff potential [@tersoff], augmented by Van der Waals interactions in graphite, for the temperature dependence of the thermal conductivity of nanotubes and other carbon allotropes. We will show that isolated nanotubes are at least as good heat conductors as high-purity diamond. Our comparison with graphitic carbon shows that inter-layer coupling reduces thermal conductivity of graphite within the basal plane by one order of magnitude with respect to the nanotube value which lies close to that for a hypothetical isolated graphene monolayer.
The thermal conductivity $\lambda$ of a solid along a particular direction, taken here as the $z$ axis, is related to the heat flowing down a long rod with a temperature gradient $dT/dz$ by $$\frac{1}{A}\frac{dQ}{dt} = - \lambda \frac{dT}{dz} \;,
\label{Eq1}$$ where $dQ$ is the energy transmitted across the area $A$ in the time interval $dt$. In solids where the phonon contribution to the heat conductance dominates, $\lambda$ is proportional to $Cvl$, the product of the heat capacity per unit volume $C$, the speed of sound $v$, and the phonon mean free path $l$. The latter quantity is limited by scattering from sample boundaries (related to grain sizes), point defects, and by umklapp processes. In the experiment, the strong dependence of the thermal conductivity $\lambda$ on $l$ translates into an unusual sensitivity to isotopic and other atomic defects. This is best illustrated by the reported thermal conductivity values in the basal plane of graphite [@Landolt-Bornstein] which scatter by nearly two orders of magnitude. As similar uncertainties may be associated with thermal conductivity measurements in “mats” of nanotubes [@Zettl], we decided to determine this quantity using molecular dynamics simulations.
The first approach used to calculate $\lambda$ was based on a direct molecular dynamics simulation. Heat exchange with a periodic array of hot and cold regions along the nanotube has been achieved by velocity rescaling, following a method that had been successfully applied to the thermal conductivity of glasses [@Jund]. Unlike glasses, however, nanotubes exhibit an unusually high degree of long-range order over hundreds of nanometers. The perturbations imposed by the heat transfer reduce the effective phonon mean free path to below the unit cell size. We found it hard to achieve convergence, since the phonon mean free path in nanotubes is significantly larger than unit cell sizes tractable in molecular dynamics simulations.
As an alternate approach to determine the thermal conductivity, we used equilibrium molecular dynamics simulations based on the Green-Kubo expression that relates this quantity to the integral over time $t$ of the heat flux autocorrelation function by [@mcquarrie] $$\lambda=\frac{1}{3 V k_B T^2}
\int_0^{\infty}<{\bf J}(t)\cdot{\bf J}(0)>dt \;.
\label{Eq2}$$ Here, $k_B$ is the Boltzmann constant, $V$ is the volume, $T$ the temperature of the sample, and the angled brackets denote an ensemble average. The heat flux vector ${\bf J}(t)$ is defined by $$\begin{aligned}
{\bf J}(t) &=& \frac{d}{dt} \sum_i {\bf r}_i {\Delta}e_i \\ \nonumber
&=& \sum_i {\bf v}_i {\Delta}e_i
- \sum_i\sum_{j({\ne}i)}{\bf r}_{ij}
({\bf f}_{ij}\cdot{\bf v}_i) \;,
\label{Eq3}\end{aligned}$$ where ${\Delta}e_i=e_i-<e>$ is the excess energy of atom $i$ with respect to the average energy per atom $<e>$. ${\bf r}_i$ is the position and ${\bf v}_i$ the velocity of atom $i$, and ${\bf
r}_{ij}={\bf r}_j-{\bf r}_i$. Assuming that the total potential energy $U=\sum_i u_i$ can be expressed as a sum of binding energies $u_i$ of individual atoms, then ${\bf f}_{ij}=-{\nabla}_i u_j$, where ${\nabla}_i$ is the gradient with respect to the position of atom $i$.
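For a toy configuration the flux formula in Eq. (\[Eq3\]) reduces to simple bookkeeping. The two-atom sketch below is illustrative only: positions, velocities, per-atom energies, and the pair force are invented numbers, with ${\bf f}_{ij}$ taken antisymmetric as for a pair potential.

```python
def heat_flux(r, v, e, f):
    # J = sum_i v_i*de_i - sum_i sum_{j != i} r_ij (f_ij . v_i), with r_ij = r_j - r_i.
    n = len(r)
    e_avg = sum(e) / n
    J = [0.0, 0.0, 0.0]
    for i in range(n):
        de = e[i] - e_avg
        for k in range(3):
            J[k] += v[i][k] * de
        for j in range(n):
            if j == i:
                continue
            dot = sum(f[i][j][k] * v[i][k] for k in range(3))
            for k in range(3):
                J[k] -= (r[j][k] - r[i][k]) * dot
    return J

# Two-atom toy system (all numbers invented for illustration).
r = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
v = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
e = [3.0, 1.0]                    # per-atom energies, so <e> = 2
f = [[None, (2.0, 0.0, 0.0)],     # f[0][1] = f_01
     [(-2.0, 0.0, 0.0), None]]    # f[1][0] = -f_01 (antisymmetry assumed)
J = heat_flux(r, v, e, f)         # -> [-1.0, -1.0, 0.0]
```

The first sum contributes $(1,-1,0)$, the pair term subtracts $(2,0,0)$, giving $J=(-1,-1,0)$ by hand as well.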
In low-dimensional systems, such as nanotubes or graphene monolayers, we infer the volume from the way these systems pack in space (nanotubes form bundles and graphite a layered structure, both with an inter-wall separation of ${\approx}3.4$ [Å]{}) in order to convert the thermal conductance of a system into the thermal conductivity of a material.
Once ${\bf J}(t)$ is known, the thermal conductivity can be calculated using Eq. (\[Eq2\]). We found, however, that these results depend sensitively on the initial conditions of each simulation, thus necessitating a large ensemble of simulations. This high computational demand was further increased by the slow convergence of the autocorrelation function, requiring long integration time periods.
These disadvantages have been shown to be strongly reduced in an alternate approach [@maeda] that combines the Green-Kubo formula with nonequilibrium thermodynamics in a computationally efficient manner [@Rapaport]. In this approach, the thermal conductivity along the $z$ axis is given by $$\lambda = \lim_{{\bf F}_e{\rightarrow}0}
\lim_{t{\rightarrow}\infty}
\frac{<J_z({\bf F}_e,t)>}{F_e T V} \;,
\label{Eq4}$$ where $T$ is the temperature of the sample, regulated by a Nosé-Hoover thermostat [@Nose-Hoover], and $V$ is the volume of the sample. $J_z({\bf F}_e,t)$ is the $z$ component of the heat flux vector for a particular time $t$. ${\bf F}_e$ is a small fictitious “thermal force” (with a dimension of inverse length) that is applied to individual atoms. This fictitious force ${\bf F}_e$ and the Nosé-Hoover thermostat impose an additional force ${\Delta}{\bf F}_i$ on each atom $i$. This additional force modifies the gradient of the potential energy and is given by $$\begin{aligned}
{\Delta}{\bf F}_i &=& {\Delta}e_i {\bf F}_e -
\sum_{j({\neq}i)} {\bf f}_{ij}
({\bf r}_{ij}{\cdot}{\bf F}_e) \nonumber \\
&& + \frac{1}{N}\sum_j\sum_{k({\ne}j)} {\bf f}_{jk}
({\bf r}_{jk}{\cdot}{\bf F}_e)
- \alpha {\bf p}_i \;.
\label{Eq5}\end{aligned}$$ Here, $\alpha$ is the Nosé-Hoover thermostat multiplier acting on the momentum ${\bf p}_i$ of atom $i$. $\alpha$ is calculated using the time integral of the difference between the instantaneous kinetic temperature $T$ of the system and the heat bath temperature $T_{eq}$, from $\dot\alpha=(T-T_{eq})/Q$, where $Q$ is the thermal inertia. The third term in Eq. (\[Eq5\]) guarantees that the net force acting on the entire $N$-atom system vanishes.
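The thermostat relation $\dot\alpha=(T-T_{eq})/Q$ can be checked in isolation by integrating it against a prescribed temperature trace. The sketch below is illustrative only; $Q$, $\Delta T$, and the relaxation time $\tau$ are arbitrary made-up values, and $T(t)=T_{eq}+\Delta T e^{-t/\tau}$ is chosen so that $\alpha(t)=(\Delta T\,\tau/Q)(1-e^{-t/\tau})$ is known in closed form.

```python
import math

def integrate_alpha(t_end, dt, Q, T_eq, T_of_t):
    # Forward-Euler integration of  d(alpha)/dt = (T - T_eq)/Q.
    alpha = 0.0
    steps = int(round(t_end / dt))
    for i in range(steps):
        alpha += (T_of_t(i * dt) - T_eq) / Q * dt
    return alpha

# Arbitrary made-up parameters for the demonstration.
Q, T_eq, dT, tau = 10.0, 300.0, 50.0, 2.0
T_of_t = lambda t: T_eq + dT * math.exp(-t / tau)

num = integrate_alpha(5.0, 1e-4, Q, T_eq, T_of_t)
exact = dT * tau / Q * (1.0 - math.exp(-5.0 / tau))
```

The numerical and analytic values agree to the first-order accuracy of the Euler step.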
In Fig. \[Fig1\] we present the results of our nonequilibrium molecular dynamics simulations for the thermal conductance of an isolated $(10,10)$ nanotube aligned along the $z$ axis. In our calculation, we consider 400 atoms per unit cell, and use periodic boundary conditions. Each molecular dynamics simulation run consists of 50,000 time steps of $5.0{\times}10^{-16}$ s. Our results for the time dependence of the heat current for the particular value $F_e=0.2$ [Å]{}$^{-1}$, shown in Fig. \[Fig1\](a), suggest that $J_z(t)$ converges within the first few picoseconds to its limiting value for $t{\rightarrow}\infty$ in the temperature range below 400 K. The same is true for the quantity $J_z(t)/T$, shown in Fig. \[Fig1\](b), the average of which is proportional to the thermal conductivity $\lambda$ according to Eq. (\[Eq4\]). Our molecular dynamics simulations have been performed for a total time of $25$ ps to represent the long-time behavior well.
In Fig. \[Fig1\](c) we show the dependence of the quantity $$\tilde\lambda {\equiv}
\lim_{t\rightarrow\infty} \frac{<J_z({\bf F}_e,t)>}{F_e T V}
\label{Eq6}$$ on $F_e$. We have found that direct calculations of $\tilde\lambda$ for very small thermal forces carry a substantial error, as they require a division of two very small numbers in Eq. (\[Eq6\]). We base our calculations of the thermal conductivity at each temperature on 16 simulation runs, with $F_e$ values ranging from $0.4-0.05$ [Å]{}$^{-1}$. As shown in Fig. \[Fig1\](c), data for $\tilde\lambda$ can be extrapolated analytically for ${\bf
F}_e{\rightarrow}0$ to yield the thermal conductivity $\lambda$, shown in Fig. \[Fig2\].
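The limit $F_e\rightarrow 0$ in Eq. (\[Eq4\]) can be taken by fitting the measured $\tilde\lambda(F_e)$ values and reading off the intercept. The paper does not state the functional form used, so the sketch below assumes a straight line in $F_e$, with made-up data points, purely for illustration.

```python
def linear_intercept(x, y):
    # Ordinary least squares for y = a + b*x; returns the intercept a,
    # i.e. the value extrapolated to x = 0.
    npts = len(x)
    mx = sum(x) / npts
    my = sum(y) / npts
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx

# Made-up lambda~(F_e) samples lying exactly on y = 6600 - 2000*F_e:
Fe = [0.4, 0.3, 0.2, 0.1, 0.05]
lam = [6600.0 - 2000.0 * x for x in Fe]
lam0 = linear_intercept(Fe, lam)   # extrapolated value at F_e = 0
```

On exactly linear data the fit recovers the intercept to machine precision; on real simulation data it gives the $F_e\rightarrow 0$ estimate of $\lambda$.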
Our results for the temperature dependence of the thermal conductivity of an isolated $(10,10)$ carbon nanotube, shown in Fig. \[Fig2\], reflect the fact that $\lambda$ is proportional to the heat capacity $C$ and the phonon mean free path $l$. At low temperatures, $l$ is nearly constant, and the temperature dependence of $\lambda$ follows that of the specific heat. At high temperatures, where the specific heat is constant, $\lambda$ decreases as the phonon mean free path becomes smaller due to umklapp processes. Our calculations suggest that at $T=100$ K, carbon nanotubes show an unusually high thermal conductivity value of $37,000$ W/m$\cdot$K. This value lies very close to the highest value observed in any solid, $\lambda=41,000$ W/m$\cdot$K, that has been reported [@diamond-condmax] for a 99.9% pure $^{12}$C crystal at 104 K. In spite of the decrease of $\lambda$ above 100 K, the room temperature value of $6,600$ W/m$\cdot$K is still very high, exceeding the reported thermal conductivity value of $3,320$ W/m$\cdot$K for nearly isotopically pure diamond [@diamond-condRT].
We found it useful to compare the thermal conductivity of a $(10,10)$ nanotube to that of an isolated graphene monolayer as well as bulk graphite. For the graphene monolayer, we unrolled the 400-atom unit cell of the $(10,10)$ nanotube into a plane. The periodically repeated unit cell used in the bulk graphite calculation contained 720 atoms, arranged in three layers. The results of our calculations, presented in Fig. \[Fig3\], suggest that an isolated nanotube shows a very similar thermal transport behavior to that of a hypothetical isolated graphene monolayer, in general agreement with available experimental data. Whereas an even larger thermal conductivity should be expected for a monolayer than for a nanotube, we must consider that unlike the nanotube, a graphene monolayer is not self-supporting in vacuum. For all carbon allotropes considered here, we also find that the thermal conductivity decreases with increasing temperature in the range depicted in Fig. \[Fig3\].
Very interesting is the fact that once graphene layers are stacked in graphite, the inter-layer interactions quench the thermal conductivity of this system by nearly one order of magnitude. For the latter case of crystalline graphite, we also found our calculated thermal conductivity values to be confirmed by corresponding observations in the basal plane of highest-purity synthetic graphite which are also reproduced in the figure. We would like to note that experimental data suggest that the thermal conductivity in the basal plane of graphite peaks near 100 K, similar to our nanotube results.
Based on the above described difference in the conductivity between a graphene monolayer and graphite, we should expect a similar reduction of the thermal conductivity when a nanotube is brought into contact with other systems. This should occur when nanotubes form a bundle or rope, become nested in multi-wall nanotubes, or interact with other nanotubes in the “nanotube mat” of “bucky-paper” and could be verified experimentally. Consistent with our conjecture is the low value of $\lambda{\approx}0.7$ W/m$\cdot$K reported for the bulk nanotube mat at room temperature [@Zettl].
In summary, we combined results of equilibrium and non-equilibrium molecular dynamics simulations with accurate carbon potentials to determine the thermal conductivity $\lambda$ of carbon nanotubes and its dependence on temperature. Our results suggest an unusually high value ${\lambda}{\approx}6,600$ W/m$\cdot$K for an isolated $(10,10)$ nanotube at room temperature, comparable to the thermal conductivity of a hypothetical isolated graphene monolayer or graphite. We believe that these high values of $\lambda$ are associated with the large phonon mean free paths in these systems. Our numerical data indicate that in presence of inter-layer coupling in graphite and related systems, the thermal conductivity is reduced significantly to fall into the experimentally observed value range.
This work was supported by the Office of Naval Research and DARPA under Grant No. N00014-99-1-0252.
Lanhua Wei, P.K. Kuo, R.L. Thomas, T.R. Anthony, and W.F. Banholzer, Phys. Rev. Lett. [**70**]{}, 3764 (1993).
S. Iijima, Nature [**354**]{}, 56 (1991).
M.S. Dresselhaus, G. Dresselhaus, and P.C. Eklund, [*Science of Fullerenes and Carbon Nanotubes*]{} (Academic Press, San Diego, 1996).
J. Hone, M. Whitney, A. Zettl, Synthetic Metals [**103**]{}, 2498 (1999).
J. Tersoff, Phys. Rev. B [**37**]{}, 6991 (1988).
Ctirad Uher, in [*Landolt-Börnstein*]{}, New Series, III [**15c**]{} (Springer-Verlag, Berlin, 1985), pp. 426–448.
Philippe Jund and Rémi Jullien, Phys. Rev. B [**59**]{}, 13707 (1999).
M. Schoen and C. Hoheisel, Molecular Physics [**56**]{}, 653 (1985).
D. Levesque and L. Verlet, Molecular Physics [**61**]{}, 143 (1987).
D.A. McQuarrie, [*Statistical Mechanics*]{} (Harper and Row, London, 1976).
A. Maeda and T. Munakata, Phys. Rev. E [**52**]{}, 234 (1995).
D.J. Evans, Phys. Lett. [**91A**]{}, 457 (1982).
D.P. Hansen and D.J. Evans, Molecular Physics [**81**]{}, 767 (1994).
D.C. Rapaport, [*The Art of Molecular Dynamics Simulation*]{} (Cambridge University Press, 1998).
S. Nosé, Mol. Phys. [**52**]{}, 255 (1984); W.G. Hoover, Phys. Rev. A [**31**]{}, 1695 (1985).
T.R. Anthony, W.F. Banholzer, J.F.Fleischer, Lanhua Wei, P.K. Kuo, R.L. Thomas, and R.W. Pryor, Phys. Rev. B [**42**]{}, 1104 (1990).
Takeshi Nihira and Tadao Iwata, Jpn. J. Appl. Phys. [**14**]{}, 1099 (1975).
M.G. Holland, C.A. Klein and W.D. Straub, J. Phys. Chem. Solids [**27**]{}, 903 (1966).
A. de Combarieu, J. Phys. (Paris) [**28**]{}, 951 (1967).
| {
"pile_set_name": "ArXiv"
} |
---
author:
- |
Trang Pham, Truyen Tran, Svetha Venkatesh\
Deakin University, Australia\
{*phtra,truyen.tran,svetha.venkatesh*}*@deakin.edu.au*
bibliography:
- '../bibs/ME.bib'
- '../bibs/truyen.bib'
- '../bibs/trang.bib'
title: 'One Size Fits Many: Column Bundle for Multi-X Learning'
---
Introduction
============
Related work \[sec:Related-work\]
=================================
Method \[sec:Method\]
=====================
Experiments \[sec:Experiments\]
===============================
Conclusion \[sec:Discussion\]
=============================
Acknowledgement
===============
This work is partially supported by the Telstra-Deakin Centre of Excellence in Big Data and Machine Learning
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'In this paper, we establish some new inequalities for functions whose third derivatives in absolute value are $m-$convex.'
address: '$^{\bigstar }$Atatürk University, K.K. Education Faculty, Department of Mathematics, 25240 Campus, Erzurum, Turkey'
author:
- 'M.E. Özdemir$^{\bigstar }$'
- 'Merve Avci$^{\bigstar \blacklozenge }$'
- 'Havva Kavurmaci$^{\bigstar }$'
title: 'Simpson Type Inequalities for $m-$convex Functions'
---
[^1]
introduction
============
The following inequality is well known in the literature as Simpson’s inequality:$$\begin{aligned}
&&\left\vert \int_{a}^{b}f(x)dx-\frac{b-a}{3}\left[ \frac{f(a)+f(b)}{2}%
+2f\left( \frac{a+b}{2}\right) \right] \right\vert \label{1.1} \\
&\leq &\frac{1}{2880}\left\Vert f^{(4)}\right\Vert _{\infty }\left(
b-a\right) ^{5}, \notag\end{aligned}$$where the mapping $f:[a,b]\rightarrow
\mathbb{R}
$ is assumed to be four times continuously differentiable on the interval and $f^{(4)}$ to be bounded on $(a,b)$ , that is,$$\left\Vert f^{(4)}\right\Vert _{\infty }=\sup_{t\in (a,b)}\left\vert
f^{(4)}(t)\right\vert <\infty .$$In [@T], G.Toader defined the concept of $m-$convexity as the following: The function $f:[0,b]\rightarrow
\mathbb{R}
$ is said to be $m-$convex, where $m\in \lbrack 0,1],$ if for every $x,y\in
\lbrack 0,b]$ and $t\in \lbrack 0,1]$ we have:$$f(tx+m(1-t)y)\leq tf(x)+m(1-t)f(y).$$Denote by $K_{m}(b)$ the set of the $m-$convex functions on $[a,b]$ for which $f(0)\leq 0.$
Some important inequalities for $m-$convex functions can be found in [@OAS]-[@BOP].
In [@AH], Alomari and Hussain used the following lemma in order to establish some inequalities for $P-$convex functions.
\[lem 1.1\] Let $f:I\rightarrow
\mathbb{R}
$ be a function such that $f^{\prime \prime \prime \text{ }}$be absolutely continuous on $I^{\circ }$, the interior of I. Assume that $a,b\in I^{\circ
},$ with $a<b$ and $f^{\prime \prime \prime \text{ }}\in L[a,b].$ Then, the following equality holds,$$\begin{aligned}
&&\int_{a}^{b}f(x)dx-\frac{b-a}{6}\left[ f(a)+4f\left( \frac{a+b}{2}\right)
+f(b)\right] \\
&=&\left( b-a\right) ^{4}\int_{0}^{1}p(t)f^{\prime \prime \prime
}(ta+(1-t)b)dt,\end{aligned}$$where$$p(t)=\left\{
\begin{array}{c}
\frac{1}{6}t^{2}\left( t-\frac{1}{2}\right) ,\text{ \ \ \ }t\in \lbrack 0,%
\frac{1}{2}] \\
\\
\frac{1}{6}(t-1)^{2}\left( t-\frac{1}{2}\right) ,\text{ \ \ }t\in (\frac{1}{2%
},1]%
\end{array}%
\right.$$
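Lemma \[lem 1.1\] can be checked numerically for a concrete function. The sketch below is a sanity check, not part of the original text; it uses $f(x)=x^4$ on $[0,1]$, so $f'''(x)=24x$, and both sides come out equal to $-1/120$, with plain midpoint-rule quadrature for the integrals.

```python
def p_kernel(t):
    # The kernel p(t) from Lemma 1.1.
    if t <= 0.5:
        return t * t * (t - 0.5) / 6.0
    return (t - 1.0) ** 2 * (t - 0.5) / 6.0

def lemma_sides(f, f3, a, b, n=20000):
    # Left and right sides of the identity in Lemma 1.1 (midpoint rule).
    h = (b - a) / n
    integral = sum(f(a + (i + 0.5) * h) for i in range(n)) * h
    lhs = integral - (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))
    ht = 1.0 / n
    rhs = (b - a) ** 4 * sum(
        p_kernel((i + 0.5) * ht) * f3((i + 0.5) * ht * a + (1.0 - (i + 0.5) * ht) * b)
        for i in range(n)) * ht
    return lhs, rhs

lhs, rhs = lemma_sides(lambda x: x ** 4, lambda x: 24.0 * x, 0.0, 1.0)
# Both sides are close to -1/120.
```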
In [@AHO], Avci et al. obtained the following results using the above lemma.
\[teo 1.1\] Let $f:I\subset \lbrack 0,\infty )\rightarrow
\mathbb{R}
$ be a differentiable function on $I^{\circ }$ such that $f^{\prime \prime
\prime \text{ }}\in L[a,b],$ where $a,b\in I^{\circ }$ with $a<b.$ If $%
\left\vert f^{\prime \prime \prime \text{ }}\right\vert ^{q}$ is $s-$convex in the second sense on $[a,b]$ and for some fixed $s\in (0,1]$ and $q>1$ with $\frac{1}{p}+\frac{1}{q}=1,$ then the following inequality holds:$$\begin{aligned}
&&\left\vert \int_{a}^{b}f(x)dx-\frac{b-a}{6}\left[ f(a)+4f\left( \frac{a+b}{%
2}\right) +f(b)\right] \right\vert \\
&\leq &\frac{\left( b-a\right) ^{4}}{48}\left( \frac{1}{2}\right) ^{\frac{1}{%
p}}\left( \frac{\Gamma (2p+1)\Gamma (p+1)}{\Gamma (3p+2)}\right) ^{\frac{1}{p%
}} \\
&&\times \left\{ \left[ \frac{1}{2^{s+1}(s+1)}\left\vert f^{\prime \prime
\prime \text{ }}(a)\right\vert ^{q}+\frac{2^{s+1}-1}{2^{s+1}(s+1)}\left\vert
f^{\prime \prime \prime \text{ }}(b)\right\vert ^{q}\right] ^{\frac{1}{q}%
}\right. \\
&&\left. +\left[ \frac{2^{s+1}-1}{2^{s+1}(s+1)}\left\vert f^{\prime \prime
\prime \text{ }}(a)\right\vert ^{q}+\frac{1}{2^{s+1}(s+1)}\left\vert
f^{\prime \prime \prime \text{ }}(b)\right\vert ^{q}\right] ^{\frac{1}{q}%
}\right\} .\end{aligned}$$
\[co 1.1\] If we choose $s=1$ in Theorem \[teo 1.1\], we have$$\begin{aligned}
&&\left\vert \int_{a}^{b}f(x)dx-\frac{b-a}{6}\left[ f(a)+4f\left( \frac{a+b}{%
2}\right) +f(b)\right] \right\vert \label{1.2} \\
&\leq &\frac{\left( b-a\right) ^{4}}{96}\left( \frac{1}{4}\right) ^{\frac{1}{%
q}}\left( \frac{\Gamma (2p+1)\Gamma (p+1)}{\Gamma (3p+2)}\right) ^{\frac{1}{p%
}} \notag \\
&&\times \left\{ \left( \left\vert f^{\prime \prime \prime \text{ }%
}(a)\right\vert ^{q}+3\left\vert f^{\prime \prime \prime \text{ }%
}(b)\right\vert ^{q}\right) ^{\frac{1}{q}}+\left( 3\left\vert f^{\prime
\prime \prime \text{ }}(a)\right\vert ^{q}+\left\vert f^{\prime \prime
\prime \text{ }}(b)\right\vert ^{q}\right) ^{\frac{1}{q}}\right\} . \notag\end{aligned}$$
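As a numerical illustration of (\[1.2\]), not part of the original text, take $f(x)=x^4$ on $[0,1]$ and $p=q=2$, so that $f'''(x)=24x$ with $f'''(0)=0$, $f'''(1)=24$, and the left-hand side is the classical Simpson defect $1/120$.

```python
import math

p = q = 2.0
a, b = 0.0, 1.0
f3a, f3b = 0.0, 24.0                     # f'''(x) = 24x for f(x) = x**4

lhs = 1.0 / 120.0                        # |int_0^1 x^4 dx - Simpson| for x^4
gamma_factor = (math.gamma(2 * p + 1) * math.gamma(p + 1)
                / math.gamma(3 * p + 2)) ** (1.0 / p)
rhs = ((b - a) ** 4 / 96.0) * 0.25 ** (1.0 / q) * gamma_factor * (
    (f3a ** q + 3 * f3b ** q) ** (1.0 / q)
    + (3 * f3a ** q + f3b ** q) ** (1.0 / q))
```

Here the right-hand side evaluates to about $0.0333$, comfortably above $1/120\approx 0.0083$.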
\[teo 1.2\] Suppose that all the assumptions of Theorem \[teo 1.1\] are satisfied. Then$$\begin{aligned}
&&\left\vert \int_{a}^{b}f(x)dx-\frac{b-a}{6}\left[ f(a)+4f\left( \frac{a+b}{%
2}\right) +f(b)\right] \right\vert \\
&\leq &\frac{\left( b-a\right) ^{4}}{6}\left( \frac{1}{192}\right) ^{1-\frac{%
1}{q}} \\
&&\times \left\{ \left( \frac{2^{-4-s}}{(3+s)(4+s)}\left\vert f^{\prime
\prime \prime \text{ }}(a)\right\vert ^{q}+\frac{2^{-4-s}\left(
34+2^{4+s}(-2+s)+11s+s^{2}\right) }{(1+s)(2+s)(3+s)(4+s)}\left\vert
f^{\prime \prime \prime \text{ }}(b)\right\vert ^{q}\right) ^{\frac{1}{q}%
}\right. \\
&&\left. +\left( \frac{2^{-4-s}\left( 34+2^{4+s}(-2+s)+11s+s^{2}\right) }{%
(1+s)(2+s)(3+s)(4+s)}\left\vert f^{\prime \prime \prime \text{ }%
}(a)\right\vert ^{q}+\frac{2^{-4-s}}{(3+s)(4+s)}\left\vert f^{\prime \prime
\prime \text{ }}(b)\right\vert ^{q}\right) ^{\frac{1}{q}}\right\} .\end{aligned}$$
\[co 1.2\] If we choose $s=1$ in Theorem \[teo 1.2\], we have$$\begin{aligned}
&&\left\vert \int_{a}^{b}f(x)dx-\frac{b-a}{6}\left[ f(a)+4f\left( \frac{a+b}{%
2}\right) +f(b)\right] \right\vert \label{1.3} \\
&\leq &\frac{\left( b-a\right) ^{4}}{1152}\left\{ \left( \frac{3\left\vert
f^{\prime \prime \prime \text{ }}(a)\right\vert ^{q}+7\left\vert f^{\prime
\prime \prime \text{ }}(b)\right\vert ^{q}}{10}\right) ^{\frac{1}{q}}+\left(
\frac{7\left\vert f^{\prime \prime \prime \text{ }}(a)\right\vert
^{q}+3\left\vert f^{\prime \prime \prime \text{ }}(b)\right\vert ^{q}}{10}%
\right) ^{\frac{1}{q}}\right\} . \notag\end{aligned}$$
In [@BBS], Barani et al. proved the following results.
\[teo 1.3\] Let $f:I\rightarrow
\mathbb{R}
$ be a function such that $f^{\prime \prime \prime \text{ }}$be absolutely continuous on $I^{\circ }.$ Assume that $a,b\in I^{\circ },$ with $a<b$ and $%
f^{\prime \prime \prime \text{ }}\in L[a,b].$ If $\left\vert f^{\prime
\prime \prime \text{ }}\right\vert $ is a $P-$convex function on $[a,b]$ then, the following inequality holds:$$\begin{aligned}
&&\left\vert \int_{a}^{b}f(x)dx-\frac{b-a}{6}\left[ f(a)+4f\left( \frac{a+b}{%
2}\right) +f(b)\right] \right\vert \\
&\leq &\frac{\left( b-a\right) ^{4}}{1152}\left\{ \left\vert f^{\prime
\prime \prime \text{ }}(a)\right\vert +\left\vert f^{\prime \prime \prime
\text{ }}\left( \frac{a+b}{2}\right) \right\vert +\left\vert f^{\prime
\prime \prime \text{ }}(b)\right\vert \right\} .\end{aligned}$$
\[co 1.3\] Let $f$ be as in Theorem \[teo 1.3\]. If $f^{\prime \prime
\prime \text{ }}\left( \frac{a+b}{2}\right) =0,$ then we have$$\begin{aligned}
&&\left\vert \int_{a}^{b}f(x)dx-\frac{b-a}{6}\left[ f(a)+4f\left( \frac{a+b}{%
2}\right) +f(b)\right] \right\vert \label{1.4} \\
&\leq &\frac{\left( b-a\right) ^{4}}{1152}\left\{ \left\vert f^{\prime
\prime \prime \text{ }}(a)\right\vert +\left\vert f^{\prime \prime \prime
\text{ }}(b)\right\vert \right\} . \notag\end{aligned}$$
The aim of this study is to establish some inequalities for $m-$convex functions. In order to obtain our results, we modified Lemma \[lem 1.1\] given in the [@AH].
main results
============
\[lem 2.1\] Let $f:I\rightarrow
\mathbb{R}
$ be a function such that $f^{\prime \prime \prime \text{ }}$be absolutely continuous on $I^{\circ }$, the interior of I. Assume that $a,b\in I^{\circ
},$ with $a<b,$ $m\in (0,1]$ and $f^{\prime \prime \prime \text{ }}\in
L[a,b].$ Then, the following equality holds,$$\begin{aligned}
&&\int_{a}^{mb}f(x)dx-\frac{mb-a}{6}\left[ f(a)+4f\left( \frac{a+mb}{2}%
\right) +f(mb)\right] \\
&=&\left( mb-a\right) ^{4}\int_{0}^{1}p(t)f^{\prime \prime \prime
}(ta+m(1-t)b)dt,\end{aligned}$$where$$p(t)=\left\{
\begin{array}{c}
\frac{1}{6}t^{2}\left( t-\frac{1}{2}\right) ,\text{ \ \ \ }t\in \lbrack 0,%
\frac{1}{2}] \\
\\
\frac{1}{6}(t-1)^{2}\left( t-\frac{1}{2}\right) ,\text{ \ \ }t\in (\frac{1}{2%
},1]%
\end{array}%
\right.$$
The equality can be proved by performing an integration by parts in the integrals on the right-hand side and changing the variable. The details are left to the interested reader.
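Because $ta+m(1-t)b = ta+(1-t)(mb)$, the identity in Lemma \[lem 2.1\] is Lemma \[lem 1.1\] applied on the interval $[a,mb]$, with the Simpson node at the midpoint $(a+mb)/2$. The sketch below is a numerical sanity check, not part of the original text, using $m=1/2$, $f(x)=x^4$ and midpoint-rule quadrature.

```python
def p_kernel(t):
    # Kernel p(t) from Lemma 2.1 (identical to the one in Lemma 1.1).
    if t <= 0.5:
        return t * t * (t - 0.5) / 6.0
    return (t - 1.0) ** 2 * (t - 0.5) / 6.0

def check_lemma(f, f3, a, b, m, n=20000):
    B = m * b                                    # integration runs over [a, mb]
    h = (B - a) / n
    integral = sum(f(a + (i + 0.5) * h) for i in range(n)) * h
    lhs = integral - (B - a) / 6.0 * (f(a) + 4.0 * f((a + B) / 2.0) + f(B))
    ht = 1.0 / n
    rhs = (B - a) ** 4 * sum(
        p_kernel((i + 0.5) * ht) * f3((i + 0.5) * ht * a + (1.0 - (i + 0.5) * ht) * B)
        for i in range(n)) * ht
    return lhs, rhs

lhs, rhs = check_lemma(lambda x: x ** 4, lambda x: 24.0 * x, 0.0, 1.0, 0.5)
```

For this choice both sides equal $-(mb-a)^5/120 = -(1/2)^5/120$ up to quadrature error.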
\[teo 2.1\] Let $f:I\subset \lbrack 0,b^{\ast }]\rightarrow
\mathbb{R}
$ be a differentiable function on $I^{\circ }$ such that $f^{\prime \prime
\prime \text{ }}\in L[a,b]$ where $a,b\in I$ with $a<b,$ $b^{\ast }>0.$ If $%
\left\vert f^{\prime \prime \prime \text{ }}\right\vert ^{q}$ is $m-$convex on $[a,b]$ for $m\in (0,1],$ $q>1,$ then the following inequality holds:$$\begin{aligned}
&&\left\vert \int_{a}^{mb}f(x)dx-\frac{mb-a}{6}\left[ f(a)+4f\left( \frac{a+mb%
}{2}\right) +f(mb)\right] \right\vert \\
&\leq &\frac{\left( mb-a\right) ^{4}}{96}\left( \frac{\Gamma (2p+1)\Gamma
(p+1)}{\Gamma (3p+2)}\right) ^{\frac{1}{p}} \\
&&\times \left\{ \left( \frac{\left\vert f^{\prime \prime \prime \text{ }%
}(a)\right\vert ^{q}+3m\left\vert f^{\prime \prime \prime \text{ }%
}(b)\right\vert ^{q}}{4}\right) ^{\frac{1}{q}}+\left( \frac{3\left\vert
f^{\prime \prime \prime \text{ }}(a)\right\vert ^{q}+m\left\vert f^{\prime
\prime \prime \text{ }}(b)\right\vert ^{q}}{4}\right) ^{\frac{1}{q}}\right\}
.\end{aligned}$$
From Lemma \[lem 2.1\] and the Hölder inequality we have$$\begin{aligned}
&&\left\vert \int_{a}^{mb}f(x)dx-\frac{mb-a}{6}\left[ f(a)+4f\left( \frac{a+mb%
}{2}\right) +f(mb)\right] \right\vert \\
&\leq &\frac{\left( mb-a\right) ^{4}}{6}\left\{ \left( \int_{0}^{\frac{1}{2}%
}\left( t^{2}\left( \frac{1}{2}-t\right) \right) ^{p}dt\right) ^{\frac{1}{p}%
}\left( \int_{0}^{\frac{1}{2}}\left\vert f^{\prime \prime \prime \text{ }%
}(ta+m(1-t)b)\right\vert ^{q}dt\right) ^{\frac{1}{q}}\right. \\
&&\left. +\left( \int_{\frac{1}{2}}^{1}\left( (t-1)^{2}\left( t-\frac{1}{2}%
\right) \right) ^{p}dt\right) ^{\frac{1}{p}}\left( \int_{\frac{1}{2}%
}^{1}\left\vert f^{\prime \prime \prime \text{ }}(ta+m(1-t)b)\right\vert
^{q}dt\right) ^{\frac{1}{q}}\right\} .\end{aligned}$$By the $m-$convexity of $\left\vert f^{\prime \prime \prime \text{ }%
}\right\vert ^{q},$ we have$$\begin{aligned}
\int_{0}^{\frac{1}{2}}\left\vert f^{\prime \prime \prime \text{ }%
}(ta+m(1-t)b)\right\vert ^{q}dt &\leq &\int_{0}^{\frac{1}{2}}\left[
t\left\vert f^{\prime \prime \prime \text{ }}(a)\right\vert
^{q}+m(1-t)\left\vert f^{\prime \prime \prime \text{ }}(b)\right\vert ^{q}%
\right] dt \\
&=&\frac{\left\vert f^{\prime \prime \prime \text{ }}(a)\right\vert
^{q}+3m\left\vert f^{\prime \prime \prime \text{ }}(b)\right\vert ^{q}}{8},\end{aligned}$$$$\begin{aligned}
\int_{\frac{1}{2}}^{1}\left\vert f^{\prime \prime \prime \text{ }%
}(ta+m(1-t)b)\right\vert ^{q}dt &\leq &\int_{\frac{1}{2}}^{1}\left[
t\left\vert f^{\prime \prime \prime \text{ }}(a)\right\vert
^{q}+m(1-t)\left\vert f^{\prime \prime \prime \text{ }}(b)\right\vert ^{q}%
\right] dt \\
&=&\frac{3\left\vert f^{\prime \prime \prime \text{ }}(a)\right\vert
^{q}+m\left\vert f^{\prime \prime \prime \text{ }}(b)\right\vert ^{q}}{8}.\end{aligned}$$Using the fact that $$\int_{0}^{\frac{1}{2}}\left( t^{2}\left( \frac{1}{2}-t\right) \right)
^{p}dt=\int_{\frac{1}{2}}^{1}\left( (t-1)^{2}\left( t-\frac{1}{2}\right)
\right) ^{p}dt=\frac{\Gamma \left( 2p+1\right) \Gamma \left( p+1\right) }{%
2^{3p+1}\Gamma \left( 3p+2\right) },$$where $\Gamma $ is the Gamma function, we obtain$$\begin{aligned}
&&\left\vert \int_{a}^{mb}f(x)dx-\frac{mb-a}{6}\left[ f(a)+4f\left( \frac{a+mb%
}{2}\right) +f(mb)\right] \right\vert \\
&\leq &\frac{\left( mb-a\right) ^{4}}{96}\left( \frac{\Gamma (2p+1)\Gamma
(p+1)}{\Gamma (3p+2)}\right) ^{\frac{1}{p}} \\
&&\times \left\{ \left( \frac{\left\vert f^{\prime \prime \prime \text{ }%
}(a)\right\vert ^{q}+3m\left\vert f^{\prime \prime \prime \text{ }%
}(b)\right\vert ^{q}}{4}\right) ^{\frac{1}{q}}+\left( \frac{3\left\vert
f^{\prime \prime \prime \text{ }}(a)\right\vert ^{q}+m\left\vert f^{\prime
\prime \prime \text{ }}(b)\right\vert ^{q}}{4}\right) ^{\frac{1}{q}}\right\}
,\end{aligned}$$which is the required result.
\[rem 2.1\] In Theorem \[teo 2.1\], if we choose $m=1,$ we have the inequality in (\[1.2\]).
\[teo 2.2\] Let the assumptions of Theorem \[teo 2.1\] hold with $%
q\geq 1.$ Then $$\begin{aligned}
&&\left\vert \int_{a}^{mb}f(x)dx-\frac{mb-a}{6}\left[ f(a)+4f\left( \frac{a+b%
}{2}\right) +f(mb)\right] \right\vert \\
&\leq &\frac{\left( mb-a\right) ^{4}}{1152}\left\{ \left( \frac{3\left\vert
f^{\prime \prime \prime \text{ }}(a)\right\vert ^{q}+7m\left\vert f^{\prime
\prime \prime \text{ }}(b)\right\vert ^{q}}{10}\right) ^{\frac{1}{q}}+\left(
\frac{7\left\vert f^{\prime \prime \prime \text{ }}(a)\right\vert
^{q}+3m\left\vert f^{\prime \prime \prime \text{ }}(b)\right\vert ^{q}}{10}%
\right) ^{\frac{1}{q}}\right\} .\end{aligned}$$
From Lemma \[lem 2.1\], using the well-known power-mean inequality and $%
m- $convexity of $\left\vert f^{\prime \prime \prime \text{ }}\right\vert
^{q}$, we have $$\begin{aligned}
&&\left\vert \int_{a}^{mb}f(x)dx-\frac{mb-a}{6}\left[ f(a)+4f\left( \frac{a+b%
}{2}\right) +f(mb)\right] \right\vert \\
&\leq &\frac{\left( mb-a\right) ^{4}}{6}\left\{ \left( \int_{0}^{\frac{1}{2}%
}t^{2}\left( \frac{1}{2}-t\right) dt\right) ^{1-\frac{1}{q}}\left( \int_{0}^{%
\frac{1}{2}}t^{2}\left( \frac{1}{2}-t\right) \left\vert f^{\prime \prime
\prime \text{ }}(ta+m(1-t)b)\right\vert ^{q}dt\right) ^{\frac{1}{q}}\right.
\\
&&\left. +\left( \int_{\frac{1}{2}}^{1}(t-1)^{2}\left( t-\frac{1}{2}\right)
dt\right) ^{1-\frac{1}{q}}\left( \int_{\frac{1}{2}}^{1}(t-1)^{2}\left( t-%
\frac{1}{2}\right) \left\vert f^{\prime \prime \prime \text{ }%
}(ta+m(1-t)b)\right\vert ^{q}dt\right) ^{\frac{1}{q}}\right\} \\
&\leq &\frac{\left( mb-a\right) ^{4}}{6}\left\{ \left( \int_{0}^{\frac{1}{2}%
}t^{2}\left( \frac{1}{2}-t\right) dt\right) ^{1-\frac{1}{q}}\left( \int_{0}^{%
\frac{1}{2}}t^{2}\left( \frac{1}{2}-t\right) \left[ t\left\vert f^{\prime
\prime \prime \text{ }}(a)\right\vert ^{q}+m(1-t)\left\vert f^{\prime \prime
\prime \text{ }}(b)\right\vert ^{q}\right] dt\right) ^{\frac{1}{q}}\right. \\
&&\left. +\left( \int_{\frac{1}{2}}^{1}(t-1)^{2}\left( t-\frac{1}{2}\right)
dt\right) ^{1-\frac{1}{q}}\left( \int_{\frac{1}{2}}^{1}(t-1)^{2}\left( t-%
\frac{1}{2}\right) \left[ t\left\vert f^{\prime \prime \prime \text{ }%
}(a)\right\vert ^{q}+m(1-t)\left\vert f^{\prime \prime \prime \text{ }%
}(b)\right\vert ^{q}\right] dt\right) ^{\frac{1}{q}}\right\} .\end{aligned}$$By simple calculations we obtain$$\begin{aligned}
&&\left\vert \int_{a}^{mb}f(x)dx-\frac{mb-a}{6}\left[ f(a)+4f\left( \frac{a+b%
}{2}\right) +f(mb)\right] \right\vert \\
&\leq &\frac{\left( mb-a\right) ^{4}}{6}\left( \frac{1}{192}\right) ^{1-\frac{%
1}{q}}\left\{ \left( \frac{7\left\vert f^{\prime \prime \prime \text{ }%
}(a)\right\vert ^{q}+3m\left\vert f^{\prime \prime \prime \text{ }%
}(b)\right\vert ^{q}}{1920}\right) ^{\frac{1}{q}}+\left( \frac{3\left\vert
f^{\prime \prime \prime \text{ }}(a)\right\vert ^{q}+7m\left\vert f^{\prime
\prime \prime \text{ }}(b)\right\vert ^{q}}{1920}\right) ^{\frac{1}{q}%
}\right\}\end{aligned}$$which is the desired result.
\[rem 2.2\] In Theorem \[teo 2.2\], if we choose $m=1,$ we have the inequality in (\[1.3\]).
\[rem 2.3\] In Theorem \[teo 2.2\], if we choose $m=1$ and $q=1,$ we have the inequality in (\[1.4\]).
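The bound of Theorem \[teo 2.2\] is easy to probe numerically. The following minimal sketch uses the case $m=1$, $q=1$ of Remark \[rem 2.3\]; the test function $f(x)=x^{4}$ on $[0,1]$, for which $\left\vert f^{\prime \prime \prime}(x)\right\vert =24\left\vert x\right\vert$ is convex, is an illustrative choice, not one taken from the text.

```python
# Sanity check of Theorem 2.2 with m = 1, q = 1 (Remark 2.3) for the
# illustrative choice f(x) = x^4 on [a, b] = [0, 1]; here |f'''(x)| = 24|x|
# is convex on [0, 1].

def f(x):
    return x ** 4

def f3_abs(x):
    return abs(24.0 * x)  # |f'''(x)|

a, b = 0.0, 1.0
exact = (b ** 5 - a ** 5) / 5.0                          # integral of x^4
simpson = (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))
error = abs(exact - simpson)                             # = 1/120
bound = (b - a) ** 4 / 1152.0 * (f3_abs(a) + f3_abs(b))  # = 1/48
print(error, bound)   # 1/120 = 0.00833... <= 1/48 = 0.02083...
assert error <= bound
```

The Simpson error $1/120$ indeed lies below the predicted bound $1/48$.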
---
abstract: 'In this second part, we analyze the dissipation properties of Generalized Poisson-Kac (GPK) processes, considering the decay of suitable $L^2$-norms and the definition of entropy functions. In both cases, consistent energy dissipation and entropy functions depend on the whole system of primitive statistical variables, the partial probability density functions $\{ p_\alpha({\bf x},t) \}_{\alpha=1}^N$, while the corresponding energy dissipation and entropy functions based on the overall probability density $p({\bf x},t)$ do not satisfy monotonicity requirements as a function of time. Examples from chaotic advection (standard map coupled to stochastic GPK processes) illustrate this phenomenon. Some complementary physical issues are also addressed: the ergodicity breaking in the presence of attractive potentials, and the use of GPK perturbations to mollify stochastic field equations.'
author:
- 'Massimiliano Giona$^*$'
- Antonio Brasiello
- Silvestro Crescitelli
title: 'Stochastic foundations of undulatory transport phenomena: Generalized Poisson-Kac processes - Part II Irreversibility, Norms and Entropies'
---
Introduction {#sec_1}
============
The setting of Generalized Poisson-Kac processes (GPK, for short) has been addressed in detail in part I [@part1], explaining their physical motivation, as a generalization of the Kac’s paradigm of stochastic processes possessing finite propagation velocity [@kac], and their structural properties with particular emphasis on the Kac limit. In point of fact, the Kac limit represents a form of asymptotic consistency of GPK stochastic differential equations with respect to classical Langevin equation driven by Wiener processes (which can be also referred to as the [*Brownian motion consistency of GPK processes*]{}). We refer to part I for the notation and the basic properties of GPK dynamics that are not reviewed here to avoid repetition.
In this second part of the work we focus on the characterization of irreversibility in GPK dynamics, essentially grounded on the definition of suitable $L^2$-norms (energy dissipation functions) based on the system of the partial probability density functions $p_\alpha({\bf x},t)$, $\alpha=1,\dots,N$, and possessing a monotonic decay in time, and of proper entropy functions for GPK processes. Section \[sec\_2\] is entirely dedicated to this issue. In both cases, dissipation and entropy functions can be defined for GPK processes of increasing structural complexity and depend on the whole system of partial probability waves. Starting from the simplest cases, we consider transitionally symmetric GPK processes [@part1] and extend the analysis to the transitionally non-symmetric case, admitting relativistic implications.
Moreover, energy dissipation and entropy functions constructed solely upon the knowledge of the overall probability density $p({\bf x},t)= \sum_{\alpha=1}^N p_\alpha({\bf x},t)$ do not satisfy the requirement of monotonicity in time, and consequently are thermodynamically inconsistent. This is a first important physical indication that the primitive statistical description of GPK processes is entirely based on the whole set of partial probability waves and cannot be reduced to the coarser description based on the overall probability density function $p({\bf x},t)$ and its associated probability density flux ${\bf J}_d({\bf x},t)$, see part I. This result is fully consistent with the underlying hypothesis of extended thermodynamic theories [@ext1; @ext2; @ext3], and indicates that GPK processes are the simplest candidate for the microdynamic equations of motion in extended theories of far-from-equilibrium phenomena. This issue is further elaborated in part III [@part3].
A physically meaningful example illustrating these properties refers to chaotic advection of tracer particles in the presence of a stochastic perturbation (diffusion), modelled as a GPK process. As a prototypical model flow we consider the continuous-time flow associated with the standard map [@sm1; @sm2] on the two-dimensional torus (Section \[sec\_3\]).
Finally Section \[sec\_4\] addresses some auxiliary physical properties of GPK processes: (i) the use of GPK perturbations to mollify stochastic field equations (stochastic partial differential equations) [@stocafield1; @stocafield2], and (ii) the occurrence of ergodicity breaking in the presence of attractive potentials. The analysis of one-dimensional models addressed in [@giona_epl] is briefly reviewed, and the theory is extended to higher-dimensional systems.
Norm dynamics, fluxes and entropy {#sec_2}
=================================
The definition and evolution of suitable $L^2$-norms accounting for the dissipation induced by stochasticity are strictly connected with the setting of a proper entropy function. Both the definition of an energy-dissipation function, based on the $L^2$-norms of the characteristic partial probability waves, and that of an entropy function depend not solely on the overall probability density function $p({\bf x},t)$, as for Wiener-driven Langevin equations, but also on other dynamic quantities describing the process. These quantities are simply the diffusive flux in the one-dimensional Poisson-Kac process, or a combination of fluxes and other auxiliary quantities in the general GPK case. The analysis of the GPK case reveals that the use of primitive statistical quantities, i.e., of the partial probability waves, provides the simplest and most physically meaningful description of dissipation.
The exposition is organized as follows. To begin with, the one-dimensional Poisson-Kac process in the absence of deterministic biasing fields is addressed. Subsequently, the case of GPK processes is thoroughly treated, starting from simpler cases up to the more general case of transitionally non-symmetric GPK dynamics. The latter case is relevant in connection with the relativistic setting of GPK dynamics, and with their transformation properties under a Lorentz boost.
One-dimensional Poisson-Kac diffusion {#sec_2_1}
-------------------------------------
Consider the one-dimensional Poisson-Kac process in the absence of a deterministic bias, i.e., $d x(t) = b \, (-1)^{\chi(t)} \, dt$, where the Poisson process $\chi(t)$ is characterized by the transition rate $\lambda>0$. In the Kac limit, it corresponds to a purely diffusive Brownian motion, possessing an effective diffusivity $D_{\rm eff}=b^2/2 \lambda$. Its statistical description involves the two partial probability waves $p^{\pm}(x,t)$, satisfying the hyperbolic system of equations $$\partial_t p^\pm(x,t)= \mp b \, \partial_x p^{\pm}(x,t) \mp \lambda
\left [ p^+(x,t) -p^-(x,t) \right ]
\label{eq2_0}$$
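The process $d x(t)= b \, (-1)^{\chi(t)} \, dt$ can be sampled directly by drawing exponential waiting times between velocity reversals. The following Monte Carlo sketch (the parameter values are illustrative assumptions) checks that the long-time mean-square displacement approaches $2 D_{\rm eff} \, t$ with $D_{\rm eff}=b^2/2 \lambda$, consistently with the Kac limit.

```python
import numpy as np

# Monte Carlo sketch of the telegraph (Poisson-Kac) process
#   dx(t) = b (-1)^{chi(t)} dt,  chi(t) a Poisson process of rate lam.
# Illustrative parameters: b = 1, lam = 1, hence D_eff = b^2/(2 lam) = 1/2.

rng = np.random.default_rng(0)

def final_position(t_final, b=1.0, lam=1.0):
    t, pos = 0.0, 0.0
    sign = rng.choice([-1.0, 1.0])
    while t < t_final:
        tau = rng.exponential(1.0 / lam)   # waiting time to next reversal
        dt = min(tau, t_final - t)
        pos += b * sign * dt               # ballistic flight at speed b
        t += dt
        sign = -sign                       # velocity reversal
    return pos

b, lam, t_final = 1.0, 1.0, 20.0
x = np.array([final_position(t_final, b, lam) for _ in range(10000)])
msd = np.mean(x ** 2)
d_eff = b ** 2 / (2.0 * lam)
print(msd, 2.0 * d_eff * t_final)   # MSD ~ 2 D_eff t at long times
```

At finite $\lambda t$ the mean-square displacement lies slightly below $2 D_{\rm eff} t$, reflecting the ballistic short-time behavior of the process.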
An energy-dissipation function of the process ${\mathcal E}_d[p^+,p^-](t)$ is a bilinear functional of the partial probability densities $p^+(x,t)$, $p^-(x,t)$, $${\mathcal E}_d(t)={\mathcal E}_d[p^+,p^-](t) = \sum_{\alpha,\beta=\pm} C_{\alpha,\beta}
\int p^{\alpha}(x,t) \, p^{\beta}(x,t) \, d x
\label{eq7_1}$$ where $C_{\alpha,\beta}$ are non-negative constants, such that along the evolution of the process $$\frac{d {\mathcal E}_d(t)}{d t} \leq 0
\label{eq7_2}$$ where, for notational simplicity, the explicit dependence on the partial waves has been omitted.
Consider the unbounded propagation in $x \in (-\infty,\infty)$. In order to obtain an expression for ${\mathcal E}_d(t)$ take the system of eqs. (\[eq2\_0\]), multiply the evolution equation for $p^+(x,t)$ by $p^+(x,t)$, and that for $p^-(x,t)$ by $p^-(x,t)$, sum them together, and integrate over $x$, $$\begin{aligned}
\frac{1}{2} \frac{d }{d t} \int_{-\infty}^\infty
\left [ (p^+)^2 + (p^-)^2 \right ] \, dx
& = & - \frac{b}{2} \int_{-\infty}^\infty \partial_x \left [ (p^+)^2 - (p^-)^2
\right ] \, dx \nonumber \\
& - & \lambda \, \int_{-\infty}^\infty (p^+-p^-)^2 \, d x
\label{eq7_3} \end{aligned}$$ where the regularity conditions at infinity have been enforced. This means that an energy-dissipation function can be defined as $${\mathcal E}_d(t) = \frac{1}{2} \int_{-\infty}^\infty \left [
(p^+)^2 + (p^-)^2 \right ] \, d x
\label{eq7_4}$$ and eq. (\[eq7\_3\]) implies that $$\frac{d {\mathcal E}_d(t)}{d t} = - \frac{\lambda}{b^2} \, || J_d ||_{L^2}^2(t)
\label{eq7_5}$$ where $||f||_{L^2}^2(t)$, for a real-valued square summable function $f(x,t)$, is the square of its $L^2$-norm $||f||_{L^2}^2(t)= \int_{-\infty}^\infty
f^2(x,t) \, d x$. Since $p^{\pm}= (p\pm J_d/b)/2$, ${\mathcal E}_d(t)$ can be expressed as $${\mathcal E}_d = \frac{1}{4} \int_{-\infty}^\infty
\left ( p^2 + \frac{J_d^2}{b^2} \right ) \, d x
\label{eq7_6}$$ i.e., it is a quadratic functional of both the overall probability density function $p(x,t)$ and of its diffusive flux $J_d(x,t)$. In the Kac limit, $\lambda/b^2= 1/2 D_{\rm eff}$, $J_d(x,t)=- D_{\rm eff}
\partial_x p(x,t)$, and eq. (\[eq7\_5\]) reduces to the classical Fickian dissipation relation $\partial_t
||p||_{L^2}^2 = - 2 \, D_{\rm eff} \, ||\partial_x p||_{L^2}^2$. The remarkable property of eq. (\[eq7\_5\]) is that the energy dissipation function depends also on the flux, which is the fundamental starting point in the theory of extended irreversible thermodynamics.
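The algebra behind eq. (\[eq7\_6\]) can be checked pointwise: with the diffusive flux $J_d = b \, (p^+ - p^-)$, so that $p^{\pm}=(p \pm J_d/b)/2$, one has $\frac{1}{2}[(p^+)^2+(p^-)^2]=\frac{1}{4}[p^2+J_d^2/b^2]$. A minimal numerical verification (the value of $b$ and the sampled densities are arbitrary):

```python
import random

# Pointwise check of the identity underlying eq. (7.6):
# with p = p^+ + p^-  and  J_d = b (p^+ - p^-), one has
#   (1/2) [(p^+)^2 + (p^-)^2] = (1/4) [p^2 + J_d^2 / b^2].

random.seed(1)
b = 1.7  # arbitrary finite propagation velocity
for _ in range(1000):
    pp, pm = random.random(), random.random()   # sampled p^+, p^-
    p = pp + pm
    J = b * (pp - pm)
    lhs = 0.5 * (pp ** 2 + pm ** 2)
    rhs = 0.25 * (p ** 2 + J ** 2 / b ** 2)
    assert abs(lhs - rhs) < 1e-12
print("identity verified")
```

The identity holds for any pair of non-negative partial waves, independently of $b$.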
Next consider, instead of unbounded propagation, a closed interval $x \in
[0,L]$, where zero-flux conditions apply at the boundaries $x=0,L$. These conditions correspond to the reflection conditions $p^+|_{x=0}=p^-|_{x=0}$, $p^-|_{x=L}=p^+|_{x=L}$ for the partial probability waves. Eq. (\[eq7\_3\]) holds also in this case, substituting the integration extremes $x=-\infty$ and $x=\infty$ with $x=0$ and $x=L$, respectively. Observe that the reflection conditions make the divergence integral appearing in eq. (\[eq7\_3\]) identically equal to zero, so that eqs. (\[eq7\_5\])-(\[eq7\_6\]) hold true also in this case, substituting the integration extremes $-\infty$ and $\infty$ with $0$ and $L$, respectively.
Next, consider the entropy function. As a candidate for entropy consider the Boltzmann-Shannon entropy $S_{BS}(t)$ defined with respect to the partial probability waves. In the case of unbounded propagation it reads $$S_{BS}(t)= - \int_{-\infty}^\infty \left [
p^+ \, \log p^+ + p^- \, \log p^- \right ] d x
\label{eq7_7}$$ Enforcing the balance equations for the partial waves, one obtains $$\begin{aligned}
\frac{d S_{BS}(t)}{d t} & = & - b \, \int_{-\infty}^\infty
\partial_x \left [ p^+ \, \log p^+ - p^- \, \log p^- - p^+ + p^- \right ]
d x \nonumber \\
& = & \lambda \, \int_{-\infty}^\infty (p^+- p^-) \, \log \left (\frac{p^+}{p^-}
\right ) \, d x
\label{eq7_8}\end{aligned}$$ The divergence integral vanishes because of the regularity conditions at infinity, so that $$\frac{d S_{BS}(t)}{d t} = \lambda \, \int_{-\infty}^\infty (p^+- p^-) \, \log \left (\frac{p^+}{p^-}
\right ) \, d x \geq 0
\label{eq7_9}$$ since the function $g(x,y)=(x-y) \log(x/y)$ is non-negative for $x,y \geq 0$. An analogous result holds in a closed bounded system represented by the interval $[0,L]$, in the presence of reflecting boundary conditions for the partial waves, substituting the integral from $-\infty$ to $\infty$ with an integral from $0$ to $L$.
The first analytical results for the entropy function in the presence of a hyperbolic Cattaneo transport model have been derived by Camacho and Jou [@entropy1], exhibiting a quadratic dependence on the probability flux, and reducing under equilibrium conditions to the Boltzmann $H$-function. This result has been generalized by Vlad and Ross for telegrapher-type master equations [@entropy2]. A recent survey on the entropy principle related to extended thermodynamic formulations can be found in [@entropy3].
GPK processes {#sec_2_2}
-------------
The analysis of dissipation functions and entropies in GPK processes is essentially related to the underlying Markov-chain structure of the finite $N$-state Poisson process generating stochasticity in the system. The key role is played by the spectral properties of the transition matrix ${\bf A}$ which, in the general setting, is simply an irreducible left-stochastic matrix $A_{\alpha,\beta} \geq 0$, $\sum_{\gamma=1}^N A_{\gamma,\alpha}=1$, $\alpha=1,\dots,N$. The spectral properties of ${\bf A}$ that are relevant in the remainder are: (i) the spectral radius of ${\bf A}$ is 1 [@seneta], i.e., all the eigenvalues $\mu_\alpha$, $\alpha=1,\dots,N$ of ${\bf A}$, i.e., $\sum_{\beta=1}^N A_{\alpha, \beta} \, c_\beta = \mu_\alpha
\, c_\alpha$, are such that $|\mu_\alpha|\leq 1$; (ii) the dominant Frobenius eigenvalue is $\mu=1$, corresponding to a uniform left eigenvector (all the entries are equal); (iii) for $\alpha=2,\dots,N$, $\mu_\alpha < 1$.
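The spectral properties (i)-(iii) are straightforward to illustrate numerically. In the sketch below (matrix size and entries are arbitrary assumptions), a strictly positive random matrix with unit column sums is generated, and its spectrum is inspected:

```python
import numpy as np

# Illustration of the spectral properties of an irreducible left-stochastic
# matrix A (columns summing to one): spectral radius 1, uniform left
# eigenvector for the Frobenius eigenvalue, all other eigenvalues inside
# the unit circle. Matrix size and entries are arbitrary.

rng = np.random.default_rng(2)
N = 5
A = rng.random((N, N)) + 0.01      # strictly positive => irreducible
A /= A.sum(axis=0)                 # normalize columns: left-stochastic

eigvals = np.linalg.eigvals(A)
spectral_radius = np.max(np.abs(eigvals))
print(spectral_radius)             # 1 up to round-off

ones = np.ones(N)
print(np.allclose(ones @ A, ones)) # uniform left eigenvector, mu = 1
```

For a strictly positive matrix the Frobenius eigenvalue is simple, so the second-largest modulus is strictly below one.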
From part I we know that the statistical description of a GPK process defined by $N$ distinct constant stochastic velocity vectors ${\bf b}_\alpha$, $\alpha=1,\dots, N$, in the presence of a deterministic velocity field ${\bf v}({\bf x})$, involves $N$ partial probability density functions $p_\alpha({\bf x},t)$, $\alpha=1,\dots,N$ satisfying the hyperbolic system of equations $$\begin{aligned}
\hspace{-1.0cm} \partial_t p_{\alpha}({\bf x}, t)
= - \nabla \cdot \left [
({\bf v}({\bf x})+ {\bf b}_\alpha) \,
p_{\alpha}({\bf x}, t) \right ]
- \lambda_\alpha p_{\alpha}({\bf x}, t)
+ \sum_{\gamma=1}^N \lambda_\gamma \, A_{\alpha,\gamma}
\, p_{\gamma}({\bf x}, t)
\label{eq2_1}\end{aligned}$$
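A GPK trajectory obeying eq. (\[eq2\_1\]) (here with ${\bf v}=0$) can be sampled by alternating exponentially distributed ballistic flights and Markov transitions of the internal state. In the sketch below, the choice $N=4$, the velocity vectors, the rates $\lambda_\alpha$ and the uniform transition matrix are all illustrative assumptions; for this symmetric set of stochastic velocities the mean displacement vanishes.

```python
import numpy as np

# Sampling a single GPK trajectory in R^2 with N stochastic velocity
# states (eq. (2.1) with v = 0). All parameter values are illustrative.

rng = np.random.default_rng(3)

N = 4
angles = 2.0 * np.pi * np.arange(N) / N
b_vectors = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # b_alpha
lam = np.array([1.0, 2.0, 1.0, 2.0])                            # lambda_alpha
A = np.full((N, N), 1.0 / N)                                    # left-stochastic

def gpk_trajectory(t_final):
    t, x = 0.0, np.zeros(2)
    state = rng.integers(N)
    while t < t_final:
        tau = rng.exponential(1.0 / lam[state])   # dwell time in the state
        dt = min(tau, t_final - t)
        x = x + b_vectors[state] * dt             # ballistic flight
        t += dt
        state = rng.choice(N, p=A[:, state])      # Markov transition
    return x

positions = np.array([gpk_trajectory(20.0) for _ in range(2000)])
print(np.mean(positions, axis=0))   # ~ (0, 0): zero mean drift
```

The same sampler applies to any $N$-state GPK process once ${\mathcal B}_N$, ${\boldsymbol \Lambda}$ and ${\bf A}$ are specified.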
Throughout this paragraph, we assume for simplicity that the deterministic velocity field ${\bf v}({\bf x})$ is solenoidal, i.e., $\nabla \cdot {\bf v}({\bf x})=0$. This condition, with some further technical efforts, could be removed, at least for some classes of potential and mixed flows. The generalization to generic irrotational velocity fields (potential flows) is left open, and is not as simple as it may seem, for technical reasons that are briefly addressed in Section \[sec\_4\].
The analysis of energy dissipation and entropy functions for GPK processes is developed gradually by considering classes of processes of increasing structural complexity, defined by the symmetry properties of ${\mathcal B}_N$, ${\bf A}$ and $\boldsymbol{\Lambda}$ (see part I).
To begin with, consider the simplest case of a transitionally symmetric GPK process possessing a uniform transition rate vector ${\boldsymbol \Lambda}=
(\lambda,\dots,\lambda)$. In this, case the transition probability matrix ${\bf A}$ is also symmetric. For this class of processes, an energy dissipation function is given by $${\mathcal E}_d[\{p_\alpha \}_{\alpha=1}^N](t)= \frac{1}{2}
\sum_{\alpha=1}^N \int_{{\mathbb R}^n} p_\alpha^2({\bf x},t) \, d {\bf x}
\label{eq7_10}$$ To prove this, multiply each balance equation by the corresponding partial wave $p_\alpha({\bf x},t)$, sum over the states $\alpha$, and integrate with respect to ${\bf x}$ to obtain $$\begin{aligned}
\frac{d {\mathcal E}_d(t)}{d t} &= & - \sum_{\alpha=1}^N
\int_{{\mathbb R}^n} p_\alpha
\, \nabla \cdot \left ( {\bf v} \, p_\alpha \right )
\, d {\bf x} - \sum_{\alpha=1}^N \int_{{\mathbb R}^n}
p_\alpha \, \nabla \cdot
\left ( {\bf b}_\alpha \, p_\alpha \right ) \, d {\bf x} \nonumber \\
& - & \lambda \, \sum_{\alpha=1}^N \int_{{\mathbb R}^n}
p_\alpha^2 \, d {\bf x} +
\lambda \sum_{\alpha,\gamma=1}^N \int_{{\mathbb R}^n}
p_\alpha \, A_{\alpha,\gamma} \, p_\gamma
\, d {\bf x}
\label{eq7_11}\end{aligned}$$ The first two integrals vanish as they can be expressed in a divergence form, $p_\alpha \, \nabla \cdot \left ({\bf v} \, p_\alpha
\right )= \nabla \cdot \left ({\bf v} \, p_\alpha^2/2 \right )$, $p_\alpha \, \nabla \cdot \left ({\bf b}_\alpha \, p_\alpha
\right )= \nabla \cdot \left ({\bf b}_\alpha \, p_\alpha^2/2 \right )$, and regularity conditions at infinity apply.
Indicating with $({\bf f},{\bf g})_{L^2_N}$ the scalar product for $N$-dimensional real-valued square summable functions ${\bf f}({\bf x})=(f_1({\bf x}),\dots,f_N({\bf x}))$, ${\bf g}({\bf x})=(g_1({\bf x}),\dots,g_N({\bf x}))$ of ${\mathbb R}^n$, $$({\bf f},{\bf g})_{L_N^2}= \sum_{\alpha=1}^N \int_{{\mathbb R}^n}
f_\alpha({\bf x}) \,
g_\alpha({\bf x}) \, d {\bf x}
\label{eq7_12}$$ eq. (\[eq7\_11\]) can be expressed as $$\frac{1}{\lambda} \, \frac{d {\mathcal E}_d(t)}{d t}=
- ({\bf p},{\bf p} )_{L_N^2} + ({\bf A} \, {\bf p},{\bf p})_{L^2_N}
\label{eq7_13}$$ where ${\bf p}({\bf x},t)=(p_1({\bf x},t),\dots,p_N({\bf x},t))$ is the vector of the partial probability waves. Since the maximum (Frobenius) eigenvalue of ${\bf A}$ equals $1$, and all the other eigenvalues lie within the unit circle and possess real parts less than $1$, it follows that $$|({\bf A} \, {\bf p},{\bf p})_{L^2_N} | \leq ({\bf p},{\bf p} )_{L_N^2}
\label{eq7_14}$$ and consequently, eq. (\[eq7\_13\]) provides the inequality $$\frac{d {\mathcal E}_d(t)}{d t} \leq 0
\label{eq7_15}$$ As regards the entropy function, one can consider the Boltzmann-Shannon expression defined starting from the partial probability waves characterizing the process $$S_{BS}(t)= - \sum_{\alpha=1}^N \int_{{\mathbb R}^n} p_\alpha({\bf x},t) \,
\log p_\alpha({\bf x},t) \, d {\bf x}
\label{eq7_16}$$ Enforcing the conservation property $\sum_{\alpha=1}^N \int_{{\mathbb R}^n} p_\alpha({\bf x},t) d {\bf x}=1$, and simplifying the resulting expression as regards the divergence terms that are vanishing because of the regularity at infinity, one finally obtains $$\begin{aligned}
\frac{d S_{BS}(t)}{d t} &= & \lambda \left [
\sum_{\alpha=1}^N \int_{{\mathbb R}^n} p_\alpha \, \log p_\alpha \, d {\bf x}
- \sum_{\alpha,\gamma=1}^N \int_{{\mathbb R}^n}
\log p_\alpha \, A_{\alpha,\gamma}
\, p_{\gamma} \, d {\bf x} \right ] \nonumber \\
& = & \lambda \left [ \left ({\bf p},\log {\bf p} \right )_{L^2_N}
- \left ( {\bf A} \, {\bf p} , \log {\bf p} \right )_{L_N^2} \right ]
\label{eq7_17}\end{aligned}$$ where we have set $\log {\bf p}=(\log p_1,\dots,\log p_N)$. The term at the right-hand side of eq. (\[eq7\_17\]) equals $\lambda$ times the integral over ${\bf x}$ of a function $g_S({\bf p})$, i.e., $dS_{BS}(t)/dt= \lambda \, \int_{{\mathbb R}^n}
g_S({\bf p}({\bf x},t)) \, d {\bf x}$, given by $$\begin{aligned}
g_S({\bf p}) & = & \frac{1}{2} \sum_{\alpha,\gamma=1}^N
A_{\alpha,\gamma} \, (p_\alpha-p_\gamma) \, \log \left ( \frac{p_\alpha}{p_\gamma} \right ) \nonumber \\
& = & \frac{1}{2} \left [
\sum_{\alpha,\gamma=1}^N A_{\alpha,\gamma} \, p_\alpha \, \log p_{\alpha}
-\sum_{\alpha,\gamma=1}^N A_{\alpha,\gamma} \, p_\gamma \, \log p_{\alpha}
\right . \nonumber \\
&- & \left . \sum_{\alpha,\gamma=1}^N A_{\alpha,\gamma} \, p_\alpha
\, \log p_\gamma + \sum_{\alpha,\gamma=1}^N A_{\alpha,\gamma}
\, p_\gamma \log p_\gamma \right ] \nonumber \\
& = & \sum_{\alpha=1}^N p_\alpha \, \log p_\alpha - \sum_{\alpha,\gamma=1}^N
\log p_\alpha \, A_{\alpha,\gamma} \, p_\gamma
\label{eq7_18}\end{aligned}$$ where the symmetry of $A_{\alpha,\gamma}$ and its left stochasticity have been enforced. Since $A_{\alpha,\gamma} \geq 0$, and each factor $(p_\alpha-p_\gamma) \, \log(p_\alpha/p_\gamma)$ is greater than or at most equal to zero for $p_\alpha({\bf x},t),
p_\gamma({\bf x},t) \geq 0$, it follows that $$\frac{d S_{BS}(t)}{d t} \geq 0
\label{eq7_19}$$ Next, consider the general case in the presence of an arbitrary distribution of transition rates $\lambda_\alpha$, $\alpha=1,\dots,N$. As regards the energy-dissipation function, it is convenient to introduce the auxiliary functions $u_\alpha({\bf x},t)$ defined as $$u_\alpha({\bf x},t) = \lambda_\alpha \, p_\alpha({\bf x},t)
\, , \qquad \alpha=1,\dots,N
\label{eq7_20}$$ In terms of the $u_\alpha$’s, the balance equations (\[eq2\_1\]) become $$\begin{aligned}
\lambda_\alpha^{-1} \, \partial_t u_\alpha =
-\lambda_\alpha^{-1} \nabla \cdot
\left ( {\bf v} \, u_\alpha \right )
- \lambda_\alpha^{-1} \, \nabla \cdot \left ( {\bf b}_\alpha \, u_\alpha
\right )
- u_\alpha + \sum_{\gamma=1}^N A_{\alpha,\gamma} \, u_\gamma
\label{eq7_21}\end{aligned}$$ $\alpha=1,\dots,N$. It is natural to introduce the following energy dissipation function $${\mathcal E}_d[\{p_\alpha \}_{\alpha=1}^N](t) =
\sum_{\alpha=1}^N \frac{1}{2 \, \lambda_\alpha} \int_{{\mathbb R}^n} u_\alpha^2({\bf x},t)
\, d {\bf x}= \sum_{\alpha=1}^N \frac{\lambda_\alpha}{2}
\int_{{\mathbb R}^n} p_\alpha^2({\bf x},t) \, d {\bf x}
\label{eq7_22}$$ Performing the same algebra as in the previous case and setting ${\bf u}=(u_1,\dots,u_N)$, one obtains $$\frac{d {\mathcal E}_d(t)}{d t}= -\left ({\bf u},{\bf u} \right )_{L^2_N}
+ \left ( {\bf A} \, {\bf u},{\bf u} \right )_{L^2_N} \leq 0
\label{eq7_23}$$ which follows from the fact that ${\bf A}$ is left-stochastic. Observe that no assumption has been made on the symmetry of the transition matrix $K_{\alpha,\gamma}=\lambda_\gamma \, A_{\alpha,\gamma}$, so that eq. (\[eq7\_22\]) applies both to transitionally symmetric and to non-symmetric GPK processes.
Next, consider the entropy function. Here the local detailed-balance condition defining transitionally symmetric GPK processes plays a central role. To begin with, consider transitionally symmetric GPK processes, characterized by the property that the transition matrix ${\bf K}= {\bf A} \, {\boldsymbol \Lambda}$ is symmetric, i.e., $$K_{\alpha,\gamma}= \lambda_\gamma \, A_{\alpha,\gamma}=
\lambda_\alpha \, A_{\gamma,\alpha}=K_{\gamma,\alpha}
\, , \qquad \alpha,\gamma=1,\dots,N
\label{eq7_24}$$ with the property that $K_{\alpha,\gamma} \geq 0$ and $$\sum_{\gamma=1}^N K_{\gamma,\alpha}= \lambda_\alpha
\label{eq7_25}$$ For transitionally symmetric GPK processes the expression (\[eq7\_16\]) for the Boltzmann-Shannon entropy is still a valid candidate as the entropy function of the process. Enforcing the properties (\[eq7\_24\])-(\[eq7\_25\]) of the transition matrix ${\bf K}$, the time derivative of the Boltzmann-Shannon entropy becomes $$\hspace{-1.0cm}
\frac{d S_{BS}(t)}{d t} = \int_{{\mathbb R}^n} \sum_{\alpha,\gamma=1}^N
K_{\alpha,\gamma} \left [ p_\alpha({\bf x},t) - p_\gamma({\bf x},t) \right ]
\, \log p_\alpha({\bf x},t)
\, d {\bf x}
\label{eq7_26}$$ The latter expression can be written in terms of an entropy-rate density $r_S({\bf p}({\bf x},t))$, i.e., as $dS_{BS}(t)/dt= \int_{{\mathbb R}^n}
r_S({\bf p}({\bf x},t)) \, d {\bf x}$, given by $$r_S({\bf p}) = \frac{1}{2} \sum_{\alpha,\gamma=1}^N K_{\alpha,\gamma}
\, (p_\alpha-p_\gamma ) \, \log \left ( \frac{p_\alpha}{p_\gamma} \right )
\label{eq7_27}$$ which, by definition, is greater than or at most equal to zero for any $p_{\alpha}({\bf x},t) \geq 0$, $\alpha=1,\dots,N$.
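The pointwise non-negativity of the entropy-rate density of eq. (\[eq7\_27\]) follows from the non-negativity of each summand $(x-y)\log(x/y)$ for $x,y>0$; a minimal numerical check (the random matrices and densities are arbitrary samples):

```python
import numpy as np

# Pointwise nonnegativity of the entropy-rate density of eq. (7.27),
#   r_S(p) = (1/2) sum_{a,g} K_{a,g} (p_a - p_g) log(p_a / p_g),
# checked for random symmetric nonnegative K and strictly positive p.

rng = np.random.default_rng(4)

def entropy_rate_density(K, p):
    diff = p[:, None] - p[None, :]
    log_ratio = np.log(p[:, None] / p[None, :])
    return 0.5 * np.sum(K * diff * log_ratio)

for _ in range(100):
    N = 5
    K = rng.random((N, N))
    K = 0.5 * (K + K.T)          # symmetric transition matrix K
    p = rng.random(N) + 1e-3     # strictly positive partial densities
    assert entropy_rate_density(K, p) >= 0.0
print("r_S >= 0 verified on random samples")
```

Each unordered pair $(\alpha,\gamma)$ contributes a non-negative term, so $r_S$ vanishes only when all partial waves coincide.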
There is another situation of physical interest (see paragraph \[sec\_2\_4\] below), namely when the transition probability matrix $A_{\alpha,\gamma}$ is symmetric, but the transition rates $\lambda_\alpha$, $\alpha=1,\dots,N$ are arbitrary positive constants. The resulting GPK process is therefore transitionally non-symmetric. In this case, one can define a modified Boltzmann-Shannon entropy $\widehat{S}_{BS}(t)$ as $$\hspace{-2.0cm}
\widehat{S}_{BS}(t)= - \sum_{\alpha=1}^N \frac{1}{\lambda_\alpha}
\int_{{\mathbb R}^n} u_\alpha({\bf x},t) \, \log u_\alpha({\bf x},t)
\, d{\bf x}= - \sum_{\alpha=1}^N \int_{{\mathbb R}^n} p_\alpha({\bf x},t)
\, \log \left [ \lambda_\alpha \, p_\alpha({\bf x},t) \right ] \, d {\bf x}
\label{eq7_28}$$ where $u_\alpha({\bf x},t)$ are defined by eq. (\[eq7\_20\]). From the evolution equations (\[eq7\_21\]) it follows after some algebra that $$\frac{d \widehat{S}_{BS}(t)}{ d t}= \left ({\bf u}, \log {\bf u} \right
)_{L_N^2} - \left ({\bf A} {\bf u}, \log {\bf u} \right
)_{L_N^2}
\label{eq7_29}$$ where we have used the notation ${\bf u}=(u_1,\dots,u_N)$, $\log {\bf u}=(\log u_1,\dots,\log u_N)$. Eq. (\[eq7\_29\]) is formally analogous to a previously treated case, see eq. (\[eq7\_17\]), so that $$\frac{d \widehat{S}_{BS}(t)}{ d t} = \int_{{\mathbb R}^n}
r_S({\bf u}({\bf x},t))
\, d {\bf x}
\label{eq7_30}$$ where $$r_S({\bf u})= \frac{1}{2} \sum_{\alpha,\gamma=1}^N A_{\alpha,\gamma}
\, (u_\alpha-u_\gamma) \, \log \left ( \frac{u_\alpha}{u_\gamma} \right )
\geq 0
\label{eq7_31}$$
A simple example {#sec_2_3}
----------------
This paragraph highlights the dissipation properties addressed in the previous paragraph through a simple example. Consider the one-dimensional, purely stochastic ($v(x)=0$) Poisson-Kac process $d x(t) = b (-1)^{\chi(t)} \, dt$ on the unit interval with reflective conditions at the boundaries. Keeping fixed $D_{\rm eff}=b^2/2 \lambda=1$, use the transition rate $\lambda$ as a parameter. As initial condition take $$p^+(x,0)=p^-(x,0)= \left \{
\begin{array}{ccc}
1/(4d) & & |x-1/2|\leq d \\
0 & & \mbox{otherwise}
\end{array}
\right .
\label{eq7_2_1}$$ so that $\int_0^1 p(x,t) \, dx=1$ for $t \geq 0$. The energy dissipation function introduced in the previous paragraph can be normalized by considering the auxiliary function $${\mathcal E}_d^*(t)= 2 {\mathcal E}_d(t) - \frac{1}{2}
\label{eq7_2_2}$$ so that $\lim_{t \rightarrow \infty} {\mathcal E}_d^*(t)=0$. The Fickian counterpart of ${\mathcal E}_d^*(t)$ is represented by $$E^*(t)= \int_0^1 p^2(x,t) \, dx -1 = ||p-1||_{L^2}^2
\label{eq7_2_3}$$ that corresponds to the square of the $L^2$-norm of the overall probability density function normalized to zero mean. Figure \[Fig11\] depicts several concentration profiles of the overall probability density function $p(x,t)$ for $\lambda=10$, sampled at time-intervals of $0.1$, just to visualize the typical deviation from Brownian evolution characterizing Poisson-Kac dynamics at short timescales.
[![$p(x,t)$ vs $x$ at several time instants for $\lambda=10$. Line (a) refers to the initial condition (\[eq7\_2\_1\]), line (b) to $t=0.1$, line (c) to $t=0.2$, line (d) to $t=0.3$.[]{data-label="Fig11"}](closed_pro_10_05.eps "fig:"){height="6cm"}]{}
The comparison between the Fickian $E^*(t)$ and the correct ${\mathcal E}_d^*(t)$ energy dissipation functions is depicted in panels (a) and (b) of figure \[Fig12\]. While $E^*(t)$ exhibits an evident non-monotonic behavior as a function of $t$ in the range of transition rates $\lambda \in (0.1,10)$, which becomes more pronounced as $\lambda$ decreases, the function ${\mathcal E}_d^*(t)$ is monotonically non-increasing. This example expresses pictorially the claim that an energy-dissipation function which is a quadratic functional solely of the overall probability density cannot be compatible with Poisson-Kac dynamics, and more generally with stochastic evolution possessing a finite propagation velocity. Conversely, the function ${\mathcal E}_d^*(t)$, which depends on the whole system of partial waves (in the present case $p^+(x,t)$ and $p^-(x,t)$), provides a correct description of dissipation. This example supports the fundamental ansatz of the theory of extended thermodynamics that state and dissipation functions in irreversible processes should depend also on the fluxes, as ${\mathcal E}_d^*(t)$ in the present case [@ext1]. At $\lambda=100$ (lines (d)) $E^*(t)$ and ${\mathcal E}_d^*(t)$ practically coincide, and this corresponds to the Kac limit of the process.
A specular behavior is displayed by the entropy function. In this framework, the behavior of the Boltzmann-Shannon entropy $S_{BS}(t)$ based on the full structure of the partial probability waves should be contrasted with the classical Boltzmannian entropy $\Sigma_B(t)$ $$\Sigma_B(t) = - \int_0^1 p(x,t) \, \log p(x,t) \, d x
\label{eq7_2_4}$$ depending exclusively on the overall probability density function $p(x,t)$. Panels (c) and (d) of figure \[Fig12\] show the comparison of these two entropy functions. A similar analysis based on the Cattaneo equation has been performed by Jou et al. [@ext1]. All the observations addressed for the energy dissipation functions apply [*verbatim*]{} to $\Sigma_B(t)$ and $S_{BS}(t)$. In the long-term limit $\Sigma_B(t) \rightarrow 0$, while $S_{BS}(t) \rightarrow \log 2$, corresponding to the complete homogenization amongst the partial waves.
[![Review of the time evolution of dissipation and entropy functions for the Poisson-Kac process considered in the main text at $D_{\rm eff}=1$. Panel (a) refers to $E^*(t)$, panel (b) to ${\mathcal E}_d^*(t)$, panel (c) to the Boltzmannian entropy $\Sigma_B(t)$, panel (d) to the Boltzmann-Shannon entropy $S_{BS}(t)$ defined using the partial probability waves. Lines from (a) to (d) in all the panels refer to $\lambda=0.1\,, 1,\, 10,\, 100$, respectively.[]{data-label="Fig12"}](energy_entropy.eps "fig:"){height="8cm"}]{}
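The contrast between the two dissipation functions in panels (a)-(b) can be reproduced with a minimal numerical sketch. The discretization below (periodic unit interval, $b=1$, unit-CFL shift followed by explicit relaxation, discrete analogs of the $L^2$-norms) is our own illustrative choice, not the scheme used for the figures: per split step the shift preserves the norms of $p^\pm - 1/2$ and the relaxation is a contraction, so the discrete ${\mathcal E}_d^*(t)$ is monotone by construction, while $E^*(t)$ need not be.

```python
import numpy as np

def poisson_kac_norms(lam=1.0, b=1.0, n=400, t_max=2.0):
    """Evolve the partial waves p+/p- of a 1D Poisson-Kac process on the
    periodic unit interval and record discrete analogs of E*(t), E_d*(t)."""
    dx = 1.0 / n
    dt = dx / b                         # unit CFL: advection is an exact shift
    x = (np.arange(n) + 0.5) * dx
    # segregated initial condition: all probability mass in the left half
    p_plus = np.where(x < 0.5, 1.0, 0.0)
    p_minus = p_plus.copy()
    E, Ed = [], []
    for _ in range(int(t_max / dt)):
        p_plus = np.roll(p_plus, 1)      # p+ propagates rightward
        p_minus = np.roll(p_minus, -1)   # p- propagates leftward
        flux = lam * dt * (p_plus - p_minus)   # Poisson recombination
        p_plus, p_minus = p_plus - flux, p_minus + flux
        p = p_plus + p_minus
        E.append(np.sqrt(np.mean((p - 1.0) ** 2)))
        Ed.append(np.sqrt(np.mean((p_plus - 0.5) ** 2)
                          + np.mean((p_minus - 0.5) ** 2)))
    return np.array(E), np.array(Ed)

E, Ed = poisson_kac_norms(lam=1.0)
```

Note that the monotonicity of the discrete `Ed` follows from the splitting itself: the shift is a permutation of the grid values, and the relaxation step strictly decreases $(p^+-1/2)^2+(p^--1/2)^2$ pointwise for $\lambda\,dt<1$.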
Relativistic transformation of entropy {#sec_2_4}
--------------------------------------
At the end of paragraph \[sec\_2\_2\] the expression for the entropy of a GPK process possessing a symmetric transition probability matrix and generic transition rates has been obtained, eq. (\[eq7\_28\]). This case finds application in the relativistic analysis of stochastic processes, as outlined below.
Consider a one-dimensional, purely diffusive Poisson-Kac dynamics in an inertial reference frame $\Sigma$, defined by the space-time coordinates $(x,t)$, $d x(t)= b \, (-1)^{\chi(t)} \, dt$, in which the evolution equations of the partial probability waves of the process $p^\pm(x,t)$ are expressed by eq. (\[eq2\_0\]). The frame $\Sigma$ can be referred to as the [*rest frame*]{} of the process, since the process is characterized by a vanishing effective velocity (corresponding to the time-derivative of the first-order moment in the long-term regime).
Let $\Sigma^\prime$ be another inertial frame, defined by the space-time coordinates $(x^\prime,t^\prime)$ moving with respect to $\Sigma$ at constant relative velocity $v < c$, where $c$ is the velocity of light [*in vacuo*]{}. Enforcing the Lorentz boost connecting $(c \, t^\prime, x^\prime)$ to $( c \, t,x)$, $$\left (
\begin{array}{c}
c \, t^\prime \\
x^\prime
\end{array}
\right )
= \gamma(v) \, \left (
\begin{array}{cc}
1 & -\beta \\
-\beta & 1
\end{array}
\right )
\,
\left (
\begin{array}{c}
c \, t \\
x
\end{array}
\right )
\label{eq_m1}$$ where $\gamma(v)=1/\sqrt{1-v^2/c^2}$ is the Lorentz factor and $\beta=v/c$, the statistical description of the process in $\Sigma^\prime$ involves the partial probability density functions $p^{\pm,\prime}(x^\prime,t^\prime)$ that satisfy the balance equations [@giona_rel1; @giona_rel2] $$\begin{aligned}
\partial_{t^\prime} p^{+,\prime}(x^\prime,t^\prime) &
= & - b_+^\prime \, \partial_{x^\prime} p^{+,\prime}(x^\prime,t^\prime)
- \lambda_+^\prime p^{+,\prime}(x^\prime,t^\prime)+\lambda_-^\prime p^{-,\prime}(x^\prime,t^\prime) \nonumber \\
\partial_{t^\prime} p^{-,\prime}(x^\prime,t^\prime) &
= & - b_-^\prime \, \partial_{x^\prime} p^{-,\prime}(x^\prime,t^\prime)
+ \lambda_+^\prime p^{+,\prime}(x^\prime
,t^\prime)- \lambda_-^\prime p^{-,\prime}(x^\prime,t^\prime)
\label{eq_m2}\end{aligned}$$ where the velocities $b_\pm^\prime$ satisfy the usual relativistic velocity transformation for $b$ and $-b$, respectively, $$b_+^\prime = \frac{b-v}{1-b v/c^2} \, , \qquad
b_-^\prime = \frac{-b-v}{1+b v/c^2}
\label{eq_m3}$$ while the transition rates $\lambda_\pm^\prime$ in $\Sigma^\prime$ are expressed by the relations (see [@giona_rel1; @giona_rel2]) $$\lambda_+^\prime = \frac{\lambda}{\gamma(v)} \left ( 1- \frac{b \, v}{c^2}
\right )^{-1} \, ,
\qquad
\lambda_-^\prime = \frac{\lambda}{\gamma(v)} \left ( 1 + \frac{b \, v}{c^2}
\right )^{-1}
\label{eq_m4}$$ The Lorentz boost does not change the transition probability matrix, that in the present case is ${\bf A}^\prime={\bf A}=
\left ( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right )$, but modifies the transition rates $\lambda_+^\prime$, $\lambda_-^\prime$ in $\Sigma^\prime$. The transition rates $\lambda_+^\prime$, $\lambda_-^\prime$ coincide with $\lambda$ at $v=0$ but, as the velocity $v$ increases, become progressively more different from each other. In $\Sigma^\prime$ the stochastic process considered is still a GPK process with uneven transition rates and a symmetric transition probability matrix, as addressed at the end of paragraph \[sec\_2\_2\]. Consequently, a suitable expression for the entropy function in a generic frame is given by eq. (\[eq7\_28\]), i.e. $$\hspace{-1.5cm} \widehat{S}_{BS}(t)= - \int_{-\infty}^\infty
\left [ p^+(x,t) \, \log (\lambda_+ p^+(x,t)) + p^-(x,t) \, \log (\lambda_- p^-(x,t)) \right ] \, d x
\label{eq_m5}$$ In $\Sigma$, $\lambda_+=\lambda_-=\lambda$ and eq. (\[eq\_m5\]) returns $$\hspace{-1.5cm} \widehat{S}_{BS}(t)= -
\int_{-\infty}^\infty \left [ p^+(x,t) \, \log p^+(x,t) +
p^-(x,t) \, \log p^-(x,t) \right ] \, d x -\log \lambda
= S_{BS}(t)-\log \lambda
\label{eq_m6}$$ while in $\Sigma^\prime$, the entropy function becomes $$\hspace{-1.5cm} \widehat{S}_{BS}^\prime(t^\prime)=
- \int_{-\infty}^\infty
\left [ p^{+,\prime}(x^\prime,t^\prime) \, \log (\lambda_+^\prime p^{+,\prime}(x^\prime,t^\prime)) + p^{-,\prime}(x^\prime,t^\prime) \, \log (\lambda_-^\prime p^{-,\prime}(x^\prime,t^\prime)) \right ] \, d x^\prime
\label{eq_m7}$$
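Equations (\[eq\_m3\])-(\[eq\_m4\]) are simple enough to tabulate numerically. The helper below (our own naming, with $c=1$ by default) also makes explicit that the light-speed case $b=c$ is boost-invariant: $b_+^\prime=c$ and $b_-^\prime=-c$ for every $|v|<c$.

```python
import math

def boost_parameters(b, lam, v, c=1.0):
    """Stochastic velocities and transition rates of a Poisson-Kac process
    observed in a frame moving at velocity v (eqs. m3-m4 of the text)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # Lorentz factor
    b_plus = (b - v) / (1.0 - b * v / c ** 2)     # transformed +b
    b_minus = (-b - v) / (1.0 + b * v / c ** 2)   # transformed -b
    lam_plus = lam / (gamma * (1.0 - b * v / c ** 2))
    lam_minus = lam / (gamma * (1.0 + b * v / c ** 2))
    return b_plus, b_minus, lam_plus, lam_minus

# light-speed stochastic perturbation, b = c, observed from a moving frame
b_p, b_m, l_p, l_m = boost_parameters(b=1.0, lam=0.5, v=0.6)
```

At $v=0$ the helper returns $\lambda_+^\prime=\lambda_-^\prime=\lambda$, while for $0<v<c$ the two rates split, consistently with the discussion above.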
Set $c=1$ a.u., and $b=c$, i.e., consider a stochastic perturbation the characteristic velocity of which coincides with that of light, as for electromagnetic fluctuations. Figure \[Fig\_r1\] depicts the behavior of $\widehat{S}_{BS}^\prime(t^\prime)$ vs time $t^\prime$ for a Poisson-Kac process characterized in its rest frame by $D_{\rm eff}=1$, i.e., $\lambda=b^2/2 D_{\rm eff}=1/2$. The initial condition is symmetric and impulsive, namely $p^+(x,0)=p^-(x,0)=\delta(x)/2$, centered at the origin. Apart from the monotonic behavior of $\widehat{S}_{BS}^\prime(t^\prime)$ with time $t^\prime$, it should be observed that the relativistic transformation of the modified Boltzmann-Shannon entropy cannot be easily expressed as a simple function of the Lorentz factor $\gamma(v)$, as occurs e.g. for the tensor diffusivity [@giona_rel1].
[![$S(t^\prime)=\widehat{S}_{BS}^\prime(t^\prime)$ vs $t^\prime$ for the Poisson-Kac process on the real line measured in a reference system $\Sigma^\prime$ moving at constant relative velocity $v$ with respect to the rest frame of the process. The arrow indicates increasing values of $v=0,\,0.2,\,0.4,\,0.6,\,0.8,\,0.9,\,0.95$.[]{data-label="Fig_r1"}](rel_entropy.eps "fig:"){height="7cm"}]{}
The extension to GPK processes is straightforward using the relativistic transformation for the partial probability densities developed in [@giona_rel2] and the results at the end of paragraph \[sec\_2\_2\].
GPK processes and chaotic advection-diffusion problems {#sec_3}
======================================================
An interesting physical application of the GPK theory developed above involves tracer dynamics in chaotic flows in the presence of stochastic perturbations.
Consider a two-dimensional problem defined on the unit two-torus ${\mathcal T}^2=[0,1] \times [0,1]$, equipped with periodic boundary conditions. Let ${\bf v}({\bf x},t)$, ${\bf x}=(x,y)$ be a time-periodic solenoidal velocity field, $\nabla \cdot {\bf v}=0$, and consider the GPK process $$d {\bf x}(t)= {\bf v}({\bf x}(t),t) \, dt + \frac{1}{Pe} \, {\bf b}_{\chi_N(t)}
\, dt
\label{eq7_4_1}$$ Equation (\[eq7\_4\_1\]) represents the dimensionless kinematic equations of motion of a passive tracer in an incompressible flow subjected to stochastic (thermal) agitation expressed by a finite $N$-state Poisson process acting on a family of $N$ stochastic velocity vectors. The parameter $Pe$ is the Péclet number, representing the ratio of the characteristic diffusion to the characteristic advection times.
Assume for the $N$-state finite Poisson process a constant transition rate $\lambda$, and a transition probability matrix expressed by $A_{\alpha,\beta}=1/N$, $\alpha,\beta=1,\dots,N$. For the stochastic velocity vectors ${\bf b}_\alpha$ choose the family given by eq. (78) in part I, so that $D_{\rm nom}=1$. As regards the velocity field, consider a simple but widely used model of Hamiltonian chaos, originating from the standard map ${\bf x}^\prime = \boldsymbol{\Phi}({\bf x})$ [@sm1; @sm2], expressed by $$\left \{
\begin{array}{lll}
x^\prime = x+\frac{\nu}{2 \pi} \, \sin(2 \pi y) & & \mbox{mod.} \; 1 \\
y^\prime = y + x^\prime & & \mbox{mod.} \; 1
\end{array}
\right .
\label{eq7_4_2bis}$$ where $\nu>0$ is a real parameter. In a continuous time setting, the standard map can be recovered as the stroboscopic map associated with the time-periodic incompressible flow possessing period $T=2$ obtained from the periodic repetition of the flow protocol $${\bf v}({\bf x},t)= \left \{
\begin{array}{lll}
( \frac{\nu}{2 \pi} \, \sin(2 \pi y), 0) & & 0 \leq t < 1 \\
(0, x) & & 1 \leq t < 2
\end{array}
\right .
\label{eq7_4_2}$$ and corresponding to the periodic switching of two shear flows along the $x$- and $y$-coordinates, respectively, the first of which is sinusoidally modulated. Observe that the second flow is not continuous on the torus, while the resulting stroboscopic map is $C^\infty$.
By varying the parameter $\nu$, the typical phenomenologies of chaotic advection can be recovered from the standard map. We consider the case $\nu=1$, the Poincaré map of which (i.e., the stroboscopic map sampled at the period of the flow protocol) is depicted in figure \[Fig\_x1\], and is characterized by the presence of invariant chaotic regions possessing a maximum positive Lyapunov exponent, intertwined with regular invariant islands of different sizes.
[![Poincaré map of the standard map at $\nu=1$.[]{data-label="Fig_x1"}](poincare_sm_m.eps "fig:"){height="6cm"}]{}
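The orbits generating this Poincaré section are obtained by direct iteration of eq. (\[eq7\_4\_2bis\]); a minimal sketch (function names are ours):

```python
import math

def standard_map(x, y, nu=1.0):
    """One period (T = 2) of the flow protocol, i.e. the standard map."""
    x_new = (x + nu / (2.0 * math.pi) * math.sin(2.0 * math.pi * y)) % 1.0
    y_new = (y + x_new) % 1.0
    return x_new, y_new

def orbit(x0, y0, n_iter=2000, nu=1.0):
    """Stroboscopic orbit on the unit two-torus, for a Poincare section."""
    pts, x, y = [], x0, y0
    for _ in range(n_iter):
        x, y = standard_map(x, y, nu)
        pts.append((x, y))
    return pts
```

Since the underlying flow is solenoidal, the map is area-preserving: its Jacobian determinant equals unity, which can be checked by finite differences on the lifted (unwrapped) map.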
In the Kac limit, the statistical characterization of eq. (\[eq7\_4\_1\]) converges to the solution of a classical parabolic advection-diffusion equation for the overall probability density function $p({\bf x},t)$, $$\partial_t p({\bf x},t) = - {\bf v}({\bf x},t) \cdot \nabla p({\bf x},t)
+ \frac{1}{Pe} \nabla^2 p({\bf x},t)
\label{eq7_4_3}$$ Consider as an initial condition a completely segregated initial profile of the partial probability waves, namely $$p_\alpha({\bf x},0) = \left \{
\begin{array}{lll}
2/N & & 0 \leq x < 1/2 \\
0 & & 1/2 \leq x < 1
\end{array}
\right .
\qquad
\alpha=1,\dots,N
\label{eq7_4_4}$$ and let $E^*(t)$ be the normalized $L^2$-norm of $p({\bf x},t)$ $$E^*(t)= \frac{||p({\bf x},t)-1||_{L^2}}{||p({\bf x},0)-1||_{L^2}}
\label{eq7_4_5}$$ so that $E^*(0)=1$ and $\lim_{t \rightarrow \infty} E^*(t)=0$.
Figure \[Fig13\] depicts the evolution of $E^*(t)$ and at two different values of the Péclet number: $Pe=10^1$ (panel a) and $Pe=10^2$ (panel b). Numerical simulations have been performed by expanding $p_\alpha({\bf x},t)$ in truncated Fourier series, $p_\alpha({\bf x},t)=\sum_{h,k=-N_c}^{N_c} P_{\alpha,h,k} \, e^{i 2 \pi
(h x+ k y)}$, solving the corresponding system of linear differential equations for the Fourier coefficients $P_{\alpha,h,k}$ with an explicit fourth-order Runge-Kutta solver. For the range of Péclet values considered ($Pe \leq 10^2$), we choose $N_c=50$, which is fully sufficient for an accurate description of the dynamics, apart from the very early stages of the process. These graphs refer to a GPK process with $N=4$ using the transition rate $\lambda$ as a parameter. The case $Pe=10^1$ (panel a) is indicative of the typical relaxation properties of GPK systems: the normalized $L^2$-norm $E^*(t)$ decays asymptotically in an exponential way, $E^*(t) \sim e^{-\mu(\lambda) \, t}$, but for moderate values of $\lambda$, the decay exponent $\mu(\lambda)$ is a function of the transition rate $\lambda$ and is smaller than the limit value $\mu_\infty= \lim_{\lambda \rightarrow \infty}
\mu(\lambda)$.
[![Normalized square $L^2$-norm $E^*(t)$ vs time $t$ for the GPK flow associated with the standard map ($N=4$) at $Pe=10^1$ (panel a), and $Pe=10^2$ (panel b). Symbols ($\bullet$) represent the solution of the corresponding parabolic advection-diffusion equation (\[eq7\_4\_3\]). The arrows indicate increasing values of $\lambda$. Panel (a): $\lambda= 0.5, \, 1,\, 2,\,4,\,10,\,40$. Panel (b): $\lambda= 0.5, \, 1,\, 2$.[]{data-label="Fig13"}](relax_l2.eps "fig:"){height="6cm"}]{}
As $\lambda$ increases, the Kac-limit property dictates that $\mu(\lambda)$ converges towards the decay exponent $\Lambda(Pe)$ of the parabolic advection-diffusion model (\[eq7\_4\_3\]) for the same value of the Péclet number, i.e. $\mu_\infty=\Lambda(Pe)$. The convergence of $\mu(\lambda)$ towards $\mu_\infty$ is depicted in figure \[Fig\_x2\], plotting the ratio $r_\mu(\lambda)=[\mu_\infty-\mu(\lambda)]/\mu_\infty$ vs $\lambda$ for the two Péclet values considered. At $Pe=10^1$, the Kac convergence is achieved approximately for $\lambda \geq 10^2$. At higher Péclet values, the influence of $\lambda$ is less pronounced and the Kac-limit convergence is practically achieved at smaller values of $\lambda$, e.g. $\lambda\geq 2$ for $Pe=10^2$.
[![Ratio $r_\mu(\lambda)=[\mu_\infty-\mu(\lambda)]/\mu_\infty$ vs $\lambda$ for the standard-map flow considered in the main text. Line (a) and ($\circ$) refers to $Pe=10^1$, line (b) and ($\square$) to $Pe=10^2$.[]{data-label="Fig_x2"}](expo_sm.eps "fig:"){height="7.5cm"}]{}
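In practice, the exponents $\mu(\lambda)$ entering $r_\mu(\lambda)$ are extracted from a log-linear fit of the asymptotic tail of $E^*(t)$. A minimal sketch, using synthetic data (the two-exponential profile and the value $\mu_\infty=0.6$ are purely illustrative stand-ins for the simulated norms):

```python
import numpy as np

def decay_exponent(t, E, t_min):
    """Least-squares slope of log E(t) over the asymptotic tail t >= t_min."""
    mask = t >= t_min
    slope, _ = np.polyfit(t[mask], np.log(E[mask]), 1)
    return -slope

# synthetic norm: a fast transient plus a clean exponential tail exp(-0.5 t)
t = np.linspace(0.0, 20.0, 400)
E = 0.3 * np.exp(-2.0 * t) + np.exp(-0.5 * t)
mu = decay_exponent(t, E, t_min=10.0)
r_mu = (0.6 - mu) / 0.6        # ratio r_mu for a hypothetical mu_inf = 0.6
```

The cutoff `t_min` plays the same role as discarding the transient in the simulations: the fit must only see the slowest decaying mode.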
Figure \[Fig14\] reviews the early-time dynamics at $Pe=10^2$, at four time instants $t=n T$, $n=1,2,3,10$, through contour plots of the rescaled overall probability density profiles $p^*({\bf x},t)=C(p({\bf x},t)-1)$, where $C$ is a normalization constant such that $p^*({\bf x},t)$ possesses unit $L^2$-norm. Two situations are considered: far away from the Kac limit ($\lambda=1$), panels (a)-(d), and close to the Kac limit ($\lambda=10$), panels (a$^\prime$)-(d$^\prime$), compared with the corresponding profiles obtained from the solution of the parabolic advection-diffusion equation (\[eq7\_4\_3\]), panels (a$^*$)-(d$^*$). The probability density profiles at $\lambda=1$ still show a significant effect of the hyperbolic (undulatory) dynamics characterizing the evolution of the partial probability waves, as they display much sharper discontinuities than the smoother solutions of the parabolic equation (\[eq7\_4\_3\]). The graph for $t=n T=20$ depicted in the last row corresponds to the asymptotic profile of the second eigenfunction of the Floquet operator associated with the advection-diffusion dynamics, see [@giona_ces].
[![Concentration profiles of $p^*({\bf x},t)$ at $Pe=10^2$, $N=4$ at $t=n T$. Upper row $n=1$, second row $n=2$, third row $n=3$, and lower row $n=10$. Panels (a)-(d), first column, refer to $\lambda=1$ panels (a$^\prime$)-(d$^\prime$), second column, to $\lambda=10$; panels (a$^*$)-(d$^*$), third column, to the solution of the parabolic advection-diffusion equation (\[eq7\_4\_3\]).[]{data-label="Fig14"}](profili_1e2.eps "fig:"){height="16cm"}]{}
Next, consider the energy dissipation functions and entropies. A review of their behavior at $Pe=10^1$, $\lambda=1$ is depicted in figure \[Fig15\], panels (a) to (d), using the number $N$ of stochastic velocity vectors as parameter. In these plots, $E^*_d(t)$ is the normalized energy dissipation function based on the partial probability waves expressed as $$E^*_d(t) = \frac{{\mathcal E}^*_d(t)}{{\mathcal E}_d^*(0)}
\qquad {\mathcal E}_d^*(t) = \frac{1}{2} \sum_{\alpha=1}^N
\left | \left | p_\alpha({\bf x},t)-\frac{1}{N} \right | \right |_{L^2}
\label{eq7_4_6}$$ while the normalized Boltzmann-Shannon entropy $S_{BS}^*(t)$ is the difference between the Boltzmann-Shannon entropy $S_{BS}(t)$ and its limit value $\log N$ for $t \rightarrow \infty$, $$S_{BS}^*(t) = S_{BS}(t)- \log N
\label{eq7_4_7}$$ so that $\lim_{t \rightarrow \infty} S_{BS}^*(t) =0$ as for the Boltzmannian entropy $\Sigma_B(t)$.
[![Review of the energy-dissipation functions/entropies for the GPK processes associated with the standard-map flow at $Pe=10^1$, $\lambda=1$. The arrows in the four panels indicate increasing values of $N=3,\,4,\,5,\,10$. Panels (a)-(b) depict energy-dissipation functions: panel (a) refers to $E^*(t)$ vs $t$, panel (b) to $E^*_d(t)$ vs $t$. Panels (c)-(d) depict entropy functions: panel (c) refers to the Boltzmannian entropy $\Sigma_B(t)$ vs $t$, panel (d) to the rescaled entropy $S_{BS}^*(t)$ based on the full structure of the partial probability waves $\{p_\alpha({\bf x},t) \}_{\alpha=1}^N$.[]{data-label="Fig15"}](sm_energy_entropy.eps "fig:"){height="12cm"}]{}
The comparisons of $E^*(t)$ and $E^*_d(t)$ (panels (a) and (b)) and of $\Sigma_B(t)$ and $S_{BS}^*(t)$ (panels (c) and (d)) indicate that the dissipation functionals $E^*(t)$ and $\Sigma_B(t)$, based exclusively on the overall probability density function $p({\bf x},t)$, display a non-monotonic/oscillatory behavior, while the corresponding quantities $E_d^*(t)$ and $S_{BS}^*(t)$, based on the full structure of the partial probability waves, are monotonic functions of time $t$. This is analogous to the case of the purely diffusive one-dimensional Poisson-Kac model addressed in paragraph \[sec\_2\_3\]. There is, however, a major conceptual difference between the two problems as regards the representation of the dissipation functions. In the one-dimensional problem treated in paragraph \[sec\_2\_3\], $p^+(x,t)$ and $p^-(x,t)$ can be expressed as linear combinations of $p(x,t)$ and $J_d(x,t)$, $p^{\pm}(x,t)= \left [ p(x,t) \pm J_d(x,t) \right ]/2$, indicating that, in the one-dimensional case in the presence of the two-state process $(-1)^{\chi(t)}$, a correct energy dissipation function and a consistent expression for the entropy can always be expressed in terms of the overall probability density function $p(x,t)$ and of its diffusive flux $J_d(x,t)$.
This functional symmetry is broken in the two-dimensional advection-diffusion problem considered in this paragraph whenever $N \geq 4$. For $N \geq 4$, the functional expressions for $E^*_d(t)$ and $S_{BS}^*(t)$ cannot be expressed exclusively in terms of $p({\bf x},t)$ and ${\bf J}_d({\bf x},t)$ as they depend on the complete statistical structure of the GPK process, which is accounted for by the system of partial probability waves $\{ p_{\alpha}({\bf x},t) \}_{\alpha=1}^N$.
This is a first, physically significant, case in which the concentration/flux paradigm characterizing the classical theory of transport phenomena [@de_groot] proves insufficient. A further observation emerges from the analysis of the data depicted in figure \[Fig15\]. The decay dynamics of the energy dissipation functions and entropies depend on the number $N$ of stochastic velocity vectors considered. However, as $N$ increases, a limit behavior occurs, indicating that, above a given threshold $N^*$, the use of a higher number $N>N^*$ of states (velocity vectors) is practically immaterial.
Physical properties {#sec_4}
===================
In this Section, we address some physical observations on the properties of GPK processes that can be of interest in several branches of physics.
Stochastic field equations and Brownian-motion mollification {#sec_4_1}
------------------------------------------------------------
Starting from the works by Wong and Zakai [@wong_zakai1; @wong_zakai2], mollification (regularization) of Wiener processes and Wiener-driven stochastic differential equations has become an important field of stochastic analysis. In the original Wong-Zakai papers, Wiener processes have been mollified using interpolation techniques obtaining piecewise linear, and therefore almost everywhere (a.e.) smooth approximations of Brownian motion. The so-called Wong-Zakai theorem derived by these authors admits several important implications in stochastic theory [@wong1; @wong2; @wong3] and the mollified version of a Langevin equation is described statistically in a suitable limit by the Stratonovich Fokker-Planck equation associated with the original Langevin model, using the Stratonovich recipe for the stochastic integrals.
Poisson-Kac and GPK processes provide a physically significant way of mollifying stochastic dynamics, as Poisson-Kac perturbations, admitting a finite propagation velocity, evolve as physical fields and possess a.e. regular trajectories. This property is particularly important in all cases where the stochastic perturbation does not derive from a coarse-grained approximation of many uncorrelated disturbances, but itself admits a fundamental physical nature, such as the fluctuating component of the electromagnetic field (including the zero-point field), which plays a central role in quantum electrodynamics, in understanding fundamental particle-field interactions, and in general cosmology [@milonni].
In this framework, GPK processes are the natural candidates for attempting a modeling of fundamental field fluctuations, since their wave-like propagation intrinsically matches the requirements of special relativity as regards the bound on the propagation velocity. It is rather straightforward to derive from Poisson-Kac and GPK processes a Wong-Zakai theorem connecting the Kac limit to the Stratonovich Fokker-Planck equation.
Mollification of Brownian motion can be of wide mathematical-physical interest in connection with the analysis of Stochastic Partial Differential Equations (SPDE), which have recently experienced significant progress due to the introduction of new concepts and mathematical tools such as “regularity structures” and rough-path analysis [@rough_path0; @rough_path].
For SPDE and in stochastic field theory, the use of Poisson-Kac and GPK processes provides an interesting alternative approach in order to study these models using a.e. differentiable stochastic perturbations (which are definitely simpler to handle both numerically and theoretically), and considering their Kac limit for approaching the nowhere-differentiable case.
Let us clarify this approach with a very simple example, leaving the analysis of physically interesting SPDE to future works. Let $\Omega$ be a bounded domain of ${\mathbb R}^n$ and ${\mathcal L} : {\mathcal D}(\Omega)
\rightarrow L^2(\Omega)$ a differentiable operator, mapping a subset ${\mathcal D}(\Omega) \subset L^2(\Omega)$ into $L^2(\Omega)$, equipped with suitable boundary conditions at $\partial \Omega$. Assume that ${\mathcal L}$, equipped with the given boundary conditions, admits a complete eigenbasis $\{ \psi_k({\bf x}) \}_{k=1}^\infty$ $${\mathcal L}[\psi_k({\bf x})] = \mu_k \, \psi_k({\bf x})
\label{eq9_1_1}$$ normalized to unit $L^2$-norm, spanning $L^2(\Omega)$. The simplest case is ${\mathcal L}=\nabla^2$ equipped with, say, homogeneous Dirichlet conditions at the boundary of the domain.
Let us consider a linear SPDE, given by $$\partial_t c({\bf x},t) = {\mathcal L}[c({\bf x},t)] + b({\bf x})
\, (-1)^{\chi(t)}
\label{eq9_1_2}$$ where $\chi(t)$ is a simple Poisson process characterized by the transition rate $\lambda$. If ${\mathcal L}=\nabla^2$, eq. (\[eq9\_1\_2\]) is a modified form of the Edwards-Wilkinson model [@edwards; @racz] of interface dynamics. Setting $c(x,t) = \sum_{k=1}^\infty c_k(t) \, \psi_k({\bf x})$, eq. (\[eq9\_1\_2\]) reduces to the system of stochastic differential equations for the Fourier coefficients $$d c_k(t) = \mu_k \, c_k(t) \, dt + b_k \, (-1)^{\chi(t)} \, dt
\label{eq9_1_3}$$ where $b_k = \int_{\Omega} b({\bf x}) \, \psi_k({\bf x}) \, d {\bf x}$. The evolution equations for the associated partial waves $p^{\pm}(\{c_k \}_{k=1}^\infty,t)$ thus become $$\partial_t p^{\pm} = -\sum_{k=1}^\infty \partial_k \left [ (\mu_k \, c_k \pm b_k
) \, p^\pm\right ] \mp \lambda \, ( p^+ -p^- )
\label{eq9_1_4}$$ that can be solved truncating the summation up to a given integer $N$. From eq. (\[eq9\_1\_4\]) all the information on the mean field $$\langle c({\bf x},t) \rangle = \sum_{k=1}^\infty \psi_k({\bf x}) \, \int
c_k \, p(\{ c_k \}_{k=1}^\infty,t) \, d {\bf c}
\label{eq9_1_4bis}$$ where $d {\bf c}=\prod_{k=1}^\infty d c_k$, as well as on the correlation functions can be derived.
The problem analyzed above is fairly simple as the noise perturbation does not depend on ${\bf x}$. It is however straightforward to consider space-time Poisson processes representing mollifications of delta-correlated stochastic perturbations both in time and in space, which is the classical prototype of stochastic forcing in many problems involving SPDE.
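As a concrete illustration of eq. (\[eq9\_1\_3\]), the sketch below integrates a truncated set of modes for ${\mathcal L}=\nabla^2$ on $[0,1]$ with homogeneous Dirichlet conditions, so that $\mu_k=-(k\pi)^2$; the coefficients $b_k$, the single-realization Euler scheme, and all numerical parameters are illustrative choices. Since $|(-1)^{\chi(t)}|=1$, each mode obeys the a priori bound $|c_k(t)|\leq |b_k|/|\mu_k|$ for $c_k(0)=0$, which the discrete dynamics respects.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_modes(b_k, lam=2.0, dt=1e-3, t_max=5.0):
    """Euler integration of dc_k = mu_k c_k dt + b_k (-1)^chi dt, with a
    single telegraph forcing and mu_k = -(k pi)^2 (Dirichlet Laplacian)."""
    K = len(b_k)
    mu = -np.array([(k * np.pi) ** 2 for k in range(1, K + 1)])
    c = np.zeros(K)
    sign = 1.0                            # current value of (-1)^chi
    t_next = rng.exponential(1.0 / lam)   # next Poisson transition time
    t = 0.0
    while t < t_max:
        if t >= t_next:                   # telegraph switch (at most one per
            sign = -sign                  # step, accurate for dt << 1/lam)
            t_next += rng.exponential(1.0 / lam)
        c += dt * (mu * c + sign * np.asarray(b_k))
        t += dt
    return c, mu

c, mu = simulate_modes([1.0, 0.5, 0.25])
```

Averaging such trajectories over realizations of $\chi(t)$ gives the mean field of eq. (\[eq9\_1\_4bis\]); the single realization shown here is only meant to display the boundedness of the mode amplitudes.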
Consider, for example, a single spatial dimension. Since the spatial coordinate is defined also for negative values, the extension of a Poisson-Kac process over the real line is necessary. This can be performed, as in the Wiener case, by considering two independent Poisson processes $\chi_{1^\prime}(x)$ and $\chi_{1^{\prime \prime}}(x)$, possessing the same transition rate $\lambda_1$ and defined for $x \geq 0$, and by introducing the extended process $\chi_1(x)$ defined for $x \in {\mathbb R}$ as $$\chi_1(x)=
\left \{
\begin{array}{lll}
\chi_{1^\prime}(-x) & & x<0 \\
\chi_{1^{\prime \prime}}(x) & & x>0
\end{array}
\right .
\label{eq_9_add}$$ Next consider the process $\chi(x,t)$, $(x,t) \in {\mathbb R} \times ({\mathbb R}^+ \cup \{0\})$, defined as $$\chi(x,t)= \chi_1(x) + \chi_2(t)
\label{eq9_1_5}$$ where $\chi_1(x)$ and $\chi_2(t)$ are two independent Poisson processes characterized by transition rates $\lambda_1$ and $\lambda_2$, respectively, where $\chi_1(x)$ is the extended process defined by eq. (\[eq\_9\_add\]), and the SPDE $$\partial_t c(x,t) = {\mathcal L}[c(x,t)] + \alpha \, b(x) (-1)^{\chi(x,t)}
\label{eq9_1_6}$$ $x \in {\mathbb R}$, where $\alpha>0$ is a parameter specified below. As regards the correlation properties of the noise perturbation one has $$\begin{aligned}
\left \langle (-1)^{\chi(x^\prime,t^\prime)} \, (-1)^{\chi(x,t)} \right
\rangle
& = & \left \langle (-1)^{\chi_1(x^\prime)-\chi_1(x)} \, (-1)^{\chi_2(t^\prime)
-\chi_2(t)} \right \rangle \nonumber \\
& = & e^{-2 \lambda_1 |x^\prime-x|} \,
e^{-2 \lambda_2 |t^\prime -t|}
\label{eq9_1_7}\end{aligned}$$ Therefore, if one sets $\alpha$ equal to $$\alpha= \sqrt{ 4 \, \lambda_1 \, \lambda_2}
\label{eq9_1_7bis}$$ the process $\alpha \, (-1)^{\chi(x,t)}$ corresponds to a mollification of a $\delta$-correlated process both in time and space, converging to it in the limit $\lambda_1,,\lambda_2 \rightarrow \infty$.
The evolution equations for the Fourier coefficients of $c(x,t)$ become $$d c_k(t) = \mu_k \, c_k(t) \, dt + \alpha \, b_k (-1)^{\chi_2(t)} \, dt
\label{eq9_1_8}$$ where $$b_k = \int_{-\infty}^{\infty} (-1)^{\chi_1(x)} \, b(x) \psi_k(x) d x
\label{eq9_1_9}$$ The expression for the random variables $b_k$ can be easily obtained by considering the dichotomous nature of $(-1)^{\chi_1(x)}$, and the fact that the spacings between transition points follow an exponential distribution defined by the transition rate $\lambda_1$.
In a similar way, nonlinear problems, such as classical stochastic fluid dynamic models (e.g. the Burgers equation), growth models (e.g. the KPZ equation), or the stochastic quantization of fields, can be approached both numerically and theoretically. Once again, it is important to observe that the mollification arising from the use of Poisson-Kac and GPK processes is not just a mathematical artifact to regularize the structure of a SPDE, but a way of describing physical fluctuations possessing bounded propagation velocity and intrinsic relativistic consistency. The extension to higher dimensions is also straightforward, by considering space-time Poisson processes $\chi({\bf x},t)$ in ${\mathbb R}^n
\times ({\mathbb R}^+ \cup \{0\})$ defined, analogously to eq. (\[eq9\_1\_5\]), as $\chi({\bf x},t)= \sum_{h=1}^n \chi_h(x_h)+
\chi_{n+1}(t)$.
Ergodicity and $L^2$-dynamics {#sec_4_2}
-----------------------------
In this paragraph we address some issues on the ergodicity of Poisson-Kac and GPK processes and on some anomalies of $L^2$-dynamics in the presence of conservative deterministic fields, considering the one-dimensional Poisson-Kac process $$d x(t) = v(x(t)) \, dt + b \, (-1)^{\chi(t)} \, dt
\label{eq9_1_10}$$ $x \in {\mathbb R}$. This paragraph represents a brief review with some extensions of the results presented in [@giona_epl]. In one-dimensional problems, $v(x)$ can be always regarded as a potential field deriving from the potential $U(x)=-\int^x v(\xi) \, d \xi$. The associated partial probability waves satisfy eqs. (A7) of part I where $v_\pm(x)=v(x)\pm b$. The stationary partial density functions $p^\pm_*(x)$, satisfy the differential equations $$\begin{aligned}
\frac{d \left ( v_+(x) \, p^+_*(x) \right )}{ d x}
&= & -\lambda \, (p_*^+(x) - p_*^-(x))
\nonumber \\
\frac{d \left ( v_-(x) \, p^-_*(x) \right )}{ d x}& = &
\lambda \, (p_*^+(x) - p_*^-(x))
\label{eq9_1_11}\end{aligned}$$ from which it follows that $$v_+(x) \, p_*^+(x) + v_-(x) \, p_*^-(x) = C= \mbox{constant}
\label{eq9_1_12}$$ where the constant $C$ should be in general equal to zero because of the regularity at infinity. Therefore, $$p_*^-(x)= - \frac{v_+(x)}{v_-(x)} \, p_*^+(x)
\label{eq9_1_13}$$ Since by definition $v_+(x)>v_-(x)$ for $b>0$, it follows that a stationary (positive) partial probability density may occur solely within intervals $(a,b)$ where the conditions $$v_-(x) <0 \,, \qquad v_+(x) >0 \, \qquad x \in (a,b)
\label{eq9_1_14}$$ are satisfied. Conditions (\[eq9\_1\_14\]) correspond formally to the simultaneous presence of a forwardly propagating wave $p^+(x,t)$ and of a backwardly propagating wave $p^-(x,t)$.
Suppose that $v(x)$ and $b$ are such that there exists a double sequence $x_{-,h}^*$, $x_{+,h}^*$, $h = -N_1,\dots,N_2$, $N_1,N_2>0$, of abscissas $$\dots < x_{+,h-1}^* < x_{-,h}^* < x_{+,h}^* < x_{-,h+1}^* < \dots
\label{eq9_1_15}$$ such that $\{ x_{-,h}^* \}$ correspond to the nodal points of $v_-(x)$, $v_-(x_{-,h}^*)=0$, and $\{ x_{+,h}^* \}$ to the nodal point of $v_+(x)$, $v_+(x_{+,h}^*)=0$. From the above discussion, and from eq. (\[eq9\_1\_14\]), it follows that each subinterval $I_h=[x_{-,h}^*,x_{+,h}^*]$ represents an invariant interval for the partial-wave dynamics. If more than a single invariant interval exists, then the stochastic dynamics (\[eq9\_1\_10\]) is not ergodic, meaning that there exists a multiplicity of stationary invariant densities, each of which possesses compact support localized in the invariant intervals $I_h$.
A typical situation where invariant-density multiplicity occurs is depicted in figure \[Fig22\] for a sinusoidal deterministic drift $v(x)=\cos(x)$ and $b<1$ (actually $b=1/2$). The phenomenon of multiplicity of stationary invariant densities disappears generically for sufficiently large values of $b$ and, [*a fortiori*]{}, in the Kac limit.
[![$v_+(x)$ (line a) and $v_-(x)$ (line b) for $v(x)=\cos(x)$, $b=1/2$.[]{data-label="Fig22"}](scheme_1.eps "fig:"){height="6cm"}]{}
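The invariant intervals of figure \[Fig22\] can be located numerically from the nodal points of $v_\pm(x)$; a minimal sketch for $v(x)=\cos(x)$, $b=1/2$ (the bisection helper and the sampling grid are our own choices):

```python
import math

b = 0.5
v_plus = lambda x: math.cos(x) + b      # v_+(x) = v(x) + b
v_minus = lambda x: math.cos(x) - b     # v_-(x) = v(x) - b

def bisect(f, lo, hi, tol=1e-12):
    """Plain bisection for a single sign change of f on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# left endpoint: nodal point of v_-, right endpoint: nodal point of v_+
x_left = bisect(v_minus, 0.0, math.pi)    # expected pi / 3
x_right = bisect(v_plus, 0.0, math.pi)    # expected 2 pi / 3

# inside (x_left, x_right) the two waves counter-propagate: v_- < 0 < v_+
xs = [x_left + (x_right - x_left) * i / 100 for i in range(1, 100)]
trapped = all(v_minus(x) < 0.0 < v_plus(x) for x in xs)
```

Per period of $v(x)$ this yields the invariant interval $[\pi/3, 2\pi/3]$ (mod $2\pi$), consistently with the crossing pattern visible in the figure.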
There is another peculiarity of Poisson-Kac and GPK processes that should be addressed. In Section \[sec\_2\] we analyzed the properties of energy dissipation functions represented by suitable $L^2$-norms of the partial probability density waves in two distinct cases: (i) in the absence of a deterministic bias, and (ii) where ${\bf v}({\bf x})$ is solenoidal, i.e., it stems from a vector potential. The complementary case where ${\bf v}({\bf x})$ derives from a scalar potential, i.e., ${\bf v}({\bf x})=-\nabla \phi({\bf x})$, has not been addressed. This was not fortuitous as, in the presence of potential velocity fields, the $L^2$-dynamics of the partial waves may display highly anomalous and singular properties for low values of the intensity of the stochastic velocity $b$. The archetype of such a singular behavior can be easily understood by means of the one-dimensional model (\[eq9\_1\_10\]) defined on the unit interval $[0,1]$ and equipped with reflecting conditions at the endpoints $x=0,1$. As a model of the deterministic bias $v(x)$ choose, for instance, $$v(x) = \frac{3}{2} + \sin(2 \pi x)
\label{eq9_2_1}$$ and take the stochastic velocity intensity $b$ as a parameter. Figure \[Fig23\] panel (a) depicts the behavior of $v_\pm(x)$ at $b=0.7$, while panel (b) refers to $b=3/2$.
[![$v_+(x)$ (line a) and $v_-(x)$ (line b) for the deterministic field $v(x)$ of eq. (\[eq9\_2\_1\]) at two different values of $b$. Panel (a): $b=0.7$; panel (b): $b=3/2$.[]{data-label="Fig23"}](scheme2tot.eps "fig:"){height="10cm"}]{}
Let us analyze the two cases separately in terms of the qualitative evolution of the partial probability waves. With reference to the case $b=0.7$ (panel a), the interval $[0,x^*)$, where $x^* \simeq 0.65$ is the first zero of $v_-(x)$, is an escaping interval for the partial-wave dynamics: both $p^+(x,t)$ and $p^-(x,t)$ are progressive waves in $[0,x^*)$, so that there exists a time instant $T^*$ such that, for $t>T^*$ and for any initial condition, $p^\pm(x,t)=0$ for $x \in [0,x^*)$. In the interval $[x^*,x_1]$, where $x_1$ is the second zero of $v_-(x)$, a forward-propagating wave $p^+(x,t)$ coexists with a backward-propagating one $p^-(x,t)$. Owing to the recombination amongst the partial waves, and to the fact that the forward $p^+$-wave propagates further towards $x>x_1$, even this subinterval will eventually be depleted, so that, for sufficiently long times $t$, both $p^\pm(x,t)$ for $x \in [x^*,x_1]$ become arbitrarily small. Therefore, the wave nature of the dynamics pushes the probability densities towards the interval $(x_1,1]$. But in this region both $v_\pm(x)>0$, so that the two partial probability waves continue to propagate forward until they reach $x=1$, where they progressively accumulate due to the reflection conditions.
Therefore, just because of the reflecting boundary condition at $x=1$, the unique stationary density becomes singular, $${p_*}^+(x)={p_*}^-(x)= \frac{\delta(x-1)}{2}
\label{eq9_2_2}$$
Figure \[Fig24\] depicts the evolution of the moments (panel a) and of the $L^2$-norms (panel b), obtained from stochastic simulations of eq. (\[eq9\_1\_10\]) at $D_{\rm eff}=1$, starting from an initial distribution localized at $x=0$, $p^+(x,0)=p^-(x,0)=\delta(x)/2$. As expected from eq. (\[eq9\_2\_2\]), the first-order moment $m_1(t)$ approaches $1$ at an exponential rate, $1- m_1(t) \sim e^{-2 \lambda t}$. The variance $\sigma_x^2(t)$ displays a non-monotonic behavior with respect to $t$, converging asymptotically to zero at the same exponential rate.
[![Panel (a): $1-m_1(t)$ (line a) and $\sigma_x^2(t)$ (line b) for the model problem described in the main text at $b=0.7$, $D_{\rm eff}=1$. Panel (b): Norm dynamics for the same problem: $e^*(t)$ (line a) and $e_d(t)$ (line b) vs $t$ (see the main text for the definition of these quantities).[]{data-label="Fig24"}](dinamica_wall.eps "fig:"){height="10cm"}]{}
As regards the $L^2$-norm depicted in panel (b), data have been obtained sampling a population of $10^7$ particles using a partition of the unit interval into $10^3$ subintervals. In this figure $e^*(t)=||p(x,t)||_{L^2}$ and $e_d(t)=\sqrt{{\mathcal E}_d(t)}$, where ${\mathcal E}_d(t)=(||p^+(x,t)||_{L^2}^2
+ ||p^-(x,t)||_{L^2}^2)/2$. As expected, both these quantities exhibit non-monotonic behavior and eventually diverge for $t \rightarrow \infty$.
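The accumulation at $x=1$ predicted by eq. (\[eq9\_2\_2\]) is straightforward to reproduce numerically. The following is a minimal Monte Carlo sketch (our own illustration, not the simulation code used for figure \[Fig24\]): it assumes a plain Euler discretization of eq. (\[eq9\_1\_10\]) with $v(x)$ given by eq. (\[eq9\_2\_1\]), the scaling $\lambda=b^2/(2 D_{\rm eff})$ with $D_{\rm eff}=1$, and wall reflections that fold the position back and reverse the velocity direction.

```python
import math, random

random.seed(0)
b, lam, dt = 0.7, 0.245, 0.004            # lam = b^2/(2 D_eff), with D_eff = 1

def v(x):                                 # deterministic bias of eq. (eq9_2_1)
    return 1.5 + math.sin(2.0*math.pi*x)

def simulate(n_part=500, t_end=12.0):
    """Euler scheme for dx = (v(x) + s*b) dt, with the direction s = +/-1
    flipping at Poisson rate lam; the reflecting walls at x = 0, 1 fold the
    position overshoot back and reverse the direction."""
    xs = []
    for _ in range(n_part):
        x, s = 0.0, random.choice((-1.0, 1.0))   # p^+(x,0) = p^-(x,0) = delta(x)/2
        for _ in range(int(t_end/dt)):
            x += (v(x) + s*b)*dt
            if random.random() < lam*dt:
                s = -s
            if x > 1.0:
                x, s = 2.0 - x, -s
            if x < 0.0:
                x, s = -x, -s
        xs.append(x)
    return xs

xs = simulate()
m1 = sum(xs)/len(xs)
var = sum((u - m1)**2 for u in xs)/len(xs)
print(1.0 - m1, var)    # both small: the probability mass piles up at x = 1
```

With these parameters both $1-m_1$ and $\sigma_x^2$ are already small at $t=12$, consistent with the exponential approach to the impulsive density $\delta(x-1)$; the residual mass sits near the stable zero $x^* \simeq 0.65$ of $v_-(x)$ until a Poisson flip releases it.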
A singular impulsive invariant density occurs for $b<b^*=3/2$, the value at which $v_-(1)=0$. At $b=3/2$, a unique invariant density admitting compact non-atomic support $[x^*,1]$, $x^*=1/2$, appears. From eqs. (\[eq9\_1\_11\]), (\[eq9\_1\_13\]), after elementary manipulations, the invariant density $p_*(x)$ takes the expression $$p_*(x) =\frac{A}{b^2 -v(x)} \, \exp \left [ -2 \lambda
\int_{x^*}^x \frac{v(\xi)}{v^2(\xi) - b^2} \, d \xi \right ]
\label{eq9_2_3}$$ where $A$ is a normalization constant.
[![Invariant stationary probability density $p_*(x)$ for the Poisson-Kac scheme (\[eq9\_1\_10\]) with $v(x)$ given by eq. (\[eq9\_2\_1\]) at $D_{\rm eff}=1$, $b=3/2$. The “noisy” line represents the result of stochastic simulation, the smooth line represents eq. (\[eq9\_2\_3\]).[]{data-label="Fig25"}](pot_1d.eps "fig:"){height="6cm"}]{}
Figure \[Fig25\] depicts the comparison of the closed-form expression for the invariant density at $b=3/2$, $D_{\rm eff}=1$ and the results of stochastic simulation of eq. (\[eq9\_1\_10\]).
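Eq. (\[eq9\_2\_3\]) as written can be evaluated by elementary quadrature. The sketch below (our own check, not the code behind figure \[Fig25\]) assumes $\lambda=b^2/2=1.125$ from $D_{\rm eff}=1$, and replaces the singular lower limit $x^*$ of the inner integral by an interior reference point $x_0=3/4$, which only changes the normalization constant $A$. The mass over $\varepsilon$-clipped intervals converges as $\varepsilon \rightarrow 0$, showing that the endpoint behavior of the density is integrable.

```python
import math

b, lam = 1.5, 1.125                      # D_eff = b^2/(2 lam) = 1

def v(x):
    return 1.5 + math.sin(2.0*math.pi*x)

# cumulative trapezoid rule for the exponent integral, on a grid clipped
# slightly away from the endpoints x = 1/2 and x = 1
eps0, M = 1e-5, 100000
xs = [0.5 + eps0 + k*(0.5 - 2.0*eps0)/M for k in range(M + 1)]
f = [v(u)/(v(u)**2 - b*b) for u in xs]
I = [0.0]
for k in range(M):
    I.append(I[-1] + 0.5*(f[k] + f[k+1])*(xs[k+1] - xs[k]))
Imid = I[M//2]                           # shift the reference point to x0 = 3/4
p = [math.exp(-2.0*lam*(I[k] - Imid))/(b*b - v(u)) for k, u in enumerate(xs)]

def mass(eps):
    """Trapezoid rule for the unnormalized density over [1/2+eps, 1-eps]."""
    ks = [k for k, u in enumerate(xs) if 0.5 + eps <= u <= 1.0 - eps]
    return sum(0.5*(p[k] + p[k+1])*(xs[k+1] - xs[k]) for k in ks[:-1])

m3, m4, m5 = mass(1e-3), mass(1e-4), mass(1e-5)
print(m3, m4, m5)    # increasing and converging: the endpoint behavior is integrable
```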
Ergodicity breaking in higher dimensions {#sec4_3}
----------------------------------------
Ergodicity breaking occurs also in higher dimensional GPK models in the presence of attractive and periodic potentials. Consider a two-dimensional GPK process $$d {\bf x}(t) = {\bf v}({\bf x}(t)) \, dt + {\bf b}_{\chi_N(t)} \, d t
\label{eqs_1}$$ with ${\bf b}_\alpha=b \, (\cos \phi_\alpha, \sin \phi_\alpha)$, $\phi_\alpha= 2 \pi (\alpha-1)/N$, $\alpha=1,\dots,N$, $A_{\alpha,\beta}=1/N$, and $\lambda_\alpha=\lambda$, $\alpha,\beta=1,\dots, N$, in the presence of a deterministic bias ${\bf v}({\bf x})$ stemming from a potential, ${\bf v}({\bf x})= - \nabla U({\bf x})/\eta$, which corresponds to a typical transport problem under overdamped conditions, where $\eta$ is the friction factor.
To begin with consider a harmonic, globally attractive, contribution $${\bf v}({\bf x})= - v_0 \, \left (
\begin{array}{c}
x \\
y
\end{array}
\right )
\label{eqs_2}$$ deriving from the quadratic potential $U({\bf x}) = U_0 (x^2+y^2)/2$, with $v_0=U_0/\eta$, and set $D_{\rm nom}=b^2/2 \lambda=1$, and $v_0=1$. Figure \[Fig\_z1\] depicts some orbits of GPK particles for several values of $N$ and $b$. As can be observed particle motion is localized within an invariant region $\Omega$ of the plane, the structure of which depends on the choice of the stochastic velocity vectors, i.e., on $N$ and $b$.
[![Orbits of the GPK process described in the main text in the presence of a two-dimensional attractive harmonic potential. Panel (a): $N=5$, $b=1$; panel (b): $N=20$, $b=1$; panel (c): $N=5$, $b=10$.[]{data-label="Fig_z1"}](traj_radial_2d_m.eps "fig:"){height="4cm"}]{}
The structure of the invariant domain $\Omega$ can be derived from the condition of invariance, which dictates $$\left . \left ({\bf v}({\bf x}) + {\bf b}_\alpha \right ) \cdot {\bf n}_e({\bf x}) \right |_{{\bf x} \in \partial \Omega} \leq 0 \qquad \forall \alpha=1,\dots,N
\label{eqs_3}$$ at the boundary $\partial \Omega$ of $\Omega$, where ${\bf n}_e({\bf x})$ is the outwardly oriented normal unit vector at ${\bf x} \in \partial \Omega$, and “$\cdot$” indicates the Euclidean scalar product. By considering the radial symmetry of the potential, an invariant region (not the minimal one) can be sought as a circle of radius $R$ around the origin. Let $(r,\theta)$ be the radial coordinates. Since ${\bf v}({\bf x})=- v_0 \, r \, {\bf e}_r$, where ${\bf e}_r$ is the unit radial vector, eq. (\[eqs\_3\]) can be expressed as $$- v_0 \, R + b \, \cos(\phi_\alpha - \theta) \leq 0
\label{eqs_4}$$ $\alpha =1,\dots N$, $\theta \in [0,2 \pi)$, i.e., $$R \geq \frac{b}{v_0} \, \cos(\phi_\alpha-\theta)
\label{eqs_5}$$ that is certainly satisfied provides that $R>R_c=b/v_0$. The contour plots of the stationary invariant densities $p_*({\bf x})$ associated with two typical GPK processes depicted in figure \[Fig\_z1\] are shown in figure \[Fig\_z2\].
[![Contour plot of the compactly supported stationary invariant densities $p_*({\bf x})$ for GPK processes in the presence of a harmonic potential. Panel (a) refers to the contour plot of $\log(p_*({\bf x}))$ at $N=5$, $b=1$, so that $R_c=1$, panel (b) to the contour plot of $p_*(x,y)$ at $N=5$, $b=10$, thus $R_c=10$.[]{data-label="Fig_z2"}](3dradial.eps "fig:"){height="6.0cm"}]{}
For low values of $b$ (panel a), the support of the invariant density strongly depends on the geometry of the stochastic velocity vectors (in this case, possessing a pentagonal shape, since $N=5$). For high values of $b$, the stationary invariant density does not depend on $\{ {\bf b}_\alpha\}_{\alpha=1}^N$, and can be accurately approximated by the corresponding Kac-limit solution, that in the present case provides the expression $$p_*({\bf x})= A \, \exp \left [ - \frac{U({\bf x})}{ \eta \, D_0}
\right ] = A \, \exp \left [ - \frac{v_0 \, (x^2+y^2)}{2 \, D_0} \right ]
\label{eqs_6}$$ where $A$ is a normalization constant. Figure \[Fig\_z3\] depicts the stationary radial distribution function $p_r*(r)$, $\int_0^\infty p_r^*(r) \, dr=1$ in the cases considered above.
[![Stationary radial distribution $p^*_r(r)$ vs $r$ for GPK processes in the presence of a harmonic potential. Panel (a) refers to $b=1$. line (a) corresponds to $N=5$, line (b) to $N=10$. Panel (b) to $b=10$. Symbols ($\circ$) correspond to $N=5$, ($\square$) to $N=20$. The solid line is the Kac-limit expression for $p_r^*(r)$ eq. (\[eqs\_7\]).[]{data-label="Fig_z3"}](radial.eps "fig:"){height="6.cm"}]{}
For small values of $b$ (panel (a), $b=1$), $p_r^*(r)$ is essentially localized at the outer boundary, i.e., at $r=R_c=1$, while for high $b's$ it practically coincides with the Kac-limit expression $$p_r^*(r)= \frac{v_0}{D_0} \, r \, e^{-v_0 r^2/2 D_0}
\label{eqs_7}$$
This preliminary analysis of GPK processes in a radially attractive potential is preparatory to the interpretation of ergodicity-breaking phenomena in periodic potentials. Consider the GPK process (\[eqs\_1\]) in ${\mathbb R}^2$, in the presence of a generic periodic potential ${\bf v}({\bf x})=-\nabla U({\bf x}) /\eta$, say $$U({\bf x})= - \frac{U_0}{ 2 \pi} \, \cos(2 \pi x) \, \sin(2 \pi y)
\label{eqs_8}$$ Set $\eta=1$, and $U_0=5$ for convenience, as the analysis of ergodicity breaking is qualitatively independent of the values attained by $\eta$ and $U_0$, and set $N=20$, $D_{\rm nom}=1$. Figure \[Fig\_z4\] panel (a) shows the structure of the periodic potential (\[eqs\_8\]) considered.
[![Panel (a): Contour plot of the periodic potential (\[eqs\_8\]). Panel (b): $b=3$, orbits of the GPK process starting from different initial positions. Panel (c): $b=3.7$, generic orbit of the GPK process. Panel (d): $b=10$, generic orbit of the process. []{data-label="Fig_z4"}](fig_potential.eps "fig:"){height="12cm"}]{}
Once $D_{\rm nom}$ is fixed, the only parameter of the model is the intensity $b$ of the stochastic velocity fluctuations. For small values of $b$, below a given threshold $b_{\rm crit}$, multiplicity of stationary invariant measures occurs, corresponding to the presence of a countable system of invariant regions for the GPK process located around the local potential minima. This phenomenon is depicted in panel (b) of figure \[Fig\_z4\], representing some trajectories of GPK particles at $b=3$, starting from several different initial positions, that become trapped within the invariant regions around the local potential minima. The critical value $b_{\rm crit}$ depends linearly on the potential intensity $U_0$. For $b<b_{\rm crit}$, ergodicity breaking occurs. Above $b_{\rm crit}$, which in the present case is approximately $b_{\rm crit} \simeq 3.59$, GPK dynamics does not display multiplicity of localized stationary invariant measures, and ergodicity is recovered. The qualitative behavior of the orbits above the threshold $b_{\rm crit}$ is depicted in panels (c) and (d). For $b$ slightly above the threshold, as in panel (c) corresponding to $b=3.7$, the orbits of GPK particles display a “punctured dynamics”, with long residence times in the neighborhood of potential minima followed by sudden jumps towards one of the nearest neighboring minima. Conversely, for $b \gg b_{\rm crit}$, as depicted in panel (d) for $b=10$, GPK-particle orbits resemble those of a Brownian particle, and the influence of the potential involves the long-term dispersion properties, the quantitative analysis of which can be recovered from the Kac limit of the model for sufficiently high values of $b$.
Concluding remarks {#sec5}
==================
In this second part we have focused attention on the quantitative description of dissipation in GPK dynamics, both in terms of energy-dissipation functions (substantially corresponding to $L^2$-norms of the partial probability densities) and entropies.
A correct representation of these dissipation functions should necessarily take into account the primitive statistical formulation of the process, based on the full system of partial probability density functions $\{ p_\alpha({\bf x},t) \}_{\alpha=1}^N$. No consistent energy-dissipation or entropy functions can be formulated exclusively from knowledge of the overall probability density function $p({\bf x},t)$. This represents a qualitative stochastic confirmation of the basic ansatz underlying extended thermodynamic theories of irreversible processes. On the other hand, the analysis of higher-dimensional GPK processes ($n>1$) indicates that it is not possible to develop a consistent thermodynamic theory of dynamic processes possessing finite propagation velocity by expressing thermodynamic state variables exclusively in terms of concentrations and their “diffusive” fluxes, as the whole system of partial probability densities (concentrations) should be taken into account. This issue is further developed in part III.
We have also outlined another relevant application of Poisson-Kac and GPK processes as mollifiers of space-time stochastic perturbations in the analysis of field equations (stochastic partial differential equations). This application has been only sketched in Section \[sec\_4\], and hopefully Poisson-Kac mollification can lead to interesting physical and mathematical results, in the spirit of Wong-Zakai theorems and regularity-structures’ theory.
Moreover, we have shown that ergodicity breaking, and the occurrence of multiple stationary invariant measures, are generic properties of GPK dynamics in higher-dimensional periodic potentials, provided that the characteristic intensity $b^{(c)}$ of the stochastic velocity vectors is below a critical threshold $b_{\rm crit}$, which depends on the intensity and on the structure of the potential barriers.
[160]{} Giona M, Brasiello A and Crescitelli S 2016 Stochastic foundations of undulatory transport phenomena: Generalized Poisson-Kac processes - Part I Basic theory, submitted to [*J. Phys. A*]{} Kac M 1974 [*Rocky Mount. J. Math.*]{} [**4**]{} 497 Jou D, Casas-Vazquez J and Lebon G 1996 [*Extended irreversible thermodynamics*]{} (Berlin: Springer Verlag) Müller I and Ruggeri T 2013 [*Rational extended thermodynamics*]{} (Berlin: Springer Verlag) Jou D, Casas-Vazquez J and Lebon G 1999 [*Rep. Prog. Phys.*]{} [**62**]{} 1035 Giona M, Brasiello A and Crescitelli S 2016 Stochastic foundations of undulatory transport phenomena: Generalized Poisson-Kac processes - Part III Extensions and applications to kinetic theory and transport, submitted to [*J. Phys. A*]{} Chirikov B V 1979 [*Phys. Rep.*]{} [**52**]{} 263 McKay R S, Meiss J D and Percival I C 1984 [*Physica D*]{} [**13**]{} 55 Guerra F 1981 [*Phys. Rep.*]{} [**77**]{} 263 Adler R J and Taylor J E 2007 [*Random fields and geometry*]{} (Berlin: Springer Verlag) Giona M, Brasiello A and Crescitelli S 2015 [*Europhys. Lett.*]{} 112 30001 Camacho J and Jou D 1992 [*Phys. Lett. A*]{} [**171**]{} 26 Vlad M O and Ross J 1994 [*Phys. Lett A*]{} [**184**]{} 403 Cimmelli V A, Jou D, Ruggeri T and Van P 2014 [*Entropy*]{} [**16**]{} 1756 Seneta E 2006 [*Non-negative matrices and Markov Chains*]{} (New York: Springer Science & Business Media)
Giona M 2016 Covariance and spinorial statistical description of simple relativistic stochastic kinematics, in preparation Giona M 2016 Relativistic analysis of stochastic kinematics, in preparation Giona M, Adrover A, Cerbelli S and Vitacolonna V, 2004 [*Phys. Rev. Lett.*]{} [**92**]{} 114101 de Groot S R and Mazur P 1984 [*Non-equilibrium thermodynamics*]{} (New York: Dover Publ.) Wong E and Zakai M 1965 [*Int. J. Eng. Sci.*]{} [**3**]{} 213 Wong E and Zakai M 1965 [*Ann. Math. Stat.*]{} [**36**]{} 1560 Konecny F 1983 [*J. Multivariate Anal.*]{} [**13**]{} 605 Twardowska K 1996 [*Acta Appl. Math.*]{} [**43**]{} 317 Hairer M and Pardoux E 2015 [*J. Math. Soc. Japan*]{} [**67**]{} 1551 Milonni P W 1994 [*The quantum vacuum: an introduction to quantum electrodynamics*]{} (Boston: Academic Press) Lyons T J 1998 [*Rev. Mat. Iberoamericana*]{} [**14**]{} 215 Friz P K and Hairer M 2014 [*A Course on Rough Paths*]{} (New York: Springer Science & Business media) Edwards S F and Wilkinson D R 1982 [*Proc. R. Soc. London Ser. A*]{} [**381**]{} 17 Antal T and Racz Z 1996 [*Phys. Rev. E*]{} [**54**]{} 2256
---
abstract: |
Our goal is to find accurate and efficient algorithms, when they exist, for evaluating rational expressions containing floating point numbers, and for computing matrix factorizations (like LU and the SVD) of matrices with rational expressions as entries. More precisely, [*accuracy*]{} means the relative error in the output must be less than one (no matter how tiny the output is), and [*efficiency*]{} means that the algorithm runs in polynomial time. Our goal is challenging because our accuracy demand is much stricter than usual.
The classes of floating point expressions or matrices that we can accurately and efficiently evaluate or factor depend strongly on our model of arithmetic:
1. In the “Traditional Model” (TM), the floating point result of an operation like $a + b$ is $fl(a + b) = (a + b)(1 + \delta)$, where $|\delta|$ must be tiny.
2. In the “Long Exponent Model” (LEM) each floating point number $x = f \cdot 2^e$ is represented by the pair of integers $(f,e)$, and there is no bound on the sizes of the exponents $e$ in the input data. The LEM supports strictly larger classes of expressions or matrices than the TM.
3. In the “Short Exponent Model” (SEM) each floating point number $x = f \cdot 2^e$ is also represented by $(f,e)$, but the input exponent sizes are bounded in terms of the sizes of the input fractions $f$. We believe the SEM supports strictly more expressions or matrices than the LEM.
These classes will be described by factorizability properties of the rational expressions, or of the minors of the rational matrices. For each such class, we identify new algorithms that attain our goals of accuracy and efficiency. These algorithms are often exponentially faster than prior algorithms, which would simply use a conventional algorithm with sufficiently high precision.
For example, we can factorize Cauchy matrices, Vandermonde matrices, totally positive generalized Vandermonde matrices, and suitably discretized differential and integral operators in all three models much more accurately and efficiently than before. But we provably cannot add $x+y+z$ accurately in the TM, even though it is easy in the other models.
[**2000 Mathematics Subject Classification:**]{} 65F, 65G50, 65Y20, 68Q25.
[**Keywords and Phrases:**]{} Roundoff, Numerical linear algebra, Complexity.
author:
- 'J. Demmel[^1]'
title: '**The Complexity of Accurate Floating Point Computation**'
---
Introduction {#section 1}
============
We will survey recent progress and describe open problems in the area of accurate floating point computation, in particular for matrix computations. A very short bibliography would include [@demmelkoev99; @demmel98; @DGESVD; @koevthesis; @dhillonthesis; @barlowdemmel; @demmelkahan; @demmelveselic; @LAA_Special_Issue].
We consider the evaluation of multivariate rational functions $r(x)$ of floating point numbers, and matrix computations on rational matrices $A(x)$, where each entry $A_{ij}(x)$ is such a rational function. Matrix computations will include computing determinants (and other minors), linear equation solving, performing Gaussian Elimination (GE) with various kinds of pivoting, and computing the singular value decomposition (SVD), among others. Our goals are [*accuracy*]{} (computing each solution component with tiny relative error) and [*efficiency*]{} (the algorithm should run in time bounded by a polynomial function of the input size).
We consider three models of arithmetic, defined in the abstract, and for each one we try to classify rational expressions and matrices as to whether they can be evaluated or factored accurately and efficiently (we will say “compute(d) accurately and efficiently,” or “CAE” for short).
In the Traditional “$1 + \delta$” Model (TM), we have $fl(a \otimes b) = (a \otimes b)(1 + \delta)$, $\otimes \in \{ +, -, \times, \div \}$ and $|\delta| \leq \epsilon$, where $\epsilon \ll 1$ is called [*machine precision*]{}. It is the conventional model for floating point error analysis, and means that every floating point result is computed with a relative error $\delta$ bounded in magnitude by $\epsilon$. The values of $\delta$ may be arbitrary real (or complex) numbers satisfying $| \delta | \leq \epsilon$, so that any algorithm proven to CAE in the TM must work for arbitrary real (or complex) number inputs and arbitrary real (or complex) $|\delta| \leq \epsilon$. The size of the input in the TM is the number of floating point words needed to describe it, independent of $\epsilon$.
The Long Exponent (LEM) and Short Exponent (SEM) models, which are implementable on a Turing machine, make errors that may be described by the TM, but their inputs and $\delta$’s are much more constrained. Also, we compute the size of the input in the LEM and SEM by counting the number of bits, so that higher precision and wider range take more bits.
This will mean that problems we can provably CAE in the TM are a strict subset of those we can CAE in the LEM, which in turn we conjecture are a strict subset of those we can CAE in the SEM. In all three models we will describe the classes of rational expressions and rational matrices in terms of the factorization properties of the expressions, or of the minors of the matrices.
The reader may wonder why we insist on accurately computing tiny quantities with small relative error, since in many cases the inputs are themselves uncertain, so that one could suspect that the inherent uncertainty in the input could make even the signs of tiny outputs uncertain. It will turn out that in the TM, the class we can CAE appears to be identical to the class where all the outputs are in fact accurately determined by the inputs, in the sense that small relative changes in the inputs cause small relative changes in the outputs. We make this conjecture more precise in section 3 below.
There are many ways to formulate the search for efficient and accurate algorithms [@cuckersmale99; @smale2; @smale3; @BCSS; @higham96; @ReliableComputing; @pourelrichards89]. Our approach differs in several ways. In contrast to either conventional floating point error analysis [@higham96] or the model in [@cuckersmale99], we ask that even the tiniest results have correct leading digits, and that zero be exact. In [@cuckersmale99] the model of arithmetic allows a tiny absolute error in each operation, whereas in TM we allow a tiny relative error. Unlike [@cuckersmale99] our LEM and SEM are conventional Turing machine models, with numbers represented as bit strings, and so we can take the cost of arithmetic on very large and very small numbers (i.e. those with many exponent bits) into precise account. For these reasons we believe our models are closer to computational practice than the model in [@cuckersmale99]. In contrast to [@pourelrichards89], we (mostly) consider the input as given exactly, rather than as a sequence of ever better approximations. Finally, many of our algorithms could easily be modified to explicitly compute guaranteed interval bounds on the output [@ReliableComputing].
Factorizability and minors {#section 2}
==========================
We show here how to reduce the question of accurate and efficient matrix computations to accurate and efficient rational expression evaluation. The connection is elementary, except for the SVD, which requires an algorithm from [@demmelkoev99].
\[prop\_CAEnecessity\] Being able to CAE the absolute value of the determinant $|\det (A(x))|$ is [*necessary*]{} to be able to CAE the following matrix computations on $A(x)$: LU factorization (with or without pivoting), QR factorization, all the eigenvalues $\lambda_i$ of $A(x)$, and all the singular values of $A(x)$. Conversely, being able to CAE [*all*]{} the minors of $(A(x))$ is [*sufficient*]{} to be able to CAE the following matrix computations on $A(x)$: $A^{-1}$, LU factorization (with or without pivoting), and the SVD of $A(x)$. This holds in any model of arithmetic.
[**Proof** ]{} First consider necessity. $|\det (A(x))|$ may be written as the product of diagonal entries of the matrices $L$, $U$ and $R$ in these factorizations, or as the product of eigenvalues or singular values. If these entries or values can be CAE, then so can their product in a straightforward way.
Now consider sufficiency. The statement about $A^{-1}$ is just Cramer’s rule, which only needs $n^2+1$ different minors. The statement about LU factorization depends on the fact that each nontrivial entry of $L$ and $U$ is a quotient of minors. The SVD is more difficult [@demmelkoev99], and depends on the following two step algorithm: (1) Compute a [*rank revealing*]{} decomposition $A = X \cdot D \cdot Y$ where $X$ and $Y$ are “well-conditioned” (far from singular in the sense that $\|X\|
\cdot \| X^{-1} \|$ is not too large) and (2) use a bisection-like algorithm to compute the SVD from $XDY$.
We believe that computing $\det (A(x))$ is actually necessary, not just $|\det (A(x))|$. The sufficiency proof can be extended to other matrix computations like the QR decomposition and pseudoinverse by considering minors of matrices like $\left[ \begin{array}{cc} I & A \\ A^T & 0 \end{array} \right]$. Furthermore, if we can CAE the minors of $C(x) \cdot A(x) \cdot B(x)$, and $C(x)$ and $B(x)$ are well-conditioned, then we can still CAE a number of matrix factorizations, like the SVD. The SVD can be applied to get the eigendecomposition of symmetric matrices, but we know of no sufficient condition for the accurate and efficient calculation of eigenvalues of nonsymmetric matrices.
CAE in the traditional model {#section 3}
============================
We begin by giving examples of expressions and matrix computations that we can CAE in the TM, and then discuss what we cannot do. The results will depend on details of the axioms we adopt, but for now we consider the minimal set of operations described in the abstract.
As long as we only do [*admissible operations*]{}, namely multiplication, division, addition of like-signed quantities, and addition/subtraction of (exact!) input data ($x \pm y$), then the worst case relative error only grows very slowly, roughly proportionally to the number of operations. It is when we subtract two like-signed approximate quantities and significant cancellation occurs, that the relative error can become large. So we may ask which problems we can CAE just using only admissible operations, i.e. which rational expressions factor in such a way that only admissible operations are needed to evaluate them, and which matrices have all minors with the same property.
Here are some examples, where we assume that the inputs are arbitrary real or complex numbers. (1) The determinant of a Cauchy matrix $C_{ij} = 1/(x_i + y_j)$ is CAE using the classical expression $\prod_{i<j} (x_j - x_i)(y_j - y_i)/\prod_{i,j} (x_i + y_j)$, as is every minor. In fact, changing one line of the classical GE routine will compute each entry of the LU decomposition accurately in about the same time as the original inaccurate version. (2) We can CAE all minors of sparse matrices, i.e. those with certain entries fixed at 0 and the rest independent indeterminates $x_{ij}$, if and only if the undirected bipartite graph representing the sparsity structure of the matrix is [*acyclic*]{}; a one-line change to GE again renders it accurate. An important special case is that of bidiagonal matrices, which arise in the conventional SVD algorithm. (3) The eigenvalue problem for the second centered difference approximation to a Sturm-Liouville ODE or elliptic PDE on a rectangular grid (with arbitrary rectilinear boundaries) can be written as the SVD of an “unassembled” problem $G = D_1 U D_2$ where $D_1$ and $D_2$ are diagonal (depending on “masses” and “stiffnesses”) and $U$ is [*totally unimodular*]{}, i.e. all its minors are $\pm 1$ or 0. Again, a simple change to GE renders it accurate.
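To make example (1) concrete, the following sketch (ours; the test matrix is illustrative) contrasts the factored Cauchy-determinant formula, which uses only admissible operations, with naive floating point GE on a matrix with two nearly coincident nodes $x_1 \approx x_2$; the exact reference value is obtained by running the same factored formula in exact rational arithmetic.

```python
from fractions import Fraction

def cauchy_det(xs, ys, one=1.0):
    """det C, C[i][j] = 1/(x_i + y_j), via the classical factored expression
    prod_{i<j}(x_j-x_i)(y_j-y_i) / prod_{i,j}(x_i+y_j).  Every operation is
    admissible (a product, a quotient, or a +/- of exact inputs), so in
    floating point the relative error stays ~ (#ops) * machine epsilon."""
    num = den = one
    m = len(xs)
    for i in range(m):
        for j in range(m):
            if i < j:
                num = num*(xs[j] - xs[i])*(ys[j] - ys[i])
            den = den*(xs[i] + ys[j])
    return num/den

def det_ge(a):
    """Naive Gaussian elimination with partial pivoting, in plain floats."""
    a = [row[:] for row in a]
    m, det = len(a), 1.0
    for k in range(m):
        piv = max(range(k, m), key=lambda r: abs(a[r][k]))
        if piv != k:
            a[k], a[piv], det = a[piv], a[k], -det
        det *= a[k][k]
        for r in range(k + 1, m):
            fac = a[r][k]/a[k][k]
            for c in range(k + 1, m):
                a[r][c] -= fac*a[k][c]
    return det

xs = [1.0, 1.0 + 2.0**-40, 3.0]           # two nearly coincident nodes
ys = [0.5, 0.25, 0.125]
C = [[1.0/(xi + yj) for yj in ys] for xi in xs]

exact = cauchy_det([Fraction(u) for u in xs], [Fraction(u) for u in ys], Fraction(1))
err_formula = abs(Fraction(cauchy_det(xs, ys)) - exact)/abs(exact)
err_ge = abs(Fraction(det_ge(C)) - exact)/abs(exact)
print(float(err_formula), float(err_ge))  # the factored formula is far more accurate
```

The naive route forms the entries $1/(x_i+y_j)$ with rounding and then cancels them against each other, losing roughly as many digits as $x_1$ and $x_2$ share; the factored formula subtracts only exact inputs and keeps full relative accuracy however tiny the determinant.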
In contrast, one can show that it is impossible in the TM to add $x+y+z$ accurately in constant time; the proof involves showing that for [*any*]{} algorithm the rounding errors $\delta$ and inputs $x,y,z$ can be chosen to have an arbitrarily large relative error. This depends on the $\delta$’s being permitted to be arbitrary real numbers in our model.
Vandermonde matrices $V_{ij} = x_i^{j-1}$ are more subtle. Since the product of a Vandermonde matrix and the Discrete Fourier Transform (DFT) is Cauchy, and we can compute the SVD of a Cauchy, we can compute the SVD of a Vandermonde. This fits in our TM model because the roots of unity in the DFT need only be known approximately, and so may be computed in the TM model. In contrast, one can use the result in the last paragraph to show that the inverse of a Vandermonde cannot be computed accurately. Similarly, polynomial Vandermonde matrices with $V_{ij} = P_i(x_j)$, $P_i$ a (normalized) orthogonal polynomial, also permit accurate SVDs, but probably not inverses.
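The reduction of a Vandermonde to a Cauchy-like matrix rests on the geometric-sum identity $\sum_{j=0}^{n-1} x_i^j \omega^{jk} = (x_i^n-1)/(x_i \omega^k - 1)$, $\omega = e^{2\pi i/n}$, so that $V \cdot F$ is a Cauchy-type kernel scaled by diagonal factors. A quick numerical check (the example nodes are our own):

```python
import cmath

n = 4
x = [0.9, 1.3, -0.4, 2.0]                  # arbitrary nodes, none a root of unity
w = cmath.exp(2j*cmath.pi/n)
V = [[xi**j for j in range(n)] for xi in x]                 # V[i][j] = x_i^j
F = [[w**(j*k) for k in range(n)] for j in range(n)]        # (unnormalized) DFT
VF = [[sum(V[i][j]*F[j][k] for j in range(n)) for k in range(n)]
      for i in range(n)]
# geometric-sum closed form: (VF)[i][k] = (x_i^n - 1)/(x_i w^k - 1), i.e. the
# diagonal scaling (x_i^n - 1) times the Cauchy-type kernel 1/(x_i w^k - 1)
err = max(abs(VF[i][k] - (x[i]**n - 1)/(x[i]*w**k - 1))
          for i in range(n) for k in range(n))
print(err)    # tiny: the identity holds up to roundoff
```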
Adding nonnegativity to the traditional model {#section 4}
=============================================
If we further restrict the domain of (some) inputs to be nonnegative, then much more is possible, $x+y+z$ being a trivial example. A more interesting example is the class of weakly diagonally dominant M-matrices, which arise as discretizations of PDEs; they must be represented by their offdiagonal entries and row sums.
More interesting is the class of [*totally positive (TP) matrices*]{}, all of whose minors are positive. Numerous structure theorems show how to represent such matrices as products of much simpler TP matrices. Accurate formulas for the (nonnegative) minors of these simpler matrices combined with the Cauchy-Binet theorem yield accurate formulas for the minors of the original TP matrix, but typically at an exponential cost.
An important class of TP matrices where we can do much better are the TP generalized Vandermonde matrices $G_{ij} = x_i^{\mu_j}$, where the $\mu_j$ form an increasing nonnegative sequence of integers. $\det (G)$ is known to be the product of $\prod_{i<j} (x_j-x_i)$ and a [*Schur function*]{} [@macdonald] $s_{\lambda} (x_i)$, where the sequence $\lambda = (\lambda_j) = (\mu_{n+1-j}-(n-j))$ is called a [*partition*]{}. Schur functions are polynomials with nonnegative integer coefficients, so since their arguments $x_i$ are nonnegative, they can certainly be computed accurately. However, straightforward evaluation would have an exponential cost $O(n^{|\lambda|})$, $|\lambda| = \sum_j \lambda_j$. But by exploiting combinatorial identities satisfied by Schur functions along with techniques of divide-and-conquer and memoization, the cost of evaluating the determinant can be reduced to polynomial time $n^2\prod_j (\lambda_j+1)^2$. The cost of arbitrary minors and the SVD remains exponential at this time. Note that the $\lambda_i$ are counted as part of the size of the input in this case.
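As a small exact-arithmetic illustration of the determinant identity $\det(G) = \prod_{i<j}(x_j-x_i)\, s_{\lambda}(x_i)$ (our own check; the partition $\lambda=(2,1,0)$, i.e. $\mu=(0,2,4)$, is chosen for convenience): $s_{(2,1)}$ has the explicit tableau expansion written out below, a sum of monomials with nonnegative integer coefficients, which is why the Schur factor can be evaluated without cancellation when the $x_i$ are positive.

```python
from fractions import Fraction as F
from itertools import permutations

def det3(a):
    """Exact 3x3 determinant by the Leibniz expansion."""
    s = F(0)
    for perm in permutations(range(3)):
        sgn = 1 if perm in ((0, 1, 2), (1, 2, 0), (2, 0, 1)) else -1
        s += sgn*a[0][perm[0]]*a[1][perm[1]]*a[2][perm[2]]
    return s

x = [F(1, 2), F(1), F(3, 2)]            # positive and increasing: G is TP
mu = [0, 2, 4]                          # lambda = (2,1,0), mu_j = lambda_{n+1-j} + j - 1
G = [[xi**m for m in mu] for xi in x]

vdm = (x[1] - x[0])*(x[2] - x[0])*(x[2] - x[1])
# tableau expansion of s_(2,1)(x1,x2,x3): every coefficient is a nonnegative
# integer, so for positive x_i it is a sum of positive terms (no cancellation)
x1, x2, x3 = x
schur = (x1**2*x2 + x1*x2**2 + x1**2*x3 + x1*x3**2
         + x2**2*x3 + x2*x3**2 + 2*x1*x2*x3)
print(det3(G) == vdm*schur)             # True
```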
Here is our conjecture generalizing all the cases we have studied in the TM. We suppose that $f(x_1,...,x_n)$ is a homogeneous polynomial, to be evaluated on a domain $\cal D$. We assume that ${\cal D} \subseteq \overline { {\rm int} { \cal D } }$, to avoid pathological domains. Typical domains could be all tuples of the real or complex numbers, or the positive orthant. We say that $f$ satisfies condition $(A)$ (for [*Accurate*]{}) if $f$ can be written as a product $f = \prod_m f_m$ where each factor $f_m$ satisfies
- $f_m$ is of the form $x_i$, $x_i - x_j$ or $x_i + x_j$, or
- $|f_m|$ is bounded away from 0 on $\cal D$.
Let $f$ and $\cal D$ be as above. Then condition $(A)$ is a necessary and sufficient condition for the existence of an algorithm in the TM model to compute $f$ accurately on $\cal D$.
Note that we make no claims that $f$ can be evaluated efficiently; there are numerous examples where we only know exponential-time algorithms (e.g., performing GE with complete pivoting on a totally positive generalized Vandermonde matrix).
Extending the TM {#section 5}
================
So far we have considered the simplest version of the TM, where (1) we have only the input data, and no additional constants available, (not even integers, let alone arbitrary rationals or reals), (2) the input data is given exactly (as opposed to within a factor of $1+\delta$), and (3) there is no way to “round” a real number to an integer, and so convert the problem to the LEM or SEM models. We note that in [@cuckersmale99], (1) integers are available, (2) the input is rounded, and (3) there is no way to “round” to an integer. Changes to these model assumptions will affect the classes of problems we can solve. For example, if we (quite reasonably) were to permit exact integers as input, then we could CAE expressions like $x-1$, and otherwise presumably not. If we went further and permitted exact rational numbers, then we could also CAE $9x^2-1 = 9(x-\frac{1}{3})(x+\frac{1}{3})$. Allowing algebraic numbers would make $x^2-2 = (x-\sqrt{2})(x+\sqrt{2})$ CAE.
If inputs were not given exactly, but rather first multiplied by a factor $1+\delta$, then we could no longer accurately compute $x \pm y$ where $x$ and $y$ are inputs, eliminating Cauchy matrices and most others. But the problems we could solve with exact inputs in the TM still have an attractive property with inexact inputs: Small relative changes in the inputs cause only a small relative change in the outputs, independent of their magnitudes. The output relative errors may be larger than the input relative error by a factor called a [*relative condition number*]{} $\kappa_{rel}$, which is at most a polynomial function of $\max (1/ {\rm rel\_gap} (x_i,\pm x_j))$. Here ${\rm rel\_gap} (x_i, \pm x_j) = |x_i \mp x_j|/( |x_i| + |x_j| )$ is the [*relative gap*]{} between inputs $x_i$ and $\pm x_j$, and the maximum is taken over all expressions $x_i \mp x_j$ appearing in $f = \prod_m f_m$. So if all the inputs differ in several of their leading digits, all the leading digits of the outputs are determined accurately. We note that $\kappa_{rel}$ can be large, depending on $f$ and $\cal D$, but it can only be unbounded when a relative gap goes to zero.
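The relative gap itself is trivial to evaluate; a two-line sketch (the function name is ours) contrasting well-separated and nearly coinciding inputs:

```python
def rel_gap(x, y):
    """rel_gap(x, y) = |x - y| / (|x| + |y|); for the gap to -y, pass -y."""
    return abs(x - y) / (abs(x) + abs(y))

assert rel_gap(1.0, 2.0) == 1 / 3          # well-separated inputs
assert rel_gap(1.0, 1.0 + 1e-8) < 1e-8     # nearly coinciding: kappa_rel blows up
```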
If a problem has this attractive property, we say that it possesses a relative perturbation theory. In practical situations, where only a few leading digits of the inputs $x_i$ are known, this property justifies the use of algorithms that try to compute the output as accurately as we do. We state a conjecture very much like the last one about when a relative perturbation theory exists.
[**Conjecture** ]{} Let $f$ and $\cal D$ be as in the last conjecture. Then condition $(A)$ is a necessary and sufficient condition for $f$ to have a relative perturbation theory.
CAE in the long and short exponent models {#section 6}
=========================================
Now we consider standard Turing machines, where input floating point numbers $x = f \cdot 2^e$ are stored as the pair of integers $(f,e)$, so the size of $x$ is ${{\rm size}}(x) = \#{\rm bits}(f) + \#{\rm bits}(e)$. We distinguish two cases, the Long Exponent Model (LEM) where $f$ and $e$ may each be arbitrary integers, and the Short Exponent Model (SEM), where the length of $e$ is bounded depending on the length of $f$. In the simplest case, when $e=0$ (or lies in a fixed range) then the SEM is equivalent to taking integer inputs, where the complexity of problems is well understood. This is more generally the case if $\#{\rm bits}(e)$ grows no faster than a polynomial function of $\#{\rm bits}(f)$.
In particular it is possible to CAE the determinant of an integer (or SEM) matrix each of whose entries is an independent floating point number [@clarkson]. This is not possible as far as we know in the LEM, which accounts for a large complexity gap between the two models.
We start by illustrating some differences between the LEM and SEM, and then describe the class of problems that we can CAE in the LEM.
First, the number of bits in an expression with LEM inputs can be exponentially larger than the number of bits in the same expression when evaluated with SEM inputs. For example, ${{\rm size}}(x \cdot y) \leq {{\rm size}}(x) + {{\rm size}}(y)$ when $x$ and $y$ are integers, but ${{\rm size}}(x \cdot y) \leq {{\rm size}}(x) \cdot {{\rm size}}(y)$ when $x$ and $y$ are LEM numbers: $(\sum_{i=1}^n 2^{e_i}) \cdot (\sum_{i=1}^n 2^{e'_i})$ has up to $n^2$ different bit positions to store, each $2^{e_i + e'_j}$, not $2n$. In other words, LEM arithmetic can encode symbolic algebra, because if $e_1$ and $e_2$ have no overlapping bits, then we can recover $e_1$ and $e_2$ from the product $2^{e_1} \cdot 2^{e_2} = 2^{e_1 + e_2}$.
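This bit growth is easy to observe with exact integers; in the sketch below (the exponent sets are our own choice) both factors have $n=3$ set bits while the product has $n^2 = 9$, one per distinct sum $e_i + e'_j$:

```python
# Bit growth under LEM multiplication: with exponent sets chosen so that
# all pairwise sums e_i + e'_j are distinct, every sum becomes a set bit.
es  = [0, 100, 200]
eps = [0, 7, 13]
prod = sum(1 << e for e in es) * sum(1 << e for e in eps)
assert bin(prod).count("1") == len(es) * len(eps)   # 9 distinct bit positions
```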
Second, the error of many conventional matrix algorithms is typically proportional to the condition number $\kappa (A) = \|A\| \cdot \|A^{-1}\|$. This means that a conventional algorithm run with $O(\log \kappa(A))$ extra bits of precision will compute an accurate answer. It turns out that if $A(x)$ has rational entries in the SEM model, then $\log \kappa(A)$ is at most a polynomial function of the input size, so conventional algorithms run in high precision will CAE the answer. However $\log \kappa (A)$ for LEM matrices can be exponentially larger, so this approach does not work. The simplest example is $\log \kappa ( {{\rm diag}}(1,2^e) ) = e = 2^{\#{\rm bits}(e)}$. On the other hand $\log \log \kappa (A(x))$ is a lower bound on the complexity of any algorithm, because this is a lower bound on the number of exponent bits in the answer. One can show that $\log \log \kappa (A(x))$ grows at most polynomially large in the size of the input.
Finally, we consider the problem of computing an arbitrary bit in the simple expression $p = \prod_{i=1}^n (1+x_i)$. When the $x_i$ are in the SEM, then $p$ can be computed exactly in polynomial time. However when the $x_i$ are in the LEM, then one can prove that computing an arbitrary bit of $p$ is as hard as computing the permanent, a well-known combinatorially difficult problem. Here is another apparently simple problem not known to even be in NP: testing singularity of a floating point matrix. In the SEM, we can CAE the determinant. But in the LEM, the obvious choice of a “witness” for singularity, a null vector, can have exponentially many bits in it, even if the matrix is just tridiagonal. We conjecture that deciding singularity of an LEM matrix is NP-hard.
So how do we compute efficiently in the LEM? The idea is to use [*sparse arithmetic*]{}, that is, to represent only the nonzero bits in the number. (A long string of 1s can be represented as the difference of two powers of 2 and similarly compressed.) In contrast, in the SEM one uses [*dense arithmetic*]{}, storing all fraction bits of a number. For example, $2^e+1$ takes $O(\log e)$ bits to store in sparse arithmetic, but $e$ bits in dense arithmetic. This idea is exploited in practical floating point computation, where extra-precise numbers are stored as arrays of conventional floating point numbers, with possibly widely different exponents [@priest].
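A toy version of sparse arithmetic (our own representation; for brevity it cheats by letting Python's exact integers do the carrying rather than merging bit lists directly):

```python
def to_sparse(val):
    """Bit positions of a nonnegative integer: its 'sparse' form."""
    out = set()
    while val:                         # one iteration per set bit, not per bit
        low = val & -val               # isolate lowest set bit
        out.add(low.bit_length() - 1)
        val ^= low
    return out

def sparse_mul(a, b):
    """Multiply two sparse numbers; colliding bit positions merge via carries."""
    return to_sparse(sum(1 << e for e in a) * sum(1 << e for e in b))

e = 10**6
x = {0, e}                                       # 2^e + 1: two small entries
assert sparse_mul(x, x) == {0, e + 1, 2 * e}     # (2^e + 1)^2 = 2^2e + 2^(e+1) + 1
```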
Now we describe the class of rational functions that we can CAE in the LEM. We say the rational function $r(x)$ is in factored form if $r(x) = \sum_{i=1}^n p_i (x_1,...,x_k)^{e_i}$, where each $e_i$ is an integer, and $p_i(x_1,...,x_k)$ is written as an explicit sum of nonzero monomials. We say ${{\rm size}}(r)$ is the number of bits needed to represent it in factored form. Then by (1) computing each monomial in each $p_i$ exactly, (2) computing the leading bits of their sum $p_i$ using sparse arithmetic (the cost is basically sorting the bits), and (3) computing the leading bits of the product of the $p_i^{e_i}$ by conventional rounded multiplication or division, one can evaluate $r(x)$ accurately in time a polynomial in ${{\rm size}}(r)$ and ${{\rm size}}(x)$. In other words, the class of rational expression that we can CAE are those that we can express in factored form in polynomial space.
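The three-step recipe can be sketched as follows (the data layout, with each $p_i$ given as a list of (coefficient, exponent-tuple) monomials, is our own, and zero-valued factors are not handled): the exact rational sums correspond to steps (1)-(2), and the rounded log-space product to step (3).

```python
from fractions import Fraction
import math

def eval_factored(factors, xs):
    """Accurately evaluate prod_i p_i(x)^{e_i}: each polynomial p_i is
    summed exactly in rational arithmetic, and only the final powers are
    combined with rounding, in log2 space to dodge over/underflow.
    Returns (sign, log2 of magnitude); zero factors are not handled."""
    sign, lg = 1, 0.0
    for monomials, e in factors:
        p = sum(Fraction(c) * math.prod(Fraction(x) ** k
                                        for x, k in zip(xs, ks))
                for c, ks in monomials)
        if p < 0:
            sign *= (-1) ** e
            p = -p
        lg += e * math.log2(p)
    return sign, lg

# r(x, y) = (x + y)^2 * (x*y)^(-1), written in factored form
factors = [([(1, (1, 0)), (1, (0, 1))], 2),   # p1 = x + y,  e1 = 2
           ([(1, (1, 1))], -1)]               # p2 = x*y,    e2 = -1
sign, lg = eval_factored(factors, (Fraction(3), Fraction(1, 3)))
assert sign == 1 and abs(2 ** lg - 100 / 9) < 1e-9   # (10/3)^2 / 1 = 100/9
```

Because rounding only enters in the final products and powers, the relative error of the result stays bounded by a modest multiple of machine precision, independent of cancellation inside each $p_i$.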
Now we consider matrix computations. It follows from the last paragraph that if each minor $r(x)$ of $A(x)$ can be written in factored form of a size polynomial in the size of $A(x)$, then we can CAE all the matrix computations that depend on minors. So the question is which matrix classes $A(x)$ have all their minors (or just the ones needed for a particular matrix factorization) expressible in a factored form no more than polynomially larger than the size of $A(x)$. The obvious way to write $r(x)$, with the Laplace expansion, is clearly exponentially larger than $A(x)$, so it is only specially structured $A(x)$ that will work.
All the matrices that we could CAE in the TM are also possible in the LEM. The most obvious classes of $A(x)$ that we can CAE in the LEM that were impossible in the TM are gotten by replacing all the indeterminates in the TM examples by arbitrary rational expressions of polynomial size. For example, the entries of an M-matrix can be polynomial-sized rational expressions in other quantities. Another class are Green’s matrices (inverses of tridiagonals), which can be thought of as discretized integral operators, with entries written as $A_{ij} = x_i \cdot y_j$.
The obvious question is whether $A$ each of whose entries is an independent number in the LEM falls in this class. We conjecture that it does not, as mentioned before.
Conclusions and open problems {#section 7}
=============================
Our goal has been to identify rational expressions (or matrices) that we can evaluate accurately (or on which we can perform accurate matrix computations), in polynomial time. Accurately means that we want to get a relative error less than 1, and polynomial time means in a time bounded by a polynomial function of the input size.
We have defined three reasonable models of arithmetic, the Traditional Model (TM), the Long Exponent Model (LEM) and the Short Exponent Model (SEM), and tried to identify the classes of problems that can or cannot be computed accurately and efficiently for each model. The TM can be used as a model to do proofs that also hold in the implementable LEM and SEM, but since it ignores the structure of floating point numbers as stored in the computer, it is strictly weaker than either the LEM or SEM. In other words, there are problems (like adding $x+y+z$) that are provably impossible in the TM but straightforward in the other two models.
We also believe that the LEM is strictly weaker than the SEM, in the sense that there appear to be computations (like computing the determinant of a general, or even tridiagonal, matrix) that are possible in polynomial time in the SEM but not in the LEM. In the SEM, essentially all problems that can be written down in polynomial space can be solved in polynomial time. For the LEM, only expressions that can be written in [*factored form*]{} in polynomial space can be computed efficiently in polynomial time.
A number of open problems and conjectures were mentioned in the paper. We mention just one additional one here: What can be said about the nonsymmetric eigenvalue problem? In other words, what matrix properties, perhaps related to minors, guarantee that all eigenvalues of a nonsymmetric matrix can be computed accurately?
[**Acknowledgements** ]{} The author acknowledges Benjamin Diament, Zlatko Drmač, Stan Eisenstat, Ming Gu, William Kahan, Ivan Slapničar, Kresimir Veselic, and especially Plamen Koev for their collaboration over many years in developing this material.
J. Barlow and J. Demmel. Computing accurate eigensystems of scaled diagonally dominant matrices. , 27(3):762–791, June 1990.
J. Barlow, B. Parlett, and K. Veselic, editors. , volume 309 of [*Linear Algebra and its Applications*]{}. Elsevier, 2001.
L. Blum, F. Cucker, M. Shub, and S. Smale. . Springer, 1997.
L. Blum, M. Shub, and S. Smale. On a theory of computation and complexity over the real numbers: [NP]{}-completeness, recursive functions and universal machines. , 21(1):1–46, July 1989.
K. Clarkson. Safe and effective determinant evaluation. In [*33rd Annual Symp. on Foundations of Comp. Sci.*]{}, 387–395, 1992.
F. Cucker and S. Smale. Complexity estimates depending on condition and roundoff error. , 46(1):113–184, Jan 1999.
J. Demmel, Accurate [SVDs]{} of structured matrices. , 21(2):562–580, 1999.
J. Demmel, M. Gu, S. Eisenstat, I. Slapničar, K. Veselić, and Z. Drmač. Computing the singular value decomposition with high relative accuracy. , 299(1–3):21–80, 1999.
J. Demmel and W. Kahan. Accurate singular values of bidiagonal matrices. , 11(5):873–912, September 1990.
J. Demmel and P. Koev. Necessary and sufficient conditions for accurate and efficient singular value decompositions of structured matrices. In V. Olshevsky, editor, [*Special Issue on Structured Matrices in Mathematics, Computer Science and Engineering*]{}, volume 281 of [ *Contemporary Mathematics*]{}, 117–145, AMS, 2001.
J. Demmel and K. Veselić. Jacobi’s method is more accurate than [QR]{}. , 13(4):1204–1246, 1992.
I. S. Dhillon. . PhD thesis, University of California, Berkeley, California, May 1997.
N. J. Higham. . , Philadelphia, PA, 1996.
P. Koev. . PhD thesis, University of California, Berkeley, California, May 2002.
I. G. MacDonald. . Oxford University Press, 2nd edition, 1995.
M. Pour-El and J. Richards. . Springer-Verlag, 1989.
D. Priest. Algorithms for arbitrary precision floating point arithmetic. In P. Kornerup and D. Matula, editors, [*Proceedings of the 10th Symposium on Computer Arithmetic*]{}, 132–145, Grenoble, France, June 26-28, 1991. [IEEE]{} Computer Society Press.
Reliable Computing (a journal). Kluwer. www.cs.utep.edu/interval-comp/rcjournal.html
S. Smale. Some remarks on the foundations of numerical analysis. , 32(2):211–220, June 1990.
[^1]: Mathematics Department and Computer Science Division, University of California, Berkeley, CA 94720, USA. E-mail: demmel@cs.berkeley.edu
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'Along with the growing interest in using the transverse-spatial modes of light in quantum and classical optics applications, developing an accurate and efficient measurement method has gained importance. Here, we present a technique relying on a unitary mode conversion for measuring any full-field transverse-spatial mode. Our method only requires three consecutive phase modulations followed by a single mode fiber and is, in principle, error-free and lossless. We experimentally test the technique using a single spatial light modulator and achieve an average error of 4.2 % for a set of 9 different full-field Laguerre-Gauss and Hermite-Gauss modes with an efficiency of up to 70%. Moreover, as the method can also be used to measure any complex superposition state, we demonstrate its potential in a quantum cryptography protocol and in high-dimensional quantum state tomography.'
author:
- 'Markus Hiekkamäki$^{1}$, Shashi Prabhakar$^{1}$ and Robert Fickler$^{1,*}$'
bibliography:
- 'ref.bib'
date: |
[$^{1}$Photonics Laboratory, Physics Unit, Tampere University, Tampere, FI-33720, Finland\
$^{*}$robert.fickler@tuni.fi]{}
title: '[Near-perfect measuring of full-field transverse-spatial modes of light]{}'
---
Introduction
============
Using light to encode and transmit information is key in today’s information technology-driven age. Amongst the different degrees of freedom, encoding information in the transverse-spatial degree of freedom of light has attracted significant attention over the last years, as it offers another way to increase bandwidth [@Willner]. During these past years, the rates with which information is transmitted using the transverse-spatial structure have been pushed to terabits per second, alongside studies into novel channels to enable long-distance transmissions, such as fibers, free space as well as water [@Willner; @bozinovic_terabit-scale_2013; @krenn2014communication; @bouchard_quantum_2018]. In addition to their use in classical information technologies, spatial modes have also been harnessed as physical realizations of high-dimensional quantum states [@erhard2018twisted]. These so-called *qudits* are beneficial in terms of information capacity per single quantum carrier, in addition to their noise resistance in quantum communication schemes [@cerf2002security; @mirhosseini2015high; @ecker2019entanglement]. Moreover, encoding quantum information into the transverse-spatial degree of freedom has also enabled simple quantum simulation and computation schemes [@cardano2015quantum; @cardano2017detection].
However, the benefits obtained when using spatial modes of light in both classical and quantum communication strongly depend on the ability to generate and measure such modes with high precision and high efficiency. Various techniques have been developed, ranging from holographic generation and projection [@Heckenberg1992; @mair2001entanglement], to direct transverse phase and amplitude modulation [@beijersbergen1994helical; @marrucci2006optical], to mode multi- and de-multiplexing schemes [@leach2002measuring; @berkhout_efficient_2010; @fickler2017custom; @ruffato2017test; @Fontaine2018:LGsorter; @zhou2017sorting; @gu2018gouy], often focusing on the measurement of the modal content of a light field [@schulze2013measurement]. While sorting or de-multiplexing schemes might be beneficial in some cases, they also require multiple detectors or efficient cameras, such that in many situations the process of directly filtering or projecting onto modes is necessary. The key idea behind most such projection techniques relies on the fact that only a Gaussian mode with a plane phase front couples efficiently into a single mode fiber (SMF) [@mair2001entanglement]. Thus, when certain light modes are under investigation, the measurement is performed by flattening the transverse phase structure with a spatial light modulator (SLM), which then acts together with the SMF as a mode filter. While this technique works with low modal cross talk for azimuthally structured light fields, it does not work for radially structured fields of light [@qassim2014limitations] without inducing a large amount of loss [@BouchardPhaseFlat]. Recently, the idea was extended to two phase modulation planes, one in the near and one in the far field, to determine the radial decomposition using a phase-retrieval algorithm [@choudhary2018measurement].
In this article, we demonstrate a technique to project onto any type of transverse-spatial mode with, in principle, perfect efficiency and no errors. Our measurement procedure relies on a unitary transformation of any given spatial mode into the Gaussian mode of a single mode fiber, implemented via the technique of multi-plane mode conversion. We show that three planes of phase modulation are already enough to detect a broad range of modes, i.e. the mode families of Laguerre-Gauss and Hermite-Gauss modes, with errors as low as 2.3 % and efficiencies reaching values above 70 %. We further demonstrate the broad applicability of our technique by using it in a 7-dimensional quantum cryptography protocol as well as a full quantum state tomography, both of which include the azimuthal as well as the radial transverse structure, i.e. the full-field transverse-spatial modes of light.
Multi-plane mode conversion
===========================
Converting spatial modes of light is a task that has not been investigated in much detail over the last years, owing to its complexity and the absence of suitable bulk-optics devices. However, already a few years ago it was shown that multiple phase modulations, interleaved with free-space propagation, can be used to perform some elementary transformations [@berkhout_efficient_2010]. Although the general working principle was demonstrated, the technique was not extended or implemented in subsequent experiments, possibly due to the complexity of the optimization algorithm used to generate the phase modulations. Only recently, a similar approach using multiple planes of phase modulation was demonstrated that improved the performance by using a technique from waveguide design called wave-front matching (WFM) to obtain the required transverse phase modulations [@Hashimoto2005]. This technique compares the complex transverse amplitudes of the input and output light fields and iteratively matches the wave fronts through simple phase modulations. While it has been shown that this technique works for multiple input modes and spatially separated output channels [@Fontaine2018:LGsorter] as well as multiple output modes [@Brandt2019], we restrict the iterative process to match one input mode, i.e. the mode to be measured, to one Gaussian output mode, i.e. the mode of the single mode fiber.
The general iterative optimization process is the following: The mode to be measured, i.e. the input mode $M$, is propagated forward through an optical system containing $n$ phase elements $\Phi_t$, each followed by some free space propagation. At these modulation planes, $t=1,...,n$, the complex amplitude of the mode, $M(x,y,t)$, is recorded. Then, the Gaussian output mode $G$, i.e. the collimated beam from the single mode fiber, is propagated backwards through the system to the last phase modulation plane ($t=n$) to obtain $G(x,y,n)$. The two fields, $M(x,y,t)$ and $G(x,y,t)$, are now matched by imprinting a transverse phase. This phase is calculated from the field overlap between the mode pair $$o_{t}(x,y)=\overline{M(x,y,t)} G(x,y,t) e^{i\Phi_t(x,y)},$$ including a transverse phase modulation $\Phi_t(x,y)$, which is set to zero in the beginning but will be updated during the WFM process. The transverse phase of this overlap, offset by its mean value $\phi$, is then the required change in the phase pattern for plane $t$, i.e. $$\label{eq:phaseupdate}
\Delta\Phi_t(x,y) = - \arg (o_{t}(x,y)e^{-i\phi}).$$ This phase modulation is imprinted on the backwards propagating Gaussian mode $G$, which is subsequently propagated to the $(n-1)$th modulation plane. We note that due to the free-space propagation between two modulation planes, the amplitude is also slowly adjusted to match the mode $M$ to the Gaussian output mode. At plane $(n-1)$, the same procedure is repeated, i.e. both modes are “compared” and matched by another phase modulation, before the field $G$ is further propagated backwards. This procedure is repeated until the first plane is reached. If the number of phase modulation planes is large enough, one such optimization already matches the input mode perfectly to the Gaussian mode, i.e. it acts in the forward direction as a unitary mode transformation. If now a single mode fiber, which only allows coupling of Gaussian modes, is placed into the beam after this transformation, a perfect projection onto the mode $M$ is performed. This exclusive coupling naturally arises from the unitarity of the transformation, which preserves the orthogonality of the modes.
If the number of phase modulations is limited, the whole procedure can be repeated, also in the forward propagation, until a desired fidelity is achieved. In this regard, in equation \[eq:phaseupdate\], $\phi$ is used to add an offset to each phase modulation plane, which shortens the convergence time for the optimization using multiple iterations. While generating the phase modulations we found that, for all the modes we investigated, three phase modulations were enough to achieve an overlap of 99.9 % between the converted mode and a Gaussian. An example of the phase transformation obtained for converting an LG mode with indices $l=-1, p=1$ into a Gaussian mode can be seen in Fig.\[fig1\].
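One step of the update rule can be condensed into a few lines. The following Python sketch (pixel grids flattened to lists, amplitudes and free-space propagation omitted, all names ours) applies a single wave-front-matching step and verifies that the backward field, modulated by the resulting mask, is phase-matched to the input mode up to the global offset $\phi$:

```python
import cmath

def wfm_update(M_t, G_t, Phi_t):
    """One wave-front-matching step at plane t: o_t = conj(M)*G*exp(i*Phi),
    and the mask change is -arg(o_t * exp(-i*phi)), with phi the phase of
    the summed overlap (a sketch of the two equations above)."""
    o = [m.conjugate() * g * cmath.exp(1j * p)
         for m, g, p in zip(M_t, G_t, Phi_t)]
    phi = cmath.phase(sum(o))                           # mean-phase offset
    dPhi = [-cmath.phase(ov * cmath.exp(-1j * phi)) for ov in o]
    return [p + d for p, d in zip(Phi_t, dPhi)]

# toy 3-pixel unit-amplitude fields; masks start at zero
M = [cmath.exp(1j * a) for a in (0.3, -1.0, 2.0)]
G = [cmath.exp(1j * a) for a in (0.0, 0.5, -0.4)]
Phi = wfm_update(M, G, [0.0, 0.0, 0.0])
# the modulated backward field G*exp(i*Phi) matches M up to the offset phi
mismatch = [cmath.phase(g * cmath.exp(1j * p) / m)
            for g, p, m in zip(G, Phi, M)]
assert max(mismatch) - min(mismatch) < 1e-12            # uniform phase offset
```

On a real grid the same update runs over two-dimensional arrays, with an angular-spectrum propagator supplying the fields $M(x,y,t)$ and $G(x,y,t)$ between planes.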
Experimental setup
==================
After having established a measurement method that relies on a unitary transformation between any given input mode and a Gaussian mode plus a coupling into a single mode fiber, we now test how current technical limitations affect the quality of the measurement scheme.
We use a simple experimental setup (see Fig. \[fig1\]), where we first generate any complex transverse light field using SLM1, shaping the incident laser beam (808 nm) by modulating its transverse phase and amplitude through a complex and lossy hologram [@Bolduc:ModeCarving]. We then redirect the modulated beam into our mode measurement system, consisting of the mode converter and a single mode fiber. We realize the mode converter using multi-plane phase modulations displayed on three separate regions of SLM2. To achieve this, we place a mirror 40 cm away from SLM2, parallel to its surface. For experimental convenience, we also adjust the beam waist during the mode conversion procedure to match it to the beam waist we measured for our coupling system, i.e. a microscope objective (20x) and a SMF. To reduce misalignment errors and detrimental effects due to the finite resolution of our SLM in the mode conversion (Holoeye, 8$\mu$m pixel pitch), we use a beam waist of 0.94 mm at the input of our measurement system and phase modulations spanning 630 by 630 pixels. As the phase modulation using an SLM is only 75 % efficient, we additionally display a blazed grating structure in all of the phase modulations and only use the first diffraction order from each phase screen. Note that this additional diffraction is only required due to the limited efficiency of an SLM, which can be overcome by using custom-designed diffractive optical phase elements [@ruffato2017test].
Measurement of full-field transverse-spatial modes
==================================================
We first test the presented method by measuring the nine lowest order modes of two of the most common mode families, i.e. the Laguerre-Gauss (LG) and Hermite-Gauss (HG) modes.
Laguerre-Gauss modes
--------------------
The LG mode family is obtained by solving the paraxial wave equation in cylindrical coordinates [@andrews_angular_2013]. The modes form a complete orthonormal set, where the first index $l$ corresponds to the azimuthal structure and the second index $p$ describes the radial profile. These modes have attracted a lot of attention over the last decades as they nicely match the symmetry of most optical devices and, more importantly, the azimuthal index $l$ corresponds to an orbital angular momentum (OAM) caused by the twisted phase front $e^{i l \varphi}$, where $\varphi$ is the angular position [@allen_orbital_1992; @padgett2017orbital]. They have found a myriad of applications, e.g. in optical tweezers [@padgett2011tweezers], optical communication [@Willner] as well as quantum optics [@erhard2018twisted]. LG modes, and in particular the OAM quantum number ($l$), have been used in various fundamental studies [@fickler2016quantum; @erhard2018experimental] as well as in quantum information applications [@mirhosseini2015high; @zhang_experimentally_2016; @bouchard_quantum_2018], where they serve as a physical realization of high-dimensional Hilbert spaces. In our experiment, we test the nine lowest order modes including both the azimuthal and radial indices.
Irrespective of the mode order (besides the zeroth order, i.e. the Gaussian mode), we find that all modes couple into the SMF after being transformed into a Gaussian mode with an efficiency between 55-72 %. We note that due to the limited modulation efficiency of the SLM and the three consecutive phase modulations, around 25 % of the input light was detected after the fiber. A more important measure for many applications is the modal cross talk between all modes, which can be characterized by the visibility $V=\sum_{i}C_{ii}/\sum_{ij}C_{ij}$, where $C_{ij}$ corresponds to the cross-talk matrix (see Fig. \[fig2\] a). We achieve a visibility of $V_{LG}=95.5 \pm 0.9$ % for the nine lowest order LG modes. We attribute the reduced efficiency (simulations predict 99.9 %) and small cross talk to the finite resolution and some minor misalignments. In addition, we generate the modes through amplitude and phase modulation [@Bolduc:ModeCarving], a very good but not perfect technique. As our multi-plane mode conversion measurement is calculated for perfect modes, imperfections in the generation might also induce errors and cause the slightly reduced coupling. Nevertheless, both results, the high coupling efficiency and visibility, show that the transformation is very close to a unitary operation and, thus, a near-perfect projection.
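The visibility is straightforward to compute from a measured cross-talk matrix; the sketch below uses made-up counts, not our measured data:

```python
def visibility(C):
    """V = sum_i C_ii / sum_ij C_ij for a square cross-talk matrix C."""
    return sum(C[i][i] for i in range(len(C))) / sum(sum(row) for row in C)

# illustrative 3x3 photon counts (hypothetical numbers)
C = [[97, 2, 1],
     [3, 95, 2],
     [1, 2, 96]]
assert abs(visibility(C) - 288 / 299) < 1e-12   # about 96.3 %
```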
Hermite-Gauss modes
-------------------
Another popular mode family is obtained by solving the paraxial wave equation in Cartesian coordinates, i.e. the HG modes. Similarly to the LG modes, HG modes are characterized by two mode indices commonly labeled as $n$ and $m$, which correspond to the number of vertical and horizontal $\pi$ phase jumps, respectively. As our measurement technique works independently of the input mode, we find very similar results when generating and measuring the nine lowest order HG modes. Here, the visibility is found to be $V_{HG}=96.2 \pm 1.0$ % and the efficiency is again around 50-72 %, independent of the mode order (see Fig. \[fig2\] c). This first set of measurements shows that, compared to other techniques, projecting onto specific modes through the presented technique offers a highly efficient and mode-independent way to measure the full-field modal content of a light field with low errors.
Applications in high-dimensional quantum information
====================================================
In the second set of experiments, we take advantage of the introduced projection method and study two important applications in quantum information, namely a high-dimensional quantum cryptography protocol and quantum state tomography. Although we could have performed both tasks in the LG or HG mode basis (or any other orthogonal mode set), we perform them using the set of 7 LG modes
\ket{\psi_n}\in\{\ket{-1,1},\ket{-1,0},\ket{0,0},\ket{0,1},\ket{0,2},\ket{1,0},\ket{1,1}\} \equiv \{ \ket{1}, \ket{2}, ..., \ket{7}\},$$ where the positions in the ket-vectors label the $l$ and $p$ indices of the LG modes, respectively. We chose the LG mode family as they have been the key player in high-dimensional quantum optics using spatial modes as $d$-dimensional encoding.
Quantum Cryptography {#sec:QCrypt}
--------------------
To test the applicability of our method, we measure the performance of our mode filter when applied to a quantum cryptography scheme, in particular the high-dimensional version of the well-known BB84 protocol [@bouchard_experimental_2018]. This protocol requires measurements in two mutually unbiased bases, from which the two parties randomly select states to establish a secure key. In our test, one basis is realized by the set of seven LG modes defined in equation \[eq:7Modes\]. The second set is a mutually unbiased basis (MUB) obtained through the linear superpositions $\ket{\varphi_n} = \frac{1}{\sqrt{d}} \sum_{m=0}^{d-1}\omega^{nm}_d\ket{\psi_m}$, with $\omega_d=\exp{(i2\pi/d)}$; it is often also called the Fourier basis, as it is obtained through the quantum Fourier transformation.
Using qudits instead of bi-dimensional systems in quantum cryptography allows one to transmit more information per carrier, i.e. $\log_2(d)$-bit per photon for the error-free case. Additionally, such protocols tolerate larger error rates, such that they are especially useful in noisy conditions. The secret key rate $R$ is given by $R = \log_2(d) - 2h^{(d)}(e_b)$, where $e_b$ is the bit error rate and $h^{(d)}(x):=-x\log_2(x/(d-1))-(1-x)\log_2(1-x)$ is the $d$-dimensional Shannon entropy.
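As a sketch (the function names are ours), the key rate is easily evaluated; the second assertion reproduces the $R \approx 1.98$ bits per sifted photon obtained for the measured error rate reported below:

```python
import math

def shannon_d(x, d):
    """d-dimensional Shannon entropy h^(d)(x) from the text."""
    if x == 0:
        return 0.0
    return -x * math.log2(x / (d - 1)) - (1 - x) * math.log2(1 - x)

def secret_key_rate(e_b, d):
    """R = log2(d) - 2*h^(d)(e_b), secret bits per sifted photon."""
    return math.log2(d) - 2 * shannon_d(e_b, d)

assert abs(secret_key_rate(0.0, 7) - math.log2(7)) < 1e-12   # error-free case
assert abs(secret_key_rate(0.0498, 7) - 1.98) < 0.02         # measured e_b
```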
The obtained cross talk matrix can be found in Fig. \[fig3\]. The average error rate was found to be 4.98$\pm$ 2.81 %, which corresponds to $R=$1.98 bits per sifted photon. We note that by using a single outcome measurement like ours instead of a multi-outcome measurement scheme, the overall efficiency, i.e. the bit rate per sent photon, is reduced by a factor of $d$. Nevertheless, our measurement technique allows for a bit rate that is larger than 1 bit per photon using both, the azimuthal $l$ and radial $p$ degree of freedom of the photons.
Quantum State Tomography
------------------------
Another important task in quantum information schemes is the precise measurement of a quantum state. Through the so-called quantum state tomography, it is possible to reconstruct the full density matrix $\hat{\rho}=\ket{\Psi}\bra{\Psi}$ and, thus, fully characterize a generated state. One way to perform such state tomography is to measure the state in all MUBs. In prime and power of prime dimensions, the number of MUBs is known to be $d$+1 [@durt2010mutually]. We again use the same 7-dimensional LG mode state space described in section \[sec:QCrypt\] and generate a visually appealing state of the form $\ket{\Psi} = \frac{1}{N} \sum_{n=0}^{6} \sin(\frac{n\pi}{6}) \ket{n+1}$, where $N$ is a normalization constant and $\ket{n}$, are the LG modes as defined in equation \[eq:7Modes\]. In addition to the computational basis states given in equation \[eq:7Modes\], the $k$ MUBs to perform the high-dimensional state tomography can be constructed through the high-dimensional Hadamard transformations $$\begin{aligned}
\ket{\phi_n^{(k)}} = \frac{1}{\sqrt{d}} \sum_{m=0}^{d-1}\omega^{(nm+(k-1)m^2)}_d\ket{\psi_m},
\label{eq:Hadamard}\end{aligned}$$ again with $\omega_d=\exp{(i2\pi/d)}$. For $k$=1, equation (\[eq:Hadamard\]) leads to the above-described quantum Fourier transform, i.e. the Fourier basis. After performing all $d(d+1)$ measurements, to avoid systematic errors [@schwemmer2015systematic], we reconstruct the density matrix through direct inversion given by $\hat{\rho}=\sum_{k,n}P_n^{(k)}\Pi_n^{(k)}-\mathds{1}$, where $P_n^{(k)}$ corresponds to the probability of measuring the $n$-th state of the $k$-th MUB, i.e. $\ket{\phi_n^{(k)}}$, and $\Pi_n^{(k)}$ corresponds to the projector onto that state, i.e. $\ket{\phi_n^{(k)}}\bra{\phi_n^{(k)}}$. The results of this reconstruction can be seen in Fig. \[fig4\].
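The MUB construction of equation (\[eq:Hadamard\]) is easy to verify numerically for $d=7$; the sketch below (our own naming) builds the basis states and lets one check orthonormality within a basis and unbiasedness between bases:

```python
import numpy as np

def mub_vector(n, k, d=7):
    """State |phi_n^(k)> of eq. (eq:Hadamard), written in the computational basis."""
    m = np.arange(d)
    omega = np.exp(2j * np.pi / d)
    return omega ** (n * m + (k - 1) * m ** 2) / np.sqrt(d)

# k = 1 reproduces the Fourier basis; different k give mutually unbiased bases
fourier0 = mub_vector(0, 1)  # uniform superposition
overlap = abs(np.vdot(mub_vector(0, 1), mub_vector(0, 2))) ** 2  # = 1/d for prime d
```

For prime $d$, all pairwise overlaps between states of different bases equal $1/d$, as required for mutually unbiased bases.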
As a good measure of how close the measured density matrix $\hat{\rho}_{exp}$ is to the theoretical density matrix $\hat{\rho}_{th}$, we use the fidelity, defined as $$F=\left(\operatorname{Tr}\sqrt{\sqrt{\hat{\rho}_{exp}}\hat{\rho}_{th}\sqrt{\hat{\rho}_{exp}}}\right)^2.$$ From our measurement we achieve a fidelity of 96.4 $\pm$ 0.5 % for our reconstructed state, which again shows the quality of the projection method introduced here.
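The fidelity can be computed directly from the two density matrices; a minimal NumPy sketch, using an eigendecomposition for the matrix square root (valid since density matrices are Hermitian and positive semidefinite):

```python
import numpy as np

def _sqrtm_psd(m):
    """Matrix square root of a Hermitian positive semidefinite matrix."""
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)  # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.conj().T

def fidelity(rho_exp, rho_th):
    """F = (Tr sqrt( sqrt(rho_exp) rho_th sqrt(rho_exp) ))^2."""
    s = _sqrtm_psd(rho_exp)
    return float(np.real(np.trace(_sqrtm_psd(s @ rho_th @ s)))) ** 2
```

For pure states this reduces to the squared overlap $|\langle\Psi_{exp}|\Psi_{th}\rangle|^2$.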
Conclusion and Outlook
======================
In conclusion, we have presented a measurement technique that relies on mode conversion requiring only three phase modulating planes to perform a near-perfect unitary transformation of any given transverse-spatial mode into a Gaussian mode and a subsequent coupling into a single mode fiber. We achieved very high efficiencies irrespective of the mode order as well as very low cross talk between other modes. We further have demonstrated the quality of the projections for two very prominent mode families, i.e. the Laguerre-Gauss and Hermite-Gauss modes, and applied the techniques in a high-dimensional quantum cryptography protocol and also performed a quantum state tomography.
As our technique is highly efficient and can measure the full field structure, it can be directly applied in quantum optics experiments, e.g. to enable the measurements of correlations in the azimuthal and radial degree of freedom of a high-dimensional entangled bi-photon state generated in down-conversion experiments [@mair2001entanglement; @ecker2019entanglement]. Moreover, it can be used to decompose the transverse light field into any specific mode basis of choice and as such might be applied in a broad range of experiments investigating the spatial domain. Finally, as the transformation is unitary it can also be used in reverse to perfectly generate complex spatial modes or a desired transverse structure of light in a highly efficient, near-perfect manner.
Funding Information {#funding-information .unnumbered}
===================
MH, SP and RF acknowledge the support of the Academy of Finland through the Competitive Funding to Strengthen University Research Profiles (decision 301820) and the Photonics Research and Innovation Flagship (PREIN - decision 320165).
| {
"pile_set_name": "ArXiv"
} |
---
abstract: |
  Multi-dimensional data classification is an important and challenging problem in many astro-particle experiments. Neural networks have proved to be versatile and robust in multi-dimensional data classification. In this article we study the classification of gammas from hadrons for the MAGIC Experiment. Two neural networks have been used for the classification task: one is the Multi-Layer Perceptron, based on supervised learning, and the other is the Self-Organising Map (SOM), based on an unsupervised learning technique. The results are shown, and possible ways of combining these networks are proposed to yield better and faster classification results.\
\
[*Keywords:*]{} Neural Networks, Multidimensional data classification, Self-Organising Maps, Multi-layer Perceptrons.\
author:
- |
F. Barbarino, P. Boinee, A. De Angelis\
\
title: Multidimensional data classification with artificial neural networks
---
Introduction
============
Many high-energy gamma-ray experiments have to deal with the problem of separating gammas from hadrons [@p1]. These experiments usually generate large data sets with many attributes. This multi-dimensional data classification problem poses the daunting challenge of extracting a small number of interesting events (gammas) from an overwhelming sea of background (hadrons). Many techniques are being actively researched to address this problem, ranging from classical statistical techniques to more sophisticated ones such as neural networks, classification trees and kernel functions.
Neural networks provide an automated technique for the classification of a data set into a given number of classes [@kk], and are actively researched in both the artificial intelligence and machine learning communities. Several neural network models have been developed to address the classification problem. Usually, one distinguishes between supervised and unsupervised classifiers: a supervised classifier is used when an analyst has some examples for which the correct classification is known. This is possible, for example, in most problems related to particle physics at accelerators, where there is generally good knowledge of the detectors and of the underlying physics, and good simulations are available. In an unsupervised technique, by contrast, the events are partitioned into classes of similar elements without using additional information. This is especially the case for fields operating in a discovery regime, such as astroparticle physics [@ale].
From a mathematical perspective, a neural network is simply a mapping $R^n \rightarrow R^m$, where $n$ is the dimension of the input data and $m$ is the output dimension of the network. The network is typically divided into layers; each layer has a set of neurons, also called nodes or information units, connected together by links. Artificial neural networks are able to classify data by learning to discriminate patterns in the features (or parameters) associated with the data. The network learns from the data set as each input data vector is presented to it; the learning, or information gain, is stored in the links associated with the neurons.
The output generated by the network depends on both the problem and network type. For the gamma/hadron separation problem the supervised network maps each input vector onto the \[0,1\] interval, whereas in unsupervised networks the nodes are adapted to the input vector in such a way that the output of the network represents the natural groups that exist in the data set. A visualization technique is used to view the groups discovered by the network.
Section 2 describes the data sets used for the classification. Section 3 deals with the multilayer perceptron network and its classification results. Section 4 deals with Self-Organizing maps and its variant along with their classification results. Conclusions and future perspectives have been discussed in the section 5.
Data set description
====================
The data sets are generated by a Monte Carlo program, CORSIKA [@cor]. They contain 12332 gammas, 7356 ’on’ events (a mixture of gammas and hadrons), and 6688 hadron events. The events are stored in different files, which contain the event parameters in ASCII format, each line of 12 numbers being one event [@boc], with the parameters defined below:
1. fLength: major axis of ellipse \[mm\]
2. fWidth: minor axis of ellipse \[mm\]
3. fSize: 10-log of sum of content of all pixels
4. fConc: ratio of sum of two highest pixels over fSize \[ratio\]
5. fConc1: ratio of highest pixel over fSize \[ratio\]
6. fAsym: distance from highest pixel to centre, projected onto major axis \[mm\]
7. fM3Long: 3rd root of third moment along major axis \[mm\]
8. fM3Trans: 3rd root of third moment along minor axis \[mm\]
9. fAlpha: angle of major axis with vector to origin \[deg\]
10. fDist: distance from origin to centre of ellipse \[mm\]
11. fEner: 10-log of MC energy \[in GeV\]
12. fTheta: MC zenith angle \[rad\]
The first 10 image parameters are derived from pixel analysis, and are used for classification.
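For illustration, parsing these files reduces to reading 12 whitespace-separated numbers per line; a minimal sketch (the function name and file handling are our own):

```python
import numpy as np

def load_events(source):
    """Read events (12 numbers per line, one event per line) and split off
    the 10 image parameters used for classification from the two MC
    parameters (fEner, fTheta)."""
    data = np.loadtxt(source)
    return data[:, :10], data[:, 10:]
```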
Multi-Layer Perceptron
======================
For this approach we used the ROOT Analysis Package (v. 4.00/02), and in particular the MultiLayer Perceptron class [@kn:mlp], which implements a generic layered network. Since this is a supervised network, we took half of the Gamma and OFF data to train the network and the remaining data to test it. The code of the ROOT package is very flexible and simple to use. It allowed us to create a network with a 10-node input layer, a hidden layer with the same number of nodes, and an output layer with a single neuron that should return “0” if the data represent hadrons or “1” if they are gammas. Weights are initialized randomly at the beginning of the training session and then adjusted in the following runs in order to minimize errors (back-propagation). The error at cycle $i$ is defined as $err_i = \frac{1}{2} \; o_i^2$, where $o_i$ is the error of the output node. Data are transferred linearly to the input and output nodes, while the hidden layers use a sigmoid (usually $\sigma(x) = 1/(1 + \exp(-x))$).
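The 10-10-1 architecture just described can be sketched as a simple forward pass (a minimal NumPy illustration of the transfer functions, not the ROOT implementation; the weights would be fitted by back-propagation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, W1, b1, W2, b2):
    """10-10-1 network: x holds the 10 image parameters; after training the
    single output should approach 0 for hadrons and 1 for gammas."""
    h = sigmoid(W1 @ x + b1)  # sigmoid hidden layer
    return W2 @ h + b2        # linear output node
```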
We have tested the same network using different learning methods proposed by the code authors, for example the so-called “Stochastic minimization”, based on the Robbins-Monro stochastic approximation, but the default “Broyden, Fletcher, Goldfarb, Shanno” method has proved to be the quickest and the one with the best error approximation.
Figures 1.a and 1.b represent a possible output when using the ROOT package on these data. The first depicts the error function for each run of the network, comparing the training and the test data; note that the greater the number of runs, the better the network behaves. The second shows the distribution of the output node, that is, how many times the network gives a value near “0” or near “1”.
Self-Organising Maps (SOM)
==========================
The SOM is based on an unsupervised learning technique and is used for the classification of data sets with no labels. It consists of a map of information units, also called neurons, arranged in a two-dimensional grid [@p2]. Every neuron $i$ of the map is associated with an $n$-dimensional reference vector $m_i = { [ { m_{i1},\ldots,m_{in} } ] }^T$, where $n$ denotes the dimension of the input vectors. The neurons of the map are connected to adjacent neurons by a neighbourhood relation, which dictates the topology, or structure, of the map. The most common topologies in use are rectangular and hexagonal. The learning process of the SOM is as follows:
1. [**Initialisation phase:**]{} Initialise all the neurons in the map with the input vectors randomly.
2. [**Data normalization:**]{} For a better identification of the groups, the data have to be normalized. We employed the ‘range’ method, where each component of the data vector is normalized to lie in the interval \[0,1\].
3. [**SOM Training:**]{} Select an input vector $x$ from the data set randomly. The best matching unit (BMU) for this input vector is found in the map by the following metric $$\left\| x- m_c \right\| =
\min_i \left\{ \left\| x- m_i \right\| \right\}$$ where $m_i$ is the reference vector associated with the unit $i$.
4. [**Updating Step:**]{} The reference vectors of BMU and its neighbourhood are updated according to the following rule $$m_i(t+1)=\left\{
\begin{alignedat}{2}
&m_i(t) + \alpha(t)\cdot h_{ci}(t)\cdot[x(t)-m_i(t)], & & \qquad i \in N_c(t)
\\
&m_i(t), & & \qquad i \notin N_c(t)
\end{alignedat}
\right.$$ where\
$h_{ci}(t) $ is the kernel neighbourhood around the winner unit $c$.\
\
$t$ is the time constant.\
\
$x(t)$ is an input vector randomly drawn from the input data set at time $t$.\
\
$\alpha(t)$ is the learning rate at time $t$.\
\
$N_c(t)$ is the neighbourhood set for the winner unit $c$.\
    The above equation makes the BMU and its neighbourhood move closer to the input vector. This adaptation to the input vectors forms the basis for group formation in the map.
5. [**Data groups visualisation:**]{} Steps 3 and 4 are repeated for a selected number of trials, or epochs. After the trials are completed, the map unfolds itself to the distribution of the data set, revealing the natural groups that exist in the data set. The output of the SOM is the set of reference vectors associated with the map units; this set is termed the codebook. To view the groups and the outliers discovered by the SOM, we have to visualize the codebook. The U-matrix is the technique typically used for this purpose.
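The training loop above (steps 1, 3 and 4; the data are assumed already normalized as in step 2) can be condensed into a short NumPy sketch. The linear decay schedules for $\alpha(t)$ and $\sigma_t$ are our own illustrative choice:

```python
import numpy as np

def train_som(data, rows, cols, epochs, alpha0=0.5, sigma0=None, seed=0):
    """Minimal SOM training loop with a Gaussian neighbourhood kernel."""
    rng = np.random.default_rng(seed)
    # step 1: initialise the codebook with randomly chosen input vectors
    codebook = data[rng.integers(0, len(data), size=rows * cols)].copy()
    grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    if sigma0 is None:
        sigma0 = max(rows, cols) / 2.0
    for t in range(epochs):
        frac = t / epochs
        alpha = alpha0 * (1.0 - frac)          # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 1e-3   # decaying neighbourhood radius
        x = data[rng.integers(len(data))]
        # step 3: best matching unit (BMU)
        bmu = np.argmin(np.linalg.norm(codebook - x, axis=1))
        # step 4: update the BMU and its Gaussian neighbourhood
        d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))
        codebook += alpha * h[:, None] * (x - codebook)
    return codebook
```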
The ON events data set was used directly with the SOM; no prior training is required. The unsupervised behavior of the SOM discovered the groups in the data set in an automatic way. We worked with two kernel neighbourhoods of the SOM, which are described below.
1. [**Gaussian SOM** ]{}\
    The kernel neighbourhood is defined by the gaussian function\
    $h_{ci}(t) = e^{-{d^2_{ci}}/{2\sigma^2_t}},$ where $d_{ci}$ is the distance between the winner unit $c$ and unit $i$, and $\sigma_t$ is the neighbourhood radius. The results of the classification are shown in figure 2.a. The map is a $25\times25$ network and is trained with 300 epochs. Further increases in map size and number of epochs did not show improved results.
2. [**Cutgaussian SOM** ]{}\
    The kernel neighbourhood is defined by the cut-gaussian function\
    $h_{ci}(t) = e^{-{d^2_{ci}}/{2\sigma^2_t}}\cdot1\left(\sigma_t-d_{ci}\right),$ where $1(\cdot)$ is the step function. The results of the classification are shown in figure 2.b. The map is a $40\times30$ network and is trained with 300 epochs. Further increases in map size and number of epochs did not show improved results. The cut-gaussian kernel showed better performance than the gaussian kernel.
We developed a C++ implementation of the SOM with both kernel neighbourhoods. The trained SOM results are visualized using the U-matrix technique implemented in SOM TOOLBOX 2.0 in the MATLAB environment [@kn:vesa].
Conclusions and Future Work
===========================
In this article we classified the Monte Carlo gamma-ray data of the MAGIC experiment using an MLP and a SOM. Both networks showed good classification results.
The advantage of the SOM algorithm is that it needs no training vectors to find the groups in the data set, [*i.e.*]{}, it clusters the data set in an automatic way; the disadvantage of this technique is that it cannot label the data groups it finds. On the other hand, the MLP, based on a supervised technique, identifies the group labels, but the training session can be longer.
A proposal for future work is to combine the MLP and SOM techniques, as the combination of both could yield better results: first train the SOM on the data set, which yields a clustered data set, and then use this data set to train the MLP to label the groups. This would significantly decrease the training period of the MLP and thus make the network perform faster.
[99]{}

P. Boinee, A. De Angelis, E. Milotti, [*Automatic Classification using Self-Organising Neural Networks in Astrophysical Experiments*]{}, in S. Ciprini, A. De Angelis, P. Lubrano and O. Mansutti (eds.): Proc. of Science with the New Generation of High Energy Gamma-ray Experiments (Perugia, Italy, May 2003), Forum, Udine 2003, p. 177, arXiv:cs.NE/0307031.

S. Lawrence et al., [*Neural Network Classification and Prior Class Probabilities*]{}, Lecture Notes in Computer Science State-of-the-Art Surveys, edited by G. Orr, K.-R. Müller and R. Caruana, Springer Verlag, pp. 299-314, 1998.

A. De Angelis et al., [*Self-Organising Networks for Classification: developing Applications to Science Analysis for Astroparticle Physics*]{}, arXiv:cs.NE/0402014v1, 9 Feb 2004.

D. Heck et al., CORSIKA, [*A Monte Carlo code to simulate extensive air showers*]{}, Forschungszentrum Karlsruhe, Report FZKA 6019, 1998.

http://rkb.home.cern.ch/rkb/format.txt.

C. Delaere, [*Multilayer Perceptron Root Class*]{}, http://root.cern.ch.

T. Kohonen, [*Self-Organizing Maps*]{}, 2nd Ed., Springer, 1997.

J. Vesanto, J. Himberg, [*Neural Network Tool for Data Mining: SOM Toolbox*]{}, a paper on the computational load and applicability of SOM Toolbox 2.0, in TOOLMET 2000.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'The internal organization of complex networks often has striking consequences on either their response to external perturbations or on their dynamical properties. In addition to small-world and scale-free properties, clustering is the most common topological characteristic observed in many real networked systems. In this paper, we report an extensive numerical study on the effects of clustering on the structural properties of complex networks. Strong clustering in heterogeneous networks induces the emergence of a core-periphery organization that has a critical effect on the percolation properties of the networks. We observe a novel double phase transition with an intermediate phase in which only the core of the network is percolated and a final phase in which the periphery percolates regardless of the core. This result implies breaking of the same symmetry at two different values of the control parameter, in stark contrast to the modern theory of continuous phase transitions. Inspired by this core-periphery organization, we introduce a simple model that allows us to analytically prove that such an anomalous phase transition is in fact possible.'
author:
- 'Pol Colomer-de-Simón'
- Marián Boguñá
title: Double percolation phase transition in clustered complex networks
---
Introduction
============
The essence of complex systems lies in the interactions among their constituents. In many cases, these interactions are organized into complex topological architectures that have a determinant role in the behavior and functionality of this class of systems. In regular lattices, dimensionality appears to be one of the most distinctive features; however, randomness and heterogeneity in the interactions of complex networked systems induce phenomena that are very different from, or that are not even observed in, regular lattices. Examples range from the absence of epidemic thresholds that separate healthy and endemic phases [@Pastor-Satorras:2001ly; @Lloyd:2001mz; @Boguna:2003zr; @Berger:2005fk; @Chatterjee:2009uq; @Boguna:2013kx] to the anomalous behavior of Ising-like dynamics [@Bianconi:2002fj; @Goltsev:2003yq; @Hinczewski:2006vn; @Dorogovtsev:2008kx] and percolation properties [@Cohen2000; @Callaway2000; @Cohen2002; @Newman2002; @Newman2003; @Vazquez2003].
Percolation theory has played a prominent role in understanding the anomalous behaviors observed in complex networks and, in most cases, is the common underlying principle behind these behaviors. Interestingly, the interplay between a complex network topology and different percolation mechanisms leads to phenomena that have not previously been observed in statistical physics, including a lack of percolation thresholds in scale-free networks with a degree distribution of the form $P(k)\sim k^{-\gamma}$ for $\gamma < 3$ [@Pastor-Satorras:2001ly; @Lloyd:2001mz; @Boguna:2003zr; @Berger:2005fk; @Chatterjee:2009uq; @Boguna:2013kx], anomalous infinite-order percolation transitions in non-equilibrium growing random networks [@Dorogovtsev:2001md; @Callaway:2001fk], or cascading processes in interdependent networks [@Buldyrev2010a; @Son:2012lq; @Baxter:2012fp]. However, these phenomena have already been observed on random graphs with given degree distributions. Random graphs of this type are locally tree-like, that is, the number of triangles, and thus the clustering coefficient, can be neglected in the thermodynamic limit. However, the strong presence of triangles is, along with the small-world effect and heterogeneity of the degree distribution, a common and distinctive topological property of many real complex networked systems. While clustering is not a necessary condition for the emergence of any of these phenomena, the effects of clustering on the percolation properties of a network are unknown.
Percolation in clustered networks has been widely studied. However, previous reports differ concerning the position of the percolation threshold. Some studies report that clustered networks have a larger percolation threshold than do unclustered networks due to redundant edges in triangles that cannot be used to connect to the giant component [@Kiss2008; @Newman2009; @Miller2009; @Gleeson2010a]. Other studies report that strongly clustered networks are more resilient due to the existence of a core that is extremely difficult to break [@Newman2003b; @Gleeson2009; @Serrano2006a]. In fact, as we shall demonstrate, both arguments are correct.
In this paper, we show that strong clustering induces a core-periphery organization in the network [@csermely:2013] that gives rise to a new phenomenon, namely, a “double percolation” transition, in which the core and periphery percolate at different points. This behavior is in stark contrast to the modern theory of continuous phase transitions, which forbids the possibility of breaking the same symmetry at two different values of the control parameter. Multiple percolation transitions have recently been reported in [@Nagler:2012fr; @Chen:2013rt; @Chen:2013ys; @Bianconi:2014]. However, in each of these cases, anomalous percolation arises as a consequence of either complex percolation protocols [@Nagler:2012fr; @Chen:2013rt; @Chen:2013ys] or the interdependence between different networks [@Bianconi:2014], and it is never associated with the same symmetry breaking. Instead, our results are obtained with the simplest percolation mechanism, bond percolation with bond occupation probability $p$, which indicates that this double percolation transition is exclusively induced by a particular organization of the network topology.
![Bond percolation simulations for networks of $N=5 \times 10^4$ nodes with a power law degree distribution, $\gamma=3.1$, and different levels of clustering. [**a:**]{} Relative size of the largest connected component $g$ as a function of the bond occupation probability $p$. [**b:**]{} Degree-dependent clustering coefficient $\bar{c}(k)$. [**c:**]{} Susceptibility $\chi$ as a function of the bond occupation probability $p$. [**d:**]{} Percolation threshold ($p_{max}$) as a function of the level of clustering.[]{data-label="fig:clustering"}](FIG1.pdf){width="\linewidth"}
Random graphs with a given clustering spectrum
==============================================
We can generate scale-free random graphs with a given clustering spectrum $\bar{c}(k)$ and fixed degree-degree correlations, as shown in the Appendix \[appendix\_A\]. A preliminary analysis shows that the percolation properties depend on two network features, the heterogeneity of its degree distribution and the shape of the clustering spectrum $\bar{c}(k)$ [@Serrano2006a]. For weakly heterogeneous networks ($\gamma \gg 3$), we observe that increasing clustering in the network while keeping the degree-degree correlations fixed increases the percolation threshold and decreases the size of the giant component (see the Appendix \[appendix\_B\]). However, the most interesting case corresponds to heterogeneous networks, typically with $\gamma<3.5$. In this work, we focus on the case of $\gamma=3.1$ and a constant clustering spectrum [^1]. This value of $\gamma$ generates scale-free heterogeneous networks but with a finite second moment, which allows us to clearly isolate the new phenomenon. The results for $\gamma \le 3$ are qualitatively similar but more involved and will be presented in a forthcoming publication.
![Bond percolation simulations for networks with a power law degree distribution with $\gamma=3.1$, target clustering spectrum $\bar c(k)=0.25$, and different network sizes. [**a:**]{} Relative size of the largest connected component as a function of the bond occupation probability $p$. [**c:**]{} Susceptibility $\chi$ as a function of the bond occupation probability $p$. [**b**]{} and [**d:**]{} Position $p_{max}$ and height $\chi_{max}$ of the two peaks of $\chi$ as functions of the network size $N$. The straight lines are power-law fits, and [**b**]{} and [**d**]{} show the measured values of the critical exponents.[]{data-label="fig:FSS"}](FIG2.pdf){width="\linewidth"}
![image](./FIG3.pdf){width="\linewidth"}
Figure \[fig:clustering\] compares the percolation properties of networks with identical degree sequence and degree-degree correlations but with different levels of clustering. For each network, we perform bond percolation $10^4$ times using the Newman–Ziff algorithm [@Newman2000] and measure the average relative size of the largest (giant) connected component, $g \equiv \langle G \rangle/N$, and its fluctuations, [*i.e.*]{}, the susceptibility $\chi=\left[\langle G^2 \rangle - \langle G \rangle^2\right]/\langle G \rangle$. These results are then averaged over 100 network realizations. In finite systems, a peak in the susceptibility $\chi$ indicates the presence of a continuous phase transition, and its position provides an estimate of the percolation threshold. Plots [**c**]{} and [**d**]{} in Fig. \[fig:clustering\] show new and surprising results. For low levels of clustering, there is a unique and well-defined peak in $\chi$, but increasing clustering gives rise to the emergence of a secondary peak at higher values of $p$. This result suggests the presence of a double phase transition, in which two different parts of the network percolate at different times.
To confirm this possibility, we perform finite size scaling on networks with a target clustering spectrum of $c(k)=0.25$ and different system sizes, ranging from $N=5\times 10^3$ to $N=5\times 10^5$. Plot [**d**]{} in Fig. \[fig:FSS\] shows that the susceptibility exhibits two peaks whose maxima $\chi_{max}$ diverge as power laws, $\chi_{max}(N)\sim N^{\gamma'/\nu}$. The position of the first peak also approaches zero as a power law $p_{max}(N)\sim N^{1/\nu}$, as shown in Fig. \[fig:FSS\] [**b**]{}, which suggests that even if the network has bounded fluctuations, $\langle k^2 \rangle < \infty$, it is always percolated in the thermodynamic limit. In contrast, the position of the second peak is nearly constant in the range of sizes we have considered. The divergence of the two peaks in the susceptibility strongly suggests that we are indeed observing two different continuous phase transitions. The first transition is between non-percolated/percolated phases, and the second transition is between two percolated phases with very different internal organizations.
The $m$-core decomposition
--------------------------
To understand the effect of clustering on the global structure of networks, we use the $m$-core decomposition developed in [@Colomer-de-Simon2013]. This process is based on the concept of edge multiplicity $m$, which is defined as the number of triangles passing through an edge. We further define the $m$-core as the maximal subgraph whose edges all have at least multiplicity $m$ within it. By increasing $m$ from $0$ to $m_{max}$, we define a set of nested subgraphs that we call the $m$-core decomposition of the network. This decomposition can be represented as a branching process that encodes the fragmentation of $m$-cores into disconnected components as $m$ is increased. The tree-like structure of this process provides information regarding the global organization of clustering in networks. To visualize this process, we use the LaNet-vi 3.0 tool developed in [@Colomer-de-Simon2013] (see the caption of Fig. \[fig:mcore\]). Figure \[fig:mcore\] shows the $m$-core decomposition of three networks with $N=5\times 10^4$ nodes, the same degree sequence (with $\gamma=3.1$) and degree-degree correlations, and different levels of clustering. For low levels of clustering, the $m1$-core is very small, and thus, the $m$-core structure is almost nonexistent. As clustering increases, $m$-cores begin to develop new layers and $m_{max}$ increases. For instance, for $\bar{c}(k)=0.25$ (Fig. \[fig:mcore\] [**c**]{}), after the recursive removal of all links that do not participate in triangles, we obtain the $m1$-core, which is composed of a large connected cluster with a well-developed internal structure – a core in the center of the figure – and a large number of small disconnected components – a periphery. This result indicates that even if the network is connected, by iteratively removing all edges with multiplicities of zero, we are left with a small but well-connected subgraph and the remainder of the network is fragmented.
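The $m$-core pruning itself is straightforward to sketch: an edge's multiplicity is the number of common neighbours of its endpoints, and edges whose multiplicity falls below $m$ are removed iteratively until none remain (a minimal illustration on an adjacency-set representation; [@Colomer-de-Simon2013] describes the full decomposition):

```python
def m_core(adj, m):
    """Maximal subgraph whose edges all have multiplicity >= m within it.

    adj: dict node -> set of neighbours. The multiplicity of edge (u, v)
    is the number of triangles through it, i.e. |common neighbours of u, v|.
    """
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            for v in list(adj[u]):
                if u < v and len(adj[u] & adj[v]) < m:
                    adj[u].discard(v)  # prune the edge in both directions
                    adj[v].discard(u)
                    changed = True
    return {u: vs for u, vs in adj.items() if vs}  # drop isolated nodes
```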
The aforementioned result suggests that the two peaks in the susceptibility could be related to this core-periphery organization. Both parts would percolate at different times, first the core and then the periphery, and hence have their own percolation thresholds. To test this hypothesis, we perform bond percolation on the network with a bond occupation probability of $p$ between the two peaks. The giant component at this value of $p$ defines a subgraph that we identify with the core and that roughly corresponds to the core observed in Fig. \[fig:mcore\] [**c**]{} (see Appendix \[appendix\_C\]). We then extract the latter core subgraph from the original network, and the remaining network is thus identified with the periphery. Once the core and periphery are isolated, we perform bond percolation on both components independently and compare the results with the original network. Figure \[fig:core\_vs\_out\] shows that the core percolates precisely at the point where the first peak appears in the original network, whereas the periphery percolates at the second peak.
The core-periphery random graph: a simple model showing a double percolation transition
=======================================================================================
The modern theory of continuous phase transitions states that, in a connected system, it is not possible to break the same symmetry at two different values of the control parameter. In our context, this statement implies that it is not possible to have two genuine percolation transitions at two different values of $p$. It is then unclear whether the second peak observed in our simulations corresponds to a real percolation transition or to a smeared transition, with the percolated core acting as an effective external field that provides connectivity among nodes in the periphery.
Unfortunately, strongly clustered networks cannot be studied analytically. However, we can devise a system with a core-periphery organization similar to that induced by strong clustering. Let us consider two interconnected Erdös-Rényi random graphs, a core and a periphery, of average degrees of $\bar{k}_c$ and $\bar{k}_p$, respectively. The relative size is $r=N_c/N_p$, and the average numbers of connections of a node in the core to nodes in the periphery (and vice versa) are $\bar{k}_{cp}$ and $\bar{k}_{pc}=r \bar{k}_{cp}$, respectively. To model a core-periphery organization, we chose $r<1$ and $\bar{k}_c>\bar{k}_p \gg \bar{k}_{cp}$. The relative size of the giant component of the combined network is $$g(p)=\frac{r}{1+r}g_c(p)+\frac{1}{1+r}g_p(p),
\label{eq:1}$$ where $g_c(p)$ and $g_p(p)$ are the solutions of the system of transcendental equations $$\left.
\begin{array}{rcl}
g_c(p)&=&1-\displaystyle{e^{-p\bar{k}_c g_c(p)-p\bar{k}_{cp} g_{cp}(p)}}\\
g_{cp}(p)&=&1-\displaystyle{e^{-p\bar{k}_{pc} g_{pc}(p)-p\bar{k}_{p} g_{p}(p)}}\\
g_{pc}(p)&=&1-\displaystyle{e^{-p\bar{k}_{cp} g_{cp}(p)-p\bar{k}_{c} g_{c}(p)}}\\
g_p(p)&=&1-\displaystyle{e^{-p\bar{k}_p g_p(p)-p\bar{k}_{pc} g_{pc}(p)}}.
\label{eq:2}
\end{array}
\right\}$$ The derivation of these equations can be found in Appendix \[appendix\_D\]. From here, it readily follows that $g_c$ and $g_p$ must be either both different from zero or both equal to zero, implying that there is generally only one percolation transition, whereas at $p \approx \bar{k}_p^{-1}$, there is a crossover effect due to the growth of the periphery.
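For illustration, the system (\[eq:2\]) can be solved numerically by simple fixed-point iteration and combined with Eq. (\[eq:1\]). The sketch below is plain Python; the function name and parameter values are ours (illustrative, not taken from the figures), and starting the iteration from $g=1$ selects the stable non-trivial solution whenever it exists.

```python
import math

def giant_component(p, kc, kp, kcp, r, tol=1e-12, max_iter=10000):
    """Solve the system of Eq. (2) by fixed-point iteration and
    return g(p) through Eq. (1).

    kc, kp : internal mean degrees of core and periphery
    kcp    : mean number of links from a core node to the periphery
    r      : relative size N_c / N_p (so that k_pc = r * kcp)
    """
    kpc = r * kcp
    # Start from 1 so the iteration converges to the stable
    # non-trivial solution whenever it exists.
    gc = gcp = gpc = gp = 1.0
    for _ in range(max_iter):
        gc_n = 1.0 - math.exp(-p * kc * gc - p * kcp * gcp)
        gcp_n = 1.0 - math.exp(-p * kpc * gpc - p * kp * gp)
        gpc_n = 1.0 - math.exp(-p * kcp * gcp - p * kc * gc)
        gp_n = 1.0 - math.exp(-p * kp * gp - p * kpc * gpc)
        delta = max(abs(gc_n - gc), abs(gcp_n - gcp),
                    abs(gpc_n - gpc), abs(gp_n - gp))
        gc, gcp, gpc, gp = gc_n, gcp_n, gpc_n, gp_n
        if delta < tol:
            break
    return (r * gc + gp) / (1.0 + r)
```

With $\bar{k}_{cp}=0$ the core and periphery decouple and the solver reproduces the two separate thresholds $p=\bar{k}_c^{-1}$ and $p=\bar{k}_p^{-1}$.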
This result holds if the coupling between the core and periphery is macroscopic, that is, the number of connections between the two structures is proportional to the size of the system such that $\bar{k}_{cp}$ and $\bar{k}_{pc}$ are constants in the thermodynamic limit. Instead, suppose that the number of connections between nodes in the core and the periphery scales sub-linearly with the system size, [*i. e.*]{}, as $N^{\alpha}$ with $0<\alpha<1$. In this case, $\bar{k}_{cp}$ and $\bar{k}_{pc}$ are zero in the thermodynamic limit; thus, $g_c$ and $g_p$ become decoupled in Eq. (\[eq:2\]) such that $g_c$ can be different from zero while $g_p=0$. However, when both the core and periphery have a giant connected component as isolated networks, the combined network forms a single connected component because there is an infinite number of connections between the two parts.
The effect of such structure on bond percolation is as follows. When the bond occupation probability is increased from $p=0$, the first phase transition occurs at $p=\bar{k}_c^{-1}$, where the core percolates. In the range $\bar{k}_c^{-1}<p<\bar{k}_p^{-1}$, the number of nodes in the periphery connected through the giant component of the core scales as $N^{\alpha}$; therefore, its fraction vanishes in the limit $N \gg 1$. Once we reach $p=\bar{k}_p^{-1}$, a percolating cluster is formed in the periphery that becomes macroscopic as we increase $p$ by an infinitesimal amount. At this moment, and not before, the giant clusters in the periphery and core become connected. Thus, we have a double percolation transition defined by a regular transition at $p=\bar{k}_c^{-1}$ and the sudden emergence at $p=\bar{k}_p^{-1}$ of a macroscopic subgraph in the periphery with two types of connectivity; namely, each pair of nodes in this subgraph can be connected not only by a path going through the core but also by a path composed exclusively of nodes outside the core.
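The construction just described can be sketched as follows (illustrative plain-Python code; the function and parameter names are ours, and the $G(N,M)$-style realization of the Erdös-Rényi blocks with a fixed edge count is an implementation choice):

```python
import random

def core_periphery_graph(Nc, Np, kc, kp, alpha, seed=0):
    """Edge list of two G(N, M)-style Erdös-Rényi blocks coupled by
    ~N**alpha randomly placed inter-links.

    Core nodes are 0..Nc-1, periphery nodes Nc..Nc+Np-1.
    """
    rng = random.Random(seed)
    N = Nc + Np

    def er_block(nodes, mean_k):
        # Fix the number of edges so the mean degree is mean_k.
        m = int(len(nodes) * mean_k / 2)
        seen = set()
        while len(seen) < m:
            u, v = rng.sample(nodes, 2)
            seen.add((min(u, v), max(u, v)))
        return list(seen)

    edges = er_block(range(Nc), kc) + er_block(range(Nc, N), kp)
    # Sub-linear number of core-periphery couplings.
    for _ in range(int(N ** alpha)):
        edges.append((rng.randrange(Nc), rng.randrange(Nc, N)))
    return edges
```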
Figures \[fig:5\] [**a, b**]{} present the simulation results of the relative size of the giant component for $\alpha=1$ and $\alpha=0.5$, respectively. In the first case, we observe a crossover effect at approximately $p=\bar{k}_p^{-1}$, whereas in the second case, we observe a clear discontinuity in the derivative of $g(p)$ at exactly $p=\bar{k}_p^{-1}$, which is consistent with the analytical prediction in Eqs. (\[eq:1\]) and (\[eq:2\]) for $\bar{k}_{cp}=\bar{k}_{pc}=0$. However, the strongest evidence for the presence of a genuine double phase transition is provided by analysis of the susceptibility. In the case of a crossover effect, fluctuations in the percolated phase behave as $\langle G^2 \rangle-\langle G \rangle^2 \sim \langle G \rangle$; consequently, the quantity $\chi$ should diverge at the critical point and become size-independent after this point has been surpassed. In contrast, if the second transition in the periphery is a real phase transition, this quantity should diverge at both $p=\bar{k}_c^{-1}$ and $p=\bar{k}_p^{-1}$. This behavior is clearly observed in Figs. \[fig:5\] [**c, d**]{} (we provide a finite size analysis of both transitions in the Appendix \[appendix\_E\]).
In the case of clustered networks, it is difficult to clearly identify the core. Nevertheless, by using the giant $m1$-core as a rough approximation, we find that, in the case of $\bar{c}(k)=0.25$, the average number of connections between a node not in the giant $m1$-core and nodes in the giant $m1$-core is approximately $0.02$, indicating that the core and periphery are in fact very weakly coupled. In any case, the double divergence of $\chi$ shown in Fig. \[fig:FSS\] [**c**]{}, just as in the core-periphery random graph model with $\alpha<1$, is clear evidence for a genuine double phase transition.
![[]{data-label="fig:core_vs_out"}](./FIG4.pdf){width="\linewidth"}
![[]{data-label="fig:5"}](./FIG5.pdf){width="\linewidth"}
Discussion
==========
As we have demonstrated, clustering has a non-trivial effect on the properties of complex networks. This effect depends on three main factors: the heterogeneity of the degree distribution, the degree-degree correlations, and the shape of the clustering spectrum $\bar{c}(k)$. If we avoid degree-degree correlations, the combination of strong clustering and heterogeneity induces the emergence of a small but macroscopic core surrounded by a large periphery. This organization redefines the percolation phase space of complex networks by inducing a new percolated phase in which the core of the network is percolated but the periphery is not. In this situation, increasing clustering makes the core larger and more entangled, thereby decreasing the percolation threshold of the first transition, as suggested in [@Newman2003b; @Gleeson2009; @Serrano2006a]. However, in the remaining part of the network (the periphery), clustering generates small clique-like structures that are sparsely interconnected (see Fig. \[fig:mcore\] c). Thus, the periphery becomes more fragile, and the percolation threshold of the second phase transition increases, in agreement with [@Kiss2008; @Newman2009; @Miller2009; @Gleeson2010a]. For weakly heterogeneous networks, the size of the core is not macroscopic; thus, clustering only makes these networks more susceptible to the removal of links. This fact reconciles the two dominant interpretations of the effect of clustering on the percolation properties of complex networks. Interestingly, this behavior is also observed in a large sample of real complex networks (see Appendix \[appendix\_F\]), which provides evidence of the generality of this phenomenon.
We have shown that, in contrast to previous theory, it is possible to have two or more consecutive continuous phase transitions associated with the same symmetry breaking. Our work opens new lines of research concerning the effect of this core-periphery architecture on dynamical processes that occur in networks. In the case of epidemic spreading, for instance, the core could act as a reservoir of infectious agents that would be latently active in the core while the remainder of the network is uninfected.
This work was supported by a James S. McDonnell Foundation Scholar Award in Complex Systems; the ICREA Academia prize, funded by the [*Generalitat de Catalunya*]{}; MICINN project No. FIS2010-21781-C02-02; [*Generalitat de Catalunya*]{} grant No. 2014SGR608; and EC FET-Proactive Project MULTIPLEX (grant 317532).
Maximally random clustered networks {#appendix_A}
===================================
Maximally random clustered networks are generated by means of a biased rewiring procedure. One edge connecting nodes A and B is chosen at random. Then, we choose at random a second edge attached to at least one node (C) with the same degree as A. This edge connects C with D. The two edges are then swapped so that nodes A and D, on the one hand, and C and B, on the other, are now connected. We take care that no self-connections or multiple connections between the same pair of nodes are created in this process. Notice that this procedure preserves both the degree of each node and the degrees of the nodes at the ends of the two original edges. Therefore, the procedure preserves the full degree-degree correlation structure encoded in the joint distribution $P(k,k')$. The procedure is ergodic and satisfies detailed balance.
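A single rewiring attempt can be sketched as follows (plain Python; the bounded retry loop and the data structures are our implementation choices, not part of the published procedure). `degree` maps each node to its fixed degree and `edge_set` holds normalized edges for the self-link and multi-link checks:

```python
import random

def swap_step(edges, degree, edge_set, rng, attempts=100):
    """One correlation-preserving rewiring attempt: A-B, C-D -> A-D, C-B,
    where C must have the same degree as A.  Preserves every node degree
    and the joint distribution P(k, k').  Returns True if the swap was
    performed, False if all attempts were rejected."""
    i = rng.randrange(len(edges))
    a, b = edges[i]
    for _ in range(attempts):
        j = rng.randrange(len(edges))
        if j == i:
            continue
        c, d = edges[j]
        if degree[d] == degree[a]:
            c, d = d, c                       # orient so that deg(c) == deg(a)
        if degree[c] != degree[a]:
            continue
        if a == d or c == b:                  # would create a self-link
            continue
        e1 = (min(a, d), max(a, d))
        e2 = (min(c, b), max(c, b))
        if e1 in edge_set or e2 in edge_set:  # would create a multi-link
            continue
        edge_set.discard((min(a, b), max(a, b)))
        edge_set.discard((min(c, d), max(c, d)))
        edge_set.add(e1)
        edge_set.add(e2)
        edges[i], edges[j] = (a, d), (c, b)
        return True
    return False
```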
Regardless of the rewiring scheme in use, the process is biased so that generated graphs belong to an exponential ensemble of graphs $\cal{G} = \mit \lbrace G \rbrace$, where each graph has a sampling probability $P(G)\propto e^{-\beta H(G)}$; here $\beta$ is the inverse temperature and $H(G)$ is a Hamiltonian that depends on the current network configuration. Here we consider ensembles where the Hamiltonian depends on the target clustering spectrum $\bar{c}(k)$ as $$H = \sum_{k=k_{min}}^{k_c} |\bar{c}^*(k)-\bar{c}(k)|,$$ where $\bar{c}^*(k)$ is the current degree-dependent clustering coefficient. We then use a simulated annealing algorithm based on a standard Metropolis-Hastings procedure. Let $G'$ be the new graph obtained after one rewiring event, as defined above. The candidate network $G'$ is accepted with probability $$p = \min{(1,e^{\beta [H(G)-H(G')]})} = \min{(1,e^{-\beta \Delta H})},$$ otherwise, we keep the graph $G$ unchanged. We start by rewiring the network $200E$ times at $\beta=0$, where $E$ is the total number of edges of the network. Then, we start an annealing procedure at $\beta_0=50$, increasing the parameter $\beta$ by $10\%$ after $200E$ rewiring events have taken place. We keep increasing $\beta$ until the target clustering spectrum is reached within a predefined precision or no further improvement can be achieved.
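The annealing loop can be sketched as a generic skeleton (not the authors' code; `propose` stands for one rewiring event and `H` for the clustering-spectrum Hamiltonian above, and the default schedule parameters follow the values quoted in the text):

```python
import math
import random

def metropolis_anneal(state, propose, H, beta0=50.0, growth=1.1,
                      sweeps_per_beta=200, n_stages=30, rng=None):
    """Simulated-annealing skeleton for the biased rewiring ensemble.

    propose(state, rng) must return a candidate state (one rewiring
    event) without mutating `state`; H(state) is the Hamiltonian.
    beta is raised by the factor `growth` after every stage of
    `sweeps_per_beta` proposals."""
    rng = rng or random.Random(0)
    beta, h = beta0, H(state)
    for _ in range(n_stages):
        for _ in range(sweeps_per_beta):
            cand = propose(state, rng)
            h_cand = H(cand)
            # Metropolis-Hastings acceptance rule.
            if h_cand <= h or rng.random() < math.exp(-beta * (h_cand - h)):
                state, h = cand, h_cand
        beta *= growth
    return state
```

For a large network one would apply and, on rejection, undo the rewiring in place instead of copying the state.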
![**Top:** Bond percolation simulations for networks of $10.000$ nodes with a power law degree distribution with $\gamma=3.5$ and different levels of clustering. **a** relative size of the largest connected component $g$ as a function of the bond occupation probability $p$. **b** degree-dependent clustering coefficient $\bar{c}(k)$. **c** susceptibility $\chi$ as a function of bond occupation probability $p$. **d** Percolation threshold ($p_{max}$) as a function of the level of clustering. **Bottom:** **e-g**: $m$-core decomposition of three different networks of $50000$ nodes, $\gamma=3.5$, and different levels of clustering, $\bar c(k) = 0.01, 0.10,0.25$. **h**: Size of the largest connected component of the m-core as a function of $m$.[]{data-label="fig:homo35"}](./FigSI1.pdf "fig:"){width="\linewidth"} ![**Top:** Bond percolation simulations for networks of $10.000$ nodes with a power law degree distribution with $\gamma=3.5$ and different levels of clustering. **a** relative size of the largest connected component $g$ as a function of the bond occupation probability $p$. **b** degree-dependent clustering coefficient $\bar{c}(k)$. **c** susceptibility $\chi$ as a function of bond occupation probability $p$. **d** Percolation threshold ($p_{max}$) as a function of the level of clustering. **Bottom:** **e-g**: $m$-core decomposition of three different networks of $50000$ nodes, $\gamma=3.5$, and different levels of clustering, $\bar c(k) = 0.01, 0.10,0.25$. **h**: Size of the largest connected component of the m-core as a function of $m$.[]{data-label="fig:homo35"}](./FigSI2.pdf "fig:"){width="\linewidth"}
![**Top:** Bond percolation simulations for networks of $10.000$ nodes with a power law degree distribution with $\gamma=4$ and different levels of clustering. **a** relative size of the largest connected component $g$ as a function of the bond occupation probability $p$. **b** degree-dependent clustering coefficient $\bar{c}(k)$. **c** susceptibility $\chi$ as a function of bond occupation probability $p$. **d** Percolation threshold ($p_{max}$) as a function of the level of clustering. **Bottom:** **e-g**: $m$-core decomposition of three different networks of $50000$ nodes, $\gamma=4$, and different levels of clustering, $\bar c(k) = 0.003, 0.05,0.25$. **h**: Size of the largest connected component of the m-core as a function of $m$. []{data-label="fig:homo4"}](./FigSI3.pdf "fig:"){width="\linewidth"} ![**Top:** Bond percolation simulations for networks of $10.000$ nodes with a power law degree distribution with $\gamma=4$ and different levels of clustering. **a** relative size of the largest connected component $g$ as a function of the bond occupation probability $p$. **b** degree-dependent clustering coefficient $\bar{c}(k)$. **c** susceptibility $\chi$ as a function of bond occupation probability $p$. **d** Percolation threshold ($p_{max}$) as a function of the level of clustering. **Bottom:** **e-g**: $m$-core decomposition of three different networks of $50000$ nodes, $\gamma=4$, and different levels of clustering, $\bar c(k) = 0.003, 0.05,0.25$. **h**: Size of the largest connected component of the m-core as a function of $m$. []{data-label="fig:homo4"}](./FigSI4.pdf "fig:"){width="\linewidth"}
Effect of clustering on weakly heterogeneous networks {#appendix_B}
=====================================================
Figures \[fig:homo35\] for $\gamma=3.5$ and \[fig:homo4\] for $\gamma=4$ show the comparison of the percolation properties of networks with exactly the same degree sequence and degree-degree correlations but different levels of clustering. For each network, we perform bond percolation $10^4$ times using the Newman-Ziff algorithm [@Newman2000] and measure the average relative size of the largest (giant) connected component, $g \equiv \langle G \rangle/N$, and its fluctuations, [*i.e.*]{}, the susceptibility $\chi=[\langle G^2 \rangle - \langle G \rangle^2]/\langle G \rangle$. These results are then averaged over 50 network realizations. In finite systems, a peak in the susceptibility $\chi$ indicates the presence of a continuous phase transition, and its position gives an estimate of the percolation threshold. All networks have a unique and well-defined peak in $\chi$, and increasing the clustering moves the peak to higher values of $p$. Hence, clustering decreases the size of the giant component and increases the percolation threshold.
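The edge-addition step of the Newman-Ziff algorithm can be sketched with a union-find structure (a simplified version that tracks only the largest cluster; averaging the output and its square over many runs yields $\langle G \rangle$ and $\chi$ at occupation fraction $m/E$, and a binomial convolution recovers fixed-$p$ averages):

```python
import random

def newman_ziff_giant(n, edges, rng):
    """Add the edges one by one in random order; after the m-th addition
    the configuration is a sample of bond percolation with m occupied
    bonds.  Returns the size of the largest cluster after each addition."""
    parent = list(range(n))
    size = [1] * n

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    order = edges[:]
    rng.shuffle(order)
    largest, out = 1, []
    for u, v in order:
        ru, rv = find(u), find(v)
        if ru != rv:
            if size[ru] < size[rv]:
                ru, rv = rv, ru
            parent[rv] = ru                 # union by size
            size[ru] += size[rv]
            largest = max(largest, size[ru])
        out.append(largest)
    return out
```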
Identification of the core {#appendix_C}
==========================
In order to identify which nodes belong to the core and which to the periphery, we perform a bond percolation simulation on a network of $50000$ nodes with $\gamma=3.1$ and $\bar c(k)=0.25$. We first delete all edges and then add them back one by one in random order. Once we have added $20\%$ of the total number of edges ($p=0.2$, which lies between the two percolation thresholds), the giant component (GC) defines a subgraph that we identify with the core (red nodes in Fig. \[fig:core\]). If, in the same simulation, we keep adding edges, we observe another phase transition, where the periphery percolates at $p=0.5$. However, the periphery percolates regardless of the core. This can be observed by subtracting the nodes that belong to the core: the largest component that remains is still a macroscopic component (blue nodes in Fig. \[fig:core\]), and only a few nodes leave the GC (green nodes in Fig. \[fig:core\]).
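This identification step can be sketched in plain Python as follows (a simplified illustration with our own function names; in practice $p$ is set between the two susceptibility peaks, e.g. $p=0.2$):

```python
import random
from collections import Counter

def split_core_periphery(n, edges, p, rng):
    """Identify the core as the giant component of one bond-percolation
    sample at occupation probability p, and return it together with the
    edges of the remaining network (the periphery)."""
    kept = [e for e in edges if rng.random() < p]
    parent = list(range(n))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    for u, v in kept:
        parent[find(u)] = find(v)
    comp = Counter(find(u) for u in range(n))
    root = max(comp, key=comp.get)
    core = {u for u in range(n) if find(u) == root}
    periphery_edges = [(u, v) for u, v in edges
                       if u not in core and v not in core]
    return core, periphery_edges
```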
![A network of 50.000 nodes with a power-law degree distribution with $\gamma=3.1$ and a clustering spectrum $\bar c(k)=0.25$. The nodes are arranged according to the $m$-core decomposition. Red nodes (1811) form the core because they belong to the giant component once we perform bond percolation with $p=0.2$ (between the two percolation thresholds). Blue and green nodes are peripheral nodes that belong to the giant component at $p=0.5$ (just after the second percolation threshold). Once we subtract the core, blue nodes (10408) still remain in the GC, whereas green nodes (4271) belong to small components. Black nodes (33510) never belong to the GC.[]{data-label="fig:core"}](./FigSI5.png){width="\linewidth"}
Bond percolation on interconnected networks {#appendix_D}
===========================================
Let us consider two interconnected random graphs $a$ and $b$ with average degrees $\bar{k}_{aa}$ and $\bar{k}_{bb}$, respectively. The relative size is $r=N_a/N_b$, and the average numbers of connections of a node in $a$ to nodes in $b$ (and vice versa) are $\bar{k}_{ab}$ and $\bar{k}_{ba}=r \bar{k}_{ab}$. Each node may have connections to both networks, and therefore its degree can be represented as a vector $\vec{k}=(k_a,k_b)$. Hence, $P_a(\vec{k})$ is the probability that a node of network $a$ has degree $\vec{k}$, and $P_{ab}(\vec{k}'|\vec{k})$ is the probability that a node of $a$ with degree $\vec{k}$ is connected to a node of $b$ with degree $\vec{k}'$. The relative size of the giant component of the combined network is $$g(p)=\frac{r}{1+r}g_a(p)+\frac{1}{1+r}g_b(p).
\label{eq:g(p)}$$ where $g_a$ is the probability that a node of $a$ belongs to the giant component, or $1$ minus the probability that it belongs to a finite cluster, that is, $g_{a} = 1 - \sum_{s=0}^{\infty}Q_{a}(s)$, where $Q_a(s)$ is the probability that a randomly chosen node from network $a$ belongs to a cluster of size $s$.
In heterogeneous networks, the size of the cluster a given node belongs to is correlated with the degree of the node. Thus, $Q_a(s)$ must be evaluated as $Q_a(s)=\sum_{\vec{k}} P_a(\vec{k}) Q_a(s|\vec{k})$, where $Q_a(s|\vec{k})$ is the probability that a node from network $a$ of degree $\vec{k}$ belongs to a cluster of size $s$. The latter function satisfies
$$\begin{split}
Q_a(s|\vec{k}) &= \sum_{n_a} {k_a \choose n_a}p^{n_a}(1-p)^{k_a-n_a}\sum_{n_b} {k_b \choose n_b}p^{n_b}(1-p)^{k_b-n_b}\\ & \sum_{s_1\cdots s_{n_a}}G_{aa}(s_1|\vec{k})\cdots G_{aa}(s_{n_a}|\vec{k})\sum_{s'_1\cdots s'_{n_b}}G_{ab}(s'_1|\vec{k})\cdots G_{ab}(s'_{n_b}|\vec{k})\\ &\delta_{s,1+s_1+\cdots +s_{n_a}+s'_1+\cdots+s'_{n_b}},
\end{split}$$
where $G_{aa}(s|\vec{k})$ ($G_{ab}(s|\vec{k})$) is the probability to reach $s$ other nodes by following a neighbor in network $a$ ($b$). The generating function of $Q_a(s|\vec{k})$ can be written as $$\begin{split}
\hat{Q}_a(z|\vec{k}) = \sum_{s=0}^{\infty} Q_a(s|\vec{k}) z^{s} = z (1-p+p\hat{G}_{aa}(z|\vec{k}))^{k_a}(1-p+p\hat{G}_{ab}(z|\vec{k}))^{k_b} .
\end{split}$$ Functions $G_{aa}(s|\vec{k})$, $G_{ab}(s|\vec{k})$, $G_{ba}(s|\vec{k})$, and $G_{bb}(s|\vec{k})$ follow similar recurrence equations. Thus, their generating functions satisfy $$\hat{G}_{aa}(z|\vec{k}) = z \sum_{\vec{k}'} P_{aa}(\vec{k}'|\vec{k}) (1-p+p\hat{G}_{aa}(z|\vec{k}'))^{k_a'-1}(1-p+p\hat{G}_{ab}(z|\vec{k}'))^{k_b'}$$ $$\hat{G}_{ab}(z|\vec{k}) = z \sum_{\vec{k}'} P_{ab}(\vec{k}'|\vec{k}) (1-p+p\hat{G}_{ba}(z|\vec{k}'))^{k_a'-1}(1-p+p\hat{G}_{bb}(z|\vec{k}'))^{k_b'}$$ $$\hat{G}_{ba}(z|\vec{k}) = z \sum_{\vec{k}'} P_{ba}(\vec{k}'|\vec{k}) (1-p+p\hat{G}_{aa}(z|\vec{k}'))^{k_a'}(1-p+p\hat{G}_{ab}(z|\vec{k}'))^{k_b'-1}$$ $$\hat{G}_{bb}(z|\vec{k}) = z \sum_{\vec{k}'} P_{bb}(\vec{k}'|\vec{k}) (1-p+p\hat{G}_{ba}(z|\vec{k}'))^{k_a'}(1-p+p\hat{G}_{bb}(z|\vec{k}'))^{k_b'-1} ,$$
where $P_{aa}(\vec{k}'|\vec{k})$ is the probability that a randomly chosen neighbor among all the $a$ neighbors of a node that belongs to network $a$ with degree $\vec{k}$ has degree $\vec{k}'$, and analogously for the rest of the transition probabilities.
For networks with no degree-degree correlations, these transition probabilities simplify as $$\begin{split}
P_{aa}(\vec{k}'|\vec{k}) = \frac{k_a' P_a(\vec{k}')}{\bar{k}_{aa}} \quad
P_{bb}(\vec{k}'|\vec{k}) = \frac{k_b' P_b(\vec{k}')}{\bar{k}_{bb}} \\
P_{ab}(\vec{k}'|\vec{k}) = \frac{k_a' P_b(\vec{k}')}{\bar{k}_{ba}} \quad
P_{ba}(\vec{k}'|\vec{k}) = \frac{k_b' P_a(\vec{k}')}{\bar{k}_{ab}}.
\end{split}$$ This implies that functions $G_{aa}(z|\vec{k})$, $G_{ab}(z|\vec{k})$, $G_{ba}(z|\vec{k})$, and $G_{bb}(z|\vec{k})$ become independent of $\vec{k}$. We further assume that the number of neighbors from $a$ and $b$ of a given node are uncorrelated, that is $$P_a(\vec{k}) = P_a(k_a)P_a(k_b) \quad P_b(\vec{k}) = P_b(k_a)P_b(k_b).$$ In the case of two coupled Erdös-Rényi random graphs, the degree distributions $P_a(k_a)$, $P_a(k_b)$, $P_b(k_a)$, and $P_b(k_b)$ are all Poisson distributions of parameter $\bar{k}_{aa}$, $\bar{k}_{ab}$, $\bar{k}_{ba}$, and $\bar{k}_{bb}$, respectively. In this case, it is easy to check that $\hat{Q}_a(z)=\hat{G}_{aa}(z)$, $\hat{Q}_b(z)=\hat{G}_{bb}(z)$, and $$\hat{G}_{aa}(z)= z e^{-\bar{k}_{aa} p (1 - \hat{G}_{aa}(z))}e^{-\bar{k}_{ab} p (1 - \hat{G}_{ab}(z))}
\label{eq:Gaa}$$ $$\hat{G}_{ab}(z) = z e^{-\bar{k}_{ba} p (1 - \hat{G}_{ba}(z))}e^{-\bar{k}_{bb} p (1 - \hat{G}_{bb}(z))}
\label{eq:Gab}$$ $$\hat{G}_{ba}(z) = z e^{-\bar{k}_{ab} p (1 - \hat{G}_{ab}(z))}e^{-\bar{k}_{aa} p (1 - \hat{G}_{aa}(z))}
\label{eq:Gba}$$ $$\hat{G}_{bb}(z) = z e^{-\bar{k}_{bb} p (1 - \hat{G}_{bb}(z))}e^{-\bar{k}_{ba} p (1 - \hat{G}_{ba}(z))}.
\label{eq:Gbb}$$ Finally, using that $g_a=1-\hat{Q}_a(z=1)=1-\hat{G}_{aa}(z=1)$, $g_b=1-\hat{Q}_b(z=1)=1-\hat{G}_{bb}(z=1)$ and after defining $g_{ab}=1-\hat{G}_{ab}(z=1)$ and $g_{ba}=1-\hat{G}_{ba}(z=1)$ we obtain Eq. (\[eq:2\]).
Finite size scaling of the core-periphery random graph model {#appendix_E}
============================================================
We first notice that the susceptibility that we use in our work is not the standard one, although it is directly related to it. The standard one is defined as $$\chi_{st} \equiv \frac{\langle G^2 \rangle -\langle G \rangle^2}{N},$$ whereas ours is defined as $$\chi \equiv \frac{\langle G^2 \rangle -\langle G \rangle^2}{\langle G \rangle}.$$ For a finite system of size $N$, the peak of the susceptibility near the critical point behaves as $\chi_{st}^{max} \sim N^{\gamma/\nu}$ and the average cluster size as $\langle G \rangle \sim N^{1-\beta/\nu}$ (in this context $\gamma$ is not the exponent of the degree distribution but the critical exponent of the susceptibility). Therefore, our version of the susceptibility $\chi$ diverges near the critical point as $\chi \sim N^{\gamma'/\nu}$, where $\gamma'=\gamma+\beta$.
Let $(\beta_c,\gamma'_c,\nu_c)$ and $(\beta_p,\gamma'_p,\nu_p)$ be the critical exponents of the core and the periphery when they are isolated from each other. Close to the percolation transition of the core, the giant component is mainly composed of nodes in the core and, therefore, we expect the first transition to have the critical properties of regular percolation in the core subgraph; in particular, the susceptibility near the first peak diverges with the exponent $\gamma'_c/\nu_c$. Close to the second transition point, the giant component is the sum of the giant component in the core, $G_c$, plus the percolating cluster in the periphery, $G_p$. The susceptibility in this region can be evaluated as $$\chi \approx \chi_c+\frac{\langle G_p \rangle}{\langle G_c \rangle} \chi_p.$$ However, if the second transition point is well separated from the first one, close to this second transition $\chi_c$ is approximately constant and $\langle G_c \rangle \sim N$. Then, we expect that near the second transition the susceptibility behaves as $\chi \sim N^{(\gamma'_p-\beta_p)/\nu_p}$. The critical exponents in the case of Erdös-Rényi random graphs are the mean-field ones, that is, $\beta=\gamma=1$ and $\nu=3$. Therefore, in our simulations, we expect the first peak to diverge as $N^{2/3}$, the second peak as $N^{1/3}$, and the position of the peaks to approach their respective critical points as $p_{max} \sim p_c+A N^{-1/3}$. This is confirmed in Fig. \[fig:FSS\].
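Both fits can be sketched as ordinary least-squares regressions (illustrative code with our own function names; in practice the inputs $\chi_{max}(N)$ and $p_{max}(N)$ come from the percolation simulations):

```python
import math

def power_law_exponent(sizes, chi_max):
    """Least-squares slope of log(chi_max) versus log(N),
    i.e. the exponent gamma'/nu."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(c) for c in chi_max]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def threshold_extrapolation(sizes, p_max, inv_nu=1.0 / 3.0):
    """Fit p_max = p_c + A * N**(-1/nu) by linear regression in
    x = N**(-1/nu); returns (p_c, A)."""
    xs = [n ** -inv_nu for n in sizes]
    mx = sum(xs) / len(xs)
    my = sum(p_max) / len(p_max)
    A = (sum((x - mx) * (y - my) for x, y in zip(xs, p_max))
         / sum((x - mx) ** 2 for x in xs))
    return my - A * mx, A
```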
![Bond percolation simulations for the core-periphery random graph model for $\alpha=1$ for different sizes. In both cases the core has an average degree $\bar{k}_c=10$ and the periphery $\bar{k}_p=2.5$. The ratio core/periphery is $r=0.2$. [**a:**]{} Relative size of the largest connected component as a function of the bond occupation probability $p$. [**c:**]{} Susceptibility $\chi$ as a function of bond occupation probability $p$. [**b**]{} and [**d:**]{} Position $p_{max}$ and height $\chi_{max}$ of the two peaks of $\chi$ as function of network size $N$. The straight lines are power-law fits. [**b**]{} and [**d**]{} show the measured values of the critical exponents.[]{data-label="fig:FSS"}](./FigSI6.pdf){width="\linewidth"}
![Bond percolation simulations for the core-periphery random graph model for $\alpha=0.5$ for different sizes. In both cases the core has an average degree $\bar{k}_c=10$ and the periphery $\bar{k}_p=2.5$. The ratio core/periphery is $r=0.2$. [**a:**]{} Relative size of the largest connected component as a function of the bond occupation probability $p$. [**c:**]{} Susceptibility $\chi$ as a function of bond occupation probability $p$. [**b**]{} and [**d:**]{} Position $p_{max}$ and height $\chi_{max}$ of the two peaks of $\chi$ as function of network size $N$. The straight lines are power-law fits. [**b**]{} and [**d**]{} show the measured values of the critical exponents.[]{data-label="fig:FSS"}](./FigSI7.pdf){width="\linewidth"}
Real Networks {#appendix_F}
=============
US air transportation network
-----------------------------
In the US air transportation network, the nodes are airports and a link indicates the existence of a direct flight between two airports [@Serrano2009]. The resulting network has $583$ nodes, $1087$ edges, an average degree of $\bar k=3.73$, a clustering coefficient of $\bar C=0.43$ and a maximum degree of $k_{max}=109$.
![image](./FigSI8.pdf){width="0.45\linewidth"} ![image](./FigSI9.png){width="0.45\linewidth"}
Human disease network
---------------------
In the “human disease network”, nodes represent disorders, and two disorders are connected to each other if they share at least one gene in which mutations are associated with both disorders [@Goh2007]. The resulting network has $867$ nodes, $1527$ edges, an average degree of $\bar k=3.52$, a clustering coefficient of $\bar C=0.81$ and a maximum degree of $k_{max}=50$.
![image](./FigSI10.pdf){width="0.45\linewidth"} ![image](./FigSI11.png){width="0.45\linewidth"}
Pokec Online Social Network
---------------------------
Pokec is one of the most popular on-line social networks in Slovakia. Pokec has been running for more than 10 years and connected more than 1.6 million people by 2012. We analyse the undirected network obtained by deleting all non-bidirectional links. To obtain a smaller system, we only considered nodes that signed up to the online network before 2004. The resulting network has $44285$ nodes, $75285$ edges, an average degree of $\bar k=3.4$, a clustering coefficient of $\bar C=0.09$ and a maximum degree of $k_{max}=58$.
![image](./FigSI12.pdf){width="0.45\linewidth"} ![image](./FigSI13.png){width="0.45\linewidth"}
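The summary statistics quoted in this appendix can be computed from a plain edge list as in the following sketch (illustrative code; the real datasets are loaded from their published files):

```python
from collections import defaultdict

def network_summary(edges):
    """N, E, mean degree, maximum degree and mean clustering coefficient
    of an undirected simple graph given as an edge list (isolated nodes
    are not represented and hence not counted)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n, e = len(adj), len(edges)

    def local_c(u):
        # Fraction of connected pairs among the neighbors of u.
        k = len(adj[u])
        if k < 2:
            return 0.0
        nb = list(adj[u])
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nb[j] in adj[nb[i]])
        return 2.0 * links / (k * (k - 1))

    cbar = sum(local_c(u) for u in adj) / n
    return n, e, 2.0 * e / n, max(len(adj[u]) for u in adj), cbar
```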
[37]{} natexlab\#1[\#1]{}url \#1[`#1`]{}urlprefix
Pastor-Satorras, R. & Vespignani, A. Epidemic spreading in scale-free networks. *Physical Review Letters* **86**, 3200–3203 (2001).
Lloyd, A. & May, R. Epidemiology - How viruses spread among computers and people. *Science* **292**, 1316–1317 (2001).
Boguna, M., Pastor-Satorras, R. & Vespignani, A. Absence of epidemic threshold in scale-free networks with degree correlations. *Physical Review Letters* **90** (2003).
Berger, N., Borgs, C., Chayes, J. T. & Saberi, A. On the Spread of Viruses on the Internet. In *Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms*, SODA ’05, 301–310 (Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2005).
Chatterjee, S. & Durrett, R. Contact processes on random graphs with power law degree distributions have critical value 0. *The Annals of Probability* **37**, 2332–2356 (2009).
Boguñá, M., Castellano, C. & Pastor-Satorras, R. Nature of the Epidemic Threshold for the Susceptible-Infected-Susceptible Dynamics in Networks. *Phys. Rev. Lett.* **111**, 068701 (2013).
Bianconi, G. Mean field solution of the Ising model on a Barabasi-Albert network. *Physics Letters A* **303**, 166–168 (2002).
Goltsev, A., Dorogovtsev, S. & Mendes, J. Critical phenomena in networks. *Physical Review E* **67** (2003).
Hinczewski, M. & Berker, A. N. Inverted Berezinskii-Kosterlitz-Thouless singularity and high-temperature algebraic order in an Ising model on a scale-free hierarchical-lattice small-world network. *Physical Review E* **73**, 066126 (2006).
Dorogovtsev, S. N., Goltsev, A. V. & Mendes, J. F. F. Critical phenomena in complex networks. *Reviews of Modern Physics* **80**, 1275–1335 (2008).
Cohen, R., Erez, K., Ben-Avraham, D. & Havlin, S. Resilience of the Internet to random breakdowns. *Physical Review Letters* 20–22 (2000).
Callaway, D. S., Newman, M. E., Strogatz, S. H. & Watts, D. J. Network robustness and fragility: percolation on random graphs. *Physical Review Letters* **85**, 5468–5471 (2000).
Cohen, R., Ben-Avraham, D. & Havlin, S. Percolation critical exponents in scale-free networks. *Physical Review E* **66**, 036113 (2002).
Newman, M. Assortative Mixing in Networks. *Physical Review Letters* **89**, 208701 (2002).
Newman, M. E. J. & Web, W.-w. Properties of highly clustered networks. *Physical Review E* 1–7 (2003).
Vázquez, A. & Moreno, Y. Resilience to damage of graphs with degree correlations. *Physical Review E* **67**, 015101 (2003).
Dorogovtsev, S. N., Mendes, J. F. F. & Samukhin, A. N. Anomalous percolation properties of growing networks. *Phys. Rev. E* **64**, 066110 (2001).
Callaway, D. S., Hopcroft, J. E., Kleinberg, J. M., Newman, M. E. J. & Strogatz, S. H. Are randomly grown graphs really random? *Phys. Rev. E* **64**, 041902 (2001).
Buldyrev, S. V., Parshani, R., Paul, G., Stanley, H. E. & Havlin, S. Catastrophic cascade of failures in interdependent networks. *Nature* **464**, 1025–8 (2010).
Son, S.-W., Bizhani, G., Christensen, C., Grassberger, P. & Paczuski, M. Percolation theory on interdependent networks based on epidemic spreading. *EPL* **97** (2012).
Baxter, G. J., Dorogovtsev, S. N., Goltsev, A. V. & Mendes, J. F. F. Avalanche Collapse of Interdependent Networks. *PHYSICAL REVIEW LETTERS* **109** (2012).
Kiss, I. & Green, D. Comment on “Properties of highly clustered networks”. *Physical Review E* **78**, 048101 (2008).
Newman, M. Random Graphs with Clustering. *Physical Review Letters* **103**, 058701 (2009).
Miller, J. Percolation and epidemics in random clustered networks. *Physical Review E* **80**, 020901 (2009).
Gleeson, J. P., Melnik, S. & Hackett, A. How clustering affects the bond percolation threshold in complex networks. *Physical Review E* **81**, 066114 (2010).
Newman, M. Properties of highly clustered networks. *Physical Review E* **68**, 026121 (2003).
Gleeson, J. Bond percolation on a class of clustered random networks. *Physical Review E* **80**, 036107 (2009).
Serrano, M. & Boguñá, M. Clustering in complex networks. II. Percolation properties. *Physical Review E* **74**, 056115 (2006).
Csermely, P., London, A., Wu, L.-Y. & Uzzi, B. Structure and dynamics of core/periphery networks. *Journal of Complex Networks* **1**, 93–123 (2013).
Nagler, J., Tiessen, T. & Gutch, H. W. Continuous Percolation with Discontinuities. *Phys. Rev. X* **2**, 031009 (2012).
Chen, W. *et al.* Phase transitions in supercritical explosive percolation. *Phys. Rev. E* **87**, 052130 (2013).
Chen, W. *et al.* Unstable supercritical discontinuous percolation transitions. *Phys. Rev. E* **88**, 042152 (2013).
Bianconi, G. & Dorogovtsev, S. N. Multiple percolation transitions in a configuration model of network of networks. *arXiv:1402.0218* (2014).
Newman, M. E. & Ziff, R. M. Efficient Monte Carlo algorithm and high-precision results for percolation. *Physical Review Letters* **85**, 4104–7 (2000).
Colomer-de Simón, P., Serrano, M. A., Beiró, M. G., Alvarez-Hamelin, J. I. & Boguñá, M. Deciphering the global organization of clustering in real complex networks. *Scientific reports* **3**, 2517 (2013).
Melnik, S., Hackett, A., Porter, M. a., Mucha, P. J. & Gleeson, J. P. The unreasonable effectiveness of tree-based theory for networks with clustering. *Physical Review E* **83**, 036112 (2011).
Serrano, M. A. & Bogu[ñ]{}[á]{}, M. Clustering in complex networks. I. General formalism. *Phys. Rev. E* **74**, 056114 (2006).
& ** ****, ().
, & ** ****, ().
*et al.* ** ****, ().
[^1]: Note that this case is not the same as fixing the average clustering coefficient because, in our case, we enforce nodes of any degree to have the same local clustering. In any case, due to structural constraints, for very strong clustering it is not possible to keep $\bar{c}(k)$ constant for very large values of $k$. In this case, the algorithm generates the maximum possible clustering [@Serrano:2006qj].
---
abstract: 'Anomalies in time-series data give essential and often actionable information in many applications. In this paper we consider a model-free anomaly detection method for univariate time-series which adapts to non-stationarity in the data stream and provides probabilistic abnormality scores based on the conformal prediction paradigm. Despite its simplicity the method performs on par with complex prediction-based models on the Numenta Anomaly Detection benchmark and the Yahoo! S5 dataset.'
author:
- |
\
Skolkovo Institute of Science and Technology, Skolkovo, Moscow Region, Russia\
Institute for Information Transmission Problems, Moscow, Russia\
Skolkovo Institute of Science and Technology, Skolkovo, Moscow Region, Russia\
Institute for Information Transmission Problems, Moscow, Russia\
Skolkovo Institute of Science and Technology, Skolkovo, Moscow Region, Russia\
Institute for Information Transmission Problems, Moscow, Russia\
Skolkovo Institute of Science and Technology, Skolkovo, Moscow Region, Russia\
Institute for Information Transmission Problems, Moscow, Russia
bibliography:
- 'references/references.bib'
title: 'Conformal $k$-NN Anomaly Detector for Univariate Data Streams'
---
Conformal prediction, nonconformity, anomaly detection, time-series, nearest neighbours
Introduction {#sec:introduction}
============
Anomaly detection in time-series data has important applications in many practical fields [@kejariwal2015], such as monitoring of aircraft’s cooling systems in aerospace industry [@alestraetal2014], detection of unusual symptoms in healthcare, monitoring of software-intensive systems [@artemovburnaev16], of suspicious trading activity by regulators or high frequency dynamic portfolio management in finance, etc.
General anomaly detection methods can be broadly categorized into five families, [@pimenteletal2014], each approaching the problem from a different angle: probabilistic, distance-based, prediction-based, domain-based, and information-theoretic techniques. The common feature of all families is the reliance on a negative definition of abnormality: “abnormal” is whatever is not “normal”, i.e. a substantial deviation from a typical set of patterns.
Prediction-based anomaly detection techniques rely on an internal regression model of the data: for each test example the discrepancy between the observed and the prediction, i.e. the reconstruction error, is used to decide its abnormality. For example, neural networks are used in this manner in [@augusteijnfolkert2002] and [@hawkinsetal2002; @williamsetal2002], whereas in [@chandolaetal2009] the predictions are based on a comprehensive description of the variability of the input data. Other reconstruction methods include dimensionality reduction [@jolliffe2014], linear and kernel Principal Component Analysis [@duttaetal2007; @shyuetal2003; @hoffmann2007; @scholkopfetal1998].
Anomaly detection in time-series analysis is complicated by high noise and the fact that the assumptions of classical change point models are usually violated by either non-stationarity or quasi-periodicity of the time-series [@artemovetal2015; @artemovburnaev16], or long-range dependence [@artemovburnaev2015a]. Classical methods require strong pre- and post- change point distributional assumptions, when in reality change-points might exhibit clustering, or be starkly contrasting in nature between one another. Thus, the usual approach of detecting anomalies against a fixed model, e.g. the classical models [@burnaev2009; @burnaevetal2009], is unsubstantiated. This has compelled practitioners to consider specialized methods for anomaly model selection [@burnaevetal2015a], construction of ensembles of anomaly detectors [@artemovburnaev2015b], and explicit rebalancing of the normal and abnormal classes [@burnaevetal2015b], among others.
Time-series anomaly detection techniques include, among others, spatiotemporal self organising maps [@barretoetal2009], kurtosis-optimising projections of a VARMA model used as features for outlier detection algorithm based on the CUSUM [@galeanoetal2006], Multidimensional Probability Evolution method to identify regions of the state space frequently visited during normal behaviour [@leeroberts2008], tracking the empirical outlier fraction of the one-class SVM on sliding data slices [@gardneretal2006], or applying one-class SVM to centred time-series embedded into a phase space by a sliding window [@maperkins2003]. The main drawback of these approaches is that they use explicit data models, which require parameter estimation and model selection.
Distance-based anomaly detection methods perform a task similar to that of estimating the pdf of data and do not require prior model assumptions. They rely on a metric, usually Euclidean of Mahalanobis, to quantify the degree of dissimilarity between examples and to derive either a distance-based or a local density score in order to assess abnormality. Such methods posit that the normal observations are well embedded within their metric neighbourhood, whereas outliers are not.
Despite being model-free, distance-based methods do not provide a natural probabilistic measure that conveys the detector’s degree of confidence in the abnormality of an observation. Indeed, there do exist distance-based methods, for example LoOP, [@kriegeletal2009], which output this kind of score, but typically they rely on quite limiting distributional assumptions. Such assumptions can potentially be avoided by using conformal prediction methods, [@shaferetal2008]. For instance, conformal prediction allows efficient construction of non-parametric confidence intervals [@nazarov2016].
This paper outlines an anomaly detection method for univariate time-series, which attempts to adapt to non-stationarity by computing “deferred” scores and uses conformal prediction to construct a non-parametric probability measure, which efficiently quantifies the degree of confidence in abnormality of new observations. We also provide technical details on boosting the performance of the final anomaly detector, e.g. signal pruning. The extensive comparison on Yahoo! S5 and Numenta benchmark datasets revealed that the proposed method performs on par with complex prediction-based detectors. The proposed method is among the top 3 winning solutions of the 2016 Numenta Anomaly Detection Competition, see [@numenta2016].
In section \[sec:conformal\_anomaly\_detection\] we review general non-parametric techniques for assigning confidence scores to anomaly detectors. In sec. \[sec:anomaly\_detection\_in\_uts\] we propose a conformal detector for univariate time-series based on $k$-NN ($k$ Nearest Neighbours) and time-delay embedding, which attempts to tackle quasi-periodicity and non-stationarity issues. In section \[sec:anomaly\_detection\_benchmark\] we provide details on the comparison methodology and the Numenta Anomaly Detection benchmark, and in section \[sec:benchmark\_results\] we compare the performance of the proposed method.
Conformal Anomaly Detection {#sec:conformal_anomaly_detection}
===========================
Conformal Anomaly Detection (CAD), [@laxhammar2014], is a distribution-free procedure, which assigns a probability-like confidence measure to predictions of an arbitrary anomaly detection method. CAD uses the scoring output of the detector $A(X_{:t}, {\mathbf}{x}_{t+1})$ as a measure of non-conformity (Non-Conformity Measure, NCM), which quantifies how much different a test object ${\mathbf}{x}_{t+1} \in \mathcal{X}$ is with respect to the *reference* sample $X_{:t}=({\mathbf}{x}_s)_{s=1}^t \in \mathcal{X}$. Typical examples of NCMs are prediction error magnitude for a regression model, reconstruction error for dimensionality reduction methods, average distance to the $k$ nearest neighbours, etc. The NCM may have intrinsic randomness independent of the data, [@vovk2013]. For a sequence of observations ${\mathbf}{x}_t \in \mathcal{X}$, $t = 1,2,\ldots$, at each $t\geq 1$ CAD computes the scores $$\label{eq:cad_scores}
\alpha_s^t
= A(X_{:t}^{-s}, {\mathbf}{x}_s)
\,,\, s = 1,\ldots,t
\,,$$ where $X_{:t}^{-s}$ is the sample $X_{:t}$ without the $s$-th observation. The confidence that ${\mathbf}{x}_t$ is anomalous relative to the *reference* sample $X_{:(t-1)}$ is one minus the empirical $p$-value of its non-conformity score $\alpha_t^t$ in \[eq:cad\_scores\]: $$\label{eq:cad_p_value}
p({\mathbf}{x}_t, X_{:(t-1)}, A)
= \frac1t \Bigl|
\{s=1,\ldots,t \,:\, \alpha_s^t \geq \alpha_t^t\}
\Bigr|
\,.
\tag{CPv}$$ Basically, the more abnormal ${\mathbf}{x}_t$ is the lower its $p$-value is, since anomalies, in general, poorly conform to the previously observed reference sample $X_{:(t-1)}$.
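The procedure above can be sketched in a few lines; `ncm` stands for an arbitrary scoring function $A$ and is a placeholder name of our choosing, not from the paper:

```python
def cad_p_value(X, ncm):
    """Conformal p-value of the newest observation X[-1] w.r.t. X[:-1].

    X   -- list of observations x_1, ..., x_t seen so far
    ncm -- non-conformity measure: ncm(reference_sample, x) -> float
    """
    t = len(X)
    # leave-one-out scores alpha_s^t = A(X without x_s, x_s), s = 1, ..., t
    alphas = [ncm(X[:s] + X[s + 1:], X[s]) for s in range(t)]
    # share of scores at least as non-conforming as the test score alpha_t^t
    return sum(a >= alphas[-1] for a in alphas) / t
```

With a nearest-neighbour-distance NCM, an isolated observation among values near zero receives a small $p$-value, i.e. a high abnormality confidence.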
In [@shaferetal2008] it was shown that online conformal prediction, and by extension CAD, offers conservative coverage guarantees in online learning setting. Indeed, when iid sequence ${\mathbf}{x}_t \sim D$ is fed into the conformal anomaly detector one observation at a time, then for any NCM $A$ and all $t\geq 1$ $$\label{eq:cad_coverage}
\mathbb{P}_{X \sim D^t} \bigl(
p({\mathbf}{x}_t, X^{-t}, A) < \epsilon
\bigr)
\leq \epsilon
\,,\, X=({\mathbf}{x}_s)_{s=1}^t
\,.$$ Intuitively, \[eq:cad\_p\_value\] is the empirical CDF, obtained on the sample $(A(X^{-s}, {\mathbf}{x}_s))_{s=1}^t$, evaluated at the random point $A(X^{-t}, {\mathbf}{x}_t)$, with the sample $X$ drawn from an exchangeable distribution $D^t$. This means that the distribution of the $p$-value itself is asymptotically uniform. The NCM used in \[eq:cad\_scores\] affects the tightness of the guarantee and the volume of computations.
At any $t\geq1$ computing \[eq:cad\_p\_value\] in CAD requires $t$ evaluations of $A$ with different samples $X_{:t}^{-s}$, which is potentially computationally heavy. [@laxhammaretal2015] proposed the Inductive Conformal Anomaly Detection (ICAD), which uses a fixed proper training sample of size $n$ as the reference in the non-conformity scores. If the sequence $({\mathbf}{x}_t)_{t\geq1}$ is relabelled so that it starts at $1-n$ instead of $1$, then for each $t\geq1$ the ICAD uses the following setup: $$\underbrace{
{\mathbf}{x}_{-n+1}
, \ldots
, {\mathbf}{x}_0
}_{\tilde{X} \text{ proper training}}
, \overbrace{{}{blue}{
{\mathbf}{x}_1
, {\mathbf}{x}_2
, \ldots
, {\mathbf}{x}_{t-1}
}}^{\text{calibration}}
, \underbracket{{}{red}{
{\mathbf}{x}_t
}}_{\text{test}}
, \ldots \,.$$ The conformal $p$-value of a test observation ${\mathbf}{x}_t$ is computed using \[eq:cad\_p\_value\] on the modified scores: $$\label{eq:icad_scores}
\alpha_s^t
= A\bigl(\tilde{X}, {\mathbf}{x}_s\bigr)
\,,\, s = 1,\ldots,t
\,,\, \tilde{X} = ({\mathbf}{x}_{-n+1},\ldots,{\mathbf}{x}_0)
\,.$$ The ICAD is identical to CAD over the sequence $({\mathbf}{x}_t)_{t\geq n+1}$ (relabelled to start at $1$) with a non-conformity measure $\bar{A}$, which always *ignores* the supplied *reference* sample and uses the proper training sample $\tilde{X}$ instead. Therefore the ICAD enjoys a coverage guarantee similar to \[eq:cad\_coverage\] with the scores given by \[eq:icad\_scores\].
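A sketch of the online ICAD loop (again with a generic `ncm` placeholder): each incoming observation costs a single NCM evaluation against the fixed proper training sample, instead of a full leave-one-out pass:

```python
def icad_p_values(train, stream, ncm):
    """Online ICAD: yield the conformal p-value of each observation.

    train  -- fixed proper training sample (the reference for all scores)
    stream -- iterable of observations x_1, x_2, ...
    ncm    -- non-conformity measure: ncm(train, x) -> float
    """
    scores = []                  # growing calibration scores alpha_1, ..., alpha_t
    for x in stream:
        a = ncm(train, x)        # one NCM evaluation per step
        scores.append(a)
        # p-value over all scores seen so far, including the test score
        yield sum(s >= a for s in scores) / len(scores)
```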
By trading the deterministic guarantee for a PAC guarantee it is possible to make the ICAD use a fixed-size calibration set. The resulting “sliding” ICAD fixes the size of the calibration sample to $m$ and forces it to move along the sequence $({\mathbf}{x}_t)_{t\geq1}$, i.e. $$\underbrace{
{\mathbf}{x}_{-n+1}
, \ldots
, {\mathbf}{x}_0
}_{\tilde{X} \text{ training}}
, \, \ldots \,
, \overbrace{{}{blue}{
{\mathbf}{x}_{t-m}
, \ldots
, {\mathbf}{x}_{t-1}
}}^{\text{calibration}}
, \underbracket{{}{red}{
{\mathbf}{x}_t
}}_{\text{test}}
, \ldots
\,.$$ The conformal $p$-value uses a subsample of the non-conformity scores \[eq:icad\_scores\]: $$\label{eq:sliding_icad_p_value}
p({\mathbf}{x}_t, X_{:(t-1)}, A)
= \frac1{m+1} \Bigl|
\{i=0,\ldots,m\,:\, \alpha_{t-i}^t \geq \alpha_t^t\}
\Bigr|
\,.
\,. \tag{$\text{CPv}_m$}$$ The coverage guarantee for the ICAD is a corollary to proposition (2) in [@vovk2013]. In fact, the exchangeability of $({\mathbf}{x}_t)_{t\geq1}$ further implies a similar PAC-type validity result for the sliding ICAD, which states that for any $\delta, \epsilon\in(0,1)$, any fixed proper training set $\tilde{X}$, and any data distribution $D$ it is true that $$\label{eq:sliding_icad_validity}
\mathbb{P}_{{\mathbf}{x} \sim D} \bigl(
p({\mathbf}{x}, X, \bar{A}) < \epsilon
\bigr)
\leq \epsilon + \sqrt{\frac{\log\frac1{\delta}}{2m}}
\,,$$ with probability at least $1-\delta$ over draws of $X \sim D^m$, where $\bar{A}$ is the NCM ${\mathbf}{x}\mapsto A(\tilde{X}, {\mathbf}{x})$, which uses $\tilde{X}$ as the *reference* sample.
Anomaly Detection in Univariate Time Series {#sec:anomaly_detection_in_uts}
===========================================
In this section we outline the building blocks of the proposed model-free detection method which produces conformal confidence scores for its predictions. The conformal scores are computed using an adaptation of the ICAD to the case of potentially non-stationary and quasi-periodic time-series.
Consider a univariate time-series $X = (x_t)_{t\geq1} \in \real$. The first step of the proposed procedure is to embed $X$ into an $l$-dimensional space, via a sliding historical window: $$\label{eq:time_delay_embed}
\ldots
, x_{t-l-1}
, \rlap{$\overbracket{
\phantom{
x_{t-l}
, x_{t-l+1}
, \ldots
, x_{t-1}
}}^{{\mathbf}{x}_{t-1}}$
}
{}{red}{x_{t-l}}
, \underbracket{
{}{blue}{
x_{t-l+1}
, \ldots
, x_{t-1}
}
, {}{red}{x_t}
}_{{\mathbf}{x}_t}
, x_{t+1}
, \ldots
\,. \tag{T-D}$$ In other words, ${\mathbf}{x}_t\in \real^l$ is $l$ most recent observations of $x_s$, $s=t-l+1,\ldots,t$. This embedding requires a “burn-in” period of $l$ observations to accumulate at least one full window, unless padding is used.
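The (T-D) embedding is a plain sliding window; a minimal sketch:

```python
def time_delay_embed(x, l):
    """Embed a univariate series x into R^l with a sliding window of
    length l; the first l-1 observations form the burn-in period and
    produce no vector."""
    return [x[t - l + 1:t + 1] for t in range(l - 1, len(x))]
```

For example, `time_delay_embed([1, 2, 3, 4], 2)` yields `[[1, 2], [2, 3], [3, 4]]`.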
This embedding of $X$ permits the use of multivariate distance-based anomaly detection techniques. Distance-based anomaly detection uses a distance $d$ on the input space $\mathcal{X}$ to quantify the degree of dissimilarity between observations. Such methods posit that the normal observations are generally closer to their neighbours, as opposed to outlying examples, which typically lie farther away. If the space $\mathcal{X}$ is $\real^{d\times 1}$, then the most commonly used distance is the Mahalanobis metric, which takes into account the general shape of the sample and the correlations of the data. In the following the distance, induced by the sample $\mathcal{S}=({\mathbf}{x}_i)_{i=1}^n$, is $d({\mathbf}{x}, {\mathbf}{y}) = \sqrt{({\mathbf}{x}-{\mathbf}{y})' {\hat{\Sigma}}^{-1} ({\mathbf}{x}-{\mathbf}{y})}$, where $\hat{\Sigma}$ is an estimate of the covariance matrix on $\mathcal{S}$.
The $k$-NN anomaly detector assigns the abnormality score to some observation ${\mathbf}{x}\in \mathcal{X}$ based on the neighbourhood proximity measured by the average distance to the $k$ nearest neighbours: $$\label{eq:k_nn_scorer}
\text{NN}({\mathbf}{x}; k, \mathcal{S})
= \frac1{|N_k({\mathbf}{x})|} \sum_{{\mathbf}{y} \in N_k({\mathbf}{x})} d({\mathbf}{x},{\mathbf}{y})
\,,$$ where $N_k({\mathbf}{x})$ are the $k$ nearest neighbours of ${\mathbf}{x}$ within $\mathcal{S}$, excluding ${\mathbf}{x}$ itself. The detector labels as anomalous any observation whose score exceeds some calibrated threshold. The main drawbacks are high sensitivity to $k$ and poor interpretability of the score $\text{NN}({\mathbf}{x}; k, \mathcal{S})$, due to the lack of a natural data-independent scale. Various modifications of this detector are discussed in [@ramaswamyetal2000; @angiullietal2002; @bayetal2003; @hautamakietal2004] and [@zhangwang2006].
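A sketch of the scoring function $\text{NN}({\mathbf}{x}; k, \mathcal{S})$ with the sample-induced Mahalanobis distance; for simplicity the test point is assumed not to belong to the sample, and a pseudo-inverse guards against a degenerate covariance estimate:

```python
import numpy as np

def knn_score(x, sample, k):
    """Average Mahalanobis distance from x to its k nearest neighbours
    in `sample`; `sample` is an (n, l) array, x is a length-l vector."""
    cov = np.atleast_2d(np.cov(sample, rowvar=False))
    prec = np.linalg.pinv(cov)              # estimate of Sigma^{-1}
    diff = np.atleast_2d(sample) - x
    # quadratic forms diff_i' * prec * diff_i for every row i
    dist = np.sqrt(np.einsum('ij,jk,ik->i', diff, prec, diff))
    return float(np.sort(dist)[:k].mean())
```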
Alternatively, it is also possible to use density-based detection methods. For example the schemes proposed in [@breunigetal2000; @kriegeletal2009] are based on $k$-NN, but introduce the concept of *local data density*, a score that is inversely related to a distance-based characteristic of a point within its local neighbourhood. Similarly to the $k$-NN detector, these methods lack a natural scale for the abnormality score. Modifications of this algorithm are discussed in [@jinetal2006] and [@papadimitriouetal2003].
The combination of the embedding and the scoring function produces a non-conformity measure $A$ for the conformal procedures of sec. \[sec:conformal\_anomaly\_detection\]. The most suitable procedure is the sliding ICAD, since CAD and the online ICAD are heavier in terms of runtime complexity (tab. \[tab:method\_cplx\]). However, the sliding ICAD uses a fixed proper training sample for *reference*, which may not reflect potential non-stationarity. Therefore we propose a modification called the Lazy Drifting Conformal Detector (LDCD), which adapts to non-stationarity of the normal regime, such as quasi-periodic or seasonal patterns. The LDCD procedure is conceptually similar to the sliding ICAD, and thus is expected to provide similar validity guarantees, at least in the truly iid case. The main challenge is to assess the effect of the calibration scores within the same queue being computed against different training windows of the stream.
For the observed sequence $({\mathbf}{x}_t)_{t\geq1}$, the LDCD maintains two fixed-size separate samples at each moment $t\geq n+m$: the training set $\mathcal{T}_t = ({\mathbf}{x}_{t-N+i})_{i=0}^{n-1}$ of size $n$ ($N=m+n$) and the calibration **queue** $\mathcal{A}_t$ of size $m$. The sample $\mathcal{T}_t$ is used as the *reference* sample for conformal scoring as in . The calibration **queue** $\mathcal{A}_t$ keeps $m$ most recent non-conformity scores given by $\alpha_s = A(\mathcal{T}_s, {\mathbf}{x}_s)$ for $s=t-m,\ldots,t-1$. At each $t\geq n+m$ the samples $\mathcal{A}_t$ and $\mathcal{T}_t$ look as follows: $$\begin{aligned}
\text{data: }
& \ldots
, \overbrace{
{\mathbf}{x}_{t-m-n}
, \ldots
, {\mathbf}{x}_{t-m-1}
}^{\mathcal{T}_t \text{ training}}
, & {\mathbf}{x}_{t-m}
, \ldots
, {\mathbf}{x}_{t-1}
, &\,& \overbracket{{}{red}{{\mathbf}{x}_t}}^{\text{test}}
, \ldots \\
\text{scores: }
& \ldots
, \alpha_{t-m-n}
, \ldots
, \alpha_{t-m-1}
, & \underbrace{{}{blue}{
\alpha_{t-m}
, \ldots
, \alpha_{t-1}
}}_{\mathcal{A}_t \text{ calibration}}
, &\,& \overbracket{{}{red}{\alpha_t}}^{\text{test}}
, \ldots
\end{aligned}$$ The procedure uses the current test observation ${\mathbf}{x}_t$ to compute the non-conformity score $\alpha_t$, which is used to obtain the $p$-value similarly to \[eq:sliding\_icad\_p\_value\], but with respect to the scores in the calibration queue $\mathcal{A}_t$. At the end of step $t$ the calibration queue is updated by pushing $\alpha_t$ into $\mathcal{A}_t$ and evicting $\alpha_{t-m}$.
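One LDCD step can be sketched with two bounded queues; the class interface and names are ours, and `ncm` is any scoring function such as the $k$-NN score:

```python
from collections import deque

class LDCD:
    """Lazy Drifting Conformal Detector sketch: the training window T_t
    trails the test point by the m calibration steps, and A_t keeps the
    m most recent non-conformity scores."""

    def __init__(self, ncm, n, m):
        self.ncm, self.m = ncm, m
        self.train = deque(maxlen=n)    # T_t: sliding reference sample
        self.recent = deque(maxlen=m)   # raw observations awaiting promotion
        self.cal = deque(maxlen=m)      # A_t: calibration score queue

    def step(self, x):
        """Return the conformal p-value of x, then absorb x."""
        a = self.ncm(list(self.train), x)
        p = (sum(s >= a for s in self.cal) + 1) / (len(self.cal) + 1)
        if len(self.recent) == self.m:  # promote the oldest raw observation
            self.train.append(self.recent[0])
        self.recent.append(x)
        self.cal.append(a)              # push alpha_t, evict alpha_{t-m}
        return p
```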
The final conformal $k$-NN anomaly detector is defined by the following procedure:
1. the time-series $(x_t)_{t\geq1}$ is embedded into $\real^l$ using to get the sequence $({\mathbf}{x}_{t+l-1})_{t\geq 1}$;
2. the LDCD uses $k$-NN average distance for scoring $({\mathbf}{x}_t)_{t\geq 1}$.
The proper training sample $\mathcal{T}_t$ for $t=n+m+1$ is initialized to the first $n$ observations of the sequence $({\mathbf}{x}_t)_{t\geq 1}$, and the calibration queue $\mathcal{A}_t$ is populated with the scores $\alpha_{n+s} = \text{NN}({\mathbf}{x}_{n+s}; k, \mathcal{T}_{n+m+1})$ for $s=1,\ldots,m$.
Anomaly Detection Benchmark {#sec:anomaly_detection_benchmark}
===========================
The Numenta Anomaly Benchmark (NAB), [@lavinetal2015], is a corpus of datasets and a rigorous performance scoring methodology for evaluating algorithms for online anomaly detection. The goal of NAB is to provide a controlled and repeatable environment for testing anomaly detectors on data streams. The scoring methodology permits only automatic online adjustment of hyperparameters to each dataset in the corpus during testing. In this study we supplement the dataset corpus with additional data (sec. \[sub:datasets\]), but employ the default NAB scoring methodology (sec. \[sub:performance\_scoring\]).
Datasets {#sub:datasets}
--------
The NAB corpus contains $58$ real-world and artificial time-series with $1000$-$22000$ observations per series. The real data ranges from network traffic and CPU utilization in cloud services to sensors on industrial machines and social media activity. The dataset is labelled manually and collaboratively according to strict and detailed guidelines established by Numenta. Examples of time-series are provided in fig. \[fig:example\_datasets\].
We supplement the NAB corpus with the Yahoo! S5 dataset, [@yahoos5], which was collected to benchmark detectors on various kinds of anomalies, including outliers and change-points. The corpus contains $367$ tagged real and synthetic time-series, divided into $4$ subsets. The first group contains real production metrics of various Yahoo! services, and the other $3$ contain synthetic time-series with varying trend, noise and seasonality, which include either only outliers, or both outliers and change-points. We keep all univariate time-series from the first two groups for benchmarking. Statistics of the datasets in each corpus are given in tab. \[tab:corpora\_stat\].
  ----------- ----------- -------- ------ ------ ------- -------- ------ ------ ------ -------
  Corpus      Subset       Series    Min    Mean    Max    Total     Min   Mean    Max   Total
                                    len.    len.   len.     obs.   anom.  anom.  anom.   anom.
  ----------- ----------- -------- ------ ------ ------- -------- ------ ------ ------ -------
  Yahoo! S5   Synthetic        33   1421   1594    1680    52591      1   4.03      8     133
              Real             67    741   1415    1461    94866      0   2.13      5     143
              Total           100    741   1475    1680   147457      0   2.76      8     276
  NAB         Synthetic        11   4032   4032    4032    44352      0   0.55      1       6
              Real             47   1127   6834   22695   321206      0   2.43      5     114
              Total            58   1127   6302   22695   365558      0   2.07      5     120
  ----------- ----------- -------- ------ ------ ------- -------- ------ ------ ------ -------
: Description of the NAB and Yahoo! S5 corpora.[]{data-label="tab:corpora_stat"}
Performance scoring {#sub:performance_scoring}
-------------------
Typical metrics, such as precision and recall, are poorly suited for anomaly detection, since they do not incorporate time. The Numenta benchmark proposes a scoring methodology which favours timely true detections, softly penalizes tardy detections, and harshly punishes false alarms. The scheme uses anomaly windows around each event to categorize detections into true and false positives, and employs a sigmoid function to assign weights depending on the relative time of the detection. The penalty for missed anomalies and the rewards for timely detections are schematically shown in fig. \[fig:nab\_scoring\].
The crucial feature of scoring is that all false positives decrease the overall score, whereas only the earliest true positive detection within each window results in a positive contribution. The number of false negatives is the number of anomaly windows in the time-series, with no true positive detections. True negatives are not used in scoring.
Metric $A_{TP}$ $A_{FP}$ $A_{TN}$ $A_{FN}$
---------- ---------- ---------- ---------- ----------
Standard 1.0 -0.11 1.0 -1.0
LowFP 1.0 -0.22 1.0 -1.0
LowFN 1.0 -0.11 1.0 -2.0
: The detection rewards of the default application profiles in the benchmark.[]{data-label="tab:nad_app_costs"}
The relative costs of true positives (TP), false positives (FP) and false negatives (FN) vary between applications. In NAB this domain specificity is captured by the *application profile*, which multiplicatively adjusts the score contributions of TP, FP, and FN detections. NAB includes three prototypical application profiles, tab. \[tab:nad\_app\_costs\]. The “Standard” application profile mimics symmetric costs of misdetections, while the “low FP” and “low FN” profiles penalize either overly optimistic or conservative detectors, respectively. For the anomaly window of size $\approx 10\%$ of the span of the time-series, the standard profile assigns relative weights so that random detections made $10\%$ of the time get on average a zero final score, [@lavinetal2015].
If $X$ is the time-series with labelled anomalies, then the NAB score for a given detector and application profile is computed as follows. Each detection is matched to the anomaly window with the nearest right end after it. If $\tau$ is the relative position of a detection with respect to the right end of the anomaly window of width $W$, then the score of this detection is $$\sigma(\tau)
= \begin{cases}
A_{FP},
&\text{ if } \tau < -W \,; \\
(A_{TP} - A_{FP}) \bigl(1 + e^{5 \tau}\bigr)^{-1} + A_{FP},
&\text{ otherwise } \,.
\end{cases}$$ The overall performance of the detector over $X$ under profile $A$ is the sum of the weighted rewards from individual detections and the impact of missing windows. It is given by $$S_\mathtt{det}^A(X)
= \sum_{d\in D_\mathtt{det}(X)} \sigma(\tau_d)
+ A_{FN} f_\mathtt{det} \,,$$ where $D_\mathtt{det}(X)$ is the set of all alarms fired by the detector on the stream $X$, $\tau_d$ is the relative position of a detection $d\in D_\mathtt{det}(X)$, and $f_\mathtt{det}$ is the number of anomaly windows which cover no detections at all. The raw benchmark score $S_\mathtt{det}^A$ of the detector is the sum of scores on each dataset in the benchmark corpus: $\sum_X S_\mathtt{det}^A(X)$.
The final NAB score takes into account the detector’s responsiveness to anomalies and outputs a normalized score, [@lavinetal2015], computed by $$\label{eq:nab_final_score}
\mathtt{NAB\_score}_\mathtt{det}^A
= 100 \frac{S_\mathtt{det}^A - S_\mathtt{null}^A}
{S_\mathtt{perfect}^A - S_\mathtt{null}^A}
\,,$$ where $S_\mathtt{perfect}$ and $S_\mathtt{null}$ are the scores, respectively, for the detector, which generates true positives only, and the one which outputs no alarms at all. The range of the final score for any default profile is $(-\infty,100]$, since the worst detector is the one which fires only false positive alarms.
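The scoring formulas above can be sketched directly (profile weights as in tab. \[tab:nad\_app\_costs\]):

```python
import math

def sigma(tau, a_tp, a_fp, w):
    """Reward of a single detection at relative position tau w.r.t. the
    right end of its anomaly window of width w."""
    if tau < -w:
        return a_fp                       # detection outside any window
    return (a_tp - a_fp) / (1.0 + math.exp(5.0 * tau)) + a_fp

def nab_score(s_det, s_null, s_perfect):
    """Normalized final score: 100 * (S_det - S_null) / (S_perfect - S_null)."""
    return 100.0 * (s_det - s_null) / (s_perfect - s_null)
```

A detection well inside the window ($\tau \ll 0$) earns close to $A_{TP}$, while a late one ($\tau > 0$) earns close to $A_{FP}$.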
Benchmark Results {#sec:benchmark_results}
=================
In this section we analyze the runtime complexity of the proposed method (sec. \[sec:anomaly\_detection\_in\_uts\]) and conduct a comparative study on the anomaly benchmark dataset (sec. \[sec:anomaly\_detection\_benchmark\]).
Tab. \[tab:method\_cplx\] gives the worst case runtime complexity for the conformal procedures in terms of the worst case complexity of the NCM $A(X_{:t}, {\mathbf}{x})$, denoted by $c_A(t)$.
  ---------------- --------------------------------- -------------------------
  Procedure        Scores                            Pv
  ---------------- --------------------------------- -------------------------
  LDCD             $T c_A(n)$                        $T m$
  ICAD (sliding)   $T c_A(n)$                        $T m$
  ICAD (online)    $T c_A(n)$                        $T \log T$
  CAD              $\sum_{t=1}^T (t+n) c_A(t+n-1)$   $n T + \frac12 T(T+1)$
  ---------------- --------------------------------- -------------------------
: Worst case runtime complexity of conformal procedures on $({\mathbf}{x}_s)_{s=1-n}^T$, $n$ is the length of the train sample.[]{data-label="tab:method_cplx"}
The CAD procedure is highly computationally complex: for each ${\mathbf}{x}_t$ computing \[eq:cad\_p\_value\] requires a leave-one-out-like run of $A$ over the sample of size $t+n$ and a linear search through the new non-conformity scores. In the online ICAD it is possible to maintain a sorted array of non-conformity scores and thus compute each $p$-value via the binary search and update the scores in one evaluation of $A$ on ${\mathbf}{x}_t$ and the *reference* train sample. In the sliding ICAD and the LDCD updating the calibration queue requires one run of $A$ as well, but computing the $p$-value takes one full pass through $m$ scores. The key question therefore is how severe the reliability penalty in \[eq:sliding\_icad\_validity\] is, and how well each procedure performs under non-stationarity or quasi-periodicity.
In sec. \[sec:anomaly\_detection\_benchmark\] we described a benchmark for testing detector performance, based on real-life datasets and a scoring technique that mimics the actual costs of false negatives and false alarms. Almost all datasets in the Numenta Benchmark and Yahoo! S5 corpora exhibit signs of quasi-periodicity or non-stationarity. We use this benchmark to objectively measure the performance of the conformal $k$-NN detector proposed in sec. \[sec:anomaly\_detection\_in\_uts\].
The benchmark testing instruments provide each detector with the duration of the “probationary” period, which is $15\%$ of the total length of the currently used time-series. Additionally, the benchmark automatically calibrates each detector by optimizing the alarm decision threshold. We use the benchmark-suggested thresholds, and the probationary period duration as the size $n$ of the sliding historical window for training and the size $m$ of the calibration queue.
To measure the effect of conformal $p$-values on the performance we also test a basic $k$-NN detector with a heuristic rule to assign confidence. Similarly to the sliding train and calibration samples in the proposed LDCD $k$-NN, the baseline $k$-NN detector uses the train sample $\mathcal{T}_t$ as in sec. \[sec:anomaly\_detection\_in\_uts\] to compute the score of the $t$-th observation with \[eq:k\_nn\_scorer\]: $$\alpha_t = \text{NN}({\mathbf}{x}_t; k, \mathcal{T}_t) \,.$$ Then the score is dynamically normalized to a value within the $[0,1]$ range with a heuristic: $$\label{eq:dynrange_pv}
\mathtt{Pv}_t
= \frac{\max_{i=0}^m \alpha_{t-i} - \alpha_t}
{\max_{i=0}^m \alpha_{t-i} - \min_{i=0}^m \alpha_{t-i}}
\,.
\tag{DynR}$$ The conformal $k$-NN detector using the LDCD procedure performs the same historical sliding along the time-series, but its $p$-value is computed with (sec. \[sec:anomaly\_detection\_in\_uts\]): $$\label{eq:ldcd_pv}
\mathtt{Pv}_t
= \frac1{m+1}
\Bigl|\{i=0,\ldots, m\,:\,
\alpha_{t-i} \geq \alpha_t \}\Bigr|
\,.
\tag{LDCD}$$ The value $p_t = 1 - \mathtt{Pv}_t$ is the conformal abnormality score returned by each detector for the observation $x_t$. We report the experiment results on two settings of $k$ and $l$ hyperparameters: $(27, 19)$ and $(1, 1)$ for the number of neighbours $k$ and the embedding dimension $l$ respectively. The seemingly arbitrary setting $(27, 19)$ achieved the top-3 performance in the Numenta Anomaly Detection challenge, [@numenta2016]. These hyperparameter values were tuned via grid search over the accumulated performance on the combined corpus of $\approx 400$ time series, which makes the chosen parameters unlikely to overfit the data.
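The two normalizations differ only in how they place the test score within the window of the $m$ preceding scores; side by side (assuming the window is not constant, so \[eq:dynrange\_pv\] does not divide by zero):

```python
def dynr_pv(window):
    """DynR heuristic for the last score in `window` (the current score
    alpha_t preceded by the m previous scores); assumes max > min."""
    lo, hi = min(window), max(window)
    return (hi - window[-1]) / (hi - lo)

def ldcd_pv(window):
    """Conformal p-value of the last score against the same window."""
    a = window[-1]
    return sum(s >= a for s in window) / len(window)
```

On `[1.0, 2.0, 3.0, 10.0, 2.5]` the single extreme score `10.0` dominates the dynamic range, so DynR reports a mild abnormality of $1 - 0.83 \approx 0.17$, while the conformal score reports $1 - 0.6 = 0.4$: the conformal scheme uses the full distribution of the window, not just its extremes.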
Preliminary experimental results revealed that the \[eq:ldcd\_pv\] $k$-NN detector has adequate anomaly coverage, but a high false-positive rate. In order to decrease the number of false alarms, we have employed the following ad hoc pruning strategy in both detectors:
- output $p_t = 1-\mathtt{Pv}_t$ for the observation $x_t$, and if $p_t$ exceeds $99.5\%$ fix the output at $50\%$ for the next $\frac{n}5$ observations.
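The pruning rule amounts to a short refractory period after each strong alarm; a sketch (threshold and clamp values as in the rule above, the function name is ours):

```python
def prune(p_values, n, threshold=0.995, clamp=0.5):
    """Suppress the output for n // 5 steps after any abnormality score
    exceeds `threshold`, clamping the suppressed scores to `clamp`."""
    out, quiet = [], 0
    for p in p_values:
        if quiet > 0:
            out.append(clamp)
            quiet -= 1
        elif p > threshold:
            out.append(p)        # the triggering alarm passes through
            quiet = n // 5
        else:
            out.append(p)
    return out
```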
The results for $k$-NN detector with $27$ neighbours and $19$-dimensional embedding are provided in table \[tab:knn2719\].
  ----------- -------------------------------- ------- -------- ----------
  Corpus      $p$-value                          LowFN    LowFP   Standard
  ----------- -------------------------------- ------- -------- ----------
  NAB         \[eq:dynrange\_pv\]                 -9.6   -185.7      -54.9
              \[eq:ldcd\_pv\]                      4.3   -143.8      -34.0
              \[eq:dynrange\_pv\] w. pruning      63.0     36.2       54.9
              \[eq:ldcd\_pv\] w. pruning          64.1     42.6       56.8
  Yahoo! S5   \[eq:dynrange\_pv\]                 50.0      0.3       36.1
              \[eq:ldcd\_pv\]                     50.1      0.4       36.1
              \[eq:dynrange\_pv\] w. pruning      68.2     56.4       63.8
              \[eq:ldcd\_pv\] w. pruning          68.8     56.9       64.3
  ----------- -------------------------------- ------- -------- ----------

  : Final scores of the $27$-NN detector with $19$-dimensional embedding.[]{data-label="tab:knn2719"}
The key observation is that the $k$-NN detector with the \[eq:ldcd\_pv\] confidence scores indeed performs better than the baseline \[eq:dynrange\_pv\] detector. At the same time, the abnormality scores produced by the dynamic-range heuristic are not probabilistic in nature, whereas the conformal confidence scores of the $k$-NN with the LDCD are. The rationale behind this is that conformal scores take into account the full distribution of the calibration set, whereas \[eq:dynrange\_pv\], besides being a simple rescaling, addresses only the extreme values of the scores.
  ----------- -------------------------------- -------- -------- ----------
  Corpus      $p$-value                           LowFN    LowFP   Standard
  ----------- -------------------------------- -------- -------- ----------
  NAB         \[eq:dynrange\_pv\]                -167.0   -658.4     -291.0
              \[eq:ldcd\_pv\]                      62.3     34.8       53.8
              \[eq:dynrange\_pv\] w. pruning       52.2      4.2       39.0
              \[eq:ldcd\_pv\] w. pruning           62.7     30.7       53.5
  Yahoo! S5   \[eq:dynrange\_pv\]                  30.8    -20.7       16.9
              \[eq:ldcd\_pv\]                      47.7     21.5       37.6
              \[eq:dynrange\_pv\] w. pruning       50.6     35.2       44.8
              \[eq:ldcd\_pv\] w. pruning           53.8     36.2       46.9
  ----------- -------------------------------- -------- -------- ----------

  : Final scores of the $1$-NN detector without embedding ($l=1$).[]{data-label="tab:knn0101"}
Tab. \[tab:knn0101\] shows the final scores for the $k$-NN detector with $1$ neighbour and no embedding ($l=1$). The table illustrates that the conformal LDCD procedure works well even without alarm thinning. Heuristically, this can be explained by observing that the LDCD procedure on the $k$-NN with $1$-D embeddings is in fact a sliding-window prototype-based estimate of the support of the distribution. Furthermore, the produced $p$-values are closely related to the probability of an extreme observation relative to the current estimate of the support.
Tables \[tab:yahoo\_lederboard\] and \[tab:numenta\_lederboard\] show the benchmark performance scores for the detectors that competed in the Numenta challenge [@numenta2016].
  Detector                                      LowFN   LowFP   Standard
  -------------------------------------------   -----   -----   --------
  $27$-NN $l=19$ \[eq:ldcd\_pv\] w. pruning      68.8    56.9       64.3
  $1$-NN $l=1$ \[eq:ldcd\_pv\] w. pruning        53.8    36.2       46.9
  relativeEntropy                                52.5    40.7       48.0
  Numenta                                        44.4    37.5       41.0
  Numenta                                        42.5    36.6       39.4
  bayesChangePt                                  43.6    17.6       35.7
  windowedGaussian                               40.7    25.8       31.1
  skyline                                        28.9    18.0       23.6
  Random ($p_t \sim \mathcal{U}[0,1]$)           47.2     1.2       29.9
  Detector                                      LowFN   LowFP   Standard
  -------------------------------------------   -----   -----   --------
  $27$-NN $l=19$ \[eq:ldcd\_pv\] w. pruning      64.1    42.6       56.8
  $1$-NN $l=1$ \[eq:ldcd\_pv\] w. pruning        62.7    30.7       53.5
  Numenta                                        74.3    63.1       70.1
  Numenta                                        69.2    56.7       64.6
  relativeEntropy                                58.8    47.6       54.6
  windowedGaussian                               47.4    20.9       39.6
  skyline                                        44.5    27.1       35.7
  bayesChangePt                                  32.3     3.2       17.7
  Random ($p_t \sim \mathcal{U}[0,1]$)           25.9     5.8       16.8
Conclusion {#sec:conclusion}
==========
In this paper we proposed a conformal $k$-NN anomaly detector for univariate time series, which uses sliding historical windows both to embed the time series into a higher-dimensional space for the $k$-NN and to keep the most relevant observations, thereby explicitly addressing potential quasi-periodicity. The proposed detector was tested using a stringent benchmarking procedure [@lavinetal2015], which mimics the real costs of timely signals, tardy alarms and misdetections. Furthermore, we supplemented the benchmark corpus with the Yahoo! S5 anomaly dataset to cover more use-cases. The results obtained in sec. \[sec:benchmark\_results\] demonstrate that the conformal $k$-NN has an adequate anomaly coverage rate and a low false negative score. The cases when the conformal LDCD scores required a signal pruning step were also the cases when the baseline $k$-NN detector was over-sensitive. Nevertheless, in all cases the conformal abnormality confidence scores improved the benchmark scores.
Numenta held a detector competition in 2016 in which the prototype of the proposed procedure, [@2016arXiv160804585B], took third place [@numenta2016], competing against much more complex methods based on cortical memory, neural networks, etc. The favourable results on the NAB corpus (sec. \[sec:benchmark\_results\]) suggest that the theoretical foundations of the LDCD procedure, specifically the assumptions required for the proper validity guarantee, should be the subject of further research. Besides the validity guarantees, the effects of violations of the iid assumption should be investigated as well, especially since the embedded time-series vectors overlap.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'Effective Poisson-Nernst-Planck (PNP) equations are derived for macroscopic ion transport in charged porous media. Homogenization analysis is performed for a two-component periodic composite consisting of a dilute electrolyte continuum (described by standard PNP equations) and a continuous dielectric matrix, which is impermeable to the ions and carries a given surface charge. Three new features arise in the upscaled equations: (i) the effective ionic diffusivities and mobilities become tensors, related to the microstructure; (ii) the effective permittivity is also a tensor, depending on the electrolyte/matrix permittivity ratio and the ratio of the Debye screening length to mean pore size; and (iii) the surface charge per volume appears as a continuous “background charge density". The coefficient tensors in the macroscopic PNP equations can be calculated from periodic reference cell problem, and several examples are considered. For an insulating solid matrix, all gradients are corrected by a single tortuosity tensor, and the Einstein relation holds at the macroscopic scale, which is not generally the case for a polarizable matrix. In the limit of thin double layers, Poisson’s equation is replaced by macroscopic electroneutrality (balancing ionic and surface charges). The general form of the macroscopic PNP equations may also hold for concentrated solution theories, based on the local-density and mean-field approximations. These results have broad applicability to ion transport in porous electrodes, separators, membranes, ion-exchange resins, soils, porous rocks, and biological tissues.'
author:
- 'Markus Schmuck[^1]'
- 'Martin Z. Bazant [^2]'
bibliography:
- 'porous-extra.bib'
- 'pmSPNP\_Martin2.bib'
title: 'Homogenization of the Poisson-Nernst-Planck equations for ion transport in charged porous media [^3]'
---
diffusion, electromigration, porous media, membranes, Poisson-Nernst-Planck equations, homogenization
Introduction {#sec:Intro}
============
Poisson-Nernst-Planck equations for homogeneous media {#sec:BaGr}
======================================================
Homogenized Poisson-Nernst-Planck equations for porous media {#sec:pmPNP}
============================================================
Optimizing the conductivity for parallel straight channels {#sec:optCond}
==========================================================
Conclusion {#sec:Disc}
===========
[^1]: Departments of Chemical Engineering and Mathematics, Imperial College, London, UK
[^2]: Departments of Chemical Engineering and Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
[^3]: This work was supported by the Swiss National Science Foundation (SNSF) under the grant PBSKP2-12459/1 (MS) and in part by the National Science Foundation under contract DMS-0948071 (MZB).
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'This paper provides a probabilistic algorithm to determine generators of the $m$-torsion subgroup of the Jacobian of a hyperelliptic curve of genus two.'
address: |
Department of Mathematical Sciences\
University of Aarhus\
Ny Munkegade\
Building 1530\
DK-8000 Aarhus C
author:
- Christian Robenhagen Ravnshøj
title: Generators of Jacobians of Hyperelliptic Curves
---
[^1]
Introduction {#sec:intro}
============
Let $C$ be a hyperelliptic curve of genus two defined over a prime field ${\mathbb{F}}_p$, and ${\mathcal{J}_{C}}$ the Jacobian of $C$. Consider the rational subgroup ${\mathcal{J}_{C}}({\mathbb{F}}_p)$. ${\mathcal{J}_{C}}({\mathbb{F}}_p)$ is a finite abelian group, and $${\mathcal{J}_{C}}({\mathbb{F}}_p)\simeq{\mathbb{Z}}/n_1{\mathbb{Z}}\oplus{\mathbb{Z}}/n_2{\mathbb{Z}}\oplus{\mathbb{Z}}/n_3{\mathbb{Z}}\oplus{\mathbb{Z}}/n_4{\mathbb{Z}},$$ where $n_i\mid n_{i+1}$ and $n_2\mid p-1$. [@frey-ruck] shows that if $m\mid p-1$, then the discrete logarithm problem in the rational $m$-torsion subgroup ${\mathcal{J}_{C}}({\mathbb{F}}_p)[m]$ of ${\mathcal{J}_{C}}({\mathbb{F}}_p)$ can be reduced to the corresponding problem in ${\mathbb{F}}_p^\times$ [@frey-ruck corollary 1]. In the proof of this result it is claimed that the non-degeneracy of the Tate pairing can be used to determine whether $r$ random elements of the finite group ${\mathcal{J}_{C}}({\mathbb{F}}_p)[m]$ in fact form an independent set of generators of ${\mathcal{J}_{C}}({\mathbb{F}}_p)[m]$. This paper provides an explicit, probabilistic algorithm to determine generators of ${\mathcal{J}_{C}}({\mathbb{F}}_p)[m]$.
In short, the algorithm outputs elements $\gamma_i$ of the Sylow-$\ell$ subgroup ${\Gamma}_\ell$ of the rational subgroup ${\Gamma}={\mathcal{J}_{C}}({\mathbb{F}}_p)$, such that ${\Gamma}_\ell=\bigoplus_i{\langle \gamma_i \rangle}$ in the following steps:
1. Choose random elements $\gamma_i\in{\Gamma}_\ell$ and $h_j\in{\mathcal{J}_{C}}({\mathbb{F}}_p)$, $i,j\in\{1,\dots,4\}$.\[choose:intro\]
2. Use the non-degeneracy of the tame Tate pairing $\tau$ to *diagonalize* the sets $\{\gamma_i\}_i$ and $\{h_j\}_j$ with respect to $\tau$; i.e. modify the sets such that $\tau(\gamma_i,h_j)=1$ if $i\neq j$ and $\tau(\gamma_i,h_i)$ is an $\ell^{\mathrm{th}}$ root of unity.\[step:diagonaliser\]
3. If $\prod_i|\gamma_i|<|{\Gamma}_\ell|$ then go to step \[choose:intro\].
4. Output the elements $\gamma_1$, $\gamma_2$, $\gamma_3$ and $\gamma_4$.
The key ingredient of the algorithm is the diagonalization in step \[step:diagonaliser\]; this process will be explained in section \[sec:generators\].
We will write ${\langle \gamma_i|i\in I \rangle}={\langle \gamma_i \rangle}_i$ and $\bigoplus_{i\in I}{\langle \gamma_i \rangle}=\bigoplus_i{\langle \gamma_i \rangle}$ if the index set $I$ is clear from the context.
Hyperelliptic curves
====================
A hyperelliptic curve is a smooth, projective curve $C\subseteq{\mathbb{P}}^n$ of genus at least two with a separable, degree two morphism $\phi:C\to{\mathbb{P}}^1$. In the rest of this paper, let $C$ be a hyperelliptic curve of genus two defined over a prime field ${\mathbb{F}}_p$ of characteristic $p>2$. By the Riemann-Roch theorem there exists an embedding $\psi:C\to{\mathbb{P}}^2$, mapping $C$ to a curve given by an equation of the form $$y^2=f(x),$$ where $f\in{\mathbb{F}}_p[x]$ is of degree six and has no multiple roots [see @cassels chapter 1].
The set of principal divisors $\mathcal{P}(C)$ on $C$ constitutes a subgroup of the degree zero divisors $\operatorname{Div}_0(C)$. The Jacobian ${\mathcal{J}_{C}}$ of $C$ is defined as the quotient $${\mathcal{J}_{C}}=\operatorname{Div}_0(C)/\mathcal{P}(C).$$ Consider the subgroup ${\mathcal{J}_{C}}({\mathbb{F}}_p)<{\mathcal{J}_{C}}$ of ${\mathbb{F}}_p$-rational elements. There exist numbers $n_i$, such that $$\label{eq:rank4}
{\mathcal{J}_{C}}({\mathbb{F}}_p)\simeq{\mathbb{Z}}/n_1{\mathbb{Z}}\oplus{\mathbb{Z}}/n_2{\mathbb{Z}}\oplus{\mathbb{Z}}/n_3{\mathbb{Z}}\oplus{\mathbb{Z}}/n_4{\mathbb{Z}},$$ where $n_i\mid n_{i+1}$ and $n_2\mid p-1$ [see @hhec proposition 5.78, p. 111]. We wish to determine generators of the $m$-torsion subgroup ${\mathcal{J}_{C}}({\mathbb{F}}_p)[m]<{\mathcal{J}_{C}}({\mathbb{F}}_p)$, where $m\mid |{\mathcal{J}_{C}}({\mathbb{F}}_p)|$ is the largest number such that $\ell\mid p-1$ for every prime number $\ell\mid m$.
Finite abelian groups
=====================
[@miller] shows the following theorem.
\[teo:ssh-generator\] Let $G$ be a finite abelian group of torsion rank $r$. Then for $s\geq r$ the probability that a random $s$-tuple of elements of $G$ generates $G$ is at least $$\frac{C_r}{\log\log|G|}$$ if $s=r$, and at least $C_s$ if $s>r$, where $C_s>0$ is a constant depending only on $s$ (and not on $|G|$).
[@miller theorem 3, p. 251]
Combining theorem \[teo:ssh-generator\] and equation \[eq:rank4\], we expect to find generators of ${\Gamma}[m]$ by choosing $4$ random elements $\gamma_i\in{\Gamma}[m]$ in approximately $\frac{\log\log |{\Gamma}[m]|}{C_4}$ attempts.
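Theorem \[teo:ssh-generator\] can be checked numerically in a special case. For the elementary abelian group $G=({\mathbb{Z}}/5{\mathbb{Z}})^4$ of torsion rank $4$, the exact probability that four random elements generate $G$ equals the probability that a random $4\times 4$ matrix over ${\mathbb{F}}_5$ is invertible, $\prod_{k=1}^{4}(1-5^{-k})\approx 0.761$. The sketch below is our own toy check and is not part of the algorithm of this paper.

```python
import random

ELL, R = 5, 4   # G = (Z/5Z)^4, torsion rank 4

def rank_mod(vectors, ell):
    # rank of a list of vectors via Gaussian elimination over Z/ell (ell prime)
    m = [list(v) for v in vectors]
    rank, rows, cols = 0, len(vectors), len(vectors[0])
    for col in range(cols):
        piv = next((r for r in range(rank, rows) if m[r][col] % ell), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][col], -1, ell)
        m[rank] = [x * inv % ell for x in m[rank]]
        for r in range(rows):
            if r != rank and m[r][col] % ell:
                f = m[r][col]
                m[r] = [(a - f * b) % ell for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

random.seed(1)
trials = 4000
hits = sum(
    rank_mod([[random.randrange(ELL) for _ in range(R)] for _ in range(R)], ELL) == R
    for _ in range(trials))
estimate = hits / trials

exact = 1.0
for k in range(1, R + 1):
    exact *= 1 - ELL ** -k   # probability a random R-tuple generates (Z/ELL)^R
```

The Monte Carlo estimate agrees with the closed-form product, in line with the constant lower bound of the theorem.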
To determine whether the generators are independent, i.e. if ${\langle \gamma_i \rangle}_i=\bigoplus_i{\langle \gamma_i \rangle}$, we need to know the subgroups of a cyclic $\ell$-group $G$. These are determined uniquely by the order of $G$, since $$\{0\}<{\langle \ell^{n-1}g \rangle}<{\langle \ell^{n-2}g \rangle}<\dots<{\langle \ell g \rangle}<G$$ are the subgroups of the group $G={\langle g \rangle}$ of order $\ell^n$. The following corollary is an immediate consequence of this observation.
\[kor:UGsnit\] Let $U_1$ and $U_2$ be cyclic subgroups of a finite group $G$. Assume $U_1$ and $U_2$ are $\ell$-groups. Let ${\langle u_i \rangle}<U_i$ be the subgroups of order $\ell$. Then $$U_1\cap U_2=\{e\}\Longleftrightarrow{\langle u_1 \rangle}\cap{\langle u_2 \rangle}=\{e\}.$$ Here $e\in G$ is the neutral element.
The tame Tate pairing
=====================
Let ${\Gamma}={\mathcal{J}_{C}}({\mathbb{F}}_p)$ be the rational subgroup of the Jacobian. Consider a number $\lambda\mid\gcd(|{\Gamma}|,p-1)$. Let $g\in{\Gamma}[\lambda]$ and $h=\sum_ia_i P_i\in{\Gamma}$ be divisors with no points in common, and let $$\overline{h}\in{\Gamma}/\lambda{\Gamma}$$ denote the class containing the divisor $h$. Furthermore, let $f\in{\mathbb{F}}_{p}(C)$ be a rational function on $C$ with divisor $\operatorname{div}(f)=\lambda g$. Set $f(h)=\prod_if(P_i)^{a_i}$. Then $$e_\lambda(g,\overline{h})=f(h)$$ is a well-defined pairing ${\Gamma}[\lambda]\times{\Gamma}/\lambda{\Gamma}\longrightarrow{\mathbb{F}}_{p}^\times/({\mathbb{F}}_{p}^\times)^\lambda$, the *Tate pairing*; cf. [@galbraith]. Raising to the power $\frac{p-1}{\lambda}$ gives a well-defined element in the subgroup $\mu_\lambda<{\mathbb{F}}_{p}^\times$ of the $\lambda^{\mathrm{th}}$ roots of unity. This pairing $$\tau_\lambda:{\Gamma}[\lambda]\times{\Gamma}/\lambda{\Gamma}\longrightarrow\mu_\lambda$$ is called the *tame Tate pairing*.
Since the class $\overline{h}$ is represented by the element $h\in{\Gamma}$, we will write $\tau_\lambda(g,h)$ instead of $\tau_\lambda(g,\overline{h})$. Furthermore, we will omit the subscript $\lambda$ and just write $\tau(g,h)$, since the value of $\lambda$ will be clear from the context.
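As a quick sanity check of this construction, the final powering by $(p-1)/\lambda$ can be verified numerically in a toy setting. The sketch below is our own illustration, independent of any curve: it takes $p=11$ and $\lambda=5$ and checks that the map is well defined on ${\mathbb{F}}_{p}^\times/({\mathbb{F}}_{p}^\times)^\lambda$ and lands in the group $\mu_\lambda$ of $\lambda^{\mathrm{th}}$ roots of unity.

```python
p, lam = 11, 5          # lam divides p - 1 = 10
e = (p - 1) // lam      # exponent of the final powering

def tame(x):
    # send a class of F_p^x / (F_p^x)^lam to mu_lam by raising
    # to the power (p - 1)/lam
    return pow(x, e, p)
```

Since $(x\,y^{\lambda})^{(p-1)/\lambda}=x^{(p-1)/\lambda}\,y^{p-1}=x^{(p-1)/\lambda}$, the value does not depend on the chosen representative.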
[@hess] gives a short and elementary proof of the following theorem.
\[teo:tatepairing\] The tame Tate pairing $\tau$ is bilinear and non-degenerate.
\[kor:tatepairing\] For every element $g\in{\Gamma}$ of order $\lambda$ an element $h\in{\Gamma}$ exists, such that $\mu_\lambda={\langle \tau(g,h) \rangle}$.
[@sil corollary 8.1.1., p. 98] gives a similar result for elliptic curves and the Weil pairing. The proof of this result only uses that the pairing is bilinear and non-degenerate. Hence it applies to corollary \[kor:tatepairing\].
In the following we only need the existence of the element $h\in{\Gamma}$, such that $\mu_\lambda={\langle \tau(g,h) \rangle}$; we do not need to find it.
Generators of ${\Gamma}[m]$ {#sec:generators}
===========================
As in the previous section, let ${\Gamma}={\mathcal{J}_{C}}({\mathbb{F}}_p)$ be the rational subgroup of the Jacobian. We are searching for elements $\gamma_i\in{\Gamma}[m]$ such that ${\Gamma}[m]=\bigoplus_i{\langle \gamma_i \rangle}$. As an abelian group, ${\Gamma}[m]$ is the direct sum of its Sylow subgroups. Hence, we only need to find generators of the Sylow subgroups of ${\Gamma}[m]$.
Set $N=|{\Gamma}|$ and let $\ell\mid\gcd(N,p-1)$ be a prime number. Choose four random elements $\gamma_i\in{\Gamma}$. Let ${\Gamma}_\ell<{\Gamma}$ be the Sylow-$\ell$ subgroup of ${\Gamma}$, and set $N_\ell=|{\Gamma}_\ell|$. Then $\frac{N}{N_\ell}\gamma_i\in {\Gamma}_\ell$. Hence, we may assume that $\gamma_i\in {\Gamma}_\ell$. If all the elements $\gamma_i$ are equal to zero, then we choose other elements $\gamma_i\in{\Gamma}$. Hence, we may assume that some of the elements $\gamma_i$ are non-zero.
Let $|\gamma_i|=\lambda_i$, and re-enumerate the $\gamma_i$’s such that $\lambda_i\leq\lambda_{i+1}$. Since some of the $\gamma_i$’s are non-zero, we may choose an index $\nu\leq 4$, such that $\lambda_\nu\neq 1$ and $\lambda_i=1$ for $i<\nu$. Choose $\lambda_0$ minimal such that $\lambda=\frac{\lambda_\nu}{\lambda_0}\mid p-1$. Then ${\mathbb{F}}_p$ contains an element $\zeta$ of order $\lambda$. Now set $g_i=\frac{\lambda_i}{\lambda}\gamma_i$, $\nu\leq i\leq 4$. Then $g_i\in{\Gamma}[\lambda]$, $\nu\leq i\leq 4$. Finally, choose four random elements $h_i\in{\Gamma}$.
Let $$\tau:{\Gamma}[\lambda]\times{\Gamma}/\lambda{\Gamma}\longrightarrow{\langle \zeta \rangle}$$ be the tame Tate pairing. Define remainders $\alpha_{ij}$ modulo $\lambda$ by $$\tau(g_i,h_j)=\zeta^{\alpha_{ij}}.$$ By corollary \[kor:tatepairing\], for any of the elements $g_i$ we can choose an element $h\in{\Gamma}$, such that $|\tau(g_i,h)|=\lambda$. Assume that ${\Gamma}/\lambda{\Gamma}={\langle \overline{h}_1,\overline{h}_2,\overline{h}_3,\overline{h}_4 \rangle}$. Then $\overline{h}=\sum_iq_i\overline{h}_i$, and so $$\tau(g_i,h)=\zeta^{\alpha_{i1}q_1+\alpha_{i2}q_2+\alpha_{i3}q_3+\alpha_{i4}q_4}.$$ If $\alpha_{ij}\equiv 0\pmod{\ell}$, $1\leq j\leq 4$, then $|\tau(g_i,h)|<\lambda$. Hence, if ${\Gamma}/\lambda{\Gamma}={\langle \overline{h}_1,\overline{h}_2,\overline{h}_3,\overline{h}_4 \rangle}$, then for all $i\in\{\nu,\dots,4\}$ we can choose a $j\in\{1,\dots,4\}$, such that $\alpha_{ij}\not\equiv 0\pmod{\ell}$.
Enumerate the $h_i$ such that $\alpha_{44}\not\equiv 0\pmod{\ell}$. Now assume a number $j<4$ exists, such that $\alpha_{4j}\not\equiv 0\pmod{\lambda}$. Then $\zeta^{\alpha_{4j}}=\zeta^{\beta_1\alpha_{44}}$, and replacing $h_j$ with $h_j-\beta_1h_4$ gives $\alpha_{4j}\equiv 0\pmod{\lambda}$. So we may assume that $$\alpha_{41}\equiv\alpha_{42}\equiv\alpha_{43}\equiv 0\pmod{\lambda}\qquad\textrm{and}\qquad\alpha_{44}\not\equiv 0\pmod{\ell}.$$ Assume similarly that a number $j<4$ exists, such that $\alpha_{j4}\not\equiv 0\pmod{\lambda}$. Now set $\beta_2\equiv\alpha_{44}^{-1}\alpha_{j4}\pmod{\lambda}$. Then $\tau(g_j-\beta_2g_4,h_4)=1$. So we may also assume that $$\alpha_{14}\equiv\alpha_{24}\equiv\alpha_{34}\equiv 0\pmod{\lambda}.$$ Repeating this process recursively, we may assume that $$\alpha_{ij}\equiv 0\pmod{\lambda}\qquad\textrm{and}\qquad\alpha_{44}\not\equiv 0\pmod{\ell}.$$ Again $\nu\leq i\leq 4$ and $1\leq j\leq 4$.
The discussion above is formalized in the following algorithm.
\[alg:1\] As input we are given a hyperelliptic curve $C$ of genus two defined over a prime field ${\mathbb{F}}_p$, the number $N=|{\Gamma}|$ of ${\mathbb{F}}_p$-rational elements of the Jacobian, and a prime factor $\ell\mid\gcd(N,p-1)$. The algorithm outputs elements $\gamma_i\in {\Gamma}_\ell$ of the Sylow-$\ell$ subgroup ${\Gamma}_\ell$ of ${\Gamma}$, such that ${\langle \gamma_i \rangle}_i=\bigoplus_i{\langle \gamma_i \rangle}$ in the following steps.
1. Compute the order $N_\ell$ of the Sylow-$\ell$ subgroup of ${\Gamma}$.
2. Choose elements $\gamma_i\in{\Gamma}$, $i\in I:=\{1,2,3,4\}$. Set $\gamma_i:=\frac{N}{N_\ell}\gamma_i$. \[step:choose-x\]
3. Choose elements $h_j\in{\Gamma}$, $j\in J:=\{1,2,3,4\}$. \[step:choose-h\]
4. Set $K:=\{1,2,3,4\}$.
5. For $k'$ from $0$ to $3$ do the following:
1. Set $k:=4-k'$.
2. If $\gamma_i=0$, then set $I:=I\setminus\{i\}$. If $|I|=0$, then go to step \[step:choose-x\].
3. Compute the orders $\lambda_\kappa:=|\gamma_\kappa|$, $\kappa\in K$. Re-enumerate the $\gamma_\kappa$’s such that $\lambda_\kappa\leq \lambda_{\kappa+1}$, $\kappa\in K$. Set $I:=\{5-|I|,6-|I|,\dots,4\}$.
4. Set $\nu:=\min(I)$, and choose $\lambda_0$ minimal such that $\lambda:=\frac{\lambda_\nu}{\lambda_0}\mid p-1$. Set $g_\kappa:=\frac{\lambda_\kappa}{\lambda}\gamma_\kappa$, $\kappa\in I\cap K$.
1. If $g_k=0$, then go to step \[step:last\].
2. If $\tau(g_k,h_j)^{\lambda/\ell}=1$ for all $j\leq k$, then go to step \[step:choose-h\].
5. Choose a primitive $\lambda^{\mathrm{th}}$ root of unity $\zeta\in{\mathbb{F}}_p$. Compute $\alpha_{kj}$ and $\alpha_{\kappa k}$ from $\tau(g_k,h_j)=\zeta^{\alpha_{kj}}$ and $\tau(g_\kappa,h_k)=\zeta^{\alpha_{\kappa k}}$, $1\leq j<k$, $\kappa\in I\cap K$. Re-enumerate $h_1,\dots,h_k$ such that $\alpha_{kk}\not\equiv 0\pmod{\ell}$.
6. For $1\leq j<k$, set $\beta\equiv\alpha_{kk}^{-1}\alpha_{kj}\pmod{\lambda}$ and $h_j:=h_j-\beta h_k$.
7. For $\kappa\in I\cap K\setminus\{k\}$, set $\beta\equiv\alpha_{kk}^{-1}\alpha_{\kappa k}\pmod{\lambda}$ and $\gamma_\kappa:=\gamma_\kappa-\beta\frac{\lambda_k}{\lambda_\kappa}\gamma_k$.
8. Set $K:=K\setminus\{k\}$.
6. Output $\gamma_1$, $\gamma_2$, $\gamma_3$ and $\gamma_4$.\[step:last\]
\[rem:runningtime\] Algorithm \[alg:1\] consists of a small number of
1. calculations of orders of elements $\gamma\in{\Gamma}_\ell$,\[item:orden\]
2. multiplications of elements $\gamma\in{\Gamma}$ with numbers $a\in{\mathbb{Z}}$,\[item:gange\]
3. additions of elements $\gamma_1,\gamma_2\in{\Gamma}$,\[item:addition\]
4. evaluations of pairings of elements $\gamma_1,\gamma_2\in{\Gamma}$ and\[item:parring\]
5. solving the discrete logarithm problem in ${\mathbb{F}}_p$, i.e. to determine $\alpha$ from $\zeta$ and $\xi=\zeta^\alpha$.\[item:DL\]
By [@miller proposition 9], the order $|\gamma|$ of an element $\gamma\in{\Gamma}_\ell$ can be calculated in time $O(\log^3 N_\ell)\mathcal{A}_{\Gamma}$, where $\mathcal{A}_{\Gamma}$ is the time for adding two elements of ${\Gamma}$. A multiple $a\gamma$ or a sum $\gamma_1+\gamma_2$ is computed in time $O(\mathcal{A}_{\Gamma})$. By [@frey-ruck], the pairing $\tau(\gamma_1,\gamma_2)$ of two elements $\gamma_1,\gamma_2\in{\Gamma}$ can be evaluated in time $O(\log N_\ell)$. Finally, by [@pohlig-hellmann] the discrete logarithm problem in ${\mathbb{F}}_p$ can be solved in time $O(\log p)$. We may assume that addition in ${\Gamma}$ is easy, i.e. that $\mathcal{A}_{\Gamma}<O(\log p)$. Hence algorithm \[alg:1\] runs in expected time $O(\log p)$.
Careful examination of algorithm \[alg:1\] gives the following lemma.
\[lem:diagonal\] Let ${\Gamma}_\ell$ be the Sylow-$\ell$ subgroup of ${\Gamma}$, $\ell\mid p-1$. Algorithm \[alg:1\] determines elements $\gamma_i\in {\Gamma}_\ell$ and $h_i\in{\Gamma}$, $1\leq i\leq 4$, such that one of the following cases holds.
1. $\alpha_{11}\alpha_{22}\alpha_{33}\alpha_{44}\not\equiv 0\pmod{\ell}$ and $\alpha_{ij}\equiv 0\pmod{\lambda}$, $i\neq
j$, $i,j\in\{1,2,3,4\}$.\[case:dia1\]
2. $\gamma_1=0$, $\alpha_{22}\alpha_{33}\alpha_{44}\not\equiv 0\pmod{\ell}$ and $\alpha_{ij}\equiv 0\pmod{\lambda}$, $i\neq
j$, $i,j\in\{2,3,4\}$.
3. $\gamma_1=\gamma_2=0$, $\alpha_{33}\alpha_{44}\not\equiv 0\pmod{\ell}$ and $\alpha_{ij}\equiv 0\pmod{\lambda}$, $i\neq
j$, $i,j\in\{3,4\}$.
4. $\gamma_1=\gamma_2=\gamma_3=0$.
If $|\gamma_i|=\lambda_i$, then $\lambda_i\leq \lambda_{i+1}$. Set $\nu=\min\{i|\lambda_i\neq 1\}$, and define $\lambda_0$ as the least number, such that $\lambda=\frac{\lambda_\nu}{\lambda_0}\mid p-1$. Set $g_i=\frac{\lambda_i}{\lambda}\gamma_i$, $\nu\leq i\leq 4$. Then the numbers $\alpha_{ij}$ above are determined by $$\tau(g_i,h_j)=\zeta^{\alpha_{ij}},$$ where $\tau$ is the tame Tate pairing ${\Gamma}[\lambda]\times{\Gamma}/\lambda{\Gamma}\to\mu_\lambda={\langle \zeta \rangle}$.
\[teo:p-1\] Algorithm \[alg:1\] determines elements $\gamma_1$, $\gamma_2$, $\gamma_3$ and $\gamma_4$ of the Sylow-$\ell$ subgroup of ${\Gamma}$, $\ell\mid p-1$, such that ${\langle \gamma_i \rangle}_i=\bigoplus_i{\langle \gamma_i \rangle}$.
Choose elements $\gamma_i,h_i\in{\Gamma}$ such that the conditions of lemma \[lem:diagonal\] are fulfilled. Set $\lambda_i=|\gamma_i|$, and let $\nu=\min\{i|\lambda_i\neq 1\}$. Define $\lambda_0$ as the least number, such that $\lambda=\frac{\lambda_\nu}{\lambda_0}\mid p-1$. Set $g_i=\frac{\lambda_i}{\lambda}\gamma_i$. Then the $\alpha_{ij}$’s from lemma \[lem:diagonal\] are determined by $$\tau(g_i,h_j)=\zeta^{\alpha_{ij}}.$$ We only consider case \[case:dia1\] of lemma \[lem:diagonal\], since the other cases follow similarly. We start by determining ${\langle \gamma_3 \rangle}\cap{\langle \gamma_4 \rangle}$. Assume that $g_3=ag_4$. Then $$1=\tau(g_3,h_4)=\tau(ag_4,h_4)=\zeta^{a\alpha_{44}},$$ i.e. $a\equiv 0\pmod{\lambda}$. Hence ${\langle \gamma_3 \rangle}\cap{\langle \gamma_4 \rangle}=\{0\}$. Then we determine ${\langle \gamma_2 \rangle}\cap{\langle \gamma_3,\gamma_4 \rangle}$. Assume $g_2=ag_3+bg_4$. Then $$1=\tau(g_2,h_3)=\tau(ag_3,h_3)=\zeta^{a\alpha_{33}},$$ i.e. $a\equiv 0\pmod{\lambda}$. In the same way, $$1=\tau(g_2,h_4)=\zeta^{b\alpha_{44}},$$ i.e. $b\equiv 0\pmod{\lambda}$. Hence ${\langle \gamma_2 \rangle}\cap{\langle \gamma_3,\gamma_4 \rangle}=\{0\}$. Similarly ${\langle \gamma_1 \rangle}\cap{\langle \gamma_2,\gamma_3,\gamma_4 \rangle}=\{0\}$. Hence ${\langle \gamma_i \rangle}_i=\bigoplus_i{\langle \gamma_i \rangle}$.
From theorem \[teo:p-1\] we get the following probabilistic algorithm to determine generators of the $m$-torsion subgroup ${\Gamma}[m]<{\Gamma}$, where $m\mid |{\Gamma}|$ is the largest divisor of $|{\Gamma}|$ such that $\ell\mid p-1$ for every prime number $\ell\mid m$.
\[alg:1a\] As input we are given a hyperelliptic curve $C$ of genus two defined over a prime field ${\mathbb{F}}_p$, the number $N=|{\Gamma}|$ of ${\mathbb{F}}_p$-rational elements of the Jacobian, and the prime factors $p_1,\dots,p_n$ of $\gcd(N,p-1)$. The algorithm outputs elements $\gamma_i\in{\Gamma}[m]$ such that ${\Gamma}[m]=\bigoplus_i{\langle \gamma_i \rangle}$ in the following steps.
1. Set $\gamma_i:=0$, $1\leq i\leq 4$. For $\ell\in\{p_1,\dots,p_n\}$ do the following:
1. Use algorithm \[alg:1\] to determine elements $\tilde\gamma_i\in{\Gamma}_\ell$, $1\leq i\leq 4$, such that ${\langle \tilde\gamma_i \rangle}_i=\bigoplus_i{\langle \tilde\gamma_i \rangle}$.\[step:kandidater\]
2. If $\prod_i|\tilde\gamma_i|<|{\Gamma}_\ell|$, then go to step \[step:kandidater\].
3. Set $\gamma_i:=\gamma_i+\tilde\gamma_i$, $1\leq i\leq 4$.
2. Output $\gamma_1$, $\gamma_2$, $\gamma_3$ and $\gamma_4$.
By remark \[rem:runningtime\], algorithm \[alg:1a\] has expected running time $O(\log p)$. Hence algorithm \[alg:1a\] is an efficient, probabilistic algorithm to determine generators of the $m$-torsion subgroup ${\Gamma}[m]<{\Gamma}$, where $m\mid |{\Gamma}|$ is the largest divisor of $|{\Gamma}|$ such that $\ell\mid p-1$ for every prime number $\ell\mid m$.
The strategy of algorithm \[alg:1\] can be applied to *any* finite, abelian group ${\Gamma}$ with bilinear, non-degenerate pairings into cyclic groups. For the strategy to be efficient, the pairings must be efficiently computable, and the discrete logarithm problem in the cyclic groups must be easy.
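As an illustration of this remark, the sketch below runs the diagonalization of algorithm \[alg:1\] in the toy group $\Gamma=({\mathbb{Z}}/5{\mathbb{Z}})^2$ with the non-degenerate pairing $\tau(g,h)=\zeta^{g\cdot h}$, where $\zeta=3$ has order $5$ in ${\mathbb{F}}_{11}^\times$; the discrete logarithms are found by brute force since $\lambda$ is tiny. The choice of group and all names are ours, for illustration only.

```python
ELL, P, ZETA = 5, 11, 3          # ZETA has order ELL in F_P^x

def tau(g, h):
    # bilinear, non-degenerate pairing on (Z/ELL)^2 x (Z/ELL)^2
    return pow(ZETA, sum(a * b for a, b in zip(g, h)) % ELL, P)

def dlog(x):
    # brute-force discrete logarithm base ZETA (cheap because ELL is tiny)
    return next(a for a in range(ELL) if pow(ZETA, a, P) == x)

def sub(u, v, beta):
    # u - beta * v, componentwise modulo ELL
    return tuple((a - beta * b) % ELL for a, b in zip(u, v))

def diagonalize(gs, hs):
    gs, hs = list(gs), list(hs)
    for k in reversed(range(len(gs))):
        # find a pivot h_j with tau(g_k, h_j) != 1 and move it to slot k
        piv = next((j for j in range(k + 1) if tau(gs[k], hs[j]) != 1), None)
        if piv is None:
            gs[k] = (0,) * len(gs[k])    # g_k pairs trivially: drop it
            continue
        hs[k], hs[piv] = hs[piv], hs[k]
        inv = pow(dlog(tau(gs[k], hs[k])), -1, ELL)
        for j in range(k):               # clear the rest of row k ...
            hs[j] = sub(hs[j], hs[k], inv * dlog(tau(gs[k], hs[j])) % ELL)
        for i in range(k):               # ... and of column k
            gs[i] = sub(gs[i], gs[k], inv * dlog(tau(gs[i], hs[k])) % ELL)
    return gs, hs
```

On the Jacobian itself the same elimination runs with the tame Tate pairing, and the discrete logarithms are solved as in algorithm \[alg:1\].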
[99]{}
<span style="font-variant:small-caps;">J.W.S. Cassels and E.V. Flynn</span>. *Prolegomena to a Middlebrow Arithmetic of Curves of Genus $2$*. London Mathematical Society Lecture Note Series. Cambridge University Press, 1996.

<span style="font-variant:small-caps;">G. Frey and T. Lange</span>. Varieties over Special Fields. In H. Cohen and G. Frey, editors, *Handbook of Elliptic and Hyperelliptic Curve Cryptography*, pp. 87–113. Chapman & Hall/CRC, 2006.

<span style="font-variant:small-caps;">G. Frey and H.-G. R[ü]{}ck</span>. A remark concerning $m$-divisibility and the discrete logarithm in the divisor class group of curves. *Math. Comp.*, vol. 62, pp. 865–874, 1994.

<span style="font-variant:small-caps;">S. Galbraith</span>. Pairings. In I.F. Blake, G. Seroussi and N.P. Smart, editors, *Advances in Elliptic Curve Cryptography*. London Mathematical Society Lecture Note Series, vol. 317, pp. 183–213. Cambridge University Press, 2005.

<span style="font-variant:small-caps;">F. Hess</span>. A note on the Tate pairing of curves over finite fields. *Arch. Math.*, no. 82, pp. 28–32, 2004.

<span style="font-variant:small-caps;">V.S. Miller</span>. The Weil Pairing and Its Efficient Calculation. *J. Cryptology*, no. 17, pp. 235–261, 2004.

<span style="font-variant:small-caps;">S. Pohlig and M. Hellmann</span>. An improved algorithm for computing logarithms over $GF(p)$ and its cryptographic significance. *IEEE Trans. Inform. Theory*, vol. 24, pp. 106–110, 1978.

<span style="font-variant:small-caps;">J.H. Silverman</span>. *The Arithmetic of Elliptic Curves*. Springer, 1986.
[^1]: Research supported in part by a Ph.D. grant from CRYPTOMAThIC
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'The one-pion exchange current corrections to isoscalar and isovector magnetic moments of double-closed shell nuclei plus and minus one nucleon with $A=15,17,39$ and $41$ have been studied in the relativistic mean field (RMF) theory and compared with previous relativistic and non-relativistic results. It has been found that the one-pion exchange current gives a negligible contribution to the isoscalar magnetic moments but a significant correction to the isovector ones. However, the one-pion exchange current doesn’t improve the description of nuclear isovector magnetic moments for the concerned nuclei.'
author:
- Jian Li
- 'J. M. Yao'
- 'J. Meng'
- 'A. Arima'
title: 'One-pion exchange current corrections for nuclear magnetic moments in relativistic mean field theory'
---
Nuclear magnetic moment is one of the most important physical observables. It provides a highly sensitive probe of the single-particle structure, serves as a stringent test of nuclear models, and has attracted the attention of nuclear physicists since the early days [@Blin-Stoyle1956; @Arima1984].
The static magnetic dipole moments of the ground states and excited states of many atomic nuclei have already been measured with several methods [@Stone2005]. With the development of the radioactive ion beam (RIB) technique, it is now even possible to measure the nuclear magnetic moments of many short-lived nuclei near the proton and neutron drip lines with very high precision [@Neyens2003; @Yordanov2007; @Tripathi2008].
The theoretical description of nuclear magnetic moments is a long-standing problem. Over the last decades, many successful nuclear structure models have been built up. However, the application of these models to nuclear magnetic moments is still not satisfactory.
The Schmidt values predicted by the extreme single-particle shell model qualitatively succeeded in explaining the magnetic moments of odd-$A$ nuclei near double-closed shells. Later on, the magnetic moments of nuclei farther away from closed shells were found to be sandwiched by the Schmidt lines. Therefore, many efforts have been made to explain the deviations of the nuclear magnetic moments from the Schmidt values. In the shell model, the first-order configuration mixing (core polarization) [@Arima1954], i.e., the single-particle state coupled to more complicated $2p-1h$ configurations, and the second-order core polarization as well as the meson exchange current (MEC) [@Chemtob1969; @Shimizu1974; @Towner1983] are taken into account to explain the deviations.
The magnetic moments of $LS$ closed-shell nuclei plus or minus one nucleon are of particular importance: in these nuclei there are no spin-orbit partners on either side of the Fermi surface, and therefore all first-order core polarization corrections vanish. It has been shown in non-relativistic calculations that the second-order core polarization effect dominates the deviations of the isoscalar magnetic moments and also gives large corrections to the isovector magnetic moments [@Towner1987; @Arima1987]. The MEC effect, due to its isovector nature, gives rather small corrections to the isoscalar magnetic moments while giving important corrections to the isovector magnetic moments [@Towner1983; @Ichii1987]. As a result, the calculated corrections to the isoscalar magnetic moments are in reasonable agreement with the data, and the net effect of the second-order core polarization and the MEC gives the right sign for the correction to the Schmidt isovector magnetic moments [@Towner1987; @Arima1987].
In the past decades, the RMF theory, which takes the spin-orbit coupling into account naturally, has been successfully applied to the analysis of nuclear structure over the whole periodic table, from light to superheavy nuclei, with a few universal parameters [@Ring1996; @Vretenar2005; @Meng2006]. However, a straightforward application of the single-particle relativistic model, where only the sigma meson and the time-like component of the vector mesons were considered, cannot reproduce the experimental magnetic moments [@Miller1975; @Serot1981]. This is because the reduced Dirac effective nucleon mass ($M^*\sim0.6M$) enhances the relativistic effect on the electromagnetic current [@McNeil1986]. After the introduction of vertex corrections to define effective single-particle currents in nuclei, e.g., the “back-flow” effect in the framework of the relativistic extension of Landau’s Fermi-liquid theory [@McNeil1986], the random phase approximation (RPA) type summation of p-h and p-$\bar{n}$ bubbles in the relativistic Hartree approximation [@Ichii1987a; @Shepard1988], or the consideration of non-zero space-like components of the vector mesons in the self-consistent deformed RMF theory [@Hofmann1988; @Furnstahl1989; @Yao2006], the isoscalar magnetic moment can be reproduced quite well. Unfortunately, these effects cannot remove the discrepancy in the isovector magnetic moments. To eliminate this discrepancy, the MEC corrections were investigated in the linear RMF theory in Ref. [@Morse1990]; they were found to be significant but to enlarge the disagreement with the data.
In view of these facts, it is essential to investigate the nuclear magnetic moments in the RMF theory with modern effective interactions. In this work, the isoscalar and isovector magnetic moments of light odd-mass nuclei near the double-closed shells will be studied in axially deformed RMF theory with the consideration of non-zero space-like components of vector meson. In particular, the one-pion exchange current corrections to nuclear magnetic moments will be examined.
The starting point of the RMF theory is the standard effective Lagrangian density constructed with the degrees of freedom associated with nucleon field ($\psi$), two isoscalar meson fields ($\sigma$ and $\omega_\mu$), isovector meson field ($\vec\rho_\mu$) and photon field ($A_\mu$). The equation of motion for a single-nucleon orbit $\psi_i(\bm{r})$ reads, $$\label{DiracEq}
\{\mathbf{\alpha}\cdot[\bm{p}- \bm{V} (\bm{r})]
+\beta M^*(\bm{r})+V_0(\bm{r})\}\psi_i(\bm{r})
=\epsilon_i\psi_i(\bm{r}),$$ where $M^*(\bm{r})$ is defined as $M^*(\bm{r})\equiv
M+g_\sigma\sigma(\bm{r})$, with $M$ referring to the mass of bare nucleon. The repulsive vector potential is $\displaystyle
V_0(\bm{r})=g_\omega\omega_0(\bm{r})+g_\rho\tau_3\rho_0(\bm{r})+e\frac{1-\tau_3}{2}A_0(\bm{r})$, where $g_i\,(i=\sigma,\omega,\rho)$ are the coupling strengths of the nucleon with the mesons. The time-odd fields $\bm{V}(\bm{r})$ are naturally given by the space-like components of the vector fields, $\mbox{\boldmath$V$}({\mbox{\boldmath$r$}}) = g_{\omega}\bm{\omega} ({{\mbox{\boldmath$r$}}})$, where the space-like components of the $\rho$-meson field $\bm{\rho}({\mbox{\boldmath$r$}})$ and the Coulomb field $\mbox{\boldmath$A$}({\mbox{\boldmath$r$}})$ are neglected since they turn out to be small compared with the $\bm{\omega}({\mbox{\boldmath$r$}})$ field in light nuclei [@Hofmann1988].
The non-vanishing time-odd fields in Eq.(\[DiracEq\]) give rise to a splitting between the pairwise time-reversal states $\psi_{i}$ and $\psi_{\overline i}(\equiv\hat T\psi_{i})$, where $\hat T$ is the time-reversal operator, and also to a non-vanishing current in the core. Each Dirac spinor $\psi_i(\bm{r})$ and the meson fields are expanded in a set of isotropic harmonic oscillator basis functions in cylindrical coordinates with 16 major shells [@Gambhir1990; @Ring1997]. The pairing correlations for these double-closed shell nuclei plus or minus one nucleon are neglected due to the quenching effect of the unpaired valence nucleon. More details about solving the Dirac equation with time-odd fields can be found in Refs. [@Li2009; @Li2009a].
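The basis-expansion strategy can be illustrated in a much simpler setting. The sketch below is not the RMF solver itself: it diagonalizes a hypothetical one-dimensional Hamiltonian, the displaced oscillator $H = p^2/2 + (x-a)^2/2$, in a truncated harmonic oscillator basis, for which the exact eigenvalues $n+1/2$ are known for every shift $a$, so the truncation error is directly visible:

```python
import numpy as np

# Illustration of the basis-expansion strategy in a much simpler setting
# (a hypothetical 1D problem, not the RMF solver): diagonalize the displaced
# oscillator H = p^2/2 + (x - a)^2/2 in a truncated harmonic oscillator
# basis; the exact eigenvalues are n + 1/2 for any shift a.
N, a = 60, 1.0
n = np.arange(N)
H0 = np.diag(n + 0.5)                        # p^2/2 + x^2/2 (hbar*omega = 1)
x = np.zeros((N, N))
x[n[:-1], n[:-1] + 1] = np.sqrt((n[:-1] + 1) / 2.0)
x += x.T                                     # position operator in the HO basis
H = H0 - a * x + 0.5 * a**2 * np.eye(N)      # (x - a)^2/2 = x^2/2 - a x + a^2/2
evals = np.linalg.eigvalsh(H)
print(evals[:4])                             # ~[0.5, 1.5, 2.5, 3.5]
```

With $N=60$ basis states the low-lying spectrum is reproduced essentially to machine precision, which illustrates why a moderate number of major shells suffices in practice.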
The effective electromagnetic current operator used to describe the nuclear magnetic moment is given by [@Furnstahl1989; @Yao2006] $$\hat{J}^\mu(x) =
\bar{\psi}(x)\gamma^\mu\frac{1-\tau_3}{2}\psi(x)+\frac{\kappa}{2M}\partial_\nu
[\bar{\psi}(x)\sigma^{\mu\nu}\psi(x)],$$ where $\sigma^{\mu\nu}=\frac{i} {2} [\gamma^\mu,\gamma^\nu]$, and $\kappa$ is the free anomalous gyromagnetic ratio of the nucleon, $\kappa_p=1.793$ and $\kappa_n=-1.913$. The nuclear dipole magnetic moment is determined by $$\begin{aligned}
\label{magnetic-moment}
\mbox{\boldmath{$\mu$}}
&=& \frac{1}{2}\int d^3r
{\mbox{\boldmath$r$}}\times\langle g.s. \vert \hat{\mbox{\boldmath$j$}}({\mbox{\boldmath$r$}})\vert g.s. \rangle,\end{aligned}$$ where $\hat{\mbox{\boldmath$j$}}({\mbox{\boldmath$r$}})$ is the operator of space-like components of the effective electromagnetic current.
In addition, for isovector magnetic moment, the one-pion exchange current correction should be taken into account. Although there is no explicit pion meson in the RMF theory, it is possible to study the MEC corrections due to the virtual pion exchange between two nucleons, which is given by the Feynman diagrams in Fig. \[fig:1\].
![\[fig:1\] Diagrams of the one-pion exchange current: seagull (left) and in-flight (right).](fig1.eps){width="10cm"}
The one-pion exchange current contributions to magnetic moments can thus be obtained as [@Morse1990], $$\begin{aligned}
\label{magnetic moment-MEC}
\mbox{\boldmath{$\mu$}}_{\mathrm{MEC}}
&=& \frac{1}{2}\int d {\mbox{\boldmath$r$}}\,{\mbox{\boldmath$r$}}\times
\langle g.s.|\hat{\mbox{\boldmath$j$}}^{\mathrm{seagull}}({\mbox{\boldmath$r$}})
+\hat{\mbox{\boldmath$j$}}^{\mathrm{in\mbox{-}flight}}({\mbox{\boldmath$r$}})|g.s.\rangle,\end{aligned}$$ where the corresponding one-pion exchange currents $\hat{\mbox{\boldmath$j$}}^{\mathrm{seagull}}({\mbox{\boldmath$r$}})$ and $\hat{\mbox{\boldmath$j$}}^{\mathrm{in\mbox{-}flight}}({\mbox{\boldmath$r$}})$ are respectively, $$\begin{aligned}
\hat{\mbox{\boldmath$j$}}^{\mathrm{seagull}}({\mbox{\boldmath$r$}})
&=&-\frac{8ef^2_{\pi}M}{m^2_\pi} \int d {{\mbox{\boldmath$x$}}}\,
\bar{\psi}_p({{\mbox{\boldmath$r$}}}) {\bm\gamma}\gamma_5\psi_n({{\mbox{\boldmath$r$}}})
D_\pi({{\mbox{\boldmath$r$}}},{{\mbox{\boldmath$x$}}})
\bar{\psi}_n({{\mbox{\boldmath$x$}}})\frac{M^*}{M}\gamma_5\psi_p({{\mbox{\boldmath$x$}}}),\\
\hat{\mbox{\boldmath$j$}}^{\mathrm{in\mbox{-}flight}}({\mbox{\boldmath$r$}})
&=&-\frac{16ief^2_{\pi}M^2}{m_\pi^2} \int d{{\mbox{\boldmath$x$}}} d{{\mbox{\boldmath$y$}}} \bar{\psi}_p({{\mbox{\boldmath$x$}}})\frac{M^*}{M}\gamma_5\psi_n({{\mbox{\boldmath$x$}}})
D_\pi({{\mbox{\boldmath$x$}}},{{\mbox{\boldmath$r$}}}){\bm\nabla}_{{{\mbox{\boldmath$r$}}}}
D_\pi({{\mbox{\boldmath$r$}}},{{\mbox{\boldmath$y$}}})\bar{\psi}_n({{\mbox{\boldmath$y$}}})\frac{M^*}{M}\gamma_5\psi_p({{\mbox{\boldmath$y$}}}),\nonumber\\\end{aligned}$$ with the $\pi$-nucleon coupling constant $f_\pi=1$ and the pion mass $m_\pi=138$ MeV. The pion propagator in r-space is given by $D_\pi({{\mbox{\boldmath$x$}}},{{\mbox{\boldmath$r$}}})=\dfrac{1}{4\pi}\dfrac{e^{-m_\pi|{{\mbox{\boldmath$x$}}}-{{\mbox{\boldmath$r$}}}|}} {|{{\mbox{\boldmath$x$}}}-{{\mbox{\boldmath$r$}}}|}$.
The magnetic moments of double-closed shell nuclei plus or minus one nucleon with $A=15, 17, 39$ and 41 are studied in the RMF theory using PK1 effective interaction [@Long2004], which includes the self-couplings of $\sigma$ and $\omega$ mesons.
The magnetic moments in Eq.(\[magnetic-moment\]) are calculated using the Dirac spinors $\psi_i$ from the axially deformed RMF calculations with the space-like components of the vector meson fields. Owing to the small deformation of these nuclei, the one-pion exchange current contributions to the magnetic moments in Eq.(\[magnetic moment-MEC\]) are calculated using the spherical Dirac spinors of the corresponding double-closed shell nucleus, as done in Ref. [@Morse1990].
  ---- ---------------- -------------- --------------- ------------ -------------- -----------
   A    [@Chemtob1969]   [@Hyuga1980]   [@Towner1983]   [@Ito1987]   [@Morse1990]   This work
  ---- ---------------- -------------- --------------- ------------ -------------- -----------
   15       0.127            0.116          0.092          0.111         0.102         0.091
   17       0.084            0.093          0.065          0.092         0.151         0.092
   39       0.204            0.199          0.149          0.184         0.174         0.190
   41       0.195            0.201          0.115          0.180         0.270         0.184
  ---- ---------------- -------------- --------------- ------------ -------------- -----------
: \[tab:mec\]The one-pion exchange current corrections to the isovector magnetic moments obtained from RMF calculations using PK1 effective interaction, in comparison with the Linear RMF [@Morse1990] and non-relativistic results [@Chemtob1969; @Hyuga1980; @Towner1983; @Ito1987] (see text for details).
The one-pion exchange current corrections to the isovector magnetic moments obtained from RMF calculations using PK1 are compared in Table \[tab:mec\] with linear RMF calculations [@Morse1990] using L3 [@Lee1986] and non-relativistic calculations [@Chemtob1969; @Hyuga1980; @Towner1983; @Ito1987]. It is shown that the obtained corrections to the isovector magnetic moments in this work are in reasonable agreement with other calculations. As noted in Ref. [@Morse1990], the differences between the various calculations presented in Table \[tab:mec\] are most likely due to relatively small changes in the balance of contributions from seagull and in-flight diagrams rather than any fundamental differences in the models used. Other nonlinear effective interactions are also used to calculate the one-pion exchange current corrections, and similar results are obtained as those given by PK1.
  ---- ------- --------- --------------- -------------- ---------------------- ----------------------
   A    Exp.    Schmidt   [@Towner1987]   [@Arima1987]   QHD+MEC [@Morse1990]         RMF+MEC
  ---- ------- --------- --------------- -------------- ---------------------- ----------------------
   15   0.218    0.187        0.228          0.233       $0.200(0.199+0.001)$   $0.216(0.216+0.000)$
   17   1.414    1.440        1.410          1.435       $1.42\,(1.43-0.011)$   $1.467(1.469-0.002)$
   39   0.706    0.636        0.706          0.735       $0.659(0.660-0.001)$   $0.707(0.707+0.000)$
   41   1.918    1.940        1.893          1.944       $1.93\,(1.94-0.007)$   $1.988(1.991-0.003)$
  ---- ------- --------- --------------- -------------- ---------------------- ----------------------
: \[tab:isoscalar\]Isoscalar magnetic moments obtained from RMF calculations using PK1 effective interaction, in comparison with the corresponding data, Schmidt value, previous relativistic result [@Morse1990] and non-relativistic results [@Towner1987; @Arima1987](see text for details).
In Table \[tab:isoscalar\], the isoscalar magnetic moments and the corresponding pion exchange current corrections obtained from RMF calculations using PK1 are presented in comparison with the corresponding data, the Schmidt values, the previous relativistic result [@Morse1990] and non-relativistic results [@Towner1987; @Arima1987]. The isoscalar magnetic moments obtained from the deformed RMF theory with space-like components of the vector meson are labeled RMF, and the corresponding one-pion exchange current corrections, calculated as in Ref. [@Morse1990], are labeled MEC.
The isoscalar magnetic moment in Ref. [@Morse1990] consists of two parts, i.e., the QHD calculations taken from Ref. [@Furnstahl1989] and the additional one-pion exchange current corrections calculated with L3 effective interaction.
For the non-relativistic calculations in Refs. [@Towner1987; @Arima1987], the harmonic oscillator wave functions are used for single-particle states and one-boson-exchange potential [@Towner1987] and Hamada-Johnston potential [@Arima1987] were respectively employed for the residual interaction. For the corrections to magnetic moments, the second-order core polarization, MEC, and the crossing term between MEC and core polarization have been included. For the MEC corrections, the $\Delta$ isobar current as well as the exchange current of the mesons $\pi$, $\sigma$, $\omega$, and $\rho$ have been taken into account.
All calculated results in Table \[tab:isoscalar\] are in good agreement with the data and, as in the previous relativistic [@Morse1990] and non-relativistic calculations [@Towner1987; @Arima1987], the MEC corrections to the isoscalar moments in the present calculations are negligible. For mirror nuclei with a double-closed shell plus or minus one nucleon, the MEC corrections to the isoscalar moments reflect the violation of isospin symmetry in the wave functions. Given the small MEC corrections to the isoscalar moments found here, it is easy to understand the excellent description of the isoscalar magnetic moments in the deformed RMF theory with space-like components of the vector meson in Refs. [@Hofmann1988; @Yao2006].
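For reference, the Schmidt values quoted in the tables follow from the standard extreme single-particle formulas. The short script below (an illustrative reconstruction, using the free-nucleon g-factors $g_l^p=1$, $g_s^p=5.586$, $g_l^n=0$, $g_s^n=-3.826$) reproduces the isoscalar and isovector Schmidt columns for the orbitals $p_{1/2}$ ($A=15$), $d_{5/2}$ ($A=17$), $d_{3/2}$ ($A=39$) and $f_{7/2}$ ($A=41$):

```python
# Schmidt (extreme single-particle) magnetic moments -- illustrative script,
# not from the paper; free-nucleon g-factors are assumed.
def schmidt_mu(l, j, proton):
    gl, gs = (1.0, 5.586) if proton else (0.0, -3.826)
    if abs(j - (l + 0.5)) < 1e-9:                         # stretched case j = l + 1/2
        return (j - 0.5) * gl + 0.5 * gs
    return j / (j + 1.0) * ((j + 1.5) * gl - 0.5 * gs)    # case j = l - 1/2

orbits = {15: (1, 0.5), 17: (2, 2.5), 39: (2, 1.5), 41: (3, 3.5)}  # A: (l, j)
schmidt = {}
for A, (l, j) in orbits.items():
    mu_p, mu_n = schmidt_mu(l, j, True), schmidt_mu(l, j, False)
    # isoscalar = (mu_p + mu_n)/2, isovector = (mu_p - mu_n)/2 for the mirror pair
    schmidt[A] = (round((mu_p + mu_n) / 2, 3), round((mu_p - mu_n) / 2, 3))
    print(A, schmidt[A])
# 15 (0.187, -0.451), 17 (1.44, 3.353), 39 (0.636, -0.512), 41 (1.94, 3.853)
```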
  ---- -------- --------- --------------- -------------- ------------------------ ------------------------
   A     Exp.    Schmidt   [@Towner1987]   [@Arima1987]    QHD+MEC [@Morse1990]          RMF+MEC
  ---- -------- --------- --------------- -------------- ------------------------ ------------------------
   15   -0.501   -0.451       -0.456         -0.508       $-0.347(-0.449+0.102)$   $-0.339(-0.430+0.091)$
   17    3.308    3.353        3.281          3.306       $3.61\,(3.46+0.151)$     $3.576(3.483+0.092)$
   39   -0.316   -0.512       -0.286         -0.481       $-0.106(-0.280+0.174)$   $-0.115(-0.305+0.190)$
   41    3.512    3.853        3.803          3.729       $4.41\,(4.14+0.270)$     $4.322(4.138+0.184)$
  ---- -------- --------- --------------- -------------- ------------------------ ------------------------
: \[tab:isovector\] Same as Table \[tab:isoscalar\], but for the isovector magnetic moments.
In Table \[tab:isovector\], the isovector magnetic moments and corresponding pion exchange current corrections in RMF calculations using PK1 are compared with the data, Schmidt value, previous relativistic [@Morse1990] and non-relativistic results [@Towner1987; @Arima1987].
It is shown that the pion exchange current gives a significant positive correction to the isovector magnetic moments, which is consistent with the calculations in Ref. [@Morse1990] as well as with most non-relativistic calculations [@Towner1987; @Arima1987]. However, in contrast to the case of the isoscalar magnetic moments, the relativistic results now deviate markedly from the data; that is, this positive contribution does not improve the agreement with the data. The same behavior is found in RMF calculations with other effective interactions. Therefore, the RMF theory with one-pion exchange current corrections cannot improve the description of the isovector magnetic moments for the nuclei considered here.
In future relativistic investigations, the effects of the second-order core polarization, the $\Delta$ isobar current, exchange current corrections due to other mesons, and the crossing term between MEC and core polarization should be taken into account, as already noted in the non-relativistic calculations [@Towner1987; @Arima1987].
In summary, the one-pion exchange current corrections to the isoscalar and isovector magnetic moments have been studied in the RMF theory with PK1 effective interaction and compared with previous relativistic and non-relativistic results. It has been found that the one-pion exchange current gives a negligible contribution to the isoscalar magnetic moments but a significant correction to the isovector ones. However, the one-pion exchange current doesn’t improve the description of nuclear isovector magnetic moments for the concerned nuclei. In the future investigation, similar as the non-relativistic cases [@Towner1987; @Arima1987], the second-order core polarization effects, the $\Delta$ isobar current, crossing term between MEC and core polarization, and exchange current corrections due to other mesons should be taken into account. In addition, the correction due to the restoration of the rotational symmetry [@Yao2009] may play a role as well. The investigation towards these directions is in progress.
We would like to thank W. Bentz for his careful reading of the manuscript and comments. This work is partly supported by Major State Basic Research Developing Program 2007CB815000, the National Natural Science Foundation of China under Grant Nos. 10775004, 10720003, 10947013, 10975008, and 10975007, as well as the Southwest University Initial Research Foundation Grant to Doctor No. SWU109011.
[99]{} R. J. Blin-Stoyle, Rev. Mod. Phys. **28**, 75 (1956).
A. Arima, Prog. Part. Nucl. Phys. **11**, 53 (1984).
N. Stone, At. Data Nucl. Data Tables **90**, 75 (2005).
G. Neyens, Rep. Prog. Phys. **66**, 633 (2003).
D. T. Yordanov *et al.,* Phys. Rev. Lett. **99**, 212501 (2007).
V. Tripathi *et al.,* Phys. Rev. Lett. **101**, 142504 (2008).
A. Arima and H. Horie, Prog. Theor. Phys. **11**, 209 (1954).
M. Chemtob, Nucl. Phys. **A123**, 449 (1969).
K. Shimizu, M. Ichimura, and A. Arima, Nucl. Phys. **A226**, 282 (1974).
I. S. Towner and F. C. Khanna, Nucl. Phys. **A399**, 334 (1983).
I. S. Towner, Phys. Rep. **155**, 263 (1987).
A. Arima, K. Shimizu, W. Bentz, and H. Hyuga, Adv. Nucl. Phys. **18**, 1 (1987).
S. Ichii, W. Bentz, and A. Arima, Nucl. Phys. **A464**, 575 (1987).
P. Ring, Prog. Part. Nucl. Phys. **37**, 193 (1996).
D. Vretenar, A. Afanasjev, G. Lalazissis, and P. Ring, Phys. Rep. **409**, 101 (2005).
J. Meng, H. Toki, S. Zhou, S. Zhang, W. Long, and L. Geng, Prog. Part. Nucl. Phys. **57**, 470 (2006).
L. D. Miller, Ann. Phys. **91**, 40 (1975).
B. D. Serot, Phys. Lett. **B107**, 263 (1981).
J. A. McNeil, R. D. Amado, C. J. Horowitz, M. Oka, J. R. Shepard, and D. A. Sparrow, Phys. Rev. C **34**, 746 (1986).
S. Ichii, W. Bentz, A. Arima, and T. Suzuki, Phys. Lett. **B192**, 11 (1987).
J. R. Shepard, E. Rost, C.-Y. Cheung, and J. A. Mc Neil, Phys. Rev. C **37**, 1130 (1988).
U. Hofmann and P. Ring, Phys. Lett. **B214**, 307 (1988).
R. J. Furnstahl and C. E. Price, Phys. Rev. C **40**, 1398 (1989).
J. M. Yao, H. Chen, and J. Meng, Phys. Rev. C **74**, 024307 (2006).
T. M. Morse, C. E. Price, and J. R. Shepard, Phys. Lett. **B251**, 241 (1990).
Y. K. Gambhir, P. Ring, and A. Thimet, Ann. Phys. **198**, 132 (1990).
P. Ring, Y. K. Gambhir, and G. A. Lalazissis, Comput. Phys. Commun. **105**, 77 (1997).
J. Li, Y. Zhang, J. M. Yao, and J. Meng, Sci. China Ser. G: Phys. Mech. Astron. **52**, 1586 (2009).
J. Li, J. M. Yao, and J. Meng, Chin. Phys. C **33(S1)**, 98 (2009).
W. Long, J. Meng, N. V. Giai, and S.-G. Zhou, Phys. Rev. C **69**, 034319 (2004).
H. Hyuga, A. Arima, and K. Shimizu, Nucl. Phys. **A336**, 363 (1980).
H. Ito and L. S. Kisslinger, Ann. Phys. **174**, 169 (1987).
S.-J. Lee, J. Fink, A. B. Balantekin, M. R. Strayer, A. S. Umar, P. G. Reinhard, J. A. Maruhn, and W. Greiner, Phys. Rev. Lett. **57**, 2916 (1986).
J. M. Yao, J. Meng, P. Ring, and D. P. Arteaga, Phys. Rev. C **79**, 044312 (2009).
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'For a bivariate random variable $(Z_1,Z_2)$ having Kulkarni’s bivariate phase-type distribution (see [@Ku89]), we derive a simple expression for the semi-explicit density function ${\mathbb{E}}(e^{-sZ_2} 1_{ \{ Z_1 \in dy \} })$. Some immediate consequences are presented as concluding remarks.'
author:
- Lothar Breuer
title: 'A semi-explicit density function for Kulkarni’s bivariate phase-type distribution'
---
Introduction {#sec-ku}
============
Multivariate phase-type distributions have been a topic of research interest for quite some time. The first constructive proposal of such a class (henceforth denoted by MPH) can be found in [@AL84]. This class was later extended to MPH$^*$ in [@Ku89], while the latest (and perhaps final) proposal of a definition (denoted by MVPH) is given in [@BN10]. All of the proposals carry their distinctive problems, be it that they seem too limited (as MPH) or that even elementary descriptions like distribution functions are not given explicitly (for MPH$^*$).
The purpose of the present paper is the derivation of semi-explicit expressions for the density function of bivariate MPH$^*$ - distributed random variables. More exactly, for $(Z_1,Z_2) \in \text{MPH}^*$ we shall derive a simple expression for ${\mathbb{E}}(e^{-sZ_2} 1_{ \{ Z_1 \in dy \} })$, i.e. a density function for $Z_1$ joint with a Laplace transform for $Z_2$. As a univariate Laplace transform, this can be readily inverted to yield the bivariate density function $\P( Z_1 \in dy, Z_2 \in dx)$ for $x,y > 0$, see e.g. [@AW95].
For ease of reference, we shall use the remainder of this introduction to restate the pertinent results in Kulkarni’s construction of the class MPH$^*$. The main result along with some remarks are then presented in section 2.
Let ${\cal J} = ( J_t: t \geq 0)$ denote a Markov process on a finite state space $E' := \{ 1, \ldots, m+1 \}$ with $m \in {\mathbb{N}}$, having generator matrix $$\begin{pmatrix} Q & - Q {{\bf 1}}\\ {{\bf 0}}& 0
\end{pmatrix}$$ where $Q$ is invertible, i.e. the states $i \in E := \{ 1, \ldots, m \}$ are transient. The initial distribution of ${\cal J}$ is denoted by $(\alpha, \alpha_{m+1})$ with $\alpha = (\alpha_1, \ldots, \alpha_m)$ and $\alpha_i := \P (J_0 = i)$ for $i \in E'$. We assume of course $\alpha_{m+1} < 1$. Let $R = (r_{ij})_{i \leq k, j \leq m}$ denote a reward matrix of dimension $k \times m$, with $r_{ij} \geq 0$ for all $i,j$. Write also $r_i(j) := r_{ij}$ whenever it is more convenient. Define the time of absorption of ${\cal J}$ by $$\label{def-tau}
\tau := \min \{ t \geq 0: J_t = m+1 \}$$ and further the random variables $$\label{def-Z}
Z_i := \int_0^\tau r_i(J_t) dt$$ for $i \in \{ 1, \ldots, k \}$. Then we say that $(Z_1, \ldots, Z_k) \in MPH^*$. The distribution of $(Z_1, \ldots, Z_k)$ shall be denoted by $MPH^*( \alpha, Q, R)$. To avoid trivial singularities later on, we assume that $\sum_{j=1}^m r_{ij} >0$, i.e. $\P(Z_i >0) >0$ for all $i \in \{ 1, \ldots, k \}$.
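To make the construction concrete, the following sketch samples $(Z_1,Z_2)$ from $MPH^*(\alpha,Q,R)$ by simulating the absorbing chain ${\cal J}$ and accumulating the rewards along its path; the matrices $\alpha$, $Q$ and $R$ are hypothetical two-state examples, not taken from the text. The simulated means are checked against the exact expected total rewards ${\mathbb{E}}(Z_i)=\sum_j r_{ij}\,[\alpha(-Q)^{-1}]_j$:

```python
import numpy as np

# Monte Carlo sampler for MPH*(alpha, Q, R): simulate the absorbing chain J
# and accumulate Z_i += r_i(J_t) dt. All matrices are hypothetical examples.
def sample_mph(alpha, Q, R, rng):
    m = Q.shape[0]
    exit_rates = -np.diag(Q)
    P = Q / exit_rates[:, None]
    np.fill_diagonal(P, 0.0)                 # jump probabilities within E
    absorb = 1.0 - P.sum(axis=1)             # probability of jumping to m+1
    Z = np.zeros(R.shape[0])
    state = rng.choice(m, p=alpha)           # here alpha_{m+1} = 0
    while True:
        Z += R[:, state] * rng.exponential(1.0 / exit_rates[state])
        nxt = rng.choice(m + 1, p=np.append(P[state], absorb[state]))
        if nxt == m:                         # absorption in state m+1
            return Z
        state = nxt

alpha = np.array([0.6, 0.4])
Q = np.array([[-3.0, 1.0],
              [0.5, -2.0]])
R = np.array([[1.0, 2.0],                    # rewards r_1(j)
              [0.5, 1.0]])                   # rewards r_2(j)

rng = np.random.default_rng(0)
samples = np.array([sample_mph(alpha, Q, R, rng) for _ in range(20000)])
# exact first moments: E[Z_i] = sum_j r_ij * (expected total time in state j)
exact = R @ np.linalg.solve((-Q).T, alpha)
print(samples.mean(axis=0), exact)           # close agreement
```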
The bivariate phase-type distribution
=====================================
From now on we set $k=2$, i.e. we consider bivariate distributions in MPH$^*$ only. The plan is the following: By theorem 1 in [@Ku89] the marginal distribution of $Z_1$ is phase-type. As indicated in remark 1 of [@Br12a], it can be characterised in terms of the first passage times for a fluid flow. To be a bit more precise, let $$\label{def-fpt}
\tau(y) := \inf \{ t \geq 0: Y_t > y \}$$ denote the first passage times for a suitable fluid flow model $({\cal J}, {\cal Y})$. Then $$\P( Z_1 > y ) = \P_\alpha \left( \tau(y) < \tau | Y_0 = 0 \right)$$ where $\tau$ is the same as in (\[def-tau\]) and $\P_\alpha$ denotes the conditional probability given that $\P( J_0 = i ) = \alpha_i$ for $i \in E$. Now we attach a phase-dependent time devaluation along the path of ${\cal Y}$ up to $\tau(y)$ to obtain an expression for $${\mathbb{E}}\left( e^{- s \int_0^{\tau(y)} r_2(J_s) ds} \right)$$ which is the Laplace transform of $Z_2$ (with argument $s$) on the set of paths that satisfy $\tau(y) < \tau$, i.e. $Z_1 > y$. From here it is only a small step to obtain an expression for ${\mathbb{E}}( e^{-s Z_2} 1_{ \{ Z_1 \in dy \} })$.
Two-dimensional fluid flow models have been analysed in detail in [@BR13]. We shall make use of some of the results therein, adapted to the question investigated here. In order to do so, we need to introduce some more notation. First we define the fluid flow models $({\cal J}, {\cal Y})$ and $({\cal J}, {\cal X})$ by $$\label{def-XY}
Y_t := \int_0^t r_1(J_s) \; ds \qquad \text{and} \qquad X_t := \int_0^t r_2(J_s) \; ds$$ for all $t \geq 0$, where the phase process ${\cal J}$ is the same as in section \[sec-ku\] and $$r_1(m+1) := r_2(m+1) := 0$$ Partition the set $E$ of transient states into $E=E_0 \cup E_+$, where $$E_0 := \{ i \in E: r_{1i} =0 \} \qquad \text{and} \qquad E_+ := \{ i \in E: r_{1i} > 0 \}$$ According to this partition, write $Q$ and $\alpha$ in block form, i.e.$$Q = \begin{pmatrix} Q_{00} & Q_{0+} \\ Q_{+0} & Q_{++} \end{pmatrix} \qquad \text{and} \qquad \alpha = (\alpha_0, \alpha_+)$$ Further write $(\eta_0, \eta_+)' := \eta := -Q {{\bf 1}}$. Finally, define the diagonal matrices $$R_+ := diag( r_{1i}: i \in E_+),$$ $$D_+ := diag( r_{2i}: i \in E_+) \qquad \text{and} \qquad D_0 := diag( r_{2i}: i \in E_0).$$ Now we can state the main result:
Let $(Z_1, Z_2) \sim MPH^*( \alpha, Q, R)$. Then $${\mathbb{E}}( e^{-s Z_2} 1_{ \{ Z_1 =0 \} }) = \alpha_0 (s D_0 - Q_{00})^{-1} \eta_{0}$$ for $s \geq 0$ and $${\mathbb{E}}( e^{-s Z_2} 1_{ \{ Z_1 \in dy \} }) = \alpha(s) e^{W(s) y} \eta(s) \; dy$$ for $y > 0$ and $s \geq 0$, where $$\begin{aligned}
\alpha(s) &:= \alpha_0 (s D_0 - Q_{00})^{-1} Q_{0+} + \alpha_+ \\
W(s) &:= R_+^{-1} \left( (Q_{++} - s D_+) - Q_{+0} (Q_{00} - s D_0)^{-1} Q_{0+} \right) \\
\eta(s) &:= R_+^{-1} \left( Q_{+0} (s D_0 - Q_{00})^{-1} \eta_0 + \eta_+ \right)\end{aligned}$$
Due to the construction in (\[def-Z\]) and (\[def-XY\]), the representations $Z_1 = Y_\tau$ and $Z_2 = X_\tau$ hold, where $\tau$ is defined in (\[def-tau\]).
This means that on the set $\{ Z_1 =0 \}$, the phase process ${\cal J}$ lives only on $E_0$ before it gets absorbed. Define $\sigma := \min \{ t \geq 0: J_t \notin E_0 \}$. Clearly, $\sigma \leq \tau < \infty$ and $\{ \sigma = \tau \} = \{ Z_1 = 0 \}$. Thus $${\mathbb{E}}( e^{-s Z_2} 1_{ \{ Z_1 =0 \} }) = {\mathbb{E}}( e^{-s \int_0^\tau r_2(J_s) \; ds} 1_{ \{ \sigma = \tau \} })$$ Theorem 1 in [@BR13] states that $${\mathbb{E}}( e^{-s \int_0^t r_2(J_s) \; ds} 1_{ \{ t < \sigma < \tau \} }) = \alpha_0 e^{(Q_{00} - s D_0) t} {{\bf 1}}$$ for $s \geq 0$. Hence, $${\mathbb{E}}( e^{-s \int_0^t r_2(J_s) \; ds} 1_{ \{ \sigma = \tau \in dt \} }) = \alpha_0 e^{(Q_{00} - s D_0) t} \eta_0$$ for all $t > 0$. Now integrating over $t \in ]0, \infty[$ yields the first statement. For the second statement, theorem 2 in [@BR13] states that $${\mathbb{E}}\left( e^{-s \int_0^{\tau(y)} r_2(J_s) \; ds} 1_{ \{ \tau(y) < \tau \} } \right) = e^{W(s) y}$$ for $s \geq 0$, where $\tau(y)$ is defined in (\[def-fpt\]). Given our construction of ${\cal Y}$ and $Z_1$, this is equivalent to $${\mathbb{E}}\left( e^{-s \int_0^{\tau(y)} r_2(J_s) \; ds} 1_{ \{ Z_1 > y \} } \right) = e^{W(s) y}$$ From here we obtain for small $h > 0$ $$\begin{gathered}
{\mathbb{E}}\left( e^{-s \int_0^{\tau(Z_1)} r_2(J_s) \; ds} 1_{ \{ y < Z_1 < y+h \} } \right) \\
= e^{W(s) y} \left( h R_+^{-1} \eta_+ + h R_+^{-1} Q_{+0} (s D_0 - Q_{00})^{-1} \eta_0 + o(h) \right)\end{gathered}$$ and hence $$\begin{aligned}
{\mathbb{E}}( \left. e^{-s Z_2} 1_{ \{ Z_1 \in dy \} } \right| J_0 = i) &= e_i' e^{W(s) y} \eta(s) \; dy\end{aligned}$$ for $y > 0$ and ascending phases $i \in E_+$. Considering all possible initial phases, we obtain by the same reasoning as for the first statement $${\mathbb{E}}( e^{-s Z_2} 1_{ \{ Z_1 \in dy \} }) = \left( \alpha_0 (s D_0 - Q_{00})^{-1} Q_{0+} + \alpha_+ \right) e^{W(s) y} \eta(s) \; dy$$ for $y > 0$, which is the second statement.
Let $Z=(Z_1, \ldots, Z_k) \in \text{MPH}^*$ with $k \geq 3$. According to theorem 6 in [@Ku89], every pair $(Z_i, Z_j)$ with $i \neq j$ has a bivariate MPH$^*$ distribution. Thus we can use theorem 1 to determine the two-dimensional marginal distributions of a $k$-variate MPH$^*$ distribution.
For $s=0$ we obtain the marginal distribution of $Z_1$, which is given as follows. Let $k := | E_0 |$ and $n := | E_+ |$, where $|M|$ denotes the cardinality of a set $M$. $Z_1$ has a PH($\beta, T$) distribution of order $n$ with $$\beta_i = \alpha_{k+i} + \alpha_0 (- Q_{00}^{-1}) Q_{0+} e_i$$ for $i \in \{ 1, \ldots, n \}$ and $$\beta_{n+1} = \alpha_{m+1} + \alpha_0 (- Q_{00}^{-1}) \eta_0$$ The rate matrix $T$ is given by $T = W(0) = R_+^{-1} \left( Q_{++} - Q_{+0} Q_{00}^{-1} Q_{0+} \right)$ such that $$\begin{aligned}
- T {{\bf 1}}&= - R_+^{-1} \left( Q_{++} {{\bf 1}}- Q_{+0} Q_{00}^{-1} Q_{0+} {{\bf 1}}\right) \\
&= R_+^{-1} \left( \eta_+ + Q_{+0} {{\bf 1}}+ Q_{+0} Q_{00}^{-1} (\eta_0 - Q_{00} {{\bf 1}}) \right) \\
&= R_+^{-1} \left( \eta_+ + Q_{+0} Q_{00}^{-1} \eta_0 \right) \\
&= \eta(0)\end{aligned}$$ as to be expected.
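This marginal representation is easy to verify numerically. The sketch below (with a hypothetical three-state example, states ordered so that $E_0$ comes first) computes ${\mathbb{E}}(Z_1)$ once from the $PH(\beta,T)$ representation and once from the expected-reward formula ${\mathbb{E}}(Z_1)=\alpha(-Q)^{-1}r_1$:

```python
import numpy as np

# Illustrative check of the PH(beta, T) marginal of Z_1; the three-state
# example (Q, r1, alpha) is hypothetical, with the states of E_0 listed first.
Q = np.array([[-4.0, 1.0, 1.5],
              [0.5, -3.0, 1.0],
              [1.0, 0.5, -5.0]])
r1 = np.array([0.0, 1.0, 2.0])               # E_0 = {0}, E_+ = {1, 2}
alpha = np.array([0.3, 0.3, 0.4])

z, p = r1 == 0.0, r1 > 0.0
Q00, Q0p = Q[np.ix_(z, z)], Q[np.ix_(z, p)]
Qp0, Qpp = Q[np.ix_(p, z)], Q[np.ix_(p, p)]
Rpinv = np.diag(1.0 / r1[p])

beta = alpha[z] @ np.linalg.solve(-Q00, Q0p) + alpha[p]    # initial vector
T = Rpinv @ (Qpp - Qp0 @ np.linalg.solve(Q00, Q0p))        # rate matrix W(0)
m1 = beta @ np.linalg.solve(-T, np.ones(T.shape[0]))       # E[Z1] from PH(beta, T)
m2 = alpha @ np.linalg.solve(-Q, r1)                       # E[Z1] = alpha (-Q)^{-1} r1
print(m1, m2)                                              # identical values
```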
Theorem 4 in [@Ku89] states that the joint Laplace transform of $(Z_1, Z_2)$ is given by $$\label{eq-ku}
{\mathbb{E}}( e^{- s_1 Z_1} e^{- s_2 Z_2} ) = - \alpha ( \Delta - Q)^{-1} Q {{\bf 1}}$$ where $\Delta := diag( s_1 r_1(j) + s_2 r_2(j): j \in E)$. A relatively arduous way to arrive at this result is $$\begin{aligned}
{\mathbb{E}}( e^{- s_1 Z_1} e^{- s_2 Z_2} ) &= \int_0^\infty e^{- s_1 y} {\mathbb{E}}( e^{-s_2 Z_2} 1_{ \{ Z_1 \in dy \} }) \; dy + {\mathbb{E}}( e^{-s_2 Z_2} 1_{ \{ Z_1 =0 \} }) \\
&= \left( \alpha_0 (Q_{00} - s_2 D_0)^{-1} Q_{0+} + \alpha_+ \right) \int_0^\infty e^{- s_1 y} e^{ W(s_2) y} \eta(s_2) \; dy \\
& \qquad - \alpha_0 (Q_{00} - s_2 D_0)^{-1} \eta_{0} \\
&= - \left( \alpha_0 (Q_{00} - s_2 D_0)^{-1} Q_{0+} + \alpha_+ \right) (W(s_2) - s_1 I)^{-1} \eta(s_2) \\
& \qquad - \alpha_0 (Q_{00} - s_2 D_0)^{-1} \eta_{0}\end{aligned}$$ First we observe that $$\begin{aligned}
W(s_2) - s_1 I &= R_+^{-1} \left( (Q_{++} - s_2 D_+) - Q_{+0} (Q_{00} - s_2 D_0)^{-1} Q_{0+} - s_1 R_+ \right) \\
&= R_+^{-1} \left( (Q_{++} - s_1 R_+ -s_2 D_+) - Q_{+0} (Q_{00} - s_1 R_0 - s_2 D_0)^{-1} Q_{0+} \right) \end{aligned}$$ since $R_0 = {{\bf 0}}$ by definition. To shorten notations, we write $W:= W(s_2) - s_1 I $. Further, we write $$R_+^{-1} \left( Q_{+0} (sD_0 - Q_{00})^{-1} \eta_0 + \eta_+ \right) = R_+^{-1} \left( - Q_{+0} (Q_{00} - s D_0)^{-1}, I \right) \begin{pmatrix} \eta_0 \\ \eta_+ \end{pmatrix}$$ To arrive at (\[eq-ku\]), we need to show that $$\begin{aligned}
( \Delta - Q)^{-1} &= \begin{pmatrix} (Q_{00} - s_2 D_0)^{-1} Q_{0+} \\ I \end{pmatrix} (-W)^{-1} R_+^{-1} \begin{pmatrix} - Q_{+0} (Q_{00} - s_2 D_0)^{-1} & I \end{pmatrix} \\
& \qquad + \begin{pmatrix} -(Q_{00} - s_2 D_0)^{-1} & {{\bf 0}}\\ {{\bf 0}}& {{\bf 0}}\end{pmatrix}\end{aligned}$$ In block form we can write $$( \Delta - Q) = \begin{pmatrix} s_2 D_0 - Q_{00} & Q_{0+} \\ Q_{+0} & s_1 R_+ + s_2 D_+ - Q_{++} \end{pmatrix}$$ since $R_0 = {{\bf 0}}$. Thus $$\begin{aligned}
( \Delta - Q) & \begin{pmatrix} (Q_{00} - s_2 D_0)^{-1} Q_{0+} \\ I \end{pmatrix} (-W)^{-1} \begin{pmatrix} - Q_{+0} (Q_{00} - s_2 D_0)^{-1} & I \end{pmatrix} \\
&= \begin{pmatrix} {{\bf 0}}\\ - R_+ W \end{pmatrix} (-W)^{-1} R_+^{-1} \begin{pmatrix} - Q_{+0} (Q_{00} - s_2 D_0)^{-1} & I \end{pmatrix} \\
&= \begin{pmatrix} {{\bf 0}}& {{\bf 0}}\\ - Q_{+0} (Q_{00} - s_2 D_0)^{-1} & I \end{pmatrix} \end{aligned}$$ and further $$( \Delta - Q) \begin{pmatrix} -(Q_{00} - s_2 D_0)^{-1} & {{\bf 0}}\\ {{\bf 0}}& {{\bf 0}}\end{pmatrix} = \begin{pmatrix} I & {{\bf 0}}\\ Q_{+0} (Q_{00} - s_2 D_0)^{-1} & {{\bf 0}}\end{pmatrix}$$ Together this yields the desired result.
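The identity can also be confirmed numerically. The sketch below (again with a hypothetical three-state example, chosen so that $E_0$ is nonempty) evaluates the joint Laplace transform once via the semi-explicit density of the theorem and once via Kulkarni's formula (\[eq-ku\]); the two agree up to floating-point error:

```python
import numpy as np

# Numerical cross-check of the theorem against Kulkarni's joint Laplace
# transform. The example (Q, r1, r2, alpha) is hypothetical, with E_0 = {0}
# and the states of E_0 listed first.
Q = np.array([[-4.0, 1.0, 1.5],
              [0.5, -3.0, 1.0],
              [1.0, 0.5, -5.0]])
r1 = np.array([0.0, 1.0, 2.0])
r2 = np.array([1.0, 0.5, 2.0])
alpha = np.array([0.3, 0.3, 0.4])
eta = -Q @ np.ones(3)

z, p = r1 == 0.0, r1 > 0.0
Q00, Q0p = Q[np.ix_(z, z)], Q[np.ix_(z, p)]
Qp0, Qpp = Q[np.ix_(p, z)], Q[np.ix_(p, p)]
Rpinv = np.diag(1.0 / r1[p])
D0, Dp = np.diag(r2[z]), np.diag(r2[p])

def lt_theorem(s1, s2):
    """E[exp(-s1 Z1 - s2 Z2)] assembled from the semi-explicit density."""
    M0 = s2 * D0 - Q00
    W = Rpinv @ ((Qpp - s2 * Dp) - Qp0 @ np.linalg.solve(-M0, Q0p))
    alph = alpha[z] @ np.linalg.solve(M0, Q0p) + alpha[p]
    et = Rpinv @ (Qp0 @ np.linalg.solve(M0, eta[z]) + eta[p])
    atom = alpha[z] @ np.linalg.solve(M0, eta[z])      # the {Z1 = 0} atom
    dim = W.shape[0]
    return alph @ np.linalg.solve(s1 * np.eye(dim) - W, et) + atom

def lt_kulkarni(s1, s2):
    """Kulkarni's formula: -alpha (Delta - Q)^{-1} Q 1."""
    Delta = np.diag(s1 * r1 + s2 * r2)
    return -alpha @ np.linalg.solve(Delta - Q, Q @ np.ones(3))

print(lt_theorem(0.7, 1.3), lt_kulkarni(0.7, 1.3))     # the two agree
```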
The most important ingredient to compute the covariance is ${\mathbb{E}}( Z_1 Z_2)$. Corollary 1 in [@Ku89] provides an iteration scheme to compute joint moments. An explicit formula is obtained via $${\mathbb{E}}( Z_1 Z_2) = \left. \frac{d}{ds} {\mathbb{E}}( e^{-s Z_2} Z_1) \right|_{s=0}$$ To this aim, $$\begin{aligned}
{\mathbb{E}}( e^{-s Z_2} Z_1) &= \int_0^\infty y {\mathbb{E}}( e^{-s Z_2} 1_{ \{ Z_1 \in dy \} }) = \alpha(s) \int_0^\infty y e^{W(s) y} \; dy \; \eta(s) \end{aligned}$$ and $$\begin{aligned}
\int_0^\infty y e^{W(s) y} \; dy &= W(s)^{-1} \left[ y e^{W(s) y} \right]_{y=0}^\infty - W(s)^{-1} \int_0^\infty e^{W(s) y} \; dy = W(s)^{-2}\end{aligned}$$ yield $$\begin{aligned}
{\mathbb{E}}( Z_1 Z_2) &= \left. - \frac{d}{ds} \alpha(s) W(s)^{-2} \eta(s) \right|_{s=0} \end{aligned}$$ This can be readily evaluated using the differentiation rule $$\frac{d}{ds} (M(s)^{-1}) = - M(s)^{-1} \left( \frac{d}{ds} M(s) \right) M(s)^{-1}$$ for matrix-valued functions $M(s)$, see sections I.1.3-4 in [@Bo04].
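As a quick numerical check of this differentiation rule (note the minus sign: $\frac{d}{ds}M(s)^{-1} = -M(s)^{-1}\big(\frac{d}{ds}M(s)\big)M(s)^{-1}$), one can compare a central finite difference with the closed form; the matrix $M(s)$ below is an arbitrary illustrative choice:

```python
import numpy as np

# Finite-difference check of (M^{-1})'(s) = -M^{-1} M'(s) M^{-1}.
# M(s) is an arbitrary smooth, invertible matrix-valued function, chosen ad hoc.
M = lambda s: np.array([[2.0 + s, 1.0], [0.5, 3.0 + 2.0 * s]])
dM = np.array([[1.0, 0.0], [0.0, 2.0]])          # dM/ds, constant here
s, h = 0.3, 1e-6
fd = (np.linalg.inv(M(s + h)) - np.linalg.inv(M(s - h))) / (2.0 * h)
exact = -np.linalg.inv(M(s)) @ dM @ np.linalg.inv(M(s))
print(np.max(np.abs(fd - exact)))                # tiny discrepancy
```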
The special case MPH as described in [@AL84] is obtained as follows. Using the decomposition of state space $E$ and generator matrix $A$ as on p.692 therein, we can translate $E_+ = \Gamma_2^c$, $E_0 = \Gamma_2$, and $$Q_{++} = \begin{pmatrix} A^{(1,2)} & B^{(1)} \\ {{\bf 0}}& A^{(1)} \end{pmatrix}, \qquad Q_{+0} = \begin{pmatrix} B^{(2)} \\ {{\bf 0}}\end{pmatrix}, \qquad Q_{0+} = {{\bf 0}}, \qquad Q_{00} = A^{(2)}$$ The construction in [@AL84] further specifies $R_+ = I$, $D_0 = I$, and $$D_+ = \begin{pmatrix} I^{1,2} & {{\bf 0}}\\ {{\bf 0}}& {{\bf 0}}\end{pmatrix}$$ where $I^{1,2}$ denotes the identity matrix on $ \Gamma_1^c \cap \Gamma_2^c$. This yields $\alpha(s) = \alpha_+$, $$W(s) = Q_{++} - s D_+ = \begin{pmatrix} A^{(1,2)} - s I & B^{(1)} \\ {{\bf 0}}& A^{(1)} \end{pmatrix}$$ and $$\eta(s) = \begin{pmatrix} B^{(2)} (sI - A^{(2)})^{-1} (- A^{(2)} {{\bf 1}}) \\ {{\bf 0}}\end{pmatrix} + \begin{pmatrix} -( A^{(1,2)} {{\bf 1}}+ B^{(1)} {{\bf 1}}+ B^{(2)} {{\bf 1}}) \\ - A^{(1)} {{\bf 1}}\end{pmatrix} .$$
If $r_{1j} > 0$ for all $j \in E$, then $E=E_+$, hence ${\mathbb{E}}( e^{-s Z_2} 1_{ \{ Z_1 =0 \} }) =0$, and $${\mathbb{E}}( e^{-s Z_2} 1_{ \{ Z_1 \in dy \} }) = \alpha_+ e^{W(s) y} \eta_+$$ where $$W(s) = R_+^{-1} \left( Q_{++} - s D_+ \right)$$ for all $s \geq 0$. If further $r_{1i}=r_{2i}$ for all $i \in E$ with $r_{2i} > 0$ and $q_{ij} = 0$ for $r_{2i}=0$ and $r_{2j} > 0$, then we obtain the special case of the class MPH where $Z_1 \geq Z_2$ almost surely. This specifies to $\Gamma_2 = {{\bf 0}}$, $\Gamma_1 = \{ i \in E: r_{2i} = 0 \}$, as well as $$A^{(1,2)} = \left( \frac{q_{ij}}{r_{1i}} \right)_{i,j \in \Gamma_1^c}, \quad B^{(1)} = \left( \frac{q_{ij}}{r_{1i}} \right)_{i \in \Gamma_1^c, j \in \Gamma_1} \quad \text{and} \quad A^{(1)} = \left( \frac{q_{ij}}{r_{1i}} \right)_{i,j \in \Gamma_1}.$$
With no additional effort, the current framework can be extended to allow $r_{2i} < 0$ for some $i \in E$. One needs to take care of the range of $s$ for the Laplace transform ${\mathbb{E}}( e^{-s Z_2} 1_{ \{ Z_1 \in dy \} })$ to converge (such a range exists, see lemma 2 in [@BR13]), or consider Fourier transforms instead. Then $Z_2$ has a so-called bilateral phase-type distribution, i.e. it is the mixture of two random variables $Z_2^+$ and $Z_2^-$ where $Z_2^+$ and $- Z_2^-$ have phase-type distributions. In particular, $Z_2$ may also assume negative values now. Theorem 2.3.2 in [@AR05] states that bilateral phase-type distributions are (weakly) dense in the class of all distributions on ${\mathbb{R}}$. For the marginal distribution of $Z_2$ see [@As04]; for more on bilateral phase-type distributions see [@AR05].
J. Abate and W. Whitt. . , 7:36–43, 1995.
S. Ahn and V. Ramaswami. . , 21:239–259, 2005.
S. Asmussen. . Technical Report 14, MaPhySto, June 2004. ISSN 1398-2699.
D. Assaf, N. Langberg, T. Savits, and M. Shaked. . , 32:688–702, 1984.
N. G. Bean and M. M. O’Reilly. . , 29:31–63, 2013.
M. Bladt and B. F. Nielsen. . , 26:1–26, 2010.
N. Bourbaki. . Springer, 2004.
L. Breuer. . , 49:549–565, 2012.
V. Kulkarni. . , 37:151–158, 1989.
---
abstract: 'We discuss efficient solutions to systems of shifted linear systems arising in computations for oscillatory hydraulic tomography (OHT). The reconstruction of hydrogeological parameters such as hydraulic conductivity and specific storage using limited discrete measurements of pressure (head) obtained from sequential oscillatory pumping tests, leads to a nonlinear inverse problem. We tackle this using the quasi-linear geostatistical approach [@kitanidis1995quasi]. This method requires repeated solution of the forward (and adjoint) problem for multiple frequencies, for which we use flexible preconditioned Krylov subspace solvers specifically designed for shifted systems based on ideas in [@gu2007flexible]. The solvers allow the preconditioner to change at each iteration. We analyze the convergence of the solver and perform an error analysis when an iterative solver is used for inverting the preconditioner matrices. Finally, we apply our algorithm to a challenging application taken from oscillatory hydraulic tomography to demonstrate the computational gains by using the resulting method.'
author:
- 'Arvind K. Saibaba'
- Tania Bakhos
- 'Peter K. Kitanidis'
title: A Flexible Krylov Solver for Shifted Systems with Application to Oscillatory Hydraulic Tomography
---
Introduction
============
Hydraulic tomography (HT) is a method for characterizing the subsurface that consists of applying pumping in wells while aquifer pressure (head) responses are measured. Using the data collected at various locations, important aquifer parameters (e.g., hydraulic conductivity and specific storage) are estimated. An example of such a technique is transient hydraulic tomography (reviewed in [@cardiff20113D]). Oscillatory hydraulic tomography (OHT) is an emerging technology for aquifer characterization that involves a tomographic analysis of oscillatory signals. Here we consider that a sinusoidal signal of known frequency is imposed at an injection point and the resulting change in pressure is measured at receiver wells. Consequently, these measurements are processed using a nonlinear inversion algorithm to recover estimates for the desired aquifer parameters. Oscillatory hydraulic tomography has notable advantages over transient hydraulic tomography; namely, a weak signal can be distinguished from the ambient noise and by using signals of different frequencies, we are able to extract additional information without having to drill additional wells.
Using multiple frequencies for OHT has the potential to improve the quality of the image. However, it involves a considerable computational burden. Solving the inverse problem, i.e. reconstructing the hydraulic conductivity field from pressure measurements, requires several applications of the forward (and adjoint) problem for multiple frequencies. As we shall show in section \[sec:application\], solving the forward (and adjoint) problem involves the solution of shifted systems for multiple frequencies. For finely discretized grids, the cost of solving the system of equations corresponding to each frequency can be so high as to be computationally prohibitive when many frequencies (say, on the order of $200$) are used. The objective is to develop an approach in which the cost of solving the forward (and adjoint) problem for multiple frequencies is not significantly higher than the cost of solving the system of equations for a single frequency; in other words, the cost should depend only weakly on the number of frequencies.
Direct methods, such as sparse LU, Cholesky or LDL$^T$ factorization, are suited to linear systems in which the matrix bandwidth is small, so that the fill-in is somewhat limited. An additional difficulty that direct methods pose is that for solving a sequence of shifted systems, the matrix has to be re-factorized for each frequency, resulting in a considerable computational cost. By contrast, Krylov subspace methods for shifted systems are particularly appealing since they exploit the shift-invariant property of Krylov subspaces [@simoncini2007recent] to obtain approximate solutions for all frequencies by generating a single approximation space that is shift independent. Several algorithms have been developed for dealing with shifted systems. Some are based on Lanczos recurrences for symmetric systems [@meerbergen2003solution; @meerbergen2010lanczos]; others use the unsymmetric Lanczos [@freund1993solution], and some others use Arnoldi iteration [@datta1991arnoldi; @frommer1998restarted; @simoncini2003restarted; @darnell2008deflated; @gu2007flexible]. Shifted systems also occur in several other applications such as control theory, time dependent partial differential equations, structural dynamics, and quantum chromodynamics (see [@simoncini2003restarted] and references therein). Hence, several other communities can benefit from advances in efficient solvers for shifted systems. The Krylov subspace method that we propose is closest in spirit to [@gu2007flexible]. However, as we shall demonstrate, we have extended their solver significantly.
**Contributions**: Our major contributions can be summarized as follows:
- We have extended the flexible Arnoldi algorithm discussed in [@gu2007flexible] for shifted systems of the form $(A + \sigma_jI)x_j =b$ to systems of the form $(K + \sigma_j M) x_j = b $ for $j = 1,\dots,n_f$ that employs multiple preconditioners of the form $(K+\tau M)$. In addition, we provide some analysis for the convergence of the solver.
- When an iterative solver is used to apply the preconditioner, we derive an error analysis that gives us stopping tolerances for monitoring convergence without constructing the full residual.
- Our motivation for the need for fast solvers for shifted systems comes from oscillatory hydraulic tomography. We describe the key steps involved in inversion for oscillatory hydraulic tomography, and discuss how the computation of the Jacobian can be accelerated by the use of the aforementioned fast solvers.
**Limitations**: The focus of this work has been on the computational aspects of oscillatory hydraulic tomography. Although the initial results are promising, several issues remain to be resolved for application to realistic problems of oscillatory hydraulic tomography. For example, we are inverting for the hydraulic conductivity assuming that the storage field is known. In practice, the storage is also unknown and needs to be estimated from the data as well. Moreover, simulating realistic conditions (higher variance in the log conductivity field, and adding measurement noise in a realistic manner) may significantly improve the performance with the addition of information from different frequencies. We will deal with these issues in another paper.
The paper is organized as follows. In section \[sec:krylov\], we discuss the Krylov subspace methods for solving shifted linear systems of equations based on the Arnoldi iteration using preconditioners that are also shifted systems. In section \[sec:geneigen\], we discuss the convergence of the iterative solver and its connection to the convergence of the eigenvalues of the generalized eigenvalue problem $Kx=\lambda Mx$. In section \[sec:inexact\], we discuss an error analysis when an iterative method is used to invert the preconditioner matrices. In section \[sec:application\], we discuss the basic constitutive equations in OHT, which can be expressed as shifted linear system of equations and discuss the geostatistical method for solving inverse problems. Finally, in section \[sec:numerical\] we present some numerical results on systems of shifted systems and then discuss numerical results involving the inverse problem arising from OHT. We observe significant speed-ups using our Krylov subspace solver.
Krylov subspace methods for shifted systems {#sec:krylov}
===========================================
The goal is to solve systems of equations of the form $$\label{eqn:multipleshifted}
\left( K + \sigma_j M \right) x_j = b \qquad j=1,\dots,n_f$$
Note that $\sigma_j$, for $j=1,\dots,n_f$ are (in general) complex shifts. We assume that none of these systems are singular. In particular, for our application, both $K$ and $M$ are stiffness and mass matrices respectively and are positive definite, but our algorithm only requires that they are invertible. By using a finite volume or lumped mass approach [@hughes2012finite], the mass matrices become diagonal but this assumption is not necessary. Later, in sections \[sec:forward\] and \[sec:sensitivity\], we will show how such equations arise in our applications.
[Figure \[fig:arnoldi\]: schematic of the Arnoldi relation $A V_m = V_{m+1}\bar{H}_m$, showing the matrix $A$, the basis $V_m$ with the appended vector $v_{m+1}$, the upper Hessenberg matrix $\bar{H}_m$, and the subdiagonal entry $h_{m+1,m}$.]
As a brief introduction, we review Krylov-based iterative solvers for the system of equations $Ax = b$; in particular, the variants generated by the Arnoldi iteration, namely the Full Orthogonalization Method (FOM) and the Generalized Minimum RESidual method (GMRES). Krylov solvers generate a sequence of orthonormal vectors $v_1,\dots,v_m$ that form a basis for the Krylov subspace: $${{\cal{K}}_{m}(A,b)} {\stackrel{\text{def}}{=}}{\text{span}\left\{b,Ab,\dots,A^{m-1}b\right\}}$$ At the end of the $m$th iteration, a relation of the form (see figure \[fig:arnoldi\]) $$AV_m = V_{m+1}\bar{H}_m$$ is obtained, where $V_m = [v_1,\dots,v_m]$ and $v_1 = b/\beta$, $\beta = {\lVert b \rVert_2}$ with $x_0= 0$, and $\bar{H}_m$ is an upper Hessenberg matrix. An approximate solution to the system $Ax = b$ is then sought in the form $x_m = V_my_m$, where $y_m$ is chosen either to minimize the residual $r_m {\stackrel{\text{def}}{=}}b - Ax_m$, which leads to the GMRES subproblem, $$\label{eqn:gmressubproblem}
\min_{y_m \in \mathbb{C}^m}{\lVert r_m \rVert_2} \Rightarrow \min_{y_m \in \mathbb{C}^m} {\lVert \beta e_1 - \bar{H}_my_m \rVert_2}$$ or to impose the oblique projection $r_m \perp {\text{span}\left\{V_m\right\}}$, which leads to the FOM subproblem $$\label{eqn:fomsubproblem}
r_m \perp {\text{span}\left\{V_m\right\}} \qquad \Rightarrow \qquad H_my_m = \beta e_1$$ As $m$ increases, the cost per iteration increases at least as ${\cal{O}}(m^2n)$ and the memory costs increase as ${\cal{O}}(mn)$ [@saad2003iterative]. The standard remedies for reducing the number of iterations are 1) using an appropriate preconditioner, 2) truncating the orthogonalization in the Arnoldi algorithm and 3) restarting the Arnoldi algorithm periodically.
An interesting property of Krylov subspaces is that they are shift-invariant; in other words, ${{\cal{K}}_{m}(A,b)} ={{\cal{K}}_{m}(A + \sigma I,b)}$. Therefore, the same Krylov basis generated for the system $Ax=b$ can be effectively used to solve shifted systems of the form $(A+\sigma I)x = b$. The strategy for solving the shifted systems is, therefore, to first generate a basis that is applicable to all systems, and then use the shift-invariance property (for a detailed review, see [@simoncini2007recent section 14.1] and references therein) to solve, for each shift, a smaller subproblem of the GMRES or FOM form above. The same idea can be extended to systems of the form $(K+\sigma_j M)x_j = b$ using a preconditioner of the form $(K+\tau M)$ [@meerbergen2003solution; @gu2007flexible], which solves multiple shifted systems roughly at the cost of a single system. However, in practice, the number of iterations can often be large, especially for large matrices arising from realistic applications.
In order to minimize the number of iterations, Meerbergen [@meerbergen2003solution] proposes a left preconditioner of the form $K_\tau {\stackrel{\text{def}}{=}}K + \tau M $ that is factorized and inverted using a direct solver. The application of $K_\tau^{-1}$ to a vector is, in general, not cheap but the spectrum of $(K+\tau M)^{-1}(K + \sigma M)$ is often more favorable, which results in fast convergence of the Krylov methods in just a few iterations [@meerbergen2003solution]. This form of preconditioning has its roots in solving large-scale generalized eigenvalue problems and is known as [*Cayley transformation*]{} [@golub1996matrix]. In [@popolizio2008acceleration], the authors provide some analysis for choosing the best value of $\tau$ that optimally preconditions all the systems. However, we observed that (also, see [@gu2007flexible]) using a single preconditioner for all the systems may not yield optimal convergence for all systems. In [@gu2007flexible], the authors propose a flexible Arnoldi method for shifted systems that uses different values of $\tau$ resulting in different preconditioners at each iteration. This can potentially reduce the number of iterations for all the shifted systems. Before we describe our flexible algorithm in section \[sec:flexibleprecond\], we will derive the right preconditioned version of Krylov subspace method for shifted systems. This serves two purposes - it motivates our algorithm, while clarifying some of the notation.
Right preconditioning for shifted systems {#sec:rightprecond}
-----------------------------------------
As mentioned earlier, we will review the right preconditioned version of the Krylov subspace algorithm for shifted systems. Following the approach in [@meerbergen2003solution; @simoncini2007recent], we solve the system of equations using a shifted right preconditioner of the form $K_\tau {\stackrel{\text{def}}{=}}K+\tau M $ $$\label{eqn:rightpreconditioned}
(K + \sigma_j M) K_\tau^{-1} \bar{x}(\sigma_j) = b \qquad x(\sigma_j) = K_\tau^{-1}\bar{x}(\sigma_j)$$ for $j=1,\dots,n_f$. We have the following identity that $$\label{eqn:shiftinvariant}
(K+\sigma M) (K + \tau M)^{-1} = I + (\sigma - \tau) M (K+\tau M)^{-1}$$
Using this identity, we obtain the shift-invariance property ${{\cal{K}}_{m}(MK_\tau^{-1},b)} = {{\cal{K}}_{m}((K+\sigma M)(K+\tau M)^{-1},b)}$. Note that ${{\cal{K}}_{m}(MK_\tau^{-1},b)}$ is independent of $\sigma$. This shift-invariance property suggests an efficient algorithm for solving the shifted systems. There is a distinct advantage in using iterative solvers for shifted systems: the expensive step of constructing the basis for the Krylov subspace is performed only once and, using the shift-invariance property of the Krylov subspace, the subproblem for each shift in algorithm \[alg:shiftedsolve\] can be computed at a relatively low cost.
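The identity underlying this invariance is purely algebraic and easy to verify numerically; the following sketch (random positive definite test matrices and arbitrary shift values, chosen only for illustration) checks it to rounding error:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
Q = rng.standard_normal((n, n))
K = Q @ Q.T + n * np.eye(n)              # symmetric positive definite "stiffness"
M = np.diag(rng.uniform(0.5, 2.0, n))    # diagonal (lumped) "mass" matrix
sigma, tau = 1.5 + 0.3j, 2.0             # complex shift, real preconditioner shift

# (K + sigma*M)(K + tau*M)^{-1} = I + (sigma - tau) M (K + tau*M)^{-1}
Ktau_inv = np.linalg.inv(K + tau * M)
lhs = (K + sigma * M) @ Ktau_inv
rhs = np.eye(n) + (sigma - tau) * M @ Ktau_inv
err = np.linalg.norm(lhs - rhs)
print(err)
```

The check follows from writing $K+\sigma M = (K+\tau M) + (\sigma-\tau)M$ and multiplying on the right by $(K+\tau M)^{-1}$.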
**Algorithm \[alg:arnoldi\]** (Arnoldi iteration for $MK_\tau^{-1}$). *Input:* matrices $K$ and $M$, and a right hand side $b$.

1. Compute $\beta {\stackrel{\text{def}}{=}}{\lVert b \rVert_2}$ and $v_1 = b /\beta$.
2. Choose $\tau$ and factorize $K_\tau {\stackrel{\text{def}}{=}}K + \tau M$.
3. Define the $(m+1)\times m$ matrix $\bar{H}_m = \{ h_{i,k}\}_{1\leq i \leq m+1, 1 \leq k \leq m}$; set $\bar{H}_m = 0$.
4. For $k = 1,\dots,m$:
    1. Compute ${z}_k = K_\tau^{-1} v_k$ and $w_k := Mz_k$.
    2. For $i = 1,\dots,k$: set $h_{i,k} := w_k^*v_i$ and $w_k := w_k - h_{i,k}v_i$.
    3. Set $h_{k+1,k}:= \lVert w_k \rVert_2$; if $h_{k+1,k} = 0$, stop.
    4. Set $v_{k+1} = w_k/ h_{k+1,k}$.
The algorithm proceeds as follows: first, we run $m$ steps of the Arnoldi algorithm on the matrix $M K_\tau^{-1}$ with the starting vector $b$ to get a basis for the Krylov subspace ${{\cal{K}}_{m}(MK_\tau^{-1},b)}$. This is summarized in algorithm \[alg:arnoldi\]. At the end of $m$ steps of the Arnoldi process, we construct two sets of vectors $V_{m+1} = [v_1,\dots,v_{m+1}]$ and $Z_m = [z_1,\dots,z_m ]$, and an upper Hessenberg matrix $\bar{H}_m$ that satisfy the following relations,
$$\begin{aligned}
\label{eqn:arnoldi}
M Z_m = & \quad V_{m+1}\bar{H}_m \\
(K + \tau M) Z_m = & \quad V_m \label{eqn:zm}\end{aligned}$$
where, $ V_m^* V_m = I $. Multiplying the first equation by $(\sigma_j-\tau)$ and adding it to the second equation gives us
$$\label{eqn:shiftedarnoldi}
(K + \sigma_j M)Z_m = V_{m+1} \underbrace{\left(\begin{bmatrix} I \\ {0}\end{bmatrix} + (\sigma_j-\tau) \bar{H}_m \right)}_{{\stackrel{\text{def}}{=}}\bar{H}_m(\sigma_j; \tau)} = V_{m+1}\bar{H}_m(\sigma_j;\tau)$$
In algorithm \[alg:arnoldi\], $V_m$ forms a basis for the Krylov subspace ${\cal{K}}_m(MK_\tau^{-1},b)$. However, we seek solutions of the form $x_m = Z_my_m$ (with zero as the initial guess). Now $x_m \in {\text{span}\left\{Z_m\right\}}$, where ${\text{span}\left\{Z_m\right\}}$ is spanned by the vectors $z_k = K_\tau^{-1}v_k$ for $k=1,\dots,m$. By minimizing the residual norm over all possible vectors in ${\text{span}\left\{Z_m\right\}}$, we obtain the generalized minimum residual (GMRES) method for shifted systems, whereas by imposing the Petrov-Galerkin condition $r_m \perp{\text{span}\left\{V_m\right\}}$, we obtain the full orthogonalization method (FOM) for shifted systems. This is summarized in algorithm \[alg:shiftedsolve\]. As described, the algorithm appears to require storing the vectors $Z_m$; in practice, this is not necessary. We chose to present it this way to keep the notation consistent with the flexible algorithm described in subsection \[sec:flexibleprecond\].
**Algorithm \[alg:shiftedsolve\]** (FOM/GMRES for shifted systems). *Input:* matrices $K$ and $M$, a right hand side $b$, and shifts $\sigma \in \{ \sigma_1,\dots,\sigma_{n_f}\}$.

1. Choose $\tau$, build $K_\tau {\stackrel{\text{def}}{=}}K + \tau M$ and construct the preconditioner. Set $m = 1$.
2. Generate $V_{m+1}$, $\bar{H}_m$ and $Z_m$ using algorithm \[alg:arnoldi\].
3. For $j=1,\dots,n_f$: construct $\bar{H}_m(\sigma_j;\tau) {\stackrel{\text{def}}{=}}\begin{bmatrix} I \\ 0 \end{bmatrix} + (\sigma_j -\tau)\bar{H}_m$ and solve the subproblem
    - FOM: $$H_m(\sigma_j; \tau)y_m^{fom}(\sigma_j) = \beta e_1$$
    - GMRES: $$y_m^{gmres}(\sigma_j) {\stackrel{\text{def}}{=}}\arg\min_{ y_m \in \mathbb{C}^m} \lVert \beta e_1 - \bar{H}_m(\sigma_j;\tau) y_m \rVert_2$$
4. Construct the approximate solutions $x_m(\sigma_j) = Z_my_m(\sigma_j)$.
5. If not converged, set $m\leftarrow m + 1$ and repeat from step 2.
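A compact Python sketch of this procedure follows. The 1-D Laplacian-type stiffness matrix, lumped mass matrix, and shift values are illustrative stand-ins for the application matrices; in practice $K_\tau$ would be factorized once and the factors reused at every iteration:

```python
import numpy as np

def shifted_arnoldi(K, M, b, tau, m):
    """Arnoldi iteration for M @ inv(K + tau*M), as in algorithm alg:arnoldi."""
    n = b.size
    V = np.zeros((n, m + 1))
    Z = np.zeros((n, m))
    Hbar = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    Ktau = K + tau * M                   # factorize once and reuse in practice
    for k in range(m):
        Z[:, k] = np.linalg.solve(Ktau, V[:, k])
        w = M @ Z[:, k]
        for i in range(k + 1):           # modified Gram-Schmidt
            Hbar[i, k] = w @ V[:, i]
            w = w - Hbar[i, k] * V[:, i]
        Hbar[k + 1, k] = np.linalg.norm(w)
        V[:, k + 1] = w / Hbar[k + 1, k]
    return V, Z, Hbar, beta

n, m, tau = 100, 30, 1.0
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian stencil
M = np.diag(np.linspace(0.5, 2.0, n))                  # lumped mass matrix
rng = np.random.default_rng(2)
b = rng.standard_normal(n)

V, Z, Hbar, beta = shifted_arnoldi(K, M, b, tau, m)

# Solve the small GMRES subproblem for each shift from the same basis.
E = np.vstack([np.eye(m), np.zeros((1, m))])
residuals = []
for sigma in [0.5, 1.0, 2.0]:
    Hs = E + (sigma - tau) * Hbar        # Hbar_m(sigma; tau)
    y, *_ = np.linalg.lstsq(Hs, beta * np.eye(m + 1)[:, 0], rcond=None)
    x = Z @ y
    residuals.append(np.linalg.norm(b - (K + sigma * M) @ x) / np.linalg.norm(b))
print(residuals)
```

Note that the case $\sigma = \tau$ reduces the subproblem matrix to the padded identity, so the preconditioned system is solved exactly, consistent with the spectral analysis cited below.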
In [@meerbergen2003solution], the spectrum of $(K+\sigma M)(K+\tau M)^{-1}$ was analyzed and it was shown that the preconditioner $ K_\tau$ is well suited only for values of frequencies $\sigma$ near $\tau$. However, the values of $\sigma$ can be widely spread and a single preconditioner $K_\tau$ might not be a good choice for preconditioning all the systems. In section \[sec:solverexperiments\], we demonstrate an example in which a single preconditioner does not satisfactorily precondition all the systems. In [@gu2007flexible], the authors propose a flexible approach using a (possibly) different preconditioner at each iteration. We shall adopt this approach.
Flexible preconditioning {#sec:flexibleprecond}
------------------------
We now describe our flexible Krylov approach for solving shifted systems, based on [@gu2007flexible], which we have extended to the case $M\neq I$. Following [@saad1993flexible] and [@gu2007flexible], we use a variant of GMRES which allows a change in the preconditioner at each iteration. In algorithm \[alg:arnoldi\], we considered a fixed preconditioner of the form $K_\tau {\stackrel{\text{def}}{=}}K + \tau M$ for a fixed $\tau$. Suppose instead that we use a different preconditioner $K+\tau_k M$ at each iteration $k=1,\dots,m$; then, in place of the relation $(K + \tau M) Z_m = V_m$, we have $$(K+\tau_k M) z_k = v_k \qquad k=1,\dots,m \label{eqn:zmmod}$$ The algorithm is summarized in algorithm \[alg:arnoldimod\]. In addition to storing $V_m$, we also store the matrix $Z_m$. If at every step of the flexible Arnoldi algorithm we use the same value of $\tau$, we recover algorithm \[alg:arnoldi\]. We have $Z_m = [z_1,\dots,z_m]$, $\bar{H}_m = \{h_{ik}\}_{1\leq i \leq m+1,1\leq k \leq m}$ and $V_{m} = [v_1,\dots,v_{m}]$, which satisfies $V_m^*V_m = I_m$. In addition, we also have the following relations
$$\begin{aligned}
MZ_m = & \quad V_{m+1}\bar{H}_m \label{eqn:arnoldimod} \\
KZ_m + MZ_m T_m = & \quad V_m \label{eqn:zmvmrelation}\end{aligned}$$
where $T_m = \text{diag}\{ \tau_1,\dots,\tau_m\}$. Multiplying the first of these relations on the right by $(\sigma_jI_m - T_m)$ and adding the second, we obtain, for $j = 1,\dots,n_f$, $$(K + \sigma_jM )Z_m = V_{m+1} \underbrace{\left( \begin{bmatrix} I \\ 0 \end{bmatrix} + \bar{H}_m (\sigma_jI_m - T_m) \right)}_{{\stackrel{\text{def}}{=}}\bar{H}(\sigma_j; T_m)} = V_{m+1}\bar{H}_m(\sigma_j;T_m) \label{eqn:shiftedarnoldimod}$$
**Algorithm \[alg:arnoldimod\]** (flexible Arnoldi iteration). *Input:* matrices $K$ and $M$, a right hand side $b$, and preconditioner shifts $\tau_k$, $k=1,\dots,m$.

1. Compute $\beta {\stackrel{\text{def}}{=}}{\lVert b \rVert_2}$ and $v_1 = b /\beta$.
2. Define the $(m+1)\times m$ matrix $\bar{H}_m = \{h_{i,k} \}_{1\leq i \leq m+1, 1 \leq k \leq m}$; set $\bar{H}_m = 0$.
3. For $k = 1,\dots,m$:
    1. Solve $(K+\tau_kM) {z}_k = v_k$ and compute $w_k := Mz_k$.
    2. For $i = 1,\dots,k$: set $h_{i,k} := w_k^*v_i$ and $w_k := w_k - h_{i,k}v_i$.
    3. Set $h_{k+1,k}:= \lVert w_k \rVert_2$; if $h_{k+1,k} = 0$, stop.
    4. Set $v_{k+1} = w_k/ h_{k+1,k}$.
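The flexible iteration and the two matrix relations it satisfies can be sketched and verified as follows. The test matrices are the same illustrative Laplacian/lumped-mass pair as before, and the two preconditioner shifts applied in blocks are arbitrary example values:

```python
import numpy as np

def flexible_arnoldi(K, M, b, taus):
    """Flexible Arnoldi: the preconditioner K + tau_k*M may change each step."""
    n, m = b.size, len(taus)
    V = np.zeros((n, m + 1))
    Z = np.zeros((n, m))
    Hbar = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for k, tau_k in enumerate(taus):
        Z[:, k] = np.linalg.solve(K + tau_k * M, V[:, k])
        w = M @ Z[:, k]
        for i in range(k + 1):           # modified Gram-Schmidt
            Hbar[i, k] = w @ V[:, i]
            w = w - Hbar[i, k] * V[:, i]
        Hbar[k + 1, k] = np.linalg.norm(w)
        V[:, k + 1] = w / Hbar[k + 1, k]
    return V, Z, Hbar, beta

n = 100
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.diag(np.linspace(0.5, 2.0, n))
rng = np.random.default_rng(3)
b = rng.standard_normal(n)

taus = [0.5] * 15 + [5.0] * 15           # two preconditioners, applied in blocks
m = len(taus)
V, Z, Hbar, beta = flexible_arnoldi(K, M, b, taus)

# Verify M Z_m = V_{m+1} Hbar_m  and  K Z_m + M Z_m T_m = V_m.
T = np.diag(taus)
err1 = np.linalg.norm(M @ Z - V @ Hbar)
err2 = np.linalg.norm(K @ Z + M @ Z @ T - V[:, :m])

# FGMRES subproblem for one shift sigma.
sigma = 2.0
E = np.vstack([np.eye(m), np.zeros((1, m))])
Hs = E + Hbar @ (sigma * np.eye(m) - T)  # Hbar_m(sigma; T_m)
y, *_ = np.linalg.lstsq(Hs, beta * np.eye(m + 1)[:, 0], rcond=None)
x = Z @ y
res = np.linalg.norm(b - (K + sigma * M) @ x) / np.linalg.norm(b)
print(err1, err2, res)
```

The relation checks confirm the flexible analogues of the fixed-preconditioner Arnoldi identities, while the final residual shows that one flexible basis still serves the shifted solve.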
We are now in a position to derive a FOM/GMRES algorithm for shifted systems with flexible preconditioning. We seek approximate solutions of the form $x_m(\sigma_j) = Z_m y_m(\sigma_j)$, which lie in the span of the columns of $Z_m$. Strictly speaking, $\text{span}\{Z_m\}$ is no longer a Krylov subspace. By minimizing the residual norm over all possible vectors in ${\text{span}\left\{Z_m\right\}}$, we obtain the flexible generalized minimum residual (FGMRES) method for shifted systems, whereas by imposing the Petrov-Galerkin condition $r_m \perp{\text{span}\left\{V_m\right\}}$, we obtain the flexible full orthogonalization method (FFOM) for shifted systems. This is summarized in algorithm \[alg:shiftedsolvemod\]. The residuals can be computed as $$\begin{aligned}
\label{eqn:fomgmresresidual}
r_m(\sigma_j) = & \quad b - (K+\sigma_jM)x_m(\sigma_j) \\ \nonumber
= & \quad V_{m+1}\left(\beta e_1 - \bar{H}_m(\sigma_j;T_m)y_m(\sigma_j)\right) \\ \nonumber\end{aligned}$$
**Algorithm \[alg:shiftedsolvemod\]** (flexible FOM/GMRES for shifted systems). *Input:* matrices $K$ and $M$, a right hand side $b$, and shifts $\sigma \in \{ \sigma_1,\dots,\sigma_{n_f}\}$. Set $m=1$.

1. Choose $T_m = \text{diag}\{\tau_1,\dots,\tau_m\}$.
2. Generate $V_{m+1}$, $\bar{H}_m$ and $Z_m$ using algorithm \[alg:arnoldimod\].
3. For $j=1,\dots,n_f$: construct $\bar{H}_m(\sigma_j;T_m){\stackrel{\text{def}}{=}}\begin{bmatrix} I \\ 0 \end{bmatrix} + \bar{H}_m(\sigma_jI-T_m)$ and solve the subproblem
    - FOM: $$H_m(\sigma_j; T_m)y_m^{fom}(\sigma_j) = \beta e_1$$
    - GMRES: $$y_m^{gmres}(\sigma_j) {\stackrel{\text{def}}{=}}\arg\min_{ y_m \in \mathbb{C}^m} \lVert \beta e_1 - \bar{H}_m(\sigma_j;T_m) y_m \rVert_2$$
4. Construct the approximate solutions $x_m(\sigma_j) = Z_m y_m(\sigma_j)$.
5. If not converged, set $m\leftarrow m+1$ and repeat from step 2.
### Selecting values of $\tau_k$
In algorithm \[alg:arnoldimod\], at each iteration we solve a system of the form $(K+\tau_k M)z_k = v_k$ for $k=1,\dots,m$. This cost can be high if the dimension of the Arnoldi subspace $m$ is large and a different preconditioner $K+\tau_kM$ is used at every iteration. In practice, it is not necessary to form and factorize $m$ systems corresponding to different $\tau_k$: in applications, we only need to choose a few different $\tau_k$ that cover the entire range of the parameters $\sigma_j$. This was also described in [@gu2007flexible]. Each preconditioner system is solved using a direct solver, and since only a few values of $\tau_k$ are chosen, the corresponding matrices can be formed and factorized once. Thus, the computational cost will not be affected greatly even if the number of frequencies $n_f$ is large.
Let $\bar{\tau} = \{\bar{\tau}_1,\dots,\bar{\tau}_{n_p} \}$ be the set of values that $\tau_k$ can take. In other words, we take $n_p$ distinct preconditioners. Then, the first $m_1$ values of $\tau_k$ are assigned $\bar{\tau}_1$, the next $m_2$ values of $\tau_k$ are assigned $\bar{\tau}_2$ and so on. We also have $m = \sum_{k=1}^{n_p}m_k$.
### Restarting {#sec:restarting}
As the dimension of the subspace $m$ increases, the computational and memory costs increase significantly. A well known solution to this problem is restarting. The old basis is discarded and the Arnoldi algorithm is restarted on a new residual. However, for shifted systems, in order to preserve the shift-invariant property, one needs to ensure collinearity of the residuals of the shifted systems. For FOM the residuals are naturally collinear and the Arnoldi algorithm can be restarted by scaling each residual by some scalar that depends on the shift [@simoncini2003restarted; @gu2007flexible]. For GMRES, the approach used by [@frommer1998restarted] was extended to shifted systems with multiple preconditioners by [@gu2007flexible]. We did not explore this issue further, and the reader is referred to [@gu2007flexible] for further details.
Generalized eigenvalue problem and error estimates {#sec:geneigen}
==================================================
We start by computing the approximate eigenvalues and eigenvectors for the matrix $(K+\sigma M)M^{-1}$. Using estimates for approximate eigenvalues and eigenvectors we derive expressions for the convergence of the flexible algorithms. The approximate eigenvalues are called Ritz values. For convenience, we drop the subscript on the shifted frequency, i.e., use $\sigma$ instead of $\sigma_j$ where, $j=1,\dots,n_f$.
\[prop:ritz\] Let $Z_m$, $\bar{H}_m$ and $V_{m+1}$ be computed according to algorithm \[alg:arnoldimod\]. Calculate the eigenpairs of the generalized eigenvalue problem $$\label{eqn:genritz}
H_m(\sigma;T_m) f = \theta H_m f$$ Then, the Ritz pair $\left( \theta, u {\stackrel{\text{def}}{=}}V_{m+1}\bar{H}_mf \right)$ satisfy the Petrov-Galerkin condition [@saad1992numerical section 4.3.3] $$\label{eqn:petrovgalerkin}
(K + \sigma M)M^{-1}u - \theta u \perp {\text{span}\left\{V_m\right\}} \qquad u \in {\text{span}\left\{V_{m+1}\bar{H}_m\right\}}$$
We begin by combining the two flexible Arnoldi relations: eliminating $Z_m$ between them and adding $\sigma V_{m+1}\bar{H}_m$ to both sides, we have $$\label{eqn:rationalarnoldi}
V_{m+1}\bar{H}_m(\sigma;T_m) = (K+\sigma M)M^{-1}V_{m+1}\bar{H}_m$$ Now, consider the residual of the eigenvalue calculation for the $k$th eigenpair, $k=1,\dots,m$:
$$\begin{aligned}
\label{eqn:residualritzeigen}
r^\text{eig}_k(\sigma) = &\quad (K+\sigma M)M^{-1}V_{m+1}\bar{H}_m f_k - \theta_k V_{m+1}\bar{H}_mf_k \\ \nonumber
= & \quad V_{m+1}\bar{H}_m(\sigma;T_m)f_k - \theta_k V_{m+1}\bar{H}_mf_k \\ \nonumber
= &\quad V_m\left(H_m(\sigma;T_m)f_k - \theta_kH_mf_k \right) - h_{m+1,m}v_{m+1}(\tau_m +\theta_k - \sigma)e_m^*f_k \\ \nonumber
= & \quad - h_{m+1,m}v_{m+1}(\tau_m +\theta_k - \sigma)e_m^*f_k \\ \nonumber\end{aligned}$$
It follows that $u \in {\text{span}\left\{V_{m+1}\bar{H}_m\right\}}$ and $(K+\sigma M)M^{-1}u - \theta u \perp {\text{span}\left\{V_m\right\}}$. In other words, $(\theta, u)$ satisfies the Petrov-Galerkin condition and is an approximate eigenpair of $(K+\sigma M)M^{-1}$.
Furthermore, we define $\rho_k {\stackrel{\text{def}}{=}}{\lVert (K+\sigma M)M^{-1}u_k-\theta_k u_k \rVert_2}$, which is the residual norm of the $k$th eigenvalue calculations. When the residual of the eigenvalue calculations $\rho_k$ is small, say machine precision, the Ritz values are a good approximation to the eigenvalues. It is readily verified that the eigenvalues $\lambda$ of $KM^{-1}$ (and the generalized eigenvalue problem $Kx=\lambda Mx$) are related to the eigenvalues $\lambda(\sigma)$ of $(K+\sigma M)M^{-1}$ by the relation $\lambda(\sigma) = \lambda + \sigma$. The importance of the convergence of Ritz values to the convergence of the Krylov subspace solver using FOM can be established by the following result.
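The relation $\lambda(\sigma) = \lambda + \sigma$ is immediate from $(K+\sigma M)M^{-1} = KM^{-1} + \sigma I$ and is easy to confirm numerically; the following sketch (random positive definite test matrices of arbitrary size) compares the sorted spectra:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40
Q = rng.standard_normal((n, n))
K = Q @ Q.T + n * np.eye(n)              # symmetric positive definite
M = np.diag(rng.uniform(0.5, 2.0, n))    # diagonal positive definite
sigma = 1.7

# Eigenvalues of (K + sigma*M) M^{-1} are those of K M^{-1}, shifted by sigma.
Minv = np.linalg.inv(M)
lam = np.sort(np.linalg.eigvals(K @ Minv).real)
lam_shift = np.sort(np.linalg.eigvals((K + sigma * M) @ Minv).real)
gap = np.max(np.abs(lam_shift - (lam + sigma)))
print(gap)
```

Since $KM^{-1}$ is similar to the symmetric matrix $M^{-1/2}KM^{-1/2}$ here, the spectra are real and the sorted comparison is well defined.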
\[prop:fom\] Assume the requirements of proposition \[prop:ritz\]. Further, assume that $F$, the matrix of generalized eigenvectors, is invertible, so that the generalized eigendecomposition $H_m(\sigma;T_m) = H_mF\Theta F^{-1}$ exists. The residual using FFOM satisfies the following inequality: $$\label{eqn:fomresidualineq} {\lVert r_m(\sigma) \rVert_2} \leq \sum_{k=1}^m \rho_k\left|\frac{\sigma -\tau_m}{\theta_k +\tau_m-\sigma}\right| |\theta_k^{-1}| |s_k|$$ where $s_k {\stackrel{\text{def}}{=}}e_k^*F^{-1}(H_m^{-1}\beta e_1)$ and $\rho_k$ is the residual norm of the eigenvalue calculation, defined above.
We start from the expression for the residual, $$r_m(\sigma) = V_m( \beta e_1 - H_m(\sigma;T_m)y_m) - v_{m+1}(\sigma-\tau_m)h_{m+1,m}e_m^*y_m$$ Since, for flexible FOM for shifted systems, $y_m = H_m(\sigma;T_m)^{-1}\beta e_1$, the first term in the above expression is zero and we have $$\label{eqn:fomresidualsimplified}
r_m(\sigma) = -v_{m+1}(\sigma-\tau_m)h_{m+1,m}e_m^*H_m(\sigma;T_m)^{-1}\beta e_1$$ Now, use the generalized eigendecomposition $H_m(\sigma;T_m) = H_m F\Theta F^{-1}$, so that $H_m(\sigma;T_m)^{-1} = F\Theta^{-1} F^{-1}H_m^{-1}$. We can write this as a sum of rank-$1$ terms, $$H_m(\sigma;T_m)^{-1} = \sum_{k=1}^m \theta_k^{-1}f_k e_k^*F^{-1}H_m^{-1}$$ where $e_k$ is the $k$-th canonical basis vector and $f_k$ is the $k$-th column of $F$ for $k=1,\dots,m$. Using the expression for the residual of the eigenvalue calculation $r^\text{eig}_k(\sigma)$ derived in proposition \[prop:ritz\] and the expansion above, $$\begin{aligned}
r_m(\sigma) \quad = & \quad -\sum_{k=1}^mv_{m+1}h_{m+1,m} (\sigma -\tau_m) e_m^*f_k\theta_k^{-1}s_k \\ \nonumber
= & \quad \sum_{k=1}^m r^\text{eig}_k(\sigma) \frac{\sigma - \tau_m}{\tau_m +\theta_k - \sigma} \theta_k^{-1} s_k \\ \nonumber\end{aligned}$$ where, $s_k{\stackrel{\text{def}}{=}}e_k^*F^{-1}H_m^{-1}\beta e_1$. The proof follows from the properties of vector norms.
The inequality provides insight into the importance of the accuracy of approximate eigenpairs for the convergence of flexible FOM for shifted systems. We follow the arguments in [@meerbergen2003solution]. In particular, the residual is very small if $\sigma \approx \tau_m$, or if $|\theta_k^{-1}|$, $|s_k|$ or $\rho_k$ is small. We shall ignore the case that $\sigma \approx \tau_m$ for further analysis, i.e. that the shifted system is almost exactly the preconditioned system. The eigenvalue residual norm $\rho_k$ being small implies that the Ritz values are a good approximation to the eigenvalues of $(K+\sigma M)M^{-1}$. This implies that all the eigenvalues in this interval have been computed fairly accurately. We now discuss when $|\theta_k^{-1}|$ is large. When all the values of $\tau_k$ are equal to $\tau$, the approximate eigenvalues $\theta_k$ of $KM^{-1}$ are related to approximate eigenvalues $\lambda_k$ of the preconditioned system $(K+\sigma M)(K+\tau M)^{-1}$ by the Cayley transformation $\frac{\lambda_k + \sigma}{\lambda_k + \tau}$. Therefore, $|\theta_k^{-1}|$ is large only if $|\lambda_k + \sigma| \ll |\lambda_k + \tau|$. The term $s_k$ can be rewritten as $s_k = e_k^*F^{-1}H_m^{-1}V_m^*V_m\beta e_1 = e_k^*F^{-1}H_m^{-1}V_m^* b$. It is readily verified that $e_k^*F^{-1}H_m^{-1}V_m^*$ is orthogonal to all other approximate eigenvectors, and thus $s_k$ can be interpreted as the component of the right hand side $b$ in the direction of the approximate eigenvector. In other words, $s_k$ is small when the solution $x_m(\sigma)$ has a small component in the direction of $b$.
The analysis for the convergence of flexible FOM for shifted systems can be extended to flexible GMRES as well. The following result bounds the difference in the residuals obtained from $m$ steps using flexible FOM and flexible GMRES.
\[prop:gmres\] Let $Z_m$, $\bar{H}_m$ and $V_{m+1}$ be computed according to algorithm \[alg:arnoldimod\]. Further, from algorithm \[alg:shiftedsolvemod\] we define the flexible FOM quantities $y_m^\text{fom}(\sigma) = H_m(\sigma;T_m)^{-1}\beta e_1$, residual $r_m^\text{fom}(\sigma) = V_{m+1}(\beta e_1-\bar{H}_m(\sigma;T_m)y_m^\text{fom}(\sigma))$ and flexible GMRES quantities $y_m^\text{gmres}(\sigma) = \arg\min_{y\in \mathbb{C}^m}{\lVert \beta e_1-\bar{H}_m(\sigma;T_m)y \rVert_2}$, residual $r_m^\text{gmres}(\sigma) = V_{m+1}(\beta e_1-\bar{H}_m(\sigma;T_m)y_m^\text{gmres}(\sigma))$. Further, assume that $H_m(\sigma;T_m)$ is invertible. We have the following inequality $$\label{eqn:fomgmresresdiff}
{\lVert r_m^\text{fom}(\sigma) - r_m^\text{gmres}(\sigma) \rVert_2} \leq \frac{\alpha(1+\alpha)}{1+\alpha^2}{\lVert r_m^\text{fom} \rVert_2}$$ where, $\eta {\stackrel{\text{def}}{=}}h_{m+1,m}(\sigma - \tau_m)$ and $\alpha {\stackrel{\text{def}}{=}}{\lVert \eta H_m^{-*}(\sigma;T_m)e_m \rVert_2}$.
We begin with the following observation from equation : $r_m^\text{fom}(\sigma) = -\eta e_m^*y_m^\text{fom}(\sigma) v_{m+1}$ and $$r_m^\text{fom}(\sigma) - r_m^\text{gmres}(\sigma) = -V_{m+1}\bar{H}_m(\sigma;T_m) \left( y_m^\text{fom}(\sigma) - y_m^\text{gmres} (\sigma)\right)$$ Next, we look at the solution to the GMRES least squares problem, which can be written as the normal equations $$\bar{H}_m^*(\sigma;T_m)\bar{H}_m(\sigma;T_m)y_m^\text{gmres} = \bar{H}_m^*(\sigma;T_m)\beta e_1 = H_m^*(\sigma;T_m)\beta e_1$$ This can be rewritten as $$\begin{aligned}
\left(H_m^*(\sigma;T_m)H_m(\sigma;T_m)+\eta^2e_me_m^*\right)y_m^\text{gmres}(\sigma) =& \quad H_m^*(\sigma;T_m)\beta e_1 \\ \nonumber
\left(H_m(\sigma;T_m) + \eta^2H_m^{-*}e_me_m^*\right)y_m^\text{gmres}(\sigma) = & \quad \beta e_1 \end{aligned}$$ In other words, the solution to the GMRES subproblem is a rank-one perturbation of the FOM subproblem. Using the Sherman-Morrison identity $$y^\text{gmres}_m(\sigma) = \underbrace{H_m(\sigma;T_m)^{-1}\beta e_1}_{ = y^\text{fom}_m(\sigma) } - H_m(\sigma;T_m)^{-1}\eta H_m^{-*}(\sigma;T_m)e_m \frac{(\eta e_m^*H_m(\sigma;T_m)^{-1}\beta e_1)}{ 1+ {\lVert \eta H_m^{-*}(\sigma;T_m)e_m \rVert_2}^2}$$ Then, the residual difference between FOM and GMRES can be bounded as $${\lVert r_m^\text{fom}(\sigma) - r_m^\text{gmres}(\sigma) \rVert_2} \leq {\lVert \bar{H}_m(\sigma;T_m)H_m(\sigma;T_m)^{-1} \rVert_2} \frac{\alpha|\eta e_m^*y_m^\text{fom}(\sigma)|}{1+ \alpha^2}$$ The inequality follows from the following observations ${\lVert \bar{H}_m(\sigma;T_m)H_m(\sigma;T_m)^{-1} \rVert_2} \leq 1 + {\lVert \eta H_m^{-*}(\sigma;T_m)e_m \rVert_2}$ and from we have ${\lVert r_m^\text{fom} \rVert_2} = |\eta e_m^*y_m^\text{fom}(\sigma)|$.
If ${\lVert \eta H_m^{-*}(\sigma;T_m)e_m \rVert_2}$ is large, then the difference between the two residuals can be large. This happens either when $\eta$ is large or $H_m(\sigma;T_m)$ is close to singular. In this case, flexible GMRES can stagnate and further progress may not occur. We now discuss situations in which breakdown occurs, i.e. $h_{m+1,m} =0$. If $h_{m+1,m} \neq 0$ and $H_m$ is full rank, then it can be shown from equation that ${\text{span}\left\{MZ_m\right\}} \subseteq {\text{span}\left\{V_{m+1}\right\}} $ and from equation , it follows that ${\text{span}\left\{(K+\sigma_j M)Z_m\right\}} \subseteq {\text{span}\left\{V_{m+1}\right\}}$. Further, $h_{m+1,m} = 0$ if and only if $x_m(\sigma_j)$ is the exact solution and $H_m(\sigma_j;T_m)$ is non-singular. The argument closely follows [@saad1993flexible] and will not be repeated here.
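The bound in proposition \[prop:gmres\] can be checked numerically on a small, randomly generated subproblem. The sketch below is illustrative only: the Hessenberg block $H$, the scalar $\eta$ and $\beta$ are arbitrary stand-ins, not quantities from our experiments. It solves the FOM subproblem directly and the GMRES subproblem as a least squares problem, and verifies the inequality.

```python
import numpy as np

rng = np.random.default_rng(0)
m, beta = 8, 1.0

# Stand-in for H_m(sigma; T_m): random upper Hessenberg, shifted to be invertible.
H = np.triu(rng.standard_normal((m, m)), k=-1) + 3.0 * np.eye(m)
eta = 0.5  # plays the role of h_{m+1,m}(sigma - tau_m)

e_m = np.zeros(m); e_m[-1] = 1.0
Hbar = np.vstack([H, eta * e_m])          # (m+1) x m block \bar{H}_m(sigma; T_m)
e1 = np.zeros(m + 1); e1[0] = beta

# FOM subproblem: H y = beta e1; GMRES subproblem: least squares with Hbar.
y_fom = np.linalg.solve(H, beta * np.eye(m)[:, 0])
y_gmres = np.linalg.lstsq(Hbar, e1, rcond=None)[0]

# Since V_{m+1} is orthonormal, residual norms equal coordinate norms.
r_fom = e1 - Hbar @ y_fom
r_gmres = e1 - Hbar @ y_gmres
diff = np.linalg.norm(r_fom - r_gmres)

# Bound from the proposition: alpha(1+alpha)/(1+alpha^2) * ||r_fom||_2.
alpha = np.linalg.norm(eta * np.linalg.solve(H.T, e_m))
bound = alpha * (1 + alpha) / (1 + alpha**2) * np.linalg.norm(r_fom)
assert diff <= bound + 1e-12
```

Since the derivation in the proof is exact algebra (Sherman-Morrison plus norm inequalities), the assertion holds for any invertible $H$ and scalar $\eta$.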
Inexact preconditioning {#sec:inexact}
=======================
We observe that to compute vectors $z_k$ for $k =1,\dots,m$ in equation , we have to invert matrices of the form $K+\tau_kM$. When the problem sizes are large, iterative methods may be necessary to invert such matrices, resulting in a variable preconditioning procedure in which a different preconditioning operator is applied at each iteration. More precisely, for $k=1,\dots,m$, $$\label{eqn:zmmoditer}
\tilde{z}_k \approx (K + \tau_k M)^{-1}v_k \qquad p_k {\stackrel{\text{def}}{=}}v_k - (K +\tau_k M) \tilde{z}_k$$ where, $p_k$ is the residual that results after the iterative solver has been terminated. To simplify the discussion, we assume that the termination criteria for the iterative solver is such that ${\lVert p_k \rVert_2} \leq \varepsilon \underbrace{{\lVert v_k \rVert_2}}_{= 1} = \varepsilon$, for some $\varepsilon$. We closely follow the approach in [@simoncini2003theory]. The new flexible Arnoldi relationship is now,
$$\label{eqn:flexshiftedarnoldimod}
(K + \sigma_j M)\tilde{Z}_m + P_m = V_{m+1}\bar{H}_m(\sigma_j;T_m) \qquad j=1,\dots,n_f$$
where, $\tilde{Z}_m = [\tilde{z}_1,\dots,\tilde{z}_m]$ and $P_m = [p_1,\dots,p_m]$ and $\bar{H}_m(\sigma_j;T_m)$ is defined in equation . By using inexact applications of the preconditioner, the vectors $v_k$ for $k=1,\dots,m+1$ are no longer the same vectors generated from algorithm \[alg:arnoldimod\]. In particular, ${\text{span}\left\{V_m\right\}}$ is no longer a Krylov subspace generated by $A$. However, by construction, $V_m$ is still an orthogonal matrix.
Having constructed the matrix $\tilde{Z}_m$, we seek approximate solutions spanned by the columns of $\tilde{Z}_m$, i.e., solutions of the form $x_m(\sigma_j) = \tilde{Z}_m y_m(\sigma_j)$. The true residual corresponding to the approximation solution $x_m(\sigma_j) = \tilde{Z}_m y_m(\sigma_j)$ can be computed as follows, $$\begin{aligned}
r_m(\sigma_j)\quad = & \quad b - (K+\sigma_jM)\tilde{Z}_my_m(\sigma_j) \\
= & \quad b - V_{m+1}\bar{H}_m(\sigma_j;T_m)y_m(\sigma_j) + P_m y_m(\sigma_j) \\
= & \quad V_{m+1}\left(\beta e_1 - \bar{H}_m(\sigma_j;T_m)y_m(\sigma_j)\right) + P_my_m(\sigma_j) \end{aligned}$$
The columns of the matrix $P_m$ are not computed in practice because they require an additional matrix-vector product with $K+\tau_k M$. As a result, computing the true residual is expensive. However, in order to monitor the convergence of the iterative solver, we need bounds on the true residual. Using such bounds, we can derive stopping criteria for the flexible Krylov solvers for shifted systems with inexact preconditioning. To do this, we first derive bounds on the norm of the inexact residual $\tilde{r}_m(\sigma_j)$ and a bound on the difference between the true and the inexact residuals ${\lVert r_m(\sigma_j)-\tilde{r}_m(\sigma_j) \rVert_2}$. A simple application of the triangle inequality for vector norms then leads to the desired bound on the true residual.
The inexact residual $\tilde{r}_m(\sigma_j)$ is defined as $$\tilde{r}_m(\sigma_j) {\stackrel{\text{def}}{=}}V_{m+1}\left(\beta e_1 - \bar{H}_m(\sigma_j;T_m)y_m(\sigma_j)\right)$$ The expression for $\tilde{r}_m(\sigma_j)$ is the same as the exact residual $r_m(\sigma_j)$ with the error due to early termination of the inner iterative solver, i.e., $P_my_m(\sigma_j)$, omitted. It is easy to verify that ${\lVert \tilde{r}_m(\sigma_j) \rVert_2} = {\lVert \beta e_1 - \bar{H}_m(\sigma_j;T_m)y_m(\sigma_j) \rVert_2} $. We now derive an expression for the norm of the difference between the true and the inexact residuals,
$$\begin{aligned}
{\lVert r_m(\sigma_j)-\tilde{r}_m(\sigma_j) \rVert_2} & = {\lVert P_my_m(\sigma_j) \rVert_2} = {\lVert \sum_{k=1}^m e_k^Ty_m(\sigma_j)p_k \rVert_2} \nonumber\\
& \leq \sum_{k=1}^m|e_k^Ty_m(\sigma_j)|{\lVert p_k \rVert_2} \nonumber \\
& \leq \varepsilon\sum_{k=1}^m|e_k^Ty_m(\sigma_j)| = \varepsilon \lVert y_m(\sigma_j)\rVert_1 \label{eqn:resdiff}\end{aligned}$$
Finally, the norm of the true residual $r_m(\sigma_j)$ can be bounded using the following relation $$\begin{aligned}
{\lVert r_m(\sigma_j) \rVert_2} \leq & \quad {\lVert r_m(\sigma_j)-\tilde{r}_m(\sigma_j) \rVert_2} + {\lVert \tilde{r}_m(\sigma_j) \rVert_2} \nonumber \\
\leq & \quad \varepsilon {\lVert y_m(\sigma_j) \rVert_{1}} + {\lVert \beta e_1 - \bar{H}_m(\sigma_j;T_m)y_m(\sigma_j) \rVert_2} \label{eqn:trueresidual}\end{aligned}$$
This bound on the true residual gives us a convenient expression for monitoring the convergence of the iterative solver for each system corresponding to a given shift $\sigma_j$.
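The bound can be turned into an inexpensive per-shift stopping test. A minimal sketch in Python (the function name and calling convention are our own for illustration, not part of any solver library; the random data below merely exercises the formula):

```python
import numpy as np

def true_residual_bound(Hbar_sigma, y, beta, eps):
    # Computable bound on the true residual norm for one shift:
    #   eps * ||y||_1  +  || beta e1 - Hbar(sigma; T_m) y ||_2
    e1 = np.zeros(Hbar_sigma.shape[0]); e1[0] = beta
    inexact = np.linalg.norm(e1 - Hbar_sigma @ y)
    return eps * np.linalg.norm(y, 1) + inexact

# Illustrative data: a random (m+1) x m block and subproblem solution.
rng = np.random.default_rng(0)
m = 10
Hbar = rng.standard_normal((m + 1, m))
y = rng.standard_normal(m)
bnd = true_residual_bound(Hbar, y, beta=1.0, eps=1e-12)
```

In an outer solver loop one would stop the iteration for shift $\sigma_j$ once this bound, relative to ${\lVert b \rVert_2}$, drops below the outer tolerance.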
We can also derive specialized results for the flexible FOM/GMRES for shifted systems with inexact preconditioning. The approach uses an argument similar to [@simoncini2003theory Proposition 4.1]. Let $r^\text{fom}_m(\sigma_j) {\stackrel{\text{def}}{=}}b - (K+\sigma_j M)\tilde{Z}_my_m^\text{fom}(\sigma_j)$ and $r^\text{gmres}(\sigma_j) {\stackrel{\text{def}}{=}}b - (K+\sigma_j M)\tilde{Z}_my_m^\text{gmres}(\sigma_j)$ be the true residuals resulting from flexible FOM and flexible GMRES for shifted systems, respectively. We have the following error bounds
$$\begin{aligned}
{\lVert V_m^*r_m^\text{fom}(\sigma_j) \rVert_2} \leq & \quad \varepsilon {\lVert y_m^\text{fom}(\sigma_j) \rVert_{1}} \\
{\lVert \left(V_{m+1}\bar{H}_m(\sigma_j;T_m)\right)^*r_m^\text{gmres} \rVert_2} \leq & \quad \varepsilon {\lVert \bar{H}_m(\sigma_j;T_m) \rVert_2} {\lVert y_m^\text{fom}(\sigma_j) \rVert_{1}} \end{aligned}$$
One of the main results of [@simoncini2003theory] is a theory for why the residual norm due to inexact preconditioning can be allowed to grow at later outer iterations. In particular, they provide computable bounds for monitoring the outer Krylov solver residual when the termination criterion for the inner preconditioner is allowed to change at each iteration, from which efficient termination criteria can be derived. We have not pursued this issue and the reader is referred to [@simoncini2003theory] for further details.
Application to Oscillatory Hydraulic Tomography {#sec:application}
===============================================
In this section, we briefly review the application of Oscillatory Hydraulic Tomography and the Geostatistical approach for solving the resulting inverse problem.
The Forward Problem {#sec:forward}
-------------------
The equations governing ground water flow through an aquifer for a given domain $\Omega$ with boundary $\partial \Omega = \partial \Omega_D \cup \partial \Omega_N, \partial \Omega_D \cap \partial \Omega_N = \emptyset$ are given by,
$$\begin{aligned}
\label{eqn:timedomain}
S_s({\textbf{x}}) \frac{\partial \phi({\textbf{x}},t)}{\partial t} - \nabla \cdot \left(K({\textbf{x}}) \nabla \phi({\textbf{x}},t)\right) & = q({\textbf{x}},t), & {\textbf{x}}&\in \Omega \\ \nonumber
\phi({\textbf{x}},t) & = 0, & {\textbf{x}}& \in \partial \Omega_D \\ \nonumber
\nabla \phi({\textbf{x}},t) \cdot \textbf{n} & = 0, & {\textbf{x}}&\in \partial \Omega_N \\ \nonumber \end{aligned}$$
where $S_s({\textbf{x}})$ \[L$^{-1}$\] represents the specific storage and $K ({\textbf{x}})$ \[L/T\] represents the hydraulic conductivity. In the case of one source oscillating at a fixed frequency $\omega$ \[radians/T\] , $q({\textbf{x}},t)$ is given by $$\label{eq:oscillation}
q({\textbf{x}},t) = Q_0\delta({\textbf{x}}-{\textbf{x}}_s) \cos(\omega t)$$ To model periodic simulations, we will assume the source to be a point source oscillating at a known frequency $\omega$ and peak amplitude $Q_0$ at the source location ${\textbf{x}}_s$. In the case of multiple sources oscillating at distinct frequencies, each source is modeled independently with its corresponding frequency as in , and then combined to produce the total response of the aquifer.
Since the equation is linear and the forcing is time-harmonic, we assume that the solution (after the initial transients have decayed) can be represented as $$\label{eqn:measurementequation}
\phi({\textbf{x}},t) = \Re(\Phi({\textbf{x}}) \exp(i\omega t) )$$ where $\Re(\cdot)$ denotes the real part and $\Phi({\textbf{x}})$, known as the phasor, is a function of space only and contains information about the phase and amplitude of the signal. Assuming this form of the solution, the equations in the phasor domain are $$\begin{aligned}
\label{eqn:phasor1}
- \nabla \cdot \left(K({\textbf{x}}) \nabla \Phi({\textbf{x}})\right) + i\omega S_s({\textbf{x}}) \Phi({\textbf{x}}) = & \quad Q_0\delta({\textbf{x}}-{\textbf{x}}_s), & {\textbf{x}}\in \Omega \\ \nonumber
\Phi({\textbf{x}}) = & \quad 0, &\quad {\textbf{x}}\in \partial \Omega_D \\ \nonumber
\nabla \Phi({\textbf{x}}) \cdot \textbf{n} = & \quad 0, &\qquad {\textbf{x}}\in \partial \Omega_N \nonumber\end{aligned}$$ The differential equation along with the boundary conditions are discretized using FEniCS [@LoggMardalEtAl2012a; @LoggWells2010a; @LoggWellsEtAl2012a] by using standard linear finite elements. Solving it for several frequencies results in system of shifted equations of the form $$\label{eqn:genshifted}
\left( K + \sigma_j M \right) x_j = b \qquad j=1,\dots,n_f$$ where, $K$ and $M$ are the stiffness and mass matrices, respectively, that arise precisely from the discretization of .
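As an illustration of the shifted structure in , the following sketch assembles stand-in matrices $K$ and $M$ for a $1$-D model problem and solves several shifted systems with a single shifted preconditioner $K+\tau M$ using SciPy. The matrices, shifts, and tolerances are illustrative stand-ins, not the OHT discretization or the experimental settings.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-ins for the FEM stiffness and mass matrices (1-D Laplacian model).
n = 200
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") * n
M = sp.identity(n, format="csc") / n
b = np.ones(n, dtype=complex)

shifts = 1j * np.logspace(-2, 1, 5)  # sigma_j = i * omega_j (illustrative)
tau = 0.3j                           # single preconditioner shift

# Factor K + tau M once; reuse it as right preconditioner for every shift.
P = spla.splu(sp.csc_matrix(K + tau * M, dtype=complex))
prec = spla.LinearOperator((n, n), matvec=P.solve, dtype=complex)

for sigma in shifts:
    A = sp.csc_matrix(K + sigma * M, dtype=complex)
    x, info = spla.gmres(A, b, M=prec)
    assert info == 0  # converged to the default tolerance
```

Because the preconditioned spectrum $(\lambda+\sigma)/(\lambda+\tau)$ clusters near $1$ when $\sigma$ is close to $\tau$, each solve converges in a handful of iterations; shifts far from $\tau$ converge more slowly, which motivates the multiple-preconditioner strategy of this paper.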
The Geostatistical Approach
---------------------------
The Geostatistical approach (described in [@kitanidis1995quasi; @kitanidis2010bayesian; @kitanidis2007on]) is one of the prevalent approaches for solving stochastic inverse problems. The idea is to represent the unknown field as the sum of a few deterministic low-order polynomials and a stochastic term that models small-scale variability. Inference from the measurements is obtained by invoking Bayes’ theorem, through the posterior probability density function, which is the product of two parts: the likelihood of the measurements and the prior distribution of the parameters. Let $s({\textbf{x}})$ be the function to be estimated, here the log conductivity, and let it be modeled by a Gaussian random field. After discretization, the field can be written as $s \sim {\cal{N}}(X\beta,Q)$ with $s \in \mathbb{R}^{N_s}$. Here $X$ is a matrix of low-order polynomials, $\beta$ is a set of drift coefficients to be determined and $Q$ is a covariance matrix with entries $Q_{ij} = \kappa({\textbf{x}}_i,{\textbf{x}}_j)$, where $\kappa(\cdot,\cdot)$ is a generalized covariance kernel [@christakos1984problem]. The measurement equation can be written as $$y = h(s) + v, \qquad v \sim {\cal{N}}(0,R)$$ where $y \in \mathbb{R}^{N_y}$ represents the noisy measurements and $v$ is a random vector of observation errors with mean zero and covariance matrix $R$. The matrices $R$, $Q$ and $X$ are part of a modeling choice; guidance on choosing them can be found in [@kitanidis1995quasi]. The operator $h: \mathbb{R}^{N_s}\rightarrow \mathbb{R}^{N_y}$ is known as the parameter-to-observation map or [*measurement operator*]{}, with entries that are the coefficients of the oscillatory terms in the expression, $$\label{eqn:measurement}
\int_\Omega \Re\left\{e^{i\omega t}\Phi({\textbf{x}}) \delta({\textbf{x}}-{\textbf{x}}_i)\right\} d{\textbf{x}}$$ where ${\textbf{x}}_i$, $i=1,\dots,n_y$, is the location of a measurement sensor and $n_y$ is the number of measurement locations. At each measurement location, two coefficients are measured for every frequency. In all, we have $N_y=2n_fn_y$ measurements, where $n_f$ is the number of frequencies.
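A minimal sketch of assembling the prior covariance $Q$ from an exponential kernel on a small illustrative grid (the grid size is kept deliberately tiny; in practice $Q$ is never formed densely for large $N_s$, as discussed below):

```python
import numpy as np
from scipy.spatial.distance import cdist

# Prior covariance Q_ij = kappa(x_i, x_j) for the exponential kernel
# kappa(x, y) = exp(-4 ||x - y||_2 / L), on a tiny 2-D grid.
L = 500.0
g = np.linspace(0.0, L, 20)
X1, X2 = np.meshgrid(g, g)
pts = np.column_stack([X1.ravel(), X2.ravel()])   # 400 grid points

Q = np.exp(-4.0 * cdist(pts, pts) / L)

# Q is symmetric with unit diagonal by construction.
assert np.allclose(Q, Q.T) and np.allclose(np.diag(Q), 1.0)
```

The exponential kernel yields a symmetric positive definite $Q$, so the Gaussian prior ${\cal{N}}(X\beta,Q)$ is well defined.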
Following the geostatistical method for quasi-linear inversion [@kitanidis1995quasi], we compute $\hat{s}$ and $\hat\beta$ corresponding to the maximum-a-posteriori probability which is equivalent to computing the solution to a weighted nonlinear least squares problem. To solve the optimization problem, the Gauss-Newton algorithm is used. Starting with an initial estimate for the field $s_0$, the procedure is described in algorithm \[alg:quasi\].
1.  Compute the $N_y \times N_s$ Jacobian $J_k$ as $$J_k = {\left.\frac{\partial{h}}{\partial{s}}\right|_{s = {s}_k}}$$

2.  Solve the system of equations $$\label{eq:inversion}
    \left( \begin{array}{cc}
    J_k Q J_k^T + R & J_kX \\
    \left(J_k X\right)^T & 0
    \end{array}
    \right)
    \left( \begin{array}{c}{\xi_{k+1}}\\ {\beta_{k+1}} \end{array} \right) =
    \left( \begin{array}{c} y - h({s}_k) + J_k{s}_k \\ 0 \end{array} \right)$$

3.  Compute the update $s_{k+1}$ as $$s_{k+1} = X \beta_{k+1} + Q J_k^T \xi_{k+1}$$

4.  Repeat steps $1-3$ until the desired tolerance has been reached (if necessary, add a line search).
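A dense-algebra sketch of one Gauss-Newton step of algorithm \[alg:quasi\]. The function is a simplified stand-in: it forms the saddle-point system explicitly, which is only feasible for small illustrative sizes, and the linear test case below ($h(s)=Js$, $Q=I$) is purely for checking the algebra.

```python
import numpy as np

def geostat_update(J, Q, R, X, y, h_s, s):
    # One Gauss-Newton step: solve the saddle-point system for (xi, beta)
    # and return s_{k+1} = X beta + Q J^T xi.
    ny, p = J.shape[0], X.shape[1]
    JQ = J @ Q
    A = np.block([[JQ @ J.T + R, J @ X],
                  [(J @ X).T, np.zeros((p, p))]])
    rhs = np.concatenate([y - h_s + J @ s, np.zeros(p)])
    sol = np.linalg.solve(A, rhs)
    xi, beta = sol[:ny], sol[ny:]
    return X @ beta + JQ.T @ xi          # Q symmetric, so Q J^T = (J Q)^T

# Illustrative linear test case: h(s) = J s, so with a small R a single
# step should (nearly) reproduce the measurements.
rng = np.random.default_rng(2)
J = rng.standard_normal((5, 50))
Q = np.eye(50)
R = 1e-8 * np.eye(5)
X = np.ones((50, 1))
y = J @ rng.standard_normal(50)
s0 = np.zeros(50)
s1 = geostat_update(J, Q, R, X, y, J @ s0, s0)
assert np.linalg.norm(J @ s1 - y) < 1e-4
```

In the underdetermined linear case one can verify that $J s_{k+1} = y - R\xi_{k+1}$, so the data misfit after one step is of the order of the noise covariance, which the final assertion checks.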
Algorithm \[alg:quasi\] requires, at each iteration, computation of the matrices $QJ_k^T$ and $J_kQJ_k^T$. Since the prior covariance matrix $Q$ is dense, a straightforward computation of $QJ_k^T$ can be performed in ${\cal{O}}(N_yN_s^2)$. However, for fine grids, i.e., when the number of unknowns $N_s$ is large, storing $Q$ can be expensive in terms of memory and computing $QJ_k^T$ can be computationally expensive. For regular equispaced grids and covariance kernels that are stationary or translation invariant, an FFT-based method can be used to reduce the storage cost of the covariance matrix $Q$ to ${\cal{O}}(N_s)$ and the cost of a matrix-vector product to ${\cal{O}}(N_s\log N_s)$. For irregular grids, the Hierarchical matrix approach can be used to reduce the storage cost and the cost of an approximate matrix-vector product to ${\cal{O}}(N_s\log N_s)$ for a wide variety of covariance kernels [@saibaba2012efficient]. Thus, in either situation, computing $QJ_k^T$ costs ${\cal{O}}(N_yN_s\log N_s)$ and computing $J_kQJ_k^T$ costs ${\cal{O}}(N_sN_y\log N_s + N_sN_y)$.
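The FFT-based matvec mentioned above can be sketched in $1$-D: a stationary kernel on a regular grid yields a symmetric Toeplitz $Q$, which embeds in a circulant matrix diagonalized by the FFT. The sizes and kernel below are illustrative; the $2$-D case uses a block-Toeplitz embedding analogously.

```python
import numpy as np
from scipy.linalg import toeplitz

# Stationary kernel on a regular 1-D grid -> symmetric Toeplitz Q,
# embedded in a circulant matrix of size 2n-2 that the FFT diagonalizes.
n, L = 256, 1.0
x = np.linspace(0.0, L, n)
first_col = np.exp(-4.0 * np.abs(x - x[0]) / L)   # first column of Q

# Circulant first column: [t_0, ..., t_{n-1}, t_{n-2}, ..., t_1].
c = np.concatenate([first_col, first_col[-2:0:-1]])
eig = np.fft.fft(c)   # eigenvalues of the circulant embedding

def Q_matvec(v):
    # O(N log N) product Q @ v via the circulant embedding.
    vpad = np.concatenate([v, np.zeros(len(c) - len(v))])
    return np.real(np.fft.ifft(eig * np.fft.fft(vpad)))[: len(v)]

# Check against the dense Toeplitz product.
v = np.random.default_rng(1).standard_normal(n)
assert np.allclose(Q_matvec(v), toeplitz(first_col) @ v)
```

Only the first column of $Q$ (${\cal{O}}(N_s)$ storage) and FFTs of length ${\cal{O}}(N_s)$ are needed, matching the costs quoted above.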
Sensitivity Matrix computation {#sec:sensitivity}
------------------------------
Computing the Jacobian matrix $J_k$ at each iteration is often an expensive step. Although explicit analytical expressions for the entries are nearly impossible to obtain, several approaches exist. One simple approach is to use finite differences, but it is expensive because it requires $N_s+1$ runs of the forward problem, i.e., one more than the number of parameters to be estimated. For large problems on finely discretized grids, the number of unknowns can be quite large, so this procedure is not feasible.
To reduce the computational cost associated with calculating the sensitivity matrix we use the adjoint state method (see, for example, [@sun1990coupled]). This approach is exact and is computationally advantageous when the number of measurements is far smaller than the number of unknowns. For a complete derivation of the adjoint state equations for oscillatory hydraulic tomography, refer to [@cardiff2012multi]. For the type of measurements described in , the entries of the sensitivity matrix can be calculated by the following expression for $j=1,\dots,N_s$ $$\label{eqn:sensitivity}
\frac{\partial h}{\partial s_j} = \int_\Omega \Re\left\{ e^{i\omega t}\left( \left[i \omega \frac{\partial S_s({\textbf{x}})}{\partial s_j} \Phi - \frac{\partial Q_0}{\partial s_j} \right] \Psi_{\omega} + \frac{\partial K({\textbf{x}})}{\partial s_j} \nabla \Phi \cdot \nabla \Psi_{\omega}\right) \right\}d{\textbf{x}}$$ Since at each measurement location two measurements are obtained for each frequency, from the coefficients of the oscillatory terms, the Jacobian matrix has $N_y\times N_s$ entries, where $N_y = 2n_fn_y$. Here, $\Psi_{\omega}$ is known as the [*adjoint solution*]{}; it depends on the measurement location ${\textbf{x}}_m$ and the forcing frequency $\omega$. It satisfies the following system of equations $$\begin{aligned}
\label{eqn:adjoint}
- \nabla \cdot \left(K \nabla \Psi_{\omega } \right) + i \omega S_s\Psi_{ \omega} = & \quad- \delta({\textbf{x}}-{\textbf{x}}_m), &\quad {\textbf{x}}\in \Omega \\ \nonumber
\Psi_{\omega} = & \quad 0, &\quad {\textbf{x}}\in \partial \Omega_D \\ \nonumber
\nabla \Psi_{\omega}({\textbf{x}}) \cdot \textbf{n} = & \quad 0, &\quad {\textbf{x}}\in \partial \Omega_N \nonumber\end{aligned}$$ where, ${\textbf{x}}_m$ is the measurement location and $\omega$ is the particular frequency. The procedure for calculating the sensitivity matrix can thus be summarized as follows.
1.  For a given field ${s}$, solve the forward problem for $\Phi$.

2.  For each measurement and frequency $\omega$, solve the adjoint problem for $\Psi_\omega$.

3.  Compute the integral in to calculate the sensitivity.
Since is evaluated for all $s_j$ for each measurement, the adjoint state method requires only $N_y+1$ forward model solves to compute the sensitivity matrix. Thus, when the number of measurements is far fewer than the number of unknowns, the adjoint state method provides a much cheaper alternative for computing the entries of the Jacobian matrix. This is typically the case in hydraulic tomography, where having several measurement locations is infeasible because it requires digging new wells.
Further, we note that equation takes the same form as equation for multiple frequencies. Thus, we can use the algorithms developed in section \[sec:krylov\] to solve the system of equations with as many right hand sides as measurements. It is possible to devise algorithms for multiple right hand sides in the context of shifted systems [@meerbergen2010lanczos; @darnell2008deflated], but we will not adopt this approach.
Numerical Experiments and Results {#sec:numerical}
=================================
We present numerical results for the Krylov subspace solvers and their application to OHT. As mentioned before, we use the FEniCS software [@LoggMardalEtAl2012a; @LoggWells2010a; @LoggWellsEtAl2012a] to discretize the appropriate partial differential equations. We use the Python interface to FEniCS, with uBLASSparse as the linear algebra back-end. For the direct solvers we use the SuperLU [@superlu99] package interfaced through SciPy, whereas for the iterative solver we use the algebraic multigrid package PyAMG [@BeOlSc2011] with smoothed aggregation, combined with the BiCGSTAB iterative solver. In the following sections, for brevity, we mostly present results for the FOM solver, but we observed similar results for the GMRES method as well. This is also suggested by the result in proposition \[prop:gmres\].
Krylov subspace solver {#sec:solverexperiments}
----------------------
In this section, we present some of the results of the algorithms that we have described in section \[sec:krylov\]. We now describe the test problem that we shall use for the rest of the section. We consider a $2$D aquifer in a rectangular domain with Dirichlet boundary conditions on the boundaries. For the log-conductivity field $\log K({\textbf{x}})$, we consider a random field generated using an exponential covariance kernel $\kappa({\textbf{x}},\textbf{y}) = 4\exp(-2{\lVert {\textbf{x}}-\textbf{y} \rVert_2}/L)$ using the algorithm described in [@dietrich1993fast]. Other parameters used for the model problem are summarized in table \[tab:parameters\]. We choose $200$ frequencies evenly spaced between the minimum and maximum frequencies, which results in $200$ systems each of size $90601$.
Definition Parameters Values
-------------------------- ----------------------- -------------------------------------
Aquifer length L (m) 500
Specific storage $\log S_s$ (m$^{-1}$) $-11.52$
Mean conductivity $\mu(\log K)$ (m/s) $ -11.02$
Variance of conductivity $\sigma^2(\log K) $ $1.42$
Frequency range $\omega$ ($s^{-1}$) $[\frac{2\pi}{600},\frac{2\pi}{3}]$
: Parameters Chosen For Test Problem[]{data-label="tab:parameters"}
First, we motivate the need for multiple preconditioners to solve the shifted system of equations . We begin by looking at the number of iterations taken by restarted GMRES without a preconditioner and using a single preconditioner. We choose preconditioners of the form $K+\tau M$ for three different values of $\tau$, representing the minimum, the average and the maximum frequency in the parameter range. For illustration purposes, we use a direct solver to invert the preconditioned systems. The number of iterations taken by restarted GMRES ($30$) for each system is computed and displayed in figure \[fig:itercountsingleprecond\]. We observe that the number of iterations increases as the frequency of the system decreases. When we use a preconditioner $K + \tau M$, the systems with frequencies near $|\tau|$ converge rapidly, while systems with frequencies further from $|\tau|$ converge more slowly the further away they are from the preconditioner frequency $|\tau|$. This is consistent with the analysis in section \[sec:geneigen\] and in particular, proposition \[prop:fom\]. Thus, no single preconditioner effectively preconditions all the systems in the given frequency range. Not surprisingly, choosing $|\tau|$ in the center of the frequency range seems to be the best choice. Thus, in order to make the iterative method competitive, we consider using multiple preconditioners to solve the shifted system .
![(left) The log conductivity field that we use for the test problem with $90601$ grid points, and (right) iteration count for restarted GMRES ($30$) for unpreconditioned case and also with single preconditioners of the form $K + \tau M$, with $|\tau| \in \{ \frac{2\pi}{600},\frac{2\pi}{6},\frac{2\pi}{3} \}$. Results indicate that for the particular choices made, $|\tau| = \frac{2\pi}{6}$ which is roughly at the center of the frequency range, performs best.[]{data-label="fig:itercountsingleprecond"}](logcond "fig:") ![(left) The log conductivity field that we use for the test problem with $90601$ grid points, and (right) iteration count for restarted GMRES ($30$) for unpreconditioned case and also with single preconditioners of the form $K + \tau M$, with $|\tau| \in \{ \frac{2\pi}{600},\frac{2\pi}{6},\frac{2\pi}{3} \}$. Results indicate that for the particular choices made, $|\tau| = \frac{2\pi}{6}$ which is roughly at the center of the frequency range, performs best.[]{data-label="fig:itercountsingleprecond"}](itercount.png "fig:")
We choose preconditioners of the form $K + \tau_k M$, $k = 1,\dots,m$, where $m$ is the maximum dimension of the Arnoldi iteration. From figure \[fig:itercountsingleprecond\], it is clear that the systems with smaller frequencies converge more slowly, so we choose values of $|\tau_k|$ that are distributed closer to the origin. In particular, we choose the values of $\tau_k$ such that they are evenly spaced on a log scale in the domain $\omega \in [\frac{2\pi}{600},\frac{2\pi}{3}]$. For example, for $n_p = 5$, the distribution of $|\tau|$ is illustrated in figure \[fig:multipletau\]. Now, let the possible values that $\tau$ can take be labeled as $\bar{\tau} = \{ \bar{\tau}_1,\dots,\bar{\tau}_{n_p} \}$, where $n_p$ is the number of distinct preconditioner frequencies. Then, the first $m_1$ values of $\tau_k$ are assigned $\bar{\tau}_1$, the next $m_2$ values of $\tau_k$ are assigned $\bar{\tau}_2$ and so on, with $m = \sum_{k=1}^{n_p}m_k$; we pick $m_k = m/n_p$. If the algorithm has not converged in $m$ iterations, we restart using the method in section \[sec:restarting\] when a direct solver is used as the preconditioner; otherwise, we recycle the same sequence of preconditioners. We implemented both approaches for inverting the preconditioner matrices: a direct solver, and an iterative solver (algebraic multigrid preconditioned BiCGSTAB). Using $n_p = 5$ and $m=40$ along with the scheme for choosing the preconditioner frequencies described above, we observed that the number of iterations (and hence, matrix-vector products) in both cases was less than $40$.
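The shift schedule described above amounts to a few lines. The sketch below mirrors the experimental settings ($m=40$, $n_p=5$, frequencies log-spaced in $[\frac{2\pi}{600},\frac{2\pi}{3}]$); the variable names are our own.

```python
import numpy as np

# Preconditioner-shift schedule: n_p frequencies, evenly spaced on a log
# scale, each repeated for m / n_p consecutive Arnoldi steps.
m, n_p = 40, 5
omega_min, omega_max = 2 * np.pi / 600, 2 * np.pi / 3
tau_bar = 1j * np.logspace(np.log10(omega_min), np.log10(omega_max), n_p)
taus = np.repeat(tau_bar, m // n_p)   # tau_k for k = 1, ..., m
assert taus.shape == (m,)
```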
Finally, we compare the run time of our algorithm against solving each system with a direct solver. The results are presented in figure \[fig:timing\]. In the plots, “Direct” means that every system is solved individually by a direct solver. The “Flexible” algorithm uses $5$ different preconditioners (see figure \[fig:multipletau\]) with a direct solver for inverting the preconditioners, and solves the FOM subproblem, whereas “Inexact” uses the same preconditioners but inverts them using an iterative solver. We see that both the “Flexible” and “Inexact” algorithms outperform the “Direct” approach even for a small number of frequencies. A relative tolerance of ${\lVert r_m(\sigma_j) \rVert_2} / {\lVert r_0(\sigma_j) \rVert_2} \leq 10^{-10}$ was used as the stopping criterion for all systems $j=1,\dots,n_f$, and all systems converged within $40$ iterations. Although the “Inexact” algorithm behaves nearly independently of the number of frequencies, its runtime is longer than that of the “Flexible” approach. This is because the PyAMG solver requires an additional $17$ matvecs on average per inner iteration, totaling $706$ matvecs with $K+\tau_kM$, $k=1,\dots,m$ and $m=40$.
![Comparison of time for the different algorithms. “Direct” means that every system is solved individually by a direct solver. The “Flexible” algorithm uses $5$ different preconditioners (see figure \[fig:multipletau\]) with a direct solver for inverting the preconditioners, and solves the FOM subproblem, whereas “Inexact” uses the same preconditioners but inverts them using an iterative solver. We see that both the “Flexible” and “Inexact” algorithms outperform the “Direct” approach even for a small number of frequencies. The system size is $90601$. []{data-label="fig:timing"}](timecomparison)
In figures \[fig:reserrordirect\] and \[fig:reserrorflexible\], we plot the residuals and the error as functions of the frequency of the system. A direct solver was used for figure \[fig:reserrordirect\], whereas an iterative solver (algebraic multigrid preconditioned BiCGSTAB) was used in figure \[fig:reserrorflexible\] with a stopping criterion $\varepsilon = 10^{-12}$ (refer to section \[sec:inexact\]). A relative tolerance of ${\lVert r_m(\sigma_j) \rVert_2} / {\lVert r_0(\sigma_j) \rVert_2} \leq 10^{-10}$ was used for all systems $j=1,\dots,n_f$, and all systems converged within $40$ iterations. The behavior of the residual and the error is quite similar, as is to be expected.
As discussed in section \[sec:inexact\], when an inexact preconditioner is used, the flexible Arnoldi relation is no longer exact and the true residual $r_m(\sigma_j)$ and the inexact residual $\tilde{r}_m(\sigma_j)$ are no longer exactly equal. In fact, the error between them can be bounded by the relation . In figure \[fig:reserrorflexible\], we compare the difference between the true residual and the inexact residuals with the predicted bound $\varepsilon{\lVert y_m(\sigma_j) \rVert_{1}}$. We see in figure \[fig:resdiff\] that the bound is fairly accurate. The stopping tolerances were chosen to be $\varepsilon = 10^{-9},10^{-11}, 10^{-12}$. In fact, for larger stopping tolerances for the inner solver, the outer solver did not converge.
Application: Tomographic reconstruction
---------------------------------------
The objective is now to determine an unknown conductivity field $K({\textbf{x}})$ from discrete measurements of the head $\phi$ obtained from several pumping tests performed with multiple frequencies. Since the conductivity field needs to be positive so that the forward problem is well-posed, we consider a log-transformation $s = \log K$. The “true” field is taken to be that in figure \[fig:trueloc\], which is a scaled version of Franke’s function [@franke1979critical]. We choose the covariance matrix $Q$ to have entries $Q_{ij} = \kappa({\textbf{x}}_i,{\textbf{x}}_j)$, corresponding to an exponential covariance kernel $$\kappa({\textbf{x}},{\textbf{y}}) = \exp\left(-\frac{4{\lVert {\textbf{x}}-{\textbf{y}} \rVert_2}}{L}\right)$$ where $L$ is the length of the domain. We also choose $R = \eta^2 I$ and $X = [1,\dots,1]^T$. We did not try to optimize the choice of covariance kernels to get the best possible reconstruction; our goal is to study the associated computational costs. The size of our problem is chosen to be $10201$ discretization points. We assume no noise in our measurements and choose $\eta = 10^{-6}$.
The measurements are obtained by taking as the true log conductivity field the field in figure \[fig:trueloc\]. Then, the phasor is calculated by solving equations with the source location and measurement locations given in figure \[fig:trueloc\]. The measurements are collected for each frequency and two pieces of information are recorded: the coefficients of the sine and cosine terms in equation . The inverse problem is then solved using these measurements. The chosen frequency range is $\omega \in [2\pi/150,2\pi/30]$. We pick $n_p = 5$ values evenly spaced in log-scale in this frequency range and set $m=40$. All systems converged in $40$ iterations.
The time for computing the Jacobian is listed in figure \[fig:jacobian\]. For the iterative solver, we use the flexible FOM solver with a direct solver as preconditioner. The relative stopping tolerance we used was $10^{-10}$. Since the cost of solving systems with multiple frequencies is nearly the same as the cost of solving a single system, the time for building the Jacobian is, more or less, independent of the number of frequencies. However, when a direct solver is used to independently solve the systems for multiple frequencies, the cost of constructing the Jacobian scales linearly with the number of frequencies. This results in a significant reduction in the cost of solving the inverse problem, since constructing the Jacobian is the most expensive part of solving the inverse problem. The disparity in the computation times for the Jacobian between the direct approach and the iterative procedure is exacerbated further with larger problem sizes resulting from finer discretizations.
Finally, we compare the error in the reconstruction with multiple frequencies. Table \[table:frequencies\] lists the $L^2$ error in the reconstruction. We report two errors - the first being the $L^2$ error in the entire domain, the second being the $L^2$ error in the area enclosed by the measurement wells. Increasing the number of frequencies reduces the error both in the entire domain and in the region enclosed by the measurement wells. This is the primary motivation for using multiple frequencies in the inversion. However, beyond a point, the addition of frequencies does not seem to reduce the error. This might be because there is no additional information that is obtained from the addition of measurements with these frequencies, and to further improve estimation accuracy, one would need to introduce more stimulation and observation points [@cardiff2012multi].
$N_f$ Total error Error within box
------- ------------- ------------------
$1$ $0.3794$ $0.0511$
$5$ $0.3379$ $0.0352$
$10$ $0.3264$ $0.0337$
$20$ $0.3180$ $0.0328$
: L$^2$ error due to the reconstruction. We report two errors - the first being the $L^2$ error in the entire domain, the second being the $L^2$ error in the area enclosed by the measurement wells. []{data-label="table:frequencies"}
Conclusions
===========
We have presented a flexible Krylov subspace algorithm for shifted systems of the form that uses multiple shifted preconditioners of the form $K+\tau M$. The values of $\tau$ are chosen in order to improve convergence of the solver for all the shifted systems. The number of preconditioners chosen varies based on the distribution of the shifts. A good rule of thumb is that the systems having shift $\sigma$ will converge faster if there is a preconditioner with shift $\tau$ nearby $\sigma$. When the size of the linear systems is much larger, direct solvers are much more expensive. In such cases, preconditioning would be done using iterative solvers. The error analysis in section \[sec:inexact\] provides insight into monitoring approximate residuals without constructing the true residuals. One can naturally extend the ideas in this paper to systems with multiple shifts and multiple right hand sides using either block or deflation techniques.
We applied the flexible Krylov solver to an application problem that benefited significantly from fast solvers for shifted systems. In particular, oscillatory hydraulic tomography is a technique for aquifer characterization. However, since drilling observation wells to obtain measurements is expensive, one of the advantages of oscillatory hydraulic tomography is obtaining more informative measurements by pumping at different frequencies using the same pumping locations and measurement wells. In future studies we aim to study more realistic conditions for tomography, including a joint inversion for storage and conductivity. This would be ultimately beneficial to the practitioners. We envision that fast solvers for shifted systems would be beneficial for rapid aquifer characterization using oscillatory hydraulic tomography.
Acknowledgments
===============
The research in this work was funded by NSF Award 0934596, “CMG Collaborative Research: Subsurface Imaging and Uncertainty Quantification” and by NSF Award 1215742, “ Collaborative Research: Fundamental Research on Oscillatory Flow in Hydrogeology.” The authors would also like to thank their collaborators Michael Cardiff and Warren Barrash for useful discussions and the two anonymous reviewers for their insightful comments.
[10]{}
W. N. Bell, L. N. Olson, and J. B. Schroder. : [A]{}lgebraic multigrid solvers in [Python]{} v2.0, 2011. Release 2.0.
M. Cardiff, T. Bakhos, P.K. Kitanidis, and W. Barrash. Multi-frequency oscillatory hydraulic tomography: The use of steady-state periodic signals for sensitivity analysis and inversion. , in review.
M. Cardiff and W. Barrash. 3-[D]{} transient hydraulic tomography in unconfined aquifers with fast drainage response. , 47(12):W12518, 2011.
G. Christakos. On the problem of permissible covariance and variogram models. , 20(2):251–265, 1984.
D. Darnell, R.B. Morgan, and W. Wilcox. Deflated [GMRES]{} for systems with multiple shifts and multiple right-hand sides. , 429(10):2415–2434, 2008.
B.N. Datta and Y. Saad. methods for large [S]{}ylvester-like observer matrix equations, and an associated algorithm for partial spectrum assignment. , 154:225–244, 1991.
James W. Demmel, Stanley C. Eisenstat, John R. Gilbert, Xiaoye S. Li, and Joseph W. H. Liu. A supernodal approach to sparse partial pivoting. , 20(3):720–755, 1999.
C. R. Dietrich and G. N. Newsam. A fast and exact method for multidimensional [G]{}aussian stochastic simulations. *Water Resources Research*, 29(8):2861–2869, 1993.
R. Franke. A critical comparison of some methods for interpolation of scattered data. Technical report, DTIC Document, 1979.
R.W. Freund. Solution of shifted linear systems by quasi-minimal residual iterations. , pages 101–121, 1993.
A. Frommer and U. Gl[ä]{}ssner. Restarted [GMRES]{} for shifted linear systems. , 19(1):15–26, 1998.
G.H. Golub and C.F. Van Loan. , volume 3. Johns Hopkins University Press, 1996.
G. Gu, X. Zhou, and L. Lin. A flexible preconditioned [A]{}rnoldi method for shifted linear systems. , 25(5):522-530, 2007.
T.J.R. Hughes. The finite element method: linear static and dynamic finite element analysis. , 2012.
P. K. Kitanidis. Quasilinear geostatistical theory for inversing. , 31(10):2411–2419, 1995.
P. K. Kitanidis. , volume Geophysical Monograph 171, pages 19–30. AGU, Washington, D. C., 2007.
P. K. Kitanidis. , pages 71–85. John Wiley & Sons, Ltd, 2010.
Anders Logg, Kent-Andre Mardal, Garth N. Wells, et al. . Springer, 2012.
Anders Logg and Garth N. Wells. Dolfin: Automated finite element computing. , 37(2), 2010.
Anders Logg, Garth N. Wells, and Johan Hake. , chapter 10. Springer, 2012.
K. Meerbergen. The solution of parametrized symmetric linear systems. , 24(4):1038–1059, 2003.
K. Meerbergen and Z. Bai. The [Lanczos]{} method for parameterized symmetric linear systems with multiple right-hand sides. , 31(4):1642–1662, 2010.
M. Popolizio and V. Simoncini. Acceleration techniques for approximating the matrix exponential. , pages 657–683, 2008.
Y. Saad. , volume 158. SIAM, 1992.
Y. Saad. A flexible inner-outer preconditioned [GMRES]{} algorithm. , 14:461–461, 1993.
Y. Saad. . Society for Industrial Mathematics, 2003.
A.K. Saibaba and P.K. Kitanidis. Efficient methods for large-scale linear inversion using a geostatistical approach. , 48(5):W05522, 2012.
V. Simoncini. Restarted full orthogonalization method for shifted linear systems. , 43(2):459–466, 2003.
V. Simoncini and D.B. Szyld. Theory of inexact [Krylov]{} subspace methods and applications to scientific computing. , 25(2):454–477, 2003.
V. Simoncini and D.B. Szyld. Recent computational developments in [Krylov]{} subspace methods for linear systems. , 14(1):1–59, 2007.
N.Z. Sun, W.W.G. Yeh, et al. Coupled inverse problems in groundwater modeling 1. sensitivity analysis and parameter identification. , 26(10):2507–2525, 1990.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We introduce a family of fourth order two-step methods that preserve the energy function of canonical polynomial Hamiltonian systems. Each method in the family may be viewed as a correction of a linear two-step method, where the correction term is $O(h^5)$ ($h$ is the stepsize of integration). The key tools the new methods are based upon are the line integral associated with a conservative vector field (such as the one defined by a Hamiltonian dynamical system) and its discretization obtained with the aid of a quadrature formula. Energy conservation is equivalent to the requirement that the quadrature is exact, which turns out to be always the case in the event that the Hamiltonian function is a polynomial and the degree of precision of the quadrature formula is high enough. The non-polynomial case is also discussed and a number of test problems are finally presented in order to compare the behavior of the new methods with the theoretical results.'
author:
- 'Luigi Brugnano[^1]'
- 'Felice Iavernaro[^2]'
- 'Donato Trigiante[^3]'
title: 'A two step, fourth order, nearly-linear method with energy preserving properties[^4]'
---
Ordinary differential equations, mono-implicit methods, multistep methods, canonical Hamiltonian problems, Hamiltonian Boundary Value Methods, energy preserving methods, energy drift.
65L05, 65P10.
Introduction and Background
===========================
We consider canonical Hamiltonian systems in the form $$\label{ham}
\frac{dy}{dt}= J \nabla H(y), \qquad
J=\pmatrix{cc}0&I_m\\-I_m&0\endpmatrix, \qquad y(t_0)=y_0\in\RR^{2m},\vspace{-.5em}$$ where $H(y)$ is a smooth real-valued function. Our interest is in researching numerical methods that provide approximations $y_n\simeq y(t_0+nh)$ to the true solution along which the energy is precisely conserved, namely $$\label{energy-conservation} H(y_n)=H(y_0), \qquad \mbox{for all
stepsizes } h\le h_0.$$
The study of energy-preserving methods forms a branch of *geometrical numerical integration*, a research topic whose main aim is preserving qualitative features of simulated differential equations. In this context, symplectic methods have received considerable attention due to their good long-time behavior as compared to standard methods for ODEs [@Ru; @Fe; @LeRe]. A related interesting approach based upon exponential/trigonometric fitting may be found in [@IxVB04; @VB06; @Si08]. Unfortunately, symplecticity cannot be fully combined with the energy preservation property [@GM], and this partly explains why the latter has been absent from the scene for a long time.
Among the first examples of energy-preserving methods we mention discrete gradient schemes [@Gon96; @McL99] which are defined by devising discrete analogs of the gradient function. The first formulae in this class had order at most two but recently discrete gradient methods of arbitrarily high order have been researched by considering the simpler case of systems with one-degree of freedom [@CieRat10; @CieRat10a].
Here, the key tool we wish to exploit is the well-known line integral associated with conservative vector fields, such us the one defined at , as well as its discrete version, the so called *discrete line integral*. Interestingly, the line integral provides a means to check the energy conservation property, namely $$\begin{array}{rl}
H(y(t_1))-H(y_0) & = \displaystyle \int_{y_0 \rightarrow y(t_1)}
\hspace*{-.6cm} \nabla H(y) \d y = h\int_0^1 y'(t_0+\tau h)^T
\nabla
H(y(t_0+\tau h )) \d \tau \\[.35cm] & = h\displaystyle \int_0^1 \nabla^T H(y(t_0+\tau h
)) J^T \nabla H(y(t_0+\tau h )) \d \tau = 0,
\end{array}$$ with $h=t_1-t_0$, that can be easily converted into a discrete analog by considering a quadrature formula in place of the integral.
The discretization process requires to change the curve $y(t)$ in the phase space $\RR^{2m}$ to a simpler curve $\sigma(t)$ (generally but not necessarily a polynomial), which is meant to yield the approximation at time $t_1=t_0+h$, that is $y(t_0+h)=\sigma(t_0+h)+O(h^{p+1})$, where $p$ is the order of the resulting numerical method. In a certain sense, the problem of numerically solving while preserving the Hamiltonian function is translated into a quadrature problem.
For example, consider the segment $\sigma(t_0+ch)=(1-c)y_0+cy_{1}$, with $c\in[0,1]$, joining $y_0$ to an unknown point $y_1$ of the phase space. The line integral of $\nabla H(y)$ evaluated along $\sigma$ becomes $$\label{iav-eq1} {H(y_{1})-H(y_0)} = h(y_{1}-y_0)^T \int_0^1 \nabla
H((1-c)y_0+cy_{1}) \dd c.$$ Now assume that $H(y)\equiv H(q,p)$ is a polynomial of degree $\nu$ in the generalized coordinates $q$ and in the momenta $p$. The integrand in is a polynomial of degree $\nu-1$ in $c$ and can be integrated exactly by any quadrature formula with abscissae $c_1<c_2<\cdots<c_k$ in $[0,1]$ and weights $b_1,\dots,b_k$, having degree of precision $d \ge \nu-1$. We thus obtain $$H(y_{1})-H(y_0) =h(y_{1}-y_0)^T {\sum_{i=1}^k b_i\nabla
H((1-c_i)y_0+c_iy_{1})}.$$ To get the energy conservation property we impose that $y_1-y_0$ be orthogonal to the above sum, and in particular we choose (for the sake of generality we use $f(y)$ in place of $J\nabla H(y)$ to mean that the resulting method also makes sense when applied to a general ordinary differential equation $y'=f(y)$) $$\label{iav-s-stage-trap}
%\begin{array}{rcl}
y_{1}=\displaystyle y_0+h\sum_{i=1}^kb_if(Y_i), \qquad
Y_i=(1-c_i)y_0+c_iy_{1}, \quad i=1,\dots,k.
%\end{array}$$ Formula defines a Runge–Kutta method with Butcher tableau $\begin{array}{c|c} c & c b^T \\ \hline & b^T
\end{array}$, where $c$ and $b$ are the vectors of the abscissae and weights, respectively. The stages $Y_i$ are called *silent stages* since their presence does not affect the degree of nonlinearity of the system to be solved at each step of the integration procedure: the only unknown is $y_1$ and consequently defines a mono-implicit method. Mono-implicit methods of Runge–Kutta type have been researched in the past by several authors (see, for example, [@Ca75; @Bo; @CaSi; @BuChMu] for their use in the solution of initial value problems).
Methods such as date back to 2007 [@IP1; @IT3] and are called $k$-stage trapezoidal methods since on the one hand the choice $k=2$, $c_1=0$, $c_2=1$ leads to the trapezoidal method and on the other hand all other methods evidently become the trapezoidal method when applied to linear problems.
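As a concrete sketch of the $k$-stage trapezoidal method, the following Python fragment takes repeated steps for the cubic Hamiltonian $H(q,p)=\frac{1}{2}p^2+\frac{1}{2}q^2-\frac{1}{6}q^3$ (the pendulum-type example of Section \[test\_sec\]), solving the mono-implicit equation by fixed-point iteration. The Simpson abscissae, the stepsize and the iteration tolerance are illustrative choices; since $\deg H = 3$, Simpson's rule (degree of precision $3 \ge \nu-1 = 2$) makes the discrete line integral exact, so the energy error stays at the level of the fixed-point tolerance:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def H(y):
    q, p = y
    return 0.5*p**2 + 0.5*q**2 - q**3/6.0

def grad_H(y):
    q, p = y
    return np.array([q - 0.5*q**2, p])

def trap_step(y0, h, c, b, tol=1e-14, maxit=100):
    """One step of the k-stage trapezoidal method: the only unknown is y1;
    the silent stages Y_i = (1-c_i) y0 + c_i y1 lie on the segment y0 -> y1."""
    y1 = y0.copy()
    for _ in range(maxit):
        s = sum(bi * (J @ grad_H((1.0 - ci)*y0 + ci*y1)) for bi, ci in zip(b, c))
        y1_new = y0 + h*s
        if np.linalg.norm(y1_new - y1) < tol:
            break
        y1 = y1_new
    return y1_new

# Simpson's rule: degree of precision 3 >= nu - 1 for this cubic Hamiltonian
c = [0.0, 0.5, 1.0]
b = [1.0/6.0, 4.0/6.0, 1.0/6.0]

y = np.array([1.0, 0.0])
E0 = H(y)
for _ in range(200):
    y = trap_step(y, 0.1, c, b)
# |H(y) - E0| remains at the level of the fixed-point tolerance
```

Note that the orthogonality argument is visible in the code: at convergence $y_1-y_0$ is a multiple of $J$ applied to the discrete line integral, so the inner product defining $H(y_1)-H(y_0)$ vanishes by the antisymmetry of $J$.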
Generalizations of to higher orders require the use of a polynomial $\sigma$ of higher degree and are based upon the same reasoning as the one discussed above. Up to now, such extensions have taken the form of Runge–Kutta methods [@BIT1; @BIT2; @BIT3]. It has been shown that choosing a proper polynomial $\sigma$ of degree $s$ yields a Runge–Kutta method of order $2s$ with $k\ge s$ stages. The peculiarity of such energy-preserving formulae, called Hamiltonian Boundary Value Methods (HBVMs), is that the associated Butcher matrix has rank $s$ rather than $k$, since $k-s$ stages may be cast as linear combinations of the remaining ones, similarly to the stages $Y_i$ in .[^5] As a consequence, the nonlinear system to be solved at each step has dimension $2ms$ instead of $2mk$, which is better visualized by recasting the method in block-BVM form [@BIT1].
In the case where $H(y)$ is not a polynomial, one can still get a *practical* energy conservation by choosing $k$ large enough so that the quadrature formula approximates the corresponding integral to within machine precision. Strictly speaking, taking the limit as $k \rightarrow \infty$ leads to limit formulae where the integrals come back into play in place of the sums. For example, letting $k\rightarrow \infty$ in just means that the integral in must not be discretized at all, which would yield the *Averaged Vector Field* method $y_1=y_0+h\int_0^1 f((1-c)y_0+cy_{1}) \dd c$, (see [@M2AN; @QMc] for details).
In this paper we start an investigation that follows a different route. Unlike the case with HBVMs, we now want to take advantage of the previously computed approximations to extend the class in such a way as to increase the order of the resulting methods, much as the class of linear multistep methods may be viewed as a generalization of (linear) one step methods. The general question we want to address is whether there exist $k$-step mono-implicit energy-preserving methods of order greater than two. Clearly, the main motivation is to reduce the computational cost associated with the implementation of HBVMs.
The purpose of the present paper is to give an affirmative answer to this issue in the case $k=2$. More specifically, the method resulting from our analysis, summarized by formula , may be thought of as a nearly linear two-step method in that it is the sum of a fourth order linear two-step method, formula , plus a nonlinear correction of higher order.
The paper is organized as follows. In Section \[def\_methods\] we introduce the general formulation of the method, by which we mean that the integrals are initially not discretized to maintain the theory at a general level. In this section we also report a brief description of the HBVM of order four, since its properties will be later exploited to deduce the order of the new method: this will be the subject of Section \[analysis\_sec\]. Section \[discr\_sec\] is devoted to the discretization of the integrals, which will produce the final form of the methods making them ready for implementation. A few test problems are presented in Section \[test\_sec\] to confirm the theoretical results.
Definition of the method {#def_methods}
========================
Suppose that $y_1$ is an approximation to the true solution $y(t)$ at time $t_1=t_0+h$, where $h>0$ is the stepsize of integration. More precisely, we assume that
- $y(t_1)=y_1+O(h^{p+1})$ with $p \ge 4$;
- $H(y_1)=H(y_0)$, which means that $y_1$ lies on the very same manifold $H(y)=H(y_0)$ as the continuous solution $y(t)$.
The two above assumptions are fulfilled if, for example, we compute $y_1$ by means of a HBVM (or an $\infty$-HBVM [@BIT2]) of order $p\ge 4$. The new approximation $y_2\simeq y(t_2) \equiv y(t_0+2h)$ is constructed as follows.
Consider the quadratic polynomial $\sigma(t_0+ 2 \tau h)$ that interpolates the set of data $\{(t_0+jh, y_j)\}_{j=0,1,2}$. Expanded along the Newton basis $\{P_j(\tau)\}$ defined on the nodes $\tau_0=0$, $\tau_1=\frac{1}{2}$, $\tau_2=1$, the polynomial $\sigma$ takes the form (for convenience we order the nodes as $\tau_0, \tau_2,
\tau_1$) $$\label{sigma} \sigma(t_0 + 2 \tau h) = y_0+(y_2-y_0)\tau
+2(y_2-2y_1+y_0)\tau(\tau-1).$$ As $\tau$ ranges in the interval $[0,1]$, the $2m$-length vector $\gamma(\tau) \equiv \sigma(t_0+2\tau h)$ describes a curve in the phase space $\RR^{2m}$. The line integral of the conservative vector field $\nabla H(y)$ along the curve $\gamma$ will match the variation of the energy function $H(y)$, that is $$\begin{array}{rl}
H(y_2)-H(y_0) &= \displaystyle \int_{y_0 \rightarrow y_2}
\hspace*{-.6cm} \nabla H(y) \d y = \int_0^1 \left[ \gamma'(\tau)
\right]^T \nabla
H(\gamma(\tau)) \, \d \tau \\[.5cm]
& \hspace*{-1.5cm} \displaystyle =(y_2-y_0)^T \int_0^1 \nabla
H(\gamma(\tau)) \, \mathrm{d}\tau + 2(y_2-2y_1+y_0)^T \int_0^1
(2\tau-1) \nabla H(\gamma(\tau)) \, \mathrm{d}\tau.
\end{array}$$ The energy conservation condition $H(y_2)=H(y_0)$ yields the following equation in the unknown $z\equiv y_2$ $$\label{nonlin} (z-y_0)^T \int_0^1 \nabla H(\gamma(\tau)) \,
\mathrm{d}\tau = -2(z-2y_1+y_0)^T \int_0^1 (2\tau-1) \nabla
H(\gamma(\tau)) \, \mathrm{d}\tau.$$ The method we are interested in has the form $y_2=\Psi_h(y_0,y_1)$, where $\Psi_h$ is implicitly defined by the following nonlinear equation in the unknown $z$: $$\label{nonlin_sys}
%\begin{array}{l}
\displaystyle z=y_0+ 2 h J a(z) + \frac{r(z)}{||a(z)||_2^2} a(z),
\qquad \mbox{with}~~ a(z)=\int_0^1 \nabla H (\gamma(\tau)) \, \d
\tau,$$ where the residual $r(z)$ is defined as $$\label{residual}
r(z) \equiv -2(z-2y_1+y_0)^T \int_0^1 (2\tau-1) \nabla
H(\gamma(\tau)) \, \mathrm{d}\tau.$$ A direct computation shows that any solution $z^\ast$ of also satisfies . In the next section we will show that admits a unique solution $y_2\equiv z^\ast$ satisfying the order condition $y_2=y(t_0+2h)+O(h^5)$. Such a result will be derived by regarding as a perturbation of the HBVM of order $4$ and, in turn, by comparing the two associated numerical solutions. To this end and to better explain the genesis of formula and the role of the integrals therein, a brief introduction of the HBVM formula of order four is in order.
HBVM of order four {#rem1}
------------------
Suppose that both $y_1$ and $y_2$ are unknown (so now $y_1$ is no longer given a priori as indicated by assumption ($A_1$)): let us call them $u_1$ and $u_2$ respectively. For to be satisfied, we can impose the two orthogonality conditions $$\label{orthbvm}\left\{
\begin{array}{l}
\displaystyle u_2 - y_0 = \eta_1 h J \int_0^1 \nabla H(\gamma(\tau))
\,
\mathrm{d}\tau, \\[.5cm]
\displaystyle u_2-2u_1+y_0 = \eta_2 h J \int_0^1 (2\tau-1) \nabla
H(\gamma(\tau)) \, \mathrm{d}\tau,
\end{array}
\right.$$ giving rise to a system of two block-equations (the curve $\gamma(\tau)=\sigma(t_0+2\tau h)$ is as in with $u_1$ and $u_2$ in place of $y_1$ and $y_2$). Setting the free constants $\eta_1$ and $\eta_2$ equal to $2$ and $3$, respectively, confers the highest possible order, namely $4$, on the resulting method: $u_2=y(t_0+2h)+O(h^5)$ (see [@IT3] for details).[^6] Furthermore, it may be shown that the internal stage $u_1$ satisfies the order condition $u_1=y(t_0+h)+O(h^4)$.
Evidently, the implementation of on a computer cannot disregard the issue of computing the integrals appearing in both equations. Two different situations may emerge:
- the Hamiltonian function $H(y)$ is a polynomial of degree $\nu$. In such a case, the two integrals in are exactly computed by a quadrature formula having degree of precision $d \ge 2\nu-1$.
- $H(y)$ is not a polynomial, nor do the two integrands admit a primitive function in closed form. Again, an appropriate quadrature formula can be used to approximate the two integrals to within machine precision, so that no substantial difference is expected during the implementation process by replacing the integrals by their discrete counterparts.
Case (a) gives rise to an infinite family of Runge-Kutta methods, each depending on the specific choice (number and distribution) of nodes the quadrature formula is based upon (see [@BIT2] for a general introduction on HBVMs and [@BIT3] for their relation with standard collocation methods). For example, choosing $k$ nodes according to a Gauss distribution over the interval $[0,1]$ results in a method that precisely conserves the energy if applied to polynomial canonical Hamiltonian systems with $\nu \le k$ and that becomes the classical $2$-stage Gauss collocation method when $k=2$. On the other hand, choosing a Lobatto distribution yields a Runge-Kutta method that preserves polynomial Hamiltonian functions of degree $\nu \le k-1$ and that becomes the Lobatto IIIA method of order four when $k=2$.
The methods resulting from case (b) are indistinguishable from the original formulae in that they are energy-preserving up to machine precision when applied to any regular canonical Hamiltonian system. Stated differently, may be viewed as the limit of the family of HBVMs of order four, as the number of nodes tends to infinity. For this reason the limit formulae have been called $\infty$-HBVMs of order $4$ (see [@BIT2]).
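To fix ideas, here is a hedged Python sketch of one step of the fourth-order HBVM defined by the two orthogonality conditions above, applied to the cubic pendulum Hamiltonian $H(q,p)=\frac{1}{2}p^2+\frac{1}{2}q^2-\frac{1}{6}q^3$ and discretized with the 5-point Lobatto rule, whose degree of precision $7 \ge 2\nu-1 = 5$ places us in case (a). The simple fixed-point solver and the stepsize are assumptions of the example, not prescriptions of the method:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def H(y):
    q, p = y
    return 0.5*p**2 + 0.5*q**2 - q**3/6.0

def grad_H(y):
    q, p = y
    return np.array([q - 0.5*q**2, p])

# 5-point Lobatto abscissae and weights on [0, 1] (degree of precision 7)
c = np.array([0.0, 0.5 - np.sqrt(21)/14, 0.5, 0.5 + np.sqrt(21)/14, 1.0])
b = np.array([1/20, 49/180, 16/45, 49/180, 1/20])

def hbvm4_step(y0, h, tol=1e-14, maxit=200):
    """One step of size 2h: returns (u1, u2) approximating
    (y(t0+h), y(t0+2h)), with eta_1 = 2 and eta_2 = 3 as in the text."""
    u1, u2 = y0.copy(), y0.copy()
    for _ in range(maxit):
        # quadratic through y0, u1, u2 at tau = 0, 1/2, 1
        g = lambda t: y0 + (u2 - y0)*t + 2*(u2 - 2*u1 + y0)*t*(t - 1)
        G = [grad_H(g(ci)) for ci in c]
        a = sum(bi*Gi for bi, Gi in zip(b, G))                    # int grad H
        d = sum(bi*(2*ci - 1)*Gi for bi, ci, Gi in zip(b, c, G))  # int (2 tau - 1) grad H
        u2_new = y0 + 2*h*(J @ a)
        u1_new = 0.5*(u2_new + y0) - 1.5*h*(J @ d)
        if max(np.linalg.norm(u1_new - u1), np.linalg.norm(u2_new - u2)) < tol:
            return u1_new, u2_new
        u1, u2 = u1_new, u2_new
    return u1, u2

y0 = np.array([1.0, 0.0])
u1, u2 = hbvm4_step(y0, 0.05)
```

Because the quadrature is exact here, $H(u_2)-H(y_0)$ reduces to $(2hJa)^Ta + 2(3hJd)^Td = 0$ by the antisymmetry of $J$, so the energy is conserved up to the fixed-point tolerance.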
In the present context, $y_1$ being a known quantity, the unknown $z$ in cannot in general satisfy, at the same time, both orthogonality conditions in . However, since $y_1$ may be thought of as an approximation of order four to the quantity $u_1$ in , should we only impose the first orthogonality condition, namely $$\label{orth1} z - y_0 = 2 h J a(z),$$ we would expect the residual $r(z)$ (the right hand side of ) to be very small.[^7] This suggests that a solution to that yields an approximation of high order to $y(t_0+2h)$ may be obtained by allowing a small deviation from orthogonality in . This is accomplished by setting $z - y_0 = 2 h J a(z) + \delta a(z)$, and by tuning the perturbation parameter $\delta$ in such a way that be satisfied: this evidently gives $\delta=\frac{r(z)}{||a(z)||_2^2}$ and we arrive at .
Analysis of the method {#analysis_sec}
======================
Results on the existence and uniqueness of a solution of as well as on its order of accuracy will be derived by first analyzing the simpler nonlinear system $$\label{M}
%\begin{array}{l}
\displaystyle z=y_0+ 2 h J a(z), \qquad \mbox{with}~~ a(z)=\int_0^1
\nabla H (\gamma(\tau)) \, \d \tau,$$ obtained by neglecting the correction term $\frac{r(z)}{||a(z)||_2^2} a(z)$. For $z\in \RR^{2m}$ we set (see ) $$\label{gammaz} \gamma_z(\tau) = y_0+(z-y_0)\tau
+2(z-2y_1+y_0)\tau(\tau-1),$$ and (see ) $$\label{Phi} \Phi(z)=y_0+ 2 h J a(z).$$ In the following $||\cdot||$ will denote the $2$-norm.
\[lem1\] There exist positive constants $\rho$ and $h_0$ such that, for $h\le h_0$, system admits a unique solution $\hat z$ in the ball $B(y_0,\rho)$ of center $y_0$ and radius $\rho$.
We show that constants $h_0,\rho>0$ exist such that the function defined in satisfies the following two conditions for $h
\le h_0$:
- $\Phi(z)$ is a contraction on $B(y_0,\rho)$, namely $$\forall z,w\in B(y_0,\rho),\quad ||\Phi(z)-\Phi(w)|| \le L ||z-w||,
\qquad \mbox{with} \quad L<1;$$
- $||\Phi(y_0)-y_0||\le (1-L) \rho$.
The contraction mapping theorem can then be applied to obtain the assertion.
Let $B(y_0,\rho)$ be a ball centered at $y_0$ with radius $\rho$. We can choose $h'_0$ and $\rho$ small enough that the image set $\Omega=\{\gamma_z(\tau): \tau\in[0,1],~ z\in B(y_0,\rho),~h\le
h'_0\}$ is entirely contained in a ball $B(y_0,\rho')$ which, in turn, is contained in the domain of $\nabla^2H(y)$.[^8] We set $$M_\rho = \max_{w\in B(y_0,\rho')} \left\|\nabla^2H(w) \right\|.$$ From and we have $$\frac{\partial a(z)}{\partial z} = \int_0^1
\nabla^2H(\gamma_z(\tau)) \frac{\partial \gamma_z}{\partial z} \, \d
\tau = \int_0^1 \nabla^2H(\gamma_z(\tau)) \, \tau(2\tau-1) \, \d
\tau$$ and hence $$\left\|\frac{\partial a(z)}{\partial z}\right\| \le M_\rho \int_0^1
\tau |2\tau-1| \, \d \tau =\frac{1}{4} M_\rho.$$ Consequently (a) is satisfied by choosing $$\label{Lcond}
L=\frac{h}{2}M_\rho$$ and $h_0<\min\{\frac{2}{M_\rho},h'_0\}$. Concerning (b), we observe that $$\Phi(y_0)-y_0 = 2hJa(y_0) =2h J \int_0^1 \nabla
H(y_0+4(y_0-y_1)\tau(\tau-1)) \, \d \tau,$$ hence $||\Phi(y_0) -y_0|| = 2h ||a(y_0)||$ with $||a(y_0)||$ bounded with respect to $h$. Since $L$ vanishes with $h$ (see (\[Lcond\])), we can always tune $h_0$ in such a way that $2h||a(y_0)|| \le (1-L) \rho$.
\[lem2\] The solution $\hat z$ of satisfies $y(t+2h)-\hat z=O(h^5)$.
Under the assumption ($A_1$), may be regarded as a perturbation of system , since $y_1$ and $u_1$ are $O(h^5)$ and $O(h^4)$ close to $y(t+h)$ respectively.[^9] Since $u_2=y(t+2h)+O(h^5)$, we can estimate the accuracy of $\hat z$ as an approximation of $y(t+2h)$ by evaluating its distance from $u_2$.
Let $\tilde \gamma(\tau)$ be the underlying quadratic curve associated with the HBVM defined by , namely $$\label{tgamma} \tilde \gamma(\tau) \equiv y_0+(u_2-y_0)\tau
+2(u_2-2u_1+y_0)\tau(\tau-1).$$ Considering that (see ) $$\gamma_{u_2}(\tau) \equiv y_0+(u_2-y_0)\tau
+2(u_2-2y_1+y_0)\tau(\tau-1) = \tilde
\gamma(\tau)+4(u_1-y_1)\tau(\tau-1),$$ from the first equation in and we get $$\begin{array}{rl}
\Phi(u_2) = & \displaystyle y_0+2hJ\int_0^1\nabla
H(\gamma_{u_2}(\tau)) \,\d \tau = y_0+2hJ\int_0^1\nabla
H(\tilde \gamma(\tau)) \,\d \tau \\[.4cm]
& \displaystyle + 8hJ \int_0^1 \nabla^2 H(\tilde
\gamma(\tau))\tau(\tau-1) \,\d \tau \cdot (u_1-y_1) +
O(||u_1-y_1||^2) \\[.4cm]
=& \displaystyle u_2+O(h^5).
\end{array}$$ If $h$ is small enough, $u_2$ will be inside the ball $B(y_0,\rho)$ defined in Lemma \[lem1\]. The Lipschitz condition yields (see ) $$||\hat z -u_2|| = ||\Phi(\hat z) -\Phi(u_2) +O(h^5)|| \le
\frac{h}{2}M_\rho ||\hat z-u_2|| +O(h^5),$$ and hence $||\hat z -u_2|| = O(h^5)$.
The above result states that defines a method of order $4$ which is a simplified (non-corrected) version of our conservative method defined at . In Section \[test\_sec\] the behavior of these two methods will be compared on a set of test problems. We now state the analogous results for system .
\[th1\] Under the assumption ($A_1$), for $h$ small enough, equation admits a unique solution $z^\ast$ satisfying $y(t+2h)-z^\ast=O(h^5)$.
Consider the solution $\hat z$ of system . We have (see ) $$\gamma_{\hat z}(\tau)-\tilde \gamma(\tau)=(\hat z -u_2)\tau(2\tau-1)
+4(u_1-y_1)\tau(\tau-1)=O(h^5),$$ and $$\hat z-2y_1+y_0=u_2-2u_1+y_0 +O(h^5).$$ Hence, by virtue of , $$r(\hat z) = -2\left[(u_2-2u_1+y_0) +O(h^5)\right]^T \left[\int_0^1
(2\tau-1)\nabla H(\tilde \gamma(\tau)) \, \d \tau
+O(h^5)\right]=O(h^5).$$ Since $a(\hat z)$ is bounded with respect to $h$, it follows that, in a neighborhood of $\hat z$, system may be regarded as a perturbation of system , the perturbation term being $R(z,h)\equiv \frac{r(z)}{||a(z)||_2^2}a(z)$.
Consider the ball $B(\hat z, R(\hat z, h))$: since $\hat
z=y_0+O(h)$, and $R(\hat z, h)=O(h^5)$, this ball is contained in $B(y_0, \rho)$ defined in Lemma \[lem1\] and the perturbed function $\Phi(z)+R(z,h)$ is a contraction therein, provided $h$ is small enough. Evaluating the right-hand side of at $z=\hat z$ we get $$y_0+2h J a(\hat z) + R(\hat z, h) = \hat z + R(\hat z, h),$$ which means that property (b) listed in the proof of Lemma \[lem1\], with $\hat z$ in place of $y_0$, holds true for the perturbed function $y_0+2h J a(z) + R(z, h)$, and the contraction mapping theorem may be again exploited to deduce the assertion.
Discretization {#discr_sec}
==============
As was stressed in Section \[def\_methods\], formula is not operative unless a technique to solve the two integrals is taken into account. The most obvious choice is to compute the integrals by means of a suitable quadrature formula which may be assumed exact in the case where the Hamiltonian function is a polynomial, and to provide an approximation to within machine precision in all other cases.
Hereafter we assume that $H(q,p)$ is a polynomial in $q$ and $p$ of degree $\nu$. Since $\gamma(\tau)$ has degree two, it follows that the integrand functions appearing in the definitions of $a(z)$ and $r(z)$ at and have degree $2\nu-2$ and $2\nu-1$ respectively, and can be integrated exactly by any quadrature formula with abscissae $c_1<c_2<\cdots<c_k$ in $[0,1]$ and weights $b_1,\dots,b_k$, having degree of precision $d \ge 2\nu-1$. In place of we now consider the equivalent form suitable for implementation $$\label{twostep} \displaystyle y_2=y_0+ 2 h J \sum_{i=1}^kb_i\nabla
H(\gamma(c_i)) + G(y_0,y_1,y_2),$$ where $$G(y_0,y_1,y_2) = \frac{-2(y_2-2y_1+y_0)^T \sum_{i=1}^k b_i (2c_i-1)
\nabla H(\gamma(c_i))}{\| \sum_{i=1}^kb_i\nabla H(\gamma(c_i))
\|_2^2} \, \sum_{i=1}^kb_i\nabla H(\gamma(c_i)).$$ Notice that from we get $$\label{lin-comb}
\gamma(c_i)=(1-3c_i+2c_i^2)y_0+4c_i(1-c_i)y_1+c_i(2c_i-1)y_2,$$ that is, $\gamma(c_i)$ is a linear combination, actually a weighted average, of the approximations $y_0$, $y_1$ and $y_2$. Therefore, since $G(y_0,y_1,y_2) = O(h^5)$ (see Lemma \[lem2\] and Theorem \[th1\]), we may look at this term as a nonlinear correction of the generalized linear multistep method $$\label{twostep-lin} \displaystyle y_2=y_0+ 2 h J
\sum_{i=1}^kb_i\nabla H(\gamma(c_i)).$$
If $H(q,p)$ is quadratic, we can choose $k=3$, $c_1=0$, $c_2=\frac{1}{2}$, $c_3=1$, $b_1=b_3=\frac{1}{6}$ and $b_2=\frac{2}{3}$, that is we can use Simpson’s quadrature formula to compute the integrals in and . Since, in such a case, $\gamma(c_i)=y_{i-1}$, method becomes $$\displaystyle y_2=y_0+ \frac{h}{3} J \left(\nabla H(y_0)+4 \nabla
H(y_1)+\nabla H(y_2) \right),$$ that is, the standard Milne-Simpson’s method.
In all other cases $\gamma(c_i)$ will differ in general from $y_j$, $j=1,2,3$ and may be regarded as an off-point entry in formula . In the sequel we will denote the method defined at by $M_k$ and its linear part, defined at , by $M'_k$. Of course, the choice of the abscissae distribution influences the energy preserving properties of the method $M_k$, as is indicated in Table \[nodes-distribution-table\].
  [Abscissae distribution:]{}   uniform                                 Lobatto            Gauss
  ----------------------------- --------------------------------------- ------------------ ----------------
  [Energy preserving when:]{}   $\deg H \le \lceil \frac{k}{2}\rceil$   $\deg H \le k-1$   $\deg H \le k$
: Energy preserving properties of method $M_k$ for some well-known distributions of the nodes $\{c_i\}$.[]{data-label="nodes-distribution-table"}
Numerical tests {#test_sec}
===============
Hereafter we implement the order four method $M_k$ on a few Hamiltonian problems to show that the numerical results are consistent with the theory presented in Section \[analysis\_sec\]. In particular, in the first two problems the Hamiltonian function is a polynomial of degree three and six respectively, while the last numerical test reports the behavior of the method on a non-polynomial problem.
Each step of the integration procedure requires the solution of a nonlinear system, in the unknown $y_2$, represented by for the method $M_k$ and for the method $M'_k$. The easiest way (although not the most efficient one) to find a solution is by means of a fixed-point iteration which, in the case of the method $M_k$, reads $$\label{iteration} z_{s+1}=y_0+ 2 h J \sum_{i=1}^kb_i\nabla
H(\gamma_{z_s}(c_i)) + G(y_0,y_1,z_s),\qquad s=1,2,\dots,$$ where $\gamma_z$ is defined at and $z_0$ is an initial approximation of $y_2$ which is then refined by setting $y_2=z_{\bar s}$ with $z_{\bar s}\simeq \lim_{s\rightarrow \infty}
z_s$. From Theorem \[th1\] and the preceding lemmas we deduce that such a limit always exists provided that $h$ is small enough. The value of $z_0$ can be retrieved via extrapolation based on the previously computed points, or by using the method $M'_k$ as a predictor for $M_k$.
We will consider a Lobatto distribution with an odd number $k$ of abscissae $\{c_i\}$. In fact, if $k$ is odd, since $y_0=\gamma(0)=\gamma(c_1)$ and $y_1=\gamma(\frac{1}{2})=\gamma(c_{\lceil \frac{k}{2}\rceil})$, we save two function evaluations during the iteration .
Test problem 1
--------------
The Hamiltonian function $$\label{cubic_pendulum}
H(q,p)=\frac{1}{2}p^2+\frac{1}{2}q^2-\frac{1}{6}q^3$$ defines the cubic pendulum equation. We can solve it by using five Lobatto nodes to discretize the integrals in , thus getting the method $M_5$. The corresponding numerical solution, denoted by $(q_n,p_n)$, is plotted in Figure \[cubic\_pendulum\_fig1\]. For comparison purposes we also compute the numerical solution $(q'_n, p'_n)$ provided by the fourth-order method, say $M'_5$, obtained by neglecting the correction term in , that is, by setting $r(z)\equiv 0$. Figure \[cubic\_pendulum\_fig2\] clearly shows the energy conservation property, while Table \[tab1\] summarizes the convergence properties of the two methods.
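To make the scheme concrete, the fixed-point iteration can be coded in a few lines. The following Python sketch (ours, not from the paper) applies it to the cubic pendulum; the 5-point Lobatto nodes and weights, the RK4 bootstrap for $y_1$, and the starting guess $z_0=2y_1-y_0$ are our choices.

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])          # canonical symplectic matrix

def H(y):                                         # cubic pendulum Hamiltonian
    q, p = y
    return 0.5 * p * p + 0.5 * q * q - q**3 / 6.0

def gradH(y):
    q, p = y
    return np.array([q - 0.5 * q * q, p])

# 5-point Lobatto nodes and weights rescaled to [0, 1]
c = np.array([0.0, 0.5 - np.sqrt(21) / 14, 0.5, 0.5 + np.sqrt(21) / 14, 1.0])
b = np.array([1 / 20, 49 / 180, 16 / 45, 49 / 180, 1 / 20])

def step_M5(y0, y1, h, tol=1e-15):
    """Advance (y0, y1) -> y2 by the fixed-point iteration for M_5."""
    z = 2 * y1 - y0                               # starting guess z_0
    for _ in range(200):
        S = np.zeros(2)                           # sum_i b_i grad H(gamma(c_i))
        T = np.zeros(2)                           # sum_i b_i (2 c_i - 1) grad H(gamma(c_i))
        for ci, bi in zip(c, b):
            g = gradH((1 - 3*ci + 2*ci*ci)*y0 + 4*ci*(1 - ci)*y1 + ci*(2*ci - 1)*z)
            S += bi * g
            T += bi * (2*ci - 1) * g
        G = -2.0 * np.dot(z - 2 * y1 + y0, T) / np.dot(S, S) * S
        z_new = y0 + 2 * h * (J @ S) + G
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

def bootstrap(y, h, m=200):
    """Accurate y1 = y(t0 + h) via m small RK4 substeps of y' = J grad H(y)."""
    f = lambda v: J @ gradH(v)
    k = h / m
    for _ in range(m):
        k1 = f(y); k2 = f(y + 0.5*k*k1); k3 = f(y + 0.5*k*k2); k4 = f(y + k*k3)
        y = y + (k / 6) * (k1 + 2*k2 + 2*k3 + k4)
    return y

h = 0.1
ys = [np.array([0.0, 1.0])]
ys.append(bootstrap(ys[0].copy(), h))
for _ in range(200):
    ys.append(step_M5(ys[-2], ys[-1], h))
print(max(abs(H(y) - H(ys[0])) for y in ys))      # roundoff-level energy deviation, cf. Table [tab1]
```

The inner correction term `G` vanishes at the extrapolated starting guess, so the first sweep of the loop performs a pure linear-multistep step before the nonlinear correction kicks in.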
![Numerical solution $(q_n, p_n)$ versus time $t_n$ (left picture) and on the phase plane (right picture). Parameters: initial condition $y_0=[0,\,1]$; stepsize $h=0.5$; integration interval $[0, 200 \pi]$.[]{data-label="cubic_pendulum_fig1"}](cubic_pendulum_ty "fig:"){width="6.7cm" height="5cm"} ![Numerical solution $(q_n, p_n)$ versus time $t_n$ (left picture) and on the phase plane (right picture). Parameters: initial condition $y_0=[0,\,1]$; stepsize $h=0.5$; integration interval $[0, 200 \pi]$.[]{data-label="cubic_pendulum_fig1"}](cubic_pendulum_phase "fig:"){width="6.7cm" height="5cm"}
![Hamiltonian function evaluated along the numerical solution $(p_n,q_n)$ (horizontal line) and along the numerical solution $(p'_n, q'_n)$ (irregularly oscillating line).[]{data-label="cubic_pendulum_fig2"}](cubic_pendulum_ham){width="10cm" height="5cm"}
\[tab1\]
---------- --------------------- --------- ---------------------------- --------------------- --------- -----------------------------
error order [$\max |H(y_n)-H(y_0)|$]{} error order [$\max |H(y'_n)-H(y_0)|$]{}
  $3.1\cdot 10^{-2}$              $2.5\cdot 10^{-15}$          $1.1\cdot 10^{-1}$              $1.1008\cdot 10^{-1}$
  $3.8\cdot 10^{-4}$    $6.373$   $1.9\cdot 10^{-15}$          $3.1\cdot 10^{-3}$    $5.183$   $2.9680\cdot 10^{-3}$
  $2.6\cdot 10^{-5}$    $3.866$   $1.5\cdot 10^{-15}$          $2.5\cdot 10^{-4}$    $3.655$   $1.5755\cdot 10^{-4}$
  $1.6\cdot 10^{-6}$    $4.059$   $8.8\cdot 10^{-16}$          $1.8\cdot 10^{-5}$    $3.811$   $8.5163\cdot 10^{-6}$
  $9.5\cdot 10^{-8}$    $4.032$   $9.9\cdot 10^{-16}$          $1.2\cdot 10^{-6}$    $3.905$   $4.8883\cdot 10^{-7}$
  $5.9\cdot 10^{-9}$    $4.017$   $1.1\cdot 10^{-15}$          $7.6\cdot 10^{-8}$    $3.952$   $2.9131\cdot 10^{-8}$
  $3.6\cdot 10^{-10}$   $4.008$   $1.1\cdot 10^{-15}$          $4.9\cdot 10^{-9}$    $3.976$   $1.7771\cdot 10^{-9}$
  $2.3\cdot 10^{-11}$   $4.004$   $2.3\cdot 10^{-15}$          $3.1\cdot 10^{-10}$   $3.988$   $1.0968\cdot 10^{-10}$
  $1.4\cdot 10^{-12}$   $4.006$   $2.4\cdot 10^{-15}$          $1.9\cdot 10^{-11}$   $3.994$   $6.8121\cdot 10^{-12}$
---------- --------------------- --------- ---------------------------- --------------------- --------- -----------------------------
: Methods $M_5$ (with correction term) and $M'_5$ (without correction term) are implemented on the cubic pendulum equation on the time interval $[0,
10]$ for several values of the stepsize $h$. The order of convergence is numerically evaluated by means of the formula $\log_2
\frac{\mbox{\rm error}(h)}{\mbox{\rm error}(\frac{h}{2})}$. As expected, the maximum displacement of the numerical Hamiltonian $H(y_n)$ from the theoretical value $H(y_0)$ is close to machine precision for the method $M_5$, independently of the stepsize $h$ used.
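The order column can be reproduced directly from consecutive error entries; a one-line sketch (ours, not from the paper):

```python
from math import log2

def observed_order(err_h, err_h2):
    """Empirical convergence order from the errors at stepsizes h and h/2."""
    return log2(err_h / err_h2)

# first two rows of the M_5 error column
print(round(observed_order(3.1e-2, 3.8e-4), 2))   # 6.35, close to the 6.373 entry
```

The table's entries were computed from unrounded errors, which accounts for the small discrepancy.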
Test problem 2
--------------
The Hamiltonian function $$\label{fhp} H(p,q)= \frac{1}{3}p^3- \frac{1}{2}p
+\frac{1}{30}q^6+\frac{1}{4}q^4-\frac{1}{3}q^3+\frac{1}{6}$$ has been proposed in [@FaHaPh] to show that symmetric methods may suffer from the energy drift phenomenon even when applied to reversible systems, that is when $H(-p,q)=H(p,q)$.[^10] For our experiment, we will use $y_0=[0.2,\,
0.5]$ as initial condition.
Since $\deg(H(q,p))=6$, we need a Lobatto quadrature based on at least seven nodes to assure that the integrals in are computed exactly. Therefore we solve by method $M_7$. For comparison purposes, it is also interesting to show the dynamics of the symmetric non-conservative method $M'_7$. Figure \[fhp\_fig1\] displays the results obtained by the two methods implemented with stepsize $h=\frac{1}{10}$ over the interval $[0,\,10^3]$. In particular, the numerical trajectories generated by method $M'_7$ and $M_7$, are reported in the left-top and left-bottom pictures respectively, while the right picture reports the corresponding error in the Hamiltonian function evaluated along the two numerical solutions, namely $|H(y_n)-H(y_0)|$.
Evidently, the numerical solution produced by $M'_7$ rapidly departs from the level curve $H(q,p)=H(q_0,p_0)$, but it eventually remains bounded, and the points $(q_n,p_n)$ seem to densely fill a bounded region of the phase plane.
By contrast, since the present problem has a single degree of freedom, the points $(q_n,p_n)$ produced by $M_7$ lie on the very same continuous trajectory covered by $y(t)$: this is also confirmed by the bottom graph in the right picture.
Table \[tab2\] shows the behavior of method $M_7$ applied to problem as the stepsize $h$ goes to zero. Notice the $O(h^5)$ rate of convergence to zero of the residual function $r(z)$ in .
![Left pictures: numerical solutions in the phase plane computed by method $M'_7$ (top picture) and $M_7$ (bottom picture). Right picture: error in the numerical Hamiltonian function $|H(y_n)-H(y_0)|$ produced by the two methods. Parameters: initial condition $y_0=[0.2,\,0.5]$; stepsize $h=0.1$; integration interval $[0, 1000]$.[]{data-label="fhp_fig1"}](faou_phase "fig:"){width="6.7cm" height="5cm"} ![Left pictures: numerical solutions in the phase plane computed by method $M'_7$ (top picture) and $M_7$ (bottom picture). Right picture: error in the numerical Hamiltonian function $|H(y_n)-H(y_0)|$ produced by the two methods. Parameters: initial condition $y_0=[0.2,\,0.5]$; stepsize $h=0.1$; integration interval $[0, 1000]$.[]{data-label="fhp_fig1"}](faou_ham "fig:"){width="6.7cm" height="5cm"}
\[tab2\]
---------- ---------------------- --------- ------------------------ ----------------------- -------------------
error order [ $|H(y_N)-H(y_0)|$]{} residual $r(y_N)$ order of $r(y_N)$
  $4.47\cdot 10^{-2}$              $1.6\cdot 10^{-16}$      $-1.21\cdot 10^{-03}$
  $7.38\cdot 10^{-4}$    $5.920$   $4.4\cdot 10^{-16}$      $-3.23\cdot 10^{-06}$   $8.559$
  $3.90\cdot 10^{-5}$    $4.243$   $5.8\cdot 10^{-16}$      $-2.15\cdot 10^{-08}$   $7.225$
  $2.39\cdot 10^{-6}$    $4.027$   $2.4\cdot 10^{-16}$      $-6.61\cdot 10^{-10}$   $5.029$
  $1.49\cdot 10^{-7}$    $4.007$   $2.5\cdot 10^{-15}$      $-2.03\cdot 10^{-11}$   $5.021$
  $9.27\cdot 10^{-9}$    $4.002$   $3.2\cdot 10^{-15}$      $-6.27\cdot 10^{-13}$   $5.018$
  $5.77\cdot 10^{-10}$   $4.006$   $5.5\cdot 10^{-16}$      $-2.00\cdot 10^{-14}$   $4.972$
  $3.16\cdot 10^{-11}$   $4.188$   $5.4\cdot 10^{-15}$      $-5.36\cdot 10^{-16}$   $5.219$
---------- ---------------------- --------- ------------------------ ----------------------- -------------------
: Performance of method $M_7$ applied to problem , with initial condition $y_0=[0.2,\,0.5]$, on the time interval $[0, 250]$ for several values of the stepsize $h$, as specified in the first column. The second and third columns report the relative error in the last computed point $y_N$, $N=T/h$ and the corresponding order of convergence. Since the integrals appearing in are precisely computed by the Lobatto quadrature formula with seven nodes, the error in the numerical Hamiltonian $H(y_N)$ is zero up to machine precision. The last two columns list the residual $r(y_N)$ defined in and its order of convergence to zero.
Test problem 3
--------------
We finally consider the non-polynomial Hamiltonian function $$\label{kepler} H(q_1,q_2,p_1,p_2) =
\frac{1}2(p_1^2+p_2^2)-\frac{1}{\sqrt{q_1^2+q_2^2}}$$ that defines the well known Kepler problem, namely the motion of two masses under the action of their mutual gravitational attraction. Taking as initial condition $$\label{kepler0}
(q_1(0),q_2(0),p_1(0),p_2(0))=\left(1-e,\; 0,\; 0,\; \sqrt{\frac{1+e}{1-e}}\,\right)^T$$ yields an elliptic periodic orbit of period $2\pi$ and eccentricity $e\in[0,1)$. We have chosen $e=0.6$. Though the vector field fails to be polynomial in $q_1$ and $q_2$, we can use a sufficiently large number of quadrature nodes to discretize the integrals in so that the corresponding accuracy is within machine precision. Under this assumption, and leaving aside the effect of floating point arithmetic, the computer will make no difference between the conservative formulae and their discrete counterparts.
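As a quick sanity check (our sketch), the initial energy equals $-\frac{1}{2}$, the Keplerian energy of an ellipse with unit semi-major axis:

```python
import numpy as np

def H(y):
    q1, q2, p1, p2 = y
    return 0.5 * (p1 * p1 + p2 * p2) - 1.0 / np.hypot(q1, q2)

e = 0.6                                   # eccentricity chosen in the text
y0 = np.array([1 - e, 0.0, 0.0, np.sqrt((1 + e) / (1 - e))])
print(H(y0))                              # about -0.5 = -1/(2a), semi-major axis a = 1
```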
The left picture in Figure \[kepler\_fig1\] explains the above argument. It reports the error $|H(y_n)-H(y_0)|$ in the Hamiltonian function for various choices of the number of Lobatto nodes, namely $k=3,\,5,\,7,\,9$. We see that the error decreases quickly as the number of nodes is increased, and for $k=9$ it is within machine epsilon.[^11]
The use of finite arithmetic may sometimes cause a mild numerical drift of the energy over long times, like the one shown in the upper line in the right picture of Figure \[kepler\_fig1\]. This is due to the fact that on a computer the numerical solution satisfies the conservation relation $H(y_n)=H(y_0)$ only up to machine precision times the condition number of the nonlinear system that is to be solved at each step.
To prevent the accumulation of roundoff errors we may apply a simple and inexpensive *correction* technique to the approximation $y_n$, which consists of a single step of a gradient descent method (see also [@BIS]). More precisely, the corrected solution $y^*_n$ is defined by $$\label{descent} y^*_n = y_n-\alpha \frac{\nabla H(y_n)}{||\nabla
H(y_n)||_2}, \qquad \mbox{with} ~
\alpha=\frac{H(y_n)-H(y_0)}{||\nabla H(y_n)||_2},$$ which stems from choosing as $\alpha$ the value that minimizes the linear part of the function $F(\alpha) = H(y_n-\alpha \frac{\nabla
H(y_n)}{||\nabla H(y_n)||_2})-H(y_0)$. The bottom line in the right picture of Figure \[kepler\_fig1\] shows the energy conservation property of the corrected solution.
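A minimal sketch of this correction step (function name ours), checked on a toy harmonic oscillator:

```python
import numpy as np

def energy_correction(y, H, gradH, H0):
    """One gradient-descent step pushing y toward the level set H = H0."""
    g = gradH(y)
    ng = np.linalg.norm(g)
    alpha = (H(y) - H0) / ng
    return y - alpha * g / ng

# toy check on the harmonic oscillator H = (q^2 + p^2)/2
H = lambda y: 0.5 * np.dot(y, y)
gradH = lambda y: y
y = np.array([1.1, 0.0])                      # slightly off the level set H = 0.5
y_corr = energy_correction(y, H, gradH, 0.5)
print(abs(H(y_corr) - 0.5), abs(H(y) - 0.5))  # corrected energy error is much smaller
```

Since the step only cancels the linear part of $F(\alpha)$, the residual energy error after correction is quadratic in the original deviation.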
![Left picture. Error in the numerical Hamiltonian function $|H(y_n)-H(y_0)|$ produced by methods $M_k$, with $k=3,\,5,\,7,\,9$. Parameters: stepsize $h=0.05$, integration interval $[0, 50]$. Right picture. Roundoff errors may cause a drift of the numerical Hamiltonian function (upper line) which can be easily taken under control by coupling the method with a costless correction procedure like the one described at .[]{data-label="kepler_fig1"}](kepler_ham "fig:"){width="6.7cm" height="5cm"} ![Left picture. Error in the numerical Hamiltonian function $|H(y_n)-H(y_0)|$ produced by methods $M_k$, with $k=3,\,5,\,7,\,9$. Parameters: stepsize $h=0.05$, integration interval $[0, 50]$. Right picture. Roundoff errors may cause a drift of the numerical Hamiltonian function (upper line) which can be easily taken under control by coupling the method with a costless correction procedure like the one described at .[]{data-label="kepler_fig1"}](kepler_correction "fig:"){width="6.7cm" height="5cm"}
Conclusions
===========
We have derived a family of mono-implicit methods of order four with energy-preserving properties. Each element in the family originates from a limit formula and is defined by discretizing the integral therein by means of a suitable quadrature scheme. This process assures exact energy conservation in the case where the Hamiltonian function is a polynomial, and conservation to within machine precision in all other cases, as illustrated in the numerical tests. Interestingly, each method may be conceived as an $O(h^5)$ perturbation of a two-step linear method.
[10]{}
W.M.G. van Bokhoven, *Efficient higher order implicit one-step methods for integration of stiff differential equations*, BIT **20**, 1 (1980), 34–43.
L.Brugnano, F.Iavernaro, T.Susca, *Hamiltonian BVMs (HBVMs): implementation details and applications*, [*AIP Conf. Proc.*]{} [**1168**]{} (2009), 723–726.
L. Brugnano, F. Iavernaro and D. Trigiante, The Hamiltonian BVMs (HBVMs) Homepage, <arXiv:1002.2757> (URL: <http://www.math.unifi.it/~brugnano/HBVM/>).
L. Brugnano, F. Iavernaro and D. Trigiante, *Analysis of Hamiltonian Boundary Value Methods (HBVMs): a class of energy-preserving Runge-Kutta methods for the numerical solution of polynomial Hamiltonian dynamical systems*, (2009) (submitted) (<arXiv:0909.5659>).
L. Brugnano, F. Iavernaro and D. Trigiante, *Hamiltonian Boundary Value Methods (Energy Preserving Discrete Line Integral Methods)*, [Jour. of Numer. Anal. Industr. and Appl. Math.]{} [**5**]{}, 1–2 (2010), 17–37 (<arXiv:0910.3621>).
L. Brugnano, F. Iavernaro, D. Trigiante, *The Lack of Continuity and the Role of Infinite and Infinitesimal in Numerical Methods for ODEs: the Case of Symplecticity*, Applied Mathematics and Computation (to appear), DOI: 10.1016/j.amc.2011.03.022 (<arXiv:1010.4538>).
L. Brugnano and D. Trigiante, *Energy drift in the numerical integration of Hamiltonian problems*, [Jour. of Numer. Anal. Industr. and Appl. Math.]{} [**4**]{}, 3–4 (2009), 153-170.
K. Burrage, F.H. Chipman and P.H. Muir, *Order results for mono-implicit Runge–Kutta methods* SIAM J. Numer. Anal. **31**, 3 (1994), 876–891.
J.R. Cash, *A Class of Implicit Runge–Kutta Methods for the Numerical Integration of Stiff Ordinary Differential Equations*, J. ACM **22**, 4 (1975), 504–511.
J.R. Cash and A. Singhal, *Mono-implicit Runge–Kutta formulae for the numerical integration of stiff differential systems*, IMA J. Numer. Anal. **2**, 2 (1982), 211–227.
E. Celledoni, R.I. McLachlan, D. McLaren, B. Owren, G.R.W. Quispel and W.M. Wright, *Energy preserving Runge-Kutta methods*, [M2AN]{} **43** (2009), 645–649.
J.L. Cieśliński and B. Ratkiewicz, *Improving the accuracy of the discrete gradient method in the one-dimensional case*, Phys. Rev. E **81** (2010) 016704.
J.L. Cieśliński and B. Ratkiewicz, *Energy-preserving numerical schemes of high accuracy for one-dimensional Hamiltonian systems*, (<arXiv:1009.2738>).
E. Faou, E. Hairer and T.-L. Pham, *Energy conservation with non-symplectic methods: examples and counter-examples*, BIT Numerical Mathematics **44** (2004), 699–709.
K. Feng, *On difference schemes and symplectic geometry*, Proceedings of the 5th International Symposium on differential geometry & differential equations, August 1984, Beijing (1985), 42–58.
Z. Ge and J.E. Marsden, [*Lie-Poisson Hamilton-Jacobi theory and Lie-Poisson integrators*]{}, Phys. Lett. A, 133 (1988), 134–139.
O. Gonzalez, *Time integration and discrete Hamiltonian systems*, J. Nonlinear Sci. [**6**]{} (1996), 449–467.
F. Iavernaro and B. Pace, *$s$-Stage Trapezoidal Methods for the Conservation of Hamiltonian Functions of Polynomial Type*, AIP Conf. Proc. **936** (2007), 603–606.
F. Iavernaro and D. Trigiante, *High-order symmetric schemes for the energy conservation of polynomial Hamiltonian problems*, J.Numer. Anal. Ind. Appl. Math. [**4**]{}, 1-2 (2009), 87–101.
L.Gr. Ixaru and G. Vanden Berghe, *Exponential fitting*, Kluwer, Dordrecht 2004.
B. Leimkuhler and S. Reich, *Simulating Hamiltonian Dynamics*, Cambridge University Press, Cambridge, 2004.
R. I. McLachlan and M. Perlmutter, *Energy drift in reversible time integration*, J. Phys. A [**37**]{}, 45 (2004), 593–598.
R.I. McLachlan, G.R.W. Quispel and N. Robidoux, *Geometric integration using discrete gradients*, R. Soc. Lond. Philos. Trans. Ser. A Math. Phys. Eng. Sci. [**357**]{}, (1999), 1021–1045.
G.R.W. Quispel and D.I. McLaren, *A new class of energy-preserving numerical integration methods*, J. Phys. A **41** (045206), 2008.
R.D. Ruth, *A canonical integration technique*, IEEE Trans. Nuclear Science **30**,4 (1983) 2669–2671.
T.E. Simos, *High-order closed Newton-Cotes trigonometrically-fitted formulae for long-time integration of orbital problems*, Comput. Phys. Comm. **178** (2008), 199–207.
G. Vanden Berghe and M. Van Daele: *Exponentially-fitted Störmer/Verlet methods*, J. Numer. Anal. Ind. Appl. Math. **1** (2006), 237–251.
[^1]: Dipartimento di Matematica “U.Dini”, Università di Firenze, Italy ([luigi.brugnano@unifi.it]{}).
[^2]: Dipartimento di Matematica, Università di Bari, Italy ([felix@dm.uniba.it]{}).
[^3]: Dipartimento di Energetica “S.Stecco”, Università di Firenze, Italy ([trigiant@unifi.it]{}).
[^4]: Work developed within the project “Numerical methods and software for differential equations”.
[^5]: A documentation about HBVMs, Matlab codes, and a complete set of references is available at the url [@BIT0].
[^6]: Since we are integrating the problem on an interval $[t_0,t_2]$ of length $2h$, we have scaled the constants $\eta_1$ and $\eta_2$ by a factor two with respect to the values reported in [@IT3].
[^7]: By exploiting the result in Lemma \[lem2\] below, it is not difficult to show that actually implies $r(z)=O(h^5)$. This aspect is further emphasized in the numerical test section (see Table \[tab2\]).
[^8]: Notice that, by definition, the set $\Omega$ is an open simply connected subset of $\RR^{2m}$ containing $B(y_0,\rho)$ while, from the assumption ($A_1$), decreasing $h$ causes the point $y_1$ to approach $y_0$.
[^9]: This also implies that $u_1-y_1=O(h^4)$.
[^10]: In fact, the authors show that the system deriving from is equivalent to a reversible system (see also [@BrTr0; @McPe] for a discussion on the integration of reversible Hamiltonian systems by symmetric methods).
[^11]: All tests were performed in Matlab using double precision arithmetic.
---
abstract: 'In this paper we extend the construction of the canonical polarized variation of Hodge structures over tube domains considered by B. Gross in [@G] to bounded symmetric domains, and introduce a series of invariants of infinitesimal variations of Hodge structures, which we call characteristic subvarieties. We prove that the characteristic subvarieties of the canonical polarized variations of Hodge structures over irreducible bounded symmetric domains are identified with the characteristic bundles defined by N. Mok in [@M]. We also verify the generating property for all irreducible bounded symmetric domains, which was predicted by B. Gross in [@G].'
address:
- 'Department of Mathematics, East China Normal University, 200062 Shanghhai, P.R. China'
- 'Universität Mainz, Fachbereich 17, Mathematik, 55099 Mainz, Germany'
author:
- Mao Sheng
- Kang Zuo
title: 'Polarized Variation of Hodge Structures of Calabi-Yau Type and Characteristic Subvarieties Over Bounded Symmetric Domains'
---
Introduction
============
It has long been an interesting problem to find a new global Torelli theorem for Calabi-Yau manifolds extending the celebrated global Torelli theorem for polarized K3 surfaces. The work of B. Gross [@G] is closely related to this problem. In fact, at the Hodge-theoretic level, B. Gross [@G] constructed a certain canonical real polarized variation of Hodge structures (PVHS) over each irreducible tube domain, and then asked for possible algebraic-geometric realizations of them (cf. [@G] §8). In this paper, we introduce certain invariants, called *characteristic subvarieties*, which turn out to be nontrivial obstructions to the realization problem posed by B. Gross.\
For a universal family of polarized Calabi-Yau $n$-folds $f: {{\mathcal X}}\to
S$, we consider the ${{\mathbb Q}}$-PVHS ${{\mathbb V}}$ formed by the primitive middle rational cohomologies of fibers. Let $(E,\theta)$ be the system of Hodge bundles associated with ${{\mathbb V}}$. By the definition of Calabi-Yau manifold, we have the first property of $(E,\theta)$: $${{\rm rank}}E^{n,0}=1.$$
The Bogomolov-Todorov-Tian unobstructedness theorem for the moduli space of Calabi-Yau manifolds gives us the second property of $(E,\theta)$: $$\theta:
T_{S}\stackrel{\simeq}{\longrightarrow}{{\rm Hom}}(E^{n,0},E^{n-1,1}).$$
On the other hand, we see that the canonical ${{\mathbb R}}$-PVHSs associated to tube domains considered by B. Gross in [@G] also have the above two properties (See [@G], Proposition 4.1 and Proposition 5.2). In this paper, we consider the following types of ${{\mathbb C}}$-PVHS (See Definition 4.6, [@Zuc] for the notion of ${{\mathbb C}}$-PVHS).
Let ${{\mathbb V}}$ be a ${{\mathbb C}}$-PVHS of weight $n$ over complex manifold $S$ with associated system of Hodge bundles $(E,\theta)$. We call ${{\mathbb V}}$ PVHS of Calabi-Yau type or [**Type I**]{} if ${{\mathbb V}}$ is a ${{\mathbb R}}$-PVHS and $(E,\theta)$ satisfies
- ${{\rm rank}}E^{n,0}=1$;
- $\theta:
T_{S}\stackrel{\simeq}{\longrightarrow}{{\rm Hom}}(E^{n,0},E^{n-1,1})$.
If ${{\mathbb V}}$ is not defined over ${{\mathbb R}}$ and $(E,\theta)$ satisfies the above two properties, then we call ${{\mathbb V}}$ a PVHS of [**Type II**]{}.
For a type I PVHS, one has ${{\rm rank}}E^{0,n}=1$ by Hodge symmetry. For our purposes it is not important to emphasize the real structure of a PVHS; rather, we will simply regard a PVHS of type I or type II as being of Calabi-Yau type.\
This paper consists of three parts. The first part extends the construction of PVHS by B. Gross in [@G] to each irreducible bounded symmetric domain. This is a straightforward step. In the second part, we introduce a series of invariants associated with the infinitesimal variation of Hodge structures (IVHS) (see [@CGGH] for the foundational work on IVHS). We call these invariants characteristic subvarieties. Our main result (Theorem \[identification\]) identifies the characteristic subvarieties of the canonical PVHS over an irreducible bounded symmetric domain with the characteristic bundles defined by N. Mok in [@M] (see [@M1], Chapter 6 and Appendix III, for a more expository introduction). The last part of this paper verifies the generating property predicted by B. Gross (cf. [@G] §5), and describes the canonical PVHSs over irreducible bounded symmetric domains in some detail.\
In a recent joint work with Ralf Gerkmann [@GSZ], we used the results of this paper to disprove modularity of the moduli space of Calabi-Yau 3-folds arising from eight planes of ${{\mathbb P}}^3$ in general positions.\
**Acknowledgements:** The authors would like to thank Ngaiming Mok for his explanation of the notion of characteristic bundles, and Eckart Viehweg for his interest in and helpful discussions on this work.
The canonical PVHS over Bounded Symmetric Domain
================================================
Let $D$ be an irreducible bounded symmetric domain, and let $G$ be the identity component of the automorphism group of $D$. We fix an origin $0\in D$. Then the isotropy subgroup of $G$ at 0 is a maximal compact subgroup $K$. By Proposition 1.2.6 of [@D], $D$ determines a special node $v$ of the Dynkin diagram of the simple complex Lie algebra $\mathfrak{g}^{{{\mathbb C}}}=Lie(G)\otimes {{\mathbb C}}$. By the standard theory of finite-dimensional representations of semi-simple complex Lie algebras (cf. [@FH]), we know that the special node $v$ also determines a fundamental representation $W$ of $\mathfrak{g}^{{{\mathbb C}}}$. By Weyl’s unitary trick, $W$ gives rise to an irreducible complex representation of $G$. When $D$ is a tube domain, the representation $W$ is exactly the one considered by B. Gross in [@G], and only in this case does $W$ admit a $G$-invariant real form. It is helpful to illustrate the above construction in the simplest case.
\[Type A example\] Let $D=SU(p,q)/S(U(p)\times U(q))$ be a type A bounded symmetric domain. Then $$G=SU(p,q),\quad K=S(U(p)\times U(q)),\quad
\mathfrak{g}^{{{\mathbb C}}}=sl(p+q,{{\mathbb C}}).$$ The special node $v$ of the Dynkin diagram of $sl(p+q,{{\mathbb C}})$ corresponding to $D$ is the $p$-th node. Let ${{\mathbb C}}^{p+q}$ be the standard representation of $Sl(p+q,{{\mathbb C}})$. Then the fundamental representation attached to $v$ is $$W=\bigwedge^{p}{{\mathbb C}}^{p+q}.$$ The group $G$ preserves a Hermitian form $h$ with signature $(p,q)$ on ${{\mathbb C}}^{p+q}$. Then $D$ is the parameter space of $h$-positive $p$-dimensional vector subspaces of ${{\mathbb C}}^{p+q}$. By fixing an origin $0\in D$, we obtain an $h$-orthogonal decomposition $${{\mathbb C}}^{p+q}={{\mathbb C}}^{p}_{+}\oplus {{\mathbb C}}^{q}_{-}.$$ The Higgs bundle corresponding to ${{\mathbb W}}$ is of the form $$E=\bigoplus_{i+j=n}E^{i,j},$$ where $n=\rm{min}(p,q)$ is the rank of $D$. The Hodge bundle $E^{n-i,i}$ is the homogeneous vector bundle determined by, at the origin 0, the irreducible $K$-representation $$(E^{n-i,i})_{0}=\bigwedge^{p-i}{{\mathbb C}}^{p}_{+}\otimes
\bigwedge^{i}{{\mathbb C}}^{q}_{-}.$$
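In this example the ranks of the Hodge bundles are products of binomial coefficients, and their sum recovers $\dim W=\binom{p+q}{p}$ by the Vandermonde identity. A quick sanity check (our sketch; helper name ours):

```python
from math import comb

def hodge_ranks(p, q):
    """Ranks of E^{n-i,i} = Wedge^{p-i}(C^p_+) tensor Wedge^i(C^q_-), i = 0..n."""
    n = min(p, q)
    return [comb(p, p - i) * comb(q, i) for i in range(n + 1)]

print(hodge_ranks(2, 3))                    # [1, 6, 3]; note rank E^{n,0} = 1
print(sum(hodge_ranks(2, 3)), comb(5, 2))   # both equal 10 = dim W
```

The leading rank being 1 is the first Calabi-Yau-type property of the canonical PVHS.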
Let $\Gamma$ be a torsion-free discrete subgroup of $G$. From the representation $W$ we obtain the complex local system $${{\mathbb W}}=W\times_{\Gamma}D$$ over the locally symmetric variety $X=\Gamma\backslash D$. By the construction in the last paragraph of §4 of [@Zuc], we know that ${{\mathbb W}}$ is a ${{\mathbb C}}$-PVHS. We denote by $(E,\theta)$ the system of Hodge bundles associated with ${{\mathbb W}}$. With proofs similar to those of Proposition 4.1 and Proposition 5.2 of [@G], or from the explicit descriptions given in the last section, we have the following
\[Calabi-Yau like property\] Let $D=G/K$ be an irreducible bounded symmetric domain of rank $n$ and $\Gamma$ be a torsion free discrete subgroup of $G$. Let ${{\mathbb W}}$ be the irreducible PVHS over the locally symmetric variety $X=\Gamma\backslash D$ constructed above. Then ${{\mathbb W}}$ is a weight $n$ ${{\mathbb C}}$-PVHS of Calabi-Yau type.
Following B. Gross, we shall call the ${{\mathbb W}}$ over $X$ in the above theorem the canonical PVHS over $X$.
The Characteristic Subvariety and The Main Result
=================================================
We start with a system of Hodge bundles $$(E=\bigoplus_{p+q=n}E^{p,q},\theta=\bigoplus_{p+q=n}\theta^{p,q})$$ over a complex manifold $X$ with $\dim E^{n,0}\neq 0$. By the integrability of the Higgs field $\theta$, the $k$-iterated Higgs field factors as $E\to E\otimes S^k(\Omega_X)$. It induces in turn the following natural map $$\theta^k:S^k(T_X)\to {{\rm End}}(E).$$ By the Griffiths horizontality condition, the image of $\theta^k$ is contained in the subbundle $$\bigoplus_{p+q=n}{{\rm Hom}}(E^{p,q},E^{p-k,q+k})\subset {{\rm End}}(E).$$ We are interested in the projection of $\theta^k$ into the first component of the above subbundle. Abusing notation slightly, we still denote the composition map by $\theta^k$. That is, we consider the map $$\theta^k: S^k(T_X)\to {{\rm Hom}}(E^{n,0},E^{n-k,k}).$$ We have a tautological short exact sequence of analytic coherent sheaves defined by the iterated Higgs field $\theta^{k}$: $$0\to I_k\to S^k(T_X)\stackrel{\theta^k}{\to} J_k\to 0.$$ We define a sheaf of graded ${{\mathcal O}}_X$-algebras ${{\mathcal J}}_{k}$ by putting $${{\mathcal J}}_{k}^{i}=\left\{
\begin{array}{ll}
S^{i}\Omega_{X} & \textrm{if $i< k$}, \\
S^{i}\Omega_{X}/Im((J_{k})^*\otimes
S^{i-k}\Omega_{X}\stackrel{mult.}{\longrightarrow} S^{i}\Omega_{X})
& \textrm{if $i\geq k$}.
\end{array}
\right.$$
\[characteristic subvariety\] For $k\geq 0$, we call $$C_{k}=Proj({{\mathcal J}}_{k+1})$$ the $k$-th characteristic subvariety of $(E,\theta)$ over $X$.
By definition, the fiber of a characteristic subvariety over a point is the zero locus of a system of polynomial equations determined by the Higgs field in the projective tangent space over that point. In a concrete situation one can calculate some numerical invariants of the characteristic subvarieties. For example, for a complete smooth family of hypersurfaces in a projective space, one can use the Jacobian ring to represent the system of Hodge bundles associated with the PVHS of the middle-dimensional primitive cohomologies in a small neighborhood. In [@GSZ] one finds such a calculation in another case.
Because $\theta^{n+1}=0$, $$C_{k}={{\mathbb P}}(T_X),\ k\geq n,$$ where ${{\mathbb P}}(T_X)$ is the projective tangent bundle of $X$. For $0\leq
k \leq n-1$, the natural surjective morphism of graded ${{\mathcal O}}_X$-algebras $$\bigoplus_{i=0}^{\infty}S^{i}\Omega_{X}\twoheadrightarrow {{\mathcal J}}_k$$ gives a proper embedding over $X$, $$\xymatrix{
C_k \ar[rr]^{\hookrightarrow} \ar[dr]_{p_k}
& & {{\mathbb P}}(T_X) \ar[dl]^{p} \\
& X }$$ The next lemma gives a simple criterion to test whether a nonzero tangent vector at the point $x\in X$ has its image in $(C_{k})_{x}=p_k^{-1}(x)$.
\[key lemma\] Let $v\in (T_X)_x$ be a non-zero tangent vector at $x$ and $v^{k}\in
(S^k(T_X))_x $ the $k$-th symmetric tensor power of $v$. Then its image $[v]\in ({{\mathbb P}}(T_X))_{x}$ lies in $(C_{k-1})_x$ if and only if $v^k\in (I_k)_{x}$, the stalk of $I_k$ at $x$.
**Proof:** $(C_{k-1})_{x}\subset ({{\mathbb P}}(T_X))_{x}$ is defined by the homogeneous elements contained in $((J_k)^*)_x$. Thus $[v]\in
(C_{k-1})_{x}$ if and only if, for all $f\in ((J_k)^*)_x$, $f([v])=0$. Now we choose a basis $\{e_1,\cdots,e_m\}$ for $(T_X)_x$ and the dual basis $\{e_{1}^{*},\cdots,e_{m}^{*}\}$ for $(\Omega_X)_x$.\
$f([v])=0$ if and only if $f(v^k)=0$. In the latter, we consider $f$ as a linear form on $(S^k(T_X))_x$.\
Let $I=(i_1,\cdots,i_m)$ denote a multi-index with $i_j\geq 0$ for all $j$, and put $$I!=i_{1}!\cdots i_{m}!, \quad |I|= i_{1}+\cdots +i_{m}.$$ We write $v=\sum_{i=1}^{m}a_{i}e_i$ and $f=\sum_{|I|=k}b^{I}(e^{*})^{I}$. Then considering $f$ as a polynomial of degree $k$ on $(T_X)_x$, we have $$f(v)=\sum_{|I|=k}b^{I}a^{I}.$$ On the other hand, we have $$v^k=k!\sum_{|I|=k}\frac{1}{I!}a^{I}e^{I},$$ where $a^{I}=a_{1}^{i_1}\cdots a_{m}^{i_m}$ etc. By Ex. B.12 of [@FH], the basis of $(S^k(\Omega_X))_x$ canonically dual to the natural basis $\{e^{I},\ |I|=k\}$ of $(S^k(T_X))_x$ is $\{\frac{1}{I!}(e^{*})^{I},\ |I|=k\}$. Hence, evaluating $f$ as a linear form on $(S^k(T_X))_x$ at $v^k$, we obtain $$f(v^k)=k!(\sum_{|I|=k}b^{I}a^{I}).$$ It is now clear that our claim holds.\
Finally, it is easy to see that $v^k\in (S^k(T_X))_{x}$ lies in $(I_k)_{x}$ if and only if for all $f\in ((J_k)^*)_x$, considered as a linear form of $(S^k(T_X))_{x}$, $f(v^k)=0$. Therefore, the lemma follows.
[ $\square$\
]{}
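As an illustrative aside (our own sketch, not part of the argument), the multinomial bookkeeping in the proof — the coefficient of $e^{I}$ in $v^k$ being $\frac{k!}{I!}a^{I}$ — can be confirmed by brute-force expansion of $(a_1x_1+\cdots+a_mx_m)^k$; the function name below is ours.

```python
from itertools import product
from math import factorial

def expand_power(a, k):
    """Coefficients of x^I in (a_1*x_1 + ... + a_m*x_m)^k, found by
    brute-force expansion over all k-fold choices of a factor."""
    m = len(a)
    coeffs = {}
    for choice in product(range(m), repeat=k):
        I = tuple(choice.count(j) for j in range(m))
        c = 1
        for j in choice:
            c *= a[j]
        coeffs[I] = coeffs.get(I, 0) + c
    return coeffs

a, k = (2, -3, 5), 4          # a sample vector v = 2e_1 - 3e_2 + 5e_3
for I, c in expand_power(a, k).items():
    I_fact, a_I = 1, 1
    for j, i_j in enumerate(I):
        I_fact *= factorial(i_j)
        a_I *= a[j] ** i_j
    # the coefficient of e^I in v^k is (k!/I!) * a^I
    assert c == factorial(k) // I_fact * a_I
```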
Our main result identifies the characteristic subvarieties of the canonical PVHS over an irreducible bounded symmetric domain with the characteristic bundles defined by N. Mok in [@M].
\[identification\] Let $D$ be an irreducible bounded symmetric domain of rank $n$, and let $(E,\theta)$ be the system of Hodge bundles associated to the canonical PVHS over $X=\Gamma\backslash D$ as constructed in Theorem \[Calabi-Yau like property\]. Then for each $k$ with $1\leq k\leq
n-1$ the $k$-th characteristic subvariety $C_{k}$ of $(E,\theta)$ over $X$ coincides with the $k$-th characteristic bundle ${{\mathcal S}}_{k}$ over $X$.
By the second property of being of Calabi-Yau type, $C_{0}$ is always empty. To keep this paper self-contained, we briefly describe the notion of characteristic bundles; we refer to Chapter 6 and Appendix III in [@M1] for a full account.\
The $k$-th characteristic bundle ${{\mathcal S}}_k$ over $X=\Gamma\backslash D$ is firstly defined over $D$. It is a projective subvariety of ${{\mathbb P}}(T_D)$ and homogeneous under the natural action of automorphism group $G$ on the projective tangent bundle of $D$. By taking quotient under the left action of $\Gamma$, one obtains the $k$-th characteristic bundle over $X$. So it suffices to describe the construction of characteristic bundle at one point of $D$. At the origin 0 of $D$, the vectors contained in the fiber $({{\mathcal S}}_{k})_{0}$ are in fact determined by a rank condition. We have the isotropy representation of $K$ on the tangent space $(T_{D})_{0}$. Fix a Cartan subalgebra $\mathfrak{h}$ of $\mathfrak{g}$ and choose a maximal set of strongly orthogonal positive non-compact roots $$\Psi=\{\psi_1,\cdots,\psi_{n}\}.$$ Let $e_{i},1\leq i\leq n,$ be a root vector corresponding to the root $\psi_{i}$. Then the set $\Psi$ determines a distinguished polydisk $$\triangle^{n}\subset D$$ passing through the origin 0, and $$(T_{\triangle^{n}})_{0}=\sum_{1\leq i \leq n}{{\mathbb C}}e_{i}\subset
(T_{D})_{0}.$$ Moreover, for any nonzero element $v\in (T_{D})_{0}$, there exists an element $k\in K^{{{\mathbb C}}}$ such that $$k(v)=\sum_{1\leq i \leq r(v)}e_{i}.$$ Such an expression for the vector $v$ is unique and the natural number $r(v)$ is called the *rank* of $v$. Then, for $1\leq
k\leq n-1$, one defines $$({{\mathcal S}}_{k})_{0}=\{[v]\in ({{\mathbb P}}(T_D))_{0}| 1\leq r(v)\leq k\}.$$ By definition, we have a natural inclusion $${{\mathcal S}}_{1}\subset \cdots \subset {{\mathcal S}}_{n-1}\subset {{\mathbb P}}(T_D).$$ We can add two trivially defined characteristic bundles by putting $${{\mathcal S}}_0=\emptyset,\quad {{\mathcal S}}_n={{\mathbb P}}(T_D).$$ $({{\mathbb P}}(T_D))_{0}$ is then decomposed into a disjoint union of irreducible $K^{{{\mathbb C}}}$ orbits $$({{\mathbb P}}(T_D))_{0}=\coprod_{1\leq k\leq n}\{
({{\mathcal S}}_{k})_{0}-({{\mathcal S}}_{k-1})_{0}\}.$$
Let $D$ be the type A tube domain of rank $n$. Then $$D=SU(n,n)/S(U(n)\times U(n)).$$ One classically represents $D$ as a space of matrices $$D=\{Z\in M_{n,n}({{\mathbb C}})|I_n-\bar{Z}^{t}Z>0\}.$$ At the origin $0\in D$, $$(T_{D})_{0}\simeq M_{n,n}({{\mathbb C}}).$$ The action of $$K^{{{\mathbb C}}}\simeq S(Gl(n,{{\mathbb C}})\times Gl(n,{{\mathbb C}}))$$ defined by $$M\mapsto AMB^{-1}, \ \mathrm{for} \ M\in M_{n,n}({{\mathbb C}}) \ \mathrm{and}
\ (A,B)\in Gl(n,{{\mathbb C}})\times Gl(n,{{\mathbb C}})$$ gives the isotropy representation of $K^{{{\mathbb C}}}$ on $(T_{D})_{0}$. Then the rank of a vector $M\in (T_{D})_{0}$ defined above is just the rank of $M$ as matrix. Let $(\tilde{{{\mathcal S}}}_{k})_{0}$ be the lifting of $({{\mathcal S}}_k)_0$ in $(T_{D})_{0}$. Therefore, for $1\leq k\leq n-1$, we have $$(\tilde{{{\mathcal S}}}_{k})_{0}-(\tilde{{{\mathcal S}}}_{k-1})_{0}=S(Gl(n,{{\mathbb C}})\times
Gl(n,{{\mathbb C}}))/P_{k},$$ where $$P_{k}=\{(A,B)\in S(Gl(n,{{\mathbb C}})\times Gl(n,{{\mathbb C}}))| A\left(
\begin{array}{cc}
I_{k} & 0 \\
0 & 0 \\
\end{array}
\right)=\left(
\begin{array}{cc}
I_{k} & 0 \\
0 & 0 \\
\end{array}
\right)B
\}.$$ One can easily show that for $1\leq k\leq n-1$ the dimension of $(\tilde{{{\mathcal S}}}_{k})_{0}-(\tilde{{{\mathcal S}}}_{k-1})_{0}$ is $(2n-k)k$. In particular, the $k$-th characteristic bundle ${{\mathcal S}}_{k}$ has dimension $(n+k)^2-2k^2-1$. The proof goes as follows. We first compute the dimension of the stabilizer $P_k$. We write the matrices $A$ and $B$ as $$A=\left(
\begin{array}{cc}
A_{11} & A_{12} \\
A_{21} & A_{22} \\
\end{array}
\right), B=\left(
\begin{array}{cc}
B_{11} & B_{12} \\
B_{21} & B_{22} \\
\end{array}
\right)$$ where $A_{11},B_{11}$ are $k\times k$ and $A_{22},B_{22}$ are $(n-k)\times (n-k)$. Then the constraints required by $P_k$ give the linearly independent equations $$\left\{
\begin{array}{ll}
A_{11}=& B_{11} \\
A_{21}=& 0 \\
B_{12}=& 0
\end{array}
\right.$$ The total number of equations is $$k^2+2k(n-k)=k(2n-k).$$ Hence the dimension of $P_k$ is $$2n^2-1-k(2n-k),$$ and the dimension of $(\tilde{{{\mathcal S}}}_{k})_{0}-(\tilde{{{\mathcal S}}}_{k-1})_{0}$ is $$2n^2-1-(2n^2-1-k(2n-k))=k(2n-k).$$
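The orbit-dimension count $k(2n-k)$ can be cross-checked by exact linear algebra. The sketch below (our own illustration; the helper names are ours) computes the rank of the differential of the orbit map $(A,B)\mapsto AE_kB^{-1}$ at the identity, namely $(X,Y)\mapsto XE_k-E_kY$, over all of $\mathfrak{gl}(n)\oplus\mathfrak{gl}(n)$; since the kernel already contains the pair $(I,I)$, imposing the determinant condition of $S(Gl(n,{{\mathbb C}})\times Gl(n,{{\mathbb C}}))$ does not change the rank.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix given as a list of rows, by exact Gaussian elimination."""
    rows = [[Fraction(x) for x in r] for r in rows]
    rk, col, ncols = 0, 0, len(rows[0]) if rows else 0
    while rk < len(rows) and col < ncols:
        piv = next((r for r in range(rk, len(rows)) if rows[r][col] != 0), None)
        if piv is None:
            col += 1
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for r in range(rk + 1, len(rows)):
            f = rows[r][col] / rows[rk][col]
            rows[r] = [x - f * y for x, y in zip(rows[r], rows[rk])]
        rk, col = rk + 1, col + 1
    return rk

def orbit_dimension(n, k):
    """Dimension of the GL(n) x GL(n) orbit of E_k = diag(I_k, 0), i.e. the
    rank of the differential (X, Y) -> X E_k - E_k Y at the identity."""
    def unit(i, j):
        return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]
    def mul(P, Q):
        return [[sum(P[r][t] * Q[t][c] for t in range(n)) for c in range(n)]
                for r in range(n)]
    Ek = [[1 if r == c < k else 0 for c in range(n)] for r in range(n)]
    images = []
    for i in range(n):
        for j in range(n):
            X = unit(i, j)
            images.append([x for row in mul(X, Ek) for x in row])   # vary A
            images.append([-x for row in mul(Ek, X) for x in row])  # vary B
    return rank(images)

for n in range(2, 5):
    for k in range(1, n):
        assert orbit_dimension(n, k) == k * (2 * n - k)
```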
Now let $D$ be an irreducible bounded symmetric domain of rank $n$, and let $$i: \triangle^n=\triangle_{1}\times \cdots\times
\triangle_{n}\hookrightarrow D$$ be a polydisc embedding. We are going to study the decomposition of $i^{*}{{\mathbb W}}$ into a direct sum of irreducible PVHSs over the polydisc. The following proposition is a key ingredient in the proof of Theorem \[identification\].
\[decomposition over polydisc\] Let $p_i, 1\leq i \leq n,$ be the projection of the polydisc $\triangle^n$ into the $i$-th direct factor $\triangle_i$. Then each irreducible component contained in $i^{*}{{\mathbb W}}$ is of the form $$p_{1}^{*}({{\mathbb L}}^{\otimes k_1})\otimes \cdots\otimes
p_{n}^{*}({{\mathbb L}}^{\otimes k_n})\otimes {{\mathbb U}}$$ with $$0\leq k_i\leq 1,\ \textrm{for all}\ \ i,$$ where ${{\mathbb L}}$ is the weight 1 PVHS coming from the standard representation of $Sl(2,{{\mathbb R}})$ and ${{\mathbb U}}$ is a certain unitary factor. As a consequence, there exists a unique component of the form $$p_{1}^{*}{{\mathbb L}}\otimes \cdots\otimes p_{n}^{*}{{\mathbb L}}$$ in $i^{*}{{\mathbb W}}$ because ${{\mathbb W}}$ is of Calabi-Yau type.
**Proof:** It is known that the polydisc embedding $$i: \triangle^{n}\hookrightarrow D,$$ determined by a maximal set of strongly orthogonal noncompact roots $\Psi\subset \mathfrak{h}^{*}$, lifts to a group homomorphism $$\phi: Sl(2,{{\mathbb R}})^{\times n}\to G.$$ Our problem is to study the decomposition of $W$ with respect to all $Sl(2,{{\mathbb R}})$ direct factors of $\phi$.\
We can in fact reduce this to the study of only one direct factor. This is because a permutation of the direct factors can be induced from an inner automorphism of $G$, which implies that the restrictions to the direct factors are isomorphic to one another. Furthermore, we can assume that the highest root $\tilde{\alpha}$ appears in our chosen $\Psi$ without loss of generality (cf. [@M1] Ch. 5, Proposition 1).\
Let $s_{\tilde{\alpha}}$ be the distinguished $sl_2$-triple in the complex simple Lie algebra $\mathfrak{g}^{{{\mathbb C}}}$ corresponding to $\tilde{\alpha}$. Let $$W=\bigoplus_{\beta\in \Phi} W_{\beta}$$ be the weight decomposition of $W$ with respect to the Cartan subalgebra $\mathfrak{h}$. Then by (14.9) of [@FH], every irreducible component of $W$ with respect to $s_{\tilde{\alpha}}$ is contained in some $$W_{[\beta]}=\bigoplus_{n\in {{\mathbb Z}}}W_{\beta+n\tilde{\alpha}}.$$ Let $\mathrm{Conv}(\Phi)$ be the convex hull of $\Phi$, which is a closed convex polyhedron in $\mathfrak{h}^*$. We put $$\partial \Phi=\Phi\cap \partial\,\mathrm{Conv}(\Phi).$$ Then for $\beta\in \partial \Phi$, we know by (14.10) of [@FH] that the largest component in $W_{[\beta]}$ has dimension equal to $\beta(H_{\tilde{\alpha}})+1$. Our proof boils down to showing the following\
For all $\beta\in \partial \Phi$, we have $$|\beta(H_{\tilde{\alpha}})|\leq 1.$$
We first note that $$\displaystyle{ |\beta(H_{\tilde{\alpha}})|=|\frac{2(\beta,
\tilde{\alpha})}{(\tilde{\alpha},\tilde{\alpha})}|}$$ defines a convex function on $\mathrm{Conv}(\Phi)$. The maximal value is achieved at the vertices of $\mathrm{Conv}(\Phi)$, namely the orbit of the highest weight $\omega$ of $W$ under the Weyl group $\mathrm{W(R)}$. Since the Weyl group preserves the Killing form, it suffices to show that $$|\omega(H_{s(\tilde{\alpha})})|\leq 1, \ \textrm{for all}\ \ s\in
W(R).$$ The above inequality obviously holds for $s=id$. Let $\alpha_0$ be the simple root corresponding to the special node determined by $D$ in the last section. By the definition of the special node, the coefficient of $\alpha_0$ in the expression of $\tilde{\alpha}$ as a linear combination of simple roots is one (cf. 1.2.5. [@D]). Therefore, $$\begin{aligned}
\omega(H_{\tilde{\alpha}}) &=& \frac{2(\omega,\tilde{\alpha})}{(\tilde{\alpha},\tilde{\alpha})} \\
&=&\frac{2(\omega,\alpha_0)}{(\tilde{\alpha},\tilde{\alpha})}\\
&=&\frac{(\alpha_0,\alpha_0)}{(\tilde{\alpha},\tilde{\alpha})}\\
&=&1.\end{aligned}$$ We treat the exceptional cases separately, since the description of their Weyl groups is more complicated. In the following, we use the same notation as the appendix of [@B]. Let $\{\varepsilon_1,\cdots,\varepsilon_{l}\}$ be the standard basis of the Euclidean space ${{\mathbb R}}^{l}$, and let $\sigma$ denote a permutation of indices.\
Type $A_{l-1}$: The highest root $\tilde{\alpha}=\varepsilon_1-\varepsilon_{l}$. The Weyl group permutes the basis elements. All fundamental weights $$\omega_{i}=\sum_{j=1}^{i}\varepsilon_j-\frac{i}{l}\sum_{j=1}^{l}\varepsilon_j, 1\leq
i\leq l-1$$ correspond to a special node. Then $$\begin{aligned}
|\omega_{i}(H_{s(\tilde{\alpha})})| &=& |(\omega_{i},s(\tilde{\alpha}))| \\
&=&|(\sum_{j=1}^{i}\varepsilon_j,\varepsilon_{\sigma(1)}-\varepsilon_{\sigma(l)})|\\
&\leq &1.\end{aligned}$$
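For the classical computations one can also let a machine enumerate the Weyl orbit. As an illustrative sketch (ours, not part of the proof), in type $A_{l-1}$ the orbit of the highest root consists of the vectors $\varepsilon_a-\varepsilon_b$ with $a\neq b$, and the bound can be checked for every fundamental weight $\omega_i=\sum_{j\leq i}\varepsilon_j-\frac{i}{l}\sum_{j=1}^{l}\varepsilon_j$; the remaining classical types can be checked the same way.

```python
from fractions import Fraction

l = 6  # ambient coordinates epsilon_1, ..., epsilon_l; the rank is l - 1

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# the Weyl orbit of the highest root: all vectors e_a - e_b with a != b
orbit = []
for a in range(l):
    for b in range(l):
        if a != b:
            v = [Fraction(0)] * l
            v[a], v[b] = Fraction(1), Fraction(-1)
            orbit.append(v)

for i in range(1, l):
    # omega_i = sum_{j <= i} e_j - (i/l) * sum_{j=1}^{l} e_j
    omega = [Fraction(1 if j < i else 0) - Fraction(i, l) for j in range(l)]
    # |omega_i(H_alpha)| = |2 (omega_i, alpha) / (alpha, alpha)| <= 1
    assert all(abs(2 * dot(omega, alpha) / dot(alpha, alpha)) <= 1
               for alpha in orbit)
```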
Type $B_l$: The highest root $\tilde{\alpha}=\varepsilon_1+\varepsilon_{2}$. The Weyl group permutes the basis elements, or acts by $\varepsilon_{i}\mapsto \pm
\varepsilon_{i}$. The first fundamental weight $\omega_1=\varepsilon_1$ corresponds to the special node. Then $$\begin{aligned}
|\omega_{1}(H_{s(\tilde{\alpha})})| &=& |(\omega_{1},s(\tilde{\alpha}))| \\
&=&|(\varepsilon_1,\pm \varepsilon_{\sigma(1)}\pm \varepsilon_{\sigma(2)})|\\
&\leq &1.\end{aligned}$$
Type $C_l$: The highest root $\tilde{\alpha}=2\varepsilon_1$. The Weyl group permutes the basis elements, or acts by $\varepsilon_{i}\mapsto \pm \varepsilon_{i}$. The last fundamental weight $\omega_l=\sum_{i=1}^{l}\varepsilon_i$ corresponds to the special node. Then $$\begin{aligned}
|\omega_{l}(H_{s(\tilde{\alpha})})| &=& |\frac{1}{2}(\omega_{l},s(\tilde{\alpha}))| \\
&=&|\frac{1}{2}(\sum_{i=1}^{l}\varepsilon_i,\pm 2\varepsilon_{\sigma(1)})|\\
&=&1.\end{aligned}$$
Type $D_{l}$: The highest root $\tilde{\alpha}=\varepsilon_1+\varepsilon_2$. The Weyl group permutes the basis elements, or acts by $\varepsilon_{i}\mapsto (\pm
1)_{i} \varepsilon_{i}$ with $\prod_{i}(\pm 1)_{i}=1$. We have three special nodes in this case. It suffices to check $\omega_{1}=\varepsilon_1$ and $\omega_{l}=\frac{1}{2}(\sum_{i=1}^{l}\varepsilon_i)$. For $\omega_{1}$, we have $$\begin{aligned}
|\omega_{1}(H_{s(\tilde{\alpha})})| &=& |(\omega_{1},s(\tilde{\alpha}))| \\
&=&|(\varepsilon_1,\pm \varepsilon_{\sigma(1)}\pm \varepsilon_{\sigma(2)})|\\
&\leq&1.\end{aligned}$$ For $\omega_{l}$, we have $$\begin{aligned}
|\omega_{l}(H_{s(\tilde{\alpha})})| &=& |(\omega_{l},s(\tilde{\alpha}))| \\
&=&|\frac{1}{2}(\sum_{i=1}^{l}\varepsilon_i,\pm \varepsilon_{\sigma(1)}\pm \varepsilon_{\sigma(2)})|\\
&\leq &1.\end{aligned}$$ Now we treat the exceptional cases. In the following, we shall compute the largest value of $|\beta(H_{\tilde{\alpha}})|$ among all weights $\beta$ in $\Phi$. The results will in particular imply the claim.\
Type $E_6$: Let $\{\alpha_1,\cdots, \alpha_6\}$ be the set of simple roots of the simple Lie algebra of type $E_6$ and $\{\omega_1,\cdots,\omega_6\}$ the fundamental weights. The highest root is then $$\tilde{\alpha}=\alpha_1+2\alpha_2+2\alpha_3+3\alpha_4+2\alpha_5+\alpha_6.$$ A 6-tuple $(a_1,\cdots, a_6)$ denotes the weight $\beta=\sum_{i=1}^{6}a_{i}\omega_{i}$. There are two special nodes in this case and it suffices to study either of them. The following table lists all elements of $\Phi$ for the fundamental representation $\omega_1$: $$\begin{array}{cccc}
(1,0,0,0,0,0)&(-1,0,1,0,0,0)&(0,0,-1,1,0,0)&(0,1,0,-1,1,0) \\
(0,1,0,0,-1,1)&(0,-1,0,0,1,0)&(0,1,0,0,0,-1)&(0,-1,0,1,-1,1)\\
(0,0,1,-1,0,1)&(0,-1,0,1,0,-1)&(1,0,-1,0,0,1)&(0,0,1,-1,1,-1)\\
(1,0,-1,0,1,-1)&(0,0,1,0,-1,0)&(-1,0,0,0,0,1)&(1,0,-1,1,-1,0)\\
(-1,0,0,0,1,-1)&(1,1,0,-1,0,0)&(-1,0,0,1,-1,0)&(1,-1,0,0,0,0)\\
(-1,1,1,-1,0,0)&(0,1,-1,0,0,0)&(-1,-1,1,0,0,0)&(0,-1,-1,1,0,0)\\
(0,0,0,-1,1,0)&(0,0,0,0,-1,1)&(0,0,0,0,0,-1).&
\end{array}$$ For an element $(a_1,\cdots a_6)$ in the above table, we have $$\begin{aligned}
\beta(H_{\tilde{\alpha}}) &=& (\beta,\tilde{\alpha}) \\
&=&(\sum_{i=1}^{6}a_{i}\omega_{i},\alpha_1+2\alpha_2+2\alpha_3+3\alpha_4+2\alpha_5+\alpha_6 )\\
&= & a_{1}+2a_{2}+2a_{3}+3a_{4}+2a_{5}+a_{6}.\end{aligned}$$ According to this formula, it is straightforward to compute that the largest value of $|\beta(H_{\tilde{\alpha}})|$ is equal to one.\
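The weight-by-weight computation can be mechanized. Below is an illustrative check (our own, not part of the proof) that evaluates $\beta(H_{\tilde{\alpha}})=a_{1}+2a_{2}+2a_{3}+3a_{4}+2a_{5}+a_{6}$ on the 27 weights listed in the table.

```python
# The 27 weights of the fundamental representation omega_1 of E_6, copied
# from the table above as coefficient tuples (a_1, ..., a_6) on the
# fundamental weights.
weights = [
    (1, 0, 0, 0, 0, 0), (-1, 0, 1, 0, 0, 0), (0, 0, -1, 1, 0, 0),
    (0, 1, 0, -1, 1, 0), (0, 1, 0, 0, -1, 1), (0, -1, 0, 0, 1, 0),
    (0, 1, 0, 0, 0, -1), (0, -1, 0, 1, -1, 1), (0, 0, 1, -1, 0, 1),
    (0, -1, 0, 1, 0, -1), (1, 0, -1, 0, 0, 1), (0, 0, 1, -1, 1, -1),
    (1, 0, -1, 0, 1, -1), (0, 0, 1, 0, -1, 0), (-1, 0, 0, 0, 0, 1),
    (1, 0, -1, 1, -1, 0), (-1, 0, 0, 0, 1, -1), (1, 1, 0, -1, 0, 0),
    (-1, 0, 0, 1, -1, 0), (1, -1, 0, 0, 0, 0), (-1, 1, 1, -1, 0, 0),
    (0, 1, -1, 0, 0, 0), (-1, -1, 1, 0, 0, 0), (0, -1, -1, 1, 0, 0),
    (0, 0, 0, -1, 1, 0), (0, 0, 0, 0, -1, 1), (0, 0, 0, 0, 0, -1),
]
marks = (1, 2, 2, 3, 2, 1)  # coefficients of the simple roots in the highest root
values = [sum(m * a for m, a in zip(marks, w)) for w in weights]
assert len(weights) == 27                    # dimension of the representation
assert max(abs(v) for v in values) == 1      # the claimed bound
assert values.count(1) == values.count(-1) == 6   # sl_2 weight symmetry
```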
Type $E_7$: Let $\{\alpha_1,\cdots,\alpha_7\}$ be the set of simple roots of the simple Lie algebra of type $E_7$ and $\{\omega_1,\cdots,\omega_7\}$ the fundamental weights. We can choose the maximal set $\Psi$ of strongly orthogonal noncompact roots to be $$\{\psi_1=2\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+3\alpha_5+2\alpha_6+\alpha_7,
\psi_2=\alpha_2+\alpha_3+2\alpha_4+2\alpha_5+2\alpha_6+\alpha_7,
\psi_3=\alpha_7\}.$$ It is simpler to use $\psi_3$ to verify our statement, instead of $\psi_1$, which is the highest root $\tilde{\alpha}$. As in the last case, a 7-tuple $(a_1,\cdots, a_7)$ denotes the weight $\beta=\sum_{i=1}^{7}a_{i}\omega_{i}$. The following table lists all elements of $\Phi$ for the fundamental representation $\omega_7$: $$\begin{array}{cccc}
(0,0,0,0,0,0,1)&(0,0,0,0,0,1,-1)&(0,0,0,0,1,-1,0)&(0,0,0,1,-1,0,0)\\
(0,1,1,-1,0,0,0)&(1,1,-1,0,0,0,0)&(0,-1,1,0,0,0,0)&(1,-1,-1,1,0,0,0)\\
(-1,1,0,0,0,0,0)&(1,0,0,-1,1,0,0)&(-1,-1,0,1,0,0,0)&(1,0,0,0,-1,1,0) \\
(-1,0,1,-1,1,0,0)&(1,0,0,0,0,-1,1)&(0,0,-1,0,1,0,0)&(-1,0,1,0,-1,1,0) \\
(1,0,0,0,0,0,-1)&(0,0,-1,1,-1,1,0)&(-1,0,1,0,0,-1,1)&(0,1,0,-1,0,1,0) \\
(0,0,-1,1,0,-1,1)&(-1,0,1,0,0,0,-1)&(0,1,0,-1,1,-1,1)&(0,0,-1,1,0,0,-1)\\
(0,-1,0,0,0,1,0)&(0,1,0,0,-1,0,1)&(0,1,0,-1,1,0,-1)&(0,-1,0,0,1,-1,1)\\
(0,1,0,0,-1,1,-1)&(0,-1,0,1,-1,0,1)&(0,-1,0,0,1,0,-1)&(0,1,0,0,0,-1,0) \\
(0,0,1,-1,0,0,1)&(0,-1,0,1,-1,1,-1)&(1,0,-1,0,0,0,1)&(0,0,1,-1,0,1,-1) \\
(0,-1,0,1,0,-1,0)&(1,0,-1,0,0,1,-1)&(0,0,1,-1,1,-1,0)&(-1,0,0,0,0,0,1) \\
(1,0,-1,0,1,-1,0)&(0,0,1,0,-1,0,0)&(-1,0,0,0,0,1,-1)&(1,0,-1,1,-1,0,0) \\
(-1,0,0,0,1,-1,0)&(1,1,0,-1,0,0,0)&(-1,0,0,1,-1,0,0)&(1,-1,0,0,0,0,0)\\
(-1,1,1,-1,0,0,0)&(0,1,-1,0,0,0,0)&(-1,-1,1,0,0,0,0)&(0,-1,-1,1,0,0,0)\\
(0,0,0,-1,1,0,0)&(0,0,0,0,-1,1,0)&(0,0,0,0,0,-1,1)&(0,0,0,0,0,0,-1).
\end{array}$$ Then for an element $(a_1,\cdots a_7)$ in the above table, we have $$\begin{aligned}
\beta(H_{\alpha_7}) &=& (\beta, \alpha_7) \\
&=&(\sum_{i=1}^{7}a_{i}\omega_{i},\alpha_7 )\\
&= & a_{7}.\end{aligned}$$ It is straightforward to see that the largest value of $|\beta(H_{\alpha_7})|$ is one. This completes the proof.
[ $\square$\
]{}
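As with $E_6$, the table can be checked mechanically; the following illustrative snippet (ours, not part of the proof) reads off $\beta(H_{\alpha_7})=a_7$ from the 56 weights listed above.

```python
# The 56 weights of the fundamental representation omega_7 of E_7, copied
# from the table above as tuples (a_1, ..., a_7); beta(H_{alpha_7}) = a_7.
weights = [
    (0,0,0,0,0,0,1), (0,0,0,0,0,1,-1), (0,0,0,0,1,-1,0), (0,0,0,1,-1,0,0),
    (0,1,1,-1,0,0,0), (1,1,-1,0,0,0,0), (0,-1,1,0,0,0,0), (1,-1,-1,1,0,0,0),
    (-1,1,0,0,0,0,0), (1,0,0,-1,1,0,0), (-1,-1,0,1,0,0,0), (1,0,0,0,-1,1,0),
    (-1,0,1,-1,1,0,0), (1,0,0,0,0,-1,1), (0,0,-1,0,1,0,0), (-1,0,1,0,-1,1,0),
    (1,0,0,0,0,0,-1), (0,0,-1,1,-1,1,0), (-1,0,1,0,0,-1,1), (0,1,0,-1,0,1,0),
    (0,0,-1,1,0,-1,1), (-1,0,1,0,0,0,-1), (0,1,0,-1,1,-1,1), (0,0,-1,1,0,0,-1),
    (0,-1,0,0,0,1,0), (0,1,0,0,-1,0,1), (0,1,0,-1,1,0,-1), (0,-1,0,0,1,-1,1),
    (0,1,0,0,-1,1,-1), (0,-1,0,1,-1,0,1), (0,-1,0,0,1,0,-1), (0,1,0,0,0,-1,0),
    (0,0,1,-1,0,0,1), (0,-1,0,1,-1,1,-1), (1,0,-1,0,0,0,1), (0,0,1,-1,0,1,-1),
    (0,-1,0,1,0,-1,0), (1,0,-1,0,0,1,-1), (0,0,1,-1,1,-1,0), (-1,0,0,0,0,0,1),
    (1,0,-1,0,1,-1,0), (0,0,1,0,-1,0,0), (-1,0,0,0,0,1,-1), (1,0,-1,1,-1,0,0),
    (-1,0,0,0,1,-1,0), (1,1,0,-1,0,0,0), (-1,0,0,1,-1,0,0), (1,-1,0,0,0,0,0),
    (-1,1,1,-1,0,0,0), (0,1,-1,0,0,0,0), (-1,-1,1,0,0,0,0), (0,-1,-1,1,0,0,0),
    (0,0,0,-1,1,0,0), (0,0,0,0,-1,1,0), (0,0,0,0,0,-1,1), (0,0,0,0,0,0,-1),
]
values = [w[6] for w in weights]
assert len(weights) == 56                    # dimension of the representation
assert max(abs(v) for v in values) == 1      # the claimed bound
assert values.count(1) == values.count(-1) == 12   # sl_2 weight symmetry
```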
We can now proceed to prove our main result.
**Proof of Theorem \[identification\]:** It suffices to prove the isomorphism over $D$, and we obtain the claimed isomorphism by taking quotient under the left action of $\Gamma$. Since the constructions on both sides are $G$-equivariant, it is enough to show the isomorphism at the origin of $D$. Over the origin 0, we have the adjoint action of $K^{{{\mathbb C}}}$ on the holomorphic tangent space $(T_{D})_{0}$ and the dual action on $(\Omega_{D})_{0}$. Since the Higgs field of a locally homogeneous VHS is $G$-equivariant, for each $k$, $(J_{k})_0\subset (S^{k}\Omega_{D})_{0}$ is $K^{{{\mathbb C}}}$-invariant. This implies $C_k$ is $K^{{{\mathbb C}}}$-invariant. So we can obtain a decomposition of $({{\mathbb P}}(T_D))_{0}$ into disjoint union of $K^{{{\mathbb C}}}$ orbits as follows: $$({{\mathbb P}}(T_D))_{0}=\coprod_{1\leq k\leq n} \{(C_{k})_{0}-(C_{k-1})_{0}\}.$$ For $1\leq k \leq n$, we put $$v_k=e_1+\cdots+e_k.$$ It is clear that $$v_k\in ({{\mathcal S}}_{k})_{0}-({{\mathcal S}}_{k-1})_{0}$$ and $K^{{{\mathbb C}}}(v_{k})=({{\mathcal S}}_{k})_{0}-({{\mathcal S}}_{k-1})_{0}$. Next we make the following\
For $1\leq k\leq n$, $$v_k\in (C_{k})_{0}-(C_{k-1})_{0}.$$ This claim implies, for each $k$, the inclusion of $K^{{{\mathbb C}}}$-orbits $$({{\mathcal S}}_{k})_{0}-({{\mathcal S}}_{k-1})_{0}\subset (C_{k})_{0}-(C_{k-1})_{0},$$ and hence the equality for each $k$. Therefore, for $1\leq
n-1$, $$(C_{k})_{0}=({{\mathcal S}}_{k})_{0}.$$
Let $\gamma_k: \triangle \to D$ be the composition map $$\triangle \stackrel{\rm{diag.}}{\longrightarrow
}\triangle_1\times\cdots\times\triangle_k\hookrightarrow
\triangle^n\stackrel{i}{\hookrightarrow }D.$$ Obviously, for a suitable basis element $u$ of $(T_{\triangle})_0$ one has $(d\gamma_{k})_{0}(u)=v_k$. By Proposition \[decomposition over polydisc\], we have the decomposition of PVHS $$\gamma_{k}^{*}{{\mathbb W}}={{\mathbb L}}^{\otimes k}\otimes {{\mathbb U}}\oplus {{\mathbb V}}^{'}$$ where ${{\mathbb U}}$ is a unitary factor and ${{\mathbb V}}^{'}$ is a PVHS with width $\leq k-1$. Let $(E,\theta)$ be the system of Hodge bundles corresponding to ${{\mathbb W}}$, and for $v\in (T_{D})_0$, we denote by $$\theta_{v}: E_{0}\to E_{0}$$ the action of the Higgs field $\theta$ on the bundle $E$ at the origin 0 along the tangent direction $v$. Since $$\theta_{v_{k}}(E)=\theta_{u}(\gamma_{k}^{*}E),$$ we see that $$(\theta_{v_k})^k\neq 0,\quad (\theta_{v_k})^{k+1}=0.$$ Together with Lemma \[key lemma\], one easily sees that the claim holds.
[ $\square$\
]{}
Enumeration of Canonical PVHS over Irreducible Bounded Symmetric Domain and the Generating Property of Gross
============================================================================================================
Let $(E,\theta)$ be a system of Hodge bundles over $X$. We use the same notation as in the previous sections. We note that $$I=\bigoplus_{k\geq 1}I_{k}$$ forms a graded ideal of the symmetric algebra $$Sym(T_X)=\bigoplus_{k\geq 0}S^{k}T_{X}.$$ It is trivial to see that $$I_{k}=S^{k}(T_{X}),\ \textrm{for}\ k\geq n+1.$$ In §5 of [@G], Gross asked whether $I$ is generated by $I_2$ for the canonical PVHS over an irreducible tube domain. We establish this generating property for the canonical PVHS over an arbitrary irreducible bounded symmetric domain.
\[generating property\] We use the same notation as Theorem \[Calabi-Yau like property\]. Then the graded ideal $I$, formed by the kernel of iterated Higgs field, is generated by the degree 2 graded piece $I_2$. That is, the multiplication map $$I_{2}\otimes S^{k-2}(T_X)\to I_{k}$$ is surjective for all $k\geq 2$.
It suffices to prove the surjectivity for $k\leq n+1$ where $n={{\rm rank}}(D)$. In fact, for $k\geq n+2$, we have $$\begin{aligned}
I_{2}\otimes S^{n-1}(T_{X})\otimes (T_{X})^{\otimes
k-n-1} & \twoheadrightarrow& I_{n+1}\otimes (T_{X})^{\otimes k-n-1} \\
&=& S^{n+1}(T_{X})\otimes (T_{X})^{\otimes k-n-1} \\
&\twoheadrightarrow& S^{k}(T_{X})=I_{k}.\end{aligned}$$ By the integrability of the Higgs field, the above surjective map factors through $I_{2}\otimes S^{k-2}(T_{X})$. As in the proof of Theorem \[identification\], we can work at the level of the bounded symmetric domain and prove the statement at the origin in terms of $K$-representations. The theorem will be proved case by case. In the classical cases, we shall also describe the system of Hodge bundles $(E,\theta)$ associated with the Calabi-Yau like PVHS ${{\mathbb W}}$ using the Grassmannian description of the classical symmetric domains.\
Let $D$ be an irreducible bounded symmetric domain. By fixing an origin of $D$, we obtain an equivalence between the category of homogeneous vector bundles on $D$ and that of finite dimensional complex representations of $K$. Since $K$ has a one dimensional center, an irreducible finite dimensional complex $K$-representation can be written as ${{\mathbb C}}(l)\otimes V$ where $V$ is a representation of the semisimple part $K'$ of $K$ and is determined by the induced action of the complexified Lie algebra $\mathfrak{k'^{{{\mathbb C}}}}$. In the following, the same notation for a $K$-representation and the corresponding homogeneous vector bundle will be used when the context causes no confusion. All the isomorphisms are isomorphisms between homogeneous bundles. A highest weight representation of $sl(n,{{\mathbb C}})$ will be denoted interchangeably by $\Gamma_{a_1,\cdots,a_{n-1}}$ and ${{\mathbb S}}_{\lambda}({{\mathbb C}}^{n})$ (cf. [@FH] §15.3).
Type $A$
--------
The irreducible bounded symmetric domain of type A is $D^{I}_{p,q}=G/K$ where $$G=SU(p,q),\quad K=S(U(p)\times U(q)).$$ Let $V={{\mathbb C}}^{p+q}$ be a complex vector space equipped with a Hermitian symmetric bilinear form $h$ of signature $(p,q)$. Then $D^{I}_{p,q}$ parameterizes the $p$-dimensional complex vector subspaces $U\subset V$ such that $$h|_{U}: U\times U\to {{\mathbb C}}$$ is positive definite. These subspaces form the tautological subbundle $S\subset V\times D$ of rank $p$; we denote by $Q$ the tautological quotient bundle of rank $q$. We have the natural isomorphism of holomorphic vector bundles $$\begin{aligned}
\label{equation1}
T_{D^{I}_{p,q}} &\simeq & {{\rm Hom}}(S,Q).\end{aligned}$$ The standard representation $V$ of $G$ gives rise to a weight 1 PVHS ${{\mathbb V}}$ over $D^{I}_{p,q}$, and its associated Higgs bundle $$F=F^{1,0}\oplus F^{0,1},\quad\eta=\eta^{1,0}\oplus \eta^{0,1}$$ is determined by $$F^{1,0}=S,\quad F^{0,1}=Q,\quad \eta^{0,1}=0,$$ and $\eta^{1,0}$ is defined by the above isomorphism. The canonical PVHS is $${{\mathbb W}}=\bigwedge^{p}{{\mathbb V}}$$ and its associated system of Hodge bundles $(E,\theta)$ is then $$(E,\theta)=\bigwedge^{p}(F,\eta).$$ Since $$\mathfrak{k'^{{{\mathbb C}}}}=sl(p,{{\mathbb C}})\oplus sl(q,{{\mathbb C}}),$$ by Schur’s lemma, a finite dimensional irreducible complex representation of $\mathfrak{k'^{{{\mathbb C}}}}$ is of the form $$\Gamma_{a_1,\cdots,a_{p-1}}\otimes \Gamma'_{b_1,\cdots,b_{q-1}}.$$ We put $V_1={{\mathbb C}}^{p}$ to be the representation space $\Gamma_{0,\cdots,0,1}$ of $sl(p,{{\mathbb C}})$ and $V_2={{\mathbb C}}^{q}$ the representation space $\Gamma'_{0,\cdots,0,1}$ of $sl(q,{{\mathbb C}})$. In the remaining subsection, we shall assume $p\leq q$ in order to simplify the notations in the argument.
\[formula A\] We have isomorphism $$T_{D^{I}_{p,q}}\simeq V_1\otimes V_2.$$ Then, for $k\geq 2$, we have isomorphism $$S^{k}(T_{D^{I}_{p,q}})\simeq
\bigoplus_{\lambda}{{\mathbb S}}_{\lambda}(V_1)\otimes {{\mathbb S}}_{\lambda}(V_2),$$ where $\lambda$ runs through all partitions of $k$ with at most $p$ rows. Under this isomorphism, the $k$-th iterated Higgs field for $k\leq p$, $$\theta^{k}: S^{k}(T_{D^{I}_{p,q}})\to {{\rm Hom}}(E^{p,0},E^{p-k,k})$$ is identified with the projection map onto the irreducible component $$\bigoplus_{\lambda}{{\mathbb S}}_{\lambda}(V_1)\otimes
{{\mathbb S}}_{\lambda}(V_2)\twoheadrightarrow {{\mathbb S}}_{\lambda^{0}}(V_1)\otimes
{{\mathbb S}}_{\lambda^{0}}(V_2),$$ where $\lambda^{0}=(1,\cdots,1)$.
**Proof:** By the isomorphism \[equation1\], we have isomorphism $$T_{D^{I}_{p,q}}\simeq V_1\otimes V_2.$$ The formula in Ex.6.11 [@FH] gives the decomposition of $S^{k}(V_1\otimes V_2)$ with respect to $sl(p,{{\mathbb C}})\oplus sl(q,{{\mathbb C}})$: $$S^{k}(V_1\otimes V_2)=\bigoplus_{\lambda}{{\mathbb S}}_{\lambda}(V_1)\otimes
{{\mathbb S}}_{\lambda}(V_2),$$ where $\lambda$ runs through all partitions of $k$ with at most $p$ rows. Since the center of $K$ acts on $(T_{D^{I}_{p,q}})_{0}$ trivially, it acts on $(S^{k}(T_{D^{I}_{p,q}}))_{0}$ trivially too. Hence the second isomorphism of the statement follows. For the last statement, it suffices to show $\theta^k$ is a non-zero map because ${{\rm Hom}}(E^{p,0},E^{p-k,k})$ is irreducible. But this follows directly from the definition of the Higgs field $\theta$ as $p$-th wedge power of $\eta$. The lemma is proved.
[ $\square$\
]{}
From the lemma, we know that $$\theta^2 \simeq pr: \Gamma_{0,\cdots,2} \otimes
\Gamma'_{0,\cdots,2}\oplus \Gamma_{0,\cdots,0,1,0} \otimes
\Gamma'_{0,\cdots,0,1,0}\to \Gamma_{0,\cdots,0,1,0} \otimes
\Gamma'_{0,\cdots,0,1,0}.$$ So by definition, $$I_{2}\simeq \Gamma_{0,\cdots,0,2}\otimes \Gamma'_{0,\cdots,0,2}.$$
[**Proof of Theorem \[generating property\] for Type A:**]{} Now we proceed to prove that $I_{2}\otimes S^{k-2}(T_{D^{I}_{p,q}})$ generates $I_{k}$. By the above lemma and Formula 6.8 of [@FH], we have $$\begin{aligned}
I_{2}\otimes S^{k-2}(T_{D^{I}_{p,q}}) &\simeq& S^2(V_1)\otimes S^2(V_2)\otimes S^{k-2}(V_1\otimes V_2) \\
&\simeq & \bigoplus_{\mu}(S^2(V_1)\otimes {{\mathbb S}}_{\mu}(V_1))\otimes (S^2(V_2)\otimes {{\mathbb S}}_{\mu}(V_2)) \\
&=& \bigoplus_{\mu}[(\bigoplus_{\nu_{\mu}^{1}}{{\mathbb S}}_{\nu_{\mu}^{1}}(V_1))\otimes (\bigoplus_{\nu_{\mu}^{2}}{{\mathbb S}}_{\nu_{\mu}^{2}}(V_2)) ]\\
&=& \bigoplus_{\mu}\bigoplus_{\nu_{\mu}^{1},\nu_{\mu}^{2}}({{\mathbb S}}_{\nu_{\mu}^{1}}(V_1)\otimes {{\mathbb S}}_{\nu_{\mu}^{2}}(V_2)), \\\end{aligned}$$ where $\mu$ runs through all partitions of $k-2$ with at most $p$ rows, and for a fixed $\mu$, $\nu_{\mu}^{i},i=1,2$ runs through the Young diagrams obtained by adding two boxes to different columns of the Young diagram of $\mu$. Let $\lambda$ be a Young diagram corresponding to a direct factor of $I_k$ under the isomorphism in the above lemma. Since $$\bigoplus_{\mu}\bigoplus_{\nu_{\mu}}({{\mathbb S}}_{\nu_{\mu}}(V_1)\otimes
{{\mathbb S}}_{\nu_{\mu}}(V_2)) \subset
\bigoplus_{\mu}\bigoplus_{\nu_{\mu}^{1},\nu_{\mu}^{2}}({{\mathbb S}}_{\nu_{\mu}^{1}}(V_1)\otimes
{{\mathbb S}}_{\nu_{\mu}^{2}}(V_2)),$$ it is enough to show that $\lambda$ can be obtained from a Young diagram $\mu$ by adding two boxes to different columns of $\mu$. Indeed, by the above lemma a partition $\lambda$ occurring in $I_{k}$ has the property that either for some $1\leq i_{0}\leq p-1$, $$\lambda_{i_{0}}>\lambda_{i_{0}+1}\geq 1,$$ or for some $1\leq i_{0}\leq p$, $$2 \leq \lambda_1=\cdots=\lambda_{i_{0}}>\lambda_{i_{0}+1}\geq 0.$$ In the first case, we can choose $\mu$ as $$\mu_{i}=\left\{
\begin{array}{ll}
\lambda_{i}-1 & \textrm{if $i=i_0, i_{0}+1$}, \\
\lambda_{i}
& \textrm{otherwise}.
\end{array}
\right.$$ In the second case, we choose $\mu$ as $$\mu_{i}=\left\{
\begin{array}{ll}
\lambda_{i}-2 & \textrm{if $i=i_0$}, \\
\lambda_{i}
& \textrm{otherwise}.
\end{array}
\right.$$ The proof of Theorem \[generating property\] in the type A case is therefore complete.
[ $\square$\
]{}
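The combinatorial step in the type A proof — every partition $\lambda\neq(1,\cdots,1)$ of $k$ with at most $p$ rows is obtained from some partition $\mu$ of $k-2$ by adding two boxes to different columns, i.e. $\lambda/\mu$ is a horizontal strip of size 2 — can be verified by brute force for small parameters. The snippet below is our own illustration (the function names are ours); it also confirms that $(1,\cdots,1)$ itself is never reached, so $I_{2}\otimes S^{k-2}$ lands exactly in $I_k$.

```python
def partitions(n, max_rows):
    """All partitions of n with at most max_rows parts, as tuples."""
    def gen(n, max_part, rows):
        if n == 0:
            yield ()
        elif rows > 0:
            for first in range(min(n, max_part), 0, -1):
                for rest in gen(n - first, first, rows - 1):
                    yield (first,) + rest
    return list(gen(n, n, max_rows))

def horizontal_strip(lam, mu):
    """lam/mu is a horizontal strip: lam is obtained from mu by adding
    boxes, no two of which lie in the same column (the Pieri rule)."""
    if len(mu) > len(lam):
        return False
    mu = tuple(mu) + (0,) * (len(lam) - len(mu))
    return (all(l >= m for l, m in zip(lam, mu)) and
            all(mu[i] >= lam[i + 1] for i in range(len(lam) - 1)))

p, kmax = 4, 8  # sample parameters
for k in range(2, kmax + 1):
    column = tuple([1] * k)  # the component of S^k not lying in I_k
    for lam in partitions(k, p):
        reachable = any(horizontal_strip(lam, mu)
                        for mu in partitions(k - 2, p))
        # every component of I_k is reached, and (1,...,1) never is
        assert reachable == (lam != column)
```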
Type $B$, Type $D^{{{\mathbb R}}}$
----------------------------------
For $n\geq 3$, we let $$G=Spin(2,n),\quad K=Spin(2)\times_{\mu_2} Spin(n).$$ Then $D^{IV}_{n}=G/K$ is the bounded symmetric domain of type B when $n$ is odd, and of type $D^{{{\mathbb R}}}$ when $n$ is even. Let $(V_{{{\mathbb R}}},Q)$ be a real vector space of dimension $n+2$ equipped with a symmetric bilinear form of signature $(2,n)$. Then $D^{IV}_{n}$ is one of the connected components of the space parameterizing the $Q$-positive two dimensional subspaces of $V_{{{\mathbb R}}}$. To see the complex structure of $D^{IV}_{n}$ more clearly, we complexify $(V_{{{\mathbb R}}},Q)$ to obtain $(V=V_{{{\mathbb R}}}\otimes {{\mathbb C}},Q)$. Then it is known that $D^{IV}_{n}$ is an open submanifold of the quadric hypersurface defined by $Q=0$ in ${{\mathbb P}}(V)\simeq {{\mathbb P}}^{n+1}$, which is just the compact dual of $D^{IV}_{n}$. For a $Q$-isotropic line $L\subset V$, we define its polarization hyperplane to be $$P(L)=\{v\in V| Q(L,v)=0\}.$$ So for each point of $D^{IV}_{n}$, we obtain a natural filtration of $V$ by $$L\subset P(L)\subset V.$$ Varying the points on $D^{IV}_{n}$, the above filtration yields a filtration of homogeneous bundles $$S\subset P(S)\subset V\times D^{IV}_{n}.$$ On the other hand, we have a commutative diagram $$\begin{CD}
T_{D^{IV}_{n}} @>\simeq >> {{\rm Hom}}(L,\frac{P(L)}{L}) \\
@V\cap VV @VV\cap V \\
T_{{{\mathbb P}}(V),[L]} @>\simeq >> {{\rm Hom}}(L,\frac{V}{L}),
\end{CD}$$ whose top horizontal line gives the isomorphism of tangent bundle $$\begin{aligned}
\label{equation2}
T_{D^{IV}_{n}} &\simeq & {{\rm Hom}}(S,\frac{P(S)}{S}).\end{aligned}$$ We also notice that $Q$ descends to a non-degenerate bilinear form on $\frac{P(L)}{L}$, so that we have a natural isomorphism $$\begin{aligned}
\label{equation2'}
\big(\frac{P(S)}{S}\big)^* &\simeq & \frac{P(S)}{S}.\end{aligned}$$ Now we put $$E^{2,0}=S,\quad E^{1,1}=\frac{P(S)}{S},\quad E^{0,2}=\frac{V\times
D^{IV}_{n}}{P(S)},$$ and $$\begin{aligned}
\theta^{2,0}: E^{2,0} &\to & E^{1,1}\otimes \Omega_{D^{IV}_{n}}, \\
\theta^{1,1}: E^{1,1} &\to & E^{0,2}\otimes \Omega_{D^{IV}_{n}}\end{aligned}$$ are determined by the isomorphisms \[equation2\] and \[equation2'\], and $\theta^{0,2}=0$. The Higgs bundle $$(E=\bigoplus_{p+q=2}E^{p,q},\theta=\bigoplus_{p+q=2}\theta^{p,q})$$ is the system of Hodge bundles associated with the canonical PVHS ${{\mathbb W}}$.\
Let $m=[\frac{n}{2}]$ be the rank of $\mathfrak{so(n)}$, and let $\Gamma_{a_1,\cdots,a_{m}}$ denote a highest weight representation of $\mathfrak{so(n)}$. In terms of this notation, we have $$E^{2,0}\simeq{{\mathbb C}}(-2)\otimes \Gamma_{0,\cdots,0},\quad
E^{1,1}\simeq{{\mathbb C}}\otimes \Gamma_{1,0,\cdots,0},\quad
E^{0,2}\simeq{{\mathbb C}}(2)\otimes \Gamma_{0,\cdots,0}.$$ The following easy lemma makes Theorem \[generating property\] in the cases of type B and type $D^{{{\mathbb R}}}$ clear.
\[formula B\] We have isomorphisms $$\begin{aligned}
T_{D^{IV}_{n}} &\simeq& {{\mathbb C}}(2)\otimes \Gamma_{1,0,\cdots,0}, \\
S^{2}(T_{D^{IV}_{n}}) &\simeq& {{\mathbb C}}(4)\otimes \Gamma_ {2,0,\cdots,0}\oplus {{\mathbb C}}(4)\otimes \Gamma_{0,\cdots,0}, \\
I_{2} &\simeq& {{\mathbb C}}(4)\otimes \Gamma_ {2,0,\cdots,0}, \\
I_{2}\otimes T_{D^{IV}_{n}}&\simeq& {{\mathbb C}}(6)\otimes\Gamma_ {3,0,\cdots,0} \oplus {{\mathbb C}}(6)\otimes \Gamma_ {1,0,\cdots,0}\oplus {{\mathbb C}}(6)\otimes \Gamma_ {1,1,0,\cdots,0}, \\
S^{3}(T_{D^{IV}_{n}}) &\simeq&{{\mathbb C}}(6)\otimes \Gamma_ {3,0,\cdots,0}\oplus {{\mathbb C}}(6)\otimes \Gamma_
{1,0,\cdots,0}.\end{aligned}$$
Type $C$
--------
We fix $n\geq 2$. Let $$G=Sp(2n,{{\mathbb R}}),\quad K=U(n).$$ Then $D^{III}_{n}=G/K$ is the bounded symmetric domain of type C. $D^{III}_{n}$ is known as the Siegel space of degree $n$. Let $(V_{{{\mathbb R}}},\omega)$ be a real vector space of dimension $2n$ equipped with a skew symmetric bilinear form $\omega$. As before, we denote also by $(V,\omega)$ the complexification, and $h(u,v)=i\omega(u,\bar{v}) $ defines a hermitian symmetric bilinear form over $V$. Then $D^{III}_{n}$ parameterizes the maximal $\omega$-isotropic and $h$-positive complex subspaces of $V$. The standard representation $V$ of $G$ gives a weight 1 ${{\mathbb R}}$-PVHS ${{\mathbb V}}$ over $D^{III}_{n}$. Let $(F,\eta)$ be the associated Higgs bundle with ${{\mathbb V}}$. Then $F^{1,0}$ is simply the tautological subbundle over $D^{III}_{n}$ and $F^{0,1}$ is the $h$-orthogonal complement of $F^{1,0}$. Clearly, we have a natural embedding of bounded symmetric domains $$\iota: D^{III}_{n}\hookrightarrow D^{I}_{n,n}.$$ It induces a commutative diagram: $$\begin{CD}
T_{D^{III}_{n}} @>\simeq >> S^{2}(F^{0,1}) \\
@V\cap VV @VV\cap V \\
\iota^{*}(T_{D^{I}_{n,n}}) @>\simeq >> (F^{0,1})^{\otimes 2}.
\end{CD}$$ The Higgs field $\eta^{1,0}$ is defined by the composition of maps $$\begin{aligned}
\label{equation3}
T_{D^{III}_{n}} &\simeq& S^{2}(F^{0,1}) \hookrightarrow
(F^{0,1})^{\otimes 2}\simeq {{\rm Hom}}(F^{1,0},F^{0,1}).\end{aligned}$$ The canonical PVHS ${{\mathbb W}}$ is the unique weight $n$ sub-PVHS of $\bigwedge^{n}({{\mathbb V}})$. In fact, we have a decomposition of ${{\mathbb R}}$-PVHS $$\bigwedge^{n}({{\mathbb V}})={{\mathbb W}}\oplus {{\mathbb V}}',$$ where ${{\mathbb V}}'$ is a weight $n-2$ ${{\mathbb R}}$-PVHS. Therefore the Higgs bundle $(E,\theta)$ corresponding to ${{\mathbb W}}$ is a sub-Higgs bundle of $\bigwedge^{n}(F,\eta)$.\
Let $V_1=(F^{0,1})_{0}$ be the standard representation of $K$. It is straightforward to obtain the following
\[formula C\] We have an isomorphism $$T_{D^{III}_{n}}\simeq {{\mathbb S}}_{(2)}(V_1).$$ Then, for $k\geq 2$, we have an isomorphism $$S^{k}(T_{D^{III}_{n}})\simeq
\bigoplus_{\lambda}{{\mathbb S}}_{\lambda}(V_{1}),$$ where $\lambda=\{\lambda_1,\cdots,\lambda_{l}\}$ runs through all partitions of $2k$ with each $\lambda_{i}$ even and $l\leq n$. Under this isomorphism, for $k\leq n$, the $k$-th iterated Higgs field $\theta^{k}$ is identified with the projection map onto the irreducible component ${{\mathbb S}}_{\lambda^{0}}(V_{1})$ where $\lambda^{0}=(2,\cdots,2)$.
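The index set in Lemma \[formula C\] is easy to enumerate by computer. The following sketch is offered only as an illustration (it is not part of the proof); dividing every part by two identifies the index set with the partitions of $k$ into at most $n$ parts.

```python
def partitions(total, max_part=None, max_len=None):
    """Yield the partitions of `total` as non-increasing tuples,
    with parts bounded by `max_part` and length bounded by `max_len`."""
    if max_part is None:
        max_part = total
    if total == 0:
        yield ()
        return
    if max_len == 0:
        return
    for first in range(min(total, max_part), 0, -1):
        rest_len = None if max_len is None else max_len - 1
        for rest in partitions(total - first, first, rest_len):
            yield (first,) + rest

def type_C_index_set(k, n):
    """Partitions of 2k with all parts even and at most n parts:
    the labels of the irreducible summands of S^k(T) in type C."""
    return [tuple(2 * p for p in lam) for lam in partitions(k, max_len=n)]
```

For instance, `type_C_index_set(2, 5)` returns `[(4,), (2, 2)]`, matching the decomposition $S^{2}(T)\simeq {{\mathbb S}}_{(4)}\oplus {{\mathbb S}}_{(2,2)}$ used in the proof below.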
[**Proof of Theorem \[generating property\] for Type C:**]{} By the last lemma, we know that $$\theta^2\simeq pr: {{\mathbb S}}_{(4)}(V_{1})\oplus {{\mathbb S}}_{(2,2)}(V_{1})\to
{{\mathbb S}}_{(2,2)}(V_{1})$$ and then $I_2\simeq {{\mathbb S}}_{(4)}(V_{1})$. Applying Formula 6.8 of [@FH] to decompose $I_2\otimes S^{k-2}(T_{D^{III}_{n}})$, we obtain $$\begin{aligned}
I_2\otimes S^{k-2}(T_{D^{III}_{n}})&\simeq& {{\mathbb S}}_{(4)}(V_{1}) \otimes \bigoplus_{\mu}{{\mathbb S}}_{\mu}(V_{1}) \\
&\simeq & \bigoplus_{\mu}({{\mathbb S}}_{(4)}(V_{1}) \otimes {{\mathbb S}}_{\mu}(V_{1})) \\
&=& \bigoplus_{\mu}[(\bigoplus_{\nu_{\mu}}{{\mathbb S}}_{\nu}(V_1))],\end{aligned}$$ where $\mu$ runs through all partitions of $2(k-2)$ with the property described in Lemma \[formula C\], and for a fixed $\mu$, $\nu_{\mu}$ runs through the Young diagrams obtained by adding four boxes to different columns of the Young diagram $\mu$. The partition $\lambda$ of an irreducible component in $I_k$ is of the form $$\lambda_1\geq \cdots \geq
\lambda_{s}>\lambda_{s+1}=\cdots=\lambda_{l}\geq 2.$$ We may then take $\mu$ to be $$\mu_{i}=\left\{
\begin{array}{ll}
\lambda_{i}-2 & \textrm{if $i=s, l$}, \\
\lambda_{i}
& \textrm{otherwise}.
\end{array}
\right.$$ Then we define $\nu_{\mu}$ by adding two boxes to each of the rows $s$ and $l$ of $\mu$, which recovers the original $\lambda$. Therefore Theorem \[generating property\] in the type C case is proved.
[ $\square$\
]{}
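The decomposition of $I_2\otimes S^{k-2}(T)$ used in the proof above is an instance of Pieri's formula: tensoring with ${{\mathbb S}}_{(4)}$ adds a horizontal strip of four boxes, no two in the same column. A small computational sketch of this rule, for illustration only:

```python
def add_horizontal_strip(mu, k):
    """All partitions nu obtained from mu by adding k boxes, no two in the
    same column (Pieri's rule for the product with S_(k))."""
    mu = tuple(mu)
    results = []

    def extend(i, remaining, prefix):
        if i == len(mu) + 1:          # all rows (plus one new row) placed
            if remaining == 0:
                results.append(tuple(p for p in prefix if p > 0))
            return
        lo = mu[i] if i < len(mu) else 0              # a row cannot shrink
        hi = mu[i - 1] if i >= 1 else lo + remaining  # at most one new box per column
        for v in range(lo, min(hi, lo + remaining) + 1):
            extend(i + 1, remaining - (v - lo), prefix + [v])

    extend(0, k, [])
    return results
```

For $\mu=(2)$ this yields $(4,2)$, $(5,1)$ and $(6)$, i.e. ${{\mathbb S}}_{(4)}\otimes{{\mathbb S}}_{(2)}\simeq {{\mathbb S}}_{(6)}\oplus{{\mathbb S}}_{(5,1)}\oplus{{\mathbb S}}_{(4,2)}$.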
Type $D^{{{\mathbb H}}}$
------------------------
For $n\geq 3$, we let $$G=SO^{*}(2n),\quad K=U(n).$$ Then $D^{II}_{n}=G/K$ is the bounded symmetric domain of type $D^{{{\mathbb H}}}$. We recall that $$G\simeq \{M\in Sl(2n,{{\mathbb C}})|MI_{n,n}M^{*}=
I_{n,n}, MS_{n}M^{\tau}=
S_{n}
\},$$ where $I_{n,n}$ denotes the matrix $\left(
\begin{array}{cc}
I_n & 0 \\
0 & -I_n \\
\end{array}
\right)$ and $S_n$ denotes the matrix $\left(
\begin{array}{cc}
0& I_n \\
I_n & 0 \\
\end{array}
\right)$. Let $(V,h,S)$ be a complex vector space of dimension $2n$ equipped with a hermitian symmetric form $h$ and symmetric bilinear form $S$, where, under the identification $V\simeq {{\mathbb C}}^{2n}$, $h$ is defined by the matrix $I_{n,n}$ and $S$ is defined by the matrix $S_n$. Then $D^{II}_{n}$ parameterizes all $n$-dimensional $S$-isotropic and $h$-positive complex subspaces of $V$. The standard representation $V$ of $G$ determines a weight 1 PVHS ${{\mathbb V}}$. Its associated Higgs bundle $(F,\eta)$ is determined in a similar manner as type C case. Namely, $F^{1,0}$ is simply the tautological subbundle and $F^{0,1}$ is its $h$-orthogonal complement. The natural embedding $$\iota': D^{II}_{n}\hookrightarrow D^{I}_{n,n}$$ induces a commutative diagram: $$\begin{CD}
T_{D^{II}_{n}} @>\simeq >> \bigwedge^{2}(F^{0,1}) \\
@V\cap VV @VV\cap V \\
\iota'^{*}(T_{D^{I}_{n,n}}) @>\simeq >> (F^{0,1})^{\otimes 2},
\end{CD}$$ and the Higgs field $\eta^{1,0}$ is induced by the composition of maps $$\begin{aligned}
\label{equation4}
T_{D^{II}_{n}}&\simeq& \bigwedge^{2}(F^{0,1}) \hookrightarrow
(F^{0,1})^{\otimes 2}\simeq {{\rm Hom}}(F^{1,0},F^{0,1}).\end{aligned}$$ The canonical PVHS ${{\mathbb W}}$ comes from a half spin representation. We write the corresponding Higgs bundle as $$(E=\bigoplus_{p+q=[\frac{n}{2}]}E^{p,q},\theta=\bigoplus_{p+q=[\frac{n}{2}]}\theta^{p,q}).$$ Then the Hodge bundle is $$E^{p,q}= \bigwedge^{n-2q}F^{1,0},$$ and the Higgs field $\theta^{p,q}$ is induced by the natural wedge product map $$\bigwedge^{2}F^{0,1}\otimes \bigwedge^{2q}F^{0,1}\to
\bigwedge^{2q+2}F^{0,1}.$$ While the type $D^{{{\mathbb H}}}$ case enjoys many similarities with the type C case, there is one difference we would like to point out: the canonical PVHS ${{\mathbb W}}$ is not a sub-PVHS of $\bigwedge^{n}{{\mathbb V}}$. In fact, the PVHS $\bigwedge^{n}{{\mathbb V}}$ is the direct sum of two irreducible PVHSs. One of them, say ${{\mathbb V}}'$, has $$\bigwedge^{n}(F^{1,0})\otimes \bigwedge^{0}(F^{0,1})\simeq
(\bigwedge^{n}(F^{1,0}))^{\otimes 2}$$ as the first Hodge bundle. For this irreducible ${{\mathbb V}}'$, we have an inclusion of PVHS $${{\mathbb V}}'\subset Sym^{2}({{\mathbb W}}).$$ Let $V_1=(F^{0,1})_{0}$ be the dual of the standard representation of $K$. It is straightforward to obtain the following
\[formula D\] We have an isomorphism $$T_{D^{II}_{n}} \simeq {{\mathbb S}}_{(1,1)}(V_1).$$ Then, for $k\geq 2$, we have an isomorphism $$S^{k}(T_{D^{II}_{n}})\simeq
\bigoplus_{\lambda}{{\mathbb S}}_{\lambda}(V_{1}),$$ where $\lambda=\{\lambda_1,\cdots,\lambda_{l}\}$ runs through all partitions of $2k$ with $l\leq n$ and each entry of the conjugate $\lambda'$ of $\lambda$ even. Under this isomorphism, for $k\leq
[\frac{n}{2}]$, the $k$-th iterated Higgs field $\theta^{k}$ is identified with the projection map onto the irreducible component ${{\mathbb S}}_{\lambda^{0}}(V_{1})$ with $\lambda^{0}=(k,k)$.
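The condition on $\lambda$ in Lemma \[formula D\] is conjugate to the one in type C: transposing the Young diagram exchanges the two index sets. A small sketch, for illustration only:

```python
def partitions(total, max_part=None):
    """Yield the partitions of `total` as non-increasing tuples."""
    if max_part is None:
        max_part = total
    if total == 0:
        yield ()
        return
    for first in range(min(total, max_part), 0, -1):
        for rest in partitions(total - first, first):
            yield (first,) + rest

def conjugate(lam):
    """Transpose of a Young diagram given as a non-increasing tuple."""
    return tuple(sum(1 for p in lam if p > i) for i in range(lam[0])) if lam else ()

def type_D_index_set(k, n):
    """Partitions of 2k with at most n parts whose conjugate has all
    parts even: the labels of the summands of S^k(T) in type D^H."""
    return [lam for lam in partitions(2 * k)
            if len(lam) <= n and all(c % 2 == 0 for c in conjugate(lam))]
```

For $k=2$ and $n\geq 4$ this returns $(2,2)$ and $(1,1,1,1)$, whose conjugates $(2,2)$ and $(4)$ are exactly the type C labels for $k=2$.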
By the above lemma, we know that $$\theta^2\simeq pr: {{\mathbb S}}_{(1,1,1,1)}(V_{1})\oplus
{{\mathbb S}}_{(2,2)}(V_{1})\to {{\mathbb S}}_{(2,2)}(V_{1}).$$ Thus we have an isomorphism $I_2\simeq {{\mathbb S}}_{(1,1,1,1)}(V_{1})$. By Formula 6.9 of [@FH], we have $$\begin{aligned}
I_2\otimes S^{k-2}(T_{D^{II}_{n}}) &
\simeq & {{\mathbb S}}_{(1,1,1,1)}(V_{1}) \otimes \bigoplus_{\mu}{{\mathbb S}}_{\mu}(V_{1}) \\
&\simeq & \bigoplus_{\mu}({{\mathbb S}}_{(1,1,1,1)}(V_{1}) \otimes {{\mathbb S}}_{\mu}(V_{1})) \\
&=& \bigoplus_{\mu}[(\bigoplus_{\nu_{\mu}}{{\mathbb S}}_{\nu}(V_1))],\end{aligned}$$ where $\mu$ runs through all partitions of $2(k-2)$ with the property described in Lemma \[formula D\], and for a fixed $\mu$, $\nu_{\mu}$ runs through the Young diagrams obtained by adding four boxes to different rows of the Young diagram of $\mu$. We observe that this is the conjugate of the type C case. Theorem \[generating property\] in the type $D^{{{\mathbb H}}}$ case follows easily.
Type $E$
--------
There are two exceptional irreducible bounded symmetric domains. We first discuss the $E_6$ case. In this case, $$G=E_{6,2},\quad K=U(1)\times_{\mu_4}Spin(10).$$ Then $D^{V}=G/K$ is a 16-dimensional bounded symmetric domain of rank 2. There are two special nodes in the Dynkin diagram of $E_6$, but they induce isomorphic bounded symmetric domains. We take the first node, so that the fundamental representation corresponding to this special node is $W_{27}$. Let $(E=\oplus_{p+q=2}E^{p,q},\theta)$ be the Higgs bundle corresponding to ${{\mathbb W}}$. Then we have isomorphisms $$E^{2,0}\simeq {{\mathbb C}}(-2),\quad E^{1,1}\simeq {{\mathbb C}}\otimes
\Gamma_{0,0,0,1,0},\quad E^{0,2}\simeq {{\mathbb C}}(2)\otimes
\Gamma_{1,0,0,0,0}.$$ Furthermore, it is straightforward to obtain the following
\[formula E6\] We have the following isomorphisms: $$\begin{aligned}
T_{X} &\simeq & {{\mathbb C}}(2)\otimes \Gamma_{0,0,0,1,0}, \\
S^{2}(T_{X}) &\simeq & {{\mathbb C}}(4)\otimes \Gamma_{0,0,0,2,0}\oplus {{\mathbb C}}(4)\otimes \Gamma_{1,0,0,0,0}, \\
I_{2} &\simeq& {{\mathbb C}}(4)\otimes \Gamma_{0,0,0,2,0}, \\
I_{2}\otimes T_{X}&\simeq& {{\mathbb C}}(6)\otimes \Gamma_{0,0,0,3,0}\oplus {{\mathbb C}}(6)\otimes \Gamma_{1,0,0,1,0}\oplus {{\mathbb C}}(6)\otimes \Gamma_{0,0,1,1,0}, \\
S^{3}(T_{X}) &\simeq &{{\mathbb C}}(6)\otimes \Gamma_{0,0,0,3,0}\oplus {{\mathbb C}}(6)\otimes
\Gamma_{1,0,0,1,0}.\end{aligned}$$
We continue to discuss the remaining case, which has already appeared in [@G]. Let $$G=E_{7,3},\quad K=U(1)\times_{\mu_3}E_{6}.$$ Then $D^{VI}=G/K$ is of dimension 27 and rank 3. We refer the reader to §4 of [@G] for the description of Hodge bundles. The lemma corresponding to Lemma \[formula E6\] is the following
\[formula E7\] We have the following isomorphisms: $$\begin{aligned}
T_{X} &\simeq & {{\mathbb C}}(2)\otimes \Gamma_{1,0,0,0,0,0}, \\
S^{2}(T_{X}) &\simeq& {{\mathbb C}}(4)\otimes \Gamma_{2,0,0,0,0,0} \oplus {{\mathbb C}}(4)\otimes \Gamma_{0,0,0,0,0,1}, \\
I_{2} &\simeq& {{\mathbb C}}(4)\otimes \Gamma_{2,0,0,0,0,0}, \\
I_{2}\otimes T_{X}&\simeq& {{\mathbb C}}(6)\otimes \Gamma_{1,0,0,0,0,1}\oplus {{\mathbb C}}(6)\otimes \Gamma_{3,0,0,0,0,0}\oplus {{\mathbb C}}(6)\otimes \Gamma_{1,0,1,0,0,0}, \\
S^{3}(T_{X}) &\simeq& {{\mathbb C}}(6)\otimes \Gamma_{1,0,0,0,0,1}\oplus {{\mathbb C}}(6)\otimes
\Gamma_{3,0,0,0,0,0}\oplus {{\mathbb C}}(6)\otimes \Gamma_{0,0,0,0,0,0},\\
I_{3} &\simeq& {{\mathbb C}}(6)\otimes \Gamma_{1,0,0,0,0,1}\oplus {{\mathbb C}}(6)\otimes
\Gamma_{3,0,0,0,0,0}, \\
I_{2}\otimes S^{2}(T_{X}) &\simeq&{{\mathbb C}}(8)\otimes \Gamma_{4,0,0,0,0,0}\oplus {{\mathbb C}}(8)\otimes \Gamma_{2,0,0,0,0,1}\oplus {{\mathbb C}}(8)\otimes
\Gamma_{0,0,0,0,0,2}\\
&\phantom{\simeq}& \oplus {{\mathbb C}}(8)\otimes \Gamma_{2,0,1,0,0,0}\oplus {{\mathbb C}}(8)\otimes
\Gamma_{0,0,2,0,0,0} \oplus {{\mathbb C}}(8)\otimes \Gamma_{0,0,1,0,0,1}\\
&\phantom{\simeq}& \oplus {{\mathbb C}}(8)\otimes \Gamma_{1,0,0,0,0,0}\oplus {{\mathbb C}}(8)\otimes
\Gamma_{1,1,0,0,0,0} \oplus {{\mathbb C}}(8)\otimes \Gamma_{2,0,0,0,0,1},\\
S^{4}(T_{X}) &\simeq& {{\mathbb C}}(8)\otimes \Gamma_{4,0,0,0,0,0}\oplus {{\mathbb C}}(8)\otimes \Gamma_{2,0,0,0,0,1}\oplus {{\mathbb C}}(8)\otimes
\Gamma_{0,0,0,0,0,2} \\
&\phantom{\simeq}& \oplus {{\mathbb C}}(8)\otimes \Gamma_{1,0,0,0,0,0}.\end{aligned}$$
Lemma \[formula E6\] and Lemma \[formula E7\] make it clear that the generating property of Gross also holds for the exceptional cases. This completes the proof of Theorem \[generating property\].
[X-X00]{}
Bourbaki, N.: [*Lie groups and Lie algebras*]{}, Chapters 4-6, Springer-Verlag, Berlin, 2002.
Carlson, J.; Green, M.; Griffiths, P.; Harris, J.; [*Infinitesimal variations of Hodge structures (I)*]{}. Compositio Mathematica 50 (1983), 109-205.
Deligne, P.; [*Variétés de Shimura: interprétation modulaire, et techniques de construction de modèles canoniques*]{}, Proc. Sympos. Pure Math., XXXIII, 247-289, 1979.
Fulton, W.; Harris, J.; [*Representation theory, A first course*]{}, GTM 129, Springer-Verlag, New York, 1991.
Gerkmann, R.; Sheng, M.; Zuo, K.; [*Disproof of modularity of moduli space of CY 3-folds coming from eight planes of $\mathbb P^3$ in general positions*]{}, arXiv:0709.1054.
Gross, B.; [*A remark on tube domains*]{}, Math. Res. Lett., Vol.1, 1-9, 1994.
Mok, N.; [*Uniqueness theorems of Hermitian metrics of seminegative curvature on quotients of bounded symmetric domains*]{}, Annals of Mathematics, 125 (1987), no. 1, 105-152.
Mok, N.; [*Metric rigidity theorems on Hermitian locally symmetric manifolds*]{}, Series in Pure Mathematics, Vol.6, World Scientific Publishing Co., Inc., Teaneck, NJ, 1989.
Zucker, S.; [*Locally homogenous variations of Hodge structure*]{}, L’Enseignement Mathématique, 27, No.3-4, 243-276, 1981.
---
abstract: 'Establishing the security of continuous-variable quantum key distribution against general attacks in a *realistic* finite-size regime is an outstanding open problem in the field of theoretical quantum cryptography if we restrict our attention to protocols that rely on the exchange of coherent states. Indeed, techniques based on the uncertainty principle are not known to work for such protocols, and the usual tools based on de Finetti reductions only provide security for unrealistically large block lengths. We address this problem here by considering a new type of *Gaussian* de Finetti reduction that exploits the invariance of some continuous-variable protocols under the action of the unitary group $U(n)$ (instead of the symmetric group $S_n$ as in usual de Finetti theorems), and by introducing generalized $SU(2,2)$ coherent states. Crucially, combined with an energy test, this allows us to truncate the Hilbert space globally instead of at the single-mode level, as in previous approaches that failed to provide security in realistic conditions. Our reduction shows that it is sufficient to prove the security of these protocols against *Gaussian* collective attacks in order to obtain security against general attacks, thereby confirming rigorously the widely held belief that Gaussian attacks are indeed optimal against such protocols.'
author:
- Anthony Leverrier
title: 'Security of continuous-variable quantum key distribution via a Gaussian de Finetti reduction'
---
Quantum key distribution (QKD) is a cryptographic primitive aiming at distributing large secret keys to two distant parties, Alice and Bob, who have access to an authenticated classical channel. Mathematically, a QKD protocol ${\mathcal{E}}$ is described by a quantum channel, that is a completely positive trace-preserving (CPTP) map transforming an input state, typically a large bipartite entangled state shared by Alice and Bob, into two keys, ideally two identical bit strings unknown to any third party. Establishing the security of the protocol against arbitrary attacks means proving that the map ${\mathcal{E}}$ is approximately equal to an ideal protocol ${\mathcal{F}}$. An operational way of quantifying the security is by bounding the completely positive trace distance, or diamond distance between the two maps [@PR14]: the protocol is said to be ${\varepsilon}$-secure if $\|{\mathcal{E}}- {\mathcal{F}}\|_{\diamond} \leq {\varepsilon}$. If ${\mathcal{E}}$ and ${\mathcal{F}}$ act on some Hilbert space ${\mathcal{H}}$ and $\Delta = {\mathcal{E}}- {\mathcal{F}}$, then the diamond norm is defined as $$\begin{aligned}
\|\Delta\|_\diamond = \sup_{\rho \in {\mathfrak{S}}({\mathcal{H}}\otimes {\mathcal{H}}') } \|(\Delta \otimes {\mathbbm{1}}_{{\mathcal{H}}'}) (\rho) \|_1 \label{eqn:diamond1}\end{aligned}$$ where $\|\cdot\|_1$ is the trace norm and ${\mathfrak{S}}({\mathcal{H}}\otimes {\mathcal{H}}')$ is the set of normalized density matrices (positive operators of trace 1) on ${\mathcal{H}}\otimes {\mathcal{H}}'$ with ${\mathcal{H}}' \cong {\mathcal{H}}$ (see *e.g.* [@wat16]). Computing an upper bound of Eq. is very challenging in general because the Hilbert space ${\mathcal{H}}= {\mathcal{H}}_1^{\otimes n}$ has a dimension scaling exponentially with the number $n$ of quantum systems shared by Alice and Bob. Typical values of $n$ range in the millions or billions.
In order to estimate the diamond norm, it is natural to exploit all the symmetries displayed by $\Delta$. For instance, if ${\mathcal{E}}$ is a QKD protocol involving many 2-qubit pairs, such as BB84 for instance [@BB84], then $\Delta$ might be covariant under any permutation of these pairs. For such maps, Christandl, König and Renner [@CKR09] showed that the optimization of Eq. can be dramatically simplified provided that one is only interested in a polynomial approximation of $\|\Delta\|_\diamond$: indeed, it is then sufficient to consider a *single* state, called a “de Finetti state”, instead of optimizing over ${\mathcal{H}}\otimes {\mathcal{H}}' \cong ({\mathbb{C}}^4)^n \otimes ({\mathbb{C}}^4)^n$. More precisely, this de Finetti state is a purification of $ \tau_{{\mathcal{H}}} = \int \sigma_{{\mathcal{H}}_1}^{\otimes n} \mu(\sigma_{{\mathcal{H}}_1})$, where ${\mathcal{H}}_1 \cong {\mathbb{C}}^{4}$ is the single-system Hilbert space and $\mu(\cdot)$ is the measure on the space of density operators on a single system induced by the Hilbert-Schmidt metric.
This approach, called a *de Finetti reduction*, has been applied successfully to analyze the security of QKD protocol such as BB84 [@SLS10] or qudit protocols [@SS10]. Indeed, computing the value of $\|(\Delta \otimes {\mathbbm{1}}_{{\mathcal{H}}'}) \tau_{{\mathcal{H}}{\mathcal{H}}'}\|_1$ for some purification $\tau_{{\mathcal{H}}{\mathcal{H}}'}$ of $\tau_{{\mathcal{H}}}$ is usually tractable and is closely related to the task of establishing the security of the QKD protocol against *collective attacks*, corresponding to restricting the inputs of ${\mathcal{E}}$ to i.i.d. states of the form $\sigma_{{\mathcal{H}}_1}^{\otimes n}$. A full security proof then consists of two steps: proving the security against these restricted collective attacks, and applying the de Finetti reduction to obtain security (with a polynomially larger security parameter) against general attacks.
An outstanding problem in the theory of QKD is to address the security of protocols with continuous variables, that is protocols encoding the information in the continuous degrees of freedom of the quantized electromagnetic field [@WPG12; @DL15]. From a practical point of view, the essential difference between continuous-variable (CV) protocols and discrete-variable ones lies in the detection method: CV protocols rely on coherent detection, either homodyne or heterodyne depending on whether one or two quadratures are measured for each mode, while discrete-variable protocols use photon counting. The main theoretical difference is the Hilbert space ${\mathcal{H}}$, which is *infinite-dimensional* for CV QKD, corresponding to a $2n$-mode Fock space: ${\mathcal{H}}= F({\mathbb{C}}^n \otimes {\mathbb{C}}^n)=\bigoplus_{k=0}^\infty \mathrm{Sym}^k({\mathbb{C}}^{n} \otimes {\mathbb{C}}^{n})$, where $\mathrm{Sym}^k(H)$ stands for the symmetric part of $H^{\otimes k}$. Note that the definition of Eq. is formally restricted to finite-dimensional spaces, but we will ignore this issue here because one can always truncate ${\mathcal{H}}$ to make its dimension finite (arbitrarily large) and will therefore assume that the supremum can still be taken on ${\mathcal{H}}\otimes {\mathcal{H}}'$ for ${\mathcal{H}}' \cong {\mathcal{H}}$. For later convenience, let us denote ${\mathcal{H}}$ by $F_{1,1,n}$ and ${\mathcal{H}}\otimes {\mathcal{H}}'$ by $F_{2,2,n} :=F({\mathbb{C}}^{2n} \otimes {\mathbb{C}}^{2n}) \cong F_{1,1,n}\otimes F_{1,1,n}$.
A possible strategy to prove the security of such CV protocols is to follow the same steps as for BB84: first establish the security against collective attacks, then prove that this implies security against general attacks (with a reasonable loss). For protocols involving a Gaussian modulation of coherent states and heterodyne detection [@WLB04], composable security against collective attacks was recently demonstrated in [@Lev15]. The second step is to apply the de Finetti reduction outlined above. The difficulty here comes from the infinite dimensionality of the Fock space ${\mathcal{H}}$. In order to apply the technique of [@CKR09], it is therefore necessary to truncate the Fock space in a suitable manner. This can be achieved with the help of an energy test, but unfortunately, the local dimension of $\overline{{\mathcal{H}}}_1$, the truncated single-mode space, needs to grow like the logarithm of $n$ for the technique to apply [@LGRC13]. Indeed, the technique of [@CKR09] was developed for finite-dimensional systems, and the energy test needs to enforce that with high probability, *each of the unmeasured modes* contains a number of photons below some given threshold. Such a guarantee can only be obtained for a threshold increasing logarithmically with $n$. The dimension of the total truncated Hilbert space is then super-exponential in $n$, on the order of $(\log n)^{Cn}$, for some constant $C>1$. Since the loss in the security parameter obtained with [@CKR09] is superpolynomial in the dimension of the total Hilbert space, this means that if the protocol is ${\varepsilon}$-secure against collective attacks, this approach only shows that the protocol is also ${\varepsilon}'$-secure against general attacks with ${\varepsilon}' = {\varepsilon}\times 2^{\mathrm{polylog}(n)}$.
While this gives a proof that the protocol is asymptotically secure in the limit of infinitely large block lengths, it fails to provide any useful bound in practical regimes where $n \sim 10^6 - 10^9$. We note that a related strategy relies on the exponential de Finetti theorem but fails similarly to provide practical security bounds in the finite-size regime [@Ren08; @RC09].
Let us also mention that there exists a CV QKD protocol with proven security where Alice sends squeezed states to Bob instead of coherent states [@CLV01]. This protocol can be analyzed thanks to an entropic uncertainty relation [@FBB12], but this technique requires the exchange of squeezed states, which makes the protocol much less practical. Moreover, this approach does not recover the secret key rate corresponding to Gaussian attacks in the asymptotic limit of large $n$, even though these attacks are expected to be optimal. Here, in contrast, we are interested in the security of CV protocols based on the exchange of coherent states.
The idea that we exploit in this paper is that CV QKD protocols not only display the permutation invariance common to most QKD protocols, but also a specific symmetry with a continuous-variable flavor [@LKG09]. This new symmetry is linked to the unitary group $U(n)$ instead of the symmetric group $S_n$. More precisely, the protocols are covariant if Alice and Bob process their $n$ respective modes with linear-optical networks acting like the unitary $u \in U(n)$ on Alice’s annihilation operators and its complex conjugate $\overline{u}$ on Bob’s annihilation operators.
Our main technical result is an upper bound on $\|\Delta\|_{\diamond}$ for maps $\Delta$ covariant under a specific representation of the unitary group. For such maps, we show that it is sufficient to consider again a single state, which is the purification of a specific mixture of *Gaussian* i.i.d. states. This in turn will imply that it is sufficient to establish the security of the protocol against *Gaussian* collective attacks in order to prove the security of the protocol against general attacks. An important technicality is that we still need to truncate the total Hilbert space to replace it by a finite-dimensional one. Crucially, this truncation can now be done globally and not for single-mode Fock spaces as in [@LGRC13] and it is this very point that makes our approach so effective. Indeed, in our security proof, we argue that it is sufficient to consider states that are invariant under the action of $U(n)$ and such states live in a very small subspace of the ambient Fock space. More precisely, the dimension of the restriction of this subspace to states containing $K$ photons grows polynomially in $K$, instead of exponentially in the case of the total Fock space. This phenomenon is reminiscent of the fact that the dimension of the symmetric subspace of $({\mathbb{C}}^{d})^{\otimes n}$ only grows polynomially in $n$ if the local dimension $d$ is constant.
The consequence is that the security loss due to the reduction from general to collective attacks will not scale like $2^{\mathrm{polylog}(n))}$ anymore, but rather like $O(n^4)$, which behaves *much more nicely* for typical values of $n$, and yields the first practical security proof of a CV QKD protocol with coherent states against general attacks. Indeed, our security reduction performs even better than the original de Finetti reduction developed for BB84, where the security loss scales like $O(n^{15})$ [@CKR09].
Ideally, truncating the Fock space could be done by projecting the quantum state given as an input to $\Delta$ onto a finite dimensional subspace with say, less than $K$ photons (where the value of $K$ scales linearly with the total number of modes). Of course, such a projection ${\mathcal{P}}$ is unrealistic, and one will instead apply an energy test ${\mathcal{T}}$ that passes if the energy measured on a small number $k \ll n$ of modes is below some threshold and will abort the protocol otherwise. Such an idea was already considered in previous works dealing with the security of CV QKD [@RC09; @LGRC13; @fur14]. An application of the triangle inequality (see Lemma \[lem:sec-red\] in the appendix) yields: $$\begin{aligned}
\|\Delta \circ {\mathcal{T}}\|_\diamond \leq \|\Delta \circ {\mathcal{P}}\|_\diamond + 2 \|({\mathbbm{1}}-{\mathcal{P}})\circ {\mathcal{T}}\|_\diamond. \label{eqn:triangle}\end{aligned}$$ In other words, it is sufficient for our purposes to show the security of the protocol restricted to input states subject to a maximum photon number constraint, provided that we can bound the value of $\|({\mathbbm{1}}-{\mathcal{P}})\circ {\mathcal{T}}\|_\diamond$, which corresponds to the probability that the energy test passes but that the number of photons in the remaining modes is large.
[**Analysis of the energy test**]{}.—We show that $\|({\mathbbm{1}}-{\mathcal{P}})\circ {\mathcal{T}}\|_\diamond$ is indeed small for a maximal number of photons $K$ scaling linearly with $n$ (see Appendix \[sec:test\]). The energy test ${\mathcal{T}}(k, d_A, d_B)$ depends on three parameters: the number $k$ of additional modes that will be measured for the test and the maximum allowed average energies $d_A$ and $d_B$ for Alice and Bob’s modes. The input of the test is a $2(n+k)$-mode state. Alice and Bob should symmetrize this state by processing it with random conjugate linear-optical networks and measure the last $k$ modes with heterodyne detection, corresponding to a projection onto standard (Glauber) coherent states. If the average energy per mode is below $d_A$ for Alice and $d_B$ for Bob, the test passes and Alice and Bob apply the protocol ${\mathcal{E}}_0$ to their remaining modes. Otherwise the protocol simply aborts. These thresholds $d_A$, $d_B$ should be chosen large enough to ensure that the energy test passes with large probability. Note that the symmetrization of the state can be done on the classical data for the protocols of Refs. [@WLB04; @POS15] since these protocols require both parties to measure all the modes with heterodyne detection, which itself commutes with the action of the linear-optical networks. For this, Alice and Bob need to multiply their measurement results (gathered as vectors in $\mathbbm{R}^{2n}$) by an identical random orthogonal matrix. There is also hope that this symmetrization can be further simplified, but we do not address this issue here.
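The classical symmetrization step just described can be sketched as follows. The QR-based sampling of a Haar-random orthogonal matrix is a standard recipe and is only meant as an illustration (in practice the matrix has size $2n$ and more economical constructions may be preferable).

```python
import numpy as np

def haar_orthogonal(dim, rng):
    """Haar-distributed random orthogonal matrix: QR decomposition of a
    Gaussian matrix, with the signs of diag(R) fixed to make the law uniform."""
    q, r = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q * np.sign(np.diag(r))

def symmetrize(x_alice, x_bob, rng):
    """Apply the SAME random orthogonal matrix to both parties' quadrature
    vectors; this classical operation commutes with heterodyne detection."""
    o = haar_orthogonal(len(x_alice), rng)
    return o @ x_alice, o @ x_bob
```

Since the matrix is orthogonal, Euclidean norms (and hence the measured energies) are preserved, so the energy test statistics are unaffected by the symmetrization itself.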
[**An upper bound on $\|\Delta \circ {\mathcal{P}}\|_\diamond$ via de Finetti reduction**]{}.— This requires two main ingredients: first, a proof that any mixed state on $F_{1,1,n}$ that is invariant under the action of the unitary group admits a purification in the *symmetric subspace* $F_{2,2,n}^{U(n)}$, and second, that Gaussian states resolve the identity on the symmetric subspace. The symmetric subspace $F_{2,2,n}^{U(n)}$ was introduced and studied in Ref. [@lev16] and is defined as follows: $$\begin{aligned}
F_{2,2,n}^{U(n)} = \left\{|\psi\rangle \in F_{2,2,n} \: :\: W_u |\psi\rangle = |\psi\rangle, \forall u \in U(n) \right\},\end{aligned}$$ where $u \mapsto W_u$ is a representation of the unitary group $U(n)$ on the Fock space $F_{2,2,n}$ corresponding to mapping the $4n$ annihilation operators $\vec{a} = (a_1, \ldots, a_n), \vec{b} =(b_1, \ldots, b_n), \vec{a}' = (a'_1, \ldots, a'_n), \vec{b}'=(b'_1, \ldots, b'_n)$ of each of the $n$ modes of ${\mathcal{H}}_A, {\mathcal{H}}_B, {\mathcal{H}}_{A'}, {\mathcal{H}}_{B'}$ to $u \vec{a}, \overline{u} \vec{b}, \overline{u} \vec{a}', u \vec{b}'$. Here $\overline{u}$ denotes the complex conjugate of $u$ and $F_{2,2,n} = {\mathcal{H}}_A\otimes {\mathcal{H}}_B \otimes {\mathcal{H}}_{A'}\otimes {\mathcal{H}}_{B'}$.
In Ref. [@lev16], a full characterization of the symmetric subspace $F_{2,2,n}^{U(n)}$ is given. Let us introduce the four operators $Z_{11}, Z_{12}, Z_{21}, Z_{22}$ defined by: $$\begin{aligned}
Z_{11} &= \sum_{i=1}^n a_i^\dagger b_i^\dagger, \quad Z_{12} =\sum_{i=1}^n a_i^\dagger a'^\dagger_i,\\
Z_{21} &= \sum_{i=1}^n b_i^\dagger b'^\dagger_i, \quad Z_{22} =\sum_{i=1}^n a'^\dagger_i b'^\dagger_i.\end{aligned}$$ We now define the so-called $SU(2,2)$ *generalized coherent states* [@per72; @per86]: to any $2\times 2$ complex matrix $\Lambda = \left(\begin{smallmatrix} \lambda_{11} & \lambda_{12} \\ \lambda_{21} & \lambda_{22} \end{smallmatrix} \right)$ such that $\Lambda \Lambda^\dagger \prec {\mathbbm{1}}_2$ (that is, with a spectral norm strictly less than 1), we associate the $4n$-mode Gaussian state $|\Lambda, n\rangle = |\Lambda,1\rangle^{\otimes n}$ given by $$\begin{aligned}
|\Lambda,n\rangle = \mathrm{det} (1-\Lambda\Lambda^\dagger)^{n/2} \exp\left( \sum_{i,j=1}^2 \lambda_{ij} Z_{ij}\right) |\mathrm{vacuum}\rangle.\end{aligned}$$ Since the polynomial $\sum_{i,j=1}^2 \lambda_{ij} Z_{ij}$ is quadratic in the creation operators, the generalized coherent state is a Gaussian state. More specifically, it corresponds to $n$ copies of a centered 4-mode pure Gaussian state whose covariance matrix is a function of $\Lambda$ (see the discussion in Section 3 of Ref. [@lev16] for details).
These generalized coherent states span the symmetric subspace [@lev16], and moreover, for $n\geq 4$, they resolve the identity on the symmetric subspace [@lev16]: $$\begin{aligned}
\int_{{\mathcal{D}}} |\Lambda,n\rangle \langle \Lambda,n| \d \mu_n(\Lambda) = {\mathbbm{1}}_{F_{2,2,n}^{U(n)}} \label{eqn:maintext-id}\end{aligned}$$ where ${\mathcal{D}}$ is the set of $2\times 2$ matrices $\Lambda$ such that $\Lambda\Lambda^\dagger \prec {\mathbbm{1}}_2$ and $\mathrm{d}\mu_n(\Lambda)$ is the invariant measure on ${\mathcal{D}}$ given by $$\begin{aligned}
\mathrm{d}\mu_n (\Lambda) = \frac{(n-1)(n-2)^2(n-3)}{\pi^{4}\det(\mathbbm{1}_2 - \Lambda \Lambda^\dagger)^4 } \d \lambda_{11} \d \lambda_{12}\d \lambda_{21} \d \lambda_{22}. \end{aligned}$$
Since the space $F_{2,2,n}^{U(n)}$ is infinite-dimensional, the integral of Eq. is not normalizable. In order to obtain an operator with finite norm, we consider the finite-dimensional subspace $F_{2,2,n}^{U(n), \leq K}$ of $F_{2,2,n}^{U(n)}$ spanned by states with at most $K$ “excitations”: $$\begin{aligned}
\mathrm{Span}\left\{ (Z_{11})^i (Z_{12})^j (Z_{21})^k (Z_{22})^\ell |\mathrm{vac}\rangle \: : \: i+j+k+\ell \leq K \right\}.\end{aligned}$$ We show in Appendix \[sec:finite\] that an approximate resolution of the identity still holds for this space when restricting the coherent states $|\Lambda,n\rangle$ to $\Lambda \in {\mathcal{D}}_\eta$, where ${\mathcal{D}}_\eta = \left\{ \Lambda \in {\mathcal{D}}\: : \: \eta {\mathbbm{1}}_2 -\Lambda\Lambda^\dagger \succeq 0 \right\}$ for $\eta \in [0,1[$. Let us denote by $\Pi_{\leq K}$ the orthogonal projector onto the subspace $F_{2,2,n}^{U(n), \leq K}$ and introduce the relative entropy $D(x||y) = x \log \frac{x}{y} + (1-x) \log \frac{1-x}{1-y}$.
\[thm:finite-version\] For $n\geq 5$ and $\eta \in [0,1[$, if $K \leq\frac{\eta N}{1-\eta}$ for $N=n-5$, then the operator inequality $$\begin{aligned}
\int_{\mathcal{D}_\eta} |\Lambda,n\rangle \langle \Lambda,n | \d \mu_n(\Lambda)\geq (1-{\varepsilon}) \Pi_{\leq K} \label{eqn:approximate-main}\end{aligned}$$ holds with ${\varepsilon}= 2 N^4 (1+K/N)^7 \exp\left(-N D\left(\frac{K}{K+N} \big\| \eta \right) \right)$.
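To get a feeling for the quality of the bound in Theorem \[thm:finite-version\], one can evaluate ${\varepsilon}$ numerically. The sketch below assumes natural logarithms in the relative entropy (a convention choice on our part), and the parameter values in the example are arbitrary choices satisfying the hypothesis $K \leq \eta N/(1-\eta)$.

```python
import math

def rel_entropy(x, y):
    """Binary relative entropy D(x||y); natural logarithm assumed."""
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

def epsilon_bound(n, K, eta):
    """Error term of the approximate resolution of the identity:
    2 N^4 (1 + K/N)^7 exp(-N D(K/(K+N) || eta)), with N = n - 5."""
    N = n - 5
    if K > eta * N / (1 - eta):
        raise ValueError("need K <= eta * N / (1 - eta)")
    return 2 * N**4 * (1 + K / N)**7 * math.exp(-N * rel_entropy(K / (K + N), eta))
```

For example, with $n = 10^4$, an average of five photons per mode ($K = 5N$) and $\eta = 0.9$, the bound is already astronomically small, of order $10^{-70}$.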
This approximate resolution of the identity allows us to bound the diamond norm of maps which are covariant under the action $W_u$ of the unitary group $U(n)$, provided that the total photon number of the input state is upper bounded by some known value $K$. Let us define $\tau_{{\mathcal{H}}}^\eta$ to be the normalized state corresponding to the left-hand side of Eq. , and $\tau_{{\mathcal{H}}{\mathcal{N}}}^{\eta}$ a purification of $\tau_{{\mathcal{H}}}^\eta$.
\[thm:postselection\] Let $\Delta: \mathrm{End}(F_{1,1,n}^{\leq K}) \to \mathrm{End}(\mathcal{H}')$ such that for all $u \in U(n)$, there exists a CPTP map $\mathcal{K}_u: \mathrm{End}(\mathcal{H}') \to \mathrm{End}(\mathcal{H}')$ such that $\Delta \circ W_u = \mathcal{K}_u \circ \Delta$, then $$\begin{aligned}
\|\Delta\|_\diamond \leq \frac{K^4}{50} \|(\Delta \otimes \mathrm{id}) \tau^\eta_{\mathcal{H}\mathcal{N}} \|_1,\end{aligned}$$ for $\eta = \frac{K-n+5}{K+n-5}$, provided that $n \geq N^*(K/(n-5))$.
The function $N^*$ is defined in Eq. and its argument is an upper bound on the average number of photons per mode. One has for instance $N^*(21) \approx 10^4$, $N^*(60) \approx 10^5$.
As in the case of permutation invariance treated in [@CKR09], Theorem \[thm:postselection\] shows that one can obtain a polynomial approximation of degree 4 (if the average number of photons per mode is constant) of the diamond norm by simply evaluating the trace norm of the map on a very simple state, namely a purification of a mixture of Gaussian i.i.d. states. We note that we restricted the analysis to $SU(2,2)$ coherent states here because they are the relevant ones for cryptographic applications, but our results can be extended to $SU(p,q)$ coherent states for arbitrary integers $p, q$. In that case, the prefactor of the diamond norm approximation would be a polynomial of degree $pq$.
[**Security reduction to Gaussian collective attacks**]{}.—We now explain how to obtain a bound on $\|(\Delta \otimes \mathrm{id}) \tau^\eta_{\mathcal{H}\mathcal{N}} \|_1$, if we already know that the initial protocol (without the energy test) is ${\varepsilon}$-secure against collective attacks. Let us therefore assume that we are given such a CV QKD protocol ${\mathcal{E}}_0$ acting on $2n$-mode states shared by Alice and Bob which is, in addition, covariant under the action of the unitary group (*i.e.* there exists $\mathcal{K}_u$ such that ${\mathcal{E}}_0 \circ W_u = {\mathcal{K}}_u \circ {\mathcal{E}}_0$). Examples of such protocols are the no-switching protocol [@WLB04] and the measurement-device-independent protocol of Ref. [@POS15], provided that they are suitably symmetrized. We define ${\mathcal{E}}:= {\mathcal{R}}\circ {\mathcal{E}}_0 \circ {\mathcal{T}}$, where ${\mathcal{R}}$ is an additional privacy amplification step that reduces the key by $\lceil 2 \log_2 \tbinom{K+4}{4} \rceil$ bits.
Recall that by definition, the QKD protocol ${\mathcal{E}}_0$ is ${\varepsilon}$-secure against Gaussian collective attacks if $$\begin{aligned}
\| (({\mathcal{E}}_0-{\mathcal{F}}_0)\otimes \mathrm{id})(|\Lambda,n\rangle \langle \Lambda,n|)\|_1\leq {\varepsilon}\end{aligned}$$ for all $\Lambda \in {\mathcal{D}}$. It means that the protocol is shown to be secure for input states of the form ${\mathrm{tr}}_{{\mathcal{H}}_{A'}{\mathcal{H}}_{B'}} (|\Lambda, n\rangle\langle \Lambda,n|)$, which are nothing but i.i.d. bipartite Gaussian states. By linearity, we immediately obtain that $\|(\Delta \otimes \mathrm{id}) \tau^\eta_{\mathcal{H}} \|_1 \leq {\varepsilon}$. To finish the proof, we need to take into account the extra system ${\mathcal{N}}$ given to Eve. This system can be chosen of dimension $\tbinom{K+4}{4}$ and the leftover hashing lemma of Renner [@Ren08] says that by shortening the final key of the protocol by $2 \log_2 (\mathrm{dim} \, {\mathcal{N}})$, one ensures that the protocol remains ${\varepsilon}$-secure. This is the role of the map ${\mathcal{R}}$. Overall, we find that $\|(({\mathcal{E}}-{\mathcal{F}}) \otimes \mathrm{id}) \tau^\eta_{\mathcal{H}\mathcal{N}} \|_1 \leq {\varepsilon}$.
[**Results**]{}.—Putting everything together, we show that if ${\mathcal{E}}_0$ is covariant under the action of the unitary group and ${\varepsilon}$-secure against Gaussian collective attacks, then the protocol ${\mathcal{E}}= {\mathcal{R}}\circ {\mathcal{E}}_0 \circ {\mathcal{T}}$ is ${\varepsilon}'$-secure against general attacks, with $$\begin{aligned}
{\varepsilon}' =\frac{K^4}{50} {\varepsilon}\label{eqn:final-result-main}\end{aligned}$$ for $K = \max \Big\{1, n(d_A + d_B)\Big(1 + 2 \sqrt{\frac{\ln (8/{\varepsilon})}{2n}} + \frac{\ln (8/{\varepsilon})}{n}\Big)\Big(1-2{\sqrt{\frac{\ln (8/{\varepsilon})}{2k}}}\Big)^{-1}\Big\}$. The full proof is presented in Appendix \[sec:general-proof\]. The advantage of our approach compared to the previous results of [@LGRC13] is two-fold: first the improvement of the prefactor in Eq. from $2^{\mathrm{polylog}(n)}$ to $O(n^4)$ yields security for practical settings; second, it is only required to establish the security of the protocol against Gaussian collective attacks in order to apply our security reduction, a task arguably much simpler than addressing the security against collective attacks in the case of CV QKD.
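To get a feeling for the numbers, the following sketch evaluates $K$ and ${\varepsilon}'$; the parameters $d_A, d_B$ (entering the energy test) and $k$ (the number of modes sacrificed for the test) are hypothetical placeholders chosen only for illustration:

```python
import math

def cutoff_and_security(n, k, dA, dB, eps):
    # K from the formula below; requires 2*sqrt(ln(8/eps)/(2k)) < 1
    # so that the denominator is positive.
    t = math.log(8 / eps)
    K = max(1.0,
            n * (dA + dB) * (1 + 2 * math.sqrt(t / (2 * n)) + t / n)
            / (1 - 2 * math.sqrt(t / (2 * k))))
    eps_prime = K**4 / 50 * eps  # eps' = K^4 eps / 50
    return K, eps_prime
```

With $n = 10^8$ modes, $k = 10^7$ test modes, $d_A = d_B = 2$ and ${\varepsilon}= 10^{-40}$, one finds $K \approx 4 \times 10^8$, so the $K^4/50$ prefactor consumes close to 33 orders of magnitude of ${\varepsilon}$.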
[**Discussion**]{}.—Despite their wide range of application, there is a regime where “standard” de Finetti theorems fail, namely when the local dimension is not negligible compared to the number $n$ of subsystems [@CKMR07]. In particular, these techniques do not apply directly to CV protocols where the local spaces are infinite-dimensional Fock spaces. In this work, we considered a natural symmetry displayed by some important CV QKD protocols, which are covariant under the action of beamsplitters and phase-shifts on their $n$ modes [@LKG09]. For such protocols, one legitimately expects that stronger versions of de Finetti theorems should hold. In particular, it is a widely held belief that it is enough to consider *Gaussian* i.i.d. input states instead of all i.i.d. states in order to analyze the security of the corresponding protocol.
We proved this statement rigorously here. Our main tool is a family of $SU(2,2)$ generalized coherent states that resolve the identity of the subspace spanned by states invariant under the action of $U(n)$. This implies that in some applications such as QKD, it is sufficient to consider the behaviour of the protocol on these states in order to obtain guarantees that hold for arbitrary input states.
Let us conclude by discussing the issue of active symmetrization. For the proof above to go through, it is required that the protocols are covariant under the action of the unitary group. Such an invariance can be enforced by symmetrizing the classical data held by Alice and Bob. However, this step is computationally costly and it would be beneficial to bypass it. We believe that this should be possible. Indeed, it is often argued that a similar step is unnecessary when proving the security of BB84 for instance, and there is no fundamental reason to think that the situation is different here. Moreover, we already know of security proofs based on the uncertainty principle [@TR11; @TLG12; @DFR16] where such a symmetrization is not required.
I gladly acknowledge inspiring discussions with Matthias Christandl and Tobias Fritz.
, ** (), .
, ** ().
, in ** (), vol. .
, , , ****, ().
, , , ****, ().
, ****, ().
, , , , , , , ****, ().
, ****, ().
, , , , , , ****, ().
, ****, ().
, , , , ****, ().
, ****, ().
, ****, ().
, , , ****, ().
, , , , , , , ****, ().
, , , , ****, ().
, ****, ().
, , , , , , , , , ****, ().
, ().
, ****, ().
, ** (, ).
, , , , ****, ().
, ****, ().
, , , , ****, ().
, , , ().
, ().
, **, vol. (, ).
, **, Wiley series in probability and mathematical statistics. Probability and mathematical statistics (, , ), ISBN .
, ****, ().
, ****, ().
In Section \[sec:symm-sub\], we recall the main results from Ref. [@lev16] about the symmetric subspace $F_{2,2,n}^{U(n)}$ and the generalized $SU(2,2)$ coherent states. In Section \[sec:lemmas\], we present a series of technical lemmas and prove in Section \[sec:finite\] that bounded-energy generalized coherent states approximately resolve the identity on $F_{2,2,n}^{U(n), \leq K}$. In Section \[sec:general-proof\], we explain how to perform the security proof of the protocol and show that bounding the norm of $\Delta = {\mathcal{E}}- {\mathcal{F}}$ decomposes into separate tasks. In Section \[sec:generalization\], we derive our generalization of the de Finetti reduction of [@CKR09] to maps that are covariant under the action of the unitary group $U(n)$. In Section \[sec:collective\], we show how to reduce the security analysis against general attacks to a security analysis against Gaussian collective attacks, if the photon number of the input states is bounded. Finally, in Section \[sec:test\], we analyze the energy test and show how it provides the restriction on the input states required for the proof of Section \[sec:collective\] to go through.
The symmetric subspace $F_{2,2,n}^{U(n)}$ and generalized $SU(2,2)$ coherent states {#sec:symm-sub}
===================================================================================
In this section, we recall some results from Ref. [@lev16] where the symmetric subspace $F_{p,q,n}^{U(n)}$ is considered for arbitrary integers $p,q$ and specialize them to the case where $p=q=2$.
The symmetric subspace $F_{2,2,n}^{U(n)}$
-----------------------------------------
Let $H_A \cong H_B \cong H_{A'} \cong H_{B'} \cong {\mathbb{C}}^n$ and define the Fock space $F_{2,2,n}$ as $$\begin{aligned}
F_{2,2,n} := \bigoplus_{k=0}^\infty \mathrm{Sym}^k(H_A \oplus H_B \oplus H_{A'} \oplus H_{B'}),\end{aligned}$$ where $\mathrm{Sym}^k(H)$ is the symmetric part of $H^{\otimes k}$.
In this paper, we will use both the standard Hilbert representation and the Segal-Bargmann representation of $F_{2,2,n}$. Using the Segal-Bargmann representation, the Hilbert space $F_{2,2,n}$ is realized as a functional space of complex holomorphic functions square-integrable with respect to a Gaussian measure, $F_{2,2,n} \cong L^2_{\mathrm{hol}}({\mathbb{C}}^{4n}, \| \cdot\|)$, with a state $\psi \in F_{2,2,n}$ represented by a holomorphic function $\psi(z,z')$ with $z \in {\mathbb{C}}^{2n}, z' \in {\mathbb{C}}^{2n}$ satisfying $$\begin{aligned}
\label{eqn:norm}
\|\psi\|^2 := \langle \psi, \psi\rangle = \frac{1}{\pi^{4n}}\int \exp(-|z|^2 -|z'|^2) |\psi(z,z')|^2 \d z \d z'< \infty\end{aligned}$$ where $\d z := \prod_{k=1}^n \prod_{i=1}^2 \mathrm{d}z_{k,i}$ and $\d z' := \prod_{k=1}^n \prod_{j=1}^2 \mathrm{d}z_{k,j}'$ denote the Lebesgue measures on ${\mathbb{C}}^{2n}$ and ${\mathbb{C}}^{2n}$, respectively, and $|z|^2 := \sum_{k=1}^n\sum_{i=1}^2 |z_{k,i}|^2, |z'|^2 := \sum_{k=1}^n \sum_{j=1}^2 |z_{k,j}'|^2$. A state $\psi$ is therefore described as a holomorphic function of $4n$ complex variables $(z_{1,1}, \ldots, z_{n,1}; z_{1,2}, \ldots, z_{n,2}; z_{1,1}', \ldots, z_{n,1}'; z_{1,2}', \ldots, z_{n,2}')$. In the following, we denote by $z_i$ and $z_j'$ the vectors $(z_{1,i}, \ldots, z_{n,i})$ and $(z_{1,j}', \ldots, z_{n,j}')$, respectively, for $i,j \in \{1,2\}$. With these notations, the vector $z_1$ is associated to the space $H_A$, the vector $z_1'$ to $H_B$, the vector $z_2$ to $H_B'$ and the vector $z_2'$ to $H_A'$. These notations are chosen so that the unitary $u \in U(n)$ acts as $u$ on $z_1, z_2$, and $\overline{u}$ on $z'_1, z_2'$.
Let $\mathfrak{B}(F_{2,2,n})$ denote the set of bounded linear operators from $F_{2,2,n}$ to itself and let $\mathfrak{S}(F_{2,2,n})$ be the set of quantum states on $F_{2,2,n}$: positive semi-definite operators with unit trace.
Formally, one can switch from the Segal-Bargmann representation to the representation in terms of annihilation and creation operators by replacing the variables $z_{k,1}$ by $a_k^\dagger$, $z_{k,2}$ by $b'^\dagger_k$, $z'_{k,1}$ by $b_k^\dagger$ and $z'_{k,2}$ by $a'^\dagger_k$. The function $f(z,z')$ is therefore replaced by an operator $f(a^\dagger, b^\dagger, a'^\dagger, b'^\dagger)$ and the corresponding state in the Fock basis is obtained by applying this operator to the vacuum state.
The metaplectic representation of the unitary group $U(n) \subset Sp(2n,{\mathbb{R}})$ on $ F_{2,2,n}$ associates to $u \in U(n)$ the operator $W_u$ performing the change of variables $z \to uz$, $z' \to \overline{u} z'$: $$\begin{aligned}
U(n) & \to \mathfrak{B}(F_{2,2,n})\\
u & \mapsto W_u = \big[ \psi(z_1, z_2, z_1', z_2') \mapsto \psi(u z_1, u z_2, \overline{u} z_1', \overline{u} z_2')\big]\end{aligned}$$ where $\overline{u}$ denotes the complex conjugate of the unitary matrix $u$. In other words, the unitary $u$ is applied to the modes of $F_A \otimes F_{B'}$ and its complex conjugate is applied to those of $F_B \otimes F_{A'}$.
The states that are left invariant under the action of the unitary group $U(n)$ are relevant for instance in the context of continuous-variable quantum key distribution, and we define the symmetric subspace as the space spanned by such invariant states.
For integer $n \geq 1$, the *symmetric subspace* $F_{2,2,n}^{U(n)}$ is the subspace of functions $\psi \in F_{2,2,n}$ such that $$\begin{aligned}
W_u \psi = \psi \quad \forall u \in U(n).\end{aligned}$$
The name *symmetric subspace* is inspired by the name given to the subspace $\mathrm{Sym}^n(\mathbbm{C}^d)$ of $(\mathbbm{C}^d)^{\otimes n}$ of states invariant under permutation of the subsystems: $$\begin{aligned}
\mathrm{Sym}^n(\mathbbm{C}^d) := \left\{|\psi\rangle \in(\mathbbm{C}^d)^{\otimes n} \: : \: P(\pi) |\psi\rangle = |\psi\rangle, \forall \pi \in S_n \right\}\end{aligned}$$ where $\pi \mapsto P(\pi)$ is a representation of the permutation group $S_n$ on $(\mathbbm{C}^d)^{\otimes n}$ and $P(\pi)$ is the operator that permutes the $n$ factors of the state according to $\pi \in S_n$. See for instance [@har13] for a recent exposition of the symmetric subspace from a quantum information perspective.
In [@lev16], a full characterization of the symmetric subspace $F_{2,2,n}^{U(n)}$ is given. It is helpful to introduce the four operators $Z_{11}, Z_{12}, Z_{21}, Z_{22}$ defined by: $$\begin{aligned}
Z_{11} = \sum_{i=1}^n z_{i,1} z'_{i,1} \quad & \leftrightarrow \quad \sum_{i=1}^n a_i^\dagger b_i^\dagger\\
Z_{12} =\sum_{i=1}^n z_{i,1} z'_{i,2} \quad & \leftrightarrow \quad \sum_{i=1}^n a_i^\dagger a'^\dagger_i,\\
Z_{21} = \sum_{i=1}^n z_{i,2} z'_{i,1} \quad & \leftrightarrow \quad \sum_{i=1}^n b_i^\dagger b'^\dagger_i, \quad \\
Z_{22} =\sum_{i=1}^n z_{i,2} z'_{i,2} \quad & \leftrightarrow \quad \sum_{i=1}^n a'^\dagger_i b'^\dagger_i.\end{aligned}$$
For integer $n\geq 1$, let $E_{2,2,n}$ be the space of analytic functions $\psi$ of the $4$ variables $Z_{1,1}, \ldots, Z_{2,2}$, satisfying $\|\psi\|_E^2 < \infty$, that is $E_{2,2,n} = L^2_{\mathrm{hol}}({\mathbb{C}}^{4}, \|\cdot\|_E)$.
In [@lev16], it was proven that $E_{2,2,n}$ coincides with the symmetric subspace $F_{2,2,n}^{U(n)}$.
\[thm:charact-symm\] For $n \geq 2$, the symmetric subspace $F_{2,2,n}^{U(n)}$ is isomorphic to $E_{2,2,n}$.
In other words, any state in the symmetric subspace can be written as $$\begin{aligned}
|\psi\rangle = f\big(\sum_{i=1}^n a_i^\dagger b_i^\dagger, \sum_{i=1}^n a_i^\dagger a'^\dagger_i, \sum_{i=1}^n b_i^\dagger b'^\dagger_i, \sum_{i=1}^n a'^\dagger_i b'^\dagger_i \big) |\mathrm{vacuum}\rangle\end{aligned}$$ for some function $f$. Said otherwise, such a state is characterized by only 4 parameters instead of $4n$ for an arbitrary state in $F_{2,2,n}$; in other words, the symmetric subspace is isomorphic to a 4-mode Fock space (with “creation” operators corresponding to $Z_{11}, Z_{12}, Z_{21}, Z_{22}$), instead of the ambient $4n$-mode Fock space.
Coherent states for $SU(2,2)/SU(2)\times SU(2) \times U(1)$ {#sec:CS}
-----------------------------------------------------------
In this section, we first review a construction due to Perelomov that associates a family of generalized coherent states to general Lie groups [@per72], [@per86]. In this language, the standard Glauber coherent states are associated with the Heisenberg-Weyl group, while the atomic spin coherent states are associated with $SU(2)$. The symmetric subspace $F_{2,2,n}^{U(n)}$ is spanned by $SU(2,2)$ coherent states, where $SU(2,2)$ is the special unitary group of signature $(2,2)$ over ${\mathbb{C}}$: $$\begin{aligned}
SU(2,2) := \left\{ A \in M_{4}({\mathbb{C}}) \: : \: A {\mathbbm{1}}_{2,2} A^\dagger ={\mathbbm{1}}_{2,2} \right\}\end{aligned}$$ where $M_{4}({\mathbb{C}})$ is the set of $4\times 4$-complex matrices and ${\mathbbm{1}}_{2,2} = {\mathbbm{1}}_{2} \oplus (-{\mathbbm{1}}_2)$.
In Perelomov’s construction, a *system of coherent states of type* $(T, |\psi_0\rangle)$ where $T$ is the representation of some group $G$ acting on some Hilbert space $\mathcal{H} \ni |\psi_0\rangle$, is the set of states $\left\{|\psi_g\rangle \: : \: |\psi_g\rangle = T_g |\psi_0\rangle\right\}$ where $g$ runs over all the group $G$. One defines $H$, the *stationary subgroup* of $|\psi_0\rangle$ as $$\begin{aligned}
H := \left\{g \in G \: : \: T_g |\psi_0\rangle = \alpha |\psi_0\rangle \, \text{for} \, |\alpha|=1 \right\},\end{aligned}$$ that is the group of $h \in G$ such that $|\psi_h\rangle $ and $|\psi_0\rangle$ differ only by a phase factor. When $G$ is a connected noncompact simple Lie group, $H$ is the maximal subgroup of $G$. In particular, for $G = SU(2,2)$, one has $H= SU(2,2) \cap U(4) = SU(2) \times SU(2)\times U(1)$ and the factor space $G/H$ corresponds to a Hermitian symmetric space of classical type (see *e.g.* Chapter X of [@hel79]). The generalized coherent states are parameterized by points in $G/H$. For $G/H = SU(2,2)/SU(2)\times SU(2) \times U(1)$, the factor space is the set ${\mathcal{D}}$ of $2\times 2$ matrices $\Lambda$ such that $\Lambda \Lambda^\dagger < {\mathbbm{1}}_{2}$, i.e. the singular values of $\Lambda$ are strictly less than 1. $$\begin{aligned}
{\mathcal{D}}= \left\{ \Lambda \in M_{2}({\mathbb{C}}) \: : \:\mathbbm{1}_2 - \Lambda\Lambda^\dagger >0 \right\},\end{aligned}$$ where $A>0$ for a Hermitian matrix $A$ means that $A$ is positive definite.
We are now ready to define our coherent states for the noncompact Lie group $SU(2,2)$.
\[defn:CS\] For $n \geq 1$, the coherent state $\psi_{\Lambda,n}$ associated with $\Lambda \in {\mathcal{D}}$ is given by $$\begin{aligned}
\psi_{\Lambda,n}(Z_{1,1}, \ldots, Z_{2,2}) = \det (\mathbbm{1}_2-\Lambda \Lambda^\dagger)^{n/2} \det \exp (\Lambda^T Z)
\end{aligned}$$ where $Z$ is the $2\times 2$ matrix $\left[ Z_{i,j}\right]_{i,j \in \{1,2\}}$.
In the following, we will sometimes abuse notation and write $\psi_{\Lambda}$ instead of $\psi_{\Lambda,n}$, when the parameter $n$ is clear from context.
We note that the coherent states have a tensor product form in the sense that $$\begin{aligned}
\psi_{\Lambda,n}=\psi_{\Lambda,1}^{\otimes n}.\end{aligned}$$ We will also write $|\Lambda,n\rangle = |\Lambda,1\rangle^{\otimes n}$ for $\psi_{\Lambda,n}$. Such a state is called *independent and identically distributed* (i.i.d.) in the quantum information literature.
The main feature of a family of coherent states is that they resolve the identity. This is the case with the $SU(2,2)$ coherent states introduced above: see Ref. [@lev16].
\[thm:resol\] For $n \geq 4$, the coherent states resolve the identity over the symmetric subspace $F_{2,2,n}^{U(n)}$: $$\begin{aligned}
\int_{{\mathcal{D}}} |\Lambda,n\rangle \langle \Lambda,n| \mathrm{d}\mu_n(\Lambda) = \mathbbm{1}_{F_{2,2,n}^{U(n)}}, \end{aligned}$$ where $\mathrm{d}\mu_n(\Lambda)$ is the invariant measure on ${\mathcal{D}}$ given by $$\begin{aligned}
\label{eqn:mu}
\mathrm{d}\mu_n (\Lambda) = \frac{(n-1)(n-2)^2(n-3)}{\pi^{4}\det(\mathbbm{1}_2 - \Lambda \Lambda^\dagger)^4 } \prod_{i=1}^2 \prod_{j=1}^{2} \mathrm{d} \mathfrak{R}(\Lambda_{i,j}) \mathrm{d} \mathfrak{I}(\Lambda_{i,j}), \end{aligned}$$ where $\mathfrak{R}(\Lambda_{i,j})$ and $\mathfrak{I}(\Lambda_{i,j})$ refer respectively to the real and imaginary parts of $\Lambda_{i,j}$. This operator equality is to be understood for the weak operator topology.
Technical lemmas {#sec:lemmas}
================
In this section, we prove or recall a number of technical results that will be useful for analyzing the finite energy version of the de Finetti theorem in \[sec:finite\].
Tail bounds
-----------
For positive integers $k,n >0$, the Beta and incomplete Beta functions are given respectively by $$\begin{aligned}
B(k,n) = \int_0^1 t^{k-1} (1-t)^{n-1} \d t = \frac{(k-1)!(n-1)!}{(n+k-1)!}, \quad B(x;k,n) = \int_0^x t^{k-1}(1-t)^{n-1} \d t,\end{aligned}$$ for $x >0$. Finally, the regularized Beta function is defined as $$\begin{aligned}
I_{x}(k,n) = \frac{B(x; k,n)}{B(k,n)}.\end{aligned}$$
Let us recall the Chernoff bound for a sum of independent Bernoulli variables.
\[thm:chernoff\] Let $X_1, \ldots, X_n$ be independent random variables on $\{0,1\}$ with $\mathrm{Pr}[X_i=1]=p$, for $i=1, \ldots, n$. Set $X = \sum_{i=1}^n X_i$. Then for any $t \in [0,1-p]$, we have $$\begin{aligned}
\mathrm{Pr}[X \geq (p+t)n ] \leq \exp \left(-n D(p+t||p) \right),\end{aligned}$$ where the relative entropy is defined as $D(x||y) = x \log \frac{x}{y} + (1-x) \log \frac{1-x}{1-y}$.
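As a sanity check, the Chernoff bound of Theorem \[thm:chernoff\] can be compared with the exact binomial tail; a minimal sketch (with natural logarithms in $D$):

```python
import math

def binom_tail(n, p, m):
    # Exact Pr[X >= m] for X ~ Binomial(n, p).
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(m, n + 1))

def chernoff(n, p, t):
    # Chernoff bound exp(-n D(p+t || p)) on Pr[X >= (p+t) n].
    q = p + t
    D = q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))
    return math.exp(-n * D)
```

For $n=100$, $p=0.3$ and $t=0.2$, the exact tail indeed lies below the bound $\exp(-100\, D(0.5\|0.3)) \approx 1.6\times 10^{-4}$.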
Pinsker’s inequality gives a lower bound on $D(x||y)$ as a function of the total variation distance between the two probability distributions.
\[lem:pinsker\] For $0 < y < x <1$, it holds that $$\begin{aligned}
D(x \| y) \geq \frac{2}{\ln 2} (x-y)^2.\end{aligned}$$
We now prove a tail bound for the regularized Beta function.
\[lem:tail-bound\] For integers $k,n >0$, it holds that $$\begin{aligned}
1-I_{\eta}(k,n) \leq \exp\left(-(n+k-1) D\left(\frac{k-2}{n+k-1} \| \eta \right) \right),\end{aligned}$$ provided that $\eta \geq (k-2)/(n+k-1)$.
The incomplete Beta function can be related to the tail of the binomial distribution as follows: $$\begin{aligned}
1- I_{\eta}(k,n) &= F(k-1,n+k-1,\eta) \label{eqn:injected}\end{aligned}$$ where $F(K,N,p)$ is the probability that there are at most $K$ successes when drawing $N$ times from a Bernoulli distribution with success probability $p$. Equivalently, if $X_i$ are $\{0,1\}$-random variables such that $\mathrm{Pr}[X_i=1] = 1-p$ for $i = 1, \ldots, N$, then $$\begin{aligned}
F(K,N,p) = \mathrm{Pr}[ X \geq N-K+1],\end{aligned}$$ where $X = \sum_{i=1}^{N} X_i$. The Chernoff bound of Theorem \[thm:chernoff\] yields $$\begin{aligned}
F(K,N,p) \leq \exp \left(-N D(1-p+t||1-p) \right)\end{aligned}$$ for $t = p - \frac{K-1}{N}$, provided that $N-K+1 \geq (1-p)N$, *i.e.* $p \geq (K-1)/N$ or $\eta \geq (K-1)/N$. Taking $K = k-1, N = n+k-1$ and $p=\eta$, and injecting into Eq. , gives $$\begin{aligned}
1- I_{\eta}(k,n) & \leq \exp \left(-(n+k-1) D\left(1-\frac{k-2}{n+k-1}||1- \eta \right) \right)\\
& \leq \exp \left(-(n+k-1) D\left(\frac{k-2}{n+k-1}||\eta \right) \right),\end{aligned}$$ which holds provided that $\eta \geq (k-2)/(n+k-1)$. This proves the claim.
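The identity of Eq. between the upper tail of the regularized Beta function and the binomial cumulative distribution, on which the proof relies, can be verified numerically; the sketch below compares a midpoint-rule quadrature of $1-I_\eta(k,n)$ against the exact sum $F(k-1,n+k-1,\eta)$:

```python
import math

def reg_beta_upper(k, n, eta, steps=100000):
    # 1 - I_eta(k, n): midpoint quadrature of t^(k-1) (1-t)^(n-1) on [eta, 1],
    # normalized by the exact Beta function B(k, n).
    B = math.factorial(k - 1) * math.factorial(n - 1) / math.factorial(n + k - 1)
    h = (1 - eta) / steps
    s = sum((eta + (i + 0.5) * h) ** (k - 1) * (1 - eta - (i + 0.5) * h) ** (n - 1)
            for i in range(steps))
    return s * h / B

def binom_cdf(K, N, p):
    # F(K, N, p): probability of at most K successes in N Bernoulli(p) trials.
    return sum(math.comb(N, i) * p**i * (1 - p) ** (N - i) for i in range(K + 1))
```

For example, $1-I_{0.3}(5,20)$ agrees with $F(4,24,0.3)$ to well within the quadrature error.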
Energy cutoff
-------------
The resolution of the identity of Theorem \[thm:resol\] involves operators which are not trace-class, as well as coherent states with arbitrarily large energy. The natural solution to get operators with finite norm is to replace the domain $\mathcal{D}$ by a cut-off version $\mathcal{D}_\eta$ defined by $$\begin{aligned}
\mathcal{D}_\eta := \left\{ \Lambda \in M_{2} ({\mathbb{C}}) \: : \: \eta {\mathbbm{1}}_2 - \Lambda \Lambda^\dagger \geq 0\right\},\end{aligned}$$ for $\eta \in [0,1[$. Note that $$\begin{aligned}
\lim_{\eta \to 1} \mathcal{D}_\eta = \mathcal{D}.\end{aligned}$$ The integration over $\mathcal{D}_\eta$ can then be performed by first integrating the measure $\d \mu_n(\Lambda)$ on the “polar variables”, and only later on the “radial” variables corresponding to the singular values of $\Lambda$.
For a fixed pair of squared singular values $(x,y)$, let $V_{x,y}$ be the set of matrices $\Lambda \in \mathcal{D}$ with squared singular values $(x,y)$, *i.e.*, $$\begin{aligned}
V_{x,y} := \left\{u \big[\begin{smallmatrix} \sqrt{x} & 0 \\ 0 & \sqrt{y}\end{smallmatrix}\big] v^\dagger \: : \: u, v \in U(2) \right\}.\end{aligned}$$ We further define the operator $P_{x,y}$ corresponding to the integral of $|\Lambda,n\rangle \langle \Lambda, n|$ over $V_{x,y}$: $$\begin{aligned}
P_{x,y} := \int_{V_{x,y}} |\Lambda,n\rangle\langle \Lambda,n| \d \mu_{x,y}(\Lambda) \geq 0 \label{eqn:Pxy}\end{aligned}$$ where $\mathrm{d}\mu_{x,y} (\Lambda)$ is the Haar measure on $V_{x,y}$ and the normalization is chosen so that ${\mathrm{tr}}\, P_{x,y} = 1$.
We have the following equivalent version of the resolution of the identity of Theorem \[thm:resol\].
\[thm:resol2\] For $n \geq 4$, it holds that: $$\begin{aligned}
\int_{0}^1 \int_0^1 q(x,y) P_{x,y} \d x \d y = \mathbbm{1}_{F_{2,2,n}^{U(n)}},\end{aligned}$$ where the distribution $q(x,y)$ is given by $$\begin{aligned}
q(x,y) := \frac{(n-1)(n-2)^2(n-3) (x-y)^2}{2(1-x)^4 (1-y)^4}. \label{eqn:Qxy}\end{aligned}$$
We wish to integrate $|\Lambda, n \rangle \langle \Lambda, n| \d\mu_n(\Lambda)$ over the “polar” variables. For this, we perform the singular value decomposition of $\Lambda$, which reads $\Lambda = u \Sigma v^\dagger$, where $u, v \in U(2)$ and $\Sigma = \mathrm{diag}(\sigma_1, \sigma_2)$, with $\sigma_1, \sigma_2 \in [0,1[$.
The Jacobian for the singular value decomposition is [@mui82]: $$\begin{aligned}
\mathrm{d}\Lambda = (\sigma_1^2-\sigma_2^2)^2 \sigma_1 \sigma_2 (u^\dagger \mathrm{d} u) \mathrm{d} \Sigma (v^\dagger \mathrm{d} v).\end{aligned}$$
Exploiting this Jacobian and performing the change of variables $x = \sigma_1^2$, $y=\sigma_2^2$, one obtains that the resolution of the identity of Theorem \[thm:resol\] can be written: $$\begin{aligned}
C \int_0^1 \d x \int_0^1 \d y \frac{(x-y)^2}{(1-x)^4(1-y)^4} P_{x,y} = \mathbbm{1}_{F_{2,2,n}^{U(n)}},\end{aligned}$$ for the appropriate constant $C$. Here, we have used that $\det(\mathbbm{1}_2 - \Lambda \Lambda^\dagger)^4 = (1-x)^4 (1-y)^4$ for any $\Lambda \in V_{x,y}$.
The constant $C$ can be determined by considering the overlap between ${\mathbbm{1}}_{F_{2,2,n}^{U(n)}}$ and the vacuum state: $$\begin{aligned}
1 & = \langle 0 | {\mathbbm{1}}_{F_{2,2,n}^{U(n)}}| 0\rangle\\
&= C \int_0^1 \d x \int_0^1 \d y \int \frac{(x-y)^2}{(1-x)^4(1-y)^4} \big \langle 0 \big|u \big[\begin{smallmatrix} \sqrt{x} & 0 \\ 0 & \sqrt{y}\end{smallmatrix}\big] v^\dagger, n\big\rangle \big\langle u \big[\begin{smallmatrix} \sqrt{x} & 0 \\ 0 & \sqrt{y}\end{smallmatrix}\big] v^\dagger, n\big|0 \big\rangle \d u \d v\\
&= C \int_0^1 \d x \int_0^1 \d y\frac{(x-y)^2}{(1-x)^4(1-y)^4}(1-x)^n (1-y)^n \int \d u \d v\\
&= C \int_0^1 \d x \int_0^1 \d y\frac{(x-y)^2}{(1-x)^4(1-y)^4}(1-x)^n (1-y)^n\\
&= C \frac{2}{(n-1)(n-2)^2(n-3)},
\end{aligned}$$ where we used that $ \big \langle 0 \big|u \big[\begin{smallmatrix} \sqrt{x} & 0 \\ 0 & \sqrt{y}\end{smallmatrix}\big] v^\dagger, n\big\rangle = (1-x)^{n/2} (1-y)^{n/2}$ for any $u, v\in U(2)$ and that the measures $\d u$ and $\d v$ are normalized.
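The last step amounts to the closed-form value of the double integral, obtained from Beta-function moments after expanding $(x-y)^2 = x^2 - 2xy + y^2$; a short sketch verifying it numerically:

```python
import math

def beta_moment(m, b):
    # Integral of x^m (1-x)^b over [0, 1] equals m! b! / (m+b+1)!.
    return math.factorial(m) * math.factorial(b) / math.factorial(m + b + 1)

def double_integral(n):
    # Integral of (x-y)^2 (1-x)^(n-4) (1-y)^(n-4) over the unit square;
    # it factorizes as 2 * (I2 * I0 - I1^2) with Im the m-th moment for b = n - 4.
    b = n - 4
    I0, I1, I2 = beta_moment(0, b), beta_moment(1, b), beta_moment(2, b)
    return 2 * (I2 * I0 - I1 * I1)
```

Here $I_m = \int_0^1 x^m (1-x)^{n-4}\,\mathrm{d}x$, and the cross term $-2xy$ produces the $-I_1^2$ contribution; the result matches $\frac{2}{(n-1)(n-2)^2(n-3)}$.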
Let $K \geq 0$ be an integer. We define $V_{=K}$ as the subspace of $F_{2,2,n}^{U(n)}$ spanned by vectors with $K$ pairs of excitations: $$\begin{aligned}
V_{=K} := \mathrm{Span}\{Z_{1,1}^i Z_{1,2}^j Z_{2,1}^k Z_{2,2}^\ell |0\rangle \: : \: i + j + k+\ell = K; i, j,k, \ell \in {\mathbb{N}}\},\end{aligned}$$ and the projector $\Pi_{=K}$ to be the orthogonal projector onto $V_{=K}$. Physically, this is the subspace of the Fock space restricted to states containing $2K$ photons in total in the $4n$ optical modes.
Moreover, let us denote by $a_k^n := \tbinom{n+k-1}{k}$ the number of configurations of $k$ particles in $n$ modes.
\[lem:Piq\] For $K \in {\mathbb{N}}$ and $x, y \in [0,1[$, we have $$\begin{aligned}
{\mathrm{tr}}\left[\Pi_{=K} P_{x,y} \right] = \sum_{k_1+k_2=K} a_{k_1}^n a_{k_2}^n (1-x)^n (1-y)^n x^{k_1} y^{k_2}.\end{aligned}$$
The total photon number distribution of a state $|\Lambda, n\rangle$ is invariant under local unitaries $u, v \in U(2)$ applied on the creation operators of $F_A$ or $F_B$. This means that this distribution only depends on the squared singular values of the matrix $\Lambda$. In particular, denoting by $|(\sqrt{x},\sqrt{y}),n\rangle$ the coherent state corresponding to the matrix $\mathrm{diag}(\sqrt{x}, \sqrt{y})$, we obtain: $$\begin{aligned}
{\mathrm{tr}}\left[\Pi_{=K} P_{x,y} \right] = \langle (\sqrt{x},\sqrt{y}),n | \Pi_{=K} | (\sqrt{x},\sqrt{y}),n\rangle.\end{aligned}$$ Since this coherent state is given by $$\begin{aligned}
| (\sqrt{x},\sqrt{y}),n\rangle := (1-x)^{n/2} (1-y)^{n/2} \exp(\sqrt{x} Z_{11}) \exp(\sqrt{y} Z_{22}),\end{aligned}$$ it implies that $$\begin{aligned}
{\mathrm{tr}}\left[\Pi_{=K} P_{x,y} \right] =\sum_{k_1+k_2=K} a_{k_1}^n a_{k_2}^n (1-x)^n (1-y)^n x^{k_1} y^{k_2}.\end{aligned}$$
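Since ${\mathrm{tr}}\, P_{x,y}=1$, the distribution of Lemma \[lem:Piq\] must sum to one over $K$; it is the convolution of two negative-binomial distributions, as the following sketch confirms numerically:

```python
import math

def a(k, n):
    # a_k^n = binom(n + k - 1, k): number of configurations of k photons in n modes.
    return math.comb(n + k - 1, k)

def prob_K(K, x, y, n):
    # tr[Pi_{=K} P_{x,y}]: probability of K pairs of excitations in |Lambda, n>.
    return (1 - x) ** n * (1 - y) ** n * sum(
        a(k1, n) * a(K - k1, n) * x**k1 * y ** (K - k1) for k1 in range(K + 1))
```

Truncating the sum over $K$ at a few hundred terms already reproduces the normalization to high accuracy for moderate $x, y$.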
Let us define the operator $\overline{P}_{\eta}$ as $$\begin{aligned}
\overline{P}_{\eta} := \int_{\mathcal{D} \setminus \mathcal{D}_\eta} |\Lambda,n\rangle \langle \Lambda,n| \d \mu_n(\Lambda).\end{aligned}$$
\[eqn:crucial-step\] For $n \geq 38$, $K \in {\mathbb{N}}$ and $\eta \in [0,1[$ such that $K \leq \frac{\eta}{1-\eta}(n-5) $, it holds that $$\begin{aligned}
{\mathrm{tr}}(\Pi_{=K} \overline{P}_\eta) \leq 2 N^4 (1+\alpha)^7 \exp\left(-N D\left(\frac{\alpha}{\alpha+1} \big\|\eta \right) \right),\end{aligned}$$ where $N = n-5$ and $\alpha := K/N$.
For any nonnegative distribution $f(x,y) \geq 0$ symmetric in $x$ and $y$, *i.e.* such that $f(x,y) = f(y,x)$, it holds that $$\begin{aligned}
\int_{\overline{\mathcal{E}}_\eta} f(x,y) \d x \d y &\leq 2\int_{\eta}^1 \d x \int_0^1 \d y f(x,y),\end{aligned}$$ since $\overline{\mathcal{E}}_\eta$ is contained in the union of $[\eta,1[ \times [0,1[$ and its mirror image $[0,1[ \times [\eta,1[$. Since $q(x,y) {\mathrm{tr}}\left[\Pi_{=K} P_{x,y} \right]$ is such a distribution, it holds that $$\begin{aligned}
{\mathrm{tr}}(\Pi_{=K} \overline{P}_\eta)\leq 2\int_{\eta}^1 \d x \int_0^1 \d y q(x,y) \, {\mathrm{tr}}\left[\Pi_{=K} P_{x,y} \right].\end{aligned}$$ Lemma \[lem:Piq\] then yields $$\begin{aligned}
{\mathrm{tr}}(\Pi_{=K} \overline{P}_\eta) &\leq 2 \sum_{k_1+k_2=K} a_{k_1}^n a_{k_2}^n \int_{\eta}^1 \d x (1-x)^n x^{k_1} \int_0^1 \d y q(x,y) (1-y)^n y^{k_2}\\
&\leq (n-1)(n-2)^2(n-3) \sum_{k_1+k_2=K} a_{k_1}^n a_{k_2}^n \int_{\eta}^1 \d x (1-x)^{n-4} x^{k_1} \int_0^1 \d y (x-y)^2 (1-y)^{n-4} y^{k_2}\\
&\leq (n-1)(n-2)^2(n-3) \sum_{k_1+k_2=K} a_{k_1}^n a_{k_2}^n \int_{\eta}^1 \d x (1-x)^{n-4} x^{k_1} \int_0^1 \d y (1-y)^{n-4} y^{k_2}\end{aligned}$$ where we used the trivial bound $ (x-y)^2 \leq 1$ for $0 \leq x,y \leq 1$ in the last inequality.
The normalization of the Beta function reads $$\begin{aligned}
\int_0^1 (1-y)^{n} y^k \d y =\frac{k! n!}{(n+k+1)!},\end{aligned}$$ which gives $$\begin{aligned}
\label{eqn:interm}
{\mathrm{tr}}(\Pi_{=K} \overline{P}_\eta) &\leq (n-2) \sum_{k_1+k_2=K} (n+k_2-1)(n+k_2-2) \int_{\eta}^1 a_{k_1}^n (1-x)^{n-4} x^{k_1} \d x\end{aligned}$$ Lemma \[lem:tail-bound\] allows us to bound the integral: $$\begin{aligned}
\int_{\eta}^1 a_{k_1}^{n-4} (1-x)^{n-4} x^{k_1} \d x \leq\exp\left(-(n+k_1-5) D\left(\frac{n-3}{n+k_1-5} \big\| 1-\eta \right) \right),\end{aligned}$$ provided that $1-\eta \leq (n-3)/(n+k_1-5)$.
If $\frac{n-5}{n+K-5} \geq 1-\eta$, this term can be bounded uniformly as $$\begin{aligned}
\int_{\eta}^1 a_{k_1}^{n-4} (1-x)^{n-4} x^{k_1} \d x \leq\exp\left(-N D\left(\frac{N}{N+K} \big\| 1-\eta \right) \right),\end{aligned}$$ where we defined $N := n-5$. Injecting this in Eq. , we obtain $$\begin{aligned}
{\mathrm{tr}}(\Pi_{=K} \overline{P}_\eta) &\leq (N+3) \sum_{k_1+k_2=K} (N+k_2+4)(N+k_2+3) \frac{(N+k_1+4)! N!}{(N+k_1)! (N+4)!} \exp\left(-N D\left(\frac{N}{N+K} \big\| 1-\eta\right) \right) \nonumber\\
&\leq \frac{(K+1) (N+K+4)^6}{(N+1)^3} \exp\left(-N D\left(\frac{N}{N+K} \big\| 1-\eta \right) \right) \label{eqn:bound33}\end{aligned}$$ Imposing in addition that $N\geq 4$, *i.e.* $n\geq 9$, so that $N+K+4 \leq 2(N+K)$, one finally obtains the bound: $$\begin{aligned}
{\mathrm{tr}}(\Pi_{=K} \overline{P}_\eta) \leq 64 \frac{(N+K)^7}{N^3} \exp\left(-N D\left(\frac{N}{N+K} \big\| 1-\eta \right) \right).\end{aligned}$$ One can get a better bound by choosing $N \geq 33$, *i.e.* $n\geq 38$: in that case, one can check that for any $K \geq 0$, it holds that $$\begin{aligned}
\left( 1 + \frac{4}{N+K}\right)^6 \leq 2,\end{aligned}$$ which gives $(N+K+4)^6 \leq 2(N+K)^6$. Injecting this into Eq. yields $$\begin{aligned}
{\mathrm{tr}}(\Pi_{=K} \overline{P}_\eta) \leq 2 \frac{(N+K)^7}{N^3} \exp\left(-N D\left(\frac{N}{N+K} \big\| 1-\eta \right) \right).\end{aligned}$$
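Since $4/(N+K)$ is maximized at $K=0$, the threshold $N \geq 33$ (i.e. $n \geq 38$) in the claim above can be verified with a short numerical check. The snippet below is purely illustrative and not part of the argument:

```python
# Check that (1 + 4/(N+K))^6 <= 2 for all K >= 0 once N >= 33.
# The left-hand side is largest at K = 0, so checking K = 0 suffices.
def worst_case(N: int) -> float:
    return (1 + 4 / N) ** 6  # K = 0 maximizes 4/(N+K)

# N = 33 is the smallest integer for which the bound holds.
assert worst_case(33) <= 2.0
assert worst_case(32) > 2.0
```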
\[lem:proj\] For any nonnegative operator $A \geq 0$ and projector $\Pi$ with $\mathrm{rank}(\Pi) < \infty$, it holds that: $$\Pi A \Pi \leq {\mathrm{tr}}[\Pi A ] \Pi.$$
The support of $\Pi A \Pi$ is contained in that of ${\mathrm{tr}}[\Pi A ] \Pi$. Since both operators are positive semi-definite, the only thing we need to prove is that for any $\lambda \in \mathrm{spec}(\Pi A \Pi)$, it holds that $$\begin{aligned}
\lambda \leq {\mathrm{tr}}[\Pi A]\end{aligned}$$ since all the nonzero eigenvalues of ${\mathrm{tr}}[\Pi A ] \Pi$ are equal to ${\mathrm{tr}}[\Pi A ]$. The sum of the eigenvalues of an operator is equal to its trace, which gives $$\begin{aligned}
\sum_{\lambda \in \mathrm{spec}(\Pi A \Pi)} \lambda = {\mathrm{tr}}(\Pi A \Pi).\end{aligned}$$ Moreover, since all these eigenvalues are nonnegative, we have that $\lambda_{\max} (\Pi A \Pi) \leq \sum_{\lambda \in \mathrm{spec}(\Pi A \Pi)} \lambda$, which concludes the proof.
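As a purely numerical illustration of Lemma \[lem:proj\] (not part of the argument), one can test the operator inequality on a random positive semi-definite matrix and a random rank-$r$ projector:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 3  # ambient dimension and rank of the projector
# Random positive semi-definite operator A = B B^T.
B = rng.standard_normal((d, d))
A = B @ B.T
# Rank-r orthogonal projector onto a random subspace.
Q, _ = np.linalg.qr(rng.standard_normal((d, r)))
Pi = Q @ Q.T

lhs = Pi @ A @ Pi
rhs = np.trace(Pi @ A) * Pi
# The lemma says rhs - lhs >= 0: all its eigenvalues should be
# nonnegative up to numerical noise.
eigs = np.linalg.eigvalsh(rhs - lhs)
assert eigs.min() > -1e-10
```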
Finite energy version of de Finetti theorem {#sec:finite}
===========================================
In this section, we establish a *de Finetti reduction*, similar to the one obtained in [@CKR09] in the case of permutation invariance. Such a reduction uses as a main tool a statement analogous to the resolution of the identity $$\begin{aligned}
\mathbbm{1}_{\mathrm{Sym}} \leq C_{n,d} \int \left(|\phi\rangle \langle \phi|\right)^{\otimes n} \d \mu(\phi)\end{aligned}$$ where $C_{n,d}$ is a polynomial in $n$ provided the local dimension $d$ is finite.
In the case of continuous-variable protocols, the local dimension is infinite and we need to find a better reduction. This is indeed possible provided we have bounds on the maximum energy (or total number of photons) of the states under consideration.
For $\eta \in [0,1[$, define the sets $\mathcal{E}_\eta = [0, \eta] \times [0,\eta]$ and $\overline{\mathcal{E}}_\eta = [0,1[^2 \setminus \mathcal{E}_\eta$.
We introduce the following positive operators $$\begin{aligned}
P_\eta &:= \int_{\mathcal{E}_\eta} q(x,y) P_{x,y} \d x \d y = \int_{\mathcal{D}_\eta} |\Lambda,n\rangle \langle \Lambda,n| \d \mu_n(\Lambda), \label{eqn:Peta} \\
\overline{P}_{\eta} &:= \int_{\overline{\mathcal{E}}_\eta} q(x,y) P_{x,y} \d x \d y = \int_{\mathcal{D} \setminus \mathcal{D}_\eta} |\Lambda,n\rangle \langle \Lambda,n| \d \mu_n(\Lambda),\end{aligned}$$ where the equalities follow from the fact that one can integrate over $\mathcal{D}_\eta$ by first integrating over $V_{x,y}$ and then over $\mathcal{E}_\eta$. We recall that the operator $P_{x,y}$ is defined in Eq. and that the distribution $q(x,y)$ is defined in Eq. . The resolution of the identity over $F_{2,2,n}^{U(n)}$ (Theorem \[thm:resol\]) immediately implies that $$\begin{aligned}
P_\eta + \overline{P}_\eta = {\mathbbm{1}}_{F_{2,2,n}^{U(n)}}.\end{aligned}$$
Let $K \geq 0$ be an integer. We recall that $V_{=K}$ is the subspace of $F_{2,2,n}^{U(n)}$ spanned by vectors with $K$ pairs of excitations: $$\begin{aligned}
V_{=K} := \mathrm{Span}\{Z_{1,1}^i Z_{1,2}^j Z_{2,1}^k Z_{2,2}^\ell |0\rangle \: : \: i + j + k+\ell = K; i, j,k, \ell \in {\mathbb{N}}\}.\end{aligned}$$ The subspace $V_{\leq K}$ is defined as $V_{\leq K} := \bigoplus_{k=0}^K V_{=k}$. The projector $\Pi_{=K}$ is the orthogonal projector onto $V_{=K}$ and the projector $\Pi_{\leq K}$ is defined as $$\begin{aligned}
\Pi_{\leq K} : = \sum_{k=0}^K \Pi_{=k}.\end{aligned}$$
\[thm:finite-version\] For $n\geq 5$ and $\eta \in [0,1[$, if $K \leq\frac{\eta}{1-\eta} (n-5) $, then the following operator inequality holds $$\begin{aligned}
\int_{\mathcal{D}_\eta} |\Lambda,n\rangle \langle \Lambda,n | \d \mu_n(\Lambda)\geq (1-{\varepsilon}) \Pi_{\leq K}\end{aligned}$$ with $$\begin{aligned}
{\varepsilon}:= 2 N^4 (1+\alpha)^7 \exp\left(-N D\left(\frac{\alpha}{\alpha+1} \big\| \eta \right) \right).\end{aligned}$$ for $\alpha = K/N$ and $N=n-5$.
In particular, choosing $K$ such that $\alpha = \frac{1+\eta}{1-\eta} = \frac{K}{N}$ and using Pinsker’s inequality (Lemma \[lem:pinsker\]) yields $$\begin{aligned}
{\varepsilon}\leq \frac{2 (N+K)^7}{N^3}\exp\left(- \frac{2N^3}{(N+K)^2 \ln 2} \right).\end{aligned}$$
The resolution of the identity reads $$\begin{aligned}
\int_{\overline{\mathcal{E}}_\eta} P_{x,y}q(x,y) \d x \d y+\int_{{\mathcal{E}}_\eta} P_{x,y}q(x,y) \d x \d y = {\mathbbm{1}}_{F_{2,2,n}^{U(n)}} = \sum_{k=0}^\infty\Pi_{=k} .\end{aligned}$$ For all $k \leq K$, the projector $\Pi_{=k}$ can be written as: $$\begin{aligned}
\Pi_{=k} = \int_{\overline{\mathcal{E}}_\eta} \Pi_{=k}P_{x,y}\Pi_{=k} q(x,y) \d x \d y+\int_{{\mathcal{E}}_\eta} \Pi_{=k} P_{x,y} \Pi_{=k} q(x,y) \d x \d y.\end{aligned}$$ In particular, since $k \leq K \leq \frac{\eta}{1-\eta} (n-5)$, we have $$\begin{aligned}
\int_{{\mathcal{E}}_\eta} P_{x,y}q(x,y) \d x \d y &\geq \int_{{\mathcal{E}}_\eta} \Pi_{=k}P_{x,y} \Pi_{=k} q(x,y) \d x \d y \nonumber\\
&\geq \Pi_{=k} - \int_{\overline{\mathcal{E}}_\eta} \Pi_{=k}P_{x,y} \Pi_{=k} q(x,y) \d x \d y \nonumber \\
&\geq \Pi_{=k} - \int_{\overline{\mathcal{E}}_\eta} {\mathrm{tr}}[ \Pi_{=k}P_{x,y}] \Pi_{=k} q(x,y) \d x \d y \label{eqn640}\\
&\geq (1-{\varepsilon})\Pi_{=k} \label{eqn641}\end{aligned}$$ where we used Lemma \[lem:proj\] in Eq. and the upper bound resulting from Lemma \[eqn:crucial-step\]: $$\begin{aligned}
\int_{\overline{\mathcal{E}}_\eta} {\mathrm{tr}}\left[ \Pi_{=k}P_{x,y}\right] q(x,y) \d x \d y \leq {\varepsilon}\end{aligned}$$ in Eq. . It follows that: $$\begin{aligned}
\int_{{\mathcal{E}}_\eta} P_{x,y}q(x,y) \d x \d y \geq (1-{\varepsilon}) \Pi_{=k} \end{aligned}$$ for all $k \leq K$. This finally implies that $$\begin{aligned}
\int_{{\mathcal{E}}_\eta} P_{x,y}q(x,y) \d x \d y \geq(1-{\varepsilon}) \sum_{k \leq K} \Pi_{=k} = (1-{\varepsilon}) \Pi_{\leq K}.\end{aligned}$$
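To get a sense of how fast the Pinsker-form bound on ${\varepsilon}$ decays, it can be evaluated in the log domain (direct evaluation underflows in floating point). The parameter values below are illustrative assumptions, not taken from the text:

```python
import math

def log10_eps(N: int, K: int) -> float:
    """log10 of the bound  2 (N+K)^7 / N^3 * exp(-2 N^3 / ((N+K)^2 ln 2))."""
    ln_eps = (math.log(2) + 7 * math.log(N + K) - 3 * math.log(N)
              - 2 * N ** 3 / ((N + K) ** 2 * math.log(2)))
    return ln_eps / math.log(10)

# Example: N = 10^5 with K = 3N; the bound is astronomically small.
assert log10_eps(10 ** 5, 3 * 10 ** 5) < -1000
# The bound improves rapidly with N at fixed ratio K/N.
assert log10_eps(10 ** 5, 3 * 10 ** 5) < log10_eps(10 ** 4, 3 * 10 ** 4)
```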
The crucial property of Theorem \[thm:finite-version\] that will be important for applications is that the volume of $\mathcal{D}_\eta$ is finite, and scales as a low-degree polynomial in $n$ and $K$.
\[thm:volume\] For $n \geq 38$, $K \geq n-5$ and $\eta = \frac{K-n+5}{K+n-5}$, it holds that $$\begin{aligned}
T(n,\eta) := {\mathrm{tr}}\int_{\mathcal{D}_\eta} |\Lambda,n\rangle\langle \Lambda,n| \d \mu_n(\Lambda) \leq \frac{K^4}{100}.\end{aligned}$$
The volume of $\mathcal{D}_\eta$ is given by $$\begin{aligned}
{\mathrm{tr}}\int_{\mathcal{D}_\eta} |\Lambda,n\rangle\langle \Lambda,n| \d \mu_n(\Lambda) &= \int_0^{\eta} \int_0^{\eta} q(x,y)\d x \d y \\
&= \frac{(n-1)(n-2)^2(n-3)\eta^4}{12(1-\eta)^4} \\
&\leq \frac{n^4 \eta^4}{12 (1-\eta)^4}\\
& \leq \frac{n^4(1+\eta)^4}{192(1-\eta)^4}\end{aligned}$$ where we used that $\eta \leq (1+\eta)/2$ in the last equation.
In particular, choosing $K$ such that $\eta = \frac{K-n+5}{K+n-5}$ gives $\frac{1+\eta}{1-\eta} = \frac{K}{n-5}$, and therefore $$\begin{aligned}
{\mathrm{tr}}\int_{\mathcal{D}_\eta} |\Lambda,n\rangle\langle \Lambda,n| \d \mu_n(\Lambda) \leq \frac{n^4 K^4}{192(n-5)^4}.\end{aligned}$$
For $n \geq 38$, it holds that $\frac{1}{192} \left(\frac{n}{n-5}\right)^4 \leq \frac{1}{100}$, which finally gives $$\begin{aligned}
{\mathrm{tr}}\int_{\mathcal{D}_\eta} |\Lambda,n\rangle\langle \Lambda,n| \d \mu_n(\Lambda) \leq \frac{K^4}{100}.\end{aligned}$$
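The bound just proved can be checked numerically on a small grid of admissible parameters. This is an illustrative sanity check only:

```python
def T(n: int, eta: float) -> float:
    """Exact volume T(n, eta) = (n-1)(n-2)^2(n-3) eta^4 / (12 (1-eta)^4)."""
    return (n - 1) * (n - 2) ** 2 * (n - 3) * eta ** 4 / (12 * (1 - eta) ** 4)

# Check T(n, eta) <= K^4 / 100 with eta = (K - n + 5)/(K + n - 5)
# over a grid satisfying the hypotheses n >= 38 and K >= n - 5.
for n in range(38, 200, 7):
    for K in (n - 5, 2 * n, 10 * n):
        eta = (K - n + 5) / (K + n - 5)
        assert T(n, eta) <= K ** 4 / 100
```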
For future reference, let us note that under the same assumptions as in the theorem, the following inequality also holds: $$\begin{aligned}
T(n,\eta) +1 \leq \frac{K^4}{100}. \label{eqn:tighter}\end{aligned}$$ This inequality will later be useful to analyze Eq. at the end of Section \[sec:general-proof\] and obtain Eq. (5) in the main text.
In other words, the volume $T(n,\eta)$ of ${\mathcal{D}}_\eta$ is upper bounded by a polynomial of degree 4 in the number of modes (or equivalently in the total energy).
Let us define the function $N^* : [1,\infty[ \to {\mathbb{N}}$ such that $$\begin{aligned}
N^*(\alpha) = \max\left\{ 38, \min \left\{N \in {\mathbb{N}}\: : \: 2(1+\alpha)^7 N^4 \exp\left(- \frac{2N}{(1+\alpha)^2 \ln 2} \right) \leq \frac{1}{2} \right\}\right\}. \label{eqn:N*}\end{aligned}$$ For instance, $N^*(21) \approx 10^4$, $N^*(60) \approx 10^5$.
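The defining condition of $N^*$ is straightforward to evaluate numerically, again working in the log domain. The sketch below (with a hypothetical helper `N_star`) reproduces the orders of magnitude quoted above:

```python
import math

def N_star(alpha: float) -> int:
    """Smallest N >= 38 such that
    2 (1+alpha)^7 N^4 exp(-2N / ((1+alpha)^2 ln 2)) <= 1/2,
    with the condition evaluated in logs to avoid over/underflow."""
    c = 2 / ((1 + alpha) ** 2 * math.log(2))
    N = 38
    while (math.log(2) + 7 * math.log(1 + alpha) + 4 * math.log(N)
           - c * N) > math.log(0.5):
        N += 1
    return N

# Orders of magnitude quoted in the text: N*(21) ~ 1e4, N*(60) ~ 1e5.
assert 9_000 < N_star(21) < 12_000
assert 90_000 < N_star(60) < 120_000
```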
We obtain:
\[corol16\] For $K \geq n-5$, if $n \geq N^*\big( \frac{K}{n-5} \big)-5$ then, for $\eta^* = \frac{K-n+5}{K+n-5}$, it holds that $$\begin{aligned}
&\int_{\mathcal{D}_{\eta^*}} |\Lambda,n\rangle \langle \Lambda,n | \d \mu_n(\Lambda)\geq \frac{1}{2} \Pi_{\leq K} \\
&{\mathrm{tr}}\int_{\mathcal{D}_{\eta^*}} |\Lambda,n\rangle\langle \Lambda,n| \d \mu_n(\Lambda) \leq \frac{K^4}{100}.\end{aligned}$$
Security proof for a modified CV QKD protocol {#sec:general-proof}
=============================================
In this section, we recall some facts about security proofs for QKD protocols and explain how to obtain a secure protocol from an initial protocol ${\mathcal{E}}_0$ known to be secure against Gaussian collective attacks, by prepending an energy test and adding an additional privacy amplification test. These various steps will then be detailed in the subsequent sections.
[**QKD protocols and their security**]{}.— A QKD protocol is a CP map from the infinite-dimensional Hilbert space $(\mathcal{H}_A\otimes \mathcal{H}_B)^{\otimes n}$, corresponding to the initially distributed entanglement, to the set of pairs $(S_A,S_B)$ of $\ell$-bit strings (Alice and Bob’s final keys, respectively) and $C$, a transcript of the classical communication. In order to assess the security of a given QKD protocol $\mathcal{E}$ in a composable framework, one compares it with an ideal protocol [@MKR09; @PR14]. The action of an ideal protocol $\mathcal{F}$ is defined by concatenating the protocol $\mathcal{E}$ with a map $\mathcal{S}$ taking $(S_A,S_B,C)$ as input and outputting the triplet $(S,S,C)$ where the string $S$ is a perfect secret key (uniformly distributed and unknown to Eve) with the same length as $S_A$, that is $\mathcal{F} = \mathcal{S}\circ\mathcal{E}$. Then, a protocol will be called *$\epsilon$-secure* if the advantage in distinguishing it from an ideal version is not larger than $\epsilon$. This advantage is quantified by (one half of) the diamond norm defined by $$||\mathcal{E} - \mathcal{F}||_\diamond := \sup_{\rho_{ABE} } \left\|(\mathcal{E}-\mathcal{F})\otimes \mathrm{id}_\mathcal{K} (\rho_{ABE})\right\|_1,$$ where the supremum is taken over density operators on $(\mathcal{H}_A\otimes \mathcal{H}_B)^{\otimes n} \otimes \mathcal{K}$ for any auxiliary system $\mathcal{K}$. The diamond norm is also known as the *completely bounded trace norm* and quantifies a notion of distinguishability for quantum maps [@wat16].
Our main technical result is a reduction of the security against general attacks to that against Gaussian collective attacks, for which security has already been proved in earlier work, for instance in [@Lev15]. Let us therefore suppose that our CV QKD protocol of interest, $\mathcal{E}_0$, is secure against Gaussian collective attacks. We will slightly modify it by prepending an initial test $\mathcal{T}$. More precisely, $\mathcal{T}$ is a CP map taking a state in a slightly larger Hilbert space, $(\mathcal{H}_A\otimes \mathcal{H}_B)^{\otimes (n+k)}$, applying a random unitary $u \in U(n+k)$ to it (corresponding to a network of beamsplitters and phaseshifters), measuring the last $k$ modes and comparing the measurement outcome to a threshold fixed in advance. The test succeeds if the measurement outcome (related to the energy) is small, meaning that the global state is compatible with a state containing only a low number of photons per mode. Such a state is well-described in a low dimensional Hilbert space, as we will discuss in Section \[sec:test\]. Depending on the outcome of the test, either the protocol aborts, or one applies the original protocol $\mathcal{E}_0$ on the $n$ remaining modes.
For the test to be practical, it is important that the legitimate parties do not have to physically implement the transformation $u \in U(n+k)$. Rather, they can both measure their $n+k$ modes with heterodyne detection, perform a random rotation of their respective classical vector in ${\mathbb{R}}^{2(n+k)}$ according to $u \in U(n+k) \cong O(2(n+k)) \cap Sp(2(n+k))$.
In this paper, we assume that this symmetrization step is performed, as it is anyway required for the security proof of the protocol against collective attacks [@Lev15]. We believe, however, that this step might not be required for establishing the security of the protocol and leave it as an important open question for future work. In particular, recent proof techniques in discrete-variable QKD have shown that the permutation need not be applied in practice [@TLG12].
In section \[sec:generalization\], we will prove a de Finetti reduction that allows to upper bound the diamond distance between two quantum channels, provided that they display the right invariance under the action of the unitary group $U(n)$ and that the input states have a maximum number of photons. We address this second issue by introducing another CP map $\mathcal{P}$ which projects a state acting on $F_{1,1,n} = (\mathcal{H}_A\otimes \mathcal{H}_B)^{\otimes n}$ onto a low-dimensional Hilbert space $F_{1,1,n}^{\leq K} $ with less than $K$ photons overall in the $2n$ modes shared by Alice and Bob. Here, the value of $K$ scales linearly with $n$.
Let us denote by ${\mathcal{E}}_0$ a CV QKD proven ${{\varepsilon}}$-secure against Gaussian collective attacks, for instance as in [@Lev15]. This means that (see Section \[sec:collective\] for details) $$\begin{aligned}
\|(({\mathcal{E}}_0 - {\mathcal{F}}_0)\otimes {\mathbbm{1}}) (|\Lambda,n\rangle \langle \Lambda, n|)\|_1 \leq {{\varepsilon}},\end{aligned}$$ for any generalized coherent state $|\Lambda,n\rangle$. Here ${\mathcal{F}}_0 := {\mathcal{S}}\circ {\mathcal{E}}_0$ and ${\mathcal{S}}$ is a map that replaces the output key of ${\mathcal{E}}_0$ by an independent and uniformly distributed string of length $\ell$ when ${\mathcal{E}}_0$ did not abort, and does nothing otherwise.
Here ${\mathcal{E}}_0$ maps an arbitrary density operator $\rho_{AB} \in \mathfrak{S}(F_{1,1,n})$ to a state $\rho_{S_A, S_B, C}$ where the registers are all classical and store respectively Alice’s final key, Bob’s final key and a transcript of the classical communication.
Let us define the following maps: $$\begin{aligned}
{\mathcal{T}}&: {\mathcal{B}}(F_{1,1,n+k}) \to {\mathcal{B}}(F_{1,1,n}) \otimes \{\mathrm{passes} / \mathrm{aborts}\},\\
{\mathcal{P}}&: {\mathcal{B}}(F_{1,1,n}) \to {\mathcal{B}}(F_{1,1,n}^{\leq K}),\\
{\mathcal{R}}&: \{0,1\}^{\ell} \times \{0,1\}^{\ell} \to \{0,1\}^{\ell'} \times \{0,1\}^{\ell'},\end{aligned}$$ where
- ${\mathcal{T}}(k, d_A, d_B)$ takes as input an arbitrary state $\rho_{AB}$ on $F_{1,1,n+k}$, maps it to $V_u \rho_{AB} V_u^{\dagger}$ where the unitary $u$ is chosen from the Haar measure on $U(n+k)$, measures the last $k$ modes of $A$ and $B$ with heterodyne detection, and checks whether the measurement outcomes pass the test, which happens if the $k$ outcomes $\alpha_1, \cdots, \alpha_k$ of Alice and $\beta_1, \cdots, \beta_k$ of Bob satisfy $$\begin{aligned}
\sum_{i=1}^k |\alpha_i|^2 \leq k d_A \quad \text{and} \quad \sum_{i=1}^k |\beta_i|^2 \leq k d_B.\end{aligned}$$ If they pass the test, the map returns the state on the first $n$ modes (that were not measured) as well as the flag “passes”. Otherwise, it returns the vacuum state and the flag “aborts”.
- ${\mathcal{P}}$ is the projector onto the finite-dimensional subspace $F_{1,1,n}^{\leq K}$ (corresponding to states with at most $K$ photons in the $2n$ modes): it maps any state $\rho \in {\mathcal{B}}(F_{1,1,n})$ to $\Pi_{\leq K} \rho \Pi_{\leq K} \in {\mathcal{B}}(F_{1,1,n}^{\leq K})$. This trace non-increasing map is introduced as a technical tool for the security analysis but need not be implemented in practice. It simply ensures that the states that are fed to the original QKD protocol ${\mathcal{E}}_0$ live in a finite-dimensional subspace. In the text, we will alternatively denote this projection by ${\mathcal{P}}^{\leq K}$ or ${\mathcal{P}}(n,K)$, depending on which parameters we wish to make explicit.
- ${\mathcal{R}}$ takes two $\ell$-bit strings as input and returns $\ell'$-bit strings (for $\ell' < \ell$).
We finally define our CV QKD protocol ${\mathcal{E}}$ as $$\begin{aligned}
{\mathcal{E}}= {\mathcal{R}}\circ {\mathcal{E}}_0 \circ {\mathcal{T}}\end{aligned}$$ and the ideal protocol as ${\mathcal{F}}= {\mathcal{S}}\circ {\mathcal{E}}$. Abusing notation slightly, the map ${\mathcal{S}}$ now acts on strings of length $\ell'$ instead of $\ell$.
\[lem:sec-red\] Let $\overline{{\mathcal{E}}}$ be the protocol ${\mathcal{R}}\circ {\mathcal{E}}_0$ where the inputs are restricted to the finite-dimensional subspace ${\mathcal{B}}(F_{1,1,n}^{\leq K})$, and $\overline{{\mathcal{F}}} = {\mathcal{S}}\circ \overline{{\mathcal{E}}}$. Then the security of $\overline{{\mathcal{E}}}$ implies the security of ${\mathcal{E}}$: $$\begin{aligned}
\label{eqn:sec-red}
||\mathcal{E} - \mathcal{F}||_\diamond &\leq ||\overline{\mathcal{E}} - \overline{\mathcal{F}}||_\diamond + 2 || ({\mathbbm{1}}- \mathcal{P}) \circ \mathcal{T}||_\diamond,\end{aligned}$$ provided that the quantity $|| ({\mathbbm{1}}- \mathcal{P}) \circ \mathcal{T}||_\diamond$ can be made arbitrarily small.
We define (virtual) protocols $\tilde{\mathcal{E}}:= {\mathcal{R}}\circ \mathcal{E}_0 \circ \mathcal{P} $ and $\tilde{\mathcal{F}}:= \mathcal{S} \circ \tilde{\mathcal{E}}$. The security of the protocol $\mathcal{E}$ is then a consequence of the following derivation: $$\begin{aligned}
||\mathcal{E} - \mathcal{F}||_\diamond &\leq ||\tilde{\mathcal{E}}\circ {\mathcal{T}}- \tilde{\mathcal{F}} \circ {\mathcal{T}}||_\diamond + ||\mathcal{E} - \tilde{\mathcal{E}} \circ {\mathcal{T}}||_\diamond+ ||\mathcal{F} - \tilde{\mathcal{F}} \circ {\mathcal{T}}||_\diamond \nonumber \\
&\leq ||(\tilde{\mathcal{E}} - \tilde{\mathcal{F}}) \circ {\mathcal{T}}||_\diamond + ||{\mathcal{R}}\circ \mathcal{E}_0 \circ (\mathrm{id}- \mathcal{P}) \circ \mathcal{T}||_\diamond +||{\mathcal{S}}\circ {\mathcal{R}}\circ \mathcal{E}_0 \circ (\mathrm{id}- \mathcal{P}) \circ \mathcal{T}||_\diamond \nonumber\\
&\leq ||\tilde{\mathcal{E}} - \tilde{\mathcal{F}}||_\diamond + 2 || ({\mathbbm{1}}- \mathcal{P}) \circ \mathcal{T}||_\diamond ,\end{aligned}$$ where we used the triangle inequality and the fact that the CP maps ${\mathcal{T}}$, ${\mathcal{R}}\circ \mathcal{E}_0$ and $\mathcal{S}$ cannot increase the diamond norm.
Since $\overline{E} \circ {\mathcal{P}}= \tilde{E}$ and ${\mathcal{P}}$ is trace non-increasing, we finally obtain that $$\begin{aligned}
||\mathcal{E} - \mathcal{F}||_\diamond &\leq ||\overline{\mathcal{E}} - \overline{\mathcal{F}}||_\diamond + 2 || ({\mathbbm{1}}- \mathcal{P}) \circ \mathcal{T}||_\diamond.\end{aligned}$$
Bounding the two terms in the right hand side of Eq. is done with the two following theorems, which will be proven in Sections \[sec:collective\] and \[sec:test\], respectively.
\[thm:diamond-protocol\] With the previous notations, if ${\mathcal{E}}_0$ is ${\varepsilon}$-secure against Gaussian collective attacks, then $$\begin{aligned}
||\overline{\mathcal{E}} - \overline{\mathcal{F}}||_\diamond \leq 2 T(n,\eta) {\varepsilon}\end{aligned}$$ where $T(n,\eta) =(n-1)(n-2)^2(n-3) \frac{\eta^4}{12(1-\eta)^4}$ and $\overline{{\mathcal{E}}} = {\mathcal{R}}\circ {\mathcal{E}}_0 \circ {\mathcal{P}}^{\leq K}$.
\[thm:test\] For integers $n,k \geq 1$, and $d_A, d_B >0$, define $K = n(d'_A + d'_B)$ for $d'_{A/B} = d_{A/B} g(n,k,{\varepsilon}/4)$ for the function $g$ defined in Eq. . Then $$\begin{aligned}
\big\| \big({\mathbbm{1}}- {\mathcal{P}}(n,K)\big) \circ {\mathcal{T}}(k, d_A, d_B)\big\|_{\diamond} \leq {\varepsilon}.\end{aligned}$$
Putting everything together yields our main result.
\[thm:main\] If the protocol ${\mathcal{E}}_0$ is ${\varepsilon}$-secure against Gaussian collective attacks, then the protocol ${\mathcal{E}}= {\mathcal{R}}\circ {\mathcal{E}}_0 \circ {\mathcal{T}}$ is ${\varepsilon}'$-secure against general attacks with $$\begin{aligned}
{\varepsilon}' \leq 2 T(n,\eta) {\varepsilon}+ {\varepsilon}.\end{aligned}$$
Putting everything together, we show that if ${\mathcal{E}}_0$ is covariant under the action of the unitary group and ${\varepsilon}$-secure against Gaussian collective attacks, then the protocol ${\mathcal{E}}= {\mathcal{R}}\circ {\mathcal{E}}_0 \circ {\mathcal{T}}$ is ${\varepsilon}'$-secure against general attacks, with $$\begin{aligned}
{\varepsilon}' = 2{\varepsilon}( T(n, \eta)+1) \label{eqn:final-result}\end{aligned}$$ for $T(n,\eta) \leq \frac{1}{12} \left(\frac{\eta n}{1-\eta}\right)^4$, $\eta = \frac{K-n+5}{K+n-5}$ and $K = n(d_A + d_B)\left(1 + 2 \sqrt{\frac{\ln (8/{\varepsilon})}{2n}} + \frac{\ln (8/{\varepsilon})}{n}\right)\left(1-2{\sqrt{\frac{\ln (8/{\varepsilon})}{2k}}}\right)^{-1}$. The first term in Eq. results from the de Finetti reduction and the second term results for the energy test failure probability.
In particular, for $n \geq 38$ and $K \geq n-5$, we obtain the bound $T(n,\eta) +1 \leq \frac{K^4}{100}$ from Eq. . This yields ${\varepsilon}' \leq \frac{K^4}{50} {\varepsilon}$, which corresponds to Eq. (5) in the main text.
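The composition of the above bounds can be illustrated numerically. The sketch below (with a hypothetical helper `eps_general` and purely illustrative parameter values) combines the explicit expressions for $K$, $\eta$ and $T(n,\eta)$ given above:

```python
import math

def eps_general(n: int, k: int, d_A: float, d_B: float, eps: float) -> float:
    """Illustrative evaluation of eps' = 2 eps (T(n, eta) + 1), using the
    explicit K, eta and T(n, eta) from the text."""
    t = math.log(8 / eps)
    K = (n * (d_A + d_B)
         * (1 + 2 * math.sqrt(t / (2 * n)) + t / n)
         / (1 - 2 * math.sqrt(t / (2 * k))))
    eta = (K - n + 5) / (K + n - 5)
    T = (n - 1) * (n - 2) ** 2 * (n - 3) * eta ** 4 / (12 * (1 - eta) ** 4)
    return 2 * eps * (T + 1)

# Hypothetical parameters: n = k = 1e8 modes, 2.5 photons per mode on each
# side, collective-attack security 1e-80.  The polynomial prefactor eats
# part of the exponent but leaves a meaningful security parameter.
eps_prime = eps_general(10 ** 8, 10 ** 8, 2.5, 2.5, 1e-80)
assert 0 < eps_prime < 1e-40
```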
Generalization of the postselection technique of Ref. [@CKR09] {#sec:generalization}
==============================================================
The goal of this section is to prove the following theorem (Theorem 2 in the main text).
\[thm:postselection\] Let $\Delta: \mathrm{End}(F_{1,1,n}^{\leq K}) \to \mathrm{End}(\mathcal{H}')$ such that for all $u \in U(n)$, there exists a CPTP map $\mathcal{K}_u: \mathrm{End}(\mathcal{H}') \to \mathrm{End}(\mathcal{H}')$ such that $\Delta \circ u = \mathcal{K}_u \circ \Delta$, then $$\begin{aligned}
\|\Delta\|_\diamond \leq 2 T(n,\eta) \|(\Delta \otimes \mathrm{id}) \tau^\eta_{\mathcal{H}\mathcal{N}} \|_1,\end{aligned}$$ for $\eta = \frac{K-n+5}{K+n-5}$, provided that $n \geq N^*(K/(n-5))$.
One way to make sure that the input of the map is indeed restricted to states with less than $K$ photons is to replace $\Delta$ by $\Delta \circ {\mathcal{P}}^{\leq K}$.
In the following, for conciseness, we will denote by $\mathcal{H}$ the symmetric subspace: $$\begin{aligned}
{\mathcal{H}}:= F_{2,2,n}^{U(n)}.\end{aligned}$$
Let $\tau^\eta_{\mathcal{H}}$ be the normalized state corresponding to the projector $P_\eta$ defined in Eq. : $$\begin{aligned}
\tau^\eta_{\mathcal{H}} = T(n,\eta)^{-1} \int_{ \mathcal{D}_\eta} |\Lambda,n\rangle \langle \Lambda,n| \d \mu_n(\Lambda)\end{aligned}$$ where $$\begin{aligned}
T(n,\eta) := {\mathrm{tr}}(P_\eta) = \frac{(n-1)(n-2)^2(n-3)\eta^4}{12(1-\eta)^4}. \label{eqn:defT}\end{aligned}$$
Consider an orthonormal basis $\left\{ |\nu_i\rangle \right\}$ of $F_{2,2,n}^{U(n)}$ and define the non-normalizable operator $$\begin{aligned}
|\Phi\rangle_{{\mathcal{H}}{\mathcal{N}}} := \sum_{i} |\nu_i\rangle_{{\mathcal{H}}} |\nu_i\rangle_{{\mathcal{N}}}.\end{aligned}$$
A conjecture for such an explicit orthonormal basis was given in [@lev16], but we do not need an explicit basis for our present purpose.
Let us further define the state $|\Phi^\eta\rangle \in F_{2,2,n}^{U(n)} \otimes F_{2,2,n}^{U(n)}$: $$\begin{aligned}
|\Phi^\eta\rangle = \left(\sqrt{ \tau^\eta} \otimes {\mathbbm{1}}\right) |\Phi\rangle.\end{aligned}$$ It is well-known that $|\Phi^\eta\rangle$ is a purification of $\tau_{\mathcal{H}}^{\eta}$: $$\begin{aligned}
{\mathrm{tr}}_{\mathcal{N}} \left(|\Phi^\eta\rangle\langle \Phi^\eta|_{\mathcal{H}\mathcal{N}} \right)=\tau_{\mathcal{H}}^{\eta}.\end{aligned}$$
Recall that $F_{2,2,n}^{U(n), \leq K}$ denotes the finite-dimensional subspace of $F_{2,2,n}^{U(n)}$ with less than $K$ excitations.
\[lem:measurement\] Let $\rho$ be an arbitrary density operator on $F_{2,2,n}^{U(n), \leq K}$. Then there exists a binary measurement $\mathcal{M} = \{M_{\mathcal{N}}, {\mathbbm{1}}_{\mathcal{N}}-M_{\mathcal{N}}\}$ on ${\mathcal{N}}$ applied to $|\Phi^\eta\rangle \in {\mathcal{H}}\otimes {\mathcal{N}}$ that successfully prepares the state $\rho$ with probability at least $\frac{1}{2T(n,\eta)}$.
To avoid cluttering up the notations, let us write $\tau$ instead of $\tau^\eta_{{\mathcal{H}}}$. Recall that $\tau \geq p {\mathbbm{1}}_{F_{2,2,n}^{U(n), \leq K}}$ with $p = \frac{1}{2T(n,\eta)}$, as a consequence of Corollary \[corol16\].
Let us define the non negative operator $M := p \tau^{-1/2} \rho \tau^{-1/2}$. Since $p^{-1} \tau \geq {\mathbbm{1}}$ on the support of $\rho \leq {\mathbbm{1}}$, the operator $M$ satisfies $$\begin{aligned}
0 \leq M \leq {\mathbbm{1}}.\end{aligned}$$ Let us define the measurement $\mathcal{M} = \{ M, {\mathbbm{1}}-M\}$. Performing this measurement on state $|\Phi^\eta\rangle$ prepares the state $$\begin{aligned}
{\mathrm{tr}}_{{\mathcal{N}}} \left( (1 \otimes M^{1/2}) |\Phi^\eta\rangle \langle \Phi^\eta | (1 \otimes M^{1/2}) \right)\end{aligned}$$ with probability $ \langle \Phi^\eta | (1 \otimes M) |\Phi^\eta\rangle$. This state can be written: $$\begin{aligned}
{\mathrm{tr}}_{{\mathcal{N}}} \left( (1 \otimes M^{1/2}) |\Phi^\eta\rangle \langle \Phi^\eta | (1 \otimes M^{1/2})\right) &= {\mathrm{tr}}_{{\mathcal{N}}} \left( (1 \otimes M^{1/2})\left(\sqrt{ \tau} \otimes {\mathbbm{1}}\right) |\Phi\rangle \langle \Phi | \left(\sqrt{ \tau} \otimes {\mathbbm{1}}\right) (1 \otimes M^{1/2}) \right) \nonumber \\
&= {\mathrm{tr}}_{{\mathcal{N}}} \left( (\tau^{1/2} \otimes M^{1/2}) \sum_{i,j} |\nu_i \rangle \langle \nu_j| \otimes |\nu_i \rangle \langle \nu_j| (\tau^{1/2} \otimes M^{1/2}) \right) \nonumber \\
&= \sum_{i,j} \tau^{1/2} |\nu_i \rangle \langle \nu_j| \tau^{1/2} \langle \nu_j| M^{1/2} M^{1/2} |\nu_i \rangle \nonumber\\
&= \sum_{i,j} \tau^{1/2} |\nu_i \rangle \langle \nu_j| \tau^{1/2} \langle \nu_i| M^{1/2} M^{1/2} |\nu_j \rangle \label{eqn:inv} \\
&= \sum_{i,j} \tau^{1/2} |\nu_i \rangle \langle \nu_i| M^{1/2} M^{1/2} |\nu_j \rangle \langle \nu_j| \tau^{1/2} \nonumber \\
&= \tau^{1/2} M^{1/2} M^{1/2} \tau^{1/2} \nonumber \\
&= \tau^{1/2} p \tau^{-1/2} \rho \tau^{-1/2} \tau^{1/2} \nonumber \\
&= p \rho, \nonumber\end{aligned}$$ and it is obtained with probability $p$. In Eq. , we used that $M$ is symmetric, that is $\langle \nu_i |M |\nu_j\rangle = \langle \nu_j |M |\nu_i\rangle$.
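The construction in this proof can be illustrated in a small real-matrix toy model (real entries make $M$ automatically symmetric). This is a sketch under these simplifying assumptions, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6  # toy dimension playing the role of F_{2,2,n}^{U(n), <= K}

def random_real_state(rng, d):
    # Random real density matrix (almost surely full rank).
    B = rng.standard_normal((d, d))
    R = B @ B.T
    return R / np.trace(R)

tau = random_real_state(rng, d)   # stands in for tau^eta
rho = random_real_state(rng, d)   # arbitrary target state
p = 0.5 * np.linalg.eigvalsh(tau).min()  # any p with tau >= p * identity

# Spectral square roots of tau.
w, V = np.linalg.eigh(tau)
tau_sqrt = V @ np.diag(np.sqrt(w)) @ V.T
tau_inv_sqrt = V @ np.diag(1 / np.sqrt(w)) @ V.T

# The measurement operator of the proof: M = p tau^{-1/2} rho tau^{-1/2}.
M = p * tau_inv_sqrt @ rho @ tau_inv_sqrt
assert np.linalg.eigvalsh(M).min() > -1e-12              # 0 <= M
assert np.linalg.eigvalsh(np.eye(d) - M).min() > -1e-12  # M <= 1

# |Phi^tau> = (sqrt(tau) x 1) sum_i |i>|i> corresponds, under the
# vec correspondence (A x B) vec(X) = vec(A X B^T), to the matrix sqrt(tau).
wM, VM = np.linalg.eigh(M)
M_sqrt = VM @ np.diag(np.sqrt(np.clip(wM, 0, None))) @ VM.T
X = tau_sqrt @ M_sqrt          # matrix form of (1 x sqrt(M)) |Phi^tau>
prepared = X @ X.T             # partial trace over the ancilla system

assert np.allclose(prepared, p * rho, atol=1e-10)  # prepares p * rho
assert np.isclose(np.trace(prepared), p)           # success probability p
```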
\[lem:dim\] For $k\geq 0$ and $n\geq 4$, the dimensions of $V_{=K}$ and $V_{\leq K} = F_{2,2,n}^{U(n), \leq K}$ are given by $$\begin{aligned}
\mathrm{dim} \, V_{=K} = \tbinom{K+3}{3} \quad \text{and} \quad \mathrm{dim} \, V_{\leq K} = \tbinom{K+4}{4}.\end{aligned}$$
It was proven in [@lev16] that the vectors $(Z_{1,1})^i (Z_{1,2})^j (Z_{2,1})^k (Z_{2,2})^{\ell}$ are independent (provided that $n\geq 4$), which means that the dimension of $V_{=K}$ is the cardinality of the set of quadruples $\{(i,j,k,\ell) \in {\mathbb{N}}^4 \: : \: i+j+k+\ell =K\}$. This number is $\tbinom{K+3}{3}$. More generally, the number of $t$-uples of nonnegative integers that sum to $K$ is $\tbinom{t+K-1}{t-1}$. Since the subspaces $V_{=K}$ are orthogonal, it follows that $\mathrm{dim} \, V_{\leq K} = \sum_{k=0}^K \mathrm{dim} \, V_{=k}$, which can be computed explicitly. Alternatively, one can see that the space $V_{\leq K}$ of quadruples $(i,j,k,\ell)$ summing to $K-m$ for some integer $m \leq K$ corresponds to the space of $5$-uples $(i,j,k,\ell, m)$ that sum to $K$.
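The counting identity $\sum_{k=0}^K \binom{k+3}{3} = \binom{K+4}{4}$ underlying this proof (a hockey-stick identity) can be confirmed with a one-line check:

```python
from math import comb

# dim V_{=k} = C(k+3, 3) and dim V_{<=K} = C(K+4, 4); the latter is the
# sum of the former over k = 0..K.
for K in range(0, 50):
    assert sum(comb(k + 3, 3) for k in range(K + 1)) == comb(K + 4, 4)
```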
\[lem:design\] For any $K$ and $n$ integers, there exists a finite subset $\mathcal{U} \subset U(n)$, such that for any state $\rho$ with support on $F_{1,1,n}^{\leq K}$, the subspace of $F_{1,1,n}$ restricted to states with less than $K$ photons, the following holds: $$\begin{aligned}
\frac{1}{|\mathcal{U}|} \sum_{u \in \mathcal{U}} V_u \rho V_u^\dagger = \int V_u \rho V_u^\dagger \d u,\end{aligned}$$ where $\d u$ is the normalized Haar measure on $U(n)$.
Note that by definition of the Haar measure, the state $ \int V_u \rho V_u^\dagger \d u$ is invariant under the application of any unitary $u' \in U(n)$: $V_{u'} \int V_u \rho V_u^\dagger \d u V_{u'}^\dagger = \int V_u \rho V_u^\dagger \d u$, which means that it has support on $F_{1,1,n}^{U(n), \leq K}$.
By linearity, it is sufficient to establish the lemma for pure states $|\psi\rangle \in F_{1,1,n}^{\leq K}$. Such a state can be written as $$\begin{aligned}
|\psi\rangle = \sum_{\substack{k_1, \ldots, k_n, \ell_1, \ldots \ell_n\\ \sum k_i + \ell_i \leq K}} \lambda_{k_1 \ldots k_n, \ell_1 \ldots \ell_n} \prod_{i=1}^n \left(a_i^\dagger\right)^{k_i} \left(b_i^\dagger\right)^{\ell_i} |0\rangle.\end{aligned}$$
Applying $V_u$ maps $a_i^\dagger$ to $\sum_{j=1}^n u_{i,j} a_j^\dagger$ and $b_i^\dagger$ to $\sum_{j=1}^n \overline{u}_{i,j} b_j^\dagger$. In other words, the function $f: u \mapsto V_u |\psi\rangle \langle \psi | V_u^\dagger$ is a polynomial of degree at most $K$ in $u$ and $\overline{u}$. Taking $\mathcal{U}$ to be a $K$-design of $U(n)$, we obtain that $$\begin{aligned}
\frac{1}{|\mathcal{U}|} \sum_{u \in \mathcal{U}} f(u) = \int f(u) \d u,\end{aligned}$$ which proves the result.
We recall the following theorem that was established in [@lev16].
\[theo:purification\] Any density operator $\rho \in \mathfrak{S}(F_{1,1,n})$ invariant under $U(n)$ admits a purification in $F_{2,2,n}^{U(n)}$.
\[lem:symmetrization\] It is sufficient to consider states $\rho_{{\mathcal{H}}\mathcal{N}} $ with support on $F_{2,2,n}^{U(n),\leq K}$ when computing the diamond norm of Theorem \[thm:postselection\].
Consider a state $\rho_{{\mathcal{H}}{\mathcal{N}}}$ with support on $F_{2,2,n}^{\leq K}$. Let $\mathcal{U}$ be a finite set of unitaries as promised by Lemma \[lem:design\]. Let $\{ |u\rangle_{\mathcal{C}}\}_{u \in \mathcal{U}}$ be an orthogonal basis for some classical register $\mathcal{C}$. The following sequence of equalities holds: $$\begin{aligned}
\|(\Delta \otimes {\mathbbm{1}}) \rho_{{\mathcal{H}}{\mathcal{N}}}\|_1 &= \frac{1}{|\mathcal{U}|} \sum_{u \in \mathcal{U}} \|( \Delta \otimes {\mathbbm{1}}) (\rho_{{\mathcal{H}}{\mathcal{N}}} \otimes |u\rangle \langle u|_{\mathcal{C}}) \|_1 \nonumber \\
&= \left\|\frac{1}{|\mathcal{U}|} \sum_{u \in \mathcal{U}} ( \Delta \otimes {\mathbbm{1}}) (\rho_{{\mathcal{H}}{\mathcal{N}}} \otimes |u\rangle \langle u|_{\mathcal{C}}) \right\|_1 \label{eqn00} \\
&= \left\|\frac{1}{|\mathcal{U}|} \sum_{u \in \mathcal{U}} ( \mathcal{K}_u \circ \Delta \otimes {\mathbbm{1}}) (\rho_{{\mathcal{H}}{\mathcal{N}}} \otimes |u\rangle \langle u|_{\mathcal{C}}) \right\|_1 \label{eqn01} \\
&= \left\|\frac{1}{|\mathcal{U}|} \sum_{u \in \mathcal{U}} ( \Delta \circ u \otimes {\mathbbm{1}}) (\rho_{{\mathcal{H}}{\mathcal{N}}} \otimes |u\rangle \langle u|_{\mathcal{C}}) \right\|_1 \label{eqn02} \\
&= \left\| ( \Delta \otimes {\mathbbm{1}}) \left(\frac{1}{|\mathcal{U}|} \sum_{u \in \mathcal{U}} ((u \circ {\mathbbm{1}}) \rho_{{\mathcal{H}}{\mathcal{N}}} \otimes |u\rangle \langle u|_{\mathcal{C}})\right) \right\|_1 \nonumber \end{aligned}$$ where we used that the classical states $|u\rangle$ are all pairwise orthogonal in Eq. , that ${\mathcal{K}}_u$ is trace preserving in Eq. , that ${\mathcal{K}}_u \circ \Delta = \Delta \circ u$ in Eq. . Consider now the reduced state $\tilde{\rho}_{{\mathcal{H}}}$: $$\begin{aligned}
\tilde{\rho}_{{\mathcal{H}}} = {\mathrm{tr}}_{{\mathcal{N}}{\mathcal{C}}}\left(\frac{1}{|\mathcal{U}|} \sum_{u \in \mathcal{U}} ((u \circ {\mathbbm{1}}) \rho_{{\mathcal{H}}{\mathcal{N}}} \otimes |u\rangle \langle u|_{\mathcal{C}})\right) =\frac{1}{|{\mathcal{U}}|} \sum_{u \in \mathcal{U}} V_u \rho_{{\mathcal{H}}} V_u^\dagger = \int V_u \rho_{{\mathcal{H}}} V_u^{\dagger} \d u\end{aligned}$$ where the last equality follows from Lemma \[lem:design\]. Theorem \[theo:purification\] now assures the existence of some purification $\tilde{\rho}_{{\mathcal{H}}{\mathcal{N}}}$ of $\tilde{\rho}_{{\mathcal{H}}}$ in $F_{2,2,n}^{U(n), \leq K}\cong {\mathcal{H}}\otimes {\mathcal{N}}$. In particular, there exists a CPTP map $g: \mathrm{End}({\mathcal{N}}) \to \mathrm{End}({\mathcal{N}}\otimes {\mathcal{C}})$ such that $$\begin{aligned}
\frac{1}{|\mathcal{U}|} \sum_{u \in \mathcal{U}} ((u \circ {\mathbbm{1}}) \rho_{{\mathcal{H}}{\mathcal{N}}} \otimes |u\rangle \langle u|_{\mathcal{C}}) = ({\mathbbm{1}}_{\mathcal{H}}\otimes g) \tilde{\rho}_{{\mathcal{H}}{\mathcal{N}}}.\end{aligned}$$ Since $g$ is trace preserving, it further implies that $$\begin{aligned}
\|(\Delta \otimes {\mathbbm{1}}) \rho_{{\mathcal{H}}{\mathcal{N}}}\|_1 =\|(\Delta \otimes {\mathbbm{1}}) ({\mathbbm{1}}_{\mathcal{H}}\otimes g) \tilde{\rho}_{{\mathcal{H}}{\mathcal{N}}}\|_1 =\|(\Delta \otimes {\mathbbm{1}}) \tilde{\rho}_{{\mathcal{H}}{\mathcal{N}}}\|_1,\end{aligned}$$ which concludes the proof.
We are now in position to prove Theorem \[thm:postselection\].
[thm:postselection]{} Let $\Delta: \mathrm{End}(F_{1,1,n}^{\leq K}) \to \mathrm{End}(\mathcal{H}')$ such that for all $u \in U(n)$, there exists a CPTP map $\mathcal{K}_u: \mathrm{End}(\mathcal{H}') \to \mathrm{End}(\mathcal{H}')$ such that $\Delta \circ u = \mathcal{K}_u \circ \Delta$, then $$\begin{aligned}
\|\Delta\|_\diamond \leq 2 T(n,\eta) \|(\Delta \otimes \mathrm{id}) \tau^\eta_{\mathcal{H}\mathcal{N}} \|_1,\end{aligned}$$ for $\eta = \frac{K-n+5}{K+n-5}$, provided that $n \geq N^*(K/(n-5))$.
According to Lemma \[lem:symmetrization\], it is sufficient to prove the theorem for a state $\rho_{{\mathcal{H}}{\mathcal{N}}}$ on $F_{2,2,n}^{U(n), \leq K}$. Lemma \[lem:measurement\] guarantees the existence of a trace-non-increasing map $\mathcal{T}$ from a copy of $F_{2,2,n}^{U(n), \leq K}$ to ${\mathbb{C}}$ such that $$\begin{aligned}
\rho_{{\mathcal{H}}{\mathcal{N}}} = {2T(n,\eta)}({\mathbbm{1}}\otimes \mathcal{T}) (|\Phi^\eta\rangle \langle \Phi^\eta |).\end{aligned}$$ This gives $$\begin{aligned}
(\Delta \otimes {\mathbbm{1}}) \rho_{{\mathcal{H}}{\mathcal{N}}} = {2T(n,\eta)}(\Delta \otimes \mathcal{T}) (|\Phi^\eta\rangle \langle \Phi^\eta |)\end{aligned}$$ and finally that $$\begin{aligned}
\|(\Delta \otimes {\mathbbm{1}}) \rho_{{\mathcal{H}}{\mathcal{N}}}\|_1 ={2T(n,\eta)} \|(\Delta \otimes \mathcal{T}) (|\Phi^\eta\rangle \langle \Phi^\eta |)\|_1.\end{aligned}$$
Security against collective attacks provides a bound on $\| {\mathcal{R}}\circ \Delta \circ {\mathcal{P}}\|_{\diamond}$ {#sec:collective}
=======================================================================================================================
In order to exploit Theorem \[thm:postselection\], one needs an upper bound on $\|(({\mathcal{R}}\circ \Delta \circ {\mathcal{P}}^{\leq K})\otimes \mathrm{id}) \tau^\eta_{\mathcal{H}\mathcal{N}} \|_1$. We will see that such a bound can be obtained if the protocol is known to be secure against Gaussian collective attacks. For this, we follow the same strategy as in [@CKR09]. Let us first recall the definition of being secure against Gaussian collective attacks.
The QKD protocol ${\mathcal{E}}_0$ is ${\varepsilon}$-secure against Gaussian collective attacks if $$\begin{aligned}
\| (({\mathcal{E}}_0-{\mathcal{F}}_0)\otimes \mathrm{id})(|\Lambda,n\rangle \langle \Lambda,n|)\|_1\leq {\varepsilon}\label{eqn:sec-coll}\end{aligned}$$ for all $\Lambda \in {\mathcal{D}}$.
We show the following result.
With the previous notations, if ${\mathcal{E}}_0$ is ${\varepsilon}$-secure against Gaussian collective attacks, then $$\begin{aligned}
\|({\mathcal{R}}\circ \Delta \circ {\mathcal{P}}^{\leq K} \otimes \mathrm{id}) \tau^\eta_{\mathcal{H}\mathcal{N}} \|_1 \leq {\varepsilon},\end{aligned}$$ where $\tau^\eta_{\mathcal{H}\mathcal{N}}$ is a purification of $\tau^\eta_{\mathcal{H}}$. Here ${\mathcal{R}}$ is an additional privacy amplification step that reduces the key by $\lceil 2 \log_2 \tbinom{K+4}{4} \rceil$ bits and ${\mathcal{P}}^{\leq K}$ is the projection onto $F_{1,1,n}^{\leq K}$.
Recall that $$\begin{aligned}
\tau^\eta_{\mathcal{H}} = T(n,\eta)^{-1} \int_{\mathcal{D}_\eta} |\Lambda,n\rangle \langle \Lambda,n| \d \mu_n(\Lambda)\end{aligned}$$ where $$\begin{aligned}
T(n,\eta) := {\mathrm{tr}}(P_\eta) = \frac{(n-1)(n-2)^2(n-3)\eta^4}{12(1-\eta)^4}.\end{aligned}$$ By linearity, it holds that $$\begin{aligned}
\| (({\mathcal{E}}-{\mathcal{F}})\otimes \mathrm{id})( \tau^\eta_{\mathcal{H}})\|_1 &=
\| (({\mathcal{E}}-{\mathcal{F}})\otimes \mathrm{id})( T(n,\eta)^{-1} \int_{\Lambda \in \mathcal{D}_\eta} |\Lambda,n\rangle \langle \Lambda,n| \d \mu_n(\Lambda) )\|_1 \\
&\leq {\varepsilon}T(n,\eta)^{-1}\left\| \int_{\Lambda \in \mathcal{D}_\eta} \d \mu_n(\Lambda) \right\|_1 \\ &= {\varepsilon}\end{aligned}$$ In order to obtain the theorem, we need to consider a purification $\tau_{{\mathcal{H}}{\mathcal{N}}}^{\eta}$ of $\tau_{{\mathcal{H}}}^{\eta}$. Since ${\mathcal{P}}^{\leq K}$ restricts the states to live in a space of dimension at most $\mathrm{dim} \, F_{2,2,n}^{U(n), \leq K} = \tbinom{K+4}{4}$ (according to Lemma \[lem:dim\]), it implies that the purifying system ${\mathcal{N}}$ can be chosen of this dimension. Giving this extra system to Eve can at most provide her with a limited amount of information. Applying an additional privacy amplification step ${\mathcal{R}}$ ensures that the protocol remains ${\varepsilon}$-secure for the state $\tau_{{\mathcal{H}}{\mathcal{N}}}^\eta$ thanks to the leftover hashing lemma (Theorem 5.1.1 of [@Ren08]): $$\begin{aligned}
\|({\mathcal{R}}\circ \Delta \circ {\mathcal{P}}^{\leq K} \otimes \mathrm{id}) \tau^\eta_{\mathcal{H}\mathcal{N}} \|_1 \leq \|( \Delta \circ {\mathcal{P}}^{\leq K} \otimes \mathrm{id}) \tau^\eta_{\mathcal{H}\mathcal{N}} \|_1.\end{aligned}$$
Combining this result with Theorem \[thm:postselection\] yields Theorem \[thm:diamond-protocol\].
[thm:diamond-protocol]{} With the previous notations, if ${\mathcal{E}}_0$ is ${\varepsilon}$-secure against Gaussian collective attacks, then $$\begin{aligned}
\|{\mathcal{R}}\circ ({\mathcal{E}}_0-{\mathcal{F}}_0) \circ {\mathcal{P}}^{\leq K} \|_{\diamond} \leq 2 T(n,\eta) {\varepsilon}.\end{aligned}$$
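To get a feeling for the cost of this reduction, the following sketch evaluates the prefactor $2T(n,\eta)$ appearing in the bound, with $\eta = (K-n+5)/(K+n-5)$ as in Theorem \[thm:postselection\]. The numerical values of $n$ and $K$ below are purely illustrative and not taken from the text.

```python
def T(n, eta):
    """Trace of the projector P_eta, T(n, eta) = (n-1)(n-2)^2(n-3) eta^4 / (12 (1-eta)^4)."""
    return (n - 1) * (n - 2)**2 * (n - 3) * eta**4 / (12 * (1 - eta)**4)

def security_prefactor(n, K):
    """Degradation factor 2*T(n, eta) with eta = (K - n + 5)/(K + n - 5)."""
    eta = (K - n + 5) / (K + n - 5)
    return 2 * T(n, eta)

# Hypothetical block size n and photon-number cutoff K:
n, K = 10**6, 5 * 10**6
prefactor = security_prefactor(n, K)
```

The point of the construction is visible here: the prefactor grows only polynomially in $n$ (degree four), so the loss incurred by lifting collective security to general security stays manageable.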
Energy test {#sec:test}
===========
The goal of this section is to prove the following result.
[thm:test]{} For integers $n,k \geq 1$, and $d_A, d_B >0$, define $K = n(d'_A + d'_B)$ for $d'_{A/B} = d_{A/B} g(n,k,{\varepsilon}/4)$ for the function $g$ defined in Eq. . Then $$\begin{aligned}
\big\| \big({\mathbbm{1}}- {\mathcal{P}}(n,K)\big) \circ {\mathcal{T}}(k, d_A, d_B)\big\|_{\diamond} \leq {\varepsilon}.\end{aligned}$$
For $d >0$, let us introduce the following operators on ${\mathcal{H}}^{\otimes n}$ for a single-mode Fock space ${\mathcal{H}}$: $$\begin{aligned}
T_n^{d} &:= \frac{1}{\pi^n} \int_{\sum_{i=1}^n |\alpha_i|^2 \geq n d} |\alpha_1\rangle\langle \alpha_1| \otimes \ldots \otimes |\alpha_n\rangle \langle \alpha_n| \d \alpha_1 \ldots \d \alpha_n\\
U_n^{d} &:= \sum_{m = n d+1}^\infty \Pi_{m}^n,\end{aligned}$$ where $\Pi_m^n$ is the projector onto the subspace of ${\mathcal{H}}^{\otimes n}$ spanned by Fock states containing $m$ photons: $$\begin{aligned}
\Pi_m^n = \sum_{m_1+\ldots+m_n=m} |m_1, \ldots, m_n\rangle \langle m_1, \ldots, m_n|.\end{aligned}$$ In words, $T_n^{d}$ is the sum of the projectors onto products of coherent states such that the total squared amplitude is greater than $n d$ and $U_n^{d}$ is the projector onto Fock states containing more than $n d$ photons. Intuitively, both operators should be “close” to each other. This is formalized with the following lemma that was proven in [@LGRC13].
\[lem:LGRC\] For any integer $n$ and any $d\geq0$, it holds that $$\begin{aligned}
U_n^{d} \leq 2 T_n^{d}.\end{aligned}$$
The following lemma results from the definitions of $U_n^d$ and ${\mathcal{P}}^{\leq K}$, the projector onto $F_{1,1,n}^{\leq K}$.
\[lem:obs\] For any $d_A, d_B \geq 0$ and integer $K$ such that $K \leq n(d_A+d_B)$, it holds that $$\begin{aligned}
{\mathbbm{1}}_{{\mathcal{H}}_A^{\otimes n} \otimes {\mathcal{H}}_B^{\otimes n}} - {\mathcal{P}}^{\leq K} \leq U_n^{d_A}\otimes {\mathbbm{1}}_{{\mathcal{H}}_B^{\otimes n}} + {\mathbbm{1}}_{{\mathcal{H}}_A^{\otimes n}} \otimes U_n^{d_B}.\end{aligned}$$
The left hand side is the projector onto the states of ${\mathcal{H}}_A^{\otimes n} \otimes {\mathcal{H}}_B^{\otimes n}$ containing strictly more than $K$ photons. Any such state must contain either at least $n d_A$ photons in ${\mathcal{H}}_A^{\otimes n}$ or at least $K - n d_A$ photons in ${\mathcal{H}}_B^{\otimes n}$, for any possible value of $d_A$. This proves the claim.
Combining Lemmas \[lem:LGRC\] and \[lem:obs\], we obtain the immediate corollary.
\[cor:proj\] For any $d_A, d_B \geq 0$ and integer $K$ such that $K \leq n(d_A+d_B)$, it holds that $$\begin{aligned}
{\mathbbm{1}}_{{\mathcal{H}}_A^{\otimes n} \otimes {\mathcal{H}}_B^{\otimes n}} - {\mathcal{P}}^{\leq K} \leq 2 T_n^{d_A}\otimes {\mathbbm{1}}_{{\mathcal{H}}_B^{\otimes n}} + 2{\mathbbm{1}}_{{\mathcal{H}}_A^{\otimes n}} \otimes T_n^{d_B}.\end{aligned}$$
Recall that the heterodyne measurement corresponds to a projection onto (Glauber) coherent states, and is described by the resolution of the identity: $$\begin{aligned}
{\mathbbm{1}}_{{\mathcal{H}}^{\otimes k}} = \frac{1}{\pi^k} \int_{{\mathbb{C}}^k} |\alpha_1\rangle \langle \alpha_1| \otimes \ldots \otimes |\alpha_k\rangle \langle \alpha_k| \d\alpha_1 \ldots \d \alpha_k.\end{aligned}$$ In other words, measuring a state $\rho$ on ${\mathcal{H}}^{\otimes k}$ with heterodyne detection outputs the result $(\alpha_1, \ldots, \alpha_k) \in {\mathbb{C}}^k$ with probability $$\begin{aligned}
\mathrm{Pr}_\rho(\alpha_1, \ldots, \alpha_k) =\frac{1}{\pi^k} {\mathrm{tr}}( \rho |\alpha_1\rangle \langle \alpha_1| \otimes \ldots \otimes |\alpha_k\rangle \langle \alpha_k|).\end{aligned}$$
Laurent and Massart [@LM00] established the following tail bounds for $\chi^2(D)$ distributions.
\[lem:LM\] Let $U$ be a $\chi^2$ statistic with $D$ degrees of freedom. For any $x >0$, $$\begin{aligned}
\mathrm{Pr}[U-D \geq 2\sqrt{D x} + 2 x] \leq \exp(-x) \quad \text{and} \quad \mathrm{Pr}[D-U \geq 2\sqrt{Dx}] \leq \exp(-x). \end{aligned}$$
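These tail bounds are easy to check empirically. The sketch below samples $\chi^2$ variates with the Python standard library and compares the empirical tail probabilities against $e^{-x}$, using the Laurent–Massart form $\mathrm{Pr}[U-D \geq 2\sqrt{Dx} + 2x] \leq e^{-x}$ and $\mathrm{Pr}[D-U \geq 2\sqrt{Dx}] \leq e^{-x}$; the parameters $D$, $x$ and the sample size are arbitrary choices for illustration.

```python
import math
import random

def chi2_sample(D, rng):
    """One chi-square variate with D degrees of freedom (sum of D squared normals)."""
    return sum(rng.gauss(0.0, 1.0)**2 for _ in range(D))

def tail_check(D=20, x=2.0, trials=20000, seed=1):
    """Empirical frequencies of the two tail events vs. the exp(-x) bound."""
    rng = random.Random(seed)
    hi = lo = 0
    for _ in range(trials):
        u = chi2_sample(D, rng)
        if u - D >= 2 * math.sqrt(D * x) + 2 * x:
            hi += 1
        if D - u >= 2 * math.sqrt(D * x):
            lo += 1
    return hi / trials, lo / trials, math.exp(-x)

p_hi, p_lo, bound = tail_check()  # both empirical tails stay below the bound
```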
A state $\rho$ on ${\mathcal{H}}^{\otimes n} = F({\mathbb{C}}^n)$ is said to be *rotationally invariant* if $V_u \rho V_u^\dagger = \rho$ for all $u \in U(n)$.
In particular, the state $\int V_u \rho V_u^\dagger \d u$ is rotationally invariant when $\d u$ is the Haar measure on $U(n)$.
\[lem:36\] Let $\rho$ be a rotationally invariant state on ${\mathcal{H}}^{\otimes (n+k)}$. Then, for any $d >0$, $$\begin{aligned}
{\mathrm{tr}}\left[ ( T_n^{d'} \otimes ({\mathbbm{1}}-T_k^d)) \rho \right] \leq {\varepsilon},\end{aligned}$$ for $d' = g(n,k,{\varepsilon}) d$ and $$\begin{aligned}
g(n,k,{\varepsilon}) = \frac{1 + 2 \sqrt{\frac{\ln (2/{\varepsilon})}{2n}} + \frac{\ln (2/{\varepsilon})}{n}}{1-2{\sqrt{\frac{\ln (2/{\varepsilon})}{2k}}}}. \label{eqn:g}\end{aligned}$$
By definition, ${\mathrm{tr}}[T_k^d \rho]$ is the probability that the outcome $(\alpha_1, \ldots, \alpha_k)\in {\mathbb{C}}^k$ obtained by measuring the last $k$ modes of the state $\rho$ with heterodyne detection satisfies $\sum_{i=1}^k |\alpha_i|^2 \geq k d$. Similarly, ${\mathrm{tr}}\left[ ( T_n^{d'} \otimes ({\mathbbm{1}}-T_k^d)) \rho \right]$ is the probability that the outcome of measuring the $n+k$ modes of $\rho$ with heterodyne detection yields a vector $(\alpha_1, \ldots, \alpha_{n+k})$ such that $$\begin{aligned}
Y_n := \sum_{i=1}^n |\alpha_i|^2 \geq n d' \quad \text{and} \quad Y_k :=\sum_{i=1}^k |\alpha_{n+i}|^2 \leq k d.\end{aligned}$$ Since the state is rotationally invariant, it means that the random vector $(\alpha_1, \ldots, \alpha_{n+k})$ is uniformly distributed on the sphere of radius $M$ in ${\mathbb{C}}^{n+k}$, conditioned on the fact that the modulus is $\sqrt{\sum_{i=1}^{n+k} |\alpha_i|^2}=M$. Equivalently, one can consider the $2(n+k)$-dimensional real vector $({\mathfrak{R}}(\alpha_1), {\mathfrak{I}}(\alpha_1), \ldots, {\mathfrak{R}}(\alpha_{n+k}), {\mathfrak{I}}(\alpha_{n+k}))$ which is uniformly distributed over the sphere in ${\mathbb{R}}^{2(n+k)}$. Here ${\mathfrak{R}}(\alpha)$ and ${\mathfrak{I}}(\alpha)$ refer respectively to the real and imaginary part of $\alpha$. We obtain $$\begin{aligned}
{\mathrm{tr}}\left[ ( T_n^{d'} \otimes ({\mathbbm{1}}-T_k^d)) \rho \right] &= \mathrm{Pr}[(Y_n \geq nd' ) \wedge (Y_k \leq kd)]\\
& \leq \mathrm{Pr}[kd Y_n \geq nd' Y_k]\end{aligned}$$ where the inequality is a simple consequence of the fact that the rectangle $[nd',\infty] \times [0, kd]$ is a subset of the triangle $\{ (x,y) \in [0,\infty]^2 \: : \: kd x \geq nd' y\}$.
It is well-known that the uniform distribution over the unit sphere of ${\mathbb{R}}^{2(n+k)}$ can be generated by sampling $2(n+k)$ normal variables with 0 mean and unit variance. In that case, the squared norm $\sum_{i=1}^n |\alpha_i|^2$ is simply a $\chi^2$ variable with $2n$ degrees of freedom while $\sum_{i=1}^k |\alpha_{n+i}|^2$ corresponds to an independent $\chi^2$ variable with $2k$ degrees of freedom. Let us denote by $Z_n$ and $Z_k$ the corresponding random variables: $Z_n \sim \chi^2(2n)$, $Z_k \sim \chi^2(2k)$. Since $(Y_n, Y_k)$ and $(Z_n, Z_k)$ follow the same distribution, up to rescaling, we obtain that $$\begin{aligned}
\mathrm{Pr}[kd Y_n \geq nd' Y_k] = \mathrm{Pr}[kd Z_n \geq nd' Z_k].\end{aligned}$$ This is particularly useful because it means that there is no need to enforce normalization explicitly. Finally, using that the triangle $\{ (x,y) \in [0,\infty]^2 \: : \: kd x \geq nd' y\}$ is a subset of the union of the rectangles $[\alpha nd',\infty]\times [0,\infty]$ and $[0,\infty] \times [0,\alpha kd]$ for any $\alpha >0$, it follows that $$\begin{aligned}
\mathrm{Pr}[kd Z_n \geq nd' Z_k] \leq \mathrm{Pr}[ Z_n \geq \alpha n d'] + \mathrm{Pr}[ Z_k \leq \alpha k d].\end{aligned}$$ Choosing $\alpha$ such that $$\begin{aligned}
\alpha k d = 2k \left(1 - 2 \sqrt{\frac{\ln(2/{\varepsilon})}{2k}}\right)\end{aligned}$$ and applying the tail bounds for the $\chi^2$ distribution given in Lemma \[lem:LM\] gives $$\begin{aligned}
\mathrm{Pr}[ Z_n \geq \alpha n d'] \leq \frac{{\varepsilon}}{2}, \quad \mathrm{Pr}[ Z_k \leq \alpha k d] \leq \frac{{\varepsilon}}{2}.\end{aligned}$$ This establishes that $$\begin{aligned}
{\mathrm{tr}}\left[ ( T_n^{d'} \otimes ({\mathbbm{1}}-T_k^d)) \rho \right] \leq {\varepsilon}, \end{aligned}$$ which concludes the proof.
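The correction factor $g(n,k,{\varepsilon})$ defined above is straightforward to evaluate. The sketch below implements it together with the cutoff $K = n(d'_A + d'_B)$ of Theorem \[thm:test\]; the numerical parameters are hypothetical and only meant to show the typical size of $g$.

```python
import math

def g(n, k, eps):
    """Correction factor relating the observed mean energy on the k test modes
    to the bound d' = g(n, k, eps) * d enforced on the remaining n modes."""
    x = math.log(2 / eps)
    num = 1 + 2 * math.sqrt(x / (2 * n)) + x / n
    den = 1 - 2 * math.sqrt(x / (2 * k))
    if den <= 0:
        raise ValueError("k is too small for the requested eps")
    return num / den

# Hypothetical parameters; Theorem [thm:test] uses eps/4 inside g:
n, k, eps, d_A, d_B = 10**6, 10**5, 1e-10, 2.0, 2.0
K = n * (d_A + d_B) * g(n, k, eps / 4)
```

For block sizes of this order, $g$ is close to 1, so the cutoff $K$ exceeds the naive estimate $n(d_A+d_B)$ by only a few percent; $g$ tends to 1 as both $n$ and $k$ grow.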
We are now ready to define and analyze the energy test. Alice and Bob perform a random rotation of their data according to a unitary $u \in U(n)$ chosen from the Haar measure on $U(n)$, and measure the last $k$ modes of their respective state with heterodyne detection. They compute the squared norm of their respective vectors and obtain two values $Y_A$ for Alice and $Y_B$ for Bob. The test depends on three parameters: the number $k$ of modes which are measured, a maximum value for Alice $d_A$ and a maximum value for Bob, $d_B$. The test $\mathcal{T}(k, d_A, d_B)$ passes if $$\begin{aligned}
Y_A \leq k d_A \quad \text{and} \quad Y_B \leq k d_B.\end{aligned}$$
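The pass/fail decision of the test itself is elementary; a literal transcription (variable names are ours) reads:

```python
def energy_test(alphas_A, alphas_B, d_A, d_B):
    """Energy test T(k, d_A, d_B): Alice and Bob heterodyne their last k modes,
    obtaining outcomes alphas_A, alphas_B in C^k, and the test passes iff
    Y_A <= k*d_A and Y_B <= k*d_B."""
    k = len(alphas_A)
    Y_A = sum(abs(a)**2 for a in alphas_A)
    Y_B = sum(abs(b)**2 for b in alphas_B)
    return Y_A <= k * d_A and Y_B <= k * d_B

# Two toy outcomes (hypothetical numbers):
assert energy_test([1 + 0j, 0j], [0.5j, 0j], 1.0, 1.0)    # Y_A = 1, Y_B = 0.25: pass
assert not energy_test([3 + 0j, 0j], [0j, 0j], 1.0, 1.0)  # Y_A = 9 > k*d_A = 2: fail
```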
We are interested in the probability that the test passes while the remaining modes contain more than $K$ photons, more precisely in the quantity $$\begin{aligned}
\|({\mathbbm{1}}- {\mathcal{P}}) \circ {\mathcal{T}}\|_{\diamond}.\end{aligned}$$
Let us denote by $\mathrm{Inv}( {\mathfrak{S}}({\mathcal{H}}^{\otimes (n+k)}))$ the set of density matrices which are invariant under the action of $U(n+k)$.
[thm:test]{} For integers $n,k \geq 1$, and $d_A, d_B >0$, define $K = n(d'_A + d'_B)$ for $d'_{A/B} = d_{A/B} g(n,k,{\varepsilon}/4)$ for the function $g$ defined in Eq. . Then $$\begin{aligned}
\big\| \big({\mathbbm{1}}- {\mathcal{P}}(n,K)\big) \circ {\mathcal{T}}(k, d_A, d_B)\big\|_{\diamond} \leq {\varepsilon}.\end{aligned}$$
Writing ${\mathcal{P}}$ and ${\mathcal{T}}$ for conciseness, the definition of the diamond norm yields: $$\begin{aligned}
\|({\mathbbm{1}}- {\mathcal{P}}) \circ {\mathcal{T}}\|_{\diamond} &= \max_{\rho \in {\mathfrak{S}}\left({\mathcal{H}}_{AB}^{\otimes (n+k)} \otimes {\mathcal{H}}_{AB}^{\otimes (n+k)}\right)} \big\| \big(\big(({\mathbbm{1}}- {\mathcal{P}}) \circ {\mathcal{T}}\big)\otimes {\mathbbm{1}}_{{\mathcal{H}}_{AB}^{\otimes (n+k)}}\big) (\rho)\big\|_{1} \nonumber\\
&= \max_{\rho \in {\mathfrak{S}}({\mathcal{H}}_{AB}^{\otimes (n+k)})} \|({\mathbbm{1}}- {\mathcal{P}}) \circ {\mathcal{T}}(\rho)\|_{1} \label{eqn:nonneg}\\
& \leq \max_{\rho \in \mathrm{Inv}\left( {\mathfrak{S}}({\mathcal{H}}_{AB}^{\otimes (n+k)})\right)} \|({\mathbbm{1}}- {\mathcal{P}}) \circ {\mathcal{T}}(\rho)\|_{1} \label{eqn:invar}\\
& \leq \max_{\rho \in \mathrm{Inv}\left( {\mathfrak{S}}({\mathcal{H}}_{AB}^{\otimes (n+k)})\right)} \| (U_n^{d'_A} \otimes {\mathbbm{1}}+{\mathbbm{1}}\otimes U_n^{d'_B}) \circ \big( ({\mathbbm{1}}- T_k^{d_A} )\otimes ({\mathbbm{1}}-T_k^{d_B}) \big)(\rho)\|_{1} \label{eqn:sum} \\
& = \max_{\rho \in \mathrm{Inv}\left( {\mathfrak{S}}({\mathcal{H}}_{AB}^{\otimes (n+k)})\right)} \| (U_n^{d'_A} \circ ({\mathbbm{1}}-T_k^{d_A}) + U_n^{d'_B} \circ ({\mathbbm{1}}- T_k^{d_B})) (\rho)\|_{1} \nonumber \\
& \leq \max_{\rho \in \mathrm{Inv}\left( {\mathfrak{S}}({\mathcal{H}}_{A}^{\otimes (n+k)})\right)} \| (U_n^{d'_A} \circ ({\mathbbm{1}}-T_k^{d_A} )) (\rho)\|_{1} + \max_{\rho \in \mathrm{Inv}\left( {\mathfrak{S}}({\mathcal{H}}_{B}^{\otimes (n+k)})\right)} \| ( U_n^{d'_B} \circ ({\mathbbm{1}}-T_k^{d_B})) (\rho)\|_{1} \label{eqn:triang}\\
& \leq 2 \max_{\rho \in \mathrm{Inv}\left( {\mathfrak{S}}({\mathcal{H}}_{A}^{\otimes (n+k)})\right)} \| (T_n^{d'_A} \circ ({\mathbbm{1}}-T_k^{d_A}) ) (\rho)\|_{1} +2 \max_{\rho \in \mathrm{Inv}\left( {\mathfrak{S}}({\mathcal{H}}_{B}^{\otimes (n+k)})\right)} \| ( T_n^{d'_B} \circ ({\mathbbm{1}}-T_k^{d_B})) (\rho)\|_{1}\\
&\leq {\varepsilon}\label{eqn:final}\end{aligned}$$ where we used that $\big(({\mathbbm{1}}- {\mathcal{P}}) \circ {\mathcal{T}})\otimes {\mathbbm{1}}_{{\mathcal{H}}_{AB}^{\otimes (n+k)}}\big) (\rho)$ is a nonnegative operator in Eq. , the fact that both ${\mathcal{P}}$ and ${\mathcal{T}}$ are rotationally invariant in Eq. , Lemma \[lem:obs\] in Eq. , the triangle inequality in Eq. , Lemma \[lem:36\] in Eq. .
| {
"pile_set_name": "ArXiv"
} |
---
author:
- 'G. Anzolin'
- 'D. de Martino'
- 'M. Falanga'
- 'K. Mukai'
- 'J.-M. Bonnet-Bidaud'
- 'M. Mouchet'
- 'Y. Terada'
- 'M. Ishida'
bibliography:
- '0000.bib'
date: 'Received ...; accepted ...'
title: 'Broad-band properties of the hard X-ray cataclysmic variables IGR J00234+6141 and 1RXS J213344.1+510725 [^1]'
---
Introduction
============
The first deep survey above 20 keV performed by the *INTEGRAL* satellite has allowed the detection of more than 400 X-ray sources [@bird07]. The extensive survey of the Galactic plane has revealed that the contribution of Galactic X-ray binaries, especially the cataclysmic variable (CV) type, is non-negligible in hard X-rays [@sazonov06]. Most of the hard X-ray emitting CVs were found to be magnetic intermediate polars (IPs) [@barlow06]. Up to now, IPs represent $\sim 5 \%$ of the *INTEGRAL* sources detected at that energy range and this number is prone to increase in the near future, as demonstrated by systematic optical follow-ups for some of the $\sim 100$ *INTEGRAL* unidentified sources [see e.g. @masetti08]. From the detection of thousands of discrete low-luminosity X-ray sources in a deep *Chandra* survey of regions close to the Galactic center, @muno04 proposed that magnetic CVs of the IP type may represent a significant fraction of the Galactic background. Further support that magnetic CVs could still be a hidden population of faint X-ray sources and, therefore, play an important role in the X-ray emission of the Galaxy comes from recent studies of the Galactic ridge with *INTEGRAL* and *RXTE* [@revnivtsev08].
IPs are believed to harbor weakly magnetized ($B \lesssim 10$ MG) white dwarfs (WDs) because of their fast asynchronous rotation with respect to the orbital period and of the lack of significant polarized emission in the optical/near-IR in most systems. This contrasts with the other group of magnetic CVs, the polars, that instead possess strongly magnetized ($B \sim 10 - 230$ MG) WDs rotating synchronously with the binary period. The X-ray properties of magnetic CVs are strictly related to the accretion mechanism onto the WD primary. Material from the late type companion is driven by the magnetic field lines onto the magnetic polar caps, where a shock develops [@aizu73] below which hard X-rays and cyclotron radiation are emitted. Bremsstrahlung radiation is believed to be the dominant cooling mechanism in IPs [@wu94], while cyclotron radiation may dominate in polars. The complex interplay between the two mechanisms greatly depends on both magnetic field strength and local mass accretion rates [@woelk_beuermann96; @fischer_beuermann01]. Hence, if IPs indeed host weakly magnetized WDs with respect to polars, this could qualitatively explain why they are hard X-ray sources. The detection of a soft X-ray optically thick component in an increasing number of systems [@anzolin08] poses further questions in the interpretation of the X-ray emission properties of IPs.
In the framework of an ongoing optical identification program, we identified two new members of the IP group: 1RXS J213344.1+510725 = IGR J21335+5105 (hereafter RXJ2133) [@bonnetbidaud06] and IGR J00234+6141 = 1RXS J002258.3+614111 (hereafter IGR0023) [@bonnetbidaud07]. Both of them are hard CVs in the *INTEGRAL* source catalog [@bird07].
The weak hard X-ray source IGR0023 was detected by the *INTEGRAL* satellite during an observation of the Cassiopeia region of the Galaxy [@denhartog06]. The optical counterpart of IGR0023 was identified by @masetti06, who proposed a possible magnetic nature of this CV. A tentative $\sim 570$ s optical periodicity was recognized in the $R$ band [@bikmaev06]. However, a clear periodic modulation of $563.53 \pm 0.62$ s was discovered with optical photometric data by @bonnetbidaud07 and readily ascribed to the rotational period of the WD, while an orbital period of $4.033 \pm 0.005$ hr was derived from optical spectroscopy. The properties of the *INTEGRAL* spectrum, which is well fitted by a bremsstrahlung with a temperature of 31 keV, strongly support the magnetic nature of IGR0023.
RXJ2133 was identified as a hard X-ray point source from the *ROSAT* Galactic Plane Survey [@motch98]. A clear persistent optical light pulsation at $570.823 \pm 0.013$ s was then discovered with fast optical photometry, while optical spectroscopy revealed an additional periodic variability at $7.193 \pm 0.016$ hr [@bonnetbidaud06]. These two periodicities were identified as the WD spin and the orbital periods, respectively, thus suggesting that RXJ2133 is a member of the IP class with a relatively long orbital period, which falls into the so-called IP gap between 6.5 and 9.5 hr [@schenker04]. @katajainen07 also found that RXJ2133 emits circularly polarized optical light at levels up to $\sim 3 \%$ and proposed that the WD magnetic field could be as high as 25 MG, one of the highest amongst IPs.
The X-ray variability and broad-band spectra of these two systems have been investigated using pointed *XMM-Newton* [@jansen01] observations and publicly available hard X-ray data obtained with the *INTEGRAL* satellite [@winkler03]. In the case of RXJ2133, we also present the temporal and spectral analysis of a pointed *Suzaku* [@mitsuda07] observation.
Observations and data reduction
===============================
The summary of the observations of IGR0023 and RXJ2133 is reported in Table \[tab:observ\].
Object Instrument Date UT (start) Exposure time (s) Net count rate (${\mbox{counts s}^{-1}}$)
--------- ------------ ------------ ------------ -------------------- -------------------------------------------
IGR0023 EPIC-pn 2007-07-10 05:58 24660 $0.984 \pm 0.007$
EPIC-MOS 05:36 26540 $0.345 \pm 0.004$
RGS 05:35 26706 $0.038 \pm 0.001$
OM-V 05:44 1960 $3.26 \pm 0.04$
06:22 1959 $3.12 \pm 0.04$
07:01 1960 $2.79 \pm 0.04$
08:12 1962 $2.97 \pm 0.04$
08:50 1959 $2.85 \pm 0.04$
OM-UVM2 09:28 1960 $0.16 \pm 0.01$
10:06 1959 $0.19 \pm 0.01$
10:45 1960 $0.20 \pm 0.01$
11:23 1960 $0.18 \pm 0.01$
12:01 1960 $0.18 \pm 0.01$
IBIS/ISGRI $\sim 6\,900\,000$ $0.10 \pm 0.01$
RXJ2133 EPIC-pn 2005-05-29 13:33 13585 $5.42 \pm 0.02$
EPIC-MOS 12:47 16580 $1.398 \pm 0.008$
RGS 13:28 13890 $0.147 \pm 0.004$
OM-B 10:11 2320 $24.9 \pm 0.1$
10:55 2319 $23.2 \pm 0.1$
11:39 2319 $22.4 \pm 0.1$
12:23 2319 $22.1 \pm 0.1$
13:07 2320 $23.7 \pm 0.1$
OM-UVM2 13:51 2319 $0.69 \pm 0.02$
14:35 2321 $0.68 \pm 0.02$
15:20 2320 $0.89 \pm 0.02$
16:03 2320 $0.85 \pm 0.02$
16:48 2319 $0.87 \pm 0.02$
EPIC-pn 2005-07-06 18:03 9871 $5.12 \pm 0.03$
EPIC-MOS 17:05 13680 $1.105 \pm 0.009$
RGS 17:04 13910 $0.120 \pm 0.005$
OM-UVM2 17:31 1681 $0.60 \pm 0.02$
18:16 1679 $0.70 \pm 0.02$
19:20 1680 $0.64 \pm 0.02$
19:53 1680 $0.69 \pm 0.02$
20:27 1679 $0.59 \pm 0.02$
IBIS/ISGRI $\sim 3\,740\,000$ $0.55 \pm 0.02$
XIS-FI 2006-04-29 06:50 84288 $0.738 \pm 0.002$
XIS-BI 06:50 84288 $0.892 \pm 0.004$
HXD 06:50 62879 $0.739 \pm 0.004$
The *XMM-Newton* observations
-----------------------------
For all our *XMM-Newton* observations, we reprocessed and analyzed the EPIC-pn [@struder01], MOS [@turner01], RGS [@denherder01] and OM [@mason01] data using the standard reduction pipelines included in SAS 8.0 and the latest calibration files. For IGR0023, because of a problem with the standard source detection task, the OM-$UVM2$ data were reprocessed at MSSL using an unreleased reduction routine. Heliocentric corrections were applied to the EPIC and OM data of both sources. The SAS tasks *rmfgen* and *arfgen* were used to generate the photon redistribution matrix and the ancillary region files for all the EPIC cameras and RGS instruments.
IGR0023 was observed on July 10, 2007 (OBSID: 0501230201) with the EPIC-pn and MOS cameras operated in full frame imaging mode with the thin and medium filters, respectively. The total exposure times were 25 ks for EPIC-pn and 26.6 ks for both the MOS cameras. The RGS was operated in spectroscopy mode for a total exposure time of 26.9 ks. The OM was operated in fast imaging mode using sequentially the $V$ (5000–6000 Å) and $UVM2$ (2000–2800 Å) filters, for 9.8 ks each.
A $28\arcsec$ aperture radius was used to extract EPIC light curves and spectra from a circular region centered on the source and from a background region located on the same CCD chip where the source was imaged. In order to improve the S/N ratio, we filtered the data by selecting pattern pixel events up to double with zero quality flag for the EPIC-pn data, and up to quadruple pixel events for the EPIC-MOS data. The average background level of the EPIC cameras was quite low for almost all the duration of the observation, with the exception of a moderate flaring activity that occurred during the last $\sim 2000$ s of the EPIC-pn exposure. This flare did not significantly affect the data used for the timing analysis; however, we conservatively did not consider these events in the spectral analysis.
Due to the weakness of IGR0023, the RGS data had poor S/N ratios and therefore were not useful for spectral analysis.
Background subtracted OM-$V$ light curves were obtained with a binning time of 20 s, while the OM-$UVM2$ light curves were provided by MSSL with a binning time of 10 s. The average count rates were $3.00\ {\mbox{counts s}^{-1}}$ in the $V$ band and $0.18\ {\mbox{counts s}^{-1}}$ in the $UVM2$ band, corresponding to instrumental magnitudes $V = 16.8$ and $UVM2 = 17.6$ and average fluxes of $7.4 \times 10^{-16}\ {\mbox{erg cm}^{-2}\ \mbox{s}^{-1}\ \mbox{\AA}^{-1}}$ and $4.0 \times 10^{-16}\ {\mbox{erg cm}^{-2}\ \mbox{s}^{-1}\ \mbox{\AA}^{-1}}$, respectively. As a comparison, the continuum flux of the optical spectrum obtained by @masetti06 was $\sim 6 \times 10^{-16}\ {\mbox{erg cm}^{-2}\ \mbox{s}^{-1}\ \mbox{\AA}^{-1}}$.
RXJ2133 was observed on May 29, 2005 (OBSID: 0302100101) with all the EPIC cameras operated in full frame mode and with the medium filter. Due to high background radiation the observation time was shortened and the net exposure times were 13.6 ks for EPIC-pn, 16.5 ks for EPIC-MOS and 13.9 ks for RGS, the latter operated in spectroscopy mode. The OM, operated in fast imaging mode, was used sequentially with the $B$ filter, covering the spectral range 3900–4900 Å, and the $UVM2$ filter for 11.6 ks each. RXJ2133 was observed again on July 06, 2005 (OBSID: 0302100301) with the same instrumental configurations and with net exposure times of 9.9 ks for EPIC-pn, 13.7 ks for EPIC-MOS and 13.9 ks for RGS. The OM was only operated with the $UVM2$ filter for a total exposure time of 8.4 ks. The source was found at about the same count rate at the two epochs in all instruments.
EPIC light curves and spectra of source and background were extracted from circular regions of $37\arcsec$ radius. The same filtering adopted for IGR0023 was applied to improve the S/N ratio for both cameras. During the observation of May 2005 the background was moderately active, but not enough to affect the timing analysis. Filtering of higher background periods was done only during the extraction of spectra from both EPIC and RGS data. These spectra were rebinned to have a minimum of 25 and 20 counts per bin, respectively.
OM $B$ and $UVM2$ light curves were extracted with a binning time of 10 s and 20 s, respectively. The average count rates were about $25\ {\mbox{counts s}^{-1}}$ in the $B$ band and $0.7\ {\mbox{counts s}^{-1}}$ in the $UVM2$ band, corresponding to instrumental magnitudes $B = 15.8$ and $UVM2 = 16.2$. These translate into $B$ and $UVM2$ fluxes of $3 \times 10^{-15}\ {\mbox{erg cm}^{-2}\ \mbox{s}^{-1}\ \mbox{\AA}^{-1}}$ and $1.5\times 10^{-15}\ {\mbox{erg cm}^{-2}\ \mbox{s}^{-1}\ \mbox{\AA}^{-1}}$, respectively.
The *INTEGRAL* observations
---------------------------
The *INTEGRAL* IBIS/ISGRI [@ubertini03; @lebrun03] hard X-ray data of both sources were extracted from all pointings within $12\degr$ of the source positions, spanning from March 2003 to October 2006. The total effective exposure times are $\sim 6.9$ Ms (2875 pointings) and $\sim 3.74$ Ms (1563 pointings) for IGR0023 and RXJ2133, respectively. To study the weak persistent X-ray emission, the time-averaged ISGRI spectra have been obtained from mosaic images in five energy bands, logarithmically spaced between 20 and 100 keV. Data were reduced with the standard OSA software version 7.0 and then analyzed using the algorithms described by @goldwurm_etal03.
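The five logarithmically spaced band edges between 20 and 100 keV can be reproduced with a few lines (a sketch; the exact channel boundaries used within OSA may differ):

```python
import math

def log_bands(e_min, e_max, n_bands):
    """Edges of n_bands logarithmically spaced energy bands over [e_min, e_max] (keV)."""
    step = (math.log10(e_max) - math.log10(e_min)) / n_bands
    return [10 ** (math.log10(e_min) + i * step) for i in range(n_bands + 1)]

edges = log_bands(20.0, 100.0, 5)  # six edges delimiting five bands, 20-100 keV
```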
The *Suzaku* observations of RXJ2133
------------------------------------
RXJ2133 was observed with *Suzaku* between Apr 29, 2006 and May 1, 2006 (sequence number 401038010). We have analyzed data from the X-ray imaging spectrometer (XIS, @koyama07) and the non-imaging hard X-ray detector (HXD, @taka07). The observation was done with the object at the “HXD-nominal” pointing position, $\sim 5 \arcmin$ off-axis from the center of field-of-view (FOV) of the XIS, to optimize the S/N ratio of the HXD data. We based our analysis on data processed using the V2.0.6.13 pipeline released as a part of HEADAS 6.3.1.
For the XIS, we updated the energy scale calibration using the February 1, 2008 release of the calibration database. We then applied the following screening criteria: attitude control system in nominal mode, pointing within 15 of the mean direction, XIS data rate medium or high, the satellite outside the South Atlantic Anomaly (SAA) and at least 180 s after the last SAA passage, elevation above the Earth limb $>5 \degr$, elevation above the bright Earth limb $>15 \degr$. An inspection of the XIS image revealed a second source, near the center of the field-of-view (FOV), with a flux of $6.7 \times 10^{-13}\ {\mbox{erg cm}^{-2}\ \mbox{s}^{-1}}$ in the 2–10 keV band. This same source is seen in the *XMM-Newton* data at similar flux levels [^2]. Although faint, we conservatively excluded this source from our analysis of the XIS data. We cannot do so for the HXD data analysis, but it is expected to have a negligible effect.
We used a 35 radius extraction region and an annular extraction region of 75 outer and 4 inner radii for the background, both centered on the position of RXJ2133. For spectroscopy, we summed the data and the responses of the three XIS-FI units because they have nearly identical responses. For photometry, we added background subtracted light curves from all the XIS units over the energy range 0.3 (FI) / 0.2 (BI) – 12 keV.
For the HXD data, we took the PIN event data from the processing pipeline and applied the dead time correction. We obtained the “tuned” non X-ray background files [@fukazawa09], estimated by the HXD team using LCFITDT method. For phase-resolved spectroscopy, we used the phase-averaged background and dead-time fraction, since both tend to vary on a longer time scale.
Data analysis and results
=========================
IGR0023
-------
### The X-ray variability
The EPIC-pn light curve extracted in the energy range 0.2–10.0 keV and binned in 20 s time intervals reveals a clear variability with a time scale of the order of $\sim 10$ min. We have not analyzed the EPIC-MOS light curves because they were too noisy to provide reliable results. The power spectrum of the full-band EPIC-pn light curve (see Fig. \[fig:0023pow\]) shows significant peaks at $\omega \sim 154\ \mathrm{d}^{-1}$, at $2 \omega$ and also at low frequencies. A peak at $f_1 \sim 22\ \mathrm{d}^{-1}$, although its significance is below $2 \sigma$, is close to a pseudo-periodicity detected in the optical [@bonnetbidaud07].
A sinusoidal fit was performed on the EPIC-pn light curve, previously corrected for low-frequency trends using a third-order polynomial. We used three sinusoids with different frequencies accounting for all the observed peaks, thus finding $\omega = 153.83 \pm 0.15\ \mbox{d}^{-1}$ and $f_1 = 21.94 \pm 0.15\ \mbox{d}^{-1}$ (errors are at the $1 \sigma$ confidence level). The inferred period $P_\omega = 561.64 \pm 0.56$ s can be identified with the WD spin period, since the difference with respect to the optical period of @bonnetbidaud07 is not significant.
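The quoted period and its uncertainty follow from the fitted frequency through $P = 86400/\omega$, with the $1 \sigma$ error propagated to first order, $\sigma_P = P\,\sigma_\omega/\omega$. A minimal check, using the numbers of the sinusoidal fit above:

```python
# Convert a frequency in cycles/day to a period in seconds,
# propagating the 1-sigma uncertainty to first order:
# P = 86400 / omega  =>  sigma_P = P * sigma_omega / omega.

def period_from_frequency(omega, sigma_omega):
    """omega in d^-1; returns (P, sigma_P) in seconds."""
    p = 86400.0 / omega
    return p, p * sigma_omega / omega

# Values fitted for IGR0023 (see text): omega = 153.83 +/- 0.15 d^-1
p, sp = period_from_frequency(153.83, 0.15)
print(f"P = {p:.2f} +/- {sp:.2f} s")   # ~561.66 +/- 0.55 s, matching P_omega above
```

The same conversion applied to the *Suzaku* frequency of RXJ2133 quoted later ($151.350 \pm 0.009\ \mbox{d}^{-1}$) reproduces the $570.862 \pm 0.034$ s period.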
A Fourier analysis was also performed on EPIC-pn light curves extracted with a 40 s binning time in selected energy bands: 0.2–1.0 keV, 1.0–2.0 keV and 2.0–10.0 keV (see Fig. \[fig:0023pow\]). Peaks at $\omega$ and $2 \omega$ are clearly detected in the low and intermediate energy bands, while they do not appear at high energies. Instead, the pseudo-periodicity at low frequency is visible only in the intermediate 1–2 keV band.
We then folded the EPIC-pn light curves with the 561.64 s spin period (Fig. \[fig:0023flc\]) using the time of maximum obtained from the sinusoidal fit of the 0.2–10.0 keV light curve: $HJD = 2454291.8667(2)$. These light curves show a quasi-sinusoidal modulation only below 2.0 keV, with a secondary maximum at $\phi \sim 0.5$. The pulse amplitude is $\sim 50 \%$ in the 0.2–1.0 keV range and $\sim 16 \%$ in the 1.0–2.0 keV range. The count ratio between the 1.0–2.0 keV and the 0.2–1.0 keV bands indicates a hardening of the emission at the secondary maximum.
### The visible and UV light curves
The power spectrum of the OM-$V$ light curve shows a strong peak at the X-ray spin period, while that of the $UVM2$ light curve does not reveal any significant peak. The average count rate is not constant in the 5 observations with the $V$ filter spanning $\sim 3/4$ of the orbital cycle, probably suggesting a dependence on the orbital period. The spin-folded OM-$V$ light curve (lower panel of Fig. \[fig:0023flc\]) is nearly sinusoidal, with an amplitude of $8 \pm 1 \%$, and shows a single peak at a phase consistent with that of the main maximum of the X-ray pulse. We notice that @bonnetbidaud07 also found a single-peaked pulsation in their observations carried out with the Gunn-$g$ filter.
### Spectral properties of IGR0023
The EPIC-pn and combined MOS spectra (in the range 0.3–10.0 keV) and the IBIS/ISGRI spectrum (in the range 20–100 keV) were simultaneously analyzed with the XSPEC 12 package. An absorbed isothermal optically thin MEKAL component plus a zero-width Gaussian at 6.4 keV fits the spectrum relatively well ($\chi_{\nu}^2 = 1.04$), but the temperature is unconstrained ($> 78$ keV). The fit improves using a multi-temperature CEMEKL emission component and a dense ($N_H \sim 10^{23}\ \mathrm{cm}^{-2}$) absorber covering $\sim 40 \%$ of the source (model A in Table \[tab:spectra0023\]), although the $\alpha$ parameter assumes an unreasonably high value. Metal abundances are consistent, within errors, with the solar values. The total absorber is likely of interstellar origin, as it is comparable to the total Galactic absorption in the direction of the source ($N_{\mathrm{H, gal}} = 7.4 \times 10^{21}\ \mathrm{cm}^{-2}$, @dickeylockman90). The dense partial absorber, instead, is likely located close to the source, as suggested by the energy dependence of the spin light curve. We also obtained similar quality fits by substituting the CEMEKL component with 2 (Fig. \[fig:0023spec\]) or 3 MEKALs (models B and C in Table \[tab:spectra0023\], respectively). In both cases, we find a low ($k T \sim 0.17$ keV) and an intermediate temperature ($k T \sim 10$ keV) component and, in model C, a lower limit to the temperature of the third MEKAL of $k T > 27$ keV. The observed X-ray flux in the 0.2–10.0 keV range is $6.8 \times 10^{-12}\ {\mbox{erg cm}^{-2}\ \mbox{s}^{-1}}$. We notice that only models B and C account for the (21.9 Å) and (19.1 Å) lines barely detectable in the low quality RGS spectra.
The *INTEGRAL* spectrum above 60 keV seems to be underpredicted by the three best-fit models but, given the poor S/N ratio, the excess of counts is not significant. The addition of a reflection component, suggested by the presence of the Fe line at 6.4 keV, does not improve the fits. We also fitted the broad-band spectrum of IGR0023 with the recent post-shock region (PSR) model of @suleimanov08, which computes the emergent spectrum taking into account also Compton scattering ($\chi^2$ / d.o.f. = 136/110). We obtained a shock temperature $k T_{\mathrm{shock}} = 51 \pm 11$ keV and absorption parameters consistent with those found with models A, B and C. The contribution of Compton scattering is found to be $\sim 10 \%$ and, within errors, does not seem to affect the temperature determination.
A phase-resolved analysis of the EPIC-pn spectrum of IGR0023 was performed selecting two phase intervals centered on the two maxima of the spin pulse ($\phi = 0.85 - 1.15$ and $\phi = 0.35 - 0.65$, respectively). We used model B, since models A and C would give badly constrained parameters. The hydrogen column density of the total absorber, the temperatures of the two MEKALs and the metal abundance were kept fixed to the values obtained for the average spectrum. As shown in Table \[tab:phspectra0023\], the normalization of the low-temperature optically thin component is significantly lower at the secondary maximum, where we also find marginal evidence of an increase of the covering fraction of the local absorber.
Parameters Maximum 1 Maximum 2
--------------------------------------------------- ------------------------ ---------------------
$N_{\mathrm{H}}$ ($10^{23}\ \mathrm{cm}^{-2}$) $1.5_{-0.6}^{+1.0}$ $1.2_{-0.3}^{+0.5}$
Cov. Frac. $0.36_{-0.07}^{+0.06}$ $0.51 \pm 0.04$
$C_1$ ($10^{-4}$) $1.2_{-0.6}^{+0.3}$ $0.5 \pm 0.5$
$C_2$ ($10^{-2}$) $1.5_{-0.1}^{+0.2}$ $1.7 \pm 0.1$
$F_{0.2-10.0\ \mathrm{keV}}$ 2.06 2.00
($10^{-12}\ {\mbox{erg cm}^{-2}\ \mbox{s}^{-1}}$)
$\chi_{\nu}^2$ ($\chi^2$ / d.o.f.) 1.01 (259/257) 1.10 (258/235)
: Spectral parameters obtained from fitting the EPIC-pn spectra of IGR0023 extracted in the phase intervals quoted in the text with model B shown in Table \[tab:spectra0023\].[]{data-label="tab:phspectra0023"}
RXJ2133
-------
### The X-ray periodicities in RXJ2133
The EPIC light curves, extracted in the 0.2–12.0 keV energy range and binned in 5 s time intervals, and the *Suzaku* XIS light curve, extracted with a binning time of 8 s, show a variability of the order of 10 min. In the EPIC light curve of May 2005 a quasi-sinusoidal trend with an apparent period of $\sim 2$ hr is found, while in July 2005 a $\sim 1$ hr pseudo-periodicity is detected. However, in the later XIS observation we do not find any evidence of such a variability.
We then performed a Fourier analysis on the light curves at the three epochs. Peaks at the optically identified spin frequency, $\omega$, and at $2\omega$ are clearly detected. In Fig. \[fig:2133pow\] we report the full-band XIS power spectra, as well as that obtained in different energy bands. Sinusoidal fits to the full-band light curves give $\omega = 151.30 \pm 0.25\ \mbox{d}^{-1}$, $\omega = 151.55 \pm 0.78\ \mbox{d}^{-1}$ and $\omega = 151.350 \pm 0.009\ \mbox{d}^{-1}$ for the May and July 2005 *XMM-Newton* and April 2006 *Suzaku* data sets, respectively. In the analysis of the EPIC-pn light curves, a third sinusoid has to be included, accounting for the low frequency variations, thus giving $f_{\mathrm{May 2005}} = 43.78 \pm 0.29\ \mbox{d}^{-1}$ and $f_{\mathrm{July 2005}} = 27.87 \pm
0.41\ \mbox{d}^{-1}$. We identify the precise *Suzaku* period $P_\omega = 570.862 \pm 0.034\ \mathrm{s}$ with the true spin period of the accreting WD. This agrees, within errors, with the optical $570.823 \pm 0.013$ s period.
We folded the EPIC-pn and XIS light curves at the 570.86 s X-ray spin period using the times of maximum of the pulsation at $\omega$: $HJD_{\mathrm{May 2005}} = 2453520.1444(2)$, $HJD_{\mathrm{July 2005}} = 2453558.3134(4)$ and $HJD_{\mathrm{April 2006}} = 2453856.09618(5)$. The three folded light curves (Fig. \[fig:2133flc1\]) present two maxima at phases 0.9 and 0.35, with a dip at phase $\sim 0.1$ that is more evident in the May 2005 data. The full amplitude of the primary maximum at $\phi \sim 0.9$ is nearly the same at the three epochs ($\sim 36 \%$), while that of the secondary maximum decreases from $\sim 13 \%$ in May 2005 to $\sim 3\%$ in 2006.
The energy-resolved EPIC-pn light curves (Fig. \[fig:2133flc2\]) generally show the secondary maximum dominating the emission in the soft 0.2–0.5 keV band and the primary maximum dominating in the range 0.5–2.0 keV. At higher energies the two maxima have similar amplitudes of $\sim 10 \%$. The XIS light curves, though broadly similar to the EPIC-pn light curves, present an evident secondary maximum only in the hard X-rays.
### The UV and optical light variability of RXJ2133 {#sec:om2133}
The power spectrum of the OM-$B$ light curve shows a strong peak at the spin frequency (consistent within errors with the X-ray and the previous optical determinations). We also detect power at low frequencies, but we cannot establish whether this indicates a true periodic variability. The OM-$B$ light curve of May 2005 folded at the spin period (see Fig. \[fig:2133flc1\]) is single-peaked, with a broad maximum at phase $\sim 0$ and a full amplitude of $14 \%$. The UV light curve, instead, is almost unmodulated in May 2005, while in July 2005 it is sinusoidal with a maximum shifted by half a period with respect to the OM-$B$ light curve of May 2005. A similar antiphased behavior is also seen in some soft X-ray IPs, like UU Col [@demartino06a]. However, we point out that each OM light curve covers only $\sim 41\%$ of the orbital period and hence could be affected by orbital-dependent changes of the spin modulation.
### The X-ray spectrum of RXJ2133
The X-ray spectrum of RXJ2133 was studied in the broad-band energy range 0.3–100 keV using the EPIC-pn and combined MOS data at each epoch, together with the IBIS/ISGRI spectrum. We also analyzed the combined XIS-FI, XIS-BI and HXD spectra in the restricted range 0.3–40 keV, where the S/N ratio is still good. The average X-ray spectrum is well fitted by a multi-temperature optically thin component plus a rather hot ($k T_{\mathrm{BB}} \sim 100$ eV) blackbody, which is required to fit the data at energies below 1 keV. A total absorber with a low hydrogen column density, likely of interstellar origin (the total Galactic absorption along the direction of the source is $N_{\mathrm{H, gal}} = 7.35 \times 10^{21}\ \mbox{cm}^{-2}$, @dickeylockman90), and a dense ($N_H \sim 10^{23}\ \mbox{cm}^{-2}$) absorber partially covering the source are also required. In all the spectral fits we fixed the $N_H$ of the total absorber to the value found for the May 2005 data, since it is not expected to vary with time, and used an unresolved Gaussian line to account for the fluorescence Fe line at 6.4 keV. The best-fit parameters are reported in Table \[tab:spectra2133\], where the optically thin components in models A, B and C are the same used in the corresponding spectral models of IGR0023. We have marked with *1* and *2* the May and July 2005 *XMM-Newton* spectra, respectively, while *3* indicates the *Suzaku* spectra [^3]. Also for this object, the *INTEGRAL* spectrum is underpredicted above $\sim 60$ keV.
In general, we find no significant differences between the spectral parameters of May 2005 and July 2005, with the exception of the normalization of the blackbody component that is slightly higher in the earliest epoch. The absorbed flux in the range 0.2–10.0 keV is $2.4 \times 10^{-11}\ {\mbox{erg cm}^{-2}\ \mbox{s}^{-1}}$. In the *Suzaku* spectra, the temperature of the hottest MEKAL (or the maximum temperature of the CEMEKL model) attains higher values than in 2005. In addition, we find a decrease of the covering fraction of the partial absorber $(50 \%)$ and a lower value of the normalization of the blackbody (with the exception of model C). The absorbed flux in the 0.2–10.0 keV range is $2.5 \times 10^{-11}\ {\mbox{erg cm}^{-2}\ \mbox{s}^{-1}}$, in agreement with that found in the two observations of 2005. Although we find different values of the bolometric fluxes of both the blackbody and the optically thin components between the two epochs, they are consistent within errors: $F_{\mathrm{BB}} = (4.1_{-0.4}^{+0.5} - 4.8_{-0.8}^{+0.6}) \times10^{-11}\ {\mbox{erg cm}^{-2}\ \mbox{s}^{-1}}$ and $F_{\mathrm{hard}} = (7.3_{-1.4}^{+1.1} - 7.6_{-0.9}^{+0.8}) \times10^{-11}\ {\mbox{erg cm}^{-2}\ \mbox{s}^{-1}}$ in 2005, $F_{\mathrm{BB}} = 4.0_{-0.6}^{+1.0} \times 10^{-11}\ {\mbox{erg cm}^{-2}\ \mbox{s}^{-1}}$ and $F_{\mathrm{hard}} > 8.3 \times 10^{-11}\ {\mbox{erg cm}^{-2}\ \mbox{s}^{-1}}$ in 2006. Therefore, the ratio between the soft and hard X-ray fluxes is of the order of 0.5.
Because of the similar temperatures of the optically thin components of model C found at the three epochs, we adopt this model to describe the broad-band spectrum of RXJ2133 (see Fig. \[fig:2133spec\] and Fig. \[fig:2133specsuz\]). Although there is clear evidence of a temperature gradient in the post-shock region, the broad-band spectra do not provide constraints on the lower temperatures. Instead, the presence of a low-temperature plasma can be inferred from the May 2005 RGS spectra, where the strong ($0.651 \pm 0.002$ keV, $E.W. = 22$ eV) and ($1.02 \pm 0.01$ keV, $E.W. = 14$ eV) lines are clearly visible (see Fig. \[fig:2133spec\]). The line at 0.58 keV ($E.W. = 8$ eV) and the line at 0.73 keV ($E.W. = 11$ eV) are likely present, while there is only weak evidence of the line. We then estimate a ratio $\sim 2.5$ between the H- and He-like oxygen lines, which would imply a temperature of $\sim 0.3-0.4$ keV. This is also supported by the Ne line ratio of $\sim 1.5$, again indicating $k T \sim 0.4$ keV.
The Fe line at 6.4 keV has a large equivalent width ($E.W. = 150 - 170$ eV), suggesting a Compton reflection component. However, we do not find improvements of the fits with the inclusion of a reflection component. We then applied the PSR model to the combined *XMM-Newton* (May 2005) and *INTEGRAL* spectra ($\chi^{2}$ / d.o.f. = 347/279), as well as to the *Suzaku* spectra ($\chi^{2}$ / d.o.f. = 395/359). We find a shock temperature of $50 \pm 2$ keV and $53 \pm 3$ keV, respectively, while the parameters of the absorbers and of the blackbody component are consistent, within errors, with those found using models A, B and C.
We also performed a phase-resolved analysis of the EPIC-pn (May 2005) and XIS spectra. The former was extracted in phase intervals approximately centered on the two maxima ($\phi = 0.72 - 0.94$ and $\phi = 0.16-0.28$) and at the minimum ($\phi = 0.44-0.61$). Instead, the higher quality XIS spectrum was extracted in 4 consecutive phase intervals, namely $\phi = 0.05-0.3$ (P1), $0.3-0.55$ (P2), $0.55-0.8$ (P3) and $0.8-1.05$ (P4). We fitted these spectra adopting model C, fixing $N_H$ of the total absorber, the temperatures of all the emission components and $A_Z$ to the values found for the corresponding average spectrum. The results reported in Table \[tab:phspectra2133\] show significant differences in some of the parameters. The normalization of the soft MEKAL increases at the primary maximum in the two epochs, whilst only upper limits are found at other phases. Also, a decrease of the hydrogen column density of the partial absorber and a slight increase of the blackbody normalization at the secondary maximum are found only in May 2005.
------------------------------------------------------------------ ------------------------ ------------------------ ------------------------ ------------------------ ------------------------ ------------------------ ---------------------
Parameters Maximum 1 Maximum 2 Minimum P1 P2 P3 P4
$N_{\mathrm{H}}^{\mathrm{PCFABS}}$ ($10^{23}\ \mathrm{cm}^{-2}$) $1.4_{-0.4}^{+0.6}$ $0.6 \pm 0.2$ $1.2_{-0.4}^{+0.6}$ $1.1 \pm 0.2$ $1.0 \pm 0.1$ $1.1 \pm 0.2$ $1.1 \pm 0.2$
$C_F$ $0.45_{-0.06}^{+0.05}$ $0.56_{-0.07}^{+0.06}$ $0.55_{-0.07}^{+0.05}$ $0.44 \pm 0.02$ $0.53 \pm 0.01$ $0.46 \pm 0.02$ $0.38 \pm 0.02$
$C_{\mathrm{BB}}$ ($10^{-4}$) $4.7 \pm 0.5$ $6.4_{-0.9}^{+1.0}$ $4.6_{-0.6}^{+0.7}$ $4.7 \pm 0.2$ $4.9 \pm 0.2$ $4.2 \pm 0.2$ $4.4 \pm 0.2$
$C_1$ ($10^{-4}$) $14 \pm 3$ $< 2.5$ $< 4.2$ $< 0.8$ $< 0.2$ $< 1.5$ $6.4 \pm 0.8$
$C_2$ ($10^{-3}$) $< 2.3$ $5.2_{-2.6}^{+2.7}$ $2.6_{-2.2}^{+2.5}$ $3.6_{-0.7}^{+0.8}$ $3.9_{-0.6}^{+0.5}$ $2.7 \pm 0.7$ $3.7_{-0.7}^{+0.6}$
$C_3$ ($10^{-2}$) $1.6 \pm 0.2$ $1.2 \pm 0.2$ $1.4 \pm 0.2$ $1.48_{-0.08}^{+0.09}$ $1.55_{-0.06}^{+0.07}$ $1.47_{-0.08}^{+0.09}$ $1.32 \pm 0.08$
$F_{0.2-10.0\ \mathrm{keV}}$ 2.51 2.38 2.12 2.52 2.48 2.35 2.47
($10^{-11}\ {\mbox{erg cm}^{-2}\ \mbox{s}^{-1}}$)
$\chi_{\nu}^2$ 1.04 0.96 0.97 0.82 0.87 0.80 0.87
($\chi^2$ / d.o.f.) (434/418) (266/276) (280/290) (1651/2019) (1721/1982) (1548/1939) (1747/2013)
------------------------------------------------------------------ ------------------------ ------------------------ ------------------------ ------------------------ ------------------------ ------------------------ ---------------------
Discussion
==========
IGR0023
-------
The *XMM-Newton* data confirm the previously identified optical 561 s period as the spin period of the accreting WD. A pseudo-periodicity of the order of 1 hr, though with a significance level below $3 \sigma$, seems to be present above 1 keV and was previously found also in optical data [@bonnetbidaud07]. This variability, if real, is not easy to explain. Assuming a circular Keplerian orbit, a periodicity of 1.09 hr implies a radius of $\sim 3 \times 10^{10}$ cm. For comparison, adopting a mass ratio $q = 0.5$ and a WD mass of $\sim 0.9 M_{\sun}$ (see below), the outer edge of the accretion disc would be at $4 \times 10^{10}$ cm. This may suggest an origin within the disc, although it is unlikely to be due to absorption effects such as those found in many IPs [@parker_etal05].
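The Keplerian radius quoted above follows from Kepler's third law, $r = (G M P^2/4\pi^2)^{1/3}$. A minimal sketch in CGS units, assuming the $\sim 0.9 M_{\sun}$ WD mass discussed below:

```python
import math

G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33      # solar mass, g

def kepler_radius(period_s, m_grams):
    """Radius of a circular Keplerian orbit of the given period (CGS)."""
    return (G * m_grams * period_s**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

# 1.09 hr pseudo-periodicity around a 0.9 M_sun WD
r = kepler_radius(1.09 * 3600.0, 0.9 * M_SUN)
print(f"r = {r:.2e} cm")   # ~3.6e10 cm, of the order of the ~3e10 cm quoted above
```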
The spin pulse is detected only below 2 keV, where the power spectrum shows substantial signal also at $2 \omega$. The double-humped spin pulse and the increase of the amplitudes with decreasing energy could be the signatures of two accreting poles, as well as of variable absorption. From the phase-resolved spectra, we find a decrease by a factor of $\sim 2$ in the normalization constant of the MEKAL at 0.17 keV, as well as an increase of a dense partial absorber at the secondary maximum. This prevents us from isolating the true contribution of a secondary pole. In addition, the optical pulse is single-humped, thus not revealing a secondary pole. Hence, we are left with the uncertainty of whether the X-ray pulse shape is due to the presence of a secondary and less active pole or to phase-dependent absorption effects in the accretion curtain [@rosen88].
The X-ray emission in IGR0023, extending up to $\sim 90$ keV, requires a multi-temperature optically thin plasma with a temperature distribution more complex than a simple power law. The spectral fits indicate a likely maximum temperature of $\sim 50$ keV. High plasma temperatures were also inferred from the broad-band analysis of the combined *XMM-Newton – INTEGRAL* spectra of 1RXS J173021.5-055933 [@demartino08] and from hard X-ray observations of several other IPs [@landi08; @suleimanov08; @brunschweiger09]. Although caution has to be taken in interpreting these temperatures in terms of the WD mass, the extension of the spectra above 30 keV allows us to directly observe the exponential cut-off of the underlying continuum, and hence the inferred temperatures are likely to be more reliable than those based only on softer energy coverage (e.g. *XMM-Newton*). For IGR0023 we estimate a WD mass of $0.91^{+0.14}_{-0.16} M_{\sun}$, consistent with that found by @brunschweiger09 from the analysis of *Swift/BAT* data. Although relatively massive WD primaries were inferred in several magnetic CVs [see e.g. @ramsay00; @brunschweiger09], the mass distribution of magnetic CV primaries is not very different from that of non-magnetic systems [@ramsay00]. Hence, for a CV to be a hard X-ray source, there should be an additional parameter that plays an important role.
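The link between the maximum plasma temperature and the WD mass can be sketched with the strong-shock relation $k T_{\mathrm{shock}} = 3 G M \mu m_{\mathrm{H}}/(8 k R)$ combined with a mass–radius relation. The script below assumes the @nauenberg72 relation and a mean molecular weight $\mu = 0.617$; the PSR model used in the text includes more physics, so only rough agreement within the quoted errors is expected:

```python
import math

G, K_B, M_H = 6.674e-8, 1.381e-16, 1.673e-24   # CGS constants
M_SUN, MU = 1.989e33, 0.617                    # mean molecular weight: assumed

def nauenberg_radius(m_msun):
    """Nauenberg (1972) WD mass-radius relation, in cm."""
    x = (1.44 / m_msun) ** (2.0 / 3.0) - (m_msun / 1.44) ** (2.0 / 3.0)
    return 7.8e8 * math.sqrt(x)

def shock_temperature_kev(m_msun):
    """Strong-shock temperature kT = 3 G M mu m_H / (8 k R), in keV."""
    m = m_msun * M_SUN
    t_kelvin = 3.0 * G * m * MU * M_H / (8.0 * K_B * nauenberg_radius(m_msun))
    return t_kelvin * 8.617e-8  # K -> keV

def mass_from_shock(kt_kev, lo=0.3, hi=1.3):
    """Bisection for the WD mass that reproduces the observed shock temperature."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if shock_temperature_kev(mid) < kt_kev:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

m = mass_from_shock(51.0)       # kT_shock = 51 keV fitted for IGR0023
print(f"M_WD ~ {m:.2f} M_sun")  # ~0.9, consistent with the quoted 0.91 (+0.14/-0.16)
```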
The hard X-ray bolometric flux $F_{\mathrm{hard}} = 1.8 \times 10^{-11}\ {\mbox{erg cm}^{-2}\ \mbox{s}^{-1}}$ provides only an approximate estimate of the total accretion luminosity, since the pulsed components at other wavelengths need to be included and a secure distance estimate is missing. Here we can obtain only a lower limit using the 8 % pulsed $V$ band contribution, adopting $E(B-V) = 0.35$ as derived from the $N_H$ of the total absorber [@ryter75]. Also, assuming a lower limit of 500 pc for the distance of IGR0023, as estimated by @bonnetbidaud07 using the 2MASS $K$ band magnitude when attributed solely to the secondary star [@knigge06], we obtain $L_{\mathrm{accr}} > L_{\mathrm{hard}} + L_V = 5.5 \times 10^{32}\ {d_{\mathrm{500 pc}}^2\ \mbox{erg s}^{-1}}$. This in turn gives a lower limit for the accretion rate $\dot{M} > 2.5 \times 10^{15}\ {d_{\mathrm{500 pc}}^2\ \mbox{g s}^{-1}}$. Unless the distance is much larger and the hard X-rays do not trace the bulk of the accretion, this value is much lower than the $2.2 \times 10^{17}\ {\mbox{g s}^{-1}}$ predicted for its 4.033 hr orbital period [@warner].
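The accretion-rate lower limit follows from $\dot{M} = L_{\mathrm{accr}} R_{\mathrm{WD}}/(G M_{\mathrm{WD}})$. A minimal sketch; the WD radius of $5.6 \times 10^8$ cm is an assumption here, taken from the value quoted for a similar-mass WD later in the text:

```python
G = 6.674e-8          # cm^3 g^-1 s^-2
M_SUN = 1.989e33      # g

def accretion_rate(l_accr, m_grams, r_cm):
    """Mdot = L R / (G M), all quantities in CGS."""
    return l_accr * r_cm / (G * m_grams)

# L_accr > 5.5e32 erg/s (d = 500 pc), M = 0.91 M_sun, R ~ 5.6e8 cm (assumed)
mdot = accretion_rate(5.5e32, 0.91 * M_SUN, 5.6e8)
print(f"Mdot > {mdot:.1e} g/s")   # ~2.5e15 g/s, as quoted above
```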
The short spin periods of IGR0023 and RXJ2133 are similar to those of YY Dra [@patterson92], V405 Aur [@skillman96] and 1RXS J070407.9+262501 [@gan05]. Fast rotators were proposed to possess weakly magnetized WDs [@norton99], although V405 Aur and RXJ2133 are among the few IPs showing polarized optical emission. Assuming pure disc accretion, as indicated by the absence of orbital sidebands in the power spectrum, a lower limit for the magnetic moment $\mu \gtrsim 9 \times 10^{31}\ {d_{\mathrm{500 pc}}\ \mbox{G cm}^3}$ is obtained using the inferred values of $M_{\mathrm{WD}}$ and $\dot{M}$. This could suggest that IGR0023 is not strongly magnetized and is not spinning at equilibrium.
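The magnetic-moment limit above comes from comparing the magnetospheric radius with the corotation radius $R_{\mathrm{co}} = (G M P^2/4\pi^2)^{1/3}$. A rough sketch, equating the spherical Alfvén radius $r_{\mathrm{A}} = [\mu^4/(2 G M \dot{M}^2)]^{1/7}$ to $R_{\mathrm{co}}$; the O(1) geometry factor between $r_{\mathrm{A}}$ and the true magnetospheric radius is an assumption, so only the order of magnitude is meaningful:

```python
import math

G, M_SUN = 6.674e-8, 1.989e33   # CGS

def corotation_radius(m_grams, period_s):
    """Radius where the Keplerian and spin angular velocities match."""
    return (G * m_grams * period_s**2 / (4 * math.pi**2)) ** (1 / 3)

def mu_at_corotation(m_grams, mdot, r_co):
    """mu such that the Alfven radius equals R_co: mu^4 = 2 G M Mdot^2 R_co^7."""
    return (2 * G * m_grams * mdot**2 * r_co**7) ** 0.25

m = 0.91 * M_SUN
r_co = corotation_radius(m, 561.64)     # ~9.9e9 cm
mu = mu_at_corotation(m, 2.5e15, r_co)
print(f"R_co = {r_co:.2e} cm, mu ~ {mu:.1e} G cm^3")
```

With these conventions the estimate comes out at $\sim 6 \times 10^{31}$ G cm$^3$, the same order of magnitude as the quoted lower limit.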
RXJ2133
-------
Our X-ray analysis confirms the 571 s period detected in optical photometry and polarimetry [@bonnetbidaud06; @katajainen07] as the rotational period of the WD primary. RXJ2133 is, therefore, one of the most asynchronous systems among IPs, with a ratio between the spin and the orbital periods of 0.022.
The double peaked X-ray pulse shows a complex energy dependence and might be interpreted in terms of absorption and two emitting poles. The energy-resolved light curves suggest that the primary pole dominates the emission in the energy range 0.5–5.0 keV, while the two poles contribute similarly at higher energies. This is also found in the phase-resolved spectral analysis where the main changes are due to the emitting volume of an intermediate temperature region. The soft X-ray blackbody component is seen at all spin phases, suggesting either that the irradiated main pole is never occulted or that reprocessing occurs at both poles. However, in 2005 the secondary maximum is dominant in the soft band, and at this phase we find a decrease in the absorption and a slight increase in the blackbody normalization. Hence, when the main pole points away from the observer, we are viewing the contribution of the irradiated area of the secondary pole. In order to have a lower absorption at this phase, a modification of the standard accretion curtain model [@norton99], where the optical depth is largest when viewing the curtain perpendicularly, could be envisaged.
UV and optical data also support a scenario with two accreting poles whose contributions, however, vary with time. The *XMM-Newton* UV observation of July 2005 shows a weak modulation antiphased with the X-ray pulse, while in May 2005 the UV light is not modulated but the $B$ band pulse is single-peaked. Previous white-light photometry acquired in 2003 [@bonnetbidaud06] shows the presence of two poles, since the first harmonic of both the spin frequency and the beat is clearly detected. This is not seen in the 2006 optical photometry and polarimetry [@katajainen07], nor in the *Suzaku* X-ray data of the same year. Note that @bonnetbidaud06 suggested a relatively low binary inclination $(i \leq 45 \degr)$ and the behavior of the circular polarization along the spin cycle indicates a magnetic colatitude $\beta \sim 90\degr - i$ [@katajainen07]. This implies $i \geq 45\degr$ and hence that the secondary pole, whenever active, can be observed.
The broad-band X-ray spectrum of RXJ2133 is well fitted by a multi-temperature optically thin plasma with a likely maximum temperature $k T_{\max} \sim 50$ keV. As for IGR0023, a power-law multi-temperature flow appears too simple and inadequate a description of the post-shock emitting region, implying that the emergent spectrum is highly sensitive to local pressure and temperature across the flow. Furthermore, a blackbody component at 100 eV is also required to account for the soft X-ray emission. To date this component has been detected in $\sim 42 \%$ of the IPs, thus appearing to be a common characteristic of magnetic CVs and not solely of polars [@anzolin08]. If produced by the heating of a small area surrounding the WD polar cap, the range of values of the normalization constant obtained by applying model C to the spectra of RXJ2133 (see Table \[tab:spectra2133\]) implies an emission area of $1.6 - 1.9 \times 10^{13}\ d_{\mathrm{600 pc}}^2\ \mbox{cm}^2$. Here we use a minimum distance to the source as derived by @bonnetbidaud06 using the 2MASS $K$ band magnitude. Assuming $k T_{\max} \sim k T_{\mathrm{shock}}$, we infer a WD mass of $0.93 \pm 0.04 M_{\sun}$, in agreement with the values found by @bonnetbidaud06 and @brunschweiger09, and a radius of $5.6 \times 10^8$ cm [@nauenberg72]. The fractional area of the blackbody emitting region is $f \sim 2 - 3 \times 10^{-6}\ d_{\mathrm{600 pc}}^2$. This could represent a small core on the WD pole, as found in other soft X-ray IPs with similar temperatures [@haberl02; @demartino08]. We also point out that a $\sim 100$ eV blackbody would imply that the emission from the accretion region locally exceeds the Eddington limit. At that temperature, the radiation pressure is 4 times stronger than the gravity at the surface of a $0.93 M_{\sun}$ WD. Clearly we still need a better understanding of emission mechanisms to solve this puzzle.
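Two of the numbers above can be cross-checked with a short script: the emitting area from the blackbody normalization ($C_{\mathrm{BB}} \approx 4.7 \times 10^{-4}$ in Table \[tab:phspectra2133\]), assuming an XSPEC-style bbody normalization ($L/10^{39}\ \mbox{erg s}^{-1}$ at 10 kpc), and the local super-Eddington statement by comparing the blackbody flux $\sigma T^4$ with the local Eddington flux $G M c/(\kappa R^2)$. The opacity and the normalization convention are assumptions here, so only order-of-magnitude agreement is expected:

```python
SIGMA_SB = 5.6704e-5      # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
G, C = 6.674e-8, 2.998e10
M_SUN, KEV_IN_K = 1.989e33, 1.1605e7
KAPPA_ES = 0.34           # electron-scattering opacity, cm^2/g (assumed)

def bb_area(norm, d_pc, kt_kev):
    """Emitting area from an XSPEC-like bbody norm (norm = L39 / D_10kpc^2)."""
    lum = norm * 1e39 * (d_pc / 1e4) ** 2     # erg/s
    t = kt_kev * KEV_IN_K                     # K
    return lum / (SIGMA_SB * t**4)            # cm^2

a = bb_area(4.7e-4, 600.0, 0.1)               # kT_BB = 100 eV, d = 600 pc
print(f"A ~ {a:.1e} cm^2")                    # ~1.6e13 cm^2, as quoted above

# Local Eddington flux at the surface of a 0.93 M_sun, 5.6e8 cm WD.
f_bb = SIGMA_SB * (0.1 * KEV_IN_K) ** 4
f_edd = G * 0.93 * M_SUN * C / (KAPPA_ES * 5.6e8**2)
print(f"F_bb / F_edd ~ {f_bb / f_edd:.1f}")   # ~3: locally super-Eddington
```

The flux ratio of $\sim 3$ is of the same order as the factor of 4 quoted above, the exact value depending on the adopted opacity and radius.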
The bolometric flux of the blackbody represents $\sim 40 \%$ of the total bolometric X-ray emission. RXJ2133 shows circularly polarized emission up to $\sim 3 \%$ and hence the flux due to cyclotron radiation cannot be neglected. Since the blackbody flux does not exceed that of the hard X-ray emission, the reprocessed radiation seen in the soft X-rays does not balance the primary radiation components. We have found that the UV emission is modulated by $6 \%$ at the spin period in July 2005, but antiphased with respect to the hard X-ray component, thus suggesting that the reprocessed radiation at the WD surface is also emitted at UV wavelengths. The pulsed UV flux observed in July 2005, dereddened for $E(B-V) = 0.25$, is $F_{\mathrm{UVM2}} \sim 2.1 \times 10^{-15}\ {\mbox{erg cm}^{-2}\ \mbox{s}^{-1}\ \mbox{\AA}^{-1}}$. This value is much larger than that expected from the extrapolation of the soft X-ray flux towards UV wavelengths. Also, the UV flux integrated over the $UVM2$ band is only 4 % of the soft X-ray bolometric flux and hence provides only a lower limit to the UV luminosity.
To evaluate the mass accretion rate we then consider the hard and soft X-ray components, as well as the modulated fraction of the flux in the $B$ band of the May 2005 observation. A lower limit for the accretion luminosity is therefore: $L_{\mathrm{accr}} \gtrsim 5 \times 10^{33}\ {d_{\mathrm{600 pc}}^2\ \mbox{erg s}^{-1}}$. We obtain $\dot{M} \gtrsim 2.3 \times 10^{16}\ {d_{\mathrm{600 pc}}^2\ \mbox{g s}^{-1}}$, much lower than the secular value of $1.9 \times 10^{18}\ \mbox{g s}^{-1}$ predicted for its long 7.2 hr orbital period. From the detection of circular polarization in the optical range, which peaks in the $V$ band, @katajainen07 suggest a magnetic moment $\mu$ in the range $3 \times 10^{33} - 3 \times 10^{34}\ \mbox{G cm}^3$. The condition for accretion would then imply $\dot{M} \gtrsim 5.3 \times 10^{17}\ \mbox{g s}^{-1}$. Although this value is probably on the high side, it might indicate that a substantial contribution to the accretion luminosity comes from cyclotron radiation and not from the hard X-rays.
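Both accretion-rate figures above can be sketched in the same way as for IGR0023: the lower limit from $\dot{M} = L_{\mathrm{accr}} R/(G M)$, and the condition for accretion by requiring the magnetospheric radius not to exceed the corotation radius. Here the magnetospheric radius is assumed to be half the spherical Alfvén radius, a common but not unique convention:

```python
import math

G, M_SUN = 6.674e-8, 1.989e33   # CGS
M = 0.93 * M_SUN                # WD mass inferred in the text

# Mdot from the accretion luminosity: Mdot = L R / (G M),
# with L >~ 5e33 erg/s and R = 5.6e8 cm as quoted above.
mdot_lum = 5e33 * 5.6e8 / (G * M)
print(f"Mdot >~ {mdot_lum:.1e} g/s")   # ~2.3e16 g/s, as quoted

# Accretion condition for mu = 3e33 G cm^3: 0.5 * r_A(Mdot) <= R_co,
# with r_A = (mu^4 / (2 G M Mdot^2))^(1/7), giving a minimum Mdot.
r_co = (G * M * 570.86**2 / (4 * math.pi**2)) ** (1 / 3)
mu = 3e33
mdot_min = math.sqrt(mu**4 / (2 * G * M * (2 * r_co) ** 7))
print(f"Mdot >~ {mdot_min:.1e} g/s")   # ~5e17 g/s, close to the quoted 5.3e17
```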
Because of the wide range of temperatures found for the soft X-ray component, @anzolin08 proposed that $T_{\mathrm{BB}}$ could be related to the magnetic field strength. Lower temperatures would indicate higher-field systems because of the larger WD areas irradiated by cyclotron radiation. However, the recent detection of significant polarized emission in RXJ2133 [@katajainen07] and 1RXS J173021.5-055933 [@butters09] adds them to the subset of soft X-ray and polarized IPs, together with V2400 Oph, PQ Gem and V405 Aur. While the degree of polarization could also be affected by the accretion geometry, these five polarized systems possess soft blackbody components that span a wide range of temperatures, with three out of five displaying hot blackbodies. The inferred magnetic field strengths are quite uncertain for most systems, being 9–21 MG for PQ Gem [@potter97], 9–27 MG for V2400 Oph [@vaeth97] and $\sim 30$ MG for V405 Aur [@piirola08]. A 20 MG field was proposed for RXJ2133 [@katajainen07], while no field estimate is given for 1RXS J173021.5-055933 [@butters09]. To really test the proposal made by @anzolin08, spectro-polarimetric measurements of the magnetic field strengths are essential for all soft systems. Furthermore, there are indications that the reprocessed radiation emerges in the UV range also in IPs, as found in PQ Gem [@stavroyiannopoulos97] and UU Col [@demartino06a]. This aspect is essential to properly determine the reprocessed energy budget.
Conclusions
===========
The two CVs IGR0023 and RXJ2133 have been confirmed as true members of the IP class by using pointed X-ray observations with the *XMM-Newton* and *Suzaku* satellites. A strong pulsation at the WD rotational period dominates the power spectra of IGR0023 below 2 keV, while for RXJ2133 it is detected up to 12 keV. Both systems have fast rotating WDs, with $P_{\omega}$ of 561 s for IGR0023 and 571 s for RXJ2133. The latter is one of the most asynchronous systems among IPs, having a ratio between the spin and the orbital periods of 0.022. The fast rotation and very long orbital period, together with the detection of substantial polarized emission, suggest that RXJ2133 is likely a young IP that will evolve into a polar when attaining synchronism.
Their broad-band X-ray spectra were analyzed including also *INTEGRAL* data, which allowed us to cover the energy range 0.2–100 keV. They are well described by a multi-temperature plasma emission, with a minimum temperature of 0.2–0.3 keV and a maximum temperature of $\sim 50$ keV, which implies relatively massive WDs. While this might suggest that hard X-ray CVs harbor massive primaries, it still has to be understood whether this is the only ingredient for a CV to be a hard X-ray source.
In RXJ2133, a $\sim 100$ eV blackbody emission is also required to fit the soft portion of the spectrum. The temperature of this component is similar to that found in V2400 Oph [@demartino04] and 1RXS J173021.5-055933 [@demartino08], both of them also showing circularly polarized radiation. It is however different from that found in the other two soft X-ray and polarized IPs, PQ Gem and V405 Aur [@demartino04; @anzolin08]. This casts doubt on the possible relation between the soft X-ray temperature and the magnetic field strength proposed by @anzolin08. The present work also raises the further question of whether the reprocessed radiation in the soft and polarized IPs also emerges at UV wavelengths, as recently demonstrated by @konig06 for the polar prototype AM Her.
Furthermore, the fact that RXJ2133, V2400 Oph and 1RXS J173021.5-055933 are both bright hard X-ray sources and polarized IPs suggests that cyclotron cooling might not decrease the hard X-ray flux [@woelk_beuermann96; @fischer_beuermann01], at least for the magnetic field strength values covered by these IPs. Observations aiming at measuring the magnetic field strength of IPs will help in forming a clear picture of the emission mechanisms in these systems.
We acknowledge the *XMM-Newton* MSSL and SOC staff for help in the reduction of the OM data. DdM and GA acknowledge financial support from ASI under contract ASI/INAF I/023/05/06 and ASI/INAF I/088/06/0 and from INAF under contract PRIN-INAF 2007 N.17. We also acknowledge useful suggestions and comments from the referee, prof. Klaus Beuermann, that helped us to improve this work.
--- ----------------------- ----------------------- ------------------------ --------------------- --------------------- ------------------------ --------------------- --------------- --------------------- ------------------ --------------------- ------------------- ---------------------
$N_{\mathrm{H}}$ [^4] $N_{\mathrm{H}}$ [^5] $C_F$ [^6] $A_Z$ [^7] $\alpha$ [^8] $k T_1$ $C_1$ [^9] $k T_2$ $C_2$ [^10] $k T_3$ [^11] $C_3$ [^12] E.W. [^13] $\chi_{\nu}^2$
      (keV)                   (keV)                                                                            (keV)             (eV)                ($\chi^2$ / d.o.f.)
A $1.79 \pm 0.07$ $0.8 \pm 0.2$ $0.39_{-0.06}^{+0.05}$ $0.7 \pm 0.2$ $1.8_{-0.4}^{+0.6}$ $28_{-11}^{+19}$ $1.7_{-0.5}^{+1.3}$ $100_{-25}^{+30}$ 0.86 (892/1033)
B $2.0 \pm 0.1$ $1.1_{-0.2}^{+0.3}$ $0.44 \pm 0.04$ $0.5 \pm 0.1$ $0.17_{-0.04}^{+0.02}$ $3.0_{-1.7}^{+3.3}$ $14 \pm 2$ $5.2_{-0.3}^{+0.2}$ $100_{-29}^{+24}$ 0.86 (886/1032)
C $2.0 \pm 0.1$ $0.9_{-0.2}^{+0.3}$ $0.36 \pm 0.06$ $0.7_{-0.2}^{+0.3}$ $0.17_{-0.05}^{+0.03}$ $1.5_{-1.0}^{+2.3}$ $9_{-3}^{+2}$ $2.1_{-0.9}^{+1.5}$ $> 27$ $0.3 \pm 0.1$ $94_{-27}^{+24}$ 0.85 (870/1030)
--- ----------------------- ----------------------- ------------------------ --------------------- --------------------- ------------------------ --------------------- --------------- --------------------- ------------------ --------------------- ------------------- ---------------------
: Spectral parameters of the best-fit models to the EPIC-pn, combined MOS and IBIS/ISGRI average spectra of IGR0023. The different models (see text) are indicated with A, B and C in the first column. Errors indicate the $90 \%$ confidence level of the corresponding parameter.[]{data-label="tab:spectra0023"}
--- ----- ------------------------ ------------------------ ------------------------ --------------------- ------------------------- --------------------- --------------------- --------------------- --------------------- --------------------- --------------------- ------------------ ------------------------ ------------------- ---------------------
$N_{\mathrm{H}}$ [^14] $N_{\mathrm{H}}$ [^15] $C_F$ [^16] $k T_{\mathrm{BB}}$ $C_{\mathrm{BB}}$ [^17] $A_Z$ [^18] $\alpha$ [^19] $k T_1$ $C_1$ [^20] $k T_2$ $C_2$ [^21] $k T_3$ [^22] $C_3$ [^23] E.W. [^24] $\chi_{\nu}^2$
(eV) (keV) (keV) (keV) (eV) ($\chi^2$ / d.o.f.)
*1* $1.7_{-0.2}^{+0.1}$ $1.3_{-0.1}^{+0.2}$ $0.56_{-0.04}^{+0.03}$ $94_{-2}^{+3}$ $5.8_{-1.1}^{+0.8}$ $0.6_{-0.1}^{+0.2}$ $0.9_{-0.1}^{+0.2}$ $55_{-15}^{+21}$ $3.5 \pm 0.4$ $159 \pm 22$ 0.98 (1416/1444)
A *2* 1.7 (fixed) $1.1 \pm 0.2$ $0.58_{-0.04}^{+0.03}$ $97_{-1}^{+2}$ $4.9_{-0.5}^{+0.4}$ $0.6_{-0.1}^{+0.2}$ $0.9 \pm 0.2$ $45_{-10}^{+23}$ $3.5 \pm 0.6$ $149_{-31}^{+16}$ 1.05 (819/780)
*3* 1.7 (fixed) $1.06 \pm 0.07$ $0.48_{-0.02}^{+0.01}$ $102 \pm 1$ $3.7 \pm 0.2$ $0.8 \pm 0.1$ 1 (fixed) $82_{-10}^{+13}$ $3.80_{-0.08}^{+0.09}$ $181 \pm 10$ 1.12 (2907/2608)
*1* $1.7_{-0.08}^{+0.12}$ $1.4 \pm 0.2$ $0.52_{-0.02}^{+0.04}$ $98 \pm 2$ $4.9_{-0.6}^{+1.0}$ $0.5 \pm 0.1$ $1.4_{-0.2}^{+0.4}$ $0.9_{-0.3}^{+0.5}$ $27_{-6}^{+4}$ $1.70_{-0.07}^{+0.05}$ $168_{-24}^{+23}$ 0.99 (1431/1443)
B *2* 1.7 (fixed) $1.2 \pm 0.2$ $0.59 \pm 0.04$ $103 \pm 2$ $4.3_{-0.4}^{+0.5}$ $0.7_{-0.2}^{+0.3}$ $4.6_{-1.0}^{+1.9}$ $5.1_{-1.7}^{+1.9}$ $36_{-8}^{+13}$ $1.3 \pm 0.2$ $140 \pm 30$ 1.06 (822/779)
*3* 1.7 (fixed) $1.03 \pm 0.07$ $0.46_{-0.01}^{+0.02}$ $107 \pm 1$ $3.1 \pm 0.1$ $0.9_{-0.1}^{+0.2}$ $7.3_{-1.1}^{+0.6}$ $3.6_{-0.9}^{+0.8}$ $> 71$ $1.46_{-0.06}^{+0.07}$ $173 \pm 9$ 1.08 (2818/2606)
*1* $1.7 \pm 0.1$ $1.3 \pm 0.2$ $0.54 \pm 0.03$ $96_{-2}^{+3}$ $5.4_{-0.9}^{+0.8}$ $0.7 \pm 0.2$ $1.1_{-0.2}^{+0.3}$ $4.1_{-1.7}^{+3.3}$ $4.8_{-0.9}^{+3.2}$ $3.3_{-0.5}^{+1.5}$ $39_{-9}^{+11}$ $1.46_{-0.18}^{+0.09}$ $156_{-21}^{+22}$ 0.97 (1403/1441)
  C   *2*   1.7 (fixed)             $1.1 \pm 0.2$           $0.57 \pm 0.04$          $99 \pm 2$            $4.7 \pm 0.5$             $0.7_{-0.2}^{+0.3}$   $1.0_{-0.2}^{+0.3}$   $4.5_{-2.3}^{+3.6}$   $5.4_{-1.4}^{+1.7}$   $4.7_{-1.7}^{+2.4}$   $39_{-8}^{+12}$       $1.3_{-0.2}^{+0.1}$   $142_{-30}^{+31}$   1.04 (805/777)
*3* 1.7 (fixed) $1.10 \pm 0.09$ $0.45_{-0.02}^{+0.01}$ $100_{-3}^{+2}$ $4.7_{-0.8}^{+1.0}$ $0.9_{-0.1}^{+0.2}$ $0.9_{-0.1}^{+0.2}$ $1.6_{-0.6}^{+0.9}$ $6.6 \pm 1.0$ $3.5_{-0.9}^{+1.0}$ $> 61$ $1.50_{-0.05}^{+0.07}$ $170 \pm 9$ 1.07 (2794/2603)
--- ----- ------------------------ ------------------------ ------------------------ --------------------- ------------------------- --------------------- --------------------- --------------------- --------------------- --------------------- --------------------- ------------------ ------------------------ ------------------- ---------------------
: Spectral parameters of the best-fit models to the EPIC-pn, combined MOS and IBIS/ISGRI average spectra of RXJ2133 for May 2005 (*1*) and July 2005 (*2*), as well as to the XIS and HXD spectra (*3*). The different models (see text) are indicated with A, B and C in the first column. Errors indicate the $90 \%$ confidence level of the corresponding parameter.[]{data-label="tab:spectra2133"}
[^1]: Based on observations obtained with *XMM-Newton* and *INTEGRAL*, ESA science missions with instruments and contributions directly funded by ESA Member States and NASA, and with *Suzaku*, a Japan’s mission developed at the Institute of Space and Astronautical Science of Japan Aerospace Exploration Agency in collaboration with U.S. (NASA/GSFC, MIT) and Japanese institutions.
[^2]: We have not investigated the nature of this second source, but a likely explanation is that it is a background AGN.
[^3]: We caution that the quoted errors reflect only the statistical errors. However, adjusting the HXD non-X-ray background by $\pm 1\%$ from the best-estimate values did not cause significant changes in the fit parameters, showing that the systematic errors are not dominant in this case.
[^4]: Column density of the total absorber in units of $10^{21}\ \mathrm{cm}^{-2}$.
[^5]: Column density of the partial absorber in units of $10^{23}\ \mathrm{cm}^{-2}$.
[^6]: Covering fraction of the partial absorber.
[^7]: Metal abundance.
[^8]: Index of the power-law emissivity function of the CEMEKL component.
[^9]: Normalization constant of the first optically thin plasma component in units of $10^{-4}$.
[^10]: Normalization constant of the second optically thin plasma component in units of $10^{-3}$.
[^11]: In model A this is the maximum temperature of the CEMEKL.
[^12]: Normalization constant of the third optically thin plasma component (CEMEKL in model A) in units of $10^{-2}$.
[^13]: Equivalent width of the 6.4 keV Fe line.
[^14]: Column density of the total absorber in units of $10^{21}\ \mathrm{cm}^{-2}$.
[^15]: Column density of the partial absorber in units of $10^{23}\ \mathrm{cm}^{-2}$.
[^16]: Covering fraction of the partial absorber.
[^17]: Normalization constant of the BBODY component in units of $10^{-4}$.
[^18]: Metal abundance.
[^19]: Index of power-law emissivity function of the CEMEKL component.
[^20]: Normalization constant of the first optically thin plasma component in units of $10^{-4}$.
[^21]: Normalization constant of the second optically thin plasma component in units of $10^{-3}$.
[^22]: In model A this is the maximum temperature of the CEMEKL.
[^23]: Normalization constant of the third optically thin plasma component (CEMEKL in model A) in units of $10^{-2}$.
[^24]: Equivalent width of the 6.4 keV Fe line.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'Using a quantum version of the Bell-Ziv-Zakai bound, I derive a Heisenberg limit to multiparameter estimation for any Gaussian prior probability density. The mean-square error lower bound is shown to have a universal quadratic scaling with respect to a quantum resource, such as the average photon number in the case of optical phase estimation, suitably weighted by the prior covariance matrix.'
author:
- Mankei Tsang
bibliography:
- 'research.bib'
title: Multiparameter Heisenberg limit
---
Introduction
============
The probabilistic nature of quantum mechanics imposes fundamental limits on information processing applications [@helstrom; @glm_science; @glm2011; @holevo12]. Such quantum limits have practical implications for many metrological applications, such as optical interferometry, optomechanical sensing, gravitational-wave detection [@braginsky; @twc; @tsang_nair; @tsang_open], optical imaging [@treps; @centroid; @taylor2013], magnetometry, gyroscopy, and atomic clocks [@bollinger]. The existence of the so-called Heisenberg (H) limit to parameter estimation has in particular attracted much attention in recent years, as it implies that a minimum amount of resource, such as the average photon number for optical phase estimation, is needed to achieve a desired precision. After much debate and confusion [@yurke; @sanders; @ou; @bollinger; @zwierz; @zwierz_err; @rivas; @luis_rodil; @luis13; @anisimov; @zhang13], it has now been proven that the H limit indeed exists for the mean-square error of single-parameter estimation [@qzzb; @glm2012; @hall2012; @nair2012; @gm_useless]. Although decoherence can impose stricter limitations [@knysh; @escher; @escher_bjp; @escher_prl; @latune; @demkowicz; @tsang_open; @knysh14] than the H limit, the latter can still be relevant when the decoherence is relatively weak.
For many applications, such as waveform estimation [@twc; @tsang_open; @bhw2013] and optical imaging [@humphreys], the estimation of multiple parameters from measurements is needed [@yuen_lax; @helstrom_kennedy; @genoni]. In that case, the existence of a general H limit remains an open question. A recent work by Zhang and Fan [@zhang14] studies the quantum Ziv-Zakai bound (QZZB) [@qzzb] for multiple parameters, but they assume that the parameters are *a priori* independent, such that the single-parameter bound is applicable to each. In practice, and especially for the waveform estimation problem, the parameters often have nontrivial prior correlations, in which case a proper definition of the relevant quantum resource is unknown and the H limit remains to be proven.
Here I prove a multiparameter version of the H limit for any Gaussian prior. The proof uses the Bell-Ziv-Zakai bound (BZZB), which is an extension of the Ziv-Zakai family of bounds for single-parameter estimation [@bell]. The H limit is found to obey a universal quadratic scaling with respect to a quantum resource suitably weighted by the prior covariance matrix. To illustrate the result, the bound is applied to the problem of optical phase waveform estimation, showing that an H limit can be defined with respect to the average photon number within the prior correlation time scale of the waveform.
Quantum Bell-Ziv-Zakai bound
============================
Let $x$ be a column vector of the unknown parameters, $P(x)$ be its prior probability density, $P(y|x)$ be the likelihood function with observation $y$, and $\tilde x(y)$ be the estimator. The mean-square error covariance matrix is defined as [@vantrees] $$\begin{aligned}
\Sigma &\equiv \int dx dy P(y|x)P(x){\left[\tilde x(y)-x\right]}{\left[\tilde x(y)-x\right]}^\top,\end{aligned}$$ where $^\top$ denotes the transpose. One useful version of the BZZB is given by [@bell; @bell1997] $$\begin{aligned}
u^\top \Sigma u &\ge \int_0^\infty d\tau \tau
\max_{v: u^\top v = 1}
\int dx \min{\left[P(x), P(x+v \tau)\right]}
\nonumber\\&\quad\times
P_e(x,x+v\tau),
\label{zzb}\end{aligned}$$ where $u$ is an arbitrary real vector and $P_e(x^{(0)},x^{(1)})$ is the error probability in discriminating equally likely hypotheses $x =
x^{(0)}$ and $x = x^{(1)}$ from an observation $y$ with the likelihood function $P(y|x)$. If $P_e(x,x+v\tau)$ does not depend on $x$, the $x$ integral in Eq. (\[zzb\]) depends only on the prior distribution $P(x)$. For a Gaussian $P(x)$ with covariance matrix $\Sigma_0$ [@bell1997], $$\begin{aligned}
\int dx \min{\left[P(x), P(x+v \tau)\right]} =
{\operatorname{erfc}}\frac{\tau}{\tau_0},
\nonumber\\
{\operatorname{erfc}}z \equiv \frac{2}{\sqrt\pi} \int_z^\infty d\xi \exp(-\xi^2),
\nonumber\\
\tau_0 \equiv {\left(\frac{8}{v^\top \Sigma_0^{-1} v}\right)}^{1/2}.
\label{gauss}\end{aligned}$$ The erfc function is plotted in Fig. \[erfc\].
![The erfc function.[]{data-label="erfc"}](erfc){width="45.00000%"}
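The overlap integral of Eq. (\[gauss\]) is easy to verify numerically. The sketch below uses the identity $\int dx \min{\left[P(x), P(x+v\tau)\right]} = \mathrm{E}_{x\sim P}{\left[\min{\left(1, P(x+v\tau)/P(x)\right)}\right]}$ to estimate the left-hand side by Monte Carlo and compares it with ${\operatorname{erfc}}(\tau/\tau_0)$; the covariance matrix, direction $v$, and shift $\tau$ are arbitrary illustrative choices (not values from the text), and `math.erfc` denotes the standard complementary error function:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)

# Arbitrary example inputs (illustrative only)
Sigma0 = np.array([[2.0, 0.6],
                   [0.6, 1.0]])   # prior covariance
v = np.array([0.7, -0.3])         # displacement direction
tau = 1.2                         # shift size

# tau_0 = sqrt(8 / (v^T Sigma0^{-1} v)), as in Eq. (gauss)
tau0 = sqrt(8.0 / (v @ np.linalg.solve(Sigma0, v)))
analytic = erfc(tau / tau0)

# Monte Carlo estimate of  E_{x~P}[ min(1, P(x + v*tau)/P(x)) ]
samples = rng.standard_normal((200_000, 2)) @ np.linalg.cholesky(Sigma0).T

def log_p(x):  # Gaussian log density up to a constant; only ratios matter
    return -0.5 * np.einsum('ij,ij->i', x, np.linalg.solve(Sigma0, x.T).T)

monte_carlo = np.exp(np.minimum(0.0, log_p(samples + tau * v) - log_p(samples))).mean()
print(analytic, monte_carlo)  # agreement to Monte Carlo accuracy (~1e-3)
```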
Suppose now that a quantum probe is used to measure the parameters. The likelihood function becomes $$\begin{aligned}
P(y|x) &= {\operatorname{tr}}E(y) \rho_x,\end{aligned}$$ where $E(y)$ is the positive operator-valued measure (POVM) that describes the measurement and $\rho_x$ is the density operator conditioned on the unknown $x$. The following quantum bound can be used [@fuchs]: $$\begin{aligned}
P_e(x,x+v\tau) &\ge \frac{1}{2}{\left[1-\sqrt{1-F(\rho_x,\rho_{x+v\tau})}\right]},
\label{helstrom}\end{aligned}$$ where $$\begin{aligned}
F(\rho_x,\rho_{x+v\tau}) &\equiv
{\left({\operatorname{tr}}\sqrt{\sqrt{\rho_x} \rho_{x+v\tau}\sqrt{\rho_x}}\right)}^2\end{aligned}$$ is the Uhlmann fidelity between $\rho_x$ and $\rho_{x+v\tau}$. This quantum bound, together with the BZZB, results in a quantum Bell-Ziv-Zakai bound (QBZZB) on the mean-square error of multiparameter estimation, just like the single-parameter case [@qzzb]. It is possible to define QBZZBs for error functions other than the mean-square criterion [@bell; @bell1997], although I shall focus on the mean-square error here because of its popularity.
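Equation (\[helstrom\]) can be spot-checked against the exact minimum error probability for equally likely hypotheses, $P_e = \frac{1}{2}{\left(1-\frac{1}{2}\|\rho_0-\rho_1\|_1\right)}$. The sketch below draws random qubit density matrices (the dimension and sample count are arbitrary choices) and verifies the fidelity bound for each pair:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_rho(d=2):
    """Random density matrix from a Wishart-like construction."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def psd_sqrt(rho):
    """Positive-semidefinite matrix square root via eigendecomposition."""
    w, V = np.linalg.eigh(rho)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def fidelity(r0, r1):
    """Uhlmann fidelity (tr sqrt(sqrt(r0) r1 sqrt(r0)))^2."""
    s = psd_sqrt(r0)
    eig = np.linalg.eigvalsh(s @ r1 @ s)
    return np.sum(np.sqrt(np.clip(eig, 0.0, None))) ** 2

for _ in range(200):
    r0, r1 = random_rho(), random_rho()
    # exact Helstrom error: trace norm from the eigenvalues of r0 - r1
    pe = 0.5 * (1.0 - 0.5 * np.abs(np.linalg.eigvalsh(r0 - r1)).sum())
    F = min(1.0, fidelity(r0, r1))  # clip tiny numerical overshoots above 1
    bound = 0.5 * (1.0 - np.sqrt(1.0 - F))
    assert pe >= bound - 1e-9  # the fidelity lower bound holds in every trial
```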
Quantum phase estimation
========================
Suppose that the density operator is $$\begin{aligned}
\rho_x &= U_x \rho U_x^\dagger,\end{aligned}$$ and the unitary has the following form: $$\begin{aligned}
U_x &= \exp(i x^\top n) = \exp\bigg(i\sum_j x_j n_j \bigg),\end{aligned}$$ where $n$ is a column vector of quantum operators and $\rho$ is the initial density operator. Assuming that ${|\psi\rangle}$ is a purification of $\rho$ and defining $$\begin{aligned}
{\langle O\rangle} \equiv {\langle\psi|}O{|\psi\rangle},\end{aligned}$$ a lower bound on the fidelity is given by $$\begin{aligned}
F(\rho_x,\rho_{x+v\tau}) &\ge {\left|{\left\langle\exp(i\tau v^\top n)\right\rangle}\right|}^2
\\
&= \sum_{m,l} P_m P_l \exp[i\tau v^\top (m-l)]
\\
&= \sum_{m,l} P_m P_l \cos[\tau v^\top (m-l)],
\label{cosine}\end{aligned}$$ where $$\begin{aligned}
P_m &\equiv {\left|{\langle m|\psi\rangle}\right|}^2\end{aligned}$$ is the probability distribution with respect to the $n$ eigenstates.
![A lower bound for cosine.[]{data-label="cosine_bound"}](cosine_bound){width="45.00000%"}
A useful bound for the cosine function for deriving the H limit is [@qzzb] $$\begin{aligned}
\cos \theta &\ge 1-\lambda|\theta|,
\label{bound}\end{aligned}$$ where $\lambda \approx 0.7246$ is a solution of $\lambda = \sin\phi =
(1-\cos\phi)/\phi$, as shown in Fig. \[cosine\_bound\]. Substituting this bound into Eq. (\[cosine\]) and using the triangle inequality, one obtains $$\begin{aligned}
F &\ge
\sum_{m,l} P_m P_l{\left[1-\lambda \tau|v^\top (m-l)|\right]}
\\
&\ge \sum_{m,l} P_m P_l{\left[1-\lambda \tau{\left(|v^\top m - H_0|+|v^\top l-H_0|\right)}\right]}
\\
&= 1- 2\lambda \tau {\langle|v^\top n-H_0|\rangle},
\label{speed}\end{aligned}$$ where $H_0$ is an arbitrary constant. It is possible to obtain a slightly tighter bound numerically using the method in Refs. [@glm_speed; @gm_useless], but Eq. (\[speed\]) will produce the same scaling. Since $0\le F\le 1$, a tighter lower bound is $$\begin{aligned}
F &\ge \Lambda{\left(\frac{\tau}{\tau_F}\right)}\equiv \Big\{
\begin{array}{ll}
1- \tau/\tau_F, & \tau < \tau_F,
\\
0, & \tau \ge \tau_F,
\end{array}
\nonumber\\
\tau_F &\equiv \frac{1}{2\lambda{\langle|v^\top n-H_0|\rangle}},
\label{F_bound}\end{aligned}$$ as shown in Fig. \[triangle\].
![Bounds for the fidelity. The white area is the permissible area.[]{data-label="triangle"}](triangle){width="45.00000%"}
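The tangent-line bound of Eq. (\[bound\]) can be checked on a dense grid of angles; the small negative slack tolerated below only accounts for $\lambda$ being quoted to four decimal places:

```python
import numpy as np

lam = 0.7246  # slope quoted in the text: lambda = sin(phi) at the tangent point

theta = np.linspace(-4.0 * np.pi, 4.0 * np.pi, 400_001)
gap = np.cos(theta) - (1.0 - lam * np.abs(theta))

# cos(theta) >= 1 - lam*|theta| holds everywhere up to the rounding of lam;
# the minimum gap sits near the tangent points |theta| ~ 2.33
print(gap.min())
```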
Putting Eqs. (\[gauss\]), (\[helstrom\]), and (\[F\_bound\]) together, $$\begin{aligned}
&\quad\max_{v: u^\top v = 1}
\int dx \min{\left[P(x), P(x+v \tau)\right]}
P_e(x,x+v\tau)
\nonumber\\
&\ge
\frac{1}{2}\max_{v: u^\top v = 1}
{\operatorname{erfc}}{\left(\frac{\tau}{\tau_0}\right)}\Lambda{\left(\sqrt{\frac{\tau}{\tau_F}}\right)}.
\label{max_v}\end{aligned}$$ Recall that $\tau_0$ and $\tau_F$ depend on $v$. The maximization does not seem to be tractable analytically, so I choose a $v$ that maximizes only the erfc function: $$\begin{aligned}
v_0 &\equiv \arg \max_{v: u^\top v = 1}
{\operatorname{erfc}}{\left(\frac{\tau}{\tau_0}\right)} = \frac{\Sigma_0 u}{u^\top \Sigma_0 u},\end{aligned}$$ such that $$\begin{aligned}
&\quad \max_{v: u^\top v = 1}
{\operatorname{erfc}}{\left(\frac{\tau}{\tau_0}\right)}\Lambda{\left(\sqrt{\frac{\tau}{\tau_F}}\right)}
\nonumber\\
&\ge
{\operatorname{erfc}}{\left(\frac{\tau}{\tau_0}\right)}\Lambda{\left(\sqrt{\frac{\tau}{\tau_F}}\right)}\bigg|_{v = v_0},
\label{v0}
\nonumber\\
\tau_0(v_0) &= 2\sqrt{2 u^\top\Sigma_0 u},
\nonumber\\
\tau_F(v_0) &= \frac{1}{2\lambda{\langle|u^\top\Sigma_0 n/(u^\top\Sigma_0 u) - H_0|\rangle}}.\end{aligned}$$ Combining Eqs. (\[zzb\]), (\[max\_v\]), and (\[v0\]) then produces the following bound: $$\begin{aligned}
u^\top\Sigma u &\ge Z\equiv \frac{1}{2}\int_0^{\tau_F} d\tau \tau
{\operatorname{erfc}}{\left(\frac{\tau}{\tau_0}\right)}{\left(1-\sqrt{\frac{\tau}{\tau_F}}\right)}\bigg|_{v = v_0}
\label{Z}\end{aligned}$$ The integral can be computed numerically, as shown in Fig. \[qbzzb\], but there are two analytic limits of interest:
1. The prior-information limit ($\tau_F\gg \tau_0$): $$\begin{aligned}
\lim_{\tau_F/\tau_0\to \infty} Z &= \frac{\tau_0^2}{8} = u^\top \Sigma_0 u,\end{aligned}$$ where the bound is determined only by the prior covariance matrix, as expected;
2. The asymptotic limit ($\tau_F \ll \tau_0$), where the measurement provides much more information: $$\begin{aligned}
\lim_{\tau_0/\tau_F\to \infty} Z &= \frac{\tau_F^2}{20} =
\frac{1}{80\lambda^2 H_+^2},
\nonumber\\
H_+ &\equiv {\left\langle{\left|\frac{u^\top\Sigma_0 n}{u^\top\Sigma_0 u}-H_0\right|}\right\rangle},
\label{central}\end{aligned}$$ and $H_+$ quantifies the relevant resource for the estimation. Eq. (\[central\]) is the central result of this paper and an appropriate generalization of the single-parameter case [@qzzb].
![A quantum lower error bound $Z$ on $u^\top\Sigma u$ versus the parameter $\tau_0/\tau_F$ in log-log scale, the prior-information limit $Z \to \tau_0^2/8$, and the asymptotic H limit $Z \to \tau_F^2/20$.[]{data-label="qbzzb"}](qbzzb){width="45.00000%"}
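The numerical evaluation behind Fig. \[qbzzb\] is a one-dimensional quadrature of Eq. (\[Z\]). A minimal sketch (with illustrative values of $\tau_0$ and $\tau_F$) truncates the integral where the erfc factor becomes negligible and reproduces both analytic limits:

```python
from math import erfc, sqrt

def Z(tau0, tauF, n=100_000):
    """Trapezoidal estimate of Eq. (Z):
    Z = (1/2) * int_0^tauF tau * erfc(tau/tau0) * (1 - sqrt(tau/tauF)) dtau."""
    upper = min(tauF, 12.0 * tau0)  # erfc(tau/tau0) is negligible beyond this
    h = upper / n
    g = lambda t: 0.5 * t * erfc(t / tau0) * (1.0 - sqrt(t / tauF))
    return h * (0.5 * (g(0.0) + g(upper)) + sum(g(i * h) for i in range(1, n)))

# prior-information limit, tauF >> tau0:  Z -> tau0^2 / 8
print(Z(1.0, 1e6))    # close to 1/8
# asymptotic limit, tau0 >> tauF:  Z -> tauF^2 / 20
print(Z(200.0, 1.0))  # close to 1/20
```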
For example, the error bound for estimating a particular parameter $x_k$ can be obtained by setting $u$ as $$\begin{aligned}
u_j &= \delta_{jk},
\\
u^\top\Sigma u &= \Sigma_{kk} \ge Z_k \to
\frac{1}{80\lambda^2 H_{+k}^2},
\\
H_{+k} &\equiv
{\left\langle{\left|\frac{1}{\Sigma_{0kk}}\sum_l \Sigma_{0 kl} n_l-H_0\right|}\right\rangle}.
\label{hlimit}\end{aligned}$$ For optical phase estimation with $n_l$ being a photon number operator, one can assume $H_0 = 0$ and use the triangle inequality to obtain $$\begin{aligned}
H_{+k} &\le
\frac{1}{\Sigma_{0kk}}\sum_l |\Sigma_{0 kl}| {\left\langle n_l\right\rangle},\end{aligned}$$ which produces an H limit with respect to a weighted average of the photon numbers. The weighting of the photon numbers with respect to the prior covariance matrix is the key feature of the bound, as it properly accounts for the optical modes that can contribute to the estimation of a particular phase.
A special case is when the parameters are independent *a priori*, such that $$\begin{aligned}
\Sigma_{0kl} &= \Sigma_{0kk}\delta_{kl},
\\
H_{+k} &= {\langle|n_k-H_0|\rangle},\end{aligned}$$ and the single-parameter bound [@qzzb] is recovered. Zhang and Fan used this [@zhang14] to rule out any significant quantum enhancement with a proposal by Humphreys *et al.* for quantum multiparameter estimation [@humphreys].
Optical phase waveform estimation
=================================
To illustrate the result derived in the previous section, consider the continuous-time limit of the QBZZB for optical phase estimation. The photon number of each mode is related to the photon flux $I(t)$ and the time duration $dt$ of the mode: $$\begin{aligned}
n_l &= dt I(t_l).\end{aligned}$$ The mean-square error for each phase parameter becomes the error for estimating the phase at a certain time: $$\begin{aligned}
\Sigma_{kk} &= \Sigma(t_k,t_k),\end{aligned}$$ and the H limit becomes $$\begin{aligned}
\Sigma(t,t) &\ge Z(t)\to \frac{1}{80\lambda^2 H_+^2(t)},
\\
H_+(t) &\equiv
{\left\langle{\left|\frac{1}{\Sigma_0(t,t)}\int dt' \Sigma_0(t,t') I(t')-H_0\right|}\right\rangle}
\\
&\le \frac{1}{\Sigma_0(t,t)}\int dt' |\Sigma_0(t,t')| {\left\langle I(t')\right\rangle}.\end{aligned}$$ The relevant resource $H_+$ is defined as the time integral of the average photon flux ${\langle I(t')\rangle}$ weighted by the prior covariance function $\Sigma_0(t,t')$. For example, for the Ornstein-Uhlenbeck process, $$\begin{aligned}
\Sigma_0(t,t') &= \sigma_0\exp{\left(-\frac{|t-t'|}{T_0}\right)},
\\
H_+(t) &\le {\int_{-\infty}^{\infty}}dt' \exp{\left(-\frac{|t-t'|}{T_0}\right)} {\left\langle I(t')\right\rangle},\end{aligned}$$ which states that only the optical modes within the prior time scale $T_0$ can contribute to the estimation at a particular time.
If ${\langle I\rangle}$ is constant in time, $H_+(t)\propto {\langle I\rangle}$, and there exists a universal quadratic error scaling $\propto 1/{\langle I\rangle}^2$ for any Gaussian prior. Tighter scalings can be derived for Gaussian quantum states [@bhw2013], but the H limit is still valuable as a simple and more general no-go theorem.
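To make the weighting concrete, the Ornstein-Uhlenbeck example can be discretized on a time grid. In the sketch below (the grid spacing, $T_0$, and the constant mean flux are arbitrary choices), the discrete sum approaches the continuum value $2T_0{\langle I\rangle}$ of the bound on $H_+(0)$, and the corresponding asymptotic error floor of Eq. (\[central\]) follows:

```python
import numpy as np

T0 = 1.0          # prior correlation time (illustrative)
mean_flux = 3.0   # constant mean photon flux <I> (illustrative)
dt = 0.01
t_grid = np.arange(-20.0, 20.0, dt)  # window much wider than T0

# discretized bound on H_+(0): sum_l exp(-|t_l|/T0) * <I> * dt  ->  2*T0*<I>
H_plus = np.sum(np.exp(-np.abs(t_grid) / T0) * mean_flux * dt)
print(H_plus, 2.0 * T0 * mean_flux)  # the two nearly coincide

# asymptotic mean-square error floor 1/(80 lam^2 H_+^2) from Eq. (central)
lam = 0.7246
print(1.0 / (80.0 * lam**2 * H_plus**2))
```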
Conclusion
==========
To conclude, I have proved an H limit with a universal $1/N^2$ scaling for multiparameter estimation with any Gaussian prior, where $N$ is an appropriately defined quantum resource. The key feature of the bound is the use of the prior covariance matrix to define $N$, enabling a proper accounting of the relevant quantum resources. In the case of optical phase waveform estimation, the H limit implies the intuitive result that only the optical modes within the prior correlation time scale can contribute to the estimation at a particular time.
It should be emphasized that the H limit derived here may well not be attainable and the quantum Cramér-Rao bound [@twc; @tsang_open; @bhw2013] may provide tighter bounds for more specific quantum states, but the generality and simplicity of the result here should still be valuable as a no-go theorem. It may also be possible to derive tighter bounds or study other priors using the present formalism. These possibilities are left for future investigations.
Acknowledgments {#acknowledgments .unnumbered}
===============
Discussions with Ranjith Nair are gratefully acknowledged. This work is supported by the Singapore National Research Foundation under NRF Grant No. NRF-NRFF2011-07.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We first relate an approximate $n^{th}$-order left derivative of $s(R, f^t)$ at the F-pure threshold $c$ to the F-splitting ratio $r_F(R, f^c)$. Next, we apply the methods developed by Monsky and Teixeira in their investigation of syzygy gaps and $p$-fractals to obtain uniform convergence of the F-signature when $f$ is a product of distinct linear polynomials in two variables. Finally, we explicitly compute the F-signature function for several examples using Macaulay2 code outlined in the last section of this paper.'
address: |
Department of Mathematics\
University of Nebraska - Lincoln\
Lincoln, NE 68588
author:
- Eric Canton
bibliography:
- 'refspaper2.bib'
title: |
Relating F-signature and F-splitting Ratio of Pairs\
Using Left-Derivatives
---
Introduction
============
Let $(R, {{\mathfrak}{m}}, k)$ be a local ring containing a field $k$, and assume ${\text{char}}(R) = p > 0$. For simplicity we assume that $k = k^p$ ($k$ is perfect); this is always the case when our field is finite. For an ideal $I {\subseteq}R$ we denote by $I{^{[p^{e}]}} = \{\sum_1^n c_i r_i^{p^e} : c_i \in R
\text{ and }r_i \in I\}$ the ideal generated by all $p^e$-th powers of elements in $I$. Because ${\text{char}}(R)$ is positive this is in general an ideal distinct from the normal $p^e$-th power of the ideal.
The [**Frobenius endomorphism**]{} of $R$ is the map $F: R \to R$ which takes $r \mapsto r^p$. We will write $R$ for the domain of this map and ${F^{}_*}R$ for the codomain, when it is important to distinguish the two (although they are isomorphic as abelian groups). We define an $R$-module structure on ${F^{}_*}R$ by $r.x = r^px$ for $r \in R$ and $x \in {F^{}_*}R$. A reduced ring $R$ is [**F-finite**]{} if ${F^{}_*}R$ is finitely generated as an $R$-module. Similarly, we can consider the $e$-th iterated Frobenius map $F^e: R \to R$ and define an $R$-module structure for this iterated Frobenius map; the codomain is denoted here ${F^{e}_*}R$. Note that if $R$ is F-finite, then ${F^{e}_*}R$ is finitely generated for all $e \in {{\mathbb}{N}}$.
Important classes of F-finite rings include polynomial rings over a field in positive characteristic, quotients, and localizations of these rings. For example, consider $S = k[x_1, \dots, x_n]$. Because ${F^{}_*}S$ is generated as an $S$-module by the products $\Pi_{i=1}^n x_i^{d_i}$ with each $d_i \le p-1$, we conclude that $S$ is F-finite. Similarly, when $R$ is an F-finite ring and $I {\subseteq}R$ is an ideal, then ${F^{}_*}(R/I)$ is generated over $R/I$ by the images of the generators of ${F^{}_*}R$ over $R$. Thus any quotient of an F-finite ring is also F-finite. We also have that localizations of F-finite rings are again F-finite (see R. Fedder, Lemmas 1.4 and 1.5 from [@Fedd83] for more information).
When $R$ is reduced, we naturally identify ${F^{e}_*}R$ with the ring $R^{1/p^e}$ of $p^e$-th roots of elements of $R$ by sending $r \mapsto r^{1/p^e}$. Next, we will decompose this module as $R^{1/p^e} = R^{a_e}\oplus M$ where $M$ has no free $R$-summands, and we define $a_e$ to be the maximal rank of any such free decomposition of $R^{1/p^e}$. A famous result of E. Kunz (which we refer to as Kunz’s Theorem) states that for all $e$, we have that $a_e \le p^{ed}$, with equality if and only if $R$ is regular, if and only if the Frobenius map is flat. This result prompted C. Huneke and G. Leuschke to define [@HL04] the F-signature as the following limit.
Let $(R, {{\mathfrak}{m}}, k)$ be an F-finite, $d$-dimensional reduced local ring and $k = R/{{\mathfrak}{m}}$ a field of positive characteristic $p$. The [**F-signature**]{} of $R$ is $$s(R) := \lim_{e \to \infty} \frac{a_e}{p^{ed}}$$
Recently, K. Tucker showed that this limit exists in full generality [@FsigExists]. It is 1 if and only if $R$ is regular, and so the F-signature serves as a measure to which $R$ fails to be regular in comparison to other local rings of the same dimension. If for some $e$ (and thus every $e$) we have that $a_e \ne 0$ then we say that $R$ is [**F-pure**]{}. Often it happens that the limit $s(R)$ is zero. I. Aberbach and F. Enescu defined [@AE03] the [**Frobenius splitting dimension**]{} (F-splitting dimension) $\operatorname{sdim}(R) = m$ to be the greatest integer such that $$\begin{aligned}
\lim_{e \to \infty} \frac{a_e}{p^{em}}\end{aligned}$$ is greater than zero. This limit exists by [@FsigExists]. The corresponding limit is defined in [@AE03] to be the [**Frobenius splitting ratio**]{} (F-splitting ratio) denoted as $r_F(R)$. Note that if the splitting dimension $\operatorname{sdim}(R) = \dim(R)$, then $r_F(R) = s(R)$.
In [@BST11] M. Blickle, K. Schwede, and K. Tucker introduce the concept of F-signature to the pair $(R, f^t)$, where $t \in [0, \infty)$ is a real number and $f \in R$ is nonzero.
Let $f\in R$ be a nonzero element in an F-finite regular local ring $(R, {{\mathfrak}{m}}, k)$. Let $d = \dim(R)$ be the Krull dimension of $R$, and $t \in [0, \infty)$ a positive real number. The [**F-signature**]{} of the pair $(R, f^t)$ is defined to be the limit $$s(R, f^t) := \lim_{e\to\infty} \frac{1}{p^{ed}}\ell_R\left(\frac{R}{{{\mathfrak}{m}}{^{[p^{e}]}}:f^{\lceil t(p^e-1) \rceil}}\right)$$
The supremum over all $t$ such that the pair $(R, f^t)$ is F-pure is called the [**F-pure threshold**]{} of $f$ and will be denoted in this paper as FPT$(f)$.
An important theorem regarding computation of the F-signature when $t = a/p^s$ is Proposition 4.1 found in [@BST112], which states that $$s(R, f^{a/p^s}) = \frac{1}{p^{sd}}\ell_R\left(\frac{R}{{{\mathfrak}{m}}^{[p^s]}:f^a}\right)$$ which is to say that in this case, we do not need to take a limit: a single length suffices to compute $s(R, f^{a/p^s})$.
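In small examples this single length can be computed by brute-force linear algebra over $\mathbb{F}_p$: a class $g + \mathfrak{m}^{[q]}$ lies in $(\mathfrak{m}^{[q]}:f^a)/\mathfrak{m}^{[q]}$ exactly when multiplication by $f^a$ kills it modulo $\mathfrak{m}^{[q]}$, so $\ell_R(R/(\mathfrak{m}^{[q]}:f^a))$ equals the rank of the multiplication-by-$f^a$ matrix on $R/\mathfrak{m}^{[q]}$. The sketch below (a naive illustration for $R = \mathbb{F}_p[x,y]$, so $d = 2$; it is not the Macaulay2 code referred to later) checks the monomial case $f = xy$, where the colon ideal is $(x^{q-a}, y^{q-a})$ and the length is $(q-a)^2$, and then evaluates $s(R, f^{a/q}) = \ell/q^2$ for $f = xy(x+y)$:

```python
import numpy as np

def length_mod_colon(f_coeffs, a, p, e):
    """ell_R( R / (m^[q] : f^a) ) for R = F_p[x,y] and q = p^e, where f_coeffs
    is a dict {(i, j): c} representing f = sum c x^i y^j.  The length equals
    the rank over F_p of multiplication by f^a on R/m^[q], since the kernel of
    that map is exactly (m^[q] : f^a)/m^[q]."""
    q = p ** e

    def multiply(g, h):  # product of two polynomials over F_p
        out = {}
        for (i, j), c in g.items():
            for (k, l), d in h.items():
                out[(i + k, j + l)] = (out.get((i + k, j + l), 0) + c * d) % p
        return {m: c for m, c in out.items() if c}

    fa = {(0, 0): 1}
    for _ in range(a):  # f^a by repeated multiplication
        fa = multiply(fa, f_coeffs)

    # column (i, j): image of the basis monomial x^i y^j in R/(x^q, y^q)
    M = np.zeros((q * q, q * q), dtype=np.int64)
    for i in range(q):
        for j in range(q):
            for (k, l), c in fa.items():
                if i + k < q and j + l < q:  # term survives modulo m^[q]
                    M[(i + k) * q + (j + l), i * q + j] = c

    rank, row = 0, 0  # Gaussian elimination over F_p
    for col in range(q * q):
        piv = next((r for r in range(row, q * q) if M[r, col]), None)
        if piv is None:
            continue
        M[[row, piv]] = M[[piv, row]]
        M[row] = (M[row] * pow(int(M[row, col]), p - 2, p)) % p
        for r in range(q * q):
            if r != row and M[r, col]:
                M[r] = (M[r] - int(M[r, col]) * M[row]) % p
        rank, row = rank + 1, row + 1
    return rank

p, e = 5, 1
q = p ** e
# monomial sanity check: f = xy, a = 2, colon ideal (x^{q-2}, y^{q-2})
assert length_mod_colon({(1, 1): 1}, 2, p, e) == (q - 2) ** 2
# f = xy(x+y) = x^2 y + x y^2 at t = 2/5
ell = length_mod_colon({(2, 1): 1, (1, 2): 1}, 2, p, e)
print(ell, ell / q**2)
```

For $p = q = 5$ and $a = 2$ this gives $\ell = 4$, i.e. $s(R, f^{2/5}) = 4/25$; the cost grows quickly with $q = p^e$, which is where dedicated computer algebra systems such as Macaulay2 come in.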
In much the same way, we may now define the [**F-splitting dimension**]{} of the pair $(R, f^t)$, where $t$ is a rational number whose denominator is not divisible by $p$, to be the greatest integer $m$ such that $$\limsup_{e\to\infty} \frac{1}{p^{em}}\ell_R\left(\frac{R}{{{\mathfrak}{m}}{^{[p^{e}]}}: f ^{\lceil t(p^e-1)\rceil}}\right)$$ is nonzero. We again define this limit to be the [**Frobenius splitting ratio**]{} (F-splitting ratio) of the pair $(R, f^t)$, denoted here $r_F(R, f^t)$. The first result in this paper relates an approximate higher-order left derivative of $s(R, f^t)$ at $c = FPT(f)$ to a constant multiple of $r_F(R, f^c)$ when $p$ does not divide the denominator of $c$. This is a generalization of Theorem 2.1 in the next section, which can be found in [@BST112] as Theorem 4.2 relating the first left derivative $D_-s(R, f^1)$ to the F-signature $s(R/{\langle}f {\rangle})$.
Computing the F-splitting dimension is aided by formation of a special ideal, called the [**splitting prime**]{} of $(R, f^t)$. This ideal is defined to be the maximal ideal $J {\subseteq}R$ such that $f^{\lceil t(p^e-1)\rceil}J {\subseteq}J{^{[p^{e}]}}$ for all $e > 0$. As the name suggests, when it is a proper ideal of $R$ it is a prime ideal. This result can be found in ([@Sch08], Corollary 6.4) but it is presented again here for the reader’s convenience, albeit with a different proof.
\[splittingPrime\] Let $(R, {{\mathfrak}{m}}, k)$ be an F-finite regular local ring, and $f \in R$ a nonzero element of $R$. Take $t \in [0, \infty)$ and let $P$ be the splitting prime of the pair $(R, f^t)$. If $P$ is proper, then it is a prime ideal.
Suppose $P \ne R$ is the splitting prime of $(R, f^t)$ and let $c \in R \setminus P$. We wish to show that $(P:c) = P$ which implies that $P$ is a prime ideal.
I claim that if $J$ is an ideal of $R$ satisfying $f^{\lceil t(p^e-1)\rceil}J {\subseteq}J{^{[p^{e}]}}$, then for any $r \in R \setminus J$ we also have that $f^{\lceil t(p^e-1)\rceil}(J:r) {\subseteq}(J{^{[p^{e}]}}: r^{p^e})$. To see this, let $g \in (J:r)$ so that $gr \in J$. Then $f^{\lceil t(p^e-1)\rceil}gr \in
J{^{[p^{e}]}}$ by the assumption we made on $J$. Then of course $f^{\lceil t(p^e-1)\rceil}gr^{p^e} \in J{^{[p^{e}]}}$, and so $f^{\lceil t(p^e-1)\rceil}g \in
(J{^{[p^{e}]}}:r^{p^e})$, establishing the claim.
By Kunz’s theorem, we know that the Frobenius endomorphism on $R$ is flat when $R$ is regular, so we may tensor the exact sequence [ $$\begin{CD}
0 @>>> R/(J:r) @>>> R/J @>>> R/(J+r) @>>> 0
\end{CD}$$ ]{} with ${F^{e}_*}R$ to conclude that in this case $(J{^{[p^{e}]}}:r^{p^e}) = (J:r){^{[p^{e}]}}$. Therefore, whenever $J$ is an ideal satisfying $f^{\lceil t(p^e-1)\rceil}J{\subseteq}J{^{[p^{e}]}}$ and $r \in R \setminus J$ is any element, we have that $f^{\lceil t(p^e-1)\rceil}(J:r){\subseteq}(J:r){^{[p^{e}]}}$.
Because $P \ne R$ is the splitting prime for the pair $(R, f^t)$, it is contained in no other ideal satisfying $f^{\lceil t(p^e-1)\rceil}J
{\subseteq}J{^{[p^{e}]}}$. This implies that $(P:c) = P$ and so $P$ is prime.
The splitting prime is related to the F-splitting dimension $\operatorname{sdim}(R, f^t)$ by the following theorem, which can be found in [@BST11].
Let $(R, {{\mathfrak}{m}}, k)$ be an F-finite $d$-dimensional regular local ring, $f \in R$ a nonzero element, $t \in [0, \infty)$ and let $P$ be the splitting prime of the pair $(R, f^t)$. Then $$\operatorname{sdim}(R, f^t) = \dim(R/P)$$
In the third section of this paper, we provide an application of Monsky and Teixeira’s work on $p$-fractals to compute the F-signature when $f$ is a homogeneous polynomial in two variables. Specifically, we prove that $s(R, f^t)$ converges uniformly to a quadratic polynomial in $t$ as $p \to \infty$. We provide an example where we can explicitly compute the F-pure threshold for $f = xy(x+y)$ and apply the $p$-fractal techniques mentioned before to compute the left derivative at the F-pure threshold.
Finally, several computational examples (using Macaulay2) are included in the second-to-last section of this paper; algorithms that were used to compute these examples comprise the final section.
[*Special thanks to Karl Schwede for many insightful and encouraging discussions over the course of this work and preparation of this document. In particular, the proofs of \[splittingPrime\] and \[DerivativeThm\] were discussed with him. I would like to thank Kevin Tucker for a stimulating discussion of some of the results presented here and his suggestions regarding this paper, and I would also like to thank Florian Enescu, who suggested applying the results of Teixeira’s thesis to F-signature in two variables. Thanks to Wenliang Zhang and Lance Miller for their useful critiques of this paper.*]{}
F-Splitting Ratio of Principal Ideals
=====================================
In this section, we assume that $R$ is an F-finite regular local [*domain*]{} with Krull dimension $d$. The F-signature of the pair $(R, f^t)$ was shown recently in [@BST112] to be continuous and convex on $[0, \infty)$, thus differentiable almost everywhere on the domain $[0, \infty)$. Indeed, the authors of [@BST112] proved the following theorem:
If $(R, {{\mathfrak}{m}}, k)$ is an F-finite $d$-dimensional local domain and $f\in R$ is a nonzero element, then $$\begin{aligned}
&D_-s(R, f^1) = -s(R/{\langle}f {\rangle}) &D_+s(R, f^0) = -e_{HK}(R/{\langle}f{\rangle})\end{aligned}$$
In this section, we generalize the first part of the above result to an approximate left $n^{th}$-derivative at the F-pure threshold of our nonzero element $f$. Let $c = FPT(f)$ be the F-pure threshold of $f$, and set $n = d - \operatorname{sdim}(R, f^c)$. We make the following assumption on $c$ in this section only.
Assume that $c$ is a rational number whose denominator is not divisible by $p$.
---------------------------------------------------------------------------------
In particular, the pair $(R, f^c)$ is sharply F-pure [@Sch08] and so the splitting prime is proper. Because $p$ does not divide the denominator of $c$, we can write $c = a/(p^s-1)$ for an appropriate $a$ and $s$. To arrive at an approximation of $c$, we write $$\begin{aligned}
c &= \frac{a}{p^s-1}\\
&= \frac{a(p^{(e-1)s} + p^{(e-2)s} + \cdots + 1)}{(p^s-1)(p^{(e-1)s} + p^{(e-2)s} + \cdots + 1)}\\
&= \frac{a(p^{(e-1)s} + p^{(e-2)s} + \cdots + 1)}{p^{es}-1}\end{aligned}$$ Now let $K_e = (p^{(e-1)s} + p^{(e-2)s} + \cdots + 1)$, so that above we have $c = aK_e/(p^{es}-1)$. Define then $$t_e = \frac{aK_e}{p^{es}}$$
and note that $t_e \to c$ as $e \to \infty$. We compute the following limit, which serves as a sort of approximate left $n^{th}$-derivative: $$\begin{aligned}
\limsup_{t_e \to c} \frac{s(R, f^{t_e})}{(t_e-c)^n} &= \limsup_{e\to\infty}\left(-\frac{p^{es}(p^s-1)}{a}\right)^n\frac{1}{p^{esd}}\ell_R\left(\frac{R}{{{\mathfrak}{m}}{^{[p^{es}]}}:{\langle}f{\rangle}^{(p^{es}-1)c}}\right)\\
&= \left(-\frac{p^s-1}{a}\right)^n \limsup_{e\to \infty}\frac{1}{p^{es(d-n)}}\ell_R\left(\frac{R}{{{\mathfrak}{m}}{^{[p^{es}]}}:{\langle}f {\rangle}^{aK_e}}\right)\\
&= (-c)^{-n} r_F(R, f^c)\end{aligned}$$
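As a quick sanity check on this bookkeeping, the identity $K_e = (p^{es}-1)/(p^s-1)$, the integrality of $c(p^{es}-1) = aK_e$ (so no ceiling is needed), and the monotone convergence $t_e \nearrow c$ can all be verified with exact rational arithmetic. The Python snippet below is only an illustration; the parameters $p=7$, $s=1$, $a=2$ (so $c = 1/3$) are arbitrary choices, not taken from the paper.

```python
from fractions import Fraction

def approximants(p, s, a, emax):
    """Exact rationals t_e = a*K_e / p^(e*s) for e = 1..emax, where
    K_e = p^((e-1)s) + ... + p^s + 1 = (p^(es) - 1)/(p^s - 1)."""
    c = Fraction(a, p**s - 1)
    ts = []
    for e in range(1, emax + 1):
        K_e = (p**(e * s) - 1) // (p**s - 1)      # geometric sum, an integer
        t_e = Fraction(a * K_e, p**(e * s))
        assert c * (p**(e * s) - 1) == a * K_e    # c*(p^(es)-1) is an integer
        ts.append(t_e)
    return c, ts

c, ts = approximants(p=7, s=1, a=2, emax=6)       # c = 2/(7-1) = 1/3
assert all(t < c for t in ts)                     # approximation from below
assert all(u < v for u, v in zip(ts, ts[1:]))     # strictly increasing in e
assert c - ts[-1] == Fraction(2, 6 * 7**6)        # gap is a/(p^(es)(p^s - 1))
```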
Since $n$ is the smallest integer such that the above limit is nonzero, if $\operatorname{sdim}(R, f^t) < \dim(R) - 1$ then we have $n = \dim(R) - \operatorname{sdim}(R, f^t) \ge 2$. Because we know that $s(R, f^t)$ is differentiable almost everywhere, if $FPT(f) = c$ as above and $\operatorname{sdim}(R, f^c) < \dim(R) - 1$, then the left derivative $D_-s(R, f^c) = 0$. [*The proof in the non-square-free case was suggested by Wenliang Zhang.*]{}
\[DerivativeThm\] Suppose $f \in R$ and $(R, f^c)$ is sharply F-pure for $c < 1$. Write $f = uf_1^{n_1}\cdots f_r^{n_r}$, where $u$ is a unit and each $f_i$ is irreducible. If $f$ is not square free, assume that $FPT(f_i)=c_i<1$ for each $i$ such that $n_i \ge 2$. Then the left derivative $D_-s(R, f^c) = 0$.
Let $P$ be the splitting prime of the pair $(R, f^c)$. I claim that with the hypotheses of the theorem, we have $\dim(R/P) < \dim(R)-1$. Because $(R, f^c)$ is sharply F-pure, this implies that $P$ is proper and so prime. Towards a contradiction assume that $\operatorname{ht}(P) = 1$. We will need the following lemma:
\[localizationLemma\] If $P$ is the splitting prime of $(R, f^c)$ then $PR_P$ is the splitting prime of $(R_P, f_P^c)$ where $f_P$ is the image of $f$ in $R_P$.
$PR_P$ satisfies $f_P^{\lceil c(p^e-1) \rceil}(PR_P) {\subseteq}(PR_P){^{[p^{e}]}}$ by definition of $P$. Also, prime ideals of $R_P$ are in bijective correspondence with primes of $R$ contained in $P$; thus because $PR_P$ is maximal in $R_P$, it must be the splitting prime of $(R_P, f_P^c)$.
We now consider two cases: either $f$ is square free, or $FPT(f_i) = c_i < 1$ for all $i$ such that $n_i \ge 2$. Because $\operatorname{ht}(P) = 1$, this implies that $P = {\langle}f_i{\rangle}$ for some $i$. Localize at $P$ and suppose that $f$ is square-free or $n_i = 1$. This tells us that $PR_P = fR_P$, since $R_P$ is a DVR with maximal ideal $PR_P$ and $f_P$ cannot be a unit, since $f_P = (uf_1^{n_1}\cdots f_i^1 \cdots f_r^{n_r})_{{\langle}f_i {\rangle}} = vf_i$, where $v \in R_P$ is a unit and the image of $f_i$ generates the maximal ideal. Note that by assumption that $c <1$ is a rational number whose denominator is not divisible by $p$, there exist infinitely many $e$ such that $c(p^e-1)$ is an integer $r_e$, and each such $r_e$ satisfies that $c(p^e-1) < r_e+1 < (p^e-1)+1$. This implies that $$f_P^{r_e}PR_P \not{\subseteq}(PR_P){^{[p^{e}]}}$$ contradicting that $PR_P$ is the splitting prime of $R_P$.
Suppose then that $P = {\langle}f_i {\rangle}$ and $n_i\ge 2$, and recall that by assumption $FPT(f_i) = c_i <1$. This gives that $c \le c_i/n_i < 1/n_i$, so $cn_i(p^e-1)+1 \le c_i(p^e-1)+1 < p^e$. The same argument as the previous case leads to a contradiction of lemma \[localizationLemma\]. Thus, $\operatorname{ht}(P) \ge 2$ and by Theorem 1.2, we have $$\dim(R/P) = \operatorname{sdim}(R, f^c)$$ so $\dim(R/P) < \dim(R)-1$, implying that $n \ge 2$ and so $D_-s(R, f^c) = 0$.
It follows immediately that because $s(R, f^r) = 0$ for all $r \ge c$, $D_+s(R, f^c) = 0$ and so the F-signature is differentiable at $c = FPT(f)$ whenever $c<1$ is a rational number whose denominator is not divisible by $p$. In the next two sections, we will see examples where the result is false when $p$ divides the denominator of $c$.
F-Signature of Homogeneous Polynomials in Two Variables
=======================================================
We turn our attention now to the case when $R = k[x,y]_{{\langle}x, y {\rangle}}$ and let $f \in R$ be a product of $r\ge 2$ distinct linear forms with $FPT(f) = c$. Here we relax the condition of the previous section that if $c$ is rational, then $p$ does not divide the denominator. By the exact sequence $$\begin{CD}
0 @>>> \dfrac{R}{{{\mathfrak}{m}}{^{[p^{e}]}}:f^a} @>>> \dfrac{R}{{{\mathfrak}{m}}{^{[p^{e}]}}} @>>> \dfrac{R}{{{\mathfrak}{m}}{^{[p^{e}]}}+{\langle}f^a {\rangle}} @>>> 0
\end{CD}\label{sesFsigHomog}$$ we have that $s(R, f^{a/p^s}) = 1 - \frac{1}{p^{2s}}\ell_R(R/{\langle}x^{p^s}, y^{p^s}, f^a{\rangle})$. This length has been studied extensively by P. Monsky and P. Teixeira in their work on $p$-fractals. We can use theorems found in [@Teix02] and [@Mon06] to obtain the following result:
The F-signature $s(R, f^t)$ of the pair $(R, f^t)$, where $f$ is the product of $r \ge 2$ distinct linear factors, converges uniformly on the interval $[0, c]$ to the polynomial $\frac{r^2}{4}t^2 - rt + 1$ as $p \to \infty$.
The Hilbert Syzygy Theorem implies that the module of syzygies between $x^{p^e}, y^{p^e}$ and $f^a$ can be generated by two homogeneous elements of degrees $m_1 \ge m_2$. Their difference $\delta = m_1 - m_2$ is called the [**syzygy gap**]{} of $(x^{p^e}, y^{p^e}, f^a)$. If we need to consider more than one triplet $(x^{p^e}, y^{p^e}, f^a)$ we will write this syzygy gap as $\delta(x^{p^e}, y^{p^e}, f^a)$. Theorem 2.10 in [@Teix02] tells us that $$\ell_R(R/{\langle}x^{p^e}, y^{p^e}, f^a{\rangle}) = \frac{1}{4}(4rap^e - (ra)^2) + \frac{\delta^2}{4}$$ Also in his thesis [@Teix02], Teixeira showed the functions $\frac{a}{p^e} \mapsto \frac{1}{p^{2e}}\ell_R(R/{\langle}x^{p^e}, y^{p^e}, f^a{\rangle})$ and $\frac{a}{p^e} \mapsto \frac{1}{p^e}\delta(x^{p^e}, y^{p^e}, f^a)$ defined on $[0, 1] \cap \mathbb{Z}[p^{-1}]$ can be extended uniquely to continuous functions on $[0, \infty)$. These extended functions are denoted $\phi_f(t)$ and $\delta_f(t)$ respectively.
In [@Mon06] Monsky proved an upper bound for $\delta(x^{p^e}, y^{p^e}, f^a)$ in the case when $f$ is homogeneous of degree $\ge 2$.
Let $l_1, \dots, l_r$ be linear forms such that $l_i$ and $l_j$ share no common non-unit factor for $i \ne j$ and $r \ge 2$. Suppose $0 \le a_1, \dots, a_r \le p^e$ and the $a_i$ satisfy the inequalities $2a_i \le \sum_1^r a_j \le 2p^e$. Then $\delta(x^{p^e}, y^{p^e}, \Pi_1^r l_i^{a_i}) \le (r-2)p^{e-1}$.
In our case, this theorem tells us that if $f$ is the product of $r \ge 2$ distinct linear forms and $ra \le 2p^e$, then $$\delta(x^{p^e}, y^{p^e}, f^a) \le (r-2)p^{e-1}.$$ Rearranging $ra \le 2p^e$, we see that if $(ra)/2 \le p^e$, then Monsky’s bound holds. Note that each term in $f^a$ has degree in $x$ or degree in $y$ at least $(ra)/2$, so if $f^a \not\in {{\mathfrak}{m}}{^{[p^{e}]}}$ then $(ra)/2 \le p^e$. Recalling the exact sequence above, we compute $$\begin{aligned}
s(R, f^{a/p^e}) &= 1 - \frac{1}{p^{2e}}\ell_R\left(\frac{R}{{\langle}x^{p^e}, y^{p^e}, f^a{\rangle}}\right)\\
&= 1 - \frac{1}{4p^{2e}}(4rap^e - r^2a^2 + \delta^2)\\
&= \frac{r^2}{4}\left(\frac{a}{p^e}\right)^2 - r\left(\frac{a}{p^e}\right) + 1 - \frac{\delta^2}{4p^{2e}}\end{aligned}$$ Extending $s(R, f^{a/p^e})$ to $[0, \infty)$ we get that $$s(R, f^t) = \frac{r^2}{4}t^2 - rt + 1 - \left(\frac{\delta_f(t)}{2}\right)^2$$ and Monsky’s upper bound for $\delta(x^{p^e}, y^{p^e}, f^a)$ shows that as $p \to \infty$, $\delta_f(t) \to 0$ for $t < c$ and so the F-signature converges uniformly to $\frac{r^2}{4}t^2 - rt + 1$ on the interval $[0, c]$.
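For very small cases, the length $\ell_R(R/{\langle}x^{p^e}, y^{p^e}, f^a{\rangle})$ can be computed by brute-force linear algebra over $\mathbb{F}_p$, bypassing the Macaulay2 routines of the later sections. The Python sketch below is illustrative only (the helper names are ad hoc, and the approach is feasible only for tiny $p^e$): it does this for $f = xy(x+y)$, $p = 5$, $e = 1$, and confirms Teixeira's formula $4\ell = 4rap^e - (ra)^2 + \delta^2$ with integer syzygy gaps, as well as the vanishing $f^3 \in (x^5, y^5)$.

```python
from itertools import product

def poly_mul(F, G, p, q):
    """Multiply two polynomials of F_p[x,y]/(x^q, y^q), represented as
    dicts mapping exponent pairs (i, j) to nonzero coefficients mod p."""
    H = {}
    for (a, b), ca in F.items():
        for (c, d), cb in G.items():
            i, j = a + c, b + d
            if i < q and j < q:                       # x^q = y^q = 0
                H[(i, j)] = (H.get((i, j), 0) + ca * cb) % p
    return {k: v for k, v in H.items() if v}

def rank_mod_p(rows, p):
    """Row rank over F_p via Gaussian elimination."""
    rows, rank = [r[:] for r in rows], 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)          # inverse mod the prime p
        for i in range(rank + 1, len(rows)):
            if rows[i][col]:
                c = (rows[i][col] * inv) % p
                rows[i] = [(x - c * y) % p for x, y in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def length(fa, p, q):
    """ell_R( R/(x^q, y^q, f^a) ) = q^2 - dim_k of the span of the
    monomial multiples of f^a inside F_p[x,y]/(x^q, y^q)."""
    mons = list(product(range(q), repeat=2))
    pos = {m: k for k, m in enumerate(mons)}
    rows = []
    for m in mons:
        g = poly_mul({m: 1}, fa, p, q)
        if g:
            row = [0] * len(mons)
            for mono, coef in g.items():
                row[pos[mono]] = coef
            rows.append(row)
    return q * q - rank_mod_p(rows, p)

p = q = 5                                  # characteristic 5, e = 1, q = p^e
r = 3                                      # f is a product of r = 3 linear forms
f = {(2, 1): 1, (1, 2): 1}                 # f = xy(x+y) = x^2*y + x*y^2
fa, ells = {(0, 0): 1}, {}
for a in (1, 2, 3):
    fa = poly_mul(fa, f, p, q)
    ells[a] = length(fa, p, q)

assert ells == {1: 13, 2: 21, 3: 25}       # ells[3] = q^2: f^3 lies in (x^5, y^5)
for a in (1, 2):                           # 4*ell = 4*r*a*q - (r*a)^2 + delta^2
    delta_sq = 4 * ells[a] - (4 * r * a * q - (r * a) ** 2)
    assert delta_sq == {1: 1, 2: 0}[a]     # integer syzygy gaps delta = 1 and 0
```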
Is there some geometric significance to the quadratic polynomial to which the F-signature converges with respect to resolution of singularities?
To finish this section, we use the above method to compute the limiting quadratic polynomial of the F-signature for three distinct lines in the plane, which we may assume is given by $f = xy(x+y)$. Furthermore, we can compute not only the exact value of $c$ in characteristic $p \equiv 2 \mod 3$, but also the left derivative of this function at $c$ in this case. Because we show that the denominator of $c$ is always divisible by $p$ when $p \equiv 2 \mod 3$, the results of the previous section regarding approximate higher-order left derivatives do not apply.
Let $f = xy(x+y)$ and suppose that $k$ has characteristic $p\ge 5$ congruent to $2 \mod 3$. Define $b_e = \left(p - \frac{p+1}{3}\right)p^{e-1}$ so that $b_e/p^e = \frac{2}{3} - \frac{1}{3p}$. A straightforward calculation using Lucas’ theorem shows that for all $e \in {{\mathbb}{N}}$, we have that $f^{b_e} \in {{\mathfrak}{m}}{^{[p^{e}]}}$ but $f^{b_e - 1} \not\in {{\mathfrak}{m}}{^{[p^{e}]}}$. Therefore, $$\begin{aligned}
\ell_R\big(R/({{\mathfrak}{m}}{^{[p^{e}]}}: f^{b_e})\big) &= 0\\
\ell_R\big(R/({{\mathfrak}{m}}{^{[p^{e}]}}: f^{b_e - 1})\big) &\ne 0\end{aligned}$$ and so we conclude that the F-pure threshold of $xy(x+y)$ is $c = b_e/p^e = \frac{2}{3}-\frac{1}{3p}$ in this case. The above-proven limiting polynomial of $s(R, f^t)$ is $g(t) = \frac{9}{4}t^2 - 3t + 1$ and $g(c) = \frac{1}{4p^2}$. Because $s(R, f^c) = 0$ but $g(c) \ne 0$, we can directly compute $\delta_f(c)$: $$\begin{aligned}
\left(\frac{\delta_f(c)}{2}\right)^2 &= g(c) - s(R, f^c)\\
&= \frac{1}{4p^2}\end{aligned}$$ so then $\delta_f(c) = \frac{1}{p} = \frac{(3-2)p^{e-1}}{p^e}$ so by Monsky’s bound, we have that $\delta_f(c)$ achieves a local maximum at $c$.
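The membership claims $f^{b_e} \in {{\mathfrak}{m}}{^{[p^{e}]}}$ and $f^{b_e-1} \notin {{\mathfrak}{m}}{^{[p^{e}]}}$ can also be verified directly for small $e$, without invoking Lucas' theorem: since $f^b = x^b y^b (x+y)^b = \sum_k \binom{b}{k} x^{b+k} y^{2b-k}$, the power $f^b$ lies in $(x^{p^e}, y^{p^e})$ exactly when every term whose coefficient is nonzero mod $p$ has an exponent $\ge p^e$. A small illustrative Python check for $p = 5$ and $e = 1, 2$:

```python
from math import comb

def in_frobenius_power(b, p, e):
    """Is f^b = (xy(x+y))^b in (x^(p^e), y^(p^e)) over F_p?
    f^b = sum_k C(b,k) x^(b+k) y^(2b-k); membership holds iff every
    term surviving mod p has x- or y-exponent at least p^e."""
    q = p ** e
    return all(b + k >= q or 2 * b - k >= q
               for k in range(b + 1) if comb(b, k) % p != 0)

p = 5
for e in (1, 2):
    b_e = (p - (p + 1) // 3) * p ** (e - 1)       # b_e / p^e = 2/3 - 1/(3p)
    assert in_frobenius_power(b_e, p, e)          # f^(b_e)   in  m^[p^e]
    assert not in_frobenius_power(b_e - 1, p, e)  # f^(b_e-1) not in m^[p^e]
```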
We can also provide an affirmative answer to what the left derivative is at $FPT(f) = c$ in this case. Note that the denominator is divisible by $p$, so we cannot apply Theorem \[DerivativeThm\] from the previous section. We return again to the work of Teixeira, who proves ([@Teix10], Theorem II) that if $\delta_f$ achieves a local maximum at $u \in [0, 1]$ then for all $t \in [0, 1]$ such that $3|t - u| \le \delta_f(u)$, we have that $$\delta_f(t) = \delta_f(u) - 3|t - u|$$ This implies that $\delta_f(t)$ is piecewise linear near $c$ so we can apply the techniques of calculus to take the left derivative at $c$: $$\begin{aligned}
\frac{9}{4}t^2 - 3t + 1 - \left(\frac{\delta_f(t)}{2}\right)^2 &= \left(\frac{3}{2}t-1\right)^2 - \left(\frac{\delta_f(t)}{2}\right)^2\\
&= \left(\frac{3}{2}t-1 + \frac{\delta_f(t)}{2}\right)\left(\frac{3}{2}t-1 - \frac{\delta_f(t)}{2}\right)\end{aligned}$$ now applying the product rule and substituting $c = \frac{2p-1}{3p}$, we have: $$\begin{aligned}
D_-s(R, f^c) &= \left(\frac{3}{2}-\frac{1}{2}D_- \delta_f\left(\frac{2p-1}{3p}\right)\right)\left(\frac{3}{2}\left(\frac{2p-1}{3p}\right)-1 + \frac{1}{2}\delta_f\left(\frac{2p-1}{3p}\right)\right)\\
& + \left(\frac{3}{2}+\frac{1}{2}D_-\delta_f\left(\frac{2p-1}{3p}\right)\right)\left(\frac{3}{2}\left(\frac{2p-1}{3p}\right)-1 - \frac{1}{2}\delta_f\left(\frac{2p-1}{3p}\right)\right)\\
&= \left(\frac{3}{2}-\frac{1}{2}(3)\right)\left(1-\frac{1}{2p} -1 + \frac{1}{2p}\right) + \left(\frac{3}{2} + \frac{1}{2}(3)\right)\left(1-\frac{1}{2p} - 1 - \frac{1}{2p}\right)\\
&= -\frac{3}{p}\end{aligned}$$ To complete this computation, we used that $D_-\delta_f(c) = 3$. We can see this by recalling Theorem II from [@Teix10], which tells us for $t$ sufficiently close to $c$, we have $\delta_f(t) = \delta_f(c) - 3|t - c|$. This gives that the left derivative at $c$ is $3$.
Note that this computation of $D_-s(R, f^c) = -\frac{3}{p} \ne 0$ in contrast to Theorem \[DerivativeThm\] in the previous section, where it was shown that when $p$ does not divide the denominator of $c$ the left derivative $D_-s(R, f^c) = 0$.
Computational Examples
======================
In this section, we will use the computational algebra package Macaulay2 to explicitly compute the F-signature of pairs for several polynomials and graph the data obtained using gnuplot. For the first example, we analyze the cusp $C$ and provide an affirmative answer for the left derivative of the F-signature at $FPT(C)$ in characteristics 5, 11, and 17. We also explicitly compute the F-signature function for three and four distinct linear forms in various characteristics using Macaulay2, and graph them with the quadratic limiting polynomials for the F-signature functions in these cases. All examples rely on routines defined in the next section.
[*(The cusp in characteristic 5, 11, and 17)*]{} Let $C = y^2 - x^3$ be the cuspidal cubic.
It is known that whenever characteristic $p \equiv 2 \mod 3$, the F-pure threshold of $C$ is $\frac{5}{6} - \frac{1}{6p}$. If $6$ divides $p+1$ we define $b_e = (p - \frac{p+1}{6})p^{e-1}$ so that $b_e/p^e = \frac{5}{6} - \frac{1}{6p}$. In this example, we will use Macaulay2 to compute $s(R, C^{(b_e-1)/p^e})$ in characteristics $p = 5, 11, $ and $17$ for $e = 2$ and $3$ using a function defined in section 5 of this paper. The code below will compute the value of the function for $p = 5$ and $e = 2$ (so that $b_e-1 = 19$); by changing the base ring, value of $e$, and $b_e$ appropriately, we may use this same code for other $p$ and $e$ to obtain the corresponding values. Here [Fsig]{} is a Macaulay2 routine defined explicitly in the next section; it computes $s(R, C^{a/p^e})$ for a single value of $a/p^e$.
----------------------
R = ZZ/5[x,y]
C = y^2 - x^3
Fsig(2, 19, C)
----------------------
The following data was collected using the above code, changing parameters as mentioned above:
$p$ $e$ $s(R, C^{(b_e-1)/p^e})$
----- ----- -------------------------
5 2 1/125
5 3 1/625
11 2 1/1331
11 3 1/14641
17 2 1/4913
17 3 1/83521
So we have that for each of these values, $s(R, C^{(b_e-1)/p^e}) = 1/p^{e+1}$. Using this data, we can compute the derivative of $s(R, C^t)$ at the F-pure threshold in these characteristics. Let $p$ be either 5, 11, or 17 and $e$ be 2 or 3. We compute the difference quotient $$\frac{s(R, C^{(b_e-1)/p^e}) - s(R, C^{b_e/p^e})}{(b_e-1)/p^e - b_e/p^e}$$ and arrive at $-1/p$ for each value of $p$ and $e$. For these computations, notice $b_e/p^e = \frac{5}{6} - \frac{1}{6p}$ so $s(R, C^{b_e/p^e}) = 0$. This shows that the points $\left(\frac{b_2-1}{p^2}, \frac{1}{p^3}\right)$, $\left(\frac{b_3-1}{p^3} , \frac{1}{p^4}\right)$, and $\left(\frac{5}{6} - \frac{1}{6p} , 0\right)$ are collinear points on the convex function $s(R, C^t)$. By convexity of $s(R, C^t)$ we have that the F-signature is linear on the interval $\left[\frac{b_2-1}{p^2}, \frac{5}{6} - \frac{1}{6p}\right]$. Therefore, we can affirmatively say that the derivative of $s(R, C^t)$ at $t = FPT(C)$ is $-1/p$.
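The difference quotients from the table can be reproduced mechanically. The snippet below (exact arithmetic; the $s$-values are copied from the table above) confirms that each quotient equals $-1/p$:

```python
from fractions import Fraction as Fr

# (p, e, s(R, C^((b_e-1)/p^e))) from the table; s(R, C^(b_e/p^e)) = 0.
data = [(5, 2, Fr(1, 125)), (5, 3, Fr(1, 625)),
        (11, 2, Fr(1, 1331)), (11, 3, Fr(1, 14641)),
        (17, 2, Fr(1, 4913)), (17, 3, Fr(1, 83521))]

for p, e, s_val in data:
    assert s_val == Fr(1, p ** (e + 1))          # observed pattern 1/p^(e+1)
    b_e = (p - (p + 1) // 6) * p ** (e - 1)      # b_e / p^e = 5/6 - 1/(6p)
    t1, t2 = Fr(b_e - 1, p ** e), Fr(b_e, p ** e)
    assert t2 == Fr(5, 6) - Fr(1, 6 * p)         # the F-pure threshold
    assert (s_val - 0) / (t1 - t2) == Fr(-1, p)  # difference quotient -1/p
```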
[*(Four distinct lines in characteristic 29)*]{} Let $k = \mathbb{Z}/29\mathbb{Z}$ and consider $f = xy(x+y)(x+2y) \in k[x,y]$.
We will use the Macaulay2 functions defined in the next section to generate a graph of the F-signature of $(R, f^t)$ for $0 \le t \le
\frac{1}{2}$. Here, [GenPlot]{} is a function that computes the F-signature of $(R, f^t)$ at values of $t$ of the form $0 \le b/p^e \le FPT(f)$ for some fixed value of $e$, passed as the first argument.
------------------------------------
R = ZZ/29[x,y]
f = x*y*(x+y)*(x+2*y)
GenPlot(2, f, "~/c29e2")
------------------------------------
Once complete, this operation will compute the length $$1- \frac{1}{29^4}\ell_R\left(\frac{R}{(x^{29^2}, y^{29^2}, f^a)}\right)$$ for $0 \le a \le 421$ and output these lengths to a file named [c29e2]{} (so titled for “characteristic 29, e=2”) which is formatted to be graphed by the program gnuplot. We provide these two graphs of the computed F-signature and the limiting polynomial $g(t)$ here. Even at such a low characteristic, the two are nearly indistinguishable if plotted simultaneously.
[*(Three distinct lines in characteristic 5)*]{} Let $f = xy(x+y) \in k[x,y]$ where $k = \mathbb{Z}/5\mathbb{Z}$.
Notice that we are in the case of the example at the end of the last section: $f = xy(x+y)$ and characteristic $5 \equiv 2 \mod 3$, so we can compute explicitly that the F-pure threshold is $\frac{2}{3}-\frac{1}{15} = \frac{3}{5}$. Using code similar to the above example, we generate a plot for the F-signature of $f$, the limiting polynomial, and also provide a plot of $\frac{1}{4}\delta_f(t)^2$ on $[0, \frac{3}{5}]$.
![Let $f = xy(x+y)$ as in example 5. The left picture here is the plot of $s(R, f^t)$ for $0\le t \le \frac{3}{5}$ generated using Macaulay. The right picture is the plot of $\frac{9}{4}t^2-3t+1$ on the same interval.](plot2 "fig:")
![Let $f = xy(x+y)$ as in example 5. The left picture here is the plot of $s(R, f^t)$ for $0\le t \le \frac{3}{5}$ generated using Macaulay. The right picture is the plot of $\frac{9}{4}t^2-3t+1$ on the same interval.](poly2)
![This is the plot of the term $\frac{1}{4}\delta_f(t)^2$ on the interval $[0, \frac{3}{5}]$ which was obtained by computing $\frac{9}{4}t^2
-3t+1 - s(R, f^t)$ with $f = xy(x+y)$ as in example 5.](sgap1)
Routines for Computing F-signatures using Macaulay2
===================================================
The results presented here were significantly influenced by experimental data gathered using the computational algebra package Macaulay2 [@M2]. This final section provides the source code for functions referenced in the examples from the previous section. The first function defined here accepts an ideal $I$ in a polynomial ring $R$ and returns the $e^{th}$ Frobenius power $I{^{[p^{e}]}}$.
---------------------------------------------
fpow = (I, e) -> (
    L := first entries gens I;
    p := char ring I;

    J := ideal(L#0^(p^e));
    for i from 1 to (length L)-1 do
        J = J + ideal(L#i^(p^e));
    J
)
---------------------------------------------
This second function returns a single value, namely $1 - \dfrac{1}{p^{ed}}\ell_R\big(R/(x_1^{p^e}, \dots, x_d^{p^e}, f^a)\big)$, where $R$ is a polynomial ring in variables $x_1, \dots, x_d$ and $f$ is some polynomial in this ring.
------------------------------------------------
Fsig = (e, a, f) -> (
    R1 := ring f;
    p := char ring f;
    I = fpow(ideal(first entries vars R1), e);
    1 - (1/p^(dim(R1)*e)) * degree(I + ideal(f^a))
)
------------------------------------------------
We can now build on this function to compute the F-signature of specific polynomials and output these lengths to a file. The first function will compute the values of the F-signature for some homogeneous polynomial $f$ (specified as the second argument when the function is called) at each value $a/p^e$ ($e$ is specified as the first argument) such that $0 \le a/p^e \le FPT(f)$. This is accomplished by repeatedly calling [Fsig(e, a, f)]{}. The values computed are then written to a file named $fileN$ (the third argument passed to the function) which should be enclosed in quotation marks and give the full path name of the file. The data is stored in the correct format for use with the program gnuplot to produce images like those found in the previous section and a new window is opened which contains a plot of the data just computed.
-------------------------------------------------------------------
GenPlot = (e, f, fileN) -> (
    cL = for i from 0 to (char (ring f))^e list
        q := Fsig(e, i, f)
    do (
        stdio << i << ", " << q << endl << "=============" << endl;
        if q == 0 then break;
    );

    fp = toString(fileN) << " ";
    for i from 0 to (length cL)-1 do
        fp << toRR(i/(char (ring f))^e) << " " << toRR(cL#i) << endl;
    fp << close;

    fp = "plotComm" << "plot '" << toString(fileN) << "' with lines";
    fp << close;

    run "gnuplot -p plotComm";
    run "rm plotComm";
)
-------------------------------------------------------------------
You can find this code on my website: [www.math.unl.edu/$\sim$ecanton2/]{}.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'In this paper, we extend considerably the global existence results of entropy-weak solutions related to compressible Navier-Stokes system with density dependent viscosities obtained, independently (using different strategies), by Vasseur-Yu \[[*Inventiones mathematicae*]{} (2016) and arXiv:1501.06803 (2015)\] and by Li-Xin \[arXiv:1504.06826 (2015)\]. More precisely we are able to consider a physical symmetric viscous stress tensor $\sigma = 2 \mu(\rho) \,{\mathbb D}({{ u}}) + \bigl(\lambda(\rho) {\rm div} {{ u}}- P(\rho)\bigr) \, {\rm Id}$ where ${\mathbb D}({{ u}}) = [\nabla {{ u}}+ \nabla^T {{ u}}]/2$ with a shear and bulk viscosities (respectively $\mu(\rho)$ and $\lambda(\rho)$) satisfying the BD relation $\lambda(\rho)=2(\mu''(\rho)\rho - \mu(\rho))$ and a pressure law $P(\rho)=a\rho^\gamma$ (with $a>0$ a given constant) for any adiabatic constant $\gamma>1$. The nonlinear shear viscosity $\mu(\rho)$ satisfies some lower and upper bounds for low and high densities (our mathematical result includes the case $\mu(\rho)= \mu\rho^\alpha$ with $2/3 < \alpha < 4$ and $\mu>0$ constant). This provides an answer to a longstanding mathematical question on compressible Navier-Stokes equations with density dependent viscosities as mentioned for instance by F. Rousset in the Bourbaki 69ème année, 2016–2017, no 1135.'
address:
- 'LAMA UMR5127 CNRS, Université Savoie Mont-Blanc, France'
- 'Department of Mathematics, The University of Texas at Austin.'
- 'Department of Mathematics, University of Florida.'
author:
- Didier Bresch
- 'Alexis F. Vasseur'
- Cheng Yu
title: 'Global Existence of Entropy-Weak Solutions to the Compressible Navier-Stokes Equations with Non-Linear Density Dependent Viscosities'
---
Introduction
============
When a fluid is governed by the barotropic compressible Navier-Stokes equations, the existence of global weak solutions, in the sense of J. [Leray]{} (see [@Le]), in space dimension greater than two remained for a long time without answer, because of the weak control of the divergence of the velocity field which may provide the possibility for the density to vanish (vacuum state) even if initially this is not the case.
There exists a huge literature on this question, in the case of constant shear viscosity $\mu$ and constant bulk viscosity $\lambda$. Before 1993, many authors such as Hoff [@Hoff87], Jiang-Zhang [@JZ], Kazhikhov–Shelukhin [@KS], Serre [@S], Veigant–Kazhikhov [@VK] (to cite just some of them) have obtained partial answers: We can cite, for instance, the works in dimension 1 in 1986 by Serre [@S], the one by Hoff [@Hoff87] in 1987, and the one in the spherical case in 2001 by Jiang-Zhang [@JZ]. The first rigorous approach of this problem in its generality is due in 1993 to P.–L. Lions [@Lions] when the pressure law in terms of the density is given by $P(\rho)=a \rho^\gamma$ where $a$ and $\gamma$ are two strictly positive constants. He has presented in 1998 a complete theory for $P(\rho)=a \rho^\gamma$ with $\gamma\ge 3d/(d+2)$ (where $d$ is the space dimension) allowing to obtain the result of global existence of weak solutions à la Leray in dimension $d=2$ and $3$ and for general initial data belonging to the energy space. His result has been then extended in 2001 to the case $P(\rho)= a \rho^\gamma$ with $\gamma>d/2$ by Feireisl-Novotny-Petzeltova [@FNP] introducing an appropriate method of truncation. Note also in 2014 the paper by Plotnikov-Weigant [@PW] in dimension 2 for the linear pressure law, that is $\gamma =1$. In 2002, Feireisl [@F04] has also proved it is possible to consider a pressure law $P(\rho)$ non-monotone on a compact set $[0,\rho_*]$ (with $\rho_*$ constant) and monotone elsewhere. This has been relaxed in 2018 by Bresch-Jabin [@BJ] allowing to consider real non-monotone pressure laws. They have also proved that it is possible to consider some constant anisotropic viscosities. The Lions theory has also been extended recently by Vasseur-Wen-Yu [@VWY] to pressure laws depending on two phases (see also Maltese [*et al.*]{} [@MaMiMuNoPoZa], Novotny [@No] and Novotny-Pokorny [@NoPo]).
The method introduced by Bresch-Jabin in [@BJ] has also been recently developed in the bifluid framework by Bresch-Mucha-Zatorska in [@BrMuZa].
When the shear and the bulk viscosities (respectively $\mu$ and $\lambda$) are assumed to depend on the density $\rho$, the mathematical framework is completely different. It has been discussed, mathematically, initially in a paper by Bernardi-Pironneau [@BP] related to viscous shallow-water equations and by P.–L. Lions [@Lions] in his second volume related to mathematics and fluid mechanics. The main ingredient in the constant case, which is the compactness in space of the effective flux $F= (2\mu+\lambda) {\rm div} u - P(\rho)$, is no longer true for density dependent viscosities. In space dimension greater than one, a real breakthrough has been realized with a series of papers by Bresch-Desjardins [@BD; @BD2006; @BrDeFormula; @BrDeSpringer] (started in 2003 with Lin [@BDL] in the context of Navier-Stokes-Korteweg with a linear shear viscosity) who have identified additional information related to the gradient of a function of the density if the viscosities satisfy what is called the Bresch-Desjardins constraint. This information is usually called the BD entropy in the literature, with the introduction of the concept of entropy-weak solutions. Using such extra information, they obtained the global existence of entropy-weak solutions in the presence of appropriate drag terms or singular pressure close to vacuum. Concerning the one-dimensional case or the spherical case, many important results have been obtained for instance by Burtea-Haspot [@BuHa], Ducomet-Necasova-Vasseur [@DNV], Constantin-Drivas-Nguyen-Pasqualotto [@CoDrNgPa], Guo-Jiu-Xin [@GJX], Haspot [@Haspot], Jiang-Xin-Zhang [@JXZ], Jiang-Zhang [@JZ], Kanel [@Kan], Li-Li-Xin [@LiLiXi], Mellet-Vasseur [@MV2], Shelukhin [@S] without such kind of additional terms. Stability and construction of approximate solutions in space dimension two or three have been investigated during more than fifteen years with a first important stability result without drag terms or singular pressure by Mellet-Vasseur [@MV].
Several important works for instance by Bresch-Desjardins [@BD; @BD2006; @BrDeFormula; @BrDeSpringer] and Bresch-Desjardins-Lin [@BDL], Bresch-Desjardins-Zatorska [@BDZ], Li-Xin [@LiXi], Mellet-Vasseur [@MV], Mucha-Pokorny-Zatorska [@MuPoZa], Vasseur-Yu [@VY-1; @VY], and Zatorska [@Z] have also been written trying to find a way to construct approximate solutions. Recently a real breakthrough has been made in two important papers by Li-Xin [@LiXi] and Vasseur-Yu [@VY]: Using two different approaches, they obtained the global existence of entropy-weak solutions for the compressible Navier-Stokes equations when $\mu(\rho)=\rho$ and $\lambda(\rho)=0$. Note that in the last paper [@LiXi] by Li-Xin, they also consider more general viscosities satisfying the BD relation but with a non-symmetric stress diffusion ($\sigma =
\mu(\rho)\nabla u + (\lambda(\rho){\rm div} u - P(\rho)) {\rm Id}$) and more restrictive conditions on the shear viscosity $\mu(\rho)$, the bulk viscosity $\lambda(\rho)$, and the pressure law $P(\rho)$ compared to the present paper.
The objective of this current paper is to extend the existence results of global entropy-weak solutions obtained independently (using different strategies) by Vasseur-Yu [@VY] and Li-Xin [@LiXi] to answer a longstanding mathematical question on compressible Navier-Stokes equations with density dependent viscosities as mentioned for instance by Rousset [@Ro]. More precisely, extending and coupling carefully the two-velocities framework of Bresch-Desjardins-Zatorska [@BDZ] with the generalization of the quantum Böhm identity found by Bresch-Couderc-Noble-Vila [@BCNV] (proving a generalization of the dissipation inequality used by Jüngel [@J] for the Navier-Stokes-Quantum system and established by Jüngel-Matthes in [@JuMa]) and with the renormalized solutions introduced in Lacroix-Violet and Vasseur [@LaVa], we obtain the global existence of entropy-weak solutions to the following Navier-Stokes equations: $$\label{NS equation}
\begin{split}
&\rho_t+{{\rm div}}(\rho{{ u}})=0\\
&(\rho{{ u}})_t+{{\rm div}}(\rho{{ u}}\otimes{{ u}})+\nabla P(\rho) - 2 {\rm div}\bigl(\sqrt{\mu(\rho)} \mathbb{S}_\mu
+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr}(\sqrt{\mu(\rho)} \mathbb S_\mu) {\rm Id} \bigr)=0,
\end{split}$$ where $$\sqrt{\mu(\rho)} \mathbb{S}_\mu =
\mu(\rho) {\mathbb D}({{ u}})$$ with data $$\label{initial data}
\rho|_{t=0}=\rho_0(x)\ge 0,\;\;\;\;\;\rho{{ u}}|_{t=0}={{ m}}_0(x)=\rho_0{{ u}}_0,$$ and where $P(\rho) =a \rho^{\gamma}$ denotes the pressure with the two constants $a>0$ and $\gamma >1$, $\rho$ is the density of the fluid, ${{ u}}$ stands for the velocity of the fluid, and $\mathbb{D}{{ u}}=[\nabla{{ u}}+\nabla^T{{ u}}]/2$ is the strain tensor. As usual, we consider $${{ u}}_0= \frac{m_0}{\rho_0} \hbox{ when } \rho_0\not=0 \hbox{ and }{{ u}}_0 = 0 \hbox{ elsewhere},
\qquad \frac{|m_0|^2}{\rho_0} = 0 \hbox{ a.e. on } \{x\in \Omega: \rho_0(x) = 0\}.$$ We remark the following identity $$2 {\rm div}\bigl(\sqrt{\mu(\rho)} \mathbb{S}_\mu
+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr}(\sqrt{\mu(\rho)} \mathbb S_\mu) {\rm Id} \bigr)=-2{{\rm div}}(\mu(\rho)\mathbb{D}{{ u}})-\nabla(\lambda(\rho){{\rm div}}{{ u}}).$$
The viscosity coefficients $\mu=\mu(\rho)$ and $\lambda=\lambda(\rho)$ satisfy the Bresch-Desjardins relation introduced in [@BrDeFormula] $$\label{BD relationship}
\lambda(\rho)=2(\rho\mu'(\rho)-\mu(\rho)).$$ The relation between the stress tensor $\mathbb{S}_\mu$ and the triple $(\mu(\rho)/\sqrt\rho, \sqrt \rho {{ u}}, \sqrt\rho {{ v}})$, where ${{ v}}= 2 \nabla s(\rho)$ with $s'(\rho)= \mu'(\rho)/\rho$, will be proved in the following way: The matrix $\mathbb{S}_\mu$ is the symmetric part of a matrix valued function $\mathbb{T}_\mu$, namely $$\label{Smu}
\mathbb{S}_\mu = \frac{(\mathbb{T}_\mu + \mathbb{T}_\mu^t)}{2}$$ where $\mathbb{T}_\mu$ is defined through $$\label{Tmu}
\begin{split}
\sqrt{\mu(\rho)} \mathbb{T}_\mu
= \nabla (\sqrt\rho {{ u}}\, \frac{\mu(\rho)}{\sqrt\rho})
- \sqrt\rho {{ u}}\otimes \sqrt\rho \nabla s(\rho)
\end{split}$$ with $$\label{s}
s'(\rho) = \mu'(\rho) /\rho,$$ and $$\label{Tmu1}
\begin{split}
\frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr}(\sqrt{\mu(\rho)} \mathbb T_\mu) {\rm Id}
= \Bigl[ {\rm div}(\frac{\lambda(\rho)}{\mu(\rho)} \sqrt\rho {{ u}}\, \frac{\mu(\rho)}{\sqrt\rho})
- \sqrt\rho {{ u}}\cdot \sqrt\rho \,\nabla s(\rho) \, \frac{\rho \mu''(\rho)}{\mu'(\rho)}\Bigr] {\rm Id}.
\end{split}$$ For the sake of simplicity, we will consider the case of periodic boundary conditions in three space dimensions, namely ${\Omega}=\mathbb{T}^3$. Throughout the paper, we assume: $$\label{regmu}
\mu \in C^0({\mathbb R_+}; \, {\mathbb R_+})\cap C^2({\mathbb R}_+^*; \,{\mathbb R}),$$ where $\mathbb R_+=[0,\infty) \text{ and } \mathbb R_+^*=(0,\infty).$ We also assume that there exists two positive numbers $\alpha_1,\alpha_2$ such that $$\label{mu estimate}
\begin{array}{l}
\displaystyle{
\frac{2}{3}<\alpha_1<\alpha_2<4,
}\\[0.3cm]
\displaystyle{\mathrm{for \ any } \ \rho>0, \qquad
0<\frac{1}{\alpha_2}\rho \mu'(\rho)\leq \mu(\rho)\leq \frac{1}{\alpha_1}\rho \mu'(\rho),
}
\end{array}$$ and there exists a constant $C>0$ such that $$\label{mu estimate1}
\left|\frac{\rho \mu''(\rho)}{\mu'(\rho)}\right| \le C < +\infty.$$ Note that if $\mu(\rho)$ and $\lambda(\rho)$ satisfy and , then $$\lambda(\rho) + 2\mu(\rho)/3 \ge 0$$ and thanks to $$\mu(0)= \lambda(0) = 0.$$ Note that the hypotheses – allow a shear viscosity of the form $\mu(\rho)=\mu \rho^{\alpha}$ with $\mu>0$ a constant, where $2/3<\alpha<4$, and a bulk viscosity satisfying the BD relation: $\lambda(\rho)= 2(\mu'(\rho)\rho - \mu(\rho))$.
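As a symbolic sanity check, not taken from the paper, the power-law family can be verified against these hypotheses directly; the sketch below assumes $\mu(\rho)=c\rho^\alpha$ and checks the two-sided bound, the BD relation, and the sign of $\lambda(\rho)+2\mu(\rho)/3$ with sympy.

```python
# Sketch (not from the paper): verify that mu(rho) = c*rho**alpha satisfies the
# hypotheses above and compute the associated BD bulk viscosity symbolically.
import sympy as sp

rho, c, alpha = sp.symbols('rho c alpha', positive=True)
mu = c * rho**alpha

# rho*mu'(rho) = alpha*mu(rho), so the two-sided bound holds with alpha_1 = alpha_2 = alpha
assert sp.simplify(rho * sp.diff(mu, rho) - alpha * mu) == 0

# BD relation: lambda(rho) = 2*(rho*mu'(rho) - mu(rho)) = 2*c*(alpha - 1)*rho**alpha
lam = sp.expand(2 * (rho * sp.diff(mu, rho) - mu))
assert sp.simplify(lam - 2 * c * (alpha - 1) * rho**alpha) == 0

# lambda + (2/3)*mu = c*(2*alpha - 4/3)*rho**alpha, nonnegative exactly when alpha >= 2/3
assert sp.simplify(lam + sp.Rational(2, 3) * mu
                   - c * (2 * alpha - sp.Rational(4, 3)) * rho**alpha) == 0

# |rho*mu''(rho)/mu'(rho)| = |alpha - 1| is bounded, as required above
assert sp.simplify(rho * sp.diff(mu, rho, 2) / sp.diff(mu, rho) - (alpha - 1)) == 0
```

The last assertion makes explicit why the boundedness hypothesis on $\rho\mu''/\mu'$ is harmless for power laws: the quotient is the constant $\alpha-1$.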
[**Remark.**]{} In [@VY] and [@LiXi] the case $\mu(\rho)=\mu\rho$ and $\lambda(\rho)=0$ is considered; in [@LiXi] more general cases have been considered, but with a non-symmetric viscous term in the three-dimensional case, namely $- {{\rm div}}(\mu(\rho)\nabla {{ u}}) - \nabla (\lambda(\rho){{\rm div}}{{ u}})$. In [@LiXi] the viscosities $\mu(\rho)$ and $\lambda(\rho)$ satisfy with $\mu(\rho) = \mu \rho^\alpha$ where $\alpha \in [3/4,2)$ and with the following assumption on the value $\gamma$ for the pressure $p(\rho)=a\rho^\gamma$: $$\hbox{ If } \alpha\in [3/4,1], \qquad \gamma \in (1,6\alpha-3)$$ and $$\hbox{ if } \alpha \in (1,2), \qquad \gamma\in [2\alpha-1,3\alpha-1].$$
The main result of our paper reads as follows:
\[main result\] Let $\mu(\rho)$ verify – and $\mu$ and $\lambda$ verify . Let us assume the initial data satisfy $$\label{initial energy}
\begin{split}
& \int_{{\Omega}}\left(\frac{1}{2}\rho_0|{{ u}}_0+ 2\kappa \nabla s(\rho_0)|^2
+\kappa(1-\kappa)\rho_0\frac{|2\nabla s(\rho_0)|^2}{2}\right) \, dx \\
& \hskip6cm
+ \int_{{\Omega}}\left(a\frac{\rho_0^{\gamma}}{\gamma-1} + \mu(\rho_0)\right)\,dx\leq C <+\infty.
\end{split}$$ with $\kappa\in (0,1)$ given. Let $T$ be given such that $0<T<+\infty$; then, for any $\gamma>1$, there exists a renormalized solution to - as defined in Definition \[def\_renormalise\_u\]. Moreover, this renormalized solution with initial data satisfying is a weak solution to - in the sense of Definition \[defweak\].
Our result may be considered as an improvement of [@LiXi] for two reasons: First, it takes into account a physically relevant symmetric viscous tensor and, secondly, it extends the range of coefficients $\alpha$ and $\gamma$. The method is based on the consideration of an approximated system with an extra pressure quantity, appropriate non-linear drag terms and appropriate capillarity terms. This generalizes the Quantum-Navier-Stokes system with quadratic drag terms considered in [@VY-1; @VY]. First we prove that weak solutions of the approximate system are renormalized solutions of the system, in the sense of [@LaVa]. Then we pass to the limit with respect to $r_2,r_1, r_0, r, \delta$ to get renormalized solutions of the compressible Navier-Stokes system. The final step concerns the proof that a renormalized solution of the compressible Navier-Stokes system is a global weak solution of the compressible Navier–Stokes system. Note that, thanks to the technique of renormalized solutions introduced in [@LaVa], it is not necessary to derive the Mellet-Vasseur type inequality in this paper: This allows us to cover the whole range $\gamma>1$.
[*First Step.*]{} Motivated by the work of [@LaVa], the first step is to establish the existence of a global $\kappa$ entropy weak solution to the following approximation $$\label{last level approximation}
\begin{split}
&\rho_t+{{\rm div}}(\rho{{ u}})=0\\
&(\rho{{ u}})_t+{{\rm div}}(\rho{{ u}}\otimes{{ u}})+\nabla P(\rho) + \nabla P_\delta(\rho) \\
&\hskip3cm- 2 {\rm div}\Bigl(\sqrt{\mu(\rho)} \mathbb{S}_\mu
+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr}(\sqrt{\mu(\rho)} \mathbb S_\mu) {\rm Id}\Bigr) \\
& \hskip3cm- 2 r {\rm div}\Bigl(\sqrt{\mu(\rho)} \mathbb{S}_r
+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr}(\sqrt{\mu(\rho)} \mathbb S_r) {\rm Id}\Bigr) \\
& \hskip7cm + r_0{{ u}}+r_1\rho|{{ u}}|{{ u}}+r_2\frac{\rho}{\mu'(\rho)}|{{ u}}|^2{{ u}}= 0
\end{split}$$ where the barotropic pressure law and the extra pressure term are respectively $$P(\rho)= a\rho^\gamma, \qquad P_\delta (\rho)= \delta \rho^{10} \hbox{ with } \delta>0.$$ The matrix $\mathbb{S}_\mu$ is defined in and $\mathbb{T}_\mu$ is given in- . The matrix $\mathbb{S}_r$ is compatible in the following sense: $$\label{eq_quantic}
\begin{split}
r\sqrt{\mu(\rho)} \mathbb{S}_r = 2r \Bigl[2 \sqrt{\mu(\rho)} \nabla\nabla Z(\rho)
- \nabla (\sqrt{\mu(\rho)} \nabla Z(\rho))\Bigr],
\end{split}$$ where $$\label{ZZ}
\displaystyle Z(\rho) = \int_0^\rho [(\mu(s))^{1/2} \mu'(s)]/s \, ds, \qquad
\displaystyle k(\rho) = \int_0^\rho [{\lambda(s)\mu'(s)}]/{\mu(s)^{3/2}} ds$$ and $$\label{eq_quantic11}
\begin{split}
r\frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr}(\sqrt{\mu(\rho)} \mathbb S_r) {\rm Id}
= r(\frac{\lambda(\rho)}{\sqrt{\mu(\rho)}}+ \frac{1}{2} k(\rho))\Delta Z(\rho) {\rm Id}
- \frac{r}{2}{\rm div} [ k(\rho)\nabla Z(\rho)] {\rm Id}.
\end{split}$$ [**Remark.**]{} Note that the previous system is the generalization of the quantum viscous Navier-Stokes system considered by Lacroix-Violet and Vasseur in [@LaVa] (see also the interesting papers by Antonelli-Spirito [@AnSp1; @AnSp2] and by Carles-Carrapatoso-Hillairet [@CaCaHi]). Indeed if we consider $\mu(\rho)=\rho$ and $\lambda(\rho)=0$, we can write $\sqrt{\mu(\rho)} \mathbb S_r$ as $$\sqrt{\mu(\rho)} \mathbb{S}_r =4 \sqrt{\rho} \Bigl[ \nabla\nabla \sqrt\rho
- 4 (\nabla \rho^{1/4} \otimes \nabla \rho^{1/4}) \Bigr],$$ using $Z(\rho) = 2\sqrt\rho.$ The Navier–Stokes equations for quantum fluids were also considered by A. Jüngel in [@J].
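This identification can be checked symbolically; the short sympy sketch below (an illustration, not part of the proof) verifies in one space dimension, for the illustrative density $\rho=x^2+1$, that $Z(\rho)=2\sqrt\rho$ and that the two expressions for $\sqrt{\mu(\rho)}\,\mathbb S_r$ coincide when $\mu(\rho)=\rho$.

```python
# 1D symbolic check (sketch) of the quantum case mu(rho) = rho: Z(rho) = 2*sqrt(rho)
# and the two forms of sqrt(mu(rho))*S_r agree; rho = x**2 + 1 is an illustrative density.
import sympy as sp

x, s, r = sp.symbols('x s r', positive=True)

# Z(rho) = int_0^rho sqrt(mu(s)) * mu'(s) / s ds with mu(s) = s gives 2*sqrt(rho)
assert sp.simplify(sp.integrate(sp.sqrt(s) / s, (s, 0, r)) - 2 * sp.sqrt(r)) == 0

rho = x**2 + 1
Z = 2 * sp.sqrt(rho)
# definition of sqrt(mu)*S_r through Z, reduced to one dimension
lhs = 2 * (2 * sp.sqrt(rho) * sp.diff(Z, x, 2) - sp.diff(sp.sqrt(rho) * sp.diff(Z, x), x))
# the rewriting 4*sqrt(rho)*[grad^2 sqrt(rho) - 4 grad(rho^(1/4)) (x) grad(rho^(1/4))]
rhs = 4 * sp.sqrt(rho) * (sp.diff(sp.sqrt(rho), x, 2)
                          - 4 * sp.diff(rho**sp.Rational(1, 4), x)**2)
assert sp.simplify(lhs - rhs) == 0
```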
As the first step generalizing [@VY], we prove the following result.
\[main result 1\] Let $\mu(\rho)$ verify – and let $\lambda(\rho)$ be given by . If $r_0>0$, then we also assume that ${\rm inf}_{s \in [0,+\infty)} \mu'(s)=\epsilon_1 >0$. Assume that $r_1$ is small enough compared to $r$, that $r_2$ is small enough compared to $\delta$, and that the initial values verify $$\label{Initial conditions}
\begin{split}
& \int_\Omega \rho_0\left(\frac{|{{ u}}_0+2\kappa\nabla s(\rho_0)|^2}{2}+(\kappa (1-\kappa)+r)\frac{|2\nabla s(\rho_0)|^2}{2}\right) \, dx\\
& \hskip4cm + \int_\Omega \bigl(a \frac{\rho_0^\gamma}{\gamma-1}+
\mu(\rho_0) + \delta \frac{\rho_0^{10}}{9}+\frac{r_0}{\varepsilon_1}|(\ln \rho_0)_-|\bigr)\,dx < + \infty,
\end{split}$$ for a fixed $\kappa\in (0,1)$. Then there exists a $\kappa$ entropy weak solution $(\rho,{{ u}}, \mathbb T_\mu, \mathbb S_r)$ to – satisfying the initial conditions , in the sense that $(\rho,{{ u}}, \mathbb T_\mu, \mathbb S_r)$ satisfies the mass and momentum equations in a weak form, and satisfies the compatibility formula in the sense of definition \[defweak\]. In addition, it verifies the following estimates: $$\label{priori estimates}
\begin{split}
&\|\sqrt{\rho}\, ({{ u}}+2\kappa \nabla s(\rho))\|^2_{L^{\infty}(0,T;L^2({\Omega}))}\leq C,
\quad\quad\quad\quad\quad
a \|\rho\|^\gamma_{L^{\infty}(0,T;L^{\gamma}({\Omega}))}\leq C,
\\&\|\mathbb T_\mu\|^2_{L^2(0,T;L^2({\Omega}))}\leq C,
\quad\quad\quad
(\kappa(1-\kappa)+r)\|\sqrt\rho \nabla s(\rho)\|^2_{L^{\infty}(0,T;L^2({\Omega}))}\leq C,
\\&
\kappa\|\sqrt{\mu'(\rho)\rho^{\gamma-2}}\nabla\rho\|^2_{L^2(0,T;L^2({\Omega}))}\leq C,
\end{split}$$ and $$\label{priorie estimate2}
\begin{split}
\\&\delta\|\rho\|^{10}_{L^{\infty}(0,T;L^{10}({\Omega}))}\leq C,\quad\quad\;\;\;\quad\quad\quad\quad\delta\|\sqrt{\mu'(\rho)\rho^{8}}\nabla\rho\|^2_{L^2(0,T;L^2({\Omega}))}\leq C,
\\&r_2\|(\frac{\rho}{\mu'(\rho)})^{\frac{1}{4}}{{ u}}\|^4_{L^4(0,T;L^4({\Omega}))}\leq C,
\quad\quad\quad r_1\|\rho^{\frac{1}{3}}|{{ u}}|\|^3_{L^3(0,T;L^3({\Omega}))}\leq C,
\\&r_0\|{{ u}}\|^2_{L^2(0,T;L^2({\Omega}))}\leq C,
\quad\quad\quad\quad\quad\quad\quad
r \|\mathbb S_r\|^2_{L^2(0,T;L^2({\Omega}))} \leq C.
\end{split}$$ Note that the bounds provide the following control on the velocity field $$\|\sqrt{\rho}\, {{ u}}\|^2_{L^{\infty}(0,T;L^2({\Omega}))}\leq C.$$ Moreover let $$\displaystyle Z (\rho)= \int_0^\rho \frac{\sqrt{\mu(s)}\mu'(s)}{s}\, ds\;\;\text{and }\; \displaystyle Z_1(\rho) = \int_0^\rho \frac{\mu'(s)}{(\mu(s))^{1/4} s^{1/2}} \, ds,$$ we have the extra control $$\label{J inequality for sequence}
r \left[\int_0^T\int_{{\Omega}}|\nabla^2Z(\rho)|^2\,dx\,dt
+\int_0^T\int_{{\Omega}} |\nabla Z_1(\rho)|^4\,dx\,dt\right]
\leq C,$$ and $$\label{priori mu}
\begin{split}
&\|\mu(\rho)\|_{L^\infty(0,T;W^{1,1}(\Omega))} +
\|\mu(\rho){{ u}}\|_{L^\infty(0,T;L^{3/2}({\Omega}))\cap L^2(0,T;W^{1,1} ({\Omega}))} \leq C,\\
& \|\partial_t \mu(\rho)\|_{L^{\infty}(0,T;W^{-1,1}({\Omega}))}\leq C, \\
& \|Z(\rho)\|_{L^\infty(0,T;L^{1+}(\Omega))} +
\|Z_1(\rho)\|_{L^\infty(0,T;L^{1+}(\Omega))} \leq C,
\end{split}$$ where $C>0$ is a constant which depends only on the initial data.
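To make the weights $Z$ and $Z_1$ concrete, for a power-law viscosity $\mu(\rho)=\rho^\alpha$ they admit the closed forms $Z(\rho)=\frac{2\alpha}{3\alpha-2}\rho^{(3\alpha-2)/2}$ and $Z_1(\rho)=\frac{4\alpha}{3\alpha-2}\rho^{(3\alpha-2)/4}$, and both integrals converge at $\rho=0$ exactly when $\alpha>2/3$, consistently with the lower bound assumed on the viscosity. The sympy sketch below (an illustration, with the exponent $\alpha=1$ chosen for the concrete check) verifies these formulas:

```python
# Closed forms (sketch) of Z and Z_1 for mu(rho) = rho**alpha, checked at alpha = 1;
# both integrals converge at 0 exactly when alpha > 2/3.
import sympy as sp

s, R = sp.symbols('s R', positive=True)
alpha = sp.Integer(1)             # illustrative exponent; any alpha > 2/3 works

mu = s**alpha
mu_p = sp.diff(mu, s)             # mu'(s)

Z = sp.integrate(sp.sqrt(mu) * mu_p / s, (s, 0, R))
Z1 = sp.integrate(mu_p / (mu**sp.Rational(1, 4) * sp.sqrt(s)), (s, 0, R))

# Z = 2*alpha/(3*alpha-2) * R**((3*alpha-2)/2),  Z_1 = 4*alpha/(3*alpha-2) * R**((3*alpha-2)/4)
assert sp.simplify(Z - 2 * alpha / (3 * alpha - 2) * R**(sp.Rational(3 * alpha - 2, 2))) == 0
assert sp.simplify(Z1 - 4 * alpha / (3 * alpha - 2) * R**(sp.Rational(3 * alpha - 2, 4))) == 0
```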
[**Sketch of proof for Theorem \[main result 1\].**]{} To show Theorem \[main result 1\], we need to build the smooth solution to an approximation associated to . Here, we adapt the ideas developed in [@BDZ] to construct this approximation. More precisely, we consider an augmented version of the system which will be more appropriate to construct approximate solutions. Let us explain the idea.

[*First step: the augmented system.*]{} We define a new velocity field generalizing the one introduced in the BD entropy estimate, namely $$\w={{ u}}+ 2\kappa\nabla s(\rho)$$ and a drift velocity ${{ v}}=2 \nabla s(\rho)$, with ${s}(\rho)$ defined in .
Assuming we have a smooth solution of with damping terms, it can be shown that $(\rho,\w,{v})$ satisfies the following system of equations $$\rho_t + {\rm div}(\rho \w) - 2\kappa \Delta \mu(\rho) = 0$$ and $$\begin{split}
\\&(\rho\w)_t+{{\rm div}}(\rho{{ u}}\otimes\w)-2(1-\kappa){{\rm div}}(\mu(\rho)\mathbb{D}\, \w)
-2\kappa{{\rm div}}(\mu(\rho)\mathbf{A}(w))
\\&- (1-\kappa) \nabla(\lambda(\rho) {{\rm div}}(w-\kappa {v}))
+\nabla\rho^{\gamma}+\delta\nabla\rho^{10}
+4(1-\kappa)\kappa {{\rm div}}(\mu(\rho)\nabla^2{s}(\rho))
\\&=- r_0 (w-2\kappa \nabla s(\rho))
- r_1 \rho|\w-2\kappa\nabla{s}(\rho)|(\w-2\kappa\nabla{s}(\rho))\\&
- r_2 \frac{\rho}{\mu'(\rho)} |\w-2\kappa\nabla{s}(\rho)|^2(\w-2\kappa\nabla{s}(\rho))
+r\rho\nabla\left(\sqrt{K(\rho)}\D(\int_0^{\rho}\sqrt{K(s)}\,ds)\right),
\end{split}$$ and $$\begin{split}
& (\rho{v})_t+{{\rm div}}(\rho{{ u}}\otimes{v})-2\kappa{{\rm div}}(\mu(\rho)\nabla {v})
+ 2{{\rm div}}(\mu(\rho)\nabla^t\w) + \nabla(\lambda(\rho){{\rm div}}(\w-\kappa {v}))=0,
\end{split}$$ where $${v}= 2 \nabla {s}(\rho), \qquad \w={{ u}}+\kappa {v}$$ and $$K(\rho) = 4 (\mu'(\rho))^2 / \rho .$$ This is the augmented version for which we will show that there exist global weak solutions, adding a hyperdiffusivity $\varepsilon_2[ \Delta^{2s}\w -{{\rm div}}((1+|\nabla w|^2)\nabla w)]$ to the equation satisfied by $w$, and then passing to the limit as $\varepsilon_2$ goes to zero.
[**Important remark.**]{} Note that recently Bresch-Couderc-Noble-Vila [@BCNV] showed the following interesting relation $$\rho\nabla\left(\sqrt{K(\rho)}\D(\int_0^{\rho}\sqrt{K(s)}\,ds)\right)
={{\rm div}}(F(\rho)\nabla^2 \psi(\rho))+
\nabla\left((F'(\rho)\rho- F(\rho))\D\psi(\rho)\right),$$ with $F'(\rho)=\sqrt{K(\rho)\rho}$ and $\sqrt\rho \psi'(\rho) = \sqrt{K(\rho)}.$ Thus choosing $$F(\rho)=2\,\mu(\rho) \hbox{ and therefore } F'(\rho)\rho- F(\rho)=\lambda(\rho),$$ this gives $\psi(\rho) = 2 {s}(\rho)$ and thus $$\label{BCNV relationship}
\rho\nabla\left(\sqrt{K(\rho)}\D(\int_0^{\rho}\sqrt{K(s)}\,ds)\right)=
2 {{\rm div}}\Bigl(\mu(\rho)\nabla^2\bigl(2 {s}(\rho)\bigr)\Bigr)
+\nabla\Bigl(\lambda(\rho)\D\bigl(2{s}(\rho)\bigr)\Bigr).$$ This identity will play a crucial role in the proof. It defines the appropriate capillarity term to consider in the approximate system. Other identities will be used to define the weak solution for the Navier-Stokes-Korteweg system and to pass to the limit in it namely $$\label{rel}
\begin{split}
& 2\mu(\rho)\nabla^2(2{\mathbf s}(\rho))
+ \lambda(\rho) \Delta (2{\mathbf s}(\rho)) = 4 \Bigl[2 \sqrt{\mu(\rho)} \nabla\nabla Z(\rho)
- \nabla (\sqrt{\mu(\rho)} \nabla Z(\rho)\Bigr] \\
& \hskip3cm + (\frac{2\lambda(\rho)}{\sqrt{\mu(\rho)}}+ k(\rho))\Delta Z(\rho)\, {\rm Id}
- {\rm div} [ k(\rho)\nabla Z(\rho)]\, {\rm Id}.
\end{split}$$ where $\displaystyle Z(\rho) = \int_0^\rho [(\mu(s))^{1/2} \mu'(s)]/s \, ds$ and $\displaystyle k(\rho) = \int_0^\rho \frac{\lambda(s)\mu'(s)}{\mu(s)^{3/2}} ds.$
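The identity above can be tested symbolically in one space dimension; the sketch below (an illustration under the assumptions $\mu(\rho)=\rho$ and $\lambda(\rho)=0$, hence $K(\rho)=4/\rho$ and $s(\rho)=\log\rho$, with the illustrative density $\rho=x^2+1$) confirms it with sympy.

```python
# 1D symbolic check (sketch) of the Bresch-Couderc-Noble-Vila identity in the quantum
# case mu(rho) = rho, lambda(rho) = 0, so K(rho) = 4/rho and s(rho) = log(rho).
import sympy as sp

x = sp.symbols('x', positive=True)
rho = x**2 + 1                    # illustrative smooth positive density

K = 4 / rho                       # K(rho) = 4*mu'(rho)**2 / rho
primitive = 4 * sp.sqrt(rho)      # int_0^rho sqrt(K(s)) ds
s_rho = sp.log(rho)               # s(rho), since s'(rho) = mu'(rho)/rho = 1/rho

# left-hand side: rho * grad( sqrt(K(rho)) * Laplacian(int_0^rho sqrt(K)) )
lhs = rho * sp.diff(sp.sqrt(K) * sp.diff(primitive, x, 2), x)
# right-hand side: 2*div(mu(rho)*grad^2(2 s(rho))); the lambda-term vanishes here
rhs = 2 * sp.diff(rho * sp.diff(2 * s_rho, x, 2), x)
assert sp.simplify(lhs - rhs) == 0
```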
Note that the case considered in [@LaVa; @VY-1; @VY] corresponds to $\mu(\rho) = \rho$ and $K(\rho) = 4/\rho$, which is the quantum Navier-Stokes system. Note that two very interesting papers have been written by Antonelli-Spirito in [@AnSp0; @AnSp] considering Navier-Stokes-Korteweg systems without such a relation between the shear viscosity and the capillary coefficient.
The additional pressure term $\delta\rho^{10}$ is used in thanks to the inequality $3\alpha_2-2\leq 10$.
[*Second Step and main result concerning the compressible Navier-Stokes system.*]{} To prove global existence of weak solutions of the compressible Navier-Stokes equations, we follow the strategy introduced in [@LaVa; @VY]. To do so, first we approximate the viscosity $\mu$ by a viscosity $\mu_{\varepsilon_1}$ such that $\inf_{s\in [0,+\infty)} \mu_{\varepsilon_1}'(s)\ge \varepsilon_1 >0$. Then we use Theorem \[main result 1\] to construct a $\kappa$ entropy weak solution to the approximate system . We then show that this $\kappa$ entropy weak solution is a renormalized solution of in the sense introduced in [@LaVa]. More precisely we prove the following theorem:
\[renorm\] Let $\mu(\rho)$ verify – and let $\lambda(\rho)$ be given by . If $r_0>0$, then we also assume that ${\rm inf}_{s \in [0,+\infty)} \mu'(s)=\epsilon_1 >0$. Assume that $r_1$ is small enough compared to $r$, that $r_2$ is small enough compared to $\delta$, and that the initial values verify and $$\label{Initial conditions}
\begin{split}
& \int_\Omega \left(\rho_0\left(\frac{|{{ u}}_0+ 2\kappa \nabla s(\rho_0)|^2}{2}+(\kappa (1-\kappa)+r)\frac{|2\nabla s(\rho_0)|^2}{2}\right) \right)\, dx\\
&\hskip4cm +\int_\Omega
\left(a \frac{\rho_0^\gamma}{\gamma-1}+ \mu(\rho_0) +\delta \frac{\rho^{10}}{9}+\frac{r_0}{\varepsilon_1}|(\ln \rho_0)_-|\right)\,dx <+\infty.
\end{split}$$ Then the $\kappa$ entropy weak solution is a renormalized solution of in the sense of Definition \[def\_renormalise\_u\].
We then pass to the limit with respect to the parameters $r,r_0,r_1,r_2$ and $\delta$ to recover a renormalized weak solution of the compressible Navier-Stokes equations and prove our main theorem.
**Definitions**. Following [@LaVa] (based on the work in [@VY]), we will show the existence of renormalized solutions in ${{ u}}$. Then, we will show that this renormalized solution is a weak solution. The renormalization provides weak stability of the advection terms $\rho {{ u}}\otimes {{ u}}$ and $\rho {{ u}}\otimes {{ v}}$. Let us first define the renormalized solution:
\[def\_renormalise\_u\] Consider $\mu>0$, $3\lambda +2 \mu>0$, $r_0\geq0$, $r_1\geq0$, $r_2\ge 0$ and $r\geq0$. We say that $({{\sqrt{\rho}}},{{\sqrt{\rho}}}{{ u}})$ is a renormalized weak solution in ${{ u}}$, if it verifies -, and for any function ${{\varphi}}\in W^{2,\infty}({{\mathbb R}}^d)$ with $\varphi(s)s \in L^{\infty}({{\mathbb R}}^d)$, there exists three measures $R_{{{\varphi}}}, \overline{R}^1_{{\varphi}}, \overline{R}^2_{{\varphi}}\in \mathcal{M}({{\mathbb R}}^+\times{\Omega})$, with $$\|R_{{{\varphi}}}\|_{ \mathcal{M}({{\mathbb R}}^+\times{\Omega})}+ \|\overline{R}^1_{{{\varphi}}}\|_{ \mathcal{M}({{\mathbb R}}^+\times{\Omega})}
+ \|\overline{R}^2_{{{\varphi}}}\|_{ \mathcal{M}({{\mathbb R}}^+\times{\Omega})} \leq C \|{{\varphi}}''\|_{L^\infty({{\mathbb R}})},$$ where the constant $C$ depends only on the solution $({{\sqrt{\rho}}},{{\sqrt{\rho}}}{{ u}})$, and for any function $\psi\in {C^\infty_c({{\mathbb R}}^+\times{\Omega})}$, $$\begin{aligned}
&&\int_0^T \int_{\Omega}\left(\rho \psi_t + \sqrt \rho \sqrt \rho {{ u}}\cdot \nabla\psi \right)dx\, dt=0,\\
&&\int_0^T \int_{\Omega}\bigl( \rho {{\varphi}}({{ u}}) \psi_t + \rho {{\varphi}}({{ u}})\otimes {{ u}}:\nabla \psi \bigr) \> dx\, dt\\
&& \hskip.2cm - \int_0^T \int_\Omega \left( 2 (\sqrt{\mu(\rho)} {{\mathbb S}_\mu}+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)}\mathbb S_\mu) {\rm Id}) \, {{\varphi}}'({{ u}})
\right)\cdot \nabla\psi \, dx dt \\
&& \hskip.2cm - \, r \int_0^T \int_\Omega \left(2(\sqrt{\mu(\rho)} \mathbb S_r
+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)}\mathbb S_r) {\rm Id}\bigr)\, {{\varphi}}'({{ u}})
\right) \cdot \nabla\psi \, dx dt\\
&&\hskip7cm +F(\rho,{{ u}})\, {{\varphi}}'({{ u}}) \psi \, dx\, dt=\left \langle R_{{{\varphi}}}, \psi\right\rangle, \\
&& \int_0^T \int_{\Omega}(\mu(\rho) \psi_t + \frac{\mu(\rho)}{\sqrt \rho} \sqrt \rho {{ u}}\cdot \nabla \psi) \, dx dt - \int_0^T
\int_\Omega \frac{\lambda(\rho)}{2\mu(\rho)} {\mathrm Tr} (\sqrt{\mu(\rho)} {{\mathbb T}_\mu})
\psi \, dx dt = 0,\end{aligned}$$ where ${{\mathbb S}_\mu}$ is given in and $\mathbb{T}_\mu$ is given in . The matrix $\mathbb S_r$ is compatible in , , and .
The vector valued function $F$ is given by $$\label{eq_F}
\begin{split}
F(\rho,{{ u}})
& = \sqrt{\frac{P'(\rho) \rho }{\mu'(\rho)}} \nabla \int_0^\rho \sqrt{\frac{P'(s)\mu'(s)}{s}}\, ds \\
& \hskip.5cm + \delta\sqrt{\frac{P_\delta'(\rho) \rho }{\mu'(\rho)}}
\nabla \int_0^\rho \sqrt{\frac{P_\delta'(s)\mu'(s)}{s}}\, ds
-r_0 {{ u}}- r_1 \rho|{{ u}}|{{ u}}-\frac{r_2}{\mu'(\rho)}\rho|{{ u}}|^2{{ u}}.
\end{split}$$ For every $i,j,k$ between 1 and $d$: $$\label{eq_viscous_renormaliseAAA}
\sqrt{\mu(\rho)}{{\varphi}}_i'({{ u}})[{{\mathbb T}_\mu}]_{jk}= \partial_j(\mu(\rho){{\varphi}}'_i({{ u}}){{ u}}_k)
-{{\sqrt{\rho}}}\ u_k{{\varphi}}'_i({{ u}}) \sqrt\rho \partial_j s(\rho)+ \overline{R}^1_{{\varphi}},$$ $$\label{eq_kortweg_renormalise}
r{{\varphi}}_i'({{ u}})[\nabla(\sqrt{\mu(\rho)} \nabla Z(\rho))]_{jk}=
r\partial_j(\sqrt{\mu(\rho)} {{\varphi}}'_i({{ u}})\partial_k Z(\rho))+ \overline{R}^2_{{\varphi}},$$ and $$\|\overline{R}^1_{{\varphi}}\|_{\mathcal{M}({{\mathbb R}}^+\times{\Omega})} +
\|\overline{R}^2_{{\varphi}}\|_{\mathcal{M}({{\mathbb R}}^+\times{\Omega})} +
\|R_{{\varphi}}\|_{\mathcal{M}({{\mathbb R}}^+\times{\Omega})}
\leq C\|{{\varphi}}''\|_{L^\infty}.$$ and for any $\overline{\psi}\in C^\infty_c({\Omega})$: $$\begin{aligned}
&&\lim_{t\to0}\int_{\Omega}\rho(t,x)\overline{\psi}(x)\,dx=\int_{\Omega}\rho_0(x)\overline{\psi}(x)\,dx,\\
&&\lim_{t\to0}\int_{\Omega}\rho(t,x){{ u}}(t,x)\overline{\psi}(x)\,dx=\int_{\Omega}m_0 (x)\overline{\psi}(x)\,dx,\\
&& \lim_{t\to0}\int_{\Omega}\mu(\rho)(t,x)\overline{\psi}(x)\,dx=\int_{\Omega}\mu(\rho_0)(x)\overline{\psi}(x)\,dx\end{aligned}$$
We define a global weak solution of the approximate system or of the compressible Navier-Stokes equations (when $r=r_0=r_1=r_2=\delta=0$) as follows.
\[defweak\] Let ${\mathbb S}_\mu$ be the symmetric part of $\mathbb {T}_\mu$ in $L^2((0,T)\times {\Omega})$ verifying – and let $\mathbb{S}_r$ be the capillary quantity in $L^2((0,T)\times {\Omega})$ given by –. Let us denote $P(\rho) = a \rho^\gamma$ and $P_\delta (\rho) = \delta \rho^{10}$. We say that $(\rho,{{ u}})$ is a weak solution to –, if it satisfies the [*a priori*]{} estimates – and if for any function $\psi \in {\mathcal C}_c^\infty ((0,T)\times \Omega)$ it verifies $$\begin{split}
& \int_0^T \int_\Omega (\rho \partial_t \psi + \rho {{ u}}\cdot \nabla \psi) \, dxdt= 0, \\
&\int_0^T \int_\Omega (\rho {{ u}}\partial_t \psi + \rho {{ u}}\otimes {{ u}}: \nabla \psi )\, dx dt \\
& \hskip1.5cm - \int_0^T \int_\Omega 2 ( \sqrt{\mu(\rho)} \mathbb{S}_\mu
+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)} \mathbb S_\mu)
{\rm Id}) \cdot \nabla\psi \, dx dt \\
& \hskip1.5cm - r \int_0^T \int_\Omega 2 ( \sqrt{\mu(\rho)} \mathbb{S}_r
+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)} \mathbb S_r)
{\rm Id}) \cdot\nabla\psi \, dx dt \\
& \hskip7cm + F(\rho,{{ u}}) \, \psi \, dx dt = 0,\\
& \int_0^T \int_{\Omega}\left(\mu(\rho) \psi_t + \frac{\mu(\rho)}{\sqrt \rho} \sqrt \rho {{ u}}\cdot \nabla \psi\right)
dx \, dt \\
&\hskip5cm
- \int_0^T \int_\Omega \frac{\lambda(\rho)}{2\mu(\rho)}
{\mathrm Tr} (\sqrt{\mu(\rho)}\mathbb T_\mu) \psi \, dx dt = 0,
\end{split}$$ with $F$ given through and for any $\overline \psi \in {\mathcal C}_c^\infty({\Omega})$: $$\begin{aligned}
&&\lim_{t\to0}\int_{\Omega}\rho(t,x)\overline{\psi}(x)\,dx=\int_{\Omega}\rho_0(x)\overline{\psi}(x)\,dx,\\
&&\lim_{t\to0}\int_{\Omega}\rho(t,x){{ u}}(t,x)\overline{\psi}(x)\,dx=\int_{\Omega}m_0 (x)\overline{\psi}(x)\,dx,\\
&& \lim_{t\to0}\int_{\Omega}\mu(\rho)(t,x)\overline{\psi}(x)\,dx=\int_{\Omega}\mu(\rho_0)(x)\overline{\psi}(x)\,dx.\end{aligned}$$
[**Remark.**]{} As mentioned in [@BrGiLa], the equation on $\mu(\rho)$ is important: By taking $\psi= {\rm div} \varphi$ for all $\varphi \in {\mathcal C}_0^\infty$, we can write the equation satisfied by $\nabla \mu(\rho)$ namely $$\label{grad}
\begin{split}
\partial_t \nabla\mu(\rho) + {\rm div}(\nabla\mu(\rho) \otimes {{ u}})
& =
{\rm div}(\nabla\mu(\rho) \otimes {{ u}}) - \nabla {\rm div} (\mu(\rho) {{ u}}) \\
& \hskip3cm - \nabla\bigl( \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)}{\mathbb T}_\mu)\Bigr) \\
& = - {\rm div}(\sqrt{\mu(\rho)} {}^t{\mathbb T}_\mu) - \nabla\bigl( \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)}{\mathbb T}_\mu)\Bigr). \\
\end{split}$$ This will justify in some sense the two-velocities formulation introduced in [@BDZ] with the extra velocity linked to $\nabla\mu(\rho)$.
The first level of approximation procedure
==========================================
The goal of this section is to construct a sequence of approximated solutions with the compactness structure needed to prove Theorem \[main result 1\], namely the existence of weak solutions of the approximate system with capillarity and drag terms. Here we present the first level of the approximation procedure.
1\. The continuity equation $$\label{approximation of the continuity equation}
\begin{split}&\rho_t+{{\rm div}}(\rho[\w]_{\varepsilon_3})=2\kappa{{\rm div}}\left([\mu'(\rho)]_{\varepsilon_4}\nabla\rho\right),
\end{split}$$ with modified initial data $$\rho(0,x)=\rho_0\in C^{2+\nu}(\bar{{\Omega}}), \quad0<\underline{\rho}\leq \rho_0(x)\leq \bar{\rho}.$$ Here $\varepsilon_3$ and $\varepsilon_4$ denote the standard regularizations by mollification with respect to space and time. This is a parabolic equation recalling that in this part ${\rm Inf}_{[0,+\infty)} \mu'(s) >0$. Thus, we can apply the standard theory of parabolic equation to solve it when $\w$ is given smooth enough. In fact, the exact same equation was solved in paper [@BDZ]. In particular, we are able to get the following bound on the density at this level approximation $$\label{low and upper bound on density}
0<\underline{\rho}\leq \rho(t,x)\leq\bar{\rho}<+\infty.$$
2\. The momentum equation with drag terms is replaced by its Faedo-Galerkin approximation with the additional regularizing term $\varepsilon_2[ \Delta^{2s}\w -{{\rm div}}((1+|\nabla w|^2)\nabla w)]$ where $s\ge 2$ $$\begin{split}
\label{approximation of the momentum equation}
&\int_{{\Omega}}\rho\w\cdot\psi\,dx-\int_0^t\int_{{\Omega}}\left(\rho([\w]_{\varepsilon_3}-2\kappa\frac{[\mu'(\rho)]_{\varepsilon_4}}{\rho}\nabla\rho)\otimes\w\right):\nabla\psi\,dx\,dt
\\&+2(1-\kappa)\int_0^t\int_{{\Omega}}\mu(\rho)\mathbb{D}\w:\nabla\psi\,dx\,dt+2\kappa\int_0^t\int_{{\Omega}}\mu(\rho)\mathbf{A}(w):\nabla\psi\,dx\,dt
\\&+(1-\kappa)\int_0^t\int_{{\Omega}}\lambda(\rho){{\rm div}}\w{{\rm div}}\psi\,dx\,dt-2\kappa(1-\kappa)\int_0^t\int_{{\Omega}}\mu(\rho)\nabla{{ v}}:\nabla\psi\,dx\,dt
\\&-\kappa(1-\kappa)\int_0^t\int_{{\Omega}}\lambda(\rho){{\rm div}}{{ v}}{{\rm div}}\psi\,dx\,dt-\int_0^t\int_{{\Omega}}\rho^{\gamma}{{\rm div}}\psi\,dx\,dt
-\delta\int_0^t\int_{{\Omega}}\rho^{10}{{\rm div}}\psi\,dx\,dt
\\&+\varepsilon_2\int_0^t\int_{{\Omega}}\left( \Delta^s\w\cdot\Delta^s\psi+(1+|\nabla \w|^2)\nabla\w:\nabla \psi\right)\,dx\,dt=
-\int_0^t\int_{{\Omega}} r_0 (\w-2\kappa\nabla{s}(\rho))\cdot\psi\,dx\,dt
\\ & -r_1\int_0^t\int_{{\Omega}} \rho|\w-2\kappa\nabla{s}(\rho)|(\w-2\kappa\nabla{s}(\rho))\cdot\psi\,dx\,dt
\\&-r_2\int_0^t\int_{{\Omega}}\frac{\rho}{\mu'(\rho)}|\w-2\kappa\nabla{s}(\rho)|^2(\w-2\kappa\nabla{s}(\rho))\cdot\psi\,dx\,dt
\\&-r\int_0^t\int_{{\Omega}} \sqrt{K(\rho)}\D(\int_0^{\rho}\sqrt{K(s)}\,ds){{\rm div}}(\rho\psi)\,dx\,dt+\int_{{\Omega}}\rho_0\w_0\cdot\psi\,dx
\end{split}$$ satisfied for any $t>0$ and any test function $\psi\in C([0,T],X_n)$, where $\lambda (\rho)= 2(\mu'(\rho)\rho-\mu(\rho))$, and ${s}'(\rho)= \mu'(\rho) /\rho
$, and $X_n=\text{span}\{e_i\}_{i=1}^{n}$ is an orthonormal basis in $W^{1,2}({\Omega})$ with $e_i\in C^{\infty}({\Omega})$ for any integer $i>0$.
3\. The Faedo-Galerkin approximation for the equation on the drift velocity ${v}$ reads $$\begin{split}
\label{artificial equation}
&\int_{{\Omega}}\rho{{ v}}\cdot\phi\,dx-\int_0^t\int_{{\Omega}}(\rho ([\w]_{\varepsilon_3}-2\kappa\frac{[\mu'(\rho)]_{\varepsilon_4}}{\rho} \nabla\rho)\otimes{{ v}}):\nabla\phi\,dx\,dt
\\&+2\kappa\int_0^t\int_{{\Omega}}\mu(\rho)\nabla{{ v}}:\nabla\phi\,dx\,dt
+ \kappa\int_0^t\int_{{\Omega}} \lambda(\rho){{\rm div}}{{ v}}\, {{\rm div}}\phi\,dx\,dt
\\&-\int_0^t\int_{{\Omega}}
\lambda(\rho){{\rm div}}\w{{\rm div}}\phi\,dx\,dt
+2\int_0^t\int_{{\Omega}}\mu(\rho)\nabla^T\w:\nabla\phi\,dx\,dt
=\int_{{\Omega}}\rho_0{{ v}}_0\cdot\phi\,dx
\end{split}$$ satisfied for any $t>0$ and any test function $\phi\in C([0,T],Y_n)$, where $Y_n=\text{span}\{b_i\}_{i=1}^n$ and $\{b_i\}_{i=1}^{\infty}$ is an orthonormal basis in $W^{1,2}({\Omega})$ with $b_i\in C^{\infty}({\Omega})$ for any integers $i>0.$\
The above full approximation is similar to the one in [@BDZ]. We can repeat the same argument as in that paper to obtain the local existence of solutions to the Galerkin approximation. In order to extend the local solution to a global one, uniform bounds are necessary so that the corresponding procedure can be iterated.
The energy estimate when the solution is regular enough.
------------------------------------------------------
For any fixed $n>0$, choosing the test functions $\psi=\w, \,\phi={{ v}}$ in and , we find that $(\rho,\w,{{ v}})$ satisfies the following $\kappa$-entropy equality $$\label{entropy for first level approximation}
\begin{split}
&\int_{{\Omega}}\left(\rho\left(\frac{|\w|^2}{2}+(1-\kappa)\kappa\frac{|{{ v}}|^2}{2}\right)+\frac{\rho^{\gamma}}{\gamma-1}+\delta\frac{\rho^{10}}{9}\right)\,dx
+2(1-\kappa)\int_0^t
\int_{{\Omega}}\mu(\rho)|\mathbb{D}\w-\kappa\nabla{{ v}}|^2\,dx\,dt
\\
&+ (1-\kappa)\int_0^t\int_{{\Omega}} \lambda(\rho)({{\rm div}}\w-\kappa{{\rm div}}{{ v}})^2\,dx\,dt
+2\kappa\int_0^t\int_{{\Omega}}\frac{\mu'(\rho)p'(\rho)}{\rho}|\nabla\rho|^2\,dx\,dt
\\&+2\kappa\int_0^t\int_{{\Omega}}\mu(\rho)|A\w|^2\,dx\,dt+\varepsilon_2\int_0^t\int_{{\Omega}}\left( |\Delta^s\w|^2+(1+|\nabla \w|^2)|\nabla \w|^2\right)\,dx\,dt
\\&+ r \int_0^t \int_{{\Omega}} \sqrt{K(\rho)} \Delta (\int_0^\rho \sqrt{K(s)}\, ds) {\rm div}(\rho w) \,dx\,dt
+20\kappa\delta\int_0^t\int_{{\Omega}}\mu'(\rho)\rho^8|\nabla\rho|^2\,dx\,dt
\\&
+ r_0 \int_0^t \int_{\Omega}(w-2\kappa\nabla{s}(\rho))\cdot w \, dx\,dt
+ r_1 \int_0^t\int_{{\Omega}} \rho|\w-2\kappa\nabla{s}(\rho)|(\w-2\kappa\nabla{s}(\rho))\cdot\w\,dx\,dt
\\&+ r_2 \int_0^t\int_{{\Omega}}\frac{\rho}{\mu '(\rho)}|\w-2\kappa\nabla{s}(\rho)|^2(\w-2\kappa\nabla{s}(\rho))\cdot\w\,dx\,dt
\\&= \int_{{\Omega}}\left(\rho_0\left(\frac{|\w_0|^2}{2}+(1-\kappa)\kappa\frac{|{{ v}}_0|^2}{2}\right)+\frac{\rho_0^{\gamma}}{\gamma-1}+\delta\frac{\rho_0^{10}}{9}\right)\,dx-\int_0^T\int_{{\Omega}}\rho^{\gamma}{{\rm div}}([\w]_{\varepsilon_3}-\w)\,dx\,dt
\\&-\delta\int_0^T\int_{{\Omega}}\rho^{10}{{\rm div}}([\w]_{\varepsilon_3}-\w)\,dx\,dt,
\end{split}$$ where ${s}'(\rho)= {\mu'(\rho)}/{\rho}$ and $p(\rho)=\rho^{\gamma}.$ Compared to the calculations made in [@BDZ], we have to take care of the capillary term and of the drag terms, showing that the latter can be controlled: using ${\rm inf}_{s \in [0,+\infty)} \mu'(s) \ge \varepsilon_1$ for the linear drag, using the extra pressure term $\delta \rho^{10}$ for the quadratic drag term, and using the capillary term $r \rho \nabla\bigl(\sqrt{K(\rho)} \Delta (\int_0^\rho \sqrt{K(s)}\,ds)\bigr)$ for the cubic drag term. To do so, let us provide some properties of the capillary term and rewrite the terms coming from the drag quantities.
### Some properties on the capillary term
Using the mass equation, the capillary term in the entropy estimates reads $$\begin{split}
& r\int_\Omega \sqrt{K(\rho)}
\Delta(\int_0^\rho \sqrt{K(s)} \, ds)\, {{\rm div}}(\rho w)\,dx
= \frac{r}{2} \frac{d}{dt}\int_\Omega |\nabla \int_0^\rho \sqrt{K(s)} \, ds|^2\,dx \\
& + 2\kappa r \int_{{\Omega}}
\sqrt{K(\rho)} \Delta(\int_0^\rho \sqrt{K(s)} \, ds)\, \Delta \mu(\rho)\,dx = I_1 + I_2 .
\end{split}$$ In fact, we write the term $I_1$ as follows $$I_1 = \frac{r}{2} \frac{d}{dt}\int_\Omega |\nabla \int_0^\rho \sqrt{K(s)} \, ds|^2\,dx =\frac{r}{2} \frac{d}{dt}\int_\Omega \rho|2\nabla{s}(\rho)|^2\,dx.$$ By , we have $$\begin{split}
I_2 &= 2\kappa r \int_{{\Omega}}
\sqrt{K(\rho)} \Delta(\int_0^\rho \sqrt{K(s)} \, ds)\, \Delta \mu(\rho)\,dx
\\&= - 2\kappa r \int_{{\Omega}} \rho \nabla \Bigl( \sqrt{K(\rho)} \Delta(\int_0^\rho \sqrt{K(s)} \, ds)\Bigr)
\cdot \nabla {s}(\rho)\,dx
\\&
= 2\kappa r \int_{{\Omega}} \Bigl( \mu(\rho) |\nabla^2 (2{s}(\rho))|^2 + \frac{\lambda(\rho)}{2}|\Delta (2{s}(\rho))|^2 \Bigr)\,dx.
\end{split}$$
[*Control of norms using $I_2$.*]{} Let us first recall that, since $$\lambda(\rho) = 2(\mu'(\rho)\rho- \mu(\rho)) > -2\mu(\rho)/3,$$ there exists $\eta >0$ such that $$2 \int_0^T\int_{{\Omega}}\mu(\rho)|\nabla^2{s}(\rho)|^2\,dx\,dt
+ \int_0^T\int_{{\Omega}}\lambda(\rho)|\Delta{s}(\rho)|^2\,dx\,dt$$ $$\hskip3cm \ge \eta \Bigl[
2 \int_0^T\int_{{\Omega}}\mu(\rho)|\nabla^2{s}(\rho)|^2\,dx\,dt
+ \frac{1}{3}\int_0^T\int_{{\Omega}}\mu(\rho)|\Delta{s}(\rho)|^2\,dx\,dt \Bigr].$$ As the second term on the right-hand side is positive, a lower bound on the quantity $$\label{kor}
\int_0^T\int_{{\Omega}}\mu(\rho)|\nabla^2{s}(\rho)|^2\,dx\,dt$$ will provide the same lower bound on $I_2$.
Let us now make precise which norms are controlled by . To do so, we rely on the following lemma on the density, in which we prove an entropy dissipation inequality more general than the one introduced by Jüngel in [@J] and than those of Jüngel-Matthes in [@JuMa].
\[Lemma on jungel type inequality\] Let $\mu'(\rho)\rho<k\mu(\rho)$ for $2/3<k<4$ and $${s}(\rho)= \int_0^\rho \frac{\mu'(s)}{s} \, ds, \qquad
Z(\rho) =\int_0^\rho \frac{\sqrt{\mu(s)}}{s}\mu'(s)\, ds, \qquad
Z_1(\rho) = \int_0^\rho \frac{\mu'(s)}{(\mu(s))^{1/4}s^{1/2}} \, ds.$$ [i)]{} Assume $\rho>0$ and $\rho\in L^2(0,T;H^2(\Omega))$ then there exists $\varepsilon(k) >0$, such that we have the following estimate $$\int_0^T\int_{{\Omega}}|\nabla^2Z(\rho)|^2\,dx\,dt+\varepsilon(k)\int_0^T\int_{{\Omega}}\frac{\rho^2}{\mu(\rho)^{3}}|\nabla Z(\rho)|^4\,dx\,dt
\leq \frac{C}{\varepsilon(k)} \int_0^T\int_{{\Omega}}\mu(\rho)|\nabla^2{s}(\rho)|^2\,dx\,dt,$$ where $C$ is a universal positive constant.
[ii)]{} Consider a sequence of smooth densities $\rho_n>0$ such that $Z(\rho_n)$ and $Z_1(\rho_n)$ converge strongly in $L^1((0,T)\times\Omega)$ respectively to $Z(\rho)$ and $Z_1(\rho)$ and $\sqrt{\mu(\rho_n)} \nabla^2 {\mathbf s}(\rho_n)$ is uniformly bounded in $L^2((0,T)\times\Omega)$. Then $$\int_0^T\int_{{\Omega}}|\nabla^2Z(\rho)|^2\,dx\,dt+\varepsilon(k)\int_0^T\int_{{\Omega}}|\nabla Z_1(\rho)|^4\,dx\,dt
\leq C < +\infty$$
The case $Z=2\sqrt{\rho}$ of the inequality was proved in [@J]; it is critical to derive the uniform bound on the approximate velocity in $L^2(0,T;L^2({\Omega}))$ in [@VY-1; @VY]. The above lemma will play a similar role in this paper.
Let us first prove part i). Since $Z'(\rho)=\frac{\sqrt{\mu(\rho)}}{\rho}\mu'(\rho)$, we compute: $$\begin{split}
\label{key-1}
\sqrt{\mu(\rho)}\nabla^2s(\rho)&=\sqrt{\mu(\rho)}\nabla(\frac{\nabla\mu(\rho)}{\rho})=\sqrt{\mu(\rho)}\nabla\left(\frac{1}{\sqrt{\mu(\rho)}}\nabla Z(\rho)\right)
\\&=\nabla^2 Z(\rho)-\frac{\nabla Z(\rho)}{\sqrt{\mu(\rho)}}\otimes\nabla\sqrt{\mu(\rho)}
\\&=\nabla^2 Z(\rho) -\frac{\rho\nabla Z(\rho)\otimes \nabla Z(\rho)}{2 \mu(\rho)^{\frac{3}{2}}}.
\end{split}$$ Thus, we have $$\begin{split}
\label{key-2}
\int_{{\Omega}}\mu(\rho)|\nabla^2s(\rho)|^2\,dx&=\int_{{\Omega}}|\nabla^2Z(\rho)|^2\,dx
+\frac{1}{4}\int_{{\Omega}}\frac{\rho^2}{\mu(\rho)^3}|\nabla Z(\rho)|^4\,dx
\\&-
\int_{{\Omega}}\frac{\rho}{\mu(\rho)^{\frac{3}{2}}}\nabla^2Z(\rho) :(\nabla Z(\rho)\otimes \nabla Z(\rho))\,dx.
\end{split}$$ By integration by parts, the cross product term reads as follows $$\begin{split}
\label{key-3}&-\int_{{\Omega}}\frac{\rho}{\mu(\rho)^{\frac{3}{2}}}\nabla^2Z(\rho):(\nabla Z(\rho)\otimes \nabla Z(\rho))\,dx \\
& =
-\int_{{\Omega}}\frac{\rho\sqrt{\mu(\rho)}}{\mu(\rho)}\nabla^2Z(\rho):(\frac{\nabla Z(\rho)}{\sqrt{\mu(\rho)}}\otimes \frac{\nabla Z(\rho)}{\sqrt{\mu(\rho)}})\,dx
\\&=\int_{{\Omega}}\frac{\rho}{\mu(\rho)}\sqrt{\mu(\rho)}\nabla Z(\rho)\cdot{{\rm div}}(\frac{\nabla Z(\rho)}{\sqrt{\mu(\rho)}}\otimes \frac{\nabla Z(\rho)}{\sqrt{\mu(\rho)}})\,dx \\
& \hskip1cm +\int_{{\Omega}}\nabla(\frac{\rho}{\sqrt{\mu(\rho)}})\otimes\nabla Z(\rho):\frac{\nabla Z(\rho)\otimes \nabla Z(\rho)}{\mu(\rho)}\,dx
\\&=I_1+I_2.
\end{split}$$ We can control $I_1$ directly: $$\label{key-I1}\begin{split}
|I_1|&\leq \varepsilon\int_{{\Omega}}\frac{\rho^2}{\mu(\rho)^3}|\nabla Z(\rho)|^4\,dx
+ \frac{C}{\varepsilon} \int_{{\Omega}}\mu(\rho)|\nabla(\frac{\nabla Z(\rho)}{\sqrt{\mu(\rho)}})|^2\,dx
\\&\leq \varepsilon\int_{{\Omega}}\frac{\rho^2}{\mu(\rho)^3}|\nabla Z(\rho)|^4\,dx
+ \frac{C}{\varepsilon}\int_{{\Omega}}\mu(\rho)|\nabla^2 s(\rho)|^2\,dx,
\end{split}$$ where $C$ is a universal positive constant. We calculate $I_2$ to have $$\label{key-I2}
\begin{split}
I_2&=\int_{{\Omega}}\nabla(\frac{\rho}{\sqrt{\mu(\rho)}})\otimes\nabla Z(\rho):\frac{\nabla Z(\rho)\otimes \nabla Z(\rho)}{\mu(\rho)}\,dx
\\&=\int_{{\Omega}}\frac{\nabla\rho\otimes\nabla Z(\rho)}{\mu(\rho)^{\frac{3}{2}}}:\left(\nabla Z(\rho)\otimes \nabla Z(\rho)\right)\,dx \\
&\hskip2cm -\int_{{\Omega}}\frac{\rho}{\mu(\rho)^2}\nabla\sqrt{\mu(\rho)}\otimes \nabla Z(\rho):\left(\nabla Z(\rho)\otimes \nabla Z(\rho)\right)\,dx
\\&=\int_{{\Omega}}\frac{\rho}{\mu(\rho)^2\mu'(\rho)}|\nabla Z(\rho)|^4\,dx-\frac{1}{2}\int_{{\Omega}}\frac{\rho^2}{\mu(\rho)^3}|\nabla Z(\rho)|^4\,dx.
\end{split}$$ Relying on -, we have $$\begin{split}&
\int_{{\Omega}}|\nabla^2Z(\rho)|^2\,dx+\int_{{\Omega}}\frac{\rho}{\mu(\rho)^2\mu'(\rho)}|\nabla Z(\rho)|^4\,dx
-(\frac{1}{4}+\varepsilon)\int_{{\Omega}}\frac{\rho^2}{\mu(\rho)^3}|\nabla Z(\rho)|^4\,dx
\\&\leq \frac{C}{\varepsilon} \int_{{\Omega}}\mu(\rho)|\nabla^2 s(\rho)|^2\,dx.
\end{split}$$ Since $k_1\mu'(s) s\leq \mu(s),$ we have $$\frac{s}{\mu^2(s)\mu'(s)}-(\frac{1}{4}+\varepsilon)\frac{s^2}{\mu(s)^3}\geq (k_1-\frac{1}{4}-\varepsilon)\frac{s^2}{\mu(s)^3}>\varepsilon\frac{s^2}{\mu(s)^3},$$ where we choose $k_1>\frac{1}{4}$. This implies $$\int_{{\Omega}}|\nabla^2Z(\rho)|^2\,dx+\varepsilon\int_{{\Omega}}\frac{\rho^2}{\mu(\rho)^3}|\nabla Z(\rho)|^4\,dx
\leq \frac{C}{\varepsilon} \int_{{\Omega}}\mu(\rho)|\nabla^2 s(\rho)|^2\,dx.$$ This ends the proof of part i). Concerning part ii), it suffices to pass to the limit in the inequality proved above, using the lower semi-continuity of the left-hand side.
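For the reader's convenience, the pointwise identity behind part i), namely $\sqrt{\mu(\rho)}\,\nabla^2{s}(\rho)=\nabla^2 Z(\rho)-\rho\,\nabla Z(\rho)\otimes\nabla Z(\rho)/(2\mu(\rho)^{3/2})$, can be checked symbolically in one space dimension in the model case $\mu(\rho)=\rho^{2}$. The following SymPy sketch is an illustration only, not part of the proof; the symbols `rho_x`, `rho_xx` stand for the first and second spatial derivatives of the density.

```python
import sympy as sp

r, rx, rxx = sp.symbols('rho rho_x rho_xx', positive=True)
t = sp.symbols('t', positive=True)
alpha = 2                      # model case mu(rho) = rho^2, with 2/3 < alpha < 4
mu_t = t**alpha

# s(rho) = int_0^rho mu'(tau)/tau dtau,  Z(rho) = int_0^rho sqrt(mu(tau)) mu'(tau)/tau dtau
s = sp.integrate(sp.diff(mu_t, t)/t, (t, 0, r))
Z = sp.integrate(sp.sqrt(mu_t)*sp.diff(mu_t, t)/t, (t, 0, r))
mu = r**alpha

# chain rule in 1D: d_x f(rho) = f'(rho) rho_x,  d_xx f(rho) = f''(rho) rho_x^2 + f'(rho) rho_xx
dx  = lambda f: sp.diff(f, r)*rx
dxx = lambda f: sp.diff(f, r, 2)*rx**2 + sp.diff(f, r)*rxx

lhs = sp.sqrt(mu)*dxx(s)
rhs = dxx(Z) - r*dx(Z)**2/(2*mu**sp.Rational(3, 2))
print(sp.simplify(lhs - rhs))   # 0: the identity holds pointwise
```

The same computation goes through for any exponent $\alpha>1$ for which the integrals defining $s$ and $Z$ converge at the origin.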
### Drag terms control.
We have to discuss three kinds of drag terms: the linear, the quadratic, and the cubic drag term.
[a) *Linear drag terms.*]{} As in previous works [@BD; @VY-1; @Z], we need to choose a linear drag with constant coefficient $$\begin{split}
\label{11additional velocity term-control}
&r_0\int_0^t\int_{{\Omega}}(\w-2\kappa\nabla{s}(\rho))\cdot\w\,dx\,dt
=r_0\int_0^t\int_{{\Omega}}|\w-2\kappa\nabla{s}(\rho)|^2\,dx\,dt \\&
+r_0\int_0^t\int_{{\Omega}}(\w-2\kappa\nabla{s}(\rho))
\cdot(2\kappa\nabla{s}(\rho))\,dx\,dt.
\end{split}$$ The second term on the right side of reads $$\begin{split}r_0\int_0^t\int_{{\Omega}}(\w-2\kappa\nabla{s}(\rho))&\cdot(2\kappa\nabla{s}(\rho))\,dx\,dt
=r_0\int_0^t\int_{{\Omega}}\rho(\w-2\kappa\nabla{s}(\rho))\cdot\frac{2\kappa\nabla{s}(\rho)}{\rho}\,dx\,dt
\\&
=r_0\int_0^t\int_{{\Omega}}\rho(\w-2\kappa\nabla{s}(\rho))\cdot2\kappa\nabla g(\rho)\,dx\,dt\\&
=r_0\int_0^t\int_{{\Omega}}\rho_t g(\rho)\,dx\,dt,
\end{split}$$ where $g'(\rho)= \frac{s'(\rho)}{\rho}=\frac{\mu'(\rho)}{\rho^2}$ and $g(\rho)=\int_1^\rho\frac{\mu'(r)}{r^2}\,dr.$ Letting $$G(\rho)=\int_1^\rho\int_1^r\frac{\mu'(\zeta)}{\zeta^2}\,d\zeta\,dr,$$ then $$r_0\int_{{\Omega}}\rho_t g(\rho)\,dx= r_0\frac{\partial}{\partial_t}\int_{{\Omega}}G(\rho)\,dx,$$ which implies $$r_0\int_0^t\int_{{\Omega}}\rho_t g(\rho)\,dx\,dt= r_0\int_{{\Omega}}G(\rho)\,dx.$$ Meanwhile, since $\lim_{\zeta\to 0}\mu'(\zeta)=\varepsilon_1>0$, there exists a small number $\epsilon>0$ such that $\mu'(\zeta)\geq \frac{\varepsilon_1}{2}$ for any $0<\zeta<\epsilon$. Thus, we have the following further estimate on $G(\rho)$: $$\begin{split}
G(\rho)=\int_1^\rho\int_1^r\frac{\mu'(\zeta)}{\zeta^2}\,d\zeta\,dr
&\geq \frac{\varepsilon_1}{2}\int_1^\rho(1-\frac{1}{r})\,dr
\\&= \frac{\varepsilon_1}{2}(\rho-1-\ln\rho)
\\&\geq -\frac{\varepsilon_1}{4}(\ln\rho)_{-},
\end{split}$$ for any $\rho\leq \epsilon$. Similarly, we can show that $$G(\rho)\leq 4\varepsilon_1(\ln\rho)_{+}$$ for any $\rho\leq \epsilon$. For given number $\epsilon_0>0$, if $\rho\geq \epsilon_0$, then we have $$0\leq G(\rho)\leq C\int_1^\rho\int_1^r\mu'(\zeta)\,d\zeta\,dr\leq C\mu(\rho)\rho.$$
[b) *Quadratic drag term.*]{} We use the same argument as in [@BDZ] to handle this term. The quadratic drag term gives $$\begin{split}
\label{drag term control}
&r_1\int_0^t\int_{{\Omega}}
\rho|\w-2\kappa\nabla{s}(\rho)|(\w-2\kappa\nabla{s}(\rho))\cdot\w\,dx\,dt
\\&=r_1\int_0^t\int_{{\Omega}} \rho
|\w-2\kappa\nabla{s}(\rho)|^3\,dx\,dt
\\&\quad\quad\quad\quad+r_1\int_0^t\int_{{\Omega}} \rho|\w-2\kappa\nabla{s}(\rho)|(\w-2\kappa\nabla{s}(\rho))\cdot(2\kappa\nabla{s}(\rho))\,dx\,dt.
\end{split}$$ The second drag term on the right-hand side can be controlled as follows: $$\label{the second term of drag term control}
\begin{split}
&r_1\left|\int_0^t\int_{{\Omega}}\rho|\w-2\kappa\nabla{s}(\rho)|(\w-2\kappa\nabla{s}(\rho))
\cdot(2\kappa\nabla{s}(\rho))\,dx\,dt\right|
\\&\leq r_1\int_0^t\int_{{\Omega}}\mu(\rho)|{{ u}}||\mathbb{D}{{ u}}|\,dx\,dt\\
&\leq \frac{1}{2}\int_0^t\int_{{\Omega}}\mu(\rho)|\mathbb{D}{{ u}}|^2\,dx\,dt
+\frac{r_1^2}{2}\int_0^t\int_{{\Omega}}\mu(\rho)|{{ u}}|^2\,dx\,dt,
\end{split}$$ and $$\|\sqrt{\mu(\rho)}|{{ u}}|\|_{L^2(0,T;L^2({\Omega}))}\leq C\|\rho^{\frac{1}{3}}|{{ u}}|\|_{L^3(0,T;L^3({\Omega}))}\|\frac{\sqrt{\mu(\rho)}}{\rho^{\frac{1}{3}}}\|_{L^6(0,T;L^6({\Omega}))}.$$ Note that $$\begin{split}
&\int_0^t\int_{{\Omega}}\frac{\mu(\rho)^3}{\rho^2}\,dx\,dt=\int_0^t\int_{0\leq \rho\leq 1}\frac{\mu(\rho)^3}{\rho^2}\,dx\,dt+
\int_0^t\int_{\rho\geq 1}\frac{\mu(\rho)^3}{\rho^2}\,dx\,dt
\\&\leq C\int_0^t\int_{0\leq \rho\leq 1}\mu(\rho)(\mu'(\rho))^2\,dx\,dt+
\int_0^t\int_{\rho\geq 1}\frac{\mu(\rho)^3}{\rho^2}\,dx\,dt
\\&\leq C+\int_0^t\int_{\rho\geq 1}\frac{\mu(\rho)^3}{\rho^2}\,dx\,dt.
\end{split}$$ From , for any $\rho\geq 1$, we have $$c'\rho^{\alpha_1} \leq \mu(\rho)\leq c\rho^{\alpha_2},$$ where $2/3<\alpha_1\leq \alpha_2<4.$ This yields to $$\label{control preasuer}\int_0^t\int_{\rho\geq 1}\frac{\mu(\rho)^3}{\rho^2}\,dx\,dt\leq c\int_0^t\int_{\rho\geq 1}\rho^{3\,\alpha_2-2}\,dx\,dt
\leq c \int_0^t \int_{{\Omega}}\rho^{10}\,dx\,dt$$ for any time $t>0.$
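The exponent bookkeeping in the quadratic drag estimate can be verified mechanically; the following sketch (an illustration only) checks the Hölder splitting $1/2=1/3+1/6$ used above and the fact that the exponent $3\alpha_2-2$ stays below the extra pressure exponent $10$ whenever $\alpha_2<4$.

```python
from fractions import Fraction

# Hoelder: ||sqrt(mu) u||_{L^2} <= ||rho^{1/3} u||_{L^3} * ||sqrt(mu)/rho^{1/3}||_{L^6}
assert Fraction(1, 3) + Fraction(1, 6) == Fraction(1, 2)

# mu(rho)^3 / rho^2 <= c rho^{3 alpha_2 - 2} for rho >= 1, absorbed by delta*rho^10
alpha2 = Fraction(399, 100)          # any admissible exponent alpha_2 < 4
assert 3*alpha2 - 2 < 10
print("exponents consistent")
```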
[c) *Cubic drag term.*]{} The non-linear cubic drag term gives $$\begin{split}
\label{drag term control}
&r_2\int_0^t\int_{{\Omega}}
\frac{\rho}{\mu'(\rho) }|\w-2\kappa\nabla{s}(\rho)|^2(\w-2\kappa\nabla{s}(\rho))\cdot\w\,dx\,dt
\\&=r_2\int_0^t\int_{{\Omega}} \frac{\rho}{\mu'(\rho) }
|\w-2\kappa\nabla{s}(\rho)|^4\,dx\,dt
\\&\quad\quad\quad\quad+r_2\int_0^t\int_{{\Omega}} \frac{\rho}{\mu'(\rho) }|\w-2\kappa\nabla{s}(\rho)|^2(\w-2\kappa\nabla{s}(\rho))\cdot(2\kappa\nabla{s}(\rho))\,dx\,dt.
\end{split}$$ The novelty now is to show that the second drag term on the right-hand side can be controlled using the Korteweg-type information on the left-hand side: $$\label{the second term of drag term control}
\begin{split}
&r_2\int_0^t\int_{{\Omega}}\frac{\rho}{\mu'(\rho) }|\w-2\kappa\nabla{s}(\rho)|^2(\w-2\kappa\nabla{s}(\rho))\cdot(2\kappa\nabla{s}(\rho))\,dx\,dt
\\&\le r_2 \Bigl(
\frac{3}{4} \int_0^t \int_{{\Omega}} \frac{\rho}{\mu'(\rho)} |w-2\kappa \nabla {s}(\rho)|^4
+ \frac{(2\kappa)^4}{4} \int_0^t \int_{{\Omega}} \frac{\rho}{\mu'(\rho)} |\nabla {s}(\rho)|^4 \Bigr).
\end{split}$$ Remark that the first term on the right-hand side may be absorbed using the first term in . Let us now prove that if $r_2$ is small enough, the second term on the right-hand side may be absorbed by the term coming from the capillary quantity in the energy. From Lemma \[Lemma on jungel type inequality\], we have $$\int_0^t\int_{{\Omega}}\frac{\rho^2}{\mu^{3}(\rho)}|\nabla Z(\rho)|^4\,dx\,dt=\int_0^t\int_{{\Omega}}\frac{1}{\mu(\rho)\rho^2}|\nabla\mu(\rho)|^4\,dx\,dt.$$ It remains to check that $$\int_0^t \int_{{\Omega}} \frac{\rho}{\mu'(\rho)} |\nabla {s}(\rho)|^4=
\int_0^t\int_{{\Omega}}\frac{1}{\mu'(\rho)\rho^3}|\nabla\mu(\rho)|^4\,dx\,dt\leq
C\int_0^t\int_{{\Omega}}\frac{1}{\mu(\rho)\rho^2}|\nabla\mu(\rho)|^4\,dx\,dt.$$ This concludes, assuming $r_2$ is small enough compared to $r$.
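The last pointwise inequality is immediate in the model case $\mu(s)=s^{\alpha}$, since the two weights then differ by the constant factor $1/\alpha$; a symbolic sketch (illustration only, not part of the proof):

```python
import sympy as sp

rho, alpha = sp.symbols('rho alpha', positive=True)
mu = rho**alpha
# weight of the cubic drag remainder vs. weight controlled by the capillary dissipation
ratio = (1/(sp.diff(mu, rho)*rho**3)) / (1/(mu*rho**2))
print(sp.simplify(ratio))   # 1/alpha: a constant, independent of rho
```

In the general case the same conclusion follows from the standing bound $\alpha_1\mu(\rho)\le\mu'(\rho)\rho$.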
### The $\kappa$-entropy estimate.
Using the previous calculations, assuming $r_2$ small enough compared to $r$, and denoting $$E[\rho,u+2\kappa \nabla \mathbf{s(\rho)}, \nabla \mathbf{s(\rho)}]
= \int_{\Omega}\rho\left(\frac{|{{ u}}+2\kappa\nabla{s}(\rho)|^2}{2}
+ (1-\kappa)\kappa\frac{|\nabla{s}(\rho)|^2}{2}\right)+
\frac{\rho^{\gamma}}{\gamma-1}+\frac{\delta\rho^{10}}{9}+G(\rho),$$ we get the following $\kappa$-entropy estimate $$\label{entropy obtained}
\begin{split}
& E[\rho,u+2\kappa \nabla \mathbf{s(\rho)}, \nabla \mathbf{s(\rho)}](t)
+r_0\int_0^t\int_{{\Omega}}|{{ u}}|^2\,dx\,dt
\\&+\frac{r}{2}\int_{{\Omega}}|\nabla\int_0^{\rho}\sqrt{K(s)}\,ds|^2 \,dx
+2(1-\kappa)\int_0^t\int_{{\Omega}}\mu(\rho)|\mathbb{D}{{ u}}|^2\,dx\,dt+
20\kappa\delta\int_0^t\int_{{\Omega}}\mu'(\rho)\rho^8|\nabla\rho|^2\,dx\,dt
\\
&+2(1-\kappa)\int_0^t\int_{{\Omega}}(\mu'(\rho)\rho-\mu(\rho))({{\rm div}}{{ u}})^2\,dx\,dt+2\kappa\int_0^t\int_{{\Omega}}\mu(\rho)|A({{ u}}+2\kappa\nabla{s}(\rho))|^2\,dx\,dt
\\&+2\kappa\int_0^t\int_{{\Omega}}\frac{\mu'(\rho)p'(\rho)}{\rho}|\nabla\rho|^2\,dx\,dt
+ r_1 \int_0^t \int_{{\Omega}} \rho|{{ u}}|^3\, dx\,dt
+ \frac{r_2}{4} \int_0^t \int_{{\Omega}} \frac{\rho}{\mu'(\rho)} |{{ u}}|^4 \, dx\, dt\\
&
+ \kappa r\int_0^t\int_{{\Omega}}\mu(\rho)|2 \nabla^2{s}(\rho)|^2\,dx\,dt
+ \frac{1}{2}\kappa r\int_0^t\int_{{\Omega}}\lambda(\rho)|2\Delta{s}(\rho)|^2\,dx\,dt
\\&\leq \int_{{\Omega}}\left(\rho_0\left(\frac{|\w_0|^2}{2}+(1-\kappa)\kappa\frac{|{{ v}}_0|^2}{2}\right)+\frac{\rho_0^{\gamma}}{\gamma-1}+
\frac{\delta\rho_0^{10}}{9}+\frac{r}{2}|\nabla\int_0^{\rho_0} \sqrt{K(s)} \, ds|^2+G(\rho_0)\right)\,dx\\
+ C \frac{r_1}{\delta}
\int_0^t E[\rho,u+2\kappa \nabla \mathbf{s(\rho)}, \nabla \mathbf{s(\rho)}] \, dt .
\end{split}$$ It suffices now to remark that $$\nonumber
\begin{split}
& \int_0^t\int_{\Omega}\mu(\rho) | \mathbb{D}{{ u}}|^2
+ \int_0^t \int_{\Omega}(\mu'(\rho)\rho - \mu(\rho)) |{\rm div} {{ u}}|^2 \\
& = \int_0^t\int_{\Omega}\mu(\rho) | \mathbb{D}{{ u}}-\frac{1}{3} {\rm div} {{ u}}\, {\rm Id}|^2 \, dx dt
+ \int_0^t \int_{\Omega}(\mu'(\rho)\rho - \mu(\rho) + \frac{1}{3}\mu(\rho)) |{\rm div} {{ u}}|^2 .
\end{split}$$ Since $\alpha_1>2/3$, there exists $\varepsilon>0$ such that $$\mu'(\rho)\rho - \frac{2}{3}\mu(\rho) > \varepsilon \mu(\rho).$$ This information and the control of $\sqrt{\mu(\rho)} |A({{ u}}+2\kappa\nabla {\mathbf s}(\rho))|$ in $L^2(0,T;L^2({\Omega}))$ allow us, using the Grönwall Lemma and the constraints on the parameters, to get the uniform estimates –.
Now we can show . First, we have $$\nabla \mu(\rho) = \frac{\nabla \mu(\rho)}{\sqrt \rho} \sqrt \rho
\in L^\infty(0,T;L^1(\Omega)),$$ due to the mass conservation and the uniform control on $\nabla\mu(\rho)/\sqrt\rho$ given in . Let us now write the equation satisfied by $\mu(\rho)$ namely $$\partial_t\mu(\rho) + {\rm div}(\mu(\rho) {{ u}}) + \frac{ \lambda(\rho)}{2} {\rm div} {{ u}}= 0.$$ Recalling that $\lambda(\rho) = 2( \mu'(\rho)\rho - \mu(\rho))$ and the hypothesis on $\mu(\rho)$, we get $$\frac{d}{dt} \int_\Omega \mu(\rho) \le
C \, \bigl(\int_\Omega |\lambda(\rho)||{\rm div} {{ u}}|^2 + \int_\Omega \mu(\rho)\bigr),$$ and therefore $$\mu(\rho) \in L^\infty(0,T;L^1(\Omega)),$$ if $\mu(\rho_0) \in L^1(\Omega)$ due to the fact that $\sqrt{|\lambda(\rho)|}{\rm div} {{ u}}\in L^2(0,T;L^2(\Omega)).$ Now, we observe that $\mu(\rho)/\sqrt{\rho}$ is bounded for $\rho\leq 1$ because $\alpha_1 > 2/3$, and smaller than $\mu(\rho)$ for $\rho>1$, so that $$\frac{\mu(\rho)}{\sqrt{\rho}} \in L^\infty(0,T;L^1({\Omega})).$$ Meanwhile, thanks to , we have $$|\nabla( \mu(\rho)/\sqrt{\rho})|\leq \left|\frac{\nabla\mu(\rho)}{\sqrt{\rho}}\right|+\frac{\mu(\rho)}{2\rho\sqrt{\rho}}|\nabla\rho|\leq \left(1+\frac{1}{\alpha_1}\right)\left|\frac{\nabla\mu(\rho)}{\sqrt{\rho}}\right|.$$ By , $\nabla(\mu(\rho)/\sqrt{\rho})$ is bounded in $L^\infty(0,T;L^2(\Omega))$ and finally $\mu(\rho)/\sqrt{\rho}$ is bounded in $L^\infty(0,T;L^6(\Omega))$. Thus $$\mu(\rho) {{ u}}= \frac{\mu(\rho)}{\sqrt\rho} \sqrt \rho{{ u}}$$ is uniformly bounded in $ L^\infty(0,T;L^{3/2}({\Omega})).$ Let us come back to the equation satisfied by $\mu(\rho)$ which reads $$\partial_t \mu(\rho) + {\rm div}(\mu(\rho) {{ u}}) + \frac{\lambda(\rho)}{2}{\rm div} {{ u}}= 0.$$ Recalling that $\lambda(\rho) {\rm div} {{ u}}\in L^\infty(0,T;L^1({\Omega}))$, we get the conclusion on $\partial_t \mu(\rho)$. Let us now prove that $$Z(\rho_n)= \displaystyle \int_0^{\rho_n} \frac{\sqrt{\mu(s)} \mu'(s)}{s} ds \in L^{1+}((0,T)\times {\Omega})
\hbox{ uniformly.}$$ Note first that $$0 \le \frac{\sqrt{\mu(s)} \mu'(s)}{s} \le \alpha_2 \frac{\mu(s)^{3/2}}{s^2}
\le c_2 \alpha_2(s^{3\alpha_1/2-2} 1_{s\le 1} + \frac{\mu(s)^{3/2-}}{s^{2-}} 1_{s\ge 1}).$$ There exists $ \varepsilon>0 \hbox{ such that } \alpha_1 > 2/3+ \varepsilon,$ thus $$0 \le \frac{\sqrt{\mu(s)} \mu'(s)}{s} \le
c_2 \alpha_2 ( s^{\varepsilon -1}1_{s\le 1} + \frac{\mu(s)^{3/2-}}{s^{2-}} 1_{s\ge 1}).$$ Since $\mu'(s) > 0$ for $s>0$, the definition of $Z$ gives $$0\le Z(\rho_n) \le C (\rho_n^\varepsilon + \mu(\rho_n)^{3/2-})$$ with $C$ independent of $n$. Thus $Z(\rho_n) \in L^{\infty}(0,T; L^{1+}({\Omega}))$ uniformly with respect to $n$. The bound on $Z_1(\rho_n)$ follows along similar lines.
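As a sanity check (illustration only, not part of the proof), in the model case $\mu(s)=s^{\alpha}$ with $\alpha>2/3$ the function $Z$ can be computed in closed form, confirming both its finiteness near vacuum and the growth $\rho^{3\alpha/2-1}$ consistent with the bound above; here $\alpha=3/4$.

```python
import sympy as sp

s, rho = sp.symbols('s rho', positive=True)
alpha = sp.Rational(3, 4)          # any exponent in (2/3, 4); exponent 3*alpha/2 - 1 > 0
mu = s**alpha

# Z(rho) = int_0^rho sqrt(mu(s)) mu'(s)/s ds: the integrand is (3/4) s^{-7/8},
# integrable at s = 0 precisely because alpha > 2/3
Z = sp.integrate(sp.sqrt(mu)*sp.diff(mu, s)/s, (s, 0, rho))
print(sp.simplify(Z))              # (alpha/(3*alpha/2 - 1)) * rho^{3*alpha/2 - 1} = 6*rho^{1/8}
```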
Compactness Lemmas.
-------------------
In this subsection, we provide general compactness lemmas which will be used several times in this paper.
[*Some uniform compactness.*]{}
\[compactuniforme\] Assume we have a sequence $\{\rho_n\}_{n\in \mathbb N}$ satisfying the estimates in Theorem \[main result 1\], uniformly with respect to $n$. Then, there exists a function $\rho \in L^\infty(0,T;L^\gamma({\Omega}))$ such that, up to a subsequence, $$\mu(\rho_n) \to \mu(\rho) \hbox{ in } {\mathcal C}([0,T]; L^{3/2}({\Omega}) \hbox{ weak}),$$ and $$\rho_n \to \rho \hbox{ a.e. in } (0,T)\times {\Omega}.$$ Moreover $$\rho_n \to \rho \hbox{ in } L^{(4\gamma/3)^+}((0,T)\times \Omega),$$ $$\sqrt{\frac{P'(\rho_n)\rho_n}{\mu'(\rho_n)}} \nabla
\displaystyle \Bigl(\int_0^{\rho_n} \sqrt{\frac{P'(s)\mu'(s)}{s}}\, ds\Bigr)
\rightharpoonup \sqrt{\frac{P'(\rho)\rho}{\mu'(\rho)}} \nabla
\displaystyle \Bigl(\int_0^{\rho} \sqrt{\frac{P'(s)\mu'(s)}{s}}\, ds\Bigr)
\hbox{ in } L^{1}((0,T)\times {\Omega})$$ and $$\sqrt{\frac{P'(\rho_n)\rho_n}{\mu'(\rho_n)}} \nabla
\displaystyle \Bigl(\int_0^{\rho_n} \sqrt{\frac{P'(s)\mu'(s)}{s}}\, ds\Bigr)
\in L^{1+}((0,T)\times\Omega).$$ If $\delta_n>0$ is such that $\delta_n\to \delta\geq 0$, then $$\delta_n\rho_n^{10}\to \delta\rho^{10}\quad\text{ in } L^{\frac{4}{3}}((0,T)\times{\Omega}).$$
[**Proof.**]{} From the estimate on $\mu(\rho_n)$ and the Aubin-Lions lemma, up to a subsequence, we have $$\mu(\rho_n) \to \mu(\rho) \hbox{ in } {\mathcal C}([0,T]; L^{3/2}({\Omega}) \hbox{ weak})$$ and therefore, using that $\mu'(s)>0$ on $(0,+\infty)$ with $\mu(0)=0$, we get the conclusion on $\rho_n$. Let us now recall that $$\label{muineq}
\frac{\alpha_1}{\rho_n} \le \frac{\mu'(\rho_n)}{\mu(\rho_n)} \le \frac{\alpha_2}{\rho_n}$$ and therefore $$c_1 \rho_n^{\alpha_2} \le \mu(\rho_n) \le c_2 \rho_n^{\alpha_1}
\qquad \hbox{ for } \rho_n \le 1,$$ and $$c_1 \rho_n^{\alpha_1} \le \mu(\rho_n) \le c_2 \rho_n^{\alpha_2}
\qquad \hbox{ for } \rho_n\ge 1,$$ with $c_1$ and $c_2$ independent of $n$. Note that $$\label{estimpressure}
\sqrt{\frac{p'(\rho_n)\mu'(\rho_n)}{\rho_n}}\nabla \rho_n \in L^\infty(0,T;L^2({\Omega}))
\hbox{ uniformly.}$$ Let us prove that there exists $\varepsilon$ such that $$I_0= \displaystyle \int_0^T\int_\Omega \rho_n^{\frac{4\gamma}{3}+\varepsilon} < C$$ with $C$ independent of $n$ and of the parameters. We first remark that it suffices to consider the region $\rho_n \ge 1$, and that we may choose $\varepsilon$ such that $\varepsilon \le (\gamma-1)/3.$ With such a choice, $$\int_0^T\int_\Omega \rho_n^{\frac{4\gamma}{3}+\varepsilon} 1_{\rho_n \ge 1}
\le \int_0^T\int_\Omega \rho_n^{\frac{2\gamma}{3} + \gamma - \frac{1}{3}} 1_{\rho_n \ge 1}
\le \int_0^T \int_{\Omega}\rho_n^{\frac{2\gamma}{3} + \gamma + \alpha_1 -1} 1_{\rho_n \ge 1}$$ recalling that $\alpha_1 >2/3.$ Following [@LiXi], it remains to prove that $$\displaystyle I_1= \int_0^T\int_{\Omega}\bigl[\rho_n^{[5\gamma + 3(\alpha_1-1)]/3} \, 1_{\rho_n \ge 1} \bigr] <+\infty$$ uniformly. Denoting $$I_2 = \int_0^T\int_{\Omega}\bigl[\rho_n^{[5\gamma + 3(\alpha_2-1)]/3} \, 1_{\rho_n \le 1} \bigr]$$ and using the bounds on $\mu(\rho_n)$ in terms of power functions of $\rho_n$, which differ according to whether $\rho_n \ge 1$ or $\rho_n\le 1$, we can write: $$I_1 \le I_1 + I_2 \le C_a \int_0^T \int_{\Omega}\rho_n^{2\gamma/3} P'(\rho_n) \,\mu(\rho_n)
\le C_a \int_0^T \|\rho_n^\gamma\|^{2/3}_{L^1({\Omega})}\|P'(\rho_n)\mu(\rho_n)\|_{L^3({\Omega})}$$ where $C_a$ does not depend on $n$.
\\&\le \|\sqrt{P'(\rho_n)\mu(\rho_n)}\|_{L^1({\Omega})}
+ \|\nabla \bigl[\sqrt{P'(\rho_n)\mu(\rho_n)}\bigr]\|_{L^2({\Omega})}^2.
\end{split}$$ Let us now check that the two terms are uniformly bounded in time. First we caculate $$\nabla \bigl[\sqrt{P'(\rho_n)\mu(\rho_n)}\bigr]
= \frac{P''(\rho_n) \mu(\rho_n)
+ P'(\rho_n)\mu'(\rho_n)}{\sqrt{P'(\rho_n)\mu(\rho_n)} }\nabla \rho_n$$ and using , we can check that $$\frac{P''(\rho_n) \mu(\rho_n)
+ P'(\rho_n)\mu'(\rho_n)}{\sqrt{P'(\rho_n)\mu(\rho_n)} }
\le \sqrt{\frac{P'(\rho_n)\mu'(\rho_n)}{\rho_n}}.$$ Therefore, using , uniformly with respect to $n$, we get $$\sup_{t\in [0,T]} \|\nabla \bigl[\sqrt{P'(\rho_n)\mu(\rho_n)}\bigr]\|_{L^2({\Omega})}^2 < + \infty.$$ Let us now check that uniformly with respect to $n$ $$\label{AAAestimm}
\sup_{t\in [0,T]} \|\sqrt{P'(\rho_n)\mu(\rho_n)}\|_{L^1({\Omega})} < + \infty.$$ Using the bounds on $\mu(\rho_n)$, we have $$\int_{\Omega}\sqrt{P'(\rho_n)\mu(\rho_n)}
\le C \int_{\Omega}\Bigl[\rho_n^{(\gamma-1+\alpha_1)/2} 1_{\rho_n \le 1}
+ \rho_n^{(\gamma-1+\alpha_2)/2} 1_{\rho_n \ge 1} \Bigr]$$ with $C$ independent on $n$. Recalling that $\alpha_1 \ge 2/3$ and $\alpha_2 < 4$, we can check that $$\int_{\Omega}\sqrt{P'(\rho_n)\mu(\rho_n)}
\le C \int_{\Omega}\Bigl[\rho_n^{\gamma/3} + \rho_n^{\frac{\gamma}{2}}\rho_n^{\frac{3}{2}} \Bigr],$$ and therefore using that $\rho_n^\gamma \in L^\infty(0,T;L^1({\Omega}))$ and $\rho_n\in L^{\infty}(0,T;L^{10}({\Omega}))$, we get . This ends the proof of the convergence of $\rho_n$ to $\rho$ in $L^{(4\gamma/3)^+}((0,T)\times \Omega$.
Let us now focus on the convergence of $$\label{weak convergence of product}
\sqrt{\frac{P'(\rho_n)\rho_n}{\mu'(\rho_n)}} \nabla
\displaystyle \Bigl(\int_0^{\rho_n} \sqrt{\frac{P'(s)\mu'(s)}{s}}\, ds\Bigr).$$ First let us recall that $$\nabla \displaystyle \Bigl(\int_0^{\rho_n} \sqrt{\frac{P'(s)\mu'(s)}{s}}\, ds\Bigr)
\in L^\infty(0,T;L^2(\Omega)) \hbox{ uniformly}.$$ Let us now prove that $$\label{estimm}
\sqrt{\frac{P'(\rho_n)\rho_n}{\mu'(\rho_n)}}
\in L^{2+}((0,T)\times \Omega).$$ Recall first that $\alpha_1 >\frac{2}{3}$, we just have to consider $\rho_n \ge 1$. We write $$\frac{P'(\rho_n)\rho_n}{\mu'(\rho_n)} 1_{\rho_n\ge 1}
\le C \rho_n^{\gamma - \alpha_1 +1} 1_{\rho_n\ge 1}
\le C \rho_n^{\gamma +1/3} 1_{\rho_n\ge 1}
\le C \rho_n^{\frac{4\gamma}{3}} 1_{\rho_n \ge 1}.$$ We can use the fact that $\rho_n^{(4\gamma/3)^+} \in L^1((0,T)\times \Omega)$ uniformly to conclude on . Thanks to $$\sqrt{\frac{P'(\rho_n)\rho_n}{\mu'(\rho_n)}}
\to \sqrt{\frac{P'(\rho)\rho}{\mu'(\rho)}} \hbox{ in } L^2((0,T)\times \Omega)$$ and $$\nabla
\displaystyle \Bigl(\int_0^{\rho_n} \sqrt{\frac{P'(s)\mu'(s)}{s}}\, ds\Bigr)
\to \nabla
\displaystyle \Bigl(\int_0^{\rho} \sqrt{\frac{P'(s)\mu'(s)}{s}}\, ds\Bigr)
\hbox{ weakly in } L^2((0,T)\times \Omega),$$ we have the weak convergence of in $ L^{1}((0,T)\times {\Omega})$.
We now investigate limits on ${{ u}}$ independent of the parameters. We need to distinguish the case with hyper-viscosity ${\varepsilon}_2>0$ from the case without. In the case with hyper-viscosity, the estimate depends on ${\varepsilon}_1$ because of the drag force $r_1$, while the estimate in the case ${\varepsilon}_2=0$ is independent of all the other parameters. This is why we will consider the limit ${\varepsilon}_2\to 0$ first.
\[lem u\] Assume that ${\varepsilon}_1>0$ is fixed. Then, there exists a constant $C>0$ depending on ${\varepsilon}_1$ and $C_{in}$, but independent of all the other parameters (as long as they are bounded), such that for any initial values $(\rho_0, \sqrt{\rho_0}u_0)$ verifying (\[Initial conditions\]) for $C_{in}>0$ we have $$\begin{aligned}
&&\|\partial_t(\rho {{ u}})\|_{L^{1+}(0,T;W^{-s,2}(\Omega))}\leq C,\\
&&\|\nabla(\rho {{ u}})\|_{L^2(0,T;L^1(\Omega))}\leq C.\end{aligned}$$
Assume now that ${\varepsilon}_2=0$. Let $\Phi:{{\mathbb R}}^+\to {{\mathbb R}}$ be a smooth function, positive for $\rho>0$, such that $$\begin{aligned}
&&\Phi(\rho)+|\Phi'(\rho)|\leq C e^{-\frac{1}{\rho}}, \qquad \mathrm{for} \ \rho\leq 1,\\
&&\Phi(\rho)+|\Phi'(\rho)|\leq C e^{-\rho}, \qquad \mathrm{for} \ \rho\geq 2.\end{aligned}$$ Assume that the initial values $(\rho_0, \sqrt{\rho_0}u_0)$ verify (\[Initial conditions\]) for a fixed $C_{in}>0$. Then, there exists a constant $C>0$ independent of ${\varepsilon}_1, r_0, r_1, r_2, \delta$ (as long as they are bounded), such that $$\begin{aligned}
&&\|\partial_t\left[\Phi(\rho) {{ u}}\right]\|_{L^{1+}(0,T;W^{-2,1}(\Omega))}\leq C,\\
&&\|\nabla\left[\Phi(\rho) {{ u}}\right]\|_{L^2(0,T;L^1(\Omega))}\leq C.\end{aligned}$$
We split the proof into two cases.

[**Case 1:**]{} Assume that ${\varepsilon}_1>0$. From the equation on $\rho {{ u}}$ and the [*a priori*]{} estimates, we find directly that $$\|\partial_t (\rho {{ u}})\|_{L^{1+}(0,T;W^{-s,2}(\Omega))}\leq C+ r_1^{1/4} \frac{\|\rho\|^{1/4}_{L^1((0,T)\times\Omega)}}{\|\mu'(\rho)\|_{L^\infty((0,T)\times\Omega)}}\left(r_1\int_0^T\int_\Omega \rho |{{ u}}|^4\,dx\,dt \right)^{3/4}\leq C(1+1/{\varepsilon}_1).$$ We have $\mu(\rho)\geq {\varepsilon}_1 \rho$, and from (\[priori estimates\]), we have the [*a priori*]{} estimate $$\|\nabla \sqrt{\rho}\|^2_{L^\infty(0,T;L^2(\Omega))}\leq \frac{C}{{\varepsilon}_1}.$$ Hence $$\begin{aligned}
\|\nabla(\rho {{ u}})\|_{L^2(0,T;L^1(\Omega))}
&& \leq
\left\|\frac{\rho}{\sqrt{\mu}(\rho)}\right\|_{L^\infty(0,T;L^2(\Omega))}
\left\|\sqrt{\mu}(\rho)\nabla u\right\|_{L^2(0,T;L^2(\Omega)))} \\
&& \> +2\|\nabla \sqrt{\rho}\|_{L^\infty(0,T;L^2(\Omega))} \|\sqrt{\rho} {{ u}}\|_{L^\infty(0,T;L^2(\Omega))}\\
&&\> \leq C.\end{aligned}$$
[**Case 2:**]{} Assume now that ${\varepsilon}_2=0$. Multiplying the equation on $\rho {{ u}}$ by $\Phi(\rho)/\rho$, we get, as for the renormalization, that $$\|\partial_t\left[\Phi(\rho){{ u}}\right]\|_{L^{1+}(0,T;W^{-2,1}(\Omega))}\leq C.$$ Note that $$\begin{aligned}
&& \|\nabla\left[\Phi(\rho) {{ u}}\right]\|_{L^2(0,T;L^1(\Omega))}\leq
\left\|\frac{\Phi(\rho)}{\sqrt{\mu}(\rho)}\right\|_{L^\infty} \left\|\sqrt{\mu}(\rho)\nabla {{ u}}\right\|_{L^2(L^2)}\\
&&\qquad\qquad +2\| \frac{\Phi'(\rho)}{\mu'(\rho)}\|_{L^\infty((0,T)\times\Omega)} \|\mu'(\rho)\nabla \sqrt{\rho}\|_{L^\infty(0,T;L^2(\Omega))} \|\sqrt{\rho} {{ u}}\|_{L^\infty(0,T;L^2(\Omega))}\\
&&\qquad\qquad \leq C.\end{aligned}$$
\[Compactnesstool1\] Assume either that ${\varepsilon}_{2,n}=0$, or ${\varepsilon}_{1,n}={\varepsilon}_1>0$. Let $(\rho_n,\sqrt{\rho_n} {{ u}}_n)$ be a sequence of solutions for a family of bounded parameters with uniformly bounded initial values verifying (\[Initial conditions\]) with a fixed $C_{in}$. Assume that there exists $\alpha>0$, and a smooth function $h:{{\mathbb R}}^+\times{{\mathbb R}}^3\to{{\mathbb R}}$ such that $\rho_n^\alpha$ is uniformly bounded in $L^p((0,T)\times\Omega)$ and $h(\rho_n,{{ u}}_n)$ is uniformly bounded in $L^q((0,T)\times \Omega)$, with $$\frac{1}{p}+\frac{1}{q}<1.$$ Then, up to a subsequence, $\rho_n$ converges to a function $\rho$ strongly in $L^1$, $\sqrt{\rho_n}{{ u}}_n$ converges weakly to a function $q$ in $L^2$. We define ${{ u}}=q/\sqrt{\rho}$ whenever $\rho\neq 0$, and ${{ u}}=0$ on the vacuum where $\rho=0$. Then $\rho_n^\alpha h(\rho_n,{{ u}}_n)$ converges strongly in $L^1$ to $\rho^\alpha h(\rho, {{ u}})$.
Thanks to the uniform bound on the kinetic energy $\int \rho_n |{{ u}}_n|^2$, and to Lemma \[compactuniforme\], up to a subsequence, $\rho_n$ converges strongly in $L^1((0,T)\times \Omega)$ to a function $\rho$, and $\sqrt{\rho_n} {{ u}}_n$ converges weakly in $L^2((0,T)\times \Omega)$ to a function $q$.
We want to show that, up to a subsequence, ${{ u}}_n {\bf 1}_{\{\rho>0\}}$ converges almost everywhere to ${{ u}}{\bf 1}_{\{\rho>0\}}$. We consider the two cases. First, if ${\varepsilon}_{1,n}={\varepsilon}_1>0$, then from Lemma \[lem u\] and the Aubin-Lions Lemma, $\rho_n {{ u}}_n$ converges strongly in $C^0(0,T; L^1(\Omega))$ to $\sqrt{\rho} q=\rho {{ u}}$. Up to a subsequence, both $\rho_n$ and $\rho_n {{ u}}_n$ converge almost everywhere to, respectively, $\rho$ and $\rho {{ u}}$. For almost every $(t,x) \in \{\rho>0\}$ and $n$ big enough, $\rho_n(t,x)>0$, so ${{ u}}_n=\rho_n {{ u}}_n/\rho_n$ at this point converges to ${{ u}}$. If ${\varepsilon}_{2,n}=0$ we use the second part of Lemma \[lem u\]: thanks to the Aubin-Lions Lemma, $\Phi(\rho_n){{ u}}_n$ converges strongly in $C^0(0,T; L^1(\Omega))$ to $\Phi(\rho) {{ u}}$. We still have, up to a subsequence, both $\rho_n$ and $\Phi(\rho_n) {{ u}}_n$ converging almost everywhere to, respectively, $\rho$ and $\Phi(\rho) {{ u}}$ (we used the fact that $\Phi(r)/\sqrt{r}=0$ at $r=0$). Since $\Phi(r)\neq 0$ for $r\neq0$, for almost every $(t,x) \in \{\rho>0\}$ and $n$ big enough, $\Phi(\rho_n)(t,x)>0$, so ${{ u}}_n=\Phi(\rho_n) {{ u}}_n/\Phi(\rho_n)$ at this point converges to ${{ u}}$.
Note that $$\rho_n^\alpha h(\rho_n,{{ u}}_n) =\rho_n^\alpha h(\rho_n,{{ u}}_n) {\bf 1}_{\{\rho>0\}}+\rho_n^\alpha h(\rho_n,{{ u}}_n) {\bf 1}_{\{\rho=0\}}.$$ The first term converges almost everywhere to $\rho^\alpha h(\rho,{{ u}}) {\bf 1}_{\{\rho>0\}}$, and therefore to $\rho^\alpha h(\rho,{{ u}}) $ in $L^1$ by Lebesgue's theorem. The second part can be estimated as follows $$\|\rho_n^\alpha h(\rho_n,{{ u}}_n) {\bf 1}_{\{\rho=0\}}\|_{{L^1}}\leq \|h(\rho_n,{{ u}}_n)\|_{L^q}\|\rho_n^\alpha {\bf 1}_{\{\rho=0\}}\|_{{L^{p-{\varepsilon}}}}.$$ But $\rho_n^\alpha {\bf 1}_{\{\rho=0\}}$ converges almost everywhere to 0, so by Lebesgue's theorem the last term converges to 0.
[*Some compactness when the parameters are fixed.*]{} For any positive fixed $\delta$, $r_0$, $r_1$, $r_2$ and $r$, to recover a weak solution to , we only need to handle the compactness of the terms $$r\rho_n\nabla\left(\sqrt{K(\rho_n)}\D(\int_0^{\rho_n}\sqrt{K(s)}\,ds)\right)$$ and $$\frac{\rho_n}{\mu'(\rho_n)}|{{ u}}_n|^2{{ u}}_n.$$ Indeed, due to the drag term $r_1\rho_n|{{ u}}_n|{{ u}}_n$ and the fact that $\inf_{s\in [0,+\infty)}\mu'(s) >\varepsilon_1>0$, one obtains the compactness for all other terms in the same way as in [@BDZ; @MV].
[*Capillarity term.*]{} To pass to the limits in $$r\rho_n\nabla\left(\sqrt{K(\rho_n)}\D(\int_0^{\rho_n}\sqrt{K(s)}\,ds)\right),$$ we use the identity $$\begin{split}
&\rho\nabla\left(\sqrt{K(\rho_n)}\D(\int_0^{\rho_n}\sqrt{K(s)}\,ds)\right) \\
& \hskip3cm =
4 \Bigl[2{\rm div}(\sqrt{\mu(\rho_n)} \nabla\nabla Z(\rho_n))
- \Delta (\sqrt{\mu(\rho_n)} \nabla Z(\rho_n))\Bigr]\\
&\hskip4cm + \Bigl[ \nabla \bigl[(\frac{2\lambda(\rho_n)}{\sqrt{\mu(\rho_n)}}
+ k(\rho_n))\Delta Z(\rho_n)\bigr]
- \nabla {\rm div} [ k(\rho_n)\nabla Z(\rho_n)] \Bigr]
\end{split}$$ where $\displaystyle Z(\rho_n) = \int_0^{\rho_n} [(\mu(s))^{1/2} \mu'(s)]/s \, ds$ and $\displaystyle k(\rho_n) = \int_0^{\rho_n} \frac{\lambda(s)\mu'(s)}{\mu(s)^{3/2}} ds.$ It allows us to rewrite the weak form coming for the capillarity term as follows $$\begin{split}
&\int_0^t\int_{\Omega}\sqrt{K(\rho_n)} \Delta (\int_0^{\rho_n} \sqrt{K(s)}\, ds) {\rm div} (\rho_n \psi) \, dx\, dt
\\&= 4 \int_0^t\int_{\Omega}\bigl(2\sqrt{\mu(\rho_n)} \nabla\nabla Z(\rho_n): \nabla \psi
+ \sqrt{\mu(\rho_n)}\nabla Z(\rho_n)\cdot \Delta \psi\bigr) \\
& \hskip1cm
+ \int_0^t\int_\Omega \bigl((\frac{2\lambda(\rho_n)}{\sqrt{\mu(\rho_n)}}
+ k(\rho_n))\Delta Z(\rho_n) \, {\rm div} \psi
+ k(\rho_n) \nabla Z(\rho_n) \cdot \nabla {\rm div} \psi\bigr)
\\
&=A_1+A_2.
\end{split}$$ In fact, with Lemma \[compactuniforme\] at hand, we are able to have compactness of $A_1$ and $A_2$ easily. Concerning $A_1$, we know that $$\sqrt{\mu(\rho_n)} \to \sqrt{\mu(\rho)} \hbox{ in } L^p((0,T); L^q({\Omega}))
\hbox{ for all } p<+\infty \hbox{ and } q<3.$$ Since $\nabla\nabla Z(\rho_n)$ is uniformly bounded in $L^2(0,T;L^2({\Omega}))$ and $\int_{\Omega}\nabla Z(\rho_n)\,dx = 0$ by the periodic boundary condition, $\nabla Z(\rho_n)$ is uniformly bounded in $L^2(0,T;L^6({\Omega}))$. Thus we have the following weak convergences $$\int_{{\Omega}}\sqrt{\mu(\rho_n)}\nabla Z(\rho_n)\cdot \Delta\psi\,dx
\to \int_{{\Omega}}\sqrt{\mu}\nabla Z\cdot \Delta\psi\,dx,$$ and $$\int_{{\Omega}}\sqrt{\mu(\rho_n)}\nabla \nabla Z(\rho_n):\nabla\psi\,dx
\to \int_{{\Omega}}\sqrt{\mu}\nabla \nabla Z:\nabla\psi\,dx,$$ thanks to Lemma \[compactuniforme\]. We conclude that $Z=Z(\rho)$, thanks to the bound on $Z(\rho_n)$ and the strong convergence on $\rho_n$. Thus using the compactness on $\rho_n$, the passage to the limit in $A_1$ is done. Concerning $A_2$, we just have to look at the coefficients $$\displaystyle k(\rho_n)= \int_0^{\rho_n} \lambda(s)\mu'(s)/\mu(s)^{3/2} \, ds, \qquad
j(\rho_n)= {2\lambda(\rho_n)}/{\sqrt{\mu(\rho_n)}}.$$ Recalling the assumptions on $\mu(s)$ and the relation $\lambda(s) = 2 (\mu'(s)s -\mu(s))$, we have $$2(\alpha_1- 1) \mu(s) \le \lambda(s) \le 2(\alpha_2-1) \mu(s),$$ and $$\frac{\alpha_1}{\sqrt{\mu(s)}s} \le \frac{\mu'(s)}{\mu(s)^{3/2}}
\le \frac{\alpha_2}{\sqrt{\mu(s)}s}.$$ This means that the coefficients $k(\rho_n)$ and $j(\rho_n)$ are comparable to $\sqrt{\mu(\rho_n)}$. Using the compactness of the density $\rho_n$ and the informations on $\mu(\rho_n)$ given in Corollary \[compactuniforme\], we conclude the compactness of $A_2$ doing as for $A_1$.
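Let us note in passing that the comparability of $k(\rho_n)$ and $j(\rho_n)$ with $\sqrt{\mu(\rho_n)}$ can be made fully explicit. Assuming $\mu(0)=0$ and $\alpha_1>1$ (so that $\lambda\geq 0$), integrating the two-sided bound on $\lambda$ gives $$k(\rho) = \int_0^{\rho} \frac{\lambda(s)\mu'(s)}{\mu(s)^{3/2}}\, ds
\;\le\; 2(\alpha_2-1)\int_0^{\rho} \frac{\mu'(s)}{\sqrt{\mu(s)}}\, ds
\;=\; 4(\alpha_2-1)\sqrt{\mu(\rho)},$$ and in the same way $k(\rho)\ge 4(\alpha_1-1)\sqrt{\mu(\rho)}$, while $$4(\alpha_1-1)\sqrt{\mu(\rho)}\;\le\; j(\rho)=\frac{2\lambda(\rho)}{\sqrt{\mu(\rho)}}\;\le\; 4(\alpha_2-1)\sqrt{\mu(\rho)}$$ follows directly from $2(\alpha_1-1)\mu\le\lambda\le 2(\alpha_2-1)\mu$.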
[*Cubic non-linear drag term.*]{} We will use Lemma \[Compactnesstool1\] to show the compactness of $$\frac{\rho_n}{\mu'(\rho_n)}|{{ u}}_n|^2{{ u}}_n.$$ More precisely, we write $$\label{decompdrag}
\frac{\rho_n}{\mu'(\rho_n)}|{{ u}}_n|^2{{ u}}_n=\rho_n^{\frac{1}{6}}\sqrt{\frac{\rho_n}{\mu'(\rho_n)}}|{{ u}}_n|^2\rho_n^{\frac{1}{3}}|{{ u}}_n|\frac{1}{\sqrt{\mu'(\rho_n)}}
= \rho_n^{1/6} h(\rho_n,|{{ u}}_n|).$$ By Lemma \[compactuniforme\], there exists $\varepsilon>0$ such that $\rho_n^{\frac{1}{6}}$ is uniformly bounded in $L^{\infty}(0,T;L^{6\gamma+\varepsilon}({\Omega}))$ and $\rho_n\to \rho\text{ a.e.}$, so $$\label{comp1}
\rho_n^{\frac{1}{6}}\to\rho^{\frac{1}{6}}\quad\text{ in } L^{6\gamma+\varepsilon}((0,T)\times {\Omega}).$$ Note that $\sqrt{\frac{\rho_n}{\mu'(\rho_n)}}|{{ u}}_n|^2$ is uniformly bounded in $L^2(0,T;L^2({\Omega}))$, and since $\inf_{s\in [0,+\infty)} \mu'(s) \ge \varepsilon_1 >0$, $\rho_n^{\frac{1}{3}}|{{ u}}_n|\frac{1}{\sqrt{\mu'(\rho_n)}}$ is uniformly bounded in $L^3(0,T;L^3({\Omega}))$, thus $$\label{comp2}
h(\rho_n,|{{ u}}_n|) =
\sqrt{\frac{\rho_n}{\mu'(\rho_n)}}|{{ u}}_n|^2\rho_n^{\frac{1}{3}}|{{ u}}_n|\frac{1}{\sqrt{\mu'(\rho_n)}}
\in L^{\frac{6}{5}}(0,T;L^{\frac{6}{5}}({\Omega}))
\hbox{ uniformly.}$$ By Lemma \[Compactnesstool1\] and –, we deduce that $$\int_0^t\int_{{\Omega}}\frac{\rho_n}{\mu'(\rho_n)}|{{ u}}_n|^2{{ u}}_n\,dx\,dt\to \int_0^t\int_{{\Omega}}\frac{\rho}{\mu'(\rho)}|{{ u}}|^2{{ u}}\,dx\,dt. \, \square$$
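Let us record the elementary integrability bookkeeping behind the decomposition above: assuming the adiabatic exponent satisfies $\gamma>1$, so that $6\gamma+\varepsilon>6$, Hölder's inequality applied to the two bounds above gives $$\frac{1}{r}=\frac{1}{6\gamma+\varepsilon}+\frac{5}{6}<\frac{1}{6}+\frac{5}{6}=1,$$ hence $\rho_n^{1/6} h(\rho_n,|{{ u}}_n|)$ is uniformly bounded in $L^{r}((0,T)\times{\Omega})$ for some $r>1$; combined with the almost everywhere convergence of $\rho_n$ and ${{ u}}_n$, this equi-integrability is what allows the passage to the limit in $L^1$.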
Relying on the compactness stated in this section and the compactness in [@MV], we are able to follow the argument in [@BDZ] to show Theorem \[main result 1\]. Thanks to the term $r_0\rho_n|{{ u}}_n|{{ u}}_n$, we have $$\int_0^T\int_{{\Omega}}r_0\rho_n|{{ u}}_n|^4\,dx\,dt\leq C.$$ This gives us that $$\sqrt{\rho_n}{{ u}}_n\to\sqrt{\rho}{{ u}}\; \text{ strongly in } L^2(0,T;L^2({\Omega})).$$ With the above compactness of this section, we are able to pass to the limits to recover a weak solution. In fact, to recover a weak solution to , we have to pass to the limits in the following order: $\varepsilon_4\to 0$, $n\to\infty,$ $\varepsilon_3\to0$ and $\varepsilon\to 0$. In particular, when passing to the limit $\varepsilon_3\to 0$, we also need to handle the identification of ${{ v}}$ with $2\nabla{s}(\rho)$. Following the same argument as in [@BDZ], one shows that ${{ v}}$ and $2\nabla{s}(\rho)$ satisfy the same momentum equation. By the regularity and compactness of solutions, we can show the uniqueness of solutions, and therefore ${{ v}}=2\nabla{s}(\rho)$. This ends the proof of Theorem \[main result 1\].
From weak solutions to renormalized solutions to the approximation
==================================================================
This section is dedicated to showing that a weak solution is a renormalized solution for our last level of approximation, namely to proving Theorem \[renorm\]. First, we introduce the mollified functions $$[f(t,x)]_\varepsilon =f*\eta_{\varepsilon}(t,x),\text{ for any\ \ } t>\varepsilon,\quad\text{ and }\;[f(t,x)]_\varepsilon^x =f*\eta_{\varepsilon}(x),$$ where $$\eta_{\varepsilon}(t,x)=\frac{1}{\varepsilon^{d+1}}\eta(\frac{t}{\varepsilon},\frac{x}{\varepsilon}),\quad\text{ and } \eta_{\varepsilon}(x)=\frac{1}{\varepsilon^{d}}\eta(\frac{x}{\varepsilon}),$$ with $\eta$ a smooth nonnegative even function compactly supported in the space-time ball of radius 1, and with integral equal to 1. In this section, we will rely on the following two lemmas. In both of them, $\partial$ denotes a partial derivative in one direction (space or time). The first one is the commutator lemma of DiPerna and Lions, see [@Lions].
\[Lions’s lemma\] Let $f\in W^{1,p}({{\mathbb R}}^N\times{{\mathbb R}}^{+}),\,g\in L^{q}({{\mathbb R}}^N\times{{\mathbb R}}^{+})$ with $1\leq p,q\leq \infty$, and $\frac{1}{p}+\frac{1}{q}\leq 1$. Then, we have $$\|
[\partial(fg)]_\varepsilon -\partial(f([g]_{\varepsilon}))\|_{L^{r}({{\mathbb R}}^N\times {{\mathbb R}}^+)}\leq C\|f\|_{W^{1,p}({{\mathbb R}}^N\times{{\mathbb R}}^{+})}\|g\|_{L^{q}({{\mathbb R}}^N\times{{\mathbb R}}^{+})}$$ for some $C\geq 0$ independent of $\varepsilon$, $f$ and $g$, where $r$ is determined by $\frac{1}{r}=\frac{1}{p}+\frac{1}{q}.$ In addition, $$[\partial(fg)]_{\varepsilon}-\partial(f([g]_{\varepsilon}))\to0\;\;\text{ in }\,L^{r}({{\mathbb R}}^N\times{{\mathbb R}}^{+})$$ as $\varepsilon \to 0$ if $r<\infty.$ Moreover, the same holds in the space variable only: if $f\in W^{1,p}({{\mathbb R}}^N),\,g\in L^{q}({{\mathbb R}}^N)$ with $1\leq p,q\leq \infty$ and $\frac{1}{p}+\frac{1}{q}\leq 1$, then we have $$\|
[\partial(fg)]^x_\varepsilon -\partial(f([g]^x_{\varepsilon}))\|_{L^{r}({{\mathbb R}}^N)}\leq C\|f\|_{W^{1,p}({{\mathbb R}}^N)}\|g\|_{L^{q}({{\mathbb R}}^N)}$$ for some $C\geq 0$ independent of $\varepsilon$, $f$ and $g$, where $r$ is determined by $\frac{1}{r}=\frac{1}{p}+\frac{1}{q}.$ In addition, $$[\partial(fg)]^x_{\varepsilon}-\partial(f([g]^x_{\varepsilon}))\to0\;\;\text{ in }\,L^{r}({{\mathbb R}}^N)$$ as $\varepsilon \to 0$ if $r<\infty.$
We also need another very standard lemma as follows.
\[standard lemma\] If $f\in L^p({\Omega}\times{{\mathbb R}}^{+})$ and $g\in L^q({\Omega}\times{{\mathbb R}}^{+})$ with $\frac{1}{p}+\frac{1}{q}=1$ and $H\in W^{1,\infty}({{\mathbb R}})$, then $$\begin{split}
&\int_0^T\int_{{\Omega}} [f]_\varepsilon g\,dx\,dt=\int_0^T\int_{{\Omega}}f [g]_\varepsilon \,dx\,dt,
\\&\lim_{\varepsilon\to 0 } \int_0^T\int_{{\Omega}} [f]_\varepsilon g\,dx\,dt=\int_0^T\int_{{\Omega}}f g\,dx\,dt,
\\&\partial [f]_\varepsilon =[\partial f]_\varepsilon,
\\&\lim_{\varepsilon\to 0}\|H([f]_\varepsilon)-H(f)\|_{L^s_{loc}({\Omega}\times{{\mathbb R}}^+)}=0,\quad\text{for any }\, 1\leq s<\infty.
\end{split}$$
We define a nonnegative cut-off function $\phi_m$ for any fixed positive $m$ as follows. $$\label{cutoff function}
\phi_m(y)\begin{cases}= 0, \;\;\;\;\;\quad\quad\quad\quad\text{ if }0\leq y\leq \frac{1}{2m},
\\ =2my-1,\;\;\;\;\;\quad\text{ if } \frac{1}{2m}\leq y\leq \frac{1}{m},
\\ =1,\,\;\;\;\;\quad\quad\;\quad\quad\text{ if } \frac{1}{m}\leq y\leq m,
\\=2-\frac{y}{m},\,\;\;\;\;\quad\quad\text{ if } m\leq y\leq 2m,
\\=0,\,\;\;\;\;\quad\quad\;\quad\quad\text{ if } y\geq 2m.
\end{cases}$$ It enables us to define an approximate velocity where the density is bounded away from zero and from infinity. This is crucial for our procedure, since the gradient of the approximate velocity is bounded in $L^2((0,T)\times {\Omega})$. In particular, we introduce ${{ u}}_m={{ u}}\phi_m(\rho)$ for any fixed $m>0$. Thus, we can show that $\nabla{{ u}}_m$ is bounded in $L^2(0,T;L^2({\Omega}))$ due to . In fact, $$\begin{split}
\nabla{{ u}}_m&=\phi_m'(\rho){{ u}}\otimes\nabla\rho+\phi_m(\rho)\frac{1}{\sqrt{\mu(\rho)}}{{\mathbb T}_\mu}\\&=\big(\phi_m'(\rho)\frac{(\mu(\rho)\rho)^{1/4}}{(\mu'(\rho))^{\frac{3}{4}}}\big)
\big((\frac{\rho}{\mu'(\rho)})^{\frac{1}{4}}{{ u}}\big)\otimes
\big(\frac{\mu'(\rho)}{\rho^{\frac{1}{2}}\mu(\rho)^{\frac{1}{4}}}\nabla\rho\big)
+\phi_m(\rho)\frac{1}{\sqrt{\mu(\rho)}}{{\mathbb T}_\mu}.
\end{split}$$ Similarly to [@LaVa], thanks to the cut-off function and for $m$ fixed, $\phi_m'(\rho){(\mu(\rho)\rho)^{\frac{1}{4}}}/{(\mu'(\rho))^{\frac{3}{4}}}$ and $\phi_m(\rho)/\sqrt{\mu(\rho)}$ are bounded. Then $\nabla{{ u}}_m$ is bounded in $L^2((0,T)\times \Omega)$ using the estimates with $r>0$ and $r_2>0$, and hence for $\varphi \in W^{2,+\infty}({{\mathbb R}})$, we get $\nabla\varphi'(({{ u}}_m)_j)$ is bounded in $L^2((0,T)\times \Omega)$ for $j=1,2,3$.
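Let us also record a uniform bound that can be read off directly from the definition of $\phi_m$ and will be used when letting $m\to\infty$: since $\phi_m'(y)=2m$ on $[\frac{1}{2m},\frac{1}{m}]$, $\phi_m'(y)=-\frac{1}{m}$ on $[m,2m]$, and $\phi_m'(y)=0$ elsewhere, we have $$\sup_{y\geq 0}\,|y\,\phi_m'(y)|\;\leq\;\max\Bigl(2m\cdot\frac{1}{m},\;\frac{2m}{m}\Bigr)=2,$$ uniformly in $m$. This is precisely the bound $|\rho\phi_m'(\rho)|\leq 2$ invoked later when passing to the limit $m\to\infty$.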
The following estimates are necessary. We state them in the lemma below.
\[estimate of approximation\] There exists a constant $C>0$ depending only on the fixed solution $(\sqrt{\rho},\sqrt{\rho}{{ u}})$, and $C_m$ depending also on $m$ such that $$\begin{split}&\|\rho\|_{L^{\infty}(0,T;L^{10}(\Omega))}
+\|\rho{{ u}}\|_{L^3(0,T;L^{\frac{5}{2}}({\Omega}))}
+ \|\rho|{{ u}}|^2\|_{L^{2}(0,T; L^{\frac{10}{7}}({\Omega}))}
\\& +\|\sqrt{\mu}\big(|{{\mathbb S}_\mu}|+r|{{\mathbb S}_r}|\big)\|_{L^{2}(0,T; L^{\frac{10}{7}}({\Omega}))}
+ \|\frac{\lambda(\rho)}{\mu(\rho)}\|_{L^{\infty}((0,T)\times {\Omega})}
\\& + \|\sqrt{\frac{P'(\rho_n)\rho_n}{\mu'(\rho_n)}} \nabla
\displaystyle \Bigl(\int_0^{\rho_n} \sqrt{\frac{P'(s)\mu'(s)}{s}}\, ds\Bigr)\|_{L^{1+}((0,T)\times\Omega)} \\
& + \|\sqrt{\frac{P_\delta'(\rho_n)\rho_n}{\mu'(\rho_n)}} \nabla
\displaystyle \Bigl(\int_0^{\rho_n} \sqrt{\frac{P_\delta'(s)\mu'(s)}{s}}\, ds\Bigr)\|_{L^{1+}((0,T)
\times\Omega)}
+\|r_0{{ u}}\|_{L^2((0,T)\times \Omega)}\leq C,
\end{split}$$ and $$\|\nabla\phi_m(\rho)\|_{L^4((0,T)\times \Omega)}+\|\partial_t\phi_m(\rho)\|_{L^2((0,T)\times\Omega)}\leq C_m.$$
By , we have $\rho \in L^{\infty}(0,T;L^{10}({\Omega}))$. Moreover, $\nabla\sqrt{\rho}\in L^{\infty}(0,T;L^2({\Omega}))$ because $\mu'(s) \ge \varepsilon_1$ and $\mu'(\rho) \nabla \rho /\sqrt \rho \in L^\infty((0,T);L^2({\Omega}))$. Note that $$\rho{{ u}}=\rho^{\frac{2}{3}}\rho^{\frac{1}{3}}{{ u}};$$ since $\rho^{\frac{2}{3}}\in L^{\infty}(0,T;L^{15}({\Omega}))$ and $\rho^{\frac{1}{3}}{{ u}}\in L^{3}(0,T;L^3({\Omega}))$, $\rho{{ u}}$ is bounded in $L^{3}(0,T;L^{\frac{5}{2}}({\Omega}))$.
By , we have $(\frac{\rho}{\mu'(\rho)})^{1/2}|{{ u}}|^2\in L^2((0,T)\times {\Omega})$. Note that $$\rho|{{ u}}|^2= (\rho\mu'(\rho))^{1/2} (\frac{\rho}{\mu'(\rho)})^{1/2}|{{ u}}|^2,$$ so it is bounded in $L^{2}(0,T;L^{\frac{10}{7}}({\Omega}))$, where we used the facts that $\mu(\rho) \in L^\infty(0,T;L^{5/2}(\Omega))$ (recalling that for $\rho \ge 1$ we have $\mu(\rho)\le c\rho^4$ and $\rho \in L^\infty(0,T;L^{10}(\Omega))$) and $\mu'(\rho) \rho \le \alpha_2 \mu(\rho)$.
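For the reader's convenience, the exponents in the last two bounds come from Hölder's inequality: since $\rho\in L^{\infty}(0,T;L^{10}({\Omega}))$ implies $\rho^{2/3}\in L^{\infty}(0,T;L^{15}({\Omega}))$ and $\mu(\rho)^{1/2}\in L^{\infty}(0,T;L^{5}({\Omega}))$, we compute $$\frac{1}{15}+\frac{1}{3}=\frac{2}{5}
\quad\text{ and }\quad
\frac{1}{5}+\frac{1}{2}=\frac{7}{10},$$ which give respectively $\rho{{ u}}\in L^{3}(0,T;L^{5/2}({\Omega}))$ and $\rho|{{ u}}|^2\in L^{2}(0,T;L^{10/7}({\Omega}))$.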
Similarly, we get $\sqrt{\mu}(|{{\mathbb S}_\mu}|+r|{{\mathbb S}_r}|) \in L^2(0,T;L^{10/7}(\Omega))$ by . The $L^\infty((0,T)\times {\Omega})$ bound for $\lambda(\rho)/\mu(\rho)$ may be obtained easily due to and .
Concerning the estimates related to the pressures, we just have to look at the proof in Lemma \[compactuniforme\]. Note that $$\begin{split}
&\nabla\phi_m(\rho)=\phi_m'(\rho) \nabla\rho= \phi_m'(\rho)\frac{\rho^{1/2} \mu(\rho)^{1/4}}{\mu'(\rho)} [\frac{\mu'(\rho)}{\rho^{1/2}\mu(\rho)^{1/4}}\nabla\rho]
\end{split}$$ by , we conclude that $\nabla\phi_m(\rho)$ is bounded in $L^4((0,T)\times{\Omega})$. It suffices to recall that thanks to the cut-off function $\phi_m$, we have $\phi_m'(\rho) \rho^{1/2} \mu(\rho)^{1/4}/\mu'(\rho)$ bounded in $L^{\infty}((0,T)\times {\Omega})$. Similarly, we write $$\begin{split}
\partial _t\phi_m(\rho)&=\phi_m'(\rho)\partial_t \rho=-\phi'_m(\rho){{\rm div}}(\rho{{ u}})
\\&=-\phi_m'(\rho)\frac{\rho}{\sqrt{\mu}}\mathrm{Tr} ({{\mathbb T}_\mu})
- \big(\phi_m'(\rho)\frac{(\mu(\rho)\rho)^{\frac{1}{4}}}{(\mu'(\rho))^{\frac{3}{4}}}\big)
\big(\frac{\rho^{\frac{1}{4}}}{(\mu'(\rho))^{\frac{1}{4}}}{{ u}}\big)\cdot
\big(\frac{\mu'(\rho)}{\rho^{1/2} \mu(\rho)^{1/4}} \nabla \rho \big)
,\end{split}$$ which provides $\partial_t\phi_m(\rho)$ bounded in $L^2(0,T;L^2({\Omega}))$ thanks to , and , using the cut-off function property to bound the extra quantities in $L^\infty((0,T)\times{\Omega})$ as previously.
\[Lemma of renormalized approxiamtion\] The $\kappa$-entropic weak solution constructed in Theorem \[main result 1\] is a renormalized solution, in particular, we have $$\label{limit for m large}
\begin{split}
&
\int_0^T\int_{{\Omega}}\big(\rho\varphi({{ u}})\psi_t+ (\rho \varphi({{ u}})\otimes {{ u}}) \nabla\psi\big)\\
& - \int_0^T\int_{\Omega}\nabla\psi \varphi'({{ u}})\big[2\sqrt{\mu(\rho)}({{\mathbb S}_\mu}+ r \, {{\mathbb S}_r})
+ \frac{\lambda(\rho)}{2\mu(\rho)}{\rm Tr}(\sqrt{\mu(\rho)} {{\mathbb S}_\mu}+ r \sqrt{\mu(\rho)} {{\mathbb S}_r}) {\rm Id} \big]\\
& -\int_0^T \int_{\Omega}\psi\varphi''({{ u}}){{\mathbb T}_\mu}\big[2({{\mathbb S}_\mu}+ r \, {{\mathbb S}_r})
+ \frac{\lambda(\rho)}{2\mu(\rho)}{\rm Tr}({{\mathbb S}_\mu}+ r {{\mathbb S}_r}) {\rm Id} \big] \\
& + \int_0^T \int_{\Omega}\psi\varphi'({{ u}})F(\rho,{{ u}})\big)\,dx\,dt=0,
\end{split}$$ where $$\label{eq_viscous_renormalise}
\begin{split}
& \sqrt{\mu(\rho)}\varphi_i'({{ u}})[{{\mathbb T}_\mu}]_{jk}= \partial_j(\mu\varphi'_i({{ u}}){{ u}}_k)-{{\sqrt{\rho}}}{{ u}}_k\varphi'_i({{ u}})\frac{\nabla\mu}{\sqrt{\rho}}+ \bar{R}^1_\varphi, \\
&\sqrt{\mu(\rho)} \varphi_i'({{ u}}) [\mathbb S_r]_{jk}
= 2 \sqrt{\mu(\rho)} \varphi_i'({{ u}}) \partial_j \partial_k Z(\rho)
- 2 \partial_j (\sqrt{\mu(\rho)} \partial_k Z(\rho) \varphi_i'({{ u}}))
+ \bar{R}^2_\varphi \\
&\frac{\lambda(\rho)}{2\mu(\rho)} \varphi_i'({{ u}}) {\rm Tr} (\sqrt{\mu(\rho)} \mathbb T_\mu)
= {\rm div} \bigl(\frac{\lambda(\rho)}{\mu(\rho)} \sqrt\rho {{ u}}\frac{\mu(\rho)}{\sqrt\rho} \varphi'({{ u}}) \bigr) \\
& \hskip4.4cm - \sqrt \rho u \cdot \sqrt\rho \nabla s(\rho)\frac{\rho \mu''(\rho)}{\mu(\rho)}
\varphi'({{ u}})+ \bar{R}^3_\varphi \\
& \frac{\lambda(\rho)}{\mu(\rho)} \varphi'({{ u}}) {\rm Tr} (\sqrt{\mu(\rho)} \mathbb S_r)
= \varphi_i'({{ u}}) \bigl(\frac{\lambda(\rho)}{\sqrt{\mu(\rho)}} + \frac{1}{2} k(\rho) \bigr) \Delta Z(\rho) \\
& \hskip4.4cm - \frac{1}{2} {\rm div}(k(\rho) \varphi'_i({{ u}}) \nabla Z(\rho))
+ \bar{R}^4_\varphi
\end{split}$$ where $$\begin{split}
&\bar{R}^1_{\varphi}=\varphi''_i({{ u}}){{\mathbb T}_\mu}\sqrt{\mu(\rho)}{{ u}}\\
&\bar{R}^2_\varphi
= 2 \varphi_i''(u) \mathbb T_\mu \nabla Z(\rho)\\
&\bar{R}^3_\varphi =
- \varphi_i''(u) \mathbb T_\mu\cdot \sqrt{\mu(\rho)} {{ u}}\frac{\lambda(\rho)}{\mu(\rho)} \\
& \bar{R}^4_\varphi =
\frac{k(\rho)}{2 \sqrt{\mu(\rho)}} \varphi_i''({{ u}}) \mathbb T_\mu \cdot \nabla Z(\rho)
\end{split}$$
We choose a function $\Bigl[\phi_m'([\rho]_\varepsilon)\psi\Bigr]_\varepsilon$ as a test function for the continuity equation with $\psi\in C_c^{\infty}((0,T)\times{\Omega})$. Using Lemma \[standard lemma\], we have $$\begin{split}
\label{weak formulation for mass with varepsilon}
0&=\int_0^T\int_{{\Omega}}\big(\partial_t\Bigl[\phi_m'([\rho]_\varepsilon)\psi\Bigr]_\varepsilon \rho
+\rho{{ u}}\cdot\nabla\Bigl[\phi_m'([\rho]_\varepsilon)\psi\Bigr]_\varepsilon\big)\,dx\,dt
\\&=-\int_0^T\int_{{\Omega}}\big(\phi_m'([\rho]_\varepsilon)\psi \, \partial_t [\rho]_\varepsilon
+{{\rm div}}([\rho{{ u}}]_{\varepsilon}) \phi_m'([\rho]_\varepsilon)\psi\big)\,dx\,dt
\\&=\int_0^T\int_{{\Omega}}\left(\psi_t\phi_m([\rho]_\varepsilon)
-\psi\phi'_m([\rho]_\varepsilon)
\bigl[\frac{\rho}{\sqrt{\mu(\rho)}}\mathrm{Tr} ({{\mathbb T}_\mu})+2 \sqrt{\rho}{{ u}}\cdot\nabla\sqrt{\rho}\bigr]_\varepsilon\right)\,dx\,dt.
\end{split}$$ Using Lemma \[estimate of approximation\] and Lemma \[standard lemma\], and passing to the limit as $\varepsilon$ goes to zero, from , we get: $$\begin{split}
\label{modified continuity equation}
0&=\int_0^T\int_{{\Omega}}\big(\psi_t\phi_m(\rho)-\psi\phi'_m(\rho)[\frac{\rho}{\sqrt{\mu}}\mathrm{Tr} ({{\mathbb T}_\mu})+2\sqrt{\rho}{{ u}}\cdot\nabla\sqrt{\rho}]\big)\,dx\,dt
\\&=\int_0^T\int_{{\Omega}}\big(\psi_t\phi_m(\rho)
-\psi \bigl[\phi'_m(\rho)\frac{\rho}{\sqrt{\mu}}\mathrm{Tr} ({{\mathbb T}_\mu})+{{ u}}\cdot\nabla\phi_m(\rho)\bigr]\big)\,dx\,dt,
\end{split}$$ thanks to $\psi\nabla\phi_m(\rho)\in L^4((0,T)\times {\Omega})$, ${{ u}}\in L^2((0,T)\times {\Omega})$, and $\psi $ compactly supported.
Similarly, we can choose $[\psi\phi_m(\rho)]_\varepsilon$ as a test function for the momentum equation. In particular, we have the following lemma.
\[Lemma for limits-first two terms\] $$\int_0^T\int_{{\Omega}} [\psi\phi_m(\rho)]_\varepsilon \big(\partial_t (\rho {{ u}}) +{{\rm div}}(\rho{{ u}}\otimes{{ u}})\big)\,dx\,dt$$ tends to $$-\int_0^T\int_{{\Omega}}\big(\psi_t\rho{{ u}}_m+\nabla\psi\cdot(\rho{{ u}}\otimes{{ u}}_m)
+\psi(\partial_t\phi_m(\rho)+{{ u}}\cdot\nabla\phi_m(\rho))\rho{{ u}}\big)\,dx\,dt$$ as $\varepsilon\to 0.$
By Lemma \[Lions’s lemma\], we can show that $$\begin{split}
&
\int_0^T\int_{{\Omega}} [\psi\phi_m(\rho)]_\varepsilon \partial_t(\rho {{ u}})\,dx\,dt\to
-\int_0^T\int_{{\Omega}}\partial_t\psi \rho{{ u}}_m
+\psi\partial_t\phi_m(\rho) \rho {{ u}}\,dx\,dt.
\end{split}$$ For the second term, we have $$\begin{split}
&\int_0^T\int_{{\Omega}} \bigl[\psi\phi_m(\rho)\bigr]_\varepsilon {{\rm div}}(\rho{{ u}}\otimes{{ u}})\,dx\,dt
=\int_0^T\int_{{\Omega}}\psi\phi_m(\rho)\bigl[ {{\rm div}}(\rho{{ u}}\otimes{{ u}})\bigr]_\varepsilon\,dx\,dt\\
&=\big(\int_0^T\int_{{\Omega}}\psi\phi_m(\rho) \bigl[ {{\rm div}}(\rho{{ u}}\otimes{{ u}})\bigr]_\varepsilon\,dx\,dt
-\int_0^T\int_{{\Omega}}\psi\phi_m(\rho)\bigl[{{\rm div}}(\rho{{ u}}\otimes{{ u}})\bigr]^x_\varepsilon\,dx\,dt\big)
\\&+\int_0^T\int_{{\Omega}}\psi\phi_m(\rho)\bigl[ {{\rm div}}(\rho{{ u}}\otimes{{ u}})\bigr]^x_\varepsilon\,dx\,dt
\\&=R_1+R_2,
\end{split}$$ where $[f (t,x)]_\varepsilon =f(t,x)*\eta_{\varepsilon}(t,x)$ and $[f(t,x)]_\varepsilon^x =f*\eta_{\varepsilon}(x)$ with $\varepsilon>0$ a small enough number. We write $R_1$ in the following way $$\begin{split}
R_1&=\int_0^T\int_{{\Omega}}\psi\phi_m(\rho)\bigl[{{\rm div}}(\rho{{ u}}\otimes{{ u}})\bigr]_\varepsilon\,dx\,dt
-\int_0^T\int_{{\Omega}}\psi\phi_m(\rho)\bigl[{{\rm div}}(\rho{{ u}}\otimes{{ u}})\bigr]_\varepsilon^x\,dx\,dt
\\&=\int_0^T\int_{{\Omega}}\psi\nabla\phi_m(\rho):\bigl[\rho{{ u}}\otimes{{ u}}\bigr]_\varepsilon\,dx\,dt
-\int_0^T\int_{{\Omega}}\psi\nabla\phi_m(\rho):\bigl[\rho{{ u}}\otimes{{ u}}\big]^x_\varepsilon\,dx\,dt.
\end{split}$$ Thanks to Lemma \[estimate of approximation\], $\rho|{{ u}}|^2 \in L^{2}(0,T; L^{10/7}(\Omega))$ and $\psi\nabla\phi_m(\rho)\in L^4((0,T)\times \Omega)$, we conclude that $R_1\to 0$ as $\varepsilon\to0.$ Meanwhile, we can apply Lemma \[Lions’s lemma\] to $R_2$ directly, thus $$\begin{split}&
\int_0^T\int_{{\Omega}}\psi\phi_m(\rho)\bigl[ {{\rm div}}(\rho{{ u}}\otimes{{ u}})\big]^x_\varepsilon \,dx\,dt
\\&=\big(\int_0^T\int_{{\Omega}}\psi\phi_m(\rho)\bigl[{{\rm div}}(\rho{{ u}}\otimes{{ u}})\bigr]^x_\varepsilon\,dx\,dt
-\int_0^T\int_{{\Omega}}\psi\phi_m(\rho) {{\rm div}}(\rho{{ u}}\otimes [{{ u}}]^x_\varepsilon)\,dx\,dt\big)
\\&+\int_0^T\int_{{\Omega}}\psi\phi_m(\rho) {{\rm div}}(\rho{{ u}}\otimes [{{ u}}]^x_\varepsilon)\,dx\,dt
\\&=R_{21}+R_{22}.
\end{split}$$ By Lemma \[Lions’s lemma\], we have $R_{21}\to 0$ as $\varepsilon\to 0$. The term $R_{22}$ will be calculated in the following way, $$\begin{split}
&\int_0^T\int_{{\Omega}}\psi\phi_m(\rho) {{\rm div}}(\rho{{ u}}\otimes [{{ u}}]^x_\varepsilon)\,dx\,dt
\\&=\int_0^T\int_{{\Omega}}\psi\phi_m(\rho) {{\rm div}}(\rho{{ u}}) [{{ u}}]^x_\varepsilon\,dx\,dt
+\int_0^T\int_{{\Omega}}\psi\phi_m(\rho) \rho{{ u}}\cdot \nabla [{{ u}}]^x_\varepsilon\,dx\,dt
\\&=\int_0^T\int_{{\Omega}}\psi {{\rm div}}(\rho{{ u}})[{{ u}}_m]^x_\varepsilon\,dx\,dt+\int_0^T\int_{{\Omega}}\psi\rho{{ u}}\nabla(\phi_m(\rho) [{{ u}}]^x_\varepsilon)\,dx\,dt-
\\&\int_0^T\int_{{\Omega}}\psi [{{ u}}]_\varepsilon^x \cdot\nabla\phi_m(\rho)\rho{{ u}}\,dx\,dt
\\&=-\int_0^T\int_{{\Omega}}\nabla\psi\rho{{ u}}\otimes [{{ u}}_m]_\varepsilon^x\,dx\,dt
-\int_0^T\int_{{\Omega}}\psi\cdot [{{ u}}]_\varepsilon^x \nabla\phi_m(\rho)\rho{{ u}}\,dx\,dt,
\end{split}$$ which tends to $$-\int_0^T\int_{{\Omega}}\nabla\psi\rho{{ u}}\otimes {{ u}}_m\,dx\,dt -\int_0^T\int_{{\Omega}}\psi\cdot {{ u}}\nabla\phi_m(\rho)\rho{{ u}}\,dx\,dt,$$ as $\varepsilon \to 0$.
For the other terms in the momentum equation, we can follow the same method as above to obtain $$\label{modified momentum equation}
\begin{split}&\int_0^T\int_{{\Omega}}\big(\psi_t\rho{{ u}}_m+\nabla\psi\cdot(\rho{{ u}}\otimes{{ u}}_m
- 2\phi_m(\rho) (\sqrt{\mu(\rho)}({{\mathbb S}_\mu}+ r{{\mathbb S}_r})
+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)} (\mathbb S_\mu+ r \mathbb S_r)) {\rm Id} ))
\\
& + \int_0^T\int_{\Omega}\psi(\partial_t\phi_m(\rho)+{{ u}}\cdot\nabla\phi_m(\rho))\rho{{ u}}\\&- \int_0^T\int_{\Omega}2 \psi( \sqrt{\mu(\rho)}({{\mathbb S}_\mu}+ r{{\mathbb S}_r}) + \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)} \mathbb S_\mu
+ r \mathbb S_r) {\rm Id} )\nabla\phi_m(\rho)+\psi\phi_m(\rho) F(\rho,{{ u}})\big)\,dx\,dt
\\&=0.
\end{split}$$ Thanks to , we have $$\label{modified momentum equation}
\begin{split}&\int_0^T\int_{{\Omega}}\big(\psi_t\rho{{ u}}_m+\nabla\psi\cdot(\rho{{ u}}\otimes{{ u}}_m
- 2\phi_m(\rho)(\sqrt{\mu(\rho)}({{\mathbb S}_\mu}+ r{{\mathbb S}_r})
+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)} (\mathbb S_\mu+ r \mathbb S_r) ) {\rm Id} )\\
& - \int_0^T\int_{\Omega}\psi \phi'_m(\rho)\frac{\rho}{\sqrt{\mu(\rho)}}\mathrm{Tr} ({{\mathbb T}_\mu})\rho{{ u}}-\psi\phi_m(\rho) F(\rho,{{ u}})
\\&-\int_0^T\int_{\Omega}2 \psi(\sqrt{\mu(\rho)}({{\mathbb S}_\mu}+ r{{\mathbb S}_r})
+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)} (\mathbb S_\mu+ r \mathbb S_r) )
{\rm Id}) \nabla\phi_m(\rho)\big)\,dx\,dt=0.
\end{split}$$
The goal of this subsection is to derive the formulation of renormalized solutions following the idea in [@LaVa]. We choose the function $\bigl[\psi\varphi'([{{ u}}_m]_\varepsilon)\bigr]_\varepsilon$ as a test function in . By the same argument as in Lemma \[Lemma for limits-first two terms\], we can show that $$\begin{split}&
\int_0^T\int_{{\Omega}}\big(\partial_t\bigl[\psi\varphi'([{{ u}}_m]_\varepsilon)\bigr]_\varepsilon\, \rho{{ u}}_m
+\nabla\bigl[\psi\varphi'([{{ u}}_m]_\varepsilon)\bigr]_\varepsilon:(\rho{{ u}}\otimes{{ u}}_m)\big)\,dx\,dt
\\&\to
\int_0^T\int_{{\Omega}}\big(\rho\varphi({{ u}}_m)\psi_t+\rho{{ u}}\otimes\varphi({{ u}}_m)\nabla\psi\big)\,dx\,dt,
\end{split}$$ and $$\begin{split}&\int_0^T\int_{{\Omega}}\nabla\bigl[\psi\varphi'([{{ u}}_m]_\varepsilon)\bigr]_\varepsilon
\big(-2 \phi_m(\rho)(\sqrt{\mu(\rho)}({{\mathbb S}_\mu}+ r{{\mathbb S}_r})
+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)} \mathbb S_\mu+ r \mathbb S_r) ) {\rm Id} \big) \\
& +\bigl[\psi\varphi'([{{ u}}_m]_\varepsilon)\bigr]_\varepsilon \big(-\phi'_m(\rho)\frac{\rho}{\sqrt{\mu(\rho)}}\mathrm{Tr} ({{\mathbb T}_\mu})\rho{{ u}}\\&-2(\sqrt{\mu(\rho)}({{\mathbb S}_\mu}+ r{{\mathbb S}_r})
+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)} (\mathbb S_\mu+ r \mathbb S_r) {\rm Id}) )\nabla\phi_m(\rho)+\phi_m(\rho) F(\rho,{{ u}})\big)\,dx\,dt
\\&\to \int_0^T\int_{{\Omega}}\nabla(\psi\varphi'({{ u}}_m))\
\big(-2\phi_m(\rho)(\sqrt{\mu(\rho)}({{\mathbb S}_\mu}+ r{{\mathbb S}_r})
+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)} (\mathbb S_\mu+ r \mathbb S_r) ) {\rm Id} )\big)\\
& +\psi\varphi'({{ u}}_m)
\big(-\phi'_m(\rho)\frac{\rho}{\sqrt{\mu(\rho)}}\mathrm{Tr} ({{\mathbb T}_\mu})\rho{{ u}}\\&-2(\sqrt{\mu(\rho)}({{\mathbb S}_\mu}+ r{{\mathbb S}_r})
+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)} \mathbb S_\mu+ r \mathbb S_r) )\nabla\phi_m(\rho)+\phi_m(\rho) F(\rho,{{ u}})\big)\,dx\,dt
\end{split}$$ as $\varepsilon$ goes to zero. Putting these two limits together, we have $$\label{weak formulation with m}
\begin{split}&
\int_0^T\int_{{\Omega}}\big(\rho\varphi({{ u}}_m)\psi_t+\rho{{ u}}\otimes\varphi({{ u}}_m)\nabla\psi\big)
\\&+\nabla\psi\varphi'({{ u}}_m)
\big(-2 \phi_m(\rho)(\sqrt{\mu(\rho)}({{\mathbb S}_\mu}+ r{{\mathbb S}_r})
+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)} \mathbb S_\mu+ r \mathbb S_r) )\big)\\
& +\psi\varphi''({{ u}}_m)\nabla{{ u}}_m\big(-\phi_m(\rho)2(\sqrt{\mu(\rho)}({{\mathbb S}_\mu}+ r{{\mathbb S}_r})
+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)} \mathbb S_\mu+ r \mathbb S_r) )\big)
\\&+\psi\varphi'({{ u}}_m)
\big(-\phi'_m(\rho)\frac{\rho}{\sqrt{\mu(\rho)}}\mathrm{Tr} ({{\mathbb T}_\mu})\rho{{ u}}-2(\sqrt{\mu(\rho)}({{\mathbb S}_\mu}+ r{{\mathbb S}_r})
\\&+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)} \mathbb S_\mu+ r \mathbb S_r) )\nabla\phi_m(\rho)+\phi_m(\rho) F(\rho,{{ u}})\big)\,dx\,dt=0.
\end{split}$$ Now we should pass to the limit in as $m$ goes to infinity. To this end, we should keep the following convergences in mind: $$\begin{split}
\label{basic convergence for m}
&\phi_m(\rho)\;\text{ converges to }1, \quad \text{ for almost every }(t,x)\in {{\mathbb R}}^+\times{\Omega},\\
&{{ u}}_m\text{ converges to } {{ u}}, \quad\text{ for almost every }(t,x)\in {{\mathbb R}}^+\times{\Omega},\\
&|\rho\phi'_m(\rho)|\leq 2, \quad\text{ and converges to } 0 \text{ for almost every }(t,x)\in {{\mathbb R}}^+\times{\Omega}.
\end{split}$$ We can find that $$\begin{split}
&\sqrt{\mu(\rho)}\nabla{{ u}}_m=\sqrt{\mu(\rho)}\nabla(\phi_m(\rho){{ u}})
=\phi_m(\rho)\sqrt{\mu(\rho)}\nabla{{ u}}+\phi'_m(\rho)\sqrt{\mu(\rho)}{{ u}}\cdot\nabla\rho
\\&=\frac{\phi_m(\rho)}{\sqrt{\mu(\rho)}}\big(\nabla(\mu(\rho){{ u}})-\sqrt{\rho}{{ u}}\cdot\frac{\nabla\mu(\rho)}{\sqrt{\rho}}\big)+
\frac{\sqrt{\rho}}{\mu(\rho)^{\frac{3}{4}}}\big(\frac{\sqrt{\mu(\rho)}}{\rho}\mu'(\rho)\nabla\rho\big)
\big(\frac{\rho^{\frac{1}{4}}}{(\mu'(\rho))^{\frac{1}{4}}}{{ u}}\big)\big(\phi_m'(\rho)\frac{\mu(\rho)^{\frac{3}{4}}\rho^{\frac{1}{4}}}{(\mu'(\rho))^{\frac{3}{4}}}\big)
\\&=\phi_m(\rho){{\mathbb T}_\mu}+\frac{\sqrt{\rho}}{\mu(\rho)^{\frac{3}{4}}}\big(\frac{\sqrt{\mu(\rho)}}{\rho}\mu'(\rho)\nabla\rho\big)
\big(\frac{\rho^{\frac{1}{4}}}{(\mu'(\rho))^{\frac{1}{4}}}{{ u}}\big)\big(\phi_m'(\rho)\frac{\mu(\rho)^{\frac{3}{4}}\rho^{\frac{1}{4}}}{(\mu'(\rho))^{\frac{3}{4}}}\big)
\\&=A_{1m}+A_{2m}.
\end{split}$$ Note that $$|\phi_m'(\rho)\frac{\mu(\rho)^{\frac{3}{4}}\rho^{\frac{1}{4}}}{(\mu'(\rho))^{\frac{3}{4}}}|\leq C|\phi'_m(\rho)\rho|,$$ thus $\phi_m'(\rho){\mu(\rho)^{\frac{3}{4}}\rho^{\frac{1}{4}}}/{(\mu'(\rho))^{\frac{3}{4}}}$ converges to zero for almost every $(t,x).$ Thus, the dominated convergence theorem yields that $A_{2m}$ converges to zero as $m\to\infty.$ Meanwhile, the dominated convergence theorem also gives that $A_{1m}$ converges to ${{\mathbb T}_\mu}$ in $L^2_{t,x}$. Hence, with at hand, letting $m\to\infty$ in , one obtains that $$\label{limit for m large}
\begin{split}&
\int_0^T\int_{{\Omega}}\big(\rho\varphi({{ u}})\psi_t+\rho{{ u}}\otimes\varphi({{ u}})\nabla\psi\big)
- 2 \nabla\psi\varphi'({{ u}})\big((\sqrt{\mu(\rho)}({{\mathbb S}_\mu}+ r{{\mathbb S}_r})
\\&+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\sqrt{\mu(\rho)} (\mathbb S_\mu+ r \mathbb S_r) )
{\rm Id} \big)-2\psi\varphi''({{ u}}){{\mathbb T}_\mu}(({{\mathbb S}_\mu}+r {{\mathbb S}_r}) \\
&+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} ((\mathbb S_\mu+ r \mathbb S_r) {\rm Id})
+\psi\varphi'({{ u}})F(\rho,{{ u}})\big)\,dx\,dt=0.
\end{split}$$ From now on, we denote $R_{\varphi}=2\psi\varphi''({{ u}}){{\mathbb T}_\mu}\big(({{\mathbb S}_\mu}+ r{{\mathbb S}_r})+ \frac{\lambda(\rho)}{2\mu(\rho)} {\rm Tr} (\mathbb S_\mu+ r \mathbb S_r)\, {\rm Id}\big)$. This ends the proof of Theorem \[renorm\].
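Note that the constant in the pointwise bound $|\phi_m'(\rho)\mu(\rho)^{3/4}\rho^{1/4}/(\mu'(\rho))^{3/4}|\leq C|\phi_m'(\rho)\rho|$ used for $A_{2m}$ above can be made explicit: assuming $\alpha_1\mu(s)\leq s\mu'(s)$ as in the hypotheses on $\mu$, we have $\mu(\rho)^{3/4}\leq (\rho\mu'(\rho)/\alpha_1)^{3/4}$, hence $$\frac{\mu(\rho)^{3/4}\rho^{1/4}}{(\mu'(\rho))^{3/4}}\;\leq\;\alpha_1^{-3/4}\,\rho^{3/4}\rho^{1/4}\;=\;\alpha_1^{-3/4}\,\rho,$$ so one may take $C=\alpha_1^{-3/4}$.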
Renormalized solutions and weak solutions
==========================================
The main goal of this section is the proof of Theorem \[main result\], which establishes the existence of renormalized solutions of the Navier-Stokes equations without the additional terms, and thus the existence of weak solutions of the Navier-Stokes equations.
Renormalized solutions
----------------------
In this subsection, we will show the existence of renormalized solutions. To this end, we need the following lemma of stability.
\[Lemma of stability of renormalized solution\]
For any fixed $\alpha_1<\alpha_2$ as in , consider sequences $\delta_n$, $r_{0n}$, $r_{1n}$ and $r_{2n}$ such that $r_{i,n}\to r_{i}\geq 0$ for $i=0,1,2$, and $\delta_n\to \delta\geq 0$. Consider a family of functions $\mu_n:{{\mathbb R}}^{+}\to {{\mathbb R}}^{+}$ verifying and for the fixed $\alpha_1$ and $\alpha_2$, such that $$\mu_n\to \mu\quad\text{ in }C^0({{\mathbb R}}^{+}).$$ Then, if $(\rho_n,{{ u}}_n)$ verifies -, up to a subsequence, still denoted $n$, the following convergences hold.\
1. The sequence $\rho_n$ converges strongly to $\rho$ in $C^0(0,T;L^p({\Omega}))$ for any $1\leq p<\gamma.$\
2. The sequence $\mu_n(\rho_n)\, {{ u}}_n$ converges to $\mu(\rho){{ u}}$ in $L^{\infty}(0,T;L^p({\Omega}))$ for $p \in [1,3/2)$.\
3. The sequence $({{\mathbb T}_\mu})_n$ converges to ${{\mathbb T}_\mu}$ weakly in $L^2(0,T;L^2({\Omega}))$.\
4. For every function $H\in W^{2,\infty}(\overline{{{\mathbb R}}^d})$ and $0<\alpha<{2\gamma}/{(\gamma+1)}$, we have that $\rho_n^{\alpha} H({{ u}}_n)$ converges to $\rho^{\alpha}H({{ u}})$ strongly in $L^p((0,T)\times{\Omega})$ for $1\leq p<\frac{2\gamma}{(\gamma+1)\alpha}.$ In particular, $\sqrt{\mu(\rho_n)}H({{ u}}_n)$ converges to $\sqrt{\mu(\rho)}H({{ u}})$ strongly in $L^{\infty}(0,T;L^2({\Omega})).$
Using , the Aubin-Lions lemma gives us, up to a subsequence, $$\mu_n(\rho_n)\to \tilde{\mu}\quad\text{ in }\; C^0(0,T;L^q({\Omega}))$$ for any $q<\frac{3}{2}.$ But $$\sup|\mu_n-\mu|\to 0$$ as $n\to \infty.$ Thus, we have $$\label{almost anywhere for mu}
\mu_n(\rho_n)\to \tilde{\mu}(t,x)\quad\text{ in }\; C^0([0,T];L^q({\Omega})),$$ so up to a subsequence, $$\mu(\rho_n)\to \tilde{\mu}(t,x)\;\;\text{a.e.}$$ Note that $\mu$ is an increasing function, so it is invertible, and $\mu^{-1}$ is continuous. This implies that $\rho_n\to \rho$ a.e. with $\mu(\rho)=\tilde{\mu}(t,x).$ Together with and the uniform bound of $\rho_n$ in $L^{\infty}(0,T;L^{\gamma}({\Omega}))$, this gives part 1.
Note that $$\nabla\frac{\mu(\rho_n)}{\sqrt{\rho_n}}=\frac{\sqrt{\rho_n}\nabla \mu(\rho_n)}{\rho_n}-\frac{\mu(\rho_n)\nabla\rho_n}{2\rho_n\sqrt{\rho_n}},$$ thus $$\left|\nabla\frac{\mu(\rho_n)}{\sqrt{\rho_n}}\right|\leq C\left|\sqrt{\rho_n}\right|\left|\frac{\nabla\mu(\rho_n)}{\sqrt{\rho_n}}\right|,$$ so $\nabla\frac{\mu(\rho_n)}{\sqrt{\rho_n}}$ is bounded in $L^{\infty}(0,T;L^2({\Omega}))$, thanks to . Using , we have $\frac{\mu(\rho_n)}{\sqrt{\rho_n}}$ is bounded in $L^{\infty}(0,T;W^{1,2}({\Omega}))$, thus it is uniformly bounded in $L^{\infty}(0,T;L^6({\Omega}))$.
On the other hand, $\sqrt{\rho_n}{{ u}}_n$ is uniformly bounded in $L^{\infty}(0,T;L^2({\Omega}))$. From Lemma \[Compactnesstool1\], we have $$\mu(\rho_n){{ u}}_n=\frac{\mu(\rho_n)}{\sqrt{\rho_n}} \sqrt{\rho_n}{{ u}}_n\to \mu(\rho){{ u}}\;\;\text{ in }\; L^{\infty}(0,T;L^q({\Omega}))$$ for any $1\leq q<\frac{3}{2}.$ Since $({{\mathbb T}_\mu})_n$ is bounded in $L^2(0,T;L^2({\Omega}))$, up to a subsequence it converges weakly in $L^2(0,T;L^2({\Omega}))$ to a function ${{\mathbb T}_\mu}$. Using Lemma \[Compactnesstool1\], this gives part 4.
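The exponent in part 2 can be checked directly: since $\frac{\mu(\rho_n)}{\sqrt{\rho_n}}$ is uniformly bounded in $L^{\infty}(0,T;L^{6}({\Omega}))$ and $\sqrt{\rho_n}{{ u}}_n$ in $L^{\infty}(0,T;L^{2}({\Omega}))$, Hölder's inequality gives $$\frac{1}{6}+\frac{1}{2}=\frac{2}{3}=\frac{1}{3/2},$$ so the product $\mu(\rho_n){{ u}}_n$ is uniformly bounded in $L^{\infty}(0,T;L^{3/2}({\Omega}))$; the restriction $q<3/2$ in the stated convergence is the slight loss needed to turn this uniform bound into strong convergence via Lemma \[Compactnesstool1\].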
With Lemma \[Lemma of stability of renormalized solution\], we are able to recover the renormalized solutions of the Navier-Stokes equations without any additional terms by letting $n\to\infty$ in . We state this result in the following lemma, in which we fix $\mu$ such that $\varepsilon_1>0$.
\[Lemma of existence for ren\] For any fixed $\varepsilon_1>0$, there exists a renormalized solution $(\sqrt{\rho},\sqrt{\rho}{{ u}})$ to the initial value problem -.
We can use Lemma \[Lemma of stability of renormalized solution\] to pass to the limit in the extra terms. We have to do so in the following order: first let $r_2$ go to zero, then $r_1$, and after that let $r_0$, $\delta$, and $r$ go to zero together.
– If $r_2= r_2(n) \to 0$, we just write $$r_2\frac{\rho_n}{\mu'(\rho_n)}|{{ u}}_n|^2{{ u}}_n=r_2^{\frac{1}{4}}\big(\frac{\rho_n}{\mu'(\rho_n)}\big)^{\frac{1}{4}}\,r_2^{\frac{3}{4}}\big(\frac{\rho_n}{\mu'(\rho_n)}\big)^{\frac{3}{4}}|{{ u}}_n|^2{{ u}}_n,$$ and $\mu'(\rho_n)\geq \varepsilon_1 >0,$ so $\big(\frac{\rho_n}{\mu'(\rho_n)}\big)^{\frac{1}{4}}\leq C|\rho_n|^{\frac{1}{4}}$, thus, $$r_2\frac{\rho_n}{\mu'(\rho_n)}|{{ u}}_n|^2{{ u}}_n\to 0 \hbox{ in } L^{\frac{4}{3}}(0,T;L^{\frac{6}{5}}({\Omega})).$$
– For $r_1=r_1(n)\to 0$, $$r_1\rho_n\big||{{ u}}_n|{{ u}}_n\big|= r_1^{\frac{1}{3}}\rho_n^{\frac{1}{3}}\,r_1^{\frac{2}{3}}\rho_n^{\frac{2}{3}}|{{ u}}_n|^2,$$ which converges to zero in $L^{\frac{3}{2}}(0,T;L^{\frac{9}{7}}({\Omega}))$ using the drag term control in the energy and the information on the pressure law $P(\rho) = a \rho^\gamma$.
– For $r_0 = r_0(n) \to 0$, it is easy to conclude that $$r_0 {{ u}}_{n} \to 0 \hbox{ in } L^2((0,T)\times \Omega).$$
– We now consider the limit $r\to 0$ of the term $$r\rho_n\nabla\left(\sqrt{K(\rho_n)}\D(\int_0^{\rho_n}\sqrt{K(s)}\,ds)\right).$$ Note the following identity $$\label{BCNV relation}
\rho_n\nabla\left(\sqrt{K(\rho_n)}\D(\int_0^{\rho_n}\sqrt{K(s)}\,ds)\right)=
2 {{\rm div}}\Bigl(\mu(\rho_n)\nabla^2\bigl(2 {s}(\rho_n)\bigr)\Bigr)
+\nabla\Bigl(\lambda(\rho_n)\D\bigl(2{s}(\rho_n)\bigr)\Bigr),$$ we only need to focus on ${{\rm div}}\Bigl(\mu(\rho_n)\nabla^2\bigl(2 {s}(\rho_n)\bigr)\Bigr)$ since the same argument holds for the other term. Since $$\begin{split}
r\int_{{\Omega}}{{\rm div}}\Bigl(\mu(\rho_n)&\nabla^2\bigl(2 {s}(\rho_n)\bigr)\Bigr)\psi\,dx
\\&=r\int_{{\Omega}}\frac{\rho_n}{\mu_n}\nabla Z(\rho_n) \otimes \nabla Z(\rho_n)\nabla\psi\,dx
+r\int_{{\Omega}}\mu_n\nabla{s}(\rho_n)\Delta\psi\,dx\\&=
r\int_{{\Omega}}\frac{\rho_n}{\mu_n}\nabla Z(\rho_n) \otimes \nabla Z(\rho_n)\nabla\psi\,dx+r\int_{{\Omega}}\sqrt{\mu_n}\nabla Z(\rho_n)\Delta\psi\,dx,
\end{split}$$ the second term can be controlled as $$\begin{split}
&\big|r\int_{{\Omega}}\sqrt{\mu_n}\nabla Z(\rho_n)\Delta\psi\,dx\big|\leq Cr^{\frac{1}{2}}\|\sqrt{\mu(\rho_n)}\|_{L^2(0,T;L^2({\Omega}))}\|\sqrt{r}\nabla Z(\rho_n)\|_{L^2(0,T;L^2({\Omega}))}\to 0,
\end{split}$$ thanks to and ; and the first term as $$\begin{split}
&\big|r\int_{{\Omega}}\frac{\rho_n}{\mu_n}\nabla Z(\rho_n)\otimes \nabla Z(\rho_n)\nabla\psi\,dx\big|\leq \sqrt{r}\sqrt{r}\int_{{\Omega}}\sqrt{\mu(\rho_n)}\frac{\rho_n}{\mu(\rho_n)^{\frac{3}{2}}}|\nabla Z(\rho_n)|^2|\nabla\psi|\,dx
\\&\leq C\|\sqrt{r}\frac{\rho_n}{\mu(\rho_n)^{\frac{3}{2}}}|\nabla Z(\rho_n)|^2\|_{L^2(0,T;L^2({\Omega}))}\|\sqrt{\mu(\rho_n)}\|_{L^2(0,T;L^2({\Omega}))}r^{\frac{1}{2}}\to 0.
\end{split}$$
– Concerning the quantity $\delta \rho^{10}$, thanks to $\mu'_{\varepsilon_1}(\rho)\geq \varepsilon_1>0,$ $\sqrt{\delta}|\nabla\rho^{5}|$ is uniformly bounded in $L^2(0,T;L^2({\Omega}))$. This gives us that $\delta^{\frac{1}{30}}\rho$ is uniformly bounded in $L^{10}(0,T;L^{30}({\Omega})).$ Thus, we have $$\left|
\int_0^T\int_{{\Omega}}\delta\rho^{10}\nabla\psi\,dx\,dt\right| \leq C(\psi) \delta^{\frac{2}{3}}\|\delta^{\frac{1}{3}}\rho^{10}\|_{L^1(0,T;L^3({\Omega}))}\to 0$$ as $\delta\to0.$
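The final bound is pure Hölder bookkeeping; for the reader's convenience we spell it out (a routine verification added here, not part of the original argument), using the identity $\delta\rho^{10}=\delta^{2/3}\bigl(\delta^{1/30}\rho\bigr)^{10}$:

```latex
\left|\int_0^T\!\!\int_{\Omega}\delta\rho^{10}\,\nabla\psi\,dx\,dt\right|
  =\delta^{\frac{2}{3}}\left|\int_0^T\!\!\int_{\Omega}\bigl(\delta^{\frac{1}{30}}\rho\bigr)^{10}\nabla\psi\,dx\,dt\right|
  \leq \delta^{\frac{2}{3}}\,\bigl\|\nabla\psi\bigr\|_{L^{\infty}(0,T;L^{\frac{3}{2}}(\Omega))}\,
       \bigl\|(\delta^{\frac{1}{30}}\rho)^{10}\bigr\|_{L^{1}(0,T;L^{3}(\Omega))},
```

and $\bigl\|(\delta^{1/30}\rho)^{10}\bigr\|_{L^1(0,T;L^3(\Omega))}=\bigl\|\delta^{1/30}\rho\bigr\|_{L^{10}(0,T;L^{30}(\Omega))}^{10}$ is uniformly bounded, so the remaining factor $\delta^{2/3}$ sends the whole expression to zero.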
With Lemma \[Lemma of stability of renormalized solution\] at hand, we are ready to recover the renormalized solutions to -. By parts 1 and 2 of Lemma \[Lemma of stability of renormalized solution\], we are able to pass to the limit in the continuity equation. Thanks to part 4 of Lemma \[Lemma of stability of renormalized solution\], $$\sqrt{\mu(\rho_n)}\varphi'({{ u}}_n)\to \sqrt{\mu(\rho)}\varphi'({{ u}}) \quad\text{ in }\;\; L^{\infty}(0,T;L^2({\Omega})).$$ With the help of Lemma \[compactuniforme\], we can pass to the limit in the pressure term, and thus recover the renormalized solutions.
Recover weak solutions from renormalized solutions
--------------------------------------------------
In this part, we recover the weak solutions from the renormalized solutions constructed in Lemma \[Lemma of existence for ren\]. We now show that Lemma \[Lemma of existence for ren\] remains valid without the condition $\varepsilon_1>0$. For such a $\mu$, we construct a sequence $\mu_n$ converging to $\mu$ in $C^0({{\mathbb R}}^+)$ and such that $\varepsilon_{1n}=\inf \mu_n'>0$. Lemma \[Lemma of stability of renormalized solution\] shows that, up to a subsequence, $$\rho_n\to\rho\;\;\text{ in }\; C^0(0,T;L^p({\Omega}))$$ and $$\rho_n{{ u}}_n\to\rho{{ u}}\;\;\text{ in } L^{\infty}(0,T;L^{\frac{2p}{p+1}}({\Omega}))$$ for any $1\leq p<\gamma,$ where $(\rho,\sqrt{\rho}{{ u}})$ is a renormalized solution to .
Now, we want to show that this renormalized solution is also a weak solution in the sense of Definition 1.2. To this end, we introduce a non-negative smooth function $\Phi:{{\mathbb R}}\to{{\mathbb R}}$ with compact support such that $\Phi(s)=1$ for any $-1\leq s\leq1.$ Setting $\tilde{\Phi}(s)=\int_0^s\Phi(r)\,dr$, we define $$\varphi_n(y)=n\tilde{\Phi}\Big(\frac{y_1}{n}\Big)\Phi\Big(\frac{y_2}{n}\Big)\cdots\Phi\Big(\frac{y_N}{n}\Big)$$ for any $y=(y_1,y_2,\dots,y_N)\in {{\mathbb R}}^N$.
Note that $\varphi_n$ is bounded in $W^{2,\infty}({{\mathbb R}}^N)$ for any fixed $n>0$, that $\varphi_n(y)$ converges everywhere to $y_1$ as $n$ goes to infinity, that $\varphi_n'$ is uniformly bounded in $n$ and converges everywhere to the unit vector $(1,0,\dots,0)$, and that $$\|\varphi_n''\|_{L^{\infty}}\leq \frac{C}{n}\to 0$$ as $n$ goes to infinity. This allows us to control the measures in Definition \[def\_renormalise\_u\] as follows: $$\|R_{{{\varphi}}_n}\|_{ \mathcal{M}({{\mathbb R}}^+\times{\Omega})}+ \|\overline{R}^1_{{{\varphi}}_n}\|_{ \mathcal{M}({{\mathbb R}}^+\times{\Omega})}
+ \|\overline{R}^2_{{{\varphi}}_n}\|_{ \mathcal{M}({{\mathbb R}}^+\times{\Omega})} \leq C \|{{\varphi}}''_n\|_{L^\infty({{\mathbb R}})}\to 0$$ as $n$ goes to infinity. Using the function $\varphi_n$ in the equation of Definition \[def\_renormalise\_u\] and passing to the limit as $n$ goes to infinity, Lebesgue's dominated convergence theorem gives the equation on $\rho{{ u}}_1$ in Definition 1.2. In this way, we obtain the full vector equation on $\rho{{ u}}$ by permuting the directions. Applying Lebesgue's dominated convergence theorem again, one obtains by passing to the limit in with $i=1$ and the function $\varphi_n$. Thus, we have shown that the renormalized solution is also a weak solution.
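The stated decay $\|\varphi_n''\|_{L^\infty}\le C/n$ can be checked by differentiating explicitly (a routine verification added here for the reader; the constant $C$ depends only on bounds for $\Phi$, $\Phi'$, $\Phi''$ and $\tilde{\Phi}$, all finite since $\Phi$ is smooth with compact support):

```latex
\partial_1\varphi_n(y)=\Phi\Bigl(\frac{y_1}{n}\Bigr)\prod_{j\geq 2}\Phi\Bigl(\frac{y_j}{n}\Bigr),
\qquad
\partial_k\varphi_n(y)=\tilde{\Phi}\Bigl(\frac{y_1}{n}\Bigr)\,\Phi'\Bigl(\frac{y_k}{n}\Bigr)
\prod_{j\geq 2,\; j\neq k}\Phi\Bigl(\frac{y_j}{n}\Bigr)
\quad (k\geq 2),
```

so each first derivative is bounded uniformly in $n$, while differentiating once more produces exactly one extra factor $1/n$ in every entry of $\varphi_n''$, which gives the claimed bound.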
Acknowledgements
===============
Didier Bresch is supported by the SingFlows project, grant ANR-18-CE40-0027, and by the project Bords, grant ANR-16-CE40-0027, of the French National Research Agency (ANR). He wants to thank Mikael de la Salle (CNRS-UMPA Lyon) for his efficiency within the National Committee for Scientific Research - CNRS (section 41), which allowed him to visit the University of Texas at Austin in January 2019, during which valuable progress was made on this work. Alexis Vasseur is partially supported by the NSF grant DMS 1614918. Cheng Yu is partially supported by the start-up funding of the University of Florida.
[99]{}
P. Antonelli, S. Spirito. Global existence of weak solutions to the Navier-Stokes-Korteweg equations. ArXiv:1903.02441 (2019).
P. Antonelli, S. Spirito. On the compactness of weak solutions to the Navier-Stokes-Korteweg equations for capillary fluids. ArXiv:1808.03495 (2018).
P. Antonelli, S. Spirito. Global Existence of Finite Energy Weak Solutions of Quantum Navier-Stokes Equations. *Archive of Rational Mechanics and Analysis, 225 (2017), no. 3, 1161–1199.*
P. Antonelli, S. Spirito. On the compactness of finite energy weak solutions to the Quantum Navier-Stokes equations. *J. of Hyperbolic Differential Equations, 15 (2018), no. 1, 133–147.*
C. Bernardi, O. Pironneau. On the shallow water equations at low Reynolds number. *Comm. Partial Differential Equations 16 (1991), no. 1, 59–104.*
D. Bresch, B. Desjardins. Existence of global weak solutions for 2D viscous shallow water equations and convergence to the quasi-geostrophic model. *Comm. Math. Phys., 238 (2003), no.1-3, 211–223.*
D. Bresch, F. Couderc, P. Noble, J.-P. Vila. A generalization of the quantum Bohm identity: Hyperbolic CFL condition for Euler–Korteweg equations. *C.R. Acad. Sciences Paris Volume 354, Issue 1, 39–43, (2016).*
D. Bresch and B. Desjardins. On the construction of approximate solutions for the 2D viscous shallow water model and for compressible Navier-Stokes models. *J. Math. Pures Appl. (9) 86 (2006), no. 4, 362–368.*
D. Bresch, B. Desjardins. Quelques modèles diffusifs capillaires de type Korteweg. *C. R. Acad. Sci. Paris, section mécanique, [**332**]{}, no. 11, 881–886, (2004).*
D. Bresch, B. Desjardins. Weak solutions via the total energy formulation and their quantitative properties - density dependent viscosities. In: Y. Giga, A. Novotný (éds.) Handbook of Mathematical Analysis in Mechanics of Viscous Fluids. Springer, Berlin (2017).
D. Bresch, B. Desjardins, Chi-Kun Lin. On some compressible fluid models: Korteweg, lubrication, and shallow water systems. *Comm. Partial Differential Equations 28 (2003), no. 3-4, 843–868.*
D. Bresch, B. Desjardins, E. Zatorska. Two-velocity hydrodynamics in Fluid Mechanics, Part II. Existence of global $\kappa$-entropy solutions to compressible Navier-Stokes system with degenerate viscosities. *J. Math. Pures Appl. Volume 104, Issue 4, 801–836 (2015).*
D. Bresch, P.-E. Jabin. Global existence of weak solutions for compressible Navier-Stokes equations: thermodynamically unstable pressure and anisotropic viscous stress tensor. *Ann. of Math. (2) 188 (2018), no. 2, 577-684.*
D. Bresch, I. Lacroix-Violet, M. Gisclon. On Navier-Stokes-Korteweg and Euler-Korteweg systems: Application to quantum fluids models. To appear in *Arch. Rational Mech. Anal. (2019).*
D. Bresch, P. Mucha, E. Zatorska. Finite-energy solutions for compressible two-fluid Stokes system. *Arch. Rational Mech. Anal., 232, Issue 2, (2019), 987–1029.*
C. Burtea, B. Haspot. New effective pressure and existence of global strong solution for compressible Navier-Stokes equations with general viscosity coefficient in one dimension. arXiv:1902.02043 (2019).
R. Carles, K. Carrapatoso, M. Hillairet. Rigidity results in generalized isothermal fluids. *Annales Henri Lebesgue, 1, (2018), 47–85.*
P. Constantin, T. Drivas, H.Q. Nguyen, F. Pasqualotto. Compressible fluids and active potentials. ArXiv:1803.04492.
B. Ducomet, S. Necasova, A. Vasseur. On spherically symmetric motions of a viscous compressible barotropic and self-gravitating gas. *J. Math. Fluid Mech. 13 (2011), no. 2, 191–211.*
E. Feireisl, A. Novotný, H. Petzeltová. On the existence of globally defined weak solutions to the Navier-Stokes equations. *J. Math. Fluid Mech. **3** (2001), 358–392.*
E. Feireisl. Compressible Navier–Stokes Equations with a Non-Monotone Pressure Law. *J. Diff. Eqs 183, no 1, 97–108, (2002).*
Z. Guo, Q. Jiu, Z. Xin. Spherically symmetric isentropic compressible flows with density-dependent viscosity coefficients. *SIAM J. Math. Anal. 39 (2008), no. 5, 1402–1427.*
B. Haspot. Existence of global strong solution for the compressible Navier-Stokes equations with degenerate viscosity coefficients in 1D. *Mathematische Nachrichten, 291 (14-15), 2188–2203, (2018).*
D. Hoff. Global existence for 1D, compressible, isentropic Navier-Stokes equations with large initial data. *Trans. Amer. Math. Soc. 303 (1987), no. 1, 169–181.*
S. Jiang, Z. Xin, P. Zhang. Global weak solutions to 1D compressible isentropic Navier-Stokes equations with density-dependent viscosity. *Methods Appl. Anal. 12 (2005), no. 3, 239–251.*
S. Jiang, P. Zhang. On spherically symmetric solutions of the compressible isentropic Navier-Stokes equations. *Comm. Math. Phys. 215 (2001), no. 3, 559–581.*
A. Jüngel. Global weak solutions to compressible Navier-Stokes equations for quantum fluids. *SIAM J. Math. Anal. 42 (2010), no. 3, 1025–1045.*
A. Jüngel, D. Matthes. The Derrida–Lebowitz-Speer-Spohn equations: Existence, uniqueness, and Decay rates of the solutions. *SIAM J. Math. Anal., 39(6), (2008), 1996–2015.*
A.V. Kazhikhov, V.V. Shelukhin. Unique global solution with respect to time of initial-boundary value problems for one-dimensional equations of a viscous gas. *J. Appl. Math. Mech. 41 (1977), no. 2, 273–282; translated from Prikl. Mat. Meh. 41 (1977), no. 2, 282–291 (Russian).*
J. I. Kanel. A model system of equations for the one-dimensional motion of a gas. *Differ. Uravn. 4 (1968), 721–734 (in Russian).*
I. Lacroix-Violet, A. Vasseur. Global weak solutions to the compressible quantum Navier-Stokes equation and its semi-classical limit. *J. Math. Pures Appl. (9) 114 (2018), 191–210.*
J. Leray. Sur le mouvement d’un fluide visqueux remplissant l’espace, *Acta Math. 63 (1934), 193–248.*
H.L. Li, J. Li, Z.P. Xin. Vanishing of vacuum states and blow-up phenomena of the compressible Navier–Stokes equations. *Comm. Math. Phys., 281, 401–444 (2008).*
J. Li, Z.P. Xin. Global Existence of Weak Solutions to the Barotropic Compressible Navier-Stokes Flows with Degenerate Viscosities. arXiv:1504.06826 (2015).
P.-L. Lions. *Mathematical topics in fluid mechanics. Vol. 2. Compressible models. Oxford Lecture Series in Mathematics and its Applications, 10. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1998.*
D. Maltese, M. Michalek, P. Mucha, A. Novotny, M. Pokorny, E. Zatorska. Existence of weak solutions for compressible Navier-Stokes with entropy transport. *J. Differential Equations, 261, No. 8, 4448–4485 (2016)*
A. Mellet, A. Vasseur. On the barotropic compressible Navier-Stokes equations. *Comm. Partial Differential Equations 32 (2007), no. 1-3, 431–452.*
A. Mellet, A. Vasseur. Existence and uniqueness of global strong solutions for one-dimensional compressible Navier-Stokes equations. *SIAM J. Math. Anal. 39 (2007/08), no. 4, 1344–1365.*
P.B. Mucha, M. Pokorny, E. Zatorska. Approximate solutions to a model of two-component reactive flow. *Discrete Contin. Dyn. Syst. Ser. S, 7, No. 5 , 1079–1099 (2014).*
A. Novotny. Weak solutions for a bi-fluid model of a mixture ot two compressible non interacting fluids. Submitted (2018).
A. Novotny, M. Pokorny. Weak solutions for some compressible multi-component fluid models. Submitted (2018).
P.I. Plotnikov, W. Weigant. Isothermal Navier-Stokes equations and Radon transform. *SIAM J. Math. Anal. 47 (2015), no. 1, 626–653.*
F. Rousset. Solutions faibles de l’équation de Navier-Stokes des fluides compressible \[d’après A. Vasseur et C. Yu\]. Séminaire Bourbaki, 69ème année, 2016–2017, no 1135.
D. Serre. Solutions faibles globales des équations de Navier-Stokes pour un fluide compressible. *C. R. Acad. Sci. Paris. I Math. 303 (1986), no. 13, 639–642.*
V. A. Vaigant, A. V. Kazhikhov. On the existence of global solutions of two-dimensional Navier-Stokes equations of a compressible viscous fluid (Russian). *Sibirsk. Mat. Zh. 36 (1995), no. 6, 1283–1316; translation in Siberian Math. J. 36 (1995), no. 6, 1108–1141.*
A. Vasseur, C. Yu. Global weak solutions to compressible quantum Navier-Stokes equations with damping. *SIAM J. Math. Anal. 48 (2016), no. 2, 1489–1511.*
A. Vasseur, C. Yu. Existence of Global Weak Solutions for 3D Degenerate Compressible Navier-Stokes Equations. *Inventiones mathematicae (2016), 1–40.*
A. Vasseur, H. Wen, C. Yu. Global weak solution to the viscous two-phase model with finite energy. *To appear in J. Math Pures Appl. (2018).*
E. Zatorska. On the flow of chemically reacting gaseous mixture. *J. Diff. Equations. 253 (2012) 3471–3500.*
---
abstract: 'We consider some questions concerning the monotonicity properties of entropy and mean entropy of states on translationally invariant systems (classical lattice, quantum lattice and quantum continuous). By taking the property of strong subadditivity, which for quantum systems was proven rather late in the historical development, as one of four primary axioms (the other three being simply positivity, subadditivity and translational invariance) we are able to obtain results, some new, some proved in a new way, which appear to complement in an interesting way results proved around thirty years ago on limiting mean entropy and related questions. In particular, we prove that as the sizes of boxes in $\mathbb{Z}^{\nu}$ or $\mathbb{R}^{\nu}$ increase in the sense of set inclusion, (1) their mean entropy decreases monotonically and (2) their entropy increases monotonically. Our proof of (2) uses the notion of *m-point correlation entropies* which we introduce and which generalize the notion of *index of correlation* (see e.g. R. Horodecki, Phys. Lett. A 187, 145 (1994)). We mention a number of further results and questions concerning monotonicity of mean entropy for more general shapes than boxes and for more general translationally invariant (/homogeneous) lattices and spaces than $\mathbb{Z}^{\nu}$ or $\mathbb{R}^{\nu}$.'
author:
- |
Amanda R. Kay[^1] and Bernard S. Kay[^2]\
    *[Department of Mathematics, University of York,]{}\
    [York YO10 5DD, UK]{}*
title: Monotonicity with volume of entropy and of mean entropy for translationally invariant systems as consequences of strong subadditivity
---
PACS numbers: 05.30.-d, 03.65.-w, 03.67.-a, 04.62.+v
INTRODUCTION {#S:intro}
============
The mid-1960s saw the beginning of an intense period of research into the mathematical properties of the entropy of translationally invariant states on translationally invariant (infinite Euclidean) systems, both classical and quantum. Amongst the questions which were of interest at that time was the question of the existence of what we shall call in this paper *limiting mean entropy*. A simple variant of this question (see Corollary \[C:limiting\] below) is whether the mean entropy of a box tends to a definite limit as the lengths of each of its sides tend to infinity. Here, by the *mean entropy* of a (finite) box, we simply mean its entropy divided by its volume. (The reader should be warned that in the literature referred to here, no particular phrase is attached to this concept, and the term ‘mean entropy’ is used instead to denote what we call here ‘limiting mean entropy’.) This variant had been proven to be true both for classical systems by Robinson and Ruelle in [@rr67] and for quantum systems by Lanford and Robinson in [@lr68]. However, there were important reasons for wanting to prove variants of this result which involved more general shapes than boxes, such as the variant known as ‘(limiting) mean entropy in the sense of van Hove’ [@rr67]. This had been proven in the classical case in the Robinson-Ruelle paper [@rr67] as a consequence of a general property called *strong subadditivity* (SSA). The Lanford-Robinson paper [@lr68] put forward the conjecture that SSA held also in the quantum case but, in the absence of a proof of this, could not immediately establish limiting mean entropy in the sense of van Hove. (It was in fact first proven for quantum systems by Araki and Lieb [@al70].) In fact, six years were to pass before SSA was finally established for quantum systems by Lieb and Ruskai [@lru73].
Here we recall that, if $\rho_{123}$ is a state on a Hilbert space which is given to us as a triple tensor product of three preferred Hilbert spaces, ${\mathcal{H}}={\mathcal{H}}_{1}\otimes{\mathcal{H}}_{2}\otimes{\mathcal{H}}_{3}$, and if $\rho_{2}$, $\rho_{12}$, and $\rho_{23}$ denote its partial traces over ${\mathcal{H}}_{1}\otimes{\mathcal{H}}_{3}$, ${\mathcal{H}}_{3}$, and ${\mathcal{H}}_{1}$ respectively, then the property of SSA can be written as $$\label{Eq:ssain}
S(\rho_{123})+S(\rho_{2})\leqslant S(\rho_{12})+S(\rho_{23})$$ where for any density matrix $\rho$, $S(\rho)$ denotes its (von Neumann) entropy $-{\operatorname{Tr}}(\rho\log\rho)$.
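Since the SSA inequality above is a statement about arbitrary density matrices on a triple tensor product, it can be spot-checked numerically. The following sketch (an illustration added here, not part of the original paper; it assumes NumPy is available, and the helper names `partial_trace` and `von_neumann_entropy` are our own) samples random states on ${\mathcal{H}}_{1}\otimes{\mathcal{H}}_{2}\otimes{\mathcal{H}}_{3}$ with each factor $\mathbb{C}^2$ and verifies the inequality:

```python
import numpy as np

def random_density_matrix(dim, rng):
    # Hilbert-Schmidt-random state: rho = G G^dagger / Tr(G G^dagger).
    G = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def partial_trace(rho, dims, keep):
    # Trace out every tensor factor whose index is not in `keep`.
    n = len(dims)
    t = rho.reshape(list(dims) + list(dims))
    for k, ax in enumerate(sorted(set(range(n)) - set(keep))):
        a = ax - k                       # earlier traces shift the axes down
        t = np.trace(t, axis1=a, axis2=a + (n - k))
    d = int(np.prod([dims[i] for i in keep]))
    return t.reshape(d, d)

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho log rho), computed from the eigenvalues.
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

def ssa_gap(rho123, dims=(2, 2, 2)):
    # S(rho12) + S(rho23) - S(rho123) - S(rho2); SSA asserts this is >= 0.
    s123 = von_neumann_entropy(rho123)
    s12 = von_neumann_entropy(partial_trace(rho123, dims, [0, 1]))
    s23 = von_neumann_entropy(partial_trace(rho123, dims, [1, 2]))
    s2 = von_neumann_entropy(partial_trace(rho123, dims, [1]))
    return s12 + s23 - s123 - s2

rng = np.random.default_rng(0)
gaps = [ssa_gap(random_density_matrix(8, rng)) for _ in range(100)]
assert min(gaps) > -1e-9   # SSA holds (up to round-off) for every sample
```

Subadditivity (inequality for disjoint regions) follows as the special case in which ${\mathcal{H}}_2$ is trivial.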
Since then, it has become increasingly clear that SSA is a key property of states on composite quantum systems, and in particular of translationally invariant states on translationally invariant quantum systems [@l75; @r94]. However, we feel that our understanding of the significance of SSA has remained incomplete, partly because of the historical accident that its discovery and first use were very much bound up with the specific technical problem of generalizing results on limiting mean entropy from the case of simple boxes – where it was not needed – to the case of van Hove – where it was useful. In an attempt to partially remedy this situation, we have considered a number of questions relating to monotonicity properties of entropy and of mean entropy of boxes, bearing in mind the SSA property from the outset. We have found that, while SSA might not be *needed* to establish limiting mean entropy for the case of boxes, it can, in fact, be used with profit to throw new light on this result. Namely, we shall show in this paper that, for translationally invariant states on translationally invariant quantum systems, SSA implies the stronger result that the *mean entropy* of boxes *decreases* monotonically as the boxes increase in size in the sense of set inclusion. We shall also mention a number of results and (as far as we are aware) open questions concerning monotonicity of mean entropy, suggested by our approach, which concern more general shapes than boxes and more general translationally invariant (/homogeneous) lattices and spaces than the usual infinite Euclidean lattices and spaces. Finally, we shall give a new proof of the known result that SSA implies that the *entropy* of boxes *increases* monotonically, again as the boxes increase in size in the sense of set inclusion.
We now explain our basic setting and list our results in detail. We begin with the following discrete and continuous versions of the standard definition of a translationally invariant quantum system (see for example [@lr68; @r69]). In the case of a lattice, $\mathbb{Z}^{\nu}$, we define a *region* $\Lambda$ to be a non-empty finite subset. In the case of a continuum, $\mathbb{R}^{\nu}$, we define a *region* $\Lambda$ to be a measurable set with finite (non-zero) volume. In either case, there is an assignment of a separable Hilbert space ${\mathcal{H}}_{\Lambda}$ to each region, satisfying, in the continuum case, the additional condition that this assignment be the same for any two regions which differ by a region of zero volume. Further, this assignment satisfies the compatibility condition that if two regions $\Lambda_{1}$ and $\Lambda_{2}$ are disjoint, then ${\mathcal{H}}_{\Lambda_{1}\cup\Lambda_{2}}={\mathcal{H}}_{\Lambda_{1}}
\otimes{\mathcal{H}}_{\Lambda_{2}}$, where, in the continuum case, two regions are said to be *disjoint* if their intersection has zero volume. We define a *state* mathematically to consist of a family $\{\rho_{\Lambda}\}$ of density operators (positive trace-class operators with trace $1$) on the Hilbert spaces ${\mathcal{H}}_{\Lambda}$ which are compatible in the sense that, for disjoint $\Lambda_{1}$ and $\Lambda_{2}$, $$\label{Eq:rho}
\rho_{\Lambda_{1}}={\operatorname{Tr}}_{\Lambda_{2}}(\rho_{\Lambda_{1}\cup\Lambda_{2}})$$ where for any region $\Lambda$, ${\operatorname{Tr}}_{\Lambda}$ means the partial trace over ${\mathcal{H}}_{\Lambda}$.
We remark that it is well known that classical lattice systems can be regarded as special cases of quantum lattice systems, where the density matrices representing the state are simultaneously diagonal, and so any result for a quantum lattice system will also be true for a classical lattice system. However, our results below are not applicable to classical continuous systems since property (A) below fails (see [@rr67]) in this case.
In this paper we shall mainly confine our interest to situations where not only the quantum system, but also the *state* is translationally invariant. This means that for all regions $\Lambda$ and all translations $\tau$ from the relevant translation group ($\mathbb{Z}^{\nu}$ for lattice systems or $\mathbb{R}^{\nu}$ for continuous systems) there exists a unitary operator $U(\tau,\Lambda)$ from ${\mathcal{H}}_{\Lambda}$ to ${\mathcal{H}}_{\tau(\Lambda)}$ such that $$\label{Eq:traninv}
\rho_{\tau(\Lambda)}=U(\tau,\Lambda)\rho_{\Lambda}U(\tau,\Lambda)^{-1}.$$
Given any state on a translationally invariant quantum system, we define the *entropy of a region $\Lambda$* to be the von Neumann entropy of $\rho_{\Lambda}$, i.e. $$\label{Eq:entropy}
S(\Lambda){\overset{ \text{def} }{=}}-{\operatorname{Tr}}(\rho_{\Lambda}\log\rho_{\Lambda}).$$ The entropy of a region is known to satisfy many properties [@l75]. However, in the present paper we shall focus on:
*(A) Positivity.* $$S(\Lambda)\geqslant 0 \text{\qquad for all } \Lambda$$
*(B) Subadditivity (SA).*\
If $\Lambda_{1}$ and $\Lambda_{2}$ are disjoint, then $$S(\Lambda_{1}\cup\Lambda_{2})\leqslant
S(\Lambda_{1})+S(\Lambda_{2})$$
*(C) Strong subadditivity (SSA).* $$S(\Lambda_{1}\cup\Lambda_{2})
+S(\Lambda_{1}\cap\Lambda_{2})\leqslant
S(\Lambda_{1})+S(\Lambda_{2}).$$
\(A) follows immediately from . (C) follows immediately from , , and . (B) is just a special case of (C) but we prefer to view it as a separate property. Furthermore, if our state is translationally invariant, it follows immediately from and that
*(D) Translational invariance.*\
For any element $\tau$ of the relevant translation group $$S(\Lambda)=S(\tau(\Lambda)).$$
As we discussed above, Property (C) (or rather from which it is an easy consequence) has the status of a difficult theorem [@lru73], but in spite of this, the game we wish to play from now on is to regard (A), (B), (C) and (D) as axioms and to see what one can easily prove about the class of functions $\Lambda\mapsto
S(\Lambda)$ from regions of $\mathbb{Z}^{\nu}$ or $\mathbb{R}^{\nu}$ to the real numbers which obey these axioms.
We begin by defining the *mean entropy* $\bar{S}$ of a region $\Lambda$ by $$\bar{S}(\Lambda){\overset{ \text{def} }{=}}\frac{S(\Lambda)}{|\Lambda|}$$ where $|\Lambda|$ denotes, in the lattice case, the number of lattice points contained in $\Lambda$ and, in the continuum case, the volume of $\Lambda$.
We also define the notion of *box* regions, $\Lambda_{a}$, $a=(a_{1},\dots,a_{\nu})$, where $a_{1},\dots,a_{\nu}$ are positive integers (in the lattice case) or positive real numbers (in the continuum case) by $$\Lambda_{a}{\overset{ \text{def} }{=}}\lbrace x\in\mathbb{Z}^{\nu}\text{ or }\mathbb{R}^{\nu}:
0<x_{i}\leqslant a_{i}\text{ for }i=1,\dots,\nu\rbrace$$ These have $|\Lambda_{a}|=\prod_{i=1}^{\nu}a_{i}$. With these two definitions, we shall prove in Sections \[S:thm1\] and \[S:correlate\] that, both in the lattice and continuum cases, and for arbitrary dimension $\nu$, Axioms (A), (B), (C) and (D) imply:
\[Th:meanent\] $\Lambda_{a}\subset\Lambda_{b}\qquad\Rightarrow\qquad
\bar{S}(\Lambda_{a})\geqslant\bar{S}(\Lambda_{b})$
\[Th:entropy\] $\Lambda_{a}\subset\Lambda_{b}\qquad\Rightarrow\qquad
S(\Lambda_{a})\leqslant S(\Lambda_{b})$
By Axiom (A) and the elementary result from real analysis that any monotonically decreasing sequence which is bounded below has a limit, we immediately have from Theorem \[Th:meanent\] the corollary:
\[C:limiting\] Given any infinite sequence of boxes $\Lambda_{i}$, $i=1,2,\dots$, which increase in size in the sense of set inclusion $$\lim_{i\rightarrow\infty}\bar{S}(\Lambda_{i})$$ exists.
The special case of this where every edge length of $\Lambda_{i}$ tends to infinity as $i$ tends to infinity, is the result of Lanford and Robinson [@lr68].
We have found a number of intriguing hints that it should be possible to considerably generalize Theorem \[Th:meanent\] both to settings which involve a class of shapes more general than boxes and to translationally (and rotationally etc.) invariant systems more general than $\mathbb{Z}^{\nu}$ and $\mathbb{R}^{\nu}$. In Section \[S:hexagon\] we outline a number of partial results in this direction and pose a number of open questions.
Theorem \[Th:entropy\] is not an entirely new result. Robinson and Ruelle [@rr67] proved such a monotonicity result, for classical lattice systems, which was more general in that our boxes were replaced by general regions. Also, in an article by Wehrl [@w78], Theorem \[Th:entropy\] is proven in the one-dimensional quantum case; this can then easily be extended to higher dimensions as in our proof below. However, we remark that Wehrl’s proof *both* relies on SSA *and* requires the existence (on the line) of limiting mean entropy to have been established first. Instead, our proof proceeds directly from Axioms (A), (B), (C) and (D) and involves the concept of *$m$-point correlation entropies* which we introduce in Section \[S:correlate\] and which are related to the *index of correlation* (see e.g. [@h94]) in somewhat the same way that truncated correlation functions are related to full correlation functions in quantum field theory and statistical mechanics [@ha96; @p88].
PROOF OF THEOREM \[Th:meanent\] {#S:thm1}
===============================
We shall treat in turn the four cases of the one-dimensional lattice, the $\nu$-dimensional lattice, the one-dimensional continuum, and the $\nu$-dimensional continuum.
\[case1\] In this case, a box region, $\Lambda_{(n)}$, is simply a set consisting of the first $n$ natural numbers for some natural number $n$. Writing $S(n)$ instead of $S(\Lambda_{(n)})$ for ease of notation, it follows from Axioms (B) and (D) that $$\label{Eq:SA2}
S(q+r)\leqslant S(q)+S(r)$$ and from Axioms (C) and (D) that $$\label{Eq:SSA2}
S(q+r+t)+S(r)\leqslant S(q+r)+S(r+t)$$ where $q,r,t\in\mathbb{N}$.
The statement of our theorem in this case amounts to the statement that the mean entropy $S(n)/n$ is monotonically decreasing. We prove this by establishing the proposition $$\label{Eq:prop}
\frac{S(n)}{n}\geqslant\frac{S(n+1)}{n+1}$$ with the following simple inductive argument. First notice that a special case of is the statement that $S(2)\leqslant 2S(1)$. This establishes Proposition in the case $n=1$. Next, on the assumption that Proposition is true for $n=p$, we have, by in the case $r=p$ and $q=t=1$ that $$\begin{aligned}
S(p+2) &\leqslant S(p+1)+S(p+1)-S(p) \\
&\leqslant 2S(p+1)-\frac{p}{p+1}S(p+1) \\
&=\frac{p+2}{p+1}S(p+1)
\end{aligned}$$ which implies that is true for $n=p+1$. We conclude that is true and hence that Theorem \[Th:meanent\] is true in the case of a one-dimensional lattice.
\[case2\]
With a similar change in notation to that used above, we now need to prove $$\label{Eq:case2}
\frac{S(a_{1},\dots,a_{\nu})}{a_{1}\dots a_{\nu}}
\geqslant
\frac{S(b_{1},\dots,b_{\nu})}{b_{1}\dots b_{\nu}}$$ where $a_{i},b_{i}\in\mathbb{N}$ and $a_{i}\leqslant b_{i}$ for $i=1,\dots,\nu$. We first notice that the function $S_{a_{2},\dots,a_{\nu}}(\cdot){\overset{ \text{def} }{=}}S(\cdot,a_{2},\dots,a_{\nu})$, from the natural numbers to $\mathbb{R}$, clearly satisfies and . Thus, by Case \[case1\], we have $$\frac{S(a_{1},a_{2},\dots,a_{\nu})}{a_{1}a_{2}\dots a_{\nu}}
\geqslant
\frac{S(b_{1},a_{2},\dots,a_{\nu})}{b_{1}a_{2}\dots a_{\nu}}$$ We next notice that, in a similar way to above, the function $S_{b_{1},a_{3},\dots,a_{\nu}}(\cdot){\overset{ \text{def} }{=}}S(b_{1},\cdot,a_{3},\dots,a_{\nu})$ also satisfies and . Thus, by applying Case \[case1\] again, we have $$\frac{S(b_{1},a_{2},a_{3},\dots,a_{\nu})}
{b_{1}a_{2}a_{3}\dots a_{\nu}}
\geqslant
\frac{S(b_{1},b_{2},a_{3},\dots,a_{\nu})}
{b_{1}b_{2}a_{3}\dots a_{\nu}}$$ One may clearly continue in this way, arriving at after a total of $\nu$ such steps.
\[case3\] In this case, a box region, $\Lambda_{(x)}$, is simply a real interval $(0,x]$. Writing $S(x)$ instead of $S(\Lambda_{(x)})$ we now need to prove $$\label{Eq:case3}
\frac{S(y)}{y}\geqslant\frac{S(x)}{x}$$ for $y\leqslant x$.
We first argue that holds on the rationals. For any two rationals $x$ and $y$, let $c$ be their common denominator and define the function $S_c(\cdot)$, taking its argument from the natural numbers, by $S_c(n){\overset{ \text{def} }{=}}S(n/c)$. This function satisfies and of Case \[case1\] and thus $S_{c}(n)/n$ and hence $S(n/c)/(n/c)$ are monotonically decreasing by the argument given there, thus establishing for these $x$ and $y$. To extend to the reals, it then clearly suffices to prove that $S(x)$ is continuous. This follows immediately from the following lemmas and Axiom (A).
\[L:lieb\] $S(x)$ is weakly concave i.e. for positive real numbers $x$ and $y$, $S((x+y)/2)\geqslant S(x)/2 +S(y)/2$.
\[L:cont\] A function which is weakly concave and bounded below is necessarily continuous.
To prove Lemma \[L:lieb\], first note that if $x=y$ the statement is trivially true. Otherwise, assume without loss of generality that $y< x$. The result then follows from in the case established above where $q,r$ and $t$ are real, by identifying $q=t=(x-y)/2$ and $r=y$. We remark that this is essentially the same as an argument given in [@w78], where it is attributed to E. Lieb. Lemma \[L:cont\] (or rather the alternative statement with “convex” substituted for “concave” and “bounded above” substituted for “bounded below”) is proved in [@ps72]. We remark that this is the only place where we use Axiom (A). In particular, Axiom (A) is unnecessary for Cases \[case1\] and \[case2\].
\[case4\] This case can be established from Case \[case3\] by an argument similar to that used above to go from Case \[case1\] to Case \[case2\].
This completes the proof of Theorem \[Th:meanent\]. We remark that it can be helpful to visualize the steps in the above proof using a geometrical picture in which lattice points are identified with $\nu$-dimensional continuum cubes of side $1$. In detail, one identifies the particular lattice point $(1,\dots,1)$ with the particular continuum cube $\Lambda_{(1,\dots,1)}$ and extends this identification by identifying the general lattice point $(a_{1},\dots,a_{\nu})$, $a_{1},\dots,a_{\nu}\in\mathbb{Z}$, with the result of translating the cube $\Lambda_{(1,\dots,1)}$ by the vector $(a_{1}-1,\dots,a_{\nu}-1)$. We also remark that, in the continuum case, Theorem \[Th:meanent\] can trivially be extended from the case of nested box regions to nested parallelepiped regions with parallel faces (by simply “squashing” the boxes in the theorem).
REMARKS ABOUT POSSIBLE GENERALIZATIONS OF THEOREM \[Th:meanent\] {#S:hexagon}
================================================================
We now discuss two different directions in which one can attempt to generalize Theorem \[Th:meanent\].
Firstly, one can ask whether Theorem \[Th:meanent\] generalizes to more general shapes than boxes (or parallelepipeds). Indeed, one can ask the very general question
\[Q:gen\] Is mean entropy monotonically decreasing on any sequence of regions in $\mathbb{Z}^{\nu}$ or $\mathbb{R}^{\nu}$ which increase in size in the sense of set inclusion?
In other words, is the mean entropy of any region in the system less than or equal to the mean entropy of any subregion of that region? We remark that this question is more likely to have a positive answer if we extend the translation group of Section \[S:intro\] to the appropriate full symmetry group of $\mathbb{Z}^\nu$ or $\mathbb{R}^\nu$, i.e. if we also include rotations and reflections. From now on we shall assume this extension to be made. We have been unable to answer this question in anything like its full generality, but we have found no negative answers and some partial positive answers in the case of a few specific simple shapes which go beyond the box-shapes (and parallelepiped shapes – cf. the second remark at the end of Section \[S:thm1\]) of Theorem \[Th:meanent\]. For example, in $\mathbb{Z}^{2}$ we can prove inequalities such as $$\label{Eq:boxes}
\frac{S(
\begin{picture}(20,20)(0,0)
\curve(0,0, 20,0)
\curve(0,0, 0,20)
\curve(0,10, 20,10)
\curve(10,0, 10,20)
\curve(20,0, 20,10)
\curve(0,20, 10,20)
\end{picture}
)}{3}
\leqslant
\frac{S(
\begin{picture}(20,10)(0,0)
\curve(0,0, 20,0)
\curve(0,10, 20,10)
\curve(0,0, 0,10)
\curve(10,0, 10,10)
\curve(20,0, 20,10)
\end{picture}
)}{2}$$ where we are now using an obvious notation suggested by the first remark at the end of Section \[S:thm1\].
Equation may easily be proven from the special cases $$\begin{aligned}
S(\begin{picture}(20,10)(0,0)
\curve(0,0, 20,0)
\curve(0,10, 20,10)
\curve(0,0, 0,10)
\curve(10,0, 10,10)
\curve(20,0, 20,10)
\end{picture})
&\leqslant
S(\begin{picture}(10,10)(0,0)
\curve(0,0, 10,0)
\curve(0,10, 10,10)
\curve(0,0, 0,10)
\curve(10,0, 10,10)
\end{picture})
+
S(\begin{picture}(10,10)(0,0)
\curve(0,0, 10,0)
\curve(0,10, 10,10)
\curve(0,0, 0,10)
\curve(10,0, 10,10)
\end{picture}) \\
S(\begin{picture}(20,20)(0,0)
\curve(0,0, 20,0)
\curve(0,0, 0,20)
\curve(0,10, 20,10)
\curve(10,0, 10,20)
\curve(20,0, 20,10)
\curve(0,20, 10,20)
\end{picture})
+
S(\begin{picture}(10,10)(0,0)
\curve(0,0, 10,0)
\curve(0,10, 10,10)
\curve(0,0, 0,10)
\curve(10,0, 10,10)
\end{picture})
&\leqslant
S(\begin{picture}(20,10)(0,0)
\curve(0,0, 20,0)
\curve(0,10, 20,10)
\curve(0,0, 0,10)
\curve(10,0, 10,10)
\curve(20,0, 20,10)
\end{picture})
+
S(\begin{picture}(20,10)(0,0)
\curve(0,0, 20,0)
\curve(0,10, 20,10)
\curve(0,0, 0,10)
\curve(10,0, 10,10)
\curve(20,0, 20,10)
\end{picture})
\end{aligned}$$ of subadditivity and strong subadditivity in an entirely analogous way to the way we established Case \[case1\] of Theorem \[Th:meanent\] from equations and in Section \[S:thm1\] in the case that $q=r=t=1$. However we have been unable, for example, to prove or disprove either of the candidate inequalities $$\begin{aligned}
\frac{
S(\begin{picture}(30,20)(0,0)
\curve(0,0, 30,0)
\curve(0,10, 30,10)
\curve(0,20, 10,20)
\curve(0,0, 0,20)
\curve(10,0, 10,20)
\curve(20,0, 20,10)
\curve(30,0, 30,10)
\end{picture}) }{4}
&{\overset{ ? }{\leqslant}}\frac{
S(\begin{picture}(20,20)(0,0)
\curve(0,0, 20,0)
\curve(0,0, 0,20)
\curve(0,10, 20,10)
\curve(10,0, 10,20)
\curve(20,0, 20,10)
\curve(0,20, 10,20)
\end{picture}) }{3} \label{Eq:lshape} \\
\frac{
S(\begin{picture}(30,20)(0,0)
\curve(0,0, 30,0)
\curve(0,10, 30,10)
\curve(0,20, 10,20)
\curve(0,0, 0,20)
\curve(10,0, 10,20)
\curve(20,0, 20,10)
\curve(30,0, 30,10)
\end{picture}) }{4}
&{\overset{ ? }{\leqslant}}\frac{
S(\begin{picture}(30,10)(0,0)
\curve(0,0, 30,0)
\curve(0,10, 30,10)
\curve(0,0, 0,10)
\curve(10,0, 10,10)
\curve(20,0, 20,10)
\curve(30,0, 30,10)
\end{picture}) }{3} \label{Eq:plank}\end{aligned}$$ (but see after Corollary \[C:av\] below for a partial answer to these questions).
In fact, many of the cases where we have been able to answer Question \[Q:gen\] positively turn out to refer to consecutive figures in a one-dimensional “chain” of figures. For example, the case illustrated above easily extends to a more general inequality which refers to an arbitrary pair of successive figures in the chain shown in Figure \[Fig\].
\begin{picture}(320,90)(0,0)
\curve(0,30, 60,90)
\curve(30,0, 120,90)
\curve(90,0, 180,90)
\curve(150,0, 240,90)
\curve(210,0, 270,60)
\curve(30,0, 0,30)
\curve(90,0, 30,60)
\curve(150,0, 60,90)
\curve(210,0, 120,90)
\curve(240,30, 180,90)
\curve(270,60, 240,90)
\curve(270,45, 280,45)
\curve(290,45, 300,45)
\curve(310,45, 320,45)
\end{picture}
An interesting special case of Question \[Q:gen\] is
\[Q:spec\] Is mean entropy monotonically decreasing on any sequence of *similar* regions in $\mathbb{Z}^{\nu}$ or $\mathbb{R}^{\nu}$ which increase in size in the sense of set inclusion?
Of course, we know from Theorem \[Th:meanent\] that we can answer Question \[Q:spec\] positively for the case of similar boxes and parallelepipeds. But consideration of more general shapes forces us to leave the realm of one-dimensional chains and, for this reason, we have found it difficult to find other shapes for which we can prove anything. In fact, we have not even been able to answer Question \[Q:spec\] in the case of discs in $\mathbb{R}^{2}$ with increasing radii. However, we *have* been able to answer Question \[Q:spec\] positively in the case of two regular hexagons in the plane (i.e. $\mathbb{R}^2$) with diameters in the ratio two-to-one.
\begin{picture}(160,140)(0,0)
\curve(40,138, 120,138)
\curve(20,103.5, 140,103.5)
\curve(0,69, 160,69)
\curve(20,34.5, 140,34.5)
\curve(40,0, 120,0)
\curve(0,69, 40,138)
\curve(20,34.5, 80,138)
\curve(40,0, 120,138)
\curve(80,0, 140,103.5)
\curve(120,0, 160,69)
\curve(0,69, 40,0)
\curve(20,103.5, 80,0)
\curve(40,138, 120,0)
\curve(80,138, 140,34.5)
\curve(120,138, 160,69)
\end{picture}
To treat this situation we consider Figure \[fighex\]. Denoting the smaller hexagon made of 6 small triangles by $H$ and the diamond region made of two small triangles by $D$, we begin by noting that the mean entropy of $H$ is less than or equal to the mean entropy of $D$. This follows immediately once one notices that $H$ can be viewed as the disjoint union of three copies of $D$, since by applying Axiom (B) (twice) we have $S(H)\leqslant S(D)+S(D)+S(D)$ which implies that $$\label{Eq:hexdiam}
\frac{S(H)}{6}\leqslant \frac{S(D)}{2}$$
Next, imagine that the vertices of the central small hexagon in Figure \[fighex\] are numbered (say) clockwise, starting at some particular vertex, from 1 to 6 and regard the large hexagon as the union of 6 copies of $H$, which we shall call $H_1,\dots,H_6$, centred respectively at each of these 6 vertices. Also define the sequence of figures $F_1=H_1$, $F_2=F_1\cup H_2$, $F_3=F_2\cup H_3$, etc. so that $F_6$ is our large hexagon. We may then argue that each of these figures $F_n$, taken successively, has a mean entropy less than or equal to that of $H$. The first step in this argument proceeds by first noticing that $F_2$ consists of the union of two copies of $H$ whose intersection is a copy of $D$ and hence by Axioms (C) and (D) that $S(F_2)+S(D)\leqslant S(H)+S(H)$. This is easily combined with the inequality to conclude that $S(F_2)/10\leqslant
S(H)/6$. The subsequent steps proceed along similar lines, each using the result of the previous step, along with inequality and the facts that (a) $F_i$ consists of the union of the figure $F_{i-1}$ and a copy of the figure $H$ (b) the intersection of $F_{i-1}$ and the same $H$ is a copy of $D$. After the fourth step we have the result that $S(F_{5})/22\leqslant S(H)/6$. For the final step we note that $F_6$ is the union of the figure $F_5$ and a copy of the figure $H$, but this time the intersection of these figures is a new figure $G$ (composed of 4 small triangles). To derive the final result that $S(F_{6})/24\leqslant S(H)/6$ we now need, instead of , the result that $S(H)/6\leqslant S(G)/4$. This can easily be shown by using Axioms (C) and (D) to establish that $S(H)+S(D)\leqslant 2S(G)$ and combining this with .
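The bookkeeping in this chain of inequalities can be double-checked mechanically. The sketch below is purely a consistency check on the algebra: for arbitrary values of $S(H)$, $S(D)$ and $S(G)$ subject only to the two constraints established above, it propagates the upper bounds $S(F_{i})\leqslant S(F_{i-1})+S(H)-S(D)$ for $i\leqslant 5$ and $S(F_{6})\leqslant S(F_{5})+S(H)-S(G)$, and confirms that each bound divided by the number of small triangles ($10, 14, 18, 22, 24$) is at most $S(H)/6$. The numerical triples are arbitrary admissible values.

```python
def check(S_H, S_D, S_G, tol=1e-12):
    # Constraints established in the text:
    #   S(H)/6 <= S(D)/2   and   S(H) + S(D) <= 2 S(G).
    assert S_H / 6 <= S_D / 2 + tol and S_H + S_D <= 2 * S_G + tol
    bound = 2 * S_H - S_D                     # S(F_2) <= 2 S(H) - S(D)
    for size in (10, 14, 18, 22):             # triangles in F_2, ..., F_5
        assert bound / size <= S_H / 6 + tol
        if size < 22:
            bound += S_H - S_D                # S(F_{i+1}) <= S(F_i) + S(H) - S(D)
    bound += S_H - S_G                        # final overlap is G, not D
    assert bound / 24 <= S_H / 6 + tol        # F_6 is the large hexagon

check(6.0, 2.0, 4.0)   # boundary case: every inequality is tight
check(6.0, 3.0, 5.0)
check(1.0, 0.9, 1.0)
```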
Besides the above specific examples, we can also prove (say for a lattice $\mathbb{Z}^{\nu}$, and continuing to interpret lattice points as cubes and to refer to collections of cubes as ‘figures’) the general result:
\[Th:av.entropy\] The mean entropy of a figure $\mathcal{F}(n)$ composed of $n$ cubes ($n\geqslant 2$) is less than or equal to the average of the mean entropies of all the (connected or disconnected) figures contained in $\mathcal{F}(n)$ which are composed of $n-1$ cubes.
We remark that Theorem \[Th:av.entropy\] and Corollary \[C:av\] below actually only assume Axioms (B) and (C). In particular, the symmetry-invariance axiom (D) is not required in any form.
First we introduce some new notation. Labelling the cubes of $\mathcal{F}(n)$ by the integers $1,\dots,n$ we let $\mathcal{F}(n;i,j,\dots)$ denote the figure that is formed from the figure $\mathcal{F}(n)$ by taking away its $i$th, $j$th, $\dots$ cubes. Then the statement of Theorem \[Th:av.entropy\] amounts to $$\label{Eq:av.entropy}
\frac{S(\mathcal{F}(n))}{n}\leqslant\frac{1}{n}\sum_{j}
\frac{S(\mathcal{F}(n;j))}{n-1}$$
We prove this inequality by induction on $n$. First, is true for all figures $\mathcal{F}$ with $n=2$ by Axiom (B). Next, we assume that is true for all figures $\mathcal{F}$ with $n=p$ cubes. Taking any figure $\mathcal{F}(p+1)$, we note that $\mathcal{F}(p+1;i)$ consists of just $p$ cubes, so by our assumption $$\label{Eq:av.entropy2}
\frac{S(\mathcal{F}(p+1;i))}{p}\leqslant\frac{1}{p}\sum_{j\neq i}
\frac{S(\mathcal{F}(p+1;i,j))}{p-1}$$ Also, for $j\neq i$, Axiom (C) implies that $$\label{Eq:SSA3}
S(\mathcal{F}(p+1))\leqslant S(\mathcal{F}(p+1;i))+S(\mathcal{F}(p+1;j))
-S(\mathcal{F}(p+1;i,j))$$ Summing for $j=1,\dots,p+1$, with $j\neq i$, leads to $$\begin{gathered}
pS(\mathcal{F}(p+1)) \leqslant pS(\mathcal{F}(p+1;i)) \\
+\sum_{j\neq i}S(\mathcal{F}(p+1;j))
-\sum_{j\neq i}S(\mathcal{F}(p+1;i,j))
\end{gathered}$$ Combining this with we have $$\begin{aligned}
pS(\mathcal{F}(p+1))
& \leqslant pS(\mathcal{F}(p+1;i))+\sum_{j\neq i}
S(\mathcal{F}(p+1;j)) \\
& \hspace{5cm} -(p-1)S(\mathcal{F}(p+1;i)) \\
&=\sum_{j}S(\mathcal{F}(p+1;j))
\end{aligned}$$ Dividing this last equation by $p(p+1)$ shows that is true for $n=p+1$.
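Since classical Shannon entropy satisfies Axioms (B) and (C), the inequality of Theorem \[Th:av.entropy\] holds in particular for subset entropies of any joint probability distribution, which makes a direct numerical check possible. The sketch below (assuming NumPy; the randomly drawn joint law is purely illustrative) verifies it for a figure of $n=3$ binary cubes.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3                        # number of "cubes" in the figure F(n)
p = rng.random((2,) * n)     # random joint pmf over n binary sites
p /= p.sum()

def H(kept):
    """Shannon entropy of the marginal on the subset `kept` of cubes."""
    drop = tuple(i for i in range(n) if i not in kept)
    q = p.sum(axis=drop).ravel()
    q = q[q > 0]
    return float(-(q * np.log(q)).sum())

full = H(range(n))
# Average of the mean entropies of the n subfigures with one cube removed.
avg = sum(H([j for j in range(n) if j != i]) / (n - 1)
          for i in range(n)) / n
assert full / n <= avg + 1e-12
```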
This theorem also leads to the natural corollary:
\[C:av\] $$\frac{S(\mathcal{F}(n))}{n}\leqslant
\frac{\max_{j}S(\mathcal{F}(n;j))}{n-1}$$
Thus the mean entropy of a figure on a lattice is less than or equal to the mean entropy of at least one of its subfigures composed of one less cube. Returning to an example discussed above, we see that this remark implies that the mean entropy of the figure
\begin{picture}(30,20)(0,0)
\curve(0,0, 30,0)
\curve(0,10, 30,10)
\curve(0,20, 10,20)
\curve(0,0, 0,20)
\curve(10,0, 10,20)
\curve(20,0, 20,10)
\curve(30,0, 30,10)
\end{picture}
is less than or equal to the mean entropy of one of its four subfigures each composed of 3 cubes. In fact, we have been able to prove, by an alternative route, the stronger result that its mean entropy is less than or equal to the mean entropy of one of the two *connected* subfigures composed of 3 cubes, i.e. that one of the two inequalities and is actually true, but we can’t say which one. This is done by first noting that by Axiom (C): $$S(\begin{picture}(30,20)(0,0)
\curve(0,0, 30,0)
\curve(0,10, 30,10)
\curve(0,20, 10,20)
\curve(0,0, 0,20)
\curve(10,0, 10,20)
\curve(20,0, 20,10)
\curve(30,0, 30,10)
\end{picture})
+
S(\begin{picture}(20,10)(0,0)
\curve(0,0, 20,0)
\curve(0,10, 20,10)
\curve(0,0, 0,10)
\curve(10,0, 10,10)
\curve(20,0, 20,10)
\end{picture})
\leqslant
S(\begin{picture}(30,10)(0,0)
\curve(0,0, 30,0)
\curve(0,10, 30,10)
\curve(0,0, 0,10)
\curve(10,0, 10,10)
\curve(20,0, 20,10)
\curve(30,0, 30,10)
\end{picture})
+
S(\begin{picture}(20,20)(0,0)
\curve(0,0, 20,0)
\curve(0,0, 0,20)
\curve(0,10, 20,10)
\curve(10,0, 10,20)
\curve(20,0, 20,10)
\curve(0,20, 10,20)
\end{picture})$$ Combining this with we have: $$\label{Eq:three}
S(\begin{picture}(30,20)(0,0)
\curve(0,0, 30,0)
\curve(0,10, 30,10)
\curve(0,20, 10,20)
\curve(0,0, 0,20)
\curve(10,0, 10,20)
\curve(20,0, 20,10)
\curve(30,0, 30,10)
\end{picture})
\leqslant
S(\begin{picture}(30,10)(0,0)
\curve(0,0, 30,0)
\curve(0,10, 30,10)
\curve(0,0, 0,10)
\curve(10,0, 10,10)
\curve(20,0, 20,10)
\curve(30,0, 30,10)
\end{picture})
+\frac{1}{3}
S(\begin{picture}(20,20)(0,0)
\curve(0,0, 20,0)
\curve(0,0, 0,20)
\curve(0,10, 20,10)
\curve(10,0, 10,20)
\curve(20,0, 20,10)
\curve(0,20, 10,20)
\end{picture})$$ But, we must have *either* $ S(\begin{picture}(30,10)(0,0)
\curve(0,0, 30,0)
\curve(0,10, 30,10)
\curve(0,0, 0,10)
\curve(10,0, 10,10)
\curve(20,0, 20,10)
\curve(30,0, 30,10)
\end{picture})
\leqslant
S(\begin{picture}(20,20)(0,0)
\curve(0,0, 20,0)
\curve(0,0, 0,20)
\curve(0,10, 20,10)
\curve(10,0, 10,20)
\curve(20,0, 20,10)
\curve(0,20, 10,20)
\end{picture}) $ *or* $ S(\begin{picture}(20,20)(0,0)
\curve(0,0, 20,0)
\curve(0,0, 0,20)
\curve(0,10, 20,10)
\curve(10,0, 10,20)
\curve(20,0, 20,10)
\curve(0,20, 10,20)
\end{picture})
\leqslant
S(\begin{picture}(30,10)(0,0)
\curve(0,0, 30,0)
\curve(0,10, 30,10)
\curve(0,0, 0,10)
\curve(10,0, 10,10)
\curve(20,0, 20,10)
\curve(30,0, 30,10)
\end{picture}) $. Thus, we conclude from that one of the inequalities and must be true.
A second direction in which one can attempt to generalize Theorem \[Th:meanent\] is suggested by the fact that the basic setting of Section \[S:intro\] clearly generalizes to more general lattices than $\mathbb{Z}^\nu$ and to more general homogeneous spaces than $\mathbb{R}^{\nu}$ such as discs in one-dimension and spheres and tori in higher dimensions. One can thus ask to what extent Theorem \[Th:meanent\] generalizes to such settings, where Axiom (D) is now replaced by invariance under the relevant symmetry group. As far as more general lattices are concerned, we remark that the hexagon example discussed above could be regarded as an example concerning a triangular lattice. For the case of the one-dimensional circle and higher dimensional tori, it is easy to see that the obvious analogue of Theorem \[Th:meanent\] still holds. For example, on both a one-dimensional “lattice unit-circle” (where the allowed angles are $2m\pi/n$, $m=1,\dots,n$) and a “continuum unit-circle”, one easily shows by a close analogue to the arguments in Cases \[case1\] and \[case3\] of Section \[S:thm1\] that $$\frac{S(\theta_{1})}{\theta_{1}}\geqslant
\frac{S(\theta_{2})}{\theta_{2}}
\text{\qquad for \qquad}
\theta_{1}\leqslant\theta_{2}$$ It is natural to ask the following question (and the obvious counterparts to this question in higher dimensions) concerning a possible generalization of this result, in the continuum case, to the 2-sphere:
Does the mean entropy of a disc drawn on the surface of a sphere decrease monotonically as the solid angle subtended at the centre increases?
But, just as for discs in $\mathbb{R}^{2}$, we have been unable to answer this question.
PROOF OF THEOREM \[Th:entropy\] {#S:correlate}
===============================
We shall find it useful to begin by introducing, in the case of a one-dimensional lattice, the notion of the *m-point correlation entropies* of a translationally invariant state.
To motivate this definition, we first recall the notion of the *index of correlation* (see for example [@h94] where it is discussed in an abstract setting concerning states on tensor products of Hilbert spaces). In the case of a one-dimensional quantum lattice system we can interpret this as the difference between the entropy of the union of $n$ consecutive lattice points (or, in our alternative interpretation, cubes) and the sum of their individual entropies: $$\label{Eq:index}
I_{n}{\overset{ \text{def} }{=}}n S(1)-S(n)$$ By using Axiom (B) $n-1$ times, it is easy to show that $I_{n}$ is positive.
Our new notion of *m-point correlation entropies* may be regarded as designed so as to provide a new way of writing the index of correlation $I_{n}$ as a sum of positive terms, each of which concerns $m\leqslant n$ lattice points. Namely, we define the *m-point correlation entropies* by $$\label{Eq:corr.entropy}
S^{c}_{m} {\overset{ \text{def} }{=}}\begin{cases}
2S(1)-S(2) & m=2 \\
2S(m-1)-S(m-2)-S(m) & m\geqslant 3
\end{cases}$$ Note that $S^{c}_{m}$ is positive by Axiom (B) for $m=2$ and by Axiom (C) for $m\geqslant3$. An easy calculation then shows that $$\label{Eq:index2}
I_{n}=\sum_{m=2}^{n}(n+1-m)S^{c}_{m}$$ By and , we can write the entropy of $n$ consecutive lattice points as $$\label{Eq:entropysum}
S(n)=nS(1)-\sum_{m=2}^{n}(n+1-m)S^{c}_{m}$$ We note that by adding an extra lattice point onto a region of $n$ consecutive lattice points, the entropy increases by $S(1)$ i.e. the entropy of one lattice point, but decreases by $S^{c}_{i}$ (for $i=2,\dots,n+1$). Thus it is natural to think of $S^{c}_{i}$ as a measure of the degree of correlation of a chain of lattice points of length $i$ over and above the correlations involving subchains of length $j$ where $j<i$. Thus as we mentioned in the introduction, our $S^{c}_{n}$ is related to the index of correlation $I_{n}$ in somewhat the same way that truncated correlation functions (sometimes known as connected correlation functions) are related to full correlation functions in quantum field theory and statistical mechanics [@ha96; @p88].
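Both the positivity claims and the identities above are easy to check mechanically: positivity of $S^{c}_{m}$ uses the axioms, while the identities themselves are purely algebraic (telescoping) and hold for an arbitrary sequence. The sketch below uses an illustrative concave sequence.

```python
import math

# Illustrative (concave, subadditive) entropy sequence S(1), ..., S(8).
S = {n: math.log(n + 1) + 0.1 * n for n in range(1, 9)}

def Sc(m):
    """m-point correlation entropies, per the definition in the text."""
    if m == 2:
        return 2 * S[1] - S[2]
    return 2 * S[m - 1] - S[m - 2] - S[m]

for N in range(2, 9):
    I_N = N * S[1] - S[N]                       # index of correlation
    rhs = sum((N + 1 - m) * Sc(m) for m in range(2, N + 1))
    assert abs(I_N - rhs) < 1e-12               # the telescoping identity
    assert Sc(N) >= 0                           # positivity (Axioms B, C)
    # Adding one extra point changes the entropy by S(1) minus the sum
    # of the correlation entropies up to the new length:
    delta = S[1] - sum(Sc(m) for m in range(2, N + 1))
    assert abs((S[N] - S[N - 1]) - delta) < 1e-12
```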
We now use this formalism to prove Theorem \[Th:entropy\] for the case of the lattice $\mathbb{Z}$. This can then be proven to extend to $\mathbb{Z}^{\nu}$ and $\mathbb{R}^{\nu}$ in a similar way to that in which we proved Cases \[case2\], \[case3\] and \[case4\] from Case \[case1\] in Section \[S:thm1\].
Proving Theorem \[Th:entropy\] in the case of $\mathbb{Z}$ is equivalent to proving that $$\label{Eq:diff}
0\leqslant S(N)- S(N-1) \text{\quad for } N\geqslant 2$$ To do this, we first note that all the terms in the sum in are positive. Thus for any $n>N$, removing the last $n-N$ terms gives us the inequality $$S(n)\leqslant nS(1)-\sum^{N}_{m=2}(n+1-m)S^{c}_{m}$$ from which we have $$0\leqslant \frac{S(n)}{n}\leqslant
S(1)-\frac{1}{n}\sum_{m=2}^{N}(n+1-m)S^{c}_{m}.$$ Taking the limit $n\rightarrow\infty$, we deduce that $$0\leqslant S(1)-\sum^{N}_{m=2}S^{c}_{m}.$$ Substituting the expression for $S_{m}^{c}$ given in Equation into the right hand side of this inequality, one finds that all but two of the $3(N-2)+3$ terms cancel and one is left with .
We remark that actually the above proof clearly proves a stronger statement than our theorem, namely that $S(N)-S(N-1)$ is greater than or equal to the limiting mean entropy!
We also remark that it is essential for Theorem \[Th:entropy\] that the full system be infinite. For example, if instead of the one dimensional system $\mathbb{Z}$ one were to take a closed lattice unit-circle consisting of $n$ lattice points, as discussed in Section \[S:hexagon\], then it is obviously easy to have states (‘pure total states’) for which $S(n)=0$ while $S(m)>0$ for some $m<n$. An amusing example of this is provided by the case where each point around our circle corresponds to a quantum system with Hilbert space ${\mathcal{H}}=\mathbb{C}^2$ and the pure total state is the generalized GHZ [@ghsz90] state on the $n$-fold tensor product of ${\mathcal{H}}$ with itself $$\Psi=\frac{1}{\sqrt{2}}
|\dots\uparrow\uparrow\uparrow\dots\rangle
+\frac{1}{\sqrt{2}}
|\dots\downarrow\downarrow\downarrow\dots\rangle$$ where $\mid\uparrow\rangle$ and $\mid\downarrow\rangle$ are a choice of orthonormal basis for ${\mathcal{H}}$. Clearly, in this case, we would have $S(m)=\log 2$ whenever $m<n$, but $S(n)=0$!
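The entropies quoted in this example are easy to confirm numerically. The following sketch (assuming NumPy; the open-row arrangement is immaterial here, since the reduced states are the same as on the circle) computes von Neumann entropies of reduced states of an $n$-qubit generalized GHZ state by partial trace.

```python
import numpy as np

def ghz(n):
    """State vector of (|0...0> + |1...1>) / sqrt(2) on n qubits."""
    v = np.zeros(2 ** n)
    v[0] = v[-1] = 1 / np.sqrt(2)
    return v

def subset_entropy(v, keep, n):
    """Von Neumann entropy (natural log) of the reduced state on `keep`."""
    psi = v.reshape((2,) * n)
    rest = [i for i in range(n) if i not in keep]
    psi = np.transpose(psi, list(keep) + rest).reshape(2 ** len(keep), -1)
    w = np.linalg.eigvalsh(psi @ psi.T)      # spectrum of the reduced rho
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

n = 4
v = ghz(n)
for m in range(1, n):                        # proper subsets: S(m) = log 2
    assert abs(subset_entropy(v, range(m), n) - np.log(2)) < 1e-9
assert subset_entropy(v, range(n), n) < 1e-9  # the total state is pure
```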
Note that if we were to attempt to consider an analogue of this example in the case of an infinite row of lattice points, then there would be no such difficulty because we never actually *assign* an entropy to an infinite row of lattice points. (Note though that, at least if we take the view that all observables are local observables, it would still be correct to assign an entropy of $\log 2$ even to the state which formally corresponds to the above generalized GHZ state in the case “$n=\infty$” notwithstanding the fact that this “looks like” a vector state on an infinite tensor product of $\mathbb{C}^2$.)
EPILOGUE
========
One immediate consequence of Axioms (A) and (C) is that, if two regions each have zero entropy, then both their intersection and their union must also have zero entropy. This might be expressed by saying: “If a state is pure on each of two regions, it must be pure on both their union and intersection.”
Amongst other things, this remark further illuminates one of the heuristic remarks (concerning Theorem 6.4 of [@kw91]) made in a paper [@kw91] by Kay and Wald on quantum field theory in curved spacetime. (See pages 55, 99 and 105 of [@kw91].) Namely, that it is impossible for a state to be pure on each of two ‘double-wedge regions’ [@kw91] but mixed on their intersection. In fact, one of the motivations for the present research was a desire to elucidate that remark.
With an extension of the reasoning behind the above remark, another result which one can easily derive, now from our full set of axioms (A), (B), (C) and (D), is:
In both lattice and continuum cases, and for arbitrary dimension $\nu$, if the entropy of any box is zero the entropy of all boxes is zero.
One may prove this either as an immediate consequence of Theorems \[Th:meanent\] and \[Th:entropy\], or as an easy direct consequence of the axioms.
ACKNOWLEDGEMENTS {#acknowledgements .unnumbered}
================
We thank Tony Sudbery for pointing out to us the GHZ example mentioned at the end of Section \[S:correlate\]. A. R. Kay thanks the EPSRC for a research studentship.
[99]{} D. W. Robinson and D. Ruelle, “Mean entropy of states in classical statistical mechanics,” Commun. Math. Phys. **5**, 288-300 (1967). O. E. Lanford and D. W. Robinson, “Mean entropy of states in quantum-statistical mechanics,” J. Math. Phys. **9**, 1120-1125 (1968). H. Araki and E. L. Lieb, “Entropy inequalities,” Commun. Math. Phys. **18**, 160-170 (1970). E. L. Lieb and M. B. Ruskai, “Proof of the strong subadditivity of quantum-mechanical entropy,” J. Math. Phys. **14**, 1938-1941 (1973). E. L. Lieb, “Some convexity and subadditivity properties of entropy,” Bull. Am. Math. Soc. **81**, 1-13 (1975). M. B. Ruskai, “Beyond strong subadditivity? Improved bounds on the contraction of generalized relative entropy,” Rev. Math. Phys. **6**, 1147-1161 (1994). D. Ruelle, *Statistical Mechanics* (Benjamin, New York, 1969). A. Wehrl, “General properties of entropy,” Rev. Mod. Phys. **50**, 221-260 (1978). R. Horodecki, “Informationally coherent quantum states,” Phys. Lett. A **187**, 145-150 (1994). R. Haag, *Local Quantum Physics*, 2nd ed. (Springer, Berlin, 1996). G. Parisi, *Statistical Field Theory* (Addison-Wesley, Reading, MA, 1988). G. Pólya and G. Szegö, *Problems and Theorems in Analysis I* (Springer, Berlin, 1972). D. M. Greenberger, M. A. Horne, A. Shimony and A. Zeilinger, “Bell’s theorem without inequalities,” Am. J. Phys. **58**, 1131-1143 (1990). B. S. Kay and R. M. Wald, “Theorems on the uniqueness and thermal properties of stationary, nonsingular, quasifree states on spacetimes with a bifurcate Killing horizon,” Physics Reports **207**, 49-136 (1991).
[^1]: Electronic mail: ark102@york.ac.uk
[^2]: Electronic mail: bsk2@york.ac.uk
---
abstract: 'While model selection is a well-studied topic in parametric and nonparametric regression or density estimation, model selection of possibly high dimensional nuisance parameters in semiparametric problems is far less developed. In this paper, we propose a new model selection framework for making inferences about a finite dimensional functional defined on a semiparametric model, when the latter admits a doubly robust estimating function. The class of such doubly robust functionals is quite large, including many missing data and causal inference problems. Under double robustness, the estimated functional should incur no bias if either of two nuisance parameters is evaluated at the truth while the other spans a large collection of candidate models. We introduce two model selection criteria for bias reduction of the functional of interest, each based on a novel definition of pseudo-risk for the functional that embodies this double robustness property and thus may be used to select the candidate model that is nearest to fulfilling this property even when all models are wrong. Both selection criteria have a bias-awareness property whereby selection of one nuisance parameter can be made to compensate for excessive bias due to poor learning of the other nuisance parameter. We establish an oracle property for a multi-fold cross-validation version of the new model selection criteria, which states that our empirical criteria perform nearly as well as an oracle with a priori knowledge of the pseudo-risk for each candidate model. We also describe a smooth approximation to the selection criteria which allows for valid post-selection inference. Finally, we apply the approach to perform model selection of a semiparametric estimator of the average treatment effect given an ensemble of candidate machine learning methods to account for confounding in a study of right heart catheterization among critically ill patients in the intensive care unit.'
author:
- |
Yifan Cui, Eric Tchetgen Tchetgen\
Department of Statistics, The Wharton School, University of Pennsylvania
bibliography:
- 'causal.bib'
- 'survtrees.bib'
title: 'Bias-aware model selection for machine learning of doubly robust functionals'
---
[***Keywords:*** Model Selection, Machine Learning, Doubly Robust, Influence Function, Average Treatment Effect, Cross-validation]{}
Introduction {#sec:intro}
============
Model selection is a well-studied topic in statistics, econometrics and machine learning. In fact, methods for model selection and corresponding theory abound in these disciplines, although primarily in settings of parametric and nonparametric regression and density estimation ([@akaike1974new; @BIC1978; @vuong1989likelihood; @zhang1993model; @wand1994kernel; @fan1995jrssb; @ruppert1995; @HALL1996165; @lasso; @yang2000; @birge2001gaussian; @fan2001variable; @rao; @wegkamp2003; @ruppert2003semiparametric; @efron2004least; @zhao2006lasso; @BIRGE2006497; @candes2007dantzig; @celisse2014; @belloni2014] and many others). Model selection methods are far less developed in settings where one aims to make inferences about a finite dimensional, pathwise differentiable functional defined on a semiparametric model. Model selection for the purpose of estimating such a functional may involve selection of an infinite dimensional parameter, say a nonparametric regression for the purpose of more accurate estimation of functional in view, which can be considerably more challenging than selecting a regression model strictly for the purpose of prediction. This is because whereas the latter admits a risk, e.g., mean squared error loss, that can be estimated unbiasedly and therefore can be minimized with small error, the risk of a semiparametric functional will typically not admit an unbiased estimator and therefore may not be minimized without excessive error. This is an important gap in both model selection and semiparametric theory which this paper aims to address.
Specifically, we propose a novel approach for model selection of a functional defined on a semiparametric model, in settings where inferences about the targeted functional involve infinite dimensional nuisance parameters, and the functional of scientific interest admits a doubly robust estimating function. Doubly robust inference has received considerable interest in the past few years across multiple disciplines including Statistics, Epidemiology and Econometrics [@robins1994; @Rotnitzky1998; @Scharfstein1999; @robins2000; @robins2001comment; @unified; @Lunceford2004; @bang2005; @tan2006; @cao2009; @ett2010dr; @funk2011; @rotnitzky2012dr; @han2013; @FARRELL20151; @2015biasreduce; @vermeulen2016adaptive; @Chernozhukov2018; @rotnitzky2019mix; @tan2019; @fulcher2017robust]. An estimator is said to be doubly robust if it remains consistent when either one of two nuisance parameters needed for estimation is consistently estimated, even if the other is not. The class of functionals that admit doubly robust estimators is quite rich, and includes estimation of pathwise differentiable functionals in missing data problems under missing at random assumptions, and also in more complex settings where the missingness process may be not at random. Several problems in causal inference also admit doubly robust estimating equations, the most prominent of which is the average treatment effect under assumptions that include positivity, consistency and no unmeasured confounding [@Scharfstein1999; @robins2000]. All of these functionals are members of a large class of doubly robust functionals recently studied by [@robins2008HOIF] in a unified theory of first and higher order influence functions.
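To make the double robustness property concrete, consider the mean counterfactual outcome $E[Y(1)]$, a building block of the average treatment effect, together with its well-known augmented inverse-probability-weighted estimating function $AY/e(X)-\{(A-e(X))/e(X)\}b(X)$, where $e$ is the propensity score and $b$ the outcome regression under treatment. On a discrete toy law (all numerical choices below are invented for illustration), the expectation can be computed exactly, making double robustness directly visible: the expectation equals the truth when either nuisance is correct, but not when both are wrong.

```python
# Toy law: X ~ Bernoulli(0.4), A | X ~ Bernoulli(e(X)), E[Y | A, X] = m(A, X).
pX = {0: 0.6, 1: 0.4}
e_true = lambda x: 0.3 + 0.4 * x        # true propensity score
m_true = lambda a, x: 1 + a + 0.5 * x   # true outcome regression
e_bad = lambda x: 0.5                   # misspecified propensity
b_bad = lambda x: 0.0                   # misspecified outcome regression

def dr_mean(e, b):
    """Exact expectation of A*Y/e(X) - (A - e(X))/e(X) * b(X) under the
    toy law; since the function is linear in Y, Y is replaced by m(A, X)."""
    total = 0.0
    for x, px in pX.items():
        for a in (0, 1):
            pa = e_true(x) if a == 1 else 1 - e_true(x)
            y = m_true(a, x)
            total += px * pa * (a * y / e(x) - (a - e(x)) / e(x) * b(x))
    return total

truth = sum(px * m_true(1, x) for x, px in pX.items())      # E[Y(1)] = 2.2
assert abs(dr_mean(e_true, b_bad) - truth) < 1e-9           # e right, b wrong
assert abs(dr_mean(e_bad, lambda x: m_true(1, x)) - truth) < 1e-9  # b right
assert abs(dr_mean(e_bad, b_bad) - truth) > 0.05            # both wrong
```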
The literature on double robustness combined with machine learning methods is rapidly expanding [@van2010collaborative; @van2011targeted; @belloni2014res; @FARRELL20151; @belloni2017Econometrica; @robins2017; @Chernozhukov2018; @athey2018jrssb; @van2018targeted; @dukes2018high; @rotnitzky2019mix; @tan2019model]. A well-documented advantage of using doubly robust influence functions is that flexible machine learning or other nonparametric data adaptive methods may generally be used to estimate high dimensional nuisance parameters such that valid inferences may be obtained about the functional of interest, provided that the estimated nuisance parameters converge in root mean squared error at rates faster than $n^{-1/4}$, which can be considerably slower than convergence rates attained by parametric models [@robins2017; @Chernozhukov2018]. Since, in practice, one cannot be certain that any model is either correctly specified or estimated with small bias, model selection remains important even with the advent of doubly robust estimation, including for methods that leverage machine learning. Clearly, the performance of doubly robust semiparametric estimators is intimately related to the performance of estimators of their nuisance parameters, a task towards which model selection is paramount.
This paper aims at the selection of an optimal estimator for the functional $\psi(\theta)$ in a class of doubly robust functionals where $\theta$ is a parameter (possibly infinite dimensional) indexing the observed data law within a semiparametric/nonparametric model. Given a large collection of doubly robust estimators $\Psi_{K}=\{\widehat \psi_k: k=1,\ldots,K\}$ of size $K$ (which may grow with sample size) indexed by candidate estimators of nuisance parameters, we ultimately wish to identify an estimator that minimizes the risk associated with a measurable loss function. A natural choice would be to try to select the estimator that minimizes the mean squared error $E(\widehat \psi_{k,\tk}-\psi)^2$. However, it is clear that this cannot be done empirically in a straightforward fashion, as an unbiased estimator of the mean squared error (even up to a constant shift) is generally not available, so that model selection becomes challenging.
In this paper, we propose two novel model selectors, each based on minimization of a certain cross-validated quadratic pseudo-risk for a large class of doubly robust functionals. The proposed pseudo-risk embodies the idea of double robustness: the first kind of pseudo-risk is given by the overall maximum squared bias (i.e., change in the estimated functional) at a given candidate estimator, induced by perturbing one nuisance parameter at a time over candidate models while holding the other one fixed; the second proposed pseudo-risk is given by the sum of two maximum squared bias quantities, each capturing the bias induced by perturbing a single nuisance parameter only. As we establish, both procedures are guaranteed to recover a consistent estimator for the functional whenever consistent estimators of nuisance parameters are available, with corresponding pseudo-risk converging to zero. However, even when all models are wrong, as in many practical settings where parametric models are used, and therefore all candidate estimators are inconsistent with pseudo-risk bounded away from zero, a minimizer of pseudo-risk nevertheless corresponds to a choice of models that is least sensitive to perturbations, i.e., to misspecification of either nuisance parameter. Both selection criteria have a bias awareness property: selection of one nuisance parameter is aware of, and therefore may be made to compensate for, excessive bias due to poor learning of the other nuisance parameter. We find such awareness may be key to bias reduction in the context of machine learning of doubly robust functionals.
Our cross-validation scheme is akin to that of [@vanderlann2003cross] and [@vaart2006cv; @vaart2006oracle], who formally established that such a scheme can perform nearly as well as an oracle with access to underlying data generating mechanism, in selecting an optimal estimator in settings such as nonparametric regression or density estimation. In contrast, we aim to perform model selection for a pathwise differentiable functional of such nonparametric regression or density function, and therefore to minimize a risk function for the functional; a different task which generally proves to be more challenging. For each split of the observed sample, a training sample is used to estimate each candidate model of the nuisance parameters. The validation subsample is then used to construct corresponding candidate estimators of functional $\psi$, and subsequently, to estimate the pseudo-risk of each candidate estimator conditional on the training sample. The optimal model is selected by minimizing multi-fold cross-validated pseudo-risk over the set of candidate nuisance models. To our knowledge, this is the first model selection result for doubly robust functionals which aims directly at bias reduction of the functional. Significant amounts of work have been devoted to improving performance of doubly robust estimators [@bang2005; @tan2006; @cao2009; @tan2010; @rotnitzky2012dr; @FARRELL20151; @2015biasreduce; @vermeulen2016adaptive; @van2018targeted; @dukes2018high; @smucler2019unifying; @rotnitzky2019mix; @tan2019; @bradic2019sparsity] from a variety of perspectives, however, none have considered model selection for the underlying functional, over a generic collection of candidate nuisance parameter models that may include classical parametric, semiparametric and nonparametric estimators, as well as modern highly data adaptive machine learning estimators. 
The task of model selection of parametric nuisance models for specific semiparametric doubly robust problems was recently considered by [@han2013; @chan2013; @HAN2014101; @han2014jasa; @chan2014; @duan2017; @chen2017; @liu2019], although their goal differs from ours as they aim to select parametric nuisance models that best approximate each nuisance parameter, which may generally conflict with selecting the nuisance models that minimize a well-defined pseudo-risk of the targeted functional, especially when, as is often the case in practice, all candidate models are wrong.
A related targeted maximum likelihood learning approach for model selection in functional estimation, known as cross-validated targeted maximum likelihood estimation [@zheng2010asymptotic; @van2011targeted; @van2018targeted], can provide notable improvements on the above methods by allowing the use of an ensemble of semiparametric or nonparametric methods, including modern machine learning, for flexible estimation of nuisance parameters; still, the ensemble learning is targeted at optimal estimation of nuisance parameters, not bias reduction of the functional ultimately of interest. Another state-of-the-art approach recently proposed incorporates modern machine learning in functional estimation via double debiased machine learning (DDML) [@Chernozhukov2018]; however, the approach uses a single machine learning algorithm for estimating each nuisance parameter, and does not leverage model selection targeted at the functional of interest. In comparison, as we will show, our approach ensures that selection of one nuisance model is made to minimize bias due to possible misspecification of the other; such bias awareness for the functional of interest endows the proposed model selection procedure with additional robustness.
The proposed approach is generic, in the sense that it allows the space of candidate models/learners to be quite large ($K_1\times K_2$ of order $c^{n^\gamma}$ for any constants $c>0$ and $\gamma<1$), and arbitrary in the sense of including parametric, semiparametric, nonparametric, as well as modern, highly adaptive machine learning estimators. Importantly, our results are completely agnostic as to whether the collection of models includes a correct model for nuisance parameters, in the sense that our procedure will select the nuisance models that optimize our doubly robust criteria. Another aspect in which our approach is generic is that it does not depend on a particular choice of doubly robust estimator of a given functional. In this sense, the approach may be used with, say, doubly robust targeted maximum likelihood learning to construct an ensemble of doubly robust targeted maximum likelihood estimators, each of which is based on different estimators of nuisance parameters. As discussed in Section \[sec:dr\], a very general class of doubly robust functionals is considered here. The purpose of considering a broad class is to demonstrate the flexibility of our method for various functionals that are generally of interest. Several functionals, such as the expected conditional covariance, the marginal mean of an outcome subject to missingness, as well as the closely related marginal mean of a counterfactual outcome, are within our class. As a running example to develop the proposed methodology, throughout we consider the average treatment effect under unconfoundedness as the target of inference.
In settings where all candidate estimators of the functional are regular and asymptotically linear, although not necessarily consistent, we propose a smooth approximation of the proposed criteria which allows for valid post-selection inference. In case the selected model fails to be consistent for the functional of interest, because all candidate models fail to consistently estimate the nuisance parameters, valid inference can nevertheless be obtained for the approximate functional that minimizes a population version of the proposed doubly-robust-inspired pseudo-risk function, whenever such an approximate functional is well-defined. Confidence intervals can then be constructed either using an estimate of the asymptotic variance of the smooth selected estimator of the functional, based on a standard application of the delta method, or via the nonparametric bootstrap.
The paper is organized as follows: in Section \[sec:dr\], we introduce the general class of doubly robust functionals and give specific examples of interest within this class. In Section \[sec:prelim\], we introduce the problem setting, and demonstrate the main challenge of model selection. Section \[sec:selection\] is devoted to developing our proposed selection criteria. Utilizing these criteria, we propose a general cross-validation scheme to construct empirical minimax functional selectors in Section \[sec:cf\]. In Section \[sec:theory\], we use powerful exponential inequalities for tails of extrema of second order U-statistics to establish a risk bound for the cross-validated minimax and mixed-minimax criteria. The risk bound firmly establishes that our empirical criteria select a pair of nuisance models which performs nearly as well as the pair of models selected by an oracle with access to the law that generated the data. In Section \[sec:simulations\], we present simulation studies to evaluate the performance of the proposed approach in a range of settings. In Section \[sec:softmax\], we describe a smooth approximation to the cross-validated pseudo-risk minimizer which allows for post-selection inferences. In Section \[sec:real\], we illustrate the proposed methods by studying the effectiveness of right heart catheterization for critically ill patients in the intensive care unit (ICU). Details of proofs are given in the appendices.
A class of doubly robust functionals\[sec:dr\]
==============================================
Suppose we observe $n$ i.i.d. samples $\mathcal O \equiv \{O_i,i=1,\cdots,n\}$ from a law $F_0$, belonging to a model $\mathcal M=\{F_\theta: \theta \in \Theta\}$, where $\Theta$ may be infinite dimensional. We are interested in inference about a functional $\psi(\theta)=\psi(F_\theta)$ on $\cal M$ for a large class of functionals known to admit a doubly robust first order influence function as defined in [@robins2016tr].
Suppose that $\theta = \theta_1 \times \theta_2$, where $\times$ denotes Cartesian product, $\theta_1 \in \Theta_1$ governs the marginal law of $X$ which is a $d$-dimensional subset of variables in $O$, and $\theta_2 \in \Theta_2$ governs the conditional distribution of $O|X$. \[as1\]
An influence function is a fundamental object of statistical theory that allows one to characterize a wide range of estimators and their efficiency. The influence function of a regular and asymptotically linear estimator $\widehat \psi$ of $\psi (\theta)$, $F_\theta \in \cal M$, is a random variable $IF(\theta)\equiv IF(O;\theta)$ which captures the first order asymptotic behavior of $\widehat \psi$, such that ${n}^{1/2}\{\widehat \psi-\psi(\theta)\}=n^{-1/2} \sum_{i=1}^n IF(O_i;\theta) + o_p(1)$. The set of influence functions of all regular and asymptotically linear estimators of a given functional $\psi(\theta)$ on $\cal M$ is contained in the Hilbert subspace of mean zero random variables $U\equiv u(O;\theta)$ that solve the following equation, $$d\psi(\theta_t)/dt|_{t=0} =E\{US\},$$ for all regular parametric submodels of $\cal M$, $F_{\theta_t}$, $t \in (-\epsilon,\epsilon)$ with $F_{\theta_0}=F_0$, and $S$ the score function of $f(O; \theta_t)$ at $t = 0$ [@newey1990semiparametric; @bickel1993efficient; @van2000asymptotic; @tsiatis2007semiparametric]. Once one has identified the influence function of a given estimator, one knows its asymptotic distribution, and can construct corresponding confidence intervals for the target parameter. We now describe a large class of doubly robust influence functions.
The parameter $\theta_2$ contains components $b:\mathbbm{R}^d\rightarrow \mathbbm{R}$ and $p:\mathbbm{R}^d\rightarrow \mathbbm{R}$, such that the functional $\psi(\theta)$ of interest has a first order influence function $IF(\theta)= H(b,p)-\psi(\theta)$, where $$\begin{aligned}
\label{eq:H}
H(b,p) \equiv b(X)p(X)h_1(O) + b(X)h_2(O) + p(X)h_3(O) + h_4(O),\end{aligned}$$ and $h_1,\ldots,h_4$ are measurable functions of $O$. \[as2\]
$\Theta_{2b} \times \Theta_{2p} \subseteq \Theta_2$, where $\Theta_{2b}$ and $\Theta_{2p}$ are the parameter spaces for the functions $b$ and $p$. Furthermore, the sets $\Theta_{2b}$ and $\Theta_{2p}$ are dense in $L_2(F_0)$ at each $\theta_1\in \Theta_1$, where $L_2(F_0)$ is the Hilbert space of all functions with finite variance. \[as3\]
[@robins2016tr] point out that Assumptions \[as1\]-\[as3\] imply the following double robustness property, $$\begin{aligned}
\label{eq:dr0}
E_\theta[H(b^*,p^*)]-E_\theta[H(b,p)]=E[(b(X)- b^*(X))(p(X)- p^*(X))h_1(O)],\end{aligned}$$ for all $(b^*,p^*)\in \Theta_{2b} \times \Theta_{2p}$. In which case $E[H(b^*,p^*)]=\psi$ if either $b^*=b$ or $p^*=p$. Examples of functionals within this class include:
(Expected product of conditional expectations) Suppose we observe $O=(A,Y,X)$, where $A$ and $Y$ are univariate random variables. Let $\psi(\theta)= E_\theta[p(X)b(X)] $, where $b(X)=E_\theta[Y|X]$ and $p(X)=E_\theta[A|X]$ are a priori unrestricted. In this nonparametric model, the first order influence function of $\psi$ is given by $$IF(\theta)= p(X)b(X)+p(X)\{Y-b(X)\} +b(X)\{A-p(X)\}-\psi(\theta),$$ so $h_1(O)=-1, h_2(O)=A, h_3(O)=Y, h_4(O)=0$.
(Expected conditional covariance) Suppose $O=(A,Y,X)$, where $A$ and $Y$ are univariate random variables. Let $\psi(\theta)= E_\theta [Cov_\theta (Y,A|X)]= E_\theta[AY]- E_\theta[p(X)b(X)]$, where $b(X)=E_\theta[Y|X]$ and $p(X)=E_\theta[A|X]$. In this model, the first order influence function is $$IF(\theta)= AY- \big[ p(X)b(X)+p(X)\{Y-b(X)\} +b(X)\{A-p(X)\} \big]-\psi(\theta),$$ so $h_1(O)=1, h_2(O)=-A, h_3(O)=-Y, h_4(O)=AY$.
As pointed out by [@robins2008HOIF; @robins2016tr; @robins2017], inference about expected conditional covariance is key to obtaining valid inferences about $\beta$ in the widely used semiparametric regression model $E(Y|A,X)=\beta A+b(X)$, where $b(X)$ is unrestricted [@robins2008HOIF].
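To make this example concrete, note that with $h_1(O)=1, h_2(O)=-A, h_3(O)=-Y, h_4(O)=AY$, the statistic $H(b,p)$ algebraically factors as $\{A-p(X)\}\{Y-b(X)\}$, so the influence-function-based estimator of the expected conditional covariance is simply a sample mean of residual products. The following minimal Python sketch (not code from the paper; the simulated design and the names `b_hat`, `p_hat` are our own illustration) shows the double robustness numerically: the estimate stays near the truth even when one of the two plugged-in nuisance functions is grossly wrong.

```python
import numpy as np

def dr_expected_cond_cov(y, a, b_hat, p_hat):
    """Influence-function-based estimate of E[Cov(Y, A | X)].

    With h1 = 1, h2 = -A, h3 = -Y, h4 = AY, the statistic H(b, p)
    factors as (A - p(X)) * (Y - b(X)), i.e. a mean of residual products.
    """
    return np.mean((a - p_hat) * (y - b_hat))

# Simulated check: given X, Y and A share a latent factor U, so
# E[Cov(Y, A | X)] = Var(U) = 1 here.
rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)
u = rng.normal(size=n)                         # shared latent factor
a = x + u + rng.normal(size=n)
y = x + u + rng.normal(size=n)

est = dr_expected_cond_cov(y, a, b_hat=x, p_hat=np.zeros(n))  # close to 1
```

Here `b_hat` is the correct outcome regression $E[Y|X]=X$ while `p_hat` is grossly misspecified, yet the estimate remains consistent; swapping the roles of the two nuisance estimates gives the same conclusion.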
(Missing at random) Suppose $O=(AY,A,X)$, where $A$ is the binary missing indicator, and $X$ is a $d$-dimensional vector of fully observed continuous covariates. We assume $Y$ is missing at random, i.e., $A{\rotatebox[origin=c]{90}{$\models$}}Y|X$. Let $b(X) = E(Y|A=1,X)$ be the outcome model and $\Pr(A = 1|X)>0$. The parameter of interest $\psi(\theta)$ is the marginal mean of $Y$. In this model, the first order influence function is $$IF(\theta)= Ap(X)\{Y-b(X)\}+b(X)-\psi(\theta),$$ where $p(X) = 1/\Pr(A = 1|X)$. So $h_1(O)=-A, h_2(O)=1, h_3(O)=AY, h_4(O)=0$.
(Missing not at random) We consider the setting in the last example allowing for missing not at random. We assume that $\Pr(A=1|X,Y)=[1+\exp\{-[\gamma(X)+\alpha Y]\} ]^{-1}$, where $\gamma(X)$ is an unknown function and $\alpha$ is a known constant. The marginal mean of $Y$ is again of interest and given by $\psi(\theta)=E_\theta (AY[1+\exp\{-[\gamma(X)+\alpha Y]\} ])$. [@robins2001comment] derived the first order influence function of $\psi$, $$IF(\theta)=A[1+\exp\{-\alpha Y\}p(X)][Y-b(X)]+b(X)-\psi(\theta),$$ where $b(X)=E[ Y\exp\{ -\alpha Y\} |A=1,X]/E[ \exp\{-\alpha Y\}|A=1,X]$ and $p(X)=\exp\{-\gamma(X)\}$. So $h_1(O)= -A \exp\{-\alpha Y\}, h_2(O)=1-A, h_3(O)=AY\exp\{-\alpha Y\}$, $h_4(O)=AY. $
(Average treatment effect) Suppose we observe $O=(A,Y,X)$, where $A$ is a binary treatment taking values in $\{0,1\}$, $Y$ is a univariate response, and $X$ is a collection of covariates. We wish to make inferences about the average treatment effect $E\left\{ Y_{1}-Y_{0}\right\}$, where $Y_{1}$ and $Y_{0}$ are potential outcomes.
Three important assumptions are sufficient for identification of the average treatment effect from the observed data. First, we make the consistency assumption that $Y = Y_A$ almost surely. This assumption essentially states that one observes $Y_a$ only if the treatment $a$ is equal to a subject’s actual treatment assignment $A$. The next assumption is known as ignorability [@10.2307/2335942], which requires that there are no unmeasured confounders for the effects of $A$ on $Y$, i.e., for both $a\in \{0,1\}$, $Y_a {\rotatebox[origin=c]{90}{$\models$}}A|X$. Finally, we assume that $\pi(a|X=x)=\Pr(A = a|X=x)>0$ for $a\in \{0,1\}$ if $f(x)>0$. This positivity assumption basically states that any subject with an observed value of $x$ has a positive probability of receiving both values of the treatment.
Under these three identifying conditions, functional $\psi_0(\theta) = E[E(Y |A = 1, X) - E(Y |A = 0, X)]$ is the average effect of treatment on the outcome. The first order influence function of this functional is $$\begin{aligned}
IF(\theta)=\frac{\left( -1\right) ^{1-A}}{\pi\left( A|X\right) }Y
-\left\{ \frac{\left( -1\right) ^{1-A}}{\pi\left( A|X\right) }E(Y|A,X)-\sum_{a}\left( -1\right) ^{1-a}E(Y|a,X)\right\}-\psi_0(\theta).
\label{eq:dr}\end{aligned}$$ In fact, this model has four nuisance parameters. Note that we can rewrite the influence function as $$\frac{A}{\pi(1|X)}\{Y-E(Y|1,X)\}+E(Y|1,X) - \left[\frac{1-A}{\pi(0|X)}\{Y-E(Y|0,X)\}+E(Y|0,X) \right]-\psi_0(\theta).$$ Then $IF(\theta)$ can be viewed as a difference of two influence functions of similar form as the missing at random (MAR), where $p^{(1)}(X)=1/\pi(1|X),$ $b^{(1)}(X)= E(Y|1,X)$, $p^{(2)}(X)=1/\pi(0|X),$ $b^{(2)}(X)= E(Y|0,X)$, $h^{(1)}_1(O)=-A, h^{(1)}_2(O)=1, h^{(1)}_3(O)=AY, h^{(1)}_4(O)=0$, $h^{(2)}_1(O)=-(1-A), h^{(2)}_2(O)=1, h^{(2)}_3(O)=(1-A)Y, h^{(2)}_4(O)=0$.
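For illustration, the estimating equation $\PP_n IF(\widehat p, \widehat b, \widehat \psi)=0$ for the average treatment effect has a closed-form solution: average the uncentered influence function. The sketch below (Python; the simulated design and variable names are our own, not the paper's) plugs in a deliberately misspecified outcome regression together with the correct propensity score, and the estimate nevertheless stays near the true effect of $2$.

```python
import numpy as np

def aipw_ate(y, a, pi1_hat, mu1_hat, mu0_hat):
    """Doubly robust (AIPW) estimate of the average treatment effect.

    Solves P_n IF(psi) = 0 in closed form by averaging the uncentered
    influence function; consistent if either pi1_hat or (mu1_hat, mu0_hat)
    converges to the truth.
    """
    if1 = a * (y - mu1_hat) / pi1_hat + mu1_hat
    if0 = (1 - a) * (y - mu0_hat) / (1 - pi1_hat) + mu0_hat
    return np.mean(if1 - if0)

# Simulated check: true ATE = 2, correct propensity, outcome model set to zero.
rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(size=n)
pi1 = 1.0 / (1.0 + np.exp(-x))            # true propensity score
a = rng.binomial(1, pi1)
y = 2.0 * a + x + rng.normal(size=n)

est = aipw_ate(y, a, pi1, np.zeros(n), np.zeros(n))  # close to 2
```

With both nuisance models correct, the same call attains the semiparametric efficiency bound; with both wrong, the bias is governed by the product of the two nuisance errors.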
\[remark:rm2\] [@rotnitzky2019mix] study a more general class of doubly robust influence functions, which admit the following “mixed bias property”: For each $\theta$, there exist functions $c(X;\theta)$ and $d(X;\theta)$ such that for any $\theta'$, $$\begin{aligned}
\psi(\theta') - \psi(\theta) + E_\theta (IF(\theta')) = E_\theta[ s_{ab}(O) \{ c(X,\theta')-c(X,\theta) \}\{ d(X,\theta') - d(X,\theta) \} ],
\label{eq:mix}\end{aligned}$$ where $s_{ab}$ is a known function not depending on $\theta$. Note that the selection procedure proposed in Section \[sec:select\] extends to this richer class of doubly robust influence functions which includes both the classes of [@riesz2018] and of doubly robust functionals described above [@robins2016tr]. In fact, all that is required by the proposed approach is the influence function has mean zero when either nuisance parameter is evaluated at the truth. As will be discussed later, the approach can readily be extended to multiply robust influence functions in the sense of [@tchetgentchetgen2012; @wang2018bounded; @Caleb2019; @Shi2019MultiplyRC; @sun2019multiple].
The practical implication of double robustness is that the asymptotic bias of an estimator obtained by solving $\PP_n \widehat {IF}(\widehat \psi)=\PP_n IF(\widehat p, \widehat b,\widehat \psi) = 0$ is guaranteed to be zero provided that either $\widehat p$ is consistent for $p$ or $\widehat b$ is consistent for $b$, though not necessarily both. Despite this local robustness property, in practice one may be unable to ensure that either model is consistent, or, even when using nonparametric models, that the resulting bias is small. For this reason, model selection over a class of candidate estimators may be essential to optimize performance in practical settings.
Challenges of model selection for doubly robust inference\[sec:prelim\]
=======================================================================
Hereinafter, in order to ground ideas, we focus the presentation on the average treatment effect functional of Example 2.5. It is well known that $$\begin{aligned}
\psi_0 &=&E\left[ E\left( Y|A=1,X\right) -E\left( Y|A=0,X\right) \right] \\
&=&E\left( \frac{\left( -1\right) ^{1-A}}{\pi\left( A|X\right) }Y\right) \\
&=&E\left(
\begin{array}{c}
\frac{\left( -1\right) ^{1-A}}{\pi\left( A|X\right) }Y
-\left\{ \frac{\left( -1\right) ^{1-A}}{\pi\left( A|X\right) }E(Y|A,X)-\sum_{a}\left( -1\right) ^{1-a}E(Y|a,X)\right\}
\end{array}\right).\end{aligned}$$
The first representation is known as outcome regression as it depends on the regression of $Y$ on $\left( A,X\right) $; the second is inverse probability weighting with weights depending on the propensity score [@10.2307/2335942]; the third representation is known as doubly robust as it relies on the outcome regression or the propensity score model being correct, but not necessarily both. In fact, the doubly robust representation based on the efficient influence function of $\psi _{0}$ is given by Equation , which will be used as our estimating equation for $\psi_0$ in the proposed model selection.
In order to describe the inherent challenges of performing model selection for $\psi_0$, consider the sample splitting scheme whereby a random half of the sample is used to construct $\widehat \pi_k(A|X)$ and $\widehat E_\tk(Y|A,X)$, while the other half is used to obtain the doubly robust estimator $\widehat \psi_{k,\tk}$. Consider the goal of selecting a pair of models $(k, \tk)$ that minimizes the mean squared error $E[(\widehat \psi_{k, \tk}-\psi_0)^2|\text{Training sample}]$ = bias$^2(\widehat \psi_{k,\tk})$ + variance$(\widehat \psi_{k,\tk})$, where bias$^2(\widehat \psi_{k,\tk})$ is given by Equation . As we expect the variance term to be of order $1/n$ conditional on the training sample, we may focus primarily on minimizing the squared bias. As no unbiased estimator of $\text{bias}^2(\widehat \psi_{k,\tk})$ exists, minimizing $\text{bias}^2(\widehat \psi_{k,\tk})$ will generally not be possible without incurring excessive bias. Hereafter, for a given split of the sample we shall refer to $\arg \min_{k,\tk} \text{bias}^2(\widehat \psi_{k,\tk})$ as the “squared bias minimizer”, which depends on the true data generating law (through $\pi(A|X)$ and $E(Y|A,X)$), and therefore may not be accurately estimated even in large samples. In the next section, we propose alternative criteria for selecting an estimator with a certain optimality condition that is nearly attainable empirically.
Recall that consistent estimators of the propensity score and outcome regression are not necessarily included among the candidates for model selection, so the minimal squared bias may not necessarily converge to zero asymptotically; nevertheless, it will do so when at least one nuisance parameter is estimated consistently. Furthermore, as we formally establish in Sections \[sec:theory1\] and \[sec:theory2\] and illustrate in our simulations, when a library of flexible machine learning estimators is used to estimate nuisance parameters, the approach proposed in the next section behaves nearly as well as an oracle that selects the estimator with smallest average squared bias, which vanishes at least as fast as that of any given choice of machine learners. This is quite remarkable, as the proposed approach avoids directly estimating the squared bias.
Model selection via a minimax cross-validation\[sec:select\]
============================================================
Minimax criteria for model selection \[sec:selection\]
------------------------------------------------------
In this section, we consider alternative selection criteria which avoid estimating and directly minimizing Equation . Suppose that we have candidate models $\pi_k\left( A|X\right), k \in \mathcal K_1 \equiv \{1,\cdots,K_1\}$ for the propensity score and $E_\tk\left( Y|A,X\right), \tk \in \mathcal K_2 \equiv \{1,\cdots,K_2\}$ for the outcome model, respectively. We begin by describing the population version of our minimax criteria, i.e., we focus on $\pi_{k}\left( A|X\right)$ and $E_{\tk}(Y|a,X)$, the asymptotic limits of $\widehat \pi_{k}\left( A|X\right)$ and $\widehat E_{\tk}(Y|a,X)$. We will introduce the cross-validated estimator in Section \[sec:cf\]. For each pair of candidate models $(k_1, \tk_1)$, we have $$\begin{aligned}
\psi _{k_1,\tk_1} = E\left(\frac{\left( -1\right) ^{1-A}}{\pi_{k_1}\left( A|X\right) }Y
-\left\{ \frac{\left( -1\right) ^{1-A}}{\pi_{k_1}\left( A|X\right) }E_{\tk_1}(Y|A,X)-\sum_{a}\left( -1\right) ^{1-a}E_{\tk_1}(Y|a,X)\right\}\right).
\label{eq:if2}\end{aligned}$$ The working models for the propensity score and outcome model could be parametric, semiparametric or nonparametric. A simple parametric case may entail positing that $\pi_k \left( A|X\right)$ and $E_\tk\left( Y|A,X\right)$ are chosen to be the following regression models $$\begin{aligned}
\text{logit}\Pr \left( A=1|X\right) &=&\alpha _{k,0}+\alpha _{k,1}^{T}h_{k}(X),\label{model1}\\
E\left( Y|A,X\right) &=&\beta _{\tk,0}+ \beta _{\tk,1}^{T}h_{\tk}(X)+\beta _{\tk,2}^{T}g_{\tk}(X)A+\beta _{\tk,3}A, \label{model2}\end{aligned}$$ for dictionary $\{h_k, h_\tk, g_\tk: k \in \mathcal K_1;\tk \in \mathcal K_2\}$. Subsequently, $$\begin{aligned}
\psi_{k,\tk}=E \bigg( \frac{\left( -1\right) ^{1-A}}{\pi \left( A|X;\alpha _{k}\right) }\Big\{
Y -\beta _{\tk,0}-\beta _{\tk,1}^{T} h_{\tk}(X)
- \beta _{\tk,2}^{T} g_{\tk}(X) A -\beta _{\tk,3} A
\Big\}
+ \beta _{\tk,2}^{T} g_{\tk}(X)+ \beta _{\tk,3} \bigg).\end{aligned}$$
Recall that the doubly robust estimator, which depends on both unknown functions, has zero bias if either working model equals the truth. Motivated by this observation, we define the following perturbation of a fixed index pair $(k_1,\tk_1)$, $$\begin{aligned}
\per(k,\tk; k_1,\tk_1) \equiv (\psi_{k,\tk}- \psi_{k_1,\tk_1})^2. \label{definition:per}\end{aligned}$$
The perturbations defined above have the following forms.
\[lemma:1\] $$\begin{aligned}
\per( k_1,\tk; k_1,\tk_1) = E\left[\sum_{a}\left( -1\right) ^{1-a} (\frac{\pi\left(
a|X\right) }{\pi_{k_1}\left( a|X\right) }-1) (E_{\tk_1}(Y|a,X)-E_{\tk}(Y|a,X))\right]^2,\end{aligned}$$
$$\begin{aligned}
\per( k,\tk_1; k_1,\tk_1) = E\left[\sum_{a}\left( -1\right) ^{1-a} (\frac{\pi\left(
a|X\right) }{\pi_k\left( a|X\right) }- \frac{\pi\left(
a|X\right) }{\pi_{k_1}\left( a|X\right) } ) (E(Y|a,X)-E_{\tk_1}(Y|a,X))\right]^2.\end{aligned}$$
Subsequently, for each fixed pair $(k_1,\tk_1)$, we only consider perturbations over pairs $(k,\tk)$ with either $k=k_1$ or $\tk=\tk_1$, and evaluate the perturbation of $ \psi_{k,\tk}$ at $\psi_{k_1,\tk_1}$ as $$\text{per}(k,\tk; k_1,\tk_1)=
\begin{cases}
\text{per}(k_1,\tk; k_1,\tk_1) & \text{if}\ k=k_1, \\
\text{per}(k,\tk_1; k_1,\tk_1) & \text{if}\ \tk=\tk_1,\\
0 & \text{otherwise}.
\end{cases}
\label{bias}$$ We may define the pseudo-risk $$\begin{aligned}
B^{(1)}_{k_1,\tk_1}=\max_{\substack{k=k_1,\tk \in \mathcal K_2;\\ \tk=\tk_1, k\in \mathcal K_1}} \text{per}(k,\tk; k_1,\tk_1),\end{aligned}$$ which measures the maximum change of the underlying functional at a candidate selected model $(k_1,\tk_1)$ induced by perturbing one of the nuisance parameters at a time, while holding the other fixed. We call this a pseudo-risk because, unlike a standard definition of risk (e.g., mean squared error), which is typically defined in terms of the data generating mechanism and a given candidate model/estimator, the proposed definition is in terms of all candidate models/estimators. We also consider the following pseudo-risk, $$\begin{aligned}
B^{(2)}_{k_1,\tk_1}=\max_{\tk_0 \in \mathcal K_2} \max_{\tk \in \mathcal K_2} \text{per}(k_1,\tk; k_1,\tk_0) + \max_{k_0\in \mathcal K_1} \max_{k\in \mathcal K_1} \text{per}(k,\tk_1; k_0,\tk_1).\end{aligned}$$ Evaluating the above perturbation for each pair $(k_1,\tk_1)$ gives $K_1 \times K_2$ pseudo-risk values $B_{k_1,\tk_1}, k_1\in \mathcal K_1;\tk_1\in \mathcal K_2$. Finally, we define $$\begin{aligned}
\arg\min_{(k_1,\tk_1)} B^{(1)}_{k_1,\tk_1},~~ \arg\min_{(k_1,\tk_1)} B^{(2)}_{k_1,\tk_1},\end{aligned}$$ as the population versions of the selected models, respectively. We refer to $B^{(1)}_{k_1,\tk_1}$ as the population minimax pseudo-risk and $B^{(2)}_{k_1,\tk_1}$ as the population mixed-minimax pseudo-risk.
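For intuition, given the $K_1\times K_2$ array of candidate values $\psi_{k,\tk}$, both population pseudo-risks are simple functions of row and column contrasts of the array; in particular, $B^{(2)}_{k_1,\tk_1}$ reduces to the squared range of row $k_1$ plus the squared range of column $\tk_1$. A small numerical sketch (Python; the array `psi` is hypothetical, not taken from the paper):

```python
import numpy as np

def minimax_risks(psi):
    """Population pseudo-risks from a (K1, K2) array of candidate values.

    psi[k, tk] holds psi_{k, tk}; returns the minimax pseudo-risk B1 and
    the mixed-minimax pseudo-risk B2, both as (K1, K2) arrays.
    """
    K1, K2 = psi.shape
    B1 = np.empty((K1, K2))
    for k1 in range(K1):
        for tk1 in range(K2):
            per_row = (psi[k1, :] - psi[k1, tk1]) ** 2   # perturb outcome model
            per_col = (psi[:, tk1] - psi[k1, tk1]) ** 2  # perturb propensity model
            B1[k1, tk1] = max(per_row.max(), per_col.max())
    # B2[k1, tk1] = squared range of row k1 + squared range of column tk1
    row_range = (psi.max(axis=1) - psi.min(axis=1)) ** 2
    col_range = (psi.max(axis=0) - psi.min(axis=0)) ** 2
    return B1, row_range[:, None] + col_range[None, :]

psi = np.array([[1.0, 2.0],
                [3.0, 1.0]])
B1, B2 = minimax_risks(psi)
best = np.unravel_index(np.argmin(B1), B1.shape)  # pair minimizing B1
```

In this toy array the pair $(1,2)$ (zero-indexed `(0, 1)`) minimizes both pseudo-risks, as its row and column contrasts are smallest.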
One may also consider the following alternative criterion: $$\begin{aligned}
\begin{cases}
\dot{k} &=\arg\min_{k_1} \max_{\tilde k} \text{per}( k_1,\tk; k_1,\dot{\tilde k}),\\
\dot{\tilde k} &= \arg\min_{\tilde k_1} \max_{k} \text{per}( k,\tk_1; \dot{k},\tk_1).\\
\end{cases}\end{aligned}$$ However, there may not exist such a pair $(\dot{k},\dot{\tilde k})$. The proposed minimax criteria solve the optimization jointly and avoid this difficulty.
There may be different ways to define the pseudo-risk using different norms; e.g., the first kind $B^{(1)}_{k_1,\tk_1}$ can also be defined as $$\max_{\tk \in \mathcal K_2} \text{per}(k_1,\tk; k_1,\tk_1) + \max_{ k\in \mathcal K_1} \text{per}(k,\tk_1; k_1,\tk_1);$$ the second kind $B^{(2)}_{k_1,\tk_1}$ can also be defined as $$\sum_{\tk_0 \in \mathcal K_2} \max_{\tk \in \mathcal K_2} \text{per}(k_1,\tk; k_1,\tk_0) + \sum_{k_0\in \mathcal K_1} \max_{k\in \mathcal K_1} \text{per}(k,\tk_1; k_0,\tk_1).$$ The two proposed criteria represent two novel ideas: the first type is given by the overall maximum squared bias (i.e., change in the estimated functional) at a given candidate estimator, induced by perturbing one nuisance parameter at a time over candidate models while holding the other one fixed; the second type is given by the sum of two maximum squared bias terms, each capturing the bias induced by perturbing a single nuisance parameter only.
The second mixed-minimax criterion has a doubly robust property, i.e., $\psi_{\arg\min_{(k_1,\tk_1)} B^{(2)}_{k_1,\tk_1}}$ has zero bias if either nuisance model is consistently estimated by at least one candidate learner.
Multi-fold cross-validated estimator\[sec:cf\]
----------------------------------------------
Following [@vaart2006oracle], we avoid overfitting in implementing an empirical minimax selector by considering a multi-fold cross-validation scheme which repeatedly randomly splits the data $\mathcal O$ into two subsamples: a training set $\mathcal O^{0s}$ and a test set $\mathcal O^{1s}$, where $s$ refers to $s$-th split. The splits may be either deterministic or random without loss of generality. In the following, we consider random splits, whereby we let $T^s = (T_1^s,\cdots,T_n^s)\in \{0,1\}^n$ denote a random vector independent of $O_1,\cdots, O_n$. If $T_i^s=0$, $O_i$ belongs to the $s$-th training sample $\mathcal O^{0s}$; otherwise it belongs to the $s$-th test sample $\mathcal O^{1s}$, $s=1, \ldots,S$. For each $s$ and $(k,\tk)$, our construction uses the training samples to construct estimators $\widehat \pi_k^s$ and $\widehat E_\tk^s(Y|A,X)$. The validation sample is then used to estimate the perturbation defined in Equation , $$\begin{aligned}
\widehat {\text{per}}(k,\tk; k_1,\tk_1) =& \frac{1}{S}\sum_{s=1}^S \left[ (\widehat \psi_{k,\tk}^s - \widehat \psi_{k_1,\tk_1}^s )
\right]^{2},
\end{aligned}$$ where $$\widehat \psi_{k,\tk}^s = \PP^1_s
\left\{ \frac{\left( -1\right) ^{1-A}}{\widehat \pi^s_k\left( A|X\right) }\left\{
\begin{array}{c}
Y - \widehat E^s_{\tk}(Y| X,A)
\end{array}\right\} +\sum_a (-1)^{1-a}\widehat E^s_\tk(Y|a,X) \right\},$$ $$\widehat \psi_{k_1,\tk_1}^s = \PP^1_s
\left\{ \frac{\left( -1\right) ^{1-A}}{\widehat \pi^s_{k_1}\left( A|X\right) }\left\{
\begin{array}{c}
Y - \widehat E^s_{\tk_1}(Y| X,A)
\end{array}\right\} +\sum_a (-1)^{1-a}\widehat E^s_{\tk_1}(Y|a,X) \right\},$$ $$\PP^j_s= \frac{1}{\#\{1\leq i\leq n:T_i^s=j\}}\sum_{i:T_i^s=j} \delta_{X_i}, \quad j=0,1,$$ and $\delta_X$ is the Dirac measure. We then select the empirical minimizers of $$\widehat B^{(1)}_{k_1,\tk_1} = \max_{\substack{k=k_1,\tk \in \mathcal K_2;\\ \tk=\tk_1, k\in \mathcal K_1}} \widehat {\text{per}}(k,\tk; k_1,\tk_1)$$ and $$\widehat B^{(2)}_{k_1,\tk_1} = \max_{\tk_0 \in \mathcal K_2} \max_{\tk \in \mathcal K_2} \widehat {\text{per}}(k_1,\tk; k_1,\tk_0) + \max_{k_0\in \mathcal K_1} \max_{k\in \mathcal K_1} \widehat {\text{per}}(k,\tk_1; k_0,\tk_1)$$ among all candidate pairs $(k_1,\tk_1)$ as our final models, i.e., $$\begin{aligned}
(k^\dagger,\tk^\dagger) = \arg \min_{(k_1,\tk_1)}\widehat B^{(1)}_{k_1,\tk_1}, \label{eq:dagger} \\
(k^\diamond,\tk^\diamond) = \arg \min_{(k_1,\tk_1)}\widehat B^{(2)}_{k_1,\tk_1}.
\end{aligned}$$ The final estimators are $$\widehat \psi_{k^\dagger,\tk^\dagger}=\frac{1}{S}\sum_{s=1}^S \widehat \psi^s_{k^\dagger,\tk^\dagger},~~\widehat \psi_{k^\diamond,\tk^\diamond}=\frac{1}{S}\sum_{s=1}^S \widehat \psi^s_{k^\diamond,\tk^\diamond}.$$ We provide a high-level Algorithm 1 for the proposed selection procedure. We also define the cross-validated oracle selectors $$\begin{aligned}
(k^\star,\tk^\star) = \arg \min_{(k_1,\tk_1)} B^{(1)}_{k_1,\tk_1}, \label{eq:defstar1}\\
(k^\circ,\tk^\circ) = \arg \min_{(k_1,\tk_1)} B^{(2)}_{k_1,\tk_1}, \label{eq:defstar2}
\end{aligned}$$ where in a slight abuse of notation, we define the cross-validated pseudo-risk $$B^{(1)}_{k_1,\tk_1} = \max_{\substack{k=k_1,\tk \in \mathcal K_2;\\ \tk=\tk_1, k\in \mathcal K_1}} \frac{1}{S}\sum_{s=1}^S \left[ (\psi_{k,\tk}^s - \psi_{k_1,\tk_1}^s )
\right]^{2},$$ $$B^{(2)}_{k_1,\tk_1} = \max_{\tk_0 \in \mathcal K_2} \max_{\tk \in \mathcal K_2} \frac{1}{S}\sum_{s=1}^S \left[ (\psi_{k_1,\tk}^s - \psi_{k_1,\tk_0}^s )
\right]^{2} + \max_{k_0\in \mathcal K_1} \max_{k\in \mathcal K_1} \frac{1}{S}\sum_{s=1}^S \left[ (\psi_{k,\tk_1}^s - \psi_{k_0,\tk_1}^s )
\right]^{2},$$ $$\psi_{k,\tk}^s = \PP^1
\left\{ \frac{\left( -1\right) ^{1-A}}{\widehat \pi^s_k\left( A|X\right) }\left\{
\begin{array}{c}
Y - \widehat E^s_{\tk}(Y| X,A)
\end{array}\right\} +\sum_a (-1)^{1-a}\widehat E^s_\tk(Y|a,X) \right\},$$ $$\psi_{k_1,\tk_1}^s = \PP^1
\left\{ \frac{\left( -1\right) ^{1-A}}{\widehat \pi^s_{k_1}\left( A|X\right) }\left\{
\begin{array}{c}
Y - \widehat E^s_{\tk_1}(Y| X,A)
\end{array}\right\} +\sum_a (-1)^{1-a}\widehat E^s_{\tk_1}(Y|a,X) \right\},$$ and $\PP^1$ denotes the true measure of $\PP_s^1$.
For each pair $(k_1,\tk_1)$, average the perturbations over the splits and obtain $$\widehat {\text{per}}(k,\tk; k_1,\tk_1) = \frac{1}{S}\sum_{s=1}^S \left[ (\widehat \psi_{k,\tk}^s - \widehat \psi_{k_1,\tk_1}^s )
\right]^{2},$$ where $k=k_1,\tk \in \mathcal K_2$ or $\tk=\tk_1, k\in \mathcal K_1$. Calculate $$\widehat B^{(1)}_{k_1,\tk_1}=\max_{\substack{k=k_1,\tk \in \mathcal K_2;\\ \tk=\tk_1, k\in \mathcal K_1}} \wper(k,\tk,k_1,\tk_1),$$ $$\widehat B^{(2)}_{k_1,\tk_1} = \max_{\tk_0 \in \mathcal K_2} \max_{\tk \in \mathcal K_2} \widehat {\text{per}}(k_1,\tk; k_1,\tk_0) + \max_{k_0\in \mathcal K_1} \max_{k\in \mathcal K_1} \widehat {\text{per}}(k,\tk_1; k_0,\tk_1)$$ for each pair $(k_1,\tk_1)$. Pick $(k^\dagger,\tk^\dagger)=\arg\min_{(k,\tk)} \widehat B^{(1)}_{k,\tk}$ and $(k^\diamond,\tk^\diamond)=\arg\min_{(k,\tk)} \widehat B^{(2)}_{k,\tk}$ as our selected models, and obtain the estimates of the parameter averaged over the splits $$\widehat \psi_{k^\dagger,\tk^\dagger}=\frac{1}{S}\sum_{s=1}^S \widehat \psi^s_{k^\dagger,\tk^\dagger},~~\widehat \psi_{k^\diamond,\tk^\diamond}=\frac{1}{S}\sum_{s=1}^S \widehat \psi^s_{k^\diamond,\tk^\diamond};$$\
**Return** $(k^\dagger,\tk^\dagger)$, $(k^\diamond,\tk^\diamond)$ and $\widehat \psi_{k^\dagger,\tk^\dagger}$, $\widehat \psi_{k^\diamond,\tk^\diamond}$
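To make Algorithm 1 concrete, the per-split doubly robust (AIPW) estimates and the minimax selection step can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation; the array `psi` of per-split estimates for every candidate pair is assumed to have been produced by fitting the candidate learners on each training half.

```python
import numpy as np

def aipw(y, a, pi_hat, mu1_hat, mu0_hat):
    """Doubly robust (AIPW) estimate of E[Y_1 - Y_0] on one validation sample."""
    mu_a = np.where(a == 1, mu1_hat, mu0_hat)
    ipw = np.where(a == 1, 1.0 / pi_hat, -1.0 / (1.0 - pi_hat))
    return np.mean(ipw * (y - mu_a) + mu1_hat - mu0_hat)

def minimax_select(psi):
    """psi[s, k, t]: per-split AIPW estimates for propensity model k and
    outcome model t.  Returns the pair (k, t) minimizing the minimax
    perturbation criterion and the corresponding split-averaged estimate."""
    S, K1, K2 = psi.shape
    best, best_kt = np.inf, None
    for k1 in range(K1):
        for t1 in range(K2):
            # perturb one nuisance at a time, holding the other fixed
            per_k = np.mean((psi[:, :, t1] - psi[:, [k1], t1]) ** 2, axis=0)
            per_t = np.mean((psi[:, k1, :] - psi[:, k1, [t1]]) ** 2, axis=0)
            crit = max(per_k.max(), per_t.max())
            if crit < best:
                best, best_kt = crit, (k1, t1)
    k, t = best_kt
    return best_kt, psi[:, k, t].mean()
```

In this sketch a pair whose estimate is insensitive to swapping either nuisance learner attains criterion zero and is selected, mirroring the double robustness property that motivates the criterion.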
Theoretical results\[sec:theory\]
=================================
Optimality of oracle selectors\[sec:theory1\]
---------------------------------------------
In this section, we establish certain optimality properties of the minimax and mixed-minimax oracle pseudo-risk selectors defined by Equations and respectively. As we will later show by establishing excess risk bounds relating empirical selectors to their oracle counterparts, these optimality results imply near optimal behavior of the corresponding empirical (cross-validated) selector. In this vein, focusing on a functional of the doubly robust class with property , consider the collections of learners for $p$ and $b$ obtained from an independent sample of size $n$: $$\begin{aligned}
\mathcal{C}_{p} = \left\{ \widehat{p}_{1},\ldots,\widehat{p}_{K_{1}}\right\};~~
\mathcal{C}_{b} = \left\{ \widehat{b}_{1},\ldots,\widehat{b}_{K_{2}}\right\}.\end{aligned}$$For the purpose of inference, in the following, our analysis is conditional on $\mathcal{C}_{p}$ and $\mathcal{C}_{b}$. Suppose further that these learners satisfy the following assumptions.
\[asm:1\] Given any $\epsilon>0$, there exist constants $C_p, C_b>1$ and sufficiently large $n_0$ such that for $n>n_0$, $$\begin{aligned}
\frac{1}{C_{p}}\nu _{j}\leq \left\vert \widehat{p}_{j}(x)-p\left( x\right)
\right\vert \leq C_{p}\nu _{j},~~ j\in \mathcal K_1,\\
\frac{1}{C_{b}}\omega _{j}\leq \left\vert \widehat{b}_{j}(x)-b\left(
x\right) \right\vert \leq C_{b}\omega _{j},~~ j\in \mathcal K_2,\end{aligned}$$for any $x$ with probability $1-\epsilon$, where $\nu _{j}$ and $\omega_{j}$ depend on $n$.
In the following we write $a_n \lesssim b_n$ when there exists a constant $C>0$ such that $a_n \leq Cb_n$ for sufficiently large $n$. Without loss of generality, suppose that$$\nu _{\min }=\min_{j}\left\{ \nu _{j}:j \in \mathcal K_1\right\} \lesssim \omega
_{\min }=\min_{j}\left\{ \omega _{j}:j \in \mathcal K_2\right\}.$$ Let $$\begin{aligned}
\nu _{\max } =\max_{j}\left\{ \nu _{j}:j \in \mathcal K_1 \right\},~~ \omega _{\max } =\max_{j}\left\{ \omega _{j}:j \in \mathcal K_2\right\}.\end{aligned}$$
\[asm:2\] We assume that $\lim_{n\rightarrow \infty }\nu _{\max } < \infty$ and
$\lim_{n\rightarrow \infty }\omega _{\max } < \infty .$
\[asm:3\] Suppose that $$(p(X)- \widehat p_i(X))(b(X)- \widehat b_j(X))E[h_1(O)|X]$$ is continuous with respect to $X$ for $i\in \mathcal K_1$ and $j\in \mathcal K_2$. Furthermore, suppose that the support of $X$ is closed and bounded.
Assumption \[asm:1\] essentially states that the bias of $\widehat p_j$ and $\widehat b_j$ is eventually exactly of order $\nu_j$ and $\omega_j$ with large probability. Note that $\widehat p_j$ and $\widehat b_j$ may not necessarily be consistent, i.e., $\nu_j$ and $\omega_j$ may converge to a positive constant. Assumption \[asm:2\] guarantees that the bias of each learner does not diverge. Note also that Assumption \[asm:3\] need only hold for $i$ and $j$ such that $\nu_i=\nu_{\min}$ and $\omega_j=\omega_{\max}$ for Lemma \[lemma:rate\] below to hold for the minimax bias selector. Let $\psi_{k^\star,\tk^\star} =\PP^1 \{H(\widehat{p}_{k^\star},\widehat{b}_{\tk^\star})\}$ and $\psi_{k^\circ,\tk^\circ} =\PP^1 \{H(\widehat{p}_{k^\circ},\widehat{b}_{\tk^\circ})\}$, where $H(\cdot,\cdot)$ is defined in Equation , and $(k^\star,\tk^\star)$ and $(k^\circ,\tk^\circ)$ are defined in Equations and . We have the following lemma.
\[lemma:rate\] Under Assumptions \[asm:1\]-\[asm:3\], we have that the bias of the minimax oracle selector is of the order of:$$\left\vert \psi _{k^\star,\tk^\star}-\psi_0 \right\vert =O_{P}\left( \frac{\nu _{\max }}{\omega _{\max }}\omega _{\min }^{2}\right),$$while the bias of the mixed-minimax oracle selector is of the order of: $$\left\vert \psi _{k^\circ,\tk^\circ}-\psi_0
\right\vert =O_{P}\left( \nu _{\min }\omega _{\min }\right).$$
The above lemma implies that in the event that $\nu _{\max }/\omega _{\max }$ converges to a positive constant as $n \rightarrow \infty$ in probability, as would be the case, say, in a setting where at least one machine learner of both $p$ and $b$ fails to be consistent, the bias of the oracle minimax selector $\left\vert \psi _{k^\star,\tk^{\star}}-\psi_0
\right\vert $ is of the order of the maximum (comparing learners of $b$ to those of $p$) of the minimum (across learners of $b$ and $p$, respectively) squared bias, that is, the maximin squared bias of learners of $b$ and $p$, which under our assumptions is equal to $\omega _{\min }^{2}$. In this case, the minimax selector provides a guarantee of adaptation only up to the least favorable optimal learner across nuisance parameters, and therefore may fail to fully leverage the fact that a fast learner of one nuisance parameter may compensate for a slower learner of another. In contrast, the mixed-minimax selector can leverage such a gap to improve the estimation rate, so that in the above scenario its rate of estimation would be $\nu _{\min }\omega _{\min
}\leq \omega _{\min }^{2}$, with equality only if $\nu _{\min }=\omega _{\min
}$, that is, if one can learn $b$ and $p$ equally well.
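To make the gap concrete, consider a hypothetical rate configuration; the specific rates below are ours, chosen only for illustration and not taken from any result above:

```latex
% Suppose one learner of p attains \nu_{\min}=n^{-1/2} while another is
% inconsistent (\nu_{\max}\asymp 1), and the best learner of b attains
% \omega_{\min}=n^{-1/4} with the worst inconsistent (\omega_{\max}\asymp 1).
% Then the two oracle bias orders are
\frac{\nu_{\max}}{\omega_{\max}}\,\omega_{\min}^{2}=n^{-1/2}
\quad\text{(minimax)}
\qquad\text{vs.}\qquad
\nu_{\min}\,\omega_{\min}=n^{-3/4}
\quad\text{(mixed-minimax)},
```

so in this configuration the mixed-minimax selector exploits the fast learner of $p$, which the minimax selector cannot.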
Excess risk bound of the proposed minimax selector\[sec:theory2\]
-----------------------------------------------------------------
In this section, we focus on our first estimator, however, analogous results hold for the second estimator. We first introduce some notation used to study the excess risk bound of $\widehat \psi_{k^\dagger,\tk^\dagger}$. Define $U^s_{(k,\tk)}(k_1,\tk_1)$ as $$\begin{aligned}
&\frac{\left( -1\right) ^{1-A}}{\widehat \pi_k^s \left(
A|X\right) }Y -\left\{ \frac{\left( -1\right) ^{1-A}}{\widehat \pi_k^s \left( A|X\right) }\widehat E_\tk^s(Y|A,X)-\sum_{a}\left( -1\right) ^{1-a}\widehat E_\tk^s(Y|a,X)\right\} \nonumber \\
-& \frac{\left( -1\right) ^{1-A}}{\widehat \pi_{k_1}^s \left(
A|X\right) }Y -\left\{ \frac{\left( -1\right) ^{1-A}}{\widehat \pi_{k_1}^s \left( A|X\right) }\widehat E_{\tk_1}^s(Y|A,X)-\sum_{a}\left( -1\right) ^{1-a}\widehat E_{\tk_1}^s(Y|a,X)\right\}.
\label{eq:U}\end{aligned}$$ Based on our minimax selection criterion, Equations and are equivalently expressed as $$(k^\dagger,\tk^\dagger) = \arg\min_{(k_1,\tk_1)} \max_{\substack{k=k_1,\tk \in \mathcal K_2;\\ \tk=\tk_1, k\in \mathcal K_1}} \frac{1}{S}\sum_{s=1}^{S} [\PP^1_s\{U^s_{(k,\tk)}(k_1,\tk_1) \}]^2,$$ $$(k^\star,\tk^\star) = \arg\min_{(k_1,\tk_1)} \max_{\substack{k=k_1,\tk \in \mathcal K_2;\\ \tk=\tk_1, k\in \mathcal K_1}} \frac{1}{S} \sum_{s=1}^{S} [\PP^1 \{U^s_{(k,\tk)}(k_1,\tk_1) \} ]^2.$$
Next, we derive a risk bound for the empirically selected model $(k^\dagger,\tk^\dagger)$, which states that its risk is not much bigger than the risk of the oracle-selected model $(k^\star,\tk^\star)$. For this purpose, it is convenient to make the following boundedness assumption.
\[asm:positivity\] (1) $\pi(a|X)\geq M_1$ and $\widehat \pi_{k}(a|X)\geq M_1$ almost surely for $a=0,1$, $k \in \{1,\ldots,K_1\}$, and some $0<M_1<1$. (2) $|Y| \leq M_2$ and $|\widehat E_{\tk}(Y|A,X)| \leq M_2$ almost surely for $\tk \in \{1,\ldots,K_2\}$, and some $M_2>0$.
Suppose Assumption \[asm:positivity\] holds; then we have that $$\begin{aligned}
& \frac{1}{S}\sum_{s=1}^S \PP^0[\PP^1 \{U^s_{(k^\ddagger,\tk^\ddagger)}(k^\dagger,\tk^\dagger)\}]^2 \\
\leq & \frac{1+2\delta}{S}\sum_{s=1}^S \PP^0[\PP^1 \{U^s_{(k^\sstar,\tk^\sstar)}(k^\star,\tk^\star)\}]^2 + O\left(\frac{(1+\delta)\log(1+K_1^2 \times K_2^2)}{n^{1/p}} \left(\frac{1+\delta}{\delta}\right)^{(2-p)/p} \right),\end{aligned}$$ for any $\delta>0$, and $1\leq p\leq 2$, where $$(k^\ddagger,\tk^\ddagger)=\arg\max_{\substack{k=k^\dagger,\tk \in \mathcal K_2;\\ \tk=\tk^\dagger, k\in \mathcal K_1}} \frac{1}{S} \sum_{s=1}^S [\PP^1_s\{U^s_{(k,\tk)}(k^\dagger,\tk^\dagger)\}]^2,$$ $$(k^\sstar,\tk^\sstar)=\arg\max_{\substack{k=k^\star,\tk \in \mathcal K_2;\\ \tk=\tk^\star, k\in \mathcal K_1}} \frac{1}{S}\sum_{s=1}^S [\PP^1 \{U^s_{(k,\tk)}(k^\star,\tk^\star)\}]^2,$$ and $\PP^0$ denotes the true measure of $\PP_s^0$. \[thm:2\]
The proof of this result is based on a finite sample inequality for $$\begin{aligned}
\frac{1}{S}\sum_{s=1}^S \PP^0[\PP^1 \{U^s_{(k^\ddagger,\tk^\ddagger)}(k^\dagger,\tk^\dagger)\}]^2 - \frac{1}{S}\sum_{s=1}^S \PP^0[\PP^1 \{U^s_{(k^\sstar,\tk^\sstar)}(k^\star,\tk^\star)\}]^2,\end{aligned}$$ the excess pseudo-risk of our selected model compared to the oracle’s, which requires an extension of Lemma 8.1 of [@vaart2006oracle] to handle second order U-statistics. We obtained such an extension by making use of a powerful exponential inequality for the tail probability of the maximum of a large number of second order U-statistics derived by [@10.1007/978-1-4612-1358-1_2]. Note that Theorem \[thm:2\] generalizes to the doubly robust functionals in the class of [@rotnitzky2019mix], with Equation replaced by $IF^s_{k,\tk}(\widehat \psi^s_{k_1,\tk_1})-IF^s_{k_1,\tk_1}(\widehat \psi^s_{k_1,\tk_1})$ in the definition of $U^s_{(k,\tk)}(k_1,\tk_1)$, where $IF^s_{k,\tk}(\widehat \psi^s_{k_1,\tk_1})$ is an influence function of $\psi$ evaluated at nuisance parameters $(k,\tk)$ and $\widehat \psi^s_{k_1,\tk_1}$ solves $\PP_1^s IF^s_{k_1,\tk_1}(\widehat \psi^s_{k_1,\tk_1})=0$ (see Algorithm 2 in the Appendix for details).
The bound given in Theorem \[thm:2\] is valid for any $\delta>0$, so that the error incurred by the empirical risk is of order $n^{-1}$ for any fixed $\delta$ if $p=1$, suggesting in this case that our cross-validated selector performs nearly as well as an oracle selector with access to the true pseudo-risk. Theorem \[thm:2\] is of most interest in nonparametric/machine learning settings where the pseudo-risk can be of order substantially larger than $O(n^{-1})$, in which case the error made in selecting the optimal learner is negligible relative to the risk. By allowing $\delta_n \rightarrow 0$, the choice $p=2$ gives an error of order $n^{-1/2}$, which may be substantially larger. If we write the bound in the form of [$$\begin{aligned}
\frac{\sum_{s=1}^S \PP^0[\PP^1 \{U^s_{(k^\ddagger,\tk^\ddagger)}(k^\dagger,\tk^\dagger)\}]^2}{\sum_{s=1}^S \PP^0[\PP^1 \{U^s_{(k^\sstar,\tk^\sstar)}(k^\star,\tk^\star)\}]^2} \leq (1+2\delta) + O\left(\frac{n^{-1/p}S(1+\delta)^{2/p}\log(1+K_1^2 \times K_2^2)}{{\sum_{s=1}^S \PP^0[\PP^1 \{U^s_{(k^\sstar,\tk^\sstar)}(k^\star,\tk^\star)\}]^2 \delta^{(2-p)/p}}} \right),\end{aligned}$$ ]{}the risk ratio converges to 1 with the remainder of order $n^{-1/2}$. Furthermore, the derived excess risk bound holds for as many as $c^{n^\gamma}$ models ($K_1\times K_2$) for any $\gamma<1$ and $c>0$.
If $\widehat \pi_k(X)\rightarrow \pi(X)$ and $\widehat E_\tk(Y|A,X) \rightarrow E(Y|A,X)$ in probability for some pair $(k,\tk)$, then $[\PP^1 \{U^s_{(k^\sstar,\tk^\sstar)}(k^\star,\tk^\star)\}]^2$ converges to zero as $n \rightarrow \infty$ by Lemma \[lemma:1\]; otherwise $\lim_{n\rightarrow \infty}[\PP^1 \{U^s_{(k^\sstar,\tk^\sstar)}(k^\star,\tk^\star)\}]^2>0$ and we select the model/estimator that is nearest to satisfying the double robustness property.
A theorem analogous to Theorem \[thm:2\] holds for the mixed-minimax selector; although the details are omitted, the proof is essentially the same.
Simulation studies\[sec:simulations\]
=====================================
In Section \[sec:simu1\], we first consider two different settings in the context of (possibly misspecified) parametric models as illustrative examples of the proposed approach. Next, in Section \[sec:simu3\], we evaluate the proposed approach in the setting of primary interest, where modern machine learning methods are used to estimate the nuisance parameters, and compare it with double debiased machine learning (DDML) [@Chernozhukov2018] applied to each candidate machine learner of the nuisance parameters. For each setting, we use three-fold cross-validation, i.e., $S=3$ with the $T_i$'s drawn from Bernoulli(0.5).
Illustrative examples with parametric candidate models {#sec:simu1}
------------------------------------------------------
Consider the following five-variate functional forms [@zhao2017selective] as potential candidate parametric models, $$\begin{aligned}
f_1(x) &= \Big((x_1-0.5)^2, (x_2-0.5)^2, \ldots, (x_{5}-0.5)^2\Big)^T,\\
f_2(x) &= \Big((x_1-0.5)^3, (x_2-0.5)^3, \ldots, (x_{5}-0.5)^3\Big)^T,\\
f_3(x) &= \Big(x_1,x_2, \ldots, x_{5}\Big)^T,\\
f_4(x) &= \Big(\frac{1}{1+\exp(-20(x_1-0.5))}, \ldots, \frac{1}{1+\exp(-20(x_5-0.5))}\Big)^T.\end{aligned}$$
In the first simulation setting, the true model is not included as a candidate model and all working models are misspecified, whereas in the second setting, the true model is included as a candidate model. Each simulation was repeated 500 times. For each setting, the covariates $X_i$ were independently generated from a uniform distribution, and the outcome noise was normal with mean 0 and standard deviation $0.1$. In the first scenario, the data were generated from $$\begin{aligned}
\text{logit}\Pr \left( A=1|X\right) &=&(1,-1,1,-1,1)f_1(X),\\
E\left( Y|A,X\right) &=&1 + \mathbbm{1}^Tf_1(X)+ \mathbbm{1}^Tf_1(X) A+ A,\end{aligned}$$ where $ \mathbbm{1}=(1,1,1,1,1)^T$. In the second scenario, the data were generated from $$\begin{aligned}
\text{logit}\Pr \left( A=1|X\right) & =& (1,-1,1,-1,1)f_2(X),\\
E\left( Y|A,X\right) &= & 1 + \mathbbm{1}^Tf_2(X)+ \mathbbm{1}^Tf_2(X) A+ A.\end{aligned}$$ For both scenarios, we used $\{f_2,f_3,f_4\}$ as candidate models of $g$ and $h$ specified in Equations and .
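As a concrete reference, the first scenario's data-generating process can be sketched as follows. This is a minimal NumPy sketch; the function names are ours.

```python
import numpy as np

def f1(x):
    """Elementwise quadratic feature map (x - 0.5)^2; rows are observations."""
    return (x - 0.5) ** 2

def generate_scenario1(n, rng):
    """Draw (X, A, Y) from Scenario 1: uniform covariates, logistic
    propensity with alternating coefficient signs, and linear outcome
    E(Y|A,X) = 1 + 1'f1(X) + 1'f1(X) A + A with sd-0.1 Gaussian noise."""
    x = rng.uniform(size=(n, 5))
    signs = np.array([1.0, -1.0, 1.0, -1.0, 1.0])
    logit = f1(x) @ signs
    a = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
    mean_y = 1.0 + f1(x).sum(axis=1) * (1 + a) + a
    y = mean_y + rng.normal(scale=0.1, size=n)
    return x, a, y
```

The candidate feature maps $f_2$, $f_3$, $f_4$ would be coded analogously and swapped into working models in place of $f_1$.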
The squared bias of $\widehat \psi$ (bias$^2 \times 10^{-4}$) for the two scenarios is shown in Figures \[fig:1\] and \[fig:3\], respectively. “Minimizer” refers to the “squared bias minimizer” given by Equation ; “Oracle1” and “Oracle2” refer to the oracle minimax and mixed-minimax selectors evaluated by Lemma \[lemma:1\], respectively; “Proposed1” and “Proposed2” refer to the proposed minimax and mixed-minimax selectors, respectively; “Separate” refers to the more conventional practical approach which performs model selection separately for each nuisance parameter via AIC [@akaike1974new]; “Truth” refers to using the underlying true candidate models for estimation.
In the first scenario, because the true model is not a candidate model, there is a notable gap between the squared bias of the estimators obtained from working models and those estimated directly from the true models. In addition, because the “squared bias minimizer” minimizes the squared bias, it naturally has a smaller Monte Carlo squared bias than the proposed criteria. Note that we do not expect the “squared bias minimizer” and the proposed approach to perform similarly even asymptotically, because they minimize different objective functions, and recall that the former may not be attainable in this specific setting. The proposed method has a smaller bias than selecting models separately. However, this may not always be the case, because the proposed method does not necessarily minimize the squared bias directly. Additional simulations in the Appendix illustrate this point. In the second scenario, the gap observed in the first scenario vanishes asymptotically, and both proposed methods perform nearly as well as both the oracle and “squared bias minimizer” selectors in large samples, as illustrated in Figure \[fig:3\]. The mixed-minimax selector appears to perform somewhat better than the minimax selector in small to moderate samples.
![Squared bias of Scenario 1[]{data-label="fig:1"}](biasplot1.pdf "fig:"){width="4in" height="2.5in"}\
![Squared bias of Scenario 2[]{data-label="fig:3"}](biasplot2.pdf "fig:"){width="4in" height="2.5in"}\
Model selection with machine learners {#sec:simu3}
-------------------------------------
Finally, we report simulation results for the setting of primary interest, where various machine learners are used to form candidate estimators of the nuisance parameters. We considered the following machine learning methods. For the propensity score model: 1. Logistic regression with $l_1$ regularization [@lasso; @friedman2010regularization]; 2. Classification random forests [@Breiman2001; @randomForest]; 3. Gradient boosting trees [@friedman2001greedy; @gbm2019]. For the outcome model: 1. Lasso [@lasso; @friedman2010regularization]; 2. Regression random forests [@Breiman2001; @randomForest]; 3. Gradient boosting trees. Data were generated from $$\begin{aligned}
\text{logit}\Pr \left( A=1|X\right) &=&(1,-1,1,-1,1)^Tf_4(X),\\
E\left( Y|A,X\right) &=&1 + \mathbbm{1}^Tf_4(X)+ \mathbbm{1}^Tf_4(X) A+ A.\end{aligned}$$ The outcome error term was normal with mean 0 and standard deviation $0.25$.
Implementing the candidate estimators required selecting the corresponding tuning parameters: the penalty $\lambda_n$ for Lasso was chosen using 10-fold cross-validation over the pre-specified grid $[10^{10},\ldots,10^{-2}]$; for gradient boosting trees [@gbm2019], all parameters were tuned using 4-fold cross-validation over the following grid: `ntrees`=\[100,300\], `depth`=\[1,2,3,4\], `shrinkage`=\[0.001,0.01,0.1\]; we used the default values of the minimum node size (1 for classification, 5 for regression) and the number of trees (500) for random forests [@randomForest], while the number of variables randomly sampled at each split, i.e., `mtry`, was tuned by the `tuneRF` function [@randomForest].
We compared the proposed estimators with three DDML estimators using Lasso, random forests, and gradient boosting trees to estimate the nuisance parameters, respectively [@Chernozhukov2018]. Each DDML estimator was computed by cross-fitting [@Chernozhukov2018], i.e., 1) using the training data (a random half of the sample) to estimate the nuisance parameters and the validation data to obtain $\widehat \psi_1$; 2) swapping the roles of the training and validation datasets to obtain $\widehat \psi_2$; 3) computing the estimator as the average $\widehat \psi_{\text{CF}} = (\widehat \psi_1 + \widehat \psi_2)/2$. The squared bias of $\widehat \psi$ (bias$^2 \times 10^{-4}$) for the different methods is shown in Figure \[fig:ml\]. “LASSO” refers to using logistic regression with $l_1$ regularization for the propensity score model and standard Lasso for the outcome model; “RF” refers to using classification forests for the propensity score model and regression forests for the outcome model; “GBT” refers to using gradient boosting classification trees for the propensity score model and gradient boosting regression trees for the outcome model. Both proposed estimators have the smallest bias across sample sizes, and there is a notable gap between the proposed estimators and those estimated by DDML. It is not surprising that Lasso has the largest bias because the working models are not correctly specified. This confirms our earlier claim that, combined with flexible nonparametric/machine learning methods, our proposed approach can in finite samples yield a smaller squared bias than any given machine learning estimator, without directly estimating the corresponding squared bias.
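The two-fold cross-fitting scheme in steps 1)–3) can be sketched as follows. This is a minimal sketch, not the DDML reference implementation; the `fit_pi`/`fit_mu` learner interfaces are our assumptions and stand in for arbitrary machine learners.

```python
import numpy as np

def cross_fit(x, a, y, fit_pi, fit_mu, rng):
    """Two-fold cross-fitted AIPW estimate of E[Y_1 - Y_0]:
    fit nuisances on one half, evaluate on the other, swap, average.

    fit_pi(x, a) -> callable mapping x to estimated P(A=1|X=x);
    fit_mu(x, y) -> callable mapping x to estimated E[Y|X=x] (fit per arm)."""
    n = len(y)
    first = rng.permutation(n) < n // 2
    psis = []
    for tr, te in [(first, ~first), (~first, first)]:
        pi = fit_pi(x[tr], a[tr])(x[te])
        mu1 = fit_mu(x[tr][a[tr] == 1], y[tr][a[tr] == 1])(x[te])
        mu0 = fit_mu(x[tr][a[tr] == 0], y[tr][a[tr] == 0])(x[te])
        yt, at = y[te], a[te]
        ipw = np.where(at == 1, 1.0 / pi, -1.0 / (1.0 - pi))
        mu_at = np.where(at == 1, mu1, mu0)
        psis.append(np.mean(ipw * (yt - mu_at) + mu1 - mu0))
    return float(np.mean(psis))
```

In practice `fit_pi` and `fit_mu` would wrap, e.g., penalized regression or tree-ensemble learners with the tuning described above.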
![Squared bias of the proposed estimator and different machine learners[]{data-label="fig:ml"}](bias1.pdf "fig:"){width="4in" height="2.5in"}\
A smooth-max approach to post-selection approximate inference {#sec:softmax}
=============================================================
In this section, we propose a novel smooth-max approach as smooth approximation to proposed minimax and mixed-minimax model selection criteria. Such smooth approximation provides a genuine opportunity to perform valid post-selection inference, appropriately accounting for uncertainty in both selecting and estimating nuisance parameters. It is well known that the following smooth-max function $$\begin{aligned}
\Gamma(\tau) = \frac{1}{\tau} \log \sum_{i=1}^{m} \exp(\tau z_i),\end{aligned}$$ converges to $ \max(z_1,\ldots,z_m)$ as $\tau \rightarrow \infty$, where $z_1,\ldots,z_m$ are positive real numbers. Similarly, we define $$\begin{aligned}
\Gamma_{k_1,\tk_1}(\tau) = \frac{1}{\tau} \log \sum_{\substack{k=k_1,\tk \in \mathcal K_2;\\ \tk=\tk_1, k\in \mathcal K_1}} \exp\{\frac{\tau}{S}\sum_{s=1}^{S}(\widehat \psi^s_{k,\tk} -\widehat \psi^s_{k_1,\tk_1})^2 \},\end{aligned}$$ for minimax selector and $$\begin{aligned}
\Gamma_{k_1,\tk_1}(\tau) =& \frac{1}{\tau} \log \sum_{\tk,\tk_0 \in \mathcal K_2} \exp\{\frac{\tau}{S}\sum_{s=1}^{S}(\widehat \psi^s_{k_1,\tk} -\widehat \psi^s_{k_1,\tk_0})^2 \} \\+& \frac{1}{\tau} \log \sum_{ k,k_0 \in \mathcal K_1} \exp\{\frac{\tau}{S}\sum_{s=1}^{S}(\widehat \psi^s_{k,\tk_1} -\widehat \psi^s_{k_0,\tk_1})^2 \},\end{aligned}$$ for mixed-minimax selector. Note that $$\begin{aligned}
\Gamma_{k_1,\tk_1}(\tau) \rightarrow \max_{\substack{k=k_1,\tk \in \mathcal K_2;\\ \tk=\tk_1, k\in \mathcal K_1}} \frac{1}{S}\sum_{s=1}^S (\widehat \psi^s_{k,\tk} -\widehat \psi^s_{k_1,\tk_1})^2,
\label{eq:limit1}
\end{aligned}$$ and $$\begin{aligned}
\Gamma_{k_1,\tk_1}(\tau) \rightarrow \max_{\tk,\tk_0 \in \mathcal K_2} \frac{1}{S}\sum_{s=1}^S (\widehat \psi^s_{k_1,\tk} -\widehat \psi^s_{k_1,\tk_0})^2 + \max_{ k,k_0 \in \mathcal K_1} \frac{1}{S}\sum_{s=1}^S (\widehat \psi^s_{k,\tk_1} -\widehat \psi^s_{k_0,\tk_1})^2,
\label{eq:limit2}
\end{aligned}$$ as $\tau \rightarrow \infty$.
Recall that our goal is to select the model $(k_1,\tk_1)$ minimizing the right hand side of Equations or , which is then used to estimate $\psi_0$. A smooth-max approximation to this selection process is given by $$\begin{aligned}
\widehat \psi(\tau) = \sum_{(k_1,\tk_1)} p_{k_1,\tk_1}(\tau) \widehat \psi_{k_1,\tk_1},\end{aligned}$$ where $$\begin{aligned}
p_{k_1,\tk_1}(\tau)
= \frac{\exp\{ \tau [\sum_{(k,\tk)} \Gamma_{k,\tk}(\tau)-\Gamma_{k_1,\tk_1}(\tau)] \}}{ \sum_{(k',\tk')} \exp\{ \tau [\sum_{(k,\tk)} \Gamma_{k,\tk}(\tau)-\Gamma_{k',\tk'}(\tau)] \} },\end{aligned}$$ and $$\begin{aligned}
\widehat \psi_{k_1,\tk_1}=\frac{1}{S}\sum_{s=1}^S \widehat \psi^s_{k_1,\tk_1}.\end{aligned}$$ Because $\widehat \psi(\tau)$ is a smooth transformation of $\widehat \psi_{k_1,\tk_1}$, $k_1=1,\ldots,K_1$, $\tk_1=1,\ldots,K_2$, the justification for this smooth approximation is that $p_{k_1, \tk_1}(\tau) \rightarrow 1$ as $\tau \rightarrow \infty$ if $(k_1, \tk_1)=\arg\min_{k,\tk}\Gamma_{k,\tk}(\tau)$, and otherwise $p_{k_1, \tk_1}(\tau) \rightarrow 0$. Given a fixed $\tau$, inference can be carried out with the nonparametric bootstrap or with sandwich asymptotic variance estimators via the delta method. In the following section, we use this smooth approximation for post-selection inference in a data application.
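In implementation, the common factor involving $\sum_{(k,\tk)}\Gamma_{k,\tk}(\tau)$ cancels from the numerator and denominator, so $p_{k_1,\tk_1}(\tau)$ reduces to a softmin over the criterion values. A minimal sketch, with the flattened array `crit` of $\Gamma_{k,\tk}(\tau)$ values assumed precomputed:

```python
import numpy as np

def smoothmax_weights(crit, tau):
    """Softmin weights p_{k,t}(tau) over candidate pairs; the weight of the
    pair with the smallest criterion tends to 1 as tau -> infinity."""
    crit = np.asarray(crit, dtype=float)
    z = -tau * (crit - crit.min())  # shift by the min for numerical stability
    w = np.exp(z)
    return w / w.sum()

def smoothed_estimate(psi_hat, crit, tau):
    """Smooth-max approximation to the minimax-selected estimate."""
    return float(np.sum(smoothmax_weights(crit, tau) * np.asarray(psi_hat)))
```

Because the weights are a smooth function of the split-averaged estimates, the resulting estimator can be bootstrapped or delta-method differentiated at fixed $\tau$.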
The above claim certainly holds when $K_1$ and $K_2$ are bounded, and may be established even when $K_1$ and $K_2$ grow with $n$ by a high-dimensional central limit theorem [@chernozhukov2017], although we do not formally prove this here. Assuming $K_1$ and $K_2$ are bounded is not a real limitation, particularly in machine learning settings, as most practical applications similar to that in the next section will likely be limited to a small to moderate number of machine learners. Furthermore, unlike in the previous section, our proposed approach for post-selection inference technically requires that each $\widehat \psi_{k,\tk}$ admits a first order influence function, which is a condition that, as previously mentioned, may still hold even when candidate estimators include nonparametric regression or flexible machine learning methods, provided that these estimators are consistent at rates faster than $n^{-1/4}$ [@robins2017; @Chernozhukov2018].
To conclude this section, we briefly discuss the selection of $\tau$. The choice of $\tau$ essentially determines how well the smooth-max function approximates the minimax estimator, as captured by the following inequality, $$\begin{aligned}
\max\{z_1,\ldots,z_m\}\leq \frac{1}{\tau} \log \sum_{i=1}^m \exp(\tau z_i) \leq \frac{1}{\tau} \log m + \max\{z_1,\ldots,z_m\},\end{aligned}$$ which holds for any positive real numbers $z_1,\ldots,z_m$. Thus, the approximation error of $\Gamma_{k,\tk}(\tau)$ is controlled by $\epsilon = \log m/\tau$.
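So given a tolerance $\epsilon$, one may set $\tau = \log m/\epsilon$. A quick numerical check of the inequality (the values of `z` below are illustrative only):

```python
import numpy as np

def smooth_max(z, tau):
    """Log-sum-exp upper approximation to max(z), computed stably."""
    z = np.asarray(z, dtype=float)
    m = z.max()
    return m + np.log(np.sum(np.exp(tau * (z - m)))) / tau

z = [0.3, 0.7, 0.2, 0.5]
eps = 0.01
tau = np.log(len(z)) / eps  # guarantees smooth_max(z, tau) - max(z) <= eps
gap = smooth_max(z, tau) - max(z)  # lies in [0, eps]
```

Shifting by the maximum before exponentiating avoids overflow for large $\tau$ without changing the value.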
Data Analysis\[sec:real\]
=========================
In this section, similarly to [@tan2006; @2015biasreduce; @tan2019model; @tan2019regularized], we reanalyze data from the Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments (SUPPORT) to evaluate the effectiveness of right heart catheterization (RHC) in the intensive care unit for critically ill patients. At the time of the study by [@5c6af36c0fb64cfcbb482d75c2bc7ff1], many physicians believed that RHC led to better patient outcomes. [@5c6af36c0fb64cfcbb482d75c2bc7ff1] found that RHC leads to lower survival compared to not performing RHC.
We consider the effect of RHC on 30-day survival. Data are available on 5735 individuals, 2184 treated and 3551 controls. In total, 3817 patients survived and 1918 died within 30 days. To estimate the additive treatment effect $\psi_0 = E\{Y_1-Y_0\}$, 72 covariates were used to adjust for potential confounding [@Hirano2001]. We posited $K_1=3$ candidate models/estimators for the propensity score model including all 72 covariates: 1. Logistic regression with $l_1$ regularization; 2. Classification random forests; 3. Gradient boosting trees. We posited $K_2=6$ candidate estimators for the outcome model $E_\tk\left( Y|A,X\right)$ with 72 covariates: 1. Linear regression; 2. Logistic regression; 3. Lasso; 4. Logistic regression with $l_1$ regularization; 5. Regression random forests; 6. Classification random forests. The proposed selection procedure was implemented with three-fold cross-validation. Tuning parameters were selected as in Section \[sec:simu3\].
The proposed minimax criterion selected the gradient boosting trees estimator for the propensity score model and logistic regression with $l_1$ regularization for the outcome model. The proposed mixed-minimax criterion selected the classification random forest estimator for the propensity score model and the regression random forest for the outcome model. The estimated causal effect of RHC was $-0.0548$ and $-0.0476$ under the minimax and mixed-minimax criteria, respectively, while the point estimates obtained by the smooth-max approach were $-0.0528$ and $-0.0483$, which, as expected, are close to the corresponding minimax point estimates. The results were somewhat smaller than those of other improved estimators considered by [@2015biasreduce], who did not perform model selection. Specifically, the TMLE with the default super learner [@van2011targeted] gave $\widehat \psi_{\text{TMLE-SL}} = - 0.0586$; the bias-reduced doubly robust estimation with linear and logit links gave $\widehat \psi_{\text{BR},\text{lin}}=-0.0612$ and $\widehat \psi_{\text{BR},\text{logit}}=-0.0610$, respectively; the calibrated likelihood estimator [@tan2010] gave $\widehat \psi_{\text{TAN}}=-0.0622$. Our estimates suggest that previous estimates may still be subject to a small amount of misspecification bias. To obtain valid confidence intervals, we applied the proposed smooth-max approach with error tolerance $\epsilon =0.002$. Smooth-max based 95% confidence intervals obtained by the nonparametric bootstrap with 200 replications were $(-0.1041,-0.0301)$ and $(-0.1083,-0.0277)$ for the minimax and mixed-minimax criteria, respectively, which are slightly wider than those of other improved estimators considered in [@2015biasreduce], e.g., the targeted maximum likelihood estimation with the default super learner gave $(-0.0877, -0.0295)$; the bias-reduced doubly robust estimator with linear and logit links gave $(-0.0889,-0.0335)$ and $(-0.0879,-0.0340)$, respectively; the calibrated likelihood estimator gave $(-0.0924,-0.0319)$.
This is not surprising because we consider a richer class of models and formally account for the selection step, potentially resulting in a smaller bias and more accurate confidence intervals.
Discussion\[sec:discussion\]
============================
We have proposed a general model selection approach for estimating a functional $\psi(\theta)$ in a general class of doubly robust functionals which admit an estimating equation that is unbiased if at least one of two nuisance parameters is correctly specified. The proposed method works by selecting the candidate model based on a minimax or mixed-minimax criterion of the pseudo-risk defined in terms of the doubly robust estimating equation. A straightforward cross-validation scheme was proposed to estimate the pseudo-risk. While, throughout the paper, we have described and evaluated the proposed selection procedure primarily using estimation of the average treatment effect as a running example, in the appendix all results are extended to the more general class of mixed-bias functionals of [@rotnitzky2019mix]. Extensive simulation studies and a real data example on the effectiveness of right heart catheterization in the intensive care unit of critically ill patients were also presented to illustrate the proposed approach.
As mentioned in Remark \[remark:rm2\], our selection criteria extend to multiply robust influence functions in the sense of [@tchetgentchetgen2012; @wang2018bounded; @Caleb2019; @Shi2019MultiplyRC; @sun2019multiple], where three or more nuisance parameters are needed to evaluate the influence function, however, the influence function remains unbiased if all but one of the nuisance parameters are evaluated at the truth. Briefly, in such setting, the minimax criterion entails the maximum squared change of the functional over all perturbations of one nuisance parameter holding the others fixed. The mixed-minimax selector likewise generalizes. We expect our theoretical results to extend to this setting, an application of which is in progress [@sun2019multiple].
The proposed methods may be improved or extended in multiple ways. The choice of the criterion could be more flexible, and one may use a norm other than the $L_\infty$ norm, e.g., the $L_2$ or $L_1$ norm. Another potential extension of our method is in the direction of statistical inference. It would be both interesting and important to derive the exact asymptotic distribution of the proposed estimators, as originally described in Section \[sec:select\], instead of relying on a smooth approximation. Finally, in principle one could develop a stacked generalization of our proposed approach by forming linear combinations of various candidate estimators of nuisance parameters [@wolpert1992stacked; @breiman1996bagging]. An optimal estimator could then be obtained by minimizing the pseudo-risk for the functional of interest with respect to the weights. Clearly, the current minimax approach explores only a small set of possible values for such weights, i.e., all values that have unit mass at one candidate model and zero elsewhere, and therefore may be sub-optimal relative to the stacked generalization. Because candidate learners may yield estimates that are highly correlated, one may need a form of regularization to ensure good performance, such as restricting the set of weights to a finite support (in the spirit of [@van2007super]), or alternatively penalizing the pseudo-risk. We are currently investigating theoretical properties of such stacked minimax functional learning, which we plan to report elsewhere.
Acknowledgment {#acknowledgment .unnumbered}
==============
We thank James Robins, Andrea Rotnitzky, and Weijie Su for helpful discussions and suggestions. We thank Karel Vermeulen and Stijn Vansteelandt for providing the dataset. This research is supported in part by U.S. National Institutes of Health grants.
Proofs
======
In this section, we present proofs of the theoretical results.
[**Proof of Lemma \[lemma:1\].**]{}
We have that
$$\begin{aligned}
\left[ \psi_{k_1,\tk} - \psi_{k_1,\tk_1}\right]^2 &= E \Bigg[\frac{\left( -1\right) ^{1-A}}{\pi_{k_1}\left(
A|X\right) }Y -\left\{ \frac{\left( -1\right) ^{1-A}}{\pi_{k_1} \left( A|X\right) }
E_\tk(Y|A,X)-\sum_{a}\left( -1\right) ^{1-a}E_\tk(Y|a,X)\right\} \\
&- \frac{\left( -1\right) ^{1-A}}{\pi_{k_1}\left(
A|X\right) }Y + \left\{ \frac{\left( -1\right) ^{1-A}}{\pi_{k_1}\left( A|X\right) }
E_{\tk_1}(Y|A,X)-\sum_{a}\left( -1\right) ^{1-a}E_{\tk_1}(Y|a,X)\right\} \Bigg]^2\\
&= E\Bigg[\frac{\left( -1\right) ^{1-A}}{\pi_{k_1}\left(
A|X\right) } \left\{ E_{\tk_1}(Y|A,X)-E_{\tk}(Y|A,X) \right\} \\
& + \sum_{a}\left( -1\right) ^{1-a}\left\{ E_\tk(Y|a,X)-E_{\tk_1}(Y|a,X)\right\} \Bigg]^2\\
& = E\left[\sum_{a}\left( -1\right) ^{1-a} \left\{\frac{\pi\left(
a|X\right) }{\pi_{k_1}\left( a|X\right) }-1\right\} \left\{E_{\tk_1}(Y|a,X)-E_{\tk}(Y|a,X)\right\}\right]^2.\end{aligned}$$
$$\begin{aligned}
\left[\psi_{k,\tk_1} - \psi_{k_1,\tk_1}\right]^2 &= E \Bigg[\frac{\left( -1\right) ^{1-A}}{\pi_{k}\left(
A|X\right) }Y -\left\{ \frac{\left( -1\right) ^{1-A}}{\pi_{k} \left( A|X\right) }
E_{\tk_1}(Y|A,X)-\sum_{a}\left( -1\right) ^{1-a}E_{\tk_1}(Y|a,X)\right\} \\
&- \frac{\left( -1\right) ^{1-A}}{\pi_{k_1}\left(
A|X\right) }Y + \left\{ \frac{\left( -1\right) ^{1-A}}{\pi_{k_1}\left( A|X\right) }
E_{\tk_1}(Y|A,X)-\sum_{a}\left( -1\right) ^{1-a}E_{\tk_1}(Y|a,X)\right\} \Bigg]^2\\
&= E\Bigg[\left\{\frac{\left( -1\right) ^{1-A}}{\pi_{k}\left(
A|X\right)}-\frac{\left( -1\right)^{1-A}}{\pi_{k_1}\left(
A|X\right)}\right\} E(Y|A,X) \\
&- \left\{\frac{\left( -1\right) ^{1-A}}{\pi_{k}\left(
A|X\right)}-\frac{\left( -1\right)^{1-A}}{\pi_{k_1}\left(
A|X\right)}\right\} E_{\tk_1}(Y|A,X)\Bigg]^2\\
&= E\left[\sum_{a}\left( -1\right) ^{1-a} \left\{\frac{\pi\left(
a|X\right) }{\pi_k\left( a|X\right) }- \frac{\pi\left(
a|X\right) }{\pi_{k_1}\left( a|X\right) } \right\} \left\{E(Y|a,X)-E_{\tk_1}(Y|a,X)\right\}\right]^2.\quad \Box\end{aligned}$$
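The first identity in this proof can be verified mechanically on a toy discrete model. The Python sketch below is purely illustrative: the covariate grid, the candidate propensity score $\pi_{k_1}$, and the candidate outcome regressions are hypothetical stand-ins, and all expectations are computed exactly by summation over the grid (since $Y$ enters the estimating function linearly, it can be replaced by its true conditional mean).

```python
import numpy as np

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy discrete model: X uniform on a finite grid. Every model below is a
# hypothetical stand-in chosen only to exercise the algebraic identity.
xs = np.linspace(-2.0, 2.0, 41)
wx = np.full(xs.size, 1.0 / xs.size)        # P(X = x)
p1 = expit(xs)                              # true pi(1|x); pi(0|x) = 1 - p1
pk1 = expit(0.5 * xs + 0.2)                 # candidate pi_{k_1}(1|x)
b0, b1 = xs, xs + 1.0                       # true E(Y|a, x) for a = 0, 1
t0, t1 = np.sin(xs), np.sin(xs) + 0.5       # candidate E_{tk}(Y|a, x)
u0, u1 = 0.1 * xs**2, 0.1 * xs**2 + 1.2     # candidate E_{tk_1}(Y|a, x)

def psi(e0, e1):
    """Expectation of the doubly robust estimating function with propensity
    model pk1 and outcome model (e0, e1); Y enters linearly, so it is
    replaced by its true conditional mean b(a, x) inside the expectation."""
    val = (p1 / pk1 * (b1 - e1)
           - (1.0 - p1) / (1.0 - pk1) * (b0 - e0)
           + (e1 - e0))
    return float(np.sum(wx * val))

# Left side: [psi_{k1,tk} - psi_{k1,tk1}]^2.
lhs = (psi(t0, t1) - psi(u0, u1)) ** 2
# Right side: squared expectation over X of the displayed integrand,
# sum_a (-1)^{1-a} (pi(a|x)/pi_{k1}(a|x) - 1)(E_{tk1}(Y|a,x) - E_{tk}(Y|a,x)).
integrand = ((p1 / pk1 - 1.0) * (u1 - t1)
             - ((1.0 - p1) / (1.0 - pk1) - 1.0) * (u0 - t0))
rhs = float(np.sum(wx * integrand)) ** 2
```

With exact summation the two sides agree to machine precision, mirroring the algebra above.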
[**Proof of Lemma \[lemma:rate\].**]{}
Without loss of generality, we focus on $S=1$ and the proof of cross-validated oracle selector follows similarly. Let $$k^{\max }=\arg \max_{k\in \mathcal K_1} \text{per}\left( k,\tk^\star,k^\star,\tk^\star\right) ^{1/2},$$ $$\tk^{\max}=\arg \max_{\tk\in \mathcal K_2} \text{per}\left( k^\star,\tk,k^\star,\tk^\star\right) ^{1/2}.$$ By Assumptions \[asm:1\]-\[asm:3\] and the mean value theorem, there exist constants $C_{0},C_{1},$ and a value $x_{0}$ in the support of $X$ such that$$\begin{aligned}
&&\max_{\substack{ k=k^\star,\tk\in \mathcal K_2 \\ \tk = \tk^\star,k\in \mathcal K_1}}\text{per}\left(
k,\tk,k^\star,\tk^\star\right) ^{1/2}\\
&=&\max_{\tk=\tk^\star,k\in \mathcal K_1}\text{per}\left(
k,\tk,k^\star,\tk^\star \right) ^{1/2} \\
&=&\left\vert \left\{ \int \left( \widehat{p}_{k^\star}(x)-\widehat{p}_{k^{\max }}\left( x\right) \right) \left( \widehat{b}_{\tk^\star}(x)-b(x)\right) E[h_1(O)|X=x] \right\} dF\left( x\right) \right\vert \\
&=&\left\vert \left\{ \int \left( \widehat{p}_{k^\star}(x)-\widehat{p}_{k^{\max }}\left( x\right) \right) \left( \widehat{b}_{1}(x)-b(x)\right)E[h_1(O)|X=x] \right\} dF\left( x\right) \right\vert \\
&=&\left\vert \left( \widehat{p}_{k^\star}(x_{0})-\widehat{p}_{k^{\max
}}\left( x_0 \right) \right) \left( \widehat{b}_{1}(x_{0})-b(x_{0})\right) E[h_1(O)|X=x_0]
\right\vert \\
&=&C_{0}\nu _{\max }\omega _{\min } \\
&\geq &
\max_{k=k^\star,\tk\in \mathcal K_2}\text{per}\left(
k,\tk,k^\star,\tk^\star\right) ^{1/2}\\
&=&\left\vert \left\{ \int
\left( \widehat{p}_{k^\star}(x)-p\left( x\right) \right) \left( \widehat{b}_{1}(x)-\widehat{b}_{\tk^{\max }}(x)\right) E[h_1(O)|X=x] \right\} dF\left( x\right)
\right\vert \\
&=&C_{1}\nu _{k^\star}\omega _{\max }.\end{aligned}$$Therefore $$\nu _{k^\star}\leq \frac{C_{0}}{C_{1}}\nu _{\max }\frac{\omega _{\min }}{\omega _{\max }}.$$Implied by Equation and the mean value theorem, there exists a positive constant $C_{2}$ and a value $x^{\ast }$ in the support of $X$ such that $$\begin{aligned}
\left\vert \psi _{k^\star,\tk^\star}-\psi_0 \right\vert &=&\left\vert \int
\left( \widehat{p}_{k^\star}(x)-p\left( x\right) \right) \left( \widehat{b}_{1}(x)-b(x)\right) E[h_1(O)|X=x] dF\left( x\right) \right\vert \\
&=&C_{2}\left\vert \left( \widehat{p}_{k^\star}(x^{\ast })-p\left( x^{\ast
}\right) \right) \left( \widehat{b}_{1}(x^{\ast })-b(x^{\ast })\right) E[h_1(O)|X=x^*]
\right\vert \\
&\lesssim &\nu_{k^\star}\omega _{\min } \\
&\lesssim &\frac{\nu _{\max }}{\omega _{\max }}\omega _{\min }^{2}.\end{aligned}$$For $\psi _{k^\circ,\tk^\circ}$, it is straightforward to verify that, by Equation and the mean value theorem, there exists a positive constant $\overline{C}_{2}$ and a value $\overline{x}$ in the support of $X$ such that $$\begin{aligned}
\left\vert \psi _{k^\circ,\tk^\circ}-\psi_0
\right\vert &=&\overline{C}_{2}\left\vert \left( \widehat{p}_{k^\circ}(\overline{x})-p\left( \overline{x}\right) \right) \left( \widehat{b}_{\tk^\circ}(\overline{x})-b(\overline{x})\right) \right\vert \\
&=&O_{P}\left( \nu _{\min }\omega _{\min }\right).\end{aligned}$$
[**Proof of Theorem \[thm:2\].**]{}
By the definition of our estimator, $$\max_{\substack{k=k^\dagger,\tk \in \mathcal K_2;\\ \tk=\tk^\dagger, k\in \mathcal K_1}} \sum_{s=1}^{S} [\PP^1_s\{U^s_{(k,\tk)}(k^\dagger,\tk^\dagger)\}]^2 \leq \max_{\substack{k=k^\star,\tk \in \mathcal K_2;\\ \tk=\tk^\star, k\in \mathcal K_1}} \sum_{s=1}^{S} [\PP^1_s(U^s_{(k,\tk)}(k^\star,\tk^\star))]^2.$$ So we have that, $$\sum_{s=1}^{S} [\PP^1_s\{U^s_{(k^\ddagger,\tk^\ddagger)}(k^\dagger,\tk^\dagger)\}]^2 \leq \sum_{s=1}^{S} [\PP^1_s(U^s_{(k^\vartriangle,\tk^\vartriangle)}(k^\star,\tk^\star))]^2,$$ where $$(k^\ddagger,\tk^\ddagger)=\arg\max_{\substack{k=k^\dagger,\tk \in \mathcal K_2;\\ \tk=\tk^\dagger, k\in \mathcal K_1}} \sum_{s=1}^{S} [\PP^1_s(U^s_{(k,\tk)}(k^\dagger,\tk^\dagger))]^2,$$ and $$(k^\vartriangle,\tk^\vartriangle)=\arg\max_{\substack{k=k^\star,\tk \in \mathcal K_2;\\ \tk=\tk^\star, k\in \mathcal K_1}} \sum_{s=1}^{S} [\PP^1_s(U^s_{(k,\tk)}(k^\star,\tk^\star))]^2.$$
We denote $n^s_j = \#\{1\leq i\leq n:T_i^s=j\},$ $j=0,1$. For simplicity, in a slight abuse of notation in the following, we use $(k,\tk)$ instead of $(k^\ddagger,\tk^\ddagger), (k^\vartriangle,\tk^\vartriangle)$ for the sub-indices of $U$, i.e., $U^s_{(k,\tk)}(k^\dagger,\tk^\dagger)$ denotes $U^s_{(k^\ddagger,\tk^\ddagger)}(k^\dagger,\tk^\dagger)$; $U^s_{(k,\tk)}(k^\star,\tk^\star)$ denotes $U^s_{(k^\vartriangle,\tk^\vartriangle)}(k^\star,\tk^\star)$. By simple algebra, we have $$\begin{aligned}
& [\PP^1_s (U^s_{(k,\tk)}(k^\dagger,\tk^\dagger))]^2 \\ =& \frac{1}{{n_1^s}^2} \sum_{i,j} [U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_j ]\\
= &\frac{1}{{n_1^s}^2} \sum_{i,j} \Big ( U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i - \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i \big] \Big) \Big ( U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_j - \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_j \big] \Big)\\
+ & \frac{1}{{n_1^s}^2} \sum_{i,j} \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i \big] U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_j\\
+ & \frac{1}{{n_1^s}^2} \sum_{i,j} \Big ( U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i - \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i \big] \Big) \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_j \big]\end{aligned}$$ $$\begin{aligned}
= & \frac{1}{{n_1^s}^2} \sum_{i,j} \Big(\PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i \big] \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_j \big] \Big) \\
+ & \frac{2}{{n_1^s}^2} \sum_{i,j} \Big ( U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i - \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i \big] \Big) \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_j \big]\\
+ & \frac{1}{{n_1^s}^2} \sum_{i,j} \Big ( U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i - \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i \big] \Big) \Big ( U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_j - \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_j \big] \Big),\end{aligned}$$ where $U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i$ denotes the estimating equation evaluated at the $i$-th observation. Thus, $[\PP^1_s (U^s_{(k,\tk)}(k^\dagger,\tk^\dagger))]^2$ further equals $$\begin{aligned}
& [\PP^1_s (U^s_{(k,\tk)}(k^\dagger,\tk^\dagger))]^2\\
= & [\PP^1 \big( U^s_{k,\tk} ({k^\dagger,\tk^\dagger}) \big)]^2 \\
+ & \frac{2}{{n_1^s}^2} \sum_{i,j} \Big ( U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i - \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i \big] \Big) \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_j \big]\\
+ & \frac{1}{{n_1^s}^2} \sum_{i,j} \Big ( U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i - \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i \big] \Big) \Big ( U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_j - \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_j \big] \Big).\end{aligned}$$ A similar decomposition holds for $[\PP_s^1(U^s_{(k,\tk)}({k^\star,\tk^\star}))]^2$. By definition of our estimator, for any $\delta>0$, we have that $$\begin{aligned}
& \sum_{s=1}^S [\PP^1 (U^s_{(k,\tk)}({k^\dagger,\tk^\dagger}))]^2 \\
\leq &(1+2\delta)\sum_{s=1}^S[\PP^1 (U^s_{(k,\tk)}({k^\star,\tk^\star}))]^2\\
+ & \frac{1}{\sqrt{\nn}} \Big\{ (1+\delta)\sqrt{\nn} \sum_{s=1}^S\Big[ \big(\PP_s^1(U^s_{(k,\tk)}({k^\star,\tk^\star})) \big)^2 -\big( \PP^1 (U^s_{(k,\tk)}({k^\star,\tk^\star})) \big)^2 \Big] \\ &- \delta \sqrt{\nn}\sum_{s=1}^S \Big[ \PP^1 (U^s_{(k,\tk)}({k^\star,\tk^\star})) \Big]^2 \Big\}\\
-& \frac{1}{\sqrt{\nn}} \Big\{ (1+\delta)\sqrt{\nn} \sum_{s=1}^S \Big[ \big(\PP_s^1(U^s_{(k,\tk)}({k^\dagger,\tk^\dagger})) \big)^2 -\big( \PP^1(U^s_{(k,\tk)}({k^\dagger,\tk^\dagger})) \big)^2 \Big] \\ &+ \delta \sqrt{\nn} \sum_{s=1}^S \Big[ \PP^1 (U^s_{(k,\tk)}({k^\dagger,\tk^\dagger})) \Big]^2 \Big\}.\\\end{aligned}$$ Combined with the decomposition of $[\PP_s^1(U^s_{(k,\tk)}({k^\dagger,\tk^\dagger}))]^2$ and $[\PP_s^1(U^s_{(k,\tk)}({k^\star,\tk^\star}))]^2$, we further have that $$\begin{aligned}
&\sum_{s=1}^S [\PP^1(U^s_{(k,\tk)}({k^\dagger,\tk^\dagger}))]^2 \\
\leq &(1+2\delta)\sum_{s=1}^S [\PP^1(U^s_{(k,\tk)}({k^\star,\tk^\star}))]^2\\
+ & \frac{1}{\sqrt{n_1^s}} \Big\{ (1+\delta)\sqrt{n^s_1} \sum_{s=1}^S \Big[ \frac{2}{n_1^s} \sum_{i} \Big ( U^s_{k,\tk} ({k^\star,\tk^\star})_i - \PP^1 \big[ U^s_{k,\tk} ({k^\star,\tk^\star})_i \big] \Big) \PP^1 \big[ U^s_{k,\tk} (\widehat \psi_{k^\star,\tk^\star}) \big] \\
+ & \frac{1}{{n_1^s}^2} \sum_{i, j} \Big ( U^s_{k,\tk} ({k^\star,\tk^\star})_i - \PP^1 \big[ U^s_{k,\tk} ({k^\star,\tk^\star})_i \big] \Big) \Big ( U^s_{k,\tk} ({k^\star,\tk^\star})_j- \PP^1 \big[ U^s_{k,\tk} ({k^\star,\tk^\star})_j \big] \Big)\Big] \\ -& \delta \sqrt{n_1^s}\sum_{s=1}^S \Big[ \PP^1(U^s_{(k,\tk)}({k^\star,\tk^\star})) \Big]^2 \Big\}\\
- & \frac{1}{\sqrt{n_1^s}} \Big\{ (1+\delta)\sqrt{n^s_1}\sum_{s=1}^S \Big[ \frac{2}{n_1^s} \sum_{i} \Big ( U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i - \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i \big] \Big) \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger}) \big] \\
+ & \frac{1}{{n_1^s}^2} \sum_{i, j} \Big ( U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i - \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_i \big] \Big) \Big ( U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_j - \PP^1 \big[ U^s_{k,\tk} ({k^\dagger,\tk^\dagger})_j \big] \Big)\Big] \\+& \delta \sqrt{n_1^s} \sum_{s=1}^S \Big[ \PP^1(U^s_{(k,\tk)}({k^\dagger,\tk^\dagger})) \Big]^2 \Big\}.
\end{aligned}$$
Note that the only assumption on $s$ is its stochastic independence of the observations, so we omit the superscript $s$ hereinafter. Because the maximum of a sum is at most the sum of the maxima, we deal with the first-order and second-order terms separately. By Lemma \[lemma:0\], and noting that under Assumption \[asm:positivity\] the estimating equation is bounded, we further have the following bound for the first-order term, $$\begin{aligned}
& \PP^0 \max_{k,\tk,k_1,\tk_1} \Big\{ \frac{2(1+\delta)\sqrt \nn}{\nn} \sum_{i} \Big ( U_{k,\tk} ({k_1,\tk_1})_i - \PP^1 \big[ U_{k,\tk} ({k_1,\tk_1})_i \big] \Big) \PP^1 \big( U_{k,\tk} ({k_1,\tk_1}) \big) \\
- & \delta \sqrt \nn \Big[ \PP^1 \big( U_{k,\tk} ({k_1,\tk_1}) \big) \Big]^2 \Big\} \\
\leq & \PP^0 \frac{16(1+\delta)}{\nn^{1/p-1/2}} \log(1+K^2_1K^2_2)\max_{k,\tk,k_1,\tk_1}\bigg [\frac{|| U_{k,\tk} ({k_1,\tk_1}) \PP^1 \big[ U_{k,\tk} ({k_1,\tk_1}) \big] ||_\infty}{\nn^{1-1/p}}\\
+& \bigg(\frac{3 \PP^1 \Big(U_{k,\tk} ({k_1,\tk_1})\PP^1 \big[ U_{k,\tk} ({k_1,\tk_1}) \big] \Big)^2 2^{(1-p)}(1+\delta)^{(2-p)}}{\delta^{2-p} \Big[ \PP^1 \big( U_{k,\tk} ({k_1,\tk_1}) \big) \Big]^{4-2p}} \bigg)^{1/p} \bigg ] = (I),\end{aligned}$$ and $$\begin{aligned}
& \PP^0 \max_{k,\tk,k_1,\tk_1} \Big\{-\Big[ \frac{2(1+\delta)\sqrt \nn}{\nn} \sum_{i} \Big ( U_{k,\tk} ({k_1,\tk_1})_i - \PP^1 \big[ U_{k,\tk} ({k_1,\tk_1})_i \big] \Big) \PP^1 \big( U_{k,\tk} ({k_1,\tk_1}) \big)\\
+& \delta \sqrt \nn \Big[ \PP^1 \big( U_{k,\tk} ({k_1,\tk_1}) \big) \Big]^2 \Big] \Big\}
\leq (I),\end{aligned}$$ where $\max$ is taken on the set $\{ k_1\in \{1,\ldots,K_1\},\tk_1 \in \{1,\ldots,K_2\}, k=k_1~\text{or}~ \tk = \tk_1\}$. Thus, $\PP^0 [\PP^1 (U_{(k,\tk)}(k^\dagger,\tk^\dagger))]^2$ is further bounded by $$\begin{aligned}
&\PP^0 [\PP^1 (U_{(k,\tk)}({k^\dagger,\tk^\dagger}))]^2 \\
\leq &(1+2\delta)\PP^0 [\PP^1 (U_{(k,\tk)}({k^\star,\tk^\star}))]^2 + \frac{2}{\sqrt \nn}(I)\\ + & \frac{1+\delta}{\nn^{2}} \PP^0 \sum_{i, j} \Big ( U_{k,\tk} ({k^\star,\tk^\star})_i - \PP^1 \big[ U_{k,\tk} ({k^\star,\tk^\star})_i \big] \Big) \Big ( U_{k,\tk} ({k^\star,\tk^\star})_j - \PP^1 \big[ U_{k,\tk} ({k^\star,\tk^\star})_j \big] \Big) \\
-& \frac{1+\delta}{\nn^{2}} \PP^0 \sum_{i, j} \Big ( U_{k,\tk} ({k^\dagger,\tk^\dagger})_i - \PP^1 \big[ U_{k,\tk} ({k^\dagger,\tk^\dagger})_i \big] \Big) \Big( U_{k,\tk} ({k^\dagger,\tk^\dagger})_j - \PP^1 \big[ U_{k,\tk} ({k^\dagger,\tk^\dagger})_j \big] \Big).\end{aligned}$$
By Lemma \[lemma:2\] and \[lemma:3\], the U-statistics are bounded and we have the bound of the risk of our selector $({k^\dagger,\tk^\dagger})$, $$\begin{aligned}
&\PP^0[\PP^1(U_{(k^\ddagger,\tk^\ddagger)}({k^\dagger,\tk^\dagger}))]^2 \\
\leq &(1+2\delta)\PP^0[\PP^1(U_{(k^\vartriangle,\tk^\vartriangle)}({k^\star,\tk^\star}))]^2 \\
+ & (1+\delta)C \Bigg\{ \left( \frac{2M}{n_1^2} \log(1+\frac{MK^2_1 K^2_2}{2} ) \right)^{1/2} + \frac{2M}{n_1} \log(1+\frac{MK^2_1 K^2_2}{2} ) \\
+ & \frac{4M^{3/2}}{n_1^{3/2}} \log^{3/2}(1+\frac{MK^2_1K^2_2}{2}+D_0)+ \frac{4 M^2}{n_1^2} \log^{2}(1+\frac{MK^2_1K^2_2}{2}+D_1) \Bigg\}\\
+ & \PP^0 \frac{16(1+\delta)}{\nn^{1/p}} \log(1+K^2_1K^2_2)\max_{k,\tk,k_1,\tk_1}\bigg [\frac{|| U_{k,\tk} ({k_1,\tk_1})\PP^1 \big[ U_{k,\tk} ({k_1,\tk_1}) \big] ||_\infty}{\nn^{1-1/p}} \\
+& \bigg(\frac{3\PP^1 \Big(U^2_{k,\tk} ({k_1,\tk_1}) \Big) 2^{(1-p)}(1+\delta)^{(2-p)}}{\delta^{2-p} \Big[ \PP^1 \big( U_{k,\tk} ({k_1,\tk_1}) \big) \Big]^{2-2p}} \bigg)^{1/p} \bigg ],\\
\end{aligned}$$ where $C$, $M$, $D_0$, and $D_1$ are some universal constants.
Finally, recall that for the term $(1+2\delta)\PP^0[\PP^1 (U_{(k^\vartriangle,\tk^\vartriangle)}({k^\star,\tk^\star}))]^2$, $(k^\vartriangle,\tk^\vartriangle)$ is chosen corresponding to $(k^\star,\tk^\star)$ under measure $\PP_s^1$. It is further bounded by $(1+2\delta)\PP^0[\PP^1 (U_{(k^\sstar,\tk^\sstar)}({k^\star,\tk^\star}))]^2$, where $(k^\sstar,\tk^\sstar)$ is chosen corresponding to $(k^\star,\tk^\star)$ under true measure $\PP^1$. $\Box$
[(Lemma 2.2 in [@vaart2006oracle])]{}\[lemma:0\] Assume that $E f\geq 0$ for every $f \in \cal F$. Then for any $1 \leq p \leq 2$ and $\delta>0$, we have that $$\begin{aligned}
E\max_{f \in \mathcal F}( \GG_n - \delta \sqrt{n} E) f \leq \frac{8}{n^{1/p-1/2}} \log(1+\#{\cal F}) \max_{f\in \mathcal F} [ \frac{M(f)}{n^{1-1/p}}+(\frac{v(f)}{(\delta E f)^{2-p}})^{1/p} ],\end{aligned}$$ where $\GG_n$ is the empirical process of the $n$ i.i.d. observations, and $(M(f),v(f))$ is any pair of Bernstein numbers of a measurable function $f$ such that $$M(f)^2 E\Big(\exp\{ |f|/M(f)\} -1-|f|/M(f)\Big)\leq \frac{1}{2}v(f).$$ Furthermore, if $f$ is uniformly bounded, then $(||f||_\infty,1.5E f^2)$ is a pair of Bernstein numbers.
$$\begin{aligned}
& \Pr\Bigg\{ \frac{1}{n_1^2} \sum_{1\leq i_1, i_2 \leq n_1} h(O_{i_1},O_{i_2})\geq x \Bigg\}\\
\leq & \frac{M}{2}\exp\Bigg\{ -\frac{1}{M} \min\bigg( \frac{x^2 n_1^2}{Eh^2}, \frac{xn_1}{||h||_{L_2\rightarrow L_2}} ,\frac{x^{2/3} n_1}{[( ||E_{O_1}h^2||_\infty + ||E_{O_2}h^2||_\infty)]^{1/3}},\frac{x^{1/2}n_1}{||h||_\infty^{1/2}} \bigg) \Bigg\},\end{aligned}$$
and $$\begin{aligned}
& \Pr \Bigg\{ -\frac{1}{n_1^2} \sum_{1\leq i_1, i_2 \leq n_1} h(O_{i_1},O_{i_2})\geq x \Bigg\}\\
\leq & \frac{M}{2}\exp\Bigg\{ -\frac{1}{M} \min\bigg( \frac{x^2 n_1^2}{Eh^2}, \frac{xn_1}{||h||_{L_2\rightarrow L_2}} ,\frac{x^{2/3} n_1}{[( ||E_{O_1}h^2||_\infty + ||E_{O_2}h^2||_\infty)]^{1/3}},\frac{x^{1/2}n_1}{||h||_\infty^{1/2}} \bigg) \Bigg\}.\end{aligned}$$ where $M$ is some universal constant, and $||h||_{L_2\rightarrow L_2}$ is defined as $$||h||_{L_2\rightarrow L_2}=\sup\{ E [h(O_1,O_2)a(O_1)c(O_2)]: E(a^2(O_1))\leq 1, E(c^2(O_2))\leq 1 \}.$$ \[lemma:2\]
[**Proof of Lemma \[lemma:2\].**]{}
The inequality follows directly from Corollary 3.4 in [@10.1007/978-1-4612-1358-1_2].
$$\begin{aligned}
& E\max_{h\in \cal H}\Bigg\{ \frac{1}{n_1^2} \sum_{1\leq i_1, i_2 \leq n_1} h(O_{i_1},O_{i_2}) \Bigg\}\\
\leq & \left( 2M \log(1+\frac{M \# \cal H}{2} ) \max_{h} \frac{Eh^2}{ n_1^2} \right)^{1/2} \\
+ & 2M \log(1+\frac{M\# \cal H}{2} ) \max_{h} \frac{||h||_{L_2\rightarrow L_2}}{n_1 } \\
+ & 4 M^{3/2} \log^{3/2}(1+\frac{M\# \cal H}{2}+D_0) \max_{h} \frac{||E_{O_1}h^2||_\infty^{1/2}}{n_1^{3/2}} \\
+ & 4 M^2 \log^{2}(1+\frac{M\# \cal H}{2}+D_1) \max_{h}\frac{ ||h||_\infty}{n_1^2},\end{aligned}$$
and the same result holds for $E\max_{h\in \cal H} \Bigg\{- \frac{1}{n_1^2} \sum_{1\leq i_1, i_2 \leq n_1} h(O_{i_1},O_{i_2}) \Bigg\}$. \[lemma:3\]
[**Proof of Lemma \[lemma:3\].**]{}
Denote $$\begin{aligned}
\omega_{n_1}(x)=\min\bigg( \frac{x^2 n_1^2}{Eh^2}, \frac{xn_1}{||h||_{L_2\rightarrow L_2}} ,\frac{x^{2/3} n_1}{[ 2||E_{O_1}h^2||_\infty]^{1/3}},\frac{x^{1/2}n_1}{||h||_\infty^{1/2}} \bigg),\end{aligned}$$ and the following four sets:
$\Omega_{1,n_1}(h)=\{x: \omega_{n_1}(x)= \frac{ x^2 n_1^2}{ Eh^2} \}$
$\Omega_{2,n_1}(h)=\{x: \omega_{n_1}(x)= \frac{xn_1}{||h||_{L_2\rightarrow L_2}} \}$
$\Omega_{3,n_1}(h)=\{x: \omega_{n_1}(x)= \frac{x^{2/3}n_1}{[ 2||E_{O_1}h^2||_\infty]^{1/3}} \}$
$\Omega_{4,n_1}(h)=\{x: \omega_{n_1}(x)= \frac{ x^{1/2}n_1}{||h||_\infty^{1/2}} \}$
Then $$\begin{aligned}
&\Pr\Bigg\{ \bigg( \frac{1}{n_1^2} \sum_{1\leq i_1, i_2 \leq n_1} h(O_{i_1},O_{i_2}) \bigg) I_{\Omega_{1,n_1}}\geq x \Bigg\}
\leq \frac{1}{2}M\exp\Bigg\{- \frac{1}{M} \frac{ x^2 n_1^2}{ Eh^2} \Bigg\} ,\\
&\Pr\Bigg\{ \bigg( \frac{1}{n_1^2} \sum_{1\leq i_1, i_2 \leq n_1} h(O_{i_1},O_{i_2})\bigg) I_{\Omega_{2,n_1}}\geq x \Bigg\}\leq \frac{1}{2}M\exp\Bigg\{- \frac{1}{M} \frac{xn_1}{||h||_{L_2\rightarrow L_2}} \Bigg\},\\
& \Pr\Bigg\{ \bigg( \frac{1}{n_1^2} \sum_{1\leq i_1, i_2 \leq n_1} h(O_{i_1},O_{i_2}) \bigg) I_{\Omega_{3,n_1}}\geq x \Bigg\}\leq \frac{1}{2}M\exp\Bigg\{- \frac{1}{M} \frac{x^{2/3}n_1}{[ 2||E_{O_1}h^2||_\infty]^{1/3}} \Bigg\},\\
&\Pr\Bigg\{ \bigg( \frac{1}{n_1^2} \sum_{1\leq i_1, i_2 \leq n_1} h(O_{i_1},O_{i_2})\bigg) I_{\Omega_{4,n_1}}\geq x \Bigg\} \leq \frac{1}{2}M\exp\Bigg\{- \frac{1}{M} \frac{x^{1/2}n_1}{||h||_\infty^{1/2}}\Bigg\}.\end{aligned}$$
Then by the above inequalities and Lemma 8.1 in [@vaart2006oracle], $$\begin{aligned}
& E\max_{h}\Bigg\{ \frac{1}{n_1^2} \sum_{1\leq i_1, i_2 \leq n_1} h(O_{i_1},O_{i_2}) \Bigg\}\\
\leq & \left( 2M \log(1+\frac{M\# \cal H}{2} ) \max_{h} \frac{Eh^2}{ n_1^2} \right)^{1/2} \\
+ & 2M \log(1+\frac{M\# \cal H}{2} ) \max_{h} \frac{||h||_{L_2\rightarrow L_2}}{n_1 } \\
+ & 4 M^{3/2} \log^{3/2}(1+\frac{M\# \cal H}{2}+D_0) \max_{h} \frac{||E_{O_1}h^2||_\infty^{1/2}}{n_1^{3/2}} \\
+ & 4 M^2 \log^{2}(1+\frac{M\# \cal H}{2}+D_1) \max_{h}\frac{ ||h||_\infty}{n_1^2},\end{aligned}$$ where $D_0$ and $D_1$ are some universal constants. $\Box$\
1.  For each pair $(k_1,\tk_1)$, average the perturbations over the splits and obtain $$\widehat {\text{per}}(k,\tk; k_1,\tk_1) = \frac{1}{S}\sum_{s=1}^S \left[ \PP_s^1( IF^s_{k,\tk}(\widehat \psi^s_{k_1,\tk_1}) - IF^s_{k_1,\tk_1}(\widehat \psi^s_{k_1,\tk_1}) )
    \right]^{2},$$ where $k=k_1,\tk \in \{1,\ldots, K_2\}$ or $\tk=\tk_1, k\in \{1,\ldots, K_1\}$.

2.  For each pair $(k_1,\tk_1)$, calculate $$\widehat B^{(1)}_{k_1,\tk_1}=\max_{\substack{k=k_1,\tk \in \{1,\ldots, K_2\};\\ \tk=\tk_1, k\in \{1,\ldots, K_1\}}} \wper(k,\tk,k_1,\tk_1),$$ $$\widehat B^{(2)}_{k_1,\tk_1} = \max_{\tk_0 \in \mathcal K_2} \max_{\tk \in \mathcal K_2} \widehat {\text{per}}(k_1,\tk; k_1,\tk_0) + \max_{k_0\in \mathcal K_1} \max_{k\in \mathcal K_1} \widehat {\text{per}}(k,\tk_1; k_0,\tk_1).$$

3.  Pick $(k^\dagger,\tk^\dagger)=\arg\min_{(k,\tk)} \widehat B^{(1)}_{k,\tk}$ and $(k^\diamond,\tk^\diamond)=\arg\min_{(k,\tk)} \widehat B^{(2)}_{k,\tk}$ as the selected models, and obtain the estimates of the parameter averaged over the splits $$\widehat \psi_{k^\dagger,\tk^\dagger}=\frac{1}{S}\sum_{s=1}^S \widehat \psi^s_{k^\dagger,\tk^\dagger},~~\widehat \psi_{k^\diamond,\tk^\diamond}=\frac{1}{S}\sum_{s=1}^S \widehat \psi^s_{k^\diamond,\tk^\diamond}.$$

4.  **Return** $(k^\dagger,\tk^\dagger)$, $(k^\diamond,\tk^\diamond)$ and $\widehat \psi_{k^\dagger,\tk^\dagger}$, $\widehat \psi_{k^\diamond,\tk^\diamond}$.
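A minimal numerical sketch of the selection steps above, assuming the averaged perturbations $\widehat{\text{per}}$ have already been computed and stored in an array `wper[k, tk, k1, tk1]`; the array, shapes, and names are hypothetical placeholders, not the paper's implementation.

```python
import numpy as np

def minimax_select(wper):
    """Given wper[k, tk, k1, tk1], the split-averaged squared perturbations,
    return the minimax pick (k_dagger, tk_dagger) and the mixed-minimax
    pick (k_diamond, tk_diamond)."""
    K1, K2 = wper.shape[0], wper.shape[1]
    B1 = np.empty((K1, K2))
    B2 = np.empty((K1, K2))
    for k1 in range(K1):
        for t1 in range(K2):
            # B^(1): perturb one nuisance model at a time, the other held fixed.
            B1[k1, t1] = max(wper[k1, :, k1, t1].max(),   # k = k1, vary tk
                             wper[:, t1, k1, t1].max())   # tk = tk1, vary k
            # B^(2): additionally maximize over the anchoring model tk0 (resp. k0).
            B2[k1, t1] = (wper[k1, :, k1, :].max()
                          + wper[:, t1, :, t1].max())
    kd, td = np.unravel_index(int(B1.argmin()), B1.shape)
    km, tm = np.unravel_index(int(B2.argmin()), B2.shape)
    return (int(kd), int(td)), (int(km), int(tm))
```

The final estimate would then average $\widehat\psi^s$ over the splits at the selected indices, as in step 3.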
[**Additional simulations.**]{} In this section, we present more simulations in which all models are misspecified. In both scenarios, the data were generated from $$\begin{aligned}
\text{logit}\Pr \left( A=1|X\right) &=&(1,-1,1,-1,1)f_1(X),\\
E\left( Y|A,X\right) &=&1 + \mathbbm{1}^Tf_1(X)+ \mathbbm{1}^Tf_1(X) A+ A.\end{aligned}$$ For the first scenario, we used $\{f_2,f_3,f_4, f_5\}$ as candidate models of $g$ and $h$ specified in Equations and , where $$f_5(x) = \Big(\text{cos}(\pi x_1), \ldots, \text{cos}(\pi x_5)\Big)^T.$$ For the second scenario, we used $\{f_2,f_3,f_4, f_6\}$ as candidate models of $g$ and $h$ specified in Equations and , where $$f_6(x) = \Big(\text{cos}(\pi x_1/2), \ldots, \text{cos}(\pi x_5/2)\Big)^T.$$
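A hedged Python sketch of the data-generating process above; the basis $f_1$ and the covariate and noise laws are defined elsewhere in the paper (not shown in this excerpt), so the identity basis, uniform covariates, and unit-variance Gaussian noise below are purely illustrative assumptions.

```python
import numpy as np

def generate_data(n, f1, rng):
    """Simulate (X, A, Y) from the displayed model:
    logit Pr(A=1|X) = (1,-1,1,-1,1) f1(X),
    E(Y|A,X) = 1 + 1'f1(X) + 1'f1(X) A + A.
    The covariate law and additive noise are assumptions for illustration."""
    X = rng.uniform(-1.0, 1.0, size=(n, 5))      # assumed covariate law
    B = f1(X)                                    # n x 5 basis matrix
    logit = B @ np.array([1.0, -1.0, 1.0, -1.0, 1.0])
    A = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
    mean_Y = 1.0 + B.sum(axis=1) * (1 + A) + A
    Y = mean_Y + rng.standard_normal(n)          # assumed unit Gaussian noise
    return X, A, Y

# Misspecified candidate bases used in the two scenarios of this appendix:
f5 = lambda X: np.cos(np.pi * X)
f6 = lambda X: np.cos(np.pi * X / 2.0)
```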
The squared bias of $\widehat \psi$ for each scenario is shown in Figures \[fig:11\]-\[fig:33\], respectively. We see that when all models are misspecified, the performance of each method depends on the class of candidate models. In Scenario 1, Figure \[fig:11\] is similar to Figure \[fig:1\] and the proposed methods have smaller bias; in Scenario 2, conducting model selection separately performs similarly to, and sometimes better than, the proposed methods.
![Squared bias of Scenario 1[]{data-label="fig:11"}](biasplot3.pdf "fig:"){width="4in" height="2.5in"}\
![Squared bias of Scenario 2[]{data-label="fig:33"}](biasplot4.pdf "fig:"){width="4in" height="2.5in"}\
---
abstract: 'Axions and axion-like particles are a leading model for the dark matter in the Universe; therefore, dark matter halos may be boson stars in the process of collapsing. We examine a class of static boson stars with a non-minimal coupling to gravity. We modify the gravitational density of the boson field to be proportional to an arbitrary power of the modulus of the field, introducing a non-standard coupling. We find a class of solutions very similar to Newtonian polytropic stars that we denote “quantum polytropes.” These quantum polytropes are supported by a non-local quantum pressure and follow an equation very similar to the Lane-Emden equation for classical polytropes. Furthermore, we derive a simple condition on the exponent of the non-linear gravitational coupling, $\alpha>8/3$, beyond which the equilibrium solutions are unstable.'
author:
- Jeremy Heyl
- 'Matthew W. Choptuik'
- David Shinkaruk
bibliography:
- 'non-linear-schrodinger.bib'
title: 'The Modified Schrodinger Poisson Equation — Quantum Polytropes'
---
Introduction
============
Bosonic dark matter, possibly in the form of low-mass axions, is a leading contender to explain some inconsistencies in the standard cold dark matter (CDM) model [@2016arXiv161008297H]. It is motivated both theoretically [@2016arXiv160705769A], as emerging from string theory, and observationally, where bosonic dark matter can address some potential discrepancies in the standard CDM model [@2015PhRvD..92j3510B; @2016PhRvD..93j3533K; @2016PDU....12...50P]. Because the bosons can collapse to form a star-like object [@2016JCAP...07..009V; @2016PhRvD..94d3513S], small-scale structure would be different if the dark matter were dominated by light bosons. Furthermore, the collisions of these dark matter cores or boson stars would result in potentially observable interference [@2011PhRvD..83j3513G]. It is these boson stars that are the focus of this investigation.
The Schrodinger-Poisson equation provides a model for a boson star [@PhysRev.187.1767] in the Newtonian limit. We will explore the solutions to the Schrodinger-Poisson equation with a small yet non-trivial modification. The modified Schrodinger-Poisson equation is given by the following two equations $$i \frac{\partial \psi}{\partial t} = -\frac{1}{2} \nabla^2 \psi + V \psi
\label{eq:1}$$ where $$\nabla^2 V = |\psi|^\alpha
\label{eq:2}$$ where we have taken $m=1$ and $4\pi G=1$. For $\alpha=2$ this equation is the well-known non-relativistic limit of the Klein-Gordon equation coupled to gravity [@2012CQGra..29u5010G]. For $\alpha\neq 2$, this is not the case. Although the Newtonian limit of a self-gravitating scalar field with a potential of the form $|\psi|^\alpha$ would yield Eq. \[eq:2\], one would not get Eq. \[eq:1\], the Schrodinger equation, as the non-relativistic limit for the dynamics of the scalar field. Instead Eqs. \[eq:1\] and \[eq:2\] result as the Newtonian limit of a relativistic scalar field with a non-minimal coupling to gravity such as the following scalar-tensor action $$S = \int d^4 x \sqrt{-g} \left [ \frac{R+L_m}{|\psi|^{\alpha-2}} + \partial^\mu \bar \psi \partial_\mu \psi - |\psi|^2 \right]
\label{eq:64}$$ where $R$ is the Ricci scalar, $g$ is the determinant of the metric and $L_m$ is the Lagrangian density of the matter.
The small change in Eq. \[eq:2\] yields a new richness to the solutions for Newtonian boson stars that we will call “quantum polytropes” for reasons that will become obvious later. Although authors have considered other modifications to the Schrodinger-Poisson equation such as an electromagnetic field [@2015GReGr..47....1M] or non-linear gravitational terms [@2015CQGra..32f5010F; @2016CQGra..33g5002F], the non-linear coupling of the gravitational source proposed here is novel.
Homology {#sec:homology}
========
We can examine how the equations change under a homology or scale transformation. Let us replace the four variables with scaled versions as $$\psi \rightarrow A \psi,
V \rightarrow A^a V,
r \rightarrow A^b r ~\textrm{and}~
t \rightarrow A^c t
\label{eq:3}$$ and try to find the values of the exponents that result in the same equations again. $$i A^{1-c} \frac{\partial \psi}{\partial t} = -\frac{1}{2} A^{1-2b} \nabla^2 \psi + A^{1+a} V \psi
\label{eq:4}$$ and $$A^{a-2b} \nabla^2 V = A^{\alpha} |\psi|^\alpha.
\label{eq:5}$$ This yields the following equations for the exponents $$1-c=1-2b=1+a, a-2b=\alpha
\label{eq:6}$$ and the following scalings $$\psi \rightarrow A \psi, V \rightarrow A^{\alpha/2} V, r \rightarrow A^{-\alpha/4} r ~\textrm{and}~ t \rightarrow A^{-\alpha/2} t.
\label{eq:7}$$ The total norm of a solution which is conserved is given by $$N = \int_0^\infty 4\pi r^2 |\psi|^2 d r
\label{eq:8}$$ and scales under the homology transformation as $N \rightarrow A^{(8-3\alpha)/4}$. For a static solution the value of the energy eigenvalue ($E$) scales as $A^{\alpha/2}$. Because the solution is not normalized, the total energy will scale as the product of the eigenvalue and the norm, yielding $A^{(8-\alpha)/4}$.
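Eq. \[eq:6\] is a linear system in the exponents $(a,b,c)$ for a given $\alpha$, so the scalings quoted above can be checked mechanically; the sketch below solves the system numerically and also recovers the norm exponent $(8-3\alpha)/4$, which vanishes at $\alpha=8/3$.

```python
import numpy as np

def homology_exponents(alpha):
    """Solve Eq. (6): 1 - c = 1 - 2b, 1 - 2b = 1 + a, a - 2b = alpha,
    for the scaling exponents (a, b, c) of V, r, and t under psi -> A psi."""
    M = np.array([[0.0,  2.0, -1.0],   # 2b - c = 0   (from 1 - c = 1 - 2b)
                  [-1.0, -2.0,  0.0],  # -a - 2b = 0  (from 1 - 2b = 1 + a)
                  [1.0,  -2.0,  0.0]]) # a - 2b = alpha
    a, b, c = np.linalg.solve(M, np.array([0.0, 0.0, alpha]))
    return a, b, c

def norm_exponent(alpha):
    """Exponent of A in N = int 4 pi r^2 |psi|^2 dr:
    |psi|^2 contributes 2 and r^3 contributes 3b."""
    _, b, _ = homology_exponents(alpha)
    return 2.0 + 3.0 * b
```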
We see that for $\alpha=8/3$, one can increase the central value of the wavefunction $\psi(0)$ without changing the norm while increasing the magnitude of the energy, resulting in a more bound configuration. For larger values of $\alpha$ the value of the norm decreases. We can argue that this decrease in the norm results in an unstable configuration. Let us divide the configuration arbitrarily into a central region and an arbitrarily small envelope. If we let the central region collapse slightly, energy is released, but because the norm of this central region decreases, we still have some material left to add to the diffuse envelope to carry the excess energy, and the process can continue to release energy. The star is unstable. For $\alpha<8/3$ the slight collapse results in an increase in the norm of the central region, but there is no material to add except from the arbitrarily small envelope, so the collapse fails. If we let the star expand a bit in this case, the norm decreases. However, the expansion costs energy, so the star is again stable to the radial perturbation.
For $\alpha=8/3$ the norm is independent of $\psi(0)$ and only depends on the number of nodes of the solution; therefore, it is natural to compare solutions for different values of $\alpha$ by choosing to normalize them to the value of the norm for $\alpha=8/3$ for the corresponding state.
Real Equations of Motion {#sec:real-equat-moti}
========================
We would like to examine the static solutions of Eq. (\[eq:1\]) and Eq. (\[eq:2\]). We will make the following substitution $$\psi = a e^{iS}
\label{eq:9}$$ where the functions $a=a({\bf r},t)$ and $S=S({\bf r},t)$ are explicitly real. This results in the three equations $$\begin{aligned}
\frac{\partial a^2}{\partial t} + \nabla \cdot \left ( a^2 \nabla S \right ) &=& 0, \label{eq:10}\\
\frac{\partial S}{\partial t} + \frac{1}{2} \left ( \nabla S \right )^2 + V - \frac{1}{2 a} \nabla^2 a &=& 0, \label{eq:11}\\
\nabla^2 V &=& |a|^\alpha\label{eq:12}\end{aligned}$$ that in analogy with fluid mechanics we can call the continuity equation, the Euler equation and the Poisson equation. We can develop this analogy further by defining $\rho=a^2$ and ${\bf U}=\nabla S$ and taking the gradient of Eq. (\[eq:11\]) to yield $$\begin{aligned}
\frac{\partial \rho}{\partial t} + \nabla \cdot \left ( \rho {\bf U} \right ) &=& 0, \label{eq:13}\\
\frac{\partial {\bf U}}{\partial t} + \left ( {\bf U} \cdot \nabla \right ) {\bf U} + \nabla \left ( V - \frac{1}{2 a} \nabla^2 a \right ) &=& 0. \label{eq:14}\end{aligned}$$ These are simply the Madelung equations [@1927ZPhy...40..322M]. If we had retained constants such as the Planck constant $h$ in the Schrodinger equation, we would find that the final term in the Euler equation is proportional to $h^2$ and is a quantum mechanical specific enthalpy, $$w = - \frac{1}{2 a} \nabla^2 a.
\label{eq:15}$$ Furthermore, because ${\bf U}=\nabla S$ the vorticity of the flow must vanish.
We can exploit the fluid analogy further to write the equations in a Lagrangian form using $$\frac{d}{dt} = \frac{\partial}{\partial t} + \left ({\bf U} \cdot \nabla\right )
\label{eq:16}$$ to yield $$\begin{aligned}
\frac{d \rho}{d t} + \rho \nabla \cdot {\bf U} &=& 0, \label{eq:17}\\
\frac{d {\bf U}}{d t} + \nabla \left ( V - \frac{1}{2 a} \nabla^2 a \right ) &=& 0.\label{eq:18}\end{aligned}$$ A static solution to these equations will have $S=-E t$ in analogy with the time-independent Schrodinger equation and $a=a({\bf r})$ where $a$ satisfies $$-E a -\frac{1}{2} \nabla^2 a + V a = 0.\label{eq:19}$$ An alternative treatment would exploit the fact that ${\bf U}$ must vanish for this static solution so $$\frac{1}{2a} \nabla^2 a = V + \textrm{constant}
\label{eq:20}$$ where we can identify the constant with the value of $E$ in Eq. \[eq:19\]. Furthermore we have $$\nabla^2 V = |a|^\alpha = \nabla^2 \left ( \frac{1}{2a} \nabla^2 a \right )
\label{eq:21}$$ so if we specialize to a spherically symmetric solution, we have $$- \frac{1}{r} \frac{d^2}{dr^2} \left [ \frac{1}{2a} \frac{d^2}{dr^2} \left ( r a \right ) \right ] + |a|^\alpha = 0
\label{eq:22}$$ This equation is reminiscent of the Lane-Emden equation for polytropes $$\frac{1}{r} \frac{d^2}{d r^2} \left ( r \theta \right ) + \theta^n = 0,
\label{eq:23}$$ so a natural designation for these objects is “quantum polytropes.”
Our equation is of course fourth order with a negative sign. We must supply four boundary conditions. In principle these are $$\begin{aligned}
a(0) &=& a_0, \\
\left .\frac{da}{dr}\right|_{r=0}&=&0, \\
\left . -\frac{1}{2a r} \frac{d^2 (ra)}{dr^2} \right |_{r=0} &=& w_0
\label{eq:24}\end{aligned}$$ and $$\frac{d}{dr} \left [ \frac{1}{2a r} \frac{d^2 (ra)}{dr^2} \right ]_{r=0} = 0.
\label{eq:25}$$ Of course not all values of $a_0$ and $w_0$ will yield physically reasonable configurations, so we must vary $w_0$ for example to find solutions such that $\lim_{r\rightarrow \infty} a(r) = 0$. However, using the scaling rules in § \[sec:homology\], once the value of $w_0$ is determined, one can rescale the solution.
In the case of the Lane-Emden equation for $n>5$ one can find solutions where $\theta=0$ at a finite radius, [*i.e.*]{} a star with a surface. From Eq. \[eq:19\] we find that $$E = -\lim_{r\rightarrow\infty} \frac{1}{2a r} \frac{d^2
(ra)}{dr^2} = \lim_{r\rightarrow\infty} w(r).
\label{eq:26}$$ Therefore, if $E\neq 0$, the quantum system must extend to an infinite radius.
To examine the regularity conditions near the centre, let us expand the solution near the centre as $$a(r) = a_0 + a_2 r^2 + a_4 r^4
\label{eq:27}$$ where we have dropped the odd terms to ensure that the derivative of the density and the derivative of the enthalpy vanish at the centre. We find that $$w_0 = -3 \frac{a_2}{a_0}
\label{eq:28}$$ and $$a_4 = \frac{a_0^\alpha a_0^2 + 18 a_2^2}{60 a_0} = a_0 \left ( \frac{|a_0|^\alpha}{60} + \frac{w_0^2}{30} \right ).
\label{eq:29}$$ As we would like to focus on the ground state where the function $a(r)$ has no nodes, we can also make the substitution that $a(r)=e^b$ which yields a simpler differential equation for $b(r)$, $$b^{(4)}(r) = 2 \left [ e^{\alpha b} - \frac{2}{r} \left ( b' b''+ b''' \right ) - b' b''' -
\left (b'' \right)^2 \right ]
\label{eq:30}$$ and $$w = -\frac{b''+\left(b'\right)^2}{2}-\frac{b'}{r}.
\label{eq:31}$$ An examination of Eq. \[eq:30\] and \[eq:31\] yields the boundary conditions at $r=0$, $$\begin{aligned}
b'(0)&=&0,\\
b''(0)&=&-\frac{2}{3} w_0,\\
b'''(0)&=&0
\label{eq:32}\end{aligned}$$ so a series expansion about $r=0$ for $b(r)$ yields $$b(r) = b_0 - \frac{w_0}{3} r^2 + \frac{3 e^{\alpha b_0} - 4 w_0^2}{180} r^4 + {\cal O}(r^5)
\label{eq:33}$$ Furthermore, we can examine the behavior at large distances from Eq. \[eq:26\] to find that $$\lim_{r\rightarrow\infty} b(r) \approx -r \sqrt{-2E} = -r\sqrt{-2w}
\label{eq:34}$$ Fig. \[fig:ground\_states\] depicts the ground state wavefunction $b(r)=\ln \psi(r)$ for various values of $\alpha$. The wavefunction is normalized such that $N = \int dV |\psi|^2$ is constant. Furthermore, we have verified that the scaling relations of § \[sec:homology\] hold for these solutions. At fixed total normalization the wavefunction is more spatially extended as $\alpha$ increases. The slope for large values of $r$ decreases gradually with increasing $\alpha$, reflecting the modest decrease in the binding energy as $\alpha$ increases.
![ Upper: The energy eigenvalue of the ground state. As discussed in the text, we choose to normalize the ground states to have the same normalization as the $\alpha=8/3$ ground-state solution. Lower: The solid curves trace the ground-state function $b(r)=\ln
\psi(r)$. The solutions from bottom to top are $\alpha=1,1.5,2,2.5,8/3$ and 3. The black lines show the expected slope of the solution for large values of $r$ from Eq. (\[eq:34\]) for $\alpha=1$ and 3. The dotted curves give the value of $w(r)$ for the same states from bottom to top.[]{data-label="fig:ground_states"}](ground-state){width="\columnwidth"}
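As a consistency check on the central expansion, the coefficient relations of Eqs. (\[eq:28\]) and (\[eq:29\]) can be verified symbolically. The sketch below (using sympy, with $\alpha=2$ fixed so that $|a|^\alpha$ is polynomial near the centre) imposes regularity of Eq. (\[eq:22\]) at $r=0$:

```python
import sympy as sp

r, a0, w0 = sp.symbols('r a_0 w_0', positive=True)
a4 = sp.Symbol('a_4')
alpha = 2                  # concrete exponent so |a|^alpha = a**2 for a > 0
a2 = -w0 * a0 / 3          # Eq. (28): w_0 = -3 a_2 / a_0
a = a0 + a2 * r**2 + a4 * r**4

# Left-hand side of Eq. (22), the fourth-order analogue of the Lane-Emden operator.
lhs = -sp.diff(sp.diff(r * a, r, 2) / (2 * a), r, 2) / r + a**alpha

# Regularity at the centre (finite, vanishing residual) fixes a_4.
a4_sol = sp.solve(sp.Eq(sp.limit(lhs, r, 0), 0), a4)[0]
a4_eq29 = a0 * (a0**alpha / 60 + w0**2 / 30)   # Eq. (29)
```

With $\alpha=2$ the solved coefficient reduces exactly to Eq. (\[eq:29\]); other rational values of $\alpha$ can be checked in the same way.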
Excited States {#sec:excited-states}
==============
To study the excited states [@1998CQGra..15.2733M] where $a(r)$ may have nodes, we have a more complicated differential equation of the form $$a^{(4)}(r) = 2 a |a|^\alpha - \frac{4 a'''}{r} + \frac{N_1}{a} + \frac{N_2}{a^2}
\label{eq:65}$$ where $$N_1 = 2 a' a''' + \left (a''\right)^2 + \frac{8}{r} a' a''
\label{eq:66}$$ and $$N_2 = -2 \left (a'\right)^2 a'' - \frac{4}{r} \left (a'\right)^3.
\label{eq:67}$$ Rather than deal with these singular points we can return to the coupled differential equations \[eq:1\] and \[eq:2\] to examine the excited states.
We will make the substitutions that $u=\psi(r) r e^{-iEt}$ and $v=V(r) r$ to yield the following equations $$E u = -\frac{1}{2} u'' + \frac{v u}{r}
\label{eq:68}$$ and $$v'' = |u|^\alpha r^{1-\alpha},
\label{eq:69}$$ where we have focused on spherically symmetric configurations. Because equations \[eq:1\] and \[eq:2\] are non-linear we cannot follow the strategy of expanding the solutions in terms of spherical harmonics to yield a simple solution beyond spherical symmetry. The general solution is beyond the scope of this paper.
We must supply four boundary conditions for the functions $u$ and $v$ and these are $u=0$, $u'=\psi(0)$, $v=0$ and $v'=V(0)$ where we take $V(0)=0$ because we can shift both the value of $E$ and $V(r)$ by a constant and retain the same equations. We generally shift $E$ and $V(r)$ such that $\lim_{r\rightarrow\infty} V(r)=0$. We can also take $\psi(0)=1$ and scale the resulting solution using the scaling relations in § \[sec:homology\]. Finally, only specific values of $E$ will result in normalizable solutions, so we shoot from the origin to large radii and find the values of $E$ that result in normalizable solutions. Fig. \[fig:psi23\] depicts the ground state and the excited states for $\alpha=2$ and $\alpha=3$ where the wavefunction has been normalized such that $\psi(0)=1$. It is important to note that the various states correspond to different total normalizations, [*i.e.*]{} different numbers of particles. Furthermore, we will call the ground state the state without any nodes and the excited states the states with nodes, so the quantum number $n$ denotes the number of anti-nodes or extrema, starting with one; therefore, Fig. \[fig:psi23\] shows the wavefunctions for $n=1$ to $n=8$. The wavefunctions for $\alpha=2$ and $\alpha=3$ appear quite similar modulo a size scaling. The $\alpha=3$ wavefunctions with this particular normalization extend over a larger range in radius than the $\alpha=2$ wavefunctions.
![image](respsi_2){width="\columnwidth"} ![image](respsi_3){width="\columnwidth"}
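The shooting procedure just described can be sketched in pure Python. The step size, integration range, energy bracket and divergence cap below are ad-hoc numerical choices rather than values from the text:

```python
def rhs(rr, y, E, alpha):
    """Eqs. (68)-(69) as a first-order system in y = [u, u', v, v']."""
    u, up, v, vp = y
    return [up,
            2.0 * (v / rr) * u - 2.0 * E * u,     # u'' = 2 (V - E) u with V = v/r
            vp,
            abs(u)**alpha * rr**(1.0 - alpha)]

def node_appears(E, alpha=2.0, r_max=8.0, h=5e-3):
    """RK4 from the origin; True if u develops a node (E above an eigenvalue)."""
    rr, y = 1e-6, [1e-6, 1.0, 0.0, 0.0]           # u ~ psi(0) r with psi(0) = 1
    while rr < r_max:
        k1 = rhs(rr, y, E, alpha)
        k2 = rhs(rr + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)], E, alpha)
        k3 = rhs(rr + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)], E, alpha)
        k4 = rhs(rr + h, [yi + h * ki for yi, ki in zip(y, k3)], E, alpha)
        y = [yi + h / 6 * (p + 2 * q + 2 * s + t)
             for yi, p, q, s, t in zip(y, k1, k2, k3, k4)]
        rr += h
        if y[0] < 0.0:
            return True                            # node: E is too large
        if y[0] > 1e6:
            return False                           # divergence without a node
    return False

def ground_state_energy(alpha=2.0, e_lo=0.0, e_hi=5.0, iters=30):
    """Bisect between 'no node' and 'node' behaviour (gauge V(0) = 0)."""
    for _ in range(iters):
        e_mid = 0.5 * (e_lo + e_hi)
        if node_appears(e_mid, alpha):
            e_hi = e_mid
        else:
            e_lo = e_mid
    return 0.5 * (e_lo + e_hi)
```

In this gauge ($V(0)=0$, $V\geq 0$) the eigenvalue comes out positive; shifting $V$ so that $\lim_{r\rightarrow\infty}V(r)=0$ then makes $E$ negative as in the text.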
Of course, what is most interesting are the configurations for a fixed number of particles, so a particular value of $N=\int dV |\psi|^2$. For $\alpha\neq 8/3$ the total normalization, $N$, can take any value. However, for $\alpha=8/3$ the normalization is fixed to the values of the ground and the various excited states. Fig. \[fig:ominter\] depicts the binding energy as a function of $\alpha$ for two particular choices of normalization. As both the logarithm of the normalization and the value of the energy $E$ are smooth functions of $\alpha$ for $\psi(0)=1$, we calculate these values for $\alpha=2,7/3,8/3,3$ and $10/3$ and interpolate or extrapolate over the plotted range. We then use the scaling relations from § \[sec:homology\] to find the eigenvalues for a particular normalization.
![Energy eigenvalue of the states for a number of particles fixed to that of the ground state of the $\alpha=8/3$ configuration (upper panel) and to the first excited state (lower panel). In both cases more bound states lie at the top. On the left-hand side ($\alpha<8/3$) of both plots the states from top to bottom are $n=1, 2, 3, 4, 5$ and 6. On the right-hand side ($\alpha>8/3$) the ordering is reversed, [*i.e.*]{} from top to bottom the states are $n=6,5,4,3,2$ and 1.[]{data-label="fig:ominter"}](ominterboth){width="\columnwidth"}
What is most striking about the energy levels is that for $\alpha<8/3$ we have the normal ordering where states with more nodes are less bound. For $\alpha>8/3$ as the number of nodes increases so does the binding energy of the state. The energy levels are not bounded from below in this case, a hallmark of instability. For the limiting case $\alpha=8/3$ we see that at most one state is bound for a particular total normalization, $N$, but that its energy is arbitrary because we can scale the value of the wavefunction which changes the energy eigenvalue without changing the total normalization.
Perturbations {#sec:perturbations}
=============
The results from scaling in § \[sec:homology\] and from the examination of the excited states in § \[sec:excited-states\] give very strong hints that quantum polytropes with $\alpha>8/3$ are unstable. We will prove that $\alpha>8/3$ is a sufficient condition for instability for an arbitrary stationary configuration. Let us take a constant background and examine small perturbations of the form $$a = a_0 + a_1({\bf r},t)~\textrm{and}~{\bf U} = {\bf U}_1({\bf r},t)\label{eq:35}$$ so we have $$\begin{aligned}
2 a_0 \frac{\partial a_1}{\partial t} + a_0^2 \nabla \cdot {\bf U}_1 &=& 0,\label{eq:36} \\
\frac{\partial {\bf U_1}}{\partial t} + \nabla \left ( V_1 - \frac{1}{2 a_0} \nabla^2 a_1 \right ) &=& 0. \label{eq:37}\end{aligned}$$ Now if we take the time derivative of Eq. (\[eq:36\]) and the divergence of Eq. (\[eq:37\]), we can combine the equations to yield $$2 a_0 \frac{\partial^2 a_1}{\partial t^2} - a_0^2 \nabla^2 V_1 + \frac{a_0}{2} \nabla^4 a_1 = 0
\label{eq:38}$$ and $$\frac{\partial^2 a_1}{\partial t^2} - \frac{\alpha}{2} a_1 |a_0|^\alpha + \frac{1}{4} \nabla^4 a_1 = 0.
\label{eq:39}$$ If we expand the perturbations in Fourier components we obtain the following dispersion relation $$\omega^2 = \frac{k^4}{4} - \frac{\alpha}{2} |a_0|^\alpha
\label{eq:40}$$ where the first term is the standard result for the deBroglie wavelength of a particle and the second term is due to the self-gravity of the perturbation.
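The dispersion relation of Eq. (\[eq:40\]) separates stable from unstable wavelengths. A minimal numerical sketch (arbitrary units):

```python
def omega_squared(k, a0, alpha):
    """Eq. (40): quantum pressure (k^4/4) against self-gravity of the mode."""
    return k**4 / 4.0 - 0.5 * alpha * abs(a0)**alpha

def critical_wavenumber(a0, alpha):
    """Jeans-like wavenumber where omega^2 changes sign."""
    return (2.0 * alpha * abs(a0)**alpha) ** 0.25
```

Perturbations with $k$ below the critical wavenumber have $\omega^2<0$ and grow, while shorter wavelengths oscillate.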
We can be a bit more sophisticated now and assume that small perturbations lie near a static solution so $$a = a_0({\bf r}) + a_1({\bf r},t)~\textrm{and}~{\bf U} = {\bf U}_1({\bf r},t)\label{eq:41}$$ thus we have $$\begin{aligned}
2 a_0 \frac{\partial a_1}{\partial t} + \nabla \cdot \left ( a_0^2 {\bf U}_1 \right ) &=& 0,\label{eq:42}\\
\frac{\partial {\bf U_1}}{\partial t} + \nabla \left ( \frac{a_1}{a_0} V_0 + V_1 - \frac{1}{2 a_0} \nabla^2 a_1 \right ) &=& 0. \label{eq:43}\end{aligned}$$ and if we take the time derivative of Eq. (\[eq:42\]), we can combine the equations to yield $$2 a_0 \frac{\partial^2 a_1}{\partial t^2} =
\nabla \cdot \left [ a_0^2
\nabla \left ( \frac{a_1}{a_0} V_0 + V_1 - \frac{1}{2 a_0} \nabla^2 a_1 \right ) \right ].
\label{eq:44}$$ Furthermore, the perturbation of the potential satisfies $$\nabla^2 V_1 = \alpha \frac{a_1}{a_0} |a_0|^{\alpha-1}.
\label{eq:45}$$ These again yield a self-gravitating wave equation where the static background affects the propagation.
To examine the question of stability we can return to the Lagrangian formulation of the equations of motion, Eq. \[eq:17\] and Eq. \[eq:18\]. We can take the time derivative of Eq. \[eq:17\] to get $$\frac{d^2 \rho}{dt^2} + \frac{d\rho}{dt} \nabla \cdot {\bf U} + \rho \frac{d}{dt} \nabla \cdot {\bf U} = 0
\label{eq:46}$$ and the divergence of Eq. \[eq:18\] to yield $$\frac{d }{d t} \nabla \cdot {\bf U} + \nabla^2 \left ( V - \frac{1}{2 a} \nabla^2 a \right ) = 0
\label{eq:47}$$ If we have a perturbation on a static solution we find a simpler equation for the perturbations in the Lagrangian formulation $$\frac{d^2 \rho_1}{dt^2} = \nabla^2 \left ( \frac{a_1}{a_0} V_0 + V_1 - \frac{1}{2 a_0} \nabla^2 a \right ) .
\label{eq:48}$$ We will examine a homologous transformation where $${\bf r} = {\bf r}_0 \left (1 + \epsilon \sin \omega t \right ).
\label{eq:49}$$ From Eq. (\[eq:17\]) this gives $$\rho = \rho_0\left (1 - 3 \epsilon \sin \omega t \right )~\textrm{and}~
a = a_0\left (1 - \frac{3}{2} \epsilon \sin \omega t \right ).
\label{eq:50}$$
Of course this perturbation is not a solution of Eq. \[eq:48\]; however, we can use it to derive an upper bound on the squared frequency of the oscillation. From Eq. \[eq:48\] we obtain to order $\epsilon$ $$\int dV 3 \epsilon \omega^2 \sin\omega t a_0^2 < \int dV \left [ a_0^\alpha \left ( 1 - \frac{3}{2} \alpha \epsilon \sin \omega t \right) - \left ( 1 - 4 \epsilon \sin \omega t \right )
\nabla^2 \frac{1}{2 a_0} \nabla^2 a_0 \right ],
\label{eq:51}$$ and we can use the zeroth-order solution to simplify this to yield $$\int dV 3 \epsilon \omega^2 \sin\omega t a_0^2 < \int dV \left [ |a_0|^\alpha \left ( 1 - \frac{3}{2} \alpha \epsilon \sin \omega t \right) - \left ( 1 - 4 \epsilon \sin \omega t \right )
|a_0|^\alpha \right ]
\label{eq:52}$$
and $$3 \omega^2 \int dV a_0^2 < \int dV \left ( \frac{8-3\alpha}{2} \right ) |a_0|^\alpha
\label{eq:53}$$ so $$\omega^2 < \left ( \frac{8-3\alpha}{6} \right )\int dV |a_0|^\alpha \left [ \int dV a_0^2 \right]^{-1} = \frac{8-3\alpha}{6} \frac{M}{N}
\label{eq:54}.$$ where $M$ is the gravitational mass of the system and $N$ is the number of particles. Therefore, $\alpha>8/3$ is a sufficient condition for $\omega^2<0$ and instability for at least one perturbative mode regardless of the static configuration, as we argued from the homology transformations in § \[sec:homology\].
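The bound of Eq. (\[eq:54\]) makes the threshold explicit. A small numerical illustration (unit gravitational mass and particle number are arbitrary choices):

```python
def omega_sq_upper_bound(alpha, mass=1.0, number=1.0):
    """Eq. (54): upper bound on the squared frequency of the homologous mode."""
    return (8.0 - 3.0 * alpha) / 6.0 * mass / number
```

The bound changes sign at $\alpha=8/3$; for larger $\alpha$ it forces $\omega^2<0$ for at least one mode, i.e. instability.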
If we examine an initially stationary configuration where ${\bf U}\neq
0$ but $d\rho/dt=0$ so $\nabla \cdot {\bf U}=0$, we find to first order in the perturbation that the same stability condition applies when one uses the homologous transformation and the variational principle, so we find that $\alpha>8/3$ is a sufficient condition for instability in general.
Conclusions {#sec:conclusions}
===========
We examine a natural generalization of the Schrodinger-Poisson equation and develop the theory of the static solutions to this equation, which we denote quantum polytropes, and of their stability. These solutions obey a natural fourth-order generalization of the Lane-Emden equation, the second-order equation for classical polytropes. Furthermore, as for classical polytropes, the question of the stability of the solutions comes down to the exponent of the coupling. In the classical case this is how the pressure depends on density, with power-law indices greater than $4/3$ indicating stability. In the quantum case, it is how the boson field generates the gravitational field, with power-law indices greater than $8/3$ indicating instability. We demonstrate the instability in three ways and the criteria all coincide. We employ two classical techniques, a homology scaling argument and perturbation analysis, and one quantum technique, the observation that the states are not bounded from below for $\alpha>8/3$. This is a sufficient condition for instability, not a necessary one. In particular the excited states even for $\alpha=2$ are unstable [@2002math.ph...8045H].
The modified Schrodinger-Poisson equation presented here allows richer possibilities for the modeling of dark matter halos and structure formation, and can naturally emerge as the Newtonian limit of an underlying relativistic field theory. In particular, if $\alpha>8/3$ the dark matter halos may develop a quasi-static core that ultimately collapses to form a cusp, like standard cold dark matter [@1997ApJ...490..493N], or disperses, providing for especially rich phenomenology.
This work was supported by the Natural Sciences and Engineering Research Council of Canada.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'Means to support collaboration for remote industrial facilities such as mining are an important topic, especially in Australia, where major mining sites can be more than a thousand kilometers from population centres. Software-based collaboration and maintenance solutions can help to reduce costs associated with these remote facilities. In this paper, we report on our collaborative engineering project providing a decision support solution tailored for Australian needs. We present two application examples: one related to incident handling in industrial automation, the other one in the area of smart energy systems.'
author:
- |
\
\
title: On Decision Support for Remote Industrial Facilities using the Collaborative Engineering Framework
---
collaboration solutions, decision support, distributed engineering, visualization
Introduction
============
Means to collaborate over large distances and ways to monitor and support remote industrial installations can reduce operation, maintenance and commissioning costs. This is especially important in the Australian context, where industrial installations such as mining sites, but also plants related to power generation and agriculture, are often located far away from major population centres. Reducing the number of on-site staff, and the need to bring staff from the population centres into these remote areas, by facilitating remote monitoring, operation and, to a lesser extent, commissioning is the main goal of the software framework described in this paper.
The operation of industrial systems is typically associated with [*events*]{} occurring over time. We consider events of a very diverse nature: events may comprise automatically generated [*alarms*]{}, e.g., coming from a SCADA system and associated with a malfunctioning valve. On the other hand, we can also have manually created events such as someone pushing a help button on a service screen. In some cases many events can occur in a relatively short amount of time, e.g., lightning hits an electrical substation and a large number of components fail at literally the same time. On the other hand, we also cover events such as a consulting request, of which only a few may occur within a year.
Our collaborative engineering framework aims to support human collaboration by processing these events with respect to semantic carrying models and generating a response. A simple example of a semantic carrying model would be a spatial model of a plant with coordinates indicating the position of sensors. Once a sensor detects something unusual, the closest staff member can be determined by retrieving the relevant coordinates from the model and combining them with information on staff availability. Furthermore, rules can be used to indicate how the relevant staff members should be informed and what information shall be displayed, e.g., on the staff member’s mobile device.
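As an illustration of such model-based event processing, the lookup of the closest available staff member can be sketched in a few lines of Python. The staff names, coordinates and the availability flag are illustrative assumptions, not part of the framework's actual data model:

```python
import math

# Illustrative spatial model: staff positions in plant coordinates.
staff = [
    {"name": "alice", "pos": (10.0, 4.0), "available": True},
    {"name": "bob",   "pos": (2.0, 1.0),  "available": False},
    {"name": "eve",   "pos": (6.0, 8.0),  "available": True},
]

def closest_available(sensor_pos, staff):
    """Return the nearest available staff member to a triggered sensor."""
    candidates = [s for s in staff if s["available"]]
    return min(candidates, key=lambda s: math.dist(s["pos"], sensor_pos))

# A sensor at (3, 2) fires; bob is nearest but unavailable, so eve is chosen.
responder = closest_available((3.0, 2.0), staff)
```

In the framework itself, a rule would then decide how and on which device the chosen staff member is notified.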
The work presented in this paper summarizes the results achieved in the collaborative engineering project which took place between 2013 and 2016 in the context of the Australia-India Centre for Automation Software Engineering at RMIT University and the Indian ABB Corporate Research Centre.
Overview {#overview .unnumbered}
--------
Section \[sec:relwork\] provides an overview of related work. Our collaborative engineering framework is introduced in Section \[sec:ce\] and the decision support framework is described in Section \[sec:decs\]. Section \[sec:demo\] features a description of our demonstrator platform and a conclusion is provided in Section \[sec:concl\].
Related Work {#sec:relwork}
============
Table \[tab:t1\] provides an overview of the main directions to support control and operation in industrial automation. We have chosen classical SCADA-based approaches, visualization software-based solutions, our collaborative engineering framework, as well as social media-like frameworks. The table gives an indication of their typical application in the life cycle of a facility, their relative strength with respect to distributedness, their ability to take semantic models such as ontologies into account, the industrial automation focus, and typical interaction times. Concrete solutions are discussed throughout this section. We classify existing software into the following categories: [*Classical SCADA*]{}, that is, supervisory control / data acquisition, is closely connected with traditional operational control, connecting with real time controller units, generating alarms in response to process status anomalies and enabling operator interaction with plant and equipment. [*Visualization software*]{} refers to software which provides a mix of modeling, simulation and visualization capabilities with a focus on design. [*Collaborative engineering*]{} approaches refer to our approach of processing events and selecting and generating information to support human stakeholders. [*Social media*]{} approaches, emerging more recently, obviously focus on people as individuals, complementing or surpassing traditional, perhaps less efficient, communication related services such as telephone directories or email. The [*Life cycle*]{} attribute refers to the phase an industrial facility is in, i.e., design, construction, commissioning, operations and maintenance. The [*Distributed*]{} attribute refers to whether applications cater to multiple seats, users or sites. The [*Semantic*]{} attribute refers to the extent to which applications embody domain concepts such as physical models, for example through explicit ontology.
The [*Industrial*]{} attribute refers to the extent to which applications are targeted to industrial automation. The [*Timing*]{} attribute refers to the typical event response times provided either by the applications or their users.
Life cycle Distributed Semantic Industrial Timing
--------------------------- ------------------ ------------- ---------- ------------ -----------------
classical SCADA ops some minor strong subseconds–days
visualization software all (design) minor minor medium seconds–years
collaborative engineering all (remote ops) strong strong key focus seconds–months
social media all strong minor minor minutes–months
#### Visualization and Collaboration Software
Relevant commercial solutions for collaboration comprise software frameworks such as Microsoft SharePoint, Dassault Syst[è]{}me’s Enovia [@enovia] and Delmia [@delmia]. While SharePoint concentrates on providing solutions for the exchange of documents such as texts, the latter products are focussed on providing geometric aspects of visual front-ends such as 3D graphics of involved plants and machinery. In this paper we refer to functionality which relies on physical modeling such as geometry as (partially) [*semantic*]{} approaches. SharePoint also comes with social network features. The collaboration relations between different participants can be subjected to analysis [@cross] as a basis for efficiency improvement, by identifying candidates for working groups and for resolving resource conflicts and protecting confidentiality. The mentioned frameworks support collaboration between and within organisations including the operations control context and industrial operations sites. Alternative non-commercial approaches to visualization for collaboration are the SAGE2 [^1] framework and its predecessor SAGE. SAGE2 provides a web-based framework for hosting applications on large high-resolution tiled display walls which are in principle also accessible from multiple remote sites simultaneously, including “scaled down” access via ordinary web browsers.
Early academic approaches feature the term [*collaborative engineering*]{} in [@collint] and [@shade]. Means for sharing and collaborative interaction on documents and other resources are presented, and most of these ideas have since found their way into commercial collaboration frameworks. Further related academic approaches comprise examinations such as the impact of collaborative engineering on software development (see, e.g., [@boochce]) and collections of software challenges for the development of ultra-large-scale systems [@ultrace], where collaboration is limited to the development phase.
#### Semantic Models and Ontologies
Semantic carrying formal models are an important ingredient of our framework. Typically on a less formal level, assigning semantics to distributed documents and other information sources has gained popularity in the context of the semantic web [@semweb]. Semantic web-like information models can provide a basis for collaboration between different industrial sites and facilitate the exchange of data. Ontology-based approaches to collaborative engineering also fall into this class and can be based on semantic web technology such as [@sure]. The ComVantage project [@salmen] is developing a mobile enterprise reference framework. It aims towards future internet operability. To some extent semantic annotations of data are taken into account, and applications in industrial automation exist. A framework for collaborative requirements engineering, C-FaRM, is described in [@c-farm], and other background work on technologies for collaboration can be found in [@cabani07; @geo09]. In addition, semantic models can be used to formalize cyberphysical infrastructures in construction, plant automation and transport. Some existing real-world applications are aligned towards a geometric representation of their components and are sometimes based on so-called 2.5-dimensional GIS (Geographic Information System) representations (true three-dimensional modelling seems far from common practice [@friedman]), where the third dimension $z=f(x,y)$ is represented as a function $f$ of the two coordinates $x$ and $y$. However, this form of representation [@apel] may limit the ability to use these models as a basis for geometric and topological reasoning and for information retrieval. We do not limit ourselves to a particular geometric representation, coordinate or dimension system in our modelling work, but allow different instantiations.
Related modelling approaches include standards such as the Web 3D Services and the Sensor Web Enablement Architecture of the Open Geospatial Consortium[^2], visualisation and decision support [@weaver], as well as efficient data structures for fast reasoning and decision support. Semantic formalisms for industry 4.0 are compared in [@chihhong] and some related guidelines to assist engineers are provided in the same work.
#### Formal Description Languages
Different logic-based means to formalize semantic entities have been developed. Logic formulae can incorporate spatial aspects and are used as a basis to formalize our semantic models of industrial facilities. The handbook of spatial logic [@hosl] discusses spatial logics, related algebraic specifications, as well as applications that are not only limited to computer science. Another approach, used for describing spatial behavior and reasoning about it, has been introduced in [@cardelli03; @cardelli04] following process algebraic descriptions. Process algebraic descriptions cover individual, generally asynchronously acting processes, but with distinct synchronization points as first-class citizens from a specification point of view. In the spatio-temporal case, disjoint logical spaces are represented in terms of expressions by bracketing structures and carry or exchange concurrent process representations. Model checking for process algebras in this context is presented in [@slmc]. Complementing this, a graph-based technique for the verification of spatial properties of finite $\pi$-calculus fragments (another process algebra) is introduced in [@gadducci]. More work on process algebraic specifications in this context has been done in [@haar]. The establishment of specialized modal logics for spatio-temporal reasoning goes back to the seventies. The Region Connection Calculus (RCC) [@Bennett] includes predicates to indicate spatial separation of and topological relations between entities (regions). For example, RCC has predicates indicating that regions share no points at all, that regions share points on their boundaries, internal contact of regions (one region is included in and touches the boundary of another from the inside), overlap of regions, and inclusion.
The work [@Bennett] also features an overview of the relation of these logics to various Kripke-style modal logics, reductions of RCC-style fragments to a minimal number of topological predicates, their relationship to interval-temporal logics as well as some decidability results. More results on spatial interpretations are presented in [@hirschkoff] and additional decidability results can be found in [@zilio].
#### Smart Energy Systems
One application area of our framework comprises Smart Grid systems. Challenges regarding this topic have been outlined in [@sg]. Different topics have been studied, such as small grid sizes, e.g., a grid comprising one office building [@fortiss]; needs for security and robustness [@amin]; and operational needs for different energy sources, e.g., photo-voltaic operations, which are relevant for our paper [@pv]. In our smart energy work, we are complementing these research areas. We are applying our framework to provide software support for managing smart grids / smart energy systems, such as automatic decision support for human operators in a control room, or in the field by using a mobile form-factor.
#### Our Previous Work
Previously we have published work around collaborative engineering. Initial ideas and a first implementation are presented in [@etfa] and [@etfa2]. The extension to smart energy systems is described in [@etfa2016se; @tr2017]. Our spatial constraint solving framework BeSpaceD is introduced and described in [@bespaced1; @bespaced0; @newoperators]. A summary of our VxLab visualization facility can be found in [@vxlab; @vxlab2]. The collaborative engineering platform has been used together with software defined network technology for gathering data from sensors (see [@sdn1; @sdn2; @sdn3]). This paper unifies the previously published work.
The Collaborative Engineering Framework {#sec:ce}
=======================================
This section discusses our collaborative engineering framework. We discuss the workflow and the main ingredients.
Architectural Overview
----------------------
Figure \[fig:arch\] provides an overview of our collaborative engineering architecture. The first step involves event listeners waiting for incoming events. Events are either triggered by various devices such as ABB’s IRC 5 robot controllers or via SOA (Service Oriented Architecture) style interfaces. SOA-triggered events can be automatically generated or triggered by humans. The event listeners communicate via pipes with the main part of the collaborative engineering framework. In the main part of the collaborative engineering framework, events are pulled from the pipe, sorted, queued and eventually handed to event-specific code. The event-specific code uses our BeSpaceD tool and other services to generate visualization output which is then interpreted for various devices to trigger the display of information.
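The pull, sort and dispatch step of the main part can be illustrated with Python's standard library. The event fields, priorities and handlers below are illustrative assumptions, not the framework's actual interfaces:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    priority: int             # e.g. 0 = high-risk alarm, 9 = consulting request
    timestamp: float
    kind: str = field(compare=False)
    payload: dict = field(compare=False, default_factory=dict)

# Event-specific code is selected by event kind (illustrative handlers).
handlers = {
    "alarm":  lambda e: "notify control room: " + e.payload["source"],
    "assist": lambda e: "route to nearest expert: " + e.payload["topic"],
}

def dispatch(queue):
    """Pop events in priority order and hand each to its event-specific code."""
    out = []
    while queue:
        ev = heapq.heappop(queue)
        out.append(handlers.get(ev.kind, lambda e: "unhandled")(ev))
    return out

q = []
heapq.heappush(q, Event(5, 12.0, "assist", {"topic": "valve maintenance"}))
heapq.heappush(q, Event(0, 13.0, "alarm", {"source": "substation sensor 7"}))
```

Here the high-priority alarm is handled first even though it arrived later, mirroring the sorting step of the architecture.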
![The collaborative engineering architecture[]{data-label="fig:arch"}](colenarch){width=".475\textwidth"}
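To give a concrete feel for this pipeline, the sketch below models the queueing and dispatching step in Scala, the language BeSpaceD is written in. The event types, handler bodies and XML snippets are illustrative assumptions, not the production code:

```scala
import java.util.concurrent.LinkedBlockingQueue

// Illustrative event types (assumed, not the framework's actual classes).
sealed trait Event
case class RobotAlarm(controller: String, code: Int) extends Event
case class SoaEvent(profile: String, payload: String) extends Event

// The pipe between the event listeners and the main framework part.
val pipe = new LinkedBlockingQueue[Event]()

// Event-specific code: each event is turned into a visualization command
// in the XML format described in the next subsection.
def handle(e: Event): String = e match {
  case RobotAlarm(_, code)  => s"""<command type="event" category="alarm" id="$code"></command>"""
  case SoaEvent(profile, _) => s"""<command type="display" profile="$profile"></command>"""
}

// Listeners enqueue events; the main loop drains the queue and dispatches.
pipe.put(RobotAlarm("IRC5-1", 1001))
pipe.put(SoaEvent("bob", "staff profile requested"))
val output = Iterator.continually(pipe.poll()).takeWhile(_ != null).map(handle).toList
```

The blocking queue decouples the listeners from the event-specific code, so bursts of events are buffered rather than dropped.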
Visualization Information and Display
-------------------------------------
We have designed an XML format to encode visualization and other event-related information. The outcome of the collaborative engineering framework is encoded in this language, thereby supporting event- and stakeholder-specific delivery of notification elements. The language is of scalable complexity, ultimately up to custom applications; it can describe interactive content and the integration of more specific collaboration tools: depending on an event, its location, time and the relevant stakeholders, different notifications and representations may be needed. The XML format provides a notation for expressing and initiating notifications; this includes abstraction of stakeholder and/or location. To provide a look-and-feel, Figure \[fig:visxml\] contains example XML output for visualization. Each command triggers the display of an individual window on a device, and its arguments specify the device and content. [display]{} commands show images with annotations such as graphical or textual overlay elements, staff profiles, e.g., from Microsoft SharePoint, and live camera views. Furthermore, the display of Google map views with coordinates and zoom factor is encoded using the [map]{} and [earth]{} values of the [type]{} attribute.
<output>
<command type="display" profile="ptz_camera3_view"></command>
<command type="composite_image" image="gridsubstation.jpg">
<display type="rect" x="350" y="600" w="120" h="150"></display>
<display type="text" text="Incident at Grid Substation" x="130" y="90" color="red"></display>
</command>
<command type="earth" lat="-38.1771269" long="146.3428259" height="100m"></command>
<command type="map" lat="-38.1771269" long="146.3428259" zoom="15z"></command>
</output>
Decision support {#sec:decs}
================
This section takes a closer look at the models and rules for decision support, as well as the algorithms behind it, based on our BeSpaceD framework. It is important to note that we do not aim at a fully automated system: information to support decisions by humans is provided and processed. This information may, but does not necessarily, comprise a recommendation. Ultimately, decisions are taken by humans.
BeSpaceD overview
-----------------
BeSpaceD is our spatio-temporal modelling and reasoning framework developed at RMIT University[^3]. It is a general-purpose modelling and reasoning framework; thus, applications are not limited to industrial automation or the smart-energy context. In our work, the BeSpaceD framework is used as:
- a description language, for formal models of industrial plants, grids and relations between entities appearing in these models;
- a way to reason about the formalized models by using BeSpaceD’s libraries and functionalities.
In the following, we describe the modelling language and the BeSpaceD-based reasoning library functionality and provide some implementation highlights.
BeSpaceD is implemented in the Scala programming language. Scala is bytecode compatible with Java, thus BeSpaceD’s core functionality runs in a Java environment. BeSpaceD is highly scalable; BeSpaceD-based functionality such as services can be offered as cloud-based services using highly scalable infrastructure, but can also run locally on embedded devices that support a Java runtime environment such as Raspberry Pi-based controllers. In addition to the work described in this paper, we have successfully applied BeSpaceD to other application areas such as coverage analysis in the area of mobile devices [@han2015] and for verification of spatio-temporal properties of industrial robots [@apscc; @fesca2014; @cyberbeht].
BeSpaceD-based Modelling
------------------------
BeSpaceD models are created using Scala case classes; a design goal for the BeSpaceD modelling language is to provide the feel of functional abstract datatypes. Case classes serve as abstract datatype constructors and can be combined to create larger data structures. Some basic language constructs (see also [@2015arXiv151204656O]) are provided in Figure \[fig:belang\] to give a look and feel of BeSpaceD specifications. A key construct of our modelling framework is the [Invariant]{}: a basic logical entity that is supposed to hold for a system. Although invariants hold for an entire system, they may contain conditional parts, such as a logical formula with a precondition; for example, an event occurring at a point in space and time implies a state of the system. In our modelling methodology, constructors for basic logical operations connect invariants to form new invariants; some of these basic constructors are provided in the figure.
case class OR (t1 : Invariant, t2 : Invariant) extends Invariant;
case class AND (t1 : Invariant, t2 : Invariant) extends Invariant;
case class NOT (t : Invariant) extends Invariant;
case class IMPLIES (t1 : Invariant, t2 : Invariant) extends Invariant;
case class TRUE() extends ATOM;
case class FALSE() extends ATOM;
case class TimePoint [T] (timepoint : T) extends ATOM;
case class TimeInterval [T](timepoint1 : T, timepoint2 : T) extends ATOM;
case class Event[E] (event : E) extends ATOM;
case class Owner[O] (owner : O) extends ATOM;
case class OccupyBox (x1 : Int,y1 : Int,x2 : Int,y2 : Int) extends ATOM;
case class Occupy3DBox (x1 : Int, y1 : Int, z1 : Int,
                        x2 : Int, y2 : Int, z2 : Int) extends ATOM;
case class OccupyPoint (x : Int, y : Int) extends ATOM
case class Edge[N] (source : N, target : N) extends ATOM
case class Transition[N,E] (source : N, event : E, target : N) extends ATOM
In the first part of the figure, we show operators from propositional logic (e.g., [AND]{}, [OR]{}, [IMPLIES]{}). The second part provides predicates for time (e.g., [TimePoint]{}). The third part allows the specification of events and ownership of logical entities, while the fourth part provides constructs for geometry and space, such as the [OccupyBox]{} predicate, which refers to a rectangular two-dimensional geometric space parameterized by the x- and y-coordinates of its lower-left and upper-right corners. The last part provides constructs for the specification of mathematical graphs for topologies and state transition systems, such as [Edge]{}.
The following BeSpaceD formula expresses that the rectangular geometric space with the corner points $(143, 4056)$ and $(1536, 2612)$ is subject to a semantic condition “A” during the interval between the integer time points $800$ and $950$, or alternatively between $1000$ and $1050$.
IMPLIES(AND(
OR(TimeInterval(800,950),
TimeInterval(1000,1050)),
Owner("A")),
OccupyBox(143,4056,1536,2612))
The semantic condition “A” is a placeholder; it can be instantiated, for example, by an indicator for a rain cloud or a solar panel on a weather map.
By combining the different constructors, BeSpaceD formulae can be built to formalize information relevant for our demonstrators: topologies in which components influence each other, and concrete specifications such as locations of machines, average UV intensity in an area, or the capacity and location of power lines. In our framework, data can be imported while the system is running, which makes it possible to process live streamed data and integrate it into our decision process.
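As an illustration of how the constructors compose, the following self-contained Scala sketch re-declares a small (assumed) subset of the case classes from Figure \[fig:belang\] and builds the example formula from above; structural equality and pattern matching come for free with case classes:

```scala
// Assumed subset of the BeSpaceD constructors shown in the figure.
sealed trait Invariant
trait ATOM extends Invariant
case class AND(t1: Invariant, t2: Invariant) extends Invariant
case class OR(t1: Invariant, t2: Invariant) extends Invariant
case class IMPLIES(t1: Invariant, t2: Invariant) extends Invariant
case class TimeInterval[T](timepoint1: T, timepoint2: T) extends ATOM
case class Owner[O](owner: O) extends ATOM
case class OccupyBox(x1: Int, y1: Int, x2: Int, y2: Int) extends ATOM

// The example formula from the text: owner "A" occupies the box during
// either of the two time intervals.
val formula: Invariant =
  IMPLIES(
    AND(OR(TimeInterval(800, 950), TimeInterval(1000, 1050)), Owner("A")),
    OccupyBox(143, 4056, 1536, 2612))

// Pattern matching extracts the spatial conclusion of the implication.
val conclusion = formula match {
  case IMPLIES(_, box: OccupyBox) => Some(box)
  case _                          => None
}
```

Reasoning functions in the library traverse such trees in exactly this style, matching on the constructors.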
BeSpaceD-based Reasoning
------------------------
In addition to the modelling language, we need ways to reason about the models. BeSpaceD provides means for the efficient analysis of BeSpaceD formulas: ways to abstract formulas, map-reduce-like operations, filtering, and efficient processing of information with a special focus on time and space, such as breaking geometric constraints on areas down to geometric constraints on points.
Various ways to import and visualize information are supported and have been developed for program-specific needs. BeSpaceD supports the import of information from databases, but also the collection of information from sensors and its conversion into BeSpaceD datastructures. Some tasks can be outsourced to specialized external tools such as SMT solvers (e.g., we have a connection to z3 [@z3]); a typical task in the SMT case is resolving geometric constraints, such as deciding whether different areas overlap in time and space.
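The following Scala sketch illustrates two such reasoning steps on the geometric fragment; it is a simplified stand-in, not the actual BeSpaceD library API:

```scala
// Assumed simplified geometric atoms (cf. Figure [fig:belang]).
case class OccupyBox(x1: Int, y1: Int, x2: Int, y2: Int)
case class OccupyPoint(x: Int, y: Int)

// Break a geometric constraint on an area down to constraints on points
// (feasible for small grids; for large coordinate ranges the SMT route
// via z3 is preferable).
def unfold(b: OccupyBox): Seq[OccupyPoint] =
  for (x <- b.x1 to b.x2; y <- b.y1 to b.y2) yield OccupyPoint(x, y)

// Decide overlap directly: two boxes overlap iff neither lies strictly
// to one side of the other.
def overlap(a: OccupyBox, b: OccupyBox): Boolean =
  a.x1 <= b.x2 && b.x1 <= a.x2 && a.y1 <= b.y2 && b.y1 <= a.y2

val machine = OccupyBox(0, 0, 3, 2)
val hazard  = OccupyBox(2, 1, 5, 4)
// The point-wise check and the direct check agree on this example.
val byPoints = unfold(machine).toSet.intersect(unfold(hazard).toSet).nonEmpty
```

Unfolding to points is the general fallback that works for arbitrary region shapes, while the direct interval test is the efficient special case for boxes.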
An Example Use Case {#sec:exuc}
-------------------
To illustrate the workflow in our framework, we use an example use case. Figure \[fig:frameworkwf\] shows our framework responding to an event: an alarm triggered by a machine malfunction (see also [@etfa]).
![image](CollEngarchitecture){width="80.00000%"}
A simple example is provided below (cf. [@etfa]):
1.  A sensor sends a signal to a controller; the controller informs the plant control system via an internal network, thereby generating an alarm which is observed by collaborative engineering. Alternatively, the sensor controller communicates directly with a cloud instantiation of the collaborative engineering framework, thereby indicating a machine malfunction in a remote plant. Based on the provided information, we check its confidence by investigating historical data from the sensor and data collected from nearby sensors.
2.  Our goal is to provide information to staff, stakeholders, experts, and/or engineers to facilitate the handling of the incident and the collaboration among the stakeholders. Information is preferably provided in a concise visual way. For example, we can face situations where many alarms arrive in a short time, such as after a lightning strike to a major electrical component in a plant, and the information for display has to be filtered accordingly so that humans are not overburdened. The information provisioning is customized: different human stakeholders can be chosen by capabilities, availability or proximity, depending on whether a physical investigation is necessary or remote support is sufficient. Using our BeSpaceD-based reasoning, the collaboration platform can automatically match experts to the situation and offer resource conflict resolution while alerting them. The reasoning can take additional information into account, e.g., information provided in databases, including semantic models of plants and processes, and real-time information from streaming sources. Semantic models of plants provide mathematical logical descriptions of aspects of the plant, such as its geometric layout, cable and pipe interconnections, states of machinery (e.g., on, off, scheduled maintenance), possible dangers and interactions, physical locations, and possible effects on the surrounding area. Semantic models can also include timed information indicating how these aspects change over time.
3.  In order to display information, we generate incident-relevant information and encode in XML what shall be displayed to humans. The XML commands are interpreted and trigger the display of information on mobile devices, normal workstations or large-scale visualization facilities.
For this use case, the displayed information typically comprises profiles and other data stored in Microsoft SharePoint, camera views, as well as maps (including annotations). Our Microsoft SharePoint-based data is displayed in browser windows opened by our BeSpaceD framework. It can then be handled interactively; for example, BeSpaceD may trigger the display of an editable collaborative document on multiple devices at different sites.
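A minimal sketch of the XML generation step, assuming plain string templating (element and attribute names follow the example listing in Figure \[fig:visxml\]; the real framework may render these differently):

```scala
// Hypothetical incident data to visualize as an annotated map.
case class MapCommand(lat: Double, long: Double, zoom: String)

// Render commands in the XML visualization format shown earlier.
def render(cmds: Seq[MapCommand]): String = {
  val body = cmds.map { c =>
    s"""  <command type="map" lat="${c.lat}" long="${c.long}" zoom="${c.zoom}"></command>"""
  }.mkString("\n")
  s"<output>\n$body\n</output>"
}

val xml = render(Seq(MapCommand(-38.1771269, 146.3428259, "15z")))
```

The generated string is what the device-side interpreters consume to open the corresponding windows.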
Demonstrator Platforms {#sec:demo}
======================
This section describes some of our demonstrator work.
Demonstrator Platform
---------------------
RMIT set up the VxLab [@vxlab2], which serves as a space for demonstrators in the context of collaborative engineering. The VxLab is a distributed lab developed to enable training, combining high-resolution visualization, industrial automation facilities and cloud-based compute servers for analytics, visualization and simulation, connected via dedicated private networks.
The goals of the VxLab approach are:
- provision of a “sandbox” capability enabling rapid prototyping while mitigating security risks;
- network interconnectivity of complementary infrastructure such as industry labs, cloud and visualization facilities not commonly combined in research or industry practice;
- access to remote or dangerous labs without requiring physical travel, risk assessments, safety inductions or personal protective equipment;
- for university-industry collaborations, applications in training and education, in particular through projects giving students access to current and next-generation capabilities.
VxLab design concerns include:

-   Safety and security: typically connected with legislation, industry standards and mission-critical requirements, and directly applicable where VxLab extends into industrial applications, e.g., robot labs or data collection from SCADA servers. Security has implied closed networks and firewalls, which has a major impact on capability and usability.

-   Latency: inherent in collaboration over distance; it is a high priority to minimize or mitigate latency to enable (and test) real-time machine-to-machine interoperation as well as human-human and human-machine interaction. The use of pre-cached content, such as models and video sent prior to a collaboration task, has been demonstrated in proof-of-concept form, and off-the-shelf solutions such as cloud-based video conference tools have been preferred. Note, however, that in this work we do not aim at hard real-time responses; in particular, we do not want to eliminate the human in the loop, as the goal is rather to support humans in their work.

-   Bandwidth: connectivity between cloud infrastructure and external labs and devices.

-   High-quality visualization: to support monitoring and diagnosis of equipment such as robots, video quality should be maximized; in phase 1, a minimum of 1080p HD video quality was specified.

-   Usability: as skilled users migrate through different roles and physical locations in distributed projects, a usable and consistent virtual work environment must be preserved across locations.
The experience-oriented approach of VxLab has driven the exploration, use and adaptation of a mix of points on an architecture spectrum, including (i) a large number of desktop and service-based local network applications, (ii) open-source service- or web-based technologies, and (iii) cloud technologies such as IBM Bluemix or services such as those focused on open-source developer support. This has enabled experimenting with different architectural approaches and with the consequent tradeoffs among the above concerns.
![image](VXLabArchitecture.png){width="1.57\columnwidth"}
VxLab’s network architecture is shown in Figure \[fig:vxlab-arch\]:
\(i) Within VxLab, the Global Operations Visualization (GOV) Lab provides a high resolution 8m x 2m video display wall and PC cluster supporting multiple local users simultaneously displaying and interacting with standard applications and internet services. GOV Lab is the primary visualization facility for the collaborative engineering project. Figure \[fig:screenshot2\] shows GOV Lab.

The video wall is based on the web-based SAGE2 framework[^4], which is scalable with respect to display wall size, number of applications and services, and number of users interacting with the display. The SAGE2 framework provides a method for deployment and co-ordination of active tiled applications and supports flexible size, layout and compositing of displayed applications: application windows can be arbitrarily rearranged and sized across the display wall (irrespective of physical monitor boundaries), and display elements can incorporate transparency. The display wall is driven by PCs running Linux, one per display column, supported by a dedicated 10Gbps switch. Applications are distributed across external clients, which provide input, data or video sources, the SAGE2 server itself, and display tiles. The SAGE2 NodeJS web server is responsible for tracking and managing application layout, routing video and input/state sharing events via the websockets HTTP extension for real-time TCP-based communication in a web context. For rendering, display tiles run a customized web browser; SAGE2-compliant applications send display output via the server to the display tiles. Since applications are HTML5 embedded in displays, it is possible to rapidly prototype and deploy rich tiled applications, including 3D animations, using extensions such as WebGL.
In VxLab a SAGE2 virtual desktop client is used to connect to workstations running the VNC protocol; thus existing applications for development, simulation and data analytics can be combined with video feeds for live monitoring of connected facilities. The use of standard protocols for desktop sharing enables the use of virtualized host desktops running anywhere in VxLab, subject to bandwidth/latency limitations. Beyond adaptation of existing applications, we have demonstrated that SAGE2 enables combinations of either or both of (a) rich 2D or 3D applications such as interactive maps or 3D model viewers and (b) multiple users in multiple locations separated by thousands of kilometres and high latency.

\(ii) A 1Gbps local area network switch connects the video wall to other local VxLab equipment, for example a video conference system and a hardware-in-the-loop simulator, as well as a collection of services such as SharePoint and ABB’s RobotStudio IDE. Cloud-based video conference software integrates with traditional H323 video conference hardware and enables basic sharing of the video wall content via steerable front and rear cameras.

\(iii) The VxLab is connected to a set of industry and industrial labs, for example a mini-factory which provides sensor data for the collaborative engineering framework. A wireless internet gateway connects to a set of single-board-computer-based controllers connected via custom I/O boards to factory components such as pick-and-place units and conveyors. This enables experimental cloud-based analytics, control and visualization of plant status, including on mobile devices. Two ABB IRB120 industrial robot arms are also used to feed events into the collaborative engineering framework via IRC5 controllers. The robots have been used to demonstrate remote simulation, configuration and operation, including service-based synchronisation of physical and simulated equipment.
A ROS-based collaborative robot enables high-bandwidth instrumentation of robot joint encoders and 3D visualization of the robot configuration in real time via ROS rviz.

\(iv) An additional visualization facility for collaborative engineering is provided by a cluster of 6 high resolution virtual experience portals (VxPortals) [@salento], each consisting of a 4K 60Hz display, a workstation and a depth-based sensor for tracking a small number of human users. The VxPortals are connected to VxLab and each other via a 10Gbps network. The VxPortals have been used, for example, with a highly customized version of SAGE2 to display immersive 3D applications stretching across multiple VxPortals based on parallax via head tracking; thus a VxPortal functions as a “magic window” (or, with a headset, as virtual/mixed reality for a single user) into a virtual 3D-rendered environment.

\(v) The Cyber-Physical Simulation Rack (CSRack) provides a privately hosted, publicly accessible cloud capability to support modelling, simulation and collaboration services. CSRack consists of a 40-node cluster of blade servers with 100G of RAM and solid state disk, in a data centre connected to VxLab via a shared 4x10Gbps link. Experimental OpenStack configurations provide some consistency of user experience with the Australian Nectar cloud infrastructure[^5], enabling, for example, prototyping the integration of computer vision into automation applications and the use of hosted message queues, either in CSRack or in Nectar, for cloud-based distributed sensor monitoring or analytics. BeSpaceD can be deployed in Nectar or CSRack and used as a service by monitoring equipment such as controllers. A future possibility is to explore the use of CSRack as a “cloudlet” (or fog): a locally connected (low-latency, high-bandwidth) proxy to Nectar.

\(vi) Advanced user interface devices such as depth-sensor-based controllers and virtual reality (VR) and augmented reality (AR) headsets are being used in the lab. VR and AR devices are currently used for gaming research, with efforts underway to explore multi-disciplinary crossover into automation.
The SmartSpace Demonstrator {#sec:demosmart}
---------------------------
We have built a demonstrator for smart-energy decision support based on the ingredients described in the previous sections. Decision support uses the collaborative engineering framework and is triggered by regularly recurring feeds of live weather data; we have implemented a connection to the Australian Bureau of Meteorology. BeSpaceD is used as a format to represent the weather data, and the conversion happens in real time. Furthermore, the BeSpaceD language is used to describe rules and models for our smart-grid system. Rules and models can be changed, added and removed, and they integrate into the Scala-based environment. We have experimented with a variety of rules; for example, we have used the one provided below [@etfa2016se]:\
$t_1 \le time \le t_2$ $\wedge$\
cloud coverage filtered by area$_1$ $\ge$ threshold\
$\wedge$ ... $\wedge$\
cloud coverage filtered by area$_n$ $\ge$ threshold\
$\longrightarrow$\
critical solar energy level
![image](ColEngoldview){width=".99\textwidth"}
The rule is provided using assumption logic at a very abstract specification level and follows the pattern: a condition implies / triggers a reaction. Here, $t_1$ and $t_2$ are time points (the first line specifies a time interval), and [area]{}$_1$ ... [area]{}$_n$ are spatial / geometric areas, e.g., on a map. The triggered reaction is the critical solar energy level: using this rule, stakeholders can be informed based on rain-cloud coverage in certain areas.
The implementation of the rules is done in Scala using the constructs introduced in the previous sections; for example, one specifies how the filtering of an area is done by instantiating BeSpaceD code. The rule shown is relatively simple. In addition to the rules, one also needs to specify the triggered reactions: the XML code that comprises the visualization information and is then interpreted to trigger a visualization.
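To make the rule above concrete, the following Scala sketch instantiates the area filtering and the threshold check. The box-based coverage model, names and numbers are illustrative assumptions rather than the deployed rule set (clouds are assumed pairwise non-overlapping):

```scala
// Illustrative axis-aligned boxes with half-open extent (x2, y2 exclusive).
case class Box(x1: Int, y1: Int, x2: Int, y2: Int) {
  def area: Int = (x2 - x1) * (y2 - y1)
  def intersect(o: Box): Option[Box] = {
    val (ax, ay, bx, by) = (x1 max o.x1, y1 max o.y1, x2 min o.x2, y2 min o.y2)
    if (ax < bx && ay < by) Some(Box(ax, ay, bx, by)) else None
  }
}

// "cloud coverage filtered by area": the fraction of a monitored area
// covered by rain clouds (summing is sound only for non-overlapping clouds).
def coverage(clouds: Seq[Box], area: Box): Double =
  clouds.flatMap(_.intersect(area)).map(_.area).sum.toDouble / area.area

// The rule: within [t1, t2], if every monitored area is covered beyond the
// threshold, a critical solar energy level is flagged.
def criticalSolarLevel(t: Int, t1: Int, t2: Int, clouds: Seq[Box],
                       areas: Seq[Box], threshold: Double): Boolean =
  t1 <= t && t <= t2 && areas.forall(a => coverage(clouds, a) >= threshold)

val clouds = Seq(Box(0, 0, 10, 10))
val plants = Seq(Box(2, 2, 6, 6), Box(7, 7, 9, 9))
val alarm  = criticalSolarLevel(12, 10, 20, clouds, plants, 0.8)
```

In the demonstrator the result of such an evaluation is what selects the XML reaction to emit.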
Figure \[fig:screenshot2\] provides an example of a triggered reaction in the collaborative engineering framework. Multiple windows with profiles of staff and information about a machine are shown. In addition, the triggered visual information features an annotated map.
![image](DSC07790){width=".7\textwidth"}
Figure \[fig:ed\] shows another way of visualizing weather data; we have implemented this frontend for the SmartSpace demonstrator. The visualization takes descriptions in our BeSpaceD language as input, here containing information on rain-cloud coverage, the clouds' positions in space, and the locations of power plants. The generated 3D view can be animated by adding time information to the rain-cloud positions in the underlying BeSpaceD datastructures. A head-tracking system based on Microsoft Kinect technology can be used to move the view, so that the perspective changes with the position of the observer.
Conclusion and Future Work {#sec:concl}
==========================
We presented our collaborative engineering framework and the BeSpaceD tool-set as its main ingredient. Collaborative engineering supports remote industrial operations: especially when industrial sites are located far away from major population centres, software-based collaboration and maintenance solutions can help to reduce the costs associated with remote operations. We presented our demonstrator platform infrastructure and provided examples of decision support for industrial automation and smart-energy systems. Future work comprises the support of additional monitoring functionality: monitoring machines through cloud-based services and connecting this to the collaborative engineering framework is an ongoing topic. Such monitors are able to detect anomalies in communication and timing behavior that can indicate malware and other security violations.
Acknowledgement {#acknowledgement .unnumbered}
---------------
The authors would like to thank Lasith Fernando, Edward Watkins, Keith Foster, Abhilash G, and Yvette Wouters for their help.
[99]{}
Khandakar Ahmed, Jan Olaf Blech, Mark Gregory and Heinz Schmidt. Software Defined Networking for Communication and Control of Cyber-physical Systems. International Workshop on Internet of Things Technologies, IEEE, 2015.
Khandakar Ahmed, Nazmus S. Nafi, Jan Olaf Blech, Mark A. Gregory, Heinrich Schmidt. Software defined industry automation networks. 27th International Telecommunication Networks and Applications Conference (ITNAC), IEEE, 2017.
Amin, S. M., & Wollenberg, B. F. (2005). Toward a smart grid: power delivery for the 21st century. Power and energy Magazine, IEEE, 3(5), 34-41.
M. Aiello, I. E. Pratt-Hartmann, and J. FAK van Benthem, eds. Handbook of spatial logics. Springer, 2007.
M. Apel. A 3D geological information system framework. Geophysical Research Abstracts, vol. 7, European Geosciences Union, 2005.
B. Bennett, A. G. Cohn, F. Wolter, M. Zakharyaschev. Multi-Dimensional Modal Logic as a Framework for Spatio-Temporal Reasoning. Applied Intelligence, Volume 17, Issue 3, Kluwer Academic Publishers, November 2002.
T. Berners-Lee, J. Hendler, and O. Lassila. The semantic web. Scientific american 284, no. 5 (2001): 28-37.
J. O. Blech. An Example for BeSpaceD and its Use for Decision Support in Industrial Automation. CoRR abs/1512.04656 (2015). <http://arxiv.org/abs/1512.04656>
J. O. Blech and Peter Herrmann. Behavioral Types for Space-aware Systems. Model-based Architecting of Cyber-Physical and Embedded Systems. CEUR proceedings, vol. 1508, 2015.
Jan Olaf Blech, Lasith Fernando, Keith Foster, Abhilash G and Sudarsan Sd. Spatio-temporal Reasoning and Decision Support for Smart Energy Systems. Emerging Technologies and Factory Automation (ETFA), IEEE, 2016.
Jan Olaf Blech, Lasith Fernando, Keith Foster, Abhilash G, Sudarsan SD. Towards Decision Support for Smart Energy Systems based on Spatio-temporal Models. https://arxiv.org/abs/1705.03860. arXiv.org, 2017.
Jan Olaf Blech, Keith Foster. Operators for Space and Time in BeSpaceD. CoRR abs/1602.08809 (2016) <http://arxiv.org/abs/1602.08809>
J. O. Blech and H. Schmidt. BeSpaceD: Towards a Tool Framework and Methodology for the Specification and Verification of Spatial Behavior of Distributed Software Component Systems. In [*arXiv.org*]{}, <http://arxiv.org/abs/1404.3537>, 2014.
J. O. Blech and H. Schmidt. Towards Modeling and Checking the Spatial and Interaction Behavior of Widely Distributed Systems. In [*Improving Systems and Software Engineering Conference*]{}, 2013.
J. O. Blech, M. Spichkova, I. Peake, and H. Schmidt. Cyber-Virtual Systems: Simulation, Validation & Visualization. In [*Evaluation of Novel Approaches to Software Engineering*]{}, 2014.
Jan Olaf Blech, Ian Peake, Heinz Schmidt, Mallikarjun Kande, Akilur Rahman, Srini Ramaswamy, Sudarsan SD, Venkateswaran Narayanan. Efficient Incident Handling in Industrial Automation through Collaborative Engineering. Emerging Technologies and Factory Automation (ETFA), IEEE, 2015.
J. O. Blech, I. Peake, H. Schmidt, M. Kande, S. Ramaswamy, Sudarsan SD., and V. Narayanan. Collaborative Engineering through Integration of Architectural, Social and Spatial Models. , IEEE Computer, 2014.
G. Booch and A. W. Brown. Collaborative development environments. Advances in Computers, 59 (2003): 1-27, Academic Press, 2003.
A. Cabani, S. Ramaswamy, M. Itmi, J.P. P[é]{}cuchet, PHAC: an Environment for Distributed Collaborative Applications on P2P Networks, International Journal of Intelligent Control and Systems, Vol 12, \#3, Sept 2007.
L. Caires and L. Cardelli.A Spatial Logic for Concurrency (Part I). Information and Computation, Vol 186/2 November 2003.
L. Caires and L. Cardelli. A Spatial Logic for Concurrency (Part II). Theoretical Computer Science, 322(3) pp. 517-565, September 2004.
L. Caires and H. Torres Vieira. SLMC: a tool for model checking concurrent systems against dynamical spatial logic specifications. Tools and Algorithms for the Construction and Analysis of Systems. Springer, 2012.

Chih-Hong Cheng, Tuncay Guelfirat, Christian Messinger, Johannes O. Schmitt, Matthias Schnelte, Peter Weber. Semantic degrees for Industrie 4.0 engineering: deciding on the degree of semantic formalization to select appropriate technologies. Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, pages 1010-1013, ACM, 2015.
R. Cross, S. P. Borgatti, and A. Parker. Making Invisible Work Visible: USING SOCIAL NETWORK ANALYSIS TO SUPPORT STRATEGIC COLLABORATION. California management review 44, no. 2 (2002)
M. R. Cutkosky, J. M. Tenenbaum, and J. Glicksman. Madefast: collaborative engineering over the Internet. Communications of the ACM 39.9 (1996): 78-87.
DS DELMIA V6R2013x – Fact Sheet: 3DEXPERIENCES of Global Production Systems for all stakeholders in the extended supply chain. Dassault Syst[è]{}mes 2013.
ENOVIA V6R2013x – Fact Sheet. Dassault Syst[è]{}mes 2013
Hassan Farhangi. The path of the smart grid. IEEE Power and Energy Magazine, vol 8., issue 1, 2010.
F. Gadducci and A. Lluch Lafuente. Graphical Verification of a Spatial Logic for the $\pi$-calculus. Electronic Notes in Theoretical Computer Science 154.2, 2006.

S. Ghosh, A. Dubey, S. Ramaswamy. C-FaRM: A Collaborative and Context Aware Framework for Requirements Management. 4th International Workshop on Managing Requirements Knowledge, 19th IEEE International Requirements Engineering Conference, August 30th 2011, Trento, Italy.
S. J. Geoghegan, G. McCorkle, C. Robinson, J. Brown, G. Fundyler, S. Ramaswamy, M. Tudoreanu, R. Seker and M. Itmi, “A Multi-Agent System Architecture for Cooperative Maritime Networks” 3rd Annual IEEE International Systems Conference, Vancouver, Canada, March 2009.
S. Haar, S. Perchy, C. Rueda, F. Valencia. An Algebraic View of Space/Belief and Extrusion/Utterance for Concurrency/Epistemic Logic. Principles and Practice of Declarative Programming, 2015.
F. Han, J. O. Blech, P. Herrmann, H. Schmidt. Towards Verifying Safety Properties of Real-Time Probabilistic Systems. Formal Engineering approaches to Software Components and Architectures, 2014.
F. Han, J. O. Blech, P. Herrmann, H. Schmidt. Model-based Engineering and Analysis of Space-aware Systems Communicating via IEEE 802.11. COMPSAC, IEEE, 2015.
P. Herrmann, J. O. Blech, F. Han, and H. Schmidt. A Model-based Toolchain to Verify Spatial Behavior of Cyber-Physical Systems. Asia-Pacific Services Computing Conference (APSCC), 2014.
D. Hirschkoff, [É]{}. Lozes, D. Sangiorgi. Minimality Results for the Spatial Logics. Foundations of Software Technology and Theoretical Computer Science, vol 2914 of LNCS, Springer, 2003.
Kanchev, H., Lu, D., Colas, F., Lazarov, V.,& Francois, B. (2011). Energy management and operational planning of a microgrid with a PV-based active generator for smart grid applications. Industrial Electronics, IEEE Transactions on, 58(10), 4583-4592.
D. Ko[ß]{}, D. Bytschkow, P. K. Gupta, B. Sch[ä]{}tz, F. Sellmayr, S. Bauerei[ß]{}. Establishing a smart grid node architecture and demonstrator in an office environment using the soa approach. Proceedings of the First International Workshop on Software Engineering Challenges for the Smart Grid, IEEE, 2012.
McGuire, J. G., Kuokka, D. R., Weber, J. C., Tenenbaum, J. M., Gruber, T. R., & Olsen, G. R. (1993). SHADE: Technology for knowledge-based collaborative engineering. Concurrent Engineering, 1(3), 137-146.
L. De Moura, N. Bj[ø]{}rner. Z3: An efficient SMT solver. In Tools and Algorithms for the Construction and Analysis of Systems (pp. 337-340). Springer, 2008.
L. Northrop, P. Feiler, R. P. Gabriel, J. Goodenough, R. Linger, T. Longstaff, R. Kazman et al. [Ultra-Large-Scale Systems]{}-The Software Challenge of the Future. (2006).
Ian Peake, Jan Olaf Blech, Lasith Fernando, Heinz Schmidt, Ravi Sreenivasamurthy and Sudarsan SD. Visualization Facilities for Distributed and Remote Industrial Automation: VxLab. Emerging Technologies and Factory Automation (ETFA), IEEE, 2015.
Ian D. Peake, Jan Olaf Blech, Edward Watkins, Stefan Greuter, Heinz W. Schmidt. The Virtual Experiences Portals – A Reconfigurable Platform for Immersive Visualization. 3rd International Conference on Augmented Reality, Virtual Reality and Computer Graphics (SALENTO AVR 2016), Springer, 2016.
A. Salmen et al. ComVantage: Mobile Enterprise Collaboration Reference Framework and Enablers for Future Internet Information Interoperability. Future Internet, vol. 7858 of LNCS, Springer 2013.
Ben Schneider, Alois Zoitl, Monika Wenger and Jan Olaf Blech. Evaluating Software-defined Networking for Deterministic Communication in Distributed Industrial Automation Systems. Emerging Technologies and Factory Automation (ETFA), IEEE, 2017.
G. Smith and J. Friedman. A Technology Whose Time Has Come. Earth Observation Magazine, November 2004.
Y. Sure, M. Erdmann, J, Angele, S. Staab, R. Studer, and D. Wenke. OntoEdit: Collaborative ontology development for the semantic web. Springer Berlin Heidelberg, 2002.
C. Weaver, D. Peuquet, A. M. MacEachren. STNexus: An Integrated Database and Visualization Environment for Space-Time Information Exploitation. <http://www.geovista.psu.edu/publications/2005/Weaver_ARDA_05.pdf>, 2005.
S. Dal Zilio, D. Lugiez, C. Meyssonnier. A logic you can count on. Symposium on Principles of programming languages, ACM, 2004.
[^1]: sage2.sagecommons.org
[^2]: <http://www.opengeospatial.org>
[^3]: <https://bitbucket.org/bespaced/bespaced-2016/>
[^4]: [sage2.sagecommons.org](sage2.sagecommons.org)
[^5]: [nectar.org.au](nectar.org.au)
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We analyze a possible explanation of the pulsar motions in terms of resonant neutrino transitions induced by a violation of the equivalence principle (VEP). Our approach, based on a parametrized post-Newtonian (PPN) expansion, shows that VEP effects give rise to highly directional contributions to the neutrino oscillation length. These terms induce anisotropies in the linear and angular momentum of the emitted neutrinos, which can account for both the observed translational and rotational pulsar motions. The violation needed to produce the actual motions is completely compatible with the existing bounds.'
address: |
$^{\dagger }$Instituto de Ciencias Nucleares, UNAM, Ap. Postal 70-543, 04510\
México DF, Mexico\
$^{*}$ Theoretical Physics, University of Oxford, 1 Keble Road, Oxford\
OX13NP, United Kingdom\
$^{\ddagger}$ Instituto Balseiro and CAB, Universidad Nacional de Cuyo and CNEA,\
8400 Bariloche, Argentina
author:
- 'M. Barkovich$^{\dagger }$, H. Casini$^{*}$, J.C. D’Olivo$^{\dagger }$, R.Montemayor$^{\ddagger }$'
title: Pulsar motions from neutrino oscillations induced by a violation of the equivalence principle
---
It is very difficult to obtain precise evidence on the characteristics of the gravitational interaction beyond the range where the Newtonian approximation holds. Only systems with very large densities of mass in rapid motion can provide suitable laboratories for such a phenomenology. One well-known example is the orbital behavior of binary pulsars, which gives support to the production of gravitational waves. Type II supernovae are another interesting scenario. In this case the intense neutrino flux produced during the gravitational collapse can be sensitive to subtle characteristics of the gravitational interaction. In this letter we analyze some effects on this flux that could test a possible violation of the equivalence principle.
Perhaps one of the most intriguing characteristics of pulsar dynamics related to the supernova stage is their anomalous proper motions. There is strong observational evidence that translational velocities of pulsars include a significant component from kicks given when they are formed [@velocidad]. Several mechanisms have been proposed to explain such kicks, but none of them is completely satisfactory[@mecanismos; @kusegre; @horvat]. Recently it has also been pointed out that the observed rotation periods are several orders of magnitude shorter than the predictions for the cores of the protoneutron stars[@spruit]. Thus the spins of pulsars are probably produced by the same mechanism that gives them their translational velocities during the formation stage. Moreover, there is significant observational evidence that seems to indicate a polarization of the motion of young pulsars along a direction near the plane of the galaxy[@polarization]. This correlation could mean that kicks involve a characteristic length at least of the order of the galaxy radius, which is very difficult to explain on the basis of the proposed mechanisms.
An appealing possibility that could account for the translational kick is a 1% anisotropy in the momentum carried by the neutrinos emitted during pulsar formation. However small, such anisotropy is not easy to obtain. Kusenko and Segre (KS)[@kusegre] have proposed a mechanism based on the deformation of the resonance surface when neutrinos undergo matter oscillations in the presence of a magnetic field[@magnetic]. Unfortunately the necessary magnetic field is relatively high, $B\gtrsim
10^{15}$ G[@kusegre; @qian; @raffelt]. Furthermore, the condition that the resonance surface has to lie between the neutrinospheres implies $m_{\nu
}\sim 100$ eV. The existence of such heavy neutrinos is cosmologically ruled out unless they are unstable.
A less orthodox mechanism for neutrino oscillations was proposed several years ago. It requires a flavor-dependent coupling of neutrinos to gravity[@gasperini], and no neutrino mass. Consequences of such a violation of the equivalence principle (VEP) in the neutrino sector have been analyzed in a number of papers[@vep]. In particular, in Ref.[@horvat] it was applied to the problem of the translational motion of pulsars. In this case the desired kick can be achieved with massless neutrinos, but the intensity of the magnetic field is similar to the one required by the KS mechanism.
In this work we propose a purely gravitational explanation for both the translational and rotational motion of pulsars, where the neutrino oscillation and the momentum anisotropy are induced by VEP effects and do not rely on the magnetic field of the protostar. We work within the framework of a generalized parametrized post-Newtonian (PPN) formalism [@will], previously applied to the solar neutrino problem[@nuestro], that naturally includes the effect of a preferred reference system. Our approach generalizes the usual VEP scheme[@vep], by including the effect of potentials of the next PPN order beyond the Newtonian potential $U$, and a tensorial potential of the same order as $U$. In principle all these terms should be present if the equivalence principle is violated. In this context the neutrino oscillations are a manifestation of a VEP effect, and the momentum anisotropy is a signature of the preferred reference system. The accuracy of the equivalence principle may be characterized by limits on the differences of the PPN parameters for different neutrinos. As we show, violations of the equivalence principle consistent with the present bounds generate the necessary kicks to produce the observed pulsar motions.
The linearized Dirac equation for massless neutrinos in a static gravitational field leads to the dispersion relation[@cm]: $$E=p\left[ 1+h_{oi}\,\hat{p}_{i}\,-\frac{1}{2}h_{ij}\hat{p}_{i}\hat{p}_{j}-\frac{1}{2}h_{oo}\right] \,, \label{HAM}$$ where the $h^{\mu \nu }$ fields are defined by $g^{\mu \nu }=\eta ^{\mu \nu }+h^{\mu \nu }$, with $\eta ^{\mu \nu }$ the Minkowski metric. In deriving this relation we have neglected the spatial derivatives of the gravitational potentials, which is justified for neutrinos in astrophysical systems. Up to third order in the velocity of the source $w$ we can write ($G=\hbar =c=1$): $$\begin{aligned}
h_{oo} &=&2\gamma ^{\prime }U+{\cal O}(w^{4})\,, \\
h_{oi} &=&-\frac{7}{2}\Delta _{1}V_{i}-\frac{1}{2}\Delta _{2}W_{i}+(\alpha
_{2}-\frac{1}{2}\alpha _{1})v_{i}U-\alpha _{2}v_{j}U_{ji}+{\cal O}(w^{4})\,,
\label{PPN} \\
h_{ij} &=&2\gamma U\delta _{ij}+\Gamma U_{ij}\,+{\cal O}(w^{4})\,.\end{aligned}$$ The dimensionless parameters of the PPN expansion are $\gamma $, $\gamma ^{\prime }$, $\Delta _{1}$, $\Delta _{2}$, $\Gamma $, ${\bf v}$, $\alpha _{1}$, and $\alpha _{2}$. The parameters $\alpha _{1}$ and $\alpha _{2}$ vanish in Lorentz covariant theories, but if there exists a preferred reference frame, characterized by a velocity ${\bf v}$, they should be nonzero.
The general expressions for the potentials $U$, $V_{i}$, $W_{i}$ and $U_{ji}$ can be found in Ref.[@nuestro]. In the present case, the source of the gravitational field is the protoneutron star. Considering a spherical configuration and a rigid rotation, the PPN potentials become $$\begin{aligned}
U &=&4\pi \int_{0}^{R}dr^{\prime }r^{\prime 2}\left[ \frac{1}{r}\theta
\left( r-r^{\prime }\right) +\frac{1}{r^{\prime }}\theta \left( r^{\prime
}-r\right) \right] \rho \left( r^{\prime }\right) \,, \\
U_{ij} &=&\hat{r}_{i}\hat{r}_{j}I(r)+\delta _{ij}\;J(r)\,, \\
V_{i}\; &=&W_{i}\,=\;w_{i}J(r)\,.\end{aligned}$$ where $\rho (r)$ is the mass distribution of the star and $w_{i}=\epsilon
_{ijk}\Omega _{j}r_{k}$. Here ${\bf \Omega }$ is the angular velocity and $$\begin{aligned}
I &=&\frac{4\pi }{r^{3}}\int_{0}^{R}dr^{\prime }r^{\prime \;2}\left(
r^{2}-r^{\prime \;2}\right) \theta \left( r-r^{\prime }\right) \rho \left(
r^{\prime }\right) \,, \\
J &=&\frac{4\pi }{3}\int_{0}^{R}dr^{\prime }\left[ \frac{r^{\prime 4}}{r^{3}}
\theta \left( r-r^{\prime }\right) +r^{\prime \;}\theta \left( r^{\prime
}-r\right) \right] \rho \left( r^{\prime }\right) \,.\end{aligned}$$
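As a sanity check on these quadratures, they can be evaluated numerically. The sketch below uses a toy uniform-density star with $G=c=1$ and $R=\rho _{c}=1$ (illustrative values, not a protoneutron star model) and recovers the Newtonian exterior limit $U(R)=M/R$, together with the uniform-density surface relation $I(R)=2J(R)$.

```python
import numpy as np

# Numerical check of the PPN potentials U(r), I(r), J(r) defined above for a
# toy uniform-density star (rho = const; units G = c = 1, R = rho_c = 1).
R, rho_c = 1.0, 1.0
rp = np.linspace(1e-8, R, 100001)      # integration grid in r'
rho = np.full_like(rp, rho_c)

def trap(y):                           # trapezoidal quadrature over rp
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(rp)))

def U(r):
    return 4 * np.pi * trap(rp**2 * np.where(rp < r, 1 / r, 1 / rp) * rho)

def I_pot(r):
    return 4 * np.pi * trap(rp**2 * np.where(rp < r, (r**2 - rp**2) / r**3, 0.0) * rho)

def J_pot(r):
    return (4 * np.pi / 3) * trap(rp * np.where(rp < r, rp**3 / r**3, 1.0) * rho)

M = 4 * np.pi * rho_c * R**3 / 3
print(U(R), M / R)             # exterior Newtonian limit: U(R) = M/R
print(I_pot(R), 2 * J_pot(R))  # at the surface I = 2J for uniform density
```

The trapezoidal rule is written out explicitly to avoid depending on the version-specific `np.trapz`/`np.trapezoid` helper.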
In the presence of VEP, all the PPN parameters can depend on the neutrino flavor. We assume that deviations from a metric theory are small, so that to a very good approximation there is a common coordinate frame for all flavors. Since the parameters are flavor dependent, distinct neutrinos will undergo different phase shifts when passing through the same region of space. In the presence of neutrino mixing, phase shift differences become observable as neutrino oscillations. For simplicity, in what follows we consider two neutrino flavors, $\nu _{e}$ and $\nu _{\mu }$ or $\nu _{\tau }$. They are assumed to be linear superpositions of the gravitational eigenstates $\nu _{1}^{g}$ and $\nu _{2}^{g}$, with a mixing angle $\theta _{g}$. Along the neutrino path, flavor evolution is governed by $$i\frac{d}{dr}\begin{pmatrix} \nu _{e} \\ \nu _{\mu } \end{pmatrix} =\frac{\Delta _{0}}{2}\begin{pmatrix} -\cos 2\theta _{g} & \sin 2\theta _{g} \\ \sin 2\theta _{g} & \cos 2\theta _{g} \end{pmatrix} \begin{pmatrix} \nu _{e} \\ \nu _{\mu } \end{pmatrix} \,, \label{alfa}$$ with $\Delta _{0}=E^{2}-E^{1}$. For a rotating protoneutron star we have $$\begin{aligned}
\Delta _{0} &=&\left\{ -(\delta \gamma ^{\prime }+\delta \gamma )U-\delta
\Gamma J-\delta \Gamma I({\bf \hat{r}\cdot \hat{p})}^{2}\right. \nonumber \\
&&+\left[ (\delta \alpha _{2}-\frac{1}{2}\delta \alpha _{1})U-\delta \alpha
_{2}J\right] {\bf v\cdot \hat{p}}-\delta \alpha _{2}I({\bf \hat{r}\cdot v})
{{\bf \hat{r}\cdot \hat{p}}} \nonumber \\
&&\left. -\frac{1}{2}\left( 7\delta \Delta _{1}+\delta \Delta _{2}\right) J
{\bf \Omega }\times {\bf r\cdot \hat{p}}\right\} E\,, \label{delta}\end{aligned}$$ where $E=p$ is the neutrino energy, $\delta \gamma =\gamma ^{2}-\gamma ^{1}$, and the same for the differences between the other PPN parameters. Here $\Delta_{0}$ plays the same role as the quantity $(m_{2}^{2}-m_{1}^{2})/2E$ in the mass mechanism for neutrino oscillations. Note that in our case the potentials depend on $r$ and hence $\Delta _{0}=\Delta _{0}(r)$. Terms with ${\bf v}$ appear whenever a preferred frame exists. In principle ${\bf v}$ could also depend on the gravitational flavor, but the observed position offset for pulsar-supernova remnant pairs[@polarization] can be interpreted as the existence of a translational effect associated with a preferred direction. For this reason we take ${\bf v}$ as a flavor independent parameter. Its action is analogous to the one produced by a magnetic field in the KS mechanism.
As is well known, neutrino oscillations in matter differ from the oscillations in vacuum. The interaction of neutrinos with the background modifies their dispersion relations, and under favorable conditions leads to the MSW phenomenon of resonant flavor transformation. If electrons are the only leptons present in the medium, the term $\frac{G_{F}}{\sqrt{2}}
N_{e}(r)\sigma _{3}$ has to be added to the matrix in Eq.(\[alfa\]), where $\sigma _{3}$ is the Pauli matrix and $N_{e}(r)$ denotes the electron number density. The resulting Hamiltonian can be diagonalized at every point by a local rotation, with the mixing angle in matter $\theta _{m}(r)$ given by ${\rm \sin }2\theta _{m}(r)=\frac{\Delta _{0}(r)\ }{\Delta (r)}{\rm \sin }
2\theta _{g}$, with $$\Delta (r)=\sqrt{\left( \Delta _{0}(r)\ {\rm \cos }2\theta _{g}-\sqrt{2}
G_{F}N_{e}(r)\right) ^{2}+\left( \Delta _{0}(r)\ {\rm \sin }2\theta
_{g}\right) ^{2}}\,.$$
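The resonance structure of this expression can be illustrated in a few lines; the numbers below are arbitrary illustrative values (only the ratio of $\Delta _{0}$ to the matter potential $\sqrt{2}G_{F}N_{e}$ matters), not a model of the protoneutron star.

```python
import numpy as np

# Mixing angle in matter for the two-flavour system above. delta0 and the
# matter potential V = sqrt(2) G_F N_e are in the same (arbitrary) units;
# theta_g is the gravitational mixing angle.
def sin_2theta_m(delta0, theta_g, V):
    num = delta0 * np.sin(2 * theta_g)
    return num / np.hypot(delta0 * np.cos(2 * theta_g) - V, num)

theta_g = 1e-3
delta0 = 1.0
V = np.linspace(0.0, 2.0, 5)
print(sin_2theta_m(delta0, theta_g, V))
# At the resonance V = delta0 cos(2 theta_g), mixing is maximal:
print(sin_2theta_m(delta0, theta_g, delta0 * np.cos(2 * theta_g)))  # -> 1.0
```

Away from the resonance the effective mixing collapses back to the small value $\sin 2\theta _{g}$, which is why the adiabaticity of the resonance crossing, discussed next, controls the efficiency of the flavor conversion.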
There is a resonance when the diagonal elements of the Hamiltonian vanish, i.e. when $\sqrt{2}G_{F}N_{e}(r_{R})=\Delta _{0}(r_{R})\,\cos 2\theta _{g}$. The efficiency of the flavor transformation depends on the adiabaticity of the process, which is characterized by the parameter $$\kappa =\left| \frac{1}{\Delta }\frac{d\theta _{m}}{dr}\right| _{r=r_{R}}=\left| \Delta _{0}\frac{\sin 2\theta _{g}\tan 2\theta _{g}}{h_{N_{e}}^{-1}-h_{\Delta _{0}}^{-1}}\right| _{r=r_{R}}\,, \label{adiaba}$$ where the scale heights are $h_{N_{e}}^{-1}=\frac{d}{dr}\ln N_{e}$ and $h_{\Delta _{0}}^{-1}=\frac{d}{dr}\ln \Delta _{0}$. The transition will be adiabatic whenever $\kappa \gg 1$.
The translational kick comes from the anisotropy in the radial momentum carried by neutrinos emerging from the resonance surface. The resulting effect on the motion of the pulsar is obtained by integrating over the whole surface. Only the radial component of ${\bf \hat{p}}$ contributes to this integration. Therefore, to estimate the translational kick we use a simplified situation with a purely radial neutrino flux. In this case, $$\Delta _{0}=\left[ A\left( r\right) +B(r)v\cos \chi \right] E \,,$$ where $\chi $ is the angle between ${\bf r}$ and ${\bf v}$. The functions $A(r)$ and $B(r)$ are given by $$\begin{aligned}
A &=&-(\delta \gamma ^{\prime }+\delta \gamma )U-\delta \Gamma \left(
I+J\right)\,, \\
B &=&(\delta \alpha _{2}-\frac{1}{2}\delta \alpha _{1})U-\delta \alpha
_{2}\left( I+J\right)\,.\end{aligned}$$
The radius of a point on the distorted resonance surface can be written as $r_{R}=r_{o}+\delta \cos \chi $ ($\delta \ll r_{o}$). The radius of the unperturbed resonance sphere $r_{o}$ is determined by $$A\left( r_{o}\right) =\frac{\sqrt{2}G_{F}}{\cos 2\theta _{g}}\frac{
N_{e}(r_{o})}{E} \,,$$ and $$\delta =\left.\frac{B}{A}\frac{v}{h_{N_{e}}^{-1}-h_{A}^{-1}}
\right|_{r_{o}}\,,$$ where we keep only the terms linear in $\delta$, and $h_{A}^{-1}=
{\displaystyle{\frac{d }{dr}}}\ln A(r)$.
At the moment there is no agreement about the details of the production of a kick by a distorted neutrinosphere. To explore the possibilities of the VEP mechanism we will now consider this effect in the context of the main neutrinosphere models proposed.
For a hard neutrinosphere model in thermal equilibrium as considered in Refs. [@kusegre; @kusegref; @qian], the momentum asymmetry in the ${\bf v}$ direction is generated by the emission at points with different temperatures on the resonance surface: $\Delta p/p\approx \frac{2}{9}h_{T}^{-1}\delta $, where $h_{T}^{-1}=\frac{d}{dr}\ln T$. In the case of a quasi-degenerate gas of relativistic electrons, the chemical potential is approximately constant, $\mu _{e}\approx \left( 3\pi ^{2}N_{e}\right) ^{1/3}$, and $\frac{dN_{e}}{dT}=\frac{2}{3}T\mu _{e}$. Then $$\frac{\Delta p}{p}\approx Q\frac{Bv}{A}\,, \label{asym}$$ with $Q=\frac{\eta ^{2}\Lambda }{9\pi ^{2}}$, where $\eta =\mu _{e}/T$ is the degeneracy parameter for the electrons and $\Lambda
=h_{A}/(h_{A}-h_{N_{e}})$. Another possibility is to assume that the electron fraction $\ Y_{e}$ remains constant and $\rho \sim
T^{3}$[@qian]. In this case $h_{N_{e}}\sim h_{T}/3$, and $Q=\frac{2}{27}\Lambda $.
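Inverting $Q=\eta ^{2}\Lambda /9\pi ^{2}$ for the value $Q\sim 0.1$ quoted later for the hard neutrinosphere models (with $\Lambda \simeq 1$) gives a feel for the degeneracy this requires; this is only a consistency check, not part of the original derivation.

```python
import math

# Degeneracy parameter eta = mu_e/T implied by Q = eta^2 Lambda / (9 pi^2)
# for Q ~ 0.1 and Lambda ~ 1 (illustrative numbers).
Q, Lam = 0.1, 1.0
eta = math.sqrt(9 * math.pi**2 * Q / Lam)
print(eta)  # ~3: a mildly degenerate relativistic electron gas
```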
A different kick model in the literature uses a soft neutrinosphere[@kusegref; @raffelt]. In such a case there is an important reduction in the anisotropy given by the ratio ${\rho _{o}}/{\rho _{c}}$ of the density at the resonance and the density at the core. The momentum asymmetry can also be written as in Eq.(\[asym\]), with $Q=\rho _{o}h_{N_{e}}\Lambda
/18m_{c}$, where $m_{c}= \int_{r_{c}}^{r_{s}}\rho \,dr$ is the integral of the mass density between the central core and the surface of the star. In all the cases considered above the adimensional parameter $Q$ depends only on the specific model and the remaining factors contain the PPN parameters.
For a quantitative estimation of the effects of VEP on the neutrinosphere, we use the density profile $\rho (r)=\rho _{c}$ for $r<r_{c}$ and $\rho
(r)=\rho _{c}\left( r_{c}/r\right) ^{n}$ for $r>r_{c}$[@horvat]. We take $\rho _{c}=8\times 10^{14}\;g/cm^{3}$, $r_{c}=10\;km$, and $5\leq n\leq 7$, which give a good description of the supernova SN1987A[@parametros]. The resonance surface has to lie below the $\nu _{e}$ neutrinosphere and above the $\nu _{\mu }$ neutrinosphere. If we take $\rho _{o}\sim 10^{11}\;g/cm^{3}$
^{^{\prime }}+0.95\,\delta \Gamma \right) \cos 2\theta _{g}\simeq -6\times
10^{-10}$. For $\delta \Gamma =0$ our result agrees with the one obtained in Ref.[@horvat]. As pointed out in this work the adiabaticity condition is achieved provided that $\theta _{g}>10^{-4}$, $h_{N_{e}}^{-1}\lesssim
h_{\Delta_{0}}^{-1}$, and hence $\Lambda \simeq 1$ for every value of $n$. The value of the momentum asymmetry is $$\frac{\Delta p}{p}\simeq -Q\left( \delta \alpha _{1}-0.1\delta \alpha
_{2}\right) v\cos 2\theta _{g}\times 10^{9}\,.$$ For $T=3\,$MeV, $Q\sim 0.1$ in the hard neutrinosphere models, and $Q\simeq
4\times 10^{-5}$ for a soft neutrinosphere. Taking $\delta \alpha _{1}\sim
\delta \alpha _{2}\sim \delta \alpha $, and requiring $\Delta p/{p}\sim
0.01$, we obtain $v\delta \alpha\sim 10^{-10}$ and $v\delta\alpha \sim
10^{-7}$, respectively.
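The quoted constraints on $v\,\delta \alpha $ follow from simple arithmetic, which can be checked directly (taking $\cos 2\theta _{g}\simeq 1$ and $\delta \alpha _{1}\simeq \delta \alpha _{2}\simeq \delta \alpha $, as in the text):

```python
# Delta p / p ~ Q (delta_alpha_1 - 0.1 delta_alpha_2) v cos(2 theta_g) 1e9,
# with delta_alpha_1 ~ delta_alpha_2 ~ delta_alpha and cos(2 theta_g) ~ 1.
def dp_over_p(Q, v_delta_alpha):
    return Q * 0.9 * v_delta_alpha * 1e9

print(dp_over_p(0.1, 1e-10))   # hard neutrinosphere: ~0.01
print(dp_over_p(4e-5, 3e-7))   # soft neutrinosphere, v*da a few 1e-7: ~0.01
```

Both model classes thus reach the required one-percent anisotropy with the order-of-magnitude values of $v\,\delta \alpha $ quoted above.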
We now analyze the effect of the non-radial component of the neutrino momentum. When ${\bf \Omega }=0$, at a given point of the resonance surface the emitted neutrinos have an azimuthal symmetry with respect to the position vector. For a non-vanishing angular velocity of the protoneutron star, the last term in Eq. (\[delta\]) breaks this symmetry and produces an angular acceleration of the star. To make a perturbative estimation of this effect we ignore the dependence of $\Delta _{o}$ on $v$ and adopt a very simple model of a hard resonance surface at $r_{0}+\delta r$. From the resonant condition we get $$\delta r=\left. \frac{C}{A}\Lambda h_{N_{e}}\;{\bf \Omega \cdot r\times \hat{p}}\right| _{r_{o}}\,,$$ where $C(r)=-\frac{1}{2}\left( 7\delta \Delta _{1}+\delta \Delta _{2}\right)
J\left( r\right) $.
Neutrinos emitted in different directions come from regions at different $r$ and therefore at different energies. Hence they have different angular momenta. If we adopt the Stefan-Boltzmann law for the neutrino flux at the resonance surface, a neutrino emitted in a direction ${\bf \hat{p}}$ has a momentum $p=E_{o}(1+4h_{T}^{-1}\delta r)$, where $E_{o}=E(r_{o})$. Therefore it carries an angular momentum $${\bf l}=r_{o}E_{o}({\bf \hat{r}}\times {\bf \hat{p}})\left[
1+4h_{T}^{-1}\delta r\right] \,.$$ By integrating at each point of the resonance surface over all directions and also over all the points, we compute the angular momentum gained by the star. Because of the symmetry of the system the resulting angular acceleration points along the rotational axis. The time derivative of the total angular momentum can be expressed as $$\dot{L}=\frac{C\Lambda }{3\pi A}\frac{h_{N_{e}}}{h_{T}}{\dot{{\cal E}}}
\Omega r_{o}\frac{\int_{0}^{\pi }d\theta \sin \theta\int_{0}^ {\frac{\pi }{2}
}d\theta^{\prime }\int_{0}^{2\pi }d\varphi ^{\prime } \sin \theta ^{\prime
}r_{o}\left( {\bf \hat{\Omega}}\times {\bf \hat{r}\cdot \hat{p}} \right)
^{2} }{\int_{0}^{\pi }d\theta \sin \theta \int_{0}^{\frac{\pi }{2} }d\theta
^{\prime }\sin \theta ^{\prime }}\,, \label{L}$$ where ${\dot{{\cal E}}}$ is the energy carried by the neutrinos per time unit, and a factor $\frac{1}{6}$ has been included to take into account that although six neutrino and antineutrino species are radiated, only one comes from the distorted neutrinosphere. In the latter expression $\theta $ is the angle between the radius vector ${\bf \hat{r}}$ and the angular velocity ${\bf \Omega }$, while $\theta ^{\prime }$ and $\varphi ^{\prime }$ are the spherical coordinates for ${\bf \hat{p}}$ taking ${\bf \hat{r}}$ as the $z$ axis. From Eq. (\[L\]) $$\Omega (t)=\Omega _{o}\exp \left( \frac{4r_{o}^{2}}{27}\int_{t_{0}}^{t}\frac{
C\Lambda }{AI}\frac{h_{N_{e}}}{h_{T}}{\dot{{\cal E}}}dt\right) \,,$$ where $I$ and $\Omega _{o}$ are the moment of inertia and the initial angular velocity of the protostar. It should be noted that the rotational kick does not require a velocity ${\bf v}$ associated with a preferred frame.
As an example, let us consider the density profile introduced above. Assuming that all the quantities in the integrand except ${\dot{{\cal E}}}$ are constant during the cooling period and taking $\Delta {\cal E}\sim
3\times 10^{53}\;\rm erg$, the angular velocity after the angular kick is $$\Omega _{f}\simeq \Omega _{o}\exp \left[ \xi \left( \delta \Delta _{1}+\frac
{1}{7}\delta \Delta _{2}\right) \frac{h_{N_{e}}}{h_{T}}\;\,10^{8}\right] \,,$$ where $0.1<\xi <10$ for $5<n<6$. The ratio $\frac{h_{N_{e}}}{h_{T}}$ is a model dependent quantity of the order of unity. Values usually considered are $h_{N_{e}}\lesssim h_{T}/3$. The star angular velocity will increase or decrease depending on the sign of $\delta \Delta _{1}+\frac{1}{7}\delta
\Delta _{2}$. If we accept that typical initial angular velocities of the protostar are $\Omega _{o}\sim 0.01\Omega _{f}$, the VEP parameters must be in the range $10^{-8}\;\lesssim \delta \Delta _{1}+\frac{1}{7}\delta \Delta _{2}\lesssim 10^{-6}$ to reproduce the observed values for $\Omega _{f}$.
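The order of magnitude of this range can be verified directly: for $\Omega _{f}\sim 100\,\Omega _{o}$ the exponent must equal $\ln 100\simeq 4.6$, so (setting $\xi \sim 1$ and $h_{N_{e}}/h_{T}\sim 1$ purely for illustration)

```python
import math

# Spin-up factor Omega_f/Omega_o = exp[xi * dD * (h_Ne/h_T) * 1e8], with
# dD = delta_Delta_1 + delta_Delta_2/7. Solve for dD at a factor of 100
# (xi and h_Ne/h_T set to 1 for illustration).
xi, h_ratio = 1.0, 1.0
dD = math.log(100.0) / (xi * h_ratio * 1e8)
print(dD)  # ~4.6e-8, inside the 1e-8 -- 1e-6 range quoted above
```

Varying $\xi $ over the quoted interval $0.1<\xi <10$ spreads this central value over roughly the same two decades.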
To estimate the order of magnitude of the translational and rotational accelerations, we have assumed the corresponding kicks decoupled one from the other. In a more realistic situation the rotational motion could produce an averaging of the translational kick. This effect depends on the relation between the characteristic time of reaccommodation of the neutrinosphere and the period of rotation of the star. The anisotropy axis coincides at every time with ${\bf v}$ and is not affected by the rotation, but the temperature of the resonance surface could change. In a soft neutrinosphere the deformed resonance surface changes the atmosphere opacity over the core and induces a temperature anisotropy in the core-atmosphere interface, which in turn affects the neutrino flux. As the star rotates, the resonance surface also rotates with respect to the rest frame of the star, inducing a time-changing opacity over the core region. Therefore we have to consider here the characteristic thermal response time of the system, which is of the order of a few hundred milliseconds[@raffelt], in contrast with the pulsar period. Thus, in this case we can expect an averaging of the translational kick. This effect tends to cancel the component orthogonal to the rotational axis and develops a correlation between the translational kick and the axis of rotation. In the case of a hard neutrinosphere the energy flux depends on the temperature at the point of the resonance surface from which the neutrinos are radiated. The atmosphere here has enough heat capacity to act as a thermal reservoir with a radius-dependent temperature. Therefore, there is no effective average and there is no correlation between the translational kick and the rotational axis.
For simplicity, we have assumed that the only mechanism responsible for the pulsar motion is VEP. If this were the only cause for the translational velocity, then all pulsar velocities should show a certain correlation driven by the ${\bf v}$ parameter. This correlation will be more or less accentuated depending on how hard or soft the neutrinospheres are, and could also be blurred by the presence of other kick mechanisms besides the one considered here.
In conclusion, we have shown that resonant VEP neutrino oscillations may be responsible for both the translational and rotational motion of pulsars. Since this mechanism works even for massless neutrinos, it does not clash with cosmological bounds. The strictest bounds known at present in the neutrino sector are given by accelerator experiments, mainly from CCFR, which correspond to the highest tested energies[@ccfr]. These experiments are sensitive to large mixing angles, because they have no access to the MSW effect. The exclusion region for these experiments extends down to $\sin^2 2\theta >2\times 10^{-3}$ for $\nu_e-\nu_\mu$ and $\sin^2 2\theta >0.2$ for $\nu_e-\nu_\tau$, independently of the value of $\Delta_0$. Therefore, the parameter region relevant for the neutrino resonance in neutron stars, taking $10^{-4}<\theta _{g}<10^{-3}$ for $\nu_e-\nu_\mu$ or $10^{-4}<\theta _{g}<10^{-1}$ for $\nu_e-\nu_\tau$, is well outside the range tested by accelerators[@mann]. With respect to atmospheric neutrinos, they are not affected by these small mixing angle oscillations, and the MSW effect for solar neutrinos corresponds to a medium of much lower density, and thus the involved parameter sector is very different[@nuestro]. In this way pulsar kick physics gives access to a new phenomenological sector of VEP effects.
Acknowledgments
===============
This work was partially supported by CONICET-Argentina, CONACYT-México, Universidad Nacional Autónoma de México under grants DGAPA-IN117198 and DGAPA-IN100397, and Centro Latino Americano de Física. M. B. also acknowledges support from SRE (México).
A.G. Lyne and D. R. Lorimer, Nature [**369**]{} (1994) 127; J. M. Cordes and D.F.Chernoff, Ap.J. [**505**]{} (1998) 315; C. Fryer, A. Burrows and W. Benz, Ap.J. [**496**]{} (1998) 333.
E. Kh. Akhmedov, A. Lanza and D.W. Sciama, Phys. Rev. [**D56**]{} (1997) 6117; D. Grasso, H. Nunokawa, and J.W. Valle, Phys. Rev. Lett. [**81**]{} (1998) 2412; A.Burrows and J. Hayes, Phys. Rev. Lett. [**76** ]{} (1996) 352; C. J. Horowitz and G. Lee, Phys. Rev. Lett. [**80**]{} (1998) 3694; W. Keil, H.-Th. Janka, and E. Muller, Ap.J. [**473**]{} (1996) L111.
A. Kusenko and G. Segrè, Phys. Rev. Lett. [**77**]{} (1996) 4872; ibid [**79**]{} (1997) 2751; Phys. Lett. [**B396**]{} (1997) 197.
A. Kusenko and G. Segrè, Phys. Rev. [**D59**]{} (1999) 061302.
R. Horvat, Mod. Phys. Lett. [**A13**]{} (1998) 2379.
H. Spruit and E. S. Phinney, Nature [**393**]{} (1998) 139.
D. R. Lorimer and R. Ramachandran, astro-ph/9911010, to appear in the proceedings of the IAU 177 meeting, [*Pulsar Astronomy 2000 and Beyond*]{}.
J.C. D'Olivo, J.F. Nieves and P.B. Pal, Phys. Rev. [**D40**]{} (1989) 3679; J. C. D'Olivo and J. F. Nieves, Phys. Lett. [**B383**]{} (1996) 87; S. Esposito and G. Capone, Z. Phys. [**C70**]{} (1996) 55; P. Elmfors, D. Grasso, and G. Raffelt, Nucl. Phys. [**B479**]{} (1996) 3.
Y. Z.Qian, Phys. Rev. Lett. [**79**]{} (1997) 2750.
H. T. Janka and G. G. Raffelt, Phys. Rev. [**D59**]{} (1999) 023005.
M.Gasperini, Phys.Rev. [**D38**]{} (1988) 2635; A. Halprin and C. N. Leung, Phys. Rev. Lett.[**67**]{} (1991) 1833.
A. Halprin, C. N. Leung and J. Pantaleone, Phys. Rev. [**D53** ]{} (1995) 5365; J.N.Bahcall, P.I.Krastev and C. N. Leung, Phys. Rev. [**D52**]{} (1995) 1770; J.R. Mureika and R. B. Mann, Phys. Rev. [**D54**]{} (1996) 2761.
C.M. Will, [*Theory and experiment in gravitational physics*]{} (Cambridge Univ. Press, Cambridge, England, 1981).
H. Casini, J. C. D'Olivo, R. Montemayor and L. Urrutia, Phys. Rev. [**D59**]{} (1999) 062001; H. Casini, J. C. D'Olivo and R. Montemayor, Phys. Rev. [**D61**]{} (2000) 105004.
H.Casini and R. Montemayor, Phys. Rev. [**D50**]{} (1994) 1092.
M. S. Turner, Phys. Rev. Lett. [**60**]{} (1988) 1797.
The CCFR collaboration, A. Romosan [*et al.*]{}, Phys. Rev. Lett. [**78**]{} (1997) 2912; D. Naples [*et al.*]{}, Phys. Rev. [**D59**]{} (1999) 031101.
R. B. Mann and U. Sarkar, Phys. Rev. Lett. [**76**]{} (1996) 865; J. Pantaleone, T. K. Kuo, and S.W. Mansour, Phys. Rev. [**D61**]{} (2000) 033011.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'Massive stars ejected from their parent cluster and supersonically sailing away through the interstellar medium (ISM) are classified as exiled. They generate circumstellar bow shock nebulae that can be observed. We present two-dimensional, axisymmetric hydrodynamical simulations of a representative sample of stellar wind bow shocks from Galactic OB stars in an ambient medium of densities ranging from $n_{\rm ISM }=0.01$ up to $10.0\, \rm cm^{-3}$. Independently of their location in the Galaxy, we confirm that the infrared is the most appropriate waveband to search for bow shocks from massive stars. Their spectral energy distribution is a convenient tool to analyze them since their emission does not depend on the temporary effects which could affect unstable, thin-shelled bow shocks. Our numerical models of Galactic bow shocks generated by high-mass ($\approx 40\, \rm M_{\odot}$) runaway stars yield H$\alpha$ fluxes which could be observed by facilities such as the [*SuperCOSMOS H-Alpha Survey*]{}. The brightest bow shock nebulae are produced in the denser regions of the ISM. We predict that bow shocks [*in the field*]{} observed at H$\alpha$ by means of Rayleigh-sensitive facilities are formed around stars of initial mass larger than about $20\, \rm M_{\odot}$.'
date: 'Received January 18 2015; accepted Month day, 2015'
title: On the observability of bow shocks of Galactic runaway OB stars
---
\[firstpage\]
methods: numerical – circumstellar matter – stars: massive.
Introduction {#sect:introduction}
============
Estimating the feedback of massive stars is a crucial question in the understanding of the Galaxy’s functioning [@langer_araa_50_2012]. Throughout their short lives, these stars release strong winds [@holzer_araa_8_1970] and ionising radiation [@diazmiller_apj_501_1998] which modify their ambient medium. This results in diaphanous [H[ii]{}]{} regions [@dyson_ass_35_1975], parsec-scale bubbles of stellar wind [@weaver_apj_218_1977], and inflated [@petrovic_aa_450_2006] or shed [@woosley_rvmp_74_2002; @garciasegura_1996_aa_305] stellar envelopes that impact their close surroundings and which can alter the propagation of their subsequent supernova shock wave [@vanveelen_phd; @meyer_mnras_450_2015]. Understanding the formation processes of these circumstellar structures allows us to constrain the impact of massive stars, e.g. on the energetics or the chemical evolution of the interstellar medium (ISM). Moreover, it links studies devoted to the dynamical evolution of supernova remnants expanding into the ISM [@rozyczka_mnras_261_1993] with works focusing on the physics of the star-forming ISM [@peters_apj_711_2010].
These arc-like structures of swept-up stellar wind material and ISM gas result from the distortion of the stellar wind bubble by the bulk motion of the central star [@weaver_apj_218_1977]. Their size and their morphology are governed by the stellar wind mass loss, the bulk motion of the runaway star and the local ambient medium properties [@comeron_aa_338_1998]. These distorted wind bubbles were first noticed in the optical \[O[iii]{}\] $\lambda \, 5007$ spectral emission line around the Earth’s closest runaway star, the OB star $\zeta$ Ophiuchi [@gull_apj_230_1979]. Other noticeable fast-moving massive stars producing a stellar wind bow shock are, e.g. the blue supergiant Vela-X1 [@kaper_apj_475_1997], the red supergiant Betelgeuse [@noriegacrespo_aj_114_1997] and the very massive star BD+43${\ensuremath{^\circ}}$365 running away from Cygnus OB2 [@comeron_aa_467_2007].
$M_{\star}\, (\rm M_{\odot})$ $t_{\mathrm{ start}}\, (\rm Myr)$ $\textcolor{black}{\log(L_{\star}/\rm L_{\odot})}$ $\log(\dot{M}/\rm M_{\odot}\, \rm yr^{-1})$ $v_{\rm w}\, (\mathrm{km}\, \mathrm{s}^{-1})$ $T_{\rm eff}\, (\mathrm{K})$ $S_{\star} (\mathrm{photon}\, \mathrm{s}^{-1})$ $t_{\rm MS} (\mathrm{Myr})$
------------------------------- ----------------------------------- ---------------------------------------------------- --------------------------------------------- ----------------------------------------------- ------------------------------ ------------------------------------------------- -----------------------------
$10$ $5.0$ $3.80$ $-9.52$ $1082$ $25200$ $10^{45}$ $22.5$
$20$ $3.0$ $4.74$ $-7.38$ $1167$ $33900$ $10^{48}$ $\,\,\,8.0$
$40$ $0.0$ $5.34$ $-6.29$ $1451$ $42500$ $10^{49}$ $\,\,\,4.0$
\[tab:lum\_stars\]
Analysis of data from the [*Infrared Astronomical Satellite*]{} facility [[*IRAS*]{}, @neugebauer_278_apj_1984], later extended to measurements taken with the [*Wide-Field Infrared Satellite Explorer*]{} [[ *WISE*]{}, @wright_aj_140_2010], led to the compilation of bow shock records, see e.g. [@buren_apj_329_1988]. Soon arose the speculation that those isolated nebulae can serve as probes of the physics of these stars, e.g. to constrain the still highly debated mass loss of massive stars [@gull_apj_230_1979] and/or their ambient medium density [@huthoff_aa_383_2002]. This also raised questions related to the ejection mechanisms of OB stars from young stellar clusters [@hoogerwerf_aa_365_2001]. More recently, multi-wavelength data led to the publication of the E-BOSS catalog of stellar wind bow shocks [@peri_aa_538_2012; @2015arXiv150404264P].
Early simulations discussed the general morphology of the bow shocks around OB stars [@brighenti_mnras_277_1995, and references therein], their (in)stability [@blondin_na_57_1998] and the general incompatibility of the shape of stellar wind bow shocks with analytical approximations such as the one of @wilkin_459_apj_1996, see @comeron_aa_338_1998. However, observing bow shocks of massive stars remains difficult and they are mostly serendipitously noticed in infrared observations of the neighbourhood of stellar clusters [@gvaramadze_aa_490_2008]. Moreover, their optical emission may be screened by the [H[ii]{} ]{}region which surrounds the driving star, which may affect their H$\alpha$ observations [@brown_aa_439_2005]. We are particularly interested in predicting which bow shocks are the easiest to observe, their optical emission properties and their location in the Galaxy.
In the present study, we extend our numerical investigation of the circumstellar medium of runaway massive stars [@meyer hereafter Paper I]. This work explores the effects of the ambient medium density on the emission properties of the bow-like nebulae around the most common runaway stars, in the spirit of works on bow shocks generated by low-mass stars [@villaver_apj_748_2012 and references therein].
Our paper is organised as follows. In Section \[sect:method\] we present the numerical methods and the microphysics that is included in our models. The resulting numerical simulations are presented and discussed in Section \[sect:results\]. We then analyze and discuss the emission properties of our bow shock models in Section \[sect:emission\]. Finally, we formulate our conclusions in Section \[section:cc\].
Method {#sect:method}
======
Governing equations {#subsect:goveq}
-------------------
Hydrodynamical simulations {#subsect:hydrosim}
--------------------------
We run two-dimensional, axisymmetric, hydrodynamical numerical simulations using the [ pluto]{} code [@mignone_apj_170_2007; @migmone_apjs_198_2012] in cylindrical coordinates on a uniform grid $[z_{\rm min},z_{\rm max}]\times[O,R_{\rm max}]$ with a minimum spatial resolution of $\Delta=2.25\times 10^{-4}\, \rm{pc}\, \rm{cell}^{-1}$. The stellar wind is injected into the computational domain filling a circle of radius 20 cells centered onto the origin $O$ [see e.g., @comeron_aa_338_1998; @meyer_mnras_2013 and references therein]. The interaction with the ISM is calculated in the reference frame of the moving star [@vanmarle_aa_469_2007; @vanmarle_apj_734_2011; @vanmarle_aa_561_2014]. Inflowing ISM gas mimicking the stellar motion is set at the $z=z_{\rm max}$ boundary, whereas semi-permeable boundary conditions are set at $z=z_{\rm min}$ and at $R=R_{\rm max}$. Wind material is distinguished from the ISM using a passive tracer $Q$ that is advected with the gas and initially set to $Q=1$ in the stellar wind and to $Q=0$ in the ISM. The ISM composition is assumed to be solar [@asplund_araa_47_2009].
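As an illustration of the setup described above, the initial and boundary state in the frame of the moving star can be sketched as follows. This is a minimal sketch only: the array names, layout and the `init_domain` function are illustrative and do not reflect the actual [ pluto]{} data structures.

```python
import numpy as np

def init_domain(z_min, z_max, R_max, dx, n_ism, v_star, wind_radius_cells=20):
    """Sketch of the initial state in the frame of the moving star: ISM gas
    streams with -v_star along z, and the stellar wind is flagged inside a
    circle of `wind_radius_cells` cells around the origin O.  Illustrative
    layout, not the pluto data structures."""
    z = np.arange(z_min + dx / 2.0, z_max, dx)   # cell-centred axial coords
    R = np.arange(dx / 2.0, R_max, dx)           # cell-centred radial coords
    zz, RR = np.meshgrid(z, R, indexing="ij")

    density = np.full(zz.shape, n_ism)   # ambient medium everywhere
    vz = np.full(zz.shape, -v_star)      # inflow mimicking the stellar motion
    Q = np.zeros(zz.shape)               # passive tracer: 0 = ISM

    wind = np.sqrt(zz**2 + RR**2) < wind_radius_cells * dx
    Q[wind] = 1.0                        # 1 = stellar wind material
    return density, vz, Q
```

The inflow at $z=z_{\rm max}$ is then maintained by resetting the boundary cells to this ambient state at every time step, while the wind injection zone is refilled with fresh wind material.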
${\rm {Model}}$ $M_{\star}\, (\rm M_{\odot})$ $v_{\star}\, (\mathrm{km}\, \mathrm{s}^{-1})$ $n_{\rm ISM}\, (\mathrm{cm}^{-3})$
----------------- ------------------------------- ----------------------------------------------- ------------------------------------ -- --
MS1020n0.01 $10$ $20$ $0.01$
MS1040n0.01 $10$ $40$ $0.01$
MS1070n0.01 $10$ $70$ $0.01$
MS2040n0.01 $20$ $40$ $0.01$
MS2070n0.01 $20$ $70$ $0.01$
MS1020n0.1 $10$ $20$ $0.10$
MS1040n0.1 $10$ $40$ $0.10$
MS1070n0.1 $10$ $70$ $0.10$
MS2020n0.1 $20$ $20$ $0.10$
MS2040n0.1 $20$ $40$ $0.10$
MS2070n0.1 $20$ $70$ $0.10$
MS4070n0.1 $40$ $70$ $0.10$
MS1020n10 $10$ $20$ $10.0$
MS1040n10 $10$ $40$ $10.0$
MS1070n10 $10$ $70$ $10.0$
MS2020n10 $20$ $20$ $10.0$
MS2040n10 $20$ $40$ $10.0$
MS2070n10 $20$ $70$ $10.0$
MS4020n10 $40$ $20$ $10.0$
MS4040n10 $40$ $40$ $10.0$
MS4070n10 $40$ $70$ $10.0$
: The hydrodynamical models. Parameters $M_{\star}$ (in $\rm M_{\odot}$), $v_{\star}$ (in $\mathrm{km}\, \mathrm{s}^{-1}$) and $n_{\rm ISM}$ (in $\mathrm{cm}^{-3}$) are the initial mass of the considered moving star, its space velocity and its local ISM density, respectively.
\[tab:models\]
Microphysics {#subsect:phys}
------------
To build on our previous bow shock studies [Paper I, @meyer_mnras_450_2015], we include the same microphysics in our simulations of the circumstellar medium of runaway, massive stars, i.e. we take into account losses and gain of internal energy by optically-thin cooling and heating together with electronic thermal conduction. Optically-thin radiative processes are included in the model using the cooling and heating laws established for a fully ionized medium in Paper I. The cooling mainly consists of contributions from hydrogen and helium for temperatures $T<10^{6}\, \rm K$, whereas it is principally due to metals for temperatures $T \ge 10^{6}\, \rm K$ [@wiersma_mnras_393_2009]. A term representing the cooling from collisionally excited forbidden lines [@henney_mnras_398_2009] incorporates the effects of, among others, the \[O[iii]{}\] $\lambda \, 5007$ line emission. The heating contribution includes the reionisation of recombining hydrogen atoms by the starlight [@osterbrock_1989; @hummer_mnras_268_1994]. All our models include electronic thermal conduction [@cowie_apj_211_1977].
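The competition between the heating and cooling terms described above can be illustrated with a minimal sketch; the coefficients below are placeholders chosen for readability and are NOT the actual rates fitted in Paper I.

```python
def net_energy_rate(n, T, Lambda0=1.0e-22, Gamma=5.0e-24, T0=1.0e4, alpha=0.5):
    """Schematic net volumetric energy rate (erg cm^-3 s^-1): photoionisation
    heating scales with the density n, while optically-thin radiative cooling
    scales with n^2.  Lambda0, Gamma, T0 and alpha are illustrative values,
    not the cooling/heating laws used in the simulations."""
    cooling = n * n * Lambda0 * (T / T0) ** alpha if T > T0 else 0.0
    heating = n * Gamma
    return heating - cooling

# Because cooling goes as n^2 while heating goes as n, dense shocked shells
# radiate their thermal energy away quickly, which favours the thin-shell
# regime discussed below.
```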
![ Stellar wind bow shocks from the main sequence phase of the $20\, \rm M_{\odot}$ star moving with velocity $70\, \mathrm{km}\,
\mathrm{s}^{-1}$ as a function of the ISM density, with $n_{\rm ISM}=0.01$ (a), $0.1$ (b), $0.79$ (c) and $10.0\, \mathrm{cm}^{-3}$ (d). The gas number density (in $\rm cm^{-3}$) is shown in the logarithmic scale. The dashed black contour traces the boundary between wind and ISM material. The cross indicates the position of the runaway star. The $R$-axis represents the radial direction and the $z$-axis the direction of stellar motion (in $\mathrm{pc}$). Only part of the computational domain is shown. []{data-label="fig:grid_density"}](./MS2070n001_legend.eps){width="100.00000%"}
\
![ Stellar wind bow shocks from the main sequence phase of the $20\, \rm M_{\odot}$ star moving with velocity $70\, \mathrm{km}\,
\mathrm{s}^{-1}$ as a function of the ISM density, with $n_{\rm ISM}=0.01$ (a), $0.1$ (b), $0.79$ (c) and $10.0\, \mathrm{cm}^{-3}$ (d). The gas number density (in $\rm cm^{-3}$) is shown in the logarithmic scale. The dashed black contour traces the boundary between wind and ISM material. The cross indicates the position of the runaway star. The $R$-axis represents the radial direction and the $z$-axis the direction of stellar motion (in $\mathrm{pc}$). Only part of the computational domain is shown. []{data-label="fig:grid_density"}](./MS2070n01_legend.eps){width="100.00000%"}
\
![ Stellar wind bow shocks from the main sequence phase of the $20\, \rm M_{\odot}$ star moving with velocity $70\, \mathrm{km}\,
\mathrm{s}^{-1}$ as a function of the ISM density, with $n_{\rm ISM}=0.01$ (a), $0.1$ (b), $0.79$ (c) and $10.0\, \mathrm{cm}^{-3}$ (d). The gas number density (in $\rm cm^{-3}$) is shown in the logarithmic scale. The dashed black contour traces the boundary between wind and ISM material. The cross indicates the position of the runaway star. The $R$-axis represents the radial direction and the $z$-axis the direction of stellar motion (in $\mathrm{pc}$). Only part of the computational domain is shown. []{data-label="fig:grid_density"}](./MS2070n10_legend.eps){width="100.00000%"}
\
![ Stellar wind bow shocks from the main sequence phase of the $20\, \rm M_{\odot}$ star moving with velocity $70\, \mathrm{km}\,
\mathrm{s}^{-1}$ as a function of the ISM density, with $n_{\rm ISM}=0.01$ (a), $0.1$ (b), $0.79$ (c) and $10.0\, \mathrm{cm}^{-3}$ (d). The gas number density (in $\rm cm^{-3}$) is shown in the logarithmic scale. The dashed black contour traces the boundary between wind and ISM material. The cross indicates the position of the runaway star. The $R$-axis represents the radial direction and the $z$-axis the direction of stellar motion (in $\mathrm{pc}$). Only part of the computational domain is shown. []{data-label="fig:grid_density"}](./MS2070n100_legend.eps){width="100.00000%"}
Parameter range {#subsect:para}
---------------
This work consists of a parameter study extending our previous investigation of stellar wind bow shocks (Paper I) to regions of the Galaxy where the ISM has either lower or higher densities. The boundary conditions are unchanged, i.e. we consider runaway stars of $10$, $20$ and $40\, \rm M_{\odot}$ moving with velocities $v_{\star}=20$, $40$ and $70\, \rm km\, \rm s^{-1}$, respectively. Differences come from the chosen ISM number density, which ranges from $n_{\rm ISM}=0.01$ to $10.0\, \rm cm^{-3}$, whereas our preceding work exclusively focused on bow shock models with $n_{\rm ISM}=0.79\, \rm cm^{-3}$.
Bow shocks morphology {#sect:results}
=====================
Bow shocks structure {#subsect:structure}
--------------------
In Fig. \[fig:grid\_density\] we show the density fields in our hydrodynamical simulations of our $20\, \rm M_{\odot}$ star moving with velocity $v_{\star}=70\, \rm
km\, \rm s^{-1}$ in a medium of number density $n_{\rm ISM}=0.01$ (panel a, model MS2070n0.01), $0.1$ (panel b, model MS2070n0.1), $0.79$ (panel c, model MS2070) and $10.0\, \rm cm^{-3}$ (panel d, model MS2070n10), respectively.
Bow shocks size {#subsect:scaling}
---------------
The bow shocks have a stand-off distance $R(0)$, i.e. the distance separating them from the star along the direction of motion, as predicted by @wilkin_459_apj_1996. It decreases as a function of (i) $v_{\star}$, (ii) $\dot{M}$ (c.f. Paper I) and (iii) $n_{\rm ISM}$, since $R(0) \propto n_{\rm ISM}^{-1/2}$. A dense ambient medium produces a large ISM ram pressure $n_{\rm ISM}v_{\star}^{2}$, which results in a compression of the whole bow shock and consequently in a reduction of $R(0)$. As an example, our simulations involving a $20\, \rm M_{\odot}$ star with $v_{\star}=70\, \mathrm{km}\, \mathrm{s}^{-1}$ have $R(0)\approx 3.80$, $1.14$, $0.38$ and $0.07\, \rm pc$ when the driving star moves in $n_{\rm ISM}=0.01$, $0.1$, $0.79$ and $10\, \rm cm^{-3}$, respectively (Fig. \[fig:grid\_density\]a-d), in reasonable accordance with @wilkin_459_apj_1996. All our measures of $R(0)$ are taken at the contact discontinuity, because it is the appropriate measure to compare models with Wilkin’s analytical solution [@mohamed_aa_541_2012].
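The density scaling quoted above can be reproduced with the ram-pressure balance of @wilkin_459_apj_1996, $R(0)=\sqrt{\dot{M} v_{\rm w}/(4\pi \rho_{\rm ISM} v_{\star}^{2})}$. The sketch below is an order-of-magnitude estimate only: it adopts an assumed mean mass per hydrogen nucleus and, unlike the values quoted above, is not measured at the contact discontinuity.

```python
import math

# Physical constants (cgs units)
MSUN = 1.989e33   # g
YR = 3.156e7      # s
PC = 3.086e18     # cm
KM = 1.0e5        # cm
MH = 1.673e-24    # g (hydrogen mass)
MU = 1.4          # assumed mean mass per hydrogen nucleus, in units of m_H

def standoff_distance(mdot_msun_yr, v_wind_kms, v_star_kms, n_ism):
    """Analytic stand-off distance R(0) (in pc) from the ram-pressure
    balance of Wilkin (1996): R(0) = sqrt(Mdot v_w / (4 pi rho v_star^2))."""
    mdot = mdot_msun_yr * MSUN / YR        # g s^-1
    rho = n_ism * MU * MH                  # g cm^-3
    r0 = math.sqrt(mdot * v_wind_kms * KM
                   / (4.0 * math.pi * rho * (v_star_kms * KM) ** 2))
    return r0 / PC

def wilkin_shape(r0, theta):
    """Wilkin's analytic shell shape:
    R(theta) = R(0) csc(theta) sqrt(3 (1 - theta cot(theta)))."""
    return r0 / math.sin(theta) * math.sqrt(3.0 * (1.0 - theta / math.tan(theta)))

# 20 Msun star of Table 1 (Mdot = 10^-7.38 Msun/yr, v_w = 1167 km/s) at
# v_star = 70 km/s: R(0) drops by sqrt(10) for each decade in n_ISM.
for n in (0.01, 0.1, 0.79, 10.0):
    print(n, standoff_distance(10**-7.38, 1167.0, 70.0, n))
```

The axis ratio follows directly from the shape function: $R(90{\ensuremath{^\circ}}) = \sqrt{3}\, R(0)$, i.e. $R(0)/R(90) = 1/\sqrt{3} \approx 0.58$, the thin horizontal line of Fig. \[fig:axis\_ratio\].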
![ Axis ratio $R(0)/R(90)$ of our bow shock models. The figure shows the ratio $R(0)/R(90)$, measured at the contact discontinuity in the density field of our models, as a function of their stand-off distance $R(0)$ (in $\rm pc$). Symbols distinguish models as a function of (i) the ISM ambient medium, with $n_{\rm ISM}=0.01$ (triangles), $0.1$ (diamonds), $0.79$ (circles) and $10.0\, \rm cm^{-3}$ (squares), and (ii) the initial mass of the star, with $10\, \rm M_{\odot}$ (blue dots), $20\, \rm M_{\odot}$ (blue plus signs) and $40\, \rm M_{\odot}$ (dark green crosses), respectively. The thin horizontal black line corresponds to the analytic solution $R(0)/R(90)= 1/\sqrt{3}\approx 0.58$ of @wilkin_459_apj_1996. []{data-label="fig:axis_ratio"}](./R0_vs_R90.eps){width="100.00000%"}
Non-linear instabilities and mixing of material {#subsect:stability}
-----------------------------------------------
In Fig. \[fig:grid\_velocity\] we show a time sequence of the density field in hydrodynamical simulations of a $40\, \rm M_{\odot}$ zero-age main-sequence star moving with velocity $v_{\star}=70\, \rm km\, \rm s^{-1}$ in a medium of number density $n=10.0\, \rm cm^{-3}$ (model MS4070n10). The figures are shown at times $0.02$ (a), $0.05$ (b), $0.11$ (c) and $0.12\, \rm Myr$ (d), respectively. After $0.02\, \rm Myr$ the whole shell is peppered with small clumps which are the seeds of non-linear instabilities (Fig. \[fig:grid\_velocity\]b). The fast stellar motion ($v_{\star}=70\, \rm km\, \rm s^{-1}$) provokes a distortion of the bubble into an ovoid shape [see fig. 7 of @weaver_apj_218_1977] and the high ambient medium density ($n=10.0\, \rm cm^{-3}$) rapidly induces a thin shell, after only about $0.01\, \rm Myr$.
![ Same as Fig. \[fig:grid\_density\] for our $40\, \rm M_{\odot}$ star moving through an ISM of density $n_{\rm ISM}=10.0\, \mathrm{cm}^{-3}$ with velocity $70\, \mathrm{km}\, \mathrm{s}^{-1}$ . Figures are shown at times $0.02$ (a), $0.05$ (b), $0.11$ (c) and $0.12\, \rm Myr$ (d) after the beginning of the main sequence phase of the star, respectively. It illustrates the development of the non-linear thin-shell instability in the bow shock. []{data-label="fig:grid_velocity"}](./MS2070n100Time2_legend.eps){width="100.00000%"}
\
![ Same as Fig. \[fig:grid\_density\] for our $40\, \rm M_{\odot}$ star moving through an ISM of density $n_{\rm ISM}=10.0\, \mathrm{cm}^{-3}$ with velocity $70\, \mathrm{km}\, \mathrm{s}^{-1}$ . Figures are shown at times $0.02$ (a), $0.05$ (b), $0.11$ (c) and $0.12\, \rm Myr$ (d) after the beginning of the main sequence phase of the star, respectively. It illustrates the development of the non-linear thin-shell instability in the bow shock. []{data-label="fig:grid_velocity"}](./MS2070n100Time3_legend.eps){width="100.00000%"}
\
![ Same as Fig. \[fig:grid\_density\] for our $40\, \rm M_{\odot}$ star moving through an ISM of density $n_{\rm ISM}=10.0\, \mathrm{cm}^{-3}$ with velocity $70\, \mathrm{km}\, \mathrm{s}^{-1}$ . Figures are shown at times $0.02$ (a), $0.05$ (b), $0.11$ (c) and $0.12\, \rm Myr$ (d) after the beginning of the main sequence phase of the star, respectively. It illustrates the development of the non-linear thin-shell instability in the bow shock. []{data-label="fig:grid_velocity"}](./MS2070n100Time4_legend.eps){width="100.00000%"}
\
![ Same as Fig. \[fig:grid\_density\] for our $40\, \rm M_{\odot}$ star moving through an ISM of density $n_{\rm ISM}=10.0\, \mathrm{cm}^{-3}$ with velocity $70\, \mathrm{km}\, \mathrm{s}^{-1}$ . Figures are shown at times $0.02$ (a), $0.05$ (b), $0.11$ (c) and $0.12\, \rm Myr$ (d) after the beginning of the main sequence phase of the star, respectively. It illustrates the development of the non-linear thin-shell instability in the bow shock. []{data-label="fig:grid_velocity"}](./MS2070n100Time5_legend.eps){width="100.00000%"}
![ Bow shock volume ($z\ge0$) in our model (see Fig. \[fig:grid\_velocity\]a-e). The figure shows the volume of perturbed material (in $\rm pc^{3}$) in the computational domain (thick solid blue line), together with the volume of shocked ISM gas (thin solid red line) and shocked stellar wind (thick dotted orange line), respectively, as a function of time (in $\rm Myr$). The large dotted black line represents the volume of the thin shell of shocked ISM. []{data-label="fig:volume"}](./volume.eps){width="100.00000%"}
\
The bow shock then experiences a series of cycles in which small-scale eddies grow in the shell (Fig. \[fig:grid\_velocity\]b) and further distort its apex into wing-like structures (Fig. \[fig:grid\_velocity\]c) which are pushed sidewards by the transverse component of the stellar wind acceleration (Fig. \[fig:grid\_velocity\]d). Our model MS4070n10 shares characteristics with both models E “ High ambient density” and G “ Instantaneous cooling” of @comeron_aa_338_1998. Thin-shelled stellar wind bow shocks develop non-linear instabilities, in addition to the Kelvin-Helmholtz instabilities that typically affect interfaces between shearing flows of opposite directions, i.e. the outflowing stellar wind and the ISM gas penetrating the bow shocks [@vishniac_apj_428_1994; @garciasegura_1996_aa_305; @vanmarle_aa_469_2007]. A detailed discussion of the development of such non-linearities affecting bow shocks generated by OB runaway stars can be found in @comeron_aa_338_1998.
In Fig. \[fig:volume\] we plot the evolution of the volume of the bow shock in our model MS4070n10 (thick solid blue line), separating the volume of shocked ISM gas (thin solid red line) from the volume of shocked stellar wind (thick dotted orange line) in the apex ($z\ge0$) of the bow shock. Such a discrimination of the volume of wind and ISM gas is possible because a passive scalar tracer is numerically advected simultaneously with the flow. The figure further illustrates the preponderance of the volume of shocked ISM in the bow shock compared to the stellar wind material, regardless of the growth of eddies. Interestingly, the volume of dense shocked ISM gas (large dotted black line) does not show large time variations (see Section \[sect:emission\]).
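The separation of wind and ISM volumes via the passive tracer can be sketched as a simple post-processing step on the axisymmetric grid; the function below assumes cell-centred arrays and a tracer threshold of $Q=0.5$, both of which are illustrative choices rather than the actual analysis pipeline.

```python
import numpy as np

def shell_volumes(Q, R, dR, dz, q_cut=0.5):
    """Split the volume of a 2D axisymmetric (z, R) grid into wind and ISM
    material using a passive tracer Q (Q=1 in the wind, Q=0 in the ISM).

    Q      : 2D tracer array of shape (nz, nR)
    R      : 1D array of cell-centre radii, shape (nR,)
    dR, dz : uniform cell sizes
    """
    # Volume of an axisymmetric ring cell: 2 pi R dR dz
    cell_vol = 2.0 * np.pi * R[np.newaxis, :] * dR * dz
    wind = np.where(Q >= q_cut, cell_vol, 0.0).sum()
    ism = np.where(Q < q_cut, cell_vol, 0.0).sum()
    return wind, ism

# Toy example: a column of wind material (Q=1) next to pure ISM (Q=0)
nz, nR = 4, 3
R = (np.arange(nR) + 0.5) * 0.1     # cell-centre radii
Q = np.zeros((nz, nR))
Q[:, 0] = 1.0                       # innermost column flagged as wind
wind_vol, ism_vol = shell_volumes(Q, R, 0.1, 0.1)
total = 2.0 * np.pi * R.sum() * 0.1 * 0.1 * nz
```

Summing the two contributions recovers the total volume of the region, as required for the bookkeeping shown in Fig. \[fig:volume\].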
Bow shock energetics and emission signatures {#sect:emission}
============================================
![ Bow shock luminosities. The panels correspond to models with different ISM densities. The simulation labels are indicated under the corresponding values. []{data-label="fig:lum1"}](./luminosity_grid_001.eps "fig:"){width="44.00000%"} ![ Bow shock luminosities. The panels correspond to models with different ISM densities. The simulation labels are indicated under the corresponding values. []{data-label="fig:lum1"}](./luminosity_grid_01.eps "fig:"){width="44.00000%"} ![ Bow shock luminosities. The panels correspond to models with different ISM densities. The simulation labels are indicated under the corresponding values. []{data-label="fig:lum1"}](./luminosity_grid_100.eps "fig:"){width="44.00000%"}
Methods {#subsect:methods}
-------
In Fig. \[fig:lum1\] the total bow shock luminosity $L_{\rm total}$ (pale green diamonds) is calculated by integrating the losses by optically-thin radiation in the $z \ge 0$ region of the computational domain [@mohamed_aa_541_2012 Paper I]. The shocked wind emission $L_{\rm wind}$ (orange dots) is discriminated from $L_{\rm total}$ with the help of the passive scalar $Q$ that is advected with the gas. Additionally, we compute $L_{\rm H\alpha}$ (blue crosses) and $L_{[\rm O{\sc III}]}$ (dark green triangles), which stand for the bow shock luminosities in the H$\alpha$ and \[O[iii]{}\] $\lambda \, 5007$ spectral lines, using the prescriptions for the emission coefficients of @dopita_aa_29_1973 and @osterbrock_1989, respectively. The overall X-ray luminosity $L_{\rm X}$ (black right crosses) is computed with emission coefficients generated with the [xspec]{} program [@arnaud_aspc_101_1996] with solar metallicity and chemical abundances from @asplund_araa_47_2009.
Results {#subsect:results}
-------
### Optical luminosities {#subsect:luminosities}
![ Luminosities of our bow shock simulation of a $40\, \rm M_{\odot}$ star moving with velocity $v_{\star}=70\, \rm km\, \rm s^{-1}$ through a medium with $n_{\rm ISM}=10\, \rm cm^{-3}$ (see corresponding time-sequence evolution of its density field in Fig. \[fig:grid\_velocity\]a-e). Plotted quantities and color-coding are similar to Fig. \[fig:lum1\] and are shown as function of time (in $\rm Myr$). []{data-label="fig:lum3"}](./luminosity.eps){width="100.00000%"}
\
![image](./observability.eps){width="100.00000%"}
![image](./feedback_energy_2.eps){width="100.00000%"}
In Fig. \[fig:lum1\] we display the bow shock luminosities as a function of the initial mass of the runaway star, its space velocity $v_{\star}$ and its ambient medium density $n_{\rm ISM}$. At a given density of the ISM, the luminosities from optically-thin gas radiation of all our models behave with respect to the stellar mass loss as described in Paper I for the simulations with $n_{\rm ISM}\approx 0.79\, \rm cm^{-3}$.
The behaviour of the optically-thin emission originating from the shocked stellar wind $L_{\rm wind}$, the \[O[iii]{}\] $\lambda \, 5007$ spectral line emission and the H$\alpha$ emission at fixed $n_{\rm ISM}$ are similar as described in @meyer_mnras_2013. The contribution of $L_{\rm wind}$ is smaller than $L_{\rm total}$ by several orders of magnitude for all models, e.g. our model MS1020n0.1 has $L_{\rm wind}/L_{\rm total} \approx 10^{-5}$. All our models have $L_{\rm H\alpha} < L_{[\rm O{\sc III}]} < L_{\rm
total}$ and the H$\alpha$ emission, the $[\rm O{\sc III}]$ spectral line emission and $L_{\rm wind}$ have variations which are similar to $L_{\rm ISM}$ with respect to $M_{\star}$, $v_{\star}$ and $n_{\rm ISM}$.
Fig. \[fig:lum3\] shows the lightcurve of our model MS4070n10 computed over the whole simulation and plotted as a function of time with the color coding of Fig. \[fig:lum1\]. Very little variation of the emission is present at the beginning of the calculation, up to a time of about $0.004\, \rm Myr$, and it remains almost constant at later times. This is in accordance with the volume of the dense ISM gas trapped in the nebula (see large dotted black line in Fig. \[fig:volume\]). The independence of $L_{\rm IR}$ with respect to the strong volume fluctuations of thin-shelled nebulae (Fig. \[fig:lum3\]) indicates that their spectral energy distribution is likely to be the appropriate tool to analyze them, since it constitutes an observable which is not sensitive to temporary effects.
### Infrared and X-rays luminosities {#subsect:thermalisation}
The luminosity of starlight reprocessed by dust grains penetrating the bow shocks, $L_{\rm IR}$, is larger than $L_{\rm total}$ by about $1-2$ orders of magnitude. This is possible because the reemission of starlight by dust grains is not taken into account in our simulations. This holds, e.g., for the models MS2040n0.01 and MS2040n10. $L_{\rm IR}$ increases with $M_{\star}$ (Figs. \[fig:lum1\]a-d). In particular, we find that $L_{\rm IR} \gg L_{\rm H\alpha}$ and $L_{\rm IR} \gg L_{[\rm O{\sc III}]}$, and we therefore conclude that the infrared waveband is the best way to detect and observe bow shocks from massive main-sequence runaway stars, regardless of $n_{\rm ISM}$ (see Section \[sect:observability\]).
Several current and/or planned facilities are designed to observe at these wavelengths and may be able to detect bow shocks from runaway stars:
1. First, the [*James Webb Space Telescope*]{} (JWST), whose [*Mid-Infrared Instrument*]{} [MIRI, @swinyard_2004] observes in the infrared ($5$$-$$28\, \mu \rm m$), which roughly corresponds to our predicted waveband of dust continuum emission from stellar wind bow shocks of runaway OB stars.
2. Secondly, the [*Stratospheric Observatory for Infrared Astronomy*]{} (SOFIA) airborne facility, whose [*Faint Object infraRed CAmera for the SOFIA Telescope*]{} [FORCAST, @adams_2008] instrument detects photons in the $5.4$$-$$37\, \mu \rm m$ waveband.
3. Then, the proposed [*Space Infrared Telescope for Cosmology and Astrophysics*]{} [SPICA, @kaneda_2004] satellite would be an ideal tool to observe stellar wind bow shocks, since it is planned to carry (i) a far-infrared imaging spectrometer ($30$$-$$210\, \mu \rm m$), (ii) a mid-infrared coronograph ($3.5/5$$-$$27\, \mu \rm m$) and (iii) a mid-infrared camera/spectrometer ($5$$-$$38\, \mu \rm m$).
4. Finally, we should mention the proposed [*Mid-infrared E-ELT Imager and Spectrograph*]{} (METIS) on the planned [*European Extremely Large Telescope*]{} [E-ELT, @brandl_2006], which will be able to scan the sky in the $3$$-$$19\, \mu \rm m$ waveband.
Exploitation of the associated archives of these instruments, in regions surrounding young stellar clusters and/or at the locations of previously detected bow-like nebulae [@buren_apj_329_1988; @vanburen_aj_110_1995; @noriegacrespo_aj_113_1997; @peri_aa_538_2012; @2015arXiv150404264P], is a research avenue to be explored.
Finally, we notice that the X-ray emission is much fainter than any other emission line or band, e.g. the model MS2070 has $L_{\rm X}/L_{\rm H\alpha} \approx 10^{-5}$; it is consequently not a relevant waveband in which to observe our bow shocks.
### Feedback {#subsect:feedback}
The ratio $\dot{E}_{\rm motion}/L_{\rm total}$ is shown as a function of the bow shock volume in Fig. \[fig:obser\]b.
![image](./Ha_M10.eps){width="100.00000%"}
![image](./Ha_M20.eps){width="100.00000%"}
![image](./Ha_M40.eps){width="100.00000%"}
\
![image](./IR_M10.eps){width="100.00000%"}
![image](./IR_M20.eps){width="100.00000%"}
![image](./IR_M40.eps){width="100.00000%"}
Discussion {#sect:discussion}
----------
### The appropriate waveband to observe stellar wind bow shocks in the Galaxy {#sect:observability}
In Fig. \[fig:paving\_lum\] we show the H$\alpha$ surface brightness (in $\rm erg\,\rm s^{-1}\,\rm cm^{-2}\,\rm arcsec^{-2}$, panels a-c) and the infrared luminosity (in $\rm erg\, \rm s^{-1}$, panels d-f) for models with $M_{\star}=10\, \rm M_{\odot}$ (left panels), $20\, \rm M_{\odot}$ (middle panels) and $40\, \rm M_{\odot}$ (right panels). The surface brightness $\Sigma^{\rm max}_{\rm H \alpha}$ scales with $n^{2}$ (see Appendix A of Paper I); therefore, the lower the ISM background density around the star, i.e. the higher its Galactic latitude, the fainter the projected emission of the bow shock and the lower the probability of observing it. The brightest bow shocks, both in the infrared and in H$\alpha$, are generated by our most massive stars running through the denser regions of the ISM ($\rm n_{\rm ISM} = 10.0\, \rm cm^{-3}$). The estimate of the infrared luminosity confirms our result relative to the bow shock models with $n_{\rm ISM}=0.79\, \rm cm^{-3}$, in the sense that the brightest bow shocks are produced by high-mass stars (Paper I) moving in a relatively dense ambient medium, i.e. within the Galactic plane (Fig. \[fig:paving\_lum\]d-f). At H$\alpha$, these bow shocks are associated with fast-moving stars ($v_{\star}=70\, \rm km\, \rm s^{-1}$) producing the strongest shocks, whereas in the infrared they are associated with slowly-moving stars ($v_{\star}=20\, \rm km\, \rm s^{-1}$) generating the largest nebulae.
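The $\Sigma^{\rm max}_{\rm H\alpha} \propto n^{2}$ scaling makes it easy to estimate, to order of magnitude, which Galactic environments produce detectable bow shocks. The sketch below rescales the value of the model MS2070n0.1 ($\approx 10^{-18}\, \rm erg\, \rm s^{-1}\, \rm cm^{-2}\, \rm arcsec^{-2}$, Section \[sect:maps\]) and compares it to the lower bound of the SHS detection threshold; the exact prefactor also depends on $M_{\star}$ and $v_{\star}$, so this is indicative only.

```python
def scaled_surface_brightness(sigma_ref, n_ref, n):
    """Rescale a reference H-alpha surface brightness with the ambient
    density, using the Sigma ~ n^2 dependence (Appendix A of Paper I)."""
    return sigma_ref * (n / n_ref) ** 2

# Reference: model MS2070n0.1, Sigma_max ~ 1e-18 erg s^-1 cm^-2 arcsec^-2
SIGMA_REF, N_REF = 1.0e-18, 0.1
SHS_LIMIT = 1.1e-17  # lower bound of the SHS threshold (Parker et al. 2005)

for n in (0.01, 0.1, 0.79, 10.0):
    sigma = scaled_surface_brightness(SIGMA_REF, N_REF, n)
    flag = "detectable" if sigma >= SHS_LIMIT else "below SHS limit"
    print(f"n_ISM = {n:5.2f} cm^-3 : Sigma ~ {sigma:.1e}  ({flag})")
```

With these numbers, only the models immersed in an ISM at least as dense as the Galactic plane value clear the SHS threshold, in line with the conclusion drawn above.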
![image](./map_rotation_legend_one.eps){width="100.00000%"}
![image](./map_rotation1_legend_1.eps){width="100.00000%"}
![image](./map_rotation_legend_a.eps){width="100.00000%"}
\
![image](./map_rotation_legend_two.eps){width="100.00000%"}
![image](./map_rotation_legend_2.eps){width="100.00000%"}
![image](./map_rotation_legend_b.eps){width="100.00000%"}
\
![image](./map_rotation_legend_three.eps){width="100.00000%"}
![image](./map_rotation_legend_3.eps){width="100.00000%"}
![image](./map_rotation_legend_c.eps){width="100.00000%"}
\
![image](./map_rotation_legend_four.eps){width="100.00000%"}
![image](./map_rotation_legend_4.eps){width="100.00000%"}
![image](./map_rotation_legend_d.eps){width="100.00000%"}
\
### Synthetic optical emission maps {#sect:maps}
In Fig. \[fig:maps1\] we plot synthetic H$\alpha$ and \[O[iii]{}\] $\lambda \, 5007$ emission maps of the bow shocks generated by our $20\, \rm M_{\odot}$ star moving with velocity $70\, \mathrm{km}\, \mathrm{s}^{-1}$ through a medium with $n_{\rm ISM}=0.1$ (left column of panels), $0.79$ (middle column of panels) and $10.0\, \rm cm^{-3}$ (right column of panels). The region of maximum H$\alpha$ emission of the gas is located close to the apex of the bow shock and extends to its trail ($z\, \le 0$). This broadening of the emitting region is due to the high space velocity of the star (see Paper I). Neither the shocked stellar wind nor the hot shocked ISM of the bow shock contributes significantly to this emission, since the H$\alpha$ emission coefficient $j_{\rm H\alpha} \propto T^{-0.9}$, and the contact discontinuity is the brightest part of the whole structure (Fig. \[fig:maps1\]a). The \[O[iii]{}\] $\lambda \, 5007$ emission is maximum at the same location; however, the slightly different temperature dependence of the corresponding emission coefficient, $j_{\rm [OIII]} \propto \exp(-1/T)/T^{1/2}$ [@dopita_aa_29_1973], induces a weaker extension of the emission into the tail of the structure (Fig. \[fig:maps1\]a). The unstable simulations with $v_{\star}\, \ge 40\, \mathrm{km}\, \mathrm{s}^{-1}$ and $n_{\rm ISM} \simeq 10\, \rm cm^{-3}$ show ring-like artefacts which dominate the emission (see Fig. \[fig:maps1\]e-h and Fig. \[fig:maps1\]i-l). They are artificially generated by the over-dense regions of the shell that are rotated and mapped onto the Cartesian grid. A three-dimensional unstable bow shock would instead show bright clumps of matter scattered over its layer of cold shocked ISM rather than regular rings [@mohamed_aa_541_2012]. Regardless of the properties of their driving star, our bow shocks are brighter in a denser ambient medium, e.g. the model MS2070n0.1 has $\Sigma^{\rm max}_{[\rm H\alpha]} \approx 10^{-18} \, \mathrm{erg}\, \mathrm{s}^{-1}\, \mathrm{cm}^{-2}\, \mathrm{arcsec}^{-2}$ whereas the model MS2070n10 has $\Sigma^{\rm max}_{[\rm H\alpha]} \approx 3\times 10^{-15} \, \mathrm{erg}\, \mathrm{s}^{-1}\, \mathrm{cm}^{-2}\, \mathrm{arcsec}^{-2}$. The projected \[O[iii]{}\] $\lambda \, 5007$ emission behaves similarly.
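The different spatial extents of the H$\alpha$ and \[O[iii]{}\] emission follow from the temperature dependence of the two emission coefficients quoted above. The snippet below compares the two scalings in arbitrary units; the normalisations and the adopted \[O[iii]{}\] excitation temperature are illustrative assumptions, not the full coefficients of @dopita_aa_29_1973 and @osterbrock_1989.

```python
import math

T_EXC = 2.9e4  # K; approximate excitation temperature of [OIII] 5007 (assumed)

def j_halpha(T):
    """H-alpha emission coefficient scaling, j ~ T^-0.9 (arbitrary units)."""
    return T ** -0.9

def j_oiii(T, T_exc=T_EXC):
    """[OIII] 5007 collisional emission scaling, j ~ exp(-T_exc/T)/sqrt(T)
    (arbitrary units); the text writes this as exp(-1/T)/T^{1/2} with T
    expressed in units of the excitation temperature."""
    return math.exp(-T_exc / T) / math.sqrt(T)

# j_halpha decreases monotonically with T, so the cold dense layer at the
# contact discontinuity dominates the H-alpha maps.  j_oiii is suppressed in
# cold gas by the Boltzmann factor and in hot gas by the 1/sqrt(T) factor;
# setting its derivative to zero shows it peaks at T = 2 T_exc, hence its
# weaker extension into the cooler tail of the bow shock.
T_PEAK = 2.0 * T_EXC
```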
![ Cross-sections taken along the direction of motion of our $20\, \rm M_{\odot}$ star moving with velocity $70\, \rm km\, \rm s^{-1}$ in an ambient medium of number density $n_{\rm ISM}=0.1\, \rm cm^{-3}$. The data are plotted for inclination angles $\phi=30{\ensuremath{^\circ}}$ (thin solid red line), $\phi=45{\ensuremath{^\circ}}$ (thin dotted blue line), $\phi=60{\ensuremath{^\circ}}$ (thick solid orange line) and $\phi=90{\ensuremath{^\circ}}$ (thick dotted dark green line) through their H$\alpha$ surface brightness (see Fig. \[fig:maps1\]a-d). The position of the star is located at the origin. []{data-label="fig:profiles"}](./cut_profile_2070n01_Ha.eps){width="45.00000%"}
![ Bow shock H$\alpha$ surface brightness (a) and ratio $\Sigma^{\rm max}_{[\rm O{\sc III}]}/
\Sigma^{\rm max}_{[\rm H\alpha]}$ (b) as a function of its volume $R(0)^{3}$ (in $\rm pc^{3}$). Upper panel shows the H$\alpha$ surface brightness as a function of the detection threshold of the SuperCOSMOS H$\alpha$ Survey (SHS) of $\Sigma_{\rm SHS}\approx 1.1-2.8 \times 10^{-17}\,
\rm erg\, \rm s^{-1}\, \rm cm^{-2}\, \rm arcsec^{-2}$ [@parker_mnras_362_2005]. Lower panel plots the ratio $\Sigma^{\rm max}_{[\rm O{\sc III}]}/\Sigma^{\rm max}_{[\rm H\alpha]}$ of the same models. []{data-label="fig:vol_vs_S_Ha"}](./vol_vs_S_Ha.eps){width="100.00000%"}
![ Same data as above; the lower panel plots the ratio $\Sigma^{\rm max}_{[\rm O{\sc III}]}/\Sigma^{\rm max}_{[\rm H\alpha]}$ of the same models as a function of bow shock volume. ](./vol_vs_OIII_over_Ha.eps){width="100.00000%"}
\
In Fig. \[fig:profiles\] we show cross-sections of the H$\alpha$ surface brightness of the model MS2070n0.1. The cuts are taken along the symmetry axis of the figures and plotted as a function of the inclination angle $\phi$ with respect to the plane of the sky. The emission rises slightly as $\phi$ increases from $\phi=30{\ensuremath{^\circ}}$ (thin solid red line) to $\phi=60{\ensuremath{^\circ}}$ (thick solid orange line), since $\Sigma^{\rm max}_{[\rm H\alpha]}$ peaks at about $6\times 10^{-19}$ and about $10^{-18}\, \mathrm{erg}\, \mathrm{s}^{-1}\, \mathrm{cm}^{-2}\, \mathrm{arcsec}^{-2}$, respectively. The case with $\phi=90{\ensuremath{^\circ}}$ is different, since the emission decreases to $\approx 2\times 10^{-19}\, \rm erg\, \rm s^{-1}\, \rm cm^{-2}\, \rm arcsec^{-2}$ (see thick dotted green line in Fig. \[fig:profiles\]). The same is true for the \[O[iii]{}\] emission since its dependence on the post-shock density is similar. Large angles of inclination make the opening of the bow shocks larger (Fig. \[fig:maps1\]a-c, e-g, i-k) and the stand-off distance appear smaller (Fig. \[fig:maps1\]a-c). Note that bow shocks observed with a viewing angle of $\phi=90{\ensuremath{^\circ}}$ do not resemble an arc-like shape but rather an overlap of iso-emitting concentric circles (Fig. \[fig:maps1\]d,h,l).
### Bow shock observability in H$\alpha$ and comparison with observations {#sect:comp}
In Fig. \[fig:vol\_vs\_S\_Ha\] we show our bow shocks’ H$\alpha$ surface brightness (a) and their $\Sigma^{\rm max}_{[\rm O{\sc III}]}/\Sigma^{\rm
max}_{[\rm H\alpha]}$ ratio (b), both as a function of the volume of emitting gas ($z\ge0$). The color coding of both panels follows the definitions adopted in Fig. \[fig:axis\_ratio\]. The models with a $10\, \rm M_{\odot}$ star have a volume smaller than about a few $\rm pc^{3}$ and an emission smaller than about $10^{-15}\, \rm erg\, \rm s^{-1}\, \rm cm^{-2}\, \rm arcsec^{-2}$. The models with $M_{\star}=20\, \rm M_{\odot}$ have a larger volume at equal $n_{\rm ISM}$ and can reach a surface brightness of about a few $10^{-14}\, \rm erg\, \rm s^{-1}\, \rm cm^{-2}\, \rm arcsec^{-2}$ if $n_{\rm ISM}=10\, \rm cm^{-3}$. Note that all models with $n_{\rm ISM} \ge 10.0\, \rm cm^{-3}$ produce emission larger than the diffuse emission sensitivity threshold of the [*SuperCOSMOS H-Alpha Survey*]{} (SHS) of $\Sigma_{\rm SHS} \approx 1.1$-$2.8 \times 10^{-17}\, \rm erg\, \rm s^{-1}\, \rm cm^{-2}\, \rm arcsec^{-2}$ [@parker_mnras_362_2005], and such bow shocks should consequently be observed by this survey (see horizontal black line in Fig. \[fig:vol\_vs\_S\_Ha\]a).
As discussed above, a significant fraction of our sample of bow shock models have a H$\alpha$ surface brightness larger than the sensitivity limit of the SHS survey [@parker_mnras_362_2005]. This remark can be extended to other (all-sky) H$\alpha$ observation campaigns, especially if their detection threshold is lower than that of the SHS. This is the case of, e.g. the [*Virginia Tech Spectral-Line Survey*]{} [VTSS, @dennison_aas_195_1999] and the [*Wisconsin H-Alpha Mapper*]{} [WHAM, @reynolds_pasa_15_1998], whose diffuse-emission sensitivity limits allow the detection of structures of sub-Rayleigh intensity. Consequently, one can expect to find optical traces of stellar wind bow shocks from OB stars in these data. According to our study, their driving stars are more likely to be of initial mass $M_{\star} \ge 20\, \rm M_{\odot}$ (Fig. \[fig:vol\_vs\_S\_Ha\]a). This also implies that bow shocks [*in the field*]{} that are observed with such facilities are necessarily produced by runaway stars of initial mass $M_{\star} \ge 20\, \rm M_{\odot}$. Moreover, we find that the models involving a $10\, \rm M_{\odot}$ star with $v_{\star}\, \ge 40\, \mathrm{km}\, \mathrm{s}^{-1}$ have $\Sigma^{\rm max}_{[\rm O{\sc III}]}/\Sigma^{\rm max}_{[\rm H\alpha]}>10$, whereas almost all of the other simulations do not satisfy this criterion (Fig. \[fig:vol\_vs\_S\_Ha\]b).
Finally, we find a similarity between some of the cross-sections taken along the symmetry axis of the H$\alpha$ surface brightness of our bow shock models (Fig. \[fig:profiles\]) and the radial brightness profile, measured in emission measure, of the bow shock generated by the runaway O star HD 57061 [see fig. 5 of @brown_aa_439_2005]. This observable and our model permit a direct comparison since the H$\alpha$ emission and the emission measure have the same quadratic dependence on the gas number density. The emission measure profile of HD 57061 slightly increases from the star to the bow shock and steeply peaks in the region close to the contact discontinuity, before decreasing close to the forward shock of the bow shock and reaching the ISM background emission. Our H$\alpha$ profile with $\phi=60{\ensuremath{^\circ}}$ is consistent with (i) the above described variations and (ii) the estimate of the inclination of the symmetry axis of HD 57061 with respect to the plane of the sky of about $75{\ensuremath{^\circ}}$, see table 3 of @brown_aa_439_2005. Note that according to our simulations, the emission peaks in the region separating the hot from the cold shocked ISM gas.
### Implication for the evolution of supernova remnants generated by massive runaway stars {#sect:pre_sn}
Massive stars evolve and die as supernovae, a sudden and strong release of matter, energy and momentum taking place inside the ISM pre-shaped by their past stellar evolution [@langer_araa_50_2012]. In the case of a runaway progenitor, the circumstellar medium at the pre-supernova phase can be a bow shock nebula with which the shock wave interacts before expanding into the unperturbed ISM [@brighenti_mnras_270_1994]. The subsequently growing supernova remnant develops asymmetries since it is braked by the mass at the apex of the bow shock but expands freely into the cavity driven by the star in the opposite direction [@borkowski_apj_400_1992]. If the progenitor is slightly supersonic, the bow shock is mainly shaped during the main-sequence phase of the star, whereas if the progenitor is a fast-moving star then the bow shock is essentially made of material from the last pre-supernova evolutionary phase. In the Galactic plane ($n_{\rm ISM}=0.79\, \rm cm^{-3}$) such asymmetries arise if the apex of the bow shock accumulates at least $1.5\, \rm M_{\odot}$ of shocked material [@meyer_mnras_450_2015].
In Fig. \[fig:mass\] we present the mass trapped in the $z\ge0$ region of our bow shock models as a function of their volume. As in Fig. \[fig:vol\_vs\_S\_Ha\], the figure distinguishes the initial mass and the ambient medium density of each model. Amongst our bow shock simulations, 9 models have $M_{\rm bow} \gtrsim 1.5\, \rm M_{\odot}$ and 4 of them are generated by the runaway stars whose asymmetric supernova remnants were studied in detail in @meyer_mnras_450_2015. The other models with $v_{\star} \le 40\, \rm km\, \rm s^{-1}$ may produce asymmetric remnants because they will explode inside their main-sequence wind bubble. The model MS4070n0.1 has $v_{\star}=70\, \rm km\, \rm s^{-1}$, which indicates that the main-sequence bow shock will be advected downstream by the rapid stellar motion and that the surroundings of the progenitor at the pre-supernova phase are made of, e.g. red supergiant material. Consequently, its shock wave may be unaffected by the presence of the circumstellar medium. We leave the examination of this conjecture via hydrodynamical simulations for future work. Interestingly, we notice that most of the potential progenitors of asymmetric supernova remnants are moving in a low-density medium $n_{\rm ISM} \le 0.1\, \rm cm^{-3}$, i.e. in the rather high-latitude regions of the Milky Way. This is consistent with the interpretation of the elongated shape of, e.g. Kepler’s supernova remnant as the consequence of the presence of a massive bow shock at the time of the explosion [@velazquez_apj_649_2006; @toledoray_mnras_442_2014].
### The influence of the interstellar magnetic field on the shape of supernova remnants
![ Bow shocks mass as a function of the bow shock volume. The figure shows the mass $M_{\rm bow}$ (in $M_{\odot}$) trapped in the $z \ge 0$ region of the bow shock as a function of its volume $R(0)^{3}$ (in $\rm pc^{3}$). The dots distinguish between models (i) as a function of the ISM ambient medium with $n_{\rm ISM}=0.01$ (triangles), $0.1$ (diamonds), $0.79$ (circles) and $10\, \rm cm^{-3}$ (squares), and (ii) as a function of the initial mass of the star with $10$ (blue dots), $20$ (red plus signs) and $40\, \rm M_{\odot}$ (green crosses). The thin horizontal black line corresponds to $M_{\rm bow} = 1.5\, \rm M_{\odot}$, i.e. the condition to produce an asymmetric supernova remnant if $n_{\rm ISM}=0.79\, \rm cm^{-3}$ [@meyer_mnras_450_2015]. []{data-label="fig:mass"}](./vol_vs_mass.eps){width="100.00000%"}
\
Conclusion {#section:cc}
==========
Our bow shock simulations indicate that no structural differences arise when changing the density of the background ISM in which the stars move, i.e. their internal organisation is similar to that described in @comeron_aa_338_1998 and Paper I. The same is true for their radiative properties, which are governed by line cooling such as the \[O[iii]{}\] $\lambda \, 5007$ line and show faint H$\alpha$ emission, both principally originating from the outer region of shocked ISM gas. We also find that their X-ray signature is fainter by several orders of magnitude than their H$\alpha$ emission, and, consequently, it is not a good waveband in which to search for such structures.
The best way to observe bow shocks remains their infrared emission of starlight reprocessed by shocked ISM dust [@meyer_mnras_2013]. We find that the brightest infrared bow shocks, i.e. the most easily observable ones, are produced by high-mass ($M_{\star} \approx 40\, \rm M_{\odot}$) stars moving with a slow velocity ($v_{\star}\approx 20\, \rm km\, \rm s^{-1}$) in the relatively dense regions ($n_{\rm ISM}\approx 10\, \rm cm^{-3}$) of the ISM, whereas the brightest H$\alpha$ structures are produced by these stars when moving rapidly ($v_{\star}\approx 70\, \rm km\, \rm s^{-1}$). Thin-shelled bow shocks have mid-infrared luminosities which do not trace the time variations of their unstable structures. This indicates that spectral energy distributions of stellar wind bow shocks are the appropriate tool to analyze them, since they do not depend on the transient effects that affect their density field.
A detailed analysis of our grid of simulations indicates that the H$\alpha$ surface brightness of Galactic stellar wind bow shocks increases as their angle of inclination with respect to the plane of the sky increases up to $\phi = 60{\ensuremath{^\circ}}$; edge-on viewed bow shocks, however, are particularly faint. We find that all bow shocks generated by a $40\, \rm M_{\odot}$ runaway star could be observed with Rayleigh-sensitive H$\alpha$ facilities, and that bow shocks observed [*in the field*]{} by means of these facilities should be driven by stars of initial mass larger than about $20\, \rm M_{\odot}$. Furthermore, all of our bow shocks generated by a $10\, \rm M_{\odot}$ star moving with $v_{\star}\ge 40\, \rm km\, \rm s^{-1}$ have a line ratio $\Sigma^{\rm max}_{[\rm O{\sc III}]}/\Sigma^{\rm max}_{[\rm H\alpha]}>10$. Our study suggests that slowly-moving stars of ZAMS mass $M_{\star} \ge 20\, \rm M_{\odot}$ moving in a medium of $n_{\rm ISM }\ge 0.1\, \rm cm^{-3}$ generate massive bow shocks, i.e. are likely to induce asymmetries in their subsequent supernova shock wave. This study will be extended in future work, e.g. by estimating the observability of bow shocks around red supergiant stars.
Acknowledgements {#acknowledgements .unnumbered}
================
D. M.-A. Meyer thanks P. F. Velazquez, F. Brighenti and L. Kaper for their advice, and F. P. Wilkin for useful comments on stellar wind bow shocks which partly motivated this work. This study was conducted within the Emmy Noether research group on “Accretion Flows and Feedback in Realistic Models of Massive Star Formation” funded by the German Research Foundation under grant no. KU 2849/3-1. A.-J. van Marle acknowledges support from FWO, grant G.0227.08, KU Leuven GOA/2008, 04 and GOA/2009/09. The authors gratefully acknowledge the computing time granted by the John von Neumann Institute for Computing (NIC) and provided on the supercomputer JUROPA at the Jülich Supercomputing Centre (JSC).
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'High-redshift submillimetre-bright galaxies identified by blank field surveys at millimetre and submillimetre wavelengths appear in the region of the IRAC colour–colour diagrams previously identified as the domain of luminous active galactic nuclei (AGNs). Our analysis using a set of empirical and theoretical dusty starburst spectral energy distribution (SED) models shows that power-law continuum sources associated with hot dust heated by young ($\la 100$ Myr old), extreme starbursts at $z>2$ also occupy the same general area as AGNs in the IRAC colour–colour plots. A detailed comparison of the IRAC colours and SEDs demonstrates that the two populations are distinct from each other, with submillimetre-bright galaxies having a systematically flatter IRAC spectrum ($\ga1$ mag bluer in the observed \[4.5\]–\[8.0\] colour). Only about 20% of the objects overlap in the colour–colour plots, and this low fraction suggests that submillimetre galaxies powered by a dust-obscured AGN are not common. The red IR colours of the submillimetre galaxies are distinct from those of the ubiquitous foreground IRAC sources, and we propose a set of IR colour selection criteria for identifying SMG counterparts that can be used even in the absence of radio or [[*S*pitzer]{}]{} MIPS 24 $\mu$m data.'
author:
- |
Min S. Yun$^{1}$[^1], Itziar Aretxaga$^{2}$, Matthew L. N. Ashby$^{3}$, Jason Austermann$^{1}$, Giovanni G. Fazio$^{3}$, Mauro Giavalisco$^{1}$, Jia–Sheng Huang$^{3}$, David H. Hughes$^{2}$, Sungeun Kim$^{4}$, James D. Lowenthal$^{5}$, Thushara Perera$^{1}$, Kim Scott$^{1}$, Grant W. Wilson$^{1}$, Joshua D. Younger$^{3}$\
$^{1}$Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA\
$^{2}$Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE), Tonantzintla, Puebla, México\
$^{3}$Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA\
$^{4}$Astronomy and Space Sciences Department, Sejong University, 98 Kwangjin-gu, Kunja-dong, Seoul, 143-747, Korea\
$^{5}$Astronomy Department, Smith College, Northampton, MA 01060, USA
date:
- Accepted 2008 June 12
title: '[*S*pitzer]{} IRAC Infrared Colours of Submillimetre-bright Galaxies'
---
\[firstpage\]
cosmology: observations – galaxies: evolution – galaxies: high–redshift – galaxies: starburst – galaxies: active – infrared: galaxies.
Introduction
============
One of the most exciting developments of the past decade has been the resolution of the cosmic far-infrared background into discrete sources, providing a first glimpse of the rapid build-up of massive galaxies in the early universe long predicted by theory. Deep, wide blank–field surveys at millimetre (mm) and submillimetre (submm) wavelengths [@smail97; @barger98; @hughes98; @eales99; @scott2002; @wang04; @wang06; @greve04; @laurent05; @mortier05; @bertoldi07; @scott08] have shown that ultraluminous infrared galaxies (ULIRGs) at $z\ga 1$ contribute significantly to the observed far-IR background. Multi–wavelength follow–up studies of these so-called submillimetre galaxies (SMGs) suggest that they are massive, young galaxies seen during the period of rapid stellar mass build-up, with very high specific star formation rates at $z>1$ [see review by @blain02].
A major obstacle in understanding the nature of this luminous dusty galaxy population is the limited angular resolution ($>10\arcsec$) of current instrumentation, which prevents unambiguous identification of their counterparts at other wavelengths. Deep interferometric radio imaging and [[*S*pitzer]{}]{} 24 $\mu$m MIPS imaging are shown to be effective for identifying a subset of SMGs at $z\la3$ [@ivison02; @chapman05; @pope06], and our understanding of their nature and evolution is based almost entirely on some 100 SMGs identified this way. Recent 890 $\mu$m continuum imaging of the seven brightest AzTEC [@wilson08] 1100 $\mu$m sources in the COSMOS field [@scoville07] using the Smithsonian Submillimeter Array (SMA) by @younger07 has shown that there may be a substantial population of higher redshift ($z>3$) SMGs that are extremely faint or undetected at radio and/or 24 $\mu$m MIPS bands. This discovery accentuates the need for a new method to identify and investigate the higher redshift SMGs in order to quantify their contribution to the cosmic energy budget and star formation history, and to map their evolution over time.
One common feature among all seven COSMOS AzTEC/SMA sources and the four other SMGs detected with the SMA by @iono06, @wang07, and @younger08 is that they are all detected in the [[*S*pitzer]{}]{} IRAC bands at a $\ga1\mu$Jy level, raising the exciting possibility that deep IRAC imaging may provide a powerful new tool for identifying and obtaining a deeper understanding of the SMG phenomenon. At $z\sim3$, the IRAC bands cover the rest-frame optical to near-IR portion of the spectral energy distribution (SED) and offer direct insight into the properties of their stellar component. A detailed analysis of the multiwavelength properties of the AzTEC/SMA sources is presented elsewhere (Yun et al., in prep.). In this paper, we examine the rest-frame optical/near-IR properties of a large sample of well-studied SMGs and the utility of the IRAC colour–colour plots for identifying and investigating the nature of the SMG population in general.
![image](figure1.eps){width="5.5in"}
[[*S*pitzer]{}]{} IRAC colours of Submillimetre Galaxies and AGNs {#sec:colourcolour}
=================================================================
Red IRAC colours of SMGs
------------------------
A colour–colour diagram is a powerful tool for analyzing SEDs sampled coarsely with just a few broad-band measurements. The four [[*S*pitzer]{}]{} IRAC bands (3.6 $\mu$m, 4.5 $\mu$m, 5.8 $\mu$m, and 8.0 $\mu$m) probe the portion of the galaxy SED that includes photospheric emission from cool stars (at $z=0-4$) and power-law continuum from hot dust surrounding young stars and/or AGN. Polycyclic aromatic hydrocarbon (PAH) and other spectral features are also important for the $z\sim0$ galaxies. The 1.6 $\mu$m stellar photospheric feature is prominent in stellar systems older than 10 Myr, and galaxies with substantial cool stellar populations appear blue in an IRAC colour–colour diagram as a result [see @simpson99; @sawicki02]. Sources dominated by stellar photospheric emission form a densely concentrated cloud near ($-$0.4, $-$0.4) in the $S_{5.8}/S_{3.6}$ vs. $S_{8.0}/S_{4.5}$ colour–colour diagram for a sample of field galaxies shown by @lacy04. The $z\sim0$ late type galaxies with varying amounts of PAH emission in the 8.0 $\mu$m band form a distinct branch of galaxies with a constant $S_{5.8}/S_{3.6}$ ratio emerging from this cloud. Noting that 54 quasars identified in the Sloan Data Release 1 quasar survey [@schneider03] are associated with a second branch with red IR colours, Lacy et al. identified a wedge-shaped area in the colour–colour diagram as the characteristic region populated by obscured and unobscured AGNs.
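As a concrete illustration of how such a diagram is built from broad-band photometry, the sketch below computes the two log flux ratios and tests membership in a wedge-shaped region. The boundary values are those commonly quoted in the literature for the Lacy et al. selection and should be treated as illustrative assumptions, not as the definitive criteria of this paper:

```python
import math

def lacy_colours(s36, s45, s58, s80):
    """Colour-colour coordinates of the Lacy et al. (2004) diagram:
    x = log10(S_5.8/S_3.6), y = log10(S_8.0/S_4.5); flux densities in
    any common unit (e.g. microJy)."""
    return math.log10(s58 / s36), math.log10(s80 / s45)

def in_lacy_wedge(x, y):
    """Wedge-shaped AGN region; boundary values as commonly quoted
    for Lacy et al. (2004) -- illustrative only."""
    return (x > -0.1) and (y > -0.2) and (y <= 0.8 * x + 0.5)

# A blue, photosphere-dominated foreground galaxy near the (-0.4, -0.4)
# centroid of the field-galaxy cloud falls outside the wedge:
print(in_lacy_wedge(-0.4, -0.4))   # -> False

# A red source with both flux ratios ~2 (log ratio ~0.3) falls inside:
x, y = lacy_colours(s36=10.0, s45=14.0, s58=20.0, s80=28.0)
print(in_lacy_wedge(x, y))         # -> True
```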
To explore the importance of AGN activity among SMGs, we examine the IR properties of a sample of 47 well studied SMGs and compare them to AGNs using the AGN diagnostic IRAC colour–colour plot by @lacy04. Our SMG sample is constructed from the literature primarily for their secure identification using high angular resolution interferometric imaging at radio wavelengths and deep [[*S*pitzer]{}]{} MIPS imaging [@ivison05; @greve05; @tacconi06; @pope06]. A small subset of this sample has been observed with the [[*S*pitzer]{}]{} Infrared Spectrograph [IRS; @menendez07; @valiante07; @rigby08; @pope08], and those classified as “starburst” (strong PAH features) and “starburst+AGN” (PAH plus power-law continuum) are shown using different symbols in Figure \[fig:colourcolourplot\]. Of these 20 submm-selected galaxies, none shows a pure power-law-dominated IRS spectrum characteristic of an AGN. We compare the IR colours of our SMG sample to those of the 19 $z\sim1$ AGNs identified by their power-law continuum in the First Look Survey (FLS) field by @lacy04, and to the IR colours of the foreground galaxy population, using the 4000 IRAC sources randomly selected from the COSMOS field [@sanders07]. The IRAC colours of these foreground sources are slightly offset from the FLS centroid (indicated in Figure \[fig:colourcolourplot\] by a large circle), probably because the COSMOS sources are on average fainter and at higher redshift.
A surprising result is that more than 90% of the SMGs are located within the region designated for AGNs by @lacy04. It is tempting to interpret this result as an indication that the majority of SMGs host a luminous AGN, as @greve07 concluded for their MAMBO-selected SMGs in the field surrounding the $z=3.8$ radio galaxy 4C 41.17. However, the majority (13 out of 20, or 65%) of SMGs observed with [[*S*pitzer]{}]{} IRS show spectra dominated by PAH features, characteristic of pure starburst systems. The remaining seven SMGs show starburst+AGN hybrid spectra, suggesting that the power-law AGN emission is not the dominant contributor to the total IR luminosity of these SMGs either. Four of the SMGs targeted by @valiante07 were undetected by their [[*S*pitzer]{}]{} IRS observations. Deep Chandra X-ray data in the GOODS-North field suggest that only a small fraction of SMGs are detected in the hard X-ray band [@pope06; @pope08], further supporting the notion that energetically-dominant, luminous AGNs are not common among the SMGs [also see @alexander03].
![image](figure2.eps){width="5.5in"}
Another commonly used diagnostic for AGN activity is the [*Spitzer*]{}/IRAC \[5.8\]–\[8.0\] vs. \[3.6\]–\[4.5\] colour–colour diagram, first introduced by @stern05. In addition to red colours, Stern et al. utilize empirical colour tracks of star-forming galaxies (e.g., M82) to further differentiate starbursts from AGNs. We reproduce the Stern et al. plot in Figure \[fig:colourcolourplot2\] using the same samples as used in Figure \[fig:colourcolourplot\]. About 1/3 of the SMGs (16/47), including 5 out of 13 “starburst” IRS spectrum sources, are now found outside the AGN region outlined by Stern et al. However, a large fraction of the SMGs (31/47), including many with “starburst” IRS spectra, are within the AGN region.
Analysis using empirical and theoretical SED models
---------------------------------------------------
A natural explanation for the observed red IR colours of SMGs emerges when empirical and theoretical SED models of dusty starbursts are examined in the context of these IRAC colour–colour plots. The extreme IR luminosity of the mm/submm detected sources ($L_{IR}\ge10^{12-13}L_\odot$) indicates that a dust-obscured extreme starburst or a luminous AGN dominates their energy output, and the entire observed SED can be described by a relatively simple model [see @yun02]. Theoretically motivated radiative transfer models for dust-enshrouded starbursts and AGNs [@silva98; @efstathiou00; @siebenmorgen07; @chakrabarti08] are successful in reproducing the observed UV-to-radio SEDs, including those of extreme objects such as SMGs and “hyperluminous” IR galaxies not commonly found in the local universe [see @farrah03; @efstathiou03; @vega08; @groves08]. A large number of free parameters (e.g., source geometry, initial mass function, metallicity) and some degeneracy among them limit the utility of these SED models for unique quantitative interpretations of the photometric data. Nevertheless, they are highly valuable tools for evaluating the effects of model parameters such as starburst age and extinction. For this work, we adopt the model SEDs computed by @efstathiou00 for their ease of use and because they have already been widely tested [e.g. @efstathiou03; @farrah03].
![A Comparison of the @efstathiou00 model SEDs with the observed SEDs of the $z=1.74$ QSO SSTXFL J172253.9+582955 [@lacy04] and the $z=2.49$ SMG SMM J123707.7+621411 [GN19; @pope06]. The model SEDs are normalized to roughly match the measured flux densities of GN19 in the IRAC bands. The SED of SSTXFL J172253.9+582955 is redshifted to $z=2.49$ to match that of GN19 for an easier comparison. []{data-label="fig:SEDs"}](figure3.eps){width="\hsize"}
The Efstathiou et al. SED models are shown in Figure \[fig:SEDs\] along with the observed SEDs of the $z=2.49$ SMG J123707.7+621411 [@pope06] and the $z=1.74$ QSO SSTXFL J172253.9+582955 [@lacy04] in order to illustrate the IRAC colour evolution with starburst age. The shape of the model SED changes quickly during the first 100 Myr, primarily driven by the rapid evolution of the young stellar population. As the stellar population ages, the combined effect of decreasing radiation intensity and increasing photospheric emission (“1.6 $\mu$m bump”) makes the SEDs flatten (become bluer) in the IRAC bands. These are generic features of all SED models, only dependent on the details of the input stellar population synthesis model. Even though the build-up of cool stars and the 1.6 $\mu$m photospheric component become evident as early as $\sim$30 Myr after the initial starburst, the overall colour of starburst systems remains [*red*]{} even at 64 Myr after the burst. In comparison, the red continuum of the IR AGN increases monotonically across the IRAC bands into the mid-IR (MIPS 24 $\mu$m) band, with a steeper slope than that of the SMG and most of the theoretical starburst SEDs. The comparison shown in Figure \[fig:SEDs\] nicely demonstrates the clear difference in the origin of their near-IR emission and the spectral slope between IR AGNs and SMGs, despite the broad similarity in their red IRAC colours.
Three colour tracks for a single starburst population SED model with different ages are shown in Figures \[fig:colourcolourplot\]c & \[fig:colourcolourplot2\]c. They cover the full range of IRAC colours associated with both AGNs and SMGs, with the older starburst tracks showing successively [*bluer*]{} IRAC colours. The majority of SMGs appear scattered about the model SED colour tracks, consistent with their [[*S*pitzer]{}]{} IRS spectra being characteristic of starburst-dominated systems. For a given model SED, the IRAC colour becomes monotonically redder with increasing redshift at $z\ga1$, and most SMGs have colours consistent with model SEDs redshifted to $z=1-5$. The colour dependencies on starburst age and redshift are nearly parallel to each other, leading to some degeneracy between the two quantities. Nevertheless, the observed red IRAC colours of SMGs can arise [*only if SMGs are at high redshift*]{} ($z\ga1$).
The effects of extinction (column density along the line of sight) are not as important in determining the observed IRAC colours of SMGs, unlike the case at shorter, optical wavelengths. The model colour tracks for two different visual extinctions ($A_V=50$ and 200) shown in Figures \[fig:colourcolourplot\]d & \[fig:colourcolourplot2\]d track each other closely at $z\la2$, despite the large opacity difference between the two models. Optical depth is greatly reduced at these long wavelengths [$A_V/A_{\lambda}=20\sim40$ for the IRAC bands; see @indebetouw05; @roman07], and the weak dependence on extinction $A_V$ can be naturally understood. As redshift increases, the IRAC bands begin to probe the near-IR to optical bands, and an increasing dependence on extinction is expected. Indeed the IRAC colours of the two model SEDs diverge at $z>2$, with the higher extinction $A_V=200$ model predicting redder IRAC colours as expected (see Fig. \[fig:colourcolourplot\]d), and some degeneracy between redshift and extinction is also identified. Overall, extinction still plays a minor role compared with starburst age, and we identify starburst age and redshift as the dominant physical parameters that affect the observed IRAC colours. Accounting for SMGs with the reddest observed colours (e.g., log($S_{5.8}/S_{3.6}$)$>$0.3, log($S_{8.0}/S_{4.5}$)$>$0.3) requires young ($\la30$ Myr old) starbursts at high redshifts ($z\ga3$) or a power-law AGN dominating the rest-frame near-IR SED.
An AGN-like warm IR colour is a generic feature of [*all*]{} stellar systems whose near-IR luminosity is dominated by a dust-obscured young stellar population. In analyzing the “mid-IR excess” of galaxies selected using the Infrared Astronomical Satellite (IRAS) data, @yun01 noted that the youngest dusty starbursts can display warm mid-IR colours, exceeding the classic Seyfert division at $S_{25\mu m}/S_{60\mu m} \ge 0.18$ [@degrijp85]. While exploring the presence of power-law AGN candidates in the Chandra Deep Field North region, @donley07 also noted the incursions by their empirical ULIRG colour tracks (derived from the observed SEDs of Arp 220, IRAS 17208$-$0014, and Mrk 273) into the AGN region in their IRAC colour–colour diagram. The incursion of M82-like objects into the AGN boundary has led @stern05 to refine the selection boundary, and @barmby06 question the completeness and reliability of AGN identification using IRAC colours for a similar reason. Using theoretical and empirical starburst SED models, we demonstrate that young, dusty starbursts exhibit red IRAC colours, and red IRAC colour is [*not*]{} unique to power-law AGNs. The popular AGN identification methods using red IRAC colours, such as by @lacy04 and @stern05, should be used with caution and a clear understanding of this important caveat.
Systematic colour difference between SMGs and AGNs
--------------------------------------------------
Both SMGs and AGNs appear within the broadly defined red IRAC colour regions of Lacy et al. and Stern et al., but the two populations are distinguished by a clear systematic difference in their mean IRAC colours. The SMGs as a group are systematically bluer than the AGNs, with only about 20% of SMGs appearing mixed in among the power-law AGNs in both diagnostic colour–colour plots. All AGNs show log($S_{8.0}/S_{4.5}$) $\ge0.2$ in Figure \[fig:colourcolourplot\] while the overwhelming majority ($\ga80\%$) of the SMGs have a smaller flux ratio. Similarly, all AGNs have \[5.8\]–\[8.0\] colours redder than +0.2 in Figure \[fig:colourcolourplot2\]. The most significant colour difference is between the 4.5 $\mu$m and 8.0 $\mu$m bands, with an average difference of $\langle[4.5]-[8.0]\rangle\ga 1$ mag. The \[3.6\]–\[4.5\] colour difference is the smallest, with $\langle[3.6]-[4.5]\rangle \sim 0.25$ mag.
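The magnitude colours quoted here relate to the flux-density ratios of the first diagram through the usual $-2.5\,\log_{10}$ rule plus the difference of the band zero points. The sketch below makes that conversion explicit; the IRAC Vega zero-point flux densities used are approximate values commonly quoted in the calibration literature (roughly 280.9, 179.7, 115.0 and 64.9 Jy for the four bands) and are assumptions of the example, not values taken from this paper:

```python
import math

# Approximate IRAC Vega zero-point flux densities in Jy (assumed
# values from the calibration literature; not from this paper).
F0 = {"3.6": 280.9, "4.5": 179.7, "5.8": 115.0, "8.0": 64.9}

def vega_colour(band1, flux1, band2, flux2):
    """Vega colour [band1]-[band2] from flux densities in Jy,
    using m = -2.5 log10(F / F0) in each band."""
    m1 = -2.5 * math.log10(flux1 / F0[band1])
    m2 = -2.5 * math.log10(flux2 / F0[band2])
    return m1 - m2

# A flat-spectrum source (equal flux density in both bands) still has
# a nonzero Vega colour because the two zero points differ:
print(round(vega_colour("4.5", 50e-6, "8.0", 50e-6), 2))   # -> 1.11
```

With these zero points, a [4.5]–[8.0] colour 1 mag redder than flat spectrum corresponds to a flux-density ratio $S_{8.0}/S_{4.5}$ of roughly 0.9, i.e. log($S_{8.0}/S_{4.5}$) near zero, consistent with the flux-ratio and magnitude-colour cuts being two views of the same measurement.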
The comparison of the full IRAC band SEDs shown in Figure \[fig:IRACSEDs\] offers a visually compelling demonstration of a steeper spectral slope for the AGNs. The average IRAC SED of the AGN sample is in excellent agreement with the empirical type-2 QSO SED template by @polletta07. The template SED for a type-1 QSO (not shown) is nearly identical to that of a type-2 QSO at these wavelengths, and the steep spectral slope is characteristic of all obscured and unobscured AGNs. In comparison, the average IRAC SED of the SMG sample is significantly flatter, with only a few outliers showing an AGN-like steeper spectral slope. The ULIRG Arp 220 SED redshifted to $z=2$ offers a far better match to the SMGs, lending further support to the dust-obscured starburst interpretation. Furthermore, the flatter spectral slope (bluer IRAC colours) suggests that SMGs in most cases do not host a dust-obscured, energetically dominant AGN. We note that rigorous diagnostics of an AGN include the detection of high-ionization, high-excitation emission lines and copious amounts of hard X-ray emission; an IR power-law spectrum is only indirect evidence for the presence of an AGN and does not directly measure the AGN accretion power.
The distinct difference in the observed SEDs between AGNs and SMGs also offers an important constraint on the possible SMG-QSO evolutionary scenario. The apparent correlation observed between the black hole mass ($M_{BH}$) and the host spheroid mass [velocity dispersion $\sigma$; see @magorrian98; @ferrarese00; @gebhardt00] has raised considerable interest in the process that leads to the build-up of the stellar mass in galaxies and the growth of the central super-massive black hole. The evolutionary scenario that a massive nuclear starburst associated with an ultraluminous infrared galaxy leads to a QSO phase [@sanders88; @norman88] offers an attractive explanation for the $M_{BH}-\sigma$ relation if the SMG phase corresponds to the period of rapid stellar mass build-up. However, the minor overlap between the AGNs and the SMGs that we find here suggests that the duration of the transition period should be much shorter than the SMG or the IR AGN phase in such an evolutionary scenario.
![The IRAC band continuum spectral energy distributions for the AGNs and SMGs shown in Figures \[fig:colourcolourplot\] & \[fig:colourcolourplot2\]. All SEDs are normalized in the 4.5 $\mu$m band, and the AGNs and SMGs are displaced vertically by an arbitrary amount for ease of comparison. The $z=2$ SEDs of type-2 QSO and Arp 220 (M82 SED is nearly identical) compiled by @polletta07 are shown for comparison, with an arbitrary offset in the flux scaling.[]{data-label="fig:IRACSEDs"}](figure4.eps){width="\hsize"}
Identification of Optical/IR Counterparts to Submillimetre Sources {#sec:ID}
==================================================================
As discussed above, every SMG identified with high angular resolution interferometric imaging using the SMA has a clear IRAC counterpart, including those undetected in the radio and in the MIPS 24 $\mu$m band. [[*S*pitzer]{}]{} IRAC data may therefore offer a powerful new method for identification of SMGs that is otherwise extremely difficult because of their faintness at optical and near-IR wavelengths. The uncertainty in the mm-wave positions of SMGs is typically 5–10 arcsec, and the high IRAC source density [$\sim60$ arcmin$^{-2}$ at the 1.4 $\mu$Jy level in the 3.6 $\mu$m band; @fazio04] is too high to enable unique identification of SMG counterparts in general [see discussions by @pope06]. The new knowledge obtained from Figures \[fig:colourcolourplot\] & \[fig:colourcolourplot2\] that SMGs as a population have red IR colours, similar to AGNs and distinct from the foreground field population, offers the exciting possibility of identifying or significantly narrowing down the counterpart candidates using the IRAC data alone.
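The scale of this confusion problem can be illustrated with a short Poisson estimate using the numbers quoted above (a source density of $\sim60$ arcmin$^{-2}$ and a 6 arcsec search radius); the calculation below is ours and purely illustrative:

```python
import math

def expected_chance_sources(density_per_arcmin2, radius_arcsec):
    """Expected number of unrelated IRAC sources inside a circular
    error region of the given radius."""
    radius_arcmin = radius_arcsec / 60.0
    area_arcmin2 = math.pi * radius_arcmin ** 2
    return density_per_arcmin2 * area_arcmin2

# Values quoted in the text: ~60 sources/arcmin^2, 6 arcsec positional error
mu = expected_chance_sources(60.0, 6.0)
p_contaminated = 1.0 - math.exp(-mu)  # Poisson P(at least one chance source)
```

With these numbers, roughly two unrelated IRAC sources are expected per error circle, and most error circles contain at least one, which is why positional coincidence alone cannot single out the true counterpart.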
The distribution of SMG IRAC colours and the starburst model SED colour tracks in Figures \[fig:colourcolourplot\] & \[fig:colourcolourplot2\] extends slightly beyond the AGN selection boundaries proposed by @lacy04 and @stern05. We propose a new set of SMG counterpart candidate selection criteria for the $S_{5.8}/S_{3.6}$ vs. $S_{8.0}/S_{4.5}$ colour–colour diagram, expanded to include all SMGs and the model SED tracks as shown with long-dashed lines in Figure \[fig:colourcolourplot\]. The new proposed criteria are: $$\begin{aligned}
log(S_{8.0}/S_{4.5}) > -0.3 &\wedge &
log(S_{5.8}/S_{3.6}) > -0.3 \nonumber \\
& \wedge &
log(S_{8.0}/S_{4.5}) < log(S_{5.8}/S_{3.6}) + 0.4\end{aligned}$$ where $\wedge$ is the logical AND operator. Equivalent criteria for the Stern et al. colour–colour diagram (Fig. \[fig:colourcolourplot2\]) in AB magnitudes are: $$\begin{aligned}
([5.8] - [8.0]) &>& -0.4 \nonumber \\
~~~~~~& \wedge & ([3.6] - [4.5]) > 0.036 \times ([5.8] - [8.0]) - 0.318 \nonumber \\
~~~~~~& \wedge & ([3.6] - [4.5]) > 2.5 \times ([5.8] - [8.0]) - 2.5\end{aligned}$$ The relative merits of these selection criteria are discussed further below.
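For practical catalogue work, Eqs. 1 and 2 are easy to transcribe into code. The sketch below is our own transcription (function names and example values are ours); fluxes can be in any common unit since only ratios enter the cuts:

```python
import math

def smg_candidate_flux(s36, s45, s58, s80):
    """Eq. 1: SMG candidate cuts in the S5.8/S3.6 vs. S8.0/S4.5 plane.
    Arguments are IRAC fluxes in a common unit (e.g. microJy)."""
    r84 = math.log10(s80 / s45)
    r53 = math.log10(s58 / s36)
    return r84 > -0.3 and r53 > -0.3 and r84 < r53 + 0.4

def smg_candidate_mag(m36, m45, m58, m80):
    """Eq. 2: the equivalent cuts in AB magnitudes
    ([5.8]-[8.0] vs. [3.6]-[4.5])."""
    c58_80 = m58 - m80
    c36_45 = m36 - m45
    return (c58_80 > -0.4
            and c36_45 > 0.036 * c58_80 - 0.318
            and c36_45 > 2.5 * c58_80 - 2.5)
```

A flat-spectrum source (equal flux in all four bands) passes both sets of cuts, while a source that is strongly blue between 4.5 and 8.0 $\mu$m fails Eq. 1, as expected from Figure \[fig:colourcolourplot\].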
To evaluate the utility of these new IRAC colour selection criteria for the SMG counterpart candidate selection, we examine the colours of the IRAC sources in the fields surrounding the nine SMGs securely identified through submm interferometric imaging [@iono06; @wang07; @younger07; @younger08]. The 20 arcsec $\times$ 20 arcsec IRAC 3.6 $\mu$m images (not shown) centered on the original submillimetre source coordinates contain up to a dozen IRAC sources each, demonstrating the difficulty of using the IRAC source catalog alone for the SMG counterpart identification. There are a total of 20 IRAC sources located within a 6 arcsec radius (the typical positional accuracy) of the nine nominal SMG positions; their colours are shown in Figures \[fig:colourcolourplot\]b & \[fig:colourcolourplot2\]b. Colours of the nine SMGs, shown as filled circles, fall within our proposed SMG colour boundaries in both plots. These SMGs were not used in defining the IRAC colour selection criteria, and they thus represent an independent and successful test of our method. The remaining IRAC sources have colours consistent with the background sources, and about half of them fall within the new SMG candidate identification boundaries. The new colour selection technique does not completely resolve the confusion problem. However, the situation is now greatly improved, as the candidate counterparts are reduced to a single unique candidate or two potential candidates, making expensive follow-up observations more efficient. Other identification methods such as deep radio or MIPS 24 $\mu$m imaging do not greatly improve the situation in these test cases since many of these SMGs are undetected at other wavelengths. Three of the securely identified sources have IRAC colours typical of $z\sim2$ starburst systems while the remaining six cluster near the power-law AGNs (see Fig. \[fig:colourcolourplot\]b & \[fig:colourcolourplot2\]b).
In addition to being undetected in the radio, these six SMGs with AGN-like colour are undetected in the [[*S*pitzer]{}]{} MIPS 24 $\mu$m band, ruling out the presence of a power-law AGN component that extends into the mid-IR bands, as seen for the QSO spectrum shown in Figure \[fig:SEDs\]. Therefore these SMGs with very red IRAC colours may be the youngest, highly-obscured starburst systems at very high redshifts [$z\ga3$; also see the discussions in @younger07].
The IRAC colour selection method described here is qualitatively similar to the IRAC 8.0 $\mu$m band selection method used to identify the SCUBA galaxies in the CUDSS 14 hour field by @ashby06. Noting the red colour of the SMGs and reduced foreground confusion compared with the shorter-wavelength IRAC bands, Ashby et al. argued that 8.0 $\mu$m selection offers a better means for identifying SMG counterparts than near-IR or optical selection. Their 8.0 $\mu$m selection resulted in a different counterpart from the optical or $K$-band selection methods in ten out of 17 cases, casting some doubt on the earlier identifications of SMGs. The main difference of the new IRAC colour selection method outlined here is that we fully quantify the red colour of the SMG population using Eqs. 1 & 2 for easy implementation and calibrate them using a large, well-studied sample in conjunction with empirical and theoretical dusty starburst SEDs.
Discussion and Concluding Remarks {#sec:summary}
=================================
High-redshift submillimetre-bright galaxies identified by blank field surveys at millimetre and submillimetre wavelengths appear in the region of the IRAC colour–colour diagrams previously identified as the domain of luminous AGNs by @lacy04 and @stern05. Rather than interpreting this as a sign that the majority of the SMGs are powered by a luminous AGN, we have shown using empirical and theoretically motivated dusty starburst SED models that their IR colours can be interpreted as those of a power-law continuum associated with hot dust heated by young ($\la 100$ Myr old), extreme [*starbursts*]{} at $z>2$. These SMGs fall along the branch of galaxies extending from the blue photospheric peak toward the red power-law region in the IRAC colour–colour plot by @lacy04, and our analysis suggests that this branch is a heterogeneous ensemble of power-law AGNs and dust-obscured starbursts whose continuum near-IR luminosity exceeds that of the photospheric emission from cool stars. In fact, our analysis demonstrates that the popular red IRAC colour selection methods for AGN identification, such as the criteria by Lacy et al. and Stern et al., should be used with caution because of the significant starburst contribution expected. While there is some overlap between SMGs and AGNs in these IRAC colour–colour plots, SMGs are systematically bluer ($\langle[4.5]-[8.0]\rangle\ga 1$ mag), consistent with 30 to 70 Myr old starbursts observed at redshifts between $z\sim1$ and $z\sim5$.
Our examination of the model dusty starburst SEDs of @efstathiou00 shows that the main physical parameters that determine the observed IRAC colours are starburst age and redshift, while extinction (column density) plays a less important role. For the models examined in Figures \[fig:colourcolourplot\] and \[fig:colourcolourplot2\], starburst age and redshift are nearly degenerate. The use of IRAC colours as a redshift indicator was first proposed by @simpson99 and @sawicki02, and various empirical photometric redshift relations have been proposed recently with estimated accuracies of $\delta z/(1+z)=0.1-0.2$ [see @pope06; @wilson08b]. This photometric redshift method may not work well for galaxies with a red power-law continuum and/or a weak 1.6 $\mu$m photospheric feature, which are more common at $z\ga2$. The degeneracy between the starburst age and redshift found with these theoretical model SEDs suggests that a careful calibration using a large sample of SMGs is needed in order to quantify fully the systematic uncertainty of this method.
A detailed comparison of the IRAC colour–colour plots and SEDs shows that AGNs and SMGs are distinct from each other due to intrinsic differences in their energy source and dust distribution. SMGs as a group have a flatter SED (bluer by $\langle[4.5]-[8.0]\rangle\ga 1$ mag) in comparison with AGNs. Only 20% of the objects overlap in the colour–colour plots shown in Figures \[fig:colourcolourplot\] & \[fig:colourcolourplot2\], and this suggests that SMGs powered by an AGN are not common. In the context of the ULIRG-QSO evolutionary scenario [@sanders88; @norman88], the small overlap between the AGN and SMG populations may indicate that the transition period is much shorter than the duration of the SMG or the IR AGN phase.
The red IR colours of the SMGs are distinct from the colours of the majority of the foreground IRAC sources, which are both early- and late-type galaxies at $z\la2$. We show that colour selection criteria similar to those of AGNs proposed by Lacy et al. and Stern et al. can be used to pinpoint the IRAC counterpart to an SMG uniquely or to pare down the candidates to just a few, improving the efficiency of expensive follow-up observations at other wavelengths. The lack of high-quality multi-wavelength data for a large sample of SMGs is currently the primary limiting factor that prevents a better understanding of the SMG phenomenon. The IRAC colour selection method discussed here appears to work even in cases when the radio and MIPS 24 $\mu$m counterparts are too faint to be detected at the present sensitivity, and an unbiased investigation of the entire SMG population may become possible using this new identification method.
There are some important advantages and disadvantages to using the ($S_{5.8}/S_{3.6}$ vs. $S_{8.0}/S_{4.5}$) colour–colour plot (Figure \[fig:colourcolourplot\]) versus the (\[5.8\]–\[8.0\] vs. \[3.6\]–\[4.5\]) colour–colour plot (Figure \[fig:colourcolourplot2\]) for SMG candidate identification. The former uses IR colours with a longer baseline in wavelength, making the colour measurements more robust and the model colour tracks better behaved, as shown in Figure \[fig:colourcolourplot\]. On the other hand, the instrumental sensitivity of the IRAC 3.6 and 4.5 $\mu$m bands is a factor of a few better than that of the 5.8 and 8.0 $\mu$m bands, and the 5.8 and 8.0 $\mu$m channels will be inoperable during the upcoming warm [[*S*pitzer]{}]{} operation. The model colour tracks shown in Figure \[fig:colourcolourplot2\] are more irregular because of shorter wavelength baselines in these colours, but the \[3.6\]–\[4.5\] colour is the best determined quantity among all IRAC colour combinations.
A far simpler and potentially more robust alternative SMG candidate selection criterion is \[3.6\]–\[4.5\] $>-0.2$. This single colour selection method does only a slightly poorer job of rejecting foreground sources than the full colour selection criteria described by Eq. 2. Given the superior sensitivity of the IRAC 3.6 and 4.5 $\mu$m bands and their availability during the warm [[*S*pitzer]{}]{} mission, this simpler colour selection criterion may be the more effective method for identifying SMG counterpart candidates in the longer term. Alternatively, colours of IRAC to optical or near-IR bands or to the MIPS 24 $\mu$m band have also been proposed previously for identifying AGN activity [see @huang04; @ashby06; @webb06], and similar analysis may also prove fruitful for SMG candidate identification.
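Because all IRAC bands share the same AB zero point, the single-colour cut reduces to a flux-ratio threshold. A minimal sketch of our own, using the standard 3631 Jy AB zero point (function names are ours):

```python
import math

AB_ZERO_JY = 3631.0  # standard AB magnitude zero point

def ab_mag(flux_jy):
    """AB magnitude for a flux density in Jy."""
    return -2.5 * math.log10(flux_jy / AB_ZERO_JY)

def simple_smg_cut(s36_jy, s45_jy):
    """The single-colour criterion [3.6] - [4.5] > -0.2.
    Equivalent to S4.5/S3.6 > 10**(-0.2/2.5) ~ 0.83."""
    return ab_mag(s36_jy) - ab_mag(s45_jy) > -0.2
```

Since $[3.6]-[4.5] = 2.5\,\log_{10}(S_{4.5}/S_{3.6})$, the cut accepts any source whose 4.5 $\mu$m flux is at least $\sim$83 per cent of its 3.6 $\mu$m flux.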
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank M. Polletta for providing us with the library of empirical galaxy SED templates used in Figure \[fig:IRACSEDs\]. This work is based on observations made with the [*Spitzer*]{} Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under NASA contract 1407. This work is also partially funded by NSF Grant AST 05-40852 to the Five College Radio Astronomy Observatory.
[99]{} Alexander, D. M. et al. 2003, [AJ]{}, 125, 383 Aretxaga, I., et al. 2003, [MNRAS]{}, 342, 759 Aretxaga, I. et al. 2007, [MNRAS]{}, 379, 1571 Ashby, M.L.N., Dye, S., Huang, J.-S., Eales, S., Willner, S.P., et al. 2006, [ApJ]{}, 644, 778 Barger, A.J., et al. 1998, Nature, 394, 248 Barmby, P. et al. 2006, [ApJ]{}, 642, 126 Bertoldi, F., Carilli, C., Aravena, M., Schinnerer, E., Voss, H. et al. 2007, [ApJS]{}, 172, 132 Blain, A.W., Smail, I., Ivison, R.J., Kneib, J.-P., Frayer, D.T. 2002, [*Physics Reports*]{}, 369, 111 Chakrabarti, S., & Whitney, B. 2008, [ApJ]{}, submitted (astro-ph/0711.4361) Chapman, S.C., Windhorst, R., Odewahn, S., Yan, H., Conselice, C. 2003, [ApJ]{}, 599, 92 Chapman, S.C., Blain, A.W., Smail, I., Ivison, R.J. 2005, [ApJ]{}, 622, 772 Dannerbauer, D., Walter, F., & Morrison, G. 2008, [ApJ]{}, 673, L127 de Grijp, M. H. K., Miley, G. K., Lub, J., de Jong, T. 1985, [Nature]{}, 314, 240 Donley, J. L., Rieke, G. H., Pérez-González, P. G., Rigby, J. R., Alonso-Herrero, A. 2007, [ApJ]{}, 660, 167 Eales, S., Lilly, S., Gear, W., Dunne, L., Bond, J.R., et al. 1999, [ApJ]{}, 515, 518 Efstathiou, A., Rowan-Robinson, M., Siebenmorgan, R. 2000, [MNRAS]{}, 313, 734 Efstathou, A., & Rowan-Robinson, M. 2003, [MNRAS]{}, 343, 322 Egami, E., Dole, H., Huang, J.-S., Perez-Gonzalez, P., Le Floc’h, E., et al. 2004, [ApJS]{}, 154, 130 Farrah, D., et al. 2003, [MNRAS]{}, 343, 585 Fazio, G. G. et al. 2004, [ApJS]{}, 154, 39 Ferrarese, L., Merritt, D., 2000, [ApJ]{}, 539, 9 Gebhardt, K., Bender, R., Bower, G., Dressler, A., Faber, S. M. et al. 2000, [ApJ]{}, 539, 13 Greve, T.R., et al. 2004, [MNRAS]{}, 354, 779 Greve, T.R., et al. 2005, [MNRAS]{}, 359, 1165 Greve, T. R., et al. 2007, [MNRAS]{}, 382, 48 Groves, B., et al. 2008, [ApJS]{}, in press (astro-ph/0712.1824) Huang, J.-S., et al. 2004, [ApJS]{}, 154, 44 Hughes, D.H., et al. 1998, Nature, 394, 241 Indebetouw, R., Mathis, J. S., Babler, B. L., Meade, M. R., Watson, C. et al. 
2007, [ApJ]{}, 619, 931 Iono, D., Peck, A.B., Pope, A., Borys, C., Scott, D., et al. 2006, [ApJ]{}, 640, L1 Ivison, R. J. et al. 2002, [MNRAS]{}, 337, 1 Ivison, R. J. et al. 2005, [MNRAS]{}, 364, 1025 Lacy, M., et al. 2004, [ApJS]{}, 154, 166 Laurent, G. T., Aguirre, J. E., Glenn, J., Ade, P. A. R., Bock, J. J., et al. 2006, [ApJ]{}, 623, 742 Magorrian, J., Tremaine, S., Richstone, D., Bender, R., Bower, G. et al. 1998, [AJ]{}, 115, 2285 Martínez-Sansigre, A., Lacy, M., Sajina, A., Rawlings, S. 2008, [ApJ]{}, 674, 676 Menéndez-Delmestre, K., et al. 2007, [ApJ]{}, 655, L65 Mortier, A.M.J., Serjeant, S., Dunlop, J.S., Scott, S.E., Ade, P., et al. 2005, [MNRAS]{}, 363, 563 Norman, C., Scoville, N. Z., 1988, [ApJ]{}, 332, 124 Polletta, M., et al. 2007, [ApJ]{}, 663, 81 Pope, A., Scott, Dickinson, M., Chary, R.R., Morrison, G., D., et al. 2006, [MNRAS]{}, 370, 1185 Pope, A., Chary, R.-R., Alexander, D. M., Armus, L., Dickinson, M., et al. 2008, [ApJ]{}, 675, 1171 Rigby, J. R., Marcillac, D., Egami, E., Rieke, G. H., Richard, J., et al. 2008, [ApJ]{}, 675, 262 Román-Zúniga, C. G., Lada, C. J., Muench, A., Alves, J. F., 2007, [ApJ]{}, 664, 357 Sawicki, M. 2002, [AJ]{}, 124, 3050 Sanders, D.B., Soifer, B. T., Elias, J. H., Madore, B. F., Matthews, K., Neugebauer, G., Scoville, N. Z., 1988, [ApJ]{}, 325, 74 Sanders, D.B., Salvato, M., Aussel, H., Ilbert, O., Scoville, N. et al. 2007, [ApJS]{}, 86 Schneider, D. P. et al. 2003, [AJ]{}, 126, 2579 Scott, S. E., Fox, M. J., Dunlop, J. S., Serjeant, S., Peacock, J. A., et al. 2002, [MNRAS]{}, 331, 817 Scott, K. S., et al. 2008, [MNRAS]{}, in press (astro-ph/0801.2779) Scoville, N. Z., et al. 2007, [ApJS]{}, 172, 1 Siebenmorgen, R., Krügel, E. 2007, [A$\&$A]{}, 461, 445 Silva, L., Granato, G. L., Bressan, A., Danese, L. 1998, [ApJ]{}, 509, 103 Simpson, C., Eisenhardt, P. 1999, [PASP]{}, 111, 691 Smail, I., Ivison, R. J., & Blain, A. W. 1997, ApJ, 490, L5 Stern, D., Eisenhardt, P., Gorjian, V., Kochanek, C. S., Caldwell, N., et al. 
2005, [ApJ]{}, 631, 163 Tacconi, L.J., Neri, R., Chapman, S.C., Genzel, R., Smail, I., et al. 2006, [ApJ]{}, 640, 228 Valiante, E., Lutz, D., Sturm, E., Genzel, R., Tacconi, L. J., et al. 2007, [ApJ]{}, 660, 1060 Vega, O., Clemens, M. S., Bressan, A., Granato, G. L., Silva, L., Panuzzo, P. 2008, [A$\&$A]{}, submitted (astro-ph/0712.1202) Wang, W.-H., Cowie, L.L., Barger, A.J. 2004, [ApJ]{}, 613, 655 Wang, W.-H., Cowie, L.L., Barger, A.J. 2006, [ApJ]{}, 647, 74 Wang, W.-H., et al. 2007, [ApJ]{}, 670, L89 Webb, T. M. A., et al. 2006, [ApJ]{}, 636, L17 Wilson, G. W., et al. 2008a, [MNRAS]{}, in press (astro-ph/0801.2783) Wilson, G. W., et al. 2008b, [MNRAS]{}, submitted (astro-ph/0803.3462) Younger, J. D., Fazio, G. G., Huang, J.-S., Yun, M. S., Wilson, G. W. et al. 2007, [ApJ]{}, 671, 1531 Younger, J. D., et al. 2008, [MNRAS]{}, in press (astro-ph/0801.2764) Yun, M.S., Reddy, N., & Condon, J.J. 2001, [ApJ]{}, 554, 803 Yun, M.S., & Carilli, C.L. 2002, [ApJ]{}, 568, 88
\[lastpage\]
[^1]: E-mail: myun@astro.umass.edu
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'The International Large Detector (ILD) is one of the proposed detector concepts for the International Linear Collider (ILC). The work on the ILD machine-detector interface (MDI) concentrates on the optimisation of the experimental area design for the operation and maintenance of the detector. The ILC will use a push-pull system to allow the operation of two detectors at one interaction region. The special requirements of this system impose technical constraints on the interaction region and on the design of both detectors.'
author:
- |
Karsten Buesser\
Deutsches Elektronen-Synchrotron DESY\
Notkestrasse 85, 22607 Hamburg, Germany
bibliography:
- 'LCWS11\_buesser.bib'
title: |
ILD Machine-Detector Interface\
and Experimental Hall Issues
---
=1
Introduction
============
The Machine-Detector Interface (MDI) work at the International Linear Collider (ILC) [@Brau:2007zza; @Elsen:2011zz] covers all aspects that are of common concern to the detectors and the machine. This comprises topics like the mechanical integration of the detectors and the machine, beam-induced backgrounds, common instrumentation for beam diagnostics (polarisation, beam energy), and common services. Recently, the collaborative work between both detector concepts, ILD [@Group:2010eu] and SiD [@Aihara:2009ad], and the ILC machine groups has concentrated on the definition and the engineering design of the infrastructures in the experimental area.
The ILC push-pull scheme
========================
Unlike in a storage ring, the total integrated luminosity of a linear collider does not scale with the number of interaction regions. The violent beam-beam interaction degrades the beam quality so severely that each bunch of particles can be used only once and is disposed of in the beam dump afterwards. Nevertheless, there is broad consensus that two complementary detectors, run by two independent collaborations, are mandatory to exploit the benefits of healthy competition and to allow for independent cross-checks of the measurements. As the beam delivery system of the ILC is complex and rather expensive, a duplication of the interaction region is excluded for economic reasons. The ILC design therefore foresees two detectors that share one interaction region in a push-pull operation scheme: one detector takes data while the other waits in the nearby maintenance position. On a regular schedule, the data-taking detector is pushed laterally out of the interaction region while the other detector is pulled in. As the data-taking intervals of each experiment should be short enough to avoid a potential discovery by one detector alone, the transition time for the exchange of the detectors needs to be short, i.e. of the order of one day, to keep the total integrated luminosity of the ILC high.
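The luminosity cost of such exchanges follows from simple duty-cycle arithmetic. The run-period lengths in the sketch below are illustrative assumptions of ours, not ILC parameters:

```python
def fraction_lost(transition_days, run_days_per_period):
    """Fraction of beam time spent on detector exchanges when the
    detectors swap after every run period."""
    return transition_days / (run_days_per_period + transition_days)

# Illustrative only: a one-day exchange after 30-day or 10-day run periods
loss_monthly = fraction_lost(1.0, 30.0)   # about 3 per cent
loss_frequent = fraction_lost(1.0, 10.0)  # about 9 per cent
```

Even with frequent swaps, a one-day transition keeps the loss at the per-cent level, whereas a one-week transition between 10-day run periods would sacrifice more than a third of the beam time.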
The technical design of the push-pull system is a novel and challenging engineering task. Of importance is the definition of the interfaces and boundary conditions that are needed to allow for the friendly co-existence of the two experiments and the accelerator. The top-level interfaces and requirements have been laid down in a common publication of the detector concepts and the ILC machine group [@Parker:2009zz]. Major requirements that needed to be defined comprise, among others, geometrical boundary conditions, vibration tolerances, alignment requirements, vacuum conditions, and the radiation and magnetic environment.
One important subject was the conceptual development of the detector transport system for the push-pull operations. The recently agreed-upon scheme foresees a platform-based movement system in which each detector is placed on a platform of reinforced concrete that runs on a suitable transportation system, e.g. a rail-based system or air pads. Of major concern were possible amplifications of ground motion in this system; detailed simulations, cross-checked with measurements at existing structures, showed that these effects can be controlled [@Oriunno:2011]. The platform system has therefore been adopted as the baseline for the experimental area design. Figure \[Fig:Push-pull\] shows a possible configuration of the ILC experimental hall with both detectors on a platform-based push-pull system.
![The push-pull system at the ILC: SiD and ILD in the experimental hall (left); ILD on the transport platform (right) [@Oriunno:2011].[]{data-label="Fig:Push-pull"}](ILC_Hall.pdf "fig:"){width="0.405\columnwidth"} ![The push-pull system at the ILC: SiD and ILD in the experimental hall (left); ILD on the transport platform (right) [@Oriunno:2011].[]{data-label="Fig:Push-pull"}](ILD_Platform.pdf "fig:"){width="0.6\columnwidth"}
The ILD detector
================
The International Large Detector (ILD) is one of the two detector concepts under study for the ILC. It follows the design of a multi-purpose high-energy physics detector and is optimised for the high-precision measurements anticipated at the ILC [@Group:2010eu]. The size of ILD is roughly 16 m (width) $\times$ 16 m (height) $\times$ 14 m (length); it has a mass of $\approx$ 15.5 kt.
Assembly and maintenance
------------------------
The mechanical design of the ILD detector is inspired by the CMS experiment at the LHC. The main parts are the five rings of the iron yoke, three in the barrel part and two end caps. The detector will be pre-assembled and tested in a surface building. The large assembly pieces will then be lowered into the experimental hall through a large vertical access shaft. The dimensions of the shaft and of the (temporary) crane for these operations are given by the masses and dimensions of the biggest assembly piece. In the case of ILD that would be the central yoke ring that carries the solenoid coil. A shaft diameter of $\approx$ 18 m and a hoist for $\approx$ 3500 t mass is needed for this.
The five yoke rings are mounted on air pads and can therefore be moved easily within the underground experimental hall. In the beam position and during the push-pull movement, the detector is mounted on the transport platform. In the maintenance position, the detector can be opened and the yoke rings can move independently away from the platform. Figure \[Fig:ILD\_opening\] shows the detector in the beam position and in the maintenance position. The hall layout needs to provide enough space in the maintenance position to allow the complete opening of the detector rings. During maintenance, access to the inner detector parts and the removal of large detector components (e.g. the time projection chamber) must be possible.
![ILD detector opened at the beam line (top) and in the maintenance position (cut view, bottom) [@Group:2010eu].[]{data-label="Fig:ILD_opening"}](opening_beamline.pdf){width="0.65\columnwidth"}
![ILD detector opened at the beam line (top) and in the maintenance position (cut view, bottom) [@Group:2010eu].[]{data-label="Fig:ILD_opening"}](opening_garage.pdf){width="0.65\columnwidth"}
### Modified assembly scheme {#Sec:Japan}
Possible ILC sites in Asia (Japan) differ from the other reference sites in that they are situated in mountainous regions where vertical access to the experimental hall might not be available. Instead, horizontal tunnels of $\approx$ 1 km length might serve as access ways into the underground experimental area. As the tunnel diameters and the transport capacity within the tunnels are limited for technical and economic reasons, a modified assembly scheme for the ILD detector is under investigation for these sites. In these cases, it is still foreseen to pre-assemble the detector parts on the surface. The yoke rings, however, are too big and heavy and can therefore only be assembled in the underground hall. The yoke would be transported in segments into the hall, where enough space for the yoke assembly and the necessary tools needs to be provided. The largest part of the ILD detector that cannot be divided and therefore needs to be transported in one piece is the superconducting solenoid coil. Its outer diameter of $\approx$ 8.7 m puts a stringent lower limit on the diameter of the access tunnel. A considerable effort has recently been started to define the requirements on the detector assembly for the mountain sites.
Integration with the machine
----------------------------
### Final focus magnets
The interaction region of ILD is designed to fulfil the requirements of the ILC machine and the needs of the detector at the same time. As the allowed focal length range of the inner final focus quadrupoles (QD0) for the ILC ($3.5$ m $\leq L^* \leq 4.5$ m) is smaller than the detector size, the QD0 magnets of the final lenses need to be supported by the detector itself. As a consequence, SiD and ILD will each have their own pair of QD0 magnets that move together with the detector during push-pull operations. In contrast, the QF1 magnets of the final lenses with a focal length of $L^*=9.5$ m are not supported by the detectors and stay on the beam line during detector movements. A set of vacuum valves between the QD0 and the QF1 magnets defines the break point for the push-pull operations. The biggest concerns for the QD0 support systems are the alignment and the protection against ground motion vibrations. The limit on the vibration amplitudes is set at 50 nm within the 1 ms long ILC bunch train [@Parker:2009zz].
![Support system of the QD0 magnets in ILD. The inner parts of the detector and the end caps are not shown [@Group:2010eu].[]{data-label="Fig:qd0support"}](qd0support.pdf){width="0.65\columnwidth"}
Due to these tight requirements, the support of the magnets in the detector is of special importance. ILD has chosen a design where the magnets are supported from pillars that are standing directly on the transport platform. In the detector, the magnets are supported by a system of tie rods from the cryostat of the solenoid coil. This design de-couples the detector end caps from the QD0 magnets and allows a limited opening of the end caps also in the beam position without the need to break the machine vacuum (c.f. figure \[Fig:ILD\_opening\]). In addition, the QD0 magnets are coupled via the pillar directly to the platform and limit in that way the number of other vibration sources. Simulations taking into account realistic ground motion spectra for different sample sites have been done to understand the vibration amplification in the QD0 support system [@Yamaoka:2010]. These studies show, that with the exception of very noisy sites, the requirements for the QD0 magnets are fulfilled with large safety margins. Even if the additional amplification characteristics of the platform (c.f. [@Oriunno:2011]) are taken into account, the total integrated vibration amplitudes are in the order of less than 10 nm for frequencies above 5 Hz.
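Integrated amplitudes like the one quoted above are obtained by integrating a displacement power spectral density (PSD) above the cut-off frequency, $x_{\mathrm{rms}}(f>f_0)=\sqrt{\int_{f_0}^{\infty} P(f)\,df}$. The sketch below uses an assumed power-law PSD; the amplitude and the $f^{-4}$ slope are illustrative stand-ins for measured site spectra, not values from [@Yamaoka:2010]:

```python
import math

def rms_above(f0_hz, amp, slope):
    """RMS displacement integrated above f0 for a power-law displacement
    PSD P(f) = amp * f**slope in m^2/Hz (requires slope < -1 to converge)."""
    if slope >= -1:
        raise ValueError("integral diverges for slope >= -1")
    integral = -amp * f0_hz ** (slope + 1) / (slope + 1)  # m^2
    return math.sqrt(integral)  # metres

# Illustrative parameters chosen to give a few-nanometre RMS above 5 Hz
x_rms = rms_above(5.0, 1e-15, -4.0)
```

Raising the cut-off frequency lowers the integrated amplitude, which is why quoting the band above 5 Hz matters when comparing against the 50 nm tolerance.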
Also the proper alignment of the QD0 magnets with respect to the axis that is defined by the QF1 quadrupoles is of crucial importance. While the alignment accuracy of the detector axis after the movement into the beamline is moderate (horizontal: $\pm$ 1 mm and $\pm$ 100 $\mu$rad), the requirements for the initial alignment of the quadrupoles are much tighter: $\pm$ 50 $\mu$m and $\pm$ 20 $\mu$rad. An alignment system that comprises an independent mover system for the magnets and frequency scanning interferometers is part of the detector design.
### Interaction region
The central interaction region of ILD comprises the beam pipe, the surrounding silicon detectors, the forward calorimeters and the interface to the QD0 magnets (c.f. Figure \[Fig:interaction\_region\]).
![The interaction region of ILD [@Group:2010eu].[]{data-label="Fig:interaction_region"}](ir_blowup_comm.pdf){width="0.95\columnwidth"}
The most delicate part of this region is the very light beam pipe made from beryllium, which is surrounded by the vertex detector and the intermediate silicon tracking devices. A carbon-fibre reinforced cylindrical structure will form the mechanical support for these elements. This tube is attached to the inner field cage of the surrounding time projection chamber (not shown in the figure). As the horizontal alignment tolerance of the detector axis after push-pull operations is $\pm$1 mm, an adjustment system is needed to re-align the tube structure with the beam pipe and the inner tracking detectors when necessary. This is especially important to keep the stay-clear distances to the tracks of the beam-induced background particles within the beam pipe. The beam pipe opens conically away from the interaction point to allow enough space for the beam-induced background, most importantly the electron-positron pairs from beamstrahlung. The shape of the beam pipe results in a rather large volume that needs to be kept evacuated by means of vacuum pumps that sit on both sides as far as 3.3 m away from the interaction point. Simulations show, however, that the vacuum requirements for the ILC can be met [@Group:2010eu].
The forward calorimeters have a two-fold purpose. They enlarge the hermeticity of the detector for physics analyses, but they also serve as beam diagnostic devices by measuring the patterns from the beamstrahlung pairs [@Group:2010eu].
Detector services {#Sec:Services}
-----------------
A number of service and supply systems needs to be established for the running and the maintenance of ILD. The arrangement of the services depends on the technical requirements and can be sorted according to their proximity to the detector. Primary services should be located on the surface above the experimental hall. They usually comprise large and sometimes noisy facilities like water chillers, high voltage transformers, auxiliary power supplies (Diesel generators), Helium storage and compressors, and gas storage systems. Secondary services will be placed into the underground cavern in dedicated service areas. Examples are cooling water distributions, power supplies, gas mixture systems, power converters, and parts of the cryogenic system for the detector (He liquefier and re-heater, control system). As the detector will not be disconnected during the push-pull operations, all supplies that go directly to the detector will be run in flexible cable chains. As the supply of cryogenic Helium needs to be maintained also during the detector movement, flexible cryogenic lines are foreseen. The detector will carry those services on-board that need to stay close to or directly at the detector. Examples are the He system for the QD0 magnets, on-board electronics and the electronic containers.
Requirements for the experimental area
======================================
Underground hall design
-----------------------
![Conceptual design of the underground facilities for ILD. The detector is opened in the maintenance position, the crane coverage is shown [@Sinram:2011].[]{data-label="Fig:ILD_underground"}](ILD_3D_white.pdf){width="1.0\columnwidth"}
The discussions between both detector concepts and the civil facility experts of the ILC project are converging on an underground hall design that follows a z-shape floor layout as indicated in figure \[Fig:Push-pull\] (left). The common interaction point is in the middle of the hall, the detectors move in and out of the beam position on their transport platforms. Alcoves in the maintenance positions allow for lateral space that is needed to open the detectors. Figure \[Fig:ILD\_underground\] shows a recently developed design for the maintenance position for ILD. The detector is shown in fully opened position that allows for the removal of the large detector parts. The biggest element that might need to be removed from the detector (though not in routine maintenance periods) is the superconducting solenoid. Enough space is foreseen to manoeuvre the parts of the detector in the hall and bring them safely to the vertical access shafts. In addition, space for the detector services (c.f. section \[Sec:Services\]) is available in this design.
The size and location of the vertical access shafts are still under study. The preferred solution is to foresee one central big shaft directly above the interaction point with a diameter of $\approx$ 18 m. This shaft would be used during the assembly of both detectors where the big parts are pre-assembled on the surface and then lowered through the big shaft directly onto the respective transport platform. Two smaller diameter shafts ($\approx$ 10 m) are needed in the maintenance positions to allow access from the surface while one detector is at the beam position and blocks the access to the big shaft. Additional smaller shafts for elevators and services might be needed as well. An overall optimisation of the layout and number of the shafts with respect to the functionality and the cost is under study.
As the yoke rings will be moved on air pads within the hall, the crane covering the maintenance area needs to have a modest capacity of preferably 2 $\times$ 40 t. However, a temporary hoist with a capacity of up to 3500 t is needed on the surface over the main access shaft to lower the big detector parts during the primary assembly.\
[*Note:*]{} the underground hall design discussed here would be chosen for the ILC sites that allow vertical access via relatively short shafts. This needs to be modified significantly for the Asian ILC reference sites (c.f. section \[Sec:Japan\]). A detailed design following these very different requirements is under study at this time.
Shielding
---------
### Radiation
![Design of the beamline shielding compatible with two detectors of different sizes [@Elsen:2011zz].[]{data-label="Fig:shielding"}](ILD_SiD_shielding.pdf){width="0.75\columnwidth"}
The ILD detector is self-shielding with respect to ionising radiation that stems from maximum credible beam loss scenarios [@Sanami:2009]. Additional shielding in the hall is necessary to fill the gap between the detector and the wall in the beam position. The design of this beamline shielding needs to accommodate both detectors, SiD and ILD, which differ significantly in size. A common ‘pac-man’ design has been developed, where the movable shielding parts are attached to the wall of the detector hall - respectively to the tunnel stubs of the collider - and match to interface pieces that are borne by the experiments (c.f. figure \[Fig:shielding\]).
### Magnetic fields
The magnetic stray fields outside the iron return yoke of the detector need to be small enough to not disturb the other detector during operation or maintenance. A limit for the magnetic fields has been set to 5 mT at a lateral distance of 15 m from the beam line [@Parker:2009zz]. This allows the use of standard iron-based tools at the other detector. The design of the ILD return yoke has been tested carefully for the fringe fields. Figure \[Fig:strayfield\] shows the magnetic fields that have been simulated for a central solenoid field of 4 T.
![Magnetic stray fields from the detector solenoid [@Group:2010eu].[]{data-label="Fig:strayfield"}](strayfield.pdf){width="0.65\columnwidth"}
Summary and outlook
===================
Significant efforts in the worldwide ILC MDI work have been spent to develop an engineering design of the experimental environment for the two planned detectors in the push-pull scheme. While the conceptual design of all relevant infrastructures has been defined by now, the work is concentrating on the finalisation of the engineering specifications that will form the basis of the respective parts of the ILC Technical Design Report and the accompanying detector Detailed Baseline Descriptions that are envisaged to be published by the end of 2012.
Acknowledgments
===============
The work on the ILC Machine-Detector Interface and the design of the detector related conventional facilities is a collaborative effort between the detector concepts and the respective ILC machine working groups. This report includes therefore the efforts of many people within the global ILC endeavour. I am especially grateful for the support of the members of the ILD MDI/Integration Group, the ILC MDI Common Task Group, the ILC Beam Delivery System Group, and the ILC Conventional Facilities Group.
Parts of this work were supported by the Commission of the European Communities, contract 206711 “ILC-HiGrade”, and by the Helmholtz Association, contract HA-101 “Physics at the Terascale”.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'In this work we carry out a theoretical study of the phonon-induced resistivity in twisted double bilayer graphene (TDBG), in which two Bernal-stacked bilayer graphene devices are rotated relative to each other by a small angle $\theta$. We show that at small twist angles ($\theta\sim 1^\circ$) the effective mass of the TDBG system is greatly enhanced, leading to a drastically increased phonon-induced resistivity in the high-temperature limit where phonon scattering leads to a linearly increasing resistivity with increasing temperature. We also discuss possible implications of our theory on superconductivity in such a system, and provide an order of magnitude estimation of the superconducting transition temperature.'
author:
- Xiao Li
- Fengcheng Wu
- 'S. Das Sarma'
bibliography:
- 'Ref-TDBLG.bib'
title: Phonon scattering induced carrier resistivity in twisted double bilayer graphene
---
Introduction
============
Recent experimental discoveries of correlated insulator and superconductivity in twisted bilayer graphene (TBG) [@Cao2018; @Cao2018a; @Yankowitz2019; @Sharpe2019; @Lu2019] have attracted great interest in the community. The fact that the electronic band structure of TBG can become almost flat near the magic angle [@Bistritzer2011] strongly enhances the effect of interactions, making it possible to study novel quantum phases that are otherwise difficult to realize experimentally. Furthermore, the apparent similarities between the phase diagram of TBG and cuprate high-temperature superconductors [@Lee2006] suggest that the study of electron correlations in TBG may provide useful hints for our understanding of the electronic properties in cuprates.
The experimental observation of novel quantum phases in TBG has since stimulated the investigation of other van der Waals heterostructures using the twist angle degree of freedom, including, e.g., the trilayer graphene/h-BN moiré superlattice [@Chen2019; @Chen2019a]. One of the motivations of such studies is to go beyond certain limitations of TBG. For example, although the twist angle offers an unprecedented tuning knob to modify the electron band structure in TBG, it still cannot be changed continuously. To date, properties of TBG are mostly modified by fabricating new devices with different twist angles or by applying hydrostatic pressure [@Carr2018; @Yankowitz2019]. It will thus be advantageous to find a way to modify the band structure of a van der Waals heterostructure continuously near quantum critical points, which will enable a more detailed experimental characterization of the electron correlation effects.
Twisted double bilayer graphene (TDBG) has emerged as a promising platform in this respect, because the band structure of a single Bernal-stacked bilayer graphene [@Neto2009; @McCann2013] can be tuned continuously by an external perpendicular electric field. Consequently, one can expect to adjust the band structure of TDBG continuously by an external electric field. Such a tunability is highly desirable, especially near certain quantum critical points. As a result, TDBG has attracted much attention and rapid experimental [@Shen2019; @Cao2019; @Liu2019] and theoretical [@Lee2019] progress has been made. In particular, the application of an external electric field has indeed given rise to a very rich TDBG phase diagram, including signatures of correlated insulator states as well as superconductivity and possibly ferromagnetism in some cases.
The interesting TDBG physics for small twist angles arises from the same moiré flatband physics dominating the extensively studied TBG phenomena near the magic twist angle. Basically, the moiré potential for small twist angle strongly flattens the relevant graphene bands, leading to very small band velocities (or very large carrier effective masses), which lead to a great enhancement of all interaction phenomena since typically interaction physics is proportional to the carrier effective mass. In the current work, we use a suitable continuum model TDBG band structure to estimate band flattening effects.
In this work, however, we take a different perspective and study the resistivity in TDBG in the high-temperature limit, when phonon scattering will be the dominant mechanism for resistivity. Thus, instead of focusing on the $T=0$ ground state phase diagram, we investigate the ohmic transport properties of the finite temperature effective metallic TDBG phase above the applicable critical temperatures (or the ground state energy gaps) of the symmetry-broken states where TDBG behaves as a metal. Considering rather clean systems, and focusing specifically on the temperature dependence of carrier resistivity, we neglect effects of disorder, impurities, and defects since the main temperature dependence of metallic resistivity arises from phonon scattering effects. We also ignore all electron-electron interaction effects, and only take into account resistive scattering by acoustic phonons.
Specifically, the scattering of electrons by acoustic phonons can be generally divided into two regimes: a low-temperature regime ($T<{T_\text{BG}}$) and a high-temperature one ($T>{T_\text{BG}}$). The characteristic temperature ${T_\text{BG}}$ is known as the Bloch-Grüneisen (BG) temperature, given by $k_B{T_\text{BG}}= 2\hbar {v_\text{ph}}k_F$ [@Hwang2008; @Min2011], where $k_B$ is the Boltzmann constant, ${v_\text{ph}}$ is the phonon velocity and $k_F$ is the Fermi wave vector. (We note that for regular metals where ${T_\text{BG}}$ is very high, or in any situation where ${T_\text{BG}}>T_D$ with $T_D$ being the Debye temperature, the characteristic temperature defining the low and high temperature phonon scattering regimes is $T_D$ and not ${T_\text{BG}}$.) In the high-temperature regime, the electron resistivity will scale as a linear function of temperature $T$, giving rise to $\rho\sim T$, which has been well understood in the context of graphene devices [@Neto2009; @Peres2010; @Sarma2011]. We are interested in this regime because similar to TBG, a wide range of linear-in-$T$ resistivity has been observed in this temperature range in TDBG [@Shen2019; @Cao2019; @Liu2019]. In the context of TBG, such a behavior is often attributed to the putative ‘strange-metal’ phase [@Cao2019a; @Polshyn2019], although it can be compatible with a phonon-scattering mechanism [@Wu2019], albeit with greatly enhanced phonon scattering induced carrier resistivity. In this work, we will theoretically study the phonon-induced resistivity in TDBG in the high-temperature regime, and analyze its compatibility with the experimental observations. In particular, we want to understand whether electron-phonon scattering in TDBG can be a contributing factor for the linear-in-$T$ resistivity seen in recent experiments. Our work can be thought of as the TDBG generalization of Ref. [@Wu2019] or as the small twist angle double-bilayer generalization of Ref. [@Min2011].
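As a quick numerical orientation (not part of the derivation), the Bloch-Grüneisen scale $k_B{T_\text{BG}} = 2\hbar{v_\text{ph}}k_F$ can be evaluated directly. The minimal sketch below assumes a fourfold (spin times valley) degenerate band, so that $k_F = \sqrt{\pi n}$, and uses the monolayer-graphene phonon velocity quoted later in this work:

```python
import math

HBAR = 1.0545718e-34  # J*s
KB = 1.380649e-23     # J/K

def t_bg(n_cm2, v_ph=2.6e4):
    """Bloch-Gruneisen temperature k_B*T_BG = 2*hbar*v_ph*k_F, in K.

    n_cm2: carrier density in cm^-2; v_ph in m/s (2.6e6 cm/s = 2.6e4 m/s).
    Assumes fourfold (spin x valley) degeneracy, so k_F = sqrt(pi * n).
    """
    n_m2 = n_cm2 * 1e4          # convert cm^-2 -> m^-2
    k_f = math.sqrt(math.pi * n_m2)
    return 2 * HBAR * v_ph * k_f / KB
```

For a typical density of $10^{12}\:$cm$^{-2}$ this gives ${T_\text{BG}}$ of order tens of Kelvin, so the linear-in-$T$ regime discussed here sets in well below room temperature; note that ${T_\text{BG}}\propto\sqrt{n}$.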
The goal is to theoretically obtain the acoustic phonon scattering induced carrier resistivity of TDBG as a function of temperature, twist angle, and carrier density.
The structure of the paper is the following. In Section \[Section:Model\] we set up a continuum model for TDBG and demonstrate that its low-energy bands become almost flat (i.e. very large effective mass or equivalently very small effective velocity) at small twist angles ($\theta\sim 1^\circ$). In Section \[Section:Resistivity\] we explain the theoretical framework we use to evaluate phonon-induced resistivity in TDBG and present our numerical results. In Section \[Section:Discussions\] we provide some additional discussions. In particular, we will comment on the possible implications of our theory on superconductivity in TDBG, and provide a rough estimate of the superconducting transition temperature $T_c$ arising from the enhanced electron-phonon coupling. Finally, in Section \[Section:Conclusions\] we provide a brief summary of our results.
Continuum description of a twisted double bilayer graphene \[Section:Model\]
============================================================================
We start by introducing the continuum model of a TDBG. We consider two Bernal-stacked bilayer graphene (BLG) rotated relative to each other by a small angle $\theta$, as shown in Fig. \[Fig:Bandstructure\](a). In particular, we adopt the convention that the top BLG will be rotated by an angle of $\theta/2$, while the bottom one will be rotated by $-\theta/2$. As a result, the continuum description of TDBG near valley $+K$ can be written as $$\begin{aligned}
\mathcal{H}_{+} = {\begin{pmatrix} h_t({\bm{k}}) & T(\bm{r}) \\ T^\dagger(\bm{r}) & h_b({\bm{k}}) \end{pmatrix}}, \label{Eq:TheModel}\end{aligned}$$ which is given in the basis of {$A_1$, $B_1$, $A_2$, $B_2$, $A_3$, $B_3$, $A_4$, $B_4$}. Here $A$ and $B$ denote the two sublattices of BLG and indices $1$-$4$ denote the four atomic layers, with $1$-$2$ belonging to the top BLG and $3$-$4$ to the bottom one.
In the continuum model Eq. , $h_{t(b)}$ denotes the Hamiltonian for the isolated top (bottom) BLG, given by $$\begin{aligned}
h_{\lambda}({\bm{k}}) =
\begin{pmatrix}
0 & \hbar vk_{\lambda}^\ast e^{il_{\lambda}\theta/2} & 0 & 0 \\
\hbar vk_{\lambda}e^{-il_{\lambda}\theta/2} & 0 & \gamma_1 & 0\\
0 & \gamma_1 & 0 & \hbar vk_{\lambda}^\ast e^{il_{\lambda}\theta/2}\\
0 & 0 & \hbar vk_{\lambda}e^{-il_{\lambda}\theta/2} & 0
\end{pmatrix}. \label{Eq:BareBLG}\end{aligned}$$ In the above equation $\lambda = t,b$ denotes the top and bottom BLG, and $l_{t(b)}=+1 (-1)$. In addition, $k_{t(b)}\equiv k_x^{(t(b))} + ik_y^{(t(b))}$ denotes the (complex) in-plane momentum measured from the Brillouin zone corner of the top (bottom) BLG. In addition, $v = \SI{1e6}{m/s}$ is the bare Dirac velocity of monolayer graphene, while $\gamma_1$ is the interlayer coupling energy of an isolated BLG. Note that the value of $\gamma_1$ in the literature varies widely from $\SI{300}{meV}$ to $\SI{400}{meV}$ [@McCann2013]. In this work we will take $\gamma_1 = \SI{380}{meV}$, but note that results do depend on the specific choice of the $\gamma_1$ band parameter.
In TDBG the moiré potential arising from the twist angle between the two BLGs only induces direct coupling between atomic layers $2$ and $3$. As a result, the moiré potential term in the continuum model Eq. can be written as $$\begin{aligned}
T(\bm{r}) =
\begin{pmatrix}
0 & 0 \\ \mathbbm{t}(\bm{r}) & 0
\end{pmatrix}, \end{aligned}$$ where $\mathbbm{t}(\bm{r}) = w \displaystyle\sum_{j=1}^3 T_j e^{i\bm{Q}_j\cdot \bm{r}}$. Here $w\simeq\SI{118}{meV}$ [@Wu2018] (this interlayer tunneling amplitude is also denoted $t_M$ below) and $$\begin{aligned}
T_j = \sigma_0 + \cos(2\pi j/3)\sigma_x + \sin(2\pi j/3)\sigma_y, \; (j = 1, 2, 3), \end{aligned}$$ where $\sigma_i$ are the Pauli matrices. The three vectors $\bm{Q}_j$ read as $$\begin{aligned}
\bm{Q}_1 = K_\theta \left(\dfrac{\sqrt{3}}{2}, \dfrac{1}{2}\right), \;
\bm{Q}_2 = K_\theta \left(-\dfrac{\sqrt{3}}{2}, \dfrac{1}{2}\right), \;
\bm{Q}_3 = K_\theta (0, -1), \notag \end{aligned}$$ with $K_\theta = 4\pi/(3a_M)$. Here $a_M = a_0/[2\sin(\theta/2)]$ is the lattice constant of TDBG, and $a_0$ is the lattice constant of monolayer graphene.
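For orientation, the moiré length scales defined above are easy to evaluate numerically. The short sketch below implements $a_M = a_0/[2\sin(\theta/2)]$ and $K_\theta = 4\pi/(3a_M)$, assuming the standard monolayer lattice constant $a_0 = 0.246\:$nm:

```python
import math

A0 = 0.246e-9  # graphene lattice constant, m (assumed standard value)

def moire_params(theta_deg):
    """Return (a_M, K_theta) for twist angle theta in degrees.

    a_M = a0 / (2 sin(theta/2)) is the moire lattice constant,
    K_theta = 4*pi / (3*a_M) the moire Brillouin zone corner momentum.
    """
    th = math.radians(theta_deg)
    a_m = A0 / (2 * math.sin(th / 2))
    k_theta = 4 * math.pi / (3 * a_m)
    return a_m, k_theta
```

At $\theta = 1.31^\circ$ this gives $a_M \approx 10.8\:$nm, i.e. a moiré cell roughly 40 times larger than the graphene unit cell, which is why the folded bands become so narrow.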
Within this continuum description, the band structure of TDBG can be obtained by diagonalizing a large matrix in the momentum space which connects each ${\bm{k}}$ point in the first superlattice moiré Brillouin zone (MBZ) to three other points ${\bm{k}}+ \bm{Q}_j$ ($j = 1,2,3$) in an adjacent MBZ. The resulting band structures for TDBG at three different twist angles are shown in Fig. \[Fig:Bandstructure\](b). For a large twist angle ($\theta=5.0^\circ$), the band structure near the $\kappa_{-}$ point in the MBZ is close to that of pristine BLG, although a small band gap opens up due to the broken inversion symmetry. In contrast, for much smaller twist angles ($\theta = 1.5^\circ$ and $\theta = 1.31^\circ$), the bands near the $\kappa_{-}$ point become very flat, giving rise to a large density of states (DOS) near the band bottom. In Fig. \[Fig:Bandstructure\](c)-(d) we show the DOS per spin per valley $\nu(\varepsilon_F)$ in TDBG, which indeed becomes quite large for small twist angles ($\theta\sim1^\circ$). In addition, the flattened bands also lead to a much reduced Fermi velocity, as shown in Fig. \[Fig:FermiVelocity\]. This physics strongly enhances phonon scattering as we discuss later in the paper.
Approximate zero-energy eigenstates
-----------------------------------
In order to obtain the full band structure of TDBG one must numerically diagonalize a large matrix. However, near the $\kappa_{\pm}$ points in the MBZ it is possible to obtain an approximate analytical expression for the two lowest-energy eigenstates. Such a method was first developed in Ref. [@Bistritzer2011] to obtain an approximate two-band model for TBG, and was used to obtain approximate lowest-energy eigenstates in TDBG in Ref. [@Lee2019].
Specifically, one can truncate the Hamiltonian $\mathcal{H}_+$ in Eq. by retaining only four momentum points ${\bm{k}}_t$, and ${\bm{k}}_b^{(j)} \equiv {\bm{k}}_t + \bm{Q}_j$ ($j = 1, 2, 3$), and obtain a $8$-by-$8$ Hamiltonian $H(k)$ $$\begin{aligned}
H(k) =
\begin{pmatrix}
{\tilde{h}}_0(k) & T_1(k) & T_2(k) & T_3(k) \\
T_1^{\dagger}(k) & {\tilde{h}}_1(k) & 0 & 0 \\
T_2^{\dagger}(k) & 0 & {\tilde{h}}_2(k) & 0 \\
T_3^{\dagger}(k) & 0 & 0 & {\tilde{h}}_3(k) \\
\end{pmatrix}, \label{Eq:8-bandModel}\end{aligned}$$ where [@Lee2019] $$\begin{aligned}
{\tilde{h}}_0(k) &=
\begin{pmatrix}
0 & -v^2(k_\theta^\ast)^2/\gamma_1 \\
-v^2(k_\theta)^2/\gamma_1 & 0
\end{pmatrix}, \notag\\
{\tilde{h}}_j(k) &=
\begin{pmatrix}
0 & -v^2\left[(k+Q_j)_{-\theta}^\ast\right]^2/\gamma_1 \\
-v^2\left[(k+Q_j)_{-\theta}\right]^2/\gamma_1 & 0
\end{pmatrix}, \notag\\
T_j(k) &=
\begin{pmatrix}
-t_Mvk_\theta^\ast/\gamma_1 & t_Mv^2\lambda_j^\ast k_\theta^\ast(k+Q_j)_{-\theta}^\ast/\gamma_1^2\\
t_m\lambda_j & -t_Mv(k+Q_j)_{-\theta}^\ast/\gamma_1
\end{pmatrix}. \end{aligned}$$ In the above results, we have defined $\lambda_j = e^{2\pi ij/3}$ ($j = 1, 2, 3$). In addition, we have introduced a short-hand notation that $k_\theta \equiv (k_x + ik_y)e^{-i\theta/2}$.
One can verify that in the $k\to0$ limit the two zero-energy eigenstates of $H(k)$ in Eq. can be approximately written as $$\begin{aligned}
\ket{\Psi^{(\alpha)}} = S_\alpha
\begin{pmatrix}
\psi_0^{(\alpha)} \\ \psi_{1}^{(\alpha)}\\ \psi_{2}^{(\alpha)} \\ \psi_3^{(\alpha)}
\end{pmatrix},
\quad \alpha = A, B, \label{Eq:ZeroEnergyState}\end{aligned}$$ where $S_\alpha$ is the normalization factor, $\psi_0^{(\alpha)}$ is a two-component spinor, and $$\begin{aligned}
\psi_j^{(\alpha)} = \dfrac{t_M}{vK_\theta^4}{\begin{pmatrix} 0 & K_\theta^2(Q_j)_\theta \\ 0 & \dfrac{\gamma_1}{v}e^{ij\phi}\left[(Q_j)_\theta^2\right]^\ast \end{pmatrix}} \psi_0^{(\alpha)} \equiv M_j \psi_0^{(\alpha)}, \quad j = 1, 2, 3. \notag \end{aligned}$$ The wave function normalization can be determined from the following condition, $$\begin{aligned}
1 &= \inner{\Psi^{(\alpha)}}{\Psi^{(\alpha)}}
= \lvert S_\alpha\rvert^2 \left(\psi_0^{(\alpha)}\right)^\dagger \left[1 + \sum_{j=1}^{3} M^\dagger_j M_j\right]\psi_0^{(\alpha)} \notag\\
&= \lvert S_\alpha\rvert^2 \left(\psi_0^{(\alpha)}\right)^\dagger {\begin{pmatrix} 1 & 0 \\ 0 & 1+3\Delta \end{pmatrix}} \psi_0^{(\alpha)}, \end{aligned}$$ where $\Delta = \dfrac{\gamma_1^2t_M^2}{v^4K_\theta^4}\left(1+\dfrac{v^2K_\theta^2}{\gamma_1^2}\right)$. Now if we adopt the natural choice of $\psi_0^{(A)} = \begin{pmatrix} 1\\0\end{pmatrix}$, and $\psi_0^{(B)} = \begin{pmatrix} 0\\ 1 \end{pmatrix}$, we find that $$\begin{aligned}
S_A = 1, \quad S_B = \dfrac{1}{\sqrt{1+3\Delta}}. \label{Eq:Normalization}\end{aligned}$$ This approximate analytical expression for the lowest-energy states in TDBG will be helpful for our analysis of phonon-induced resistivity in the next section.
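The size of the correction $\Delta$, and hence of the normalization $S_B$, can be estimated numerically. The sketch below assumes the parameter values quoted in this paper ($\gamma_1 = 380\:$meV, $t_M = w \simeq 118\:$meV, $v = 10^6\:$m/s) and the moiré geometry defined above, with all energies in eV:

```python
import math

def normalization(theta_deg, gamma1=0.380, t_m=0.118,
                  hbar_v=6.582e-16 * 1e6):
    """Return (Delta, S_B) for the zero-energy-state normalization.

    Delta = gamma1^2 t_M^2 / (hbar v K)^4 * [1 + (hbar v K)^2 / gamma1^2],
    S_B = 1 / sqrt(1 + 3*Delta).  Energies in eV; hbar_v in eV*m.
    """
    a0 = 0.246e-9  # graphene lattice constant, m
    a_m = a0 / (2 * math.sin(math.radians(theta_deg) / 2))
    k_theta = 4 * math.pi / (3 * a_m)
    e_k = hbar_v * k_theta  # hbar*v*K_theta, in eV
    delta = (gamma1 * t_m) ** 2 / e_k ** 4 * (1 + e_k ** 2 / gamma1 ** 2)
    return delta, 1 / math.sqrt(1 + 3 * delta)
```

With these inputs, $\Delta$ is of order unity at $\theta \approx 1.3^\circ$ (so the weight on the $B$ component is substantially renormalized), while it becomes negligible at large twist angles, where $\hbar v K_\theta \gg t_M$.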
Phonon-induced resistivity \[Section:Resistivity\]
==================================================
In the previous section we have shown that for small twist angles ($\theta\sim1^\circ$), the Fermi velocity of TDBG near the MBZ corners can become quite small. As we will show in this section, such a substantial reduction in Fermi velocity can give rise to a much enhanced phonon-induced resistivity in the high-temperature ($T\gg{T_\text{BG}}$) limit as happens also for TBG at small twist angles [@Wu2019]. In particular, in such a limit the resistivity scales linearly with temperature, $\rho\approx CT$, and our goal is to estimate the coefficient $C$ and explain how it increases substantially at small twist angles. To verify the validity of our theory, we also numerically evaluate the resistivity in the full temperature range (i.e. $T\gg{T_\text{BG}}$ as well as $T\ll{T_\text{BG}}$), and estimate the crossover temperature above which this enhanced linear-in-$T$ resistivity regime applies.
Resistivity from Boltzmann transport theory
-------------------------------------------
To begin with, we recall that in the Boltzmann transport theory the energy-averaged scattering time $\tau$ in monolayer graphene in the limit of $k_BT\ll \varepsilon_F$ is given by [@Min2011] $$\begin{aligned}
{\langle\tau\rangle}^{-1} = \dfrac{2\pi}{\hbar}\nu_0\lvert W(k_F)\rvert^2 I, \label{Eq:ScatteringTime}\end{aligned}$$ where $\nu_0$ is the DOS per spin and valley at the Fermi energy, and $\lvert W(k_F)\rvert^2 = D^2\hbar k_F/(2\rho_m {v_\text{ph}})$ is the squared matrix element for acoustic phonon scattering. Here $\rho_m=\SI{7.6e-8}{g/cm^2}$ is the mass density of a single graphene sheet, ${v_\text{ph}}=\SI{2.6e6}{cm/s}$ is the phonon velocity in monolayer graphene, $D=\SI{25}{eV}$ is the acoustic phonon deformation potential [@Min2011], and $v_F$ is the Fermi velocity. The integral $I$ has the following form, $$\begin{aligned}
I = \int \dfrac{d\phi}{2\pi} \dfrac{F(q)(1-\cos\phi)}{\epsilon^2(q)} \dfrac{2q}{k_F}\beta \hbar\omega_q N_q(N_q+1), \label{Eq:IntegralI}\end{aligned}$$ where $q = 2k_F \sin(\phi/2)$ is the magnitude of the acoustic phonon wave vector, $\beta = 1/(k_B T)$, and $F(q)$ is the chiral factor defined as the square of the wave function overlap between incoming and scattered electrons. In addition, $N_q = (e^{\beta\hbar\omega_q}-1)^{-1}$ is the phonon occupation number, with $\omega_q = {v_\text{ph}}q$ being the frequency of the acoustic phonon. Finally, $\epsilon(q)$ is the dielectric function, which takes into account the screening effect at wave vector $q$. In this work we will only consider the unscreened limit, so we will take $\epsilon(q) = 1$. The reason for neglecting screening, which is easy to include, is that there is no experimental evidence that the electron-acoustic phonon resistive scattering gets screened in graphene, as a direct comparison between theory [@Hwang2008] and experiment [@Efetov2010] supports the unscreened approximation. We thus do not believe that screening plays any role in TDBG (or TBG) phonon scattering.
We can apply the above formalism to the case of TDBG, and obtain the electron resistivity as $\rho = \sigma^{-1}$, where $\sigma$ is the electron conductivity, given by $$\begin{aligned}
\sigma = g_s g_v e^2\nu(\varepsilon_F)\dfrac{v_F^2}{2}{\langle\tau\rangle}. \label{Eq:Conductivity}\end{aligned}$$ In the above equation $g_s = 2$ and $g_v = 2$ are the degeneracies due to electron spin and valley degrees of freedom, respectively, while $\nu(\varepsilon_F)$ is the DOS per spin per valley in TDBG shown in Fig. \[Fig:Bandstructure\]. It is worth noting that when evaluating the scattering time ${\langle\tau\rangle}$ in TDBG using Eq. , we should replace the DOS $\nu_0$ there by $\nu(\varepsilon_F)/2$, for the following reasons. In this work we only consider electron densities below the van Hove singularity in TDBG, in which case the topology of the Fermi surface consists of two disconnected mini-valleys ($\kappa_{\pm}$) in the MBZ near the $+K$ valley in the original Brillouin zone of BLG. As a result, the scattering matrix element $W(k_F)$ in Eq. is only appreciable for electrons within the same mini-valley. The above observation leads us to conclude that in the low-density regime we are interested in, only half of electrons at the Fermi surface contribute to the scattering time. Consequently, we need to substitute $\nu_0$ by $\nu(\varepsilon_F)/2$ in Eq. [^1]. Putting everything together, we finally obtain the following expression for the resistivity in TDBG: $$\begin{aligned}
\rho =\dfrac{1}{2g_sg_v}\left(\dfrac{h}{e^2}\right)\left(\dfrac{D^2k_F I}{\hbar\rho_m{v_\text{ph}}v_F^2}\right). \label{Eq:Resistivity}\end{aligned}$$
In order to calculate the resistivity in TDBG using the above equation, we need to evaluate the integral $I$, whose explicit form is given in Eq. . It can be simplified by setting $x = q/(2k_F) = \sin(\phi/2)$, which yields $$\begin{aligned}
I = \dfrac{16}{\pi}\int_{0}^{1} dx \dfrac{F(2k_Fx)}{\sqrt{1-x^2}} \dfrac{z_\text{BG} x^4 e^{z_\text{BG} x}}{(e^{z_\text{BG} x}-1)^2}, \end{aligned}$$ where $z_\text{BG} = {T_\text{BG}}/T$. In the high-temperature limit ($T\gg{T_\text{BG}}$) we are interested in, we find that $I\approx z_{\infty}/z_\text{BG}$, where $$\begin{aligned}
{z_{\infty}}= \dfrac{16}{\pi}\int_0^{1} dx \dfrac{x^2F(2k_Fx)}{\sqrt{1-x^2}}, \end{aligned}$$ and therefore the resistivity in Eq. becomes $\rho \approx CT$, where the coefficient $C$ is given by $$\begin{aligned}
C = \dfrac{\pi D^2k_Bz_{\infty}}{2g_s g_v e^2\hbar\rho_m{v_\text{ph}}^2v_F^2}. \label{Eq:Coefficient_C}\end{aligned}$$ Therefore, phonon-induced electron resistivity becomes linear in $T$ in the high-temperature ($T\gg{T_\text{BG}}$) limit, a regime we focus on in this work. In addition, from the above result one can see that the quantity $z_\infty$ is a key quantity in this calculation, which depends solely on the chiral form factor $F(q)$ \[or equivalently, $F(\phi)$\]. Thus, we will discuss this quantity first.
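A rough numerical evaluation of the linear-in-$T$ slope $C$ defined above is straightforward. The sketch below uses the parameter values quoted in this section ($D = 25\:$eV, $\rho_m = 7.6\times10^{-8}\:$g/cm$^2$, ${v_\text{ph}} = 2.6\times10^6\:$cm/s); taking the single-layer mass density for TDBG is an assumption, and $z_\infty = 2$ is used as a representative value (the two-band BLG limit derived in the next subsection):

```python
import math

E = 1.602176634e-19   # elementary charge, C
HBAR = 1.0545718e-34  # J*s
KB = 1.380649e-23     # J/K

def slope_c(v_f, z_inf=2.0, d_ev=25.0, rho_m=7.6e-7, v_ph=2.6e4,
            g_s=2, g_v=2):
    """Linear-in-T resistivity slope C in ohm/K:
    C = pi D^2 kB z_inf / (2 g_s g_v e^2 hbar rho_m v_ph^2 v_F^2).

    v_f in m/s; rho_m in kg/m^2 (single graphene sheet here, an
    assumption); d_ev is the deformation potential in eV.
    """
    d = d_ev * E  # deformation potential in J
    num = math.pi * d ** 2 * KB * z_inf
    den = 2 * g_s * g_v * E ** 2 * HBAR * rho_m * v_ph ** 2 * v_f ** 2
    return num / den
```

Because $C \propto v_F^{-2}$, suppressing the Fermi velocity by a factor of 10 (as the moiré flattening does at small twist angles) enhances the slope by a factor of 100, from a fraction of an ohm per Kelvin to the $\sim 10\:\Omega/$K scale.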
The chiral form factor
----------------------
We will use three different approximations to evaluate the chiral form factor $F(\phi)$ and hence ${z_{\infty}}$ for low-energy conduction-band states in TDBG. Specifically, we will use the two-band and four-band description for a pristine BLG, as well as a low-energy two-band description for TDBG. We will see that they capture different aspects of the band structure. A more accurate estimate of $F(\phi)$ necessitates a full numerical evaluation, which we leave for future studies. We comment that, given the simplified nature of our TDBG band structure model, it is unclear that a full numerical calculation of the form factor is warranted.
### Two-band model for bilayer graphene
We start with the simplest case, where a pristine BLG is described by a two-band model, given by $$\begin{aligned}
H_\text{BLG-2band} = -\dfrac{\hbar^2v^2}{\gamma_1}{\begin{pmatrix} 0 & (k_x+ik_y)^2 \\ (k_x-ik_y)^2 & 0 \end{pmatrix}}. \end{aligned}$$ As a result, the conduction band eigenstates are given by $$\begin{aligned}
\ket{\psi_{+}({\phi_{{\bm{k}}}})} = \dfrac{1}{\sqrt{2}}\begin{pmatrix}e^{-2i{\phi_{{\bm{k}}}}}\\1\end{pmatrix}, \end{aligned}$$ and the chiral form factor is given by $$\begin{aligned}
F(\phi) = \lvert\inner{\psi_{+}({\phi_{{\bm{k}}}}+\phi)}{\psi_{+}({\phi_{{\bm{k}}}})}\rvert^2. \label{Eq:2BandFormFactor}\end{aligned}$$ In order to evaluate ${z_{\infty}}$, we note that $x = \sin(\phi/2)$, and thus $F(\phi) = (1-2x^2)^2$. It follows that $$\begin{aligned}
{z_{\infty}}^\text{(BLG-2band)} = \dfrac{16}{\pi}\int_0^1 dx \dfrac{x^2}{\sqrt{1-x^2}}(1-2x^2)^2 = 2. \label{Eq:zinf-BLG-2band}\end{aligned}$$ As a result, within the two-band model of BLG, ${z_{\infty}}$ is a constant, independent of either electron density or twist angle.
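The closed-form value ${z_{\infty}} = 2$ can be checked by direct numerical quadrature. The sketch below substitutes $x = \sin t$ to remove the endpoint singularity at $x = 1$ and applies composite Simpson integration:

```python
import math

def z_inf_2band(n_steps=2000):
    """Evaluate z_inf = (16/pi) * int_0^1 x^2 (1-2x^2)^2 / sqrt(1-x^2) dx.

    With x = sin(t) the integrand becomes sin^2(t) (1 - 2 sin^2(t))^2
    on [0, pi/2]; n_steps must be even for Simpson's rule.
    """
    a, b = 0.0, math.pi / 2
    h = (b - a) / n_steps

    def f(t):
        s2 = math.sin(t) ** 2
        return s2 * (1 - 2 * s2) ** 2

    total = f(a) + f(b)
    for i in range(1, n_steps):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return (16 / math.pi) * total * h / 3
```

The quadrature converges to $2$ to high accuracy, confirming the analytic integral (which reduces to $\int_0^{\pi/2} \sin^2 t \cos^2 2t\, dt = \pi/8$).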
### Four-band model for bilayer graphene
Next, we consider pristine BLG in the four-band description. The corresponding Hamiltonian is given by $$\begin{aligned}
H_\text{BLG-4band} =
\begin{pmatrix}
0 & \hbar vke^{-i{\phi_{{\bm{k}}}}} & 0 & 0\\
\hbar vke^{i{\phi_{{\bm{k}}}}} & 0 & \gamma_1 & 0\\
0 & \gamma_1 & 0 & \hbar vke^{-i{\phi_{{\bm{k}}}}}\\
0 & 0 & \hbar vke^{i{\phi_{{\bm{k}}}}} & 0
\end{pmatrix}, \end{aligned}$$ which is written in the {$A_1$, $B_1$, $A_2$, $B_2$} basis. We consider the lower conduction band of BLG, whose energy is $$\begin{aligned}
E_c({\bm{k}}) = \dfrac{1}{2}\left[\sqrt{4\hbar^2v^2k^2+\gamma_1^2} - \gamma_1\right], \end{aligned}$$ and the corresponding wave function is given by $$\begin{aligned}
\psi&({\phi_{{\bm{k}}}})^\dagger = \\
&\begin{bmatrix}
e^{2i{\phi_{{\bm{k}}}}}\dfrac{-\sqrt{1+\eta}}{2}, & e^{i{\phi_{{\bm{k}}}}}\dfrac{-\sqrt{1-\eta}}{2}, & e^{i{\phi_{{\bm{k}}}}}\dfrac{\sqrt{1-\eta}}{2}, & \dfrac{\sqrt{1+\eta}}{2}
\end{bmatrix}. \notag\end{aligned}$$ In the above expression, $n$ is the electron density and $\eta = \left(1 + \frac{n}{n_0}\right)^{-1/2}$, with $n_0 = k_0^2/\pi$ and $\hbar vk_0 = \gamma_1/2$. From this wave function, we can obtain the chiral form factor $F(\phi)$ as follows, $$\begin{aligned}
F(\phi) = \dfrac{1}{4}\left[(1-\eta) + (1+\eta)\cos\phi\right]^2. \end{aligned}$$ Note that in the low-density limit ($\eta\to1$) the above form factor reduces to $\cos^2\phi$, the result derived from the two-band model given in Eq. , as expected.
The expression for ${z_{\infty}}$ derived from the four-band model for pristine BLG has an appreciable electron density dependence. In particular, we find that $$\begin{aligned}
{z_{\infty}}^\text{(BLG-4band)} = \dfrac{1}{2}(5\eta^2-2\eta+1). \label{Eq:zinf-BLG-4band}\end{aligned}$$
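The closed form above can likewise be checked against a direct numerical integration of the four-band form factor, again using $\cos\phi = 1 - 2x^2$ with $x = \sin(\phi/2)$. A minimal sketch (the values of $\eta$ are illustrative):

```python
# Check of the four-band closed form z_inf = (5 eta^2 - 2 eta + 1)/2
# against quadrature of F(phi) = (1/4)[(1-eta) + (1+eta) cos(phi)]^2.
import math

def z_inf_4band_integral(eta, n=200000):
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * (math.pi / 2) / n   # x = sin(t) removes the x = 1 singularity
        x = math.sin(t)
        F = 0.25 * ((1 - eta) + (1 + eta) * (1 - 2 * x**2)) ** 2
        total += x**2 * F
    return (16 / math.pi) * total * (math.pi / 2) / n

def z_inf_4band_closed(eta):
    return 0.5 * (5 * eta**2 - 2 * eta + 1)

for eta in (1.0, 0.5, 0.1):
    assert abs(z_inf_4band_integral(eta) - z_inf_4band_closed(eta)) < 1e-6
print(z_inf_4band_closed(1.0))  # -> 2.0, recovering the two-band limit
```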
### Two-band model for TDBG
Finally, we consider a low-energy two-band description of TDBG. Because we are only interested in the chiral form factor, we do not need the exact two-band model for TDBG. Instead, we know from symmetry considerations that to leading order the two-band model for TDBG must be of the form $$\begin{aligned}
H_\text{TDBG} = \mathcal{A} {\begin{pmatrix} 0 & (k_x+ik_y)^2 \\ (k_x-ik_y)^2 & 0 \end{pmatrix}}, \label{Eq:t-DBLG}\end{aligned}$$ where the coefficient $\mathcal{A}$ depends on band structure details, which we do not need. However, we do need explicit expressions for the basis states of this two-band Hamiltonian at $k=0$, which were already given in Eq. as $\ket{\Psi^{(A)}}$ and $\ket{\Psi^{(B)}}$. With this knowledge, we can write down general expressions for the eigenstates of this two-band model at small ${\bm{k}}$ as follows, $$\begin{aligned}
\ket{\zeta, {\bm{k}}} &= \dfrac{1}{\sqrt{2}}\left(\ket{\Psi^{(A)}} + \zeta e^{-2i{\phi_{{\bm{k}}}}}\ket{\Psi^{(B)}} \right) \notag\\
&\equiv
\begin{pmatrix}
\ket{\zeta, {\bm{k}}}_0, & \ket{\zeta, {\bm{k}}}_1, & \ket{\zeta, {\bm{k}}}_2, &\ket{\zeta, {\bm{k}}}_3
\end{pmatrix}, \label{Eq:TwobandEigenstate}\end{aligned}$$ where $\zeta = \pm1$ is the band index, and $$\begin{aligned}
\ket{\zeta, {\bm{k}}}_n = \dfrac{1}{\sqrt{2}}\left(S_A\ket{\psi_n^{(A)}} + S_B\zeta e^{-2i{\phi_{{\bm{k}}}}}\ket{\psi_n^{(B)}} \right), \end{aligned}$$ where $n = 0, 1, 2, 3$. One can verify that $\ket{\zeta = \pm1, {\bm{k}}}$ are indeed the two eigenstates of the two-band Hamiltonian Eq. .
When calculating the chiral form factor, we will consider phonon scattering in the two layers independently. In particular, note that the first component of the four-component eigenstate $\ket{\zeta = \pm1, {\bm{k}}}$ in Eq. resides in the top BLG, while the other three reside in the bottom one. As a result, the chiral form factor should be evaluated as follows, $$\begin{aligned}
F(\phi) &= \lvert _0\inner{\zeta', {\bm{k}}'}{\zeta, {\bm{k}}}_0\rvert^2 + \bigg\lvert \sum_{j=1}^{3} {}_j\inner{\zeta', {\bm{k}}'}{\zeta, {\bm{k}}}_j \bigg\rvert^2 \notag\\
&\equiv |\mathcal{F}_1(\phi)|^2 + |\mathcal{F}_2(\phi)|^2. \end{aligned}$$ The above derivations lead to the following results, $$\begin{aligned}
\mathcal{F}_1(\phi) = \dfrac{1}{2}\left(1+\dfrac{\zeta\zeta'}{1+3\Delta}e^{2i\phi}\right), \;
\mathcal{F}_2(\phi) = \dfrac{\zeta\zeta'}{2}\dfrac{3\Delta}{1+3\Delta}e^{2i\phi}, \end{aligned}$$ which then gives rise to the following form factor for the two-band model of TDBG, $$\begin{aligned}
F(\phi) = \dfrac{1}{4}\left[1+\dfrac{9\Delta^2+1}{(3\Delta+1)^2} + \dfrac{2}{3\Delta+1}\cos(2\phi) \right]. \label{Eq:TDBG-FormFactor}\end{aligned}$$ Such a chiral form factor yields the following result for ${z_{\infty}}$, $$\begin{aligned}
{z_{\infty}}^\text{(TDBG)} = 1 + \dfrac{9\Delta^2+1}{(3\Delta + 1)^2}. \label{Eq:zinf-t-dBLG}\end{aligned}$$
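This closed form can also be verified numerically: with $x = \sin(\phi/2)$ one has $\cos(2\phi) = 8x^4 - 8x^2 + 1$, and the $\cos(2\phi)$ term integrates to zero, leaving exactly the constant part of $4F(\phi)$. A minimal sketch (the values of $\Delta$ are illustrative):

```python
# Check of the TDBG closed form z_inf = 1 + (9 Delta^2 + 1)/(3 Delta + 1)^2
# against quadrature of the form factor in Eq. (TDBG-FormFactor).
import math

def z_inf_tdbg_closed(Delta):
    return 1 + (9 * Delta**2 + 1) / (3 * Delta + 1) ** 2

def z_inf_tdbg_integral(Delta, n=200000):
    A = z_inf_tdbg_closed(Delta)            # constant part of 4F(phi)
    B = 2 / (3 * Delta + 1)                 # coefficient of cos(2 phi)
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * (math.pi / 2) / n   # x = sin(t)
        x = math.sin(t)
        F = 0.25 * (A + B * (8 * x**4 - 8 * x**2 + 1))
        total += x**2 * F
    return (16 / math.pi) * total * (math.pi / 2) / n

for Delta in (0.1, 0.5, 2.0):
    assert abs(z_inf_tdbg_integral(Delta) - z_inf_tdbg_closed(Delta)) < 1e-6
```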
Some numerical results for ${z_{\infty}}$ under different approximations are given in Fig. \[Fig:FormFactor\]. One can see that the three different approximations of ${z_{\infty}}$ are of the same order, although they capture different aspects of the band structure. In the rest of the paper, we will use both ${z_{\infty}}^\text{(TDBG)}$ and ${z_{\infty}}^\text{(BLG-4band)}$ to calculate the phonon-induced resistivity. Note that, in order to evaluate ${z_{\infty}}$ for TDBG accurately, one has to resort to full numerical evaluations from the band structure. Although such a calculation is beyond the scope of this work, we expect that the exact value of ${z_{\infty}}$ is still within the same order of magnitude as the ones we used in this work.
Phonon-induced resistivity: High temperature limit
--------------------------------------------------
After explaining the calculations of ${z_{\infty}}$, we are now ready to evaluate the phonon-induced resistivity explicitly. In this subsection we first consider the high-temperature limit, where the resistivity is a linear function of temperature, and then present results for the full temperature range in the next subsection. Before showing our results, however, we make a few comments on our numerical evaluation of the coefficient $C$ using Eq. . First, we will use both ${z_{\infty}}^\text{(TDBG)}$ and ${z_{\infty}}^\text{(BLG-4band)}$ to approximate ${z_{\infty}}$, and demonstrate how different approximations affect the final value of $C$. Second, the Fermi velocity $v_F$ will be extracted directly from the full numerical band structure of TDBG, instead of from the two-band effective model in Eq. . Finally, all of our calculations are limited to carrier densities below the van Hove singularities in the band structure: at higher densities the Fermi-surface topology differs from our assumptions, and our theory breaks down. Our theory is thus explicitly restricted to low carrier densities, with the Fermi level below the van Hove singularities.
Some numerical results for the coefficient $C$ are given in Fig. \[Fig:DensityDependence\]. In (a)-(b) the results for two different twist angles are shown. One can see that as the twist angle decreases from $5.0^\circ$ to $1.31^\circ$, the coefficient $C$ increases substantially. Such a trend is also apparent in panel (c), which shows how the coefficient $C$ depends on the twist angle at a fixed electron density. We note that such an angular dependence with resistivity increasing strongly with decreasing twist angle is consistent with recent resistivity measurements in TDBG [@Shen2019; @Cao2019; @Liu2019].
In addition, we find from Fig. \[Fig:DensityDependence\] that within our theory the coefficient $C$ has a strong density dependence. This feature arises from the fact that the coefficient $C$ is inversely proportional to the Fermi velocity, which has a strong density dependence for parabolic bands. When we compare our theory of phonon-induced resistivity in TDBG with the experimental results, we find that we cannot capture the very weak density dependence of $C$ observed in the experiment. In particular, at low carrier densities and small twist angles ($\theta\sim 1^\circ$) experimental results show that the coefficient $C$ has almost no dependence on the carrier density [@Shen2019; @Cao2019; @Liu2019]. Our current understanding of this discrepancy is that electron correlations may play an important role in this limit, strongly suppressing the density dependence of $C$, an effect not accounted for in our theory. We expect that for larger twist angles ($\theta\gtrsim 2.0^\circ$), when the bandwidth becomes large, electron-phonon scattering will overcome the electron correlation effects and become the dominant mechanism for resistivity in the high-temperature limit. Our theory should be more applicable in that regime, and the coefficient $C$ should exhibit a substantial carrier density dependence in the experimental measurements. It will thus be interesting to carry out an experiment to resolve the crossover between these two regimes, which will help us better understand the role of electron correlation in TDBG.
It is also interesting to draw a comparison between TBG and TDBG in this context. In particular, note that such a density dependence in $C$ is absent in the case of TBG, even within the framework of phonon-induced resistivity and at small twist angles [@Wu2019]. The underlying reason is straightforward: the low-energy electronic states in TBG have a linear dispersion. As a result, the Fermi velocity, and hence the coefficient $C$, does not depend on the carrier density in TBG. It is worth noting that such an absence of density dependence in $C$ is consistent with the experimental observations in TBG [@Cao2018a]. By contrast, TDBG bands are parabolic, and hence one expects a density dependence in the temperature coefficient of the resistivity.
We do mention, however, that our calculated TDBG resistivity approximately agrees with recent measurements [@KimPrivate]: we obtain $d\rho/dT$ $\sim$ $\SI{95}{\Omega/K}$ at a twist angle of $\theta\sim 1.24^\circ$ and a carrier density of $\SI{3.0e12}{cm^{-2}}$, compared with the experimental value of $d\rho/dT$ $\sim$ $\SI{75}{\Omega/K}$. But any definitive agreement between our current theory and the measured TDBG temperature-dependent resistivity awaits a careful experimental study of the temperature, twist angle, and density dependence of TDBG resistivity, which is currently unavailable.
Phonon-induced resistivity: Full temperature range
--------------------------------------------------
Finally, we evaluate the phonon-induced resistivity in the full temperature range. Such a calculation will not only allow us to present a complete result for the phonon-induced resistivity in TDBG, but also help us determine the temperature range in which the resistivity is linear in $T$.
To begin with, we consider the integral $I$ in the low-temperature limit ($T\ll{T_\text{BG}}$), which can be evaluated by introducing $y = z_\text{BG} x$, yielding $$\begin{aligned}
I \approx \dfrac{16}{\pi z_\text{BG}^4} \int_0^{+\infty} \dfrac{y^4 e^{y}}{(e^y-1)^2} dy = \dfrac{16}{\pi} \times \dfrac{4! \zeta(4)}{z_\text{BG}^4} \propto T^4, \end{aligned}$$ where we have used $F(0)\approx 1$, and $\zeta(s)$ is the Riemann-$\zeta$ function. As a result, in the low-temperature limit the resistivity depends on the temperature as $\rho \propto T^4$. This is the so-called Bloch-Grüneisen regime of phonon scattering, which in 3D metals produces a $T^5$ power law for the temperature-dependent resistivity. In addition, the resistivity becomes independent of the chiral form factor $F(\phi)$ in this low-temperature limit. In Fig. \[Fig:FullResistivity\] we show the resistivity in TDBG across the full temperature range for a carrier density of $\SI{3.0e12}{cm^{-2}}$ and two different twist angles. These results are obtained by using the chiral form factor for the two-band model of TDBG in Eq. . One can clearly observe that the resistivity shows a $\rho \propto T^4$ behavior in the low-temperature range and a $\rho\propto T$ behavior in the high-temperature limit.
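The value of the dimensionless integral quoted above, $4!\,\zeta(4) = 4\pi^4/15$, can be confirmed by a quick quadrature (a sketch; the cutoff $y_{\max} = 60$ is our choice, large enough that the neglected exponential tail is far below the tolerance):

```python
# Check of Int_0^inf y^4 e^y / (e^y - 1)^2 dy = 4! zeta(4) = 4 pi^4 / 15.
import math

def bg_integral(n=400000, ymax=60.0):
    h = ymax / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * h   # integrand ~ y^2 at small y, ~ y^4 e^{-y} at large y
        total += y**4 * math.exp(y) / (math.exp(y) - 1) ** 2
    return total * h

expected = 4 * math.pi**4 / 15   # 4! * zeta(4), with zeta(4) = pi^4/90
assert abs(bg_integral() - expected) < 1e-4
print(round(expected, 3))        # -> 25.976
```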
After obtaining the resistivity in the full temperature range, it is instructive to examine the crossover temperature above which the resistivity becomes linear in temperature. It has been established previously that the linear-in-$T$ behavior already kicks in at a characteristic temperature $T\gtrsim T_L\approx{T_\text{BG}}/4$ [@Hwang2008; @Min2011]. In Fig. \[Fig:TL\] we plot the crossover temperature $T_L$ in TDBG as a function of carrier density for three different twist angles. We find that $T_L$ is below $\SI{11}{K}$ for almost all carrier densities and twist angles we considered. As a result, our analysis of the phonon-induced resistivity in the high-temperature limit should be applicable above $\sim \SI{11}{K}$. In fact, recent resistivity measurements in TDBG indeed show a linear-in-$T$ behavior for temperatures between $\SI{10}{K}$ and $\SI{30}{K}$ [@KimPrivate], a temperature range where our theory is applicable. Therefore, our theory will be relevant for the understanding of the linear-in-$T$ resistivity observed in recent experiments in TDBG [@Shen2019; @Cao2019; @Liu2019]. We note that in the low-temperature Bloch-Grüneisen regime, the $T^4$ power law in the resistivity may not be easy to discern because of other resistive scattering contributions, such as electron-impurity and electron-electron interactions, which are neglected in our theory.
Phonon-Mediated Superconductivity\[Section:Discussions\]
========================================================
We now discuss possible implications of our theory for superconductivity in TDBG. The electron-acoustic phonon coupling mediates an effective attractive electron-electron interaction with a strength given by $g_0$ $=$ $D^2/(4 \rho_m {v_\text{ph}}^2) \approx \SI{50}{meV\cdot nm^2}$ [@Wu2019]. The dimensionless electron-phonon coupling constant is determined by $\lambda^* = g_0 \nu(\varepsilon_F)$, where $\nu(\varepsilon_F)$ is the DOS per spin and valley. Because of the narrow bandwidth for small twist angles ($\sim 1^{\circ}$), $\lambda^*$ in TDBG can reach values of order $0.25$ given the DOS shown in Fig. \[Fig:Bandstructure\](d). The superconducting transition temperature $T_c$ can be roughly estimated as $k_B T_c = \Lambda \exp(-1/\lambda^*)$ within a BCS-type theory, where $\Lambda$ is a cutoff energy approximately given by the flatband bandwidth ($\sim 5$ meV). Therefore, $T_c$ can be of order $\SI{1}{K}$ from electron-phonon interactions. Moreover, the effective attractive interactions mediated by acoustic phonons have an enlarged SU(2) $\times$ SU(2) symmetry; namely, each valley has its own spin rotational symmetry. Therefore, acoustic phonons mediate both spin singlet and spin triplet pairings [@Wu2019], and can account for the spin triplet superconductivity experimentally identified in TDBG [@Liu2019]. We note that the possibility of electron-phonon mediated superconductivity at low temperatures ($<\SI{1}{K}$) and the large phonon-induced linear-in-$T$ resistivity at high temperatures ($>\SI{10}{K}$) are closely connected, both arising from the strongly enhanced electron-phonon coupling induced by flatband moire physics, as has already been emphasized in the context of TBG in Ref. [@Wu2019].
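The quoted $T_c$ scale follows directly from the numbers above. A rough sketch of the arithmetic, using the representative values $\Lambda = 5\:$meV and $\lambda^* = 0.25$ from the text:

```python
# Arithmetic behind the T_c estimate: k_B T_c = Lambda * exp(-1/lambda*).
import math

k_B = 0.08617          # Boltzmann constant in meV/K
Lambda = 5.0           # cutoff ~ flatband bandwidth, in meV (representative)
lam_star = 0.25        # dimensionless electron-phonon coupling (representative)

T_c = Lambda * math.exp(-1.0 / lam_star) / k_B
print(round(T_c, 2))   # -> 1.06, i.e. T_c of order 1 K
```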
Conclusion \[Section:Conclusions\]
==================================
To summarize, in this work we developed a theory to calculate the phonon-induced resistivity in twisted double bilayer graphene in the high-temperature ($T>{T_\text{BG}}$) limit, where the resistivity $\rho$ scales linearly with temperature $T$, $\rho \approx CT$. We presented a quantitative analysis of the coefficient $C$ and showed that it increases substantially as the twist angle $\theta$ is reduced. However, since we did not account for electron correlation effects on the resistivity, we expect that our predictions are most applicable to devices in which the twist angle is relatively large ($\theta\gtrsim 2^\circ$). The main qualitative conclusion of our theory is that for $T>\SI{10}{K}$ or so, TDBG should manifest a very large linear-in-$T$ resistivity arising from phonon scattering at small twist angles. The linear coefficient should manifest a strong density dependence, which is not seen in current experiments for reasons that are not obvious at present.
Acknowledgment {#acknowledgment .unnumbered}
==============
This work is supported by Microsoft and Laboratory of Physics. X.L. also acknowledges support from City University of Hong Kong (Project No. 9610428).
[^1]: Note that we should still use the full DOS $\nu(\varepsilon_F)$ in Eq. , because all electrons at the Fermi surface contribute to the conductivity.
---
abstract: 'We propose that cold dark matter is made of Kaluza-Klein particles and explore avenues for its detection. The lightest Kaluza-Klein state is an excellent dark matter candidate if standard model particles propagate in extra dimensions and Kaluza-Klein parity is conserved. We consider Kaluza-Klein gauge bosons. In sharp contrast to the case of supersymmetric dark matter, these annihilate to hard positrons, neutrinos and photons with unsuppressed rates. Direct detection signals are also promising. These conclusions are generic to bosonic dark matter candidates.'
author:
- 'Hsin-Chia Cheng'
- 'Jonathan L. Feng'
- 'Konstantin T. Matchev'
title: ' Kaluza-Klein Dark Matter '
---
The identity of dark matter is currently among the most profound mysteries in particle physics, astrophysics, and cosmology. Recent data from supernovae luminosities, cosmic microwave anisotropies, and galactic rotation curves all point consistently to the existence of dark matter with mass density $\Omega
\approx 0.3$ relative to the critical density. At the same time, all known particles are excluded as dark matter candidates, making the dark matter problem the most pressing phenomenological motivation for particles and interactions beyond the standard model.
Among the myriad options, the possibility of particle dark matter with weak interactions and weak-scale mass is particularly tantalizing. Puzzles concerning electroweak symmetry breaking suggest that such particles exist, and, if stable, their thermal relic density is generically in the desired range. Among these candidates, neutralinos in supersymmetric theories are by far the most widely studied. Neutralinos have spin 1/2 and are their own anti-particles; that is, they are Majorana fermions. They may be detected directly through scattering in detectors, or indirectly through the decay products that result when neutralinos annihilate in pairs. For indirect detection, however, the Majorana nature of neutralinos implies that annihilation is chirality-suppressed, leading to soft secondary positrons, photons, and neutrinos, and considerably diminishing prospects for discovery.
Here we study a specific example of a generic alternative: bosonic cold dark matter. If particles propagate in extra spacetime dimensions, they will have an infinite tower of partner states with identical quantum numbers, as noted long ago by Kaluza and Klein [@Kaluza:tu]. We consider the case of universal extra dimensions (UED) [@Appelquist:2000nn], in which all standard model particles propagate. Such models provide, in the form of stable Kaluza-Klein (KK) partners, the only specific dark matter candidate to emerge from theories with extra dimensions [@Dienes:1998vg; @Cheng:2002iz; @Cheng:2002ab]. KK dark matter generically has the desired relic density [@Servant:2002aq; @Kolb:fm]. Here we explore for the first time the prospects for its detection.
For concreteness, we consider the simplest UED model, with one extra dimension of size $R\sim{\text{TeV}}^{-1}$ compactified on an $S^1/Z_2$ orbifold. At the lowest order, the KK masses are simply the momenta along the extra dimension and are quantized in units of $1/R$. The degeneracy at each KK level is lifted by radiative corrections and boundary terms [@Cheng:2002iz]. The boundaries also break momentum conservation in the extra dimension down to a $Z_2$ parity, under which KK modes with odd KK number are charged. This KK-parity corresponds to the symmetry of reflection about the midpoint in the extra dimension; it is anomaly-free and not violated by quantum gravity effects. KK-parity conservation implies that the lightest KK particle is stable. KK partners of electroweak gauge bosons and neutrinos are then all possible dark matter candidates. We consider $B^1$, the first KK mode of the hypercharge gauge boson, which at one-loop is naturally the lightest KK mass eigenstate in minimal models [@Cheng:2002iz; @Cheng:2002ab].
In this UED scenario, constraints from precision data require only $1/R \agt 300~{\text{GeV}}$ [@Appelquist:2000nn]. Collider searches are also quite challenging: the Tevatron Run II may probe slightly beyond this bound and the LHC may reach $1/R \sim
1.5~{\text{TeV}}$ [@Cheng:2002ab]. Dark matter searches provide another possibility for probing these models and differentiating them from other new physics.
For a given KK spectrum, the $B^1$ thermal relic density may be accurately determined [@Servant:2002aq]. $B^1$s annihilate effectively through $s$-wave processes, unlike neutralinos, and so the desired thermal relic density is obtained for higher masses than typical for neutralinos. If $B^1$s are the only KK modes with significant abundance at the freeze-out temperature, the desired relic density is found for ${m_{B^1}}\approx 1~{\text{TeV}}$. However, many other KK states may be closely degenerate with $B^1$, and their presence at freeze-out will modify this conclusion. KK quarks and gluons annihilate with much larger cross sections through strong interactions, and so increase the predicted ${m_{B^1}}$. On the other hand, degenerate KK leptons lower the average annihilation cross section and require lower ${m_{B^1}}$. In addition to the cosmological assumptions present in all relic density calculations, the $B^1$ relic density is therefore rather model-dependent, with the optimal ${m_{B^1}}$ ranging from several hundred GeV to a few TeV, depending sensitively on the KK spectrum. Here we study the prospects for detection in a model-independent way by considering ${m_{B^1}}$ as a free parameter in the appropriate range.
We first consider the direct detection of $B^1$ dark matter. Dark matter particles are currently non-relativistic, with velocity $v \sim
10^{-3}$. For weak-scale dark matter, the recoil energy from scattering off nuclei is $\sim 0.1~{\text{MeV}}$, and far less for scattering off electrons. We therefore consider elastic scattering off nucleons and nuclei.
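The quoted recoil-energy scale follows from elementary kinematics: the momentum transfer is at most of order $q \sim m_N v$, giving $E_R \sim q^2/(2m_N)$. A minimal sketch with an assumed nuclear mass of $100\:$GeV:

```python
# Order-of-magnitude check of the quoted ~0.1 MeV recoil energy:
# a halo WIMP with v ~ 1e-3 c scattering off a nucleus of mass m_N.
m_N = 100.0              # GeV (heavy target nucleus, assumed)
v = 1e-3                 # halo velocity in units of c
q = m_N * v              # typical momentum transfer, GeV
E_R = q**2 / (2 * m_N)   # recoil energy, GeV
print(round(E_R * 1e3, 3))  # -> 0.05 (MeV), i.e. ~0.1 MeV as stated
```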
At the quark level, $B^1$ scattering takes place through KK quarks, with amplitude ${\cal M}_q^{q^1} = {\cal M}_{q_L}^{q^1} + {\cal
M}_{q_R}^{q^1}$, where $$\begin{aligned}
\lefteqn{{\cal M}_{q_i}^{q^1} = - i \frac{e^2}{\cos^2 \theta_W} Y_{q_i}^2
\varepsilon_{\mu}^{\ast}(p_3) \varepsilon_{\nu}(p_1) \times }
\nonumber \\
&& \bar{u} (p_4) \! \left[
\frac{\gamma^{\mu} \! {\not{\! k}}_1 \gamma^{\nu}} {k_1^2 - m_{q_i^1}^2}
+ \frac{\gamma^{\nu} \! {\not{\! k}}_2 \gamma^{\mu}} {k_2^2 - m_{q_i^1}^2}
\right] \! P_i \, u(p_2) \ , \end{aligned}$$ $Y=Q-I$ is hypercharge, $k_1 = p_1 + p_2$, and $k_2 = p_2 - p_3$; and through Higgs exchange, with amplitude $${\cal M}^h_q = i \frac{e^2}{2 \cos^2 \theta_W}
\frac{m_f}{k_3^2 - m_h^2} \varepsilon_{\mu}^{\ast}(p_3)
\varepsilon^{\mu}(p_1) \bar{u}(p_4) u(p_2) \ ,$$ where $k_3 = p_1 - p_3$. In the extreme non-relativistic limit, $p_1
= p_3 = ({m_{B^1}}, {{\text{\normalsize\bm{$0$}}}})$, and expanding to linear order in $p_2 =
(E_q, {{\text{\normalsize\bm{$p$}}}}_q)$, these amplitudes then reduce to $$\begin{aligned}
{\cal M}_{q}^{q^1}
&\approx& \alpha_{q}
\varepsilon_{\mu}^\ast(p_3) \varepsilon_{\nu}(p_1)
\varepsilon^{0\mu \nu \rho}
\xi_4^{\dagger} \frac{\sigma_{\rho}}{2} \xi_2 \nonumber \\
&& - i \beta_q \varepsilon_{\mu}^{\ast}(p_3)
\varepsilon^{\mu}(p_1) \xi_4^\dagger \xi_2 \\
{\cal M}_{q}^{h} &\approx&
- i \gamma_q \varepsilon_{\mu}^{\ast}(p_3)
\varepsilon^{\mu}(p_1) \xi_4^\dagger \xi_2 \ ,\end{aligned}$$ where $\xi_4$ and $\xi_2$ are two-component spinors, and $$\begin{aligned}
\alpha_q \! \! &=& \! \! \frac{2 e^2}{\cos^2 \theta_W} \left[
\frac{Y_{q_L}^2 {m_{B^1}}}{m_{q_L^1}^2 - {m_{B^1}}^2} +
(L \to R) \right] \label{alpha} \\
\beta_q \! \! &=& \! \! E_q \frac{e^2}{\cos^2 \theta_W}
\left[ Y_{q_L}^2 \frac{{m_{B^1}}^2 + m_{q_L^1}^2}{(m_{q_L^1}^2 - {m_{B^1}}^2)^2}
+ (L \to R) \right]
\label{beta} \\
\gamma_q \! \! &=& \! \! m_q \frac{e^2}{2 \cos^2 \theta_W}
\frac{1}{m_h^2} \ .\end{aligned}$$ The interactions divide into spin-dependent and spin-independent parts [@Goodman:1984dc]. Higgs exchange contributes to scalar couplings, while $q^1$ exchange contributes to both. Note that the two contributions to scalar interactions interfere constructively; barring extremely heavy KK masses, there is an inescapable lower bound on both spin-dependent and scalar cross sections.
The spin-dependent coupling is $\alpha_{q} {{\text{\normalsize\bm{$S$}}}}_{B^1} \cdot
{{\text{\normalsize\bm{$S$}}}}_{q}$, where ${{\text{\normalsize\bm{$S$}}}}_{B^1}$ and ${{\text{\normalsize\bm{$S$}}}}_{q}$ are spin operators. We must evaluate this matrix element between nucleon or nucleus bound states. By the Wigner-Eckart theorem, we may replace ${{\text{\normalsize\bm{$S$}}}}_{q}$ by $\lambda_q {{\text{\normalsize\bm{$J$}}}}_N$, where ${{\text{\normalsize\bm{$J$}}}}_N$ is the nucleon or nuclear spin operator. The constant of proportionality is $$\lambda_q = \Delta_q^p \langle S_p \rangle/J_N
+ \Delta_q^n \langle S_n \rangle/J_N \ .
\label{lambda}$$ $\Delta_q^{p,n}$ is given by $\langle p,n | {{\text{\normalsize\bm{$S$}}}}^{\mu}_q | p, n
\rangle \equiv \Delta_q^{p,n} {{\text{\normalsize\bm{$S$}}}}^{\mu}_{p,n}$ and is the fraction of the nucleon spin carried by quark $q$. A recent analysis finds $\Delta_u^p = \Delta_d^n = 0.78 \pm 0.02$, $\Delta_d^p =
\Delta_u^n = -0.48 \pm 0.02$, and $\Delta_s^p = \Delta_s^n = -0.15 \pm
0.02$ [@Mallot:1999qb]. $\langle S_{p,n} \rangle / J_N \equiv
\langle N | S_{p,n} | N \rangle / J_N$ is the fraction of the total nuclear spin $J_N$ that is carried by the spin of protons or neutrons. For scattering off protons and neutrons, $\lambda_q$ reduces to $\Delta_q^p$ and $\Delta_q^n$, respectively.
The spin-dependent cross section is $m_N^2/[4\pi ({m_{B^1}}+ m_N)^2]
\langle | {\cal M}|^2 \rangle$, where ${\cal M} = \sum_q {\cal M}_q$ and $\langle \ \rangle$ denotes an average over initial polarizations and sum over final polarizations. Using $\langle ({{\text{\normalsize\bm{$S$}}}}_{B^1}
\cdot {{\text{\normalsize\bm{$J$}}}}_N)^2 \rangle = \frac{2}{3} J_N (J_N+1)$, we find $$\sigma_{\text{spin}} = \frac{1}{6\pi} \frac{m_N^2}{({m_{B^1}}+ m_N)^2}
J_N (J_N+1) \bigg[ \sum_{u,d,s} \alpha_q \lambda_q \bigg]^2 \ ,
\label{sigma_spin}$$ where $\alpha_q$ and $\lambda_q$ are given in [Eqs. (\[alpha\]) and (\[lambda\])]{}.
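To give a feel for the magnitudes, the following sketch evaluates [Eq. (\[sigma\_spin\])]{} for $B^1$-proton scattering at one illustrative parameter point. The masses ($m_{B^1} = 1\:$TeV, $r = 0.1$) and the electroweak inputs ($\alpha_{\rm em}$, $\sin^2\theta_W$) are our assumptions, not values fixed by the text:

```python
# Hedged numerical sketch of the spin-dependent B^1-proton cross section.
import math

alpha_em = 1 / 128.0              # fine-structure constant near the weak scale
sin2w = 0.231                     # sin^2(theta_W)
e2 = 4 * math.pi * alpha_em
cos2w = 1 - sin2w

m_B1 = 1000.0                     # GeV (assumed)
m_q1 = 1.1 * m_B1                 # degenerate KK quarks, r = 0.1 (assumed)
m_p = 0.938                       # GeV

# hypercharges Y = Q - I3 for (q_L, q_R), and proton spin fractions Delta_q^p
Y = {"u": (1/6, 2/3), "d": (1/6, -1/3), "s": (1/6, -1/3)}
Delta_p = {"u": 0.78, "d": -0.48, "s": -0.15}

def alpha_q(YL, YR):
    # Eq. (alpha) with degenerate left/right KK quark masses
    return (2 * e2 / cos2w) * (YL**2 + YR**2) * m_B1 / (m_q1**2 - m_B1**2)

S = sum(alpha_q(*Y[q]) * Delta_p[q] for q in Y)
J_N = 0.5                         # proton spin
sigma_spin = (m_p**2 / (6 * math.pi * (m_B1 + m_p) ** 2)
              * J_N * (J_N + 1) * S**2)

GeV2_to_cm2 = 0.3894e-27          # (hbar c)^2: 1 GeV^-2 = 3.894e-28 cm^2
print("sigma_spin ~ %.0e cm^2" % (sigma_spin * GeV2_to_cm2))  # order 1e-42 cm^2
```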
The spin-independent cross section is $$\sigma_{\text{scalar}} = \frac{m_N^2}{4\pi\, ({m_{B^1}}+ m_N)^2}
\left[Z f_p +(A-Z) f_n\right]^2 \ ,$$ where $Z$ and $A$ are nuclear charge and atomic number, $$f_p = \sum_{u, d, s} (\beta_q + \gamma_q)
\langle p | \bar{q} q | p \rangle
= \sum_{u, d, s} \frac{\beta_q + \gamma_q}{m_q} m_p
f^p_{T_q} \ ,
\label{si}$$ and similarly for $f_n$. We take $f^{p}_{T_u}=0.020\pm 0.004$, $f^{p}_{T_d}=0.026\pm 0.005$, $f^{n}_{T_u}=0.014\pm 0.003$, $f^{n}_{T_d}=0.036\pm 0.008$, and $f^{p,n}_{T_s}=0.118\pm
0.062$ [@Ellis:2000ds]. $E_q$ of [Eq. (\[beta\])]{} is the energy of a bound quark and is rather ill-defined. In evaluating [Eq. (\[si\])]{}, we conservatively replace $E_q$ by the current mass $m_q$. We also neglect couplings to gluons mediated by heavy quark loops; a detailed loop-level analysis along the lines of Refs. [@Drees:1993bu; @Drees:1992am] for neutralinos is in progress [@inprogress]. Given the constructive interference noted above, these contributions can only increase the cross section.
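For the scalar case, a rough single-proton estimate ($Z = A = 1$) can be assembled from [Eqs. (\[beta\])–(\[si\])]{}. As described above, $E_q$ is replaced by the current quark mass, so $(\beta_q + \gamma_q)/m_q$ is independent of $m_q$. The masses ($m_{B^1} = 1\:$TeV, $r = 0.1$, $m_h = 120\:$GeV) and electroweak inputs are our illustrative assumptions:

```python
# Hedged numerical sketch of the spin-independent B^1-proton cross section.
import math

alpha_em = 1 / 128.0
sin2w = 0.231
e2 = 4 * math.pi * alpha_em
cos2w = 1 - sin2w

m_B1 = 1000.0                    # GeV (assumed)
m_q1 = 1.1 * m_B1                # degenerate KK quarks, r = 0.1 (assumed)
m_h = 120.0                      # GeV (assumed Higgs mass)
m_p = 0.938                      # GeV

# Y_L^2 + Y_R^2 per quark flavor (Y = Q - I3) and quark-mass fractions f_Tq^p
Y2 = {"u": (1/6) ** 2 + (2/3) ** 2,
      "d": (1/6) ** 2 + (1/3) ** 2,
      "s": (1/6) ** 2 + (1/3) ** 2}
fT_p = {"u": 0.020, "d": 0.026, "s": 0.118}

beta_over_m = {q: (e2 / cos2w) * Y2[q] * (m_B1**2 + m_q1**2)
               / (m_q1**2 - m_B1**2) ** 2 for q in Y2}
gamma_over_m = e2 / (2 * cos2w * m_h**2)   # same for all light quarks

f_p = m_p * sum((beta_over_m[q] + gamma_over_m) * fT_p[q] for q in Y2)
sigma_scalar = m_p**2 / (4 * math.pi * (m_B1 + m_p) ** 2) * f_p**2

GeV2_to_cm2 = 0.3894e-27
print("sigma_scalar ~ %.0e cm^2" % (sigma_scalar * GeV2_to_cm2))
```

As in the text, the per-nucleon scalar cross section is much smaller than the spin-dependent one; the $\sim A^2$ coherent enhancement in heavy nuclei is what makes the scalar channel competitive.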
We present both spin-independent and spin-dependent cross sections in Fig. \[fig:direct\]. We assume that all first level KK quarks are degenerate with mass ${m_{q^1}}$. Proton cross sections are given; neutron cross sections are similar for spin-dependent interactions and almost identical for scalar cross sections. The cross sections are large for low ${m_{B^1}}$. They are also strikingly enhanced by $r^{-2}$ for small $r
\equiv ({m_{q^1}}- {m_{B^1}}) / {m_{B^1}}$ when scattering takes place near an $s$-channel pole. Such degeneracy is unmotivated in general, but is natural for UED models, where all KK particles are highly degenerate at tree-level.
Projected sensitivities of near future experiments are also shown in Fig. \[fig:direct\]. For scattering off individual nucleons, scalar cross sections are suppressed relative to spin-dependent ones by $\sim
m_p/{m_{B^1}}$. However, this effect is compensated in large nuclei where spin-independent rates are enhanced by $\sim A^2$. In the case of bosonic KK dark matter, the latter effect dominates, and the spin-independent experiments have the best prospects for detection, with sensitivity to ${m_{B^1}}$ far above current limits.
Dark matter may also be detected when it annihilates in the galactic halo, leading to positron excesses in space-based and balloon experiments. The positron flux is [@Moskalenko:1999sb] $$\frac{d\Phi_{e^+}}{d\Omega dE} = \frac{\rho^2}{{m_{B^1}}^2}
\sum_i \langle \sigma_i v \rangle B_{e^+}^i
\! \! \int \! \! dE_0 f_i(E_0) G(E_0, E) \ ,
\label{dPhi}$$ where $\rho$ is the local dark matter mass density, the sum is over all annihilation channels $i$, and $B_{e^+}^i$ is the $e^+$ branching fraction in channel $i$. The initial positron energy distribution is given by $f(E_0)$, and the Green function $G(E_0, E)$ propagates positrons in the galaxy.
Several channels contribute to the positron flux. Here we focus on the narrow peak of primary positrons from direct $B^1 B^1\rightarrow
e^+ e^-$ annihilation. (Annihilation to muons, taus and heavy quarks also yield positrons through cascade decays, but with relatively soft and smeared spectra.) In this case, the source is monoenergetic, and [Eq. (\[dPhi\])]{} simplifies to $$\begin{aligned}
\lefteqn{\frac{d\Phi_{e^+}}{d\Omega dE} = 2.7\times 10^{-8}
{\text{cm}}^{-2} {\text{s}}^{-1} {\text{sr}}^{-1} {\text{GeV}}^{-1}
\frac{\langle\sigma_{ee} v \rangle}{{\text{pb}}} }
\nonumber \\
&&\times
\left[ \frac{\rho}{0.3~{\text{GeV}}/{\text{cm}}^3}\right]^2
\left[ \frac{1~{\text{TeV}}}{{m_{B^1}}}\right]^2
g\left(1,\frac{E}{{m_{B^1}}}\right) ,\end{aligned}$$ where the annihilation cross section is $$\langle \sigma_{ee} v \rangle = \frac{e^4}{9\pi \cos^4 \theta_W}
\left[ \frac{Y_{e^1_L}^4}{{m_{B^1}}^2+m_{e^1_L}^2}
+ (L \to R) \right]\ ,
\label{sigma_ee}$$ and the reduced Green function $g$ is as in Ref. [@Feng:2001zu].
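[Eq. (\[sigma\_ee\])]{} is easy to evaluate at a representative point. In the sketch below, $m_{B^1} = 1\:$TeV with degenerate KK electrons ($m_{e^1} = m_{B^1}$) are our assumed inputs, and $\alpha_{\rm em}$ and $\sin^2\theta_W$ are standard reference values:

```python
# Representative evaluation of <sigma_ee v> for B^1 B^1 -> e+ e-.
import math

alpha_em = 1 / 128.0
sin2w = 0.231
e2 = 4 * math.pi * alpha_em
cos2w = 1 - sin2w

m_B1 = 1000.0                  # GeV (assumed)
m_e1 = m_B1                    # degenerate KK leptons (assumed)
Y_eL, Y_eR = -0.5, -1.0        # hypercharges Y = Q - I3

sigma_v = (e2**2 / (9 * math.pi * cos2w**2)) * (
    Y_eL**4 / (m_B1**2 + m_e1**2) + Y_eR**4 / (m_B1**2 + m_e1**2))

GeV2_to_pb = 3.894e8           # 1 GeV^-2 = 3.894e8 pb
print("%.2f pb" % (sigma_v * GeV2_to_pb))   # order 0.1 pb for these inputs
```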
Positron spectra and an estimated background (model C from Ref. [@Moskalenko:1999sb]) are given in Fig. \[fig:positrons\]. The sharp peak at $E_{e^+} = {m_{B^1}}$ is spectacular — while propagation broadens the spectrum, the mono-energetic source remains evident. This feature is extremely valuable, as the background, although resulting from many sources, should be smooth. Maximal $E_{e^+}$ also enhances detectability since the background drops rapidly with energy. Both of these virtues are absent for neutralinos, where Majorana-ness implies helicity-suppressed annihilation amplitudes, and positrons are produced only in cascades, leading to soft, smooth spectra [@Ellis:2001hv]. A peak in the $e^+$ spectrum will not only be a smoking gun for $B^1$ dark matter, it will also exclude neutralinos as the source.
Of the many positron experiments, the most promising is AMS [@Barrau:2001ux], the anti-matter detector to be placed on the International Space Station. AMS will distinguish positrons from electrons even at 1 TeV energies [@Hofer:1998sx]. With aperture $6500~{\text{cm}}^2{\text{sr}}$ and a runtime of 3 years, AMS will detect $\sim 1000$ positrons with energy above 500 GeV, and may detect a positron peak from $B^1$ dark matter.
Photons from dark matter annihilation in the center of the galaxy also provide an indirect signal. The line signal from $B^1 B^1 \to \gamma
\gamma$ is loop-suppressed, and so we consider continuum photon signals. The integrated photon flux above some photon energy threshold $E_{th}$ is [@Feng:2001zu] $$\begin{aligned}
\lefteqn{\Phi_{\gamma} (E_{th})= 5.6 \times 10^{-12}~{\text{cm}}^{-2}~{\text{s}}^{-1}
\bar{J}(\Delta \Omega) \, \Delta \Omega} \nonumber \\
&&\times
\left[ \frac{1~{\text{TeV}}}{{m_{B^1}}} \right]^2
\sum_q \frac{\langle\sigma_{qq} v\rangle}{{\rm pb}}
\int_{E_{th}}^{{m_{B^1}}}
\! \! dE \frac{dN_{\gamma}^q}{dE}\ ,
\label{phigamma}\end{aligned}$$ where the sum is over all quark pair annihilation channels (with cross sections similar to Eq. (\[sigma\_ee\])), and $dN_{\gamma}^q/dE$ is the differential gamma ray multiplicity for channel $qq$. The hardest spectra result from fragmentation of light quarks [@Bergstrom:1997fj], and so the lack of chirality suppression again gives a relative enhancement over neutralinos. $\Delta \Omega$ is the solid angle of the field of view of a given telescope, and $\bar{J}$ is a measure of the cuspiness of the galactic halo density profile. There is a great deal of uncertainty in $\bar{J}$, with possible values in the range 3 to $10^5$. We choose $\Delta \Omega = 10^{-3}$ and a moderate value of $\bar{J} = 500$.
Integrated photon fluxes are given in Fig. \[fig:photons\] for two representative $E_{th}$: 1 GeV, accessible to space-based detectors, and 50 GeV, characteristic of ground-based telescopes. Estimated sensitivities for two of the more promising experiments, GLAST [@Sadrozinski:wu] and MAGIC [@MAGIC], are also shown. We find that photon excesses are detectable for ${m_{B^1}}\alt 600~{\text{GeV}}$. Note that these signals may be greatly enhanced for clumpy halos with large $\bar{J}$.
Finally, high-energy neutrinos from annihilating dark matter trapped in the core of the Sun or the Earth can be detected through their charged-current conversion to muons. Unlike the case in supersymmetry, $B^1$s can annihilate directly to neutrinos, with branching ratio $\approx 1.2\%$. Secondary neutrinos may also result from final states with heavy quarks, charged leptons, or Higgs bosons. Considering primary neutrinos and those from tau decays from the Sun (which is typically full, with capture and annihilation in equilibrium), we find that, for $r = 0.5\, (0.02)$, next generation neutrino telescopes like AMANDA, NESTOR and ANTARES will probe ${m_{B^1}}$ up to 200 GeV (600 GeV) and IceCube will be sensitive to ${m_{B^1}}= 400$ GeV (1400 GeV) [@inprogress].
In conclusion, we find excellent prospects for KK dark matter detection. The elastic scattering cross sections are enhanced near $s$-channel KK resonances, providing good chances for [*direct*]{} detection. In addition, [*indirect*]{} detection is typically much more promising than in supersymmetry for three reasons. First, there is no helicity suppression for the annihilation of bosonic KK dark matter into fermion pairs. Second, the preferred $B^1$ mass range is higher than in supersymmetry, leading to harder positron, photon, and neutrino spectra, with better signal-to-background ratio. And third, $B^1$ annihilation produces primary positrons and neutrinos with distinctive energy spectrum shapes, again facilitating observation above background. Kaluza-Klein gauge bosons therefore provide a promising and qualitatively new possibility for dark matter and dark matter searches.
[99]{} T. Kaluza, Sitzungsber. Preuss. Akad. Wiss. Berlin (Math. Phys. ) [**K1**]{}, 966 (1921); O. Klein, Z. Phys. [**37**]{}, 895 (1926) \[Surveys High Energ. Phys. [**5**]{}, 241 (1986)\]. T. Appelquist, H.-C. Cheng and B. A. Dobrescu, Phys. Rev. D [**64**]{}, 035002 (2001) \[hep-ph/0012100\]. K. R. Dienes, E. Dudas and T. Gherghetta, Nucl. Phys. B [**537**]{}, 47 (1999) \[hep-ph/9806292\]. H.-C. Cheng, K. T. Matchev and M. Schmaltz, Phys. Rev. D [**66**]{}, 036005 (2002) \[hep-ph/0204342\]. H.-C. Cheng, K. T. Matchev and M. Schmaltz, hep-ph/0205314. G. Servant and T. M. Tait, hep-ph/0206071. See also E. W. Kolb and R. Slansky, Phys. Lett. B [**135**]{}, 378 (1984); J. Saito, Prog. Theor. Phys. [**77**]{}, 322 (1987). M. W. Goodman and E. Witten, Phys. Rev. D [**31**]{}, 3059 (1985). G. K. Mallot, Int. J. Mod. Phys. A [**15S1**]{}, 521 (2000). J. R. Ellis, A. Ferstl and K. A. Olive, Phys. Lett. B [**481**]{}, 304 (2000) \[hep-ph/0001005\]. M. Drees and M. Nojiri, Phys. Rev. D [**48**]{}, 3483 (1993). M. Drees and M. M. Nojiri, Phys. Rev. D [**47**]{}, 376 (1993). H.-C. Cheng, J. L. Feng and K. T. Matchev, in progress.
N. J. Spooner [*et al.*]{}, Phys. Lett. B [**473**]{}, 330 (2000). A. Benoit [*et al.*]{}, astro-ph/0206271. R. W. Schnee [*et al.*]{}, Phys. Rept. [**307**]{}, 283 (1998). H. V. Klapdor-Kleingrothaus, hep-ph/0104028. M. Bravin [*et al.*]{} \[CRESST-Collaboration\], Astropart. Phys. [**12**]{}, 107 (1999) \[hep-ex/9904005\]. I. V. Moskalenko and A. W. Strong, Phys. Rev. D [**60**]{}, 063003 (1999) \[astro-ph/9905283\]. J. L. Feng, K. T. Matchev and F. Wilczek, Phys. Rev. D [**63**]{}, 045024 (2001) \[astro-ph/0008115\]. J. R. Ellis [[*et al.*]{}]{}, Eur. Phys. J. C [**24**]{}, 311 (2002). A. Barrau \[AMS Collaboration\], astro-ph/0103493. H. Hofer and M. Pohl, Nucl. Instrum. Meth. A [**416**]{}, 59 (1998) \[hep-ex/9804016\]. L. Bergstrom, P. Ullio and J. H. Buckley, Astropart. Phys. [**9**]{}, 137 (1998) \[astro-ph/9712318\]. H. F. Sadrozinski, Nucl. Instrum. Meth. A [**466**]{}, 292 (2001). MAGIC Collaboration, M. Martinez [*et al.*]{}, OG.4.3.08 in [*Proceedings of ICRC99*]{}, Utah, 17-25 August 1999.
| {
"pile_set_name": "ArXiv"
} |
---
author:
- |
Hideto Fukazawa$^{1}$[^1], Kenji Hirayama$^{1}$, Kenji Kondo$^{1}$, Takehiro Yamazaki$^{1}$, Yoh Kohori$^{1}$,\
Nao Takeshita$^{2}$, Kiichi Miyazawa$^{2,3}$, Hijiri Kito$^{2}$, Hiroshi Eisaki$^{2}$, Akira Iyo$^{2}$
title: '$^{75}$As NMR study of the ternary iron arsenide BaFe$_{2}$As$_{2}$ '
---
The discovery of superconductivity in F-doped LaFeAsO with $T_{\rm c} = 26$ K [@Kam1] has accelerated world-wide investigations of related superconductors [@Kit1; @Take1; @Nak1; @Mat1; @Ish1; @Sat1; @Taka1; @GCh1; @Ren1; @Ren2; @XCh1; @Cru1]. The common feature of these compounds is the FeAs layer, which is analogous to the CuO$_{2}$ plane in high superconducting transition temperature (high $T_{\rm c}$) cuprates. In addition, the non-doped materials commonly exhibit spin density wave (SDW) or antiferromagnetic order with an adjacent structural phase transition, which is also reminiscent of the parent materials of high $T_{\rm c}$ cuprates. At the present stage, the suppression of the spin density wave (or of the antiferromagnetic order of localized moments) and/or carrier control of the non-doped materials seem to play a key role in the emergence of this new class of superconductivity. Recent nuclear magnetic resonance (NMR) studies of some of these superconducting oxypnictides strongly suggest that the superconductivity in these materials is unconventional, with nodes in the superconducting gap [@Nak1; @Mat1].
Soon after the intensive investigations of the oxypnictides began, the oxygen-free iron pnictides BaFe$_{2}$As$_{2}$ [@Rot1] and SrFe$_{2}$As$_{2}$ [@Kre1] were proposed as the next candidates for parent materials of superconductors with high $T_{\rm c}$. The lattice parameters and the magnetic susceptibility of these materials were already reported about 25-30 years ago [@Pfi1; @Pfi2]. However, deeper studies of these compounds have only just started, motivated by their consideration as parent materials of superconductors. The crystal structure of these pnictides is the ThCr$_{2}$Si$_{2}$-type structure, which is familiar from heavy-fermion systems. This structure possesses an FeAs layer similar to that realized in LaFeAsO. Moreover, both materials exhibit SDW anomalies at $T_{\rm SDW} =$ 140 K (BaFe$_{2}$As$_{2}$) [@Rot1; @Hua1] and $T_{\rm SDW} =$ 205 K (SrFe$_{2}$As$_{2}$) [@Kre1; @Yan1]. It is important to notice that both compounds exhibit a structural phase transition from a tetragonal ($I4/mmm$) to an orthorhombic ($Fmmm$) structure simultaneously with the SDW anomaly [@Rot1; @Kre1; @Hua1; @Yan1]. The order of this transition was reported as second order for BaFe$_{2}$As$_{2}$ [@Rot1] and first order for SrFe$_{2}$As$_{2}$ [@Kre1; @Yan1]. However, quite recent neutron diffraction measurements of BaFe$_{2}$As$_{2}$ report hysteresis of the tetragonal (220) peak between the on-cooling and on-warming sequences, which indicates that the structural transition at $T_{\rm SDW}$ is of first order also in BaFe$_{2}$As$_{2}$ [@Hua1].
The most striking feature of these compounds is that the SDW anomaly disappears and superconductivity indeed sets in upon hole doping, for example, by K substitution for Ba [@Rot2] or Sr [@GCh2]. In order to understand the superconductivity in these doped oxygen-free iron-based pnictides, it is also important to study the magnetic and electronic properties of the parent materials. In particular, the relation between the SDW instability and the superconductivity should be revealed by local-probe measurements in addition to bulk measurements. Hence, we performed $^{75}$As-NMR measurements of BaFe$_{2}$As$_{2}$. The $^{75}$As-NMR spectra clearly exhibited a magnetic transition at around 131 K in our samples. The temperature $T$ dependence of the internal magnetic field suggests that the transition is likely of first order. The critical-slowing-down phenomenon in the spin-lattice relaxation rate $1/T_{1}$ is not pronounced in this material.
The polycrystalline BaFe$_{2}$As$_{2}$ was synthesized by the high-temperature and high-pressure method. The samples were confirmed to be nearly single phase by x-ray diffraction. A standard four-probe resistivity measurement revealed a rapid decrease of the resistivity below 131 K, which corresponds to the SDW anomaly [@Tom1]. The $T_{\rm SDW}$ of our samples is slightly lower than the values reported by Rotter [*et al.*]{} [@Rot1] and Huang [*et al.*]{} [@Hua1]. The samples were crushed into powder for the experiments. The NMR experiments on the $^{75}$As nucleus ($I=3/2$, $\gamma = 7.292$ MHz/T) were carried out using phase-coherent pulsed NMR spectrometers and a superconducting magnet. The NMR spectra were measured both by sweeping the applied field at a constant resonance frequency and by sweeping the resonance frequency at a constant applied field. The origin of the Knight shift, $K=0$, of the $^{75}$As nucleus was determined by the $^{75}$As NMR of GaAs [@Bas1]. The $1/T_{1}$ was measured with the saturation recovery method.
![ (Color online) $^{75}$As-NMR spectrum of BaFe$_{2}$As$_{2}$ at 141 K. Inset shows the center line of the enlarged spectrum between 5.975 and 6 T. ](Fig1_BaFe2As2_radom_BSpec_141K_43o83MHz){width="8cm"}
In Fig. 1, we show the $^{75}$As-NMR spectrum of BaFe$_{2}$As$_{2}$ at 141 K in the paramagnetic state. At this temperature, the crystal structure of BaFe$_{2}$As$_{2}$ is tetragonal [@Rot1; @Hua1]. The obtained spectrum is the powder pattern expected for weak electric quadrupole coupling [@Car1]. The nuclear-spin Hamiltonian with external magnetic field $H_{\rm ext}$ is given by $\mathcal{H} = -\gamma H_{\rm ext}I_{z} + \frac{e^{2}qQ}{4hI(2I-1)}\left( 3I_{z'}^{2}-I(I+1)\right), $ where $h$, $eq$, and $eQ$ represent the Planck constant, the electric field gradient (EFG), and the nuclear quadrupole moment, respectively. The principal axis of the EFG is along the crystal $c$ axis, since the As site above $T_{\rm SDW}$ has a local four-fold symmetry around the $c$ axis [@Rot1; @Hua1]. For the same reason, the $\mathcal{H}$ above $T_{\rm SDW}$ does not contain the asymmetry parameter $\eta$ of the EFG. A sharp central peak was observed at around 5.98 T. In addition, rather broader satellite peaks were observed at around 5.83 and 6.13 T, which is due to the first-order perturbation effect of the electric quadrupole term against the Zeeman term in $\mathcal{H}$. From the difference of the resonance fields of the satellite lines, the nuclear quadrupole resonance frequency, $\nu_{\rm Q} \equiv \frac{3e^{2}qQ}{2hI(2I-1)}$, was estimated to be 2.2 MHz. This value is about 4-5 times smaller than the values in iron-based oxypnictide superconductors [@Nak1; @Mat1; @Gra1; @Muk1]. This smaller value might suggest a low carrier density in the parent materials. A similar discussion was given for the nuclear quadrupole resonance of the oxygen-deficient iron oxypnictides [@Muk1] and the high $T_{\rm c}$ cuprates [@Zhe1].
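The estimate of $\nu_{\rm Q}$ from the satellite splitting can be reproduced with the numbers quoted above ($\gamma = 7.292$ MHz/T; satellites at about 5.83 and 6.13 T); a minimal sketch:

```python
gamma = 7.292                 # MHz/T, 75As gyromagnetic ratio
b_low, b_high = 5.83, 6.13    # T, satellite resonance fields from the spectrum

# In the field-swept spectrum at fixed frequency, the difference of the
# satellite resonance fields corresponds to the quadrupole frequency nu_Q.
nu_q = gamma * (b_high - b_low)   # MHz
print(round(nu_q, 1))             # ~2.2 MHz, as quoted in the text
```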
In the inset of Fig. 1, we show the center line of the enlarged spectrum between 5.975 and 6 T. This line shape is due to the second-order perturbation effect of the electric quadrupole term. The large peak at around 5.984 T and the edge-like structure at around 5.993 T originate from the resonance components perpendicular and inclined at 41.8$^{\rm o}$ to the principal axis (crystal $c$ axis) of the EFG, respectively. The enhancement of the perpendicular component, which is parallel to the crystal $ab$ plane, is due to the partial orientation of the crystals in the $H_{\rm ext}$. This suggests that the magnetic easy axis of BaFe$_{2}$As$_{2}$ lies within the $ab$ plane, which is consistent with the fact that the magnetic susceptibility parallel to the $ab$ plane is about 1.5 times larger than that parallel to the $c$ axis [@Wan1].
![ (Color online) $^{75}$As-NMR spectra of BaFe$_{2}$As$_{2}$ at various temperatures. Circles denote the spectrum with well oriented powders in the external magnetic field and triangles the spectrum with partially oriented powders. Inset shows the zero-external-field $^{75}$As-NMR spectrum at 1.5 K. ](Fig2_BaFe2As2_oriented_BSpec_4o2_141K_43o83MHz_2.eps){width="8cm"}
In Fig. 2, we show the $^{75}$As-NMR spectra of BaFe$_{2}$As$_{2}$ at various temperatures. The spectrum broadens below 130 K. This is clearly ascribable to the SDW magnetic ordering below $T_{\rm SDW} = 131$ K. The quite weak center-line signal observed in the paramagnetic state remains at around 6 T even in the magnetically ordered state. Since the signal intensity of this center peak below $T_{\rm SDW}$ becomes about 1/10 of that above $T_{\rm SDW}$, we may speculate that the SDW ordered state and the paramagnetic state coexist just below $T_{\rm SDW}$, which implies that the SDW transition in BaFe$_{2}$As$_{2}$ is of first order. Note that the spectral broadening below $T_{\rm SDW}$ cannot be explained by a change of the $\nu_{\rm Q}$ associated with the structural phase transition at $T_{\rm SDW}$. Because the $\nu_{\rm Q}$ is roughly inversely proportional to the unit cell volume, and the change of the unit cell volume associated with the SDW anomaly is at most 5%, the change of the $\nu_{\rm Q}$ gives a spectral broadening of at most 0.02 T at half maximum of the intensity at 6 T. This is clearly much less than the experimentally observed spectral broadening of 0.5 T at 120 K.
In Fig. 2, we also show two $^{75}$As-NMR spectra at 4.2 K. One was obtained with well oriented powders and the other with partially oriented powders. The orientation of the micro crystals in the powders was estimated from the fraction of the perpendicular component of the center-line spectrum in the paramagnetic state. A clear difference between the spectra was observed. The broader spectrum with partially oriented powders indicates a larger projection of the internal magnetic field $H_{\rm int}$ at the As site along the $H_{\rm ext}$. Since the $c$ axis of each micro crystal in fully oriented powders is perpendicular to the $H_{\rm ext}$, we deduce that the $H_{\rm int}$ at the As site points along the crystal $c$ axis. Recent neutron diffraction measurements revealed an ordered moment of 0.87$\mu_{\rm B}$ (Bohr magneton) at the Fe site with the $q$ vector $(1,0,1)$ for the orthorhombic structure [@Hua1]. Indeed, an antiferromagnetic Fe ordered moment within the $ab$ plane will make the $H_{\rm int}$ at the As site parallel to the $c$ axis, because the As site is located at the top of the AsFe$_{4}$ pyramid. The sharper $^{75}$As-NMR spectrum at 4.2 K is obtained under the condition that the $H_{\rm ext}$ and the $H_{\rm int}$ are oriented nearly perpendicular to each other. Hence, the line width of the spectrum in Fig. 2 arises from the small angular distribution of the alignment, and is reduced from the bare $H_{\rm int}$. However, the $T$ dependence of the line width corresponds to that of the $H_{\rm int}$. Assuming this ordered structure and the moment along the $a$ or the $b$ axis, we evaluated the dipole field at the As site to be approximately 0.3 T. This is less than the $H_{\rm int}$ estimated from the spectral width of the $^{75}$As-NMR. The contribution of conduction electrons is involved in the actual $H_{\rm int}$, though the direction of the $H_{\rm int}$ depends on that of the ordered moments and the ordered structure.
In the inset of Fig. 2, we show the zero-field $^{75}$As-NMR spectrum at 1.5 K. The narrow center line and broad satellites indicate that the $H_{\rm int}$ at the As site has a magnitude of about 1.3 T and that its orientation is along the maximum EFG direction at the As site ($c$ axis). The narrow lines suggest that the magnetically ordered state is basically formed with a commensurate $q$ vector, which is consistent with neutron scattering measurements [@Hua1]. The origin of the broader satellites is probably a slight distribution of the EFG in the orthorhombic structure through the first-order structural phase transition. Note that the broadening of the NMR spectra in the $H_{\rm ext}$ is due to the distribution of the projection component of the $H_{\rm int}$ along the $H_{\rm ext}$. Quite recent $^{75}$As-NMR measurements of single crystals and polycrystals of BaFe$_{2}$As$_{2}$ by Baek [*et al.*]{} concluded that the magnetically ordered state is incommensurate [@Bae1]. However, our zero-field $^{75}$As-NMR spectrum indicates that the magnetically ordered state is commensurate.
![ (Color online) $T$ dependence of the full width at half maximum $FWHM$ (closed circle) and quarter maximum $FWQM$ (closed triangle) of the $^{75}$As-NMR spectra of BaFe$_{2}$As$_{2}$. Closed squares denote the $FWHM$ of the center line in the paramagnetic state. Inset shows Knight shift $K_{ab}$ along the $ab$ plane of $^{75}$As-NMR of BaFe$_{2}$As$_{2}$ versus the magnetic susceptibility $\chi_{ab}$ [@Wan1]. ](Fig3_BaFe2As2_oriented_B_FWHM_43o83MHz.eps){width="8cm"}
In Fig. 3, we show the $T$ dependence of the full width at half maximum $FWHM$ and at quarter maximum $FWQM$ of the $^{75}$As-NMR spectra with well oriented powders. As already described above, the spectral width can be considered a rough measure of the distribution of the internal magnetic field $H_{\rm int}$ at the As site. The $FWHM$ is nearly $T$ independent and the $FWQM$ slightly decreases on warming between 4.2 and 130 K. Then both abruptly decrease from a finite value toward zero at around $T_{\rm SDW}$. This discontinuous behavior of the internal magnetic field at $T_{\rm SDW}$ can be interpreted as indicating that the SDW transition is likely of first order.
In the inset of Fig. 3, we show the Knight shift $K_{ab}$ along the $ab$ plane of $^{75}$As-NMR of BaFe$_{2}$As$_{2}$ versus $\chi_{ab}$, the magnetic susceptibility parallel to the $ab$ plane, above $T_{\rm SDW}$. The $K_{ab}$ above $T_{\rm SDW}$ was obtained from the perpendicular component of the center line of the frequency-swept spectra at a constant applied field of 5.98 T. The second-order quadrupole effect was taken into account in obtaining $K_{ab}$ [@Car1]. The data for $\chi_{ab}$ were taken from ref. \[Wang\]. The hyperfine coupling constant $A_{ab}$ parallel to the $ab$ plane was evaluated from the Knight shift at the As site parallel to the $ab$ plane and the magnetic susceptibility parallel to the $ab$ plane by assuming the following formulae: $K_{ab}(T) = \frac{A_{ab}^{d}}{N_{\rm A}\mu_{\rm B}}\chi_{ab}^{d}(T)
+ \frac{A_{ab0}}{N_{\rm A}\mu_{\rm B}}\chi_{ab0}$, with $\chi_{ab}(T) = \chi_{ab}^{d}(T) + \chi_{ab0}$. Here, $N_{\rm A}$ represents the Avogadro number. The evaluated hyperfine coupling constant $A_{ab}^{d}$, originating from the coupling between the $^{75}$As nuclear spin and the 3$d$ conduction electrons of Fe, is +19(2) kOe/$\mu_{\rm B}$. This is of the same magnitude as the coupling constant reported for the $^{75}$As-NMR of LaFeAsO$_{0.9}$F$_{0.1}$ [@Gra1].
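The extraction of $A_{ab}^{d}$ from such a $K$-$\chi$ plot amounts to a linear fit of $K_{ab}$ versus $\chi_{ab}$, with the slope converted through the factor $N_{\rm A}\mu_{\rm B}$. The sketch below uses synthetic (hypothetical) data generated from the quoted result of $+19$ kOe/$\mu_{\rm B}$ simply to illustrate the unit conversion; the real analysis of course uses the measured shifts and the susceptibility of ref. \[Wang\].

```python
import numpy as np

N_A = 6.02214e23      # 1/mol, Avogadro number
MU_B = 9.27401e-21    # erg/G, Bohr magneton (cgs units)

A_true = 19.0e3                       # Oe/mu_B, value quoted in the text
slope_true = A_true / (N_A * MU_B)    # slope of K vs chi, in mol/emu

# Hypothetical K-chi pairs on a line with a constant (chi_0, K_0) offset
chi = np.linspace(4e-4, 8e-4, 6)      # emu/mol (illustrative range)
K = slope_true * chi + 2e-4           # dimensionless Knight shift

slope_fit, _ = np.polyfit(chi, K, 1)  # linear fit: K = slope*chi + offset
A_fit = slope_fit * N_A * MU_B        # back to Oe/mu_B
print(A_fit / 1e3)                    # ~19 kOe/mu_B
```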
![ (Color online) $T$ dependence of spin-lattice relaxation rate $1/T_{1}$ of $^{75}$As of BaFe$_{2}$As$_{2}$. The inset shows the typical recovery curve obtained at 266 K. ](Fig4_BaFe2As2_T1T_43o835MHz_5o98T.eps){width="8cm"}
In Fig. 4, we show the $T$ dependence of $1/T_{1}$ of $^{75}$As in BaFe$_{2}$As$_{2}$. We obtained $T_{1}$ at a fixed frequency of 43.83 MHz with external fields of 5.75-5.98 T in the $T$ range of 4.2-131 K, and at a fixed field of 5.98 T with frequencies of 43.83-43.87 MHz in the $T$ range of 131-300 K. The nuclear magnetization recovery curve was fitted by the following double-exponential function, as expected for the center line of the spectrum of the nuclear spin $I=3/2$ of the $^{75}$As nucleus [@Sim1]: $1-\frac{m(t)}{m_{0}} = 0.1\exp\left( -\frac{t}{T_{1}}\right) + 0.9\exp\left( -\frac{6t}{T_{1}}\right) ,$ where $m(t)$ and $m_{0}$ are the nuclear magnetizations after time $t$ and after a sufficiently long time following the NMR saturation pulse, respectively. In the inset of Fig. 4, we show a typical recovery curve obtained at 266 K. Clearly the data are well fitted by the above ideal curve with a single $T_{1}$ component. We used this formula even below $T_{\rm SDW}$, where the recovery curve should be more complicated and its concrete form cannot be derived. However, the analysis below $T_{\rm SDW}$ also yielded good fits with a single $T_{1}$ component. $1/T_{1}$ exhibits a rapid decrease between 180 and 300 K. The slope of $1/T_{1}$ is steeper than a $T$-linear dependence. This anomalous $T$ dependence is quite different from that reported in the related parent compound LaFeAsO, where $1/T_{1}$ of $^{139}$La is nearly $T$ independent sufficiently above $T_{\rm SDW}$ [@Nak1]. Below about 180 K, $1/T_{1}$ exhibits a gradual increase and then a sudden decrease below 131 K $=T_{\rm SDW}$. Note that the critical-slowing-down phenomenon in this material is not pronounced, which is in contrast to the case of LaFeAsO, in which the phenomenon was more clearly observed in $1/T_{1}$ of $^{139}$La [@Nak1]. The steep decrease of $1/T_{1}$ below $T_{\rm SDW}$ is probably due to a gap formation of the SDW over part of the Fermi surface. $1/T_{1}$ below about 100 K is nearly proportional to $T$.
This is attributable to relaxation due to the conduction electrons remaining at the Fermi level even below $T_{\rm SDW}$.
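The recovery-curve analysis can be sketched as follows: generate data from the double-exponential law for the $I=3/2$ central transition and recover $T_{1}$ by least squares. Only the functional form is taken from the text; the $T_{1}$ value and time grid below are hypothetical.

```python
import numpy as np

def recovery(t, t1):
    # 1 - m(t)/m0 for the I = 3/2 central transition with a single T1
    return 0.1 * np.exp(-t / t1) + 0.9 * np.exp(-6.0 * t / t1)

t1_true = 200.0                     # microseconds (hypothetical value)
t = np.linspace(1.0, 2000.0, 60)    # delay times (hypothetical grid)
data = recovery(t, t1_true)

# Simple 1-D least-squares search over T1 (the amplitudes 0.1 and 0.9
# are fixed by theory, so only one parameter remains)
grid = np.linspace(50.0, 500.0, 9001)
chi2 = [np.sum((recovery(t, g) - data) ** 2) for g in grid]
t1_fit = grid[int(np.argmin(chi2))]
print(t1_fit)   # ~200
```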
In BaFe$_{2}$As$_{2}$ and its isomorph SrFe$_{2}$As$_{2}$, the magnetic anomaly and the structural anomaly coincide with each other at $T_{\rm SDW}$ [@Rot1; @Kre1; @Hua1; @Yan1]. This is in contrast to the case of the related compound LaFeAsO, in which the magnetic anomaly and the structural anomaly occur separated by about 10 K [@Nak1]. Moreover, the present $^{75}$As NMR measurements of BaFe$_{2}$As$_{2}$ support the conclusion that the magnetic transition at $T_{\rm SDW}$ is likely of first order. Further investigation of the relation between the coincidence/separation of the magnetic and structural anomalies and the order of the phase transition in the parent materials is quite important for understanding the superconductivity adjacent to the magnetic and structural anomalies in the iron-based oxypnictides/pnictides.
In summary, we performed $^{75}$As NMR measurements of the ternary iron arsenide BaFe$_{2}$As$_{2}$, which is a parent compound of the new class of iron-based superconductors. The $^{75}$As-NMR spectra clearly revealed that a magnetic transition occurs at around 131 K in our samples, which corresponds to the emergence of the SDW. The $T$ dependence of the $H_{\rm int}$ suggests that the transition is likely of first order. However, it is still an open question whether this transition is of first or second order. Detailed specific heat measurements would answer this question and are strongly required. The critical-slowing-down phenomenon in $1/T_{1}$ is not pronounced in this compound. Finally, we comment on the pressure effect on these iron-based oxypnictides/pnictides. Not only doping studies but also recent pressure studies of the oxypnictides revealed that pressure strongly affects the electronic states of these materials over a broad pressure range up to about 30 GPa [@Take1; @Taka1]. This indicates that subsequent NMR/NQR studies of iron-based oxypnictides/pnictides under high pressure require pressures of the 10 GPa class. The recent development of high-pressure NMR/NQR techniques by the authors’ group [@Fuk1; @Hir1] will help give further insights into the understanding of these fascinating iron-based oxypnictides/pnictides.
This work is supported by a Grant-in-Aid for Scientific Research from the MEXT.
[99]{}
Y. Kamihara [*et al.*]{}: J. Am. Chem. Soc. [**130**]{} (2008) 3296.
H. Kito [*et al.*]{}: J. Phys. Soc. Jpn. [**77**]{} (2008) 063707.
N. Takeshita [*et al.*]{}: J. Phys. Soc. Jpn. [**77**]{} (2008) 075003.
Y. Nakai [*et al.*]{}: J. Phys. Soc. Jpn. [**77**]{} (2008) 073701.
K. Matano [*et al.*]{}: to be published in J. Phys. Soc. Jpn. [**77**]{} (2008).
S. Ishibashi [*et al.*]{}: J. Phys. Soc. Jpn. [**77**]{} (2008) 053709.
T. Sato [*et al.*]{}: J. Phys. Soc. Jpn. [**77**]{} (2008) 063708.
H. Takahashi [*et al.*]{}: Nature [**453**]{} (2008) 376.
Z. A. Ren [*et al.*]{}: Chin. Physics. Lett. [**25**]{} (2008) 2215.
G. F. Chen [*et al.*]{}: Chin. Physics. Lett. [**25**]{} (2008) 2235.
Z. A. Ren [*et al.*]{}: Euro. Phys. Lett. [**82**]{} (2008) 57002.
X. H. Chen [*et al.*]{}: Nature [**453**]{} (2008) 761.
C. Cruz [*et al.*]{}: Nature [**453**]{} (2008) 899.
M. Rotter [*et al.*]{}: arXiv:0805.4021v1.
C. Krellner [*et al.*]{}: arXiv: 0806.1043v1.
M. Pfisterer and G. Nagorsen: Z. Naturforsch. B: Chem. Sci. [**35**]{} (1980) 703.
M. Pfisterer and G. Nagorsen: Z. Naturforsch. B: Chem. Sci. [**38**]{} (1983) 811.
Q. Huang [*et al.*]{}: arXiv:0806.2776v1.
J.-Q. Yan [*et al.*]{}: arXiv:0806.2711v1.
M. Rotter [*et al.*]{}: arXiv:0805.4630v1.
G. F. Chen [*et al.*]{}: arXiv:0806.1209v1.
Y. Tomioka: private communications.
T. J. Bastow: J. Phys.: Condens. Matter [**11**]{} (1999) 569.
G. C. Carter [*et al.*]{}: [*Metallic Shifts in NMR*]{} (Pergamon Press, Oxford, New York, Toronto, Sydney, Paris, Frankfurt, 1977) Chap. 2&6.
H.-J. Grafe [*et al.*]{}: arXiv:0805.2595v2.
H. Mukuda [*et al.*]{}: arXiv:0806.3238v1.
G.-q. Zheng [*et al.*]{}: Physica C [**260**]{} (1996) 197.
\[Wang\] X. F. Wang [*et al.*]{}: arXiv:0806.2452v1.
S.-H. Baek [*et al.*]{}: arXiv: 0807.1084v2.
W. W. Simmons [*et al.*]{}: Phys. Rev. [**127**]{} (1962) 1168.
H. Fukazawa [*et al.*]{}: Rev. Sci. Instrum. [**78**]{} (2007) 015106.
K. Hirayama [*et al.*]{}: J. Phys. Soc. Jpn. [**77**]{} (2008) 075001.
[^1]: E-mail address: hideto@nmr.s.chiba-u.ac.jp
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'Wave functions and density matrices represent our knowledge about a quantum system and give probabilities for the outcomes of measurements. If the combined dynamics and measurements on a system lead to a density matrix $\rho(t)$ with only diagonal elements in a given basis $\{|n\rangle\}$, it may be treated as a classical mixture, i.e., a system which randomly occupies the basis states $|n\rangle$ with probabilities $\rho_{nn}(t)$. Equivalent to so-called smoothing in classical probability theory, subsequent probing of the occupation of the states $|n\rangle$ may improve our ability to retrodict what was the outcome of a projective state measurement at time $t$. Here, we show with experiments on a superconducting qubit that the smoothed probabilities do not, in the same way as the diagonal elements of $\rho(t)$, permit a classical mixture interpretation of the state of the system at the past time $t$.'
author:
- 'D. Tan'
- 'M. Naghiloo'
- 'K. Mølmer'
- 'K. W. Murch'
---
The quantum mechanical wavefunction, $\psi(x)$, yields the probability for detection of a particle at location $x$, but most textbooks carefully emphasize that this does not imply that, prior to detection, the particle *was* at the location $x$ with that probability. In contrast, a density matrix $\rho$ is often attributed an interpretation as a classical random mixture of quantum states, i.e., the system is said to populate one out of several candidate states. A density matrix $\rho$ which is diagonal in a particular basis $|n\rangle$, indeed, leads to the same predictions about projective measurements in that basis, $P(n)=\rho_{nn}$, as if states had been randomly drawn from that basis with those probabilities. Moreover, for any general measurement, described by a positive operator valued measure (POVM) [@Nielsenbook] with operators $\Omega_m$ that fulfill $\sum_m \Omega_m^\dagger \Omega_m = I$ (the identity operator), the outcome probabilities $P(m) = \textrm{Tr}(\Omega_m \rho{\textcolor{black}{(t)}} \Omega_m^\dagger)$ equal the weighted mean of the probabilities over states $|n\rangle$, $$\begin{aligned}
\label{eq:weighted}
P^{cm}(m) = \sum_n P(n) \textrm{Tr}\bigl(\Omega_m \rho_n \Omega_m^\dagger\bigr),\end{aligned}$$ where $\rho_n=|n\rangle \langle n|$.
In the past quantum state (PQS) formalism, the probability for the outcome $m$ of a measurement performed at time $t$, conditioned on both earlier and later measurement records, involves the density matrix $\rho(t)$ and a backward-propagated effect matrix $E(t)$, $$\label{eq:pqsprob}
P_P(m) = \frac{\textrm{Tr}(\Omega_m \rho(t) \Omega_m^\dagger E(t))}{\sum_{m'} \textrm{Tr}(\Omega_{m'} \rho(t) \Omega_{m'}^\dagger E(t))}.$$
When applied to measurements on quantum systems, the PQS expression Eq. (\[eq:pqsprob\]) reveals unique features such as anomalous weak values [@Ahar88] arising from the pre- and postselection process [@Hatr13; @murc13traj; @Groe13; @Lang14] and quantum coherence [@dres15; @tan15]. Smoothed predictions for the outcomes of measurements on quantum systems have been tested in a variety of experimental systems [@tan15; @Ryba15; @whea10] and they have been used in the interpretation of temporal signal correlation functions [@camp13; @chan15; @xu15; @Foro16]. If both $\rho(t)$ and $E(t)$ are diagonal in the basis $\{|n\rangle\}$, Eq. (\[eq:pqsprob\]) yields the smoothed probability for the outcome of a projective measurement in that basis, $$\label{eq:qfb}
P_P(n)=\frac{\rho_{nn}{\textcolor{black}{(t)}}E_{nn}{\textcolor{black}{(t)}}}{\sum_{n'}\rho_{n'n'}{\textcolor{black}{(t)}}E_{n'n'}{\textcolor{black}{(t)}}}.$$ If these smoothed probabilities permitted a classical-mixture interpretation of the state at time $t$, the outcome probabilities for any general measurement would, in analogy with Eq. (\[eq:weighted\]), be given by $$\label{eq:cmprob}
P_P^{cm}(m)= \sum_{{\textcolor{black}{n}}} P_P(n) \textrm{Tr}\bigl(\Omega_m|n\rangle \langle n|\Omega_m^\dagger\bigr).$$
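The tension between Eq. (\[eq:pqsprob\]) and Eq. (\[eq:cmprob\]) is already visible in a simple numerical example (illustrative values, not the experimental ones): for diagonal $\rho(t)$ and $E(t)$ the two expressions agree for projective measurements along $z$, but differ for a tilted measurement axis.

```python
import numpy as np

p0, e0 = 0.7, 0.9                 # diagonal rho(t) and E(t) (illustrative)
rho = np.diag([p0, 1 - p0])
E = np.diag([e0, 1 - e0])

theta = np.pi / 3                 # measurement axis tilted from z
c, s = np.cos(theta / 2), np.sin(theta / 2)
plus = np.array([c, s])           # eigenstate of Pi_{+,theta}
minus = np.array([-s, c])         # eigenstate of Pi_{-,theta}

def pqs_plus():
    # Eq. (eq:pqsprob) with Omega_m the projectors along the tilted axis:
    # Tr(Pi rho Pi E) = <v|rho|v><v|E|v> for Pi = |v><v|
    w = [(v @ rho @ v) * (v @ E @ v) for v in (plus, minus)]
    return w[0] / sum(w)

def cm_plus():
    # Eq. (eq:qfb) followed by Eq. (eq:cmprob): smoothed classical mixture
    pn = np.diag(rho) * np.diag(E)
    pn = pn / pn.sum()
    return pn[0] * c**2 + pn[1] * s**2

print(pqs_plus(), cm_plus())   # 7/9 vs 8/11 -- the two predictions differ
```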
So far, we merely observed an inconsistency between different theoretical predictions for experiments. We shall now present experiments on a superconducting qubit, where projective test measurements in bases different from the density matrix eigenbasis will illustrate and confirm Eq. (\[eq:pqsprob\]) while rejecting the classical mixture leading to Eq. (\[eq:cmprob\]).
Our experiment, depicted in Figure \[fig:rho\]a, consists of a superconducting transmon circuit that is dispersively coupled to an aluminum cavity [@EXP]. The dispersive interaction between the qubit and cavity is given by an interaction Hamiltonian $H_\mathrm{int.} = -\hbar \chi \sigma_z a^\dagger a$, where $a^\dagger (a)$ are the creation (annihilation) operators for a photon in the cavity mode and $\chi$ is the dispersive coupling rate. This interaction allows quantum non-demolition (QND) measurements of the qubit in the $\sigma_z$ basis, represented by the projection operators $\Pi_{\pm,z}$, through probing of the qubit-state-dependent cavity resonance. We achieve measurement fidelities in excess of $95\%$, with the predominant sources of infidelity arising from qubit transitions [@slic12; @sank16] that occur during the finite duration of the measurement [@Evan14; @Rist12; @John12; @Mack15; @Wall05; @sank16; @chen16].
We can make more general projective measurements by combining measurements in the $\sigma_z$ basis with arbitrary rotations ($R_x^\theta,\ R_y^\theta$) about the $x$ and $y$ axes of the qubit. For example, a projective measurement along the axis that forms an angle $\theta$ with the $z$ axis and azimuthal angle $\phi = 0$ can be performed through the rotated projectors, $\Pi_{ \pm,\theta} = R^{-\theta}_y\ \Pi_{\pm,z}\ R^{\theta}_y$ (Fig. \[fig:rho\]b). In the following, these projective measurements will constitute the POVMs, $\Omega_{ \pm,\theta} = \Omega_{ \pm,\theta}^\dagger \equiv \Pi_{ \pm,\theta}$, for which we will test the predictions, Eqs. (\[eq:cmprob\], \[eq:pqsprob\]). If the qubit is described by a diagonal density matrix $\rho{\textcolor{black}{(t)}}$, the probability of obtaining eigenvalue $+1$ from such a measurement is given by $$\begin{aligned}
P_\rho(+,\theta)=\rho_{00}{\textcolor{black}{(t)}} \cos^2\bigg(\frac{\theta}{2}\bigg)
+ \rho_{11}{\textcolor{black}{(t)}} \sin^2\bigg(\frac{\theta}{2}\bigg).\label{Ptheta}\end{aligned}$$
In Figure \[fig:rho\]c we test the predictions given by Eq. (\[Ptheta\]) for different values of $\rho{\textcolor{black}{(t)}}$. To prepare different mixed states, we apply a qubit rotation pulse $R_y^{\varphi}$ followed by a projective measurement $\Pi_{\pm, z}$. When the result of this measurement is ignored, the projective measurement decoheres the system and prepares the qubit in a diagonal mixed state in the qubit basis eigenstates $|0(1)\rangle \equiv | +(-)z\rangle$ with $\rho_{00}{\textcolor{black}{(t)}}$ and $\rho_{11}{\textcolor{black}{(t)}}$ determined by the initial rotation angle $\varphi$ and $T_1$ decay during the first measurement. Following this preparation, we make projective measurements at different angles $\theta$ to determine $\tilde{P}(+,\theta) {\textcolor{black}{\equiv}} N_+/(N_++N_-)$ from the number of positive (negative) eigenvalue results $N_+$ ($N_-$). The projective measurements $\Pi_{\pm, \theta}$ are subject to infidelities originating predominantly from $T_1$ decay during the $t_\mathrm{m} = 400$ ns projective measurement. This results in a $\theta$-dependent measurement fidelity that is given by the overlap of the $\Pi_{\pm, \theta}$ eigenstates and the qubit excited state, $\mathcal{F}_\theta = 0.99-\sin^2 (\theta/2) (1-e^{-t_\mathrm{m}/T_1})$ and ranges from $0.945$ when $\theta = \pi$ to $0.99$ when $\theta = 0$. The maximum readout fidelity of $0.99$ arises from residual overlap of the measurement distributions. After correcting for the measurement fidelity, the predictions given by $\rho{\textcolor{black}{(t)}}$ are in good agreement with the measured probabilities as shown in Figure \[fig:rho\]c.
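The construction $\Pi_{\pm,\theta} = R_y^{-\theta}\,\Pi_{\pm,z}\,R_y^{\theta}$ and the prediction of Eq. (\[Ptheta\]) can be checked against each other numerically. A minimal consistency sketch, using an arbitrary diagonal state (not the experimental values):

```python
import numpy as np

def ry(theta):
    # Rotation about the y axis of the qubit by angle theta
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

Pi_plus_z = np.diag([1.0, 0.0])     # projector onto |0> = |+z>
rho = np.diag([0.8, 0.2])           # arbitrary diagonal density matrix

# Compare Tr(Pi_{+,theta} rho) with Eq. (Ptheta) over a range of angles
dev = max(abs(np.trace(ry(-t) @ Pi_plus_z @ ry(t) @ rho).real
              - (0.8 * np.cos(t / 2) ** 2 + 0.2 * np.sin(t / 2) ** 2))
          for t in np.linspace(0.0, np.pi, 7))
print(dev)   # ~0: the rotated projector reproduces Eq. (Ptheta)
```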
We now address how the subsequent continuous probing of the qubit in the $\sigma_z$ basis, as depicted in Figure \[fig2\]a, yields our smoothed predictions for the outcomes of the projective measurements $\Pi_{\pm, \theta}$. After the dispersive interaction, the phase of the coherent probe field depends on the qubit state, and the time integral $\xi$ of the measured $Q$-quadrature is Gaussian distributed with opposite mean values for the states $|0(1)\rangle$. In Fig. \[fig2\]b, we show the experimentally obtained distributions $P(\xi|0)$ and $P(\xi|1)$, where we have normalized the integrated signal to have mean values $\pm 1$ for the two qubit states. The Gaussian widths are significant for short probing times and become much narrower when the system is probed for longer. For a given measured signal $\xi$, we can extract the values $P(\xi|0)$ and $P(\xi|1)$, i.e., the probability of the measured signal conditioned on the state. By Bayes’ rule, these are precisely the factors multiplying the prior probabilities $\rho_{nn}{\textcolor{black}{(t)}}$ to yield the classical smoothing theory. *I.e.*, if we disregard the effect of qubit decay during the probing, they yield the values of $E_{00}{\textcolor{black}{(t)}}$ and $E_{11}{\textcolor{black}{(t)}}$ in Eq. (2), $$\begin{aligned}
\label{Emap}
E_{00}{\textcolor{black}{(t)}}=\frac{P(\xi|0)}{P(\xi|1)+P(\xi|0)}, \
E_{11}{\textcolor{black}{(t)}} =\frac{P(\xi|1)}{P(\xi|1)+P(\xi|0)},\end{aligned}$$ where we have applied a common normalization factor, leading to Tr($E$)$=1$. Fig. \[fig2\]c shows how the inferred normalized value of $E_{00}$ ($E_{11}=1-E_{00}$) depends on the measured signal $\xi$. The continuous probing constitutes a QND measurement of the qubit state, and the accumulated back-action on the qubit state populations in the forward propagation of $\rho$ [@koro11] amounts to the same factors—which confirms that the evolution of $E$ is, indeed, equivalent to the evolution of $\rho$ (the QND back-action is equal to its adjoint).
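As a concrete sketch, the normalized effect-matrix elements of Eq. (\[Emap\]) can be computed from the two Gaussian signal distributions. The sign convention (state $|0\rangle$ giving mean $+1$) and the Gaussian width used below are illustrative assumptions, not values taken from the experiment.

```python
import math

def gaussian(x, mean, sigma):
    """Normal density modeling the distribution of the integrated signal xi."""
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def effect_matrix(xi, sigma):
    """Diagonal elements (E00, E11) of the trace-normalized effect matrix.

    Implements Eq. (Emap): Bayes factors built from P(xi|0) and P(xi|1).
    Assigning mean +1 to |0> and -1 to |1> is an illustrative choice.
    """
    p_xi_0 = gaussian(xi, +1.0, sigma)  # P(xi|0)
    p_xi_1 = gaussian(xi, -1.0, sigma)  # P(xi|1)
    e00 = p_xi_0 / (p_xi_0 + p_xi_1)
    return e00, 1.0 - e00

e00, e11 = effect_matrix(xi=0.5, sigma=1.0)
```

A signal deep in the positive tail drives $E_{00}\to 1$, while $\xi=0$ is completely uninformative and gives $E_{00}=E_{11}=1/2$.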
For a projective measurement in the qubit basis $(\theta = 0)$ at time $t$, $\rho(t)$ leads to the prediction $P_{\rho}(0) = \rho_{00}(t)$, while the pair of matrices $\bigl(\rho(t),E(t)\bigr)$ implies $${\textcolor{black}{P_P(0)\equiv P_P(+,0) =}} \frac{\rho_{00}(t)E_{00}(t)}{\rho_{00}(t)E_{00}(t)+\rho_{11}(t)E_{11}(t)}.$$ If the values of $P_P(0)$ and $P_P(1)=1-P_P(0)$ could be interpreted as refined populations of a classical mixture of the two qubit states at time $t$, the projective measurement corresponding to $\Pi_{+,\theta}$ would have the probability $$\begin{aligned}
\label{Pcmtheta}
P_P^{cm}(+,\theta)=P_P(0) \cos^2\left(\frac{\theta}{2}\right)
+ P_P(1) \sin^2\left(\frac{\theta}{2}\right),\end{aligned}$$ while insertion of the projection operators $\Pi_{\pm,\theta}$ for $\Omega_m$ in the smoothed-probability formula yields the expression $$\label{eq:Ppthetared}
P_P(+,\theta) = \frac{P_\rho(+,\theta) P_E(+,\theta)}{P_\rho(+,\theta) P_E(+,\theta) + P_\rho(-,\theta)P_E(-,\theta)},$$ where $P_\rho(+,\theta)$ is given in Eq. (\[Ptheta\]), and we have introduced the formally similar $P_E(+,\theta) = E_{00} \cos^2\left(\frac{\theta}{2}\right) + E_{11} \sin^2\left(\frac{\theta}{2}\right)$ and $P_\rho(-,\theta)=1-P_\rho(+,\theta)$, $P_E(-,\theta)=1-P_E(+,\theta)$.
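The difference between the smoothed prediction of Eq. (\[eq:Ppthetared\]) and the classical mixture expression of Eq. (\[Pcmtheta\]) can be made explicit in a short numerical sketch; the values $\rho_{00}=0.91$ and $E_{00}=0.25$ used below correspond to one of the experimental combinations discussed in the text.

```python
import math

def p_rho(theta, rho00):
    """P_rho(+,theta), Eq. (Ptheta): prediction from the prior state alone."""
    return rho00 * math.cos(theta / 2) ** 2 + (1 - rho00) * math.sin(theta / 2) ** 2

def p_E(theta, e00):
    """Same functional form evaluated on the effect matrix E."""
    return e00 * math.cos(theta / 2) ** 2 + (1 - e00) * math.sin(theta / 2) ** 2

def p_smoothed(theta, rho00, e00):
    """P_P(+,theta), Eq. (eq:Ppthetared): past-quantum-state (smoothed) prediction."""
    num = p_rho(theta, rho00) * p_E(theta, e00)
    den = num + (1 - p_rho(theta, rho00)) * (1 - p_E(theta, e00))
    return num / den

def p_classical_mixture(theta, rho00, e00):
    """P_P^cm(+,theta), Eq. (Pcmtheta): refined populations read as a classical mixture."""
    p0 = p_smoothed(0.0, rho00, e00)  # P_P(0)
    return p0 * math.cos(theta / 2) ** 2 + (1 - p0) * math.sin(theta / 2) ** 2

theta = math.pi / 3
p_smooth = p_smoothed(theta, 0.91, 0.25)
p_cm = p_classical_mixture(theta, 0.91, 0.25)
```

At $\theta=0$ and $\theta=\pi/2$ the two expressions coincide, but at intermediate angles they differ by several percent, which is the kind of disagreement exhibited by the data.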
In our experiment, the signal related to $E{\textcolor{black}{(t)}}$ is obtained from the continuous probing that follows the measurement $\Pi_{\pm, \theta}$. $E{\textcolor{black}{(t)}}$ is given by Eq. ($\ref{Emap}$) and depicted in Figure \[fig2\]c. In Figure \[fig2\]a, we display our experimental results that test the prediction of Eq. ($\ref{eq:Ppthetared}$) for three different combinations of $\rho{\textcolor{black}{(t)}}$ and $E{\textcolor{black}{(t)}}$ [@eref]. The experimental and theoretical curves show good agreement and highlight how information before and after the projective measurement contributes to the smoothed prediction.
Figure \[fig3\] summarizes our experimental results, showing the measured $\tilde{P}(+,\theta)$ as a function of the angle $\theta$ and the post-selected value of $E_{00}{\textcolor{black}{(t)}}$ (the corresponding values of the integrated signal $\xi$ are given on the right-hand axis in the figure). Results are shown for three different density matrices $\rho{\textcolor{black}{(t)}}$ prior to the projective measurement along the direction $\theta$. For $\theta=\pi/2$ both conventional and smoothed predictions assign unbiased probabilities $0.5$ to the outcomes $\pm,\theta$. For any $\theta$ and for all three values of $\rho{\textcolor{black}{(t)}}$, a certain value of the probing signal after the projective measurements results in an unbiased smoothed prediction $P_P(+,\theta)=0.5$. This amounts to an increased uncertainty about the outcome and it happens because the subsequent probing of the system is at loggerheads with the prior state $\rho{\textcolor{black}{(t)}}$ (e.g., $\rho_{00}{\textcolor{black}{(t)}} = 0.91$, $E_{00}{\textcolor{black}{(t)}} = 0.25$, cf., Fig. 2c). Conversely, when $\rho{\textcolor{black}{(t)}}$ and $E{\textcolor{black}{(t)}}$ are similar (e.g., $\rho_{00}{\textcolor{black}{(t)}} = 0.91$, $E_{00}{\textcolor{black}{(t)}} = 0.94$, cf., Fig. 2c), the later probing “confirms” the prediction by $\rho{\textcolor{black}{(t)}}$, and thus enhances the probability of the most likely outcome of the projective measurement. These trends are most clearly observed in Figure \[fig4\], where we compare the measurement probabilities $\tilde{P}(+,\theta)$ to the smoothed prediction $P_P(+,\theta)$ and the classical mixture interpretation $P_P^\mathrm{cm} (+,\theta)$. Notably, the figure shows a clear disagreement of the experimental data with the classical mixture interpretation.
In conclusion, we have presented a description of a quantum system, evolving without developing coherences, and hence, both prior and posterior information about the system are represented by diagonal matrices. While the theory of smoothing yields probabilities in better agreement with predictions for the outcome of measurements in the eigenstate basis, we have shown that these probabilities do not permit a classical mixture interpretation of the (past) quantum state.
At a more foundational level, our work dismisses simple “hidden variable theories” that equate eigenstates of incoherent ensembles with hidden “true” states of the system, and it offers an illustration of the problematic character of macrorealism [@legg85] which separates the evolution of quantum states and the measurements performed. Rather than demonstrating an explicit statistical violation of the Bell [@Bell65], CHSH [@CSCH81], or Leggett-Garg [@Groe13; @pala10; @will08; @gogg11; @whit15] inequalities, we have merely shown the failure of the simplest preconceived probabilistic classical mixture interpretation of the quantum description, and we have shown that the pair of matrices $\rho(t)$ and $E(t)$ offers a satisfactory account of the outcomes of past measurements on a quantum system.
We acknowledge P. Harrington and N. Foroozani for discussions and assistance with the manuscript and G. Zhao, L. Xu, and L. Yang for fabrication assistance. This research was supported in part by the John Templeton Foundation and the Sloan Foundation and used facilities at the Institute of Materials Science and Engineering at Washington University. K.M. acknowledges support from the Villum Foundation.
Correspondence and requests for materials should be addressed to K.W.M. (murch@physics.wustl.edu)
[38]{} natexlab\#1[\#1]{}bibnamefont \#1[\#1]{}bibfnamefont \#1[\#1]{}citenamefont \#1[\#1]{}url \#1[`#1`]{}urlprefix\[2\][\#2]{} \[2\]\[\][[\#2](#2)]{}
, ** (, ).
W.H. Press, S.A. Teukolsky, W.T. Vetterling,and B.P. Flannery, Numerical Recipes: The Art of Scientific Computing,3. Ed., (Cambridge University Press, New York, 2007).
R. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition, Proc. IEEE **77**, 257 (1989).
, , , ****, ().
, ****, ().
, ****, ().
, , , ****, ().
, ****, ().
, , , ****, ().
S. Gammelmark, K. Mølmer, W. Alt, T. Kampschulte, and D. Meschede, Phys. Rev. A **89**, 043839 (2014).
, , , ****, ().
, , , , , , , , , , , ****, ().
, , , , ****, ().
, , , , , , , , ****, ().
, , , , , , , , , ****, ().
, ****, ().
, , , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , , , , ****, ().
, , , , , , ****, ().
, ****, ().
, , , , ****, ().
, , , , , ****, ().
The experimental set-up is similar to previous work [@tan15],
, , , , , , , , ****, ().
, , , , , , , , , , , ().
, , , , , , , , , , , ****, ().
, , , , ****, ().
, , , , , , , ****, ().
, , , , , , , , ****, ().
, , , , , , , , ****, ().
, , , , , , , , , , , ****, ().
.
, ().
, ****, ().
, ****, ().
, , , ****, ().
, , , , , , , ****, ().
, ****, ().
, , , , , , , ****, ().
, , , , , , , , , , , ().
| {
"pile_set_name": "ArXiv"
} |
Nature of Gravitation
Dr. A.V. Rykov
Chief of Seismometry lab. of IPE RAS, Moscow, Russia.
*The photoeffect (the vacuum analogue of the photoelectric effect) is used to study the structure of the physical vacuum, the outcome of which is the basis for a hypothesis on the nature of gravitation and inertia. The source of gravitation is the vacuum, which has a weak massless elementary electrical dipole (+/-) charge. Inertia is the result of the elastic force of the vacuum in opposition to the accelerated motion of material objects. The vacuum is seen as the source of attraction for all bodies according to the law of induction.*
The nature of gravitation remains one of the central problems of science, and the discovery of its true basis will introduce major changes to our understanding of the physical laws. The following hypothesis is a departure from commonly accepted physical theories. Newton presented the laws of gravitation and inertia and regarded acceleration as absolute in ambient space. Einstein’s General Theory represents gravitation as the curvature of space near gravitating masses, and inertia is seen as equivalent to gravitation. In considering the absolute or relative character of acceleration, Einstein adopted Mach’s principle, in which the property of inertia is seen as the gravitational attraction of all masses in the Universe. This is despite the paradox that an isolated rotating object should then not experience centrifugal forces. It is commonly acknowledged in physics that the curvature of space - time is sufficient, and gravitation is not required. However this concept is not convincing even from a philosophical point of view. The physics of the past century has continued the methodology of prior centuries, which is to search for answers to problems of HOW? and not WHY?, considering the latter approach to be religious rather than physical. For example, the Big Bang generates the whole substance of the Universe from a mathematical point, presumably under no influence other than God. The theories view the physical vacuum as playing an exclusive role in all interactions except gravitation. Exchange forces are implemented in the vacuum with the help of virtual particles: photons in electromagnetic interaction, mesons in nuclear forces and gluons in nucleons. Gravitons, as exchange field quanta, have not received sufficient development in the quantum theory of gravitation, although a similar approach to the above is indicated. The nature of gravitation is here presented as follows: the vacuum is composed of massless charge dipoles, one component having a small charge superiority over the other.
In this manner, it is possible to represent a primitive scheme of universal gravity and antigravity:
( body1 +) ($-$ + $-$ + $-$ vacuum $-$ + $-$ + $-$) (+ body2 )
- The Coulomb attraction (gravity) in the presence of material bodies,
($---$ vacuum $---$ )
- The Coulomb self-repulsion (antigravity) in the absence of material bodies or for bodies separated by large distances in space. The nonzero sum of the charges is indicated visually: \[($-$) is numerically greater than (+).\] The ratio of gravitation and repulsion in the universe determines the numerical value of the $\Lambda $-member in Einstein’s theory \[1, P. Davis, 1985\].
First we shall remove a blunder of physics present in Coulomb’s formula. It lies in the fact that the parameters of the vacuum were placed in the denominator of the formulas for the electric and magnetic forces. We shall introduce the inverse values:
$\eta =\frac 1\mu =1.0000000028\cdot 10^7[a^2kg^{-1}m^{-1}s^2]$ is the magnetic constant of the vacuum, equal to the inverse of the magnetic permeability. $\xi =\frac 1\varepsilon =8.98755179\cdot 10^9[a^{-2}m^3kg\cdot s^{-4}]$ is the dielectric constant of the vacuum, equal to the inverse of the dielectric permittivity. Newton’s and Coulomb’s formulas then take an identical form, and the speed of light acquires the more natural expression $c=\sqrt{\eta \xi }$.
Experimental physics provides the necessary data for the study of the vacuum. We mean the data on photoeffects in vacuum, on nuclei and on nucleons \[2, Karjakin N.I. and others, 1964\]. Let us recall the values of the gamma-quanta energies: 1, 137, 1836, 3672 MeV ($2m_ec^2,\;137\cdot 2m_ec^2,\;1836\cdot 2m_ec^2,\;1836\cdot 4m_ec^2$). This series of energies gives valuable information for the physical ideas about the structure of the vacuum and of matter \[3, Rykov A.V., 2001\].
A gamma-quantum of frequency $\nu$ deforms the structure of the cosmic vacuum. Acting within the distance $r_e$ between its elements, the gamma-quantum creates a deformation $\Delta r_e$. The deformation energy is $e_oE\Delta r_e$, where $e_o$ is the elementary charge and $E$ is the electrical intensity of the structure. To avoid the well-known experimental noise accompanying the real birth of an electron+positron pair by a gamma-quantum, we shall take the energy equation in the pure case:

$h\nu =e_oE\Delta r_e$ (1),
where $h$ is Planck’s constant. The deformation is a function of time:

$\Delta r_e=\Delta [r_e\sin (2\pi \nu t)]=2\pi \nu r_e\Delta t\cos (2\pi \nu t)$ (2).
Let us define the intensity of the electrical field, where $N$ is a coefficient of proportionality:
$E=N\xi \frac{e_o}{r_e^2}$ (3).
Let us substitute the obtained expressions, the amplitude from (2) and the intensity from (3), into (1):
$h=2\pi Ne_o^2\xi \frac 1{r_e/\Delta t}$ (4).
We can quite naturally assume that $r_e/\Delta t=c$, the speed of light. Let us find the unknown quantity:
$N=\frac h{2\pi e_o^2r_q}=137.035990905=\alpha ^{-1}$ ! (5),
where $r_q=\sqrt{\xi /\eta }$. We have obtained the well-known formula for Planck’s constant:
$h=2\pi e_o^2\alpha ^{-1}\sqrt{\xi /\eta }=6.6260755(40)\cdot 10^{-34}$ (6).
At this stage we should clarify the situation with the choice of numerical values for $h$ or $\alpha ^{-1}$. All subsequent values are calculated on the basis of $h$. But $\alpha ^{-1}$ is in reality more fundamental than $h$, because the latter is derived from the vacuum parameters $e_o$, $\alpha ^{-1}$, $\xi$, $\eta$. The choice made here is based upon this quite new study of the vacuum.
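The numerical claim of Eq. (6) is easy to verify; the sketch below uses the 1986 CODATA value of the elementary charge together with the vacuum constants quoted above (these inputs are assumptions of the check, not new data):

```python
import math

e0 = 1.60217733e-19        # elementary charge, C (CODATA-86, an assumed input)
alpha_inv = 137.035990905  # inverse fine-structure constant, from Eq. (5)
eta = 1.0000000028e7       # inverse magnetic permeability, as defined above
xi = 8.98755179e9          # inverse dielectric permittivity, as defined above

# Eq. (6): h = 2*pi*e0^2*alpha^-1*sqrt(xi/eta)
h_pred = 2.0 * math.pi * e0**2 * alpha_inv * math.sqrt(xi / eta)
```

The result agrees with the quoted $h=6.6260755\cdot 10^{-34}$ to a few parts per million.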
A gamma-quantum of energy $w\geq 1$ MeV interacting with the vacuum converts a ”virtual” electron-positron pair into a real one. The energy equation of this conversion is:

$w=h\nu _{rb}=\xi \frac{e_o^2}{r_e}$ (7),

where $r_e$ is the distance between the (+) and (-) charges of the vacuum structure and $\nu _{rb}=2.4892126289\cdot 10^{20}$ Hz is the ”red border” frequency of the gamma-quantum. The exact value is determined below. Let us find $r_e$:
$r_e=\frac{\xi \alpha }{2\pi r_q\nu _{rb}}=\frac{c\alpha }{2\pi \nu _{rb}}=1.398763188\cdot 10^{-15}m$ (8).
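A quick numerical check of Eq. (8), and of the limiting deformation of Eq. (9) below, using the constants quoted in the text:

```python
import math

c = 2.99792458e8             # speed of light, m/s
alpha = 1.0 / 137.035990905  # fine-structure constant, from Eq. (5)
nu_rb = 2.4892126289e20      # "red border" frequency, Hz

r_e = c * alpha / (2.0 * math.pi * nu_rb)   # Eq. (8)
dr_e = alpha * r_e                          # Eq. (9)
```

Both reproduce the quoted values $r_e=1.398763188\cdot 10^{-15}\:$m and $\Delta r_e=1.020726874\cdot 10^{-17}\:$m.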
We have from (2) $\Delta r_e=2\pi \nu _{rb}r_e\Delta t=\frac{2\pi \nu _{rb}r_e}{c}\,r_e=\alpha \cdot r_e$ under the assumption $r_e/\Delta t=c$. In other words, this is the limit of vacuum deformation above which a rupture of the structural ties occurs:
$\Delta r_e=\alpha \cdot r_e=1.020726874\cdot 10^{-17}m$ (9).
The exact value is $\nu _{rb}=\frac c{2\pi r_e\alpha ^{-1}}=2.48921263\cdot 10^{20}\:$Hz. Deformation of the structure below this value has an electroelastic character. Let us find the coefficient of elasticity $b$ from the following equation:

$f=b\Delta r_{rb}=\xi \frac{e_o^2}{r_e^2}$, $b=1.155219829\cdot 10^{19}\:[kg\cdot s^{-2}]$ (10).
The dipoles can be polarized, and the polarization is:

$\sigma _{\Delta r}=\alpha ^{-2}\frac{e_o}{4\pi r_e^4}(\Delta r)^2=S(\Delta r)^2$, where

$S=\alpha ^{-2}\frac{e_o}{4\pi r_e^4}=6.254509137\cdot 10^{43}\:[Q\cdot m^{-4}]$ (11).
Another useful parameter of the vacuum is:

$E_\sigma =\sqrt{\gamma \xi }=0.77440463$ $[a^{-1}m^3s^{-3}]$ (12).

The names for these parameters are not yet established.
At this stage we have obtained the main parameters of the vacuum structure. That the vacuum structure is massless follows from the fact that the energy required for the creation of an electron+positron pair is defined by the energy equation $w=2m_oc^2+2m_oc^2/137.036$, where $2m_oc^2$ goes into the birth of the two particle masses and $2m_oc^2/137.036$ goes into breaking the dipole tie.
The dielectric vacuum medium has bound charges. A moving charge generates a Maxwell displacement current $j$. This current generates the magnetic strength $\overline{dH}=\frac 1c\overline{j}$, where $\overline{j}=\frac 1{4\pi }\frac{d\overline{E}}{dt}$. The $\overline{H}$ is the magnetic component necessary, together with $\overline{E}$, for an electromagnetic wave (light). The vacuum structure is the natural medium for the excitation and propagation of light in space. Thus, the connected charges - dipoles - are re-translators of an electromagnetic wave. Light reaching the observer is not the initial photon emitted at the source, but must be viewed as a multiply-relayed signal.
It is natural to assume that the longitudinal polarization of the dipoles of space involves gravitational phenomena. Gravitation is explained by the electrostatic ”field”, which is transmitted in vacuo as a longitudinal signal. The longitudinal motion of the polarized front between connected charges is not accompanied by the appearance of a parallel magnetic field moving in one direction, and of identical sign. The magnetic strength should in this case surround the displacement current of moving charges similar to a current in a conductor. As electrostatics and gravitation act as central and frequently spherical forces, the total magnetic strength of the displacement currents appears equal to zero for gravitating objects or those charged with static electricity. The outcome is minimal damping. This implies an extremely large and almost instantaneous speed of propagation of longitudinal waves in the vacuum. The universe appears to be an interconnected system in which any part ”feels” in full unity with the whole. It is only in this way that it is capable of existence and development. In essence, cosmology cannot manage without ”instantaneous” gravitational transfer.
The laws of Newton and Coulomb can be united in the following way:
$f=G\frac{m^2}{R^2}=\xi \frac{q^2}{R^2}$, so that $\rho =\sqrt{\frac G\xi }=8.6164135164\cdot 10^{-11}\:[Q\cdot kg^{-1}]$ is the electrical charge of one kg of any mass. The same value may be expressed through micro parameters: $\rho =e_o\sqrt{\frac{2\pi G}{ch\alpha }}=8.6164135\cdot 10^{-11}$. The gravitational constant is defined by the parameters of the vacuum, $G=\xi \frac{e_o^2}{m_x^2}=6.67259049725\cdot 10^{-11}$ \[kg$^{-1}$m$^3$s$^{-2}$\], where $m_x=m_{Pl}\sqrt{\alpha }=1.8594480544\cdot 10^{-9}\:$kg and $m_{Pl}$ is the Planck mass. This is indirect evidence of the electrical nature of gravitation.
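These relations can be checked numerically. Note that the dimensionally consistent form of the last relation is $G=\xi e_o^2/m_x^2$, equivalent to $G=\xi\rho^2$ with $\rho=e_o/m_x$; the Planck mass below is a standard reference value inserted as an assumption of the check:

```python
import math

xi = 8.98755179e9            # inverse dielectric permittivity
e0 = 1.60217733e-19          # elementary charge, C
alpha = 1.0 / 137.035990905  # fine-structure constant
m_Pl = 2.17671e-8            # Planck mass, kg (standard value; an assumption here)

m_x = math.sqrt(alpha) * m_Pl   # the mass scale m_x = sqrt(alpha) * m_Pl
G = xi * e0**2 / m_x**2         # gravitational constant from vacuum parameters
rho = math.sqrt(G / xi)         # electrical charge of one kg of any mass, Q/kg
```

This reproduces $G\approx 6.6726\cdot 10^{-11}$ and $\rho\approx 8.6164\cdot 10^{-11}\:$Q/kg, and $\rho=e_o/m_x$ holds identically.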
It is necessary to state that it is impossible to formally transfer accepted physical concepts regarding material substance to the structured vacuum indicated here. Strength $E=\xi \frac q{R^2}$ and potential $U=\xi \frac qR$. For example, the calculation of the acceleration of gravity for the Earth in terms of electrical forces gives $g=\sqrt{G\xi }\frac{\rho \,M}{R^2}$. For instance, the Earth has $g=9.82$ m/s$^2$ and an electrical strength $E=1.1402\cdot 10^{10}$ V/m in *vacuum*. This is nonsense from the usual point of view. However, it is not surprising given that the electrical strength of an electron is $1.8367\cdot 10^{20}$ V/m and of a proton $6.399\cdot 10^{26}$ V/m. This is the medium in which ”the microparticles exist”, and of which the macro bodies consist. Distances between the constituents of atoms exceed the indicated distance by 3-4 orders of magnitude. The vacuum penetrates everywhere, whether in a dielectric or a conductor. Therefore it must be realized that customary concepts of shielding or electrical voltage are here completely unsuitable. It is impossible, for example, to arrange a conductor between gravitating bodies to shield the operation of gravity. It is impossible to arrange electrodes in space to remove and use the electrical voltage of the vacuum. The carriers of electricity in a substance and in the vacuum are completely different. The interaction of bodies with the vacuum is implemented at the level of the electrons and nucleons of substances. Gravitation also begins at the same level, finally being integrated in macroscopic masses.
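The terrestrial value quoted above can be reproduced from this formula; the Earth's mass and mean radius are standard values inserted for illustration (they are not given in the text):

```python
import math

G = 6.67259e-11       # gravitational constant, m^3 kg^-1 s^-2
xi = 8.98755179e9     # inverse dielectric permittivity
rho = 8.6164135e-11   # "gravitational charge" of 1 kg, Q/kg
M_earth = 5.972e24    # Earth mass, kg (standard value, assumed)
R_earth = 6.371e6     # Earth mean radius, m (standard value, assumed)

g = math.sqrt(G * xi) * rho * M_earth / R_earth**2
```

Since $\rho=\sqrt{G/\xi}$, the prefactor $\sqrt{G\xi}\,\rho$ equals $G$ identically, so numerically this is just Newton's $g=GM/R^2\approx 9.82\:$m/s$^2$.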
We state that the force of elastic electrical deformation is defined as

$f=b\Delta r_{rb}=\xi \frac{e_o^2}{r_e^2}$ and $b=1.155065\cdot 10^{19}$ \[kg/s$^2$\], (13)

where $b$ is the factor of electrical elasticity. The charge polarization is

$\sigma =Q/4\pi R^2$ \[Q/m$^2$\]. (14)
Using formulas (11), (14) and $g=G\frac M{R^2}$ for the acceleration of gravity we have:

$g=4\pi \sqrt{G\xi }S(\Delta r_g)^2$ m/s$^2$. (15)

The longitudinal deformation of the vacuum dipoles by a gravitating object determines the acceleration of gravity and, conversely, the acceleration of gravity determines the deformation of the vacuum structure. We calculate the maximum acceleration from (15) and (9):

$g_{\max }=6.3409\cdot 10^{10}$ m/s$^2$. (16)
The force of electroelastic deformation from (9) is defined by the maximum acceleration of an unknown mass $m_x$:

$b\Delta r_{rb}=g_{\max }m_x$. (17)

The unknown mass is determined from equation (17):

$m_x=\sqrt{\alpha }m_{Pl}=1.859459\cdot 10^{-9}$ kg, where $m_{Pl}$ is Planck’s mass!
This gives $Q=\rho m_x=1.602177\cdot 10^{-19}\:$Q, the value of the charge of an electron (!), inadvertently revealing a surprising connection among the values $\rho ,\alpha ,m_x,m_{Pl}$, which indirectly supports the gravitational theory.
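Equation (17) and the charge coincidence can be checked with the numbers quoted above:

```python
b = 1.155219829e19       # elasticity coefficient, Eq. (10), kg/s^2
dr_rb = 1.020726874e-17  # limiting deformation, Eq. (9), m
g_max = 6.3409e10        # maximum acceleration, Eq. (16), m/s^2
rho = 8.6164135e-11      # gravitational charge of 1 kg, Q/kg

m_x = b * dr_rb / g_max  # Eq. (17)
Q = rho * m_x            # should reproduce the elementary charge
```

Both the quoted $m_x$ and the elementary-charge value are recovered to better than $0.1\:\%$.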
Mass provides the ability to determine the mechanism of gravitation through the availability of a gravitational charge. We now calculate the number of pairs of electrons and positrons forming the vacuum dipoles in this mass: $n=m_x/m_o=2.0412553\cdot 10^{21}$. From this, the value of the charge excess is determined: $\Delta e_o=e_o/n=7.848981\cdot 10^{-41}$ Q, i.e., the charge of an electron exceeds the positron charge by $7.848981\cdot 10^{-41}$ Q. Thus, the excess of negative charge over positive at the level of one part in $\sim 10^{21}$ is the basis for gravitation. It corresponds to a minimal gravitational charge for the mass of an electron or positron, i.e., $q_g=\rho m_o=7.848981\cdot 10^{-41}$ Q. We obtain here another very vivid coincidence and thus an additional proof of the validity of this description of the nature of gravity.
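The pair count and the resulting charge excess follow from elementary arithmetic (the electron mass is a standard value, inserted as an assumption):

```python
m_x = 1.8594480544e-9  # kg, from the text
m_e = 9.1093897e-31    # electron mass, kg (standard value, assumed)
e0 = 1.60217733e-19    # elementary charge, C

n = m_x / m_e          # number of electron-positron pairs in the mass m_x
delta_e0 = e0 / n      # charge excess per dipole
```

This reproduces $n\approx 2.04\cdot 10^{21}$ and $\Delta e_o\approx 7.849\cdot 10^{-41}\:$Q.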
Summary.
For centuries the nature of gravity has remained unknown. Gravity was, and is up to now, the most mysterious force of Nature. It is difficult to give references to the many published attempts to solve the problem of gravity. It seems to the author that he has found a realistic and physically based theory of non-geometric gravitation. Above there are remarkable and unexpected coincidences that serve as a source of the author’s hope that this article makes sense. Any new theory should predict new knowledge about Nature.

1) The velocity of gravity is expected to be almost infinite. An experiment on solar tide data, in comparison with local sun time, should give an estimate of the velocity of gravity.

2) There are possibilities to control the deformation of the vacuum structure by electrical and magnetic forces, by gamma-quanta radiation, etc. Thus gravity and inertia may be controlled, as the Russian scientists Roschin V.V. and Godin S.M. have shown \[4,Roschin, Godin, 2000\]. There is a tale about the wonderful discs of John R.R. Searl, which were taken by the mentioned scientists as the basis for the construction of their device.
Literature
1\. Davis P. Superforce // Publisher ”Mir”, M., 1989, 277 p.
2\. Karjakin N.I. et al. Physics Handbook // Publisher ”High School”, M., 1964, 574 p.
3\. Rykov A.V. Principles of natural physics // UIPE RAS, M.: 2001, 58 p. (in Russian).
4\. Roschin V.V., Godin S.M. Experimental research of physical effects in a dynamic magnetic system // The Letters to MTP, St.Pb, 2000, v. 26, is. 24, pp. 73-81 (in Russian).
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We argue that supersymmetric grand unification of gauge couplings is not incompatible with small ${\alpha_s}$, even without large GUT-scale corrections, if one relaxes a usual universal gaugino mass assumption. A commonly assumed relation ${M_2}\simeq{m_{\widetilde g}}/3$ is in gross contradiction with ${\alpha_s}\approx0.11$. Instead, small ${\alpha_s}$ favors ${M_2}\gg{m_{\widetilde g}}$. If this is indeed the case our observation casts doubt on another commonly used relation ${M_1}\simeq 0.5{M_2}$ which originates from the same constraint of a common gaugino mass at the GUT scale. One firm prediction emerging within the small ${\alpha_s}$ scenario with the unconstrained gaugino masses is the existence of a relatively light gluino below $\sim$ 200[GeV]{}.'
---
[**University of Minnesota**]{}
TPI-MINN-95/04-T\
UMN-TH-1330-95\
hep-ph/9503358\
March 1995\
(Revised version)
[L. Roszkowski[^1] and M. Shifman[^2] ]{}\
Introduction
============
One of the testing grounds for various models of grand unification is calculating the strong coupling constant ${\alpha_s(m_Z)}$ using, as input, the experimental values of the electromagnetic coupling constant $\alpha$ and ${\sin^2\theta_W}$, where $\theta_W$ is the Weinberg angle. These calculations have been repeatedly carried out in different models and under different assumptions (for recent reviews see, [*e.g.*]{}, Ref. [@Langacker]). It has been shown, in particular, that the simplest grand unification based on the Standard Model (SM) and $SU(5)$ gauge group leads to too small a value of the strong coupling constant, ${\alpha_s(m_Z)} = 0.073\pm 0.002$ [@Langacker2] and is, thus, ruled out [@F1]. In contrast, supersymmetric models generally predicted ${\alpha_s(m_Z)}$ in agreement [@F1] with experimental data available at that time.
A straightforward supersymmetrization of SM gives rise to the Minimal Supersymmetric Standard Model (MSSM) [@mssm_review]. Actually, to fully specify the model one has to make an additional assumption about the pattern of supersymmetry (SUSY) breaking. The most popular mechanism is that of soft breaking in which one adds to the Lagrangian all possible soft SUSY breaking terms and treats them as independent parameters. Such terms arise, [*e.g.*]{}, when the MSSM is coupled to supergravity [@minisugra]. This mechanism of generating soft terms is so deeply rooted that quite often in the current literature no distinction is made between the MSSM [*per se*]{} and the MSSM plus the assumptions of the minimal supergravity-based SUSY breaking. In fact, an overwhelming majority of papers devoted to even purely phenomenological studies of the MSSM assume some (but typically not all) relations stemming from minimal supergravity, [*e.g.*]{}, the relation between the mass parameters of the gauginos of $SU(2)$ and $U(1)$.
Encouraged by early studies [@F1], many authors (see, [*e.g.*]{}, Refs. [@rr; @roberts; @kkrw1; @cmssmstudies]) then studied unification in the context of the MSSM coupled to minimal supergravity. The set of SUSY breaking terms generated this way is quite restrictive. In particular, in the context of minimal $N=1$ supergravity the masses of all gauginos – gluinos of $SU(3)$, winos of $SU(2)$ and the bino of $U(1)$ – turn out to be the same at the Planck scale. Similarly, the soft mass parameters of all squarks and sleptons are equal at that scale. In this restrictive model, which was called the Constrained MSSM (CMSSM) [@kkrw1], one assumes universal masses for all the gauginos ($m_{1/2}$) and all the scalars ($m_0$) at the GUT scale, and often additionally imposes a mechanism of radiative electroweak symmetry breaking (EWSB) [@gsymbreak]. Accepting these assumptions one arrives at quite definite predictions for the spectra of masses of the model at the weak scale and for ${\alpha_s(m_Z)}$. For example, the gluino turns out to be roughly three times heavier than the wino [@mssm_review]. Furthermore, ${\alpha_s(m_Z)}$ generally decreases with increasing $m_{1/2}$ and $m_0$. Restricting $m_{1/2}$ and $m_0$ (or alternatively all the masses) below roughly 1[TeV]{} leads to ${\alpha_s(m_Z)}\gtrsim 0.12$ [@kkrw1; @Langacker2]. For example, an updated analysis of Ref. [@lp:new] quotes ${\alpha_s(m_Z)} = 0.129 \pm 0.008$. The theoretical error here is mostly due to uncertainty associated with the so-called threshold corrections at the GUT and low (SUSY breaking) scales and higher-dimensional non-renormalizable operators (NRO’s) in the GUT scale Lagrangian. The above prediction for ${\alpha_s(m_Z)}$ was considered a great success and the strongest evidence in favor of the MSSM in light of the fact that, as was believed, the direct measurement of the strong coupling constant at LEP and SLD yields ${{\alpha_s(m_Z)}} = 0.125\pm 0.005$ [@F2].
Recently it has been pointed out, however, that QCD cannot tolerate such a large value of the coupling constant [@Shifman]. A wealth of low-energy data indicates that ${\alpha_s(m_Z)}$ must be very close to 0.11 [@Altarelli], three standard deviations below the alleged LEP/SLD value. A method of determining ${\alpha_s}$ which seems to be clean theoretically is extracting ${\alpha_s}$ from deep inelastic scattering (DIS) [@Virchaux]. A similar number is obtained in the lattice QCD [@lattice]. Another reliable approach is using [@Eidelman; @Voloshin] (Euclidean) QCD sum rules. The observation of Ref. [@Shifman] motivated a new analysis of the $\Upsilon$ sum rules [@Voloshin] claiming the record accuracy achieved so far, $${\alpha_s(m_Z)} = 0.109\pm 0.001 \, .
\label{als_voloshin:eq}$$ The apparent clash between the low-energy determinations of the strong coupling constant and those at the $Z$ peak may be explained [@Shifman] by contributions going beyond the SM which were not taken into account in the global fits. It should be stressed that the two scenarios – large ${\alpha_s}$ versus small ${\alpha_s}$ – cannot coexist peacefully, as is sometimes implied in the current literature. Our starting point is the assumption that the large ${\alpha_s}$ option [@F3], inconsistent with crucial features of QCD, will eventually evaporate and the value of the strong coupling constant at $m_Z$ will stabilize close to 0.11. In fact, in Ref. [@Consoli] it has been argued that the systematic error usually quoted in the LEP number is grossly underestimated, and that at present LEP experiments can only claim $0.10\lesssim{\alpha_s(m_Z)}\lesssim 0.15$.
The question arises whether grand unification within the framework of the MSSM can accommodate small ${\alpha_s}\approx0.11$. This study addresses this question. Our task is to sort out assumptions (sometimes implicit) which are inevitable in analyses of this type and to find out which assumptions of the CMSSM absolutely preclude one from descending to small ${\alpha_s(m_Z)}$ and, therefore, have to be relaxed.
There are several possible ways to reconcile the prediction for ${\alpha_s(m_Z)}$ in supersymmetric grand unification with ${\alpha_s(m_Z)}\approx 0.11$. One is to remain in the context of the CMSSM but adopt a heavy SUSY scenario with the SUSY mass spectra significantly exceeding 1[TeV]{}. This scenario would not only put SUSY into both theoretical and experimental oblivion but is also, for the most part, inconsistent with our expectations that the lightest supersymmetric particle (LSP) should be neutral and/or with the lower bound on the age of the Universe of at least some 10 billion years [@kkrw1]. Another possibility is to invoke large enough negative corrections due to GUT-scale physics. The issue has been reanalyzed in a very recent publication [@lp:new]. Under a natural assumption (the so-called no-conspiracy assumption) it was found that ${\alpha_s(m_Z)}>0.12$. Relaxing this assumption one can, in principle, construct models of the CMSSM with large negative contributions coming, say, from NRO’s which could decrease the value of ${\alpha_s(m_Z)}$ by $\sim 10\%$ [@lp:new; @urano]. (Alternatively, one can entertain the possibility of an intermediate scale [@mohapatra] around $10^{11}{\rm\,GeV}$ whose existence is motivated by other reasons. In this case, however, many more unknowns affect the running of the gauge couplings and one cannot really talk about [*predicting*]{} ${\alpha_s(m_Z)}$.) None of these possibilities seem particularly appealing to us. Although it may well happen that the GUT-scale and NRO corrections are abnormally large, the guiding idea of grand unification becomes much less appealing in this case, and the predictive power is essentially lost. Indeed, by appropriately complicating GUT-scale physics one could, perhaps, achieve gauge coupling unification even in the Standard Model.
Below we will discuss an alternative route. We will adopt a down-to-earth, purely phenomenological attitude, with no assumptions about mechanisms of SUSY breaking. We do not assume $N=1$ supergravity, nor any mass relations associated with this scheme, for instance, the equality of the gaugino masses at the GUT scale. If no theoretical scheme for the mass generation of SUSY partners is specified one is free to consider any values of these masses. Our task is to try to find out what pattern of masses is preferred by phenomenology. We consider the MSSM and limit ourselves to a “minimal set” of assumptions: (i) all gauge coupling constants are exactly equal to each other at the GUT scale; (ii) the breaking of supersymmetry occurs below 1[TeV]{}.
We will show that by relaxing the CMSSM to the MSSM one can easily descend to ${\alpha_s(m_Z)} \approx 0.11$. The only effect which is actually important in dramatically reducing the minimal value of ${\alpha_s(m_Z)}$ is untying the gluino and wino masses. One firm conclusion is a relatively light gluino (in the ballpark of 100[GeV]{}, and typically below 200[GeV]{}) and a relatively heavy wino (at least a few hundred[GeV]{}), [*i.e.*]{}, a relation opposite to the one emerging in the CMSSM. This summarizes our main results.
Calculating ${\alpha_s(m_Z)}$ from grand unification
====================================================
Procedure
---------
The procedure for predicting ${\alpha_s(m_Z)}$ assuming gauge coupling unification has been adequately described in the literature (see, [*e.g.*]{}, Ref. [@kkrw1] and references therein), and we will only summarize it briefly here. The strategy is simple: the coupling constants $\alpha_1$ and $\alpha_2$ (which are known more accurately than ${\alpha_s}$) are evolved from their experimental values at $m_Z$ up to the point where they intersect (which thus defines the unification scale ${M_X}$ and the gauge strength ${\alpha_X}$). At that point one identifies ${\alpha_s}$ with ${\alpha_X}$ and runs it down to $m_Z$, thus predicting the value of ${\alpha_s(m_Z)}$ as a function of input parameters. One- and two-loop corrections are taken into account.
The renormalization group equations (RGE’s) for the gauge couplings are given by $$\frac{d\alpha_i}{d t} = \frac{b_i}{2\pi}\alpha_i^2
+ \mbox{two loops},
\label{rge:eq}$$ where $i=1,2,3$, $t\equiv\log(Q/m_Z)$ and $\alpha_1\equiv\frac{5}{3}\alpha_Y$. The one-loop coefficients $b_i$ of the $\beta$ functions for the gauge couplings change across each new running mass threshold. In the MSSM they can be parametrized as follows [@rr; @gutcorrs; @kkrw1] $$\begin{aligned}
\lefteqn{b_1= \frac{41}{10}
+ \frac{2}{5}\theta_{{\widetilde{H}}}+\frac{1}{10}\theta_{H_2}}
\nonumber \\
& &\mbox{}+\frac{1}{5}\sum_{i=1}^{3}
\left\lbrace\frac{1}{12}\left(
\theta_{{\tilde{u}}_{L_i}}
+ \theta_{{\tilde{d}}_{L_i}}\right)
+ \frac{4}{3}\theta_{{\tilde{u}}_{R_i}}
+ \frac{1}{3}\theta_{{\tilde{d}}_{R_i}}
+ \frac{1}{4}\left(\theta_{{\tilde{e}}_{L_i}}
+ \theta_{{\tilde{\nu}}_{L_i}}\right)
+ \theta_{{\tilde{e}}_{R_i}}\right\rbrace
\label{b1:eq} \\
\lefteqn{b_2= -\frac{19}{6}
+ \frac{4}{3}\theta_{\widetilde W}
+ \frac{2}{3}\theta_{\widetilde{H}} + \frac{1}{6}\theta_{H_2}
+ \frac{1}{2}\sum_{i=1}^3\left\lbrace\theta_{{\tilde{u}}_{L_i}}
\theta_{{\tilde{d}}_{L_i}}
+ \frac{1}{3}\theta_{{\tilde{e}}_{L_i}}
\theta_{{\tilde{\nu}}_{L_i}}\right\rbrace }
\label{b2:eq} \\
\lefteqn{b_3= -7 + 2\,\theta_{\widetilde g}
+ \frac{1}{6}\sum_{i=1}^3\left\lbrace\theta_{{\tilde{u}}_{L_i}}
+ \theta_{{\tilde{d}}_{L_i}}
+ \theta_{{\tilde{u}}_{R_i}} +
\theta_{{\tilde{d}}_{R_i}}\right\rbrace}
\label{b3:eq}\end{aligned}$$ where $\theta_x\equiv\theta(Q^2-m_x^2)$.
In Eqs. (\[b1:eq\])–(\[b3:eq\]) ${\tilde{H}}$ stands for the (mass degenerate) higgsino fields, $\widetilde{W}$ for the winos, the partners of the $SU(2)$ gauge bosons ($m_{\widetilde{W}}\equiv{M_2}$), and ${\widetilde g}$ stands for the gluino, all taken to be mass eigenstates in this approximation. Also, in this approximation $H_2$ stands for a heavy Higgs doublet, as explained in Ref. [@kkrw1]. (The full 2-loop gauge coupling $\beta$-functions for the SM and the MSSM which we use in actual calculations can be found, [*e.g.*]{}, in Ref. [@bbo].)
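As a quick consistency check (not part of the analysis itself), setting every $\theta_x=1$ in Eqs. (\[b1:eq\])–(\[b3:eq\]) should recover the familiar fully supersymmetric one-loop coefficients $(b_1,b_2,b_3)=(33/5,\,1,\,-3)$. The short script below performs this exact-fraction arithmetic; the grouping of terms mirrors the equations above.

```python
from fractions import Fraction as F

# All theta functions set to 1 (every MSSM state active), Eqs. (b1)-(b3).
ngen = 3  # three sfermion generations

# b1: constant terms plus (1/5) * sum over generations of the brace in (b1)
b1 = F(41, 10) + F(2, 5) + F(1, 10) + F(1, 5) * ngen * (
    F(1, 12) * 2 + F(4, 3) + F(1, 3) + F(1, 4) * 2 + 1)

# b2: constant terms plus (1/2) * sum over generations of the brace in (b2)
b2 = F(-19, 6) + F(4, 3) + F(2, 3) + F(1, 6) + F(1, 2) * ngen * (1 + F(1, 3))

# b3: -7 plus gluino plus (1/6) * sum over generations of four squark thetas
b3 = -7 + 2 + F(1, 6) * ngen * 4

print(b1, b2, b3)  # the familiar MSSM values 33/5, 1, -3
```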
Eqs. (\[b1:eq\])–(\[b3:eq\]) represent the so-called leading-log approximation and involve some simplifications. However, as we will argue later, this approximation will be sufficient to present the basic points of our analysis and to answer the question of how low one can descend in the value of ${\alpha_s(m_Z)}$ assuming only strict unification of the gauge couplings in the MSSM.
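The one-loop part of the procedure described above is compact enough to sketch numerically. The script below is only a rough illustration (not the full two-loop, threshold-corrected calculation of this paper): it assumes all superpartner thresholds at $m_Z$, uses the central input values quoted in the text, intersects $\alpha_1$ and $\alpha_2$, and runs $\alpha_3$ back down.

```python
import math

# One-loop MSSM beta-function coefficients (all superpartners active).
B1, B2, B3 = 33.0 / 5.0, 1.0, -3.0

# Illustrative inputs at Q = m_Z (central values quoted in the text).
ALPHA_EM = 1.0 / 127.9
S2W = 0.2316
ALPHA1 = (5.0 / 3.0) * ALPHA_EM / (1.0 - S2W)  # alpha_1 = (5/3) alpha_Y
ALPHA2 = ALPHA_EM / S2W

def run_inverse(alpha_inv, b, t):
    """One-loop running of 1/alpha over t = log(Q/m_Z)."""
    return alpha_inv - b * t / (2.0 * math.pi)

# Unification point: 1/alpha_1(t_X) = 1/alpha_2(t_X), solved for t_X.
t_X = 2.0 * math.pi * (1.0 / ALPHA1 - 1.0 / ALPHA2) / (B1 - B2)
alpha_X_inv = run_inverse(1.0 / ALPHA1, B1, t_X)
M_X = 91.19 * math.exp(t_X)

# Identify alpha_3(M_X) with alpha_X and run back down to m_Z.
alpha_s_inv = run_inverse(alpha_X_inv, B3, -t_X)

print(f"M_X ~ {M_X:.2e} GeV, alpha_X ~ {1.0 / alpha_X_inv:.4f}")
print(f"one-loop alpha_s(m_Z) ~ {1.0 / alpha_s_inv:.4f}")
```

With these inputs one lands near $M_X\sim10^{16}{\rm\,GeV}$ and a one-loop ${\alpha_s(m_Z)}$ in the 0.11–0.12 range, before the two-loop shift of about $+0.012$ discussed below.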
The prediction for ${\alpha_s(m_Z)}$ depends on the adopted values of the input parameters: $\alpha$, ${\sin^2\theta_W}(m_Z)$, and ${m_t}$. It also receives corrections from: the two-loop gauge and Yukawa contributions, scheme dependence (${\rm\overline{MS}}$ [*versus*]{} ${\rm\overline{DR}}$), mass thresholds at the electroweak scale and, finally, the GUT-scale mass thresholds and NRO contributions. We will discuss these effects in turn now.
The input values of $\alpha_1$ and $\alpha_2$ at $Q=m_Z$ can be extracted from the experimental values of $\alpha(m_Z)$ and ${\sin^2\theta_W}(m_Z)$. For the electromagnetic coupling we take [@pdb] $$\alpha(m_Z)={1\over{127.9\pm0.1}}.
\label{alphaeminput:eq}$$ Recently, three groups have reanalyzed $\alpha(m_Z)$ [@alpha_recent] and obtained basically similar results: $\alpha(m_Z)^{-1}=
127.96\pm0.06$ (Martin and Zeppenfeld), $127.87\pm0.10$ (Eidelman and Jegerlehner), and $128.05\pm0.10$ (Swartz). Adopting even the largest (central) value of Swartz would shift ${\alpha_s(m_Z)}$ up by only 0.001 [@lp:new].
The range of input values of ${\sin^2\theta_W}(m_Z)$ is rather critical. This sensitivity is due to the fact that $\alpha_2(Q)$ does not change between $Q=m_Z$ and the GUT scale $Q={M_X}$ as much as the other two couplings. Thus, a small increase in ${\sin^2\theta_W}(m_Z)$ has an enhanced (and negative) effect on the resulting value of ${\alpha_s(m_Z)}$. Following Ref. [@lp:new] we assume [@F4] $${\sin^2\theta_W}(m_Z)=0.2316\pm0.0003 - 0.88\times10^{-7}{{\rm\,GeV}}^2
\left[{m_t}^{2}
- (160{\rm\,GeV})^{2} \right].
\label{s2winput:eq}$$ Moreover, the global analysis of Ref. [@EL] implies that in the MSSM ${m_t}=160\pm13{\rm\,GeV}$. Recently, both the CDF and D0 collaborations have reported discovery of the top quark and quoted somewhat higher mass ranges: ${m_t}=176 \pm8\pm10{\rm\,GeV}$ (CDF) [@cdf:top] and ${m_t}= 199\pm
20\pm22{\rm\,GeV}$ (D0) [@dzero:top]. Such high (central) values of ${m_t}$ would lower ${\sin^2\theta_W}(m_Z)$ and [*increase*]{} ${\alpha_s(m_Z)}$ by 0.002 and 0.005, respectively.
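The size of the ${m_t}$-induced shift in Eq. (\[s2winput:eq\]) is easy to evaluate directly; the snippet below uses the central value only (errors dropped) for the three top masses quoted above.

```python
def sin2_theta_w(m_top):
    """Central input value of sin^2(theta_W)(m_Z), Eq. (s2winput), m_top in GeV."""
    return 0.2316 - 0.88e-7 * (m_top**2 - 160.0**2)

for m_top in (160.0, 176.0, 199.0):
    print(f"m_t = {m_top:5.1f} GeV  ->  sin^2(theta_W)(m_Z) = {sin2_theta_w(m_top):.5f}")
```

The downward shift in ${\sin^2\theta_W}(m_Z)$ for the heavier (central) CDF and D0 masses is what drives the quoted increases of ${\alpha_s(m_Z)}$ by 0.002 and 0.005.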
Including the two-loop terms in the RGE’s increases ${\alpha_s(m_Z)}$ by about 10%. There are two types of contributions to ${\alpha_s(m_Z)}$ at the two-loop level. The pure gauge term yields $\Delta{\alpha_s(m_Z)}=0.012$ if one assumes SUSY in both one- and two-loop coefficients of the $\beta$ function all the way down to $Q=m_Z$. This is the most important correction to the one-loop value of ${\alpha_s(m_Z)}$. If, instead, the two-loop coefficients of the pure gauge part are changed to their SM values at $Q=1{\rm\,TeV}$, one finds an additional shift $\Delta{\alpha_s(m_Z)}\approx 0.0007$. Since this shift is negligibly small, we keep the two-loop coefficients supersymmetric all the way down to $m_Z$. Corrections due to the Yukawa-coupling contribution to the RGE’s are also small, although negative [@lp:new]. In the limit of large top Yukawa coupling (${h_t}\simeq1$, ${h_b}\simeq0\simeq {h_\tau}$, as in the small $\tan\beta\simeq1$ scenario) one finds $\Delta{\alpha_s(m_Z)}=-0.0015$ while even in the extreme case of the large $\tan\beta$ scenario (${h_t}\simeq {h_b} \simeq {h_\tau}\simeq1$) $\Delta{\alpha_s(m_Z)}=-0.004$, in agreement with Ref. [@lp:new].
Above $Q=1$[TeV]{} we also change from the conventional ${\rm\overline{MS}}$ scheme, that we use throughout this paper, to the fully supersymmetric ${\rm\overline{DR}}$ scheme. The corresponding shift in ${\alpha_s(m_Z)}$ is about 0.0002 and is negligible numerically [@Langacker2; @kkrw1; @gutcorrs].
Before proceeding to discussing in more detail the contribution from one-loop threshold effects, a remark is in order on possible corrections from the GUT-scale mass thresholds and NRO’s. Since in this paper we look for an alternative way of lowering ${\alpha_s(m_Z)}$, we switch off all corrections from the GUT-scale physics whatsoever. As was noted previously [@gutcorrs; @Langacker2; @urano; @lp:new] they are GUT-model dependent and, in principle, can be sizeable. For instance, according to Refs. [@Langacker2; @lp:new] the corresponding effect in ${\alpha_s(m_Z)}$ can be as large as $\sim0.008$; a factor of 2.5 larger effect is needed, however, to ensure ${\alpha_s(m_Z)}\approx 0.11$. Building a fully elaborated and phenomenologically acceptable model of this type seems to be a task for the future.
What remains to be done is to explain our treatment of the mass thresholds at the electroweak and SUSY scales. We use the usual step-like approximation in the coefficients of the $\beta$-function, Eqs. (\[b1:eq\])–(\[b3:eq\]). In the one-loop coefficients the jumps occur at the positions of the masses of the individual particles while in the two-loop coefficients it is sufficient, to our accuracy, to consider one jump at a common SUSY scale, as explained above. As a matter of fact, with no loss of accuracy, we take this scale in the two-loop coefficients to be lower than $m_Z$ so that in our evolution from ${M_X}$ down to $m_Z$ we treat the two-loop coefficients as fully supersymmetric. Also, the $t$ quark is not frozen at ${m_t}$ in the two-loop coefficients. It is well known that the step-like approximation is not absolutely accurate in the problem of the coupling constant evolution (see, [*e.g.*]{}, Ref. [@kataev] for a recent discussion), especially if the mass thresholds are rather close to $m_Z$, as is the case with the $t$ quark. We find that the other thresholds are far less important, since, as we vary their positions, the effect of the variation mimics the non-logarithmic corrections omitted in the step approximation. The error in ${\alpha_s(m_Z)}$ due to the inaccuracy of our approximation of the ${\alpha_s}$ evolution at ${m_t}$ is less than 1% and is, thus, unimportant.
MSSM with gauge unification only
--------------------------------
The question we want to address is whether supersymmetric grand unification necessarily predicts large values of ${\alpha_s(m_Z)}{\lower.7ex\hbox{$\;\stackrel{\textstyle>}{\sim}\;$}}0.12$ as long as all SUSY masses are restricted to lie below 1[TeV]{}. This is indeed the case in the CMSSM with additional assumptions of common gaugino mass and common scalar mass, as described in the Introduction.
In order to track the role of these mass relations we begin by treating the masses of the different types of states as completely independent parameters. We choose to remain open-minded and not biased by any additional (even well-motivated) assumptions about the parameters involved, other than the basic idea of gauge coupling unification. Thus, we assume no relation between squarks and sleptons, or between the gauginos. (Actually, the structure of supersymmetry alone forces certain relations between sfermion masses and gaugino masses, thus disallowing, for example, very light squarks and very heavy gauginos [@ibanez]. We will see below that this will not have any substantial effect on our results.) We also do not impose a mechanism of radiative electroweak symmetry breaking (EWSB). We will see [*a posteriori*]{} that requiring EWSB will not change our conclusions significantly.
In Fig. \[allmass:fig\] we show ${\alpha_s(m_Z)}$ as a function of the mass of each relevant type of state. We assume all other masses to be degenerate and equal to either 100[GeV]{} or 1[TeV]{}. Generally, we will treat all squarks and all sleptons as mass-degenerate. The only exception to this rule will be the scalar top states, ${{\tilde t}_L}$ and ${{\tilde t}_R}$. This is because their masses are typically expected to be significantly different from the other squarks and from each other.
It is obvious from the form of the $\beta$-functions, Eqs. (\[b1:eq\])–(\[b3:eq\]), that the resulting value of ${\alpha_s}(m_Z)$ will most sensitively depend on two parameters only: the gluino mass ${m_{\widetilde g}}$ and the soft mass parameter ${M_2}$ of the wino. The reasons are twofold: not only are their $\beta$-function coefficients among the largest but also they change only one out of the three $b_i$’s. Fig. \[allmass:fig\] clearly confirms our expectation. Also, Table \[als:table\] shows ${\alpha_s(m_Z)}$ for several choices of relevant parameters. The first four rows are meant to demonstrate the dependence of ${\alpha_s(m_Z)}$ on ${M_2}$ and ${m_{\widetilde g}}$.
We are interested in the lowest possible values of ${\alpha_s(m_Z)}$ allowed by (strict) grand unification. As is obvious from Fig. \[allmass:fig\], minimization of ${\alpha_s(m_Z)}$ requires minimizing ${m_{\widetilde g}}$ and ${m_{{\tilde t}_R}}$ while simultaneously maximizing the masses of the wino, the sleptons, the higgsino, and of the heavy Higgs. We have also verified that, in order to minimize ${\alpha_s(m_Z)}$, one should also set ${m_{\tilde{q}}}$ (${m_{{\tilde t}_L}}$) at its lowest (largest) possible value. Since the “standard” prediction for ${\alpha_s(m_Z)}$ emerging in the CMSSM is quoted above under the assumption that all sparticles are lighter than 1[TeV]{} we accordingly restrict all the masses to that range. At the lower end, we allow the masses to lie as low as $100{\rm\,GeV}$. (Lowering this limit down to $m_Z$ would not noticeably change ${\alpha_s(m_Z)}$ [@bagger].) In the last row of Table \[als:table\] we show the lowest value of ${\alpha_s(m_Z)}$ obtained by varying all the mass parameters between 100[GeV]{} and 1[TeV]{}. Experimental bounds on most of those states are still below $m_Z$. Even for ${m_{\widetilde g}}$ and the masses of the squarks there are no inescapable lower bounds, other than roughly $m_Z/2$ from LEP [@galtieri]. (Very recently, the D0 collaboration [@galtieri] has published new improved limits: ${m_{\widetilde g}}>144{\rm\,GeV}$ for any ${m_{\tilde q}}$ and ${m_{\widetilde g}}>212{\rm\,GeV}$ for ${m_{\widetilde g}}={m_{\tilde q}}$. Adopting these limits in the last row of Table \[als:table\] would increase ${\alpha_s^{\rm min}(m_Z)}$ by only 0.002 and 0.003, respectively.)
We also display in Fig. \[allmass:fig\] ${\alpha_s^{\rm min}(m_Z)}$ (thick solid line) as a function of the mass of each individual state, while setting all the other masses as in the last row of Table \[als:table\]. It is clear that in general one can easily obtain values of ${\alpha_s(m_Z)}$ small enough to accommodate the range ${\alpha_s(m_Z)}\approx0.11$ which we favor. Furthermore, ${\alpha_s(m_Z)}$ shows little dependence on the masses of the states other than the $SU(2)$ and $SU(3)$ gauginos. Therefore one actually has considerable freedom in choosing the other masses as desired. This justifies our approach of assuming all sleptons to be mass-degenerate, and similarly with squarks. Furthermore, the relatively weak dependence of ${\alpha_s(m_Z)}$ on the mass of the higgsino (which we approximate by the Higgs/higgsino mass parameter $\mu$) shows that imposing EWSB would probably not lead to any strong increase in the lower bound on ${\alpha_s(m_Z)}$. This is because the conditions of EWSB determine $\mu$ in terms of (soft) Higgs mass parameters which influence ${\alpha_s(m_Z)}$ even less.
It is also evident from the gluino window of Fig. \[allmass:fig\] that the mass of the gluino is strongly confined to rather small values in the range of a few hundred [GeV]{} only. This is a distinctive feature and a strong prediction of our approach. The exact value of the upper bound on ${m_{\widetilde g}}$ that one allows clearly depends on how large GUT- related corrections one assumes and also how large values of ${\alpha_s(m_Z)}$ one is willing to accept.
On the other hand, the wino mass parameter ${M_2}$ should preferably be larger than ${m_{\widetilde g}}$, contrary to what is commonly expected. This is clearly shown in Fig. \[winogluino:fig\] where, in the plane (${m_{\widetilde g}},{M_2}$), we plot the lowest allowed values of ${\alpha_s(m_Z)}$ found by assuming all other mass parameters as in the last row of Table \[als:table\]. It is clear that ${\alpha_s(m_Z)}\approx0.11$ favors relatively small ${m_{\widetilde g}}$ and large ${M_2}$.
Relating gaugino masses
-----------------------
Among perhaps the most commonly assumed, and least questioned, relations are the ones between the mass parameters of the gauginos $$\begin{aligned}
{M_1}&=& {5\over3}\tan^2\theta_{\rm W}\,{M_2}\simeq\,0.5 {M_2},
\label{monemtwo:eq} \\
{M_2} &=& \frac{\alpha_2}{{\alpha_s}}{m_{\widetilde g}}
\simeq\,0.3{m_{\widetilde g}},
\label{mtwomgluino:eq}\end{aligned}$$ where the SUSY breaking parameters ${M_1}$, ${M_2}$ and ${m_{\widetilde g}}$ of the bino, the wino, and the gluino states are evaluated at the electroweak scale. Virtually all phenomenological and experimental studies adopt at least the relation (\[monemtwo:eq\]). Strictly speaking, however, neither relation is necessary in the context of the MSSM. They both originate from the assumption that, in minimal $SU(5)$ $N=1$ supergravity, the kinetic term of the gauge bosons and gauginos is equal to a Kronecker delta. Clearly, [*a priori*]{} this assumption is not an indispensable part of the MSSM.
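The numerical factors $\simeq0.5$ and $\simeq0.3$ quoted in Eqs. (\[monemtwo:eq\])–(\[mtwomgluino:eq\]) follow directly from the electroweak inputs given earlier; the back-of-the-envelope check below assumes ${\alpha_s}=0.118$ purely for illustration.

```python
# Representative electroweak-scale inputs (central values quoted in the text).
ALPHA_EM = 1.0 / 127.9
S2W = 0.2316
ALPHA_S = 0.118  # illustrative; the paper argues for ~0.11

tan2_w = S2W / (1.0 - S2W)
ratio_M1_M2 = (5.0 / 3.0) * tan2_w        # Eq. (monemtwo): M_1/M_2
alpha_2 = ALPHA_EM / S2W
ratio_M2_mg = alpha_2 / ALPHA_S           # Eq. (mtwomgluino): M_2/m_gluino

print(f"M_1/M_2        ~ {ratio_M1_M2:.2f}")   # close to 0.5
print(f"M_2/m_gluino   ~ {ratio_M2_mg:.2f}")   # close to 0.3
```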
From our previous analysis it is evident that any additional assumption relating the masses of the wino and the gluino will have a significant impact on the prediction of ${\alpha_s(m_Z)}$. In Fig. \[xgluinogut:fig\] we plot ${\alpha_s^{\rm min}(m_Z)}$ versus ${m_{\widetilde g}}$ for ${M_2}=x\,{m_{\widetilde g}}$. We set all the other masses in such a way as to minimize ${\alpha_s(m_Z)}$, as in the last row of Table \[als:table\]. We also show the lowest allowed ${\alpha_s(m_Z)}$ (thick solid curve) as a function of ${m_{\widetilde g}}$ only by setting also ${M_2}=1{\rm\,TeV}$. It is clear that the usually assumed ratio $x\approx 0.3$ forces ${\alpha_s(m_Z)}$ above $\sim0.120$. To be consistent with ${\alpha_s(m_Z)}\approx 0.11$ the ratio $x{\lower.7ex\hbox{$\;\stackrel
{\textstyle>}{\sim}\;$}}3$ is required. This corresponds to ${M_2}{\lower.7ex\hbox{$\;\stackrel{\textstyle>}{\sim}\;$}}
9\,{m_{\widetilde g}}$ at the GUT scale.
Furthermore, Fig. \[xgluinogut:fig\] shows that the mass of the gluino must again be rather small, ${m_{\widetilde g}}{\lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$}}
300{\rm\,GeV}$, in the absence of large GUT-scale corrections, unless one allows for the wino mass parameter ${M_2}$ significantly above 1[TeV]{}.
The above considerations put into doubt also the relation (\[monemtwo:eq\]), which has its root in the same assumption of the equality of all the gaugino masses at the GUT scale. It is true that the mass parameter of the bino ${M_1}$ does not enter Eqs. (\[b1:eq\])–(\[b3:eq\]) and cannot be directly related to ${M_2}$ and ${m_{\widetilde g}}$. However, in the CMSSM the lightest neutralino almost invariably comes out to be an almost pure bino [@roberts; @kkrw1] and ${m_\chi}\simeq{M_1}$. It is also an excellent dark matter candidate. There are also stringent limits on the cosmic abundance of exotic particles with color and electric charges. Requiring that the lightest (bino-like) neutralino be lighter than the gluino, and thus a likely candidate for the lightest supersymmetric particle (LSP) leads to ${M_1}{\lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}
\;$}}{1\over3}{M_2}$ (or ${M_1}{\lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$}}
{2\over3}{M_2}$ at ${M_X}$), thus violating the relation (\[monemtwo:eq\]) [@dennis].
Many phenomenological and dark matter properties of the neutralinos depend on the relation (\[monemtwo:eq\]). Relaxing it may bear important consequences for neutralino detection in accelerators [@gr; @majerotto] and in dark matter searches [@gr], as well as in placing bounds on other sparticles. Basically, the mass of the (lightest) bino-like neutralino is ${m_\chi}\simeq{M_1}$. Reducing the ratio ${M_1}/{M_2}$ leads to lighter neutralinos. The region of the plane ($\mu,{M_2}$) (as it is usually presented) where $\chi$ remains mostly bino-like actually increases somewhat [@gr]. Also, even rather light neutralinos with mass in the range 3[GeV]{} to a few tens of [GeV]{} are in principle not excluded and possess excellent dark matter properties (${\Omega_\chi h_0^2}\sim1$) [@gr].
Finally, it is worth commenting that, even in the context of $N=1$ supergravity one can relax the assumptions (\[monemtwo:eq\])–(\[mtwomgluino:eq\]) [@eent; @drees]. This can be done by considering a general form of the kinetic term of the gauge and gaugino fields, rather than assuming it to be equal to unity. In this case one finds that the gauge couplings at ${M_X}$ need not be equal (thus making the GUT energy scale ${M_X}$ somewhat ill-defined) and, in general, relations among gaugino masses become arbitrary. If, however, one assumes ${M_X}\ll m_{\rm
Planck}$ then one finds, at ${M_X}$, ${m_{\widetilde g}}/{\alpha_s}=
-\frac{3}{2}{M_2}/\alpha_2 + \frac{5}{2}{M_1}/\alpha_1$ [@eent]. In the limit in which the gauge couplings are only slightly displaced from each other at ${M_X}$ we find $({m_{\widetilde g}}/{M_2})_{|_{M_X}}\simeq-\frac{3}{2}
+ \frac{5}{2}({M_1}/{M_2})_{|_{M_X}}$. One solution is the usual ${m_{\widetilde g}}={M_2}={M_1}$. But there also exist solutions to this relation which are consistent with small ${\alpha_s(m_Z)}$, for example $({m_{\widetilde g}}/{M_2})_{|_{M_X}}\simeq0.1$ and $({M_1}/{M_2})_{|_{M_X}}\simeq0.64$, in agreement with what we have found above. Thus it may be possible to reconcile ${\alpha_s(m_Z)}\approx0.11$ with some non-minimal versions of $N=1$ supergravity.
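The arithmetic behind the two solutions quoted above is a one-liner; the helper name below is ours, and the relation is the near-unified-coupling limit stated in the text.

```python
def mg_over_M2(M1_over_M2):
    """(m_gluino/M_2) at M_X from the non-minimal supergravity relation,
    in the limit of (nearly) unified gauge couplings at M_X."""
    return -1.5 + 2.5 * M1_over_M2

# Universal gaugino masses solve the relation trivially:
print(f"{mg_over_M2(1.0):.2f}")   # M_1 = M_2 gives m_gluino = M_2
# The light-gluino solution quoted in the text:
print(f"{mg_over_M2(0.64):.2f}")  # M_1/M_2 = 0.64 gives m_gluino/M_2 ~ 0.1
```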
Phenomenological consequences
=============================
The version of supersymmetric grand unification considered here leads to several distinct implications. One is the necessary existence of a relatively light gluino below $\sim$ 200 [GeV]{} and a preferably large wino mass parameter ${M_2}$. The likely violation of the commonly assumed relations (\[monemtwo:eq\])–(\[mtwomgluino:eq\]) may lead to many important consequences for placing bounds on various sparticles and to more promising prospects for neutralino dark matter searches.
Below we discuss how the existence of a light gluino affects possible solutions to the long-lasting anomaly of the $Z\rightarrow b\bar b $ width. Furthermore, ${\alpha_s}\approx 0.11$ may lead to a significant relaxation of the constraints on $\tan\beta$ from requiring $b$–$\tau$ mass unification. We discuss these points below.
Consequences of light gluino
----------------------------
If ${\alpha_s(m_Z)}\approx 0.11$ does indeed require the gluino mass to lie in the ballpark of 100[GeV]{}, as was argued above, the question which immediately comes to one’s mind is: “what are other phenomenological implications of such a light gluino?"
First and foremost, with this mass, the gluino must be accessible to direct searches at the Tevatron. Currently, a gluino mass range up to about 200[GeV]{} is probed [@galtieri] but no firm assumption-independent bounds can be drawn. On the other hand, with the Main Injector upgrade, the Tevatron experiments will be able to probe ${m_{\widetilde g}}$ in the range up to 300[GeV]{}. If the gluino is indeed found below some 240[GeV]{} and no (wino-like) chargino is found at LEP-II up to some 80[GeV]{}, we will know that the relation (\[mtwomgluino:eq\]) does not hold.
Second, light gluinos propagating in loops make the corresponding radiative corrections more pronounced. They can then become important in understanding several facts where hints on disagreement between observations and SM expectations were detected. The most well-known example of this type is the problem of ${\alpha_s}$ itself. As was noted in Refs. [@Hagi; @Djou] the gluino exchange correction to the $Zq\bar q$ vertices is positive so that the gluino correction enhances the hadronic width of $Z$, imitating in this way a larger value of ${\alpha_s}$. Fig. 2 of Ref. [@Hagi] shows that the correction can reach $\sim 0.4\%$ in each quark channel provided that ${m_{\widetilde g}}\sim 100{\rm\,GeV}$ and ${m_{\tilde{q}}}
\sim70{\rm\,GeV}$. With such a correction the value of ${\alpha_s}$ measured at the $Z$ peak slides down by $\sim 10\%$ solving the problem in full.
On the other hand, it seems extremely unlikely that the very same mechanism may be responsible for the alleged enhancement in the $b\bar b$ channel. Indeed, if we take the central value for the experimental $Z\rightarrow b\bar b $ width, the excess over the theoretical expectation amounts to $\sim 7$ MeV [@Langacker], a factor of 5 larger than the excess produced by the gluino correction above. One would have to descend to unacceptably low squark and gluino masses to get this factor of 5. Recently, another possible solution of the $R_b$ problem was suggested in Ref. [@kkw]. In this work the mass parameters of the MSSM were also considered as [*a priori*]{} unrelated. It was shown that, in order to induce large enough SUSY correction to reconcile the measured value of $R_b$ with the SM prediction, a relatively light (below roughly 80[GeV]{}) higgsino-like chargino is required. The authors also need at least one stop with a significant ${{\tilde t}_R}$ component in the same mass range. In order to examine what predictions for ${\alpha_s(m_Z)}$ this scenario leads to we have set the higgsino mass parameter $\mu$ and ${m_{{\tilde t}_R}}$ at $m_Z$, and chosen all other mass parameters in such a way as to minimize ${\alpha_s(m_Z)}$, as before. We find ${\alpha_s(m_Z)}{\lower.7ex\hbox{$\;\stackrel{\textstyle>}{\sim}\;$}}0.11$.
Another problem where the relatively light gluino can help is the deficit of the semileptonic branching ratio in $B$ mesons and the charm multiplicity [@BBSV]. Theoretical calculations of these quantities are at a rather advanced stage now. Both perturbative and non-perturbative effects have been considered. The most detailed analysis of the non-perturbative effects is carried out in Ref. [@BBSV], with the conclusion that they can be essentially neglected in the problem at hand. As for perturbative calculations, they have been repeatedly discussed in the literature. (See, [*e.g.*]{}, recent papers [@Ball; @Ball2] and references therein.) The theoretical prediction turns out to be rather sensitive to the choice of the value of ${\alpha_s}$ and the normalization scale $\mu$ relevant to the process. Smaller values of $\mu$ and larger values of ${\alpha_s}$ tend to enhance the non-leptonic width and, thus, lower the prediction for the semileptonic branching ratio. On the contrary, larger values of $\mu$ and smaller ${\alpha_s}$ suppress the non-leptonic width and enhance the branching ratio. The theoretical prediction can be made marginally compatible [@Ball2] with the data on the semileptonic branching ratio [@glasgow] provided that ${\alpha_s}$ is chosen on the high side and $\mu$ on the low side. At the same time, if ${\alpha_s(m_Z)}\approx 0.11$ the prediction for Br$_{\rm sl}(B)$ does not fall lower than 11.5% [@Kagan], while the corresponding experimental number is $(10.43\pm0.24)\%$ [@glasgow]. Moreover, no reasonable choice of the parameters above allows one to eliminate a very substantial deficit in the charm multiplicity.
Both discrepancies evaporate if the $B$ non-leptonic decays receive a contribution from the $b\rightarrow s$ + gluon transition, at the level of $\sim$ 15% of the total width. Then the theoretical prediction for Br$_{\rm sl}(B)$ shifts down to 10.4%; simultaneously, the charm multiplicity turns out to be within error bars. As was observed in Ref. [@Kagan2], in supersymmetric models such a transition can naturally arise, with the right strength, if the gluino and squark masses lie in the 100[GeV]{} ballpark. What is important is that the additional graphs giving rise to $b\rightarrow s$ + gluon transition do not spoil the $b\rightarrow s$ + photon transition. Indeed, the ratio of the photon to gluon probabilities is $(Q_d^2\alpha)/({\alpha_s}\eta^2)$ where $Q_d=1/3$ is the down quark electric charge, and $\eta$ is a numerical factor including, among other effects, an enhancement of the $b\rightarrow s$ + gluon transition due to the gluon radiative corrections. According to Ref. [@Kagan2] $\eta\sim 2.5$ to 3. With ${\alpha_s}\approx 0.11$ the ratio is close to $10^{-3}$. This means that the $b\rightarrow s$ + gluon transition can well contribute at the level of 15%; the corresponding contribution to the $b\rightarrow s\gamma$ is at the level of $10^{-4}$, which is quite acceptable phenomenologically [@bsgamma].
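The order-of-magnitude estimate of the photon-to-gluon ratio quoted above, $(Q_d^2\alpha)/({\alpha_s}\eta^2)$, can be checked directly; the snippet below uses the values stated in the text ($Q_d=1/3$, $\eta\sim2.5$–$3$, ${\alpha_s}\approx0.11$).

```python
# Order-of-magnitude check of the photon-to-gluon ratio from the text.
ALPHA_EM = 1.0 / 128.0
ALPHA_S = 0.11
Q_D = 1.0 / 3.0  # down-quark electric charge

ratios = {eta: (Q_D**2 * ALPHA_EM) / (ALPHA_S * eta**2) for eta in (2.5, 3.0)}
for eta, r in ratios.items():
    print(f"eta = {eta}: photon/gluon ratio ~ {r:.1e}")  # close to 1e-3
```

For either value of $\eta$ the ratio comes out near $10^{-3}$, so a 15% $b\rightarrow s$ + gluon contribution translates into a $b\rightarrow s\gamma$ contribution at the $10^{-4}$ level, as stated.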
$b$–$\tau$ unification
----------------------
It has been argued that, in the MSSM alone, with no additional mass relations, strict $b$–$\tau$ mass unification can only be achieved in a very narrow region of the (${m_t},\tan\beta$) plane for a wide range of ${\alpha_s(m_Z)}$ [@btau; @lp2]. However, it was noted in Ref. [@kkrw1] that, if ${\alpha_s(m_Z)}$ is small, $\sim0.11$, the above strong relation between $\tan\beta$ and ${m_t}$ can be significantly relaxed provided that the strict unification condition $h_b/h_{\tau}=1$ at the GUT scale is reduced somewhat ($\sim10\%$). (See Figs. 1 and 2 of Ref. [@kkrw1].) GUT-scale uncertainties of this size are actually typically present in GUT’s [@lp2].
Conclusions
===========
The observation that the gauge coupling constants, which look so different at the electroweak scale, evolve and converge at a scale somewhat smaller than the Planck mass was crucial in the original idea of grand unification [@gut:original]. Later on, with more accurate data and more precise calculations available, it turned out that the gauge couplings do not intersect at one point. The fact that we are off by only a relatively very small amount is very encouraging and shows that the original idea is viable, and only details must be adjusted. This first led people from the SM to the MSSM. This work concludes that, if ${\alpha_s(m_Z)}$ is indeed close to $0.11$, the gluino must be rather light, ${m_{\widetilde g}}\sim100{\rm\,GeV}$, and thus accessible to present direct searches. It is also gratifying to note that, with the mass of the gluino lying in this ballpark, other problems (like the $R_b$ excess at LEP, a deficit of the semileptonic branching ratio of B-mesons, [*etc.*]{}) might find their solutions as well. Finally, many studies of SUSY, including mass bounds on sparticles and dark matter searches, rely on the mass relations (\[monemtwo:eq\])–(\[mtwomgluino:eq\]). This analysis provides arguments for relaxing them.
Acknowledgements {#acknowledgements .unnumbered}
================
This work was supported in part by the U.S. Department of Energy under the grant number DE-FG02-94ER40823.
[99]{}
P. Langacker, [*Test of the Standard Model and Searches for New Physics*]{}, to be published in [*Precision Tests of the Standard Electroweak Model*]{}, ed P. Langacker, World Scientific, Singapore, 1994 \[hep-ph/9412361\]; [*Grand Unification and the Standard Model*]{}, invited talk at Int. Symp. [*Radiative Corrections*]{}, Gatlinburg, Tennessee, 1994, preprint UPR-0639T \[hep-ph/9411247\].
P. Langacker and N. Polonsky, Phys. Rev. [**D47**]{} (1993) 4028. (For an update see N. Polonsky, [*Unification and Low-Energy Supersymmetry at One and Two-Loop Orders*]{}, PhD Thesis, University of Pennsylvania, 1994.)
Earlier calculations were phrased in different terms, but the conclusions were the same; see, [*e.g.*]{}, U. Amaldi, W. de Boer and H. Fürstenau, Phys. Lett. [**B260**]{} (1991) 443; J. Ellis, S. Kelley and D.V. Nanopoulos, Phys. Lett. [**B260**]{} (1991) 131; P. Langacker and M.-X. Luo, Phys. Rev. [**D44**]{} (1991) 817; F. Anselmo, L. Cifarelli, A. Peterman, and A. Zichichi, Nuovo Cim. [**104A**]{} (1991) 1817, and Nuovo Cim. [**105A**]{} (1992) 581.
For reviews, see, [*e.g.*]{}, H.-P. Nilles, Phys. Rep. [**110**]{} (1984) 1; H.E. Haber and G.L. Kane, Phys. Rep. [**117**]{} (1985) 75; L.E. Ibáñez and G.G. Ross, in [*Perspectives in Higgs*]{}, ed. by G.L. Kane (World Scientific, Singapore, 1993); R. Mohapatra, [*Unification and Supersymmetry*]{}, 2nd Edition, Springer-Verlag, 1992.
L.E. Ibáñez, Phys. Lett. [**118B**]{} (1982) 73; P. Nath, R. Arnowitt, and A. Chamsedine, Phys. Rev. Lett. [**49**]{} (1982) 970; J. Ellis, D.V. Nanopoulos, and K. Tamvakis, Phys. Lett. [**121B**]{} (1983) 123; H.P. Nilles, M. Srednicki, and D. Wyler, Phys. Lett. [**120B**]{} (1982) 346; R. Barbieri, S. Ferrara, and C. Savoy, Phys. Lett. [**119B**]{} (1982) 343. R.G. Roberts and G.G. Ross, Nucl. Phys. [**B377**]{} (1992) 571. R.G. Roberts and L. Roszkowski, Phys. Lett. [**B309**]{} (1993) 329. G. Kane, C. Kolda, L. Roszkowski, and J. Wells, Phys. Rev. [**D49**]{} (1994) 6173. V. Barger, M.S. Berger, and P. Ohmann, Phys. Rev. [**D49**]{} (1994) 4908; R. Arnowitt and P. Nath, Phys. Lett. [**B287**]{} (1992) 89; S. Kelley, J.L. Lopez, D.V. Nanopoulos, H. Pois, and K. Yuan, Phys. Lett. [**B273**]{} (1991) 423; D. J. Castaño, E. J. Piard, and P. Ramond, Phys. Rev. [**D49**]{} (1994) 4882; B. de Carlos and A. Casas, Phys. Lett. [**B309**]{} (1993) 320. L.E. Ibáñez and G.G. Ross, Phys. Lett. [**110B**]{} (1982) 215; K. Inoue, A. Kakuto, H. Komatsu, and S. Takeshita, Progr. Theor. Phys. [**68**]{} (1982) 927; L. Alvarez-Gaum[é]{}, J. Polchinsky, and M. Wise, Nucl. Phys. [**B221**]{} (1983) 495; J. Ellis, D.V. Nanopoulos, and K. Tamvakis, in Ref. [@minisugra].
P. Langacker and N. Polonsky, [*The Strong Coupling, Unification and Recent Data*]{}, preprint UPR-642T \[hep-ph/9503214\].
Ref. [@Langacker] quotes an even higher value, ${\alpha_s(m_Z)}=0.127\pm0.005$. The errors are believed to be dominated by theoretical uncertainties. The most exhaustive theoretical analysis is done for the total hadronic width $\Gamma (Z\rightarrow\mbox{hadrons})$. The error quoted above, $\Delta{\alpha_s(m_Z)}=\pm 0.005$, is essentially determined by analyzing $\Gamma (Z\rightarrow\mbox{hadrons})$.
M. Shifman, [*Determining ${\alpha_s}$ from Measurements at $Z$: How Nature Prompts us about New Physics*]{}, preprint TPI-MINN- 94/42-T \[hep-ph/9501222\] (Mod. Phys. Lett., to appear).
G. Altarelli, [*QCD and Experiment – Status of ${\alpha_s}$*]{}, in [*QCD – 20 Years Later*]{}, Proceedings of the 1992 Aachen Workshop, eds. P. Zerwas and H. Kastrup \[World Scientific, Singapore, 1993\], vol. 1, page 172; S. Bethke, [*Summary of ${\alpha_s}$ Measurements*]{}, to be published in Proc. Workshop QCD ’94, Montpellier, France, July 1994, \[preprint PITHA 94/30\].
An example of the DIS data analysis can be found, [*e.g.*]{}, in M. Virchaux and A. Milsztajn, Phys. Lett. [**B274**]{} (1992) 221; A. Martin, W. Stirling and R. Roberts, [*Pinning down the Glue in the Proton*]{}, preprint RAL-95-021 \[hep-ph/9502336\]. A nice compilation is given in Ref. [@Altarelli].
A.X. El-Khadra, G. Hockney, A. Kronfeld and P. Mackenzie, Phys. Rev. Lett. [**69**]{} (1992) 729; C. Davies, K. Hornbostel, G.P. Lepage, A. Lidsey, J. Shigemitsu and J. Sloan, [*A Precise determination of*]{} ${\alpha_s}$ [*from lattice QCD*]{}, preprint OHSTPY-HEP-T-94-013 \[hep-ph/9408328\].
S. Eidelman, L. Kurdadze and A. Vainshtein, Phys. Lett. [**82B**]{} (1979) 278. M. Voloshin, [*Precision Determination of ${\alpha_s}$ and ${m_b}$ from QCD Sum Rules for $b\bar b$*]{}, preprint TPI-MINN-95/1-T \[hep-ph/9502224\] (Int. J. Mod. Phys. A, to appear).
The following terminology is used throughout the paper: if ${\alpha_s(m_Z)}$ is 0.12 or larger it will be referred to as “large ${\alpha_s}$”; if ${\alpha_s(m_Z)}$ is close to 0.11 it will be referred to as “small ${\alpha_s}$”.
M. Consoli and F. Ferroni, [*On the Value of $R=\Gamma_h/\Gamma_l$ at LEP*]{} \[hep-ph/9501371\].
D. Ring, S. Urano and R. Arnowitt, [*Planck Scale Physics and the Testability of SU(5) Supergravity*]{}, preprint CTP-TAMU-01/95 \[hep-ph/9501247\].
D.-G. Lee and R.N. Mohapatra, [*Intermediate Scales in SUSY $SO(10)$, $b$–$\tau$ Unification, and Hot Dark Matter Neutrinos*]{}, preprint UMD-PP-95-93 \[hep-ph/9502210\].
J. Ellis, S. Kelley, and D.V. Nanopoulos, Nucl. Phys. [**B373**]{} (1992) 55 and Phys. Lett. [**B287**]{} (1992) 95; J. Hisano, H. Murayama, and T. Yanagida, Phys. Rev. Lett. [**69**]{} (1992) 1014. V. Barger, M.S. Berger, and P. Ohmann, Phys. Rev. [**D47**]{} (1993) 1093. L. Montanet, [*et al.*]{} (PDG), Phys. Rev. [**D50**]{} (1994) 1173. A. Martin and D. Zeppenfeld, [*A Determination of the QED Coupling at the $Z$ Pole*]{}, preprint MAD-PH-855 \[hep-ph/9411377\]; M.L. Swartz, [*Reevaluation of the Hadronic Contribution to $\alpha (M(Z)^2)$*]{}, preprint SLAC-PUB-6710 \[hep-ph/9411353\]; S. Eidelman and F. Jegerlehner, PSI Report PR-95-1 \[hep-ph/9502298\].
Let us parenthetically note that some authors investigate a wider range of variations of ${\sin^2\theta_W}(m_Z)$. For instance, in Ref. [@urano] the highest value considered was ${\sin^2\theta_W}(m_Z)=0.2327$. As was explained above, inching ${\sin^2\theta_W}(m_Z)$ up lowers the minimal value of ${\alpha_s(m_Z)}$. Thus, the result of Ref. [@urano] in the CMSSM with the Planck mass correction switched off is ${\alpha_s(m_Z)}_{\rm min}\approx 0.114$. The question whether the value ${\sin^2\theta_W}(m_Z)=0.2327$ is admissible is left open; we stick to Eq. (\[s2winput:eq\]).
J. Erler and P. Langacker, [*Implications of High Precision Experiments and the CDF Top Quark Candidates*]{}, preprint UPR-0632T \[hep-ph/9411203\].
F. Abe, [*et al.*]{} (CDF), [*Observation of Top Quark Production in $p-\bar p$ Collisions*]{}, FERMILAB-PUB-95/022-E \[hep-ex/9503002\].
S. Abachi, [*et al.*]{} (D0), [*Observation of the Top Quark*]{}, FERMILAB-PUB-95/028-E \[hep-ex/9503003\].
J. Chyla and A. Kataev, [*Theoretical Ambiguities of QCD Predictions at the $Z^0$ Peak*]{}, preprint PRA-HEP/95-03 \[hep-ph/9502383\].
L.E. Ibáñez and C. López, Nucl. Phys. [**B233**]{} (1984) 511; L.E. Ibáñez, C. López, and C. Muñoz, Nucl. Phys. [**B256**]{} (1985) 218. See, [*e.g.*]{}, L. Galtieri, to appear in the Proceeding of the [*Conference on Beyond the Standard Model IV*]{}, Lake Tahoe, December 13-18, 1994, ed. J. Gunion, T. Han, and J. Ohnemus; A. Jonckheere (D0), [*ibid*]{}; S. Eno (D0), to appear in the Proceeding of the [*Rencontres De Moriond XXX, Electroweak Interactions and Unified Theories*]{}, ed. J. Tran Thanh Van, March 1995; M. Paterno (D0), to appear in the Proceeding of the [*Les Rencontres de Physique de la Vallee d’Aoste*]{}, LaThuile, ed. G. Belletini, March 1995; F. Abe, [*et al.*]{} (CDF), Phys. Rev. Lett. [**69**]{} (1992) 3439; S. Abachi, [*et al.*]{} (D0), [*Search for Squarks and Gluinos in $p\bar p$ Collisions at $\sqrt{s}=1.8{\rm\,TeV}$*]{}, Fermilab PUB-95/057-E (March 1995) (to appear in Phys. Rev. Lett.).
In a recent work of J. Bagger, [*et al.*]{}, ([*Precision Corrections to Supersymmetric Unification*]{}, preprint JHU-TIPAC-95001 \[hep-ph/9501277\]) the non-leading mass-threshold effects were found to be substantial in the CMSSM for $m_{1/2}<150{\rm\,GeV}$ – a corrected evolution curve significantly deviates from the leading log (LL) result. The reason is quite obvious: a jump in the slope of the LL evolution curve is due to the fact that at $m_{1/2}\approx 120{\rm\,GeV}$ the wino becomes lighter than $Z$ and freezes out. Nothing of the kind happens in our analysis and we generally do not expect the non-leading mass-threshold effects to be large.
L.R. thanks Dennis Silverman for this remark.
K. Griest and L. Roszkowski, Phys. Rev. [**D46**]{} (1992) 3309. A. Bartl, H. Fraas, W. Majerotto, and N. Oshima, Phys. Rev. [**D40**]{} (1989) 1594; M. Drees and X. Tata, Phys. Rev. [**D43**]{} (1991) 1971. J. Ellis, K. Enqvist, D. Nanopoulos, and K. Tamvakis, Phys. Lett. [**155B**]{} (1985) 381. M. Drees, Phys. Lett. [**158B**]{} (1985) 409. K. Hagiwara and H. Murayama, Phys. Lett. [**B246**]{} (1990) 533. G. Bhattacharyya and A. Raychaudhuri, Phys. Rev. [**D47**]{} (1993) 2014; A. Djouadi, M. Drees and H. Konig, Phys. Rev. [**D48**]{} (1993) 308. J. Wells, G. Kane and C. Kolda, Phys. Lett. [**B338**]{} (1994) 219. I. Bigi, B. Blok, M. Shifman, and A. Vainshtein, Phys. Lett. [**B323**]{} (1994) 408. E. Bagan, P. Ball and V. Braun, [*Charm quark mass corrections to non-leptonic inclusive B decays*]{}, preprint TUM-T31-67-94 \[hep-ph/9408306\]; E. Bagan, P. Ball, V. Braun, and P. Gosdzinsky, [*Theoretical update of the semileptonic branching ratio of B mesons*]{}, preprint DESY-94-172 \[hep-ph/9409440\].
E. Bagan, P. Ball, B. Fiol, and P. Gosdzinsky, [*Next-to-leading Order Radiative Corrections to the Decay $b\rightarrow c\bar c s$*]{}, preprint CERN-TH/95-25 \[hep-ph/9502338\].
R. Patterson, [*Weak and Rare Decays*]{}, plenary talk at Int. Conference on High Energy Physics, Glasgow, July 1994;\
P. Roudeau, [*Heavy Quark Physics*]{}, plenary talk at Int. Conference on High Energy Physics, Glasgow, July 1994.
A. Kagan, private communication and to be published.
A. Kagan, [*Implications of TeV Flavor Physics for the $\Delta I = 1/2$ Rule and ${\rm Br}_{sl}(B)$*]{}, preprint SLAC-PUB-6626/94 \[hep-ph/9409215\].
M.S. Alam, [*et al.*]{} (CLEO), [*First Measurement of the rate for the Inclusive Radiative Penguin Decay $b\rightarrow s\gamma$*]{}, preprint CLNS-94-1314, December 1994.
V. Barger, M.S. Berger, P. Ohmann, and R.J.N. Phillips, Phys. Lett. [**B314**]{} (1993) 351; M. Carena, S. Pokorski and C. Wagner, Nucl. Phys. [**B406**]{} (1993) 59. P. Langacker and N. Polonsky, Phys. Rev. [**D49**]{} (1994) 1454. H. Georgi and S.L. Glashow, Phys. Rev. Lett. [**32**]{} (1974) 438; H. Georgi, H.R. Quinn, and S. Weinberg, Phys. Rev. Lett. [**33**]{} (1974) 451.
[^1]: E-mail: [leszek@mnhepw.hep.umn.edu]{}
[^2]: E-mail: [shifman@vx.cis.umn.edu]{}
---
abstract: 'Unextendible product bases have been shown to have many important uses in quantum information theory, particularly in the qubit case. However, very little is known about their mathematical structure beyond three qubits. We present several new results about qubit unextendible product bases, including a complete characterization of all four-qubit unextendible product bases, of which we show there are exactly 1446. We also show that there exist $p$-qubit UPBs of almost all sizes less than $2^p$.'
address: 'Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada'
author:
- Nathaniel Johnston
bibliography:
- 'quantum.bib'
title: The Structure of Qubit Unextendible Product Bases
---
unextendible product basis, quantum entanglement, graph factorization
81P40, 05C90, 81Q30
Introduction
============
Unextendible product bases (UPBs) are one of the most useful and versatile objects in the theory of quantum entanglement. While they were originally introduced as a tool for constructing bound entangled states [@BDFMRSSW99; @DMSST03], they can also be used to construct indecomposable positive maps [@Ter01] and to demonstrate the existence of nonlocality without entanglement—that is, they cannot be perfectly distinguished by local quantum operations and classical communication, even though they contain no entanglement. Furthermore, in the qubit case (i.e., the case where each local space has dimension $2$), unextendible product bases can be used to construct tight Bell inequalities with no quantum violation [@AFKKPLA12; @ASHKLA11] and subspaces of small dimension that are locally indistinguishable [@DXY10].
Despite their many uses, very little is known about the mathematical structure of unextendible product bases. For example, UPBs have only been completely characterized in $\mathbb{C}^2 \otimes \mathbb{C}^n$ (where all UPBs are trivial in the sense that they span the entire space [@BDMSST99]), $\mathbb{C}^3 \otimes \mathbb{C}^3$ (where all UPBs belong to a known six-parameter family [@DMSST03]), and $\mathbb{C}^2 \otimes \mathbb{C}^2 \otimes \mathbb{C}^2$ (where there is only one nontrivial UPB up to local operations [@Bra04]). The goal of the present paper is to thoroughly investigate the structure of qubit unextendible product bases (i.e., UPBs in $(\mathbb{C}^2)^{\otimes p}$ for some $p \in \mathbb{N}$).
Our first contribution is to completely characterize all unextendible product bases on four qubits. Unlike the three qubit case, where all nontrivial UPBs are essentially the same (in the sense that they all have the same orthogonality graph), we show that nontrivial UPBs on four qubits can have one of exactly 1446 different orthogonality graphs, and hence the set of qubit UPBs quickly becomes very complicated as the number of qubits increases.
We also consider UPBs on larger numbers of qubits. In particular, we address the question of how many states a $p$-qubit UPB can have. The minimum number of states in such a UPB is known to always be between $p+1$ and $p+4$ inclusive [@Joh13UPB], and the results of [@CD13] immediately imply that the maximum number of states is $2^p-4$ (or $2^p$ if we allow trivial UPBs that span the entire $2^p$-dimensional space). However, very little has been known about what intermediate sizes can be attained as the cardinality of some $p$-qubit UPB.
Surprisingly, we show that there are intermediate sizes that are *not* attainable as the cardinality of any $p$-qubit UPB (contrast this with the case of non-orthogonal UPBs, which exist of any size from $p+1$ to $2^p$ inclusive [@Bha06]). However, we show that these cases are rare in the sense that, as $p \rightarrow \infty$, the proportion of intermediate sizes that are attainable by some $p$-qubit UPB goes to $1$. Furthermore, we show that all unattainable sizes are very close to either the minimal or maximal size, and we provide examples to demonstrate that both of these cases are possible.
The paper is organized as follows. In Section \[section:prelims\] we introduce some basic facts about UPBs that will be of use for us, and present the mathematical tools that we will use to prove our results. We then describe our characterization of four qubit UPBs in Section \[section:4qubit\], which was found via computer search (described in Appendix A). We also discuss some new UPBs on five and six qubits that were found via the same computer search in Section \[section:5qubit\]. Finally, we consider the many-qubit case in Section \[section:manyqubit\], where we show that there exist qubit UPBs of most (but not all) sizes between the minimal and maximal size.
Preliminaries {#section:prelims}
=============
A $p$-qubit pure quantum state is represented by a unit vector ${| v \rangle} \in ({\mathbb{C}}^{2})^{\otimes p}$, which is called a *product state* if it can be decomposed in the following form: $$\begin{aligned}
{| v \rangle} = {| v_1 \rangle} \otimes \cdots \otimes {| v_p \rangle} \ \ \text{ with } \ \ {| v_j \rangle} \in {\mathbb{C}}^{2} \ \forall \, j.\end{aligned}$$ The standard basis of $\mathbb{C}^2$ is $\{{| 0 \rangle},{| 1 \rangle}\}$ and we use $\{{| a \rangle},{| \overline{a} \rangle}\}$, $\{{| b \rangle},{| \overline{b} \rangle}\}$, $\{{| c \rangle},{| \overline{c} \rangle}\}, \ldots$ to denote orthonormal bases of $\mathbb{C}^2$ that are different from $\{{| 0 \rangle},{| 1 \rangle}\}$ and from each other (i.e., ${| a \rangle} \neq {| 0 \rangle},{| 1 \rangle},{| b \rangle},{| \overline{b} \rangle},{| c \rangle},{| \overline{c} \rangle}$, and so on). We also will sometimes find it useful to omit the tensor product symbol when discussing multi-qubit states. For example, we use ${| 0a11\overline{a} \rangle}$ as a shorthand way to write ${| 0 \rangle}\otimes{| a \rangle}\otimes{| 1 \rangle}\otimes{| 1 \rangle}\otimes{| \overline{a} \rangle}$.
A $p$-qubit *unextendible product basis (UPB)* [@BDMSST99; @DMSST03] is a set $\mathcal{S} \subseteq ({\mathbb{C}}^{2})^{\otimes p}$ satisfying the following three properties:
(a) every ${| v \rangle} \in \mathcal{S}$ is a product state;
(b) ${\langle v | w \rangle} = 0$ for all ${| v \rangle} \neq {| w \rangle} \in \mathcal{S}$; and
(c) for all product states ${| z \rangle} \notin \mathcal{S}$, there exists ${| v \rangle} \in \mathcal{S}$ such that ${\langle v | z \rangle} \neq 0$.
That is, a UPB is a set of mutually orthogonal product states such that there is no product state orthogonal to every member of the set. To be explicit, when we refer to the “size” of a UPB, we mean the number of states in the set.
We now present a result that shows how to use known UPBs to construct larger UPBs on more qubits. This result is well-known and follows easily from [@Fen06 Lemma 2.3], but we make repeated use of it and thus prove it explicitly.
\[prop:qubit\_add\_together\] Let $p \in \mathbb{N}$. If there exist $p$-qubit UPBs $\mathcal{S}_1$ and $\mathcal{S}_2$ with $|\mathcal{S}_1| = s_1$ and $|\mathcal{S}_2| = s_2$ then there exists a $(p+1)$-qubit UPB $\mathcal{S}$ with $|\mathcal{S}| = s_1 + s_2$.
If we write $\mathcal{S}_1 = \{{| v_1 \rangle},\ldots,{| v_{s_1} \rangle}\}$ and $\mathcal{S}_2 = \{{| w_1 \rangle},\ldots,{| w_{s_2} \rangle}\}$ then it is straightforward to see that the set $$\begin{aligned}
\mathcal{S} := \big\{ {| v_1 \rangle}\otimes{| 0 \rangle}, \ldots, {| v_{s_1} \rangle}\otimes{| 0 \rangle}, {| w_1 \rangle}\otimes{| 1 \rangle}, \ldots, {| w_{s_2} \rangle}\otimes{| 1 \rangle} \big\} \subset (\mathbb{C}^2)^{\otimes(p+1)}
\end{aligned}$$ satisfies properties (a) and (b) of a UPB. We prove that it also satisfies property (c) by contradiction: suppose that there were a product state ${| z \rangle} \in (\mathbb{C}^2)^{\otimes(p+1)}$ such that ${\langle v | z \rangle} = 0$ for all ${| v \rangle} \in \mathcal{S}$. If we write ${| z \rangle} = {| z_{1\ldots p} \rangle} \otimes {| z_{p+1} \rangle}$ for some product state ${| z_{1\ldots p} \rangle} \in (\mathbb{C}^2)^{\otimes p}$ and ${| z_{p+1} \rangle} \in \mathbb{C}^2$ then we have ${\langle v_j | z_{1\ldots p} \rangle}{\langle 0 | z_{p+1} \rangle} = 0$ for all $1 \leq j \leq s_1$. Unextendibility of $\mathcal{S}_1$ implies that ${\langle 0 | z_{p+1} \rangle} = 0$. However, a similar argument using unextendibility of $\mathcal{S}_2$ shows that ${\langle 1 | z_{p+1} \rangle} = 0$, which implies that ${| z_{p+1} \rangle} = 0$, which is the contradiction that completes the proof.
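The construction in this proof is mechanical enough to verify numerically. Below is a minimal sketch (the helper names `combine` and `mutually_orthogonal` are ours, not from the paper) that builds $\mathcal{S}$ from two product sets and checks property (b), using the fact that ${\langle v | w \rangle}$ factors as the product of the single-qubit overlaps.

```python
import numpy as np
from itertools import combinations

KET0, KET1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def combine(S1, S2):
    """Proposition 1 construction: append |0> to every state of S1 and |1>
    to every state of S2 (each state is a list of single-qubit vectors)."""
    return [s + [KET0] for s in S1] + [s + [KET1] for s in S2]

def mutually_orthogonal(S, tol=1e-12):
    """Property (b): each pair must have a vanishing overlap on some qubit,
    since the full overlap is the product of the single-qubit overlaps."""
    return all(any(abs(np.vdot(a, b)) < tol for a, b in zip(u, v))
               for u, v in combinations(S, 2))
```

For $p = 1$ the only UPBs are the trivial single-qubit bases, and combining two copies of $\{{| 0 \rangle},{| 1 \rangle}\}$ in this way returns the two-qubit standard basis, of size $s_1 + s_2 = 4$.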
Orthogonality Graphs {#section:orthog_graphs}
--------------------
One tool that we find helps to visualize UPBs and simplify proofs is an orthogonality graph. Given a set of product states ${\mathcal{S}} = \{{| v_1 \rangle},\ldots,{| v_{s} \rangle}\} \subseteq ({\mathbb{C}}^2)^{\otimes p}$ with $|{\mathcal{S}}| = s$, the *orthogonality graph of ${\mathcal{S}}$* is the graph on $s$ vertices $V := \{v_1,\ldots,v_{s}\}$ such that there is an edge $(v_i,v_j)$ of color $\ell$ if and only if ${| v_i \rangle}$ and ${| v_j \rangle}$ are orthogonal to each other on qubit $\ell$. Rather than actually using $p$ colors to color the edges of the orthogonality graph, for ease of visualization we instead draw $p$ different graphs on the same set of vertices—one for each qubit (see Figure \[fig:2dim\_eg\]).
*\[Figure \[fig:2dim\_eg\]: the orthogonality graph of a seven-state set of product states $\{v_1,\ldots,v_7\}$ on three qubits, drawn as three graphs on the same seven vertices, one graph per qubit.\]*
The requirement (b) that the members of a UPB are mutually orthogonal is equivalent to requiring that every edge is present on at least one qubit in its orthogonality graph (in other words, the orthogonality graph is an edge coloring of the complete graph). The unextendibility condition (c) is more difficult to check, so we first need to make some additional observations. In particular, it is important to notice that if ${| z_1 \rangle},{| z_2 \rangle},{| z_3 \rangle} \in {\mathbb{C}}^2$ satisfy ${\langle z_1 | z_2 \rangle} = {\langle z_1 | z_3 \rangle} = 0$, then it is necessarily the case that ${| z_2 \rangle} = {| z_3 \rangle}$ (up to an irrelevant scalar multiple). There are two important consequences of this observation:
1. The orthogonality graph associated with any individual qubit in a product basis of $(\mathbb{C}^2)^{\otimes p}$ is the disjoint union of complete bipartite graphs. For example, the orthogonality graph of the first qubit in Figure \[fig:2dim\_eg\] is the disjoint union of $K_{2,1}$ and two copies of $K_{1,1}$, the orthogonality graph of the second qubit is the disjoint union of $K_{4,1}$ and $K_{1,1}$, and the orthogonality graph of the third qubit is the disjoint union of $K_{3,2}$ and $K_{1,1}$.
2. We can determine whether or not a set of qubit product states forms a UPB entirely from its orthogonality graph (a fact that is not true when the local dimensions are larger than $2$ [@DMSST03]). For this reason, we consider two qubit UPBs to be *equivalent* if they have the same orthogonality graphs up to permuting the qubits and relabeling the vertices (alternatively, we consider two qubit UPBs to be equivalent if we can permute qubits and change each basis of $\mathbb{C}^2$ used in the construction of one of the UPBs to get the other UPB).
Following [@Joh13UPB], we sometimes draw orthogonality graphs in a form that makes their decomposition in terms of complete bipartite graphs more transparent—we draw shaded regions indicating which states are equal to each other (up to scalar multiple) on the given qubit, and lines between shaded regions indicate that all states in one of the regions are orthogonal to all states in the other region on that qubit (see Figure \[fig:2dim\_eg\_compact\]).
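The per-qubit edge sets described above are easy to extract programmatically. The following sketch (the function name is ours) returns, for each qubit, the pairs of states whose factors on that qubit are orthogonal; by the first consequence noted above, each returned graph is a disjoint union of complete bipartite graphs.

```python
import numpy as np

def orthogonality_graphs(S, tol=1e-12):
    """For a list S of p-qubit product states (each a list of p single-qubit
    vectors), return one edge set per qubit: the pairs (i, j) of states
    whose factors on that qubit are orthogonal."""
    n, p = len(S), len(S[0])
    return [{(i, j) for i in range(n) for j in range(i + 1, n)
             if abs(np.vdot(S[i][l], S[j][l])) < tol}
            for l in range(p)]
```

Applied to the **Shifts** UPB recalled in the next subsection, each of the three per-qubit graphs is a perfect matching on four vertices, and their union is the complete graph $K_4$ (Figure \[fig:shifts\_og\]).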
*\[Figure \[fig:2dim\_eg\_compact\]: the orthogonality graphs of Figure \[fig:2dim\_eg\] redrawn in compact form, with shaded regions grouping the states that are equal (up to scalar multiple) on the given qubit and lines between regions indicating orthogonality.\]*
UPBs on Three or Fewer Qubits {#section:fewqubits}
-----------------------------
We now review what is known about UPBs in the space $(\mathbb{C}^2)^{\otimes p}$ when $1 \leq p \leq 3$. It is well-known that there are no nontrivial qubit UPBs when $p \leq 2$ [@BDMSST99], so the first case of interest is when $p = 3$. In this case, the **Shifts** UPB [@BDMSST99] provides one of the oldest examples of a nontrivial UPB and consists of the following four states: $$\begin{aligned}
\mathbf{Shifts} := \big\{ {| 000 \rangle}, {| 1{+}{-} \rangle}, {| {-}1{+} \rangle}, {| {+}{-}1 \rangle} \big\},\end{aligned}$$ where ${| + \rangle} := ({| 0 \rangle}+{| 1 \rangle})/\sqrt{2}$ and ${| - \rangle} := ({| 0 \rangle}-{| 1 \rangle})/\sqrt{2}$.
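Because a qubit factor has a unique orthogonal state (up to phase), unextendibility of a qubit product set can be decided by a finite search: a product state orthogonal to every member exists if and only if the members can be assigned to qubits so that, on each qubit, all factors assigned there are pairwise parallel (the extending state then takes the common orthogonal complement on each such qubit, and is arbitrary on unassigned qubits). The brute-force sketch below (function names ours) confirms in this way that **Shifts** is unextendible by exhausting all $3^4 = 81$ assignments.

```python
import numpy as np
from itertools import product

def parallel(a, b, tol=1e-10):
    # for unit vectors in C^2, |<a|b>| = 1 exactly when a and b are parallel
    return abs(np.vdot(a, b)) > 1 - tol

def extendible(S):
    """True iff some product state is orthogonal to every member of S.
    Each candidate corresponds to an assignment of members to qubits in
    which all factors assigned to a common qubit are pairwise parallel."""
    n, p = len(S), len(S[0])
    for assign in product(range(p), repeat=n):
        for qubit in range(p):
            factors = [S[i][qubit] for i in range(n) if assign[i] == qubit]
            if any(not parallel(a, b)
                   for k, a in enumerate(factors) for b in factors[k + 1:]):
                break  # this assignment fails on this qubit
        else:
            return True  # all qubits consistent: an orthogonal state exists
    return False
```

In **Shifts**, no two states have parallel factors on any qubit, so by pigeonhole no assignment of four states to three qubits can succeed; by contrast, a set such as $\{{| 000 \rangle}, {| 111 \rangle}\}$ is immediately found to be extendible.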
More interesting is the fact that **Shifts** is essentially the only nontrivial $3$-qubit UPB in the sense that every UPB in $(\mathbb{C}^2)^{\otimes 3}$ either spans the entire $8$-dimensional space or is equal to **Shifts** up to permuting the qubits and changing the bases used on each qubit [@Bra04] (i.e., replacing the basis $\{{| 0 \rangle},{| 1 \rangle}\}$ with another basis $\{{| a \rangle},{| \overline{a} \rangle}\}$ on any or all qubits, and similarly replacing $\{{| + \rangle},{| - \rangle}\}$ by another basis $\{{| b \rangle},{| \overline{b} \rangle}\}$ on any or all qubits). In other words, all nontrivial UPBs on $3$ qubits have the same orthogonality graph, depicted in Figure \[fig:shifts\_og\].
*\[Figure \[fig:shifts\_og\]: the orthogonality graph of the **Shifts** UPB: on each of the three qubits the four states form a perfect matching, and the three matchings together make up the complete graph $K_4$.\]*
Minimum and Maximum Size {#section:minmax}
------------------------
One of the first questions asked about unextendible product bases was what their possible sizes are. While a full answer to this question is still out of reach, the minimum and maximum size of qubit UPBs is now known. It was shown in [@Joh13UPB] that if we define a function $f : \mathbb{N} \rightarrow \mathbb{N}$ by $$\begin{aligned}
\label{eq:qubit_min_size}
f(p) := \begin{cases}
p + 1 & \text{if $p$ is odd} \\
p + 2 & \text{if $p = 4$ or $p \equiv 2 (\text{mod } 4)$} \\
p + 3 & \text{if $p = 8$} \\
p + 4 & \text{otherwise,}
\end{cases}\end{aligned}$$ then the smallest $p$-qubit UPB has size $f(p)$.
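Eq. \[eq:qubit\_min\_size\] transcribes directly into code (only the function name is ours):

```python
def min_upb_size(p):
    """Minimum size f(p) of a p-qubit UPB, per Eq. [eq:qubit_min_size]."""
    if p % 2 == 1:
        return p + 1          # p odd
    if p == 4 or p % 4 == 2:
        return p + 2
    if p == 8:
        return p + 3
    return p + 4              # remaining multiples of 4
```

Note that $f(3) = 4$, the size of **Shifts**, and $f(4) = 6$, matching the minimal four-qubit UPB of Section \[section:4qubit6\].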
At the other end of the spectrum, it is straightforward to see that the maximum size of a $p$-qubit UPB is $2^p$, since the standard basis forms a UPB. However, UPBs that span the entire $2^p$-dimensional space are typically not considered to be particularly interesting, so it is natural to instead ask for the maximum size of a *nontrivial* UPB (i.e. one whose size is strictly less than $2^p$). It is straightforward to use the **Shifts** UPB together with induction and Proposition \[prop:qubit\_add\_together\] to show that there exists a nontrivial $p$-qubit UPB of size $2^p - 4$ for all $p \geq 3$. The following proposition, which is likely known, shows that this is always the largest nontrivial UPB.
\[prop:no\_large\_qubit\_upbs\] Let $p,s \in \mathbb{N}$. There does not exist a $p$-qubit UPB of size $s$ when $2^p - 4 < s < 2^p$.
Given a UPB $\{{| v_1 \rangle},\ldots,{| v_{s} \rangle}\} \subset (\mathbb{C}^2)^{\otimes p}$, we can construct the (unnormalized) $p$-qubit mixed quantum state $\rho := I - \sum_{i=1}^{s} {| v_i \rangle\langle v_i |}$, which has rank $2^p - s$ and has positive partial transpose across any partition of the qubits. Furthermore, $\rho$ is entangled by the range criterion [@H97]. If $s = 2^p - 1$ then ${\rm rank}(\rho) = 1$, and it is well-known that pure states with positive partial transpose are necessarily separable, which is a contradiction that shows that no UPB of size $s = 2^p - 1$ exists. Similarly, it was shown in [@CD13] that every multipartite state of rank $2$ or $3$ with positive partial transpose is separable, which shows that no UPB of size $s = 2^p - 2$ or $s = 2^p - 3$ exists.
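The state $\rho$ used in this proof can be formed explicitly. As an illustration (not part of the proof), the following sketch builds $\rho$ from the three-qubit **Shifts** UPB and confirms numerically that it has rank $2^3 - 4 = 4$ and stays positive semidefinite under transposition of any single qubit; the helper names are ours.

```python
import numpy as np
from functools import reduce

def upb_complement_state(S):
    """(Unnormalized) rho = I - sum_i |v_i><v_i| for a product set S,
    with each state given as a list of single-qubit vectors."""
    p = len(S[0])
    rho = np.eye(2 ** p, dtype=complex)
    for s in S:
        v = reduce(np.kron, s)
        rho -= np.outer(v, v.conj())
    return rho

def partial_transpose(rho, qubit, p):
    """Transpose the row and column indices of one qubit of a p-qubit matrix."""
    t = rho.reshape([2] * (2 * p)).swapaxes(qubit, p + qubit)
    return t.reshape(2 ** p, 2 ** p)
```

After normalization, the state obtained from **Shifts** is the bound entangled state of [@BDMSST99].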
Four-Qubit UPBs {#section:4qubit}
===============
This section is devoted to describing all of the nontrivial UPBs in $(\mathbb{C}^2)^{\otimes p}$ when $p = 4$. Unlike in the $p = 3$ case, which we saw earlier admits a very simple characterization in terms of the [**Shifts**]{} UPB, there are many different four-qubit UPBs. More specifically, we will see here that there are exactly $1446$ inequivalent nontrivial four-qubit UPBs, $1137$ of which arise from combining two $3$-qubit UPBs via Proposition \[prop:qubit\_add\_together\] and $309$ of which are not decomposable in this way. Note that all $4$-qubit UPBs with at most two bases per qubit were found in [@SFABCLA13]; however, this is the first characterization of *all* $4$-qubit UPBs (including those with three or more bases per qubit).
The process of finding these $4$-qubit UPBs (steps 1 and 2 in Appendix A) as well as the process of characterizing these UPBs and determining that they are inequivalent (step 3 in Appendix A) were both done via computer search. It is generally not straightforward to see that a given UPB is indeed unextendible, and it also does not seem to be easy to determine whether or not two given UPBs are equivalent.
Rather than trying to prove that these UPBs are indeed unextendible or are inequivalent as we claim, here we focus instead on summarizing the results, categorizing them as efficiently as possible, and explaining where already-known UPBs fit into this characterization. We sort the different four-qubit UPBs by their size, starting with the minimal $6$-state UPB and working our way up to the maximal (nontrivial) $12$-state UPBs. For a succinct summary of these results, see Table \[tab:4qubit\]. An explicit list of all $1446$ inequivalent four-qubit UPBs is available for download from [@Joh4QubitUPBs].
Four-Qubit UPBs of 6 States {#section:4qubit6}
---------------------------
It was already known that there exists a $6$-state UPB on $4$ qubits, and that this is the minimum size possible [@Fen06]: $$\begin{aligned}
\big\{ {| 0000 \rangle},{| 0aa1 \rangle},{| 10ba \rangle},{| 1a\overline{b}b \rangle},{| a1\overline{ab} \rangle},{| \overline{aa}1\overline{a} \rangle} \big\}.\end{aligned}$$ Our computer search showed that this $6$-state UPB is essentially unique—all other UPBs in this case are equivalent to it.
Four-Qubit UPBs of 7 States {#section:4qubit7}
---------------------------
A $7$-state UPB on $4$ qubits was found via computer search in [@AFKKPLA12]: $$\begin{aligned}
\big\{ {| 0000 \rangle},{| 0aa1 \rangle},{| 0\overline{a}1a \rangle},{| 100b \rangle},{| 1\overline{a}a\overline{b} \rangle},{| aa10 \rangle}, {| \overline{a}1\overline{aa} \rangle} \big\}.\end{aligned}$$ Once again, our computer search showed that this UPB is essentially unique in the sense that all other $7$-state UPBs on $4$ qubits are equivalent to it.
Four-Qubit UPBs of 8 States {#section:4qubit8}
---------------------------
This case is much less trivial than the previous two cases. First, note that we can use Proposition \[prop:qubit\_add\_together\] to construct an $8$-state UPB via two copies of the $3$-qubit [**Shifts**]{} UPB. In fact, there are many slightly different ways to do this, since we are free to let the bases used in one of the copies of [**Shifts**]{} be the same or different from any of the bases used in the other copy of [**Shifts**]{}. For example, we can consider the following two UPBs: $$\begin{aligned}
\text{UPB}_1 & := \big\{ {| 000 \rangle} \otimes {| 0 \rangle}, {| 1{+}{-} \rangle} \otimes {| 0 \rangle}, {| {-}1{+} \rangle} \otimes {| 0 \rangle}, {| {+}{-}1 \rangle} \otimes {| 0 \rangle} \big\} \\
& \quad \quad \cup \big\{ {| 000 \rangle} \otimes {| 1 \rangle}, {| 1{+}{-} \rangle} \otimes {| 1 \rangle}, {| {-}1{+} \rangle} \otimes {| 1 \rangle}, {| {+}{-}1 \rangle} \otimes {| 1 \rangle} \big\} \text{ and} \\
\text{UPB}_2 & := \big\{ {| 000 \rangle} \otimes {| 0 \rangle}, {| 1{+}{-} \rangle} \otimes {| 0 \rangle}, {| {-}1{+} \rangle} \otimes {| 0 \rangle}, {| {+}{-}1 \rangle} \otimes {| 0 \rangle} \big\} \\
& \quad \quad \cup \big\{ {| 000 \rangle} \otimes {| 1 \rangle}, {| 1a\overline{a} \rangle} \otimes {| 1 \rangle}, {| \overline{a}1a \rangle} \otimes {| 1 \rangle}, {| a\overline{a}1 \rangle} \otimes {| 1 \rangle} \big\}.\end{aligned}$$
It is straightforward to see that UPB$_1$ and UPB$_2$ are inequivalent, since they have different orthogonality graphs. However, they both can be seen as arising from Proposition \[prop:qubit\_add\_together\]: UPB$_1$ arises from following the construction given in its proof exactly, while UPB$_2$ arises from replacing $\{{| {+} \rangle},{| {-} \rangle}\}$ by $\{{| a \rangle},{| \overline{a} \rangle}\}$ on one copy of [**Shifts**]{} before following the construction. Our computer search found that there are exactly $89$ inequivalent UPBs that arise from two copies of [**Shifts**]{} and Proposition \[prop:qubit\_add\_together\] in this manner.
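As a sanity check, the construction behind UPB$_1$ can be reproduced numerically. The sketch below builds $(\mathbf{Shifts} \otimes {| 0 \rangle}) \cup (\mathbf{Shifts} \otimes {| 1 \rangle})$ and confirms mutual orthogonality of the $8$ product states; unextendibility is guaranteed by Proposition \[prop:qubit\_add\_together\] itself rather than by this check.

```python
import numpy as np

k0 = np.array([1.0, 0.0])
k1 = np.array([0.0, 1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)

# The 3-qubit Shifts UPB: {|000>, |1+->, |-1+>, |+-1>}.
shifts = [[k0, k0, k0], [k1, plus, minus],
          [minus, k1, plus], [plus, minus, k1]]

def tensor(factors):
    v = factors[0]
    for f in factors[1:]:
        v = np.kron(v, f)
    return v

# UPB_1 = (Shifts (x) |0>) u (Shifts (x) |1>): pairs within a block are
# orthogonal on one of the first three qubits, pairs across blocks are
# orthogonal on the fourth qubit.
upb1 = [tensor(s + [k0]) for s in shifts] + [tensor(s + [k1]) for s in shifts]

for i in range(8):
    for j in range(i + 1, 8):
        assert abs(np.dot(upb1[i], upb1[j])) < 1e-12
print("all 28 pairs orthogonal")
```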
Furthermore, there are also $55$ inequivalent UPBs in this case that are really “new”—they can not be constructed via Proposition \[prop:qubit\_add\_together\] in any way. One of these $55$ UPBs was found in [@AFKKPLA12]: $$\begin{aligned}
\big\{ {| 0000 \rangle}, {| 1a\overline{a}a \rangle}, {| a\overline{a}1\overline{a} \rangle}, {| \overline{a}1ab \rangle}, {| 0a\overline{a}1 \rangle}, {| 1a\overline{aa} \rangle}, {| a1aa \rangle}, {| \overline{aa}1\overline{b} \rangle} \big\},\end{aligned}$$ and several more were found in [@SFABCLA13].
This gives a total of $144$ inequivalent $8$-state UPBs on $4$ qubits. We note that some (but not all) of these new UPBs can be constructed via the method given in the proof of the upcoming Theorem \[thm:qubit\_4k\_upbs\].
Four-Qubit UPBs of 9 States {#section:4qubit9}
---------------------------
Our computer search found that there are exactly $11$ inequivalent $4$-qubit UPBs in this case, which are presented in their entirety in Table \[tab:4qubit\]. The following two of these UPBs were found in [@AFKKPLA12]: $$\begin{aligned}
& \big\{ {| 0000 \rangle}, {| 1a\overline{a}0 \rangle}, {| a\overline{a}10 \rangle}, {| \overline{a}1aa \rangle}, {| 0001 \rangle}, {| 01\overline{a}1 \rangle}, {| 1\overline{a}0\overline{a} \rangle}, {| 0011 \rangle}, {| 1011 \rangle} \big\} \text{ and} \\
& \big\{ {| 0000 \rangle}, {| \overline{a}a1a \rangle}, {| a1a1 \rangle}, {| \overline{a}11\overline{a} \rangle}, {| aa\overline{a}1 \rangle}, {| 1\overline{aa}a \rangle}, {| 10a\overline{a} \rangle}, {| \overline{a}10\overline{a} \rangle}, {| a1a0 \rangle} \big\},\end{aligned}$$ while the other $9$ UPBs are new.
size \# summary of $4$-qubit UPBs
------ -------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
$6$ $1$ ${| 0000 \rangle},{| 0aa1 \rangle},{| 10ba \rangle},{| 1a\overline{b}b \rangle},{| a1\overline{ab} \rangle},{| \overline{aa}1\overline{a} \rangle} \quad$ (unique)
$7$ $1$ ${| 0000 \rangle},{| 0aa1 \rangle},{| 0\overline{a}1a \rangle},{| 100b \rangle},{| 1\overline{a}a\overline{b} \rangle},{| aa10 \rangle}, {| \overline{a}1\overline{aa} \rangle} \quad$ (unique)
$8$ $144$ $89$ are of the form $({\bf Shifts} \otimes {| 0 \rangle}) \cup ({\bf Shifts} \otimes {| 1 \rangle})$
plus $55$ others, such as:
${| 0000 \rangle}, {| 1a\overline{a}a \rangle}, {| a\overline{a}1\overline{a} \rangle}, {| \overline{a}1ab \rangle}, {| 0a\overline{a}1 \rangle}, {| 1a\overline{aa} \rangle}, {| a1aa \rangle}, {| \overline{aa}1\overline{b} \rangle}$
$9$ $11$ ${| 0000 \rangle}, {| 1a\overline{a}0 \rangle}, {| a\overline{a}10 \rangle}, {| \overline{a}1aa \rangle}, {| 0001 \rangle}, {| 01\overline{a}1 \rangle}, {| 1\overline{a}0\overline{a} \rangle}, {| 0011 \rangle}, {| 1011 \rangle}$
${| 0000 \rangle}, {| \overline{a}a1a \rangle}, {| a1a1 \rangle}, {| \overline{a}11\overline{a} \rangle}, {| aa\overline{a}1 \rangle}, {| 1\overline{aa}a \rangle}, {| 10a\overline{a} \rangle}, {| \overline{a}10\overline{a} \rangle}, {| a1a0 \rangle}$
${| 0000 \rangle}, {| 0001 \rangle}, {| 0010 \rangle}, {| 010a \rangle}, {| 1aaa \rangle}, {| 100\overline{a} \rangle}, {| 11\overline{a}0 \rangle}, {| a1a\overline{a} \rangle}, {| \overline{aa}11 \rangle}$
${| 0000 \rangle}, {| 0001 \rangle}, {| 001a \rangle}, {| 010b \rangle}, {| 1aab \rangle}, {| 100\overline{b} \rangle}, {| 11\overline{a}a \rangle}, {| a1a\overline{b} \rangle}, {| \overline{aa}1\overline{a} \rangle}$
${| 0000 \rangle}, {| 0001 \rangle}, {| 001a \rangle}, {| 01aa \rangle}, {| 1aab \rangle}, {| 10\overline{a}a \rangle}, {| 110\overline{b} \rangle}, {| a1\overline{a}b \rangle}, {| \overline{aa}1\overline{a} \rangle}$
${| 0000 \rangle}, {| 0001 \rangle}, {| 01aa \rangle}, {| 01\overline{a}a \rangle}, {| 1a0a \rangle}, {| 1\overline{aa}b \rangle}, {| a01\overline{b} \rangle}, {| a1a\overline{a} \rangle}, {| \overline{a}a1\overline{a} \rangle}$
${| 0000 \rangle}, {| 0001 \rangle}, {| 01aa \rangle}, {| 01\overline{a}a \rangle}, {| 1a0a \rangle}, {| 1\overline{a}bb \rangle}, {| a01\overline{b} \rangle}, {| a1\overline{ba} \rangle}, {| \overline{a}a1\overline{a} \rangle}$
${| 0000 \rangle}, {| 0001 \rangle}, {| 001a \rangle}, {| 01aa \rangle}, {| 1a0b \rangle}, {| 101a \rangle}, {| 1\overline{a}a\overline{a} \rangle}, {| aa1\overline{a} \rangle}, {| \overline{a}1\overline{ab} \rangle}$
${| 0000 \rangle}, {| 001a \rangle}, {| 001\overline{a} \rangle}, {| 01a0 \rangle}, {| 1aaa \rangle}, {| 10\overline{a}0 \rangle}, {| 111\overline{a} \rangle}, {| a1\overline{a}a \rangle}, {| \overline{aa}01 \rangle}$
${| 0000 \rangle}, {| 001a \rangle}, {| 001\overline{a} \rangle}, {| 01a0 \rangle}, {| 1a1\overline{a} \rangle}, {| 1000 \rangle}, {| 1\overline{a}a1 \rangle}, {| aa01 \rangle}, {| \overline{a}1\overline{a}a \rangle}$
${| 0000 \rangle}, {| 01aa \rangle}, {| 0a1\overline{a} \rangle}, {| 1110 \rangle}, {| 1a0a \rangle}, {| 10a\overline{a} \rangle}, {| a01a \rangle}, {| a10\overline{a} \rangle}, {| \overline{aaa}1 \rangle}$
$10$ $80$ ${| 0000 \rangle}, {| 1a\overline{a}0 \rangle}, {| a\overline{a}10 \rangle}, {| \overline{a}1aa \rangle}, {| 0001 \rangle}, {| 0011 \rangle}, {| 1001 \rangle}, {| 1011 \rangle}, {| 010\overline{a} \rangle}, {| 11\overline{a}1 \rangle}$
plus $79$ others
$11$ $0$
$12$ $1209$ $1048$ are of the form $(\mathbf{Shifts} \otimes {| 0 \rangle}) \cup (\mathbf{B}_i \otimes {| 1 \rangle})$ for some $i$ (see Table \[tab:3qubitproductbases\])
plus $161$ others, such as:
${| 0000 \rangle}, {| \overline{a}aa1 \rangle}, {| a11a \rangle}, {| \overline{a}1\overline{ab} \rangle}, {| 1000 \rangle}, {| a001 \rangle},$
$\quad {| a10\overline{a} \rangle}, {| a010 \rangle}, {| a011 \rangle}, {| a11\overline{a} \rangle}, {| \overline{aa}1b \rangle}, {| a10a \rangle}$
: A summary of the $1446$ inequivalent $4$-qubit UPBs. A complete list can be found at [@Joh4QubitUPBs].[]{data-label="tab:4qubit"}
Four-Qubit UPBs of 10 States {#section:4qubit10}
----------------------------
Once again, it was already known that a $10$-state UPB exists, as one was found in [@AFKKPLA12]: $$\begin{aligned}
\big\{ {| 0000 \rangle}, {| 1a\overline{a}0 \rangle}, {| a\overline{a}10 \rangle}, {| \overline{a}1aa \rangle}, {| 0001 \rangle}, {| 0011 \rangle}, {| 1001 \rangle}, {| 1011 \rangle}, {| 010\overline{a} \rangle}, {| 11\overline{a}1 \rangle} \big\},\end{aligned}$$ and one more was found in [@SFABCLA13]. We have found that there are $78$ more inequivalent UPBs, for a total of $80$.
Four-Qubit UPBs of 11 States {#section:4qubit11}
----------------------------
Our computer search showed that there does not exist an $11$-state UPB on $4$ qubits. This case is interesting for at least two reasons. First, it shows that UPBs are not “continuous” in the sense that there can be nontrivial UPBs of sizes $s-1$ and $s+1$ in a given space, yet no UPB of size $s$ (we will see in Section \[section:manyqubit\] that UPBs are also not “continuous” in this sense when the number of qubits is odd and at least $5$).
Second, this is currently the only case where it is known that no UPB exists via means other than an explicit (human-readable) proof. This raises the question of whether or not there is a “simple” proof of the fact that there is no $11$-state UPB on $4$ qubits. More generally, it would be interesting to determine whether or not there exists a $p$-qubit UPB of size $2^p - 5$ when $p \geq 5$—we will see in Section \[section:manyqubit\] that this is the only unsolved case that is near the $2^p - 4$ upper bound.
Four-Qubit UPBs of 12 States {#section:4qubit12}
----------------------------
The vast majority of $4$-qubit UPBs arise in this case. Similar to the $8$-state case considered in Section \[section:4qubit8\], we can construct many $12$-state UPBs by using Proposition \[prop:qubit\_add\_together\] to combine $3$-qubit UPBs of size $4$ and $8$ (i.e., [**Shifts**]{} and a full $3$-qubit product basis). However, things are more complicated in this case, as there are $17$ inequivalent $3$-qubit product bases of size $8$ that can be used in Proposition \[prop:qubit\_add\_together\].
$\quad \quad \ \ \ 3$-qubit product basis $4$-qubit UPBs
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -- ----------------
$\mathbf{B}_1 := \{{| 000 \rangle},{| 001 \rangle},{| 010 \rangle},{| 011 \rangle},{| 100 \rangle},{| 101 \rangle},{| 110 \rangle},{| 111 \rangle}\}$ $5$
$\mathbf{B}_{2} := \{{| 000 \rangle}, {| 001 \rangle}, {| 010 \rangle}, {| 011 \rangle}, {| 100 \rangle}, {| 101 \rangle}, {| 11a \rangle}, {| 11\overline{a} \rangle}\}$ $32$
$\mathbf{B}_{3} := \{{| 000 \rangle}, {| 001 \rangle}, {| 010 \rangle}, {| 011 \rangle}, {| 10a \rangle}, {| 10\overline{a} \rangle}, {| 11b \rangle}, {| 11\overline{b} \rangle}\}$ $47$
$\mathbf{B}_{4} := \{{| 000 \rangle}, {| 001 \rangle}, {| 010 \rangle}, {| 011 \rangle}, {| 10a \rangle}, {| 1a\overline{a} \rangle}, {| 11a \rangle}, {| 1\overline{aa} \rangle}\}$ $99$
$\mathbf{B}_{5} := \{{| 000 \rangle}, {| 001 \rangle}, {| 010 \rangle}, {| 011 \rangle}, {| 1aa \rangle}, {| 1a\overline{a} \rangle}, {| 1\overline{a}a \rangle}, {| 1\overline{aa} \rangle}\}$ $25$
$\mathbf{B}_{6} := \{{| 000 \rangle}, {| 001 \rangle}, {| 010 \rangle}, {| 011 \rangle}, {| 1aa \rangle}, {| 1a\overline{a} \rangle}, {| 1\overline{a}b \rangle}, {| 1\overline{ab} \rangle}\}$ $93$
$\mathbf{B}_7 := \{{| 000 \rangle}, {| 001 \rangle}, {| 01a \rangle}, {| 01\overline{a} \rangle}, {| 100 \rangle}, {| 101 \rangle}, {| 11a \rangle}, {| 11\overline{a} \rangle}\}$ $18$
$\mathbf{B}_{8} := \{{| 000 \rangle}, {| 001 \rangle}, {| 01a \rangle}, {| 01\overline{a} \rangle}, {| 100 \rangle}, {| 1a1 \rangle}, {| 110 \rangle}, {| 1\overline{a}1 \rangle}\}$ $85$
$\mathbf{B}_{9} := \{{| 000 \rangle}, {| 001 \rangle}, {| 01a \rangle}, {| 01\overline{a} \rangle}, {| 10a \rangle}, {| 10\overline{a} \rangle}, {| 110 \rangle}, {| 111 \rangle}\}$ $13$
$\mathbf{B}_{10} := \{{| 000 \rangle}, {| 001 \rangle}, {| 01a \rangle}, {| 01\overline{a} \rangle}, {| 10b \rangle}, {| 10\overline{b} \rangle}, {| 110 \rangle}, {| 111 \rangle}\}$ $34$
$\mathbf{B}_{11} := \{{| 000 \rangle}, {| 001 \rangle}, {| 01a \rangle}, {| 01\overline{a} \rangle}, {| 10b \rangle}, {| 10\overline{b} \rangle}, {| 11c \rangle}, {| 11\overline{c} \rangle}\}$ $26$
$\mathbf{B}_{12} := \{{| 000 \rangle}, {| 001 \rangle}, {| 01a \rangle}, {| 01\overline{a} \rangle}, {| 10b \rangle}, {| 1a\overline{b} \rangle}, {| 11b \rangle}, {| 1\overline{ab} \rangle}\}$ $143$
$\mathbf{B}_{13} := \{{| 000 \rangle}, {| 001 \rangle}, {| 01a \rangle}, {| 01\overline{a} \rangle}, {| 1a0 \rangle}, {| 1a1 \rangle}, {| 1\overline{a}a \rangle}, {| 1\overline{aa} \rangle}\}$ $51$
$\mathbf{B}_{14} := \{{| 000 \rangle}, {| 001 \rangle}, {| 01a \rangle}, {| 01\overline{a} \rangle}, {| 1a0 \rangle}, {| 1a1 \rangle}, {| 1\overline{a}b \rangle}, {| 1\overline{ab} \rangle}\}$ $142$
$\mathbf{B}_{15} := \{{| 000 \rangle}, {| 001 \rangle}, {| 01a \rangle}, {| 01\overline{a} \rangle}, {| 1ab \rangle}, {| 1\overline{a}b \rangle}, {| 1b\overline{b} \rangle}, {| 1\overline{bb} \rangle}\}$ $75$
$\mathbf{B}_{16} := \{{| 000 \rangle}, {| 001 \rangle}, {| 01a \rangle}, {| 01\overline{a} \rangle}, {| 1ab \rangle}, {| 1a\overline{b} \rangle}, {| 1\overline{a}c \rangle}, {| 1\overline{ac} \rangle}\}$ $81$
$\mathbf{B}_{17} := \{{| 000 \rangle}, {| 01a \rangle}, {| 01\overline{a} \rangle}, {| 1a0 \rangle}, {| 1\overline{a}0 \rangle}, {| a01 \rangle}, {| \overline{a}01 \rangle}, {| 111 \rangle}\}$ $79$
$1048$
: A summary of the $17$ inequivalent $3$-qubit orthogonal product bases. The table also gives the number of inequivalent $12$-state $4$-qubit UPBs that these product bases give rise to by being combined with [**Shifts**]{} via Proposition \[prop:qubit\_add\_together\].[]{data-label="tab:3qubitproductbases"}
These $17$ product bases as well as the number of inequivalent $12$-state $4$-qubit UPBs that they give rise to via Proposition \[prop:qubit\_add\_together\] are given in Table \[tab:3qubitproductbases\]. For example, there are $5$ inequivalent $4$-qubit UPBs of the form $(\mathbf{Shifts} \otimes {| 0 \rangle}) \cup (\mathbf{B}_1 \otimes {| 1 \rangle})$, where $\mathbf{B}_1$ is the standard basis of $(\mathbb{C}^2)^{\otimes 3}$. A total of $1048$ inequivalent $4$-qubit UPBs can be constructed in this way by using the $17$ different $3$-qubit product bases.
Furthermore, there are also $161$ inequivalent UPBs in this case that can not be constructed via Proposition \[prop:qubit\_add\_together\], for a total of $1209$ inequivalent $12$-state UPBs on $4$ qubits. To the best of our knowledge, only two of these $161$ UPBs have been found before [@AFKKPLA12]: $$\begin{aligned}
& \big\{ {| 0000 \rangle}, {| \overline{a}aa1 \rangle}, {| a11a \rangle}, {| \overline{a}1\overline{ab} \rangle}, {| 1000 \rangle}, {| a001 \rangle}, {| a10\overline{a} \rangle}, {| a010 \rangle}, {| a011 \rangle}, {| a11\overline{a} \rangle}, {| \overline{aa}1b \rangle}, {| a10a \rangle} \big\},\\
& \big\{ {| 0000 \rangle}, {| 1aaa \rangle}, {| a\overline{a}1b \rangle}, {| 10\overline{ab} \rangle}, {| 0ab1 \rangle}, {| 01\overline{bb} \rangle}, {| 1\overline{a}0b \rangle}, {| 1aa\overline{a} \rangle}, {| 1\overline{a}a\overline{b} \rangle}, {| 1a\overline{a}b \rangle}, {| 11\overline{ab} \rangle}, {| \overline{aa}1b \rangle} \big\}.\end{aligned}$$
Five- and Six-Qubit UPBs {#section:5qubit}
========================
In $(\mathbb{C}^2)^{\otimes 5}$, the minimum and maximum sizes of UPBs are well-known to be $6$ and $28$, respectively, but otherwise very little is known. The only UPBs in this case that have appeared in the past that we are aware of are the [**GenShifts**]{} UPB of size $6$ [@DMSST03] and the UPBs of sizes $12$–$26$ and $28$ that can be created by combining two $4$-qubit UPBs via Proposition \[prop:qubit\_add\_together\]. This leaves UPBs of size $7, 8, 9, 10, 11,$ and $27$ unaccounted for.
Our computer search has shown that there does not exist a $5$-qubit UPB of size $7$, but there do exist UPBs of size $8$, $9$, and $10$: $$\begin{aligned}
& \big\{ {| 00000 \rangle},{| 001aa \rangle},{| aaa1\overline{a} \rangle},{| aa\overline{aa}1 \rangle},{| 1\overline{a}bbb \rangle}, {| 1\overline{ab}cc \rangle},{| \overline{a}1c\overline{bc} \rangle},{| \overline{a}1\overline{ccb} \rangle} \big\}, \\
& \big\{ {| 00000 \rangle},{| 0aa01 \rangle},{| 0b1a0 \rangle},{| 1abaa \rangle},{| 1b\overline{b}bb \rangle},{| a\overline{b}a1\overline{a} \rangle},{| a1\overline{aab} \rangle},{| \overline{aa}0\overline{b}1 \rangle},{| \overline{ab}11\overline{a} \rangle} \big\} \text{ and} \\
& \big\{ {| 00000 \rangle},{| 0a001 \rangle},{| 0b1aa \rangle},{| 10abb \rangle},{| 1abb\overline{b} \rangle},{| ab1\overline{ba} \rangle},{| a\overline{bb}1\overline{b} \rangle},{| a\overline{ba}1b \rangle},{| \overline{a}1\overline{ba}0 \rangle},{| \overline{aaaa}1 \rangle} \big\}.\end{aligned}$$ In fact, we have completely characterized qubit UPBs of $8$ or fewer states (on any number of qubits), which are available for download from [@Joh5QubitUPBs]. We have not been able to prove or disprove the existence of $5$-qubit UPBs of size $11$ or $27$, as they are beyond our computational capabilities. We leave them as open problems (see Table \[tab:qubit\_summary\]).
The case of $6$-qubit UPBs is quite similar, with sizes $9$, $10$, $11$, $13$, and $59$ unknown. We have found a $6$-qubit UPB of $9$ states, leaving four cases still unsolved: $$\begin{aligned}
\big\{ {| 000000 \rangle}, {| 0aaaa1 \rangle}, {| 1aabaa \rangle}, {| 1bb\overline{b}bb \rangle}, {| a\overline{a}b1\overline{b}c \rangle}, {| ab\overline{ba}1\overline{a} \rangle}, {| \overline{a}1\overline{a}cc\overline{b} \rangle}, {| b\overline{ab}a\overline{c}1 \rangle}, {| \overline{bb}1\overline{cac} \rangle} \big\}\end{aligned}$$
Many-Qubit UPBs {#section:manyqubit}
===============
We now turn our attention to the construction of UPBs on an arbitrary number of qubits. We already saw that $p$-qubit UPBs are not “continuous” when $p = 4$ (where there are UPBs of size $10$ and $12$, but none of size $11$) or $p = 5$ (where there are UPBs of size $6$ and $8$, but none of size $7$). Our first result shows that the same is true whenever $p \geq 5$ is odd.
\[prop:qubit\_no\_p2\] If $p$ is odd then there does not exist a UPB in $({\mathbb{C}}^{2})^{\otimes p}$ consisting of exactly $p+2$ states.
Suppose for a contradiction that such a UPB exists. We first note that there can not be a set of $3$ or more states of the UPB that are equal to each other on a given qubit, or else unextendibility is immediately violated, since we could construct a product state orthogonal to all $3$ of those states on that qubit and orthogonal to one other state on each of the other $p-1$ qubits, for a total of all $3 + (p-1) = p+2$ states.
Additionally, since $p+2$ is odd, [@Joh13UPB Lemma 2] implies that on every party there is a pair of two states of the UPB that are equal to each other (and furthermore, the number of such pairs must be odd).
We now argue that there must be exactly one such pair on every party. To see this, suppose for a contradiction that there are three or more pairs of states that are equal to each other on some qubit (which we assume without loss of generality is the first qubit). On the second qubit, there is at least one pair of states that are equal to each other, and this pair contains no vertices in common with at least one of the pairs of states that are equal to each other on the first qubit. Thus we can find a product state that is orthogonal to $2$ states on the second qubit, $2$ more states on the first qubit, and $1$ more state on each of the remaining $p-2$ qubits, for a total of all $2 + 2 + (p-2) = p+2$ states. It follows that the UPB is extendible, so in fact there can only be one pair of equal states on each party, as in Figure \[fig:prop\_p2\_proof\].
(Figure \[fig:prop\_p2\_proof\]: the orthogonality graph of a single party, in which exactly one pair of states is equal to each other.)
However, it then directly follows that the orthogonality graph of each qubit can contain no more than $(p+1)/2$ edges, so the orthogonality graph of all $p$ qubits contains no more than $p(p+1)/2$ edges. However, in order for the $p+2$ states of the UPB to be mutually orthogonal, there would have to be at least $(p+1)(p+2)/2 > p(p+1)/2$ edges present in the orthogonality graph, so it follows that some of these states are not orthogonal on any qubit and thus do not form a UPB.
We have now seen examples that demonstrate that there are sizes near the minimal size $f(p)$ (defined in Equation ) for which no $p$-qubit UPB exists, and similarly there are sizes near the maximal size $2^p - 4$ for which no $p$-qubit UPB exists (e.g., there is no $4$-qubit UPB of size $11 = 2^p - 5$). We now show that these are essentially the only possible cases where qubit UPBs do not exist—there exist UPBs of all sizes that are sufficiently far in between the minimal and maximal sizes.
\[thm:many\_qubit\_sizes\] If $p \geq 7$ then there exists a $p$-qubit UPB of size $s$ whenever $$\begin{aligned}
\frac{p^2 + 3p - 30}{2} \leq s \leq 2^p - 6.
\end{aligned}$$
Before proving this result, we note that we actually prove the slightly better lower bound $\sum_{k=4}^{p-1} f(k)$. However, these two lower bounds never differ by more than $2$ (which is proved in Appendix B), so we prefer the present statement of the result with the lower bound $(p^2 + 3p - 30)/2$, which is much easier to work with.
We prove the result via Proposition \[prop:qubit\_add\_together\] and induction on $p$. As indicated above, we actually prove the slightly stronger statement that such a UPB exists whenever $\sum_{k=4}^{p-1} f(k) \leq s \leq 2^p - 6$.
For the base case $p = 7$, recall from Section \[section:5qubit\] that there exist $6$-qubit UPBs of sizes 8, 9, 12, 14–58, 60, and 64. It follows from Proposition \[prop:qubit\_add\_together\] that there exist $7$-qubit UPBs of sizes 16–18, 20–122, 124, and 128. We thus see that there is a $7$-qubit UPB of any size from $\sum_{k=4}^{6} f(k) = 6 + 6 + 8 = 20$ to $2^7 - 6 = 122$, as desired.
For the inductive step, fix $p$ and define the following four intervals of positive integers: $$\begin{aligned}
I_0^p & := \big[\sum_{k=4}^{p-1} f(k), 2^p - 6\big] &
I_1^p & := \big[\sum_{k=4}^{p} f(k), f(p) + (2^p - 6)\big] \\
I_2^p & := \big[ 2\sum_{k=4}^{p-1} f(k), 2^{p+1} - 12 \big] &
I_3^p & := \big[ 2^p + \sum_{k=4}^{p-1} f(k), 2^{p+1} - 6 \big].
\end{aligned}$$ Assume that there exist $p$-qubit UPBs of all sizes $s \in I_0^p$ and notice that the intervals we have defined satisfy the following relationships: $$\begin{aligned}
\label{eq:interval_rel}
I_1^p & = f(p) + I_0^p, & I_2^p & = I_0^p + I_0^p, & I_3^p & = 2^p + I_0^p.
\end{aligned}$$
Equations hint at the remainder of the proof. We can combine a minimal $p$-qubit UPB of size $f(p)$ with one of the $p$-qubit UPBs with size in $I_0^p$ via Proposition \[prop:qubit\_add\_together\] to obtain a $(p+1)$-qubit UPB of any size in $I_1^p$. Similarly, we can combine two $p$-qubit UPBs with sizes in $I_0^p$ to obtain $(p+1)$-qubit UPBs of any size in $I_2^p$. Finally, we can combine a $p$-qubit UPB of size $2^p$ (e.g., the standard basis) with one of the $p$-qubit UPBs with size in $I_0^p$ to obtain a $(p+1)$-qubit UPB of any size in $I_3^p$.
In order to complete the inductive step and the proof, it suffices to show that $I_1^p \cup I_2^p \cup I_3^p = I_0^{p+1}$. It is clear that the minimal values of $I_1^p$ and $I_0^{p+1}$ coincide, as do the maximal values of $I_3^p$ and $I_0^{p+1}$, so it is enough to show that $I_2^p$ overlaps with each of $I_1^p$ and $I_3^p$.
In order to show that $I_1^p$ and $I_2^p$ overlap, we must show that $2\sum_{k=4}^{p-1} f(k) \leq f(p) + (2^p - 6)$. This inequality can be seen from noting that $f(k) \leq k+4$, so $$\begin{aligned}
2\sum_{k=4}^{p-1} f(k) \leq 2\sum_{k=4}^{p-1} (k+4) = p^2 + 7p - 44 \leq 2^p - 6 \leq f(p) + (2^p - 6),
\end{aligned}$$ where we note that the second-to-last inequality can easily be verified by typical methods from calculus.
In order to show that $I_2^p$ and $I_3^p$ overlap, we must show that $2^p + \sum_{k=4}^{p-1} f(k) \leq 2^{p+1} - 12$. Similar to before, this inequality follows straightforwardly: $$\begin{aligned}
2^p + \sum_{k=4}^{p-1} f(k) \leq 2^p + \sum_{k=4}^{p-1} (k+4) = 2^p + (p^2 + 7p - 44)/2 \leq 2^{p+1} - 12,
\end{aligned}$$ where the final inequality once again can be verified by straightforward calculus. It follows that $I_1^p \cup I_2^p \cup I_3^p = I_0^{p+1}$, as desired, which completes the proof.
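Both overlap inequalities used in the inductive step can also be confirmed numerically for small $p$:

```python
# Check the two interval-overlap inequalities from the proof for p >= 7.
for p in range(7, 40):
    # I_1 and I_2 overlap:  p^2 + 7p - 44 <= 2^p - 6
    assert p**2 + 7*p - 44 <= 2**p - 6
    # I_2 and I_3 overlap:  2^p + (p^2 + 7p - 44)/2 <= 2^(p+1) - 12
    # (p^2 + 7p - 44 is always even, so integer division is exact)
    assert 2**p + (p**2 + 7*p - 44) // 2 <= 2**(p + 1) - 12
print("both inequalities hold for 7 <= p <= 39")
```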
As an immediate corollary of Theorem \[thm:many\_qubit\_sizes\], we note that as $p \rightarrow \infty$, almost all cardinalities in the interval $[1,2^p]$ are attainable as the size of a $p$-qubit UPB. This fact can be seen by noting that the proportion of attainable cardinalities is at least $$\begin{aligned}
\frac{(2^p - 6) - (p^2 + 3p - 30)/2 + 1}{2^p},\end{aligned}$$ which tends to $1$ as $p \rightarrow \infty$.
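For concreteness, this lower bound on the proportion of attainable cardinalities can be evaluated directly for a few values of $p$:

```python
# Lower bound on the fraction of sizes in [1, 2^p] attained by some
# p-qubit UPB, from Theorem (many_qubit_sizes). p^2 + 3p - 30 is
# always even, so integer division is exact.
for p in (7, 10, 15, 20, 30):
    frac = ((2**p - 6) - (p**2 + 3*p - 30) // 2 + 1) / 2**p
    print(p, frac)
# the fraction increases toward 1 as p grows
```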
On the other hand, Theorem \[thm:many\_qubit\_sizes\] does not place a very good bound on how large of a “gap” $g_p$ there can be such that there exist nontrivial $p$-qubit UPBs of size $s$ and $s + g_p + 1$, but no $p$-qubit UPB of any intermediate size $s+1,\ldots,s+g_p$. Indeed, Theorem \[thm:many\_qubit\_sizes\] does not even guarantee that the maximal value of $g_p$ stays bounded as $p \rightarrow \infty$, since the difference between $(p^2 + 3p - 30)/2$ and $f(p)$ tends to infinity as $p$ does.
We now show that there is indeed an absolute upper bound on how large of a “gap” in qubit UPB sizes there can be: $g_p \leq 7$ regardless of $p$, and if $p \not\equiv 1 \, (\text{mod } 4)$ then $g_p \leq 3$. The construction of UPBs presented in the proof of the following theorem generalizes [**Shifts**]{} as well as the $5$-qubit UPB of size $8$ that was presented in Section \[section:5qubit\]. This result also generalizes [@Joh13UPB Lemma 4].
\[thm:qubit\_4k\_upbs\] Let $p,s \in \mathbb{N}$ be such that $p+1 \leq s \leq 2^p$ and $s$ is a multiple of $4$. Then there is a UPB in $({\mathbb{C}}^{2})^{\otimes p}$ of cardinality $s$, with the possible exception of the case when $p \equiv 1 \, (\text{mod } 4)$ and $s = 2p + 2$.
We note that it suffices to construct a UPB in the case when $p+1 \leq s \leq 2p$ since the case when $s \geq 2p + 1$ follows directly from Proposition \[prop:qubit\_add\_together\] and induction. For example, when $p \equiv 0 \, (\text{mod } 4)$, we know (by inductive hypothesis) that there are $(p-1)$-qubit UPBs of any size in the set $\{p,p+4,\ldots,2^{p-1}-4,2^{p-1}\}$, and combining these UPBs via Proposition \[prop:qubit\_add\_together\] gives $p$-qubit UPBs of any size in $\{2p,2p+4,\ldots,2^p-4,2^p\}$. The cases when $p \equiv 1,2,3 \, (\text{mod } 4)$ are similar.
We now focus on constructing a UPB in the $s = 2p$ case, and we will generalize this construction to smaller values of $s$ later. Define the integer $k := s/4$. To construct the orthogonality graph of the desired UPB, begin by letting the orthogonality graph on one of the parties be such that every vertex is connected to exactly one other vertex (as in the top graph of Figure \[fig:4k\_upb\]). On each of the remaining parties, have each of these pairs of states be equal to each other.
(Figure \[fig:4k\_upb\]: orthogonality graphs for the $s = 2p$ construction. On the first party every vertex is connected to exactly one other vertex; on each of the remaining parties, each such pair of vertices is connected to exactly one other pair.)
Since the complete graph on $2k$ vertices has a $1$-factorization [@Har69 Theorem 9.1], on each of these remaining $p - 1 = 2k - 1$ parties we can connect each pair of vertices to exactly one other pair of vertices in such a way that the union of these $p$ orthogonality graphs is the complete graph, so the corresponding product states are mutually orthogonal. The fact that this product basis is also unextendible follows easily from its construction – any product state can be orthogonal to at most $1$ state on the first party and at most $2$ of the states on each of the remaining $p-1$ parties, for a total of $1 + 2(p-1) = 2p-1$ states. Thus there is no product state orthogonal to all $s = 2p$ members of this product basis.
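The $1$-factorization of $K_{2k}$ invoked here can be built explicitly by the standard round-robin construction (one vertex stays fixed while the remaining $2k-1$ vertices rotate). The sketch below constructs such a factorization and checks that its matchings partition the edge set; the function name and the choice $n = 8$ are illustrative.

```python
def one_factorization(n):
    """Partition the edges of K_n (n even) into n - 1 perfect matchings
    via round-robin: vertex n - 1 is fixed, the others rotate."""
    assert n % 2 == 0
    m = n - 1
    rounds = []
    for r in range(m):
        matching = [(r, m)]
        for i in range(1, n // 2):
            matching.append(((r + i) % m, (r - i) % m))
        rounds.append(matching)
    return rounds

n = 8
rounds = one_factorization(n)
edges = set()
for matching in rounds:
    covered = set()
    for u, v in matching:
        assert u != v
        covered.update((u, v))
        edges.add(frozenset((u, v)))
    assert covered == set(range(n))      # each round is a perfect matching
assert len(edges) == n * (n - 1) // 2    # every edge of K_n appears once
print(len(rounds), "matchings partition K_8")
```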
(Figure \[fig:split\_og\]: “splitting” one qubit into two; the orthogonality graph of the states $\{{| a \rangle},{| a \rangle},{| \overline{a} \rangle},{| \overline{a} \rangle}\}$ is replaced by that of $\{{| aa \rangle},{| bb \rangle},{| \overline{ab} \rangle},{| \overline{ba} \rangle}\}$.)
To generalize this construction to the $p+1 \leq s < 2p$ case, we modify some of the last $p-1$ parties in the $s = 2p$ construction above by “splitting” one party into two. To “split” a qubit, replace each quadruple of states of the form $\{{| a \rangle},{| a \rangle},{| \overline{a} \rangle},{| \overline{a} \rangle}\}$ with the two-qubit states $\{{| aa \rangle},{| bb \rangle},{| \overline{ab} \rangle},{| \overline{ba} \rangle}\}$ (see Figure \[fig:split\_og\]). This procedure is easily verified to preserve unextendibility and orthogonality, so the resulting set of product states is a UPB.
Furthermore, since this procedure keeps $s$ the same but increases the number of parties by $1$, and we can split anywhere from $1$ up to $p-1 = s/2 - 1$ orthogonality graphs in this way, we can construct an unextendible product basis of $s$ states for any number of parties $p$ from $s/2$ up to $s-1$, as desired (see Figure \[fig:split\_og\_ex\]).
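The orthogonality-preservation claim for the splitting step is easy to check numerically. The following sketch (illustrative only; the states $|a\rangle$, $|b\rangle$ are arbitrary) verifies that the four pairs that were orthogonal on the original qubit remain orthogonal on the two new qubits:

```python
import numpy as np

def ket(theta):
    """A generic real qubit state and its orthogonal complement."""
    return (np.array([np.cos(theta), np.sin(theta)]),
            np.array([-np.sin(theta), np.cos(theta)]))

a, a_bar = ket(0.3)   # arbitrary bases; the check is basis-independent
b, b_bar = ket(1.1)

# Before the split, states 1,2 carry |a> and states 3,4 carry |a_bar>,
# so the orthogonal pairs on this qubit are (1,3), (1,4), (2,3), (2,4).
split = [np.kron(a, a), np.kron(b, b),
         np.kron(a_bar, b_bar), np.kron(b_bar, a_bar)]

for i, j in [(0, 2), (0, 3), (1, 2), (1, 3)]:
    assert abs(split[i] @ split[j]) < 1e-12
```

The pairs $(1,2)$ and $(3,4)$, which were not orthogonal on the original qubit, remain non-orthogonal on the split pair, as required.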
Note that Theorem \[thm:qubit\_4k\_upbs\] says nothing about the existence or nonexistence of qubit UPBs in the case when $p \equiv 1 \, (\text{mod } 4)$ and $s = 2p + 2$. The smallest such case is when $p = 5$ and $s = 12$, and in fact a UPB *does* exist in this case, simply by combining two copies of the $4$-qubit UPB of size $6$. This leaves the $p = 9, s = 20$ case as the smallest case where the existence of a qubit UPB whose size is a multiple of $4$ is unknown.
Conclusions and Outlook {#section:conclusions}
=======================
We have investigated the structure of qubit unextendible product bases by completely characterizing them in the $4$-qubit case and deriving many new results concerning the (non)existence of qubit UPBs of given sizes. This work has many immediate applications and consequences.
For example, there is a long history of trying to determine the possible ranks of bound entangled states [@HSTT03; @Cla06; @Ha07; @KO12; @CD13], however very little is known about this question in the multipartite case. By using the standard method of creating a bound entangled state from a UPB (i.e., if $\{{| v_1 \rangle},\ldots,{| v_s \rangle}\}$ is a UPB then $\rho := I - \sum_{i=1}^s {| v_i \rangle\langle v_i |}$ is a multiple of a bound entangled state), we can use our results to construct $p$-qubit bound entangled states of many different ranks. For example, Theorem \[thm:many\_qubit\_sizes\] immediately implies that (for $p \geq 7$) there exist $p$-qubit bound entangled states of every rank from $6$ through $2^p - \frac{p^2 + 3p - 30}{2}$, inclusive.
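For concreteness, this recipe can be run on the smallest qubit UPB, the well-known $3$-qubit “Shifts” basis of size $4$ (a standard example from the literature, not constructed in this paper); the resulting $\rho$, normalized, is a state of rank $2^p - s = 4$:

```python
import numpy as np

z0, z1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus, minus = (z0 + z1) / np.sqrt(2), (z0 - z1) / np.sqrt(2)

def kron(*vs):
    out = np.array([1.0])
    for v in vs:
        out = np.kron(out, v)
    return out

# The 3-qubit "Shifts" UPB: |0,1,+>, |1,+,0>, |+,0,1>, |-,-,->.
upb = [kron(z0, z1, plus), kron(z1, plus, z0),
       kron(plus, z0, z1), kron(minus, minus, minus)]

# Mutual orthogonality of the four product states:
assert all(abs(upb[i] @ upb[j]) < 1e-12
           for i in range(4) for j in range(i + 1, 4))

# Normalized projector onto the complementary subspace:
# a rank-4 bound entangled state on 3 qubits.
rho = (np.eye(8) - sum(np.outer(v, v) for v in upb)) / 4
eigs = np.linalg.eigvalsh(rho)
assert abs(np.trace(rho) - 1) < 1e-12 and np.sum(eigs > 1e-9) == 4
```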
It is also known that some qubit UPBs can be used to construct locally indistinguishable subspaces of the same size [@DXY10]. It is unknown whether or not *all* qubit UPBs span a locally indistinguishable subspace, so it might be the case that the hundreds of new UPBs found in this work contain a counter-example. Alternatively, if it turns out that qubit UPBs do always span a locally indistinguishable subspace, it would follow that there exist such subspaces of all sizes found in this work.
However, some notable open problems about qubit UPBs remain:
1. Does there exist a $p$-qubit UPB of size $2^p - 5$ when $p \geq 5$? Our computer search showed that the answer is “no” when $p = 4$, but we still do not know of a simple reason for why this is the case. It is worth noting that if the answer is “yes” for any particular value of $p$ then it must be “yes” for all larger values of $p$ as well, by Proposition \[prop:qubit\_add\_together\]. Closely related to this question is whether or not there exist $p$-qubit bound entangled states of rank $5$.
2. Does there exist a $p$-qubit UPB of size $2p+2$ when $p \equiv 1 \, (\text{mod } 4)$? That is, can we fill in the hole in Theorem \[thm:qubit\_4k\_upbs\]?
3. What is the true maximum “gap size” $g_p$ as described in Section \[section:manyqubit\]? The best bounds that we have so far are $g_p \geq 1$ when $p \geq 5$ is odd, $g_p \leq 3$ when $p \not\equiv 1 \, (\text{mod } 4)$, and $g_p \leq 7$ always.
4. There are also many other cases where the existence of an $s$-state $p$-qubit UPB is unknown, since very little is known about the existence of such UPBs when $f(p) < s < (p^2 + 3p - 30)/2$ and $s$ is not a multiple of $4$ (see Table \[tab:qubit\_summary\]).
--------- ----- ----- ----- ----- ----- ----- -----
size $1$ $2$ $3$ $4$ $5$ $6$ $7$
1
2
3
4
5
6
7
8
9
10 ? ?
11 ? ? ?
12
13 ? ?
14 ?
15 ?
16
17–18
19 ?
20–26
27 ?
28
29–31
32
33–58
59 ?
60
61–63
64
65–122
123 ?
124
125–127
128
--------- ----- ----- ----- ----- ----- ----- -----
: (color online) A summary of what sizes of UPBs are possible on small numbers of qubits. Empty cells indicate that no UPB of that size is possible on that number of qubits, while checkmarks () indicate that an explicit UPB of that size is known. Question marks (?) indicate that it is currently unknown whether or not such a UPB exists.[]{data-label="tab:qubit_summary"}
[**Acknowledgements.**]{} The author thanks Remigiusz Augusiak and Jianxin Chen for helpful conversations related to this work. The author was supported by the Natural Sciences and Engineering Research Council of Canada.
Appendix A: Details of the Computation {#section:computation .unnumbered}
======================================
Since it is not feasible to find all $4$-qubit UPBs via naive brute force search, our search is split into three steps so that most of the work can be done in parallel. The code used to carry out the search can be downloaded from [@JohUPBCode14]. The code that does the time-consuming parts of the computation is written in C, and then some post-processing is done in Maple.
Step 1: Finding Potential Bipartite Graph Decompositions {#section:computation_step1 .unnumbered}
--------------------------------------------------------
Recall from Section \[section:orthog\_graphs\] that the orthogonality graph of each qubit of a UPB can decomposed as the disjoint union of complete bipartite graphs. The first step is to try to find sizes of complete bipartite graphs that could possibly correspond to UPBs, without considering the actual placement of those complete bipartite graphs within each qubit.
That is, we first search for positive integers $n_1,\ldots,n_p$ and $\{a^{(i)}_1,b^{(i)}_1,\ldots,a^{(i)}_{n_i},b^{(i)}_{n_i}\}_{i=1}^{p}$ such that there could possibly be a UPB with the property that the orthogonality graph of its $i$-th qubit ($1 \leq i \leq p$) is the disjoint union of $K_{a^{(i)}_1,b^{(i)}_1},\ldots,K_{a^{(i)}_{n_i},b^{(i)}_{n_i}}$, where $n_i$ is the total number of complete bipartite graphs present in the orthogonality graph of the $i$-th qubit.
There are many simple necessary conditions that the $a^{(i)}_j$’s and $b^{(i)}_j$’s must satisfy in order for there to be a corresponding UPB. For example, we clearly must have $\sum_{j=1}^{n_i}(a^{(i)}_j+b^{(i)}_j) = s$ for all $i$, since there must be exactly $s$ vertices in each qubit’s orthogonality graph. Also, it is straightforward to see that $a^{(i)}_j,b^{(i)}_j \leq s - p$ for all $i,j$, since otherwise we could find a product state that is orthogonal to $s-p+1$ states of the UPB on one qubit and $1$ state of the UPB on each of the remaining qubits, for a total of $(s-p+1) + (p-1) = s$ states, which implies extendibility. Furthermore, since $K_{a^{(i)}_j,b^{(i)}_j}$ has $a^{(i)}_j b^{(i)}_j$ edges and we need at least $s(s-1)/2$ edges in order for the corresponding states to be mutually orthogonal, we must have $\sum_{i,j}a^{(i)}_j b^{(i)}_j \geq s(s-1)/2$.
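A sketch of the per-qubit part of this filter in Python (illustrative only; the actual search code [@JohUPBCode14] is written in C, and the edge-count condition, which couples the qubits, is applied afterwards):

```python
from itertools import combinations_with_replacement

def qubit_decompositions(s, p):
    """All multisets of complete-bipartite block sizes (a_j, b_j) that
    one qubit's orthogonality graph could decompose into: the parts
    must sum to s, and no part may exceed s - p."""
    pairs = [(a, b) for a in range(1, s - p + 1)
                    for b in range(a, s - p + 1)]
    out = []
    for n in range(1, s // 2 + 1):      # number of bipartite blocks
        for combo in combinations_with_replacement(pairs, n):
            if sum(a + b for a, b in combo) == s:
                out.append(combo)
    return out

# For an 8-state, 4-qubit UPB each part is at most s - p = 4:
decomps = qubit_decompositions(8, 4)
assert ((4, 4),) in decomps and ((1, 1), (1, 1), (1, 1), (1, 1)) in decomps
```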
There are also some less obvious restrictions that we can place on our search space, as described by the following lemmas.
\[lem:reverse\_combine\] Suppose that the constants $\{a^{(i)}_1,b^{(i)}_1,\ldots,a^{(i)}_{n_i},b^{(i)}_{n_i}\}_{i=1}^{p}$ (defined above) correspond to a $p$-qubit UPB. There exists a particular $i$ such that $n_i = 1$ if and only if there exist $(p-1)$-qubit UPBs of size $a^{(i)}_{1}$ and $b^{(i)}_{1}$.
The “if” direction of this lemma is actually just a rewording of Proposition \[prop:qubit\_add\_together\]. If there are $(p-1)$-qubit UPBs of size $a^{(i)}_{1}$ and $b^{(i)}_{1}$ then the method of construction given in the proof of Proposition \[prop:qubit\_add\_together\] gives a $p$-qubit UPB with $n_p = 1$ (i.e., only one basis of $\mathbb{C}^2$ is used on the $p$-th qubit).
For the “only if” direction of the proof, suppose for a contradiction that $n_i = 1$ for some $i$, which means that there exists a basis $\{{| a \rangle},{| \overline{a} \rangle}\}$ of $\mathbb{C}^2$ such that $a^{(i)}_{1}$ of the states in the $p$-qubit UPB are equal to ${| a \rangle}$ on the $i$-th qubit, and the remaining $b^{(i)}_{1}$ states are equal to ${| \overline{a} \rangle}$ on the $i$-th qubit. It is straightforward to check that those sets of $a^{(i)}_{1}$ and $b^{(i)}_{1}$ states form $(p-1)$-qubit UPBs if we remove their $i$-th qubits.
\[lem:search\_reduce\] Suppose that the constants $\{a^{(i)}_1,b^{(i)}_1,\ldots,a^{(i)}_{n_i},b^{(i)}_{n_i}\}_{i=1}^{p}$ (defined above) correspond to a UPB. Fix any permutation $\sigma : \{1,\ldots,p\} \rightarrow \{1,\ldots,p\}$, let $t_1$ be any of the $a^{(\sigma(1))}_j$’s or $b^{(\sigma(1))}_j$’s, and define the constants $t_2,\ldots,t_p$ recursively as follows: $$\begin{aligned}
t_k & := \min_{c_j,d_j} \Big\{ \max_{j} \Big\{ a^{(\sigma(k))}_j - c_j, b^{(\sigma(k))}_j - d_j : c_j,d_j \geq 0 \text{ are integers}, \sum_j c_j + \sum_j d_j = t_{k-1}\Big\} \Big\}.
\end{aligned}$$ Then $\sum_{i=1}^p t_i \leq s - 1$.
Before proving Lemma \[lem:search\_reduce\], we note that it is actually slightly more intuitive than it appears at first glance, as it is just a generalization of the fact that $a^{(i)}_j,b^{(i)}_j \leq s - p$ for all $i,j$ that takes into account how large the $a^{(i)}_j$’s and $b^{(i)}_j$’s are on more than one party. For example, the lemma says that there can not be an $8$-state UPB on $5$ qubits such that the orthogonality graphs of its first two qubits each decompose as $K_{3,3} \cup K_{1,1}$, since we could then choose $t_1 = 3$, which gives $t_2 = 2$ and $t_3 = t_4 = t_5 = 1$ and we have $\sum_{i=1}^p t_i = 8 > s-1 = 7$.
We can come to the same conclusion in a more intuitive manner by noting that we can always find a product state orthogonal to $3$ states on the first qubit and at least $2$ more states on the second qubit (and of course $1$ state on each of the remaining qubits), since the groups of $3$ equal states on the first two qubits can not overlap “too much”.
The result follows from simply observing that we can find a product state that is orthogonal to $t_1$ members of the UPB on party $\sigma(1)$, $t_2$ more members of the UPB on qubit $\sigma(2)$, and so on. Thus unextendibility implies that $\sum_{i=1}^p t_i \leq s - 1$.
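The recursion defining the $t_k$ can be brute-forced directly for small cases. The following sketch (not from the paper's code) reproduces the value $t_2 = 2$ from the $K_{3,3} \cup K_{1,1}$ example above:

```python
from itertools import product

def t_next(blocks, t_prev):
    """One step of the recursion in the lemma: blocks lists the sizes
    (a_j, b_j) of the complete bipartite graphs on the next qubit, and
    t_prev is the previous value.  Brute-forces all nonnegative integer
    allocations c_j, d_j summing to t_prev."""
    parts = [a for a, b in blocks] + [b for a, b in blocks]
    best = max(parts)
    for alloc in product(range(t_prev + 1), repeat=len(parts)):
        if sum(alloc) == t_prev:
            best = min(best, max(p - c for p, c in zip(parts, alloc)))
    return best

# Two qubits each decomposing as K_{3,3} u K_{1,1}, starting from
# t_1 = 3, give t_2 = 2, as computed in the text.
assert t_next([(3, 3), (1, 1)], 3) == 2
```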
By making use of Lemmas \[lem:reverse\_combine\] and \[lem:search\_reduce\], as well as the other basic restrictions mentioned earlier, we are able to perform a brute-force search that gives a list of potential values of the $a^{(i)}_j$’s and $b^{(i)}_j$’s. However, many of these results do not actually lead to UPBs. For example, in the case of $11$-state $4$-qubit UPBs, the above restrictions give a list of $14449$ possible values for the $a^{(i)}_j$’s and $b^{(i)}_j$’s, such as the following: $$\begin{aligned}
\text{Qubit 1:} & \ K_{4,3} \cup K_{1,1} \cup K_{1,1} \\
\text{Qubit 2:} & \ K_{4,1} \cup K_{3,3} \\
\text{Qubit 3:} & \ K_{3,3} \cup K_{3,2} \\
\text{Qubit 4:} & \ K_{3,3} \cup K_{3,2}.\end{aligned}$$ However, it turns out that there are not actually any UPBs whose orthogonality graph has such a decomposition (nor any of the other $14448$ potential decompositions), which is determined in the next step of the computation.
Step 2: Checking Each Decomposition {#section:computation_step2 .unnumbered}
-----------------------------------
The second step in the computation is much more time-consuming—it consists of checking each of the decompositions found in the first step to see if there are actually any UPBs with such a decomposition. However, searching each of these different decompositions can be done in parallel, which greatly speeds up the process. This part of the search is done via standard brute force and is fairly straightforward.
Step 3: Sorting the Results {#section:computation_step3 .unnumbered}
---------------------------
The third (and final) step in the computation is to sort the UPBs into equivalence classes based on their orthogonality graphs. This step is necessary because many of the UPBs found in step 2 are actually equivalent to each other (i.e., they are the same up to relabeling local bases and permuting states and qubits). This step is quick enough that it is also done by fairly standard brute force: for each UPB found in step 2, all possible relabelings and permutations of the UPB are generated and checked against all other UPBs found in step 2. If a match is found, then one of the two UPBs is discarded, since they are equivalent. Note that no particular preference is given to *which* UPB is discarded.
Appendix B: Comparison of the Lower Bounds of Theorem \[thm:many\_qubit\_sizes\] {#section:bound_comparison .unnumbered}
================================================================================
Recall from the discussion surrounding Theorem \[thm:many\_qubit\_sizes\] that we claimed that $\sum_{k=4}^{p-1}f(k)$ is a lower bound of $(p^2+3p-30)/2$, and furthermore than these two quantities never differ by more than $2$. We now prove these claims explicitly.
\[prop:fk\_sum\] Let $p \geq 7$ be an integer. Then $$\begin{aligned}
\sum_{k=4}^{p-1}f(k) \leq \frac{p^2+3p-30}{2} \leq \sum_{k=4}^{p-1}f(k) + 2.
\end{aligned}$$
We prove the result by induction. We first prove six base cases by noting that, for $p = 7, 8, 9, 10, 11, 12$ we have $\sum_{k=4}^{p-1}f(k) = 20, 28, 39, 49, 61, 73$ and $(p^2+3p-30)/2 = 20, 29, 39, 50, 62, 75$, so the result holds in these cases.
For the inductive step, our goal is to show that, if the result holds for a fixed value of $p$ (with $p \geq 9$), then it holds for $p+4$ as well. To this end, note that it follows from the definition given in Equation that $\sum_{k=p}^{p+3}f(k) = 4p + 14$ for all $p \geq 9$. Thus, by making use of the inductive hypothesis, we have $$\begin{aligned}
& \ \ \sum_{k=4}^{p-1}f(k) \leq \frac{p^2+3p-30}{2} \leq \sum_{k=4}^{p-1}f(k) + 2 \\
\Longrightarrow & \ \ \sum_{k=4}^{p-1}f(k) + (4p+14) \leq \frac{p^2+3p-30}{2} + (4p+14) \leq \sum_{k=4}^{p-1}f(k) + 2 + (4p+14) \\
\Longrightarrow & \ \ \sum_{k=4}^{p+3}f(k) \leq \frac{(p+4)^2+3(p+4)-30}{2} \leq \sum_{k=4}^{p+3}f(k) + 2,
\end{aligned}$$ which completes the inductive step and the proof.
---
abstract: 'In this paper we consider left-invariant pseudo-Kähler structures on six-dimensional nilpotent Lie algebras. The explicit expressions of the canonical complex structures are calculated, and the curvature properties of the associated pseudo-Kähler metrics are investigated. It is proved that the associated pseudo-Kähler metric is Ricci-flat, that the curvature tensor has zero pseudo-Riemannian norm, and that the curvature tensor has some non-zero components that depend only on two or, at most, three parameters. The pseudo-Kähler structures obtained give basic models of pseudo-Kähler six-dimensional nilmanifolds.'
---
N. K. Smolentsev
Canonical pseudo-Kähler structures
on six-dimensional nilpotent Lie groups[^1]
Preface {#Preface}
=======
A left-invariant pseudo-Kähler (or indefinite Kähler) structure $(J, \omega)$ on a Lie group $G$ with Lie algebra ${{\mathfrak g}}$ consists of a nondegenerate closed left-invariant 2-form $\omega$ and a left-invariant complex structure $J$ on $G$ which are *compatible*, i.e. $\omega(JX, JY) = \omega(X, Y)$, for all $X, Y \in {{\mathfrak g}}$.
Given a pseudo-Kähler structure $(J, \omega)$ on ${{\mathfrak g}}$, there exists an associated nondegenerate symmetric 2-tensor $g$ on ${{\mathfrak g}}$ defined by $g(X, Y) = \omega(X,JY)$ for $X, Y \in {{\mathfrak g}}$. It is well-known [@BG] that if the Lie algebra ${{\mathfrak g}}$ is nilpotent, then the associated metric $g$ for any compatible pair $(J, \omega)$ cannot be positive definite unless ${{\mathfrak g}}$ is abelian. Therefore $g$ is a pseudo-Riemannian metric. We shall say that $g$ is a left-invariant pseudo-Kähler metric on the Lie group $G$.
A classification of left-invariant complex structures on six-dimensional nilpotent Lie groups is given in the work of Salamon [@Sal-1]. The classification of left-invariant symplectic structures on six-dimensional nilpotent Lie groups is established in the paper by Goze, Khakimdjanov and Medina [@Goze-Khakim-Med]. In the paper by Cordero, Fernández and Ugarte [@CFU2], the pseudo-Kähler structures in a six-dimensional nilpotent Lie group are studied. It is shown that for a pseudo-Kähler structure $(J, \omega)$ on a six-dimensional nilpotent Lie group, the complex structure $J$ will be nilpotent, and on some Lie groups abelian.
Let $\{e^1,\dots,e^n\}$ be a basis of left-invariant 1-forms on $G$. In the following theorem we write $\mathfrak{g}$ as an $n$-tuple $(0, 0, de^3, \dots, de^n)$, abbreviating $e^{ij}=e^ {i} \wedge e^ {j}$ further to $ij$. For example, the $n$-tuple $(0,0,0,0,12,34)$ designates a Lie algebra with the structure equations $de^1=de^2=de^3=de^4=0$, $de^5 =e^1\wedge e^2$ and $de^6 =e^3\wedge e^4$. Besides, for each algebra its number in the classification list for symplectic Lie algebras given in [@Goze-Khakim-Med] is also specified.
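The tuple notation translates mechanically into structure constants. The following sketch (sign conventions relating $de^k$ to $[e_i,e_j]$ vary between authors; one choice is fixed below) encodes ${{\mathfrak g}}_{24} = (0,0,0,0,12,34)$ and checks the Jacobi identity, which is equivalent to $d^2 = 0$:

```python
from itertools import product

# Structure equations of g_24 = (0,0,0,0,12,34): de^5 = e^1 wedge e^2,
# de^6 = e^3 wedge e^4.  Encode de^k as {(i, j): coefficient} with i < j.
n = 6
d = {5: {(1, 2): 1.0}, 6: {(3, 4): 1.0}}

# Structure constants via e^k([e_i, e_j]) = -de^k(e_i, e_j), one common
# sign convention; here [e_1, e_2] = -e_5 and [e_3, e_4] = -e_6.
c = [[[0.0] * n for _ in range(n)] for _ in range(n)]
for k, terms in d.items():
    for (i, j), coef in terms.items():
        c[i - 1][j - 1][k - 1] = -coef
        c[j - 1][i - 1][k - 1] = coef

def bracket(x, y):
    return [sum(c[i][j][k] * x[i] * y[j]
                for i in range(n) for j in range(n)) for k in range(n)]

# d^2 = 0 is equivalent to the Jacobi identity; check it on basis vectors.
e = [[float(i == j) for j in range(n)] for i in range(n)]
for i, j, k in product(range(n), repeat=3):
    jac = [a + b + g for a, b, g in zip(
        bracket(e[i], bracket(e[j], e[k])),
        bracket(e[j], bracket(e[k], e[i])),
        bracket(e[k], bracket(e[i], e[j])))]
    assert all(abs(t) < 1e-12 for t in jac)
```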
\[[@CFU2]\] \[PsKahl\_6\] Let $\mathfrak {g}$ be a (nonabelian) six-dimensional nilpotent Lie algebra. Then $\mathfrak {g}$ possesses a compatible pair $(J, \omega)$ if and only if $\mathfrak {g}$ is isomorphic to one of the following Lie algebras: $$\begin{array}{ll}
{{\mathfrak g}}_{21} = (0, 0, 0, 0, 12, 14 + 25), & {{\mathfrak g}}_{24} = (0, 0, 0, 0, 12, 34), \\
{{\mathfrak g}}_{14} = (0, 0, 0, 12, 13, 14), & {{\mathfrak g}}_{17} = (0, 0, 0, 0, 12, 14 + 23), \\
{{\mathfrak g}}_{13} = (0, 0, 0, 12, 13, 14 + 23), & {{\mathfrak g}}_{16}=(0, 0, 0, 0, 13 + 42, 14 + 23),\\
{{\mathfrak g}}_{15} = (0, 0, 0, 12, 13, 24), & {{\mathfrak g}}_{23} = (0, 0, 0, 0, 12, 13), \\
{{\mathfrak g}}_{11} = (0, 0, 0, 12, 13 + 14, 24), & {{\mathfrak g}}_{18} = (0, 0, 0, 12, 13, 23), \\
{{\mathfrak g}}_{10} = (0, 0, 0, 12, 14, 13 + 42), & {{\mathfrak g}}_{25} = (0, 0, 0, 0, 0, 12), \\
{{\mathfrak g}}_{12} = (0, 0, 0, 12, 13 + 42, 14 + 23). & \\
\end{array}$$
\[[@CFU2]\] In dimension 6, the Lie algebra ${{\mathfrak g}}$ has compatible pairs $(J,\omega)$ if and only if it admits both symplectic and nilpotent complex structures.
For all the Lie algebras listed, the compatible complex structure $J$ is nilpotent, and for algebras ${{\mathfrak g}}_{21}$, ${{\mathfrak g}}_{12}$, ${{\mathfrak g}}_{16}$ and ${{\mathfrak g}}_{25}$ $J$ is abelian. For each Lie algebra in this list an example of a nilpotent complex structure is given in [@CFU2], and the compatible symplectic forms for it are presented.
It is more natural to start from the classification list of Goze, Khakimdjanov and Medina [@Goze-Khakim-Med], in which all symplectic six-dimensional nilpotent Lie algebras are presented and where it is shown that each symplectic nilpotent Lie algebra is symplectically isomorphic to one of the algebras on this list.
Therefore we will consider the Lie algebras from Theorem \[PsKahl\_6\] equipped with the symplectic structures from the list in [@Goze-Khakim-Med], and for these we will search for all compatible complex structures. Generally speaking, for a given symplectic structure $\omega$ there is a multiparameter family of compatible complex structures. For example, on the abelian group ${\mathbb{R}}^6$ with symplectic structure $\omega = e^1\wedge e^2 +e^3\wedge e^4 +e^5\wedge e^6$, there is a 12-parameter family of complex structures $J=(\psi_{ij})$ compatible with $\omega$. However, the curvature of the associated metric $g(X,Y)=\omega(X,JY)$ is zero for any such complex structure $J$. Therefore, from the geometrical point of view, there is no sense in considering the most general compatible complex structure $J$; it is much more natural to choose the simplest one: $J(e_1)=e_2$, $J(e_3)=e_4$, $J(e_5)=e_6$. We will adhere to this point of view for all Lie algebras ${{\mathfrak g}}$.
After finding a multiparameter family of complex structures $J=(\psi_{ij})$ compatible with the symplectic structure $\omega$ on ${{\mathfrak g}}$, we will set to zero all free parameters $\psi_{ij}$ on which the curvature of the associated metric does not depend. We will call such structures *canonical*.
We will obtain explicit expressions for the canonical complex structures, and we will investigate the curvature properties of the associated pseudo-Kähler metrics. We will prove that the associated pseudo-Kähler metric is Ricci-flat, that the scalar square $g(R,R)$ of the curvature tensor $R$ is equal to zero, and that the curvature tensor has some non-zero components that depend only on two or at most three parameters.
For some of the Lie groups in this paper, to find the compatible complex structure $J=(\psi_{ij})$ we will use the complex structures found by Magnin in [@Mag-3], and we will solve only the compatibility condition $\omega(JX,JY)=\omega(X,Y)$. For the remaining Lie groups the compatible complex structure $J=(\psi_{ij})$ is found as a solution of three systems of equations: the compatibility condition $\omega \circ J+J^t\circ \omega=0$; the almost complex structure condition $J^2=-Id$; and the integrability condition $N_J(X,Y)=0$. First we solve the linear system given by the compatibility condition. If there are difficulties in solving the integrability equations, we compute the curvature tensor of the metric associated with the almost complex structure and find the parameters on which it depends. Setting the remaining free parameters equal to zero, we then solve the integrability equations and obtain the required complex structure and the associated metric.
All computations are carried out in the computer algebra system Maple. The formulas used in the computations are given at the end of this paper.
Pseudo-Kähler structures on nilpotent Lie groups
================================================
Let $G$ be a real Lie group of dimension $n$ with Lie algebra $\mathfrak{g}$. In this paper we will study a left-invariant pseudo-Kähler structure $(J, \omega)$ on a Lie group $G$. A left-invariant pseudo-Kähler (or indefinite Kähler) structure $(J, \omega)$ on a Lie group $G$ with Lie algebra ${{\mathfrak g}}$ consists of a nondegenerate closed left-invariant 2-form $\omega$ and a left-invariant complex structure $J$ on $G$ which are *compatible*, i.e. $\omega(JX, JY) = \omega(X, Y)$, for all $X, Y \in {{\mathfrak g}}$. Given a pseudo-Kähler structure $(J, \omega)$ on ${{\mathfrak g}}$, there exists an associated left-invariant pseudo-Kähler metric $g$ on ${{\mathfrak g}}$ defined by $$g(X, Y) = \omega(X,JY), \mbox{ for } X, Y \in {{\mathfrak g}}.$$
As the symplectic structure $\omega$ and the complex structure $J$ are left-invariant on $G$, they are determined by their values on the Lie algebra ${{\mathfrak g}}$. Therefore from now on we will deal only with the Lie algebra ${{\mathfrak g}}$, regarding $\omega$ and $J$ as symplectic and complex structures on ${{\mathfrak g}}$, respectively.
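In matrix terms (a small numerical aside, not from the paper): if $\Omega$ is the Gram matrix of $\omega$ and $J$ the matrix of the complex structure, then the associated metric has matrix $G = \Omega J$, and the compatibility condition $\omega\circ J + J^t\circ\omega = 0$ is exactly what makes $G$ symmetric. A check on the standard structure of the abelian $\mathbb{R}^6$:

```python
import numpy as np

# Standard symplectic form and compatible J on the abelian R^6:
# omega = e^12 + e^34 + e^56, with J e1 = e2, J e3 = e4, J e5 = e6.
blockO = np.array([[0.0, 1.0], [-1.0, 0.0]])
blockJ = np.array([[0.0, -1.0], [1.0, 0.0]])
Omega = np.kron(np.eye(3), blockO)
J = np.kron(np.eye(3), blockJ)

# Compatibility omega(JX, JY) = omega(X, Y) reads J^T Omega J = Omega,
# equivalently Omega J + J^T Omega = 0:
assert np.allclose(J.T @ Omega @ J, Omega)
assert np.allclose(Omega @ J + J.T @ Omega, 0)

# The associated metric g(X, Y) = omega(X, JY) has matrix G = Omega J;
# here it is the Euclidean metric, consistent with the fact that only
# the abelian case admits a positive-definite associated metric.
G = Omega @ J
assert np.allclose(G, G.T) and np.allclose(G, np.eye(6))
```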
An almost complex structure on a Lie algebra ${{\mathfrak g}}$ is an endomorphism $J:{{\mathfrak g}}\rightarrow {{\mathfrak g}}$ satisfying $J^2 = -I$, where $I$ is the identity map. The integrability condition of a left-invariant almost complex structure $J$ on $G$ is expressed in terms of the Nijenhuis tensor $N_J$ on ${{\mathfrak g}}$: $$\label{Nij1}
N_J(X,Y) = [JX, JY] - [X,Y] - J[JX,Y] - J[X, JY], \mbox{ for all } X, Y \in {{\mathfrak g}}.$$ An almost complex structure $J$ on ${{\mathfrak g}}$ is called *integrable* if $N_J \equiv 0$. In this case $J$ is called a *complex structure* on $G$.
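Given the structure constants of ${{\mathfrak g}}$ and the matrix of $J$, the tensor $N_J$ can be computed mechanically. The following sketch (one sign convention for the single bracket of ${{\mathfrak g}}_{25} = (0,0,0,0,0,12)$ is assumed) confirms the integrability of the simplest compatible candidate on ${{\mathfrak g}}_{25}$:

```python
import numpy as np

def nijenhuis(c, J):
    """N_J(e_i, e_j) for a left-invariant almost complex structure J
    (columns = images of basis vectors) on a Lie algebra with structure
    constants c[i][j][k], i.e. [e_i, e_j] = sum_k c[i][j][k] e_k."""
    n = len(J)
    br = lambda x, y: np.einsum('i,j,ijk->k', x, y, c)
    N = np.zeros((n, n, n))
    I = np.eye(n)
    for i in range(n):
        for j in range(n):
            x, y = I[i], I[j]
            N[i, j] = (br(J @ x, J @ y) - br(x, y)
                       - J @ br(J @ x, y) - J @ br(x, J @ y))
    return N

# g_25 = (0,0,0,0,0,12): the only bracket is [e1, e2] = e6 (up to sign).
c = np.zeros((6, 6, 6))
c[0, 1, 5], c[1, 0, 5] = 1.0, -1.0
# The simplest compatible candidate: J e1 = e2, J e3 = e4, J e5 = e6.
J = np.kron(np.eye(3), np.array([[0.0, -1.0], [1.0, 0.0]]))
assert np.allclose(nijenhuis(c, J), 0)   # J is integrable on g_25
```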
The descending central series $\{C^k({{\mathfrak g}})\}$ of $\mathfrak{g}$ is defined inductively by $$C^0({{\mathfrak g}})={{\mathfrak g}},\quad C^k({{\mathfrak g}})=[\,{{\mathfrak g}},C^{k-1}({{\mathfrak g}})],\ k>0.$$ The Lie algebra $\mathfrak{g}$ is said to be *nilpotent* if $C^k({{\mathfrak g}}) =0$ for some $k$. The Lie group is said to be nilpotent if its Lie algebra is nilpotent.
If ${{\mathfrak g}}$ is $s$-step nilpotent, then the ascending central series $\mathfrak{g}_0=\{0\}\subset \mathfrak{g}_1 \subset \mathfrak{g}_2\subset\dots \subset \mathfrak{g}_{s-1}\subset \mathfrak{g}_s =\mathfrak{g}$ of ${{\mathfrak g}}$ is defined inductively by $$\mathfrak{g}_k=\{X\in \mathfrak{g} |\ [X,\mathfrak{g}]\subseteq \mathfrak{g}_{k-1}\}, \ k\geq 1.$$ It is well-known that the sequence $\{{{\mathfrak g}}_k\}$ increases strictly until ${{\mathfrak g}}_s$ and, in particular, that the ideal ${{\mathfrak g}}_1$ is the center $\mathcal{Z}$ of Lie algebra ${{\mathfrak g}}$.
Since the spaces $\{{{\mathfrak g}}_k\}$ are not, in general, $J$-invariant, the sequence is not suitable for working with $J$. We introduce a new sequence $\{\mathfrak{a}_l(J)\}$ having the property of $J$-invariance [@CFGU01]. The ascending series $\{\mathfrak{a}_l(J)\}$ of the Lie algebra ${{\mathfrak g}}$, compatible with the left-invariant complex structure $J$ on $G$, is defined inductively as follows: $\frak{a}_0(J) = \{0\}$, $$\mathfrak{a}_l(J) =\{X\in \mathfrak{g}\ |\ [X,\mathfrak{g}]\subseteq \mathfrak{a}_{l-1}(J) \mbox { and } [JX,\mathfrak{g}]\subseteq \mathfrak{a}_{l-1}(J) \},\ l\ge 1.$$ It is clear that each $\mathfrak{a}_l(J)$ is a $J$-invariant ideal in ${{\mathfrak g}}$ and $\mathfrak{a}_k(J) \subseteq \mathfrak{g}_{k}$ for $k\ge 1$. It must be noted that the terms $\mathfrak{a}_l(J)$ depend on the complex structure $J$ considered on $G$. Moreover, this ascending series, in spite of ${{\mathfrak g}}$ being nilpotent, can stop without reaching the Lie algebra ${{\mathfrak g}}$, that is, it may happen that $\mathfrak{a}_l(J) \neq {{\mathfrak g}}$ for all $l$. The following definition is motivated by this fact.
The left-invariant complex structure $J$ on $G$ is called *nilpotent* if there is a number $p$ such that $\mathfrak{a}_p(J)=\mathfrak{g}$.
It is obvious that the ideal $\mathfrak{a}_1(J)$ lies in the center $\mathcal{Z}$ of the Lie algebra ${{\mathfrak g}}$. If a nilpotent Lie algebra has a two-dimensional center $\mathcal{Z}$, then for any left-invariant nilpotent complex structure $J$ the ideal $\mathcal{Z}$ is $J$-invariant. If a nilpotent Lie algebra has an ascending central series of ideals $\mathfrak{g}_k$, $k=0,1,\dots, s$, whose dimension increases by two at each step, then for any left-invariant nilpotent complex structure $J$ the equalities $\mathfrak{g}_k =\mathfrak{a}_k(J)$, $k=0,1,\dots, s$, hold. If the basis of ${{\mathfrak g}}$ is chosen so that $\mathfrak{g}_1 =\{e_{2n-1},\, e_{2n}\}$, $\mathfrak{g}_2 =\{e_{2n-3},\, e_{2n-2},\, e_{2n-1},\, e_{2n}\}$, …, then the complex structure $J$ has the following block form (for example, in the six-dimensional case): $$\label{J6Bl}
J_0= \left( \begin {array}{cccccc} \psi_{11}&\psi_{12}&0&0&0&0 \\
\psi_{21}&\psi_{22}&0&0&0&0\\
\psi_{31}&\psi_{32}&\psi_{33}&\psi_{34} &0&0\\
\psi_{41}&\psi_{42}&\psi_{43}&\psi_{44} &0&0\\
\psi_{51}&\psi_{52}&\psi_{53}&\psi_{54} &\psi_{55}&\psi_{56}\\
\psi_{61}&\psi_{62}&\psi_{63}&\psi_{64} &\psi_{65}&\psi_{66}
\end {array} \right).$$
The remaining parameters in (\[J6Bl\]) are not all free, because they are restricted by the conditions $N_J=0$ and $J^2=-Id$.
Let us establish some properties of a symplectic nilpotent Lie algebra $({{\mathfrak g}},\omega)$ with a compatible almost complex structure $J$.
\[C1ortZ\] Let $\mathcal{Z}$ be the center of the Lie algebra $\mathfrak{g}$. Then for any symplectic form $\omega$ on $\mathfrak{g}$ the equality $\omega (C^1\mathfrak {g}, \mathcal {Z}) =0$ holds.
This follows at once from the formula $d\omega(X,Y,Z)=\omega([X,Y],Z) -\omega([X,Z],Y) +\omega([Y,Z],X)=0$: for $Z\in\mathcal{Z}$ the brackets $[X,Z]$ and $[Y,Z]$ vanish, so $\omega([X,Y],Z)=0$ for all $X,Y\in\mathfrak{g}$.
If the ideal $C^k{{\mathfrak g}}$, $k\ge 1$, is $J$-invariant, then any vector $X\in\mathcal {Z} \cap C^k{{\mathfrak g}}$ is isotropic for the associated metric $g(X, Y) = \omega(X,JY)$. In particular, the associated metric is pseudo-Riemannian.
$\omega (C^1\mathfrak{g} \oplus J (C^1\mathfrak {g}), \mathfrak{a}_1(J)) =0$.
\[1.6\] For any (pseudo) Kähler structure $(\mathfrak{g}, \omega, g, J) $, the ideal $\mathfrak{a}_1(J) \subset \mathcal{Z} $ is orthogonal to a subspace $C^1\mathfrak{g}\oplus J(C^1\mathfrak {g})$: $$g(C^1\mathfrak{g}\oplus J(C^1\mathfrak{g}),\mathfrak{a}_1(J)) =0.$$
From the formula $$2g(\nabla_X Y, Z) = g([X, Y], Z) + g([Z, X], Y) + g(X, [Z, Y])$$ for the covariant derivative $\nabla$ of left-invariant vector fields, we immediately obtain the following observations:
- if the vectors $X $ and $Y $ lie in the center $\mathcal{Z}$ of the Lie algebra ${{\mathfrak g}}$, then $ \nabla_X Y=0$ for any left-invariant (pseudo) Riemannian metric $g$ on the Lie algebra;
- if the vector $X$ lies in the center $\mathcal{Z}$ of the Lie algebra ${{\mathfrak g}}$, then $\nabla_X Y = \nabla_Y X$.
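Both observations follow directly from the formula: if $X,Y\in\mathcal{Z}$, every bracket on the right-hand side vanishes,

```latex
2g(\nabla_X Y, Z) \;=\; g(\underbrace{[X,Y]}_{=0},Z) \;+\; g(\underbrace{[Z,X]}_{=0},Y) \;+\; g(X,\underbrace{[Z,Y]}_{=0}) \;=\; 0 ,
```

while for $X\in\mathcal{Z}$ alone the torsion-free identity $\nabla_X Y-\nabla_Y X=[X,Y]=0$ gives the second point.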
\[1.7\] If the vector $X$ lies in an ideal $\mathfrak{a}_1(J)\subset \mathcal{Z}$, then $\nabla_X Y=\nabla_Y X=0$, $\forall Y\in \mathfrak{g}$.
Let $X\in \mathfrak{a}_1(J)\subset \mathcal{Z}$ and $Z,Y\in \mathfrak{g}$. Then corollary \[1.6\] implies that $2g(\nabla_X Y,Z) = g(X,[Z,Y])=0$.
Let $R(X,Y)Z = \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z -\nabla_{[X,Y]} Z$ be the curvature tensor.
\[1.8\] If $X\in \mathfrak{a}_1(J) \subset \mathcal{Z}$, then $R(X,Y)Z = R(Z,Y) X=0$, for all $Y, Z\in \mathfrak {g} $.
\[ParRiem\] Let $J$ be a compatible almost complex structure on the symplectic nilpotent Lie algebra $({{\mathfrak g}}, \omega)$, and let $g=\omega \cdot J$ be the associated (pseudo) Riemannian metric. Let us choose a basis $\{e_1, e_2,\dots, e_{2p}, e_{2p+1},\dots, e_{2m}\}$ of the Lie algebra ${{\mathfrak g}}$ such that $\{e_{2p+1},\dots, e_{2m}\}$ is a basis of the ideal $\mathfrak{a}_1(J) \subset \mathcal{Z}$. Let $J = (\psi_{ij})$ be the matrix of $J$ in this basis. Then for all $X,Y\in {{\mathfrak g}}$, the covariant derivative $\nabla_X Y$ does not depend on the free parameters $\psi_{ij}$ with $i=2p+1,\dots, 2m$ and arbitrary $j$. In particular, the curvature tensor does not depend on these parameters $\{\psi_{ij}\}$, $i=2p+1,\dots, 2m$, $j=1,\dots, 2m$.
For all $X\in {{\mathfrak g}}$, let $JX = J_1X +J_aX$, where $J_1X \in \mathbb{R}\{e_1, e_2,\dots, e_{2p}\}$ and $J_aX\in \mathbb{R}\{e_{2p+1},\dots, e_{2m}\} = \mathfrak{a}_1(J)\subset \mathcal{Z}$. From the covariant derivative formula it follows that for all $Z\in {{\mathfrak g}}$: $$2g(\nabla_X Y, Z)=2\omega(\nabla_X Y, JZ)= \omega([X,Y],JZ) + \omega([Z,X],JY) +\omega([Z,Y],JX),$$ $$2\omega(\nabla_X Y, Z)= \omega([X,Y],Z) - \omega([JZ,X],JY) -\omega([JZ,Y],JX)=$$ $$= \omega([X,Y],Z) - \omega([J_1Z+J_aZ,X],J_1Y+J_aY) -\omega([J_1Z+J_aZ,Y],J_1X+J_aX)=$$ $$=\omega([X,Y],Z) - \omega([J_1Z,X],J_1Y) -\omega([J_1Z,Y],J_1X);$$ here $[J_aZ,\,\cdot\,]=0$ because $J_aZ$ is central, and the cross terms $\omega([J_1Z,X],J_aY)$, $\omega([J_1Z,Y],J_aX)$ vanish by lemma \[C1ortZ\]. As the component $J_1$ does not depend on the parameters $\{\psi_{ij}\}$, $i=2p+1,\dots, 2m$, $j=1,\dots, 2m$, it follows that the covariant derivative $\nabla_X Y$ does not depend on them either.
\[ParRiem6\] For the six-dimensional case, if the nilpotent complex structure $J$ compatible with $\omega$ is of the form in (\[J6Bl\]), then the curvature tensor $R(X,Y)Z=\nabla_X \nabla_Y Z -\nabla_Y \nabla_X Z -\nabla_{[X,Y]}Z$ of the associated metric does not depend on the free parameters $\psi_{5j}$, $\psi_{6j}$, $j=1,\dots, 6$.
**Remark 1.** The parameters $\psi_{ij}$ of the complex structure $J$ are restricted by three conditions: the compatibility condition, the integrability condition and the fact that $J^2 =-1$. Therefore some of the parameters can be expressed through others. Corollaries \[ParRiem\] and \[ParRiem6\] concern the free parameters, i.e., the parameters which remain independent; the curvature does not depend on them.
For a six-dimensional nilpotent Lie algebra ${{\mathfrak g}}$ which possesses a pseudo-Kähler structure, the dimensions of its increasing central sequence $\mathfrak{g}_k$ can be: (2,4,6), (2,6), (3,6), (4,6) and 6. We will call the sequence of these dimensions the *Lie algebra type*. In the list of Lie algebras in theorem \[PsKahl\_6\], the Lie algebras with type (2,4,6) are at the beginning, and are the first seven Lie algebras.
Let us consider the nilpotent Lie algebras for which the sequence of ideals $\mathfrak{g}_1\subset \mathfrak{g}_2\subset\mathfrak{g}_3 =\mathfrak{g}$ has the dimensions (2,4,6). It is easy to see that such a Lie algebra of type (2,4,6) is decomposed into the direct sum of two-dimensional subspaces: $$\mathfrak {g} = A \oplus B \oplus \mathcal {Z},$$ with properties:
- $\mathcal{Z} = \mathfrak{g}_1$, the Lie algebra center,
- $B \oplus \mathcal{Z} = \mathfrak{g}_2$,
- $[A, A] \subset B \oplus \mathcal{Z}$, $ [A, B] \subset \mathcal{Z}$.
Let us consider further that in $\mathfrak{g}$ the basis $e_1,\dots, e_6$ is chosen so that $\{e_1,e_2\}$, $\{e_3,e_4\}$ and $\{e_5,e_6\}$ are the bases of the subspaces $A$, $B$ and $\mathcal{Z}$ respectively.
For any nilpotent complex structure $J$ on a Lie algebra of type $(2,4,6)$, the sequence of ideals $\mathfrak{a}_k(J)$ coincides with $\mathfrak{g}_k$, $k=1,2,3$, and the matrix of $J$ has the block shape (\[J6Bl\]). For this complex structure $J$ we also have $C^1\mathfrak{g}\oplus J(C^1\mathfrak{g})=B\oplus \mathcal{Z}=\mathfrak{g}_2$.
The subspace $W\subset \mathfrak {g}$ is called $\omega$-*isotropic* if and only if $\omega(W, W) =0$. We will call subspaces $U, V \subset \mathfrak{g}$ $\omega$-*dual* if, for any vector $X\in U$ there is a vector $Y\in V$ such that $\omega (X, Y) \ne 0$ and, on the contrary, $\forall Y\in V$, $\exists X\in U$, such that $\omega (X, Y) \ne 0$.
\[246\] Let the six-dimensional symplectic Lie algebra $(\mathfrak{g}, \omega)$ have type (2,4,6) and $$\label{abz}
\mathfrak {g} = A \oplus B \oplus \mathcal {Z},$$ where $B \oplus \mathcal{Z} = \mathfrak{g}_2$ is an abelian subalgebra. We will assume that the subspaces $A$ and $\mathcal{Z}$ are $\omega$-isotropic and $\omega$-dual, and that on the subspace $B$ the form $\omega$ is nondegenerate. Then for any nilpotent complex structure $J$ compatible with $\omega$, and the Levi-Civita connection $\nabla$ of the associated pseudo-Riemannian metric $g=\omega\cdot J$, the following properties are fulfilled:
- $\nabla_X Y\in B \oplus \mathcal{Z},\quad \forall\, X,Y\in A$,
- $\nabla_X Y, \nabla_Y X\in \mathcal{Z}$, for all $X\in A,\, Y\in B$,
- $\nabla_X Y = \nabla_Y X =0$, for all $X\in A,\, Y\in \mathcal{Z}$,
- $\nabla_X Y=0,\quad \forall \, X,Y\in B\oplus \mathcal{Z}$.
Let $X, Y\in A$. If $\nabla_X Y$ has a nonzero component from $A$ then there is a vector $JZ\in \mathcal {Z} $, such that $ \omega (\nabla_X Y, JZ) \ne 0$. On the other hand, $2\omega (\nabla_X Y, JZ) = 2g(\nabla_X Y, Z) = g([X, Y], Z) + g([Z,X], Y) + g([Z, Y], X) = \omega ([X, Y], JZ) =0$, as, from lemma \[C1ortZ\], $ \omega (C^1\mathfrak {g}, \mathfrak {a} _1 (J)) =0$.
Let now $X\in A$ and $Y\in B$. In exactly the same way one shows that the component of $\nabla_X Y$ in $A$ is zero. Suppose that $\nabla_X Y$ has a non-zero component from $B$. Then there is a vector $Z\in B\oplus \mathcal{Z}$ such that $JZ\in B$ and $\omega (\nabla_X Y, JZ) \ne 0$. At the same time, $2\omega(\nabla_X Y,JZ) = 2g(\nabla_X Y,Z) = g([X,Y],Z) +g([Z,X],Y) = \omega([X,Y],JZ)+ \omega([Z,X],JY) =0$. The last equality follows from the commutativity of $B\oplus \mathcal{Z}$ and from the fact that $Y,JY,Z,JZ\in B \subset C^1\mathfrak{g} \oplus J(C^1\mathfrak{g})$, so that $[X,Y],[Z,X] \in \mathcal{Z}$ and $\omega(C^1\mathfrak{g} \oplus J(C^1\mathfrak{g}),\mathcal{Z})=0$. Finally, $\nabla_Y X =\nabla_X Y - [X, Y]\in \mathcal{Z}$.
Let us consider the third statement. Let $X\in A$ and $Y\in \mathcal{Z}$. Then for any $Z\in \mathfrak{g}$, $2g(\nabla_X Y,Z) = g([X,Y],Z) +g([Z,X],Y) +g([Z,Y],X) = \omega([Z,X],JY) =0$ by the same arguments as for the previous point.
Let us consider the last statement. Let $X,Y\in B \oplus \mathcal{Z}$. Then for any $Z\in \mathfrak{g}$, $2g(\nabla_X Y,Z) = g([X,Y],Z) +g([Z,X],Y) +g([Z,Y],X) = \omega([Z,X],JY)+ \omega([Z,Y],JX) =0$ by the same arguments as for the previous point.
\[3.3\] Under the suppositions of theorem \[246\], if the vector $X$ lies in an ideal $\mathfrak{a}_2(J)=B\oplus\mathcal{Z}$, then $R(X, Y)Z = R(Z,Y)X =0$, for all $Y, Z\in \mathfrak{g}$.
\[Ricci-zero\] Under the suppositions of theorem \[246\], for any $X, Y, Z\in \mathfrak{g}$ the condition $R(X, Y)Z\in \mathcal{Z}$ is fulfilled; therefore the pseudo-Riemannian norm of the curvature tensor is equal to zero. According to the expansion $\mathfrak{g}=A\oplus B\oplus \mathcal{Z}$ we choose the bases $\{e_1, e_2\}$, $\{e_3, e_4 \}$ and $\{e_5, e_6 \}$. Then, up to its symmetries, the curvature tensor can have only four non-zero components, $R_{1,2,1}^5$, $R_{1,2,1}^6$, $R_{1,2,2}^5$, $R_{1,2,2}^6$. In particular, the Ricci tensor is equal to zero.
\[Par3.3\] Under the suppositions of theorem \[246\], if the compatible almost complex structure $J$ has the block type (\[J6Bl\]), then the curvature tensor of the associated metric $g=\omega \cdot J$ does not depend on the free parameters $\psi_{i1}$ and $\psi_{i2}$, for $i=3,4,5,6$.
Let us notice that parameters $\psi_{ij}$ of the complex structure $J$ are connected by three conditions: a compatibility condition, an integrability condition and the fact that $J^2=-1$. Therefore from what has been specified above some of the parameters can be expressed through others. If, as a result, among $\psi_{i1}$ and $\psi_{i2}$, $i=3,4,5,6$, there were independent parameters, it would be possible to set them at zero as the curvature does not depend on them. We remember that, according to corollary \[ParRiem6\], the curvature $R(X,Y)$ of the associated metric also does not depend on the free parameters $\psi_{51}$, $\psi_{52}$, $\psi_{53}$, $\psi_{54}$, $\psi_{61}$, $\psi_{62}$, $\psi_{63}$, $\psi_{64}$.
Similar statements are true for type (2,6) Lie algebras. A type (4,6) Lie algebra is the direct product of a four-dimensional Lie algebra and $\mathbb{R}^2$. The case (3,6) is the most complicated.
**Remark 2.** There is an obvious generalization of theorem \[246\] to the case $\dim {{\mathfrak g}}> 6$. It is necessary to assume that there exists a sequence of ideals ${{\mathfrak g}}_1\subset {{\mathfrak g}}_2\subset \dots \subset {{\mathfrak g}}_n = {{\mathfrak g}}$, invariant under $J$, whose dimensions increase by two units at each step. Choosing complementary two-dimensional subspaces $A_i$ to these ideals, we obtain the expansion ${{\mathfrak g}}=A_1\oplus A_2\oplus \dots \oplus A_n$, with $A_1 ={{\mathfrak g}}_1 =\mathcal{Z}$. The form $\omega$ should be such that it is degenerate on each subspace $A_i$ (except, maybe, on $A_{n/2}$), and the subspaces $A_i$ and $A_{n-i}$ should be $\omega$-dual.
**Remark 3.** All calculations are made in the Maple system using the formulas specified at the end of the paper.
Lie algebras of type $(2,4,6)$
==============================
Let us consider all six-dimensional nilpotent Lie algebras of type (2,4,6). According to theorem \[PsKahl\_6\], there are seven such Lie algebras which possess a pseudo-Kähler structure. We remember that the number of each algebra corresponds to its number in the classification list in [@Goze-Khakim-Med].
The Lie group $G_{14}$
----------------------
Let us consider a six-dimensional Lie group $G_{14}$ which has a Lie algebra ${{\mathfrak g}}_{14}$ with the non-trivial Lie brackets (see [@Goze-Khakim-Med]): $[X_{1},X_{2}]=X_{4}$, $[X_{1},X_{4}]=X_{6}$, $[X_{1},X_{3}]=X_{5}$. The algebra ${{\mathfrak g}}_{14}$ has three symplectic structures (see [@Goze-Khakim-Med]), written in the dual basis $\{\alpha^{i}\}$ as follows:
$\omega _{1}=\alpha^{1}\wedge \alpha^{6}+\alpha^{2}\wedge \alpha^{4}+\alpha^{3}\wedge \alpha^{5}$,
$\omega _{2}=\alpha^{1}\wedge \alpha^{6}-\alpha^{2}\wedge \alpha^{4} +\alpha^{3}\wedge \alpha^{5}$,
$\omega _{3}=\alpha^{1}\wedge \alpha^{6}+\alpha^{2}\wedge \alpha^{5}+\alpha^{3}\wedge \alpha^{4}$.\
Left-invariant complex structures on this group were found in explicit form in the work of Magnin [@Mag-3] (the algebra $M1$). We will use Magnin’s results, so we make the replacement: $X_2=-e_1$, $X_1=e_2$, $X_3=e_3$, $X_4=e_4$, $X_6=e_5$, $X_5=e_6$. Then the non-trivial brackets are given in [@Mag-3]:
$[e_1,e_2]=e_4$, $[e_2,e_3] = e_6$, $[e_2,e_4] = e_5$,\
and the symplectic forms become:
$\omega _{1}=-e^{1}\wedge e^{4}+e^{2}\wedge e^{5} +e^{3}\wedge e^{6}$,
$\omega _{2}=e^{1}\wedge e^{4} +e^{2}\wedge e^{5}+e^{3}\wedge e^{6}$,
$\omega _{3}=-e^{1}\wedge e^{6}+e^{2}\wedge e^{5}+e^{3}\wedge e^{4}$.
In [@Mag-3] it is shown that the Lie group $G_{14}$ has a 10-parametrical set of left-invariant complex structures. A direct check of the compatibility condition $\omega(JX,Y)+\omega(X,JY)=0$, $\forall \, X,Y\in {{\mathfrak g}}$, shows that for the first two symplectic forms there are no compatible complex structures on the group $G_{14}$. For the form $\omega_3$ the compatible complex structure depends on 6 parameters and is of the form: $$J = \left(\begin{array}{cccccc}
\psi_{11} & \psi_{12} & 0 & 0 & 0 & 0 \\
-\frac{\psi_{11}^2+1}{\psi_{12}} & -\psi_{11} & 0 & 0 & 0 & 0 \\
\frac{\psi_{42}(\psi_{11}^2+1) -2\psi_{41}\psi_{12}\psi_{11}}{\psi_{12}^2} & -\psi_{41} & -\psi_{11} & -\frac{\psi_{11}^2+1}{\psi_{12}} & 0 & 0 \\
\psi_{41} & \psi_{42} & \psi_{12} & \psi_{11} & 0 & 0 \\
\psi_{51} & J_{52} & \psi_{42} & \psi_{41} & \psi_{11} & \psi_{12}\\
\psi_{61} & -\psi_{51} & -\psi_{41} & \frac{\psi_{42}(\psi_{11}^2+1)-
2\psi_{41}\psi_{12}\psi_{11}}{\psi_{12}^2} & -\frac{\psi_{11}^2+1}{\psi_{12}}& -\psi_{11} \\
\end{array}\right),$$ where $J_{52}= \frac{-2\psi_{11}\psi_{12}(\psi_{42}\psi_{41} -\psi_{12}\psi_{51}) +\psi_{42}^2(\psi_{11}^2 +1) +\psi_{12}^2(\psi_{41}^2 +\psi_{12}\psi_{61})}{(\psi_{11}^2+1)\psi_{12}}$.
The curvature tensor of the metric $g(X,Y)=\omega_3(X,JY)$ is equal to zero for all values of the parameters. Therefore we choose the simplest pseudo-Kähler structure, with zero values of the free parameters and $\psi_{12}=-1$. Then the canonical pseudo-Kähler structure is given as follows:
$J(e_1) = e_2,\quad J(e_4) = e_3,\quad
J(e_5) = e_6$,
$g=-e^1\,e^5 -e^2\,e^6 -(e^3)^2 -(e^4)^2.$
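The flatness claim can be checked numerically. The following NumPy sketch (an independent check of the Maple computations mentioned in Remark 3, not the paper's own code) assembles the Levi-Civita connection of a left-invariant metric from the Koszul formula of the previous section and evaluates every component of the curvature tensor for the canonical metric displayed above:

```python
import numpy as np
from itertools import product

n = 6
# structure constants in Magnin's basis: [e1,e2]=e4, [e2,e3]=e6, [e2,e4]=e5
C = np.zeros((n, n, n))
for i, j, k in [(0, 1, 3), (1, 2, 5), (1, 3, 4)]:
    C[i, j, k], C[j, i, k] = 1.0, -1.0

def br(x, y):                     # Lie bracket [x, y] in coordinates
    return np.einsum('i,j,ijk->k', x, y, C)

# canonical metric g = -e^1 e^5 - e^2 e^6 - (e^3)^2 - (e^4)^2,
# read as g_{15} = g_{51} = g_{26} = g_{62} = g_{33} = g_{44} = -1
g = np.zeros((n, n))
g[0, 4] = g[4, 0] = g[1, 5] = g[5, 1] = -1.0
g[2, 2] = g[3, 3] = -1.0

E = np.eye(n)
# Koszul formula for left-invariant fields:
# 2 g(nabla_i e_j, e_k) = g([e_i,e_j],e_k) + g([e_k,e_i],e_j) + g(e_i,[e_k,e_j])
Gamma = np.zeros((n, n, n))       # Gamma[i, j] = coordinates of nabla_{e_i} e_j
for i, j in product(range(n), repeat=2):
    rhs = 0.5 * np.array([br(E[i], E[j]) @ g @ E[k]
                          + br(E[k], E[i]) @ g @ E[j]
                          + E[i] @ g @ br(E[k], E[j]) for k in range(n)])
    Gamma[i, j] = np.linalg.solve(g, rhs)

def nab(x, v):                    # nabla_x v for constant-coefficient fields
    return np.einsum('i,j,ijk->k', x, v, Gamma)

def R(x, y, z):                   # R(x,y)z = nab_x nab_y z - nab_y nab_x z - nab_{[x,y]} z
    return nab(x, nab(y, z)) - nab(y, nab(x, z)) - nab(br(x, y), z)

flat = max(np.abs(R(E[i], E[j], E[k])).max()
           for i, j, k in product(range(n), repeat=3))
print(flat)                       # → 0.0
```

All $6^3$ vectors $R(e_i,e_j)e_k$ vanish, so this metric is flat, as stated.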
The Lie group $G_{21}$
----------------------
The Lie algebra ${{\mathfrak g}}_{21}$ is defined by: $[e_1,e_2] = e_4$, $[e_1,e_4] = e_6$, $[e_2,e_3] = e_6$. This Lie algebra has two symplectic structures [@Goze-Khakim-Med]:
$\omega_1 = e^1\wedge e^6 +e^2\wedge e^4 -e^3\wedge e^4 -e^3\wedge e^5,$
$\omega_2 = e^1\wedge e^6 +e^2\wedge e^5 -e^3\wedge e^4.$
A direct check shows that for the first structure $\omega_1 = e^1\wedge e^6 +e^2\wedge e^4 -e^3\wedge e^4 -e^3\wedge e^5$ there are no compatible complex structures. We will consider the second symplectic structure $\omega_2$. There is a multiparametrical set of compatible complex structures. Taking into account the results of theorem \[246\] and corollaries \[1.8\], \[ParRiem6\] and \[Ricci-zero\], we find by direct evaluation that the curvature tensor of the associated metric $g(X,Y)=\omega(X,JY)$ depends on two parameters $\psi_{11}$ and $\psi_{12}\ne 0$ and has the following non-zero components: $R_{1, 2, 1}^6 = 1 + \psi_{11}^2$, $R_{1, 2, 2}^6 = \psi_{12}\, \psi_{11}$, $R_{1, 2, 1}^5 = \psi_{12}\,\psi_{11}$, $R_{1, 2, 2}^5 = \psi_{12}^2$. Therefore the semicanonical complex structure is given as follows:
$J(e_1) = \psi_{11}\, e_1 -\frac{\psi_{11}^2+1}{\psi_{12}} \, e_2,\qquad
J(e_2) = \psi_{12}\, e_1 -\psi_{11}\, e_2$,

$J(e_3) = \psi_{11}\, e_3 +\frac{\psi_{11}^2+1}{\psi_{12}}\, e_4,\qquad
J(e_4) = -\psi_{12}\, e_3 -\psi_{11}\, e_4$,

$J(e_5) = \psi_{11}\, e_5 +\frac{\psi_{11}^2+1}{\psi_{12}}\, e_6, \qquad
J(e_6) = -\psi_{12}\, e_5 -\psi_{11}\, e_6$.\
The corresponding associated metric is: $$g {=} \left[\!\!\begin{array}{cccccc}
0 & 0 & 0 & 0 & \frac{\psi_{11}^2+1}{\psi_{12}} & -\psi_{11} \\
0 & 0 & 0 & 0 & \psi_{11} & -\psi_{12} \\
0 & 0 & -\frac{\psi_{11}^2+1}{\psi_{12}} & \psi_{11} & 0 & 0 \\
0 & 0 & \psi_{11} & -\psi_{12} & 0 & 0 \\
\frac{\psi_{11}^2+1}{\psi_{12}} & \psi_{11} & 0 & 0 & 0 & 0 \\
-\psi_{11} & -\psi_{12} & 0 & 0 & 0 & 0 \\
\end{array}\!\!\right].$$ On omitting the index of the curvature tensor, it turns out that there is only one (within symmetries) non-zero component of the tensor of curvature $R_{1, 2, 1, 2} =-\psi_{12}$. Then, supposing $\psi_{12}=-a\ne 0$ and $\psi_{11}=0$, we find the following canonical complex structure and pseudo-Kähler metric with curvature $R_{1, 2, 1, 2}=a$ on a Lie algebra $\mathfrak{h}_{21}$:
$$J(e_2) = -a\, e_1,\ J(e_4) = a\, e_3,\ J(e_6) = a\, e_5,$$ $$g= -\frac 2a\, e^1\cdot e^5 +2a\, e^2\cdot e^6 +\frac 1a\, (e^3)^2+ a\, (e^4)^2.$$
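The value of $R_{1,2,1,2}$ can be confirmed with the same kind of numerical check (again an independent sketch, not the paper's Maple code); the metric entries below are those of the associated metric matrix of the semicanonical structure above, evaluated at $\psi_{11}=0$ and $\psi_{12}=-a$:

```python
import numpy as np

n = 6
# structure constants of g_21: [e1,e2]=e4, [e1,e4]=e6, [e2,e3]=e6
C = np.zeros((n, n, n))
for i, j, k in [(0, 1, 3), (0, 3, 5), (1, 2, 5)]:
    C[i, j, k], C[j, i, k] = 1.0, -1.0

def br(x, y):                     # Lie bracket [x, y] in coordinates
    return np.einsum('i,j,ijk->k', x, y, C)

E = np.eye(n)

def R1212(a):
    # associated metric with psi_11 = 0, psi_12 = -a:
    # g_15 = -1/a, g_26 = a, g_33 = 1/a, g_44 = a
    g = np.zeros((n, n))
    g[0, 4] = g[4, 0] = -1.0 / a
    g[1, 5] = g[5, 1] = a
    g[2, 2], g[3, 3] = 1.0 / a, a
    # Koszul: 2 g(nabla_i e_j, e_k) = g([e_i,e_j],e_k) + g([e_k,e_i],e_j) + g(e_i,[e_k,e_j])
    Gamma = np.zeros((n, n, n))
    for i in range(n):
        for j in range(n):
            rhs = 0.5 * np.array([br(E[i], E[j]) @ g @ E[k]
                                  + br(E[k], E[i]) @ g @ E[j]
                                  + E[i] @ g @ br(E[k], E[j]) for k in range(n)])
            Gamma[i, j] = np.linalg.solve(g, rhs)
    nab = lambda x, v: np.einsum('i,j,ijk->k', x, v, Gamma)
    # R(e1,e2)e1 = nab_1 nab_2 e1 - nab_2 nab_1 e1 - nab_{[e1,e2]} e1
    r = nab(E[0], nab(E[1], E[0])) - nab(E[1], nab(E[0], E[0])) - nab(br(E[0], E[1]), E[0])
    return r @ g @ E[1]           # lowered component R_{1,2,1,2}

print(R1212(1.0), R1212(2.0))     # → 1.0 2.0, i.e. R_{1,2,1,2} = a
```

For both test values the lowered component equals $a$, in agreement with $R_{1,2,1,2}=-\psi_{12}=a$.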
The Lie group $G_{13}$
----------------------
The Lie algebra $\mathfrak{g}_{13}$ is defined by: $[e_1,e_2] =e_4$, $[e_1,e_3]=e_5$, $[e_1,e_4]=e_6$, $[e_2,e_3]=e_6$. This Lie algebra has three symplectic structures [@Goze-Khakim-Med]. Left-invariant complex structures on this group were found in explicit form in the work of Magnin [@Mag-3] (algebra $M6$). In order to use the results of Magnin’s work, we rename the basis vectors $e_3:=-e_3$, $e_5:=-e_5$, obtaining the Lie brackets of Magnin’s paper [@Mag-3]:
$[e_1,e_2] =e_4$, $[e_1,e_3]=e_5$, $[e_1,e_4]=e_6$, $[e_2,e_3]=-e_6$.
The symplectic structures are:
$\omega_1 = e^1\wedge e^6 -\lambda e^2\wedge e^5 - (\lambda-1)e^3\wedge e^4$,
$\omega_2 =e^1\wedge e^6 +\lambda e^2\wedge e^4 -e^2\wedge e^5 +e^3\wedge e^5$,
$\omega_3 = e^1\wedge e^6 +e^2\wedge e^4 -\frac 12 e^2\wedge e^5 +\frac 12 e^3\wedge e^4$.
**First case.** We will consider the form $\omega_1= e^1\wedge e^6 -\lambda e^2\wedge e^5 -(\lambda-1)e^3\wedge e^4$. There is a multiparametrical set of compatible complex structures. Taking into account the results of theorem \[246\] and corollaries \[ParRiem6\] and \[Ricci-zero\], we find by direct evaluation that the curvature tensor of the associated metric $g_1(X,Y)=\omega_1(X,J_1Y)$ depends on two parameters $\psi_{11}$ and $\psi_{12}\ne0$ and has the following non-zero components: $R_{1, 2, 2}^6 =\frac {(3\lambda -1) \psi_{12} \psi_{11}}{\lambda -1}$, $R_{1, 2, 2}^5 =-\frac{(1 + \lambda)(3\lambda -1)\psi_{12}^2}{\lambda (\lambda - 1)}$, $R_{1, 2, 1}^6 =\frac{(3 \lambda -1)(1 + \psi_{11}^2)}{\lambda^2 -1}$, $R_{1, 2, 1}^5 =-\frac{(3 \lambda - 1) \psi_{12} \psi_{11}}{\lambda (\lambda - 1)}$. After lowering the index, one non-zero component remains: $R_{1, 2, 1, 2} = -\frac{(3 \lambda -1) \psi_{12}}{\lambda -1}$. Therefore the semicanonical complex structure is given as follows:
$J_1(e_1) = \psi_{11}\,e_1 -\frac{1+{\psi_{11}}^2}{(1+\lambda)\psi_{12}}\, e_2,\quad
J_1(e_2) = (1+\lambda)\psi_{12}\, e_1 -\psi_{11}\, e_2$, $J_1(e_3) = \psi_{11}\, e_3 -\frac{1+{\psi_{11}}^2}{\psi_{12}}\, e_4$, $J_1(e_4) = \psi_{12}\, e_3 -\psi_{11}\, e_4$, $J_1(e_5) = \psi_{11}\, e_5 -\frac{\lambda(1+{\psi_{11}}^2)}{(1+\lambda)\psi_{12}}\, e_6,\quad
J_1(e_6) = \frac{(1+\lambda)\psi_{12}}{\lambda}\, e_5 -\psi_{11}\, e_6$.
The corresponding pseudo-Kähler metric comes from the formula $g_1=\omega_1\circ J_1$. Suppose that $\psi_{12}=a\ne 0$ and $\psi_{11}=0$. We find the following canonical complex structure and the pseudo-Kähler metric of curvature $R_{1, 2, 1, 2}=-\frac{(3 \lambda -1) a}{\lambda -1}$ on a Lie algebra $\mathfrak{h}_{13}$:
$$J_1(e_2) = (1+\lambda)a\, e_1,\quad J_1(e_4) = a\, e_3,\quad
J_1(e_6) = \frac{(1+\lambda)a}{\lambda}\, e_5,$$ $$g_1= \left[ \begin {array}{cccccc} 0&0&0&0&-{\frac {\lambda}{ \left( 1+
\lambda \right) a}}&0\\
\noalign{\medskip}0&0&0&0&0&- \left( 1+\lambda \right) a\\ \noalign{\medskip}0&0&{\frac {\lambda-1}{a}}&0&0&0\\
\noalign{\medskip}0&0&0& \left(\lambda-1 \right) a&0&0\\ \noalign{\medskip}-{\frac {\lambda}{(1+\lambda )a}}&0&0&0&0&0\\ \noalign{\medskip}0&- \left( 1+\lambda \right)a&0&0&0&0
\end {array} \right].$$
**Second case.** For the symplectic form $\omega_2 =e^1\wedge e^6 +\lambda e^2\wedge e^4 -e^2\wedge e^5 +e^3\wedge e^5$ there are no compatible complex structures.
**Third case.** The symplectic structure is: $\omega_3 = e^1\wedge e^6 +e^2\wedge e^4 -\frac 12 e^2\wedge e^5 +\frac 12 e^3\wedge e^4$. For any compatible complex structure and its associated metric, the curvature tensor depends on two parameters $\psi_{11}$ and $\psi_{12}\ne 0$: $R_{1, 2, 2}^5 =\frac {4 \psi_{12}^2}{3}$, $R_{1, 2, 1}^5 =\frac{4 \psi_{12} \psi_{11}}{3}$, $R_{1, 2, 2}^6 =-\frac{2 \psi_{12} \psi_{11}}{3}$, $R_{1, 2, 1}^6 =-\frac{2(1+ \psi_{11}^2)}{3}$.
Therefore the semicanonical pseudo-Kähler structure is as follows: $$J_3 = \left[ \begin {array}{cccccc} \psi_{11}&\psi_{12}&0&0&0&0 \\
\noalign{\medskip}-{\frac {{\psi_{11}}^{2}+1}{\psi_{12}}}&-
\psi_{11}&0&0&0&0\\
\noalign{\medskip}0&0&\psi_{11}&2\,\psi_{12}/3&0&0\\
\noalign{\medskip}0&0&-3\,{\frac {{\psi_{11}}^{2}+1}{2\,
\psi_{12}}}&-\psi_{11}&0&0\\ \noalign{\medskip}0&0&-3\,{\frac {{\psi_{11}}^{2}+1}{
\psi_{12}}}&-4\,\psi_{11}&\psi_{11}&2\,\psi_{12}\\
\noalign{\medskip}0&0&0&{\frac {{\psi_{11}}^{2}+1}{\psi_{12}}}
&-{\frac {{\psi_{11}}^{2}+1}{2\,\psi_{12}}}&-\psi_{11}
\end {array} \right],$$ $$g_3= \left[ \begin {array}{cccccc} 0&0&0&{\frac {{\psi_{11}}^{2}+1}{
\psi_{12}}}&-{\frac {{\psi_{11}}^{2}+1}{2\,\psi_{12}}} & -\psi_{11}\\ \noalign{\medskip}0&0&0&\psi_{11}&-\psi_{11}/2&
-\psi_{12}\\
\noalign{\medskip}0&0&-3\,{\frac {{\psi_{11}}^{2}+1}{4\,
\psi_{12}}}&-\psi_{11}/2&0&0\\
\noalign{\medskip}{\frac {{\psi_{11}}^{2}+1}{
\psi_{12}}}&\psi_{11}&-1/2\,\psi_{11}&-\psi_{12}/3&0&0\\
\noalign{\medskip}-{\frac {{\psi_{11}}^{2}+1}{2\,
\psi_{12}}}&-\psi_{11}/2&0&0&0&0\\
\noalign{\medskip}-\psi_{11}& -\psi_{12}&0&0&0&0\end {array} \right]$$ After lowering an index of the curvature tensor, it turns out that there is only one (up to symmetries) non-zero component, $R_{1, 2, 1, 2} =\frac {2\psi_{12}}{3}$. Then, supposing $\psi_{12}= -a\ne 0$ and $\psi_{11}=0$, we find the following canonical complex structure and pseudo-Kähler metric of curvature $R_{1, 2, 1, 2}=-\frac {2\,a}{3}$ on the Lie algebra $\mathfrak{h}_{13}$ with $J_3$-invariant 2-planes $\{e_1,e_2\}$ and $\{e_5,e_6\}$:
$J_3(e_2) = -a\, e_1$, $J_3(e_3) = \frac {3}{2\,a}\, e_4 +\frac{3}{a}\, e_5$, $J_3(e_4) = -\frac{2a}{3}\, e_3 -\frac{1}{a}\, e_6$, $J_3(e_6) = -2\,a\, e_5$,
$$g_3=\left[ \begin {array}{cccccc}
0&0&0&-\frac 1a&\frac{1}{2a}&0\\
0&0&0&0&0&a\\
0&0&\frac{3}{4a}&0&0&0\\
-\frac 1a&0&0&\frac{a}{3}&0&0\\
\frac{1}{2a}&0&0&0&0&0\\
0&a&0&0&0&0\end {array} \right].$$
The Lie group $G_{15}$
----------------------
The Lie algebra $\mathfrak{g}_{15}$ is defined in [@Goze-Khakim-Med]: $[X_{1},X_{2}]=X_{4}$, $[X_{1},X_{4}]=X_{6}$, $[X_{2},X_{3}]=X_{5}$. The Lie algebra has two symplectic structures:
$\omega _{1}=-\alpha^{1}\wedge \alpha^{5}+\alpha^{1}\wedge \alpha^{6}+\alpha^{2}\wedge \alpha^{5}+\alpha^{3}\wedge \alpha^{4},$
$\omega _{2}=\alpha^{1}\wedge \alpha^{6}+\alpha^{2}\wedge \alpha^{4}+\alpha^{3}\wedge \alpha^{5}.$
This is the Lie algebra $M7$, for which Magnin [@Mag-3] found the complex structures in explicit form. To use these results, we make the replacements: $X_1:=e_2$, $X_2:=-e_1$, $X_5:=-e_6$, $X_6:=e_5$. Then the Lie bracket relations and symplectic structures become:
$[e_1,e_2]=e_{4}$, $[e_2,e_{4}]=e_5$, $[e_1,e_{3}]=e_6$,
$\omega _{1}=e^1\wedge e^6+ e^2\wedge e^6+e^2\wedge e^5 +e^{3}\wedge e^{4},$
$\omega _{2}=-e^1\wedge e^{4}+e^2\wedge e^5-e^{3}\wedge e^6.$
**First case.** The symplectic structure is: $\omega_1 = e^2\wedge e^6 +e^2\wedge e^5 +e^1\wedge e^6 +e^3\wedge e^4$. For any compatible complex structure, the curvature tensor of the associated metric depends on two parameters $\psi_{11}$ and $\psi_{12}\ne 0$. After lowering the index there is one non-zero component: $$R_{1, 2, 1, 2} =-\frac{\psi_{11}^4+\psi_{11}^3\,\psi_{12} +2\,\psi_{11}^2-2\,\psi_{12}^2\,\psi_{11}^2+ \psi_{11}\,\psi_{12}+1}{\psi_{12}\,(-2\,\psi_{11}\psi_{12}+1+\psi_{11}^2)}.$$
Setting the remaining free parameters $\psi_{ij}$ to zero, we find the canonical complex structure $J_1$:
$J_1(e_1) = \psi_{11}\, e_1 -{\frac {1+{\psi_{11}}^{2}}{\psi_{12}}}\, e_2$,
$J_1(e_2) = \psi_{12}\, e_1 -\psi_{11}\, e_2$,
$J_1(e_3) = -{\frac {{\psi_{11}}^{3}-
{\psi_{11}}^{2}\psi_{12}+\psi_{11}+\psi_{12}}{-2\,\psi_{11}
\,\psi_{12}+1+{\psi_{11}}^{2}}}\, e_3 -{\frac { \left( {\psi_{12}}^{2}-2\,\psi_{11}\,\psi_{12}+1+{\psi_{11}}^
{2} \right) \psi_{12}}{-2\,\psi_{11}\,\psi_{12}+1+{\psi_{11}}^
{2}}}\, e_4$,
$J_1(e_4) = \frac {1+2\,{\psi_{11}}^{2}
+{\psi_{11}}^{4}}{\psi_{12}\, (-2\,\psi_{11}\,\psi_{12}+1+{
\psi_{11}}^{2})}\, e_3 +\frac {{\psi_{11}}^{3}-{\psi_{11}}^{2}\psi_{12}+\psi_{11}+
\psi_{12}}{-2\,\psi_{11}\,\psi_{12}+1+{\psi_{11}}^{2}}\, e_4$,
$J_1(e_5) = -\frac {-\psi_{11}\,\psi_{12}+1+
{\psi_{11}}^{2}}{\psi_{12}}\, e_5 +\frac {1+{\psi_{11}}^{2}}{\psi_{12}}
\, e_6$,
$J_1(e_6) = -{\frac {{\psi_{12}}^{2}-2\,
\psi_{11}\,\psi_{12}+1+{\psi_{11}}^{2}}{\psi_{12}}}\, e_5 +{\frac {-\psi_{11}\,\psi_{12}+1+{\psi_{11}}^{2}}{\psi_{12}}}
\, e_6$.\
The metric tensor of the pseudo-Kähler structure is easily found using the formula $g_1=\omega_1\circ J_1$.
**Second case.** The symplectic structure $\omega _{2}=-e^1\wedge e^{4}+e^2\wedge e^5-e^{3}\wedge e^6$ does not admit a compatible complex structure.
The Lie group $G_{11}$
----------------------
The Lie algebra $\mathfrak{g}_{11}$ is defined by:
$[e_1,e_2] = e_4$, $[e_1,e_4] = e_5$, $[e_2,e_3] = e_6$, $[e_2,e_4] = e_6$.\
Its symplectic structure comes from the list in [@Goze-Khakim-Med]:
$\omega = e^1\wedge e^6 + e^2\wedge e^5 - e^3\wedge e^4 + \lambda e^2\wedge e^6.$
This is the Lie algebra $M8$, considered in the work of Magnin [@Mag-3], who found its complex structures in explicit form. There is a multiparametrical set of compatible complex structures. For any compatible complex structure $J$ and its associated metric $g=\omega \cdot J$, the curvature tensor depends on three parameters $\lambda$, $\psi_{11}$ and $\psi_{12}\ne 0$. The canonical compatible complex structure has $J$-invariant 2-planes $\{e_1,e_2\}$, $\{e_3,e_4\}$ and $\{e_5,e_6\}$, although its form is complicated:
$J(e_1) = \psi_{11}\, e_1 -{\frac {1+{\psi_{11}}^{2}}{\psi_{12}}}\, e_2,\quad
J(e_2)=\psi_{12}\, e_1 -\psi_{11} \, e_2$,
$J(e_3)=-{\frac {2\,{\psi_{12}}^{2}-3\,\psi_{11}\,\lambda\,\psi_{12}
+{\lambda}^{2}(1+{\psi_{11}}^{2})}{\lambda \,\psi_{12}}}\, e_3+{\frac {{\psi_{12}}^{2}-2\,\psi_{11}\,\lambda\,\psi_{12}
+\lambda^{2}(1+\psi_{11}^{2})}{\lambda\, \psi_{12}}}\, e_4$,
$J(e_4)=-{\frac {{\lambda}^{2}({\psi_{11}}^{2}+1)-4\,\psi_{11}\,\lambda\,\psi_{12}
+4\,{\psi_{12}}^{2}}{\psi_{12}\,\lambda}}\, e_3+{\frac {{
\lambda}^{2}({\psi_{11}}^{2}+1)-3\,\psi_{11}\,\lambda\,\psi_{12}+2\,
{\psi_{12}}^{2}}{\psi_{12}\,\lambda}}\, e_4$,
$J(e_5)={\frac {\psi_{11}\,\psi_{12}-
\lambda(1+\psi_{11}^{2})}{\psi_{12}}}\,e_5 +{\frac {1+
{\psi_{11}}^{2}}{\psi_{12}}}\, e_6, $
$J(e_6)=-\frac {{\psi_{12}}
^{2}-2\,\psi_{11}\,\lambda\,\psi_{12}+{\lambda}^{2}+{\lambda}^{2}{
\psi_{11}}^{2}}{\psi_{12}}\,e_5 +\frac {-\psi_{11}\,\psi_{12}+
\lambda+{\psi_{11}}^{2}\lambda}{\psi_{12}}\, e_6. $
The curvature tensor of associated metric $g=\omega \cdot J$ has the following non-zero component: $$R_{1, 2, 1, 2} = -\frac{\lambda^2(\psi_{11}^2+1) -5\lambda \psi_{12}\,\psi_{11} +4\psi_{12}^2}{\lambda \psi_{12}}.$$
The Lie group $G_{10}$
----------------------
The Lie algebra $\mathfrak{g}_{10}$ is defined by:
$[e_1,e_2] = e_4$, $[e_1,e_4] = e_5$, $[e_1,e_3] = e_6$, $[e_2,e_4] = e_6$.\
Its symplectic structure comes from the list in [@Goze-Khakim-Med]:
$\omega = e^1\wedge e^6 + e^2\wedge e^5 - e^2\wedge e^6 - e^3\wedge e^4 .$\
For any compatible complex structure $J$ and its associated metric $g=\omega \cdot J$, the curvature tensor depends on two parameters $\psi_{11}$ and $\psi_{12}\ne 0$. The canonical compatible complex structure has $J$-invariant 2-planes $\{e_1,e_2\}$, $\{e_3,e_4\}$ and $\{e_5,e_6\}$, although its form is complicated:
$J(e_1) = \psi_{11}\, e_1 -{\frac {1+{\psi_{11}}^{2}}{\psi_{12}}}\, e_2,\quad
J(e_2)=\psi_{12}\, e_1 -\psi_{11} \, e_2$,
$J(e_3)=-{\frac {\psi_{12}+3\,{
\psi_{11}}^{2}\psi_{12}+2\,{\psi_{12}}^{2}\psi_{11}+\psi_{11}
+{\psi_{11}}^{3}}{2\,{\psi_{12}}^{2}+2\,\psi_{12}\,\psi_{11}+1
+{\psi_{11}}^{2}}}\, e_3 -{\frac {({\psi_{12}}^{2}+2\,\psi_{12}\,\psi_{11}+1+{{
\psi_{11}}}^{2}) \psi_{12}}{2\,{\psi_{12}}^{2}+2\,\psi_{12}\,\psi_{11}+1+{\psi_{11}}^{2}}}\, e_4$,
$J(e_4)={\frac { \left( {\psi_{11}}^{2}+4\,\psi_{12}
\,\psi_{11}+4\,{\psi_{12}}^{2}+1 \right) \left( 1+{\psi_{11}}^{
2} \right) }{\psi_{12}\, \left( 2\,{\psi_{12}}^{2}+2\,\psi_{12}
\,\psi_{11}+1+{\psi_{11}}^{2} \right) }}\, e_3 +{\frac {\psi_{12}+3\,{\psi_{11}}^{2}\psi_{12}+2\,{\psi_{12}}^{2}\psi_{11}+\psi_{11}
+{\psi_{11}}^{3}}{2\,{\psi_{12}}^{2}+2\,\psi_{12}\,\psi_{11}+1
+{\psi_{11}}^{2}}}\, e_4$,
$J(e_5)={\frac {\psi_{12}\,\psi_{11}+1+{\psi_{11}}^{2}}{\psi_{12}}}\,e_5 +{\frac {1+\psi_{11}^{2}}{\psi_{12}}}\, e_6$.
$J(e_6)=-{\frac {{\psi_{12}}^{2}+2\,\psi_{12}\,\psi_{11}+1+{\psi_{11}}^{2}}{\psi_{12}}}\,e_5 -{\frac {\psi_{12}\,\psi_{11}+1+{\psi_{11}}^{2}}{\psi_{12}}}\, e_6$.
The curvature tensor of associated metric $g=\omega \cdot J$ has the following non-zero component: $$R_{1, 2, 1, 2} =
{\frac {{\psi_{11}}^{4}+4\,\psi_{12}\,{\psi_{11}}^{3}+3\,
{\psi_{11}}^{2}{\psi_{12}}^{2}+2\,{\psi_{11}}^{2}+4\,\psi_{11}\,
\psi_{12}-2\,\psi_{11}\,{\psi_{12}}^{3}+3\,{\psi_{12}}^{2}+1-2
\,{\psi_{12}}^{4}}{\psi_{12}\, \left( 2\,{\psi_{12}}^{2}+2\,
\psi_{11}\,\psi_{12}+1+{\psi_{11}}^{2} \right) }}.$$
The Lie group $G_{12}$
----------------------
The Lie algebra $\mathfrak{g}_{12}$ is defined by: $[X_1,X_2] = X_4$, $[X_1,X_4] = X_5$, $[X_1,X_3] = X_6$, $[X_2,X_3] = -X_5$, $[X_2,X_4] = X_6$. Its symplectic structure comes from the list in [@Goze-Khakim-Med]:
$\omega = \lambda \alpha^1\wedge \alpha^5 +\alpha^2\wedge \alpha^6 +(\lambda+1)\alpha^3\wedge \alpha^4, \qquad \lambda\ne 0,\,-1.$
This is the Lie algebra $M10$, considered in the work of Magnin [@Mag-3], who found the complex structures in explicit form. To use these results, we make the replacements: $X_1=-e_1$, $X_2=e_2$, $X_3=-e_4$, $X_4=-e_3$, $X_5=e_5$, $X_6=e_6$. Then the Lie bracket relations and symplectic structures become:
$[e_1,e_2] = e_3$, $[e_1,e_3] = e_5$, $[e_1,e_4] = e_6$, $[e_2,e_4] = e_5$, $[e_2,e_3] = -e_6$.
$\omega = -\lambda e^1\wedge e^5 +e^2\wedge e^6 -(\lambda+1)e^3\wedge e^4, \qquad \lambda\ne 0,\,-1.$
In [@Mag-3] it is shown that on the given group there are several types of complex structures from which we will choose the ones that are compatible with $\omega$.
**First type: $\psi_{12}\ne \psi_{34}$.**\
For any compatible complex structure of this type and its associated metric, the curvature tensor depends on three parameters $\psi_{11}$, $\psi_{33}$ and $\psi_{12}\ne 0$. The elements of the curvature tensor have very complicated expressions which also depend on the parameter $\lambda$. We simplify the expressions for the compatible complex structure $J_1$ and the corresponding metric $g_1$ by setting to zero the parameters which do not appear in the curvature and imposing the additional conditions $\psi_{11}=0$, $\psi_{33}=0$. $$J_1(e_2)=\psi_{12}\, e_1,\qquad J_1(e_4)={\frac { \left( \lambda-1 \right)
\psi_{12}}{{\lambda\,\psi_{12}}^{2}-1}}\, e_3,\qquad J_1(e_6)=-{\frac {1}{\lambda\,\psi_{12}}}\, e_5.$$ $$g_1= \left[ \begin {array}{cccccc} 0&0&0&0&0&{\psi_{12}}^{-1}\\ \noalign{\medskip}0&0&0&0&\lambda\,\psi_{12}&0
\\ \noalign{\medskip}0&0&-{\frac { \left( \lambda+1 \right) \left(
{\psi_{12}}^{2}\lambda-1 \right) }{ \left( \lambda-1 \right)
\psi_{12}}}&0&0&0\\ \noalign{\medskip}0&0&0&-{\frac {\psi_{12}\,
\left( {\lambda}^{2}-1 \right) }{{\psi_{12}}^{2}\lambda-1}}&0&0
\\ \noalign{\medskip}0&\lambda\,\psi_{12}&0&0&0&0
\\ \noalign{\medskip}{\psi_{12}}^{-1}&0&0&0&0&0\end {array} \right]$$
The curvature tensor has the following non-zero components:
$$R_{1, 2, 2}^6 = -\frac{\lambda^3\psi_{12}^2+3\lambda^2\psi_{12}^2 -3\lambda - 1}{\lambda^2-1},\quad
R_{1, 2, 1}^5 = \frac{\lambda^3\psi_{12}^2+3\lambda^2\psi_{12}^2 -3\lambda - 1}{\lambda(\lambda^2-1)\psi_{12}^2}.$$
After omitting the index, there remains one component: $$R_{1, 2, 1, 2}=\frac{\lambda^3\psi_{12}^2+3\lambda^2\psi_{12}^2 -3\lambda - 1}{(\lambda^2-1)\psi_{12}^2}.$$
**Second type: $\psi_{12}=\psi_{34}$, $\psi_{11}=\psi_{33}=0$.**\
The compatible complex structure of this type is of the form: $$J_2= \left[ \begin {array}{cccccc} 0&-1&0&0&0&0\\ \noalign{\medskip}1&0&0&0
&0&0\\ \noalign{\medskip}\psi_{31}&\psi_{41}&0&-1&0&0
\\ \noalign{\medskip}\psi_{41}&-\psi_{31}&1&0&0&0
\\ \noalign{\medskip}\psi_{51}&\psi_{52}&{\frac {\psi_{41}\,
\left( \lambda+1 \right) }{\lambda}}&-{\frac {\psi_{31}\, \left(
\lambda+1 \right) }{\lambda}}&0&{\lambda}^{-1}\\
\noalign{\medskip}-
\lambda\,\psi_{52}&J_{61}&\psi_{31}
\,(\lambda+1)&\psi_{41}\,(\lambda+1)&-\lambda&0
\end {array} \right],$$ where $J_{61}=\lambda\,\psi_{51}-({\psi_{41}}^{2}+{\psi_{31}}^{2})(\lambda
+1)$.
For any such complex structure and its associated metric, the curvature tensor does not depend on the parameters $\psi_{ij}$, but only on $\lambda$. The non-zero elements of the curvature tensor are: $R_{1, 2, 2}^6 = -\frac{\lambda^2+4\lambda +1}{\lambda+1}$, $R_{1, 2, 1}^5 = \frac{\lambda^2+4\lambda +1}{\lambda(\lambda+1)}$. After omitting the index, there remains one component: $R_{1, 2, 1, 2}=-\frac{\lambda^2+4\lambda +1}{\lambda+1}$. Since the remaining parameters do not affect the curvature, we may set them all to zero. Then we find the canonical expressions for the complex structure and the pseudo-Riemannian metric: $$J_{20}(e_1)=e_2,\quad J_{20}(e_3)=e_4,\quad J_{20}(e_5)=-\lambda\, e_6,$$ $$g_{20}=-2\,e^1\cdot e^6 -2\,\lambda\,e^2\cdot e^5 -(\lambda+1)\, (e^3)^2 -(\lambda+1)\, (e^4)^2.$$
**Third type: $\psi_{34}=\psi_{12}$, $\psi_{33}\ne \psi_{11}$.**\
For any compatible complex structure of this type and the associated metric, the curvature tensor does not depend on the parameters $\psi_{ij}$. Setting the free parameters to zero, we find the following canonical expressions for the complex structure and the associated pseudo-Kähler metric: $$J_{30}= \left[ \begin {array}{cccccc} 1&\sqrt {2}&0&0&0&0\\ \noalign{\medskip}-\sqrt {2}&-1&0&0&0&0\\ \noalign{\medskip}0&0&-{
\frac {\lambda+1}{\lambda-1}}&\sqrt {2}&0&0\\ \noalign{\medskip}0&0&-{
\frac { \left( {\lambda}^{2}+1 \right) \sqrt {2}}{ \left( \lambda-1
\right) ^{2}}}&{\frac {\lambda+1}{\lambda-1}}&0&0
\\ \noalign{\medskip}0&0&0&0&-1&-{\frac {\sqrt {2}}{\lambda}}
\\ \noalign{\medskip}0&0&0&0&\sqrt {2}\lambda&1\end {array} \right],$$ $$g_{30}= \left[ \begin {array}{cccccc} 0&0&0&0&\lambda&\sqrt {2}\\ \noalign{\medskip}0&0&0&0&\sqrt {2}\lambda&1\\ \noalign{\medskip}0&0
&{\frac { \left( \lambda+1 \right) \left( {\lambda}^{2}+1 \right)
\sqrt {2}}{ \left( \lambda-1 \right) ^{2}}}&-{\frac { \left( \lambda+1
\right) ^{2}}{\lambda-1}}&0&0\\ \noalign{\medskip}0&0&-{\frac {
\left( \lambda+1 \right) ^{2}}{\lambda-1}}& \left( \lambda+1 \right)
\sqrt {2}&0&0\\ \noalign{\medskip}\lambda&\sqrt {2}\lambda&0&0&0&0
\\ \noalign{\medskip}\sqrt {2}&1&0&0&0&0\end {array} \right].$$ The curvature tensor has the following non-zero components: $$R_{1, 2, 2}^6 = -\frac{2(\lambda^4+4\lambda^3 -2\lambda^2 +4\lambda +1)}{(\lambda-1)^2(\lambda+1)},\quad
R_{1, 2, 1}^5 = \frac{2(\lambda^4+4\lambda^3 -2\lambda^2 +4\lambda +1)}{(\lambda+1)\lambda(\lambda-1)^2}.$$ After omitting the index, there remains one component: $$R_{1, 2, 1, 2}=\frac{\sqrt{2}(\lambda^4+4\lambda^3 -2\lambda^2 +4\lambda +1)}{(\lambda-1)^2(\lambda+1)}.$$
Lie algebras of type $(2,6)$
============================
There are three Lie algebras of type $(2,6)$ admitting a pseudo-Kähler structure.
The Lie group $G_{24}$
----------------------
The Lie algebra $\mathfrak{h}_{24}$ is defined by:
$[e_1,e_4] = e_6$, $[e_2,e_3] = e_5$.\
This Lie algebra is the direct product of two three-dimensional Heisenberg Lie algebras: $\mathfrak{g}_{24} =\mathfrak{h}_3\times\mathfrak{h}_3$. The symplectic structure is:
$\omega = e^1\wedge e^6 +e^2\wedge e^5 +e^3\wedge e^4$.
For any compatible complex structure $J$ and its associated metric $g=\omega \circ J$, the curvature tensor depends on two parameters $\psi_{11}\ne 0$ and $\psi_{12}\ne 0$. Therefore the semicanonical pseudo-Kähler structure is as follows:
$J(e_1) = \psi_{11}\, e_1 -\frac {1+\psi_{11}^{2}}{\psi_{12}}\, e_2,\quad J(e_2) = \psi_{12}\, e_1 -\psi_{11}\, e_2$,
$J(e_3) = {\frac {{\psi_{11}}^{2}-1}{2\psi_{11}}}\, e_3 -{\frac {2\,{\psi_{11}}^{2}+ {\psi_{11}}^{4}+1}{2\psi_{11}\,{\psi_{12}}^{2}}}\, e_4,\quad J(e_4) = \frac {\psi_{12}^{2}}{2\psi_{11}}\, e_3 -\frac {\psi_{11}^{2}-1}{2\,\psi_{11}}\, e_4$,
$J(e_5) = \psi_{11}\, e_5 +\frac {1+\psi_{11}^{2}}{\psi_{12}}\, e_6,\quad
J(e_6) = -\psi_{12}\, e_5 -\psi_{11}\, e_6.$
$$g= \left[ \begin {array}{cccccc}
0&0&0&0&{\frac {1+{\psi_{11}}^{2}}{\psi_{12}}}&-\psi_{11}\\
\noalign{\medskip}0&0&0&0&\psi_{11}&-\psi_{12}\\
\noalign{\medskip}0&0&-{\frac {2\,{\psi_{11}}^{2}+
{\psi_{11}}^{4}+1}{2\,{\psi_{12}}^{2}\psi_{11}}}&-{\frac {{
\psi_{11}}^{2}-1}{2\,\psi_{11}}}&0&0\\
\noalign{\medskip}0&0&-{\frac {{\psi_{11}}^{2}-1}{2\,\psi_{11}}}&-{\frac {{\psi_{12}}^{2}}{2\,\psi_{11}}}&0&0\\
\noalign{\medskip}{\frac {1+{\psi_{11}}^{2}}{\psi_{12}}}&\psi_{11}&0&0&0&0\\
\noalign{\medskip}-\psi_{11}&-\psi_{12}&0&0&0&0\end {array} \right].$$
For any compatible complex structure and its associated metric, the curvature tensor depends on two parameters $\psi_{11}\ne 0$ and $\psi_{12}\ne 0$: $R_{1, 2, 2}^6 = -\psi_{11}^2$, $R_{1, 2, 1}^6 =-\frac{(1 +\psi_{11}^2)\psi_{11}}{\psi_{12}}$, $R_{1, 2, 2}^5 =-\psi_{12} \psi_{11}$, $R_{1, 2, 1}^5 = -\psi_{11}^2$. On omitting the index, it turns out that there is (within symmetries) only one non-zero component of the curvature tensor: $R_{1, 2, 1, 2} = \psi_{11}$.
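As an independent sanity check (our own SymPy sketch, not part of the original Maple computation), one can verify for this semicanonical structure that $J^2=-\mathrm{Id}$, that the compatibility condition $\omega(JX,Y)+\omega(X,JY)=0$ holds, and that the Nijenhuis tensor vanishes, using the formulas collected in the final section; the matrix encoding $Je_i=\sum_k J[k,i]\,e_k$ and the 0-based indices are ours.

```python
import sympy as sp

n = 6
p11, p12 = sp.symbols('psi11 psi12', nonzero=True)

# Semicanonical J of h_24, encoded column-wise: J e_i = sum_k J[k, i] e_k
J = sp.zeros(n, n)
J[0, 0], J[1, 0] = p11, -(1 + p11**2)/p12                                         # J e1
J[0, 1], J[1, 1] = p12, -p11                                                      # J e2
J[2, 2], J[3, 2] = (p11**2 - 1)/(2*p11), -(2*p11**2 + p11**4 + 1)/(2*p11*p12**2)  # J e3
J[2, 3], J[3, 3] = p12**2/(2*p11), -(p11**2 - 1)/(2*p11)                          # J e4
J[4, 4], J[5, 4] = p11, (1 + p11**2)/p12                                          # J e5
J[4, 5], J[5, 5] = -p12, -p11                                                     # J e6

# omega = e^1 ^ e^6 + e^2 ^ e^5 + e^3 ^ e^4, as an antisymmetric matrix
w = sp.zeros(n, n)
for i, j in [(0, 5), (1, 4), (2, 3)]:
    w[i, j], w[j, i] = 1, -1

# structure constants of h_24: [e1, e4] = e6, [e2, e3] = e5
C = sp.MutableDenseNDimArray.zeros(n, n, n)
C[0, 3, 5], C[3, 0, 5] = 1, -1
C[1, 2, 4], C[2, 1, 4] = 1, -1

assert (J*J + sp.eye(n)).applyfunc(sp.simplify) == sp.zeros(n, n)   # J^2 = -Id
assert (J.T*w + w*J).applyfunc(sp.simplify) == sp.zeros(n, n)       # omega-compatibility

def nijenhuis(i, j, k):
    # N_ij^k = J_i^l J_j^m C_lm^k - J_i^l J_m^k C_lj^m - J_j^l J_m^k C_il^m - C_ij^k
    return sp.simplify(sum(J[l, i]*J[m, j]*C[l, m, k]
                           - J[l, i]*J[k, m]*C[l, j, m]
                           - J[l, j]*J[k, m]*C[i, l, m]
                           for l in range(n) for m in range(n)) - C[i, j, k])

assert all(nijenhuis(i, j, k) == 0
           for i in range(n) for j in range(n) for k in range(n))
print("J^2 = -Id, omega-compatible, Nijenhuis tensor vanishes")
```

The same fragment, with $J$, $\omega$ and the $C_{ij}^k$ replaced, can be used to check any of the structures listed in this paper.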
The Lie group $G_{17}$
----------------------
The Lie algebra $\mathfrak{h}_{17}$ is defined by:
$[e_1,e_3] = e_5$, $[e_1,e_4] = e_6$, $[e_2,e_3] = e_6$.\
Its symplectic structure comes from the list in [@Goze-Khakim-Med]:
$\omega = e^1\wedge e^6 + e^2\wedge e^5 + e^3\wedge e^4$
For any compatible complex structure and its associated metric, the curvature tensor depends on two parameters $\psi_{11}$ and $\psi_{12}\ne 0$.
Setting the remaining free parameters $\psi_{ij}$ to zero, we find the canonical complex structure $J$ and associated metric $g=\omega \circ J$:
$J(e_1) = \psi_{11}\, e_1 -{\frac {1+{\psi_{11}}^{2}}{\psi_{12}}}\, e_2,\qquad
J(e_2) = \psi_{12}\, e_1 -\psi_{11}\, e_2$, $J(e_3) = \psi_{11}\,e_3 +2\,{\frac {1+{\psi_{11}}^{2}}{\psi_{12}}}\, e_4,\qquad J(e_4) = -\frac {\psi_{12}}{2}\, e_3 -\psi_{11}\, e_4$, $J(e_5) = \psi_{11}\, e_5 +{\frac {1+{\psi_{11}}^{2}}{\psi_{12}}}\, e_6,\qquad
J(e_6) = -\psi_{12}\, e_5 -\psi_{11}\, e_6$. $$g=\left[ \begin {array}{cccccc} 0&0&0&0&{\frac {1+{\psi_{11}}^{2}}{
\psi_{12}}}&-\psi_{11}\\
\noalign{\medskip}0&0&0&0&\psi_{11}&-\psi_{12}\\
\noalign{\medskip}0&0&2\,{\frac {1+{\psi_{11}}^{2}}{\psi_{12}}}&-\psi_{11}&0&0\\ \noalign{\medskip}0&0&-\psi_{11}&\psi_{12}/2&0&0\\
\noalign{\medskip}{\frac {1+{\psi_{11}}^{2}}{\psi_{12}}}&\psi_{11}&0&0&0&0\\ \noalign{\medskip}-\psi_{11}&-\psi_{12}&0&0&0&0\end {array} \right]$$ The curvature tensor has the following non-zero components: $R_{1, 2, 1}^5 = \psi_{11}\psi_{12}$,$R_{1, 2, 1}^6 = 1+\psi_{11}^2$,$R_{1, 2, 2}^5 = \psi_{12}^2$,$R_{1, 2, 2}^6 = \psi_{11}\psi_{12} $. After omitting the index, there remains one component: $R_{1, 2, 1, 2} = -\psi_{12}$.
The Lie group $G_{16}$
----------------------
The Lie algebra $\mathfrak{g}_{16}$ is defined by: $[X_1,X_2] = X_5$, $[X_1,X_3] = X_6$, $[X_2,X_4] = X_6$, $[X_3,X_4] =-X_5$. It is the complex Heisenberg Lie algebra. The algebra $\mathfrak{g}_{16}$ has (see [@Goze-Khakim-Med]) two symplectic structures, which are written using the dual basis $\alpha^i$ as follows:
$\omega_1 = \alpha^1\wedge \alpha^6 +\alpha^2\wedge \alpha^3 -\alpha^4\wedge \alpha^5$,
$\omega_2 = \alpha^1\wedge \alpha^6 -\alpha^2\wedge \alpha^3 +\alpha^4\wedge \alpha^5$.\
Left-invariant complex structures on this group are described in explicit form in the work of Magnin [@Mag-3] (algebra $M5$). We will use Magnin’s results, so we make the substitution: $X_1=e_1$, $X_2=e_3$, $X_3=e_4$, $X_4=e_2$, $X_5=e_5$, $X_6=e_6$. Then the non-trivial brackets are as given in [@Mag-3]:
$[e_1,e_3] = e_5$, $[e_1,e_4] = e_6$, $[e_2,e_3] =-e_6$, $[e_2,e_4] =e_5$.\
The symplectic forms become:
$\omega_1 = e^1\wedge e^6 +e^3\wedge e^4 -e^2\wedge e^5$,
$\omega_2 = e^1\wedge e^6 -e^3\wedge e^4 +e^2\wedge e^5$.
Let $z_1=\frac{1}{2}(e_1+ie_2)$, $z_2=\frac{1}{2}(e_3-ie_4)$, $z_3=\frac{1}{2}(e_5-ie_6)$; then the Lie algebra $\mathfrak{g}_{16}$ is defined by: $[z_1,z_2]=z_3$. If $J_0$ is the complex structure of the complex Lie algebra $M5$, then: $J_0(e_1)=-e_2$, $J_0(e_3)=e_4$, $J_0(e_5)=e_6$. The complex structure $J_0$ is not $\omega_1$-compatible, but compatible with $\omega_2$. Thus, the metric $g_0$ of the pseudo-Kähler structure $(J_0,\omega_2,g_0)$ is as follows: $$g_{0}=2\,e^1\cdot e^5 -2\,e^2\cdot e^6 -(e^3)^2 -(e^4)^2.$$ The curvature tensor has the following non-zero components: $R_{1, 2, 1}^6 =1$, $R_{1, 2, 2}^5 =1$, $R_{1, 2, 1, 2} =-1$.
**First case.** The symplectic structure is: $\omega_1 = e^1\wedge e^6 -e^2\wedge e^5 +e^3\wedge e^4$.
For any compatible complex structure $J$, the curvature tensor of the associated metric $g=\omega_1 \circ J$ depends on two parameters $\psi_{11}$ and $\psi_{12}\ne 0$. Setting the remaining free parameters $\psi_{ij}$ to zero, we find the canonical complex structure $J_1$ and pseudo-Kähler metric $g_1=\omega_1 \circ J_1$:
$J_1(e_1) = \psi_{11}\, e_1 -{\frac {1+{\psi_{11}}^{2}}{\psi_{12}}}\, e_2,\qquad
J_1(e_2) = \psi_{12}\, e_1 -\psi_{11}\, e_2$, $J_1(e_3) = -{\frac {\psi_{11}\, (1+{\psi_{11}}^{2}-{\psi_{12}}^{2})}{ {\psi_{12}}^{2}+1+{\psi_{11}}^{2}}}\, e_3 -{\frac {2( 1+{\psi_{11}}^{2}) \psi_{12}}{{\psi_{12}}^{2}+1+{\psi_{11}}^{2}}}\, e_4$, $J_1(e_4) = {\frac {2\,{\psi_{11}}^{2} +{\psi_{11}}^{4}-2\,{\psi_{11}}^{2}{\psi_{12}}^{2} +{\psi_{12}}^{4}+2
\,{\psi_{12}}^{2}+1}{2\psi_{12}\, ({\psi_{12}}^{2} +1
+{\psi_{11}}^{2}) }}\, e_3 +{\frac {\psi_{11}\, (1+{\psi_{11}}^{2}-{\psi_{12}}^{2}) }{{\psi_{12}}^{2}+1+{\psi_{11}}^{2}}}\, e_4$, $J_1(e_5) = \psi_{11}\, e_5 -{\frac {1+{\psi_{11}}^{2}}{\psi_{12}}}\, e_6,\qquad
J_1(e_6) = \psi_{12}\, e_5 -\psi_{11}\, e_6$.\
The curvature tensor has the following non-zero components: $$R_{1, 2, 1}^5 =\frac{\psi_{11}(1+\psi_{11}^2+\psi_{12}^2)}{\psi_{12}},\quad
R_{1, 2, 2}^5=1+\psi_{11}^2+\psi_{12}^2,$$ $$R_{1, 2, 1}^6 =-\frac{(1+\psi_{11}^2)(1+\psi_{11}^2+\psi_{12}^2)}{\psi_{12}^2},\quad
R_{1, 2, 2}^6=-\frac{\psi_{11}(1+\psi_{11}^2+\psi_{12}^2)}{\psi_{12}}.$$ After omitting the index, there remains one component: $$R_{1, 2, 1, 2} =\frac{1+\psi_{11}^2+\psi_{12}^2}{\psi_{12}}.$$
**Second case.** The symplectic structure is: $\omega_2 = e^1\wedge e^6 +e^2\wedge e^5 -e^3\wedge e^4$. For any compatible complex structure $J$, the curvature tensor of the associated metric $g=\omega_2 \circ J$ depends on two parameters $\psi_{34}$ and $\psi_{12}=\pm 1$. Let $\psi_{12}=1$. Setting the remaining free parameters $\psi_{ij}$ to zero, we find the canonical complex structure $J_2$ and pseudo-Kähler metric $g_2=\omega_2 \circ J_2$: $$J_2(e_2) = e_1,\quad J_2(e_4) = \psi_{34}\, e_3,\quad
J_2(e_6) = -e_5,$$ $$g_{2}=2\,e^1\cdot e^5 -2\,e^2\cdot e^6 +\frac{1}{\psi_{34}}\, (e^3)^2 +\psi_{34}\, (e^4)^2.$$ The curvature tensor has the following non-zero components: $R_{1, 2, 1}^6 = -{\rm sign}(\psi_{12})\psi_{34}$, $R_{1, 2, 2}^5 = -{\rm sign}(\psi_{12})\psi_{34}$. After omitting the index, there remains one component: $R_{1, 2, 1, 2} = {\rm sign}(\psi_{12})\psi_{34}$.
The complex structure $J_2$ is biinvariant (so that $G_{16}$ is a complex Lie group) if and only if $\psi_{34} =-1$, that is, $J_2=J_0$.
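For concreteness, here is our reading of the convention $g=\omega\circ J$ (an assumption on our part, since the sign convention is not spelled out in the text): $g(X,Y)=\omega(X,JY)$, i.e. $g=\omega J$ as matrices. A short SymPy sketch then recovers $g_2$ from $\omega_2$ and $J_2$, with the unlisted entries of $J_2$ filled in from $J_2^2=-\mathrm{Id}$:

```python
import sympy as sp

n = 6
t = sp.symbols('psi34', nonzero=True)   # psi_34

# J_2: J e2 = e1, J e4 = psi34 e3, J e6 = -e5; the rest follows from J^2 = -Id
J = sp.zeros(n, n)
J[0, 1], J[1, 0] = 1, -1
J[2, 3], J[3, 2] = t, -1/t
J[4, 5], J[5, 4] = -1, 1

# omega_2 = e^1 ^ e^6 + e^2 ^ e^5 - e^3 ^ e^4
w = sp.zeros(n, n)
w[0, 5], w[5, 0] = 1, -1
w[1, 4], w[4, 1] = 1, -1
w[2, 3], w[3, 2] = -1, 1

assert (J*J + sp.eye(n)).applyfunc(sp.simplify) == sp.zeros(n, n)
g = (w*J).applyfunc(sp.simplify)   # g(e_i, e_j) = omega(e_i, J e_j)
assert g == g.T                    # compatibility makes g symmetric
# nonzero entries: g(e1,e5)=1, g(e2,e6)=-1, g(e3,e3)=1/psi34, g(e4,e4)=psi34
print(g[0, 4], g[1, 5], g[2, 2], g[3, 3])
```

which is exactly $g_{2}=2\,e^1\cdot e^5 -2\,e^2\cdot e^6 +\frac{1}{\psi_{34}}\, (e^3)^2 +\psi_{34}\, (e^4)^2$ as displayed above.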
Lie algebras of type $(4,6)$
============================
In this class there is only one algebra.
The Lie group $G_{25}$
----------------------
The Lie algebra $\mathfrak{h}_{25}$ is defined by $[e_1,e_2] =e_3$. This Lie algebra is the direct product of the three-dimensional nilpotent Heisenberg Lie algebra $\mathfrak{h}_3$ and $\mathbb{R}^3$. The symplectic structure is:
$\omega=e^1\wedge e^3+e^2\wedge e^4+e^5\wedge e^6$.
In this case there is an 8-parameter family of compatible complex structures and pseudo-Kähler metrics. All the metrics are flat. Therefore we specify only the simplest expressions, without parameters: $$J(e_1) = e_2,\quad J(e_3) = e_4,\quad J(e_5) = e_6.$$ $$g=\left[ \begin {array}{cccccc}
0&0&0&-1&0&0\\
0&0&1&0&0&0\\
0&1&0&0&0&0\\
-1&0&0&0&0&0\\
0&0&0&0&1&0\\
0&0&0&0&0&1\end {array} \right].$$
Lie algebras of type $(3,6)$
============================
There are two Lie algebras of type (3,6) admitting a pseudo-Kähler structure.
The Lie group $G_{18}$
----------------------
The Lie algebra $\mathfrak{h}_{18}$ is defined by: $[e_{1},e_{2}] =e_{4}$, $[e_{1},e_{3}]=e_{5}$, $[e_{2},e_{3}] =e_{6}$. This Lie algebra has three symplectic structures [@Goze-Khakim-Med]:
$\omega _{1}(\lambda )=e^{1}\wedge e^{6}+\lambda e^{2}\wedge e^{5}+\left( \lambda -1\right) e^{3}\wedge e^{4}$, $\quad \lambda \ne 0,\,1$,
$\omega _{2}(\lambda ) = e^{1}\wedge e^{5}{+}\lambda e^{1}\wedge e^{6}{-}\lambda e^{2}\wedge e^{5}{+}e^{2}\wedge e^{6}{-}2\lambda e^{3}\wedge e^{4}$, $\quad \lambda \ne 0$,
$\omega _{3}= -e^{1}\wedge e^{6} +e^{2}\wedge e^{5}+2e^{3}\wedge e^{4}+e^{3}\wedge e^{5}$.\
Left-invariant complex structures on this group are described in explicit form in the work of Magnin [@Mag-3] (algebra $M3$).
**First case.** The symplectic structure is: $\omega_1 = e^1\wedge e^6 +\lambda e^2\wedge e^5 +(\lambda-1) e^3\wedge e^4$. The compatible complex structures exist only for $\lambda =-1$; therefore
$\omega_1 = e^1\wedge e^6 - e^2\wedge e^5 -2\, e^3\wedge e^4$.\
For any compatible complex structure and its associated metric, the curvature tensor depends on three parameters $\psi_{11}$, $\psi_{12}$ and $\psi_{34}$. The curvature tensor has the following non-zero components: $R_{1, 2, 1}^6 =-\frac{2\psi_{34}(1 + \psi_{11}^2 )}{\psi_{12}}$, $R_{1, 2, 2}^6 =-2\psi_{11} \psi_{34}$, $R_{1, 2, 2}^5 =2\psi_{12} \psi_{34}$, $R_{1, 2, 1}^5 =2\psi_{11} \psi_{34}$. After omitting the index, there remains one component: $R_{1, 2, 1, 2} =2\psi_{34}$. Setting the remaining free parameters $\psi_{ij}$ to zero, we find the canonical complex structure $J_1$ and the pseudo-Kähler metric $g_1 =\omega_1 \circ J_1$:
$J_1(e_1) = \psi_{11}\, e_1 -{\frac {1+{\psi_{11}}^{2}}{\psi_{12}}}\, e_2,\qquad J_1(e_2) = \psi_{12}\, e_1 -\psi_{11}\, e_2$, $J_1(e_3) = -{\frac {1}{\psi_{34}}}\, e_4,\qquad J_1(e_4) = \psi_{34}\, e_3$, $J_1(e_5) = \psi_{11}\, e_5 -{\frac {1+{\psi_{11}}^{2}}{\psi_{12}}}\, e_6,\qquad J_1(e_6) = \psi_{12}\, e_5 -\psi_{11}\, e_6$. $$g_1= \left[ \begin {array}{cccccc}
0&0&0&0&-{\frac {1+{\psi_{11}}^{2}}{\psi_{12}}}&-\psi_{11}\\
\noalign{\medskip}0&0&0&0&-\psi_{11}&-\psi_{12}\\
\noalign{\medskip}0&0&{\frac {2}{\psi_{34}}}&0&0&0\\
\noalign{\medskip}0&0&0&2\,\psi_{34}&0&0\\
\noalign{\medskip}-{\frac {1+{\psi_{11}}^{2}}{\psi_{12}}}&-\psi_{11}&0&0&0&0\\ \noalign{\medskip}-\psi_{11}&-\psi_{12}&0&0&0&0\end {array} \right]$$
**Second case.** The symplectic structure is: $\omega_2 = e^1\wedge e^5 +e^2\wedge e^6 +\lambda e^1\wedge e^6 -\lambda e^2\wedge e^5 -2\lambda e^3\wedge e^4$. The compatible complex structures exist only in the case $\psi_{16}=0$, $\psi_{25}=0$ and $\psi_{12}^2=1$. We take the case $\psi_{12}=1$. For any compatible complex structure and its associated metric, the curvature tensor depends on one parameter $\psi_{34}$. The curvature tensor has the following non-zero components: $R_{1, 2, 2}^6 =-\frac{2 \lambda \psi_{34}}{\lambda^2 +1}$, $R_{1, 2, 1}^6 = -\frac{2 \lambda^2 \psi_{34}}{\lambda^2 +1}$, $R_{1, 2, 1}^5 = -\frac{2 \lambda \psi_{34}}{\lambda^2 +1}$, $R_{1, 2, 2}^5 =\frac{2 \lambda^2 \psi_{34}}{\lambda^2 +1}$. After omitting the index, there remains one component: $R_{1, 2, 1, 2} = 2\lambda \psi_{34}$. Setting the remaining free parameters $\psi_{ij}$ to zero, we find the canonical complex structure $J_2$ and the pseudo-Kähler metric $g_2 =\omega_2 \circ J_2$: $$J_2(e_2) =e_1,\quad
J_2(e_4) = \psi_{34}\, e_3,\quad
J_2(e_6) = e_5,$$ $$g_2=\left[ \begin {array}{cccccc} 0&0&0&0&-\lambda&1\\
0&0&0&0&-1&-\lambda\\
0&0&\frac {2\,\lambda}{\psi_{34}}&0&0&0\\
0&0&0&2\,\lambda\,\psi_{34}&0&0\\
-\lambda&-1&0&0&0&0\\
1&-\lambda&0&0&0&0\end {array} \right].$$
**Third case.** The symplectic structure is:\
$\omega_3 = -e^1\wedge e^6 +e^2\wedge e^5 +2e^3\wedge e^4 +e^3\wedge e^5 $.
For any compatible complex structure and its associated metric, the curvature tensor depends on two parameters $\psi_{25}\ne 0$ and $\psi_{46}\ne 0$. The curvature tensor has the following non-zero components: $R_{1, 2, 1}^6 = -\frac{6 \psi_{25}}{\psi_{46}}$, $R_{1, 2, 2}^5 = 54 \psi_{46} \psi_{25}$, $R_{1, 2, 3}^5 = 18 \psi_{46} \psi_{25}$, $R_{1, 2, 3}^4 = -6 \psi_{46} \psi_{25}$, $R_{1, 3, 1}^6 =-\frac{2 \psi_{25}}{\psi_{46}}$, $R_{1, 3, 2}^5 = 18 \psi_{46} \psi_{25}$, $R_{1, 2, 2}^4 =-18 \psi_{46} \psi_{25}$, $R_{1, 3, 2}^4 = -6 \psi_{46} \psi_{25}$, $R_{1, 3, 3}^5 = 6 \psi_{46} \psi_{25}$, $R_{1, 3, 3}^4 = -2 \psi_{46} \psi_{25}$.
After omitting the index, there remain three components: $R_{1, 2, 1, 3} = 6 \psi_{25}$, $R_{1, 3, 1, 3} = 2 \psi_{25}$, $R_{1, 2, 1, 2} = 18 \psi_{25}$. Setting the remaining free parameters $\psi_{ij}$ to zero, we find the canonical complex structure $J_3$ and the pseudo-Kähler metric $g_3 =\omega_3 \circ J_3$: $$J_3= \left[ \begin {array}{cccccc} 0&-3\,\psi_{46}&-\psi_{46}&0&0&0\\ \noalign{\medskip}0&0&0&3\,\psi_{25}&\psi_{25}&0\\
\noalign{\medskip}{\psi_{46}}^{-1}&0&0&-9\,\psi_{25}&-3\,\psi_{25}&0\\ \noalign{\medskip}0&-{\psi_{25}}^{-1}&0&0&0&\psi_{46} \\ \noalign{\medskip}0&2\,{\psi_{25}}^{-1}&0&0&0&-3\,\psi_{46}\\ \noalign{\medskip}0&0&0&2\,{\psi_{46}}^{-1}&{\psi_{46}}^{-1}&0
\end {array} \right],$$ $$g_{3}=\left[ \begin {array}{cccccc}
0&0&0&-2\,{\psi_{46}}^{-1}&-{\psi_{46}}^{-1}&0\\
\noalign{\medskip}0&2\,{\psi_{25}}^{-1}&0&0&0&-3\,\psi_{46}\\
\noalign{\medskip}0&0&0&0&0&-\psi_{46}\\
\noalign{\medskip}-2\,{\psi_{46}}^{-1}&0&0&18\,\psi_{25}&6\,\psi_{25}&0\\
\noalign{\medskip}-{\psi_{46}}^{-1}&0&0&6\,\psi_{25}&2\,\psi_{25}&0\\
\noalign{\medskip}0&-3\,\psi_{46}&-\psi_{46}&0
&0&0\end {array} \right].$$
The Lie group $G_{23}$
----------------------
The Lie algebra $\mathfrak{h}_{23}$ is defined by: $[e_1,e_2] = e_5$, $[e_1,e_3] = e_6$. There are, according to [@Goze-Khakim-Med], three different symplectic structures:
$\omega_1 = e^1\wedge e^6 +e^2\wedge e^5 +e^3\wedge e^4$, $\omega_2 = e^1\wedge e^4 +e^2\wedge e^6 +e^3\wedge e^5$ and
$\omega_3 = e^1\wedge e^4 +e^2\wedge e^6 - e^3\wedge e^5$.
For the first two symplectic structures there are no compatible complex structures. For the third symplectic structure $\omega_3$, there is a set of compatible complex structures that depend on several parameters. It will be convenient to relabel the basis vectors as follows: $e_2:=e_1$, $e_3:=-e_2$, $e_1:=e_3$; then
$[e_1,e_3] = -e_5$, $[e_2,e_3] = e_6$\
and
$\omega_3 = e^1\wedge e^6 + e^2\wedge e^5+ e^3\wedge e^4.$
It is easy to see that the given Lie algebra is the semidirect product of $\mathbb{R}^4= \mathbb{R}\{e_1, e_2, e_5, e_6\}$ with $\mathbb{R} e_3$, followed by a direct product with $\mathbb{R}e_4$: $\mathfrak{g}_{23}=\mathbb{R}^4 \rtimes \mathbb{R} e_3\times \mathbb{R} e_4$.
The set of complex structures whose parameters influence the curvature acts on the invariant 2-planes $\{e_1, e_2\}$, $\{e_3, e_4\}$ and $\{e_5, e_6\}$ as follows: $$J(e_2) = \psi_{12}\, e_1 -\psi_{11}\, e_2,\quad
J(e_4) = \psi_{34}\, e_3,\quad
J(e_6) = -\psi_{12}\, e_5 -\psi_{11}\, e_6.$$
The curvature tensor depends on three parameters and has the following non-zero components: $R_{1, 2, 1}^6 =\frac{\psi_{34} (1 + \psi_{11}^2 )}{\psi_{12}}$, $R_{1, 2, 1}^5 = \psi_{11} \psi_{34}$, $R_{1, 2, 2}^5 =\psi_{12} \psi_{34}$. After omitting the index, there remains one component: $R_{1, 2, 1, 2} =-\psi_{34}$. Setting $\psi_{34}=-a$, $\psi_{12}=1$, $\psi_{11}=0$ and $\psi_{33}=0$, we find the canonical pseudo-Kähler structure with curvature $R_{1, 2, 1, 2} =a$: $$J(e_2) = e_1,\quad J(e_4) = -a\, e_3 ,\quad J(e_6) = -e_5.$$ $$g=\left[ \begin {array}{cccccc}
0&0&0&0&1&0\\
0&0&0&0&0&-1\\
0&0&{a}^{-1}&0&0&0\\
0&0&0&a&0&0\\
1&0&0&0&0&0\\
0&-1&0&0&0&0
\end {array} \right].$$
Formulas for evaluations
========================
We now present the formulas which were used for the evaluations (in Maple) of the Nijenhuis tensor and of the curvature tensor of the associated metrics. Let $e_1,\ldots,e_{2n}$ be a basis of the Lie algebra $\mathfrak g$ and $C_{ij}^k$ the structure constants of the Lie algebra in this basis: $$[e_i,e_j]=\sum_{k=1}^{2n}C_{ij}^{k}e_k. \label{strukt}$$
**1. Nijenhuis tensor.** Let $J^{k}_{i}$ be the matrix of a left-invariant almost complex structure $J$ in the basis $\{e_i\}$, $Je_i=J^{k}_{i}e_k$. The Nijenhuis tensor is defined by formula (\[Nij1\]). For the basis vectors we obtain: $N(e_i,e_j) = N_{ij}^k e_k$, $$N(e_i,e_j)= [Je_i,Je_j] -[e_i,e_j] -J[Je_i,e_j] -J[e_i,Je_j]=$$ $$=\left(J_i^l J_j^m C_{lm}^k -J_i^l J_m^k C_{lj}^m -J_j^l J_m^k C_{il}^m -C_{ij}^k \right)\,e_k,$$ $$N_{ij}^k =J_i^l J_j^m C_{lm}^k -J_i^l J_m^k C_{lj}^m -J_j^l J_m^k C_{il}^m -C_{ij}^k.$$
**2. Compatibility condition.** This is the condition that $\omega(JX,Y) + \omega(X, JY) =0$, $\forall \, X, Y\in {{\mathfrak g}}$. For the basis vectors we have: $\omega (J (e_i), e_j) + \omega (e_i, J (e_j)) =0$, $\omega (J^k_i e_k, e_j) + \omega (e_i, J^s_j e_s) =0$. $$\omega_{kj} J^k_i + \omega_{is} J^s_j=0.$$
**3. Connection components.** These are the components $\Gamma_{ij}^{k}$ in the formula $\nabla_{e_{i}} e_j =\Gamma_{ij}^{k}e_k$. For left-invariant vector fields we have: $2g (\nabla_{X} Y, Z) =g ([X, Y], Z) +g ([Z, X], Y)-g ([Y, Z], X)$. For the basis vectors we have: $$2g (\nabla_{e_{i}} e_j, e_k) =g ([e_i, e_j], e_k) +g ([e_k, e_i], e_j) +g (e_i, [e_k, e_j]),$$ $$2g_{lk} \Gamma_{ij}^{l} =g_{pk} C_{ij}^{p} +g_{pj} C_{ki}^{p} +g_{ip} C_{kj}^{p},$$ $$\Gamma_{ij}^{n}=\frac{1}{2}g^{kn}\left(g_{pk}C_{ij}^{p}+g_{pj}C_{ki}^{p} +g_{ip}C_{kj}^{p}\right).$$
**4. Curvature tensor.** The formula is: $R(X, Y)Z =\nabla_{X} \nabla_{Y} Z -\nabla_{Y}\nabla_{X}Z -\nabla_{[X,Y]}Z$. For the basis vectors we have: $R(e_i, e_j) e_k=R_{ijk}^s e_s$, $$R(e_i,e_j)e_k=\nabla_{e_{i}}\nabla_{e_{j}}e_{k}-\nabla_{e_{j}}\nabla_{e_{i}}e_{k} -\nabla_{[e_{i},e_{j}]}e_{k}.$$ Therefore: $$R_{ijk}^{s}=\Gamma_{ip}^{s}\Gamma_{jk}^{p}-\Gamma_{jp}^{s}\Gamma_{ik}^{p} -C_{ij}^{p}\Gamma_{pk}^{s}.$$
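As a sanity check (our own SymPy sketch; the paper used Maple), the four formulas above can be run on the Lie algebra $\mathfrak{g}_{23}$ with the canonical pseudo-Kähler metric from the previous subsection; the computation recovers the stated curvature component $R_{1,2,1,2}=a$ (indices are 0-based in the code):

```python
import sympy as sp

n = 6
a = sp.symbols('a', nonzero=True)

# structure constants of g_23 in the relabeled basis: [e1, e3] = -e5, [e2, e3] = e6
C = sp.MutableDenseNDimArray.zeros(n, n, n)
C[0, 2, 4], C[2, 0, 4] = -1, 1
C[1, 2, 5], C[2, 1, 5] = 1, -1

# canonical pseudo-Kaehler metric with curvature parameter a
g = sp.zeros(n, n)
g[0, 4] = g[4, 0] = 1
g[1, 5] = g[5, 1] = -1
g[2, 2] = 1/a
g[3, 3] = a
gi = g.inv()

# Gamma_ij^m = (1/2) g^{km} (g_{pk} C_ij^p + g_{pj} C_ki^p + g_{ip} C_kj^p)
Gam = sp.MutableDenseNDimArray.zeros(n, n, n)
for i in range(n):
    for j in range(n):
        for m in range(n):
            Gam[i, j, m] = sp.simplify(sum(
                sp.Rational(1, 2)*gi[k, m]
                * (g[p, k]*C[i, j, p] + g[p, j]*C[k, i, p] + g[i, p]*C[k, j, p])
                for k in range(n) for p in range(n)))

# R_ijk^s = Gamma_ip^s Gamma_jk^p - Gamma_jp^s Gamma_ik^p - C_ij^p Gamma_pk^s
def R(i, j, k, s):
    return sp.simplify(sum(Gam[i, p, s]*Gam[j, k, p] - Gam[j, p, s]*Gam[i, k, p]
                           - C[i, j, p]*Gam[p, k, s] for p in range(n)))

# lower the index: R_{1,2,1,2} = R_{1,2,1}^s g_{s,2}
R1212 = sp.simplify(sum(R(0, 1, 0, s)*g[s, 1] for s in range(n)))
print(R1212)   # a
```

Replacing the structure constants and the metric checks, in the same way, every curvature component listed in the paper.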
[999]{}
Benson C. and Gordon C.S. Kähler and symplectic structures on nilmanifold. Topology, Vol. 27, no. 4, 513-518, 1988.
Cordero L.A., Fernandez M., Gray A., Ugarte L. Nilpotent complex structures. Rev. R. Acad. Cien. Serie A. Mat. Vol. 95(1), 2001, 45–55.
Cordero L. A., Fernández M. and Ugarte L. Pseudo-Kähler metrics on six dimensional nilpotent Lie algebras. J. of Geom. and Phys., Vol. 50, 2004, 115 - 137.
Goze M., Khakimdjanov Y., Medina A. Symplectic or contact structures on Lie groups. Differential Geom. Appl. Vol. 21, no. 1, 41-54, 2004, (arXiv:math/0205290v1 \[math.DG\]).
Kobayashi S. and Nomizu K. Foundations of Differential Geometry, Vol. 1 and 2. Interscience Publ. New York, London. 1963.
Magnin L. Complex structures on indecomposable 6-dimensional nilpotent real Lie algebras. Intern. J. of Algebra and Computation, vol. 17, Nr 1, 2007, p. 77–113. (http://monge.u-bourgogne.fr/lmagnin/artmagnin1.ps)
Salamon S.M. Complex structure on nilpotent Lie algebras. J. Pure Appl. Algebra, Vol. 157, 311-333, 2001. (arXiv:math/9808025v2 \[math.DG\])
Smolentsev N.K. Canonical pseudo-Kähler metrics on six-dimensional nilpotent Lie groups. Bull. of Kemerovo State Univ. Vol. 3/1(47), 2011, P. 155–168.
Ovando G. Invariant pseudo Kaehler metrics in dimension four. J. of Lie Theory, Vol. 16 (2), 2006, 371–391. (arXiv:math/0410232v1 \[math.DG\])
[^1]: The work was partially supported by RFBR 12-01-00873-a and by Russian President Grant supporting scientific schools SS-544.2012.1
---
abstract: 'We study the best arm identification ([<span style="font-variant:small-caps;">Best-$1$-Arm</span>]{}) problem, which is defined as follows. We are given $n$ stochastic bandit arms. The $i$th arm has a reward distribution ${\mathcal{D}}_i$ with an unknown mean $\mu_{i}$. Upon each play of the $i$th arm, we can get a reward, sampled i.i.d. from ${\mathcal{D}}_i$. We would like to identify the arm with the largest mean with probability at least $1-\delta$, using as few samples as possible. We provide a nontrivial algorithm for [<span style="font-variant:small-caps;">Best-$1$-Arm</span>]{}, which improves upon several prior upper bounds on the same problem. We also study an important special case where there are only two arms, which we call the [<span style="font-variant:small-caps;">Sign</span>-$\xi$]{} problem. We provide a new lower bound of [<span style="font-variant:small-caps;">Sign</span>-$\xi$]{}, simplifying and significantly extending a classical result by Farrell in 1964, with a completely new proof. Using the new lower bound for [<span style="font-variant:small-caps;">Sign</span>-$\xi$]{}, we obtain the first lower bound for [<span style="font-variant:small-caps;">Best-$1$-Arm</span>]{} that goes beyond the classic Mannor-Tsitsiklis lower bound, by an interesting reduction from [<span style="font-variant:small-caps;">Sign</span>-$\xi$]{} to [<span style="font-variant:small-caps;">Best-$1$-Arm</span>]{}. We propose an interesting conjecture concerning the optimal sample complexity of [<span style="font-variant:small-caps;">Best-$1$-Arm</span>]{} from the perspective of instance-wise optimality.'
author:
- |
Lijie Chen Jian Li\
Institute for Interdisciplinary Information Sciences (IIIS), Tsinghua University
bibliography:
- 'team.bib'
title: On the Optimal Sample Complexity for Best Arm Identification
---
Concluding Remarks
==================
The most interesting open problem from this paper is to obtain an almost instance-optimal algorithm for [<span style="font-variant:small-caps;">Best-$1$-Arm</span>]{}, in particular to prove (or disprove) Conjecture \[conj:optimal\]. Note that for the clustered instances, and the instances where the gap entropy is $\Omega(\ln\ln n)$, we already have such an algorithm. Our techniques may be helpful for obtaining better bounds for the [<span style="font-variant:small-caps;">Best-$k$-Arm</span>]{} problem, or even the combinatorial pure exploration problem. In ongoing work, we already have partial results from applying some of the ideas in this paper to obtain improved upper and lower bounds for [<span style="font-variant:small-caps;">Best-$k$-Arm</span>]{}.
---
abstract: 'We determine the behavior of the general solution, small or large, of nonlinear first order ODEs in a neighborhood of an irregular singular point chosen to be infinity. We show that the solutions can be controlled in a ramified neighborhood of infinity using a finite set of asymptotic constants of motion; the asymptotic formulas can be calculated to any order by quadratures. These constants of motion enable us to obtain qualitative and accurate quantitative information on the solutions in a neighborhood of infinity, as well as to determine the position of their singularities. We discuss how the method extends to higher order equations. There are some conceptual similarities with a KAM approach, and we discuss this briefly.'
address:
- 'Mathematics Department The Ohio State University Columbus, OH 43210'
- 'Department of Mathematics, The University of Chicago, 5734 S. University Avenue Chicago, Illinois 60637'
- 'IRMA, Université de Strasbourg et CNRS, 67084 Strasbourg, France'
author:
- 'O. Costin, M. Huang and F. Fauvet'
title: 'Global behavior of solutions of nonlinear ODEs: first order equations'
---
Introduction
============
The point at infinity is most often an [*irregular singular point*]{} for equations arising in applications.[^1] Within this class of equations, there are essentially two types for which a global description of solutions exists: linear systems and integrable ones. However, in a stricter sense, even for some linear problems global questions such as explicit values of connection coefficients are still open. The behavior of the general solutions of [*linear*]{} ODEs has been thoroughly analyzed starting in the late 19th century, see [@Fabry] and [@Wasow] and references therein. After the pioneering work of Écalle, Ramis, Sibuya and others the description of their solutions in $\CC$ is by now quite well understood [@Ecalle; @Ecalle-book; @Balser; @Balser3; @Braaksma; @Ramis1; @Ramis2; @Duke].
[*Integrable*]{} systems provide another important class of systems allowing for global description of solutions. The ensemble of integrable systems is a zero measure set in the parameter space of general equations: a generic small perturbation of an integrable system destroys integrability. Nonetheless, integrable equations occur remarkably often in many areas of mathematics, such as orthogonal polynomials, the analysis of the Riemann-zeta function, random matrix theory, self-similar solutions of integrable PDEs and combinatorics, cf. [@Bleher],[@Deift3]–[@Deift1], [@Ablowitz; @Fokas3; @Calogero; @Conte; @Deift1], [@Fokas]–[@zak]. However, even in integrable systems, achieving global control of solutions in a [ *practical way*]{} is a challenging task, and it is one of the important aims of the emerging Painlevé project [@Painleve22].
In [*nonintegrable*]{} systems, particularly near irregular singularities, our understanding is much more limited. Small solutions are given by generalized Borel summable [*transseries*]{}; this was discovered by Écalle in the 1980s and proved rigorously in many contexts subsequently. Transseries are essentially formal multiseries in powers of $1/x^{k_i}$ and $e^{-\lambda_j x}$, and possibly $x^{-1}\log
x$; see again [@Ecalle; @Ecalle-book; @Balser; @Balser3; @Braaksma; @Ramis1; @Ramis2; @Duke] and [@OCBook]. Here $x$ is the independent variable and $\lambda_j$ are eigenvalues of the linearization with the property $\Re (\lambda_j x)>0$. In general, [*only*]{} small solutions are well understood. However, for generic nonlinear systems of higher order, small solutions form lower dimensional manifolds in the space of all solutions, see, e.g., [@Duke]. The present understanding of general nonlinear equations is thus quite limited.
We introduce a new line of approach, combining ideas from generalized Borel summability and KAM theory (see, e.g. [@Arnold]) for the analysis near infinity, chosen to be an irregular singular point, of solutions of relatively general differential equations with meromorphic coefficients. Applying the method does not require knowledge of Borel summability, transseries or KAM theory.
For small solutions, in [@Invent] it was shown that in a region adjacent to the sector where the solution, $y$, is small, $y(x)$ is almost periodic. In this sense $y$ becomes an approximately cyclic variable. In the $x$-complex plane, the singular points of $y$ are arranged in quasi-periodic arrays as well. The analysis in [@Invent] covers an angularly small region beyond the sector where $y$ is small. Looking directly at the asymptotics of $y$ beyond this region would require a multiscale approach: $y$ has a periodic behavior–the fast scale, with $O(1/x)$ changes in the quasi-period. Multiscale analysis is usually a quite involved procedure (see, e.g., [@Bender]).
It is natural to make a hodograph transformation in which the dependent and independent variables are switched. As mentioned above, in the “nontrivial” regions, the dependent variable is an almost cyclic one. The setting becomes somewhat similar to a KAM one: there is an underlying completely integrable system, and one looks for persistence of invariant tori. Adiabatic invariants are simply the conserved quantities associated with these tori. Evidently there are many differences between the ODE setting and the KAM one, for instance the fact that the small parameter is “internal”, $1/x$.
In this work we restrict the analysis to first order equations, mainly to ensure a transparent and concrete analysis. In theory, however, the method generalizes to equations of any order, and we touch on these issues at the end of the paper.
We look at equations which, after normalization, are of the form $dy/dx=F(z,y)$, $z=1/x$, with $F$ bi-analytic at ${\bf 0}$ and $F_y({\bf 0})=1$.
We show that in any sector on Riemann surfaces towards infinity, the [*general*]{} solution is represented by transseries and/or, in an implicit form, by some constant of motion. In fact, on large circles around $x=0$, the solution cycles among transseries representations and ones in which constants of motion describe it accurately. The regions where these behaviors occur overlap slightly to allow for asymptotic matching (cf. Corollary \[C1\]). The connection between the large $x$ behavior and the initial condition is relatively easy to obtain.
Let $\beta=F_{zy}({\bf 0})$. The constants of motion have asymptotic expansions of the form $$\label{eq:eqinit}
C(x,y)\sim x-\beta\log x+F_0(y)+x^{-1}F_1(y)+\cdots+x^{-j}F_j(y)+\cdots, \ \ x\to\infty
.$$ Clearly, under the assumptions above, the solution $y$ can be obtained asymptotically from and the implicit function theorem. The requirement that $C$ is to leading order of the form $f_1(x)+f_2(y)$, determines $C$ up to trivial transformations, see Theorem \[T1\] and Note \[N1\].
The functions $F_j$ are shown in the proof of Theorem \[T1\] to solve first order autonomous ODEs, and thus they can always be calculated by quadratures.
To illustrate this, we use a nonintegrable Abel equation, $$\label{eq:Abel}
u'=u^3-t
.$$
We note that there is no consensus on how nonintegrability should be defined; for (\[eq:Abel\]), it is the case that the equation passes no criterion of integrability, including the poly-Painlevé test, and that there are no solutions known, explicit or coming from, say, some associated Riemann-Hilbert reformulation.
The Abel equation has the normal form (see §\[sable\], where further details about this example are given) $$\begin{gathered}
\label{tr12}
y'+3y^3-\frac{1}{9}+\frac{1}{5x}y=0
.\end{gathered}$$ Regions of smallness are those for which $y$ approaches a root of $3y^3-1/9$; in these regions, $y$ is given by a transseries [@Invent]. Otherwise, $y$ has an implicit representation of the form $$\begin{gathered}
\label{newton0}
y=\frac{1}{3}\exp\bigg(-C-x+\frac{1}{5}\log x+\left(\sqrt{3}-\frac{2\sqrt{3}}{5x}\right)\arctan \left(\frac{6y+1}{\sqrt{3}}\right)\\
-\log(3y-1)+\frac{1}{2}\log(9y^2+3y+1))+\frac{1}{x}\left(\frac{27y^2}{5(1-27y^3)}+\frac{1}{25}+O(1/x)\right)\bigg)+\frac{1}{3}
,\end{gathered}$$ obtained by inverting an appropriate constant of motion $C$ (see ); for the values of $\beta, F_0, F_1$ see §\[S4\].
While in a numerical approach to calculating solutions the precision deteriorates as $x$ becomes large, the accuracy of instead [*increases*]{}. In examples, even when is truncated to two orders, is strikingly close to the actual solution even for relatively small values of the independent variable, see e.g. Figure \[fig:abel4\].
The procedure allows for a convenient way to link initial conditions to global asymptotic behavior, see e.g. .
Solvability versus integrability
--------------------------------
First order equations for which the associated second order autonomous system is Hamiltonian are in particular [*integrable*]{}. Indeed, by their definition, there is a globally defined smooth $H$ with the property that $\dot{x}\frac{\partial
H}{\partial x}+\dot{y}\frac{\partial H}{\partial y}=0$, that is $H(x(t),y(t))=const$, providing a closed-form, implicit, global representation of $y$. While the differential equation provides “infinitesimal” information, $H$, effectively an integral, provides a global one.
Conversely, clearly, if there exists an implicit solution of the equation or indeed a smooth enough conserved quantity, the equation comes from a Hamiltonian system.
What we provide is a finite set of matching conserved quantities, analogous to an atlas of overlapping maps projecting the differential field onto the trivial one, $H'=0$. They give, in a sense, a [*foliation of the phase space*]{} allowing for global control of solutions. With obvious adaptations, this picture extends to higher order systems. In integrable systems there is just one single-valued map and the field is globally rectifiable. In general, the conserved quantities may be branched and not globally defined.
Normalization and definitions {#Sec11}
-----------------------------
Many equations of the form $y'=F(y,1/x)$ with $F$ analytic for small $y$ and small $1/x$ can be brought to the normal form $y'=P_0(y)+Q(y,1/x)$ by systematic changes of variables, see [*e.g.*]{} [@Duke], [@OCBook].
The assumptions are that $Q(y,z)$ is entire in $y$ and analytic in $z$ for small $z$, and $O(y^2,yz^2,z)$ for small $y$ and $z$ and that $P_0$ is a polynomial. We assume that the roots of $P_0$ are [*simple*]{}. It will be seen from the analysis that a more general $P_0$ can be accommodated. We thus write the equation as $$\label{eq:eqy0}
y'=\sum_{k=0}^{\infty}\frac{P_k(y)}{x^k}=Q_1(y,1/x)=P_0(y)+Q(y,1/x)
.$$
\[D1\]
\[def1\] $\bullet$ A formal constant of motion of for $x\to \infty$ in an unbounded domain $\mathcal{D}\subset\mathbb{C}^2$ or on a Riemann surface covering it, and in which to leading order in $1/x$ the variables $x$ and $y$ are separated additively is a formal series $$\label{eq:eq0}
\tilde{C}(y,x)=A(x) +F_0(y)+\frac{F_1(y)}{x}+\cdots+\frac{F_j(y)}{x^j}+\cdots$$ such that we have $$\frac{d}{dx}\tilde{C}(y(x),x)=O(x^{-\infty})$$ in the sense that, for any $j$, $F_j$ and $H_j$ defined by $$\label{eq:defC}
\frac{H_{j+1}(x,y)}{x^{j+1}}:= A'(x) +D_x \left(F_0(y)+\frac{F_1(y)}{x}+\cdots+\frac{F_j(y)}{x^j}\right)$$ are uniformly bounded in $\mathcal{D}$; here $D_x$ is the derivative along the field, $$D_x F(x,y)=\nabla F\cdot
(1,Q_1)=F_x(x,y)+F_y(x,y)Q_1(y,1/x).$$ See also below.
$\bullet$ An actual constant of motion associated to $\tilde{C}$ in $\mathcal{D}\subset\CC^2$ is a function $C$ so that $C(y,x)\sim\tilde{C}(y,x)$ as $x\to\infty$ and $\frac{d}{dx}C(y(x),x)=0$ for all solutions in $\mathcal{D}$.
\[N1\] It will be seen that there is rigidity in the form of the constant of motion: if the variables in $\tilde{C}$ are, to leading order, separated additively as in , then, up to trivial transformations, we must have $$\label{eq:eqA}
A(x)=-x+a\log x$$ where $a$ is the same as the one in the transseries expansion of the solution, see Proposition \[trans\].
Finding the terms in the expansion of $\tilde{C}$
-------------------------------------------------
Using (\[eq:eqA\]) and truncating at an arbitrary $n>2$, let $$\label{eq:def2c}
{C}_n(y,x)=:-x+a \log x+F_0(y)+\sum_{k=1}^{n}\frac{F_k(y)}{x^k}
.$$ We can check that $D_x C_n$ satisfies $$\begin{gathered}
\label{formalexpans}
D_x C_n=-1+P_0F_0'+\frac{a+P_1F_0'+P_0F_1'}{x}\\
+\sum_{k=2}^{n}\frac{(1-k)F_{k-1}+\sum_{j=0}^{k}P_{k-j}F'_{j}}{x^k}
+\frac{-nF_n+\sum_{j=0}^{n}\sum_{k=0}^{\infty}P_{n+k+1-j}F'_{j}x^{-k}}{x^{n+1}}\end{gathered}$$ (cf. ) where the numerator of the last term is $H_{n+1}$ by definition. In order for $\tilde{C}$ to be a formal constant of motion, the coefficients of $x^{-j},j=0,1,2,\ldots$ must vanish, giving $$\begin{aligned}
\label{refF}
F_0'(y)&=\frac{1}{P_0(y)}\\
F_1'(y)&=-\frac{a+P_1(y)F_0'(y)}{P_0(y)} \label{refF1}\\
\label{dfk}
F_k'(y)&=\frac{(k-1)F_{k-1}(y)-\sum_{j=0}^{k-1}P_{k-j}(y)F'_{j}(y)}{P_0(y)}\;\quad(2\leq k \leq n)
.\end{aligned}$$ It follows in particular that $F'_0\ne 0$ and $F_0$ is bounded in $\mathcal{D}$. In solving the differential system, the constants of integration are chosen so that $F_k$ are indeed uniformly bounded in $y$, see .
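To make the recursion (\[refF\])-(\[dfk\]) concrete, the following sympy sketch (our illustration, not an example from the paper) takes the toy field $P_0(y)=-y$, $P_1(y)=1$, i.e. $y'=-y+1/x$, computes $a$, $F_0$, $F_1$ by residues and quadratures, and checks that the truncated constant $C_1$ is conserved up to $O(x^{-2})$ along the field:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# toy field (not from the paper): y' = P0 + P1/x = -y + 1/x
P0, P1 = -y, sp.Integer(1)

# a from the boundedness condition, via residues at the single root y = 0
a = -sp.residue(P1/P0**2, y, 0) / sp.residue(1/P0, y, 0)

F0 = sp.integrate(1/P0, y)                # eq. (refF):  F0' = 1/P0
F1 = sp.integrate(-(a + P1/P0)/P0, y)     # eq. (refF1): F1' = -(a + P1 F0')/P0

C1 = -x + a*sp.log(x) + F0 + F1/x
# derivative along the field, D_x C1, which should be O(1/x^2)
DxC = sp.diff(C1, x) + sp.diff(C1, y)*(P0 + P1/x)
```

For this toy field one finds $a=0$, $F_0=-\log y$, $F_1=1/y$, and $D_x C_1=-(1/y+1/y^2)/x^2$, consistent with the bound on the remainder term.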
Solving for $y(x)$
------------------
The expression $C_n$ is an approximate constant of motion; we thus can find an approximate solution $y_n$ by fixing $C_n=K$. We then write $$\label{eq:eqyn}
G(y;K):= F_0(y)-K-x+a\log x+\sum_{k=1}^n \frac{F_k(y)}{x^k}=0$$ and we note that in the domain relevant to us ($\mathcal{S}_1$, see Theorem \[regcom\] below) the analytic implicit function theorem applies since $$\label{eq:difK}
\frac{\partial G}{\partial y}=\frac{1}{P_0(y)} +\frac{1}{x}E_1(y,x)$$ where $P_0$ is bounded away from $0$ in our domain, and $E_1$ is bounded in $\mathcal{D}$ by , since $y$ is bounded. Writing $y=y_n$ in and using the analytic implicit function theorem, treating $1/x$ as a small parameter, we get
$$\label{eq:eqy}
y_n=G_0(x;K)+\frac{G_1(x;K)}{x}+\cdots+\frac{G_n(x;K)}{x^n} + \frac{\tilde{H}_n(x;K)}{x^{n+1}}$$
where $\tilde{H}_n$ and the $G_j$’s are bounded. In the same way it is checked that $y_n$ is solution of up to corrections $R_n(x;K)/x^{n+1}$, that is, $y_n'-Q_1(y_n,1/x)=-R_n(x;K)/x^{n+1}$ where $R_n$ is bounded.
Let $p_1,\ldots,p_m$ be the distinct roots of $P_0$.
Let $\mathcal{R}_y$ be the universal cover of $Y=\mathbb{C}\backslash\{p_1,...,p_m\}$. Let $\pi:\mathcal{R}_y\rightarrow Y$ be the covering map.
\[Def4\] $\bullet$
An [**elementary $\bf y$-path**]{} of type
$$\alpha=(\alpha_1,...,\alpha_m,\alpha_{m+1},\ldots,\alpha_{mk})\in\mathbb{Z}^{mk}, k\in\NN$$ is a piecewise smooth curve $\gamma$ in $\mathcal{R}_y$ whose image under $\pi$ turns $\alpha_{1}$ times around $p_1$, then $\alpha_{2}$ times around $p_2$, and so on, $\alpha_{m}$ times around $p_m$, then again $\alpha_{m+1}$ times around $p_1$, etc. Note that $\alpha$ is in fact an element of the fundamental group.
$\bullet$ A [**$\bf y$-path of type $\boldsymbol\alpha$** ]{} is a smooth curve $\gamma$ obtained as an arbitrary forward concatenation of elementary $
y$-paths of type $\alpha$. More precisely, a $ y$-path of type $\alpha$ is a map $\gamma:[0,\infty)\to \mathcal{R}_y$ so that, for any $N\in \mathbb{Z}^+$, $\gamma|_{[N,N+1]}$ is an elementary $
y$-path of type $\alpha$. We will naturally denote by $\gamma|_{[0,a]}$ subarcs of $\gamma$. We see that $y$-paths are compositions of [*closed loops*]{} in the [*complex $y$ domain.*]{}
$\bullet$ $\mathcal{S}_r$ is [**a regular domain of type $\boldsymbol\alpha$**]{} or an $R$-domain of type $\alpha$, if it is an unbounded open subset of $\mathcal{R}_y$ that contains only images of $
y$-paths of type $\alpha$. Thus the image of any unbounded $y$-path of type $\alpha'\neq \alpha$ is not a subset of $\mathcal{S}_r$.
[**Note.**]{} In our results we only need $ y$-paths with the additional property that $x(y)\to\infty$ along the path.
To take a trivial illustration, in the equation $y'=y$ an example of a $ y$-path along which $x\to \infty$ is $t\mapsto \exp(it), t\ge 0$.
Main results
============
Existence of formal constants of motion
---------------------------------------
Under the assumptions at the beginning of §\[D1\] we have
\[regcom\] Let $\mathcal{S}_y$ be an $R$-domain of type $\alpha$, and $$\mathcal{S}_1=\{y\in \mathcal{S}_y: |\pi(y)|< M_0 ~{\rm and}
~|\pi(y)-p_k|>\epsilon ~{\rm for~all~}k\}$$ where $M_0>0$ is an arbitrary constant. Let $\mathcal{C}$ be the union of $m$ circular paths, the $k$-th of which winds $\alpha_k\in\ZZ$ times around the root $p_k$, $k=1,\ldots,m$, chosen so that $$\label{eq:eqnontr}
\int_{\mathcal{C}}\frac{1}{P_0(y)}dy\neq0
.$$ Then, if $R_0$ is large enough, there exists a formal constant of motion in $$\mathcal{D}_1=\{(x,y):|x|>R_0,y\in\mathcal{S}_1\}$$ of the form . The terms $F_k$ in the expansion of $\tilde{C}$ in can be calculated by quadratures.
Actual constants of motion are obtained in Theorem \[T1\].
Consider now a set $\mathcal{S}$ of curves $\gamma$, $|\gamma(t)|\to\infty$ as $t\to\infty$, with the property that for all $t_1<t_2$ and all $n$ (which is in fact equivalent to for $n=0,1$ ) $$\label{eq:restrgam}
\left| \Re \displaystyle \int_{\gamma(t_1)}^{\gamma(t_2)} \frac{\partial}{\partial y}Q_1(y,\gamma(t))|_{y=y_n(\gamma(t))}\gamma'(t)dt\right|\leqslant b\log(|\gamma(t_2)/\gamma(t_1)|+1)
,$$ where $b>0$ is a constant, and such that there is an $M$ so that for all $n$ we have $|y_n|<M$ along $\gamma$. Here $M$ can be chosen large if $x$ is large. Note that $\mathcal{S}$ contains the curves $\gamma(t)$ so that $y_n(\gamma(t))$ is an $\alpha$-path. Indeed, by , in this case, the integrand in is of the form $\frac{P_0'(y)}{P_0(y)}dy+O(1)\frac{d\gamma(t)}{\gamma(t)}$ and hence the integral equals $2\pi i N +O(\log(|N|+1))$ for large $N$ where $N$ is the number of loops.
\[T1\] Assume $\tilde{C}$ in is a formal constant of motion in a region $\mathcal{D}=\mathcal{S}\cap \mathcal{D}_1$. Then there exists an actual constant of motion $C=C(x,y)$ defined in the same region, so that $C\sim \tilde{C}$ as $x\to\infty$.
Regions where $P_0(u)$ is small
-------------------------------
Assume $x_0$ is large and $|P_0(y(x_0))|<\epsilon$ is sufficiently small. This means that for some root $r_k$ of $P_0$ we have $|y(x_0)-r_k|<\epsilon_1$ where $\epsilon_1$ is also small. Without loss of generality we can assume that $r_k=0$ and $x\in \RR^+$ since the change of variables $y_1=y-r_k$, $x=x_1 e^{i\phi}$ does not change the form of the equation. Assume also that after normalization the stability condition $\Re P_0'(0)<0$ holds. Again without loss of generality, by taking $y_2=\alpha y_1$ we can arrange that $P_0'(0)=-1$. The new function $Q$ in will have the form $y^2Q_1(y,1/x)+x^{-2}Q_2(y,1/x)$ where $Q_1$ and $Q_2$ are analytic for small $y$ and $1/x$. As a result, the normalized equation assumes the form $$\label{nf}
y'=-y+f_0(x)+\frac{ay}{x}+y^2Q_1(y,1/x)+x^{-2}Q_2(y,1/x)
.$$ We also arrange that $f_0=O(x^{-M})$ as $x\to\infty$, for suitably large $M$; this is possible through a change of variables of the form $y_2=y_3+\sum_{k=1}^M c_k
x^{-k}$, where the $c_k$’s are the coefficients of the formal power series solution for small $y$.
\[trans\] \[see [@Duke] Theorem 3\] Any solution of that is $o(1)$ as $x\to\infty$ along some ray in the right half plane can be written as a Borel summed transseries, that is $$\label{eq:eqtrans}
y(x)=\sum_{k=0}^{\infty}C^k x^{ka+1}e^{-kx} y_k$$ where $y_k$ are generalized Borel sums of their asymptotic series, and the decomposition is unique. There exist bounds, uniform in $n$ and $x$, of the form $|y_n(x)|<A^n$, and thereby the sum converges uniformly in a region $\mathcal{R}$ that contains any sector $\mathcal{S}_c:=\{x:|\arg\, x|<c<\pi/2\}$. Note that Theorem 3 in [@Duke] applies to general $n$-th order ODEs.
\[ptranss\]
\(i) If, after the normalization above, $y(x_0)$ is small (estimates can be obtained from the proof), then $y$ is given by .
\(ii) $C(y(x),x)$, obtained by inversion of (\[eq:eqtrans\]) for large $x$ in the right half plane and small $y$, is a constant of motion defined for all solutions for which $y(x_0)$ is small (cf. (i)).
\(i) We write the differential equation in the equivalent integral form $$\begin{gathered}
\label{eq:intfor}
y=F_0(x)+y_0 e^{-(x-x_0)}(x/x_0)^a\\ + e^{-x}x^a \int_{x_0}^x e^s s^{-a} \left[ y^2(s)Q_1(y(s),1/s)+s^{-2}Q_2(y(s),1/s) \right]ds
,\end{gathered}$$ where $F_0(x)=O(x^{-M})$ ($M$ can be chosen arbitrarily large in the normalization process, [@Duke]) and $F_0(x_0)=0$. It is straightforward to show that (\[eq:intfor\]) is contractive in the norm $\|y\|=\sup_{x\in\mathcal{S}_c}|x^{M-1} y(x)|$ (see the beginning of this section) and thus it has a unique solution in this space. Hence, by uniqueness, the solution of the ODE with $y(x_0)=y_0$ has the property $y(x)\to 0$ as $x\to\infty$. The rest of (i) now follows from [@Duke].
\(ii) We see from Proposition \[trans\] that $y(x;C)$ is analytic in a domain of the form $\mathcal{S}_c\times \mathbb{D}_\rho$ (As usual, $\mathbb{D}_\rho$ denotes the disk of radius $\rho$.) We look at the rhs of as a function $H(x,C)$. It follows from [@Duke] that $y_1(x)=x^{-1}(1+o(1/x))$. By uniform convergence, we clearly have $$\label{eq:difh}
\frac{\partial H}{\partial C}=\sum_{k=0}^{\infty}kC^{k-1} x^{ka+1}e^{-kx} y_k=e^{-x}x^a(1+o(1))\ne 0
.$$ The rest follows from the implicit function theorem.
As a result of Theorem \[T1\] and Proposition \[ptranss\] we have the following:
\[C1\] If $G_0$ in approaches a root of $P_0$ and $x$ is large enough, then $y$ enters a transseries region, where the new constant is given, after normalization, by Proposition \[ptranss\] (ii); thus the constants of motion in different regions match.
Proofs and further results
==========================
Proof of Theorem \[regcom\]
---------------------------
Let $(x_0,y_0)\in\mathcal{D}_1$. Recalling (\[formalexpans\]), we see that has the solution $$F_0(y)=\int_{y_0}^y\frac{1}{P_0(s)}ds+c_0$$ (we take $c_0=0$ since it can be absorbed into the constant of motion). Eq. gives $$\label{aaa}F_1(y)=f_1(y)+c_1:=-\int_{y_0}^y\frac{a+\frac{P_1(s)}{P_0(s)}}{P_0(s)}ds+c_1,$$ where to ensure boundedness of $F_1(y)$ as the number of loops $\to\infty$, we let $$a=-\frac{\int_{\mathcal{C}}\frac{P_1(y)}{P_0(y)^2}dy}{\int_{\mathcal{C}}\frac{1}{P_0(y)}dy}$$ and $c_1$ is determined to ensure boundedness of $F_2$ (cf. ). Inductively we have $$\label{fk}
F_{k+1}(y)=\int_{y_0}^{y}\frac{k(f_k(s)+c_k)-\sum_{j=0}^{k}P_{{k+1}-j}(s)F'_{j}(s)}{P_0(s)}ds+c_{k+1}=:f_{k+1}(y)+c_{k+1}$$ for $2\leq k+1\leq n$, and, to ensure boundedness of $F_{k+1}(y)$ as the number of loops $\to\infty$ we need to choose $$\label{gk}
c_{k}=\dfrac{\int_{\mathcal{C}}\dfrac{-kf_{k}+\sum_{j=0}^{k}P_{k+1-j}(y)f'_{j}(y)}{P_0(y)}dy}{k\int_{\mathcal{C}}\dfrac{1}{P_0(y)}dy}$$ for $1\leq k\leq n-1$.
It is clear by induction that every singularity of $F_k(y)$ is a root of $P_0$. To complete the proof we need to show that the $F_k$’s are bounded in $\mathcal{D}_1$:
Assume $y\in\mathcal{S}_1$. For $\deg(P_0)\geq1$ and $1\leq k\leq n$ we have $$|F_k'(y)|\lesssim k!\ \ \ \ \text{and}\ \ \ \ |F_k(y)|\lesssim k!(|y|+1)$$ where, as usual, $\lesssim $ means $\le $ up to an irrelevant multiplicative constant.
We prove the lemma by induction on $k$. Note that in (\[aaa\]) and (\[fk\]) the integration paths can be decomposed into finitely many circular loops $\mathcal{C}$ and a ray, slightly deformed around possible singularities, which implies $$|F_1(y)|\lesssim \log|y|+1\lesssim |y|+1$$ and $$|F_k(y)|\lesssim \left|\int_{y_0}^{y}|F_k'(s)|ds\right|$$ where the integration path is a straight line (possibly bent as above).
We see from (\[dfk\]) that $$|F_k'(y)|\lesssim \frac{(k-1)|F_{k-1}(y)|}{|P_0(y)|}+\sum_{j=0}^{k-1}|F_j'(y)|\lesssim \frac{(k-1)|F_{k-1}(y)|}{|y|+1}+\sum_{j=0}^{k-1}|F_j'(y)|.$$ The conclusion then follows by induction. Note that the last term of satisfies $$\left|-nF_n+\sum_{j=0}^{n}\sum_{k=0}^{\infty}P_{n+k+1-j}F'_{j}x^{-k}\right|\lesssim (n+1)!(|y|+1)|P_0(y)|.$$
Proof of Theorem \[T1\]
-----------------------
Let $y(x;K)=y_n(x;K)+\delta(x;K)$, where $y_n$ is given in . We seek $\delta$ so that $y$ is an exact solution of in $\mathcal{D}$.
Let $\phi(y,\delta,x)$ be the polynomial satisfying $Q_1(y+\delta,x)-Q_1(y,x)=Q_{1,y}(y,x)\delta+\delta^2 \phi(y,\delta,x)$ where $Q_{1,y}(y,x):=\frac{\partial Q_1(y,x)}{\partial y}$. We obtain $$\label{eq:del}
\delta'-\frac{b\delta}{x}-\frac{\partial Q_1(y,x)}{\partial y}\delta=\frac{R(x;K)}{x^{n+1}}-\frac{b\delta}{x}+\phi(y_n,\delta,x)\delta^2=:E(x;\delta(x);K)
,$$ where $R=:R_n$ is defined after ; both $R$ and $\phi$ are, by assumption, bounded. In integral form, reads $$\label{eq:deli}
\delta(x)=\int_{\infty}^x \frac{x^b}{s^b}e^{\int_{s}^xQ_{1,y}(y_n(t),t)dt} E(s;\delta(s);K)ds$$ where the integrals are taken along curves in $\mathcal{D}$. Using we see that (\[eq:deli\]) is contractive in the norm $\|\delta\|=\ds \sup_{|x|\geqslant |x_1|;
x\in\mathcal{D}}|x|^{n}|\delta(x)|$ in an arbitrarily large ball, if $|x_1|$ is large enough and $n>b2^{b+1}$.
Thus has a unique solution and, of course, $\delta(x)$ is the limit of the Picard like iteration $$\begin{gathered}
\label{eq:eqar}
\delta_0=\int_{\infty}^x \frac{x^b}{s^b} e^{\int_{s}^xQ_{1,y}(y_n(t),t)dt} \frac{R(s;K)}{s^{n+1}}ds\\
\delta_1=\int_{\infty}^x \frac{x^b}{s^b} e^{\int_{s}^xQ_{1,y}(y_n(t),t)dt} E(s;\delta_0(s);K) ds\\
etc.\end{gathered}$$ By $\delta$ is a smooth function depending on $(x, K)$ only, and $\delta=O(x^{-n})$. Smoothness is shown as usual by bootstrapping the integral representation .
Now we have, by , $\partial_K y_n(x;K)=P_0(y_n)(1+o(1))$. We can easily check that $\partial_K\delta(x,K)=O(x^{-n})$. This is done using essentially the same arguments employed to check contractivity of the integral equation for $\delta$ in the equation in variations for $\delta_K$, derived by differentiating with respect to $K$. We use the implicit function theorem to solve for $K$, giving $K=K(x,y)$, a smooth function of $(x,y)$. It has the following properties: $K(x,y(x))$ is by construction constant along admissible trajectories and by straightforward verification, i.e. comparing $K$ with $\tilde{C}$, we see that it is asymptotic to $\tilde{C}$ up to $O(x^{-n})$. It is known that if a function differs from the $n$th truncate of its series by $O(x^{-n})$ for large $n$, then in fact the difference is $o(x^{-n})$ (cf. [@OCBook] Proposition 1.13 (iii)).
Position of singularities of the solution
-----------------------------------------
It is convenient to introduce constants of motion specific to singular regions; they provide a practical way to determine the position of singularities, to all orders.
We define a [**simple singular solution path**]{} $\gamma(s):[0,1)\rightarrow \mathcal{R}_y$ to be a piecewise smooth curve whose projection $\pi(\gamma([0,1)))\subset \CC$ is unbounded but turns around every $p_k$ only finitely many times.
A [**simple singular solution domain**]{} $\mathcal{S}_s$ is the homotopy class of any simple singular solution path, in the sense that any two unbounded paths in $\mathcal{S}_s$ can be continuously deformed into each other without passing through any $p_k$.
\[sincom\] Let $m_0=\deg(P_0)\geq 2$, $\mathcal{S}_s$ be a simple singular solution domain, and $\mathcal{D}_2=\{(x,y):|x|>R,y\in\mathcal{S}_s, ~{\rm and} ~|y-p_k|>\epsilon ~{\rm for~all~}k\}$. Assume that $$\frac{|P_k(y)|}{|P_0(y)|}\lesssim |y|^{-q}$$ for large $y$, for some $q\geq 0$ and all $k\geq 1$. Note that this need only hold in $\mathcal{S}_s$, which could be an angular region.
Then there exists in $\mathcal{D}_2$ a formal constant of motion of the form $$\label{eq:com1}
\tilde{C}(y,x)=x+F_0(y)+\frac{F_1(y)}{x}+\cdots+\frac{F_j(y)}{x^j}+\cdots
,$$ where $F_k(y)$ are single valued as $y\to\infty$. Furthermore, any simple singular solution path passing through some arbitrary $(x_0,y_0)$ tends to a singularity, whose position $x_{sing}$ satisfies $$\label{sing}
x_{sing}= C_n(y_0,x_0)+O\left(\frac{1}{x_0^{n+1}}\right)$$ for all $n\in\mathbb{N}$, where $C_n$ is $\tilde{C}$ truncated to $x^{-n}$.
Moreover, if there are only finitely many nonzero $P_k$, then there exists in $\mathcal{D}_2$ a true constant of motion of the form (\[eq:com1\]), i.e. the sum is convergent for large $|x|$.
The proof is similar to that of Theorem \[regcom\].
In order for $\tilde{C}$ to be a formal constant of motion, we must have $$\begin{aligned}
F_0'(y)&=-\frac{1}{P_0(y)}\\
\label{dfk1}
F_k'(y)&=\frac{(k-1)F_{k-1}(y)-\sum_{j=0}^{k-1}P_{k-j}(y)F'_{j}(y)}{P_0(y)}\;\quad(1\leq k\leq n).\end{aligned}$$
We solve successively for the $F_k$ and obtain $$F_0(y)=\int_{\infty}^y\frac{1}{P_0(s)}ds$$ where the integration path lies in $\mathcal{S}_s$. Clearly $F_0$ is bounded and single valued as $y\to\infty$.
Inductively we have $$\label{fk1}
F_k(y)=\int_{\infty}^{y}\frac{(k-1)F_{k-1}(s)-\sum_{j=0}^{k-1}P_{k-j}(s)F'_{j}(s)}{P_0(s)}ds$$ for $1\leq k\leq n$.
To prove the rest of the proposition, we need the following lemma:
Assume that $y\in\mathcal{S}_s$. For $1\leq k\leq n$ we have $$|F_k'(y)|\lesssim \frac{k!}{|y|^{m_0+q}}$$ $$|F_k(y)|\lesssim \frac{k!}{|y|^{m_0+q-1}}$$ as $y\rightarrow\infty$.
Furthermore, if $P_k=0$ for $k>k_0>0$, then $$|F_k'(y)|\lesssim \frac{c^k}{|y|^{m_0+q}}$$ $$|F_k(y)|\lesssim \frac{c^k}{|y|^{m_0+q-1}}.$$
The estimates are obtained by induction on $k$. Note that (\[dfk1\]) implies $$|F_k'(y)|\lesssim \frac{(k-1)|F_{k-1}(y)|}{|y|^{m_0}}+|y|^{-q}\sum_{j=0}^{k-1}|F'_{j}(y)|$$ provided that the assumptions of the lemma hold for $1\leq j\leq k-1$.
If $P_k=0$ for $k>k_0>0$, we again show the lemma by induction.
Assume that for $0<l\leq k$ we have $$|F_l'(y)|\leq(c_0 k_0)^l \sum_{j=1}^{l+1}\binom {l}{j-1}|y|^{-1+j(1-m_0)-q}$$ (this is obviously true for $l=1$).
This implies $$|F_k(y)|\leq(c_0 k_0)^k\sum_{j=1}^{k+1}\binom {k}{j-1}\frac{|y|^{j(1-m_0)-q}}{j(m_0-1)+q}$$
Thus it follows from (\[dfk1\]) that
$$F_{k+1}'(y)=\frac{kF_{k}(y)-\sum_{j=\max\{k-k_0+1,0\}}^{k}P_{k+1-j}(y)F'_{j}(y)}{P_0(y)}$$ where, by the induction assumption, the first term satisfies the estimate $$\begin{gathered}
\left|\frac{kF_{k}(y)}{P_0(y)}\right|\leq c_1 k \left|\frac{F_{k}(y)}{y^{m_0}}\right|\\
\leq c_0^k k_0^{k+1} c_1\sum_{j=2}^{k+2}\frac{(k+1)\binom {k}{j-2}}{(j-1)(m_0-1)}|y|^{-1+j(1-m_0)-q}\\
\leq c_0^k k_0^{k+1} c_1 \sum_{j=2}^{k+2}\binom {k+1}{j-1}|y|^{-1+j(1-m_0)-q}\end{gathered}$$ where $c_0>1+c_1$. Note that the last inequality follows from $$(k+1)\binom {k}{j-2}=\binom {k+1}{j-1}(j-1).$$ The second term is easy to estimate, since it is clearly bounded by $$k_0 (c_0 k_0)^k \sum_{j=1}^{k+1}\binom {k}{j-1}|y|^{-1+j(1-m_0)-q}.$$ Since $c_1$ is fixed, we can assume that $c_0>1+c_1$, and we have $$|F_{k+1}'(y)|\leq(c_0 k_0)^{k+1} \sum_{j=1}^{k+2}\binom {k+1}{j-1}|y|^{-1+j(1-m_0)-q}.$$ This shows the second part of the lemma.
Now since $$|D_x C_n|=\Big|\frac{-nF_n+\sum_{j=0}^{n}\sum_{k=0}^{\infty}P_{n+k+1-j}F'_{j}x^{-k}}{x^{n+1}}\Big|
\lesssim \frac{|P_0(y)|}{|x|^{n+1}|y|^{m_0+q}}$$ (cf. (\[formalexpans\])), the estimate for $x_{sing}$ follows immediately from integrating $D_xC_n$ from $x_0$ to $x_{sing}=C_n(\infty,x_{sing})$ along the simple singular solution path.
The condition $$\frac{|P_k(y)|}{|P_0(y)|}\lesssim |y|^{-q}$$ is not the most general one for which there exists a formal constant of motion in a simple singular domain. However, this condition is frequently satisfied by ODEs that occur in applications (see §\[sable\]). In such cases we can easily use (\[sing\]) to find the position of the singularity (see e.g. (\[asing\])).
Example: the nonintegrable Abel equation {#sable}
=========================================
To illustrate how to obtain information about the solution of a first order ODE using Theorem \[regcom\] and Proposition \[sincom\], we take as an example the nonintegrable Abel equation . Normalization is achieved by the transformation $x=-(9/5)A^2 t^{5/3}$, $A^3=1$, $u(t)=A^{3/5}(-135)^{1/5}x^{1/5}y(x)$ [@Invent], yielding $$\label{abel}
y'=-3y^3+\frac{1}{9}-\frac{y}{5x}.$$
Obviously (\[abel\]) satisfies the assumptions in Theorem \[regcom\] and Proposition \[sincom\], with $P_0(y)=-3y^3+\ds\frac{1}{9}$ and $P_1(y)=-\ds\frac{y}{5}$.
The three roots of $P_0$ are $\ds\frac{1}{3},~\ds\frac{(-1)^{2/3}}{3}$, and $\ds\frac{(-1)^{4/3}}{3}$. It is known [@Invent] that there exists a solution in the right half plane $\mathbb{H}$ that goes to the root $\ds\frac{1}{3}$ as $x\to\infty$. Similarly, there are solutions that go to the other two roots in other regions, which we will explore in §\[phase\]. In those cases, the behavior of the solution follows from Proposition \[trans\] (see also [@Invent]). However, there are also solutions that do not go to any of the three roots. In these cases, the formal constant of motion will be a useful tool to describe quantitatively the behavior of the solution.
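To see the approach to a root numerically, one can integrate (\[abel\]) along the positive real axis. The sketch below (with the hypothetical initial condition $y(1)=0.5$, our choice, not from the paper) uses a standard fourth order Runge-Kutta step; linearizing about the root shows that the solution approaches $\frac13$ like $\frac13-\frac{1}{15x}$ to leading order:

```python
def abel_rhs(x, y):
    # normalized Abel equation (abel): y' = -3 y^3 + 1/9 - y/(5x)
    return -3*y**3 + 1/9 - y/(5*x)

def rk4(f, x0, y0, x1, n):
    # classical fourth-order Runge-Kutta with n uniform steps
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h*k1/2)
        k3 = f(x + h/2, y + h*k2/2)
        k4 = f(x + h, y + h*k3)
        y += h*(k1 + 2*k2 + 2*k3 + k4)/6
        x += h
    return y
```

By $x=100$ the computed solution sits slightly below the root, roughly at $\frac13-\frac{1}{1500}$, matching the leading correction $-\frac{1}{15x}$.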
Constants of motion in $R$-domains {#S4}
----------------------------------
First we choose an elementary $y$-path (cf. Definition \[Def4\]) along which the solution $y$ to (\[abel\]) turns clockwise around the root $\ds\frac{1}{3}$, as shown in Figs. \[fig:abel1\] and \[fig:abel2\].
![Solution $y(x)$ with $y_0=1.1$ along the line segments from 1+5i to 1.5+50i to 1.6+120i[]{data-label="fig:abel1"}](abel1.eps)
![Real and imaginary parts of $y(x)$. The upper curve is the real part, the lower curve is the imaginary part, and the straight line is the root $1/3$.[]{data-label="fig:abel2"}](abel2.eps)
For simplicity we calculate the first two terms of the expansion (\[eq:def2c\]). We have $$\begin{aligned}
\label{f36}
F_0(y)&=\int\frac{1}{-3y^3+\frac{1}{9}}dy=\sqrt{3}\arctan \left(\frac{6y+1}{\sqrt{3}}\right)-\log(3y-1)+\frac{1}{2}\log(9y^2+3y+1)\\
a&=\dfrac{\int_{\mathcal{C}}\frac{y}{5(-3y^3+\frac{1}{9})^2}dy}{\int_{\mathcal{C}}\frac{1}{-3y^3+\frac{1}{9}}dy}=\frac{1}{5}\\
F_1(y)&=-\int\dfrac{\frac{1}{5}-\frac{y}{5(-3y^3+\frac{1}{9})}}{-3y^3+\frac{1}{9}}dy
=\frac{1}{10}\left(\frac{54y^2}{1-27y^3}-4\sqrt{3}\arctan \left(\frac{6y+1}{\sqrt{3}}\right)\right)+\frac{1}{25}
,\end{aligned}$$ where the constant $\ds\frac{1}{25}$ is found using (\[gk\]).
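These closed forms are easy to get wrong by hand; the sympy sketch below (our verification, not part of the paper) recomputes $a$ via residues at the root $\frac13$ and checks $F_0'=1/P_0$ and $F_1'=-(a+P_1F_0')/P_0$ against the formulas above:

```python
import sympy as sp

y = sp.symbols('y')
P0 = -3*y**3 + sp.Rational(1, 9)
P1 = -y/5

# a from the boundedness condition over loops around the root 1/3, via residues
r = sp.Rational(1, 3)
a = -sp.residue(P1/P0**2, y, r) / sp.residue(1/P0, y, r)

# closed forms quoted in (f36)
F0 = (sp.sqrt(3)*sp.atan((6*y + 1)/sp.sqrt(3))
      - sp.log(3*y - 1) + sp.log(9*y**2 + 3*y + 1)/2)
F1 = ((54*y**2/(1 - 27*y**3)
       - 4*sp.sqrt(3)*sp.atan((6*y + 1)/sp.sqrt(3)))/10 + sp.Rational(1, 25))
```

Differentiating the closed forms and simplifying reproduces the defining relations exactly, with $a=\frac15$.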
We plot the first two orders of the formal constant of motion in Fig. \[fig:abel3\].
![Formal constant of motion with $F_0$ and $F_1$.[]{data-label="fig:abel3"}](abel3.eps)
Since this formal constant of motion is almost constant along any path in the same $R$-domain, it can be used to find the solution asymptotically, writing $$C=-x+\frac{1}{5}\log x+F_0(y)+\frac{F_1(y)+O(1/x)}{x}.$$ Moving the term $\log(3y-1)$ (cf. ) to the left side and $C$ to the right side, taking the exponential, and solving for $y$, we obtain $$\begin{gathered}
\label{newton}
y=\frac{1}{3}\exp\bigg(-C-x+\frac{1}{5}\log x+\left(\sqrt{3}-\frac{2\sqrt{3}}{5x}\right)\arctan \left(\frac{6y+1}{\sqrt{3}}\right)\\
+\frac{1}{2}\log(9y^2+3y+1)+\frac{1}{x}\left(\frac{27y^2}{5(1-27y^3)}+\frac{1}{25}+O(1/x)\right)\bigg)+\frac{1}{3}
.\end{gathered}$$
The reason for taking the exponential in (\[newton\]) is to take care of the branching due to $\log x$, whereas the other $\log$ and $\arctan$ do not matter since the solution does not encircle their singularities. Equation contains, in an implicit form, the solution $y$ to two orders in $x$. $y$ can be determined from this implicit equation in a number of ways; we chose, for simplicity, to solve the implicit equation numerically using Newton’s method. The solution is plotted in Fig. \[fig:abel4\], where we take $C=2.18-4.65i$ and calculate the solution for the second half of the path corresponding to $|x|>61.4$. Note that the relative error is within $1.5\%$.
Since the accuracy of the formal constant of motion is unaffected by going along the solution path as long as $|x|$ is large, we can obtain quantitative behavior of the solution for very large $|x|$. By contrast, in a numerical approach, the further one integrates along the path, the less accurate the calculated solution becomes.
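A minimal Python sketch of this inversion (with the hypothetical values $x=100$, $y=0.7$, chosen by us for illustration, and the constant truncated to the two orders displayed above):

```python
import math

def F0(y):
    return (math.sqrt(3)*math.atan((6*y + 1)/math.sqrt(3))
            - math.log(3*y - 1) + 0.5*math.log(9*y**2 + 3*y + 1))

def F1(y):
    return (54*y**2/(1 - 27*y**3)
            - 4*math.sqrt(3)*math.atan((6*y + 1)/math.sqrt(3)))/10 + 1/25

def C2(y, x):
    # the constant of motion, truncated to two orders
    return -x + math.log(x)/5 + F0(y) + F1(y)/x

def invert(C, x, y_guess, tol=1e-12, h=1e-7):
    # Newton's method with a centered finite-difference derivative
    y = y_guess
    for _ in range(100):
        g = C2(y, x) - C
        if abs(g) < tol:
            break
        dg = (C2(y + h, x) - C2(y - h, x))/(2*h)
        y -= g/dg
    return y
```

Computing $C$ at a point and inverting back recovers $y$ to near machine precision; the Newton step is well conditioned because $\partial C/\partial y = 1/P_0 + O(1/x)$ is bounded away from zero off the roots.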
![Comparison of solutions obtained numerically by the Runge-Kutta method and using the formal constant of motion .[]{data-label="fig:abel4"}](abel4n.eps)
![A small section of the left-top plot in Fig. \[fig:abel4\].[]{data-label="fig:abel4z"}](abel4z.eps)
Finding the positions of the singularities
------------------------------------------
We illustrate how to find singularities of the Abel equation using Proposition \[sincom\]. It is known [@Invent] that there are only square root singularities, and they appear in two arrays.
For simplicity we choose a simple singular path along which $y$ goes to $+\infty$.
According to Proposition \[sincom\] we have $$\begin{gathered}
\label{F0}
F_0(y)=-\int_{\infty}^{y}\frac{1}{-3s^3+\frac{1}{9}}ds\\=-\sqrt{3}\arctan (\frac{6y+1}{\sqrt{3}})+\log(3y-1)-\frac{1}{2}\log(9y^2+3y+1)+\frac{\sqrt{3}\pi}{2}\\
F_1(y)=-\frac{1}{5}\int_{\infty}^{y}\frac{s}{(-3s^3+\frac{1}{9})^2}ds\\
=\frac{-\frac{54y^2}{1-27y^3}+2\sqrt{3}\left(\arctan (\frac{6y+1}{\sqrt{3}})-\frac{\pi}{2}\right)+2\log(3y-1)-\log(9y^2+3y+1)}{10}
.\end{gathered}$$
Thus the position of the singularity is given by the formula $$\begin{gathered}
\label{asing}
x_1 =C+o(1)=x_0-\sqrt{3}\left(1-\frac{1}{5x_0}\right)\left(\arctan \left(\frac{6y_0+1}{\sqrt{3}}\right)-\frac{\pi}{2}\right)\\
+\left(1+\frac{1}{5x_0}\right)\left(\log(3y_0-1)-\frac{1}{2}\log(9y_0^2+3y_0+1)\right)-\frac{27y_0^2}{5x_0(1-27y_0^3)}+o(1)
,\end{gathered}$$ where the initial condition $(x_0,y_0)$ satisfies $|x_0|$ is large and $y_0$ is not close to any of the three roots. We note that the presence of the arctan in the leading order implies that the solutions remain quasi-periodic beyond the domain accessible to the methods in [@Invent]. In (\[asing\]) we have the freedom of choosing branch of $\log$ and $\arctan$, which enables us to find arrays of singularities.
For example, the position of a singularity corresponding to the initial condition $x_0=10+60i,~y_0=0.7+0.3i$, calculated using (\[asing\]), is $x_1=9.80628+60.2167i$, which is accurate to six significant digits, as checked numerically.
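This evaluation is easy to reproduce; the following sketch (our own, for illustration) computes the leading terms of (\[asing\]), using the principal branches of $\log$ and $\arctan$, which turn out to be the appropriate choice for this particular initial condition:

```python
import cmath

def singular_x(x0, y0):
    """Leading terms of the singularity-position formula (asing)."""
    s3 = cmath.sqrt(3)
    at = cmath.atan((6*y0 + 1)/s3) - cmath.pi/2
    lg = cmath.log(3*y0 - 1) - 0.5*cmath.log(9*y0**2 + 3*y0 + 1)
    return (x0 - s3*(1 - 1/(5*x0))*at
               + (1 + 1/(5*x0))*lg
               - 27*y0**2/(5*x0*(1 - 27*y0**3)))

x1 = singular_x(10 + 60j, 0.7 + 0.3j)
print(x1)  # close to the quoted value 9.80628 + 60.2167i
```

Other branch choices in the same expression shift the result by the periods of $\log$ and $\arctan$, producing the other members of the singularity arrays.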
The detailed behavior of the solution near the singularity can be found by expanding the right hand side of (\[asing\]). We omit the calculation here since there are many other methods to determine this behavior (cf. [@Invent]) and it is of lesser importance to the paper.
Connecting regions of transseries {#phase}
---------------------------------
We choose a path in $x$ consisting of line segments connecting $50i$, $50$, $-50i$, $-50$, $50i$, $50$, $-50i$, and $-50(\sqrt{3}+i)$; this corresponds to an angle of $2\pi$ in the original variable. The initial condition is $y(50i)=0.6$.
Along this path, the solution of (\[abel\]) approaches all three complex cube roots of $1/27$. For instance, the root $1/3$ is approached when $x$ traverses the first quadrant along the first segment, the root ${(-1)^{4/3}}/{3}$ is approached when $x$ goes to the lower half plane, and the root ${(-1)^{2/3}}/{3}$ is approached when $x$ goes back to the upper half plane. Some of these values are approached more than once along the entire path. This behavior can easily be shown using the phase portrait of $G_0$; cf. Corollary \[C1\].
Note that along a straight line $x=x_0+\xi e^{t i}$, where the angle $t$ is fixed, the leading term (with only $G_0$ on the right hand side) of the ODE (\[abel\]) can be written as $$\frac{d y}{d \xi}=e^{t i}\left(-3y^3-\frac{y}{5(x_0+\xi e^{t i})}+\frac{1}{9}\right).$$
Denoting $y_1=\Re{y}$ and $y_2=\Im{y}$, we have
$$\left\{
\begin{array}{ll}
\dfrac{d y_1}{d \xi}=-3 y_1^3\cos t-3 y_2^3\sin t+9 y_1 y_2^2\cos t+9 y_1^2 y_2\sin t+\dfrac{\cos t}{9}\\
\dfrac{d y_2}{d \xi}=-3 y_1^3\sin t+3 y_2^3\cos t+9 y_1 y_2^2\sin t-9 y_1^2 y_2\cos t+\dfrac{\sin t}{9}
\end{array}
\right.$$
We can then analyze the phase portraits. For the purpose of illustration, we show some of them in Figs. \[fig:abel7\] and \[fig:abel8\].
![Phase portrait of $\Re(y)$ and $\Im(y)$ for $t=-\pi/4$. The $``\times"$ marks are the three roots.[]{data-label="fig:abel7"}](abelode7b.eps)
On the line segment connecting $50i$ and $50$, it is clear that the initial condition $0.6$ is in the basin of attraction of $1/3$ (cf. Fig. \[fig:abel7\]).
![Phase portrait of $\Re(y)$ and $\Im(y)$ for $t=5\pi/4$.[]{data-label="fig:abel8"}](abelode8b.eps)
Since the only stable equilibrium is $a_0=\dfrac{(-1)^{4/3}}{3}$, on the line segment connecting $50$ and $-50i$ the solution converges to $a_0$ (cf. Fig. \[fig:abel8\]).
Numerical calculations confirm this (cf. Fig. \[fig:abel6\]).
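The attraction along the first segment is also easy to verify directly; the following minimal sketch (a fixed-step RK4 of our own, not the integrator used for the figures) integrates (\[abel\]) from $50i$ to $50$ with $y(50i)=0.6$:

```python
def rhs(x, y):
    # right-hand side of the Abel equation in these coordinates
    return -3*y**3 - y/(5*x) + 1/9

def rk4_along(x0, x1, y, n=20000):
    h = (x1 - x0) / n          # complex step along the straight segment
    x = x0
    for _ in range(n):
        k1 = rhs(x, y)
        k2 = rhs(x + h/2, y + h*k1/2)
        k3 = rhs(x + h/2, y + h*k2/2)
        k4 = rhs(x + h, y + h*k3)
        y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        x += h
    return y

y_end = rk4_along(50j, 50 + 0j, 0.6 + 0j)
print(abs(y_end - 1/3))   # O(1e-3): y has locked onto the root 1/3
```

The residual offset from $1/3$ is the expected $O(1/x)$ displacement of the quasi-equilibrium, consistent with the $-y/(5x)$ term.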
![Behavior of the solution across transseries regions. Dotted horizontal lines are the imaginary parts of the three roots. The horizontal axis is arclength. The path in $x$ consists of line segments connecting $50i$, $50$, $-50i$, $-50$, $50i$, $50$, $-50i$, and $-50(\sqrt{3}+i)$. This corresponds to an angle of $2\pi$ in the original variable.[]{data-label="fig:abel6"}](abel6n.eps)
Finally, note that there cannot be a limit cycle in the phase portraits drawn when $x$ goes along a straight line. If the solution $y$ approaches a limit cycle, it must lie in an $R$-domain. Thus the formal constant of motion formula (\[eq:def2c\]) is valid, and the first term $F_0$ specifies a direction for $x$. If $x$ goes strictly along this direction towards $\infty$ then the term $a\log x$, which does not vanish in our case, will go to $\infty$, contradicting the results about the constant of motion. On the other hand, if $x$ goes in a different direction, then $-x+F_0(y)$ goes to $\infty$ much faster than $a\log x$, again a contradiction.
Extension to higher orders
--------------------------
For higher orders, such as the Painlevé equations P1 and P2, a similar procedure works, though the details are quite a bit more complicated, and we leave them for a subsequent work. We illustrate, without proofs, the results for $P1$, $y''=6y^2+z$. Now, there are two asymptotic constants of motion, as expected. The normal form we work with is $u''+u'x^{-1}-u-u^2/2-392x^{-4}/625=0$. Denoting by $s$ the “energy of elliptic functions” $s={u'}^2/2-u^3/3+u^2$ (it turns out that $s$ is one of the bicharacteristic variables of the sequence of equations, now PDEs, governing the terms of the expansion; thus the pair $(u,s)$ is preferable to $(u,u')$), one constant of motion has the asymptotic form $$C_1=x-L(s,u)+x^{-1}K_1(s,u)+\cdots$$ In the above, denoting $R=\sqrt{u^3/3+u^2+s}$, $L$ is an incomplete elliptic integral, $L=\int R^{-1}(s,u)du$, and the integration follows a path winding around the zeros of $R$. The functions $K_1$, $K_2$, $\cdots$ have similar but longer expressions. We note the absence of a term of the form $a\log x$ (the reason for this is easy to see once the calculation is performed). A second constant can now be obtained by reduction of order and applying the first order techniques, or better, by the “action-angle” approach described in the introduction. It is of the form $$C_2=xJ(s)+[L(s)J(u,s)-J(s)L(u,s)]+x^{-1}\tilde{K}_1+\cdots$$ where $J(u,s)=\int R(s,u)du$; when the variable $u$ is missing from $J(u,s)$ or $R(u,s)$, this simply means that we are dealing with complete elliptic integrals. There is directionality in the asymptotics, as the loops encircling the singularities need to be rigidly chosen according to the asymptotic direction studied. A slightly different representation allows us to calculate the constants to all orders.
Because of directionality, a different asymptotic formula exists and is more useful for the “lateral connection”, that is, for calculating the solution along a circle of fixed but large radius, which will be detailed in a separate paper, as part of the Painlevé project, see e.g. [@Painleve22].
Acknowledgments
===============
The authors are very grateful to R. Costin for a careful reading of the manuscript and numerous useful suggestions. OC’s work was partially supported by the NSF grants DMS 0807266 and DMS 0600369.
[99]{} Ablowitz, M., Biondini, G., Prinari, B., Inverse scattering transform for the integrable discrete nonlinear Schrödinger equation with nonvanishing boundary conditions. Inverse Problems 23, no. 4, pp. 1711–1758 (2007).
Abramowitz, M. and Stegun I.A., [*Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables*]{}, 9th printing. New York: Dover, pp. 804–806, (1972).
Arnold, V.I., [*Geometrical Methods in the Theory of Ordinary Differential Equations*]{}, Springer, 2nd edition (1988).
Baik, J., Buckingham, R., DiFranco, J., Its, A., Total integrals of global solutions to Painlevé II. Nonlinearity 22, no. 5, pp. 1021–1061 (2009).
Balser, W., From Divergent Power Series to Analytic Functions, Springer, 1st ed. (1994).
Balser, W., Braaksma, B. L. J., Ramis, J.-P., Sibuya, Y., Multisummability of formal power series solutions of linear ordinary differential equations. Asymptotic Anal. 5, no. 1, pp. 27–45 (1991).
Bender, C. and Orszag, S., [*Advanced Mathematical Methods for Scientists and Engineers*]{}, McGraw-Hill, 1978; Springer-Verlag (1999).
Bleher, P. and Its, A., Double scaling limit in the random matrix model: the Riemann-Hilbert approach. Comm. Pure Appl. Math. 56, no. 4, pp. 433–516 (2003).
Bornemann, F., Clarkson, P., Deift, P., Edelman, A., Its, A. and Lozier, D., Notices Amer. Math. Soc., V. 57, no. 11, p. 1389 (2010).
Braaksma, B. L. J., Multisummability of formal power series solutions of nonlinear meromorphic differential equations. Ann. Inst. Fourier 42, no. 3, pp. 517–540 (1992).
Boutet de Monvel, A., Fokas, A. S., Shepelsky, D., Integrable nonlinear evolution equations on a finite interval. Comm. Math. Phys. 263 , no. 1, pp. 133–172 (2006).
Calogero, F. A new class of solvable dynamical systems. J. Math. Phys. 49 no. 5, 052701, 9 (2008).
Clarkson, P. A. and Kruskal, M. D., The Painlevé–Kowalevski and poly-Painlevé tests for integrability. Studies in Applied Mathematics 86, pp. 87–165 (1992).
Conte, R., Musette, M., Verhoeven, C., Painlevé property of the Hénon-Heiles Hamiltonians. Théories asymptotiques et équations de Painlevé, pp. 65–82, Sémin. Congr., 14, Soc. Math. France, Paris (2006).
Costin, O., Asymptotics and Borel Summability, Chapman & Hall/CRC, New York (2009).
Costin, O., On Borel summation and Stokes phenomena of nonlinear differential systems. [*Duke Math. J.*]{} 93, no. 2 (1998).
Costin O. and Costin R.D., On the formation of singularities of solutions of nonlinear differential systems in antistokes directions [*Inventiones Mathematicae*]{}, 145, 3, pp. 425–485, (2001).
Deift, P. Four lectures on random matrix theory. Asymptotic combinatorics with applications to mathematical physics (St. Petersburg, 2001), pp. 21–52, Lecture Notes in Math., 1815, Springer, Berlin, (2003).
Baik, J., Deift, P., Johansson, K. On the distribution of the length of the longest increasing subsequence of random permutations. J. Amer. Math. Soc. 12, no. 4, pp. 1119–1178 (1999).
Deift, P.; Zhou, X. A steepest descent method for oscillatory Riemann-Hilbert problems. Asymptotics for the MKdV equation, Ann. of Math. (2) 137 no. 2, pp. 295–368, (1993).
Fokas, A. S. Soliton multidimensional equations and integrable evolutions preserving Laplace’s equation. Phys. Lett. A 372, no. 8, pp. 1277–1279, (2008).
Écalle, J., [*Fonctions Resurgentes, Publications Mathematiques D’Orsay*]{}, (1981).
Écalle, J., Six lectures on transseries, analysable functions and the constructive proof of Dulac’s conjecture, [*Bifurcations and periodic orbits of vector fields*]{}, NATO ASI Series, Vol. 408, pp. 75-184, (1993)
Fabry, C. E. [*Thèse (Faculté des Sciences)*]{}, Paris, 1885
Fokas, A. S. Integrable nonlinear evolution equations on the half-line. Comm. Math. Phys., no. 1, pp. 1–39, 230 (2002).
Goriely, A. Integrability and nonintegrability of dynamical systems. Advanced Series in Nonlinear Dynamics, 19. World Scientific Publishing Co., Inc., River Edge, NJ, (2001).
Grammaticos, B., Ramani, A., Tamizhmani, K. M., Tamizhmani, T., Carstea, A. S. Do all integrable equations satisfy integrability criteria? Adv. Difference Equ., Art. ID 317520, (2008).
Its, A., Jimbo, M. and Maillet, J.-M., Integrable quantum systems and solvable statistical mechanics models. J. Math. Phys. 50, no. 9, 095101 (2009).
Kruskal, M. D., Grammaticos, B., Tamizhmani, T. Three lessons on the Painlevé property and the Painlevé equations. Discrete integrable systems, 1–15, Lecture Notes in Phys., 644, Springer, Berlin (2004).
Its, A. Connection formulae for the Painlevé transcendents. The Stokes phenomenon and Hilbert’s 16th problem (Groningen, 1995), pp. 139–165, World Sci. Publ., River Edge, NJ, (1996).
Its, A., The Riemann-Hilbert problem and integrable systems. Notices Amer. Math. Soc. 50, no. 11, pp. 1389–1400 (2003).
Iwano, M., Intégration analytique d’un système d’équations différentielles non linéaires dans le voisinage d’un point singulier, Ann. Mat. Pura Appl. (4) 44, pp. 261–292 (1957).
Ramis, J.-P., Dévissage Gevrey. (French) Journées Singulières de Dijon (Univ. Dijon, Dijon, 1978), pp. 173–204, Astérisque, 59-60, Soc. Math. France, Paris (1978).
Ramis, J.-P. Les séries $k$-sommables et leurs applications. (French) Lecture Notes in Phys., 126, pp. 178–199, Springer, Berlin-New York, (1980).
Takahashi, M., On completely integrable first order ordinary differential equations. Real and complex singularities, 388–418, World Sci. Publ., Hackensack, NJ, (2007).
Tamizhmani, K. M., Grammaticos, B., Ramani, A. Do all integrable evolution equations have the Painlevé property? SIGMA Symmetry Integrability Geom. Methods Appl. 3, Paper 073, 6 (2007).
Treves, F., Differential algebra and completely integrable systems. Hyperbolic problems and related topics, pp. 365–408, Grad. Ser. Anal., Int. Press, Somerville, MA, (2003).
Wasow, W., [*Asymptotic Expansions for Ordinary Differential Equations*]{}, Interscience Publishers (1968).
Yoshino, M., Analytic non-integrable Hamiltonian systems and irregular singularity. Ann. Mat. Pura Appl. (4) 187, no. 4, pp. 555–562 (2008).
Zakharov, V. E., (ed.): What is Integrability? Berlin etc., Springer-Verlag 1991
[^1]: A singular point of an equation is irregular if, for [*small*]{} solutions, the linearization is not of Frobenius type. By a small solution we mean one that tends to zero in some direction after simple changes of coordinates.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We present results of N-body simulations aimed at understanding the dynamics of young stars near the Galactic center. Specifically, we model the inspiral of a cluster core containing an intermediate mass black hole and $N \sim 50$ cluster stars in the gravitational potential of a supermassive black hole. We first study the elliptic three-body problem to isolate issues of tidal stripping and subsequent scattering, followed by full N-body simulations to treat the internal dynamics consistently. We find that our simulations reproduce several dynamical features of the observed population. These include the observed inner edge of the claimed clockwise disk, as well as the thickness of said disk. We find that high density clumps, such as that claimed for IRS13E, also result generically from our simulations. However, not all features of the observations are reproduced. In particular, the surface density profile of the simulated disk scales as $\Sigma \propto r^{-0.75}$, which is considerably shallower than that observed. Further, at no point is any significant counter-rotating population formed.'
author:
- 'Steven J. Berukoff and Bradley M.S. Hansen'
title: Cluster Core Dynamics in the Galactic Center
---
Introduction
============
In the past decade, observations have conclusively established the presence of a supermassive black hole (SMBH) at the Galactic Center (GC) (Sanders 1992, Haller et al. 1996, Ghez et al. 2003). Efforts to measure the mass of the SMBH have led to a large body of evidence regarding the stellar kinematics and mass distribution in this region (Genzel et al. 2003, Ghez et al. 2003, Schödel et al. 2003, Paumard et al. 2005, Maillard et al. 2004, Eisenhauer et al. 2005, Ghez et al. 2005). The most surprising findings include the discovery of a number of young, massive stars closely orbiting the central object Sgr A\*, young stellar populations in two possibly counter-rotating stellar disks orbiting Sgr A\* further out, and small stellar clumps or associations (so-called “comoving groups”) within these disks (Levin & Beloborodov 2003, Genzel et al. 2003, Lu et al. 2005, Maillard et al. 2004, Schödel et al. 2005).
The origin and peculiar kinematics of these structures beg theoretical explanation. Formation of stars near the SMBH is problematic because the strong tidal field is likely to shear and disrupt normal molecular clouds well before they can gravitationally collapse. Other formation scenarios include molecular cloud collisions, star-star collisions, and giants whose envelopes have been tidally stripped, but none of these is entirely satisfactory. Current opinion falls into one of two classes – either formation of stars by gravitational instability in an AGN-like accretion disk (Kolykhalov & Sunyaev 1980, Shlosman & Begelman 1989, Morris 1996, Sanders 1998, Goodman 2003, Levin & Beloborodov 2003, Nayakshin & Cuadra 2005) or rapid inward transport (due to dynamical friction) and subsequent tidal disruption of a star cluster that formed at larger radii (Gerhard et al. 2001). Early numerical simulations of the latter scenario revealed that the cluster would not survive the infall to small radii unless extraordinary demands were placed on the mass and stellar density (McMillan & Portegies Zwart 2003, Kim & Morris 2003). An enhancement of this idea was the inclusion of an intermediate mass black hole (IMBH) at the center of the cluster, which served to both maintain the cluster potential well and slow internal relaxation (Hansen & Milosavljević 2003). Further direct simulations, including this refinement, again placed stringent demands on the cluster initial conditions, although the required core density decreased (Kim, Figer, & Morris 2004). Recent Monte Carlo and N-body simulations of this process have demonstrated that $\sim 100$ stars can be transported via dynamical friction to about $1\pc$ from the GC, with an IMBH formed naturally through a runaway process of stellar merger during the migration (Baumgardt et al. 2004, Gürkan & Rasio 2005).
However, these simulations could not accurately follow the further evolution of this cluster core because of algorithmic limitations. It is the principal goal of this paper to follow the physics of this process further inwards to determine to what extent this scenario may reproduce the observed features of the young star distribution.
To date, the reliability of existing simulations has effectively ended at about $1\pc$. A chief cause of failure of these codes is the presence of very strong tidal fields due to the nearby SMBH, which causes normally simple algorithms for energy conservation and treatment of close encounters to become delicate and highly complex, resulting in occasional failure. Since the majority of the interesting and puzzling observations have been made interior to this radius, simulations that might illuminate answers to these riddles are necessary.
This paper discusses simulations of the dynamics of remnant cluster cores as they sink towards the GC, specifically focusing on the region interior to $1\pc$. The paper is organized as follows: § \[sec:nummeth\] describes the numerical methods of simulating the inspiral of general, three-body and $\sim 50-$body systems, including the implementation of dynamical friction that creates the inspiral. § \[sec:3body\] describes the three-body simulations, and § \[sec:nbody\] the N-body simulations, and compares and contrasts the two regimes. Finally § \[sec:disc\] concludes with a discussion of how these results can be used to understand dynamics at the Galactic Center, with particular regard to the curious observed structures such as the S-stars and the comoving groups IRS13E and IRS16SW.
Numerics {#sec:nummeth}
========
Overview
--------
While a large body of literature exists covering many aspects of N-body dynamics, few numerical studies incorporate strong tidal fields. Strong interactions between stars, when handled improperly, lead to large energy errors and subsequent spurious results, and strong tidal fields encourage stronger interactions. The two canonical methods of dealing with close encounters in N-body integrations are softening and regularization. By the inclusion of a small softening parameter into the denominator of the force calculation, the effects of small interparticle separations can be avoided, at the cost of reduced accuracy for close encounters. In simulations for which the particle density or interaction cross-section are relatively low, such as large-scale galaxy simulations, this method is useful and often employed. In this paper, the focus is on both individual dynamics and in statistical averages of these dynamics in dense environments; therefore, softening is inappropriate, as some portion of the essential motions would be lost.
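For concreteness, the following sketch shows a Plummer-softened pairwise acceleration, the standard textbook form of softening (the function and parameter names are ours, not those of any particular code), and illustrates why softening sacrifices close-encounter accuracy:

```python
import math

def softened_accel(m_j, dx, dy, dz, eps, G=1.0):
    """Plummer-softened acceleration on particle i due to particle j,
    where (dx, dy, dz) points from i to j. For r >> eps this reduces to
    the Newtonian value; for r <~ eps the force is artificially
    suppressed, which is what degrades close encounters."""
    r2 = dx*dx + dy*dy + dz*dz + eps*eps
    f = G * m_j / (r2 * math.sqrt(r2))   # G m_j / (r^2 + eps^2)^{3/2}
    return (f*dx, f*dy, f*dz)
```

Since the dense environments studied here make such suppressed encounters common, we instead take $\epsilon = 0$ and handle close pairs by regularization, described next.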
Regularization {#subsec:Regular}
--------------
Regularization is a technique in which a close encounter between particles with $1/R$-style force terms is mathematically transformed into a single center-of-mass particle, the singularity removed, and its path integrated for a suitable time (see, e.g., Aarseth 2003). The essential benefit of this method is that the close encounter is correctly integrated with minimal error, and thus the individual dynamics are more realistic. Several schemes are in widespread use; two of the most common are two-body, or Kustaanheimo-Stiefel (KS), regularization, and “chain”, which is essentially a coupled set of KS regularizations for multiple particles, in which each pair of neighbors is KS-regularized in a link-list fashion, but without closure of the chain. We utilize the benefits of these approaches, although for the systems under consideration here they are not without pitfalls.
KS regularization is analytically straightforward, and its extension to chain regularization is more complex but tractable. Typically, though these algorithms are well-developed, they are optimized for similar-mass particles in the absence of tidal fields. When faced with large mass ratios, the decision-making behind the initiation, continuation, and termination of regularization can be poorly defined, leading to frequent integration errors. The addition of strong tidal fields further complicates matters, as the regularized pair becomes subject to a perturbation that can be of the same order-of-magnitude as the two-body interaction itself. These problems may be alleviated by better decision making, including that governing regularization and the treatment of high-velocity intruders. However, the central difficulty is a basic inadequacy of the algorithms, and no amount of tweaking is going to produce a universally successful numerical treatment.
A common configuration in these simulations consists of several particles that orbit close to the massive particle (“IMBH”). Each particle-IMBH pair requires regularization to correctly compute the two-body orbits. Simultaneously, all close particle-particle pairs need the same treatment. This type of configuration is not properly dealt with by modern chain regularization, the only fully implemented multiple regularization technique commonly available. There is an alternative algorithm, called “wheel-spoke” (Aarseth 2003), which could remedy this problem, but it is not yet mature and currently suffers from a number of fatal setbacks. (Aarseth, private communication)
This inadequacy of current regularization techniques somewhat limits the applicability of our present work. As the IMBH inspirals and attempts to transport multiple tightly bound stars down the potential well, the present algorithms must continually switch between regularizing the IMBH-particle and particle-particle pairs. This is inefficient and can be error-prone; for example, when two or more particles are each in hard binaries with the IMBH, the integration steps may be incorrectly calculated, resulting in erroneous orbits. The termination of one of these regularizations can result in the introduction of incorrect orbits into the calculation. Alternatively, drastic changes in the orbits may occur, causing high velocity ejections in directions normal to their orbital planes. Indeed, this was observed in some simulations, but further analysis showed that these events were caused by numerical error rather than basic physics. So, while one might be tempted to interpret high-velocity ejections in the simulations as a physical result, they are not always correct. Given the systems simulated here, such errors place computational lower limits on the possible simultaneous star-IMBH distances, both during the simulation and when creating initial conditions, limiting the density of the initial stellar systems and consequently our ability to simulate strongly bound subsystems deep into the SMBH potential well.
In the present study, most of the N-body runs were recalculated at least once, and care was taken to ensure that situations arising from poor regularizations were removed from the final data analysis. This was done in a variety of ways, including
- the analysis of the energetics of difficult configurations;
- verifying that the timesteps used in regularizations were appropriate to the system being regularized;
- identifying and tracking the progress of multiple simultaneous regularizations;
- tuning parameters to avoid the unwarranted initiation of chain regularization, when appropriate.
Difficult dynamical configurations arose from a number of situations, caused sometimes by algorithmic difficulties. For instance, consider again the common scenario of an IMBH bound to several close particles, with a high-velocity particle intruder. This interloper is close enough to the IMBH to require regularization, but moving sufficiently fast that it might be in the region for only a short time. The basic criterion for the initiation and termination of regularization requires only information about the relative separations of the particles. In addition, the onset of a regularization period is controlled by several parameters that are set at the beginning of the simulation but that, on short timescales, have only a limited ability to adapt to the environment being integrated. In this case, the intruder particle initiates a regularization because of its proximity, and significantly alters the regularization environment for the other stars near the IMBH. This is unfortunate because the intruder moves off, leaving a computationally error-prone system behind. Alternatively, the intruder does not trigger a regularization, but interacts strongly with several members of the core. Its velocity is high, and the required timestep to properly treat the interactions is too long to maintain low energy error. The regularization parameters are updated a short time later, but by then the intruder has gone on its way, leading again to an untenable configuration. Such issues are typical in these types of simulations; their identification and analysis may indicate a path toward improved techniques.
Particulars
-----------
In all of this work, Aarseth’s NBODY6 was employed for the direct integration of stellar orbits, with several modifications. Besides needing the basic N-body integration engine, requirements for a drag term and regularization of close encounters in the presence of strong tidal fields placed strict constraints on the algorithms used, and, often, required some adjustment. NBODY6 includes several subroutines which compute a tidal field due to a variety of scenarios, but none are well-suited to the extreme mass ratios and tidal fields considered here, and were not used. Instead, new algorithms were built and integrated into the current experiments. NBODY6 also includes modern KS and chain regularization techniques for minimizing error during close encounters between particles, but, as discussed above, these methods are tailored primarily to regularizing similar-mass encounters, and current implementations are not as agile when faced with large mass ratios. Thus, constants in the vanilla NBODY6 which govern the classification of close encounters and the initiation and termination of regularization were monitored and tuned to maintain the integrity of the simulations.
Initial phase space coordinates were created using an isotropic King $W_{0}=9$ model generator. The code uses a 1D Poisson solver and fifth-order Runge-Kutta integrator with adaptive step size. The inclusion of a central black hole in the distribution is achieved by adding the potential of the black hole to that of the King model, then computing a new distribution function based on this potential and the original density profile. Candidate sets of initial conditions are then selected based on minimizing the moment of inertia and its derivative, then allowing the cluster to partially virialize. These clusters thus come as close to dynamical equilibrium as our simple model will allow. The cluster core, or, in the three-body case, the IMBH and star, are then placed at $1\pc$ away from the SMBH on the x-axis, with an initial velocity appropriate for the eccentricity used. Other specifications of initial conditions specific to either the three-body or N-body case are detailed in their respective sections.
We assume Chandrasekhar dynamical friction as a drag on the cluster, forcing inspiral. This treatment must be handled carefully, for two reasons. First, the approximation depends on the assumptions that the stars are uniformly distributed and their velocity distribution is isotropic, and in practice, the Chandrasekhar formula provides a reasonable estimate of the drag induced on an orbiter. However, it fails to accurately describe strongly inhomogeneous systems in which the forces applied to the orbiter are nonuniform. An extension of the standard paradigm, in which the Holtsmark distribution characteristic of the Chandrasekhar formalism is generalized, concluded that inhomogeneities drive a stronger drag against orbiting bodies than would be expected with a smooth distribution (Del Popolo & Gambera 1999). This is important in cases where the gravitational field is strongly discretized due, for example, to nonuniform stellar density. During early experiments, however, we found that the inhomogeneities need to be very dense and localized (such as those from high-density molecular cloud cores) to produce significant changes, and so may be neglected here.
Second, the Hermite integration utilized in NBODY6 depends not only on the force, but also on its three derivatives, and the correct treatment of energy error relies on a reasonable estimate of the work performed by any putative drag. The standard Chandrasekhar formula for dynamical friction is (Binney & Tremaine 1987) $$\frac{d {\bf v}}{dt} = -4 \pi \ln \Lambda G^2 M \chi \rho (r) \frac{{\bf v}}{v^3}, \qquad
\chi = \erf\biggl(\frac{v}{\sigma \sqrt{2}}\biggr) - \sqrt{\frac{2}{\pi}}\,\frac{v}{\sigma}\, e^{-v^2/2\sigma^2},
\label{eq:basicDF}$$ where $\ln \Lambda$ is the Coulomb logarithm and $\rho (r)$ is the background stellar density which interacts with the moving body to induce drag. In this paper, time derivatives of $\chi$ can be eliminated, since its growth is essentially adiabatic. We found that this slight improvement provides a significant reduction in computational time without affecting the results.
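A minimal sketch of how the drag magnitude is evaluated, using the standard Binney & Tremaine form of $\chi$ (code units with $G=1$; the names are ours, and the actual NBODY6 modification also supplies the force derivatives required by the Hermite scheme):

```python
import math

def chi(v, sigma):
    """Maxwellian velocity factor in Chandrasekhar's formula
    (Binney & Tremaine form), with X = v/(sigma*sqrt(2))."""
    X = v / (sigma * math.sqrt(2.0))
    return math.erf(X) - (2.0 * X / math.sqrt(math.pi)) * math.exp(-X * X)

def df_decel(v, M, rho, ln_lambda, sigma, G=1.0):
    """Magnitude of the dynamical-friction deceleration |dv/dt|."""
    return 4.0 * math.pi * ln_lambda * G**2 * M * chi(v, sigma) * rho / v**2

# With the isothermal choice sigma = v/sqrt(2) used below, X = 1 and
# chi reduces to the constant erf(1) - 2 e^{-1}/sqrt(pi) ~ 0.4276.
print(chi(1.0, 1.0 / math.sqrt(2.0)))
```

Note that $\chi \to 1$ for $v \gg \sigma$ and $\chi \to 0$ as $v \to 0$, so slow orbiters feel little drag.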
Once stars are lost from the control of the IMBH, subsequent interactions occur in a disk. A typical cluster is a spheroid with a halo+core structure, often created by mass segregation. As it is tidally stripped by a SMBH, its former members are strewn into orbits around the SMBH, with initial orbital semimajor axes, eccentricities, and inclinations similar to that of the IMBH. A large cluster that begins inspiral far from the depths of the SMBH potential well disrupts, leaving a wide concentric swirl of stellar debris in its wake. Disrupted clusters with large initial populations ($\sim 10^5$ stars) thus create a density enhancement over and above that of the background. This can amplify the deceleration due to the drag from the [*background*]{} density, given in Eq. \[eq:basicDF\], resulting in a short infall time. In these experiments, the tidal tail of the small cluster core has a much smaller stellar population. Therefore, the modelled analytic background stellar population is the primary source of drag for the IMBH and its former cluster members.
Three-body {#sec:3body}
==========
Setup {#sec:3bsetup}
-----
We start with the limited case of an IMBH with only a single star orbiting it. Simulating the dynamics of such a system inspiralling into a massive potential well will isolate the physics of tidal stripping and subsequent mutual scattering. In following sections we compare with the case of multiple stars to understand the role of internal dynamical evolution in the cluster. The main causes of variation in the three-body results are different IMBH eccentricities and change of inspiral speed, while other parameters such as the presence of a mass spectrum proved to be unimportant, and little mention will be made of them. Plots shown in the following sections are representative of results obtained, and do not contain data from the full 10000 runs, unless specified.
In order to more fully understand the relevant parameter space for the cluster core simulations, a large suite of three-body runs is performed. Initial conditions are created by generating King models, as described above, each with 1000 particles, with individual stars randomly selected from the phase-space distribution and placed into a three-body system with an IMBH and an SMBH point-mass potential (with $M\sim 4\times10^6 M_{\odot}$), whose motion is not integrated. The simulations employ a rough $M_{imbh}:M_{star}$ mass ratio of $10^3:1$, and a range of IMBH masses is used (250, 500, 750, 1000, 1200, 1500, 2000 $M_{\odot}$). Stars whose initial separation from the IMBH is larger than their Jacobi radius are rejected. Selected stars are assigned a mass from a Kroupa-type (Kroupa, Tout, & Gilmore 1993) initial mass function, slightly modified to include stars of up to $100 M_{\odot}$. The IMBH+star system is then placed on an initial orbit with one of four eccentricities: $0$, $0.2$, $0.5$, $0.8$. For each value of the IMBH mass and initial IMBH eccentricity, $300$ stars are selected and simulated.
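The rejection step can be sketched as follows, using the standard Hill-type estimate for the Jacobi radius (the function names and the estimate itself are illustrative assumptions, not quoted from the code):

```python
def jacobi_radius(a, m_imbh, m_smbh):
    """Approximate tidal (Jacobi) radius of an IMBH of mass m_imbh on an
    orbit of semimajor axis a about an SMBH of mass m_smbh
    (Hill-type estimate, a*(m/3M)^(1/3))."""
    return a * (m_imbh / (3.0 * m_smbh)) ** (1.0 / 3.0)

def accept_star(r_star_imbh, a, m_imbh, m_smbh=4.0e6):
    """Reject stars initially outside the Jacobi radius of the IMBH."""
    return r_star_imbh <= jacobi_radius(a, m_imbh, m_smbh)

# e.g. a 1000 Msun IMBH at a = 1 pc from a 4e6 Msun SMBH retains
# stars only within roughly 0.044 pc
print(jacobi_radius(1.0, 1000.0, 4.0e6))
```

The weak cube-root dependence on the IMBH mass means the retained region changes only modestly across the mass range sampled above.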
In order to force an inspiral, a drag term is applied to the IMBH, in the form of Eq. (\[eq:basicDF\]) with $\sigma = v/\sqrt{2}$, which is roughly an isothermal distribution, or $\rho \propto r^{-2}$. For these runs, the detailed density structure of the Galactic Center environment is unimportant, as the primary goal is to understand what effect star-star interactions have on cluster evolution by contrasting the three-body case with the more general N-body case. A basic inspiral lasts approximately $15 \Myr$, although faster ($\sim 5 \Myr$) and slower ($\sim 100 \Myr$) inspirals are also tested. This is a rough range of inspiral timescales for circular orbits, which clearly represent an upper limit when performing eccentric inspirals (McMillan & Portegies Zwart 2003).
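Eq. (\[eq:basicDF\]) is not reproduced in this section; as an illustration only, the sketch below assumes the standard Chandrasekhar form of the drag together with the isothermal choice $\sigma = v/\sqrt{2}$ quoted above, which fixes the velocity-distribution argument $X = v/(\sqrt{2}\sigma) = 1$. The function name, unit system ($G = 1$), and Coulomb logarithm are placeholders, not the values used in the actual runs.

```python
import math

def df_acceleration(v_vec, rho, m_imbh, ln_lambda=5.0, G=1.0):
    """Tangential Chandrasekhar-type drag on the IMBH (illustrative sketch).

    With sigma = v/sqrt(2), the argument X = v/(sqrt(2)*sigma) is
    identically 1, so the velocity-distribution factor is a constant.
    ln_lambda and the unit system (G = 1) are placeholder choices.
    """
    v = math.sqrt(sum(c * c for c in v_vec))
    X = 1.0  # follows directly from sigma = v/sqrt(2)
    fac = math.erf(X) - 2.0 * X / math.sqrt(math.pi) * math.exp(-X * X)
    coef = -4.0 * math.pi * G**2 * m_imbh * rho * ln_lambda * fac / v**3
    return [coef * c for c in v_vec]  # anti-parallel to the velocity
```

Because the drag is purely tangential (anti-parallel to the velocity), it removes orbital energy without directly torquing the orbit plane, which is what drives the inspiral in these runs.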
Simulations are terminated under one of two conditions: the IMBH migrates to within $0.01 \pc$ of the origin (SMBH); or the simulation fails, typically due to the ejection of a high-velocity particle (see § \[subsec:Regular\]) or the formation of a very hard star-IMBH-star triple with semimajor axes $\sim 20\AU$. Approximately 10000 runs were conducted using a dual-processor Linux workstation.
Results {#sec:3bresults}
-------
The presence of an IMBH provides a necessary and sufficient mechanism for transporting a star down the potential well to a tidal stripping radius. Figure \[fig:dragging\] shows different stars that are transported partially or fully toward the SMBH. The majority of stars are stripped and typically end up on large semimajor-axis orbits. However, a few stars are transported deep into the well, with those most tightly bound dragged to the simulation boundary. What fraction survives is quantified below.
### Effect of IMBH Eccentricity
For a given IMBH initial eccentricity, the final stellar eccentricities tend to somewhat mirror that of the IMBH. This is not surprising, since the cluster’s systemic velocity is larger than the orbital velocities of stars about the IMBH when they are near the Roche lobe. Figure \[fig:3b\_ecc\] shows this relationship for initial IMBH eccentricities of 0.2 and 0.8. Note that there is significant scatter about the IMBH value, for both cases; further, note that the scatter for some stars in the $e=0.8$ case is larger, due to a strong perturbation during a highly eccentric encounter.
Similarly, Figure \[fig:3b\_inc\] shows this behavior for stellar inclinations. Stars beginning the simulation bound to the IMBH in its orbital plane have initial inclinations that trend linearly with their final states. Stars orbiting the IMBH far from its orbital plane are mapped to small final inclinations. Again, since the concomitant motion is that of the IMBH, these results are not surprising. Of particular interest, however, are any stars that end with high inclinations, possibly rotating in a retrograde fashion, which would help to explain the source of the putative counter-rotating disk at the Galactic Center. Note from Figure \[fig:3b\_inc\] that there are a small number of stars with such orbits; statistically, summing over all runs, these account for approximately $0.01-0.1\%$ of the final states. For a remnant core of a globular cluster, retaining perhaps $10^3$ members, this results in perhaps a handful of stars in such orbits, assuming multiple stars could be transported in this manner, a subject discussed in the context of the N-body runs below.
There is a slight trend toward increased average stellar inclination as the initial IMBH eccentricity increases, for all IMBH masses simulated. Figure \[fig:3b\_inc\_e\] reflects this; the difference is not large, but is clearly discernible. The likely culprits are strong encounters near the IMBH peribothron. The IMBH orbital plane can be viewed as a midplane for a disk composed of the orbits of stripped stars. Since the stellar inclinations are nonzero, there is some scatter about the midplane. Should a star be unlucky enough to find itself near the IMBH peribothron when the IMBH itself passes by, the scattering interaction may cause the star’s inclination to grow due to the normal component of the perturbation. Few such interactions occur in a typical simulation, but their result is a small number of high-inclination stellar orbits.
### Inspiral Speed
Three inspiral speeds were used to understand the effect on final stellar distributions: “regular”, corresponding to approximately $t_{insp}\sim 15 \Myr$, “slow” corresponding to $t_{insp}\sim 100 \Myr$, and “fast”, with $t_{insp}\sim 5\Myr$. This was effected by changing the value of the constants slightly in Eq. \[eq:basicDF\]. Slow inspirals transport fewer stars to orbits with semimajor axes of less than $0.5\pc$, while fast inspirals leave fewer stars with semimajor axis greater than $1\pc$. Slow inspirals promote larger final semimajor axes generally, for a relatively simple reason. In a slow inspiral, a stripping event leaves the IMBH and star in similar orbits. Subsequent IMBH passages can lead to significant scattering events. Note also that the endpoint of a resonant trapping event (discussed below) can produce configurations where the IMBH and star orbits nearly coincide. Since subsequent scattering is generally rare, the occurrence of strong star-IMBH interactions soon after stripping provides some estimate about the frequency of resonant trapping events.
### Miscellany
Generally $5-10\%$ of stars arrive in the inner $0.2\pc$, with the stellar orbits determined primarily by the initial IMBH eccentricity and the speed of inspiral. There are other effects, however, that are of perhaps anecdotal interest that we mention here. Due to our implementation of dynamical friction for this three-body case, the IMBH orbits tend to circularize during inspiral, as seen in Fig. \[fig:dfc\]. The circularization occurs for all initial IMBH eccentricities, and is caused by the tangential form of the drag. Gould & Quillen (2003) show that in a Kepler potential, a drag force due to a background density $\rho \sim r^{-\nu }$ will tend to circularize a stellar orbit if $\nu > 3/2$. Recalling that our drag assumes $\rho\sim r^{-2}$, this behavior is expected. This is to be contrasted with realistic density profiles of the Galactic center used in the cluster core experiments below.
Extremely rich dynamics are often lost in studying dynamical systems statistically. For instance, for a star that has yet to be stripped, its motion around the IMBH is highly non-Keplerian owing to the IMBH’s motion about the SMBH. This can have profound influences on the star’s orbital elements, causing large oscillations in the inclination (relative to the IMBH orbital plane) as seen in Figure \[fig:incp\]. Once the star is stripped, such phenomena are extremely rare, but prior to stripping, this behavior is fairly common for stars beginning on large inclination orbits.
Another phenomenon observed in the simulations is resonant trapping (Murray & Dermott 1999). Immediately after a star gets stripped, it can wander near the IMBH’s $L_4$ and $L_5$ Lagrange points. While there, the star’s orbit is now SMBH-centric, but its orbital elements mirror those of the IMBH. This will not last for long, as the Lagrange points are unstable due to the drag force causing inspiral. However, the star will remain near the Lagrange point for perhaps a few tens to hundreds of years. After the star has wandered too far from the equilibrium points, its orbit will still be roughly that of the IMBH; for slow inspirals, this can cause strong subsequent star-IMBH encounters and change the orbital elements significantly.
Observations of individual particle orbits indicate that resonant trapping occurs in perhaps $1\%$ of the orbits. Resonant trapping can be identified by a combination of a Roche-style criterion and an escape condition: when the escape condition is fulfilled but the Roche condition is not, a flag may be raised and a more detailed examination of the dynamics is undertaken. Often it is the case that when both conditions are satisfied, the star is no longer trapped, although this is not foolproof, and merits further work. Resonant trapping could be of importance in future studies examining the detailed microdynamics of the stripping process, or of the dynamics of few-body exchange, which has direct relevance to stars’ inheritance of orbital parameters from the IMBH.
N-body ($N \sim 50$) {#sec:nbody}
====================
Setup {#sec:nbsetup}
-----
The basic initial setup has previously been described in § \[sec:nummeth\], and for these N-body runs, a number of additional features apply. The cluster cores were generated with virial radius $R_{vir}\sim 0.06\pc$, so that the outermost edge of the core lies slightly interior to the maximum Jacobi radius of any cluster stars. The cluster is initially non-rotating, although a few rotating models were tested, yielding no significant differences. Termination conditions were also similar, although an added condition was the formation of difficult hierarchies including two or more very hard binaries with the IMBH.
In all, $211$ simulations were performed, with most models employing $\sim 50$ particles, although several simulations with $100$ and $200$ stars were tested as well. These larger data sets produced no differences. Multiple sets of initial phase-space coordinates were used, and for each set, several IMBH masses were tested, varying between $1015$ and $5540 M_{\odot}$. Further, for each IMBH mass, three initial eccentricities of $0$, $0.25$, and $0.5$ were used. The SMBH mass was fixed at $4\times 10^6 M_{\odot}$.
All stellar masses were equal, the value being determined by a common set of simulation parameters, and typically ranged between $3$ and $8$ solar masses, depending on the parameters used. There are two complementary reasons why the equal mass case is sufficient here. First, in the absence of a tidal field, the mass segregation timescale for a group of stars this small is very short, of order $5\kyr$, which is an order of magnitude smaller than the orbital period at $1\pc$. Thus, one would expect that such a core is always dominated by high mass stars. The second reason is that the inspiralling cluster scenario posits that a cluster is delivered to the Galactic center after significant mass segregation has occurred in the weaker tidal field far from the GC (see, e.g., McMillan & Portegies Zwart 2003). By that time, the less massive stars, relegated to the halo, have been stripped from the cluster, and those stars that are delivered to the inner parsec are high-mass, and rather uniformly so. Either way, the present simulations have an outer boundary of $1\pc$, and stars that begin their final descent are all of roughly equal mass. Approximately $50$ early experiments incorporating a mass distribution revealed no differences in the final stellar orbital elements or the creation of the various observed structures.
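The comparison between the $\sim 5\kyr$ segregation timescale and the orbital period at $1\pc$ can be checked with a simple Kepler estimate. The helper below and its constants are our own back-of-the-envelope sketch, not part of the simulation code.

```python
import math

def circular_period_yr(r_pc, m_sun):
    """Period of a circular orbit of radius r_pc (parsecs) around a
    point mass m_sun (solar masses), returned in years."""
    G = 4.301e-3           # gravitational constant in pc (km/s)^2 / M_sun
    v_kms = math.sqrt(G * m_sun / r_pc)  # circular speed in km/s
    pc_in_km = 3.0857e13
    yr_in_s = 3.156e7
    return 2.0 * math.pi * r_pc * pc_in_km / v_kms / yr_in_s
```

For $r = 1\pc$ around a $4\times 10^6 M_{\odot}$ SMBH this gives $\approx 4.7\times 10^4$ yr, roughly an order of magnitude longer than the quoted segregation time, consistent with the argument above.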
Two different schemes were used for the background density profile. In the first, the density employed is from Genzel et al. (2003), where the detected stellar background is a broken power-law, $$\label{eq:genz}
\rho (r) = 1.2 \times 10^6 \biggl(\frac{R}{0.4\pc}\biggr)^{-\alpha} M_{\odot} \pc^{-3}$$ where $\alpha$ is $2$ outside $0.4\pc$ and $1.4$ inside this radius. This estimate is based solely on the detected stellar population, and does not account for any dark component. Thus, in the second set of experiments, a putative dark population was added to Eq. \[eq:genz\], with density $$\rho (r) = 1.68 \times 10^5 \biggl(\frac{R}{0.7\pc}\biggr)^{-1.75} M_{\odot} \pc^{-3}$$ which is a Bahcall-Wolf cusp of stellar-mass black holes (Bahcall & Wolf 1976; Miralda-Escude & Gould 2000). In this paper, the addition of a dark-component cusp tests whether there are any significant effects on the properties of the final orbits of the former cluster members.
Results {#sec:nbresults}
-------
In reporting the N-body outcomes, plots are representative of the results, since they are not statistically divergent between runs, with mention of deviations made when appropriate.
### Effect of Cusp
The density enhancement introduced by the dark cusp has two significant effects on the global inspiral parameters, because it affects the inspiral of the IMBH, which in turn dominates the local star-IMBH dynamics. The first is the eccentricity change of the IMBH inspiral, which, if large and positive, would alter any stellar orbits in its vicinity. From analytical estimates, the eccentricity can be expected to decrease if $\nu>3/2$ and increase for $\nu<3/2$ (Gould & Quillen 2003). Outside of $0.4\pc$, the density profile is isothermal, and therefore the IMBH orbit will tend to circularize. After passing through the knee, the density profile is only slightly shallower than the threshold cited above. The presence of the cusp reduces the eccentricity growth of the orbit. The density enhancement due to the cusp is more important inside $\sim 0.02\pc$, which is near the simulation end boundary.
The second effect is much more significant. The presence of the mass distribution of the cusp significantly decreases the inspiral time. For initially circular orbits, the decrease is nearly a factor of $2$. For initially eccentric orbits, the decrease is nearly a factor of $3$; the effect is stronger because eccentric orbits experience a stronger drag during pericenter passage, losing more energy to their surroundings. The implications of this on how and why the massive OB stars can migrate through a variety of mechanisms to the Galactic Center are discussed in more detail in § \[sec:disc\]. However, besides these two effects, there was no statistically significant difference in the results between the simulations with a cusp and without.
### Stellar Transport & Comoving groups {#sec:trans}
The presence of an IMBH is a sufficient mechanism for transporting stars deep into the potential well of the SMBH. In most cases, the infall of the cluster core causes several stars ($10-20\%$ of the original core population) to be transported within $0.3\pc$ of the Galactic Center, and in some cases, further. Figure \[fig:trans0\] shows a typical set of stars which are carried into the potential well. Some stars are transported into small semimajor axis orbits but then are perturbed into larger orbits due to IMBH scattering. Due to the strong tidal field, the IMBH typically loses most of its stars by $0.1\pc$, and falls in alone.
In principle, simple analytical arguments based on tidal radii would allow transport interior to this radius, for more tightly bound systems. However, strong internal dynamics in the core coupled with the inspiral of the IMBH will regulate this effect. The dynamics promote an overall expansion of the system, so that tightly bound core remnants will tend to have their haloes removed by the tidal field. The subsystem then binds further, strengthening the dynamical interactions, possibly leading to the perturbation of a member into a halo orbit. This member is pared from the system by the tidal field, which becomes stronger due to the IMBH inspiral, and the core becomes more tightly bound. This process continues until there is only one star bound to the IMBH. The consequence of this process is that the delivery of multiple stars with small semimajor axes can be damped, since the strong perturbations which kick stars out of the core will push them into larger orbits. Therefore, the deposition of the S-stars by a single IMBH passage seems improbable within this scenario.
Before all the stars get stripped, there will be a period in which a small ($N\sim 5$) number of stars remain bound to the IMBH. These configurations could be considered “comoving”, since while these stars orbit the IMBH, the IMBH orbital velocity dominates that of the stars, and an observer looking at such a configuration from afar would see small variations in the stellar velocities when viewed as a group. An example of this is seen in Figure \[fig:com\], for the case in which there is no Bahcall-Wolf cusp or mass distribution.
The important parameter for the endurance of comoving groups is obviously the IMBH mass, because a larger mass implies a larger Roche lobe. There is no evidence of any effect of the cusp, but that is because the cusp is a continuous structure seen by the cluster core, instead of a more realistic set of discretized masses. In Figure \[fig:trans0\], the IMBH transports stars into orbits of semimajor axis of roughly $0.2\pc$, but even manages to retain multiple stars further in, whether there is a cusp (left panel) or not (right panel).
Observationally, the IMBH is unlikely to be detected directly; its presence would instead be inferred from the gravitational hold it retains on its orbiting stars. In Figure \[fig:com\], we see a typical comoving group from above the inspiral plane. Astrophysical units are used, and the box denotes the IMBH. The structure has a radius of $\sim 10^{-3}\pc$, is located $0.17\pc$ from the GC, and is held together by an IMBH of mass $4062 M_{\odot}$. The edge of the Roche envelope of the IMBH lies $0.018\pc$ from the IMBH, so these stars lie deep within the Roche lobe. Further, analysis of the stellar orbital energies shows that these stars are gravitationally bound to the IMBH. Such structures form with some frequency, in this scenario.
Table \[tbl:com\] shows how the different simulation parameters affected the efficiency of transport. Note that for the modelled masses, the minimal semimajor axis ($0.129\pc$) to which $6$ or more stars are transported corresponds to a IMBH Roche lobe of $0.0145\pc$. Two trends are in evidence: only IMBHs more massive than about $4000 M_{\odot}$ can transport multiple stars within $\sim 0.13\pc$, and while massive IMBHs can make such deliveries, they must do so when their eccentricities are moderate, as no simulation with $e=0.5$ transported stars deeper than $0.185\pc$. These two points may constrain the applicability of the cluster scenario to the Galactic center.
Note that the IRS16SW comoving group has recently been identified less than two arcseconds ($< 0.08 \pc$) from Sgr A\*, while the IRS13E comoving group is about four arcseconds ($\sim 0.16\pc$) away. The current simulations are able to explicitly replicate structures similar to the latter, but not the former, due to weaknesses in the numerical treatment. However, it is unclear which portion of the IRS16SW orbit has been observed; it may be near periastron on an eccentric orbit, in which case its semimajor axis could be as much as a factor of two larger. While these simulations do not reproduce IRS16SW, they suggest that it could be formed given more tightly bound initial cores, which are difficult to simulate with current numerical techniques.
### Disks {#sec:disks}
One goal of this effort is to understand if repeated few-body encounters in strong tidal environments can produce multiple disk populations, possibly counterrotating and/or inclined relative to one another, on realistic timescales. In the Galactic Center, two such disks are claimed to exist inside of $0.4\pc$, and their existence is not understood.
In principle, as the cluster core spirals in, tidal stripping produces a tail or wake of stars. The IMBH orbit dominates the energy and angular momentum of the stellar orbits, and so when stars are stripped they are strewn initially into orbits that reflect the environment in which they were stripped, i.e., the inclination, semimajor axis, and eccentricity of the IMBH. As a consequence, neglecting subsequent encounters, the disks that are produced should be vertically thin. Figure \[fig:disk1\] shows the side profile ($R-z$) of all stripped stars resident in disks. Note that post-stripping encounters can significantly perturb the stellar orbits and erase this history of inspiral. Typical star-star interactions are too weak to produce significant variations in orbital parameters, even considering the large number of weak impulses that occur over the long random walk these parameters endure during their lifetime in the disk. Thus, only IMBH-star interactions are important in randomizing the distribution of stars in the disk.
The strength and direction of these impulses determines the effect of the perturbation. Because the disks should be thin, there are typically no large normal forces involved in an interaction since the stars and IMBH are in nearly the same plane. Thus, one cannot expect that many large inclination orbits will be produced, even by strong interactions. Indeed, Figure \[fig:iemass\] shows that there is no difference in the final eccentricities and inclinations of stars, regardless of the IMBH mass used. If strong perturbations due to IMBH were occurring frequently, there would be some scaling of these values with that mass. Furthermore, resonant trapping by a migrating body can, in principle, create counter-rotating disks (Yu & Tremaine 2001). While some particles are trapped temporarily in our simulations, this mechanism does not appear to operate with any significant efficiency.
Since the disks are thin, the formation of two distinct, counter-rotating, inclined disks is unlikely. However, perturbations do occur, and stars undergoing strong interactions typically end up on high eccentricity, high inclination orbits. In Figure \[fig:eeoi\], the upper right corner contains several stars (order $10$ out of $\sim 2000$) that have such orbits. This ratio (order $0.5\%$) is what one would expect from simple analytical estimates.
While no multiple disk systems were produced, many of the parameters of the typical remnant disk are consistent with the individual observed disks. Figure \[fig:jz\] shows the normalized angular momentum $$\frac{J_z}{J_{z,\mathrm{max}}} = \frac{xv_y-yv_x}{rv_r}$$ against the radial distance from the origin, out to $0.4\pc$. This analysis is similar to that of Genzel et al. (2003). Most stars in the sample are rotating in the same direction with only $1\%$ of stars on counter-rotating orbits. The mean disk opening angle for simulations with no cusp is $13.8{\ensuremath{^\circ}}\pm 2.9{\ensuremath{^\circ}}$, and for simulations including the cusp, the mean disk opening angle is found to be $12.6{\ensuremath{^\circ}}\pm 2.8{\ensuremath{^\circ}}$, so there is no statistical difference between the two. In the figure, the points farthest to the left are singleton stars transported deep into the potential well by the IMBH; these were not included in the means calculated above. Note that there is a gap near $0.05\pc$, which could correspond to the possible inner edge reported by Paumard et al. (2006). This may reflect the formation of a standard core-halo separation in the cluster, formed as a result of internal dynamical relaxation. The inner edge of the disk would then correspond to the removal of the last of the more loosely bound halo stars, which follows naturally from the self-regulation mechanism addressed in § \[sec:trans\]. More experiments are needed to understand this in greater detail.
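As a per-star sketch of how the normalized angular momentum can be evaluated: here we read the normalization as the product of projected radius and projected speed, so that purely tangential orbits give $\pm 1$ and purely radial motion gives $0$. Both this reading and the function below are our own illustration, not the analysis code used for Figure \[fig:jz\].

```python
import math

def jz_normalized(x, y, vx, vy):
    """Projected specific angular momentum normalized to its maximum:
    +1 for purely tangential prograde motion, -1 for retrograde, 0 for
    purely radial motion.  The denominator r*|v| is our reading of the
    normalization used in the Genzel et al. style analysis."""
    return (x * vy - y * vx) / (math.hypot(x, y) * math.hypot(vx, vy))
```

With this convention, counting stars with values below zero gives the counter-rotating fraction quoted in the text.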
The cluster simulations produce remnant stellar disks with surface density $\Sigma \propto r^{-0.75}$. This result appears to be robust with respect to the initial density profile used. Figure \[fig:sig.eps\] shows a comparison of the surface density obtained from the simulations to that of Paumard et al. (2006). A linear fit of the simulated surface density profile finds a best-fit power-law slope of $\alpha=0.74\pm 0.05$. We also tested three other profiles: $1/r$, $1/r^3$, and a Plummer sphere, all three producing final profiles similar to the $r^{-0.75}$. Note that internal dynamical evolution will drive the density profile towards the Bahcall-Wolf law (Bahcall & Wolf 1976; Baumgardt, Makino, & Ebisuzaki 2004). This, followed by tidal stripping via a Roche criterion and deposition into a disk would imply a density $\Sigma \propto r^{-0.75}$.
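The quoted slope comes from a linear fit in log-log space to binned star counts; a minimal sketch of such a fit (the binning choices and names are ours, not the paper's pipeline) is:

```python
import math

def surface_density_slope(radii, nbins=8):
    """Least-squares slope of log(Sigma) against log(r) from
    logarithmically binned star counts, Sigma = N / (pi (r2^2 - r1^2))."""
    rmin, rmax = min(radii), max(radii) * 1.000001  # include outermost star
    edges = [rmin * (rmax / rmin) ** (i / nbins) for i in range(nbins + 1)]
    xs, ys = [], []
    for r1, r2 in zip(edges[:-1], edges[1:]):
        n = sum(1 for r in radii if r1 <= r < r2)
        if n == 0:
            continue
        xs.append(math.log(math.sqrt(r1 * r2)))   # geometric bin center
        ys.append(math.log(n / (math.pi * (r2**2 - r1**2))))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

For radii drawn from a $\Sigma \propto r^{-0.75}$ distribution, this estimator recovers a slope near $-0.75$, modulo Poisson noise in the bins.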
Our finding contrasts with the observed profile of Paumard et al. (2006), who find $\Sigma \propto r^{-2}$. These observations are limited in several respects; most notably, the magnitudes are limited to about $K=13.5$, which therefore includes only the most massive stars, and full three-dimensional orbit information is incomplete with large uncertainties. It is not obvious that any of these limitations are responsible for the disagreement in surface density profile. However, if this discrepancy continues to hold as more refined observations are made, the comparison with our simulated surface density profiles will be a significant constraint on the cluster infall scenario.
As mentioned previously, the IMBH is responsible for assigning orbital parameters to stars immediately after they are stripped, subject to a small scatter about the mean represented by the IMBH values. Their resulting orbits therefore represent a fuzzy memory of the IMBH infall, providing a constraint on the history of the GC. In particular, this shows that the two, roughly coeval, counter-rotating disks at the Galactic center cannot have been formed through the inspiral of one cluster.
### Scattering during inspiral {#sec:scatter}
The results of these simulations have already provided insight into the nature of the environment in which stripped stars may live as the IMBH continues its plunge. Multiple lines of evidence suggest that weak scattering is common once stars are removed from IMBH orbit. On a global scale, Figure \[fig:sca\] shows the relationship between the Tisserand relation immediately after stripping and at the end of the simulation. The Tisserand relation $Q=1/(2a)+\cos{i}\sqrt{a(1-e^2)}$ is an approximate expression of the Jacobi constant in terms of the orbital parameters, and its variation reflects the effect of orbital perturbations due to a passing encounter (e.g., Murray & Dermott 1999). Note that there are three regimes in the left panel. The bottom left is composed of stars with high inclinations, the top right is of stars with small semimajor axes, and the middle of the plot contains the rest. Clearly, this middle section is a ‘hot zone’ for perturbations, since the other two regimes are outside the domain of strong IMBH influence. Strong scattering events clearly occur and change the value of the orbital elements. Evidence has been presented indicating that higher mass IMBHs are more efficient at transporting stars, but here, it is seen that they are less likely to provide significant perturbation to the stars, while lower-mass IMBHs may do a better job. In the context of the Galactic center, IMBHs are a pertinent mechanism for comoving groups, but are unlikely to explain the randomization of orbits seen there.
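The Tisserand parameter can be evaluated directly from the orbital elements; the sketch below writes the expression with explicit parentheses, using dimensionless semimajor axis and inclination in radians (our conventions, for illustration only).

```python
import math

def tisserand(a, e, inc_rad):
    """Tisserand-like parameter Q = 1/(2a) + cos(i) * sqrt(a (1 - e^2)).
    Approximately conserved through a distant encounter, so a jump in Q
    between stripping and the end of a run flags a strong perturbation."""
    return 1.0 / (2.0 * a) + math.cos(inc_rad) * math.sqrt(a * (1.0 - e * e))
```

Comparing `tisserand(...)` evaluated just after stripping with its value at the end of the run is the comparison plotted in Figure \[fig:sca\].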
Figure \[fig:veldisp\] shows the evolution of the z-component velocity dispersion $\sigma_z$ for stripped stars for a typical set of simulations, covering a range of eccentricities and inspiral times. In the left panel, runs with different IMBH masses but zero eccentricity are compared, and in the right panel, the IMBH eccentricity is varied holding the mass constant ($M=4062 M_{\odot}$). The initial large increases are due to the loss of stars from the cluster, and addition to the list of stripped stars. After most or all stars are stripped, $\sigma_z$ remains fairly flat. The large spikes occur when tightly bound stars are stripped or interact through few-body interactions to leave the cluster at high velocity. This is typically followed by the loss of most of the remaining core members, and $\sigma_z$ flattens. On a smaller scale, an analogous and revealing set of evidence comes from an analysis of the changes in the orbital elements of typical stars. Most stars will experience $\sim 20$ strong scatters per $10 \Myr$ simulation, and the nature of these interactions is reflected in how $a$, $e$, and $i$ change. Figure \[fig:scatter\] shows the effect of interactions on a typical orbit. Numerous weak encounters force the orbital parameters into random walks, with stronger encounters more dramatically changing the values. As with any random walk, the final distribution of orbital parameters is uncertain, and is dependent on the nature of the perturbations.
Strong scattering is the cause of the counter-rotation of stars seen in several simulations. The culprit is usually a near passage of the IMBH. Figure \[fig:7pert\] shows the effect of the passage of one of the stars near the binary containing the IMBH. The star nears the binary and is subject to a strong torque. This perturbation is strong enough to send the star into a complex orbit, in which it is subject to additional strong perturbations and passages near the SMBH, which apply additional torque and result in a large inclination orbit. Figure \[fig:morepert\] shows a short time period after the IMBH encounter, when the star has been sent into a difficult orbit, with changing peribothron. Its potential energy spikes, and a plot of the Tisserand $Q$ shows the net orbital changes that occur. At the end of this simulation, this star attains an orbital inclination of $115{\ensuremath{^\circ}}$. This process is responsible for the orbital configurations of most of the counter-rotating stars. It needs to be emphasized that such events are rare; however, this process will tend to randomize the orbits of stars near the Galactic Center. In the present simulations, approximately $1\%$ of stars find themselves in retrograde orbits at the end of the simulation, and in no circumstance are “counter-rotating disks” created.
Stripping
---------
As in the three-body case, the standard escape energy criterion is better at determining actual stripping times, and the Roche criterion should be used solely as an order-of-magnitude estimate. This point is emphasized again here, for the Roche criterion fails badly at determining stripping radii for many more stars in the N-body case. Figure \[fig:n\_roche\] shows that the standard Roche criterion $r\sim R(m/M)^{1/3}$ is an averaged upper limit in this environment. Depending on the particular configuration, tidal forces may pull the star further or closer to the IMBH or SMBH, causing a scatter about the linear trend. Indeed, two-body interactions are responsible for this deviation, since in the dense environment of a cluster core, star-star interactions force stellar orbits to slightly deviate from what would be expected in the three-body problem.
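The order-of-magnitude Roche criterion quoted above is straightforward to encode; the helper below is a sketch with our own naming.

```python
def jacobi_radius(R, m, M):
    """Order-of-magnitude tidal (Jacobi) radius r_J ~ R (m/M)^(1/3) for a
    satellite of mass m orbiting at distance R from a central mass M.
    As discussed in the text, this is only an averaged upper limit."""
    return R * (m / M) ** (1.0 / 3.0)
```

For example, at $R = 0.5\pc$ a $4000 M_{\odot}$ IMBH around a $4\times 10^6 M_{\odot}$ SMBH has $r_J = 0.05\pc$; the scatter about this value in Figure \[fig:n\_roche\] is what the two-body interactions produce.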
Upon stripping, stars can be boosted into near-resonant orbits with the IMBH. It is instructive to follow the orbital evolution of such a star as it interacts with the IMBH potential well. Figure \[fig:strip\] shows the interplay between a star and the IMBH for a few IMBH orbits. Note that the star experiences wild oscillations in its semimajor axis due to the strong perturbing potential of the IMBH, but is not lost from the potential well for some time. Other orbital elements will similarly be affected, and the result is that the memory of the stripping environment (i.e., the IMBH orbital elements) is made fuzzy by such encounters. Note that this occurs in conjunction with the random walks due to small encounters discussed in § \[sec:scatter\].
Discussion {#sec:disc}
==========
The goals of this study were twofold: to study the effectiveness with which an IMBH can transport a handful of stars deep into a SMBH potential well, and in the process, gain insight into the dynamics of young stars there. The results of the three-body integrations suggest that the principal factors that influence the eventual deposition of stars are the IMBH orbital eccentricity and the inspiral speed. Paumard et al. (2006) provide a review of particular features of the observations, and we compare our results to this.
- The observations show two counter-rotating disks, oriented at large angles with respect to one another, with inner radii of about $0.05\pc$. The cluster core simulations produce a single, pronounced disk of stars, but in no case is there any significant counter-rotating disk formed. In order to explain the claimed multiple contemporaneous disks, one would need to invoke the almost simultaneous infall of two distinct clusters.
- The disks are observed to have a well-defined inner radius, or edge. The more populous disk (the clockwise disk) has its edge at 1$''$. Such edges also emerge naturally from the cluster core simulations. They result from the internal dynamical relaxation within the cluster, which sets up a Bahcall-Wolf density cusp. While individual stars may be sufficiently tightly bound to be transported to smaller radii, this edge represents the limit to which any significant cusp containing several stars may be transported.
- Paumard et al. (2006) find that both disks have surface density profiles that scale as $\Sigma \propto r^{-2}$. Our simulations produce disks with surface density $\Sigma \propto r^{-0.75}$. This result appears to be robust with respect to changes in the initial cluster density law. We return to this below.
- There is an absence of stars on larger scales as might be expected from tidal stripping of less tightly bound material (e.g., Kim & Morris 2003, Kim, Figer, Morris 2004). Our simulations are devoted to the cluster core alone and do not address this directly. Simulations of larger clusters suggest that this lack of observed massive stars could be a significant constraint on the cluster scenario (e.g., Portegies Zwart & McMillan 2003). However, the observations are limited to massive stars and this constraint may be avoided if the massive stars in the cluster are initially centrally concentrated (Gürkan & Rasio 2005), in which case they would be stripped only at small radii.
- The disks are observed to have moderate thickness ($14{\ensuremath{^\circ}}\pm 4{\ensuremath{^\circ}}$ for the clockwise disk), which is well produced by our cluster core simulations.
- Most of the stars in the clockwise disk are on low eccentricity orbits, while those in the counter-clockwise system are mostly on eccentric orbits. Taken together, this is consistent with the two rings forming from entirely separate clusters, since the stripped disk stars tend to have eccentricities similar to those of the parent cluster orbit. The fact that the more eccentric orbits lie further out is also consistent with the ability of circular IMBH orbits to transport stars deeper into the potential well.
- Paumard et al. (2006) confirm earlier claims that the IRS13E “clump” corresponds to a real overdensity. Such core remnants emerge naturally from our simulations as well. Further, our simulations mirror the observations in that such associations reside in the tidal tail of stripped stars.
Thus, the cluster core simulations demonstrate an encouraging ability to reproduce several of the principal dynamical features of the observed disks. In particular, the observed inner edge appears naturally and the presence of dynamically long-lived clumps appears generic, although the endurance of such clumps is constrained by high IMBH mass and low eccentricity. Furthermore, we find that the cluster scenario naturally produces the observed thickness of the disks and results in similar eccentricities amongst stars in a given disk. Nevertheless, there are several observed features which remain elusive. In no case do we produce a significant second disk – if both disks are the result of cluster infall, then there must have been two separate clusters.
Furthermore, the resulting surface density profile appears both robust and at odds with the observations. It is a consequence of the internal dynamics of the cluster, and not a result of initial conditions specific to any particular formation scenario. Internal dynamical relaxation leads to a cluster density profile given by the Bahcall & Wolf (1976) law. Tidal stripping of this cusp results in remnant disks with inevitable surface densities of $\Sigma \propto r^{-0.75}$. The same internal density profile also explains the inner edge we observe. The number of surviving cluster stars, assuming a fixed Bahcall-Wolf density profile, scales as $N \propto r^{5/4}$. Thus, if the cluster core contained 100 stars at 1 pc, the radius at which a single star remains bound to the IMBH is $r \sim (1/100)^{4/5} \sim 0.025$ pc. Thus, our simulations show that the gross features of the resulting population can be understood in large part by approximating the internal structure of the cluster as a relaxed Bahcall-Wolf cusp and treating the stripping with a simple Roche lobe criterion.
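The single-star radius estimate at the end of this paragraph can be reproduced in a few lines; the normalization (100 core stars at 1 pc) is the one assumed in the text.

```python
def single_star_radius(n_core, r_norm_pc=1.0):
    """Radius at which N(r) = 1, given the relaxed Bahcall-Wolf scaling
    N(r) = n_core * (r / r_norm)**(5/4) for the surviving cluster stars."""
    return r_norm_pc * (1.0 / n_core) ** (4.0 / 5.0)

r1 = single_star_radius(100)    # 100 core stars at 1 pc, as assumed above
print(f"r(N=1) ~ {r1:.3f} pc")  # ~0.025 pc, matching the quoted estimate
```

The same $N \propto r^{5/4}$ law, projected, also gives the $\Sigma \propto r^{-0.75}$ surface density profile discussed above.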
In addition to the disk stars, there is also the cluster of B-type stars close to the SMBH, the S-star cluster. While our simulations do suggest that an IMBH can carry one or two stars very deep into the potential well, in no case do we find transport of a sufficient number of stars to explain the S-stars as the remnant of a single inspiral. Further, the observed S-stars have varying semimajor axes and eccentricities. These simulations show that at the end of its inspiral, the IMBH orbit has circularized, and stars acquire orbital parameters similar to that of the IMBH upon stripping. This implies that the large variation in the orbital parameters of the S-stars cannot be accounted for simply by their deposition by a cluster infall. Levin & Beloborodov (2003) argue that Lens-Thirring precession may account for some randomization of the orbits, but this can only be effective for stars with initially small semimajor axes and large eccentricities, like SO-2; otherwise, the precession timescales are far too long, compared to the stellar lifetimes. Infalling clusters likely do not survive to small radii, and are efficiently disrupted by strong peribothric tidal stresses when in highly eccentric initial orbits. Thus, the origin and subsequent evolution of the S-stars remains unclear from these simulations, although a suite of scattering experiments is currently underway which may provide some insight.
Finally, the inclusion of a dark-mass cusp, perhaps representing a population of small black holes (Miralda-Escudé & Gould 2000), significantly decreased the inspiral time of the IMBH. This somewhat loosens the constraints on the infalling cluster scenario’s ability to explain the peculiar dynamical environment at the Galactic Center, because ultimately, the scenario is constrained by the ages of the OB stars observed in the inner few arcseconds. Consequently, given such a cusp, the delivery of very young OB stars to the Galactic Center can occur over a longer range of timescales than previously considered, since their $\sim 10\Myr$ measured ages allow them to have spent longer times outside the central parsec. The outer portion of the inspiral thus may proceed more slowly, allowing the cluster time to further relax and mass-segregate.
Comparison to other work
------------------------
A recent paper by Levin, Wu, & Thommes (2005) reports on the simulations of a three-body problem similar to that treated in § \[sec:3bresults\]. They investigate the problem using a symplectic integrator in extended phase space, with an ad hoc treatment of close-encounters, which are the bane of many symplectic algorithms. The IMBH orbit analytically decays, and all stars are set to be massless with the same initial Jacobi radius. Further, their choice of inspiral timescale is in the range $1000-10000$ IMBH orbits. They find that for circular IMBH inspirals, stars can achieve significant eccentricities but only low inclinations, even for slower inspiral. These results are mirrored by those reported in § \[sec:3bresults\]. For eccentric inspirals, their results show two groups of stars, one in a thin disk of half-opening angle $10^{\circ}$, and the other with inclinations of $10^{\circ}<i<180^{\circ}$, but with randomized orbital parameters. The former, but not the latter, are reproduced here as well. Two points of departure may be their significantly longer inspiral times (roughly an order-of-magnitude) and the fact that their IMBH eccentricity is not allowed to evolve. The first point allows more scattering events to occur, because the presence of the IMBH in adiabatically similar orbits will tend to produce more close encounters with stars recently stripped from the cluster into similar orbits. This causes additional perturbations to orbital elements, lengthening their respective random walks and thus possibly increasing their net change.
As for the second point, if the IMBH eccentricity is not allowed to vary, one is dismissing the true nature of strong interactions that may occur, and in particular, the often violent recoil of the IMBH due to ejections. For an IMBH-star binary, an ejection may occur when an intruder makes a close passage to the binary, with kinetic energy exceeding the binary’s binding energy. Simple energy and momentum conservation arguments show that for typical systems simulated in this paper ($v_{BH}\sim 100 km/s$, $a_b\sim 10^{-4}\pc$), the perturbation $\delta{v}$ of the IMBH during such an event can be of order several percent of its orbital speed $v$. The change in energy is linearly proportional to the change in semimajor axis, $v\delta{v} \sim a^{-2}\delta{a}$, and the eccentricity is affected similarly. As a further consequence, the rest of the stars bound to the IMBH must respond to its perturbed motion by altering their orbits, and it is possible that any of these stars, perhaps marginally bound to the IMBH, will be lost ultimately due to the destruction of the original binary. Thus, although the mass ratio in this problem is large, some important dynamics are missed in the test particle approximation.
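The quoted recoil magnitude can be checked with a rough momentum-conservation estimate: a star ejected at roughly the binary's internal orbital speed $v_{\rm orb}=\sqrt{GM_{\rm BH}/a_b}$ imparts $\delta v \sim (m_*/M_{\rm BH})\,v_{\rm orb}$ to the IMBH. The IMBH and stellar masses below are illustrative assumptions; only $v_{BH}$ and $a_b$ are taken from the text.

```python
import math

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

# Illustrative assumptions: a 1e3 Msun IMBH paired with a 10 Msun star;
# a_b = 1e-4 pc and v_BH = 100 km/s are the values quoted in the text.
M_bh, m_star, a_b, v_bh = 1.0e3, 10.0, 1.0e-4, 100.0

v_orb = math.sqrt(G * M_bh / a_b)  # internal orbital speed of the binary
dv = (m_star / M_bh) * v_orb       # IMBH recoil when the star is ejected
print(f"delta v ~ {dv:.1f} km/s (~{100.0 * dv / v_bh:.0f}% of v_BH)")
```

For these assumed masses the recoil is a few km/s, i.e., of order a few percent of the IMBH orbital speed, consistent with the "several percent" perturbation argued for above.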
Given reasonable timescales, stars typically experience thousands of weak perturbations during a simulation, but only $\sim 20$ strong deflections. Since these strong variations will dominate the net change, an analytic prescription for the IMBH orbit may yield incorrect results. Comparison in further detail is probably superfluous, as a detailed discussion of the overall feasibility of the scenario requires consideration of the internal dynamics as well, neglected in any three-body treatment and the motivation for our N-body studies in § \[sec:nbody\].
A number of papers have supported the idea that infalling star clusters are responsible for some, if not all, of the kinematic structures at the Galactic center, but not without exception. In particular, Paumard et al. (2006) argue that IRS13E is an overdensity, potentially bound by the presence of an IMBH. Schödel et al. (2005), on the other hand, argue that the velocity dispersion of the IRS13E stars requires too large an IMBH mass and that there is no additional observational evidence (such as X-ray flares) to support the presence of an IMBH. Nevertheless, our calculations demonstrate that such configurations are dynamically possible, as tightly bound groups can be transported to where comoving groups like IRS13E are found.
A recent series of papers by Nayakshin and collaborators (Nayakshin 2005, Nayakshin et al. 2005, Nayakshin & Sunyaev 2005, Nayakshin & Cuadra 2005) also argues against the cluster model, in favor of an AGN-like accretion disk scenario. Nayakshin & Sunyaev (2005) note that the lack of hard X-ray emission from the GC region shows that there are few young, intermediate mass stars there, as might be expected from stars stripped from the putative inspiralling cluster. The degree to which this datum applies to the cluster scenario is unclear, as it depends on uncertain assumptions regarding the final mass function of cluster stars (which may very well be top-heavy at the start and further biased by the stellar merging that is an integral part of the IMBH formation scenario) as well as the final fraction of cluster mass that ends up in the IMBH. Indeed, the lack of young stars is an important constraint on both cluster and disk scenarios, suggesting that the young star mass function is top-heavy, regardless of the kinematic origin of the population. The lack of any evidence for a stripped stellar population at large radii (Paumard et al 2006) is also an important constraint, but limited at present by the restricted area coverage of the observations.
Portegies Zwart et al. (2006) present results of large cluster inspiral simulations, from which they conclude that comoving groups can occur as IMBHs, formed through merger runaways, carry cluster remnants into the potential well. The present simulations confirm that this occurs, although the masses we require for transporting multiple stars deep into the well are somewhat above their canonical $1000 M_{\odot}$. They also claim that several IMBHs will be in the inner few milliparsecs at any given time. This would provide a possibly efficient randomization mechanism for stellar orbits there, as well as a truncation mechanism for stellar disks. However, at this time there is no clear observational evidence for a single IMBH, much less many; further, constraints on the Keplerian S-star orbits might rule out this possibility (Ghez et al. 2005).
Outlook
-------
In conclusion, we find that the simplest version of the IMBH+infalling cluster scenario can naturally produce several of the dynamical features of the population of young stars at the Galactic center, such as the disk thickness, the apparent inner edge, and the occurrence of dynamically long-lived clumps. The enthusiasm resulting from this agreement is, however, tempered somewhat by a few discrepancies that remain. Most important of these are the differences between the observed and model surface density profiles and the apparent inability to transport a significant population of stars close enough to explain the S-star population. Whether these represent the failure of the scenario as a whole, limitations of the numerical treatment, or simply an incompleteness of the model remains to be seen.
SB thanks the members of the UCLA Galactic Center group for useful comments on earlier drafts of this paper, and Sverre Aarseth for advice on the intricacies of NBODY6.
Aarseth, S.J. 2003, Gravitational N-body simulations (Cambridge: Cambridge University Press)
Bahcall, J. N. & Wolf, R. A. 1976, , 209, 214
Baumgardt, H., Makino, J., Ebisuzaki, T. 2004, , 613, 1133
Binney, J. and Tremaine, S. 1987, Galactic Dynamics (Princeton: Princeton University Press)
Chandrasekhar, S., & Von Neumann, J. 1943, , 97, 1
Del Popolo, A. & Gambera, A. 1999, A&A, 342, 34
Eisenhauer, F., et al. 2005, , 628, 246
Gerhard, O. 2001, , 546, L39
Genzel, R., Pichon, C., Eckart, A., Gerhard, O. E. & Ott, T. 2000, , 317, 348
Genzel, R., et al. 2003, , 594, 812
Ghez, A. M. et al. 2003, , 586, L127
Ghez, A. M. et al. 2005, , 620, 744
Goodman, J. 2003, , 339, 937
Gürkan, A. & Rasio, F. 2005, , 628, 236
Hansen, B. and Milosavljević, M. 2003, , 593, L77
Kim, S., Morris, M. 2003, , 597, 312
Kim, S., Figer, D., and Morris, M. 2004, , 607, L123
Kolykhalov, P. & Sunyaev, R. 1980, SvAL, 6, 357
Kroupa, P., Tout, C.A., Gilmore, G. 1993, , 262, 545
Levin, Y., and Beloborodov, S. 2003, , 590, L33
Levin, Y., Wu, A., Thommes, E. 2005, , 635, 341
Lu, J.R., Ghez, A.M., Hornstein, S.D., Morris, M., Becklin, E.E. 2005, , 625, L51
Maillard, J.P., Paumard, T., Stolovy, S.R., and Rigaut, F. 2004, A&A, 423, 155
McMillan, S.L. and Portegies Zwart, S.F. 2003, , 596, 314
Miralda-Escudé, J. & Gould, M. 2000, , 545, 847
Morris, M. & Serabyn, E. 1996, ARA&A, 24, 345
Murray, C.D. and Dermott, S.F. 1999, Solar System Dynamics (Cambridge: Cambridge University Press)
Nakano, T., and Makino, J. 1999, , 525, L77
Nayakshin, S. & Cuadra, J. 2004, A&A, 437, 437
Nayakshin, S., Dehnen, W., Cuadra, J., Genzel, R. 2005, preprint (astro-ph/0511830)
Nayakshin, S. & Sunyaev, R. 2005, , 364L, 23
Nayakshin, S. & Sunyaev, R. 2005, preprint (astro-ph/0507687)
Portegies Zwart, S., Baumgardt, H., McMillan, S., Makino, J., Hut, P. 2005, preprint (astro-ph/0511397)
Reid, M.J. & Brunthaler, A. 2004, , 616, 872
Sanders, R.H. 1992, Nature, 359, 131
Sanders, R.H. 1998, MNRAS, 294, 35
Shlosman, I. & Begelman, M. 1989, , 341, 685
Schödel, R., Eckart, A., Iserlohe, C., Genzel, R., Ott, T. 2005, , 625, L111
Spinnato, P.F., Fellhauer, M. and Portegies Zwart, S.F. 2003, , 22, 32
Spitzer, L. 1987, Dynamical evolution of globular clusters (Princeton: Princeton University Press)
Yu, Q. & Tremaine, S. 2001, A&A, 121, 1736
| {
"pile_set_name": "ArXiv"
} |
---
address: |
Laboratori Nazionali dell'INFN, Via E.Fermi 40, I-00044 FRASCATI\
and CERN, EP Division\
E-mail: monica.pepe.altarelli@cern.ch
author:
- Monica Pepe Altarelli
title: Higgs Searches and prospects from LEP2
---
Introduction
============
After reviewing the indirect information on the Higgs mass based on precise electroweak measurements performed at LEP1, SLD and at the TEVATRON, I will discuss the mechanisms of Higgs production and decay and the strategy adopted to search for the neutral Higgs boson (in the SM and in the MSSM) at LEP2 [@reviews]. I will summarise the results based on the analysis of approximately 170 ${\mbox{$\rm pb$} }^{-1}$ collected by each LEP experiment at ${\sqrt{s}}=189$ [ ]{}, updated to the more recent Winter Conferences numbers [@felcini]. Finally, I will briefly discuss the prospects for Higgs discovery at LEP2.
Higgs mass from precision electroweak measurements and from theoretical arguments
=================================================================================
The aim of precision electroweak tests is to probe the SM beyond the tree level plus pure QED and QCD corrections and to derive constraints on its fundamental parameters. Through loop corrections, the SM predictions for the electroweak observables depend on the top mass via terms of order $\rm{G_F}{M_{\mathrm{t}}}^2$ and on the Higgs mass via logarithmic terms. Therefore, from a comparison between the theoretical predictions [@pre_calc], computed to a precision sufficient to match the experimental capabilities, and the data for the numerous observables which have been measured, the consistency of the theory is checked and constraints on ${M_{\mathrm{H }}}$ are placed, once the measurement of ${M_{\mathrm{t}}}$ from the TEVATRON is input. The present 95% C.L. upper limit on the Higgs mass in the SM is [@mh_smfits; @felcini] $$\label{mh_up}
{M_{\mathrm{H }}}< 220\,{\mbox{${\rm {GeV}}/c^2$} }\,,$$ if one makes due allowance for unknown higher loop uncertainties in the analysis. The corresponding central value is still rather imprecise: $${M_{\mathrm{H }}}= 71^{+75}_{-42}\pm5\,{\mbox{${\rm {GeV}}/c^2$} }\,.$$ The range given by Eq.\[mh\_up\] may be compared with the one derived from theoretical arguments [@hambye]. It is well known that in the SM with only one Higgs doublet a lower limit on the Higgs mass ${M_{\mathrm{H }}}$ can be derived from the requirement of vacuum stability. This limit is a function of the energy scale $\Lambda$ where the model breaks down and new physics appears. Similarly an upper bound on ${M_{\mathrm{H }}}$ is obtained from the requirement that up to the scale $\Lambda$ no Landau pole appears. If, for example, the SM has to remain valid up to the scale $\Lambda\simeq{\rm M_{GUT}}$, then it is required that $135<{M_{\mathrm{H }}}<180~{\mbox{${\rm {GeV}}/c^2$} }$.
In the MSSM two Higgs doublets are introduced, in order to give masses to the up-type quarks on the one hand and to the down-type quarks and charged leptons on the other. The Higgs particle spectrum therefore consists of five physical states: two CP-even neutral scalars (h, H), one CP-odd neutral pseudo-scalar (A) and a charged Higgs boson pair ($\rm{H}^{\pm}$). Of these, h and A could be detectable at LEP2 [@yellow]. In fact, at tree-level h is predicted to be lighter than the Z. However, radiative corrections to ${M_{\mathrm{h}}}$ [@ellis], which are proportional to the fourth power of the top mass, shift the upper limit of ${M_{\mathrm{h}}}$ to approximately 135 [ ]{}, depending on the MSSM parameters.
Higgs production and decay
==========================
At LEP2, the dominant mechanism for producing the standard model Higgs boson is the so-called Higgs-strahlung process ${\mathrm{e}^+\mathrm{e}^-}\to$ HZ [@khoze; @bjorken], with smaller contributions from the WW and ZZ fusion processes leading to H$\nu_{\rm{e}}\bar{\nu}_{\rm{e}}$ and H${\mathrm{e}^+\mathrm{e}^-}$ final states, respectively. A sizeable cross section (a few times 0.1 pb) is obtained up to ${M_{\mathrm{H }}}\sim {\sqrt{s}}- {M_{\mathrm{Z}}}$, so that an energy larger than 190 [ ]{}is needed to extend the search above ${M_{\mathrm{H }}}\simeq {M_{\mathrm{Z}}}$. For example, the production cross section at ${\sqrt{s}}=189$ GeV for ${M_{\mathrm{H }}}=95$ [ ]{}is 0.18 pb, which for an integrated luminosity $\cal{L}$=170 ${\mbox{$\rm pb$} }^{-1}$/exp. gives 30 signal events per experiment.
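The event count quoted at the end of this paragraph is simply the cross section times the integrated luminosity:

```python
# Expected signal yield N = sigma * L, using the numbers quoted above.
sigma_pb = 0.18      # HZ cross section at sqrt(s) = 189 GeV, m_H = 95 GeV/c^2
lumi_inv_pb = 170.0  # integrated luminosity per experiment, in pb^-1

n_signal = sigma_pb * lumi_inv_pb
print(f"N = {n_signal:.1f} signal events per experiment")  # ~30, as quoted
```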
For the MSSM Higgs the main production mechanisms are the Higgs-strahlung process ${\mathrm{e}^+\mathrm{e}^-}\to$ hZ, as for the SM Higgs, and the associated pair production ${\mathrm{e}^+\mathrm{e}^-}\to$ hA [@ha-prod]. The corresponding cross sections may be written in terms of the SM Higgs-strahlung cross section, $\sigma^{\rm{SM}}$, and of the cross section $\sigma^{\rm{SM}}_{{{\nu}\overline{\nu}}}$ for the process $\rm{Z}^*\to{{\nu}\overline{\nu}}$ as $$\begin{aligned}
\label{Zh-hA}
\sigma({\mathrm{e}^+\mathrm{e}^-}\to\rm{Zh}) = & \rm{sin}^2(\beta-\alpha)\,\sigma^{\rm{SM}} \\
\sigma({\mathrm{e}^+\mathrm{e}^-}\to\rm{hA}) \propto & \rm{cos}^2(\beta-\alpha)\,\sigma^{\rm{SM}}_{{{\nu}\overline{\nu}}}. \nonumber \end{aligned}$$ The parameter $\rm{tan}\beta$ gives the ratio of the vacuum expectation values of the two Higgs doublets and $\alpha$ is a mixing angle in the CP-even sector.
The Higgs-strahlung hZ process occurs at large $\rm{sin}^2(\beta-\alpha)$, i.e., at small $\rm{tan}\beta$. Conversely, at small $\rm{sin}^2(\beta-\alpha)$, i.e., at large $\rm{tan}\beta$, when hZ production dies out, the associated hA production becomes the dominant mechanism with rates similar to the previous case. In this region the masses of h and A are approximately equal.
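Since $\rm{sin}^2(\beta-\alpha)+\rm{cos}^2(\beta-\alpha)=1$, the two production channels in Eq. (\[Zh-hA\]) are complementary, as this paragraph describes. A minimal numerical sketch of that trade-off follows; both reference cross sections are set to unity as placeholders, not measured values, and the proportionality factor in the hA rate is ignored.

```python
sigma_sm = 1.0     # placeholder for sigma^SM (hZ reference cross section)
sigma_sm_nn = 1.0  # placeholder for sigma^SM_nunubar (hA reference)

for s2 in (0.1, 0.5, 0.9):              # sample values of sin^2(beta - alpha)
    rate_hz = s2 * sigma_sm             # Higgs-strahlung weight
    rate_ha = (1.0 - s2) * sigma_sm_nn  # pair-production weight, cos^2 = 1 - sin^2
    print(f"sin^2(b-a) = {s2:.1f}: hZ weight {rate_hz:.1f}, hA weight {rate_ha:.1f}")
```

Whichever channel is suppressed by the mixing angle, the other is enhanced, which is why the combination of hZ and hA searches covers both the small and large $\rm{tan}\beta$ regimes.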
For masses below $\sim 110~$ [ ]{}, the SM Higgs decays into ${{\rm b}\overline{\rm b}}$ in approximately 85% of the cases and into ${\tau^+\tau^-}$ in approximately 8% of the cases. Similar branching ratios (BR) are expected for the MSSM Higgs bosons. Above ${M_{\mathrm{H }}}\sim 135$ [ ]{}, the BR into W and Z pairs becomes dominant.
Searches at LEP2
================
While at LEP1 energies the signal-to-noise ratio was as small as $10^{-6}$ due to the very high ${{\rm q}\overline{\rm q}}$ cross section, at LEP2 the signal-to-noise ratio is much more favourable, increasing to $\simeq1\%$. In order to reduce this background, mainly due to W pair production, ${{\rm q}\overline{\rm q}}$ (with two gluons or two additional photons in the final state) and ZZ events, use is made of b-tagging techniques which exploit the large BR of the Higgs into ${{\rm b}\overline{\rm b}}$. For ${M_{\mathrm{H }}}\simeq{M_{\mathrm{Z}}}$, as is the case for the expected experimental sensitivity, ZZ production represents an irreducible source of background since the Z decays into ${{\rm b}\overline{\rm b}}$ in 15% of the cases.
The following event topologies are studied:
- The leptonic channel (Z$\to {\mathrm{e}^+\mathrm{e}^-}, {\mu^+\mu^-}$, H$\to{{\rm b}\overline{\rm b}}$) which represents $7\%$ of the Higgs-strahlung cross section. These events are characterised by two energetic leptons with an invariant mass close to ${M_{\mathrm{Z}}}$ and a recoil mass equal to ${M_{\mathrm{H }}}$. Because of the clear experimental signature, no b-tag is necessary and therefore the signal efficiency is high, typically $\sim75\%$.
- The missing energy channel (Z$\to {{\nu}\overline{\nu}}$, H$\to{{\rm b}\overline{\rm b}}$) comprising $\simeq20\%$ of the Higgs-strahlung cross section. This channel is characterised by a missing mass consistent with ${M_{\mathrm{Z}}}$ and two b-jets. The selection efficiency is $\simeq35\%$.
- The four jet channel (Z$\to {{\rm q}\overline{\rm q}}$, H$\to{{\rm b}\overline{\rm b}}$) which is not as distinctive as the two previous topologies but compensates for this drawback with its large BR of $\simeq64\%$. The efficiency for this channel is typically $\simeq40\%$.
- The ${\tau^+\tau^-}{{\rm q}\overline{\rm q}}$ channel (Z$\to {\tau^+\tau^-}$, H$\to{{\rm q}\overline{\rm q}}$ and vice-versa) with a $\simeq9\%$ BR. The event topology includes two hadronic jets and two oppositely-charged, low multiplicity jets due to neutrinos from the $\tau$ decays. The signal efficiency is of the order of 25%.
The b-tagging algorithms are based on the long lifetime of weakly decaying b-hadrons, on jet shape variables such as charged multiplicity or boosted sphericity and on high $p_t$ leptons from semileptonic b decays. The b-jet identification is improved by combining information from the different b-tagging algorithms with tools like neural-networks and likelihoods. Typically, for a 60% signal efficiency, the WW background, which has no b-content, is suppressed by a factor over 100, and the ${{\rm q}\overline{\rm q}}$ and ZZ backgrounds by approximately a factor 10. With respect to the b-tagging algorithms developed for the measurement at LEP1 of $\rm{R_b}$, the b fraction of Z hadronic decays, the performances at LEP2 have improved by almost a factor of 2, due to vertex detectors with an extended solid angle coverage and to more efficient b-tagging techniques.
All the analyses developed for the standard model Higgs produced via the Higgs-strahlung mechanism can be used with no modification for the supersymmetric case, provided that the Higgs decays to standard model particles (${{\rm b}\overline{\rm b}}$, ${\tau^+\tau^-}$). The results can then be reinterpreted in the MSSM context, by simply rescaling the number of expected events by the factor $\rm{sin}^2(\beta-\alpha)$.
For the pair production process, the signal consists of events with four b-quark jets or a ${\tau^+\tau^-}$ pair recoiling against a pair of b-quark jets.
Results and prospects
=====================
Table \[tab:res\] shows the number of selected events in the data for the SM Higgs search, the expected number of background events and the expected numbers of signal events assuming ${M_{\mathrm{H }}}=95$ [ ]{} [@felcini; @al_moriond; @del_moriond; @l3_moriond; @op_moriond].
$n_{\rm obs}$ $n_{\rm back}$ $n_{\rm sig}$
--------------------------------------- --------------- ---------------- ---------------
ALEPH 53 44.8 13.8
DELPHI 26 31.3 10.1
L3 30 30.3 9.9
OPAL 50 43.9 12.6
Total 159 150 46.4
$\Delta{M_{\mathrm{H }}}=92-96$ [ ]{} 47 37.5 24.6
: Standard Model Higgs search. Number of observed events in the data $n_{\rm obs}$, expected number of background events $n_{\rm back}$ and expected numbers of signal events $n_{\rm sig}$ assuming ${M_{\mathrm{H }}}=95$ [ ]{}for the four LEP experiments and for their combination. Also shown are the number of events observed and expected by the four experiments combined in the mass window $\Delta{M_{\mathrm{H }}}=92-96$ [ ]{}.[]{data-label="tab:res"}
As can be seen from Table \[tab:res\], an excess of events is observed by ALEPH [@al_moriond] and OPAL [@op_moriond], which, in the case of OPAL, is concentrated in the mass region around ${M_{\mathrm{H }}}\simeq{M_{\mathrm{Z}}}$, while for ALEPH it is distributed over higher masses, typically $\geq95$ [ ]{}. These results translate into the lower limits shown in Table \[tab:lim\], together with the sensitivity (expected limit) of each experiment.
-------- --------------- --------------
Observed Expected
limit ([ ]{}) limit([ ]{})
ALEPH 90.2 95.7
DELPHI 95.2 94.8
L3 95.2 94.4
OPAL 91.0 94.9
-------- --------------- --------------
: Observed 95% C.L. lower limits on ${M_{\mathrm{H }}}$. Also shown are the limits predicted by the simulation if no signal were present. []{data-label="tab:lim"}
Table \[tab:MSSM\_lim\] shows the preliminary 95% C.L. lower limits on ${M_{\mathrm{h}}}$ and ${M_{\mathrm{A}}}$ for the four LEP experiments [@felcini; @al_moriond; @del_moriond; @l3_moriond; @op_moriond], as well as the derived excluded ranges of $\tan\beta$ for both no mixing and maximal mixing in the scalar-top sector.
-------- ---------------------------- ---------------------------- --------------------- -----------------------
${M_{\mathrm{h}}}$ ([ ]{}) ${M_{\mathrm{A}}}$ ([ ]{}) $\tan\beta$ $\tan\beta$
max. mixing no mixing
ALEPH 80.8 81.2 - $1<\tan\beta<2.2$
DELPHI 83.5 84.5 $0.9<\tan\beta<1.5$ $0.6<\tan\beta<2.6$
L3 77.0 78.0 $1.<\tan\beta<1.5$ $1.<\tan\beta<2.6$
OPAL 74.8 76.5 - $0.81<\tan\beta<2.19$
-------- ---------------------------- ---------------------------- --------------------- -----------------------
: Observed 95% C.L. lower limits on ${M_{\mathrm{h}}}$ and ${M_{\mathrm{A}}}$. Also shown are the derived excluded ranges of $\tan\beta$. The mass limits are given for $\tan\beta>1$, except for those of DELPHI, given for $\tan\beta>0.5$. []{data-label="tab:MSSM_lim"}
In the years 1999 to 2000 LEP2 is expected to deliver a luminosity larger than 200 $\rm{pb}^{-1}$ per experiment at a centre-of-mass energy eventually as high as $\sim 200$ GeV. These data should allow the discovery of a SM Higgs of 107 [ ]{}or the exclusion of a Higgs lighter than $\sim$108 [ ]{} [@lellouch; @chamonix]. This is a particularly interesting region to explore, given the present indication for a light Higgs from the standard model fit of the electroweak precision data. The sensitivity to the Higgs in the MSSM will reach $\sim90$ [ ]{}for the high $\tan\beta$ region and $\sim108$ [ ]{}for $\tan\beta\simeq1$, therefore allowing good coverage of the MSSM plane.
Acknowledgments {#acknowledgments .unnumbered}
===============
I would like to thank Cesareo Dominguez and Raul Viollier for their great hospitality and excellent organization of the Workshop and “Maestro” Patrick Janot for his precious advice and for carefully reading this manuscript.
[99]{} For other more detailed reviews on the subject, see, [*e.g.*]{} P.Janot in [*Perspectives on Higgs Physics II*]{}, Advanced Series on Direction in High Energy Physic, Vol.17 (1997), ed. by G.L.Kane, 104;\
F.Richard, hep-ex/9810045, 28 Oct. 1998.
M.Felcini, Rencontres de Moriond on ElectroWeak Interactions and Unified Theories, 13-20 March 1999, Les Arcs, France.
, ed. by D.Bardin, W.Hollik and G.Passarino, CERN Yellow Report 95-03, March 1995 and references therein.
LEP EW Working Group, available at\
http://www.cern.ch/LEPEWWG/plots/winter99/.
See, [*e.g.*]{}, T.Hambye, K.Riesselmann, DESY-97-152, Aug. 1997.
See, [*e.g.*]{} [*Physics at LEP2*]{}, ed. by G.Altarelli, T.Sjostrand and F.Zwirner, CERN 96-01, Vol.1 (1996) 351.
J.Ellis, G.Ridolfi and F.Zwirner, [*Phys.Lett.*]{} [**B257**]{} (1991) 83; [*Phys.Lett.*]{} [**B262**]{} (1991) 477.
B.L.Ioffe and V.A.Khoze, Sov. J. Part. Nucl. 9 (1978) 50.
J.D.Bjorken, Proc. Summer Institute on particle Physics, SLAC Report 198 (1976).
J.F.Gunion and H.E.Haber, [*Nucl.Phys.*]{} [**B272**]{} (1986) 1; [*Nucl.Phys.*]{} [**B278**]{} (1986) 449 and [*Nucl.Phys.*]{} [**B307**]{} (1988) 445.
ALEPH Collaboration, Contribution to 1999 Winter Conferences, ALEPH 99-007, CONF 99-003, March 1999.
DELPHI Collaboration, Contribution to 1999 Winter Conferences, DELPHI 99-8 CNF 208, March 1999.
L3 Collaboration, Contributions to 1999 Winter Conferences, L3 Note 2382, 12 March 1999; L3 Note 2383, 15 March 1999.
OPAL Collaboration, Contribution to 1999 Winter Conferences, OPAL PN382, March 12, 1999.
E.Gross, A.L.Read and D.Lellouch, CERN-EP/98-094.
P.Janot in [*Proceedings of the Workshop on LEP-SPS Performance*]{}, Chamonix IX, Jan. 1999, 222.
| {
"pile_set_name": "ArXiv"
} |
---
author:
- Denija Crnojević
bibliography:
- 'biblio.bib'
title: Resolved Stellar Populations as Tracers of Outskirts
---
The Importance of Haloes {#intro}
========================
Our understanding of galaxy formation and evolution has dramatically evolved in the past fifty years. The first and simplest idea for the formation scenario of our own Milky Way (MW) Galaxy was put forward by [@eggen62], who proposed the bulk of a stellar halo to be formed in a rapid collapse of gas in the protogalaxy. This scenario, often referred to as “monolithic” collapse, is a dissipative process and takes place on dynamical timescales of the order of $\sim10^8$yr. This process gives birth to a metal-poor stellar component in the halo outer regions, while the inner regions ends up being more metal-rich due to the reprocessing of the gas as it collapses deeper into the protogalaxy potential well. This idea was later challenged by an alternative explanation, based on the observation that globular clusters (GCs) at different Galactocentric distances have a wide range of metallicities. In this scenario, the halo is formed on longer timescales ($\sim10^9$yr) and, instead of being a self-contained system, it comes together as the product of several protogalactic fragments (@searle78). These fragments can be pre-enriched before they are accreted. While both scenarios are capable of explaining many observed quantities of the Galactic halo, they cannot individually give a comprehensive picture (@norris91 [@chiba00]), which has led to the development of hybrid “two-phase” models. In the latter, the inner Galaxy regions are formed in a first phase as a result of a monolithic-like process, while the outer halo regions are built up over the Galaxy’s lifetime through dissipationless accretion events (@freeman02).
In the past couple of decades, the most widely accepted paradigm of the hierarchical Lambda-Cold Dark Matter ($\rm \Lambda$CDM) structure formation model has prevailed, favouring the predominance of merger and accretion events in the build-up of galactic haloes (@white91 [@bullock05; @springel06; @johnston08]). These models predict the ubiquitous presence of haloes, which are characterized by old and metal-poor populations and often show signs of recent interactions, in contrast with the smooth haloes predicted by dissipative models (@bullock05 [@abadi06; @font06]). The interaction events provide a mine of information on the assembly of haloes: dynamical timescales become relatively long (up to several Gyr) in the outer regions of a galaxy, and thus accretion/merger events that occurred a long time ago are often still visible as coherent structures like disrupting galaxies or streams, which readily testify to the past assembly history of their host. The assembly itself depends on a variety of factors, such as number, mass, stellar content and structural properties of the accreted satellites, as well as orbital properties, timing and energy of the accretion event. Even when the progenitor is completely dissolved in the host’s halo (which is particularly true in the inner halo regions where dynamical timescales are relatively short), its stripped stellar content still retains a characteristic coherence in velocity space as well as in metallicity content, thus giving important clues about the progenitor’s properties. Observing the stellar “fossils” that populate galaxy haloes thus offers a unique opportunity to reconstruct the modes, timing, and statistics of the halo formation process.
Besides being taletellers of their host system’s merger history, the shape and size of haloes also hold vital clues to the process of galaxy formation. In particular, they can teach us about the primordial power spectrum of density fluctuations at the smallest scales; about the reionization process, which would lead to faint and concentrated haloes if star formation was suppressed early in low-mass dark matter (DM) subhaloes; or about the triaxiality of DM haloes, which are predicted to be more flattened for dissipationless formation scenarios (@abadi06). Despite only accounting for a mere $\sim1\%$ of a galaxy’s total mass (e.g., @morrison93), extended haloes are clearly extremely valuable to test and refine theoretical predictions on the halo assembly process. Due to their extreme faintness, however, haloes have not been fully exploited as key tests of galaxy formation models: they are not easily detected above the sky level, i.e., surface brightness values of $\mu_V\sim25$magarcsec$^{-2}$, posing a serious observing challenge to their investigation. Cosmological simulations predict the majority of past and ongoing accretion events to have surface brightness values well below this value (e.g., @bullock05). According to some models, reaching a surface brightness of $\mu_V\sim29$magarcsec$^{-2}$ should allow the detection of at least one stream per observed galaxy (@johnston08 [@cooper10]). How is it then possible to extract the information locked in the faint outskirts of galaxies?
Resolved Stellar Populations {#rsp}
----------------------------
The best method to study faint haloes and their substructure in nearby galaxies is to resolve individual stars. Even when sparse and faint, resolved stars can be individually counted, and a stellar number density can easily be converted into a surface brightness value. When the Galactic extinction presents a high degree of spatial inhomogeneity (possibly mimicking faint irregular substructures), and the sky level is higher than the integrated light signal coming from extremely faint sources, resolved populations provide a very powerful means to trace them. This method is not free from complications: there will always be contamination coming both from foreground Galactic stars as well as from background unresolved galaxies. This can be accounted for statistically, by observing “field” regions away from the main target and quantifying the contaminants, while a direct confirmation of a star’s membership requires spectroscopy. At the same time, resolving individual stars poses constraints on the inherent nature and on the distance of the putative targets: for systems where the stellar density is so high that stars fall on top of each other on the sky, the “crowding” prevents the resolution of individual objects. This can of course occur also in the case of a relatively sparse galaxy which has a large line-of-sight distance, so that the stars are packed in a small region of the sky. Distance is also the principal enemy of depth: the larger the distance, the brighter the detection limit, i.e., the absolute magnitude/surface brightness that we can reach for a fixed apparent magnitude. Nonetheless, resolved stellar populations are able to deliver powerful information for galaxies located within $\sim10$Mpc, i.e., within the so-called Local Volume.
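The counting argument above — that sparse resolved stars can be turned into an equivalent surface brightness — can be sketched in a few lines. This is a minimal illustration with hypothetical magnitudes, star counts, and field area; the zero-point convention is arbitrary:

```python
import math

def surface_brightness(star_mags, area_arcsec2):
    """Convert the magnitudes of individually resolved stars in a sky
    region into an integrated surface brightness (mag/arcsec^2)."""
    # Sum the stellar fluxes (in units relative to a zero-magnitude source).
    total_flux = sum(10 ** (-0.4 * m) for m in star_mags)
    # Integrated magnitude of the region, then spread over its area.
    m_tot = -2.5 * math.log10(total_flux)
    return m_tot + 2.5 * math.log10(area_arcsec2)

# e.g. 50 faint RGB stars of i ~ 24 spread over one square arcminute
mu = surface_brightness([24.0] * 50, 60.0 * 60.0)  # ~28.6 mag/arcsec^2
```

With these illustrative numbers the region has $\mu_i\sim28.6$magarcsec$^{-2}$ — far below the sky level quoted above, yet trivially measurable as a stellar overdensity.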
The discovery of the Sagittarius dwarf galaxy by @ibata94 from the identification of a comoving group of stars opened the door to the era of halo studies and their substructure: a galaxy resembling the properties of classical dwarf spheroidals was clearly in the process of being disrupted by its giant host, our own MW. This evidence was the first to support theoretical predictions for the hierarchical assembly models and the existence of observable accretion events. Soon thereafter, stellar density maps allowed the discovery of a prominent low surface brightness stream around the MW’s closest giant spiral Andromeda (M31), the so-called Giant Stellar Stream (@ibata01). This feature, invisible to the naked eye, is a clear example of the elusive nature of haloes and their substructure: the surface brightness of the Giant Stellar Stream is $\mu_V\sim30$magarcsec$^{-2}$, which is prohibitive for integrated light images.
As challenging as it is, the mere detection of haloes and their substructures is not enough to provide quantitative constraints on models of galaxy evolution. From the stars’ photometry and thus position in the colour-magnitude diagram (CMD), i.e., the observational counterpart of the Hertzsprung-Russel diagram, it is possible to characterize the properties of the considered stellar system. First and foremost, in contrast to integrated light, accurate distance measurements can be obtained from CMD features that act as standard candles, e.g., the luminosity of the tip of the red giant branch (TRGB) or of the horizontal branch (HB). Another key advantage of resolved populations is the possibility to constrain ages and metallicities more tightly than with integrated light alone. The CMD is used to quantify the star formation rate as a function of lookback time, and thus derive the star formation history (SFH) of a composite stellar population (e.g., @gallart05, and references therein). Spectroscopy of individual stars is the ultimate method to constrain their metallicity content and kinematical properties, such as radial velocity and proper motion, which allows for the full six-dimensional phase space to be investigated. The latter cannot, for the moment, be achieved beyond the LG limits, and still only occasionally for M31.
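As an example of the standard-candle distances mentioned above, the TRGB method reduces to a one-line distance modulus calculation. The absolute TRGB magnitude adopted below ($M_I\approx-4.05$, an approximate I-band value for old, metal-poor populations) is an illustrative assumption, not a calibration from this text:

```python
def trgb_distance_kpc(m_trgb, M_trgb=-4.05):
    """Distance from the apparent TRGB magnitude via the distance
    modulus mu = m - M, with d[pc] = 10**(mu/5 + 1)."""
    mu = m_trgb - M_trgb
    return 10 ** (mu / 5.0 + 1.0) / 1.0e3  # pc -> kpc

# a TRGB detected at I ~ 20.5 implies a distance of roughly 810 kpc
d = trgb_distance_kpc(20.5)
```

In practice the luminosity function of RGB stars is searched for the sharp discontinuity marking the tip, and $M_{\rm TRGB}$ carries a mild colour/metallicity dependence that a real measurement must correct for.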
Besides giving precious insights into galaxy haloes and their accretion histories, resolved stellar populations can help us characterize the “surviving” low-mass galaxies that have not been accreted to date and reside in the outskirts of giant hosts.
The Low-mass End of the Galaxy Luminosity Function {#satellites}
--------------------------------------------------
The low-mass end of the galaxy luminosity function (LF) is of no less interest than haloes themselves. Besides the MW and M31, the Local Group (LG) contains tens of smaller galaxies which can be studied in detail due to their proximity (see @tolstoy09 for a review). While the $\rm \Lambda$CDM cosmological model has provided a convincing match to the large-scale structures observed in the high-redshift Universe, it falls short at the smallest, galactic scales, indicating an incomplete understanding of the physics involved in the evolution of galaxies: for example, the “missing-satellite problem” has been highlighted for the first time by @moore99 and @klypin99. Briefly, the number of DM subhaloes predicted in simulations exceeds the observed number of MW satellites by almost two orders of magnitude. The shape of the DM profile in the innermost regions of dwarf galaxies is also a matter of debate (the “cusp-core” problem; @walker11). In addition, the more massive among the MW satellites are less dense than what is expected from simulations, which is puzzling because they should be affected by fewer observational biases than their smaller, sparser siblings (the “too-big-to-fail” problem; @boylan11). In addition, the fact that many of the MW and M31 satellites are distributed along planes does not have a straightforward explanation in $\rm \Lambda$CDM models (e.g., @pawlowski14).
From the theoretical point of view, the inclusion of baryonic physics in DM-only simulations is key to reconcile predictions with observations of the smallest galaxies. In particular, effects such as supernova feedback, stellar winds, cosmic reionisation, and tidal/ram pressure stripping all concur to reduce star formation efficiency in the least massive DM haloes. Tremendous progress is being made on this front, taking into account realistic physics as well as increasing the resolution of simulations (e.g., @stinson09 [@brooks13; @sawala16; @wetzel16]). At the same time, new observational discoveries keep offering intriguing challenges at the smallest galactic scales, as further described in Sect. \[mw\_sats\] and \[m31\_sats\].
Local Group {#sec:lg}
===========
The galaxies closest to us give us the most detailed information because of the large number of stars that can be resolved. Here, I will summarize what we have learnt in the past two decades about our own Galaxy (even though an extensive picture of the MW outskirts goes beyond the scope of this contribution and can be found in Figueras, this volume), about its closest spiral neighbour M31 and about their lower-mass satellites.
Milky Way {#mw}
---------
The MW is traditionally divided into discrete components, i.e., the bulge, the disks (thin and thick) and the halo. The spheroidal portion of the MW is given by the central bulge, which consists mainly of metal-rich populations, and an extended diffuse component, which has a lower mean metallicity. Overall, stars and GCs in the halo have ages $\sim11-13$Gyr (@carollo07). The halo can be further deconstructed into an inner halo and an outer halo, even though the distinction could partly arise from observational biases (@schoenrich14). The inner and outer haloes also seem to have different chemical compositions (\[Fe/H\]$\sim-1.6$ and \[Fe/H\]$\sim-2.2$, respectively; @ryan91 [@carollo07]). According to simulations, the two halo components should also have formed on different timescales: the inner halo ($<20$kpc) consists partly of early-formed in-situ stars, partly of stars redistributed through a violent relaxation process, and partly of metal-rich populations assembled from early, massive merging events (@abadi06 [@font11; @tissera13; @pillepich14; @cooper15]); the outer halo is assembled more recently, with its mass beyond $\sim30$kpc being mainly accreted in the past $\sim8$Gyr (@bullock05 [@cooper10]). These predictions are, however, still not sufficient at a quantitative level, and unconstrained as to the exact ratio of accreted stars versus in-situ populations. At the same time, observations of the MW halo with better statistics and precision are needed to inform them.
Our position within the MW puts us at a clear disadvantage for global studies of its outskirts: the distant and sparse halo stars are observed from within the substantial disk component, which produces contamination both in terms of extinction and numerous disk stars along the line of sight, which completely “obscure” the sky at low Galactic latitudes. Nonetheless, thanks to the advent of wide-field imagers, the past two decades have revolutionized the large scale view of our Galaxy. Several stellar tracers can be used to dig into the MW halo at different distance ranges: old main sequence turnoff (MSTO) stars are identified mostly out to $\sim20$kpc, brighter RGB stars out to $\sim40-50$kpc, while RR Lyrae and blue horizontal branch (BHB) stars can be detected out to 100kpc. Spatial clustering of these stellar components indicates non-mixed substructure, which is often confirmed to be kinematically coherent.
### The Emergence of Streams {#mw_streams}
![Spatial density of SDSS stars around the Galactic cap, binned in $0.5\times0.5$deg$^2$; the colour scale is such that blue indicates the nearest stars while red is for the furthest ones. Labelled are the main halo substructures, which are in some cases streams associated with a GC or a dwarf galaxy; the circles show some newly discovered dwarf satellites of the MW. Plot adapted from @belokurov06a (http://www.ast.cam.ac.uk/$\sim$vasily/sdss/field\_of\_streams/dr6/) []{data-label="fig:fos"}](fos_dr6_marked.eps)
![Stellar density maps of the whole PanSTARRS footprint, obtained by selecting MSTO stars at a range of heliocentric distances (as indicated in each panel). The map is on a logarithmic scale, with darker areas indicating higher stellar densities. The many substructures are highlighted in each panel. Reproduced from [@bernard16], their Fig. 1, with permission of MNRAS[]{data-label="fig:panstarrs"}](f1LR.eps){width="12cm"}
After the cornerstone discovery of the disrupting Sagittarius dwarf, it became clear that substructure is not only present in the MW halo, but it also might constitute a big portion of it. To put it in S. Majewski’s words (@majewski99a),
> “There is good reason to believe that within a decade we will have a firm handle on the contribution of satellite mergers in the formation of the halo, as we move observationally from serendipitous discoveries of circumstantial evidence to more systematic surveys for the fossils left behind by the accretion process”.
In the following decade, several stream-like features have indeed emerged from a variety of multi-band photometric and spectroscopic surveys, and the Sloan Digital Sky Survey (SDSS) proved to be an especially prolific mine for such discoveries around the northern Galactic cap. The Sagittarius stream has been traced further, including in the Galactic anti-centre direction (e.g., @mateo98 [@majewski03]), and independent substructures have been uncovered (@ivezic00 [@yanny00; @newberg02; @grillmair06; @juric08]), most notably the Monoceros ring, the Virgo overdensity, the Orphan stream and the Hercules-Aquila cloud. Some of these have later been confirmed to be coherent with radial velocities (@duffau06). Note that most of these substructures are discovered at Galactocentric distances $>15$kpc, while the inner halo is smooth due to its shorter dynamical timescales.
During the past decade, one of the most stunning vizualisations of the ongoing accretion events in the MW halo was provided by the Field of Streams (@belokurov06a), reproduced in Fig. \[fig:fos\]. The stunning stellar density map is derived from SDSS data of stars around the old MSTO at the distance of Sagittarius, with a range of magnitudes to account for a range in distances. This map not only shows the Sagittarius stream and its distance gradient, but also a plethora of less massive streams, as well as an abundance of previously unknown dwarf satellites (see Sect. \[mw\_sats\]). The Field of Streams has been now complemented with results from the latest state-of-the-art surveys, most notably the all-sky Panoramic Survey Telescope and Rapid Response System (PanSTARRS), which covers an area significantly larger than that of SDSS. In Fig. \[fig:panstarrs\] the first stellar density maps from PanSTARRS are shown, obtained in a similar way as the Field of Streams (@bernard16). The map highlights the fact that the deeper and wider we look at the Galaxy halo, the more substructures can be uncovered and used to constrain its past accretion history and the underlying DM halo properties. From this kind of maps, for example, the halo stellar mass that lies in substructures can be estimated, amounting to $\sim2-3 \times
10^8\,M_\odot$ (see @belokurov13). Using SDSS, [@bell08] also highlight the predominant role of accretion in the formation of the MW’s halo based on MSTO star counts, adding to up to $\sim40\%$ of the total halo stellar mass (note that, however, different tracers could indicate much smaller values; e.g., @deason11).
Many of the known halo streams arise from tidally disrupting GCs, of which Palomar 5 is one of the most obvious examples (@odenkirchen01). This demonstrates the possible role of GCs, besides dwarf satellites, in building up the halo stellar population, and additionally implies that some of the halo GCs may be stripped remnants of nucleated accreted satellites (see @freeman02, and references therein). In order to discern between a dwarf or a cluster origin of halo stars, we need to perform chemical “tagging”, i.e., obtain spectroscopic abundances for tens of elements for these stars (e.g., @martell10): stars born within the same molecular cloud will retain the same chemical composition and allow us to trace the properties of their birthplace. A number of ambitious ongoing and upcoming spectroscopic surveys (SEGUE, APOGEE, Gaia-ESO, GALAH, WEAVE, 4MOST) is paving the path for this promising research line, even though theoretical models still struggle to provide robust predictions for the fraction of GC stars lost to the MW halo (e.g., @schiavon16, and references therein).
### The Smooth Halo Component {#mw_halo}
Once the substructure in the halo is detected, it is important that it is “cut out” in order to gain insights into the smooth, in-situ stellar component (note that, however, the latter will inevitably suffer from residual contamination from accreted material that is now fully dissolved). The stellar profile of the Galactic halo is, in fact, not featureless: several studies have found a break at a radius $\sim25$kpc, with a marked steepening beyond this value (@watkins09 [@sesar13]), in qualitative agreement with halo formation models. Some of the explanations put forward suggest that a density break in the halo stellar profile is the likely consequence of a massive accretion event, corresponding to the apocentre of the involved stars (@deason13).
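A profile with a break of this kind is commonly modelled as a broken power law. A minimal sketch follows; the slopes and break radius are illustrative placeholders, not fitted values from the studies cited above:

```python
def halo_density(r_kpc, r_break=25.0, alpha_in=2.5, alpha_out=4.0, rho0=1.0):
    """Broken power-law stellar density profile: rho ~ r**-alpha_in
    inside the break radius, steeper (alpha_out) outside, matched
    continuously at r_break."""
    if r_kpc <= r_break:
        return rho0 * r_kpc ** (-alpha_in)
    # continuity at the break fixes the outer normalization
    return rho0 * r_break ** (alpha_out - alpha_in) * r_kpc ** (-alpha_out)
```

Continuity at the break leaves only two slopes, the break radius, and one normalization as free parameters when fitting such a profile to binned star counts.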
The kinematics of halo stars, of GCs and of satellite galaxies, as well as the spatial distribution of streams and tidal features in satellites, can be further used as mass tracers for the DM halo. The total MW mass is to date still poorly constrained, given the difficulty of evaluating it with a broad range of different tracers. The general consensus is for a virial mass value of $\sim1.3\pm0.3 \times 10^{12}\,M_\odot$, even though values discrepant up to a factor of two have recently been suggested (see @bland16 for a compilation of estimates). Besides providing estimates for the total MW mass, studies of SDSS kinematical data, of the Sagittarius stream and of GC tidal streams have provided discordant conclusions on the shape of the MW DM halo: nearly spherical from the modelling of streams or strongly oblate from SDSS kinematics at Galactocentric distances $<20$kpc, while nearly spherical and oblate based on stream geometry or prolate from kinematical arguments for distances as large as $\sim100$kpc (see @bland16 for details). These constraints need a substantial improvement in the future to be able to inform cosmological models: the latter predict spherical/oblate shapes once baryons are included in DM-only flattened haloes (see @read14).
### Dwarf Satellites {#mw_sats}
As mentioned above, the SDSS has revolutionized our notions of dwarf satellites of the MW. Bright enough to be easily recognized on photographic plates, a dozen “classical” MW dwarf satellites had been known for many decades before the advent of wide-field surveys (@mateo98 [@grebel00]). Starting with the SDSS, an entirely new class of objects has started to emerge with properties intermediate between the classical dwarfs and GCs (see @willman10, and references therein). The so-called ultra-faint satellites have absolute magnitudes fainter than $M_V\sim-8$ and surface brightness values so low that the only way to find them is to look for spatial overdensities of resolved main sequence/BHB stars. Their discovery ten years ago doubled the number of known MW satellites and revealed the most DM-dominated galaxies in the Universe, with mass-to-light ratios of up to several times $10^3\,M_\odot/L_\odot$ (@simon07).
More recently, the interest in the low end of the galaxy LF has been revitalized once again with deep, wide-field surveys performed with CTIO/DECam, VST/Omegacam, and PanSTARRS: these have led to the discovery of more than 20 southern dwarfs in less than two years (@bechtol15 [@koposov15; @kim15a; @drlica16; @torrealba16], and references therein). Some of these discoveries represent extremes in the properties of MW satellites, with surface brightness values as low as $\sim30$magarcsec$^{-2}$, total luminosities of only a few hundred $L_\odot$ and surprisingly low stellar density regimes. One of the perhaps most intriguing properties of the newly discovered dwarfs is that many of them appear to be clustered around the Large Magellanic Cloud (LMC): this might be the smoking gun for the possible infall of a group of dwarfs onto the MW, which is predicted by simulations (@donghia08 [@sales16]). Low-mass galaxies are expected to have satellites on their own and to provide a large fraction of a giant galaxy’s dwarf companions (e.g., @wetzel15). The properties of the possible LMC satellites will give us a glimpse onto the conditions of galaxy formation and evolution in an environment much different from the LG as we know it today.
These faintest galaxies, or their accreted and fully dispersed counterparts, are also excellent testbeds to look for the very most metal-poor stars and to investigate the star formation process in the early stages of the Universe (e.g., @frebel15). The study of the lowest mass galaxies holds the promise to challenge our knowledge of galaxy physics even further and pushes us to explore unexpected and exciting new limits.
M31 (Andromeda) {#sec:m31}
---------------
Our nearest giant neighbour has received growing attention in the past decade. Having a remarkable resemblance with the MW and a comparable mass (e.g., @veljanoski14), it is a natural ground of comparison for the study of spiral haloes. In terms of a global perspective, the M31 halo is arguably known better than that of the MW: our external point of view allows us to have a panoramic picture of the galaxy and its surrounding regions. The other side of the coin is that, at a distance of $\sim780$kpc, we can only resolve the brightest evolved stars in M31, and we are mostly limited to a two-dimensional view of its populations. Its proximity also implies a large angular size on the sky, underlining the need for wide field-of-view imagers to cover its entire area.
At the distance of M31, ground-based observations are able to resolve at best the uppermost $\sim3-4$ magnitudes below the TRGB, which is found at a magnitude $i\sim21$. The RGB is an excellent tracer for old ($>1$Gyr) populations, but suffers from a degeneracy in age and metallicity: younger, metal-rich stars overlap in magnitude and colour with older, metal-poor stars (@koch06). Despite this, the RGB colour is often used as a photometric indicator for metallicity, once a fixed old age is assumed (@vandenberg06 [@crnojevic10]). This assumption is justified as long as a prominent young and intermediate-age population seems to be absent (i.e., as judged from the lack of luminous main sequence and asymptotic giant branch, AGB, stars), and it shows very good agreement with spectroscopic metallicity values where both methods have been applied.
The very first resolved studies of M31’s halo introduced the puzzling evidence that the M31 halo stellar populations along the minor axis have a higher metallicity than that of the MW at similar galactocentric distances (e.g., @mould86). This was further confirmed by several studies targeting projected distances from 5 to 30kpc and returning an average value of \[Fe/H\]$\sim-0.8$: in particular, [@durrell01] study a halo region at a galactocentric distance of $\sim20$kpc and underline the difference between the properties of M31 and of the MW, suggesting that our own Galaxy might not represent the prototype of a typical spiral. In fact, it has later been suggested that the MW is instead fairly atypical based on its luminosity, structural parameters and the metallicity of its halo stars when compared to spirals of similar mass (@hammer07). This result was interpreted as the consequence of an abnormally quiet accretion history for the MW, which apparently lacked a major merger in its recent past.
The wide-area studies of M31’s outskirts were pioneered $\sim15$ years ago with an Isaac Newton Telescope survey mapping $\sim40$deg$^2$ around M31, reaching significantly beyond its disk out to galactocentric distances of $\sim55$kpc (@ibata01 [@ferguson02]). As mentioned before, the southern Giant Stream was first uncovered with this survey, and the halo and its substructures could be studied with a dramatically increased detail. A metal-poor halo component (\[Fe/H\]$\sim-1.5$) was finally uncovered for regions beyond 30kpc and out to 160kpc (@irwin05 [@kalirai06; @chapman06]), similar to what had been observed for the MW both in terms of metallicity and for its stellar density profile. These studies do not detect a significant gradient in metallicity across the covered radial range. Nonetheless, the properties of the inner halo remained a matter of debate: while [@chapman06] found a metal-poor halo population within 30kpc above the disc, [@kalirai06] analysed a kinematically selected sample of stars within 20kpc along the minor axis and derived a significantly higher value of \[Fe/H\]$\sim-0.5$. At the same time, [@brown06] used deep, pencil beam [*Hubble Space Telescope*]{} ([*HST*]{}) pointings in M31’s inner halo to conclude that a significant fraction of its stellar populations have an intermediate age with an overall high metallicity. These results were later interpreted by [@ibata07] in light of their wider-field dataset: the samples from [@kalirai06] and [@brown06] are simply part of regions dominated by an extended disc component and with a high contamination from various accretion events, respectively. This underlines, once again, the importance of wide-field observations to reach a global understanding of halo properties.
![Stellar density maps of metal-poor RGB populations at the distance of M31, as derived from the PAndAS survey. The large circles lie at projected radii of 150kpc and 50kpc from M31 and M33, respectively. [*Upper panel*]{}: The Andromeda satellites are visible as clear overdensities and are marked with circles. The vast majority of them were uncovered by the PAndAS survey. Reproduced by permission of the AAS from [@richardson11], their Fig. 1. [*Lower panel*]{}: The main substructures around M31 are highlighted, showcasing a broad range of morphologies and likely progenitor type. Tidal debris is also present in the vicinities of the low-mass satellites M33 and NGC 147, indicating an ongoing interaction with M31. Reproduced by permission of the AAS from [@lewis13], their Fig. 1[]{data-label="pandas1"}](apj384661f1_hr.jpg "fig:"){width="7.5cm"} ![Stellar density maps of metal-poor RGB populations at the distance of M31, as derived from the PAndAS survey. The large circles lie at projected radii of 150kpc and 50kpc from M31 and M33, respectively. [*Upper panel*]{}: The Andromeda satellites are visible as clear overdensities and are marked with circles. The vast majority of them were uncovered by the PAndAS survey. Reproduced by permission of the AAS from [@richardson11], their Fig. 1. [*Lower panel*]{}: The main substructures around M31 are highlighted, showcasing a broad range of morphologies and likely progenitor type. Tidal debris is also present in the vicinities of the low-mass satellites M33 and NGC 147, indicating an ongoing interaction with M31. Reproduced by permission of the AAS from [@lewis13], their Fig. 1[]{data-label="pandas1"}](apj453276f1_hr.jpg "fig:"){width="6.8cm"}
![Stellar density map of M31 (akin to Fig. \[pandas1\]), this time subdivided into photometric metallicity bins (as indicated in each subpanel). The [*upper*]{} panels show high metallicity cuts, where the Giant Stream and Stream C are the most prominent features; note that the shape of the Giant Stream changes as a function of metallicity. The [*lower*]{} panels show lower metallicity cuts: the [*lower left*]{} panel is dominated by substructure at large radii, while the most metal-poor panel ([*lower right*]{}) is smoother and believed to mostly contain in-situ populations. Reproduced by permission of the AAS from [@ibata14], their Fig. 9.[]{data-label="pandas2"}](apj488754f9_hr.jpg){width="12cm"}
The M31 INT survey was further extended out to $150$kpc (200kpc in the direction of the low-mass spiral M33) with the Canada-France-Hawaii Telescope/Megacam and dubbed Pan-Andromeda Archaeological Survey (PAndAS; @ibata07 [@mcconnachie09]). This survey contiguously covered an impressive 380deg$^2$ around M31, reaching 4mag below the TRGB. The PAndAS RGB stellar density map (see Fig. \[pandas1\]) is a striking example of an active accretion history, with a copious amount of tidal substructure at both small and large galactocentric radii. PAndAS also constituted a mine for the discovery of a number of very faint satellites and GCs (see below; @richardson11 [@huxor14; @martin16]). Fig. \[pandas2\] further shows the RGB stellar map broken into bins of photometric metallicity. The parallel Spectroscopic and Photometric Landscape of Andromeda’s Stellar Halo (SPLASH) survey (@guha06 [@kalirai06]) provides a comparison dataset with both photometric and spectroscopic information, the latter obtained with Keck/DEIMOS. The SPLASH pointings are significantly smaller than the PAndAS ones but strategically cover M31 halo regions out to $\sim225$kpc. Deeper, pencil-beam photometric follow-up studies have further made use of the [*HST*]{} to target some of the substructures uncovered in M31’s outskirts, resolving stars down to the oldest MSTO (e.g., @brown06 [@bernard15]). These observations reveal a high complexity in the stellar populations in M31, hinting at a high degree of mixing in its outskirts. Overall, M31 has evidently had a much richer recent accretion history than the MW (see also @ferguson16).
### Streams and Substructures {#m31_streams}
As seen from the maps in Figs. \[pandas1\] and \[pandas2\], while the inner halo has a flattened shape and contains prominent, relatively metal-rich substructures (e.g., the Giant Stream), the outer halo ($>50$kpc) hosts significantly less extended, narrow, metal-poor tidal debris.
The features in the innermost regions of M31 can be connected to its disk populations (e.g., the north-east structure or the G1 clump): kinematic studies show that a rotational component is present in fields as far out as 70kpc, and they retain a fairly high metallicity (@dorman13). This reinforces the possible interpretation as a vast structure, which can be explained as disk stars torn off or dynamically heated due to satellite accretion events. Deep [*HST*]{} pointings of these features indeed reveal relatively young populations, likely produced from pre-enriched gas in a continuous fashion, comparable to the outer disk (@ferguson05 [@brown06; @bernard15]).
The most prominent feature in M31’s outer halo, the Giant Stream, was initially thought to originate from the disruption of either M32 or NGC 205, the two dwarf ellipticals located at only $\sim25-40$kpc from M31’s centre (@ibata01 [@ferguson02]). While both these dwarfs show signs of tidal distortion, it was soon clear that neither of them could produce the vast structure extending $\sim100$kpc into M31’s halo. Great effort has been spent on mapping this substructure both photometrically and spectroscopically, in order to trace its orbit and define its nature: a gradient in its line-of-sight distance was first highlighted by [@mcconnachie03], who found the outer stream regions to lie behind M31, the innermost regions at about the distance of M31, and an additional stream component on the opposite (northern) side of M31 to lie actually in front of M31. The stream presents a metallicity gradient, with the core regions being more metal-rich and the envelope more metal-poor (see also Fig. \[pandas2\]), as well as a very narrow velocity dispersion, with the addition of a puzzling second kinematic component (@gilbert09); possible interpretations for the latter include a wrap or bifurcation in the stream, as well as a contribution from M31’s own populations.
A number of increasingly sophisticated theoretical studies have tried to reproduce the appearance of the Giant Stream and picture its progenitor, which is undetected to date. The general consensus seems to be that a relatively massive ($\sim10^9\,M_\odot$) satellite, possibly with a rotating disk, impacted M31 from behind with a pericentric passage around $1-2$Gyr ago (most recently, @fardal13 [@sadoun14]). In particular, simulations can reproduce the current extension and shape of the stream and predict the progenitor to be located to the north-east of M31, just beyond its disk (@fardal13). This study also concludes that some of the substructures linked to M31’s inner regions are likely to have arisen from the same accretion event, i.e., the north-east structure and the G1 clump (Fig. \[pandas1\]): these shelf features would trace the second and third passage around M31, which is also supported by their radial velocities. CMDs of the Giant Stream populations are in agreement with these predictions: its stellar populations have mixed properties, consistent with both disk and stream-like halo features (@ferguson05 [@richardson08]). Detailed reconstruction of its SFH indicates that most star formation occurred at early ages, and was possibly quenched at the time of infall into M31’s potential around 6Gyr ago (@bernard15). Again, these studies deduce a likely origin of these populations as a dwarf elliptical or a spiral bulge.
Besides the Giant Stream, the only other tidal feature with a relatively high metallicity is Stream C (see Figs. \[pandas1\] and \[pandas2\]), which appears in the metal-poor RGB maps as well. The origin of this feature is obscure, even though it is tempting to speculate that it could be part of the Giant Stream event. The lower left panel of Fig. \[pandas2\], showing metal-poor populations, encompasses all of the narrow streams and arcs beyond 100kpc, which extend up to several tens of kpc in length. All these substructures are extremely faint ($\mu_V\sim31.5$magarcsec$^{-2}$), and their origin is mostly unknown because of the difficulty in following up such faint and sparse populations. As part of the [*HST*]{} imaging of these features, [@bernard15] find that their populations are mainly formed at early ages and undergo a more rapid chemical evolution with respect to the disk populations. Despite the metal-poor nature of these features, the hypothesis of a single accretion event producing most of the tidal features observed in the outer halo is not that unlikely, given the metallicity gradient present in the Giant Stream itself.
An efficient alternative to investigate the nature of these streams is to study the halo GC population: the wide-field surveys of M31 have made it possible to uncover a rich population of GCs beyond a radius of $\sim25$kpc (e.g., @huxor14, and references therein), significantly more numerous than that of the MW halo. [@mackey10] first highlighted a high spatial correlation between the streams in M31’s halo and the GC population, which would be extremely unlikely in a uniform distribution. Following the hypothesis that the disrupting satellites might be providing a high fraction of M31’s halo GCs, [@veljanoski14] obtained spectroscopic follow-up: they were able to confirm that streams and GCs often have correlated velocities and remarkably cold kinematics. This exciting result gives hope for studies of more distant galaxies, where halo populations cannot be resolved and GCs could be readily used to trace possible substructure.
### Smooth Halo {#m31_halo}
One of the first spatially extended datasets to investigate the halo of M31 in detail is described in [@tanaka10]: their Subaru/SuprimeCam photometry along the minor axis in both directions is deeper, even though less extended, than PAndAS. The stellar density profile derived in this study extends out to 100kpc and shows a consistent power law for both directions. The authors also suggest that, given the inhomogeneities in the stellar populations, the M31 halo is likely not fully mixed.
In the most metal-poor (lower right) panel of Fig. \[pandas2\], the substructures in the outer halo fade away, displaying a smoother component that can be identified with the in-situ M31 halo. Once the substructures are decoupled based on the lack of obvious spatial correlation and with an additional photometric metallicity cut, [@ibata14] derive a stellar density profile out to 150kpc. Again, the profile follows a power-law, which turns out to be steeper when increasingly more metal-rich populations are considered. [@ibata14] also conclude that only $5\%$ of M31’s total halo luminosity lies in its smooth halo, and the halo mass is as high as $\sim10^{10}\,M_\odot$, significantly larger than that estimated for the MW.
The SPLASH survey extends further out than PAndAS, and benefits from kinematical information that is crucial to decontaminate the studied stellar samples from foreground stars and to decrease the scatter in the radial profiles. Based on this dataset, [@gilbert12] find that the halo profile does not reveal any break out to 175kpc. This is somewhat surprising given the prediction from simulations that accreted M31-sized stellar haloes should exhibit a break beyond a radius of $\sim100$kpc (@bullock05 [@cooper10]). Beyond a radius of 90kpc, significant field-to-field variations are identified in their data, which suggests that the outer halo regions are mainly composed of stars from accreted satellites, in agreement with previous studies. At the outermost radii probed by SPLASH ($\sim230$kpc), there is a tentative detection of M31 stars, but this is hard to confirm given the high contamination fraction. Finally, the [@gilbert12] stellar halo profile suggests a prolate DM distribution, which is also consistent with being spherical, in agreement with [@ibata14].
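The power-law profiles quoted above are typically measured by fitting a straight line to binned, background-subtracted star counts in log-log space. A minimal sketch of this procedure, with entirely invented radii, densities, and input slope (not the actual PAndAS or SPLASH measurements):

```python
import numpy as np

# Hypothetical background-subtracted RGB star counts:
# projected radius (kpc) and surface density (stars / kpc^2),
# drawn from an input power law of index -2.5 with mild scatter.
rng = np.random.default_rng(1)
r = np.array([20.0, 30.0, 45.0, 65.0, 90.0, 120.0, 150.0])
sigma = 800.0 * (r / 30.0) ** -2.5 * rng.lognormal(0.0, 0.05, r.size)

# A power law Sigma(r) ~ r^alpha is a straight line in log-log
# space, so a linear least-squares fit recovers the index alpha.
alpha, log_norm = np.polyfit(np.log10(r), np.log10(sigma), 1)
print(f"best-fit power-law index alpha = {alpha:.2f}")
```

In practice the hard part is not the fit itself but the decontamination of the counts from foreground stars and unresolved background galaxies, which dominates the error budget at large radii.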
Both [@ibata14] and [@gilbert14] investigate the existence of a metallicity gradient in the smooth halo of M31: they found a steady decrease in metallicity of about 1dex from the very inner regions out to 100kpc. This might indicate the past accretion of (at least) one relatively massive satellite. At the same time, a large field-to-field metallicity variation could mean that the outer halo has been mainly built up by the accretion of several smaller progenitors.
### Andromeda Satellites {#m31_sats}
Similarly to the boom of satellite discoveries around the MW, the vast majority of dwarfs in M31’s extended halo has been uncovered by the SDSS, PAndAS, and PanSTARRS surveys in the past decade (see @martin16, and references therein). The M31 satellites follow the same relations between luminosity, radius and metallicity defined by MW satellites, with the exception of systems that are likely undergoing tidal disruption (@collins14). Once more, the characterization of the lowest-mass galaxies raises new, unexpected questions: from the analysis of accurate distances and kinematics, [@ibata13] conclude that half of the M31 satellites lie in a vast ($\sim200$kpc) and thin ($\sim12$kpc) corotating plane, and share the same dynamical orbital properties. The extreme thinness of the plane is very hard to reconcile with $\rm \Lambda$CDM predictions, where such structures should not survive for a Hubble time. While several theoretical interpretations have been offered (e.g., @fernando16), none is conclusive, and this reinforces the allure of mystery surrounding low-mass satellites.
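The quoted thinness of the satellite plane is usually expressed as the rms distance of the satellites from a best-fit plane through their 3D positions, which can be obtained from a singular value decomposition. A toy illustration with mock coordinates (the positions and the $\sim12$kpc vertical scatter are invented to mimic the numbers above, not the measured M31 satellite positions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock 3D positions (kpc) of 15 satellites: a wide (~200 kpc)
# planar distribution with ~12 kpc of vertical scatter.
in_plane = rng.uniform(-100.0, 100.0, size=(15, 2))
height = rng.normal(0.0, 12.0, size=(15, 1))
pos = np.hstack([in_plane, height])

# Best-fit plane through the centroid: its normal is the singular
# vector with the smallest singular value of the centered positions;
# the rms offset along that normal measures the plane thickness.
centered = pos - pos.mean(axis=0)
_, _, vt = np.linalg.svd(centered)
normal = vt[-1]
rms_thickness = np.sqrt(np.mean((centered @ normal) ** 2))
print(f"rms plane thickness = {rms_thickness:.1f} kpc")
```

The real analyses additionally assess significance by comparing the measured thickness against isotropically resampled satellite systems.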
Low-mass Galaxies In and Around the Local Group {#dwarfs_halo}
-----------------------------------------------
Besides the detailed studies of the two LG spirals, increasing attention is being paid to lower-mass galaxies and their outskirts. Given the self-similar nature of DM, low-mass galaxies should naively be expected to possess haloes and satellites of their own; however, our difficulty in constraining star formation efficiency and physical processes affecting galaxy evolution at these scales blurs these expectations. In the last couple of years, the increasing resolution of cosmological simulations has enabled quantitative predictions about the halo and substructures in sub-MW-mass galaxies, and about the number of satellites around them (@wheeler15 [@dooley16]). Observations are thus much needed to test these predictions.
Since the late 90s, numerous studies of star-forming dwarfs within or just beyond the LG have claimed the detection of an RGB component extending beyond the blue, young stars (see @stinson09, and references therein), hinting at a generic mode of galaxy formation independent of galaxy size. Such envelopes, however, were not characterized in detail, and in fact could not be identified uniquely as the product of hierarchical merging without, e.g., accurate age and metallicity estimates.
The presence of extended haloes in the most luminous satellites of the MW and M31, i.e., the irregular LMC and the low-mass spiral M33, respectively, has not been confirmed to date despite the availability of exquisite datasets. [@gallart04] demonstrate how, out to a galactocentric distance of 7kpc, the stellar density profile of the LMC disk does not show a clear break, in contrast to previous tentative claims. Clearly, the question is complicated by the fact that the LMC is undergoing tidal disruption, and stripped stellar material could easily be misinterpreted as a halo component. Nonetheless, [@mcmonigal14] report the possible detection of a sparse LMC halo population in a wide-field dataset around the nearby dwarf galaxy Carina, at angular separations as large as $20$deg. The question might be settled in the near future with the help of wide-field surveys such as the Survey of MAgellanic Stellar History (@martin15). With regard to possible low-mass satellites, there is now tantalizing indication that the LMC might have fallen onto the MW with its own satellite system, as mentioned in Sect. \[mw\_sats\]. As part of the PAndAS survey, deep imaging of M33 has revealed prominent substructure in its outer disk reminiscent of a tidal disturbance, and a faint, diffuse substructure possibly identified as a halo component (@cockcroft13). This result was, however, carefully reconsidered by [@mcmonigal16], who claim that a definitive sign of a halo structure cannot be confirmed, and if present it must have a surface brightness below $\mu_V\sim35$magarcsec$^{-2}$.
Besides the investigation of haloes and satellites, deep and wide-field views of low-mass galaxies are crucial to, e.g., assess the presence of tidal disturbances, which in turn are key to estimate mass values and constrain DM profiles (e.g., @sand12). As demonstrated by [@crnojevic14a], a striking similarity in the global properties (luminosity, average metallicity, size) of two low-mass galaxies, such as the M31 satellites NGC 185 and NGC 147, can be quite misleading: once deep imaging was obtained around these galaxies (within PAndAS), NGC 147 revealed extended, symmetric tidal tails, returning a much larger extent and luminosity for this dwarf than previously thought. This dataset further showed a flat metallicity gradient for NGC 147, in contrast with the marked gradient found in NGC 185. All these pieces of evidence point at an ongoing interaction of NGC 147 with M31. Large-scale studies of LG dwarfs also provide useful insights into their evolutionary history: by studying CMDs reaching below the MSTO, [@hidalgo13] trace significant age gradients that advocate an outside-in mode of star formation for dwarf galaxies.
Clearly, systematic deep searches are needed to detect and characterize the outskirts of low-mass satellites. With this goal in mind, wide-field surveys of nearby ($<3$Mpc) dwarfs have started to be pursued. The first of these efforts targets NGC 3109, a sub-LMC-mass dwarf located just beyond the boundaries of the LG: several candidate satellites of NGC 3109 are identified from a CTIO/DECam survey targeting regions out to its virial radius (@sand15). One of them, confirmed to be at the distance of NGC 3109, is relatively bright ($M_V\sim-10$), and already exceeds the number predicted by @dooley16 for this system. Other ongoing surveys are similarly looking for halo substructures and satellites in several relatively isolated dwarfs, e.g., the SOlitary LOcal dwarfs survey (@higgs16) and the Magellanic Analog Dwarf Companions And Stellar Halos survey (@carlin16), by using wide-field imagers on large telescopes such as CFHT/MegaCam, Magellan/Megacam, CTIO/DECam and Subaru/HyperSuprimeCam. These datasets will constitute a mine of information to constrain the role of baryonic processes at the smallest galactic scales.
Beyond the Local Group {#sec:beyondlg}
======================
The ground-breaking photometric and kinematic surveys carried out in the past two decades have significantly advanced our knowledge of haloes and their substructures within LG galaxies. Nonetheless, the MW and M31 may not be representative of generic MW-sized haloes, given the stochasticity of the hierarchical assembly process: several marked differences in the stellar populations of their haloes underline the need for observations of a statistically significant sample of galaxy haloes with different morphologies, with surveys targeting large portions of their haloes.
Cosmological simulations of MW-mass analogues show a wide variation in the properties of their haloes. As already mentioned, the relative contribution of in-situ star formation and disrupted satellites remains unclear: depending on the models (e.g., full hydrodynamical simulations, $N$-body models with particle tagging), they can vary from a negligible number of accretion events for a MW-sized halo to accreted material making up most of a stellar halo’s content (e.g., @lu14 [@tissera14]). Even within the same set of simulations, the number, mass ratio and morphology of accretion and merger events span a wide range of possible values (@bullock05 [@johnston08; @garrison14]). The chemical content of extended haloes can provide useful insights into their assembly history: mergers or accretion events of similar-mass satellites will generally tend to produce mild to flat gradients; in-situ populations will feature increasingly metal-poor populations as a function of increasing galactocentric radius, similarly to the accretion of one or two massive companions (e.g., @cooper10 [@font11]). More extended merger histories are also expected to return younger and relatively metal-rich populations with respect to those coming from a shorter assembly, and to produce more massive stellar haloes, with the final result that the mean halo metallicities of MW-mass spirals can range by up to 1dex (e.g., @renda05).
Comprehensive observational constraints are key to guide future simulations of galaxy haloes: the past decade has seen a dramatic increase in the observational census of resolved galaxy haloes beyond the LG, thanks to deep imaging obtained with space facilities, as well as to the advent of wide-field imagers on large ground-based telescopes.
While the increasing target distance means that it is easier to survey larger portions of their haloes, the drawback is that the depth of the images decreases dramatically, and thus we are only able to detect the brightest surface brightness features in the haloes, i.e., the uppermost $\sim2-3$mag below the TRGB in terms of resolved stars (see Fig. 6 in @radburn11 for a schematic visualization of the different stellar evolutionary phases recognizable in such shallow CMDs). A number of studies have surveyed relatively nearby and more distant galaxy haloes in integrated light despite the serious challenges posed by sky subtraction at such faint magnitudes, masking of bright stars, flat-fielding and scattered light effects, point spread function modelling, and/or spatially variable Galactic extinction.
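Surface brightness limits like those quoted throughout this section can be related to resolved star counts by summing the fluxes of the detected stars over a given area. A simple sketch, with illustrative numbers (magnitudes, star counts, and area are invented; a real estimate must also correct for the flux of stars below the detection limit using a luminosity function):

```python
import numpy as np

def surface_brightness(mags, area_arcsec2):
    """Equivalent V-band surface brightness (mag / arcsec^2) from the
    summed flux of resolved stars of apparent magnitudes `mags`
    detected within a region of area `area_arcsec2`."""
    total_flux = np.sum(10.0 ** (-0.4 * np.asarray(mags)))
    return -2.5 * np.log10(total_flux / area_arcsec2)

# e.g. fifty V ~ 26 RGB stars spread over a 10 x 10 arcsec box:
print(f"mu_V = {surface_brightness(np.full(50, 26.0), 100.0):.1f}")
```

This is why star counts can probe features far fainter than integrated-light photometry: a handful of resolved giants per square arcminute corresponds to surface brightnesses well below the sky level.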
A few early studies have been able to uncover a halo component and tidal debris in the target galaxies (e.g., @malin83 [@morrison94; @sackett94]), without, however, settling the questions about their existence, nature or ubiquity. Different approaches have been adopted to detect haloes and their substructures, i.e., targeting either individual galaxies (e.g., @zheng99 [@pohlen04; @jablonka10; @janowiecki10; @martinez10; @adams12; @atkinson13]) or stacking the images of thousands of objects (e.g., @zibetti04 [@vandokkum05; @tal09]). A precise quantification of the occurrence of faint substructure in the outskirts of nearby galaxies remains highly uncertain, with estimates ranging from a few percent to $\sim70\%$ (see, e.g., @atkinson13, and references therein). This is perhaps unsurprising given the heterogeneity of methods used, target galaxy samples, and surface brightness limits in such studies. Besides the identification of such features, the characterization of unresolved halo stellar populations constitutes an even harder challenge: integrated colours and spectra can at most reach a few effective radii, thus missing the outer haloes. Even for the available datasets, the degeneracies between age, metallicity and extinction are generally challenging to break (e.g., @dejong07); in addition, tidal features can rarely tell us about the mass ratio of a merger event or its orbit (with the exception of tails). Here, we do not intend to discuss the detection of haloes and the variety of fractions and morphologies for tidal features observed in integrated light studies; Knapen & Trujillo (this volume) treat this topic in detail, while this contribution focusses on resolved populations.
Obtaining resolved photometry beyond the LG is a daunting task as well, due to the very faint luminosities involved: the brightest RGB stars in galaxies at these distances have magnitudes of $I\sim24-28.5$, and thus this approach is so far limited to the Local Volume. Early attempts to perform photometry of individual stars in the outskirts of nearby galaxies have been made using large photographic plates and the first CCDs (e.g., @humphreys86 [@davidge89; @georgiev92]). The brightest populations (i.e., the youngest) could often be reconciled with being members of the parent galaxy, but the critical information on the faint, old stars was still out of reach. With the advent of wide-format CCDs in the mid 90s, photometry finally became robust enough to open up new perspectives on the resolved stellar content of our closest neighbours.
The first studies of this kind date back to twenty years ago and mainly focus on the inner regions of the target galaxies, most commonly their disks or inner haloes, with the goal of studying their recent star formation and of deriving TRGB distances (see, e.g., @soria96 for CenA, @sakai99 for M81 and M82). [@elson97], in particular, resolved individual stars in the halo of the S0 galaxy NGC 3115 with [*HST*]{}. By analysing the uppermost 1.5mag of the RGB at a galactocentric distance of 30kpc, they derived a distance of $\sim11$Mpc, and additionally discovered for the first time a bimodality in the photometric metallicity distribution function of this early-type galaxy. [@tikhonov03] studied for the first time the resolved content of the nearest ($\sim3.5$Mpc) S0 galaxy NGC 404 with combined ground-based and [*HST*]{} imaging. Their furthermost [*HST*]{} pointings ($\sim20$kpc in projection) contain RGB stars that are clearly older than the main disk population, with similar colour (metallicity). The authors conclude that the disk of NGC 404 extends out to this galactocentric distance, but they do not mention a halo component.
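TRGB distances such as the $\sim11$Mpc quoted above for NGC 3115 follow from the distance modulus of the detected RGB tip, whose absolute $I$-band magnitude is nearly constant for old, metal-poor populations. A minimal sketch (the calibration value $M_I\approx-4.05$ and the example tip magnitude are illustrative assumptions, not the values used in any particular study):

```python
def trgb_distance_mpc(m_trgb, M_trgb=-4.05, A_I=0.0):
    """Distance from the I-band tip of the red giant branch (TRGB).

    m_trgb -- apparent I magnitude of the detected RGB tip
    M_trgb -- absolute tip magnitude, roughly -4.05 in I for old,
              metal-poor populations (calibration assumed here)
    A_I    -- foreground extinction in the I band
    """
    mu = m_trgb - A_I - M_trgb                 # distance modulus
    return 10.0 ** ((mu + 5.0) / 5.0) / 1.0e6  # pc -> Mpc

# A tip detected at I ~ 26.2 corresponds to roughly 11 Mpc:
print(f"{trgb_distance_mpc(26.2):.1f} Mpc")
```

The observational challenge lies in locating the tip itself, typically via an edge-detection filter applied to the $I$-band luminosity function, rather than in the arithmetic above.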
Beyond these early studies of individual galaxies, the need for systematic investigations of resolved stellar haloes was soon recognized. Next we describe the design and results of some systematic surveys targeting samples of galaxies in the Local Volume.
Systematic Studies {#systematic}
------------------
A decade ago, [@mouhcine05a; @mouhcine05b; @mouhcine05c] started an effort to systematically observe the haloes of eight nearby ($<7$Mpc) spiral galaxies with the resolution of [*HST*]{}. In particular, they utilized WFPC2 to target fields off of the galaxies’ disks (2 to 13kpc in projection along the minor axis) with the goal of investigating their stellar populations, and obtaining accurate distance estimates as well as photometric metallicity distribution functions, to gain insights into the halo formation process. [@mouhcine05c] find the haloes to predominantly contain old populations, with no younger components and little to no intermediate-age populations. Interestingly, [@mouhcine05b] find a correlation between luminosity and metallicity for the target galaxies, where the metallicity is derived from the mean colour of the resolved RGB. Both the spiral galaxies from their sample (NGC 253, NGC 4244, NGC 4945, NGC 4258, NGC 55, NGC 247, NGC 300, and NGC 3031 or M81) and the two ellipticals (NGC 3115 and NGC 5128 or Centaurus A, included in their comparison from previous literature data) fall on the same relation, indicating that haloes might have a common origin regardless of the galaxy morphological type. Notably, the MW halo turns out to be substantially more metal-poor than those of the other galaxies of comparable luminosity, based on kinematically selected pressure-supported halo stars within $\sim10$kpc above the disk (see also Sect. \[sec:m31\]). This relation is consistent with a scenario where halo field stars form in the potential well of the parent galaxy in a gradual way from pre-enriched gas. Moreover, the relatively high metallicities of the target haloes seem to suggest that they likely originate from the disruption of intermediate-mass galaxies, rather than smaller metal-poor dwarf galaxies (@mouhcine05c).
The dataset presented in [@mouhcine05a; @mouhcine05b; @mouhcine05c] was further analyzed by [@mouhcine06], who finds that each spiral in the sample presents a bimodal metallicity distribution. In particular, both a metal-poor and a metal-rich component are present in the outskirts of the target galaxies, and both components correlate with the host’s luminosity. This is taken as a hint that these populations are born in subgalactic fragments that were already embedded in the dark haloes of the host galaxy; the metal-poor component additionally has a broader dispersion than that of the metal-rich population. These properties show similarities with GC subpopulations in the haloes of early-type galaxies (e.g., @peng06). [@mouhcine06] argues that the metal-poor component may arise from the accretion of low-mass satellites, while the metal-rich one could be linked to the formation of the bulge or the disk.
The shortcoming of this ambitious study is, however, twofold: first, the limited field of view (FoV) of [*HST*]{} hampers global conclusions on the galaxies’ haloes, and the stellar populations at even larger radii may have different properties than those in the observed fields; second, perhaps most importantly, it is not obvious what structure of the galaxy is really targeted, i.e., the halo, the outer bulge or disk, or a mixture of these.
Along the same lines of these studies, [@radburn11] present an even more ambitious [*HST*]{} survey of 14 nearby disk galaxies within 17Mpc, with a range of luminosities, inclinations and morphological types. The Galaxy Halos, Outer disks, Substructure, Thick disks, and Star clusters (GHOSTS) survey aims at investigating radial light profiles, axis ratios, metallicity distribution functions (MDFs), SFHs, possible tidal streams and GC populations, all to be considered as a function of galaxy type and position within the galaxies. The 76 ACS pointings of the survey are located along both major and minor axes for most of the targets, and reach $\sim2-3$mag below the TRGB, down to surface brightness values of $\mu_V\sim30$magarcsec$^{-2}$. This dataset thus represents a very valuable resource for testing hierarchical halo formation models. [@monachesi16] investigate six of the galaxies in this sample (NGC 253, NGC 891, M81, NGC 4565, NGC 4945, and NGC 7814) and conclude that all of them contain a halo component out to 50kpc, and two of them out to 70kpc along their minor axis. The colour (i.e., photometric metallicity) distribution of RGB stars in the target haloes is analysed and reveals inhomogeneities that likely indicate the presence of unmixed populations from accreted objects. The average metallicity out to the largest radii probed remains relatively high when compared to the values of the MW halo; metallicity gradients are also detected in half of the considered galaxies. Surprisingly, and in contrast to the results presented by [@mouhcine05b], the spiral galaxies in this sample do not show a strong correlation between the halo metallicity and the total mass of the galaxies, highlighting instead the stochasticity inherent to the halo formation process through accretion events (e.g., @cooper10).
The advantage of the GHOSTS dataset over the one from [@mouhcine05b] is that the GHOSTS fields are deeper, there are several pointings per galaxy and they reach significantly larger galactocentric distances, thus offering a more global view of the haloes of the targets.
In an effort to increase the sample of nearby galaxies for which stellar haloes are resolved and characterized, several groups have individually targeted Local Volume objects with either ground-based or space-borne facilities: the low-mass spirals NGC 2403 (@barker11, with Subaru/SuprimeCam), NGC 300 (@vlajic09, with Gemini/GMOS), and NGC 55 (@tanaka11, with Subaru/SuprimeCam), the ellipticals NGC 3379 (@harris07a, with [*HST*]{}) and NGC 3377 (@harris07b, with [*HST*]{}), and the lenticular NGC 3115 (@peacock15, with [*HST*]{}). In most of these galaxies, a resolved faint halo (or at least an extended, faint and diffuse component) has been detected and is characterized by populations more metal-poor than the central/disk regions. Most of these haloes also show signs of substructure, pointing at past accretion/merger events as predicted by a hierarchical galaxy formation model. Even galaxies as distant as M87, the central elliptical of the Virgo cluster ($\sim16$Mpc), are starting to be targeted with [*HST*]{}, although pushing its resolution capabilities to the technical limits (@bird10).
While spectroscopically targeting individual RGB stars to obtain radial velocity and metallicity information is still prohibitive beyond the LG (see Sect. \[dwarfs\_halo\]), some cutting-edge studies have pushed the limits of spectroscopy for dwarf galaxies within $\sim1.5$Mpc (e.g., @kirby12, and references therein). At the same time, novel spectroscopic techniques are being developed to take full advantage of the information locked into galaxy haloes. One example is the use of co-added spectra of individual stars, or stellar blends, to obtain radial velocities, metallicities and possibly gradients in galaxies within $\sim4$Mpc, as robustly demonstrated by [@toloba16]. The development of new analysis methods and the advent of high-resolution spectrographs will soon allow for systematic spectroscopic investigations of nearby galaxy haloes which will importantly complement the available photometric studies, similarly to the studies of LG galaxies.
Besides the systematic studies presented here, which mostly involve deep space observations, an increasing effort is being invested in producing spatial density maps of outer haloes in some of the closest galaxies with ground-based observations, akin to the panoramic view of M31 offered by PAndAS. In the following Section we describe some of these efforts.
Panoramic Views of Individual Galaxies {#panoramic}
--------------------------------------
Panoramic views of nearby galaxies can be obtained with the use of remarkable ground-based wide-field imagers such as Subaru/SuprimeCam and HyperSuprimeCam and CFHT/MegaCam in the northern hemisphere, and Magellan/Megacam, CTIO/DECam and VISTA/VIRCAM in the southern hemisphere. Clearly, such CMDs cannot reach the depth of those obtained for M31; these studies nevertheless represent cornerstones for our investigation of global halo properties, and serve as precursor science cases for the next generation of telescopes that will open new perspectives for this kind of studies to be performed on a significantly larger sample of galaxies. As mentioned in Sect. \[dwarfs\_halo\], the haloes of low-mass galaxies are also starting to be systematically investigated, to gain a more complete picture of galaxy formation at all mass scales. Here we further describe the few examples of spatially extended imaging obtained to date for some of the closest spiral and elliptical galaxies.
### NGC 891
![Surface density map of RGB stars in the halo of NGC 891, obtained with Subaru/SuprimeCam. The overdensities of old RGB stars reveal a large complex of arcing streams that loops around the galaxy, tracing the remnants of an ancient accretion. The second spectacular morphological feature is the dark cocoon-like structure enveloping the high surface brightness disk and bulge. Fig. 1 from [@mouhcine10], reproduced by permission of the AAS[]{data-label="fig:mouhcine10"}](ngc891_v2.eps){width="7.cm"}
Despite its relatively large distance ($\sim9$Mpc, @radburn11), the “MW-twin” NGC 891 (@vanderkruit84) is one of the first spirals to be individually investigated in resolved light. Its high inclination and absence of a prominent bulge make it an appealing target for halo studies.
@mouhcine07 exploit three [*HST*]{} pointings located approximately 10kpc above the disk of NGC 891 to investigate the properties of this galaxy’s halo. The broad observed RGB indicates a wide range of metallicities in this population, with metal-rich peaks and extended metal-poor tails. The three fields also show a decreasing mean metallicity trend as a function of increasing distance along the major axis. The mean metallicity of this sample of RGB stars (\[Fe/H\]$\sim-1$) falls on the halo metallicity-galaxy luminosity relation pointed out by [@mouhcine05b]: this, together with the gradient mentioned before, is in contrast with the lower metallicities and absence of a gradient for non-rotating stars in the inner haloes of the MW and M31 (@chapman06 [@kalirai06]). @mouhcine07 thus suggest that not all massive galaxies’ outskirts are dominated by metal-poor, pressure-supported stellar populations (because of the inclination and absence of a bulge, the studied RGB sample is thought to be representative of the true halo population). A possible explanation is suggested with the presence of two separate populations: a metal-rich one that is present in the most massive galaxies’ outskirts, and one constituting the metal-poor, pressure-supported halo, coming from the accretion of moderate-mass satellites. For smaller-mass galaxies, the halo would instead be dominated by debris of small satellites with lower metallicities.
Follow-up analysis on the same [*HST*]{} dataset has been carried out by [@ibata09] and [@rejkuba09]. After careful accounting for the internal reddening of the galaxy, a mild metallicity gradient is confirmed in NGC 891’s spheroidal component, which is surveyed out to $\sim20$kpc (assuming elliptical radii), and suggested to arise from the presence of a distinct outer halo, similarly to the MW (@ibata09). Most importantly, and for the first time, this refined analysis reveals a substantial amount of substructure not only in the RGB spatial distribution but also as metallicity fluctuations in the halo of NGC 891. This evidence points at multiple small accretion events that have not fully blended into the smooth halo.
Motivated by these studies, @mouhcine10 provide the first attempt to derive a PAndAS-like map of a MW-analogue beyond the LG: their wide-field map of NGC 891’s halo is shown in Fig. \[fig:mouhcine10\]. The panoramic survey, performed contiguously with Subaru/SuprimeCam, covers an impressive $\sim90\times90$kpc$^2$ in the halo of NGC 891 with the $V$ and $i$ filters, reaching $\sim2$mag below the TRGB. Among the abundant substructures uncovered by the RGB map around NGC 891, a system of arcs/streams reaches out some $\sim50$kpc into the halo, including the first giant stream detected beyond the LG with ground-based imaging. The latter’s shape does not rule out a single accretion event origin, but a possible progenitor cannot be identified as a surviving stellar overdensity. These structures appear to be old, given the absence of corresponding overdensities in the luminous AGB (i.e., intermediate-age populations) maps. Another surprising feature highlighted by the RGB map is a flattened, super-thick envelope surrounding the disk and bulge of NGC 891, which does not seem to constitute a simple extension of its thick disk but is instead believed to generate from the tidal disruption of satellites given its non-smooth nature (@ibata09).
### M81
![Isodensity contour map of red RGB stars in the M81 group, as observed by Subaru/HyperSuprimeCam. Structures up to $20\sigma$ above the background level are visible; the cross marks represent the centres of known M81 group members, while solid lines are ${R}_{25}$ of galaxies. The high degree of substructure underlines the ongoing tidal interactions in this group; note in particular the S-shape of the outer regions in NGC 3077 and M82. Fig. 4 from [@okamoto15], reproduced by permission of the AAS[]{data-label="fig:okamoto"}](okamoto_m81.eps){width="7.cm"}
Located at a distance of 3.6Mpc (@radburn11) and with a dynamical mass inside 20kpc of $\sim10^{11}\,M_\odot$, M81 is one of the closest MW-analogues, and has thus been among the first targets for extended halo studies beyond the LG. The earliest H[i]{} imaging of the galaxy group dominated by this spiral unambiguously shows a spectacular amount of substructure, most prominently a bridge of gas between M81 and its brightest companions NGC 3077 and M82, located at a projected distance of $\sim60$kpc (@vanderhulst79 [@yun94]).
Given the high level of interaction and H[i]{} substructure in a group that can be considered an LG-analogue, it is natural to pursue the investigation of this complex environment even further. The intergalactic gas clouds permeating this environment are traced by young stellar systems identified in resolved stellar studies (@durrell04 [@davidge08; @demello08]). Some of them are classified as tidal dwarf galaxies, such as Holmberg IX and the Garland (@makarova02 [@kara04; @sabbi08; @weisz08]), characterized by a predominance of young stellar populations. This type of galaxy has no counterpart in our own LG, and it is believed to be DM-free (see, e.g., @duc00).
The first detailed look into the resolved populations in the outskirts of M81 is through the eye of [*HST*]{}: the predominantly old halo RGB stars show a broad range of metallicities and a radial gradient (@tikhonov05 [@mouhcine05c]). The radial stellar counts (along several different directions) also reveal a break at a radius of $\sim25$kpc, which is interpreted as the transition point between thick disk and halo (@tikhonov05). In a similar fashion, the ground-based wide-field imager Subaru/SuprimeCam has been used to uncover a faint and extended component beyond M81’s disk with a flat surface brightness profile extending out to $\sim0.5$deg (or $\sim30$kpc) to the north of M81 (@barker09). This low surface brightness feature ($\sim28$magarcsec$^{-2}$) traced by the brightest RGB star counts appears bluer than the disk, suggesting a metallicity lower than that of M81’s main body, but its true nature remains unclear. The authors suggest this component to have intermediate properties between the MW’s halo and its thick disk, but the limited surveyed area ($0.3$deg$^2$) precludes any robust conclusions.
As part of a campaign to obtain panoramic views of nearby galaxy haloes, [@mouhcine09] present a $0.9\times 0.9$deg$^2$ view of M81’s surroundings obtained with the CFHT/MegaCam imager. The images resolve individual RGB stars down to $\sim2$mag below the TRGB, but this study focusses on the younger, bright populations such as massive main sequence stars and red supergiants, which reveal further young systems tracing the H[i]{} tidal distribution between M81 and its companions. These systems are younger than the estimated dynamical age of the large-scale interaction and do not have an old population counterpart, suggesting that they are not simply being detached from the main body of the primary galaxies but are instead formed within the H[i]{} clouds.
[@durrell10] recently conducted a deeper, albeit spatially limited, [*HST*]{} study of a field at a galactocentric distance of $\sim20$kpc. This field reveals an \[M/H\]$\sim-1.15$ population with an approximate old age of $\sim9$Gyr. This field thus contains the most metal-poor stars found in M81’s halo to that date, which led the authors to the conclusion that they were dealing with an authentic halo component. This study is extended by [@monachesi13] with the [*HST*]{} GHOSTS dataset (see Sect. \[systematic\]): they construct a colour profile out to a radius of $\sim50$kpc, and this dataset does not show a significant gradient. The mean photometric metallicity derived is \[Fe/H\]$\sim-1.2$, similarly to @durrell10. This result is found to be in good agreement with simulations and the authors suggest that the halo of M81 could have been assembled through an early accretion of satellites with comparable mass (e.g., @cooper10 [@font06]).
As a further step in the investigation of M81’s halo, the [@barker09] and [@mouhcine09] ground-based imaging of M81 is being improved by means of the Subaru/HyperSuprimeCam. The first $\sim2\times2$deg$^2$ ($\sim100\times115$kpc$^2$) resolved stellar maps from different subpopulations (upper main sequence, red supergiants, RGB and AGB stars) are presented in [@okamoto15] and constitute a preview of an even wider-field effort to map the extended halo of this group. These first maps (see Fig. \[fig:okamoto\]) confirm a high degree of substructure, most interestingly: the youngest populations nicely trace the H[i]{} gas content, confirming previous small FoV studies; the RGB distributions are smoother and significantly more extended than the young component, and show stream-like overlaps between the dominant group galaxies, e.g., M82’s stars clearly being stripped by M81; a redder RGB distribution is detected for M81 and NGC 3077 compared to M82, indicating a lower metallicity in the latter; in addition, M82 and NGC 3077’s outer regions present S-shaped morphologies, a smoking gun of the tidal interaction with M81 and typical of interacting dwarf galaxies with larger companions (e.g., @penarrubia09).
Not less importantly, the widest-field survey to date ($\sim65$deg$^2$) of the M81 group has been performed by [@chiboucas09] with CFHT/MegaCam, although with only one filter. The main goal of this survey was to identify new, faint dwarf galaxies and investigate the satellite LF in a highly interacting group environment as compared to the LG. This is the first survey to systematically search for faint dwarfs beyond the LG. Resolved spatial overdensities consistent with candidate dwarfs have been followed up with two-band [*HST*]{}/ACS and [*HST*]{}/WFPC2 observations. Fourteen of the 22 candidates turned out to be real satellites of M81 based on their CMDs and TRGB distances, extending the previously known galaxy LF in this group by three orders of magnitude down to $M_r\sim-9.0$ (@chiboucas13), with an additional possibly ultra-faint member at $M_r\sim-7.0$. The measured slope of the LF in the M81 group appears to be flatter than cosmological predictions ($\alpha\sim-1.27$, in contrast to the theoretical value of $\alpha\sim-1.8$), similar to what has been found for the MW and M31 satellites.
### NGC 253
Another obvious MW-mass spiral target for halo studies is NGC 253 ($\sim3.5$Mpc, @radburn11). Its role as the brightest object within the loose Sculptor filament of galaxies makes it ideally suited to investigate the effects of external environment on the assembly of haloes. As already apparent from old photographic plates, NGC 253’s outskirts show faint perturbation signs, such as an extended shelf to the south of its disk (@malin97), pointing at a possible accretion event. This spiral galaxy, despite its relative isolation, is experiencing a recent starburst and a pronounced nuclear outflow: the latter is believed to host local star formation extending as high as $\sim15$kpc above the disk in the minor axis direction (see @comeron01, and references therein).
The resolved near-infrared study of [@davidge10] allowed the authors to detect bright AGB stars, but not RGB stars, extending out to $\sim13$kpc from the disk plane in the south direction: these are interpreted as being expelled from the disk into the halo as a consequence of a recent interaction. Subsequently, [@bailin11] exploited a combination of [*HST*]{} data from the GHOSTS survey and ground-based Magellan/IMACS imaging, the former being deeper while the latter has a more extended FoV (out to $\sim30$kpc in the halo of NGC 253 in the south direction). The authors are able to estimate NGC 253’s halo mass as $\sim2\times10^9\,M_\odot$, or 6% of the galaxy’s total stellar mass: this value is broadly consistent with those derived from the MW and M31 but higher, reminiscent of the halo-to-halo scatter seen in simulations. A power law is fit to the RGB radial profile, which is found to be slightly steeper than that of the two LG spirals; the halo also appears to be flattened in the same direction as the disk component. This is one of the few studies to date to quantitatively measure such parameters for a halo beyond the LG, and it sets the stage for the possibilities opened by similar studies of other nearby galaxies. The RGB density maps derived in [@bailin11] from IMACS imaging confirm the early detection of a shelf structure, and uncover several additional kpc-scale substructures in the halo of this spiral.
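The power-law measurement mentioned above boils down to a straight-line fit to the star counts in log–log space. A minimal sketch of that step in Python — the radii, densities and the input slope below are invented for illustration, not the published GHOSTS/IMACS measurements:

```python
import math

def fit_power_law(radii, densities):
    """Least-squares fit of log10(Sigma) = b + alpha * log10(r),
    i.e. the power-law index alpha of a profile Sigma ~ r**alpha."""
    xs = [math.log10(r) for r in radii]
    ys = [math.log10(d) for d in densities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    alpha = sxy / sxx
    return alpha, my - alpha * mx

# Synthetic halo profile with a known slope (alpha = -2.8 is an
# arbitrary choice, not the measured NGC 253 value).
radii = [5.0 + 2.5 * i for i in range(11)]         # kpc
densities = [1.0e4 * r ** -2.8 for r in radii]     # stars / kpc^2
alpha, b = fit_power_law(radii, densities)
print(alpha)   # -2.8 up to floating-point error
```

With real catalogues one would first subtract the fore/background contamination level from the binned counts before taking logarithms.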
A more recent wide-field study of NGC 253 is presented by [@greggio14], who exploit the near-infrared VISTA/VIRCAM imager to study the RGB and AGB stellar content of this galaxy out to $\sim40-50$kpc, covering also the northern portion which was not included in previous studies. This portion, in particular, reveals an RGB substructure symmetric (and likely connected) to the one in the south. A prominent arc ($\sim20$kpc in length) to the north-west of the disk is detected and estimated to arise from a progenitor with a stellar mass of roughly $\sim7\times10^6\,M_\odot$. The RGB radial density profile shows a break at a radius of $\sim25$kpc, indicative of the transition from disk to halo. The elongated halo component already discussed in [@bailin11] is confirmed here, but is considered to be an inner halo: an outer, more spherical and homogeneous component extends at least out to the galactocentric distances covered by this survey. Intriguingly, the AGB density map reveals that 25% of this intermediate-age (i.e., up to a few Gyr old) population is spread out to $\sim30$kpc above the disk: this component cannot easily be explained with either an in-situ or an accreted origin.
NGC 253 is also one of the two targets of the Panoramic Imaging Survey of Centaurus and Sculptor (PISCeS), recently initiated with the wide-field imager Magellan/Megacam. This ambitious survey aims at obtaining RGB stellar maps of this galaxy and of the elliptical Centaurus A (Cen A; see next Section) out to galactocentric radii of $\sim150$kpc, similarly to the PAndAS survey of M31. Early results from this survey include the discovery of two new faint satellites of NGC 253, one of which is clearly elongated and in the process of being disrupted by its host (@sand14 [@toloba16b]).
### NGC 5128 (Centaurus A) {#cena}
![Surface density map of RGB stars in the halo of Cen A, obtained with Magellan/Megacam as part of the PISCeS survey. The map extends out to a radius of 150kpc in the north and east directions (physical and density scales are reported). Several tidal features are easily recognized, including a stunning disrupting dwarf with tails 2deg long in the outer halo, an extended sparse cloud to the south of the galaxy, as well as arcs and plumes around the inner regions, tracing both ongoing and past accretion events. Fig. 3 from [@crnojevic16], reproduced by permission of the AAS[]{data-label="fig:crnojevic16"}](fig2_1.eps){width="11cm"}
It is important to target galaxies of different morphologies and environments to thoroughly investigate the assembly of haloes. The closest ($\sim3.8$Mpc; @harrisg09) elliptical galaxy is Centaurus A (Cen A; technically speaking, Maffei 1 is slightly closer but it lies behind the Galactic disk and is thus heavily reddened, see @wu14). Cen A is the dominant galaxy of a rich and dense group, which also has a second subgroup component centred on the spiral M83 (e.g., @kara07).
Despite having often been referred to as a peculiar galaxy, due to its pronounced radio activity, its central dust lanes, and a perturbed morphology, the luminosity of Cen A is quite typical of field elliptical galaxies: a recent ($<1$Gyr) merger event is believed to be the culprit for its peculiar features (see @israel98, and references therein). Besides this main merger event, [@peng02] uncover a system of faint shells and an arc within $\sim25$kpc of Cen A’s centre from integrated light observations; the arc is believed to have been produced by the infall of a low-mass, star forming galaxy around $\sim300$Myr ago.
This elliptical galaxy has been the subject of a systematic study conducted with [*HST*]{}/ACS and [*HST*]{}/WFPC2 throughout the past couple of decades: a number of pointings at increasingly large galactocentric radii (from a few out to $\sim150$kpc) have been used to investigate the properties and gradients of Cen A’s halo populations (@rejkuba14, and references therein). The considered pointings out to 40kpc reveal metal-rich populations (\[Fe/H\]$>-1.0$), not dissimilar to what has been observed for the haloes of spiral galaxies. The deepest CMD to date of this elliptical is presented by [@rejkuba11] for the [*HST*]{} field at 40kpc: this study concludes that the vast majority of Cen A’s halo population is old ($\sim12$Gyr), with a younger ($\sim2-4$Gyr) component accounting for $\sim20\%$ of the total population.
The first wide-field study of Cen A was performed with the ground-based VLT/VIMOS imager, reaching out to $\sim85$kpc along both minor and major axes (@crnojevic13). Cen A’s halo population seems to extend all the way out to this large radius. This study confirms the relatively high metallicity for halo populations found by the [*HST*]{} studies, although with a considerable presence of metal-poor stars at all radii; the authors also highlight the absence of a strong metallicity gradient from a $\sim30$kpc radius out to the most distant regions probed. This study suggests that the outer regions of Cen A’s halo show an increase in ellipticity as a function of radius, which could, however, be interpreted as the presence of substructure contaminating the observed fields. A subsequent study exploits additional [*HST*]{} pointings out to a remarkably large radius of $\sim150$kpc: the edge of Cen A’s halo is not reached even by this study (@rejkuba14). This dataset, analysed together with the previous [*HST*]{} pointings, confirms that a very mild metallicity gradient is present, with median metallicities remaining high out to the largest distances probed. [@rejkuba14], however, also detect a significant pointing-to-pointing variation in both the RGB star counts and the median metallicity, which is likely indicative of non-mixed accreted populations.
Recently, the PISCeS survey (see previous Section) has sketched a PAndAS-like picture of Cen A’s halo out to $\sim150$kpc: the RGB stellar density map derived from a mosaic of Magellan/Megacam images is presented in Fig. \[fig:crnojevic16\]. This map, very much like the ones obtained for M31 and NGC 891, uncovers a plethora of faint substructures, both in the inner regions of the target galaxy and in its outskirts. The morphological variety of these features is reminiscent of that observed in PAndAS, with shells, plumes, an extended cloud and long tidal streams. In particular, one of the newly discovered dwarf satellites of Cen A is clearly in the process of being disrupted, with $\sim2$deg long tails: taking into account the stellar content of these tails, this galaxy’s pre-disruption luminosity could have been similar to that of Sagittarius in the LG. This survey also led to the discovery of nine (confirmed) dwarf satellites down to $M_V\sim-7$. Their properties are consistent with those of faint LG satellites, but some of them lie at the faint/diffuse end of the LG luminosity/surface brightness/radius distribution: this indicates that we might be looking at previously unexplored physical regimes for these faintest satellites, which opens new exciting perspectives for future studies.
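Density maps such as the one in Fig. \[fig:crnojevic16\] are, at their core, binned counts of CMD-selected RGB candidates. A minimal sketch of the binning step on a mock catalogue (the positions, bin sizes and the injected “satellite” clump are all invented for illustration):

```python
import math
import random

def density_map(xs, ys, extent, nbins):
    """Bin stellar positions into an nbins x nbins grid of counts.
    The grid spans [-extent, extent] in both coordinates; stars
    outside that window are ignored."""
    grid = [[0] * nbins for _ in range(nbins)]
    width = 2.0 * extent / nbins
    for x, y in zip(xs, ys):
        ix = math.floor((x + extent) / width)
        iy = math.floor((y + extent) / width)
        if 0 <= ix < nbins and 0 <= iy < nbins:
            grid[iy][ix] += 1
    return grid

random.seed(1)
# Mock catalogue: a smooth spheroidal component plus a compact
# clump mimicking an accreted satellite (positions in kpc).
halo = [(random.gauss(0, 40), random.gauss(0, 40)) for _ in range(5000)]
clump = [(random.gauss(60, 3), random.gauss(60, 3)) for _ in range(500)]
xs, ys = zip(*(halo + clump))
grid = density_map(xs, ys, extent=75.0, nbins=15)
# The clump stands out as a strong local overdensity in its bin --
# essentially how streams and satellites are picked up in real maps.
print(grid[13][13], grid[0][0])
```

In practice the smooth component is modelled and subtracted, and the residual map is smoothed before substructure is flagged.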
Summary and Future Prospects {#conclusions}
============================
In a $\rm \Lambda$CDM hierarchical model, all galaxies are predicted to have experienced mergers, of which many should be recognizable as debris/streams that make up for a large fraction of their haloes. Haloes and their substructures thus provide a unique glimpse into the assembly history of galaxies, and can inform the models at the smallest galactic scales, where they still fall short in reproducing observations. The time is now ripe for in-depth systematic studies of the resolved stellar populations in galaxy haloes, which will dramatically increase our understanding of galaxy evolution over the next decade.
The challenges for this type of study are of a different nature for each target: for our own Galaxy, state-of-the-art results on its halo shape, profile and mass inevitably suffer from assumptions on underlying density models and extrapolations of the available data to radii larger than observed. The major current limitation of MW halo studies lies in observational biases due to small field-of-view samples, which preclude the identification of possible substructure contamination. Future surveys hold the promise to advance the knowledge of our Galaxy by obtaining significantly larger samples of tracers, especially in areas so far not covered. Most notably, the astrometric [*Gaia*]{} mission (which will provide unprecedented six-dimensional phase space information for two billion stars out to the inner MW halo) and the Large Synoptic Survey Telescope (LSST; designed to provide a southern sky counterpart to SDSS, and reaching $\sim4$ magnitudes fainter than its predecessor for a total sample of tens of billions of stars) are going to revolutionize our view of the MW. At the same time, the current and future generation of high-resolution spectrographs will follow up these surveys from the ground, providing comprehensive kinematic and chemical information to assess the origin of halo stars and characterize their birthplaces (see also Figueras, this volume).
The pioneering studies of an increasing number of haloes beyond the LG, and across a range of masses, will soon be extended by the next generation of ground-based extremely large telescopes (E-ELT, GMT, TMT), as well as space-borne missions ([*JWST*]{}, [*Euclid*]{}, [*WFIRST*]{}). The PAndAS survey of M31 has extensively demonstrated that only the synergy of wide-field ground-based observations, deep (but spatially limited) observations from space, and spectroscopy can return a truly global understanding of haloes made up of a complex mixture of in-situ and accreted populations. The aforementioned facilities will open new perspectives with wide-field optical and infrared imagers in concert with high-resolution spectrographs, which will allow us to systematically survey hundreds of galaxies within tens of Mpc in the next decade or two. For example, with the E-ELT/MICADO and [*JWST*]{}/NIRcam imagers (the former having higher resolving power and the latter a wider field-of-view), we should resolve stars down to the HB within $\sim10$Mpc, thus identifying and characterizing the SFHs of streams and faint satellites; derive radial profiles, MDFs and stellar population gradients in haloes within 20Mpc from the uppermost few magnitudes of the RGB; and trace the halo shape and possible overdensities down to $\mu_V\sim33$magarcsec$^{-2}$ from the uppermost $\sim0.5$mag of the RGB out to 50Mpc (@greggio16).
These observational constraints will be crucial to inform increasingly sophisticated theoretical models, and ultimately answer intriguing open questions (as well as possibly unexpected ones that will likely be raised by these observations themselves), such as:
- [Do all galaxies have haloes?]{}
- [What are the relative fractions of in-situ versus accreted populations in galaxy haloes, and how does this depend on galactocentric distance, galaxy morphology, and environment?]{}
- [What are the properties of the objects currently being accreted, i.e., mass, chemical content, SFH, orbital properties, and how do they relate to those of the present day low-mass satellites?]{}
- [Do low-mass galaxies possess haloes/satellites of their own, and what is their fate and contribution upon infall onto a massive galaxy?]{}
- [How extended really are the haloes of massive galaxies?]{}
- [What is the shape and mass of the DM haloes underlying galaxies?]{}
- [What is the relation between the outer halo and the bulge/disk of a galaxy?]{}
- [What is the role of internal versus external processes in shaping a galaxy’s properties, especially at the low-mass end of the galaxy LF?]{}
- [What is the relation between the present-day haloes/satellites and their unresolved, high-redshift counterparts?]{}
The era of resolved populations in galaxy haloes has just begun, and it holds the promise to be a golden one.
I would like to thank the organizers for a lively and stimulating conference. I am indebted to S. Pasetto for his advice and support throughout the preparation of this contribution. I acknowledge the hospitality of the Carnegie Observatories during the completion of this work.
| {
"pile_set_name": "ArXiv"
} |
In this paper, the Universe is viewed as a curved four-dimensional bubble[@bubble] floating in a higher-dimensional flat background. To discuss the quantum disintegration of such a Brane-like Universe and derive the corresponding time-dependent, Big-Bang-resistant wave function, we restrict ourselves to the framework of the mini-superspace model[@mini].
Assuming the Universe to be homogeneous, isotropic, and closed, the Friedman-Robertson-Walker (FRW) line element can be written in the form $$ds^{2} = \sigma^{2}\left[-({\dot \tau}^{2}-
{\dot a}^{2})dt^{2}+
a^{2}(t)d\Omega_{3}^{2}\right] ~,$$ where $d\Omega_{3}^{2}$ denotes the metric of a unit $3$-sphere, and $\sigma^{2}\equiv \frac{4G}{3\pi}$ is a normalization factor. One may still exercise of course the gauge freedom of fixing $\tau(t)$. However, the more general form helps us keep track of the way the FRW manifold is embedded within a $5$-dim Minkowski spacetime $$ds^{2}_{5}=\sigma^{2}\left[-d\tau^{2}+da^{2}+
a^{2}d\Omega_{3}^{2}\right] ~.
\label{5flat}$$ A pedagogical case of sufficient complexity involves a positive cosmological constant $\Lambda$. The corresponding mini-Lagrangian, defined by $I=\int_{}^{}{\cal L}dt$, is given by $${\cal L} = -\left(\frac{a{\dot a}^{2}}
{\sqrt{{\dot \tau}^{2}-{\dot a}^{2}}} +
a(H^{2}a^{2}-1)\sqrt{{\dot \tau}^{2} -
{\dot a}^{2}}\right) ~,$$ where $H^{2}\equiv\frac{16\pi G}{3} \Lambda$.
$\bullet$ The *standard* mini-superspace prescription is to impose the so-called cosmic gauge, namely ${\dot \tau}^{2}-{\dot a}^{2}=1$, and treat $a(t)$ as a single canonical variable. This way, the variation with respect to $a(t)$ gives rise to the Raychaudhuri equation, and the complementary evolution equation $${\dot a}^{2}+1 = H^{2}a^{2}$$ is then nothing but the Arnowitt-Deser-Misner (ADM) Hamiltonian constraint ${\cal H}=0$. With $P=-2a{\dot a}$ being the momentum conjugate to $a$, the Hamiltonian takes the form $${\cal H} = -\frac{1}{4a}\left(P^{2}+V(a)\right) ~,$$ involving the familiar potential $$V(a) = 4a^{2}\left(1-H^{2}a^{2}\right) ~.$$ Quantization means replacing $\displaystyle{P
\rightarrow -i\frac{\delta}{\delta a}}$ and imposing the Wheeler-DeWitt (WDW) equation ${\cal H}\Psi(a)=0$ on the wave function of the Universe.
$\bullet$ The *non-standard* procedure would be to allow both $a(t)$ and $\tau(t)$ to serve as two independent canonical variables. The variation with respect to $\tau(t)$ results in a simple conservation law. Owing to ${\cal L}(a,\dot a,\dot \tau)$, the ’energy’ $\omega$ conjugate to $\tau$ is conserved. Imposing the cosmic gauge ${\dot \tau}^{2}-{\dot a}^{2}=1$ only at this stage, we can rearrange the ’energy’ conservation equation into a generalized evolution equation $${\dot a}^{2}+1 = \xi H^{2}a^{2} ~,
\label{RT}$$ with $\xi(a)$ being a root of $$\xi (\xi-1)^{2}=\frac{\omega^{2}}{H^{6}a^{8}} ~.
\label{xi}$$ Eq.(\[RT\]) is recognized as the Regge-Teitelboim (RT)[@RT; @RTcosmo] equation of motion, with the Einstein limit approached as $\omega\rightarrow 0$, that is $\xi \rightarrow1$ (the physical branch is identified with $\xi \geq 1$). This comes as no surprise, given that the RT canonical variables are in fact the embedding coordinates. Recalling that classical RT-cosmology[@RTcosmo] involves only one independent equation of motion, the equation arising from the variation with respect to $a(t)$ is superfluous.
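Away from the Einstein limit, eq.(\[xi\]) defines $\xi(a)$ only implicitly. On the physical branch $\xi\geq 1$, however, the left-hand side $\xi(\xi-1)^{2}$ grows monotonically from zero, so the root can be bracketed and bisected. A small numerical sketch (the parameter values are arbitrary):

```python
def xi_root(a, H, omega, tol=1e-12):
    """Physical-branch (xi >= 1) root of xi*(xi-1)**2 = omega**2/(H**6 a**8).
    On xi >= 1 the left-hand side increases monotonically, so bisection
    on a doubling bracket is guaranteed to converge."""
    rhs = omega ** 2 / (H ** 6 * a ** 8)
    lo, hi = 1.0, 2.0
    while hi * (hi - 1.0) ** 2 < rhs:      # grow the bracket until it holds the root
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if mid * (mid - 1.0) ** 2 < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

H, a = 0.1, 5.0
print(xi_root(a, H, omega=0.05))    # slightly above 1
print(xi_root(a, H, omega=1e-8))    # Einstein limit: xi -> 1
```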
The $2$-momentum $\displaystyle{P_{\alpha}=(\frac{\delta
{\cal L}}{\delta{\dot \tau}},\frac{\delta {\cal L}}{\delta
{\dot a}})}$ is given by
\[P\] $$\begin{aligned}
P_{\tau} & = &
\left[\left(\frac{\dot a}{\sqrt{{\dot \tau}^{2} -
{\dot a}^{2}}}\right)^{2} + 1 - H^{2}a^{2}\right]
\frac{a\dot \tau}{\sqrt{{\dot \tau}^{2}-{\dot a}^{2}}} ~, \\
P_{a} & = &
-\left[\left(\frac{\dot a}{\sqrt{{\dot \tau}^{2} -
{\dot a}^{2}}}\right)^{2} + 3 - H^{2}a^{2}\right]
\frac{a\dot a}{\sqrt{{\dot \tau}^{2}-{\dot a}^{2}}} ~.\end{aligned}$$
The $t$-derivatives which enter eq.(\[P\]) conveniently furnish a time-like unit $2$-vector $$n^{\alpha} \equiv
\left(\frac{\dot \tau}{\sqrt{{\dot \tau}^{2}-{\dot a}^{2}}},
\frac{\dot a}{\sqrt{{\dot \tau}^{2}-{\dot a}^{2}}}\right)
~,~~ n^{2}+1=0 ~.$$ Invoking now a $(\dot \tau,\dot a)$-independent matrix $$\rho^{\alpha}_{\,\beta} =
\left(
\begin{array}{cc}
2a(H^{2}a^{2}-1) & 0 \\
0 & 2a(H^{2}a^{2}-2)
\end{array}
\right) ~,
\label{rho}$$ $P^{\alpha}$ can be put in the compact form $$P^{\alpha} = \frac{1}{2}(n\rho n)n^{\alpha} +
\rho^{\alpha}_{\beta}n^{\beta} ~,$$ after subtracting $a(H^{2}a^{2}-1)(n^{2}+1)n^{\alpha}$. A naive attempt to solve $n^{\alpha}(\rho, P)$, as apparently dictated by the Hamiltonian formalism, and substitute into the constraint $n^{2}+1 = 0$, falls short. The cubic equation involved does not admit a simple solution, and the resulting constraint is anything but quadratic in the momenta.
The way out, that is linearizing the problem, involves the definition of an independent quantity $\lambda$, such that $$n\rho n +2\lambda = 0 ~.$$ Off the Einstein limit, $\lambda$ is not an eigenvalue of $\rho^{\alpha}_{\,\beta}$, and we can solve for $n^{\alpha}
(\rho,P,\lambda)$ to find $$n^{\alpha}=\left[\left(\rho-\lambda
I\right)^{-1}\right]^{\alpha}_{\,\beta}P^{\beta} ~.$$ This allows us to finally convert the combined constraints $n^{2}+1=0$ and $n\rho n +2\lambda-\lambda (n^{2}+1)=0$ into $$\left\{
\begin{array}{c}
P(\rho-\lambda I)^{-2}P + 1 = 0 ~,\\
P(\rho-\lambda I)^{-1}P + \lambda = 0 ~.
\end{array}
\right.
\label{PP}$$ The first equation is the derivative with respect to $\lambda$ of the other. This suggests that $\lambda$ be elevated to the level of a canonical non-dynamical variable in the forthcoming Hamiltonian formalism.
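For a diagonal $\rho$ (as in the mini-superspace case) both constraints reduce to scalar sums, and the derivative relation between them is easy to verify numerically; the momenta and eigenvalues in this sketch are arbitrary test values:

```python
def second_constraint(P, rho, lam):
    """P (rho - lam I)^{-1} P + lam, for diagonal rho."""
    return sum(p * p / (r - lam) for p, r in zip(P, rho)) + lam

def first_constraint(P, rho, lam):
    """P (rho - lam I)^{-2} P + 1, for diagonal rho."""
    return sum(p * p / (r - lam) ** 2 for p, r in zip(P, rho)) + 1.0

P, rho, lam, h = (0.7, -1.3), (2.0, -4.0), 0.31, 1e-6
numeric = (second_constraint(P, rho, lam + h)
           - second_constraint(P, rho, lam - h)) / (2.0 * h)
# The first constraint is d/dlam of the second one:
print(abs(numeric - first_constraint(P, rho, lam)))   # ~ 0
```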
Needless to say, the above seems to be the tip of a bigger iceberg, namely a mini-superspace version of Brane-like gravity. Indeed, carrying out the (say) $10$-dim RT embedding of the $4$-dim ADM formalism, we have recently derived the quadratic Hamiltonian[@RTH] $${\cal H} = \frac{1}{2}N
\left[P_{A}\left((\rho-\lambda I)^{-1}\right)^{AB}P_{B} +
\lambda \right] + N^{i}y^{A}_{\,,i}P_{A} ~,$$ where the novel Lagrange multiplier $\lambda$ accompanies the standard non-dynamical variables, the lapse function $N$ and the shift vector $N^{i}$. To shed light on the matrix $\rho^{AB}$, one infers that $$\frac{\rho^{AB}}{ 2\sqrt{h}}=(h^{ia}h^{jb}-h^{ij}
h^{ab})y^{A}_{\,|ab}y^{B}_{\,|ij}+\left({\cal R}^{(3)}+
6H^{2}\right)\eta^{AB} ~,$$ with ${\cal R}^{(3)}$ denoting the $3$-dim Ricci scalar constructed by means of the spatial $3$-metric $h_{ij}=\eta_{AB}y^{A}_{|i}y^{B}_{|j}$.
The quantum theory dictates ${\displaystyle P_{A}\rightarrow
-i\frac{\delta}{\delta y^{A}}}$. The corresponding wave function $\Psi(\tau,a)$ is subject to two Virasoro-type constraints. The so-called momentum constraint equation $\displaystyle
{y^{A}_{,i}\frac{\delta\Psi}{\delta y^{A}}=0}$, which is trivially satisfied at the mini-superspace level, is accompanied by a *bifurcated* WDW equation $$\left\{
\begin{array}{c}
\displaystyle{\frac{\delta}{\delta y^{A}}
\left((\rho-\lambda I)^{-1}\right)^{AB}
\frac{\delta}{\delta y^{B}}\Psi = \lambda\Psi} ~, \\
\displaystyle{\frac{\delta}{\delta y^{A}}
\left((\rho-\lambda I)^{-2}\right)^{AB}
\frac{\delta}{\delta y^{B}}\Psi = \Psi} ~.
\end{array}
\right.
\label{WDW}$$ Given the diagonal $\rho^{\alpha}_{\,\beta}$ specified by eq.(\[rho\]), and up to all sorts of order ambiguities, the mini-superspace wave function $\Psi(\tau,a)$ obeys
$$\begin{aligned}
-\frac{\partial^{2}\Psi}{\partial \tau^{2}} & = &
\xi(\xi-1)^{2}H^{6}a^{8}\Psi ~,
\label{psitau} \\
-\frac{\partial^{2}\Psi}{\partial a^{2}} & = &
a^{2}\left[2+(\xi-1)H^{2}a^{2}\right]^{2}
(-1+\xi H^{2}a^{2})\Psi ~,
\label{psia}\end{aligned}$$
where the $\lambda \leftrightarrow \xi$ dictionary reads $$\lambda \equiv a\left[(\xi+1)H^{2}a^{2}-2\right] ~.$$ The separation of variables is accomplished by substituting $\Psi(\tau,a)=\psi(a)\chi(\tau)$. Eq.(\[psia\]) then tells us that $\xi=\xi(a)$. In turn, eq.(\[psitau\]) can admit a solution only provided $\xi(a)$ is such that $\xi(\xi-1)^{2}H^{6}a^{8}
=\omega^{2}$ is a constant. This is how the conserved ’energy’ $\omega$, introduced by eq.(\[xi\]), enters the quantum game.
Altogether, the $\tau$-dependent wave function of the Universe acquires the familiar form $$\Psi(\tau,a) = \psi(a)e^{-i\omega\tau} ~,$$ with the $\tau$-dependence dropping out at the Einstein limit. The radial component $\psi(a)$ satisfies the residual WDW equation $$\left(-\frac{\partial^{2}}{\partial a^{2}} +
V(a)\right)\psi = 0 ~,$$ where the modified potential, depicted in fig.(1), is given explicitly by $$V(a) = a^{2}\left[2+(\xi-1)H^{2}a^{2}\right]^{2}
(1-\xi H^{2}a^{2}) ~.$$
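Evaluating this potential requires the $\xi(a)$ root of eq.(\[xi\]) at every $a$. A short sketch (the parameter values are illustrative) that also confirms the collapse to the Einstein potential $4a^{2}(1-H^{2}a^{2})$ as $\omega \rightarrow 0$:

```python
def xi_root(a, H, omega, tol=1e-12):
    """Physical-branch (xi >= 1) root of xi*(xi-1)**2 = omega**2/(H**6 a**8),
    found by bisection (the left-hand side is monotonic for xi >= 1)."""
    rhs = omega ** 2 / (H ** 6 * a ** 8)
    lo, hi = 1.0, 2.0
    while hi * (hi - 1.0) ** 2 < rhs:
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mid * (mid - 1.0) ** 2 < rhs else (lo, mid)
    return 0.5 * (lo + hi)

def V(a, H, omega):
    """Modified WDW potential a^2 [2 + (xi-1) H^2 a^2]^2 (1 - xi H^2 a^2)."""
    xi = xi_root(a, H, omega)
    return a ** 2 * (2.0 + (xi - 1.0) * H ** 2 * a ** 2) ** 2 \
        * (1.0 - xi * H ** 2 * a ** 2)

def V_einstein(a, H):
    """Einstein-limit potential 4 a^2 (1 - H^2 a^2)."""
    return 4.0 * a ** 2 * (1.0 - H ** 2 * a ** 2)

H, a = 0.1, 5.0
print(V(a, H, omega=1e-10), V_einstein(a, H))  # nearly identical
print(V(0.01, H, omega=0.1))                   # negative: the short-distance well
```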
![Wheeler-DeWitt potential for a Brane-like Universe (solid curve), and for Einstein Universe (dashed curve).[]{data-label="fig1"}](fig1.eps)
$V(a)$ admits a barrier provided $\omega H\leq\frac{2}
{3\sqrt{3}}$, which we now adopt as the case of interest. The barrier is stretched between $a_{L}<a<a_{R}$, where $a_{L,R}$ are the two positive roots of $H^{2}a^{3}-a+
\omega=0$. For $\omega H\ll 1$, the classical turning points are located at $$a_{L}\simeq \omega ~,~~
a_{R}\simeq H^{-1}(1-\frac{1}{2}\omega H) ~.$$ At long distances, only a slight deviation from the original potential is detected, namely $$V(a \gg \omega) \simeq
4a^{2}(1-H\omega-H^{2}a^{2}) ~.$$ But at short distances, a serendipitous well (with a surplus of ‘kinetic energy’ at the origin) makes its appearance $$V(a \leq \omega) \simeq
-\omega^{2} -3\omega^{4/3}a^{2/3} +4a^{2} ~.
\label{well}$$ The emerging classically disconnected Embryonic epoch is the essence of brane-like quantum cosmology.
A theory of boundary conditions is still to be constructed. The situation is even more complicated in a scheme where the Big-Bang is classically alive and cannot be traded for a Euclidean conic-singularity-free pole. The Riemann tensor gets pathological as $a\rightarrow 0$, leaving us with no alternative but to interpret ’nothing’ [@nothing] as $$\Psi(\tau,a=0) = 0 ~.
\label{BB}$$ This way, following DeWitt's argument [@BBboundary], we ’neutralize’ the Big Bang singularity by making the origin quantum mechanically inaccessible to wave packets.
At this stage, while sticking to the full Lorentzian picture, namely $\Psi=\psi(a)e^{- i\omega \tau}$ even under the potential barrier, our discussion bifurcates with respect to the leftover boundary condition:
$\bullet$ Following Hartle-Hawking (HH)[@HH] or Linde (L)[@L] proposals, where Hermiticity (real $\omega$) is the name of the game, the naive WKB wave function under the barrier is given by $$\psi_{HH,L}(a_{L}<a<a_{R}) \simeq
\mp \frac{1}{\sqrt{V}}
\exp\left[\pm \int_{a_{L}}^{a}\sqrt{V}da'\right] ~,$$ respectively. The corresponding nucleation probability is $${\cal P} \sim
e^{{\displaystyle \pm 2\int_{a_{L}}^{a_{R}}
\sqrt{V}da'}} \simeq
e^{{\displaystyle \pm \frac{4}{3H^{2}}
(1-\frac{3}{2}H\omega)}} ~.$$ The matching at $a=a_{R}$ yields a symmetric (antisymmetric) combination of equal strength outgoing and ingoing waves. The $a=a_{L}$ matching into the Embryonic zone would contradict the Big-Bang boundary condition eq.(\[BB\]) unless $$\exp\left[2i \left(\int_{0}^{a_{L}}\sqrt{-V}da'
-\frac{\pi}{4}\right)\right] = \pm 1 ~.$$ The result is ’energy’ (not to be confused with the energy $E=0$) quantization. To be specific, for $\omega H \ll 1$, we invoke eq.(\[well\]) and after some algebra derive the discrete ’energy’ spectra $$\omega^{HH,L}_{n} \simeq
\sqrt{\frac{2}{3}(4n\pm 1)} ~,$$ such that $\omega^{L}_{min}=\sqrt{3}\omega^{HH}_{min}$.
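The spectra can be cross-checked numerically (a sketch of ours, with an illustrative $\omega$ and units with $H=1$): the phase integral over the well of eq.(\[well\]) evaluates to $\frac{3\pi}{8}\omega^{2}$ in closed form, so the matching condition steps $\omega^{2}$ in units of $2/3$:

```python
import numpy as np
from scipy.integrate import quad

w = 0.1                                   # illustrative 'energy' omega (omega*H << 1)
# short-distance well of eq. (well): -V = w^2 + 3 w^{4/3} a^{2/3} - 4 a^2 for a <= a_L ~ w
minus_V = lambda a: w**2 + 3.0*w**(4.0/3.0)*a**(2.0/3.0) - 4.0*a**2
# clip tiny negative round-off near the turning point a = w before the square root
phase, _ = quad(lambda a: np.sqrt(np.maximum(minus_V(a), 0.0)), 0.0, w)

# with a = w*x the integrand factorizes as w (2 x^{2/3} + 1) sqrt(1 - x^{2/3}),
# whose integral is (3 pi / 8) w^2 in closed form:
print(phase, 3.0*np.pi/8.0*w**2)
# the matching condition phase = pi/4 + m*pi/2 then gives omega^2 = (2/3)(2m+1),
# i.e. the two interleaved branches omega_n = sqrt((2/3)(4n +/- 1))
```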
Having a non-zero ground state ’energy’ $\omega_{min}$ is remarkable. It is the closest one can get to the Einstein limit $\omega=0$. But what exactly do we mean by a ground state, and why does the Einstein limit make sense? A successful (presumably Euclidean) theory of boundary conditions must explain why low $\omega$ is preferable to high $\omega$.
![Hartle-Hawking ground state ($n=0$) wave function of a brane-like Universe (notice its vanishing at the Big Bang). The dashed curve being the underlying brane-like Wheeler-DeWitt potential.[]{data-label="fig2"}](fig2.eps)
$\bullet$ Vilenkin (V)[@V] proposal on the other hand is characterized by an outgoing wave function $$\psi_{V}(a>a_{R}) \sim \frac{1}{\sqrt{-V}}
\exp\left[i\int_{a_{R}}^{a}\sqrt{-V}da'\right] ~.$$ The WKB behavior of the wave function can then be traced back all the way to the origin where it is supposed to vanish. The consistency condition then reads $$\exp\left[2i\left(\int_{0}^{a_{L}}\sqrt{-V}da'-
\frac{\pi}{4}\right)\right]
\simeq \frac{1+4\theta^{2}}{1-4\theta^{2}} ~,$$ where $\theta \equiv e^{\int_{a_{L}}^{a_{R}}\sqrt{V}da'}$ is the opacity coefficient. The latter equation can only be satisfied by a complex ’energy’ $\omega=\tilde{\omega}-\frac{1}{2}i\Gamma$. It should be noticed how the Hartle-Hawking (Linde) discrete spectrum, that is $\tilde{\omega}\rightarrow
\omega_{n}^{HH(L)}$ followed by $\Gamma\rightarrow 0$, is recovered for $\theta \ll 1$ ($\theta \gg 1$). Altogether, the disintegration of Vilenkin bubble highly resembles $\alpha$-decay.
Euclidization is next. At first glance it may look as if the Lorentzian and the Euclidean regimes share one and the same embedding spacetime, and that Euclidization can be formulated in the language of the mini-superspace light-cone. However, a simple investigation reveals that a closed Euclidean FRW metric cannot be embedded within a flat Minkowski spacetime. It calls for a flat Euclidean background, attainable by means of Wick rotation $\tau \rightarrow \pm i\tau_{E}$ (with the corresponding cosmic gauge being $\dot{\tau_{E}}^{2}+
\dot{a_{E}}^{2}=1$).
We are not in a position to tell whether Euclidean gravity is only a technical tool, serving to explain certain quantum and/or thermodynamic aspects of the Lorentzian theory, or perhaps has a life of its own. One way or the other, the emerging picture is of a Euclidean manifold sandwiched between two Lorentzian regimes.
![The Euclidean regime sandwiched between the Embryonic and the Expanding Lorentzian epochs.[]{data-label="fig3"}](fig3.eps)
The Euclidean time difference $\delta$ to travel back and forth across the $a_{L}<a<a_{R}$ well of the upside-down potential $-V$ is given by $$\delta = 2\int_{a_{L}}^{a_{R}}
\frac{da}{\sqrt{1-\xi H^{2}a^{2}}} ~,$$ and takes the value $$\delta \simeq \left\{
\begin{array}{ccc}
\frac{\pi}{H}
\left(1-\frac{2}{\pi}\sqrt{\omega H}\right) &
~~~\text{if}~ & \omega H \ll 1 ~, \\
4\sqrt{3}\omega & ~~~\text{if}~ &
\omega H =\frac{2}{3\sqrt{3}} ~.
\end{array}
\right.$$ Recall two relevant facts: (i) The Euclidean manifold *can* be periodic in $t_{E}$. The allowed periodicities are restricted, however, to the sequence $\Delta t_{E} = N\delta$ ($N$ integer). (ii) At the Euclidean de-Sitter limit, where $\omega
\rightarrow 0$, the Euclidean manifold *must* be periodic in $t_{E}$ with period $\Delta t_{E}=2\pi H^{-1}$, as otherwise a conic singularity is present. Combining these two facts, one can identify $t_{E}$ with $t_{E}+\Delta t_{E}$ provided $$\Delta t_{E} = 2\delta ~.$$ In turn, our bubble Universe is characterized by a temperature $\displaystyle{T=\frac{1}{\Delta t_{E}}}$ and an entropy $\displaystyle{S=\frac{1}{4\pi}\Delta
t_{E}^{2}}$.
The model discussed here has no pretension to be realistic. Its objective is primarily pedagogical, to concretely demonstrate (i) How to overcome the problem (absence) of time in canonical quantum gravity, and (ii) How to ’neutralize’, quantum-mechanically, the Big-Bang problem at the Lorentzian level. All this without upsetting the leading wave-function proposals. It remains to be understood, though, how to convert the emerging closed Universe into an open one (following perhaps the Hawking-Turok prescription [@open1]), how inflation enters the game (presumably along Linde's [@open2] or Vilenkin's [@open3] trails), and whether there exists some leftover experimental crumb. At any rate, several model-independent features, notably the classically disconnected Embryonic epoch, are to be regarded as the fingerprints of the underlying theory. Brane-like Universe gravity constitutes a controlled deviation (automatic energy/momentum conservation) from Einstein gravity, with the latter regarded as the classical ground-state limit.
It is our pleasure to thank Professors E. Guendelman and R. Brustein for valuable discussions and enlightening remarks.
G.W. Gibbons and D.L. Wiltshire, Nucl. Phys. **B287**, 717 (1987); S. Coleman and F. DeLuccia, Phys. Rev. **D21**, 3305 (1980); R. Basu, A.H. Guth and A. Vilenkin, Phys. Rev. **D44**, 340 (1991).

B.S. DeWitt, Phys. Rev. **160**, 1113 (1967); J.A. Wheeler, in *Battelle Rencontres*, p.242 (Benjamin NY, 1968); W.E. Blyth and C. Isham, Phys. Rev. **D11**, 768 (1975).

T. Regge and C. Teitelboim, in Proc. Marcel Grossman, p.77 (Trieste, 1975); S. Deser, F.A.E. Pirani, and D.C. Robinson, Phys. Rev. **D14**, 3301 (1976).

A. Davidson, (gr-qc/9710005).

A. Davidson and D. Karasik, (honorable mention, Grav. Res. Found. 1998).

E.P. Tryon, Nature **246**, 396 (1973).

J.D. Barrow and R. Matzner, Phys. Rev. **D21**, 336 (1980); M.J. Gotay and J. Demaret, Phys. Rev. **D28**, 2402 (1983).

S.W. Hawking and I.G. Moss, Phys. Lett. **110B**, 35 (1982); J. Hartle and S.W. Hawking, Phys. Rev. **D28**, 2960 (1983); J.J. Halliwell and S.W. Hawking, Phys. Rev. **D31**, 1777 (1985).

A.D. Linde, Nuovo Cimento **39**, 401 (1984); A.D. Linde, Sov. Phys. JETP **60**, 211 (1984).

A. Vilenkin, Phys. Lett. **117B**, 25 (1982); A. Vilenkin, Phys. Rev. **D30**, 509 (1984); A. Vilenkin, Phys. Rev. **D50**, 2581 (1994).

N. Turok and S.W. Hawking (hep-th/9802030,9803156); A.D. Linde (gr-qc/980238).

A. Vilenkin (hep-th/9803084).
---
abstract: |
How can tissues generate large numbers of cells, yet keep the divisional load (the number of divisions along cell lineages) low in order to curtail the accumulation of somatic mutations and reduce the risk of cancer? To answer the question we consider a general model of hierarchically organized self-renewing tissues and show that the lifetime divisional load of such a tissue is independent of the details of the cell differentiation processes, and depends only on two structural and two dynamical parameters. Our results demonstrate that a strict analytical relationship exists between two seemingly disparate characteristics of self-renewing tissues: divisional load and tissue organization. Most remarkably, we find that a sufficient number of progressively slower dividing cell types can be almost as efficient in minimizing the divisional load as non-renewing tissues. We argue that one of the main functions of tissue-specific stem cells and differentiation hierarchies is the prevention of cancer.
author:
- Imre Derényi
- 'Gergely J. Szöllősi'
title: Hierarchical tissue organization as a general mechanism to limit the accumulation of somatic mutations
---
Introduction {#introduction .unnumbered}
============
In each multicellular organism a single cell proliferates to produce and maintain tissues comprised of large populations of differentiated cell types. The number of cell divisions in the lineage leading to a given somatic cell governs the pace at which mutations accumulate [@Gao:2016]. The resulting somatic mutational load determines the rate at which unwanted evolutionary processes, such as cancer development, proceed [@Nowell:1976; @Merlo:2006; @Beerenwinkel:2016]. In order to produce $N$ differentiated cells from a single precursor cell the theoretical minimum number of cell divisions required along the longest lineage is $\log_2(N)$. To achieve this theoretical minimum, cells must divide strictly along a perfect binary tree of height $\log_2(N)$ (Fig. \[fig1\]a). In multicellular organisms such differentiation typically takes place early in development. It is responsible for producing the cells of non-renewing tissues (e.g., primary oocytes in the female germ line [@Crow:2000; @Gao:2016]) and the initial population of stem cells in self-renewing tissues (e.g., hematopoietic stem cells [@Busch:2015; @Werner:2015; @Werner:2015_eLife] or the spermatogonia of the male germ line [@Crow:2000; @Gao:2016]).
In self-renewing tissues, which require a continuous supply of cells, divisions along a perfect binary tree are unfeasible. Strictly following a perfect binary tree throughout the lifetime of the organism would require extraordinarily elaborate scheduling of individual cell divisions to ensure tissue homeostasis [@Morris:2014], and would be singularly prone to errors (e.g., the loss of any single cell would lead to the loss of an entire branch of the binary tree). Instead, to compensate for the continuous loss of cells, mechanisms have evolved to replenish the cell pool throughout the organism’s lifetime [@Pardee:1989]. In most multicellular organisms hierarchically organized tissue structures are utilized. At the root of the hierarchy are a few tissue-specific stem cells defined by two properties: self-replication and the potential for differentiation [@Till:1961; @McCulloch:2005]. During cell proliferation cells can differentiate and become increasingly specialized toward performing specific functions within the hierarchy, while at the same time losing their stem cell-like properties (Fig. \[fig1\]b). A classic example is the hematopoietic system [@Michor:2005; @Dingli:2007], but other tissues such as skin [@Tumbar:2004] or colon [@Barker:2007; @Potten:2009] are also known to be hierarchically organized. Identifying each level of the hierarchy, however, can be difficult, especially if the cells at different levels are only distinguished by their environment, such as their position in the tissue (e.g., the location of the transit-amplifying cells along intestinal crypts). As a result, information on the details of differentiation hierarchies is incomplete [@Rossi:2008; @Vermeulen:2013; @Sutherland:2015].
![ [**Differentiation in non-renewing vs. self-renewing tissues.**]{} a) To produce $N$ mature cells from a single precursor with a minimum number of cell divisions, $\log_2(N)$, strict division along a perfect binary tree is necessary. In multicellular organisms such “non-renewable” differentiation typically takes place early in development. b) However, in self-renewing tissues, where homeostasis requires a continuous supply of cells, a small population of self-replicating tissue-specific stem cells sustain a hierarchy of progressively differentiated and larger populations of cell types, with cells of each type being continuously present in the tissue. []{data-label="fig1"}](FIGURE1_ncomms-16-15484A-final){width="60.00000%"}
Nonetheless, in a recent paper, Tomasetti and Vogelstein [@Tomasetti:2015] gathered available information from the literature and investigated the determinants of cancer risk among tumors of different tissues. Examining cancers of 31 different tissues they found that the lifetime risk of cancers of different types is strongly correlated with the total number of divisions of the normal self-replicating cells. Their conclusion that the majority of cancer risk is attributable to bad luck [@Tomasetti:2015] arguably results from a misinterpretation of the correlation between the logarithms of two quantities [@Wild:2015; @Wu:2016]. However, regardless of the interpretation of the correlation, the data display a striking tendency: the dependence of cancer incidence on the number of stem cell divisions is sub-linear, i.e., a 100 fold increase in the number of divisions only results in a 10 fold increase in incidence. This indicates that tissues with a larger number of stem cell divisions (typically larger ones with rapid turnover, e.g., the colon) are relatively less prone to develop cancer. This is analogous to the roughly constant cancer incidence across animals with vastly different sizes and life-spans (Peto’s paradox), which implies that large animals (e.g., elephants) possess mechanisms to mitigate their risk relative to smaller ones (e.g., mice) [@Peto:1975; @Caulin:2011; @Peto:2015].
What are the tissue-specific mechanisms that explain the differential propensity to develop cancer? It is clear that stem cells that sustain hierarchies of progressively differentiated cells are well positioned to provide a safe harbor for genomic information. Qualitative arguments suggesting that hierarchically organized tissues may be optimal in reducing the accumulation of somatic mutations go back several decades [@Hindersin:2016]. As mutations provide the fuel for somatic evolution (including not only the development of cancer, but also tissue degeneration, aging, germ line deterioration, etc.) it is becoming widely accepted that tissues have evolved to minimize the accumulation of somatic mutations during the lifetime of an individual [@Hindersin:2016]. The potential of hierarchical tissues to limit somatic mutational load simply by reducing the number of cell divisions along cell lineages, however, has not been explored in a mathematically rigorous way. Here, we discuss this most fundamental mechanism by which hierarchical tissue organization can curtail the accumulation of somatic mutations. We derive simple and general analytical properties of the divisional load of a tissue, which is defined as the number of divisions its constituent cells have undergone along the longest cell lineages, and is expected to be proportional to the mutational load of the tissue.
Models conceptually similar to ours have a long history [@Loeffler:1980; @Nowak:2003; @Takizawa:2011sf; @Pepper:2007; @Werner:2011; @Werner:2013; @Werner:2015], going back to Loeffler and Wichman’s work on modeling hematopoietic stem cell proliferation [@Loeffler:1980], and several qualitative arguments have been made suggesting why hierarchically organized tissues may be optimal in minimizing somatic evolution. In a seminal contribution Nowak et al. [@Nowak:2003] showed that tissue architecture can contribute to the protection against the accumulation of somatic mutations. They demonstrated that the rate of somatic evolution will be reduced in any tissue where geometric arrangement or cellular differentiation induce structural asymmetries such that mutations that do not occur in stem cells tend to be washed out of the cell population, slowing down the rate of fixation of mutations. Here, we begin where Nowak et al. [@Nowak:2003] left off: aside from structural asymmetry, we consider a second and equally important aspect of differentiation, the dynamical asymmetry of tissues, i.e., the uneven distribution of divisional rates across the differentiation hierarchy.
More recently a series of studies have investigated the dynamics of mutations in hierarchical tissues with dynamical asymmetry [@Pepper:2007; @Werner:2011; @Werner:2013] and found that hierarchical tissue organization can (i) suppress single [@Werner:2011] as well as multiple mutations [@Werner:2013] that arise in progenitor cells, and (ii) slow down the rate of somatic evolution towards cancer [@Pepper:2007] if selection on mutations with non-neutral phenotypic effects is also taken into account. The epistatic interactions between individual driver mutations are, however, often unclear and show large variation among cancer types. The fact that the majority of cancers arise without a histologically discernible premalignant phase indicates strong cooperation between driver mutations, suggesting that major histological changes may not take place until the full repertoire of mutations is acquired [@Martincorena:2015rev]. For this reason, here we do not consider selection between cells, but rather, focus only on the pace of the accumulation of somatic mutations in tissues, which provide the fuel for somatic evolution.
The uneven distribution of divisional rates considered by Werner et al. [@Werner:2011; @Werner:2013] followed a power law; however, this distribution was taken for granted without prior justification. Their focus was instead on “reproductive capacity”, an attribute of a single cell corresponding to the number of its descendants, which is conceptually unrelated to our newly introduced “divisional load”, which characterizes the number of cell divisions along the longest cell lineages of the tissue. Here we show mathematically, to the best of our knowledge for the first time, that the minimization of the divisional load in hierarchical differentiation indeed leads to power law distributed differentiation rates.
More generally, evolutionary thinking is becoming an indispensable tool to understand cancer, and even to propose directions in the search for treatment strategies [@Komarova:2015]. Models that integrate information on tissue organization have not only provided novel insight into cancer as an evolutionary process [@Rejniak:2011; @Altrock:2015; @Hindersin:2016], but have also produced direct predictions for improved treatment [@Michor:2015; @Tang:2016; @Werner:2016]. The simple and intuitive relations that we derive below have the potential to further this field of research by providing quantitative grounds for the deep connection between organization principles of tissues and disease prevention and treatment.
According to our results, the lifetime divisional load of a hierarchically organized tissue is independent of the details of the cell differentiation processes. We show that in self-renewing tissues hierarchical organization provides a robust and nearly ideal mechanism to limit the divisional load of tissues and, as a result, minimize the accumulation of somatic mutations that fuel somatic evolution and can lead to cancer. We argue that hierarchies are how the tissues of multicellular organisms keep the accumulation of mutations in check, and that populations of cells currently believed to correspond to tissue-specific stem cells may in general constitute a diverse set of slower dividing cell types [@Li:2010; @Busch:2015]. Most importantly, we find that the theoretical minimum number of cell divisions can be very closely approached: as long as a sufficient number of progressively slower dividing cell types towards the root of the hierarchy are present, optimal self-sustaining differentiation hierarchies can produce $N$ terminally differentiated cells during the course of an organism’s lifetime from a single precursor with no more than $\log_2(N)+2$ cell divisions along any lineage.
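To give a concrete sense of scale (our illustrative numbers, using the $\gamma_k=2$ load formula derived in the Results section):

```python
import math

# Illustrative numbers (ours): a tissue producing N ~ 10^12 terminal cells per
# stem cell over a lifetime, with one hierarchical level per factor-of-2 step
N = 2**40
n = int(math.log2(N))            # n = 40 levels, so gamma_k = 2 throughout
D_star = N * 0.5**(n - 1) + n    # lifetime divisional load for gamma_k = 2
print(D_star)                    # -> 42.0, i.e. log2(N) + 2
```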
Results {#results .unnumbered}
=======
Divisional load of cell differentiation hierarchies {#divisional-load-of-cell-differentiation-hierarchies .unnumbered}
---------------------------------------------------
![ [**Hierarchical cell differentiation in self-renewing tissue.**]{} a) A model tissue produces terminally differentiated cells through $n$ intermediate levels of partially differentiated cells. b) Five microscopic events can occur with a cell: (i) symmetric cell division with differentiation, (ii) asymmetric cell division, (iii) symmetric cell division without differentiation, (iv) single cell differentiation, and (v) cell death. To the right of each type of event present in optimal hierarchies we give the corresponding per cell rate that is used to derive Eq. \[dotD\]. []{data-label="fig2"}](FIGURE2_ncomms-16-15484A-final){width="0.8\columnwidth"}
To quantify how many times the cells of self-renewing tissues undergo cell divisions during tissue development and maintenance, we consider a minimal generic model of hierarchically organized, self-sustaining tissue. According to the model, cells are organized into $n+1$ hierarchical levels based on their differentiation state. The bottom level (level $0$) corresponds to tissue-specific stem cells, higher levels represent progressively differentiated progenitor cells, and the top level (level $n$) is comprised of terminally differentiated cells (Fig. \[fig2\]a). The number of cells at level $k$ in fully developed tissue under normal homeostatic conditions is denoted by $N_k$. During homeostasis cells at levels $k<n$ can differentiate (i.e., produce cells for level $k+1$) at a rate $\delta_k$, and have the potential for self-replication. At the topmost $k=n$ level of the hierarchy terminally differentiated cells can no longer divide and are expended at the same rate $\delta_{n-1}$ that they are produced from the level below. The differentiation rates $\delta_k$ are defined as the total number of differentiated cells produced by the $N_k$ cells of level $k$ per unit time. The differentiation rate of a single cell is, thus $\delta_k/N_k$.
In principle five microscopic events can occur with a cell: (i) symmetric cell division with differentiation, (ii) asymmetric cell division, (iii) symmetric cell division without differentiation, (iv) single cell differentiation, and (v) cell death (Fig. \[fig2\]b). Our goal is to determine the optimal tissue organization and dynamics that minimize the number of cell divisions that the cells undergo until they become terminally differentiated. For this reason cell death, except for the continuous expenditure of terminally differentiated cells, is disallowed as it can only increase the number of divisions. We note, however, that cell death with a rate proportional to that of cell divisions would simply result in a proportionally increased divisional load and, thus, would have no effect on the optimum.
Similarly, we also disregard single cell differentiation, because if it is rare enough (i.e., its rate is smaller than the asymmetric cell division rate plus twice the rate of symmetric cell division without differentiation) then it can be absorbed in cell divisions with differentiation; otherwise it would merely delegate the replication burden down the hierarchy towards the less differentiated and supposedly less frequently dividing cells, and would be sub-optimal.
Two of the remaining three microscopic events involve differentiation. If we denote the fraction of differentiation events that occur via symmetric cell division at level $k$ by $p_k$, then the rate of symmetric cell division at level $k$ can be written as $p_k \delta_k /2$ (the division by $2$ accounts for the two daughter cells produced by a single division), while the rate of asymmetric cell division is $(1-p_k)\delta_k$. Symmetric cell division with differentiation leaves an empty site at level $k$, which will be replenished either (i) by differentiation from the level below or (ii) by division on the same level. Assuming the first case and denoting the fraction of replenishment events that occur by differentiation from the level below by $q_k$, the combined rate of the contributing processes (asymmetric cell division and symmetric cell division with differentiation from the level below) can be written as $q_k
p_k \delta_k /2$. By definition this is equal to $\delta_{k-1}$, the differentiation rate from level $k-1$, leading to the recursion relation $$\begin{aligned}
\delta_{k-1} &= \delta_k p_k q_k /2
\, .\end{aligned}$$ Alternatively, if replenishment occurs by cell division on the same level $k$, i.e., as a result of symmetric cell division without differentiation, the corresponding rate is $(1-q_k)
p_k \delta_k /2$.
To keep track of how cell divisions accumulate along cell lineages during tissue renewal, we introduce the divisional load $D_k(t)$ for each level separately defined as the average number of divisions that cells at level $k$ have undergone by time $t$ since the stem cell level was created at time zero.
Using the rates of the microscopic events (also shown in Fig. \[fig2\]b), considering that each division increases the accumulated number of divisions of both daughter cells by one, and taking into account the divisional loads that departing cells take away and arriving cells bring, the following mean-field differential equation system can be formulated for the time evolution of the total divisional load ($D_k N_k$) of levels $k<n$ of a fully developed tissue: $$\begin{aligned}
\dot D_k N_k &= - \frac{\delta_k}{2} p_k D_k+ \delta_k (1-p_k)
\nonumber\\
&+ \frac{\delta_k}{2} p_k \left[ q_k (D_{k-1}+1) + (1-q_k)(D_k+2) \right]
\, .\label{dotD}\end{aligned}$$ Because stem cells cannot be replenished from below we have $q_0=0$. The terminal level $k=n$ can be included in the system of equations by specifying $p_n=q_n=1$ and formally defining $\delta_n=2 \delta_{n-1}$.
The above equations are valid when each level $k$ contains the prescribed number of cells $N_k$ of a fully developed, homeostatic tissue and, therefore, do not directly describe the initial development of the tissue from the original stem cells. This shortcoming can, however, be remedied by introducing virtual cells that at the initial moment ($t=0$) fill up all $k>0$ levels. As the virtual cells gradually differentiate to higher levels of the hierarchy, they are replaced by the descendants of the stem cells. Tissue development is completed when the non-virtual descendants of the initial stem cell population fill the terminally differentiated level for the first time, expelling all virtual cells. Using this approach the initial development of the tissue is assumed to follow the same dynamics as the self-renewal of the fully developed tissue. Even though cell divisions in a developing tissue might occur at an elevated pace, such differences in the overall pace of cell divisions (along with any temporal variation in the tissue dynamics) are irrelevant, as long as only the relation between the number of cell divisions and the number of cells generated is concerned.
Using the recursion relation the above differential equation system simplifies to $$\begin{aligned}
\dot D_k N_k &= (\delta_k -\delta_{k-1}) - \delta_{k-1} (D_k - D_{k-1})
\, ,\end{aligned}$$ revealing that the average number of cell divisions is independent of both the fraction of symmetric division $p_k$ in differentiation, and the fraction of differentiation $q_k$ in replenishment.
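This cancellation can be verified symbolically; the following SymPy sketch (symbol names are ours) substitutes the recursion relation into the right-hand side of Eq. (\[dotD\]):

```python
import sympy as sp

D_k, D_km1, delta_k, p_k, q_k = sp.symbols('D_k D_km1 delta_k p_k q_k', positive=True)
delta_km1 = delta_k*p_k*q_k/2                      # recursion: delta_{k-1} = delta_k p_k q_k / 2

# right-hand side of Eq. (dotD) for the total divisional load of level k
rhs_full = (-delta_k/2*p_k*D_k + delta_k*(1 - p_k)
            + delta_k/2*p_k*(q_k*(D_km1 + 1) + (1 - q_k)*(D_k + 2)))
# simplified form: (delta_k - delta_{k-1}) - delta_{k-1} (D_k - D_{k-1})
rhs_simple = (delta_k - delta_km1) - delta_km1*(D_k - D_km1)

assert sp.simplify(rhs_full - rhs_simple) == 0     # p_k and q_k indeed drop out
```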
From any initial condition $D_k(t)$ converges to the asymptotic solution $$\begin{aligned}
D_k(t) &= t \frac{\delta_0}{N_0} + D_k^0
\, ,
\label{Dkt}\end{aligned}$$ which shows that the divisional load of the entire tissue grows linearly according to the differentiation rate of the stem cells ($t\delta_0/N_0$), and the progenitor cells at higher levels of the hierarchy have an additional load ($D_k^0$) representing the number of divisions having led to their differentiation. By definition, the additional load of the stem cells ($D_0^0$) is zero. The convergence involves a sum of exponentially decaying terms, among which the slowest one is characterized by the time scale $$\begin{aligned}
\tau_k^\textrm{tr} &= \sum_{l=1}^k \frac{N_l}{\delta_{l-1}}
\, ,
\label{tau}\end{aligned}$$ which can be interpreted as the transient time needed for the cells at level $k$ to reach their asymptotic behavior. $\tau_k^\textrm{tr}$ can also be considered as the transient time required for the initial development of the tissue up to level $k$. The rationale behind this is that during development the levels of the hierarchy become populated by the descendants of the stem cells roughly sequentially, and the initial population of level $l$ takes about $N_l/\delta_{l-1}$ time after level $l-1$ has become almost fully populated.
Plugging the asymptotic form of $D_k(t)$ into the system of differential equations and prescribing $D_0^0=0$, the constants $D_k^0$ can be determined, and expressed as $$\begin{aligned}
D_k^0 &= \sum_{l=1}^k \frac{\delta_l -\delta_{l-1}}{\delta_{l-1}}
- \delta_0 \sum_{l=1}^k \frac{N_l}{\delta_{l-1}}
\nonumber\\
&= \sum_{l=1}^k (\gamma_l - 1) - \frac{\delta_0}{N_0} \tau_k^\textrm{tr}
\, ,\end{aligned}$$ where we have introduced the ratios $$\begin{aligned}
\gamma_k &= \frac{\delta_k}{\delta_{k-1}} = \frac{2}{p_k q_k} \geq 2\end{aligned}$$ between any two subsequent differentiation rates. The asymptotic solution then becomes $$\begin{aligned}
D_k(t) &= \frac{\delta_0}{N_0} (t - \tau_k^\textrm{tr})
+ \sum_{l=1}^k (\gamma_l - 1)
\, .
\label{Dk}\end{aligned}$$ This simple formula, which describes the accumulation of the divisional load along the levels of a hierarchically organized tissue, is one of our main results.
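The asymptotic solution can be confirmed by integrating the simplified mean-field equations for a small toy hierarchy (illustrative parameter values of ours, not fitted to any real tissue):

```python
import numpy as np
from scipy.integrate import solve_ivp

# toy hierarchy with three dividing levels k = 0, 1, 2 (illustrative values)
Nk    = np.array([10.0, 100.0, 1000.0])   # cell numbers N_k
delta = np.array([1.0, 4.0, 16.0])        # differentiation rates delta_k (gamma_k = 4)

def rhs(t, D):
    # dD_k/dt = [(delta_k - delta_{k-1}) - delta_{k-1} (D_k - D_{k-1})] / N_k
    dlow = np.concatenate(([0.0], delta[:-1]))   # delta_{k-1}, with delta_{-1} = 0
    Dlow = np.concatenate(([0.0], D[:-1]))
    return ((delta - dlow) - dlow*(D - Dlow)) / Nk

t_end = 5000.0                            # long after the transients have decayed
sol = solve_ivp(rhs, (0.0, t_end), np.zeros(3), rtol=1e-10, atol=1e-12)

# Eq. (Dk): D_k(t) = (delta_0/N_0)(t - tau_k) + sum_{l<=k} (gamma_l - 1)
gamma = delta[1:]/delta[:-1]
tau   = np.concatenate(([0.0], np.cumsum(Nk[1:]/delta[:-1])))
pred  = delta[0]/Nk[0]*(t_end - tau) + np.concatenate(([0.0], np.cumsum(gamma - 1.0)))
print(sol.y[:, -1], pred)                 # the integrated and predicted loads agree
```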
Differentiation hierarchies that minimize divisional load {#differentiation-hierarchies-that-minimize-divisional-load .unnumbered}
---------------------------------------------------------
The number of mutations that a tissue allows for its constituent cells to accumulate can be best characterized by the expected number of mutations accumulated along the longest cell lineages. On average, the longest lineage corresponds to the last terminally differentiated cell that is produced by the tissue at the end of the lifetime of the organism. Therefore, as the single most important characteristic of a hierarchically organized tissue, we define its lifetime divisional load, $D$, as the divisional load of its last terminally differentiated cell. If the total number of terminally differentiated cells produced by the tissue during the natural lifetime of the organism per stem cell is denoted by $N$, then the lifetime of the organism can be expressed as $t_\textrm{life} = \tau_{n-1}^\textrm{tr} + N_0 N/\delta_{n-1}$, where the first term is the development time of the tissue up to level $n-1$, and the second term is the time necessary to generate all the $N_0 N$ terminally differentiated cells by level $n-1$ at a rate of $\delta_{n-1}$. Because the last terminally differentiated cell is the result of a cell division at level $n-1$, its expected divisional load, $D$, is the average divisional load of level $n-1$ increased by $1$: $$\begin{aligned}
D &=
D_{n-1} \left( t_\textrm{life} \right) +1
= N \frac{\delta_0}{\delta_{n-1}}
+ \sum_{l=1}^{n-1} (\gamma_l - 1) + 1
= N \prod_{l=1}^{n-1} \frac{1}{\gamma_l}
+ \sum_{l=1}^{n-1} (\gamma_l - 1) + 1
\, .
\label{D}\end{aligned}$$ Note that the complicated $\tau_{n-1}^\textrm{tr}$ term drops out of the formula. A remarkable property of $D$ is that it depends only on two structural and two dynamical parameters of the tissue. The two structural parameters are the total number of the terminally differentiated cells produced by the tissue per stem cell, $N$, and the number of the hierarchical levels, $n$. The two dynamical parameters are the product and sum of the ratios of the differentiation rates, $\gamma_k$. The lifetime divisional load depends neither on most of the microscopic parameters of the cellular processes nor on the number of cells at the differentiation levels.
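The formula above is simple enough to sanity-check numerically. The following sketch (our own illustration; the function name and the example values of $N$ and $n$ are arbitrary) evaluates $D$ for a given set of ratios:

```python
def divisional_load(N, gammas):
    """Lifetime divisional load D = N * prod(1/gamma_l) + sum(gamma_l - 1) + 1,
    where gammas = [gamma_1, ..., gamma_{n-1}] are the ratios of the
    differentiation rates of successive hierarchical levels."""
    prod = 1.0
    for g in gammas:
        prod *= 1.0 / g
    return N * prod + sum(g - 1.0 for g in gammas) + 1.0

# Example: N = 1e6 terminally differentiated cells per stem cell and
# n = 10 levels with a uniform ratio gamma = N**(1/n).
N, n = 1e6, 10
gamma = N ** (1.0 / n)
D_uniform = divisional_load(N, [gamma] * (n - 1))
```

With a uniform ratio this reproduces the closed form $n(N^{1/n}-1)+2$ obtained in the next section.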
For fixed $N$ and $n$ the ratios $\gamma_k^*$ of the differentiation rates that minimize the lifetime divisional load $D$ can be easily determined by setting the derivatives of $D$ with respect to the ratios $\gamma_k$ to zero, resulting in $$\begin{aligned}
\gamma_k^* &= N \prod_{l=1}^{n-1} \frac{1}{\gamma_l^*}
\, .\end{aligned}$$ This expression shows that $\gamma_k^*$ is identical for all intermediate levels ($0<k<n$) and, therefore, can be denoted by $\gamma^*$ without a subscript. This uniform ratio can then be expressed as $$\begin{aligned}
\gamma^* = N^{1/n}
\, ,
\label{gs}\end{aligned}$$ as long as the condition $\gamma^*\geq2$ holds, i.e., when $n\leq\log_2(N)$. For $n\geq\log_2(N)$, however, the ratio has to take the value of $$\begin{aligned}
\gamma^* = 2
\, .
\label{gss}\end{aligned}$$ Plugging $\gamma^*$ into Eq. (\[D\]) results in $$\begin{aligned}
D^* &=
n \left( N^{1/n} - 1 \right) + 2
\label{Ds}\end{aligned}$$ for $n\leq\log_2(N)$ and $$\begin{aligned}
D^* &=
N \left( \frac{1}{2} \right)^{n-1} + n
\label{Dss}\end{aligned}$$ for $n\geq\log_2(N)$. Eq. (\[Ds\]) is a monotonically decreasing function of $n$, while Eq. (\[Dss\]) has a minimum at $$\begin{aligned}
n_\textrm{opt} &=
\log_2(N) + 1 + \log_2(\ln2) \approx
\log_2(N) + 0.471
\label{nopt}\end{aligned}$$ levels. This $n_\textrm{opt}$ together with the ratio $$\begin{aligned}
\gamma^*_\textrm{opt} &= 2
\label{gopt}\end{aligned}$$ represents the optimal tissue structure in the sense that it minimizes the lifetime divisional load of a self-renewing tissue, yielding $$\begin{aligned}
D^*_\textrm{opt} &=
\log_2(N) + 1 + \log_2(\ln2) + 1/\ln2 \approx
\log_2(N) + 1.914
\, .\end{aligned}$$ Note that under this optimal condition the divisional rate of the stem cell level is very low: in a mature tissue (i.e., after the tissue has developed) the expected number of divisions of a stem cell, which is equivalent to the expected number of differentiations to level $1$ per stem cell, is only $(\delta_0/N_0)(N_0 N/\delta_{n-1}) = 1/\ln2 \approx 1.44$.
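These closed-form results are easy to verify numerically. The sketch below (our own code; the value of $N$ matches the skin example discussed later) scans integer level counts and compares the best one against the continuous optimum:

```python
import math

def D_star(N, n):
    """Lower limit of the lifetime divisional load for n levels,
    Eqs. (Ds) and (Dss): gamma* = N**(1/n) for n <= log2(N), else gamma* = 2."""
    if n <= math.log2(N):
        return n * (N ** (1.0 / n) - 1.0) + 2.0
    return N * 0.5 ** (n - 1) + n

N = 3e8
best_n = min(range(2, 60), key=lambda n: D_star(N, n))
n_opt = math.log2(N) + 1.0 + math.log2(math.log(2.0))  # ~ log2(N) + 0.471
D_opt = n_opt + 1.0 / math.log(2.0)                    # ~ log2(N) + 1.914
```

The best integer level count coincides with $n_\textrm{opt}$ rounded to the nearest integer, and its divisional load exceeds $D^*_\textrm{opt}$ by only a small fraction of one division.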
Implications of the analytical results {#implications-of-the-analytical-results .unnumbered}
--------------------------------------
![ [**The lower limit of the lifetime divisional load as a function of the number of hierarchical levels.**]{} The black and gray solid lines (with filled circles at integer values of $n$) show the lower limit of the lifetime divisional load of a tissue, $D^*$, as a function of the number of hierarchical levels, $n$, for $n\leq\log_2(N)$ and $n\geq\log_2(N)$, respectively. The theoretical minimum, $\log_2(N)$, achievable by a series of divisions along a perfect binary tree characteristic of non-renewing tissues, is displayed with a dashed line. Here we have assumed $N=3 \times 10^8$ roughly corresponding to the number of cells shed by a few square millimeters of human skin that is sustained by a single stem cell. []{data-label="fig3"}](FIGURE3_ncomms-16-15484A-final){width="0.8\columnwidth"}
Remarkably, $D^*_\textrm{opt}$ corresponds to less than two cell divisions in addition to the theoretical minimum of $\log_2(N)$, achievable by a series of divisions along a perfect binary tree characteristic of non-renewing tissues. In other words, in terms of minimizing the number of necessary cell divisions along cell lineages, a self-renewing hierarchical tissue can be almost as effective as a non-renewing one. Consequently, hierarchical tissue organization with a sufficient number of hierarchical levels provides a highly adaptable and practically ideal mechanism not only for ensuring self-renewability but also for keeping the number of cell divisions near the theoretical absolute minimum.
An important result of our mathematical analysis is that it provides a simple and mathematically rigorous formula (Eqs. \[Ds\] and \[Dss\], and Fig. \[fig3\]) for the lower limit of the lifetime divisional load of a tissue for a given number of hierarchical levels and a given number of terminally differentiated cells descending from a single stem cell. This lower limit can be reached only with a power-law distribution of the differentiation rates (i.e., with a uniform ratio between the differentiation rates of any two successive differentiation levels), justifying the assumptions of the models by Werner et al. [@Werner:2011; @Werner:2013].
In the optimal scenario, where $\gamma_k = \gamma^*_\textrm{opt} = 2$, the recursion relation imposes $p_k=q_k=1$, so that all cell divisions must be symmetric and involve differentiation. This is a shared feature with non-renewable differentiation, which is the underlying reason why the number of cell divisions of the optimal self-renewing mechanism can closely approach the theoretical minimum.
As a salient example of self-renewing tissues, let us consider the human skin. Clonal patches of skin are of the order of square millimeters in size [@Martincorena:2015], and the top layer of skin, which is renewed daily, is composed of approximately a thousand cells per square millimeter [@Hoath:2003]. If we assume that a $10$ mm$^2$ patch is maintained by a single stem cell for $80$ years, this corresponds to about $N = 3\times 10^8$ cells. As Fig. \[fig3\] demonstrates, the $D^*$ vs. $n$ curve becomes very flat for large values of $n$, indicating that in a real tissue the number of hierarchical levels can be reduced by at least a factor of $2$ from the optimal value, without significantly compromising the number of necessary cell divisions along the cell lineages.
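The arithmetic behind this estimate, and the flatness of the curve, can be reproduced in a few lines (a back-of-envelope sketch of our own; the renewal figures are those quoted above):

```python
import math

# A 10 mm^2 clonal patch, ~1000 cells/mm^2 shed daily, maintained by a
# single stem cell for 80 years:
N = 10 * 1000 * 365 * 80             # ~2.9e8 terminally differentiated cells
theoretical_min = math.log2(N)       # perfect binary tree: ~28 divisions
# Lower limit with only n = 14 levels (about half the optimal ~29), Eq. (Ds):
D_14 = 14 * (N ** (1 / 14) - 1) + 2  # ~44 divisions, still the same order
```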
It is an open question how the total number of terminally differentiated cells ($N_0 N$) produced by the tissue during the natural lifetime of the organism can be best partitioned into the number of tissue-specific stem cells ($N_0$) and the number of terminally differentiated cells per stem cell ($N$). The initial generation of the stem cells along a binary tree requires $\log_2(N_0)$ divisions. The production of the terminally differentiated cells in a near-optimal hierarchy requires about $\log_2(N)$ divisions. Their sum, which is about $\log_2(N_0 N)$, depends only on the total number of terminally differentiated cells, irrespective of the number of stem cells. This means that the minimization of the divisional load poses no constraint on the number of stem cells. However, since both maintaining a larger number of differentiation levels and keeping the differentiation hierarchy closer to optimum involve more complicated regulation, we suspect that a relatively large stem cell pool is beneficial, especially as a larger stem cell population can also be expected to be more robust against stochastic extinction, population oscillation, and injury.
Discussion {#discussion .unnumbered}
==========
In general, how closely the hierarchical organization of different tissues in different organisms approaches the optimum described above depends on (i) the strength of natural selection against unwanted somatic evolution, which is expected to be much stronger in larger and longer-lived animals; and (ii) intrinsic physiological constraints on the complexity of tissue organization and potential lower limits on stem cell division rate. Neither the strength of selection nor the physiological constraints on tissue organization are known at present. However, in the case of the germ line mutation rate, which is proportional to the number of cell divisions in lineages leading to the gametes, current evidence indicates that physiological constraints are not limiting [@Lynch:2012]. Across species, differences in effective population size, which is in general negatively correlated with body size and longevity [@Nabholz:2013], indicate the effectiveness of selection relative to drift. As a result, differences in effective population size between species determine the effectiveness of selection in spreading favorable mutations and eliminating deleterious ones and, as such, can be used as an indicator of the efficiency of selection [@Kimura:1983; @Charlesworth:2009]. This implies that, in contrast to somatic tissues, we expect germ line differentiation hierarchies to be more optimal for smaller animals with shorter life spans as a result of their increased effective population sizes.
For species for which information is available, the number of levels across species indeed follows an increasing trend as a function of the effective population size, ranging from $n=5$ in humans with a relatively small effective population size of approximately $10^4$ and correspondingly less efficient selection, through $n=8$ in macaque with an intermediate effective population size of the order of $10^5$, to $n=10$ in mice with the largest effective population size of approximately $5\times10^5$ [@Lynch:2010; @Ramm:2014].
A qualitative examination of Fig. \[fig3\] suggests that a similar number of levels, of the order of $n\approx10$, may be present in most somatic tissues, because the $D^*$ vs. $n$ curve becomes progressively flatter after it reaches around twice the optimal value of $D^*$ at $n\gtrsim10$, and the reduction in the divisional load becomes smaller and smaller as additional levels are added to the hierarchy and other factors are expected to limit further increase in $n$. Alternatively, if we consider, for example, the human hematopoietic system, where approximately $10^4$ hematopoietic stem cells (HSCs) produce a daily supply of $\sim3.5\times10^{11}$ blood cells, we can calculate that over $80$ years each stem cell produces a total of $N\approx10^{12}$ terminally differentiated cells. For this larger value of $N$ the $D^*$ vs. $n$ curve reaches twice the optimal value of $D^*$ at $n\gtrsim15$, after which, similarly to Fig. \[fig3\], it becomes progressively flatter and the reduction in divisional load diminishes as additional levels are added. This rough estimate of $n\gtrsim15$ levels is consistent with explicit mathematical models of human hematopoiesis that predict between $17$ and $31$ levels [@Dingli:2007]. Active or short-term HSCs (ST-HSCs) are estimated to differentiate about once a year, whereas a quiescent population of HSCs that provides cells to the active population is expected to be characterized by an even lower rate of differentiation. This is in good agreement with our prediction about the existence of a heterogeneous stem cell pool, a fraction of which consists of quiescent cells that only undergo a very limited number of cell cycles during the lifetime of the organism. Indeed, recently Busch et al. found that adult hematopoiesis in mice is largely sustained by previously designated ST-HSCs that nearly fully self-renew, and receive rare but polyclonal HSC input [@Busch:2015].
Mouse HSCs were found to differentiate into ST-HSCs only about three times per year.
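The hematopoietic estimate can be checked the same way (a rough sketch of our own; an 80-year lifetime is assumed, as in the skin example):

```python
import math

# ~1e4 HSCs supply ~3.5e11 blood cells per day; per stem cell over 80 years:
N = 3.5e11 * 365 * 80 / 1e4          # ~1e12 terminally differentiated cells
D_opt = math.log2(N) + 1.914         # optimal lifetime divisional load, ~42
D_15 = 15 * (N ** (1 / 15) - 1) + 2  # Eq. (Ds) at n = 15 levels, ~82
# D_15 / D_opt ~ 2, i.e., D* reaches about twice its optimum near n ~ 15.
```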
For most somatic tissues the differentiation hierarchies that underpin the development of most cellular compartments remain inadequately resolved, the identity of stem and progenitor cells remains uncertain, and quantitative information on their proliferation rates is limited [@Sutherland:2015]. However, synthesis of available information on tissue organization by Tomasetti and Vogelstein [@Tomasetti:2015], as detailed above, suggests that larger tissues with rapid turnover (e.g., colon and blood) are relatively less prone to develop cancer. This phenomenon, as noted in the introduction, can be interpreted as Peto’s paradox across tissues with the implication that larger tissues with rapid turnover rates have hierarchies with more levels and stem cells that divide at a slower pace. Accumulating evidence from lineage-tracing experiments [@Blanpain:2013] is also consistent with a relatively large number of hierarchical levels. Populations of stem cells in blood, skin, and the colon have begun to be resolved as combinations of cells that are long-lived yet constantly cycling, and emerging evidence indicates that both quiescent and active cell subpopulations may coexist in several tissues, in separate yet adjoining locations [@Li:2010]. Lineage-tracing techniques [@Blanpain:2013] are rapidly developing, and may be used for directly testing the predictions of our mathematical model about the highly inhomogeneous distributions of the differentiation rates in the near future. In the context of estimates of the number of stem cells in different tissues that underlie Tomasetti and Vogelstein’s results, the potential existence of such unresolved hierarchical levels suggests the possibility that the number of levels of the hierarchy is systematically underestimated and, correspondingly, that the number of stem cells at the base of these hierarchies is systematically overestimated.
Independent of the details of the hierarchy, the dynamics of how divisional load accumulates in time is described by two phases: (i) a transient development phase during which each level of the hierarchy is filled up and (ii) a stationary phase during which homeostasis is maintained in mature tissue. The dynamic details and the divisional load incurred during the initial development phase depend on the details of the hierarchy (cf. Eqs. (\[Dk\]) and (\[tau\])). In contrast, in the stationary phase, further accumulation of the mutational load is determined solely by $\delta_0/N_0$, the rate at which tissue-specific stem cells differentiate at the bottommost level of the hierarchy. Such biphasic behavior has been observed in the accumulation of mutations both in somatic [@Rozhok:2015] and germ line cells [@Kong:2012; @Gao:2016; @Rahbari:2016]. In both cases a substantial number of mutations were found to occur relatively rapidly during development followed by a slower linear accumulation of mutation thereafter. General theoretical arguments imply that the contribution of the mutational load incurred during development to cancer risk is substantial [@Frank:2003], but this has been suggested to be in conflict with the fact that the majority of cancers develop late in life [@Rozhok:2015; @Rozhok:2016]. Resolving this question and more generally understanding the development of cancer in self-renewing tissues will require modeling the evolutionary dynamics of how the hierarchical organization of healthy tissues breaks down.
Spontaneously occurring mutations accumulate in somatic cells throughout a person’s lifetime, but the majority of these mutations do not have a noticeable effect. A small minority, however, can alter key cellular functions and a fraction of these confer a selective advantage to the cell, leading to preferential growth or survival of a clone [@Martincorena:2015rev]. Hierarchical tissue organization can limit somatic evolution at both these levels: (i) at the level of mutations, as we demonstrated above, it can dramatically reduce the number of cell divisions required and correspondingly the mutational load incurred during tissue homeostasis; and (ii) at the level of selection acting on mutations with non-neutral phenotypic effects, as demonstrated by Nowak et al. [@Nowak:2003] and later by Pepper et al. [@Pepper:2007], tissues organized into serial differentiation experience lower rates of such detrimental cell-level phenotypic evolution. Extending the seminal results of Nowak et al. and Pepper et al., we propose that in addition to limiting somatic evolution at the phenotypic level, hierarchies are also how the tissues of multicellular organisms keep the accumulation of mutations in check, and that tissue-specific stem cells may in general correspond to a diverse set of slower dividing cell types.
In summary, we have considered a generic model of hierarchically organized self-renewing tissue, in the context of which we have derived universal properties of the divisional load during tissue homeostasis. In particular, our results provide a lower bound for the lifetime divisional load of a tissue as a function of the number of its hierarchical levels. Our simple analytical description provides a quantitative understanding of how hierarchical tissue organization can limit unwanted somatic evolution, including cancer development. Surprisingly, we find that the theoretical minimum number of cell divisions can be closely approached (cf. Fig. \[fig3\], where the theoretical minimum corresponds to the dashed horizontal line), demonstrating that hierarchical tissue organization provides a robust and nearly ideal mechanism to limit the divisional load of tissues and, as a result, minimize somatic evolution.
This work was supported by the Hungarian Science Foundation (grant K101436). The authors would like to acknowledge the comments of anonymous reviewers on a previous version of the manuscript, as well as discussion with and comments from Bastien Boussau, Márton Demeter, Máte Kiss, and Dániel Grajzel.
Data availability {#data-availability .unnumbered}
=================
No data was generated as part of this study.
Conflict of Interest {#conflict-of-interest .unnumbered}
====================
The authors declare no conflict of interest.
Author contributions {#author-contributions .unnumbered}
====================
I.D. and Sz.G. designed the study, carried out research, and wrote the paper.
Ziyue Gao, Minyoung J Wyman, Guy Sella, and Molly Przeworski. Interpreting the dependence of mutation rates on age and time. , 14(1):e1002355, Jan 2016.
P C Nowell. The clonal evolution of tumor cell populations. , 194(4260):23–8, Oct 1976.
Lauren M F Merlo, John W Pepper, Brian J Reid, and Carlo C Maley. Cancer as an evolutionary and ecological process. , 6(12):924–35, Dec 2006.
Niko Beerenwinkel, Chris D Greenman, and Jens Lagergren. Computational cancer biology: An evolutionary perspective. , 12(2):e1004717, Feb 2016.
J F Crow. The origins, patterns and implications of human spontaneous mutation. , 1(1):40–7, Oct 2000.
Katrin Busch, Kay Klapproth, Melania Barile, Michael Flossdorf, Tim Holland-Letz, Susan M Schlenner, Michael Reth, Thomas H[ö]{}fer, and Hans-Reimer Rodewald. Fundamental properties of unperturbed haematopoiesis from stem cells in vivo. , 518(7540):542–6, Feb 2015.
Benjamin Werner, Arne Traulsen, and David Dingli. Ontogenic growth as the root of fundamental differences between childhood and adult cancer. , advance access:doi:10.1002/stem.2251, Dec 2016.
Benjamin Werner, Fabian Beier, Sebastian Hummel, Stefan Balabanov, Lisa Lassay, Thorsten Orlikowsky, David Dingli, Tim H Br[ü]{}mmendorf, and Arne Traulsen. Reconstructing the in vivo dynamics of hematopoietic stem cells from telomere length distributions. , 4:e08687, 2015.
James A Morris. The hierarchical model of stem cell genesis explains the man mouse paradox, peto’s paradox, the red cell paradox and wright’s enigma. , 83(6):713–7, Dec 2014.
A B Pardee. G1 events and regulation of cell proliferation. , 246(4930):603–8, Nov 1989.
James E Till and Ernest A McCulloch. A direct measurement of the radiation sensitivity of normal mouse bone marrow cells. , 14:213–22, Feb 1961.
Ernest A McCulloch and James E Till. Perspectives on the properties of stem cells. , 11(10):1026–8, Oct 2005.
Franziska Michor, Timothy P Hughes, Yoh Iwasa, Susan Branford, Neil P Shah, Charles L Sawyers, and Martin A Nowak. Dynamics of chronic myeloid leukaemia. , 435(7046):1267–70, Jun 2005.
David Dingli, Arne Traulsen, and Jorge M Pacheco. Compartmental architecture and dynamics of hematopoiesis. , 2(4):e345, 2007.
Tudorita Tumbar, Geraldine Guasch, Valentina Greco, Cedric Blanpain, William E Lowry, Michael Rendl, and Elaine Fuchs. Defining the epithelial stem cell niche in skin. , 303(5656):359–63, Jan 2004.
Nick Barker, Johan H van Es, Jeroen Kuipers, Pekka Kujala, Maaike van den Born, Miranda Cozijnsen, Andrea Haegebarth, Jeroen Korving, Harry Begthel, Peter J Peters, and Hans Clevers. Identification of stem cells in small intestine and colon by marker gene lgr5. , 449(7165):1003–7, Oct 2007.
C S Potten, R Gandara, Y R Mahida, M Loeffler, and N A Wright. The stem cells of small intestinal crypts: where are they? , 42(6):731–50, Dec 2009.
Derrick J Rossi, Catriona H M Jamieson, and Irving L Weissman. Stems cells and the pathways to aging and cancer. , 132(4):681–96, Feb 2008.
Louis Vermeulen, Edward Morrissey, Maartje van der Heijden, Anna M Nicholson, Andrea Sottoriva, Simon Buczacki, Richard Kemp, Simon Tavar[é]{}, and Douglas J Winton. Defining stem cell dynamics in models of intestinal tumor initiation. , 342(6161):995–8, Nov 2013.
Kate D. Sutherland and Jane E. Visvader. Cellular mechanisms underlying intertumoral heterogeneity. , 1(1):15 – 23, 2015.
Cristian Tomasetti and Bert Vogelstein. Cancer etiology. variation in cancer risk among tissues can be explained by the number of stem cell divisions. , 347(6217):78–81, Jan 2015.
Christopher Wild, Paul Brennan, Martyn Plummer, Freddie Bray, Kurt Straif, and Jiri Zavadil. Cancer risk: Role of chance overstated. , 347(6223):728–728, 2015.
Song Wu, Scott Powers, Wei Zhu, and Yusuf A Hannun. Substantial contribution of extrinsic risk factors to cancer development. , 529(7584):43–7, Jan 2016.
R Peto, F J Roe, P N Lee, L Levy, and J Clack. Cancer and ageing in mice and men. , 32(4):411–26, Oct 1975.
Aleah F Caulin and Carlo C Maley. Peto’s paradox: evolution’s prescription for cancer prevention. , 26(4):175–82, Apr 2011.
Richard Peto. Quantitative implications of the approximate irrelevance of mammalian body size and lifespan to lifelong cancer risk. , 370(1673), 2015.
Laura Hindersin, Benjamin Werner, David Dingli, and Arne Traulsen. Should tissue structure suppress or amplify selection to minimize cancer risk? , 11:41, 2016.
M Loeffler and H E Wichmann. A comprehensive mathematical model of stem cell proliferation which reproduces most of the published experimental results. , 13(5):543–61, Sep 1980.
Martin A Nowak, Franziska Michor, and Yoh Iwasa. The linear process of somatic evolution. , 100(25):14966–9, Dec 2003.
Hitoshi Takizawa, Roland R Regoes, Chandra S Boddupalli, Sebastian Bonhoeffer, and Markus G Manz. Dynamic variation in cycling of hematopoietic stem cells in steady state and inflammation. , 208(2):273–84, Feb 2011.
John W Pepper, Kathleen Sprouffske, and Carlo C Maley. Animal cell differentiation patterns suppress somatic evolution. , 3(12):e250, Dec 2007.
Benjamin Werner, David Dingli, Tom Lenaerts, Jorge M Pacheco, and Arne Traulsen. Dynamics of mutant cells in hierarchical organized tissues. , 7(12):e1002290, Dec 2011.
Benjamin Werner, David Dingli, and Arne Traulsen. A deterministic model for the occurrence and dynamics of multiple mutations in hierarchically organized tissues. , 10(85):20130349, Aug 2013.
I[ñ]{}igo Martincorena and Peter J Campbell. Somatic mutation in cancer and normal cells. , 349(6255):1483–9, Sep 2015.
Natalia L Komarova. Cancer: A moving target. , 525(7568):198–9, Sep 2015.
Katarzyna A Rejniak and Alexander R A Anderson. Hybrid models of tumor growth. , 3(1):115–25, 2011.
Philipp M Altrock, Lin L Liu, and Franziska Michor. The mathematics of cancer: integrating quantitative models. , 15(12):730–45, Dec 2015.
Franziska Michor and Kathryn Beal. Improving cancer treatment via mathematical modeling: Surmounting the challenges is worth the effort. , 163(5):1059–63, Nov 2015.
Min Tang, Rui Zhao, Helgi van de Velde, Jennifer G Tross, Constantine Mitsiades, Suzanne Viselli, Rachel Neuwirth, Dixie-Lee Esseltine, Kenneth Anderson, Irene M Ghobrial, Jes[ú]{}s F San Miguel, Paul G Richardson, Michael H Tomasson, and Franziska Michor. Myeloma cell dynamics in response to treatment supports a model of hierarchical differentiation and clonal evolution. , 22(16):4206–14, Aug 2016.
Benjamin Werner, Jacob G Scott, Andrea Sottoriva, Alexander R A Anderson, Arne Traulsen, and Philipp M Altrock. The cancer stem cell fraction in hierarchically organized tumors can be estimated using mathematical modeling and patient-specific treatment trajectories. , 76(7):1705–13, Apr 2016.
Linheng Li and Hans Clevers. Coexistence of quiescent and active adult stem cells in mammals. , 327(5965):542–5, Jan 2010.
I[ñ]{}igo Martincorena, Amit Roshan, Moritz Gerstung, Peter Ellis, Peter Van Loo, Stuart McLaren, David C Wedge, Anthony Fullam, Ludmil B Alexandrov, Jose M Tubio, Lucy Stebbings, Andrew Menzies, Sara Widaa, Michael R Stratton, Philip H Jones, and Peter J Campbell. Tumor evolution. high burden and pervasive positive selection of somatic mutations in normal human skin. , 348(6237):880–6, May 2015.
Steven B Hoath and D G Leahy. The organization of human epidermis: functional epidermal units and phi proportionality. , 121(6):1440–6, Dec 2003.
Way Sung, Matthew S Ackerman, Samuel F Miller, Thomas G Doak, and Michael Lynch. Drift-barrier hypothesis and mutation-rate evolution. , 109(45):18488–92, Nov 2012.
Benoit Nabholz, Nicole Uwimana, and Nicolas Lartillot. Reconstructing the phylogenetic history of long-term effective population size and life-history traits using patterns of amino acid replacement in mitochondrial genomes of mammals and birds. , 5(7):1273–90, 2013.
Motoo Kimura. . Cambridge University Press, Cambridge, 1983.
Brian Charlesworth. Fundamental concepts in genetics: effective population size and patterns of molecular evolution and variation. , 10(3):195–205, Mar 2009.
Michael Lynch. Evolution of the mutation rate. , 26(8):345–52, Aug 2010.
Steven A Ramm, Lukas Sch[ä]{}rer, Jens Ehmcke, and Joachim Wistuba. Sperm competition and the evolution of spermatogenesis. , 20(12):1169–79, Dec 2014.
C[é]{}dric Blanpain and Benjamin D Simons. Unravelling stem cell dynamics by lineage tracing. , 14(8):489–502, Aug 2013.
Andrii I Rozhok and James DeGregori. Toward an evolutionary model of cancer: Considering the mechanisms that govern the fate of somatic mutations. , 112(29):8914–21, Jul 2015.
Augustine Kong, Michael L Frigge, Gisli Masson, Soren Besenbacher, Patrick Sulem, Gisli Magnusson, Sigurjon A Gudjonsson, Asgeir Sigurdsson, Aslaug Jonasdottir, Adalbjorg Jonasdottir, Wendy S W Wong, Gunnar Sigurdsson, G Bragi Walters, Stacy Steinberg, Hannes Helgason, Gudmar Thorleifsson, Daniel F Gudbjartsson, Agnar Helgason, Olafur Th Magnusson, Unnur Thorsteinsdottir, and Kari Stefansson. Rate of de novo mutations and the importance of father’s age to disease risk. , 488(7412):471–5, Aug 2012.
Raheleh Rahbari, Arthur Wuster, Sarah J Lindsay, Robert J Hardwick, Ludmil B Alexandrov, Saeed Al Turki, Anna Dominiczak, Andrew Morris, David Porteous, Blair Smith, Michael R Stratton, [UK10K Consortium]{}, and Matthew E Hurles. Timing, rates and spectra of human germline mutation. , 48(2):126–33, Feb 2016.
Steven A Frank and Martin A Nowak. Cell biology: Developmental predisposition to cancer. , 422(6931):494, Apr 2003.
Andrii I Rozhok, Jennifer L Salstrom, and James DeGregori. Stochastic modeling reveals an evolutionary mechanism underlying elevated rates of childhood leukemia. , 113(4):1050–5, Jan 2016.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'The effects of the oxidation atmosphere and crystal face on the interface-trap density were examined by using constant-capacitance deep-level transient spectroscopy to clarify the origin of the traps. By comparing the DLTS spectra of low-mobility interfaces oxidized in a N$_2$O atmosphere with those of high-mobility interfaces on the C-face oxidized in a wet atmosphere, it was found that a high density of traps is commonly observed around an energy of 0.16 eV below the conduction-band edge ($C1$ traps) in low-mobility interfaces irrespective of the crystal face. It was also found that the generation and elimination of traps are specific to the crystal face: (1) the $C1$ traps can be eliminated by wet oxidation only on the C-face, and (2) the $O2$ traps (0.37 eV) are observed at the SiC/SiO$_2$ interface only on the Si-face. The generation of $O2$ traps on the Si-face and the elimination of $C1$ traps on the C-face by wet oxidation may be caused by oxidation reactions specific to each crystal face.'
author:
- Tetsuo Hatakeyama
- Mitsuru Sometani
- Yoshiyuki Yonezawa
- Kenji Fukuda
- Hajime Okumura
- Tsunenobu Kimoto
bibliography:
- 'DLTS\_htk.bib'
title: 'Characterization of Interface Traps in SiO$_2$/SiC Structures Close to the Conduction Band by Deep-Level Transient Spectroscopy'
---
Introduction
============
SiC metal–oxide–semiconductor field-effect transistors (MOSFETs) are regarded as promising candidates for the next-generation high-voltage electrical power switches owing to the high critical electric field of SiC [@palmour; @cooper; @baliga]. However, the low mobility in the SiC/SiO$_2$ interfaces hinders the potential performance of SiC MOSFETs. Thus, the improvement in the mobility in the SiC/SiO$_2$ interfaces is a central issue in the research and development of SiC MOSFETs. It was presumed that the traps in the SiC/SiO$_2$ interfaces are closely related to the degradation in mobility [@afanasiev_pss]. In 2000, Saks and Agarwal clearly showed that the low mobility in the SiC/SiO$_2$ interfaces is caused by the trapping of electrons at the high-density interface traps on the basis of Hall-effect measurements of SiC MOSFETs [@saks2000]. They showed that most of the inversion electrons induced by the gate voltage were trapped by interface traps by comparing the free carrier density in an interface obtained by Hall measurements with the total number of inversion electrons. They also pointed out that the Coulombic scattering by the trapped electrons may dominate the inversion electron transport by examining the temperature dependence of the Hall mobility. Later, detailed studies on the inversion electron transport of various types of SiC MOSFETs using Hall measurements confirmed this mobility-degradation mechanism [@tilak_pss; @dhar_jap].
Therefore, great efforts have been focused on reducing the interface trap density to improve mobility by examining the gate-oxidation and post-gate-oxidation annealing processes in detail. In recent years, annealing or oxidation in a nitric oxide (NO) or nitrous oxide (N$_2$O) atmosphere, which is hereinafter collectively referred to as oxynitridation, has been used to reduce the high density of interface traps [@Jamet; @Chung; @Rozen]. The optimized oxynitridation process reduces the interface trap density ($D_{\mathrm it}$) evaluated by using the conventional Hi-Lo method [@nicollian_hilo] down to less than 10$^{12}$ cm$^{-2}$/eV at $E_{\mathrm C}-E$ = 0.2 eV, where $E_{\mathrm C}$ and $E$ denote the conduction-band edge and the energy, respectively [@Suzukino]. However, the effect of oxynitridation on the mobility is limited. In fact, the mobility in the SiC/SiO$_2$ interfaces fabricated by using oxynitridation is typically approximately 30 cm$^2$/(Vs) [@Suzukino; @Rozenno]. Another way to improve the channel mobility is to use the C-terminated face (C-face) instead of the Si-terminated face (Si-face), combined with annealing or oxidation in a wet atmosphere. The typical mobility in the SiC/SiO$_2$ interfaces fabricated on the C-face by using wet oxidation is approximately 90 cm$^2$/(Vs) [@fukuda; @Suzukiwet]. However, the cause of the relatively low mobility of the oxynitrided interface has not yet been identified. The densities of interface traps characterized by using the conventional Hi-Lo method do not correlate with the mobilities of these two types of samples [@Suzukino; @Suzukiwet]; thus, $D_{\mathrm it}$ does not appear to be the cause of the relatively low mobility of the oxynitrided interface.
Using constant-capacitance deep-level transient spectroscopy (CCDLTS), the authors previously reported that $D_{\mathrm it}$ close to the conduction band at SiC/SiO$_2$ interfaces fabricated by oxynitridation is much higher than that at SiC/SiO$_2$ interfaces fabricated by wet oxidation on the C-face [@HatakeyamaDLTS]. They concluded that the low mobility at SiC/SiO$_2$ interfaces oxidized in an N$_2$O atmosphere on the C-face should be caused by the trapping of electrons at the high density of traps close to the conduction band. In this study, the effects of the crystal face and the oxidation atmosphere on the traps at SiC/SiO$_2$ interfaces were examined to confirm that a high density of traps close to the conduction band is the common cause of the low mobility at SiC/SiO$_2$ interfaces, irrespective of the crystal face. Further, to elucidate the origin of the traps at SiC/SiO$_2$ interfaces, the properties of the identified traps close to the conduction band are discussed on the basis of the dependence of the CCDLTS spectra on the crystal face and oxidation condition.
Experimental Methods
====================
The samples characterized in this study were MOS capacitors on the C-face $(000\overline{1})$ or Si-face $(0001)$ of 4H-SiC n-type epitaxial wafers. The density of nitrogen in the epitaxial layer was approximately $1\times 10^{16}$ cm$^{-3}$. The SiC/SiO$_2$ interfaces of the MOS capacitors were fabricated by using the following gate-oxidation processes: (1) oxidation in an O$_2$ atmosphere at 1250 ${}^\circ\mathrm{C}$, followed by wet oxidation at 900 ${}^\circ\mathrm{C}$, followed by H$_2$ anneal at 800${}^\circ\mathrm{C}$ on the C-face (DWHC); (2) oxidation in a N$_2$O atmosphere at 1250 ${}^\circ\mathrm{C}$, followed by a H$_2$ anneal at 1000 ${}^\circ\mathrm{C}$ on the C-face (NHC); (3) oxidation in an O$_2$ atmosphere at 1250 ${}^\circ\mathrm{C}$, followed by wet oxidation at 900 ${}^\circ\mathrm{C}$ on the Si-face (DWS); and (4) oxidation in an O$_2$ atmosphere at 1250 ${}^\circ\mathrm{C}$, followed by post-oxidation annealing in a N$_2$O atmosphere at 1250 ${}^\circ\mathrm{C}$, followed by H$_2$ anneal at 800 ${}^\circ\mathrm{C}$ on the Si-face (DNHS). The thickness of the oxide layer is approximately 50 nm, and the gate electrode is aluminum. The mobilities of the MOSFETs fabricated by using the processes of DWHC, NHC, DWS, and DNHS are approximately 80 cm$^2$/(Vs), 30 cm$^2$/(Vs), 8 cm$^2$/(Vs), and 30 cm$^2$/(Vs), respectively [@Suzukiwet; @Suzukino; @hatakeyama_mobility].
CCDLTS spectra were obtained by measuring the transient voltage signal generated by a feedback loop to maintain the capacitance at a constant value during the measurement of MOS capacitors in the temperature range from 80 K to 400 K. The pulse and reverse bias voltage were approximately 6 V and $-1$ V, respectively. The capacitance at the reverse bias was kept constant during the temperature scan. For the analysis of the transient voltage signal at each temperature, a deep-level transient Fourier spectroscopy (DLTFS) technique was used [@Weiss; @Weissphd].
![(a) Comparison of CCDLTS spectra between a DWHC sample and an NHC sample. The horizontal axis is the first order of the sine coefficient of the DLTFS signal (b1) [@Weiss; @Weissphd]. (b) An Arrhenius plot for the peak at approximately 100 K ($C1$) in the CCDLTS spectrum for the NHC sample. []{data-label="fig_1"}](Figure1b.eps){width="7cm"}
Results and Discussion
======================
First, we examined the CCDLTS spectra of MOS capacitors on the C-face to clarify the cause of the low mobility of the oxynitrided interface [@HatakeyamaDLTS]. Figure \[fig\_1\] (a) shows the comparison of the CCDLTS spectra between a DWHC sample, whose interface exhibits a high mobility, and an NHC sample, whose interface exhibits a relatively low mobility. In Fig. \[fig\_1\] (a), the horizontal axis is the first order of the sine coefficient of the DLTFS signal (b1) with a period width of 205 ms and a recovery time of 4 ms [@Weiss; @Weissphd]. A peak was observed at approximately 100 K in the CCDLTS spectrum for the NHC sample. We refer to this peak as $C1$; an Arrhenius-plot analysis was carried out for it, and the result is presented in Fig. \[fig\_1\] (b). The energy of the traps that comprise the $C1$ peak ($C1$ traps) was estimated to be 0.16 eV, and their capture cross section was estimated to be 4 $\times$ 10$^{-15}$ cm$^{2}$. In contrast, the CCDLTS spectrum for the DWHC sample is almost constant. In particular, the CCDLTS signal at approximately 100 K for the DWHC sample is one-fourth of that for the NHC sample, which means that the areal density of $C1$ traps in the DWHC sample is approximately one-fourth of that in the NHC sample. We conclude that the low mobility at SiC/SiO$_2$ interfaces fabricated by the NHC process is caused by the high density of $C1$ traps for the following reasons: (1) the interface mobility is inversely correlated with the density of $C1$ traps, and (2) the interface-mobility degradation mechanism proposed by Saks [@saks2000] can be applied to the high density of $C1$ traps because the energy level of the $C1$ traps (0.16 eV) is located above the Fermi energy at the onset of the formation of the inversion layer (approximately 0.2 eV from the edge of the conduction band at room temperature).
Consequently, the $C1$ traps are not filled by electrons at the onset of the formation of the inversion layer; thus some of the inversion electrons are captured when the gate voltage exceeds the threshold voltage, which leads to a degradation in the interface mobility, as described in the introduction.
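The Arrhenius analysis behind Fig. \[fig\_1\] (b) can be reproduced numerically. In DLTS practice, the thermal emission rate of a trap follows $e_n = \gamma\sigma T^2 \exp(-E_t/k_BT)$, where the $T^2$ factor absorbs the temperature dependence of the thermal velocity and the effective density of states, so a linear fit of $\ln(e_n/T^2)$ versus $1/T$ yields the trap energy from the slope. The Python sketch below synthesizes emission rates for a 0.16 eV trap and recovers the energy; the prefactor value is an assumption for illustration, not a measured quantity of this work.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def emission_rate(T, E_t, prefactor=1.0e7):
    """Thermal emission rate e_n = prefactor * T^2 * exp(-E_t / kT).
    The T^2 absorbs the temperature dependence of v_th * N_C; the
    prefactor (gamma * sigma) is an assumed illustrative value."""
    return prefactor * T**2 * np.exp(-E_t / (K_B * T))

# Synthetic emission rates around the ~100 K C1 peak (E_t = 0.16 eV)
T = np.linspace(85.0, 115.0, 7)
e_n = emission_rate(T, E_t=0.16)

# Arrhenius fit: ln(e_n / T^2) = const - E_t / (k_B * T)
slope, _ = np.polyfit(1.0 / T, np.log(e_n / T**2), 1)
E_fit = -slope * K_B
print(f"fitted trap energy: {E_fit:.3f} eV")  # recovers 0.160 eV
```

With real data, the fitted intercept additionally yields the capture cross section, which is how the 4 $\times$ 10$^{-15}$ cm$^{2}$ value quoted above is obtained.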
![$D_{\mathrm{ it}}(E)$ for the DWHC and NHC samples transformed from the CCDLTS spectra. $D_{\mathrm{ it}}(E)$ characterized via the Hi-Lo method (100 kHz for high frequency) are also shown for comparison. []{data-label="fig_2"}](Figure2.eps){width="7cm"}
The CCDLTS spectra can be transformed into the energy distribution of the density of interface traps ($D_{\mathrm{it}}(E)$) under the following two assumptions: (1) $D_{\mathrm{it}}(E)$ depends only weakly on the energy, and (2) the capture cross section does not depend on the energy and temperature. Figure \[fig\_2\] shows the energy distributions for the DWHC and NHC samples calculated from the CCDLTS spectra. In the calculation of $D_{\mathrm{it}}(E)$, the capture cross sections for the DWHC and NHC samples are assumed to be 4 $\times$ 10$^{-15}$ cm$^{2}$ and 1 $\times$ 10$^{-15}$ cm$^{2}$, respectively. For comparison, $D_{\mathrm{it}}(E)$ calculated via the Hi-Lo method is also shown. $D_{\mathrm{it}}(E)$ for the NHC sample increases steeply as the energy approaches the edge of the conduction band, whereas that for the DWHC sample increases gradually. As a result, $D_{\mathrm{it}}(E)$ close to the conduction band for the NHC sample is larger than that for the DWHC sample. This large $D_{\mathrm{it}}(E)$ close to the conduction band for the NHC sample corresponds to the $C1$ traps, which degrade the MOS mobility for the reason described above. In contrast, the small $D_{\mathrm{it}}(E)$ close to the conduction band for the DWHC sample results in a relatively large interface mobility of 80 cm$^2$/(Vs).
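The transformation rests on the fact that, at a fixed rate window, each measurement temperature probes a single trap depth: inverting $e_n = \gamma\sigma T^2\exp(-(E_C-E)/k_BT)$ at the rate window $e_{\mathrm{ref}}$ of the Fourier analysis gives $E_C-E = k_BT\ln(\gamma\sigma T^2/e_{\mathrm{ref}})$, which maps the temperature axis of the spectra onto the energy axis of Fig. \[fig\_2\]. A minimal sketch of this mapping follows; the lumped prefactor is an assumed illustrative value, not the one used for the figure.

```python
import numpy as np

K_B = 8.617e-5       # Boltzmann constant in eV/K
GAMMA_SIGMA = 1.0e7  # lumped prefactor gamma*sigma in s^-1 K^-2 (assumed)
E_REF = 1.0 / 0.205  # reference emission rate from the 205 ms period width, s^-1

def probed_depth(T):
    """Trap depth E_C - E probed at temperature T, from inverting
    e_n = GAMMA_SIGMA * T^2 * exp(-(E_C - E)/kT) at e_n = E_REF."""
    return K_B * T * np.log(GAMMA_SIGMA * T**2 / E_REF)

for T in (100.0, 200.0, 300.0, 400.0):
    print(f"T = {T:5.1f} K -> E_C - E = {probed_depth(T):.2f} eV")
```

The monotonic mapping shows why low temperatures probe the shallow traps near the conduction-band edge: with these assumed parameters the 100 K region of the spectrum corresponds to depths of roughly 0.2 eV.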
![Comparison of CCDLTS spectra between the NHC sample and the DNHS sample.[]{data-label="fig_3"}](Figure3.eps){width="8cm"}
In Fig. \[fig\_2\], we also present the difference between the energy distributions of $D_{\mathrm{it}}(E)$ characterized from the CCDLTS spectra and those characterized by the Hi-Lo method [@nicollian_hilo], where the high-frequency C-V characteristics are measured at 100 kHz. It can be seen that the Hi-Lo method underestimates $D_{\mathrm{it}}(E)$ compared with the estimates from the CCDLTS spectra. This is because the frequency of the high-frequency capacitance measurement (100 kHz) is not high enough to measure the true “high-frequency capacitance” [@yoshioka_psi]. We note that the interface traps are modeled as a series connection of a resistance and a capacitance in the equivalent circuit of a MOS capacitor [@nicollian_hilo]. Accordingly, the interface traps have cut-off frequencies. To measure the true “high-frequency capacitance”, the frequency of the C-V measurement should be higher than the cut-off frequency of the traps, which increases exponentially as the trap energy approaches the edge of the conduction band [@nicollian_hilo]. Therefore, the measured “high-frequency capacitance” is overestimated at energies close to the edge of the conduction band, which leads to an underestimation of $D_{\mathrm{it}}(E)$. As for deep traps ($>$ 0.5 eV), the capacitance measurements tend to be carried out in a non-equilibrium state, which also leads to an underestimation of $D_{\mathrm{it}}(E)$. For the NHC sample, the pile-up of nitrogen atoms at the SiC/SiO$_2$ interface [@haney_2013] may cause a deviation in the estimate of the trap energy in the C-V measurements. In summary, the $D_{\mathrm{it}}(E)$ characterization of SiC/SiO$_2$ interfaces via the Hi-Lo method at room temperature has a number of problems; thus, it should be avoided.
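The cut-off argument can be made quantitative with a back-of-the-envelope estimate: a trap stops following the probe signal roughly when the measurement angular frequency exceeds its emission rate. The sketch below uses assumed order-of-magnitude values for the capture cross section, thermal velocity and effective density of states of 4H-SiC (not measured values from this work); with these numbers, a 0.2 eV trap still responds far above 100 kHz at room temperature, whereas traps deeper than roughly 0.4 eV cannot follow even 100 kHz.

```python
import numpy as np

K_B = 8.617e-5   # Boltzmann constant in eV/K
T = 300.0        # measurement temperature in K

# Assumed order-of-magnitude parameters for 4H-SiC at room temperature
SIGMA = 1.0e-15  # capture cross section, cm^2
V_TH = 2.0e7     # electron thermal velocity, cm/s
N_C = 1.7e19     # effective density of states in the conduction band, cm^-3

def cutoff_frequency(depth):
    """Trap response rolls off near its emission rate
    e_n = sigma * v_th * N_C * exp(-depth/kT); depth in eV below E_C."""
    e_n = SIGMA * V_TH * N_C * np.exp(-depth / (K_B * T))
    return e_n / (2.0 * np.pi)

for depth in (0.2, 0.4, 0.6):
    print(f"E_C - E = {depth:.1f} eV -> f_c ~ {cutoff_frequency(depth):.1e} Hz")
```

This exponential spread of cut-off frequencies across the band gap is why no single measurement frequency can serve as a true "high-frequency" probe for all trap depths at one temperature.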
Hereafter, we discuss the difference between the CCDLTS spectrum of MOS capacitors on the C-face and that of MOS capacitors on the Si-face to consider the origin of the $C1$ traps and of other defects at the SiC/SiO$_2$ interfaces. Figure \[fig\_3\] shows a comparison of the CCDLTS spectrum of the oxynitrided MOS capacitor on the C-face (the NHC sample) with that of the one on the Si-face (the DNHS sample). We found two peaks ($O1$ and $O2$) in the CCDLTS spectrum for the DNHS sample, as shown in Fig. \[fig\_3\]. From an Arrhenius-plot analysis, the energies of the $O1$ and $O2$ traps are estimated to be 0.14 eV and 0.37 eV, respectively. These peaks were also reported by Basile and his coworkers [@Basileno]. It should be noted that the energy of the $C1$ trap at the SiC/SiO$_2$ interface on the C-face is almost equal to that of the $O1$ trap on the Si-face. On the other hand, the $O2$ peak in the CCDLTS spectrum is specific to the SiC/SiO$_2$ interface on the Si-face. The absence of the $O2$ peak in the CCDLTS spectrum of the SiC/SiO$_2$ interface on the C-face means that the density of $O2$ traps on the C-face is, at least, negligible compared with that of the $O1$ traps. This information on the dependence of the trap densities on the crystal face provides insight into the origin and formation mechanism of the traps at the SiC/SiO$_2$ interface.
Here, we review the structure of the SiC/SiO$_2$ interface on the Si-face and the C-face. For the SiC/SiO$_2$ interface on the Si-face, the uppermost Si atoms, which terminate the SiC layer, are connected to O atoms in the SiO$_2$ layer [@deak_iop2007; @Ohnuma_2007Si; @devynck2011; @devynckphd; @xshen_jap2013]. For the SiC/SiO$_2$ interface on the C-face, it may seem reasonable to assume that the uppermost C atoms are connected to O atoms in the SiO$_2$ layer. However, first-principles molecular-dynamics calculations showed that this interface structure is not stable [@Ohnuma_2009C]. Consequently, it is believed that Si atoms in the SiO$_2$ layer are connected to the uppermost C atoms in SiC [@Ohnuma_2009C; @xshen_jap2013]; one such SiC/SiO$_2$ structure was shown to be stable by first-principles molecular-dynamics calculations [@Ohnuma_2009C]. In any case, the SiC/SiO$_2$ interface on the C-face may be less stable than that on the Si-face. This may explain the high oxidation rate of the C-face, which is ten times that of the Si-face [@ysong_jap2004]. Further, the oxidation mechanism may differ between the C-face and the Si-face [@xshen_jap2013]. We speculate that the generation of $O2$ traps on the Si-face may be due to an oxidation mechanism specific to the Si-face.
Basile and his co-workers concluded that the $O1$ and $O2$ traps are defects in the oxide on the basis of a comparison of CCDLTS spectra between MOS structures on the Si-face of 4H-SiC and those on the Si-face of 6H-SiC [@Basileno]. These traps correspond to near-interface oxide traps (NITs), which were first reported by Afanasev and his coworkers in 1997 on the basis of experiments on photon-stimulated tunneling (PST) of trapped electrons [@afanasiev_pss]. Their PST measurements on MOS structures on 4H-SiC and 6H-SiC showed a barrier height of 2.8 eV, which corresponds to NIT levels at approximately $E_{C}-0.1$ eV, where $E_C$ is the energy of the edge of the conduction band of 4H-SiC. The idea of NITs was also supported by thermally stimulated current measurements on MOS structures on 4H-SiC and 6H-SiC [@Rudenko2005545]. In consideration of these reports, the $C1$ traps on the C-face and the $O1$ and $O2$ traps on the Si-face are likely to be oxide traps. Further, the $C1$ traps on the C-face are likely to be the same as the $O1$ traps on the Si-face because their energies are almost equal. We presume that the origin of a $C1$ trap is a carbon dimer or a single carbon defect in SiO$_2$, by comparing the energy of the $C1$ traps with the charge-transition energies of point defects in SiO$_2$ obtained from first-principles calculations [@deak_iop2007; @devynck2011new].
![Comparison of CCDLTS spectra of SiC/SiO$_2$ interfaces oxidized in a wet atmosphere on the C-face (a DWHC sample) with that on Si-face (a DWS sample). []{data-label="fig_4"}](Figure4.eps){width="8cm"}
A comparison of the CCDLTS spectrum of the SiC/SiO$_2$ interface oxidized in a wet atmosphere on the C-face (a DWHC sample) with that on the Si-face (a DWS sample) is shown in Fig. \[fig\_4\]. The CCDLTS spectrum of the DWS sample (Si-face) is much larger than that of the DWHC sample (C-face). This difference in the CCDLTS spectra reflects the difference in interface mobility between the C-face and the Si-face (DWHC: 80 cm$^2$/(Vs); DWS: 8 cm$^2$/(Vs)). Figure \[fig\_4\] shows that the $C1$ traps are passivated or removed only on the C-face. If we assume that the $C1$ trap is an oxide trap, the possible mechanisms for the elimination of the $C1$ traps from the interface can be narrowed down. First, we exclude the possibility that wet oxidation accelerates the decomposition of the $C1$ traps, because this mechanism would also remove the $C1$ traps on the Si-face. Therefore, it is natural to think that the difference in the density of $C1$ traps between the C-face and the Si-face is due to a difference in the defect-generation rate during wet oxidation. As described above, the structure of the SiC/SiO$_2$ interface on the C-face may be totally different from that on the Si-face. It is certain that the wet-oxidation mechanism differs between the C-face and the Si-face and that this difference causes the difference in the defect-generation rate at the oxidation front. A more detailed first-principles investigation of the wet oxidation of SiC is needed to clarify the mechanism of the removal of the $C1$ traps.
Conclusions
===========
We used CCDLTS measurements to characterize and compare $D_{\mathrm{it}}(E)$ close to the edge of the conduction band for SiC/SiO$_2$ interfaces on the Si-face and the C-face fabricated by two techniques: oxynitridation and wet oxidation. The results showed that $D_{\mathrm{it}}(E)$ close to the edge of the conduction band for the SiC/SiO$_2$ interfaces on the C-face and the Si-face fabricated by oxynitridation is much higher than that for the interface on the C-face fabricated by wet oxidation. The high $D_{\mathrm{it}}(E)$ close to the edge of the conduction band of the oxynitrided samples is due to the $C1$ traps, which are likely to be the main cause of the low interface mobility. The origin of the $C1$ traps is likely to be carbon-related defects in the oxide, which are common to the SiC/SiO$_2$ interfaces on the C-face and the Si-face. We found $O2$ traps only at the SiC/SiO$_2$ interface on the Si-face. We also found that the $C1$ traps at the interface can be eliminated by wet oxidation only on the C-face. It is presumed that the generation of $O2$ traps at the interface on the Si-face and the elimination of $C1$ traps at the interface on the C-face by wet oxidation are caused by oxidation reactions specific to the crystal faces, which result from the different atomic structures of the SiC/SiO$_2$ interface on the Si-face and the C-face.
We thank Dr. S. Weiss and Dr. L. Cohausz at Phys Tech GmbH and Dr. H. Okada at Kobelco Research, Inc. for their help with the DLTS measurements. This research was supported by a grant from the Japan Society for the Promotion of Science (JSPS) through the Funding Program for World-Leading Innovative R&D on Science and Technology (FIRST Program), under the aegis of the Council for Science and Technology Policy (CSTP).
---
abstract: 'We propose a non-deterministic CNOT gate based on a quantum cloner, a quantum switch based on all optical routing of single photon by single photon, a quantum-dot spin in a double-sided optical microcavity with two photonic qubits, delay lines and other linear optical photonic devices. Our CNOT provides a fidelity of 78% with directly useful outputs for a quantum computing circuit and requires no ancillary qubits or electron spin measurements.'
author:
- '[Amor Gueddana$^{1,\,2}$ , Peyman Gholami$^{1}$ and Vasudevan Lakshminarayanan$^{1,\,3}$]{}'
bibliography:
- 'Quantum\_Bib\_2019.bib'
title: '[Can A Universal Quantum Cloner Be Used to Design an Experimentally Feasible Near-Deterministic CNOT Gate?]{}'
---
1\. Theoretical & Experimental Epistemology Lab, TEEL, School of Optometry and Vision Science, University of Waterloo, 200 University Avenue West, Waterloo, Ontario N2L 3G1, Canada.
2\. Green & Smart Communication Systems Lab, Gres’Com, Engineering School of Communication of Tunis, Sup’Com, University of Carthage, Ghazela Technopark, 2083, Ariana, Tunisia.
3\. Department of Physics, Department of Electrical and Computer Engineering and Department of Systems Design Engineering, University of Waterloo, 200 University Avenue West, Waterloo, Ontario N2L 3G1, Canada.
Introduction
============
Physical implementations of photonic quantum computers and secure optical quantum communication systems are based on photonic quantum gates [@Kok_2007]. The gate that is universal for building all quantum circuits is the controlled-NOT (CNOT) gate [@Shende_2006; @Maslov_2008]. Experimentally realizable photonic CNOT gates are those based on linear optical devices [@Pittman_2001; @Ralph_2002; @OBrien_2003; @Pittman_2003; @Gasparoni_2004; @Pittman_2004; @Bao_2007; @Clark_2009; @Gueddana_2013a; @Gueddana_2015]. These gates have success probabilities of less than 1/4, and further improvement of this CNOT model is not possible, since the best success probability for CNOT operation with linear optical devices is 3/4 [@Knill_2003]. This is a major hurdle for the realization of complex quantum computing circuits, since the success probability of such circuits may be very low owing to the combination of many non-deterministic CNOTs. To overcome this inefficiency, other techniques, such as superconducting qubits [@Plantenberg_2007], have to be employed. Other work has proposed the use of nonlinearities to achieve non-deterministic CNOTs with high success probability [@Luo_2016; @Li_2013]. Still other designs of CNOTs are based on the spin of an electron trapped in a quantum dot (QD) and confined in a double-sided optical micro-cavity [@Wei_2013; @Wei_2014; @Wang_2013; @Luo_2014; @Bonato_2010]. Although the fidelity values of these CNOTs seem to be near unity, in practice all of these designs have major drawbacks because of physical constraints that make them less effective in serial or parallel combinations. For the particular CNOT model of Wang *et al.* [@Wang_2013], we presented comments on the parameter values taken for the simulation and showed that the proposed CNOT gate is valid only in the strong-coupling regime [@Gueddana_2018d].
In this paper, we refer to this same CNOT and show that it provides a fidelity of only 47% in a realistic implementation (while it is 93.7% in the theoretical case), then we propose an optimized design based on a quantum cloner.
Several theoretical and experimental works have addressed quantum cloning machines providing optimal polarization cloning of single photons using either parametric down-conversion (PDC) or photon bunching on a beam splitter (BS). Fasel *et al.* [@Fasel_2002] proposed close-to-optimal quantum cloning of the polarization state of light using a standard erbium-doped fiber for amplification and obtained a fidelity $F_{cloner}$ equal to 0.82. Lamas-Linares *et al.* [@Lamas_Linares_2002] proposed a cloning technique based on stimulated emission in PDC using a nonlinear $\beta$-barium borate (BBO) crystal and obtained experimental results with $F_{cloner}=0.81$. Martini *et al.* [@Martini_2004] used for cloning a BBO crystal slab cut for type-II phase matching, implementing optimal cloning and a NOT gate. Bartuskova *et al.* [@Bartifmmodemathringuelserufiifmmodecheckselsevsfikova_2007] addressed a phase-covariant $1\rightarrow2$ qubit cloner based on two single-mode optical fibers, a nonlinear crystal, an attenuator and a phase modulator, providing a fidelity of $F_{cloner}=0.854$, which slightly surpasses the theoretical optimal value of the universal cloner (UC), denoted $F_{UC}=5/6$. The question is: can these cloners be used for designing photonic CNOT gates? The main objective of this paper is to answer this question. The paper is therefore composed of five sections: Section 2 describes the photonic components used for the CNOT design in the imperfect case; Section 3 presents the concept and modelling of the photonic CNOT using a quantum cloner and two quantum switches; Section 4 presents the simulation results and the corresponding experimental-realization challenges; finally, a conclusion is given in Section 5.
Imperfect photonic devices
==========================
Let us first consider the basic photonic components in the imperfect case as illustrated by figure \[fig:1\].
Figure \[fig:1a\] illustrates a Half Wave Plate (HWP) with arbitrary error $\xi$. For a right-circularly polarized input photon, denoted $\left|R\right\rangle $, and a left-circularly polarized one, denoted $\left|L\right\rangle $, the HWP behaves as follows:
[ $$\begin{array}{c}
\left|R\right\rangle \rightarrow\sqrt{\frac{1-\xi}{2}}\left|R\right\rangle +\sqrt{\frac{1+\xi}{2}}\left|L\right\rangle ;\,\,\,\left|L\right\rangle \rightarrow\sqrt{\frac{1-\xi}{2}}\left|R\right\rangle -\sqrt{\frac{1+\xi}{2}}\left|L\right\rangle \end{array}\label{eq:1}$$ ]{}
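The imperfect HWP of equation \[eq:1\] can be summarized as a $2\times2$ matrix acting on the $(\left|R\right\rangle ,\left|L\right\rangle )$ amplitudes. The quick numerical check below (a sketch, not part of the original model) confirms that each output state remains normalized for any $\xi$, while the two outputs are mutually orthogonal only for $\xi=0$; that is, only the ideal HWP is unitary:

```python
import numpy as np

def hwp(xi):
    """Imperfect half-wave plate of Eq. (1); columns are the
    images of |R> and |L>."""
    a = np.sqrt((1.0 - xi) / 2.0)
    b = np.sqrt((1.0 + xi) / 2.0)
    return np.array([[a,  a],
                     [b, -b]])

for xi in (0.0, 0.05):
    M = hwp(xi)
    norms = np.linalg.norm(M, axis=0)   # norms of the two output states
    overlap = float(M[:, 0] @ M[:, 1])  # <out_R | out_L> = -xi
    print(f"xi = {xi}: output norms = {norms}, overlap = {overlap:+.3f}")
```

The residual overlap $-\xi$ between the two outputs is the signature of the device error that propagates through the gate analysis below.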
An ideal Circular Polarizing Beam Splitter (CPBS) transmits $\left|R\right\rangle $ and totally reflects $\left|L\right\rangle $. For a CPBS with arbitrary errors $\tau_{R}$ and $\tau_{L}$ on $\left|R\right\rangle $ and $\left|L\right\rangle $, and for an arbitrary incident input state $\alpha\left|R\right\rangle +\beta\left|L\right\rangle $ (figure \[fig:1b\]), the CPBS transmits the state $\sqrt{\alpha-\tau_{R}}\left|R\right\rangle +\sqrt{\tau_{L}}\left|L\right\rangle $ and reflects the state $\sqrt{\tau_{R}}\left|R\right\rangle +\sqrt{\beta-\tau_{L}}\left|L\right\rangle $.
A quantum switch (SW) has two inputs $I_{1}$ and $I_{2}$ and two outputs $O_{1}$ and $O_{2}$. The transmittance (solid arrows in figure \[fig:1c\]) and reflectance (dashed arrows in figure \[fig:1c\]) coefficients from $I_{1}$ and $I_{2}$ to $O_{1}$ and $O_{2}$ are denoted by $T_{1,2}$, $T_{2,1}$, $R_{1,1}$ and $R_{2,2}$. Several works have addressed the physical implementation of the quantum SW [@OShea_2013; @Smart_2014; @Sun_2016; @Volz_2012]. The switching process considered in this work is based on all-optical routing of a single photon by a single photon, without any additional control field [@Shomroni_2014]. This switch is based on a three-level atomic $\Lambda$-configuration with two different transitions, denoted $\sigma^{+}$ and $\sigma^{-}$, coupled only to the right- or left-propagating photonic mode, respectively. Each of the inputs $I_{1}$ and $I_{2}$ may be either transmitted or reflected into $O_{1}$ or $O_{2}$, depending on whether the atom is in the $m_{F}=+1$ state or the $m_{F}=-1$ state. When the atom is in the latter state, an incoming photon on $I_{1}$ in the state $\sigma^{+}$ is reflected to $O_{1}$ and its state becomes $\sigma^{-}$, which toggles the atom’s state to $m_{F}=+1$. When the atom’s state is $m_{F}=+1$, it does not interact with $\sigma^{+}$ photons from $I_{1}$, and they are totally transmitted to $O_{2}$. The whole process is symmetric for $I_{2}$.
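The routing rules above can be captured by a small classical state machine. The sketch below encodes the $I_{1}$ rules exactly as stated; the $I_{2}$ rules are taken as the mirror-symmetric case (an assumption based on the stated symmetry), and coherence, loss and detection are ignored:

```python
def route(atom, port, helicity):
    """Ideal single-photon-controlled switch: atom in {-1, +1} (m_F),
    port in {'I1', 'I2'}, helicity in {'+', '-'} for sigma+/sigma-.
    Returns (new_atom, output_port, output_helicity)."""
    if port == 'I1':
        if atom == -1 and helicity == '+':
            return +1, 'O1', '-'     # reflected, helicity flipped, atom toggled
        return atom, 'O2', helicity  # no interaction: transmitted to O2
    # I2: mirror-symmetric case (assumed)
    if atom == +1 and helicity == '-':
        return -1, 'O2', '+'
    return atom, 'O1', helicity

atom = -1
atom, out1, pol1 = route(atom, 'I1', '+')  # first photon toggles the atom
atom, out2, pol2 = route(atom, 'I1', '+')  # second photon is transmitted
print(out1, pol1, out2, pol2)  # O1 - O2 +
```

The two successive calls illustrate the single-photon control: the first photon is rerouted and flips the atom, so the second identical photon takes the other output.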
Other photonic devices used in our optimized model are the Beam Splitter (BS), the Quarter Wave Plate (QWP) and Delay Line (DL). In this work, we consider these devices only in the ideal case.
Optimized CNOT gate model based on a quantum cloner
====================================================
With the previously mentioned components, we propose a CNOT architecture using the quantum universal cloner, as illustrated by figure \[fig:2\].
The central part of the CNOT, shaded grey in figure \[fig:2\], is the CNOT proposed in [@Wang_2013], which we optimize here. This CNOT is based on the spin of an electron in a QD trapped in a double-sided optical microcavity, which behaves like a BS [@Hu_2009]. Inside the grey region, we consider two input photons 1 and 2, the control and target photons, with initial states $\left|\Psi_{ph}^{1}\right\rangle $ and $\left|\Psi_{ph}^{2}\right\rangle $, respectively.
![\[fig:2\] CNOT gate optimized model. ](Fig2){width="0.55\columnwidth"}
The electron spin state inside the QD is denoted $\left|\Psi_{s}\right\rangle $. Consider the following initial states:
[ $$\begin{array}{c}
\left|\Psi_{ph}^{1}\right\rangle =\alpha\left|R_{1}\right\rangle +\beta\left|L_{1}\right\rangle \\
\left|\Psi_{ph}^{2}\right\rangle =\delta\left|R_{2}\right\rangle +\gamma\left|L_{2}\right\rangle \\
\left|\Psi_{s}\right\rangle =\left(\left|\uparrow_{s}\right\rangle -\left|\downarrow_{s}\right\rangle \right)/\sqrt{2}
\end{array}\label{eq:2}$$ ]{}
The two photons successively interact with the optical micro-cavity. For the coupled cavity, when the frequencies of the input photon, the cavity mode and the spin-dependent optical transition are equal, the reflection and transmission coefficients of the double-sided optical micro-cavity system used in the CNOT design, denoted $r\left(\omega\right)$ and $t\left(\omega\right)$, respectively, are given by [@Wang_2013; @Bonato_2010; @Hu_2009]:
[ $$t\left(\omega\right)=-\frac{2\gamma\kappa}{\gamma\left(2\kappa+\kappa_{s}\right)+4g^{2}};\,\,\,\,\,r\left(\omega\right)=1+t\left(\omega\right)\label{eq:3}$$ ]{}
where $g$ is the coupling strength, $\kappa$ and $\kappa_{s}/2$ are the cavity-field decay rates into the input/output modes and the leaky modes, respectively, and $\gamma/2$ is the $X^{-}$ dipole decay rate. For the uncoupled cavity, the reflection and transmission coefficients are denoted $r_{0}\left(\omega\right)$ and $t_{0}\left(\omega\right)$; they are obtained directly from equation \[eq:3\] by setting $g=0$. For a realistic spin–cavity unit, the side leakage and cavity loss cannot be neglected; in this case, $t\left(\omega\right)$ in the coupled cavity and $r_{0}\left(\omega\right)$ in the uncoupled cavity introduce bit-flip errors. The relevant energy levels and optical selection rules for the exciton $X^{-}$ inside the singly charged GaAs/InAs QD have been detailed in [@Wang_2013; @Wei_2014], and the dynamics of the interaction of the QD spin in a double-sided optical micro-cavity, for $r_{0}=\left|r_{0}\left(\omega\right)\right|$, $t_{0}=\left|t_{0}\left(\omega\right)\right|$, $r_{1}=\left|r\left(\omega\right)\right|$ and $t_{1}=\left|t\left(\omega\right)\right|$, are given as follows:
[ $$\begin{array}{c}
\left|R^{\downarrow},\,\uparrow_{s}\right\rangle \rightarrow-t_{0}\left|R^{\downarrow},\,\uparrow_{s}\right\rangle -r_{0}\left|L^{\uparrow},\,\uparrow_{s}\right\rangle \\
\left|R^{\downarrow},\,\downarrow_{s}\right\rangle \rightarrow r_{1}\left|L^{\uparrow},\,\downarrow_{s}\right\rangle +t_{1}\left|R^{\downarrow},\,\downarrow_{s}\right\rangle \\
\left|R^{\uparrow},\,\uparrow_{s}\right\rangle \rightarrow r_{1}\left|L^{\downarrow},\,\uparrow_{s}\right\rangle +t_{1}\left|R^{\uparrow},\,\uparrow_{s}\right\rangle \\
\left|R^{\uparrow},\,\downarrow_{s}\right\rangle \rightarrow-t_{0}\left|R^{\uparrow},\,\downarrow_{s}\right\rangle -r_{0}\left|L^{\downarrow},\,\downarrow_{s}\right\rangle \\
\left|L^{\downarrow},\,\uparrow_{s}\right\rangle \rightarrow r_{1}\left|R^{\uparrow},\,\uparrow_{s}\right\rangle +t_{1}\left|L^{\downarrow},\,\uparrow_{s}\right\rangle \\
\left|L^{\downarrow},\,\downarrow_{s}\right\rangle \rightarrow-t_{0}\left|L^{\downarrow},\,\downarrow_{s}\right\rangle -r_{0}\left|R^{\uparrow},\,\downarrow_{s}\right\rangle \\
\left|L^{\uparrow},\,\uparrow_{s}\right\rangle \rightarrow-t_{0}\left|L^{\uparrow},\,\uparrow_{s}\right\rangle -r_{0}\left|R^{\downarrow},\,\uparrow_{s}\right\rangle \\
\left|L^{\uparrow},\,\downarrow_{s}\right\rangle \rightarrow r_{1}\left|R^{\downarrow},\,\downarrow_{s}\right\rangle +t_{1}\left|L^{\uparrow},\,\downarrow_{s}\right\rangle
\end{array}\label{eq:45}$$ ]{}
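Equation \[eq:3\] is straightforward to evaluate. The sketch below computes the resonant coefficients for the coupled ($g\neq0$) and uncoupled ($g=0$) cavity; the chosen ratios ($g/\kappa=2.4$, $\kappa_{s}=0$, $\gamma=0.1\kappa$) are illustrative strong-coupling, lossless values, not parameters of a specific device. In this ideal limit the uncoupled cavity fully transmits ($\left|t_{0}\right|=1$, $r_{0}=0$) while the coupled cavity almost fully reflects, which is the spin-dependent routing listed above:

```python
def cavity_coeffs(g, kappa, kappa_s, gamma):
    """Resonant transmission/reflection of the double-sided cavity, Eq. (3):
    t = -2*gamma*kappa / (gamma*(2*kappa + kappa_s) + 4*g^2), r = 1 + t."""
    t = -2.0 * gamma * kappa / (gamma * (2.0 * kappa + kappa_s) + 4.0 * g**2)
    return t, 1.0 + t

kappa, gamma = 1.0, 0.1  # rates in arbitrary units (illustrative)
t1, r1 = cavity_coeffs(g=2.4, kappa=kappa, kappa_s=0.0, gamma=gamma)  # coupled
t0, r0 = cavity_coeffs(g=0.0, kappa=kappa, kappa_s=0.0, gamma=gamma)  # uncoupled

print(f"coupled:   |t1| = {abs(t1):.3f}, |r1| = {abs(r1):.3f}")
print(f"uncoupled: |t0| = {abs(t0):.3f}, |r0| = {abs(r0):.3f}")
```

Setting $\kappa_{s}>0$ in the same function shows how side leakage makes $\left|t_{0}\right|<1$ and $r_{0}>0$, i.e., how the bit-flip errors mentioned above appear.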
Photon 1 first passes through HWP1, then travels through the optical micro-cavity, and then passes through HWP2. The two switches used in the CNOT are denoted SW1 and SW2; the transmittance and reflectance coefficients of SW1 are denoted $T_{1,2}^{1}$, $T_{2,1}^{1}$, $R_{1,1}^{1}$ and $R_{2,2}^{1}$, while those of SW2 are $T_{1,2}^{2}$, $T_{2,1}^{2}$, $R_{1,1}^{2}$ and $R_{2,2}^{2}$. No details were given about SW1 and SW2 in [@Wang_2013], where they were assumed to switch photons 1 and 2 perfectly. After a delay set by DL1, photon 2 is switched by SW1 and injected into the spin–cavity system; before photon 2 enters and after it leaves the system, two Hadamard transforms are performed on the electron-spin state by $\nicefrac{\pi}{2}$ microwave pulses [@Wang_2013; @Bonato_2010], which transform the states $\left|\uparrow_{s}\right\rangle \rightarrow\left(\left|\uparrow_{s}\right\rangle +\left|\downarrow_{s}\right\rangle \right)/\sqrt{2}$ and $\left|\downarrow_{s}\right\rangle \rightarrow\left(\left|\uparrow_{s}\right\rangle -\left|\downarrow_{s}\right\rangle \right)/\sqrt{2}$. After being switched by SW2, photon 2 is delayed by DL2 to wait for the interaction between photon 1 and its clone.
For the inputs of equation \[eq:2\], the state at the output of the CNOT is then transformed as follows:
[ $$\begin{array}{c}
\left|\Psi_{ph}^{1}\right\rangle \otimes\left|\Psi_{ph}^{2}\right\rangle \otimes\left|\Psi_{s}\right\rangle \rightarrow\\
\left(\alpha\delta\left|R_{1}\right\rangle \left|R_{2}\right\rangle +\alpha\gamma\left|R_{1}\right\rangle \left|L_{2}\right\rangle -\beta\delta\left|L_{1}\right\rangle \left|L_{2}\right\rangle -\beta\gamma\left|L_{1}\right\rangle \left|R_{2}\right\rangle \right)\left|\uparrow_{s}\right\rangle \\
+\left(\alpha\delta\left|R_{1}\right\rangle \left|R_{2}\right\rangle +\alpha\gamma\left|R_{1}\right\rangle \left|L_{2}\right\rangle +\beta\delta\left|L_{1}\right\rangle \left|L_{2}\right\rangle +\beta\gamma\left|L_{1}\right\rangle \left|R_{2}\right\rangle \right)\left|\downarrow_{s}\right\rangle
\end{array}\label{eq:4}$$ ]{}
It is clear from equation \[eq:4\] that the CNOT function is performed correctly in the branch entangled with the spin state $\left|\downarrow_{s}\right\rangle $, but a $\left(-\right)$ sign is introduced when photon 1 is in the state $\left|L_{1}\right\rangle $ and the photons are entangled with the spin state $\left|\uparrow_{s}\right\rangle $. A measurement of the spin is required to determine the spin state and then decide whether to apply an identity $\left(I\right)$ or a negation gate ($\sigma_{z}$) to photon 1, so as to obtain the correct CNOT for both the $\left|\uparrow_{s}\right\rangle $ and $\left|\downarrow_{s}\right\rangle $ outcomes. This heralded function has a fidelity of 94 %, which means that the correct CNOT is performed with only 47 % success probability. Without spin measurement, this CNOT model cannot be used in a serial or parallel combination, since the success probability of the entire circuit would decrease exponentially.
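The feed-forward described above can be verified numerically: applying $\sigma_{z}$ to photon 1 ($\left|R_{1}\right\rangle \rightarrow\left|R_{1}\right\rangle $, $\left|L_{1}\right\rangle \rightarrow-\left|L_{1}\right\rangle $) in the $\left|\uparrow_{s}\right\rangle $ branch of equation \[eq:4\] restores the ideal CNOT output, while the $\left|\downarrow_{s}\right\rangle $ branch already matches it. A sketch with arbitrary normalized amplitudes:

```python
import numpy as np

# Basis ordering: |R1 R2>, |R1 L2>, |L1 R2>, |L1 L2>
alpha, beta = 0.6, 0.8     # control amplitudes (arbitrary, normalized)
delta, gamma = 0.28, 0.96  # target amplitudes (arbitrary, normalized)

# Ideal CNOT output: target flipped when the control is |L1>
ideal = np.array([alpha*delta, alpha*gamma, beta*gamma, beta*delta])

# |up_s> branch of Eq. (4): CNOT amplitudes with a (-) sign on the |L1> terms
up_branch = np.array([alpha*delta, alpha*gamma, -beta*gamma, -beta*delta])
# |down_s> branch of Eq. (4): already the ideal CNOT output
down_branch = np.array([alpha*delta, alpha*gamma, beta*gamma, beta*delta])

# Feed-forward: sigma_z on photon 1 acts as diag(1, 1, -1, -1) in this basis
sigma_z_1 = np.diag([1.0, 1.0, -1.0, -1.0])

print(np.allclose(sigma_z_1 @ up_branch, ideal))  # True
print(np.allclose(down_branch, ideal))            # True
```

This is exactly the $I$-or-$\sigma_{z}$ correction that the spin measurement would trigger; the architecture proposed next aims to implement it without measuring the spin.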
Our main idea is to eliminate this $\left(-\right)$ sign at the output, in order to be independent of the spin state for further circuit realization. The idea is to apply a $\sigma_{z}$ transform on photon 1, being at the state $\left|L_{1}\right\rangle $, only when the spin state at the output is $\left|\uparrow_{s}\right\rangle $. A measurement of the spin state inside a QD has been addressed in [@Hu_2009]: if we have a horizontally-polarized $\left(\left|H\right\rangle \right)$ or vertically-polarized $\left(\left|V\right\rangle \right)$ single photon at the input of the QD spin system, being initially at the state $\mu\left|\uparrow_{s}\right\rangle +\nu\left|\downarrow_{s}\right\rangle $, it is possible, using a QWP after the QD system, to transmit the state of the electron to the photon as $\mu\left|R\right\rangle +\nu\left|L\right\rangle $. Based on this idea, we use in our architecture a UC to clone photon 1 and produce another photon 1’ in the same state $\left|\Psi_{ph}^{1'}\right\rangle =\left|\Psi_{ph}^{1}\right\rangle =\alpha\left|R_{1'}\right\rangle +\beta\left|L_{1'}\right\rangle $, which will later serve as the control for the $\sigma_{z}$ gate. After traversing QWP1, the state of photon 1’ becomes $\left|\Psi_{ph}^{1'}\right\rangle =\alpha\left|H_{1'}\right\rangle +\beta\left|V_{1'}\right\rangle $. Photon 1’ is then delayed by DL3 to wait for photons 1 and 2 to pass the QD system and alter the spin state (the spin state is initially given by equation \[eq:2\], and after two imperfect Hadamard gates, it becomes $\mu\left|\uparrow_{s}\right\rangle +\nu\left|\downarrow_{s}\right\rangle $, for $\mu\approx\nu\approx\nicefrac{1}{2}$). Photon 1’ passes through the QD spin system and, after QWP2, its state becomes $\left|\Psi_{ph}^{1'}\right\rangle =\mu\left|R_{1}\right\rangle +\nu\left|L_{1}\right\rangle $. CPBS4 transmits $\mu\left|R_{1}\right\rangle $ while $\nu\left|L_{1}\right\rangle $ is discarded. The transmitted $\mu\left|R_{1}\right\rangle $ is flipped to $\mu\left|L_{1}\right\rangle $ by HWP3.
At this level, photon 1’ is present with the same probability amplitude $\mu$ as the electron spin being in the state $\left|\uparrow_{s}\right\rangle $; moreover, it is in exactly the same mode as photon 1, which allows it to serve as control for the $\sigma_{z}$ gate. This gate should apply a $\left(-\right)$ sign only to photon 1 being at the state $\left|L_{1}\right\rangle $; this is the role of CPBS2 and CPBS3. Finally, the time interval between all photons, the path lengths traveled by the photons and the time delays of DL1, DL2 and DL3 should take into consideration the cavity photon lifetime and the single charged electron spin coherence time [@Wang_2013].
We consider $\xi_{1}$ and $\xi_{2}$ to be the errors related to HWP1 and HWP2. We suppose that $\left(\tau_{R}^{1},\,\tau_{L}^{1}\right)$,$\left(\tau_{R}^{2},\,\tau_{L}^{2}\right)$, $\left(\tau_{R}^{3},\,\tau_{L}^{3}\right)$ and $\left(\tau_{R}^{4},\,\tau_{L}^{4}\right)$ are the errors related to CPBS1, CPBS2, CPBS3 and CPBS4, respectively. For simplicity, we neglect errors due to QWP1, QWP2 and HWP3. For the same inputs of equation \[eq:2\], we compute the output of the optimized CNOT and we obtain:
[ $$\begin{array}{c}
\left|\Psi_{ph}^{1}\right\rangle \otimes\left|\Psi_{ph}^{2}\right\rangle \otimes\left|\Psi_{s}\right\rangle \rightarrow\sqrt{T_{1,2}^{1}\,R_{2,2}^{1}\,T_{1,2}^{2}\,R_{1,1}^{2}\,F_{cloner}}\\
\times\left(\left(\eta_{1}\left|R_{1}\right\rangle \left|R_{2}\right\rangle +\eta_{2}\left|R_{1}\right\rangle \left|L_{2}\right\rangle +\eta_{3}\left|L_{1}\right\rangle \left|L_{2}\right\rangle +\eta_{4}\left|L_{1}\right\rangle \left|R_{2}\right\rangle \right)\left|\uparrow_{s}\right\rangle \right.\\
\left.+\left(\eta_{5}\left|R_{1}\right\rangle \left|R_{2}\right\rangle +\eta_{6}\left|R_{1}\right\rangle \left|L_{2}\right\rangle +\eta_{7}\left|L_{1}\right\rangle \left|L_{2}\right\rangle +\eta_{8}\left|L_{1}\right\rangle \left|R_{2}\right\rangle \right)\left|\downarrow_{s}\right\rangle \right)
\end{array}\label{eq:5}$$ ]{}
[where:]{}
[ $$\begin{array}{c}
\eta_{1}=\frac{\sqrt{1-\xi_{2}}}{2\sqrt{2}}\left(\left(a_{2}a_{2}^{'}+a_{4}a_{4}^{'}-a_{1}a_{1}^{'}-a_{3}a_{3}^{'}\right)\left(\delta\delta^{'}+\gamma\gamma^{'}\right)+\right.\\
\left.\left(a_{2}a_{2}^{"}+a_{4}a_{4}^{"}-a_{1}a_{1}^{"}-a_{3}a_{3}^{"}\right)\left(\delta\delta^{"}+\gamma\gamma^{"}\right)\right)\\
a_{1}=\left(\alpha+\beta\right)\sqrt{\left(1-\tau_{R}^{1}\right)\left(1-\xi_{1}\right)/2}\\
a_{2}=\left(\alpha+\beta\right)\sqrt{\tau_{R}^{1}\left(1-\xi_{1}\right)/2}\\
a_{3}=\left(\alpha-\beta\right)\sqrt{\left(1-\tau_{L}^{1}\right)\left(1+\xi_{1}\right)/2}\\
a_{4}=\left(\alpha-\beta\right)\sqrt{\tau_{L}^{1}\left(1+\xi_{1}\right)/2}\\
a_{1}^{'}=\sqrt{\left(1-\tau_{R}^{1}\right)}\left(t_{0}+t_{1}\right)+\sqrt{\left(1-\tau_{L}^{1}\right)}\left(r_{0}+r_{1}\right)\\
a_{2}^{'}=\sqrt{\tau_{R}^{1}}\left(t_{0}+t_{1}\right)+\tau_{L}^{1}\left(r_{0}+r_{1}\right)\\
a_{3}^{'}=\sqrt{\left(1-\tau_{R}^{1}\right)}\left(r_{0}+r_{1}\right)+\sqrt{\left(1-\tau_{L}^{1}\right)}\left(t_{0}+t_{1}\right)\\
a_{4}^{'}=\sqrt{\tau_{R}^{1}}\left(r_{0}+r_{1}\right)+\sqrt{\tau_{L}^{1}}\left(t_{0}+t_{1}\right)\\
a_{1}^{"}=\sqrt{\left(1-\tau_{R}^{1}\right)}\left(t_{0}-t_{1}\right)+\sqrt{\left(1-\tau_{L}^{1}\right)}\left(r_{0}-r_{1}\right)\\
a_{2}^{"}=\sqrt{\tau_{R}^{1}}\left(t_{1}-t_{0}\right)+\tau_{L}^{1}\left(r_{1}+r_{0}\right)\\
a_{3}^{"}=\sqrt{\left(1-\tau_{R}^{1}\right)}\left(r_{0}-r_{1}\right)+\sqrt{\left(1-\tau_{L}^{1}\right)}\left(t_{0}-t_{1}\right)\\
a_{4}^{"}=\sqrt{\tau_{R}^{1}}\left(r_{1}+r_{0}\right)+\sqrt{\tau_{L}^{1}}\left(t_{1}-t_{0}\right)\\
\delta^{'}=t_{1}\tau_{R}^{1}-t_{0}\left(1-\tau_{R}^{1}\right)\\
\gamma^{'}=r_{1}\sqrt{\tau_{R}^{1}\tau_{L}^{1}}-r_{0}\sqrt{\left(1-\tau_{R}^{1}\right)\left(1-\tau_{L}^{1}\right)}\\
\delta^{"}=t_{1}\left(1-\tau_{R}^{1}\right)-t_{0}\tau_{R}^{1}\\
\gamma^{"}=r_{1}\sqrt{\left(1-\tau_{R}^{1}\right)\left(1-\tau_{L}^{1}\right)}-r_{0}\sqrt{\tau_{R}^{1}\tau_{L}^{1}}
\end{array}\label{eq:6}$$ ]{}
and $\left\{ \eta_{i}\right\} _{2\leq i\leq8}$ have all the same form of equation \[eq:6\].
It is worth highlighting that $\Theta=\left(-1\right)\times\sqrt{\left(1-\tau_{L}^{2}\right)\left(1-\tau_{L}^{3}\right)\left(1-\tau_{R}^{4}\right)}$ is the operator that eliminates the $\left(-\right)$ sign for the CNOT entangled with $\left|\uparrow_{s}\right\rangle $; this operator appears only in $\eta_{3}$ and $\eta_{4}$ of equation \[eq:5\]. This is the main contribution of this work, since the CNOT function is now correctly entangled with both $\left|\uparrow_{s}\right\rangle $ and $\left|\downarrow_{s}\right\rangle $ spin states. To measure the performance of our optimized CNOT, we refer to the fidelity denoted $F_{CNOT}$ and given by [@Wang_2013]:
[ $$F_{CNOT}=\left\langle \overline{\Psi_{in}|U_{CNOT}^{\dagger}\rho_{t}U_{CNOT}|\Psi_{in}}\right\rangle \label{eq:9}$$ ]{}
where the upper line indicates that the fidelity is obtained by averaging over the four possible input states $\left|\Psi_{in}\right\rangle $, $U_{CNOT}$ is the ideal CNOT transform, and $\rho_{t}=\left|\Psi_{out}\right\rangle \left\langle \Psi_{out}\right|$, where $\left|\Psi_{out}\right\rangle $ is the state at the output of the CNOT for the specific input $\left|\Psi_{in}\right\rangle $.
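As a short numerical sketch of this averaged fidelity (not the simulation code used in this work), one can average $\left|\left\langle \Psi_{in}\right|U_{CNOT}^{\dagger}\left|\Psi_{out}\right\rangle \right|^{2}$ over the four basis inputs; the small rotation applied to the target qubit below is an invented stand-in for a device imperfection:

```python
import numpy as np

# Sketch of equation [eq:9]: average gate fidelity over the four basis inputs.
# The small target-qubit rotation used as the "actual" gate is an invented
# stand-in for the device imperfections studied in the text.
U_cnot = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0]], dtype=complex)

def avg_fidelity(U_actual):
    basis = np.eye(4, dtype=complex)
    fid = 0.0
    for psi in basis.T:
        ideal = U_cnot @ psi              # U_CNOT |Psi_in>
        out = U_actual @ psi              # |Psi_out>
        fid += abs(np.vdot(ideal, out))**2
    return fid / 4.0

eps = 0.1                                 # small unitary error on the target
R = np.array([[np.cos(eps), -np.sin(eps)],
              [np.sin(eps),  np.cos(eps)]])
U_noisy = np.kron(np.eye(2), R) @ U_cnot

print(avg_fidelity(U_cnot))               # 1.0 for the ideal gate
print(avg_fidelity(U_noisy))              # cos(eps)^2 ~ 0.990
```

The noisy gate loses fidelity quadratically in the error angle, which is why percent-level component errors already cost several percent of fidelity in the simulations above.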
Simulation results and experimental challenges
==============================================
A first simulation concerns only the originally proposed CNOT, where we study the impact of the errors of HWP1, HWP2 and CPBS1. To this end, we consider perfect SW1 and SW2 [$\left(T_{1,2}^{1}=T_{1,2}^{2}=R_{2,2}^{1}=R_{1,1}^{2}=1\right)$]{}, and we vary all errors around a realistic range of $10^{-2}$.
We illustrate in figures \[fig:3a\] and \[fig:3b\], the average fidelities for spin $\left|\uparrow_{s}\right\rangle $ and $\left|\downarrow_{s}\right\rangle $ states, denoted $\overline{F}_{CNOT}^{\uparrow}$ and $\overline{F}_{CNOT}^{\downarrow}$, versus the normalized coupling strength. Here we have set $\gamma=0.1\kappa$.
In [@Wang_2013], errors due to HWP1, HWP2 and CPBS1 were neglected and it was shown that the CNOT provides a best $\overline{F}_{CNOT}^{\uparrow}$ or $\overline{F}_{CNOT}^{\downarrow}$ value around 93.74% in the strong coupling regime (obtained for $g>\left(\kappa_{s}+\kappa\right)/4$), and 32.34% in the weak coupling regime (obtained when $g<\left(\kappa_{s}+\kappa\right)/4$). If we consider the same parameters used for the strong coupling ($\kappa_{s}=0.05\kappa$ and $g=2.5\kappa$) and weak coupling ($\kappa_{s}=1.0\kappa$ and $g=0.45\kappa$), our simulation shows that the errors degrade the fidelities $\overline{F}_{CNOT}^{\uparrow}$ and $\overline{F}_{CNOT}^{\downarrow}$, and we obtain best values around 87.89% and 30.02%, respectively.
Another simulation concerns our optimized model while taking into consideration realistic features of all devices. In this case, we consider SW1 and SW2 realized according to [@Shomroni_2014]. For SW1, being initially in the state $m_{F}=+1$, the experimental results obtained are $T_{1,2}^{1}=89.9\%$ and $R_{2,2}^{1}=65\%$. For SW2, being initially in the ground state $m_{F}=-1$, the obtained coefficients are $T_{1,2}^{2}=95.6\%$ and $R_{1,1}^{2}=64.8\%$. We consider the universal cloning of polarization states as experimentally realized in [@Fasel_2002], providing $F_{cloner}=0.82$. We consider arbitrary errors around $10^{-2}$ affecting separately all devices of the CNOT of figure \[fig:2\] (except QWP1, QWP2 and HWP3). With these assumptions, we show in figure \[fig:4a\] the average fidelity of the CNOT function being correctly entangled with both $\left|\uparrow_{s}\right\rangle $ and $\left|\downarrow_{s}\right\rangle $, denoted $\overline{F}_{CNOT}^{\uparrow\downarrow}$. The best value according to figure \[fig:4a\] in the strong coupling regime is $\overline{F}_{CNOT}^{\uparrow\downarrow}=26.27\%$.
The value of $\overline{F}_{CNOT}^{\uparrow\downarrow}$ is highly sensitive to the fidelity of the cloner and to the imperfections of SW1 and SW2. We have therefore considered only the strong coupling regime and the optimal cloner with $F_{UC}=5/6$. We denote by $E_{rr}$ the set of all errors affecting the CNOT components and vary them in $\left[10^{-4}..10^{-1}\right]$. We consider also the same range of errors separately altering the coefficients $T_{1,2}^{1}$, $T_{1,2}^{2}$, $R_{2,2}^{1}$ and $R_{1,1}^{2}$; therefore, we denote $P_{SW}\simeq T_{1,2}^{1}\simeq T_{1,2}^{2}\simeq R_{2,2}^{1}\simeq R_{1,1}^{2}$, and we illustrate $\overline{F}_{CNOT}^{\uparrow\downarrow}$ depending on $E_{rr}$ and $P_{SW}$ in figure \[fig:4b\]. The best fidelity permitted by our CNOT, for the lowest error range and $P_{SW}$ approaching unity, is $\overline{F}_{CNOT}^{\uparrow\downarrow}=78\%$. This fidelity is very close to $F_{UC}$, and our optimized CNOT is very advantageous since neither a measurement of the electron spin state nor any extra treatment is required to use the CNOT outputs as inputs for other circuits.
Conclusion
==========
We propose a quantum CNOT gate that overcomes the inefficiencies of a previously published CNOT design based on a quantum-dot system. The previous proposal provides a fidelity of 94% but operates in a heralded fashion, which means that the correct CNOT is performed with only 47% success probability. Our CNOT is not heralded by any spin state and provides a success probability of 78%. This result relies on the use of the quantum cloner; a cloner with better fidelity, approaching the optimal limit of 5/6, will lead to a further improved CNOT and will allow a possible generalization to all $C^{n}NOT$ photonic gates.
---
abstract: '**Majorana fermions, quantum particles that are their own anti-particles, are not only of fundamental importance in elementary particle physics and dark matter, but also building blocks for fault-tolerant quantum computation. Recently Majorana fermions have been intensively studied in solid state and cold atomic systems. These studies are generally based on superconducting pairing with zero total momentum. On the other hand, finite total momentum Cooper pairings, known as Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) states, were widely studied in many branches of physics. However, whether FFLO superconductors can support Majorana fermions has not been explored. Here we show that Majorana fermions can exist in certain types of gapped FFLO states, yielding a new quantum matter: topological FFLO superfluids/superconductors. We demonstrate the existence of such topological FFLO superfluids and the associated Majorana fermions using spin-orbit coupled degenerate Fermi gases and derive their parameter regions. The implementation of topological FFLO superconductors in semiconductor/superconductor heterostructures are also discussed.**'
author:
- 'Chunlei Qu$^{1}$'
- 'Zhen Zheng$^{2}$'
- 'Ming Gong$^{3}$'
- 'Yong Xu$^{1}$'
- 'Li Mao$^{1}$'
- 'Xubo Zou$^{2}$'
- 'Guangcan Guo$^{2}$'
- 'Chuanwei Zhang$^{1}$'
title: Topological Superfluids with Finite Momentum Pairing and Majorana Fermions
---
[^1]
[^2]
[^3]
[^4]
[^5]
Topological superconductors and superfluids are exotic quantum matters that host topologically protected excitations, such as robust edge modes and Majorana Fermions (MFs) with non-Abelian exchange statistics [@Wilczek]. MFs are important not only because of their fundamental role in elementary particle physics and dark matter [@Hisano], but also because of their potential applications in fault-tolerant topological quantum computation [@TQC]. Recently some exotic systems, such as $\nu =5/2$ fractional quantum Hall states [@TQC], chiral *p*-wave superconductors/superfluids [@TQC], heterostructures composed of $s$-wave superconductors and semiconductor nanowires (nanofilms) or topological insulators [@Fu; @JSau; @Roman; @Oreg; @Alicea; @Lee; @Mao], etc., have been proposed as systems supporting MFs. Following the theoretical proposals, exciting experimental progress toward the observation of MFs has been made recently in semiconductor [@Mourik; @Deng; @Das; @Rokhinson] or topological insulator heterostructures [@YCui], although unambiguous experimental evidence for MFs is still lacking.
These theoretical and experimental studies are based on superconducting Cooper pairing ($s$-wave or chiral $p$-wave) with zero total momentum, that is, the pairing is between two fermions with opposite momenta $\mathbf{k}$ and $\mathbf{-k}$ (denoted as BCS pairing hereafter). On the other hand, superconducting pairing can also occur between fermions with finite total momenta (pairing between $\mathbf{k}$ and $\mathbf{-k+Q}$) in the presence of a Zeeman field, leading to spatially modulated superconducting order parameters in real space, known as FFLO states. The FFLO states were first predicted in the 1960s [@FF64; @LO64], and are now a central concept for understanding exotic phenomena in many different systems [@FFLOreview; @FFLO2; @FFLO3; @FFLO1; @Parish; @HuPRA]. A natural question to ask is whether MFs can also exist in a FFLO superconductor or superfluid.
In this Letter, we propose that FFLO superconductors/superfluids may support MFs if they possess two crucial elements: gapped bulk quasi-particle excitations and a nontrivial Fermi surface topology. These new quantum states are topological FFLO superconductors/superfluids. In this context, traditional gapless FFLO states induced by a large Zeeman field do not fall into this category. Here we propose a possible platform for the realization of topological FFLO superfluids using two-dimensional (2D) or one-dimensional (1D) spin-orbit (SO) coupled degenerate Fermi gases subject to in-plane and out-of-plane Zeeman fields. Recently, SO coupling and Zeeman fields for cold atoms have been realized in experiments [@Ian; @Pan; @Peter; @Zhang; @Martin], which provide a completely new avenue for studying topological superfluid physics. It is known that SO coupled degenerate Fermi gases with an out-of-plane Zeeman field support MFs with zero total momentum pairing [@CW; @Jiang; @Gong; @Melo]. We find that in suitable parameter regions the in-plane Zeeman field can induce finite total momentum pairing [@Zheng; @WYi; @Hui; @Lee2], while still keeping the superfluid gapped and preserving its Fermi surface topology. The region for topological FFLO superfluids depends not only on the chemical potential and pairing strength, but also on the SO coupling strength, the total momentum and effective mass of the Cooper pair, and the orientation and magnitude of the Zeeman field, which greatly increases the tunability in experiments. Finally, the potential implementation of the proposal in semiconductor/superconductor heterostructures is also discussed.
[**Results**]{}
**System and Hamiltonian**: Consider a SO coupled Fermi gas in the $xy$ plane with the effective Hamiltonian $$H=\sum_{\mathbf{k}\sigma \sigma ^{\prime }}c_{\mathbf{k},\sigma }^{\dagger }H_{0}^{\sigma \sigma ^{\prime }}c_{\mathbf{k},\sigma ^{\prime }}+V_{\text{int}} \label{eq-H}$$ where $H_{0}=\frac{\mathbf{k}^{2}}{2m}-\mu +\alpha \mathbf{k}\times \vec{\sigma}\cdot \hat{e}_{z}-\mathbf{h}\cdot \vec{\sigma}$, $\mathbf{k}=(k_{x},k_{y})$, $\alpha$ is the Rashba SO coupling strength, $\mathbf{h}=(h_{x},0,h_{z})$ is the Zeeman field, and $\vec{\sigma}$ is the vector of Pauli matrices. $V_{\text{int}}=g\sum c_{\mathbf{k}_{1},\uparrow }^{\dagger }c_{\mathbf{k}_{2},\downarrow }^{\dagger }c_{\mathbf{k}_{3},\downarrow }c_{\mathbf{k}_{4},\uparrow }$ describes the $s$-wave scattering interaction, where $g=-(\sum_{\mathbf{k}}\left( \mathbf{k}^{2}/m+E_{b}\right) ^{-1})^{-1}$ is the scattering interaction strength, $E_{b}$ is the binding energy, and $\mathbf{k}_{1}+\mathbf{k}_{2}=\mathbf{k}_{3}+\mathbf{k}_{4}$ due to momentum conservation. Without the in-plane Zeeman field $h_{x}$, the Fermi surface is symmetric around $\mathbf{k}=0$, and the superfluid pairing is between atoms with opposite momenta $\mathbf{k}$ and $-\mathbf{k}$. With both $h_{x}$ and SO coupling, the Fermi surface becomes asymmetric along the $y$ direction (see Fig. \[fig-FS\]a), and the pairing can occur between atoms with momenta $\mathbf{k}$ and $-\mathbf{k}+\mathbf{Q}$. In real space, such a finite total momentum pairing leads to a FF-type order parameter $\Delta (\mathbf{x})=\Delta e^{i\mathbf{Q}\cdot \mathbf{x}}$, where $\mathbf{Q}=(0,Q_{y})$ is parallel to the deformation direction of the Fermi surface [@Zheng; @WYi; @Hui]. Notice that the energies of the superfluids with total momentum $\mathbf{Q}$ and $\mathbf{-Q}$ are nondegenerate; therefore the FF phase with a single $\mathbf{Q}$, instead of the LO phase ($\Delta (\mathbf{x})=\Delta \cos (\mathbf{Q}\cdot \mathbf{x})$) where pairing occurs at both $\pm \mathbf{Q}$, is considered here. Hereafter, if not specified, FFLO superfluids refer to FF superfluids.
The dynamics of the system can be described by the following Bogoliubov-de Gennes (BdG) Hamiltonian at the mean-field level, $$H_{\text{BdG}}(\mathbf{k})=\begin{pmatrix} H_{0}(\frac{\mathbf{Q}}{2}+\mathbf{k}) & \Delta \\ \Delta & -\sigma _{y}H_{0}^{\ast }(\frac{\mathbf{Q}}{2}-\mathbf{k})\sigma _{y} \end{pmatrix}, \label{eq-bdg}$$ where the Nambu basis is chosen as $(c_{\mathbf{k}+\mathbf{Q}/2,\uparrow },c_{\mathbf{k}+\mathbf{Q}/2,\downarrow },c_{-\mathbf{k}+\mathbf{Q}/2,\downarrow }^{\dagger },-c_{-\mathbf{k}+\mathbf{Q}/2,\uparrow }^{\dagger })^{T}$. The gap, number and momentum equations are solved self-consistently to obtain $\Delta $, $\mu $ and $\mathbf{Q}$ (see Methods), through which we determine the different phases.
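The structure of such a self-consistency loop can be sketched with a much simpler stand-in. The code below is only an illustration of the fixed-point iteration, not the actual Methods equations (which also solve the number and momentum equations for $\mu$ and $\mathbf{Q}$): it iterates the textbook $s$-wave gap equation $\Delta = g\sum_{\mathbf{k}}\Delta /(2E_{\mathbf{k}})$ on a momentum grid, with invented units, cutoff and target gap.

```python
import numpy as np

# Illustration of a gap-equation self-consistency loop (NOT the Methods
# equations of the text). The coupling g is chosen so that Delta = 0.5 is
# the exact solution on this grid; the iteration recovers it from a
# different starting value.
kc, n = 3.0, 201
k = np.linspace(-kc, kc, n)
KX, KY = np.meshgrid(k, k)
xi = KX**2 + KY**2 - 1.0                   # dispersion with mu = 1 (m = 1/2)
w = (k[1] - k[0])**2 / (2 * np.pi)**2      # momentum-grid measure

def rhs_sum(Delta):
    # sum_k 1 / (2 E_k) with E_k = sqrt(xi_k^2 + Delta^2)
    return np.sum(1.0 / (2.0 * np.sqrt(xi**2 + Delta**2))) * w

g = 1.0 / rhs_sum(0.5)      # fix g so that Delta = 0.5 solves the equation

Delta = 1.0                 # start the iteration away from the solution
for _ in range(300):
    Delta = g * Delta * rhs_sum(Delta)
print(Delta)                # converges back to 0.5
```

The full calculation in the text nests this loop with updates of $\mu$ and $\mathbf{Q}$ until all three equations are satisfied simultaneously.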
![**Single particle band structure and Berry curvature**. (a) Energy dispersion of the lower band. The green arrows represent the momenta of a Cooper pair of two atoms on the asymmetric Fermi surface. The red arrow represents the total finite momentum of the paring, which is along the deformation direction of the Fermi surface. (b) Berry curvature of the lower band, whose peak is shifted from the origin by $h_{x}/\protect\alpha $ along the $k_{y}$ direction. []{data-label="fig-FS"}](fig1.eps){width="3.2in"}
**Physical mechanism for topological FFLO phase**: Without SO coupling, the orientation of the Zeeman field does not induce any different physics due to the SU(2) symmetry. The presence of both $h_{x}$ and SO coupling breaks this SU(2) symmetry, leading to a Fermi surface without inversion symmetry, see Fig. \[fig-FS\]a. Here, $h_{x}$ deforms the Fermi surface, leading to FFLO Cooper pairings, while $h_{z}$ opens a gap between the two SO bands, making it possible for the chemical potential to cut a single Fermi surface for the topological FFLO phase. The Berry curvature of the lower band reads as $$\Omega _{\mathbf{k}}=\frac{\alpha ^{2}h_{z}}{2(\alpha ^{2}k_{x}^{2}+(\alpha k_{y}+h_{x})^{2}+h_{z}^{2})^{3/2}}.$$ Note that $h_{x}$ shifts the peak of the Berry curvature from $\mathbf{k}=0$ to $(0,-h_{x}/\alpha )$ (denoted by the arrow in Fig. \[fig-FS\]b). When atoms scatter from $\mathbf{k}$ to $\mathbf{k}^{\prime }$ on the Fermi surface, they pick up a Berry phase, whose accumulation around the Fermi surface is $\theta =\int d^{2}\mathbf{k}\,\Omega _{\mathbf{k}}\approx \pi $. Such a Berry phase modifies the effective interaction from $s$-wave ($V_{\mathbf{k}\mathbf{k}^{\prime }}\sim g$ is a constant) to $s$-wave plus asymmetric $p$-wave, $$V_{\mathbf{k}\mathbf{k}^{\prime }}\sim g\left( ke^{-i\theta _{\mathbf{k}}}+\frac{h_{x}}{\alpha }\right) \left( k^{\prime }e^{i\theta _{\mathbf{k}^{\prime }}}-\frac{h_{x}}{\alpha }\right)$$ on the Fermi surface. Here we recover the well-known chiral $p_{x}+ip_{y}$ pairing [@CW] in the limit $h_{x}=0$. The in-plane Zeeman field here creates an effective $s$-wave pairing component (although the system still hosts MFs), and the effective pairing is reminiscent of the ($s$+$p$)-wave pairing in some solid materials [@Yuan].
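The quoted Berry phase can be checked numerically. The sketch below (with illustrative parameter values) integrates $\Omega_{\mathbf{k}}$ over a large square window of the $k$ plane; the total tends to $\pi\,\text{sign}(h_{z})$, so a Fermi surface enclosing the shifted curvature peak accumulates a Berry phase close to $\pi$:

```python
import numpy as np

# Numerical check (illustrative parameters): the lower-band Berry curvature
# Omega_k integrates to pi * sign(h_z) over the whole k plane; the finite
# window cuts off a tail of order pi * h_z / L.
alpha, hx, hz = 1.0, 0.2, 0.5

def omega(kx, ky):
    return alpha**2 * hz / (2.0 * (alpha**2 * kx**2
                                   + (alpha * ky + hx)**2 + hz**2)**1.5)

L, N = 40.0, 1501                      # integration window and grid
k = np.linspace(-L, L, N)
KX, KY = np.meshgrid(k, k)
dk = k[1] - k[0]
theta = np.sum(omega(KX, KY)) * dk**2  # Riemann sum for the Berry phase
print(theta / np.pi)                   # close to 1 (up to the cut-off tail)
```

The peak of the integrand sits at $(0,-h_{x}/\alpha)$, consistent with the shift shown in Fig. \[fig-FS\]b.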
**Parameter region for MFs**: The BdG Hamiltonian (\[eq-bdg\]) satisfies the particle-hole symmetry $\Xi =\Lambda \mathcal{K}$, where $\Lambda =i\sigma _{y}\tau _{y}$, $\mathcal{K}$ is the complex conjugate operator, and $\Xi ^{2}=1$. The parameter region for the MFs is determined by the topological index $\mathcal{M}=\text{sign}(\text{Pf}\{\Gamma \})$, where Pf is the Pfaffian of the skew matrix $\Gamma =H_{\text{BdG}}(0)\Lambda $. $\mathcal{M}=-1$ ($+1$) corresponds to the topologically nontrivial (trivial) phase [@Parag]. The topological phase exists when $$h_{z}^{2}+\bar{h}_{x}^{2}>\bar{\mu}^{2}+\Delta ^{2}\text{,}\quad \alpha h_{z}\Delta \neq 0\text{,}\quad E_{g}>0, \label{eq-parameter}$$ where $\bar{h}_{x}=h_{x}+\alpha Q_{y}/2$ and $\bar{\mu}=\mu -Q_{y}^{2}/8m$. $E_{g}=\text{min}(E_{\mathbf{k},s})$ defines the bulk quasi-particle excitation gap of the system, with $E_{\mathbf{k},s}$ the particle branches of the BdG Hamiltonian (\[eq-bdg\]). The first condition reduces to the well-known $h_{z}^{2}>\Delta ^{2}+\mu ^{2}$ in BCS topological superfluids [@Mourik; @Deng; @Das; @Rokhinson; @Roman; @Oreg; @Gong]. The last condition ensures that the bulk quasi-particle excitations are gapped to protect the zero energy MFs in the topological regime. The SO coupling and the FFLO vector shift the effective in-plane Zeeman field and the chemical potential. In contrast, in BCS topological superfluids, the SO coupling strength, although required, does not determine the topological boundaries. Our system therefore provides more knobs for tuning the topological phase transition. To further verify condition (\[eq-parameter\]), we calculate the Chern number in the hole branches, $\mathcal{C}=\sum_{n}\mathcal{C}_{n}$, in the gapped superfluids [@Parag], and confirm $\mathcal{C}=+1$ when Eq. (\[eq-parameter\]) is satisfied and $\mathcal{C}=0$ otherwise. Here $\mathcal{C}_{n}=\frac{1}{2\pi }\int d^{2}k\,\Gamma _{n}$ is the Chern number, $\Gamma _{n}=-2\,\text{Im}\left\langle \frac{\partial \Psi _{n}}{\partial k_{x}}|\frac{\partial \Psi _{n}}{\partial k_{y}}\right\rangle $ is the Berry curvature [@Xiao], and $\left\vert \Psi _{n}\right\rangle $ is the eigenstate of the two hole bands of the BdG Hamiltonian (\[eq-bdg\]).
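The role of the shifted quantities $\bar{h}_{x}$ and $\bar{\mu}$ in the first condition can be verified directly at $\mathbf{k}=0$. The sketch below (illustrative parameter values, $m=1$) builds the $\mathbf{k}=0$ block of Hamiltonian (\[eq-bdg\]); its spectrum is $\pm\sqrt{\bar{\mu}^{2}+\Delta^{2}}\pm\sqrt{\bar{h}_{x}^{2}+h_{z}^{2}}$, so the $\mathbf{k}=0$ gap closes exactly on the boundary $h_{z}^{2}+\bar{h}_{x}^{2}=\bar{\mu}^{2}+\Delta^{2}$:

```python
import numpy as np

# Sketch (illustrative parameters, m = 1): the k = 0 block of the BdG
# Hamiltonian. The ordering convention of the Nambu (tau) and spin (sigma)
# factors does not affect the spectrum.
s0 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
tx, tz = sx, sz                      # Pauli matrices in Nambu (tau) space

def bdg_k0(mu, Delta, hx, hz, alpha, Qy, m=1.0):
    hx_bar = hx + alpha * Qy / 2     # shifted in-plane Zeeman field
    mu_bar = mu - Qy**2 / (8 * m)    # shifted chemical potential
    H = (-mu_bar * np.kron(tz, s0) + Delta * np.kron(tx, s0)
         - hx_bar * np.kron(s0, sx) - hz * np.kron(s0, sz))
    return H, hx_bar, mu_bar

mu, Delta, hx, hz, alpha, Qy = 0.5, 0.4, 0.6, 0.5, 1.0, 0.3
H, hx_bar, mu_bar = bdg_k0(mu, Delta, hx, hz, alpha, Qy)
gap_k0 = np.min(np.abs(np.linalg.eigvalsh(H)))
predicted = abs(np.sqrt(mu_bar**2 + Delta**2) - np.sqrt(hx_bar**2 + hz**2))
print(gap_k0, predicted)                         # the two agree
print(hz**2 + hx_bar**2 > mu_bar**2 + Delta**2)  # first condition holds here
```

The remaining two conditions in Eq. (\[eq-parameter\]) cannot be read off at a single $\mathbf{k}$ point; they require the full spectrum.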
![**The order parameter $\Delta $, chemical potential $\protect\mu $, bulk quasi-particle gap $E_{g}$, and FFLO vector $Q_{y}$ as a function of Zeeman fields**. In (b) and (d), the dashed lines are the best fits with quadratic and linear functions in the small Zeeman field regime, respectively. In (a) and (b), $h_{x}=0.2E_{F}$, while in (c)-(d), $h_{z}=0.2E_{F}$. Other parameters are $E_{b}=0.4E_{F}$, $\protect\alpha K_{F}=1.0E_{F}$. The vertical lines mark the points where the Pfaffian changes sign.[]{data-label="fig-Delta"}](becbcs.eps){width="3in"}
The transition from non-topological to topological phases defined by Eq. (\[eq-parameter\]) can be better understood by observing the closing and reopening of the excitation gap $E_{g}$, which is necessary to change the topology of the Fermi surface. In Fig. \[fig-Delta\], we plot the change of $E_{g}$ along with the order parameter $|\Delta |$, the chemical potential $\mu $, and the FF vector $\mathbf{Q}$ as a function of Zeeman fields. For a fixed $h_{x}$ but increasing $h_{z}$, $E_{g}$ may first close and then reopen (Fig. \[fig-Delta\]a), signalling the transition from non-topological to topological gapped FFLO superfluids ($Q_{y}$ is finite for all $h_{z}$, see Fig. \[fig-Delta\]b). For a fixed $h_{z}$, the superfluid is gapped and $Q_{y}\propto h_{x}$ for small $h_{x}$ (see Fig. \[fig-Delta\]d); thus any small $h_{x}$ can transfer the gapped BCS superfluids at $h_{x}=0$ to FFLO superfluids. However, such a small $h_{x}$ does not destroy the bulk gap of the BCS superfluids (topological or non-topological), making gapped topological FFLO superfluids possible when the system is initially in the topological BCS superfluid phase without $h_{x}$. With increasing $h_{x}$ (Fig. \[fig-Delta\]c), $E_{g}$ may first close but does not reopen immediately, signalling the transition from gapped FFLO superfluids to gapless FFLO superfluids. For a small $h_{z}=0.2E_{F}$, further increasing $h_{x}$ to $\sim 0.78E_{F}$, $E_{g}$ reopens again (Fig. \[fig-Delta\]c), signalling the transition from gapless FFLO to gapped topological FFLO superfluids. In this regime, $Q_{y}\sim 0.6K_{F}$, which is not small. For a strong enough Zeeman field, the pairing may be destroyed and the system becomes a normal gas.
The complete phase diagrams are presented in Fig. \[fig-Phases\]. Since $Q_{y}$ and $h_{x}$ have the same sign, the phase diagram shows perfect symmetry in the $h_{x}-h_{z}$ plane. The BCS superfluids can only be observed at $h_{x}=0$, hence are not depicted. With increasing SO coupling strength, the topological FFLO phase is greatly enlarged through expansion into the normal gas phase. For a small SO coupling (Fig. \[fig-Phases\]a), a finite $h_{z}$ is always required to create the topological FFLO phase. In the intermediate regime (Fig. \[fig-Phases\]b) we find an interesting parameter regime where the topological FFLO phase can be reached with an extremely small $h_{z}$ around $h_{x}\sim 0.8E_{F}$. However, the topological FFLO phase can never be observed at $h_{z}=0$, as analyzed before from the Berry curvature and Chern number. From Fig. \[fig-Phases\]a-b we see that the topological gapped FFLO phase can be mathematically regarded as an adiabatic deformation of the topological BCS superfluids by an in-plane Zeeman field, although their physical meanings are totally different. In Fig. \[fig-Phases\]c-d, we see that the gapless FFLO phase can be observed at small binding energy and small $h_{z}$, while for large enough binding energy, the system can be in either the topological or the non-topological gapped phase. In this regime, $E_{g}\sim \sqrt{\mu ^{2}+\Delta ^{2}}-\sqrt{h_{x}^{2}+h_{z}^{2}}$, where $\mu \sim E_{F}-E_{b}/2$ and $\Delta ^{2}\sim 2E_{F}E_{b}$; thus $h_{z}\propto E_{b}$ is required to close and reopen $E_{g}$ (see Fig. \[fig-Phases\]c-d).
![(Color online). **Phase diagram of the FFLO superfluid**. The phases are labelled with different colors: topological gapped FFLO superfluid (red), non-topological gapped FFLO superfluid (yellow), gapless FFLO superfluid (blue) and normal gas (white). Other parameters are: (a) $E_{b}=0.4E_{F}$, $\protect\alpha k_{F}=0.5E_{F}$; (b) $E_{b}=0.4E_{F}$, $\protect\alpha k_{F}=1.0E_{F}$; (c) $h_{x}=0.5E_{F}$, $\protect\alpha k_{F}=0.5E_{F}$; (d) $h_{x}=0.5E_{F}$, $\protect\alpha k_{F}=1.0E_{F}$. The symbols in each panel are the tricritical points.[]{data-label="fig-Phases"}](phase_diagram.eps){width="3in"}
The tricritical points marked by symbols in Fig. \[fig-Phases\] are essential for understanding the basic structure of the phase diagram. Along the $h_{z}$ axis, the system only supports gapped BCS superfluids (topological or non-topological) and the normal gas [@Gong], while along the $h_{x}$ axis the system only supports trivial FFLO superfluids and the normal gas [@Zheng; @WYi; @Hui]. An adiabatic connection between the topological BCS superfluids and the trivial FFLO phases is therefore impossible, and there must be points separating the different phases, which are exactly the tricritical points. In our model the transition between different phases is a first-order process. The existence of the tricritical points here is in stark contrast to the tricritical point at finite temperature in the same system without SO coupling, which arises from the accidental intersection of first- and second-order transition lines [@Parish]. Therefore the tricritical points in Fig. \[fig-Phases\] cannot be removed, although their specific positions vary with the system parameters.
**Chiral edge modes**: The topological FFLO superfluids support exotic chiral edge modes. To see the basic features more clearly, we consider the same model on a square lattice with the following tight-binding Hamiltonian, $$H_{\text{L}}=H_{0}+H_{\text{Z}}+H_{\text{so}}+V_{\text{int}}, \label{eq-TB}$$ where $H_{0}=-t\sum_{\langle i,j\rangle ,\sigma }c_{i\sigma }^{\dagger }c_{j\sigma }-\mu \sum_{i\sigma }n_{i\sigma }$, $H_{\text{Z}}=-h_{x}\sum_{i}(c_{i\uparrow }^{\dagger }c_{i\downarrow }+c_{i\downarrow }^{\dagger }c_{i\uparrow })-h_{z}\sum_{i}(n_{i\uparrow }-n_{i\downarrow })$, $H_{\text{so}}=-\frac{\alpha }{2}\sum_{i}(c_{i-\hat{x}\downarrow }^{\dagger }c_{i\uparrow }-c_{i+\hat{x}\downarrow }^{\dagger }c_{i\uparrow }+ic_{i-\hat{y}\downarrow }^{\dagger }c_{i\uparrow }-ic_{i+\hat{y}\downarrow }^{\dagger }c_{i\uparrow }+\text{H.c.})$, and $V_{\text{int}}=-U\sum_{i}n_{i\uparrow }n_{i\downarrow }=\sum_{i}\Delta _{i}^{\ast }c_{i\downarrow }c_{i\uparrow }+\Delta _{i}c_{i\uparrow }^{\dagger }c_{i\downarrow }^{\dagger }-|\Delta _{i}|^{2}/U$, with $\Delta _{i}=-U\langle c_{i\downarrow }c_{i\uparrow }\rangle $ and $n_{i\sigma }=c_{i\sigma }^{\dagger }c_{i\sigma }$. Here $c_{i\sigma }$ denotes the annihilation operator of a fermionic atom with spin $\sigma $ at site $i=(i_{x},i_{y})$. Hereafter, we use $t=1$ as the basic energy unit. For more details, see Methods.
![**Chiral edge states of topological FFLO phases in a 2D strip**. The strip is along the $x$ direction (a) and the $y$ direction (b). The parameters are $Q_{y}=-0.25$, $\protect\mu =-4t$, $\protect\alpha =2.0t$, $\Delta =1.0t$, $h_{z}=-1.2t$, $h_{x}=-0.3t$.[]{data-label="fig-edgestate"}](chiralstate.eps){width="3in"}
In the following, we only present the chiral edge states in the topological gapped FFLO superfluid regime, and assume $\Delta _{i}=\Delta e^{iQ_{y}i_{y}}$. We consider a 2D strip with width $W=200$, and the results for strips along the $x$ and $y$ directions in the topological FFLO phase are presented in Fig. \[fig-edgestate\]. The linear dispersion of the edge states reads as $$H_{\text{edge}}=\sum_{k}v_{L}\psi _{kL}^{\dagger }k\psi _{kL}-v_{R}\psi _{kR}^{\dagger }k\psi _{kR},$$ where $L$ and $R$ denote the left and right edges of the strip, and $v_{L}$ and $v_{R}$ are the corresponding velocities. We have also confirmed that the wavefunctions of the edge states are well localized at the two edges. For a strip along the $x$ direction, the particle-hole symmetry as well as the discrete $\mathbb{Z}_{2}$ symmetry for $k_{x}\rightarrow -k_{x}$ ensure that the eigenenergies of Eq. (\[eq-TB\]) always come in pairs ($E_{k}$, $-E_{k}$), thus $v_{R}=v_{L}$. However, when the strip is along the $y$ direction (parallel to the FFLO momentum $\mathbf{Q}$), the eigenenergies no longer come in pairs, therefore $v_{R}\neq v_{L}$. The two chiral edge states with totally different velocities and densities of states represent the most remarkable feature of our model. The Chern number is $\mathcal{C}=1$ in our lattice model, thus only one pair of chiral edge states can be observed.
![**Majorana fermions in a 1D chain.** (a) The BdG quasi-particle excitation energies ($E_{2}$, $E_{1}$, -$E_{1}$, -$E_{2}$) and the order parameter; (b) The spatial profile of the FF type order parameter obtained self-consistently. (c) The wavefunction (WF) of the Majorana zero energy state $\left( U_{\uparrow },V_{\uparrow }\right) $ in the 1D chain. $\left( U_{\downarrow },V_{\downarrow }\right) $ is similar but with different amplitudes. The parameters are $\protect\alpha =2.0t$, $%
h_{x}=-0.5t$, $h_{z}=-1.2t$, $U=4.5t$, $\protect\mu =-2.25t$. []{data-label="fig-mf"}](Majorana1D.eps){width="3in"}
**MFs in 1D Chain**: Topological FFLO superfluid and associated MFs can also be observed in 1D SO coupled Fermi gas when the Hamiltonian (\[eq-TB\]) is restricted to 1D chain. In this case, the system is characterized by a $%
\mathbb{Z}_{2}$ invariant, which can be determined using a similar procedure as discussed above. The only difference is that now not only $k=0$ but also $k=\pi$ needs to be taken into account (see Methods). In Fig. \[fig-mf\]a, we see that a Majorana zero-energy state protected by a large gap ($\sim
0.3t$) emerges in a suitable parameter region. The superfluid order parameter (Fig. \[fig-mf\]b) has the FF form. The local Bogoliubov quasi-particle operator $\gamma (E_{n})=\sum_{i\sigma }u_{i\sigma
}^{n}c_{i\sigma }+v_{i\sigma }^{n}c_{i\sigma }^{\dagger }$, where the zero energy wavefunction $\left( u_{i\uparrow }^{0},u_{i\downarrow
}^{0},v_{i\uparrow }^{0},v_{i\downarrow }^{0}\right) =\left( U_{i\uparrow
}e^{i\phi _{i\uparrow }},U_{i\downarrow }e^{i\phi _{i\downarrow
}},V_{i\uparrow }e^{-i\phi _{i\uparrow }},V_{i\downarrow }e^{-i\phi
_{i\downarrow }}\right) $ satisfies $u_{i\sigma }^{0}=v_{i\sigma }^{0\ast }$ at the left edge and $u_{i\sigma }^{0}=-v_{i\sigma }^{0\ast }$ at the right edge (see Fig. \[fig-mf\]c). This state supports two local MFs at two edges, respectively [@Roman].
[**Discussion**]{}
Our proposed topological FFLO phase may also be realized using semiconductor/superconductor heterostructures. Recently, topological BCS superconductors and the associated MFs have been proposed in such heterostructures [@JSau; @Roman; @Oreg; @Alicea] and some preliminary experimental signatures have been observed [@Mourik; @Deng; @Das; @Rokhinson]. To realize a topological FFLO superconductor, the semiconductor should be in proximity contact with an FFLO superconductor, which introduces finite-momentum Cooper pairs. The topological parameter region defined in Eq. (5) still applies except that the order parameter, chemical potential and FFLO vector are external independent parameters. The flexibility of Eq. (5) makes it easier to tune to the topological region with MFs. Because the FFLO state can be sustained in the presence of a large magnetic field, it opens the possibility of using semiconductor nanowires with large spin-orbit coupling but small $g$-factors (e.g., GaSb, hole-doped InSb, etc.).
In summary, we propose that topological FFLO superfluids or superconductors with finite-momentum pairing can be realized using SO coupled $s$-wave superfluids subject to Zeeman fields, and that they support exotic quasi-particle excitations such as chiral edge modes and MFs. The transition to the topological phases depends explicitly on all physical quantities, including the SO coupling, chemical potential, Zeeman field and its orientation, pairing strength, FFLO vector $\mathbf{Q}$, and the effective mass of the Cooper pairs, which is very different from the topological BCS superfluids/superconductors that have been intensively studied recently. These new features not only provide more knobs for tuning topological phase transitions, but also greatly enrich our understanding of topological quantum matter. The topological FFLO phases have not been discussed before, and the phases unveiled in this Letter represent an entirely new class of quantum matter.
[**Methods**]{}
**Momentum space BdG equations**: The partition function at finite temperature $T$ is $Z=\int \mathcal{D}[\psi ,\psi ^{\dagger }]e^{-S[\psi
,\psi ^{\dagger }]}$, where $S[\psi ,\psi ^{\dagger }]=\int d\tau d\mathbf{r}%
\sum_{\sigma =\uparrow ,\downarrow }\psi _{\sigma }(\mathbf{x})^{\dagger
}\partial _{\tau }\psi _{\sigma }(\mathbf{x})+H$, with $H$ defined in Eq. \[eq-H\], and $V_{\text{int}}=g\psi _{\uparrow }^{\dagger }\psi
_{\downarrow }^{\dagger }\psi _{\downarrow }\psi _{\uparrow }$ in real space. The FFLO phase is defined as $g\langle \psi _{\downarrow }(\mathbf{x}%
)\psi _{\uparrow }(\mathbf{x})\rangle =\Delta e^{i\mathbf{Q}\cdot \mathbf{x}%
} $, where $\mathbf{Q}$ is the total momentum of the Cooper pairs and $%
\Delta $ is a spatially independent constant. Here the position dependent phase of $\Delta (\mathbf{x})$ can be gauged out by the transformation $\psi
_{\sigma }\rightarrow \psi _{\sigma }e^{i\mathbf{Q}\cdot \mathbf{x}/2}$. Integrating out the fermion field $\psi $ and $\psi ^{\dagger }$, we obtain $%
Z=\int \mathcal{D}\Delta e^{-S_{\text{eff}}}$, with effective action $S_{%
\text{eff}}=\int d{\tau }d\mathbf{r}{\frac{|\Delta |^{2}}{g}}-{\frac{1}{%
2\beta }}\ln \text{Det}\beta G^{-1}+\text{Tr}(H)$, where $\beta =1/T$, and $%
G^{-1}=\partial _{\tau }+H_{\text{BdG}}$. The order parameter, chemical potential and FFLO vector $\mathbf{Q}$ are determined self-consistently by solving the following equation set $${\frac{\partial S_{\text{eff}}}{\partial \Delta }}=0,\quad {\frac{\partial
S_{\text{eff}}}{\partial \mu }}=-\beta n,\quad {\frac{\partial S_{\text{eff}}%
}{\partial \mathbf{Q}}}=0.$$In our model the deformation of the Fermi surface is along the $y$ direction, thus we have $\mathbf{Q}=(0,Q_{y})$, and only three parameters need to be determined self-consistently. We determine the different quantum phases using the following criterion. When $E_{g}>0$, $\Delta \neq 0$, we have gapped FFLO phases ($\mathcal{M}=-1$ ($\mathcal{C}=+1$) for topological, and $\mathcal{M}=+1$ ($\mathcal{C}=0$) for non-topological). When there is a nodal line with $%
E_{g}=0 $ and $\Delta \neq 0$, we have gapless FFLO phases. When $\Delta =0$ (then $\mathbf{Q}=0$ is enforced), we get normal gas phases. It is still possible to observe gapless excitations in the gapless FFLO phase regime; however, we do not distinguish this special case because such excitations are not protected by a gap. In our numerics, the energy and momentum are scaled by the Fermi energy $E_{F}$ and its corresponding momentum $%
K_{F}$ in the case without SO coupling and Zeeman fields. The results in Fig. \[fig-Delta\] and Fig. \[fig-Phases\] are determined at $%
n=K_{F}^{2}/2\pi $ and $T=0$.
**Real space BdG equations**: In the tight-binding model of (\[eq-TB\]), the many-body interaction is decoupled in the mean-field approximation. The particle number $n_{i\sigma }=c_{i\sigma }^{\dagger }c_{i\sigma }$ and superfluid pairing $\Delta _{i}=-U\langle c_{i\downarrow }c_{i\uparrow
}\rangle $ are determined self-consistently for a fixed chemical potential. Using the Bogoliubov transformation, we obtain the BdG equation $$\sum_{j}%
\begin{pmatrix}
H_{ij\uparrow } & \alpha _{ij} & 0 & \Delta _{ij} \\
-\alpha _{ij} & H_{ij\downarrow } & -\Delta _{ij} & 0 \\
0 & -\Delta _{ij}^{\ast } & -H_{ij\uparrow } & -\alpha _{ij} \\
\Delta _{ij}^{\ast } & 0 & \alpha _{ij} & -H_{ij\downarrow }%
\end{pmatrix}%
\begin{pmatrix}
u_{j\uparrow }^{n} \\
u_{j\downarrow }^{n} \\
-v_{j\uparrow }^{n} \\
v_{j\downarrow }^{n}%
\end{pmatrix}%
=E_{n}%
\begin{pmatrix}
u_{j\uparrow }^{n} \\
u_{j\downarrow }^{n} \\
-v_{j\uparrow }^{n} \\
v_{j\downarrow }^{n}%
\end{pmatrix}%
, \label{BdG}$$where $H_{ij\uparrow }=-t\delta _{i\pm 1,j}-(\mu +h_z)\delta _{ij}$, $%
H_{ij\downarrow }=-t\delta _{i\pm 1,j}-(\mu -h_z)\delta _{ij}$, $\alpha
_{ij}=\frac{1}{2}(j-i)\alpha \delta _{i\pm 1,j}-h_x \delta_{i,j}$, $%
\left\langle \hat{n}_{i\sigma }\right\rangle =\sum_{n=1}^{2N}[|u_{i\sigma
}|^{2}f(E_{n})+|v_{i\sigma }|^{2}f(-E_{n})]$, $\Delta _{ij}=-U\delta
_{ij}\sum_{n=1}^{2N}[u_{i\uparrow }^{n}v_{i\downarrow }^{n\ast
}f(E_{n})-u_{i\downarrow }^{n}v_{i\uparrow }^{n\ast }f(-E_{n})]$, with $%
f(E)=1/\left( 1+e^{E/T}\right) $. In the tight-binding model, the FF and LO phases emerge naturally; which one is realized depends crucially on the parameters of the system as well as the position of the chemical potential. The results in Fig. \[fig-edgestate\] and Fig. \[fig-mf\] are obtained at $T=0$.
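As a consistency check on the structure of Eq. (\[BdG\]), the sketch below (our illustration, not the authors' code) assembles the $4N\times 4N$ real-space BdG matrix for a short 1D chain with a uniform, non-self-consistent gap $\Delta_i=\Delta$ and, for simplicity, $h_x=0$; all parameter values are illustrative. The spectrum then comes in $(E_n,-E_n)$ pairs, as required by particle-hole symmetry.

```python
import numpy as np

def bdg_matrix_1d(N, t=1.0, mu=-2.0, hz=0.5, alpha=1.0, delta=0.4):
    """Assemble the 4N x 4N real-space BdG matrix of Eq. (BdG) for a 1D chain
    (basis u_up, u_dn, -v_up, v_dn), with a uniform gap Delta_i = delta
    instead of the self-consistent profile, and h_x = 0 for simplicity."""
    hop = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    Hup = -t * hop - (mu + hz) * np.eye(N)
    Hdn = -t * hop - (mu - hz) * np.eye(N)
    # alpha_ij = (1/2)(j - i) alpha delta_{i+-1,j}: antisymmetric SO hopping
    A = 0.5 * alpha * (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1))
    D = delta * np.eye(N)
    Z = np.zeros((N, N))
    return np.block([[Hup,  A,    Z,    D],
                     [-A,   Hdn, -D,    Z],
                     [Z,   -D,   -Hup, -A],
                     [D,    Z,    A,   -Hdn]])

H = bdg_matrix_1d(N=30)
E = np.linalg.eigvalsh(H)          # matrix is real symmetric for h_x = 0
assert np.allclose(E, -E[::-1])    # particle-hole symmetric spectrum
```

With a self-consistently determined $\Delta_i$ and $h_x\neq0$ one would instead iterate the gap equation quoted above; the pairing of levels is what the text uses to identify the Majorana zero mode.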
**Topological boundaries in lattice models:** To determine the topological phase transition conditions, we transform the tight-binding Hamiltonian to the momentum space in Eq. \[eq-bdg\]. Here $\xi _{\mathbf{k}%
}$ is replaced by $-2t\cos (k_{x})-2t\cos (k_{y})-\mu $ for the kinetic energy, and $k_{\alpha }$ by $\sin (k_{\alpha })$ for the SO coupling, where $\alpha =x,y$. The topological boundary conditions can still be determined by the Pfaffian of $\Gamma (\mathbf{K})=H_{\text{BdG}}(\mathbf{K})\Lambda $ at four nonequivalent points, $K_{1}=(0,0)$, $K_{2}=(0,\pi )$, $K_{3}=(\pi
,0)$, $K_{4}=(\pi ,\pi )$ when the system is gapped. At these special points, $\Gamma (\mathbf{k})$ is a skew-symmetric matrix. The topological phase is determined by $\mathcal{M}=\prod_{i=1}^{4}\text{sign}(\text{Pf}(\Gamma
(K_{i})))=-1$. For uniform BCS superfluids, the Pfaffian at $K_{2}$ and $%
K_{3}$ are identical, thus only $K_{1}$ and $K_{4}$ are essential to determine the topological boundaries. However, in our system, all four points affect the topological boundaries, and the exact expression of $%
\mathcal{M}$ is too complex to present here. In 1D chain, there are only two nonequivalent points at $K_{1}=0$ and $K_{2}=\pi $. We find $\text{Pf}%
(\Gamma (K_{1}))=\Delta ^{2}-(h_{z}-\mu -2t\cos (Q_{y}/2))(h_{z}+\mu +2t\cos
(Q_{y}/2))-(h_{x}+\alpha \sin (Q_{y}/2))^{2}$ , and $\text{Pf}(\Gamma
(K_{2}))=\Delta ^{2}-(h_{z}-\mu +2t\cos (Q_{y}/2))(h_{z}+\mu -2t\cos
(Q_{y}/2))-(h_{x}-\alpha \sin (Q_{y}/2))^{2}$. The topological index in the gapped regime is determined by $\text{sign}(\text{Pf}(\Gamma (K_{1})))\text{%
sign}(\text{Pf}(\Gamma (K_{2})))$.
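Since the two Pfaffians above are given in closed form, the 1D $\mathbb{Z}_2$ index can be evaluated directly. The sketch below uses illustrative parameters (not values from the figures); in the Kitaev-like limit $Q_y=h_x=0$ it reduces to the familiar criterion $h_z^2>\Delta^2+(\mu+2t)^2$ for $\mathcal{M}=-1$.

```python
from math import sin, cos, copysign

def z2_index_1d(t, mu, hz, hx, alpha, delta, Qy):
    """sign(Pf(Gamma(0))) * sign(Pf(Gamma(pi))) from the expressions in the
    text; -1 marks the topological regime (the gap is assumed open)."""
    c, s = 2 * t * cos(Qy / 2), sin(Qy / 2)
    pf0 = delta**2 - (hz - mu - c) * (hz + mu + c) - (hx + alpha * s)**2
    pfpi = delta**2 - (hz - mu + c) * (hz + mu - c) - (hx - alpha * s)**2
    return int(copysign(1, pf0) * copysign(1, pfpi))

# Kitaev-like limit Q_y = h_x = 0: topological iff hz^2 > delta^2 + (mu+2t)^2
assert z2_index_1d(t=1, mu=-2, hz=1.0, hx=0, alpha=2, delta=0.5, Qy=0) == -1
assert z2_index_1d(t=1, mu=-2, hz=0.0, hx=0, alpha=2, delta=0.5, Qy=0) == +1
```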
[99]{} Wilczek, F. Majorana returns. *Nature Phys*. **5,** 614-618 (2009).
Hisano, J., Matsumoto, S. & Nojiri, M. M., Explosive dark matter annihilation. *Phys. Rev. Lett.* **92,** 031303 (2004).
Nayak, C., Simon, S. H., Stern, A., Freedman, M. & Das Sarma, S. Non-abelian anyons and topological quantum computation. *Rev. Mod. Phys.* **80,** 1083 (2008).
Fu, L. & Kane, C. L., Superconducting proximity effect and Majorana fermions at the surface of a topological insulator. *Phys. Rev. Lett.* **100,** 096407 (2008).
Sau, J. D., Lutchyn, R. M., Tewari, S. & Das Sarma, S. Generic new platform for topological quantum computation using semiconductor heterostructures. *Phys. Rev. Lett.* **104,** 040502 (2010).
Lutchyn, R. M., Sau, J. D. & Das Sarma, S. Majorana fermions and a topological phase transition in semiconductor-superconductor heterostructures. *Phys. Rev. Lett.* **105,** 077001 (2010).
Oreg, Y., Refael, G. & Oppen, F. V. Helical liquids and Majorana bound states in quantum wires. *Phys. Rev. Lett.* **105,** 177002 (2010).
Alicea, J., Oreg, Y., Refael, G., Oppen, F. V. & Fisher, M. P. A. Non-Abelian statistics and topological quantum information processing in 1D wire networks. *Nature Phys.* **7,** 412-417 (2011).
Potter, A. C. & Lee, P. A. Multichannel generalization of Kitaev’s Majorana end states and a practical route to realize them in thin films. *Phys. Rev. Lett.* **105,** 227003 (2010).
Mao, L., Gong, M., Dumitrescu, E., Tewari, S. & Zhang, C. Hole-doped semiconductor nanowire on top of an $s$-wave superconductor: a new and experimentally accessible system for Majorana fermions. *Phys. Rev. Lett.* **108,** 177001 (2012).
Mourik, V. *et al.* Signatures of Majorana fermions in hybrid superconductor-semiconductor nanowire devices. *Science* **336,** 1003-1007 (2012).
Deng, M. T. *et al.* Anomalous zero-bias conductance peak in a Nb-InSb nanowire-Nb hybrid device. *Nano Lett.* **12,** 6414-6419 (2012).
Das, A. *et al.* Zero-bias peaks and splitting in an Al-InAs nanowire topological superconductor as a signature of Majorana fermions. *Nature Phys.* **8,** 887-895 (2012).
Rokhinson, L. P., Liu, X. & Furdyna, J. K. The fractional a.c. Josephson effect in a semiconductor-superconductor nanowire as a signature of Majorana particles. *Nature Phys.* **8,** 795-799 (2012).
Williams, J. R. *et al*. Unconventional Josephson effect in hybrid superconductor-topological insulator devices. *Phys. Rev. Lett.* **109,** 056803 (2012).
Fulde, P. & Ferrell, R. A. Superconductivity in a strong spin-exchange field. *Phys. Rev.* **135,** 550 (1964).
Larkin, A. I. & Ovchinnikov, Y. N. Nonuniform state of superconductors. *Zh. Eksp. Teor. Fiz.* **47,** 1136 (1964).
Casalbuoni, R. & Nardulli, G. Inhomogeneous superconductivity in condensed matter and QCD. *Rev. Mod. Phys.* **76,** 263 (2004).
Kenzelmann, M. *et al*. Coupled superconducting and magnetic order in CeCoIn$_5$. *Science* **321,** 1652-1654 (2008).
Li, L., Richter, C., Mannhart, J. & Ashoori, R. C. Coexistence of magnetic order and two-dimensional superconductivity at LaAlO$%
_3$/SrTiO$_3$ interfaces. *Nature Physics* **7,** 762-766 (2011).
Liao, Y.-A. *et al*. Spin-imbalance in a one-dimensional Fermi gas. *Nature* **467,** 567 (2010).
Hu, H. & Liu, X.-J. Mean-field phase diagram of imbalanced Fermi gases near a Feshbach resonance. *Phys. Rev. A* **73,** 051603(R) (2006).
Parish, M. M., Marchetti, F. M., Lamacraft, A. & Simons, B. D. Finite-temperature phase diagram of a polarized Fermi condensate. *Nature Phys.* **3,** 124-128 (2007).
Lin, Y.-J., Jiménez-García, K. & Spielman, I. B. Spin-orbit-coupled Bose-Einstein condensates. *Nature* **471,** 83-86 (2011).
Zhang, J.-Y. *et al*. Collective dipole oscillation of a spin-orbit coupled Bose-Einstein condensate. *Phys. Rev. Lett.* **109,** 115301 (2012).
Qu, C., Hamner, C., Gong, M., Zhang, C. & Engels, P. Non-equilibrium spin dynamics and Zitterbewegung in quenched spin-orbit coupled Bose-Einstein condensates. *ArXiv e-prints* (2013) http://arxiv.org/abs/1301.0658
Wang, P. *et al*. Spin-orbit coupled degenerate fermi gases. *Phys. Rev. Lett.* **109,** 095301 (2012).
Cheuk, L. W. *et al*. Spin-Injection spectroscopy of a spin-orbit coupled Fermi gas. *Phys. Rev. Lett.* **109,** 095302 (2012).
Zhang, C., Tewari, S., Lutchyn, R. M. & Das Sarma, S. $%
p_{x}+ip_{y}$ superfluid from $s$-wave interactions of fermionic cold atoms. *Phys. Rev. Lett.* **101,** 160401 (2008).
Jiang, L. *et al*. Majorana fermions in equilibrium and driven cold atom quantum wires. *Phys. Rev. Lett.* **106,** 220402 (2011).
Gong, M., Chen, G., Jia, S. & Zhang, C. Searching for Majorana fermions in 2D spin-orbit coupled fermi superfluids at finite temperature. *Phys. Rev. Lett.* **109,** 105302 (2012).
Seo, K., Han, L. & Sá de Melo, C.A.R. Who is the lord of the rings: Majorana, Dirac or Lifshitz? The spin-orbit-zeeman saga in ultra-cold fermions. *Phys. Rev. Lett.* **109,** 105303 (2012).
Zheng, Z., Gong, M., Zou, X., Zhang, C. & Guo, G. Route to observable Fulde-Ferrell-Larkin-Ovchinnikov phases in three-dimensional spin-orbit-coupled degenerate Fermi gases. *Phys. Rev. A* **87,** 031602(R) (2013).
Wu, F., Guo, G., Zhang, W. & Yi, W. Unconventional superfluid in a two-dimensional Fermi gas with anisotropic spin-orbit coupling and zeeman fields. *Phys. Rev. Lett.* **110,** 110401 (2013).
Liu, X.-J. & Hu, H. Inhomogeneous Fulde-Ferrell superfluidity in spin-orbit-coupled atomic Fermi gases. *Phys. Rev. A* **87,** 051608(R) (2013).
Michaeli, K., Potter, A. C. & Lee, P. A. Superconducting and ferromagnetic phases in SrTiO$_3$/LaAlO$_3$ oxide interface structures: possibility of finite momentum paring. *Phys. Rev. Lett.* **108,** 117003 (2012).
Yuan, H. Q. *et al*. S-wave spin-triplet order in superconductors without inversion symmetry: Li$_2$Pd$_3$B and Li$_2$Pt$_3$B. *Phys. Rev. Lett.* **97,** 017006 (2006).
Ghosh, P., Sau, J. D., Tewari, S. & Das Sarma, S. Non-Abelian topological order in noncentrosymmetric superconductors with broken time-reversal symmetry. *Phys. Rev. B* **82,** 184525 (2010).
Xiao, D., Chang, M.-C. & Niu, Q. Berry phases effects on electronic properties. *Rev. Mod. Phys.* **82,** 1959 (2010).
**Acknowledgement**
C.Q, Y.X, L.M, C.Z are supported by ARO (W911NF-12-1-0334), AFOSR (FA9550-13-1-0045), and NSF-PHY (1104546). Z.Z., X.Z., and G.G. are supported by the National 973 Fundamental Research Program (Grant No. 2011cba00200), the National Natural Science Foundation of China (Grant No. 11074244 and No. 11274295). M.G is supported in part by Hong Kong RGC/GRF Project 401512, the Hong Kong Scholars Program (Grant No. XJ2011027) and the Hong Kong GRF Project 401113.
**Author contributions** All authors designed and performed the research and wrote the manuscript.
**Competing financial interests**
The authors declare no competing financial interests.
[^1]: These authors contributed equally to this work
[^2]: These authors contributed equally to this work
[^3]: Email: skylark.gong@gmail.com
[^4]: Email: xbz@mail.ustc.edu.cn
[^5]: Email: chuanwei.zhang@utdallas.edu
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We study longwave Marangoni convection in a layer heated from below. Using the scaling $k = O(\sqrt{\rm Bi})$, where $k$ is the wavenumber and ${\rm Bi}$ is the Biot number, we derive a set of amplitude equations. Analysis of this set shows the presence of monotonic and oscillatory modes of instability. The oscillatory mode has not previously been found for this direction of heating. Studies of the weakly nonlinear dynamics demonstrate that stable steady and oscillatory patterns can be found near the stability threshold.'
author:
- 'S. Shklyaev'
- 'M. Khenner'
- 'A. A. Alabuzhev'
title: Oscillatory and monotonic modes of longwave Marangoni convection in a thin film
---
Marangoni convection in a liquid layer with an upper free boundary is a classical problem in the dynamics of thin films and in pattern formation [@books; @reviews]. In a pioneering theoretical paper, Pearson [@Pearson] analyzed the linear stability of a layer with a nondeformable free surface. He considered two cases of thermal boundary conditions at the substrate: ideal and poor heat conductivity, when either the temperature or the heat flux is specified. In the latter case he found a monotonic longwave instability mode for heating from below and zero Biot number $\rm Bi$. For ${\rm Bi} \ll 1$ the critical wavenumber $k$ is proportional to ${\rm Bi}^{1/4}$ [@books]. Many authors extended the analysis in order to include the deformation of the free surface. A review of analytical and numerical works can be found in [@books]. In particular, several oscillatory modes were revealed; these modes were reported only for heating from above. In the case of heating from below, a nonlinear analysis for an ideally conductive substrate was performed in Ref. [@VanHook]: it was shown that a subcritical bifurcation occurs and the instability necessarily results in film rupture. The behavior of perturbations near the stability threshold was studied in [@G_YV] for the case of a poorly conductive substrate. Under the assumption of large gravity, and, hence, small surface deflection, an amplitude equation was derived and a subcritical bifurcation was found.
In this paper we demonstrate the existence of a new *oscillatory* mode of longwave instability for the film *heated from below*. Using the scaling $k=O(\sqrt{\rm
Bi})$, which was first suggested in Ref. [@Alla-05], we derive a set of amplitude equations. Linear stability analysis gives both the monotonic and the oscillatory modes. Pattern selection near the stability threshold clearly demonstrates that instability does not necessarily lead to rupture and that both steady and oscillatory regimes can be found experimentally within certain domains of parameters. We consider a three-dimensional thin liquid film of unperturbed height $H_0$ on a planar horizontal substrate heated from below. The heat conductivity of the solid is assumed small in comparison with that of the liquid, thus a constant vertical temperature gradient $-A$ is prescribed at the substrate. (The Cartesian reference frame is chosen such that the $x$ and $y$ axes are in the substrate plane and the $z$ axis is normal to the substrate.)
The dimensionless boundary-value problem governing the fluid dynamics reads:
\[base\_eq\] $$\begin{aligned}
\frac{1}{P}\left({\bf v}_t + {\bf v}\cdot \nabla {\bf
v}\right)&=& - {\bf \nabla} p + \nabla ^2 {\bf v} -G {\bf e}_z,\\
T_t + {\bf v}\cdot \nabla T &=& \nabla ^2 T, \ \nabla \cdot {\bf
v} = 0,\end{aligned}$$
\[base\_bcs\] $$\begin{aligned}
\label{bcs_velo}{\bf v} &=& 0, \ T_z=-1 \ {\rm at} \ z = 0, \\
\nonumber {\bf \Sigma}\cdot {\bf n} &=&\left(p-{\rm Ca}
K\right){\bf n}- M\nabla_\tau \left( T|_{z=h}\right),\
\nabla_n T=-{\rm Bi}\; T,\\
h_t &=& w-{\bf v} \cdot {\bf \nabla} h
\ {\rm at} \ z = h(x,y,t).\end{aligned}$$
Here, ${\bf v}=({\bf u},w)$ is the fluid velocity (where ${\bf u}$ is the velocity in the substrate plane and $w$ is the $z$-component), $T$ is the temperature, $p$ is the pressure in the liquid, $\bf \Sigma$ is the viscous stress tensor, $h$ is the dimensionless height of the film, ${\bf e}_{z}$ is the unit vector directed along the $z$ axis, ${\bf n}$ and $\bf \tau$ are the normal and tangent unit vectors to the free surface, respectively, $K$ is the mean curvature of the free surface. The dimensionless parameters entering the above set of equations are the capillary number, the Marangoni number, the Galileo number, the Biot number, and the Prandtl number: $${\rm Ca}=\frac{\sigma H_0}{\eta \chi}, \ M=-\frac{\sigma_T A
H_0^2}{\eta \chi}, \ G=\frac{g H_0^3}{\nu\chi}, \ {\rm
Bi}=\frac{qH_0}{\kappa},$$ and $P=\nu/\chi$. Here $\sigma$ is the surface tension, $\sigma_T\equiv d\sigma/dT$, $g$ is the acceleration of gravity, $q$ is the heat transfer rate, $\kappa$ is the thermal conductivity, $\chi$ is the thermal diffusivity, $\nu$ and $\eta $ are the kinematic and dynamic viscosity of the liquid, respectively.
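To fix orders of magnitude, the dimensionless groups can be evaluated for a thin water film; the property values below are our assumptions (CGS units, room temperature), not data from the paper, and the result is consistent with the estimates quoted later in the text for $H_0=10^{-3}\:$cm.

```python
# Illustrative property values for water at room temperature (CGS units);
# these numbers are our assumptions, not taken from the paper.
nu, chi = 1.0e-2, 1.4e-3      # kinematic viscosity, thermal diffusivity [cm^2/s]
eta = 1.0e-2                  # dynamic viscosity [g cm^-1 s^-1]
sigma = 72.0                  # surface tension [dyn/cm]
g = 981.0                     # gravity [cm/s^2]
H0 = 1.0e-3                   # film thickness [cm]

G = g * H0**3 / (nu * chi)        # Galileo number
Ca = sigma * H0 / (eta * chi)     # capillary number
P = nu / chi                      # Prandtl number

# Orders of magnitude consistent with the estimates quoted in the text
assert 1e-2 < G < 1 and 1e3 < Ca < 1e5 and 1 < P < 20
```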
Below we study the evolution of large-scale convection using the set of Eqs. (\[base\_eq\]) and (\[base\_bcs\]).
We rescale the coordinates and the time as follows: $$\label{rescaling}
X=\epsilon x, \ Y=\epsilon y, \ \tau=\epsilon^2 t,$$ where $\epsilon \ll 1$ is the ratio of $H_0$ to a typical horizontal lengthscale. The temperature field is represented as $T=-z+{\rm Bi}^{-1}+\theta(X,Y,\tau)+O(\epsilon^2)$.
We assume large values of ${\rm Ca}$ and small values of $\rm Bi$, $$\label{Biot}
{\rm Ca}=\epsilon^{-2}C, \ {\rm Bi}=\epsilon^2 \beta.$$ Thus we deal with the intermediate asymptotics between the conventional longwave mode, ${\rm Bi}=O\left(\epsilon^4\right)$, [@G_YV] and the case of finite $\rm Bi$ [@Pearson]. These cases correspond to $\beta=0$ and $\beta\to\infty$, respectively.
Substituting the rescaled fields into Eqs. (\[base\_eq\]) and (\[base\_bcs\]) and applying the conventional technique of the lubrication approximation (see [@reviews]), we arrive at $$\begin{aligned}
\label{h_t} h_{\tau}&=&{\bf \nabla}\cdot\left[\frac{h^3}{3} {\bf
\nabla} \Pi + \frac{M h^2}{2}{\bf \nabla} \left(\theta-h\right)
\right] \equiv {\bf
\nabla \cdot j},\\
\nonumber h \theta_\tau&=&{\bf \nabla}\cdot \left(h{\bf
\nabla}\theta\right) -\frac{1}{2}(\nabla h)^2-\beta(\theta-h)+
{\bf j} \cdot {\bf \nabla}(\theta-h) \\
&&+ {\bf \nabla}\cdot \left[\frac{h^4}{8}{\bf \nabla}
\Pi+\frac{Mh^3}{6}{\bf \nabla}(\theta-h)\right].\label{T_t}\end{aligned}$$ Here $\Pi=Gh-C\nabla^2 h$ and $\nabla$ is a two-dimensional gradient with respect to $X$ and $Y$.
Equations (\[h\_t\]) and (\[T\_t\]) form a closed set of amplitude equations governing the nonlinear interaction of two well-known longwave modes: the Pearson mode ($h=1$) [@Pearson] and the surface deformation-induced mode. (Note that the latter mode with $\theta=const$ emerges only in the case of the conductive substrate [@VanHook].) The conductive state obviously corresponds to $h=\theta=1$.
Substituting the perturbed fields $h=1+\xi$ and $\theta=1+\Theta$ into Eqs. (\[h\_t\]) and (\[T\_t\]), linearizing the equations for perturbations about the equilibrium, and representing the perturbation fields as proportional to $\exp
\left(\lambda \tau+ikX\right)$, one arrives at
$$\begin{aligned}
\nonumber \lambda^2+\lambda\left[\beta+k^2\left(1+\frac{\tilde
G-M}{3}\right)\right]\\
+\frac{k^2}{3}\left(\beta+k^2\right)\tilde
G-\frac{Mk^4}{2}\left(1+\frac{\tilde
G}{72}\right)=0,\label{lambda}\end{aligned}$$
where $\tilde G\equiv G+Ck^2$. Equation (\[lambda\]) possesses both real (monotonic instability) and complex (oscillatory instability) solutions.
For the [*monotonic*]{} mode $\lambda=0$ at the stability border, thus the marginal stability curve is given by $$\label{M_mono}
M_{m}=\frac{48\left(\beta+k^2\right)\tilde G}{k^2\left(72+\tilde G\right)}.$$ These marginal curves have a minimum at finite values of $k$ only if $$\label{beta_lim}
\beta C<72,$$ otherwise the minimal value, $M_c^{(m)}$, is achieved in the limit $k\to \infty$, i.e. the longwave mode is not critical. Hereafter we assume that the inequality (\[beta\_lim\]) holds; since the limit $C=0$ is well studied [^1], for all computations we set $C=1$ without loss of generality [^2]. The critical wavenumber that minimizes the marginal stability curve, Eq. (\[M\_mono\]), is $$\left(k_c^{(m)}\right)^2=\frac{\beta C G+\sqrt{72\beta C
G\left(G+72-\beta C\right)}}{C\left(72-\beta C\right)}.$$
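The expression for $k_c^{(m)}$ can be verified by direct minimization of Eq. (\[M\_mono\]) on a fine grid; the parameter values in this sketch are illustrative and satisfy $\beta C<72$.

```python
import numpy as np

G, C, beta = 10.0, 1.0, 1.0        # illustrative values with beta*C < 72

def M_mono(k):
    """Monotonic marginal curve, Eq. (M_mono), with tilde-G = G + C k^2."""
    Gt = G + C * k**2
    return 48 * (beta + k**2) * Gt / (k**2 * (72 + Gt))

k = np.linspace(0.5, 4.0, 200_001)
k_num = k[np.argmin(M_mono(k))]    # grid minimizer of the marginal curve

# Analytic critical wavenumber squared from the text
kc2 = (beta * C * G + np.sqrt(72 * beta * C * G * (G + 72 - beta * C))) \
      / (C * (72 - beta * C))
assert abs(k_num**2 - kc2) < 1e-3  # grid minimum matches the analytic k_c
```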
![(a): Marginal stability curves $M_{*}(k)$ for $G=10$: solid lines correspond to the monotonic mode, dashed ones – to the oscillatory mode; $\beta=1, \, 10,\, 40$ for lines 1, 2, and 3, respectively. (b): The domain of oscillatory instability. The dashed vertical line marks the boundary of the longwave instability, Eq. (\[beta\_lim\]).[]{data-label="fig:Mk"}](fig1.eps){width="8.5"}
For the [*oscillatory*]{} mode the marginal stability curve is determined by the expression $$\label{M_osc}
M_{o}=3+\tilde G+\frac{3\beta}{k^2}.$$ The imaginary part of the growth rate for neutral perturbations is $$\lambda_i\equiv {\rm Im} (\lambda)
=\frac{k^2}{12}\sqrt{(72+\tilde G)\left(M_{m}-M_{o}\right)},$$ i.e. the oscillatory mode is present only at $M_{o}(k)<M_{m}(k)$.
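A quick numerical check of Eq. (\[lambda\]) (our sketch, with illustrative parameters): on the monotonic boundary, Eq. (\[M\_mono\]), the constant term vanishes and one root is $\lambda=0$, while on the oscillatory boundary, Eq. (\[M\_osc\]), the linear coefficient vanishes and the roots are purely imaginary.

```python
import numpy as np

def disp_coeffs(k, M, G=10.0, C=1.0, beta=40.0):
    """Coefficients (b, c) of lambda^2 + b*lambda + c = 0, Eq. (lambda)."""
    Gt = G + C * k**2
    b = beta + k**2 * (1 + (Gt - M) / 3)
    c = (k**2 / 3) * (beta + k**2) * Gt - (M * k**4 / 2) * (1 + Gt / 72)
    return b, c

G, C, beta = 10.0, 1.0, 40.0       # illustrative point in the oscillatory region
k = 1.5
Gt = G + C * k**2
M_mono = 48 * (beta + k**2) * Gt / (k**2 * (72 + Gt))     # Eq. (M_mono)
b, c = disp_coeffs(k, M_mono, G, C, beta)
assert min(abs(np.roots([1.0, b, c]))) < 1e-9   # lambda = 0: monotonic boundary

k = (3 * beta / C) ** 0.25                       # k_c of Eq. (Mc_osc)
M_osc = 3 + G + C * k**2 + 3 * beta / k**2       # Eq. (M_osc)
b, c = disp_coeffs(k, M_osc, G, C, beta)
roots = np.roots([1.0, b, c])
assert np.allclose(roots.real, 0) and roots.imag.max() > 0   # pure oscillation
```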
Minimization of the Marangoni number with respect to $k$ gives $$M_{c}^{(o)}=3+G+2\sqrt{3\beta C}, \
k_c^{(o)}=\left(\frac{3\beta}{C}\right)^{1/4}. \label{Mc_osc}$$ Examples of the marginal stability curves for these modes are shown in Fig. \[fig:Mk\](a). Domains of monotonic and oscillatory instability are demonstrated in Fig. \[fig:Mk\](b). It is clear that the oscillatory mode is critical for $\beta C >
17.4$ and $G<17.2$. Take, for instance, a layer of water of thickness $H_0=10^{-3} {\rm cm}$. Then $G\approx 0.1$, ${\rm
Ca}\approx 10^4$ and $\rm Bi$ has to be approximately $10^{-3}$ in order to provide the required value of $\beta C$; this value seems achievable in experiments.
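Equation (\[Mc\_osc\]) is easy to confirm numerically: minimizing $M_o(k)=3+G+Ck^2+3\beta/k^2$ over $k$ reproduces both $M_c^{(o)}$ and $k_c^{(o)}$. The parameter values in the sketch below are illustrative.

```python
import numpy as np

G, C, beta = 10.0, 1.0, 40.0       # illustrative point in the oscillatory region

def M_osc(k):
    return 3 + G + C * k**2 + 3 * beta / k**2   # Eq. (M_osc), tilde-G expanded

k = np.linspace(0.5, 6.0, 200_001)
Mvals = M_osc(k)
i = np.argmin(Mvals)

Mc = 3 + G + 2 * np.sqrt(3 * beta * C)          # Eq. (Mc_osc)
kc = (3 * beta / C) ** 0.25
assert abs(Mvals[i] - Mc) < 1e-6                # minimum value matches M_c^(o)
assert abs(k[i] - kc) < 1e-3                    # minimizer matches k_c^(o)
```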
Equations (\[M\_osc\])-(\[Mc\_osc\]) indicate why the oscillatory mode has not been found earlier. As we have emphasized above, all previous studies deal with either $\tilde G \gg 1$ [@Pearson], or $\beta=0$ [@G_YV], or $C=0$ [@Alla-05]. In these cases the oscillatory mode does not exist.

Here we study the nonlinear dynamics of perturbations at small supercriticality, $M-M_c^{(m)}\approx 0$, see Ref. [@Hoyle-book]. To this end, we represent the primary part of the small perturbation of $h$ in the form:
![(Color online). Pattern selection for the monotonic mode. (a) and (b) – the domains of stability for Rolls (marked with “R”) and Squares (“S”) on the [*square*]{} lattice. Solid (dashed) lines separate between supercritical and subcritical branching for Rolls (Squares). The latter domains are marked by “sub. R” (“sub. S”). Dotted lines separate domains of stability for Rolls and Squares. Dashed-dotted line in panel (a) is the locus of points $N=0$; in the vicinity of this curve Eq. (\[hex3\]) holds. Diamond (circle) shows the threshold value $G_1$ ($G_2$) for pattern selection on the [*hexagonal*]{} lattice.[]{data-label="fig:domains"}](fig2.eps){width="8.5"}
$$\xi=\sum_{j=1}^{n} A_{j} \exp\left(i{\bf k}_j\cdot {\bf
R}\right)+c.c.
\label{mono_lin}$$
where $c.c.$ denotes complex conjugate terms and $k_j=k_c^{(m)}$. (The primary part of $\Theta$ is expressed in terms of $\xi$.) The amplitudes $A_j$ are functions of a slow time. For [*square*]{} ($n=2$) and [*hexagonal*]{} ($n=3$) lattices, the wavevectors are $$\begin{aligned}
\label{k_sq} {\bf k}_1&=&k_c(1,0), \ {\bf k}_2=k_c(0,1)\\
\label{k_hex} {\rm and} \ {\bf k}_1&=&k_c(1,0), \ {\bf
k}_{2,3}=\frac{1}{2}k_c(-1,\pm \sqrt{3}),\end{aligned}$$ respectively.
For [*square lattice*]{}, the amplitude equations read $$\begin{aligned}
\dot A_j&=&\left(\gamma -K_0|A_j|^2-K_1S_A\right)A_j, \ j=1,2,\end{aligned}$$ where $S_A = \sum_1^n |A_l|^2$. Here the dot denotes the derivative with respect to the slow time, and $\gamma\sim
M-M_c^{(m)}$ is the real growth rate. The Landau constants $K_0$ and $K_1$ are real; they are cumbersome and thus not presented here. Results of the numerical calculations are shown in Fig. \[fig:domains\]. One can readily see that supercritical branching occurs only in two domains of parameters. These domains are situated either at rather small values of $\beta C$, Fig. \[fig:domains\](a), or at sufficiently small $G$, Fig. \[fig:domains\](b). In the former case Rolls are selected everywhere except for a very small region shown in the inset. In the latter case Squares are selected everywhere except for the small region where Rolls are stable.
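The competition encoded in the square-lattice amplitude equations can be illustrated by direct time integration. The Landau constants below are hypothetical (the actual $K_0$, $K_1$ are not given in closed form here) and chosen so that Rolls are selected; with this sign choice, a generic small perturbation relaxes to the Roll fixed point $|A_1|^2=\gamma/(K_0+K_1)$, $A_2=0$.

```python
import numpy as np

# Minimal sketch of pattern competition on the square lattice with
# illustrative Landau constants (our assumption, not the paper's values).
gamma, K0, K1 = 1.0, -1.0, 3.0     # chosen so that Rolls win
A = np.array([0.10, 0.05])         # real amplitudes (A1, A2)
dt = 1e-3
for _ in range(200_000):           # dA_j/dt = (gamma - K0 A_j^2 - K1 S_A) A_j
    SA = np.sum(A**2)
    A += dt * (gamma - K0 * A**2 - K1 * SA) * A

roll = np.sqrt(gamma / (K0 + K1))  # Roll fixed point |A1|^2 = gamma/(K0+K1)
assert abs(A[0] - roll) < 1e-3 and abs(A[1]) < 1e-3
```

Swapping the relative size of the self- and cross-coupling terms makes the Square fixed point $|A_1|=|A_2|$ the attractor instead, which is the selection mechanism discussed in the text.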
For [*hexagonal lattice*]{}, the resonant quadratic interaction results in the following amplitude equation: $$\label{hex3}
\dot A_1=\gamma A_1-N A_2^*A_3^*
-\left(K_0|A_1|^2+K_1S_A\right)A_1,$$ and similar equations for $A_{2,3}$. (Hereafter the asterisk denotes complex conjugation.) Generally speaking, the quadratic term prevails over the cubic ones, which leads to subcritical excitation of the hexagonal patterns through a transcritical bifurcation [@Hoyle-book]. However, $N=0$ at the dashed-dotted line shown in Fig. \[fig:domains\](a), and in the vicinity of this line Eq. (\[hex3\]) becomes appropriate.
Among the variety of possible patterns [@Hoyle-book], three are important. They are Rolls with $A_1\neq 0, \, A_2=A_3=0$ and two types of Hexagons with $A_1=A_2=A_3\equiv A$: $H^+$ for $A>0$ and $H^-$ in the opposite case. In the former case the flow is upward in the center of the convective cell, whereas in the latter case it is downward. Pattern selection on a hexagonal lattice is shown in Fig. \[fig:domains\](a). At $G<G_1\approx 8.20$ there are no stable solutions; the subcritical bifurcation occurs for Rolls and one branch of Hexagons (either $H^-$ below or $H^+$ above the dashed-dotted line). At $G_1<G<G_2=10$ Rolls are still subcritical and unstable; stable Hexagons emerge only within the finite interval of supercriticality. Finally, at $G>G_2$, $H^-(H^+)$ is stable within the interval of supercriticality, whereas Rolls become stable when $M-M_c^{(m)}$ increases.
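The transcritical scenario for Hexagons can be made concrete by solving for the steady uniform states $A_1=A_2=A_3=A$ of Eq. (\[hex3\]) with illustrative real coefficients (our simplification): nonzero branches persist subcritically down to a saddle-node at $\gamma=-N^2/4(K_0+3K_1)$.

```python
import numpy as np

# Steady uniform hexagons A1 = A2 = A3 = A of Eq. (hex3), assuming real
# gamma, N and Landau constants: 0 = gamma*A - N*A^2 - (K0 + 3*K1)*A^3.
N, K = 1.0, 1.0                      # K = K0 + 3*K1, assumed positive

def hex_branches(gamma):
    disc = N**2 + 4 * K * gamma      # discriminant of K*A^2 + N*A - gamma = 0
    if disc < 0:
        return []                    # no nonzero steady hexagons
    r = np.sqrt(disc)
    return [(-N + r) / (2 * K), (-N - r) / (2 * K)]

# Transcritical bifurcation: nonzero branches persist below threshold
# (gamma < 0) down to the saddle-node at gamma = -N^2/(4K).
assert len(hex_branches(-0.1)) == 2      # subcritical hexagons exist
assert len(hex_branches(-0.5)) == 0      # beyond the saddle-node: none
assert len(hex_branches(+0.2)) == 2
```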
To conclude the discussion of steady patterns, we briefly consider the competition of patterns on the square and hexagonal lattices. It is clear that at finite values of $N$, Hexagons emerge subcritically and no stable patterns can be found near the stability threshold. Therefore, weakly nonlinear analysis provides stable patterns only near the dashed-dotted curve shown in Fig. \[fig:domains\](a), where the competition between Hexagons and Rolls occurs.
For the oscillatory mode the solution is presented in the form $$\label{osc_lin}
\xi=\sum_{j=1}^n \left(A_{j} e^ {i{\bf k}_j\cdot {\bf
R}}+B_je^{-i{\bf k}_j\cdot {\bf R}} \right)e^{i\lambda_i\tau}+c.c.$$ Note that the pair $(A_j,B_j)$ corresponds to counter-propagating waves, which must be taken into account separately. The wavevectors for the square and hexagonal lattices are given by Eqs. (\[k\_sq\]) and (\[k\_hex\]), respectively.
For [*square lattice*]{}, the equation governing the dynamics of the amplitudes $A_j$ reads: $$\begin{aligned}
\nonumber \dot A_j&=&\left[\gamma
-K_0|A_j|^2-K_1|B_j|^2-K_2\left(S_A+S_B\right)\right]A_j\\
&&-K_4B_j^*S_{AB}, \ j=1,2, \label{sq_osc}
%\\
%\nonumber \dot B_1&=&\left[\gamma
%-K_0|B_1|^2-K_1|A_1|^2-K_2\left(|A_2|^2+|B_2|^2\right)\right]B_1\\
%&&-K_4A_1^*A_2B_2.\end{aligned}$$ where $S_B = \sum_1^n|B_l|^2$, $S_{AB}= \sum_1^nA_lB_l$. A similar pair of equations for $B_j$ is obtained from Eqs. (\[sq\_osc\]) by replacement $A_j\leftrightarrow B_j$. The Landau coefficients $K_l$ ($l=0,1,2,4$) as well as the growth rate $\gamma$ are now complex-valued.
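As a rough numerical illustration (not part of the original analysis), the selection of Traveling Rolls can be reproduced by integrating Eqs. (\[sq\_osc\]) directly. The coefficient values below are arbitrary placeholders chosen so that the cross-couplings exceed the self-coupling; following the commented-out equation for $B_1$, the sums $S_A$, $S_B$, $S_{AB}$ are interpreted as running over the other wavevector only ($l\neq j$):

```python
import numpy as np

def rhs(A, B, gamma, K0, K1, K2, K4):
    """Right-hand side of the square-lattice amplitude equations, Eq. (sq_osc).
    The sums S_A, S_B, S_AB are taken over the other wavevector only (l != j),
    as in the commented-out equation for B_1."""
    SA = np.abs(A[::-1])**2          # |A_2|^2 in the j=1 equation, and vice versa
    SB = np.abs(B[::-1])**2
    SAB = (A * B)[::-1]              # A_2 B_2 in the j=1 equation, A_1 B_1 in j=2
    dA = (gamma - K0*np.abs(A)**2 - K1*np.abs(B)**2 - K2*(SA + SB))*A \
         - K4*np.conj(B)*SAB
    dB = (gamma - K0*np.abs(B)**2 - K1*np.abs(A)**2 - K2*(SA + SB))*B \
         - K4*np.conj(A)*SAB
    return dA, dB

# illustrative complex Landau coefficients; imaginary parts only shift the frequency
gamma, K0, K1, K2, K4 = 1.0 + 0.3j, 1.0 + 0.2j, 2.0 + 0j, 2.0 + 0j, 0.5 + 0j
A = np.array([0.1 + 0j, 1e-3 + 0j])      # small seed, slightly biased toward A_1
B = np.array([1e-3 + 0j, 1e-3 + 0j])
dt = 1e-3
for _ in range(30000):                   # forward Euler up to t = 30
    dA, dB = rhs(A, B, gamma, K0, K1, K2, K4)
    A, B = A + dt*dA, B + dt*dB

# Traveling Rolls: |A_1|^2 saturates at Re(gamma)/Re(K0) = 1; other modes decay
print(abs(A[0])**2, abs(A[1]), abs(B[0]), abs(B[1]))
```

With these couplings the system relaxes onto the TR branch; lowering the real parts of $K_1$, $K_2$ below that of $K_0$ lets the second wavevector grow instead, destabilizing TR.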
Equations (\[sq\_osc\]) were studied in detail in Ref. [@Silber-Knobloch]. Using the results of that paper, we found that Traveling Rolls (TR), $A_1\neq 0, \, A_2=B_{1,2}=0$, can branch either supercritically or subcritically \[see Fig. \[fig:osc\_sel\](a)\], whereas the remaining patterns emerge through the direct Hopf bifurcation; TR are selected in the domain of supercritical excitation. Alternating Rolls are stable within the small area marked by “AR”; here, depending on the initial condition, the system either approaches AR or exhibits unbounded growth of TR.
![(Color online). Pattern selection for the oscillatory convection. (a) – square lattice: Domains of stability for TR (below the dashed line) and AR (to the left of the dotted line). Above the dashed line TR bifurcate subcritically. (b) – hexagonal lattice: Domains of stability for TR (below the dashed line and to the right of the dotted line) and TRa2 (between the dotted and the solid line) are marked by “TR" and “TRa2”, respectively. Above the dashed line TR bifurcate subcritically, to the left of the solid line TRa2 are subcritical. []{data-label="fig:osc_sel"}](fig3.eps){width="8.5"}
For [*hexagonal lattice*]{}, the amplitude equation governing the dynamics of the complex amplitudes $A_j$ reads: $$\begin{aligned}
\nonumber \dot A_j&=&\left[\gamma
-K_0|A_j|^2-K_1|B_j|^2-K_2S_A-K_3S_B\right]A_j\\
&&-K_4B_j^*S_{AB}, \ j=1,2,3. \label{hex_osc}
%\\
%\nonumber \dot B_1&=&\left[\gamma
%-K_0|B_1|^2-K_1|A_1|^2-K_2N_B-K_3N_A\right]B_1\\
%&&-K_4A_1^*\left(A_2B_2+A_3B_3\right),\end{aligned}$$ Three similar equations are obtained from Eqs. (\[hex\_osc\]) by a replacement $A_j\leftrightarrow B_j$.
Analysis of the Hopf bifurcation for the above set of equations was performed in Ref. [@Roberts], where eleven limit cycles were found and studied. Based on that paper, the results on pattern selection are presented in Fig. \[fig:osc\_sel\](b). The dashed line again separates direct and inverse Hopf bifurcations for TR; it is obviously the same as in panel (a). However, for the hexagonal lattice, there appears a competition between TR and Traveling Rectangles 2 (TRa2, $A_1=B_3\neq 0$, whereas all other amplitudes vanish). The latter pattern is stable in the domain marked by “TRa2”. The entire domain of supercritical bifurcation becomes smaller because TRa2 can bifurcate either supercritically or subcritically.
Studying the competition between patterns on hexagonal and square lattices, we found that the stability boundaries for both TR and TRa2 are the same as shown in Fig. \[fig:osc\_sel\](b), whereas the stability domain for AR nearly disappears.
We studied the longwave Marangoni convection in a liquid layer heated from below; the heat flux at the substrate is specified. In such a setup, an interaction of two well-known monotonic modes of longwave instability, Pearson’s mode and the surface deformation-induced mode, can result in the emergence of a longwave oscillatory mode. However, the oscillatory mode had not been detected, in spite of extensive numerical, analytical, and experimental studies [@books], since the publication of Pearson’s paper. We have succeeded in such an analysis and point out the domain of parameters, reachable in experiments, where the oscillatory mode exists.
Moreover, we point out the domains of parameters where the convection emerges supercritically and hence either a stationary or an oscillatory terminal state with distorted surface is stable. This result is also very unusual, since only subcritical branching was found in the previous studies [@VanHook; @G_YV].

We are grateful to A. A. Nepomnyashchy and A. Oron for the fruitful discussions. S.S. and A.A. are partially supported by joint grants of the Israel Ministry of Sciences (Grant 3-5799) and Russian Foundation for Basic Research (Grant 09-01-92472). M.K. acknowledges the support of WKU Faculty Scholarship Council via grants 10-7016 and 10-7054.
[99]{}
P. Colinet, J.C. Legros, and M.G. Velarde, (Wiley-VCH, Berlin, 2001); A. A. Nepomnyashchy, M.G. Velarde, and P. Colinet, (Chapman and Hall/CRC Press, London, 2001); R. V. Birikh et al., (Marsel Dekker, New York, Basel, 2003).
A. Oron, S. H. Davis, and S. G. Bankoff, Rev. Mod. Phys. [**69**]{}, 931 (1997); R. V. Craster, O. K. Matar, Rev. Mod. Phys. [**81**]{}, 1131 (2009). J. R. A. Pearson, J. Fluid Mech. [**4**]{}, 489 (1958).
S. J. VanHook et al., J. Fluid Mech. [**345**]{}, 45 (1997). P. L. Garcia-Ybarra, J. L. Castillo, and M. G. Velarde, Phys. Fluids [**30**]{}, 2655 (1987); A. Oron and P. Rosenau, Phys. Rev. A [**39**]{}, 2063 (1989). A. Podolny, A. Oron, and A. A. Nepomnyashchy, Phys. Fluids [**17**]{}, 104104 (2005). R. B. Hoyle, .
M. Silber, E. Knobloch, Nonlinearity [**4**]{}, 1063 (1991). M. Roberts, J.W. Swift, and D.H. Wagner, Multiparameter Bifurcation Theory, eds. M. Golubitsky and J. Guckenheimer, Contemp. Math. [**56**]{}, 283 (1986).
[^1]: For $C=0$ (i.e., ${\rm Ca}$ is finite) the critical Marangoni number reduces to the conventional value $48G/(G+72)$ [@G_YV], which is approached as $k\to \infty$. The same $M_c^{(m)}$ holds for $\beta=0$ as well, but with zero critical wavenumber.
[^2]: This can be achieved by the rescaling of Eqs. (\[h\_t\]) and (\[T\_t\]): $(X,Y)\to \sqrt{C}(X,Y),\tau \to C \tau, \, \beta\to\beta/C$.
| {
"pile_set_name": "ArXiv"
} |
---
title: The Karlskrona manifesto for sustainability design
---
**** *Version 1.0, May 2015*\
As software practitioners and researchers, we are part of the group of people who design the software systems that run our world. Our work has made us increasingly aware of the impact of these systems and the responsibility that comes with our role, at a time when information and communication technologies are shaping the future. We struggle to reconcile our concern for planet Earth and its societies with the work that we do. Through this work we have come to understand that we need to redefine the narrative on sustainability and the role it plays in our profession.\
What is sustainability, really? We often define it too narrowly. Sustainability is at its heart a systemic concept and has to be understood on a set of dimensions, including social, individual, environmental, economic, and technical.\
Sustainability is fundamental to our society. The current state of our world is unsustainable in more ways than we often recognize. Technology is part of the dilemma and part of possible responses. We often talk about the immediate impact of technology, but rarely acknowledge its indirect and systemic effects. These effects play out across all dimensions of sustainability over the short, medium and long term.\
Software in particular plays a central role in sustainability. It can push us towards growing consumption of resources, growing inequality in society, and lack of individual self-worth. But it can also create communities and enable thriving of individual freedom, democratic processes, and resource conservation. As designers of software technology, we are responsible for the long-term consequences of our designs. Design is the process of understanding the world and articulating an alternative conception on how it should be shaped, according to the designer’s intentions. Through design, we cause change and shape our environment. If we don’t take sustainability into account when designing, no matter in which domain and for what purpose, we miss the opportunity to cause positive change.\
**We recognize that** there is a rapidly increasing awareness of the fundamental need and desire for a more sustainable world, and a lot of genuine desire and goodwill - but this alone can be ineffective unless we come to understand that:\
**There is** a narrow perception of sustainability that frames it as protecting the environment or being able to maintain a business activity.\
**Whereas** as a systemic property, sustainability does not apply simply to the system we are designing, but most importantly to the environmental, economic, individual, technical and social contexts of that system, and the relationships between them.\
**There is** a perception that sustainability is a distinct discipline of research and practice with a few defined connections to software.\
**Whereas** sustainability is a pervasive concern that translates into discipline-specific questions in each area it applies.\
**There is** a perception that sustainability is a problem that can be solved, and that our aim is to find the ‘one thing’ that will save the world.\
**Whereas** it is a ‘wicked problem’ - a dilemma to respond to intelligently and learn in the process of doing so; a challenge to be addressed, not a problem to be solved.\
**There is** a perception that there is a tradeoff to be made between present needs and future needs, reinforced by a common definition of sustainable development, and hence that sustainability requires sacrifices in the present for the sake of future generations.\
**Whereas** it is possible to prosper on this planet while simultaneously improving the prospects for prosperity of future generations.\
**There is** a tendency to focus on the immediate impacts of any new technology, in terms of its functionality and how it is used.\
**Whereas** the following orders of effects have to be distinguished: *Direct, first order effects* are the immediate opportunities and effects created by the physical existence of software technology and the processes involved in its design and production. *Indirect, second order effects* are the opportunities and effects arising from the application and usage of software. *Systemic, third order effects*, finally, are the effects and opportunities that are caused by wide-scale use of software systems over time.\
**There is** a tendency to overly discount the future. The far future is discounted so much that it is considered for free (or worthless). Discount rates mean that long-term impacts matter far less than current costs and benefits.\
**Whereas** the consequences of our actions play out over multiple timescales, and the cumulative impacts may be irreversible.\
**There is** a tendency to think that taking small steps towards sustainability is sufficient, appropriate, and acceptable.\
**Whereas** incremental approaches can end up reinforcing existing behaviours and luring us into a false sense of security. However, current society is so far from sustainability that deeper transformative changes are needed.\
**There is** a tendency to treat sustainability as a desirable quality of the system that should be considered once other priorities have been established.\
**Whereas** it is not in competition with a specific set of quality attributes against which it has to be balanced - it is a fundamental precondition for the continued existence of the system and influences many of the goals to be considered in systems design.\
**There is** a desire to identify a distinct completion point to a given project, so success can be measured at that point, with respect to pre-ordained criteria.\
**Whereas** measuring success at one point in time fails to capture the effects that play out over multiple timescales, and so tells us nothing about long-term success. Criteria for success change over time as we experience those impacts.\
**There is** a narrow conception of the roles of system designers, developers, users, owners, and regulators and their responsibilities, and there is a lack of agency of these actors in how they can fulfill these responsibilities.\
**Whereas** sustainability imposes a distinct responsibility on each one of us, and that responsibility comes with a right to know the system design and its status, so that each participant is able to influence the outcome of the technology application in both design and use.\
**There is** a tendency to interpret the codes of ethics for software professionals narrowly to refer to avoiding immediate harm to individuals and property.\
**Whereas** it is our responsibility to address the potential harm from the 2nd and 3rd-order effects of the systems we design as part of our design process, even if these are not readily quantifiable.
As a result, even though the importance of sustainability is increasingly recognized, many software systems are unsustainable, and the broader impacts of most software systems on sustainability are unknown.\
****\
**Sustainability is systemic.** Sustainability is never an isolated property. Systems thinking has to be the starting point for the transdisciplinary common ground of sustainability.\
**Sustainability has multiple dimensions.** We have to include those dimensions into our analysis if we are to understand the nature of sustainability in any given situation.\
**Sustainability transcends multiple disciplines.** Working in sustainability means working with people from across many disciplines, addressing the challenges from multiple perspectives.\
**Sustainability is a concern independent of the purpose of the system.** Sustainability has to be considered even if the primary focus of the system under design is not sustainability.\
**Sustainability applies to both a system and its wider contexts.** There are at least two spheres to consider in system design: the sustainability of the system itself and how it affects sustainability of the wider system of which it will be part.\
**Sustainability requires action on multiple levels.** Some interventions have more leverage on a system than others. Whenever we take action towards sustainability, we should consider opportunity costs: action at other levels may offer more effective forms of intervention.\
**System visibility is a necessary precondition and enabler for sustainability design.** The status of the system and its context should be visible at different levels of abstraction and perspectives to enable participation and informed responsible choice.\
**Sustainability requires long-term thinking.** We should assess benefits and impacts on multiple timescales, and include longer-term indicators in assessment and decisions.\
**It is possible to meet the needs of future generations without sacrificing the prosperity of the current generation.** Innovation in sustainability can play out as decoupling present and future needs. By moving away from the language of conflict and the trade-off mindset, we can identify and enact choices that benefit both present and future.\
Sustainability design in the context of software systems is the process of designing systems with sustainability as a primary concern, based on a commitment to these principles.
****\
Each of the following stakeholders can do something right now to get started.\
**Software practitioners**: Try to identify effects of your project on technical, economic, environmental sustainability. Start asking questions about how to incorporate the principles into daily practice. Think about the social and individual dimensions. Talk about it with your colleagues.\
**Researchers**: Identify one research question in your field that can help us to better understand sustainability design. Discuss it with your peers and think about how sustainability impacts your research area.\
**Professional associations**: Revise code of ethics and practice to incorporate principles and explicitly acknowledge the need to consider sustainability as part of professional practice.\
**Educators**: Integrate sustainability design in curricula for software engineering and other disciplines and articulate competencies required for successful sustainability design.\
**Customers**: Put the concern on the table. Demand it in the next project.\
**Users**: Demand that the products you use demonstrate that their designers have considered all dimensions of sustainability.
**Signed,**
*Christoph Becker*, University of Toronto & Vienna University of Technology
*Ruzanna Chitchyan*, University of Leicester
*Leticia Duboc*, State University of Rio de Janeiro
*Steve Easterbrook*, University of Toronto
*Martin Mahaux*, University of Namur
*Birgit Penzenstadler*, California State University Long Beach
*Guillermo Rodriguez-Navas*, Mälardalen University
*Camille Salinesi*, Université Paris 1
*Norbert Seyff*, University of Zurich
*Colin C. Venters*, University of Huddersfield
*Coral Calero*, University of Castilla-La Mancha
*Sedef Akinli Kocak*, Ryerson University
*Stefanie Betz*, Karlsruhe Institute of Technology\
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We introduce a unique experimental testbed that consists of a fleet of 16 miniature Ackermann-steering vehicles. We are motivated by a lack of available low-cost platforms to support research and education in multi-car navigation and trajectory planning. This article elaborates the design of our miniature robotic car, the *Cambridge Minicar*, as well as the fleet’s control architecture. Our experimental testbed allows us to implement state-of-the-art driver models as well as autonomous control strategies, and test their validity in a real, physical multi-lane setup. Through experiments on our miniature highway, we are able to tangibly demonstrate the benefits of cooperative driving on multi-lane road topographies. Our setup paves the way for indoor large-fleet experimental research.'
author:
- 'Nicholas Hyldmar$^*$, Yijun He$^*$, Amanda Prorok[^1]'
title: A Fleet of Miniature Cars for Experiments in Cooperative Driving
---
[^1]: All authors are with the University of Cambridge, UK: [{nh490, yh403, asp45}@cam.ac.uk]{}. $^*$These authors contributed equally to this work. We gratefully acknowledge the Isaac Newton Trust who are supporting Amanda Prorok through an Early Career Grant.
| {
"pile_set_name": "ArXiv"
} |
---
author:
- Kaifeng Huang
- Bihuan Chen
- Bowen Shi
- Ying Wang
- Congying Xu
- Xin Peng
bibliography:
- 'src/reference.bib'
title: 'Interactive, Effort-Aware Library Version Harmonization'
---
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We calculate model-independently the impact of the critical point on higher order baryon susceptibilities $\chi_n$, showing how they depend on fluctuations of the order parameter. Including all tree level diagrams, we find new contributions to $\chi_4$ that are as important as the kurtosis of the order parameter fluctuations, and we characterize the kurtosis and other nongaussian moments as functions on the phase diagram. Important features of this analysis are then confirmed by a Gross-Neveu model study with good agreement with other model studies as well as lattice and experimental data. This suggests the universality of the characteristic peak in baryon susceptibilities as a signal of the critical point. We discuss leveraging measurements of different $\chi_n$ to extrapolate the location of the critical point.'
author:
- 'Jiunn-Wei Chen$^{1,2}$, Jian Deng$^{3}$ and Lance Labun$^2$'
date: '18 October, 2014'
title: 'Baryon susceptibilities, nongaussian moments and the QCD critical point'
---
A major goal of QCD theory and heavy-ion collision (HIC) experiment is to locate the critical end point in the chemical potential–temperature ($\mu\!-\!T$) plane[@QCDphases]. It is the target of the beam energy scan at RHIC and the future FAIR experiment, which are designed to create and measure QCD matter at high temperature and density. Lattice simulations are also developing methods to calculate properties of QCD matter at $\mu \neq 0$[@altlattice; @Gavai:2010zn], which cannot be reached directly due to the sign problem.
The critical point itself is a second-order transition, characterized by diverging correlation length $\xi $, due to vanishing mass of the order parameter field $\sigma $. This fact, $m_{\sigma }^{-1}=\xi \rightarrow \infty $, is a statement about the two-point correlation function of the $\sigma $ field, and we can use low energy effective field theory to relate other correlation functions to the critical point and phase structure. $\sigma $ correlations influence observables such as baryon number fluctuations because the $\sigma $ couples like a mass term for the baryons, meaning that the presence of $\sigma $ changes the baryon energy[@Stephanov:1998dy]. Thus our aim is to establish the theory connection from the phase structure through $\sigma $ dynamics to observables, here proton number fluctuations, which can be compared to event-by-event fluctuations in HICs[@Aggarwal:2010wy; @Adamczyk:2013dal] and to lattice simulations[@Gavai:2010zn].
It is important to keep in mind that the QCD matter created in HICs is dynamic. The measured data in general integrate properties from the initial state and expansion dynamics, and they may not represent equilibrium properties of QCD matter at the freeze-out $\mu ,T$, especially if the fireball has passed near the critical point [@slowing]. Assuming the departure from equilibrium is small, we interpret the freeze-out data as approximate measurements of the phase diagram, which can be compared with theory and lattice predictions to help locate the critical point.
The fluctuation observables compared between HICs and lattice simulations are ratios of baryon susceptibilities $$m_{1}=\frac{T\chi _{3}}{\chi _{2}},\quad m_{2}=\frac{T^{2}\chi _{4}}{\chi
_{2}},\quad \chi _{n}=\frac{\partial ^{n}\ln \mathcal{Z}}{\partial \mu ^{n}}
\label{chin}$$with the volume dependence eliminated in the ratios. More precisely, HICs measure proton fluctuations, which are shown to directly reflect the baryon fluctuations, because the order parameter field, the scalar $\sigma$, is an isospin singlet [@Hatta:2003wn]. From here, one approach is model independent, considering the partition function as a path integral over $\sigma $, $\mathcal{Z}=\int \!\mathcal{D}\sigma \ e^{-\Omega \lbrack \sigma ]/T}$, and the effective potential of the Landau theory $\Omega[\sigma]$ contains the phase structure in its coefficients. However those parameters are not determined by the theory. Previously this has been used to search for dominant contributions to $\chi _{n}$ close to the critical point [@Stephanov:2008qz; @Stephanov:2011pb]. Another approach is to evaluate $\ln \mathcal{Z}$ in a QCD-like model, such as NJL [@Asakawa:2009aj], to gain predictive power of $\chi _{n}$ as functions on the phase diagram. We pursue both approaches to put the model independent results into the context of the global phase diagram.
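As a sanity check on the definitions in Eq. (\[chin\]) (an illustration, not taken from the paper), the ratios can be evaluated by finite differences for a free hadron-gas (Skellam) baseline, $\ln\mathcal{Z}=2z\cosh(\mu/T)$ with independent baryons and antibaryons; in that baseline $T\chi_3/\chi_2=\tanh(\mu/T)$ and $T^2\chi_4/\chi_2=1$, the smooth reference values against which critical-point effects are measured:

```python
import math

def lnZ(mu, T=0.15, z=2.7):
    """Toy Skellam baseline: independent baryons and antibaryons,
    ln Z = 2 z cosh(mu/T); z is an arbitrary normalization (volume absorbed)."""
    return 2.0 * z * math.cosh(mu / T)

def chi(n, mu, T=0.15, h=1e-3):
    """n-th baryon susceptibility chi_n = d^n ln Z / d mu^n,
    by nested central differences of step h."""
    if n == 0:
        return lnZ(mu, T)
    return (chi(n - 1, mu + h, T, h) - chi(n - 1, mu - h, T, h)) / (2 * h)

T, mu = 0.15, 0.05
m1 = T * chi(3, mu, T) / chi(2, mu, T)       # -> tanh(mu/T) ~ 0.32
m2 = T**2 * chi(4, mu, T) / chi(2, mu, T)    # -> 1 (Skellam/Poisson baseline)
print(m1, m2)
```

Deviations of the measured $m_1$, $m_2$ from these baseline values are what the critical-point analysis is after.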
We analyze a general polynomial form of the effective potential $\Omega[\sigma]$. We derive the $\chi _{n}$ as functions of the $\sigma $ fluctuation moments $\langle \delta \sigma ^{k}\rangle $, extracting new, equally important contributions to $m_{2}$ in addition to the $\sigma $ field kurtosis $\kappa_{4}$, studied by [@Stephanov:2011pb]. We show that negative $\kappa_{4}$ is restricted to the normal phase, and thus these new contributions are necessary to understand recent HIC and lattice results for $m_{2}$. Our model independent results are corroborated by a quantitative study of the 1+1 dimensional Gross-Neveu (GN) model, revealing remarkably good qualitative agreement with both other model studies [@Asakawa:2009aj] and the experimental data. This consistency suggests that those features of our findings are model-independent.
We begin with the effective potential for the order parameter field, $$\Omega \lbrack \sigma ]=\int d^{3}x\left( -\!J\sigma +\frac{g_{2}}{2}\sigma
^{2}+\frac{g_{4}}{4}\sigma ^{4}+\frac{g_{6}}{6}\sigma ^{6}+\cdots \right)
\label{Veffglobal}$$with coefficients $g_{2n}$ functions of temperature and chemical potential, determining the phase diagram. Focusing on long range correlations, we consider only the zero momentum $\vec{k}=0$ mode, and so do not write the kinetic energy term $(\vec{\nabla}\sigma )^{2}$ here [@Stephanov:2008qz]. With the explicit symmetry breaking parameter $J\rightarrow 0$, the point where $g_{2}=g_{4}=0$ is the tricritical point (TCP), separating the second order transition line for $g_{4}>0$ from the first order line for $g_{4}<0$. When $J\neq 0$, the second order line disappears into a crossover transition through which the $\sigma $ minimum $\langle \sigma \rangle \equiv v$ changes smoothly as a function of temperature, and the TCP becomes a critical end point (CEP).
Fluctuations of the order parameter field obey an effective potential obtained by first minimizing the potential Eq.(\[Veffglobal\]) and then Taylor expanding around $v$, yielding $$\Omega \lbrack \delta \sigma ]-\Omega _{0}=\int d^{3}x\left( \frac{m_{\sigma
}^{2}}{2}\delta \sigma ^{2}+\frac{\lambda _{3}}{3}\delta \sigma ^{3}+\frac{\lambda _{4}}{4}\delta \sigma ^{4}+\cdots \right) \label{Veffflucns}$$with $\delta \sigma (x)=\sigma (x)-v$. The constant $\Omega _{0}\equiv\Omega[\sigma \!=\!v]$ does not influence the fluctuations, but does appear in the observables corresponding to the mean field contribution. The vev $v$ satisfies the gap equation $v(g_{2}+g_{4}v^{2}+g_{6}v^{4})=J$, and depends on $\mu ,T$ through the $g_{2n}$.
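The gap equation is a quintic in $v$ and is easily handled numerically; the sketch below (with arbitrary illustrative couplings, not taken from any model fit) selects, among the real roots, the one that globally minimizes $\Omega$, which is what produces the first-order jump of $v$ when $g_4<0$:

```python
import numpy as np

def Omega(s, J, g2, g4, g6):
    """Uniform part of the effective potential, Eq. (Veffglobal), per unit volume."""
    return -J*s + g2*s**2/2 + g4*s**4/4 + g6*s**6/6

def vev(J, g2, g4, g6):
    """Solve the gap equation g6 v^5 + g4 v^3 + g2 v = J and return the
    real root that globally minimizes Omega."""
    roots = np.roots([g6, 0.0, g4, 0.0, g2, -J])
    real = roots[np.abs(roots.imag) < 1e-7].real
    return real[np.argmin([Omega(s, J, g2, g4, g6) for s in real])]

# illustrative couplings with g4 < 0: v jumps discontinuously as g2 is lowered
J, g4, g6 = 1e-4, -2.0, 1.0
v_sym = vev(J, 1.2, g4, g6)   # above the first-order line: v ~ J/g2, nearly zero
v_brk = vev(J, 0.6, g4, g6)   # below it: v jumps to O(1)
print(v_sym, v_brk)
```

For this $(g_4,g_6)$ the $J=0$ first-order transition sits at $g_2=3g_4^2/16g_6=0.75$, so the two calls straddle it.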
Calculating $\mu $-derivatives of the partition function gives an explicit relation between susceptibilities $\chi _{n}$ and $\delta \sigma $ fluctuations. Starting with the second order, $$T^{2}\chi _{2}=T^{2}\frac{\partial ^{2}\ln \mathcal{Z}}{\partial \mu ^{2}}=-T\langle \Omega ^{\prime \prime }\rangle +\langle (\Omega ^{\prime
})^{2}\rangle -\langle \Omega ^{\prime }\rangle ^{2} \label{dPdmu2}$$where $\langle f\rangle =\mathcal{Z}^{-1}\int \mathcal{D}\sigma \ f\ e^{-\Omega /T}$ is the expectation value of the function $f$ including $\sigma $ fluctuations. The prime indicates differentiation with respect to $\mu $, $$\label{aijexp}
\frac{\partial ^{k}\Omega }{\partial \mu ^{k}}=\int d^{3}x\left(
a_{k0}+a_{k1}\delta \sigma +a_{k2}\delta \sigma ^{2}+\cdots \right) .$$The first term $a_{k0}$ is the mean-field contribution from differentiating $\Omega_0$. The linear term arises from the $\mu$-dependence of the vev $v$.
Plugging these derivatives into Eq.(\[dPdmu2\]), we keep all tree-level contributions, where the power of the correlator is less than or equal to the order of the $\mu$-derivative. This means that the expectation value of a product of correlators at different points is equal to the product of expectation values of correlators formed by making all possible contractions of $\delta\sigma $ at different points. The combination $\langle (\Omega ^{\prime})^{2}\rangle -\langle \Omega ^{\prime }\rangle ^{2}$ cancels disconnected diagrams. Applying these rules, $$T^{2}\chi _{2}=-VTa_{20}+V^2a_{11}^{2}\langle\delta\sigma^{2}\rangle
\label{dPdmu2flucns}$$A diagrammatic method helps to organize these calculations and distinguish loops arising from contractions. So far Eq. (\[dPdmu2flucns\]) is just the usual second moment of particle number, here expanded in terms of the fluctuations of the $\delta\sigma$ field.
Applying this procedure, the higher order susceptibilities are $$\begin{aligned}
\label{dPdmu3sigma}
T^{3}\chi _{3}=&
-VT^{2}a_{30}+3V^2Ta_{11}a_{21}\langle \delta \sigma^{2}\rangle
\\ \notag &
-V^3a_{11}^{3}\langle \delta \sigma^{3}\rangle -6V^{3}a_{11}^{2}a_{12}\langle \delta \sigma^{2}\rangle ^{2} \end{aligned}$$ and $$\begin{aligned}
\label{dPdmu4sigma}
T^{4}\chi _{4}=& -VT^{3}a_{40}+V^2T^{2}(4a_{31}a_{11}+3a_{21}^{2})\langle \delta\sigma^{2}\rangle \\
& -6V^3Ta_{21}a_{11}^{2}\langle\delta\sigma^{3}\rangle
+V^4a_{11}^{4}\big(\langle \delta \sigma^{4}\rangle -3\langle \delta \sigma^{2}\rangle ^{2}\big) \notag \\
& -12V^{3}T(a_{22}a_{11}^{2}+2a_{21}a_{11}a_{12})\langle \delta \sigma^{2}\rangle ^{2} \notag \\
& +24V^{4}(2a_{11}^{2}a_{12}^{2}+a_{11}^{3}a_{13})\langle \delta \sigma^{2}\rangle ^{3} \notag \\
& +24V^{4}a_{11}^{3}a_{12}\langle \delta \sigma^{3}\rangle \langle\delta \sigma^{2}\rangle \notag\end{aligned}$$ Each factor of $V$ comes from the $d^3x$ integration in [Eq.(\[aijexp\])]{}, and after inserting the expressions for $\langle\delta\sigma^{k}\rangle $, each $\chi_2,\chi_3,\chi_4\propto V$. The fluctuation moments $\langle\delta\sigma^{k}\rangle $ are derived by functional differention of Eq.(\[Veffflucns\]), $$\begin{aligned}
\kappa _{2}& =\langle \delta \sigma ^{2}\rangle =\frac{T}{V}\xi ^{2},\quad
\kappa _{3}=\langle \delta \sigma ^{3}\rangle =-2\lambda _{3}\frac{T^{2}}{V^{2}}\xi ^{6} \label{kappa4} \\
\kappa _{4}& =\langle \delta \sigma ^{4}\rangle -3\langle \delta \sigma
^{2}\rangle ^{2}=6\frac{T^{3}}{V^{3}}\big(2(\lambda _{3}\xi )^{2}-\lambda
_{4}\big)\xi ^{8}\end{aligned}$$ The point is that the $a_{jk}$ coefficients weight how the $\delta\sigma$ correlations contribute to the higher order susceptibilities, $\chi_2,\chi_3...$. Moveover, the $a_{jk}$ have their own $\xi$ dependence, which can be estimated analytically and model-independently, as well as compared with model studies. For example, we find that $a_{11}=m^2\partial v/\partial\mu$ scales $\sim\xi^{-1}$ near the critical point. To compare to a given solvable model (such as the GN model below), the coupling constants $m_\sigma^2,\lambda_3,\lambda_4...$ are calculated from the model’s effective potential and then their $\mu$-derivatives evaluated yielding $a_{kj}$ coefficients.
The third moment $\chi_{3}$ has been studied in the NJL model and found to be negative around the phase boundary [@Asakawa:2009aj]. In agreement with power-counting $\xi $ only in the $\delta\sigma$ correlators [@Stephanov:2008qz], the behavior of $m_{1}$ near the critical point can be explained by focusing on $\langle\delta\sigma^{3}\rangle $ and hence the function $\kappa_{3}(\mu ,T)$: In this case, estimating the $\xi$ dependence of the $a_{jk}$ coefficients in Eq. (\[dPdmu3sigma\]) reveals that the $a_{11}^3\kappa_3$ term scales with the largest positive power of $\xi$.
However, for $\chi_{4}$ there are many terms of the same (tree-level) order in the perturbation theory. Taking into account the $\xi$ dependence of the coefficients, several contributions, including those represented by the diagrams in Fig. \[fig:diags\], scale with the same power of $\xi$ as the $\kappa_{4}$ term. Although fewer $\sigma$ propagators are visible in some of these diagrams, the coefficient functions $a_{11},a_{12},$ and $a_{13}$ all have important $m_{\sigma}$ dependence. Looking at $\xi$-scaling, we find all three terms $\langle\delta\sigma^2\rangle^2$, $\langle\delta\sigma^2\rangle^3$, and $\langle\delta\sigma^2\rangle\langle\delta\sigma^3\rangle$ are approximately as relevant as the $\kappa_4$ term. These analyses are supported by separately evaluating these terms in the GN model.
(1a) ![image](chi3lambda3.1){width="70pt"} (1b) ![image](chi3xi2.1){width="70pt"} (2a) ![image](chi4lambda4.1){width="70pt"} (2b) ![image](chi4lambda3sq.1){width="70pt"} (2c) ![image](chi4xi31.1){width="70pt"} (2d) ![image](chi4xi32.1){width="70pt"} (2e) ![Diagrams 1a,1b give leading contributions to $\chi_3$. Diagrams 2(a-e) are some of the leading contributions to $\chi_4$. Omitted are diagrams involving multiple $\mu$ derivatives at the same point.[]{data-label="fig:diags"}](chi4lambda3xi.1 "fig:"){width="70pt"}
Next, to see how the $\sigma $ fluctuations are impacted by the CEP, we investigate $\kappa _{3}(\mu ,T)$ and $\kappa _{4}(\mu ,T)$ as functions on the phase diagram. With $J\rightarrow 0$, the unbroken phase is where $\langle \sigma \rangle =0$, and in this case $\lambda _{2n}=g_{2n}$. Odd terms are zero, in particular $\lambda _{3}\equiv 0$ in the unbroken phase, so $\kappa _{4}<0$ wherever $\lambda _{4}=g_{4}>0$, that is, above the second order phase transition line. In the symmetry broken phase, $$\begin{aligned}
2(\lambda _{3}\xi )^{2}-\lambda _{4}& =\frac{4}{\sqrt{D}}\left( (g_{4}-2\sqrt{D})^{2}+D\right) , \label{kappa4broken} \\
& D=g_{4}^{2}-4g_{2}g_{6}>0\quad (J=0) \notag\end{aligned}$$Here $D$ is the algebraic discriminant obtained when solving the gap equation for the extrema, and it is positive in the broken phase, corresponding to real, nontrivial ($\sigma \neq 0$) solutions. Therefore, with $J=0$, $\kappa_{4}$ is positive definite in the broken phase, and the $\kappa_4<0$ region is defined by the conditions $g_{2}>0$ and $g_{4}>0$, occurring only in the unbroken phase. For concreteness, this is illustrated in the GN model, Figure \[fig:GNphases\].
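The positivity claim can be spot-checked numerically. The following sketch (our own illustration, not part of the original analysis; the couplings are drawn at random, with $g_6>0$ assumed for stability) evaluates the right-hand side of Eq. (\[kappa4broken\]) and confirms that it is positive whenever $D>0$:

```python
import math
import random

# Spot-check (not a proof) of Eq. (kappa4broken): whenever the discriminant
# D = g4^2 - 4*g2*g6 is positive, the combination
#   2*(lambda3*xi)^2 - lambda4 = (4/sqrt(D)) * ((g4 - 2*sqrt(D))**2 + D)
# is strictly positive, so kappa_4 > 0 throughout the broken phase at J = 0.
def broken_phase_combination(g2, g4, g6):
    D = g4 ** 2 - 4.0 * g2 * g6
    if D <= 0.0:
        return None  # no real nontrivial extrema: outside the broken phase
    return (4.0 / math.sqrt(D)) * ((g4 - 2.0 * math.sqrt(D)) ** 2 + D)

random.seed(0)
hits = 0
for _ in range(10_000):
    g2 = random.uniform(-5.0, 5.0)
    g4 = random.uniform(-5.0, 5.0)
    g6 = random.uniform(0.1, 5.0)  # g6 > 0 for a bounded potential
    val = broken_phase_combination(g2, g4, g6)
    if val is not None:
        assert val > 0.0
        hits += 1
print(hits, "random draws with D > 0; the combination was positive in all of them")
```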
Turning on $J\neq 0$ produces a continuous change in the $\lambda_i$. In particular, the $\kappa_4=0$ lines, bounding the $\kappa_4<0$ region, move continuously away from their $J=0$ limits, and continue to obey the constraint “remembered” from the $J=0$ theory.
To see this, first recall that the tricritical point anchors one corner of the $\kappa_{4}<0$ region, and in the $J\neq 0$ theory the critical end point continues to do so [@Stephanov:2011pb]. The reason is that $\kappa_{4}$, Eq.(\[kappa4\]), has a local minimum where $\lambda_{3}=0$. For $J=0$, $\lambda_{3}=0$ holds throughout the unbroken phase, but for any fixed $J\neq 0$, the relation $\lambda_{3}(g_{2},g_{4},...)=0$ is an equation whose solution defines a line in the $\mu -T$ plane. The $\lambda_{3}=0$ line must pass through the critical end point. The differing trajectories of the phase boundary and the $\lambda_{3}=0$ line are seen in the GN model, Figure \[fig:GNphases\].
The critical end point is located by the conditions $m_{\sigma}^{2}=\lambda_{3}=0$, which means the coefficients $g_{2n}$ satisfy [@Stephanov:1998dy] $$g_{2}=5g_{6}v^{4},~~g_{4}=-\frac{10}{3}g_{6}v^{2},~~
v^{5}=\frac{3}{8}\frac{J}{g_{6}}\quad \text{at the CEP} \label{CEPg2n}$$The vev $v=\langle \sigma \rangle $ is nonzero, as expected, and as the symmetry breaking is turned off, $J\rightarrow 0$, these equations return to their $J=0$ limits. Since $v^{2}>0$, the CEP always shifts to the southeast, into the fourth quadrant relative to the TCP of the $J=0$ theory at $g_{2}=g_{4}=0$.
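Eq. (\[CEPg2n\]) is straightforward to evaluate. The short sketch below (our own illustration; the values of $J$ and $g_6$ are arbitrary) computes the CEP couplings and checks the southeast shift, $g_2>0>g_4$, together with the $J\to 0$ limit:

```python
def cep_location(J, g6):
    """Couplings at the critical end point from Eq. (CEPg2n):
    v^5 = (3/8) J/g6,  g2 = 5 g6 v^4,  g4 = -(10/3) g6 v^2."""
    v = (3.0 * J / (8.0 * g6)) ** 0.2  # the nonzero vev at the CEP
    g2 = 5.0 * g6 * v ** 4
    g4 = -(10.0 / 3.0) * g6 * v ** 2
    return v, g2, g4

# Southeast shift: g2 > 0 and g4 < 0 for every J > 0 ...
for J in (1.0, 0.1, 0.01, 1e-6):
    v, g2, g4 = cep_location(J, g6=1.0)
    assert v > 0.0 and g2 > 0.0 > g4
    print(f"J = {J:g}:  v = {v:.4f},  g2 = {g2:.4e},  g4 = {g4:.4e}")
# ... and (g2, g4) -> (0, 0), the TCP of the J = 0 theory, as J -> 0.
```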
To locate the $\lambda_{3}=0$ line, we relax the condition on $m_{\sigma }^{2}$ and find that $\lambda_{3}=0$ is the set of points satisfying $$g_{2}=\frac{7}{3}g_{6}v^{4}+\frac{J}{v},\quad g_{4}=-\frac{10}{3}g_{6}v^{2}
\label{lambda3zero}$$The $\lambda_{3}=0$ line leaves the CEP parallel to the first order line, and hence proceeds in the direction of decreasing $g_{4}$. With $g_{2}>0$ and $g_{4}<0$ near the critical point (Eq.(\[CEPg2n\])), the relation Eq.(\[lambda3zero\]) requires that $v$ decreases along the $\lambda_{3}=0$ line. In the high $T$ limit, $v\rightarrow 0$, so that the $\lambda_{3}=0$ line asymptotes to $g_{4}=0$ from below. Thus, from Eq.(\[lambda3zero\]) we deduce that the $\lambda_{3}=0$ line typically cannot proceed close to the $\mu =0$ axis, since that would require that the tricritical point of the $J=0$ theory be near the $\mu =0$ axis. The $\kappa_{4}<0$ region must migrate toward higher $T$ and $\mu $ together with the critical end point.
The high $T$, low $\mu$ behaviour of $\kappa_4$ is given by expanding for small $v$: $2(\lambda_3\xi)^2-\lambda_4= -g_4+2\big(\frac{(3g_4)^2}{g_2}-5g_6\big)v^2+\mathcal{O}(v^4)$, which is valid where $g_2,g_4>0$, far away from the lines where $g_2$ and $g_4$ vanish. Approaching from high $T$, $\kappa_4$ starts out negative just as in the $J=0$ theory, and becomes positive just where the vev $v$ becomes large enough that the second term starts to win over the first. Therefore, as the magnitude of explicit breaking increases, enhancing the order parameter, the $\kappa_4=0$ line and the $\kappa_4<0$ region move farther from the phase boundary.
We demonstrate the features derived above in the phase diagram and susceptibilities of the GN model. The fermion number susceptibilities behave very similarly to those in other models such as PNJL [@Skokov:2011rq]. The GN model comprises $N$ fermions in 1 spatial dimension with bare mass $m_{0}$ and a four-fermion interaction $\propto g^{2}$, and in the large $N$ limit has a rich phase structure [@Schnetz:2005ih]. The physical mass $m$ is given by $m\gamma =(\pi/Ng^{2})m_{0}$ where $\gamma =\pi /(Ng^{2})-\ln \Lambda /m$ is the parameter controlling the magnitude of explicit symmetry breaking. At small $\mu ,T$, there is a chiral condensate $\langle \bar{\psi}\psi \rangle $ and the order parameter is the effective mass $M=m_{0}-g^{2}N\langle \bar{\psi}\psi \rangle $. The effective potential is a function of $M$, and we focus on the region above and on the low $\mu $ side of the critical point [@Schnetz:2005ih].
![Phase diagram of the GN model, with phase boundaries and TCP of the $\protect\gamma=0$ theory and the CEP of the $\protect\gamma=0.1$ theory. The $\protect\kappa_4<0$ region of the $\protect\gamma=0$ theory is above the second order (green) line and left of the dashed (blue) line that joins the boundary at the TCP. The $\protect\kappa_4<0$ region of the $\protect\gamma=0.1$ theory is delineated by the dot-dashed (red) line, and the $\protect\lambda_3=0$ line is the solid (red) line inside this region. []{data-label="fig:GNphases"}](lam3-kappa4-cross-0-new.eps){width="45.00000%"}
The phase diagram behaves as described model-independently: For $\gamma\rightarrow 0$, there is a tricritical point and second order line extending to the $\mu =0$ axis. For $\gamma \neq 0$, the second order line vanishes into a crossover and the critical end point shifts increasingly to the southeast away from the former tricritical point. Figure \[fig:GNphases\] compares the phase diagrams of the GN model for $\gamma =0$ and $\gamma =0.1$. For $\gamma \neq 0$ the phase boundary is determined as the peak in the chiral susceptibility, $$\chi _{M}=\frac{\partial \langle \bar{\psi}\psi \rangle }{\partial m}=\frac{1}{m}\left( M-T\frac{\partial M}{\partial T}-\mu \frac{\partial M}{\partial
\mu }\right) \label{chiM}$$as is used in lattice QCD studies [@chiMlattice]. The phase boundary stays near the critical line of the $\gamma =0$ theory, which is robust for different values of $\gamma $. All our results are shown in units of $m=1$.
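The peak-finding procedure behind Eq. (\[chiM\]) can be illustrated with a mock effective-mass surface. In the sketch below (our own toy example: the tanh profile and all numerical parameters are hypothetical, not the GN solution), $\chi_M$ is evaluated by finite differences and the crossover temperature is read off as the peak of $\chi_M$ at fixed $\mu$:

```python
import math

# Toy crossover profile (hypothetical, not the GN solution): the effective
# mass M drops from ~m to ~0 around a mu-dependent pseudo-critical T.
m = 1.0

def M(mu, T):
    return m * 0.5 * (1.0 - math.tanh((T - 0.6 + 0.2 * mu ** 2) / 0.05))

def chi_M(mu, T, h=1e-5):
    # Eq. (chiM): chi_M = (M - T dM/dT - mu dM/dmu) / m, via central differences
    dMdT = (M(mu, T + h) - M(mu, T - h)) / (2.0 * h)
    dMdmu = (M(mu + h, T) - M(mu - h, T)) / (2.0 * h)
    return (M(mu, T) - T * dMdT - mu * dMdmu) / m

# The crossover line at fixed mu is read off as the peak of chi_M in T.
Ts = [0.30 + 0.001 * k for k in range(400)]
T_peak = max(Ts, key=lambda T: chi_M(0.5, T))
print(f"pseudo-critical temperature at mu = 0.5: T ~ {T_peak:.3f}")
```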
For $\gamma=0$, the ${\kappa_4}<0$ region is delineated by the second-order line and the $g_4=0$ line. For $\gamma=0.1$, it is delineated by the dot-dashed line with a cusp at the CEP. Varying $\gamma$, we see that the ${\kappa_4}<0$ region evolves continuously as a function of $\gamma$ from its $\gamma\to 0$ limit. The $\lambda_3=0$ line leaves the CEP parallel to the first order line, and the $\kappa_4<0$ region is approximately symmetric around it very near the CEP. However, the $\lambda_3=0$ line then asymptotes to the $g_4=0$ line, which pulls the $\kappa_4<0$ region away from the phase boundary.
![Upper frame: Density plot of $m_2$ in the $\protect\mu-T$ plane with $\protect\gamma=0.1$. The white lines indicate where $m_2=0$, and in the (red) wedge between these lines $m_2<0$. The first order line is the solid heavy line, and the crossover line is the dotted line, determined by the maximum of Eq.(\[chiM\]). The dashed lines are hypothetical freeze-out curves, color-coded to correspond to the lines in the lower frame, which shows $m_2$ along these freeze-out curves.[]{data-label="fig:c2"}](m2_muT_folines.eps "fig:"){width="45.00000%"}![](m2_T.eps "fig:"){width="45.00000%"}
We plot $m_2$ on the phase diagram in Figure \[fig:c2\]. The negative $m_2$ region forms a wedge opening up from the CEP and extends deeper across the phase boundary than the $\kappa_4<0$ region. Negative $m_2$ could be accessible to freeze-out at $\mu<\mu_{\mathrm{CEP}}$, and the signature would be a minimum followed by a rapid increase to a positive peak, as seen in the (green) freeze-out curve closest to the phase boundary. Moving the freeze-out progressively away from the phase boundary, both the minimum and the maximum of $m_2$ decrease in magnitude. Thus it is possible that $m_2$ is only positive along the freeze-out curve (for example the lowest curve). Its maximum then provides a residual signal of proximity to the CEP, since the height of the peak decreases rapidly away from the phase boundary. Comparing the upper and lower frames of Fig. \[fig:c2\], we see that the peak in $m_2$ is always at a temperature higher (or $\mu$ lower) than the CEP.
Strikingly, the black line is in good qualitative agreement with lattice and HIC results. However, non-monotonic behaviour of $m_{2}$ along a single freeze-out line is insufficient to establish proximity to the CEP. Many possible freeze-out curves can be drawn that cross several contours of constant $m_{2}$ twice, and each will display a local maximum of $m_{2}$ as a function of $\mu $ or the collision energy. For this reason, it will be important to combine several probes of the phase diagram, and one way to start is to compare $m_{2}$ and $m_{1}$.
![$m_1$ along the hypothetical freeze-out lines given in the upper frame of Fig. \[fig:c2\].\
[]{data-label="fig:c1"}](m1_T.eps){width="45.00000%"}
In Figure \[fig:c1\], we plot $m_{1}$ along the same hypothetical freeze-out curves. Like $m_{2}$, it displays a positive peak close to the CEP, and the magnitude of the maximum decreases for freeze-out lines farther away from the phase boundary. Again, the peak is at higher temperature (lower $\mu $) than the CEP. This fact appears to be universal, as it is seen in an Ising-model evaluation of $\kappa _{3}$ and $\kappa _{4}$ similar to [@Stephanov:2011pb]. Despite many similarities, the topography of the peaks in $m_{1}$ and $m_{2}$ differs in detail. Combining measurements of these two observables along the freeze-out curve, we may be able to extract more information about the CEP location.
To conclude, we have studied the fermion susceptibilities $\chi_2,\chi_3,\chi_4$ analytically using a low energy effective theory for the order parameter field and numerically using the Gross-Neveu model as an example system. The model-independent analysis shows that larger quark mass pushes the critical end point to higher $\mu$, and there are constraints on the position of the CEP relative to the tricritical point of the zero quark mass theory.
In agreement with previous work, nonmonotonic behaviour of $m_{1}$ and $m_{2}$ appears as a signal of the critical region in the phase diagram. [ Consistent with experimental data, we find that $m_2$ first decreases as a function of chemical potential $\mu$, which is a remnant of the $m_2<0$ region above the critical point. Seeing a large peak in $m_2$ at larger $\mu$/smaller $\sqrt{s}$ would support this explanation of the data. However, it is necessary to accumulate as much corroborating evidence as possible to preclude a false positive, and we note that in this same region $m_1$ is also expected to peak and decrease again. The peaks in $m_{1},m_{2}$ are typically not at the point of closest approach, and the temperatures of the peaks are ordered $T_{\mathrm{max},m_{1}}>T_{\mathrm{max},m_{2}}>T_{\mathrm{CEP}}$, a fact which might be leveraged to indicate the location of the critical point]{}.
To the extent that the fireball is near thermodynamic equilibrium at freeze-out, the model independent features we find can be compared to experiment. It may be possible to refine the predictions by taking into account expansion dynamics[@slowing]. More information may be extracted from the experimental data by combining measurements of $m_{1}$ and $m_{2}$ along the single available freeze-out curve (and possibly other curves available from lattice).
0.2cm *Acknowledgments*: JWC is supported in part by the MOST, NTU-CTS, and the NTU-CASTS of R.O.C. J.D. is supported in part by the Major State Basic Research Development Program in China (Contract No. 2014CB845406), National Natural Science Foundation of China (Projects No. 11105082).
[99]{} M. A. Stephanov, PoS LAT **2006**, 024 (2006) \[hep-lat/0701002\]; K. Fukushima and C. Sasaki, Prog. Part. Nucl. Phys. **72**, 99 (2013) \[arXiv:1301.6377 \[hep-ph\]\]; S. Gupta, X. Luo, B. Mohanty, H. G. Ritter and N. Xu, Science [**332**]{}, 1525 (2011) \[arXiv:1105.3934 \[hep-ph\]\]; and references therein.
P. de Forcrand and O. Philipsen, Phys. Rev. Lett. **105**, 152001 (2010) \[arXiv:1004.3144 \[hep-lat\]\]; A. Li, A. Alexandru and K. F. Liu, Phys. Rev. D **84**, 071503 (2011) \[arXiv:1103.3045 \[hep-ph\]\].
R. V. Gavai and S. Gupta, Phys. Lett. B **696**, 459 (2011) \[arXiv:1001.3796 \[hep-lat\]\].
M. A. Stephanov, K. Rajagopal and E. V. Shuryak, Phys. Rev. Lett. **81**, 4816 (1998) \[hep-ph/9806219\]. M. A. Stephanov, K. Rajagopal and E. V. Shuryak, Phys. Rev. D **60**, 114028 (1999) \[hep-ph/9903292\].
M. M. Aggarwal *et al.* \[STAR Collaboration\], Phys. Rev. Lett. **105**, 022302 (2010) \[arXiv:1004.4959 \[nucl-ex\]\].
L. Adamczyk *et al.* \[STAR Collaboration\], Phys. Rev. Lett. **112**, 032302 (2014) \[arXiv:1309.5681 \[nucl-ex\]\].
B. Berdnikov and K. Rajagopal, Phys. Rev. D **61**, 105017 (2000) \[hep-ph/9912274\]. C. Nonaka and M. Asakawa, Phys. Rev. C **71**, 044904 (2005) \[nucl-th/0410078\]. C. Athanasiou, K. Rajagopal and M. Stephanov, Phys. Rev. D **82**, 074008 (2010) \[arXiv:1006.4636 \[hep-ph\]\].
Y. Hatta and M. A. Stephanov, Phys. Rev. Lett. [**91**]{}, 102003 (2003) \[Erratum-ibid. [**91**]{}, 129901 (2003)\] \[hep-ph/0302002\].
M. A. Stephanov, Phys. Rev. Lett. **102**, 032301 (2009) \[arXiv:0809.3450 \[hep-ph\]\].
M. A. Stephanov, Phys. Rev. Lett. **107**, 052301 (2011) \[arXiv:1104.1627 \[hep-ph\]\].
M. Asakawa, S. Ejiri and M. Kitazawa, Phys. Rev. Lett. **103**, 262301 (2009) \[arXiv:0904.2089 \[nucl-th\]\].
V. Skokov, B. Friman and K. Redlich, Phys. Lett. B **708**, 179 (2012) \[arXiv:1108.3231 \[hep-ph\]\].
O. Schnetz, M. Thies and K. Urlichs, Annals Phys. **321**, 2604 (2006) \[hep-th/0511206\].
Y. Aoki, Z. Fodor, S. D. Katz and K. K. Szabo, Phys. Lett. B **643**, 46 (2006) \[hep-lat/0609068\]. A. Bazavov, T. Bhattacharya, M. Cheng, *et al.* Phys. Rev. D **85**, 054503 (2012) \[arXiv:1111.1710 \[hep-lat\]\]. L. Levkova, PoS LATTICE **2011**, 011 (2011) \[arXiv:1201.1516 \[hep-lat\]\].
| {
"pile_set_name": "ArXiv"
} |
---
abstract: |
[ We study the regularity of a Markov semigroup $(P_t)_{t>0}$, that is, when $P_t(x,dy)=p_t(x,y)dy$ for a suitable smooth function $p_t(x,y)$. This is done by transferring the regularity from an approximating Markov semigroup sequence $(P^n_t)_{t>0}$, $n\in{\mathbb{N}}$, whose associated densities $p^n_t(x,y)$ are smooth and can blow up as $n\to\infty$. We use an interpolation type result and we show that if there exists a good equilibrium between the blow-up and the speed of convergence, then $P_{t}(x,dy)=p_{t}(x,y)dy$ and $p_{t}$ has some regularity properties. ]{}
author:
- |
[Vlad Bally]{}[^1]\
[Lucia Caramellino]{}[^2]
title: Transfer of regularity for Markov semigroups
---
*Keywords:* Markov semigroups; regularity of probability laws; interpolation spaces.
*2010 MSC:* 60J25, 46B70.
Introduction
============
In this paper we study Markov semigroups, that is, strongly continuous and positive semigroups $P_{t}$, ${t\geq 0}$, such that $P_t1=1$. We take as domain the Schwartz space ${\mathcal{S}({\mathbb{R}}^d)}$ of the $C^\infty({\mathbb{R}}^d)$ functions all of whose derivatives are rapidly decreasing.
The link with Markov processes gives the representation $$P_{t}f(x)=\int_{{\mathbb{R}}^d}f(y)P_t(x,dy),\quad t\geq 0,\ f\in {\mathcal{S%
}({\mathbb{R}}^d)}.$$ We study here the regularity of a Markov semigroup, which is the property $%
P_t(x,dy)=p_t(x,y)dy$, $t>0$, for a suitable smooth function $p_t(x,y)$, by transferring the regularity from an approximating Markov semigroup sequence $%
P^n_t$, $n\in{\mathbb{N}}$.
To be more precise, let $P_{t}$ be a Markov semigroup on ${\mathcal{S}({%
\mathbb{R}}^{d})}$ with infinitesimal operator $L$ and let $P_{t}^{n},n\in {%
\mathbb{N}}$, be a sequence of Markov semigroups on ${\mathcal{S}({\mathbb{R}%
}^{d})}$ with infinitesimal operators $L_{n}$, $n\in {\mathbb{N}}$. Classical results (the Trotter–Kato theorem, see e.g. [@EK]) assert that if $L_{n}\rightarrow L$, then $P_{t}^{n}\rightarrow P_{t}$. The problem that we address in this paper is the following. We suppose that $P_{t}^{n}$ has the regularity (density) property $P_{t}^{n}(x,dy)=p_{t}^{n}(x,y)dy$ with $p_{t}^{n}\in C^{\infty }({\mathbb{R}}^{d}\times {\mathbb{R}}^{d})$, and we ask under which hypotheses this property is inherited by the limit semigroup $P_{t}$. If we knew that $p_{t}^{n}$ converges to some $p_{t}$ in a sufficiently strong sense, we would of course obtain $P_{t}(x,dy)=p_{t}(x,y)dy$. But in our framework $p_{t}^{n}$ does not converge: here, $p_{t}^{n}$ can even blow up as $n\rightarrow \infty $. However, if we can find a good equilibrium between the blow-up and the speed of convergence, then we are able to conclude that $P_{t}(x,dy)=p_{t}(x,y)dy$ and that $p_{t}$ has some regularity properties. This is an interpolation type result.
Roughly speaking our main result is as follows. We assume that the speed of convergence is controlled in the following sense: there exists some $a\in {%
\mathbb{N}}$ such that for every $q\in {\mathbb{N}}$ $$\left\Vert (L-L_{n})f\right\Vert _{q,\infty }\leq \varepsilon _{n}\left\Vert
f\right\Vert _{q+a,\infty } \label{i1}$$Here $\left\Vert f\right\Vert _{q,\infty }$ is the norm in the standard Sobolev space $W^{q,\infty }$. In fact we will work with weighted Sobolev spaces, and this is an important point. We will also assume a similar hypothesis for the adjoint $(L-L_{n})^{\ast }$ (see Assumption \[A1A\*1\] for a precise statement).
Moreover we assume a “propagation of regularity” property: there exist $%
b\in {\mathbb{N}}$ and $\Lambda _{n}\geq 1$ such that for every $q\in {%
\mathbb{N}}$$$\left\Vert P_{t}^{n}f\right\Vert _{q,\infty }\leq \Lambda _{n}\left\Vert
f\right\Vert _{q+b,\infty } \label{i2}$$Here also we will work with weighted Sobolev norms. A similar hypothesis is supposed to hold for the adjoint $P_{t}^{\ast ,n}$ (see Assumption \[A2A\*2\] for a precise statement).
Finally we assume the following regularity property: for every $t\in (0,1]$, $P_{t}^{n}(x,dy)=p_{t}^{n}(x,y)dy$ with $p_{t}^{n}\in C^{\infty }({\mathbb{R}%
}^{d}\times {\mathbb{R}}^{d})$ and for every $\kappa \geq 0$, $t\in (0,1]$, $$\left\vert \partial _{x}^{\alpha }\partial _{y}^{\beta
}p_{t}^{n}(x,y)\right\vert \leq \frac{C}{(\lambda _{n}t)^{\theta
_{0}(\left\vert \alpha \right\vert +\left\vert \beta \right\vert +\theta
_{1})}}\times \frac{(1+\left\vert x\right\vert ^{2})^{\pi (\kappa )}}{%
(1+\left\vert x-y\right\vert ^{2})^{\kappa }}. \label{i3}$$Here, $\alpha ,\beta $ are multi-indexes and $\partial _{x}^{\alpha
},\partial _{y}^{\beta }$ are the corresponding differential operators. Moreover, $\pi (\kappa )$, $\theta _{0}$ and $\theta _{1}$ are suitable parameters and $\lambda _{n}\rightarrow 0$ as $n\rightarrow \infty $ (we refer to Assumption \[H3\]).
By (\[i1\])–(\[i3\]), the rate of convergence is controlled by $%
\varepsilon _{n}\rightarrow 0$ and the blow up of $p_{t}^{n}$ is controlled by $\lambda _{n}^{-\theta _{0}}\rightarrow \infty $. So the regularity property may be lost as $n\rightarrow \infty $. However, if there is a good equilibrium between $\varepsilon _{n}\rightarrow 0$ and $\lambda
_{n}^{-\theta _{0}}\rightarrow \infty $ and $\Lambda _{n}\rightarrow \infty $ then the regularity is saved: we ask that for some $\delta >0$ $$\overline{\lim_{n}}\frac{\varepsilon _{n}\Lambda _{n}}{\lambda _{n}^{\theta
_{0}(a+b+\delta )}}<\infty , \label{i4}$$the parameters $a$, $b$ and $\theta _{0}$ being given in (\[i1\]), (\[i2\]) and (\[i3\]) respectively. Then $P_{t}(x,dy)=p_{t}(x,y)dy$ with $p_{t}\in C^{\infty }({\mathbb{R}}^{d}\times {\mathbb{R}}^{d})$ and the following upper bound holds: for every $\varepsilon >0$ and $\kappa \in {\mathbb{N}}$ one may find some constants $C,\pi(\kappa)>0$ such that for every $(x,y)\in {\mathbb{R}}^d\times {\mathbb{R}}^{d}$$$\left\vert \partial _{x}^{\alpha }\partial _{y}^{\beta
}p_{t}(x,y)\right\vert \leq \frac{C}{t^{\theta _{0}(1+\frac{a+b}{\delta }%
)(\left\vert \alpha \right\vert +\left\vert \beta \right\vert
+2d+\varepsilon )}}\times \frac{(1+\left\vert x\right\vert
^{2})^{\pi(\kappa) }}{(1+\left\vert x-y\right\vert
^{2})^{\kappa }}. \label{i3'}$$This is the transfer of regularity that we mention in the title and which is stated in Theorem \[TransferBIS-new\]. The proof is based on a criterion of regularity for probability measures given in [@[BC]], which is close to interpolation spaces techniques.
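To make the balance condition (\[i4\]) concrete, consider polynomial model sequences (our own toy example, not taken from the paper): $\varepsilon_n=n^{-\alpha}$, $\Lambda_n=n^{\gamma}$, $\lambda_n=n^{-\beta}$. Then the ratio in (\[i4\]) equals $n^{-\alpha+\gamma+\beta\theta_0(a+b+\delta)}$, so the condition holds if and only if $\alpha\geq\gamma+\beta\theta_0(a+b+\delta)$. A quick numerical check, with all exponents chosen hypothetically:

```python
def ratio(n, alpha, beta, gamma, theta0, a, b, delta):
    # The quantity eps_n * Lambda_n / lambda_n**(theta0*(a+b+delta)) from (i4),
    # with the polynomial model sequences described above.
    eps_n = float(n) ** (-alpha)
    Lambda_n = float(n) ** gamma
    lam_n = float(n) ** (-beta)
    return eps_n * Lambda_n / lam_n ** (theta0 * (a + b + delta))

params = dict(beta=1.0, gamma=0.5, theta0=0.5, a=3, b=2, delta=1.0)
threshold = params["gamma"] + params["beta"] * params["theta0"] * (
    params["a"] + params["b"] + params["delta"])  # = 3.5 here

# alpha above the threshold: the ratio stays bounded (in fact tends to 0) ...
assert ratio(10 ** 6, alpha=4.0, **params) < ratio(10, alpha=4.0, **params) < 1.0
# ... while alpha below it makes the ratio blow up, and (i4) fails.
assert ratio(10 ** 6, alpha=3.0, **params) > ratio(10, alpha=3.0, **params)
print("threshold alpha =", threshold)
```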
A second result concerns a perturbation of the semigroup $P_{t}$ by adding a compound Poisson process: we prove that if $P_{t}$ verifies (\[i2\]) and (\[i3\]) then the perturbed semigroup still verifies (\[i3\]) – see Theorem \[J\]. A similar perturbation problem is discussed in [@[Z]] (but the arguments there are quite different).
The regularity criterion presented in this paper is tailored in order to handle the following example (which will be treated in a forthcoming paper). We consider the integro-differential operator $$Lf(x)=\<b(x),\nabla f(x)\>+\int_{E}\big(f(x+c(z,x))-f(x)-\<c(z,x),\nabla f(x)\>%
\big)d\mu (z) \label{i5}$$where $\mu $ is an infinite measure on the normed space $(E,\left\vert \circ
\right\vert _{E})$ such that $\int_{E}1\wedge \left\vert c(z,x)\right\vert
^{2}d\mu (z)<\infty .$ Moreover, we consider a sequence $\varepsilon
_{n}\downarrow 0$, and we denote$$A_{n}^{i,j}(x)=\int_{\{\left\vert z\right\vert _{E}\leq \varepsilon
_{n}\}}c^{i}(z,x)c^{j}(z,x)d\mu (z)$$and we define$$\begin{array}{rl}
L_{n}f(x)= & \displaystyle\<b(x),\nabla f(x)\>+\int_{\{\left\vert
z\right\vert _{E}\geq \varepsilon _{n}\}}(f(x+c(z,x))-f(x)-\<c(z,x),\nabla
f(x)\>)d\mu (z)\smallskip \\
& \displaystyle+\frac{1}{2}\mathrm{tr}(A_{n}(x)\nabla ^{2}f(x)).%
\end{array}
\label{i6}$$By Taylor’s formula, $$\left\Vert Lf-L_{n}f\right\Vert _{\infty }\leq \left\Vert f\right\Vert
_{3,\infty }\varepsilon _{n}\quad \mbox{with}\quad \varepsilon
_{n}=\sup_{x}\int_{\{\left\vert z\right\vert _{E}\leq \varepsilon
_{n}\}}\left\vert c(z,x)\right\vert ^{3}d\mu (z)$$Under the uniform ellipticity assumption $A_{n}(x)\geq \lambda _{n}$ for every $x\in {\mathbb{R}}^{d},$ the semigroup $P_{t}^{n}$ associated to $%
L_{n} $ has the regularity property (\[i3\]) with $\theta _{0}$ depending on the measure $\mu $. The speed of convergence in (\[i1\]), with $a=3$, is controlled by $\varepsilon _{n}\downarrow 0$. So, if (\[i4\]) holds, then we obtain the regularity of $P_{t}$ and the short time estimates (\[i3'\]).
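For a concrete feel of this rate, take $d=1$, $c(z,x)=z$ and the $\alpha$-stable-like intensity $\mu(dz)=|z|^{-1-\alpha}dz$ with $0<\alpha<2$ (our own example, not from the paper). Then the rate in (\[i1\]) is $\int_{\{|z|\le e_n\}}|z|^3\,d\mu(z)=2e_n^{3-\alpha}/(3-\alpha)$, which tends to $0$ with the truncation level $e_n$. The sketch below checks this closed form against numerical quadrature:

```python
def eps_rate(e_n, alpha, steps=200_000):
    # Midpoint rule for int_0^{e_n} z**3 * z**(-1-alpha) dz, doubled for the
    # symmetric negative half-line; the integrand is z**(2-alpha).
    h = e_n / steps
    s = sum(((k + 0.5) * h) ** (2.0 - alpha) for k in range(steps)) * h
    return 2.0 * s

alpha = 1.5
for e_n in (1.0, 0.1, 0.01):
    closed = 2.0 * e_n ** (3.0 - alpha) / (3.0 - alpha)
    assert abs(eps_rate(e_n, alpha) - closed) < 1e-3 * closed
    print(f"e_n = {e_n}:  rate ~ {closed:.3e}")
```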
The semigroup $P_{t}$ associated to $L$ corresponds to stochastic equations driven by the Poisson point measure $N_{\mu }(dt,dz)$ with intensity measure $\mu $, so the problem of the regularity of $P_{t}$ has been extensively discussed in the probabilistic literature. A first approach initiated by Bismut [@[Bi]], Leandre [@[L]] and Bichteler, Gravereaux and Jacod [@[BGJ]] (see also the recent monograph of Bouleau and Denis [@[BD]] and the bibliography therein), is done under the hypothesis that $E={\mathbb{%
R}}^{m}$ and $\mu (dz)=h(z)dz$ with $h\in C^{\infty }({\mathbb{R}}^{m}).$ Then one constructs a Malliavin type calculus based on the amplitude of the jumps of the Poisson point measure $N_{\mu }$ and employs this calculus in order to study the regularity of $P_{t}.$ A second approach initiated by Carlen and Pardoux [@[CP]] (see also Bally and Clément [@[BCl]]) follows the ideas in Malliavin calculus based on the exponential density of the jump times in order to study the same problem. Finally a third approach, due to Picard [@[P1]; @[P2]] (see also the recent monograph by Ishikawa [@[I]] for many references and developments in this direction), constructs a Malliavin type calculus based on finite differences (instead of standard Malliavin derivatives) and obtains the regularity of $P_{t}$ for a general class of intensity measures $\mu $ including purely atomic measures (in contrast with $\mu (dz)=h(z)dz)$. We stress that all the above approaches work under different non-degeneracy hypotheses, each of them corresponding to the specific noise that is used in the calculus. So in some sense we do not have a single problem but rather three different classes of problems. The common feature is that the strategy to solve the problem follows the ideas from Malliavin calculus based on some noise contained in $%
N_{\mu }.$ Our approach is completely different because, as described above, we use the regularization effect of $\mathrm{tr}(A_{n}(x)\nabla ^{2})$. This regularization effect may be exploited either by using the standard Malliavin calculus based on the Brownian motion or by using some analytical arguments. The approach that we propose in [@[BCW]] is probabilistic, and so employs the standard Malliavin calculus. But in any case, as mentioned above, the regularization effect vanishes as $n\rightarrow \infty $, and a supplementary argument based on the equilibrium given in (\[i4\]) is needed. We point out that the non-degeneracy condition $A_{n}(x)\geq \lambda _{n}>0$ is of the same nature as the one employed by J. Picard, so the problem we solve is in the same class.
The idea of replacing small jumps (the ones in $\{\left\vert z\right\vert _{E}\leq \varepsilon _{n}\}$ here) by a Brownian part (that is $\mathrm{tr}(A_{n}(x)\nabla ^{2})$ in $L_{n})$ is not new: it has been introduced by Asmussen and Rosinski in [@[AR]] and has been extensively employed in papers concerned with simulation problems: since there is a huge number of small jumps, they are difficult to simulate, and one approximates them by the Brownian part corresponding to $\mathrm{tr}(A_{n}(x)\nabla ^{2})$. See for example [@[A]; @[BK]; @[CD]] and many others. However, to our knowledge, this idea has not yet been used to study the regularity of $P_{t}$.
The paper is organized as follows. In Section \[sect:NotRes\] we give the notation and the main results mentioned above, and in Section \[sect:proofs\] we give the proof of these results. Section \[sect:reg\] is devoted to some preliminary results about regularity. Namely, in Section \[sect:3.1\] we recall and develop some results concerning regularity of probability measures, based on interpolation type arguments, coming from [@[BC]]. These are the main instruments used in the paper. In Section \[sect:3.2\] we prove a regularity result which is a key point in our approach. In fact, it allows us to handle the multiple integrals coming from the application of a Lindeberg method for the decomposition of $P_t-P^n_t$. The results stated in Section \[sect:NotRes\] are then proved in the subsections in which Section \[sect:proofs\] is split. Finally, in Appendices \[app:weights\], \[app:semi\] and \[app:ibp\] we prove some technical results used in the paper.
Notation and main results {#sect:NotRes}
=========================
Notation {#sect:notation}
--------
For a multi-index $\alpha =(\alpha _{1},...,\alpha _{m})\in \{1,...,d\}^{m}$ we denote $\left\vert \alpha \right\vert =m$ (the length of the multi-index) and $\,\partial ^{\alpha }$ is the derivative corresponding to $\alpha ,$ that is $\partial ^{\alpha _{m}}\cdots\partial ^{\alpha _{1}}$, with $%
\partial^{\alpha_i}=\partial_{x_{\alpha_i}}$. For $f\in C^{\infty }({\mathbb{%
R}}^{d}\times {\mathbb{R}}^{d})$, $(x,y)\in {\mathbb{R}}^{d}\times {\mathbb{R%
}}^{d}$ and two multi-indexes $\alpha$ and $\beta$, we denote by $\partial
_{x}^{\alpha }$ the derivative with respect to $x$ and by $\partial _{y}^{\beta }$ the derivative with respect to $y$.
Moreover, for $f\in C^{\infty }({\mathbb{R}}^{d})$ and $q\in {\mathbb{N}}$ we denote$$\left\vert f\right\vert _{q}(x)=\sum_{0\leq \left\vert \alpha \right\vert
\leq q}\left\vert \partial ^{\alpha }f(x)\right\vert . \label{NOT1}$$If $f$ is not a scalar function, that is, $f=(f^{i})_{i=1,\ldots ,d}$ or $%
f=(f^{i,j})_{i,j=1,\ldots ,d}$, we denote $\left\vert f\right\vert
_{q}=\sum_{i=1}^{d}\left\vert f^{i}\right\vert _{q}$ respectively $%
\left\vert f\right\vert _{q}=\sum_{i,j=1}^{d}\left\vert f^{i,j}\right\vert
_{q}.$
We will work with the weights $$\psi _{\kappa }(x)=(1+\left\vert x\right\vert ^{2})^{\kappa },\quad \kappa
\in {\mathbb{Z}} . \label{NOT2}$$The following properties hold:
- for every $\kappa\geq \kappa^{\prime }\geq 0$, $$\psi _{\kappa }(x) \leq \psi _{\kappa ^{\prime }}(x); \label{NOT3a}$$
- for every $\kappa\geq 0$, there exists $C_{\kappa }>0$ such that $$\psi _{\kappa }(x)\leq C_{\kappa }\psi _{\kappa }(y)\psi _{\kappa }(x-y);
\label{NOT3b}$$
- for every $\kappa \geq 0$, there exists $C_{\kappa }>0$ such that for every $\phi \in C_{b}^{\infty }({\mathbb{R}}^{d})$, $$\psi _{\kappa }(\phi (x))\leq C_{\kappa }\psi _{\kappa }(\phi
(0))(1+\left\Vert \nabla \phi \right\Vert _{\infty }^{2})^{\kappa }\psi
_{\kappa }(x); \label{NOT3d}$$
- for every $q\in {\mathbb{N}}$ there exists $\overline{C}_{q}\geq
\underline{C}_{q}>0$ such that for every $\kappa \in {\mathbb{R}}$ and $f\in
C^{\infty }({\mathbb{R}}^{d})$, $$\underline{C}_{q}\psi _{\kappa }\left\vert f\right\vert _{q}(x)\leq
\left\vert \psi _{\kappa }f\right\vert _{q}(x)\leq \overline{C}_{q}\psi
_{\kappa }\left\vert f\right\vert _{q}(x). \label{NOT3c}$$
Note that (\[NOT3a\])–(\[NOT3d\]) are immediate, whereas (\[NOT3c\]) is proved in Appendix \[app:weights\] (see Lemma \[Psy1\]).
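Property (\[NOT3b\]) follows from the elementary bound $1+|x|^{2}\leq 2(1+|y|^{2})(1+|x-y|^{2})$, so one may take $C_{\kappa}=2^{\kappa}$ for $\kappa\geq 0$. The following sketch (our own numerical illustration) checks this explicit constant on random points:

```python
import random

def psi(kappa, x):
    # The weight psi_kappa(x) = (1 + |x|^2)^kappa from (NOT2)
    return (1.0 + sum(c * c for c in x)) ** kappa

random.seed(1)
d, kappa = 3, 2.0
for _ in range(10_000):
    x = [random.uniform(-10.0, 10.0) for _ in range(d)]
    y = [random.uniform(-10.0, 10.0) for _ in range(d)]
    x_minus_y = [a - b for a, b in zip(x, y)]
    # (NOT3b) with the explicit constant C_kappa = 2**kappa
    assert psi(kappa, x) <= 2.0 ** kappa * psi(kappa, y) * psi(kappa, x_minus_y)
print("(NOT3b) holds with C_kappa = 2**kappa on 10000 random draws")
```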
For $q\in {\mathbb{N}}$, $\kappa \in {\mathbb{R}}$ and $p\in (1,\infty ]$ (we stress that we include the case $p=+\infty $), we denote by $\Vert \cdot \Vert _{p}$ the usual norm in $L^{p}({\mathbb{R}}^{d})$ and set $$\left\Vert f\right\Vert _{q,\kappa ,p}=\left\Vert \left\vert \psi _{\kappa
}f\right\vert _{q}\right\Vert _{p}. \label{NOT4}$$We define $W^{q,\kappa ,p}$ to be the closure of $C^{\infty }({\mathbb{R}}%
^{d})$ with respect to the above norm. If $\kappa =0$ we just denote $%
\left\Vert f\right\Vert _{q,p}=\left\Vert f\right\Vert _{q,0,p}$ and $%
W^{q,p}=W^{q,0,p}$ (which is the usual Sobolev space). So, we are working with weighted Sobolev spaces. The following properties hold:
- for every $q\in {\mathbb{N}}$ there exists $\overline{C}_{q}\geq
\underline{C}_{q}>0$ such that for every $\kappa \in {\mathbb{R}}$, $p>1$ and $f\in W^{q,k,p}({\mathbb{R}}^{d})$, $$\underline{C}_{q}\Vert \psi _{\kappa }|f|_{q}\Vert _{p}\leq \Vert f\Vert
_{q,\kappa ,p}\leq \overline{C}_{q}\Vert \psi _{\kappa }|f|_{q}\Vert _{p};
\label{NOT4a}$$
- for every $q\in {\mathbb{N}}$ and $p>1$ there exists $C_{q,p}>0$ such that for every $\kappa \in {\mathbb{R}}$ and $f\in W^{q,k,p}({\mathbb{R}}%
^{d})$, $$\left\Vert f\right\Vert _{q,\kappa ,p}\leq C_{q,p}\left\Vert f\right\Vert
_{q,\kappa +d,\infty } \label{NOT5a}$$and if $p>d$, $$\left\Vert f\right\Vert _{q,\kappa ,\infty }\leq C_{q,p}\left\Vert
f\right\Vert _{q+1,\kappa ,p}; \label{NOT5b}$$
- for $\kappa ,\kappa ^{\prime }\in {\mathbb{R}}$, $q,q^{\prime }\in {%
\mathbb{N}}$, $p\in (1,\infty ]$ and $U:C^{\infty }({\mathbb{R}}%
^{d})\rightarrow C^{\infty }({\mathbb{R}}^{d})$, the following two assertions are equivalent: there exists a constant $C_{\ast }\geq 1$ such that for every $f$, $$\left\Vert Uf\right\Vert _{q,\kappa ,\infty }\leq C_{\ast }\left\Vert
f\right\Vert _{q^{\prime },\kappa ^{\prime },p} \label{NOT6a}$$and there exists a constant $C^{\ast }\geq 1$ such that for every $f$, $$\Big\Vert\psi _{\kappa }U\Big(\frac{1}{\psi _{\kappa ^{\prime }}}f\Big)%
\Big\Vert_{q,\infty }\leq C^{\ast }\left\Vert f\right\Vert _{q^{\prime },p} .
\label{NOT6b}$$
Notice that (\[NOT4a\]) is a consequence of (\[NOT3c\]). The inequality (\[NOT5a\]) is an immediate consequence of (\[NOT3c\]) and of the fact that $\psi _{-d}\in L^{p}({\mathbb{R}}^{d})$ for every $p\geq 1$. And the inequality (\[NOT5b\]) is a consequence of Morrey’s inequality (Corollary IX.13 in [@Morrey]), whose use gives $\left\Vert f\right\Vert
_{0,0,\infty }\leq \left\Vert f\right\Vert _{1,0,p}$, and of (\[NOT3c\]). In order to prove the equivalence between (\[NOT6a\]) and (\[NOT6b\]), one takes $g=\psi _{\kappa ^{\prime }}f$ (respectively $g=\frac{1}{\psi
_{\kappa ^{\prime }}}f)$ and uses (\[NOT3c\]) as well.
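For completeness, here is a sketch of the implication (\[NOT6a\]) $\Rightarrow$ (\[NOT6b\]); the converse direction is symmetric, taking $g=\psi _{\kappa ^{\prime }}f$. We only use the definition (\[NOT4\]) of the norms (and (\[NOT3c\]) when the weighted norms are compared rather than identified):

```latex
% Sketch: take g = f/\psi_{\kappa'} in (NOT6a). By the definition (NOT4),
% \Vert \psi_\kappa h \Vert_{q,\infty} = \Vert h \Vert_{q,\kappa,\infty},
% and the weights cancel in the rightmost norm:
$$\Big\Vert \psi _{\kappa }U\Big(\frac{1}{\psi _{\kappa ^{\prime }}}f\Big)
\Big\Vert _{q,\infty }
=\Big\Vert U\Big(\frac{1}{\psi _{\kappa ^{\prime }}}f\Big)\Big\Vert
_{q,\kappa ,\infty }
\leq C_{\ast }\Big\Vert \frac{1}{\psi _{\kappa ^{\prime }}}f\Big\Vert
_{q^{\prime },\kappa ^{\prime },p}
=C_{\ast }\left\Vert f\right\Vert _{q^{\prime },p},$$
% which is (NOT6b) with C^* = C_*.
```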
Main results {#sect:results}
------------
We consider a Markov semigroup $P_{t}$ on ${\mathcal{S}({\mathbb{R}}^{d})}$ with infinitesimal operator $L$ and a sequence $P_{t}^{n},n\in {\mathbb{N}}$ of Markov semigroups on ${\mathcal{S}({\mathbb{R}}^{d})}$ with infinitesimal operator $L_{n}.$ We suppose that ${\mathcal{S}({\mathbb{R%
}}^{d})}$ is included in the domain of $L$ and of $L_{n}$ and we suppose that for $%
f\in {\mathcal{S}({\mathbb{R}}^{d})}$ we have $Lf\in {\mathcal{S}({\mathbb{R}}^{d})}$ and $L_{n}f\in {\mathcal{S}({\mathbb{R}}^{d})}$. We denote $\Delta _{n}=L-L_{n}.$ Moreover, we denote by $P_{t}^{\ast ,n}$ the formal adjoint of $P_{t}^{n}$ and by $\Delta _{n}^{\ast }$ the formal adjoint of $\Delta _{n}$ that is$$\left\langle P_{t}^{\ast ,n}f,g\right\rangle =\left\langle
f,P_{t}^{n}g\right\rangle \quad \mbox{and}\quad \left\langle \Delta
_{n}^{\ast }f,g\right\rangle =\left\langle f,\Delta _{n}g\right\rangle ,
\label{TR1}$$$\left\langle \cdot ,\cdot \right\rangle $ being the scalar product in $%
L^{2}({\mathbb{R}}^{d},dx).$
We present now our hypotheses. The first one concerns the speed of convergence of $L_{n}\rightarrow L.$
\[A1A\*1\] Let $a\in {\mathbb{N}}$, and let $(\varepsilon _{n})_{n\in {%
\mathbb{N}}}$ be a decreasing sequence such that $\lim_{n}\varepsilon
_{n}=0$. We assume that for every $q\in {\mathbb{N}},\kappa \geq 0$ and $p>1$ there exists $C>0$ such that for every $n$ and $f$, $$\begin{aligned}
(A_{1})& \qquad \left\Vert \Delta _{n}f\right\Vert _{q,-\kappa ,\infty }\leq
C\varepsilon _{n}\left\Vert f\right\Vert _{q+a,-\kappa ,\infty },
\label{TR3} \\
(A_{1}^{\ast })& \qquad \left\Vert \Delta _{n}^{\ast }f\right\Vert
_{q,\kappa ,p}\leq C\varepsilon _{n}\left\Vert f\right\Vert _{q+a,\kappa ,p}.
\label{TR3'}\end{aligned}$$
Our second hypothesis concerns the “propagation of regularity” for the semigroups $P_{t}^{n}$.
\[A2A\*2\] Let $\Lambda _{n}\geq 1,n\in {\mathbb{N}}$ be an increasing sequence such that $\Lambda _{n+1}\leq \gamma \Lambda _{n}$ for some $\gamma
\geq 1.$ For every $q\in {\mathbb{N}}$ and $\kappa \geq 0,p>1$, there exist $%
C>0$ and $b\in {\mathbb{N}}$, such that for every $n\in {\mathbb{N}}$ and $f$, $$\begin{aligned}
(A_{2})& \qquad \sup_{s\leq t}\left\Vert P_{s}^{n}f\right\Vert _{q,-\kappa
,\infty }\leq C\Lambda _{n}\left\Vert f\right\Vert _{q+b,-\kappa ,\infty },
\label{TR2} \\
(A_{2}^{\ast })& \qquad \sup_{s\leq t}\left\Vert P_{s}^{\ast ,n}f\right\Vert
_{q,\kappa ,p}\leq C\Lambda _{n}\left\Vert f\right\Vert _{q+b,\kappa ,p}.
\label{TR2'}\end{aligned}$$
The hypothesis $(A_{2}^{\ast })$ is rather difficult to verify, so in Appendix \[app:semi\] we give sufficient conditions under which it holds (see Proposition \[A2\]).
Our third hypothesis concerns the “regularization effect” of the semigroup $P_{t}^{n}$.
\[A3\] We assume that $$P_{t}^{n}f(x)=\int_{{\mathbb{R}}^{d}}p_{t}^{n}(x,y)f(y)dy \label{TR4}$$with $p_{t}^{n}\in C^{\infty }({\mathbb{R}}^{d}\times {\mathbb{R}}^{d})$. Moreover, we assume there exist $\theta _{0}>0$ and a sequence $\lambda _{n}$, $n\in {\mathbb{N}}$ with $$\lambda _{n}\downarrow 0,\quad \lambda _{n}\leq \gamma \lambda _{n+1},
\label{TRa1-new}$$for some $\gamma \geq 1,$ such that the following property holds: for every $%
\kappa \geq 0,q\in {\mathbb{N}}$ there exist $\pi (q,\kappa ),$ increasing in $q$ and in $\kappa ,$ a constant $\theta _{1}\geq 0,$ and a constant $C>0$ such that for every $n\in {\mathbb{N}}$, $t\in (0,1]$, for every multi-indexes $\alpha $ and $\beta $ with $\left\vert \alpha \right\vert
+\left\vert \beta \right\vert \leq q$ and $(x,y)\in {\mathbb{R}}^{d}\times {%
\mathbb{R}}^{d}$ $$(A_{3})\quad \left\vert \partial _{x}^{\alpha }\partial _{y}^{\beta
}p_{t}^{n}(x,y)\right\vert \leq C\frac{1}{(\lambda _{n}t)^{\theta
_{0}(q+\theta _{1})}}\times \frac{\psi _{\pi (q,\kappa )}(x)}{\psi _{\kappa
}(x-y)} \label{TR5}$$
Note that in (\[TR5\]) we are quantifying the possible blow up of $%
|\partial _{x}^{\alpha }\partial _{y}^{\beta }p_{t}^{n}(x,y)|$ as $%
n\rightarrow \infty $.
We also assume that the following property holds for the semigroup $P_{t}$.
\[A5\] For every $\kappa \geq 0,q\in {\mathbb{N}}$ there exists $C\geq 1$ such that $$(A_{4})\quad \left\Vert P_{t}f\right\Vert _{q,-\kappa ,\infty }\leq
C\left\Vert f\right\Vert _{q,-\kappa ,\infty }. \label{R7}$$
For $\delta \geq 0$ we denote$$\Phi _{n}(\delta )=\varepsilon _{n}\Lambda _{n}\times \lambda _{n}^{-\theta
_{0}(a+b+\delta )}, \label{R7'}$$where $a$ and $b$ are the constants in Assumptions \[A1A\*1\] and \[A2A\*2\] respectively. Notice that $$\Phi _{n}(\delta )\leq \gamma ^{1+\theta _{0}(a+b+\delta )}\Phi
_{n+1}(\delta ). \label{TRa}$$
And, for $\kappa \geq 0,\eta \geq 0$ we set $$\Psi _{\eta ,\kappa }(x,y):=\frac{\psi _{\kappa }(y)}{\psi _{\eta }(x)}%
,\quad (x,y)\in {\mathbb{R}}^{d}\times {\mathbb{R}}^{d}. \label{R7''}$$
Our first result concerns the regularity of the semigroup $P_{t}:$
\[Transfer\] Suppose that Assumptions \[A1A\*1\], \[A2A\*2\], \[A3\] and \[A5\] hold. Moreover we suppose there exists $\delta >0$ such that $$\limsup_{n}\Phi _{n}(\delta )<\infty , \label{TR6}$$$\Phi_n(\delta)$ being given in (\[R7’\]). Then the following statements hold.
**A**. $P_{t}f(x)=\int_{{\mathbb{R}}^{d}}p_{t}(x,y)f(y)dy$ with $p_{t}\in C^{\infty }({\mathbb{R}}^{d}\times {\mathbb{R}}^{d}).$
**B**. Let $n\in {\mathbb{N}}$ and $\delta _{\ast }>0$ be such that $$\overline{\Phi }_{n}(\delta _{\ast }):=\sup_{n^{\prime }\geq n}\Phi
_{n^{\prime }}(\delta _{\ast })<\infty . \label{TR6a}$$We fix $q\in {\mathbb{N}}$, $p>1$, $\varepsilon _{\ast }>0,$ $\kappa \geq 0$ and we put $\mathfrak{m}=1+\frac{q+2d/p_{\ast }}{\delta _{\ast }}$ with $%
p_{\ast }$ the conjugate of $p$. There exist $C\geq 1$ and $\eta_0\geq 1$ (depending on $q,p,\varepsilon
_{\ast },\delta _{\ast },\kappa $ and $\gamma $) such that for every $\eta>\eta_0$ and $t>0$ $$\begin{aligned}
\left\Vert \Psi _{\eta ,\kappa }p_{t}\right\Vert _{q,p} &\leq &C\times
Q_{n}(q,\mathfrak{m})\times t^{-\theta _{0}((a+b)\mathfrak{m}+q+2d/p_{\ast
})(1+\varepsilon _{\ast })}\quad \mbox{with} \label{TR6'} \\
Q_{n}(q,\mathfrak{m}) &=&\Big(\frac{1}{\lambda _{n}^{\theta _{0}(a+b)%
\mathfrak{m}+q+2d/p_{\ast }}}+\overline{\Phi }_{n}^{\mathfrak{m}}(\delta
_{\ast })\Big)^{1+\varepsilon _{\ast }} \label{TR6''}\end{aligned}$$
**C**. Let $p>2d.$ Set $\bar{%
\mathfrak{m}}=1+\frac{q+1+2d/p_{\ast }}{\delta _{\ast }}$. There exist $%
C\geq 1,\eta \geq 0$ (depending on $q,p,\varepsilon _{\ast },\delta _{\ast
},\kappa $) such that for every $t>0$, $n\in {\mathbb{N}}$ and for every multi-indexes $\alpha ,\beta $ such that $\left\vert \alpha \right\vert
+\left\vert \beta \right\vert \leq q$, $$\left\vert \partial _{x}^{\alpha }\partial _{y}^{\beta
}p_{t}(x,y)\right\vert \leq C\times Q_{n}(q+1,\bar{\mathfrak{m}})\times
t^{-\theta _{0}((a+b)\bar{\mathfrak{m}}+q+1+2d/p_{\ast })(1+\varepsilon _{\ast
})}\times \frac{\psi _{\eta +\kappa }(x)}{\psi _{\kappa }(x-y)}
\label{TR6d}$$for every $t\in (0,1]$ and $x,y\in {\mathbb{R}}^{d}.$
We stress that in hypothesis (\[TR6a\]) the order of derivation $q$ does not appear. However the conclusions (\[TR6’\]) and (\[TR6d\]) hold for every $q.$ The motivation of this is given by the following heuristics. The hypothesis (\[TR5\]) says that the semi-group $P_{t}^{n}$ has a regularization effect controlled by $1/(\lambda _{n}t)^{\theta _{0}}.$ If we want to decouple this effect $m_{0}$ times we write $%
P_{t}^{n}=P_{t/m_{0}}^{n}\cdots P_{t/m_{0}}^{n}$ and then each of the $m_{0}$ operators $P_{t/m_{0}}^{n}$ acts with a regularization effect of order $%
(\lambda _{n}\times t/m_{0})^{\theta _{0}}.$ But this heuristic does not work directly: in order to use it, we have to combine a Taylor series development with the interpolation type criterion given in the following section.
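Schematically, the naive iteration behind this heuristic would read as follows (informal only; this is exactly the step that the Taylor development and the interpolation criterion replace):

```latex
% Informal splitting: P_t^n = (P_{t/m_0}^n)^{m_0}. If each factor could be
% made to gain derivatives at cost (\lambda_n t/m_0)^{-\theta_0}, iterating
% would give
$$\big\Vert P_{t}^{n}f\big\Vert \;\overset{\text{(informal)}}{\lesssim }\;
\Big(\frac{m_{0}}{\lambda _{n}t}\Big)^{\theta _{0}m_{0}}\Vert f\Vert ,$$
% i.e. m_0 regularization steps at total cost (\lambda_n t/m_0)^{-\theta_0 m_0}.
% This direct composition is not available here, whence the detour that follows.
```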
The proof of Theorem \[Transfer\] is developed in Section \[sect:proofTransfer\]. We give now a variant of the estimate (\[TR6d\]), whose proof can be found in Section \[sect:proofTransferBIS\].
\[TransferBIS-new\] Suppose that Assumptions \[A1A\*1\], \[A2A\*2\], \[A3\] and \[A5\] hold. Suppose also that (\[TR6\]) holds for some $\delta >0$ and that for every $\kappa>0$ there exist $\bar{\kappa}, \bar C>0$ such that $P_t\psi_\kappa(x)\leq \bar C\psi_{\bar \kappa}(x)$, for all $x\in \R^d$ and $t>0$. Then $P_{t}(x,dy)=p_{t}(x,y)dy$ with $p_{t}\in C^{\infty }(\R^{d}\times \R^{d})$ and for every $\kappa \in \N$, $\varepsilon >0$ and for every multi-indexes $\alpha $ and $\beta $ there exist $\eta \geq 0$ and $C=C(\kappa ,\varepsilon ,\delta ,\alpha ,\beta )$ such that for every $t>0$ and $x,y\in \R^{d}$ $$\left\vert \partial _{x}^{\alpha }\partial _{y}^{\beta
}p_{t}(x,y)\right\vert \leq C\times t^{-\theta _{0}(1+\frac{a+b}{\delta }%
)(\left\vert \alpha \right\vert +\left\vert \beta \right\vert
+2d +\varepsilon)}\times \frac{\psi_{\eta+\kappa}(x)}{\psi _{\kappa }(x-y)} \label{TR7e-new}$$ with $\theta _{0}$ from (\[TR5\]).
We give now a result which goes in another direction (but the techniques used to prove it are the same): we assume that the semigroup $%
P_{t}:C_{b}^{\infty }({\mathbb{R}}^{d})\rightarrow C_{b}^{\infty }({\mathbb{R%
}}^{d})$ verifies hypotheses of type $(A_{2})$ (see (\[TR2\]) and (\[TR2’\])) and $(A_{3})$ (see (\[TR5\])), we perturb it by a compound Poisson process, and we prove that the new semigroup still verifies a regularity property of type $(A_{3})$. This result will be used in [@BCW] in order to cancel the big jumps.
Let us give our hypotheses.
\[H2H\*2-P\] For every $q\in {\mathbb{N}},\kappa \geq 0$ and $p\geq 1$ there exist $C_{q,\kappa ,p}(P),C_{q,\kappa ,\infty }(P)\geq 1$ such that $$\begin{aligned}
(H_{2})& \qquad \left\Vert P_{t}f\right\Vert _{q,-\kappa ,\infty }\leq
C_{q,\kappa ,\infty }(P)\left\Vert f\right\Vert _{q,-\kappa ,\infty },
\label{J6a} \\
(H_{2}^{\ast })& \qquad \left\Vert P_{t}^{\ast }f\right\Vert _{q,\kappa
,p}\leq C_{q,\kappa ,p}(P)\left\Vert f\right\Vert _{q,\kappa ,p} \label{J6b}\end{aligned}$$
This means that the hypotheses $(A_{2})$ and $(A_{2}^{\ast })$ (see (\[TR2\]) and (\[TR2’\])) from Section \[sect:3.2\] hold for $P_{t}$ (instead of $P_{t}^{n})$ with $\Lambda _{n}$ replaced by $C_{q,\kappa ,\infty }(P)\vee
C_{q,\kappa ,p}(P)$ and with $b=0$.
\[H3\] We assume that $P_{t}(x,dy)=p_{t}(x,y)dy$ with $p_{t}\in
C^{\infty }({\mathbb{R}}^{d}\times {\mathbb{R}}^{d})$ and the blow up of $%
p_{t}$ as $t\rightarrow 0$ is controlled in the following way. For every fixed $q\in {\mathbb{N}},\kappa \geq 0$ there exist some constants $C\geq 1,0<\lambda \leq 1$ and $\eta >0$ such that for every two multi-indexes $\alpha ,\beta $ with $\left\vert \alpha \right\vert
+\left\vert \beta \right\vert \leq q$$$(H_{3})\quad \left\vert \partial _{x}^{\alpha }\partial _{y}^{\beta
}p_{t}(x,y)\right\vert \leq \frac{C}{(\lambda t)^{\theta _{0}(q+\theta _{1})}%
}\times \frac{\psi _{\eta }(x)}{\psi _{\kappa }(x-y)} \label{J6c}$$Here $\theta _{i}\geq 0,i=0,1$ are some fixed parameters.
We construct now the perturbed semigroup. We consider a Poisson process $%
N(t) $ of parameter $\rho >0$ and we denote by $T_{k}$, $k\in {\mathbb{N}}$, its jump times. We also consider a sequence $Z_{k}$, $k\in {\mathbb{N}}$, of independent random variables of law $\nu ,$ on a measurable space $(E,%
\mathcal{E}),$ which are independent of $N(t)$ as well. Moreover we take a function $\phi :E\times {\mathbb{R}}^{d}\rightarrow {\mathbb{R}}^{d}$ and we denote $\phi _{z}(x)=\phi (z,x).$ We will specify the hypotheses on $\phi $ in a moment. We associate the operator$$U_{z}f(x)=f(\phi _{z}(x)) \label{J1}$$and we define $\overline{P}_{t}$ to be the perturbation of $P_{t}$ in the following way. Conditionally on $T_{k}$ and $Z_{k},k\in {\mathbb{N}},$ we define$$\begin{array}{ll}
P_{t}^{N,Z}=P_{t} & \quad \mbox{for}\quad t<T_{1}, \\
P_{T_{k}}^{N,Z}=U_{Z_{k}}P_{T_{k}-}^{N,Z}, & \\
P_{t}^{N,Z}=P_{t-T_{k}}P_{T_{k}}^{N,Z} & \quad \mbox{for}\quad T_{k}\leq
t<T_{k+1}%
\end{array}
\label{J2}$$The second equality reads $P_{T_{k}}^{N,Z}f(x)=P_{T_{k}-}^{N,Z}f(\phi
_{Z_{k}}(x)).$ Essentially (\[J2\]) means that on $[T_{k-1},T_{k})$ we have a process which evolves according to the semigroup $P_{t}$ and at time $T_{k}$ it jumps according to $\phi _{Z_{k}}$. Then we define$$\overline{P}_{t}f(x)={\mathbb{E}}(P_{t}^{N,Z}f(x))=\sum_{m=0}^{\infty
}I_{m}(f)(x)$$with$$I_{0}(f)(x)={\mathbb{E}}(1_{\{N(t)=0\}}P_tf(x)) =e^{-\rho t}P_tf(x)$$ and for $m\geq 1$, $$\begin{aligned}
I_{m}(f)(x) &={\mathbb{E}}\Big(1_{\{N(t)=m\}}\frac{m!}{t^m}%
\int_{0<t_{1}<...<t_{m-1}<t_{m}\leq
t}P_{t-t_{m}}\prod_{i=0}^{m-1}U_{Z_{m-i}}P_{t_{m-i}-t_{m-i-1}}f(x)dt_{1}%
\ldots dt_{m}\Big) \\
&=\rho^me^{-\rho t}{\mathbb{E}}\Big(\int_{0<t_{1}<...<t_{m-1}<t_{m}\leq t}%
\Big(\prod_{i=0}^{m-1}P_{t_{m-i+1}-t_{m-i}}U_{Z_{m-i}}\Big)%
P_{t_1}f(x)dt_{1}\ldots dt_{m}\Big),\end{aligned}$$ in which $t_0=0$ and $t_{m+1}=t$. We come now to the hypothesis on $\phi .$ We assume that for every $z\in E,$ $\phi _{z}\in C^{\infty }({\mathbb{R}}%
^{d})$ and $\nabla \phi _{z}\in C_{b}^{\infty }({\mathbb{R}}^{d})$ and that for every $q\in {\mathbb{N}}$$$\begin{aligned}
\left\Vert \phi \right\Vert _{1,q,\infty }& :=\sup_{z\in E}\left\Vert \phi
_{z}\right\Vert _{1,q,\infty }=\sum_{1\leq \left\vert \alpha \right\vert
\leq q}\sup_{z\in E}\sup_{x\in {\mathbb{R}}^{d}}\left\vert \partial
_{x}^{\alpha }\phi (z,x)\right\vert <\infty, \label{J4'} \\
\widehat{\phi }& :=\sup_{z\in E}\left\vert \phi _{z}(0)\right\vert <\infty .
\label{J4''}\end{aligned}$$Moreover we define $\sigma (\phi _{z})=\nabla \phi _{z}(\nabla \phi
_{z})^{\ast }$ and we assume that there exists a constant $\varepsilon (\phi
)>0$ such that for every $z\in E$ and $x\in {\mathbb{R}}^{d}$$$\det \sigma (\phi _{z})(x)\geq \varepsilon (\phi ). \label{J3}$$
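As a simple illustration (our own example, not taken from the text), pure translation jumps satisfy all of the above conditions:

```latex
% Example (illustration): \phi(z,x) = x + c(z) with \sup_{z\in E}|c(z)| < \infty.
% Then \nabla\phi_z = I and all higher order derivatives vanish, so
$$\left\Vert \phi \right\Vert _{1,q,\infty }=d<\infty ,\qquad
\widehat{\phi }=\sup_{z\in E}\left\vert c(z)\right\vert <\infty ,\qquad
\sigma (\phi _{z})=I,\quad \det \sigma (\phi _{z})\equiv 1,$$
% so (J4') and (J4'') hold, and (J3) holds with \varepsilon(\phi) = 1.
```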
\[rem-J\] We recall that in Appendix \[app:ibp\] we have denoted $%
V_{\phi _{z}}f(x)=f(\phi _{z}(x))=U_{z}f(x).$ With this notation, under (\[J4’\]), (\[J4”\]), (\[J3\]) we have proved in (\[ip10\]) and (\[ip12\]) that, for every $z\in E$, $$\begin{aligned}
\left\Vert U_{z}f\right\Vert _{q,-\kappa ,\infty }& \leq C\,(1\vee \widehat{\phi }^{2\kappa })\left\Vert \phi \right\Vert _{1,q,\infty }^{q+2\kappa
}\left\Vert f\right\Vert _{q,-\kappa ,\infty }, \label{J5} \\
\left\Vert U_{z}^{\ast }f\right\Vert _{q,\kappa ,p}& \leq C\,\frac{1\vee
\widehat{\phi }^{2\kappa }}{\varepsilon (\phi )^{q(q+1)+1/p_{\ast }}}\times
(1\vee \left\Vert \phi \right\Vert _{1,q+2,\infty }^{2dq+1+2\kappa })\times
\left\Vert f\right\Vert _{q,\kappa ,p}. \label{J6}\end{aligned}$$This means that Assumption \[H1H\*1\] from Section \[sect:3.2\] holds uniformly in $z\in E$ and the constant given in (\[hh3’\]) is upper bounded by $$C_{q,\kappa ,\infty ,p}(U,P)\leq C\frac{1\vee \widehat{\phi }^{\kappa }}{%
\varepsilon (\phi )^{q(q+1)+1/p_{\ast }}}\times (1\vee \left\Vert \phi
\right\Vert _{1,q+2,\infty }^{2dq+1+2\kappa })\times (C_{q,\kappa ,\infty
}(P)\vee C_{q,\kappa ,p}(P)). \label{J6'}$$
We are now able to give our result (the proof being postponed to Section \[sect:proofJ\]):
\[J\] Suppose that $P_{t}$ satisfies Assumptions \[H2H\*2-P\] and \[H3\]. Suppose moreover that $\phi $ satisfies (\[J4’\]), (\[J4”\]), (\[J3\]). Then $\overline{P}%
_{t}(x,dy)=\overline{p}_{t}(x,y)dy$ and we have the following estimates. Let $q\in {\mathbb{N}},\kappa \geq 0$ and $\delta >0$ be given. There exist some constants $C,\chi $ such that for every $\alpha ,\beta $ with $\left\vert
\alpha \right\vert +\left\vert \beta \right\vert \leq q$$$\left\vert \partial _{x}^{\alpha }\partial _{y}^{\beta }\overline{p}%
_{t}(x,y)\right\vert \leq \frac{C^{\rho }}{(\lambda t)^{\theta
_{0}(q+2d)(1+\delta )}}\times \frac{\psi _{\chi }(x)}{\psi _{\kappa }(x-y)}.
\label{J10}$$We stress that the constant $C$ depends on $C_{q,\kappa ,\infty ,p}(U,P)$ (see (\[J6’\])) and on $q,\kappa $ and $\delta $ **but not on** $%
t,\rho $ and $\lambda .$
This gives the following consequence concerning the semigroup $P_{t}$ itself:
\[Cor\] Suppose that (\[J6a\]), (\[J6b\]), (\[J6c\]) hold. Then, no matter the value of $\theta _{1}$ in (\[J6c\]), the inequality (\[J6c\]) holds with $\theta _{1}^{\prime }=2d+\varepsilon $ for every $%
\varepsilon >0.$
**Proof.** Just take $\phi _{z}(x)=x.$ $\square $
Regularity results {#sect:reg}
==================
This section is devoted to some preliminary results allowing us to prove the statements summarized in Section \[sect:results\]: in Section \[sect:3.1\] we give an abstract regularity criterion, and in Section \[sect:3.2\] we prove a regularity result for iterated integrals.
A regularity criterion based on interpolation {#sect:3.1}
---------------------------------------------
Let us first recall some results obtained in [@[BC]] concerning the regularity of a measure $\mu $ on ${\mathbb{R}}^{d}$ (with the Borel $\sigma$-field). For two signed finite measures $\mu ,\nu $ and for $k\in {\mathbb{N}%
}$ we define the distance$$d_{k}(\mu ,\nu )=\sup \Big\{\Big\vert \int fd\mu -\int fd\nu \Big\vert %
:\Vert f\Vert _{k,\infty }\leq 1\Big\}. \label{reg2}$$If $\mu $ and $\nu $ are probability measures, $d_0$ is the total variation distance and $d_1$ is the Fortet-Mourier distance. In this paper we will work with an arbitrary $k\in {\mathbb{N}}$. Notice also that $d_{k}(\mu ,\nu
)=\left\Vert \mu -\nu \right\Vert _{W_{\ast }^{k,\infty }}$ where $W_{\ast
}^{k,\infty }$ is the dual of $W^{k,\infty }.$
We fix now $k,q, h\in {\mathbb{N}}$, with $h\geq 1$, and $p>1$. Hereafter, we denote by $p_{\ast }=p/(p-1)$ the conjugate of $p.$ Then, for a signed finite measure $\mu $ and for a sequence of absolutely continuous signed finite measures $\mu _{n}(dx)=f_{n}(x)dx$ with $f_{n}\in C^{2h+q}({\mathbb{R}%
}^{d}),$ we define$$\pi _{k,q,h,p}(\mu ,(\mu _{n})_{n})=\sum_{n=0}^{\infty }2^{n(k+q+d/p_{\ast
})}d_{k}(\mu ,\mu _{n})+\sum_{n=0}^{\infty }\frac{1}{2^{2nh}}\left\Vert
f_{n}\right\Vert _{2h+q,2h,p}. \label{reg3}$$
Notice that $\pi _{k,q,h,p}$ is a particular case of $\pi _{k,q,h,\mathbf{e}%
} $ treated in [@[BC]]: just choose the Young function $\mathbf{e}%
(x)\equiv \mathbf{e}_{p}(x)=|x|^{p}$, giving $\beta _{\mathbf{e}%
_{p}}(t)=t^{1/p_{\ast }}$ (see Example 1 in [@[BC]]). Moreover, $\pi
_{k,q,h,p}$ is strongly related to interpolation spaces. More precisely, let $$\overline{\pi }_{k,q,h,p}(\mu )=\inf \{\pi _{k,q,h,p}(\mu ,(\mu
_{n})_{n}):\mu _{n}(dx)=f_{n}(x)dx,\quad f_{n}\in C^{2h+q}({\mathbb{R}}%
^{d})\}.$$Then $\overline{\pi }_{k,q,h,p}$ is equivalent to the interpolation norm of order $\rho =\frac{k+q+d/p_{\ast }}{2h}$ between the spaces $W_{\ast }^{k,\infty }$ (the dual of $W^{k,\infty })$ and $W^{2h+q,2h,p}=\{f:\left\Vert f\right\Vert _{2h+q,2h,p}<\infty \}$. This is proved in [@[BC]], see Section 2.4 and Appendix B. So the inequality (\[reg4\]) below says that the Sobolev space $W^{q,p}$ is included in the above interpolation space. However we prefer to remain in an elementary framework and to derive directly the consequences of (\[reg4\]) (see Lemma \[REG\] below).
The following result is the key point in our approach (this is Proposition 2.5 in [@[BC]]):
\[lemma-inter\] Let $k,q,h\in {\mathbb{N}}$ with $h\geq 1$, and $p>1$ be given. There exists a constant $C_*$ (depending on $k,q,h$ and $p$ only) such that the following holds. Let $\mu $ be a finite measure for which one may find a sequence $\mu _{n}(dx)=f_{n}(x)dx$, $n\in {\mathbb{N}}$ such that $\pi _{k,q,h,p}(\mu ,(\mu _{n})_{n})<\infty .$ Then $\mu (dx)=f(x)dx$ with $%
f\in W^{q,p}$ and moreover $$\left\Vert f\right\Vert _{q,p}\leq C_{\ast }\times \pi _{k,q,h,p}(\mu ,(\mu
_{n})_{n}). \label{reg4}$$
The proof of Lemma \[lemma-inter\] is given in [@[BC]], being a particular case (take $\mathbf{e}=\mathbf{e}_{p}$) of Proposition A.1 in Appendix A.
We give a first simple consequence.
\[reg\] Let $p_{t}\in C^{\infty }(\R^{d}),t>0$ be a family of nonnegative functions and let $\varphi=\varphi(x)\geq 0$ be such that $\int \varphi(x)p_{t}(x)dx\leq m<\infty$ for every $t<1$. We assume that for some $\theta _{0}>0$ and $\theta _{1}>0$ the following holds: for every $q\in \N$ and $%
p> 1$ there exists a constant $C=C(q,p)$ such that $$\left\Vert \varphi p_{t}\right\Vert _{q,p}\leq Ct^{-\theta
_{0}(q+\theta _{1})},\quad t<1. \label{a1}$$Let $\delta>0$. Then, there exists a constant $C_*=C_*(q,p,\delta) $ such that $$\left\Vert \varphi p_{t}\right\Vert _{q,p}\leq C_\ast t^{-\theta _{0}(q+%
\frac{d}{p_{\ast }}+\delta)},\quad t<1, \label{a2}$$where $p_\ast$ is the conjugate of $p$. So, no matter the value of $\theta _{1},$ one may replace it by $\frac{d%
}{p_{\ast }}.$
**Proof.** We take $n_{\ast }\in \N$ and we define $f_{n}=0$ for $n\leq
n_{\ast }$ and $f_{n}=\varphi p_t$ for $n>n_{\ast }.$ Notice that $d_{0}(\varphi p_{t},0)\leq m.$ Then (\[reg4\]) with $k=0$ gives ($C$ denoting a positive constant which may change from a line to another) $$\begin{aligned}
\left\Vert \varphi p_{t}\right\Vert _{q,p}
&\leq &C\Big(m\sum_{n=0}^{n_{\ast }}2^{n(q+\frac{d}{p_{\ast }})}+\left\Vert \varphi p_{t}\right\Vert _{2h+q,2h,p}\sum_{n=n_{\ast }+1}^{\infty }\frac{1%
}{2^{2nh}}\Big) \\
&\leq &C\Big(m2^{n_{\ast }(q+\frac{d}{p_{\ast }})}+\left\Vert \varphi p_{t}\right\Vert _{2h+q,2h,p}\frac{1}{2^{2n_{\ast }h}}\Big).\end{aligned}$$We denote $\rho _{h}=(q+\frac{d}{p_{\ast }})/2h.$ We optimize over $n_{\ast
}$ and we obtain $$\begin{aligned}
\left\Vert \varphi p_{t}\right\Vert _{q,p} &\leq &2C\times
m^{\frac{1}{1+\rho _{h}}}\times \left\Vert \varphi p_{t}\right\Vert _{2h+q,2h,p}^{\frac{\rho _{h}}{1+\rho _{h}}} \\
&\leq &2Cm^{\frac 1{1+\rho_h}}\times Ct^{-\theta _{0}(2h+q+\theta _{1})%
\frac{\rho _{h}}{1+\rho _{h}}}.\end{aligned}$$Since $\lim_{h\rightarrow \infty }\rho _{h}=0$ and $\lim_{h\rightarrow \infty }(2h+q+\theta _{1})\frac{\rho _{h}}{1+\rho
_{h}}=q+\frac{d}{p_{\ast }}$, choosing $h$ large enough we obtain (\[a2\]) and the proof is completed. $\square $
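For the reader's convenience we spell out the optimization over $n_{\ast }$ used in the last step. With $s=q+\frac{d}{p_{\ast }}$, $A=Cm$ and $B=C\left\Vert \varphi p_{t}\right\Vert _{2h+q,2h,p}$, it is the standard balancing of a growing and a decaying geometric term:

```latex
% Balance A 2^{n s} against B 2^{-2nh}: the two terms are comparable when
% 2^{n(s+2h)} \approx B/A, and choosing n_* accordingly gives
$$\inf_{n\geq 0}\big(A\,2^{ns}+B\,2^{-2nh}\big)\;\lesssim\;
A^{\frac{2h}{s+2h}}\,B^{\frac{s}{s+2h}}
=A^{\frac{1}{1+\rho _{h}}}\,B^{\frac{\rho _{h}}{1+\rho _{h}}},\qquad
\rho _{h}=\frac{s}{2h},$$
% since 2h/(s+2h) = 1/(1+\rho_h) and s/(s+2h) = \rho_h/(1+\rho_h).
```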
We will also use the following consequence of Lemma \[lemma-inter\].
\[REG\] Let $k,q,h\in {\mathbb{N}}$, with $h\geq 1$, and $p>1$ be given and set $$\rho _{h}:=\frac{k+q+d/p_{\ast }}{2h}. \label{reg5}$$We consider an increasing sequence $\theta (n)\geq 1,n\in {\mathbb{N}},$ such that $\lim_{n}\theta (n)=\infty $ and $\theta (n+1)\leq \Theta \times
\theta (n)$ for some constant $\Theta \geq 1.$ Suppose that we may find a sequence of functions $f_{n}\in C^{2h+q}({\mathbb{R}}^{d}),n\in {\mathbb{N}}$ such that $$\left\Vert f_{n}\right\Vert _{2h+q,2h,p}\leq \theta (n) \label{reg9}$$and, with $\mu _{n}(dx)=f_{n}(x)dx,$ $$\limsup_{n}d_{k}(\mu ,\mu _{n})\times \theta ^{\rho _{h}+\varepsilon
}(n)<\infty \label{reg10}$$for some $\varepsilon >0.$ Then $\mu (dx)=f(x)dx$ with $f\in W^{q,p}.$
Moreover, for $\delta ,\varepsilon >0$ and $n_{\ast }\in {\mathbb{N}}$, let $$\begin{aligned}
A(\delta )& =\left\vert \mu \right\vert ({\mathbb{R}}^{d})\times 2^{l(\delta
)(1+\delta )(q+k+d/p_{\ast })}\quad
\mbox{with $l(\delta )=\min
\{l:2^{l\times \frac{\delta }{1+\delta }}\geq l\}$}, \label{reg12'} \\
B(\varepsilon )& =\sum_{l=1}^{\infty }\frac{l^{2(q+k+d/p_{\ast }+\varepsilon
)}}{2^{2\varepsilon l}}, \label{reg12''} \\
C_{h,n_{\ast }}(\varepsilon )& =\sup_{n\geq n_{\ast }}d_{k}(\mu ,\mu
_{n})\times \theta ^{\rho _{h}+\varepsilon }(n). \label{reg11}\end{aligned}$$Then, for every $\delta >0$ $$\left\Vert f\right\Vert _{q,p}\leq C_{\ast }(\Theta +A(\delta )\theta
(n_{\ast })^{\rho _{h}(1+\delta )}+B(\varepsilon )C_{h,n_{\ast
}}(\varepsilon )), \label{reg12}$$$C_{\ast }$ being the constant in (\[reg4\]) and $\rho _{h}$ being given in (\[reg5\]).
**Proof of Lemma \[REG\]**. We will produce a sequence of measures $%
\nu _{l}(dx)=g_{l}(x)dx,l\in {\mathbb{N}}$ such that $$\pi _{k,q,h,p}(\mu ,(\nu _{l})_{l})\leq \Theta +A(\delta )\theta (n_{\ast
})^{\rho _{h}(1+\delta )}+B(\varepsilon )C_{h,n_{\ast }}(\varepsilon
)<\infty .$$Then by Lemma \[lemma-inter\] one gets $\mu (dx)=f(x)dx$ with $f\in
W^{q,p} $ and (\[reg12\]) follows from (\[reg4\]). Let us stress that the $\nu _{l}$’s will be given by a suitable subsequence $\mu _{n(l)}$, $%
l\in {\mathbb{N}}$, from the $\mu _{n}$’s.
**Step 1**. We define $$n(l)=\min \{n:\theta (n)\geq \frac{2^{2hl}}{l^{2}}\}$$and we notice that $$\frac{1}{\Theta }\theta (n(l))\leq \theta (n(l)-1)<\frac{2^{2hl}}{l^{2}}\leq
\theta (n(l)). \label{reg13}$$Moreover we define $$l_{\ast }=\min \{l:\frac{2^{2hl}}{l^{2}}\geq \theta (n_{\ast })\}.$$Since$$\theta (n(l_{\ast }))\geq \frac{2^{2hl_{\ast }}}{l_{\ast }^{2}}\geq \theta
(n_{\ast })$$it follows that $n(l_{\ast })\geq n_{\ast }.$
We take now $\varepsilon (\delta )=\frac{h\delta }{1+\delta }$ which gives $%
\frac{2h}{2(h-\varepsilon (\delta ))}=1+\delta .$ And we take $l(\delta
)\geq 1$ such that $2^{l\delta /(1+\delta )}\geq l$ for $l\geq l(\delta ).$ Since $h\geq 1$ it follows that $\varepsilon (\delta )\geq \frac{\delta }{%
1+\delta }$ so that, for $l\geq l(\delta )$ we also have $2^{l\varepsilon
(\delta )}\geq l.$ Now we check that $$2^{2(h-\varepsilon (\delta ))l_{\ast }}\leq 2^{2hl(\delta )}\theta (n_{\ast
}). \label{reg14}$$If $l_{\ast }\leq l(\delta )$ then the inequality is evident (recall that $%
\theta (n)\geq 1$ for every $n).$ And if $l_{\ast }>l(\delta )$ then $%
2^{l_{\ast }\varepsilon (\delta )}\geq l_{\ast }.$ By the very definition of $l_{\ast }$ we have $$\frac{2^{2h(l_{\ast }-1)}}{(l_{\ast }-1)^{2}}<\theta (n_{\ast })$$so that $$2^{2hl_{\ast }}\leq 2^{2h}(l_{\ast }-1)^{2}\theta (n_{\ast })\leq
2^{2h}\times 2^{2l_{\ast }\varepsilon (\delta )}\theta (n_{\ast })$$and this gives (\[reg14\]).
**Step 2**. We define$$\mbox{$\nu _{l} =0$ if $l<l_{\ast }$ and $\nu_l=\mu _{n(l)}$ if $l\geq
l_{\ast }$}$$and we estimate $\pi _{k,q,h,p}(\mu ,(\nu _{l})_{l}).$ First, by (\[reg9\]) and (\[reg13\]) $$\sum_{l=l_{\ast }}^{\infty }\frac{1}{2^{2hl}}\left\Vert f_{n(l)}\right\Vert
_{q+2h,2h,p}\leq \sum_{l=l_{\ast }}^{\infty }\frac{1}{2^{2hl}}\theta
(n(l))\leq \Theta \sum_{l=l_{\ast }}^{\infty }\frac{1}{l^{2}}\leq \Theta .$$Then we write$$\sum_{l=1}^{\infty }2^{(q+k+d/p_{\ast })l}d_{k}(\mu ,\nu _{l})=S_{1}+S_{2}$$with$$S_{1}=\sum_{l=1}^{l_{\ast }-1}2^{(q+k+d/p_{\ast })l}d_{k}(\mu ,0),\quad
S_{2}=\sum_{l=l_{\ast }}^{\infty }2^{(q+k+d/p_{\ast })l}d_{k}(\mu ,\mu
_{n(l)}).$$Since $d_{k}(\mu ,0)\leq d_{0}(\mu ,0)\leq \left\vert \mu \right\vert ({%
\mathbb{R}}^{d})$ we use (\[reg14\]) and we obtain $$\begin{aligned}
S_{1} &\leq &\left\vert \mu \right\vert ({\mathbb{R}}^{d})\times
2^{(q+k+d/p_{\ast })l_{\ast }}=\left\vert \mu \right\vert ({\mathbb{R}}%
^{d})\times (2^{2(h-\varepsilon (\delta ))l_{\ast }})^{(q+k+d/p_{\ast
})/2(h-\varepsilon (\delta ))} \\
&\leq &\left\vert \mu \right\vert ({\mathbb{R}}^{d})\times (2^{2hl(\delta
)}\theta (n_{\ast }))^{\rho _{h}(1+\delta )}=A(\delta )\theta (n_{\ast
})^{\rho _{h}(1+\delta )}.\end{aligned}$$If $l\geq l_{\ast }$ then $n(l)\geq n(l_{\ast })\geq n_{\ast }$ so that, from (\[reg11\]), $$d_{k}(\mu ,\mu _{n(l)})\leq \frac{C_{h,n_{\ast }}(\varepsilon )}{\theta
^{\rho _{h}+\varepsilon }(n(l))}\leq C_{h,n_{\ast }}(\varepsilon )\Big(\frac{%
l^{2}}{2^{2hl}}\Big)^{\rho _{h}+\varepsilon }=\frac{C_{h,n_{\ast
}}(\varepsilon )}{2^{(q+k+d/p_{\ast })l}}\times \frac{l^{2(\rho
_{h}+\varepsilon )}}{2^{2h\varepsilon l}}.$$We conclude that$$S_{2}\leq C_{h,n_{\ast }}(\varepsilon )\sum_{l=l_{\ast }}^{\infty }\frac{%
l^{2(\rho _{h}+\varepsilon )}}{2^{2h\varepsilon l}}\leq C_{h,n_{\ast
}}(\varepsilon )\times B(\varepsilon ).$$$\square $
A regularity lemma {#sect:3.2}
------------------
We give here a regularization result in the following abstract framework. We consider a sequence of operators $U_{j}:{\mathcal{S}({\mathbb{R}}^d)}%
\rightarrow {\mathcal{S}({\mathbb{R}}^d)}$, $j\in {\mathbb{N}}$, and we denote by $U_{j}^{\ast }$ the formal adjoint defined by $\langle U_{j}^{\ast }f,g\rangle =\langle f,U_{j}g\rangle $ with the scalar product in $L^{2}({\mathbb{R}}^{d})$.
\[H1H\*1\] Let $a\in {\mathbb{N}}$ be fixed. We assume that for every $%
q\in {\mathbb{N}},\kappa \geq 0$ and $p\in \lbrack 1,\infty )$ there exist constants $C_{q,\kappa ,p}(U)$ and $C_{q,\kappa ,\infty }(U)$ such that for every $j$ and $f$, $$\begin{aligned}
(H_{1})& \qquad \left\Vert U_{j}f\right\Vert _{q,-\kappa ,\infty }\leq
C_{q,\kappa ,\infty }(U)\left\Vert f\right\Vert _{q+a,-\kappa ,\infty },
\label{h1} \\
(H_{1}^{\ast })& \qquad \left\Vert U_{j}^{\ast }f\right\Vert _{q,\kappa
,p}\leq C_{q,\kappa ,p}(U)\left\Vert f\right\Vert _{q+a,\kappa ,p}.
\label{h1'}\end{aligned}$$We assume that $C_{q,\kappa ,p}(U)$, $p\in \lbrack 1,\infty ]$, is non decreasing with respect to $q$ and $\kappa $.
We also consider a semigroup $S_{t}$, $t\geq 0$, of the form $$S_{t}(x,dy)=s_{t}(x,y)dy\quad \mbox{with}\quad s_{t}\in {\mathcal{S}}({%
\mathbb{R}}^{d}\times {\mathbb{R}}^{d}).$$
We define the formal adjoint operator $$S_{t}^{\ast }f(y)=\int_{{\mathbb{R}}^{d}}s_{t}(x,y)f(x)dx,\quad t>0.$$
\[H2H\*2\] If $f\in {\mathcal{S}({\mathbb{R}}^{d})}$ then $S_{t}f\in {\mathcal{S}({\mathbb{R}}^{d})}$. Moreover, there exists $b\in {\mathbb{N}}$ such that for every $q\in {\mathbb{%
N}},\kappa \geq 0$ and $p\in \lbrack 1,\infty )$ there exist constants $%
C_{q,\kappa ,p}(S)$ such that for every $t>0$, $$\begin{aligned}
(H_{2})& \qquad \left\Vert S_{t}f\right\Vert _{q,-\kappa ,\infty }\leq
C_{q,\kappa ,\infty }(S)\left\Vert f\right\Vert _{q+b,-\kappa ,\infty },
\label{h2} \\
(H_{2}^{\ast })& \qquad \left\Vert S_{t}^{\ast }f\right\Vert _{q,\kappa
,p}\leq C_{q,\kappa ,p}(S)\left\Vert f\right\Vert _{q+b,\kappa ,p}.
\label{h2'}\end{aligned}$$We assume that $C_{q,\kappa ,p}(S)$, $p\in \lbrack 1,\infty ]$, is non decreasing with respect to $q$ and $\kappa $.
We denote $$\begin{aligned}
C_{q,\kappa ,\infty }(U,S)& =C_{q,\kappa ,\infty }(U)C_{q,\kappa ,\infty
}(S),\quad C_{q,\kappa ,p}(U,S)=C_{q,\kappa ,p}(U)C_{q,\kappa ,p}(S),
\label{h3'} \\
C_{q,\kappa ,\infty ,p}(U,S)& =C_{q,\kappa ,\infty }(U,S)\vee C_{q,\kappa
,p}(U,S). \label{hh3'}\end{aligned}$$Under Assumption \[H1H\*1\] and \[H2H\*2\], one immediately obtains$$\begin{aligned}
\left\Vert (S_{t}U_{j})f\right\Vert _{q,-\kappa ,\infty }& \leq C_{q,\kappa
,\infty }(U,S)\left\Vert f\right\Vert _{q+a+b,-\kappa ,\infty }, \label{h}
\\
\left\Vert (S_{t}^{\ast }U_{j}^{\ast })f\right\Vert _{q,\kappa ,p}& \leq
C_{q,\kappa ,p}(U,S)\left\Vert f\right\Vert _{q+a+b,\kappa ,p}. \label{h'}\end{aligned}$$In fact these are the inequalities that we will employ in the following. We stress that the above constants $C_{q,\kappa ,\infty }(U,S)$ and $%
C_{q,\kappa ,p}(U,S)$ may depend on $a,b$ and are increasing w.r.t. $q$ and $%
\kappa $.
Finally we assume that the (possible) blow up of $s_{t}$ as $t\rightarrow 0$ is controlled in the following way.
\[HH3\] Let $\theta_0,\lambda>0$ be fixed. We assume that for every $%
\kappa \geq 0$ and $q\in {\mathbb{N}}$ there exist $\pi (q,\kappa )$, $%
\theta _{1}\geq 0$ and $C_{q,\kappa}>0$ such that for every multi-indexes $%
\alpha $ and $\beta $ with $\left\vert \alpha \right\vert +\left\vert \beta
\right\vert \leq q$, $(x,y)\in {\mathbb{R}}^{d}\times {\mathbb{R}}^{d}$ and $%
t\in (0,1]$ one has $$(H_{3})\qquad \left\vert \partial _{x}^{\alpha }\partial _{y}^{\beta
}s_{t}(x,y)\right\vert \leq \frac{C_{q,\kappa}}{(\lambda
t)^{\theta_0(q+\theta_1)}}\times \frac{\psi _{\pi (q,\kappa )}(x)}{\psi
_{\kappa }(x-y)}. \label{h3}$$We also assume that $\pi (q,\kappa )$ and $C_{q,\kappa}$ are both increasing in $q$ and $\kappa $.
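As a concrete illustration of Assumption \[HH3\] (a sketch only, not used in the sequel), the Gaussian heat kernel satisfies (\[h3\]) with $\theta _{0}=1/2$, $\theta _{1}=d$, $\lambda =1$ and $\pi (q,\kappa )\equiv 0$:

```latex
% Gaussian heat kernel and its derivatives:
s_{t}(x,y)=(2\pi t)^{-d/2}e^{-|x-y|^{2}/2t},\qquad
\partial _{x}^{\alpha }\partial _{y}^{\beta }s_{t}(x,y)
  =t^{-(|\alpha |+|\beta |)/2}\,H_{\alpha ,\beta }\Big(\frac{x-y}{\sqrt{t}}\Big)\,s_{t}(x,y),
% with H_{\alpha,\beta} a polynomial, so |H_{\alpha,\beta}(u)|e^{-|u|^{2}/2}\leq C_{q}e^{-|u|^{2}/4}.
% Since e^{-v}\leq C_{\kappa}(1+v)^{-\kappa} and 1+|x-y|^{2}/4t\geq (1+|x-y|^{2})/4 for t\in(0,1],
e^{-|x-y|^{2}/4t}\leq \frac{4^{\kappa }C_{\kappa }}{\psi _{\kappa }(x-y)},
\qquad \mbox{hence}\qquad
|\partial _{x}^{\alpha }\partial _{y}^{\beta }s_{t}(x,y)|
  \leq \frac{C_{q,\kappa }}{t^{(q+d)/2}}\times \frac{1}{\psi _{\kappa }(x-y)}
```

for every $|\alpha |+|\beta |\leq q$, which is exactly the form of (\[h3\]).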
This property will be used by means of the following lemma:
\[lemmaB\] Suppose that Assumption \[HH3\] holds.
$\mathbf{A.}$ For every $\kappa \geq 0$, $q\in {\mathbb{N}}$ and $p>1$ there exists $C>0$ such that for every $t\in(0,1]$ and $f$ one has $$\left\Vert S_{t}^{\ast }f\right\Vert _{q,\kappa ,p}\leq \frac{C}{(\lambda
t)^{\theta_0(q+\theta_1)}}\left\Vert f\right\Vert _{0,\nu ,1} \label{B1}$$where $\nu =\pi (q,\kappa +d)+\kappa +d$
$\mathbf{B.}$ For every $\kappa \geq 0$, $q_1,q_2\in{\mathbb{N}}$ there exists $C>0$ such that for every $t\in(0,1]$, for every multi-index $\alpha $ with $\left\vert \alpha \right\vert \leq q_{2}$ and $f$ one has $$\left\Vert \frac{1}{\psi _{\eta }}S_{t}(\psi _{\kappa }\partial ^{\alpha
}f)\right\Vert _{q_{1},\infty }\leq \frac{C}{(\lambda
t)^{\theta_0(q_1+q_2+\theta_1)}}\,\|f\|_\infty \label{B2}$$where $\eta =\pi (q_{1}+q_{2},\kappa +d+1)+\kappa$.
**Proof.** In the sequel, $C$ will denote a positive constant which may vary from one line to another and which may depend only on $\kappa$ and $q$ for the proof of **A.** and only on $\kappa, q_1$ and $q_2$ for the proof of **B.**
**A.** Using (\[h3\]) if $\left\vert \alpha \right\vert \leq q,$$$\left\vert \partial ^{\alpha }S_{t}^{\ast }f(x)\right\vert \leq \int
\left\vert \partial _{x}^{\alpha }s_{t}(y,x)\right\vert \times \left\vert
f(y)\right\vert dy\leq \frac{C}{(\lambda t)^{\theta_0(q+\theta_1)}}\int
\frac{\psi _{\pi (q,\kappa +d)}(y)}{\psi _{\kappa +d}(x-y)}\times \left\vert
f(y)\right\vert dy.$$By (\[NOT3b\]) $\psi _{\kappa +d}(x)/\psi _{\kappa +d}(x-y)\leq C \psi
_{\kappa +d}(y)$ so that$$\begin{aligned}
\psi _{\kappa +d}(x)\left\vert \partial ^{\alpha }S_{t}^{\ast
}f(x)\right\vert &\leq \frac{C}{(\lambda t)^{\theta_0(q+\theta_1)}} \int
\frac{ \psi _{\kappa +d}(x)\psi _{\pi (q,\kappa +d)}(y)}{\psi _{\kappa
+d}(x-y)} \times \left\vert f(y)\right\vert dy \\
&\leq \frac{C}{(\lambda t)^{\theta_0(q+\theta_1)}} \int \psi _{\pi (q,\kappa
+d)+\kappa +d}(y)\times \left\vert f(y)\right\vert dy \\
&=\frac{C}{(\lambda t)^{\theta_0(q+\theta_1)}}\left\Vert f\right\Vert
_{0,\nu ,1}\end{aligned}$$We conclude that $$\left\Vert S_{t}^{\ast }f\right\Vert _{q,\kappa +d,\infty }\leq \frac{C}{%
(\lambda t)^{\theta_0(q+\theta_1)}}\left\Vert f\right\Vert _{0,\nu ,1}.$$ By (\[NOT5a\]) $\left\Vert S_{t}^{\ast }f\right\Vert _{q,\kappa ,p}\leq
C\left\Vert S_{t}^{\ast }f\right\Vert _{q,\kappa +d,\infty }$ so the proof of (\[B1\]) is completed.
**B.** Let $\gamma $ with $\left\vert \gamma \right\vert \leq q_{1}$. Using integration by parts$$\begin{aligned}
\partial ^{\gamma }S_{t}(\psi _{\kappa }\partial ^{\alpha }f)(x) &=&\int_{{%
\mathbb{R}}^{d}}\partial _{x}^{\gamma }s_{t}(x,y)\psi _{\kappa }(y)\partial
^{\alpha }f(y)dy \\
&=&(-1)^{\left\vert \alpha \right\vert }\int_{{\mathbb{R}}^{d}}\partial
_{y}^{\alpha }(\partial _{x}^{\gamma }s_{t}(x,y)\psi _{\kappa }(y))\times
f(y)dy.\end{aligned}$$Using (\[NOT3c\]), (\[h3\]) and (\[NOT3b\]), it follows that $$\begin{aligned}
\left\vert \partial ^{\gamma }S_{t}(\psi _{\kappa }\partial ^{\alpha
}f)(x)\right\vert &\leq \int_{{\mathbb{R}}^{d}}\left\vert \partial
_{y}^{\alpha }(\partial _{x}^{\gamma }s_{t}(x,y)\psi _{\kappa
}(y))\right\vert \times \left\vert f(y)\right\vert dy \\
&\leq \int_{{\mathbb{R}}^{d}}\left\vert s_{t}(x,y)\psi _{\kappa
}(y)\right\vert _{q_{1}+q_{2}}\times \left\vert f(y)\right\vert dy \\
&\leq C \int_{{\mathbb{R}}^{d}}\left\vert s_{t}(x,y)\right\vert
_{q_{1}+q_{2}}\psi _{\kappa }(y)\times \left\vert f(y)\right\vert dy \\
&\leq \frac{C}{(\lambda t)^{\theta_0(q_1+q_2+\theta_1)}} \left\Vert
f\right\Vert _{\infty }\int_{{\mathbb{R}}^{d}}\frac{\psi _{\pi
(q_{1}+q_{2},\kappa +d+1)}(x)}{\psi _{\kappa +d+1}(x-y)}\times \psi _{\kappa
}(y)dy \\
&\leq \frac{C}{(\lambda t)^{\theta_0(q_1+q_2+\theta_1)}} \left\Vert
f\right\Vert _{\infty }\int_{{\mathbb{R}}^{d}}\frac{\psi _{\pi
(q_{1}+q_{2},\kappa +d+1)+\kappa }(x)}{\psi _{d+1}(x-y)}dy \\
&\leq \frac{C}{(\lambda t)^{\theta_0(q_1+q_2+\theta_1)}}\left\Vert
f\right\Vert _{\infty }\psi _{\pi (q_{1}+q_{2},\kappa +d+1)+\kappa }(x).\end{aligned}$$This implies (\[B2\]). $\square $
We are now able to give the regularity lemma. This is the core of our approach.
\[Reg\] Suppose that Assumptions \[H1H\*1\], \[H2H\*2\] and \[HH3\] hold. We fix $t\in (0,1]$, $m\geq 1$ and $\delta_{i}>0$, $i=1,\ldots ,m$ such that $\sum_{i=1}^{m}\delta_{i}=t.$
$\mathbf{A.}$ There exists a function $\tilde{p}_{\delta_{1},...,\delta_{m}}%
\in C^{\infty }({\mathbb{R}}^{d}\times {\mathbb{R}}^{d})$ such that $$\prod_{i=1}^{m-1}(S_{\delta_{i}}U_{i})S_{\delta_{m}}f(x)=\int \tilde{p}%
_{\delta_{1},...,\delta_{m}}(x,y)f(y)dy. \label{h6}$$
$\mathbf{B.}$ We fix $q_{1},q_{2}\in {\mathbb{N}},\kappa \geq 0,p>1$ and we denote $q=q_{1}+q_{2}+(a+b)(m-1).$ One may find universal constants $C,\chi ,%
\bar{p}\geq 1$ (depending on $\kappa ,p$ and $q_{1}+q_{2})$ such that for every multi-index $\beta $ with $\left\vert \beta \right\vert \leq q_{2}$ and every $x\in {\mathbb{R}}^{d}$$$\left\Vert \partial _{x}^{\beta }\tilde{p}_{\delta _{1},...,\delta
_{m}}(x,\cdot )\right\Vert _{q_{1},\kappa ,p}\leq C\Big(\frac{2m}{\lambda t}%
\Big)^{\theta _{0}(q_{1}+q_{2}+d+2\theta _{1})}\Big(C_{q,\chi ,\bar{p}%
,\infty }(U,S)\Big(\frac{2m}{\lambda t}\Big)^{\theta _{0}(a+b)}\Big)%
^{m-1}\psi _{\chi }(x). \label{h7}$$
**Proof.** **A.** For $g=g(x,y)$, we denote $g^{x}(y):=g(x,y)$. By the very definition of $U_{i}^{\ast }$ one has$$S_{t}U_{i}f(x)=\int_{{\mathbb{R}}^{d}}U_{i}^{\ast }s_{t}^{x}(y)f(y)dy.$$As a consequence, one gets the kernel in (\[h6\]): $$\tilde{p}_{\delta_{1},\ldots ,\delta_{m}}(x,y)=\int_{{\mathbb{R}}^{d\times
(m-1)}}U_{1}^{\ast }s_{\delta_{1}}^{x}(y_{1})\Big(\prod_{j=2}^{m-1}U_{j}^{%
\ast }s_{\delta_{j}}^{y_{j-1}}(y_{j})\Big)s_{\delta_{m}}(y_{m-1},y)dy_{1}%
\cdots dy_{m-1},$$and the regularity immediately follows.
**B.** We split the proof in several steps.
**Step 1: decomposition**. Since $\sum_{i=1}^{m}\delta_{i}=t$ we may find $j\in \{1,...,m\}$ such that $\delta_{j}\geq \frac{t}{m}.$ We fix this $j$ and we write$$\prod_{i=1}^{m-1}(S_{\delta_{i}}U_{i})S_{\delta_{m}}=Q_{1}Q_{2}$$with$$Q_{1}=\prod_{i=1}^{j-1}(S_{\delta_{i}}U_{i})S_{\frac{1}{2}\delta_{j}}\quad %
\mbox{and}\quad Q_{2}=S_{\frac{1}{2}\delta_{j}}U_{j}\prod_{i=j+1}^{m-1}(S_{%
\delta_{i}}U_{i})S_{\delta_{m}}=S_{\frac{1}{2}\delta_{j}}%
\prod_{i=j}^{m-1}(U_{i}S_{\delta_{i+1}}).$$Here we use the semi-group property $S_{\frac{1}{2}\delta_{j}}S_{\frac{1}{2}%
\delta_{j}}=S_{\delta_{j}}.$
We suppose that $j\leq m-1$. In the case $j=m$ the proof is analogous but simpler. We will use Lemma \[lemmaB\] in order to estimate the terms corresponding to each of these two operators. As already seen, both $Q_1$ and $Q_2$ are given by means of smooth kernels, that we call $p_1(x,y)$ and $%
p_2(x,y)$ respectively.
**Step 2**. We take $\beta $ with $\left\vert \beta \right\vert \leq
q_{2}$ and we denote $g^{\beta ,x}(y):=\partial _{x}^{\beta }g(x,y)$. For $%
h\in L^{1}$ we write$$\begin{aligned}
& \int_{{\mathbb{R}}^{d}}h(z)\partial _{x}^{\beta }\tilde{p}_{\delta
_{1},...,\delta _{m}}(x,z)dz=\int_{{\mathbb{R}}^{d}}h(z)\int_{{\mathbb{R}}%
^{d}}\partial _{x}^{\beta }p_{1}(x,y)p_{2}(y,z)dydz \\
& \qquad =\int_{{\mathbb{R}}^{d}}\partial _{x}^{\beta }p_{1}(x,y)\int
h(z)p_{2}(y,z)dzdy=\int_{{\mathbb{R}}^{d}}\partial _{x}^{\beta
}p_{1}(x,y)Q_{2}h(y)dy \\
& \qquad =\int_{{\mathbb{R}}^{d}}Q_{2}^{\ast }p_{1}^{\beta ,x}(y)h(y)dy.\end{aligned}$$It follows that $$\partial _{x}^{\beta }\tilde{p}_{\delta _{1},...,\delta
_{m}}(x,z)=Q_{2}^{\ast }p_{1}^{\beta ,x}(z)=\prod_{i=1}^{m-j}(S_{\delta
_{m-i+1}}^{\ast }U_{m-i}^{\ast })S_{\frac{1}{2}\delta _{j}}^{\ast
}p_{1}^{\beta ,x}(z).$$We will use (\[h’\]) $m-j$ times first and (\[B1\]) then. We denote $$q_{1}^{\prime }=q_{1}+(m-j)(a+b)$$and we write $$\begin{array}{rl}
\Vert \partial _{x}^{\beta }\tilde{p}_{\delta _{1},...,\delta _{m}}(x,\cdot
)\Vert _{q_{1},\kappa ,p} & \leq C_{q_{1}^{\prime },\kappa
,p}^{m-j}(U,S)\Vert S_{\frac{1}{2}\delta _{j}}^{\ast }p_{1}^{\beta ,x}\Vert
_{q_{1}^{\prime },\kappa ,p}\smallskip \\
& \displaystyle\leq C_{q_{1}^{\prime },\kappa ,p}^{m-j}(U,S)\,C\Big(\frac{2m%
}{\lambda t}\Big)^{\theta _{0}(q_{1}^{\prime }+\theta _{1})}\Vert
p_{1}^{\beta ,x}\Vert _{0,\nu ,1}%
\end{array}
\label{h9}$$with$$\nu =\pi (q_{1}^{\prime },\kappa +d)+\kappa +d.$$**Step 3.** We denote $g_{z}(u)=\prod_{l=1}^{d}1_{(0,\infty
)}(u_{l}-z_{l})$, so that $\delta _{0}(u-z)=\partial _{u}^{\rho }g_{z}(u)$ with $\rho =(1,1,\ldots ,1).$ We take $\mu =\nu +d+1$ and we formally write $$p_{1}(x,z)=\frac{1}{\psi _{\mu }(z)}Q_{1}(\psi _{\mu }\partial ^{\rho
}g_{z})(x).$$This formal equality can be made rigorous by regularizing the Dirac function by convolution.
We denote$$q_{2}^{\prime }=q_{2}+(j-1)(a+b),\quad \eta =\pi (d+q_{2}^{\prime },\mu
+d+1)+\mu$$and we write$$|p_{1}^{\beta ,x}(z)|=|\partial _{x}^{\beta }p_{1}(x,z)|\leq \frac{\psi
_{\eta }(x)}{\psi _{\mu }(z)}\Big\Vert\frac{1}{\psi _{\eta }}\partial
^{\beta }Q_{1}(\psi _{\mu }\partial ^{\rho }g_{z})\Big\Vert_{\infty }.$$Since $\mu =\nu +d+1$, $\int \psi _{\nu }\times \frac{1}{\psi _{\mu }}%
<\infty $, so using (\[NOT3c\]), we obtain (recall that $\left\vert \beta
\right\vert \leq q_{2})$ $$\begin{aligned}
\Vert p_{1}^{\beta ,x}\Vert _{0,\nu ,1}& \leq C\psi _{\eta }(x)\sup_{z\in {%
\mathbb{R}}^{d}}\big\Vert\frac{1}{\psi _{\eta }}\partial ^{\beta }Q_{1}(\psi
_{\mu }\partial ^{\rho }g_{z})\big\Vert_{\infty }\leq C\psi _{\eta
}(x)\sup_{z\in {\mathbb{R}}^{d}}\big\Vert\frac{1}{\psi _{\eta }}Q_{1}(\psi
_{\mu }\partial ^{\rho }g_{z})\big\Vert_{q_{2},\infty } \\
& \leq C\psi _{\eta ^{\prime }}(x)\sup_{z\in {\mathbb{R}}^{d}}\big\Vert %
Q_{1}(\psi _{\mu }\partial ^{\rho }g_{z})\big\Vert_{q_{2},-\eta ,\infty }.\end{aligned}$$Using (\[h\]) $j-1$ times and (\[B2\]) (with $\kappa =\mu )$ we get $$\begin{aligned}
\left\Vert Q_{1}(\psi _{\mu }\partial ^{\rho }g_{z})\right\Vert
_{q_{2},-\eta ,\infty } &\leq &C_{q_{2}^{\prime },\eta ,\infty
}^{j-1}(U,S)\Vert S_{\frac{1}{2}\delta _{j}}(\psi _{\mu }\partial ^{\rho
}g_{z})\Vert _{q_{2}^{\prime },-\eta ,\infty } \\
&\leq &C_{q_{2}^{\prime },\eta ,\infty }^{j-1}(U,S)\left\Vert
g_{z}\right\Vert _{\infty }\,C\Big(\frac{2m}{\lambda t}\Big)^{\theta
_{0}(q_{2}^{\prime }+d+\theta _{1})}.\end{aligned}$$Since $\left\Vert g_{z}\right\Vert _{\infty }=1$ we obtain$$\Vert p_{1}^{\beta ,x}\Vert _{0,\nu ,1}\leq \psi _{\eta }(x)C_{q_{2}^{\prime
},\eta ,\infty }^{j-1}(U,S)\,C\Big(\frac{2m}{\lambda t}\Big)^{\theta
_{0}(q_{2}^{\prime }+d+\theta _{1})}.$$By inserting in (\[h9\]) we obtain (\[h7\]), so the proof is completed. $%
\square $
Proofs of the main results {#sect:proofs}
==========================
In the present section, we use the results in Section \[sect:reg\] in order to prove Theorem \[Transfer\] (Section \[sect:proofTransfer\]) and Theorem \[J\] (Section \[sect:proofJ\]).
Proof of Theorem \[Transfer\] {#sect:proofTransfer}
-----------------------------
**Step 0: constants and parameters set-up.** In this step we will choose some parameters which will be used in the following steps. To begin we stress that we work with measures on ${\mathbb{R}}^{d}\times {\mathbb{R}}%
^{d}$ so the dimension of the space is $2d$ (and not $d).$ We recall that in our statement the quantities $q,d,p,\delta _{\ast },\varepsilon _{\ast
},\kappa $ and $n$ are given and fixed. In the following we will denote by $%
C $ a constant depending on all these parameters and which may change from one line to another. We define $$m_{0}=1+\Big\lfloor\frac{q+2d/p_{\ast }}{\delta _{\ast }}\Big\rfloor>0
\label{H4}$$and given $h\in {\mathbb{N}}$ we denote $$\rho _{h}=\frac{(a+b)m_{0}+q+2d/p_{\ast }}{2h}. \label{H5'}$$Notice that this is equal to the constant $\rho _{h}$ defined in (\[reg5\]) corresponding to $k=(a+b)m_{0}$ and $q$ and to $2d$ (instead of $d).$
**Step 1: a Lindeberg-type method to decompose $P_t-P^n_t$.** We fix (once and for all) $t\in(0,1]$ and we write$$P_{t}f-P_{t}^{n}f=\int_{0}^{t}\partial
_{s}(P_{t-s}^{n}P_{s})fds=\int_{0}^{t}P^n
_{t-s}(L-L_{n})P_{s}fds=\int_{0}^{t}P^n_{t-s}\Delta _{n}P_{s}fds$$We iterate this formula $m_{0}$ times (with $m_{0}$ chosen in (\[H4\])) and we obtain$$P_{t}f(x)-P_{t}^{n}f(x)=\sum_{m=1}^{m_{0}-1}I_{n}^{m}f(x)+R_{n}^{m_{0}}f(x)
\label{R2}$$with (we put $t_{0}=t)$$$\begin{aligned}
I_{n}^{m}f(x)&=\int_{0}^{t}dt_{1}\int_{0}^{t_{1}}dt_{2}...%
\int_{0}^{t_{m-1}}dt_{m}\prod_{i=0}^{m-1}(P_{t_{i}-t_{i+1}}^{n}\Delta
_{n})P_{t_{m}}^{n}f(x),\quad 1\leq m\leq m_0-1, \\
R_{n}^{m_{0}}f(x)&=\int_{0}^{t}dt_{1}\int_{0}^{t_{1}}dt_{2}...%
\int_{0}^{t_{m_{0}-1}}dt_{m_{0}}\prod_{i=0}^{m_{0}-1}(P_{t_{i}-t_{i+1}}^{n}%
\Delta _{n})P_{t_{m_{0}}}f(x).\end{aligned}$$
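For the reader's convenience, the derivative under the integral in the Lindeberg decomposition above is a one-line computation, using that each semigroup commutes with its own generator:

```latex
\partial _{s}P_{t-s}^{n}=-L_{n}P_{t-s}^{n}=-P_{t-s}^{n}L_{n},\qquad
\partial _{s}P_{s}=LP_{s},
% so that
\partial _{s}\big(P_{t-s}^{n}P_{s}\big)f
  =-P_{t-s}^{n}L_{n}P_{s}f+P_{t-s}^{n}LP_{s}f
  =P_{t-s}^{n}(L-L_{n})P_{s}f
  =P_{t-s}^{n}\Delta _{n}P_{s}f .
```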
In order to analyze $I_{n}^{m}f$ we use Lemma \[Reg\] for the semigroup $%
S_{t}=P_{t}^{n}$ and for the operators $U_{i}=\Delta _{n}=L-L_{n}$ (the same for each $i$), with $\delta _{i}=t_{i}-t_{i+1}$, $i=0,\ldots ,m$ (with $%
t_{m+1}=0$). So the hypotheses (\[h1\]) and (\[h1’\]) in Assumption \[H1H\*1\] coincide with the requirements (\[TR3\]) and (\[TR3’\]) in Assumption \[A1A\*1\]. And we have $%
C_{q,\kappa ,\infty }(U)=C_{q,\kappa ,p}(U)=C\varepsilon _{n}.$ Moreover the hypotheses (\[h2\]) and (\[h2’\]) in Assumption \[H2H\*2\] coincide with the hypotheses (\[TR2\]) and (\[TR2’\]) in Assumption \[A2A\*2\]. And we have $%
C_{q,\kappa ,\infty }(P^{n})=C_{q,\kappa ,p}(P^{n})=\Lambda _{n}$. Hence, $$C_{q,\kappa ,\infty ,p}(\Delta _{n},P^{n})=C\,\varepsilon _{n}\times \Lambda
_{n}, \label{app1}$$Finally, the hypothesis (\[h3\]) in Assumption \[HH3\] coincides with (\[TR5\]) in Assumption \[A3\]. So, we can apply Lemma \[Reg\]: by using (\[h6\]) we obtain$$I_{n}^{m}f(x)=\int_{0}^{t}dt_{1}...\int_{0}^{t_{m-1}}dt_{m}\int
p_{t-t_{1},t_{1}-t_{2},...,t_{m}}^{n,m}(x,y)f(y)dy.$$We denote$$\phi
_{t}^{n,m_{0}}(x,y)=p_{t}^{n}(x,y)+\sum_{m=1}^{m_{0}-1}\int_{0}^{t}dt_{1}...%
\int_{0}^{t_{m-1}}dt_{m}p_{t-t_{1},t_{1}-t_{2},...,t_{m}}^{n,m}(x,y)$$so that (\[R2\]) reads$$\int f(y)P_{t}(x,dy)=\int f(y)\phi _{t}^{n,m_{0}}(x,y)dy+R_{n}^{m_{0}}f(x).$$We recall that $\Psi _{\eta ,\kappa }$ is defined in (\[R7”\]) and we define the measures on ${\mathbb{R}}^{d}\times {\mathbb{R}}^{d}$ given by $$\mu ^{\eta ,\kappa }(dx,dy)=\Psi _{\eta ,\kappa }(x,y)P_{t}(x,dy)dx\quad %
\mbox{and}\quad \mu _{n}^{\eta ,\kappa ,m_{0}}(dx,dy)=\Psi _{\eta ,\kappa
}(x,y)\phi _{t}^{n,m_{0}}(x,y)dxdy.$$So, the proof consists in applying Lemma \[REG\] to $\mu =\mu ^{\eta
,\kappa }$ and $\mu _{n}=\mu _{n}^{\eta ,\kappa ,m_{0}}$.
**Step 2: analysis of the principal term.** We study here the estimates for $f_{n}(x,y)=\Psi _{\eta ,\kappa }\phi _{t}^{n,m_{0}}(x,y)$ which are required in (\[reg9\]).
We first use (\[h7\]) in order to get estimates for $%
p_{t-t_{1},t_{1}-t_{2},...,t_{m}}^{n,m}(x,y)$. We fix $q_{1},q_{2}\in {\mathbb{N}},\kappa \geq 0,p>1$ and we recall that in Lemma \[Reg\] we introduced $\overline{q}=q_{1}+q_{2}+(a+b)(m_{0}-1).$ Moreover in Lemma \[Reg\] one produces $\chi $ such that (\[h7\]) holds true: for every multi-index $\beta $ with $\left\vert \beta \right\vert \leq q_{2}$$$\begin{array}{l}
\left\Vert \psi _{\kappa }\partial _{x}^{\beta
}p_{t-t_{1},t_{1}-t_{2},...,t_{m}}^{n,m}(x,\cdot )\right\Vert
_{q_{1},p}\smallskip \\
\displaystyle\quad \leq C\Big(\frac{1}{\lambda _{n}t}\Big)^{\theta
_{0}(q_{1}+q_{2}+d+2\theta _{1})}\times \left( \varepsilon _{n}\Lambda _{n}%
\Big(\frac{1}{\lambda _{n}t}\Big)^{\theta _{0}(a+b)}\right) ^{m}\psi _{\chi
}(x).%
\end{array}%$$We recall the constant defined in (\[R7’\]): $$\Phi _{n}(\delta )=\varepsilon _{n}\Lambda _{n}\times \frac{1}{\lambda
_{n}^{\theta _{0}(a+b+\delta )}}.$$Denote$$\xi _{1}(q)=q+d+2\theta _{1}+m_{0}(a+b),\qquad \omega _{1}(q)=q+d+2\theta
_{1}.$$With this notation, if $\left\vert \beta \right\vert \leq q_{2}$ we have $$\begin{aligned}
\left\Vert \psi _{\kappa }\partial _{x}^{\beta }\phi _{t}^{n,m_{0}}(x,\cdot
)\right\Vert _{q_{1},p} &\leq &C\Big(\frac{1}{\lambda _{n}t}\Big)^{\theta
_{0}(q_{1}+q_{2}+d+2\theta _{1})}\times \left( \varepsilon _{n}\Lambda _{n}%
\Big(\frac{1}{\lambda _{n}t}\Big)^{\theta _{0}(a+b)}\right) ^{m_{0}}\psi
_{\chi }(x) \label{h12} \\
&=&Ct^{-\theta _{0}\xi _{1}(q_{1}+q_{2})}\lambda _{n}^{-\theta _{0}\omega
_{1}(q_{1}+q_{2})}\Phi _{n}^{m_{0}}(0)\psi _{\chi }(x).\end{aligned}$$
We take $l=2h+q,l^{\prime }=2h$ and we take $q(l)=l+(a+b)m_{0}.$ Moreover we fix $q_{1}$ and $q_{2}$ (so $q=q_{1}+q_{2}\leq l)$ and we take $\chi $ to be the one in (\[h12\]). Moreover we take $\eta $ sufficiently large in order to have $p\eta -2h-p\chi \geq d+1.$ This guarantees that $$\int_{{\mathbb{R}}^{d}}\frac{dx}{\psi _{p\eta -l^{\prime }-p\chi }(x)}%
=C<\infty . \label{h14}$$By (\[NOT3c\]) and (\[h12\]) $$\begin{aligned}
& \left\Vert \Psi _{\eta ,\kappa }\phi _{t}^{n,m_{0}}\right\Vert
_{l,l^{\prime },p}^{p}\leq C\sum_{\left\vert \alpha \right\vert +\left\vert
\beta \right\vert \leq l}\int_{{\mathbb{R}}^{d}}\int_{{\mathbb{R}}^{d}}\Psi
_{\eta ,\kappa }^{p}(x,y)\left\vert \partial _{x}^{\alpha }\partial
_{y}^{\beta }\phi _{t}^{n,m_{0}}(x,y)\right\vert ^{p}\psi _{l^{\prime
}}(x)\psi _{l^{\prime }}(y)dydx \\
& \quad =C\sum_{\left\vert \alpha \right\vert +\left\vert \beta \right\vert
\leq l}\int_{{\mathbb{R}}^{d}}\frac{1}{\psi _{p\eta -l^{\prime }}(x)}\int_{{%
\mathbb{R}}^{d}}\left\vert \psi _{\kappa +l^{\prime }/p}(y)\partial
_{x}^{\alpha }\partial _{y}^{\beta }\phi _{t}^{n,m_{0}}(x,y)\right\vert
^{p}dydx \\
& \quad \leq C\sum_{\left\vert \alpha \right\vert +\left\vert \beta
\right\vert \leq l}\int_{{\mathbb{R}}^{d}}\frac{1}{\psi _{p\eta -l^{\prime
}}(x)}\left\Vert \psi _{\kappa +l^{\prime }/p}\partial _{x}^{\alpha }\phi
_{t}^{n,m_{0}}(x,\cdot )\right\Vert _{\left\vert \beta \right\vert ,p}^{p}dx
\\
& \quad \leq C(t^{-\theta _{0}\xi _{1}(l)}\lambda _{n}^{-\theta _{0}\omega
_{1}(l)}\Phi _{n}(0))^{pm_{0}}\int_{{\mathbb{R}}^{d}}\frac{dx}{\psi _{p\eta
-l^{\prime }-p\chi }(x)}.\end{aligned}$$We conclude that $$\left\Vert \Psi _{\eta ,\kappa }\phi _{t}^{n,m_{0}}\right\Vert
_{2h+q,2h,p}\leq Ct^{-\theta _{0}\xi _{1}(q+2h)}\times \lambda _{n}^{-\theta
_{0}\omega _{1}(q+2h)}\Phi _{n}^{m_{0}}(0)=:\theta (n). \label{R6}$$By (\[TRa\]) $\theta (n)\uparrow +\infty $ and $\Theta \theta (n)\geq
\theta (n+1)$ with $$\Theta =\gamma ^{\theta _{0}((a+b)m_{0}+q+2h+d+2\theta _{1})+m_{0}}\geq 1.$$In the following we will choose $h$ sufficiently large, depending on $\delta
_{\ast },m_{0},q,d$ and $p.$ So $\Theta $ is a constant depending on $\delta
_{\ast },m_{0},q,d,a,b$,$\gamma $ and $p,$ as the constants considered in the statement of our theorem.
**Step 3: analysis of the remainder**. We study here $%
d_{m_{0}}(n):=d_{(a+b)m_{0}}(\mu ^{\eta ,\kappa },\mu _{n}^{\eta ,\kappa
,m_{0}})$ as required in (\[reg10\]): we prove that, if $\eta \geq \kappa
+d+1,$ then$$d_{m_{0}}(n)\leq C(\Lambda _{n}\varepsilon _{n})^{m_{0}}=C\lambda
_{n}^{\theta _{0}(a+b+\delta _{\ast })m_{0}}\Phi _{n}^{m_{0}}(\delta _{\ast
}). \label{R9}$$
Using first $(A_{1})$ and $(A_{2})$ (see (\[TR3\]) and (\[TR2\])) and then $(A_{4})$ (see (\[R7\])) we obtain $$\left\Vert \prod_{i=0}^{m_{0}-1}(P_{t_{i}-t_{i+1}}^{n}\Delta
_{n})P_{t_{m_{0}}}f\right\Vert _{0,-\kappa ,\infty }\leq C\left\Vert
f\right\Vert _{(a+b)m_{0},-\kappa ,\infty }(\Lambda _{n}\varepsilon
_{n})^{m_{0}}$$which gives $$\left\Vert R_{n}^{m_{0}}f\right\Vert _{0,-\kappa ,\infty }\leq C\left\Vert
f\right\Vert _{(a+b)m_{0},-\kappa ,\infty }(\Lambda _{n}\varepsilon
_{n})^{m_{0}}.$$Using now the equivalence between (\[NOT6a\]) and (\[NOT6b\]) we obtain$$\left\Vert \frac{1}{\psi _{\kappa }}R_{n}^{m_{0}}(\psi _{\kappa
}f)\right\Vert _{\infty }\leq C\left\Vert f\right\Vert _{(a+b)m_{0},\infty
}(\Lambda _{n}\varepsilon _{n})^{m_{0}}. \label{R8}$$We take now $g\in C^{\infty }({\mathbb{R}}^{d}\times {\mathbb{R}}^{d}),$ we denote $g_{x}(y)=g(x,y),$ and we write $$\begin{aligned}
& \left\vert \int_{{\mathbb{R}}^{d}\times {\mathbb{R}}^{d}}g(x,y)(\mu ^{\eta
,\kappa }-\mu _{n}^{\eta ,\kappa ,m_{0}})(dx,dy)\right\vert \\
& \quad \leq \int_{{\mathbb{R}}^{d}}\frac{dx}{\psi _{\eta }(x)}\left\vert
\int_{{\mathbb{R}}^{d}}g_{x}(y)\psi _{\kappa }(y)(P_{t}(x,dy)-\phi
_{t}^{n,m_{0}}(x,y))dy\right\vert \\
& \quad \leq \int_{{\mathbb{R}}^{d}}\frac{1}{\psi _{\eta -\kappa }(x)}%
\left\vert \frac{1}{\psi _{\kappa }(x)}R_{n}^{m_{0}}(\psi _{\kappa
}g_{x})(x)\right\vert dx \\
& \quad \leq C\sup_{x\in {\mathbb{R}}^{d}}\left\Vert g_{x}\right\Vert
_{(a+b)m_{0},\infty }(\Lambda _{n}\varepsilon _{n})^{m_{0}}\end{aligned}$$the last inequality being a consequence of (\[R8\]) and of $\eta -\kappa
\geq d+1$. Now (\[R9\]) is proved because $\sup_{x\in {\mathbb{R}}%
^{d}}\left\Vert g_{x}\right\Vert _{(a+b)m_{0},\infty }$ $\leq \left\Vert
g\right\Vert _{(a+b)m_{0},\infty }$.
**Step 4: use of Lemma \[REG\] and proof of A. and B.** We recall that $\rho _{h}$ is defined in (\[H5’\]) and we estimate $$d_{m_{0}}(n)\times \theta (n)^{\rho _{h}}\leq Ct^{-\theta _{0}\xi
_{2}(h)}\lambda _{n}^{\theta _{0}\omega _{2}(h)}\Phi _{n}^{m_{0}(1+\rho
_{h})}(\delta _{\ast })$$with$$\xi _{2}(h)=\rho _{h}\xi _{1}(q+2h)=\rho _{h}(q+2h+d+2\theta _{1}+m_{0}(a+b))$$and$$\begin{aligned}
\omega _{2}(h) &=&(a+b+\delta _{\ast })m_{0}-\rho _{h}(q+2h+d+2\theta _{1})
\\
&=&\delta _{\ast }m_{0}-\frac{(a+b)m_{0}+q+2d/p_{\ast }}{2h}(q+d+2\theta
_{1})-(q+2d/p_{\ast }).\end{aligned}$$By our choice of $m_{0}$ we have $$\delta _{\ast }m_{0}>q+2d/p_{\ast }$$so, taking $h$ sufficiently large we get $\omega _{2}(h)>0.$ And we also have $\xi _{2}(h)\leq \xi _{3}:=(a+b)m_{0}+q+\frac{2d}{p_{\ast }}%
+\varepsilon _{\ast }$ and $\rho _{h}\leq \varepsilon _{\ast }.$ So we finally get$$d_{m_{0}}(n)\times \theta (n)^{\rho _{h}}\leq Ct^{-\theta _{0}\xi _{3}}\Phi
_{n}^{m_{0}(1+\varepsilon _{\ast })}(\delta _{\ast }). \label{R8'}$$The above inequality guarantees that (\[reg10\]) holds so that we may use Lemma \[REG\]. We take $\eta >\kappa +d$ and, using $(A_{4})$ (see (\[R7\])) we obtain $$\left\vert \mu ^{\eta ,\kappa }\right\vert =\int_{{\mathbb{R}}^{d}\times {\mathbb{R}}^{d}}\frac{\psi _{\kappa }(y)}{\psi _{\eta }(x)}P_{t}(x,dy)dx\leq C\int_{{\mathbb{R}}^{d}}\frac{dx}{\psi _{\eta -\kappa }(x)}<\infty .$$Then, $A(\delta )<C$ (see (\[reg12’\])). One also has $B(\varepsilon
)<\infty $ (see (\[reg12”\])) and finally (see (\[reg11\])) $$C_{h,n_{\ast }}(\varepsilon )\leq Ct^{-\theta _{0}\xi _{3}}\Phi
_{n}^{m_{0}(1+\varepsilon _{\ast })}(\delta _{\ast }).$$We have used here (\[R8’\]). For large $h$ we also have $$\theta (n)^{\rho _{h}}\leq C(\lambda _{n}t)^{-\theta _{0}((a+b)m_{0}+q+\frac{%
2d}{p_{\ast }})(1+\varepsilon _{\ast })}\Phi _{n}^{\varepsilon _{\ast }}(0).$$
Now (\[reg12\]) gives (\[TR6’\]). So **A** and **B** are proved.
**Step 5: proof of C.** We apply **B.** with $q$ replaced by $\bar{%
q}=q+1$, so $\Psi _{\eta ,\kappa }p_{t}\in W^{\bar{q},p}({\mathbb{R}}^{d}\times {%
\mathbb{R}}^{d})=W^{\bar{q},p}({\mathbb{R}}^{2d})$. Since $\bar{q}>2d/p$ (here the dimension is $2d$), we can use Morrey’s inequality: for every $\alpha $, $\beta $ with $|\alpha |+|\beta |\leq \lfloor\bar{q}-2d/p\rfloor =q$, one has $|\partial _{x}^{\alpha }\partial _{y}^{\beta }(\Psi _{\eta ,\kappa }p_{t})(x,y)|\leq C\Vert \Psi _{\eta ,\kappa }p_{t}\Vert _{\bar{q},p}
$. By (\[TR6’\]), one has (with $\bar{\mathfrak{m}}=1+\frac{\bar{q}+2d/p_{\ast }}{\delta _{\ast }}$) $$\left\vert \partial _{x}^{\alpha }\partial _{y}^{\beta }(\Psi _{\eta ,\kappa
}p_{t})(x,y)\right\vert \leq C\Big(1+\Big(\frac{1}{\lambda _{n}t}\Big)^{(a+b)%
\bar{\mathfrak{m}}+\bar{q}+2d/p_{\ast }}+\Phi _{t,n,r}^{\bar{\mathfrak{m}}}(\delta
_{\ast })\Big)^{(1+\varepsilon _{\ast })}$$i.e. (using (\[NOT3c\])), $$\left\vert \partial _{x}^{\alpha }\partial _{y}^{\beta
}p_{t}(x,y)\right\vert \leq C\Big(1+\Big(\frac{1}{\lambda _{n}t}\Big)^{(a+b)%
\bar{\mathfrak{m}}+\bar{q}+2d/p_{\ast }}+\Phi _{t,n,r}^{\bar{\mathfrak{m}}}(\delta
_{\ast })\Big)^{(1+\varepsilon _{\ast })}\times \frac{1}{\Psi _{\eta ,\kappa
}(x,y)}$$Now, by a standard calculus, ${\Psi _{\eta ,\kappa }(x,y)}\geq C_{\kappa }%
\frac{\psi _{\kappa }(x-y)}{\psi _{\eta +\kappa }(x)}$ (use that $\psi
_{\kappa }(x-y)\leq C_{\kappa }\psi _{\kappa }(x)\psi _{\kappa
}(-y)=C_{\kappa }\psi _{\kappa }(x)\psi _{\kappa }(y)$), so (\[TR6d\]) follows. $\square $
Proof of Theorem \[TransferBIS-new\] {#sect:proofTransferBIS}
------------------------------------
By applying Theorem \[Transfer\], $P_{t}(x,dy)=p_{t}(x,y)dy$ and $p_{t}$ satisfies (\[TR6’\]), which we rewrite here as $$\|\Psi_{\eta,\kappa}p_t\|_{q,p}\leq Ct^{-\theta_\ast(q+\theta_1)},$$ where $\theta _{\ast }=\theta _{0}(1+\frac{a+b}{\delta}%
)(1+\varepsilon )$ and $\theta _{1}$ is computed from (\[TR6’\]) (the precise value is not important here). The constant $C$ in the above inequality depends on $\kappa ,\eta, \varepsilon ,\delta, q $. Moreover, by choosing $\eta>\kappa+d$, $$\int_{\R^d\times \R^d}\Psi_{\eta,\kappa}(x,y)p_t(x,y)dx dy
= \int_{\R^d}\frac 1{\psi_{\eta}(x)}\times P_t\psi_{\kappa}(x)dx
\leq \int_{\R^d}\frac 1{\psi_{\eta-\kappa}(x)}dx=m<\infty.$$ So, Lemma \[reg\] (recall that we are working here with $\R^d\times \R^d=\R^{2d}$) gives $$\|\Psi_{\eta,\kappa}p_t\|_{q,p}\leq C_\ast t^{-\theta_\ast(q+2d/p_\ast)}.$$ We choose now $p>2d$ and by Morrey’s inequality, $$\|\Psi_{\eta,\kappa}p_t\|_{q,\infty}\leq
C \|\Psi_{\eta,\kappa}p_t\|_{q+1,p}\leq
C t^{-\theta_\ast(q+1+2d/p_\ast)}.$$ By taking $p=2d/(1-\varepsilon)$, we get $$\|\Psi_{\eta,\kappa}p_t\|_{r,\infty}\leq
C t^{-\theta_\ast(r+2d+\varepsilon)},$$ where $C$ denotes here a constant depending on $\kappa ,\eta, \varepsilon$. This gives that, for every $x,y\in\R^d$ and for every multi-index $\alpha$ and $\beta$, $$\left\vert \partial _{x}^{\alpha }\partial _{y}^{\beta
}p_{t}(x,y)\right\vert \leq C\times t^{-\theta_\ast(|\alpha|+|\beta|+2d+\varepsilon)}\,\frac{\psi_{\eta}(x)}{\psi_{\kappa}(y)}.$$ The statement now follows from the above estimate. $\square$
Proof of Theorem \[J\] {#sect:proofJ}
----------------------
In this section we give the proof of Theorem \[J\].
**Step 1.** Let$$\omega _{t}(dt_{1},...,dt_{m})=\frac{m!}{t^{m}}1_{\{0<t_{1}<...<t_{m}<t%
\}}dt_{1}\cdots dt_{m}$$and (with $t_{m+1}=t$) $$I_{m}(f)(x)={\mathbb{E}}\Big(1_{\{N(t)=m\}}\int_{{\mathbb{R}}^m_+}\Big(%
\prod_{i=0}^{m-1}P_{t_{m-i+1}-t_{m-i}}U_{Z_{m-i}}\Big)P_{t_1}f(x)\,\omega
_{t}(dt_{1},...,dt_{m})\Big).$$Since, conditionally on $N(t)=m,$ the law of $(T_{1},...,T_{m})$ is given by $\omega _{t}(dt_{1},...,dt_{m}),$ it follows that$$\overline{P}_{t}f(x)=\sum_{m=0}^{\infty
}I_{m}(f)(x)=\sum_{m=0}^{m_{0}}I_{m}(f)(x)+R_{m_{0}}f(x)$$with$$R_{m_{0}}f(x)=\sum_{m=m_{0}+1}^{\infty }I_{m}(f)(x).$$
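Two elementary facts will be used below in Step 3. Assuming, as the constants appearing there indicate, that $N$ is a Poisson process of intensity $\rho $, one has, for $t\in (0,1]$,

```latex
P(N(t)=m)=e^{-\rho t}\,\frac{(\rho t)^{m}}{m!}\leq \frac{\rho ^{m}}{m!},
% and, for x>0, using m!\geq m_{0}!\,(m-m_{0})! for m>m_{0}:
\sum_{m\geq m_{0}+1}\frac{x^{m}}{m!}
  \leq \frac{x^{m_{0}}}{m_{0}!}\sum_{k\geq 1}\frac{x^{k}}{k!}
  \leq \frac{x^{m_{0}}}{m_{0}!}\,e^{x}.
```

Applied with $x=c_{\kappa }\rho $, the second bound gives the tail estimate for $d_{0}(\mu ^{\eta ,\kappa },\mu ^{\eta ,\kappa ,m_{0}})$.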
**Step 2.** We first analyze the regularity of $I_{m}(f).$ We apply Lemma \[Reg\]. Here $S_{t}=P_{t}$, so Assumptions \[H2H\*2\] and \[HH3\] hold due to Assumptions \[H2H\*2-P\] and \[H3\] respectively. Moreover, here $U_{i}=U_{Z_{m-i}}$, so Assumption \[H1H\*1\] is satisfied uniformly in $\omega $ as observed in Remark \[rem-J\]. Notice that $a=b=0$ in our case. Then Lemma \[Reg\] gives$$\Big(\prod_{i=0}^{m-1}P_{t_{m-i+1}-t_{m-i}}U_{Z_{m-i}}\Big)%
P_{t_{1}}f(x)=\int p_{t_{1},t_{2}-t_{1},...,t-t_{m}}(x,y)f(y)dy$$and, for $q_{1},q_{2}\in {\mathbb{N}},\kappa \geq 0,p>1$ and $\left\vert
\beta \right\vert \leq q_{2}$ we have $$\left\Vert \partial _{x}^{\beta }p_{t_{1},t_{2}-t_{1},...,t-t_{m}}(x,\cdot
)\right\Vert _{q_{1},\kappa ,p}\leq \theta _{q,t}(m)\times \psi _{\chi }(x).
\label{j1}$$with$$\theta _{q,t}(m)=\frac{C\times m^{q+d+2\theta _{1}}}{(\lambda t)^{\theta
_{0}(q+d+2\theta _{1})}}\times C_{q,\chi ,p,\infty }^{m-1}(P,U).$$Here $C$ and $\chi $ are constants which depend on $q_{1},q_{2}$ and $\kappa
.$ We notice that $$\theta _{q,t}(m+1)\leq C\times C_{q,\chi ,p,\infty }(P,U)\times \theta
_{q,t}(m).$$We summarize: for each fixed $q,\chi ,\kappa ,p$ and each $\delta >0$ there exists some constants $\Theta \geq 1$ and $Q\geq 1$ (depending on $q,\chi
,\kappa ,p$ and $\delta $ but not on $m$ and on $t)$ such that for every $%
m\in \N$ $$\begin{aligned}
\theta _{q,t}(m+1)
&\leq \Theta \times \theta _{q,t}(m),\qquad \mbox{and}
\label{j3} \\
\theta _{q,t}(m)
& \leq \frac{Q^{m q}}{(\lambda t)^{\theta _{0}(q+d+2\theta_1)}} .
\label{j4}\end{aligned}$$
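One way to obtain (\[j4\]) from the definition of $\theta _{q,t}(m)$, for $q\geq 1$ (the range in which it is used below), is the elementary bound $m^{a}\leq a!\,e^{m}$ (compare with the series of $e^{m}$), assuming as one may that all the constants involved are $\geq 1$:

```latex
% with a=q+d+2\theta_{1} and C_{*}=C_{q,\chi ,p,\infty }(P,U):
C\,m^{a}\,C_{*}^{m-1}\leq C\,a!\,e^{m}C_{*}^{m}\leq (C\,a!\,eC_{*})^{m}=:Q^{m}\leq Q^{mq},
\qquad m\geq 1,
% so that
\theta _{q,t}(m)=\frac{C\,m^{a}}{(\lambda t)^{\theta _{0}a}}\,C_{*}^{m-1}
  \leq \frac{Q^{mq}}{(\lambda t)^{\theta _{0}(q+d+2\theta _{1})}} .
```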
We define now $$\phi _{t}^{m_{0}}(x,y)=\sum_{m=0}^{m_{0}}\int_{{\mathbb{R}}%
_{+}^{m}}p_{t_{1},t_{2}-t_{1},...,t-t_{m}}(x,y)\omega _{t}(dt_{m}...dt_{1}).$$Using (\[j1\]), standard computations give: for every $l,l^{\prime }\in {\mathbb{N}}$, $p>1$ and $\kappa \in {\mathbb{N}}
$ there exists $\eta _{0}\in {\mathbb{N}}$ such that for every $\eta >\eta
_{0}$, $$\left\Vert \Psi _{\eta ,\kappa }\phi _{t}^{m_{0}}\right\Vert _{l,l^{\prime
},p}\leq \theta _{l,t}(m_{0}). \label{j2}$$
**Step 3.** We fix $\eta >\eta _{0}$ and $\eta >\kappa +d$ and we define the measures$$\mu ^{\eta ,\kappa }(dx,dy)=\Psi _{\eta ,\kappa }(x,y)\overline{P}%
_{t}(x,dy)dx\quad \mbox{and}\quad \mu ^{\eta ,\kappa ,m_{0}}(dx,dy)=\Psi _{\eta
,\kappa }(x,y)\phi _{t}^{m_{0}}(x,y)dxdy.$$Let $g=g(x,y)$ be a bounded function and set $g_{x}(y)=g(x,y)$. We have, $$\begin{aligned}
\Big|\int gd\mu ^{\eta ,\kappa }-\int gd\mu ^{\eta ,\kappa ,m_{0}}\Big|&
\leq \int \frac{1}{\psi _{\eta -\kappa }(x)}\Big\|\frac{1}{\psi _{\kappa }}%
R_{m_{0}}(\psi _{\kappa }g_{x})\Big\|_{\infty }\,dx \\
& \leq \sum_{m\geq m_{0}+1}\int \frac{1}{\psi _{\eta -\kappa }(x)}\Big\|%
\frac{1}{\psi _{\kappa }}I_{m}(\psi _{\kappa }g_{x})\Big\|_{\infty }\,dx.\end{aligned}$$We deal with the norm in the integral above. By iterating (\[J6a\]) and (\[J5\]) (with $q=0$) we get $$\Big\|\Big(\prod_{i=0}^{m-1}P_{t_{m-i+1}-t_{m-i}}U_{Z_{m-i}}\Big)%
P_{t_{1}}(\psi _{\kappa }g_{x})\Big\|_{0,-\kappa ,\infty }\leq c_{\kappa
}^{m}\Vert \psi _{\kappa }g_{x}\Vert _{0,-\kappa ,\infty }\leq c_{\kappa
}^{m}\Vert g\Vert _{\infty },$$where $c_{\kappa }>0$ is a constant depending on the constants appearing in (\[J6a\]) and (\[J5\]). Therefore, we obtain $$\Big\|\frac{1}{\psi _{\kappa }}I_{m}(\psi _{\kappa }g_{x})\Big\|_{\infty
}\leq \,c_{\kappa }^{m}P(N(t)=m)\Vert g\Vert _{\infty }.$$Since $\eta -\kappa >d$, it follows that for every $m_0\geq 1$ (recall that $t<1$), $$\begin{aligned}
d_{0}(\mu ^{\eta ,\kappa },\mu ^{\eta ,\kappa ,m_{0}})
\leq \sum_{m\geq m_{0}+1}\frac{(c_\kappa \rho)^m}{m!}
\leq \frac{(c_\kappa \rho)^{m_0}}{{m_0}!}\, e^{c_\kappa \rho}.\end{aligned}$$ So, for $m\geq
1$ we have, for every $l$$$\begin{aligned}
\sup_{m\geq 1}d_{0}(\mu ^{\eta ,\kappa },\mu
^{\eta ,\kappa ,m})\times \theta _{l,t}(m)^{r}
&\leq \frac{1}{(\lambda
t)^{\theta _{0}(l+d+2\theta _{1})r}}\times \sup_{m\geq 1}\frac{(c_\kappa \rho Q^r)^{m}}{{m}!}\, e^{c_\kappa \rho}\nonumber\\
&\leq \frac{e^{c_\kappa \rho (1+Q^r)}}{(\lambda
t)^{\theta _{0}(l+d+2\theta _{1})r}}.
\label{J9-bis}\end{aligned}$$ We now use Lemma \[REG\] and we get $\mu ^{\eta ,\kappa }(dx,dy)=p^{\eta
,\kappa }(x,y)dxdy$ with $p^{\eta ,\kappa }\in C^{\infty }({\mathbb{R}}%
^{d}\times {\mathbb{R}}^{d}).$ And one concludes that $\overline{P}%
_{t}(x,dy)=\overline{p}_{t}(x,y)dxdy$ with $\overline{p}_{t}\in C^{\infty }({%
\mathbb{R}}^{d}\times {\mathbb{R}}^{d}).$
We will now obtain estimates of $\overline{p}_{t}.$ We fix $h\in {\mathbb{N}}
$ (to be chosen sufficiently large, in a moment) and we recall that in (\[reg5\]) we have defined $\rho _{h}=(q+2d/p_{\ast })/2h$ (in our case $k=0$ and we work on ${\mathbb{R}}^{d}\times {\mathbb{R}}^{d}\sim {\mathbb{R}}^{2d}$). So, with the notation from (\[reg11\]) (with $n_\ast=1$) $$C_{h,1 }(\varepsilon )=\frac{{e^{c_\kappa \rho (1+Q^{\rho_h+\varepsilon})}}}{(\lambda t)^{\theta
_{0}(2h+q+d+2\theta _{1})(\rho _{h}+\varepsilon)} }.$$We have used here (\[J9-bis\]) with $l=2h+q$ and $r=\rho _{h}+\varepsilon .$ Then by (\[reg12\]) with $n_{\ast }=1 $, for every $\delta >0$$$\left\Vert p^{\eta ,\kappa }\right\Vert _{q,p}\leq C(\Theta +\theta
_{2h+q,t}^{\rho _{h}(1+\delta )}(1 )+C_{h,1 }(\varepsilon )).$$Taking $h$ sufficiently large we have$$C_{h,1 }(\varepsilon )\leq \frac{e^{2c_\kappa \rho }}{(\lambda t)^{\theta
_{0}(q+2d/p_{\ast })(1+\delta )}}.$$and, for $\delta >0$, $$\theta _{2h+q,t}^{\rho _{h}(1+\delta )}(c(r_h)\rho )\leq
\frac{Q^{\rho_h+\varepsilon}}{(\lambda t)^{(2h+q+d/p_{\ast })\rho_h(1+\delta )}}
\leq
C\times \frac{e^{2c_\kappa \rho}}{(\lambda t)^{(q+2d/p_{\ast })(1+\delta )}}.$$Since $\rho \geq 1$ we conclude that $$\left\Vert p^{\eta ,\kappa }\right\Vert _{q,p}\leq
C\times \frac{e^{2c_\kappa \rho}}{(\lambda t)^{(q+2d/p_{\ast })(1+\delta )}},$$$C$ denoting a constant which is independent of $\rho$. We take now $p=2d+\varepsilon $ and, using now Morrey’s inequality $$\left\Vert p^{\eta ,\kappa }\right\Vert _{q,\infty }\leq \left\Vert p^{\eta
,\kappa }\right\Vert _{q+1,p}\leq
C\times \frac{e^{2c_\kappa \rho}}{(\lambda t)^{(q+2d)(1+\delta )}}.$$This proves (\[J10\]). $\square $
Appendix
========
Weights {#app:weights}
-------
We denote$$\psi _{k}(x)=(1+\left\vert x\right\vert ^{2})^{k}. \label{n1}$$
\[Psy1\]For every multi-index $\alpha $ there exists a constant $%
C_{\alpha }$ such that $$\Big|\partial ^{\alpha }\Big(\frac{1}{\psi _{k}}\Big)\Big\vert \leq \frac{%
C_{\alpha }}{\psi _{k}}. \label{n2}$$Moreover, for every $q$ there is a constant $C_{q}\geq 1$ such that for every $f\in C_{b}^{\infty }({\mathbb{R}}^{d})$$$\frac{1}{C_{q}}\sum_{0\leq |\alpha | \leq q}\Big\vert \partial ^{\alpha }%
\Big(\frac{f}{\psi _{k}}\Big)\Big\vert \leq \sum_{0\leq \left\vert \alpha
\right\vert \leq q}\frac{1}{\psi _{k}}\left\vert \partial ^{\alpha
}f\right\vert \leq C_{q}\sum_{0\leq \left\vert \alpha \right\vert \leq q}%
\Big\vert \partial ^{\alpha }\Big(\frac{f}{\psi _{k}}\Big)\Big\vert .
\label{n3}$$
**Proof**. One checks by recurrence that $$\partial ^{\alpha }\Big(\frac{1}{\psi _{k}}\Big)=\sum_{q=1}^{\left\vert
\alpha \right\vert }\frac{P_{\alpha ,q}}{\psi _{k+q}}$$where $P_{\alpha ,q}$ is a polynomial of order $q.$ And since$$\frac{(1+\left\vert x\right\vert )^{q}}{(1+\left\vert x\right\vert
^{2})^{q+k}}\leq \frac{C}{(1+\left\vert x\right\vert ^{2})^{k}}$$the proof of (\[n2\]) is completed. In order to prove (\[n3\]) we write$$\partial ^{\alpha }\Big(\frac{f}{\psi _{k}}\Big)=\frac{1}{\psi _{k}}\partial
^{\alpha }f+\sum_{\substack{ (\beta ,\gamma )=\alpha \\ \left\vert \beta
\right\vert \geq 1}}c(\beta ,\gamma )\partial ^{\beta }\Big(\frac{1}{\psi
_{k}}\Big)\partial ^{\gamma }f.$$This, together with (\[n2\]) implies $$\Big\vert \partial ^{\alpha }\Big(\frac{f}{\psi _{k}}\Big)\Big\vert \leq
C\sum_{0\leq \left\vert \gamma \right\vert \leq \left\vert \alpha
\right\vert }\frac{1}{\psi _{k}}\left\vert \partial ^{\gamma }f\right\vert$$so the first inequality in (\[n3\]) is proved. In order to prove the second inequality we proceed by recurrence on $q$. The inequality is true for $q=0.$ Suppose that it is true for $q-1.$ Then we write$$\frac{1}{\psi _{k}}\partial ^{\alpha }f=\partial ^{\alpha }\Big(\frac{f}{%
\psi _{k}}\Big)-\sum_{\substack{ (\beta ,\gamma )=\alpha \\ \left\vert
\beta \right\vert \geq 1}}c(\beta ,\gamma )\partial ^{\beta }\Big(\frac{1}{%
\psi _{k}}\Big)\partial ^{\gamma }f$$and we use again (\[n2\]) in order to obtain$$\frac{1}{\psi _{k}}\left\vert \partial ^{\alpha }f\right\vert \leq \Big\vert %
\partial ^{\alpha }\Big(\frac{f}{\psi _{k}}\Big)\Big\vert +C\sum_{\left\vert
\gamma \right\vert <\left\vert \alpha \right\vert }\frac{1}{\psi _{k}}%
\left\vert \partial ^{\gamma }f\right\vert \leq C\sum_{0\leq \left\vert
\beta \right\vert \leq q}\Big\vert \partial ^{\beta }\Big(\frac{f}{\psi _{k}}%
\Big)\Big\vert$$the second inequality being a consequence of the recurrence hypothesis. $%
\square $
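As a concrete sanity check of (\[n2\]) — an illustration only, not part of the proof — the following sketch verifies numerically in dimension $d=1$ with $k=2$ that $\psi_k\,\partial^\alpha(1/\psi_k)$ stays bounded on all of ${\mathbb{R}}$ for $|\alpha|\le 2$ (sympy is assumed to be available):

```python
import sympy as sp

x = sp.symbols("x", real=True)
k = 2
psi = (1 + x**2) ** k  # psi_k(x) = (1 + |x|^2)^k in dimension d = 1

# (n2) says |d^alpha (1/psi_k)| <= C_alpha / psi_k, i.e. the product
# psi_k * d^alpha(1/psi_k) is bounded on all of R; sample it on a grid.
sup = {}
for order in (1, 2):
    expr = sp.simplify(psi * sp.diff(1 / psi, x, order))
    f = sp.lambdify(x, expr, "math")
    sup[order] = max(abs(f(t / 10.0)) for t in range(-500, 501))
    print(order, sup[order])
```

For $k=2$ the first-order product is $-4x/(1+x^2)$, with supremum $2$ attained at $x=\pm 1$; the second-order product is likewise uniformly bounded.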
The assertion is false if we define $\psi _{k}(x)=(1+\left\vert x\right\vert
)^{k}$ because $\partial _{i}\partial _{j}\left\vert x\right\vert =\frac{%
\delta _{i,j}}{\left\vert x\right\vert }-\frac{x_{i}x_{j}}{\left\vert
x\right\vert ^{2}}$ blows up at zero.
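A quick numerical illustration of that blow-up (hypothetical sample points, $d=2$): along the diagonal $x=(\varepsilon,\varepsilon)$ the mixed derivative $\partial_1\partial_2\left\vert x\right\vert=-x_1x_2/\left\vert x\right\vert^3$ equals $-1/(2\sqrt{2}\,\varepsilon)$ and diverges as $\varepsilon\to 0$.

```python
import math

# In d >= 2, d_i d_j |x| = delta_ij/|x| - x_i x_j/|x|**3.  On the diagonal
# x = (e, e) the mixed derivative is -1/(2*sqrt(2)*e) -> -infinity as e -> 0.
def mixed_second(e):
    r = math.hypot(e, e)      # |x| for x = (e, e)
    return -e * e / r**3      # d_1 d_2 |x| evaluated at (e, e)

vals = {e: mixed_second(e) for e in (1e-1, 1e-2, 1e-3)}
for e, v in vals.items():
    print(e, v)
```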
We now look at $\psi _{k}$ itself.
\[Psy2\]For every multi-index $\alpha $ there exists a constant $%
C_{\alpha }$ such that $$\left\vert \partial ^{\alpha }\psi _{k}\right\vert \leq C_{\alpha }\psi _{k}.
\label{n4}$$Moreover, for every $q$ there is a constant $C_{q}\geq 1$ such that for every $f\in C_{b}^{\infty }({\mathbb{R}}^{d})$$$\frac{1}{C_{q}}\sum_{0\leq \left\vert \alpha \right\vert \leq q}\left\vert
\partial ^{\alpha }(\psi _{k}f)\right\vert \leq \sum_{0\leq \left\vert
\alpha \right\vert \leq q}\psi _{k}\left\vert \partial ^{\alpha
}f\right\vert \leq C_{q}\sum_{0\leq \left\vert \alpha \right\vert \leq
q}\left\vert \partial ^{\alpha }(\psi _{k}f)\right\vert . \label{n5}$$
**Proof**. One proves by recurrence that, if $\left\vert \alpha
\right\vert \geq 1$ then $\partial ^{\alpha }\psi
_{k}=\sum_{q=1}^{\left\vert \alpha \right\vert }\psi _{k-q}P_{q}$ with $%
P_{q} $ a polynomial of order $q.$ Since $1+\left\vert x\right\vert \leq
2(1+\left\vert x\right\vert ^{2})$ it follows that $\left\vert
P_{q}\right\vert \leq C\psi _{q}$ and (\[n4\]) follows. Now we write $$\psi _{k}\partial ^{\alpha }f=\partial ^{\alpha }(\psi _{k}f)-\sum
_{\substack{ (\beta ,\gamma )=\alpha \\ \left\vert \beta \right\vert \geq 1
}}c(\beta ,\gamma )\partial ^{\beta }\psi _{k}\partial ^{\gamma }f$$and the same arguments as in the proof of (\[n3\]) give (\[n5\]).
Semigroup estimates {#app:semi}
-------------------
We consider a semigroup $P_{t}$ on $C^{\infty }({\mathbb{R}}^{d})$ such that $P_{t}f(x)=\int f(y)P_{t}(x,dy)$ where $P_{t}(x,dy)$ is a probability transition kernel and we denote by $P_{t}^{\ast }$ its formal adjoint.
\[B1B2\] There exists $Q\geq 1$ such that for every $t\leq T$ and every $f\in
C^{\infty }({\mathbb{R}}^{d})$$$\left\Vert P_{t}f\right\Vert _{1}\leq Q\left\Vert f\right\Vert _{1}.
\label{A31}$$ Moreover, for every $k\in {\mathbb{N}}$ there exists $K_{k}\geq 1$ such that for every $x\in {\mathbb{R}}^{d}$ $$\left\vert P_{t}(\psi _{k})(x)\right\vert \leq K_{k}\psi _{k}(x).
\label{A32}$$
Under Assumption \[B1B2\], one has $$\left\Vert \psi _{k}P_{t}^{\ast }(f/\psi _{k})\right\Vert _{p}\leq
K_{kp}^{1/p}Q^{1/p_{\ast }}\left\Vert f\right\Vert _{p}. \label{A34}$$
**Proof**. Using Hölder’s inequality, the identity $\psi
_{k}^{p}=\psi _{kp},$ and (\[A32\])$$\left\vert P_{t}(\psi _{k}g)(x)\right\vert \leq \left\vert P_{t}(\psi
_{k}^{p})(x)\right\vert ^{1/p}\left\vert P_{t}(\left\vert g\right\vert
^{p_{\ast }})(x)\right\vert ^{1/p_{\ast }}\leq K_{kp}^{1/p}\psi
_{k}(x)\left\vert P_{t}(\left\vert g\right\vert ^{p_{\ast }})(x)\right\vert
^{1/p_{\ast }}.$$Then, using (\[A31\]) $$\begin{aligned}
\left\Vert \frac{1}{\psi _{k}}P_{t}(\psi _{k}g)\right\Vert _{p_{\ast }}
&\leq &K_{kp}^{1/p}\left\Vert \left\vert P_{t}(\left\vert g\right\vert
^{p_{\ast }})\right\vert ^{1/p_{\ast }}\right\Vert _{p_{\ast
}}=K_{kp}^{1/p}(\left\Vert P_{t}(\left\vert g\right\vert ^{p_{\ast
}})\right\Vert _{1})^{1/p_{\ast }} \\
&\leq &K_{kp}^{1/p}Q^{1/p_{\ast }}(\left\Vert \left\vert g\right\vert
^{p_{\ast }}\right\Vert _{1})^{1/p_{\ast }}=K_{kp}^{1/p}Q^{1/p_{\ast
}}\left\Vert g\right\Vert _{p_{\ast }}.\end{aligned}$$Using Hölder’s inequality first and the above inequality we obtain$$\begin{aligned}
\left\vert \left\langle g,\psi _{k}P_{t}^{\ast }(f/\psi _{k})\right\rangle
\right\vert &=&\left\vert \left\langle \frac{1}{\psi _{k}}P_{t}(g\psi
_{k}),f\right\rangle \right\vert \leq \left\Vert f\right\Vert _{p}\left\Vert
\frac{1}{\psi _{k}}P_{t}(g\psi _{k})\right\Vert _{p_{\ast }} \\
&\leq &K_{kp}^{1/p}Q^{1/p_{\ast }}\left\Vert g\right\Vert _{p_{\ast
}}\left\Vert f\right\Vert _{p}.\end{aligned}$$$\square $
We consider also the following hypothesis.
\[B3\] There exists $\rho >1$ such that for every $q\in {\mathbb{N}}$ there exists $D_{(q)}^{\ast }(\rho )\geq 1$ such that for every $x\in {\mathbb{R}}^{d}$$$\sum_{\left\vert \alpha \right\vert \leq q}\left\vert \partial ^{\alpha
}P_{t}^{\ast }f(x)\right\vert \leq D_{(q)}^{\ast }(\rho )\sum_{\left\vert
\alpha \right\vert \leq q}(P_{t}^{\ast }(\left\vert \partial ^{\alpha
}f\right\vert ^{\rho })(x))^{1/\rho }. \label{A36}$$
\[A2\]Suppose that Assumption \[B1B2\] and \[B3\] hold. Then for every $k,q\in {\mathbb{N}}$ and $p>\rho $ there exists a universal constant $C$ (depending on $k$ and $q$ only) such that $$\left\Vert \psi _{k}P_{t}^{\ast }(f/\psi _{k})\right\Vert _{q,p}\leq
CK_{k\rho p}^{1/p}Q^{(p-\rho )/\rho p}D_{(q)}^{\ast }(\rho )\left\Vert
f\right\Vert _{q,p}. \label{A38}$$
**Proof**. We will prove (\[A38\]) first. Let $\alpha $ with $%
\left\vert \alpha \right\vert \leq q.$ By (\[A36\]) $$\begin{aligned}
\left\vert \partial ^{\alpha }(\psi _{k}P_{t}^{\ast }(f/\psi
_{k})(x))\right\vert &\leq &C\psi _{k}(x)\sum_{\left\vert \gamma \right\vert
\leq q}\left\vert \partial ^{\gamma }(P_{t}^{\ast }(f/\psi
_{k})(x))\right\vert \\
&\leq &CD_{(q)}^{\ast }(\rho )\psi _{k}(x)\sum_{\left\vert \beta \right\vert
\leq q}(P_{t}^{\ast }(\left\vert \partial ^{\beta }(f/\psi _{k})\right\vert
^{\rho })(x))^{1/\rho } \\
&=&CD_{(q)}^{\ast }(\rho )\sum_{\left\vert \beta \right\vert \leq q}(\psi
_{\rho k}(x)P_{t}^{\ast }(\left\vert \partial ^{\beta }(f/\psi
_{k})\right\vert ^{\rho })(x))^{1/\rho } \\
&=&CD_{(q)}^{\ast }(\rho )\sum_{\left\vert \beta \right\vert \leq q}(\psi
_{\rho k}(x)P_{t}^{\ast }(g/\psi _{\rho k})(x))^{1/\rho }\end{aligned}$$with$$g(x)=\psi _{\rho k}(x)\left\vert \partial ^{\beta }(f/\psi
_{k})(x)\right\vert ^{\rho }=\left\vert \psi _{k}(x)\partial ^{\beta
}(f/\psi _{k})(x)\right\vert ^{\rho }.$$Using (\[A34\]) $$\left\Vert (\psi _{\rho k}P_{t}^{\ast }(g/\psi _{\rho k}))^{1/\rho
}\right\Vert _{p}=\left\Vert \psi _{\rho k}P_{t}^{\ast }(g/\psi _{\rho
k})\right\Vert _{p/\rho }^{1/\rho }\leq K_{k\rho p}^{1/p}Q^{(p-\rho )/\rho
p}\left\Vert g\right\Vert _{p/\rho }^{1/\rho }.$$And we have$$\left\Vert g\right\Vert _{p/\rho }^{1/\rho }=(\int \left\vert \psi
_{k}(x)\partial ^{\beta }(f/\psi _{k})(x)\right\vert ^{p}dx)^{1/p}\leq
C\sum_{\left\vert \gamma \right\vert \leq q}(\int \left\vert \partial
^{\gamma }f(x)\right\vert ^{p}dx)^{1/p}=C\left\Vert f\right\Vert _{q,p}.$$We conclude that $$\left\Vert \psi _{k}P_{t}^{\ast }(f/\psi _{k})\right\Vert _{q,p}\leq
CK_{k\rho p}^{1/p}Q^{(p-\rho )/\rho p}D_{(q)}^{\ast }(\rho )\left\Vert
f\right\Vert _{q,p}.$$$\square $
Integration by parts {#app:ibp}
--------------------
We consider a function $\phi \in C^{\infty }({\mathbb{R}}^{d},{\
\mathbb{R}}^{d})$ such that $\partial_j\phi \in C_{b}^{\infty }({\mathbb{R}}%
^{d},{\mathbb{R}}^{d})$, $j=1,\ldots,d$. We denote $\nabla \phi$ the $%
d\times d$ matrix field whose $(i,j)$ entry is $\partial_j\phi^i$ and $%
\sigma (\phi )=\nabla \phi (\nabla \phi )^{\ast }$.
We suppose that $\sigma (\phi )$ is invertible and we denote $\gamma (\phi
)=\sigma ^{-1}(\phi ).$ Then $$\int (\partial _{i}f)(\phi (x))g(x)dx=\int f(\phi (x))H_{i}(\phi ,g)(x)dx
\label{ip1}$$with$$H_{i}(\phi ,g)=-\sum_{k=1}^{d}\partial _{k}\left( g\sum_{j=1}^{d}\gamma
^{i,j}(\phi )\partial _{k}\phi ^{j}\right). \label{ip2}$$Moreover, for a multi-index $\alpha =(\alpha _{1},...,\alpha _{m})$ we define$$H_{\alpha }(\phi ,g)=H_{\alpha _{m}}(\phi ,H_{(\alpha _{1},...,\alpha
_{m-1})}(\phi ,g)) \label{ip3}$$and we obtain $$\int (\partial ^{\alpha }f)(\phi (x))g(x)dx=\int f(\phi (x))H_{\alpha }(\phi
,g)(x)dx \label{ip4}$$
**Proof**. The proof is standard: we use the chain rule and we obtain $%
\nabla (f(\phi ))=(\nabla \phi)^*(\nabla f)(\phi ) .$ By multiplying with $%
\nabla \phi $ first and with $\gamma (\phi )$ then, we get $(\nabla f)(\phi
)=\gamma(\phi )\nabla \phi \nabla (f(\phi ))$. Using standard integration by parts, (\[ip1\]) and (\[ip2\]) hold. And (\[ip3\]) follows by iteration. $\square $
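The identity (\[ip1\])–(\[ip2\]) can be checked numerically in the scalar case $d=1$, where $\sigma(\phi)=(\phi')^2$ and $H(\phi,g)=-(g/\phi')'$. The test functions below are arbitrary illustrative choices, not taken from the text: $\phi(x)=x+\sin(x)/2$ (so $\phi'\geq 1/2$ and $\sigma(\phi)$ is invertible), $g(x)=e^{-x^2}$, $f=\sin$.

```python
import math

# d = 1 instance of (ip1)-(ip2): H(phi, g) = -(g/phi')'.
def phi(x):   return x + math.sin(x) / 2
def dphi(x):  return 1 + math.cos(x) / 2       # phi' >= 1/2 > 0
def d2phi(x): return -math.sin(x) / 2
def g(x):     return math.exp(-x * x)
def dg(x):    return -2 * x * math.exp(-x * x)

def H(x):     # -(g/phi')' = (g*phi'' - g'*phi') / (phi')**2
    return (g(x) * d2phi(x) - dg(x) * dphi(x)) / dphi(x) ** 2

def midpoint(h, a=-10.0, b=10.0, n=200000):
    # midpoint rule; the Gaussian weight makes the tails negligible
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

lhs = midpoint(lambda x: math.cos(phi(x)) * g(x))  # int f'(phi(x)) g(x) dx
rhs = midpoint(lambda x: math.sin(phi(x)) * H(x))  # int f(phi(x)) H(phi,g)(x) dx
print(lhs, rhs)
```

The two integrals agree, as the chain-rule argument in the proof predicts; boundary terms vanish because $g$ decays rapidly.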
Our aim now is to give estimates of $\left\vert H_{\alpha }(\phi
,g)(x)\right\vert_q .$ We use the notation introduced in (\[NOT1\]) and for $q\in{\mathbb{N}}$, we denote$$C_{q}(\phi )(x)=\frac{1\vee \left\vert \phi (x)\right\vert _{1,q+2}^{2d-1}}{%
1\wedge (\det \sigma (\phi )(x))^{q+1}}. \label{ip5}$$
For every multi index $\alpha $ and every $q\in {\mathbb{N}}$ there exists a universal constant $C\geq 1$ such that$$\left\vert H_{\alpha }(\phi ,g)(x)\right\vert _{q}\leq C\left\vert
g(x)\right\vert _{q+\left\vert \alpha \right\vert }\times C_{q+\left\vert
\alpha \right\vert }^{\left\vert \alpha \right\vert }(\phi )(x). \label{ip6}$$
**Proof**. We begin with some simple computational rules:$$\begin{aligned}
\left\vert f(x)g(x)\right\vert _{q} &\leq &C\sum_{k_{1}+k_{2}=q}\left\vert
f(x)\right\vert _{k_{1}}\left\vert g(x)\right\vert _{k_{2}}, \label{ip8} \\
\left\vert \left\langle \nabla f(x),\nabla g(x)\right\rangle \right\vert
_{q} &\leq &C\sum_{k_{1}+k_{2}=q}\left\vert f(x)\right\vert
_{1,k_{1}+1}\left\vert g(x)\right\vert _{1,k_{2}+1}, \label{ip8'} \\
\left\vert \frac{1}{g(x)}\right\vert _{q} &\leq &\frac{C}{\left\vert
g(x)\right\vert }\sum_{l=0}^{q}\frac{\left\vert g(x)\right\vert _{q}^{l}}{%
\left\vert g(x)\right\vert ^{l}}. \label{ip8''}\end{aligned}$$We denote by $\widehat{\sigma }^{i,j}(\phi )$ the algebraic complement and write $\gamma ^{i,j}(\phi )=\widehat{\sigma }^{i,j}(\phi )/\det \sigma (\phi
).$ Then, using the above computational rules we obtain$$\left\vert \gamma ^{i,j}(\phi )(x)\right\vert _{q}\leq C\times \frac{%
\left\vert \phi (x)\right\vert _{1,q+1}^{2(d-1)}}{1\wedge (\det \sigma (\phi
)(x))^{q+1}}$$and moreover$$\left\vert H_{i}(\phi ,g)(x)\right\vert _{q}\leq C\left\vert g(x)\right\vert
_{q+1}\times \left\vert \phi (x)\right\vert _{1,q+2}\times \frac{\left\vert
\phi (x)\right\vert _{1,q+2}^{2(d-1)}}{1\wedge (\det \sigma (\phi )(x))^{q+1}%
}\leq C\left\vert g(x)\right\vert _{q+1}\times C_{q}(\phi )(x).$$Let $\alpha =(\beta ,i).$ Iterating the above estimate we obtain $$\begin{aligned}
\left\vert H_{\alpha }(\phi ,g)(x)\right\vert _{q} &=&\left\vert H_{i}(\phi
,H_{\beta }(\phi ,g))(x)\right\vert _{q}\leq C\left\vert H_{\beta }(\phi
,g)(x)\right\vert _{q+1}\times C_{q}(\phi )(x) \\
&\leq &C\left\vert g(x)\right\vert _{q+\left\vert \alpha \right\vert }\times
C_{q+\left\vert \alpha \right\vert }^{\left\vert \alpha \right\vert }(\phi
)(x).\end{aligned}$$$\square $
We now define the operator $V_{\phi }:C_{b}^{\infty }({\mathbb{R}}%
^{d})\rightarrow C_{b}^{\infty }({\mathbb{R}}^{d})$ by$$V_{\phi }f(x)=f(\phi (x)). \label{ip9}$$
**A**. One has $$\left\Vert \frac{1}{\psi _{\kappa }}V_{\phi }(\psi _{\kappa }f)\right\Vert
_{q,\infty }\leq C\psi _{\kappa }(\phi (0))\left\Vert \phi \right\Vert
_{1,q,\infty }^{q+2\kappa }\left\Vert f\right\Vert _{q,\infty }.
\label{ip10}$$**B**. Suppose that $$\inf_{x\in {\mathbb{R}}^{d}}\det \sigma (\phi )(x)\geq \varepsilon (\phi )>0
\label{ip11}$$Then, for $\kappa, q\in{\mathbb{N}}$ and $p> 1$,
$$\left\Vert \psi _{\kappa }V_{\phi }^{\ast }(\frac{1}{\psi _{\kappa }}%
f)\right\Vert _{q,p} \leq C \psi _{\kappa }(\phi (0))\times \frac{1\vee
\left\Vert \phi \right\Vert _{1,q+2,\infty }^{2dq+1+2\kappa }}{ \varepsilon
(\phi )^{q(q+1)+1/p_\ast}} \times \left\Vert f\right\Vert _{q+1,p}.
\label{ip12}$$
**Proof**. We notice first that $$\left\vert g(\phi (x))\right\vert _{q}\leq C(1\vee \left\vert \phi
(x)\right\vert _{1,q}^{q})\sum_{\left\vert \alpha \right\vert \leq
q}\left\vert (\partial ^{\alpha }g)(\phi (x))\right\vert . \label{ip13}$$Using (\[NOT3c\]) and the above inequality we obtain$$\begin{aligned}
\left\vert \frac{1}{\psi _{\kappa }(x)}V_{\phi }(\psi _{\kappa
}f)(x)\right\vert _{q} &\leq &\frac{C}{\psi _{\kappa }(x)}\left\vert V_{\phi
}(\psi _{\kappa }f)(x)\right\vert _{q} \leq \frac{C(1\vee \left\vert \phi
(x)\right\vert _{1,q}^{q})}{\psi _{\kappa }(x)}\sum_{\left\vert \alpha
\right\vert \leq q}\left\vert (\partial ^{\alpha }(\psi _{\kappa }f))(\phi
(x))\right\vert \\
&\leq &\frac{C(1\vee \left\vert \phi (x)\right\vert _{1,q}^{q})}{\psi
_{\kappa }(x)}\times \psi _{\kappa}(\phi (x))\sum_{\left\vert \alpha
\right\vert \leq q}\left\vert (\partial ^{\alpha }f)(\phi (x))\right\vert .\end{aligned}$$And using (\[NOT3d\]) this gives (\[ip10\]).
**B**. We take now $\alpha $ with $\left\vert \alpha \right\vert \leq q$ and we write$$\begin{aligned}
\left\langle \partial ^{\alpha }(\psi _{\kappa }V_{\phi }^{\ast }(\frac{1}{%
\psi _{\kappa }}f)),g\right\rangle &=&(-1)^{|\alpha |}\left\langle \frac{f}{%
\psi _{\kappa }},V_{\phi }(\psi _{\kappa }\partial ^{\alpha }g)\right\rangle
\\
&=&(-1)^{|\alpha |}\int_{{\mathbb{R}}^{d}}\frac{f}{\psi _{\kappa }}(x)(\psi
_{\kappa }\partial ^{\alpha }g)(\phi (x))dx \\
&=&(-1)^{|\alpha |}\int_{{\mathbb{R}}^{d}}g(\phi (x))H_{\alpha }\Big(\phi ,%
\frac{f}{\psi _{\kappa }}\times \psi _{\kappa }(\phi )\Big)(x)dx.\end{aligned}$$It follows that$$\left\vert \left\langle \partial ^{\alpha }(\psi _{\kappa }V_{\phi }^{\ast }(%
\frac{1}{\psi _{\kappa }}f)),g\right\rangle \right\vert \leq \left\Vert
g(\phi )\right\Vert _{p_{\ast }}\left\Vert H_{\alpha }\Big(\phi ,\frac{f}{%
\psi _{\kappa }}\times \psi _{\kappa }(\phi )\Big)\right\Vert _{p}.$$Using (\[ip6\]) and (\[ip11\]) we obtain (recall that $\left\vert \alpha
\right\vert \leq q$) $$\begin{aligned}
\left\vert H_{\alpha }\Big(\phi ,\frac{f}{\psi _{\kappa }}\times \psi
_{\kappa }(\phi )\Big)(x)\right\vert &\leq &C\left\vert \Big(\frac{f}{\psi
_{\kappa }}\times \psi _{\kappa }(\phi )\Big)(x)\right\vert _{q+1}\times
C_{q}^{q}(\phi )(x) \\
&\leq &C\left\vert f(x)\right\vert _{q+1}\left\vert \frac{1}{\psi _{\kappa }}%
\times \psi _{\kappa }(\phi )(x)\right\vert _{q+1}\times \Big(\frac{%
1\vee \left\vert \phi (x)\right\vert _{1,q+2}^{2d-1}}{\varepsilon (\phi
)^{q+1}}\Big)^q\end{aligned}$$By (\[ip13\]) we have $$\begin{aligned}
\left\vert \psi _{\kappa }(\phi )(x)\right\vert _{q+1} &\leq
&C(1\vee\left\vert \phi (x)\right\vert _{1,q+1}^{q+1})\sum_{\left\vert
\alpha \right\vert \leq q+1}\left\vert (\partial ^{\alpha }\psi _{\kappa
})(\phi (x))\right\vert \leq C(1\vee\left\vert \phi (x)\right\vert
_{1,q+1}^{q+1})\times \psi _{\kappa }(\phi (x)) \\
&\leq &C(1\vee\left\vert \phi (x)\right\vert _{1,q+1}^{q+1})\times \psi
_{\kappa }(\phi (0))(1\vee \left\Vert \nabla \phi \right\Vert _{\infty
}^{2\kappa })\psi _{\kappa }(x).\end{aligned}$$Finally$$\begin{aligned}
\left\vert H_{\alpha }\Big(\phi ,\frac{f}{\psi _{\kappa }}\times \psi
_{\kappa }(\phi )\Big)(x)\right\vert &\leq & C\left\vert f(x)\right\vert
_{q+1}\psi _{\kappa }(\phi (0))(1\vee\left\Vert \nabla \phi \right\Vert
_{\infty }^{2\kappa })\times \frac{1\vee \left\vert \phi (x)\right\vert
_{1,q+2}^{2dq+1}}{\varepsilon (\phi )^{q(q+1)}} \\
&\leq &C\left\vert f(x)\right\vert _{q+1}\psi _{\kappa }(\phi (0))\times
\frac{1\vee \left\Vert \phi \right\Vert _{1,q+2,\infty }^{2dq+1+2\kappa }}{%
\varepsilon (\phi )^{q(q+1)}}\end{aligned}$$and this gives$$\left\Vert H_{\alpha }\Big(\phi ,\frac{f}{\psi _{\kappa }}\times \psi
_{\kappa }(\phi )\Big)\right\Vert _{p} \leq C \psi _{\kappa }(\phi
(0))\times \frac{1\vee \left\Vert \phi \right\Vert _{1,q+2,\infty
}^{2dq+1+2\kappa }}{ \varepsilon (\phi )^{q(q+1)}} \times \left\Vert
f\right\Vert _{q+1,p}$$Using a change of variable and (\[ip11\]) $\left\Vert g(\phi )\right\Vert
_{p_{\ast }}\leq \varepsilon (\phi )^{-1/p_{\ast }}\left\Vert g\right\Vert
_{p_{\ast }}.$ These two inequalities prove (\[ip12\]). $\square $
[99]{} Alfonsi, A., Cancès, E., Turinici, G., Di Ventura, B., and Huisinga, W. (2005). Adaptive simulation of hybrid stochastic and deterministic models for biochemical systems. In *ESAIM Proceedings* **14**, 1–13.
Asmussen, S., Rosinski, J. (2001). Approximations of small jumps of Lévy processes with a view towards simulation. *J. Appl. Probab.* **38**, 482–493.
Bally, V., Caramellino, L. (2017). Convergence and regularity of probability laws by using an interpolation method. *Ann. Probab.* **45**, 1110–1159.
Bally, V., Caramellino, L. (2019). Regularity for the semigroup of jump equations. Working paper.
Bally, V., Clément, E. (2011). Integration by parts formulas with respect to jump times for stochastic differential equations. *Stochastic Analysis 2010*, D. Crisan (Ed.), Springer Verlag.
Ball, K., Kurtz, T. G., Popovic, L., Rempala, G. (2006). Asymptotic analysis of multiscale approximations to reaction networks. *Ann. Appl. Probab.* **16**, 1925–1961.
Bichteler, K., Gravereaux, J.-B., Jacod, J. (1987). Malliavin calculus for processes with jumps. *Gordon and Breach science publishers*, New York.
Bismut, J.M. (1983). Calcul des variations stochastique et processus de sauts. *Z. Wahrsch. Verw. Gebiete* **63**, 147–235.
Bouleau, N., Denis, L. (2015). Dirichlet forms and methods for Poisson point measures and Lévy processes. *Probability Theory and Stochastic Modelling*, **76**, Springer.
Brezis, H. (1983). *Analyse fonctionnelle. Théorie et applications*. Masson, Paris.
Carlen, E., Pardoux, E. (1990). Differential calculus and integration by parts on Poisson space. *Stochastics, algebra and analysis in classical and quantum dynamics* (Marseille, 1988), Math. Appl. **59**, 63–73.
Crudu, A., Debussche, A., Muller, A., Radulescu, O. (2012). Convergence of stochastic gene networks to hybrid piecewise deterministic processes. *Ann. Appl. Probab.* **22**, 1822–1859.
Ethier, S.N., Kurtz, T.G. (1986). *Markov processes. Characterization and convergence.* Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics. John Wiley & Sons.
Ishikawa, Y. (2013). *Stochastic Calculus of Variation for Jump Processes*. De Gruyter Studies in Math. **54**.
Lapeyre, B., Pardoux, É., Sentis, R. (1998). *Méthodes de Monte-Carlo pour les équations de transport et de diffusion*. Mathématiques et Applications **29**, Springer-Verlag.
Léandre, R. (1985). Régularité des processus de sauts dégénérés. *Ann. Inst. H. Poincaré Probab. Statist.* **21**, 125–146.
Picard, J. (1996). On the existence of smooth densities for jump processes. *Probab. Theory Related Fields* **105**, 481–511.
Picard, J. (1997). Density in small time for Lévy processes. *ESAIM Probab. Statist.* **1**, 358–389.
Zhang, X. (2014). Densities for SDEs driven by degenerate $\alpha$-stable processes. *Ann. Probab.* **42**, 1885–1910.
[^1]: LAMA (UMR CNRS, UPEMLV, UPEC), MathRisk INRIA, Université Paris-Est - [vlad.bally@u-pem.fr]{}
[^2]: Dipartimento di Matematica, Università di Roma Tor Vergata, and INdAM-GNAMPA - [caramell@mat.uniroma2.it]{}
---
abstract: 'I compute the first viscous correction to the thermal distribution function. With this correction, I calculate the effect of viscosity on spectra, elliptic flow, and HBT radii. Indicating the breakdown of hydrodynamics, viscous corrections become of order one for $p_{T} \sim 1.5\,\mbox{GeV}$. Viscous corrections to HBT radii are particularly large and reduce the outward and longitudinal radii. This reduction is a direct consequence of the reduction in longitudinal pressure.'
address: 'Department of Physics, Bldg. 510A, Upton, NY 11973-5000'
author:
- Derek Teaney
title: ' Viscous Corrections to Spectra, Elliptic Flow, and HBT Radii'
---
Viscous Corrections
===================
Ideal hydrodynamics describes a wide variety of data from heavy ion collisions [@Teaney1; @Kolb1]. In particular ideal hydrodynamics successfully predicted the observed elliptic flow and its dependence on mass, centrality, beam energy, and transverse momentum. Nevertheless, the hydrodynamic approach failed in several respects. First, above a transverse momentum $p_{T} \sim 1.5\,\mbox{GeV}$ the particle spectra deviate from hydrodynamics and approach a power law. Second, HBT radii are significantly too large compared to ideal hydrodynamics [@BassDumitru]. Considering the partial success of ideal hydrodynamics, viscous corrections may provide a natural explanation for these failures.
For an ideal Bjorken expansion, the entropy per unit space-time rapidity $(\tau s)$ is conserved. For a viscous Bjorken expansion the entropy per unit rapidity increases as a function of proper time [@MG84] $$\begin{aligned}
\frac{d ( \tau s) }{d\tau} =
\frac{\frac{4}{3} \eta}{\tau T}\;,\end{aligned}$$ where $\eta$ is the shear viscosity. In this equation and below we have neglected the bulk viscosity. For hydrodynamics to be valid, the entropy produced over the time scale of the expansion (to wit, $\tau \frac{\frac{4}{3} \eta}{\tau T}$) must be small compared to the total entropy ($\tau s$). This leads to the requirement that $$\begin{aligned}
\frac{\Gamma _{s}}{\tau} \ll 1\;, \end{aligned}$$ where we have defined the [*sound attenuation length*]{} as $\Gamma_{s} \equiv \frac{ \frac{4}{3} \eta } {s T}$. Perturbative estimates of the shear viscosity in the plasma give $\frac{\Gamma_{s}}{\tau} \sim 1 $. Below we take $\frac{\Gamma_{s}}{\tau} = \frac{1}{3}$, assuming that non-perturbative effects shorten equilibration times.
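As a purely illustrative integration (assuming, contrary to the actual dynamics, that $\Gamma_s/\tau$ is frozen at the constant value $c=1/3$), the entropy equation becomes $d(\tau s)/d\tau = c\,(\tau s)/\tau$, so $\tau s \propto \tau^{c}$; a short RK4 integration reproduces this power law.

```python
# d(tau*s)/dtau = (4/3) eta/(tau T) = Gamma_s * s / tau = c * (tau*s)/tau
# when Gamma_s/tau is frozen at the constant c (illustrative assumption).
def rhs(tau, y, c=1.0 / 3.0):
    return c * y / tau       # y = tau * s

tau, y = 1.0, 1.0            # start from tau*s = 1 at tau = 1 (arbitrary units)
n = 7000
h = 7.0 / n                  # integrate from tau = 1 to tau = 8
for _ in range(n):
    k1 = rhs(tau, y)
    k2 = rhs(tau + h / 2, y + h * k1 / 2)
    k3 = rhs(tau + h / 2, y + h * k2 / 2)
    k4 = rhs(tau + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    tau += h
print(tau, y)  # exact solution: tau*s = tau**(1/3), so y(8) = 2
```

The slow $\tau^{1/3}$ growth shows that for $\Gamma_s/\tau = 1/3$ the produced entropy is sizable but not overwhelming, consistent with treating viscosity as a correction.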
Viscosity modifies the thermal distribution function. This modification influences the observed particle spectrum and HBT correlations. The formal procedure for determining the viscous correction to the thermal distribution function is described in references [@deGroot; @Yaffe]. However, the basic form of the viscous correction can be intuited without calculation. First write $f(p) = f_{o}(1 + g(p))$, where $f_{o}(\frac{p\cdot u}{T})$ is the equilibrium thermal distribution function and $g(p)$ is the first viscous correction. $g(p)$ is linearly proportional to the spatial gradients in the system. Spatial gradients have no time derivatives in the rest frame and are therefore formed with the differential operator $\nabla_{\mu} = (g_{\mu\nu} - u_{\mu}u_{\nu})\partial^{\nu}$. For a baryon-free fluid, these gradients are $\nabla_{\alpha}T$, $\nabla_{\alpha}u^{\alpha}$, and $\left\langle \nabla_{\alpha}u_{\beta} \right\rangle$, where $\left\langle \nabla_{\alpha}u_{\beta} \right\rangle \equiv
\nabla_{\alpha}u_{\beta} + \nabla_{\beta}u_{\alpha} -
\frac{2}{3} \Delta_{\alpha\beta}\nabla_{\gamma}u^{\gamma}$. $\nabla_{\alpha}T$ can be eliminated in favor of the other two spatial gradients using the condition that $T^{\mu \nu}u_{\nu} = \epsilon u^{\mu}$ and the ideal equations of motion. $\nabla_{\alpha}u^{\alpha}$ leads ultimately to a bulk viscosity and will be neglected in what follows. Finally, $\left\langle \nabla_{\alpha}u_{\beta} \right\rangle$ leads to a shear viscosity. If $g(p)$ is restricted to be a polynomial of degree less than two, then the functional form of the viscous correction is completely determined, $$\begin{aligned}
\label{correction}
f = f_{o}(1 + \frac{C}{T^3} p^{\alpha}p^{\beta}
\left\langle \nabla_{\alpha}u_{\beta} \right\rangle)\;.\end{aligned}$$ For a Boltzmann gas this is the form of the viscous correction adopted in this work. For Bose and Fermi gases the ideal distribution function in Eq. \[correction\] is replaced with $f_{o}(1 \pm f_{o})$ [@Yaffe].
The coefficient $C$ is directly related to the sound attenuation length. Indeed, using the distribution function in Eq. \[correction\] to determine the stress energy tensor, yields a relationship between the shear viscosity $\eta$ and the coefficient $C$, $$\begin{aligned}
\label{tensor}
T^{\mu\nu}= T^{\mu\nu}_{o} + \eta
\left\langle \nabla^{\mu}u^{\nu} \right\rangle =
\int d^3p \, \frac{p^{\mu} p^{\nu}}{E} \,
f_{o}(1 + \frac{C}{T^3} p^{\alpha}p^{\beta}
\left\langle \nabla_{\alpha}u_{\beta} \right\rangle) \; .\end{aligned}$$ For a Boltzmann gas, Eq. \[tensor\] yields $C=\frac{\eta}{s}$.
The thermal distribution function is now completely determined. In the next section this correction is used to calculate corrections to the observables used in heavy ion collisions.
Corrections to Spectra, Elliptic Flow, HBT Radii
================================================
To quantify the effect of viscous corrections on spectra and HBT radii, I generalize the blast wave model. In the blast wave model used here, the matter undergoes a boost invariant Bjorken expansion and decouples at a proper time of $\tau_{o}=6.5\:\mbox{Fm}$ at a temperature of $T_{o}=160\:\mbox{MeV}$. The matter is distributed uniformly up to a radius of $R_{o}=10.0\:\mbox{Fm}$, with a velocity profile rising to a maximum velocity $u^{r}_{o}=0.5\,c$ [^1]. The blast wave model with these parameters closely models the output of a full hydrodynamic simulation [@Teaney1; @Kolb1] and gives a reasonable fit to the data.
The spectrum of produced particles is given by the Cooper-Frye formula, $$\begin{aligned}
\label{EqSpectra}
\frac{d^{2}N}{d^{2}p_{T}\,dy} &=& \int p^{\mu} d\Sigma_{\mu}\, f \\
dN_{o} + \delta\,dN &=& \int p^{\mu} d\Sigma_{\mu}\, (f_{o} + \delta f) \;. \end{aligned}$$ Fig. \[figSpectra\] shows
![Ratio of the viscous correction to the ideal spectrum, $\delta\,dN/dN_{o}$, as a function of transverse momentum.[]{data-label="figSpectra"}](mComparec2.eps){height="85mm" width="85mm"}
![Elliptic flow as a function transverse momentum. The blast wave parameters (see text) are chosen to approximate a AuAu collision at $b=6\;\mbox{Fm}$.[]{data-label="figV2"}](mV2c2.eps){height="85mm" width="85mm"}
the ratio of the correction compared to the ideal spectrum, $\frac{\delta\,dN}{ dN_{o} }$.
To understand this figure qualitatively, consider a Bjorken expansion of infinitely large nuclei. The longitudinal pressure is reduced [@MG84], $p_{L}=p-\frac{4}{3}\frac{\eta}{\tau}$. Because the shear tensor is traceless, the [*transverse*]{} pressure is [*increased*]{}, $p_{T} = p + \frac{2}{3}\frac{\eta}{\tau}$. Thus, the matter distribution is pushed out to larger $p_{T}$ by the shear in the longitudinal direction. More mathematically, the ratio of the corrected spectrum to the uncorrected spectrum is given by, $$\begin{aligned}
\label{spectra}
\frac{\delta\,dN}{ dN_{o} } &=& \frac{\Gamma_s} {4\tau}
\left\{ \left( \frac{p_T}{T}
\right)^2 -
\left( \frac{m_T}{T}
\right)^2
\frac{1}{2}
\left( \frac{
K_3(\frac{m_T}{T})
}{
K_1(\frac{m_T}{T})
} -1
\right)
\right\}\;.\end{aligned}$$ For large $p_{T}$ we find, $\frac{\delta\,dN}{ dN_{o} } \approx \frac{\Gamma_s}{4\tau}
\left( \frac{p_T}{T} \right)^2 $. Eq. \[spectra\] reproduces the shape and dependence of the full viscous blast wave calculation shown in Fig. \[figSpectra\].
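Eq. \[spectra\] is straightforward to evaluate directly. The sketch below assumes a pion mass $m=0.14\:$GeV, $T=160\:$MeV, and $\Gamma_s/\tau = 1/3$; it contains no transverse flow, so the numerical values differ from the full blast wave curves, but it reproduces the negative correction at low $p_T$ and the quadratic growth at high $p_T$.

```python
import math
from scipy.special import kn  # integer-order modified Bessel functions K_n

T, m, visc = 0.160, 0.140, 1.0 / 3.0  # temperature, pion mass (GeV); Gamma_s/tau

def delta(pt):
    """delta dN / dN_o from Eq. [spectra] (boost-invariant, no transverse flow)."""
    mt = math.hypot(pt, m)            # transverse mass m_T
    ratio = kn(3, mt / T) / kn(1, mt / T)
    return (visc / 4.0) * ((pt / T) ** 2 - 0.5 * (mt / T) ** 2 * (ratio - 1.0))

for pt in (0.5, 1.0, 1.5, 2.0):
    print(pt, delta(pt))
```

The correction is negative at small $p_T$, rises monotonically, and exceeds unity well inside the GeV range, signaling where the hydrodynamic expansion stops being reliable.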
Viscous corrections become of order one when the $p_{T}$ of the particle approaches $1.4$ GeV. This signals the breakdown of the hydrodynamic approach. In fact, ideal hydrodynamics generally fails to reproduce the single particle spectra above $p_{T}$ of 1.5 GeV. Viscosity provides a ready explanation for this breakdown.
In non-central collisions, elliptic flow is calculated using the spectrum indicated in Eq. \[EqSpectra\]. The matter is assumed to have a cylindrical distribution, but the flow velocity has an elliptic component, $u^{r}(r,\phi) = u^{r}_{o}\frac{r}{R_{o}}(1 + 2 u_{2} \cos(2\phi))$. The parameters are: $u_{o}=0.5, u_{2}=0.1,
R_{o}=6\;\mbox{Fm}$ and $\tau_{o}=4.0\;\mbox{Fm}$. As illustrated in Fig. \[figV2\], viscosity reduces elliptic flow by a factor of three.
Next consider viscous corrections to HBT radii. The HBT radii are calculated with the method of variances. First the ideal radii parameters are displayed in Fig. \[IdealHBT\]. The results
![Ideal HBT radii from the blast wave parametrization described in the text.[]{data-label="IdealHBT"}](mRadiic2.eps){height="85mm" width="85mm"}
![Viscous corrections to the ideal HBT radii illustrated in Fig. \[IdealHBT\].[]{data-label="VisHBTCorrections"}](mCorrectionsc2.eps){height="85mm" width="85mm"}
are typical of the blast wave parametrization. Next the viscous correction to the blast wave results are illustrated in Fig. \[VisHBTCorrections\].
Viscous corrections to $R_{L}^{2}$ and $R_{O}^{2}$ are large and negative. This may be understood qualitatively by again considering a simple Bjorken expansion of infinite nuclei. The longitudinal pressure is reduced, $p_{L} = p - \frac{4}{3}\frac{\eta}{\tau}$. Therefore the $p_{z}$ distribution ($
\frac{dN}{dp_{z} d\eta}$) is narrower. However, by boost invariance the single particle distribution ($\frac{dN}{dy\,d\eta}$) is a function of $\left|y-\eta\right|$, which yields the relation, $$\begin{aligned}
\left. m_{T} \frac{dN}{dp_{z} d\eta}\right|_{\eta=0}
=
\left.\tau \frac{dN}{dy dz} \right|_{y=0} .\end{aligned}$$ The $z$ distribution ($\frac{dN}{dy dz} $) at mid rapidity is therefore narrower because the longitudinal pressure is reduced. This implies that $R_{L}^2\equiv\langle z^2\rangle$ decreases due to the viscous longitudinal expansion. In summary, viscosity provides a simple explanation for the very large radii predicted by ideal hydrodynamics.\
[**Acknowledgments:**]{} This work was supported by DE-AC02-98CH10886.
[9]{} D. Teaney, J. Lauret, and E.V. Shuryak, Phys. Rev. Lett. [**86**]{}, 4783 (2001); D. Teaney, J. Lauret, and E.V. Shuryak, nucl-th/0110037. P.F. Kolb, P.Huovinen, U. Heinz, H. Heiselberg, Phys. Lett. B [**500**]{}, 232 (2001); P. Huovinen, P.F. Kolb, U. Heinz, H. Heiselberg, Phys. Lett. B [**503**]{}, 58 (2001). S. Soff, S. A. Bass, Adrian Dumitru, Phys. Rev. Lett. [**86**]{}, 3981 (2001). P. Danielewicz, M. Gyulassy, Phys. Rev. D [**31**]{}, 53-62 (1985). S. de Groot, W. van Leeuven, Ch. van Veert, [*Relativistic Kinetic Theory* ]{}(North-Holland, 1980). Peter Arnold, Guy D. Moore, Laurence G. Yaffe, JHEP [**0011**]{}, 001 (2000).
[^1]: In contrast to common practice $u^{r} = \gamma v^{r}$ is linearly rising: $u^{r} = u^{r}_{o} \frac{r}{R_{o}}$
---
abstract: 'We study strongly correlated electrons on a kagomé lattice at 1/6 (and 5/6) filling. They are described by an extended Hubbard Hamiltonian. We are concerned with the limit $|t|\ll V\ll U$ with hopping amplitude $t$, nearest-neighbor repulsion $V$ and on-site repulsion $U$. We derive an effective Hamiltonian and show, with the help of the Perron–Frobenius theorem, that the system is ferromagnetic at low temperatures. The robustness of ferromagnetism is discussed and extensions to other lattices are indicated.'
address:
- 'Max-Planck-Institut f[ü]{}r Physik komplexer Systeme, 01187 Dresden, Germany'
- 'Max-Planck-Institut f[ü]{}r Physik komplexer Systeme, 01187 Dresden, Germany'
- 'Asia Pacific Center for Theoretical Physics, Pohan, Korea'
- 'Department of Physics and Astronomy, University of California, Riverside, CA 92521 '
author:
- 'F. Pollmann'
- 'P. Fulde'
- 'K. Shtengel'
title: Kinetic ferromagnetism on a kagomé lattice
---
#### Introduction. {#introduction. .unnumbered}
Ferromagnetism in solids or molecules can be of different origin. The most common is spin exchange between electrons belonging to neighboring sites. Polarization of the spins and formation of a symmetric spin state reduces the effects of mutual Coulomb repulsions of the electrons due to the Pauli exclusion principle. The physics is the same as for intra-atomic Hund’s rule coupling, which also plays a significant role in the theory of ferromagnetism. The key ingredient of this mechanism is the *potential* energy of repulsive electron-electron interactions minimized by a symmetric spin state which is better at keeping electrons apart. This should be contrasted with the standard superexchange mechanism for antiferromagnetism where the *kinetic* energy of electrons is optimized instead. Hence it is often the competition between the potential and kinetic energies that determines the “winner”. This physics is illustrated, in its extreme limit, in the case of flat-band ferromagnetism [@Mielke91a; @Mielke92]. Mielke pointed out that electrons in a half-filled flat band become fully spin-polarized for *any* strength of the on-site repulsion $U$. (One could also think of this effect as an extreme case of the Stoner instability in metals.)
Therefore it might appear surprising that ferromagnetism can also originate from purely kinetic effects. A prominent example is the ferromagnetic ground state (GS) discussed by Nagaoka [@Nagaoka66] which is due to the motion of a single hole in an otherwise half-filled Hubbard system. The argument based on the application of the Perron–Frobenius theorem shall be presented later. Although it is only valid in the limit of the infinite on-site Hubbard repulsion (to exclude the possibility of double occupancy) on a finite lattice, it demonstrates how ferromagnetism can result from the motion of the electrons or holes. The same theorem is also the basis of ferromagnetism due to three-particle ring exchange, a process first pointed out by Thouless [@Thouless65] in the context of $^{3}$He (following the original observation by Herring [@Herring62]) and later also studied in the context of Wigner glass [@Chakravarty99] and frustrated magnets [@Misguich99]. In both cases, the ferromagnetic GS has the smoothest wavefunction and hence lowest kinetic energy.
Our introduction would not be complete without mentioning some other sources of ferromagnetism such as the RKKY interaction in metals or double-exchange (e.g., in manganites) to name a few. In this paper, however, we will be concerned with ferromagnetism of kinetic origin. In particular, we demonstrate that fermions on a partially filled kagomé lattice which are described by an extended one-band Hubbard model in the strong correlation limit have a ferromagnetic GS. (The difference with Mielke’s flat-band ferromagnetism is discussed later in the paper.) Again, the physics discussed here is motivated by the Perron–Frobenius theorem, but otherwise is quite different from Nagaoka’s and Thouless’ examples.
#### Model Hamiltonian. {#model-hamiltonian. .unnumbered}
We start from an extended one-band Hubbard model on a kagomé lattice with on-site repulsion $U$ and nearest-neighbor repulsion $V$. Using second quantized notation, the Hamiltonian is written as $$H
=-t\sum_{\langle
i,j\rangle,\sigma}\left(c_{i\sigma}^{\dag}c^{\vphantom{\dag}}_{j\sigma}
+ \text{H.c.}\right) \\
+V\sum_{\langle
i,j\rangle}n_{i}n_{j}+U\sum_{i}n_{i\uparrow}n_{i\downarrow}.
\label{eq:extended_hub}$$ Here the operators $c^{\vphantom{\dag}}_{i\sigma}$ ($c_{i\sigma}^{\dag}$) annihilate (create) an electron with spin $\sigma$ on site $i$. The density operators are given by $n_{i}=n_{i\uparrow}+n_{i\downarrow}$ with $n_{i\sigma}=c_{i\sigma}^{\dag}c^{\vphantom{\dag}}_{i\sigma}$. The notation $\langle i,j\rangle$ refers to pairs of nearest neighbors.
(a)![Panels (a) and (b) show two different configurations satisfying the constraint of zero or one electron per site and one electron of arbitrary spin per triangle. The arrows indicate possible ring-hopping processes. An equivalent colored dimer representation on a honeycomb lattice is shown in (c) and (d); the colors encode particle spins. Ring exchanges conserve the parity of the number of dimers on the sublattice shown in panel (e). Panel (f) shows the 24-site cluster used for the exact diagonalization. The dimers are arranged to maximize the next-nearest-neighbor spin interaction along the dashed lines. \[dimerpanel\]](kagome_allowed1 "fig:"){width="28mm"} (b)![](kagome_allowed2 "fig:"){width="28mm"} (c)![](mapping_dimer1 "fig:"){width="28mm"} (d)![](mapping_dimer2 "fig:"){width="28mm"} (e)![](sign "fig:"){width="28mm"} (f)![](total_af_dimer "fig:"){width="28mm"}
We first focus on the case of 1/6 filling (i.e., one electron per three sites). In the limit of strong correlations, when $|t|\ll V\ll U$ and $U\rightarrow\infty$, the possibility of doubly occupied sites is eliminated. First we assume that $t=0$. In that case the GS is macroscopically degenerate. All configurations with precisely one electron of arbitrary spin orientation on each triangle are GSs (see Figs. \[dimerpanel\](a) and (b)). It is helpful to consider the honeycomb lattice which connects the centers of triangles of the kagomé lattice. Different GS configurations on the kagomé lattice correspond to different two-colored (spin) dimer configurations on the honeycomb lattice (particles are sitting here on links, see Fig. \[dimerpanel\] (c) and (d)). They are orthogonal because any wavefunction overlap is neglected.
When $t\ne0$, this GS degeneracy is lifted. In the lowest non-vanishing order in $t/V$, the effective Hamiltonian acting within the low-energy manifold spanned by the states with no double occupancy and exactly one electron per triangle becomes $$H_{\text{hex}}=-g\sum_{\text{hexagons}}\;\sum_{\{\blacktriangle\blacksquare\bullet\}}
\Big(\big|{\circlearrowright}\big\rangle\big\langle{\circlearrowleft}\big|+\text{H.c.}\Big)
\label{eq:Hhex}$$ with $g=6t^{3}/V^{2}$. (The kets denote the two alternating three-dimer coverings of a flippable hexagon, with the three colored dimers shifted by one bond.) Here the Hamiltonian is written in terms of dimers on a honeycomb lattice and the sum is performed over all hexagons. The sum over the three symbols is taken over all possible color (spin) combinations of a flippable hexagon. Particles hop either clockwise or counter-clockwise around the hexagons. These processes can lead to different configurations, depending on the colors (spins) of the dimers. We observe that $H_{\text{hex}}$ does not cause a fermionic sign problem. In particular, the local constraint of having one fermionic dimer attached to each site allows for an enumeration of dimers such that only an even number of fermionic operators has to be exchanged when the matrix elements of $H_{\text{hex}}$ are calculated.
If one were to ignore the spin degrees of freedom (the colors of the dimers), the model would be equivalent to the quantum dimer model (QDM) studied in Ref. [@Moessner01c]. Similarly to the QDM on a square lattice [@Rokhsar88], the effective Hamiltonian (\[eq:Hhex\]) conserves certain quantities – winding numbers – and connects configurations only when they belong to the same topological sector. (For the case of periodic boundary conditions, the winding numbers are defined by first orienting dimers so that the arrows point from the A to B sublattice, and second, by counting the net flow of these arrows across two independent essential cycles formed by the dual bonds.) The GS of the QDM was found to be three-fold degenerate in the thermodynamic limit, corresponding to the valence bond solid (VBS) plaquette phase with broken translational invariance. In what follows, we shall investigate the effects of quantum dynamics – the ring-exchange hopping of electrons (dimers) – on spin correlations. Note that $H_{\text{hex}}$ has no explicit spin dependency and conserves both $S^{z}_{\text{tot}}$ and the total spin $S_{\text{tot}}$.
#### Ferromagnetism from the Perron–Frobenius Theorem. {#ferromagnetism-from-the-perronfrobenius-theorem. .unnumbered}
In short, the Perron–Frobenius theorem states that the largest eigenvalue of a symmetric $n\times n$ matrix with only positive elements is positive and non-degenerate, while the corresponding eigenvector is “nodeless”, i.e., can be chosen to have only positive components. (For a simple proof of this theorem, see e.g. [@Ninio].) Applying this theorem to the (finite-dimensional) matrix $\exp{(-\tau\hat{H})}$ (for any $\tau>0$), one concludes that if all off-diagonal matrix elements of the Hamiltonian $\hat{H}$ are non-positive and the Hilbert space is connected by the quantum dynamics (meaning that any state can be reached from any other state by a repeated application of $\hat{H}$), then the GS is unique and nodeless. It is important to remember that the theorem only works for systems with a finite-dimensional Hilbert space.
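The finite-dimensional statement is easy to check numerically. The sketch below (illustrative only; matrix size and random seed are arbitrary) verifies both the simplicity of the largest eigenvalue and the positivity of its eigenvector for a random symmetric matrix with strictly positive entries:

```python
import numpy as np

# Check the Perron-Frobenius statement for a symmetric positive matrix:
# the largest eigenvalue is simple and its eigenvector can be chosen
# strictly positive ("nodeless"). Size and seed are arbitrary.
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(8, 8))
A = (A + A.T) / 2                          # symmetric, strictly positive entries

vals, vecs = np.linalg.eigh(A)             # eigenvalues in ascending order
top = vecs[:, -1] * np.sign(vecs[0, -1])   # fix the overall sign

assert vals[-1] > vals[-2]                 # largest eigenvalue is non-degenerate
assert np.all(top > 0)                     # nodeless top eigenvector
print(vals[-1])
```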
To show the relation of this theorem to ferromagnetism, we now sketch the argument for Nagaoka's ferromagnetism in the GS of an infinite-$U$ Hubbard model (Eq. (\[eq:extended\_hub\]) with $V=0$, $U\to \infty$) with a single mobile hole (after Refs. [@Tasaki89; @Tian90]). Denote a state with a single hole on site $i$ as $\left|i,\alpha\right\rangle$, where $\alpha=\left\{\sigma_1,\ldots,
\sigma_{i-1},\sigma_{i+1},\ldots, \sigma_{N}\right\}$ is the spin configuration of the electrons. We use the convention $\left|i,\alpha\right\rangle= (-1)^i c_{1,\sigma_1}^\dag \cdots
c_{i-1,\sigma_{i-1}}^\dag
c_{i+1,\sigma_{i+1}}^\dag \cdots c_{N,\sigma_{N}}^\dag |0\rangle$. (No double occupancy is allowed for $U\to \infty$.) In this basis, the matrix elements of the hopping Hamiltonian are either $t$, for states related by a single hop of the hole between two neighboring sites, or $0$ otherwise. The Hamiltonian commutes with both $\hat{S}^z_\text{tot}$ and $\hat{S}^2_\text{tot}$. Our chosen basis consists of eigenstates of $\hat{S}^z_\text{tot}$ but not of $\hat{S}^2_\text{tot}$, hence we can immediately separate the Hilbert space into sectors of fixed $S^z_\text{tot}$. Within each sector, the Hamiltonian matrix has exactly $z$ (the coordination number) nonzero entries in each row and each column. A direct inspection shows that a vector whose entries are all $1$ is an eigenvector with the eigenvalue $zt$. If $t<0$ and the tunneling of a single hole satisfies the connectivity condition, the Perron–Frobenius theorem applies and hence such a state is the GS (there can be no other state with only positive coefficients that is orthogonal to this one), which is unique for a finite system. (We remark that the sign of $t$ can always be changed on a bipartite lattice.) Clearly, this state is fully spin-polarized in the $S^z_\text{tot}=S_\text{max}=\pm N/2$ sectors, and since the Hamiltonian commutes with $\hat{S}^2_\text{tot}$, the state with ${S}^2_\text{tot}=(S_\text{max}+1)S_\text{max}$ must have the same energy in every $S^z_\text{tot}$ sector. But we already saw that the states with the energy $\mathcal{E}=zt$ are unique GSs in every sector, hence they must have the same ${S}^2_\text{tot}=(S_\text{max}+1)S_\text{max}$, QED. The obvious pitfalls may come from taking the thermodynamic limit or violating the connectivity condition: in both cases the nodeless, fully spin-polarized state remains a GS, but no claims can be made about other potential GSs.
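In the fully polarized sector the above reduces to a transparent matrix statement: the hole-hopping Hamiltonian is $t$ times the adjacency matrix of the lattice, the uniform vector is an eigenstate with eigenvalue $zt$, and for $t<0$ it is the ground state. A minimal numerical check (the ring geometry and size are chosen purely for illustration):

```python
import numpy as np

# Nagaoka argument, fully polarized sector: H = t * (adjacency matrix).
# The uniform vector has eigenvalue z*t and, for t < 0, is the ground state.
N, t, z = 10, -1.0, 2              # 10-site ring, coordination number z = 2
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
H = t * A

vals, _ = np.linalg.eigh(H)
ones = np.ones(N) / np.sqrt(N)
assert np.allclose(H @ ones, z * t * ones)  # uniform state: energy z*t
assert np.isclose(vals[0], z * t)           # ...and it is the lowest one
print("ground-state energy:", vals[0])
```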
Turning to our case, we remark that the sign of the plaquette flip in Eq. (\[eq:Hhex\]) can be always chosen negative, irrespective of the sign of the original tunneling amplitude $t$ – this is just a matter of a simple local gauge transformation [@Rokhsar88]. Specifically, the sign of $g$ in Eq. (\[eq:Hhex\]) can be changed by multiplying all configurations $C$ with the color-independent factor $i^{\nu(C)}$ where $\nu(C)$ is the number of dimers on the sublattice shown in Fig. \[dimerpanel\] (e). This fact might appear surprising though since the actual sign of $t$ can typically be gauged away only for the cases of bipartite or half-filled lattices. The reason the sign of $t$ turns out to be inconsequential in our case of the (non-bipartite) kagomé lattice away from half-filling is due to the constrained nature of the ring exchange quantum dynamics of Eq. (\[eq:Hhex\]). We therefore choose all off-diagonal matrix elements of $H_{\text{hex}}$ to be non-positive. This by itself is not yet sufficient to apply the Perron–Frobenius theorem since the quantum dynamics of dimers on a (bipartite) honeycomb lattice is explicitly non-ergodic: as we have mentioned earlier, the Hilbert space is broken into sectors corresponding to the winding numbers which are conserved under *any* local ring exchanges. On the other hand, the ring-exchange dynamics of dimers given by Eq. (\[eq:Hhex\]) *is* ergodic within each sector [@Saldanha95]. Therefore we consider each topological sector separately. The argument is very similar to the one presented earlier for Nagaoka’s ferromagnetism. For the $S^z_\text{tot}=S_\text{max}=\pm N_\text{e}/2$ spin sectors, the GS is unique, fully spin-polarized, and all elements of its eigenvector are positive.
For all other $S^z_\text{tot}$ sectors, however, the situation appears more complicated at first sight. The reason is a much bigger configuration space – essentially, we are now dealing with two-color dimer configurations. A given state can now be connected by a ring-exchange Hamiltonian to a larger number of states than it would if all dimers had the same color (spin). We formalize this by introducing the notion of descendant states $|C^{k}_i\rangle,\ k=1\dots 2^{N_\text{e}}$ – two-color dimer configurations obtained from the uncolored “parent” configuration $|C_i\rangle$ by simply coloring its dimers (i.e., assigning spins). The subspace of descendant states can be partitioned according the conserved $S^z_\text{tot}$. The resulting sectors, in general, have different dimensionality $D(S^z_\text{tot})$ equal to the number of distinct permutations of spins (colors). A crucial observation is that $\sum_{k} \langle C^{k}_i|H_{\text{hex}}|C^{m}_j\rangle = \langle C_i|H_{\text{hex}}|C_j\rangle$ for any $i$, $j$, $m$. The immediate consequence is that if $|\Psi_0\rangle \equiv |\Psi_0(N_\text{e}/2)\rangle = \sum_i \gamma_i |C_i\rangle$ is the GS in the $S^z_\text{tot}=S_\text{max}=\pm N_\text{e}/2$ spin sector, then $|\Psi_0(S^z_\text{tot})\rangle = D^{-1/2}(S^z_\text{tot}) \sum_i \gamma_i \sum_k^\prime |C^{k}_i\rangle$ is an eigenstate with the same energy in any other $S^z_\text{tot}$ spin sector. (The sum over descendants $k$ is performed only within a given spin sector.) The SU(2) symmetry of the effective Hamiltonian (\[eq:Hhex\]) once again implies that $|\Psi_0(S^z_\text{tot})\rangle$ is a GS and is fully spin polarized.
Notice that the *uniqueness* of such a GS relies on the ergodicity of the Hamiltonian within each $S^z_\text{tot}$ spin sector. Numerical studies on finite clusters of up to 48 kagomé lattice sites, including different geometries, show that this is in fact the case. Unfortunately, we were not able to provide a rigorous analytical argument. Should it turn out *not* to be the case, it would open the possibility of degenerate GSs in $S^z_\text{tot}\neq S_\text{max}$ spin sectors. Still, at least one of the GSs is fully spin polarized [@Tasaki98].
While the Perron–Frobenius argument applies to finite systems, it does not withstand the thermodynamic limit. In particular, it is known that in the thermodynamic limit the ground state of the QDM is in the three-fold degenerate plaquette phase [@Moessner01c]. We suspect that ferromagnetism survives this limit and coexists with such a broken symmetry state; a conclusive resolution of this point remains a subject of further research.
The ferromagnetic GS which we find here should not be confused with Mielke’s flat-band ferromagnetism [@Mielke91a; @Mielke92]. In fact, Mielke has shown that a positive-$U$ Hubbard model with $V=0$ on a kagomé lattice at 5/6 filling has a fully spin-polarized GS. A detailed discussion of the differences from our case can be found below.
#### Stability of kinetic ferromagnetism. {#stability-of-kinetic-ferromagnetism. .unnumbered}
In order to test the robustness of the ferromagnetic GS, we introduce by hand an additional next-nearest-neighbor interaction, in the spirit of [@Poilblanc07]: $$H'= H_{\text{hex}}+J\sum_{\langle\langle i,j\rangle\rangle}\left(S_{i}S_{j}-\frac{1}{4}n_{i}n_{j}\right).\label{eq:Hspin}$$ By adding such a term, we attempt to frustrate the ferromagnetic state. Indeed, this term favors configurations such as the one shown in Fig. \[dimerpanel\](f), which maximize the spin interactions by having electrons on the same sublattice. Not only is the resulting charge order expected to suppress the kinetic mechanism for ferromagnetism, but the ferromagnetic order itself is now suppressed by the antiferromagnetic fluctuations favored by the spin interactions for $J>0$. By gradually increasing $J/g$ towards strong antiferromagnetic coupling, we can estimate the stability of the ferromagnetic GS in the presence of short-ranged perturbations.
#### Numerical Results. {#numerical-results. .unnumbered}
![Exact diagonalization of the two-color dimer model on an 18-site (left panels) and a 24-site (right panels) honeycomb cluster. The upper panels show the GS energies of different $S_{z}$ sectors as a function of the next-nearest-neighbor coupling $J/g$. The lower ones show the GS expectation values of the spin part of the Hamiltonian. \[fig:Exact-diag\] ](both_new){width="87mm"}
We calculate by means of exact diagonalization the GS of a two-color dimer model on 18- and 24-site honeycomb clusters. These systems correspond to 1/6-filled kagomé clusters with 27 and 36 sites, respectively. Calculations on clusters of different sizes and geometries show qualitatively the same results.
The ground-state energies of the different $S_{z}$ sectors are degenerate as long as $J/g<(J/g)_{c}\approx0.2$, as shown in Fig. \[fig:Exact-diag\]. This demonstrates the robustness of the ferromagnetism induced by ring-hopping processes. Above the transition point $(J/g)_{c}\approx0.2$, antiferromagnetic spin fluctuations are no longer suppressed. The ground-state degeneracy of the different $S_{z}$ sectors is lifted and the true GS is the one with the lowest $|S_{z}|$. The gain in kinetic energy decreases correspondingly, since the spin fluctuations cause nodes in the wavefunction. The expectation value of $H_{\text{spin}}$ shows a jump at $(J/g)_{c}$ (see Fig. \[fig:Exact-diag\]). Note that a small second jump at $J/g\approx0.45$ found for the 18-site cluster is an effect of geometry. These findings demonstrate a considerable robustness of the ferromagnetism generated by kinetic processes. In the limit $J/g\rightarrow\infty$, the kinetic processes are unimportant and the GS is that of a Heisenberg antiferromagnet on a kagomé lattice. One of the configurations which maximize the spin interactions is shown in Fig. \[dimerpanel\](f).
The above considerations can also be applied to the case of 5/6 filling due to the arbitrary choice of sign of $g$ in Eq. (\[eq:Hhex\]). In addition, a ferromagnetic GS is found numerically for a filling factor of 1/3. With two occupied sites per triangle, we obtain a fully packed loop covering of the honeycomb lattice instead of a dimer covering. As before, there is no sign problem in the strong coupling limit. However, the effective Hamiltonian is *not* ergodic in the 1/3 filled case and thus the Perron–Frobenius theorem does not rule out other GSs which are not fully spin polarized. Numerical studies confirm a ferromagnetic GS which is much less robust – antiferromagnetic fluctuations occur already at very small ratios $(J/g)$. A more detailed discussion is left to an extended version of this paper.
We conclude by reiterating the difference between the two mechanisms for ferromagnetism in a Hubbard model on the kagomé lattice: the one presented here and the flat-band mechanism discussed in Refs. [@Mielke91a; @Mielke92; @Tasaki98]. The flat-band ferromagnetism was demonstrated for the case of $V=0$ in the Hamiltonian (\[eq:extended\_hub\]), while our mechanism requires $V \to
\infty$. In Mielke’s case, ferromagnetism has been predicted for the range of fillings between $5/6$ and $11/12$ and for any value of $U>0$. The connection between the sign of the tunneling amplitude $t$ and the electron concentration is crucial. This is because the tight-binding model on a kagomé lattice has one of its three bands completely flat: the lowest band for the case of $t<0$ or the highest band for the case of $t>0$. On the other hand, the mechanism presented here is insensitive to the sign of $t$ and, as we have already mentioned in the introduction, is of kinetic rather than of potential origin. Furthermore, the Perron–Frobenius theorem is not applicable in Mielke’s case [@Tasaki98]; instead the proof was based on graph-theoretical methods. By contrast, the kinetic ferromagnetism studied in this letter relies crucially on strong electron–electron repulsion $U \gg V\to \infty$; hence it belongs to a different class from flat-band ferromagnetism. Despite certain similarities, it must also be distinguished from Thouless’ three-particle ring-exchange mechanism: the crucial difference is that a standard three-particle ring exchange leaves the particles at their original locations, it simply cyclically permutes them. In our case the particles actually move; the initial and final configurations are distinct.
Thus the mechanism found in this letter represents a new generic type of kinetic ferromagnetism. An interesting question is to what extent the ferromagnetism of the form found here can be extended to other lattice structures. A particularly interesting case is the pyrochlore lattice. For the strong correlation limit at $1/8$ filling one can use the Perron–Frobenius-based argument as well, implying a ferromagnetic GS. Again, this and other cases will be discussed in the extended version of this paper.
The authors would like to thank G. Misguich and R. Kenyon for illuminating and helpful discussions.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'Assume $\alpha\geq p>1$. Consider the following $p$-th Yamabe equation on a connected finite graph $G$: $$\Delta_p\varphi+h\varphi^{p-1}=\lambda f\varphi^{\alpha-1},$$ where $\Delta_p$ is the discrete $p$-Laplacian, $h$ and $f>0$ are fixed real functions defined on all vertices. We show that the above equation always has a positive solution $\varphi$ for some constant $\lambda\in\mathds{R}$.'
author:
- Huabin Ge
title: '**A $p$-th Yamabe equation on graph**'
---
Introduction
============
The well-known smooth Yamabe problem asks one to consider the following smooth Yamabe equation [@Aubin; @Lee; @Yamabe] $$\Delta\varphi+h(x)\varphi=\lambda f(x)\varphi^{N-1}$$ on a $C^{\infty}$ compact Riemannian manifold $M$ of dimension $n\geq 3$, where $h(x)$ and $f(x)$ are $C^{\infty}$ functions on $M$, with $f(x)$ everywhere strictly positive and $N=2n/(n-2)$. The problem is to prove the existence of a real number $\lambda$ and of a $C^{\infty}$ function $\varphi$, everywhere strictly positive, satisfying the above Yamabe equation. In this short paper, we consider the corresponding discrete Yamabe equation $$\Delta\varphi+h\varphi=\lambda \varphi^{\alpha-1},\;\;\alpha\geq2$$ on a finite graph. More generally, we shall establish existence results for the following $p$-th discrete Yamabe equation $$\Delta_p\varphi+h\varphi^{p-1}=\lambda f\varphi^{\alpha-1}$$ on a finite graph $G$ with $\alpha\geq p>1$. This work is inspired by Grigor’yan, Lin and Yang’s pioneering papers [@GLY; @GLY'], where they studied similar equations on finite or locally finite graphs.
Settings and main results
=========================
Let $G=(V,E)$ be a finite graph, where $V$ denotes the vertex set and $E$ denotes the edge set. Fix a vertex measure $\mu:V\rightarrow(0,+\infty)$ and an edge measure $\omega:E\rightarrow(0,+\infty)$ on $G$. The edge measure $\omega$ is assumed to be symmetric, that is, $\omega_{ij}=\omega_{ji}$ for each edge $i\thicksim j$.
Denote by $C(V)$ the set of all real functions defined on $V$; then $C(V)$ is a finite-dimensional linear space with the usual function addition and scalar multiplication. For any $p>1$, the $p$-th discrete graph Laplacian $\Delta_p:C(V)\rightarrow C(V)$ is $$\Delta_pf_i=\frac{1}{\mu_i}\sum\limits_{j\thicksim i}\omega_{ij}|f_j-f_i|^{p-2}(f_j-f_i)$$ for any $f\in C(V)$ and $i\in V$. Note that $\Delta_p$ is a nonlinear operator when $p\neq2$.
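To make the definition concrete, the following Python sketch evaluates $\Delta_p f$ directly from the formula above. The triangle graph, unit weights, and test function are illustrative choices, not taken from the paper.

```python
def p_laplacian(f, mu, omega, p):
    """Discrete p-Laplacian: (Delta_p f)_i = (1/mu_i) sum_{j~i} w_ij |f_j - f_i|^{p-2} (f_j - f_i).

    f, mu : dicts vertex -> value / positive vertex weight
    omega : dict frozenset({i, j}) -> positive symmetric edge weight
    """
    out = {}
    for i in f:
        s = 0.0
        for e, w in omega.items():
            if i in e:
                (j,) = e - {i}
                d = f[j] - f[i]
                if d != 0.0:  # avoid 0**(p-2) when p < 2; the term is zero anyway
                    s += w * abs(d) ** (p - 2) * d
        out[i] = s / mu[i]
    return out

# Triangle graph with unit vertex and edge weights (illustrative).
mu = {0: 1.0, 1: 1.0, 2: 1.0}
omega = {frozenset(e): 1.0 for e in [(0, 1), (0, 2), (1, 2)]}
g = p_laplacian({0: 0.0, 1: 1.0, 2: 2.0}, mu, omega, p=3)
```

A convenient sanity check on any implementation: since $\omega_{ij}=\omega_{ji}$, summation by parts gives $\sum_{i\in V}\mu_i\,\Delta_pf_i=0$.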
\[sect-main-result\]
\[thm-main\] Let $G=(V,E)$ be a finite connected graph. Given $h, f\in C(V)$ with $f>0$. Assume $\alpha\geq p>1$. Then the following $p$-th Yamabe equation $$\Delta_p\varphi+h\varphi^{p-1}=\lambda f\varphi^{\alpha-1}$$\[def-Yamabe-equ-p\] on $G$ always has a positive solution $\varphi$ for some constant $\lambda\in\mathds{R}$.
Taking $p=2$, we get the following
\[croll-main\] Let $G=(V,E)$ be a finite connected graph. Given $h, f\in C(V)$ with $f>0$. Assume $\alpha>2$. Then the following Yamabe equation $$\label{def-Yamabe-equ-2}
\Delta\varphi+h\varphi=\lambda f\varphi^{\alpha-1}$$ on $G$ always has a positive solution $\varphi$ for some constant $\lambda\in\mathds{R}$.
Grigor’yan, Lin and Yang [@GLY'] established similar results for the following equation $$\label{equ-gly-2}
-\Delta u+hu=|u|^{\alpha-2}u,\;\;\alpha>2$$ on a finite graph under the assumption $h>0$. They show that the above equation (\[equ-gly-2\]) always has a positive solution. They also studied the following equation $$\label{equ-gly-p}
-\Delta_p u+h|u|^{p-2}u=f(x,u),\;\;p>1$$ and established some existence results under certain assumptions on $f(x,u)$. It is worth noting, however, that the operator $\Delta_p$ considered in equation (\[equ-gly-p\]) differs from ours when $p\neq2$. It is also remarkable that our Theorem \[thm-main\] does not require $h>0$.
Proofs of theorem \[thm-main\] {#sect-preliminary-lemma}
==============================
Sobolev embedding
-----------------
For any $f\in C(V)$, define an integral of $f$ over $V$ with respect to the vertex weight $\mu$ by $$\int_Vfd\mu=\sum\limits_{i\in V}\mu_if_i.$$ Set $\mathrm{Vol}(G)=\int_Vd\mu$. Similarly, for any function $g$ defined on the edge set $E$, we define an integral of $g$ over $E$ with respect to the edge weight $\omega$ by $$\int_Egd\omega=\sum\limits_{i\thicksim j}\omega_{ij}g_{ij}.$$ Specially, for any $f\in C(V)$, $$\int_E|\nabla f|^pd\omega=\sum\limits_{i\thicksim j}\omega_{ij}|f_j-f_i|^p,$$ where $|\nabla f|$ is defined on the edge set $E$, and $|\nabla f|_{ij}=|f_j-f_i|$ for each edge $i\thicksim j$. Next we consider the Sobolev space $W^{1,\,p}$ on the graph $G$. Define $$W^{1,\,p}(G)=\left\{u\in C(V):\int_E|\nabla\varphi|^pd\omega+\int_V|\varphi|^pd\mu<+\infty\right\},$$ and $$\|u\|_{W^{1,\,p}(G)}=\left(\int_E|\nabla\varphi|^pd\omega+\int_V|\varphi|^pd\mu\right)^{\frac{1}{p}}.$$ Since $G$ is a finite graph, then $W^{1,\,p}(G)$ is exactly $C(V)$, a finite dimensional linear space. This implies the following Sobolev embedding:
\[lem-Sobolev-embedding\](Sobolev embedding) Let $G=(V,E)$ be a finite graph. The Sobolev space $W^{1,\,p}(G)$ is pre-compact. Namely, if $\{\varphi_n\}$ is bounded in $W^{1,\,p}(G)$, then there exists some $\varphi\in W^{1,\,p}(G)$ such that up to a subsequence, $\varphi_n\rightarrow\varphi$ in $W^{1,\,p}(G)$.
The convergence in $W^{1,\,p}(G)$ is in fact pointwise convergence.
Proofs step by step
-------------------
We follow the original approach pioneered by Yamabe [@Yamabe]. Denote an energy functional $$I(\varphi)=\left(\int_E|\nabla \varphi|^pd\omega-\int_Vh\varphi^pd\mu\right)\left(\int_Vf\varphi^{\alpha} d\mu\right)^{-\frac{p}{\alpha}},$$ where $\varphi\in W^{1,\,p}(G)$, $\varphi\geq 0$ and $\varphi\not\equiv0$. Define $$\beta=\inf \big\{I(\varphi): \varphi\geq0,\;\varphi\not\equiv0\big\}.$$ We shall find a solution to (\[def-Yamabe-equ-p\]) step by step as follows.\
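The functional $I$ is invariant under the scaling $\varphi\mapsto t\varphi$: the numerator scales as $t^p$ and $(\int_Vf\varphi^\alpha d\mu)^{-p/\alpha}$ as $t^{-p}$, so $\beta$ may equivalently be computed over the normalized sequences used in Step 2 below. The following Python sketch evaluates $I$ on a toy graph; all data (the triangle graph, weights, and functions) are illustrative choices, not taken from the paper.

```python
def energy_I(phi, h, f, mu, edges, p, alpha):
    """Yamabe energy functional I(phi) on a finite weighted graph.

    phi, h, f, mu : lists indexed by vertex
    edges         : dict (i, j) -> weight, each undirected edge listed once
    """
    grad = sum(w * abs(phi[j] - phi[i]) ** p for (i, j), w in edges.items())
    pot = sum(m * hv * x ** p for m, hv, x in zip(mu, h, phi))
    den = sum(m * fv * x ** alpha for m, fv, x in zip(mu, f, phi))
    return (grad - pot) * den ** (-p / alpha)

# Triangle graph, unit vertex and edge weights (illustrative).
edges = {(0, 1): 1.0, (0, 2): 1.0, (1, 2): 1.0}
mu = [1.0, 1.0, 1.0]
# For constant phi = 1 with h = -1, f = 1, p = 2, alpha = 4:
# I = (0 - (-3)) * 3**(-1/2) = sqrt(3).
val = energy_I([1.0, 1.0, 1.0], [-1.0] * 3, [1.0] * 3, mu, edges, p=2, alpha=4)
```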
**Step 1**. $I(\varphi)$ is bounded below for all $\varphi\geq0$, $\varphi\not\equiv0$. Hence $\beta\neq-\infty$ and $\beta\in\mathds{R}$. In fact, it’s easy to see $$0<\left(\int_Vf\varphi^{\alpha} d\mu\right)^{\frac{p}{\alpha}}\leq f_M^{\frac{p}{\alpha}}\left(\int_V\varphi^{\alpha} d\mu\right)^{\frac{p}{\alpha}}=f_M^{\frac{p}{\alpha}}\|\varphi\|_{\alpha}^p,$$ where $f_M=\max\limits_{i\in V}f_i>0$. Hence $$\label{equ-right}
\left(\int_Vf\varphi^{\alpha} d\mu\right)^{-\frac{p}{\alpha}}\geq f_M^{-\frac{p}{\alpha}}\|\varphi\|_{\alpha}^{-p}>0.$$ Similarly, we also have $$-\int_Vh\varphi^{p}d\mu\geq (-h)_m\int_V\varphi^{p}d\mu=(-h)_m\|\varphi\|_{p}^{p},$$ where $(-h)_m=\min\limits_{i\in V}(-h_i)$. Then it follows $$\label{equ-left}
\int_E|\nabla \varphi|^pd\omega-\int_Vh\varphi^pd\mu\geq(-h)_m\|\varphi\|_{p}^{p}.$$ By (\[equ-right\]) and (\[equ-left\]), we get $$I(\varphi)\geq(-h)_m\|\varphi\|_{p}^{p}f_M^{-\frac{p}{\alpha}}\|\varphi\|_{\alpha}^{-p},$$ and further $$\label{equ-I-fai-1}
I(\varphi)\geq\big((-h)_m\wedge 0\big)\|\varphi\|_{p}^{p}f_M^{-\frac{p}{\alpha}}\|\varphi\|_{\alpha}^{-p},$$ where $(-h)_m\wedge 0$ is the minimum of $(-h)_m$ and $0$. Since $\alpha\geq p$, then $$\label{equ-half-1}
0<\|\varphi\|_{p}^{p}\leq\left(\int_V\left(\varphi^p\right)^{\frac{\alpha}{p}}d\mu\right)^{\frac{p}{\alpha}}
\left(\int_V1^{\frac{\alpha}{\alpha-p}}d\mu\right)^{\frac{\alpha-p}{\alpha}}
=\|\varphi\|_{\alpha}^{p}\mathrm{Vol}(G)^{1-\frac{p}{\alpha}},$$ which leads to $$\label{equ-I-fai-2}
0<\|\varphi\|_{p}^{p}\|\varphi\|_{\alpha}^{-p}\leq\mathrm{Vol}(G)^{1-\frac{p}{\alpha}}.$$ Thus by (\[equ-I-fai-1\]) and (\[equ-I-fai-2\]), we obtain $$\label{equ-I-fai-final}
I(\varphi)\geq\big((-h)_m\wedge 0\big)f_M^{-\frac{p}{\alpha}}\mathrm{Vol}(G)^{1-\frac{p}{\alpha}}=C_{\alpha,p,h,f,G},$$ where $C_{\alpha,p,h,f,G}\leq0$ is a constant depending only on the information of $\alpha$, $p$, $h$, $f$ and $G$. Note that the information of $G$ contains $V$, $E$, $\mu$ and $\omega$. Hence $I(\varphi)$ is bounded below by a universal constant.\
**Step 2**. There exists $\hat{\varphi}\geq0$ such that $\beta=I(\hat{\varphi})$. To find such $\hat{\varphi}$, we choose $\varphi_n\geq0$ satisfying $$\int_Vf\varphi_n^{\alpha}d\mu=1$$ and $$I(\varphi_n)\rightarrow\beta$$ as $n\rightarrow\infty$. We may assume $I(\varphi_n)\leq 1+\beta$ for all $n$. Note $$1=\int_Vf\varphi_n^{\alpha}d\mu\geq f_m\int_V\varphi_n^{\alpha}d\mu=f_m\|\varphi_n\|_{\alpha}^{\alpha},$$ where $f_m=\min\limits_{i\in V}f_i$. Hence $$\label{equ-half-2}
\|\varphi_n\|_{\alpha}^{p}\leq f_m^{-\frac{p}{\alpha}}.$$ Denote $|h|_M=\max\limits_{i\in V}|h_i|$, then by (\[equ-half-1\]) and (\[equ-half-2\]), we obtain $$\begin{aligned}
\|\varphi_n\|^p_{W^{1,\,p}(G)}=&\int_E|\nabla\varphi|^pd\omega+\int_V|\varphi|^pd\mu\\
=&\;I(\varphi_n)+\int_Vh\varphi_n^{p}d\mu+\|\varphi_n\|_{p}^{p}\\
\leq&\;1+\beta+(1+|h|_M)\|\varphi_n\|_{p}^{p}\\
\leq&\;1+\beta+(1+|h|_M)\mathrm{Vol}(G)^{1-\frac{p}{\alpha}}\|\varphi_n\|_{\alpha}^{p}\\
\leq&\;1+\beta+(1+|h|_M)\mathrm{Vol}(G)^{1-\frac{p}{\alpha}}f_m^{-\frac{p}{\alpha}},
\end{aligned}$$ which implies that $\{\varphi_n\}$ is bounded in $W^{1,\,p}(G)$. Therefore by Lemma \[lem-Sobolev-embedding\], there exists some $\hat{\varphi}\in C(V)$ such that up to a subsequence, $\varphi_n\rightarrow \hat{\varphi}$ in $W^{1,\,p}(G)$. We still denote this subsequence by $\varphi_n$. Since $\varphi_n\geq0$ and $\int_Vf\varphi_n^{\alpha}d\mu=1$, letting $n\rightarrow+\infty$ shows $\hat{\varphi}\geq0$ and $\int_Vf\hat{\varphi}^{\alpha}d\mu=1$. This implies that $\hat{\varphi}\not\equiv0$. Since the energy functional $I(\varphi)$ is continuous, we have $\beta=I(\hat{\varphi})$.\
**Step 3**. $\hat{\varphi}>0$.
Computing the Euler-Lagrange equation of $I(\varphi)$, we get $$\label{equ-Euler-Lagrange}
\frac{d}{dt}\Big|_{t=0}I(\varphi+t\phi)=-p\left(\int_Vf\varphi^{\alpha} d\mu\right)^{-\frac{p}{\alpha}}\int_V\left(\Delta_p\varphi+h\varphi^{p-1}-\lambda_{\varphi}f\varphi^{\alpha-1}\right)\phi d\mu,$$ where $$\label{def-lamda-fai}
\lambda_{\varphi}=-\frac{\int_E|\nabla \varphi|^pd\omega-\int_Vh\varphi^pd\mu}{\int_Vf\varphi^{\alpha} d\mu}$$ for any $\varphi\geq0$, $\varphi\not\equiv0$. Thus $$\label{equ-gradient-I-i}
\frac{\partial I}{\partial \varphi_i}=-p\mu_i(\Delta_p\varphi_i+h_i\varphi^{p-1}_i-\lambda_{\varphi}f_i\varphi^{\alpha-1}_i)\left(\int_Vf\varphi^{\alpha} d\mu\right)^{-\frac{p}{\alpha}}.$$ Since the graph $G$ is connected, if $\hat{\varphi}>0$ fails, then, because $\hat{\varphi}\geq0$ and $\hat{\varphi}\not\equiv0$, there is an edge $i\thicksim j$ such that $\hat{\varphi}_i=0$ but $\hat{\varphi}_j>0$. Now consider $\Delta_p\hat{\varphi}_i$: $$\Delta_p\hat{\varphi}_i=\frac{1}{\mu_i}\sum\limits_{k\thicksim i}\omega_{ik}|\hat{\varphi}_k-\hat{\varphi}_i|^{p-2}(\hat{\varphi}_k-\hat{\varphi}_i)>0.$$ Therefore by (\[equ-gradient-I-i\]), we have $$\frac{\partial I}{\partial \varphi_i}\Big|_{\varphi=\hat{\varphi}}=-p\mu_i\Delta_p\hat{\varphi}_i\left(\int_Vf\hat{\varphi}^{\alpha} d\mu\right)^{-\frac{p}{\alpha}}<0.$$ Recall that $I(\varphi)$ attains its minimum at $\hat{\varphi}$; hence we must have $$\frac{\partial I}{\partial \varphi_i}\Big|_{\varphi=\hat{\varphi}}\geq0,$$ which is a contradiction. Hence $\hat{\varphi}>0$.\
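The Euler-Lagrange formula (\[equ-gradient-I-i\]) can be checked numerically by comparing the closed-form gradient of $I$ against central finite differences on a toy graph. All data below (the triangle graph, weights, and functions) are illustrative choices, not taken from the paper.

```python
# Toy data: triangle graph with unit weights (illustrative).
n, p, alpha = 3, 2.5, 4.0
edges = {(0, 1): 1.0, (0, 2): 1.0, (1, 2): 1.0}
mu = [1.0, 1.0, 1.0]
h = [0.3, -0.1, 0.2]
f = [1.0, 2.0, 1.5]

def plap(phi, i):
    """(Delta_p phi)_i on the toy graph."""
    s = 0.0
    for (a, b), w in edges.items():
        if i in (a, b):
            j = b if i == a else a
            d = phi[j] - phi[i]
            s += w * abs(d) ** (p - 2) * d
    return s / mu[i]

def parts(phi):
    grad = sum(w * abs(phi[j] - phi[i]) ** p for (i, j), w in edges.items())
    pot = sum(mu[k] * h[k] * phi[k] ** p for k in range(n))
    den = sum(mu[k] * f[k] * phi[k] ** alpha for k in range(n))
    return grad, pot, den

def I(phi):
    grad, pot, den = parts(phi)
    return (grad - pot) * den ** (-p / alpha)

def grad_I(phi):
    """Closed-form gradient dI/dphi_i from the Euler-Lagrange computation."""
    grad, pot, den = parts(phi)
    lam = -(grad - pot) / den  # lambda_phi from the paper
    return [-p * mu[i] * (plap(phi, i) + h[i] * phi[i] ** (p - 1)
                          - lam * f[i] * phi[i] ** (alpha - 1)) * den ** (-p / alpha)
            for i in range(n)]

# Central finite differences of I at a strictly positive test point.
phi, eps = [1.0, 1.3, 0.8], 1e-6
fd = []
for i in range(n):
    up, dn = phi.copy(), phi.copy()
    up[i] += eps
    dn[i] -= eps
    fd.append((I(up) - I(dn)) / (2 * eps))
```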
**Step 4**. $\hat{\varphi}$ satisfies the equation (\[def-Yamabe-equ-p\]), that is, $$\label{equ-final}
\Delta_p\hat{\varphi}+h\hat{\varphi}^{p-1}=\lambda_{\hat{\varphi}} f\hat{\varphi}^{\alpha-1},$$ where $\lambda_{\hat{\varphi}}$ is defined according to (\[def-lamda-fai\]). Because $I(\varphi)$ attains its minimum value at $\hat{\varphi}$, which lies in the interior of $\{\varphi\in C(V):\varphi\geq0\}$, so $$\frac{d}{dt}\Big|_{t=0}I(\hat{\varphi}+t\phi)=0$$ for all $\phi\in C(V)$. This leads to (\[equ-final\]).\
**Acknowledgements:** The author would like to thank Professor Gang Tian and Yanxun Chang for constant encouragement. The author would also like to thank Dr. Wenshuai Jiang, Xu Xu for many helpful conversations. The research is supported by National Natural Science Foundation of China under Grant No.11501027, and Fundamental Research Funds for the Central Universities (Nos. 2015JBM103, 2014RC028, 2016JBM071 and 2016JBZ012).
[50]{}
Aubin, T. *Some nonlinear problems in Riemannian geometry*. Springer Monographs in Mathematics. Springer-Verlag, Berlin, 1998.
Ge, H. *p-th Kazdan Warner equation on graph*, in preparation.
Grigor’yan, A.; Lin, Y.; Yang, Y. *Kazdan-Warner equation on graph*. Calc. Var. Partial Differential Equations 55 (2016), no. 4, Paper No. 92, 13 pp.
Grigor’yan, A.; Lin, Y.; Yang, Y. *Yamabe type equations on graphs*. J. Differential Equations 261 (2016), no. 9, 4924–4943.
Lee, J. M.; Parker, T. H. *The Yamabe problem*. Bull. Amer. Math. Soc. (N.S.) 17 (1987), no. 1, 37–91.
Yamabe, H. *On the deformation of Riemannian structures on compact manifolds*. Osaka Math. J. 12, (1960), 21-37.
Huabin Ge: hbge@bjtu.edu.cn
Department of Mathematics, Beijing Jiaotong University, Beijing 100044, P.R. China
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We study a class of quantum channels arising from the representation theory of compact quantum groups that we call Temperley-Lieb quantum channels. These channels simultaneously extend those introduced in [@BrCo18b], [@Al14], and [@LiSo14]. (Quantum) Symmetries in quantum information theory arise naturally from many points of view, providing an important source of new examples of quantum phenomena, and also serve as useful tools to simplify or solve important problems. This work provides new applications of quantum symmetries in quantum information theory. Among others, we study entropies and capacities of Temperley-Lieb channels, their (anti-)degradability, PPT and entanglement breaking properties, as well as the behaviour of their tensor products with respect to entangled inputs. Finally, we compare the Temperley-Lieb channels with the (modified) TRO-channels recently introduced in [@GaJuLa16].'
address:
- 'Michael Brannan, Department of Mathematics, Mailstop 3368, Texas A&M University, College Station, TX 77843-3368, USA'
- 'Benoît Collins, Department of Mathematics, Kyoto University, Kyoto 606-8502, Japan '
- 'Hun Hee Lee, Department of Mathematical Sciences and the Research Institute of Mathematics, Seoul National University, Gwanak-ro 1, Gwanak-gu, Seoul 08826, Republic of Korea'
- 'Sang-Gyun Youn, Department of Mathematical Sciences, Seoul National University, GwanAkRo 1, Gwanak-Gu, Seoul 08826, South Korea'
author:
- Michael Brannan
- Benoît Collins
- Hun Hee Lee
- 'Sang-Gyun Youn'
bibliography:
- 'TL-Capacity-MOE.bib'
title: 'Temperley-Lieb quantum channels'
---
Introduction {#sec-intro}
============
A fundamental problem in (quantum) information theory is to understand the capacity of a noisy communications channel. In the quantum world, this is harder, because there are many notions of capacity, non-trivial additivity questions related to these capacities, and a very poor understanding of the behaviour of quantum channels under the operation of tensoring. The non-trivial channels for which many entropic or capacity-related quantities can be computed, and are of non-trivial value or interest, are rather scarce. One reason for this paucity is that many quantities are defined via minimizers, and many properties (e.g. PPT, the entanglement breaking property (shortly, EBT), degradability, and so on) rely on the existence of auxiliary objects or on computations of tensors that are close to impossible to describe effectively without additional conceptual assumptions on the quantum channel.
One of the most natural (and, to our mind, underrated) properties of a quantum channel is to have some sort of group symmetry. In this paper, we will focus on quantum channels which feature symmetries with respect to structures more general than groups: compact quantum groups. For example, the notion of a covariant quantum channel with respect to a compact group action was introduced in many contexts ([@WeHo02; @DaFuHo06; @MoStDa17; @Al14; @LiSo14; @Rit05]), but these properties have not been used extensively for the analytic side of quantum information theory (shortly, QIT), such as estimating quantities. In addition, most of the time, the covariance under consideration is with respect to the most elementary group representations, e.g., the basic representation of a matrix group $G \subset M_n({\mathbb C})$ on ${\mathbb C}^n$. The principal reason behind the restriction to the basic representations so far is that the symmetries involved and the analysis behind many aspects of representation theory are not well understood to the degree required to estimate important quantities. Nonetheless, it was observed in many places that such symmetries can be useful (e.g. [@MuReWo16; @HaMu15; @Sc05; @DaFuHo06; @KoWe09; @SaWoPeCi09; @MoStDa17], etc.). See also [@CoOsSa18] for a covariant characterization of $k$-positive maps.
The first systematic attempt to remedy this limitation was conducted by Al Nuwairan [@Al14] in the context of $SU(2)$ symmetries. Here, Al Nuwairan investigated quantum channels arising from the intertwining isometries of the irreducible decomposition of the tensor product of two irreducible representations of $SU(2)$, which we will call $SU(2)$-Temperley-Lieb quantum channels (shortly, $SU(2)$-TL-channels). Thanks to the well-known $SU(2)$-Clebsch-Gordan formulas, explicit results could be obtained and it turned out that $SU(2)$-TL-channels play an important role in describing general $SU(2)$-covariant quantum channels. However, from the perspective of entanglement theory, the performance of $SU(2)$-TL-channels was not spectacular. Subsequently, [@BrCo18b] considered a quantum extension of $SU(2)$-TL-channels using irreducible representations of free orthogonal quantum groups, which we call $O^+_N$-TL-channels in this paper, and noticed that a notion of rapid decay was exactly the concept needed to estimate precisely the entanglement in a highly entangled setup. The main idea was to replace group symmetries by quantum group symmetries, especially for the free orthogonal quantum group $O_N^+$ case, whose main advantage is that it allows one to remain in a well-understood C$^\ast$-tensor category (the Temperley-Lieb category) which facilitates very explicit computations and estimates.
The present work undertakes a much more systematic study of $SU(2)$-TL-channels and $O^+_N$-TL-channels, and compares their various information theoretic properties. One important achievement of this paper is that the minimum output entropy (shortly, MOE) $H_{\min}$, the one-shot quantum capacity $Q^{(1)}$ and the Holevo capacity $\chi$ can be estimated, and that these estimates are asymptotically sharp as $N$ becomes large, in the case of $O_N^+$-TL-channels. More generally, the main results of this paper are summarized in the following table:
**Properties**$\backslash$**Channels** $O^+_N$-TL-ch. \[sec. \[sec:moe-cap\], sec. \[sec:EBP-PPT\]\] $SU(2)$-TL-ch. \[sec. \[sec:EBP-PPT\]\]
---------------------------------------- --------------------------------------------------------------- -----------------------------------------
$H_{\min}$ asympt. sharp [@Al13]
$Q^{(1)}$ and $\chi$ asympt. sharp rough estimates
EBT No except for the lowest weight complete
PPT No except for the lowest weight with $N \gg 1$ complete
(Anti-)Degradability No except for the lowest weight with $N \gg 1$ partial results
$C$ (classical capacity) $C\le (2+{\varepsilon})\chi$ with $N \gg 1$ ? (open)
Equivalence to TRO ch. ? (open) No in general \[sec. \[sec:TRO\]\]
: Summary of results.[]{data-label="tab:table1"}
The term TRO above will be clarified in more detail later in the introduction and in section \[sec:TRO\]. As the table shows, many interesting and unexpected phenomena are unveiled, which we find counterintuitive, and whose proofs boil down to an extensive case analysis. Just to mention a few:
- Many non-trivial results can be obtained about the degradability and anti-degradability of the covariant quantum channels. To the best of our knowledge, although these notions are really important to estimate capacities (and we use such results), there are almost no non-trivial examples in the literature of quantum channels for which one can assess the degradability and anti-degradability. Our computation is possible thanks to averaging methods stemming from (quantum) group invariance.
- In most cases, $O_N^+$-TL-channels with large $N$ have a highly non-trivial structure. Indeed, they are not PPT, not degradable, not anti-degradable except for the possibility of lowest weight subrepresentations, which we still have not settled. Moreover, we present a complete list for EBT and PPT for $SU(2)$-TL-channels and it turns out that the notions of PPT and EBT are actually equivalent in the case of $SU(2)$. One important ingredient here is the diagrammatic calculus for Temperley-Lieb category covered in Section \[subsec:Diagram\].
- On the other hand, we reveal unexpected results on (anti-)degradability of $SU(2)$-TL-channels. We show that they are degradable for extremal cases such as lowest or highest weight, whereas it is not true for other intermediate cases. Indeed, we provide an example of a non-degradable $SU(2)$-TL-channel in low dimensions (see Example \[ex:non-deg-non-antideg\]).
One crucial point in QIT is that it is often unavoidable to consider tensor products of quantum channels, and in general, computations in tensor products become very involved. However, when the channels have nice symmetries, as we show in this paper, computations can remain tractable, even in non-trivial cases. The main technical tool is an application of the diagrammatic calculus explained in Section \[subsec:Diagram\], which can be applied to $O^+_N$-TL-channels; see Section \[sec:tensor\] for the details.
Finally, TL-channels bear some resemblance to another important family of channels introduced by [@GaJuLa16], called TRO-channels, and their modified versions. Here, TRO refers to a ternary ring of operators, and the name “TRO-channel” comes from the fact that the Stinespring space, i.e. the range of the Stinespring isometry, actually has a TRO structure. Examples of TRO-channels include random unitary channels from regular representations of finite (quantum) groups and generalized dephasing channels [@GaJuLa16]. While the authors were preparing this manuscript and discussing it publicly for the first time, the question of how our TL-channels compare to TRO-channels was posed (and, in particular, whether or not TL implies TRO). The answer is that these classes of channels exhibit important differences, as explained in section \[sec:TRO\].
This paper is organized as follows. After this introduction, section \[sec:preliminaries\] provides some background and reminders about quantum channels and compact quantum groups. Section \[sec:TL\] recalls some details on free orthogonal quantum groups and their associated representation theory. Then, we introduce Temperley-Lieb quantum channels (shortly, TL-channels) and collect some details on their associated diagrammatic calculus. Section \[sec:moe-cap\] contains results about the entropies and capacities of TL-channels. Then, section \[sec:EBP-PPT\] addresses the property of entanglement breaking and PPT for TL-channels. Section \[sec:tensor\] shows that $O^+_N$-TL-channels (unlike most ‘structureless’ quantum channels) behave very well under tensor products. Finally, section \[sec:TRO\] addresses the question of comparing TL-channels with Junge’s (modified) TRO-channels.
Acknowledgements {#acknowledgements .unnumbered}
----------------
MB’s research was supported by NSF grant DMS-1700267. BC’s research was supported by JSPS KAKENHI 17K18734, 17H04823, 15KK0162. HHL and SY’s research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) Grant NRF-2017R1E1A1A03070510 and the National Research Foundation of Korea (NRF) Grant funded by the Korean Government (MSIT) (Grant No.2017R1A5A1015626).
The authors are grateful to Marius Junge for useful comments and discussons during various stages of preparation of this manuscript.
Preliminaries {#sec:preliminaries}
=============
Quantum channels and their information theoretic quantities
-----------------------------------------------------------
Here, we are only interested in quantum channels based on finite dimensional Hilbert spaces. Recall that a quantum channel is a linear completely positive trace-preserving (shortly, CPTP) map $\Phi: B(H_A) \to B(H_B)$. It is well-known that there is a so called [*Stinespring isometry*]{} $V : H_A \to H_B \otimes H_E$ such that $$\Phi(\rho) = (\iota \otimes {\rm Tr}_E)(V \rho V^*),\; \rho \in B(H_A),$$ where ${\rm Tr}_E$ refers to the trace on $B(H_E)$. For a given Stinespring isometry $V$ we can consider the complementary channel $\tilde{\Phi}:B(H_A) \to B(H_E)$ of $\Phi$ given by $$\tilde{\Phi}(\rho) = ({\rm Tr}_B \otimes \iota)(V \rho V^*),\; \rho \in B(H_A).$$
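The Stinespring picture is easy to realize numerically: both $\Phi$ and $\tilde{\Phi}$ are partial traces of the same conjugation by $V$. The sketch below does exactly this; the "copy" isometry in the example, which yields a completely dephasing qubit channel, is a toy choice and not taken from the paper.

```python
import numpy as np

def stinespring_pair(V, dB, dE):
    """Return (Phi, Phi_tilde) for an isometry V : H_A -> H_B (x) H_E of shape (dB*dE, dA)."""
    def dilate(rho):
        # V rho V* viewed as a 4-tensor on H_B (x) H_E
        return (V @ rho @ V.conj().T).reshape(dB, dE, dB, dE)
    Phi = lambda rho: np.trace(dilate(rho), axis1=1, axis2=3)   # trace out E
    Phit = lambda rho: np.trace(dilate(rho), axis1=0, axis2=2)  # trace out B
    return Phi, Phit

# Toy example: V|i> = |i> (x) |i> gives the completely dephasing qubit channel.
V = np.zeros((4, 2))
V[0, 0] = 1.0
V[3, 1] = 1.0
Phi, Phit = stinespring_pair(V, dB=2, dE=2)
```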
For each quantum channel there are several important information theoretic quantities, which we recall in the following.
Let $\Phi:B(H_A)\rightarrow B(H_B)$ be a quantum channel.
1. The Holevo capacity $\chi(\Phi)$ is defined by $$\chi (\Phi) := \max \Big \{ H(\Phi \big (\sum_x p_x \rho_x\big ))-\sum_x p_x H(\Phi(\rho_x)) \Big \},$$ where the maximum runs over all possible choice of ensemble of quantum states $\{(p_x), (\rho_x)\}$ on $H_A$ and $H(\cdot)$ refers to the von Neumann entropy of a state $\rho \in B(H_A)$.
2. The “one-shot” quantum capacity $Q^{(1)}(\Phi)$ is defined by $$Q^{(1)}(\Phi) := \max \{ H(\Phi(\rho)) - H(\tilde{\Phi}(\rho)) \}$$ where the maximum runs over all quantum states $\rho$ in $B(H_A)$. Note that the definition is independent of the choice of Stinespring isometry which determines the complementary channel $\tilde{\Phi}$.
3. The classical capacity $C(\Phi)$ and the quantum capacity $Q(\Phi)$ are obtained by the regularizations of the Holevo capacity and the “one-shot” quantum capacity, respectively, as follows. $$C(\Phi)=\lim_{n \to \infty} \frac{\chi (\Phi^{\otimes n})}{n},\;\; Q(\Phi) =\lim_{n\to \infty} \frac{Q^{(1)}(\Phi^{\otimes n})}{n}.$$
4. The minimum output entropy (MOE) $H_{\min}(\Phi)$ is given by $$H_{\min}(\Phi):=\min_{\rho} H(\Phi(\rho)),$$ where the minimum runs over all quantum states $\rho$ in $B(H_A)$.
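The entropic quantities above are straightforward to evaluate numerically. The sketch below computes the von Neumann entropy (base-2 logarithm; any fixed base works for the statements above) and a Monte-Carlo upper bound on $H_{\min}$; since $H$ is concave, the minimum is attained at pure states, so sampling pure inputs suffices. The sampling scheme itself is an illustrative choice.

```python
import numpy as np

def von_neumann_entropy(rho):
    """H(rho) = -Tr[rho log2 rho], computed from the eigenvalues."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]  # drop (numerically) zero eigenvalues
    return float(-(ev * np.log2(ev)).sum())

def moe_upper_bound(Phi, dA, trials=200, seed=0):
    """Monte-Carlo upper bound on H_min(Phi) over random pure inputs."""
    rng = np.random.default_rng(seed)
    best = float("inf")
    for _ in range(trials):
        v = rng.normal(size=dA) + 1j * rng.normal(size=dA)
        v /= np.linalg.norm(v)
        best = min(best, von_neumann_entropy(Phi(np.outer(v, v.conj()))))
    return best
```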
The two quantities $\chi$ and $H_{\min}$ are closely related. In general, we have the following for a quantum channel $\Phi:B(H_A)\rightarrow B(H_B)$. $$\label{eq-Holevo-MOE}
\chi(\Phi) \le \log d_B - H_{\min}(\Phi),$$ where $d_B$ refers to the dimension of $H_B$ [@Hol-book].
The regularization procedure for the classical capacity and the quantum capacity causes serious difficulties for the calculations of capacities in general. There are, however, some properties of channels that allow us to simplify the calculation, which we present below.
Let $\Phi:B(H_A)\rightarrow B(H_B)$ be a quantum channel with the complementary channel $\tilde{\Phi}:B(H_A) \to B(H_E)$.
1. We say that $\Phi$ is degradable (resp. anti-degradable) if there exists a channel $\Psi:B(H_B)\rightarrow B(H_E)$ (resp. $\Psi:B(H_E)\rightarrow B(H_B))$ such that $\widetilde{\Phi}=\Psi\circ \Phi$ (resp. $\Phi= \Psi\circ \widetilde{\Phi})$.
2. We say that $\Phi$ is entanglement-breaking (shortly, EBT) if there exist a probability distribution $(p_x)_x$ and product states $\rho_x^B\otimes \rho_x^A \in B(H_B \otimes H_A)$ such that the Choi matrix of $\Phi$, $\displaystyle C_{\Phi} :=\frac{1}{d_A}\sum_{i,j=1}^{d_A}\Phi(e_{ij})\otimes e_{ij}$ is given by $\displaystyle C_{\Phi} = \sum_x p_x \rho_x^B\otimes \rho_x^A.$
3. We say that $\Phi$ is PPT (positive partial transpose) if $(T_B\otimes \iota) C_\Phi$ is a positive matrix in $B(H_B \otimes H_A)$, equivalently if $T_B\circ \Phi$ is also a channel where $T_B$ is the transpose map on $B(H_B)$.
4. We say that $\Phi$ is [*bistochastic*]{} if $\Phi(\frac{1_A}{d_A}) = \frac{1_B}{d_B}$.
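Both the Choi matrix and the PPT condition above are mechanical to check in small dimensions. The sketch below follows the normalization of the definition (the factor $1/d_A$); the two test channels in the assertions, the identity and the fully depolarizing channel, are standard toy examples and not taken from the paper.

```python
import numpy as np

def choi_matrix(Phi, dA):
    """C_Phi = (1/dA) sum_ij Phi(e_ij) (x) e_ij, with the output factor first."""
    C = 0
    for i in range(dA):
        for j in range(dA):
            Eij = np.zeros((dA, dA))
            Eij[i, j] = 1.0
            C = C + np.kron(Phi(Eij), Eij)
    return C / dA

def is_ppt(Phi, dA, dB, tol=1e-10):
    """Check positivity of (T_B (x) id) C_Phi, i.e. partial transpose on the output factor."""
    C = choi_matrix(Phi, dA).reshape(dB, dA, dB, dA)
    CtB = C.transpose(2, 1, 0, 3).reshape(dB * dA, dB * dA)
    return float(np.linalg.eigvalsh(CtB).min()) >= -tol
```

For the identity channel the Choi matrix is the maximally entangled state, whose partial transpose has a negative eigenvalue, while the fully depolarizing channel has Choi matrix $\frac{1}{d_Ad_B}1$ and is PPT (indeed EBT).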
From the definition it is clear that EBT channels are PPT and by [@Hol-book Corollary 10.28] they are also anti-degradable. Note that we have the following consequences of the above properties.
\[prop:implications\] Let $\Phi:B(H_A)\rightarrow B(H_B)$ be a quantum channel.
1. [@DeSh05] If $\Phi$ is degradable, then $Q(\Phi) = Q^{(1)}(\Phi)$.
2. [@HoHoHo96; @Pe96; @Hol-book] If $\Phi$ is PPT or anti-degradable, then $Q(\Phi) = Q^{(1)}(\Phi) = 0$.
3. [@Sh02] If $\Phi$ is EBT, then $C(\Phi) = \chi(\Phi)$.
Some bistochastic channels have the following straightforward capacity estimates.
\[prop-bistochastic-estimates\] Let $\Phi: B(H_A) \to B(H_B)$ be a bistochastic quantum channel with a Stinespring isometry $V: H_A \to H_B \otimes H_E$. Suppose further that its complementary channel $\tilde{\Phi}$ is also bistochastic, then we have $$\log \frac{d_B}{d_E} \le Q^{(1)}(\Phi) \le C(\Phi) \le \min \{ \log d_A, \log d_B, \log \frac{d_A d_B}{d_E} \}.$$
We first observe that positivity of $\Phi$ tells us $$||\Phi||_{S^1(H_A) \to B(H_B)} \le ||\Phi||_{B(H_A) \to B(H_B)} = ||\Phi(1_A)||_{B(H_B)} = \frac{d_A}{d_B}.$$ Since $\Phi^{\otimes n}$ is also bistochastic, we also have $||\Phi^{\otimes n}||_{S^1(H_A^{\otimes n}) \to B(H_B^{\otimes n})} \le \big(\frac{d_A}{d_B}\big)^n$. Thus, we have $$H_{\min}(\Phi^{\otimes n}) = \min_{\rho} H(\Phi^{\otimes n}(\rho)) \ge -\log ||\Phi^{\otimes n}||_{S^1(H_A^{\otimes n}) \to B(H_B^{\otimes n})} \ge n \log \frac{d_B}{d_A}.$$ Note also that $H_{\min}(\Phi^{\otimes n}) = H_{\min}(\widetilde{\Phi^{\otimes n}}) = H_{\min}(\tilde{\Phi}^{\otimes n}) \ge n \log \frac{d_E}{d_A}$ so that we have $$\begin{aligned}
\chi(\Phi^{\otimes n})
& \le \log d_B^n - H_{\min}(\Phi^{\otimes n})\\
& \le n \log d_B - n \cdot \max \{ \log \frac{d_B}{d_A}, \log \frac{d_E}{d_A}\}\\
& = n \cdot \min\{\log d_A, \log \frac{d_Ad_B}{d_E}\}.
\end{aligned}$$ Thus, we have $$C(\Phi) = \lim_{n\to \infty}\frac{\chi(\Phi^{\otimes n})}{n} \le \min \{ \log d_A, \log d_B, \log \frac{d_A d_B}{d_E} \}$$ together with the obvious estimate $\chi(\Phi^{\otimes n}) \le n\cdot \log d_B$.
The lower bound is direct from the definition of the “one-shot” quantum capacity. $$Q^{(1)}(\Phi) \ge H(\Phi(\frac{1_A}{d_A})) - H(\tilde{\Phi}(\frac{1_A}{d_A})) \ge H(\frac{1_B}{d_B}) - \log d_E = \log \frac{d_B}{d_E}.$$
Compact quantum groups and their representations
------------------------------------------------
A [*compact quantum group*]{} is a pair ${\mathbb{G}}=(C({\mathbb{G}}),\Delta)$ where $C({\mathbb{G}})$ is a unital $C^*$-algebra and $\Delta:C({\mathbb{G}})\rightarrow C({\mathbb{G}})\otimes_{\min}C({\mathbb{G}})$ is a unital $*$-homomorphism satisfying (1) $(\Delta\otimes \iota)\Delta= (\iota\otimes \Delta)\Delta$ and (2) each of the spaces $\mathrm{span}\left \{\Delta(a)(1\otimes b):a,b\in C({\mathbb{G}}) \right\}$ and $\mathrm{span}\left \{\Delta(a)(b\otimes 1):a,b\in C({\mathbb{G}}) \right\}$ is dense in $C({\mathbb{G}})\otimes_{\min}C({\mathbb{G}})$. It is well known that every compact quantum group has the (unique) [*Haar state*]{} $h$, which is a state on $C({\mathbb{G}})$ such that $(\iota \otimes h)\Delta= h(\cdot )1=(h\otimes \iota)\Delta.$ If the Haar state $h$ is tracial, i.e. $h(ab)=h(ba)$ for all $a,b\in C({\mathbb{G}})$, then ${\mathbb{G}}$ is said to be of [*Kac type*]{}.
A (finite dimensional) [*representation*]{} of ${\mathbb{G}}$ is a pair $(u,H_u)$ where $H_u$ is a finite dimensional Hilbert space and $u=(u_{i,j})_{1\leq i,j\leq d_u}\in B(H_u)\otimes C({\mathbb{G}})$ such that $\displaystyle \Delta(u_{i,j})=\sum_{k=1}^{d_u}u_{i,k}\otimes u_{k,j}$ for all $1\leq i,j\leq d_u$. Here, $d_u$ refers to the dimension of $u$. The representation $u$ is called [*unitary*]{} if it further satisfies $u^*u=1_u\otimes 1=uu^*$. Whenever we have a unitary representation $(u,H_u)$ of ${\mathbb G}$ we obtain a so-called [*${\mathbb G}$-action*]{} on $B(H_u)$ $$\label{eq-G-action}
\beta_u: B(H_u) \to B(H_u) \otimes C({\mathbb G}), \quad x \mapsto u(x\otimes 1 )u^*.$$ For given unitary representations $v$ and $w$, we say that a linear map $T:B(H_v)\rightarrow B(H_w)$ [*intertwines*]{} $v$ and $w$ if $$(T\otimes 1) v = w (T\otimes 1)$$ and denote by $\mathrm{Hom}_{\mathbb G}(v,w)$ (simply, $\mathrm{Hom}(v,w)$) the space of [*intertwiners*]{}. If $\mathrm{Hom}(v,w)$ contains an invertible intertwiner, then $v$ and $w$ are said to be [*equivalent*]{}. A unitary representation $(v,H_v)$ is called [*irreducible*]{} if $\mathrm{Hom}(v)=\mathrm{Hom}(v,v)={\mathbb{C}}\cdot 1_v$ and we denote by $\mathrm{Irr}({\mathbb{G}})$ the set of all irreducible unitary representations of ${\mathbb{G}}$ up to equivalence.
When we fix a representative $u^{\alpha}= [u^{\alpha}_{ij}]_{i,j=1}^{d_\alpha} \in M_{d_{\alpha}} (C({\mathbb G}))$ for each $\alpha \in \mathrm{Irr}({\mathbb{G}})$, the Peter-Weyl theory for compact quantum groups says the space $\mathrm{Pol}({\mathbb{G}}) := \mathrm{span}\{u^{\alpha}_{ij}: \alpha \in \mathrm{Irr}({\mathbb{G}}), 1\le i,j\le d_\alpha \}$ is a subalgebra of $C({\mathbb G})$ containing all the information on the quantum group ${\mathbb G}$. In particular, it hosts the map $S$ called the [*antipode*]{} determined by the formula $$S(u^{\alpha}_{ij})= (u_{ji}^{\alpha})^*, \;\; \alpha \in \mathrm{Irr}({\mathbb{G}}), 1\le i,j\le d_\alpha.$$
For representations $v=(v_{ij})$ and $w=(w_{kl})$ we define its [*tensor product*]{} $v \tp w$ by $$v\tp w = \sum_{i,j=1}^{d_v}\sum_{k,l=1}^{d_w} e_{ij}\otimes e_{kl}\otimes v_{ij}w_{kl} \in B(H_v)\otimes B(H_w)\otimes C({\mathbb{G}}) .$$ Then [*the representation category*]{} consisting of unitary representations as objects and intertwiners as morphisms is a [*strict $C^*$-tensor category*]{} under the natural adjoint operation $Hom(v,w)\rightarrow Hom(w,v), T\mapsto T^*$, and the tensor product $\tp$. It is well known that any finite dimensional representation decomposes into a direct sum of irreducible representations, so that we have $$v \tp w \cong \oplus^N_{i=1}u_i.$$ In case $u$ is a component of the irreducible decomposition of $v \tp w$ we write $u\subset v \tp w$.
For a given unitary representation $(v,H_v)$ we consider the map $j:B(H_v)\rightarrow B(\overline{H_v})$ defined by $j(T)\overline{\xi}=\overline{T^*\xi}$. Then the [*contragredient representation*]{} of $v$ is given by $$v^c=(v_{ij}^*)_{1\leq i,j\leq d_v}=(j\otimes \iota)(v^{-1})\in B(\overline{H_v})\otimes C({\mathbb{G}}).$$ The contragredient representation $v^c$ is unitary whenever ${\mathbb{G}}$ is of Kac type.
For each compact quantum group ${\mathbb G}$ we have its opposite version ${\mathbb G}^{\rm op}$ with the same algebra $C({\mathbb G}^{\rm op}) = C({\mathbb G})$, but with the flipped co-multiplication $\Delta_{\rm op} = \Sigma \circ \Delta$, where $\Sigma$ is the flip map on $C({\mathbb G}) \otimes_{\min} C({\mathbb G})$. Then, for any unitary representation $u= (u_{ij}) \in B(H_u)\otimes C({\mathbb G})$ of ${\mathbb G}$ we have an associated representation $u^* = (u^*_{ji}) \in B(H_u)\otimes C({\mathbb G})$ of ${\mathbb G}^{\rm op}$.
Clebsch-Gordan channels
-----------------------
Let ${\mathbb G}$ be a compact quantum group and $(u,H_u)$, $(v,H_v)$ and $(w,H_w)$ be unitary irreducible representations of ${\mathbb G}$ such that $u \subset v \tp w$, which gives us its intertwining isometry $\alpha_u^{v,w}:H_u \to H_v \otimes H_w$. By using $\alpha_u^{v,w}$ as the Stinespring isometry we get the following complementary pair of quantum channels: $$\begin{aligned}
\Phi_u^{\bar{v}, w}:B(H_u) \to B(H_w); \quad\rho \mapsto {{\operatorname{Tr}}}_v(\alpha_u^{v,w}\rho(\alpha_u^{v,w})^*)\\
\Phi_u^{v, \bar{w}}:B(H_u) \to B(H_v); \quad\rho \mapsto {{\operatorname{Tr}}}_w(\alpha_u^{v,w}\rho(\alpha_u^{v,w})^*).\end{aligned}$$ We call the above channels Clebsch-Gordan channels (shortly, CG-channels) since the isometry $\alpha_u^{v,w}$ reflects the Clebsch-Gordan coefficients directly. Note that the symbol $\bar{v}$ does not refer to the conjugate representation; instead it means that we trace out the $H_v$ part. These channels have been studied by Al-Nuwairan [@Al14], Brannan-Collins [@BrCo18b], and also Lieb-Solovej [@LiSo14]. It turns out that CG-channels preserve certain “quantum symmetries”. Recall that groups provide a certain symmetry on quantum channels through their (projective) unitary representations, namely the covariance of channels. This concept naturally extends to the case of quantum groups as follows.
Let $\Phi:B(H_A)\rightarrow B(H_B)$ be a quantum channel. Suppose that there are unitary representations $(u, H_A)$ and $(w, H_B)$ of a compact quantum group ${\mathbb G}$ such that $$(\iota \otimes \Phi)(\beta_u(\rho)) = \beta_w ( \Phi (\rho)), \qquad \rho \in B(H_A),$$ where $\beta_u$ and $\beta_w$ are the ${\mathbb G}$-actions from \[eq-G-action\]. Then we say that the channel $\Phi$ is ${\mathbb G}$-covariant with respect to $(u,w)$. When there is no risk of confusion we simply say ${\mathbb G}$-covariant.
Note that the covariance with respect to group representations has been studied in various contexts and has provided useful tools to handle information-theoretic problems [@Sc05; @DaFuHo06; @KoWe09; @MeWo09; @SaWoPeCi09; @MaSp14; @NaUe17; @MoStDa17].
We show that, under mild assumptions, CG-channels are also [*${\mathbb G}$-covariant*]{}.
Let $u$, $v$ and $w$ be irreducible unitary representations of a compact quantum group ${\mathbb G}$ such that $u \subset v \tp w$. Then the CG-channel $\Phi_u^{v, \bar{w}}$ is ${\mathbb G}$-covariant with respect to $(u,v)$ if the conjugate representation $w^c$ is also unitary. Similarly, $\Phi_u^{\bar{v}, w}$ is ${\mathbb G}^{\rm op}$-covariant with respect to $(u^*,w^*)$ if $v^c$ is unitary.
We first check the case of $\Phi_u^{v, \bar{w}}$. For any quantum state $\rho \in B(H_u)$ we have $$\begin{aligned}
\lefteqn{(\Phi_u^{v, \bar{w}} \otimes \iota)(u (\rho \otimes 1)u^*)}\\
& = \iota \otimes {\rm Tr} \otimes \iota [(\alpha^{v,w}_u \otimes \iota) u (\rho \otimes 1)u^* ((\alpha^{v,w}_u)^* \otimes \iota)]\\
& = \iota \otimes {\rm Tr} \otimes \iota [ (v\tp w)(\alpha^{v,w}_u \otimes \iota) (\rho \otimes 1)((\alpha^{v,w}_u)^* \otimes \iota)(v\tp w)^* ]\\
& = \sum^{d_v}_{i,j,i',j'=1}\sum^{d_w}_{k,l,k',l'=1} \iota \otimes {\rm Tr} [(|i {\rangle}{\langle}j| \otimes |k {\rangle}{\langle}l| )\alpha^{v,w}_u\rho (\alpha^{v,w}_u)^*(|j' {\rangle}{\langle}i'| \otimes |l' {\rangle}{\langle}k'| )] \otimes v_{ij}w_{kl}w^*_{k'l'}v^*_{i'j'}\\
& = \sum^{d_v}_{i,j,i',j'=1}\sum^{d_w}_{l,l'=1} \iota \otimes {\rm Tr} [(|i {\rangle}{\langle}j| \otimes |l' {\rangle}{\langle}l| )\alpha^{v,w}_u\rho (\alpha^{v,w}_u)^*(|j' {\rangle}{\langle}i'| \otimes 1)] \otimes v_{ij}(\sum^{d_w}_{k=1}w_{kl}w^*_{k'l'})v^*_{i'j'}\\
& = \sum^{d_v}_{i,j,i',j'=1}\sum^{d_w}_{l,l'=1} \iota \otimes {\rm Tr} [(|i {\rangle}{\langle}j| \otimes |l' {\rangle}{\langle}l| )\alpha^{v,w}_u\rho (\alpha^{v,w}_u)^*(|j' {\rangle}{\langle}i'| \otimes 1)] \otimes v_{ij}(w^tw^c)_{ll'}v^*_{i'j'}\\
& = \sum^{d_v}_{i,j,i',j'=1}\iota \otimes {\rm Tr} [(|i {\rangle}{\langle}j| \otimes 1)\alpha^{v,w}_u\rho (\alpha^{v,w}_u)^*(|j' {\rangle}{\langle}i'| \otimes 1)] \otimes v_{ij}v^*_{i'j'}\\
& = v (\Phi_u^{v, \bar{w}}(\rho) \otimes 1) v^*,
\end{aligned}$$ where we use the tracial property for the fourth equality and the unitarity of $w^c$, which gives $(w^tw^c)_{ll'} = \delta_{ll'}$.
For $\Phi_u^{\bar{v}, w}$ we observe that $$(\alpha^{v,w}_u \otimes \iota) u^* (|\xi{\rangle}\otimes 1 ) = (\alpha^{v,w}_u \otimes S) u ( |\xi{\rangle}\otimes 1) = (\iota \otimes S)[(v\tp w) (\alpha^{v,w}_u |\xi{\rangle}\otimes 1) ],$$ where $S$ is the antipode of the quantum group ${\mathbb G}$. Thus, we get $$\begin{aligned}
\lefteqn{(\Phi_u^{\bar{v}, w} \otimes \iota)(u^* (\rho \otimes 1)u )}\\
& = \sum^{d_v}_{i,j,i',j'=1}\sum^{d_w}_{k,l,k',l'=1} {\rm Tr} \otimes \iota [(|i {\rangle}{\langle}j| \otimes |k {\rangle}{\langle}l| )\alpha^{v,w}_u\rho (\alpha^{v,w}_u)^*(|j' {\rangle}{\langle}i'| \otimes |l' {\rangle}{\langle}k'| )] \otimes w^*_{lk}v^*_{ji}v_{j'i'}w_{l'k'}.
\end{aligned}$$ Then the same argument yields the desired conclusion.
The property of ${\mathbb G}$-covariance has the following useful consequence.
\[prop-CGchannel-bistochastic\] Let $\Phi: B(H_u) \to B(H_v)$ be a quantum channel which is ${\mathbb G}$-covariant with respect to a pair of unitary representations $(u,v)$ of a compact quantum group ${\mathbb G}$. If, in addition, $v$ is assumed to be irreducible, then $\Phi$ is bistochastic. In particular, all CG-channels associated to a Kac type compact quantum group are bistochastic.
Since $\Phi$ is ${\mathbb G}$-covariant and $\frac{1_u}{d_u} \in \text{Hom}(u,u)$, we get $\Phi(\frac{1_u}{d_u}) \in \text{Hom}(v,v).$ But irreducibility and Schur’s lemma then give $\Phi(\frac{1_u}{d_u}) \in {\mathbb C}I$, which implies $\Phi(\frac{1_u}{d_u})= \frac{1_v}{d_v}$.
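To make Proposition \[prop-CGchannel-bistochastic\] concrete in the simplest classical case ${\mathbb G}=SU(2)$, one can check the CG-channel for the embedding of spin $1$ into spin $\frac12 \tp$ spin $\frac12$ numerically. The sketch below is our own illustration (the helper name `cg_channel` is ours, not from the text); it builds the Stinespring isometry from the standard Clebsch-Gordan coefficients and verifies that the maximally mixed state is sent to the maximally mixed state.

```python
import numpy as np

# Stinespring isometry alpha : H_u -> H_v (x) H_w for the SU(2) embedding
# spin-1 c spin-1/2 (x) spin-1/2, in the basis |uu>, |ud>, |du>, |dd>.
s = 1 / np.sqrt(2)
alpha = np.array([[1, 0, 0],   # |1,+1> -> |uu>
                  [0, s, 0],   # |1, 0> -> (|ud> + |du>)/sqrt(2)
                  [0, s, 0],
                  [0, 0, 1]])  # |1,-1> -> |dd>

def cg_channel(rho):
    """Phi_u^{v-bar, w}(rho) = Tr_v(alpha rho alpha^*) : B(C^3) -> B(C^2)."""
    X = (alpha @ rho @ alpha.conj().T).reshape(2, 2, 2, 2)
    return np.einsum('iaib->ab', X)   # partial trace over the H_v leg

assert np.allclose(alpha.conj().T @ alpha, np.eye(3))        # isometry
# covariance together with irreducibility forces bistochasticity:
assert np.allclose(cg_channel(np.eye(3) / 3), np.eye(2) / 2)
```

The same computation with any other admissible isometry exhibits the same maximally-mixed-to-maximally-mixed behaviour in the Kac case.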
The following Proposition tells us that, under the assumption that ${\mathbb{G}}$ is of Kac type and $u\subseteq v \tp w$, the orthogonal projection from $H_v\otimes H_w$ onto the copy $\alpha^{v,w}_u(H_u)$ of $H_u$ can be obtained by an averaging technique using the Haar state, applied to any unit vector $\xi\in \alpha^{v,w}_u(H_u)$. Moreover, together with Theorem \[thm:choi-eq\], the following Proposition will be used to characterize EBT for TL-channels.
\[prop:ave\] Let ${\mathbb{G}}$ be a compact quantum group of Kac type and $u, v, w \in {{\operatorname{Irr}}}({\mathbb G})$ with $u \subset v \tp w$. Then for any unit vector $\xi\in \alpha^{v,w}_u(H_u)\subseteq H_v\otimes H_w $ we have $$\frac{1}{d_u}\alpha^{v,w}_u (\alpha^{v,w}_u)^*= (\iota \otimes \iota\otimes h)((v \tp w)^* ( |\xi{\rangle}{\langle}\xi |\otimes 1 ) (v \tp w)) .$$
Let $A=\displaystyle ( \iota \otimes \iota\otimes h)((v \tp w)^* (|\xi{\rangle}{\langle}\xi |\otimes 1 ) (v \tp w))$. Then, in order to reach the conclusion, it is enough to show that $${\langle}\eta | (\alpha^{v,w}_{u'})^*A \alpha^{v,w}_{u'} |\eta{\rangle}=\frac{\delta_{u,u'}}{d_u} \|\eta\|^2$$ for any irreducible component $u'$ of $v\tp w$ and any $\eta\in H_{u'}$. Indeed,
$$\begin{aligned}
{\langle}\eta | (\alpha^{v,w}_{u'})^*A \alpha^{v,w}_{u'} |\eta{\rangle}&=h([ ({\langle}\eta |( \alpha^{v,w}_{u'})^* \otimes 1) (v\tp w)^* ] (|\xi {\rangle}{\langle}\xi |\otimes 1) [(v\tp w)(\alpha^{v,w}_{u'} |\eta {\rangle}\otimes 1)] )\\
&=h( ( {\langle}\eta |\otimes 1 )(u')^* ( (\alpha^{v,w}_{u'})^* |\xi {\rangle}{\langle}\xi | \alpha^{v,w}_{u'}\otimes 1 ) u'( |\eta {\rangle}\otimes 1)) .\end{aligned}$$
Then the facts that $\displaystyle ( \iota\otimes h)((u')^*(B\otimes 1) u')=\frac{{{\operatorname{Tr}}}(B)}{d_{u'}}1_{u'}$ and $${{\operatorname{Tr}}}((\alpha^{v,w}_{u'})^*|\xi {\rangle}{\langle}\xi | \alpha^{v,w}_{u'})= {\langle}\xi | \alpha^{v,w}_{u'}(\alpha^{v,w}_{u'})^*|\xi{\rangle}= \delta_{u,u'}$$ complete the proof.
Temperley-Lieb Channels {#TL-diagrams}
=======================
\[sec:TL\]
Free orthogonal quantum groups $O_F^+$
--------------------------------------
Let us fix an integer $N \ge 2$ and $F \in \text{GL}_N({\mathbb C})$ satisfying $F \bar F = \pm 1$. We define $C(O_F^+)$ as the universal $C^*$-algebra generated by $u_{ij}$ $(1\leq i,j\leq N)$ with the defining relations (1) $\displaystyle u^*u= 1_N \otimes 1 = uu^*$ and (2) $u=(F\otimes 1) u^c (F^{-1}\otimes 1)$ where $u=(u_{ij})_{1\leq i,j\leq N}\in B({\mathbb{C}}^N )\otimes C(O_F^+)$, which is called the [*fundamental representation*]{}. Then, together with a unital $*$-homomorphism $\Delta:C(O_F^+)\rightarrow C(O_F^+)\otimes_{\min}C(O_F^+)$ determined by $$\Delta(u_{ij})=\sum_{k=1}^N u_{ik}\otimes u_{kj},$$ $O_F^+=(C(O_F^+),\Delta)$ forms a compact quantum group, which is called the free orthogonal quantum group with parameter matrix $F$ [@VaWa96; @Ba96; @Ba97]. In particular, $O_F^+=SU(2)$ if $F=\left (\begin{array}{cc} 0&1\\ -1&0 \end{array} \right )$, and we write $O_N^+$ when $F=1_N$. Note that $O^+_F$ is of Kac type if and only if $F$ is unitary ([@Ba97]), which covers both of the above cases.
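For the matrix $F=\left (\begin{array}{cc} 0&1\\ -1&0 \end{array} \right )$, defining relation (2) specializes to the classical identity $F\bar{u}F^{-1}=u$, which holds for every matrix $u\in SU(2)$. The following short check is our own numerical illustration of this fact.

```python
import numpy as np

F = np.array([[0, 1], [-1, 0]])

# a generic SU(2) matrix [[a, b], [-conj(b), conj(a)]] with |a|^2 + |b|^2 = 1
rng = np.random.default_rng(0)
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
a, b = np.array([a, b]) / np.sqrt(abs(a) ** 2 + abs(b) ** 2)
u = np.array([[a, b], [-b.conjugate(), a.conjugate()]])

assert np.allclose(u.conj().T @ u, np.eye(2))            # u is unitary
assert np.allclose(F @ u.conj() @ np.linalg.inv(F), u)   # u = F u^c F^{-1}
```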
Representations of $O_F^+$
--------------------------
It is known from [@Ba96] that the irreducible representations of $O_F^+$ can be labelled $(v^k)_{k \in {\mathbb N}_0}$ (up to unitary equivalence) in such a way that $v^0 = 1$, $v^1 = u$, the fundamental representation, $v^l \cong \overline{v^l}$, and the following fusion rule holds: $$\begin{aligned}
\label{frules}
v^l \tp v^m \cong \bigoplus_{0 \le r \le \min\{l,m\}} v^{l+m - 2r}.\end{aligned}$$
Denote by $H_k$ the Hilbert space associated to $v^k$. Then $H_0 = {\mathbb C}$, $H_1 = {\mathbb C}^N$, and the fusion rules \[frules\] show that the dimensions $\dim H_k$ satisfy the recursion relations $\dim H_1 \dim H_k = \dim H_{k+1} + \dim H_{k-1}$. Define the quantum parameter $$q_0:= \frac{1}{N}\Big(\frac{2}{1+ \sqrt{1 -4/N^2}}\Big) \in (0,1].$$ Then one has $q_0 +q_0^{-1} = N$, and it can be shown by induction that the dimensions $\dim H_k$ are given by the [*quantum integers*]{} $$\dim H_k = [k+1]_{q_0}: = {q_0}^{-k}\Big(\frac{1-{q_0}^{2k+2}}{1-{q_0}^2}\Big) \qquad (N \ge 3).$$ When $N=2$, we have $q_0=1$, and then $\dim H_k = k+1 = \lim_{q_0 \to 1^-} [k+1]_{q_0}$. Note that for $N \ge 3$, we have the exponential growth asymptotic $[k+1]_{q_0} \sim N^k$ (as $N \to \infty$).
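These dimension formulas are easy to check numerically. The sketch below is our own illustration (the helper `qint` is ours); it implements the equivalent expression $[n]_q=(q^n-q^{-n})/(q-q^{-1})$ and verifies the fusion-rule recursion for $N=5$.

```python
import numpy as np

def qint(n, q):
    """Quantum integer [n]_q = (q^n - q^-n)/(q - q^-1); equals n when q = 1."""
    return float(n) if q == 1 else (q ** n - q ** (-n)) / (q - 1 / q)

N = 5
q0 = (2 / N) / (1 + np.sqrt(1 - 4 / N ** 2))  # so that q0 + 1/q0 = N
assert np.isclose(q0 + 1 / q0, N)

dims = [qint(k + 1, q0) for k in range(8)]    # dim H_k = [k+1]_{q0}
# fusion-rule recursion: dim H_1 * dim H_k = dim H_{k+1} + dim H_{k-1}
for k in range(1, 7):
    assert np.isclose(N * dims[k], dims[k + 1] + dims[k - 1])
print([round(d) for d in dims[:5]])           # -> [1, 5, 24, 115, 551]
```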
We now describe the explicit construction of the representations $v^k$ and their corresponding Hilbert spaces $H_k$ due to Banica [@Ba96]. (See also the description in [@VaVe07 Section 7]). The idea is that according to the fusion rules , the $k$-th tensor power $u^{\tiny \tp k}$ of the fundamental representation contains exactly one irreducible subrepresentation equivalent to $v^k$. In particular, if we agree to explicitly identify $v^k$ as a subrepresentation of $u^{\tiny \tp k}$, then there exists a unique projection $0 \ne p_k \in {{\operatorname{Hom}}}_{O^+_F}(u^{{\tiny \tp} k}, u^{\tiny \tp k}) \subset B(H_1^{\otimes k})$ called the [*Jones-Wenzl projection*]{} [@Jo83; @We87] satisfying $H_k = p_k(H_1^{\otimes k})$ and $$v^k = (p_k \otimes 1)u^{\tiny \tp k}(p_k \otimes 1) \in B(H_k)\otimes C(O_F^+).$$
Thus, we are left with the problem of describing the projection $p_k$. To this end, fix an orthonormal basis $(e_i)_{i=1}^N$ for $H_1= {\mathbb C}^N$, and put $$\begin{aligned}
\label{cup}
\cup_F = \sum_{i=1}^N e_i \otimes Fe_i.\end{aligned}$$ It is then a simple matter to check that $\cup_F \in \text{Hom}_{O^+_F}(1,u\tp u)$, i.e. $ u^{\tiny \tp 2}(\cup_F \otimes 1) = ( \cup_F\otimes 1 ) $. In particular, $\iota_{H_1^{\otimes i-1}} \otimes \cup_F\otimes \iota_{H_1^{\otimes k-i-1}} \in {{\operatorname{Hom}}}_{O^+_F}(u^{\tiny \tp (k-2)}, u^{\tiny \tp k})$ for each $1 \le i \le k-1$. Using these observations, we inductively define $(p_k)_{k \ge 1}$ using $p_1 = \iota_{H_1}$ together with the so-called [*Wenzl recursion*]{} $$\begin{aligned}
\label{Wenzl}
p_k = \iota_{H_1} \otimes p_{k-1} - \frac{[k-1]_{q}}{[k]_{q}}(\iota_{H_1}\otimes p_{k-1})(\cup_F \cup_F^* \otimes \iota_{H_1^{\otimes k-2}})(\iota_{H_1}\otimes p_{k-1}) \qquad (k \ge 2),\end{aligned}$$ where $q = q(F) \in (0, q_0]$ is another quantum parameter defined so that $q+q^{-1} = {{\operatorname{Tr}}}(F^*F)$.
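For $F = 1_N$ one has $\cup_F=\sum_i e_i\otimes e_i$ and $q = q_0$, so the Wenzl recursion can be iterated directly with matrices. The sketch below is our own illustration; it checks that each $p_k$ is a projection whose trace is the quantum dimension $[k+1]_q$.

```python
import numpy as np

def qint(n, q):
    return (q ** n - q ** (-n)) / (q - 1 / q)

N = 3
q = (2 / N) / (1 + np.sqrt(1 - 4 / N ** 2))   # q + 1/q = Tr(F*F) = N for F = 1_N
I = np.eye(N)
cup = I.reshape(N * N, 1)                     # cup_F = sum_i e_i (x) e_i

# Wenzl recursion, with cup cup^* acting on the first two tensor legs
p = {1: I}
for k in range(2, 5):
    ip = np.kron(I, p[k - 1])
    cc = np.kron(cup @ cup.T, np.eye(N ** (k - 2)))
    p[k] = ip - (qint(k - 1, q) / qint(k, q)) * ip @ cc @ ip

for k in range(1, 5):
    assert np.allclose(p[k] @ p[k], p[k])              # p_k is a projection
    assert np.isclose(np.trace(p[k]), qint(k + 1, q))  # Tr p_k = dim H_k
```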
The Jones-Wenzl projections first appeared in the context of II$_1$-subfactors [@Jo83]. The shared connection between subfactor theory and the representation theory of $O^+_F$ is through the famous [*Temperley-Lieb category.*]{} Indeed, as explained for example in [@Ba96; @BrCo18b; @BrCo17b], given $d \in (-\infty, -2] \cup [2, \infty)$ the Temperley-Lieb Category ${\text{TL}}(d)$ is defined to be the strict C$^\ast$-tensor category generated by two simple objects $\{0,1\}$, where $0$ denotes the unit object for the tensor category, and $1 \ne 0$ is a self-dual simple object with the property that the morphism spaces ${\text{TL}}_{k,l}(d):=\text{Hom}(1^{\otimes k}, 1^{\otimes l} )$ $(k,l \in {\mathbb N})$ are generated by the identity map $ \iota \in \text{Hom}(1,1)$ together with a unique morphism $\cup\in \text{Hom}(0, 1 \otimes 1)$ satisfying $\cap\circ\cup = |d| \in {{\operatorname{Hom}}}(0,0) = {\mathbb C}$ and the “snake equation” $(\iota \otimes \cap)(\cup \otimes \iota) = (\cap \otimes \iota)(\iota \otimes \cup) = \text{sgn}(d)\iota$. Here, the “cap” $\cap$ is simply the adjoint $\cup^* \in \text{Hom}(1 \otimes 1,0)$ of the “cup” $\cup$.
On the other hand, we have the concrete C$^\ast$-tensor category $\text{Rep}(O^+_F)$ of finite dimensional unitary representations of $O^+_F$, and it was shown by Banica [@Ba96] that if $d = {{\operatorname{Tr}}}((F\bar F)(F^*F))$, then there exists a [*unitary fiber functor*]{} ${\text{TL}}(d) \to \text{Rep}(O^+_F)$ which is determined by mapping the simple objects $0,1 \in {\text{TL}}(d)$ to $v^0, v^1 \in \text{Rep}(O^+_F)$, respectively, and by mapping the generating morphisms as follows $$\iota \in {\text{TL}}_{1,1}(d) \mapsto \iota_{H_1} \in \text{Hom}_{O^+_F}(v^1, v^1) \quad \& \quad \cup \in {\text{TL}}_{0,2}(d) \mapsto \cup_F \in \text{Hom}_{O^+_F}(v^0, v^1 \tp v^1).$$ In other words, with $d$ and $F$ as above, we can concretely realize ${\text{TL}}(d)$ inside $\text{Rep}(O^+_F)$, viewed as a category of finite dimensional Hilbert spaces. In particular, calculations involving morphisms and objects in $\text{Rep}(O^+_F)$ can be performed using the well-known planar diagrammatic calculus in the Temperley-Lieb category ${\text{TL}}(d)$ [@BrCo17b; @KaLi94; @CaFlSa95], which we now briefly review.
Diagrammatic calculus for $\text{Rep}(O^+_F)$ {#subsec:Diagram}
---------------------------------------------
In the following, we continue to use the notations (e.g. $H_k = p_k(H_1^{\otimes k})$, $\cup_F$, etc.) defined above. We use the standard string diagram calculus to depict linear transformations between Hilbert spaces. That is, a linear operator $\rho \in B(H_k, H_l)$ will be diagrammatically represented as a string diagram $$\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-1,9) {$l$};
\node at (-1,-1) {$k$};
\node at (0,4) {$\rho$};
\draw [-, color=black]
(0,6) -- (0,8);
\draw [-, color=black]
(0,0) -- (0,2);
\draw (-2,2) rectangle (2,6);
\end{tikzpicture}$$ with the input Hilbert space at the bottom of the diagram, and the output at the top. The string corresponding to $H_l$ will be labeled by $l$. We will generally omit the string corresponding to $H_0 = {\mathbb C}$, so a vector $\xi \in H_k \cong B({\mathbb C}, H_k)$ and a covector $\xi^* \in H_k^* \cong B(H_k,{\mathbb C})$ will be drawn, respectively, as
$$\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-1,9) {$k$};
\node at (0,4) {$\xi$};
\draw [-, color=black]
(0,6) -- (0,8);
\draw (-2,2) rectangle (2,6);
\end{tikzpicture}\, , \qquad \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-3,-1) {$k$};
\node at (-2,4) {$\xi^*$};
\draw [-, color=black]
(-2,0) -- (-2,2);
\draw (-4,2) rectangle (0,6);
\end{tikzpicture}\, .$$ Similarly, $\rho \in B (H_k \otimes H_l, H_{k'} \otimes H_{l'})$ is denoted using parallel input/output strings
$$\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-2,9) {$k'$};
\node at (-2,-1) {$k$};
\node at (2,9) {$l'$};
\node at (2,-1) {$l$};
\node at (0,4) {$\rho$};
\draw [-, color=black]
(-1,6) -- (-1,8);
\draw [-, color=black]
(-1,0) -- (-1,2);
\draw [-, color=black]
(1,6) -- (1,8);
\draw [-, color=black]
(1,0) -- (1,2);
\draw (-2,2) rectangle (2,6);
\end{tikzpicture}.$$ We define (for later use) the [*($k$-th) quantum trace*]{}[^1] functional $$\tau_k: B(H_1^{\otimes k}) \to {\mathbb C}, \qquad \tau_k(\rho) := {{\operatorname{Tr}}}_{H_1^{\otimes k}}\big((F^t\bar F )^{\otimes k}\rho\big) \qquad (k \in {\mathbb N}),$$ which is depicted by the closure of a string diagram as follows:
$$\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-1,9) {$k$};
\node at (0,4) {$\rho$};
\draw [-, color=black]
(0,6) -- (0,8);
\draw [-, color=black]
(0,0) -- (0,2);
\draw (-1.5,2) rectangle (1.5,6);
\draw [-, color=black]
(0,8) to [bend left = 90] (0,0);
\end{tikzpicture} = \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-1,9) {$k$};
\node at (0,4) {$\rho$};
\draw [-, color=black]
(0,6) -- (0,8);
\draw [-, color=black]
(0,0) -- (0,2);
\draw (-1.5,2) rectangle (1.5,6);
\draw [-, color=black]
(0,8) to [bend right = 90] (0,0);
\end{tikzpicture}\, .$$ Composition of linear maps is depicted by vertical concatenation of string diagrams, and tensoring by placing diagrams in parallel:
$$\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-1,9) {$l$};
\node at (-1,-1) {$k$};
\node at (0,4) {$\rho \rho'$};
\draw [-, color=black]
(0,6) -- (0,8);
\draw [-, color=black]
(0,0) -- (0,2);
\draw (-2,2) rectangle (2,6);
\end{tikzpicture} = \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-1,13) {$l$};
\node at (-1,-5) {$k$};
\node at (0,8) {$\rho $};
\draw [-, color=black]
(0,10) -- (0,12);
\draw [-, color=black]
(0,4) -- (0,6);
\draw (-2,6) rectangle (2,10);
\node at (0,0) {$ \rho'$};
\draw [-, color=black]
(0,2) -- (0,4);
\draw [-, color=black]
(0,-4) -- (0,-2);
\draw (-2,-2) rectangle (2,2);
\end{tikzpicture}, \qquad \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-2,9) {$k'$};
\node at (-2,-1) {$k$};
\node at (2,9) {$l'$};
\node at (2,-1) {$l$};
\node at (0,4) {$\rho \otimes \rho'$};
\draw [-, color=black]
(-1,6) -- (-1,8);
\draw [-, color=black]
(-1,0) -- (-1,2);
\draw [-, color=black]
(1,6) -- (1,8);
\draw [-, color=black]
(1,0) -- (1,2);
\draw (-3,2) rectangle (3,6);
\end{tikzpicture} = \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-1,9) {$k'$};
\node at (-1,-1) {$k$};
\node at (0,4) {$\rho$};
\draw [-, color=black]
(0,6) -- (0,8);
\draw [-, color=black]
(0,0) -- (0,2);
\draw (-2,2) rectangle (2,6);
\end{tikzpicture} \ \ \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (1,9) {$l'$};
\node at (1,-1) {$l$};
\node at (0,4) {$\rho'$};
\draw [-, color=black]
(0,6) -- (0,8);
\draw [-, color=black]
(0,0) -- (0,2);
\draw (-2,2) rectangle (2,6);
\end{tikzpicture}
.$$
Let us end this subsection by describing the string-diagrammatic representation of the maps specific to the representation category $\text{Rep}(O^+_F)$. Recall that for $\text{Rep}(O^+_F)$, we have the fundamental generating morphisms $\iota_{H_k}$, $\cup_F$, $\cap_F := \cup_F^*$. We depict these maps as follows: $$\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-1,9) {$k$};
\node at (-1,-1) {$k$};
\node at (0,4) {$\iota_{H_k}$};
\draw [-, color=black]
(0,6) -- (0,8);
\draw [-, color=black]
(0,0) -- (0,2);
\draw (-2,2) rectangle (2,6);
\end{tikzpicture} = \ \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-1,9) {$k$};
\draw [-, color=black]
(0,0) -- (0,8);
\end{tikzpicture} \ , \qquad \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-2,9) {$1$};
\node at (2,9) {$1$};
\node at (0,4) {$\cup_F$};
\draw [-, color=black]
(-1,6) -- (-1,8);
\draw [-, color=black]
(1,6) -- (1,8);
\draw (-2,2) rectangle (2,6);
\end{tikzpicture} = \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-2,9) {$1$};
\node at (2,9) {$1$};
\draw [-, color=black]
(-1,6) -- (-1,8);
\draw [-, color=black]
(1,6) -- (1,8);
\draw[-, color=black]
(-1,6) to [bend right = 90] (1,6);
\end{tikzpicture}, \qquad \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-2,-1) {$1$};
\node at (2,-1) {$1$};
\node at (0,4) {$\cap_F$};
\draw [-, color=black]
(-1,0) -- (-1,2);
\draw [-, color=black]
(1,0) -- (1,2);
\draw (-2,2) rectangle (2,6);
\end{tikzpicture} = \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-2,-1) {$1$};
\node at (2,-1) {$1$};
\draw [-, color=black]
(-1,0) -- (-1,2);
\draw [-, color=black]
(1,0) -- (1,2);
\draw [-, color=black]
(-1,2) to [bend left = 90](1,2);
\end{tikzpicture}
.$$ The fundamental Temperley-Lieb relations then take a purely graphical form. For example, the value of a closed loop is $|d|$: $$\|\cup_F\|^2 = \cap_F \circ \cup_F = \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (2,-1) {$1$};
\draw [-, color=black]
(-1,0) -- (-1,2);
\draw [-, color=black]
(1,0) -- (1,2);
\draw [-, color=black]
(-1,2) to [bend left = 90](1,2);
\draw [-, color=black]
(-1,0) to [bend right = 90](1,0);
\end{tikzpicture} = {{\operatorname{Tr}}}(F^*F) = |d|,$$ and the snake equations are given by $$(\iota_{H_1} \otimes \cap_F) (\cup_F \otimes \iota_{H_1}) = \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\draw [-, color=black]
(0,0)--(0,3);
\draw [-, color=black]
(2,0)--(2,3);
\draw [-, color=black]
(4,0)--(4,3);
\draw [-, color=black]
(2,3) to [bend left = 90](4,3);
\draw [-, color=black]
(0,0) to [bend right = 90](2,0);
\end{tikzpicture} = F \bar F = \text{sgn}(d) \ \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\draw [-, color=black]
(0,0)--(0,3);
\end{tikzpicture} \
=
\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\draw [-, color=black]
(0,0)--(0,3);
\draw [-, color=black]
(2,0)--(2,3);
\draw [-, color=black]
(4,0)--(4,3);
\draw [-, color=black]
(0,3) to [bend left = 90](2,3);
\draw [-, color=black]
(2,0) to [bend right = 90](4,0);
\end{tikzpicture} = (\cap_F \otimes \iota_{H_1}) (\iota_{H_1} \otimes \cup_F).$$
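For the Kac case $F = 1_N$ both of these relations (closed loop and snake equation) can be verified with a few lines of linear algebra; the following is our own sketch.

```python
import numpy as np

N = 4
F = np.eye(N)                    # Kac case F = 1_N, so |d| = Tr(F*F) = N
I = np.eye(N)
cup = F.reshape(N * N, 1)        # cup_F = sum_i e_i (x) F e_i
cap = cup.conj().T

# value of a closed loop: cap o cup = |d|
assert np.isclose((cap @ cup).item(), N)

# snake equations: (1 (x) cap)(cup (x) 1) = F Fbar = 1 = (cap (x) 1)(1 (x) cup)
snake = np.kron(I, cap) @ np.kron(cup, I)
assert np.allclose(snake, I)
assert np.allclose(np.kron(cap, I) @ np.kron(I, cup), I)
```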
Temperley-Lieb Channels {#temperley-lieb-channels}
-----------------------
We now come to our main objects of study, which are the CG-channels associated to the irreducible representations of the quantum groups $O^+_F$, which, in view of the above connection with the Temperley-Lieb category, we redub “Temperley-Lieb channels”:
A triple $(k,l,m) \in {\mathbb N}_0^3$ is called [*admissible*]{} if there exists an integer $0 \le r \le \min\{l,m\}$ such that $k = l+m - 2r$. For an admissible triple $(k,l,m) \in {\mathbb N}_0^3$ we have $v^k \subset v^l \tp v^m$ with the intertwining isometry $\alpha^{l,m}_k:H_k \to H_l \otimes H_m$ and the corresponding CG-channels $\Phi_{v^k}^{\overline{v^l}, v^m}$ and $\Phi_{v^k}^{v^l, \overline{v^m}}$ (shortly, $\Phi_k^{\bar{l}, m}$ and $\Phi_k^{l, \bar{m}}$) are called [*($O^+_F$-)Temperley-Lieb channels*]{}.
Let us now give a string-diagrammatic description of the covariant isometries $\alpha_k^{l,m}$ which define the TL-channels above. We begin by fixing an admissible triple $(k,l,m) \in {\mathbb N}_0^3$ and define $$\begin{aligned}
\label{unnormal}
A_k^{l,m} =(p_l \otimes p_m)\Big(\iota_{H_1^{\otimes(l-r)}} \otimes \cup_F^r \otimes \iota_{H_1^{\otimes(m-r)}}\Big)p_k \in \text{Hom}_{O^+_F}(v^k, v^l \tp v^m).\end{aligned}$$ where $\cup_F^r \in \text{Hom}_{O^+_F}(v^0, u^{\tiny \tp 2r})$ is defined recursively from
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (0,4) {$\cup_F^r$};
\draw [-, color=black]
(-1,6) -- (-1,8);
\draw [-, color=black]
(-2,6) -- (-2,8);
\node at (0,7) {$...$};
\draw [-, color=black]
(1,6) -- (1,8);
\draw [-, color=black]
(2,6) -- (2,8);
\draw (-2,2) rectangle (2,6);
\end{tikzpicture} =
\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\draw [-, color=black]
(-1,6) -- (-1,8);
\draw [-, color=black]
(-2,6) -- (-2,8);
\node at (0,7) {$...$};
\draw [-, color=black]
(1,6) -- (1,8);
\draw [-, color=black]
(2,6) -- (2,8);
\draw [-, color=black]
(2,6) -- (2,8);
\draw [-, color=black]
(2,6) -- (2,8);
\draw [-, color=black]
(-2,6) to [bend right = 90] (2,6);
\draw [-, color=black]
(-1,6) to [bend right = 90] (1,6);
\end{tikzpicture},$$ and $A_k^{l,m}$ is given by $$A_k^{l,m} = \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\draw (-3,0) rectangle (3,2);
\node at (0,1) {$p_k$};
\draw (-9,10) rectangle (-4,12);
\node at (-6.5,11) {$p_l$};
\draw (4,10) rectangle (9,12);
\node at (6.5,11) {$p_m$};
\draw [-, color=black]
(-3,2) -- (-9,10);
\draw [-, color=black]
(-1,2) -- (-7,10);
\draw [-, color=black]
(3,2) -- (9,10);
\draw [-, color=black]
(1,2) -- (7,10);
\draw [-, color=black]
(-4,10) to [bend right = 40] (4,10);
\draw [-, color=black]
(-6,10) to [bend right = 90] (6,10);
\node at (0,8) {$\vdots$};
\node at (-4.3,5) {$\cdot \cdot$};
\node at (4.3,5) {$\cdot \cdot$};
\end{tikzpicture}$$
The (non-zero) map $A_{k}^{l,m}$ is often called a [*three-vertex*]{} in the context of tensor category theory and Temperley-Lieb recoupling theory [@KaLi94], and (following standard conventions) the above string diagram for $A_k^{l,m}$ is simply drawn as a trivalent vertex:
$$\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-5,0.5) {$A_{k}^{l,m} = \quad$};
\node at (-5,5) {$l$};
\node at (5,5) {$m$};
\node at (-1,-5) {$k$};
\draw [-, color=black]
(-4,4) -- (0,0);
\draw [-, color=black]
(0,0) -- (4,4);
\draw [-, color=black]
(0,-4) -- (0,0);
\end{tikzpicture}.$$
We then have that the the adjoint $(A_k^{l,m})^*$ is obtained by rotating 180 degrees about the horizontal axis. $$\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-5,0.5) {$(A_{k}^{l,m})^* = \quad $};
\node at (-5,-5) {$l$};
\node at (5,-5) {$m$};
\node at (-1,5) {$k$};
\draw [-, color=black]
(-4,-4) -- (0,0);
\draw [-, color=black]
(0,-0) -- (4,-4);
\draw [-, color=black]
(0,4) -- (0,0);
\end{tikzpicture}.$$ From Schur’s Lemma and irreducibility, it follows that our required isometry $\alpha_k^{l,m}$ must be a scalar multiple of the three-vertex $A_k^{l,m}$, and this scaling factor is given in terms of the so-called [*theta-net*]{} $\theta_q(k,l,m)$ [@KaLi94]. $$\theta_q(k,l,m):= \tau_k((A_k^{l,m})^*A_k^{l,m}) = \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.17]
\node at (-4.5,5) {{\scriptsize $l$}};
\node at (1.5,5) {{\scriptsize $m$}};
\node at (-1,-5) {{\scriptsize $k$}};
\node at (-1,13) {{\scriptsize $k$}};
\draw [-, color=black]
(-4,4) -- (0,0);
\draw [-, color=black]
(0,-0) -- (4,4);
\draw [-, color=black]
(0,-4) -- (0,0);
\draw [-, color=black]
(-4,4) -- (0,8);
\draw [-, color=black]
(4,4) -- (0,8);
\draw [-, color=black]
(0,8) -- (0,12);
\draw [-, color=black]
(0,12) to [bend left =110] (0,-4);
\end{tikzpicture} = \frac{[r]_q![l-r]_q![m-r]_q![k+r+1]_q!}{[l]_q![m]_q![k]_q!},$$ where $q=q(F)$, $k=l+m - 2r$, and $[x]_q! = [x]_q[x-1]_q \ldots [2]_q[1]_q$ denotes the quantum factorial. Then one has
$$\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-22,0.5) {$\alpha_k^{l,m} = \Big(\frac{\tau_k(\iota_{H_k})}{ \tau_k((A_k^{l,m})^*A_k^{l,m})}\Big)^{1/2} A_k^{l,m} = \Big(\frac{[k+1]_q}{\theta_q(k,l,m)}\Big)^{1/2}\ \ \ \ $};
\node at (-5,5) {$l$};
\node at (5,5) {$m$};
\node at (-1,-5) {$k$};
\draw [-, color=black]
(-4,4) -- (0,0);
\draw [-, color=black]
(0,0) -- (4,4);
\draw [-, color=black]
(0,-4) -- (0,0);
\end{tikzpicture}.$$
Kac type Temperley-Lieb channels
--------------------------------
Throughout the rest of the paper we make the standing assumption that all free orthogonal quantum groups $O^+_F$ under consideration are of Kac type, which is equivalent to the unitarity of $F$ [@Ba97]. (In fact, for the most part we just consider $O^+_N$; however, this slightly higher level of generality is useful at times, allowing us, for example, to prove results for $SU(2)$ simultaneously.) The main reason for making this assumption is that for the calculations that follow, it is essential for us to have that the “physical operations” of taking partial traces in tensor product spaces such as $B(H_l \otimes H_m)$ agree with the “quantum operations” coming from taking (partial) quantum traces using the functionals $\tau_k$ described above. In this case, we also have the handy feature that the $O^+_F$-covariant unit vectors $\alpha_0^{k,k}\in H_k \otimes H_k$ are all maximally entangled states.
Note that when $O^+_F$ is of Kac type, we have that both the quantum parameters $q_0$ and $q$ defined above are equal (since $N = {{\operatorname{Tr}}}(F^*F)$ when $F$ is unitary). From now on we simply use the letter $q$ to denote the quantum parameter.
Of course, since in the Kac case the quantum traces and ordinary traces agree, we have the following diagrammatic representations for the Temperley-Lieb quantum channels $ \Phi_k^{\bar l, m}, \Phi_k^{l, \bar m}$: $$\Phi_k^{\bar l, m}(\rho)=\frac{[k+1]_q}{\theta_q(k,l,m)} \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.17]
\node at (5,15) {$m$};
\node at (5,-7) {$m$};
\node at (-9,5) {$l$};
\node at (-1,9) {$k$};
\node at (-1,-1) {$k$};
\node at (0,4) {$\rho$};
\draw [-, color=black]
(-4,16) -- (0,12);
\draw [-, color=black]
(0,12) -- (4,16);
\draw [-, color=black]
(0,6) -- (0,12);
\draw [-, color=black]
(-4,-8) -- (0,-4);
\draw [-, color=black]
(4,-8) -- (0,-4);
\draw [-, color=black]
(0,-4) -- (0,2);
\draw [-, color=black]
(-4,16) to [bend right=120] (-4,-8);
\draw (-2,2) rectangle (2,6);
\end{tikzpicture} \quad, \quad \Phi_k^{l, \overline{m}}(\rho)=\frac{[k+1]_q}{\theta_q(k,l,m)}\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.17]
\node at (-5,15) {$l$};
\node at (-5,-7) {$l$};
\node at (8.5,5) {$m$};
\node at (-1,9) {$k$};
\node at (-1,-1) {$k$};
\node at (0,4) {$\rho$};
\draw [-, color=black]
(-4,16) -- (0,12);
\draw [-, color=black]
(0,12) -- (4,16);
\draw [-, color=black]
(0,6) -- (0,12);
\draw [-, color=black]
(-4,-8) -- (0,-4);
\draw [-, color=black]
(4,-8) -- (0,-4);
\draw [-, color=black]
(0,-4) -- (0,2);
\draw [-, color=black]
(4,16) to [bend left=120] (4,-8);
\draw (-2,2) rectangle (2,6);
\end{tikzpicture}.$$
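As a quick sanity check on these conventions, consider the smallest nontrivial case $k=0$, $l=m=1$. Taking $F=\mathrm{id}$ (so that the duality vector is $\sum_i |ii\rangle$ and $p_1$ is the identity — an assumption made only for this illustration), the channel $\Phi^{\bar{1},1}_0$ sends the unique state on the trivial representation to the maximally mixed state on $H_1\cong {\mathbb{C}}^N$. A minimal numerical sketch of this, not part of the original development:

```python
import numpy as np

def partial_trace_first(X, dA, dB):
    # Tr ⊗ id : B(H_A ⊗ H_B) -> B(H_B), tracing out the first factor.
    return np.einsum('iaib->ab', X.reshape(dA, dB, dA, dB))

N = 4
# Duality (cup) vector Omega = sum_i |ii> in H_1 ⊗ H_1, for F = id.
omega = np.identity(N).reshape(N * N)
# Normalized isometry alpha_0^{1,1} = Omega / sqrt([2]_q), with [2]_q = N.
alpha = omega / np.sqrt(N)
# Phi_0^{bar1,1} applied to the unique state on H_0 = C:
out = partial_trace_first(np.outer(alpha, alpha), N, N)
# out is the maximally mixed state I/N on H_1.
```

The same partial-trace routine can be pointed at either tensor factor to realize $\Phi^{\bar l,m}_k$ or $\Phi^{l,\bar m}_k$ once the isometries $\alpha^{l,m}_k$ are available for larger admissible triples.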
Let us finish this section with an application of our string diagram formalism to the Choi maps associated to the TL-channels. The result below was proved for the cases of $SU(2)$ by Al-Nuwairan [@Al14] and $O^+_N$ in [@BrCo17b]. The following general case follows by the exact same planar isotopy arguments used in [@BrCo17b].
\[thm:choi-eq\] For any admissible triple $(k,l,m)\in {\mathbb{N}}^3_0$ the Choi matrices associated to any Kac type $O^+_F$-TL-channels $\Phi_k^{\bar{l}, m}$ and $\Phi_k^{l, \bar{m}}$ are given by $$\label{choi-eq}
C_{\Phi_k^{\overline{l}, m}} = \frac{[k+1]_q}{[l+1]_q} \alpha_l^{m,k}(\alpha_{l}^{m,k})^*,\;\;
C_{\Phi_k^{l, \overline{m}}} = \frac{[k+1]_q}{[m+1]_q} \alpha_m^{k,l}(\alpha_{m}^{k,l})^*,$$ respectively. In particular, these Choi maps are scalar multiples of $O^+_F$-covariant projections onto irreducible subrepresentations.
The minimum output entropy and capacities of $O_N^+$-Temperley-Lieb channels {#sec:moe-cap}
============================================================================
In this section we establish asymptotically sharp estimates on the minimum output entropy, the Holevo capacity and the “one-shot” quantum capacity of $O_N^+$-TL-channels for large enough $N$. Our starting point is the following result from [@BrCo18b Corollary 4.2]: $$H_{\min}(\Phi^{l,\bar m}_k )=H_{\min}(\Phi^{\overline{l},m}_k)\geq \log(\frac{\theta_q(k,l,m)}{[k+1]_q})\ge \frac{l+m-k}{2} \cdot \log N - C(N)$$ with $C(N)\to 0$ as $N\to \infty$. This estimate was conjectured to be asymptotically optimal as $N\to \infty$ in [@BrCo18b]; the conjecture is confirmed below.
Before we dig into the above conjecture we prepare several elementary estimates. Let $f(t) = -t\log t$, $0<t<1$, be the function we use for the entropy. Then it is straightforward to see that $f(t) \lesssim t^{1/2}$ and $f(t) \lesssim 1-t$, where $a\lesssim b$ means that there is a universal constant $C>0$ such that $a \le C \cdot b$. The Fannes-Audenaert inequality ([@A07]) says that for any quantum states $X,Y \in B(H)$ with ${\rm dim}H = n$ $$|H(X) - H(Y)| \le \delta \log (n-1) + f(\delta) + f(1-\delta),\; \delta = \frac{1}{2}||X-Y||_1,$$ where $||\cdot||_1$ is the trace norm, so that we have $$|H(X) - H(Y)| \lesssim \log n \cdot ||X-Y||_1 + ||X-Y||^{1/2}_1.$$
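The Fannes-Audenaert bound is easy to probe numerically. The following self-contained sketch (our own illustration, assuming `numpy` is available; natural logarithms throughout, matching the convention above) computes both sides for explicit pairs of states:

```python
import numpy as np

def entropy(rho):
    # von Neumann entropy H(rho) = -Tr(rho log rho), natural log.
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def trace_norm(A):
    # ||A||_1 = sum of singular values.
    return float(np.sum(np.linalg.svd(A, compute_uv=False)))

def fannes_audenaert(X, Y):
    # Right-hand side delta*log(n-1) + f(delta) + f(1-delta),
    # i.e. delta*log(n-1) plus the binary entropy of delta.
    n = X.shape[0]
    d = trace_norm(X - Y) / 2
    if d <= 0.0 or d >= 1.0:
        h = 0.0
    else:
        h = -d * np.log(d) - (1 - d) * np.log(1 - d)
    return d * np.log(max(n - 1, 1)) + h
```

For qubits ($n=2$) the bound reduces to the binary entropy of $\delta$, and it is saturated by the pair $X=|0\rangle\langle 0|$, $Y=\frac{1}{2}\mathbb{1}$.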
\[lem-Fannes-extend\] Let $X,Y \in B(H)_+$ with ${\rm dim}H = n$. Suppose further that ${{\operatorname{Tr}}}(X) = 1 \ge {{\operatorname{Tr}}}(Y) > 0$. Then we still have $$|H(X) - H(Y)| \lesssim \log n \cdot ||X-Y||_1 + ||X-Y||^{1/2}_1.$$
First we observe that $$\begin{aligned}
H(X) - H(Y)
& = H(X) + {{\operatorname{Tr}}}(Y) \log {{\operatorname{Tr}}}(Y) - {{\operatorname{Tr}}}(Y)H(\frac{Y}{{{\operatorname{Tr}}}(Y)})\\
& = {{\operatorname{Tr}}}(Y) \log {{\operatorname{Tr}}}(Y) + (1-{{\operatorname{Tr}}}(Y))H(X) + {{\operatorname{Tr}}}(Y) (H(X) - H(\frac{Y}{{{\operatorname{Tr}}}(Y)}))\\
& = A + B + C.
\end{aligned}$$ Since we have $1- {{\operatorname{Tr}}}(Y) = {{\operatorname{Tr}}}(X-Y) \le ||X-Y||_1$ we know $$|A| \lesssim ||X-Y||_1,\; |B| \lesssim \log n \cdot ||X-Y||_1.$$ For the third term we have $$|C| \le |H(X) - H(\frac{Y}{{{\operatorname{Tr}}}(Y)})| \lesssim \log n \cdot ||X-\frac{Y}{{{\operatorname{Tr}}}(Y)}||_1 + ||X-\frac{Y}{{{\operatorname{Tr}}}(Y)}||^{1/2}_1.$$ Finally we observe that $$||X-\frac{Y}{{{\operatorname{Tr}}}(Y)}||_1 \le ||X-Y||_1 + (\frac{1}{{{\operatorname{Tr}}}(Y)} - 1)||Y||_1 = ||X-Y||_1 + 1 - {{\operatorname{Tr}}}(Y) \le 2||X-Y||_1,$$ which leads us to the conclusion we wanted.
\[lem-factorial\] For any admissible $(l,m,k)\in {\mathbb{N}}^3_0$ with $k=l+m-2r$ we have $$\frac{N^r[k+1]_q}{\theta_q(k,l,m)}=1+O(\frac{1}{N^2}).$$
We first observe for any $k\ge 1$ that $$\begin{aligned}
\frac{[k+1]_q}{[k]_qN}
& = \frac{1}{2}(1 + \sqrt{1-4/N^2}) \frac{1-q^{2k+2}}{1-q^{2k}}\\
& = \frac{1}{2}(1 + \sqrt{1-4/N^2}) (1 + \frac{q^{2k}-q^{2k+2}}{1-q^{2k}})\\
& = \frac{1}{2}(2 + O(\frac{1}{N^2})) (1+O(\frac{1}{N^2})) = 1+O(\frac{1}{N^2}).
\end{aligned}$$ Then, we can easily see for all $a > b\in {\mathbb{N}}$ that $$\frac{[a]_q}{[b]_q N^{a-b}} = \frac{[a]_q}{[a-1]_q N}\cdots \frac{[b+1]_q}{[b]_q N} = (1+O(\frac{1}{N^2}))^{a-b} = 1+O(\frac{1}{N^2}),$$ which can be extended to the following $$\frac{[a]_q!}{[b]_q![a-b]_q! N^{b(a-b)}}=1+O(\frac{1}{N^2})=\frac{[b]_q! [a-b]_q! N^{b(a-b)}}{[a]_q!}.$$ Finally, we have $$\begin{aligned}
\frac{N^r[k+1]_q}{\theta_q(k,l,m)}
& = N^r\frac{[l]_q! [m]_q! [k+1]_q!}{[r]_q![l-r]_q![m-r]_q! [k+r+1]_q!}\\
& = \frac{[l]_q!}{[r]_q![l-r]_q!N^{r(l-r)}}\cdot \frac{[m]_q!}{[m-r]_q![r]_q!N^{r(m-r)}}\cdot \frac{[k+1]_q![r]_q!N^{r(k+1)}}{[k+r+1]_q!}\\
& = 1+O(\frac{1}{N^2})\end{aligned}$$ since $-r(l-r)-r(m-r)+r(k+1)=r(2r-l-m+k+1)=r$.
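The rate claimed in the lemma can also be observed numerically. In the sketch below (our own illustration), the quantum integers and the theta-net are evaluated directly from the factorial formula, and the ratio $N^r[k+1]_q/\theta_q(k,l,m)$ approaches $1$ at the expected speed:

```python
import math

def qint(n, N):
    # Quantum integer [n]_q where q + 1/q = N and 0 < q < 1.
    q = (N - math.sqrt(N * N - 4)) / 2
    return (q ** (-n) - q ** n) / (1.0 / q - q)

def theta(k, l, m, N):
    # Theta-net theta_q(k,l,m) via the quantum-factorial formula.
    qfact = lambda a: math.prod(qint(j, N) for j in range(1, a + 1))
    r = (l + m - k) // 2
    return (qfact(r) * qfact(l - r) * qfact(m - r) * qfact(k + r + 1)
            / (qfact(l) * qfact(m) * qfact(k)))

def ratio(k, l, m, N):
    # N^r [k+1]_q / theta_q(k,l,m); the lemma asserts this is 1 + O(1/N^2).
    r = (l + m - k) // 2
    return N ** r * qint(k + 1, N) / theta(k, l, m, N)
```

For instance, for the admissible triple $(k,l,m)=(2,3,3)$ the deviation $|N^r[k+1]_q/\theta_q - 1|$ shrinks by roughly a factor of $100$ when $N$ goes from $10$ to $100$, consistent with the $O(1/N^2)$ rate.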
Here, we introduce some notations. For $N \ge 2$ we write the index set $I = \{1, 2, \cdots, N\}$. We also need multi-index sets $$I^n = \{ {\bf i} = (i_1, \cdots, i_n): i_k \in I,\; 1\le k \le n\}$$ and $$I^n_{\ne} := \{ {\bf i} = (i_1, \cdots, i_n) \in I^n: i_k \ne i_{k+1}, \;1\le k \le n-1\}.$$ We sometimes need to avoid particular indices as follows. $$(s,t)/I^n_{\ne} := \{ {\bf i} = (i_1, \cdots, i_n) \in I^n_{\ne}: i_1 \ne s, i_1 \ne t\}$$ and $$I^n_{\ne}\backslash(t) := \{ {\bf i} = (i_1, \cdots, i_n) \in I^n_{\ne}: i_n \ne t\}$$ for $n\in {\mathbb{N}}$, $s \ne t\in I$. We also write $(s)/I^n_{\ne} := \{ {\bf i} \in I^n_{\ne}: i_1 \ne s\}$ for the single-constraint variant. Note that we have $|(s,t)/I^n_{\ne}| = (N-2)(N-1)^{n-1}$, $|(s)/I^n_{\ne}| = (N-1)^n$ and $|I^n_{\ne}\backslash(t)| = (N-1)^n$.
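The stated cardinalities can be confirmed by brute-force enumeration for small $N$ and $n$; a short sketch (our own illustration):

```python
from itertools import product

def I_ne(N, n):
    # I^n_≠ : multi-indices in {1,...,N}^n with i_k != i_{k+1}.
    return [i for i in product(range(1, N + 1), repeat=n)
            if all(i[k] != i[k + 1] for k in range(n - 1))]

def avoid_first(s, t, N, n):
    # (s,t)/I^n_≠ : additionally i_1 != s and i_1 != t.
    return [i for i in I_ne(N, n) if i[0] != s and i[0] != t]

def avoid_last(t, N, n):
    # I^n_≠\(t) : additionally i_n != t.
    return [i for i in I_ne(N, n) if i[-1] != t]
```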
For each ${\bf i} \in I^n_{\neq}$ we can easily see that $|{\bf i}{\rangle}\in H_n$ so that $p_n |{\bf i}{\rangle}= |{\bf i}{\rangle}$ from the Jones-Wenzl recursion.
For ${\bf i} \in I^n$ and ${\bf j} \in I^m$ the vector $| {\bf i} {\rangle}\otimes | {\bf j} {\rangle}\in ({\mathbb{C}}^{N})^{\otimes (n+m)}$ will simply be denoted by $| {\bf i} {\bf j} {\rangle}$. We will use a very specific index ${\bf m}^k := (1,2,1,\cdots) \in I^k$, $k\ge 1$. For ${\bf i}=(i_1,\cdots,i_n)\in I^n$ we will also consider its order-reversed multi-index $\check{{\bf i}}=(i_n,\cdots, i_1)\in I^n$.
\[thm-MOE-sharp\] For each admissible triple $(l,m,k)\in {\mathbb{N}}^3_0$ we have $$\frac{l+m-k}{2} \cdot \log N - C(N) \le H_{\min}(\Phi^{l,\bar m}_k )=H_{\min}(\Phi^{\overline{l},m}_k)\le \frac{l+m-k}{2} \cdot \log N + D(N)$$ with $C(N), D(N)\to 0$ as $N\to \infty$. When $k=l+m$, we actually have the following. $$H_{\min}(\Phi^{l,\bar m}_{l+m})=H_{\min}(\Phi^{\bar{l},m}_{l+m}) = 0$$ for any $N\ge 2$.
We set $r = \frac{l+m-k}{2}$. We will use a very specific index ${\bf m} := (1,2,1,\cdots) \in H_k\subseteq H^{\otimes k}_1$, which splits into $(m_1,\cdots, m_k) = {\bf m} = {\bf m}'{\bf m}''$, where ${\bf m}' = (m_1,\cdots, m_{l-r}) \in H_{l-r}\subseteq H^{\otimes l-r}_1$ and ${\bf m}'' = (m_{l-r+1},\cdots, m_k)\in H_{m-r}\subseteq H^{\otimes m-r}_1$. Then, we have $$\begin{aligned}
\frac{\theta_q(k,l,m)}{[k+1]_q} \Phi^{\bar{l},m}_k(|{\bf m}{\rangle}{\langle}{\bf m}|)
& = {{\operatorname{Tr}}}\otimes \iota (A^{l,m}_k |{\bf m}{\rangle}{\langle}{\bf m}| (A^{l,m}_k )^*)\\
& = {{\operatorname{Tr}}}\otimes \iota (A^{l,m}_k |{\bf m}'{\bf m}''{\rangle}{\langle}{\bf m}'{\bf m}''| (A^{l,m}_k )^*)\\
& = \sum_{{\bf i}, {\bf i}' \in I^r} {{\operatorname{Tr}}}\otimes \iota [(p_l \otimes p_m) (|{\bf m}'{\bf i} {\rangle}{\langle}{\bf m}'{\bf i}'| \otimes |\check{{\bf i}}\, {\bf m}''{\rangle}{\langle}\check{{\bf i}'}{\bf m}''|) (p_l \otimes p_m)]\\
& = \sum_{{\bf i}, {\bf i}' \in I^r} {\langle}{\bf m}'{\bf i}'| p_l |{\bf m}'{\bf i} {\rangle}\cdot p_m |\check{{\bf i}}\, {\bf m}''{\rangle}{\langle}\check{{\bf i}'}{\bf m}''| p_m\\
& = \sum_{{\bf i}\in (1,2)/I^r_{\ne}} |\check{{\bf i}}\, {\bf m}''{\rangle}{\langle}\check{{\bf i}}{\bf m}''| + \sum_{{\bf i}, {\bf i}' \not\in (1,2)/I^r_{\ne}} {\langle}{\bf m}'{\bf i}'| p_l |{\bf m}'{\bf i} {\rangle}\cdot p_m |\check{{\bf i}}\, {\bf m}''{\rangle}{\langle}\check{{\bf i}'}{\bf m}''| p_m\\
& = \frac{\theta_q(k,l,m)}{[k+1]_q}(Z(1) + Z(2)),
\end{aligned}$$ where we used the fact that for ${\bf i}\in (1,2)/I^r_{\ne}$ we have ${\bf m}'{\bf i}\in H_l$ and $\check{{\bf i}}\, {\bf m}'' \in H_m$. Note that $$\frac{\theta_q(k,l,m)}{[k+1]_q}Z(2)= \mathrm{Tr}\otimes \iota ((p_l\otimes p_m)|\xi {\rangle}{\langle}\xi | (p_l\otimes p_m))\geq 0,$$ where $|\xi{\rangle}= \displaystyle \sum_{{\bf i}\notin (1,2)/I^r_{\neq}} |{\bf m}' {\bf i}{\rangle}\otimes |\check{{\bf i}}{\bf m}''{\rangle}$. The term $Z(1)$ is the dominant one with the entropy $$\begin{aligned}
H(Z(1))
& = (N-2)(N-1)^{r-1}\frac{[k+1]_q}{\theta_q(k,l,m)} \log \frac{\theta_q(k,l,m)}{[k+1]_q}\\
& = (1-\frac{2}{N})(1-\frac{1}{N})^{r-1}\frac{N^r[k+1]_q}{\theta_q(k,l,m)} \log \frac{\theta_q(k,l,m)}{[k+1]_q}\\
& = (1+O(\frac{1}{N}))\log[ (1+O(\frac{1}{N^2}))N^r]
\end{aligned}$$ by Lemma \[lem-factorial\]. For the second term $Z(2)$ we have $${{\operatorname{Tr}}}(Z(2)) = 1 - {{\operatorname{Tr}}}(Z(1)) = 1- (N-2)(N-1)^{r-1}\frac{[k+1]_q}{\theta_q(k,l,m)} = O(\frac{1}{N}).$$ By Lemma \[lem-Fannes-extend\] we have $$| H(\Phi^{\bar{l},m}_k(|{\bf m}{\rangle}{\langle}{\bf m}|)) - H(Z(1)) | \lesssim m \log N {{\operatorname{Tr}}}(Z(2)) + {{\operatorname{Tr}}}(Z(2))^{1/2} \lesssim O(\frac{1}{\sqrt{N}}),$$ which leads us to the conclusion we wanted.
If $k=l+m$, then we have $r=0$ and $$\Phi^{\bar{l},m}_{l+m}(|{\bf m}{\rangle}{\langle}{\bf m}|) = |{\bf m}'{\rangle}{\langle}{\bf m}'|,$$ which is a pure state. Thus, we get the conclusion we wanted.
Now we move to the case of capacities. We will apply a similar argument for the lower bound of “one-shot” quantum capacity.
\[thm-QC\] For each admissible triple $(k,l,m)\in {\mathbb{N}}^3_0$ we have $$\begin{cases}\frac{l+k-m}{2} \cdot \log N - C(N) \le Q^{(1)}(\Phi^{l,\bar m}_k)\\ \frac{m+k-l}{2} \cdot \log N - D(N) \le Q^{(1)}(\Phi^{\overline{l},m}_k)\end{cases}$$ with constants $C(N), D(N)\to 0$ as $N\to \infty$. When $k=l+m$, we actually have the following. $$\begin{cases} l \cdot \log (N-1) \le Q^{(1)}(\Phi^{l,\bar m}_{l+m})\\ m \cdot \log (N-1) \le Q^{(1)}(\Phi^{\overline{l},m}_{l+m}).\end{cases}$$
We set $r = \frac{l+m-k}{2}$ and fix a specific index ${\bf n} := (1,2,1,\cdots) \in H_{m-r}\subseteq H^{\otimes m-r}_1$. We first consider the estimate for $Q^{(1)}(\Phi^{l,\bar{m}}_k)$. For any ${\bf j} \in I^{l-r}_{\ne}\backslash(1)$ we use the same argument as in the proof of Theorem \[thm-MOE-sharp\] to get $$\begin{aligned}
\frac{\theta_q(k,l,m)}{[k+1]_q} \Phi^{\bar{l},m}_k(|{\bf j}{\bf n}{\rangle}{\langle}{\bf j}{\bf n}|)
& = {{\operatorname{Tr}}}\otimes \iota (A^{l,m}_k |{\bf j}{\bf n}{\rangle}{\langle}{\bf j}{\bf n}| (A^{l,m}_k )^*)\\
& = \sum_{{\bf i}, {\bf i}' \in I^r} {\langle}{\bf j}{\bf i}'| p_l |{\bf j}{\bf i} {\rangle}\cdot p_m |\check{{\bf i}}\, {\bf n}{\rangle}{\langle}\check{{\bf i}'}{\bf n}| p_m\\
& = \sum_{{\bf i}\in (1,j_{l-r})/I^r_{\ne}} |\check{{\bf i}}{\bf n}{\rangle}{\langle}\check{{\bf i}}{\bf n}| + \sum_{{\bf i}, {\bf i}' \not\in (1,j_{l-r})/I^r_{\ne}} {\langle}{\bf j}{\bf i}'| p_l |{\bf j}{\bf i} {\rangle}\cdot p_m |\check{{\bf i}}\, {\bf n}{\rangle}{\langle}\check{{\bf i}'}{\bf n}| p_m\\
& = \frac{\theta_q(k,l,m)}{[k+1]_q}(Z(1,{\bf j}) + Z(2,{\bf j})).
\end{aligned}$$ Now we set $\displaystyle \rho = \frac{1}{(N-1)^{l-r}}\sum_{{\bf j} \in I^{l-r}_{\ne}\backslash(1)}|{\bf j}{\bf n}{\rangle}{\langle}{\bf j}{\bf n}|$ and we get $$\Phi^{\bar{l},m}_k(\rho) = \frac{1}{(N-1)^{l-r}}\sum_{{\bf j} \in I^{l-r}_{\ne}\backslash(1)}(Z(1,{\bf j}) + Z(2,{\bf j})) = Z(1) + Z(2).$$ In other words, $$\begin{aligned}
Z(1)
& = \frac{[k+1]_q}{(N-1)^{l-r}\theta_q(k,l,m)}\sum_{{\bf j} \in I^{l-r}_{\ne}\backslash(1)}\sum_{{\bf i}\in (1,j_{l-r})/I^r_{\ne}} |\check{{\bf i}}{\bf n}{\rangle}{\langle}\check{{\bf i}}{\bf n}|\\
& = \frac{[k+1]_q}{(N-1)\theta_q(k,l,m)}\sum^N_{j_{l-r} =2 }\sum_{{\bf i}\in (1,j_{l-r})/I^r_{\ne}} |\check{{\bf i}}{\bf n}{\rangle}{\langle}\check{{\bf i}}{\bf n}|\\
& = \frac{(N-2)[k+1]_q}{(N-1)\theta_q(k,l,m)}\sum_{{\bf i}\in (1)/I^r_{\ne}} |\check{{\bf i}}{\bf n}{\rangle}{\langle}\check{{\bf i}}{\bf n}|.
\end{aligned}$$ As before we use Lemma \[lem-factorial\] to get $$\begin{aligned}
H(Z(1))
& = (N-1)^r \frac{(N-2)[k+1]_q}{(N-1)\theta_q(k,l,m)} \log \frac{(N-1)\theta_q(k,l,m)}{(N-2)[k+1]_q}\\
& = (1-\frac{1}{N})^r \frac{N-2}{N-1}\frac{N^r[k+1]_q}{\theta_q(k,l,m)} \log \frac{(N-1)\theta_q(k,l,m)}{(N-2)[k+1]_q}\\
& = (1+O(\frac{1}{N}))\log[ (1+O(\frac{1}{N}))N^r]
\end{aligned}$$ and $${{\operatorname{Tr}}}(Z(2)) = 1 - {{\operatorname{Tr}}}(Z(1)) = 1 - (N-1)^r \frac{(N-2)[k+1]_q}{(N-1)\theta_q(k,l,m)} = O(\frac{1}{N}).$$ By Lemma \[lem-Fannes-extend\] again we still have $$| H(\Phi^{\bar{l},m}_k(\rho)) - H(Z(1)) | \lesssim m \log N {{\operatorname{Tr}}}(Z(2)) + {{\operatorname{Tr}}}(Z(2))^{1/2} \lesssim O(\frac{1}{\sqrt{N}}).$$
For the complementary channel we similarly have $$\begin{aligned}
\frac{\theta_q(k,l,m)}{[k+1]_q} \Phi^{l,\bar{m}}_k(|{\bf j}{\bf n}{\rangle}{\langle}{\bf j}{\bf n}|)
& = \iota \otimes {{\operatorname{Tr}}}(A^{l,m}_k |{\bf j}{\bf n}{\rangle}{\langle}{\bf j}{\bf n}| (A^{l,m}_k )^*)\\
& = \sum_{{\bf i}, {\bf i}' \in I^r} p_l |{\bf j}{\bf i} {\rangle}{\langle}{\bf j}{\bf i}'| p_l \cdot {\langle}\check{{\bf i}'}{\bf n}| p_m |\check{{\bf i}}\, {\bf n}{\rangle}\\
& = \sum_{{\bf i}\in (1,j_{l-r})/I^r_{\ne}} |{\bf j}{\bf i} {\rangle}{\langle}{\bf j}{\bf i}| + \sum_{{\bf i}, {\bf i}' \not\in (1,j_{l-r})/I^r_{\ne}} {\langle}\check{{\bf i}'}{\bf n}| p_m |\check{{\bf i}}\, {\bf n}{\rangle}\cdot p_l |{\bf j}{\bf i} {\rangle}{\langle}{\bf j}{\bf i}'| p_l \\
& = \frac{\theta_q(k,l,m)}{[k+1]_q}(Y(1,{\bf j}) + Y(2,{\bf j})).
\end{aligned}$$ Thus, we have $$\Phi^{l,\bar{m}}_k(\rho) = \frac{1}{(N-1)^{l-r}}\sum_{{\bf j} \in I^{l-r}_{\ne}\backslash(1)}(Y(1,{\bf j}) + Y(2,{\bf j})) = Y(1) + Y(2),$$ which means $$Y(1) = \frac{[k+1]_q}{(N-1)^{l-r}\theta_q(k,l,m)}\sum_{{\bf j} \in I^{l-r}_{\ne}\backslash(1)}\sum_{{\bf i}\in (1,j_{l-r})/I^r_{\ne}} |{\bf j}{\bf i} {\rangle}{\langle}{\bf j}{\bf i}|.$$ Now we have $$\begin{aligned}
H(Y(1))
& = (N-2)(N-1)^{r-1}\frac{[k+1]_q}{\theta_q(k,l,m)} \log \frac{(N-1)^{l-r}\theta_q(k,l,m)}{[k+1]_q}\\
& = (1-\frac{2}{N}) (1-\frac{1}{N})^{r-1} \frac{N^r[k+1]_q}{\theta_q(k,l,m)} \log \frac{(N-1)^{l-r}\theta_q(k,l,m)}{[k+1]_q}\\
& = (1+O(\frac{1}{N}))\log[ (1+O(\frac{1}{N}))N^l]
\end{aligned}$$ and $${{\operatorname{Tr}}}(Y(2)) = 1 - {{\operatorname{Tr}}}(Y(1)) = 1 - (1-\frac{2}{N}) (1-\frac{1}{N})^{r-1} \frac{N^r[k+1]_q}{\theta_q(k,l,m)} = O(\frac{1}{N}).$$ Thus, we similarly get, by Lemma \[lem-Fannes-extend\], that $| H(\Phi^{l,\bar{m}}_k(\rho)) - H(Y(1)) | \lesssim O(\frac{1}{\sqrt{N}}).$
Combining all the above estimates we get $$\lim_{N\to \infty} |H(\Phi^{l,\bar{m}}_k(\rho)) - H(\Phi^{\bar{l},m}_k(\rho)) - \frac{l+k-m}{2}\cdot \log N| = 0,$$ which gives us the desired lower estimate for $Q^{(1)}(\Phi^{l,\bar{m}}_k)$ as $N\to \infty$.
For the case $k=l+m$ we actually have the following exact formulae. $$\Phi^{l,\bar{m}}_{l+m}( \frac{1}{(N-1)^l}\sum_{{\bf j} \in I^l_{\ne}\backslash(1)}| {\bf j}{\bf n} {\rangle}{\langle}{\bf j}{\bf n} |) = \frac{1}{(N-1)^l}\sum_{{\bf j} \in I^l_{\ne}\backslash(1)}| {\bf j} {\rangle}{\langle}{\bf j} |$$ and $$\Phi^{\bar{l},m}_{l+m}( \frac{1}{(N-1)^l}\sum_{{\bf j} \in I^l_{\ne}\backslash(1)}| {\bf j}{\bf n} {\rangle}{\langle}{\bf j}{\bf n} |) = | {\bf n} {\rangle}{\langle}{\bf n} |,$$ which tells us that $Q^{(1)}(\Phi^{l,\bar m}_{l+m}) \ge l \cdot \log (N-1)$.
The estimates for $Q^{(1)}(\Phi^{\bar{l},m}_k)$ can be obtained in a similar way.
Combining Theorem \[thm-MOE-sharp\] and Theorem \[thm-QC\], we obtain the following asymptotically sharp one-shot capacities:
\[cor-capacities\] For each admissible triple $(k,l,m) \in {\mathbb{N}}^3_0$ we have $$\frac{l+k-m}{2}\log(N) -C_1(N)\leq Q^{(1)}(\Phi^{l, \bar m}_k) \leq \chi(\Phi^{l,\bar m}_k)\leq \frac{l+k-m}{2}\log(N) +C_2(N)$$ and $$\frac{m+k-l}{2}\log(N) -D_1(N)\leq Q^{(1)}(\Phi^{\bar l, m}_k) \leq \chi(\Phi^{\bar l,m}_k)\leq \frac{m+k-l}{2}\log(N) +D_2(N)$$ with constants $C_1(N),C_2(N),D_1(N),D_2(N)\rightarrow 0$ as $N\rightarrow \infty$.
Theorem \[thm-QC\] directly gives us the wanted lower bounds, and Theorem \[thm-MOE-sharp\] together with the general bound $\chi(\Phi)\le \log \dim(H_{\mathrm{out}}) - H_{\min}(\Phi)$ completes the conclusion.
We note that Corollary \[cor-capacities\] gives us asymptotically sharp “one-shot” private capacities $P^{(1)}(\Phi^{l, \bar m}_k)$ and $P^{(1)}(\Phi^{\bar l, m}_k)$ since $$Q^{(1)}\leq P^{(1)}\leq \chi$$ in general. The one-shot private capacity $P^{(1)}$ is defined as $$\max \left \{ H(\sum_x p_x\Phi(\rho_x))-\sum_x p_x H(\Phi(\rho_x)) -H(\sum_x p_x\widetilde{\Phi}(\rho_x))+\sum_x p_xH(\widetilde{\Phi}(\rho_x)) \right\}$$ where the maximum runs over all ensembles of quantum states $\left \{(p_x),(\rho_x)\right\}$. See [@Wi17 Section 13.6] for details.
EBT/PPT and (anti-)degradability of TL-channels {#sec:EBP-PPT}
===============================================
Since we have studied the “one-shot” capacities $Q^{(1)}$ and $\chi$ for $O_N^+$-TL-channels in the previous section, it is very natural to investigate their regularized quantities $Q$ and $C$. Since our $O_N^+$-TL-channels are bistochastic, we know that the classical capacity $C$ is asymptotically bounded by $2\chi$ via Proposition \[prop-bistochastic-estimates\]: $$C(\Phi^{l, \bar m}_k)\leq (l+k-m)\log(N),~C(\Phi^{\bar l, m}_k)\leq (m+k-l)\log(N) .$$
Although the regularized quantities $Q$ and $C$ are computationally intractable for many channels, some structural properties such as EBT/PPT/(anti-)degradability enable us to handle the regularization issues (see Proposition \[prop:implications\]). However, we will show that our TL-channels associated with $O_N^+$ and $SU(2)$ have no such structural properties in most cases.
The case of $O^+_N$
-------------------
### EBT property
We now apply Theorem \[thm:choi-eq\] to investigate the EBT property for our $O_N^+$-TL-channels $\Phi_{k}^{\overline{l},m}$. Before coming to our result characterizing the EBT property for these channels, we first need an elementary lemma.
\[lem:ent-sub\] Let $H_A$ and $H_B$ be finite dimensional Hilbert spaces, let $0 \neq p \in B(H_B \otimes H_A)$ be an orthogonal projection, and let $H_0 \subseteq H_B \otimes H_A$ denote the range of $p$. If $H_0$ is an entangled subspace of $H_B \otimes H_A$, then the state $\rho := \frac{1}{\mathrm{dim} H_0}p$ is entangled.
We prove the contrapositive. If $\rho$ is separable, then we can write $$p = \sum_i |\xi_i \rangle \langle \xi_i| \otimes |\eta_i \rangle \langle \eta_i| \qquad (0 \neq \xi_i \in H_B, \ 0 \ne \eta_i \in H_A).$$
For each $i$ put $x_i = |\xi_i \rangle \langle \xi_i| \otimes |\eta_i \rangle \langle \eta_i|$. Then since $x_i \leq p$ and $p$ is a projection, it follows that $x_i = px_ip$, which implies that the range of $x_i$ is contained in the range of $p$. In particular, $\xi_i \otimes \eta_i \in H_0$, so $H_0$ contains a product vector and is therefore not an entangled subspace.
\[thm:EBT\] Let $(k,l,m) \in {\mathbb N}_0^3$ be an admissible triple. If $k \neq l-m$, then the quantum channel $\Phi_{k}^{\overline{l},m}$ is not EBT. Also, if $k\ne m-l$, then the quantum channel $\Phi_k^{l,\overline{m}}$ is not EBT.
We have from Theorem \[thm:choi-eq\] that $C_{\Phi_{k}^{\overline{l},m}} = \frac{[k+1]_q}{[l+1]_q} \alpha_l^{m,k}(\alpha_{l}^{m,k})^* \in {\mathcal}B(H_m \otimes H_k).$ Consider the orthogonal projection $p = \alpha_l^{m,k}(\alpha_{l}^{m,k})^*$. The range of $p$ is the subrepresentation of $H_m \otimes H_k$ equivalent to $H_l$, and by [@BrCo18b Theorem 3.2] this subspace is entangled iff $l\neq k+m$. Applying Lemma \[lem:ent-sub\], we conclude that $ \Phi_{k}^{\overline{l},m}$ is not EBT whenever $k \neq l-m$.
We note that Theorem \[thm:EBT\] leaves open whether or not the channels $\Phi_{l-m}^{\overline{l},m}$ are EBT. In this case, the corresponding Choi map is a multiple of a projection onto a separable subspace, and we do not know if this projection is a multiple of an entangled state.
### PPT/ (anti-)degradability
As the next step, one might naturally ask if $O_N^+$-TL-channels can have PPT property or (anti-)degradability. In fact, Theorem \[thm-QC\] provides a strong partial answer on these structural questions for large $N$ as follows:
1. The channel $\Phi^{l, \bar m}_k$ is not PPT if $k>m-l$ and $\Phi^{\bar l, m}_k$ is not PPT if $k>l-m$ for sufficiently large $N$. In particular, the channels $\Phi^{l,\bar m}_{l+m}$ and $\Phi^{\bar l, m}_{l+m}$ are not PPT for all $N\geq 3$.
2. The channels $\Phi^{l,\bar m}_k$ and $\Phi^{\bar l, m}_k$ are neither degradable nor anti-degradable if $k>|l-m|$ for sufficiently large $N$.
<!-- -->
1. Note that every PPT channel has zero quantum capacity and that $Q(\Phi^{l, \bar m}_k)> 0$ if $k > m-l$ for sufficiently large $N$. Similar arguments are valid for $\Phi^{\bar l, m}_k$.
2. Note that every anti-degradable channel must have zero quantum capacity, while on the other hand both $\Phi^{l, \bar m}_k$ and $\Phi^{\bar l, m}_k$ have strictly positive quantum capacities for sufficiently large $N$ if $k>|l-m|$.
The case of $SU(2)$
-------------------
We have a much better understanding of the TL-channels associated with $SU(2)$ than of the ones from $O^+_N$, based on the following concrete description of Clebsch-Gordan coefficients. For an admissible triple $(k,l,m)\in {\mathbb{N}}^3_0$ we consider the associated isometry $$\alpha^{l,m}_k |i{\rangle}=\sum_{j=0}^l \sum_{j'=0}^m C^{l,m,k}_{j,j',i}|j j'{\rangle}.$$
We actually have a precise but complicated formula (e.g. [@VK page 510]) for the constants $C^{l,m,k}_{j,j',i}$, each of which is a sum with multiple terms. Thus, the general constants $C^{l,m,k}_{j,j',i}$ are difficult to handle, but they satisfy several symmetries, and some extremal cases can be written in a simpler form.
\[prop-symmetry\] For any admissible triples $(k,l,m), (i,j,j') \in {\mathbb{N}}^3_0$ we have
1. $C^{l,m,k}_{j,j',i} = 0$ if $i + \frac{l+m-k}{2} \ne j+j'$,
2. $\begin{cases} {\langle}i_1| \Phi^{l,\bar{m}}_k(|i{\rangle}{\langle}j |)|j_1{\rangle}=0, & i_1-j_1\neq i-j\\ {\langle}i_2| \Phi^{\bar{l},m}_k(|i{\rangle}{\langle}j |)|j_2{\rangle}=0, & i_2-j_2\neq i-j\end{cases}$ for $\begin{cases} 0\leq i_1,j_1\leq l,~0\leq i,j\leq k\\ 0\leq i_2,j_2\leq m,~0\leq i,j\leq k\end{cases},$
3. $C^{l,m,k}_{j,j',i}=(-1)^{\frac{l+m-k}{2}}C^{m,l,k}_{j',j,i}$,
4. $C^{l,m,k}_{j,j',i}=(-1)^{\frac{l+m-k}{2}}C^{l,m,k}_{l-j,m-j',k-i}$,
5. $C^{l,m,k}_{j,j',i} \ne 0$ if $\displaystyle i+\frac{l+m-k}{2}=j+j'$ and if one of the following is true: $\begin{cases} j=0, l \\ j'=0, m\\ i=0,k\end{cases}$.
\(2) We have ${\langle}i_1| \Phi^{l,\bar{m}}_k(|i{\rangle}{\langle}j |)|j_1{\rangle}=\displaystyle \sum_{i_2=0}^m C^{l,m,k}_{i_1,i_2,i}\overline{C^{l,m,k}_{j_1,i_2,j}}=0$ if $i_1-i\neq j_1-j$ by (1) and a similar argument holds for $\Phi^{\bar{l},m}_k$.
\(5) If one of the parameters $i, j, j'$ becomes extremal, then the constant $C^{l,m,k}_{j,j',i}$ can be expressed in a single term, which is a ratio of several factorials by [@VK section 8.2.6] and the above symmetries (3) and (4).
The $SU(2)$-TL-channel $\Phi^{l, \bar m}_k$ is of the following form. $$\begin{aligned}
\Phi^{l, \bar m}_k(|i{\rangle}{\langle}\tilde{i}|)
& = (\iota \otimes {{\operatorname{Tr}}})(\alpha^{l,m}_k |i{\rangle}{\langle}\tilde{i}| (\alpha^{l,m}_k)^*) \nonumber \\
& = (\iota \otimes {{\operatorname{Tr}}})(\sum_{j, \tilde{j}=0}^l \sum_{j', \tilde{j'}=0}^m C^{l,m,k}_{j,j',i}\overline{C^{l,m,k}_{\tilde{j},\tilde{j'},\tilde{i}}}|j j'{\rangle}{\langle}\tilde{j} \tilde{j'}|)\nonumber \\
& = \sum_{j, \tilde{j}=0}^l \sum_{j'=0}^m C^{l,m,k}_{j,j',i}\overline{C^{l,m,k}_{\tilde{j},j',\tilde{i}}}\,|j{\rangle}{\langle}\tilde{j}|\\
& = \sum_{j'=0}^m \sum_{j, \tilde{j}=0}^l C^{m,l,k}_{j',j,i}\overline{C^{m,l,k}_{j',\tilde{j},\tilde{i}}}\,|j{\rangle}{\langle}\tilde{j}| = \Phi^{\bar m, l}_k (|i{\rangle}{\langle}\tilde{i}|).\nonumber
\end{aligned}$$ The fourth equality is due to (3) of Proposition \[prop-symmetry\].
For any admissible triple $(k,l,m) \in {\mathbb{N}}^3_0$ we have $\Phi^{l, \bar m}_k = \Phi^{\bar m, l}_k$. In particular, we have $\Phi^{l,\bar l}_k=\Phi^{\bar l, l}_k$, so that the channel $\Phi^{l,\bar l}_k$ is always degradable and anti-degradable.
This allows us to restrict our attention to the case of $l\ge m$.
### EBT/PPT properties
In this subsection, we completely characterize when the $SU(2)$-TL-channels $\Phi^{l,\overline{m}}_k$ and $\Phi^{\overline{l},m}_k$ are EBT or PPT. The main result of this subsection is as follows.
\[thm:PPT1\] Let $(k,l,m)\in {\mathbb{N}}^3_0$ be an admissible triple with $l\geq m$.
1. The channel $\Phi^{l,\bar{m}}_k$ is EBT if and only if it is PPT if and only if $k=0$.
2. The channel $\Phi^{\bar{l},m}_k$ is EBT if and only if it is PPT if and only if $k=l-m$.
\(1) If the channel $\Phi^{l,\bar{m}}_k$ is PPT, then its Choi matrix $$C_{T\circ \Phi}=(T\circ \Phi\otimes \iota)(\sum_{i,j=1}^{d_A} |i{\rangle}{\langle}j|\otimes |i{\rangle}{\langle}j|)=\sum_{i,j=1}^{d_A} T\circ \Phi(|i{\rangle}{\langle}j|)\otimes |i{\rangle}{\langle}j|$$ must be a positive semidefinite matrix. In particular, for any orthogonal unit vectors $v_1,v_2\in H_B\otimes H_A$ we should have $$\begin{bmatrix} {\langle}v_1|C_{T\circ \Phi}|v_1{\rangle}&{\langle}v_1|C_{T\circ \Phi}|v_2{\rangle}\\
{\langle}v_2| C_{T\circ \Phi}|v_1{\rangle}&{\langle}v_2 | C_{T\circ \Phi}|v_2{\rangle}\end{bmatrix} = \begin{bmatrix} a & b \\
\bar{b} & c \end{bmatrix} \ge 0.$$ We take a particular choice of $v_1$, $v_2$ as follows. $$\begin{cases} |v_1{\rangle}= | l 0{\rangle}, |v_2{\rangle}= |0l{\rangle}& \text{if}\; k>l\\ |v_1{\rangle}= | l 0{\rangle}, |v_2{\rangle}= |l-k, k{\rangle}& \text{if}\; k\le l\end{cases}.$$ Now we have $\displaystyle a = {\langle}v_1|C_{T\circ \Phi}|v_1{\rangle}= \sum_{j'=0}^m C^{l,m,k}_{l,j',0}\overline{C^{l,m,k}_{l,j',0}}$. Since the channel $\Phi^{l,\bar{m}}_0$ is trivially EBT (and PPT) we may assume $k>0$, then $l + j' \ne \frac{l+m-k}{2}$ from the restriction that $l\ge m$. Thus, we get $a = 0$ by (1) of Proposition \[prop-symmetry\]. Similarly, we can check that $b = C^{l,m,k}_{0, \frac{l+m-k}{2}, 0}\overline{C^{l,m,k}_{l, \frac{l+m-k}{2}, l}}$ for $k > l$. By (5) of Proposition \[prop-symmetry\] we know that $b\ne 0$, so that ${\rm det}\begin{bmatrix} a & b \\
\bar{b} & c \end{bmatrix} = - |b|^2 <0$, which is a contradiction. The case $k \le l$ can be done by the same argument.
\(2) We apply a similar argument as before. By taking $$\begin{cases} |v_1{\rangle}= | m 0{\rangle}, |v_2{\rangle}= |m-k, k{\rangle}& \text{if}\; l-m < k \le m\\ |v_1{\rangle}= | m 0{\rangle}, |v_2{\rangle}= |0 m{\rangle}& \text{if}\; k> \max\{m,\, l-m\} \end{cases}$$ we can similarly check that the matrix $ \begin{bmatrix} {\langle}v_1|C_{T\circ \Phi}|v_1{\rangle}&{\langle}v_1|C_{T\circ \Phi}|v_2{\rangle}\\
{\langle}v_2| C_{T\circ \Phi}|v_1{\rangle}&{\langle}v_2 | C_{T\circ \Phi}|v_2{\rangle}\end{bmatrix}$ is not positive semidefinite, so that the channel $\Phi^{\bar l, m}_k$ is not PPT if $k>l-m$.
But the case $k=l-m$ is no longer trivial. Note that we can pick a product vector $e\otimes f\in H_{l}\subseteq H_m\otimes H_{l-m}$ with $e\in H_m$ and $f\in H_{l-m}$. Then, by Theorem \[thm:choi-eq\] and Proposition \[prop:ave\], we have $$\begin{aligned}
\frac{1}{l-m+1}C_{\Phi^{\overline{l},m}_{l-m}}&=\frac{1}{l+1}\alpha^{m,l-m}_l (\alpha^{m,l-m}_l)^*\\
&=\int_{SU(2)}\pi_m(x^{-1}) |e {\rangle}{\langle}e | \pi_m(x) \otimes \pi_{l-m}(x^{-1}) |f {\rangle}{\langle}f | \pi_{l-m}(x)dx,\end{aligned}$$ where $dx$ implies the normalized Haar measure on $SU(2)$. This implies that the normalized Choi matrix of $\Phi^{\overline{l},m}_{l-m}$ is a separable state since the set of separable states are closed.
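The PPT criterion used throughout this proof — positive semidefiniteness of the partially transposed Choi matrix — is straightforward to test numerically for small examples. A generic sketch (our own illustration, with `numpy`; the test states below are standard textbook examples rather than the TL Choi matrices themselves):

```python
import numpy as np

def partial_transpose(X, dA, dB):
    # Transpose the second tensor factor of an operator on H_A ⊗ H_B.
    return (X.reshape(dA, dB, dA, dB)
             .transpose(0, 3, 2, 1)
             .reshape(dA * dB, dA * dB))

def is_ppt(rho, dA, dB, tol=1e-9):
    # A state is PPT iff its partial transpose is positive semidefinite.
    return bool(np.linalg.eigvalsh(partial_transpose(rho, dA, dB)).min() >= -tol)

# Maximally entangled 2-qubit state: not PPT.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
bell = np.outer(psi, psi)
# The maximally mixed state is separable, hence PPT.
mixed = np.identity(4) / 4
```

In the 2×2-submatrix argument above, a vanishing diagonal entry together with a nonzero off-diagonal entry forces a negative eigenvalue, which is exactly what `is_ppt` detects via the minimal eigenvalue.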
(Anti-)Degradability {#subsec:highest}
--------------------
We first present the following cases in which $SU(2)$-TL-channels are (anti-)degradable.
\[thm:highest\] Let $(k,l,m)\in {\mathbb{N}}^3_0$ be an admissible triple with $l\ge m$.
1. The channel $\Phi^{l,\bar{m}}_k$ is degradable if (a) $l=m$ or (b) $k=l+m$ or (c) $k = l-m$. Moreover, we have a degrading channel for the highest weight case as follows. $$\label{eq-degrading}
\Phi^{m,\overline{l-m}}_l\circ \Phi^{l,\bar{m}}_{l+m}=\Phi^{m, \bar{l}}_{l+m}.$$
2. The channel $\Phi^{l, \bar m}_k$ is not anti-degradable for $l>m$. Equivalently, $\Phi^{\bar l , m}_k$ is not degradable for $l>m$.
\(1) For the identity we need to show that for any $0\leq i,j\leq l+m$ and for any $s_2$ such that $\max \left \{0,i-j\right\}\leq s_2\leq \min \left \{m,m+i-j\right\}$, $$(\Phi^{\overline{l-m},m}_l\circ \Phi^{l,\bar{m}}_{l+m}(|i{\rangle}{\langle}j |))_{s_2,s_2+j-i}=(\Phi^{\bar{l},m}_{l+m}(|i {\rangle}{\langle}j|))_{s_2,s_2+j-i}$$ by (2) of Proposition \[prop-symmetry\].
Equivalently, let us show that for any $\max \left \{0,i-j\right\}\leq s_2\leq \min \left \{m,m+i-j\right\}$ $$\sum_{i_2}C^{l,m,l+m}_{i-i_2,i_2,i}\overline{C^{l,m,l+m}_{j-i_2,i_2,j}}C^{l-m,m,l}_{i-i_2-s_2,s_2,i-i_2}\overline{C^{l-m,m,l}_{i-i_2-s_2,s_2+j-i,j-i_2}}=C^{l,m,l+m}_{i-s_2,s_2,i}\overline{C^{l,m,l+m}_{i-s_2,s_2+j-i,j}},$$ where $i_2$ runs over $\max \left \{0,i-s_2-l+m\right\}\leq i_2\leq \min \left \{m,i-s_2\right\}$.
We use the following explicit formula for Clebsch-Gordan coefficients in the highest weight case, namely for any $l,m$ $$C^{l,m,l+m}_{j_1,j_2,j}=\delta_{j_1+j_2,j}\sqrt{\frac{l! m!}{(l+m)!}}\sqrt{\frac{j!(l+m-j)!}{j_1! j_2! (l-j_1)!(m-j_2)!}}.$$
Now, we have $$\begin{aligned}
\lefteqn{\sum_{i_2}C^{l,m,l+m}_{i-i_2,i_2,i}\overline{C^{l,m,l+m}_{j-i_2,i_2,j}}C^{l-m,m,l}_{i-i_2-s_2,s_2,i-i_2}\overline{C^{l-m,m,l}_{i-i_2-s_2,s_2+j-i,j-i_2}}}\\
\notag&=\frac{l!m!}{(l+m)!}\frac{(l-m)!m!}{l!}\sqrt{\frac{i!(l+m-i)!j!(l+m-j)!}{s_2!(m-s_2)!(s_2+j-i)!(m-s_2-j+i)!}}\\
\notag&\;\;\;\; \times \sum_{i_2} \frac{1}{i_2!(m-i_2)!(i-i_2-s_2)!(l-m+s_2+i_2-i)!}\\
\notag&= \frac{l!m!}{(l+m)!l!}\sqrt{\frac{i!(l+m-i)!j!(l+m-j)!}{s_2!(m-s_2)!(s_2+j-i)!(m-s_2-j+i)!}}\sum_{i_2} {m \choose i_2}{l-m \choose i-s_2-i_2}\\
\label{eq1}&=\frac{l!m!}{(l+m)!l!}\sqrt{\frac{i!(l+m-i)!j!(l+m-j)!}{s_2!(m-s_2)!(s_2+j-i)!(m-s_2-j+i)!}}{l \choose i-s_2}\\
\notag&= \frac{l!m!}{(l+m)!}\frac{1}{(i-s_2)!(l+s_2-i)!}\sqrt{\frac{i!(l+m-i)!j!(l+m-j)!}{s_2!(m-s_2)!(s_2+j-i)!(m-s_2-j+i)!}}\\
\notag&= \sqrt{\frac{l!m!}{(l+m)!}}\sqrt{\frac{i!(l+m-i)!}{(i-s_2)!s_2!(l-i+s_2)!(m-s_2)!}}\\
\notag& \;\;\;\; \times \sqrt{\frac{l!m!}{(l+m)!}}\sqrt{\frac{j!(l+m-j)!}{(i-s_2)!(s_2+j-i)!(l-i+s_2)!(m-s_2-j+i)!}}\\
\notag&= C^{l,m,l+m}_{i-s_2,s_2,i}\overline{C^{l,m,l+m}_{i-s_2,s_2+j-i,j}}.\end{aligned}$$
The third equality above follows from the fact that $${l \choose i-s_2}=\sum_{\max \left \{0,i-s_2-l+m \right\}\leq i_2\leq \min \left \{m,i-s_2\right\}} {m \choose i_2} {l-m \choose i-s_2-i_2}.$$
\(2) By Proposition \[prop-bistochastic-estimates\] and Proposition \[prop-CGchannel-bistochastic\] we know that $$0< \log(\frac{l+1}{m+1}) \leq Q^{(1)}(\Phi^{l, \bar m}_k),$$ which yields the desired conclusion.
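The degrading identity in (1) can also be checked numerically for small parameters. The sketch below builds the highest weight isometries from the closed Clebsch-Gordan formula displayed in the proof and verifies $\Phi^{1,\bar 1}_2 \circ \Phi^{2,\bar 1}_3 = \Phi^{1,\bar 2}_3$, the case $l=2$, $m=1$ of (\[eq-degrading\]). The zero-indexed labels and the convention that the barred tensor factor is traced out are implementation choices inferred from the proofs above.

```python
import numpy as np
from math import factorial, sqrt

def cg_top(l, m, j1, j2, j):
    """Highest weight Clebsch-Gordan coefficient C^{l,m,l+m}_{j1,j2,j}
    from the closed formula in the proof (all labels zero-indexed)."""
    if j1 + j2 != j or not (0 <= j1 <= l and 0 <= j2 <= m):
        return 0.0
    return sqrt(factorial(l) * factorial(m) / factorial(l + m)) * \
        sqrt(factorial(j) * factorial(l + m - j)
             / (factorial(j1) * factorial(j2) * factorial(l - j1) * factorial(m - j2)))

def alpha_top(l, m):
    """Isometry alpha^{l,m}_{l+m} : H_{l+m} -> H_l (x) H_m."""
    A = np.zeros(((l + 1) * (m + 1), l + m + 1))
    for j in range(l + m + 1):
        for j1 in range(max(0, j - m), min(l, j) + 1):
            A[j1 * (m + 1) + (j - j1), j] = cg_top(l, m, j1, j - j1, j)
    return A

def phi(l, m, rho):
    """Phi^{l, bar m}_{l+m}: conjugate by alpha^{l,m}_{l+m}, trace out the H_m leg."""
    A = alpha_top(l, m)
    big = (A @ rho @ A.conj().T).reshape(l + 1, m + 1, l + 1, m + 1)
    return np.einsum('arbr->ab', big)

# alpha_top really is an isometry (orthonormal columns).
A = alpha_top(2, 1)
assert np.allclose(A.T @ A, np.eye(4))

# Degrading identity for l = 2, m = 1 on a random state.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
rho = X @ X.T / np.trace(X @ X.T)
lhs = phi(1, 1, phi(2, 1, rho))   # Phi^{1,bar 1}_2 o Phi^{2,bar 1}_3
rhs = phi(1, 2, rho)              # Phi^{1,bar 2}_3
print(np.abs(lhs - rhs).max())    # 0 up to rounding
```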
\[ex:non-deg-non-antideg\]
The channel $\Phi^{l, \bar m}_k$ can fail to be degradable for intermediate $l-m<k<l+m$, at least in low-dimensional examples. The strategy is to find explicit states $\rho\in M_{k+1}$ such that $$0<H(\Phi^{\bar l, m}_k(\rho))-H(\Phi^{l, \bar m}_k(\rho))\leq Q^{(1)}(\Phi^{\bar l, m}_k).$$ The inequality above implies that $\Phi^{\bar l, m}_k$ is not anti-degradable and, equivalently, that $\Phi^{l, \bar m}_k$ is not degradable.
The channel $\Phi^{3,\bar 2}_3$ is not degradable. Indeed, for $\rho=\displaystyle \left [ \begin{array}{cccc} 0.25&0&0&0\\ 0&0.75&0&0 \\ 0&0&0&0\\0&0&0&0 \end{array} \right ]$ we have $$\begin{aligned}
&H(\Phi^{\bar 3, 2}_3(\rho))-H(\Phi^{3, \bar 2}_3(\rho))\\
& = H(\left [ \begin{array}{ccc}
0.5&0&0\\ 0&0.2&0\\0&0&0.3
\end{array} \right ]
)-H(\left [ \begin{array}{cccc}
0.45 &0&0&0\\ 0&0.15&0&0\\
0&0&0.4&0\\
0&0&0&0
\end{array} \right ])\\
&\approx 0.0192,\end{aligned}$$ where the first equality is obtained by the precise description of the associated isometry $$\alpha^{3,2}_3:\displaystyle {\mathbb{C}}^4\rightarrow {\mathbb{C}}^4\otimes {\mathbb{C}}^3, \begin{array}{ll} |1{\rangle}&\mapsto -\sqrt{\frac{3}{5}} |12{\rangle}+ \sqrt{\frac{2}{5}} |21{\rangle}\\ |2{\rangle}& \mapsto -\sqrt{\frac{2}{5}} |13{\rangle}-\sqrt{\frac{1}{15}} |22{\rangle}+\sqrt{\frac{8}{15}}|31{\rangle}\end{array}$$ using the known formula of Clebsch-Gordan coefficients. Here, $\left \{|j{\rangle}\right\}_{j=1}^{n+1}$ refers to the canonical orthonormal basis of $H_n={\mathbb{C}}^{n+1}$.
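The numbers above can be reproduced directly from the displayed isometry; the short sketch below (with zero-indexed basis labels as an implementation choice) recomputes both outputs and the entropy gap, in natural-log units.

```python
import numpy as np

# alpha^{3,2}_3 : C^4 -> C^4 (x) C^3 as displayed above (the text's |1>, |2>
# become columns 0 and 1 after zero-indexing).
a = np.zeros((4, 3, 4))
a[0, 1, 0] = -np.sqrt(3 / 5);  a[1, 0, 0] = np.sqrt(2 / 5)
a[0, 2, 1] = -np.sqrt(2 / 5);  a[1, 1, 1] = -np.sqrt(1 / 15);  a[2, 0, 1] = np.sqrt(8 / 15)
A = a.reshape(12, 4)

rho = np.diag([0.25, 0.75, 0.0, 0.0])
big = (A @ rho @ A.conj().T).reshape(4, 3, 4, 3)
out_bar = np.einsum('rarb->ab', big)   # Phi^{bar 3,2}_3: trace out the H_3 leg
out     = np.einsum('arbr->ab', big)   # Phi^{3,bar 2}_3: trace out the H_2 leg

def H(x):
    """von Neumann entropy in nats."""
    p = np.linalg.eigvalsh(x)
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

gap = H(out_bar) - H(out)
# diagonals (0.5, 0.2, 0.3) and (0.45, 0.15, 0.4, 0); gap approx 0.0192 nats
print(np.diag(out_bar), np.diag(out), gap)
```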
For the channels $\Phi^{\bar{l},m}_{l+m}$ with $l\geq m$, we have $$0=Q(\Phi^{\bar{l},m}_{l+m})<C(\Phi^{\bar{l},m}_{l+m})=\log(\mathrm{dim}(H_m))$$ by Theorem \[thm:highest\], [@Hol-book Proposition 8.8] and the fact that $H_{\min}(\Phi^{l, \bar{m}}_{l+m})=0$. This means that, outside the realm of entanglement-breaking channels, there exist channels which completely destroy quantum information even though all classical information can be preserved.
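For the smallest instance $l=m=1$ this dichotomy can be seen concretely. Assuming the standard triplet form of $\alpha^{1,1}_2$ (an illustrative convention, not quoted from the text), the input $|0\rangle\langle 0|$ already yields a pure output under $\Phi^{1,\bar 1}_2$, while $\Phi^{\bar 1,1}_2$ sends two basis states to orthogonal pure states, so $\log \dim H_1$ classical information survives:

```python
import numpy as np

# Triplet embedding alpha^{1,1}_2 : H_2 -> H_1 (x) H_1 (assumed convention).
alpha = np.zeros((4, 3))
alpha[0, 0] = 1.0
alpha[1, 1] = alpha[2, 1] = 1 / np.sqrt(2)
alpha[3, 2] = 1.0

def tr_first(rho):    # Phi^{bar 1,1}_2: trace out the first leg
    big = (alpha @ rho @ alpha.conj().T).reshape(2, 2, 2, 2)
    return np.einsum('rarb->ab', big)

def tr_second(rho):   # Phi^{1,bar 1}_2: trace out the second leg
    big = (alpha @ rho @ alpha.conj().T).reshape(2, 2, 2, 2)
    return np.einsum('arbr->ab', big)

e0, e2 = np.diag([1.0, 0.0, 0.0]), np.diag([0.0, 0.0, 1.0])

# H_min(Phi^{1,bar 1}_2) = 0: a basis input already gives a rank-one output.
print(np.linalg.eigvalsh(tr_second(e0)))        # eigenvalues 0 and 1

# Phi^{bar 1,1}_2 maps |0><0| and |2><2| to orthogonal pure states, so one
# classical bit (log dim H_1) gets through even though Q = 0.
print(tr_first(e0), tr_first(e2))
```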
Tensor products of Temperley-Lieb channels and outputs of entangled covariant states {#sec:tensor}
=====================================================================================
It is well known that additivity of Holevo capacities is equivalent to additivity of minimum output entropies [@Sh04b] and Hastings [@Ha09] established non-additivity of the minimum output entropy by exhibiting the existence of random unitary channels $\Phi$ such that $$\label{ineq-Hastings}
H_{\min}(\Phi\otimes \overline{\Phi})< H_{\min}(\Phi)+H_{\min}(\overline{\Phi}),$$ where $\overline{\Phi}$ is the conjugate channel of $\Phi$. In the proof of (\[ineq-Hastings\]), the maximally entangled state was used to estimate an upper bound of $H_{\min}(\Phi\otimes \overline{\Phi})$. Since we know the minimum output entropies for single $O_N^+$-TL-channels in an asymptotic sense, it is natural to try to evaluate the minimum output entropies for tensor products of $O_N^+$-TL-channels. Although we are unable to fully evaluate such minimum output entropies for all tensor products, we do establish upper bounds for the minimum output entropies $H_{\min}(\Phi^{\bar l_1, m_1}_{k_1}\otimes \Phi^{l_2, \bar m_2}_{k_2})$. This is achieved by evaluating the entropies $H((\Phi^{\bar l_1, m_1}_{k_1}\otimes \Phi^{l_2, \bar m_2}_{k_2})(\rho))$ for certain entangled states $\rho$. More precisely, we will present explicit formulae for $$H((\Phi^{\bar l_1, m_1}_{k_1}\otimes \Phi^{l_2, \bar m_2}_{k_2})(\frac{1}{[i+1]_q}\alpha^{k_1,k_2}_i (\alpha^{k_1,k_2}_i)^*))$$ for all admissible triples $(i,k_1,k_2)\in {\mathbb{N}}^3_0$.
In this section we use all the notation and planar string diagram formalism for $\text{Rep}(O^+_F)$ introduced in Section \[TL-diagrams\].
Tetrahedral nets and the quantum $6j$-symbols
---------------------------------------------
Following [@KaLi94], let ${\mathcal}A \subset {\mathbb N}_0^6$ be the set of all sextuples $\left [\begin{matrix} a & b &i \\ c& d&j
\end{matrix}\right]$ with the property that each of the following triples $$(a,d,i), \ (b,c,i), \ (a,b ,j), \ (d,c,j)$$ is admissible. We define the *tetrahedral net* to be the function $\text{Tet}_q:{\mathcal}A \to {\mathbb C}$ given by $$\text{Tet}_q\left [\begin{matrix} a & b &i \\ c& d&j
\end{matrix}\right] = \tau_i((A_i^{b,c})^* (\iota_{H_b} \otimes (A_0^{j,j})^* \otimes \iota_{H_c})(A_a^{b,j} \otimes A_d^{j,c})A_i^{a,d}).$$ In terms of planar string diagrams, the Tet$_q$ functions are given by $$\text{Tet}_q\left [\begin{matrix} a & b &i \\ c& d&j
\end{matrix}\right] = \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-1,-5) {$i$};
\node at (-1,13) {$i$};
\node at (-2,1) {$a$};
\node at (2,1) {$d$};
\node at (0,5) {$j$};
\node at (-2,7) {$b$};
\node at (2,7) {$c$};
\draw [-, color=black]
(-4,4) -- (4, 4);
\draw [-, color=black]
(-4,4) -- (0,0);
\draw [-, color=black]
(0,-0) -- (4,4);
\draw [-, color=black]
(0,-4) -- (0,0);
\draw [-, color=black]
(-4,4) -- (0,8);
\draw [-, color=black]
(4,4) -- (0,8);
\draw [-, color=black]
(0,8) -- (0,12);
\draw [-, color=black]
(0,12) to [bend left =110] (0,-4);
\end{tikzpicture} .$$
Next, we introduce the [*quantum $6j$-symbols*]{} $\{\cdot\}_q: {\mathcal}A \to {\mathbb C}$, which are defined in terms of the tetrahedral nets as follows:
$$\left\{\begin{matrix} a & b &i \\ c& d&j
\end{matrix}\right\}_q = \frac{\text{Tet}_q\left [\begin{matrix} a & b &i \\ c& d&j
\end{matrix}\right][i+1]_q}{\theta_q(a,d,i)\theta_q(b,c,i)}.$$
We note that there exist simple algebraic formulae that allow one to numerically evaluate the tetrahedral nets (and hence also the quantum $6j$-symbols). See [@KaLi94 Section 9.11] for example.
The most important geometric-algebraic feature of the quantum $6j$-symbols $\left\{\begin{matrix} a & b &i \\ c& d&j
\end{matrix}\right\}_q$ is that they arise as the basis change coefficients for two canonical bases for the Hom-space $\text{Hom}_{O^+_F}(H_a \otimes H_d, H_b \otimes H_c)$. More precisely, $\text{Hom}_{O^+_F}(H_a \otimes H_d, H_b \otimes H_c)$ has one linear basis given by the string diagrams $$\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (1,0) {$i$};
\node at (3,3) {$c$};
\node at (-3,3) {$b$};
\node at (3,-3) {$d$};
\node at (-3,-3) {$a$};
\draw [-, color=black]
(0,-2) -- (0, 2);
\draw [-, color=black]
(0,2) -- (2, 4);
\draw [-, color=black]
(0,2) -- (-2, 4);
\draw [-, color=black]
(0,-2) -- (-2, -4);
\draw [-, color=black]
(0,-2) -- (2, -4);
\end{tikzpicture} \qquad (i \in {\mathbb N}_0 \text{ such that }(i,a,d), \ (i,b,c) \text{ admissible}),$$ and another linear basis given by $$\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (0,0) {$j$};
\node at (3,3) {$c$};
\node at (-3,3) {$b$};
\node at (3,-3) {$d$};
\node at (-3,-3) {$a$};
\draw [-, color=black]
(2,-2) -- (2, 4);
\draw [-, color=black]
(-2,-2) -- (-2, 4);
\draw [-, color=black]
(-2,1) -- (2,1);
\end{tikzpicture} \qquad (j \in {\mathbb N}_0 \text{ such that }(j,a,b), \ (j,c,d) \text{ admissible}).$$ We then have that the quantum $6j$-symbols are the basis change coefficients between these two bases: $$\begin{aligned}
\label{6j1}\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (0,0) {$j$};
\node at (3,3) {$c$};
\node at (-3,3) {$b$};
\node at (3,-3) {$d$};
\node at (-3,-3) {$a$};
\draw [-, color=black]
(2,-2) -- (2, 4);
\draw [-, color=black]
(-2,-2) -- (-2, 4);
\draw [-, color=black]
(-2,1) -- (2,1);
\end{tikzpicture}= \sum_{i}\left\{\begin{matrix} a & b &i \\ c& d&j
\end{matrix}\right\}_q \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (1,0) {$i$};
\node at (3,3) {$c$};
\node at (-3,3) {$b$};
\node at (3,-3) {$d$};
\node at (-3,-3) {$a$};
\draw [-, color=black]
(0,-2) -- (0, 2);
\draw [-, color=black]
(0,2) -- (2, 4);
\draw [-, color=black]
(0,2) -- (-2, 4);
\draw [-, color=black]
(0,-2) -- (-2, -4);
\draw [-, color=black]
(0,-2) -- (2, -4);
\end{tikzpicture}, \end{aligned}$$ and similarly by a rotational symmetry argument, $$\begin{aligned}
\label{6j2}\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (1,0) {$j$};
\node at (3,3) {$b$};
\node at (-3,3) {$a$};
\node at (3,-3) {$c$};
\node at (-3,-3) {$d$};
\draw [-, color=black]
(0,-2) -- (0, 2);
\draw [-, color=black]
(0,2) -- (2, 4);
\draw [-, color=black]
(0,2) -- (-2, 4);
\draw [-, color=black]
(0,-2) -- (-2, -4);
\draw [-, color=black]
(0,-2) -- (2, -4);
\end{tikzpicture}= \sum_{i}\left\{\begin{matrix} a & b &i \\ c& d&j
\end{matrix}\right\}_q\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (0,0) {$i$};
\node at (3,3) {$c$};
\node at (-3,3) {$b$};
\node at (3,-3) {$d$};
\node at (-3,-3) {$a$};
\draw [-, color=black]
(2,-2) -- (2, 4);
\draw [-, color=black]
(-2,-2) -- (-2, 4);
\draw [-, color=black]
(-2,1) -- (2,1);
\end{tikzpicture}.\end{aligned}$$
The following formula involving three-vertices and tetrahedral nets will be handy in the next subsection.
\[tetra-lemma\] Let $\left [\begin{matrix} a & b &i \\ c& d&j
\end{matrix}\right] \in {\mathcal}A$. Then $$\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-1,-5) {$i$};
\node at (-2,1) {$a$};
\node at (2,1) {$d$};
\node at (0,5) {$j$};
\node at (-6,7) {$b$};
\node at (6,7) {$c$};
\draw [-, color=black]
(-4,4) -- (4, 4);
\draw [-, color=black]
(-4,4) -- (0,0);
\draw [-, color=black]
(0,-0) -- (4,4);
\draw [-, color=black]
(0,-4) -- (0,0);
\draw [-, color=black]
(-4,4) -- (-8,8);
\draw [-, color=black]
(4,4) -- (8,8);
\end{tikzpicture} =\frac{ \text{Tet}_q\left [\begin{matrix} a & b &i \\ c& d&j
\end{matrix}\right]}{\theta_q(i,b,c)}\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.2]
\node at (-1,-5) {$i$};
\node at (-6,7) {$b$};
\node at (6,7) {$c$};
\draw [-, color=black]
(-4,4) -- (0,0);
\draw [-, color=black]
(0,-0) -- (4,4);
\draw [-, color=black]
(0,-4) -- (0,0);
\draw [-, color=black]
(-4,4) -- (-8,8);
\draw [-, color=black]
(4,4) -- (8,8);
\end{tikzpicture} .$$
Denote the quantity on the left hand side by $B$. Then $B \in \text{Hom}_{O^+_F}(H_i, H_b\otimes H_c) = {\mathbb C}A_{i}^{b,c}$, and so there exists $\lambda \in {\mathbb C}$ such that $B = \lambda A_{i}^{b,c}$ (i.e., $B$ is a multiple of a three-vertex). But then we have $$\text{Tet}_q\left [\begin{matrix} a & b &i \\ c& d&j
\end{matrix}\right] = \tau_i((A_{i}^{b,c})^*B) = \tau_i((A_{i}^{b,c})^*\lambda A_{i}^{b,c}) = \lambda \theta_q(i,b,c).$$
Tensor products of TL-channels and outputs of entangled states
--------------------------------------------------------------
Here we address tensor products of the form $\Phi_{k_1}^{\bar l_1, m_1} \otimes \Phi_{k_2}^{l_2, \bar m_2}$, and compute explicitly the outputs of $O^+_F$-covariant states of the form $\rho_{i}^{k_1, k_2} = \frac{1}{[i+1]_q} \alpha_i^{k_1, k_2}(\alpha_i^{k_1, k_2})^*$, for all admissible triples $(i,k_1,k_2)$. Note that in the special case of $i = 0$ and $k_1 = k_2$, we have that $\rho_0^{k,k}$ is a maximally entangled state, and in general, $\rho_i^{k_1, k_2}$ is an entangled state [@BrCo17b Theorem 5.5] if $k_1,k_2>0$. In order to ease the notational burden in the following theorem, let us fix once and for all admissible triples $(i, k_1, k_2)$, $(k_j, l_j, m_j) \in {\mathbb N}_0^3$ ($j = 1,2$), and let $X_i = \big( \Phi_{k_1}^{\bar l_1, m_1} \otimes \Phi_{k_2}^{l_2, \bar m_2}\big)(\rho_i^{k_1, k_2})$.
We have the following spectral decomposition for $X_i$: $$X_i =\sum_{\substack{l=m_1+l_2-2r \\
0 \le r \le \min\{m_1,l_2\}}} \lambda_{i, l}^{m_1, l_2} \alpha_l^{m_1,l_2}(\alpha_l^{m_1,l_2})^*,$$ where $$\begin{aligned}
\lambda_{i, l}^{m_1, l_2}
&=\Bigg(\frac{[i+1]_q[k_1+1]_q[k_2+1]_q\theta_q(l, m_1,l_2)}{[l+1]_q\theta_q(k_1,l_1,m_1)\theta_q(k_2,l_2,m_2)\theta_q(i,k_1,k_2)}\Bigg) \\
&\;\;\;\;\times \sum_{\substack{j=2t\\ 0 \le t \le \min\{k_1,k_2 \}}}
\frac{ \text{\small $\left\{\begin{matrix} k_1 & k_2 & j \\ k_2& k_1& i
\end{matrix}\right\}_q
\text{Tet}_q\left [\begin{matrix} l_1 & m_1 &m_1 \\ j& k_1&k_1
\end{matrix}\right]\text{Tet}_q\left [\begin{matrix} k_2 & j &l_2 \\ l_2& m_2&k_2
\end{matrix}\right]
\left\{\begin{matrix} m_1 & m_1 &l \\ l_2& l_2&j
\end{matrix}\right\}_q$}}{\theta_q (m_1,m_1, j)\theta_q (l_2,j,l_2)}, \end{aligned}$$ and occurs with multiplicity $[l+1]_q$.
We have that, up to planar isotopy, the planar tangle representing $X_i$ is given by: $$X_i = \frac{[i+1]_q[k_1+1]_q[k_2+1]_q}{\theta_q (k_1,k_2,i)\theta_q (l_1,m_1,k_1) \theta_q (l_2,m_2,k_2)} \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.17]
\node at (1,0) {$i$};
\node at (3,3) {{\footnotesize $ k_2$}};
\node at (-3,3) {{\footnotesize $ k_1$}};
\node at (-5,8) {$m_1$};
\node at (5,8) {$l_2$};
\node at (-5,-8) {$m_1$};
\node at (5,-8) {$l_2$};
\node at (3,-3) {{\footnotesize $k_2$}};
\node at (-3,-3) {{\footnotesize $k_1$}};
\node at (-7,0) {$l_1$};
\node at (8,0) {$m_2$};
\draw [-, color=black]
(0,-2) -- (0, 2);
\draw [-, color=black]
(0,2) -- (3, 5);
\draw [-, color=black]
(0,2) -- (-3, 5);
\draw [-, color=black]
(0,-2) -- (-3, -5);
\draw [-, color=black]
(0,-2) -- (3, -5);
\draw [-, color=black]
(-3,-9) -- (-3, -5);
\draw [-, color=black]
(3,-9) -- (3, -5);
\draw [-, color=black]
(-3,5) -- (-3,9);
\draw [-, color=black]
(3,5) -- (3, 9);
\draw [-, color=black]
(-3,5) to [bend right = 90] (-3,-5);
\draw [-, color=black]
(3,5) to [bend left = 90] (3, -5);
\end{tikzpicture}$$
Using the formulae - for the quantum $6j$-symbols together with Lemma \[tetra-lemma\], we have $$\begin{aligned}
&\begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.17]
\node at (1,0) {$i$};
\node at (3,3) {{\footnotesize $ k_2$}};
\node at (-3,3) {{\footnotesize $ k_1$}};
\node at (-5,8) {$m_1$};
\node at (5,8) {$l_2$};
\node at (-5,-8) {$m_1$};
\node at (5,-8) {$l_2$};
\node at (3,-3) {{\footnotesize $k_2$}};
\node at (-3,-3) {{\footnotesize $k_1$}};
\node at (-7,0) {$l_1$};
\node at (8,0) {$m_2$};
\draw [-, color=black]
(0,-2) -- (0, 2);
\draw [-, color=black]
(0,2) -- (3, 5);
\draw [-, color=black]
(0,2) -- (-3, 5);
\draw [-, color=black]
(0,-2) -- (-3, -5);
\draw [-, color=black]
(0,-2) -- (3, -5);
\draw [-, color=black]
(-3,-9) -- (-3, -5);
\draw [-, color=black]
(3,-9) -- (3, -5);
\draw [-, color=black]
(-3,5) -- (-3,9);
\draw [-, color=black]
(3,5) -- (3, 9);
\draw [-, color=black]
(-3,5) to [bend right = 90] (-3,-5);
\draw [-, color=black]
(3,5) to [bend left = 90] (3, -5);
\end{tikzpicture}
= \sum_j
\left\{\begin{matrix} k_1 & k_2 & j \\ k_2& k_1& i
\end{matrix}\right\}_q \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.17]
\node at (0,-1.2) {\footnotesize $j$};
\node at (1.9,3) {\footnotesize $k_2$};
\node at (-1.9,3) {\footnotesize $k_1$};
\node at (-5,8) {$m_1$};
\node at (5,8) {$l_2$};
\node at (-5,-8) {$m_1$};
\node at (5,-8) {$l_2$};
\node at (1.9,-3) {\footnotesize$k_2$};
\node at (-1.9,-3) {\footnotesize $k_1$};
\node at (-7,0) {\footnotesize$l_1$};
\node at (7.5,0) {\footnotesize $m_2$};
\draw [-, color=black]
(-3,0) -- (3, 0);
\draw [-, color=black]
(-3,-5) -- (-3, 5);
\draw [-, color=black]
(3,-5) -- (3, 5);
\draw [-, color=black]
(-3,-9) -- (-3, -5);
\draw [-, color=black]
(3,-9) -- (3, -5);
\draw [-, color=black]
(-3,5) -- (-3,9);
\draw [-, color=black]
(3,5) -- (3, 9);
\draw [-, color=black]
(-3,5) to [bend right = 90] (-3,-5);
\draw [-, color=black]
(3,5) to [bend left = 90] (3, -5);
\end{tikzpicture}
= \sum_j
\left\{\begin{matrix} k_1 & k_2 & j \\ k_2& k_1& i
\end{matrix}\right\}_q \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.17]
\draw [-, color=black]
(-1,0) to [bend right = -90] (1, 0);
\node at (0,-1) {\footnotesize$j$};
\draw [-, color=black]
(-3,-5) -- (-1, 0);
\node at (-1,-3) {\footnotesize$k_1$};
\draw [-, color=black]
(-5,0) -- (-1, 0);
\node at (-2.5,1) {\footnotesize$k_1$};
\draw [-, color=black]
(3,-5) -- (1, 0);
\node at (1,-3) {\footnotesize$k_2$};
\draw [-, color=black]
(1,0) -- (5, 0);
\node at (2.5,1) {\footnotesize$k_2$};
\draw [-, color=black]
(-3,-9) -- (-3, -5);
\node at (-5,-8) {$m_1$};
\draw [-, color=black]
(3,-9) -- (3, -5);
\node at (5,-8) {$l_2$};
\draw [-, color=black]
(-5,0) -- (-5,4);
\node at (-5,5) {$m_1$};
\draw [-, color=black]
(5,0) -- (5, 4);
\node at (5,5) {$l_2$};
\draw [-, color=black]
(-3,-5) -- (-5,0);
\node at (-5,-2) {\footnotesize$l_1$};
\draw [-, color=black]
(3,-5) -- (5, 0);
\node at (6,-2) {\footnotesize$m_2$};
\end{tikzpicture} \\
&=\sum_j
\left\{\begin{matrix} k_1 & k_2 & j \\ k_2& k_1& i
\end{matrix}\right\}_q \frac{\text{ \small $ \text{Tet}_q\left [\begin{matrix} l_1 & m_1 &m_1 \\ j& k_1&k_1
\end{matrix}\right]\text{Tet}_q\left [\begin{matrix} k_2 & j &l_2 \\ l_2& m_2&k_2
\end{matrix}\right]$}}{\theta_q(m_1, m_1, j)\theta_q(l_2, j, l_2)} \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.17]
\node at (0,-0.1) {\footnotesize$j$};
\node at (3.5,3) {\footnotesize$l_2$};
\node at (-3.5,3) {\footnotesize$m_1$};
\node at (3.5,-3) {\footnotesize$l_2$};
\node at (-3.5,-3) {\footnotesize$m_1$};
\draw [-, color=black]
(2,-2) -- (2, 4);
\draw [-, color=black]
(-2,-2) -- (-2, 4);
\draw [-, color=black]
(-2,1) -- (2,1);
\end{tikzpicture} \\
&= \sum_{l}\sum_j \left\{\begin{matrix} k_1 & k_2 & j \\ k_2& k_1& i
\end{matrix}\right\}_q \frac{ \text{ \small $ \text{Tet}_q\left [\begin{matrix} l_1 & m_1 &m_1 \\ j& k_1&k_1
\end{matrix}\right]\text{Tet}_q\left [\begin{matrix} k_2 & j &l_2 \\ l_2& m_2&k_2
\end{matrix}\right]$}}{\theta_q(m_1, m_1, j)\theta_q(l_2, j, l_2)} \left\{\begin{matrix} m_1 & m_1 &l \\ l_2& l_2&j
\end{matrix}\right\}_q \begin{tikzpicture}[baseline=(current bounding box.center),
wh/.style={circle,draw=black,thick,inner sep=.5mm},
bl/.style={circle,draw=black,fill=black,thick,inner sep=.5mm}, scale = 0.17]
\node at (1,0) {\footnotesize$l$};
\node at (3,3) {\footnotesize$l_2$};
\node at (-3,3) {\footnotesize$m_1$};
\node at (3,-3) {\footnotesize$l_2$};
\node at (-3,-3) {\footnotesize$m_1$};
\draw [-, color=black]
(0,-2) -- (0, 2);
\draw [-, color=black]
(0,2) -- (2, 4);
\draw [-, color=black]
(0,2) -- (-2, 4);
\draw [-, color=black]
(0,-2) -- (-2, -4);
\draw [-, color=black]
(0,-2) -- (2, -4);
\end{tikzpicture} \\
&=\sum_l \Big( \sum_j \text{ \small $\left\{\begin{matrix} k_1 & k_2 & j \\ k_2& k_1& i
\end{matrix}\right\}_q $} \frac{\text{ \small $ \text{Tet}_q\left [\begin{matrix} l_1 & m_1 &m_1 \\ j& k_1&k_1
\end{matrix}\right]\text{Tet}_q\left [\begin{matrix} k_2 & j &l_2 \\ l_2& m_2&k_2
\end{matrix}\right]$}}{\theta_q(m_1, m_1, j)\theta_q(l_2, j, l_2)} \text{ \small $ \left\{\begin{matrix} m_1 & m_1 &l \\ l_2& l_2&j
\end{matrix}\right\}_q $}\Big) \frac{\theta_q(l,m_1, l_2)}{[l+1]_q} \alpha_l^{m_1,l_2}(\alpha_l^{m_1,l_2})^*.\end{aligned}$$
In the above, the summands run over $l$ such that $(l, m_1, l_2)$ is admissible, and $j$ such that both $(j, k_1, k_1)$ and $(j, k_2, k_2)$ are admissible. This corresponds exactly to $l=m_1+l_2-2r$ with $0 \le r \le \min\{m_1,l_2\}$ and $j=2t$ with $0 \le t \le \min\{k_1,k_2 \}$. The claimed formula for the eigenvalue $\lambda_{i, l}^{m_1, l_2}$ is now immediate. Note also that the multiplicity of $\lambda_{i, l}^{m_1, l_2}$ is $\text{rank}(\alpha_l^{m_1,l_2} (\alpha_l^{m_1,l_2})^* )= \dim H_l = [l+1]_q$.
As remarked above, the element $X_0 \in B(H_{m_1} \otimes H_{l_2})$ is the output of the $O^+_F$-covariant Bell state $\rho_0^{k,k} \in B(H_k \otimes H_k)$. In this situation, the eigenvalue formula for $X_0$ simplifies greatly. This can be seen by using similar arguments to those in the proof given above, or by directly using algebraic relations satisfied by the quantum $6j$-symbols. In any case, we get $$X_0 = \sum_{\substack{l=m_1+l_2-2r \\
0 \le r \le \min\{m_1,l_2\}}} \lambda_{0, l}^{m_1, l_2} \alpha_l^{m_1,l_2}(\alpha_l^{m_1,l_2})^*,$$ with $$\begin{aligned}
\lambda_{0, l}^{m_1, l_2} &= \frac{[k+1]_q \text{Tet}_q\left[\begin{matrix} m_1 & l_1 & l \\ m_2& l_2&k
\end{matrix}\right]^2}{\theta_q(l_1,m_1,k)\theta_q(l_2,m_2,k) \theta_q(m_1,l_2, l)\theta_q(l_1,m_2,l)} \\
&=\frac{[k+1]_q \left \{\begin{matrix} m_1 & l_1 & l \\ m_2& l_2&k
\end{matrix}\right\}_q^2\theta_q(l,l_1,m_2)\theta_q(l,m_1,l_2)}{\theta_q(l_1,m_1,k)\theta_q(l_2,m_2,k) [l+1]_q^2},\end{aligned}$$ occurring with multiplicity $[l+1]_q$.
Remarks on the MOE additivity problem for certain $O_N^+$-TL-channels
---------------------------------------------------------------------
Given that we have, on the one hand, asymptotically sharp estimates on the MOE of the $O_N^+$-TL-channels $\Phi_k^{\bar l, m}$, $\Phi_k^{l, \bar m}$ (given by $H_{\min}(\Phi_k^{\bar l, m}), H_{\min}(\Phi_k^{l, \bar m}) \sim \Big(\frac{l+m-k}{2}\Big)\log N$; cf. Theorem \[thm-MOE-sharp\]), and on the other hand, exact formulae for the outputs $X_i = \big( \Phi_{k_1}^{\bar l_1, m_1} \otimes \Phi_{k_2}^{l_2, \bar m_2}\big)(\rho_i^{k_1, k_2})$ of entangled states under the tensor products of certain TL-channels, it is natural to ask whether one can obtain a strict inequality of the form $$H(X_i) < \Big(\frac{l_1+m_1-k_1}{2}\Big)\log N + \Big(\frac{l_2+m_2-k_2}{2}\Big)\log N \qquad (\text{for suitable $i, k_j, l_j, m_j$}).$$ If this were the case, we would have obtained deterministic examples of pairs of quantum channels which witness the non-additivity of their minimum output entropy.
Unfortunately, however, extensive numerical evaluations of $H(X_i)$ for suitable parameter choices always yield inequalities of the form $H(X_i) - \Big(\frac{l_1+m_1-k_1}{2}\Big)\log N - \Big(\frac{l_2+m_2-k_2}{2}\Big)\log N >0$ with the difference going to zero as $N \to \infty$. We see this as strong evidence that the pairs of quantum channels $\Phi_{k_1}^{\bar l_1, m_1}, \Phi_{k_2}^{l_2, \bar m_2}$ do not exhibit strict subadditivity of the minimum output entropy.
Some Temperley-Lieb channels are not modified TRO-channels {#sec:TRO}
==========================================================
For a quantum channel $\Phi: B(H_A) \to B(H_B)$ with a Stinespring isometry $V : H_A \to H_B \otimes H_E$, the range space ${\rm Ran}V \subseteq H_B \otimes H_E$ is called a [*Stinespring space*]{} of $\Phi$. Note that the choice of isometry $V$ is not unique, but any associated Stinespring space is known to determine the channel $\Phi$. For this reason we will fix a Stinespring isometry $V$ and refer to the range ${\rm Ran}V$ as [*the Stinespring space*]{}. We say that the channel $\Phi$ is a [*TRO-channel*]{} if its Stinespring space is a [*TRO*]{}, i.e. a [*ternary ring of operators*]{}. Recall that a TRO is a subspace $X$ of $B(H,K)$ for some Hilbert spaces $H,K$ such that $x,y,z \in X \Rightarrow xy^*z\in X$, i.e. closed under the triple product. It is well known that finite-dimensional TROs are direct sums of rectangular matrix spaces with multiplicity. Since the Stinespring space determines the channel, it has been observed in [@GaJuLa16] that a TRO-channel $\Phi: B(H_A) \to B(H_B)$ is always of the following form: the channel $\Phi$ has a Stinespring space $X$ given by $$X = \oplus^M_{i=1}B({\mathbb{C}}^{m_i}, {\mathbb{C}}^{n_i}) \otimes 1_{l_i} \subseteq B(H_E, H_B),$$ where $$H_E = \oplus^M_{i=1}{\mathbb{C}}^{m_i} \otimes {\mathbb{C}}^{l_i}\;\; \text{and} \;\; H_B = \oplus^M_{i=1}{\mathbb{C}}^{n_i} \otimes {\mathbb{C}}^{l_i}.$$ Moreover, we have $H_A = (X, {\langle}\cdot, \cdot {\rangle}_{H_A})$, where the inner product is given by ${\langle}x, y {\rangle}_{H_A} := {{\operatorname{Tr}}}_E(y^*x)$, $x,y \in X \subseteq B(H_E, H_B)$. Finally, the channel $\Phi$ is given by $$\Phi(|x{\rangle}{\langle}y|) = xy^*,\; x,y \in H_A=X \subseteq B(H_E, H_B).$$ Based on the above description we can define a variant of TRO-channels. We first fix a [*symbol*]{} $f\in B(H_E)$, i.e. a positive matrix with $\tau(f) := \frac{{{\operatorname{Tr}}}_E(f)}{d_E}=1$ which is [*strongly independent*]{} of the right algebra $\mathcal{R}(X) = \text{span}\{x^*y: x,y\in X\}$.
Here, we say that $x\in B(H_E)$ is [*independent*]{} of $\mathcal{R}(X)$ if $\tau(xy) = \tau(x)\tau(y)$ for all $y\in \mathcal{R}(X)$ and [*strongly independent*]{} of $\mathcal{R}(X)$ if $x^n$ is independent of $\mathcal{R}(X)$ for every $n\ge 1$. Then the [*modified TRO-channel*]{} $\Phi_f$ with the symbol $f$ is defined by $$\Phi_f : B(H_A) \to B(H_B),\;\; |x{\rangle}{\langle}y| \mapsto xfy^*.$$ The original TRO-channel $\Phi$ corresponds to the case of $\Phi_f$ with $f = 1_E$. It has been proved in [@GaJuLa16] that we have exact calculations for various capacities of $\Phi$ as follows: $$Q^{(1)}(\Phi) = P^{(1)}(\Phi) = Q(\Phi) = P(\Phi) = \log (\max_i n_i),\; \chi(\Phi) = C(\Phi) = \log (\sum_i n_i).$$ Moreover, we also have the following estimates for modified TRO-channels: $$Q^{(1)}(\Phi) \le Q^{(1)}(\Phi_f) \le Q^{(1)}(\Phi) + \tau(f \log f).$$ The same estimates hold for other capacities, i.e. we may replace $Q^{(1)}$ with $P^{(1)}, Q, P, \chi$ and $C$. Important examples of (modified) TRO-channels include random unitary channels using projective unitary representations of finite (quantum) groups and generalized dephasing channels [@GaJuLa16].
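A minimal numerical sketch of the defining TRO property for a single rectangular block $B({\mathbb{C}}^m,{\mathbb{C}}^n)\otimes 1_l$ (the dimensions $m,n,l$ below are arbitrary choices):

```python
import numpy as np

# One rectangular TRO block X = B(C^m, C^n) (x) 1_l inside B(C^{ml}, C^{nl}):
# check closure under the triple product x y* z.
m, n, l = 2, 3, 4
rng = np.random.default_rng(1)

def elem():
    """A random element a (x) 1_l of X."""
    return np.kron(rng.standard_normal((n, m)), np.eye(l))

x, y, z = elem(), elem(), elem()
w = x @ y.conj().T @ z

# w should again have the form a (x) 1_l: recover a from the (p = q = 0)
# sub-block and compare with the full matrix.
a = w.reshape(n, l, m, l)[:, 0, :, 0]
print(np.abs(w - np.kron(a, np.eye(l))).max())   # 0 up to rounding
```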
In this section we prove that some TL-channels do not belong to the class of modified TRO-channels. Before we proceed to the details, we need to be more precise about comparing two quantum channels.
Let $\Phi : B(H_A) \to B(H_B)$ and $\Psi: B(H_{A'}) \to B(H_{B'})$ be quantum channels with $d_B \le d_{B'}$. We say that $\Phi$ is equivalent to $\Psi$ if there is a unitary $U: H_A \to H_{A'}$ and an isometry $V: H_B \to H_{B'}$ such that $$V\Phi(U^* \rho\, U)V^* = \Psi(\rho),\;\; \rho \in B(H_A).$$
We can find an example with minimal non-trivial dimensions.
\[prop-non-TRO\] The $SU(2)$-TL-channel $\Phi^{\bar{2},1}_1$ is not equivalent to any modified TRO-channel.
Since we have the associated isometry $\alpha^{2,1}_1:{\mathbb{C}}^2\rightarrow {\mathbb{C}}^3\otimes {\mathbb{C}}^2, \begin{array}{ll} |1{\rangle}&\mapsto -\sqrt{\frac{2}{3}} |12{\rangle}+ \sqrt{\frac{1}{3}} |21{\rangle}\\ |2{\rangle}& \mapsto -\sqrt{\frac{1}{3}} |22{\rangle}+\sqrt{\frac{2}{3}} |31{\rangle}\end{array}$ ([@VK]), we can see that the channel $\Phi^{\bar{2},1}_1: B({\mathbb{C}}^2) \to B({\mathbb{C}}^2)$ maps $|1{\rangle}{\langle}1| \mapsto \begin{bmatrix} 1/3 & 0 \\ 0 & 2/3 \end{bmatrix}$, $|1{\rangle}{\langle}2|\mapsto \begin{bmatrix} 0 & - 1/3 \\ 0 & 0 \end{bmatrix}$, $|2{\rangle}{\langle}1|\mapsto \begin{bmatrix} 0 & 0 \\ -1/3 & 0 \end{bmatrix}$ and $|2{\rangle}{\langle}2|\mapsto \begin{bmatrix} 2/3 & 0 \\ 0 & 1/3 \end{bmatrix}$. Thus, we can observe that ${\rm Ran}\Phi^{\bar{2},1}_1 = B({\mathbb{C}}^2)$, which is a full matrix algebra.
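These matrix computations can be reproduced with a short script; the zero-indexed basis labels are an implementation choice, and the rank computation confirms that the four outputs span $B({\mathbb{C}}^2)$.

```python
import numpy as np

# alpha^{2,1}_1 : C^2 -> C^3 (x) C^2 as quoted above (zero-indexed basis).
a = np.zeros((3, 2, 2))
a[0, 1, 0] = -np.sqrt(2 / 3);  a[1, 0, 0] = np.sqrt(1 / 3)   # image of |1>
a[1, 1, 1] = -np.sqrt(1 / 3);  a[2, 0, 1] = np.sqrt(2 / 3)   # image of |2>
V = a.reshape(6, 2)

def phi(rho):
    """Phi^{bar 2,1}_1: conjugate by alpha^{2,1}_1 and trace out the C^3 leg."""
    big = (V @ rho @ V.conj().T).reshape(3, 2, 3, 2)
    return np.einsum('rarb->ab', big)

E = {}
for i in range(2):
    for j in range(2):
        M = np.zeros((2, 2)); M[i, j] = 1.0
        E[i, j] = phi(M)

print(E[0, 0])   # diag(1/3, 2/3)
print(E[0, 1])   # -1/3 in the upper-right entry, zeros elsewhere

# The four outputs span all of B(C^2), and the output of |1><1| is not pure.
rank = np.linalg.matrix_rank(np.array([E[i, j].ravel() for i in range(2) for j in range(2)]))
purity = np.trace(E[0, 0] @ E[0, 0])   # Tr(rho'^2) = 5/9
print(rank, purity)
```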
Let $\Phi_f$ be a modified TRO-channel with the parameters $n_i, m_i, l_i$, $1\le i\le M$ as above. Since we need to match the dimensions of the sender’s Hilbert spaces we only have the following 3 possible cases. (1) $M=1$, $n_1 = 2$, $m_1=1$, (2) $M=1$, $n_1 = 1$, $m_1=2$ and (3) $M=2$, $n_1 = n_2 = m_1 = m_2 = 1$.
Case (1): The corresponding modified TRO-channel becomes (after identifying the orthonormal basis in a suitable way) $$\Phi_f : B({\mathbb{C}}^2) \to B({\mathbb{C}}^2) \otimes B({\mathbb{C}}^{l_1}),\; |i{\rangle}{\langle}j| \mapsto |i{\rangle}{\langle}j| \otimes \frac{f}{l_1}.$$ If we assume that $\Phi^{\bar{2},1}_1$ is equivalent to $\Phi_f$, then there are a unitary $U: {\mathbb{C}}^2 \to {\mathbb{C}}^2$ and an isometry $V: {\mathbb{C}}^2 \to {\mathbb{C}}^2 \otimes {\mathbb{C}}^{l_1}$ such that $$V\Phi^{\bar{2},1}_1(U^* \rho\, U)V^* = \Phi_f(\rho),\;\; \rho \in B(H_A).$$ Since ${\rm Ran}\Phi^{\bar{2},1}_1 = B({\mathbb{C}}^2)$ we also have ${\rm Ran}\Phi_f \cong B({\mathbb{C}}^2)$ as a subalgebra of $B({\mathbb{C}}^2) \otimes B({\mathbb{C}}^{l_1})$, which forces $g := \frac{f}{l_1}$ to be a pure state. This implies that $g^2 = g$, so that ${{\operatorname{Tr}}}((|1{\rangle}{\langle}1| \otimes g)^2) = {{\operatorname{Tr}}}(|1{\rangle}{\langle}1| \otimes\frac{f}{l_1}) = 1$. However, the state $\rho' = \Phi^{\bar{2},1}_1(U^* |1{\rangle}{\langle}1|U)$ can be easily shown to satisfy ${{\operatorname{Tr}}}((\rho')^2) = 5/9 \ne 1$. Since $X\mapsto VXV^*$ is a trace preserving map, we get a contradiction.
Case (2): The corresponding modified TRO-channel becomes $$\Phi_f : B({\mathbb{C}}^2) \to B({\mathbb{C}}^{l_1}),\; |i{\rangle}{\langle}j| \mapsto \frac{f_{ij}}{l_1},$$ where $f = \begin{bmatrix} f_{11} & f_{12}\\ f_{21} & f_{22} \end{bmatrix} \in B({\mathbb{C}}^2) \otimes B({\mathbb{C}}^{l_1})$ with $f_{ij} \in B({\mathbb{C}}^{l_1})$, $1\le i,j \le 2$. Since ${\rm Ran}\Phi^{\bar{2},1}_1 = B({\mathbb{C}}^2)$ we know that $l_1 \ge 2$. We assume that there are a unitary $U: {\mathbb{C}}^2 \to {\mathbb{C}}^2$ and an isometry $V: {\mathbb{C}}^2 \to {\mathbb{C}}^2 \otimes {\mathbb{C}}^{l_1}$ such that $V\Phi^{\bar{2},1}_1(U^* \rho\, U)V^* = \Phi_f(\rho),\;\; \rho \in B(H_A)$ as before. In this case we have $\mathcal{R}(X) = B({\mathbb{C}}^2)\otimes {\mathbb{C}}1_{l_1}$. It is straightforward to check that independence of $f$ with respect to $\mathcal{R}(X)$ implies that ${{\operatorname{Tr}}}(f_{11}) = l_1$. We also know that $f^2$ is independent of $\mathcal{R}(X)$, which means that ${{\operatorname{Tr}}}((f^2)_{11}) = l_1$. However, we have $$l_1 = {{\operatorname{Tr}}}((f^2)_{11}) = {{\operatorname{Tr}}}(f^2_{11} + f_{12}f_{21}) \ge {{\operatorname{Tr}}}(f^2_{11}) = \frac{5}{9}l_1^2,$$ which is a contradiction. The above inequality is from $f^*_{12} = f_{21}$ and the last equality is from the fact that $${{\operatorname{Tr}}}((\frac{f_{ij}}{l_1})^2) = {{\operatorname{Tr}}}((\rho')^2) = 5/9.$$
Case (3): The corresponding modified TRO-channel becomes $$\Phi_f : B({\mathbb{C}}^2) \to B({\mathbb{C}}^{l_1+l_2}),\; \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix} \mapsto \left[\frac{f_{ij}}{\sqrt{l_i l_j}}\right]_{1\le i,j \le 2},$$ where $f = \begin{bmatrix} f_{11} & f_{12}\\ f_{21} & f_{22} \end{bmatrix} \in B({\mathbb{C}}^{l_1+l_2})$ with $f_{ij} \in B({\mathbb{C}}^{l_j}, {\mathbb{C}}^{l_1})$, $1\le i,j \le 2$. Since ${\rm Ran}\Phi^{\bar{2},1}_1 = B({\mathbb{C}}^2)$ we know that $l_1 \ge 2$. We assume that there are a unitary $U: {\mathbb{C}}^2 \to {\mathbb{C}}^2$ and an isometry $V: {\mathbb{C}}^2 \to {\mathbb{C}}^2 \otimes {\mathbb{C}}^{l_1}$ such that $V\Phi^{\bar{2},1}_1(U^* \rho\, U)V^* = \Phi_f(\rho),\;\; \rho \in B(H_A)$ as before. In this case we have $\mathcal{R}(X) = {\mathbb{C}}1_{l_1} \oplus {\mathbb{C}}1_{l_2} \subseteq B({\mathbb{C}}^{l_1+l_2})$. It is also straightforward to check that independence of $f$ with respect to $\mathcal{R}(X)$ implies that ${{\operatorname{Tr}}}(f_{11}) = l_1$. We also know that $f^2$ is independent of $\mathcal{R}(X)$, which means that ${{\operatorname{Tr}}}((f^2)_{11}) = l_1$. However, we have $$l_1 = {{\operatorname{Tr}}}((f^2)_{11}) = {{\operatorname{Tr}}}(f^2_{11} + f_{12}f_{21}) \ge {{\operatorname{Tr}}}(f^2_{11}) = \frac{5}{9}l_1^2,$$ where the last identity is from the fact that $${{\operatorname{Tr}}}((\frac{f_{ij}}{l_1})^2) = {{\operatorname{Tr}}}(\begin{bmatrix}\frac{f_{ij}}{l_1} & 0 \\ 0 & 0\end{bmatrix}^2 ) = {{\operatorname{Tr}}}((\rho')^2) = 5/9.$$ Thus, we can conclude that $l_1 = 1$, which actually means that $f_{11} = {{\operatorname{Tr}}}(f_{11}) = l_1 = 1$. Thus, we have ${{\operatorname{Tr}}}((\frac{f_{ij}}{l_1})^2) = 1 \ne 5/9$, so that we get a contradiction.
The canonical complementary channel $\tilde{\Phi}_f$ of a modified TRO-channel $\Phi_f$ can be written as follows. $$\tilde{\Phi}_f : B(H_A) \to B(H_E),\;\; |x{\rangle}{\langle}y| \mapsto \sqrt{f}y^*x\sqrt{f}.$$ Then, we can also show that the Temperley-Lieb channel $\Phi^{\bar{2},1}_1$ for $G = SU(2)$ is not equivalent to any canonical complementary channel $\tilde{\Phi}_f$ of a modified TRO-channel $\Phi_f$. This time the argument is easier since we only need to observe that ${\rm rank}(\tilde{\Phi}_f) \le 2$ in all the 3 possible cases in the proof of Proposition \[prop-non-TRO\].
[^1]: The term “trace” comes from the fact that under the fiber functor ${\text{TL}}(d) \to \text{Rep}(O^+_F)$, $\tau_k$ corresponds to the well-known Markov trace $\tau_k:{\text{TL}}_{k,k}(d) \to {\mathbb C}$ obtained by tracial closure of Temperley-Lieb diagrams [@KaLi94].
---
abstract: 'We update our previous work on a Langevin-with-interaction model for charmonium in heavy-ion collisions, by considering the effect due to recombination. We determine the contribution to $J/\psi$ yields from pairs whose constituent quarks originate from two different hard processes. Like the surviving states, the recombinant charmonia also undergo both a stochastic interaction, determined by a hydrodynamical simulation of the heavy-ion collision, and an interaction determined by the potentials measured on the lattice for appropriate temperatures. From the results of these simulations, we determine both the direct and the recombinant contribution to the $J/\psi$ yields for RHIC conditions, and find that for central collisions, between 30% and 50% of the yield is due to recombinant production. We compare our results with other models and look for how the recombinant contribution differs from the surviving contribution in the differential $p_T$ yields. Including the recombinant contribution improves the agreement with the latest analysis of charmonium at RHIC, which shows an absence of anomalous suppression except in the most central collisions.'
address: |
Department of Physics and Astronomy\
State University of New York, Stony Brook, NY 11794-3800
author:
- Clint Young and Edward Shuryak
title: 'Recombinant Charmonium in strongly coupled Quark-Gluon Plasma'
---
Introduction
============
In a previous paper [@Young:2008he], we argued that the microscopic dynamics of charmonium in a heavy-ion collision should be modeled as a stochastic interaction with strongly-coupled quark-gluon plasma (sQGP). In such a plasma the diffusion coefficient is very small, leading to rapid thermalization in momentum space and slow spatial diffusion, which is further slowed by the attraction between the constituent charm and anti-charm quarks in the pair. We concluded that the amount of equilibration in space (and thus the yield) depends strongly on the timescales of the collision. In realistic simulations of Au+Au collisions at RHIC, where the sQGP phase exists for $\tau \sim 5\; {\rm fm}/c$, our model predicted a $J/\psi$ survival probability of $\sim 1/2$ even in the most central collisions, much larger than previously expected. Those results were able to explain qualitatively the data from PHENIX.
In [@Young:2008he], we did not consider another possibly important source of charmonium at RHIC, the “recombinant” contribution of particles whose constituent quarks originate from different hard processes. At very high collision energies, as the number of charm pairs per event grows, recombinant charmonia could potentially lead to an enhancement of the final yields in a heavy-ion collision, reversing the current suppression trend. Using the grand canonical ensemble approach, Braun-Munzinger and collaborators [@BraunMunzinger:2000px] determine the fugacity of charm by the number of pairs produced initially. The “statistical hadronization” approach to charmonium assumes complete thermal equilibration of charmonium. Another approach has been taken by Grandchamp and Rapp [@Grandchamp:2002wp], who treat the $J/\psi$ yields from heavy-ion collisions as coming from two sources: the direct component, which decays exponentially with some lifetime $\tau_d$, and the coalescent component, which is determined by the same mechanism as in [@BraunMunzinger:2000px], with the additional consideration that spatial equilibration of charm does not happen. To account for enhanced local charm density, due to small spatial diffusion, they had introduced another factor - the “correlation volume" $V_{corr}$ - which was estimated. The present work can be viewed as a quantitative dynamical calculation of this parameter.
To gain insight, we should compare these models with our model in [@Young:2008he]. The Langevin-with-interaction model for $\bar{c}c$ pairs in medium makes no assumptions about complete thermalization, and shows how even in central Au+Au collisions at RHIC, the majority of the $J/\psi$ yield may survive the QGP phase. However, the model predicts rapid thermalization in the momentum distributions of charmonium, as well as equilibration in the relative yields of the various charmonium states due to the formation of “quasi-equilibrium” in phase space. This requires no fine-tuning of the rates for charmonium in plasma; it is just a natural consequence of the strongly coupled nature of the medium, detailed by the Langevin dynamics. Nevertheless, recombinant production of charmonium may still be an important effect in our model, due to the fact that in central collisions, the densities of unbound charm quarks can be quite high in some regions of the transverse plane.
Our model simulates an ensemble of $\bar{c}c$ pairs, generated initially by PYTHIA event generation and then evolved according to the Langevin-with-interaction model. We evolve the pairs without assuming any form of equilibrium, and then average over possible pairings of the quarks to form recombinant charmonium.
The outline of this work is as follows: in Section \[simulation\] we will describe how we simulated charm in plasma and took into account the contribution due to recombinant $J/\psi$, and in Section \[conclusions\] we take the opportunity to describe the progress in this model so far and also to summarize where future work is needed for a Langevin-with-interaction description of $J/\psi$ suppression. In Appendix \[canonical\], we discuss the statistics necessary to calculate the recombinant contribution to the yields.
Recombinant charmonium in heavy-ion collisions {#simulation}
==============================================
Langevin-with-interaction model for pairs in a heavy-ion collision
------------------------------------------------------------------
As we have done in our previous paper, we simulate $\bar{c}c$ pairs in medium with a hydrodynamical simulation of the collision. As before, we start with a large ensemble of pairs whose momenta are determined with PYTHIA event generation [@Andersson:1977qx]. The positions of the initial hard collisions in the transverse plane at mid-rapidity are determined by sampling the distribution of $N_{coll}$ obtained from the Glauber model. In this way, our local densities of pairs vary as one would expect from the Glauber model, which gives an enhancement for recombination towards the center of the transverse plane. Each element of the ensemble now contains $N$ pairs. The number of pairs $N$ depends on the impact parameter of the collision and needs to be determined.
The average number of $\bar{c}c$ pairs for a Au+Au collision at RHIC varies with impact parameter and has been investigated by the PHENIX collaboration at mid-rapidity [@Adler:2004ta]. The measured cross sections for charm production vary somewhat with the centrality of the collision and reach a maximum of about $800\; \mu b$ for semi-central collisions. The nuclear overlap function $T_{AA}(b)$ can be calculated with the Glauber model. We used a convenient program by Dariusz Miskowiec [@miskowiec] to evaluate this function. With a centrality-dependent cross section $\sigma_{\bar{c}c}$, we can easily calculate the average number of pairs in a collision: $N_{\bar{c}c}=T_{AA}\sigma_{\bar{c}c}$. The number of pairs reaches a maximum in central collisions, with an average of 19 pairs per collision.
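The arithmetic of this step is a one-line conversion. In the sketch below the central-collision overlap value $T_{AA} \approx 23.8\:{\rm mb}^{-1}$ is an illustrative number chosen only to reproduce the quoted average of 19 pairs; in the simulation $T_{AA}(b)$ is computed from the Glauber model.

```python
def n_ccbar_pairs(T_AA_mb_inv, sigma_ccbar_mub):
    """Average number of c-cbar pairs, N = T_AA * sigma_ccbar.
    T_AA in mb^-1, sigma in microbarn (1 mub = 1e-3 mb)."""
    return T_AA_mb_inv * sigma_ccbar_mub * 1e-3

# Central Au+Au at RHIC: an illustrative T_AA together with the measured
# sigma_ccbar ~ 800 mub gives roughly 19 pairs per collision.
print(n_ccbar_pairs(23.8, 800.0))
```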
In order to determine the probability for two charm quarks from different hard processes to form recombinant charmonium, we must average over the different possible pairings of all of the unbound quarks in each element of our ensemble. This is discussed in Appendix \[canonical\] in generality. Since the number of pairs approaches 20 for central Au+Au collisions at RHIC, we are faced with another issue: there are 20! possible pairings, and it has become impractical to calculate the probability of each individual pairing this way. In general, we would be forced to perform [*permutation sampling*]{} of this partition function, preferably with some Metropolis algorithm. How to sample over permutations with a Metropolis algorithm is discussed thoroughly in the literature; for an excellent review of this see Ceperley [@ceperley]. However, for RHIC, the situation simplifies due to the relatively low densities of pairs involved. We ran our simulation for the most central Au+Au collisions at RHIC and examined how many “neighbors” each charm quark had. A “neighbor” is defined as a charm anti-quark, originating from a hard process different from the one that produced the given charm quark, which is close enough to the charm quark that it could potentially form a bound state; in other words, $r$ is such that $V_{cornell}(r)<0.88\:\GeV$. The fraction of charm quarks expected to have one and only one neighbor in the most central Au+Au collisions was found to be 5.5%, while only 0.2% of the charm quarks are expected to have more than one neighbor. Therefore, even in the most central collisions at RHIC, we can be spared possibly complicated permutation samplings. Of course, this situation is not true in general, and for the numbers of pairs produced in a typical heavy-ion collision at the LHC one should modify these combinatorics.
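The neighbor count described above can be sketched as follows; the Cornell-potential parameters (Coulomb strength and string tension) are typical literature values inserted for illustration, not numbers from this paper.

```python
import math

def cornell(r_fm, kappa=0.52, tension=0.9):
    """Cornell potential V(r) = -kappa*(hbar c)/r + sigma*r in GeV,
    with r in fm, hbar*c = 0.197 GeV fm and sigma in GeV/fm
    (illustrative parameter values)."""
    return -kappa * 0.197 / r_fm + tension * r_fm

def count_neighbors(charms, anticharms, v_max=0.88):
    """charms/anticharms: lists of ((x, y, z), event_id) with positions
    in fm.  A 'neighbor' is an anti-charm quark from a *different* hard
    event close enough that V(r) < v_max (GeV)."""
    counts = []
    for pos_c, ev_c in charms:
        n = 0
        for pos_a, ev_a in anticharms:
            if ev_a == ev_c:
                continue  # same hard process: this is a diagonal pair
            r = math.dist(pos_c, pos_a)
            if r == 0.0 or cornell(r) < v_max:
                n += 1
        counts.append(n)
    return counts
```

Because at RHIC almost no charm quark has more than one neighbor, the pairing problem factorizes into independent binary decisions.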
New analysis of the data including improved $dAu$ sample
--------------------------------------------------------
The data with which we now compare our results is different from that which we used for comparison in our previous work. A new analysis of the Au+Au and d+Au data, described in [@leitch], accounts for the (anti-)shadowing and the breakup of charmonium due to cold nuclear matter effects (parameterized by $\sigma_{abs}$) in the framework of a Glauber model for the collision. The calculations at forward and mid-rapidity are now done independently, since shadowing and breakup could be considerably different at different rapidities. This new analysis is a significant success, demonstrating the high suppression at forward rapidity (previously very puzzling) as being due to cold nuclear matter effects. The new ratios $R_{AA}/R_{AA}^{CNM}$ of the observed suppression to that expected from cold nuclear matter, plotted versus the energy density times time $\epsilon \tau$, show common trends for the RHIC and SPS data, which was not the case previously. We use this new analysis as a measure of the $J/\psi$ survival probability in our calculation.
![ (Color online.) $R^{anomalous}_{AA}=R_{AA}/R_{AA}^{CNM}$ for $J/\psi$ versus centrality of the Au+Au collisions at RHIC. The data points with error bars show the PHENIX Au+Au measurements with cold nuclear matter effects factored out as in [@leitch]. Other points, connected by lines, are our calculations for the two values of the QCD phase transition temperature $T_c=165\; \MeV$ (upper) and $T_c=190\; \MeV$ (lower). From bottom to top: the (green) filled squares show our new results, the recombinant $J/\psi$; the open (red) squares show the $R_{AA}$ for surviving diagonal $J/\psi$; the open (blue) circles show the total. []{data-label="R_AA_rec"}](R_AA_rec "fig:"){width="8cm"} ![ (Color online.) $R^{anomalous}_{AA}=R_{AA}/R_{AA}^{CNM}$ for $J/\psi$ versus centrality of the Au+Au collisions at RHIC. The data points with error bars show the PHENIX Au+Au measurements with cold nuclear matter effects factored out as in [@leitch]. Other points, connected by lines, are our calculations for the two values of the QCD phase transition temperature $T_c=165\; \MeV$ (upper) and $T_c=190\; \MeV$ (lower). From bottom to top: the (green) filled squares show our new results, the recombinant $J/\psi$; the open (red) squares show the $R_{AA}$ for surviving diagonal $J/\psi$; the open (blue) circles show the total. []{data-label="R_AA_rec"}](R_AA_rec_TC19 "fig:"){width="8cm"}
The results
-----------
Before we show the results, let us remind the reader that our calculation is intended to be a dynamical one, with no free parameters. We use a hydrodynamical simulation developed in [@Teaney:2001av] which is known to describe accurately the radial and elliptic collective flows observed in heavy-ion collisions. Our drag and random force terms for the Langevin dynamics have one input – the diffusion coefficient for charm – constrained by two independent measurements (the $p_T$ distributions and $v_2(p_T)$ of single leptons from charm) performed in Ref. [@Moore:2004tg]. The interaction of these charm quarks is determined by the correlators for two Polyakov lines in lQCD [@bielefeld].
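The stochastic part of the dynamics can be illustrated with a minimal, nonrelativistic Euler–Maruyama sketch. This is our simplification (constant temperature and drag coefficient); the actual simulation uses relativistic Langevin dynamics on top of the hydrodynamic background, together with the lattice potential.

```python
import math
import random

def langevin_step(p, M, T, eta, dt, rng):
    """One update of a heavy-quark momentum p (GeV, one entry per
    component): drag -eta*p plus a Gaussian kick whose variance
    2*M*T*eta*dt is fixed by the fluctuation-dissipation relation,
    so the ensemble relaxes to <p^2> = 3*M*T."""
    sigma = math.sqrt(2.0 * M * T * eta * dt)
    return [pi - eta * pi * dt + rng.gauss(0.0, sigma) for pi in p]
```

With the fluctuation-dissipation relation built in, $\langle p^2\rangle$ relaxes to $3MT$ within a few units of $1/\eta$, illustrating the rapid momentum-space thermalization (and, by contrast, slow spatial diffusion) described in the text.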
Having said that, we still are aware of certain uncertainties in all the input parameters, which affect the results. In order to show by how much the results change if we vary some of them, we have used the uncertainty in the value for the critical temperature $T_c$. For these reasons, we show the results for two values $T_c=165,\;190\;\MeV$, in Fig.\[R\_AA\_rec\].
As can be seen, a higher $T_c$ value improves the agreement of our simulation with the latest analysis of the data, because in this case the QGP phase is shorter in duration and the survival probability is larger. However, the recombinant contribution (shown by filled squares) is in this case relatively smaller, making up less than 1/3 of the yield even in the most central collisions at RHIC.
Our results for the total, direct, and recombinant contributions closely resemble the results of Zhao and Rapp obtained from their two-component model [@Zhao:2008pp]. However, it is important to point out two differences. First of all, what is described by Zhao and Rapp as the second component due to statistical coalescence includes, along with the recombinant $J/\psi$, surviving $\bar{c}c$ pairs from $J/\psi$ destroyed by the medium, which ultimately coalesce in the end. Second, the direct $J/\psi$ states’ abundance, when compared with the abundances of excited charmonium states, does not necessarily need to be as expected from these particles’ Boltzmann factors. For our model, these relative abundances do make sense for direct charmonium states, due to the formation of a quasi-equilibrium distribution.
Recombinant and $p_t$-distributions {#Jp_pt}
-----------------------------------
So far, we have only considered the effect of recombinant production on the overall $J/\psi$ yields at the RHIC. We should test our model by considering whether or not adding the recombinant contribution can change the shape of any differential yields.
One differential yield where we may expect the surviving and recombinant components to have different behaviors is the $p_T$-distribution for central Au+Au collisions. The surviving states tend to originate in the periphery of the collision region, since the states produced there endure the sQGP phase for the shortest times. However, the recombinant contribution should form toward the center of the collision region, since this is where the density of initial $\bar{c}c$ pairs is highest, and as we have been showing for some time, spatial diffusion is incomplete in the sQGP. Therefore, since the effect of flow on the $p_T$-distributions has Hubble-like behavior, with the radial velocity of the medium scaling with distance from the center of the transverse plane, we would expect the recombinant contribution to exist, on average, in regions of the medium with significantly smaller flow velocities.
![ (Color online.) The surviving and recombinant yields, plotted versus the radial distance from the center of the transverse plane. []{data-label="r_compare"}](XJHF){width="8cm"}
Figure \[r\_compare\] demonstrates this behavior existing in our simulation.
We should now determine whether or not this difference of the yield versus $r$ can be observed in the yield versus $p_t$. As we have shown in our previous paper, during the phase transition from QGP to the hadronic phase in heavy-ion collisions, our model predicts a small change in the total yield but relatively large changes in the $p_t$ distributions, with these changes strongly dependent on the drag coefficient for quarkonium during this time, and $T_c$. We can easily run our code with an LH8 equation of state and make several predictions for the two components’ $p_t$ distributions. However, for reasons which will become apparent, we are only interested in the upper limit of the effect of flow on $p_t$ in Au+Au collisions at the RHIC. Therefore, we ran our simulation where we assumed a phase transition which lasts $5 \; {\rm fm/c}$, during which the particles have a mean free path of zero, in a Hubble-like expansion.
![ (Color online.) The surviving and recombinant yields versus $p_t$. []{data-label="pt_HF"}](ptHF){width="8cm"}
The $p_t$ distributions after this expansion are shown in Figure \[pt\_HF\]. It is visible from this plot that the recombinant contribution will observably increase the total yield at low $p_t$ (where the total yield is significantly higher than the surviving component alone) and have little effect at higher $p_t$ (where the total and the surviving component alone are nearly the same). However, we have found that even in this extreme limit, there is no clear signal in the differential $p_t$ yields for there being two components of $J/\psi$ production at the RHIC.
This test, however, should not be abandoned for measurements of the differential yields at higher collision energies. Since the recombinant contribution grows substantially as charm densities are increased, it should be checked whether or not the recombinant contribution is more strongly peaked in the center of the transverse plane of LHC collisions, and whether or not two components to the differential yields should become observable there. We will follow up on this issue in a work we have in progress.
Discussion {#conclusions}
==========
We have found that in central Au+Au collisions at RHIC the fraction of recombinant pairs should be considerable, up to 30-50%, with smaller fractions in more peripheral collisions. The exact number depends on details of the model, such as the duration of the QGP phase and the magnitude of the critical temperature $T_c$. We have also gone a step further, and attempted to find different behaviors of these two components in differential yields, so that they might be disentangled. This test (examining the differential $p_t$ yields) fails to identify clearly two different components. We will pursue whether or not this test works for the yields at the LHC.
Our model for charmonium in sQGP is rather conservative: we merely assume that the constituent charm quarks experience dynamics similar to the Langevin dynamics of single charm quarks in sQGP, which has already shown good agreement with the $R_{AA}(p_t)$ and $v_2(p_t)$ measured at PHENIX for single charm.
One final, careful observation of our results is worth mentioning. As one can see from our results in Fig. \[R\_AA\_rec\], the model seems to be working well for central collisions, in which there is a QGP phase lasting for several fm/c, leading to a possibility for charm quarks to diffuse away from each other, far enough so that $J/\psi$ states would not survive. However, it overpredicts suppression for peripheral collisions, which – if the cold nuclear matter analysis holds up against further scrutiny – is nearly completely absent. One possible reason for this can be the survival of the flux tubes (QCD strings) between quarks well into the mixed phase, or even in a small region of temperatures [*above*]{} $T_c$, as was recently advocated by one of us [@Shuryak:2009cy] in connection with the “ridge” phenomenon.
[**Acknowledgments.**]{}
We thank P. Petreczky for pointing out the issues of setting $T_c$ in our model, which proved to be important in our results. Our work was partially supported by the US-DOE grants DE-FG02-88ER40388 and DE-FG03-97ER4014.
[99]{}
C. Young and E. Shuryak, Phys. Rev. C [**79**]{}, 034907 (2009) \[arXiv:0803.2866 \[nucl-th\]\].

P. Braun-Munzinger and J. Stachel, Phys. Lett. B [**490**]{}, 196 (2000) \[arXiv:nucl-th/0007059\].

B. Andersson, G. Gustafson and C. Peterson, Phys. Lett. B [**71**]{}, 337 (1977).

T. Sjöstrand, S. Mrenna and P. Skands, JHEP [**0605**]{}, 026 (2006).

L. Grandchamp and R. Rapp, Nucl. Phys. A [**709**]{}, 415 (2002) \[arXiv:hep-ph/0205305\].

D. M. Ceperley, Rev. Mod. Phys. [**67**]{}, 279 (1995).

S. S. Adler [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. Lett. [**94**]{}, 082301 (2005) \[arXiv:nucl-ex/0409028\].

One can find the program at http://www-linux.gsi.de/ misko/ .

D. Teaney, J. Lauret and E. V. Shuryak, arXiv:nucl-th/0110037.

G. D. Moore and D. Teaney, Phys. Rev. C [**71**]{}, 064904 (2005) \[arXiv:hep-ph/0412346\].

http://www.int.washington.edu/talks/WorkShops/ , presented by M. Leitch at the Joint CATHIE-INT mini-program Quarkonium in Hot Media: From QCD to Experiment.

O. Kaczmarek, S. Ejiri, F. Karsch, E. Laermann and F. Zantow, Prog. Theor. Phys. Suppl. [**153**]{}, 287 (2004).

X. Zhao and R. Rapp, Eur. Phys. J. C [**62**]{}, 109 (2009) \[arXiv:0810.4566 \[nucl-th\]\].

E. Shuryak, arXiv:0903.3734 \[nucl-th\].
Canonical ensembles for $N$ -pair systems {#canonical}
=========================================
In this section we will determine a partition function for a canonical ensemble of $N$ charm pair systems (that is, an ensemble of very many systems, where each system contains $N$ pairs) which correctly averages over different possible pairings of charm and anticharm quarks and can therefore describe recombination in heavy-ion collisions. This averaging is possible computationally but is non-trivial, and for RHIC collisions we will take a binary approximation which makes this averaging much easier. We argue, however, that the unsimplified approach is necessary for describing collisions at the Large Hadron Collider, and for this reason we include this discussion here.
Our simulation could be thought of as a canonical ensemble description of charmonium in plasma: we can think of our large set of pairs as a set of systems, each system containing $N$ pairs, with each system’s dynamics modeled as a stochastic interaction with the medium, with a deterministic interaction of each heavy quark with the other quark in the pair. Each system in this set samples the distribution of pairs in the initial collision, the geometry of the collision, and also samples the stochastic forces on the heavy quarks. Up to this point, we have only thought of each system of this set as consisting of a single pair. The interaction of charm quarks from different hard events is negligible compared with the stochastic interaction and the interaction within the pair, partly because near $T_c$, the dynamics of charm pairs seems best described with some generalization of the Lund string model, which allows no interaction between unpaired charm quarks [@Andersson:1977qx]. Therefore, it is simple bookkeeping to think now of the systems as each consisting of $N$ pairs.
However, even though the dynamics of the system is not changed when considering many pairs per collision, the hadronization (“pairing”) of these $2N$ charm quarks is now a non-trivial issue. For simplicity, assume that the quarks all reach the freezeout temperature $T_c$ at the same proper time. There are $N!$ different possible pairings of the quarks and anti-quarks into charmonium states (each pairing is an element of the permutation group $S_N$). Call the $i$-th pairing $\sigma_i$ (which is an element of $S_N$). Near $T_c$, the relative energetics of the pairing $\sigma_i$ is given by $$E_i=\sum_{j}V(|\vec{r}_j-\vec{r}^{\,\prime}_{\sigma_i(j)}|){\rm ,}$$ where $V(r)$ is the zero-temperature Cornell potential (with some maximum value at large $r$, corresponding to the string splitting), $\vec{r}_j$ the position of the $j$-th charm quark, $\vec{r}^{\,\prime}_j$ the position of the $j$-th charm antiquark, and $\sigma_i(j)$ the integer in the $j$-th term of the permutation.
One way to proceed is to average over these pairings according to their Boltzmann factors. In this way, the probability of a given pairing would be given by $$P(i) = \frac{1}{{\cal Z}} \exp(-E_i/T_c){\rm ,} \; {\cal Z}=\sum_{i=1}^{N!} \exp(-E_i/T_c){\rm .}$$ However, this averaging ignores the possibility of survival of bound states from the start to the finish of the simulation, in that pairings which artificially “break up" bound states are included in the average. This goes against the main point of our last paper: that it is actually the incomplete thermalization of pairs which explains the survival of charmonium states.
For this reason, the averaging we perform rejects permutations which break up pairs that would otherwise be bound: we average over a subgroup $S_N^{'}$ of $S_N$, and determine the probability based on this modified partition function: $$P(i) = \frac{1}{{\cal Z}} \exp(-E_i/T_c){\rm ,} \; {\cal Z}=\sum_{\sigma \in S_N^{'}} \exp(-E_i/T_c){\rm ,}$$ where $E_i$ specifies the energy of a pairing we permit. We will average over the permutations in this way.
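For the small pair numbers relevant at RHIC, this restricted Boltzmann average can be evaluated by direct enumeration. The sketch below shows the bookkeeping; the energy matrix and the predicate encoding the permitted subset $S_N^{'}$ would be supplied by the simulation.

```python
import math
from itertools import permutations

def pairing_probabilities(E, allowed, T_c=0.165):
    """Boltzmann-weighted average over permitted pairings.  E[i][j] is
    the potential energy (GeV) of pairing charm i with anti-charm j;
    `allowed` is a predicate selecting the subset S_N' of permutations
    (e.g. rejecting pairings that break up bound diagonal pairs).
    Returns a dict {sigma: probability}."""
    n = len(E)
    weights = {}
    for sigma in permutations(range(n)):
        if not allowed(sigma):
            continue
        energy = sum(E[i][sigma[i]] for i in range(n))
        weights[sigma] = math.exp(-energy / T_c)
    Z = sum(weights.values())
    return {s: w / Z for s, w in weights.items()}
```

Brute-force enumeration scales as $N!$; for the larger pair numbers expected at the LHC one would fall back on the Metropolis permutation sampling discussed in Section \[simulation\].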
By doing this, we will use a fully canonical ensemble description for charm in plasma, which holds for any value for $N$, large or small. Previous work in statistical hadronization used the grand canonical approach to explain relative abundances of open and hidden charm [@BraunMunzinger:2000px], which can only be applied where thermalization may be assumed to be complete.
---
abstract: 'Neural embeddings have been used with great success in Natural Language Processing (NLP). They provide compact representations that encapsulate word similarity and attain state-of-the-art performance in a range of linguistic tasks. The success of neural embeddings has prompted significant amounts of research into applications in domains other than language. One such domain is graph-structured data, where embeddings of vertices can be learned that encapsulate vertex similarity and improve performance on tasks including edge prediction and vertex labelling. For both NLP and graph based tasks, embeddings have been learned in high-dimensional Euclidean spaces. However, recent work has shown that the appropriate isometric space for embedding complex networks is not the flat Euclidean space, but negatively curved, hyperbolic space. We present a new concept that exploits these recent insights and propose learning neural embeddings of graphs in hyperbolic space. We provide experimental evidence that embedding graphs in their *natural* geometry significantly improves performance on downstream tasks for several real-world public datasets.'
author:
- Benjamin Paul Chamberlain
- James R Clough
- Marc Peter Deisenroth
bibliography:
- 'main.bib'
title: Neural Embeddings of Graphs in Hyperbolic Space
---
Introduction
============
Embedding (or vector space) methods find a lower-dimensional continuous space in which to represent high-dimensional complex data [@Roweis2000; @Belkin2001]. The distance between objects in the lower-dimensional space gives a measure of their similarity. This is usually achieved by first postulating a low-dimensional vector space and then optimising an objective function of the vectors in that space. Vector space representations provide three principal benefits over sparse schemes: (1) they encapsulate similarity, (2) they are compact, and (3) they perform better as inputs to machine learning models [@Salton1975]. This is true of graph-structured data, where the native data format is the adjacency matrix, a typically large, sparse matrix of connection weights.
Neural embedding models are a flavour of embedding scheme where the vector space corresponds to a subset of the network weights, which are learned through backpropagation. Neural embedding models have been shown to improve performance in a large number of downstream tasks across multiple domains. These include word analogies [@Mikolov2013; @Mnih2013], machine translation [@Sutskever2014], document comparison [@Kusner2015], missing edge prediction [@Grover], vertex attribution [@Perozzi2014], product recommendations [@Grbovic2015; @Baeza-yates2015], customer value prediction [@Kooti2017; @Chamberlain2017] and item categorisation [@Barkan2016]. In all cases the embeddings are learned without labels (unsupervised) from a sequence of entities.
To the best of our knowledge, all previous work on neural embedding models either explicitly or implicitly (by using the Euclidean dot product) assumes that the vector space is Euclidean. Recent work from the field of complex networks has found that many interesting networks, such as the Internet [@Boguna2010] or academic citations [@Clough2015a; @Clough2016] can be well described by a framework with an underlying non-Euclidean hyperbolic geometry. Hyperbolic geometry provides a continuous analogue of tree-like graphs, and even infinite trees have nearly isometric embeddings in hyperbolic space [@Gromov]. Additionally, the defining features of complex networks, such as power-law degree distributions, strong clustering and hierarchical community structure, emerge naturally when random graphs are embedded in hyperbolic space [@Krioukov].
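Concretely, the hyperbolic distance between two points $u, v$ of the Poincaré ball is given by the standard closed form $d(u,v) = \operatorname{arcosh}\left(1 + 2\lVert u-v\rVert^2 / ((1-\lVert u\rVert^2)(1-\lVert v\rVert^2))\right)$. The helper below is our illustration of that formula, not code from this paper.

```python
import math

def poincare_distance(u, v):
    """Hyperbolic distance between two points strictly inside the unit
    (Poincare) ball, given as coordinate tuples of equal dimension."""
    diff2 = sum((a - b) ** 2 for a, b in zip(u, v))
    u2 = sum(a * a for a in u)
    v2 = sum(b * b for b in v)
    return math.acosh(1.0 + 2.0 * diff2 / ((1.0 - u2) * (1.0 - v2)))
```

Points near the boundary of the ball are exponentially far apart in this metric, which is what makes hyperbolic space a continuous analogue of a tree.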
The starting point for our model is the celebrated word2vec Skipgram architecture, which is shown in Figure \[fig:skipgram\] [@Mikolov2013; @Mikolov2013a]. Skipgram is a shallow neural network with three layers: (1) an input projection layer that maps from a one-hot-encoded to a distributed representation, (2) a hidden layer, and (3) an output softmax layer. The network is necessarily simple for tractability, as there are a very large number of output states (every word in a language). Skipgram is trained on a sequence of words that is decomposed into (input word, context word)-pairs. The model employs two separate vector representations, one for the input words and another for the context words, with the input representation comprising the learned embedding. The word pairs are generated by taking a sequence of words and running a sliding window (the context) over them. As an example, the word sequence “chance favours the prepared mind” with a context window of size three would generate the following training data: (chance, favours), (chance, the), (favours, chance), and so on. Words are initially randomly allocated to vectors within the two vector spaces. Then, for each training pair, the vector representations of the observed input and context words are pushed towards each other and away from all other words (see Figure \[fig:skipgram\_updates\]).
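The sliding-window pair generation can be sketched in a few lines; the function name and the `radius` convention (pairs are formed with up to `radius` words on each side of the centre word) are our own, not from the cited work:

```python
# Minimal sketch of Skipgram (input, context) pair generation.

def skipgram_pairs(tokens, radius=2):
    """Return (input word, context word) pairs from a token sequence."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - radius), min(len(tokens), i + radius + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

pairs = skipgram_pairs("chance favours the prepared mind".split())
# the pairs begin (chance, favours), (chance, the), (favours, chance), ...
```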
The concept can be extended from words to network structured data using random walks to create sequences of vertices. The vertices are then treated exactly analogously to words in the NLP formulation. This was originally proposed as DeepWalk [@Perozzi2014]. Extensions varying the nature of the random walks have been explored in LINE [@Tang2015] and Node2vec [@Grover].
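A hedged sketch of this corpus construction: one fixed-length random walk started at every vertex, with each walk later treated as a sentence. The adjacency-dict format and all names are illustrative, not taken from the cited implementations:

```python
# DeepWalk-style corpus generation: one random walk per vertex.
import random

def random_walks(adj, walk_len=10, seed=0):
    """Run one walk of length `walk_len` from each vertex of `adj`."""
    rng = random.Random(seed)
    walks = []
    for start in sorted(adj):
        walk = [start]
        while len(walk) < walk_len:
            walk.append(rng.choice(adj[walk[-1]]))
        walks.append(walk)
    return walks

toy = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = random_walks(toy)  # 4 walks of length 10, one per vertex
```

Each walk is then fed to Skipgram exactly as a sentence of words would be.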
#### Contribution
In this paper, we introduce the new concept of neural embeddings in hyperbolic space. We formulate backpropagation in hyperbolic space and show that using the natural geometry of complex networks improves performance in vertex classification tasks across multiple networks.
Hyperbolic Geometry
===================
Hyperbolic geometry emerges from relaxing Euclid’s fifth postulate (the parallel postulate) of geometry. In hyperbolic space there is not just one, but an infinite number of parallel lines that pass through a single point. This is illustrated in Figure \[fig:parallel\], where every line is parallel to the bold, blue line and all pass through the same point. Hyperbolic space is one of only three types of isotropic space that can be defined entirely by their curvature. The most familiar is Euclidean space, which is flat, having zero curvature. Space with uniform positive curvature has an elliptic geometry (e.g. the surface of a sphere), and space with uniform negative curvature is called hyperbolic, which is analogous to a saddle-like surface. Because, unlike in Euclidean space, even infinite trees have nearly isometric embeddings in hyperbolic space, it has been successfully used to model complex networks with hierarchical structure, power-law degree distributions and high clustering [@Krioukov].
One of the defining characteristics of hyperbolic space is that it is in some sense *larger* than the more familiar Euclidean space; the area of a circle or volume of a sphere grows exponentially with its radius, rather than polynomially. This suggests that low-dimensional hyperbolic spaces may provide effective representations of data in ways that low-dimensional Euclidean spaces cannot. However, this makes hyperbolic space hard to visualise, as even the 2D hyperbolic plane cannot be isometrically embedded into Euclidean space of any dimension (unlike elliptic geometry, where a 2-sphere can be embedded into 3D Euclidean space). For this reason there are many different ways of representing hyperbolic space, with each representation conserving some geometric properties but distorting others. In the remainder of the paper we use the Poincaré disk model of hyperbolic space.
Poincaré Disk Model
-------------------
[0.4]{} ![Illustrations of properties of hyperbolic space: (a) tiles of constant hyperbolic area; (b) parallel lines.[]{data-label="fig:hyperbolic illustration"}](circle_limit1 "fig:"){width="\hsize"}
[0.4]{} ![Illustrations of properties of hyperbolic space: (a) tiles of constant hyperbolic area; (b) parallel lines.[]{data-label="fig:hyperbolic illustration"}](parallel_lines.png "fig:"){width="\hsize"}
The Poincaré disk models two-dimensional hyperbolic space where the infinite plane is represented as a unit disk. We work with the two-dimensional disk, but it is easily generalised to the $d$-dimensional Poincaré ball, where hyperbolic space is represented as a unit $d$-ball.
In this model hyperbolic distances grow exponentially towards the edge of the disk. The circle’s boundary represents infinitely distant points as the infinite hyperbolic plane is squashed inside the finite disk. This property is illustrated in Figure \[fig:circle\_limit1\] where each tile is of constant area in hyperbolic space, but the tiles rapidly shrink to zero area in Euclidean space. Although volumes and distances are warped, the Poincaré disk model is **conformal**, meaning that Euclidean and hyperbolic angles between lines are equal. Straight lines in hyperbolic space intersect the boundary of the disk orthogonally and appear either as diameters of the disk, or arcs of a circle. Figure \[fig:parallel\] shows a collection of straight hyperbolic lines in the Poincaré disk. Just as in spherical geometry, the shortest path from one place to another is a straight line, but appears as a curve on a flat map. Similarly, these straight lines show the shortest path (in terms of distance in the underlying hyperbolic space) from one point on the disk to another, but they appear curved. This is because it is quicker to move close to the centre of the disk, where distances are shorter, than nearer the edge. In our proposed approach, we will exploit both the conformal property and the circular symmetry of the Poincaré disk.
Overall, the geometric intuition motivating our approach is that vertices embedded near the middle of the disk can have more close neighbours than they could in Euclidean space, whilst vertices nearer the edge of the disk can still be very far from each other.
Inner Product, Angles, and Distances
------------------------------------
The mathematics is considerably simplified if we exploit the symmetries of the model and describe points in the Poincaré disk using polar coordinates, $x = (r_e,\theta)$, with $r_e \in [0, 1)$ and $\theta \in [0, 2\pi)$. To define similarities and distances, we require an inner product. In the Poincaré disk, the *inner product* of two vectors $x = (r_x, \theta_x)$ and $y=(r_y, \theta_y)$ is given by $$\begin{aligned}
\langle x,y \rangle &= \Vert x \Vert \Vert y \Vert \cos (\theta_x - \theta_y) \\
&= 4 \operatorname{arctanh}r_x \operatorname{arctanh}r_y \cos(\theta_x - \theta_y)
\label{eq:poincare_inner}\end{aligned}$$ The *distance* of $x = (r_e,\theta)$ from the origin of the hyperbolic co-ordinate system is given by $r_h = 2 \operatorname{arctanh}r_e$ and the circumference of a circle of hyperbolic radius R is $C = 2 \pi \sinh R$.
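These formulas translate directly into code; a small sketch with our own function names:

```python
# Polar-coordinate Poincare-disk formulas from the text:
#   r_h = 2 * arctanh(r_e)   (distance from the origin)
#   <x, y> = 4 * arctanh(r_x) * arctanh(r_y) * cos(theta_x - theta_y)
import math

def hyperbolic_norm(r_e):
    """Hyperbolic distance of a point from the origin, for r_e in [0, 1)."""
    return 2.0 * math.atanh(r_e)

def poincare_inner(x, y):
    """Hyperbolic inner product of two points given as (r_e, theta) pairs."""
    (r_x, t_x), (r_y, t_y) = x, y
    return 4.0 * math.atanh(r_x) * math.atanh(r_y) * math.cos(t_x - t_y)
```

Note that the inner product of a point with itself recovers the squared hyperbolic norm, $4\operatorname{arctanh}^2 r_e = r_h^2$.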
Neural Embedding in Hyperbolic Space
====================================
![Geometric interpretation of the update equations in the Skipgram model. The vector representation of the output vertex $v_{w_O}^{\prime(\mathrm{new})}$ is moved closer (blue) to the vector representation of the input vertex $v_I$, while all other vectors $v_{w_j}^{\prime(\mathrm{new})}$ move further away (red). The magnitude of the change is proportional to the prediction error.[]{data-label="fig:skipgram_updates"}](stretch){width="48.00000%"}
We adopt the original notation of [@Mikolov2013] whereby the input vertex is $w_I$ and the output is $w_O$. Their corresponding vector representations are $v_{w_I}$ and $v'_{w_O}$, which are elements of the two vector spaces shown in Figure \[fig:skipgram\], $\mathbf{W}$ and $\mathbf{W'}$ respectively. Skipgram has a geometric interpretation, which we visualise in Figure \[fig:skipgram\_updates\] for vectors in $\mathbf{W}^\prime$. Updates to $v^\prime_{w_j}$ are performed by simply adding (if $w_j$ is the observed output vertex) or subtracting (otherwise) an error-weighted portion of the input vector. Similar, though slightly more complicated, update rules apply to the vectors in $\mathbf{W}$. Given this interpretation, it is natural to look for alternative geometries in which to perform these updates.
To embed a graph in hyperbolic space we replace Skipgram’s two Euclidean vector spaces ($\mathbf{W}$ and $\mathbf{W'}$ in Figure \[fig:skipgram\]) with two Poincaré disks. We learn embeddings by optimising an objective function that predicts output/context vertices from an input vertex, but we replace the Euclidean dot products used in Skipgram with hyperbolic inner products. A softmax function is used for the conditional predictive distribution $$\begin{aligned}
p(w_O|w_I) = \frac{\exp (\langle v'_{w_O}, v_{w_I} \rangle)}{\sum_{i=1}^V\exp (\langle v'_{w_i}, v_{w_I} \rangle )}\,,
\label{eq:cond_dist}\end{aligned}$$
where $v_{w_i}$ is the vector representation of the $i^{th}$ vertex, primes indicate members of the output vector space (see Figure \[fig:skipgram\]) and $\langle\cdot,\cdot \rangle$ is the hyperbolic inner product. Directly optimising Equation \[eq:cond\_dist\] is computationally demanding, as the sum in the denominator extends over every vertex in the graph. Two commonly used techniques to make word2vec more efficient are (a) replacing the softmax with a hierarchical softmax [@Mnih2008; @Mikolov2013] and (b) negative sampling [@Mnih2012; @Mnih2013]. We use negative sampling as it is faster.
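For concreteness, the conditional distribution in Equation \[eq:cond\_dist\] reduces to a standard softmax over precomputed inner products $u_i = \langle v'_{w_i}, v_{w_I} \rangle$. A minimal sketch (the function name and the max-subtraction stabilisation are ours, not part of the model):

```python
# Softmax over precomputed (hyperbolic) inner products.
import math

def softmax(scores):
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

The denominator sums over every vertex, which is exactly the cost that negative sampling avoids.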
![image](skipgram){width="\hsize"}
Negative Sampling
-----------------
Negative sampling is a form of Noise Contrastive Estimation (NCE) [@Gutmann2012]. NCE is an estimation technique that is based on the assumption that a good model should be able to separate signal from noise using only logistic regression.
As we only care about generating good embeddings, the objective function does not need to produce a well-specified probability distribution. The negative log likelihood using negative sampling is $$\begin{aligned}
E &= -\log \sigma (\langle v_{w_O}', v_{w_I} \rangle) -\hspace{-4mm} \sum_{w_j \in W_{neg}}\hspace{-2mm} \log \sigma(- \langle v_{w_j}', v_{w_I} \rangle) \\
&= -\log \sigma (u_O) - \sum_{j=1}^K \mathbb{E}_{w_j \sim P_n} [\log \sigma(- u_j)]
\label{eq:loss}\end{aligned}$$ where $v_{w_I}$, $v_{w_O}^\prime$ are the vector representations of the input and output vertices, $u_j = \langle v_{w_j}^\prime, v_{w_I}\rangle$, $W_{\rm neg}$ is a set of samples drawn from the noise distribution, $K$ is the number of samples and $\sigma$ is the sigmoid function. The first term represents the observed data and the second term the negative samples. To draw $W_{\rm neg}$, we specify the noise distribution $P_n$ to be the unigram distribution raised to the power $\frac{3}{4}$, as in [@Mikolov2013].
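The loss above is simple to compute once the inner products are in hand. A hedged sketch with our own names, taking the observed inner product and a list of negative-sample inner products:

```python
# E = -log sigma(u_obs) - sum_j log sigma(-u_j)
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def neg_sampling_loss(u_obs, u_negs):
    """Negative-sampling loss for one (input, output) pair plus K negatives."""
    return -math.log(sigmoid(u_obs)) - sum(math.log(sigmoid(-u)) for u in u_negs)

# A confident, correct model drives the loss towards zero:
loss = neg_sampling_loss(10.0, [-10.0, -10.0])
```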
Model Learning
--------------
We learn the model using backpropagation. To perform backpropagation it is easiest to work in natural hyperbolic co-ordinates on the disk and map back to Euclidean co-ordinates only at the end. In natural co-ordinates $r \in (0,\infty)$, $\theta \in (0,2\pi]$ and $u_j = r_j r_I \cos(\theta_I - \theta_j)$. The major drawback of this co-ordinate system is that it introduces a singularity at the origin. To address the complexities that result from radii that are less than or equal to zero, we initialise all vectors to be in a patch of space that is small relative to its distance from the origin.
The gradient of the negative log-likelihood in Equation \[eq:loss\] w.r.t. $u_j$ is given by $$\begin{aligned}
\frac{\partial E}{\partial u_j} &=
\begin{cases}
\sigma(u_j) - 1, & \text{if}\ w_j = w_O\\
\sigma(u_j), & \text{if}\ w_j \in W_{neg}\\
0, & \text{otherwise}
\end{cases}
\label{eq:error}\end{aligned}$$ Taking the derivatives w.r.t. the components of vectors in $\mathbf{W'}$ (in natural polar hyperbolic co-ordinates) yields $$\begin{aligned}
\frac{\partial E}{\partial r'_j} &= \frac{\partial E}{\partial u_j}\frac{\partial u_j}{\partial r'_j} = \frac{\partial E}{\partial u_j} r_I \cos(\theta_I - \theta'_j) \\
\frac{\partial E}{\partial \theta'_j} & = \frac{\partial E}{\partial u_j} r_j'r_I \sin(\theta_I - \theta_j') \,.\end{aligned}$$
The gradient in hyperbolic polar co-ordinates is then $$\begin{aligned}
\nabla _\mathbf{r} E = \frac{\partial E}{\partial r} \mathbf{\hat{r}} + \frac{1}{\sinh r}\frac{\partial E}{\partial \theta} \mathbf{\hat{\theta}}\,,\end{aligned}$$ which leads to $$\begin{aligned}
r_j^{'new} &=
\begin{cases}
r_j^{'old} - \eta \epsilon_j r_I \cos(\theta_I - \theta'_j), & \text{if}\ w_j \in w_O\cup W_{neg}\\
r_j^{'old}, & \text{otherwise}
\end{cases} \\
\theta_j^{'new} &=
\begin{cases}
\theta_j^{'old} - \eta \epsilon_j \frac{r_Ir_j}{\sinh{r_j}}\sin(\theta_I - \theta_j') , & \text{if}\ w_j \in w_O\cup W_{neg}\\
\theta_j^{'old}, & \text{otherwise}
\end{cases} \end{aligned}$$ where $\eta$ is the learning rate and $\epsilon_j$ is the prediction error defined in Equation \[eq:error\]. Calculating the derivatives w.r.t. the input embedding follows the same pattern, and we obtain $$\begin{aligned}
\frac{\partial E}{\partial r_I} &= \sum_{j : w_j \in w_O \cup W_{neg}} \frac{\partial E}{\partial u_j}\frac{\partial u_j}{\partial r_I} \\
&= \sum_{j : w_j \in w_O \cup W_{neg}} \frac{\partial E}{\partial u_j} r_j' \cos(\theta_I - \theta'_j) \,,\\
\frac{\partial E}{\partial \theta_I} &= \sum_{j : w_j \in w_O \cup W_{neg}} \frac{\partial E}{\partial u_j}\frac{\partial u_j}{\partial \theta_I} \\
&= \sum_{j : w_j \in w_O \cup W_{neg}} -\frac{\partial E}{\partial u_j} r_I r_j' \sin(\theta_I - \theta'_j)\,. \end{aligned}$$ The corresponding update equations are $$\begin{aligned}
r_I^{new} &=
r_I^{old} - \eta \sum_{j : w_j \in w_O \cup W_{neg}} \epsilon_j r_j' \cos(\theta_I - \theta'_j)\,,\\
\theta_I^{new} &=
\theta_I^{old} + \eta \sum_{j : w_j \in w_O \cup W_{neg}} \epsilon_j \frac{r_I r_j'}{\sinh r_I} \sin(\theta_I - \theta'_j)\,,\end{aligned}$$ where $\eta$ is again the learning rate and $\epsilon_j$ the prediction error of Equation \[eq:error\]. On completion of backpropagation, the vectors are mapped back to Euclidean co-ordinates on the Poincaré disk through $\theta_h \to \theta_e$ and $r_h \to \tanh \frac{r_h}{2}$.
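To make the update rules concrete, the following sketch applies one gradient step to an output vector in natural coordinates and then maps back to the disk. Function names and the argument values in the test are illustrative; the unprimed-radius notation follows the equations above:

```python
# One gradient step on an output vector (r_j, th_j) in natural hyperbolic
# coordinates, then the final map back to Euclidean coordinates on the disk.
import math

def update_output_vector(r_j, th_j, r_I, th_I, eps_j, eta):
    """One step of the r'/theta' updates for a vertex w_j in w_O or W_neg."""
    r_new = r_j - eta * eps_j * r_I * math.cos(th_I - th_j)
    th_new = th_j - eta * eps_j * (r_I * r_j / math.sinh(r_j)) * math.sin(th_I - th_j)
    return r_new, th_new

def to_poincare(r_h, th_h):
    """Map natural coordinates back to the disk: theta unchanged, r_e = tanh(r_h/2)."""
    return math.tanh(r_h / 2.0), th_h
```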
Experimental Evaluation
=======================
In this section, we assess the quality of hyperbolic embeddings and compare them to embeddings in Euclidean spaces on a number of public benchmark networks.
Datasets
--------
name |V| |E| |y| largest class Labels
---------- ------- -------- ----- --------------- ----------------
karate 34 77 2 0.53 Factions
polbooks 105 441 3 0.46 Affiliation
football 115 613 12 0.11 League
adjnoun 112 425 2 0.52 Part of Speech
polblogs 1,224 16,781 2 0.52 Affiliation
: Description of experimental datasets. ‘Largest class’ gives the fraction of the dataset composed by the largest class and thereby provides the benchmark for random prediction accuracy.
\[tab:datasets\]
We report results on five publicly available network datasets for the problem of vertex attribution.
1. Karate: Zachary’s karate club contains 34 vertices divided into two factions [@Zachary1977].
2. Polbooks: A network of books about US politics published around the time of the 2004 presidential election and sold by the online bookseller Amazon.com. Edges between books represent frequent co-purchasing of books by the same buyers.
3. Football: A network of American football games between Division IA colleges during regular season Fall 2000 [@Girvan2002].
4. Adjnoun: Adjacency network of common adjectives and nouns in the novel David Copperfield by Charles Dickens [@Newman2006].
5. Polblogs: A network of hyperlinks between weblogs on US politics, recorded in 2005 [@Adamic2005].
Statistics for these datasets are recorded in Table \[tab:datasets\].
![image](football){width="\hsize"}
![image](political_blogs){width="\hsize"}
![image](polbooks){width="\hsize"}
![image](adjnoun){width="\hsize"}
![image](karate){width="\hsize"}
Visualising Embeddings
----------------------
To illustrate the utility of hyperbolic embeddings we compare embeddings in the Poincaré disk to the two-dimensional deepwalk embeddings for the 34-vertex karate network with two factions. The results are shown in Figure \[fig:embeddings\]. Both embeddings were generated by running for five epochs on an intermediate dataset of 34 ten-step random walks, one originating at each vertex.
The figure clearly shows that the hyperbolic embedding is able to capture the community structure of the underlying network. When embedded in hyperbolic space, the two factions (black and white discs) of the underlying graph are linearly separable, while the Deepwalk embedding does not exhibit such an obvious structure.
Vertex Attribute Prediction
---------------------------
We evaluate the success of neural embeddings in hyperbolic space by using the learned embeddings to predict held-out labels of vertices in networks. In our experiments, we compare our embedding to deepwalk [@Perozzi2014] embeddings of dimensions 2, 4, 8, 16, 32, 64 and 128. To generate embeddings we first create an intermediate dataset by taking a series of random walks over the networks. For each network we use a ten-step random walk originating at each vertex.
The embedding models are all trained using the same parameters and intermediate random walk dataset. For deepwalk, we use the gensim [@Rehurek2010] python package, while our hyperbolic embeddings are written in custom TensorFlow. In both cases, we use five training epochs, a window size of five and do not prune any vertices.
The results of our experiments are shown in Figure \[fig:results\]. The graphs show macro F1 scores against the percentage of labelled data used to train a logistic regression classifier. Here we follow the method described in [@Liu2006] for generating F1 scores when each test case can have multiple labels. The error bars show one standard error from the mean over ten repetitions. The blue lines show hyperbolic embeddings while the red lines depict deepwalk embeddings at various dimensions. It is apparent that in all datasets hyperbolic embeddings significantly outperform deepwalk.
Conclusion
==========
We have introduced the concept of neural embeddings in hyperbolic space. To the best of our knowledge, all previous embedding models have assumed a flat Euclidean geometry. However, a flat geometry is not the natural geometry of all data structures. Hyperbolic space has the property that power-law degree distributions, strong clustering and hierarchical community structure emerge naturally when random graphs are embedded in it. It is therefore logical to exploit the structure of hyperbolic space for useful embeddings of complex networks. We have demonstrated that, when applied to the task of classifying vertices of complex networks, hyperbolic space embeddings significantly outperform embeddings in Euclidean space.
---
abstract: 'Understanding and developing a correlation measure that can detect general dependencies is not only imperative to statistics and machine learning, but also crucial to general scientific discovery in the big data age. In this paper, we establish a new framework that generalizes distance correlation — a correlation measure that was recently proposed and shown to be universally consistent for dependence testing against all joint distributions of finite moments — to the Multiscale Graph Correlation ([[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}). By utilizing the characteristic functions and incorporating the nearest neighbor machinery, we formalize the population version of local distance correlations, define the optimal scale in a given dependency, and name the optimal local correlation as [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}. The new theoretical framework motivates a theoretically sound Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} and allows a number of desirable properties to be proved, including the universal consistency, convergence and almost unbiasedness of the sample version. The advantages of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} are illustrated via a comprehensive set of simulations with linear, nonlinear, univariate, multivariate, and noisy dependencies, where it loses almost no power in monotone dependencies while achieving better performance in general dependencies, compared to distance correlation and other popular methods.'
author:
- 'Cencheng Shen[^1]'
- 'Carey E. Priebe[^2]'
- 'Joshua T. Vogelstein[^3]'
bibliography:
- 'MGCbib.bib'
title: '**From Distance Correlation to Multiscale Graph Correlation**'
---
[*Keywords:*]{} testing independence, generalized distance correlation, nearest neighbor graph
Introduction
============
Given pairs of observations $(x_{i},y_{i}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}^{q}$ for $i=1,\ldots,n$, assume they are generated independently and identically distributed (*iid*) from $F_{{\ensuremath{X}}{\ensuremath{Y}}}$. A fundamental statistical question prior to the pursuit of any meaningful joint inference is the independence testing problem: the two random variables are independent if and only if $F_{{\ensuremath{X}}{\ensuremath{Y}}} = F_{{\ensuremath{X}}} F_{{\ensuremath{Y}}}$, i.e., the joint distribution equals the product of the marginals. The statistical hypothesis is formulated as: $$\begin{aligned}
& H_{0}: F_{{\ensuremath{X}}{\ensuremath{Y}}}=F_{{\ensuremath{X}}}F_{{\ensuremath{Y}}},\\
& H_{A}: F_{{\ensuremath{X}}{\ensuremath{Y}}} \neq F_{{\ensuremath{X}}}F_{{\ensuremath{Y}}}.\end{aligned}$$ For any test statistic, the testing power at a given type $1$ error level equals the probability of correctly rejecting the null hypothesis when the random variables are dependent. A test is consistent if and only if the testing power converges to $1$ as the sample size increases to infinity, and a valid test must properly control the type $1$ error level. Modern datasets are often nonlinear, high-dimensional, and noisy, such that density estimation and traditional statistical methods are not applicable. As multi-modal data are prevalent in much data-intensive research, a powerful, intuitive, and easy-to-use method for detecting general relationships is pivotal.
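The null hypothesis suggests a generic way to calibrate any dependence statistic: permuting $y$ destroys any dependence, simulating draws from the null $F_{{\ensuremath{X}}}F_{{\ensuremath{Y}}}$. A sketch of such a permutation test, using the absolute Pearson correlation purely as a placeholder statistic (all names are ours):

```python
# Generic permutation test for independence on paired univariate samples.
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def permutation_pvalue(x, y, stat=pearson, n_perm=99, seed=0):
    rng = random.Random(seed)
    observed = abs(stat(x, y))
    exceed = sum(abs(stat(x, rng.sample(y, len(y)))) >= observed
                 for _ in range(n_perm))
    return (1 + exceed) / (1 + n_perm)   # add-one correction keeps p > 0

x = list(range(20))
p = permutation_pvalue(x, x)  # strongly dependent, so p should be small
```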
The classical Pearson’s correlation [@Pearson1895] is still extensively employed in statistics, machine learning, and real-world applications. It is an intuitive statistic that quantifies the linear association, a special but extremely important relationship. A recent surge of interest has focused on using distance metrics and kernel transformations to achieve consistent independence testing against all dependencies. A notable example is the distance correlation ([[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}) [@SzekelyRizzoBakirov2007; @SzekelyRizzo2009; @SzekelyRizzo2013a; @SzekelyRizzo2014]: the population [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} is defined via the characteristic functions of the underlying random variables, while the sample [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} can be conveniently computed via the pairwise Euclidean distances of given observations. [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} enjoys universal consistency against any joint distribution of finite second moments, and is applicable to any metric space of strong negative type [@Lyons2013]. Notably, the idea of a distance-based correlation measure can be traced back to the Mantel coefficient [@Mantel1967; @JosseHolmes2013]: the sample version differs from sample [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} only in centering and has garnered popularity in ecology and biology applications, but does not have the consistency property of [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}.
Developed almost in parallel from the machine learning community, the kernel-based method ([[[<span style="font-variant:small-caps;">Hsic</span>]{}]{}]{}) [@GrettonEtAl2005; @GrettonGyorfi2010] has a striking similarity with [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}: it is formulated by kernels instead of distances, can be estimated on sample data via the sample kernel matrix, and is universally consistent when using any characteristic kernel. Indeed, it is shown in [@SejdinovicEtAl2013] that there exists a mapping from kernel to metric (and vice versa) such that [[[<span style="font-variant:small-caps;">Hsic</span>]{}]{}]{} equals [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}. Another competitive method is the Heller-Heller-Gorfine method ([[[<span style="font-variant:small-caps;">Hhg</span>]{}]{}]{}) [@HellerGorfine2013; @heller2016consistent]: it is also universally consistent by utilizing the rank information and the Pearson’s chi-square test, but has better finite-sample testing powers over [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} in a collection of common nonlinear dependencies. There are other consistent methods available, such as the [[[<span style="font-variant:small-caps;">Copula</span>]{}]{}]{} method that tests independence based on the empirical copula process [@Genest06; @Genest07; @Holmes09], entropy-based methods [@Mendes06], and methods tailored for univariate data [@Reshef2011].
As the number of observations in many real-world problems (e.g., genetics and biology) is often limited and very costly to increase, finite-sample testing power is crucial for certain data exploration tasks: [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} has been shown to perform well in monotone relationships, but not so well in nonlinear dependencies such as circles and parabolas; the performance of [[[<span style="font-variant:small-caps;">Hsic</span>]{}]{}]{} and [[[<span style="font-variant:small-caps;">Hhg</span>]{}]{}]{} is often the opposite of [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}: they perform slightly worse than [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} in monotone relationships but excel in various nonlinear dependencies.
From another point of view, unraveling the nonlinear structure has been intensively studied in the manifold learning literature [@TenenbaumSilvaLangford2000; @SaulRoweis2000; @BelkinNiyogi2003]: by approximating a linear manifold locally via the k-nearest neighbors at each point, these nonlinear techniques can produce better embedding results than linear methods (like PCA) on nonlinear data. The main downside of manifold learning often lies in the parameter choice, i.e., the number of neighbors or the correct embedding dimension is often hard to estimate and requires cross-validation. Therefore, assuming a satisfactory neighborhood size can be efficiently determined in a given nonlinear relationship, the local correlation measure shall work better than the global correlation measure; and if the parameter selection is sufficiently adaptive, the optimal local correlation shall equal the global correlation in linear relationships.
In this manuscript we formalize the notion of population local distance correlations and [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}, explore their theoretical properties both asymptotically and in finite-sample, and propose an improved Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} algorithm. By combining distance correlation with the locality principle, [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} inherits the universal consistency in testing, is able to efficiently search over all local scales and determine the optimal correlation, and enjoys the best testing powers throughout the simulations. A number of real data applications via [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} are pursued in [@ShenEtAl2016], e.g., testing brain images versus personality and disease, identifying potential protein biomarkers for cancer, etc. [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is also employed for vertex dependence testing and screening in [@mgc3; @mgc4].
The paper is organized as follows: In Section \[sec:main1\], we define the population local distance correlation and population [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} via the characteristic functions of the underlying random variables and the nearest neighbor graphs, and show how the local variants are related to the distance correlation. In Section \[sec:main2\], we consider the sample local correlation on finite samples, prove its convergence to the population version, and discuss the centering and ranking scheme. In Section \[sec:main3\], we present a thresholding-based algorithm for Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}, prove its convergence property, propose a theoretically sound threshold choice, show that [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is valid and consistent under the permutation test, and finish the section with a number of fundamental properties for the local correlations and [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}. The comprehensive simulations in Section \[sec:exp\] exhibit the empirical advantage of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}, and the paper is concluded in Section \[sec:dis\]. All proofs are in Appendix \[sec:proofs\], the simulation functions are presented in Appendix \[appen:function\], and the code is available on Github[^4] and CRAN[^5].
Multiscale Graph Correlation for Random Variables {#sec:main1}
=================================================
Distance Correlation Review
---------------------------
We first review the original distance correlation in [@SzekelyRizzoBakirov2007]. A non-negative weight function $w(t,s)$ on $(t,s) \in \mathbb{R}^{p} \times \mathbb{R}^{q}$ is defined as: $$\begin{aligned}
w(t,s) &= (c_{p}c_{q} |t|^{1+p}|s|^{1+q})^{-1},\end{aligned}$$ where $c_{p}=\frac{\pi^{(1+p)/2}}{\Gamma((1+p)/2)}$ is a non-negative constant tied to the dimensionality $p$, and $\Gamma(\cdot)$ is the complete Gamma function. Then the population distance covariance, variance and correlation are defined by $$\begin{aligned}
dCov({\ensuremath{X}},{\ensuremath{Y}}) &= \int_{\mathbb{R}^{p} \times \mathbb{R}^{q}} |E(g_{{\ensuremath{X}}{\ensuremath{Y}}}(t,s))-E(g_{{\ensuremath{X}}}(t))E(g_{{\ensuremath{Y}}}(s))|^{2} w(t, s)dtds, \\
dVar({\ensuremath{X}}) &= dCov({\ensuremath{X}},{\ensuremath{X}}), \\
dVar({\ensuremath{Y}}) &= dCov({\ensuremath{Y}},{\ensuremath{Y}}), \\
dCorr({\ensuremath{X}},{\ensuremath{Y}}) &= \frac{dCov({\ensuremath{X}},{\ensuremath{Y}})}{\sqrt{dVar({\ensuremath{X}}) \cdot dVar({\ensuremath{Y}})}},\end{aligned}$$ where $|\cdot|$ is the complex modulus, $g_{\cdot}(\cdot)$ denotes the exponential transformation within the expectation of the characteristic function, i.e., $g_{{\ensuremath{X}}{\ensuremath{Y}}}(t,s) = e^{\textbf{i} \left\langle t,{\ensuremath{X}}\right\rangle +\textbf{i} \left\langle s,{\ensuremath{Y}}\right\rangle }$ ($\textbf{i}$ represents the imaginary unit) and $E(g_{{\ensuremath{X}}{\ensuremath{Y}}}(t,s))$ is the characteristic function. Note that distance variance equals $0$ if and only if the random variable is a constant, in which case distance correlation shall be set to $0$. The main property of population [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} is the following.
For any two random variables $({\ensuremath{X}},{\ensuremath{Y}})$ with finite first moments, $dCorr({\ensuremath{X}},{\ensuremath{Y}})=0$ if and only if ${\ensuremath{X}}$ and ${\ensuremath{Y}}$ are independent.
To estimate the population version on sample data, the sample distance covariance is computed by double centering the pairwise Euclidean distance matrix of each data, followed by summing over the entry-wise product of the two centered distance matrices. When the underlying random variables have finite second moments, the sample [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} is shown to converge to the population [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} , and is thus universally consistent for testing independence against all joint distributions of finite second moments.
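As a concrete illustration, the double-centering estimator described above can be sketched in a few lines of NumPy; this is a minimal sketch (helper names are ours, not from any reference implementation):

```python
import numpy as np

def pairwise_dist(z):
    """n-by-n Euclidean distance matrix of n observations (rows of z)."""
    z = np.asarray(z, dtype=float).reshape(len(z), -1)
    return np.sqrt(((z[:, None, :] - z[None, :, :]) ** 2).sum(axis=-1))

def dcorr(x, y):
    """Biased sample distance correlation via double centering."""
    a, b = pairwise_dist(x), pairwise_dist(y)
    # double centering: remove row and column means, add back the grand mean
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                   # squared sample distance covariance
    dvar2 = (A * A).mean() * (B * B).mean()  # product of squared distance variances
    return 0.0 if dvar2 <= 0 else np.sqrt(max(dcov2, 0.0) / np.sqrt(dvar2))
```

For a deterministic relationship such as $y = 2x + 3$ this returns $1$, while under independence the statistic decays toward $0$ as $n$ grows.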
Population Local Correlations {#main1a}
-----------------------------
Next we formally define the population local distance covariance, variance, correlation by combining the k-nearest neighbor graphs with the distance covariance. For simplicity, they are named the local covariance, local variance, and local correlation from now on, and we always assume the following regularity conditions: $$\begin{aligned}
& 1) \mbox{ $({\ensuremath{X}},{\ensuremath{Y}})$ have finite second moments}, \\
& 2) \mbox{ Neither random variable is a constant}, \\
& 3) \mbox{ $({\ensuremath{X}},{\ensuremath{Y}})$ are continuous random variables}. \end{aligned}$$ The finite second moments assumption is required by [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}, and also required by the local version to establish convergence and consistency. The non-constant condition is to avoid the trivial case and make sure population local correlations behave well. The continuous assumption is for ease of presentation, so the definition and related properties can be presented in a more elegant manner. Indeed, for any discrete random variable one can always apply jittering (i.e., add trivial white noise) to make it continuous without altering the independence testing.
Suppose $({\ensuremath{X}},{\ensuremath{Y}}), ({\ensuremath{X}}',{\ensuremath{Y}}'), ({\ensuremath{X}}'',{\ensuremath{Y}}''), ({\ensuremath{X}}''',{\ensuremath{Y}}''')$ are *iid* as $F_{{\ensuremath{X}}{\ensuremath{Y}}}$. Let ${\boldsymbol{I}}(\cdot)$ be the indicator function, define two random variables $$\begin{aligned}
{\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}} &={\boldsymbol{I}}(\int_{B({\ensuremath{X}},\|{\ensuremath{X}}'-{\ensuremath{X}}\|)} dF_{\ensuremath{X}}(u) \leq \rho_k) \\
{\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}} &={\boldsymbol{I}}(\int_{B({\ensuremath{Y}}',\|{\ensuremath{Y}}'-{\ensuremath{Y}}\|)} dF_{\ensuremath{Y}}(v) \leq \rho_l)\end{aligned}$$ with respect to the closed balls $B({\ensuremath{X}},\|{\ensuremath{X}}'-{\ensuremath{X}}\|)$ and $B({\ensuremath{Y}}',\|{\ensuremath{Y}}-{\ensuremath{Y}}'\|)$ centered at ${\ensuremath{X}}$ and ${\ensuremath{Y}}'$ respectively. Then let $\overline{\cdot}$ denote the complex conjugate, define $$\begin{aligned}
h^{\rho_{k}}_{{\ensuremath{X}}}(t) &=(g_{{\ensuremath{X}}}(t)\overline{g_{{\ensuremath{X}}'}(t)}-g_{{\ensuremath{X}}}(t)\overline{g_{{\ensuremath{X}}''}(t)}) {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}} \\
h^{\rho_{l}}_{{\ensuremath{Y}}'}(s) &=(g_{{\ensuremath{Y}}'}(s)\overline{g_{{\ensuremath{Y}}}(s)}-g_{{\ensuremath{Y}}'}(s)\overline{g_{{\ensuremath{Y}}'''}(s)}) {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}}\end{aligned}$$ as functions of $t \in \mathbb{R}^{p}$ and $s \in \mathbb{R}^{q}$ respectively.
The population local covariance, variance, correlation at any $(\rho_k,\rho_l) \in [0,1] \times [0,1]$ are defined as $$\begin{aligned}
\label{eq:dcov1}
dCov^{\rho_k, \rho_l}({\ensuremath{X}},{\ensuremath{Y}}) &= \int_{\mathbb{R}^{p} \times \mathbb{R}^{q}} \{ E(h^{\rho_{k}}_{{\ensuremath{X}}}(t) \overline{h^{\rho_{l}}_{{\ensuremath{Y}}'}(s)})-E(h^{\rho_{k}}_{{\ensuremath{X}}}(t))E(\overline{h^{\rho_{l}}_{{\ensuremath{Y}}'}(s)})\} w(t, s)dtds,\\
dVar^{\rho_k}({\ensuremath{X}}) &= dCov^{\rho_k, \rho_k}({\ensuremath{X}},{\ensuremath{X}}), \nonumber \\
dVar^{\rho_l}({\ensuremath{Y}}) &= dCov^{\rho_l, \rho_l}({\ensuremath{Y}},{\ensuremath{Y}}), \nonumber \\
dCorr^{\rho_k,\rho_l}({\ensuremath{X}},{\ensuremath{Y}}) &= \frac{dCov^{\rho_k,\rho_l}({\ensuremath{X}},{\ensuremath{Y}})}{\sqrt{dVar^{\rho_k}({\ensuremath{X}}) \cdot dVar^{\rho_l}({\ensuremath{Y}})}},\end{aligned}$$ where we limit the domain of population local correlation to $$\begin{aligned}
\mathcal{S}_{\epsilon}=\big\{(\rho_{k},\rho_{l}) \in [0,1] \times [0,1] \mbox{ that satisfies } \min\{dVar^{\rho_k}(X),dVar^{\rho_l}(Y)\} \geq \epsilon\big\}\end{aligned}$$ for a small positive $\epsilon$ that is no larger than $\min\{dVar(X),dVar(Y)\}$.
The domain of local correlation needs to be limited so the population version is well-behaved. For example, when ${\ensuremath{X}}$ is a constant or $\rho_{k}=0$, $dVar^{\rho_k}(X)$ equals $0$ and the corresponding local correlation is not well-defined. All subsequent analysis for the population local correlations is based on the domain $\mathcal{S}_{\epsilon}$, which is non-empty and compact as shown in Theorem \[thmMax\]. In practice, it suffices to set $\epsilon$ as any small positive number, see the sample version in Section \[sec:main2\]. Also note that in either indicator function, the two random variables and the distribution $dF$ are independent, e.g., at any realization $(x, x')$ of $({\ensuremath{X}},{\ensuremath{X}}')$, the first indicator equals ${\boldsymbol{I}}(\int_{B(x,\|x'-x\|)} dF_{\ensuremath{X}}(u) \leq \rho_k)$, and its expectation is taken with respect to $({\ensuremath{X}},{\ensuremath{X}}')$.
The above definition makes use of the characteristic functions, which is akin to the original definition of [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} and easier to show consistency. Alternatively, the local covariance can be equivalently defined via the pairwise Euclidean distances. The alternative definition better motivates the sample version in Section \[sec:main2\], is often handy for understanding and proving theoretical properties, and suggests that local covariance is always a real number, which is not directly obvious from Equation \[eq:dcov1\].
\[thm1\] Suppose $({\ensuremath{X}},{\ensuremath{Y}}),({\ensuremath{X}}',{\ensuremath{Y}}'),({\ensuremath{X}}'',{\ensuremath{Y}}''),({\ensuremath{X}}''',{\ensuremath{Y}}''')$ are *iid* as $F_{{\ensuremath{X}}{\ensuremath{Y}}}$, and define $$\begin{aligned}
d^{\rho_{k}}_{{\ensuremath{X}}} &=(\| {\ensuremath{X}}-{\ensuremath{X}}' \| - \|{\ensuremath{X}}-{\ensuremath{X}}''\|) {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}} \\
d^{\rho_{l}}_{{\ensuremath{Y}}'} &=(\| {\ensuremath{Y}}'-{\ensuremath{Y}}\| - \|{\ensuremath{Y}}'-{\ensuremath{Y}}'''\|) {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}}\end{aligned}$$
The local covariance in Equation \[eq:dcov1\] can be equally defined as $$\begin{aligned}
\label{eq:dcov2}
dCov^{\rho_k, \rho_l}({\ensuremath{X}},{\ensuremath{Y}}) = E(d^{\rho_{k}}_{{\ensuremath{X}}} d^{\rho_{l}}_{{\ensuremath{Y}}'}) - E(d^{\rho_{k}}_{{\ensuremath{X}}}) E(d^{\rho_{l}}_{{\ensuremath{Y}}'}),\end{aligned}$$ which shows that local covariance, variance, correlation are always real numbers.
Each local covariance is essentially a local version of distance covariance that truncates large distances at each point in the support, where the neighborhood size is determined by $(\rho_{k},\rho_{l})$. In particular, distance correlation equals the local correlation at the maximal scale, which will ensure the consistency of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}.
\[thm2\] At any $(\rho_k,\rho_l)\in \mathcal{S}_{\epsilon}$, $dCov^{\rho_{k},\rho_{l}}({\ensuremath{X}},{\ensuremath{Y}}) = 0$ when ${\ensuremath{X}}$ and ${\ensuremath{Y}}$ are independent. Moreover, at $(\rho_k,\rho_l)=(1,1)$, $dCov^{\rho_{k},\rho_{l}}({\ensuremath{X}},{\ensuremath{Y}}) = dCov({\ensuremath{X}},{\ensuremath{Y}})$. They also hold for the correlations by replacing all the $dCov$ by $dCorr$.
Population [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} and Optimal Scale
------------------------------------------------------------------------------------------
The population [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} can be naturally defined as the maximum local correlation within the domain, i.e., $$\begin{aligned}
\label{eq:pmgc}
{c}^{*}({\ensuremath{X}},{\ensuremath{Y}})=\max_{ (\rho_k,\rho_l) \in \mathcal{S}_{\epsilon}} \{dCorr^{\rho_{k},\rho_{l}}({\ensuremath{X}},{\ensuremath{Y}})\},\end{aligned}$$ and the scale that attains the maximum is named the optimal scale $$\begin{aligned}
\label{eq:pmgc1}
(\rho_k,\rho_l)^{*}=\arg\max_{ (\rho_k,\rho_l) \in \mathcal{S}_{\epsilon}} \{dCorr^{\rho_{k},\rho_{l}}({\ensuremath{X}},{\ensuremath{Y}})\}.\end{aligned}$$ The next theorem states the continuity of the local covariance, variance, correlation, and thus the existence of population [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}.
\[thmMax\] Given two continuous random variables $({\ensuremath{X}},{\ensuremath{Y}})$,
(a)
: The local covariance is a continuous function with respect to $(\rho_{k},\rho_{l}) \in [0,1]^2$, so is local variance in $[0,1]$ and local correlation in $\mathcal{S}_{\epsilon}$.
(b)
: The set $\mathcal{S}_{\epsilon}$ is always non-empty unless either random variable is a constant.
(c)
: Excluding the trivial case in (b), the set $\{dCorr^{\rho_{k},\rho_{l}}({\ensuremath{X}},{\ensuremath{Y}}), (\rho_k,\rho_l) \in \mathcal{S}_{\epsilon}\}$ is always non-empty and compact, so an optimal scale $(\rho_k,\rho_l)^{*}$ and ${c}^{*}({\ensuremath{X}},{\ensuremath{Y}})$ exist.
Therefore, population [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} and the optimal scale exist, are distribution dependent, and may not be unique. Without loss of generality, the optimal scale is assumed unique for presentation purpose. The population [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is always no smaller than [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} in magnitude, and equals $0$ if and only if independence, a property inherited from [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}.
\[thm3\] When ${\ensuremath{X}}$ and ${\ensuremath{Y}}$ are independent, ${c}^{*}({\ensuremath{X}},{\ensuremath{Y}})=dCorr({\ensuremath{X}},{\ensuremath{Y}})=0$; when ${\ensuremath{X}}$ and ${\ensuremath{Y}}$ are not independent, ${c}^{*}({\ensuremath{X}},{\ensuremath{Y}}) \geq dCorr({\ensuremath{X}},{\ensuremath{Y}})>0$.
Sample Local Correlations {#sec:main2}
=========================
Sample [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} can be easily calculated via properly centering the Euclidean distance matrices, and is shown to converge to the population [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} [@SzekelyRizzoBakirov2007; @SzekelyRizzo2013a; @SzekelyRizzo2014]. Similarly, we show that the sample local correlation can be calculated via the Euclidean distance matrices upon truncating large distances for each sample observation, and the sample version converges to the respective population local correlation.
Definition {#sec:defi}
----------
Given pairs of observations $(x_{i},y_{i}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}^{q}$ for $i=1,\ldots,n$, denote $\mathcal{X}_{n}=[x_{1},\ldots,x_{n}]$ as the data matrix with each column representing one sample observation, and similarly $\mathcal{Y}_{n}$. Let $\tilde{A}$ and $\tilde{B}$ be the $n \times n$ Euclidean distance matrices of $\mathcal{X}_{n}=\{x_{i}\}$ and $\mathcal{Y}_{n}=\{y_{i}\}$ respectively, i.e., $\tilde{A}_{ij}=\|x_{i}-x_{j}\|$. Then we compute two column-centered matrices $A$ and $B$ with the diagonals excluded, i.e., $\tilde{A}$ and $\tilde{B}$ are centered within each column such that $$\label{localCoef2}
A_{ij}=
\begin{cases}
\tilde{A}_{ij}-\frac{1}{n-1}\sum_{s=1}^{n} \tilde{A}_{sj}, & \text{if $i \neq j$}, \\
0, & \text{if $i=j$};
\end{cases} \qquad \qquad
B_{ij}=
\begin{cases}
\tilde{B}_{ij}-\frac{1}{n-1}\sum_{s=1}^{n} \tilde{B}_{sj}, & \text{if $i \neq j$}, \\
0, & \text{if $i=j$};
\end{cases}$$
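For concreteness, the column centering of Equation \[localCoef2\] can be sketched as follows (a minimal sketch with names of our choosing; note that the diagonal of a Euclidean distance matrix is zero, so summing a column over all $n$ entries already excludes the self-distance):

```python
import numpy as np

def column_center(dist):
    """Column-center a distance matrix per Equation (localCoef2): subtract
    1/(n-1) times the column sum from every entry, then zero the diagonal.
    Since dist[j, j] = 0, the column sum equals the sum over the other n-1 entries."""
    n = dist.shape[0]
    A = dist - dist.sum(axis=0) / (n - 1)
    np.fill_diagonal(A, 0.0)
    return A
```

A convenient sanity check of this centering is that every column of the result sums to zero.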
Next we define $\{R^{A}_{ij}\}$ as the “rank” of $x_i$ relative to $x_j$, that is, $R^{A}_{ij}=k$ if $x_i$ is the $k^{th}$ closest point (or “neighbor”) to $x_j$, as determined by ranking the set $\{\tilde{A}_{1j},\tilde{A}_{2j},\ldots,\tilde{A}_{nj}\}$ by ascending order. Similarly define $R^{B}_{ij}$ for the $y$’s. As we assumed $({\ensuremath{X}},{\ensuremath{Y}})$ are continuous, with probability $1$ there is no repeating observation and the ranks always take value in $\{1,\ldots,n\}$. In practice ties may occur, and we recommend either using minimal rank to keep the ties or jittering to break the ties, which is discussed at the end of this section.
For any $(k,l) \in [n]^2=\{1,\ldots,n\} \times \{1,\ldots,n\}$, we define the rank truncated matrices $A^{k}$ and $B^{l}$ as $$\begin{aligned}
A_{ij}^{k} &=A_{ij} {\boldsymbol{I}}(R^{A}_{ij} \leq k), \\
B_{ij}^{l} &=B_{ij} {\boldsymbol{I}}(R^{B}_{ij} \leq l).\end{aligned}$$ Let $\circ$ denote the entry-wise product, ${\hat{E}}(\cdot)=\frac{1}{n(n-1)}\sum_{i \neq j}^{n} (\cdot)$ denote the diagonal-excluded sample mean of a square matrix, then the sample local covariance, variance, and correlation are defined as: $$\begin{aligned}
dCov^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n}) &= {\hat{E}}(A^{k} \circ B^{l'})- {\hat{E}}(A^{k}){\hat{E}}(B^{l}),\\
dVar^{k}(\mathcal{X}_{n}) &={\hat{E}}(A^{k} \circ A^{k'})- {\hat{E}}^2(A^{k}), \\
dVar^{l}(\mathcal{Y}_{n}) &={\hat{E}}(B^{l} \circ B^{l'})- {\hat{E}}^2(B^{l}), \\
dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n}) &=dCov^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n}) / \sqrt{dVar^{k}(\mathcal{X}_{n}) \cdot dVar^{l}(\mathcal{Y}_{n})}.\end{aligned}$$ If either local variance is smaller than a preset $\epsilon > 0$ (e.g., the smallest positive local variance among all), then we set the corresponding $dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n})=0$ instead. Note that once the rank is known, sample local correlations can be iteratively computed in $\mathcal{O}(n^2)$ rather than a naive implementation of $\mathcal{O}(n^3)$. A detailed running time comparison is presented in Section \[sec:exp\].
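A direct, un-optimized NumPy sketch of the sample local correlation follows ($\mathcal{O}(n^2 \log n)$ due to sorting, rather than the iterative $\mathcal{O}(n^2)$ scheme mentioned above). We read the prime in $A^{k'}$ and $B^{l'}$ as the matrix transpose, which matches the column-wise centering; all helper names are ours:

```python
import numpy as np

def pairwise_dist(z):
    z = np.asarray(z, dtype=float).reshape(len(z), -1)
    return np.sqrt(((z[:, None, :] - z[None, :, :]) ** 2).sum(axis=-1))

def column_center(dist):
    # Equation (localCoef2): column-center and zero the diagonal
    n = dist.shape[0]
    A = dist - dist.sum(axis=0) / (n - 1)
    np.fill_diagonal(A, 0.0)
    return A

def column_ranks(dist):
    """R[i, j] = k if x_i is the k-th closest point to x_j. Each point is its
    own nearest neighbor (dist[j, j] = 0); ties break by observation order."""
    n = dist.shape[0]
    order = dist.argsort(axis=0, kind="stable")
    R = np.empty((n, n), dtype=int)
    for j in range(n):
        R[order[:, j], j] = np.arange(1, n + 1)
    return R

def local_dcorr(x, y, k, l, eps=1e-12):
    """Sample local correlation dCorr^{k,l} via rank-truncated centered distances."""
    dA, dB = pairwise_dist(x), pairwise_dist(y)
    A = column_center(dA) * (column_ranks(dA) <= k)   # A^k
    B = column_center(dB) * (column_ranks(dB) <= l)   # B^l
    off = ~np.eye(len(A), dtype=bool)                 # diagonal-excluded mean
    E = lambda M: M[off].mean()
    dcov = E(A * B.T) - E(A) * E(B)
    dvar = (E(A * A.T) - E(A) ** 2) * (E(B * B.T) - E(B) ** 2)
    return 0.0 if dvar < eps else dcov / np.sqrt(dvar)
```

At the maximal scale $(k,l)=(n,n)$ the indicator matrices are all ones, and the statistic reduces to the column-centered variant of sample distance correlation, so `local_dcorr(x, x, n, n)` equals $1$ for any non-constant sample.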
In case of ties, minimal rank offers a consecutive indexing of sample local correlations, e.g., if ${\ensuremath{Y}}$ only takes two values, $R^{B}_{ij}$ takes value in $\{1,2\}$ under minimal rank, but maximal rank yields $\{\frac{n}{2},n\}$. The sample local correlations are not affected by the tie scheme, but minimal rank is more convenient to work with for implementation purposes. Alternatively, one can break ties deterministically or randomly, e.g., apply jittering to break all ties. For example, in the Bernoulli relationship of Figure \[f:dependencies\], there are only three points for computing sample local correlations and the Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} equals $0.9$. If white noise of variance $0.01$ were added to the data, we break all ties and obtain a much larger number of sample local correlations. The resulting Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is $0.8$, which is slightly smaller but still much larger than $0$ and implies a strong dependency.
Whether the random variable is continuous or discrete, and whether the ties in sample data are broken or not, does not affect the theoretical results except in certain theorem statements. For example, in Theorem \[thm4\], the convergence still holds for discrete random variables, but the index pair $(k,l)$ does not necessarily correspond to the population version at $(\rho_{k},\rho_{l})=(\frac{k-1}{n-1}, \frac{l-1}{n-1})$, e.g., when ${\ensuremath{X}}$ is Bernoulli with probability $0.8$ and minimal rank is used, $k=1$ corresponds to $\rho_{k}=0.8$ instead of $\rho_{k}=\frac{k-1}{n-1}$. Nevertheless, Theorem \[thm4\] and all results in the paper hold regardless of continuous or discrete random variables, but the presentation is more elegant for the continuous case.
Convergence Property
--------------------
The sample local covariance, variance, correlation are designed to converge to the respective population versions. Moreover, the expectation of sample local covariance equals the population counterpart up to a difference of $\mathcal{O}(\frac{1}{n})$, and the variance diminishes at the rate of $\mathcal{O}(\frac{1}{n})$.
\[thm4\] Suppose each column of $\mathcal{X}_{n}$ and $\mathcal{Y}_{n}$ are generated *iid* from $({\ensuremath{X}},{\ensuremath{Y}}) \sim F_{{\ensuremath{X}}{\ensuremath{Y}}}$. The sample local covariance satisfies $$\begin{aligned}
E(dCov^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n})) &= dCov^{\rho_{k},\rho_{l}}({\ensuremath{X}},{\ensuremath{Y}}) +\mathcal{O}(1/n) \\
Var(dCov^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n})) &= \mathcal{O}(1/n)\\
dCov^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n}) &\stackrel{n \rightarrow \infty}{\rightarrow} dCov^{\rho_{k},\rho_{l}}({\ensuremath{X}},{\ensuremath{Y}}),\end{aligned}$$ where $\rho_{k}=\frac{k-1}{n-1}$ and $\rho_{l}=\frac{l-1}{n-1}$. In particular, the convergence is uniform and also holds for the local correlation, i.e., for any $\epsilon$ there exists $n_{\epsilon}$ such that for all $n > n_{\epsilon}$, $$\begin{aligned}
|dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n}) -dCorr^{\rho_{k},\rho_{l}}({\ensuremath{X}},{\ensuremath{Y}})|< \epsilon\end{aligned}$$ for any pair of $(\rho_{k},\rho_{l}) \in \mathcal{S}_{\epsilon}$.
The convergence property ensures that Theorem \[thm2\] holds asymptotically for the sample version.
\[thm5\] For any $(k,l)$, $dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n}) \rightarrow 0$ when ${\ensuremath{X}}$ and ${\ensuremath{Y}}$ are independent. In particular, $dCorr^{n,n}(\mathcal{X}_{n},\mathcal{Y}_{n}) \rightarrow dCorr({\ensuremath{X}},{\ensuremath{Y}})$.
Moreover, one can show that $dCorr^{n,n}(\mathcal{X}_{n},\mathcal{Y}_{n}) \approx dCorr(\mathcal{X}_{n},\mathcal{Y}_{n})$ for the unbiased sample distance correlation in [@SzekelyRizzo2014] up to a small difference of $\mathcal{O}(\frac{1}{n})$, which can be verified by comparing Equation \[localCoef2\] to Equation 3.1 in [@SzekelyRizzo2014].
Centering and Ranking
---------------------
To combine distance testing with the locality principle, other than the procedure proposed in Equation \[eq:dcov2\], there are a number of alternative options to center and rank the distance matrices. For example, letting $$\begin{aligned}
d^{\rho_{k}}_{{\ensuremath{X}}} &=(\| {\ensuremath{X}}-{\ensuremath{X}}' \| - \|{\ensuremath{X}}-{\ensuremath{X}}''\|-\|{\ensuremath{X}}'-{\ensuremath{X}}''\|+\|{\ensuremath{X}}''-{\ensuremath{X}}'''\|) {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}}, \\
d^{\rho_{l}}_{{\ensuremath{Y}}'} &=(\| {\ensuremath{Y}}'-{\ensuremath{Y}}\| - \|{\ensuremath{Y}}'-{\ensuremath{Y}}''\|-\|{\ensuremath{Y}}-{\ensuremath{Y}}''\|+\|{\ensuremath{Y}}''-{\ensuremath{Y}}'''\|) {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}}\end{aligned}$$ still guarantees the resulting local correlation at maximal scale equals the distance correlation; and letting $$\begin{aligned}
d^{\rho_{k}}_{{\ensuremath{X}}} &=\| {\ensuremath{X}}-{\ensuremath{X}}' \| {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}}, \\
d^{\rho_{l}}_{{\ensuremath{Y}}'} &=\| {\ensuremath{Y}}'-{\ensuremath{Y}}\| {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}},\end{aligned}$$ makes the resulting local correlation at maximal scale equal the [[[<span style="font-variant:small-caps;">Mantel</span>]{}]{}]{} coefficient, the earliest distance-based correlation coefficient.
Nevertheless, the centering and ranking strategy proposed in Equation \[eq:dcov2\] is more faithful to the k-nearest neighbor graph: the indicator ${\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}}$ equals $1$ if and only if $\int_{B({\ensuremath{X}},\|{\ensuremath{X}}'-{\ensuremath{X}}\|)} dF_{\ensuremath{X}}(u) \leq \rho_k$, which happens with probability $\rho_{k}$. Viewed another way, when conditioned on $({\ensuremath{X}},{\ensuremath{X}}')=(x,x')$, the indicator equals $1$ if and only if $Prob(\|{\ensuremath{X}}''-x\| \leq \|x'-x\|) \leq \rho_{k}$, thus matching the column ranking scheme in Equation \[localCoef2\]. Indeed, the locality principle used in [@TenenbaumSilvaLangford2000; @SaulRoweis2000; @BelkinNiyogi2003] considers the k-nearest neighbors of each sample point in local computation, an essential step to yield better nonlinear embeddings. On the centering side, the [[[<span style="font-variant:small-caps;">Mantel</span>]{}]{}]{} test appears to be an attractive option due to its simplicity in centering. [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}, [[[<span style="font-variant:small-caps;">Hhg</span>]{}]{}]{}, and [[[<span style="font-variant:small-caps;">Hsic</span>]{}]{}]{} all enjoy theoretical consistency, while the [[[<span style="font-variant:small-caps;">Mantel</span>]{}]{}]{} coefficient does not, despite being merely a different centering of [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}. An investigation of the population form of [[[<span style="font-variant:small-caps;">Mantel</span>]{}]{}]{} yields some additional insights:
Given $\mathcal{X}_{n}$ and $\mathcal{Y}_{n}$, the [[[<span style="font-variant:small-caps;">Mantel</span>]{}]{}]{} coefficient on sample data is computed as $$\begin{aligned}
M(\mathcal{X}_{n},\mathcal{Y}_{n}) &= {\hat{E}}(\tilde{A} \circ \tilde{B})-{\hat{E}}(\tilde{A}){\hat{E}}(\tilde{B}) \\
Mantel(\mathcal{X}_{n},\mathcal{Y}_{n}) &= \frac{M(\mathcal{X}_{n},\mathcal{Y}_{n})}{\sqrt{M(\mathcal{X}_{n},\mathcal{X}_{n}) M(\mathcal{Y}_{n},\mathcal{Y}_{n})}},\end{aligned}$$ where $\tilde{A}_{ij}$ and $\tilde{B}_{ij}$ are the pairwise Euclidean distance, and ${\hat{E}}(\cdot)=\frac{1}{n(n-1)}\sum_{i \neq j}^{n} (\cdot)$ is the diagonal-excluded sample mean of a square matrix.
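A minimal sketch of this sample Mantel coefficient (helper names ours): since no centering is applied to the distance matrices, it is simply the Pearson correlation between the two sets of off-diagonal pairwise distances.

```python
import numpy as np

def pairwise_dist(z):
    z = np.asarray(z, dtype=float).reshape(len(z), -1)
    return np.sqrt(((z[:, None, :] - z[None, :, :]) ** 2).sum(axis=-1))

def mantel(x, y):
    """Sample Mantel coefficient: covariance/correlation of the raw
    (uncentered) distance matrices, using diagonal-excluded means."""
    dA, dB = pairwise_dist(x), pairwise_dist(y)
    off = ~np.eye(len(dA), dtype=bool)
    E = lambda M: M[off].mean()
    M = lambda U, V: E(U * V) - E(U) * E(V)
    mxy, mxx, myy = M(dA, dB), M(dA, dA), M(dB, dB)
    return 0.0 if mxx * myy <= 0 else mxy / np.sqrt(mxx * myy)
```

Unlike [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}, this statistic can be meaningfully negative, consistent with the two-sided interpretation discussed next.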
\[cor1\] Suppose each column of $\mathcal{X}_{n}$ and $\mathcal{Y}_{n}$ are *iid* as $F_{{\ensuremath{X}}{\ensuremath{Y}}}$, and $({\ensuremath{X}},{\ensuremath{Y}}), ({\ensuremath{X}}',{\ensuremath{Y}}')$ are also *iid* as $F_{{\ensuremath{X}}{\ensuremath{Y}}}$. Then $$\begin{aligned}
Mantel(\mathcal{X}_{n},\mathcal{Y}_{n}) &\rightarrow Mantel({\ensuremath{X}},{\ensuremath{Y}}) = \frac{M({\ensuremath{X}},{\ensuremath{Y}})}{\sqrt{M({\ensuremath{X}},{\ensuremath{X}}) M({\ensuremath{Y}},{\ensuremath{Y}})}},\end{aligned}$$ where $$\begin{aligned}
M({\ensuremath{X}},{\ensuremath{Y}}) &= \int_{\mathbb{R}^{p} \times \mathbb{R}^{q}} \{|E(g_{{\ensuremath{X}}{\ensuremath{Y}}}(t,s))|^2-|E(g_{{\ensuremath{X}}}(t))E(g_{{\ensuremath{Y}}}(s))|^{2}\} w(t, s)dtds \\
&= E(\| {\ensuremath{X}}-{\ensuremath{X}}' \| \| {\ensuremath{Y}}-{\ensuremath{Y}}' \|) - E(\|{\ensuremath{X}}-{\ensuremath{X}}'\|)E(\|{\ensuremath{Y}}-{\ensuremath{Y}}'\|)) \\
&= Cov(\| {\ensuremath{X}}-{\ensuremath{X}}' \|, \| {\ensuremath{Y}}-{\ensuremath{Y}}' \|).\end{aligned}$$
Corollary \[cor1\] suggests that [[[<span style="font-variant:small-caps;">Mantel</span>]{}]{}]{} is actually a two-sided test based on the absolute difference of characteristic functions: under certain dependency structure, the [[[<span style="font-variant:small-caps;">Mantel</span>]{}]{}]{} coefficient can be negative and still imply dependency (i.e., $|E(g_{{\ensuremath{X}}{\ensuremath{Y}}}(t,s))| < |E(g_{{\ensuremath{X}}}(t))E(g_{{\ensuremath{Y}}}(s))|$); whereas population [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} and [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} are always no smaller than $0$, and any negativity of the sample version does not imply dependency. Therefore, [[[<span style="font-variant:small-caps;">Mantel</span>]{}]{}]{} is only appropriate as a two-sided test, which is evaluated in Section \[sec:exp\]. Another insight is that [[[<span style="font-variant:small-caps;">Mantel</span>]{}]{}]{}, unlike [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}, is not universally consistent: due to the integral $w$, one can construct a joint distribution such that the population [[[<span style="font-variant:small-caps;">Mantel</span>]{}]{}]{} equals $0$ under dependence (see Remark 3.13 in [@Lyons2013] for an example of dependent random variables with uncorrelated distances). However, empirically, simple centering is still effective in a number of common dependencies (like two parabolas and diamond in Figure \[f:1DAll\]).
Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} and Estimated Optimal Scale {#sec:main3}
================================================================================================
A naive sample version of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} can be defined as the maximum of all sample local correlations $$\begin{aligned}
\max_{(k,l) \in [n]^2}\{dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n})\}.\end{aligned}$$ Although the convergence to population [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} can be guaranteed, the sample maximum is a biased estimator of the population [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} in Equation \[eq:pmgc\]. For example, under independence, population [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} equals $0$, while the maximum sample local correlation has expectation larger than $0$, which may negate the advantage of searching locally and hurt the testing power.
This motivates us to compute Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} as a smoothed maximum within the largest connected region of thresholded local correlations. The purpose is to mitigate the bias of a direct maximum, while maintaining its advantage over [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} in the test statistic. The idea is that in case of dependence, the local correlations on the grid near the optimal scale shall all be large, while in case of independence, a few local correlations may happen to be large, but most nearby local correlations shall still be small. The idea can be similarly adapted whenever there are multiple correlated test statistics or multiple models available, for which taking a direct maximum may yield too much bias [@mgc3]. From another perspective, Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is like taking a regularized maximum.
Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}
--------------------------------------------------------------------
The procedure is as follows:
Input:
: A pair of datasets $(\mathcal{X}_{n}, \mathcal{Y}_{n})$.
Compute the Local Correlation Map:
: Compute all local correlations:\
$\{dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n}), (k,l) \in [n]^2\}$.
Thresholding:
: Pick a threshold $\tau_n \geq 0$, denote $LC(\cdot)$ as the operation of taking the largest connected component, and compute the largest region $R$ of thresholded local correlations: $$\begin{aligned}
\label{eq:region}
&R=LC(\{(k,l) \mbox{ such that } dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n})>\max\{\tau_n, dCorr^{n,n}(\mathcal{X}_{n},\mathcal{Y}_{n})\} \}). \end{aligned}$$ Within the region $R$, set $$\begin{aligned}
\label{eq:smgc1}
{c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n})&=\max_{ (k,l) \in R} \{dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n})\}\\
(k_{n},l_{n})^{*}&=\arg\max_{ (k,l) \in R} \{dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n})\}\end{aligned}$$ as the Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} and the estimated optimal scale. If the number of elements in $R$ is less than $2n$, or the above thresholded maximum is no more than $dCorr^{n,n}(\mathcal{X}_{n},\mathcal{Y}_{n})$, we instead set ${c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n})=dCorr^{n,n}(\mathcal{X}_{n},\mathcal{Y}_{n})$ and $(k_{n},l_{n})^{*}=(n,n)$.
Output:
: Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} ${c}^{*}(\mathcal{X}_{n}, \mathcal{Y}_{n})$ and the estimated optimal scale $(k_{n},l_{n})^{*}$.
If there are multiple largest regions, e.g., $R_{1}$ and $R_{2}$ each containing more than $2n$ elements and coinciding in size, then it suffices to let $R = R_{1} \displaystyle \cup R_{2}$ and locate the [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} statistic within the union. The selection of at least $2n$ elements for $R$ is an empirical choice, which balances the bias-variance trade-off well in practice. The parameter can be any positive integer without affecting the validity and consistency of the test. But if the parameter is too large, [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} tends to be more conservative, is unable to detect signals in strongly nonlinear relationships (e.g., trigonometric functions), and performs closer and closer to [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}; if the parameter is set to a very small fixed number, the bias is inflated and [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} tends to perform similarly to directly maximizing all local correlations.
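The procedure above can be sketched in plain Python, operating on a precomputed $n \times n$ map of sample local correlations (entry $(k,l)$ holding $dCorr^{k,l}$, so the last entry is $dCorr^{n,n}$). This is a sketch under our own assumptions, not a reference implementation: the connected component is taken with 4-connectivity via a BFS flood fill, and all names are ours.

```python
import numpy as np
from collections import deque

def largest_component(mask):
    """Largest 4-connected component of a boolean grid, via BFS flood fill."""
    n, m = mask.shape
    seen = np.zeros_like(mask)
    best = np.zeros_like(mask)
    for i in range(n):
        for j in range(m):
            if mask[i, j] and not seen[i, j]:
                comp = np.zeros_like(mask)
                queue = deque([(i, j)])
                seen[i, j] = True
                while queue:
                    a, b = queue.popleft()
                    comp[a, b] = True
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = a + da, b + db
                        if 0 <= u < n and 0 <= v < m and mask[u, v] and not seen[u, v]:
                            seen[u, v] = True
                            queue.append((u, v))
                if comp.sum() > best.sum():
                    best = comp
    return best

def sample_mgc(corr_map, tau=0.0):
    """Smoothed maximum: corr_map[k-1, l-1] holds dCorr^{k,l}."""
    n = corr_map.shape[0]
    global_corr = corr_map[-1, -1]                    # dCorr^{n,n}
    # threshold at max(tau, dCorr^{n,n}) and keep the largest connected region R
    region = largest_component(corr_map > max(tau, global_corr))
    if region.sum() < 2 * n:                          # R too small: fall back
        return global_corr, (n, n)
    vals = np.where(region, corr_map, -np.inf)
    k, l = np.unravel_index(np.argmax(vals), vals.shape)
    if vals[k, l] <= global_corr:
        return global_corr, (n, n)
    return vals[k, l], (k + 1, l + 1)                 # 1-indexed optimal scale
```

For instance, a map that is $0.5$ on a $5 \times 5$ block (25 cells, exceeding $2n = 20$ for $n = 10$) and has $dCorr^{n,n} = 0.1$ yields Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} $= 0.5$ at a local optimal scale, whereas a map with no thresholded region falls back to $dCorr^{n,n}$.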
Convergence and Consistency
---------------------------
The proposed Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is algorithmically enforced to be no less than the local correlation at the maximal scale, and also no more than the maximum local correlation. This also ensures that Theorem \[thm3\] holds for the sample version.
\[thm6\] Regardless of the threshold $\tau_n$, the Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} statistic ${c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n})$ satisfies
(a)
: It always holds that $$\begin{aligned}
\max_{(k,l) \in [n]^2}\{dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n})\} \geq {c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n}) \geq dCorr^{n,n}(\mathcal{X}_{n},\mathcal{Y}_{n}).\end{aligned}$$
(b)
: When ${\ensuremath{X}}$ and ${\ensuremath{Y}}$ are independent, ${c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n}) \rightarrow 0$; when ${\ensuremath{X}}$ and ${\ensuremath{Y}}$ are not independent, ${c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n}) \rightarrow$ a positive constant.
The next theorem states that if the threshold $\tau_n$ converges to $0$, then whenever population [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is larger than population [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}, Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is also larger than sample [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} asymptotically; otherwise, if the threshold does not converge to $0$, Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} may equal sample [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} despite the first-moment advantage of the population version. Moreover, Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} indeed converges to population [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} when the optimal scale is in the largest thresholded region $R$. The empirical advantage of Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is illustrated in Figure \[f:dependencies\].
\[thm7\] Suppose each column of $\mathcal{X}_{n}$ and $\mathcal{Y}_{n}$ are *iid* as continuous $({\ensuremath{X}}, {\ensuremath{Y}}) \sim F_{{\ensuremath{X}}{\ensuremath{Y}}}$, and the threshold choice $\tau_n \rightarrow 0$ as $n \rightarrow \infty$.
(a)
: Assume that ${c}^{*}({\ensuremath{X}}, {\ensuremath{Y}}) > Dcorr({\ensuremath{X}}, {\ensuremath{Y}})$ under the joint distribution. Then ${c}^{*}(\mathcal{X}_{n}, \mathcal{Y}_{n}) > Dcorr(\mathcal{X}_{n}, \mathcal{Y}_{n})$ for $n$ sufficiently large.
(b)
:   Assume there exists an element within the largest connected area of $\{(\rho_k,\rho_l) \in \mathcal{S}_{\epsilon}$ with $dCorr^{\rho_k,\rho_l}({\ensuremath{X}},{\ensuremath{Y}})> dCorr({\ensuremath{X}},{\ensuremath{Y}}) \}$, such that the local correlation of that element equals ${c}^{*}({\ensuremath{X}},{\ensuremath{Y}})$. Then ${c}^{*}(\mathcal{X}_{n}, \mathcal{Y}_{n}) \rightarrow {c}^{*}({\ensuremath{X}},{\ensuremath{Y}})$.
Alternatively, Theorem \[thm7\](b) can be stated that the Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} always converges to the maximal population local correlation within the largest connected area of thresholded local correlations. Therefore, Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} converges either to [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} (when the area is empty) or something larger, thus improving over [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} statistic in first moment.
Choice of Threshold
-------------------
The choice of threshold $\tau_n$ is imperative for Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} to enjoy good finite-sample performance, especially at small sample size. According to Theorem \[thm7\], the threshold shall converge to $0$ for Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} to prevail over sample [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}.
A model-free threshold $\tau_n$ was previously used in [@ShenEtAl2016]: for the following set $$\begin{aligned}
\{dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n}) \mbox{ s.t. } dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n})<0\},\end{aligned}$$ let $\sigma^2$ be the sum of all its elements squared, and set $\tau_n=5\sigma$ as the threshold; if there is no negative local correlation and the set is empty, use $\tau_n=0.05$. Although the previous threshold is a data-adaptive choice that works well empirically and does not affect the consistency of Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} in Theorem \[thm8\], it does not converge to $0$. The following finite-sample theorem from [@SzekelyRizzo2013a] motivates an improved threshold choice here:
Under independence of $({\ensuremath{X}},{\ensuremath{Y}})$, assume the dimensions of ${\ensuremath{X}}$ are exchangeable with finite variance, and so are the dimensions of ${\ensuremath{Y}}$. Then for any $n \geq 4$ and $v=\frac{n(n-3)}{2}$, as $p,q$ increase, the limiting distribution of $(dCorr^{n,n}(\mathcal{X}_{n}, \mathcal{Y}_{n})+1)/2$ equals the symmetric Beta distribution with shape parameter $\frac{v-1}{2}$.
The above theorem leads to the new threshold choice:
\[cor2\] Denote $v=\frac{n(n-3)}{2}$, $z \sim Beta(\frac{v-1}{2},\frac{v-1}{2})$, and $F^{-1}_{z}(\cdot)$ as the inverse cumulative distribution function. The threshold choice $$\begin{aligned}
\tau_n= 2F^{-1}_{z} \Big(1-\frac{0.02}{n}\Big)-1 \end{aligned}$$ converges to $0$ as $n \rightarrow \infty$.
The limiting null distribution of [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} is still a good approximation even when $p,q$ are not large, and thus provides a reliable bound for eliminating local correlations that are larger than [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} by chance or by noise. The intuition is that Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is mostly useful when it is much larger than [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} in magnitude, which is often the case in non-monotone relationships as shown in Section \[sec:exp\] Figure \[f:dependencies\]. Alternatively, directly setting $\tau_{n}=0$ also guarantees the theoretical properties and works equally well when the sample size $n$ is moderately large.
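As a concrete sketch, the threshold of Corollary \[cor2\] can be computed directly from the Beta quantile function; the use of SciPy's `beta.ppf` for $F^{-1}_{z}$ is an implementation assumption, not part of the corollary itself.

```python
from scipy.stats import beta

def mgc_threshold(n):
    """Threshold tau_n = 2 * F_z^{-1}(1 - 0.02/n) - 1, where
    z ~ Beta(a, a) with a = (v - 1)/2 and v = n(n - 3)/2."""
    v = n * (n - 3) / 2.0
    a = (v - 1.0) / 2.0
    return 2.0 * beta.ppf(1.0 - 0.02 / n, a, a) - 1.0
```

The threshold is strictly positive for every finite $n$ and shrinks monotonically toward $0$ as $n$ grows, as required by Theorem \[thm7\].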
Permutation Test {#sec:permutation}
----------------
To test independence on a pair of sample data $(\mathcal{X}_{n},\mathcal{Y}_{n})$, the random permutation test has been the popular choice [@GoodPermutationBook] for almost all methods introduced, as the null distribution of the test statistic can be easily approximated by randomly permuting one data set. We discuss the computation procedure, prove the testing consistency of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}, and analyze the running time.
To compute the p-value of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} from the permutation test, first compute the Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} statistic ${c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n})$ on the observed data pair. Then the [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} statistic is repeatedly computed on the permuted data pair, e.g., $\mathcal{Y}_{n}=[y_{1},\ldots,y_{n}]$ is permuted into $\mathcal{Y}_{n}^{\pi}=[y_{\pi(1)},\ldots,y_{\pi(n)}]$ for a random permutation $\pi$ of size $n$, yielding ${c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n}^{\pi})$. The permutation procedure is repeated $r$ times to estimate the probability $Prob({c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n}^{\pi}) > {c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n}))$, and the estimated probability is taken as the p-value of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}. The independence hypothesis is rejected if the p-value is smaller than a pre-set critical level, say $0.05$ or $0.01$. The following theorem states that [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} via the permutation test is consistent and valid.
\[thm8\] Suppose each column of $\mathcal{X}_{n}$ and $\mathcal{Y}_{n}$ are generated *iid* from $F_{{\ensuremath{X}}{\ensuremath{Y}}}$. At any type $1$ error level $\alpha>0$, Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is a valid test statistic that is consistent against all possible alternatives under the permutation test.
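The permutation procedure above can be sketched in a few lines. As the full Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} statistic is involved, a simple biased sample distance correlation for univariate data stands in for ${c}^{*}$ below (an illustrative substitution); the p-value machinery is identical for either statistic.

```python
import numpy as np

def dcorr(x, y):
    """Biased sample distance correlation for univariate x, y
    (an illustrative stand-in for the Sample MGC statistic)."""
    def centered(d):
        return d - d.mean(0, keepdims=True) - d.mean(1, keepdims=True) + d.mean()
    A = centered(np.abs(x[:, None] - x[None, :]))
    B = centered(np.abs(y[:, None] - y[None, :]))
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return (A * B).mean() / denom if denom > 0 else 0.0

def permutation_pvalue(x, y, stat=dcorr, r=300, seed=0):
    """Estimate Prob(stat(x, permuted y) >= stat(x, y)) over r random permutations."""
    rng = np.random.default_rng(seed)
    observed = stat(x, y)
    exceed = sum(stat(x, rng.permutation(y)) >= observed for _ in range(r))
    return (1 + exceed) / (1 + r)  # add-one correction keeps the p-value positive
```

On strongly dependent data the returned p-value is small, and the independence hypothesis is rejected once it falls below the pre-set critical level.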
Miscellaneous Properties {#sec:misc}
------------------------
In this subsection, we first show a useful lemma expressing the sample local covariance from Section \[sec:defi\] in terms of the matrix trace and eigenvalues, then list a number of fundamental and desirable properties for the local variance, local correlation, and [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}, akin to those of Pearson’s correlation and distance correlation as shown in [@SzekelyRizzoBakirov2007; @SzekelyRizzo2009].
\[lem1\] Denote $tr(\cdot)$ as the matrix trace, $\lambda_{i} [\cdot]$ as the $i$th eigenvalue of a matrix, and $J$ as the matrix of ones of size $n$. Then the sample covariance equals $$\begin{aligned}
dCov^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n}) &= tr (A^{k}B^{l})- tr (A^{k}J)tr(B^{l}J)\\
&=tr [ (A^{k}- tr (A^{k}J)J) (B^{l}-tr(B^{l}J)J)] \\
& = \sum_{i=1}^{n} \lambda_{i} [ (A^{k}- tr (A^{k}J)J) (B^{l}-tr(B^{l}J)J)].\end{aligned}$$
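The displayed identities can be checked numerically for arbitrary square matrices. One caveat on normalization, which the lemma statement leaves implicit: for the three expressions to coincide exactly, $J$ is taken below as the all-ones matrix scaled by $1/n$, so that $J$ is idempotent with unit trace; this scaling is an assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 7
A = rng.standard_normal((n, n))  # stand-in for the centered distance matrix A^k
B = rng.standard_normal((n, n))  # stand-in for B^l
J = np.ones((n, n)) / n          # normalized ones matrix: J @ J == J, trace(J) == 1

a, b = np.trace(A @ J), np.trace(B @ J)
form1 = np.trace(A @ B) - a * b                  # first displayed expression
M = (A - a * J) @ (B - b * J)
form2 = np.trace(M)                              # second displayed expression
form3 = np.linalg.eigvals(M).sum().real          # eigenvalue sum equals the trace
```

All three quantities agree up to floating-point error, since $tr(J)=1$ and $J^2=J$ make the cross terms cancel.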
\[thm:dvar\] For any random variable ${\ensuremath{X}}\sim F_{{\ensuremath{X}}} \in \mathbb{R}^{p}$, and any $\mathcal{X}_{n} \in \mathbb{R}^{p \times n}$ with each column *iid* as $F_{{\ensuremath{X}}}$,
(a)
: Population and sample local variances are always non-negative, i.e., $$\begin{aligned}
dVar^{\rho_{k}}({\ensuremath{X}}) \geq 0\\
dVar^{k}(\mathcal{X}_{n}) \geq 0\end{aligned}$$ at any $\rho_{k} \in [0,1]$ and any $k \in [n]$.
(b)
: $dVar^{\rho_{k}}({\ensuremath{X}}) =0$ if and only if either $\rho_k =0$ or $F_{{\ensuremath{X}}}$ is a degenerate distribution;
$dVar^{k}(\mathcal{X}_{n}) =0$ if and only if either $k=1$ or $F_{{\ensuremath{X}}}$ is a degenerate distribution.
(c)
: For two constants $v \in \mathbb{R}^{p},u \in \mathbb{R}$, and an orthonormal matrix $Q \in \mathbb{R}^{p \times p}$, $$\begin{aligned}
dVar^{\rho_{k}}(v+uQ{\ensuremath{X}}) &=u^2 \cdot dVar^{\rho_{k}}({\ensuremath{X}})\\
dVar^{k}(v^{T} J +u\mathcal{X}_{n} Q) &=u^2 \cdot dVar^{k}(\mathcal{X}_{n} ).\end{aligned}$$
Therefore, the local variances end up having properties similar to the distance variance in [@SzekelyRizzoBakirov2007], except the distance variance definition there takes a square root.
\[thm:dcor\] For any pair of random variable $({\ensuremath{X}},{\ensuremath{Y}}) \sim F_{{\ensuremath{X}}{\ensuremath{Y}}} \in \mathbb{R}^{p} \times \mathbb{R}^{q}$, and any $(\mathcal{X}_{n},\mathcal{Y}_{n}) \in \mathbb{R}^{p \times n} \times \mathbb{R}^{q \times n}$ with each column *iid* as $F_{{\ensuremath{X}}{\ensuremath{Y}}}$,
(a)
: Symmetric and Boundedness: $$\begin{aligned}
dCorr^{\rho_{k},\rho_{l}}({\ensuremath{X}},{\ensuremath{Y}})=dCorr^{\rho_{l},\rho_{k}}({\ensuremath{Y}},{\ensuremath{X}}) \in [-1,1]\\
dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n})=dCorr^{l,k}(\mathcal{Y}_{n},\mathcal{X}_{n}) \in [-1,1]\end{aligned}$$ at any $(\rho_k,\rho_l) \in (0,1]^2$ and any $(k,l) \in [2,\ldots,n]^2$.
(b)
: Assume $F_{\ensuremath{X}}$ is non-degenerate. Then at any $\rho_{k} > 0$, $dCorr^{\rho_{k},\rho_k}({\ensuremath{X}},{\ensuremath{Y}})=1$ if and only if $({\ensuremath{X}}, u {\ensuremath{Y}})$ are dependent via an isometry for some non-zero constant $u \in \mathbb{R}$.
Assume $F_{\ensuremath{X}}$ is non-degenerate. Then at any $k > 1$, $dCorr^{k,k}(\mathcal{X}_{n},\mathcal{Y}_{n})=1$ if and only if $({\ensuremath{X}}, u {\ensuremath{Y}})$ are dependent via an isometry for some non-zero constant $u \in \mathbb{R}$.
(c)
: Both population and Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} are symmetric and bounded: $$\begin{aligned}
{c}^{*}({\ensuremath{X}},{\ensuremath{Y}})={c}^{*}({\ensuremath{Y}},{\ensuremath{X}}) \in [-1,1] \\
{c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n})={c}^{*}(\mathcal{Y}_{n},\mathcal{X}_{n}) \in [-1,1].\end{aligned}$$
(d)
: Assume $F_{\ensuremath{X}}$ is non-degenerate. Then ${c}^{*}({\ensuremath{X}},{\ensuremath{Y}})=1$ if and only if $({\ensuremath{X}}, u {\ensuremath{Y}})$ are dependent via an isometry for some non-zero constant $u \in \mathbb{R}$.
Assume $F_{\ensuremath{X}}$ is non-degenerate. Then ${c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n})=1$ if and only if $({\ensuremath{X}}, u {\ensuremath{Y}})$ are dependent via an isometry for some non-zero constant $u \in \mathbb{R}$.
The proof of Theorem \[thm:dcor\](b)(d) also shows that the local correlations and [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} cannot be $-1$.
Experiments {#sec:exp}
===========
In the experiments, we compare Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} with the [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}, [[[<span style="font-variant:small-caps;">Pearson</span>]{}]{}]{}, [[[<span style="font-variant:small-caps;">Mantel</span>]{}]{}]{}, [[[<span style="font-variant:small-caps;">Hsic</span>]{}]{}]{}, [[[<span style="font-variant:small-caps;">Hhg</span>]{}]{}]{}, and [[[<span style="font-variant:small-caps;">Copula</span>]{}]{}]{} tests on $20$ different simulation settings based on a combination of simulations used in previous works [@SzekelyRizzoBakirov2007; @SimonTibshirani2012; @GorfineHellerHeller2012]. Among the $20$ settings, the first $5$ are monotonic relationships (several of them linear or nearly so), the last is an independent relationship, and the remaining settings consist of common non-monotonic and strongly nonlinear relationships. The exact distributions are shown in the Appendix.
The Sample Statistics {#the-sample-statistics .unnumbered}
---------------------
Figure \[f:dependencies\] shows the sample statistics of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}, [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}, and [[[<span style="font-variant:small-caps;">Pearson</span>]{}]{}]{} for each of the $20$ simulations in a univariate setting. For each simulation, we generate sample data $(\mathcal{X}_{n},\mathcal{Y}_{n})$ at $p=q=1$ and $n=100$ without any noise, then compute the sample statistics. For types $1-5$, the test statistics of both [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} and [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} are remarkably greater than $0$ and almost identical to each other. For the nonlinear relationships (types $6-19$), [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} benefits from searching locally and achieves a larger test statistic than [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}, which can be very small in these nonlinear relationships. For type $20$, the test statistics of both [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} and [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} are almost $0$ as expected. On the other hand, [[[<span style="font-variant:small-caps;">Pearson</span>]{}]{}]{}’s test statistic is large whenever some linear association exists, and almost $0$ otherwise. The comparison of sample statistics indicates that [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} may have inferior finite-sample testing power in nonlinear relationships, but that a strong dependency signal is hidden in a local structure that [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} may recover.
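The gap between the global statistic on linear versus nonlinear data can be reproduced with a minimal biased sample distance correlation; the uniform design and the noiseless linear and quadratic pairs below are simplified stand-ins for the simulation settings of Figure \[f:dependencies\].

```python
import numpy as np

def dcorr(x, y):
    """Biased sample distance correlation for univariate data."""
    def centered(d):
        return d - d.mean(0, keepdims=True) - d.mean(1, keepdims=True) + d.mean()
    A = centered(np.abs(x[:, None] - x[None, :]))
    B = centered(np.abs(y[:, None] - y[None, :]))
    return (A * B).mean() / np.sqrt((A * A).mean() * (B * B).mean())

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
lin = dcorr(x, x)        # noiseless linear pair: the statistic is maximal
quad = dcorr(x, x ** 2)  # noiseless quadratic pair: the global statistic drops
```

Even though the quadratic pair is perfectly dependent, the global statistic is far from its maximum, which is exactly the situation a local search is meant to rescue.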
Finite-Sample Testing Power {#finite-sample-testing-power .unnumbered}
---------------------------
Figure \[f:noise\] shows the finite-sample testing power of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}, [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}, and [[[<span style="font-variant:small-caps;">Pearson</span>]{}]{}]{} for a linear and a quadratic relationship at $n=20$ and $p=q=1$ with white noise (controlled by a constant). The testing power of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is estimated as follows: we first generate dependent sample data $(\mathcal{X}_{n},\mathcal{Y}_{n})$ for $r=10,000$ replicates, compute Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} for each replicate to estimate the alternative distribution of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}. Then we generate independent sample data $(\mathcal{X}_{n},\mathcal{Y}_{n})$ using the same marginal distributions for $r=10,000$ replicates, compute Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} to estimate the null distribution, and estimate the testing power at type $1$ error level $\alpha=0.05$. The testing power of [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} is estimated in the same manner, while the testing power of [[[<span style="font-variant:small-caps;">Pearson</span>]{}]{}]{} is directly computed via the t-test. [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} has the best power in the quadratic relationship, while being almost identical to [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} and [[[<span style="font-variant:small-caps;">Pearson</span>]{}]{}]{} in the linear relationship.
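The power-estimation recipe above can be sketched generically. In this sketch, Pearson's correlation (the absolute value of `numpy.corrcoef`) stands in for the test statistic, the replicate count is reduced from $10{,}000$ to $500$ for speed, and the noise level and sample size are illustrative assumptions; the procedure itself is the same for any statistic.

```python
import numpy as np

def estimated_power(stat, gen_dep, gen_indep, n=20, r=500, alpha=0.05, seed=0):
    """Monte Carlo estimate of testing power at type 1 error level alpha."""
    rng = np.random.default_rng(seed)
    null = np.sort([stat(*gen_indep(rng, n)) for _ in range(r)])
    critical = null[int(np.ceil((1 - alpha) * r)) - 1]  # empirical 1-alpha quantile
    alt = [stat(*gen_dep(rng, n)) for _ in range(r)]
    return float(np.mean([s > critical for s in alt]))

def pearson(x, y):
    return abs(np.corrcoef(x, y)[0, 1])

def linear(rng, n):   # y = x plus white noise
    x = rng.standard_normal(n)
    return x, x + 0.5 * rng.standard_normal(n)

def indep(rng, n):    # independent pair with the same marginals
    return rng.standard_normal(n), rng.standard_normal(n)

power = estimated_power(pearson, linear, indep)
```

When the dependent and independent generators coincide, the estimate falls back to roughly the type 1 error level $\alpha$, which is a useful sanity check on the procedure.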
The same phenomenon holds throughout all the simulations we considered, i.e., [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} achieves almost the same power as [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} in monotonic relationships, while being able to improve the power in non-monotone and strongly nonlinear relationships. The testing power of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} versus all other methods is shown in Figure \[f:1DAll\] for the univariate settings, and we plot the power versus the sample size from $5$ to $100$ for each simulation. Note that the noise level is tuned for each dependency for illustration purposes.
Figure \[f:nDAll\] compares the testing performance for the same $20$ simulations with a fixed sample size $n=100$ and increasing dimensionality. The relative powers in the univariate and multivariate settings are then summarized in Figure \[f:Summary2\]. [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is overall the most powerful method, followed by [[[<span style="font-variant:small-caps;">Hhg</span>]{}]{}]{} and [[[<span style="font-variant:small-caps;">Hsic</span>]{}]{}]{}. Since non-monotone relationships are prevalent among the $20$ settings, it is not a surprise that [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} is overall worse than [[[<span style="font-variant:small-caps;">Hhg</span>]{}]{}]{} and [[[<span style="font-variant:small-caps;">Hsic</span>]{}]{}]{}, both of which also excel at nonlinear relationships.
Note that the same $20$ simulations were also used in [@ShenEtAl2016] for evaluation purposes. The main difference is that the Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} algorithm is now based on the improved threshold with a theoretical guarantee. Compared to the previous algorithm, the new threshold slightly improves the testing power in monotonic relationships (the first $5$ simulations).
Running Time {#sec:time .unnumbered}
------------
Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} can be computed and tested with the same running-time complexity as distance correlation. Assume $p$ is the maximum feature dimension of the two datasets: the distance computation and centering take ${\mathcal{O}}(n^2 p)$, the ranking process takes ${\mathcal{O}}(n^2 \log n)$, all local covariances and correlations can be incrementally computed in $O(n^2)$ (the pseudo-code is shown in [@ShenEtAl2016]), and the thresholding step of Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} takes $O(n^2)$ as well. Overall, Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} can be computed in ${\mathcal{O}}(n^2 \max\{\log n,p\})$. In comparison, the [[[<span style="font-variant:small-caps;">Hhg</span>]{}]{}]{} statistic requires the same complexity as [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}, while distance correlation saves on the $\log n$ term.
As the only part of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} with the additional $\log n$ term is the column-wise ranking process, a multi-core architecture with $T$ threads can reduce the running time to ${\mathcal{O}}(n^2 \max\{\log n,p\}/T)$. By taking $T=\log(n)$ ($T$ is no more than $30$ at $1$ billion samples), [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} effectively runs in ${\mathcal{O}}(n^2 p)$ and is of the same complexity as [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}. The permutation test multiplies every term except the distance computation by $r$, so overall the [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} testing procedure requires ${\mathcal{O}}(n^2 \max\{r,p\})$, which is the same as [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}, [[[<span style="font-variant:small-caps;">Hhg</span>]{}]{}]{}, and [[[<span style="font-variant:small-caps;">Hsic</span>]{}]{}]{}. Figure \[f:time\] shows that [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} has approximately the same complexity as [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{}, and is slower only by a constant in actual running time.
Conclusion {#sec:dis}
==========
In this paper, we formalize the population version of local correlation and [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}, connect them to the sample counterparts, prove the convergence and almost-unbiasedness of the sample version to the population version, and establish a number of desirable properties for a well-defined correlation measure. In particular, population [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} equals $0$, and the sample version converges to $0$, if and only if the random variables are independent, making Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} valid and consistent under the permutation test. Moreover, Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is designed in a computationally efficient manner, and the new threshold choice achieves both theoretical and empirical improvements. The numerical experiments confirm the empirical advantages of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} in a wide range of linear, nonlinear, and high-dimensional dependencies.
There are many potential future avenues to pursue. Theoretically, proving when and how one method dominates another in testing power is highly desirable. As the methods in comparison have distinct formulations and different properties, it is often difficult to compare them directly. However, a relative efficiency analysis may be viable when limited to methods of similar properties, such as [[[<span style="font-variant:small-caps;">Dcorr</span>]{}]{}]{} and [[[<span style="font-variant:small-caps;">Hsic</span>]{}]{}]{}, or a local statistic and a global statistic. In terms of the locality principle, the geometric meaning of the local scale in [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is intriguing: does the family of local correlations fully characterize the joint distribution, and what is the relationship between the optimal local scale and the dependency geometry? Answering these questions may lead to further improvement of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}, and potentially make the family of local correlations a valuable tool beyond testing.
Method-wise, there are a number of alternative implementations that may be pursued. For example, the sample local correlations can be defined via an $\epsilon$-ball instead of nearest-neighbor graphs, i.e., truncating large distances based on absolute magnitude rather than on the nearest-neighbor graph. The maximization and thresholding mechanism may be further improved, e.g., by thresholding on the covariance instead of the correlation, or by designing a better regularization scheme. Many alternative approaches can maintain consistency in this framework, and it will be interesting to investigate a better algorithm. In particular, we name our method “multiscale graph correlation" because the local correlations are computed via the k-nearest-neighbor graphs, which is one way to generalize the distance correlation.
Application-wise, the [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} method can directly facilitate new discoveries in many scientific fields, especially for data of limited sample size and high dimensionality such as in neuroscience and omics [@ShenEtAl2016]. Within statistics and machine learning, [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} can be a very competitive candidate in any methodology that requires a well-defined dependency measure, e.g., variable selection [@LiZhongZhu2012], time series [@Zhou2012], etc. Moreover, the very idea of locality may improve other types of distance-based tests, such as the energy distance for K-sample testing [@SzekelyRizzo2013b].
Acknowledgment {#acknowledgment .unnumbered}
==============
This work was partially supported by the National Science Foundation award DMS-1712947, and the Defense Advanced Research Projects Agency’s (DARPA) SIMPLEX program through SPAWAR contract N66001-15-C-4041. The authors are grateful to the anonymous reviewers for the invaluable feedback leading to significant improvement of the manuscript, and thank Dr. Minh Tang and Dr. Shangsi Wang for useful discussions and suggestions.
[**APPENDIX**]{}
Proofs {#sec:proofs}
======
Theorem \[thm1\] {#theoremthm1 .unnumbered}
----------------
Equation \[eq:dcov1\] defines the local covariance as $$\begin{aligned}
dCov^{\rho_k, \rho_l}({\ensuremath{X}},{\ensuremath{Y}}) = \int_{\mathbb{R}^{p}\times \mathbb{R}^{q}} E(h^{\rho_{k}}_{{\ensuremath{X}}}(t) \overline{h^{\rho_{l}}_{{\ensuremath{Y}}'}(s)})-E(h^{\rho_{k}}_{{\ensuremath{X}}}(t))E(h^{\rho_{l}}_{{\ensuremath{Y}}'}(s)) w(t, s)dtds.\end{aligned}$$ Expanding the first integral term yields $$\begin{aligned}
&\int E(h^{\rho_{k}}_{{\ensuremath{X}}}(t) \overline{h^{\rho_{l}}_{{\ensuremath{Y}}'}(s)}) w(t, s)dtds \\
=&\ E(\int (g_{{\ensuremath{X}}}(t)\overline{g_{{\ensuremath{X}}'}(t)}-g_{{\ensuremath{X}}}(t)\overline{g_{{\ensuremath{X}}''}(t)}) (\overline{g_{{\ensuremath{Y}}'}(s)}g_{{\ensuremath{Y}}}(s)-\overline{g_{{\ensuremath{Y}}'}(s)}g_{{\ensuremath{Y}}'''}(s)) w(t, s)dtds \cdot {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}} {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}}) \\
=&\ E(\int g_{{\ensuremath{X}}{\ensuremath{Y}}}(t,s) \overline{g_{{\ensuremath{X}}' {\ensuremath{Y}}'}(t,s)} w(t, s)dtds \cdot {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}} {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}}) \\
& -E( \int g_{{\ensuremath{X}}{\ensuremath{Y}}}(t,s) \overline{g_{{\ensuremath{X}}''}(t)g_{{\ensuremath{Y}}'}(s)} w(t, s)dtds \cdot {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}} {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}}) \\
& - E( \int \overline{g_{{\ensuremath{X}}' {\ensuremath{Y}}'}(t,s)} g_{{\ensuremath{X}}}(t)g_{{\ensuremath{Y}}'''}(s) w(t, s)dtds \cdot {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}} {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}})\\
& + E( \int g_{{\ensuremath{X}}}(t)g_{{\ensuremath{Y}}'''}(s) \overline{ g_{{\ensuremath{X}}''}(t) g_{{\ensuremath{Y}}'}(s) } w(t, s)dtds \cdot {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}} {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}}) \\
=&\ E( \| {\ensuremath{X}}-{\ensuremath{X}}' \| \| {\ensuremath{Y}}-{\ensuremath{Y}}' \| {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}} {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}} ) - E( \| {\ensuremath{X}}-{\ensuremath{X}}'' \| \| {\ensuremath{Y}}-{\ensuremath{Y}}' \| {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}} {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}} ) \\
& - E( \| {\ensuremath{X}}'-{\ensuremath{X}}\| \| {\ensuremath{Y}}'-{\ensuremath{Y}}''' \| {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}} {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}} ) + E( \| {\ensuremath{X}}-{\ensuremath{X}}'' \| {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}} \| {\ensuremath{Y}}'-{\ensuremath{Y}}''' \| {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}} ) \\
=&\ E(d^{\rho_{k}}_{{\ensuremath{X}}} d^{\rho_{l}}_{{\ensuremath{Y}}'}).\end{aligned}$$ Every other step being routine, the third equality transforms the $w(t,s)$ integral to Euclidean distances via the same technique employed in Remark 1 and the proof of Theorem 8 in [@SzekelyRizzo2009]. Also note that all four expectations are finite. For example, the first expectation in the third equality is finite, because $\| {\ensuremath{X}}-{\ensuremath{X}}' \| \| {\ensuremath{Y}}-{\ensuremath{Y}}' \|$ is always non-negative, and $E( \| {\ensuremath{X}}-{\ensuremath{X}}' \| \| {\ensuremath{Y}}-{\ensuremath{Y}}' \|)$ is non-negative and finite by the finite second moments assumption on [$X$]{} and [$Y$]{}, such that $$\begin{aligned}
0 \leq E( \| {\ensuremath{X}}-{\ensuremath{X}}' \| \| {\ensuremath{Y}}-{\ensuremath{Y}}' \| {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}} {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}} ) \leq E( \| {\ensuremath{X}}-{\ensuremath{X}}' \| \| {\ensuremath{Y}}-{\ensuremath{Y}}' \|),\end{aligned}$$ which can be similarly established for the other three expectations.
The second integral term can be decomposed into $$\begin{aligned}
\int E(h^{\rho_{k}}_{{\ensuremath{X}}}(t))E(h^{\rho_{l}}_{{\ensuremath{Y}}'}(s)) w(t, s)dtds = \int E(h^{\rho_{k}}_{{\ensuremath{X}}}(t))w(t, s)dtds \cdot \int E(h^{\rho_{l}}_{{\ensuremath{Y}}'}(s)) w(t, s)dtds,\end{aligned}$$ because the first expectation only has $t$ and the second expectation only has $s$, and $w(t,s)$ is a product of $t$ and $s$. Then $$\begin{aligned}
&\int E(h^{\rho_{k}}_{{\ensuremath{X}}}(t)) w(t, s)dtds = E(\int g_{{\ensuremath{X}}}(t)\overline{g_{{\ensuremath{X}}'}(t)}-g_{{\ensuremath{X}}}(t)\overline{g_{{\ensuremath{X}}''}(t)} w(t, s)dtds \cdot {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}}) \\
=& E(\int g_{{\ensuremath{X}}}(t)\overline{g_{{\ensuremath{X}}'}(t)} w(t, s)dtds \cdot {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}}) - E(\int g_{{\ensuremath{X}}}(t)\overline{g_{{\ensuremath{X}}''}(t)} w(t, s)dtds \cdot {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}}) \\
=& E( \| {\ensuremath{X}}-{\ensuremath{X}}' \| {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}}) - E( \| {\ensuremath{X}}-{\ensuremath{X}}'' \| {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}}) \\
=& E(d^{\rho_{k}}_{{\ensuremath{X}}}),\end{aligned}$$ where the two expectations involved are also finite. Similarly $\int E(\overline{h^{\rho_{l}}_{{\ensuremath{Y}}'}(s)}) w(t, s)dtds = E( \| {\ensuremath{Y}}'-{\ensuremath{Y}}\| {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}}) - E( \| {\ensuremath{Y}}'-{\ensuremath{Y}}''' \| {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}})=E(d^{\rho_{l}}_{{\ensuremath{Y}}'})$. Thus $$\begin{aligned}
\int E(h^{\rho_{k}}_{{\ensuremath{X}}}(t))E(h^{\rho_{l}}_{{\ensuremath{Y}}'}(s)) w(t, s)dtds &= E(d^{\rho_{k}}_{{\ensuremath{X}}})E(d^{\rho_{l}}_{{\ensuremath{Y}}'}).\end{aligned}$$
Combining the results verifies that Equation \[eq:dcov2\] equals Equation \[eq:dcov1\]. Moreover, as every term in Equation \[eq:dcov2\] is real-valued, the local covariance, variance, and correlation are all real numbers.
Theorem \[thm2\] {#theoremthm2 .unnumbered}
----------------
When ${\ensuremath{X}}$ and ${\ensuremath{Y}}$ are independent, $$\begin{aligned}
\int E(h^{\rho_{k}}_{{\ensuremath{X}}}(t)\overline{h^{\rho_{l}}_{{\ensuremath{Y}}'}(s)})w(t, s)dtds =\int E(h^{\rho_{k}}_{{\ensuremath{X}}}(t))E(\overline{h^{\rho_{l}}_{{\ensuremath{Y}}'}(s)})w(t, s)dtds, \end{aligned}$$ thus $dCov^{\rho_{k}, \rho_{l}}({\ensuremath{X}},{\ensuremath{Y}})=0$ at any $(\rho_k,\rho_l)$. So is the local correlation at any $(\rho_k,\rho_l) \in \mathcal{S}_{\epsilon}$.
To show the local covariance at the maximal scale $(\rho_k,\rho_l)=(1,1)$ equals the distance covariance, we proceed via the alternative definition in Theorem \[thm1\]: $$\begin{aligned}
&dCov^{\rho_{k}=1, \rho_{l}=1}({\ensuremath{X}},{\ensuremath{Y}}) =E(d^{\rho_{k}}_{{\ensuremath{X}}} d^{\rho_{l}}_{{\ensuremath{Y}}'}) \\
=&\ E( \| {\ensuremath{X}}-{\ensuremath{X}}' \| \| {\ensuremath{Y}}-{\ensuremath{Y}}' \| ) - E( \| {\ensuremath{X}}-{\ensuremath{X}}'' \| \| {\ensuremath{Y}}-{\ensuremath{Y}}' \|) \\
&- E( \| {\ensuremath{X}}'-{\ensuremath{X}}\| \| {\ensuremath{Y}}'-{\ensuremath{Y}}''' \| ) + E( \| {\ensuremath{X}}-{\ensuremath{X}}'' \| ) E(\| {\ensuremath{Y}}'-{\ensuremath{Y}}''' \| ) \\
=&\ E( \| {\ensuremath{X}}-{\ensuremath{X}}' \| \| {\ensuremath{Y}}-{\ensuremath{Y}}' \| ) - E( \| {\ensuremath{X}}-{\ensuremath{X}}'' \| \| {\ensuremath{Y}}-{\ensuremath{Y}}' \|) \\
&- E( \| {\ensuremath{X}}-{\ensuremath{X}}' \| \| {\ensuremath{Y}}-{\ensuremath{Y}}'' \| ) + E( \| {\ensuremath{X}}-{\ensuremath{X}}'' \| ) E(\| {\ensuremath{Y}}-{\ensuremath{Y}}'' \| ) \\
=&\ dCov({\ensuremath{X}},{\ensuremath{Y}}),\end{aligned}$$ where the first equality follows by noting that $E(d^{\rho_{k}}_{{\ensuremath{X}}})=E(d^{\rho_{l}}_{{\ensuremath{Y}}'})=0$ at $\rho_{k}=\rho_{l}=1$, the second equality holds by switching the random variable notations within each expectation, and the last equality is the alternative definition of distance covariance in Theorem 8 of [@SzekelyRizzo2009]. It follows that $dVar^{\rho_{k}=1}({\ensuremath{X}})=dVar({\ensuremath{X}})$, $dVar^{\rho_{l}=1}({\ensuremath{Y}})=dVar({\ensuremath{Y}})$, and $dCorr^{\rho_{k}=1, \rho_{l}=1}({\ensuremath{X}},{\ensuremath{Y}})=dCorr({\ensuremath{X}},{\ensuremath{Y}})$.
Theorem \[thmMax\] {#theoremthmmax .unnumbered}
------------------
Given two continuous random variables $({\ensuremath{X}},{\ensuremath{Y}})$, we first illustrate the continuity of local covariance with respect to $\rho_{k}$ at fixed $\rho_{l}$: For any $\delta$ with the understanding that $\rho_{k} \pm \delta \in [0,1]$, we have $$\begin{aligned}
dCov^{\rho_k+\delta, \rho_l}({\ensuremath{X}},{\ensuremath{Y}}) -dCov^{\rho_k, \rho_l}({\ensuremath{X}},{\ensuremath{Y}}) = E((d^{\rho_{k}+\delta}_{{\ensuremath{X}}}-d^{\rho_{k}}_{{\ensuremath{X}}}) d^{\rho_{l}}_{{\ensuremath{Y}}'}) - E(d^{\rho_{k}+\delta}_{{\ensuremath{X}}}-d^{\rho_{k}}_{{\ensuremath{X}}}) E(d^{\rho_{l}}_{{\ensuremath{Y}}'}),\end{aligned}$$ where the expectation is taken with respect to all random variables inside, and $$\begin{aligned}
d^{\rho_{k}+\delta}_{{\ensuremath{X}}}&=(\|{\ensuremath{X}}-{\ensuremath{X}}'\|-\|{\ensuremath{X}}-{\ensuremath{X}}''\|) {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}+\delta}\\
d^{\rho_{k}}_{{\ensuremath{X}}}&=(\|{\ensuremath{X}}-{\ensuremath{X}}'\|-\|{\ensuremath{X}}-{\ensuremath{X}}''\|) {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}}\end{aligned}$$ Then Cauchy-Schwarz and finite second moment of ${\ensuremath{X}}$ yield that $$\begin{aligned}
&\lim_{\delta \rightarrow 0} |E(d^{\rho_{k}+\delta}_{{\ensuremath{X}}}-d^{\rho_{k}}_{{\ensuremath{X}}})|^2 \\
\leq &\ E\{(\|{\ensuremath{X}}-{\ensuremath{X}}'\|-\|{\ensuremath{X}}-{\ensuremath{X}}''\|)^2\} \lim_{\delta \rightarrow 0} E(|{\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}+\delta}-{\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}}|^2)\\
=&\ 0.\end{aligned}$$ Moreover, the finite second moment of [$Y$]{} guarantees finiteness of $E( d^{\rho_{l}}_{{\ensuremath{Y}}'})$ and $$\begin{aligned}
&\lim_{\delta \rightarrow 0} |E((d^{\rho_{k}+\delta}_{{\ensuremath{X}}}-d^{\rho_{k}}_{{\ensuremath{X}}}) d^{\rho_{l}}_{{\ensuremath{Y}}'})|^2 \\
\leq &\ E\{(\|{\ensuremath{X}}-{\ensuremath{X}}'\|-\|{\ensuremath{X}}-{\ensuremath{X}}''\|)^2 {d^{\rho_{l}}_{{\ensuremath{Y}}'}}^2\} \lim_{\delta \rightarrow 0} E(|{\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}+\delta}-{\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}}|^2)\\
=&\ 0,\end{aligned}$$ which leads to the continuity of local covariance with respect to $\rho_{k}$: $$\begin{aligned}
\lim_{\delta \rightarrow 0} dCov^{\rho_k+\delta, \rho_l}({\ensuremath{X}},{\ensuremath{Y}}) - dCov^{\rho_k, \rho_l}({\ensuremath{X}},{\ensuremath{Y}})=0.\end{aligned}$$
The same holds for fixed $\rho_{k}$ such that $$\begin{aligned}
\lim_{\delta \rightarrow 0} dCov^{\rho_k, \rho_l+\delta}({\ensuremath{X}},{\ensuremath{Y}}) - dCov^{\rho_k, \rho_l}({\ensuremath{X}},{\ensuremath{Y}})=0.\end{aligned}$$ Applying the above yields that $$\begin{aligned}
&dCov^{\rho_k+\delta_{1}, \rho_l+\delta_{2}}({\ensuremath{X}},{\ensuremath{Y}}) -dCov^{\rho_k, \rho_l}({\ensuremath{X}},{\ensuremath{Y}})\\
= &\ dCov^{\rho_k+\delta_{1}, \rho_l+\delta_{2}}({\ensuremath{X}},{\ensuremath{Y}}) -dCov^{\rho_k, \rho_l+\delta_{2}}({\ensuremath{X}},{\ensuremath{Y}}) + dCov^{\rho_k, \rho_l+\delta_{2}}({\ensuremath{X}},{\ensuremath{Y}}) - dCov^{\rho_k, \rho_l}({\ensuremath{X}},{\ensuremath{Y}})\\
\rightarrow & \ 0 \mbox { for any $\delta_{1}$ and $\delta_{2}$ satisfying $|\delta_{1}+\delta_{2}| \rightarrow 0$.}\end{aligned}$$ So the local covariance is continuous with respect to $(\rho_{k},\rho_{l}) \in [0,1]\times[0,1]$. The continuity of the local variance can be shown similarly, and it follows that the local correlation is continuous in $\mathcal{S}_{\epsilon}$.
At $\rho_{k}=1$, $dVar^{\rho_k}({\ensuremath{X}})=dVar({\ensuremath{X}}) \geq 0$ with equality if and only if ${\ensuremath{X}}$ is a constant, and $\mathcal{S}_{\epsilon}$ is empty in this trivial case. Otherwise, by the continuity of the local variance, for any $\epsilon < dVar({\ensuremath{X}})$ there exists $\epsilon_{k}$ such that for all $\rho_{k} \in [\epsilon_{k},1]$, $dVar^{\rho_k}({\ensuremath{X}}) \geq \epsilon$. The same holds for $dVar^{\rho_l}({\ensuremath{Y}})$, so $\mathcal{S}_{\epsilon}$ is non-empty unless either random variable is a constant. It follows that the local correlation is continuous on the non-empty and compact domain $\mathcal{S}_{\epsilon}$, and the extreme value theorem ensures the existence of the population [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} and the optimal scale.
Theorem \[thm3\] {#theoremthm3 .unnumbered}
----------------
By Theorem \[thm2\] and definition of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}, it holds that $$\begin{aligned}
{c}^{*}({\ensuremath{X}},{\ensuremath{Y}}) \geq dCorr^{\rho_{k}=\rho_{l}=1}({\ensuremath{X}},{\ensuremath{Y}})=dCorr({\ensuremath{X}},{\ensuremath{Y}}).\end{aligned}$$ When ${\ensuremath{X}}$ and ${\ensuremath{Y}}$ are independent, all local correlations are $0$ by Theorem \[thm2\], so ${c}^{*}({\ensuremath{X}},{\ensuremath{Y}})=0$ as well. When dependent, distance correlation is larger than $0$, and it follows that ${c}^{*}({\ensuremath{X}},{\ensuremath{Y}}) \geq dCorr({\ensuremath{X}},{\ensuremath{Y}})>0$. Therefore, [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} equals $0$ if and only if independence, just like the distance correlation.
Theorem \[thm4\] {#theoremthm4 .unnumbered}
----------------
We prove this theorem in three steps: **(i)**, the expectation of the sample local covariance is shown to equal the population local covariance up to $\mathcal{O}(\frac{1}{n})$; **(ii)**, the variance of the sample statistic is shown to be of $\mathcal{O}(\frac{1}{n})$; **(iii)**, the sample local covariance is shown to converge to its population counterpart uniformly. The convergence then extends trivially to the sample local variance and correlation.
**(i)**: Expanding the first and second term of population local covariance in Equation \[eq:dcov2\], we have $E(d^{\rho_{k}}_{{\ensuremath{X}}} d^{\rho_{l}}_{{\ensuremath{Y}}'})=\alpha_{1}-\alpha_{2}-\alpha_{3}+\alpha_{4}$ with $$\begin{aligned}
\alpha_{1}&=E( \| {\ensuremath{X}}-{\ensuremath{X}}' \| \| {\ensuremath{Y}}-{\ensuremath{Y}}' \| {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}} {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}} ),\\
\alpha_{2}&=E( \| {\ensuremath{X}}-{\ensuremath{X}}'' \| \| {\ensuremath{Y}}-{\ensuremath{Y}}' \| {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}} {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}} ),\\
\alpha_{3}&=E( \| {\ensuremath{X}}'-{\ensuremath{X}}\| \| {\ensuremath{Y}}'-{\ensuremath{Y}}''' \| {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}} {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}} ),\\
\alpha_{4}&=E( \| {\ensuremath{X}}-{\ensuremath{X}}'' \| \| {\ensuremath{Y}}'-{\ensuremath{Y}}''' \| {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}} {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}}),\end{aligned}$$ and $E(d^{\rho_{k}}_{{\ensuremath{X}}})E(d^{\rho_{l}}_{{\ensuremath{Y}}'})=\alpha_{5}-\alpha_{6}-\alpha_{7}+\alpha_{8}$ with $$\begin{aligned}
\alpha_{5}&=E( \| {\ensuremath{X}}-{\ensuremath{X}}' \| {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}}) E(\| {\ensuremath{Y}}-{\ensuremath{Y}}' \| {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}}),\\
\alpha_{6}&=E( \| {\ensuremath{X}}-{\ensuremath{X}}' \| {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}}) E(\| {\ensuremath{Y}}''-{\ensuremath{Y}}' \| {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}}),\\
\alpha_{7}&=E( \| {\ensuremath{X}}-{\ensuremath{X}}'' \| {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}}) E(\| {\ensuremath{Y}}-{\ensuremath{Y}}' \| {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}}),\\
\alpha_{8}&=E( \| {\ensuremath{X}}-{\ensuremath{X}}'' \| {\boldsymbol{I}}_{{\ensuremath{X}},{\ensuremath{X}}'}^{\rho_{k}}) E(\| {\ensuremath{Y}}''-{\ensuremath{Y}}' \| {\boldsymbol{I}}_{{\ensuremath{Y}}',{\ensuremath{Y}}}^{\rho_{l}}).\end{aligned}$$ All the $\alpha$’s are bounded due to the finite first moment assumption on $({\ensuremath{X}},{\ensuremath{Y}})$. Note that for distance covariance, one can go through the same proof with only three terms – $\alpha_{1}, \alpha_{2}, \alpha_{5}$ – while the local version involves eight terms, due to the additional random variables for local scales.
For the sample local covariance, the expectation of the first term can be expanded as $$\begin{aligned}
&\ \frac{1}{n(n-1)}\sum_{i \neq j}^{n}E(A_{ij}B_{ji}{\boldsymbol{I}}(R^{A}_{ij} \leq k){\boldsymbol{I}}(R^{B}_{ji} \leq l))\\
=&\ E( (\frac{n-2}{n-1}\tilde{A}_{ij}-\frac{1}{n-1}\sum_{s \neq i,j} \tilde{A}_{sj}) \\
&\ \cdot (\frac{n-2}{n-1}\tilde{B}_{ji}-\frac{1}{n-1}\sum_{s \neq i,j} \tilde{B}_{si}) {\boldsymbol{I}}(R^{A}_{ij} \leq k){\boldsymbol{I}}(R^{B}_{ji} \leq l))\\
=&\ \frac{(n-2)^2}{(n-1)^2} (\alpha_{1}-\alpha_{2}-\alpha_{3})+\frac{(n-2)(n-3)}{(n-1)^2}\alpha_{4}+\mathcal{O}(\frac{1}{n}) \\
=&\ \alpha_{1}-\alpha_{2}-\alpha_{3}+\alpha_{4}+\mathcal{O}(\frac{1}{n}).\end{aligned}$$ The expectation of the second term can be similarly expanded as $$\begin{aligned}
& E(\frac{1}{n(n-1)}\sum_{i \neq j}^{n}A^{k}_{ij} \frac{1}{n(n-1)}\sum_{i \neq j}^{n}B^{l}_{ji}) \\
=&\ \frac{1}{n^2(n-1)^2} \sum_{u\neq v}^{n} E(A_{uv}{\boldsymbol{I}}(R^{A}_{uv} \leq k) \sum_{i \neq j}^{n}B_{ji}{\boldsymbol{I}}(R^{B}_{ji} \leq l))\\
=&\ \frac{1}{n(n-1)} E( (\frac{n-2}{n-1}\tilde{A}_{uv}-\frac{1}{n-1}\sum_{s \neq u,v} \tilde{A}_{sv}){\boldsymbol{I}}(R^{A}_{uv} \leq k) \\
&\cdot \sum_{i \neq j}^{n} (\frac{n-2}{n-1}\tilde{B}_{ji}-\frac{1}{n-1}\sum_{s \neq i,j} \tilde{B}_{si}) {\boldsymbol{I}}(R^{B}_{ji} \leq l)\\
=&\ \alpha_{5}-\alpha_{6}-\alpha_{7}+\alpha_{8}+\mathcal{O}(\frac{1}{n}).\end{aligned}$$ Combining the results yields that $E(dCov^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n}))=dCov^{\rho_{k},\rho_{l}}({\ensuremath{X}},{\ensuremath{Y}})+\mathcal{O}(\frac{1}{n})$.
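The sample quantities in this step can be formed directly from data. Below is a minimal NumPy sketch of the sample local covariance for one-dimensional data, assuming column-wise distance ranks and the column centering used above; the function names are ours and the tie-breaking convention is an implementation choice, not prescribed by the text:

```python
import numpy as np

def truncated_centered(z, k):
    """Column-centered distance matrix keeping the k nearest neighbors per
    column (a sketch of A^k; rank ties are broken by argsort order)."""
    z = np.asarray(z, dtype=float)
    n = len(z)
    D = np.abs(z[:, None] - z[None, :])          # pairwise distances
    R = D.argsort(axis=0).argsort(axis=0) + 1    # rank of each entry in its column
    A = D - D.sum(axis=0) / (n - 1)              # column centering
    np.fill_diagonal(A, 0.0)                     # diagonal is defined to be zero
    return A * (R <= k)

def local_dcov(x, y, k, l):
    """dCov^{k,l}: E-hat(A^k o B^l') - E-hat(A^k) E-hat(B^l), averaging over
    the n(n-1) off-diagonal entries."""
    n = len(x)
    Ak, Bl = truncated_centered(x, k), truncated_centered(y, l)
    m = n * (n - 1)
    return (Ak * Bl.T).sum() / m - (Ak.sum() / m) * (Bl.sum() / m)
```

At $k=l=n$ no entries are truncated and the statistic reduces to the column-centered distance covariance; the symmetry $dCov^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n})=dCov^{l,k}(\mathcal{Y}_{n},\mathcal{X}_{n})$ holds algebraically in this form.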
**(ii)**: The variance of sample local covariance is computed as $$\begin{aligned}
&Var( {\hat{E}}(( A^{k}- {\hat{E}}(A^{k})) \circ (B^{l'}- {\hat{E}}(B^{l'}))))\\
=&\ \frac{1}{n^2(n-1)^2} Var(\sum_{i \neq j}^{n} (A^{k}_{ij}- {\hat{E}}(A^{k})) (B^{l}_{ji}- {\hat{E}}(B^{l})))\\
=&\ \frac{n^4}{n^2(n-1)^2} \mathcal{O}(\frac{1}{n}) + \frac{n^3}{n^2(n-1)^2} \mathcal{O}(1).\end{aligned}$$ The last equality follows because there are $n^4$ covariance terms of order $\mathcal{O}(\frac{1}{n})$: the terms $Cov( (A^{k}_{ij}- {\hat{E}}(A^{k})) (B^{l}_{ji}- {\hat{E}}(B^{l})), (A^{k}_{uv}- {\hat{E}}(A^{k})) (B^{l}_{vu}- {\hat{E}}(B^{l})))$ are related only through the column centering when $(i,j)$ does not equal $(u,v)$; the remaining $n^3$ covariance terms are at most $\mathcal{O}(1)$. Note that the finite second moment assumption on $({\ensuremath{X}},{\ensuremath{Y}})$ is required for the big $\mathcal{O}$ notation to have a bounding constant. Therefore, the variance of the sample local covariance is of $\mathcal{O}(\frac{1}{n})$.
**(iii)**: $dCov^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n})$ converges to the population local covariance by applying the strong law of large numbers on U-statistics [@KoroljukBook]. Namely, the first term of sample local covariance satisfies $$\begin{aligned}
&\frac{1}{n(n-1)}\sum_{i \neq j}^{n}A_{ij}B_{ji}{\boldsymbol{I}}(R^{A}_{ij} \leq k){\boldsymbol{I}}(R^{B}_{ji} \leq l) \\
=&\ \frac{1}{n}\sum_{i=1}^{n}(\frac{1}{n-1}\sum_{j \neq i}^{n} (\frac{n-2}{n-1}\tilde{A}_{ij}-\frac{1}{n-1}\sum_{s \neq i,j} \tilde{A}_{sj}) \\
& \cdot (\frac{n-2}{n-1}\tilde{B}_{ji}-\frac{1}{n-1}\sum_{s \neq i,j} \tilde{B}_{si}) {\boldsymbol{I}}(R^{A}_{ij} \leq k){\boldsymbol{I}}(R^{B}_{ji} \leq l) ) \\
\rightarrow &\ \frac{1}{n}\sum_{i=1}^{n} (\alpha_{1|(x_i,y_i)}-\alpha_{2|(x_i,y_i)}-\alpha_{3|(x_i,y_i)}+\alpha_{4|(x_i,y_i)})\\
\rightarrow &\ \alpha_{1}-\alpha_{2}-\alpha_{3}+\alpha_{4},\end{aligned}$$ where the second line applies the law of large numbers at each $i$ by conditioning on $({\ensuremath{X}},{\ensuremath{Y}})=(x_i,y_i)$ for each of the $\alpha$’s, and the last line follows by applying the law of large numbers to the independently distributed conditioned $\alpha$’s. Similarly, the second term of the sample local covariance can be shown to converge to the second term of the population local covariance. The convergence is also uniform: the local covariances at different scales are dependent on one another, and in fact share summands. Thus there exists a scale $(k,l)$ at which $dCov^{k,l}$ has the largest deviation from its mean among all local covariances, and one can find a suitable $\epsilon$ to bound the maximum deviation for all $dCov^{k,l}$.
Alternatively, convergence in probability can be directly established from (i) and (ii) by applying the Chebyshev’s inequality; the almost sure convergence can also be proved via the integral definition using almost the same steps as in Theorems 1 and 2 from [@SzekelyRizzoBakirov2007], i.e., first define the empirical characteristic function via the $w$ integral for the sample local covariance, and show it converges to the population local covariance in Equation \[eq:dcov1\] by the law of large numbers on U-statistics.
Corollary \[thm5\] {#corollarythm5 .unnumbered}
------------------
It follows directly from Theorem \[thm2\], Theorem \[thm4\], and the convergence of sample distance correlation to the population [@SzekelyRizzoBakirov2007].
Corollary \[cor1\] {#corollarycor1 .unnumbered}
------------------
The population [[[<span style="font-variant:small-caps;">Mantel</span>]{}]{}]{} and its equivalence to expectation of Euclidean distances can be established via the same steps as in Theorem \[thm1\]. The convergence of sample [[[<span style="font-variant:small-caps;">Mantel</span>]{}]{}]{} to its population version can be derived based on either the same procedure in Theorem \[thm4\], or Theorems 1 and 2 from [@SzekelyRizzoBakirov2007] with minimal notational changes.
Theorem \[thm6\] {#theoremthm6 .unnumbered}
----------------
**(a)**: Regardless of the threshold choice, the algorithm ensures that Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is always no less than $dCorr^{n,n}(\mathcal{X}_{n},\mathcal{Y}_{n})$, and no more than $\max\{dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n})\}$.
**(b)**: By Corollary \[thm5\], $dCorr^{n,n}(\mathcal{X}_{n},\mathcal{Y}_{n}) \rightarrow dCorr({\ensuremath{X}},{\ensuremath{Y}})$, then the uniform convergence by Theorem \[thm4\] ensures that $\max\{dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n})\} \rightarrow {c}^{*}({\ensuremath{X}},{\ensuremath{Y}})$. When ${\ensuremath{X}}$ and ${\ensuremath{Y}}$ are independent, $dCorr({\ensuremath{X}},{\ensuremath{Y}})$ and ${c}^{*}({\ensuremath{X}},{\ensuremath{Y}})$ are both $0$, to which Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} must converge; when dependent, $dCorr^{n,n}(\mathcal{X}_{n},\mathcal{Y}_{n})$ converges to a positive constant, so Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} must converge to a constant that is either the same or larger.
Theorem \[thm7\] {#theoremthm7 .unnumbered}
----------------
**(a)**: Given ${c}^{*}({\ensuremath{X}},{\ensuremath{Y}})>dCorr({\ensuremath{X}},{\ensuremath{Y}})$, by the continuity of local correlations with respect to $(\rho_{k},\rho_{l})$, there always exists a non-empty connected region $\mathcal{R} \subseteq \mathcal{S}_{\epsilon}$ such that $dCorr^{\rho_k,\rho_l}({\ensuremath{X}},{\ensuremath{Y}})>dCorr({\ensuremath{X}},{\ensuremath{Y}})$ for all $(\rho_{k},\rho_{l}) \in \mathcal{R}$. Among all such regions we take the one with the largest area.
As $n$ increases to infinity, the set $\{(\frac{k-1}{n-1},\frac{l-1}{n-1}) \ | \ (k, l) \in [n]^2\}$ is a dense subset of $[0,1] \times [0,1]$, and $\{dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n})\}$ is also a dense subset of $\{dCorr^{\rho_{k},\rho_{l}}({\ensuremath{X}},{\ensuremath{Y}})\}$. Thus for $n$ sufficiently large, the area $\mathcal{R}$ can always be approximated via the largest connected component $R$ by the Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} algorithm. As all sample local correlations within the region $R$ are larger than the sample distance correlation, so is the smoothed maximum. Note that if the threshold $\tau_n$ does not converge to $0$, e.g., if $\tau_n$ is a positive constant like $0.05$, Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} will fail to identify a region $R$ when $0.05 > {c}^{*}({\ensuremath{X}},{\ensuremath{Y}})$.
**(b)**: Following (a), if optimal scale of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is in the largest area $\mathcal{R}$, the sample maximum within $R$ converges to the true maximum within $\mathcal{R}$, i.e., Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} converges to the population [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{}.
Corollary \[cor2\] {#corollarycor2 .unnumbered}
------------------
For $v=\frac{n(n-3)}{2}$ and $z \sim Beta(\frac{v-1}{2},\frac{v-1}{2})$, the convergence of $\tau_n= 2F^{-1}_{z}(1-\frac{0.02}{n}) -1$ can be shown as follows: by computing the variance of the symmetric Beta distribution and using Chebyshev’s inequality, it follows that $$\begin{aligned}
& \frac{0.04}{n} = Prob(|z-0.5| \geq \tau_n /2) \leq \mathcal{O}(\frac{1}{n^2 \tau_{n}^{2}})\\
\Rightarrow &\ \tau_{n}=\mathcal{O}(\frac{1}{\sqrt{n}}) \rightarrow 0.\end{aligned}$$ The equation also implies that the percentile choice can be either fixed or anything no larger than $1-\frac{c}{n^2}$ for some constant $c$, beyond which the convergence of $\tau_n$ to $0$ will be broken.
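Numerically, the threshold can be evaluated with the Beta inverse CDF. A sketch using SciPy, where we read the Beta distribution as the symmetric one with both shape parameters equal to $\frac{v-1}{2}$ (an assumption on our part, consistent with the centering of $z$ at $0.5$ in the proof), and the function name is ours:

```python
from scipy.stats import beta

def mgc_threshold(n):
    """tau_n = 2 F_z^{-1}(1 - 0.02/n) - 1 for z ~ Beta((v-1)/2, (v-1)/2),
    with v = n(n-3)/2; the symmetric Beta is an assumption."""
    v = n * (n - 3) / 2.0
    a = (v - 1) / 2.0
    return 2.0 * beta.ppf(1.0 - 0.02 / n, a, a) - 1.0
```

Consistent with $\tau_{n}=\mathcal{O}(\frac{1}{\sqrt{n}})$, the computed threshold decays monotonically toward $0$ as $n$ grows.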
Theorem \[thm8\] {#theoremthm8 .unnumbered}
----------------
To prove consistency under the permutation test, it suffices to show that at any type $1$ error level $\alpha$, the p-value of [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is asymptotically less than $\alpha$. The p-value can be expressed by: $$\begin{aligned}
& Prob({c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n}^{\pi}) > {c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n})) \\
= &\ \sum_{j=0}^{n} Prob({c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n}^{\pi}) > {c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n}) | \pi \mbox{ is a partial derangement of size $j$})\\
& \times Prob(\mbox{partial derangement of size $j$}) \end{aligned}$$ by conditioning on the permutation being a partial derangement of size $j$, e.g., $j=0$ means $\pi$ is a derangement, while $j=n$ means $\pi$ does not permute any position.
As $n \rightarrow \infty$, we always have $$\begin{aligned}
&Prob(\mbox{partial derangement of size $j$}) \rightarrow e^{-1} / j!, \\
&{c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n}) \rightarrow \epsilon >0 \ \mbox{ under dependence.}\end{aligned}$$ Thus it suffices to show that for any $\epsilon >0$, $$\begin{aligned}
\label{eq:permconsistency}
\lim_{n\rightarrow \infty} e^{-1}\sum_{j=0}^{n} Prob({c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n}^{\pi}) > \epsilon | \mbox{ partial derangement of size $j$}) / j! \rightarrow 0.\end{aligned}$$ We then decompose the above summation into two cases. The first case is when $j$ is fixed: $\mathcal{X}_{n}$ and $\mathcal{Y}_{n}^{\pi}$ are asymptotically independent (due to the *iid* assumption), so ${c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n}^{\pi})$ converges to $0$. The other case comprises the remaining partial derangements $\pi$ of size $\mathcal{O}(n)$, but these occur with probability converging to $0$; i.e., for any $\alpha > 0$, there exists $N_{1}$ such that $$\begin{aligned}
e^{-1} \sum_{j=N_{1}+1}^{+\infty} 1/j! < \alpha / 2,\end{aligned}$$ as $\sum\limits_{j=0}^{n} 1/j!$ is bounded above and converges to $e$. Then back to the first case, there further exists $N_{2}>N_{1}$ such that for any $j\leq N_{1}$ and all $n > N_{2}$ $$\begin{aligned}
Prob({c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n}^{\pi}) > \epsilon \ | \ \mbox{ partial derangement of size $j$}) < \alpha / 2.\end{aligned}$$ It follows that for all $n > N_{2}$, $$\begin{aligned}
& e^{-1} \sum_{j=0}^{n} Prob({c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n}^{\pi}) > \epsilon | \mbox{ partial derangement of size $j$}) / j!\\
< &\ e^{-1} \sum_{j=0}^{N_{1}} \alpha / (2 \, j!) + e^{-1} \sum_{j=N_{1}+1}^{n} 1 / j!\\
< &\ \alpha.\end{aligned}$$ Thus the convergence in Equation \[eq:permconsistency\] holds.
Therefore, at any type $1$ error level $\alpha>0$, the p-value of Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} under the permutation test will eventually be less than $\alpha$ as $n$ increases, so that Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} always successfully detects the dependency. Thus Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is consistent against all dependencies with finite second moments. When ${\ensuremath{X}}$ and ${\ensuremath{Y}}$ are independent, each column of $\mathcal{X}_{n}$ and the corresponding column of $\mathcal{Y}_{n}$ are independent for any permutation. Therefore, ${c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n}^{\pi})$ has the same distribution as ${c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n})$ for any random permutation $\pi$, and $Prob({c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n}^{\pi}) > {c}^{*}(\mathcal{X}_{n},\mathcal{Y}_{n}))$ is uniformly distributed in $[0,1]$. Thus Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is valid.
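The permutation test underlying this argument is generic: the observed statistic is compared against its null distribution obtained by permuting one sample. A minimal sketch for an arbitrary scalar test statistic, using the absolute Pearson correlation as a stand-in (the names are ours; this is not the MGC implementation):

```python
import numpy as np

def permutation_pvalue(stat, x, y, n_perm=200, seed=0):
    """Fraction of permuted statistics reaching the observed one,
    with the +1 correction so the test stays valid at finite n_perm."""
    rng = np.random.default_rng(seed)
    observed = stat(x, y)
    count = sum(stat(x, rng.permutation(y)) >= observed for _ in range(n_perm))
    return (count + 1) / (n_perm + 1)

def abs_pearson(x, y):
    # stand-in scalar statistic; any dependence measure could be used
    return abs(np.corrcoef(x, y)[0, 1])
```

Under independence, permuting $y$ leaves the joint distribution unchanged, so the p-value is (approximately) uniform; under dependence, the observed statistic eventually exceeds all permuted ones.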
Lemma \[lem1\] {#lemmalem1 .unnumbered}
--------------
$$\begin{aligned}
dCov^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n}) &= {\hat{E}}(A^{k} \circ B^{l'})- {\hat{E}}(A^{k} \circ J){\hat{E}}(B^{l} \circ J) \\
&= tr (A^{k}B^{l})- tr (A^{k}J)tr(B^{l}J)\\
&=tr [ (A^{k}- tr (A^{k}J)J) (B^{l}-tr(B^{l}J)J)]\\
& = \sum_{i=1}^{n} \lambda_{i} [ (A^{k}- tr (A^{k}J)J) (B^{l}-tr(B^{l}J)J)],\end{aligned}$$
where the first line is the definition, the second line follows by noting that ${\hat{E}}(A \circ B^{'})=tr(AB)$ and ${\hat{E}}(A)={\hat{E}}(A \circ J)=tr(AJ)$ for any two matrices $A$ and $B$, and the last two lines follow from basic properties of matrix trace.
Theorem \[thm:dvar\] {#theoremthmdvar .unnumbered}
--------------------
For all these properties, it suffices to prove them on the sample local variance $dVar^{k}(\mathcal{X}_{n})$ first. Then the population version follows by the convergence property in Theorem \[thm4\].
**(a)**: Based on Lemma \[lem1\] it holds that $$\begin{aligned}
dVar^{k}(\mathcal{X}_{n})
& = \sum_{i=1}^{n} \lambda^{2}_{i}[A^{k}- tr (A^{k}J)J] \geq 0.\end{aligned}$$
**(b)**: Following part (a), we have $$\begin{aligned}
&\ dVar^{k}(\mathcal{X}_{n})=0\\
\Leftrightarrow & \ \lambda_{i}[A^{k}- tr (A^{k}J)J] =0, \ \forall i \\
\Leftrightarrow & \ A^{k}- tr (A^{k}J)J = 0_{n \times n} \\
\Leftrightarrow & \ A^{k}_{ij} = tr (A^{k}J), \ \forall i,j =1,\ldots, n\\
\Leftrightarrow & \ A^{k}_{ij}=tr (A^{k}J)=0, \ \forall i,j =1,\ldots, n,\end{aligned}$$ where the last line follows by observing that $A^{k}_{ii}=0$ by Equation \[localCoef2\]. Therefore, distance variance equals $0$ if and only if $A^{k}$ is the zero matrix.
A trivial case is $k=0$, which corresponds to $\rho_k =0$ asymptotically. Otherwise $A^{k}$ is a zero matrix if and only if for all $(i,j)$ satisfying ${\boldsymbol{I}}(R^{A}_{ij} \leq k)=1$, $$\begin{aligned}
\tilde{A}_{ij}=\frac{1}{n-1}\sum_{s=1}^{n} \tilde{A}_{sj}. \end{aligned}$$ Namely, for each point $x_j$, its $k$ smallest distance entries all equal the mean distance with respect to $x_j$, which can only happen when $\tilde{A}_{ij}$ is a constant for all $i \neq j$ at a fixed $j$. Due to the symmetry of the distance matrix, all the off-diagonal entries of $\tilde{A}$ are the same, i.e., $\tilde{A}=u (J - I)$ for some constant $u \geq 0$.
When $u=0$, all observations are the same, so ${\ensuremath{X}}$ is a constant. Otherwise all observations are equally distanced from each other by a distance of $u>0$, which occurs with probability $0$ under the *iid* assumption. This is because when ${\ensuremath{X}}^{'}$ and ${\ensuremath{X}}^{''}$ are independent, one cannot have $\|{\ensuremath{X}}^{''}-{\ensuremath{X}}\|=\|{\ensuremath{X}}^{'}-{\ensuremath{X}}\|$ almost surely unless they are degenerate.
From another point of view, for given sample data that happen to be equidistant, e.g., $n$ points in $n-1$ dimensions, the sample local variance can still be $0$. But this scenario occurs with probability $0$ when the observations are assumed *iid*.
**(c)**: This follows trivially from the definition, because upon the transformation the distance matrix is unchanged up-to a factor of $u$.
Theorem \[thm:dcor\] {#theoremthmdcor .unnumbered}
--------------------
Similar as in Theorem \[thm:dvar\], it suffices to prove (a) and (b) for the sample local correlation, then they automatically hold for the population version by convergence.
**(a)**: The symmetric part is trivial: for any $(\rho_k,\rho_l) \in [0,1] \times [0,1]$, by Lemma \[lem1\] $$\begin{aligned}
dCov^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n})
&=tr [ (A^{k}- tr (A^{k}J)J) (B^{l}-tr(B^{l}J)J)] \\
&=tr [ (B^{l}-tr(B^{l}J)J)(A^{k}- tr (A^{k}J)J)] \\
&=dCov^{l,k}(\mathcal{Y}_{n},\mathcal{X}_{n}).\end{aligned}$$ Then by the Cauchy-Schwarz inequality on the trace, $$\begin{aligned}
|dCov^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n}) |
=& \ |tr [ (A^{k}- tr (A^{k}J)J) (B^{l}-tr(B^{l}J)J)]|\\
\leq & \ \sqrt{tr [ (A^{k}- tr (A^{k}J)J)^{2}]} \sqrt{tr [ (B^{l}-tr(B^{l}J)J)^{2}]} \\
= & \ \sqrt{ dVar^{k}(\mathcal{X}_{n}) dVar^{l}(\mathcal{Y}_{n})}.\end{aligned}$$ Thus $dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n}) =dCorr^{l,k}(\mathcal{Y}_{n},\mathcal{X}_{n}) \in [-1,1]$.
**(b)**: The if direction is clear: under isometry, $\tilde{A}=|u| \tilde{B}$, so both share the same k-nearest-neighbor graph and $A^{k} = |u| \cdot B^{k}$. Thus $dCov^{k,k}(\mathcal{X}_{n},\mathcal{Y}_{n}) = \frac{1}{|u|} dVar^{k}(\mathcal{X}_{n})= |u| \cdot dVar^{k}(\mathcal{Y}_{n})$, and $dCorr^{k,k}(\mathcal{X}_{n},\mathcal{Y}_{n})=1$. For the only if direction: by part (a), the local correlation can be $\pm 1$ if and only if $(A^{k}- tr (A^{k}J)J)$ is a scalar multiple of $(B^{l}-tr(B^{l}J)J)$, say by some constant $u$.
First we argue that the non-zero entries in $A^{k}$ must match the non-zero entries in $B^{l}$. Namely, the k-nearest neighbor graph is the same between $\tilde{A}$ and $\tilde{B}$. As $A^{k}_{ii}=B^{l}_{ii}=0$, $-tr (A^{k}J)$ must be a scalar multiple of $-tr (B^{l}J)$. Then if there exists $i \neq j$ such that $A^{k}_{ij}=0$ while $B^{l}_{ij} \neq 0$, $-tr (A^{k}J)$ must be the same scalar multiple of $B^{l}_{ij}-tr (B^{l}J)$, which is not possible unless $B^{l}_{ij}=0$. Thus $k=l$ and ${\boldsymbol{I}}(R^{A}_{ij} \leq k)= {\boldsymbol{I}}(R^{B}_{ij} \leq k)$ for all $(i,j)$.
Next we show the scalar multiple must be positive, i.e., the local correlation cannot be $-1$. Assuming it can be $-1$, then $$\begin{aligned}
& \ A^{k}- tr (A^{k}J)J=-|u|(B^{k}-tr(B^{k}J)J) \\
\Leftrightarrow & \ A^{k}+|u|B^{k}=(tr (A^{k}J)+|u|tr(B^{k}J))J \\
\Leftrightarrow & \ A^{k}+|u|B^{k}=0_{n \times n} \\
\Leftrightarrow & \ A+|u|B=0_{n \times n}, \end{aligned}$$ where the second to last line follows because the diagonal entries of $A^{k}+|u|B^{k}$ are $0$ by definition, and the last line follows by observing that $tr (A^{k}J)$ and $tr (B^{k}J)$ are both negative unless $k=n$ (e.g., $A$ is always centered to have zero matrix mean, while $A^{k}$ keeps the $k$ smallest entries per column, so its matrix mean is negative until $k=n$). However, if the last line is true, then the original distance correlation would be $-1$, which cannot happen under the *iid* assumption as shown in [@SzekelyRizzoBakirov2007]. Note that the derivation also shows that the local correlations can be $-1$ for general dissimilarity matrices without the *iid* assumption, i.e., when $\tilde{A}+|u| \tilde{B}=v(J-I)$ for some constant $v$.
Therefore, the scalar multiple must be positive, and $A^{k}- tr (A^{k}J)J = |u| (B^{k}-tr (B^{k}J)J)$. As the diagonals satisfy $A^{k}_{ii}=B^{k}_{ii}=0$, it holds that $tr (A^{k}J)= |u| tr (B^{k}J)$ and $A^{k}=|u| B^{k}$. Thus for each $(i, j)$ satisfying ${\boldsymbol{I}}(R^{A}_{ij} \leq k)=1$: $$\begin{aligned}
& \ \tilde{A}_{ij}-\frac{1}{n-1}\sum_{s=1}^{n} \tilde{A}_{sj}=|u|(\tilde{B}_{ij}-\frac{1}{n-1}\sum_{s=1}^{n} \tilde{B}_{sj}) \\
\Leftrightarrow & \ \tilde{A}_{ij}-|u|\tilde{B}_{ij} = \frac{1}{n-1}\sum_{s=1}^{n} \tilde{A}_{sj} -\frac{|u|}{n-1}\sum_{s=1}^{n} \tilde{B}_{sj}\\
\Leftrightarrow & \ \tilde{A}_{ij}-|u|\tilde{B}_{ij} = v.\end{aligned}$$ We argue that if $\tilde{A}_{ij}=|u|\tilde{B}_{ij}+v$ for each $(i, j)$ satisfying ${\boldsymbol{I}}(R^{A}_{ij} \leq k)=1$, it also holds for all $(i,j)$. Suppose there exists $(s,j)$ with ${\boldsymbol{I}}(R^{A}_{sj} \leq k)=0$ and $\tilde{A}_{sj} = |u|\tilde{B}_{sj}+v+w$ for some $w \neq 0$. Without loss of generality, there must exist one more index $t$ such that ${\boldsymbol{I}}(R^{A}_{tj} \leq k)=0$ and $\tilde{A}_{tj} = |u|\tilde{B}_{tj}+v-w$ to maintain the mean (or multiple indices in a similar manner). This requires $\|{\ensuremath{X}}^{''}-{\ensuremath{X}}\|-|u|\|{\ensuremath{Y}}^{''}-{\ensuremath{Y}}\|=\|{\ensuremath{X}}^{'}-{\ensuremath{X}}\|-|u|\|{\ensuremath{Y}}^{'}-{\ensuremath{Y}}\|+2w$, so $({\ensuremath{X}}^{''},{\ensuremath{Y}}^{''})$ and $({\ensuremath{X}}^{'},{\ensuremath{Y}}^{'})$ are related by $w$ when conditioning on $({\ensuremath{X}},{\ensuremath{Y}})$. Thus it imposes a dependency structure and violates the *iid* assumption.
Therefore $\tilde{A}-|u|\tilde{B}=v(J-I)$. When $v=0$, $\tilde{A}=|u|\tilde{B}$ is equivalent to that $({\ensuremath{X}}, u {\ensuremath{Y}}$) are related by an isometry. When $v \neq 0$, it requires each distance entries to be added by the same constant, which occurs with probability $0$ under the *iid* assumption. Namely, if $\|{\ensuremath{X}}^{'}-{\ensuremath{X}}\|-|u| \|{\ensuremath{Y}}^{'}-{\ensuremath{Y}}\|=\|{\ensuremath{X}}^{''}-{\ensuremath{X}}\|-|u| \|{\ensuremath{Y}}^{''}-{\ensuremath{Y}}\|=v \neq 0$ almost surely, then $({\ensuremath{X}}^{''},{\ensuremath{Y}}^{''})$ and $({\ensuremath{X}}^{'},{\ensuremath{Y}}^{'})$ are related by $v$ when conditioning on $({\ensuremath{X}},{\ensuremath{Y}})$, in which case these two pairs become dependent and the *iid* assumption is violated.
**(c)**: As each local correlation is symmetric and bounded for either population or sample case, [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} is symmetric and within $[-1,1]$ by part (a).
**(d)**: If ${\ensuremath{X}}$ and $u {\ensuremath{Y}}$ are related by an isometry, the distance correlation (or the local correlation at the largest scale) equals $1$. For population, [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} takes the maximum local correlation; for sample, [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} cannot be smaller than the local correlation at the largest scale. In both cases population and Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} equal $1$.
When population or Sample [[[<span style="font-variant:small-caps;">MGC</span>]{}]{}]{} equal $1$, there exists at least one local correlation that equals $1$, i.e., $dCorr^{k,l}(\mathcal{X}_{n},\mathcal{Y}_{n})=1$. From the inequality in part (a), $k$ must equal $l$ for the equality to hold. Otherwise the number of non-zero entries does not match between $A^{k}$ and $B^{l}$, and $A^{k}$ cannot be a scalar multiple of $B^{l}$. Thus there exists $k$ such that $dCorr^{k,k}(\mathcal{X}_{n},\mathcal{Y}_{n})=1$, and the conclusion follows from part (b).
Simulation Dependence Functions {#appen:function}
===============================
This section presents the $20$ simulations used in the experiment section, which are mostly based on a combination of simulations from previous works [@SzekelyRizzoBakirov2007; @SimonTibshirani2012; @GorfineHellerHeller2012]. We modified them only by adding noise and a weight vector for higher dimensions, which makes the settings more challenging and allows all methods to be compared across different dimensions and sample sizes. For the random variable ${\ensuremath{X}}\in {\mathbb{R}}^{p}$, we denote ${\ensuremath{X}}_{[d]}, d=1,\ldots,p$ as the $d^{th}$ dimension of [$X$]{}. For the purpose of high-dimensional simulations, $w \in {\mathbb{R}}^{p}$ is a decaying vector with $w_{[d]}=1/d$ for each $d$, such that $w{^{\ensuremath{\mathsf{T}}}}{\ensuremath{X}}$ is a weighted summation of all dimensions of [$X$]{}. Furthermore, ${\mathcal{U}}(a,b)$ denotes the uniform distribution on the interval $(a,b)$, ${\mathcal{B}}(p)$ denotes the Bernoulli distribution with probability $p$, ${\mathcal{N}}(\mu,{\Sigma})$ denotes the normal distribution with mean ${\mu}$ and covariance ${\Sigma}$, $U$ and $V$ represent auxiliary random variables, $\kappa$ is a scalar constant controlling the noise level (which equals $1$ for one-dimensional simulations and $0$ otherwise), and $\epsilon$ is sampled from an independent standard normal distribution unless mentioned otherwise.
Linear $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}$: $$\begin{aligned}
{\ensuremath{X}}&\sim {\mathcal{U}}(-1,1)^{p},\\
{\ensuremath{Y}}&=w{^{\ensuremath{\mathsf{T}}}}{\ensuremath{X}}+\kappa\epsilon.\end{aligned}$$
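As a concrete illustration of the conventions above (the decaying weight vector $w$ and the noise switch $\kappa$), the linear simulation can be sampled with the Python standard library alone. The function name and interface below are ours, not part of any released package.

```python
import random

def linear_sim(n, p, kappa=0):
    """Draw n samples from the linear simulation: X ~ U(-1,1)^p and
    Y = w^T X + kappa * eps, with decaying weights w_[d] = 1/d."""
    w = [1.0 / d for d in range(1, p + 1)]
    X, Y = [], []
    for _ in range(n):
        x = [random.uniform(-1, 1) for _ in range(p)]
        eps = random.gauss(0, 1)  # standard normal noise, scaled by kappa
        X.append(x)
        Y.append(sum(wd * xd for wd, xd in zip(w, x)) + kappa * eps)
    return X, Y
```

Setting `kappa=0` reproduces the noiseless high-dimensional regime used later in the experiments.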
Exponential $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}$: $$\begin{aligned}
{\ensuremath{X}}&\sim {\mathcal{U}}(0,3)^{p}, \\
{\ensuremath{Y}}&=exp(w{^{\ensuremath{\mathsf{T}}}}{\ensuremath{X}})+10\kappa\epsilon.\end{aligned}$$
Cubic $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}$: $$\begin{aligned}
{\ensuremath{X}}&\sim {\mathcal{U}}(-1,1)^{p}, \\
{\ensuremath{Y}}&=128(w{^{\ensuremath{\mathsf{T}}}}{\ensuremath{X}}-\tfrac{1}{3})^3+48(w{^{\ensuremath{\mathsf{T}}}}{\ensuremath{X}}-\tfrac{1}{3})^2-12(w{^{\ensuremath{\mathsf{T}}}}{\ensuremath{X}}-\tfrac{1}{3})+80\kappa\epsilon.\end{aligned}$$
Joint normal $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}^{p}$: Let $\rho=1/2p$, $I_{p}$ be the identity matrix of size $p \times p$, $J_{p}$ be the matrix of ones of size $p \times p$, and $\Sigma = \begin{bmatrix} I_{p}&\rho J_{p}\\ \rho J_{p}& (1+0.5\kappa) I_{p} \end{bmatrix}$. Then $$\begin{aligned}
({\ensuremath{X}}, {\ensuremath{Y}}) &\sim {\mathcal{N}}(0, \Sigma).\end{aligned}$$
Step Function $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}$: $$\begin{aligned}
{\ensuremath{X}}&\sim {\mathcal{U}}(-1,1)^{p},\\
{\ensuremath{Y}}&={\boldsymbol{I}}(w{^{\ensuremath{\mathsf{T}}}}{\ensuremath{X}}>0)+\epsilon,\end{aligned}$$ where ${\boldsymbol{I}}$ is the indicator function, that is ${\boldsymbol{I}}(z)$ is unity whenever $z$ is true, and zero otherwise.
Quadratic $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}$: $$\begin{aligned}
{\ensuremath{X}}&\sim {\mathcal{U}}(-1,1)^{p},\\
{\ensuremath{Y}}&=(w{^{\ensuremath{\mathsf{T}}}}{\ensuremath{X}})^2+0.5\kappa\epsilon.\end{aligned}$$
W Shape $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}$: $U \sim {\mathcal{U}}(-1,1)^{p}$, $$\begin{aligned}
{\ensuremath{X}}&\sim {\mathcal{U}}(-1,1)^{p},\\
{\ensuremath{Y}}&=4\left[ \left( (w{^{\ensuremath{\mathsf{T}}}}{\ensuremath{X}})^2 - \tfrac{1}{2} \right)^2 + w{^{\ensuremath{\mathsf{T}}}}U/500 \right]+0.5\kappa\epsilon.\end{aligned}$$
Spiral $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}$: $U \sim {\mathcal{U}}(0,5)$, $\epsilon \sim {\mathcal{N}}(0, 1)$, $$\begin{aligned}
{\ensuremath{X}}_{[d]}&=U \sin(\pi U) \cos^{d}(\pi U) \mbox{ for $d=1,\ldots,p-1$},\\
{\ensuremath{X}}_{[p]}&=U \cos^{p}(\pi U),\\
{\ensuremath{Y}}&= U \sin(\pi U) +0.4 p\epsilon.\end{aligned}$$
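The spiral is the most involved of the geometric settings. Below is a sketch of a sampler (function name ours) that also returns the latent parameter $U$ so the defining formulas can be inspected.

```python
import math
import random

def spiral_sim(n, p):
    """Draw n samples from the spiral simulation: for U ~ U(0,5),
    X_[d] = U sin(pi U) cos^d(pi U) for d = 1, ..., p-1,
    X_[p] = U cos^p(pi U), and Y = U sin(pi U) + 0.4 p eps."""
    X, Y, U = [], [], []
    for _ in range(n):
        u = random.uniform(0, 5)
        x = [u * math.sin(math.pi * u) * math.cos(math.pi * u) ** d
             for d in range(1, p)]
        x.append(u * math.cos(math.pi * u) ** p)  # last coordinate
        X.append(x)
        Y.append(u * math.sin(math.pi * u) + 0.4 * p * random.gauss(0, 1))
        U.append(u)
    return X, Y, U
```

Note that the noise only enters $Y$; the coordinates of $X$ are deterministic functions of $U$.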
Uncorrelated Bernoulli $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}$: $U \sim {\mathcal{B}}(0.5)$, $\epsilon_{1} \sim {\mathcal{N}}(0, I_{p})$, $\epsilon_{2} \sim {\mathcal{N}}(0, 1)$, $$\begin{aligned}
{\ensuremath{X}}&\sim {\mathcal{B}}(0.5)^{p}+0.5\epsilon_{1},\\
{\ensuremath{Y}}&=(2U-1)w{^{\ensuremath{\mathsf{T}}}}{\ensuremath{X}}+0.5\epsilon_{2}.\end{aligned}$$
Logarithmic $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}^{p}$: $\epsilon \sim {\mathcal{N}}(0, I_{p})$ $$\begin{aligned}
{\ensuremath{X}}&\sim {\mathcal{N}}(0, I_{p}),\\
{\ensuremath{Y}}_{[d]}&=2\log_{2}(|{\ensuremath{X}}_{[d]}|)+3\kappa\epsilon_{[d]} \mbox{ for $d=1,\ldots,p$.}\end{aligned}$$
Fourth Root $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}$: $$\begin{aligned}
{\ensuremath{X}}&\sim {\mathcal{U}}(-1,1)^{p},\\
{\ensuremath{Y}}&=|w{^{\ensuremath{\mathsf{T}}}}{\ensuremath{X}}|^\frac{1}{4}+\frac{\kappa}{4}\epsilon.\end{aligned}$$
Sine Period $4\pi$ $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}^{p}$: $U \sim {\mathcal{U}}(-1,1)$, $V \sim {\mathcal{N}}(0,1)^{p}$, $\theta=4\pi$, $$\begin{aligned}
{\ensuremath{X}}_{[d]}&=U+0.02 p V_{[d]} \mbox{ for $d=1,\ldots,p$}, \\
{\ensuremath{Y}}&=\sin ( \theta {\ensuremath{X}})+\kappa\epsilon.\end{aligned}$$
Sine Period $16\pi$ $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}^{p}$: Same as above except $\theta=16\pi$ and the noise on ${\ensuremath{Y}}$ is changed to $0.5\kappa\epsilon$.
Square $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}^{p}$: Let $U \sim {\mathcal{U}}(-1,1)$, $V \sim {\mathcal{U}}(-1,1)$, $\epsilon \sim {\mathcal{N}}(0,1)^{p}$, $\theta=-\frac{\pi}{8}$. Then $$\begin{aligned}
{\ensuremath{X}}_{[d]}&=U \cos\theta + V \sin\theta + 0.05 p\epsilon_{[d]},\\
{\ensuremath{Y}}_{[d]}&=-U \sin\theta + V \cos\theta,\end{aligned}$$ for $d=1,\ldots,p$.
Two Parabolas $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}$: $\epsilon \sim {\mathcal{U}}(0,1)$, $U \sim {\mathcal{B}}(0.5)$, $$\begin{aligned}
{\ensuremath{X}}&\sim {\mathcal{U}}(-1,1)^{p},\\
{\ensuremath{Y}}&=\left( (w{^{\ensuremath{\mathsf{T}}}}{\ensuremath{X}})^2 + 2\kappa\epsilon\right) \cdot (U-\tfrac{1}{2}).\end{aligned}$$
Circle $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}$: $U \sim {\mathcal{U}}(-1,1)^{p}$, $\epsilon \sim {\mathcal{N}}(0, I_{p})$, $r=1$, $$\begin{aligned}
{\ensuremath{X}}_{[d]}&=r \left(\sin(\pi U_{[d+1]}) \prod_{j=1}^{d} \cos(\pi U_{[j]})+0.4 \epsilon_{[d]}\right) \mbox{ for $d=1,\ldots,p-1$},\\
{\ensuremath{X}}_{[p]}&=r \left(\prod_{j=1}^{p} \cos(\pi U_{[j]})+0.4 \epsilon_{[p]}\right),\\
{\ensuremath{Y}}&= \sin(\pi U_{[1]}).\end{aligned}$$
Ellipse $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}$: Same as above except $r=5$.
Diamond $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}^{p}$: Same as “Square” except $\theta=-\frac{\pi}{4}$.
Multiplicative Noise $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}^{p}$: $U \sim {\mathcal{N}}(0, I_{p})$, $$\begin{aligned}
{\ensuremath{X}}&\sim {\mathcal{N}}(0, I_{p}),\\
{\ensuremath{Y}}_{[d]}&=U_{[d]}{\ensuremath{X}}_{[d]} \mbox{ for $d=1,\ldots,p$.}\end{aligned}$$
Multimodal Independence $({\ensuremath{X}},{\ensuremath{Y}}) \in {\mathbb{R}}^{p} \times {\mathbb{R}}^{p}$: Let $U \sim {\mathcal{N}}(0,I_{p})$, $V \sim {\mathcal{N}}(0,I_{p})$, $U' \sim {\mathcal{B}}(0.5)^{p}$, $V' \sim {\mathcal{B}}(0.5)^{p}$. Then $$\begin{aligned}
{\ensuremath{X}}&=U/3+2U'-1,\\
{\ensuremath{Y}}&=V/3+2V'-1.\end{aligned}$$
For the increasing dimension simulations in the main paper, we always set $\kappa=0$ and $n=100$, with $p$ increasing. For types $4,10,12,13,14,18,19,20$, $q=p$ such that $q$ increases as well; otherwise $q=1$. The decaying vector $w$ is utilized for $p>1$ to make the high-dimensional relationships more difficult (otherwise, additional dimensions only add more signal). For the one-dimensional simulations, we always set $p=q=1$, $\kappa=1$ and $n=100$.
[^1]: shenc@udel.edu
[^2]: cep@jhu.edu
[^3]: jovo@jhu.edu
[^4]: <https://github.com/neurodata/mgc-matlab>
[^5]: <https://CRAN.R-project.org/package=mgc>
---
abstract: 'Halved monotone triangles are a generalisation of vertically symmetric alternating sign matrices (VSASMs). We provide a weighted enumeration of halved monotone triangles with respect to a parameter which generalises the number of $-1$s in a VSASM. Among other things, this enables us to establish a generating function for vertically symmetric alternating sign trapezoids. Our results are mainly presented in terms of constant term expressions. For the proofs, we exploit Fischer’s method of operator formulae as a key tool.'
address: 'Universität Wien, Fakultät für Mathematik, Oskar-Morgenstern-Platz 1, 1090 Wien, Austria'
author:
- Hans Höngesberg
bibliography:
- 'HoengesbergHMT.bib'
title: Refined Enumeration of Halved Monotone Triangles and Applications to Vertically Symmetric Alternating Sign Trapezoids
---
[^1]
Introduction
============
After Robbins and Rumsey had introduced alternating sign matrices [@RR86] and conjectured together with Mills an enumeration formula [@MRR83], it took more than ten years to prove this formula. Zeilberger presented the first proof [@Zei96a]; the key ingredients of his intricate proof are constant term identities, with the aid of which he shows that alternating sign matrices are equinumerous with totally symmetric, self-complementary plane partitions. Shortly thereafter, Kuperberg provided a second, much shorter and compact proof [@Kup96] exploiting methods from statistical mechanics via the six-vertex model. The latter approach has become the prominent tool in studying alternating sign arrays, especially symmetry classes of alternating sign matrices as well as other interesting subclasses. The systematic study of symmetric alternating sign matrices, initiated by Stanley [@Rob91], has become a fruitful but arduous task: Robbins initially conjectured a list of simple enumeration formulae [@Rob], the last of which has only recently been proved [@BFK17].
Alternating sign triangles are a newly introduced class of alternating sign arrays. Ayyer, Behrend, and Fischer have shown that alternating sign triangles with $n$ rows and $n \times n$ alternating sign matrices are equinumerous [@ABF]. Moreover, it is conjectured that even the generating functions of vertically symmetric alternating sign triangles and vertically symmetric alternating sign matrices with respect to the number of $-1$s are equal [@ABF p. 33].
Alternating sign triangles have been generalised to alternating sign trapezoids. The notion of $(n,l)$-alternating sign trapezoids with bases of odd length $l$ has first been introduced by Ayyer and by Aigner [@Aig17]; Behrend and Fischer expanded the notion to include alternating sign trapezoids with bases of even length $l$. In this way, alternating sign trapezoids generalise alternating sign triangles and quasi alternating sign triangles as defined in [@ABF] at the same time. It has been independently conjectured, first by Behrend and later by Aigner, that $(n,l)$-alternating sign trapezoids are equinumerous with column strict shifted plane partitions of class $l-1$ with at most $n$ parts in the top row. This fact is shown by Fischer [@Fis] by means of operator formulae and constant term expressions. In addition, Behrend and Fischer present a proof using the six-vertex model in a forthcoming paper [@BF].
The basic objects of our investigation are halved monotone triangles which are originally defined and enumerated in [@Fis09]. We introduce halved trees as a generalisation of halved monotone triangles. The purpose of this paper is to provide refined enumeration formulae for halved monotone triangles and halved trees in terms of operator formulae and constant term expressions to study vertically symmetric alternating sign trapezoids. In Section \[sec:Preliminaries\], we provide the basic definitions and explain the correspondence between halved trees and alternating sign triangles and trapezoids. In Section \[sec:EnumHMT\], we discuss the refined enumeration of halved monotone triangles and halved trees. Theorems \[thm:QHTREEenumeration\] and \[thm:QHTREEConstantTerm\] state the main enumeration formulae for halved trees, which are applied to the case of alternating sign arrays in Section \[sec:EnumAST\]. Theorem \[thm:VSASTPQEnumeration\] and Corollary \[thm:VSASTPQEnumerationOdd\] establish generating functions of vertically symmetric alternating sign trapezoids. In Section \[sec:Proofs\], we provide the proofs and some technical details. Finally in Section \[sec:Remarks\], we make some remarks about the $2$-enumeration of halved monotone triangles and the enumeration of the closely related halved Gelfand-Tsetlin patterns.
Since we provide the first expression so far for the number of vertically symmetric alternating sign triangles, our results and especially Corollary \[thm:VSASTPQEnumeration\] are a step towards the proof of the conjecture that there are as many vertically symmetric alternating sign triangles with $n$ rows as vertically symmetric $n \times n$ alternating sign matrices.
Preliminaries {#sec:Preliminaries}
=============
In this section, we define the basic objects of our study and explain the relation between vertically symmetric alternating sign trapezoids and halved trees. This correspondence is crucial for the enumerations in Section \[sec:EnumAST\].
\[def:ASTrap\] For given integers $n \geq 1$ and $l \geq 2$, an *$(n,l)$-alternating sign trapezoid* is defined as an array of integers in a trapezoidal shape with entries $-1$, $0$ or $+1$ arranged in $n$ centred rows of length $2n+l-2, 2n+l-4,\dots,l+2$, and $l$ in the following way $$\begin{array}[t]{ccccccccc}
a_{1,1}&a_{1,2}&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&a_{1,2n+l-2}\\
&a_{2,2}&\cdots&\cdots&\cdots&\cdots&\cdots&a_{2,2n+l-3}&\\
&&\ddots&&&&\reflectbox{$\ddots$}&&\\
&&&a_{n,n}&\cdots&a_{n,n+l-1}&&&
\end{array}$$ such that
- the nonzero entries alternate in sign in each row and each column,
- the topmost nonzero entry in each column is $1$ (if existent),
- the entries in each row sum up to $1$, and
- the entries in the central $l-2$ columns sum up to $0$.
In the case of $l=1$, an *$(n,1)$-alternating sign trapezoid* is defined as above with the exception that the entry in the bottom row can be $0$ or $1$.
Note that alternating sign triangles of order $n$ correspond to $(n-1,3)$-alternating sign trapezoids by deleting the bottom row and that quasi alternating sign triangles coincide with $(n,1)$-alternating sign trapezoids.
Let us give an example of a vertically symmetric $(6,9)$-alternating sign trapezoid: $$\begin{array}[t]{ccccccccccccccccccc}
0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0&0\\
&0&0&0&0&0&0&0&1&-1&1&0&0&0&0&0&0&0&\\
&&0&1&0&0&0&0&-1&1&-1&0&0&0&0&1&0&&\\
&&&0&0&0&1&0&0&-1&0&0&1&0&0&0&&&\\
&&&&1&0&-1&0&0&1&0&0&-1&0&1&&&&\\
&&&&&1&0&0&0&-1&0&0&0&1&&&&&\\
\end{array}$$
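The four conditions lend themselves to a mechanical check. The sketch below (helper names ours; the relaxed bottom-row condition for $l=1$ is not modelled) verifies them for the vertically symmetric $(6,9)$-example above, embedding each row in the full width-$(2n+l-2)$ grid.

```python
def alternates(values):
    """Check that the nonzero entries of a sequence alternate in sign."""
    nz = [v for v in values if v]
    return all(a == -b for a, b in zip(nz, nz[1:]))

def check_ast(grid, n, l):
    """Verify the four defining conditions of an (n,l)-alternating sign
    trapezoid; grid uses None for cells outside the trapezoidal shape."""
    width = 2 * n + l - 2
    rows_ok = all(alternates(r) and sum(v for v in r if v is not None) == 1
                  for r in grid)
    cols = [[grid[i][j] for i in range(n) if grid[i][j] is not None]
            for j in range(width)]
    cols_ok = all(alternates(col) for col in cols)
    nz_cols = [[v for v in col if v] for col in cols]
    top_ok = all(not nz or nz[0] == 1 for nz in nz_cols)
    central_ok = all(sum(cols[j]) == 0 for j in range(n, n + l - 2))
    return rows_ok and cols_ok and top_ok and central_ok

# the vertically symmetric (6,9)-example, one list per row
rows = [
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 1, -1, 1, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0, -1, 1, -1, 0, 0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0, 0, -1, 0, 0, 1, 0, 0, 0],
    [1, 0, -1, 0, 0, 1, 0, 0, -1, 0, 1],
    [1, 0, 0, 0, -1, 0, 0, 0, 1],
]
grid = [[None] * i + r + [None] * i for i, r in enumerate(rows)]
assert check_ast(grid, 6, 9)
```

Reversing each padded row leaves the grid unchanged, which is exactly the vertical symmetry of the example.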
The entries in each column of an alternating sign trapezoid sum up to $0$ or $1$ by the first and the second condition in the Definition \[def:ASTrap\]. If the column sum is $1$, we call this column a *$1$-column*, otherwise a *$0$-column*. Among the $1$-columns we distinguish the *$10$-columns* from the *$11$-columns* depending on whether the bottom entry is $0$ or $1$, respectively.
We focus on *vertically symmetric alternating sign trapezoids*. These are alternating sign trapezoids which stay invariant under reflection along a vertical axis of symmetry. Since the nonzero entries in each row alternate in sign and sum up to $1$, there is exactly one $1$ and no other nonzero entry in the top row by the second condition. Therefore, due to symmetry, there exists a central column. Hence $l$ has to be an odd integer.
If we number the $n$ leftmost columns of an ($n$,$l$)-alternating sign trapezoid from $-n$ to $-1$, we can associate the *$1$-column vector* $\mathbf{c}=\left(c_1,\dots,c_m\right)$ with $-n \le c_1 < \dots < c_m \le -1$ according to the positions of the $1$-columns within the $n$ leftmost columns of the alternating sign trapezoid. In the case of vertical symmetry, they are equally distributed on both sides.
Assume that $l \ge 3$. Since the entries of each of the $n$ rows sum up to $1$, there are exactly $n$ columns with column sum $1$. There is no zero in the central column due to symmetry and since the nonzero entries alternate in sign, the central column is $(1,-1,1,\dots,(-1)^{n+1})^\top$. The sum of these entries must be $1$. Hence, $n$ has to be even and $m=\frac{n}{2}$.
If $l=1$, we have to consider the parity of $n$. If $n$ is even, the vertical axis of symmetry is the central column $(1,-1,1,\dots,1,0)^\top$. Thus, there are exactly $n-1$ rows with row sum $1$ and consequently as many $1$-columns. One of those is the middle column; the remaining $n-2$ are equally distributed on both sides. If $n$ is odd, the axis of symmetry is either $(1,-1,1,\dots,1,-1,1)^\top$ or $(1,-1,1,\dots,1,-1,0)^\top$.
The vertically symmetric $(6,9)$-alternating sign trapezoid given in the example above has two $10$-columns, four $11$-columns and $1$-column vector $\mathbf{c}=(-3,-2,-1)$. The central column is given by $(1,-1,1,-1,1,-1)^\top$.
We define two weights on vertically symmetric $(n,l)$-alternating sign trapezoids: the *$Q$-weight* is $Q$ raised to the number of $-1$s in the $n-1+\frac{l-1}{2}$ leftmost columns; the *$P$-weight* is $P$ raised to the number of $10$-columns within the $n-1$ leftmost columns.
In order to enumerate vertically symmetric alternating sign trapezoids, we transform them into truncated halved monotone triangles.
For a given integer $n \geq 1$, a *halved monotone triangle of order n* is a triangular array of integers with $n$ rows of one of the following shapes depending on the parity of $n$:
$$\begin{array}[t]{cccc}
a_{n,1}&\dots&\dots&a_{n,\frac{n+1}{2}}\\
a_{n-1,1}&\dots&a_{n-1,\frac{n-1}{2}}&\\
\iddots&\iddots&\vdots&\\
a_{4,1}&a_{4,2}&&\\
a_{3,1}&a_{3,2}&&\\
a_{2,1}&&&\\
a_{1,1}&&&
\end{array}$$ if $n$ is odd, and $$\begin{array}[t]{ccc}
a_{n,1}&\dots&a_{n,\frac{n}{2}}\\
a_{n-1,1}&\dots&a_{n-1,\frac{n}{2}}\\
\iddots&\iddots&\\
a_{3,1}&a_{3,2}&\\
a_{2,1}&&\\
a_{1,1}&&
\end{array}$$ if $n$ is even.
The entries
- strictly increase along rows and
- weakly increase along $\nearrow$-diagonals and $\searrow$-diagonals.
For given integers $m \geq 0$ and $n \geq 1$ with $n \geq m$ as well as a weakly decreasing sequence $\mathbf{s}=\left(s_1,s_2,\dots,s_m\right)$ of nonnegative integers, we define a *halved $\mathbf{s}$-tree* as an array of integers which arises from a halved monotone triangle of order $n$ by truncating the diagonals: for each $1 \leq i \leq m$ we delete the $s_i$ bottom entries of the $i^{\text{th}}$ $\nearrow$-diagonal counted from left.
We say that a halved $\mathbf{s}$-tree has bottom row $\mathbf{k}=\left(k_1, \dots, k_{\lceil \frac{n}{2} \rceil}\right)$ if for all $1 \leq i \leq \lceil\frac{n}{2}\rceil$ the bottom entry in the $i^{\text{th}}$ $\nearrow$-diagonal counted from left to right is $k_i$.
The following example is the shape of a halved $(7,3,1)$-tree with $9$ rows:
(4,-1) rectangle (13,9);
in [[,,,$a_{9,4}$,$a_{9,4}$]{}, [,,$a_{8,3}$,$a_{8,4}$]{}, [,,$a_{7,3}$,$a_{7,4}$]{}, [,$a_{6,2}$,$a_{6,3}$]{}, [,$a_{5,2}$,$a_{5,3}$]{}, [,$a_{4,2}$]{}, [,$a_{3,2}$]{}, [$a_{2,1}$]{}, [$a_{1,1}$]{}]{}
in
Notice that halved trees are a generalisation of halved monotone triangles since the latter can be seen as halved $(0,\dots,0)$-trees.
We call an entry $a_{i,j}$ of a halved tree *special* if the entry $a_{i+1,j}$ exists and $$\begin{cases}
a_{i+1,j} < a_{i,j} < a_{i+1,j+1} & \text{if $a_{i+1,j+1}$ exists,}\\
a_{i+1,j} < a_{i,j} & \text{otherwise.}
\end{cases}$$ As in the case of vertically symmetric alternating sign trapezoids, we define two weights on halved trees: The *$Q$-weight* of a halved tree is defined as $Q$ raised to the number of special entries. It essentially counts the entries that lie strictly between the neighbouring entries in the row below. The *$P$-weight* of a halved tree is $P$ raised to the number of $\nearrow$-diagonals such that the two bottommost entries are equal. See Figure \[figure:vsastrapezoid\] for an example.
$$\begin{array}[t]{ccccccccccccccccccc}
0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0&0\\
&0&0&0&0&0&0&0&1&-1&1&0&0&0&0&0&0&0&\\
&&0&1&0&0&0&0&-1&1&-1&0&0&0&0&1&0&&\\
&&&0&0&0&1&0&0&-1&0&0&1&0&0&0&&&\\
&&&&1&0&-1&0&0&1&0&0&-1&0&1&&&&\\
&&&&&1&0&0&0&-1&0&0&0&1&&&&&\\
\end{array}
\quad\raisebox{-7ex}{$\longleftrightarrow$}\quad
\begin{array}[t]{ccc}
&&\\
&&2\\
&-3&\\
-3&&0\\
&-2&\\
&&-1
\end{array}$$
There is a correspondence between vertically symmetric alternating sign trapezoids and certain halved trees. This is a variant of a bijection presented by Fischer [@Fis]. To begin with, we assume that $n$ is even and demonstrate the modified construction with the example from Figure \[figure:vsastrapezoid\]. Given a vertically symmetric ($n$,$l$)-alternating sign trapezoid with $1$-column vector $\mathbf{c}=(c_1,\dots,c_m)$, we delete all the entries to the right of the vertical symmetry axis and add $0$s on the left to complete the array to an $n \times \left(n+\frac{l-1}{2}\right)$-rectangular shape. We number the columns from $-n$ to $\frac{l-3}{2}$ from left to right. Then we replace every entry by the partial column sum of all the entries that lie above it in the same column, including the entry itself. This yields an array of integers consisting of $0$s and $1$s. In our example, we get the following array: $$\begin{array}[t]{cccccccccc}
0&0&0&0&0&0&0&0&0&1\\
{\color{gray}0}&0&0&0&0&0&0&0&1&0\\
{\color{gray}0}&{\color{gray}0}&0&1&0&0&0&0&0&1\\
{\color{gray}0}&{\color{gray}0}&{\color{gray}0}&1&0&0&1&0&0&0\\
{\color{gray}0}&{\color{gray}0}&{\color{gray}0}&{\color{gray}1}&1&0&0&0&0&1\\
{\color{gray}0}&{\color{gray}0}&{\color{gray}0}&{\color{gray}1}&{\color{gray}1}&1&0&0&0&0
\end{array}$$
We record the positions of all $1$s. We proceed row by row beginning at the top and record the column of each nonzero entry. The first row of an alternating sign trapezoid consists of exactly one $1$; in the case of vertical symmetry at position $\frac{l-3}{2}$. Copy down this position in the top row of the shape of a halved tree with $n$ rows. Record the positions of the $1$s row by row. Thus, our example turns into the following halved monotone triangle: $$\begin{array}[t]{cccccc}
&&&&&3\\
&&&&2&\\
&&&-3&&3\\
&&-3&&0&\\
&{\color{gray}-3}&&-2&&3\\
{\color{gray}-3}&&{\color{gray}-2}&&-1&
\end{array}$$
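The steps just described can be sketched in code (variable names ours): take the left half of the example trapezoid, pad each row with zeros on the left, form partial column sums from the top down, and read off the column numbers of the $1$s, with columns numbered from $-n$.

```python
n, l = 6, 9
rows = [
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 1, -1, 1, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0, -1, 1, -1, 0, 0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0, 0, -1, 0, 0, 1, 0, 0, 0],
    [1, 0, -1, 0, 0, 1, 0, 0, -1, 0, 1],
    [1, 0, 0, 0, -1, 0, 0, 0, 1],
]
half_width = n + (l - 1) // 2          # columns numbered -n, ..., (l-3)/2
padded = [([0] * i + r)[:half_width] for i, r in enumerate(rows)]

# partial column sums from the top down
sums = [list(padded[0])]
for r in padded[1:]:
    sums.append([a + b for a, b in zip(sums[-1], r)])

# record the positions of the 1s, renumbering columns from -n
triangle = [[j - n for j, v in enumerate(row) if v == 1] for row in sums]
```

Running this reproduces the rows $(3)$, $(2)$, $(-3,3)$, $(-3,0)$, $(-3,-2,3)$, $(-3,-2,-1)$ of the halved monotone triangle above.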
Note that an entry $-1$ in the $n+\frac{l-3}{2}$ columns of the alternating sign trapezoid corresponds to exactly one entry in the halved tree that lies strictly between the neighbouring entries in the row below. The $-1$s in the middle column of the alternating sign trapezoid correspond to the entries $\frac{l-3}{2}$ in the right column of the halved tree; they are strictly larger than the left neighbouring entry in the row below. We delete the rightmost column and the entries that originate from the additionally added $0$s at the beginning: $$\begin{array}[t]{ccc}
&&2\\
&-3&\\
-3&&0\\
&-2&\\
&&-1
\end{array}$$
In total, we obtain a halved $(-c_1-1,\dots,-c_m-1)$-tree with $n-1$ rows and bottom row $(c_1,\dots,c_m)$ without entries larger than $\frac{l-5}{2}$, whose $Q$-weight coincides with the $Q$-weight of the corresponding alternating sign trapezoid.
The following observation is crucial: The two bottommost entries of the $i^{\text{th}}$ $\nearrow$-diagonal of the halved tree (counted from left) are identical $c_i$ if and only if the column $c_i < 0$ of the corresponding alternating sign trapezoid is a $10$-column. Consequently, the $P$-weights of the halved tree and the alternating sign trapezoid coincide, too.
Hence, in order to enumerate vertically symmetric ($n$,$l$)-alternating sign trapezoids with even $n$ and $1$-column vector $(c_1,\dots,c_m)$, we have to enumerate halved $(-c_1-1,\dots,-c_m-1)$-trees with $n-1$ rows, bottom row $(c_1,\dots,c_m)$ and no entry larger than $\frac{l-5}{2}$.
If $n$ is odd and, thus, $l=1$, the resulting halved trees have different properties than in the case of even $n$ demonstrated above. In order to avoid a distinction of cases, we make use of the following observation: the bottom row of a vertically symmetric $(n,1)$-alternating sign trapezoid for odd $n$ is either $0$ or $1$. We can delete this bottom row to obtain a vertically symmetric $(n-1,3)$-alternating sign trapezoid. Hence we have reduced the problem to enumerating vertically symmetric alternating sign trapezoids with an even number of rows.
Weighted Enumeration of Halved Monotone Triangles and Trees {#sec:EnumHMT}
===========================================================
Halved monotone triangles and trees can be enumerated by so-called operator formulae. This method of enumeration has been initiated by Fischer [@Fis06] and developed in a series of follow-up papers [@Fis09; @Fis10; @Fis11; @Fis16; @Fis18]. To this end, we need to define the following operators: the *shift operator* $\operatorname{E}_x f(x) {\mathrel{\mathop:\!\!=}}f(x+1)$, the *forward difference* $\operatorname{\Delta}_x {\mathrel{\mathop:\!\!=}}\operatorname{E}_x-\operatorname{id}$ and the *backward difference* $\operatorname{\delta}_x {\mathrel{\mathop:\!\!=}}\operatorname{id}-\operatorname{E}_x^{-1}$. Given a variable $x$ and an integer $a$, we use the notation $\operatorname{E}_a f(a) {\mathrel{\mathop:\!\!=}}\left. \operatorname{E}_x f(x)\right|_{x=a}$ and similarly for other operator expressions. Note that all these operators commute independently of the variables they refer to.
The following theorem is our main result on the refined enumeration of halved monotone triangles. It generalises the straight enumeration [@Fis09 Theorem 1], which can be recovered by setting $Q=1$.
\[thm:QHMTenumeration\] The $Q$-generating function ${\leftidx{^Q}{\operatorname{HMT}}{}}_n\left(K;\mathbf{k}\right)$ of halved monotone triangles of order $n$, prescribed bottom row $\mathbf{k}=(k_1,\dots,k_{\lceil\frac{n}{2}\rceil})$ and no entry larger than $K$ is given by $$\begin{gathered}
\prod_{1\leq s<t\leq \frac{n+1}{2}} \left(\operatorname{E}_{k_s}+\operatorname{E}_{k_t}^{-1}-(2-Q)\operatorname{E}_{k_s}\operatorname{E}_{k_t}^{-1}\right) \left(\operatorname{E}_{k_s}+\operatorname{E}_{k_t}-(2-Q)\operatorname{E}_{k_s}\operatorname{E}_{k_t}\right) \\
\times \prod_{1\leq i<j\leq \frac{n+1}{2}} \frac{(k_j-k_i+j-i)(2K+n+2-k_i-k_j-i-j)}{(j-i)(i+j-1)}
\end{gathered}$$ if $n\in\mathbb{N}$ is odd and by
$$\begin{gathered}
\prod_{r=1}^{\frac{n}{2}} \left((Q-1) \operatorname{E}_{k_r}+\operatorname{id}\right) \prod_{1\leq s<t\leq \frac{n}{2}} \left(\operatorname{E}_{k_s}+\operatorname{E}_{k_t}^{-1}-(2-Q)\operatorname{E}_{k_s}\operatorname{E}_{k_t}^{-1}\right) \left(\operatorname{E}_{k_s}+\operatorname{E}_{k_t}-(2-Q)\operatorname{E}_{k_s}\operatorname{E}_{k_t}\right) \\
\times \prod_{1\leq i<j\leq \frac{n}{2}}\frac{(k_j-k_i+j-i)(2K+n+2-k_i-k_j-i-j)}{(j-i)(i+j)} \prod_{i=1}^{\frac{n}{2}}\frac{K+\frac{n}{2}+1-k_i-i}{i}
\end{gathered}$$
if $n\in\mathbb{N}$ is even.
Halved trees are our main tool for enumerating vertically symmetric alternating sign trapezoids. They can be enumerated by an operator formula as shown in the following theorem. We obtain the generating function of halved trees by applying generalised difference operators to the generating functions in Theorem \[thm:QHMTenumeration\].
\[thm:QHTREEenumeration\] The $Q$-generating function of a halved $\mathbf{s}$-tree with prescribed bottom row $\mathbf{k}$ and no entry greater than $K$ is given by $$\label{eq:QHTREEenumeration}
\prod_{r=1}^{\lceil\frac{n}{2}\rceil} \left( -\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{k_r} \right)^{s_r} {\leftidx{^Q}{\operatorname{HMT}}{}}_n(K;\mathbf{k}),$$ where $\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_x {\mathrel{\mathop:\!\!=}}(1 - (1-Q)\operatorname{E}_x)^{-1}\operatorname{\Delta}_x$.
Note that $\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_x = Q^{-1} \sum_{i=0}^{\infty} (Q^{-1}-1)^i \operatorname{\Delta}_x^{i+1}$ applied to a polynomial $f$ in $x$ becomes a finite sum because $\operatorname{\Delta}_x^{i+1} f(x)$ eventually vanishes for large enough $i$. Hence, the expression in Theorem \[thm:QHTREEenumeration\] is well defined.
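Because $\operatorname{\Delta}_x^{i+1} f$ vanishes once $i+1$ exceeds the degree of $f$, the operator can be evaluated exactly over the rationals. The sketch below (names ours; $Q$ fixed to an arbitrary rational value) implements the truncated series and checks the defining relation $\left(\operatorname{id}-(1-Q)\operatorname{E}_x\right) \operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_x f = \operatorname{\Delta}_x f$ on a cubic.

```python
from fractions import Fraction

def delta(f):
    """Forward difference: (Delta f)(x) = f(x+1) - f(x)."""
    return lambda x: f(x + 1) - f(x)

def qdelta(f, Q, deg):
    """^QDelta f = Q^{-1} sum_{i>=0} (Q^{-1}-1)^i Delta^{i+1} f;
    for a polynomial of degree deg the series stops at i = deg - 1."""
    powers = []
    g = delta(f)
    for _ in range(deg):            # Delta^1 f, ..., Delta^deg f
        powers.append(g)
        g = delta(g)
    Qinv = 1 / Fraction(Q)
    return lambda x: Qinv * sum((Qinv - 1) ** i * h(x)
                                for i, h in enumerate(powers))

# check the defining relation (id - (1-Q) E) ^QDelta f = Delta f
Q = Fraction(3, 5)
f = lambda x: Fraction(x) ** 3
g = qdelta(f, Q, 3)
for x in range(-3, 4):
    assert g(x) - (1 - Q) * g(x + 1) == f(x + 1) - f(x)
```

Since $\operatorname{id}-(1-Q)\operatorname{E}_x = Q\left(\operatorname{id}-(Q^{-1}-1)\operatorname{\Delta}_x\right)$, applying it to the truncated series telescopes back to $\operatorname{\Delta}_x f$ exactly.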
In the next theorem, we reformulate (\[eq:QHTREEenumeration\]) as constant term identities.
\[thm:QHTREEConstantTerm\] The $Q$-generating function for halved $\mathbf{s}$-trees with prescribed bottom row $\mathbf{k}$ and no entry larger than $K$ is the constant term of $$\begin{gathered}
\prod_{r=1}^{\frac{n+1}{2}} X_r^{1-n} (1+X_r)^{k_i-K-\frac{n+1}{2}} \left(-\frac{X_r}{Q-(1-Q)X_r}\right)^{s_r} \prod_{1\leq s<t\leq \frac{n+1}{2}} \left( X_t - X_s\right) \left( X_s + X_t + X_s X_t \right) \\
\times \left( Q + (Q-1)X_s + X_t + X_s X_t \right) \left( Q + (Q-1)X_s + (Q-1)X_t + (Q-2)X_s X_t \right)
\end{gathered}$$ if $n$ is odd and the constant term of $$\begin{gathered}
(-1)^{\frac{n}{2}} \prod_{r=1}^{\frac{n}{2}} X_r^{1-n} \left( Q - (1-Q)X_r \right) (1+X_r)^{k_i-K-\frac{n}{2}} \left(-\frac{X_r}{Q-(1-Q)X_r}\right)^{s_r} \prod_{1\leq s<t\leq \frac{n}{2}} \left( X_t - X_s\right) \\
\times \left( X_s + X_t + X_s X_t \right) \left( Q + (Q-1)X_s + X_t + X_s X_t \right) \left( Q + (Q-1)X_s + (Q-1)X_t + (Q-2)X_s X_t \right)
\end{gathered}$$ if $n$ is even.
If we set $Q=1$ and $\mathbf{s}=(0,\dots,0)$, we obtain constant term identities for the number of halved monotone triangles. Other constant term identities for the number of halved monotone triangles have already been established [@Fis09 (6.15) & (6.16)] but they are only valid if $k_i \le K+1-\frac{n+1}{2}$ or $k_i \le K+2-\frac{n}{2}$ for the case that $n$ is odd or even, respectively. In contrast, there are no such constraints in Theorem \[thm:QHTREEConstantTerm\].
The following theorem establishes a generating function of halved trees where we impose certain conditions on the bottommost entries of the diagonals. This becomes useful for the $P$-generating function of vertically symmetric alternating sign trapezoids in Theorem \[thm:VSASTPQCEnumeration\].
\[thm:HMTPQEnumeration\] Let $L_{=} \subseteq \{1,\dots,\lceil \frac{n}{2} \rceil \}$. The $Q$-generating function of halved $\mathbf{s}$-trees with prescribed bottom row $\mathbf{k}$ and no entry greater than $K$ such that the two bottommost entries in the $i^\text{th}$ diagonal are equal if $i \in L_{=}$ and different if $i \notin L_{=}$ is given by $$\prod_{i \in L_{=}} \left( -\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{k_i} \right) \prod_{\substack{1 \leq i \leq {\lceil \frac{n}{2} \rceil},\\ i \notin L_{=}}} \left( \operatorname{id}+ \operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{k_i} \right) \prod_{r=1}^{\lceil\frac{n}{2}\rceil} \left( -\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{k_r} \right)^{s_r} {\leftidx{^Q}{\operatorname{HMT}}{}}_n(K;\mathbf{k}).$$
Application to Vertically Symmetric Alternating Sign Trapezoids {#sec:EnumAST}
===============================================================
Let $\mathfrak{S}_m$ be the symmetric group of degree $m$.
The *symmetriser* $\operatorname{\mathbf{Sym}}$ of a function $f(x_1,\dots,x_m)$ is defined as $$\operatorname{\mathbf{Sym}}_{x_1,\dots,x_m} f(x_1,\dots,x_m) {\mathrel{\mathop:\!\!=}}\sum_{\sigma\in\mathfrak{S}_m} f(x_{\sigma(1)},\dots,x_{\sigma(m)});$$ the *antisymmetriser* $\operatorname{\mathbf{ASym}}$ is given by $$\operatorname{\mathbf{ASym}}_{x_1,\dots,x_m} f(x_1,\dots,x_m) {\mathrel{\mathop:\!\!=}}\sum_{\sigma\in\mathfrak{S}_m} \operatorname{sgn}(\sigma) f(x_{\sigma(1)},\dots,x_{\sigma(m)}).$$
We use the symmetriser in combination with the following lemma, which Zeilberger called the Stanton-Stembridge trick [@Zei96a Crucial Fact $\aleph_4$]. Therefore we denote by $$\operatorname{CT}_{x_1,\dots,x_m} f(x_1,\dots,x_m)$$ the constant term of a formal Laurent series $f$, that means the coefficient of the term $x_1^0 \cdots x_m^0$.
For a formal Laurent series $f(x_1,\dots,x_m)$ and a permutation $\sigma \in \mathfrak{S}_m$, it holds that $$\operatorname{CT}_{x_1,\dots,x_m} (f(x_{\sigma(1)},\dots,x_{\sigma(m)}))=\operatorname{CT}_{x_1,\dots,x_m} (f(x_1,\dots,x_m)).$$
As a consequence, it follows that $$\label{eq:SymMethod}
\operatorname{CT}_{x_1,\dots,x_m} f(x_1,\dots,x_m) = \operatorname{CT}_{x_1,\dots,x_m} \left( \frac{1}{m!} \operatorname{\mathbf{Sym}}_{x_1,\dots,x_m} f(x_1,\dots,x_m) \right).$$
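The trick is easy to see in coordinates. In the sketch below (an encoding of our own choosing), a Laurent polynomial is a dictionary from exponent tuples to coefficients, so permuting variables amounts to permuting tuple entries, and the identity (\[eq:SymMethod\]) can be checked directly.

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def sym(f, m):
    """Symmetriser of a Laurent polynomial stored as a dict mapping
    exponent tuples (e_1, ..., e_m) to coefficients."""
    out = {}
    for sigma in permutations(range(m)):
        for e, c in f.items():
            pe = tuple(e[s] for s in sigma)
            out[pe] = out.get(pe, 0) + c
    return out

def ct(f, m):
    """Constant term: the coefficient of x_1^0 ... x_m^0."""
    return f.get((0,) * m, 0)

# CT f = CT (1/m!) Sym f on a small three-variable example
m = 3
f = {(0, 0, 0): 2, (1, -1, 0): 5, (-2, 1, 1): 7, (0, 2, -1): -3}
assert ct(f, m) == Fraction(ct(sym(f, m), m), factorial(m)) == 2
```

Only the monomial with exponent tuple $(0,0,0)$ contributes to the constant term, and the symmetriser multiplies its coefficient by $m!$, which the division removes.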
We apply our previous results to the case of vertically symmetric $(n,l)$-alternating sign trapezoids in order to derive a generating function. To this end, we adapt the ideas of [@Fis19] to our setting.
Our first result about vertically symmetric alternating sign trapezoids is based on Theorem \[thm:HMTPQEnumeration\] and simply follows from the bijection between vertically symmetric alternating sign trapezoids and halved trees. We start with the case of even $n$.
\[thm:VSASTQCEnumeration\] Let $n$ be even and $l$ odd and let $C_{10} \subseteq \left\{ c_1,\dots,c_{\frac{n}{2}} \right\}$. The $Q$-generating function of vertically symmetric $(n,l)$-alternating sign trapezoids with $1$-columns in positions $\mathbf{c}=\left(c_1,\dots,c_{\frac{n}{2}}\right)$ such that $c_i$ is a $10$-column if and only if $c_i \in C_{10}$ is given by $$\label{eq:VSASTQCEnumeration}
\prod_{c_i \in C_{10}} \left( -\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{c_i} \right) \prod_{\substack{1 \leq i \leq {\frac{n}{2}},\\ c_i \notin C_{10}}} \left( \operatorname{id}+ \operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{c_i} \right) \prod_{r=1}^{\frac{n}{2}} \left( -\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{c_r} \right)^{-c_r-1} {\leftidx{^Q}{\operatorname{HMT}}{}}_{n-1} \left( \frac{l-5}{2};\mathbf{c} \right).$$
In the following theorem we derive the $PQ$-generating function of vertically symmetric alternating sign trapezoids with a given distribution of $1$-columns.
\[thm:VSASTPQCEnumeration\] Let $n$ be even. The $PQ$-generating function of vertically symmetric $(n,l)$-alternating sign trapezoids with $1$-columns in positions $\mathbf{c}=\left(c_1,\dots,c_{\frac{n}{2}}\right)$ is given by $$\prod_{r=1}^{\frac{n}{2}} \left( \operatorname{id}- (P-1) \operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{c_r} \right) \left( -\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{c_r} \right)^{-c_r-1} {\leftidx{^Q}{\operatorname{HMT}}{}}_{n-1} \left( \frac{l-5}{2};\mathbf{c} \right).$$
This is equal to the constant term of $$\begin{gathered}
\prod_{r=1}^{\frac{n}{2}} X_r^{2-n} \left(1+X_r\right)^{c_r-\frac{l-5}{2}-\frac{n}{2}} \left( \frac{Q-(P-Q)X_r}{Q-(1-Q)X_r} \right) \left(- \frac{X_r}{Q - (1-Q)X_r}\right)^{-c_r-1}\\
\times \prod_{1\leq s<t\leq \frac{n}{2}} \left( \left( X_t - X_s\right) \left( X_s + X_t + X_s X_t \right) \right.\\
\left. \times \left( Q + (Q-1)X_s + X_t + X_s X_t \right) \left( Q + (Q-1)X_s + (Q-1)X_t + (Q-2)X_s X_t \right) \right).
\end{gathered}$$
In the next theorem we provide the generating function of all vertically symmetric $(n,l)$-alternating sign trapezoids without prescribing the positions of the $1$-columns.
\[thm:VSASTPQEnumeration\] Let $n$ be even and $l$ odd. The $PQ$-generating function of vertically symmetric $(n,l)$-alternating sign trapezoids is given by the constant term of $$\begin{gathered}
\frac{1}{\left(\frac{n}{2}\right)!} \prod_{r=1}^{\frac{n}{2}} \frac{ Q+(Q-P) X_r}{X_r^{n-2} \left(1+X_r\right)^{\frac{l-5}{2}+\frac{n}{2}} \left(Q(1+X_r)^2 - X_r^2\right)} \\
\times \prod_{1\leq s<t\leq \frac{n}{2}} \frac{\left( X_t - X_s\right)^2 \left( X_s + X_t + X_s X_t \right) \left( Q + (Q-1)X_s + (Q-1)X_t + (Q-2)X_s X_t \right) \left(Q-X_s X_t\right)}{Q(1+X_s)(1+X_t)-X_s X_t}.
\end{gathered}$$
If $n$ is odd, the bottom row of a vertically symmetric $(n,1)$-alternating sign trapezoid is either $1$ or $0$. We can delete this row and obtain a vertically symmetric $(n-1,3)$-alternating sign trapezoid, as described above. This observation implies the following theorem.
\[thm:VSASTPQEnumerationOdd\] Let $n$ be odd. The $PQ$-generating function of vertically symmetric $(n,1)$-alternating sign trapezoids is given by the constant term of $$\begin{gathered}
\frac{2}{\left(\frac{n-1}{2}\right)!} \prod_{r=1}^{\frac{n-1}{2}} \frac{ Q+(Q-P) X_r}{X_r^{n-3} \left(1+X_r\right)^{\frac{n-3}{2}} \left(Q(1+X_r)^2 - X_r^2\right)} \\
\times \prod_{1\leq s<t\leq \frac{n-1}{2}} \frac{\left( X_t - X_s\right)^2 \left( X_s + X_t + X_s X_t \right) \left( Q + (Q-1)X_s + (Q-1)X_t + (Q-2)X_s X_t \right) \left(Q-X_s X_t\right)}{Q(1+X_s)(1+X_t)-X_s X_t}.
\end{gathered}$$
Vertically symmetric alternating sign triangles of order $n$ (the order $n$ is necessarily odd) can be transformed into vertically symmetric $(n-1,3)$-alternating sign trapezoids by cutting off the entry $1$ in the bottom row. If we define the $P$-weight and $Q$-weight in a similar way for alternating sign triangles, we obtain the following generating function:
\[thm:VSASTrianglePQEnumeration\] The $PQ$-generating function of vertically symmetric alternating sign triangles of order $n$ is given by the constant term of $$\begin{gathered}
\frac{1}{\left(\frac{n-1}{2}\right)!} \prod_{r=1}^{\frac{n-1}{2}} \frac{ Q+(Q-P) X_r}{X_r^{n-3} \left(1+X_r\right)^{\frac{n-3}{2}} \left(Q(1+X_r)^2 - X_r^2\right)} \\
\times \prod_{1\leq s<t\leq \frac{n-1}{2}} \frac{\left( X_t - X_s\right)^2 \left( X_s + X_t + X_s X_t \right) \left( Q + (Q-1)X_s + (Q-1)X_t + (Q-2)X_s X_t \right) \left(Q-X_s X_t\right)}{Q(1+X_s)(1+X_t)-X_s X_t}.
\end{gathered}$$
Proofs {#sec:Proofs}
======
Proof of Theorem \[thm:QHMTenumeration\]
----------------------------------------
Theorem \[thm:QHMTenumeration\] is the basic result on the enumeration of halved monotone triangles. Its proof is split up into several steps. First, Lemma \[lem:DetFormulae\] enables us to rewrite the operands appearing in Theorem \[thm:QHMTenumeration\] using determinants. Secondly, we observe how to build up halved monotone triangles recursively, which leads to the definition of certain summation operators. Thirdly, in Lemmata \[lem:SumOpNormal\] and \[lem:SumOpAlt\] we see how to apply the summation operators to certain polynomials, and finally, in Lemmata \[lem:AppSumOdd\] and \[lem:AppSumEven\] we apply these results to the operands in Theorem \[thm:QHMTenumeration\].
\[lem:DetFormulae\] The following determinant evaluations hold true: $$\label{eq:DetFormulaeEven}
\det_{1 \leq i,j \leq n} \left( \binom{k_i+j-1}{2j-1} \right)
= \prod_{1 \leq i < j \leq n}\frac{(k_j-k_i)(k_j+k_i)}{(j-i)(j+i)} \prod_{i=1}^n\frac{k_i}{i},$$
$$\label{eq:DetFormulaeOdd}
\det_{1 \leq i,j \leq n} \left( \binom{k_i+j-\frac{3}{2}}{2j-2} \right) = \prod_{1 \leq i < j \leq n} \frac{(k_j-k_i)(k_j+k_i)}{(j-i)(j+i-1)}.$$
First we observe that $$\begin{gathered}
\label{eq:DetEven}
\binom{k_i+j-1}{2j-1}=\frac{\left(k_i+j-1\right) \dotsm \left(k_i+1\right) k_i \left(k_i-1\right) \dotsm \left(k_i-j+1\right)}{(2j-1)!}\\
=\frac{k_i}{(2j-1)!}\prod_{r=1}^{j-1} \left(k_i-r\right) \left(k_i+r\right) = \frac{k_i}{(2j-1)!}\prod_{r=1}^{j-1}\left(k_i^2-r^2\right).
\end{gathered}$$
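The factorisation above is a polynomial identity in $k_i$, so it also holds for negative arguments if we read $\binom{x}{m}$ as the generalised binomial coefficient $x(x-1)\dotsm(x-m+1)/m!$. The following Python sketch (our illustration, using exact rational arithmetic) checks it over a range of test values:

```python
from fractions import Fraction
from math import factorial

def gbinom(x, m):
    """Generalised binomial coefficient binom(x, m) = x(x-1)...(x-m+1)/m!."""
    p = Fraction(1)
    for t in range(m):
        p *= Fraction(x) - t
    return p / factorial(m)

def rhs(k, j):
    """k/(2j-1)! * prod_{r=1}^{j-1} (k^2 - r^2)."""
    p = Fraction(k, factorial(2 * j - 1))
    for r in range(1, j):
        p *= k * k - r * r
    return p

checks = [(k, j) for k in range(-6, 7) for j in range(1, 6)]
assert all(gbinom(k + j - 1, 2 * j - 1) == rhs(k, j) for k, j in checks)
print("ok")
```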
From equation (\[eq:DetEven\]) it follows that $$\det_{1 \leq i,j \leq n} \left( \binom{k_i+j-1}{2j-1} \right) = \prod_{j=1}^n\frac{1}{(2j-1)!} \prod_{i=1}^n k_i \det_{1 \leq i,j \leq n}\left(\prod_{r=1}^{j-1}\left(k_i^2-r^2\right)\right).$$
By using [@Kra01 Proposition 1], a generalisation of the Vandermonde determinant evaluation, and the fact that $$\prod_{j=1}^n \frac{1}{j} \prod_{1 \leq i < j \leq n} \frac{1}{(j-i)(j+i)}
= \prod_{j=1}^{n} \frac{1}{(2j-1)!},$$ we obtain $$\det_{1 \leq i,j \leq n} \left( \binom{k_i+j-1}{2j-1} \right)
= \prod_{1 \leq i < j \leq n}\frac{(k_j-k_i)(k_j+k_i)}{(j-i)(j+i)} \prod_{i=1}^n\frac{k_i}{i}.$$
We similarly prove that $$\binom{k_i+j-\frac{3}{2}}{2j-2}=\frac{1}{(2j-2)!}\prod_{r=1}^{j-1}\left(k_i^2-\left(r-\frac{1}{2}\right)^2\right),$$ and, consequently, it follows that $$\det_{1 \leq i,j \leq n} \left( \binom{k_i+j-\frac{3}{2}}{2j-2}\right) =
\prod_{j=1}^n\frac{1}{(2j-2)!} \prod_{1 \leq i < j \leq n}\left(k_j^2-k_i^2\right).$$
Since $$\prod_{j=1}^n\frac{1}{(2j-2)!}=\prod_{1 \leq i < j \leq n} \frac{1}{(j-i)(j+i-1)},$$ it follows that $$\det_{1 \leq i,j \leq n} \left( \binom{k_i+j-\frac{3}{2}}{2j-2} \right) = \prod_{1 \leq i < j \leq n} \frac{(k_j-k_i)(k_j+k_i)}{(j-i)(j+i-1)}.$$
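Both determinant evaluations of Lemma \[lem:DetFormulae\] can be confirmed numerically for small $n$. The following Python sketch (our illustration, using the Leibniz formula for the determinant and exact rational arithmetic) checks them for $n=3$ with arbitrary test values $k_i$:

```python
from fractions import Fraction
from math import factorial
from itertools import permutations

def gbinom(x, m):
    """binom(x, m) as a polynomial in x; x may be negative or half-integral."""
    p = Fraction(1)
    for t in range(m):
        p *= Fraction(x) - t
    return p / factorial(m)

def det(M):
    """Determinant via the Leibniz formula (fine for the small sizes used here)."""
    n = len(M)
    total = Fraction(0)
    for s in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if s[i] > s[j]:
                    sign = -sign
        term = Fraction(sign)
        for i in range(n):
            term *= M[i][s[i]]
        total += term
    return total

ks = [2, 5, 9]  # arbitrary test values k_1 < k_2 < k_3
n = len(ks)

# First evaluation: det(binom(k_i + j - 1, 2j - 1))   (0-based j below)
lhs_e = det([[gbinom(ks[i] + j, 2 * j + 1) for j in range(n)] for i in range(n)])
rhs_e = Fraction(1)
for i in range(n):
    rhs_e *= Fraction(ks[i], i + 1)
    for j in range(i + 1, n):
        rhs_e *= Fraction((ks[j] - ks[i]) * (ks[j] + ks[i]), (j - i) * (i + j + 2))
assert lhs_e == rhs_e

# Second evaluation: det(binom(k_i + j - 3/2, 2j - 2)) with half-integral entries
lhs_o = det([[gbinom(Fraction(2 * ks[i] + 2 * j - 1, 2), 2 * j)
              for j in range(n)] for i in range(n)])
rhs_o = Fraction(1)
for i in range(n):
    for j in range(i + 1, n):
        rhs_o *= Fraction((ks[j] - ks[i]) * (ks[j] + ks[i]), (j - i) * (i + j + 1))
assert lhs_o == rhs_o
print("ok")
```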
If we replace $k_i$ by $k_i+i-K-\frac{n}{2}-1$ for even $n$ and by $k_i+i-K-\frac{n+1}{2}-\frac{1}{2}$ for odd $n$ in the identities (\[eq:DetFormulaeEven\]) and (\[eq:DetFormulaeOdd\]), respectively, we obtain the following two evaluations:
If $n$ is even, then $$\begin{gathered}
\label{eq:HMTDetEven}
\prod_{1\leq i<j\leq \frac{n}{2}}\frac{(k_j-k_i+j-i)(2K+n+2-k_j-k_i-j-i)}{(j-i)(j+i)} \prod_{i=1}^{\frac{n}{2}}\frac{K+\frac{n}{2}+1-k_i-i}{i}\\
= (-1)^{\binom{\frac{n}{2}+1}{2}} \det_{1\leq i,j\leq\frac{n}{2}} \left( \binom{k_i+i+j-K-\frac{n}{2}-2}{2j-1} \right)\end{gathered}$$ and if $n$ is odd, then $$\begin{gathered}
\label{eq:HMTDetOdd}
\prod_{1\leq i<j\leq \frac{n+1}{2}}\frac{(k_j-k_i+j-i)(2K+n+2-k_j-k_i-j-i)}{(j-i)(j+i-1)}\\
= (-1)^{\binom{\frac{n+1}{2}}{2}} \det_{1\leq i,j\leq\frac{n+1}{2}} \left( \binom{k_i+i+j-K-\frac{n+1}{2}-2}{2j-2} \right).\end{gathered}$$
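The substituted identities (\[eq:HMTDetEven\]) and (\[eq:HMTDetOdd\]) can likewise be tested numerically, including the signs $(-1)^{\binom{n/2+1}{2}}$ and $(-1)^{\binom{(n+1)/2}{2}}$. The following Python sketch (our illustration, with arbitrary test values for $K$ and the $k_i$) checks the even case for $n=4$ and the odd case for $n=5$:

```python
from fractions import Fraction
from math import factorial
from itertools import permutations

def gbinom(x, m):
    """binom(x, m) as a polynomial in x (valid for negative x as well)."""
    p = Fraction(1)
    for t in range(m):
        p *= Fraction(x) - t
    return p / factorial(m)

def det(M):
    """Determinant via the Leibniz formula."""
    n = len(M)
    total = Fraction(0)
    for s in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if s[i] > s[j]:
                    sign = -sign
        term = Fraction(sign)
        for i in range(n):
            term *= M[i][s[i]]
        total += term
    return total

# Even case, n = 4 (so n/2 = 2), arbitrary K and k_i
K, n, ks = 7, 4, [1, 4]
m = n // 2
lhs_even = Fraction(1)
for i in range(1, m + 1):
    for j in range(i + 1, m + 1):
        lhs_even *= Fraction((ks[j - 1] - ks[i - 1] + j - i)
                             * (2 * K + n + 2 - ks[j - 1] - ks[i - 1] - j - i),
                             (j - i) * (j + i))
    lhs_even *= Fraction(K + n // 2 + 1 - ks[i - 1] - i, i)
rhs_even = (-1) ** (m * (m + 1) // 2) * det(
    [[gbinom(ks[i - 1] + i + j - K - n // 2 - 2, 2 * j - 1)
      for j in range(1, m + 1)] for i in range(1, m + 1)])
assert lhs_even == rhs_even

# Odd case, n = 5 (so (n+1)/2 = 3)
K2, n2, ks2 = 6, 5, [0, 3, 5]
p = (n2 + 1) // 2
lhs_odd = Fraction(1)
for i in range(1, p + 1):
    for j in range(i + 1, p + 1):
        lhs_odd *= Fraction((ks2[j - 1] - ks2[i - 1] + j - i)
                            * (2 * K2 + n2 + 2 - ks2[j - 1] - ks2[i - 1] - j - i),
                            (j - i) * (j + i - 1))
rhs_odd = (-1) ** (p * (p - 1) // 2) * det(
    [[gbinom(ks2[i - 1] + i + j - K2 - p - 2, 2 * j - 2)
      for j in range(1, p + 1)] for i in range(1, p + 1)])
assert lhs_odd == rhs_odd
print("ok")
```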
We take advantage of the recursive structure of halved monotone triangles: If we cut off the bottom row of a halved monotone triangle of order $n$, we obtain a halved monotone triangle of order $n-1$. This observation motivates the following identities. The first one was shown in [@Fis16]: $$\sum_{\substack{l_{i-1} < l_{i}, \\ k_{i-1} \leq l_{i-1} \leq k_{i} \leq l_{i} \leq k_{i+1}}} f(l_{i-1},l_{i})
=
\left.\left( \left(\operatorname{E}_{k_{i}^{(1)}}^{-1}+\operatorname{E}_{k_{i}^{(2)}}-\operatorname{E}_{k_{i}^{(1)}}^{-1}\operatorname{E}_{k_{i}^{(2)}}\right) \sum_{l_{i-1}=k_{i-1}^{(2)}}^{k_{i}^{(1)}} \sum_{l_{i}=k_{i}^{(2)}}^{k_{i+1}^{(1)}} f(l_{i-1},l_{i}) \right)\right|_{k_{i}^{(1)}=k_{i}^{(2)}=k_{i}}.$$
If we set $\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{x,y} {\mathrel{\mathop:\!\!=}}\operatorname{E}_{x}^{-1}+\operatorname{E}_{y}-(2-Q)\operatorname{E}_{x}^{-1}\operatorname{E}_{y}$, the corresponding $Q$-version is $$\begin{gathered}
\sum_{\substack{l_{i-1} < l_{i}, \\ k_{i-1} \leq l_{i-1} \leq k_{i} \leq l_{i} \leq k_{i+1}}} Q^{[k_{i-1} < l_{i-1} < k_{i}]+[k_{i} < l_{i} < k_{i+1}]} f(l_{i-1},l_{i})\\
=Q^{-1} \left.\left( \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{i-1}^{(1)},k_{i-1}^{(2)}} \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{i}^{(1)},k_{i}^{(2)}} \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{i+1}^{(1)},k_{i+1}^{(2)}} \sum_{l_{i-1}=k_{i-1}^{(2)}}^{k_{i}^{(1)}} \sum_{l_{i}=k_{i}^{(2)}}^{k_{i+1}^{(1)}} f(l_{i-1},l_{i}) \right)\right|_{\substack{k_{j}^{(1)}=k_{j}^{(2)}=k_{j} \\ \forall j\in\{i-1,i,i+1\}}},\end{gathered}$$ where we make use of the *Iverson bracket*: For any logical proposition $P$, $[P]=1$ if $P$ is satisfied and $[P]=0$ otherwise.
We extend the previous identity and define the following *summation operator*: $$\begin{gathered}
{\sum\limits_{(l_1,\dots ,l_{n-1})}^{(k_1,\dots ,k_{n})}} F(l_1, l_2, \dots,l_{n-1}) {\mathrel{\mathop:\!\!=}}\sum_{\substack{ l_1 < l_2 < \dots < l_{n-1}, \\ k_1 \leq l_1 \leq k_2 \leq \dots \leq k_{n-1} \leq l_{n-1} \leq k_n}} \hspace*{-1ex} Q^{[k_{1} < l_{1} < k_{2}]+\dots+[k_{n-1} < l_{n-1} < k_{n}]} f(l_1, \dots, l_{n-1})\\
= Q^{-1} \left.\left(\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{1}^{(1)},k_{1}^{(2)}} \cdots \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{n}^{(1)},k_{n}^{(2)}}
\sum_{l_{1}=k_{1}^{(2)}}^{k_{2}^{(1)}} \cdots \hspace{-1ex} \sum_{l_{n-1}=k_{n-1}^{(2)}}^{k_{n}^{(1)}} f(l_1, \dots, l_{n-1}) \right) \right|_{k_{i}^{(1)}=k_{i}^{(2)}=k_{i}}.\end{gathered}$$
In addition, we need the following alternative version of the summation operator: $$\begin{gathered}
\label{def:AltSumOp}
{\sum\limits_{(l_1,\dots ,l_{n-1})}^{(k_1,\dots ,k_{n})}\hspace{-3ex}'\hspace{3ex}} f(l_1, l_2, \dots,l_{n-1}) {\mathrel{\mathop:\!\!=}}\sum_{\substack{ l_1 < l_2 < \dots < l_{n-1}, \\ k_1 \leq l_1 \leq k_2 \leq \dots \leq k_{n-1} \leq l_{n-1} \leq k_n}} \hspace*{-1ex} Q^{[k_{1} < l_{1} < k_{2}]+\dots+[k_{n-1} < l_{n-1}]} f(l_1, \dots, l_{n-1})\\
= Q^{-1} \left.\left(\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{1}^{(1)},k_{1}^{(2)}} \cdots \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{n-1}^{(1)},k_{n-1}^{(2)}} \operatorname{\leftidx{^\textit{Q}}{HStrict}{}}_{k_{n}^{(1)},k_{n}^{(2)}}
\sum_{l_{1}=k_{1}^{(2)}}^{k_{2}^{(1)}} \cdots \hspace{-1ex} \sum_{l_{n-1}=k_{n-1}^{(2)}}^{k_{n}^{(1)}} f(l_1, \dots, l_{n-1}) \right) \right|_{k_{i}^{(1)}=k_{i}^{(2)}=k_{i}}\\
= \left.\left(\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{1}^{(1)},k_{1}^{(2)}} \cdots \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{n-1}^{(1)},k_{n-1}^{(2)}} \sum_{l_{1}=k_{1}^{(2)}}^{k_{2}^{(1)}} \cdots \hspace{-1ex} \sum_{l_{n-1}=k_{n-1}^{(2)}}^{k_{n}^{(1)}} f(l_1, \dots, l_{n-1}) \right) \right|_{k_{i}^{(1)}=k_{i}^{(2)}=k_{i}},\end{gathered}$$ where we set $\operatorname{\leftidx{^\textit{Q}}{HStrict}{}}_{x,y} {\mathrel{\mathop:\!\!=}}\operatorname{E}_{x}^{-1}+Q\operatorname{E}_{y}-\operatorname{E}_{x}^{-1}\operatorname{E}_{y}$ and use the fact that the operand in (\[def:AltSumOp\]) is independent of $k_{n}^{(2)}$.
We are now in a position to state a recursive formula for the generating functions ${\leftidx{^Q}{\operatorname{HMT}}{}}_n\left(K;\mathbf{k}\right)$: $$\begin{aligned}
{\leftidx{^Q}{\operatorname{HMT}}{}}_n\left(K;\left(k_1,\dots,k_{\frac{n+1}{2}}\right)\right) &= {\sum\limits_{(l_1,\dots ,l_{\frac{n-1}{2}})}^{(k_1,\dots ,k_{\frac{n+1}{2}})}} {\leftidx{^Q}{\operatorname{HMT}}{}}_{n-1}\left(K;\left(l_1,\dots,l_{\frac{n-1}{2}}\right)\right)& \text{if $n$ is odd,}\\
{\leftidx{^Q}{\operatorname{HMT}}{}}_n\left(K;\left(k_1,\dots,k_{\frac{n}{2}}\right)\right) &= {\sum\limits_{(l_1,\dots ,l_{\frac{n}{2}})}^{(k_1,\dots ,k_{\frac{n}{2}},K)}\hspace{-3ex}'\hspace{3ex}} {\leftidx{^Q}{\operatorname{HMT}}{}}_{n-1}\left(K;\left(l_1,\dots,l_{\frac{n}{2}}\right)\right)& \text{if $n$ is even.}\end{aligned}$$
In the next two lemmata we show how to apply the summation operator and its alternative version to a certain kind of polynomial. The first lemma is a corollary of [@Fis10 Lemma 1], the second one is a variation thereof.
We define the generalised identity operator $\operatorname{\leftidx{^\textit{Q}}{id}{}}_x {\mathrel{\mathop:\!\!=}}(Q-1) \operatorname{E}_x + \operatorname{id}$ and the generalised shift operator $\operatorname{\leftidx{^\textit{Q}}{E}{}}_x {\mathrel{\mathop:\!\!=}}(Q-1) \operatorname{id}+ \operatorname{E}_x$. If we set $Q=1$, we recover the standard identity and shift operator, respectively.
\[lem:SumOpNormal\] Let $g(l_1,\dots ,l_{n-1})$ be a polynomial in $\left(l_1,\dots ,l_{n-1}\right)$ such that $\left.\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{l_i,l_{i+1}}g(l_1,\dots ,l_{n-1})\right|_{l_i=l_{i+1}+1}$ vanishes for every $i\in\{1,\dots ,n-2\}$. Then $${\sum\limits_{(l_1,\dots ,l_{n-1})}^{(k_1,\dots ,k_{n})}}\prod_{i=1}^{n-1}\operatorname{\Delta}_{l_i} g(l_1,\dots,l_{n-1}) = \sum_{r=1}^n (-1)^{r-1}\prod_{s=1}^{r-1} \operatorname{\leftidx{^\textit{Q}}{id}{}}_{k_s}\prod_{t=r+1}^{n} \operatorname{\leftidx{^\textit{Q}}{E}{}}_{k_t} g(k_1,\dots ,\widehat{k_r},\dots ,k_n),$$ where $g(k_1,\dots ,\widehat{k_r},\dots ,k_n) {\mathrel{\mathop:\!\!=}}g(k_1,\dots ,k_{r-1},k_{r+1},\dots ,k_n)$.
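For $n=2$ the hypothesis on $g$ is vacuous, and Lemma \[lem:SumOpNormal\] reads $\sum_{k_1 \leq l \leq k_2} Q^{[k_1<l<k_2]} \operatorname{\Delta}g(l) = \operatorname{\leftidx{^\textit{Q}}{E}{}}_{k_2} g(k_2) - \operatorname{\leftidx{^\textit{Q}}{id}{}}_{k_1} g(k_1)$. The following Python sketch (our illustration, with an arbitrary test function $g$ and a rational test value for $Q$) verifies this instance:

```python
from fractions import Fraction

Q = Fraction(2, 7)   # generic test value for Q
k1, k2 = 1, 6        # arbitrary test values with k1 < k2

def g(l):            # an arbitrary test function
    return Fraction(l ** 3 - 4 * l + 2)

def qid(h, k):       # Qid_x h = h(x) + (Q-1) h(x+1)
    return h(k) + (Q - 1) * h(k + 1)

def qe(h, k):        # QE_x h = (Q-1) h(x) + h(x+1)
    return (Q - 1) * h(k) + h(k + 1)

# LHS: sum over k1 <= l <= k2 of Q^{[k1 < l < k2]} (g(l+1) - g(l))
lhs = sum(Q ** (k1 < l < k2) * (g(l + 1) - g(l)) for l in range(k1, k2 + 1))

# RHS of Lemma SumOpNormal for n = 2
rhs = qe(g, k2) - qid(g, k1)
assert lhs == rhs
print(lhs)
```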
\[lem:SumOpAlt\] Let $g(l_1,\dots ,l_{n-1})$ be a polynomial in $\left(l_1,\dots ,l_{n-1}\right)$ such that $\left.\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{l_i,l_{i+1}}g(l_1,\dots ,l_{n-1})\right|_{l_i=l_{i+1}+1}$ vanishes for every $i\in\{1,\dots ,n-2\}$. Then $$\begin{gathered}
{\sum\limits_{(l_1,\dots ,l_{n-1})}^{(k_1,\dots ,k_{n})}\hspace{-3ex}'\hspace{3ex}} \prod_{i=1}^{n-1}\operatorname{\Delta}_{l_i} g(l_1,\dots,l_{n-1}) =
Q \sum_{r=1}^{n-1} (-1)^{r-1}\prod_{s=1}^{r-1} \operatorname{\leftidx{^\textit{Q}}{id}{}}_{k_s}\prod_{t=r+1}^{n-1} \operatorname{\leftidx{^\textit{Q}}{E}{}}_{k_t} g(k_1,\dots ,\widehat{k_r},\dots,k_{n-1},k_n+1)\\
+ (-1)^{n-1}\prod_{s=1}^{n-1} \operatorname{\leftidx{^\textit{Q}}{id}{}}_{k_s} g(k_1,\dots,k_{n-1}).
\end{gathered}$$
First, we use the definition of the summation operator (\[def:AltSumOp\]) to create telescoping sums: $$\begin{gathered}
{\sum\limits_{(l_1,\dots ,l_{n-1})}^{(k_1,\dots ,k_{n})}\hspace{-3ex}'\hspace{3ex}}\prod_{i=1}^{n-1}\operatorname{\Delta}_{l_i} g(l_1,\dots,l_{n-1}) \\
\shoveleft = \left.\left(\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{1}^{(1)},k_{1}^{(2)}} \cdots \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{n-1}^{(1)},k_{n-1}^{(2)}} \sum_{l_{1}=k_{1}^{(2)}}^{k_{2}^{(1)}} \cdots \hspace{-1ex} \sum_{l_{n-1}=k_{n-1}^{(2)}}^{k_{n}^{(1)}} \prod_{i=1}^{n-1}\operatorname{\Delta}_{l_i} g(l_1,\dots,l_{n-1}) \right) \right|_{k_{i}^{(1)}=k_{i}^{(2)}=k_{i}} \\
\shoveleft = \left.\left(\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{1}^{(1)},k_{1}^{(2)}} \cdots \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{n-1}^{(1)},k_{n-1}^{(2)}} \sum_{l_{1}=k_{1}^{(2)}}^{k_{2}^{(1)}} \cdots \hspace{-1ex} \sum_{l_{n-2}=k_{n-2}^{(2)}}^{k_{n-1}^{(1)}} \prod_{i=1}^{n-2}\operatorname{\Delta}_{l_i} \right.\right.\\
\left.\left. \phantom{\sum_{l_{n-2}=k_{n-2}^{(2)}}^{k_{n-1}^{(1)}}} \times \left(g(l_1,\dots,l_{n-2},k_{n}^{(1)}+1) - g(l_1,\dots,l_{n-2},k_{n-1}^{(2)})\right) \right) \right|_{k_{i}^{(1)}=k_{i}^{(2)}=k_{i}}\\
\shoveleft = Q {\sum\limits_{(l_1,\dots ,l_{n-2})}^{(k_1,\dots ,k_{n-1})}}\prod_{i=1}^{n-2}\operatorname{\Delta}_{l_i} g(l_1,\dots,l_{n-2},k_n+1)\\
- \left.\left(\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{1}^{(1)},k_{1}^{(2)}} \cdots \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{n-1}^{(1)},k_{n-1}^{(2)}} \sum_{l_{1}=k_{1}^{(2)}}^{k_{2}^{(1)}} \cdots \hspace{-1ex} \sum_{l_{n-2}=k_{n-2}^{(2)}}^{k_{n-1}^{(1)}} \prod_{i=1}^{n-2}\operatorname{\Delta}_{l_i} g(l_1,\dots,l_{n-2},k_{n-1}^{(2)}) \right) \right|_{k_{i}^{(1)}=k_{i}^{(2)}=k_{i}}.
\end{gathered}$$
After applying Lemma \[lem:SumOpNormal\] to the first term, it remains to prove that $$\begin{gathered}
- \left.\left(\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{1}^{(1)},k_{1}^{(2)}} \cdots \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{n-1}^{(1)},k_{n-1}^{(2)}} \sum_{l_{1}=k_{1}^{(2)}}^{k_{2}^{(1)}} \cdots \hspace{-1ex} \sum_{l_{n-2}=k_{n-2}^{(2)}}^{k_{n-1}^{(1)}} \prod_{i=1}^{n-2}\operatorname{\Delta}_{l_i} g(l_1,\dots,l_{n-2},k_{n-1}^{(2)}) \right) \right|_{k_{i}^{(1)}=k_{i}^{(2)}=k_{i}} \\
= (-1)^{n-1}\prod_{s=1}^{n-1} \operatorname{\leftidx{^\textit{Q}}{id}{}}_{k_s} g(k_1,\dots,k_{n-1}).
\end{gathered}$$
Again by exploiting telescoping sums, we obtain $$\begin{gathered}
- \left.\left(\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{1}^{(1)},k_{1}^{(2)}} \cdots \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{n-1}^{(1)},k_{n-1}^{(2)}} \sum_{l_{1}=k_{1}^{(2)}}^{k_{2}^{(1)}} \cdots \hspace{-1ex} \sum_{l_{n-2}=k_{n-2}^{(2)}}^{k_{n-1}^{(1)}} \prod_{i=1}^{n-2}\operatorname{\Delta}_{l_i} g(l_1,\dots,l_{n-2},k_{n-1}^{(2)}) \right) \right|_{k_{i}^{(1)}=k_{i}^{(2)}=k_{i}} \\
\shoveleft = - \left.\left(\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{1}^{(1)},k_{1}^{(2)}} \cdots \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{n-2}^{(1)},k_{n-2}^{(2)}} \sum_{l_{1}=k_{1}^{(2)}}^{k_{2}^{(1)}} \cdots \hspace{-1ex} \sum_{l_{n-3}=k_{n-3}^{(2)}}^{k_{n-2}^{(1)}} \prod_{i=1}^{n-3}\operatorname{\Delta}_{l_i} \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{n-1}^{(1)},k_{n-1}^{(2)}} \right.\right.\\
\left.\left. \phantom{\sum_{l_{n-3}=k_{n-3}^{(2)}}^{k_{n-2}^{(1)}}} \times \left(g(l_1,\dots,l_{n-3},k_{n-1}^{(1)}+1,k_{n-1}^{(2)}) - g(l_1,\dots,l_{n-3},k_{n-2}^{(2)},k_{n-1}^{(2)})\right) \right) \right|_{k_{i}^{(1)}=k_{i}^{(2)}=k_{i}}.
\end{gathered}$$
Since $\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{n-1}^{(1)},k_{n-1}^{(2)}} g(l_1,\dots,l_{n-3},k_{n-1}^{(1)}+1,k_{n-1}^{(2)})$ vanishes by assumption and $\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{x,y}$ reduces to $\operatorname{\leftidx{^\textit{Q}}{id}{}}_y$ when applied to functions independent of $x$, the expression above simplifies to
$$\begin{gathered}
\left.\left(\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{1}^{(1)},k_{1}^{(2)}} \cdots \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{n-2}^{(1)},k_{n-2}^{(2)}} \operatorname{\leftidx{^\textit{Q}}{id}{}}_{k_{n-1}^{(2)}} \sum_{l_{1}=k_{1}^{(2)}}^{k_{2}^{(1)}} \cdots \hspace{-1ex} \sum_{l_{n-3}=k_{n-3}^{(2)}}^{k_{n-2}^{(1)}} \prod_{i=1}^{n-3}\operatorname{\Delta}_{l_i} \right.\right.\\
\left.\left. \phantom{\sum_{l_{n-2}=k_{n-2}^{(2)}}^{k_{n-1}^{(1)}}} \times g(l_1,\dots,l_{n-3},k_{n-2}^{(2)},k_{n-1}^{(2)}) \right) \right|_{k_{i}^{(1)}=k_{i}^{(2)}=k_{i}}.
\end{gathered}$$
We complete the proof by repeating the last step $n-2$ times.
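Similarly, for $n=2$ Lemma \[lem:SumOpAlt\] specialises to $\sum_{k_1 \leq l \leq k_2} Q^{[k_1<l]} \operatorname{\Delta}g(l) = Q\, g(k_2+1) - \operatorname{\leftidx{^\textit{Q}}{id}{}}_{k_1} g(k_1)$, which the following Python sketch (our illustration, with an arbitrary test function and a rational test value for $Q$) verifies:

```python
from fractions import Fraction

Q = Fraction(5, 3)   # generic test value for Q
k1, k2 = 0, 7        # arbitrary test values with k1 < k2

def g(l):            # an arbitrary test function
    return Fraction(2 * l ** 3 + l - 5)

# LHS: sum over k1 <= l <= k2 of Q^{[k1 < l]} (g(l+1) - g(l))
lhs = sum(Q ** (k1 < l) * (g(l + 1) - g(l)) for l in range(k1, k2 + 1))

# RHS of Lemma SumOpAlt for n = 2: Q g(k2+1) - Qid_{k1} g(k1)
rhs = Q * g(k2 + 1) - (g(k1) + (Q - 1) * g(k1 + 1))
assert lhs == rhs
print(lhs)
```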
Our task is now to apply these previous lemmata to the polynomials appearing in Theorem \[thm:QHMTenumeration\]. Therefore we define the operator $\operatorname{Op}_{x,y} {\mathrel{\mathop:\!\!=}}\operatorname{E}_x + \operatorname{E}_y - (2-Q)\operatorname{E}_x \operatorname{E}_y$.
\[lem:AppSumOdd\] Let $n$ be odd. Then $$\begin{gathered}
{\sum\limits_{(l_1,\dots ,l_{\frac{n-1}{2}})}^{(k_1,\dots ,k_{\frac{n+1}{2}})}} \prod_{r=1}^{\frac{n-1}{2}} \operatorname{\leftidx{^\textit{Q}}{id}{}}_{l_r} \prod_{1\leq s<t\leq \frac{n-1}{2}} \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{l_t,l_s} \operatorname{Op}_{l_s,l_t} \det_{1\leq i,j \leq \frac{n-1}{2}} \left( \binom{l_i+i+j-K-\frac{n-1}{2}-2}{2j-1} \right)\\
= \prod_{1\leq s<t\leq \frac{n+1}{2}} \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_t,k_s} \operatorname{Op}_{k_s,k_t} \det_{1\leq i,j \leq \frac{n+1}{2}} \left( \binom{k_i+i+j-K-\frac{n+1}{2}-2}{2j-2} \right).
\end{gathered}$$
We want to use Lemma \[lem:SumOpNormal\]. Therefore, we set $$g(l_1,\dots,l_{\frac{n-1}{2}}) {\mathrel{\mathop:\!\!=}}\prod_{r=1}^{\frac{n-1}{2}} \operatorname{\leftidx{^\textit{Q}}{id}{}}_{l_r} \prod_{1\leq s<t\leq \frac{n-1}{2}} \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{l_t,l_s} \operatorname{Op}_{l_s,l_t} \det_{1\leq i,j \leq \frac{n-1}{2}} \left( \binom{l_i+i+j-K-\frac{n-1}{2}-2}{2j} \right).$$
Note that the operator $$\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{l_{i},l_{i+1}}\prod_{r=1}^{\frac{n-1}{2}} \operatorname{\leftidx{^\textit{Q}}{id}{}}_{l_r} \prod_{1\leq s<t\leq \frac{n-1}{2}} \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{l_t,l_s} \operatorname{Op}_{l_s,l_t}$$ is symmetric in $l_{i}$ and $l_{i+1}$ and that the polynomial $$\operatorname{E}_{l_{i}}\det_{1\leq i,j \leq \frac{n-1}{2}} \left( \binom{l_i+i+j-K-\frac{n-1}{2}-2}{2j} \right)$$ is antisymmetric in $l_{i}$ and $l_{i+1}$. Consequently, the polynomial $\operatorname{E}_{l_{i}}\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{l_{i},l_{i+1}}g(l_1,\dots,l_{\frac{n-1}{2}})$ is also antisymmetric in $l_{i}$ and $l_{i+1}$ and, thus, divisible by the factor $l_{i+1}-l_{i}$. It follows that $g(l_1,\dots,l_{\frac{n-1}{2}})$ fulfils the requirements of Lemma \[lem:SumOpNormal\].
The trick of the proof is the following observation: Suppose $f(x)$ is a function that is independent of $y$. Then the following operator expressions simplify: $\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{x,y} f(x) = \left((Q-1)\operatorname{E}_x^{-1}+\operatorname{id}\right) f(x)$ and $\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{y,x} f(x) = \operatorname{Op}_{x,y} f(x) = \operatorname{Op}_{y,x} f(x) = \operatorname{\leftidx{^\textit{Q}}{id}{}}_x f(x)$. By using the fact that $\operatorname{\Delta}_x{\binom{x}{n}}={\binom{x}{n-1}}$, we obtain $$\begin{gathered}
{\sum\limits_{(l_1,\dots ,l_{\frac{n-1}{2}})}^{(k_1,\dots ,k_{\frac{n+1}{2}})}} \prod_{r=1}^{\frac{n-1}{2}} \operatorname{\leftidx{^\textit{Q}}{id}{}}_{l_r} \prod_{1\leq s<t\leq \frac{n-1}{2}} \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{l_t,l_s} \operatorname{Op}_{l_s,l_t} \det_{1\leq i,j \leq \frac{n-1}{2}} \left( \binom{l_i+i+j-K-\frac{n-1}{2}-2}{2j-1} \right)\\
\shoveleft = {\sum\limits_{(l_1,\dots ,l_{\frac{n-1}{2}})}^{(k_1,\dots ,k_{\frac{n+1}{2}})}}\prod_{i=1}^{\frac{n-1}{2}}\operatorname{\Delta}_{l_i} g(l_1,\dots,l_{\frac{n-1}{2}})
= \sum_{r=1}^{\frac{n+1}{2}} (-1)^{r-1}\prod_{s=1}^{r-1} \operatorname{\leftidx{^\textit{Q}}{id}{}}_{k_s}\prod_{t=r+1}^{\frac{n+1}{2}} \operatorname{\leftidx{^\textit{Q}}{E}{}}_{k_t} g(k_1,\dots ,\widehat{k_r},\dots ,k_{\frac{n+1}{2}})\\
\shoveleft = \sum_{r=1}^{\frac{n+1}{2}} (-1)^{r-1} \prod_{s=1}^{r-1} \operatorname{\leftidx{^\textit{Q}}{id}{}}_{k_s} \prod_{t=r+1}^{\frac{n+1}{2}} \operatorname{\leftidx{^\textit{Q}}{E}{}}_{k_t} \prod_{\substack{u=1, \\ u \neq r}}^{\frac{n-1}{2}} \operatorname{\leftidx{^\textit{Q}}{id}{}}_{k_u}\\
\times \left.\prod_{\substack{1\leq s<t\leq \frac{n-1}{2}, \\ s,t \neq r}} \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_t,k_s} \operatorname{Op}_{k_s,k_t} \det_{1\leq i,j \leq \frac{n-1}{2}} \left( \binom{l_i+i+j-K-\frac{n-1}{2}-2}{2j} \right) \right|_{(l_1,\dots,l_{\frac{n-1}{2}}) = (k_1,\dots ,\widehat{k_r},\dots ,k_{\frac{n+1}{2}})}\\
\shoveleft = \sum_{r=1}^{\frac{n+1}{2}} (-1)^{r-1} \prod_{t=r+1}^{\frac{n+1}{2}} \operatorname{E}_{k_t} \prod_{1\leq s<t\leq \frac{n-1}{2}} \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_t,k_s} \operatorname{Op}_{k_s,k_t}\\
\left.\times \det_{1\leq i,j \leq \frac{n-1}{2}} \left( \binom{l_i+i+j-K-\frac{n-1}{2}-2}{2j} \right) \right|_{(l_1,\dots,l_{\frac{n-1}{2}}) = (k_1,\dots ,\widehat{k_r},\dots ,k_{\frac{n+1}{2}})}.
\end{gathered}$$
In the last step we use the fact that $\operatorname{\leftidx{^\textit{Q}}{E}{}}_{x} = \operatorname{E}_x ((Q-1)\operatorname{E}_x^{-1}+\operatorname{id})$. Finally, we consider the determinant evaluation $$\begin{gathered}
\det_{1\leq i,j \leq \frac{n+1}{2}} \left( \binom{k_i+i+j-K-\frac{n+1}{2}-2}{2j-2} \right)\\
= \left. \sum_{r=1}^{\frac{n+1}{2}} (-1)^{r-1} \prod_{t=r+1}^{\frac{n+1}{2}} \operatorname{E}_{k_t} \det_{1\leq i,j \leq \frac{n-1}{2}} \left( \binom{l_i+i+j-K-\frac{n-1}{2}-2}{2j} \right) \right|_{(l_1,\dots,l_{\frac{n-1}{2}}) = (k_1,\dots ,\widehat{k_r},\dots ,k_{\frac{n+1}{2}})},
\end{gathered}$$ where we expand the determinant with respect to the first column. This completes the proof.
\[lem:AppSumEven\] Let $n$ be even. Then $$\begin{gathered}
{\sum\limits_{(l_1,\dots ,l_{\frac{n}{2}})}^{(k_1,\dots ,k_{\frac{n}{2}},K)}\hspace{-3ex}'\hspace{3ex}} \prod_{1\leq s<t\leq \frac{n}{2}} \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{l_t,l_s} \operatorname{Op}_{l_s,l_t} \det_{1\leq i,j \leq \frac{n}{2}} \left( \binom{l_i+i+j-K-\frac{n}{2}-2}{2j-2} \right)\\
= (-1)^{\frac{n}{2}}\prod_{r=1}^{\frac{n}{2}} \operatorname{\leftidx{^\textit{Q}}{id}{}}_{k_r} \prod_{1\leq s<t\leq \frac{n}{2}} \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_t,k_s} \operatorname{Op}_{k_s,k_t} \det_{1\leq i,j \leq \frac{n}{2}} \left( \binom{k_i+i+j-K-\frac{n}{2}-2}{2j-1} \right).
\end{gathered}$$
This lemma is proved by means of Lemma \[lem:SumOpAlt\]. Therefore we define $$g_K(l_1,\dots,l_{\frac{n}{2}}) {\mathrel{\mathop:\!\!=}}\prod_{1\leq s<t\leq \frac{n}{2}} \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{l_t,l_s} \operatorname{Op}_{l_s,l_t} \det_{1\leq i,j \leq \frac{n}{2}} \left( \binom{l_i+i+j-K-\frac{n}{2}-2}{2j-1} \right).$$
As in the proof of Lemma \[lem:AppSumOdd\], we can show that $g_K(l_1,\dots,l_{\frac{n}{2}})$ fulfils the conditions of Lemma \[lem:SumOpAlt\]. Consequently, we obtain $$\begin{gathered}
{\sum\limits_{(l_1,\dots ,l_{\frac{n}{2}})}^{(k_1,\dots ,k_{\frac{n}{2}},K)}\hspace{-3ex}'\hspace{3ex}} \prod_{1\leq s<t\leq \frac{n}{2}} \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{l_t,l_s} \operatorname{Op}_{l_s,l_t} \det_{1\leq i,j \leq \frac{n}{2}} \left( \binom{l_i+i+j-K-\frac{n}{2}-2}{2j-2} \right)\\
=Q \sum_{r=1}^{\frac{n}{2}} (-1)^{r-1}\prod_{s=1}^{r-1} \operatorname{\leftidx{^\textit{Q}}{id}{}}_{k_s}\prod_{t=r+1}^{\frac{n}{2}} \operatorname{\leftidx{^\textit{Q}}{E}{}}_{k_t} g_K(k_1,\dots ,\widehat{k_r},\dots,k_{\frac{n}{2}},K+1)\\
+ (-1)^{\frac{n}{2}} \prod_{s=1}^{\frac{n}{2}} \operatorname{\leftidx{^\textit{Q}}{id}{}}_{k_s} g_K(k_1,\dots,k_{\frac{n}{2}}).
\end{gathered}$$
The proof thus reduces to showing that the first summand on the right-hand side vanishes. It suffices to establish that $$\left. g_K(l_1,\dots,l_{\frac{n}{2}}) \right|_{l_{\frac{n}{2}}=K+1}=0.$$
We will use the following identity $$\label{eq:AuxBinomial}
\sum_{m=0}^{M}\binom{M}{m}(M-2m)^{2N+1}=0,$$ which holds true for any nonnegative integers $M$ and $N$. This follows from $$\sum_{m=0}^{M}\binom{M}{m}(M-2m)^{2N+1}
=\sum_{m=0}^{M}\binom{M}{M-m}(M-2(M-m))^{2N+1}
=-\sum_{m=0}^{M}\binom{M}{m}(M-2m)^{2N+1}.$$
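Identity (\[eq:AuxBinomial\]) is also easily confirmed by machine; the following Python sketch checks it for a range of small $M$ and $N$:

```python
from math import comb

# sum_{m=0}^{M} binom(M, m) (M - 2m)^{2N+1} = 0 for all nonnegative M, N
assert all(
    sum(comb(M, m) * (M - 2 * m) ** (2 * N + 1) for m in range(M + 1)) == 0
    for M in range(0, 12)
    for N in range(0, 6)
)
print("ok")
```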
Some manipulation yields $$\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{l_{\frac{n}{2}},l_i}\operatorname{Op}_{l_i,l_{\frac{n}{2}}} = (1-(4-2Q)\operatorname{E}_{l_i}+(5-4Q+Q^2)\operatorname{E}_{l_i}^2) + \operatorname{E}_{l_i} (1-(2-Q)\operatorname{E}_{l_i}) (\operatorname{E}_{l_{\frac{n}{2}}} + \operatorname{E}_{l_{\frac{n}{2}}}^{-1})$$ and by identity (\[eq:HMTDetEven\]) it is enough to show that for any nonnegative integer $M$ we have the following identity: $$\label{eq:ComEquation}
\left. (\operatorname{E}_{l_{\frac{n}{2}}} + \operatorname{E}_{l_{\frac{n}{2}}}^{-1})^M \left((K+1-l_{\frac{n}{2}}) \prod_{i=1}^{\frac{n}{2}-1} \left( l_{\frac{n}{2}}-l_i+\frac{n}{2}-i \right) \left( 2K+\frac{n}{2}+2-l_{\frac{n}{2}}-l_i-i \right) \right) \right|_{l_{\frac{n}{2}}=K+1}=0.$$
But since the left-hand side of (\[eq:ComEquation\]) is equal to $$\begin{gathered}
\left. \sum_{m=0}^M \binom{M}{m} \operatorname{E}_{l_{\frac{n}{2}}}^{2m-M} \left((K+1-l_{\frac{n}{2}}) \prod_{i=1}^{\frac{n}{2}-1} \left( l_{\frac{n}{2}}-l_i+\frac{n}{2}-i \right) \left( 2K+\frac{n}{2}+2-l_{\frac{n}{2}}-l_i-i \right) \right) \right|_{l_{\frac{n}{2}}=K+1}\\
= \sum_{m=0}^M \binom{M}{m} (M-2m) \prod_{i=1}^{\frac{n}{2}-1} \left( \left( K+\frac{n}{2}+1-l_i-i \right)^2 - (M-2m)^2 \right),
\end{gathered}$$ equation (\[eq:ComEquation\]) follows from identity (\[eq:AuxBinomial\]).
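Equation (\[eq:ComEquation\]) can also be tested directly by expanding $(\operatorname{E}_{l_{\frac{n}{2}}} + \operatorname{E}_{l_{\frac{n}{2}}}^{-1})^M = \sum_{m=0}^M \binom{M}{m} \operatorname{E}_{l_{\frac{n}{2}}}^{2m-M}$ and evaluating at $l_{\frac{n}{2}}=K+1$. The following Python sketch (our illustration, with arbitrary test values for $K$, $n$ and the $l_i$) does this:

```python
from math import comb

K, n = 5, 8        # test values; n even
ls = [0, 2, 3]     # test values l_1, ..., l_{n/2 - 1}

def h(l):
    """(K+1-l) * prod_i (l - l_i + n/2 - i)(2K + n/2 + 2 - l - l_i - i)."""
    v = K + 1 - l
    for i, li in enumerate(ls, start=1):
        v *= (l - li + n // 2 - i) * (2 * K + n // 2 + 2 - l - li - i)
    return v

# (E + E^{-1})^M h, evaluated at l = K+1, vanishes for every M
for M in range(0, 10):
    assert sum(comb(M, m) * h(K + 1 + 2 * m - M) for m in range(M + 1)) == 0
print("ok")
```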
Finally, Theorem \[thm:QHMTenumeration\] follows by a simple induction on $n$ using Lemmata \[lem:AppSumOdd\] and \[lem:AppSumEven\] as well as identities (\[eq:HMTDetOdd\]) and (\[eq:HMTDetEven\]).
Proof of Theorem \[thm:QHTREEenumeration\]
------------------------------------------
Fischer showed how to use the forward difference operator to enumerate truncated monotone triangles [@Fis11]. We generalise her ideas to the weighted enumeration of halved trees.
Since the operator $\operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{x,y}$ simplifies to $(Q-1)\operatorname{E}_{y} + \operatorname{id}$ when applied to a function independent of $x$, it follows that
$$\begin{gathered}
-\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{k_1} {\sum\limits_{(l_1,\dots ,l_{n-1})}^{(k_1,\dots ,k_{n})}} f(l_1, l_2, \dots,l_{n-1})\\
\shoveleft =- Q^{-1}((1-Q^{-1})\operatorname{E}_{k_1} + Q^{-1})^{-1}\operatorname{\Delta}_{k_1} Q^{-1} \left(((Q-1)\operatorname{E}_{k_{1}^{(2)}} + \operatorname{id}) \phantom{\sum_{l_{1}=k_{1}^{(2)}}^{k_{2}^{(1)}}} \right.\\
\left.\left.\times \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{2}^{(1)},k_{2}^{(2)}} \cdots \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{n}^{(1)},k_{n}^{(2)}}
\sum_{l_{1}=k_{1}^{(2)}}^{k_{2}^{(1)}} \cdots \hspace{-1ex} \sum_{l_{n-1}=k_{n-1}^{(2)}}^{k_{n}^{(1)}} f(l_1, \dots, l_{n-1}) \right) \right|_{k_{i}^{(1)}=k_{i}^{(2)}=k_{i}}\\
\shoveleft = -\operatorname{\Delta}_{k_1} Q^{-1} \left.\left( \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{2}^{(1)},k_{2}^{(2)}} \cdots \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{n}^{(1)},k_{n}^{(2)}}
\sum_{l_{1}=k_{1}^{(2)}}^{k_{2}^{(1)}} \cdots \hspace{-1ex} \sum_{l_{n-1}=k_{n-1}^{(2)}}^{k_{n}^{(1)}} f(l_1, \dots, l_{n-1}) \right) \right|_{k_{i}^{(1)}=k_{i}^{(2)}=k_{i}}\\
= Q^{-1} \left.\left( \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{2}^{(1)},k_{2}^{(2)}} \cdots \operatorname{\leftidx{^\textit{Q}}{Strict}{}}_{k_{n}^{(1)},k_{n}^{(2)}}
\sum_{l_{2}=k_{2}^{(2)}}^{k_{3}^{(1)}} \cdots \hspace{-1ex} \sum_{l_{n-1}=k_{n-1}^{(2)}}^{k_{n}^{(1)}} f(k_1,l_2, \dots, l_{n-1}) \right) \right|_{k_{i}^{(1)}=k_{i}^{(2)}=k_{i}}.\end{gathered}$$
The last step follows from Fischer’s crucial observation [@Fis11] that the application of the operator $-\operatorname{\Delta}_{k_1}$ has the effect of truncating the leftmost entry in the bottom row of the pattern and setting $k_1$ as the leftmost entry in the penultimate row. We conclude that $$\begin{gathered}
-\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{k_1} {\sum\limits_{(l_1,\dots ,l_{n-1})}^{(k_1,\dots ,k_{n})}} f(l_1, l_2, \dots,l_{n-1})\\
= \sum_{\substack{ l_2 < \dots < l_{n-1}, \\ k_2 \leq l_2 \leq k_3 \leq \dots \leq k_{n-1} \leq l_{n-1} \leq k_n}} \hspace*{-1ex} Q^{[k_{2} < l_{2} < k_{3}]+\dots+[k_{n-1} < l_{n-1} < k_{n}]} f(k_1, l_2, \dots, l_{n-1}).\end{gathered}$$
Hence, if we apply the operator $\prod_{r=1}^{\lceil\frac{n}{2}\rceil} \left( -\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{k_r} \right)^{s_r}$, we truncate the $s_i$ bottom entries from the $i^{\text{th}}$ diagonal for each $1 \le i \le \lceil\frac{n}{2}\rceil$.
Proof of Theorem \[thm:QHTREEConstantTerm\]
-------------------------------------------
In the following proof, we show how to transform an operator formula into a constant term identity. This method can also be applied to other operator formulae that involve the same operands, such as the one in Theorem \[thm:VSASTPQCEnumeration\]. We assume that $n$ is odd. Then the following holds:
$$\begin{gathered}
\prod_{r=1}^{\frac{n+1}{2}} ( -\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{k_r} )^{s_r} \prod_{1\leq s<t\leq \frac{n+1}{2}} \left(\operatorname{E}_{k_s}+\operatorname{E}_{k_t}^{-1}-(2-Q)\operatorname{E}_{k_s}\operatorname{E}_{k_t}^{-1}\right) \left(\operatorname{E}_{k_s}+\operatorname{E}_{k_t}-(2-Q)\operatorname{E}_{k_s}\operatorname{E}_{k_t}\right)\\
\times \prod_{1\leq i<j\leq \frac{n+1}{2}} \frac{(k_j-k_i+j-i)(2K+n+2-k_i-k_j-i-j)}{(j-i)(i+j-1)}\\
\shoveleft = \operatorname{CT}_{X_1,\dots,X_{\frac{n+1}{2}}} \left( \prod_{r=1}^{\frac{n+1}{2}} \operatorname{E}_{X_r}^{k_r} ( -\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{X_r} )^{s_r}\right. \\
\times \prod_{1\leq s<t\leq \frac{n+1}{2}} \left(\operatorname{E}_{X_s}+\operatorname{E}_{X_t}^{-1}-(2-Q)\operatorname{E}_{X_s}\operatorname{E}_{X_t}^{-1}\right) \left(\operatorname{E}_{X_s}+\operatorname{E}_{X_t}-(2-Q)\operatorname{E}_{X_s}\operatorname{E}_{X_t}\right) \\
\times \left.\prod_{1\leq i<j\leq \frac{n+1}{2}} \frac{(X_j-X_i+j-i)(2K+n+2-X_i-X_j-i-j)}{(j-i)(i+j-1)}\right).\end{gathered}$$
Since $\operatorname{\Delta}_x=\operatorname{E}_x-\operatorname{id}$, the expression above is equal to $$\begin{gathered}
\label{eq:ConstantTerm1}
\operatorname{CT}_{X_1,\dots,X_{\frac{n+1}{2}}} \left( \prod_{r=1}^{\frac{n+1}{2}} (\operatorname{id}+ \operatorname{\Delta}_{X_r})^{k_r} \left(-\frac{\operatorname{\Delta}_{X_r}}{Q-(1-Q)\operatorname{\Delta}_{X_r}}\right)^{s_r} \right.\\
\times \prod_{1\leq s<t\leq \frac{n+1}{2}} \left(\operatorname{E}_{X_t}^{-1} \left( Q\operatorname{id}+ (Q-1)\operatorname{\Delta}_{X_s} + \operatorname{\Delta}_{X_t} + \operatorname{\Delta}_{X_s} \operatorname{\Delta}_{X_t} \right)\right.\\
\left. \phantom{\operatorname{E}_{X_t}^{-1}} \times \left( Q\operatorname{id}+ (Q-1)\operatorname{\Delta}_{X_s} + (Q-1)\operatorname{\Delta}_{X_t} + (Q-2)\operatorname{\Delta}_{X_s}\operatorname{\Delta}_{X_t} \right)\right)\\
\left. \times \prod_{1\leq i<j\leq \frac{n+1}{2}} \frac{(X_j-X_i+j-i)(2K+n+2-X_i-X_j-i-j)}{(j-i)(i+j-1)} \right).
\end{gathered}$$
From (\[eq:DetFormulaeOdd\]) we obtain the following determinant identity: $$\begin{gathered}
\prod_{1\leq s<t\leq \frac{n+1}{2}} \operatorname{E}_{X_t}^{-1} \prod_{1\leq i<j\leq \frac{n+1}{2}} \frac{(X_j-X_i+j-i)(2K+n+2-X_i-X_j-i-j)}{(j-i)(i+j-1)}\\
= \prod_{1\leq i<j\leq \frac{n+1}{2}} \frac{(X_j-X_i)(2K+n-X_j-X_i)}{(j-i)(j+i-1)}\\
= (-1)^{\binom{\frac{n+1}{2}}{2}} \det_{1 \leq i,j \leq \frac{n+1}{2}} \left( \binom{X_i+j-K-\frac{n+3}{2}}{2j-2} \right).
\end{gathered}$$
Hence (\[eq:ConstantTerm1\]) is equal to $$\begin{gathered}
(-1)^{\binom{\frac{n+1}{2}}{2}} \operatorname{CT}_{X_1,\dots,X_{\frac{n+1}{2}}} \left( \prod_{r=1}^{\frac{n+1}{2}} (\operatorname{id}+ \operatorname{\Delta}_{X_r})^{k_r} \left(-\frac{\operatorname{\Delta}_{X_r}}{Q-(1-Q)\operatorname{\Delta}_{X_r}}\right)^{s_r} \right.\\
\times \prod_{1\leq s<t\leq \frac{n+1}{2}} \left( \left( Q\operatorname{id}+ (Q-1)\operatorname{\Delta}_{X_s} + \operatorname{\Delta}_{X_t} + \operatorname{\Delta}_{X_s} \operatorname{\Delta}_{X_t} \right)\left( Q\operatorname{id}+ (Q-1)\operatorname{\Delta}_{X_s} + (Q-1)\operatorname{\Delta}_{X_t} + (Q-2)\operatorname{\Delta}_{X_s}\operatorname{\Delta}_{X_t} \right) \right) \\
\left. \times \det_{1 \leq i,j \leq \frac{n+1}{2}} \left( \binom{X_i+j-K-\frac{n+3}{2}}{2j-2} \right) \right).
\end{gathered}$$
By the Leibniz formula, we obtain $$\begin{gathered}
\label{eq:ConstantTerm2}
\shoveleft (-1)^{\binom{\frac{n+1}{2}}{2}} \operatorname{CT}_{X_1,\dots,X_{\frac{n+1}{2}}} \left( \sum_{\sigma \in \mathfrak{S}_{\frac{n+1}{2}}} \operatorname{sgn}(\sigma) \prod_{r=1}^{\frac{n+1}{2}} (\operatorname{id}+ \operatorname{\Delta}_{X_r})^{k_r} \left(-\frac{\operatorname{\Delta}_{X_r}}{Q-(1-Q)\operatorname{\Delta}_{X_r}}\right)^{s_r}\right.\\
\times \prod_{1\leq s<t\leq \frac{n+1}{2}} \left( Q\operatorname{id}+ (Q-1)\operatorname{\Delta}_{X_s} + \operatorname{\Delta}_{X_t} + \operatorname{\Delta}_{X_s} \operatorname{\Delta}_{X_t} \right) \left( Q\operatorname{id}+ (Q-1)\operatorname{\Delta}_{X_s} + (Q-1)\operatorname{\Delta}_{X_t} + (Q-2)\operatorname{\Delta}_{X_s}\operatorname{\Delta}_{X_t} \right)\\
\left. \times \prod_{i=1}^{\frac{n+1}{2}} {\binom{X_i + \sigma(i) - K -\frac{n+3}{2}}{2\sigma(i) - 2}} \right) \\
\shoveleft = (-1)^{\binom{\frac{n+1}{2}}{2}} \operatorname{CT}_{X_1,\dots,X_{\frac{n+1}{2}}} \left( \sum_{\sigma \in \mathfrak{S}_{\frac{n+1}{2}}} \operatorname{sgn}(\sigma) \prod_{r=1}^{\frac{n+1}{2}} (\operatorname{id}+ \operatorname{\Delta}_{X_r})^{k_r} \left(-\frac{\operatorname{\Delta}_{X_r}}{Q-(1-Q)\operatorname{\Delta}_{X_r}}\right)^{s_r}\right.\\
\times \prod_{1\leq s<t\leq \frac{n+1}{2}} \left( Q\operatorname{id}+ (Q-1)\operatorname{\Delta}_{X_s} + \operatorname{\Delta}_{X_t} + \operatorname{\Delta}_{X_s} \operatorname{\Delta}_{X_t} \right) \left( Q\operatorname{id}+ (Q-1)\operatorname{\Delta}_{X_s} + (Q-1)\operatorname{\Delta}_{X_t} + (Q-2)\operatorname{\Delta}_{X_s}\operatorname{\Delta}_{X_t} \right)\\
\left. \times \prod_{i=1}^{\frac{n+1}{2}} \left( \operatorname{id}+ \operatorname{\Delta}_{X_i} \right)^{\sigma(i) - K -\frac{n+3}{2}} {\binom{X_i}{2\sigma(i) - 2}} \right).
\end{gathered}$$
Since $\operatorname{CT}_x \left( \operatorname{\Delta}_x^a \binom{x}{b} \right) = \delta_{a,b}$ holds true, (\[eq:ConstantTerm2\]) is equal to $$\begin{gathered}
\label{eq:ConstantTerm3}
(-1)^{\binom{\frac{n+1}{2}}{2}} \sum_{\sigma \in \mathfrak{S}_{\frac{n+1}{2}}} \operatorname{sgn}(\sigma) \left< X_1^{2\sigma(1) - 2} \cdots X_{\frac{n+1}{2}}^{2\sigma(\frac{n+1}{2}) - 2} \right> \left( \prod_{r=1}^{\frac{n+1}{2}} (1 + X_r)^{k_r + \sigma(r) - K - \frac{n+3}{2}} \left(-\frac{X_r}{Q-(1-Q) X_r}\right)^{s_r} \right. \\
\left. \times \prod_{1\leq s<t\leq \frac{n+1}{2}} \left( Q + (Q-1) X_s + X_t + X_s X_t \right) \left( Q + (Q-1)X_s + (Q-1)X_t + (Q-2)X_s X_t \right) \right) \\
= (-1)^{\binom{\frac{n+1}{2}}{2}} \operatorname{CT}_{X_1,\dots,X_{\frac{n+1}{2}}} \left( \sum_{\sigma \in \mathfrak{S}_{\frac{n+1}{2}}} \operatorname{sgn}(\sigma) \prod_{r=1}^{\frac{n+1}{2}} X_r^{2-2\sigma(r)} (1 + X_r)^{k_r + \sigma(r) - K - \frac{n+3}{2}} \left(-\frac{X_r}{Q-(1-Q) X_r}\right)^{s_r} \right. \\
\left. \times \prod_{1\leq s<t\leq \frac{n+1}{2}} \left( Q + (Q-1) X_s + X_t + X_s X_t \right) \left( Q + (Q-1)X_s + (Q-1)X_t + (Q-2)X_s X_t \right) \right).
\end{gathered}$$
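As a sanity check, the evaluation rule $\operatorname{CT}_x \left( \operatorname{\Delta}_x^a \binom{x}{b} \right) = \delta_{a,b}$ used in the last step can be verified numerically. The following minimal sketch (not part of the proof; it assumes the operator calculus is read as forward differences acting on functions of an integer argument, with constant term extraction modelled as evaluation at $x=0$):

```python
from math import comb

def delta(f):
    # forward difference operator: (delta f)(x) = f(x+1) - f(x)
    return lambda x: f(x + 1) - f(x)

def ct_delta_binom(a, b):
    # evaluate Delta^a applied to x -> binom(x, b) at x = 0;
    # since Delta binom(x, b) = binom(x, b-1), this should give delta_{a,b}
    f = lambda x: comb(x, b)
    for _ in range(a):
        f = delta(f)
    return f(0)
```

Since $\operatorname{\Delta}_x \binom{x}{b} = \binom{x}{b-1}$, iterating $a$ times and evaluating at $0$ yields $\binom{0}{b-a} = \delta_{a,b}$.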
By the generalised Vandermonde determinant evaluation [@Kra01 Proposition 1], the following identity holds true: $$\begin{gathered}
\sum_{\sigma \in \mathfrak{S}_{\frac{n+1}{2}}} \operatorname{sgn}(\sigma) \prod_{r=1}^{\frac{n+1}{2}} X_r^{n+1-2\sigma(r)} \left( 1+X_r \right)^{\sigma(r) - 1} = \det_{1 \leq i,j \leq \frac{n+1}{2}} X_i^{n+1-2j} \left( 1+X_i \right)^{j - 1} \\
= (-1)^{\binom{\frac{n+1}{2}}{2}} \prod_{1\leq s<t\leq \frac{n+1}{2}} \left( X_t - X_s \right) \left( X_s + X_t + X_s X_t \right).
\end{gathered}$$
Therefore we can conclude that (\[eq:ConstantTerm3\]) is equal to $$\begin{gathered}
\operatorname{CT}_{X_1,\dots,X_{\frac{n+1}{2}}} \left( \prod_{r=1}^{\frac{n+1}{2}} X_r^{1-n} (1+X_r)^{k_r - K -\frac{n+1}{2}} \left(-\frac{X_r}{Q-(1-Q) X_r}\right)^{s_r} \right.\\
\times \prod_{1\leq s<t\leq \frac{n+1}{2}} \left( X_t - X_s\right) \left( X_s + X_t + X_s X_t \right)\\
\left. \phantom{\prod_{r=1}^{\frac{n+1}{2}}} \times \left( Q + (Q-1)X_s + X_t + X_s X_t \right) \left( Q + (Q-1)X_s + (Q-1)X_t + (Q-2)X_s X_t \right) \right).
\end{gathered}$$
The case for even $n$ is treated similarly.
Proof of Theorem \[thm:HMTPQEnumeration\]
-----------------------------------------
For the next proof we essentially use the observation of Theorem \[thm:QHTREEenumeration\] that the application of the (generalised) forward difference operator has the effect of truncating entries of the diagonals. If the two bottommost entries in the $i^\text{th}$ diagonal of the halved tree are equal, we can truncate the bottommost entry of this diagonal which is reflected in the operator $-\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{k_i}$.
If the two bottommost entries in the $i^\text{th}$ diagonal are not the same, we can count all halved trees and subtract those whose bottommost entries in the $i^\text{th}$ diagonal are equal. Thus, we need to apply the operator $\operatorname{id}- (-\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{k_i})$.
Proof of Theorem \[thm:VSASTPQCEnumeration\]
--------------------------------------------
To prove the generating function of vertically symmetric alternating sign trapezoids, the key idea is to use Theorem \[thm:VSASTQCEnumeration\] and sum over all possible positions of 10-columns. Using the fact that $Q^{-1} \operatorname{\delta}_x = (\operatorname{id}+ \operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_x)^{-1} \operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_x$, we can manipulate the generating function (\[eq:VSASTQCEnumeration\]): $$\begin{gathered}
\prod_{c_i \in C_{10}} \left( -\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{c_i} \right) \prod_{\substack{1 \leq i \leq \frac{n}{2}, \\ c_i \notin C_{10}}} \left( \operatorname{id}+ \operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{c_i} \right) \prod_{r=1}^{\frac{n}{2}} \left( -\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{c_r} \right)^{-c_r-1} {\leftidx{^Q}{\operatorname{HMT}}{}}_{n-1} \left( \frac{l-5}{2};\mathbf{c} \right)\\
\shoveleft = \prod_{c_i \in C_{10}} \left( \operatorname{id}+ \operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{c_i} \right) \left( - Q^{-1} \operatorname{\delta}_{c_i} \right) \prod_{\substack{1 \leq i \leq \frac{n}{2}, \\ c_i \notin C_{10}}} \left( \operatorname{id}+ \operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{c_i} \right)\\
\times \prod_{r=1}^{\frac{n}{2}} \left( \left( \operatorname{id}+ \operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{c_r} \right) \left( - Q^{-1} \operatorname{\delta}_{c_r} \right) \right)^{-c_r-1} {\leftidx{^Q}{\operatorname{HMT}}{}}_{n-1} \left( \frac{l-5}{2};\mathbf{c} \right)\\
= \prod_{c_i \in C_{10}} \left( - Q^{-1} \operatorname{\delta}_{c_i} \right) \prod_{r=1}^{\frac{n}{2}} \left( \operatorname{id}+ \operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{c_r} \right)^{-c_r} \left( - Q^{-1} \operatorname{\delta}_{c_r} \right)^{-c_r-1} {\leftidx{^Q}{\operatorname{HMT}}{}}_{n-1} \left( \frac{l-5}{2};\mathbf{c} \right).
\end{gathered}$$
We want to sum over all possible positions of 10-columns. For this purpose we make use of the elementary symmetric function: The generating function of vertically symmetric alternating sign trapezoids with $p$ many 10-columns is $$e_p \left( - Q^{-1} \operatorname{\delta}_{c_1}, \dots, - Q^{-1} \operatorname{\delta}_{c_\frac{n}{2}} \right) \prod_{r=1}^{\frac{n}{2}} \left( \operatorname{id}+ \operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{c_r} \right)^{-c_r} \left( - Q^{-1} \operatorname{\delta}_{c_r} \right)^{-c_r-1} {\leftidx{^Q}{\operatorname{HMT}}{}}_{n-1} \left( \frac{l-5}{2};\mathbf{c} \right)$$ where $e_p$ denotes the $p^{\text{th}}$ elementary symmetric function. Since $e_p \left( X_1,\dots,X_m \right)$ is the coefficient of $P^p$ in $\prod_{i=1}^m \left( 1+ P X_i \right)$, the $PQ$-generating function is $$\begin{gathered}
\prod_{r=1}^{\frac{n}{2}} \left( \operatorname{id}- P Q^{-1} \operatorname{\delta}_{c_r} \right) \left( \operatorname{id}+ \operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{c_r} \right)^{-c_r} \left( - Q^{-1} \operatorname{\delta}_{c_r} \right)^{-c_r-1} {\leftidx{^Q}{\operatorname{HMT}}{}}_{n-1} \left( \frac{l-5}{2};\mathbf{c} \right) \\
= \prod_{r=1}^{\frac{n}{2}} \left( \operatorname{id}- (P-1)\operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{c_r} \right) \left( - \operatorname{\leftidx{^\textit{Q}}{\Delta}{}}_{c_r} \right)^{-c_r-1} {\leftidx{^Q}{\operatorname{HMT}}{}}_{n-1} \left( \frac{l-5}{2};\mathbf{c} \right).
\end{gathered}$$
The transformation into a constant term identity is analogous to the proof of Theorem \[thm:QHTREEConstantTerm\].
Proof of Theorem \[thm:VSASTPQEnumeration\]
-------------------------------------------
To prove Theorem \[thm:VSASTPQEnumeration\], we need the following lemma. It is a generalisation of [@Fis19 Lemma 9].
\[lem:QASym\] The following identity holds true: $$\begin{gathered}
\label{eq:QASym}
\operatorname{\mathbf{ASym}}_{X_1,\dots,X_m} \left( \prod_{r=1}^{m} \frac{\left( \frac{X_r(1+X_r)}{Q+X_r} \right)^{r-1}}{1-\prod_{j=r}^{m} \frac{X_j(1+X_j)}{Q+X_j} } \prod_{1\leq s<t\leq m} ( Q + (Q-1) X_s + X_t + X_s X_t ) \right)\\
= \prod_{r=1}^{m} \frac{Q+X_r}{Q - X_r^2} \prod_{1\leq s<t\leq m} \frac{(Q(1+X_s)(1+X_t)-X_s X_t)(X_t-X_s)}{Q-X_s X_t}.\end{gathered}$$
We show the identity (\[eq:QASym\]) by induction on $m$; it is proved in a similar way as [@Fis19 Lemma 9] which follows from (\[eq:QASym\]) by setting $Q=1$.
The base case $m=1$ is clear. We set $$\begin{gathered}
f(X_1,\dots,X_m) {\mathrel{\mathop:\!\!=}}\prod_{r=1}^{m} \frac{\left( \frac{X_r(1+X_r)}{Q+X_r} \right)^{r-1}}{1-\prod_{j=r}^{m} \frac{X_j(1+X_j)}{Q+X_j} } \prod_{1\leq s<t\leq m} ( Q + (Q-1) X_s + X_t + X_s X_t )\\
=\left( \prod_{r=2}^m ( Q + (Q-1) X_1 + X_r + X_1 X_r ) \frac{X_r(1+X_r)}{Q+X_r} \right) \left( 1-\prod_{r=1}^m \frac{X_r(1+X_r)}{Q+X_r} \right)^{-1} f(X_2,\dots,X_m).
\end{gathered}$$
By the definition of the antisymmetriser we see that $f$ satisfies the following recursion: $$\begin{gathered}
\operatorname{\mathbf{ASym}}_{X_1,\dots,X_m} f(X_1,\dots,X_m) = \left( 1-\prod_{r=1}^m \frac{X_r(1+X_r)}{Q+X_r} \right)^{-1}\\
\times \sum_{k=1}^m (-1)^{k+1} \prod_{\substack{l=1, \\ l \neq k}}^{m} \frac{X_l(1+X_l)}{Q+X_l} (Q+(Q-1)X_k+X_l+X_k X_l)\\
\times \operatorname{\mathbf{ASym}}_{X_1,\dots,\widehat{X_k},\dots,X_m} f(X_1,\dots,\widehat{X_k},\dots,X_m).
\end{gathered}$$
We want to show that the right-hand side of (\[eq:QASym\]) fulfils the same recursion. Some manipulation yields that this is equivalent to proving the following polynomial identity: $$\begin{gathered}
\left( \prod_{r=1}^m (Q+X_r) - \prod_{r=1}^m X_r(1+X_r) \right) \prod_{1 \le s < t \le m} \left( Q(1+X_s)(1+X_t)-X_s X_t \right)\\
= \sum_{k=1}^m (Q-X_k^2) \prod_{\substack{1 \le s < t \le m, \\ s,t \neq k}} \left( Q(1+X_s)(1+X_t)-X_s X_t \right)\\
\times \prod_{\substack{1 \le r \le m, \\ r \neq k}} \frac{X_r(1+X_r)(Q-X_k X_r)(Q+(Q-1)X_k+X_r+X_k X_r)}{X_r-X_k}.
\end{gathered}$$
Both sides are symmetric polynomials in $X_1,\dots,X_m$ whose leading terms are $-(Q-1)^{\binom{m}{2}} X_1^{m+1} \dots X_m^{m+1}$. The identity holds true for the evaluations $X_i=0$ and $X_i=-1$, and both sides vanish for $X_i = \frac{Q(1+X_j)}{(1-Q)X_j-Q}$ for all $i \neq j$. This completes the proof of (\[eq:QASym\]).
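The polynomial identity above can also be spot-checked with exact rational arithmetic. The following minimal sketch (the helper names `lhs` and `rhs` for the two sides are illustrative, not from the original) confirms equality at a few sample points for $m=2$:

```python
from fractions import Fraction as F

def lhs(Q, X):
    # left-hand side of the polynomial identity
    m = len(X)
    a, b = F(1), F(1)
    for x in X:
        a *= Q + x
        b *= x * (1 + x)
    val = a - b
    for s in range(m):
        for t in range(s + 1, m):
            val *= Q * (1 + X[s]) * (1 + X[t]) - X[s] * X[t]
    return val

def rhs(Q, X):
    # right-hand side: sum over k of the product terms
    m = len(X)
    total = F(0)
    for k in range(m):
        term = F(Q) - X[k] ** 2
        for s in range(m):
            for t in range(s + 1, m):
                if k not in (s, t):
                    term *= Q * (1 + X[s]) * (1 + X[t]) - X[s] * X[t]
        for r in range(m):
            if r != k:
                term *= (X[r] * (1 + X[r]) * (Q - X[k] * X[r])
                         * (Q + (Q - 1) * X[k] + X[r] + X[k] * X[r]))
                term /= X[r] - X[k]
        total += term
    return total
```

For instance, $Q=2$, $(X_1,X_2)=(1,3)$ gives $-117$ on both sides.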
We are now in a position to prove Theorem \[thm:VSASTPQEnumeration\]. To this end, we change the numbering of the columns so that it runs from $-(n-2)$ to $0$ instead of from $-(n-1)$ to $-1$, that is, we shift $c_r \mapsto c_r-1$. Then the generating function of all halved vertically symmetric alternating sign trapezoids with prescribed $1$-columns is equal to the constant term of $$\begin{gathered}
\prod_{r=1}^{\frac{n}{2}} X_r^{2-n} \left(1+X_r\right)^{c_r-\frac{l-3}{2}-\frac{n}{2}} \left( \frac{Q + (Q-P)X_r}{Q - (1-Q)X_r} \right) \left(-\frac{X_r}{Q - (1-Q)X_r}\right)^{-c_r} \\
\times \prod_{1\leq s<t\leq \frac{n}{2}} \left( \left( X_t - X_s\right) \left( X_s + X_t + X_s X_t \right) \right.\\
\left. \times \left( Q + (Q-1)X_s + X_t + X_s X_t \right) \left( Q + (Q-1)X_s + (Q-1)X_t + (Q-2)X_s X_t \right) \right).
\end{gathered}$$
We sum over all possible $1$-column vectors $\mathbf{c}$ and obtain the constant term of $$\begin{gathered}
\sum_{c_1<\dots<c_{\frac{n}{2}}\leq 0} \Bigg( \prod_{r=1}^{\frac{n}{2}} X_r^{2-n} \left(1+X_r\right)^{c_r-\frac{l-3}{2}-\frac{n}{2}} \left( \frac{Q + (Q-P)X_r}{Q - (1-Q)X_r} \right) \left(-\frac{X_r}{Q - (1-Q)X_r}\right)^{-c_r} \\
\times \prod_{1\leq s<t\leq \frac{n}{2}} \left( \left( X_t - X_s\right) \left( X_s + X_t + X_s X_t \right) \right. \\
\left. \times \left( Q + (Q-1)X_s + X_t + X_s X_t \right) \left( Q + (Q-1)X_s + (Q-1)X_t + (Q-2)X_s X_t \right) \right) \Bigg),
\end{gathered}$$ which is equal to the constant term of $$\begin{gathered}
\label{prf:QConstantTerm}
Q^{-\frac{n}{2}} \prod_{r=1}^{\frac{n}{2}} \frac{X_r^{2-n} (1+X_r)^{-\frac{l-3}{2}-\frac{n}{2}} \left( \frac{Q + (Q-P)X_r}{Q - (1-Q)X_r} \right) \left( \frac{-X_r}{(Q-(1-Q)X_r)(1+X_r)} \right)^{\frac{n}{2}-r}}{1-\prod_{j=1}^{r} \left(\frac{-X_j}{(Q-(1-Q)X_j)(1+X_j)}\right)} \\
\times \prod_{1\leq s<t\leq \frac{n}{2}} \left( \left( X_t - X_s\right) \left( X_s + X_t + X_s X_t \right) \right. \\
\left. \times \left( Q + (Q-1)X_s + X_t + X_s X_t \right) \left( Q + (Q-1)X_s + (Q-1)X_t + (Q-2)X_s X_t \right) \right)
\end{gathered}$$ because of the following geometric series expression $$\sum_{c_1<\dots<c_m\leq 0}Y_1^{-c_1} \cdots Y_{m}^{-c_m} = \prod_{r=1}^{m} \frac{Y_r^{m-r}}{1-\prod_{j=1}^{r} Y_j}$$ with $Y_r=\frac{-X_r}{(Q-(1-Q)X_r)(1+X_r)}$ for all $1 \leq r \leq \frac{n}{2}$.
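The geometric series expression can be checked numerically; the sketch below (helper names are illustrative) compares a truncated sum for $m=2$ with the closed product form, using exact fractions for rational $Y_r$ with $|Y_r|<1$:

```python
from fractions import Fraction as F

def truncated_sum(Y, N):
    # sum of Y1^{-c1} Y2^{-c2} over c1 < c2 <= 0, truncated at c1 >= -N
    total = F(0)
    for c2 in range(0, -N - 1, -1):
        for c1 in range(c2 - 1, -N - 1, -1):
            total += Y[0] ** (-c1) * Y[1] ** (-c2)
    return total

def closed_form(Y):
    # product_{r=1}^m Y_r^{m-r} / (1 - Y_1 ... Y_r)
    m = len(Y)
    val = F(1)
    prod_so_far = F(1)
    for r in range(m):
        prod_so_far *= Y[r]
        val *= Y[r] ** (m - 1 - r) / (1 - prod_so_far)
    return val
```

For $Y=(\frac{1}{2},\frac{1}{3})$ the closed form equals $\frac{6}{5}$, and the truncated series approaches this value geometrically fast.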
To simplify this expression, we use Lemma \[lem:QASym\]. We set $m=\frac{n}{2}$ and replace $X_i \mapsto -\frac{X_{m+1-i}}{1+X_{m+1-i}}$ to obtain the following identity: $$\begin{gathered}
\label{eq:QASymVar}
\operatorname{\mathbf{ASym}}_{X_1,\dots,X_m} \left( \prod_{r=1}^{m} \frac{\left( \frac{-X_r}{(Q-(1-Q)X_r)(1+X_r)} \right)^{m-r}}{1-\prod_{j=1}^{r} \left( \frac{-X_j}{(Q-(1-Q)X_j)(1+X_j)} \right)} \prod_{1\leq s<t\leq m} \left( Q + (Q-1) X_s + X_t + X_s X_t \right) \right)\\
= \prod_{r=1}^{m} \frac{(1+X_r)(Q-(1-Q)X_r)}{Q(1+X_r)^2 - X_r^2} \prod_{1\leq s<t\leq m} \frac{\left(Q-X_s X_t\right) \left(X_t - X_s\right)}{Q(1+X_s)(1+X_t)-X_s X_t}.
\end{gathered}$$
We apply (\[eq:SymMethod\]) and (\[eq:QASymVar\]) to (\[prf:QConstantTerm\]) and obtain the constant term of $$\begin{gathered}
\frac{1}{\left(\frac{n}{2}\right)!} \operatorname{\mathbf{Sym}}_{X_1,\dots,X_{\frac{n}{2}}} \Bigg( \prod_{r=1}^{\frac{n}{2}} \frac{X_r^{2-n} (1+X_r)^{-\frac{l-3}{2}-\frac{n}{2}} \left( \frac{Q + (Q-P)X_r}{Q - (1-Q)X_r} \right) \left( \frac{-X_r}{(Q-(1-Q)X_r)(1+X_r)} \right)^{\frac{n}{2}-r}}{1-\prod_{j=1}^{r} \left(\frac{-X_j}{(Q-(1-Q)X_j)(1+X_j)}\right)} \\
\times \prod_{1\leq s<t\leq \frac{n}{2}} \left( \left( X_t - X_s\right) \left( X_s + X_t + X_s X_t \right) \right. \\
\left. \times \left( Q + (Q-1)X_s + X_t + X_s X_t \right) \left( Q + (Q-1)X_s + (Q-1)X_t + (Q-2)X_s X_t \right) \right) \Bigg)\\
\shoveleft = \frac{1}{\left(\frac{n}{2}\right)!} \prod_{r=1}^{\frac{n}{2}} \frac{ \frac{Q + (Q-P)X_r}{Q - (1-Q)X_r} }{X_r^{n-2} \left(1+X_r\right)^{\frac{l-3}{2}+\frac{n}{2}}} \\
\times \prod_{1\leq s<t\leq \frac{n}{2}} \left( X_t - X_s\right) \left( X_s + X_t + X_s X_t \right) \left( Q + (Q-1)X_s + (Q-1)X_t + (Q-2)X_s X_t \right)\\
\times \operatorname{\mathbf{ASym}}_{X_1,\dots,X_{\frac{n}{2}}} \left( \prod_{r=1}^{\frac{n}{2}} \frac{\left( \frac{-X_r}{(Q-(1-Q)X_r)(1+X_r)} \right)^{\frac{n}{2}-r}}{1-\prod_{j=1}^{r} \left( \frac{-X_j}{(Q-(1-Q)X_j)(1+X_j)} \right)} \prod_{1\leq s<t\leq \frac{n}{2}} \left( Q + (Q-1)X_s + X_t + X_s X_t \right) \right).
\end{gathered}$$
To complete the proof, we finally apply Lemma \[lem:QASym\].
Remarks {#sec:Remarks}
=======
The 2-Enumeration of Halved Monotone Triangles
----------------------------------------------
It turns out that the $2$-enumeration of halved monotone triangles can be written as an operator-free product formula. This comes as no surprise: the $2$-enumeration of alternating sign matrices had already been solved by Mills, Robbins, and Rumsey [@MRR83], whereas the straight $1$-enumeration remained unsolved for over a decade. Lai investigates the $2$-enumeration of so-called *antisymmetric monotone triangles* [@Lai], which had apparently been proved before by Jockusch and Propp in an unpublished work. Antisymmetric monotone triangles essentially correspond to halved monotone triangles with no entry larger than $-1$. Lai considers the following *$q$-weight*: it counts all entries that appear in some row but not in the row directly above it. We can recover the $q$-weight of a halved monotone triangle from its $Q$-weight: consider two consecutive rows of a halved monotone triangle and suppose that the upper row contributes the weight $Q^m$. If the lower row has the same number of entries, then it contributes the weight $q^m$; if the lower row has one entry more, it contributes the weight $q^{m+1}$. The top row of a halved monotone triangle always contributes a factor $q$. In total, a halved monotone triangle of order $n$ and $Q$-weight $Q^m$ has $q$-weight $q^{m+\lfloor\frac{n}{2}\rfloor}$. This observation implies the following theorem as a corollary of [@Lai Theorem 1.1]:
The $2$-enumeration of halved monotone triangles of order $n$, prescribed bottom row $(k_1,\dots,k_{\lceil\frac{n}{2}\rceil})$ and no entry larger than $K$ is given by $$4^{\binom{\frac{n+1}{2}}{2}} \prod_{1\le i<j\le \frac{n+1}{2}}\frac{(k_j-k_i)(2K+1-k_i-k_j)}{(j-i)(i+j-1)}$$ if $n$ is odd and by $$4^{\binom{\frac{n}{2}}{2}} \prod_{1\le i<j\le \frac{n}{2}}\frac{(k_j-k_i)(2K+1-k_i-k_j)}{(j-i)(i+j)} \prod_{i=1}^{\frac{n}{2}}\frac{2K+1-2k_i}{i}$$ if $n$ is even.
Enumeration of Halved Gelfand-Tsetlin Patterns
----------------------------------------------
If we weaken the conditions in the definition of halved monotone triangles by allowing rows to be weakly increasing, we obtain *halved Gelfand-Tsetlin patterns*. They are enumerated by the operands in Theorem \[thm:QHMTenumeration\]. First, we derive an enumeration formula by means of nonintersecting lattice paths. Then, we encounter two more interpretations of halved Gelfand-Tsetlin patterns, one as lozenge tilings of certain regions and one in terms of representations of the symplectic group.
To enumerate halved Gelfand-Tsetlin patterns by nonintersecting lattice paths, we modify the bijection in [@Fis12] between regular Gelfand-Tsetlin patterns and nonintersecting lattice paths: Given a halved Gelfand-Tsetlin pattern with $n$ rows, bottom row $\left( k_1, \dots, k_{\lceil \frac{n}{2} \rceil} \right)$ and no entry larger than $K$, we divide it into $\nearrow$-diagonals and number them from left to right by $1$ to $n$. At the right end of each diagonal we put an additional bounding entry $K$. We add $i-1$ to the entries of the $i^{\text{th}}$ diagonal. These entries are the heights of paths connecting the starting points $\left( -1,k_i+i-1 \right)$ and end points $\left(n+2-2i,K+i-1\right)$ with $(1,0)$- and $(0,1)$-steps. We cut off the first and the last step of each path since they are always horizontal steps, and thus we obtain the starting points $S_i {\mathrel{\mathop:\!\!=}}\left( 0,k_i+i-1 \right)$ and the end points $E_i {\mathrel{\mathop:\!\!=}}\left( n+1-2i,K+i-1 \right)$.
*(Figure \[figure:LatticePaths\]: a family of three nonintersecting lattice paths with starting points $S_1=(0,1)$, $S_2=(0,3)$, $S_3=(0,4)$ and end points $E_1=(5,3)$, $E_2=(3,4)$, $E_3=(1,5)$.)*
This yields a bijection between halved Gelfand-Tsetlin patterns and families of certain nonintersecting lattice paths; see Figure \[figure:LatticePaths\] for an example. By the Lindström-Gessel-Viennot lemma, the number of nonintersecting lattice paths connecting the given starting and end points is given by $$\label{eq:LatticPathsDet}
\det\limits_{1 \le i,j \le \lceil \frac{n}{2} \rceil}\left( \binom{n+K+1-k_i-i-j}{n+1-2j} \right).$$
By using [@Kra01 Theorem 27], this determinant evaluates to $$\prod_{i=1}^{\lceil \frac{n}{2} \rceil} \frac{(K+n-\lceil \frac{n}{2} \rceil +1 -k_i -i)!}{(K+ \lceil \frac{n}{2} \rceil -k_i -i)! (n+1-2i)!} \prod_{1\leq i<j\leq \lceil \frac{n}{2} \rceil} (k_j-k_i+j-i) (2K+n+2 -k_i-k_j -i-j),$$ which is equivalent to (\[eq:HMTDetEven\]) or (\[eq:HMTDetOdd\]) if $n$ is even or odd, respectively.
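For small parameters, the agreement between the Lindström-Gessel-Viennot determinant and the evaluated product can be confirmed computationally. The following sketch (with illustrative helper names, and the lattice-path convention $\binom{a}{b}=0$ outside Pascal's triangle) expands the determinant over permutations:

```python
from fractions import Fraction as F
from itertools import permutations
from math import comb, factorial

def binom(a, b):
    # lattice-path convention: zero outside Pascal's triangle
    return comb(a, b) if 0 <= b <= a else 0

def lgv_determinant(n, K, k):
    # the determinant of binom(n+K+1-k_i-i-j, n+1-2j), expanded over permutations
    m = len(k)  # m = ceil(n/2)
    total = 0
    for sigma in permutations(range(1, m + 1)):
        sign = (-1) ** sum(sigma[i] > sigma[j]
                           for i in range(m) for j in range(i + 1, m))
        term = sign
        for i in range(1, m + 1):
            term *= binom(n + K + 1 - k[i - 1] - i - sigma[i - 1],
                          n + 1 - 2 * sigma[i - 1])
        total += term
    return total

def product_formula(n, K, k):
    # the closed product form obtained from Krattenthaler's determinant lemma
    m = len(k)
    val = F(1)
    for i in range(1, m + 1):
        val *= F(factorial(K + n - m + 1 - k[i - 1] - i),
                 factorial(K + m - k[i - 1] - i) * factorial(n + 1 - 2 * i))
    for i in range(1, m + 1):
        for j in range(i + 1, m + 1):
            val *= ((k[j - 1] - k[i - 1] + j - i)
                    * (2 * K + n + 2 - k[i - 1] - k[j - 1] - i - j))
    return val
```

For instance, $n=5$, $K=8$ and bottom row $(1,3,4)$ give $1540$ on both sides.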
Halved Gelfand-Tsetlin patterns can also be interpreted as lozenge tilings of so-called *quartered hexagons*: Consider a trapezoidal region in the triangular lattice whose left side has length $a$, whose upper and lower parallel sides have lengths $b$ and $b+\lceil \frac{a}{2} \rceil$, and whose right side is a vertical zigzag line of length $a$. Remove $\lceil \frac{a}{2} \rceil$ unit triangles from the lower side at positions $d_1, d_2,\dots,d_{\lceil \frac{a}{2} \rceil}$. We denote this region by $H_{a,b} (d_1, d_2,\dots,d_{\lceil \frac{a}{2} \rceil})$ (see Figure \[figure:QuarteredHexagons\]).
*(Figure \[figure:QuarteredHexagons\]: the quartered hexagons $H_{a,b}(d_1,\dots,d_{\lceil a/2\rceil})$ for odd (left) and even (right) $a$; the left side has length $a$, the upper side length $b$, the lower side length $b+\lceil\frac{a}{2}\rceil$, and the removed unit triangles are marked $d_1, d_2, \dots$ along the lower side.)*
Halved Gelfand-Tsetlin patterns with $n$ rows, bottom row $(k_1, k_2,\dots,k_{\lceil \frac{n}{2} \rceil})$ and no entry larger than $K$ correspond to lozenge tilings of the region $H_{n,K-k_1} (k_1, k_2,\dots,k_{\lceil \frac{n}{2} \rceil})$: Divide a given such pattern into $\nearrow$-diagonals as seen before and number them from left to right by $1$ to $n$. For all $1 \le i \le n$, add $i-k_1$ to the entries of the $i^{\text{th}}$ diagonal. Thus, we ensure that the leftmost entry in the bottom row is transformed into $1$. The entries in the Gelfand-Tsetlin pattern determine the positions of the lozenges with vertical long diagonal, that is, the tiles consisting of an upward- and a downward-pointing unit triangle glued along their common horizontal edge. The remaining tiles are forced by these initial lozenges. Figure \[figure:LozengeTiling\] illustrates an example.
*(Figure \[figure:LozengeTiling\]: a lozenge tiling of a quartered hexagon; the labels inside the vertical lozenges record the shifted entries of the corresponding halved Gelfand-Tsetlin pattern.)*
The lozenge tilings of the region $H_{n,K-k_1} (k_1, k_2,\dots,k_{\lceil \frac{n}{2} \rceil})$ are enumerated in [@Lai14 Theorem 3.1], and their number agrees with (\[eq:HMTDetEven\]) and (\[eq:HMTDetOdd\]).
Regarding the interpretation in terms of representation theory, we see that halved Gelfand-Tsetlin patterns are in bijective correspondence with *symplectic patterns* as defined by Proctor [@Pro94]: Given a halved Gelfand-Tsetlin pattern of order $n$, bottom row $(k_1,\dots,k_{\lceil\frac{n}{2}\rceil})$ and no entry larger than $K$, replace every entry $x$ by $K-x$ and flip the object upside down to transform it into an $n$-symplectic pattern with the partition $(K-k_1,\dots,K-k_{\lceil\frac{n}{2}\rceil})$ as top row.
Let $n$ be even and denote by $R_i$ the sum of all the entries in row $i$ – counted from bottom to top – of a given $n$-symplectic pattern $P$. The weight is defined as $$w_{even}(P)=\prod_{i=1}^{\frac{n}{2}} x_i^{R_{2i}-2R_{2i-1}+R_{2i-2}},$$ where $R_0 {\mathrel{\mathop:\!\!=}}0$. Proctor showed [@Pro94 Theorem 4.2] that the generating function of all $n$-symplectic patterns with top row $\lambda$ with respect to the weight function $w_{even}$ is given by the *symplectic character* $sp_{\lambda} (x_1,\dots,x_{\frac{n}{2}})$, also known as *symplectic Schur function*. It can be expressed in terms of complete homogeneous symmetric functions by the following Jacobi-Trudi type formula $$\label{eq:SymplecticCharacter}
sp_{\lambda} \left(x_1,\dots,x_{N}\right) = \frac{1}{2} \det_{1 \le i,j \le N} \left( h_{\lambda_i -i+j} \left(x_1,\dots,x_{N}\right) + h_{\lambda_i -i-j+2} \left(x_1,\dots,x_{N}\right) \right),$$ where $N=\frac{n}{2}$.
Consequently, the number of all halved monotone triangles of even order $n$, bottom row $(k_1,\dots,k_{\frac{n}{2}})$ and no entry larger than $K$ is given by $$\begin{gathered}
sp_{(K-k_1,\dots,K-k_{\frac{n}{2}})} (1,\dots,1)\\
= \prod_{1\leq i<j\leq \frac{n}{2}} \frac{(k_j-k_i+j-i)(2K+n+2-k_i-k_j-i-j)}{(j-i)(i+j)} \prod_{i=1}^{\frac{n}{2}}\frac{K+\frac{n}{2}+1-k_i-i}{i},\end{gathered}$$ which follows from [@FH91 Exercise 24.20].
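This dimension count can be checked against the Jacobi-Trudi type determinant (\[eq:SymplecticCharacter\]). The sketch below (helper names are illustrative) assumes the standard convention that $h_m$ is evaluated in the $2N$ Laurent variables $x_1^{\pm 1},\dots,x_N^{\pm 1}$, so that at the all-ones point $h_m = \binom{m+2N-1}{2N-1}$:

```python
from fractions import Fraction as F
from itertools import permutations
from math import comb

def h_ones(m, N):
    # h_m at the all-ones point of the 2N symplectic variables
    return comb(m + 2 * N - 1, 2 * N - 1) if m >= 0 else 0

def sp_dimension(lam):
    # (1/2) det( h_{lam_i - i + j} + h_{lam_i - i - j + 2} ) at the all-ones point
    N = len(lam)
    det = 0
    for sigma in permutations(range(1, N + 1)):
        inv = sum(sigma[a] > sigma[b]
                  for a in range(N) for b in range(a + 1, N))
        term = (-1) ** inv
        for i in range(1, N + 1):
            j = sigma[i - 1]
            term *= (h_ones(lam[i - 1] - i + j, N)
                     + h_ones(lam[i - 1] - i - j + 2, N))
        det += term
    return det // 2

def halved_mt_count(n, K, k):
    # product formula for halved monotone triangles of even order n
    m = n // 2
    val = F(1)
    for i in range(1, m + 1):
        for j in range(i + 1, m + 1):
            val *= F((k[j - 1] - k[i - 1] + j - i)
                     * (2 * K + n + 2 - k[i - 1] - k[j - 1] - i - j),
                     (j - i) * (i + j))
    for i in range(1, m + 1):
        val *= F(K + m + 1 - k[i - 1] - i, i)
    return val
```

For example, $\lambda=(3,2)$ (that is $n=4$, $K=3$ and bottom row $(0,1)$) gives $40$ from both the determinant and the product formula, matching the classical dimension formula for $Sp(4)$.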
The classical symplectic group is only defined on even-dimensional vector spaces. However, Proctor defines symplectic groups on vector spaces of odd dimension $n$ and proves [@Pro88 Proposition 3.1] that the indecomposable trace-free tensor character is also given by the identity (\[eq:SymplecticCharacter\]) with $N=\frac{n+1}{2}$.
As in the previous case, we give a combinatorial interpretation in terms of symplectic patterns. Let $n$ be odd and define the weight of an $n$-symplectic pattern $P$ as $$w_{odd}(P)=x_{\frac{n+1}{2}}^{R_n-R_{n-1}} \prod_{i=1}^{\frac{n-1}{2}} x_i^{R_{2i}-2R_{2i-1}+R_{2i-2}}.$$
It holds true [@Pro94 Theorem 4.2] that the generating function of all $n$-symplectic patterns with top row $\lambda$ with respect to the weight function $w_{odd}$ is given by the symplectic character $sp_{\lambda} (x_1,\dots,x_{\frac{n+1}{2}})$. This implies $$sp_{(K-k_1,\dots,K-k_{\frac{n+1}{2}})} (1,\dots,1)
= \prod_{1\leq i<j\leq \frac{n+1}{2}}\frac{(k_j-k_i+j-i)(2K+n+2-k_j-k_i-j-i)}{(j-i)(j+i-1)}.$$
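The two product formulas above are straightforward to evaluate with exact rational arithmetic. The sketch below (in Python; the function names are ours, not from the source) implements them directly and illustrates that the products always collapse to integers, as a counting formula must:

```python
from fractions import Fraction

def count_even(K, k):
    """Even case: number of halved monotone triangles of order n = 2*len(k)
    with bottom row k = (k_1, ..., k_{n/2}) and no entry larger than K."""
    N = len(k)              # N = n/2
    n = 2 * N
    val = Fraction(1)
    for i in range(1, N + 1):
        for j in range(i + 1, N + 1):
            val *= Fraction(
                (k[j-1] - k[i-1] + j - i) * (2*K + n + 2 - k[i-1] - k[j-1] - i - j),
                (j - i) * (i + j))
    for i in range(1, N + 1):
        val *= Fraction(K + N + 1 - k[i-1] - i, i)
    return val

def count_odd(K, k):
    """Odd case: the analogous count for order n = 2*len(k) - 1."""
    N = len(k)              # N = (n+1)/2
    n = 2 * N - 1
    val = Fraction(1)
    for i in range(1, N + 1):
        for j in range(i + 1, N + 1):
            val *= Fraction(
                (k[j-1] - k[i-1] + j - i) * (2*K + n + 2 - k[j-1] - k[i-1] - j - i),
                (j - i) * (j + i - 1))
    return val

print(count_even(3, (1, 2)))   # order 4, bottom row (1, 2), entries <= 3
print(count_odd(3, (1, 2)))    # order 3, bottom row (1, 2), entries <= 3
```

For order $n=2$ both products over pairs are empty and the even formula reduces algebraically to $K+1-k_1$.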
[^1]: Supported by the Austrian Science Foundation FWF, START grant Y463 and SFB grant F50.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'In oxide materials, nanostructuring has been found to be a very promising approach for the enhancement of the *figure of merit*, *ZT*. In the present work, we have synthesized the La$_{0.7}$Sr$_{0.3}$MnO$_{3}$ (LSMO) compound using the sol-gel method, and samples with crystallite sizes of $\sim$20, $\sim$41, and $\sim$49 nm were obtained by applying different heat treatments. Seebeck coefficient ($\alpha$), electrical resistivity ($\rho$), and thermal conductivity ($\kappa$) measurements were carried out in the 300-600 K temperature range. A systematic change in the value of $\alpha$ from $\sim$ -19 $\mu$V/K to $\sim$ -24 $\mu$V/K and a drastic reduction in the value of $\kappa$ from $\sim$0.88 W/mK to $\sim$0.23 W/mK are observed at $\sim$600 K as the crystallite size is reduced from 49 nm to 20 nm. Also, the decrease in the values of $\rho$ in the paramagnetic (PM) insulator phase (400-600 K) is effectively responsible for the increasing trend in the values of *ZT* at high temperature. For a crystallite size of 41 nm, the value of *ZT* at 600 K was found to be $\sim$0.017, and it is predicted to increase to $\sim$0.045 around 650 K. The predicted value of *ZT* suggests that LSMO can be a suitable oxide material for thermoelectric applications at high temperature.'
author:
- Saurabh Singh
- Simant Kumar Srivastav
- Ashutosh Patel
- Ratnamala Chatterjee
- Sudhir Kumar Pandey
bibliography:
- 'aipsamp.bib'
title: 'Effect of nanostructure on thermoelectric properties of La$_{0.7}$Sr$_{0.3}$MnO$_{3}$ in 300-600 K range'
---
In the past few decades, thermoelectric (TE) materials have been investigated extensively as an alternative and renewable source of energy.[@disalvo; @bell] In the search for new materials, oxide materials have attracted much attention in the field due to their non-toxicity, oxidation resistance, high-temperature stability, and easy, low-cost manufacturing.[@koumoto] A material is judged suitable for TE applications on the basis of the *figure of merit*, *ZT*, which is defined as *ZT* = ($\alpha$$^{2}$$\sigma$T/$\kappa$), where the terms $\alpha$, $\sigma$, $\kappa$, and T are the Seebeck coefficient (or thermopower), electrical conductivity (inverse of the electrical resistivity, $\rho$), thermal conductivity, and absolute temperature, respectively.[@pei; @lalonde] There are two different contributions to the total $\kappa$, defined as $\kappa$ = $\kappa$$_{e}$ + $\kappa$$_{l}$, where $\kappa$$_{e}$ and $\kappa$$_{l}$ are known as the electronic and lattice thermal conductivity, respectively. The expression for *ZT* suggests that the magnitudes of $\alpha$ and $\sigma$ should be large, whereas a low value of $\kappa$ (especially $\kappa$$_{l}$) is required, for high values of *ZT*.[@nolas] In the search for materials with high *ZT* values, many experimental and theoretical approaches have been used, such as preparing materials with appropriate combinations of elements, suitable doping, lowering the dimensionality, creating defect mechanisms, nanostructuring, and band engineering.[@minnich] Among these, nanostructuring has been one of the most effective approaches for obtaining higher *ZT* values, as decreasing the grain size to the nanoscale increases the phonon scattering in the intragranular region.
Due to this scattering effect, the phonon mean free path is reduced, which results in a decrease in the value of $\kappa$$_{l}$.[@dong] In many nanocrystalline materials, the overall values of $\kappa$ were found to be much lower than those of the corresponding bulk or single-crystal material.[@nan]\
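The defining expression for the figure of merit is trivial to evaluate once $\alpha$, $\rho$, and $\kappa$ are measured. A minimal sketch, using the standard dimensionless form *ZT* = $\alpha^{2}\sigma T/\kappa$ with purely illustrative input values (not data from this work):

```python
def figure_of_merit(alpha, rho, kappa, T):
    """Dimensionless thermoelectric figure of merit ZT = alpha^2 * sigma * T / kappa.

    alpha : Seebeck coefficient in V/K
    rho   : electrical resistivity in Ohm*m (sigma = 1/rho)
    kappa : total thermal conductivity in W/(m K)
    T     : absolute temperature in K
    """
    sigma = 1.0 / rho
    return alpha**2 * sigma * T / kappa

# Illustrative (hypothetical) values of a plausible order of magnitude:
zt = figure_of_merit(alpha=-24e-6, rho=1e-3, kappa=0.23, T=600.0)
print(f"ZT = {zt:.4f}")
```

Because $\alpha$ enters squared, a modest gain in thermopower outweighs a comparable relative change in either $\rho$ or $\kappa$.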
Generally, oxide materials have limitations for TE applications due to their large values of $\kappa$. From the industrial point of view, mechanisms by which the value of $\kappa$ can be reduced in oxide materials, while maintaining an optimized value of the power factor ($\alpha$$^{2}$$\sigma$), are highly sought and play an important role in tuning material properties for TE applications. A size effect on the TE properties has been seen in oxide materials, where lowering the crystallite size into the nm range increases the magnitude of $\alpha$ and drastically reduces the value of $\kappa$.[@dura] The size effect on the TE properties of La$_{0.7}$Sr$_{0.3}$MnO$_{3}$ (LSMO) samples is of interest because of the improvement in the magnitude of $\alpha$ with reduction in crystallite size.[@salazar] The study of the TE properties of this compound by Salazar *et al*. was done on a smallest crystallite size of 73 nm, and the measurements of $\alpha$ and $\sigma$ were reported only up to 500 K. There are various reports on TE measurements of bulk samples over a wide temperature range.[@mahendiran; @ohtani] In the 300-400 K region, measurements on a micro-crystallite sample show a large value of $\kappa$ ($\sim$2.4 W/mK at 300 K), which limits its use for TE applications.[@wang] In this situation, a possible solution is the reduction of $\kappa$ by lowering the lattice thermal conductivity and improving the magnitude of $\alpha$ while keeping a moderate value of $\rho$. Also, to the best of our knowledge, a study of the effect of nano crystallite size on the value of *ZT* of this compound in the high-temperature region is lacking in the literature. With this motivation, a TE study on nano-crystallite LSMO samples was carried out in the high-temperature region.\
In the present work, the La$_{0.7}$Sr$_{0.3}$MnO$_{3}$ compound was synthesized using the sol-gel method, and samples with three different crystallite sizes in the nanoscale range (i.e., 20, 41, and 49 nm) were obtained by sintering the samples at different temperatures and for different durations. Measurements of $\alpha$, $\rho$, and $\kappa$ were carried out in the 300-600 K temperature range, and values of *ZT* were also estimated to assess the applicability of this material for TE applications.\
Nanoparticles of La$_{0.7}$Sr$_{0.3}$MnO$_{3}$ (LSMO) were prepared by the sol–gel method.[@srivastav] Lanthanum nitrate hexahydrate, La(NO$_{3}$)$_{3}$.6H$_{2}$O, manganese nitrate tetrahydrate, Mn(NO$_{3}$)$_{2}$.4H$_{2}$O, strontium nitrate, Sr(NO$_{3}$)$_{2}$ (all from Sigma-Aldrich, USA), propylene glycol (Thomas Baker, India), and citric acid (Merck, Germany) were of analytical grade. Stoichiometric amounts of the metal nitrates were dissolved in deionized water separately and then mixed to prepare the precursor solution. Further, propylene glycol and citric acid were added to the above precursor solution in a 1:1 mole ratio with respect to the metal nitrates. The solution was magnetically stirred and heated on a hot plate at $\sim$90 $^{o}$C until all the liquid had evaporated and a black precursor powder was obtained. In order to obtain well-crystallized LSMO nanoparticles, the precursor powder was calcined at 800 $^{o}$C for 3 hrs in ambient atmosphere with a heating rate of 2 $^{o}$C min$^{-1}$. To obtain samples with different grain sizes, the powders were pressed into 5 mm diameter pellets under a pressure of $\sim$40 kg/cm$^{2}$, and the pellets were sintered at 800 $^{o}$C (24 hr) and 850 $^{o}$C (72 hr). For structural characterization, XRD patterns were recorded on a Rigaku Advance x-ray diffractometer with Cu K$\alpha$ radiation ($\lambda$= 1.5418 Å). The temperature-dependent measurements of $\alpha$, $\rho$, and $\kappa$ were carried out using home-made setups.[@Patel; @resistivitysingh]\
Fig. 1 shows the XRD patterns of all the samples. To analyze the x-ray data, Rietveld refinement was performed using the FULLPROF software.[@rodriguez] From the refinement, a goodness of fit $\chi$$^{2}$ of 1.25 was achieved, and the lattice parameters of the unit cell were *a* = 5.488(3) $\AA$ and *c* = 13.371(5) $\AA$. The refinement result (shown in the inset of Fig. 1), corresponding to a rhombohedral structure described by the R-3C space group, confirms that the sample is single phase.[@radaelli]
![(Color online) XRD pattern of La$_{0.7}$Sr$_{0.3}$MnO$_{3}$ sample. Rietveld refinement of the XRD data (800 $^{o}$C, 3h) is shown in the inset of the figure.](LSMO.eps){width="45.00000%"}
The values of the crystallite size (D) were calculated from the full width at half maximum (FWHM) of the most intense peak using the Debye–Scherrer formula, D = \[(k$\lambda$)/(B(*2$\theta$*)$\cdot$cos $\theta$)\], where k is a constant (0.94), B(*2$\theta$*) is the FWHM, and $\lambda$ is 1.5418 Å.[@warren] The estimated crystallite sizes were found to be 20 nm, 41 nm, and 49 nm for the samples sintered at 800 $^{o}$C (3 hr), 800 $^{o}$C (24 hr), and 850 $^{o}$C (72 hr), respectively. We have also estimated the particle size of the sample (LSMO 800 $^{o}$C, 3 hr) from transmission electron microscopy (TEM) image analysis (see supplementary material), and the mean particle size was found to be $\sim$20 nm, which is consistent with the value obtained from the XRD result.[@kumar]\
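For reference, the Scherrer estimate is easy to reproduce numerically. In the sketch below the peak position and FWHM are assumed values chosen only for illustration (the paper reports the resulting sizes, not the raw peak widths):

```python
import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_angstrom=1.5418, k=0.94):
    """Crystallite size D = k*lambda / (B * cos(theta)) in nm,
    with the FWHM B converted from degrees to radians."""
    B = math.radians(fwhm_deg)                  # FWHM in radians
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle (half of 2*theta)
    d_angstrom = k * wavelength_angstrom / (B * math.cos(theta))
    return d_angstrom / 10.0                    # 1 nm = 10 Angstrom

# An assumed peak near 2*theta = 32.7 deg with an assumed FWHM of 0.43 deg
# gives D close to 20 nm:
print(scherrer_size_nm(0.43, 32.7))
```

Since D scales as 1/B, halving the FWHM doubles the inferred crystallite size, which is why sharper peaks after longer sintering signal larger grains.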
The temperature-dependent variation of the measured values of $\alpha$ in the 300-600 K range is shown in Fig. 2.
![(Color online) Seebeck coefficient, $\alpha$(T), variation with temperature.](ST.eps){width="35.00000%"}
In the 300-450 K range, an increasing trend in the magnitude of $\alpha$ is noticed for all the samples, whereas above 450 K the values of $\alpha$ are almost constant up to 600 K. Negative values of $\alpha$ are observed for all the samples over the entire temperature range under study, which suggests that electrons are the dominant carriers contributing to $\alpha$ and that this system has the character of an *n-type* TE material. At 600 K, the values of $\alpha$ are $\sim$-19, $\sim$-22, and $\sim$-24 $\mu$V/K for the 49 nm, 41 nm, and 20 nm samples, respectively. These values are larger than the reported value of $\alpha$ ($\sim$ -17 $\mu$V/K at $\sim$470 K) for a 73 nm crystallite size sample.[@salazar] Since the *ZT* of a material depends on the square of $\alpha$, the observed increase in the magnitude of $\alpha$ with decreasing crystallite size of LSMO suggests that this compound can be a good TE material.\
For the samples with different crystallite sizes, the electrical resistivity ($\rho$) vs. temperature plots are shown in Fig. 3.
![(Color online) Resistivity, $\rho$(T), vs. T behavior of samples with grain size 20 nm, 41 nm, and 49 nm.](RT.eps){width="35.00000%"}
From Fig. 3, it is observed that with a decrease in crystallite size from 49 nm to 20 nm, the magnitude of the resistivity increases, whereas the metal-to-insulator transition temperature (T$_{MI}$) shifts towards lower temperature. This type of behavior has also been noticed in LSMO and other oxide materials.[@dey; @salazar] The values of T$_{MI}$ are $\sim$340 K, $\sim$358 K, and $\sim$370 K for the 20 nm, 41 nm, and 49 nm samples, respectively. The values of $\rho$ are found to be of the same order as the reported values for a bulk sample ($\rho$ = $\sim$0.08 $\Omega$ cm at 300 K).[@taran] In the insulating region, 400-600 K, the values of $\rho$ decrease with temperature, and only a small difference between the magnitudes of $\rho$ of the 41 nm and 49 nm samples is observed. The overall behavior of $\rho$ and the variation in its magnitude from the 20 nm to the 49 nm sample suggest that this compound has good electrical properties and can be useful for TE applications.\
Fig. 4 shows the temperature-dependent behavior of the total thermal conductivity ($\kappa$) of the 20 nm, 41 nm, and 49 nm samples in the 300-600 K temperature range.
![(Color online) Temperature dependent variation of thermal conductivity of La$_{0.7}$Sr$_{0.3}$MnO$_{3}$ with 20 nm, 41 nm, and 49 nm crystallite size samples.](KT.eps){width="35.00000%"}
For the different grain size samples, $\kappa$ has a very weak dependence on temperature and is found to be almost linear over the whole temperature range under study. At 300 K, the observed values of $\kappa$ are $\sim$0.89, $\sim$0.45, and $\sim$0.23 W/mK for the 49 nm, 41 nm, and 20 nm samples, respectively. With a decrease in grain size from 49 nm to 20 nm, a systematic decrease in the values of $\kappa$ is found. These values of thermal conductivity are very small in comparison to the value of $\kappa$ reported for a bulk sample ($\kappa$ = $\sim$2.4 W/m K at 300 K). The drastic decrease in the $\kappa$ values is due to the increase in phonon scattering with decreasing crystallite size, which lowers the contribution of the lattice part to the total thermal conductivity. The values of $\kappa$ for the 49 nm, 41 nm, and 20 nm samples are $\sim$63%, $\sim$81%, and $\sim$90% smaller than the reported values for microcrystalline samples. This kind of behavior has also been seen in a similar oxide material.[@dura] For La$_{0.7}$Sr$_{0.3}$MnO$_{3}$, such a large reduction in the $\kappa$ values for samples with crystallite sizes in the nm range is reported here for the first time. Minimization of the $\kappa$ value is one of the most challenging tasks and the most crucial requirement for obtaining a higher value of *ZT*. By synthesizing the sample with a crystallite size of 20 nm, the value of $\kappa$ of LSMO has been reduced to one-tenth of the value reported for the bulk at 300 K, which is a very good sign for TE applications.\
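The quoted percentage reductions follow from simple arithmetic against the $\sim$2.4 W/mK microcrystalline reference value at 300 K; a quick check:

```python
kappa_bulk = 2.4                              # W/mK, microcrystalline reference at 300 K
kappa_nano = {49: 0.89, 41: 0.45, 20: 0.23}   # W/mK at 300 K, values from this work

for size_nm, kappa in kappa_nano.items():
    reduction = 100.0 * (1.0 - kappa / kappa_bulk)
    print(f"{size_nm} nm: kappa = {kappa} W/mK -> {reduction:.0f}% below bulk")
# Reproduces the quoted ~63%, ~81%, and ~90% reductions.
```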
In order to assess the potential of La$_{0.7}$Sr$_{0.3}$MnO$_{3}$ for TE applications, we have also estimated the *ZT* values in the 300-600 K temperature range, which are shown in Fig. 5.
![(Color online) Temperature variation of the *figure of merit*, *ZT*. The inset shows the predicted data up to 650 K.](ZTall.eps){width="40.00000%"}
The values of *ZT* are found to increase monotonically with temperature in the 300-500 K range, and in this range the values of *ZT* of the 20 nm sample are larger than those of the 41 nm and 49 nm samples. Above 500 K, the *ZT* values of all three samples increase sharply up to 600 K due to the almost constant values of $\alpha$ and $\kappa$ combined with the continuous decrease in the $\rho$(T) values. The *ZT* curve of the 41 nm sample crosses that of the 20 nm sample at $\sim$540 K and reaches a maximum *ZT* of $\sim$0.017 at $\sim$600 K. The optimized values of $\alpha$, $\rho$, and $\kappa$ of the 41 nm sample are responsible for its higher values of *ZT* in the 540-600 K range. At 600 K, the sample with 41 nm grain size has a higher *ZT* value than the 20 nm (*ZT* = $\sim$0.013) and 49 nm (*ZT* = $\sim$0.012) samples.\
In the high-temperature region of the PM insulator phase, with almost constant values of $\alpha$ and $\kappa$, the continuous decrease in the values of $\rho$ makes it a key parameter for increasing the values of *ZT* in the LSMO compound. In the PM insulator region, no electronic phase transition has been reported for LSMO below 650 K. Thus, we can expect similar temperature-dependent behavior of $\alpha$, $\rho$, and $\kappa$ above 600 K. Assuming similar trends of $\alpha$, $\rho$, and $\kappa$, the values of *ZT* have been estimated up to 650 K, as shown in the inset of Fig. 5. The predicted values of *ZT* for the 20 nm, 41 nm, and 49 nm samples are 0.035, 0.045, and 0.040, respectively, at 650 K, which are nearly three times the observed values at 600 K. It would be interesting to see experimental verification of the predicted value of *ZT* at 650 K. To confirm this conjecture, further measurements of $\alpha$, $\rho$, and $\kappa$ on nano-crystallite LSMO samples above 600 K are highly desirable.\
In conclusion, we have prepared La$_{0.7}$Sr$_{0.3}$MnO$_{3}$ using the sol-gel method, and three samples with grain sizes of 20 nm, 41 nm, and 49 nm were obtained by applying different heat treatments. For this system, lowering the grain size of the samples into the nanometer range was found to be very effective in minimizing the thermal conductivity and increasing the magnitude of the Seebeck coefficient, while a decreasing trend in the magnitude of the resistivity in the paramagnetic insulator region is noticed. In particular, for the 41 nm sample, the value of *ZT* reaches $\sim$0.017 at 600 K and is predicted to improve to $\sim$0.045 at 650 K. For the 41 nm sample, the continuous increase in the values of *ZT* with temperature and its reported value suggest that the La$_{0.7}$Sr$_{0.3}$MnO$_{3}$ compound can be used for TE applications in the high-temperature region. The present study shows that the nanostructuring approach is very effective for the improvement of the TE properties of oxide materials.
| {
"pile_set_name": "ArXiv"
} |